---
abstract: 'We provide general sufficient conditions for the efficient classical simulation of quantum-optics experiments that involve inputting states to a quantum process and making measurements at the output. The first condition is based on the negativity of phase-space quasiprobability distributions (PQDs) of the output state of the process and the output measurements; the second one is based on the negativity of PQDs of the input states, the output measurements, and the transition function associated with the process. We show that these conditions provide useful practical tools for investigating the effects of imperfections in implementations of boson sampling. In particular, we apply our formalism to boson-sampling experiments that use single-photon or spontaneous-parametric-down-conversion sources and on-off photodetectors. Considering simple models for loss and noise, we show that above some threshold for the probability of random counts in the photodetectors, these boson-sampling experiments are classically simulatable. We identify mode mismatching as the major source of error contributing to random counts and suggest that this is the chief challenge for implementations of boson sampling of interesting size.'
author:
- 'Saleh Rahimi-Keshari'
- 'Timothy C. Ralph'
- 'Carlton M. Caves'
title: Sufficient Conditions for Efficient Classical Simulation of Quantum Optics
---

Introduction
============

It is generally believed that quantum computers can perform certain tasks faster than their classical counterparts. Identifying the resource that enables this speedup is of particular interest in quantum information science. Attempts to identify the elusive quantum feature are generally back-door attacks, studying not what is essential for speedup, but rather what is lacking in quantum circuits that can be efficiently simulated classically.
A promising candidate resource comes from the result that, in general, there is no quantum speedup for circuits whose initial states and operations have nonnegative Wigner functions [@Emer-12; @Mari; @Emer-13]. This suggests that negativity of the Wigner function [@Wigner], which can also be viewed as quantum interference [@Stahlke], is a necessary resource for quantum speedup. Of particular interest are quantum-optical models of computation that seem achievable in the near future. There has been considerable interest in boson sampling [@AA] as an intermediate model of quantum computation that, despite its simple physical implementation, is believed to be classically hard. In this model, $N$ single photons are injected into $N$ ports of an $M$-port ($M\gg N$), lossless, passive linear-optical network (LON).[^1] The remaining $M-N$ input ports receive vacuum states. Using on-off photodetectors at the output of the network, one samples from the output photon-counting probability distribution. This output distribution, in general, is proportional to the squared modulus of the permanent of a complex matrix. Computing permanents is a difficult problem, known to be \#P hard in complexity theory [@Valiant; @A08]. In their original proposal, Aaronson and Arkhipov (AA) provided strong evidence that sampling from the output distribution cannot be simulated efficiently classically [@AA], and this has come to be known as the boson-sampling problem. Subsequent studies showed that there are other product input states for which sampling from the output probability distribution is classically hard [@RanSam; @cat; @coh-fock; @as-sq]. This gives rise to questions about the classes of input states and measurements for which sampling the output distribution is classically intractable.
Given the well-developed theory of phase-space quasiprobability distributions (PQDs) for bosonic states and measurements, an inevitable question asks whether negativity of such PQDs is required for classical intractability of the sampling problem. In addition, a question of both fundamental and practical importance concerns the effect of imperfections on the classical intractability of sampling problems. There have been various investigations of the effect on boson sampling of imperfections in the LON [@Lev-Gar; @Kal-Kin; @Arkhipov; @RohRal12] and of mode mismatching [@Shch]. The present paper makes two contributions to this discussion. The first, developed in Sec. \[sec:generic\], is to formulate two sufficient conditions for efficient classical simulation of generic quantum-optics experiments: $M$ bosonic modes prepared in an arbitrary bosonic state undergo an $M$-mode (trace-preserving) quantum process; one generates samples by making a measurement on the $M$ output modes (see Fig. \[fig:gen-cir\]). The first condition is based on expressing the probability distribution of the measurement outcomes in terms of a PQD for the output state of the process and PQDs of the elements of the Positive-Operator-Valued Measure (POVM) that describes the output measurement. If the PQD of the output state can be efficiently computed and if for some operator orderings all the PQDs are nonnegative, then efficient classical simulation of sampling is possible. Our second condition generalizes a previous no-go theorem [@Mari], which was given in terms of the Wigner function, and it is particularly useful when one cannot efficiently compute the PQD of the output state.
For this condition, we derive a relation that decomposes the output probability distribution into three parts: a PQD for the phase-space complex amplitudes of the input state; a transition function associated with the quantum process, which is a conditional PQD for the output complex amplitudes of the process given the input complex amplitudes; and PQDs of the measurement POVM elements. If specific operator orderings exist such that all these PQDs—input, transition, and output—are efficiently describable and nonnegative, sampling from the output probability distribution can be efficiently simulated classically. These conditions show that negativity is a necessary resource for a generic quantum-optics experiment not to be efficiently simulatable; the result includes boson sampling as the special case where the quantum process is a LON. We emphasize that efficient classical simulation might still be possible using other methods even if our conditions are not satisfied. Our second contribution, developed in Sec. \[sec:bosonsampling\], is to apply the results of the first to investigate the effects of imperfections on implementations of boson sampling that use single-photon states [@AA] or two-mode squeezed-vacuum states [@RanSam] as inputs and photodetection at the output. The imperfections we consider are loss and mode mismatching at the input to and within the LON and subunity efficiency and random counts in the photodetectors. Considering simple models for these errors, we find necessary and sufficient conditions for the relevant PQDs to be nonnegative, and thus for such boson-sampling implementations to be efficiently simulated classically using these methods. These conditions say that an experiment can be classically simulated when the probability of random counts per photodetector exceeds some threshold. The various sources of error we consider are not completely independent.
The [*random counts*]{} at the photodetectors include both intrinsic dark counts and counts due to mode-mismatched photons, i.e., nonoverlapping parts of photon wave packets. These mode-mismatched photons are lost to the interference that gives rise to the desired output photocount distribution. They are part of the losses within the apparatus, but they can make their way through the LON to the photodetectors and be counted within the spatiotemporal windows of the detectors. They thus contribute essentially random counts within the photodetectors. As we discuss in Secs. \[sec:BSsinglephotons\] and \[sec:BSSPDC\], our conditions for classical simulatability are not a challenge for situations with practical losses and high-quality photodetectors, [*if*]{} the only source of random counts is the intrinsic dark counts in the detectors. The chief challenge for boson sampling, we believe, comes when a substantial number of mode-mismatched photons reach the detectors and are counted as random counts. A good, but not exact, rule of thumb is that our methods can classically simulate a boson-sampling experiment when the number of mode-mismatched photons reaching the photodetectors exceeds the number of mode-matched photons. The analysis in Secs. \[sec:BSsinglephotons\] and \[sec:BSSPDC\] suggests that this is an important challenge for implementations of boson sampling of interesting size, i.e., when the size of the system is sufficiently large to represent a challenge for a classical computer to sample. The paper concludes with a discussion of outstanding issues in Sec. \[sec:conclusion\].

Simulation of generic sampling problems {#sec:generic}
=======================================

Generic quantum-optics sampling problem {#sec:genericsampling}
---------------------------------------

We consider the generic quantum-optics sampling problem depicted in Fig.
\[fig:gen-cir\]: $M$ input bosonic modes, with overall density operator $\bm\rho_{\text{in}}$, traverse a quantum process described by a (trace-preserving) quantum operation $\mathcal{E}$, leading to the output state $\bm\rho_{\text{out}}=\mathcal E(\bm\rho_{\text{in}})$; at the output, one measures the $M$ modes, thus sampling from a probability distribution $$\begin{aligned} \label{eq:gen-pro-dis} p({\bm n}) ={\text{Tr}}\big[\bm\rho_{\text{out}}\,\Pi_{\bm{n}}\big],\end{aligned}$$ where POVM elements $\Pi_{\bm{n}}$, with $\bm{n}$ denoting the joint outcome, characterize the measurement. The POVM satisfies a completeness relation, $\sum_{\bm{n}}\Pi_{\bm{n}}=\mathcal{I}$, with $\mathcal{I}$ denoting the identity operator in the $M$-mode Fock space. The quantum operation $\mathcal{E}$ is a linear, trace-preserving, completely positive map from the set of all density operators associated with a quantum system to itself. In general, such a quantum operation describes the system dynamics associated with a joint unitary transformation on the system and an environment and arises formally from tracing out the environment [@NilChu]. ![Generic quantum-optics sampling problem: The input state $\bm\rho_{\text{in}}$ is processed through an $M$-mode quantum process described by quantum operation $\mathcal{E}$, producing the output state $\bm\rho_{\text{out}}=\mathcal E(\bm\rho_{\text{in}})$; an output probability distribution $p(\bm{n})={\text{Tr}}\big[\bm\rho_{\text{out}}\,\Pi_{\bm n}\big]$ is sampled by measuring a POVM $\{\Pi_{\bm n}\}$.[]{data-label="fig:gen-cir"}](gen-cir.pdf){width="0.8\columnwidth"} The question is whether sampling from the output probability distribution (\[eq:gen-pro-dis\]) can be efficiently simulated on a classical computer. 
If such classical simulation is possible, then using Stockmeyer’s approximate counting algorithm [@Stockmeyer], one can approximate the output probability to within a multiplicative error in $\text{BPP}^{\text{NP}}$, which is contained in the third level of the polynomial hierarchy; $\tilde{p}(\bm{n})$ approximates $p(\bm{n})$ to within a multiplicative factor $g$ if $p(\bm{n})/g\leq\tilde{p}(\bm{n})\leq p(\bm{n}) g$. Ideal boson sampling is a special case of this general problem, for which the input state is a multimode Fock state with $N$ single photons, the quantum process is a lossless LON described by an $M\times M$ unitary matrix $\bm{U}$ with $M\gg N$, and photon-counting measurements are made on each output mode. The output probabilities, in general, are proportional to the squared modulus of permanents of complex matrices, which are, in the likely event of single-photon detections, submatrices of $\bm{U}$ [@Scheel]. Computing permanents is a difficult problem, known to be \#P hard [@Valiant; @A08], and in a class that contains the entire polynomial hierarchy of complexity classes [@Toda]. The key observation by Aaronson and Arkhipov (AA) was that multiplicative approximation of the squared modulus of permanents of real matrices is also a \#P-hard problem, and it is likely this is the case for general complex matrices [@AA]. If boson sampling were classically simulatable, one could use Stockmeyer’s approximate counting algorithm to approximate the output probability to within a multiplicative error, and this would lead to the collapse of the polynomial hierarchy to the third level, which is thought to be unlikely [@AA]. Given two plausible conjectures, AA further showed that it is likely that classical simulation of sampling from probability distributions that closely approximate the ideal output probability, known as [*approximate*]{} boson sampling, is also hard. 
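The dependence on matrix permanents can be made concrete with a short numerical sketch. The following Ryser-formula implementation is illustrative code of our own (not from the paper); for a detection pattern with $N$ single photons, the ideal output probability is proportional to $|\text{Perm}(\bm U_S)|^2$ for an $N\times N$ submatrix $\bm U_S$ of $\bm U$, and even the best known exact algorithms, like this one, scale exponentially in $N$.

```python
import numpy as np

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula: O(2^n n) work,
    compared with O(n! n) for the naive sum over permutations."""
    n = A.shape[0]
    total = 0j
    for mask in range(1, 1 << n):            # nonempty column subsets S
        cols = [j for j in range(n) if (mask >> j) & 1]
        # Ryser: perm(A) = (-1)^n sum_S (-1)^|S| prod_i sum_{j in S} A_ij
        total += (-1) ** len(cols) * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

# sanity checks: Perm(I_3) = 1, Perm(all-ones 3x3) = 3! = 6
assert abs(permanent(np.eye(3)) - 1) < 1e-12
assert abs(permanent(np.ones((3, 3))) - 6) < 1e-12
```

For a $2\times2$ matrix the formula reduces to $a_{11}a_{22}+a_{12}a_{21}$, which is a convenient hand check.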
The approximate sampling problem is of more practical interest than [*exact*]{} sampling because in an experiment the input quantum state, quantum process, and output measurement are only characterized approximately, so one does not sample from the exact probability distribution (\[eq:gen-pro-dis\]). Moreover, one might not be able to efficiently distinguish sampling from two probability distributions that are close to one another. Hence, in practice, an interesting sampling problem is the one for which approximate sampling is hard. In this paper, we do not consider this form of sampling; instead we focus on simulating exact sampling from output distributions that arise in the presence of errors and imperfections. Even though we do not consider approximate sampling explicitly, our simulation methods for exact sampling do lead to a sufficient condition for approximate sampling to be classically simulatable. We motivate our methods for classical simulation by considering a simple special case of the generic sampling problem. Suppose the multimode input state $\bm\rho_{\text{in}}$ has a nonnegative Glauber-Sudarshan $P$ function [@Glauber; @Sudarshan] $P({\bm\alpha}|\bm\rho_{\text{in}})$, i.e., $$\begin{aligned} \bm\rho_{\text{in}}=\int{d^{\,2M}\!\bm{\alpha}\,}P({\bm\alpha}|\bm\rho_{\text{in}})\ket{\bm\alpha}\!\bra{\bm\alpha};\end{aligned}$$ here $\ket{\bm\alpha}$ is an $M$-mode coherent state with phase-space complex amplitudes $\bm\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_M)$.[^2] Such states, as mixtures of coherent states, are often called classical states. Suppose further that the quantum process transforms multimode coherent states to classical states; such processes are known as classical processes [@ProNonCla]. Then the output state $\bm\rho_{\text{out}}$ is classical as well and has a nonnegative $P$ function.
Using the linearity of quantum processes over density operators, the output state can be expressed as $$\begin{aligned} \begin{split} \bm\rho_{\text{out}} &=\int{d^{\,2M}\!\bm{\alpha}\,}\,\mathcal{E}\big(\ket{\bm\alpha}\!\bra{\bm\alpha}\big) P(\bm\alpha|\bm\rho_{\text{in}})\\ &=\int{d^{\,2M}\!\bm{\beta}\,}\ket{\bm\beta}\!\bra{\bm\beta} \int{d^{\,2M}\!\bm{\alpha}\,}\,P_{\mathcal{E}}(\bm\beta|\bm\alpha)\,P(\bm\alpha|\bm\rho_{\text{in}}), \end{split}\end{aligned}$$ where $P_{\mathcal{E}}(\bm\beta|\bm\alpha)$ is the $P$ function of the state $\mathcal{E}\big(\ket{\bm\alpha}\!\bra{\bm\alpha}\big)$. Hence, the output probability distribution (\[eq:gen-pro-dis\]) is given by $$\begin{aligned} \label{eq:pro-cl-cl} p(\bm{n})=\int\!{d^{\,2M}\!\bm{\beta}\,}\,\pi^{M}Q_{\Pi}(\bm{n}|\bm\beta) \int\!{d^{\,2M}\!\bm{\alpha}\,} P_{\mathcal{E}}(\bm\beta|\bm\alpha)\,P(\bm\alpha|\bm\rho_{\text{in}}),\end{aligned}$$ where the Husimi $Q$ functions [@Husimi] of the POVM elements, $Q_{\Pi}(\bm{n}|\bm\beta)=\bra{\bm\beta}\Pi_{\bm{n}}\ket{\bm\beta}/\pi^M$, are always nonnegative and satisfy $\pi^M \sum_{\bm{n}} Q_{\Pi}(\bm{n}|\bm\beta)=1$. As all the PQDs in the expression (\[eq:pro-cl-cl\]) are nonnegative, sampling from the output photon-counting probability distribution can be efficiently simulated on a classical computer, provided the PQDs can be efficiently generated: $\bm\alpha$ is chosen from the input probability distribution $P(\bm\alpha|\bm\rho_{\text{in}})$; given $\bm\alpha$, $\bm\beta$ is chosen according to the probability distribution $P_{\mathcal{E}}(\bm\beta|\bm\alpha)$; finally, given $\bm\beta$, $\bm{n}$ is chosen according to the measurement distribution $\pi^M Q_{\Pi}(\bm{n}|\bm\beta)$. Applying this procedure to input thermal states for LONs and using Stockmeyer’s approximate counting algorithm, it was shown that permanents of Hermitian positive-semidefinite matrices can be approximated to within a multiplicative constant in $\text{BPP}^{\text{NP}}$ [@thermal].
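The three-step chain above can be sketched in a few lines of code. The sketch below is our own illustrative construction (not from the paper): thermal input states, whose $P$ function is a Gaussian with variance set by the mean photon number $\bar n$; a LON, for which the transition $P$ function is the point distribution $\bm\beta=\bm\alpha\bm L$; and ideal on-off detectors, for which the $Q$ function of the "click" POVM element gives a click probability $1-e^{-|\beta_k|^2}$ at amplitude $\beta_k$.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_unitary(m, rng):
    """Haar-distributed unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # fix column phases

def sample_clicks(nbar, L, rng):
    """One classical sample of the on-off detection pattern for M-mode
    thermal inputs (mean photon number nbar) through a LON with transfer
    matrix L, using the nonnegative-PQD three-step procedure."""
    m = L.shape[0]
    # step 1: draw coherent amplitudes from the thermal P function (Gaussian)
    alpha = np.sqrt(nbar / 2) * (rng.normal(size=m) + 1j * rng.normal(size=m))
    # step 2: classical process: delta-function transition, beta = alpha L
    beta = alpha @ L
    # step 3: each ideal on-off detector clicks with probability 1 - exp(-|beta_k|^2)
    return rng.random(m) < 1 - np.exp(-np.abs(beta) ** 2)

pattern = sample_clicks(2.0, random_unitary(4, rng), rng)  # boolean click pattern
```

Vacuum input ($\bar n=0$) never produces a click in this model, which is a convenient sanity check.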
In the next subsection, we generalize this approach to nonclassical states and processes. The key idea, taken over from classical states and processes, is to use phase-space complex amplitudes and associated PQDs to describe the input state, the transition through the quantum process, and the output measurements. The generalization is to use operator orderings more general than the normal ordering of the $P$ function and the antinormal ordering of the $Q$ function, thus expanding the possibilities for finding nonnegative PQDs.

Sufficient conditions for efficient classical simulation {#sec:sufficient}
--------------------------------------------------------

To formulate our condition for efficient classical simulation of the generic sampling problem, we use the $\bm s$-ordered phase-space quasiprobability distributions \[${\hbox{($\bm{s}$)-PQD}}$s\] of a Hermitian operator $\bm\rho$, which are defined by [@CahGla69; @HilOCo84] $$\begin{aligned} \label{eq:s-PQD-def} W^{(\bm s)}(\bm\beta|\bm\rho) =\int\frac{{d^{\,2M}\!\bm{{\xi}}\,}}{\pi^{2M}}\, \Phi^{(\bm s)}(\bm\xi|\bm\rho)\, e^{\bm\beta\bm\xi^{\dagger}-\bm\xi\bm\beta^{\dagger}},\end{aligned}$$ where $$\begin{aligned} \label{eq:s-cha-def} \Phi^{(\bm{s})}(\bm\xi|\bm{\rho})={\text{Tr}}\big[\bm\rho D(\bm\xi)\big]e^{\bm\xi\bm{s}\bm\xi^\dagger/2}\end{aligned}$$ is the corresponding $\bm{s}$-ordered characteristic function, with $$\begin{aligned} D(\bm\xi)=e^{\bm\xi\bm a^\dagger-\bm a\bm\xi^\dagger}= \bigotimes_{j=1}^M D(\xi_j)\end{aligned}$$ being the $M$-mode displacement operator, $\bm a=( a_1,\ldots,a_M)$ the row vector of modal annihilation operators, and $\bm a^\dagger=(a_1^\dagger,\ldots,a_M^\dagger)^T$ the column vector of creation operators. The diagonal matrix $\bm s=\text{diag}(s_1,s_2,\ldots,s_M)$ has the various ordering parameters on the diagonal.
Equation (\[eq:s-PQD-def\]) is a Fourier transform, which can be inverted using $$\begin{aligned} \label{eq:Ddelta} \int\frac{{d^{\,2M}\!\bm{{\beta}}\,}}{\pi^{2M}}\, e^{\bm\zeta\bm\beta^{\dagger}-\bm\beta\bm\zeta^{\dagger}} =\delta^{2M}(\bm{\zeta})\end{aligned}$$ to give $$\begin{aligned} \label{eq:s-cha-def2} \Phi^{(\bm s)}(\bm\xi|\bm\rho) =\int{d^{\,2M}\!\bm{{\beta}}\,} W^{(\bm s)}(\bm\beta|\bm\rho)\, e^{\bm\xi\bm\beta^{\dagger}-\bm\beta\bm\xi^{\dagger}}.\end{aligned}$$ Because $\bm\rho$ is Hermitian, the characteristic function satisfies $\Phi^{(\bm{s})*}(\bm\xi|\bm{\rho})=\Phi^{(\bm{s})}(-\bm\xi|\bm{\rho})$, and the ${\hbox{($\bm{s}$)-PQD}}$ (\[eq:s-PQD-def\]) is real. The ${\hbox{($\bm{s}$)-PQD}}$ $W^{(\bm s)}(\bm\beta|\bm\rho)$ gives the $M$-mode Husimi $Q$ function, the Wigner function, and the Glauber-Sudarshan $P$ function for $\bm s=-\bm{I}_M$, $\bm s=0$, and $\bm s=\bm{I}_M$, respectively, where $\bm{I}_M$ denotes the $M\times M$ identity matrix. It is easy to check that the ${\hbox{($\bm{s}$)-PQD}}$s are normalized according to $$\begin{aligned} \int{d^{\,2M}\!\bm{{\beta}}\,} W^{(\bm s)}(\bm\beta|\bm\rho)={\text{Tr}}[\bm\rho].\end{aligned}$$ These are usually called PQDs when $\bm\rho$ is a density operator, but we generalize the terminology to any Hermitian operator so we can apply it to POVM elements.
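How negativity depends on the ordering parameter can be illustrated with the known Cahill-Glauber closed form for a single-mode Fock state $\ket{n}$, $W^{(s)}(\alpha)=\frac{2}{\pi(1-s)}\big(\frac{s+1}{s-1}\big)^{n} e^{-2|\alpha|^2/(1-s)}L_n\big(\frac{4|\alpha|^2}{1-s^2}\big)$. The sketch below (our own code, valid for $-1<s<1$; the $Q$ function is the $s\to-1$ limit) shows that the single-photon PQD is negative at the origin for every ordering above the $Q$-function ordering.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def fock_pqd(alpha, n, s):
    """(s)-ordered PQD of the Fock state |n><n| for -1 < s < 1,
    using the Cahill-Glauber closed form with Laguerre polynomial L_n."""
    alpha = np.asarray(alpha, dtype=complex)
    x = 4 * np.abs(alpha) ** 2 / (1 - s ** 2)
    laguerre_n = lagval(x, [0] * n + [1])     # evaluates L_n(x)
    return (2 / (np.pi * (1 - s)) * ((s + 1) / (s - 1)) ** n
            * np.exp(-2 * np.abs(alpha) ** 2 / (1 - s)) * laguerre_n)

# the single-photon Wigner function (s = 0) is negative at the origin...
assert fock_pqd(0.0, 1, 0.0) < 0
# ...and stays negative there for any s > -1, approaching 0 as s -> -1
assert fock_pqd(0.0, 1, -0.99) < 0
# vacuum (n = 0) is a Gaussian, nonnegative for any ordering
assert fock_pqd(0.7 + 0.2j, 0, 0.5) > 0
```

At $s=0$ and $n=1$ this reproduces the familiar Wigner function $(2/\pi)(4|\alpha|^2-1)e^{-2|\alpha|^2}$, whose value at the origin is $-2/\pi$.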
The outcome probabilities (\[eq:gen-pro-dis\]) can be expressed in terms of the PQDs of the output state and the POVM as [@CahGla69] $$\begin{aligned} \label{eq:Pn-PQD} p(\bm{n}) =\int{d^{\,2M}\!\bm{\beta}\,}\,\pi^M W^{(-\bm s)}_{\Pi}(\bm{n}|\bm\beta)\,W^{(\bm s)}(\bm\beta|\bm\rho_{\rm out}),\end{aligned}$$ where the measurement ${\hbox{($-\bm{s}$)-PQD}}$ is $$\begin{aligned} \label{eq:s-Pi-PQD-def} W_\Pi^{(-\bm s)}(\bm n|\bm\beta) =\int\frac{{d^{\,2M}\!\bm{{\xi}}\,}}{\pi^{2M}}\, {\text{Tr}}\big[\Pi_{\bm n}D(\bm\xi)\big]\, e^{-\bm\xi\bm{s}\bm\xi^\dagger/2}\, e^{\bm\beta\bm\xi^{\dagger}-\bm\xi\bm\beta^{\dagger}}.\end{aligned}$$ These measurement ${\hbox{($-\bm{s}$)-PQD}}$s are normalized according to $$\begin{aligned} \pi^{M}\sum_{\bm{n}}W^{(-\bm s)}_{\Pi}(\bm{n}|\bm\beta)=1,\end{aligned}$$ for any values of $\bm\beta$ and $\bm s$, as one sees by applying ${\text{Tr}}[D(\bm\xi)]=\pi^M\delta^{2M}(\bm\xi)$ to Eq. (\[eq:s-Pi-PQD-def\]). [*First condition.*]{} We now present a sufficient condition for efficient classical simulation of the sampling problem. If there exist values of $\bm s$ such that the PQDs in Eq. (\[eq:Pn-PQD\]) are nonnegative, they can be used to simulate sampling from $p(\bm{n})$ in two steps: (i) The vector of complex amplitudes $\bm\beta$ is chosen from the probability distribution $W^{(\bm s)}(\bm\beta|\bm\rho_{\rm out})$. (ii) For the given $\bm\beta$, the outcome $\bm{n}$ is chosen from the probability distribution $\pi^{M} W^{(-\bm s)}_{\Pi}(\bm{n}|\bm\beta)$. This condition is particularly useful if the PQD of the output state can be efficiently computed, as it can be, for example, for Gaussian input states and Gaussian processes [@Gaussian]. For cases where efficient computation of the output PQD is not possible, we now derive a general expression that relates the PQD of the output state $\bm\rho_\text{out}$ to the PQD of the input state $\bm\rho_\text{in}$.
This derivation introduces the transition function, which transfers complex amplitudes from input to output of the quantum process and which depends on both the input and output operator orderings. An $M$-mode input state can be expanded in terms of displacement operators, $$\begin{aligned} \bm\rho_{\text{in}}=\int\frac{{d^{\,2M}\!\bm{{\xi}}\,}}{\pi^M}\, \Phi^{(\bm{t})}(\bm\xi|\bm\rho_{\text{in}})\, e^{-\bm\xi\bm{t}\bm\xi^{\dagger}/2} D^\dagger(\bm\xi).\end{aligned}$$ In this expression, we can replace $D^\dagger(\bm\xi)=D(-\bm\xi)$ with $D(\bm\xi)$ if we wish, because $\bm\rho_{\text{in}}$ is Hermitian. Linearity of quantum processes implies that $$\begin{aligned} \bm\rho_{\text{out}}= \int \frac{{d^{\,2M}\!\bm{{\xi}}\,}}{\pi^M}\,\Phi^{(\bm{t})}(\bm\xi|\bm\rho_{\text{in}})\, e^{-\bm\xi\bm{t}\bm\xi^{\dagger}/2}\,\mathcal{E}\big(D^\dagger(\bm\xi)\big),\end{aligned}$$ from which we obtain the $(\bm s)$-ordered characteristic function of the output state, $$\begin{aligned} \Phi^{(\bm{s})}(\bm\zeta|\bm\rho_{\text{out}}) =\int\frac{{d^{\,2M}\!\bm{{\xi}}\,}}{\pi^M}\, \Phi^{(\bm{t})}(\bm\xi|\bm\rho_{\text{in}})\, e^{-\bm\xi\bm{t}\bm\xi^{\dagger}/2+\bm\zeta\bm{s}\bm\zeta^{\dagger}/2} \,{\text{Tr}}\big[\mathcal{E}\big(D^\dagger(\bm\xi)\big)D(\bm\zeta)\big].\end{aligned}$$ Using the Fourier transform (\[eq:s-PQD-def\]) and its inverse (\[eq:s-cha-def2\]), we can obtain the relation between the input and output PQDs, $$\begin{aligned} \label{eq:out-in-PQDs} W^{(\bm s)}(\bm\beta|\bm\rho_\text{out}) =\int {d^{\,2M}\!\bm{{\alpha}}\,} T_{\mathcal{E}}^{(\bm s,\bm t)}(\bm\beta|\bm\alpha)\,W^{(\bm t)}(\bm\alpha|\bm\rho_\text{in}),\end{aligned}$$ where the transition function associated with the quantum process is defined by $$\begin{aligned} \label{trans-fun} T_{\mathcal{E}}^{(\bm s,\bm t)}(\bm\beta|\bm\alpha) =\int\frac{{d^{\,2M}\!\bm{{\zeta}}\,}}{\pi^{2M}}\, e^{\bm\zeta\bm{s}\bm\zeta^{\dagger}/2}\,e^{\bm\beta\bm\zeta^\dagger-\bm\zeta\bm\beta^\dagger}
\!\int\frac{{d^{\,2M}\!\bm{{\xi}}\,}}{\pi^M}\, e^{-\bm\xi\bm{t}\bm\xi^{\dagger}/2}\, e^{\bm\xi\bm\alpha^\dagger-\bm\alpha\bm\xi^\dagger}\, {\text{Tr}}\big[\mathcal{E}\big(D^\dagger(\bm\xi)\big)D(\bm\zeta)\big].\end{aligned}$$ The quantity ${\text{Tr}}\big[\mathcal{E}\big(D^\dagger(\bm\xi)\big)D(\bm\zeta)\big]$ gives the “matrix elements” of the quantum process $\mathcal E$ in the displacement-operator basis. We can use the antinormally ordered form of the displacement operator, combined with the coherent-state resolution of the identity, $\mathcal{I}=\int{d^{\,2M}\!\bm{{\gamma}}\,}\ket{\bm\gamma}\!\bra{\bm\gamma}/\pi^M$, to obtain $$\begin{aligned} \label{eq:Dantinormal} e^{-\bm\xi\bm\xi^\dagger/2}D(\bm\xi) =e^{-\hat{\bm{a}}\bm\xi^\dagger}\mathcal{I}e^{\bm\xi\hat{\bm{a}}^{\dagger}} =\int\frac{{d^{\,2M}\!\bm{{\gamma}}\,}}{\pi^M}\, \ket{\bm\gamma}\!\bra{\bm\gamma}\, e^{\bm\xi\bm\gamma^{\dagger}-\bm\gamma\bm\xi^\dagger}.\end{aligned}$$ This allows us to convert $\mathcal{E}\big(D^\dagger(\bm\xi)\big)$ into the action of the quantum process on coherent states: $$\begin{aligned} \label{eq:pros-on-coh} \mathcal{E}\big(D^\dagger(\bm\xi)\big) =e^{\bm\xi\bm\xi^\dagger/2} \int\frac{{d^{\,2M}\!\bm{\gamma}\,}}{\pi^M}\, e^{\bm\gamma\bm\xi^\dagger-\bm\xi\bm\gamma^{\dagger}} \mathcal{E}\big(\ket{\bm\gamma}\!\bra{\bm\gamma}\big).\end{aligned}$$ Using Eqs. (\[trans-fun\]) and (\[eq:pros-on-coh\]), one can check that for trace-preserving quantum processes, we have $$\begin{aligned} \int {d^{\,2M}\!\bm{\beta}\,} T_{\mathcal{E}}^{(\bm s,\bm t)}(\bm\beta|\bm\alpha)=1.\end{aligned}$$ We do not plug Eq. (\[eq:pros-on-coh\]) into Eq. (\[trans-fun\]) because generally the integrals diverge; the art of finding a well-behaved transition function is, for a specific quantum process, to find the most favorable input and output ordering parameters, $\bm s$ and $\bm t$, that make the integrals converge. Now, by combining Eqs. 
(\[eq:Pn-PQD\]) and (\[eq:out-in-PQDs\]), we can assemble the ingredients for a classical simulation of sampling from the output distribution of the quantum circuit shown in Fig. \[fig:gen-cir\], $$\begin{aligned} \label{eq:Pn-PQD-tran} p(\bm{n}) =\int{d^{\,2M}\!\bm{\beta}\,}\!\!\int{d^{\,2M}\!\bm{{\alpha}}\,} \pi^M W^{(-\bm s)}_{\Pi}(\bm{n}|\bm\beta)\, T_\mathcal{E}^{(\bm s,\bm t)}(\bm\beta|\bm\alpha)\, W^{(\bm t)}(\bm\alpha|\bm\rho_\text{in}).\end{aligned}$$ [*Second condition.*]{} We can carry out a classical simulation, using the following procedure, if there exist values of $\bm t$ and $\bm s$ such that the PQDs of the input, the transition function, and the measurement are all nonnegative and efficiently describable: (i) The vector of complex amplitudes $\bm\alpha$ is chosen from the input probability distribution $W^{(\bm t)}(\bm\alpha|\bm\rho_\text{in})$. (ii) For the given $\bm\alpha$, the vector $\bm{\beta}$ is chosen from the transition probability distribution $T_\mathcal{E}^{(\bm s,\bm t)}(\bm\beta|\bm\alpha)$. (iii) For the given $\bm\beta$, the outcome $\bm{n}$ is chosen from the output probability distribution $\pi^{M} W^{(-\bm s)}_{\Pi}(\bm{n}|\bm\beta)$. That the three probability distributions are efficiently describable must be judged on a case-by-case basis. For the input, this is generally achieved by assuming that the input state $\bm\rho_\text{in}$ is a product state of the $M$ modes or, perhaps, a product of blocks of modes of fixed size; likewise, for the output, the measurements are generally product measurements of the $M$ modes or products of measurements on blocks of fixed size. The complexity of the transition function depends on the quantum process; for the LONs used in boson sampling, the transition function is Gaussian and can be computed from the matrix that describes the LON, as we show in Sec. \[sec:bosonsampling\]. This second condition includes the previous results as special cases.
For classical states and classical processes, we can choose $\bm s=\bm t=\bm{I}_M$, and Eq. (\[eq:Pn-PQD-tran\]) reduces to Eq. (\[eq:pro-cl-cl\]). In addition, for $\bm s=\bm t=0$, we have that when the transition function is nonnegative and the input quantum state and output POVM elements have nonnegative Wigner functions, sampling from the output distribution can be simulated classically [@Mari]. A procedure for determining if there are input and output orderings that give nonnegative quasidistributions is the following. The PQD $W^{(\bm t)}(\bm\alpha|\bm\rho_\text{in})$ of the input state is nonnegative for ordering parameters $\bm t\le\bar{\bm t}$, where $\bar{\bm t}\ge-{\bm I}_M$, and the PQD $W_\Pi^{(-\bm s)}(\bm n|\bm\beta)$ of the POVM elements is nonnegative for $\bm s\ge\bar{\bm s}$, i.e., $s_k\ge \bar{s}_k$ for all $k$, where $\bar{\bm s}\le{\bm I}_M$. The necessary and sufficient condition for there also to be a nonnegative transition function is that $T_{\mathcal{E}}^{(\bar{\bm s},\bar{\bm t})}(\bm\beta|\bm\alpha)$ be nonnegative and no more singular than a $\delta$ function. This is a necessary and sufficient condition for our second condition to yield a classical simulation. A crucial point, in general and for what follows, is that simulations using our first condition provide tighter bounds for classical simulation than the second condition because the first condition, unlike the second, does not require anything about the input PQD, in particular, that it be nonnegative. To make the difference clear, suppose the input state is a nonclassical Gaussian state, and the process is also a nonclassical process, but one that cancels the nonclassicality of the input state in such a way as to make the output state classical; an example is provided by input squeezed states and antisqueezing operations. In this situation, for any measurement at the output, the experiment is classically simulatable according to the first condition.
For the second condition, however, the input PQD and the transition function are only nonnegative for a limited range of ordering parameters $\bm t\le\bar{\bm t}< \bm{I}_M$, and only for certain measurements does the second condition hold.

Efficient classical simulations of implementations of boson sampling {#sec:bosonsampling}
====================================================================

General considerations for passive linear optical networks {#sec:pLON}
----------------------------------------------------------

We now consider the case where the quantum process is a lossy $M$-mode LON. In this case the quantum process takes coherent states to coherent states according to [@charact] $$\begin{aligned} \label{eq:E-LON} \mathcal{E}_{\text{{\hbox{LON}}}}\big(\ket{\bm\gamma}\!\bra{\bm\gamma}\big) =\ket{\bm\gamma\bm{L}}\!\bra{\bm\gamma\bm{L}},\end{aligned}$$ where $\bm{L}$ is the $M\times M$ transfer matrix describing the LON. A LON is an example of what we call a classical process in Sec. \[sec:genericsampling\]. For a lossless LON, the matrix $\bm L$ is the unitary matrix $\bm U$ mentioned previously. When there are losses, the quantum operation (\[eq:E-LON\]) follows from a very simple model of an environment: In addition to the $M$ actual modes, there are $M$ loss (environment) modes that are initially in vacuum and that carry away photons lost within the LON; the larger LON that includes the loss modes is described by a unitary operator $\tilde{\mathcal{U}}$, which transforms annihilation operators according to $$\begin{aligned} \tilde{\mathcal{U}}^\dagger \begin{pmatrix} \bm a&\bm a_0 \end{pmatrix} \tilde{\mathcal{U}} =\begin{pmatrix} \bm a&\bm a_0 \end{pmatrix}\tilde{\bm U},\end{aligned}$$ where $\bm a_0$ is the row vector of annihilation operators for the $M$ loss modes and $$\begin{aligned} \tilde{\bm U}= \begin{pmatrix} \bm L&\bm N\\\bm P&\bm M \end{pmatrix}\end{aligned}$$ is the unitary matrix that describes the complex-amplitude transformation within the larger LON.
The larger LON takes overall coherent states to overall coherent states according to $$\begin{aligned} \tilde{\mathcal{U}} \big| \begin{pmatrix} \bm\gamma&\bm\gamma_0 \end{pmatrix} \big\rangle =\big| \begin{pmatrix} \bm\gamma&\bm\gamma_0 \end{pmatrix} \tilde{\bm U} \big\rangle.\end{aligned}$$ The quantum operation (\[eq:E-LON\]) follows from tracing out the loss modes: $$\begin{aligned} \begin{split} \mathcal{E}_{\text{{\hbox{LON}}}}\big(\ket{\bm\gamma}\!\bra{\bm\gamma}\big) &={\text{Tr}}_0\bigl[ \tilde{\mathcal{U}} \bigl| \begin{pmatrix} \bm\gamma&\bm0 \end{pmatrix} \bigr\rangle \bigl\langle \begin{pmatrix} \bm\gamma&\bm0 \end{pmatrix} \bigr| \tilde{\mathcal{U}}^\dagger \bigr]\\ &={\text{Tr}}_0\bigl[ \bigl| \begin{pmatrix} \bm\gamma\bm L&\bm\gamma\bm N \end{pmatrix} \bigr\rangle \bigl\langle \begin{pmatrix} \bm\gamma\bm L&\bm\gamma\bm N \end{pmatrix} \bigr| \bigr]\\ &=\ket{\bm\gamma\bm{L}}\!\bra{\bm\gamma\bm{L}}. \end{split}\end{aligned}$$ What the model teaches is that $\bm L$ is a submatrix of the larger unitary matrix $\tilde{\bm U}$ and thus satisfies $\bm L^\dagger\bm L=\bm I_M-\bm P^\dagger\bm P\le\bm I_M$. In an experiment, the transfer matrix $\bm L$ of any LON can be efficiently characterized by inputting coherent states [@charact]. We can use the normally ordered form of the displacement operator, $D(\bm{\zeta})=e^{-\bm\zeta\bm\zeta^\dagger/2}e^{\bm\zeta\bm a^\dagger}e^{-\bm a\bm\zeta^\dagger}$, to obtain $$\begin{aligned} {\text{Tr}}\big[\mathcal{E}_{\text{{\hbox{LON}}}}\big(\ket{\bm\gamma}\!\bra{\bm\gamma}\big)D(\bm\zeta)\big] ={\text{Tr}}\big[\ket{\bm\gamma\bm L}\!\bra{\bm\gamma\bm{L}} D(\bm\zeta)\big]=e^{-\bm\zeta\bm\zeta^\dagger/2} e^{\bm\zeta\bm{L}^\dagger\bm\gamma^\dagger-\bm\gamma\bm{L}\bm\zeta^\dagger}.\end{aligned}$$ Plugging this into Eq. (\[eq:pros-on-coh\]) and invoking Eq.
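The block structure of this dilation is easy to verify numerically. The sketch below (our own construction) draws a random $2M\times 2M$ unitary $\tilde{\bm U}$, reads off $\bm L$ and $\bm P$ as its left blocks in the row-vector convention used above, and checks $\bm L^\dagger\bm L=\bm I_M-\bm P^\dagger\bm P\le\bm I_M$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n, rng):
    """Haar-distributed unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # fix column phases

M = 4
Utilde = random_unitary(2 * M, rng)  # the larger LON, including M loss modes
L = Utilde[:M, :M]                   # transfer matrix of the lossy LON
P = Utilde[M:, :M]                   # amplitudes routed into the loss modes

# unitarity of Utilde gives L†L + P†P = I, so L†L <= I
assert np.allclose(L.conj().T @ L + P.conj().T @ P, np.eye(M))
# equivalently, every singular value of L lies in [0, 1]
assert np.linalg.svd(L, compute_uv=False).max() <= 1 + 1e-12
```

The same check applies to an experimentally characterized $\bm L$: singular values above 1 would signal a characterization error rather than a physical lossy network.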
(\[eq:Ddelta\]) gives us $$\begin{aligned} {\text{Tr}}\big[\mathcal{E}_{\text{{\hbox{LON}}}}\big(D^\dagger(\bm\xi)\big)D(\bm\zeta)\big] =\pi^M\delta^{2M}\big(\bm\xi-\bm\zeta\bm{L}^\dagger\big) e^{\bm\zeta(\bm{L}^\dagger\bm L-\bm I_M)\bm\zeta^\dagger}.\end{aligned}$$ Thus the transition function (\[trans-fun\]) becomes $$\begin{aligned} \label{eq:tran-LON} T_{\text{{\hbox{LON}}}}^{(\bm s,\bm t)}(\bm\beta|\bm\alpha) =\int\frac{{d^{\,2M}\!\bm{\zeta}\,}}{\pi^{2M}}\, e^{-\bm\zeta\bm{\Sigma}\bm\zeta^{\dagger}/2} e^{(\bm\beta-\bm\alpha\bm{L})\bm\zeta^\dagger-\bm\zeta(\bm\beta^\dagger-\bm{L}^\dagger\bm\alpha^\dagger)} =\frac{2^M}{\pi^M\det\bm\Sigma} \exp\big[\mathord{-}2(\bm\beta-\bm\alpha\bm L) \bm\Sigma^{-1}(\bm\beta^\dagger-\bm{L}^\dagger\bm\alpha^\dagger)\big].\end{aligned}$$ The transition function is well behaved and nonnegative, and has the final (normalized) Gaussian form, if and only if $$\begin{aligned} \bm\Sigma=\bm{I}_M-\bm{L}^\dagger\bm{L}-\bm s+\bm{L}^\dagger\bm{t}\bm{L}\geq 0,\end{aligned}$$ i.e., $\bm\Sigma$ is positive (semidefinite). Note that if we choose the same ordering at input and output, i.e., $\bm s=\bm t=s\bm I_M$, then $$\begin{aligned} \label{eq:Sigma11} \bm\Sigma=(1-s)(\bm{I}_M-\bm{L}^\dagger\bm{L})\ge0,\end{aligned}$$ provided $s\le1$; further choosing $s=1$, we have $\bm\Sigma=0$ and thus $$\begin{aligned} \label{eq:Tdelta} T_{\text{{\hbox{LON}}}}^{(\bm{I_M},\bm{I_M})}(\bm\beta|\bm\alpha)=\delta^{2M}(\bm\beta-\bm\alpha\bm L).\end{aligned}$$ To apply our second method for generating an efficient classical simulation of sampling, we should apply the procedure outlined at the end of Sec. \[sec:sufficient\]. Suppose the input state has nonnegative ${\hbox{($\bm{t}$)-PQD}}$ $W^{(\bm t)}(\bm\alpha|\bm\rho_\text{in})$ for $\bm t\le\bar{\bm t}$, and the output measurement has nonnegative ${\hbox{($-\bm{s}$)-PQD}}$ $W_\Pi^{(-\bm s)}(\bm n|\bm\beta)$ for $\bm s\ge\bar{\bm s}$. 
Then the necessary and sufficient condition for our second method to yield an efficient classical simulation of sampling from the output probability distribution is that $$\begin{aligned} \label{eq:barSigma} \overline{\bm\Sigma}=\bm{I}_M-\bm{L}^\dagger\bm{L}-\bar{\bm s}+\bm{L}^\dagger\bar{\bm{t}}\bm{L}\geq 0.\end{aligned}$$ Two special cases deserve attention. For a lossless LON, the transfer matrix $\bm L=\bm U$ is unitary, and the condition (\[eq:barSigma\]) becomes $\bar{\bm s}\le\bm{U}^\dagger\bar{\bm{t}}\bm U$. In the case of *identical* measurements on all the output modes, the POVM elements become a product $\Pi_{\bm{n}}=\bigotimes_{k=1}^M \Pi_{n_k}$, where $\{\Pi_{n_k}\}$ is the POVM for the measurement on output mode $k$, and the ${\hbox{($-\bm{s}$)-PQD}}$ of the measurements is also a product, $W_\Pi^{(-\bm s)}(\bm n|\bm\beta)=\prod_{k=1}^M W_\Pi^{(-s_k)}(n_k|\beta_k)$. In this situation, the optimal output ordering parameters are the same for all $M$ modes, i.e., $\bar{\bm s}=\bar s\bm I_M$. Thus, for a lossless LON with identical product measurements, the condition (\[eq:barSigma\]) simplifies to $\bar s\bm I_M\le\bm{U}^\dagger\bar{\bm{t}}\bm U$, which is equivalent to $\bar s{\bm I}_M\le\bar{\bm t}$. In the next two subsections we apply our conditions for classical simulations to two schemes for boson sampling in the presence of errors. Before turning to that, however, we digress briefly to note that since a LON is a classical process, we can provide a classical simulation for all classical input states, because we can choose $\bm t=\bm s=\bm I_M$, i.e., the $P$ function for the input state and the (always nonnegative) $Q$ function for the measurements; this leads to the $\delta$ transition function of Eq. (\[eq:Tdelta\]). This is the motivating case considered at the end of Sec. \[sec:genericsampling\]. A particular example is provided by inputting coherent states to a LON and performing any measurements at the output.
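The condition (\[eq:barSigma\]) and its special cases are easy to verify numerically for scalar orderings $\bar{\bm s}=\bar s\,\bm I_M$ and $\bar{\bm t}=\bar t\,\bm I_M$. Here is a minimal Python sketch (ours, using an arbitrary random lossy transfer matrix, not a specific experiment):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random lossy transfer matrix: top-left M x M block of a 2M x 2M unitary.
M = 4
A = rng.normal(size=(2 * M, 2 * M)) + 1j * rng.normal(size=(2 * M, 2 * M))
L = np.linalg.qr(A)[0][:M, :M]

def Sigma_bar(s_bar, t_bar, L):
    """Sigma_bar = I - L^dag L - s_bar I + L^dag (t_bar I) L, scalar orderings."""
    I = np.eye(L.shape[0])
    return I - L.conj().T @ L - s_bar * I + t_bar * (L.conj().T @ L)

def simulable(s_bar, t_bar, L, tol=1e-12):
    S = Sigma_bar(s_bar, t_bar, L)
    return np.min(np.linalg.eigvalsh((S + S.conj().T) / 2)) >= -tol

# s = t = 1 (P function in, Q function out): Sigma = 0, delta transition function.
assert simulable(1.0, 1.0, L)
# Classical measurements (s = -1) with the Q function for the input (t = -1).
assert simulable(-1.0, -1.0, L)
# Wigner-Wigner choice s = t = 0: Sigma = I - L^dag L >= 0.
assert simulable(0.0, 0.0, L)
```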
The flip side of classical input states is classical measurements, such as heterodyne measurements, for which the $P$ functions of the POVM elements are nonnegative, allowing us to choose $\bm s=-\bm I_M$; in this situation, we can choose $\bm t=-\bm I_M$, i.e., the $Q$ function for the input state, and have a nonnegative transition function according to Eq. (\[eq:Sigma11\]). Hence, in the case of classical measurements, efficient classical simulation is possible for any input state. In the symmetric case, where both the input state and the POVM elements have nonnegative Wigner functions, we can choose $\bm t=\bm s=0$, and given Eq. (\[eq:Sigma11\]), the transition function is always nonnegative. An example is Gaussian input states to a LON and Gaussian measurements at the output [@BarSan].

Boson sampling with single-photon sources {#sec:BSsinglephotons}
-----------------------------------------

We turn now to investigating the effect of errors in a practical implementation of boson sampling that uses single-photon sources and on-off photodetectors. Recall that in this model, which is the one proposed originally by Aaronson and Arkhipov [@AA], $N$ single photons are injected into the first $N$ ports of an $M$-port LON, with $M\gg N$, and the remaining $M-N$ ports receive the vacuum state. To avoid having more than one count at an output detector, one generally requires that the number of photons counted at the detectors be $\alt\sqrt M$ [@AA; @RanSam]. We consider the following sources of error: impurity of the input photons and mode mismatching of these photons into the LON, losses and mode mismatching within the LON, and inefficiency and random counts in the detectors. It is a considerable practical challenge to generate a single-photon state. We assume that the output of the single-photon sources is a statistical mixture of vacuum and a single photon, $(1-\mu)\ket{0}\!\bra{0}+\mu \ket{1}\!\bra{1}$, $\mu\in[0,1]$.
Note that this state is the output of a beamsplitter with transmissivity $\sqrt{\mu}$ when the beamsplitter is illuminated by a pure single photon. In addition to the impurity of the input, the input photons are generally not mode-matched to the temporal, frequency, and polarization modes that interfere ideally through the LON. The nonoverlapping parts of the input photons are lost to the ideal interference that leads to the probability distribution one wants to sample at the output, so we treat them as a loss and model that loss by virtual beamsplitters with transmissivity $\sqrt{\eta_{{\scriptscriptstyle B}}}$. Taking into account both the impurity and the mode mismatching, we have that the state input into the first $N$ ports is $$\begin{aligned} \label{eq:rhosinglephoton} \rho=(1-\bar\eta)\ket{0}\!\bra{0}+\bar\eta\ket{1}\!\bra{1},\end{aligned}$$ where $\bar\eta=\mu\eta_{{\scriptscriptstyle B}}$. We return to a discussion of mode mismatching, at the input to and throughout the LON, at the end of this subsection. By using Eqs. (\[eq:s-PQD-def\]) and (\[eq:s-cha-def\]) for a single mode, the ${\hbox{($t$)-PQD}}$ of the mixed input state (\[eq:rhosinglephoton\]) is given by [@sFock] $$\begin{aligned} \label{eq:tPQDrho} W^{(t)}(\alpha|\rho)= \frac{2}{\pi} \frac{(1-t)(1-t-2\bar\eta)+4\bar\eta|\alpha|^2}{(1-t)^3}e^{-2|\alpha|^2/(1-t)},\end{aligned}$$ which is nonnegative for $t\leq\bar t=1-2\bar\eta=1-2\mu\eta_{{\scriptscriptstyle B}}$. As the vacuum state ($\bar\eta=0$) is a classical state whose ${\hbox{($t$)-PQD}}$ is nonnegative for $t\leq1$, the overall ${\hbox{($\bm{t}$)-PQD}}$ of the input is nonnegative for $$\begin{aligned} \label{eq:bar-t-sp} \bm t\leq \bar{\bm{t}}=\bm I_M-2\mu \eta_{{\scriptscriptstyle B}}\bm{J}_N,\end{aligned}$$ where $\bm J_N$ is the diagonal matrix with 1s in the first $N$ diagonal positions and 0s otherwise. Losses within the LON are taken into account by the transfer matrix $\bm L$.
For a particular implementation of boson sampling, one should use the measured transfer matrix to analyze the system [@charact]. A good part of these losses is mode mismatching within the network, about which we say more below. For our analysis, we adopt a simple model of losses that allows us to investigate how the effect of losses scales with the size of the network. In particular, we assume that all paths through the LON suffer the same amount of loss and thus describe the network by a transfer matrix $\bm L=\sqrt{\eta_{{\scriptscriptstyle L}}}\,\bm U$, where $\bm U$ is the unitary transfer matrix for a lossless LON. We make this more specific in the following way. The network consists of $\ell$-port optical elements, each with a uniform transmissivity $\sqrt{\eta_{{\scriptstyle 0}}}$, and has depth $d$. Thus each input port speaks to $\ell^d$ output ports. We assume that the network is fully connected, so $M=\ell^d$; hence, each input photon sees a loss $\eta_{{\scriptscriptstyle L}}=\eta_{{\scriptstyle 0}}^d=\eta_{{\scriptstyle 0}}^{\log_\ell\!M}=M^{\log_\ell\!\eta_0}$. For the on-off photodetectors, which we assume to be identical, we use a model similar to that devised in Ref. [@Barnett] for detectors with subunity efficiency and dark counts. We think of the dark-count probability in the model more generally than in the original model, however; it is not just the probability for intrinsic dark counts in the detector, but it also includes any sort of [*random counts*]{}. We discuss below how mode-mismatched photons at the input and within the LON can propagate through the LON and contribute random counts at the photodetectors.
The POVM elements associated with the on-off outcomes—zero denotes the off state, i.e., no detector click, and 1 denotes a click—are $$\begin{aligned} \Pi_0(\eta_{{\scriptscriptstyle D}},p_{{\scriptscriptstyle D}}) &=(1-p_{{\scriptscriptstyle D}})\sum_{m=0}^{\infty}(1-\eta_{{\scriptscriptstyle D}})^m\ket{m}\bra{m},\\ \begin{split} \Pi_1(\eta_{{\scriptscriptstyle D}},p_{{\scriptscriptstyle D}})&=\mathcal{I}-\Pi_0(\eta_{{\scriptscriptstyle D}},p_{{\scriptscriptstyle D}})\\ &=\sum_{m=0}^{\infty}[1-(1-p_{{\scriptscriptstyle D}})(1-\eta_{{\scriptscriptstyle D}})^m]\ket{m}\bra{m}, \end{split}\end{aligned}$$ where $\eta_{{\scriptscriptstyle D}}$, satisfying $0\leq\eta_{{\scriptscriptstyle D}}\leq 1$, is the detector efficiency, and $1-p_{{\scriptscriptstyle D}}$ is the probability of no random count. The sum in $\Pi_0$ is an unnormalized thermal state, so by using the ${\hbox{($-s$)-PQD}}$ of a thermal state, we can find the ${\hbox{($-s$)-PQD}}$ of $\Pi_0(\eta_{{\scriptscriptstyle D}},p_{{\scriptscriptstyle D}})$ to be $$\begin{aligned} W^{(-s)}_\Pi(0|\beta) =\frac{1-p_{{\scriptscriptstyle D}}}{\pi} \frac{e^{-\eta_{{\scriptscriptstyle D}}|\beta|^2/[1-\eta_{{\scriptscriptstyle D}}(1-s)/2]}}{1-\eta_{{\scriptscriptstyle D}}(1-s)/2};\end{aligned}$$ this is nonnegative provided $s\ge1-2/\eta_{{\scriptscriptstyle D}}$, which is really no restriction at all. The ${\hbox{($-s$)-PQD}}$ of $\Pi_1(\eta_{{\scriptscriptstyle D}},p_{{\scriptscriptstyle D}})$, given by $$\begin{aligned} W^{(-s)}_\Pi(1|\beta)=\frac{1}{\pi}-W^{(-s)}_\Pi(0|\beta),\end{aligned}$$ is nonnegative provided that $s\ge\bar s=1-2p_{{\scriptscriptstyle D}}/\eta_{{\scriptscriptstyle D}}$. Writing this in terms of the ordering parameters for all the detectors, we have nonnegative output ${\hbox{($\bm{s}$)-PQD}}$s if $$\begin{aligned} \label{eq:bar-s-sp} \bm s\geq \bar{\bm s}=\bigg(1-\frac{2p_{{\scriptscriptstyle D}}}{\eta_{{\scriptscriptstyle D}}}\bigg)\bm I_M.\end{aligned}$$ Putting Eqs. (\[eq:bar-t-sp\]) and (\[eq:bar-s-sp\]), plus our description of the LON, into Eq.
(\[eq:barSigma\]), we find that the condition for an efficient classical simulation is that $\bm U\overline{\bm\Sigma}\bm U^\dagger=(2p_{{\scriptscriptstyle D}}/\eta_{{\scriptscriptstyle D}}-\eta_{{\scriptscriptstyle L}})\bm I_M+\eta_{{\scriptscriptstyle L}}\bar{\bm t}\ge0$; provided there is even one single-photon input, this reduces to the simple condition $$\begin{aligned} \label{eq:con-sp} p_{{\scriptscriptstyle D}}\ge\eta\equiv\mu\eta_{{\scriptscriptstyle B}}\eta_{{\scriptscriptstyle L}}\eta_{{\scriptscriptstyle D}}=\mu\eta_{{\scriptscriptstyle B}}\eta_{{\scriptscriptstyle D}}\eta_{{\scriptstyle 0}}^{\log_\ell\!M},\end{aligned}$$ where $\eta$ characterizes the overall loss in the experiment. Because of the simplicity of our model for losses within the LON, the condition (\[eq:con-sp\]) does not apply precisely to LONs with general transfer matrices $\bm L$, but the dependence of $\eta_{{\scriptscriptstyle L}}$ on $\ell$ and $M$ does indicate how the condition for simulability scales with the size of the LON. A recent study [@Aar-Bro] shows that if a fixed number $K$ of photons are lost, boson sampling remains classically hard, provided $K$ is not too large. This suggests that in the presence of loss, one can inject more single photons into the LON so that on average, an interesting boson-sampling problem is realized. The mean number of photodetector counts is $\eta N$; if we require that the number of counts does not much exceed $\sqrt M$ and also require $N\le M$, we have $N=\min(M,\sqrt M/\eta)$. To get an idea of what is going on, consider an ambitious, but perhaps realistic example in which $\mu=0.5$, $\eta_{{\scriptscriptstyle B}}=0.1$, $\eta_{{\scriptstyle 0}}=0.98$, $\ell=2$, and $\eta_{{\scriptscriptstyle D}}=0.95$. For $M=10$, we have $\eta_{{\scriptscriptstyle L}}=0.94$, $\sqrt M/\eta=71$, $N=M=10$, and $N\eta=0.44$; the condition for classical simulability is that $p_{{\scriptscriptstyle D}}\ge\eta=0.044$.
For $M=100$, we have $\eta_{{\scriptscriptstyle L}}=0.87$, $\sqrt M/\eta=241$, $N=M=100$, and $N\eta=4.2$; the condition for classical simulability is that $p_{{\scriptscriptstyle D}}\ge\eta=0.042$. For $M=1\,600$, we have $\eta_{{\scriptscriptstyle L}}=0.81$, $\sqrt M/\eta=1\,044=N$, and $N\eta=40$; the condition for classical simulability is that $p_{{\scriptscriptstyle D}}\ge\eta=0.038$. An obvious question is why our method needs random counts for classical simulability. The answer is that in the absence of random counts, sampling from the (exact) output probability distribution cannot be efficiently simulated classically; this can be shown using Stockmeyer’s approximate counting algorithm. Even for large losses, it is still possible that all the input photons get counted by the detectors at the output. As any lossy LON can be thought of as part of a larger, lossless LON, the probabilities of these events are proportional to the squared moduli of permanents of complex matrices, which are submatrices of a unitary matrix for the larger LON. If sampling were classically simulatable, using Stockmeyer’s approximate counting algorithm one could approximate one of these probabilities to within a multiplicative error in $\text{BPP}^{\text{NP}}$ \[Stockmeyer’s algorithm allows for the proportionality factors to be of the order $2^{-\text{poly}(N)}$\]; this would lead to the collapse of the polynomial hierarchy to the third level because, as observed by Aaronson and Arkhipov, multiplicative approximation of these probabilities is \#P hard. Note, however, that this argument does not imply that boson-sampling experiments with losses and very low random counts are still practically interesting. One can expect that above some threshold for losses, a classical algorithm can efficiently generate samples from an approximate probability distribution, and in practice, this cannot be distinguished from the outcomes of the experiment.
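The threshold numbers quoted above follow from the uniform-loss model; a short Python sketch reproducing them to the quoted precision, under the stated parameter choices:

```python
import math

# Parameters of the example: source purity, input mode matching,
# per-element transmissivity, element size, detector efficiency.
mu, etaB, eta0, ell, etaD = 0.5, 0.1, 0.98, 2, 0.95

for M, eta_quoted in [(10, 0.044), (100, 0.042), (1600, 0.038)]:
    etaL = eta0 ** math.log(M, ell)      # uniform-loss model: eta_L = eta0**log_ell(M)
    eta = mu * etaB * etaL * etaD        # overall loss; simulable when p_D >= eta
    N = min(M, math.sqrt(M) / eta)       # photons injected to offset losses
    assert abs(eta - eta_quoted) < 1e-3
```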
The importance of random counts prompts us to return to the question of mode mismatching at the input to and within the LON. Mode mismatching occurs when temporal, frequency, and polarization properties of photon wave packets do not overlap ideally at the input to the LON and at the optical elements used to implement a specific LON. The nonoverlapping parts of the photon wave packets are lost to the ideal interference that leads to the probability distribution one wants to sample at the output. Mode mismatching is thus a loss mechanism and is likely to be the dominant loss mechanism within a large optical network. Without some intervention, however, nonoverlapping parts of the photon wave packets continue through the LON and are counted within the temporal and spatial windows defined by the photodetectors. These photocounts are effectively random and contribute to the random-count probability of the detectors (they might be correlated between different output modes, but it is hard to see how this correlation could be used to our advantage); indeed, they are very likely to be the dominant contribution to the random-count probability, as in high-quality detectors, the intrinsic dark-count rate is very low. In principle, one can use active filters (mode cleaners) to remove nonoverlapping parts of photon wave packets at the input to, output from, and perhaps within the LON and, hence, to turn mode mismatching into a genuine loss where the mode-mismatched parts of the photon wave packets do not contribute counts at the photodetectors. To assess how serious this problem is, suppose that mode mismatching is the dominant loss mechanism.
Suppose further that a fraction $f_{{\scriptscriptstyle B}}$ of the photons lost at the input, numbering $\mu(1-\eta_{{\scriptscriptstyle B}})$, continue into the LON and on to the photodetectors and that a fraction $f_{{\scriptscriptstyle L}}$ of the photons lost within the LON, numbering $\mu\eta_{{\scriptscriptstyle B}}(1-\eta_{{\scriptscriptstyle L}})$, continue to the detectors. Assuming these mode-mismatched photons are counted with efficiency $\eta_{{\scriptscriptstyle D}}$, they contribute random-count probability $$\begin{aligned} \label{eq:pDeff} p_{{\scriptscriptstyle D}}=\frac{\eta_{{\scriptscriptstyle D}}\mu N}{M} \Big[f_{{\scriptscriptstyle L}}(1-\eta_{{\scriptscriptstyle L}})\eta_{{\scriptscriptstyle B}}+f_{{\scriptscriptstyle B}}(1-\eta_{{\scriptscriptstyle B}})\Big].\end{aligned}$$ With the same assumptions and same values for loss parameters as above, we now also assume that $f_{{\scriptscriptstyle B}}=0.1$, on the grounds that the input loss $\eta_{{\scriptscriptstyle B}}=0.1$ already reflects a major attempt to clean up the input wave-packet modes, and $f_{{\scriptscriptstyle L}}=0.9$, on the grounds that it would be quite difficult to clean up the output photons without introducing additional losses. With these assumptions, we get $p_{{\scriptscriptstyle D}}=0.046$ for $M=10$, $p_{{\scriptscriptstyle D}}=0.049$ for $M=100$, and $p_{{\scriptscriptstyle D}}=0.034$ for $M=1\,600$. Comparing these random counts with the corresponding thresholds given above indicates that mode mismatching is indeed a challenge for boson-sampling experiments of interesting size; recall that this assumes that additional single photons are fed into the LON to compensate for losses in order to keep the number of detected photons as close to $\sqrt M$ as possible. The scaling with $M$ is such that if large LONs can be constructed without compromising the loss parameters, the situation gets better as $M$ increases.
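Equation (\[eq:pDeff\]) and the quoted random-count probabilities can be reproduced with a few lines of Python (the parameter values, including the fractions $f_{{\scriptscriptstyle B}}$ and $f_{{\scriptscriptstyle L}}$, are the assumptions stated above, not measured quantities):

```python
import math

mu, etaB, eta0, ell, etaD = 0.5, 0.1, 0.98, 2, 0.95
fB, fL = 0.1, 0.9   # assumed fractions of mismatched photons reaching the detectors

def random_count_probability(M):
    """Mode-mismatch contribution to p_D, Eq. (pDeff), in the uniform-loss model."""
    etaL = eta0 ** math.log(M, ell)
    eta = mu * etaB * etaL * etaD
    N = min(M, math.sqrt(M) / eta)       # photons injected to offset losses
    return etaD * mu * N / M * (fL * (1 - etaL) * etaB + fB * (1 - etaB))

# Compare with the probabilities quoted in the text (to the quoted precision).
for M, pD_quoted in [(10, 0.046), (100, 0.049), (1600, 0.034)]:
    assert abs(random_count_probability(M) - pD_quoted) < 2e-3
```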
It is worth noting that for the range of parameters we have considered, for which $N\simeq M$, the condition for classical simulatability is that the number of mode-mismatched photons counted at the photodetectors, $Mp_{{\scriptscriptstyle D}}$, exceed the number of mode-matched photons, $N\eta$. This is a useful rule of thumb for assessing the simulatability of a boson-sampling experiment.

Boson sampling with SPDC sources {#sec:BSSPDC}
--------------------------------

A major practical challenge for implementing boson sampling is reliable single-photon sources. In most quantum-optics experiments, spontaneous parametric down-conversion (SPDC) is used as a probabilistic source for preparing single photons [@BRO13; @SPR13; @TIL13; @CRE13]. If the two-mode squeezed vacuum state generated by an SPDC source has weak squeezing, photon counting on the heralding mode prepares vacuum or a single photon in the signal mode, which can then be used as one of the inputs to the $M$ input ports of a boson-sampling LON. This scheme can be viewed as sampling from the output photon-counting probability distribution of a larger LON with $2M$ modes; the larger LON consists of the identity process acting on the heralding modes and the original LON acting on the signal modes. This scenario implements [*randomized*]{} boson sampling, in which when $N$ photons are randomly detected in the heralding modes, $N$ single photons are injected into the corresponding ports of the original LON. (With the loss parameters we consider here, in boson sampling with single-photon sources, the single photons are also randomly injected into a LON, but one does not know to which input ports.) In the absence of any losses or inefficiencies, the average number of photons input to the signal-mode LON is $N=M\sinh^2\!r$, where $r$ is the squeezing parameter, assumed to be positive without loss of generality; to achieve $N=\sqrt M\ll M$, one chooses $\sinh^2\!r=1/\sqrt M$ [@RanSam].
We consider the following sources of error: mode mismatching of the signal modes into the smaller, signal-mode LON, described by virtual beamsplitters with transmissivity $\sqrt{\eta_{{\scriptscriptstyle B}}}$; losses in the signal-mode LON, described by the transfer matrix $\bm L$, but no losses for the heralding modes, so that the overall transfer matrix is $\bm I_M\oplus\bm L$; and for all modes, the model for random counts and inefficiency that we introduced previously for on-off detectors. Since the input squeezed vacuum states are Gaussian and the LON is a Gaussian process, we can efficiently find the PQD of the output state and use our first condition to check whether efficient classical simulation of the sampling problem is possible. To simplify our analysis and to compare directly with our results for single-photon inputs, we specialize to the simple model for loss in the signal-mode LON in which $\bm L=\sqrt{\eta_{{\scriptscriptstyle L}}}\,\bm U$. Given this assumption, all the signal modes suffer the same loss in the LON, so we can refer the LON losses to the input, combine them with the mode mismatching of the signal modes, and thus describe both by virtual beamsplitters with transmissivity $\sqrt{\eta_{{\scriptscriptstyle{B\!L}}}}=\sqrt{\eta_{{\scriptscriptstyle B}}\eta_{{\scriptscriptstyle L}}}$, which act on each signal mode before it enters the signal-mode LON.
The upshot is that the larger LON is fed by $M$ copies of the two-mode state $$\begin{aligned} \label{eq:2svloss} \bm\rho_{\textrm{hs}}' ={\text{Tr}}_0[\bm\rho_{\textrm{hs}0}] ={\text{Tr}}_0 \big[ \mathcal{U}_{s0}(\eta_{{\scriptscriptstyle{B\!L}}}) \bm\rho_{\textrm{hs}}\otimes\ket{0}\!\bra{0} \mathcal{U}_{s0}^{\dagger}(\eta_{{\scriptscriptstyle{B\!L}}})\big].\end{aligned}$$ Here $\bm\rho_{\textrm{hs}}$ is the two-mode squeezed vacuum state generated by an SPDC source, $\mathcal{U}_{s0}(\eta_{{\scriptscriptstyle{B\!L}}})$ is the unitary operator for a beamsplitter with transmissivity $\sqrt{\eta_{{\scriptscriptstyle{B\!L}}}}$ that acts on the signal mode of the SPDC and a vacuum input, and the trace is taken over the mode reflected from the beamsplitter. With the LON losses referred to the input, the larger LON is now described by the unitary transfer matrix $\bm{I}_M \oplus \bm{U}$, which corresponds to a $\delta$-function transition function that does not alter the negativity of the input PQD. The state $\bm\rho_{\textrm{hs}}'$ is a Gaussian state. The Wigner function $(t=0)$ of any Gaussian state is a Gaussian function, but if the state is nonclassical, there exists $\bar t\in(0,1]$ such that for $t<\bar t$, the ${\hbox{($t$)-PQD}}$ is a Gaussian; for $t=\bar t$, the ${\hbox{($t$)-PQD}}$ has $\delta$-function singularities; and for $t>\bar{t}$, the ${\hbox{($t$)-PQD}}$ is more singular than a $\delta$ function. In order to find $\bar t$ for $\bm\rho_{\textrm{hs}}'$, we need to use the covariance matrix of the Gaussian ${\hbox{($t$)-PQD}}$ and find the value of $t$ at which the covariance matrix transitions from positive to negative, i.e., the smallest eigenvalue goes to zero. The covariance matrix of the Wigner function of $\bm\rho_{\textrm{hs}0}$ in Eq.
(\[eq:2svloss\]) is given by $$\begin{aligned} \bm\sigma_{\textrm{hs}0}=\big(\bm{I}_2\oplus \bm{B}_{s0}(\eta_{{\scriptscriptstyle{B\!L}}})\big) \big(\bm\sigma_{\textrm{hs}} \oplus\bm{I}_2\big)\big(\bm{I}_2\oplus \bm{B}^{T}_{s0}(\eta_{{\scriptscriptstyle{B\!L}}})\big), \end{aligned}$$ where $$\begin{aligned} \bm\sigma_{\textrm{hs}}= \begin{pmatrix} \cosh 2r \bm{I}_2 & \sinh 2r \bm{Z}_2 \\[2pt] \sinh 2r \bm{Z}_2 & \cosh 2r \bm{I}_2 \\ \end{pmatrix}\end{aligned}$$ is the covariance matrix of the two-mode squeezed vacuum state with squeezing parameter $r$, with $\bm{Z}_2=\text{diag}(1,-1)$ being the Pauli $z$ matrix, and $$\begin{aligned} \bm{B}_{s0}(\eta_{{\scriptscriptstyle{B\!L}}})= \begin{pmatrix} \sqrt{\eta_{{\scriptscriptstyle{B\!L}}}} \bm{I}_2& -\sqrt{1-\eta_{{\scriptscriptstyle{B\!L}}}} \bm{I}_2 \\[3pt] \sqrt{1-\eta_{{\scriptscriptstyle{B\!L}}}} \bm{I}_2 & \sqrt{\eta_{{\scriptscriptstyle{B\!L}}}} \bm{I}_2 \end{pmatrix}\end{aligned}$$ is the symplectic transformation of a beamsplitter with transmissivity $\sqrt{\eta_{{\scriptscriptstyle{B\!L}}}}$ [@Adesso-Illuminati]. The $4\times 4$ top-left submatrix of $\bm\sigma_{\textrm{hs}0}$ is then the covariance matrix of the Wigner function of $\bm\rho_{\textrm{hs}}'$, $$\begin{aligned} \label{Sighsp} \bm\sigma_{\textrm{hs}}'= \begin{pmatrix} \cosh 2r \bm{I}_2 & \sqrt{\eta_{{\scriptscriptstyle{B\!L}}}} \sinh 2r \bm{Z}_2 \\[3pt] \sqrt{\eta_{{\scriptscriptstyle{B\!L}}}} \sinh 2r \bm{Z}_2\; & [1+\eta_{{\scriptscriptstyle{B\!L}}}(\cosh 2r-1)]\bm{I}_2 \end{pmatrix}.\end{aligned}$$ The covariance matrix of the  is given by $\bm\sigma_{\textrm{hs}}'-t\bm{I}_4$; what we need to know is when, as $t$ increases from zero, the smallest eigenvalue of this $4\times4$ matrix goes to zero. 
Interchanging rows and columns of $\bm\sigma_{\textrm{hs}}'-t\bm{I}_4$ separates it into the direct sum of two $2\times2$ matrices, $$\begin{aligned} \begin{split} &\begin{pmatrix} \cosh 2r-t& \pm \sqrt{\eta_{{\scriptscriptstyle{B\!L}}}} \sinh 2r \\[3pt] \pm \sqrt{\eta_{{\scriptscriptstyle{B\!L}}}} \sinh 2r\; & 1-t+\eta_{{\scriptscriptstyle{B\!L}}}(\cosh 2r-1) \end{pmatrix}\\[3pt] &\qquad\qquad\qquad =\big[1-t+(1+\eta_{{\scriptscriptstyle{B\!L}}})\sinh^2\!r\big]\bm I_2 \pm(1-\eta_{{\scriptscriptstyle{B\!L}}})\sinh^2\!r\,\bm Z_2\pm2\sqrt{\eta_{{\scriptscriptstyle{B\!L}}}}\sinh r\cosh r\,\bm X_2, \end{split}\end{aligned}$$ which have the same eigenvalues ($\bm X_2$ is the Pauli $x$ matrix). The smaller eigenvalue goes to zero when $$\begin{aligned} \label{eq:bar-t-spdc} t=\bar t=1+(1+\eta_{{\scriptscriptstyle B}}\eta_{{\scriptscriptstyle L}})\sinh^2\!r -\sinh r\sqrt{(1+\eta_{{\scriptscriptstyle B}}\eta_{{\scriptscriptstyle L}})^2\sinh^2\!r+4\eta_{{\scriptscriptstyle B}}\eta_{{\scriptscriptstyle L}}},\end{aligned}$$ where we have restored $\eta_{{\scriptscriptstyle{B\!L}}}=\eta_{{\scriptscriptstyle B}}\eta_{{\scriptscriptstyle L}}$. For the ${\hbox{($\bm{t}$)-PQD}}$ of the overall input, we have $\bar{\bm{t}}=\bar{t}\bm{I}_{2M}$. The analysis of on-off photodetection in Sec.
\[sec:BSsinglephotons\] shows that nonnegativity of the measurement ${\hbox{($-\bm{s}$)-PQD}}$s requires $s\ge\bar s=1-2p_{{\scriptscriptstyle D}}/\eta_{{\scriptscriptstyle D}}$, and our first method of simulation requires that $s=t\le\bar t$, so the condition for efficient classical simulation of the SPDC scheme is that $\bar s\le\bar t$, which gives $$\begin{aligned} p_{{\scriptscriptstyle D}}\ge -\frac12\eta_{{\scriptscriptstyle D}}(1+\eta_{{\scriptscriptstyle B}}\eta_{{\scriptscriptstyle L}})\sinh^2\!r +\frac12\eta_{{\scriptscriptstyle D}}\sinh r\sqrt{(1+\eta_{{\scriptscriptstyle B}}\eta_{{\scriptscriptstyle L}})^2\sinh^2\!r+4\eta_{{\scriptscriptstyle B}}\eta_{{\scriptscriptstyle L}}}.\end{aligned}$$ The mean number of photons input to the signal modes is $N=M\sinh^2\!r$, meaning that $\sinh^2\!r$ in the above expressions is a surrogate for $N/M$. The average number of counts at the photodetectors is $\eta' N=\eta' M\sinh^2\!r$, where $\eta'=\eta_{{\scriptscriptstyle D}}\eta_{{\scriptscriptstyle L}}\eta_{{\scriptscriptstyle B}}$ gives the total loss through the system. As in our analysis of single-photon boson sampling, we choose $\eta' N=\sqrt M$ provided that $N\le M$; i.e., we choose $N=\min(M,\sqrt M/\eta')$, which is equivalent to $\sinh^2\!r=\min\big(1,1/(\sqrt{M}\,\eta')\big)$. Again we consider experiments in which $\eta_{{\scriptscriptstyle B}}=0.1$, $\eta_{{\scriptstyle 0}}=0.98$, $\ell=2$, and $\eta_{{\scriptscriptstyle D}}=0.95$. For $M=10$, we have $\eta_{{\scriptscriptstyle L}}=0.94$, $\sqrt M/\eta'=36$, $N=M=10$, $N\eta'=0.89$, and $\sinh^2\!r=1$; the threshold for classical simulability is $p_{{\scriptscriptstyle D}}\ge 0.076$. For $M=100$, we have $\eta_{{\scriptscriptstyle L}}=0.87$, $\sqrt M/\eta'=120$, $N=M=100$, $N\eta'=8.3$, and $\sinh^2\!r=1$; the threshold for classical simulability is $p_{{\scriptscriptstyle D}}\ge 0.071$.
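These SPDC thresholds can be reproduced numerically. The Python sketch below (ours, not from the original analysis) also cross-checks the closed form (\[eq:bar-t-spdc\]) for $\bar t$ against the smallest eigenvalue of the covariance matrix (\[Sighsp\]):

```python
import math
import numpy as np

etaB, eta0, ell, etaD = 0.1, 0.98, 2, 0.95   # parameters of the example

def t_bar(r, etaBL):
    """Closed-form t_bar, cross-checked against the covariance-matrix eigenvalue."""
    s2 = math.sinh(r) ** 2
    closed = 1 + (1 + etaBL) * s2 - math.sinh(r) * math.sqrt(
        (1 + etaBL) ** 2 * s2 + 4 * etaBL)
    # Numerical check: smallest eigenvalue of sigma'_hs equals t_bar,
    # since sigma'_hs - t I4 first loses positivity at t = lambda_min.
    c2r, s2r = math.cosh(2 * r), math.sinh(2 * r)
    Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
    sigma = np.block([[c2r * I2, math.sqrt(etaBL) * s2r * Z],
                      [math.sqrt(etaBL) * s2r * Z, (1 + etaBL * (c2r - 1)) * I2]])
    assert abs(np.min(np.linalg.eigvalsh(sigma)) - closed) < 1e-10
    return closed

def spdc_threshold(M):
    """p_D threshold from s_bar <= t_bar, i.e., 1 - 2 p_D/eta_D <= t_bar."""
    etaL = eta0 ** math.log(M, ell)
    eta_p = etaD * etaL * etaB                       # total loss eta'
    sinh2 = min(1.0, 1.0 / (math.sqrt(M) * eta_p))   # sinh^2 r
    r = math.asinh(math.sqrt(sinh2))
    return etaD * (1 - t_bar(r, etaB * etaL)) / 2

assert abs(spdc_threshold(10) - 0.076) < 1e-3
assert abs(spdc_threshold(100) - 0.071) < 1e-3
```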
For $M=1\,600$, we have $\eta_{{\scriptscriptstyle L}}=0.81$, $\sqrt M/\eta'=522=N$, $N\eta'=40$, and $\sinh^2\!r=0.33$; the threshold for classical simulability is $p_{{\scriptscriptstyle D}}\ge 0.060$. It is notable that these thresholds are close to twice those we found under comparable conditions for single-photon boson sampling; the single-photon thresholds would be higher if the single-photon sources produced photons with no impurity ($\mu=1$). As in single-photon boson sampling, SPDC boson sampling suffers from the problem of mode-mismatched photons becoming random counts in the photodetectors. The same analysis as for single-photon boson sampling yields random-count probability (\[eq:pDeff\]) with $\mu=1$. Again assuming $f_{{\scriptscriptstyle B}}=0.1$ and $f_{{\scriptscriptstyle L}}=0.9$, with all the other parameters the same as above, the random-count probability is $p_{{\scriptscriptstyle D}}=0.091$ for $M=10$, $p_{{\scriptscriptstyle D}}=0.096$ for $M=100$, and $p_{{\scriptscriptstyle D}}=0.033$ for $M=1\,600$. This indicates that mode mismatching is a challenge for SPDC boson-sampling experiments of interesting size. Just as for single-photon sources, this conclusion assumes that additional photons are input to compensate for losses, but again the scaling is favorable provided one can keep losses and mode mismatching under control as system size increases.

Conclusion {#sec:conclusion}
==========

In this paper we established sufficient conditions for efficient classical simulation of general quantum-optical experiments that involve a quantum state that is subjected to an $M$-mode quantum process and measurement at the output of the process.
These conditions support the notion that negativity is a quantum resource by showing that efficient classical simulation of sampling from the output probability distribution is possible when there are (i) nonnegative output-state and output-measurement quasiprobability distributions or (ii) nonnegative input-state and output-measurement quasiprobability distributions and a nonnegative transition function associated with the quantum process. We applied our conditions for classical simulability to two implementations of the boson-sampling problem. We considered simple models of errors and imperfections to assess the effects of mode mismatching, loss in the LON, and inefficiency and random counts of on-off photodetectors. We found that these errors have a significant impact and obtained random-count thresholds beyond which efficient classical simulation is possible. For any actual implementation of boson sampling, however, one should go beyond the simple examples given here and use our methods to model all the imperfections, noise, and errors, in particular formulating and analyzing a detailed model of losses and mode mismatching within the particular LON, in order to determine when it is possible to do classical simulations using our methods. In the case of mode mismatching, nonoverlapping parts of photon wave packets that proceed to and are counted at the detectors are likely to be the major contribution to the random-count probability; hence, it is particularly important to assess the need for and effectiveness of active mode cleaning (so-called quantum filters) to mitigate this effect. We caution that we do not warrant that there is no other method of efficient classical simulation when our conditions are not satisfied. Indeed, we have only considered the problem of sampling from the exact output probability distribution of measurement outcomes.
A more general problem is approximate sampling, i.e., sampling from a close approximation to the exact probability distribution, in which case the question is whether sampling from the approximate distribution can be efficiently simulated classically. We have shown that in the presence of losses in boson-sampling experiments and with zero or very low random counts, the exact sampling problem cannot be simulated using our methods. Yet, under the same conditions, one might be able to simulate approximate sampling. A possible approach might be to simulate sampling from a nonnegative distribution that approximates a slightly negative quasidistribution, perhaps using techniques like those recently introduced for discrete-variable systems [@Pashayan15]. We leave this as a subject for future research. Several lessons might be drawn from our work in this paper. First, in any protocol that uses probabilistic state preparation, the state preparation should be included when one searches for efficient classical simulations. If classical simulation is possible for sampling from the whole distribution, then it is also possible for sampling from a subdistribution that is chosen by postselection. This is the approach we used in our analysis of SPDC boson sampling, where we included the heralding modes explicitly in the search for a classical simulation. Second, our random-count thresholds are hard boundaries. These hard boundaries might be moved closer to the ideal problem by considering approximate sampling, as discussed above, but the point here is that such hard boundaries might not be found by considering perturbations about an ideal protocol. This might be a general property of analogue quantum protocols like boson sampling. A third lesson is that it is generally easier to devise analogue quantum protocols than it is to show that the protocol does not have an efficient classical simulation. 
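For contrast, the classical particle analogue of boson sampling discussed in the closing paragraph below—distinguishable particles making probabilistic transitions through the network, with outcome probabilities given by permanents of matrices with nonnegative elements—can be sketched as follows. This is a toy illustration, not an implementation from the text; the beam-splitter network is hypothetical. For a 50:50 beam splitter, the distinguishable-particle coincidence probability is $1/2$, in contrast with the Hong–Ou–Mandel value $0$ for indistinguishable photons.

```python
import itertools
import numpy as np

def permanent(A):
    """Permanent by the naive sum over permutations (fine for small matrices)."""
    n = A.shape[0]
    return sum(np.prod([A[i, sigma[i]] for i in range(n)])
               for sigma in itertools.permutations(range(n)))

def sample_distinguishable(U, in_ports, shots, rng):
    """Classical analogue: a distinguishable particle entering port j
    exits port k with probability |U[k, j]|**2, independently of the others."""
    P = np.abs(U) ** 2
    return np.array([[rng.choice(P.shape[0], p=P[:, j]) for j in in_ports]
                     for _ in range(shots)])

# Hypothetical 50:50 beam splitter; two distinguishable particles, ports 0 and 1.
U_bs = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# The probability that the two particles exit in distinct ports (a coincidence)
# is the permanent of the nonnegative matrix |U_bs|**2, which equals 1/2 here.
```

Because the matrix elements are nonnegative probabilities, this sampler is trivially efficient; the hardness of boson sampling enters only through the complex amplitudes of indistinguishable photons.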
Confronted with a new analogue quantum protocol, the responsibility of theorists and experimenters alike is to put on the classical thinking cap and to focus on whether classical simulations are possible in the presence of noise; this is essential for designing experiments that are meaningful implementations of the quantum protocol. Our methods for classical simulation are based on the wave aspects of boson-sampling experiments, as opposed to the particle aspects. One classical analogue of boson sampling replaces the identical input bosons with classical distinguishable particles undergoing probabilistic transitions within a network; in this situation, output probabilities are given by permanents of matrices with nonnegative elements [@AA]. In contrast, in our methods, we deal with waves undergoing interference within a linear-optical network and try to mimic quantum mechanics by using quasiprobability distributions to translate from particle inputs and particle measurements to the complex amplitudes of interfering waves. This is a natural way to try to simulate an analogue quantum protocol like boson sampling. We close by noting that the mode-mismatched photons that make their way to the detectors, which we identify as the chief challenge for boson sampling, are effectively the distinguishable photons of a particle description. We thank F. Shahandeh, A. Lund, W. Bowen, M. Bremner, A. Fedrizzi, M. Almeida, and J. Loredo for informative discussions. This research was supported in part by the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (Project No. CE110001027) and by U.S. National Science Foundation Grants No. PHY-1314763 and No. PHY-1521016. [99]{} V. Veitch, C. Ferrie, D. Gross, and J. Emerson, [*Negative Quasi-Probability as a Resource for Quantum Computation*]{}, New J. Phys. [**14**]{}, 113011 (2012). A. Mari and J. Eisert, [*Positive Wigner Functions Render Classical Simulation of Quantum Computation Efficient*]{}, Phys.
Rev. Lett. [**109**]{}, 230503 (2012). V. Veitch, N. Wiebe, C. Ferrie, and J. Emerson, [*Efficient Simulation Scheme for a Class of Quantum Optics Experiments with Non-Negative Wigner Representation*]{}, New J. Phys. [**15**]{}, 013037 (2013). E. Wigner, [*On the Quantum Correction for Thermodynamic Equilibrium*]{}, Phys. Rev. [**40**]{}, 749 (1932). D. Stahlke, [*Quantum Interference as a Resource for Quantum Speedup*]{}, Phys. Rev. A [**90**]{}, 022302 (2014). S. Aaronson and A. Arkhipov, [*The Computational Complexity of Linear Optics*]{}, Theory of Computing [**9**]{}(4), 143 (2013). L. Valiant, [*The Complexity of Computing the Permanent*]{}, Theor. Comput. Sci. [**8**]{}, 189 (1979). S. Aaronson, [*A Linear-Optical Proof That the Permanent Is \#P-hard*]{}, Proc. Roy. Soc. London A [**467**]{}, 3393 (2011). A. P. Lund, A. Laing, S. Rahimi-Keshari, T. Rudolph, J. L. O’Brien, and T. C. Ralph, [*Boson Sampling from a Gaussian State*]{}, Phys. Rev. Lett. [**113**]{}, 100502 (2014). P. P. Rohde, K. R. Motes, P. A. Knott, J. Fitzsimons, W. J. Munro, and J. P. Dowling, [*Evidence for the Conjecture That Sampling Generalized Cat States with Linear Optics Is Hard*]{}, Phys. Rev. A [**91**]{}, 012342 (2015). K. P. Seshadreesan, J. P. Olson, K. R. Motes, P. P. Rohde, and J. P. Dowling, [*Boson Sampling with Displaced Single-Photon Fock States Versus Single-Photon-Added Coherent States: The Quantum-Classical Divide and Computational-Complexity Transitions in Linear Optics*]{}, Phys. Rev. A [**91**]{}, 022334 (2015). J. P. Olson, K. P. Seshadreesan, K. R. Motes, P. P. Rohde, and J. P. Dowling, [*Sampling Arbitrary Photon-Added or Photon-Subtracted Squeezed States Is in the Same Complexity Class as Boson Sampling*]{}, Phys. Rev. A [**91**]{}, 022317 (2015). A. Leverrier and R. García-Patrón, [*Analysis of Circuit Imperfections in BosonSampling*]{}, Quantum Information and Computation [**15**]{}, 0489 (2015). G. Kalai and G.
Kindler, [*Gaussian Noise Sensitivity and BosonSampling*]{}, arXiv:1409.3093. A. Arkhipov, [*Boson Sampling Is Robust to Small Errors in the Network Matrix*]{}, Phys. Rev. A [**92**]{}, 062326 (2015). P. P. Rohde and T. C. Ralph, [*Error Tolerance of the Boson-Sampling Model for Linear Optics Quantum Computing*]{}, Phys. Rev. A [**85**]{}, 022332 (2012). V. S. Shchesnovich, [*Sufficient Condition for the Mode Mismatch of Single Photons for Scalability of the Boson-Sampling Computer*]{}, Phys. Rev. A [**89**]{}, 022333 (2014). M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, Cambridge, England, 2000). L. J. Stockmeyer, [*On Approximation Algorithms for \#P*]{}, SIAM J. Comput. [**14**]{}, 849 (1985). S. Scheel, [*Permanents in Linear Optical Networks*]{}, arXiv:quant-ph/0406127. S. Toda, [*PP Is as Hard as the Polynomial-Time Hierarchy*]{}, SIAM J. Comput. [**20**]{}, 865 (1991). R. J. Glauber, [*Photon Correlations*]{}, Phys. Rev. Lett. **10**, 84 (1963). E. C. G. Sudarshan, [*Equivalence of Semiclassical and Quantum Mechanical Descriptions of Statistical Light Beams*]{}, Phys. Rev. Lett. **10**, 277 (1963). S. Rahimi-Keshari, T. Kiesel, W. Vogel, S. Grandi, A. Zavatta and M. Bellini, [*Quantum Process Nonclassicality*]{}, Phys. Rev. Lett. [**110**]{}, 160401 (2013). K. Husimi, [*Some Formal Properties of the Density Matrix*]{}, Proc. Phys.-Math. Soc. Japan [**22**]{}, 264 (1940). S. Rahimi-Keshari, A. P. Lund, and T. C. Ralph, [*What Can Quantum Optics Say about Computational Complexity Theory?*]{}, Phys. Rev. Lett. [**114**]{}, 060501 (2015). K. E. Cahill and R. J. Glauber, [*Density Operators and Quasiprobability Distributions*]{}, Phys. Rev. [**177**]{}, 1882 (1969). M. Hillery, R. F. O’Connell, M. O. Scully, and E. P. Wigner, [*Distribution Functions in Physics: Fundamentals*]{}, Phys. Rep. [**106**]{}, 121 (1984). C. Weedbrook, S. Pirandola, R. Garcia-Patron, N. J. Cerf, T. C. Ralph, J. H. 
Shapiro, and S. Lloyd, [*Gaussian Quantum Information*]{}, Rev. Mod. Phys [**84**]{}, 621 (2012). S. Rahimi-Keshari, M. A. Broome, R. Fickler, A. Fedrizzi, T. C. Ralph, and A. G. White, [*Direct Characterization of Linear-Optical Networks*]{}, Opt. Exp. [**21**]{}, 13450 (2013). S. D. Bartlett, B. C. Sanders, S. L. Braunstein, and K. Nemoto, [*Efficient Classical Simulation of Continuous Variable Quantum Information Processes*]{}, Phys. Rev. Lett. [**88**]{}, 097904 (2002). F. Shahandeh and M. R. Bazrafkan, [*The General Boson Ordering Problem and Its Combinatorial Roots*]{}, Phys. Scr. [**T153**]{}, 014056 (2013). S. M. Barnett, L. S. Phillips, and D. T. Pegg, [*Imperfect Photodetection as Projection onto Mixed States*]{}, Opt. Comm. [**158**]{}, 45 (1998). S. Aaronson and D. J. Brod, [*BosonSampling with Lost Photons*]{}, Phys. Rev. A [**93**]{}, 012335 (2016). M. A. Broome, A. Fedrizzi, S. Rahimi-Keshari, J. Dove, S. Aaronson, T. C. Ralph, and A. G. White, [*Photonic Boson Sampling in a Tunable Circuit*]{}, Science [**339**]{}, 794 (2013). J. B. Spring, B. J. Metcalf, P. C. Humphreys, W. S. Kolthammer, X. Jin, M. Barbieri, A. Datta, N. Thomas-Peter, N. K. Langford, D. Kundys, J. C. Gates, B. J. Smith, P. G. R. Smith, and I. A. Walmsley, [*Boson Sampling on a Photonic Chip*]{}, Science [**339**]{}, 798 (2013). M. Tillmann, B. Dakić, R. Heilmann, S. Nolte, A. Szameit, and P. Walther, [*Experimental Boson Sampling*]{}, Nat. Phot. [**7**]{}, 540 (2013). A. Crespi, R. Osellame, R. Ramponi, D. J. Brod, E. F. Galvão, N. Spagnolo, C. Vitelli, E. Maiorino, P. Mataloni, and F. Sciarrino, [*Integrated Multimode Interferometers with Arbitrary Designs for Photonic Boson Sampling*]{}, Nat. Phot. [**7**]{}, 545 (2013). G. Adesso and F. Illuminati, [*Entanglement in Continuous-Variable Systems: Recent Advances and Current Perspectives*]{}, J. Phys. A [**40**]{}, 7821 (2007). H. Pashayan, J. J. Wallman, and S. D. 
Bartlett, [*Estimating Outcome Probabilities of Quantum Circuits Using Quasiprobabilities*]{}, Phys. Rev. Lett. [**115**]{}, 070501 (2015). [^1]: The linear-optical networks considered in the context of boson sampling and within this paper are *passive*; i.e., they contain no active elements that generate photons. [^2]: Throughout, vectors are row vectors. For vectors of complex numbers, e.g., $\bm\alpha,\bm\beta,\bm\xi,\bm\zeta$, the dagger transposes to a column vector and takes a complex conjugate; for the vector of annihilation operators, $\bm{a}$, the dagger transposes and takes the adjoint.
--- abstract: 'Let $L$ be an $n$-component link ($n>1$) with pairwise nonzero linking numbers in a rational homology $3$-sphere $Y$. Assume the link complement $X:=Y\setminus\nu(L)$ has nondegenerate Thurston norm. In this paper, we study when a Thurston norm-minimizing surface $S$ properly embedded in $X$ remains norm-minimizing after Dehn filling all boundary components of $X$ according to ${\partial}S$ and capping off ${\partial}S$ by disks. In particular, for $n=2$ the capped-off surface is norm-minimizing when $[S]$ lies outside of a finite set of rays in $H_2(X,{\partial}X;{\mathbb{R}})$. When $Y$ is an integer homology sphere this gives an upper bound on the number of surgeries on $L$ which may yield $S^1\times S^2$. The main techniques come from Gabai’s proof of the Property R conjecture and related work.' address: 'Princeton University, Princeton, NJ 08540, USA' author: - Maggie Miller title: The effect of link Dehn surgery on the Thurston norm --- Introduction {#sec:intro} ============ The [*Thurston norm*]{}, introduced by Thurston [@thurston], is a pseudonorm on the second homology of a compact $3$-manifold $M$. Specifically, if $M$ is a compact $3$-manifold (with or without boundary), then “Thurston norm” refers to one of two canonical continuous functions $$x_M:H_2(M;{\mathbb{R}})\to{\mathbb{R}},\hspace{.25in}\text{or}\hspace{.25in} x_M:H_2(M,{\partial}M;{\mathbb{R}})\to{\mathbb{R}}.$$ The Thurston norm $x_M$ on $M$ is degenerate if $x_M(\alpha)=0$ for some $\alpha\neq 0$, and nondegenerate otherwise. Unless otherwise specified, we will always take homology to have real coefficients; we are uninterested in torsion classes. In this paper, we study the effect of Dehn surgery on an oriented link $L$ on the Thurston norm on $H_2(Y\setminus\nu(L),{\partial}(Y\setminus\nu(L));{\mathbb{R}})$, where $Y$ is a rational homology $3$-sphere. In particular, we prove the following theorem. 
\[mingenuscor\] Let $L=L_1\sqcup L_2$ be a $2$-component link in a rational homology $3$-sphere $Y$. Assume $\operatorname{{lk}}(L_1,L_2)\neq 0$ and that $X:=Y\setminus\nu(L)$ has nondegenerate Thurston norm $x$. Let $S$ be a norm-minimizing surface in $X$. Take $[S]$ to be primitive and in the cone $C$ on the interior of some face of the unit-norm ball of $x$. Let $\widehat{X}$ be the closed $3$-manifold obtained by Dehn filling each component of ${\partial}X$ according to the slope of $S\cap{\partial}X$. Each component of ${\partial}S$ can be capped off by a disk (in a Dehn-filling solid torus) to create a closed surface $\widehat{S}$ in $\widehat{X}$. If $\widehat{S}$ is not norm-minimizing, then either $g(S)=1$ or the genus $g([S])$ is minimal among all homology classes in $C$. In particular, if $[S']$ is a primitive class in $C$ and $\widehat{S'}$ is also not norm-minimizing, then $g([S])=g([S'])$. We use the convention that in a rational homology sphere $Y$, a Seifert surface for a knot $K$ is a surface $S$ so that $[{\partial}S]$ generates the kernel of $i_*:H_1({\partial}(Y\setminus\nu(K));{\mathbb{Z}})\to H_1(Y\setminus\nu(K);{\mathbb{Z}})$. The slope of ${\partial}S$ (and any multiple) is zero. Let $c$ be the smallest positive integer with $c[K]=0$ in $H_1(Y;{\mathbb{Z}})$. If $K'$ is another knot in $Y\setminus\nu(K)$, we say the linking number of $K$ and $K'$ is $\operatorname{{lk}}(K,K')=\frac{1}{c}\langle S,K'\rangle$. For Theorem \[mingenuscor\] and the remainder of Section \[sec:intro\], a surface is norm-minimizing if it maximizes Euler characteristic among all homologous surfaces and includes no nullhomologous subset of components (in homology with real coefficients). We also require that on each torus component of ${\partial}X$, the boundary components of a norm-minimizing surface have parallel orientations. We develop the language to parse and state Theorem \[mingenuscor\] more precisely and concisely in Section \[sec:thurston\].
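For orientation, we record the standard relation between norm and genus that underlies the phrasing of Theorem \[mingenuscor\]; it follows immediately from $\chi(S)=2-2g(S)-b$ and is not special to our setting. If $S$ is a connected norm-minimizing surface of genus $g(S)$ with $b>0$ boundary components, then
$$x([S])\;=\;\chi^+(S)\;=\;\max\{2g(S)-2+b,\,0\},$$
so for fixed boundary data (and hence fixed $b$), minimizing the norm amounts to minimizing the genus. This is why the conclusions above can be phrased in terms of $g(S)$ and $g([S])$.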
Theorem \[mingenuscor\] can be stated for links with more components as well, but to compare homology classes we require them to have different boundary slopes on each boundary component of $X$. \[mingenuscor2\] Let $L=L_1\sqcup \cdots\sqcup L_n$ be an $n$-component link in a rational homology $3$-sphere $Y$. Assume $\operatorname{{lk}}(L_i,L_j)\neq 0$ for each $i\neq j$. Let $X:=Y\setminus\nu(L)$. Assume $X$ has nondegenerate Thurston norm. Let $S$ be a norm-minimizing surface meeting every component of ${\partial}X$ so that $[S]$ is primitive and is contained in a cone $C$ on the interior of a face of the unit-ball of $x$. Let $\widehat{X}$ be the closed manifold obtained by Dehn-filling each boundary component of $X$ according to the slope of ${\partial}S$. Let $\widehat{S}$ be the closed surface in $\widehat{X}$ obtained from $S$ by capping off each boundary component of $S$ by a disk in a Dehn-filling solid torus. Then at least one of the following is true: - $g(S)=1$, - $\widehat{S}$ is norm-minimizing, - $g(S)\le g(\beta)$ whenever $\beta\in C$. We will prove Theorem \[mingenuscor2\] in Section \[sec:ncomp\]. In Theorem \[mingenuscor2\] (and later observations about link complements), we require a surface to meet every boundary component of $X$ because we are interested in surgering every component of a link. If a surface $S$ in $X=Y\setminus\nu(L)$ does not meet every boundary component of $X$, then we may restrict $L$ to a sublink $L'$ so that $S\subset X':=Y\setminus\nu(L')$ and $S$ does meet every boundary component of $X'$. Then we may apply Theorem \[mingenuscor2\] (or other relevant theorems) to the link $L'$ in $Y$. Let $\mathcal{S}_P$ be the set of norm-minimizing surfaces not meeting some boundary component $P$ of $X$. Work of Sela [@sela] implies that for all but finitely many choices of slope on $P$, filling $P$ and capping off a surface in $\mathcal{S}_P$ will yield a norm-minimizing surface. 
We do not separately describe Sela’s theorem as it requires some more technical discussion, but the reader may refer to [@sela] (expanding on work of Gabai [@ft3m2]) if interested in Dehn fillings on boundary torus components which do [*not*]{} meet some norm-minimizing surface. The primary motivation for Theorem \[mingenuscor\] is to study the total number of norm-minimizing surfaces $S\subset X$ (up to homology) so that $\widehat{S}$ are not norm-minimizing. We state the corresponding result first for $2$-component links and then in generality; this is not strictly necessary but aids in readability. \[2compthm\] Let $L=L_1\sqcup L_2$ be a $2$-component link in a rational homology $3$-sphere $Y$. Assume $\operatorname{{lk}}(L_1,L_2)\neq 0$ and that $X:=Y\setminus\nu(L)$ has nondegenerate Thurston norm. Let $S$ be a norm-minimizing surface in $X$ with $[S]$ primitive. Let $\widehat{X}$ be the closed $3$-manifold obtained from $X$ by Dehn-filling both components $P_i={\partial}(\nu(L_i))$ of ${\partial}X$ according to the slope ${\partial}S\cap P_i$. Let $\widehat{S}$ be the closed surface in $\widehat{X}$ obtained by capping off each component of ${\partial}S$ with a disk in one of the Dehn-filling solid tori. There exists a finite set $E\subset H_2(X,{\partial}X;\mathbb{Z})$ so that if $[S]\not\in E$, then $\widehat{S}$ is norm-minimizing. In the conclusion of Theorem \[2compthm\], we also obtain explicit upper bounds on $|E|$ which depend on $Y$ and $L$. Some bounds will be described in Corollaries \[scholium\], \[cor1\] and \[cor2\]. Theorem \[ncompthm\] is related to the Property R theorem of Gabai [@ft3m3]: surgery on a nontrivial knot in $S^3$ cannot yield $S^1\times S^2$. (We discuss this further in Section \[sec:thurston\].) \[proprcor\] Let $L=L_1\sqcup L_2$ be a $2$-component link in a homology $3$-sphere $Y$. 
Assume $\operatorname{{lk}}(L_1,L_2)\neq 0$ and that any annulus properly embedded in $X:=Y\setminus\nu(L)$ represents $0\in H_2(X,{\partial}X;{\mathbb{R}})$. Then there are at most finitely many surgeries on $L$ which yield $S^1\times S^2$. We give explicit bounds on this finite number in Corollaries \[scholium\], \[cor1\] and \[cor2\]. In Corollary \[proprcor\], we are implicitly using the fact that there are finitely many nontrivial elements $\alpha\in H_2(X;{\mathbb{R}})$ whose norm-minimizing representatives have genus zero. We prove this later in Proposition \[g0prop\]. We prove Theorem \[2compthm\] from Theorem \[mingenuscor\] very quickly in Section \[sec:sutured\]. We give examples of $2$-component links in $S^3$ in which $E$ is nonempty in the setting of Theorem \[2compthm\] in Figures \[fig:example1\] and \[fig:example2\]. (In both cases, we obtained the Thurston norm on $S^3\setminus\nu(L)$ from work of McMullen [@mcmullen], who computed the Thurston norm on complements of nearly all links with at most nine crossings, including these two examples.) ![[**[First:]{}**]{} A $2$-component link $L=L_1\sqcup L_2$ in $S^3$. [**[Second:]{}**]{} A norm-minimizing surface $S_1$ in the homology class of a punctured Seifert surface for $L_1$. [**[Third:]{}**]{} A norm-minimizing surface in the homology class of a punctured surface for $L_2$. [**[Fourth:]{}**]{} The unit ball of the Thurston norm on $S^3\setminus\nu(L)$, in which we indicate $[\pm S_1]$ and $[\pm S_2]$.
The surfaces $\widehat{S_1}$ and $\widehat{S_2}$ are not norm-minimizing.[]{data-label="fig:example1"}](example1 "fig:"){width="100mm"} ![[**[First:]{}**]{} A $2$-component link $L=L_1\sqcup L_2$ in $S^3$. [**[Second:]{}**]{} Let $S_i$ be a surface in the homology class of a punctured Seifert surface for $L_i$. We indicate the boundary slopes of $2[S_1]+[S_2]$. [**[Third:]{}**]{} Surgery on $L_i$ according to these slopes yields $S^1\times S^2$. [**[Fourth:]{}**]{} The unit ball of the Thurston norm on $S^3\setminus\nu(L)$, in which we highlight $2[S_1]+[S_2]$. A norm-minimizing surface $S$ in this homology class is genus-$1$ with three boundary components, but $[\widehat{S}]$ is represented by a $2$-sphere, so the torus $\widehat{S}$ is not norm-minimizing.[]{data-label="fig:example2"}](example2 "fig:"){width="100mm"} \[ncompthm\] Let $L=L_1\sqcup\cdots\sqcup L_n$ be an $n$-component link $(n>1)$ in a rational homology $3$-sphere $Y$. Assume $\operatorname{{lk}}(L_i,L_j)\neq 0$ for $i\neq j$ and that any annulus properly embedded in $X:=Y\setminus\nu(L)$ represents $0\in H_2(X,{\partial}X;{\mathbb{R}})$. Let $S$ be a norm-minimizing surface in $X$ meeting every component of ${\partial}X$. Let $\widehat{X}$ be the closed manifold obtained from $X$ by Dehn filling $X$ according to ${\partial}S$, and let $\widehat{S}\subset\widehat{X}$ be the closed surface obtained from capping off each boundary component of $S$ by a disk within the Dehn-filling solid tori. Let $\widetilde{Y}$ be the $3$-manifold obtained from $Y$ by surgering $Y$ along $L_3\sqcup\cdots\sqcup L_n$ according to ${\partial}S$.
There exists an $(n-2)$-dimensional set of rays $E$ from the origin of $H_2(X,{\partial}X;{\mathbb{R}})\cong{\mathbb{R}}^n$ so that if $[S]\not\in E$, then either $\widehat{S}$ is norm-minimizing or $\widetilde{Y}\setminus\nu(L_1\sqcup L_2)$ has degenerate Thurston norm. In Theorem \[ncompthm\], we may reorder the components of $L$ to apply the theorem, if this causes $\widetilde{Y}\setminus\nu(L_1\sqcup L_2)$ to have nondegenerate Thurston norm. In the statement of Theorem \[ncompthm\], we describe $E$ as a set of rays in $H_2(X,{\partial}X;{\mathbb{R}})$ because we are primarily interested in primitive homology classes. (We remind the reader that a primitive class $\alpha$ is one which is integral and not equal to $c\beta$ for any integral class $\beta$ and integer $c>1$.) If $\widehat{cS}$ is not norm-minimizing, then neither is $\widehat{S}$. Thus, $E$ lies in an $(n-2)$-dimensional subcomplex of $(H_2(X,{\partial}X;{\mathbb{R}})\setminus0)/($positive scalar multiplication) $\cong S^{n-1}$. For example, if $n=3$, then $E$ lies in a graph embedded in $S^2$. We could have alternately phrased the conclusion of Theorem \[ncompthm\] as, “for any $q_1,\ldots, q_{n-2}\in{\mathbb{Q}}\cup\{\pm\infty\}$, there exists a finite set $E\subset({\mathbb{Q}}\cup\{\pm\infty\})^2$ so that if ${\partial}S$ has slope $q_i$ on ${\partial}\overline{\nu(L_i)}$ and $(q_{n-1},q_n)\not\in E$, then either $\widetilde{Y}\setminus\nu(L_1\sqcup L_2)$ has degenerate Thurston norm or $\widehat{S}$ is norm-minimizing.” We believe the given conclusion to be more illuminating, but we are happy for the reader to instead think about surgery slopes. The proofs of Theorems \[mingenuscor\], \[mingenuscor2\], \[2compthm\], and \[ncompthm\] are motivated by Gabai’s constructions of taut foliations and sutured manifolds. In particular, we use many important theorems of Gabai which were essential in the proof of Property R. We give specific references and background in Section \[sec:sutured\].
Note that increasing the number of link components allows us to make the statement of Corollary \[proprcor\] about homology $3$-spheres, even though the Property R theorem about knots is not generally true in homology $3$-spheres. For instance, obtain $Y$ by surgering $S^1\times S^2$ along a curve homologous (but not isotopic) to $S^1\times {\text{pt}}$, and let the knot $L$ be a core of the surgery solid torus. See Figure \[fig:mazur\] for an explicit example. ![[**[Left:]{}**]{} A surgery diagram for the Brieskorn sphere $Y:=\Sigma(2,5,7)$. Zero-framed surgery on the knot $L\subset Y$ yields $S^1\times S^2$, but $L$ does not bound a disk in $Y$. [**[Right:]{}**]{} A norm-minimizing surface in $Y\setminus\nu(L)$.[]{data-label="fig:mazur"}](mazur){width="75mm"} Finally, we wish to remark that assumptions on linking number are generally just to avoid trivialities. Very roughly, the main arguments in the proofs of Theorems \[mingenuscor\], \[mingenuscor2\], \[2compthm\], and \[ncompthm\] will come from understanding the intersections of two nonhomologous minimal surfaces. We take the ambient manifold $Y$ to be a rational homology $3$-sphere and $\operatorname{{lk}}(L_i,L_j)\neq 0$ so that the boundary data of a surface $S$ in $Y\setminus\nu(L)$ determines the homology class of $S$. That is, so that ${\partial}:H_2(Y\setminus\nu(L),{\partial}(Y\setminus\nu(L));{\mathbb{Z}})\to H_1({\partial}(Y\setminus\nu(L));{\mathbb{Z}})$ is injective.
However, we could have stated the conclusion of Theorem $\ref{ncompthm}$ to be “there exists an $E\subset$(an $(n-2)$-dimensional complex in $({\mathbb{Q}}\cup\{\infty\})^n)$ so that if ${\partial}S$ meets every $P_i$ and ${\partial}S=({\partial}S\cap P_1,\ldots,{\partial}S\cap P_n)$ is not in $E$, then $\widehat{S}$ is norm-minimizing or the Thurston norm becomes degenerate after surgery on $(n-2)$ components of $L$.” This holds perfectly well when some (or all) of the pairwise linking numbers $\operatorname{{lk}}(L_i,L_j)$ are zero (if $L$ is split, then divide $L$ into two split sublinks and induct), as then the image of ${\partial}$ is at most $(n-1)$-dimensional, and contains an at most $(n-2)$-dimensional class of primitive elements. As a simple example, consider a $2$-component link $L_1\sqcup L_2$ in $S^3$ with linking number zero, e.g. the Whitehead link. Any norm-minimizing surface $S$ in $S^3\setminus\nu(L)$ meeting ${\partial}(\overline{\nu(L_i)})$ does so in curves of slope $0$, so we may let $E\subset ({\mathbb{Q}}\cup\{\pm\infty\})^2$ be the finite set $\{(0,0)\}$ and be done. We leave the reader with the following natural question. Let $L$ be as in Theorem \[2compthm\] or Theorem \[ncompthm\]. Take the rational homology $3$-sphere $Y$ to actually be the $3$-sphere. Using $Y=S^3$, can we further restrict the set of norm-minimizing surfaces $S$ with the property that $\widehat{S}$ is not norm-minimizing? Organization {#organization .unnumbered} ------------ We break the paper into the following sections. - [**[Section \[sec:thurston\]:]{}**]{} We introduce the Thurston norm and some of its basic properties. - [**[Section \[sec:sutured\]:]{}**]{} We discuss sutured manifolds and foliations. We restate the main theorems and prove Theorem \[2compthm\] from Theorem \[mingenuscor\]. - [**[Section \[sec:graphs\]:]{}**]{} We introduce fat-vertex graphs, which will be used to describe certain intersections of surfaces.
- [**[Section \[sec:proof\]:]{}**]{} We prove Theorem \[mingenuscor\], using tools from Sections \[sec:thurston\]–\[sec:graphs\]. - [**[Section \[sec:ncomp\]:]{}**]{} We prove Theorem \[mingenuscor2\]. We then prove Theorem \[ncompthm\] by inducting on Theorem \[2compthm\]. Acknowledgements {#acknowledgements .unnumbered} ---------------- The author would like to thank her graduate advisor, David Gabai, for suggesting this question in the spring of 2017 and for many helpful conversations over a very long period of time. The author is a fellow in the National Science Foundation Graduate Research Fellowship program, under Grant No. DGE-1656466. The Thurston norm {#sec:thurston} ================= In this section, we review the definition of the Thurston norm and some of its basic properties. The Thurston norm can be defined by its valuation on integral homology classes, and then extended naturally to rational and then all real homology classes. On integral homology classes, the Thurston norm is geometrically motivated. \[def:thurston\] Let $M$ be a compact $3$-manifold. Given an oriented surface $S$ smoothly embedded in $M$, we define $$\chi^+(S)=\begin{cases}\max\{-\chi(S),0\}&S\text{ is connected,}\\\sum_{i=1}^n\chi^+(S_i)&S=\sqcup_{i=1}^n S_i.\end{cases}$$ Now let $\alpha$ be an integral element of $H_2(M;{\mathbb{R}})$ or $H_2(M,{\partial}M;{\mathbb{R}})$, where $M$ is a compact $3$-manifold. Define: $$x_M(\alpha)=\min\{\chi^+(S)\mid S\text{ is a surface embedded in $M$ with $[S]=\alpha$}\}.$$ In words, if $S$ is the surface representing $\alpha$ with least negative Euler characteristic, then $x_M(\alpha)=|\chi(S)|$. However, we do not allow nullhomologous disk or $2$-sphere components to artificially increase Euler characteristic, hence the definition of $\chi^+$. Also, if $S$ happens to have positive Euler characteristic, then we say $x_M(\alpha)=0$ rather than a negative number, as we wish for $x_M$ to extend to a pseudonorm.
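To illustrate the definition of $\chi^+$ with concrete numbers (a toy computation, not from the paper), recall that a connected orientable surface of genus $g$ with $b$ boundary circles has $\chi=2-2g-b$, so $\chi^+=\max\{2g-2+b,0\}$:

```python
def chi_plus(components):
    """chi^+ of a disjoint union of surfaces, where each component is a
    pair (genus, number of boundary circles) of a connected oriented
    surface.  Since chi = 2 - 2g - b, we have chi^+ = max(2g - 2 + b, 0),
    so spheres, disks, tori, and annuli contribute zero."""
    return sum(max(2 * g - 2 + b, 0) for g, b in components)

# A sphere and a torus contribute 0, while a genus-2 surface with one
# boundary circle contributes 2*2 - 2 + 1 = 3.
example = chi_plus([(0, 0), (1, 0), (2, 1)])
```

This makes the role of $\chi^+$ transparent: components with nonnegative Euler characteristic are simply ignored, so they cannot be added to a representative to lower its norm artificially.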
Now let $\beta$ be a rational element of $H_2(M;{\mathbb{R}})$ or $H_2(M,{\partial}M;{\mathbb{R}})$. Say $q\beta$ is an integral homology class, where $q\in{\mathbb{N}}$. Then $$x_M(\beta)=\frac{1}{q}x_M(q\beta).$$ Finally, let $\gamma$ be any element of $H_2(M;{\mathbb{R}})$ or $H_2(M,{\partial}M;{\mathbb{R}})$. Suppose $\gamma$ is approximated by rational homology classes $\beta_1,\beta_2,\ldots$, so $\gamma=\lim_{n\to\infty}\beta_n$. Then $$x_M(\gamma)=\lim_{n\to\infty}x_M(\beta_n).$$ In Definition \[def:thurston\], we implicitly use the fact that any integral second homology class $\alpha$ in a $3$-manifold $M$ can be represented by an oriented surface $S$ properly embedded in $M$. Let $M$ be a compact $3$-manifold. Let $S$ be an oriented surface representing $\alpha\in H_2(M;{\mathbb{R}})$ or $H_2(M,{\partial}M;{\mathbb{R}})$, with $\alpha\neq 0$. We say that $S$ is [*norm-minimizing*]{} if the following conditions hold: - $S$ has no nullhomologous subset of components, - $\chi^+(S)=x_M(\alpha)$, - If $\chi(S)=0$, then any $S'$ with $[S']=[S]$ and $\chi(S')>0$ has a subset of components that is nullhomologous. In words, a surface $S$ is norm-minimizing if it maximizes Euler characteristic among homologous surfaces, discarding any nullhomologous subsets of components (to prevent adding e.g. nullhomologous $2$-sphere components to artificially increase Euler characteristic). Let $M$ be a compact $3$-manifold whose boundary is a collection of tori. Let $S$ be an oriented surface representing an integral $\alpha\in H_2(M,{\partial}M;{\mathbb{R}})$. We say that $S$ is [*properly norm-minimizing*]{} if $S$ is norm-minimizing and on each boundary component $P$ of $M$, all components of ${\partial}S\cap P$ have the same orientation (parallel on $P$). When $M$ has boundary a collection of tori, any norm-minimizing surface $S$ can be transformed into a properly norm-minimizing surface by adding tubes to $S$ which are parallel to torus boundary components of $M$.
See Figure \[fig:proper\]. Each tube is attached to one pair of opposite-orientation boundary components of $S$, and then a neighborhood of the tube is pushed off ${\partial}M$ into the interior of $M$. The resulting surface $S'$ is norm-minimizing (as $\chi(S')=\chi(S)$) but has larger genus than $S$, as $S'$ is boundary-compressible at each tube attached to $S$. ![[**[Left:]{}**]{} A norm-minimizing surface $S$ near a torus boundary-component $P$ of $3$-manifold $X$. The components of ${\partial}S$ on $P$ need not have parallel orientations. [**[Right:]{}**]{} We increase the genus of $S$ (while preserving Euler characteristic) to find a properly norm-minimizing surface $S'$ homologous to $S$.[]{data-label="fig:proper"}](proper){width="75mm"} If $x_M(\alpha)>0$ for all $\alpha\neq 0$, then $x_M$ is a norm rather than a pseudonorm. When $x_M$ is a norm, we say that $x_M$ is nondegenerate. In this paper, we are more interested in $H_2(M,{\partial}M;{\mathbb{R}})$ than we are in $H_2(M;{\mathbb{R}})$. We will implicitly identify $H_2(M,{\partial}M;{\mathbb{R}})$ with ${\mathbb{R}}^n$ for appropriate $n$, where the integral lattice points correspond to integral homology classes. Thurston observed that the Thurston norm is convex and linear on rays through the origin, and that the Thurston norm is symmetric under reflection through the origin. Moreover, Thurston understood the geometry of the norm unit-ball $B_{x_M}$. Let $M$ be a compact $3$-manifold. Assume $x_M$ is nondegenerate. Then the unit ball $B_{x_M}\subset H_2(M,{\partial}M;{\mathbb{R}})={\mathbb{R}}^n$ is a nondegenerate polyhedron defined by linear inequalities with integer coefficients. Assume $x_M$ is nondegenerate. If $[S]$ is a primitive homology class and is [*not*]{} contained in the cone on the interior of a face of $B_{x_M}$, then we will call $[S]$ a [*corner*]{} of the Thurston norm.
If $[S_1],\ldots,[S_n]$ are corners of the Thurston norm and up to positive scalar multiplication all lie in one closed face of ${\partial}B_{x_M}$, then we say that $[S_1],\ldots,[S_k]$ are an [*adjacent set*]{} of corners of the Thurston norm. When $k=2$, we may just say $[S_1]$ and $[S_2]$ are adjacent. Note that when $x_M$ is defined on a space of dimension more than $2$, a homology class being a corner is [*not*]{} the same as a homology class being vertex. For example, when $b_2(M)=3$ then the unit ball of $x_M:H_2(M;{\mathbb{R}})\to{\mathbb{R}}$ is a $3$-dimensional polyhedron. We would say that a primitive homology class which is a multiple of a vertex or edge on that polyhedron is a corner of $x_M$. Finding norm-minimizing surfaces for corners of the Thurston norm allows one to find a norm-minimizing surface for any integral homology class through cut-and-paste surgery. Let $R$ and $T$ be oriented surfaces in a compact $3$-manifold $M$ which intersect transversely. Let $R+T$ denote the surface resulting from [*cut-and-paste*]{} surgery on $R$ and $T$. That is, where $R$ and $T$ intersect in a closed circle, remove a neighborhood of that circle from both $R$ and $T$ and glue in two disjoint annuli consistent with the orientations of $R$ and $T$. Where $R$ and $T$ intersect in an arc, remove a neighborhood of that arc from both $R$ and $T$ and glue in two disjoint disks consistent with the orientations of $R$ and $T$. Call the resulting surface $R+T$. Equivalently, we might define $V$ to be $\overline{\nu(R)}\cup\overline{\nu(T)}$ for small tubular neighborhoods of $R$ and $T$. Smooth $V$ at corners to be smoothly embedded in $M$. Then let $R+T$ denote the positive boundary component of $V$ (where the orientation on $V$ is induced by the orientations on $R$ and $T$). Note $\chi(R+T)=\chi(R)+\chi(T)$. 
\[cutpasteprop\] Let $\alpha_1$ and $\alpha_2$ be adjacent corners of a nondegenerate Thurston norm $x_M$, where $M$ is a $3$-manifold with boundary a disjoint union of tori $\sqcup_{i=1}^n P_i$. Then there exist norm-minimizing surfaces $S_1$ and $S_2$ with $[S_i]=\alpha_i$ so that for any positive integers $a$ and $b$, $aS_1+bS_2$ is properly norm-minimizing. Since $x_M$ is nondegenerate, $\chi^+(R)=-\chi(R)$ for any norm-minimizing surface $R$ in $M$. Let $S_1$ and $S_2$ be properly norm-minimizing surfaces with $[S_i]=\alpha_i$. Isotope the $S_i$ near their boundaries so that ${\partial}S_1$ and ${\partial}S_2$ intersect minimally. This ensures that for each $j$, every boundary component of $aS_1+bS_2$ on $P_j$ has the same orientation. We have $x_M(a[S_1]+b[S_2])=ax_M([S_1])+bx_M([S_2])=a\chi^+(S_1)+b\chi^+(S_2)=-a\chi(S_1)-b\chi(S_2)=-\chi(aS_1+bS_2)$. Then we are done if $aS_1+bS_2$ has no disk, $2$-sphere, torus, or annulus components. The condition on ${\partial}(aS_1+bS_2)$ along with nondegeneracy of $x_M$ ensures that $aS_1+bS_2$ has no disk or annulus components. Suppose $S_1\setminus S_2$ includes a component $C$ which does not meet ${\partial}S_1$ and with $\chi(C)\ge 0$. Then surger $S_2$ along $C$ to obtain $S'_2$ (i.e. $S'_2:=[S_2\setminus(({\partial}C)\times I)]\cup(C\times S^0)$). See Figure \[fig:simplecutpaste\]. The surface $S'_2$ is homologous to $S_2$. Since $\chi(C)\ge 0$, $S'_2$ is norm-minimizing. Set $S_2:=S'_2$ and repeat until $S_1\setminus S_2$ includes no such component $C$. Now any closed component of $aS_1+bS_2$ must include a region homeomorphic to some component of $S_1\setminus S_2$, which must have negative Euler characteristic. Therefore, $aS_1+bS_2$ has no closed sphere or torus components, so $aS_1+bS_2$ is properly norm-minimizing. 
![[**[Left:]{}**]{} $S_1\setminus S_2$ includes a disk $C$. [**[Right:]{}**]{} We surger $S_2$ along $C$ to find another properly norm-minimizing surface $S'_2$. We replace $S_2:=S'_2$ and repeat until every component of $S_1\setminus S_2$ not meeting ${\partial}S_1$ has negative Euler characteristic. Then $aS_1+bS_2$ cannot have any sphere or torus components.[]{data-label="fig:simplecutpaste"}](simplecutpaste "fig:"){width="95mm"} There is a connection between the Thurston norm and foliations and sutured manifolds, which we discuss in Section \[sec:sutured\]. Using these connections, Gabai [@ft3m3] proved the following theorem. \[propr\] Let $S$ be a minimum-genus Seifert surface for a knot $K$ in $S^3$. Let $\widehat{S}$ be the closed surface in $S^3_0(K)$ obtained from $S$ by attaching a disk in the Dehn-surgery solid torus to ${\partial}S$. Then $\widehat{S}$ is norm-minimizing. Note that a minimum-genus Seifert surface for a knot $K$ is a properly norm-minimizing surface representing the generator of $H_2(S^3\setminus\nu(K),{\partial}(S^3\setminus\nu(K));{\mathbb{R}})$. The above theorem has the following important corollary, usually referred to as the Property R Conjecture. If $K\subset S^3$ is a nontrivial knot, then $S^3_0(K)\not\cong S^1\times S^2$. Let $S$ be a minimum-genus Seifert surface for $K$. Since $K$ is nontrivial, $S$ has positive genus. By Theorem \[propr\], $\widehat{S}$ is norm-minimizing. Then no $2$-sphere can represent the generating class $[\widehat{S}]$ of $H_2(S^3_0(K);{\mathbb{Z}})$, so $S^3_0(K)\not\cong S^1\times S^2$. When $L$ is an $n$-component link in a rational homology $3$-sphere $Y$, $H_2(Y\setminus\nu(L),{\partial}(Y\setminus\nu(L));{\mathbb{R}})$ is $n$-dimensional, so we may consider surfaces more general than Seifert surfaces. 
The Thurston norm of a link complement should always be understood to mean the Thurston norm on relative homology. For this discussion, fix $n=2$. One analogue of Theorem \[propr\] is due to Gabai and to J. Rasmussen (independently of each other), but has not appeared in writing. \[fillonethm\] Let $L=L_1\sqcup L_2$ be a $2$-component link in a rational homology $3$-sphere $Y$ with nonzero linking number. Assume $X:=Y\setminus\nu(L)$ has nondegenerate Thurston norm. Let $S$ be a norm-minimizing surface in $X$. Let $N$ denote the result of Dehn-filling $X$ at $P_1={\partial}\overline{\nu(L_1)}$ according to the slope of ${\partial}S\cap P_1$. Let $\widehat{S}$ denote the surface in $N$ obtained from $S$ by capping off each component of ${\partial}S\cap P_1$ by a disk in the Dehn-filling solid torus. Assume $[S]$ is not a corner of the Thurston norm on $X$. Then $\widehat{S}$ is norm-minimizing. Rasmussen’s proof utilizes a connection between the Thurston norm and knot Floer homology. We will present an argument of Gabai in Section \[sec:sutured\] via the theory of sutured manifolds, which is closer in spirit to the content of this paper. In Theorem \[fillonethm\], the condition that $[S]$ is not a corner of the Thurston norm is essential. For example, consider the link $L=L_1\sqcup L_2\subset S^3$ in Figure \[fig:example1\]. We see that if $S$ is the norm-minimizing surface in the third image (i.e. in the homology class of a punctured Seifert surface for $L_2$), then Dehn-filling $P_1$ according to the slope $\infty$ of ${\partial}S\cap P_1$ yields $S^3\setminus\nu(L_2)$, in which $\widehat{S}$ is [*not*]{} norm-minimizing. (In fact, $\widehat{S}$ is compressible.) Theorem \[fillonethm\] is similar in flavor to the following result of Baker and Taylor [@baker]. \[baker\] Let $X$ be a compact, connected, orientable, irreducible $3$-manifold whose boundary is a union of tori $P_1,\ldots, P_n$. Assume $X$ is not a cable space and $X\not\cong T^2\times I$. 
Let $S$ be a norm-minimizing surface in $X$. Let $N$ be the manifold obtained from $X$ by Dehn-filling $P_1$ according to the slope of ${\partial}S\cap P_1$, and let $\widehat{S}\subset N$ be the surface obtained from $S$ by attaching disks to each component of ${\partial}S\cap P_1$ inside the Dehn-filling solid torus. There exists a finite set of slopes $E$ on $P_1$ so that if the slope of ${\partial}S\cap P_1$ is not in $E$, then $\widehat{S}$ is norm-minimizing. Theorem \[baker\] applies to a more general class of $3$-manifolds than link complements in rational homology spheres, but Theorem \[fillonethm\] yields a stronger conclusion in its setting. In the same vein, we consider another reasonable analogue of Theorem \[propr\] in the setting of link complements. Let $L=L_1\sqcup L_2$ be a $2$-component link in a rational homology $3$-sphere $Y$. Assume $\operatorname{{lk}}(L_1,L_2)\neq 0$ and that $X:=Y\setminus\nu(L)$ has nondegenerate Thurston norm. Let $S$ be a properly norm-minimizing surface in $X$. Let $\widehat{X}$ be the closed $3$-manifold obtained from $X$ by Dehn-filling both components $P_i={\partial}\overline{\nu(L_i)}$ of ${\partial}X$ according to the slope of ${\partial}S\cap P_i$. Let $\widehat{S}$ be the closed surface in $\widehat{X}$ obtained by capping off each component of ${\partial}S$ with a disk. There exists a finite set $E\subset H_2(X,{\partial}X;\mathbb{Z})$ so that if $[S]\not\in E$, then $\widehat{S}$ is norm-minimizing. Applying Theorem \[baker\] sequentially to each of $P_1$ and $P_2$ does not imply Theorem \[2compthm\]. When $L=L_1\sqcup\cdots\sqcup L_n$ is an $n$-component link, Theorem \[baker\] is interesting when applied to $1,2,\ldots, n-1$ boundary components of $X$, but cannot constrain the number of elements of $H_2(X,{\partial}X;{\mathbb{R}})$ whose properly norm-minimizing representative $S$ fails to be norm-minimizing when capped off in $Y_{{\partial}S}(L)$ ($Y$ surgered along all $n$ components of $L$). 
We address the $n$-component link case in Theorem \[ncompthm\]. Theorem \[2compthm\] is a consequence of the following theorem. Let $L=L_1\sqcup L_2$ be a $2$-component link in a rational homology $3$-sphere $Y$. Assume $\operatorname{{lk}}(L_1,L_2)\neq 0$ and that $X:=Y\setminus\nu(L)$ has nondegenerate Thurston norm. Let $S$ be a properly norm-minimizing surface in $X$ with $[S]$ primitive and not a corner of the Thurston norm. Let $\widehat{X}$ denote the closed manifold obtained from $X$ by Dehn filling each boundary component of $X$ according to the slope of ${\partial}S\cap{\partial}X$, and let $\widehat{S}\subset \widehat{X}$ be the closed surface obtained from $S$ by capping off each component of ${\partial}S$ by a disk in a Dehn-filling solid torus. If $\widehat{S}$ is [*not*]{} norm-minimizing, then either $g(S)=1$ or $g([S])$ is minimal among all classes in the interior of the same face of the Thurston norm as $[S]$. In particular, if $[S']$ is a primitive class in the interior of the same face as $[S]$ and $\widehat{S'}$ is also not norm-minimizing, then $g([S])=g([S'])$. By the genus $g(\alpha)$ of a homology class $\alpha$, we mean the genus of a properly norm-minimizing surface representing $\alpha$. That is, in an $n$-component link complement, $$g(\alpha)=\frac{1}{2}\left(2+x_M(\alpha)-\sum_{i=1}^n|\langle\alpha, P_i\rangle|\right).$$ \[proofof2compthm\] Assuming Theorem \[mingenuscor\], to deduce Theorem \[2compthm\] we must show only that there are finitely many primitive elements of $H_2(X,{\partial}X;{\mathbb{R}})$ within each face of the Thurston norm on $X$ of any fixed genus. This fact follows from the following proposition. \[g0prop\] Fix an integer $g$. Let $\Sigma_{g}\subset H_2(X,{\partial}X;{\mathbb{R}})$ contain exactly the primitive elements of genus $g$. Then $\Sigma_{g}$ is finite. Let $\operatorname{{lk}}=\operatorname{{lk}}(L_1,L_2)$. 
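As a quick arithmetic sanity check of this genus formula (a sketch with invented inputs; the norm values and pairings below are not computed from any actual link), note that it simply inverts $\chi(S)=2-2g-|{\partial}S|$ for a connected surface:

```python
def genus(x_norm, pairings):
    """Genus of a connected properly norm-minimizing surface in class alpha,
    given x(alpha) and the pairings <alpha, P_i> with the boundary tori;
    sum(|<alpha, P_i>|) counts the boundary circles of the surface."""
    b = sum(abs(p) for p in pairings)  # |dS|
    return (2 + x_norm - b) // 2

# A genus-1 Seifert surface for a knot: x = 2g - 2 + b = 1, one boundary circle.
assert genus(1, [1]) == 1
# A genus-2 surface with three boundary circles split over two tori: x = 5.
assert genus(5, [2, -1]) == 2
```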
Let $\beta\in\Sigma_{g}$ be a class with properly norm-minimizing representative $S$. We claim $|{\partial}S|$ is bounded above uniformly (with bound depending only on $Y$ and $L$). Assuming this, there is a uniform upper bound on $x([S])$ for $[S]\in\Sigma_{g}$. Nondegeneracy of the Thurston norm then implies that $\Sigma_{g}$ is finite. To see that $|{\partial}S|$ is bounded above uniformly, say $[S]=p\alpha_1+q\alpha_2$, where $\alpha_i$ is the homology class of a punctured Seifert surface for $L_i$ and $p,q$ are coprime integers. Say ${\partial}\alpha_i=m_i\gamma_i$ for some primitive curve $\gamma_i$ on $P_i$ with $m_i>0$; that is, $m_i$ is the smallest positive number with $m_i[L_i]=0\in H_1(Y;{\mathbb{Z}})$. Note $m_1\operatorname{{lk}}$ and $m_2\operatorname{{lk}}$ are integers, by the definition of $\operatorname{{lk}}$. Then ${\partial}S$ meets $P_1$ in slope $-(qm_2\operatorname{{lk}})/(pm_1)$ and ${\partial}S$ meets $P_2$ in slope $-(pm_1\operatorname{{lk}})/(qm_2)$. Then ${\partial}S$ has $\gcd(qm_2\operatorname{{lk}},pm_1)$ components on $P_1$ and $\gcd(pm_1\operatorname{{lk}},qm_2)$ components on $P_2$. If $m_1=m_2=1$, then coprimeness of $p$ and $q$ immediately implies $|{\partial}S|\le|\operatorname{{lk}}|+1$. But in general, $$\begin{aligned} |{\partial}S|&=\gcd(pm_1,qm_2\operatorname{{lk}})+\gcd(pm_1\operatorname{{lk}},qm_2)\\ &\le2|\operatorname{{lk}}|\gcd(pm_1,qm_2)\\ &\le2|\operatorname{{lk}}|\gcd(p,m_2)\gcd(q,m_1)\gcd(m_1,m_2)\\ &\le2|\operatorname{{lk}}|m_1^2m_2^2.\end{aligned}$$ This completes the proof of Theorem \[2compthm\]. We prove Theorem \[mingenuscor\] in Section \[sec:proof\]. Theorem \[fillonethm\] can be extended to $n$-component links. \[fillonethm2\] Let $L=L_1\sqcup \cdots\sqcup L_n$ be an $n$-component link in a rational homology $3$-sphere $Y$ with pairwise nonzero linking numbers. Fix an integer $k\in\{1,\ldots,n-1\}$. 
Assume $X:=Y\setminus\nu(L)$ has nondegenerate Thurston norm. Let $P_i={\partial}\overline{\nu(L_i)}$. Let $S$ be a properly norm-minimizing surface in $X$ which meets every boundary component of $X$, and let $N$ denote the result of Dehn-filling $X$ at $P_i$ according to the slope of ${\partial}S\cap P_i$ for $i=1,\ldots, k$. Let $\widehat{S}$ denote the surface in $N$ obtained from $S$ by capping off each component of ${\partial}S\cap P_i$ by a disk in the Dehn-filling solid torus, for $i=1,\ldots, k$. Assume $[S]$ is not a corner of the Thurston norm on $X$. Then $\widehat{S}$ is norm-minimizing. The proof is essentially the same as for Theorem \[fillonethm\]. We sketch both in Section \[sec:sutured\]. Again, Theorem \[fillonethm2\] may be thought of as a version of Theorem \[baker\] restricted to a class of link complements, but yielding a stronger conclusion in that restricted setting. Similarly, Theorems \[mingenuscor\] and \[2compthm\] extend to $n$-component link complements. Let $L=L_1\sqcup\cdots\sqcup L_n$ be an $n$-component link in a rational homology $3$-sphere $Y$. Assume $\operatorname{{lk}}(L_i,L_j)\neq 0$ for each $i\neq j$. Let $X:=Y\setminus\nu(L)$. Assume $X$ has nondegenerate Thurston norm $x$. Let $S$ be a properly norm-minimizing surface meeting every component of ${\partial}X$ so that $[S]$ is primitive and is not a corner of $x$. Let $\widehat{X}$ be the closed manifold obtained by Dehn-filling each boundary component of $X$ according to the slope of ${\partial}S$. Let $\widehat{S}$ be the closed surface in $\widehat{X}$ obtained from $S$ by capping off each boundary component of $S$ by a disk in a Dehn-filling solid torus. Then at least one of the following is true: - $g(S)=1$, - $\widehat{S}$ is norm-minimizing, - $g([S])\le g(\beta)$ whenever $\beta\in C$, where $C$ denotes the cone on the interior of the face of the unit ball of $x$ containing $[S]$. Theorem \[mingenuscor2\] is a generalization of Theorem \[mingenuscor\]. 
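Returning briefly to the boundary-count estimate in the proof of Proposition \[g0prop\] above: the chain of gcd inequalities there is elementary number theory and can be checked numerically. The sketch below (sample ranges chosen arbitrarily) verifies both the general bound $|{\partial}S|\le 2|\operatorname{lk}|m_1^2m_2^2$ and the sharper bound $|{\partial}S|\le|\operatorname{lk}|+1$ when $m_1=m_2=1$, over coprime pairs $(p,q)$:

```python
from math import gcd

def boundary_components(p, q, m1, m2, lk):
    """|dS| for [S] = p*alpha_1 + q*alpha_2: the number of circles of dS
    on P_1 plus the number on P_2, as computed in the proof."""
    return gcd(p * m1, q * m2 * lk) + gcd(p * m1 * lk, q * m2)

coprime_pairs = [(p, q) for p in range(1, 20) for q in range(1, 20) if gcd(p, q) == 1]
for lk in (1, 2, 3, 6):
    for m1, m2 in ((1, 1), (2, 3), (3, 2)):
        for p, q in coprime_pairs:
            assert boundary_components(p, q, m1, m2, lk) <= 2 * lk * m1**2 * m2**2
    # The sharper bound when m1 = m2 = 1.
    for p, q in coprime_pairs:
        assert boundary_components(p, q, 1, 1, lk) <= lk + 1
```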
Note that when $n=2$, if $\alpha,\beta\in H_2(X,{\partial}X;{\mathbb{R}})$ are nonzero classes which are not scalar multiples of one another, then ${\partial}\alpha,{\partial}\beta$ have different slopes on each boundary component of $X$. The proof of Theorem \[mingenuscor2\] is exactly the same as the proof of Theorem \[mingenuscor\], but we first prove Theorem \[mingenuscor\] separately as the statement and notation are simpler. We prove Theorem \[mingenuscor2\] in Section \[sec:ncomp\]. Let $L=L_1\sqcup\cdots\sqcup L_n$ be an $n$-component link $(n>1)$ in a rational homology $3$-sphere $Y$ with pairwise nonzero linking numbers. Assume $X:=Y\setminus\nu(L)$ has nondegenerate Thurston norm. Let $S$ be a properly norm-minimizing surface in $X$ meeting every component of ${\partial}X$. Let $\widehat{X}$ be the closed manifold obtained from $X$ by Dehn filling $X$ according to ${\partial}S$, and let $\widehat{S}\subset\widehat{X}$ be the closed surface obtained from capping off each boundary component of $S$ within the Dehn-filling solid tori. Let $\widetilde{Y}$ be the $3$-manifold obtained from $Y$ by surgering $Y$ along $L_3\sqcup\cdots\sqcup L_n$ according to ${\partial}S$. There exists an $(n-2)$-dimensional set of rays $E$ from the origin of $H_2(X,{\partial}X;{\mathbb{R}})\cong{\mathbb{R}}^n$ so that if $[S]\not\in E$, then either $\widehat{S}$ is norm-minimizing or $\widetilde{Y}\setminus\nu(L_1\sqcup L_2)$ has degenerate Thurston norm. In Section \[sec:ncomp\], we will prove Theorem \[ncompthm\] by induction. Theorem \[2compthm\] will be the base case of this induction. Sutured manifolds and foliations {#sec:sutured} ================================ In this section we give some necessary background on sutured manifolds, introduced by Dave Gabai and explored in many papers including [@ft3m1], [@ft3m2], and [@ft3m3]. We also give some background on foliations; some excellent further resources are the texts by Calegari [@calegari] and Novikov [@novikov]. 
In the author’s mind, the languages of sutured manifolds and of foliations are often interchangeable. The main arguments of this paper could likely be rewritten completely in terms of foliations while avoiding sutured manifolds entirely or vice versa, but would be much more cumbersome. Definitions and important theorems ---------------------------------- A sutured manifold $(M,\gamma)$ is a compact, oriented $3$-manifold $M$ together with a set $\gamma$ of pairwise disjoint annuli $A(\gamma)$ and tori $T(\gamma)$ in ${\partial}M$. We may abuse notation and view $\gamma$ as a subset of ${\partial}M$. One oriented core of each element of $A(\gamma)$ is called a [*suture*]{}. The set of all such sutures is denoted by $s(\gamma)$. Every component of $R(\gamma):={\partial}M\setminus{\mathring}{\gamma}$ is oriented. These orientations are taken to be coherent with respect to $s(\gamma)$, so if a component $R$ of $R(\gamma)$ has a boundary component in some $A\in A(\gamma)$, then the induced orientation on ${\partial}R$ must agree with the orientation of the suture in $A$. Let $R_+(\gamma)\subset R(\gamma)$ include all components whose normal vectors point out of $M$. Let $R_-(\gamma)=R(\gamma)\setminus R_+(\gamma)$ include all components of $R(\gamma)$ whose normal vectors point into $M$. We will be primarily interested in the case that $(M,\gamma)$ is the [*complementary sutured manifold*]{} to a surface. Let $S$ be an oriented surface properly embedded in a compact $3$-manifold $X$ whose boundary is a (possibly empty) union of tori. Let $V=S\times I$, where the $I$ direction is chosen so that for each boundary component $C$ of $S$, $C\times I\subset V$ is contained in ${\partial}X$. Orient the $I$ direction of $S\times I$ so that the normal vector to $S\times 1$ points into $V$ and the normal vector to $S\times0$ points out of $V$. 
The [*complementary sutured manifold*]{} $(M,\gamma)$ to $S$ is defined by $M=\overline{X\setminus V}$, and the elements of $\gamma$ are the components of $\overline{({\partial}X)\setminus V}$. The orientations of each suture are chosen so that $R_+(\gamma)=S\times 1$ and $R_-(\gamma)=S\times0$ (hence our choice of orientation of the $I$ direction of $S\times I$). Note $|A(\gamma)|=|{\partial}S|$. Moreover, if ${\partial}S$ meets every component of ${\partial}X$, then $T(\gamma)=\emptyset$. See Figure \[fig:complementarymanifold\] for an example of a complementary sutured manifold to a Seifert surface in a knot complement. ![[**[Left:]{}**]{} a Seifert surface $S$ in $S^3\setminus\nu(K)$. [**[Right:]{}**]{} The complementary sutured manifold $(M,\gamma)$. In the picture we draw a genus-$4$ surface ${\partial}M$ in ${\mathbb{R}}^3$. The manifold $M$ is the unbounded region in ${\mathbb{R}}^3$, compactified by ${\mathbb{R}}^3\subset S^3$. The elements of $A(\gamma)$ are drawn as thin black annuli, with orientations of the sutures indicated by arrows. Note that the normal vector to $R_+(\gamma)$ points out of $M$, while the normal vector to $R_-(\gamma)$ points into $M$.[]{data-label="fig:complementarymanifold"}](complementarymanifold "fig:"){width="80mm"} To relate sutured manifolds to the Thurston norm, we need the following key definition. \[tautdef\] A sutured manifold $(M,\gamma)$ is [*taut*]{} if $M$ is irreducible and each of $R_+(\gamma), R_-(\gamma)$ is norm-minimizing in $H_2(M,\gamma;{\mathbb{R}})$. \[tautremark\] From Definition \[tautdef\], if $S$ is a surface properly embedded in a compact irreducible manifold $X$ with ${\partial}X$ a collection of tori, then the complementary sutured manifold $(M,\gamma)$ to $S$ is taut if and only if $S$ is norm-minimizing in $H_2(X,{\partial}X;{\mathbb{R}})$. Thus, to show a surface is norm-minimizing, it is sufficient to show that a certain sutured manifold is taut. 
We can show tautness using foliations. A codimension-1 oriented [*foliation*]{} ${\mathcal{F}}$ of a compact $3$-manifold $X$ is a collection of oriented (perhaps noncompact) connected surfaces $\{F_\alpha\}_{\alpha\in\Lambda}$ called the [*leaves*]{} of ${\mathcal{F}}$ so that: - $X=\cup_\Lambda F_\alpha$ where $F_\alpha$ is smoothly embedded in $X$ and the leaves $F_\alpha,F_\beta$ are disjoint when $\alpha\neq\beta$, - Every point in the interior of $X$ has a neighborhood $U$ and a coordinate map $(x_1,x_2,x_3):U\to {\mathbb{R}}^3$ so that the components of $F_\alpha\cap U$ are defined by $x_3=c$ for a constant $c$, all oriented so the positive normal vector points in the direction of $(0,0,1)$, - If a component $T$ of ${\partial}X$ is not a leaf of ${\mathcal{F}}$, then every point in $T$ has a neighborhood $U$ and a coordinate map $(x_1,x_2,x_3):U\to \{(x,y,z)\mid x\ge 0\}\subset{\mathbb{R}}^3$ so that the components of $F_\alpha\cap U$ are defined by $x_3=c$ for a constant $c$, all oriented so the positive normal vector points in the direction of $(0,0,1)$. Note that a leaf of a foliation ${\mathcal{F}}$ of a compact $3$-manifold $X$ meets ${\partial}X$ only transversely or is an entire boundary component of $X$. When adapting foliations to the setting of sutured manifolds, this condition changes. 
A codimension-1 oriented [*foliation*]{} ${\mathcal{F}}$ of sutured manifold $(M,\gamma)$ is a collection of oriented (perhaps noncompact) connected surfaces $\{F_\alpha\}_{\alpha\in\Lambda}$ called the [*leaves*]{} of ${\mathcal{F}}$ so that - $M=\cup_\Lambda F_\alpha$ where $F_\alpha$ is smoothly embedded in $M$ and the leaves $F_\alpha,F_\beta$ are disjoint when $\alpha\neq\beta$, - Every point in $M$ has a neighborhood $U$ and a coordinate map $(x_1,x_2,x_3):U\to {\mathbb{R}}^3$ so that the components of $F_\alpha\cap U$ are defined by $x_3=c$ for a constant $c$, all oriented so the positive normal vector points in the direction of $(0,0,1)$, - Every point in the interior of $A(\gamma)$ or a component of $T(\gamma)$ which is not a leaf of ${\mathcal{F}}$ has a neighborhood $U$ and a coordinate map $(x_1,x_2,x_3):U\to \{(x,y,z)\mid x\ge 0\}\subset{\mathbb{R}}^3$ so that the components of $F_\alpha\cap U$ are defined by $x_3=c$ for a constant $c$, all oriented so the positive normal vector points in the direction of $(0,0,1)$, - Each component of $R(\gamma)$ is a leaf of ${\mathcal{F}}$ (with the same orientation as $R(\gamma)$). Thus, the leaves of a foliation ${\mathcal{F}}$ on a sutured $3$-manifold meet $A(\gamma)$ and perhaps some elements of $T(\gamma)$ transversely. This part of ${\partial}M$ is often called the [*vertical*]{} boundary of $M$, as we imagine the leaves of ${\mathcal{F}}$ as being horizontal. In contrast, $R_+(\gamma)$ and $R_-(\gamma)$ (and perhaps some elements of $T(\gamma)$) are leaves of ${\mathcal{F}}$. This part of ${\partial}M$ is often called the [*horizontal*]{} boundary of $M$. Foliations have been studied in more general settings; see e.g. [@lawson] for more exposition (in non-sutured manifolds). From now on we will always use the term foliation to refer to a codimension-1 oriented foliation of a compact $3$-manifold or sutured manifold – the context will always be clear. 
\[tautfoldef\] A foliation ${\mathcal{F}}$ on a compact $3$-manifold $M$ or a sutured manifold $(M,\gamma)$ is said to be [*taut*]{} if there exists a circle or properly embedded path $\eta$ in $M$ which intersects every leaf of ${\mathcal{F}}$ and always intersects leaves of ${\mathcal{F}}$ transversely. Definition \[tautfoldef\] allows one to relate foliations and the Thurston norm. \[studyfoliation\] Let ${\mathcal{F}}$ be a taut foliation on a compact $3$-manifold $M$. Suppose $L$ is a leaf of ${\mathcal{F}}$ which is a compact surface. Then $L$ is norm-minimizing in the class $[L]\in H_2(M,{\partial}M;{\mathbb{R}})$. Gabai proved the converse to Theorem \[studyfoliation\] in the following setting. \[thm5.5\]\[havefoliation1\] Let $M$ be a compact connected irreducible oriented $3$-manifold whose boundary is a (possibly empty) union of tori. Let $S$ be a properly norm-minimizing surface representing a nontrivial class of $H_2(M,{\partial}M;{\mathbb{R}})$. Then $S$ is a leaf of a taut foliation ${\mathcal{F}}$ of $M$ which has no Reeb components in ${\partial}M$. We did not previously define Reeb components. For our purposes, we can use the following nonstandard definition. Let $M$ be a compact connected irreducible oriented $3$-manifold whose boundary is a (possibly empty) union of tori $P_1\sqcup\cdots\sqcup P_n$. Let $S$ be a properly norm-minimizing surface in $M$. Suppose $S$ is a leaf of a foliation ${\mathcal{F}}$. We say ${\mathcal{F}}$ has no Reeb components in $P_i$ if any curve on $P_i$ not of the same slope as ${\partial}S\cap P_i$ can be perturbed to be transverse to ${\mathcal{F}}|_{P_i}$. If ${\partial}M=\emptyset$ or if ${\mathcal{F}}$ has no Reeb components in any $P_i$ for $i=1,\ldots, n$, then we say ${\mathcal{F}}$ has no Reeb components in ${\partial}M$. The general strategy in the proof of Theorem \[2compthm\] is to construct a taut foliation ${\mathcal{G}}$ on $\widehat{X}$ achieving $\widehat{S}$ as a compact leaf. 
We may then conclude by Theorem \[studyfoliation\] that $\widehat{S}$ is norm-minimizing. To construct this taut foliation, we begin with a taut foliation ${\mathcal{F}}=\{F_\alpha\}$ on $X$ achieving $S$ as a leaf. The existence of ${\mathcal{F}}$ is ensured by Theorem \[havefoliation1\]. (Note here that we actually need $X=Y\setminus\nu(L)$ to be irreducible to apply Theorem \[havefoliation1\] – but if $X$ decomposes as a connect-sum, then we will apply Theorem \[havefoliation1\] to a prime summand of $X$.) If it happens to be the case that each $F_\alpha|_{{\partial}X}$ consists of circles, then we may Dehn-fill ${\partial}X$ and cap off each component of $F_\alpha|_{{\partial}X}$ with a disk to construct a taut foliation on $\widehat{X}$. However, in general we cannot hope that $F_\alpha|_{{\partial}X}$ is compact, so we must first attempt to alter ${\mathcal{F}}$. In Section \[sec:operations\], we describe tools introduced by Gabai [@suspensions] to change a taut foliation ${\mathcal{F}}$. Sutured manifold and foliation operations {#sec:operations} ----------------------------------------- The most basic operations one can perform on a sutured manifold $(M,\gamma)$ (and the only ones needed in this paper) are [*product-disk decomposition*]{} and [*product-annulus decomposition*]{}. Let $(M,\gamma)$ be a sutured manifold. A [*product disk*]{} in $(M,\gamma)$ is a disk $D$ properly embedded in $M$ so that for some coordinates $D=I\times I$, we have $I\times{\partial}I\subset\gamma$, $1\times I\subset R_+(\gamma)$, and $0\times I\subset R_-(\gamma)$. A [*product annulus*]{} in $(M,\gamma)$ is an annulus $A$ properly embedded in $M$ so that ${\partial}A$ has two components ${\partial}_+ A\subset R_+(\gamma)$ and ${\partial}_- A\subset R_-(\gamma)$. We draw a schematic of a product disk in Figure \[fig:productdisk\]. ![[**[Left:]{}**]{} a product disk $D$ in a sutured manifold $(M,\gamma)$. 
[**[Middle:]{}**]{} part of a taut foliation ${\mathcal{F}}$ on $(M,\gamma)$. [**[Right:]{}**]{} Up to perturbation of $D$, ${\mathcal{F}}|_D$ is a product foliation.[]{data-label="fig:productdisk"}](productdisk){width="100mm"} \[transverseremark\] Let ${\mathcal{F}}$ be a taut foliation on a sutured manifold $(M,\gamma)$ containing a product disk or annulus $\Delta$. By work of Thurston [@thurston] and Roussarie [@roussarie], $\Delta$ can be perturbed so that ${\mathring}{\Delta}$ intersects the leaves of ${\mathcal{F}}$ transversely. Given a sutured manifold with taut foliation ${\mathcal{F}}$, we will always assume that product disks and annuli are transverse to ${\mathcal{F}}$. Let $\Delta$ be a product disk or annulus inside a sutured manifold $(M,\gamma)$. Let $V=\Delta\times I$, where the $I$ direction is chosen so that $({\partial}\Delta)\times I\subset {\partial}M$. [*Sutured manifold decomposition*]{} (or more simply, [*decomposition*]{}) of $(M,\gamma)$ along $\Delta$ yields the sutured manifold $(M',\gamma')$, where $M'=\overline{M\setminus V}$. If $\Delta$ is a disk, then $\gamma'$ is obtained from $\gamma$ by surgering the element(s) of $A(\gamma)$ which meet $\Delta$ along $\Delta$ (see Figure \[fig:diskdecomp\] or Figure \[fig:diskdecomplemma\]). If $\Delta$ is an annulus, then $\gamma'$ consists of $\gamma$ together with the two annuli $\Delta\times 0$ and $\Delta\times 1$. The sutures in these annuli are oriented consistently with the orientations on $R(\gamma)$. We write $(M,\gamma)\xrightarrow\Delta(M',\gamma')$, or say $(M',\gamma')$ is the result of (product) disk/annulus decomposition of $(M,\gamma)$ along $\Delta$. We draw an example of product disk decomposition in Figure \[fig:diskdecomp\]. ![[**[Left:]{}**]{} a sutured manifold $(M,\gamma)$. In the picture we draw a genus-$4$ surface ${\partial}M$ in ${\mathbb{R}}^3$. 
The manifold $M$ is the unbounded region in ${\mathbb{R}}^3$, compactified by ${\mathbb{R}}^3\subset S^3$. The elements of $A(\gamma)$ are drawn as thin black annuli, with orientations of the sutures indicated by arrows. The sutured manifold $(M,\gamma)$ is the complementary manifold to a Seifert surface $S$ in $S^3$, as shown in Figure \[fig:complementarymanifold\]. We shade four product disks in $(M,\gamma)$. [**[Right:]{}**]{} We decompose $(M,\gamma)$ along each of the four product disks. The result is a sutured manifold $(M',\gamma')$ with $M'\cong B^3$ and $\gamma'$ a single annulus. This sutured manifold is taut, so we conclude by Lemma \[tautdecomp\] that $(M,\gamma)$ is taut. This implies that $S$ is a norm-minimizing surface.[]{data-label="fig:diskdecomp"}](diskdecomp){width="90mm"} \[tautdecompfol\] Let ${\mathcal{F}}$ be a taut foliation on sutured manifold $(M,\gamma)$. Let $\Delta$ be a product disk or annulus in $(M,\gamma)$. By Remark \[transverseremark\], ${\mathcal{F}}':={\mathcal{F}}|_{M'}$ is a foliation on $(M',\gamma')$, where $(M,\gamma)\xrightarrow{\Delta}(M',\gamma')$. In fact, ${\mathcal{F}}'$ is taut. (If $M'=M'_1\sqcup M'_2$ is disconnected, then we mean ${\mathcal{F}}'|_{M'_1}$ and ${\mathcal{F}}'|_{M'_2}$ are both taut.) \[tautdecomp\] Let $(M,\gamma)\xrightarrow{\Delta}(M',\gamma')$ be a product disk or annulus decomposition. Then $(M,\gamma)$ is taut if and only if $(M',\gamma')$ is taut. By Remark \[transverseremark\], if $(M,\gamma)$ admits a taut foliation and $(M,\gamma)\xrightarrow{\Delta}(M',\gamma')$ is a product disk or annulus decomposition, then $(M',\gamma')$ admits a taut foliation. From Lemma \[tautdecomp\], we obtain the key fact in the proof of Theorems \[fillonethm\] and \[fillonethm2\]. \[lemmadiskdecomp\] Let $(M,\gamma)$ be a sutured manifold. Suppose a product disk $D$ connects two distinct elements $A_1$ and $A_2$ of $A(\gamma)$. 
Let $(M_1,\gamma_1)$ be the sutured manifold obtained from $(M,\gamma)$ by surgering $M$ at $A_1$ (i.e. attaching a $3$-dimensional $2$-handle to $M$ along $A_1$) and then forgetting the suture $A_1$ (i.e. $M_1=M\cup_{A_1}E\times I$ for a disk $E$ with boundary the core of $A_1$, $\gamma_1=\gamma\setminus A_1$). Let $(M_2,\gamma_2)$ be the sutured manifold obtained from $(M,\gamma)$ by product-disk decomposition at $D$. Then $(M_1,\gamma_1)\cong (M_2,\gamma_2)$. Let $B\subset M_1$ be given by $B:=\overline{(E\times I)\cup (I\times D)}$, so $B$ is a $3$-ball meeting $A_2$ in a disk. Deform $\nu({\partial}M_1)$ near $B$, sweeping this disk through $B$. This realizes a map $M_1\to M_2,\gamma_1\to\gamma_2$. See Figure \[fig:diskdecomplemma\]. ![[**[Top Middle:]{}**]{} Two distinct elements $A_1, A_2$ of $A(\gamma)$ in a sutured manifold $(M,\gamma)$. We indicate a product disk $D$ connecting $A_1$ and $A_2$. We draw a disk $E$ with boundary the core of $A_1$ to indicate the core of a $2$-handle $E\times I$ that we might attach to $M$. [**[Top Left:]{}**]{} The sutured manifold $(M_1,\gamma_1)$, where $M_1=M\cup(E\times I)$ and $\gamma_1=\gamma\setminus A_1$. [**[Top Right:]{}**]{} The sutured manifold $(M_2,\gamma_2)$, which is obtained from $(M,\gamma)$ by product disk decomposition along $D$. [**[Bottom:]{}**]{} The sutured manifolds $(M_1,\gamma_1)$ and $(M_2,\gamma_2)$ are homeomorphic.[]{data-label="fig:diskdecomplemma"}](diskdecomplemma){width="100mm"} We are now able to prove Theorems \[fillonethm\] and \[fillonethm2\] via an argument of Gabai. We will first prove Theorem \[fillonethm\]. Let $P_2={\partial}\overline{\nu(L_2)}$. Suppose $S'$ is a properly norm-minimizing surface with $[S]=[S']$. Then $g(S)\le g(S')$ and $|{\partial}S\cap P_2|\ge |{\partial}S'\cap P_2|$, so $\chi(\widehat{S})\ge\chi(\widehat{S'})$.
Therefore, it would be sufficient to prove the claim for properly norm-minimizing surfaces, so assume $S$ is properly norm-minimizing. Since $[S]$ is not a corner of the Thurston norm, we have $c[S]=a[R]+b[T]$ for some positive integers $a,b$, and $c$ and adjacent corners $[R]$ and $[T]$. Take $R$ and $T$ to be properly norm-minimizing surfaces so that $aR+bT$ is a properly norm-minimizing surface, using Proposition \[cutpasteprop\]. Then $\chi(\widehat{aR+bT})=c\chi(\widehat{S})$. If $\widehat{aR+bT}$ is norm-minimizing, so is $\widehat{S}$, so to prove the claim we may assume that $S=aR+bT$. Let $(M,\gamma)$ be the complementary sutured manifold to $S$ in $X$. Since $S$ is norm-minimizing, $(M,\gamma)$ is taut. For each intersection of $aR$ with $bT$, we obtain a product disk in $(M,\gamma)$. See Figure \[fig:cutandpaste\]. [![[**[Left:]{}**]{} An arc of intersection between two surfaces $R$ and $T$. [**[Middle:]{}**]{} The cut-and-paste surface $S=R+T$. [**[Right:]{}**]{} A product disk in the complementary sutured manifold to $S$.[]{data-label="fig:cutandpaste"}](cutandpaste "fig:"){width="100mm"}]{} Since $\operatorname{{lk}}:=\operatorname{{lk}}(L_1,L_2)\neq 0$, the map ${\partial}:H_2(X,{\partial}X;{\mathbb{R}})\to H_1({\partial}X;{\mathbb{R}})$ is an injection. Thus ${\partial}R$ and ${\partial}T$ have nonmultiple slopes on $P_1$, so ${\partial}R\cap{\partial}T\neq\emptyset$. Then $R$ and $T$ must have some arcs of intersection. Take ${\partial}R$ and ${\partial}T$ to intersect minimally so that any arc of intersection between $R$ and $T$ meets both $P_1$ and $P_2$. Then for each element $A$ of $A(\gamma)$ in $P_1$, there is a product disk connecting $A$ to an element of $A(\gamma)$ in $P_2$. Let $(M',\gamma')$ be the complementary sutured manifold to $\widehat{S}$ in $N$.
Then $M'$ is obtained from $M$ by attaching $3$-dimensional $2$-handles to each element of $A(\gamma)$ in $P_1$, and $\gamma'$ is obtained from $\gamma$ by forgetting those elements of $A(\gamma)$. By Lemma \[lemmadiskdecomp\], $(M',\gamma')$ is the result (up to homeomorphism) of a sequence of product-disk decompositions of $(M,\gamma)$. Therefore, tautness of $(M,\gamma)$ implies that $(M',\gamma')$ is taut, so $\widehat{S}$ is norm-minimizing. This concludes the proof of Theorem \[fillonethm\]. Now we prove Theorem \[fillonethm2\]. It is again sufficient to prove the claim when $S$ is properly norm-minimizing. Since $[S]$ is not a corner of the Thurston norm on $X$, we can write $c[S]=a[R]+b[T]$ for positive integers $a,b,c$, where $R$ and $T$ are properly norm-minimizing surfaces and $[R], [T]$ are adjacent corners. There are now many choices of $[R]$ and $[T]$. Again, it is sufficient to consider $S=aR+bT$ (that is, $c=1$). For appropriate choice of $[R]$ and $[T]$, ${\partial}R$ and ${\partial}T$ both meet each boundary component of $X$ and do so in distinct slopes. Here we are using the pairwise nonzero linking numbers of $L_1,\ldots, L_n$ – this means that for fixed $q\in{\mathbb{Q}}$, $j\in\{1,\ldots, n\}$, there is an $(n-2)$-dimensional set of rays in $H_2(X,{\partial}X;{\mathbb{R}})$ with boundary slope $q$ on $P_j$. (There is an $(n-3)$-dimensional set of rays in $H_2(X,{\partial}X;{\mathbb{R}})$ with no boundary on $P_i$, because there are $(n-1)$ dimensions of rays in $H_2(X,{\partial}X;{\mathbb{R}})$ altogether and the condition on $P_i$ gives two $1$-dimensional conditions: as a sum of homology classes of punctured Seifert surfaces for the $L_j$’s, there cannot be any for $L_i$, and there is a linear equation on the other summands in terms of the linking numbers that must be satisfied.) Let $\{A_1,\ldots, A_m\}$ be the elements of $A(\gamma)$ in $P_1\sqcup\cdots\sqcup P_k$.
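The dimension count above can be made concrete in coordinates. The following illustrative computation is ours, not from the source; it assumes the standard meridian–longitude conventions, and in a rational homology sphere the coefficients below are rational multiples, but the count is unchanged. Write $\ell_{ij}:=\operatorname{{lk}}(L_i,L_j)$ and express a class as $\beta=\sum_j a_j\alpha_j$, where $\alpha_j$ is the class of a punctured Seifert surface for $L_j$. On $P_i$, with $\mu_i,\lambda_i$ the meridian and Seifert-framed longitude, $${\partial}\beta\big|_{P_i}=a_i\,\lambda_i+\Big(\sum_{j\neq i}a_j\,\ell_{ij}\Big)\mu_i.$$ Requiring a fixed boundary slope $q$ on $P_j$ imposes one linear condition on $(a_1,\ldots,a_n)$, cutting the $(n-1)$-dimensional set of rays down to an $(n-2)$-dimensional set; requiring no boundary on $P_i$ imposes the two conditions $a_i=0$ and $\sum_{j\neq i}a_j\ell_{ij}=0$, leaving an $(n-3)$-dimensional set.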
Then for each $j$, there is a product disk corresponding to an intersection of $aR$ with $bT$ meeting $A_j$. By concatenating product disks, we can find disjoint product disks $D_1,\ldots, D_m$ so that $D_i$ meets $A_i$ and an element of $A(\gamma)$ in $P_n$. Let $(M',\gamma')$ be the complementary sutured manifold to $\widehat{S}$ in $N$. Then $M'$ is obtained from $M$ by attaching $3$-dimensional $2$-handles to each element of $A(\gamma)$ in $P_1\sqcup\cdots\sqcup P_{k}$, and $\gamma'$ is obtained from $\gamma$ by forgetting those elements of $A(\gamma)$. By Lemma \[lemmadiskdecomp\], $(M',\gamma')$ is the result (up to homeomorphism) of a sequence of product-disk decompositions of $(M,\gamma)$. Therefore, tautness of $(M,\gamma)$ implies that $(M',\gamma')$ is taut, so $\widehat{S}$ is norm-minimizing. As well as modifying sutured manifolds, one can use product disks and annuli to modify foliations. In particular, if ${\mathcal{F}}$ is a taut foliation on $M$, then we are interested in ${\mathcal{F}}|_{{\partial}M}$. Let $M$ be a compact $3$-manifold with some torus boundary component $P$. Let ${\mathcal{F}}$ be a foliation on $M$ with no Reeb components on $P$. Suppose ${\mathcal{F}}|_P$ includes two circles $C_1, C_2$ cobounding an annulus $A\subset P$. Choose coordinates $A=I\times S^1=I\times[0,2\pi]/\sim$. Because ${\mathcal{F}}|_P$ does not have a Reeb component in $P$, there is some boundary-preserving automorphism $f:I\to I$ so that $\lim_{t\to 0^+}x\times t$ and $\lim_{t\to 2\pi^-}f(x)\times t$ are contained in the same leaf of ${\mathcal{F}}$ – i.e. ${\mathcal{F}}|_A$ is induced by the mapping torus structure $A=I\times[0,2\pi]/((x,0)\sim(f(x),2\pi))$. We say that ${\mathcal{F}}|_A$ is a [*suspension*]{} of $f$. We summarize a few operations introduced by Gabai [@suspensions] which we use in the proof of Theorem \[2compthm\]. We skip several interesting operations which we will not make use of, and state others in less than full generality.
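For a concrete picture of a suspension (an example we supply, not from the source), take $f(x)=x^2$ on $I=[0,1]$, a boundary-preserving automorphism with no interior fixed points. The leaf of the suspension foliation through $(x,0)$ meets $I\times\{0\}$ in the full $f$-orbit of $x$: $$\big(I\times\{0\}\big)\cap\big(\text{leaf through }(x,0)\big)=\{x^{2^n}:n\in{\mathbb{Z}}\}.$$ The circles $\{0\}\times S^1$ and $\{1\}\times S^1$ are compact leaves, every other leaf spirals from one toward the other, and the resulting foliation of the annulus has no Reeb component.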
Suppose $L$ is a leaf of a taut foliation ${\mathcal{F}}$ on a $3$-manifold $M$ or sutured manifold $(M,\gamma)$. Assume $L$ is two-sided in $M$. Let $M'$ be obtained from $M$ by deleting $L$ and replacing it with $L\times I$. (If $M$ is sutured, then extend $\gamma$ to ${\partial}M'$ naturally, with $({\partial}L)\times I$ contained in elements of $A(\gamma)$.) Let ${\mathcal{F}}'$ be the taut foliation of $M'$ which agrees with ${\mathcal{F}}$ outside of $L\times I$, and includes $L\times t$ as a leaf for each $t\in I$. Identify $M'$ with $M$. We say the taut foliation ${\mathcal{F}}'$ on $M$ is obtained from ${\mathcal{F}}$ by [*thickening*]{} the leaf $L$. Let $L$ be a leaf of a taut foliation ${\mathcal{F}}$ on a $3$-manifold $M$ or sutured manifold $(M,\gamma)$. Assume $L$ is two-sided in $M$. Let $M'$ be obtained from $M$ by deleting $L$ and replacing it with $L\times I$. (If $M$ is sutured, then extend $\gamma$ to ${\partial}M'$ naturally, with $({\partial}L)\times I$ contained in elements of $A(\gamma)$.) Let ${\mathcal{F}}'$ be a taut foliation of $M'$ which agrees with ${\mathcal{F}}$ outside of $L\times I$, includes $L\times 0$ and $L\times 1$ as leaves, and such that every leaf of ${\mathcal{F}}'|_{L\times (0,1)}$ is transverse to $x\times I$ for each $x\in L$. Identify $M'$ with $M$. We say the taut foliation ${\mathcal{F}}'$ on $M$ is obtained from ${\mathcal{F}}$ by [*I-bundle replacement*]{} on the leaf $L$. This generalizes the leaf thickening operation. \[suspensionchange\] See Figure \[fig:changesuspension\] for an illustration of this operation. Let $D$ be a product disk in sutured manifold $(M,\gamma)$. Let ${\mathcal{F}}$ be a taut foliation on $(M,\gamma)$; take $D$ to be transverse to the leaves of ${\mathcal{F}}$. Assume ${\partial}D$ meets two distinct elements $A_0$ and $A_1$ of $A(\gamma)$. Say ${\mathcal{F}}|_{A_i}$ is a suspension of $f_i:I\to I$ for each $i=0,1$.
Choose coordinates $D=I\times I$, so $I\times 0\subset A_0$, $I\times 1\subset A_1$, $S^0\times I\subset R(\gamma)$, and each $x\times I$ is contained in one leaf of ${\mathcal{F}}$. Let $V=D\times I$, where the $I$ direction is small and chosen so that $({\partial}D)\times I\subset{\partial}M$. Parametrize $V=D\times I=(I\times I)\times I$ so that each $(x\times I)\times I$ is contained in one leaf of ${\mathcal{F}}$. Choose a boundary-preserving automorphism $g:I\to I$. We obtain a taut foliation ${\mathcal{F}}'$ on $M$ by excising $V\cap{\mathring}{M}$ (with the foliation ${\mathcal{F}}|_V$) and regluing $V=I\times I\times I$ via $$\begin{cases}V\ni((x,t),s)\sim ((x,t),s)\in \overline{M\setminus V}&\text{if $x,s\in I$, $t<1$, and $((x,t),s)\in{\partial}V$,}\\[1em] V\ni((x,1),s)\sim ((g(x),1),s)\in \overline{M\setminus V}&\text{for all $x,s\in I$.}\end{cases}$$ In words, we cut out $V=D\times I$ and reglue by the identity, except that we shift $D\times 1$ vertically according to the map $g$. This changes the intersection ${\mathcal{F}}'|_{A_i}$. Now ${\mathcal{F}}'|_{A_0}$ is a suspension of $f_0\circ g$ while ${\mathcal{F}}'|_{A_1}$ is a suspension of $g^{-1}\circ f_1$. In particular, note that if we choose $g=f_0^{-1}$, then ${\mathcal{F}}'|_{A_0}$ is a suspension of the identity, i.e. a product foliation of compact circles. ![[**[Left:]{}**]{} $V=D\times I$, where $D$ is a product disk in a sutured manifold $(M,\gamma)$. We draw the intersection of $V$ with a taut foliation ${\mathcal{F}}$. [**[Right:]{}**]{} We perform a suspension change operation (Operation \[suspensionchange\]) on $V$ to obtain a new taut foliation ${\mathcal{F}}'$. We draw the intersection of ${\mathcal{F}}'$ with the reglued $V$.
Note that ${\mathcal{F}}'$ and ${\mathcal{F}}$ do not agree on the elements of $A(\gamma)$ met by $D$.[]{data-label="fig:changesuspension"}](changesuspension){width="80mm"} The suspension change operation allows us essential freedom when performing the $I$-bundle replacement operation. \[anyhomeo\] Let $F$ be a connected, compact positive-genus surface with non-empty boundary. Fix some boundary component $C$ of $F$. Then for any homeomorphism $f:I\to I$, there exists a foliation ${\mathcal{F}}$ of $F\times I$ transverse to the vertical $I$ fibers so that ${\mathcal{F}}|_{C\times I}$ is a suspension of $f$ and ${\mathcal{F}}|_{({\partial}F\setminus C)\times I}$ is a product foliation. Let $\alpha$ be a non-separating simple closed curve on $F$. Let $H=F\setminus\nu(\alpha)$. Let ${\mathcal{G}}$ be the product foliation on $H\times I$. Since $\alpha$ is non-separating, there exist paths $\beta_1,\beta_2$ in $H$ from $C$ to the distinct components $C_1,C_2$ of ${\partial}H\setminus{\partial}F$. Then $\beta_1\times I,\beta_2\times I$ are product disks for ${\mathcal{G}}$ in $H\times I$. See Figure \[fig:genustrick\]. Let $g,h:I\to I$ be automorphisms. By performing the suspension change operation on ${\mathcal{G}}$ at $\beta_1\times I$ and $\beta_2\times I$, we may find a foliation ${\mathcal{G}}'$ on $H\times I$ (transverse to the vertical fibers) so that ${\mathcal{G}}'|_{C\times I}$ is a suspension of $gh$, ${\mathcal{G}}'|_{C_1\times I}$ is a suspension of $\bar{g}$, and ${\mathcal{G}}'|_{C_2\times I}$ is a suspension of $\bar{h}$. If $\overline{g}$ and $\overline{\overline{h}}=h$ are conjugate, then we may extend ${\mathcal{G}}'$ to a taut foliation ${\mathcal{F}}$ on $F\times I$ by attaching a foliated $(S^1\times I)\times I$ to $H\times I$.
To pick the appropriate $g,h$, assume that $f$ has no fixed points (if $f$ is the identity then the product foliation satisfies the lemma; if $f$ fixes finitely many points then we may repeat this argument on each subinterval in which $f$ has no fixed points). Then $f$ is everywhere expanding or contracting. Let $g$ be a (boundary-preserving) automorphism of $I$ that is also everywhere expanding or contracting, but with the opposite behavior of $f$ (that is, if $f$ expands then $g$ contracts, and vice versa). Take $h=\bar{g}f$. Then $gh=f$, and $g$ and $\bar{h}$ are both everywhere contracting or both everywhere expanding, so are conjugate. The foliation ${\mathcal{F}}$ is the desired foliation. Fat-vertex graphs {#sec:graphs} ================= In this section, we will use a fat-vertex graph to record product disks and find product annuli for a complementary sutured manifold. A [*fat-vertex graph*]{} is a graph $(V,E)$ on vertices in $V=\{v_1,\ldots, v_m\}$ connected by edges in $E$ along with injective maps $\phi_i:E_{i}\to S^1$ for each $v_i\in V$, where $E_i$ is the set of ends of edges in $E$ at $v_i$. In words, a fat-vertex graph is a graph with a cyclic ordering of edges at each vertex. This cyclic ordering allows us to describe the [*neighborhood*]{} $\nu(G)$ of $G$. This $\nu(G)$ is a surface with boundary constructed by starting with $m$ disjoint disks $D_1,\ldots, D_m$ in correspondence with $v_1,\ldots, v_m$, and then attaching a band between $D_i$ and $D_j$ for each edge between $v_i$ and $v_j$. The attaching regions of bands along ${\partial}D_i$ respect the cyclic ordering of the edges incident to $v_i$. Note that although we typically use $\nu$ to mean an open neighborhood, in this context it is clear that we are only interested in compact surfaces, so we take $\nu(G)$ to be compact with boundary. We may refer to $D_i$ as $\nu(v_i)$. We refer to each boundary component of $\nu(G)$ as a [*face*]{} of $G$.
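To illustrate how the cyclic orderings determine the faces, consider the following example (ours, not from the source). Let $G$ be the “theta” graph with two vertices $v_1,v_2$ and three edges $e_1,e_2,e_3$, each joining $v_1$ to $v_2$. Writing $m$ for the number of vertices, $n$ for the number of edges, and $f$ for the number of faces, $$\chi(\nu(G))=m-n=2-2g(\nu(G))-f,$$ and here $\chi(\nu(G))=2-3=-1$. If the cyclic order of edge ends is $(e_1,e_2,e_3)$ at $v_1$ and the reversed order $(e_1,e_3,e_2)$ at $v_2$, then $\nu(G)$ is a planar pair of pants, with $g=0$ and $f=3$. If instead the order is $(e_1,e_2,e_3)$ at both vertices, then $\nu(G)$ is a once-punctured torus, with $g=1$ and a single face. So the same underlying graph can have different numbers of faces, depending only on the cyclic orderings.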
\[construct\] Let $L=L_1\sqcup L_2$ be a $2$-component link in a rational homology sphere $Y$ with nonzero linking number. Let $X=Y\setminus\nu(L)$. The $3$-manifold $X$ has two torus boundary components $P_1$ and $P_2$, where $P_i={\partial}\overline{\nu(L_i)}$. Let $S$ be a norm-minimizing surface in $X$. Let $(M,\gamma)$ be the complementary sutured manifold to $S$ in $X$. Suppose $\{D_1,\ldots, D_n\}$ is some collection of pairwise disjoint product disks in $(M,\gamma)$, where each disk connects an element of $A(\gamma)$ in $P_1$ to an element of $A(\gamma)$ in $P_2$. Say the sutures of $\gamma$ are $s_1,\ldots, s_m$. Construct a fat-vertex graph $G$ as follows: Let $G$ have vertices $v_{1},\ldots, v_{m}$ and edges $e_{1},\ldots, e_{n}$. If the product disk $D_i$ connects sutures $s_j$ and $s_k$, then the edge $e_{i}$ goes between vertices $v_{j}$ and $v_{k}$. The cyclic order of the edges at each vertex corresponds to the cyclic order of product disks at each suture. That is, if when travelling (positively) around suture $s_i$ we meet product disks $D_{i_1},\ldots, D_{i_j}$ in order, then when travelling around $v_{i}$ we should find edges $e_{{i_1}},\ldots,e_{{i_j}}$ in order. Say $G$ has faces $C_1,\ldots, C_f$. We construct product annuli $A_1,\ldots, A_f$ in $(M,\gamma)$ as follows: - Given an arc $a$ in $C_i\cap({\partial}\overline{\nu (v_j)})$ lying close to $v_j$ between edges $e_k$ and $e_l$, let $\Delta_a$ be the corresponding part of the suture $s_j$ lying between product disks $D_k$ and $D_l$. Push $\Delta_a$ slightly into the interior of $M$, so ${\partial}\Delta_a\cap R_+$ is one arc, ${\partial}\Delta_a\cap R_-$ is one arc, and two arcs of ${\partial}\Delta_a$ lie in the interior of $M$. - Given an arc $b$ in $C_i$ that is parallel to edge $e_j$, let $\Delta_b$ be a copy of $D_j$ pushed slightly to the side, in the same direction one would push $e_j$ to obtain $b$.
- Then if $C_i=\cup_{j=1}^k \alpha_j$, where each $\alpha_j$ is an arc as in one of the two previous bullet points, we form a product annulus $A_i=\cup_{j=1}^k \Delta_{\alpha_j}$. In the construction of each $\Delta_{\alpha_j}$, we choose the pushoffs from ${\partial}M$ or $D_l$ so that the edges of the $\Delta_{\alpha_j}$ contained in the interior of $M$ match up, and $A_i$ is an annulus with one boundary component in $R_+$ and the other in $R_-$. See Figure \[fig:productannulus\]. ![[**[Left:]{}**]{} a face $C_i$ of $G$, where $G$ is a fat-vertex graph describing product disks in $(M,\gamma)$. [**[Right:]{}**]{} we use $C_i$ to construct a product annulus $A_i$ in $(M,\gamma)$.[]{data-label="fig:productannulus"}](graph){width="75mm"} We will continue to use this notation. From now on, $G$ will always refer to a fat-vertex graph describing product disks in a norm-minimizing surface complement in $S^3\setminus\nu(L)$. Later, we will specify the product disks to come from cut-and-paste resolutions as in Proposition \[cutpasteprop\]. When we say that a product annulus $A_i$ corresponds to a face $C_i$ of $G$, we will always mean as in this construction. \[remarkedisks\] Say $G$ has components $G_1,\ldots, G_c$. Let $(M',\gamma')$ be the sutured manifold obtained by decomposing $(M,\gamma)$ by product annulus $A_i$ corresponding to a face $C_i$ of $G$. Say $C_i$ is a face of component $G_j$. Let $s_1,\ldots, s_m$ be the sutures of $\gamma$ corresponding to vertices of $G_j$. Then there are disjoint product disks $E_1,\ldots, E_m\subset (M',\gamma')$ so that $E_k$ connects $s_k$ to a suture in $\gamma'\setminus\gamma$ as follows: - Let $q$ be a point in $C_i$. For each $k=1,\ldots, m$, fix a path $p_k$ in ${\partial}\nu(G_j)$ connecting a point in ${\partial}\nu(v_k)$ to $q$.
- As in the construction of $A_1,\ldots, A_f$ in Construction \[construct\], each $p_k$ gives a product disk meeting the suture $s_k$ and a suture in $\gamma'\setminus\gamma$. Push the product disks $E_1,\ldots, E_m$ slightly off of each other to be disjoint. We draw the product disks $E_1,\ldots, E_m$ in Figure \[fig:edisks\]. Note that each $E_k$ meets [*the same*]{} suture in $\gamma'\setminus\gamma$. ![[**[Left:]{}**]{} paths in $\nu(G)$ connecting each ${\partial}\nu(v_k)$ to a point $q$ in ${\partial}\nu(G)$. The face containing $q$ corresponds to a product annulus $A$ in $(M,\gamma)$. [**[Right:]{}**]{} The paths yield product disks connecting corresponding elements of $A(\gamma)$ to an element of $A(\gamma')\setminus A(\gamma)$, where $(M,\gamma)\xrightarrow{A_i}(M',\gamma')$.[]{data-label="fig:edisks"}](edisks "fig:"){width="75mm"} Proof of Theorem \[mingenuscor\] {#sec:proof} ================================ Let $L=L_1\sqcup L_2$ be a $2$-component link in a rational homology $3$-sphere $Y$. Assume $\operatorname{{lk}}(L_1,L_2)\neq 0$ and that $X:=Y\setminus\nu(L)$ has nondegenerate Thurston norm. Let $S$ be a properly norm-minimizing surface in $X$ with $[S]$ primitive and not a corner of the Thurston norm. Let $\widehat{X}$ denote the closed manifold obtained from $X$ by Dehn filling each boundary component of $X$ according to the slope of ${\partial}S\cap{\partial}X$, and let $\widehat{S}\subset \widehat{X}$ be the closed surface obtained from $S$ by capping off each component of ${\partial}S$ by a disk in a Dehn-filling solid torus. If $\widehat{S}$ is [*not*]{} norm-minimizing, then either $g(S)=1$ or $g([S])$ is minimal among all classes in the interior of the same face of the Thurston norm as $[S]$.
In particular, if $[S']$ is a primitive class in the interior of the same face as $[S]$ and $\widehat{S'}$ is also not norm-minimizing, then $g([S])=g([S'])$. Take $[S]$ to be primitive in $H_2(X,{\partial}X;{\mathbb{R}})$ and not a corner of the Thurston norm. Then $c[S]=a[R]+b[T]$ for some positive coprime integers $a$ and $b$, an integer $c>0$, and adjacent corners $[R]$ and $[T]$. Let $\alpha_1,\alpha_2$ be the homology classes of punctured Seifert surfaces for multiples of $L_1$ and $L_2$, respectively. Let $\beta_1,\beta_2\in H_2(X,{\partial}X)$, so $\beta_1=p\alpha_1+q\alpha_2$ and $\beta_2=r\alpha_1+s\alpha_2$ for some integers $p$, $q$, $r$, and $s$. We say the [*determinant of $\beta_1$ and $\beta_2$*]{} is $$\det\left(\beta_1,\beta_2\right)=\bigg|\det\left(\begin{matrix}p&q\\r&s\end{matrix}\right)\bigg|.$$ If $\beta_1$ and $\beta_2$ are vertices of the Thurston norm unit-ball, we call $\det(\beta_1,\beta_2)$ the *determinant of the face spanned by $\beta_1$ and $\beta_2$*. \[primitive\] If $\det([R],[T])=1$, then $c=1$. Consider the parallelogram $A$ in ${\mathbb{R}}^2=H_2(X,{\partial}X;{\mathbb{R}})$ spanned by $[R]$ and $[T]$. By Pick’s Theorem, $\det([R],[T])=1+($the number of integer lattice points in the interior of $A$). (Here we use the fact that $[R]$ and $[T]$ are primitive to know the only lattice points on the boundary of $A$ are at the four corners of $A$.) Thus, if $\det([R],[T])=1$ then there are no integer lattice points in the interior of $A$, so every lattice point (in particular $[S]$) can be expressed as an integral combination $[S]=a'[R]+b'[T]$. Then $ca'=a$ and $cb'=b$; since $a$ and $b$ are coprime, $c=1$. Take $R$ and $T$ to be properly norm-minimizing surfaces positioned so that any cut-and-paste of $xR+yT$ is also properly norm-minimizing (for $x,y>0$) as in Proposition \[cutpasteprop\]. Let $S^{xy}$ denote the surface $xR+yT$.
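As a worked example of Lemma \[primitive\] (our numbers, for illustration): in the basis $(\alpha_1,\alpha_2)$, suppose the adjacent corners are $[R]=(1,0)$ and $[T]=(1,2)$, so $$\det([R],[T])=\bigg|\det\left(\begin{matrix}1&0\\1&2\end{matrix}\right)\bigg|=2.$$ The primitive class $[S]=(1,1)$ satisfies $2[S]=[R]+[T]$, so here $c=2$ and the hypothesis of the lemma genuinely fails. If instead $[T]=(1,1)$, then $\det([R],[T])=1$, and a primitive class such as $[S]=(2,1)=[R]+[T]$ is already an integral combination of the corners, so $c=1$, as the lemma guarantees.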
Since $\chi(S)=\chi(S')$ and $\chi(\widehat{S})=\chi(\widehat{S'})$ for any properly norm-minimizing surface $S'$ with $[S']=[S]$, to conclude that $\widehat{S}$ is norm-minimizing it would be sufficient to show that $\widehat{S'}$ is norm-minimizing. Therefore, we may take $S=S^{ab}$ to be the cut-and-paste sum $aR+bT$. In the following sections, we will study for which $a,b$ the surface $\widehat{S}^{ab}$ may fail to be norm-minimizing. For notation, let $(M^{ab},\gamma^{ab})$ be the complementary sutured manifold to $S^{ab}$ in $X$. Form a fat-vertex graph $G^{ab}$ from the intersection of $aR$ and $bT$, as in Construction \[construct\]. The product disks used to construct the edges of $G^{ab}$ are the disks that arise from the intersections of $R$ and $T$, as in Figure \[fig:cutandpaste\]. Since $[S^{ab}]=c[S]$ and $[S]$ is primitive, $G^{ab}$ has $c$ components. Constructing a taut foliation on $Y_{{\partial}S^{ab}}(L)$ when possible {#sec:constructing} ------------------------------------------------------------------------ ### Assuming $c=1$ For now, we will assume $S=S^{ab}$, so $G^{ab}$ is a connected graph. The case that $G^{ab}$ is disconnected is not much harder, but this will simplify notation. We address the general case after understanding connected graphs. If $X=Y\setminus\nu(L)$ is reducible, then $X$ factors as $(Y'\setminus\nu(L))\# Y''$ for rational homology spheres $Y'$ and $Y''$, with $Y'\setminus\nu(L)$ irreducible. It is sufficient to prove the theorem for $L\subset Y'$, so we may assume $X$ is irreducible. Therefore, we may apply Theorem \[havefoliation1\] to obtain a taut foliation ${\mathcal{F}}$ on $X$ realizing $S=S^{ab}$ as a leaf. Restrict ${\mathcal{F}}$ to a taut foliation on the complementary sutured manifold $(M,\gamma)$ to $S$. Say $G:=G^{ab}$ has faces $C_1,\ldots, C_f$. Let $A_1,\ldots, A_f$ be the product annuli in $(M,\gamma)$ corresponding to $C_1,\ldots, C_f$, as in Construction \[construct\].
Let $S_{\pm}:=R_{\pm}(\gamma)$ (so if $M=\overline{X\setminus(S\times I)}$, $S_+=S\times1$ and $S_-=S\times-1$). We write $\widehat{S},\widehat{S}_+,\widehat{S}_-$ to denote the closed surfaces embedded in $\widehat{X}$ obtained from $S,S_+,S_-$ (respectively) by attaching disks (within the Dehn-filling solid tori) to each boundary component. (We have already defined $\widehat{S}$ this way, but we want to be clear that $\widehat{S}_+$ and $\widehat{S}_-$ are defined the same way.) Let $(M',\gamma')$ be the result of decomposing $(M,\gamma)$ along the product annulus $A_i$. Let $V_i=A_i\times I$ be the solid torus excised in this decomposition. We name the two elements of $A(\gamma')\setminus A(\gamma)$ $B_1$ and $B_2$. Let ${\mathcal{F}}':={\mathcal{F}}|_{M'}$. By Lemma \[tautdecomp\] and Remark \[tautdecompfol\], ${\mathcal{F}}'$ is taut. Recall from Remark \[remarkedisks\] that there are product disks in $(M',\gamma')$ connecting each element of $A(\gamma)$ to one of $B_1$ or $B_2$. Perform the suspension change operation (Operation \[suspensionchange\]) on these disks to find from ${\mathcal{F}}'$ a new taut foliation ${\mathcal{G}}$ on $(M',\gamma')$, where ${\mathcal{G}}$ restricts to a product foliation on each component of $A(\gamma')\cap A(\gamma)$. (In words, we use product disks to “push” the complexity of ${\mathcal{F}}'$ at each element of $A(\gamma)$ onto $B_i$ instead to obtain ${\mathcal{G}}$.) Let $(\widehat{M},\widehat{\gamma})$ be the complementary sutured manifold to $\widehat{S}$ in $\widehat{X}$. Let $(\widehat{M'},\widehat{\gamma'})$ be the result of decomposing $(\widehat{M},\widehat{\gamma})$ along $A_i$. (The notation here is not misleading – $(\widehat{M'},\widehat{\gamma'})$ is obtained from $(M',\gamma')$ by attaching $3$-dimensional $2$-handles to each element of $A(\gamma)\subset A(\gamma')$.)
Extend ${\mathcal{G}}$ to a taut foliation of the sutured manifold $(\widehat{M'},\widehat{\gamma'})$ by capping off leaves of ${\mathcal{G}}$ with disks. Say ${\mathcal{G}}|_{B_1}$ is a suspension of a homeomorphism $f:I\to I$ and ${\mathcal{G}}|_{B_2}$ is a suspension of a homeomorphism $g:I\to I$. Our current goal is to reglue $V_i$ to $\widehat{M'}$, foliated so as to extend ${\mathcal{G}}$ to a taut foliation on $\widehat{X}$ achieving $\widehat{S}$ as a leaf. We do not expect to always be able to fill $V_i$ and extend ${\mathcal{G}}$. A priori, we can fill $V_i$ and extend ${\mathcal{G}}$ when $f$ and $\bar{g}$ are conjugate. We will do an $I$-bundle replacement on the leaves $R_+(\widehat{\gamma'}), R_-(\widehat{\gamma'})$ of ${\mathcal{G}}$ to obtain a taut foliation ${\mathcal{G}}'$. Now the foliation induced by ${\mathcal{G}}'$ on the intersection of the $I$-bundle $R_\pm(\widehat{\gamma'})\times I$ with $B_j$ is a suspension of some $\mu_\pm^j:I\to I$. See Figure \[fig:suspension\]. We have not yet specified our choice of foliation on the $I$-bundles to determine each $\mu_\pm^j$. We will consider some situations that allow us to choose some of the $\mu_*^*$ freely. Note ${\mathcal{G}}'|_{B_1}$ is a suspension of the concatenation $\mu_-^1f\mu_+^1$ and ${\mathcal{G}}'|_{B_2}$ is a suspension of the concatenation $\mu_-^2g\mu_+^2$. If we can choose $\mu_*^*$ so that $\mu_-^1f\mu_+^1$ and $\bar{\mu}_-^2\bar{g}\bar{\mu}_+^2$ are conjugate, then we can fill $V_i$ and extend ${\mathcal{G}}'$, proving that $\widehat{S}$ is norm-minimizing. We will use Lemma \[anyhomeo\] to choose the suspension homeomorphisms $\mu_\pm^j$ appropriately. If $g(S)=1$, then the conclusion of the theorem holds automatically. So assume $g(S)>1$. [**[Case 1.
Neither ${\partial}_+A$ nor ${\partial}_-A$ is separating in $\widehat{S}_+,\widehat{S}_-$.]{}**]{} Then $\widehat{S}_{\pm}\setminus\nu(A)$ has positive genus, so by Lemma \[anyhomeo\] we may choose e.g. $\mu_+^1=\bar{g},\mu_-^2=\bar{f},\mu_+^2=\mu_-^1=\operatorname{{id}}$. Now $\mu_-^1 f \mu_+^1 =\operatorname{{id}}f \bar{g}$ and $\bar{\mu_-^2}\bar{g}\bar{\mu_+^2}=f\bar{g}\operatorname{{id}}$ are conjugate (in fact isotopic) and hence $V_i$ can be filled to extend ${\mathcal{G}}$ to a taut foliation on $(\widehat{M},\widehat{\gamma})$, proving that $\widehat{S}$ is norm-minimizing. [**[Case 2. ${\partial}_+A $ is separating in $\widehat{S}_+$, but ${\partial}_-A $ is non-separating in $\widehat{S}_-$.]{}**]{} (The case in which ${\partial}_-A$ is separating but ${\partial}_+A$ is non-separating is similar.) Let $S_{+}^1$ be the component of $S_+\setminus V$ meeting $B_1^+$ and let $S_+^2$ be the component meeting $B_2^+$. Implicitly, we choose the order of $B_1$ and $B_2$ so that $B_1^+$ contains ${\partial}S_+$. Since ${\partial}_-A $ is non-separating in $\widehat{S}_-$ (and hence in $S_-$) and $S_-$ is incompressible in $X$, ${\partial}_+A $ does not bound a disk in $S_+$. Therefore, at least one of the $\widehat{S}_+^j$ (say WLOG $\widehat{S}_+^2$) is positive-genus. We then choose $\mu_+^2=\bar{f},\mu_{-}^1=\bar{g},\mu_+^1=\mu_-^2=\operatorname{{id}}$. Now $\mu_-^1 f \mu_+^1 =\bar{g}f\operatorname{{id}}$ and $\bar{\mu_-^2}\bar{g}\bar{\mu_+^2}=\operatorname{{id}}\bar{g}f$ are conjugate (in fact isotopic) and hence $V_i$ can be filled to extend ${\mathcal{G}}$ to a taut foliation on $(\widehat{M},\widehat{\gamma})$, proving that $\widehat{S}$ is norm-minimizing. [**[Case 3. Both ${\partial}_+A$ and ${\partial}_-A$ are separating in $\widehat{S}_+,\widehat{S}_-$.]{}**]{} Let $S_{\pm}^1$ be the component of $S_\pm\setminus V$ meeting $B_1^\pm$ and let $S_\pm^2$ be the component meeting $B_2^\pm$.
Implicitly, we choose the order of $B_1$ and $B_2$ so that $B_1^\pm$ contains ${\partial}S_\pm$. [**[Case 3(i). Both $S_+^1$ and $S_-^2$ have positive genus.]{}**]{} (The case in which $\widehat{S}_-^1$ and $\widehat{S}_+^2$ have positive genus is similar.) Choose $\mu_+^1=\overline{g},\mu_-^2=\overline{f}$, and $\mu_-^1=\mu_+^2=\operatorname{{id}}$. Now $\mu_-^1 f \mu_+^1 =\operatorname{{id}}f\bar{g}$ and $\bar{\mu_-^2}\bar{g}\bar{\mu_+^2}=f\bar{g}\operatorname{{id}}$ are conjugate (in fact isotopic). Then ${\mathcal{G}}$ extends over $V_i$ as desired, so $\widehat{S}$ is norm-minimizing. The remaining cases describe the situation in which we cannot apply Lemma \[anyhomeo\] to control either of $\mu_-^1$ and $\mu_+^1$, or either of $\mu_-^2$ and $\mu_+^2$. When this is the case, we cannot immediately cause the two suspension foliations on $B_1$ and $B_2$ to be conjugate. We may then consider other values of $i$, i.e. consider other product annuli constructed from different faces of the fat-vertex graph $G$. If any annulus yields the situation of Case 1, 2, or 3(i), then $\widehat{S}$ is norm-minimizing. When every annulus fails to yield a situation of a previous case, we may still consider whether each annulus exhibits similar behavior. [**[Case 3(ii). For every choice of $i$, $S_+^2$ is a disk.]{}**]{} Since $S$ is incompressible, when $S_+^2$ is a disk it must be the case that $S_-^2$ is a disk. Note that the union of $A_i$ with the disks $S_+^2$ and $S_-^2$ is a $2$-sphere. Decomposing $(\widehat{M},\widehat{\gamma})$ along all $A_1,\ldots, A_f$ yields a sutured manifold $(\nu(G)\times I,({\partial}\nu(G))\times I)\sqcup(N,\eta)$, where $(N,\eta)$ is potentially disconnected. Moreover, ${\partial}N$ is a collection of $2$-spheres which each contain one element of $A(\eta)$.
The sutured manifold $(\nu(G)\times I,({\partial}\nu(G))\times I)$ is taut (since it is a product) and $(N,\eta)$ is taut (since $R_\pm(\eta)$ is norm-minimizing), so we conclude by Lemma \[tautdecomp\] that $(\widehat{M},\widehat{\gamma})$ is taut. Thus, $\widehat{S}$ is norm-minimizing. If $Y=S^3$, then in Case 3(ii) we may actually conclude that $\widehat{S}$ is a fiber surface by [@generaoflinks Theorem 3.14]. Thus, we consider the last remaining possibility. [[**[Case 3(iii).]{}**]{} For every choice of $A_i$, either $S_+^1$ is genus-zero or both $S_\pm^2$ are disks, and we are not in Case 3(ii).]{} In this case, we either cannot control $\mu_\pm^1$ or cannot control $\mu_\pm^2$. Let $\Sigma_1,\ldots,\Sigma_f$ be the regions of $S_+$ bounded by ${\partial}_+ A_i$ which do not meet ${\partial}S_+$. These regions are disjoint. Some $\Sigma_j$ must be positive-genus, since we are not in Case 3(ii). By assumption, this means that $\cup_{k\neq j}\Sigma_k\subset S_+\setminus\Sigma_j$ is genus-zero. Therefore, $g(S)=g(\Sigma_j)$. If $\widehat{S}$ is not norm-minimizing, then the product annuli corresponding to faces of $G$ must be as in this Case (3(iii)). As a shorthand, we will just say that “$S$ is type 3(iii)” or “$[S]$ is type 3(iii).” When $G^{ab}$ is disconnected ----------------------------- Now suppose $c[S]=a[R]+b[T]$, so $S^{ab}$ has $c$ components. Take $S$ to be one of these components. Now the complementary sutured manifold to $\widehat{S}^{ab}$ has $c$ components. Of these, $(c-1)$ are product sutured manifolds (these are the regions between parallel copies of $S$). The other component is a copy of the complementary sutured manifold $(\widehat{M},\widehat{\gamma})$ to $\widehat{S}$. Let $G$ be the component of $G^{ab}$ whose vertices and edges correspond to elements of $A(\widehat{\gamma})$ and product disks in $(\widehat{M},\widehat{\gamma})$. Then repeat the analysis of Subsection \[sec:constructing\] using $G$ in the place of $G^{ab}$. 
Again, we find that $(\widehat{M},\widehat{\gamma})$ is taut as long as it is not the case that for each of the product annuli $A_i$ corresponding to a face of $G$, ${\partial}_\pm A_i$ bounds either a disk or a surface of genus-$g(S)$ in $S_\pm$. In the case in which we do not know that $(\widehat{M},\widehat{\gamma})$ is taut, we again say “$S$ is type 3(iii)” or “$[S]$ is type 3(iii)” as a convenient shorthand. Limiting Case 3(iii) {#sec:limiting} -------------------- Now we will understand which homology classes $[S]$ may be type 3(iii). Recall that if $[S]\in H_2(X,{\partial}X;{\mathbb{R}})\setminus E$ is primitive, then we can take a properly norm-minimizing surface $S$ to be $S=S^{ab}:=aR+bT$ for some positive coprime $a,b\in{\mathbb{Z}}$, where $[R],[T]\in E$ are corners of the Thurston norm on $X$. Say a component of $P_i\setminus(S^{ab})$ is a [*corner*]{} if it does not lie between two parallel copies of $R$ or $T$. Note that an annulus corresponding to a face of $G^{ab}$ is parallel to $P_i$ either only in corners or never in corners. If such an annulus is parallel to $P_i$ only in corners, we call the annulus a [*corner annulus*]{}. Otherwise, we call the annulus a [*noncorner annulus*]{}. See Figure \[fig:corners\]. \[mingenusprop\] If $S=S^{ab}$ is type 3(iii), then $g(S)\le g(S^{xy})$ for all $x,y$. Say the annuli corresponding to faces of $G^{ab}$ bound regions $\Sigma_1,\ldots,\Sigma_n$ in the interior of $S_+$. Note that $S^{xy}$ contains subregions homeomorphic to each $\Sigma_i$, so $g(\Sigma_i)\le g(S^{xy})$. Since $S$ is type 3(iii), for some $i$ we have $g(\Sigma_i)=g(S)$. Therefore, $g(S)\le g(S^{xy})$. Let $c_{xy}$ denote the number of components of $S^{xy}$ for $x,y$ positive integers. (Since $[S]$ is primitive and $[S^{ab}]=c[S]$, $c_{ab}=c$.) If $[S^{ab}]$ is type 3(iii), then $\frac{1}{c_{ab}}g(S^{ab})\le\frac{1}{c_{xy}}g(S^{xy})$ for all coprime $x,y>0$. 
Say the annuli corresponding to faces of $G^{ab}$ bound regions $\Sigma_1,\ldots,\Sigma_n$ in the interior of $S_+$, where by $S_+$ we mean the component of $S^{ab}_+$ which is not between parallel copies of $S$. Because $x,y>0$, $S^{xy}$ contains subregions homeomorphic to each $\Sigma_i$, so $g(\Sigma_i)\le \frac{1}{c_{xy}}g(S^{xy})$. Since $S^{ab}$ is type 3(iii), for some $i$ we have $g(\Sigma_i)=\frac{1}{c_{ab}}g(S^{ab})$. Therefore, $\frac{1}{c_{ab}}g(S^{ab})\le\frac{1}{c_{xy}}g(S^{xy})$. Thus, if $\widehat{S}$ is not norm-minimizing, then either $g(S)=1$, or it is the case that $g(S)\le g(\alpha)$ for every $\alpha$ in the face of the Thurston norm which is spanned by $[R]$ and $[T]$. This completes the proof of Theorem \[mingenuscor\]. \[scholium\] In Theorem \[2compthm\], if $[L_1]=[L_2]=0\in H_1(Y;{\mathbb{Z}})$ and $\operatorname{{lk}}:=\operatorname{{lk}}(L_1,L_2)=\pm1$ and every face of the unit-norm ball of the Thurston norm $x$ on $Y\setminus\nu(L)$ has determinant $1$, then $\widehat{S}$ is norm-minimizing if $[S]$ is not a vertex of $x$ or a bisector of a face of $x$. Take $[S]$ to be neither a corner of $x$ nor a bisector of a face of $x$. Since $\det([R],[T])=1$, we may take $S=aR+bT$ for some adjacent corners $[R],[T]$ and positive coprime integers $a,b$ with $(a,b)\neq(1,1)$. Because $|\operatorname{{lk}}|=1$, ${\partial}S$ has two components. This, together with nondegeneracy of $x$, implies $g(S)>g(R+T)$. Moreover, $\chi(S)=a\chi(R)+b\chi(T)\le -3$, so $g(S)>1$. Then by Theorem \[mingenuscor\], $\widehat{S}$ must be norm-minimizing. Now we give bounds on $|E|$ in Theorem \[2compthm\]. In Corollaries \[cor1\] and \[cor2\], the actual upper bounds given on $|E|$ could be improved with some mild arithmetic or by investigating the geometry of a specific Thurston unit-norm ball. Here we just give some easy bounds to make clear that a bound on $|E|$ can be made explicit. 
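Both corollaries below count the integral classes lying in the parallelogram spanned by two adjacent corners $\alpha_i,\alpha_{i+1}$: besides the corner classes themselves there are exactly $\det(\alpha_i,\alpha_{i+1})-1$ of them, since a half-open fundamental parallelogram of the sublattice generated by two integral vectors contains exactly $|\det|$ lattice points. A brute-force sketch of this standard fact (the example vectors are arbitrary illustrations, not data from the text):

```python
from itertools import product

def halfopen_parallelogram_points(alpha, beta):
    """Lattice points s*alpha + t*beta with 0 <= s, t < 1, by brute force."""
    det = alpha[0] * beta[1] - alpha[1] * beta[0]
    assert det != 0, "alpha and beta must be linearly independent"
    r = sum(abs(c) for c in alpha + beta)  # bounding box large enough to search
    pts = []
    for x, y in product(range(-r, r + 1), repeat=2):
        # Solve (x, y) = s*alpha + t*beta for (s, t) by Cramer's rule.
        s = (x * beta[1] - y * beta[0]) / det
        t = (alpha[0] * y - alpha[1] * x) / det
        if 0 <= s < 1 and 0 <= t < 1:
            pts.append((x, y))
    return pts

alpha, beta = (2, 1), (1, 3)   # det = 5
pts = halfopen_parallelogram_points(alpha, beta)
print(len(pts))                # |det| lattice points, origin included
print(len(pts) - 1)            # det - 1 integral classes besides the origin
```

The half-open convention ($0\le s,t<1$) is what makes the count exactly $|\det|$: each coset of the sublattice is represented once.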
\[cor1\] In Theorem \[2compthm\], if $[L_1]=[L_2]=0\in H_1(Y;{\mathbb{Z}})$, then we may take $E$ to contain at most $[n_x+\sum_{i=1}^{n_x}(\det(\alpha_i,\alpha_{i+1})-1)](1+{{|\operatorname{{lk}}|}\choose{2}})$ elements, where $n_x$ is the number of corners of the Thurston norm $x$ on $X$, $\alpha_1,\ldots,\alpha_{n_x}$ are the corners of $x$ in cyclic order (taking $\alpha_{n_x+1}:=\alpha_1$), and $\operatorname{{lk}}=\operatorname{{lk}}(L_1,L_2)$. We take $E$ to contain the corners $\alpha_1,\ldots,\alpha_{n_x}$ of $x$. Then we add any integral classes in the parallelogram spanned by $\alpha_i,\alpha_{i+1}$; for each $i$ this consists of exactly $\det(\alpha_i,\alpha_{i+1})-1$ elements. Now if $[S]\not\in E$, then we can take $S=aR+bT$ for some $[R],[T]\in E$. We have $|{\partial}S|\le|\operatorname{{lk}}|+1$, so for $a+b>|\operatorname{{lk}}|+1$ we have $g(S)>g(R+T)$ and $g(S)>1$. Now let $F=\{a\beta_1+b\beta_2\mid \beta_1,\beta_2$ are cyclically adjacent in $E$ with $a,b\ge1$ and $a+b\le |\operatorname{{lk}}|+1\}$. The set $F$ contains $|E|{{|\operatorname{{lk}}|}\choose{2}}$ elements. Let $E:=E\cup F$. \[cor2\] In Theorem \[2compthm\], if $m_1,m_2$ are the smallest positive integers with $m_1[L_1]=m_2[L_2]=0\in H_1(Y;{\mathbb{Z}})$, then we may take $E$ to contain at most $[n_x+\sum_{i=1}^{n_x}(\det(\alpha_i,\alpha_{i+1})-1)](1+{{C-1}\choose{2}})$ elements, where $n_x$ is the number of corners of the Thurston norm $x$ on $X$, $\alpha_1,\ldots,\alpha_{n_x}$ are the corners of $x$ in cyclic order (taking $\alpha_{n_x+1}:=\alpha_1$), and $C=2|\operatorname{{lk}}(L_1,L_2)|m_1^2m_2^2$. We take $E$ to contain the corners $\alpha_1,\ldots,\alpha_{n_x}$ of $x$. Then we add any integral classes in the parallelogram spanned by $\alpha_i,\alpha_{i+1}$; in total this consists of exactly $\sum_{i=1}^{n_x}(\det(\alpha_i,\alpha_{i+1})-1)$ elements. Recall from \[g0prop\] that $|{\partial}S|\le C$, so for $a+b>C$ we have $g(S)>g(R+T)$ and $g(S)>1$. 
Now let $F=\{a\beta_1+b\beta_2\mid \beta_1,\beta_2$ are cyclically adjacent in $E$ with $a,b\ge1$ and $a+b\le C\}$. The set $F$ contains ${{C-1}\choose{2}}|E|$ elements. Let $E:=E\cup F$. Links with many components {#sec:ncomp} ========================== Now we consider links of more than two components. Let $L=L_1\sqcup\cdots\sqcup L_n$ be an $n$-component link in a rational homology $3$-sphere $Y$. Assume $\operatorname{{lk}}(L_i,L_j)\neq 0$ for each $i\neq j$. Let $X:=Y\setminus\nu(L)$. Assume $X$ has nondegenerate Thurston norm and is atoroidal and anannular. Let $S$ be a norm-minimizing surface so that $[S]$ is primitive and is contained in a cone $C$ on the interior of a face of the unit ball of $x$. Let $\widehat{X}$ be the closed manifold obtained by Dehn-filling each boundary component of $X$ according to the slope of ${\partial}S$. Let $\widehat{S}$ be the closed surface in $\widehat{X}$ obtained from $S$ by capping off each boundary component of $S$ by a disk in a Dehn-filling solid torus. Then at least one of the following is true:

- $g(S)=1$,

- $\widehat{S}$ is norm-minimizing,

- $g(S)\le g(\beta)$ whenever $\beta\in C$.

Assume $g(S)>1$ and that $\widehat{S}$ is not norm-minimizing. Let $S'$ be a properly norm-minimizing surface in $X$ with $[S']\in C$. Then there exist adjacent corners $[R],[T]$ of $x$ so that $c[S]=a[R]+b[T],c'[S']=a'[R]+b'[T]$ for some positive integers $a,b,c,a',b',c'$. Take $R$ and $T$ to be properly norm-minimizing, and take $cS=aR+bT, c'S'=a'R+b'T$. If ${\partial}R$ and ${\partial}T$ meet some components $P_i$ of ${\partial}X$ in the same slope $q$ or one in slope $q$ and the other not at all, then $S$ and $S'$ meet $P_i$ in slope $q$. Let $\widetilde{X}$ be obtained from $X$ by filling $P_i$ with slope $q$, and let $\widetilde{S},\widetilde{S}',\widetilde{R},\widetilde{T}$ be the corresponding capped surfaces in $\widetilde{X}$. By Theorem \[fillonethm2\], $\widetilde{S}$ is norm-minimizing. 
Moreover, if $R$ and $T$ meet $P_i$ in $b_R$ and $b_T$ curves respectively, then $c\chi(\widetilde{S})=c\chi(S)+ab_R+bb_T=a(\chi(R)+b_R)+b(\chi(T)+b_T)=a\chi(\widetilde{R})+b\chi(\widetilde{T})$. Since $\widetilde{S}$ is norm-minimizing, both $\widetilde{R}$ and $\widetilde{T}$ must be norm-minimizing. Similarly, $c'\chi(\widetilde{S}')=a'\chi(\widetilde{R})+b'\chi(\widetilde{T})$. Then by convexity of the Thurston norm, $\widetilde{S}'$ is norm-minimizing. Now let $X:=\widetilde{X},R:=\widetilde{R},T:=\widetilde{T},S:=\widetilde{S}, S':=\widetilde{S}'$, $n:=n-1$. Continue until both ${\partial}R$ and ${\partial}T$ meet every component of ${\partial}X$, and always do so in distinct slopes. Let $G$ be the fat-vertex graph describing the intersection of $aR$ and $bT$, as in Construction \[construct\]. By the construction of Subsection \[sec:constructing\], $[S]$ must be type 3(iii). That is, there must be a product annulus $A$ (in the complementary sutured manifold to $S$) corresponding to a face of $G$ so that ${\partial}_+(A)$ bounds a genus-$g(S)$ region $\Sigma$ of ${\mathring}{S}_+$. But since $a',b'>0$, the cut-and-paste $c'S'=a'R+b'T$ must contain a subregion homeomorphic to $\Sigma$. Therefore, $g(S')\ge g(S)$. Finally, we are ready to induct on Theorem \[2compthm\]. Let $L$ be an $n$-component link $(n>1)$ in a rational homology $3$-sphere $Y$ with components $L_1,\ldots, L_n$. Assume $\operatorname{{lk}}(L_i,L_j)\neq 0$ for $i\neq j$ and $X:=Y\setminus\nu(L)$ has nondegenerate Thurston norm. Let $S$ be a properly norm-minimizing surface in $X$ meeting every component of ${\partial}X$. Let $\widehat{X}$ be the closed manifold obtained from $X$ by Dehn filling $X$ according to ${\partial}S$, and let $\widehat{S}\subset\widehat{X}$ be the closed surface obtained from capping off each boundary component of $S$ within the Dehn-filling solid tori. 
Let $\widetilde{Y}$ be the $3$-manifold obtained from $Y$ by surgering $Y$ along $L_3\sqcup\cdots\sqcup L_n$ according to ${\partial}S$. There exists an $(n-2)$-dimensional set of rays $E$ from the origin of $H_2(X,{\partial}X;{\mathbb{R}})\cong{\mathbb{R}}^n$ so that if $[S]\not\in E$, then either $\widehat{S}$ is norm-minimizing or $\widetilde{Y}\setminus\nu(L_1\sqcup L_2)$ has degenerate Thurston norm. Let $E$ contain the corners of the Thurston norm $x$. Since $x$ is nondegenerate, the corners of $x$ are contained in an $(n-2)$-dimensional set of rays in $H_2(X,{\partial}X;{\mathbb{R}})$ (e.g. if $n=3$, then the unit ball of $x$ is a $3$-dimensional polyhedron. The corners are scalar multiples of vertices and edges, so live in a $1$-dimensional set of rays). Assume $[S]\not\in E$, so $c[S]=a[R]+b[T]$ for two adjacent corners $[R],[T]$ and positive integers $a,b,c$. Choose $[R]$ and $[T]$ so that ${\partial}R$ and ${\partial}T$ meet $P_i$ in distinct (nonmultiple) slopes for each $i=1,\ldots, n$ (this holds outside of some $(n-1)$-dimensional subspaces of $H_2(X,{\partial}X;{\mathbb{R}})$ consisting of an $(n-2)$-dimensional collection of rays). Take $R$ and $T$ to be properly norm-minimizing surfaces with $cS=aR+bT$. Let $E$ contain all classes $\alpha\in H_2(X,{\partial}X;{\mathbb{R}})$ with ${\partial}\alpha\cap P_n$ of slope zero. Since $\operatorname{{lk}}(L_n,L_j)\neq 0$ for each $j<n$, this is another $(n-2)$-dimensional collection of rays. We add these classes to $E$ to ensure that Dehn-filling $P_n$ according to ${\partial}S$ yields a link complement in a rational homology sphere. Let $\operatorname{{lk}}_{ij}:=\operatorname{{lk}}(L_i,L_j)$. Note that after surgery of slope $q\in{\mathbb{Q}}$ on $L_n$, for $i\neq j<n$ the link components $L_i,L_j$ become $\widetilde{L}_i,\widetilde{L}_j\subset Y_q(L_n)$ with $\operatorname{{lk}}(\widetilde{L}_i,\widetilde{L}_j)=\operatorname{{lk}}_{ij}-q\operatorname{{lk}}_{in}\operatorname{{lk}}_{jn}$. 
For each $i\neq j<n$, let $E$ also contain all $\alpha\in H_2(X,{\partial}X;{\mathbb{R}})$ with ${\partial}\alpha\cap P_n$ of slope $\operatorname{{lk}}_{ij}/(\operatorname{{lk}}_{in}\operatorname{{lk}}_{jn})$. This is similarly a finite union of $(n-2)$-dimensional collections of rays. We add these classes to $E$ to ensure that Dehn-filling $P_n$ according to ${\partial}S$ yields a link complement of a link whose linking numbers are pairwise nonzero. Let $X_q$ be the result of Dehn-filling $P_n$ with slope $q$. The claim holds for $n=2$ by Theorem \[2compthm\]. (Note either Theorem \[2compthm\] applies in $\widetilde{Y}$, or $\widetilde{Y}$ has degenerate norm and the claim follows automatically.) As an inductive hypothesis, assume the theorem holds for $(n-1)$-component links, e.g. $L\setminus L_n\subset Y_{q}(L_n)$. Then there is an $(n-3)$-dimensional collection of rays $F$ of $H_2(X_q,{\partial}X_q;{\mathbb{R}})$ so that if $[S_q]$ is not in a ray in $F$, then $\widehat{S_q}$ is norm-minimizing (where $S_q$ is some properly norm-minimizing surface in $X_q$). For $q\in{\mathbb{Q}}\cup\{\pm \infty\}\setminus\{0\}$, let $E_q$ contain rays in $H_2(X,{\partial}X;{\mathbb{R}})$ whose elements $\beta$ meet $P_n$ in slope $q$ and map to elements of $F$ after capping off $\beta\cap P_n$ in $X_q$. Then $E_q$ is an $(n-3)$-dimensional collection of rays, and the union of all $E_q$ across $q\not\in\{0,\operatorname{{lk}}_{ij}/(\operatorname{{lk}}_{in}\operatorname{{lk}}_{jn})\text{ for some $i\neq j<n$}\}$ is an $(n-2)$-dimensional collection of rays; call this union $E'$, and enlarge $E$ so that $E\supset E'$. Then if $[S]$ is not in $E$, we find $\widehat{S}$ is norm-minimizing. 
--- abstract: 'We evaluate the equilibrium state of a mixture of $^7$Li and $^6$Li atoms with repulsive interactions, confined inside a pancake-shaped trap under conditions such that the thickness of the bosonic and fermionic clouds is approaching the values of the $s$-wave scattering lengths. In this regime the effective couplings depend on the axial confinement and full demixing can become observable by merely squeezing the trap, without enhancing the scattering lengths through recourse to a Feshbach resonance.' address: | $^1$NEST-INFM and Classe di Scienze, Scuola Normale Superiore, I-56126, Pisa, Italy\ $^2$Department of Physics, University of Istanbul, Istanbul, Turkey author: - 'Z. Akdeniz$^{1,2}$, P. Vignolo$^1$ and M. P. Tosi$^1\footnote{Author to whom any correspondence should be addressed ({\tt tosim@sns.it})}$' title: 'Boson-fermion demixing in a cloud of lithium atoms in a pancake trap' --- Introduction ============ Dilute boson-fermion mixtures have recently been studied in several experiments by trapping and cooling gases of mixed alkali-atom isotopes [@Schreck2001a; @Goldwin2002a; @Hadzibabic2002a; @Roati2002a; @Modugno2002a; @Ferlaino2003a]. The boson-fermion coupling strongly affects the equilibrium properties of the mixture and can lead to quantum phase transitions. In particular, it has been shown that boson-fermion repulsions in three-dimensional (3D) clouds can induce spatial demixing of the two components when the interaction energy overcomes the kinetic and confinement energies [@Molmer1998a]. Several configurations with different topology are possible for a demixed cloud inside a harmonic trap [@Akdeniz2002a]. 
Although spatial demixing has not yet been experimentally observed, the experiments of Schreck [*et al.*]{} [@Schreck2001a] on a $^6$Li-$^7$Li mixture inside an elongated 3D trap appear to be not far from the onset of a demixed state: an increase in the boson-fermion scattering length by a factor of five would be needed to enter the phase-separated regime [@Akdeniz2002b]. In this Letter we examine the possibility of attaining phase separation by merely varying the trap geometry for the $^6$Li-$^7$Li mixture at the “natural” values of the scattering lengths. We focus on the case of quasi-two-dimensional (Q2D) confinement inside a pancake-shaped trap. We find that this geometry favours the demixed state because the axial size of the atomic clouds enters their effective coupling in the azimuthal plane. We also show that several configurations are possible in the plane of the trap at zero temperature, but the configuration consisting of a core of bosons surrounded by a ring of fermions is the most energetically favourable for a wide range of values of system parameters. The model {#method} ========= The atomic clouds are trapped in pancake-shaped potentials given by $$\hspace{-2cm}V^{ext}_{b,f}(x,y,z)=m_{b,f} \omega^2_{x(b,f)}(x^2+\lambda^2_{b,f} y^2)/2+m_{b,f}\omega^2_{z(b,f)}z^2/2 \equiv V_{b,f}({x,y})+U_{b,f}(z)$$ where $m_{b,f}$ are the atomic masses and $\omega_{z(b,f)}\gg\omega_{x(b,f)}$ the trap frequencies. We focus on the case where the trap is flat enough that the dimensions of the clouds in the axial direction, which are of the order of the axial harmonic-oscillator lengths $a_{z(b,f)}=(\hbar/m_{b,f}\omega_{z(b,f)})^{1/2}$, are comparable to the 3D boson-boson and boson-fermion scattering lengths $a_{bb}$ and $a_{bf}$. 
In this regime we can study the equilibrium properties of the mixture at essentially zero temperature ($T\simeq 0.02\,T_F$) in terms of the particle density profiles in the $\{x,y\}$ plane, which are $n_{c}({x,y})$ for the Bose-Einstein condensate and $n_{f}({x,y})$ for the fermions. The profiles are evaluated using the Thomas-Fermi approximation for the condensate and the Hartree-Fock approximation for the fermions. We shall take $a_{zb}=a_{zf}$ $(=a_z$, say) and only near the end we shall discuss the case $\lambda_{b,f}\ne 1$. The Thomas-Fermi approximation assumes that the number of condensed bosons is large enough that the kinetic energy term in the Gross-Pitaevskii equation may be neglected [@Baym1996a]. It yields $$n_c({x,y})=[\mu_b-V_b({x,y})-g_{bf}n_f({x,y})]/g_{bb} \label{zehra1}$$ for positive values of the function in the square brackets and zero otherwise. Here, $\mu_b$ is the chemical potential of the bosons. This mean-field model is valid when the high-diluteness condition $n_ca_{bb}^2\ll 1$ holds and the temperature is outside the critical region. If the conditions $a_z> a_{bb},a_{bf}$ are fulfilled, the atoms experience collisions in three dimensions and the coupling constants in Eq. (\[zehra1\]) can be written in terms of the 3D ones as [@Lieb2001] $$g_{bb}=\frac{g_{bb}^{3D}} {\sqrt{2\pi} a_z}\;,\;\;\;g_{bf}=\frac{g_{bf}^{3D}}{\sqrt{2\pi} a_z} \label{2Dint}$$ with $g_{bb}^{3D}=4\pi\hbar^2a_{bb}/ m_b$, $g_{bf}^{3D}=2\pi\hbar^2a_{bf}/m_r$ and $m_r=m_bm_f/(m_b+m_f)$. The Hartree-Fock approximation [@Minguzzi1997a; @Amoruso1998a; @Vignolo2000b] treats the fermion cloud as an ideal gas subject to an effective external potential, that is $$n_{f}({x,y})=\int\frac{d^2p}{(2\pi\hbar)^2}\left\{\exp\left[\left( \frac{p^2}{2m_{f}} +V^{eff}({x,y})-\mu_{f}\right)/k_BT\right]+ 1\right\}^{-1}, \label{zehra4}$$ where $\mu_f$ is the chemical potential of the fermions and $$V^{eff}({x,y})=V_{f}({x,y})+g_{bf}n_c({x,y}). 
\label{zehra3}$$ The fermionic component has been taken as a dilute spin-polarized gas, for which the fermion-fermion $s$-wave scattering processes are inhibited by the Pauli principle and $p$-wave scattering is negligibly small [@Demarco1999b]. In the mixed regime the Fermi wave number in the azimuthal plane should be smaller than $1/a_{bf}$, but this is not a constraint in the regime of demixing where the boson-fermion overlap is rapidly dropping. The chemical potentials $\mu_{b,f}$ characterize the system in the grand-canonical ensemble and are determined by requiring that the integrals of the densities over the $\{x,y\}$ plane should be equal to the average numbers $N_b$ and $N_f$ of particles. The presence of a bosonic thermal cloud can be treated by a similar Hartree-Fock approximation [@Akdeniz2002b], but it has quite negligible effects at the temperatures of present interest. General conditions for phase separation {#tzero} ======================================= The effective strength of the atom-atom collisions in the azimuthal plane depends in our Q2D model on the axial harmonic-oscillator length according to Eq. (\[2Dint\]). We derive and illustrate in this section the consequences for demixing at very low temperatures. We preliminarily recall that in the macroscopic limit the phase transition is sharp and the overlap between the two species after demixing is restricted to the interfacial region, where it is governed by the surface kinetic energy (see Ref. [@Ao1998a] for an example in a 3D model). In the case of mesoscopic clouds under confinement the transition is instead spread out and several alternative definitions of the location of demixing can therefore be given. We discuss below three alternative locations, which we denote as partial, dynamical, and full demixing. Their definition and the derivation of simple analytical expressions in the Q2D model are given in the following. 
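Before turning to the three criteria, it may help to see Eq. (\[2Dint\]) in action. The following sketch is not part of the original analysis: it uses standard values for $\hbar$, the Bohr radius and the lithium masses, together with the “natural” scattering lengths quoted in the next subsection, and simply shows that both effective in-plane couplings scale as $1/a_z$ when the trap is squeezed:

```python
import math

hbar = 1.054571817e-34    # J s
a0 = 5.29177210903e-11    # Bohr radius, m
u = 1.66053906660e-27     # atomic mass unit, kg

m_b = 7.016 * u           # 7Li (bosons)
m_f = 6.015 * u           # 6Li (fermions)
m_r = m_b * m_f / (m_b + m_f)

a_bb = 5.1 * a0           # "natural" boson-boson scattering length
a_bf = 38.0 * a0          # "natural" boson-fermion scattering length

g3d_bb = 4.0 * math.pi * hbar**2 * a_bb / m_b
g3d_bf = 2.0 * math.pi * hbar**2 * a_bf / m_r

def g2d(g3d, a_z):
    """Effective in-plane coupling: g = g^{3D} / (sqrt(2*pi) * a_z)."""
    return g3d / (math.sqrt(2.0 * math.pi) * a_z)

for n in (100, 16, 1):    # a_z in units of a_bf, the range scanned below
    a_z = n * a_bf
    print(f"a_z = {n:3d} a_bf:  g_bb = {g2d(g3d_bb, a_z):.3e}  "
          f"g_bf = {g2d(g3d_bf, a_z):.3e}  (J m^2)")
```

The ratio $g_{bf}/g_{bb}=a_{bf}m_b/(2a_{bb}m_r)$ is independent of $a_z$; squeezing rescales both couplings together, which is why the thresholds below involve $a_z$ only through $\sqrt{a_z/a_{bb}}$.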
Partial demixing {#partial_sep} ---------------- The interaction energy $E_{int}$ between the boson and fermion clouds initially grows as the boson-fermion coupling is increased, but reaches a maximum and then falls off as the overlap between the two clouds diminishes. We locate partial demixing at the maximum of $E_{int}$, which we calculate from the density profiles according to the expression $$E_{int}=g_{bf} \int dx\,dy \, n_c(x,y) n_f(x,y).$$ In the calculations that we report in this section we have used values of system parameters as appropriate to the experiments in Paris on the $^6$Li-$^7$Li mixture [@Schreck2001a], that is $N_b =N_f\simeq 10^4$ and $\omega_{xb}/2\pi=4000$ Hz, $\omega_{xf}/2\pi=3520$ Hz. We first fix the two scattering lengths at their “natural” values ($a_{bb} = 5.1\,a_0$ and $a_{bf} = 38\,a_0$, with $a_0$ the Bohr radius) and vary the thickness $a_z$ of the trap from $100\,a_{bf}$ down to $a_{bf}$. By this choice we are increasing both the boson-boson and the boson-fermion effective coupling in the azimuthal plane. The behaviour of the interaction energy as a function of the ratio $a_{bf}/a_z$ at fixed $a_{bb}$ and $a_{bf}$ is shown in Fig. \[fig1\]. It is seen that $E_{int}$ goes through a maximum at $a_z \simeq 16\,a_{bf}$. As an alternative and with the aim of a later comparison with the results previously obtained in the 3D case [@Akdeniz2002a; @Akdeniz2002b], we show in Fig. \[fig2\] the interaction energy as a function of the ratio $a_{bf}/a_{bb}$ at fixed $a_{bb}$. The axial thickness $a_z$ has been set equal to the larger of the two scattering lengths. This condition fixes the limit of validity of our model [@Tanatar2002a]. The maximum of $E_{int}$ lies at $a_{bf}/a_{bb} \simeq 0.8$, much below the value $a_{bf}/a_{bb} \simeq 7$ of the ratio of the “natural” scattering lengths. In fact, at $a_{bf}/a_{bb} \simeq 7$ the two clouds would undergo demixing as the axial trap stiffness is increased towards a Q2D model. 
An approximate analytic formula for the location of partial demixing as a function of the system parameters in the Q2D model can be obtained from the condition $\partial E_{int}/\partial g_{bf}=0$ by using the estimate $n_{c,f}\approx N_{b,f}/(2 \pi R_{b,f}^2)$ with the values of the cloud radii in the absence of boson-fermion interactions, $R_f=(8 N_f/\lambda_f)^{1/4}a_{xf}$ and $R_b=(16 N_b a_{bb}/(\sqrt{2\pi}a_z\lambda_b))^{1/4}a_{xb}$, where $a_{x(b,f)}=(\hbar/m_{b,f}\omega_{x(b,f)})^{1/2}$. The location of the maximum in the boson-fermion interaction energy as a function of $a_{bf}/a_{bb}$ lies at $$\left.\frac{a_{bf}}{a_{bb}}\right|_{part}\simeq\gamma_{part} \left(\frac{a_z}{a_{bb}}\right)^{1/2} \label{partially0}$$ where $$\gamma_{part}=\left(c_1\frac{N_f^{1/2}}{N_b^{1/2}}+c_2\frac{N_b^{1/2}}{N_f^{1/2}}\right)^{-1} \label{partially}$$ with $$c_1=\left(\frac{2}{\pi}\right)^{1/4}\!\! \frac{m_f\omega_{xf}}{2m_r\omega_{xb}} \left(\frac{\lambda_f}{\lambda_b}\right)^{1/2}$$ and $$c_2=\left(\frac{2}{\pi}\right)^{1/4}\!\! \frac{m_b\omega_{xb}}{2m_r\omega_{xf}} \left(\frac{\lambda_b}{\lambda_f}\right)^{1/2}.$$ We recognise in Eq. (\[partially\]) a geometric combination of two scaling parameters: $c_1{N_f^{1/2}}/{N_b^{1/2}}$ is dominant in the case $N_b\ll N_f$ while $c_2{N_b^{1/2}}/{N_f^{1/2}}$ is dominant in the opposite limit. The prediction obtained from Eq. (\[partially0\]) is shown in Figs. \[fig1\] and \[fig2\] by a solid arrow. There is clearly good agreement between the analytical estimate and the numerical results. Dynamical demixing ------------------ On further increasing the boson-fermion coupling one reaches the point where the fermion density vanishes at the centre of the trap. This occurs when $$\left.\frac{g_{bf}}{g_{bb}}\right|_{dyn}=\frac{\mu_f}{\mu_b}. 
\label{pablo}$$ We denote this point as the dynamical location of the demixing in a mesoscopic cloud, since we expect a sharp upturn of the low-lying fermion-like collective mode frequencies to occur here, as is the case for both collisional and collisionless excitations in a mixture under 3D confinement [@Capuzzi2003a; @Capuzzi2003b]. If we insert the chemical potentials for ideal-gas clouds in Eq. (\[pablo\]), we obtain an approximate expression for the location of dynamical demixing, $$\left.\frac{a_{bf}}{a_{bb}}\right|_{dyn}\simeq\gamma_{dyn} \left(\frac{a_z}{a_{bb}}\right)^{1/2} \label{dyn0}$$ where $$\gamma_{dyn}=\left(\frac{\pi}{2}\right)^{1/4}\!\! \frac{2m_f}{m_b+m_f}\frac{\omega_{xf}}{\omega_{xb}} \left(\frac{N_f\lambda_f}{N_b\lambda_b}\right)^{1/2}.$$ The condition for dynamical demixing coincides with that for partial demixing in the limit $N_b\gg N_f$. The prediction obtained from Eq. (\[dyn0\]) is indicated in Figs. \[fig1\] and \[fig2\] by a dot-dashed arrow. Full demixing {#total_PS} ------------- The point of full demixing is reached when the boson-fermion overlap becomes negligible as in a macroscopic cloud. Using the instability criterion outlined in Ref. [@Viverit2000a] for the 3D case, the condition for full phase separation at $T = 0$ is $$\left.\frac{a_{bf}}{a_{bb}}\right|_{full}\simeq\gamma_{full} \left(\frac{a_z}{a_{bb}}\right)^{1/2} \label{condition}$$ where $$\gamma_{full}=\left(\sqrt{2\pi}\frac{2m_r}{m_b+m_f}\right)^{1/2}.$$ At variance with the condition for full phase separation in 3D, for Q2D confinement this criterion does not depend on the number of fermions, while $1/a_z$ plays the role of the Fermi momentum. The prediction obtained from Eq. (\[condition\]) is indicated in Figs. \[fig1\] and \[fig2\] by a short-dashed arrow. Density profiles at full demixing --------------------------------- In summary, it can be seen from Eqs. 
(\[partially0\]), (\[dyn0\]) and (\[condition\]) that the two control parameters for the transition in a mesoscopic cloud under Q2D confinement are $a_{bf}/a_{bb}$ and $a_z/a_{bb}$. Our main conclusion is that in a Q2D geometry a relatively small value of the boson-fermion scattering length suffices to reach the regime of full phase separation. In the case of the $^6$Li-$^7$Li mixture the bare values of the boson-boson and boson-fermion scattering lengths fulfill the condition for full phase separation for values of $a_z$ lower than 10 $a_{bf}$. In view of the above result, it is useful to illustrate in Fig. \[fig3\] the density profiles to be expected in the fully demixed regime for a mixture under isotropic confinement in the azimuthal plane (notice that a circular disc appears in Fig. \[fig3\] as an oval, because of the different scales used on the horizontal and vertical axes). In fact, we have found various metastable configurations for the demixed mesoscopic cloud in addition to the thermodynamically stable one consisting of a core of bosons surrounded by a ring of fermions. In Fig. 3 we show top views of the density profiles in the $\{x,y\}$ plane for the choice of parameters $a_{bb}=5.1\,a_0$, $a_{bf}=38\,a_0$ and $a_z=a_{bf}$. The most energetically stable configuration is shown in Fig. \[fig3\](a) and lies at energy $E= 74.90\,N\hbar\omega_{xb}$. Figure \[fig3\](b) shows a configuration with fermions at the centre surrounded by a ring of bosons inside a fermion cloud ($E = 78.94 \,N\hbar\omega_{xb}$). In Fig. \[fig3\](c) a fermion slice lies between two boson slices surrounded by a fermion cloud ($E = 79.04\,N\hbar\omega_{xb}$). Finally, Fig. \[fig3\](d) shows an asymmetric configuration in which a core of bosons is displaced away from the centre of the trap ($E = 80.20\,N\hbar\omega_{xb}$). The latter two configurations break the symmetry of the trap, and this is an effect due to the finite size of the system. 
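As an aside, the scaling prefactors $\gamma_{part}$, $\gamma_{dyn}$ and $\gamma_{full}$ of Eqs. (\[partially0\]), (\[dyn0\]) and (\[condition\]) are straightforward to evaluate numerically. The sketch below is illustrative only: it assumes $\lambda_{b,f}=1$ and the trap parameters of Sec. \[partial\_sep\] (masses enter only through ratios), checks that the partial and dynamical criteria merge for $N_b\gg N_f$ as stated above, and inverts the full-demixing condition to estimate the critical pancake thickness at the natural scattering lengths:

```python
import math

m_b, m_f = 7.016, 6.015          # 7Li and 6Li masses (only ratios matter)
m_r = m_b * m_f / (m_b + m_f)
w_xb, w_xf = 4000.0, 3520.0      # radial trap frequencies / 2*pi, in Hz
lam_b = lam_f = 1.0              # isotropic confinement in the plane

def gamma_part(N_b, N_f):
    c1 = (2 / math.pi)**0.25 * (m_f * w_xf) / (2 * m_r * w_xb) * math.sqrt(lam_f / lam_b)
    c2 = (2 / math.pi)**0.25 * (m_b * w_xb) / (2 * m_r * w_xf) * math.sqrt(lam_b / lam_f)
    return 1.0 / (c1 * math.sqrt(N_f / N_b) + c2 * math.sqrt(N_b / N_f))

def gamma_dyn(N_b, N_f):
    return ((math.pi / 2)**0.25 * 2 * m_f / (m_b + m_f)
            * (w_xf / w_xb) * math.sqrt(N_f * lam_f / (N_b * lam_b)))

gamma_full = math.sqrt(math.sqrt(2 * math.pi) * 2 * m_r / (m_b + m_f))

# Invert a_bf/a_bb > gamma_full * sqrt(a_z/a_bb) to get the critical
# pancake thickness at the natural scattering lengths:
a_bb, a_bf = 5.1, 38.0           # in units of the Bohr radius
a_z_crit = (a_bf / a_bb)**2 / gamma_full**2 * a_bb
print(gamma_part(1e4, 1e4), gamma_dyn(1e4, 1e4), gamma_full)
print(a_z_crit / a_bf)           # critical a_z in units of a_bf
```

With $N_b=N_f$ all three prefactors are of order unity, so the thresholds differ mainly through the ratio $a_z/a_{bb}$, consistent with the close predictions noted in the concluding section.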
Configurations (b)-(d) are structurally but not energetically stable, [*i.e.*]{} they represent metastable structures in local energy minima. They have been obtained by using density profiles with various topologies as the initial guess for the self-consistent solution of Eqs. (\[zehra1\]) and (\[zehra4\]). In experiments bulk modes with the appropriate symmetry may be exploited to attain these “exotic” configurations [@Akdeniz2003a]. We have also studied the behaviour of the configurations shown in Fig. \[fig3\] as a function of the scattering lengths ($5.1 a_0<a_{bb}<3\times 10^5 a_0$ and $38 a_0<a_{bf}<5\times 10^7 a_0$) and the planar-anisotropy parameters ($10^{-3}<\lambda_{b,f}<400$). In contrast to the case of an elongated 3D confinement studied in our previous work [@Akdeniz2002a], where the lowest-energy configuration can have different topologies depending on the system parameters, we have found that the configuration shown in Fig. \[fig3\](a) remains the most energetically stable for the Q2D geometry. A configuration consisting of a central core of fermions surrounded by a cloud of bosons may, however, be the stable one for sufficiently strong boson-boson repulsions [@Akdeniz2002a]. Summary and concluding remarks {#secconcl} ============================== In summary, we have focussed on a boson-fermion mixture confined inside pancake-shaped traps, such that the scattering events can still be considered three-dimensional but nevertheless affected by the vertical confinement. By using a mean-field description for the equilibrium densities in the azimuthal plane, we have studied the boson-fermion interaction energy as a function of the thickness of the atomic clouds and of the boson-boson and boson-fermion scattering lengths. 
We have given approximate analytical expressions identifying three critical regimes in terms of the scaling parameters $a_{bf}/a_{bb}$ and $a_z/a_{bb}$: (i) partial demixing, where the boson-fermion interaction energy attains its maximum value through a balance between increasing interactions and diminishing overlap; (ii) dynamical demixing, where the fermionic density drops to zero at the centre of the trap and a sharp dynamical signature of demixing may be expected; and (iii) full demixing, where the boson-fermion overlap is negligible as in the macroscopic limit. These different criteria for the location of the phase transition in a mesoscopic cloud yield quite close predictions in a pancake-shaped trap under strong axial confinement. We have found several metastable configurations for the demixed cloud, having various topologies but lying at higher energy than the stable configuration, which consists of a core of condensed bosons surrounded by fermions. We have verified that this is the minimum-energy configuration over extensive ranges of values for the coupling constants, the anisotropy in the plane of the trap, and the thickness of the trap. A main result of our work is that full demixing can be reached in the $^6$Li-$^7$Li mixture by merely tuning the thickness of the trap, without necessarily tuning the scattering lengths. The full-demixing critical value of about $10\,a_{bf}$ for the thickness is not attainable in actual experiments with the bare value of the $^6$Li-$^7$Li scattering length, and a combination of squeezing of the pancake-shaped trap and exploiting Feshbach resonances should be used. By contrast, the dynamical signature of demixing may be observed by merely varying the number of particles, without enhancing the scattering length. In fact, for a mixture of $10^6$ atoms of $^7$Li and $10^4$ atoms of $^6$Li trapped in the radial confinement discussed in Sec. 
\[partial\_sep\], it may be possible to observe a sharp upturn of the low-lying fermionic modes for a pancake thickness of the order of $\sim 600\,a_{bf}\simeq 1\,\mu$m. This work was partially supported by INFM under PRA-Photonmatter. ZA also acknowledges support from TUBITAK and from the Research Fund of the University of Istanbul under Project Number 161/15102003. [xx]{} F. Schreck, L. Khaykovich, K.L. Corwin, G. Ferrari, T. Bourdel, J. Cubizolles, C. Salomon, Phys. Rev. Lett. 87 (2001) 080403. J. Goldwin, S.B. Papp, B. DeMarco, D.S. Jin, Phys. Rev. A 65 (2002) 021402. Z. Hadzibabic, C.A. Stan, K. Dieckmann, S. Gupta, M.W. Zwierlein, A. Görlitz, W. Ketterle, Phys. Rev. Lett. 88 (2002) 160401. G. Roati, F. Riboli, G. Modugno, M. Inguscio, Phys. Rev. Lett. 89 (2002) 150403. G. Modugno, G. Roati, F. Riboli, F. Ferlaino, R.J. Brecha, M. Inguscio, Science 297 (2002) 2240. F. Ferlaino, R.J. Brecha, P. Hannaford, F. Riboli, G. Roati, G. Modugno, M. Inguscio, J. Opt. B 5 (2003) S3. K. M[ø]{}lmer, Phys. Rev. Lett. 80 (1998) 1804. Z. Akdeniz, A. Minguzzi, P. Vignolo, M.P. Tosi, Phys. Rev. A 66 (2002) 013620. Z. Akdeniz, P. Vignolo, A. Minguzzi, M.P. Tosi, J. Phys. B 35 (2002) L105. G. Baym, C.J. Pethick, Phys. Rev. Lett. 76 (1996) 6. E.H. Lieb, R. Seiringer, J. Yngvason, Commun. Math. Phys. 224 (2001) 17. A. Minguzzi, S. Conti, M.P. Tosi, J. Phys.: Condens. Matter 9 (1997) L33. M. Amoruso, A. Minguzzi, S. Stringari, M.P. Tosi, L. Vichi, Eur. Phys. J. D 4 (1998) 261. P. Vignolo, A. Minguzzi, M.P. Tosi, Phys. Rev. A 62 (2000) 023604. B. DeMarco, J.L. Bohn, J.P. Burke Jr., M. Holland, D.S. Jin, Phys. Rev. Lett. 82 (1999) 4208. P. Ao, S.T. Chui, Phys. Rev. A 58 (1998) 4836. B. Tanatar, A. Minguzzi, P. Vignolo, M.P. Tosi, Phys. Lett. A 302 (2002) 131. P. Capuzzi, A. Minguzzi, M.P. Tosi, Phys. Rev. A 67 (2003) 053605. P. Capuzzi, A. Minguzzi, M.P. Tosi, Phys. Rev. A 68 (2003) 033605. L. Viverit, C.J. Pethick, H. Smith, Phys. Rev. A 61 (2000) 053605. Z. Akdeniz, A. Minguzzi, P. 
Vignolo, Laser Phys. 13 (2003) 577.
--- abstract: 'We verify the conjecture that the sixth binary partition function [@A007729] is equal (aside from the initial zero term) to the partial sums of the Stern-Brocot sequence [@A174868]: $$0,1, 2, 4, 5, 8, 10, 13, 14, 18, 21, 26, 28, 33, 36, 40, 41, 46\ldots$$.' author: - | Michael J. Collins\ Daniel H. Wagner Associates\ mjcollins10@gmail.com\ \ David Wilson\ davidwwilson710@gmail.com. bibliography: - 'EquivSeq.bib' title: Equivalence of OEIS A007729 and A174868 --- Let $b'_k$ be the *sixth binary partition function*, which is the number of ways to write $k$ as a sum $$k = \sum_{i \geq 0} \varepsilon_i 2^i$$ with $\varepsilon_i \in \{0,1,2,3,4,5\}$. We obtain $$b'_{2k} = b'_k + b'_{k-1} + b'_{k-2}$$ by counting the number of representations of $2k$ with $\varepsilon_0 = 0,2$ or $4$; in each case we have a representation $\hat \varepsilon$ of $\frac{2k-\varepsilon_0}{2}$ by taking $\hat\varepsilon_i = \varepsilon_{i+1}$, and the correspondence is clearly one-to-one. Also $b'_{2k+1}=b'_{2k}$, since we can get a representation of $2k+1$ only by taking a representation of $2k$ and adding 1 to $\varepsilon_0$. Thus $$\begin{aligned} b'_{2k} &= 2b'_{k-1} + b'_k \ \ \ (k \mbox{ even}) \\ b'_{2k} &= 2b'_{k-1} + b'_{k-2} \ \ \ (k \mbox{ odd}) \end{aligned}$$ since $b'_{k-1} $ equals either $b'_k$ or $b'_{k-2}$. We eliminate the even/odd repetition by defining $b_k = b'_{2k}$. Then $b_0=1,b_1=2$ and $$\begin{aligned} b_{2k} = 2b'_{2k-1} + b'_{2k} &= 2b_{k-1} + b_k\\ b_{2k+1} = 2b'_{2k} + b'_{2k-1} &= 2b_k + b_{k-1} \ . \end{aligned}$$ This is A007729. If we prepend a zero, defining $\hat b_0=0$ and $\hat b_k = b_{k-1}$ we obtain $$\begin{aligned} \hat b_{2k} &= 2 \hat b_k + \hat b_{k-1} \\ \hat b_{2k+1} &= 2 \hat b_k + \hat b_{k+1}\ . \end{aligned}$$ The same recurrence with the same initial conditions gives A174868, the partial sums of the Stern-Brocot sequence [@A002487]. 
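The recurrences above can be checked numerically by brute force. The following sketch (our own illustration, not part of the paper) counts sixth binary partitions directly and verifies both the recurrence for $b'$ and the first terms of A007729:

```python
# Brute-force check (our own sketch) of the recurrences above. b6(k) counts the
# sixth binary partitions of k, i.e. representations k = sum_i eps_i 2^i with
# digits eps_i in {0, 1, 2, 3, 4, 5}.
from functools import lru_cache

@lru_cache(maxsize=None)
def b6(k):
    if k < 0:
        return 0
    if k == 0:
        return 1
    # eps_0 must have the parity of k; the remaining digits encode (k - eps_0) / 2
    return sum(b6((k - e) // 2) for e in range(6) if e % 2 == k % 2)

# b'_{2k} = b'_k + b'_{k-1} + b'_{k-2}  and  b'_{2k+1} = b'_{2k}
for k in range(2, 200):
    assert b6(2 * k) == b6(k) + b6(k - 1) + b6(k - 2)
    assert b6(2 * k + 1) == b6(2 * k)

# b_k = b'_{2k} gives A007729
print([b6(2 * k) for k in range(8)])  # → [1, 2, 4, 5, 8, 10, 13, 14]
```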
The Stern-Brocot sequence itself can be defined by $s_0=0, s_1=1$, and $$\begin{aligned} s_{2k} &= s_k \\ s_{2k+1} &= s_k + s_{k+1}\ . \end{aligned}$$ The partial sums are $\sigma_k = \sum_{0 \leq i \leq k} s_i$. Letting $\ell_j = s_{2j-1}+s_{2j} = 2s_j+s_{j-1}$ we get $$\sigma_{2k} = \sum_{1 \leq j \leq k} \ell_j = 2\sigma_k + \sigma_{k-1}$$ and similarly with $\ell'_j = s_{2j}+s_{2j+1} = 2s_j+s_{j+1}$, $$\sigma_{2k+1} = \sum_{0 \leq j \leq k} \ell'_j = 2\sigma_k + \sigma_{k+1} \ .$$
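Likewise, the recurrence for the partial sums $\sigma_k$, which share the initial values $\sigma_0=0$, $\sigma_1=1$ with $\hat b$, can be verified directly (our own sketch):

```python
# Direct verification (our own sketch) that the partial sums of the Stern-Brocot
# (Stern diatomic) sequence obey the recurrence derived above, with the same
# initial values sigma_0 = 0, sigma_1 = 1 as the zero-prepended sequence b-hat.
from functools import lru_cache
from itertools import accumulate

@lru_cache(maxsize=None)
def stern(n):
    # s_0 = 0, s_1 = 1, s_{2k} = s_k, s_{2k+1} = s_k + s_{k+1}
    if n < 2:
        return n
    return stern(n // 2) if n % 2 == 0 else stern(n // 2) + stern(n // 2 + 1)

N = 200
sigma = list(accumulate(stern(n) for n in range(N)))  # sigma_k = s_0 + ... + s_k

# sigma_{2k} = 2 sigma_k + sigma_{k-1}  and  sigma_{2k+1} = 2 sigma_k + sigma_{k+1}
for k in range(1, N // 2 - 1):
    assert sigma[2 * k] == 2 * sigma[k] + sigma[k - 1]
    assert sigma[2 * k + 1] == 2 * sigma[k] + sigma[k + 1]

print(sigma[:10])  # → [0, 1, 2, 4, 5, 8, 10, 13, 14, 18]
```

Since both sequences satisfy the same recurrence with the same initial values, this mirrors the equivalence argument of the note.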
--- abstract: | For an inverse semigroup $S$ with the set of idempotents $E$ and a minimal idempotent, we find necessary and sufficient conditions for the Fourier algebra $A(S)$ to be module amenable, module character amenable, module (operator) biflat, or module (operator) biprojective. 2000 [*Mathematics Subject Classification*]{}: Primary 46L07; Secondary 46H25, 43A07. address: - ' Department of Mathematics, Faculty of Mathematical Sciences, Tarbiat Modares University, Tehran 14115–134, Iran' - 'Department of Mathematics, Garmsar Branch, Islamic Azad University, Garmsar, Iran' - 'School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Enghelab Ave., Tehran, Iran' author: - Massoud Amini - Abasalt Bodaghi - Reza Rezavand bibliography: - 'JAMS-paper.bib' title: Module cohomological properties of the Fourier algebra of an inverse semigroup --- Introduction {#s1} ============ The concept of module amenability for a class of Banach algebras, which is in fact a generalization of classical amenability, was developed by the first author in [@A]. Indeed, he defined module amenability for a Banach algebra $\mathcal A$ endowed with an extra ${\mathfrak A}$-module structure and showed that for every inverse semigroup $S$ with subsemigroup $E$ of idempotents, the $\ell^1(E)$-module amenability of $\ell^1(S)$ is equivalent to the amenability of $S$. This notion of amenability was later generalized via module homomorphisms in [@bodaghi]. Motivated by $\phi$-amenability and character amenability, which are studied in [@kan1] and [@mon], the first and second authors [@bod1] introduced the concept of module $(\phi,\varphi)$-amenability for Banach algebras and investigated module character amenable Banach algebras. 
They characterized the module $(\phi,\varphi)$-amenability of a Banach algebra $\mathcal A$ through vanishing of the first Hochschild module cohomology group $\mathcal H^1_{\mathfrak A}(\mathcal A, X^*)$ for certain Banach $\mathcal A$-bimodules $X$. They also proved that $\ell^1(S)$ is module character amenable (as an $\ell^1(E)$-module) if and only if $S$ is amenable; for generalized notions of module character amenability refer to [@bel]. The first and third authors defined the Fourier algebra $A(S)$ of an inverse semigroup $S$ as the predual of the semigroup von Neumann algebra $L(S)$ in [@AR] and showed that the co-multiplication on $L(S)$ is implemented by a multiplicative partial isometry $W$ in $\mathcal B(\ell^2(S \times S))$. The co-algebra structure of $L(S)$ induces a canonical algebra structure on $A(S)$, making it a completely contractive Banach algebra. Since the set of idempotents $E$ of $S$ naturally acts on $S$ (by multiplication), $A(S)$ has an extra structure of a Banach module over the semigroup algebra $\ell^1(E)$. It is shown in [@AR] that if $S$ is an amenable inverse semigroup with the set of idempotents $E$ and a minimal idempotent, then the Fourier algebra $A(S)$ is module operator amenable, as a completely contractive Banach algebra and an operator module over $\ell^1(E)$. There are some other concepts related to the notions of module amenability, such as module biprojectivity and module biflatness, introduced by the first and second authors in [@boa]. These concepts are module versions of biprojectivity and biflatness for Banach algebras, which were introduced by Helemskii [@hel]. For every inverse semigroup $S$ with subsemigroup $E$ of idempotents, the authors found in [@boa] necessary and sufficient conditions for $\ell^1(S)$ to be module biprojective and module biflat (as an $\ell^{1}(E)$-module). Module biflatness for the second dual of Banach algebras is studied in [@bj]. 
In this paper we give necessary and sufficient conditions for the Fourier algebra $A(S)$ to be module amenable, module character amenable, module (operator) biflat, or module (operator) biprojective. Notations and Preliminary Results {#s2} ================================= Module and operator structure ----------------------------- In this subsection we deal with a completely contractive Banach algebra $A$ which is a Banach module over another Banach algebra $\mathfrak A$ with compatible actions. Our reference for operator spaces is [@ER]. If $E$ and $F$ are operator spaces, $T: E\longrightarrow F$ is a linear map, and $T^{(n)}: M_n(E)\longrightarrow M_n(F)$ is the $n$-th amplification of $T$ to a linear map on the corresponding matrix spaces, then $T$ is called completely bounded if $\|T\|_{cb} := \sup_{n\in\mathbb N} \|T^{(n)}\|< \infty$. When $\|T\|_{cb}\leq 1$, $T$ is called a complete contraction. A completely contractive Banach algebra $A$ is a Banach algebra which is also an operator space such that the multiplication map is a complete contraction from $A\hat\otimes A$ to $A$ [@Rua], where $A\hat\otimes A$ is the operator projective tensor product of $A$ by itself. If $\mathfrak A$ is a Banach algebra and $A$ is a completely contractive Banach algebra and a Banach $\mathfrak A$-module with compatible actions, $$\alpha\cdot (ab)=(\alpha\cdot a)b \,\,\,(a,b\in A,\alpha\in \mathfrak A),$$ and similarly for the right action, then we say that $A$ is a [*Banach $\mathfrak A$-module*]{}. Note that, by assumption, $$(\alpha\beta)\cdot a=\alpha\cdot(\beta\cdot a)\quad \,\,\,(a\in A,\alpha,\beta\in \mathfrak A).$$ We know that $A\hat{\otimes}_{\mathfrak A}A\cong (A\hat{\otimes}A)/I$ where $I$ is the closed ideal generated by elements of the form $a\cdot\alpha \otimes b- a\otimes \alpha\cdot b$, for $\alpha\in \mathfrak A,\,a,b\in A$. This is an operator space which inherits its operator space structure from $A\hat{\otimes}A$ [@ER Proposition 3.1.1]. 
We define $\omega:A\hat{\otimes}A\longrightarrow A$ by $\omega(a\otimes b) = ab$, and $\tilde{\omega}: A\hat{\otimes}_{\mathfrak A} A\cong(A\hat{\otimes}A)/I \longrightarrow A/J$ by $$\tilde{\omega}(a\otimes b + I)= ab + J\,\,\,\,\,\ (a,b\in A),$$ both extended by linearity and continuity, where $J=\overline{\langle \omega(I)\rangle}$ is the closed ideal of $A$ generated by $\omega(I)$. Then $\tilde{\omega}, \tilde{\omega}^{**}$ are $A$-$\mathfrak A$-module homomorphisms [@A], and since $\omega$ is a complete contraction, so is $\tilde{\omega}$. If $V$ is a Banach $A$-module and a Banach $\mathfrak A$-module with compatible actions, $$\alpha\cdot(a\cdot x)=(\alpha\cdot a)\cdot x,\,\ (a\cdot\alpha)\cdot x=a\cdot(\alpha\cdot x)$$ $$(\alpha\cdot x)\cdot a= \alpha\cdot(x\cdot a) \,\,\,\ (a\in A, \alpha \in \mathfrak A, x\in V),$$ and the same for the right or two-sided actions, such that the module actions of $A$ on $V$ are completely bounded, then $V$ is called an [*operator $A$-$\mathfrak A$-module*]{}. If moreover $$\alpha\cdot x=x\cdot\alpha \,\,\,\,\,\ (\alpha \in \mathfrak A, x\in V),$$ then $V$ is called a [*commutative*]{} $\mathfrak A$-module. A left trivial action of $\mathfrak A$ on $A$ is defined as $\alpha\cdot a= f(\alpha)a$ for $\alpha\in \mathfrak A, a\in A$, where $f$ is a (continuous) character of $\mathfrak A$. Given an operator $A$-$\mathfrak A$-module $V$, a bounded map $D:A\longrightarrow V$ is called a [*module derivation*]{} if $$D(a\pm b)= D(a) \pm D(b),\,\,\,\,\ D(ab)= D(a)\cdot b+ a\cdot D(b) \,\,\,\,\ (a,b\in A),$$ and $$D(\alpha\cdot a)= \alpha\cdot D(a),\,\,\,\,\,\ D(a\cdot\alpha)= D(a)\cdot\alpha \,\,\,\,\ (\alpha\in \mathfrak A, a\in A).$$ Note that $D$ is not assumed to be $\mathbb{C}$-linear and that $D:A\longrightarrow V$ is bounded if there exists $M>0$ such that $\|D(a)\|\leq M\|a\|$, for each $a\in A$ [@A]. 
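The Leibniz identity $D(ab)= D(a)\cdot b+ a\cdot D(b)$ in the definition above can be checked mechanically for commutator maps $a\mapsto a\cdot v-v\cdot a$, which are the basic examples of derivations. A toy check (our own illustration, with $2\times 2$ integer matrices standing in for algebra and module elements):

```python
# Toy check (our own sketch): any commutator map D(a) = a v - v a satisfies the
# Leibniz rule D(ab) = D(a) b + a D(b). Matrices stand in for algebra elements.

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def add(a, b):
    return tuple(tuple(a[i][j] + b[i][j] for j in range(2)) for i in range(2))

def sub(a, b):
    return tuple(tuple(a[i][j] - b[i][j] for j in range(2)) for i in range(2))

def inner_derivation(v):
    # D(a) = a v - v a
    return lambda a: sub(mul(a, v), mul(v, a))

v = ((0, 1), (2, 3))
D = inner_derivation(v)

a = ((1, 2), (0, 1))
b = ((3, 0), (1, 2))

# D(ab) = D(a) b + a D(b)
lhs = D(mul(a, b))
rhs = add(mul(D(a), b), mul(a, D(b)))
assert lhs == rhs
```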
Let $$\|D\|= \sup_{a\neq 0}\|D(a)\|/\|a\|,$$ then for $D^{(n)}: M_{n}(A)\longrightarrow M_{n}(V)$, we have $\|D^{(n)}\|\leq n^{2}\|D\|$, hence $D^{(n)}$ is bounded for each $n$. If $\sup \|D^{(n)}\|<\infty$, we say that $D$ is [*completely bounded*]{}. A module derivation $D$ is called [*inner*]{} if there exists $v\in V$ such that $$D(a)= a\cdot v-v\cdot a \,\,\,\,\,(a\in A).$$ The Banach algebra $A$ is called [*module operator amenable*]{} (as an $\mathfrak A$-module) if for every commutative operator $A$-$\mathfrak A$-module $V$, each completely bounded module derivation $D:A\longrightarrow V^{*}$ is inner. A bounded net $\{\tilde{u_{i}}\}$ in $A\hat{\otimes}_{\mathfrak A}A$ is called a [*module operator approximate diagonal*]{} if $\tilde{\omega}(\tilde{u_{i}})$ is a bounded approximate identity of $A/J$ and $$\lim\parallel\tilde{u_{i}}\cdot a-a\cdot\tilde{u_{i}}\parallel=0\,\,\,\,\ (a\in A).$$ An element $\tilde{M}\in (A\hat{\otimes}_{\mathfrak A}A)^{**}$ is called a [*module operator virtual diagonal*]{} if $$\tilde{\omega}^{**}(\tilde{M})\cdot a=\tilde{a}, \,\,\,\,\,\ \tilde{M}\cdot a=a\cdot\tilde{M} \,\,\,\,\,\ (a\in A),$$ where $\tilde{a}=a+ J^{\bot\bot}$ (see [@A; @AR] for relation to module amenability, and [@A2] for a correction). Module character amenability ---------------------------- Let $\mathfrak A$ be a Banach algebra with character space $\Phi_{\mathfrak A}$ and let $\mathcal A$ be a Banach $\mathfrak A$-bimodule with compatible actions. 
Let $\varphi\in\Phi_{\mathfrak A} \cup\{0\}$ and consider the set $\Omega_{\mathcal A}$ of linear maps $\phi: \mathcal A\longrightarrow\mathfrak A$ such that $$\phi(ab)=\phi(a)\phi(b),\quad \phi(a\cdot\alpha)=\phi(\alpha\cdot a)= \varphi(\alpha)\phi(a)\quad(a\in \mathcal A,\,\alpha\in \mathfrak A)$$ A bounded linear functional $m: \mathcal A^*\longrightarrow \mathbb{C}$ is called a [*module $(\phi,\varphi)$-mean*]{} on $\mathcal A^*$ if $m(f\cdot a)=\varphi\circ\phi(a)m(f)$, $m(f\cdot\alpha)=\varphi(\alpha)m(f)$ and $m(\varphi\circ\phi)=1$ for each $f\in \mathcal A^*, a\in \mathcal A$ and $\alpha\in \mathfrak A$. We say $\mathcal A$ is [*module $(\phi,\varphi)$-amenable*]{} if there exists a module $(\phi,\varphi)$-mean on $\mathcal A^*$. Also $\mathcal A$ is called [*module character amenable*]{} if it is module $(\phi,\varphi)$-amenable for each $\phi\in \Omega_{\mathcal A}$ and $\varphi\in\Phi_{\mathfrak A} \cup\{0\}$. Note that if $\mathfrak A=\mathbb{C}$ and $\varphi$ is the identity map then the module $(\phi,\varphi)$-amenability coincides with $\phi$-amenability [@kan1]. It is shown in [@bod1 Corollary 2.4] that module character amenability of $\mathcal A$ implies module character amenability of $\mathcal A/J.$ We restate Remark 2.5 of [@bod1] for the sake of completeness as follows: Let $\phi\in \Omega_{\mathcal A}, \varphi\in\Phi_{\mathfrak A}$ and let $\mathcal A$ be module $(\phi,\varphi)$-amenable. Clearly $\phi((a\cdot\alpha)b-a(\alpha\cdot b))=0$, hence $\phi=0$ on $J$ and $\phi$ lifts to $\tilde \phi: \mathcal A/J\longrightarrow \mathfrak A$ and clearly $P:=\varphi\circ\tilde \phi$ is a character of $ \mathcal A/J$. Since $\mathcal A$ is module (right) character amenable there is $m\in \mathcal A^{**}$ such that $m(f\cdot a)=\varphi\circ\phi(a)m(f)$. 
Let $M$ be the restriction of $m$ to $J^\perp$. Then for each $F\in J^\perp$, the functional $f(a)=F(a+J)$ is well defined, $m(f)=M(F)$, $\langle f\cdot a,b\rangle=\langle F\cdot(a+J),(b+J)\rangle$, and $M(F\cdot(a+J))=m(f\cdot a)=\varphi\circ\phi(a)m(f)=P(a+J)M(F)$. This shows that $\mathcal A/J$ is $(\varphi\circ\tilde \phi)$-amenable. If every character $P$ of $\mathcal A/J$ could also be constructed as above, this argument would show that module character amenability of $\mathcal A$ implies character amenability of $\mathcal A/J$. Recall that a left Banach $\mathcal A$-module $X$ is called [*[left essential]{}*]{} if the linear span of $\mathcal A \cdot X=\{a\cdot x : a\in \mathcal A, \, x\in X\}$ is dense in $X$. Right essential $\mathcal A$-modules and two-sided essential $\mathcal A$-bimodules are defined similarly. We remark that if $\mathcal A$ is a left (right) essential $\mathfrak{A}$-module, then every $\mathfrak A$-module derivation is also a derivation; in fact, it is linear. For if $a\in \mathcal A$, there is a sequence $(F_n)$ in the linear span of $\mathfrak A\cdot \mathcal A$ such that $\lim_n F_n=a$. Assume that $F_n=\sum_{m=1}^{T_n} \alpha_{n,m}\cdot a_{n,m}$ for some finite sequences $(\alpha_{n,m})_{m=1}^{m=T_n}\subseteq \mathfrak{A}$ and $(a_{n,m})_{m=1}^{m=T_n}\subseteq \mathcal A$. Then for each $\lambda\in \mathbb{C}$, $$\begin{aligned} D(\lambda F_n) \!\! \! & = \!\! \! & D\Big(\lambda \sum_{m=1}^{T_n} \alpha_{n,m} \cdot a_{n,m}\Big)= \sum_{m=1}^{T_n} D\big((\lambda\alpha_{n,m}) \cdot a_{n,m}\big) \vspace{0.2cm} \\ & = \!\! \! & \sum_{m=1}^{T_n} (\lambda\alpha_{n,m}) \cdot D(a_{n,m}) = \sum_{m=1}^{T_n} \lambda D(\alpha_{n,m} \cdot a_{n,m})= \lambda D(F_n),\end{aligned}$$ and so, by the continuity of $D$, $D(\lambda a)=\lambda D(a)$, if we assume that $\mathcal A$ is a left essential $\mathfrak{A}$-module. Therefore, we have the following result. \[char\] Let $\mathcal A$ be a left (right) essential Banach $\mathfrak{A}$-module. 
Then, $\mathcal A$ is module character amenable if and only if $\mathcal A/J$ is character amenable. We include the proof for the case of a left $\mathfrak{A}$-module. The other case is similar. First note that by the above discussion, the module character amenability of $\mathcal A$ implies the character amenability of $\mathcal A/J$. For the converse, let $\phi\in \Omega_{\mathcal A}$ and $\varphi\in\Phi_{\mathfrak A}$. Assume that $X$ is a Banach $\mathcal A$-module and Banach ${\mathfrak A}$-module such that $a\cdot x=\phi(a)\cdot x$ and $\alpha\cdot x=x\cdot\alpha=\varphi(\alpha)x$ for all $x\in X, a\in \mathcal A$ and $\alpha\in \mathfrak A$, and let $D: \mathcal A \longrightarrow X^*$ be a module derivation. Since $X$ is a commutative Banach ${\mathcal A}$-${\mathfrak A}$-module, the following module actions are well defined: $$(a+J)\cdot x:=\varphi\circ\phi(a)x, \hspace{0.2cm}x\cdot (a+J):=x\cdot a \hspace{0.3cm} (x \in X ,a \in \mathcal A),$$ and so $X$ is a Banach $\mathcal A/J$-module. Consider $\tilde{D}: \mathcal A/J\longrightarrow X^* $ defined by $\tilde{D}(a+J)=D(a)$, for all $a \in \mathcal A$. Then $\tilde{D}$ is well defined and $\mathbb{C}$-linear, hence a bounded derivation; since $\mathcal A/J$ is character amenable, $\tilde{D}$ is inner, and thus $D$ is an inner module derivation. Therefore, $\mathcal A$ is module character amenable by [@bod1 Theorem 2.1]. Fourier algebra of an inverse semigroup --------------------------------------- In this subsection we define the Fourier algebra $A(S)$ of an inverse semigroup $S$ and show that it is an operator $\ell^1(E)$-module, where $E$ is the set of idempotents of $S$ and $\ell^1(E)$ is the semigroup algebra of $E$ [@AR]. A discrete semigroup $S$ is called an inverse semigroup if for each $x\in S$ there is a unique element $x^{*}\in S$ such that $xx^{*}x=x$ and $x^{*}xx^{*}=x^{*}$. An element $e\in S$ is called an idempotent if $e=e^{*}=e^{2}$. The set of idempotents of $S$ is denoted by $E$. 
This is a commutative subsemigroup of $S$ with a canonical partial order: for $e,f\in E$, $e\leq f$ means that $ef=fe=e$. For the rest of the paper, $S$ is an inverse semigroup with set of idempotents $E$. Recall that the [*left regular representation*]{} of $S$ is the map $\lambda:S\longrightarrow B(\ell^{2}(S))$ defined by $$\lambda (s)f(t)= \begin{cases} f(s^{*}t),\,\,\,ss^{*} \geq tt^{*}\\ 0,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{otherwise}, \end{cases}$$ and that $\lambda$ is a faithful $*$-representation of $S$ [@B Proposition 2.1]. We define the [*fundamental operator*]{} $W:\ell^{2}(S\times S)\longrightarrow \ell^{2}(S\times S)$ by $$W\psi(s,t)= \begin{cases} \psi(s,s^{*}t),\,\,\,ss^{*}\geq tt^{*}\\ 0,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \text{otherwise}, \end{cases}$$ for $\psi\in \ell^2(S\times S)$ and $s,t\in S$. It is easy to see that $W$ is a bounded linear operator. It is shown in [@AR] that $W$ is a [*multiplicative partial isometry*]{} in the sense of [@BS]. The double commutant $L(S)=(\lambda (S))^{''} \subseteq B(\ell^{2}(S))$ is called the (left) [*semigroup von Neumann algebra*]{} of $S$. It is a bi-algebra with the co-multiplication map $\Gamma: L(S)\longrightarrow L(S)\hat\otimes L(S); \ \Gamma(\lambda(s))=\lambda(s)\otimes\lambda(s)$ (c.f. [@AR]). The (unique) predual $A(S)$ of $L(S)$ is a Banach space. Let $\omega_{s,t}(\varphi)=\langle \varphi\delta_{s}\mid\delta_{t}\rangle$, for each $\varphi\in L(S)$. Then $A(S)$ is generated by the set $\{\omega_{s,t}: s,t \in S\}$ as a Banach space [@AR]. One might consider an element $\omega\in A(S)$ as a function on $S$ via $$\omega(s)= \omega(\lambda (s))\quad (s\in S).$$ Since $L(S)$ is a norm closed subspace of $B(\ell^{2}(S))$, it is an operator space and hence $A(S)$ inherits a canonical operator space structure from $L(S)$ as its predual [@ER]. 
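The definitions above can be made concrete on a small example. The following sketch (our own illustration, not from the paper) realizes the left regular representation on the symmetric inverse monoid $I_2$ of partial injections of a two-point set, and verifies numerically that $\lambda$ is a multiplicative, adjoint-respecting, and injective map on this example, mirroring [@B Proposition 2.1]:

```python
# Concrete realization (our own sketch) of the left regular representation above,
# on the symmetric inverse monoid I_2 of partial injections of {0, 1}.
from itertools import permutations

def compose(s, t):
    # product st: apply t, then s (partial injections as frozensets of pairs)
    return frozenset((x, dict(s)[y]) for x, y in t if y in dict(s))

def star(s):
    # the inverse partial map s*
    return frozenset((y, x) for x, y in s)

def leq(e, f):
    # natural partial order on idempotents: e <= f iff ef = e
    return compose(e, f) == e

points = (0, 1)
S = set()
for mask in range(4):
    dom = [p for p in points if mask >> p & 1]
    for img in permutations(points, len(dom)):
        S.add(frozenset(zip(dom, img)))
S = sorted(S, key=sorted)  # the 7 elements of I_2, in a fixed order

def lam(s):
    # matrix of lambda(s): entry (t, u) is 1 iff s* t = u and tt* <= ss*
    ss = compose(s, star(s))
    return tuple(tuple(int(leq(compose(t, star(t)), ss) and compose(star(s), t) == u)
                       for u in S)
                 for t in S)

def matmul(a, b):
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

for s in S:
    assert lam(star(s)) == tuple(zip(*lam(s)))               # lambda(s*) = lambda(s)^T
    for t in S:
        assert matmul(lam(s), lam(t)) == lam(compose(s, t))  # multiplicative
assert len({lam(s) for s in S}) == len(S) == 7               # faithful on I_2
```

Here `compose(s, t)` implements the product $st$ (apply $t$ first), and the matrices are real, so the adjoint is the transpose.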
On the other hand, the multiplication map $\Gamma_{\ast}:A(S \times S)\longrightarrow A(S)$ is a complete contraction (see \[2, display 3.8\]), and $A(S\times S)\cong A(S)\hat{\otimes}A(S)$ [@ER 7.2.4]. Therefore, $A(S)$ is a completely contractive Banach algebra [@Rua]. Module structure of A(S) {#m} ------------------------ It is shown in [@AR] that the pointwise multiplication $\omega_{s,t}\omega_{u,v}$ identifies with the algebra multiplication $\omega_{s,t}\cdot\omega_{u,v}$ defined by $$\omega_{s,t}\cdot\omega_{u,v}(\varphi) = (\omega_{s,t}\otimes\omega_{u,v} )(W(\varphi\otimes 1)W^{*})\quad (\varphi\in L(S)).$$ We write $\varepsilon_{a}:=\omega_{a^{*}a, a}\ .$ The span of the $\varepsilon_{a}$'s is dense in $A(S)$. These elements are natural replacements for the point indicator functions $\delta_{a}$ in $A(G)$ for a discrete group $G$. One important difference between $\varepsilon_{a}$ and $\delta_{a}$ is that $\varepsilon_{a}$ may have a large support [@AR]. The left regular representation of $S$ lifts to the $*$-representation $\tilde{\lambda}:\ell^1(S)\rightarrow B(\ell^2(S))$ which is faithful by [@W Theorem 2], and we may assume that $\ell^1(S)\subseteq L(S)$. Since $\tilde{\lambda}(\ell^{1}(S))''\supseteq \lambda(S)''=L(S)$, $\ell^{1}(S)$ is a $w^{*}$-dense subset of $L(S)$ [@AR]. Note that $A(S)$ is an operator $\ell^{1}(E)$-module under the actions $$\label{ee} \delta_{e}.\varepsilon_{a}= \varepsilon_{a}\, ,\,\,\,\, \varepsilon_{a}.\delta_{e}= \begin{cases} \varepsilon_{a},\,\,\,\,\,\,\ a^{*}a \leq e \\ 0,\,\,\,\,\,\,\,\,\,\,\,\,\,\ \text{otherwise}. \end{cases} $$ If $a\in S, e\in E$, then $\varepsilon_{a}\varepsilon_{ae}= \varepsilon_{a}$. When $a^{*}a\leq e$, we have $\varepsilon_{ae}=\varepsilon_{a}$. Moreover, these module actions are continuous [@AR]. 
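As a small consistency check of the right action displayed above, one can model the idempotents of a concrete inverse semigroup of partial injections by their domains, so that $a^{*}a \leq e$ becomes domain containment and the product $ef$ in $E$ becomes intersection of domains, and then verify the right-module law. A sketch (our own illustration; the helper `act` is hypothetical, not from the paper):

```python
# Consistency check (our own sketch) of the right module action displayed above.
# For an inverse semigroup of partial injections, idempotents are identity maps
# on subsets, so we model e in E by its domain; then a*a <= e becomes
# dom(a) <= e, and the product ef in E becomes the intersection of domains.
from itertools import combinations

def powerset(X):
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def act(dom_a, e):
    # eps_a . delta_e = eps_a if a*a <= e, and 0 (here: None) otherwise
    return dom_a if dom_a <= e else None

E = powerset({0, 1, 2})  # the idempotent semilattice

# right-module law: (eps_a . delta_e) . delta_f = eps_a . delta_{ef}
for dom_a in E:
    for e in E:
        for f in E:
            lhs = act(dom_a, e)
            if lhs is not None:
                lhs = act(lhs, f)
            assert lhs == act(dom_a, e & f)
```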
Index of subsemigroups ---------------------- For an inverse semigroup $S$, the equivalence relation defined by $s\sim t$ if and only if there is $e\in E$ with $es=et$ gives the maximal group homomorphic image $G_S:=\{[s]: s\in S\}$ as the set of equivalence classes [@mn]. A subset $T\subseteq S$ is called a subsemigroup of $S$ if it is a $*$-semigroup under the operations of $S$. In this case, $G_T=\{[t]: t\in T\}$ is a subgroup of $G_S$. For the rest of this subsection, let $T$ be a subsemigroup of $S$. We say that two elements $x,y\in S$ are $T$-equivalent, and write $x\sim_T y$, if there is an element $t\in T$ with $xy^*\sim t$. Then $\sim_T$ is an equivalence relation: if $x,y,z\in S$, then for each $t\in T$ we have $xx^*\sim tt^*\in T$, giving reflexivity. Also if $xy^*\sim t\in T$ then $yx^*\sim t^*\in T$. Finally, if $xy^*\sim t\in T$ and $yz^*\sim s\in T$, then $$xz^*=xx^*xz^*\sim xy^*yz^*\sim ts\in T.$$ This means that $x\sim_T z$. We denote the equivalence class of $x\in S$ under the equivalence relation $\sim_T$ by $xT$ and the set of all such classes by $S/T$, and define the $E$-index of $T$ in $S$ by $$[S:T]_E:=\big|S/T\big|,$$ where the right hand side is the cardinality of $S/T$. $[S:T]_E=[G_S:G_T]$. For $x\in S$, let $[x]G_T$ be the left coset of the subgroup $G_T$; we show that $xT\mapsto [x]G_T$ is a one-one correspondence between $S/T$ and $G_S/G_T$. This is well defined, since if $x\sim_T y$ then $xy^*\sim t\in T$, hence $[x][y]^{-1}=[xy^*]=[t]\in G_T$, and so $[x]G_T=[y]G_T$. A reverse argument shows that the map is also injective. The surjectivity is trivial. We say that $T$ is $E$-*abelian* if $st\sim ts$ for each $s,t\in T$. In this case, the subgroup $G_T\leq G_S$ is abelian. We say that $S$ is *almost $E$-abelian* if it has a subsemigroup $T$ of finite $E$-index which is $E$-abelian. This is in parallel with the notion of almost abelian groups (a group with an abelian subgroup of finite index). By the above two lemmas, we have the following result. 
\[aa\] $S$ is almost $E$-abelian if and only if $G_S$ is almost abelian. Module cohomological properties of the Fourier algebra ====================================================== In this section we study module amenability, module character amenability, module (operator) biflatness, and module (operator) biprojectivity of the Fourier algebra of an inverse semigroup $S$. Let $I$ and $J$ be the corresponding closed ideals of $\mathcal A \hat \otimes \mathcal A$ and $\mathcal A$, respectively. We first recall the following two definitions from [@boa]. \[def1\] A Banach algebra $\mathcal A$ is called [*module biprojective*]{} (as an $\mathfrak A$-module) if $\widetilde{\omega}$ has a bounded right inverse which is an ${\mathcal A}/J$-${\mathfrak A}$-module homomorphism. \[def2\] A Banach algebra $\mathcal A$ is called [*module biflat*]{} (as an $\mathfrak A$-module) if $ \widetilde{\omega}^* $ has a bounded left inverse which is an ${\mathcal A}/J$-${\mathfrak A}$-module homomorphism. Recall that a (*bounded*) *left approximate identity* in a Banach algebra $\mathcal A$ is a (bounded) net $\{e_{l}\}_{l\in \mathcal L}$ in $\mathcal A$ such that $\lim_{l}e_{l}a=a$ for all $a\in \mathcal A$. Similarly, a (bounded) right approximate identity can be defined in $\mathcal A$. A (bounded) approximate identity in $\mathcal A$ is both a (bounded) left approximate identity and a (bounded) right approximate identity. We say the Banach algebra $\mathfrak A$ acts trivially on $\mathcal A$ from the left (right) if there is a continuous linear functional $f$ on $\mathfrak A$ such that $\alpha\cdot a=f(\alpha)a$ $(a\cdot\alpha=f(\alpha)a)$ for all $\alpha\in \mathfrak A $ and $a\in \mathcal A$. The following lemma is proved in [@bab Lemma 3.13]. \[tp\] If ${\mathfrak A}$ acts on $\mathcal A$ trivially from the left or right and $\mathcal A/J$ has a bounded right approximate identity, then for each $\alpha \in {\mathfrak A}$ and $a\in\mathcal A$ we have $f(\alpha)a-a\cdot\alpha \in J$. 
The following result is the key step toward the main result of this paper, which comes in the next subsection. \[p\]Suppose that ${\mathfrak A}$ acts trivially on $\mathcal A$ from the left and $\mathcal A$ has a bounded approximate identity. Then 1. $\mathcal A$ is module biprojective if and only if $\mathcal A/J$ is biprojective; 2. $\mathcal A$ is module biflat if and only if $\mathcal A/J$ is biflat. \(i) Suppose that $\mathcal A/J$ is biprojective, and let $\rho$ be a bounded right inverse of $\omega_{\mathcal A/J}$ which is an ${\mathcal A}/J$-module homomorphism. Define the map $\phi:(\mathcal A/J)\widehat{\otimes}(\mathcal A/J)\longrightarrow (\mathcal A\widehat{\otimes} \mathcal A)/I\cong \mathcal A\widehat{\otimes}_{\mathfrak A}\mathcal A$ by $\phi((a+J)\otimes(b+J)):=(a\otimes b)+I$. Assume that $\{e_{j}\}$ is a bounded approximate identity for $\mathcal A$. For each $a,b,c\in \mathcal A$ and $\alpha\in \mathfrak A$, we obtain $$\begin{aligned} [(a\cdot\alpha)b-a(\alpha\cdot b)]\otimes c&=&(a\cdot\alpha)b\otimes c-a(\alpha\cdot b)\otimes c\\ &=&\lim_{j}[((a\cdot\alpha)b\otimes ce_{j})-(a(\alpha\cdot b)\otimes e_{j}c)]\\ &=&\lim_{j}[((a\cdot\alpha)\otimes c)(b\otimes e_{j})-(a\otimes e_{j})((\alpha\cdot b)\otimes c)]\\ &=&\lim_{j}[((a\cdot\alpha)\otimes c)(b\otimes e_{j})-(a\otimes(\alpha\cdot c))(b\otimes e_{j})\\ &&+(a\otimes(\alpha\cdot c))(b\otimes e_{j})-(a\otimes e_{j})((\alpha\cdot b)\otimes c)\\ &&+(a\otimes e_{j})(b\otimes(\alpha\cdot c))-(a\otimes e_{j})(b\otimes(\alpha\cdot c))]\\ &=&\lim_{j}[((a\cdot\alpha)\otimes c-a\otimes(\alpha\cdot c))(b\otimes e_{j})\\ &&+(a\otimes(\alpha\cdot c))(b\otimes e_{j})-(a\otimes e_{j})((\alpha\cdot b)\otimes c\\ &&-b\otimes(\alpha\cdot c))-(a\otimes e_{j})(b\otimes(\alpha\cdot c))]\\ &=&\lim_{j}[((a\cdot\alpha)\otimes c-a\otimes(\alpha\cdot c))(b\otimes e_{j})\\ &&+(ab\otimes(\alpha\cdot c)e_{j})-(a\otimes e_{j})(f(\alpha)b\otimes c\\ &&-b\otimes f(\alpha)c)-(ab\otimes e_{j}(\alpha\cdot c))]\\ 
&=&\lim_{j}[((a\cdot\alpha)\otimes c-a\otimes(\alpha\cdot c))(b\otimes e_{j})\\ &&+(ab\otimes(\alpha\cdot c)e_{j})-(a\otimes e_{j})f(\alpha)(b\otimes c-b\otimes c)\\ &&-(ab\otimes e_{j}(\alpha\cdot c))]\\ &=&\lim_{j}[((a\cdot\alpha)\otimes c-a\otimes(\alpha\cdot c))(b\otimes e_{j})]\in I.\end{aligned}$$ Similarly, $c\otimes[(a\cdot\alpha)b-a(\alpha\cdot b)]\in I$. Hence, $\phi$ is well defined; moreover, $\phi$ is a module homomorphism. It is easily verified that $\widetilde{\omega}_{\mathcal A}\circ\phi=\omega_{\mathcal A/J}$. Since $\rho$ is a bounded right inverse of $\omega_{\mathcal A/J}$, the mapping $\phi\circ\rho$ is a bounded right inverse of $\widetilde{\omega}_{\mathcal A}$. Conversely, assume that $\mathcal A$ is module biprojective. Since $\mathcal A$ has a bounded approximate identity, it follows from Lemma \[tp\] that $\mathcal A/J$ is a commutative $\mathfrak A$-module. Therefore, the result follows from [@boa Proposition 2.3]. \(ii) Suppose that $\mathcal A/J$ is biflat and $\widetilde{\rho}$ is a bounded left inverse of $\omega^*_{\mathcal A/J}$ which is an ${\mathcal A}/J$-module homomorphism. If the mapping $\phi$ is as in part (i), it follows from the proof of the previous part that $\phi^*\circ\widetilde{\omega}^*_{\mathcal A}=\omega^*_{\mathcal A/J}$. This means that $\widetilde{\rho}\circ\phi^*$ is a left inverse of $\widetilde{\omega}^*_{\mathcal A}$ which is an ${\mathcal A}/J$-${\mathfrak A}$-module homomorphism. Conversely, if $\mathcal A$ is module biflat, then $\mathcal A/J$ is biflat by Lemma \[tp\] and [@boa Proposition 2.4]. This finishes the proof. When $\mathcal A$ is a completely contractive Banach algebra and $J$ is a closed ideal, then $\mathcal A/J$ has a canonical operator space structure coming from the identification $$\mathbb M_n(\mathcal A/J)=\mathbb M_n(\mathcal A)/\mathbb M_n(J)$$ making it into a completely contractive Banach algebra [@ER 3.1, 16.1]. 
We say that $\mathcal A$ is [*module operator biprojective*]{} ([*biflat*]{}, respectively) if, in the above definitions, the right (left, resp.) inverse of $\widetilde{\omega}$ (of $\widetilde{\omega}^*$, resp.) is completely bounded. It is not hard to see that the above argument also works in this setting, and we have the following operator analog. We give a short proof for the sake of completeness. \[p2\]Suppose that ${\mathfrak A}$ acts trivially on $\mathcal A$ from the left and $\mathcal A$ has a bounded approximate identity. Then 1. $\mathcal A$ is module operator biprojective if and only if $\mathcal A/J$ is operator biprojective; 2. $\mathcal A$ is module operator biflat if and only if $\mathcal A/J$ is operator biflat. \(i) Suppose that $\mathcal A/J$ is operator biprojective and that $\rho$ is a completely bounded right inverse of $\omega_{\mathcal A/J}$ and an ${\mathcal A}/J$-module homomorphism. Define $\phi:(\mathcal A/J)\widehat{\otimes}(\mathcal A/J)\longrightarrow (\mathcal A\widehat{\otimes} \mathcal A)/I\cong \mathcal A\widehat{\otimes}_{\mathfrak A}\mathcal A$ by $\phi((a+J)\otimes(b+J)):=(a\otimes b)+I$. As in the proof of Theorem \[p\], we get that $\phi$ is well-defined and a completely bounded module homomorphism. Also $\widetilde{\omega}_{\mathcal A}\circ\phi=\omega_{\mathcal A/J}$. Since $\rho$ is a completely bounded right inverse of $\omega_{\mathcal A/J}$, the map $\phi\circ\rho$ is a completely bounded right inverse of $\widetilde{\omega}_{\mathcal A}$. The converse follows from [@boa Proposition 2.3] using a similar argument. \(ii) Suppose that $\mathcal A/J$ is operator biflat and that $\widetilde{\rho}$ is a completely bounded left inverse of $\omega^*_{\mathcal A/J}$ and an ${\mathcal A}/J$-module homomorphism; then, for the map $\phi$ as in part (i), $\phi^*\circ\widetilde{\omega}^*_{\mathcal A}=\omega^*_{\mathcal A/J}$. 
This means that $\widetilde{\rho}\circ\phi^*$ is a completely bounded left inverse of $\widetilde{\omega}^*_{\mathcal A}$ and an ${\mathcal A}/J$-${\mathfrak A}$-module homomorphism. The converse follows from a modification of Lemma \[tp\] and [@boa Proposition 2.4]. \[rem\] The condition that $\mathcal A$ has a bounded approximate identity (in Theorem \[p\] and Proposition \[p2\]) is a strong condition for the Fourier algebra. Indeed, the Fourier algebra of a group has a bounded approximate identity if and only if the group is amenable [@Le]. It is not known when the Fourier algebra of an inverse semigroup has a bounded approximate identity. However, it follows from the proof that the above theorem also holds under the weaker condition that $\mathcal A=\mathcal A^{2}$ (this condition is used in the argument showing that the constructed map $\phi$ is well-defined). As we shall see in the proof of the next theorem, this condition is automatically satisfied for the Fourier algebra $A(S)$ of an inverse semigroup $S$. In the next theorem, which is our main result, we consider $A(S)$ as an operator $\ell^1(E)$-module (see Subsection \[m\]). \[p3\] Let $S$ be an inverse semigroup. 1. $A(S)$ is module amenable if and only if $S$ is almost abelian. 2. When $S$ is left amenable, $A(S)$ is module biflat or module biprojective if and only if $S$ is almost abelian. 3. $A(S)$ is always operator biflat and operator biprojective. 4. If $E$ has a minimum element, then $A(S)$ is module operator amenable if and only if $S$ is amenable. If $E$ has no minimum element, then $A(S)$ is always module operator amenable. 5. If $E$ has a minimum element, then $A(S)$ is module character amenable if and only if $S$ is amenable. If $E$ has no minimum element, then $A(S)$ is always module character amenable. 
First note that by [@AR Proposition 4.13] the condition $\mathcal A=\mathcal A^{2}$ holds for $\mathcal A=A(S)$, and Remark \[rem\] shows that Theorem \[p\] and Proposition \[p2\] apply without a bounded approximate identity. \(i) This follows from [@abe Propositions 3.2, 3.3] and [@fr Theorem 2.3]. \(ii) If $A(S)$ is module biflat, then $A(G_S)$ is biflat by Theorem \[p\]. Since $G_S$ is amenable, it follows from the main theorem in [@run] that $G_S$ is almost abelian. Therefore $S$ is almost abelian by Proposition \[aa\]. Conversely, if $S$ is almost abelian, then so is $G_S$, and $A(S)$ is module biprojective by the last corollary of [@run] and Theorem \[p\]. \(iii) This follows from Proposition \[p2\], [@wo Theorem 4.5], and [@ar Theorem 2.4] (note that a discrete group is always a \[QSIN\]-group, in the sense of [@ar page 374]). \(iv) This is [@AR Theorem 5.14]. \(v) Since $A(S)$ is always an essential $\ell^1(E)$-module with actions (\[ee\]), the result follows from Theorem \[char\] and [@mon Corollary 2.4]. M. Amini, [*Module amenability for semigroup algebras*]{}, Semigroup Forum, [**69**]{} (2004), 243–254. M. Amini, [*Corrigendum, Module amenability for semigroup algebras*]{}, Semigroup Forum, [**72**]{} (2006), 493. M. Amini, A. Bodaghi and D. Ebrahimi Bagha, [*Module amenability of the second dual and module topological center of semigroup algebras*]{}, Semigroup Forum, [**80**]{} (2010), 302–312. M. Amini and R. Rezavand, [*Module operator amenability of the Fourier algebra of an inverse semigroup*]{}, Semigroup Forum, [**92**]{} (2016), 45–70. O. Yu. Aristov, V. Runde, and N. Spronk, [*Operator biflatness of the Fourier algebra and approximate indicators for subgroups*]{}, J. Functional Analysis, [**209**]{} (2004), 367–387. B. A. Barnes, [*Representations of the $l^{1}$-algebra of an inverse semigroup*]{}, Trans. Amer. Math. Soc., [**218**]{} (1976), 361–396. A. Bodaghi, [*Module $(\varphi,\psi)$-amenability of Banach algebras*]{}, Arch. Math., [**46**]{}, No. 
4 (2010), 227–235. A. Bodaghi and M. Amini, *Module biprojective and module biflat Banach algebras*, U. P. B. Sci. Bull., Series A, [**75**]{}, Iss. 3 (2013), 25–36. A. Bodaghi and M. Amini, [*Module character amenability of Banach algebras*]{}, Arch. Math. (Basel), [**99**]{} (2012), 353–365. A. Bodaghi, M. Amini and R. Babaee, [*Module derivations into iterated duals of Banach algebras*]{}, Proc. Rom. Acad., Series A, [**12**]{}, No. 4 (2011), 277–284. A. Bodaghi, H. Ebrahimi and M. L. Bami, [*Generalized notions of module character amenability*]{}, Filomat, [**31**]{}, No. 6 (2017), 1639–1654. A. Bodaghi and A. Jabbari, [*Module biflatness of the second dual of Banach algebras*]{}, Math. Reports, [**17(67)**]{}, 2 (2015), 235–247. G. Bohm and K. Szlachanyi, [*Weak $C^{*}$-Hopf algebras and multiplicative isometries*]{}, J. Operator Theory, [**45**]{} (2001), 357–376. J. Duncan and I. Namioka, [*Amenability of inverse semigroups and their semigroup algebras*]{}, Proc. Royal Soc. Edinburgh, [**80**]{} (1978), 309–321. E. G. Effros and Z.-J. Ruan, [*Operator Spaces*]{}, Oxford University Press, New York, 2000. B. E. Forrest and V. Runde, [*Amenability and weak amenability of the Fourier algebra*]{}, Mathematische Zeitschrift, [**250**]{} (2005), 731–744. A. Ya. Helemskii, [*Flat Banach modules and amenable algebras*]{}, Trans. Moscow Math. Soc., [**47**]{} (1984), 199–244. E. Kaniuth, A. T-M. Lau and J. Pym, *On $\varphi$-amenability of Banach algebras*, Math. Proc. Camb. Phil. Soc., [**144**]{} (2008), 85–96. H. Leptin, [*Sur l’algèbre de Fourier d’un groupe localement compact*]{}, C. R. Acad. Sci. Paris Sér. A-B, [**266**]{} (1968), A1180–A1182. M. S. Monfared, [*Character amenability of Banach algebras*]{}, Math. Proc. Camb. Phil. Soc., [**144**]{} (2008), 697–706. W. D. Munn, [*A class of irreducible matrix representations of an arbitrary inverse semigroup*]{}, Proc. Glasgow Math. Assoc., [**5**]{} (1961), 41–48. Z.-J. Ruan, [*The operator amenability of $A(G)$*]{}, Amer. J. 
Math., [**117**]{} (1995), 1449–1474. V. Runde, [*Biflatness and biprojectivity of the Fourier algebra*]{}, Arch. Math. (Basel), [**92**]{} (2009), 525–530. P. J. Wood, [*The operator biprojectivity of the Fourier algebra*]{}, Canad. J. Math., [**54**]{} (2002), 1100–1120. J. R. Wordingham, [*The left regular $*$-representation of an inverse semigroup*]{}, Proc. Amer. Math. Soc., [**86**]{} (1982), 55–58.
--- author: - Nicola Scafetta title: 'Discussion on common errors in analyzing sea level accelerations, solar trends and global warming' --- Geophysical systems are usually studied by analyzing time series. The purpose of the analysis is to recognize specific physical patterns and to provide appropriate physical interpretations. Improper applications of complex mathematical and statistical methodologies are possible and can yield erroneous interpretations. Addressing this issue is important because errors present in the scientific literature may not be promptly recognized and may therefore propagate, misleading scientists and policymakers and, eventually, delaying scientific progress. Herein I briefly discuss a few important examples found in the geophysical literature where time series analysis tools were misapplied. These cases mostly involve multicollinearity artifacts in linear regression models and Gibbs artifacts in wavelet filters. The following examples are studied: (1) the necessity of recognizing and taking into account multidecadal natural oscillations to properly quantify anomalous accelerations in tide gauge records; (2) the risk of improperly using overloaded multilinear regression models to interpret global surface temperature records; (3) how to recognize Gibbs boundary artifacts that can emerge when periodic wavelet filters are improperly applied to decompose non-stationary geophysical time series. The proposed reanalyses correct a number of erroneous interpretations while stressing the importance of natural oscillations and of the sun for properly interpreting climatic changes. 
![image](figure1){width="13cm"} Sea level accelerations versus 60-year oscillations: the New York City case =========================================================================== Tide gauge records are characterized by complex dynamics driven by different forces that on multidecadal and multisecular scales are regulated by a combination of ocean dynamics, of eustasy, isostasy and subsidence mechanisms, and of global warming [@Boon; @Jevrejeva; @Morner; @Morner2013; @Sallenger]. Understanding these dynamics and correctly quantifying accelerations in tide gauge records is important for numerous civil purposes. However, changes of rate due to specific multidecadal natural oscillations should be recognized and separated from a background acceleration that may be potentially induced by alternative factors such as anthropogenic global warming. Let us discuss an important example where this physical aspect was apparently not properly recognized by @Sallenger and @Boon. Figure 1a and b reproduce (with a few additional comments) figures S7 and S8 of the supplementary information file published in @Sallenger, indicated herein as Sa2012. Sa2012’s method for interpreting tide gauge records is detailed below. The example uses the New York City (NYC) (the Battery) annual average tide gauge record, which can be downloaded from the Permanent Service for Mean Sea Level (PSMSL) (<http://www.psmsl.org/>) [@Woodworth]. As Fig. 1a shows, Sa2012 analyzed the tide gauge record for NYC from 1950 to 2009; note, however, that Sa2012’s choice is already surprising because these data have been available since 1856. Sa2012 linearly fit the periods 1950–1979 and 1980–2009, and found that during 1950–1979 the sea level rose at a rate of $2.5\pm0.6$mmyr$^{-1}$, while during 1980–2009 the rate increased to $4.45\pm0.72$mmyr$^{-1}$. 
Thus, a strong apparent acceleration was discovered and was interpreted as due to the anthropogenic warming of the last 40yr, which could have caused a significant change in the strength of the Atlantic Meridional Overturning Circulation and of the Gulf Stream. This acceleration was more conveniently calculated by fitting the 1950–2009 period with a second-order polynomial: $$g(t)=\frac{1}{2}a(t-2000)^{2}+v(t-2000)+c.\label{eq:1}$$ For NYC a 1950–2009 acceleration of $a=0.044\pm0.030$mmyr$^{-2}$ was found. Then, Sa2012 repeated the quadratic fit to evaluate the acceleration during the periods 1960–2009 and 1970–2009; for NYC the results would be $a=0.083\pm0.049$mmyr$^{-2}$ and $a=0.133\pm0.093$mmyr$^{-2}$, respectively. Similarly, @Boon fit the period from 1969 to 2011 and found $a = 0.20\pm0.07$ mm yr$^{-2}$. Thus, in NYC not only would the sea level be accelerating alarmingly, but the acceleration itself would also have increased incrementally during recent decades. Similar results were claimed for other Atlantic coast cities of North America. Finally, as shown in Fig. 1b, Sa2012 extrapolated its fit curves to 2100 and calculated the sea level rate difference (SLRD) to provide a first-approximation estimate of the anthropogenic global warming effect on the sea level rise during the 21st century. For NYC, SLRD would be $\sim211$mm if the 1950–1979 and 1980–2009 linear extrapolated trends (reported in the insert of Fig. 1a) were used, but SLRD would increase to about $\sim890$mm if the 1950–1979 linear trend were compared against the 1970–2009 quadratic polynomial fit extrapolation. Alternatively, by also taking into account the statistical uncertainty in the regression coefficients, NYC might experience a net sea level rise of $\sim1130\pm480$mm from 2000 to 2100 if Eq. (\[eq:1\]) is used to fit the 1970–2009 period ($a=0.133\pm0.093$mmyr$^{-2}$, $v=4.6\pm1.1$mmyr$^{-1}$, $c=7084\pm8$mm) and extrapolated to 2100. 
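As a concrete illustration of how the acceleration of Eq. (1) is estimated in practice, the following sketch fits a quadratic to a synthetic annual series; the parameter values and noise level are purely illustrative, not the actual PSMSL data.

```python
import numpy as np

# Sketch of the quadratic acceleration estimate of Eq. (1),
# g(t) = 0.5*a*(t-2000)^2 + v*(t-2000) + c, applied to a synthetic annual
# series (illustrative parameter values, not the actual NYC record).
rng = np.random.default_rng(0)
years = np.arange(1950, 2010)
t = years - 2000.0
true_a, true_v, true_c = 0.05, 3.0, 7000.0  # mm/yr^2, mm/yr, mm
sea = 0.5 * true_a * t**2 + true_v * t + true_c + rng.normal(0.0, 5.0, t.size)

# np.polyfit returns [p2, p1, p0] for p2*t^2 + p1*t + p0, so the
# acceleration of Eq. (1) is a = 2*p2 and the rate at t = 0 (year 2000) is p1.
p2, p1, p0 = np.polyfit(t, sea, 2)
a_hat, v_hat = 2.0 * p2, p1
print(f"a = {a_hat:.3f} mm/yr^2, v = {v_hat:.2f} mm/yr")
```

The same recipe, applied over different sub-intervals of a record, gives the interval-dependent accelerations discussed in the text.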
![image](figure2){width="15cm"} ![image](figure3){width="13cm"} However, Sa2012’s result does not appear robust because, as I will demonstrate below, the geometrical convexity observed in the NYC tide gauge record from 1950 to 2009 was very likely mostly induced by a quasi 60yr oscillation that is already known to exist in the climate system. In fact, numerous ocean indexes such as the Atlantic Multidecadal Oscillation (AMO), the North Atlantic Oscillation (NAO) and the Pacific Decadal Oscillation (PDO) have oscillated with a quasi 60yr period for centuries and millennia [e.g.: @Morner1989; @Morner(1990); @Klyashtorin; @Mazzarella; @Knudsen; @Scafetta2013b], as have global surface temperature records [e.g.: @Kobashi; @Qian; @Scafetta2010; @Scafetta2012a; @Schulz]. In particular, @Scafetta2010 [@Scafetta2012c; @Scafetta2012d] provided empirical and theoretical evidence that the observed multidecadal oscillation could be solar/astronomical-induced, could be about 60yr-long from 1850 to 2012 and could be modulated by other quasi-secular oscillations [e.g.: @Ogurtsov; @Scafetta2012c; @Scafetta2013]. In fact, a quasi 60yr oscillation is particularly evident in the global temperature records since 1850: 1850–1880, 1910–1940 and 1970–2000 were warming periods, and 1880–1910, 1940–1970 and 2000–(2030?) were cooling periods. This quasi 60yr oscillation is superposed on a background warming trend which may be due to multiple causes (e.g.: solar activity, anthropogenic forcings and urban heat island effects) [e.g.: @Scafetta2007; @Scafetta2009b; @Scafetta2010; @Scafetta2012a; @Scafetta2012b; @Scafetta2012c]. Because the climate system is evidently characterized by numerous oscillations, tide gauge records could be characterized by equivalent oscillations too. ![image](figure4){width="15cm"} Indeed, a quasi 60yr oscillation has been found in numerous sea level records since 1700 [@Chambers; @Jevrejeva; @Parker]. 
Figure 2a shows the global sea level record from 1700 to 2000 proposed by @Jevrejeva fit with Eq. (\[eq:1\]) ($a=0.0092\pm0.0004$mmyr$^{-2}$; $v=2.31\pm0.06$mmyr$^{-1}$; $c=136\pm4$mm). In addition to a relatively small acceleration since 1700AD, which, if it continues, will cause a global sea level rise of about $277\pm8$mm from 2000 to 2100, the global sea level record clearly presents large 60–70yr oscillations. This is better demonstrated in Fig. 2b, which shows the scale-by-scale palette acceleration diagram of this global sea level record [@Jevrejeva; @Scafetta2013b]. Here the color of a dot at coordinate (x, y) indicates the acceleration $a$ (calculated with Eq. \[eq:1\]) of a $y$-year-long interval centered at $x$. The color of the dot at the top of the diagram, which in this case is approximately orange, indicates the global acceleration for the 1700–2000 period, $a=0.0092\pm0.0004$mmyr$^{-2}$. The diagram also suggests that for scales larger than 110yr the acceleration is almost homogeneous, around $0.01$mmyr$^{-2}$ or less (orange/yellow color) at all scales and times [@Scafetta2013b]. For example, during the preindustrial 1700–1900 period $a=0.009\pm0.001$mmyr$^{-2}$; during the industrial 1900–2000 period $a=0.010\pm0.0004$mmyr$^{-2}$. Thus, the observed acceleration appears to be independent of the 20th century anthropogenic global warming and could be a consequence of other phenomena, such as the quasi-millennial solar/climate cycle [@Bond; @Kerr; @Kobashi2013; @Kirkby; @Scafetta2012c] observed throughout the Holocene. The millennial solar/climatic cycle has been in its warming phase since 1700, following the Maunder solar minimum of the Little Ice Age. Finally, the alternating quasi-regular large green and red areas evident at scales from 30 to 110yr indicate a change of acceleration (from negative to positive, and vice versa) that reveals the existence of a quasi 60–70yr oscillation since 1700. 
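A minimal numerical sketch shows how such a scale-by-scale acceleration diagram is built; the series here is synthetic (a weak background acceleration plus a 60yr oscillation with assumed amplitudes), not the actual @Jevrejeva data.

```python
import numpy as np

# Minimal sketch of a scale-by-scale acceleration diagram: for every
# window of length `scale` centered at `center`, Eq. (1) is fit and the
# acceleration a = 2*p2 is recorded.  The series is synthetic: a weak
# background acceleration (0.009 mm/yr^2) plus a 60-yr, 20-mm oscillation.
years = np.arange(1700, 2001)
t = years - 2000.0
level = 0.5 * 0.009 * t**2 + 2.3 * t + 20.0 * np.cos(2.0 * np.pi * t / 60.0)

def window_accel(series, x, scale, center):
    """Quadratic-fit acceleration over the window [center - scale/2, center + scale/2]."""
    m = np.abs(x - center) <= scale / 2.0
    return 2.0 * np.polyfit(x[m], series[m], 2)[0]

# 60-yr windows: a is dominated by the bending of the oscillation, and its
# sign alternates with the window centre (the green/red areas of Fig. 2b);
# a 200-yr window recovers a value close to the small background one.
a_short = [window_accel(level, years, 60, c) for c in (1850, 1880, 1910)]
a_long = window_accel(level, years, 200, 1850)
print(a_short, a_long)
```

Scanning `center` and `scale` over a grid of values and color-coding the resulting accelerations reproduces the palette-diagram construction described above.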
Strong quasi-decadal and bidecadal oscillations are observed at scales below 30yr. In conclusion, because the global sea level record presents a clear quasi 60yr oscillation that also correlates well with the quasi 60yr oscillation found in the NAO index since 1700 [@Scafetta2013b], it is necessary to check whether the tide gauge record of NYC may also have been affected by a quasi 60yr oscillation. Herein I extend the finding discussed in @Scafetta2013b. Figure 3a shows the periodogram of the tide gauge record for NYC from 1893 to 2011; the data available before 1893 are excluded from the analysis because the record is seriously incomplete. The periodogram is calculated after the three missing years (1992, 1994 and 2001) are linearly interpolated and the linear trend ($y(t)=2.98(t-2000)+7088$) is removed, because the periodogram gives optimal results if the time series is stationary. The spectral analysis clearly highlights, among other minor spectral peaks, a dominant frequency at a period of about 60yr, which is a typical major multidecadal oscillation found in the PDO, AMO and NAO indexes [@Klyashtorin; @Knudsen; @Mazzarella; @Scafetta2012a; @Scafetta2013b; @Manzi]. This quasi 60yr oscillation, after all, is clearly visible in the NYC tide gauge record once this record is plotted from 1856, as shown in Fig. 3b. Consequently, to detect a possible background sea level acceleration for NYC, an upgraded regression model needs to be adopted that is made at least of a harmonic component plus a quadratic function of the type: $$f(t)=H\cos\left(2\pi\frac{t-T}{60}\right)+\frac{1}{2}a(t-2000)^{2}+v(t-2000)+c.\label{eq:2}$$ Other longer multisecular and millennial oscillations may be added to the model [@Bond; @Kerr; @Ogurtsov; @Qian; @Scafetta2012c; @Schulz] but, because only about one century of data is analyzed herein, Eq. (\[eq:2\]) cannot be expanded. 
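A sketch of how a model of the form of Eq. (2) can be fit in practice is given below; the 110yr record is synthetic, generated with illustrative parameter values and noise (not the actual NYC series).

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting the upgraded model of Eq. (2): a 60-yr harmonic plus a
# quadratic background.  The 110-yr record below is synthetic, generated
# with illustrative parameter values (not the actual NYC series).
def model(t, H, T, a, v, c):
    return (H * np.cos(2.0 * np.pi * (t - T) / 60.0)
            + 0.5 * a * (t - 2000.0)**2 + v * (t - 2000.0) + c)

rng = np.random.default_rng(1)
years = np.arange(1902, 2012)
data = model(years, 16.0, 1956.0, 0.006, 3.3, 7094.0) \
       + rng.normal(0.0, 10.0, years.size)

# Nonlinear least squares; T enters through the cosine, so a reasonable
# initial guess is needed.
popt, pcov = curve_fit(model, years, data, p0=[10.0, 1950.0, 0.0, 3.0, 7000.0])
H_hat, T_hat, a_hat, v_hat, c_hat = popt
print(f"H={H_hat:.1f} mm, T={T_hat:.1f}, a={a_hat:.4f} mm/yr^2, v={v_hat:.2f} mm/yr")
```

The diagonal of `pcov` supplies the parameter uncertainties quoted in the fits discussed in the text.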
To determine the exact length of the time period required to avoid multicollinearity and make the 60yr oscillation orthogonal to the quadratic polynomial term, a test proposed in @Scafetta2013b is herein rediscussed for the benefit of the reader. Figure 4a, b and c show fits of a periodic signal of unit period, observed over records of different length, with a quadratic polynomial: the acceleration clearly varies as a function of the record length $\lambda$. Figure 4d shows that the acceleration $a$ oscillates around zero as a function of $\lambda$. The minimum length that makes the acceleration zero is $\lambda=1.8335$ times the period of the oscillation. Thus, to optimally separate a 60yr oscillation from a background acceleration, one needs to use a $1.8335\times 60\approx110$yr-long sequence. Indeed, as Fig. 2b shows, the alternation between the red and the green areas ends at scales close to $110$yr, indicating that more than 100yr are needed to filter a background acceleration out of the quasi 60yr oscillation. Note that Sa2012’s regression model was applied to 60yr-long and shorter intervals from 1950 to 2009. As Fig. 4a and d clearly show, using 60yr-long and shorter records ($\lambda\leq1$ period of the oscillation) makes the regression model unable to separate a background acceleration from a 60yr oscillation because the two curves are significantly collinear, and a strong acceleration simply related to the bending of the 60yr oscillation would be found. In the next section, the multicollinearity problem in regression models will be discussed more extensively. NYC sea level data have been intermittently available since 1856, but as of 1893 only three annual means are missing, so the model given by Eq. (\[eq:2\]) can be tested for this record because it is about 120yr long from 1893 to 2011. Figure 3b shows the sea level record for NYC since 1856 (black) fitted with Eq. 
(\[eq:2\]) for the required optimal 110yr interval from 1902 to 2011 (blue). The fit gives $H=16\pm4$mm; $T=1956\pm2.5$yr; $a=0.006\pm0.005$mmyr$^{-2}$; $v=3.3\pm0.3$mmyr$^{-1}$; $c=7094\pm6$mm. For comparison, Fig. 3b also shows Sa2012’s and Boon’s ([-@Boon]) methods using Eq. (\[eq:1\]): (1) the fit is done from 1950 to 2009 (green) ($a=0.044\pm0.030$mmyr$^{-2}$, $v=3.7\pm0.7$mmyr$^{-1}$, $c=7086\pm7$mm); (2) the fit is done from 1970 to 2009 (purple) ($a=0.133\pm0.1$mmyr$^{-2}$, $v=4.6\pm1.1$mmyr$^{-1}$, $c=7084\pm8$mm); (3) the fit is done from 1969 to 2011 (red) ($a=0.20\pm0.07$mmyr$^{-2}$, $v=5.5\pm0.9$mmyr$^{-1}$, $c=7087\pm7$mm). Projections \#1 and \#2 use Sa2012’s method; projection \#3 uses Boon’s ([-@Boon]) method. To test the stability of this result, the analysis is repeated for the two non-overlapping periods 1856–1934 and 1934–2012. In the first case the fit gives $H=14\pm6$mm; $T=1963\pm5$yr; $a=0.018\pm0.023$mmyr$^{-2}$; $v=4.1\pm2.4$mmyr$^{-1}$; $c=7107\pm122$mm. In the second case the fit gives $H=16\pm5$mm; $T=1957\pm5$yr; $a=0.015\pm0.027$mmyr$^{-2}$; $v=3.6\pm0.8$mmyr$^{-1}$; $c=7095\pm7$mm. Because in the three cases the corresponding regression values are compatible with each other within their uncertainty, and the regression model calibrated from 1856 to 1934 hindcasts the data from 1934 to 2012, and vice versa, the regression model, Eq. (\[eq:2\]), can be considered sufficiently stable for interpreting the available data. On the contrary, fitting 60yr periods with Eq. (\[eq:1\]) yields: (1) from 1890 to 1949, $a=0.091\pm0.027$mmyr$^{-2}$; (2) from 1920 to 1979, $a=-0.043\pm0.025$mmyr$^{-2}$; (3) from 1950 to 2009, $a=0.044\pm0.030$mmyr$^{-2}$. Because the acceleration values of the three 60yr sub-periods are not compatible with each other within their uncertainty, the regression model of Eq. (\[eq:1\]) does not capture the dynamics of the available NYC sea level data. 
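The minimum-record-length test of Fig. 4 can also be reproduced numerically. The sketch below fits a unit-period oscillation observed over windows of increasing length $\lambda$ with a quadratic and locates the shortest window beyond one period for which the spurious acceleration vanishes regardless of the oscillation's phase; the scan range and phase sample are illustrative choices.

```python
import numpy as np

# Numerical version of the minimum-record-length test of Fig. 4: a
# unit-period oscillation observed over a window of length lam is fit with
# a quadratic, and the spurious acceleration a = 2*p2 is recorded.  The
# shortest window beyond one period for which a vanishes for every phase
# of the oscillation falls near lam = 1.83 (the paper quotes 1.8335).
def spurious_accel(lam, phase, n=2000):
    t = np.linspace(0.0, lam, n)
    y = np.cos(2.0 * np.pi * t + phase)
    return 2.0 * np.polyfit(t, y, 2)[0]

lams = np.linspace(1.6, 2.0, 401)
# Worst case over a few phases: the acceleration is zero for *every* phase
# only where a common, phase-independent factor vanishes.
worst = [max(abs(spurious_accel(l, p)) for p in (0.0, np.pi / 3.0, np.pi / 2.0))
         for l in lams]
lam_zero = lams[int(np.argmin(worst))]
print(f"spurious acceleration vanishes for all phases near lam = {lam_zero:.3f}")
```

For a 60yr oscillation this length translates into the roughly 110yr record used for the NYC fit above.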
However, the absolute values of the three accelerations are compatible with each other. Thus, the 1950–2009 acceleration value, $a=0.044\pm0.03$mmyr$^{-2}$, does not appear anomalous but is well within the natural variability of a system that oscillates with a quasi 60yr cycle around a quasi linear upward trend. See @Scafetta2013b for additional discussion demonstrating that the accelerations found using the intervals proposed by Sa2012 and @Boon are arbitrary. Figure 3b also highlights that Eq. (\[eq:2\]) hindcasts quite well the relative sea level in NYC from 1856 to 1901, a period that was not used to calibrate the regression model, which adopted only data from 1902 to 2011. Therefore, the model proposed in Eq. (\[eq:2\]) reconstructs the available data since 1856, takes into account the influence of known climatic oscillations (e.g. the quasi 60yr AMO oscillation) and may reasonably be used as a first-approximation forecast tool. On the contrary, Sa2012’s and Boon’s ([-@Boon]) models immediately miss the data before 1950, 1969 and 1970, respectively, and ignore the existence of known multidecadal natural oscillations of the climate system. Consequently, the usefulness of the latter models for hindcast/forecast purposes should be questioned even on short periods. Essentially, Sa2012’s and Boon’s ([-@Boon]) methodologies are too simplistic because, as is evident in Fig. 3b, they do not capture the dynamics of the available data and, consequently, miss the true dynamical properties of the system. As Fig. 3b shows, the adoption of Eq. (\[eq:2\]) implies that the relative sea level in NYC accelerated 7 to 22 times *less* than what was obtained with Sa2012’s quadratic fit alone during the two periods 1950–2009 and 1970–2009, respectively. By using the same extrapolation methodology proposed in Sa2012 and assuming that Eq. 
(\[eq:2\]) persists during the 21st century, the relative sea level in NYC could rise about $350\pm30$mm from 2000 to 2100, which is significantly less than what Sa2012’s quadratic model extrapolation would suggest, that is, up to about $1130\pm480$mm; using Boon’s ([-@Boon]) model, the projected sea level rise would be $1550\pm400$mm from 2000 to 2100, as Fig. 3b shows. In conclusion, the convexity of the NYC tide gauge record from 1950 to 2009 was very likely mostly induced by the quasi 60yr AMO-NAO oscillation that strongly influences the Atlantic coast of North America and can also be observed in the global sea level record since 1700, shown in Fig. 2. However, Sa2012 mistook the 1950–2009 geometrical convexity of the NYC record for an anomalous acceleration. Evidently, Sa2012’s 21st century projections for sea level rise in numerous locations need to be revised downward by taking into account that the known multidecadal variability of the climate system implies a significantly lower background acceleration than what they estimated. A similar critique applies to the results by @Boon too, who also used Eq. (\[eq:1\]) to analyze a number of US and Canadian tide gauge records over a 43yr period from 1969 to 2011 and, for NYC, found a 1969–2011 acceleration of $a=0.20\pm0.07$mmyr$^{-2}$ and projected an alarming sea level rise of $570\pm180$mm above the 1983–2001 sea level mean by 2050. On the contrary, other authors [@Houston; @Parker; @Scafetta2013b] analyzed numerous secular-long tide gauge records and found small (positive or negative) accelerations close to zero ($\sim\pm0.01$mmyr$^{-2}$). Figure 2a shows that a global estimate of the sea level rise since 1700 presents an acceleration slightly smaller than $0.01$mmyr$^{-2}$, which may also have been partially driven by the great millennial solar/climate cycle [@Bond; @Kerr; @Kirkby; @Scafetta2012c], which will be discussed more extensively in the next section. 
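The $\sim350$mm figure quoted above can be checked directly by evaluating the harmonic-plus-quadratic model of Eq. (2) at 2000 and 2100 with the fitted central values; uncertainties are ignored in this sketch.

```python
import numpy as np

# Check of the 2000-2100 extrapolation of Eq. (2) using the fitted central
# values H=16 mm, T=1956, a=0.006 mm/yr^2, v=3.3 mm/yr (the constant c
# cancels in the difference; uncertainties are ignored in this sketch).
H, T, a, v = 16.0, 1956.0, 0.006, 3.3

def f(t):
    return (H * np.cos(2.0 * np.pi * (t - T) / 60.0)
            + 0.5 * a * (t - 2000.0)**2 + v * (t - 2000.0))

rise = f(2100.0) - f(2000.0)
print(f"projected 2000-2100 rise: {rise:.0f} mm")  # about 350 mm
```

The quadratic and linear terms contribute 30 mm and 330 mm respectively, and the harmonic term removes roughly 11 mm, consistent with the central value quoted in the text.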
![image](figure5){width="15cm"} Multi-linear regression models and the multicollinearity problem: estimates of the solar signature on climatic records ====================================================================================================================== A number of authors have studied global surface temperature records using multilinear regression models to identify the relative contribution of known forcings to the earth’s temperature field. For example, @Douglass and @Gleisner interpreted temperature records for the period 1980–2002 using four regression constructors: an 11yr solar cycle signal without any trend, the volcano signal, the ENSO signal, which captures fast climatic fluctuations, and a linear trend that can capture everything else responsible for the 1980–2002 warming trend, including the warming component induced by anthropogenic greenhouse gases (GHG). The four chosen constructors are sufficiently geometrically orthogonal and physically independent of each other. Geometrical orthogonality and physical independence are necessary conditions for efficiently decomposing a signal using multilinear regression models. On the contrary, multilinear regression models may produce seriously misleading and inconclusive results if used with constructors that are multicollinear with each other. In fact, it is well known that in the presence of multicollinearity among the regression predictors, the estimated regression coefficients may change quite erratically in response to even minor changes in the model or the data, yielding misleading interpretations. An improper application of the multilinear regression method is found in @BS09, indicated herein as BS09. 
These authors aimed to demonstrate that the increased solar activity during the 20th century contributed only 7% of the observed global warming from 1900 to 2000 (about $0.056$$^\circ$C out of a total warming of $0.8$$^\circ$C), as commonly found with general circulation models [@Hansen2001; @Hansen2007; @IPCC2007]. To do this, BS09 adopted a linear regression model of the global surface temperature that uses as constructors the 10 forcing functions of the GISS ModelE [@Hansen2007]; these 10 forcing functions are depicted in Fig. 5a. However, as I will demonstrate below, BS09’s approach is neither appropriate nor sufficiently robust because their chosen constructors satisfy neither the geometrical orthogonality nor the physical independence requirements. This two-fold failure is seen in a number of ways. The first way BS09's multilinear regression fails is mathematical. The predictors of a multilinear regression model must be sufficiently linearly independent, i.e. it should not be possible to express any predictor as a linear combination of the others. On the contrary, all 10 forcing functions used as predictors in BS09, with the exception of the volcano one, present a quasi monotonic trend (positive or negative) during the 20th century [@Hansen2007]. These smooth trends are geometrically quite collinear with one another. Thus, these forcing functions are strongly non-orthogonal and strongly cross-correlated. This is demonstrated in Table 1, where the cross-correlation coefficients among the ten forcing functions depicted in Fig. 5a from 1900 to 1999 are reported. The table clearly indicates that, with the exception of the volcano forcing, all other forcing functions are strongly (positively or negatively) cross-correlated ($|r|>0.65$ over just 100yr and in most cases $|r|>0.95$, which indicates an almost 100% cross-correlation). 
The strong cross-correlation among 9 out of 10 constructors makes BS09's multilinear regression model extremely sensitive to data errors and to the number of constructors. Paradoxically, even a multilinear regression model that does not use the well-mixed GHG forcing ($F_\mathrm{GHGs}$) at all, which also includes the CO$_{2}$ and CH$_{4}$ greenhouse records, would fit the temperature data well with appropriate regression coefficients, by virtue of the extremely good multicollinearity that the $F_\mathrm{GHGs}$ record has with eight other forcing functions, as Table 1 shows. This is demonstrated in Fig. 5b, where the GISTEMP global surface temperature record [@Hansen2001] is fit with two multilinear regression models of the type: $$T(t)=\sum_{i=1}^{N}\beta_{i}F_{i}(t)+c,\label{eq:3}$$ where $T(t)$ is the temperature record to be constructed, $\beta_{i}$ are the linear regression coefficients and $F_{i}(t)$ are the 10 forcing functions used in BS09. Model 1 uses all ten forcing functions, as in BS09; Model 2 uses nine forcing functions, with the well-mixed GHG forcing ($F_\mathrm{GHGs}$) excluded. The regression coefficients of the two models are reported in Table 2. Moreover, to demonstrate the sensitivity of the regression algorithm to even small changes of the data, I repeated the calculation and reported in the last two columns of Table 2 (labeled with “tr”) the regression coefficients obtained with the same two models using forcing functions truncated at 2 decimal digits (the original functions have 4 decimal digits). Figure 5b clearly shows that Model 1 and Model 2, in both the truncated and untruncated cases, perform almost identically, despite the fact that the individual regression coefficients reported in Table 2 are very different from each other in the four cases; the statistical errors associated with these regression coefficients are therefore very large. 
[lrrrrrrrrrr]{} & $F_\mathrm{GHGs}$ & $F_\mathrm{O_{_3}}$ & $F_\mathrm{H_{_2}O}$ & $F_\mathrm{Sun}$ & $F_\mathrm{land}$ & $F_\mathrm{snow}$ & $F_\mathrm{Aer}$ & BC & $F_\mathrm{Refl}$ & AIE\ $F_\mathrm{GHGs}$ & 1 & 0.94 & 0.99 & 0.65 & $-$0.9 & 0.99 & $-$0.3 & 0.99 & $-$0.99 & $-$0.97\ $F_\mathrm{O_{_3}}$ & 0.94 & 1 & 0.98 & 0.74 & $-$0.97 & 0.96 & $-$0.32 & 0.96 & $-$0.98 & $-$0.99\ $F_\mathrm{H_{_2}O}$ & 0.99 & 0.98 & 1 & 0.71 & $-$0.95 & 0.99 & $-$0.31 & 0.99 & $-$1 & $-$1\ $F_\mathrm{Sun}$ & 0.65 & 0.74 & 0.71 & 1 & $-$0.83 & 0.66 & $-$0.11 & 0.66 & $-$0.7 & $-$0.75\ $F_\mathrm{land}$ & $-$0.9 & $-$0.97 & $-$0.95 & $-$0.83 & 1 & $-$0.92 & 0.26 & $-$0.91 & 0.94 & 0.97\ $F_\mathrm{snow}$ & 0.99 & 0.96 & 0.99 & 0.66 & $-$0.92 & 1 & $-$0.34 & 1 & $-$0.99 & $-$0.98\ $F_\mathrm{Aer}$ (volcano) & $-$0.3 & $-$0.32 & $-$0.31 & $-$0.11 & 0.26 & $-$0.34 & 1 & $-$0.33 & 0.32 & 0.3\ BC & 0.99 & 0.96 & 0.99 & 0.66 & $-$0.91 & 1 & $-$0.33 & 1 & $-$0.99 & $-$0.98\ $F_\mathrm{Refl}$ & $-$0.99 & $-$0.98 & $-$1 & $-$0.7 & 0.94 & $-$0.99 & 0.32 & $-$0.99 & 1 & 0.99\ AIE & $-$0.97 & $-$0.99 & $-$1 & $-$0.75 & 0.97 & $-$0.98 & 0.3 & $-$0.98 & 0.99 & 1\ Because it is also possible to equally well reconstruct the temperature record with Model 2, the methodology adopted by BS09 could also be used to demonstrate that the anthropogenic greenhouse gases such as CO$_{2}$ and CH$_{4}$ are irrelevant for explaining the global warming observed from 1900 to 2000. Moreover, for physical considerations the regression coefficients must be positive, but the regression algorithm finds also negative values, which is another effect of the multicollinearity of the predictors. This result clearly demonstrates the non-robustness and the physical irrelevance of the multilinear regression model methodology implemented in BS09 and, indirectly, also questions their conclusion that the solar activity increase during the 20th century contributed only $\sim7$% of the total warming. 
The results of the linear regression model used by BS09 would also strongly depend on the specific total solar irradiance record used as constructor. BS09 used a model by @Lean2000, which poorly correlates with the temperature, and they reached a result equivalent to that of @Lean2009, who used an updated solar model. However, total solar irradiance records are highly uncertain, and other solar reconstructions [e.g.: @Hoyt] correlate considerably better with the temperature records from 1900 to 2000 [e.g.: @Soon2005; @Soon2011; @Soon2013] and could reconstruct a larger percentage of the 20th century global warming by better capturing the quasi 60yr oscillation found in the temperature records. However, because the linear regression analysis requires accurate constructors and the multidecadal patterns of the solar records, as well as those of the other forcing functions, are highly uncertain, it is better to use an alternative methodology to test how well the GISS ModelE simulates the climatic solar signatures. For example, it is possible to extract the quasi 11yr solar cycle signatures from a set of climatic records and compare the results against the GISS ModelE predictions. I adopted the method proposed by @Douglass and @Gleisner that uses only four constructors (as already explained above) for the period 1980–2003: $$T(t)=\alpha_{V}V(t)+\alpha_{S}S(t)+\alpha_{E}E(t)+a(t-1980)+b.\label{eq:4}$$ The function $V(t)$ is the monthly-mean optical thickness at 550nm associated with the volcano signal; $S(t)$ is the 10.7cm solar flux record, which is a good proxy for the 11yr modulation of the solar activity (not for the multi-decadal trend); $E(t)$ is the ENSO signal (lag-shifted by four months for autocorrelation reasons, as indicated in [@Gleisner]); and the linear trend captures any linear warming trend the data may present, which may be due to multiple physical causes such as anthropogenic GHG forcings. 
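A minimal sketch of the four-constructor fit of Eq. (\[eq:4\]) is given below. The monthly series are synthetic stand-ins with known coefficients (the real $V$, $S$ and $E$ inputs would come from the cited datasets), so the sketch only shows that a weakly cross-correlated design matrix recovers the coefficients reliably:

```python
import numpy as np

rng = np.random.default_rng(1)
months = 1980.0 + np.arange(288) / 12.0        # monthly sampling, 1980-2003
V = np.exp(-0.5 * ((months - 1991.5) / 0.5) ** 2)   # volcano-like pulse
S = np.sin(2 * np.pi * (months - 1980.0) / 11.0)    # 11yr solar-cycle proxy
E = np.sin(2 * np.pi * (months - 1980.0) / 3.7)     # ENSO-like quasi-periodic proxy
E_lag = np.roll(E, 4)                               # 4-month lag shift (toy: wrap-around ignored)

X = np.column_stack([V, S, E_lag, months - 1980.0, np.ones(288)])

# Build a synthetic "temperature" with known coefficients, then recover them.
true = np.array([-3.0, 0.06, 0.13, 0.016, -0.28])
T = X @ true + 0.01 * rng.standard_normal(288)
beta, *_ = np.linalg.lstsq(X, T, rcond=None)
print(np.round(beta, 3))
```

Because the volcano pulse, the 11yr cycle, the ENSO-like signal and the linear trend are nearly orthogonal, the recovered coefficients are stable, in contrast with the 10-constructor case.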
[lrrrr]{} & Model 1 & Model 2 & Model 1 & Model 2\ &&&(tr)&(tr)\ $F_\mathrm{GHGs}$ &1.27&0.00& 0.34 & 0.00\ $F_\mathrm{O_{_3}}$ &11.0&2.47& $-$5.39 & $-$7.74\ $F_\mathrm{H_{_2}O}$ &10.2&65.2& 2.86 & 3.54\ $F_\mathrm{Sun}$ &0.30&0.31& 0.54 & 0.52\ $F_\mathrm{land}$ &$-$32.3&$-$13.6& $-$4.88 & $-$3.04\ $F_\mathrm{snow}$ &$-$10.1&$-$7.57& 7.19 & 7.80\ $F_\mathrm{Aer}$ (volcano) &0.05&0.06& 0.04 & 0.05\ $F_\mathrm{BC}$ &13.3&8.58& $-$4.59 & $-$5.45\ $F_\mathrm{Refl}$ &6.12&4.20& $-$0.97 & $-$2.46\ $F_\mathrm{AIE}$ &8.91&4.78& $-$0.38 & $-$0.68\ const “$c$” &0.00&$-$0.05& $-$0.42 & $-$0.45\ [lrrrr]{} & Volcano & Sun & ENSO & Linear\ Volcano & 1 & 0.01 & 0.42 & $-$0.24\ Sun & 0.01 & 1 & $-$0.22 & $-$0.05\ ENSO & 0.42 & $-$0.22 & 1 & $-$0.16\ Linear & $-$0.24 & $-$0.05 & $-$0.16 & 1\ Table 3 reports the cross-correlation coefficient matrix among these four constructors. The cross-correlation coefficients are significantly smaller than those found in Table 1. In particular, the cross-correlation coefficients involving the 11yr solar cycle constructor with the other three constructors are very small: $r=0.01$, $r=-0.22$ and $r=-0.05$, respectively. Thus, this simpler regression model is expected to be mathematically more robust than that adopted in BS09. The 1980–2003 period is used to keep the number of fitting parameters to a minimum. The model (Eq. \[eq:4\]) is used to fit three MSU temperature $T(t)$ records [@Christy]: Temperature Lower Troposphere (TLT, MSU 2); Temperature Middle Troposphere (TMT, MSU 2); Temperature Lower Stratosphere (TLS, MSU 4). The evaluated regression coefficients are recorded in Table 4. Figure 6a shows the three original volcano, solar and ENSO sequences. Figure 6b shows the three regression models against the MSU temperature records. Figure 6c shows the reconstructed 11yr cycle solar signatures in the TLT, TMT and TLS records. Finally, Fig. 
6d shows the GISS ModelE reconstruction of the solar signatures from the ground surface to the lower stratosphere. The comparison between Fig. 6c and d stresses the striking discrepancy between the empirical findings and the GISS ModelE predictions for the 11yr solar cycle signatures on climatic records. The empirical analysis shows that the peak-to-trough amplitude of the response to the 11yr solar cycle globally is estimated by the regression model to be approximately $0.12$$^\circ$C near the earth’s surface and rises to 0.3–0.4$^\circ$C at the lower stratosphere. This result agrees with what was found by other authors [@Coughlin; @Crooks; @Gleisner; @Haigh; @Labitzke; @van; @Loon; @Scafetta2005; @Scafetta2009b; @White]. On the contrary, the GISS ModelE predicts a peak-to-trough amplitude of the climatic response to the solar cycle globally of $\sim0.03$$^\circ$C near the ground, rising to $\sim0.05$$^\circ$C at the lower stratosphere (MSU4). Consequently, the GISS ModelE climate simulations significantly underestimate the empirical findings by a factor of $\sim3$ or $4$ for the surface measurements, up to a factor of $\sim8$ for the lower stratosphere measurements. [lrrr]{} & TLT & TMT & TLS\ $\alpha_\mathrm{V}$ & $-$3.18 & $-$2.31 & 8.94\ $\alpha_\mathrm{S}$ & $1.07 \times 10^{-4}$ & $1.25 \times 10^{-4}$ & $2.86 \times 10^{-4}$\ $\alpha_\mathrm{E}$ & 0.131 & 0.139 & 0.0098\ $a$ &0.016 & 0.011& $-$0.027\ $b$ &$-$0.28 & $-$0.28& $-$0.37\ A low response of the climate system to solar changes is not peculiar to the GISS ModelE alone, but appears to be a common characteristic of present-day climate models. 
For example, the predicted peak-to-trough amplitude of the global surface climate response to the 11yr solar cycle is about $0.025$$^\circ$C in Crowley’s ([-@Crowley]) linear upwelling/diffusion energy balance model; it is about $0.03$$^\circ$C in Wigley’s MAGICC energy balance model [@Foukal2004; @Foukal2006]; it is just a few hundredths of a degree in several other energy balance models analyzed by @North2004. To correct this situation, feedback mechanisms and solar inputs other than the total solar irradiance forcing alone should eventually be incorporated into climate models such as those adopted by the @IPCC2007. Possible candidates are a cosmic-ray modulation of the cloud system that alters the albedo [@Kirkby; @Svensmark2007; @Svensmark2009], mechanisms related to UV effects on the stratosphere, and others. For example, @Solomon estimated that stratospheric water vapor has largely contributed both to the warming observed from 1980 to 2000 (by 30%) and to the slight cooling observed after 2000 (by 25%). This study reinforced the idea that climate change is more complex than just a reaction to added CO$_{2}$ and a few other anthropogenic forcings. The causes of stratospheric water vapor variation are not yet understood. Perhaps stratospheric water vapor is driven by UV solar irradiance variations through ozone modulation, and works as a climate feedback to solar variation [@Stuber]. Ozone variation may also be driven by cosmic rays [@Lu2009a; @Lu2009b]. ![image](figure6){width="17cm"} However, the BS09 regression model is also not physically meaningful for another important reason. Secular-long climatic sequences cannot be modeled using a linear regression model that directly adopts as linear predictors the radiative forcing functions, as done in BS09, because the climate processes the forcing functions non-linearly, deforming their geometrical shape through its heat capacity. 
See the discussion in @Crowley, @Scafetta2005 [@Scafetta2006a; @Scafetta2006b; @Scafetta2007] and @Scafetta2009b. Essentially, an input radiative forcing function and the corresponding modeled temperature output function do not have the same geometrical shape because each frequency band is processed in a different way (e.g. high frequencies are damped while low frequencies are stretched), and multilinear regression models are extremely sensitive to the shape of the constructors. The same critique applies to @Lean2009, who also adopted forcing functions as temperature linear predictors to interpret the 20th century warming. The above problem may be circumvented by using a regression model that uses as predictors the theoretical climatic fingerprints of the single forcing functions once processed by an energy balance model, which approximately simulates the climate system, as proposed for example in @Hegerl. A simple first-approximation choice may be a regression model of the temperature of the type: $$T(t)=\alpha_{T{_\mathrm{v}}}T_\mathrm{V}(t)+\alpha_{T{_\mathrm{s}}}T_\mathrm{S}(t)+\alpha_{T{_\mathrm{a}}}T_\mathrm{A}(t)+c,\label{eq:model}$$ where $T_\mathrm{V}(t)$, $T_\mathrm{S}(t)$ and $T_\mathrm{A}(t)$ are the outputs of an energy balance model forced with the volcano, solar and anthropogenic (GHG plus Aerosol) forcing functions, respectively, and $c$ is a constant. The rationale is the following. Energy balance models provide just a rough modeling of the real climatic feedbacks and processes involved in a specific forcing. What the regression model does is estimate the signal amplitudes $\alpha$ (unitless) as scaling factors by which the energy balance model simulations need to be scaled for best agreement with observations [@Hegerl]. This scaling process also makes the final results, to first approximation, independent of the specific energy balance model used to produce the constructors. 
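The scaling-factor estimation of Eq. (\[eq:model\]) can be sketched numerically. The three "fingerprint" series below are synthetic stand-ins, not Crowley's actual energy-balance-model outputs; the point is only that the regression recovers amplitude rescalings of already-processed temperature fingerprints:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(1000.0, 2000.0, 1001)                       # annual, 1000-2000

# Synthetic stand-ins for energy-balance-model temperature outputs:
T_V = -0.2 * rng.random(1001) * (rng.random(1001) < 0.03)   # sparse volcanic cooling spikes
T_S = 0.1 * np.sin(2 * np.pi * t / 1000.0)                  # millennial solar-cycle fingerprint
T_A = 0.5 / (1.0 + np.exp(-(t - 1950.0) / 30.0))            # hockey-stick-like anthropogenic rise

# Synthetic "observed" temperature built with known scaling factors.
X = np.column_stack([T_V, T_S, T_A, np.ones_like(t)])
true = np.array([0.7, 3.0, 0.45, -0.30])
T_obs = X @ true + 0.02 * rng.standard_normal(1001)

alpha, *_ = np.linalg.lstsq(X, T_obs, rcond=None)
print(np.round(alpha, 2))   # scaling factors, cf. 0.7, 3.0, 0.45, -0.30
```

Over a 1000yr window the millennial solar fingerprint and the late anthropogenic rise are nearly orthogonal, which is why the scaling factors are well determined here, unlike in the 1750–2010 case discussed below.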
In particular, the scaling factor is important for determining, to first approximation, the climatic contribution of the overall solar variation which, as explained above, likely involves additional forcings (cosmic rays, UV, etc.) that are geometrically similar to the total solar irradiance forcing function but are not yet explicitly included in the climate models. Indeed, the multiple solar forcings are very likely quasi-multicollinear, which allows the regression model, Eq. (\[eq:model\]), to approximately estimate, through the scaling factor $\alpha_{T{_\mathrm{s}}}$, their overall effect by using only a theoretical climatic fingerprint of one of them. ![image](figure7){width="17cm"} Because both solar and anthropogenic forcing functions have been increasing since about 1700 (since the Maunder solar minimum and a cold period of the Little Ice Age), to reduce the multicollinearity among the constructors the regression model should be run against 1000yr-long temperature records, to better take advantage of the geometrical orthogonality between the millennial solar cycle [e.g.: @Bond; @Kerr; @Ogurtsov; @Scafetta2012c] and the GHG records. Note that the GHG forcing functions show only a small preindustrial variability and reproduce the shape of a hockey stick [@Crowley]. I observe that this crucial point was also not recognized by @Rohde [figure 5], who used a regression model of the temperature from 1750 to 2010 with predictors equivalent to the forcing functions, as in BS09. 
In the present example I used the output functions produced by the linear upwelling/diffusion energy balance model of @Crowley [figure 3A] in the following way: the volcano output is used as a candidate for the volcano-related constructor; the GHG and Aerosol outputs are summed to obtain a comprehensive anthropogenic constructor function; the three solar outputs, which for the 20th century use the @Lean2000 solar model, are averaged to obtain an average solar constructor function. Note that @Crowley and @Hegerl compared their models against *hockey-stick* temperature reconstructions such as that proposed by @Mann1999, which showed very little preindustrial variability compared with the post-1900 global warming, and found a relatively small solar signature on climate. However, since 2005, novel paleoclimatic temperature reconstructions have demonstrated a far greater preindustrial variability made of a large millennial cycle with an average cooling from the Medieval Warm Period to the Little Ice Age of about 0.7$^\circ$C [@Moberg; @Mann2008; @Ljungqvist; @Christiansen]; the latter cooling is about 3–4 times greater than that shown by the *hockey-stick* temperature graphs. The example uses the reconstruction of @Moberg, assumed to represent a global estimate of the surface temperature, merged in 1850–1900 with the instrumental global surface temperature record HadCRUT4 [@Morice]. The results of the analysis are shown in Fig. 7a and b. The evaluated scaling coefficients using Eq. (\[eq:model\]) are $\alpha_{T{_\mathrm{v}}}=0.7$; $\alpha_{T{_\mathrm{s}}}=3.0$; $\alpha_{T{_\mathrm{a}}}=0.45$; $c=-0.30$$^\circ$C. Figure 7a shows the rescaled energy balance model simulations relative to the three components (volcano, solar and anthropogenic). Figure 7b shows the model, Eq. (\[eq:model\]), against the chosen temperature record, and a good fit is found. 
According to the proposed model, from 1900 to 2000 the solar component contributed $\sim0.35$$^\circ$C (44%) of the total $\sim0.8$$^\circ$C warming. However, this is likely a low estimate because a fraction (perhaps 10–20%) of the post-1900 GHG increase may have been a climatic feedback to the solar-induced warming itself, through CO$_{2}$ and CH$_{4}$ released by up-welled water degassing, permafrost melting and other mechanisms. Thus, more likely, the sun may have contributed at least about 50% of the 20th century warming, as found in other empirical studies [e.g.: @Eichler; @Scafetta2007; @Scafetta2009b] that more properly interpreted the climate system response to solar changes by using long records since 1600AD and by also taking into account the scale-by-scale response of the climate system to solar inputs. Indeed, although the regression model is admittedly rough, Fig. 7a suggests that solar activity and anthropogenic forcings could have contributed almost equally to the global warming observed from 1900 to 2000. By comparison, Fig. 7c and d show what the situation would have been if the solar contribution to the 20th century warming were only 7% of the total, that is, about $0.06$$^\circ$C out of $0.8$$^\circ$C, as claimed by BS09 and also by the GISS ModelE (see Fig. 6d). Here, I forced the solar component of the regression model to reproduce such a claim, which necessitates a rescaling of Crowley’s solar output $T_\mathrm{S}(t)$ by a factor $\alpha_{T{_\mathrm{s}}}\approx0.5$. Then, a modification of the regression model of Eq. (\[eq:model\]) can be used: $$T(t)-0.5 \cdot T_\mathrm{S}(t)=\alpha_{T{_\mathrm{v}}}T_\mathrm{V}(t)+\alpha_{T{_\mathrm{a}}}T_\mathrm{A}(t)+c.\label{eq:model-1}$$ The three regression coefficients are $\alpha_{T{_\mathrm{v}}}=0.8$, $\alpha_{T{_\mathrm{a}}}=1.1$ and $c=-0.32$$^\circ$C. As Fig. 
7c shows, in this case almost the entire 20th century global warming would be interpreted as due to anthropogenic forcings, as all general circulation models of the @IPCC2007 and also @Lean2009 have claimed. However, as Fig. 7d clearly highlights, the same model, Eq. (\[eq:model-1\]), fails to reproduce the data before 1750 by missing the great millennial oscillation that generated both the Medieval Warm Period (1000–1400) and the Little Ice Age (1400–1750). Equation (\[eq:model-1\]) just reproduces a hockey-stick shape that would only agree well with the outdated paleoclimatic reconstruction by @Mann1999, as also originally found by @Crowley. Note that a good agreement between the model, Eq. (\[eq:model-1\]), and the data since 1750 exists, which is the same result found by @Rohde [figure 5] with another regression model. These authors concluded that almost all warming since 1750 was induced by anthropogenic forcing. However, the @Rohde result is also not robust because the 260yr interval they used (1750–2010) is too short a period, during which both the solar and the anthropogenic forcing functions are collinear (both increased); a regression model therefore cannot properly separate the two signals. In conclusion, it is evident that the large preindustrial millennial variability shown by recent paleoclimatic temperature reconstructions implies that the sun has a strong effect on the climate system, and that its real contribution to the 20th century warming is likely about 50% of the total observed warming. This estimate is clearly incompatible with BS09’s estimate of a solar contribution limited to a mere 7% of the 20th century global warming. The result is also indirectly confirmed by the results depicted in Fig. 6, which demonstrate that the GISS ModelE severely underestimates the solar fingerprints on climatic records by a large factor. 
For equivalent reasons, by claiming a very small solar effect on climate, the general circulation models used by the @IPCC2007 and the regression models proposed by @Rohde and @Lean2009 would be physically compatible only with the outdated hockey-stick paleoclimatic temperature graphs [e.g. @Crowley; @Mann1999] if they were extended back to 1000AD. However, by doing so, they would fail to reproduce the far larger (by a factor of 3 to 4, at least) preindustrial climatic variability revealed by the most recent paleoclimatic temperature reconstructions [e.g.: @Christiansen; @Kobashi2013; @Ljungqvist; @Mann2008; @Moberg].

Maximum overlap discrete wavelet transform, Gibbs artifacts and boundary methods
================================================================================

A technique commonly used to extract structure from complex time series is the maximum overlap discrete wavelet transform (MODWT) multiresolution analysis (MRA) [@Percival]. This methodology decomposes a signal $X(t)$ at the $J$th order as follows: $$X(t)=S_{J}(t)+\sum_{j=1}^{J}D_{j}(t),\label{eq:5}$$ where $S_{J}(t)$ works as a low-pass filter and captures the smooth modulation of the data with time scales larger than $2^{J+1}$ units of the time interval $\Delta t$ at which the data are sampled. The detail function $D_{j}(t)$ works as a band-pass filter and captures local variation with periods approximately ranging from $2^{j}\Delta t$ to $2^{j+1}\Delta t$. The technique can be used to model the climatic response to the solar forcing at different temporal scales; the results are later combined to obtain the temperature signature induced by solar forcing, as proposed in @Scafetta2005 [@Scafetta2006a; @Scafetta2006b]. ![image](figure8){width="14cm"} ![Reproduction and comment of @BS09’s figure 6 **(A)** and figure 7 **(B)**. 
The upper panel shows a total solar irradiance model [@Lean2000]; the bottom panel shows its temperature signature produced with the MODWT decomposition at scales larger than the decadal one. The red circles highlight the physical incongruity of the misapplication of the MODWT method by showing that, despite the increasing solar activity (upper panel), the wavelet-processed curve points downward because of Gibbs artifacts.](figure9){width="8.5cm"} However, the MODWT technique needs to be applied with care because the MODWT pyramidal algorithm is periodic [@Percival]. This characteristic implies that, to properly decompose a non-stationary time series such as solar and temperature records, the original series must be doubled by reflection so that the two extremes of the new, doubled sequence are periodically continuous: that is, if the original sequence runs from A to B (let us indicate it as “A-B”), it must be doubled to form a sequence of the type “A-BB-A”, which is periodic at the extremes. This boundary method is called “reflection”. If this trick is not applied and the original sequence is processed with the default periodic boundary method, MODWT interprets the sequence as “A-BA-B” and models the “BA” discontinuities at the extremes, producing Gibbs ringing artifacts that invalidate the analysis and its physical interpretation. A serious misapplication of the MODWT methodology is found in @BS09, who questioned the MODWT results of temperature and solar records found in @Scafetta2005 [@Scafetta2006a; @Scafetta2006b] because they were not able to reproduce them. However, as I will demonstrate below, BS09 misapplied the MODWT by using the periodic boundary method instead of the required reflection one. This error could have been easily recognized by a careful analysis of their results, which were clearly anomalous. Let us discuss the case. 
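The reflection boundary trick can be sketched in a few lines. This toy example (not an actual MODWT run) only shows how the doubling removes the wrap-around discontinuity that a periodic algorithm would otherwise see:

```python
import numpy as np

# A non-stationary, upward-trending series "A-B".
x = np.linspace(0.0, 1.0, 8) ** 2

# Reflection boundary: double it into "A-BB-A".
x_refl = np.concatenate([x, x[::-1]])

# A periodic algorithm sees the last and first samples as neighbours.
jump_raw = abs(x[-1] - x[0])          # large "BA" jump -> Gibbs ringing
jump_refl = abs(x_refl[-1] - x_refl[0])   # zero: extremes now match
print(jump_raw, jump_refl)
```

The raw series has a unit-sized wrap-around jump, while the reflected series is continuous at both extremes (and at the interior B-B junction), which is why the periodic MODWT algorithm must be fed the doubled sequence.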
Figure 8 reproduces BS09’s figure 4, which decomposes with MODWT both a total solar irradiance record [@Lean2000] and the GISTEMP global surface temperature record [@Hansen2001]. Figure 8, however, plots BS09’s figure 4 twice, merging the 2000 border with the 1900 border side-by-side. As evident in the figure, the original sequences present discontinuities at the borders of the type “A-BA-B” due to their upward trend. However, the MODWT decomposed curves, that is, the pink and blue curves, are continuous at the borders. This pattern is generated by the MODWT when the default periodic boundary method is applied. Consequently, very serious Gibbs artifacts are observed in the decomposed curves, as evident in the large oscillations present in the pink and blue curves in the proximity of the borders in 1900 and in 2000. These artifacts are also evident in the bottom panel of Fig. 8. The consequences of the error are quite serious. The Gibbs artifacts induce a large artificial volatility in the decomposed component signals. Moreover, they generated a serious physical incongruity, highlighted in Fig. 9a and b. These figures reproduce BS09’s figures 6 and 7, respectively. Here, a total solar irradiance model [@Lean2000] (Fig. 9a) and its temperature signature reconstruction using the MODWT decomposition methodology of @Scafetta2006a (Fig. 9b) are depicted, respectively. Figure 9b shows that with the MODWT methodology the solar contribution to the 20th century global warming is about $0.3$$^\circ$C (38%) of the total warming. However, the exact result would be larger had the MODWT not been misapplied, and would have fully confirmed @Scafetta2006a [@Scafetta2006b]. In fact, the added red circles in Fig. 9 highlight that the solar activity increased from 1995 to 2000 (Fig. 9a), while its reconstructed temperature signature (Fig. 9b) points downward during the same period, which is unphysical. 
This pattern was due to the fact that the algorithm, as applied by BS09, processed the signal with the default periodic boundary method and generated large Gibbs artifacts in trying to merge the starting and ending points of the record; consequently, it bent the last decade of the reconstructed solar climatic signature downward. Note that in @Scafetta2006a [compare figures 2 and 3], where the MODWT was applied correctly using the reflection boundary method, this physical incongruity does not exist. @Scafetta2006a [@Scafetta2006b; @Scafetta2007] and @Scafetta2009b are also consistent with the results discussed in Sect. 3 and Fig. 7, where it was determined with an independent methodology that the sun contributed about 50% of the global warming from 1900 to 2000. Let us discuss how to correctly apply the MODWT methodology, as used in @Scafetta2005 [@Scafetta2006a], to capture the 11yr and 22yr solar cycle signatures. In addition to the reflection method, required for mathematical reasons, the MODWT methodology has to be used under the following two conditions for physical reasons: (1) the data record needs to be resampled in such a way that the center of the wavelet band-pass filter is located exactly on the 11 and 22yr solar cycles, which are the frequencies of interest; (2) a reasonable choice of the year at which the reflection is made, here the year 2002–2003, when the sun experienced a maximum of both the 11yr and 22yr cycles; this further reduces the discontinuity in the derivative at the border that remains even when MODWT is applied with the reflection method. ![image](figure10){width="13cm"} Point (1) was accomplished by observing that the 11yr cycle (132 months) would fall within the frequency band captured by the wavelet detail $D_{7}(t)$, corresponding to the band between $2^{7}=128$ and $2^{8}=256$ months, that is, from 10.7 to 21.3yr. 
Thus, by adopting monthly sampling, the 11yr cycle would not be centered in $D_{7}(t)$, nor the 22yr cycle in the wavelet detail $D_{8}(t)$. This would cause an excessive splitting of the 11yr modulation between the adjacent detail curves $D_{6}(t)$ and $D_{7}(t)$, and of the 22yr modulation between the adjacent detail curves $D_{7}(t)$ and $D_{8}(t)$. Consequently, to optimize the filter it was necessary to adjust the time step of the time sequence in such a way that the 11yr and 22yr cycles fell exactly in the middle of the corresponding wavelet detail bands. This was done by adjusting the time sampling of the record to $\Delta t=132/192=0.6875$ month or to $\Delta t=11/12=0.9167$yr, depending on whether the original sequence has a monthly or annual resolution, respectively. This time step adjustment was accomplished with a simple linear interpolation of the original sequence. With the new resolution $\Delta t=0.6875$ month, the detail curve $D_{7}(t)$ covers the timescales 7.3–14.7yr (median 11yr), and the detail curve $D_{8}(t)$ covers the timescales 14.7–29.3yr (median 22yr). This time step is necessary to optimally extract the 11yr and 22yr modulations from the data. 
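The time-step adjustment can be sketched as follows. The toy series below is a pure 11yr cycle (not an actual climatic record); the resampling uses the same linear interpolation as described in the text:

```python
import numpy as np

# 100yr of monthly samples of a pure 11yr (132-month) cycle.
months = np.arange(1200.0)
signal = np.sin(2 * np.pi * months / 132.0)

# Resample at dt = 132/192 = 0.6875 month via linear interpolation,
# so that one 132-month cycle spans exactly 192 new samples.
dt = 132.0 / 192.0
t_new = np.arange(0.0, months[-1], dt)
resampled = np.interp(t_new, months, signal)

period_in_samples = 132.0 / dt   # 192 samples: inside the D7 band (128-256)
print(period_in_samples)
```

After resampling, the 11yr cycle corresponds to 192 samples per period, which lies at the center of the $D_{7}$ dyadic band of 128–256 samples; the 22yr cycle likewise lands at the center of the $D_{8}$ band.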
[llll]{} & Reflection (correct) & Periodic (incorrect) & BS09\ $A_\mathrm{8,temp}$ & $0.057\pm0.0015$$^\circ$C & $0.193\pm0.002$$^\circ$C & $0.18$$^\circ$C\ $A_\mathrm{8,sun}$ & $0.226\pm0.003$Wm$^{-2}$ & $0.266\pm0.0025$Wm$^{-2}$ & $0.32$Wm$^{-2}$\ $A_\mathrm{8,temp}/A_\mathrm{8,sun}$ & $0.252\pm0.010$$^\circ$CW$^{-1}$m$^2$ & $0.726\pm0.014$$^\circ$CW$^{-1}$m$^2$ & $0.56$$^\circ$CW$^{-1}$m$^2$\ $A_\mathrm{7,temp}$ & $0.112\pm0.005$$^\circ$C & $0.077\pm0.010$$^\circ$C & $0.14$$^\circ$C\ $A_\mathrm{7,sun}$ & $0.872\pm0.008$Wm$^{-2}$ & $0.292\pm0.021$Wm$^{-2}$ & $0.45$Wm$^{-2}$\ $A_\mathrm{7,temp}/A_\mathrm{7,sun}$ & $0.128\pm0.007$$^\circ$CW$^{-1}$m$^2$ & $0.264\pm0.053$$^\circ$CW$^{-1}$m$^2$ & $0.311$$^\circ$CW$^{-1}$m$^2$\ Figure 10a shows the MODWT decomposition of the GISTEMP temperature record from 1900 to 2000. The thick black curves are the $D_{7}$ (bottom) and $D_{8}$ (top) wavelet detail curves obtained with the reflection method and the correct time step $\Delta t=0.6875$month. A second set of curves is also depicted in Fig. 10a; these are obtained from the same data but using cyclic boundary conditions and with $\Delta t=1$month (blue) and $\Delta t=0.6875$month (green). The blue and green curves of Fig. 10a show substantial Gibbs ringing artifacts that are so serious that they even cause an inversion of the bending of the curve. The blue curves exactly correspond to the blue (middle panel) and to the blue dashed (bottom panel) curves depicted in Fig. 8, as calculated by BS09, where these Gibbs artifacts were mistakenly identified as anomalous temperature and solar signatures. The re-analysis also demonstrates that the calculations by BS09 used the $\Delta t=1$month resolution, contrary to what they report in their figure 4. The consequences of the error of using MODWT with the default periodic method are significant. 
For example, the MODWT detail curves were used to estimate the average peak-to-trough amplitudes $A$ of the oscillations from 1980 to 2000. The blue curve for $D_{8}$ gives $A_\mathrm{8,temp}\approx0.19$$^\circ$C, while the value determined using the correct analysis is $A_\mathrm{8,temp}\approx0.06$$^\circ$C, corresponding to the one obtained in @Scafetta2005. Note also that from 1980 to 2000 the blue curve referring to $D_{8}$ is concave while the correct curve (black) is convex. Analogous problems are shown in Fig. 10b, which analyzes the total solar irradiance of @Lean2000. Again, the thick black curves are the $D_{7}$ (bottom) and $D_{8}$ (top) wavelet detail curves obtained with the reflection method and the centered time step $\Delta t=0.6875$month. The thin red lines correspond to the pink curves of Fig. 8. These latter curves are quite different from the black curves because of the Gibbs artifacts and because the $\Delta t=1$month resolution was used. In particular, notice the visibly smaller amplitude of the 11yr solar cycles relative to the correct black curve: with BS09’s methodology one would find $A_\mathrm{7,sun}=0.3$Wm$^{-2}$, while the correct amplitude since 1980 is significantly larger, $A_\mathrm{7,sun}\approx0.9$Wm$^{-2}$, as found in @Scafetta2005. Moreover, as Fig. 4a clearly shows, from 1900 to 2000 the amplitude of the 11-year solar cycle varies from about $0.5$ to $1.3$Wm$^{-2}$, and this range is clearly inconsistent with the average amplitude of $0.45$Wm$^{-2}$ calculated in BS09. The various amplitudes are listed in Table 5, which also highlights the anomalous results obtained in BS09. In summary, MODWT requires: (1) the reflection boundary method; (2) sequences sampled at specific optimized time intervals that depend on the specific application; (3) for optimal results, borders chosen to avoid discontinuities in the first derivative at the time scales of interest. 
For example, @Scafetta2005 used the period 1980–2002 because the 11yr and 22yr solar cycles were approximately at their maximum. The latter point is important because the reflection method gives optimized results when the derivative at the borders approaches zero. On the contrary, choosing the default periodic method and using sequences sampled at generic time intervals and with generic borders, as done in BS09, was demonstrated here to yield results contaminated by significant artifacts. There are other claims raised in BS09, such as those based on GISS ModelE and GISS CTL simulations. However, in Sect. 3 it has also been demonstrated that these simulations do not reproduce the solar and astronomical signatures on the climate at multiple time scales [see also @Scafetta2010; @Scafetta2012b] and would eventually agree only with the outdated hockey-stick temperature reconstructions such as those proposed by @Mann1999. Therefore, BS09’s additional arguments are of limited utility (1) because those computer simulations appear to seriously underestimate the solar signature on climate and (2) because, in any case, BS09 misapplied the MODWT methodology to analyze them.

Conclusions
===========

In this paper I have discussed a few typical examples where time series methodologies used to analyze climatic records have been misapplied. The chosen examples address relatively simple situations that yielded severe physical misinterpretations that, perhaps, could have been easily avoided. A first example addressed the problem of how to estimate accelerations in tide gauge records. It has been shown that to properly interpret the tide gauge record of New York City it is necessary to plot all available data since 1856, as done in Fig. 3b. In this way the existence of a quasi 60yr oscillation, which is evident in the global sea level record since 1700, becomes quite manifest. This pattern suggests a very different interpretation from that proposed, for example, in @Sallenger or in @Boon. 
A significantly smaller and less alarming secular acceleration in NYC was found: $a=0.006\pm0.005$mmyr$^{-2}$ against @Sallenger’s 1950–2009 and 1970–2009 accelerations $a=0.044\pm0.03$mmyr$^{-2}$ and $a=0.13\pm0.09$mmyr$^{-2}$, respectively, or Boon’s ([-@Boon]) 1969–2011 acceleration $a=0.20\pm0.07$mmyr$^{-2}$. These large accelerations simply refer to the bending of the quasi 60yr natural oscillation present in this record: see @Scafetta2013b for additional details. Thus, in NYC a more realistic sea level rise projection from 2000 to 2100 would be about $350\pm30$mm instead of $1130\pm480$mm calculated with @Sallenger’s method using the 1970–2009 quadratic polynomial fit or $1550\pm400$mm calculated with Boon’s ([-@Boon]) method using the 1969–2011 quadratic polynomial fit. Moreover, as Fig. 1a shows, by plotting the NYC tide gauge record only since 1950 (compare Figs. 1a and 3b), @Sallenger has somehow obscured the real dynamics of this record; the same critique would be even more valid for @Boon [figures 5–8] who plotted tide gauge records only starting in 1969. Global sea level may rise significantly more if during the 21st century the temperature increases abnormally by several degrees Celsius, as current general circulation models have projected [@Morice]. However, as demonstrated in Sect. 3 (e.g. Figs. 6 and 7) typical climate models used for these projections appear to significantly overestimate the anthropogenic warming effect on climate and underestimate the solar effect. Solar activity is projected to decrease during the following decades and may add a cooling component to the climate [@Scafetta2012c; @Scafetta2013]. As a consequence, it is very likely that the 21st century global temperature projections are too high, as also demonstrated in @Scafetta2012b. 
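The window dependence of quadratic-fit accelerations can be reproduced with a short numpy sketch. The record below is synthetic and purely illustrative (a 3 mmyr$^{-1}$ linear rise plus a quasi 60yr sinusoid with an invented amplitude); the acceleration is $a=2c_{2}$ from a quadratic fit $c_{2}t^{2}+c_{1}t+c_{0}$, as in the tide gauge analyses discussed above.

```python
import numpy as np

years = np.arange(1856, 2013)
# Synthetic tide gauge record (mm): linear rise plus a quasi 60 yr oscillation.
level = 3.0 * (years - 1856) + 40.0 * np.sin(2 * np.pi * (years - 1880) / 60.0)

def acceleration(y0, y1):
    """Acceleration a = 2*c2 (mm/yr^2) from a quadratic fit over [y0, y1]."""
    m = (years >= y0) & (years <= y1)
    t = years[m] - years[m].mean()      # center the time axis for stability
    c2 = np.polyfit(t, level[m], 2)[0]
    return 2.0 * c2

a_full = acceleration(1856, 2012)   # full record: near-zero acceleration
a_short = acceleration(1970, 2009)  # short window riding the oscillation
```

The full-record fit returns a near-zero acceleration, while the 1970–2009 window returns a large spurious one of a few tenths of mmyr$^{-2}$, the order of the values the text attributes to the bending of the natural oscillation.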
Because the global sea level record presents a 1700–1900 preindustrial-period acceleration compatible with the 1900–2000 industrial-period acceleration ($a\approx0.01$mmyr$^{-2}$ in both cases), there is no clear evidence that anthropogenic forcings have drastically increased the sea level acceleration during the 20th century. Thus, anthropogenic forcings may not drastically increase the sea level during the 21st century either. Extrapolation of the 1700–2000 global sea level record projects a rise of about $277\pm8$mm from 2000 to 2100, as shown in Fig. 2a. A second example addressed more extensively the problem of how to deal with multilinear regression models. Multilinear regression models are very powerful, but they need to be used with care because multicollinearity among the constructors yields meaningless physical interpretations. It has been demonstrated that the 10-constructor multilinear regression model adopted in @BS09 to interpret the 20th century global warming, and to conclude that the sun contributed only 7% of the 20th century warming, is not robust because: (1) the predictors used are multicollinear and (2) the climate is not a linear superposition of the forcing functions themselves. About the latter point, it is evident that if the climate system could be interpreted as a mere linear superposition of forcing functions (as also done in @Lean2009), there would be no need to use climate models in the first place. To demonstrate the serious artifacts generated by regression analyses in multicollinearity cases, I showed that by eliminating the predictor claimed to be most responsible for the observed global warming from 1900 to 2000, that is, the well-mixed greenhouse gas forcing function, the regression model was still able to reconstruct the temperature record equally well using the other 9 constructors. 
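The multicollinearity artifact can be reproduced in a few lines. The data below are synthetic and purely illustrative: three "forcing" predictors share the same underlying trend, and dropping the one that nominally drives the "temperature" barely changes the quality of the least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
trend = np.linspace(0, 1, n)

# Three strongly collinear predictors: the same trend plus small
# independent wiggles (illustrative stand-ins for forcing functions).
X = np.column_stack([trend + 0.02 * rng.standard_normal(n) for _ in range(3)])
y = 0.8 * trend + 0.05 * rng.standard_normal(n)  # synthetic "temperature"

def r2(X, y):
    """Coefficient of determination of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_all = r2(X, y)          # all three predictors
r2_drop = r2(X[:, 1:], y)  # drop the "dominant" predictor
```

Both fits explain the record essentially equally well, so the regression coefficients carry no reliable attribution information when the constructors are collinear.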
By using the regression model in a more appropriate way, that is, by restricting the analysis to the 1980–2003 period when the data are more accurate and using only orthogonal constructors, it was demonstrated that the GISS ModelE severely underestimates the solar effect on climate by a factor of 3 to 8, as shown in Fig. 6. By using a more physically based regression model (Eq. \[eq:model\]) it was also demonstrated that the large preindustrial temperature variability shown in recent paleoclimatic temperature reconstructions since the Medieval Warm Period implies that the sun has a strong effect on climate change and likely contributed about 50% of the 20th century warming, as found in numerous papers by Scafetta [@Scafetta2006a; @Scafetta2006b; @Scafetta2007; @Scafetta2009b; @Scafetta2010; @Scafetta2012a; @Scafetta2012b] and by numerous other authors [e.g.: @Eichler; @Hoyt; @Kirkby; @Kobashi2013; @Soon2005; @Soon2011; @Soon2013; @Svensmark2007]. These results contradict the conclusions of @BS09, @IPCC2007, @Lean2009 and Rohde et al. ([-@Rohde]) that the sun contributed little (less than 10%) to the $0.8$$^\circ$C global warming observed from 1900 to 2000. In fact, such a low solar contribution would only be consistent with the geometrical patterns present in outdated hockey-stick temperature reconstructions [e.g. those proposed in: @Crowley; @Mann1999], as shown in Fig. 7. A third example addressed the problem of how to deal with scale-by-scale wavelet decomposition methodologies, which are very useful for interpreting dynamical details in geophysical records. Evidently, there is a need to properly take into account the mathematical properties of the methodology to avoid embarrassing artifacts and physical incongruities such as those generated by Gibbs artifacts. 
For example, @BS09’s findings that the solar activity increase from 1995 to 2000 induced a cooling of the global climate, and their failure to reproduce the results of @Scafetta2005 [@Scafetta2006a; @Scafetta2006b], were just artifacts due to an improper application of the MODWT technique. @BS09 erroneously applied MODWT with the default periodic method instead of using the reflection method, as demonstrated in Figs. 8–10. Their error in applying the decomposition methodology also produced abnormally large uncertainties in their results. I have spent some time detailing how to use this technique for the benefit of readers interested in properly applying it. Highlighting these kinds of problems is important in science. In fact, while errors in scientific research are sometimes unavoidable, what most harms scientific progress is the persistence and propagation of errors. This happens when other scientists uncritically cite and use the flawed results to interpret alternative data, which yields further misinterpretations. This evidently delays scientific progress and may damage society as well. The author would like to thank the Editor and the two referees for useful and constructive comments. The author thanks Mr. Roger Tattersall for encouragement and suggestions. Benestad, R. E. and Schmidt, G. A.: Solar trends and global warming. J. Geophys. Res., 114, D14101, 2009. Bond, G., Kromer, B., Beer, J., Muscheler, R., Evans, M. N., Showers, W., Hoffmann, S., Lotti-Bond, R., Hajdas, I., Bonani, G.: Persistent solar influence on North Atlantic climate during the Holocene. Science 294, 2130-2136, 2001. Boon, J. D.: Evidence of sea level acceleration at U.S. and Canadian tide stations, Atlantic Coast, North America. J. Coastal Research 28(6), 1437-1445, 2012. Chambers, D. P., Merrifield, M. A. and Nerem, R. S.: Is there a 60-year oscillation in global mean sea level? Geophysical Research Letters 39, L18607, 2012. Christy, J. R., Spencer, R. W. 
and Braswell, W. D.: MSU Tropospheric Temperatures: Dataset Construction and Radiosonde Comparisons. J. of Atm. and Ocea. Tech. 17, 1153-1170, 2000. Coughlin, K. and Tung, K. K.: Eleven-year solar cycle signal throughout the lower atmosphere. J. Geophys. Res., 109, D21105, 2004. Christiansen, B. and Ljungqvist, F. C.: The extra-tropical Northern Hemisphere temperature in the last two millennia: reconstructions of low-frequency variability. Climate of the Past 8, 765-786, 2012. Crooks, S. A. and Gray, L. J.: Characterization of the 11-year solar signal using a multiple regression analysis of the ERA-40 dataset. J. of Climate 18, 996-1015, 2005. Crowley, T. J.: Causes of Climate Change Over the Past 1000 Years. Science 289, 270-277, 2000. Douglass, D. H. and Clader, B. D.: Climate sensitivity of the Earth to solar irradiance. Geophys. Res. Lett., 29, 1786-1789, 2002. Eichler, A., Olivier, S., Henderson, K., Laube, A., Beer, J., Papina, T., Gäggeler, H. W., Schwikowski, M.: Temperature response in the Altai region lags solar forcing. Geophys. Res. Lett. 36, L01808, 2009. Foukal, P., North, G. and Wigley, T.: A stellar view on solar variations and climate. Science 306, 68-69, 2004. Foukal, P., Fröhlich, C., Spruit, H., and Wigley, T.: Variations in solar luminosity and their effect on the Earth’s climate. Nature 443, 161-166, 2006. Gleisner, H. and Thejll, P.: Patterns of tropospheric response to solar variability. Geophys. Res. Lett. 30, 1711, 2003. Haigh, J. D.: The effects of solar variability on the Earth’s climate. Philos. Trans. R. Soc. London Ser. A 361, 95-111, 2003. Hansen, J., Ruedy, R., Sato, M., Imhoff, M., Lawrence, W., Easterling, D., Peterson, T., Karl, T.: A closer look at United States and global surface temperature change. J. Geophys. Res. 106, 23947-23963, 2001. Hansen, J., et al.: Climate simulations for 1880-2003 with GISS modelE. Clim. Dyn., 29, 661-696, 2007. Hegerl, G., Crowley, T. J., Baum, S. K., Kim, K. Y., Hyde, W. 
T.: Detection of volcanic, solar and greenhouse gas signals in paleo-reconstructions of Northern Hemispheric temperature. Geophysical Research Letters 30, 1242, 2003. Hoyt, D. V. and Schatten, K. H.: A Discussion of Plausible Solar Irradiance Variations, 1700-1992. J. Geophys. Res. 98, 18895-18906, 1993. Houston, J. R., Dean, R. G.: Sea-Level Acceleration Based on U.S. Tide Gauges and Extensions of Previous Global-Gauge Analyses. J. of Coastal Research 27, 409-417, 2011. IPCC: Solomon, S., Qin, D., Manning, M., Chen, Z., Marquis, M., Averyt, K. B., Tignor, M., Miller, H. L. (eds) in Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, (Cambridge University Press, Cambridge, 2007). Jevrejeva, S., Moore, J. C., Grinsted, A., Woodworth, P. L.: Recent global sea level acceleration started over 200 years ago? Geophysical Research Letters 35, L08715, 2008. Kerr, R. A.: A variable sun paces millennial climate. Science 294, 1431-1433, 2001. Kirkby, J.: Cosmic Rays and Climate. Surveys in Geophys. 28, 333-375, 2007. Klyashtorin, L. B., Borisov, V. and Lyubushin, A.: Cyclic changes of climate and major commercial stocks of the Barents Sea. Marine Biology Research 5, 4-17, 2009. Knudsen, M. F., Seidenkrantz, M., Jacobsen, B. H., Kuijpers, A.: Tracking the Atlantic Multidecadal Oscillation through the last 8,000 years. Nature Communications 2, 178, 2011. Kobashi, T., Severinghaus, J. P., Barnola, J. M., Kawamura, K., Carter, T., Nakaegawa, T.: Persistent multi-decadal Greenland temperature fluctuation through the last millennium. Climate Change 100, 733-756, 2010. Kobashi, T., et al.: On the Origin of Multidecadal to centennial Greenland temperature anomalies over the past 800 yr. Clim. Past 9, 583-596, 2013. Labitzke, K.: On the signal of the 11-year sunspot cycle in the stratosphere and its modulation by the quasi-biennial oscillation. J. Atmos. Solar Terr. Phys. 
66, 1151-1157, 2004. Lean, J.: Evolution of the Sun’s spectral irradiance since the Maunder Minimum. Geophys. Res. Lett. 27, 2425-2428, 2000. Lean, J. L. and Rind, D. H.: How will Earth’s surface temperature change in future decades? Geophys. Res. Lett., 36, L15708, 2009. Ljungqvist, F. C.: A new reconstruction of temperature variability in the extra-tropical Northern Hemisphere during the last two millennia. Geografiska Annaler Series A 92, 339-351, 2010. Lockwood, M.: Recent changes in solar output and the global mean surface temperature. III. Analysis of the contributions to global mean air surface temperature rise. Proc. R. Soc. A 464, 1-17, 2008. Lu, Q.-B.: Correlation between cosmic rays and ozone depletion. Phys. Rev. Lett. 102, 118501, 2009a. Lu, Q.-B.: Cosmic-ray-driven electron-induced reactions of halogenated molecules adsorbed on ice surfaces: Implications for atmospheric ozone depletion. Physics Reports 487, 141-167, 2009b. Mazzarella, A. and Scafetta, N.: Evidences for a quasi 60-year North Atlantic Oscillation since 1700 and its meaning for global climate change. Theoretical and Applied Climatology 107, 599-609, 2012. Mann, M. E., Bradley, R. S. and Hughes, M. K.: Northern hemisphere temperatures during the past millennium: Inferences, uncertainties, and limitations. Geophys. Res. Lett. 26(6), 759-762, 1999. Mann, M. E., Zhang, Z., Hughes, M. K., Bradley, R. S., Miller, S. K., Rutherford, S., Ni, F.: Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia. PNAS 105, 13252-13257, 2008. Manzi, V., Gennari, R., Lugli, S., Roveri, M., Scafetta, N. and Schreiber, C.: High-frequency cyclicity in the Mediterranean Messinian evaporites: evidence for solar-lunar climate forcing. J. of Sedimentary Research 82, 991-1005, 2012. Moberg, A., Sonechkin, D. M., Holmgren, K., Datsenko, N. 
M., Karlén, W., Lauritzen, S.-E.: Highly variable Northern Hemisphere temperatures reconstructed from low- and high-resolution proxy data. Nature 433, 613-617, 2005. Morice, C. P., Kennedy, J. J., Rayner, N. A., Jones, P. D.: Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 dataset. J. Geophys. Res. 117, D08101, 2012. Mörner, N.-A.: Changes in the Earth’s rate of rotation on an El Nino to century basis. In: Geomagnetism and Paleomagnetism (F. J. Lowes et al., eds), 45-53, Kluwer Acad. Publ., 1989. Mörner, N.-A.: The Earth’s differential rotation: hydrospheric changes. Geophysical Monographs, 59, 27-32, AGU and IUGG, 1990. Mörner, N.-A.: Some problems in the reconstruction of mean sea level and its changes with time. Quaternary International 221 (1-2), 3-8, 2010. Mörner, N.-A.: Solar wind, Earth’s rotation and changes in terrestrial climate. Physical Review & Research International 3(2), 117-136, 2013. North, G. R., Wu, Q., Stevens, M. J.: Detecting the 11-year solar cycle in the surface temperature field. In: Solar Variability and its Effect on Climate. Geophysical Monograph, vol. 141. American Geophysical Union, Washington, DC, USA, pp. 251-259, 2004. Ogurtsov, M. G., Nagovitsyn, Y. A., Kocharov, G. E., Jungner, H.: Long-period cycles of the Sun’s activity recorded in direct solar data and proxies. Solar Physics 211, 371-394, 2002. Parker, A.: Natural oscillations and trends in long-term tide gauge records from the Pacific. Pattern Recogn. Phys., 1, 1-13, 2013. Percival, D. B. and Walden, A. T.: Wavelet Methods for Time Series Analysis. Cambridge Univ. Press, New York, 2000. Rohde, R., et al.: A New Estimate of the Average Earth Surface Land Temperature Spanning 1753 to 2011. Geoinfor. Geostat.: An Overview 1, 1-7, 2013. Qian, W.-H. and Lu, B.: Periodic oscillations in millennial global-mean temperature and their causes. Chinese Science Bulletin 55, 4052-4057, 2010. 
Sallenger Jr., A. H., Doran, K. S. and Howd, P. A.: Hotspot of accelerated sea-level rise on the Atlantic coast of North America. Nature Climate Change 2, 884-888, 2012. Scafetta, N. and West, B. J.: Estimated solar contribution to the global surface warming using the ACRIM TSI satellite composite. Geophys. Res. Lett. 32, L18713, 2005. Scafetta, N. and West, B. J.: Phenomenological solar contribution to the 1900-2000 global surface warming. Geophys. Res. Lett. 33, L05708, 2006a. Scafetta, N. and West, B. J.: Phenomenological solar signature in 400 years of reconstructed Northern Hemisphere temperature record. Geophys. Res. Lett. 33, L17718, 2006b. Scafetta, N. and West, B. J.: Phenomenological reconstructions of the solar signature in the Northern Hemisphere surface temperature records since 1600. J. Geophys. Res. 112, D24S03, 2007. Scafetta, N.: Empirical analysis of the solar contribution to global mean air surface temperature change. J. of Atm. and Sol.-Terr. Phys. 71, 1916-1923, 2009. Scafetta, N.: Empirical evidence for a celestial origin of the climate oscillations and its implications. J. of Atm. and Sol.-Terr. Phys. 72, 951-970, 2010. Scafetta, N.: A shared frequency set between the historical mid-latitude aurora records and the global surface temperature. J. of Atm. and Sol.-Terr. Phys. 74, 145-163, 2012a. Scafetta, N.: Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models. J. of Atm. and Sol.-Terr. Phys. 80, 124-137, 2012b. Scafetta, N.: Multi-scale harmonic model for solar and climate cyclical variation throughout the Holocene based on Jupiter-Saturn tidal frequencies plus the 11-year solar dynamo cycle. J. of Atm. and Sol.-Terr. Phys. 80, 296-311, 2012c. Scafetta, N.: Does the Sun work as a nuclear fusion amplifier of planetary tidal forcing? A proposal for a physical mechanism based on the mass-luminosity relation. J. of Atm. and Sol.-Terr. Phys. 81-82, 27-40, 2012d. 
Scafetta, N. and Willson, R. C.: Planetary harmonics in the historical Hungarian aurora record (1523-1960). Planetary and Space Science 78, 38-44, 2013a. Scafetta, N.: Multi-scale dynamical analysis (MSDA) of sea level records versus PDO, AMO, and NAO indexes. Climate Dynamics, 2013. DOI: 10.1007/s00382-013-1771-3 Schulz, M. and Paul, A.: Holocene Climate Variability on Centennial-to-Millennial Time Scales: 1. Climate Records from the North-Atlantic Realm. In: Wefer, G., Berger, W. H., Behre, K.-E., Jansen, E. (Eds), Climate Development and History of the North Atlantic Realm, p. 41-54, Springer-Verlag, Berlin Heidelberg, 2002. Solomon, S., Rosenlof, K., Portmann, R., Daniel, J., Davis, S., Sanford, T., Plattner, G.-K.: Contributions of stratospheric water vapor to decadal changes in the rate of global warming. Science Express 327, 1219-1223, 2010. Soon, W.: Variable solar irradiance as a plausible agent for multidecadal variations in the Arctic-wide surface air temperature record of the past 130 years. Geophysical Research Letters 32, L16712, 2005. Soon, W., Dutta, K., Legates, D. R., Velasco, V. and Zhang, W.: Variation in surface air temperature of China during the 20th century. J. Atmos. Sol.-Terr. Phys. 73, 2331-2344, 2011. Soon, W. and Legates, D. R.: Solar irradiance modulation of Equator-to-Pole (Arctic) temperature gradients: Empirical evidence for climate variation on multi-decadal time scales. J. Atmos. Sol.-Terr. Phys. 93, 45-56, 2013. Stuber, N., Ponater, M. and Sausen, R.: Is the climate sensitivity to ozone perturbations enhanced by stratospheric water vapor feedback? Geophys. Res. Lett. 28, 2887-2890, 2001. Svensmark, H.: Cosmoclimatology: a new theory emerges. Astronomy & Geophysics 48 (1), 18-24, 2007. Svensmark, H., Bondo, T. and Svensmark, J.: Cosmic ray decreases affect atmospheric aerosols and clouds. Geophys. Res. Lett. 36, L15101, 2009. van Loon, H. and Shea, D. 
J.: The global 11-year solar signal in July-August. Geophys. Res. Lett. 27, 2965-2968, 2000. White, W. B., Dettinger, M. D. and Cayan, D. R.: Sources of global warming of the upper ocean on decadal period scales. J. Geophys. Res. 108, 3248, 2003. Woodworth, P. L., Player, R.: The Permanent Service for Mean Sea Level: an update to the 21st century. Journal of Coastal Research, 19, 287-295, 2003.
--- abstract: | Free energy landscapes provide insights into conformational ensembles of biomolecules. In order to analyze these landscapes and elucidate mechanisms underlying conformational changes, there is a need to extract metastable states with limited noise. This has remained a formidable task, despite a plethora of existing clustering methods. We propose a novel method for extracting well-defined core states from free energy landscapes. The method is based on a Gaussian mixture free energy estimator, and exploits the shape of the estimated density landscape. The core states that naturally arise from the clustering allow for detailed characterization of the conformational ensemble in a parameter-free way. The clustering quality is evaluated on three toy models with different properties, where the method is shown to consistently outperform other conventional clustering methods. Finally, the method is applied to a temperature enhanced molecular dynamics simulation of Ca^2+^-bound Calmodulin. Through the free energy landscape, we discover a pathway between a canonical and a compact state, revealing conformational changes driven by electrostatic interactions. author: - 'Annie M. Westerlund' - Lucie Delemotte bibliography: - 'zotero.bib' title: 'Parameter-free Clustering of Free Energy Landscapes with Gaussian Mixtures' --- The simulations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at PDC Centre for High Performance Computing (PDC-HPC). This work was supported by grants from the Science for Life Laboratory. The code and tutorial for estimating free energy landscapes and clustering with InfleCS are available free of charge at\ http://www.github.com/delemottelab/tutorial\_FE\_clustering.
--- author: - | [Toshiyuki Arai]{}\ [Department of Applied Mathematics, Faculty of Engineering, Yokohama National University]{}\ [Hodogaya, Yokohama 240-8501, Japan]{}\ [e-mail: t04tttt@gmail.com]{}\ \ [Choon-Lin Ho]{}\ [Department of Physics, Faculty of Core Research, Ochanomizu University]{}\ [Bunkyo-ku, Tokyo 112-8610, Japan]{}\ [Department of Physics, Tamkang University [^1]]{}\ [Tamsui 251, Taiwan (R.O.C.)]{}\ [e-mail: hcl@mail.tku.edu.tw]{}\ \ [Yusuke Ide [^2]]{}\ [Department of Information Systems Creation, Faculty of Engineering, Kanagawa University]{}\ [Kanagawa, Yokohama 221-8686, Japan]{}\ [e-mail: ide@kanagawa-u.ac.jp]{}\ \ [Norio Konno]{}\ [Department of Applied Mathematics, Faculty of Engineering, Yokohama National University]{}\ [Hodogaya, Yokohama 240-8501, Japan]{}\ [e-mail: konno@ynu.ac.jp]{}\ --- [**Abstract**]{} In this paper, we consider periodicity for space-inhomogeneous quantum walks on the cycle. For isospectral coin cases, we propose a spectral analysis. Based on the analysis, we extend the result on periodicity of the Hadamard walk to some isospectral coin cases. For non-isospectral coin cases, we consider the system that uses only one general coin at the origin and the identity coin at the other sites. In this case, we show that the periodicity of the general coin at the origin determines the periodicity for the whole system. Introduction ============ In the last two decades, the theory of quantum walk (QW) has been extensively studied by many researchers. There exist good reviews for this development, for example, Kempe [@Kempe2003], Kendon [@Kendon2007], Venegas-Andraca [@VAndraca2008; @VAndraca2012], Konno [@Konno2008b], Manouchehri and Wang [@ManouchehriWang2013], and Portugal [@Portugal2013]. In the present paper, we focus on periodicity of the time evolution operator of two-state discrete-time QWs (DTQWs) on the cycle graph. 
The periodicity of the Hadamard walk on the cycle graph was determined by Dukes [@Dukes2014] and Konno et al. [@KonnoShimizuTakei2015]. Note that the word periodicity is also used in the theory of perfect state transfer [@Godsil2012; @Coutinho2014], but we consider a slightly stronger version of periodicity in this paper. The rest of this paper is organized as follows. In Sect. 2, we give the definitions of our DTQWs and periodicity. Sections 3 and 4 are devoted to spectral analysis of the time evolution operator of our DTQWs. We note that the spectral analysis is viewed as a generalization of that of Segawa [@Segawa2013]. Corollary \[cor:UMSHadamard\] is an extension of the results given by Dukes [@Dukes2014] and Konno et al. [@KonnoShimizuTakei2015] to space-inhomogeneous coin cases. In Sect. 5, we deal with periodically arranged coin cases, which are motivated by Chou and Ho [@ChouHo2014]. Definition of the DTQWs on the cycle graph {#sect:def} ========================================== In this paper, we consider DTQWs on the cycle graph $C_{n}=(V_{n}, E_{n})$ with the vertex set $V_{n}= \{0,1,\ldots ,n-1\}$ and the edge set $E_{n}= \{(i,i+1):i\in V_{n}\ (\!\!\! \mod n)\}$. The Hilbert space of DTQWs is defined by $\mathcal{H}_{n}= \mathrm{span}\{{|i,L\rangle},{|i,R\rangle}:i\in V_{n}\}$ with state vectors ${|i,J\rangle}={|i\rangle}\otimes {|J\rangle}\ (i\in V_{n}, J\in \{L,R\})$ given by the tensor product of elements of two orthonormal bases: $\{{|i\rangle}:i\in V_{n}\}$ for the position of the walker, and $\{{|L\rangle}={}^T [1,0], {|R\rangle}={}^T [0,1]\}$ for the chirality (direction) of the motion of the walker. Here ${}^T \!\!A$ denotes the transpose of a matrix $A$. 
Now we define two types of time evolution operators $U^{MS}_{n}=S_{n}^{MS}\mathcal{C}_{n}$ and $U^{FF}_{n}=S_{n}^{FF}\mathcal{C}_{n}$ on $\mathcal{H}_{n}$ with the coin operator $\mathcal{C}_{n}$, the moving shift operator $S^{MS}_{n}$ and the flip-flop shift operator $S^{FF}_{n}$ defined as follows: $$\begin{aligned} \mathcal{C}_{n}&=\sum_{i=0}^{n-1}{|i\rangle}{\langle i|}\otimes C_{i},\\ S_{n}^{MS}{|i,J\rangle}&= \begin{cases} {|i+1,R\rangle}\ (\!\!\! \mod n)&\text{if}\ \ J=R,\\ {|i-1,L\rangle}\ (\!\!\! \mod n)&\text{if}\ \ J=L, \end{cases}\\ S_{n}^{FF}{|i,J\rangle}&= \begin{cases} {|i+1,L\rangle}\ (\!\!\! \mod n)&\text{if}\ \ J=R,\\ {|i-1,R\rangle}\ (\!\!\! \mod n)&\text{if}\ \ J=L, \end{cases}\end{aligned}$$ where $C_{i}\ (i=0,\ldots ,n-1)$ are $2\times 2$ unitary matrices. Let $X_{t}^{(n)}\in V_{n}$ be the position of our quantum walker driven by the time evolution operator $U_{n}$ ($=U_{n}^{MS}$ or $U_{n}^{FF}$) at time $t$. The probability that the walker with an initial state (unit vector) ${|\psi\rangle}\in \mathcal{H}_{n}$ is found at time $t$ at position $x$ is defined by $$\begin{aligned} {\mathbb{P}}_{{|\psi\rangle}}(X_{t}^{(n)}=x)=\left\lVert \left({\langle x|}\otimes I_{2}\right)U_{n}^{t}{|\psi\rangle}\right\rVert^{2}.\end{aligned}$$ In this paper, we consider periodicity of the DTQWs. In order to define periodicity, we use the following notation: $$\begin{aligned} T_{n}(U) =\inf \left\{t\geq 1:U^{t}=I_{n}\otimes I_{2}\right\}.\label{def:T_{n}(U)}\end{aligned}$$ We will investigate the periods $T_{n}(U_{n}^{MS})$ and $T_{n}(U_{n}^{FF})$. We should remark the following fact: \[rem:spec\] Let $\lambda _{1},\ldots ,\lambda _{2n}$ be the eigenvalues of the time evolution operator $U_{n}$ ($=U_{n}^{MS}$ or $U_{n}^{FF}$); then $U_{n}^{t}=I_{n}\otimes I_{2} \iff \lambda _{1}^{t}=\cdots =\lambda _{2n}^{t}=1$. By Remark \[rem:spec\], the spectral structure of the time evolution operators is important. 
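As a concrete illustration of these definitions, the following numpy sketch builds $U_{n}=S_{n}\mathcal{C}_{n}$ and searches for the period $T_{n}(U_{n})$ by brute force. The basis ordering $|i,L\rangle, |i,R\rangle$ and the helper names are our own choices, not part of the paper's formalism.

```python
import numpy as np

def walk_operator(coins, shift="MS"):
    """Time evolution U = S C of a two-state DTQW on the cycle C_n.

    `coins` is a list of n 2x2 unitary matrices C_i; basis ordering is
    |i,L>, |i,R> for i = 0..n-1, so index 2*i + j with j=0 for L, j=1 for R.
    """
    n = len(coins)
    dim = 2 * n
    C = np.zeros((dim, dim), dtype=complex)
    for i, Ci in enumerate(coins):
        C[2 * i:2 * i + 2, 2 * i:2 * i + 2] = Ci
    S = np.zeros((dim, dim))
    for i in range(n):
        if shift == "MS":   # moving shift: |i,R> -> |i+1,R>, |i,L> -> |i-1,L>
            S[2 * ((i + 1) % n) + 1, 2 * i + 1] = 1.0
            S[2 * ((i - 1) % n) + 0, 2 * i + 0] = 1.0
        else:               # flip-flop: |i,R> -> |i+1,L>, |i,L> -> |i-1,R>
            S[2 * ((i + 1) % n) + 0, 2 * i + 1] = 1.0
            S[2 * ((i - 1) % n) + 1, 2 * i + 0] = 1.0
    return S @ C

def period(U, tmax=200):
    """Smallest t >= 1 with U^t = I, or None if no period up to tmax."""
    P = np.eye(U.shape[0], dtype=complex)
    for t in range(1, tmax + 1):
        P = U @ P
        if np.allclose(P, np.eye(U.shape[0])):
            return t
    return None
```

For instance, with identity coins and the moving shift, $U_{n}^{MS}=S_{n}^{MS}$ and the period is exactly $n$.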
Here we show a connection between $\mathrm{Spec}\ U_{n}^{MS}$ and $\mathrm{Spec}\ U_{n}^{FF}$. \[lem:spec\] Let $\sigma _{x}={|R\rangle}{\langle L|}+{|L\rangle}{\langle R|}$ and $\mathcal{C}_{n}\sigma_{x}=\sum_{i=0}^{n-1}{|i\rangle}{\langle i|}\otimes C_{i}\sigma_{x}$. We denote $U_{n}^{MS}(\mathcal{C}_{n})=S^{MS}\mathcal{C}_{n}$ and $U_{n}^{FF}(\mathcal{C}_{n})=S^{FF}\mathcal{C}_{n}$. Then we have $\mathrm{Spec}\ U_{n}^{FF}(\mathcal{C}_{n})=\mathrm{Spec}\ U_{n}^{MS}(\mathcal{C}_{n}\sigma_{x})$. [**Proof of Lemma \[lem:spec\].**]{} By the definition, we have $S_{n}^{FF}=(I_{n}\otimes \sigma_{x})S_{n}^{MS}$. Then by using $(I_{n}\otimes \sigma_{x})^{2}=(I_{n}\otimes I_{2})$, we obtain $$\begin{aligned} U_{n}^{FF}(\mathcal{C}_{n}) &= S^{FF}\mathcal{C}_{n} = (I_{n}\otimes \sigma_{x})S_{n}^{MS}\mathcal{C}_{n} = (I_{n}\otimes \sigma_{x})S_{n}^{MS}\mathcal{C}_{n}(I_{n}\otimes \sigma_{x})^{2} = (I_{n}\otimes \sigma_{x})S_{n}^{MS}\mathcal{C}_{n}\sigma_{x}(I_{n}\otimes \sigma_{x})\\ &= (I_{n}\otimes \sigma_{x})U_{n}^{MS}(\mathcal{C}_{n}\sigma_{x})(I_{n}\otimes \sigma_{x}).\end{aligned}$$ This completes the proof. [$\Box$]{} Lemma \[lem:spec\] shows that $T_{n}(U_{n}^{MS})=T_{n}(U_{n}^{FF})$ whenever we consider a pair of DTQWs defined by $U_{n}^{MS}(\mathcal{C}_{n}\sigma_{x})$ and $U_{n}^{FF}(\mathcal{C}_{n})$. Note that the coin operator $\mathcal{C}_{n}\sigma_{x}$ is given by exchanging the columns of all $C_{i}$ in $\mathcal{C}_{n}=\sum_{i=0}^{n-1}{|i\rangle}{\langle i|}\otimes C_{i}$. Jacobi matrix ============= Before we investigate periodicity of the quantum walks defined in Sect.\[sect:def\], it is helpful to consider a related Jacobi matrix. Let $\nu _{1,i}, \nu _{2,i}$ and ${|w_{1,i}\rangle}, {|w_{2,i}\rangle}$ be the eigenvalues and the corresponding orthonormal eigenvectors of $C_{i}\ (i=0,\ldots ,n-1)$. 
We consider the spectral decomposition of each unitary matrix $C_{i}$ as follows: $$\begin{aligned} C_{i} &=\nu _{1,i}{|w_{1,i}\rangle}{\langle w_{1,i}|}+\nu _{2,i}{|w_{2,i}\rangle}{\langle w_{2,i}|}\notag\\ &=\nu _{1,i}{|w_{1,i}\rangle}{\langle w_{1,i}|}+\nu _{2,i}\left(I_{2}-{|w_{1,i}\rangle}{\langle w_{1,i}|}\right)\notag \\ &=\left(\nu _{1,i}-\nu _{2,i}\right){|w_{1,i}\rangle}{\langle w_{1,i}|}+\nu _{2,i}I_{2},\label{specC}\end{aligned}$$ where $I_{k}$ is the $k\times k$ identity matrix. Here we use the relation $I_{2}={|w_{1,i}\rangle}{\langle w_{1,i}|}+{|w_{2,i}\rangle}{\langle w_{2,i}|}$ coming from the unitarity of $C_{i}$. This shows that we can represent $C_{i}$ without ${|w_{2,i}\rangle}$. We define the $n\times n$ Jacobi matrix $J_{n}^{QW}$ for the DTQW as follows: $$\begin{aligned} \label{defJacobiQW} (J_{n}^{QW})_{i,j}=\overline{(J_{n}^{QW})_{j,i}} = \begin{cases} \overline{w_{i}(R)}w_{j}(L) & \text{if $j=i+1\ (\! \! \mod n)$,}\\ 0 & \text{otherwise,} \end{cases}\end{aligned}$$ where ${|w_{1,i}\rangle}={}^T [w_{i}(L),w_{i}(R)]$ and $\overline{z}$ means the complex conjugate of $z\in {\mathbb{C}}$. In this setting, the corresponding Jacobi matrix is the following: $$\begin{aligned} \label{matJacobiQW} J_{n}^{QW}= \begin{bmatrix} 0 & \overline{w_{0}(R)}w_{1}(L) & & & \overline{w_{0}(L)}w_{n-1}(R)\\ \overline{w_{1}(L)}w_{0}(R) & 0 & \ddots & & \mbox{\smash{\huge\textit{O}}} & & \\ & \ddots & \ddots & \ \ddots & \\ & & \ddots & 0 & \overline{w_{n-2}(R)}w_{n-1}(L)\\ \overline{w_{n-1}(R)}w_{0}(L) & \mbox{\smash{\huge\textit{O}}} & & \overline{w_{n-1}(L)}w_{n-2}(R) & 0 \end{bmatrix}.\end{aligned}$$ As we will point out below Eq. , each eigenvalue of $J_{n}^{QW}$ is an inner product of two unit vectors, which implies $\textrm{Spec}(J_{n}^{QW})\subseteq [-1,1]$. 
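The bound $\textrm{Spec}(J_{n}^{QW})\subseteq [-1,1]$, which in the text follows from writing each eigenvalue as an inner product of two unit vectors, can be checked numerically. The sketch below draws random unit vectors $|w_{1,i}\rangle$ (illustrative data of our own choosing) and builds $J_{n}^{QW}$ directly from its definition.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random complex unit vectors |w_{1,i}> = T[w_i(L), w_i(R)] defining the
# rank-one parts of the local coins (purely illustrative data).
w = rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))
w /= np.linalg.norm(w, axis=1, keepdims=True)
wL, wR = w[:, 0], w[:, 1]

# (J^{QW})_{i,i+1} = conj(w_i(R)) w_{i+1}(L), cyclically; Hermitian by
# construction, including the corner entries of the cycle.
J = np.zeros((n, n), dtype=complex)
for i in range(n):
    J[i, (i + 1) % n] = np.conj(wR[i]) * wL[(i + 1) % n]
    J[(i + 1) % n, i] = np.conj(J[i, (i + 1) % n])

eigs = np.linalg.eigvalsh(J)   # real eigenvalues of the Hermitian J
```

All eigenvalues land in $[-1,1]$, as the inner-product argument predicts.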
By direct calculation, we obtain the following lemma for the characteristic polynomial of the Jacobi matrix $J_{n}^{QW}$: \[lem:polyJacobi\] Let $$\begin{aligned} K_{i,j}^{QW}(\lambda)= \begin{bmatrix} \lambda & -\overline{w_{i}(R)}w_{i+1}(L) & & & & & \\ -\overline{w_{i+1}(L)}w_{i}(R) & \lambda & \ddots & & \mbox{\smash{\huge\textit{O}}} & \\ & \ddots & \ddots & \ddots & \\ & & \ddots & \lambda & -\overline{w_{j}(R)}w_{j+1}(L)\\ \mbox{\smash{\huge\textit{O}}} & & & -\overline{w_{j+1}(L)}w_{j}(R) & \lambda \end{bmatrix}.\end{aligned}$$ Then $$\begin{aligned} \label{eq:polyJacobi} \det(\lambda I_{n}-J_{n}^{QW}) &= \lambda \det(K_{1,n-2}^{QW}(\lambda ))\\ \notag &- |w_{0}(R)|^{2}|w_{1}(L)|^{2}\det(K_{2,n-2}^{QW}(\lambda ))-|w_{n-1}(R)|^{2}|w_{0}(L)|^{2}\det(K_{1,n-3}^{QW}(\lambda ))\\ \notag &+ (-1)^{n}\cdot 2\Re \left(\prod_{i=0}^{n-1}\overline{w_{i}(R)}w_{i}(L)\right),\end{aligned}$$ where $\Re (z)$ denotes the real part of $z\in \mathbb{C}$. In addition, we have $$\begin{aligned} &\det(K_{i,j}^{QW}(\lambda))=\lambda \det(K_{i,j-1}^{QW}(\lambda))-|w_{j}(R)|^{2}|w_{j+1}(L)|^{2}\det(K_{i,j-2}^{QW}(\lambda)),\ (j\geq i+1),\\ &\det(K_{i,i}^{QW}(\lambda))=\lambda ^{2} - |w_{i}(R)|^{2}|w_{i+1}(L)|^{2},\\\end{aligned}$$ with a convention $\det(K_{i,i-1}^{QW}(\lambda))=\lambda.$ This leads to the following lemma: \[lem:Kreal\] $\det(K_{i,j}^{QW}(\lambda))$ is a polynomial with real coefficients. If we define $p_{i}=|w_{i}(R)|^{2}$ and $q_{i}=|w_{i}(L)|^{2}$ for $i\in V_{n}$ then the coefficients of $\det(K_{i,j}^{QW}(\lambda))$ are determined by $p_{i}, \ldots, p_{j}, q_{i}, \ldots ,q_{j+1}$. Isospectral coin cases ====================== Now we give a framework of spectral analysis for DTQWs with flip-flop shift on $C_{n}$. In order to do so, we restrict the coin operator as follows: \[ass:coinQW\] We assume all the local coins are isospectral. 
Thus we use $$\begin{aligned} \label{SpecAnalcoinQWonPath} \mathcal{C}_{n} = \sum_{i=0}^{n-1}{|i\rangle}{\langlei|}\otimes \left\{(\nu _{1}-\nu _{2}){|w_{i}\rangle}{\langlew_{i}|}+\nu _{2}I_{2}\right\} ,\end{aligned}$$ as the coin operator. Let $\lambda _{m}\ (m=0,\ldots n-1)$ be the eigenvalues and ${|v_{m}\rangle}\ (m=0,\ldots n-1)$ be the corresponding (orthonormal) eigenvectors of $J_{n}^{QW}$. For each $\lambda _{m}$ and ${|v_{m}\rangle}$, we define two vectors $$\begin{aligned} \mathbf{a}_{m} &= \sum_{i=0}^{n-1}v_{m}(i){|i\rangle}\otimes {|w_{i}\rangle},\\ \mathbf{b}_{m} &= S_{n}^{FF}\mathbf{a}_{m},\end{aligned}$$ where ${|v_{m}\rangle}={}^{T}\left[v_{m}(0) \ldots v_{m}(n-1)\right]$. By using $(S_{n}^{FF})^{2}=I_{n}\otimes I_{2}$, it is easy to see that $\mathcal{C}_{n}\mathbf{a}_{m}=\nu_{1}\mathbf{a}_{m}$ and then $U_{n}^{FF}\mathbf{a}_{m}=\nu_{1}\mathbf{b}_{m}$. Also we have $\mathcal{C}_{n}\mathbf{b}_{m}=(\nu_{1}-\nu_{2})\lambda _{m}\mathbf{a}_{m}+\nu_{2}\mathbf{b}_{m}$ and $U_{n}^{FF}\mathbf{b}_{m}=\nu_{2}\mathbf{a}_{m}+(\nu_{1}-\nu_{2})\lambda _{m}\mathbf{b}_{m}$. So we have the following relationship: $$\begin{aligned} \label{eq:Uab} U_{n}^{FF} \begin{bmatrix} \mathbf{a}_{m}\\ \mathbf{b}_{m} \end{bmatrix} = \begin{bmatrix} 0 & \nu_{1}\\ \nu_{2} & (\nu_{1}-\nu_{2})\lambda _{m} \end{bmatrix} \begin{bmatrix} \mathbf{a}_{m}\\ \mathbf{b}_{m} \end{bmatrix}.\end{aligned}$$ We also obtain $|\mathbf{a}_{m}|=|\mathbf{b}_{m}|=1$ and the inner product $(\mathbf{a}_{m}, \mathbf{b}_{m})=\lambda _{m}$. This shows that if $\lambda _{m} =\pm 1$ then $\mathbf{b}_{m}=\pm \mathbf{a}_{m}$. Therefore if $\lambda _{m} =\pm 1$ then $U_{n}^{FF}\mathbf{a}_{m}=\pm \nu_{1}\mathbf{a}_{m}$. For cases with $\lambda_{m}\neq \pm 1$, we see from Eq. (\[eq:Uab\]) that the operator $U_{n}^{FF}$ is a linear operator acting on the linear space $\text{Span}\ (\mathbf{a}_{m}, \mathbf{b}_{m})$. 
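The relations $U_{n}^{FF}\mathbf{a}_{m}=\nu_{1}\mathbf{b}_{m}$, $U_{n}^{FF}\mathbf{b}_{m}=\nu_{2}\mathbf{a}_{m}+(\nu_{1}-\nu_{2})\lambda _{m}\mathbf{b}_{m}$ and $(\mathbf{a}_{m}, \mathbf{b}_{m})=\lambda _{m}$ can be verified numerically on a random instance. In the Python/NumPy sketch below, the basis ordering $(i,L),(i,R)$ at each vertex and the flip-flop convention $S_{n}^{FF}{|i,L\rangle}={|i-1,R\rangle}$, $S_{n}^{FF}{|i,R\rangle}={|i+1,L\rangle}$ are our own choices, made consistent with the definition of $J_{n}^{QW}$ above:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
nu1, nu2 = np.exp(1j * 0.9), np.exp(-1j * 0.4)   # the common coin spectrum

w = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
w /= np.linalg.norm(w, axis=1, keepdims=True)    # |w_i> = (w_i(L), w_i(R))

# Coin operator: block diagonal with C_i = (nu1 - nu2)|w_i><w_i| + nu2 I_2.
C = np.zeros((2 * n, 2 * n), dtype=complex)
for i in range(n):
    C[2*i:2*i+2, 2*i:2*i+2] = (nu1 - nu2) * np.outer(w[i], w[i].conj()) + nu2 * np.eye(2)

# Flip-flop shift on the cycle: S|i,L> = |i-1,R>,  S|i,R> = |i+1,L>.
S = np.zeros((2 * n, 2 * n))
for i in range(n):
    S[2 * ((i - 1) % n) + 1, 2 * i] = 1
    S[2 * ((i + 1) % n), 2 * i + 1] = 1
U = S @ C
assert np.allclose(S @ S, np.eye(2 * n))         # (S^FF)^2 = I

# Jacobi matrix and one eigenpair (lam, v).
J = np.zeros((n, n), dtype=complex)
for i in range(n):
    J[i, (i + 1) % n] = np.conj(w[i, 1]) * w[(i + 1) % n, 0]
J = J + J.conj().T
lams, V = np.linalg.eigh(J)
lam, v = lams[0], V[:, 0]

a = np.zeros(2 * n, dtype=complex)
for i in range(n):
    a[2*i:2*i+2] = v[i] * w[i]                   # a_m = sum_i v_m(i)|i>|w_i>
b = S @ a

assert np.isclose(np.vdot(a, b), lam)            # (a_m, b_m) = lambda_m
assert np.allclose(U @ a, nu1 * b)               # U a_m = nu1 b_m
assert np.allclose(U @ b, nu2 * a + (nu1 - nu2) * lam * b)
```

Any eigenpair of $J_{n}^{QW}$ passes the same assertions, in agreement with the closure of $\text{Span}\ (\mathbf{a}_{m}, \mathbf{b}_{m})$ under $U_{n}^{FF}$.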
In order to obtain the eigenvalues and eigenvectors, we take a vector $\alpha \mathbf{a}_{m} + \beta \mathbf{b}_{m}\in \text{Span}\ (\mathbf{a}_{m}, \mathbf{b}_{m})$. The eigen equation for $U_{n}^{FF}$ is given by $U_{n}^{FF}(\alpha \mathbf{a}_{m} + \beta \mathbf{b}_{m}) = \mu (\alpha \mathbf{a}_{m} + \beta \mathbf{b}_{m})$. From Eq. , this is equivalent to $$\begin{aligned} \begin{bmatrix} 0 & \nu_{2}\\ \nu_{1} & (\nu_{1}-\nu_{2})\lambda_{m} \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \mu \begin{bmatrix} \alpha \\ \beta \end{bmatrix} .\end{aligned}$$ Therefore we can obtain two eigenvalues $\mu_{\pm m}$ of $U_{n}^{FF}$ which are related to the eigenvalue $\lambda_{m}$ of $J_{n}^{QW}$ as solutions of the following quadratic equation: $$\begin{aligned} \mu ^{2}-(\nu_{1}-\nu_{2})\lambda_{m}\mu -\nu_{1}\nu_{2}=0.\end{aligned}$$ Also we have the corresponding eigenvectors $\nu_{2}\mathbf{a}_{m}+\mu_{\pm m}\mathbf{b}_{m}$ by setting $\alpha = \nu_{2}, \beta = \mu_{\pm m}$. As a consequence, we obtain the following lemma: \[lem:eigenUformJ\] Let $\lambda_{m}\ (m=0, \ldots ,n-1)$ be the eigenvalues of $J_{n}^{QW}$, then the corresponding eigenvalues $\mu_{\pm m}$ and the eigenvectors $\mathbf{u}_{\pm m}$ of $U_{n}^{FF}$ are the following: 1. If $\lambda_{m}=\pm 1$ then $\mu _{m}=\pm \nu_{1}$ and $\mathbf{u}_{m}=\mathbf{a}_{m}$. 2. If $\lambda_{m}\neq \pm 1$ then $\mu _{\pm m}$ are the solutions of the following quadratic equation: $$\begin{aligned} \mu ^{2}-(\nu_{1}-\nu_{2})\lambda_{m}\mu -\nu_{1}\nu_{2}=0,\end{aligned}$$ and $\mathbf{u}_{\pm m}=\nu_{2}\mathbf{a}_{m}+\mu_{\pm m}\mathbf{b}_{m}$. 
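That $(\alpha ,\beta )=(\nu_{2},\mu_{\pm m})$ solves the $2\times 2$ eigenvalue problem above, with $\mu_{\pm m}$ the roots of the quadratic equation, is easy to confirm numerically; a short Python/NumPy sketch (the sample values of $\nu_{1},\nu_{2},\lambda_{m}$ are our own):

```python
import numpy as np

nu1, nu2 = np.exp(1j * 0.9), np.exp(-1j * 0.4)
lam = 0.6        # a sample eigenvalue of J_n^{QW}; any real value in [-1, 1] works

# Roots of  mu^2 - (nu1 - nu2)*lam*mu - nu1*nu2 = 0.
mus = np.roots([1, -(nu1 - nu2) * lam, -nu1 * nu2])

M = np.array([[0, nu2], [nu1, (nu1 - nu2) * lam]])
for mu in mus:
    vec = np.array([nu2, mu])                 # (alpha, beta) = (nu2, mu_{+-m})
    assert np.allclose(M @ vec, mu * vec)     # eigenvector of the 2x2 block
    assert np.isclose(abs(mu), 1)             # eigenvalues of the unitary U_n^{FF}
```

The unimodularity of the roots anticipates the explicit form $\mu _{\pm m}=(-\nu_{1}\nu_{2})^{1/2}e^{\pm i\theta _{m}}$ derived next.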
The quadratic equation in Lemma \[lem:eigenUformJ\] can be rearranged as $$\begin{aligned} \left\{ i \overline{\nu_{1}}^{1/2} \overline{\nu_{2}}^{1/2} \mu \right\}^{2} + 2\Im (\nu_{1}^{1/2} \overline{\nu_{2}}^{1/2})\lambda_{m}\left\{ i \overline{\nu_{1}}^{1/2} \overline{\nu_{2}}^{1/2} \mu \right\} + 1 &= 0.\end{aligned}$$ Thus we have $$\begin{aligned} i \overline{\nu_{1}}^{1/2} \overline{\nu_{2}}^{1/2} \mu _{\pm m} &= - \Im (\nu_{1}^{1/2} \overline{\nu_{2}}^{1/2})\lambda_{m} \pm i \sqrt{1-\left( \Im (\nu_{1}^{1/2} \overline{\nu_{2}}^{1/2})\lambda_{m} \right)^{2}} \\ \mu _{\pm m} &= \left( -\nu_{1}\nu_{2} \right)^{1/2}e^{\pm i \theta _{m}},\end{aligned}$$ where $\cos \theta_{m} = - \Im (\nu_{1}^{1/2} \overline{\nu_{2}}^{1/2})\lambda_{m}$. Therefore if we put $\nu_{j} = e^{i\phi _{j}}$ then the eigenvalues $\mu _{\pm m}$ are given by the following procedure: 1. Rescale the eigenvalue $\lambda_{m}$ of $J_{n}^{QW}$ as $- \Im (\nu_{1}^{1/2} \overline{\nu_{2}}^{1/2})\lambda_{m} = - \sin [(\phi_{1} - \phi_{2})/2]\times \lambda _{m}$. 2. Map the rescaled eigenvalue upward and downward to the unit circle on the complex plane. 3. Take the $[(\phi_{1} + \phi_{2} - \pi )/2]$-rotation of the mapped eigenvalues. For the usual Szegedy walk case, i.e., $\nu _{1} = 1, \nu_{2} = -1$, we have $\phi _{1} = 0, \phi_{2} = \pi$. Thus we can omit steps 1 and 3 of the procedure because $- \sin [(\phi_{1} - \phi_{2})/2] = 1, [(\phi_{1} + \phi_{2} - \pi )/2] = 0$. \[rem:eigenUreminder\] According to Lemma \[lem:eigenUformJ\], if none of the $n$ eigenvalues of $J_{n}^{QW}$ is equal to $\pm 1$, then we obtain all $2n$ eigenvalues of $U_{n}^{FF}$. But if $s$ of the eigenvalues of $J_{n}^{QW}$ are equal to $\pm 1$, then we can only obtain $2n-s$ eigenvalues of $U_{n}^{FF}$ in this way. 
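The three-step procedure can be tested directly: the numbers it produces satisfy the quadratic equation of Lemma \[lem:eigenUformJ\]. A Python sketch (the sample phases $\phi_{1},\phi_{2}$ and the value of $\lambda_{m}$ are our own choices):

```python
import numpy as np

phi1, phi2 = 0.9, -0.4
nu1, nu2 = np.exp(1j * phi1), np.exp(1j * phi2)
lam = 0.6                                  # a sample eigenvalue of J_n^{QW}

# Step 1: rescale lam; Step 2: lift up/down to the unit circle; Step 3: rotate.
x = -np.sin((phi1 - phi2) / 2) * lam
mus = np.exp(1j * (phi1 + phi2 - np.pi) / 2) * (x + 1j * np.array([1, -1]) * np.sqrt(1 - x**2))

for mu in mus:
    assert np.isclose(mu**2 - (nu1 - nu2) * lam * mu - nu1 * nu2, 0)
```

For $\phi_{1}=0$, $\phi_{2}=\pi$ the rescaling factor is $1$ and the rotation angle is $0$, recovering the Szegedy case.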
In this case, for every $\lambda _{m}=\pm 1$, we construct the following two vectors: $$\begin{aligned} \tilde{\mathbf{a}}_{m} &= \sum_{i=0}^{n-1}v_{m}(i){|i\rangle}\otimes {|w_{2,i}\rangle},\\ \tilde{\mathbf{b}}_{m} &= S_{n}^{FF}\tilde{\mathbf{a}}_{m},\end{aligned}$$ where ${|w_{2,i}\rangle}$ is the eigenvector corresponding to the eigenvalue $\nu_{2}$ of $C_{i}$ in Eq. . By definition, we have $|\tilde{\mathbf{a}}_{m}|=|\tilde{\mathbf{b}}_{m}|=1$. We also obtain the inner product $(\tilde{\mathbf{a}}_{m}, \mathbf{a}_{m})=0$ from orthogonality and $(\tilde{\mathbf{b}}_{m}, \mathbf{a}_{m})=(\tilde{\mathbf{b}}_{m}, \pm\mathbf{b}_{m})=\pm(S_{n}^{FF}\tilde{\mathbf{a}}_{m}, S_{n}^{FF}\mathbf{a}_{m})=0$ from $\lambda _{m}=\pm 1$ and $(S_{n}^{FF})^{2}=I_{n}\otimes I_{2}$. Since $\mathbf{a}_{m}$ belongs to the eigensystem of $\nu _{1}$ of $\mathcal{C}_{n}$, this shows that both $\tilde{\mathbf{a}}_{m}$ and $\tilde{\mathbf{b}}_{m}$ belong to the eigensystem of $\nu _{2}$ of $\mathcal{C}_{n}$. This implies that $$\begin{aligned} U_{n}^{FF} \begin{bmatrix} \tilde{\mathbf{a}}_{m}\\ \tilde{\mathbf{b}}_{m} \end{bmatrix} = \begin{bmatrix} 0 & \nu_{2}\\ \nu_{2} & 0 \end{bmatrix} \begin{bmatrix} \tilde{\mathbf{a}}_{m}\\ \tilde{\mathbf{b}}_{m} \end{bmatrix}.\end{aligned}$$ Therefore $U_{n}^{FF}(\tilde{\mathbf{a}}_{m}\pm \tilde{\mathbf{b}}_{m})=\pm \nu_{2}(\tilde{\mathbf{a}}_{m}\pm \tilde{\mathbf{b}}_{m})$. These are candidates for the remaining eigenvalues and eigenvectors. On the other hand, the two sets $\mathcal{H}_{n}^{(\pm)}= \mathrm{span}\{{|i+1,L\rangle}\pm {|i,R\rangle}:i\in V_{n}\ (\!\!\! \mod n)\}$ are subspaces of the whole Hilbert space $\mathcal{H}_{n}$ with $\mathrm{dim}\mathcal{H}_{n}^{(\pm)}=n$, i.e., $\mathcal{H}_{n}=\mathcal{H}_{n}^{(+)}\oplus \mathcal{H}_{n}^{(-)}$. Note that $\mathbf{a}_{m}\pm \mathbf{b}_{m}\in \mathcal{H}_{n}^{(\pm)}$. If $\lambda _{m}=\pm 1$ then $\mathbf{a}_{m}=\pm \mathbf{b}_{m}$. 
This implies that if $\lambda _{m}=\pm 1$ then the dimension of $\mathcal{H}_{n}^{(\mp)} \cap \text{Span}\ (\mathbf{a}_{m}, \mathbf{b}_{m})$ decreases by $1$. Therefore if $\lambda _{m}=1$ then we can only choose $U_{n}^{FF}(\tilde{\mathbf{a}}_{m}- \tilde{\mathbf{b}}_{m})=\nu_{2}(\tilde{\mathbf{a}}_{m}- \tilde{\mathbf{b}}_{m})$. In the same way, if $\lambda _{m}=-1$ then we can only choose $U_{n}^{FF}(\tilde{\mathbf{a}}_{m}+ \tilde{\mathbf{b}}_{m})=-\nu_{2}(\tilde{\mathbf{a}}_{m}+ \tilde{\mathbf{b}}_{m})$. Using this procedure, we obtain the remaining $s$ eigenvalues and eigenvectors. As a consequence of Lemmas \[lem:polyJacobi\], \[lem:Kreal\], \[lem:eigenUformJ\] and Remark \[rem:eigenUreminder\], we have the following result: \[thm:periodUFFisospectral\] Under Assumption \[ass:coinQW\], let $w_{j}(R)=\sqrt{p_{j}}e^{i\theta _{R}(j)}$ and $w_{j}(L)=\sqrt{q_{j}}e^{i\theta _{L}(j)}$ for $j\in V_{n}$, where $p_{j}=|w_{j}(R)|^{2}$ and $q_{j}=|w_{j}(L)|^{2}$. Also let $\tilde{U}_{n}^{FF}$ be the time evolution operator defined by the coin operator $\tilde{\mathcal{C}}_{n}$ with $\tilde{w}_{j}(R)=\sqrt{p_{j}}e^{i(\theta _{R}(j)+\tilde{\theta }_{R}(j))}$ and $\tilde{w}_{j}(L)=\sqrt{q_{j}}e^{i(\theta _{L}(j)+\tilde{\theta }_{L}(j))}$ for $j\in V_{n}$. If $\sum_{j=1}^{n-1}(\tilde{\theta}_{L}(j)-\tilde{\theta }_{R}(j))=2\pi k\ (k\in \mathbb{Z})$ then $T_{n}(U_{n}^{FF})=T_{n}(\tilde{U}_{n}^{FF})$. [**Proof of Theorem \[thm:periodUFFisospectral\].**]{} From Eq. , if $\sum_{j=1}^{n-1}(\tilde{\theta}_{L}(j)-\tilde{\theta}_{R}(j))=2\pi k\ (k\in \mathbb{Z})$ then $\Re \left(\prod_{j=0}^{n-1}\overline{w_{j}(R)}w_{j}(L)\right)=\Re \left(\prod_{j=0}^{n-1}\overline{\tilde{w}_{j}(R)}\tilde{w}_{j}(L)\right)$. Then from Lemmas \[lem:polyJacobi\], \[lem:Kreal\], \[lem:eigenUformJ\] and Remark \[rem:eigenUreminder\], we have $\mathrm{Spec}\ U_{n}^{FF} = \mathrm{Spec}\ \tilde{U}_{n}^{FF}$. Therefore we have $T_{n}(U_{n}^{FF})=T_{n}(\tilde{U}_{n}^{FF})$. 
[$\Box$]{} Theorem \[thm:periodUFFisospectral\] provides a classification of our DTQW from the viewpoint of periodicity. Indeed, $T_{n}(U_{n}^{FF})$ depends only on the sequence $\{p_{j}\}_{0\leq j \leq n-1}$ and the value $\sum_{j=1}^{n-1}(\tilde{\theta}_{L}(j)-\tilde{\theta }_{R}(j))$. Therefore we can identify DTQWs having the same set of these values. The next corollary provides the “Hadamard class” of periodicity. \[cor:UMSHadamard\] Let $\mathcal{C}_{n}^{\prime}=\sum_{j=0}^{n-1}{|j\rangle}{\langlej|}\otimes C^{\prime}_{j}$ with $C_{j}^{\prime} = \frac{1}{\sqrt{2}} \begin{bmatrix} e^{i\tilde{\theta}(j)} & 1 \\ 1 & -e^{-i\tilde{\theta}(j)} \end{bmatrix} $. If $\sum_{j=0}^{n-1}\tilde{\theta}(j)=2\pi k\ (k\in \mathbb{Z})$ then $$\begin{aligned} T_{n}(U^{MS}_{n}(\mathcal{C}_{n}^{\prime})) = \begin{cases} 2, &(n=2)\\ 8, &(n=4)\\ 24, &(n=8)\\ \infty &(n\neq 2,4,8). \end{cases}\end{aligned}$$ [**Proof of Corollary \[cor:UMSHadamard\].**]{} Let $\mathcal{C}_{n}=\sum_{j=0}^{n-1}{|j\rangle}{\langlej|}\otimes H$ with $ H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} $, i.e., the Hadamard walk case. The periodicity for this case is as follows [@Dukes2014; @KonnoShimizuTakei2015]: $$\begin{aligned} T_{n}(U^{MS}_{n}(\mathcal{C}_{n})) = \begin{cases} 2, &(n=2)\\ 8, &(n=4)\\ 24, &(n=8)\\ \infty &(n\neq 2,4,8). \end{cases}\end{aligned}$$ From Lemma \[lem:spec\], we have $T_{n}(U_{n}^{MS}(\mathcal{C}_{n}))=T_{n}(U_{n}^{FF}(\mathcal{C}_{n}\sigma_{x}))$. So we consider the case $ H\sigma_{1} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} $. 
By direct calculation, we obtain $$\begin{aligned} \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} 1/\sqrt{2}\\ i/\sqrt{2} \end{bmatrix} &= \frac{1+i}{\sqrt{2}} \begin{bmatrix} 1/\sqrt{2}\\ i/\sqrt{2} \end{bmatrix} = e^{i\pi /4} \begin{bmatrix} 1/\sqrt{2}\\ i/\sqrt{2} \end{bmatrix}, \\ \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} 1/\sqrt{2}\\ -i/\sqrt{2} \end{bmatrix} &= \frac{1-i}{\sqrt{2}} \begin{bmatrix} 1/\sqrt{2}\\ -i/\sqrt{2} \end{bmatrix} = e^{-i\pi /4} \begin{bmatrix} 1/\sqrt{2}\\ -i/\sqrt{2} \end{bmatrix}.\end{aligned}$$ Therefore the spectral decomposition of the coin operator $H\sigma_{1}$ is $$\begin{aligned} H\sigma_{1} &= (e^{i\pi /4}-e^{-i\pi /4}) \begin{bmatrix} 1/\sqrt{2}\\ i/\sqrt{2} \end{bmatrix} \begin{bmatrix} 1/\sqrt{2} & -i/\sqrt{2} \end{bmatrix} + e^{-i\pi /4}I_{2}.\end{aligned}$$ We consider the coin operator $\tilde{\mathcal{C}}_{n}=\sum_{j=0}^{n-1}{|j\rangle}{\langlej|}\otimes \widetilde{(H\sigma_{1})_{j}}$ with $$\begin{aligned} \widetilde{(H\sigma_{1})_{j}} &= (e^{i\pi /4}-e^{-i\pi /4}) \begin{bmatrix} 1e^{i\tilde{\theta}_{L}(j)}/\sqrt{2}\\ ie^{i\tilde{\theta}_{R}(j)}/\sqrt{2} \end{bmatrix} \begin{bmatrix} 1e^{-i\tilde{\theta}_{L}(j)}/\sqrt{2} & -ie^{-i\tilde{\theta}_{R}(j)}/\sqrt{2} \end{bmatrix} + e^{-i\pi /4}I_{2} \\ &= \frac{i}{\sqrt{2}} \begin{bmatrix} 1 & -ie^{i(\tilde{\theta}_{L}(j)-\tilde{\theta}_{R}(j))} \\ ie^{-i(\tilde{\theta}_{L}(j)-\tilde{\theta}_{R}(j))} & 1 \end{bmatrix} + \frac{1}{\sqrt{2}} \begin{bmatrix} 1-i & 0 \\ 0 & 1-i \end{bmatrix} \\ &= \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & e^{i(\tilde{\theta}_{L}(j)-\tilde{\theta}_{R}(j))} \\ -e^{-i(\tilde{\theta}_{L}(j)-\tilde{\theta}_{R}(j))} & 1 \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & e^{i\tilde{\theta}(j)} \\ -e^{-i\tilde{\theta}(j)} & 1 \end{bmatrix}.\end{aligned}$$ Using Theorem \[thm:periodUFFisospectral\], we have 
$T_{n}(U_{n}^{FF}(\tilde{\mathcal{C}}_{n}))=T_{n}(U_{n}^{FF}(\mathcal{C}_{n}))$ if $\sum_{j=0}^{n-1}\tilde{\theta}(j)=2\pi k\ (k\in \mathbb{Z})$. Noting that $\mathcal{C}_{n}^{\prime}=\tilde{\mathcal{C}}_{n}\sigma _{1}$, we obtain the desired result by Lemma \[lem:spec\]. [$\Box$]{} By the same argument as in the proof of Corollary \[cor:UMSHadamard\], we obtain the following result: Let $\mathcal{C}_{n}=\sum_{j=0}^{n-1}{|j\rangle}{\langlej|}\otimes C$ with $ C = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $ and $\tilde{\mathcal{C}}_{n}=\sum_{j=0}^{n-1}{|j\rangle}{\langlej|}\otimes \tilde{C}_{j}$ with $\tilde{C}_{j} = \begin{bmatrix} ae^{i\tilde{\theta}(j)} & b \\ c & de^{-i\tilde{\theta}(j)} \end{bmatrix} $. If $\sum_{j=0}^{n-1}\tilde{\theta}(j)=2\pi k\ (k\in \mathbb{Z})$ then $T_{n}(U^{MS}_{n}(\mathcal{C}_{n}))=T_{n}(U^{MS}_{n}(\tilde{\mathcal{C}}_{n}))$. Non-isospectral coin cases ========================== In this section, we consider several types of DTQWs with non-isospectral coins and the moving shift. In order to define a periodic coin operator, we introduce the notation $[C:l, \tilde{C}:m]$ which denotes $$\begin{aligned} C_{i}= \begin{cases} C &\text{if $0\leq i\leq l-1\ (\! \! \mod (l+m))$},\\ \tilde{C} &\text{if $l\leq i\leq l+m-1\ (\! \! \mod (l+m))$}, \end{cases}\end{aligned}$$ in the coin operator $\mathcal{C}_{n}=\sum_{i=0}^{n-1}{|i\rangle}{\langlei|}\otimes C_{i}$. Here we consider the $[C:1, I_{2}:m]$ model with $n=0\ (\! \! \mod m+1)$ for a $2\times 2$ unitary matrix $C$. We first consider the case $m=n-1$. 
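Before turning to these models, the Hadamard periods quoted in the proof of Corollary \[cor:UMSHadamard\] can be reproduced by brute force. In the Python/NumPy sketch below, the moving-shift convention $S_{n}^{MS}{|i,L\rangle}={|i-1,L\rangle}$, $S_{n}^{MS}{|i,R\rangle}={|i+1,R\rangle}$ and the search cap are our own choices (the period is invariant under relabeling, so the convention does not affect the result):

```python
import numpy as np

def U_MS(coins):
    """Moving-shift walk U = S C on the cycle; basis order (L, R) at each vertex."""
    m = len(coins)
    C = np.zeros((2 * m, 2 * m), dtype=complex)
    S = np.zeros((2 * m, 2 * m))
    for i in range(m):
        C[2*i:2*i+2, 2*i:2*i+2] = coins[i]
        S[2 * ((i - 1) % m), 2 * i] = 1          # L moves left
        S[2 * ((i + 1) % m) + 1, 2 * i + 1] = 1  # R moves right
    return S @ C

def period(U, cap=200):
    """Smallest t <= cap with U^t = I, else None (treated as infinite)."""
    M = np.eye(U.shape[0], dtype=complex)
    for t in range(1, cap + 1):
        M = M @ U
        if np.allclose(M, np.eye(U.shape[0])):
            return t
    return None

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
assert period(U_MS([H] * 2)) == 2
assert period(U_MS([H] * 4)) == 8
assert period(U_MS([H] * 8)) == 24
assert period(U_MS([H] * 3)) is None   # n not in {2, 4, 8}: infinite period
```

The computed values match the table of [@Dukes2014; @KonnoShimizuTakei2015].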
In this case, the coin operator is defined by $\mathcal{C}_{n}={|0\rangle}{\langle0|}\otimes C + \sum_{i=1}^{n-1}{|i\rangle}{\langlei|}\otimes I_{2}$, and the time evolution operator $U_{n}^{MS}=S_{n}^{MS}\mathcal{C}_{n}$ is given by $$\begin{aligned} U_{n}^{MS} &= {|0\rangle}{\langle1|}\otimes {|L\rangle}{\langleL|}I_{2}+{|0\rangle}{\langlen-1|}\otimes {|R\rangle}{\langleR|}I_{2}\\ &\quad + {|1\rangle}{\langle2|}\otimes {|L\rangle}{\langleL|}I_{2}+{|1\rangle}{\langle0|}\otimes {|R\rangle}{\langleR|}C\\ &\quad + \sum_{i=2}^{n-2}({|i\rangle}{\langlei+1|}\otimes {|L\rangle}{\langleL|}I_{2}+{|i\rangle}{\langlei-1|}\otimes {|R\rangle}{\langleR|}I_{2})\\ &\quad + {|n-1\rangle}{\langle0|}\otimes {|L\rangle}{\langleL|}C+{|n-1\rangle}{\langlen-2|}\otimes {|R\rangle}{\langleR|}I_{2}\\ &= {|0\rangle}{\langle1|}\otimes {|L\rangle}{\langleL|}+{|0\rangle}{\langlen-1|}\otimes {|R\rangle}{\langleR|}\\ &\quad + {|1\rangle}{\langle2|}\otimes {|L\rangle}{\langleL|}+{|1\rangle}{\langle0|}\otimes {|R\rangle}{\langleR|}C\\ &\quad + \sum_{i=2}^{n-2}({|i\rangle}{\langlei+1|}\otimes {|L\rangle}{\langleL|}+{|i\rangle}{\langlei-1|}\otimes {|R\rangle}{\langleR|})\\ &\quad + {|n-1\rangle}{\langle0|}\otimes {|L\rangle}{\langleL|}C+{|n-1\rangle}{\langlen-2|}\otimes {|R\rangle}{\langleR|}.\end{aligned}$$ Thus we have $$\begin{aligned} ({\langle0|}\otimes I_{2})\left(U_{n}^{MS}\right)^{kn}({|0\rangle}\otimes I_{2}) &= {|L\rangle}{\langleL|}C^{k}+{|R\rangle}{\langleR|}C^{k} = ({|L\rangle}{\langleL|}+{|R\rangle}{\langleR|})C^{k} = I_{2}C^{k} \\ &= C^{k},\end{aligned}$$ for $k=1,2,\ldots$. 
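The identity $({\langle0|}\otimes I_{2})\left(U_{n}^{MS}\right)^{kn}({|0\rangle}\otimes I_{2})=C^{k}$ can be confirmed numerically for a random unitary coin; a Python/NumPy sketch (the values $n=5$, $k=3$ and the shift convention used below are our own choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 3

# Random 2x2 unitary coin C placed at vertex 0; identity coins elsewhere.
coin = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))[0]

# Moving shift on the n-cycle, basis order (L, R): L moves left, R moves right.
C = np.zeros((2 * n, 2 * n), dtype=complex)
S = np.zeros((2 * n, 2 * n))
for i in range(n):
    C[2*i:2*i+2, 2*i:2*i+2] = coin if i == 0 else np.eye(2)
    S[2 * ((i - 1) % n), 2 * i] = 1
    S[2 * ((i + 1) % n) + 1, 2 * i + 1] = 1
U = S @ C

Ukn = np.linalg.matrix_power(U, k * n)
assert np.allclose(Ukn[0:2, 0:2], np.linalg.matrix_power(coin, k))  # <0|U^{kn}|0> = C^k
```

Intuitively, every winding of the cycle takes exactly $n$ steps and applies the coin $C$ exactly once, in either direction.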
Also for $i\neq 0$, we obtain $$\begin{aligned} ({\langlei|}\otimes I_{2})\left(U_{n}^{MS}\right)^{kn}({|i\rangle}\otimes I_{2}) &= \left({|L\rangle}{\langleL|}\right)^{n-i-1}\left({|L\rangle}{\langleL|}C\right)C^{k-1}\left({|L\rangle}{\langleL|}\right)^{i} \\ &\quad + \left({|R\rangle}{\langleR|}\right)^{i-1}\left({|R\rangle}{\langleR|}C\right)C^{k-1}\left({|R\rangle}{\langleR|}\right)^{n-i} \\ &= {|L\rangle}{\langleL|}C^{k}{|L\rangle}{\langleL|}+{|R\rangle}{\langleR|}C^{k}{|R\rangle}{\langleR|} ,\end{aligned}$$ for $k=1,2,\ldots$. Using this observation, we can reach the following result: \[thm:C1In-1\] For the $[C:1, I_{2}:n-1]$ model for a $2\times 2$ unitary matrix $C$, let $\lambda _{1}, \lambda _{2}$ be the pair of eigenvalues of $C$. If $\lambda _{1}=\exp [2\pi i(L_{1}/M_{1})]$ and $\lambda _{2}=\exp [2\pi i(L_{2}/M_{2})]$, where $L_{1}/M_{1}$ and $L_{2}/M_{2}$ are reduced rational numbers, and we take $M=l.c.m.(M_{1},M_{2})$, then $T_{n}(U_{n}^{FF})=Mn$, where $l.c.m.(a,b)$ denotes the least common multiple of two integers $a$ and $b$. From the above discussion, we can see that the walker passes through any vertex having the coin $I_{2}$ without affecting the coin state. Therefore we have the following result for the general $[C:1, I_{2}:m]$ model with $n=0\ (\! \! \mod m+1)$ for a $2\times 2$ unitary matrix $C$: \[cor:C1Im\] For the $[C:1, I_{2}:m]$ model with $n=0\ (\! \! \mod m+1)$ for a $2\times 2$ unitary matrix $C$, let $T_{n/(m+1)}^{FF}$ be the period of the time evolution operator with coin operator $\mathcal{C}_{n/(m+1)}=\sum_{i=0}^{n/(m+1)-1}{|i\rangle}{\langlei|}\otimes C$ and flip-flop shift operator $S_{n/(m+1)}^{FF}$. Then we have $T_{n}(U_{n}^{FF})=(m+1)T_{n/(m+1)}^{FF}$. [**Acknowledgments.**]{} This study was partially supported by Yokohama Academic Foundation. C. L. H. was supported in part by the Ministry of Science and Technology (MoST) of the Republic of China under Grants 102-2112-M-032-003-MY3 and 105-2918-I-032-001. Y. I. 
was supported by the Grant-in-Aid for Young Scientists (B) of Japan Society for the Promotion of Science (Grant No. 16K17652). N. K. was supported by the Grant-in-Aid for Challenging Exploratory Research of Japan Society for the Promotion of Science (Grant No. 15K13443). C. L. H. would like to thank T. Deguchi, E. Uehara, E. Nozawa, C. Matsuyama, and N. Oshima for the hospitality extended to him during his visit to the Department of Physics of Ochanomizu University. We also thank the anonymous referees for giving us fruitful comments on this paper. [000]{} Chou, C.-I., Ho, C.-L.: Localization and recurrence of quantum walk in periodic potential on a line. Chin. Phys. B [**23**]{}, 110302 (2014). Coutinho, G.: Quantum state transfer in graphs. PhD dissertation, University of Waterloo (2014). Dukes, P. R.: Quantum state revivals in quantum walks on cycles. Results in Physics [**4**]{}, 189–197 (2014). Godsil, C.: State transfer on graphs. Discrete Math. [**312**]{} (1), 129–147 (2012). Kempe, J.: Quantum random walks - an introductory overview. Contemporary Physics [**44**]{}, 307–327 (2003). Kendon, V.: Decoherence in quantum walks - a review. Math. Struct. in Comp. Sci. [**17**]{}, 1169–1220 (2007). Konno, N.: Quantum Walks. In: Quantum Potential Theory, Franz, U., and Schürmann, M., Eds., Lecture Notes in Mathematics: Vol. 1954, pp. 309–452, Springer-Verlag, Heidelberg (2008) Konno, N., Shimizu, Y., Takei, M.: Periodicity for the Hadamard walk on cycles. arXiv:1504.06396v1 (2015). Manouchehri, K., Wang, J.: Physical Implementation of Quantum Walks, Springer (2013). Portugal, R.: Quantum Walks and Search Algorithms, Springer (2013). Segawa, E.: Localization of quantum walks induced by recurrence properties of random walks. J. Comput. Nanosci. [**10**]{}, 1583–1590 (2013) Venegas-Andraca, S. E.: Quantum Walks for Computer Scientists, Morgan and Claypool (2008). Venegas-Andraca, S. E.: Quantum walks: a comprehensive review, Quantum. Inf. Process. 
[**11**]{}, 1015–1106 (2012). [^1]: Permanent address [^2]: To whom correspondence should be addressed. E-mail: ide@kanagawa-u.ac.jp
--- abstract: 'The characterization of the Fibonacci Cobweb poset $P$ as a DAG and an oDAG is given. The [*dim*]{} 2 poset such that its Hasse diagram coincides with the digraph of $P$ is constructed.' author: - | Ewa Krot\ \ Institute of Computer Science, Bia[ł]{}ystok University\ PL-15-887 Bia[ł]{}ystok, ul.Sosnowa 64, POLAND\ e-mail: ewakrot@wp.pl, ewakrot@ii.uwb.edu.pl title: Characterization of the Fibonacci Cobweb Poset as oDAG --- Fibonacci cobweb poset ====================== The Fibonacci cobweb poset $P$ was invented by A. K. Kwaśniewski in [@4; @3; @10] for the purpose of finding a combinatorial interpretation of the fibonomial coefficients and eventually their recurrence relation. In [@4] A. K. Kwaśniewski defined the cobweb poset $P$ as an infinite labeled digraph oriented upwards as follows: Let us label vertices of $P$ by pairs of coordinates: $\langle i,j \rangle \in {\bf N_{0}}\times {\bf N_{0}}$, where the second coordinate is the number of the level in which the element of $P$ lies (here it is the $j$-th level) and the first one is the number of this element in its level (from left to right), here $i$. Following [@4] we shall refer to $\Phi_{s}$ as the set of vertices (elements) of the $s$-th level, i.e.: $$\Phi_{s}=\left\{\langle j,s \rangle ,\;\;1\leq j \leq F_{s}\right\},\;\;\;s\in{\bf N}\cup\{0\},$$ where $\{F_{n}\}_{n\geq 0}$ stands for the Fibonacci sequence. Then $P$ is a labeled graph $P=\left(V,E\right)$ where $$V=\bigcup_{p\geq 0}\Phi_{p},\;\;\;E=\left\{\langle \,\langle j,p\rangle,\langle q,p+1 \rangle\,\rangle\right\},\;\;1\leq j\leq F_{p},\;\;1\leq q\leq F_{p+1}.$$ We can now define the partial order relation on $P$ as follows: let\ $x=\langle s,t\rangle, y=\langle u,v\rangle $ be elements of the cobweb poset $P$. Then $$( x \leq_{P} y) \Longleftrightarrow [(t<v)\vee (t=v \wedge s=u)].$$ ![image](FigF.eps){width="80mm"} [Fig. 1. The picture of the Fibonacci “cobweb” poset]{} DAG $\longrightarrow$ oDAG problem =================================== In [@p] A. D. 
Plotnikov considered the so-called “DAG $\longrightarrow$ oDAG problem”. He determined conditions under which a digraph $G$ may be represented by a corresponding [*dim* ]{} 2 poset $R$, and he established an algorithm for finding it. Before citing Plotnikov’s results let us recall (following [@p]) some indispensable definitions. If $P$ and $Q$ are partial orders on the same set $A$, $Q$ is said to be an [**extension**]{} of $P$ if $a\leq_{P} b$ implies $a\leq_{Q} b$, for all $a,b\in A$. A poset $L$ is a [**chain**]{}, or a [**linear order**]{}, if we have either $a\leq_{L} b$ or $b\leq_{L} a$ for any $a,b\in A$. If $Q$ is a linear order then it is a [**linear extension**]{} of $P$. The [**dimension**]{} $dim\ R$ of a partial order $R$ is the least positive integer $s$ for which there exists a family $F=(L_1 ,L_2 ,\ldots,L_s)$ of linear extensions of $R$ such that $R= \bigcap_{i=1}^{s} L_{i}$. A family $F=(L_1,L_2,\ldots,L_s)$ of linear orders on $A$ is called a [**realizer**]{} of $R$ on $A$ if $$R=\bigcap_{i=1}^{s} L_{i}.$$ We denote by $D_{n}$ the set of all acyclic directed $n$-vertex graphs without loops and multiple edges. Each digraph ${\vec G}=(V,{\vec E})\in D_{n}$ will be called a [**DAG**]{}. A digraph ${\vec G}\in D_{n}$ will be called [**orderable (oDAG)**]{} if there exists a $dim\ 2$ poset such that its Hasse diagram coincides with the digraph ${\vec G}$. Let ${\vec G}\in D_{n}$ be a digraph which does not contain the arc $(v_{i},v_{j})$ if there exists a directed path $p(v_{i},v_{j})$ from the vertex $v_{i}$ into the vertex $v_{j}$, for any $v_{i}$, $v_{j}\in V$. Such a digraph is called [**regular**]{}. Let $D\subset D_{n}$ be the set of all regular graphs. 
Let there be a regular digraph ${\vec G}=(V,E)\in D$, and let the chain ${\vec X}$ have three elements $x_{i_{1}}$, $x_{i_{2}}$, $x_{i_{3}}\in X$ such that $i_{1}<i_{2}<i_{3}$, and, in the digraph ${\vec G}$, there are no paths $p(v_{i_{1}},v_{i_{2}})$, $p(v_{i_{2}},v_{i_{3}})$ and there exists a path $p(v_{i_{1}},v_{i_{3}})$. Such a representation of the graph vertices by elements of the chain ${\vec X}$ is called a representation in [**inadmissible form**]{}. Otherwise, the chain ${\vec X}$ presents the graph vertices in [**admissible form**]{}. Plotnikov showed that: [*[@p]*]{}\[l1\] \[l1\] A digraph ${\vec G}\in D_{n}$ may be represented by a $dim\ 2$ poset if: 1. there exist two chains ${\vec X}$ and ${\vec Y}$, each of which is a linear extension of ${\vec G}_{t}$; 2. the chain ${\vec Y}$ is a modification of ${\vec X}$ with inversions, which remove the ordered pairs of ${\vec X}$ that do not exist in ${\vec G}$. The above lemma results in an algorithm for finding the [*dim*]{} 2 representation of a given DAG (i.e. the corresponding oDAG), while the following theorem establishes the conditions for constructing it. [*[@p]*]{}\[t1\] \[t1\] A digraph ${\vec G}=(V,{\vec E})\in D_{n}$ can be represented by a $dim\ 2$ poset iff it is regular and its vertices can be presented by the chain ${\vec X}$ in admissible form. Fibonacci cobweb poset as DAG and oDAG ====================================== In this section we show that the Fibonacci cobweb poset is a DAG and that it is orderable (an oDAG). Obviously, the cobweb poset $P=(V, E)$ defined above is a DAG (it is a directed acyclic graph without loops and multiple edges). One can also verify that it is regular: for two elements $\langle i, n\rangle , \langle j,m\rangle \in V$, a directed path $p(\langle i, n\rangle , \langle j,m\rangle)$ which is not a single edge exists iff $n+1<m$, but then $(\langle i, n\rangle , \langle j,m\rangle)\notin E$, i.e. $P$ does not contain the edge $(\langle i, n\rangle , \langle j,m\rangle)$. 
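Regularity of a finite cut of the cobweb digraph can also be confirmed mechanically: no edge may coexist with a longer directed path between the same endpoints. A Python sketch (the truncation depth and the level sizes, chosen here to match Fig. 1, are our own illustrative choices):

```python
# Levels of a finite cut of the Fibonacci cobweb poset (sizes as in Fig. 1).
sizes = [1, 1, 1, 2, 3, 5, 8]
E = {((j, s), (q, s + 1))
     for s in range(len(sizes) - 1)
     for j in range(1, sizes[s] + 1)
     for q in range(1, sizes[s + 1] + 1)}

def long_path_exists(u, v):
    """Is there a directed path of length >= 2 from u to v?"""
    frontier = {b for (a, b) in E if a == u}
    for _ in range(len(sizes)):
        frontier = {b for (a, b) in E if a in frontier}
        if v in frontier:
            return True
    return False

# Regularity: no edge (u, v) coexists with a longer directed path u -> v.
assert all(not long_path_exists(u, v) for (u, v) in E)
```

Since every edge raises the level by exactly one while any longer path raises it by at least two, the assertion holds trivially, in agreement with the argument above.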
It is also possible to verify that vertices of the cobweb poset $P$ can be presented in admissible form by the chain ${\vec X}$ being a linear extension of cobweb $P$ as follows: $$\begin{gathered} {\vec X}=\Big(\langle 1,0\rangle,\langle 1,1\rangle , \langle 1,2\rangle, \langle 1,3\rangle, \langle 2,3\rangle, \langle 1,4\rangle, \langle 2,4\rangle, \langle 3,4\rangle,\langle 1,5\rangle, \langle 2,5\rangle, \langle 3,5\rangle,\\\langle 4,5\rangle,\langle 5,5\rangle,...\Big),\end{gathered}$$ where $$( \langle s,t\rangle \leq_{{\vec X}} \langle u,v\rangle) \Longleftrightarrow [(t < v)\vee (t=v \wedge s\leq u)]$$ for $1\leq s \leq F_{t},\; 1\leq u \leq F_{v},\;\;\;t, v \in {\bf N}\cup\{0\}.$ The Fibonacci cobweb poset $P$ satisfies the conditions of Theorem \[t1\], so it is an oDAG. To find the chain ${\vec Y}$ being a linear extension of cobweb $P$ one uses Lemma \[l1\] and arrives at: $$\begin{gathered} {\vec Y}=\Big(\langle 1,0\rangle,\langle 1,1\rangle , \langle 1,2\rangle, \langle 2,3\rangle, \langle 1,3\rangle, \langle 3,4\rangle, \langle 2,4\rangle, \langle 1,4\rangle,\langle 5,5\rangle, \langle 4,5\rangle, \langle 3,5\rangle,\\\langle 2,5\rangle,\langle 1,5\rangle,...\Big),\end{gathered}$$ where $$( \langle s,t\rangle \leq_{{\vec Y}} \langle u,v\rangle) \Longleftrightarrow [(t < v)\vee (t=v \wedge s\geq u)]$$ for $1\leq s \leq F_{t},\; 1\leq u \leq F_{v},\;\;\;t, v \in {\bf N}\cup\{0\}$ and finally $$(P,\leq_{P})={\vec X}\cap{\vec Y}.$$ Closing remark ============== For any sequence $\{a_{n}\}$ of natural numbers one can define the corresponding cobweb poset as follows: $$\Phi_{s}=\left\{\langle j,s \rangle ,\;\;1\leq j \leq a_{s}\right\},\;\;\;s\in{\bf N}\cup\{0\},$$ and $cobP=\left(V,E\right)$ where $$V=\bigcup_{p\geq 0}\Phi_{p},\;\;\;E=\left\{\langle \,\langle j,p\rangle,\langle q,p+1 \rangle\,\rangle\right\},\;\;1\leq j\leq a_{p},\;\;1\leq q\leq a_{p+1}$$ with the partial order relation on $cobP$ : $$( x \leq_{P} y) \Longleftrightarrow [(t<v)\vee (t=v \wedge s=u)]$$ for 
$x=\langle s,t\rangle, y=\langle u,v\rangle $ being elements of the cobweb poset $cobP$. Similarly to the above, one can show that the family of cobweb posets consists of DAGs representable by corresponding [*dim* ]{} 2 posets (i.e. of oDAGs). [10]{} A. K. Kwaśniewski: [*More on Combinatorial Interpretation of Fibonomial Coefficients*]{}, Bulletin de la Societe des Sciences et des Lettres de Łódź (54) Serie: Recherches sur les Deformations Vol. 44, p.23-38, ArXiv:math.CO/0402344 v1 26 Oct 2004 A. K. Kwaśniewski: [*Information on Combinatorial Interpretation of Fibonomial Coefficients*]{}, Bull. Soc. Lett. Lodz. Ser. Rech. Deform. 42 (2003), p.39-41, ArXiv:math.CO/0402291 v1 22 Feb 2004 A. K. Kwaśniewski: [*The Logarythmic Fib-binomial Formula*]{}, Adv. Stud. Math. v. 9 (2004) No.1, p.19-26, ArXiv: math.CO/0406258 13 June 2004 A. K. Kwaśniewski: [*Comments on Combinatorial Interpretation of Fibonomial Coefficients*]{}, an e-mail style letter, Bulletin of the Institute of Combinatorics and its Applications, vol. 42 September 2004, p.10-11 A. D. Plotnikov: [*About Presentation of a Digraf by dim 2 Poset*]{} http://www.cumulativeinquiry.com/Problems/solut2.pdf
--- abstract: 'Unlike the $p=2$ case, the universal Steenrod Algebra ${\mathcal Q} (p) $ at odd primes does not have a fractal structure that preserves the length of monomials. Nevertheless, when $p$ is odd we detect inside $\mathcal Q(p)$ two different families of nested subalgebras, each isomorphic (as length-graded algebras) to the respective starting element of the sequence.' address: 'Dipartimento di Matematica e Applicazioni, Università di Napoli “Federico II”, Piazzale Tecchio 80, I-80125 Napoli, Italy. E-mail: mbrunett@unina.it, ciampell@unina.it' author: - Maurizio Brunetti - Adriana Ciampella title: Searching for Fractal Structures in the Universal Steenrod Algebra at Odd Primes --- Introduction ============ Let $p$ be any prime. The so-called universal Steenrod algebra ${\mathcal Q}(p)$ is an ${{\mathbb{F}}}_p$-algebra extensively studied by the authors (see, for instance, [@ManuMath]-[@Funda]). On its first appearance, it was described as the algebra of cohomology operations in the category of $H_\infty$-ring spectra (see [@May]). Invariant-theoretic descriptions of ${\mathcal Q(p)}$ can be found in [@CL] and [@L0]. 
When $p$ is an odd prime, the augmentation ideal of $\mathcal Q(p)$ is the free ${{\mathbb{F}}}_p$-algebra over the set $$\mathcal S_p = \{ \, z_{\epsilon,k} \; \vert \; (\epsilon,k) \in \{0,1\} \times {{\mathbb{Z}}}\, \}$$ subject to the set of relations $$\label{relaz} \mathcal R_p= \{ \, R(\epsilon, k,n), \, S(\epsilon, k,n) \; \vert \; (\epsilon, k,n) \in \{0,1\} \times {{\mathbb{Z}}}\times {{\mathbb{N}}}_0 \, \},$$ where $$\label{Rkn} R({\epsilon}, k,n)=z_{{\epsilon},pk-1-n}z_{0,k}+\sum_j (-1)^j \binom{(p-1)(n-j)-1}{ j}z_{{\epsilon},pk-1-j}z_{0,k-n+j},$$ and $$\label{Skn} \begin{aligned} S({\epsilon}, k,n)&=z_{{\epsilon},pk-n}z_{1,k}+\sum_j (-1)^{j+1} \binom{(p-1)(n-j)-1}{ j}z_{{\epsilon},pk-j}z_{1,k-n+j}\\ \,&+(1-{\epsilon})\sum_j(-1)^{j+1} \binom{(p-1)(n-j)}{ j}z_{1,pk-j}z_{0,k-n+j}.\\ \end{aligned}$$ Such relations are known as [*generalized Adem relations*]{}. The algebra $\mathcal Q(p)$ is related to many Steenrod-like operations. For instance to those acting on the cohomology of a graded cocommutative Hopf algebra ([@Viet], [@Li]), or the Dyer-Lashof operations on the homology of infinite loop spaces ([@AK] and [@May2]). Details of such connections, at least for $p=2$, can be found in [@BLMS]. In particular, the ordinary Steenrod algebra $\mathcal A(p)$ is a quotient of $\mathcal Q(p)$. At odd primes, the algebra epimorphism is determined by $$\zeta : z_{{\epsilon},k} \longmapsto \begin{cases} \beta^{{\epsilon}} P^k \quad \text{if $k \geq 0$,}\\ 0 \quad \qquad \text{otherwise.} \end{cases}$$ The kernel of the map $\zeta$ turns out to be the principal ideal generated by $z_{0,0}-1$. All monic monomials in ${\mathcal Q}(p) $, with the exception of $z_{\emptyset} =1$ have the form $$\label{mon} z_I=z_{{\epsilon}_1,i_1}z_{{\epsilon}_2,i_2}\cdots z_{{\epsilon}_m,i_m},$$ where the string $I=({\epsilon}_1,i_1;{\epsilon}_2, i_2; \dots ;{\epsilon}_m, i_m)$ is the [*label*]{} of the monomial $z_I$. 
By [*length*]{} of a monomial $z_I$ of type we just mean the integer $m$, while the length of any $\rho \in {{\mathbb{F}}}_p \subset \mathcal Q(p)$ is defined to be $0$. Since Relations and are homogeneous with respect to length, the algebra $\mathcal Q(p)$ can be regarded as a graded object. A monomial and its label are said to be admissible if $i_j \geq pi_{j+1}+{\epsilon}_{j+1}$ for any $j=1,\dots , m-1$. We also consider $z_{\emptyset}=1 \in {{\mathbb{F}}}_p \subset {\mathcal Q}(p)$ admissible. The set ${\mathcal B}$ of all monic admissible monomials forms an ${{\mathbb{F}}}_p$-linear basis for ${\mathcal Q}(p)$ (see [@CL]). Through two different approaches, in [@ApplMS] and [@RendASFM] it has been shown that ${\mathcal Q}(2)$ has a fractal structure given by a sequence of nested subalgebras ${\mathcal Q}_s$, each isomorphic to ${\mathcal Q}$. The interest in searching for fractal structures inside algebras of (co-)homology operations initially arose in [@Monks], where such structures were used as a tool to establish the nilpotence height of some elements in $\mathcal A(p)$. Results in the same vein are in [@Karaka]. Recently, in [@BolMex] the authors proved that no length-preserving strict monomorphisms turn out to exist in $\mathcal Q(p)$ when $p$ is odd. Hence no descending chain of isomorphic subalgebras starting with $\mathcal Q(p)$ exists for $p>2$. Results in [@BolMex] did not exclude the existence of fractal structures for proper subalgebras of $\mathcal Q(p)$. As a matter of fact, the subalgebras ${\mathcal Q}^0$ and ${\mathcal Q}^1$ generated by the $z_{0,h}$’s and the $z_{1,k}$’s respectively (together with $1$) turn out to have self-similar shapes, as stated in our Theorem \[teo1\], our main result. \[teo1\] Let $p$ be any odd prime. 
For any ${\epsilon}\in \{0,1\}$ there is a chain of nested subalgebras of $\mathcal Q(p)$ $${\mathcal Q}_0^{\epsilon}\supset {\mathcal Q}_1^{\epsilon}\supset {\mathcal Q}_2^{\epsilon}\supset \dots \supset {\mathcal Q}_s^{\epsilon}\supset {\mathcal Q}_{s+1}^{\epsilon}\supset \dots$$ each isomorphic to ${\mathcal Q}_0^{\epsilon}={\mathcal Q}^{\epsilon}$ as length-graded algebras. Theorem \[teo1\] relies on the existence of two suitable algebra monomorphisms $$\phi : {\mathcal Q}^0 {{ \longrightarrow }}{\mathcal Q}^0 \quad \text{and} \quad \psi : {\mathcal Q}^1 {{ \longrightarrow }}{\mathcal Q}^1.$$ Indeed, we just set ${\mathcal Q}_s^0=\phi^s({\mathcal Q}^0)$ and ${\mathcal Q}_s^1=\psi^s({\mathcal Q}^1)$, the restrictions $\phi\hspace{-0.15em}\mid_{{\mathcal Q}_s^0}$ and $\psi \hspace{-0.15em}\mid_{{\mathcal Q}_s^1}$ being the desired isomorphisms between ${\mathcal Q}_s^{\epsilon}$ and ${\mathcal Q}_{s+1}^{\epsilon}$ $({\epsilon}\in \{0,1\})$. For the sake of completeness we point out that the algebra $\mathcal Q (p)$ can also be filtered by the internal degree of its elements, defined on monomials as follows: $$\lvert \rho z_I \rvert = \begin{cases} \sum_h (2i_h(p-1) + {\epsilon}_{h}), & \text{if $I=({\epsilon}_1,i_1;{\epsilon}_2, i_2; \dots ;{\epsilon}_m, i_m)$} \\ 0 & \text{if $I=\emptyset$.} \end{cases}$$ In spite of its geometric importance, the internal degree will not play any role here. A first descending chain of subalgebras ======================================= We first need to establish some congruential identities. Let ${{\mathbb{N}}}_0$ denote the set of all non-negative integers. Given a fixed prime $p$, we write $$\label{bag} \sum_{i\geq 0} \gamma_i (m) p^i \quad \text{($0 \leq \gamma_i (m) <p$)}$$ to denote the $p$-adic expansion of a fixed $m \in {{\mathbb{N}}}_0$. The following well-known Lemma is a standard device to compute $\bmod \, p$ binomial coefficients. 
\[l1\] For any $(a,b) \in {{\mathbb{N}}}_0 \times {{\mathbb{N}}}_0$, the following congruential identity holds. $$\label{bah} {a \choose b} \equiv {\prod}_{i \geq 0} {\gamma_i (a) \choose \gamma_i (b)} \pmod p.$$ See [@Karaka p. 260] or [@StEps I 2.6]. Equation \[bah\] follows the usual conventions: ${0 \choose 0}=1$, and ${l\choose r} =0$ if $0 \leq l <r$. Congruence \[bah\] immediately yields $$\label{bax} {p^ra \choose p^rb} \equiv {a \choose b} \pmod p \qquad \text{for every $r \geq 0$},$$ since, in both cases, the right side of \[bah\] consists of the same product of binomial coefficients, apart from $r$ extra factors all equal to ${0 \choose 0}=1$. \[c1\] For any $(\ell,t,h) \in {{\mathbb{N}}}_0 \times {{\mathbb{N}}}_0 \times \{ 1, \dots , p\}$, the following congruential identity holds. $$\label{plh} \binom{p\ell-h}{pt}\equiv \binom{\ell-1}{ t} \pmod p.$$ Since $p\ell-h=(p-h) + p(\ell-1)$, we have $\gamma_0(p\ell-h)=p-h$. Note also that $\gamma_0(pt)=0$. According to Lemma \[l1\], we get $$\binom{p\ell-h}{pt} \equiv \binom{p-h}{0} \binom{p(\ell-1)}{pt} \pmod p.$$ We now use Congruence \[bax\] for $r=1$, and the fact that ${k \choose 0} =1$ for all $k \in {{\mathbb{N}}}_0$. In order to make notation less cumbersome, we set $$A(k,j) = \binom{(p-1)(k-j)-1}{ j}.$$ \[c2\] Let $(n,j)$ be a pair of positive integers. Whenever $j\not \equiv 0 \pmod p$, the binomial coefficient $A(pn,j)$ is divisible by $p$. If a fixed positive integer $j$ is not divisible by $p$, then there exists a unique pair $(l,h) \in {{\mathbb{N}}}\times \{1, \dots , p-1\}$ such that $j=pl-h$. Hence, setting $$T= (p-1)(n-l)+h-1,$$ we get $$\label{binom} A(pn,j)= \binom{(p-h-1)+pT}{(p-h)+p(l-1)} \equiv \binom{p-h-1}{p-h} \cdot \binom{T}{l-1} \pmod p$$ by Lemma \[l1\] and Equation \[bax\]. Since $p-h-1<p-h$, the first factor on the right side of Equation \[binom\] is zero, so the result follows. \[c3\] Let $(s,n,j)$ be a triple of positive integers. 
Whenever $j\not\equiv 0 \pmod{p^s}$, the binomial coefficient $A(p^sn,j)$ is divisible by $p$. We proceed by induction on $s$. The $s=1$ case is essentially Corollary \[c2\]. Suppose now $s>1$. The hypothesis on $j$ is equivalent to the existence of a suitable $(b, i) \in {{\mathbb{N}}}\times \{ 1, \dots, p^s-1 \}$ such that $j=p^sb-i$. Likewise, we can write $i=pl-r$, for a certain $(l,r) \in \{1, \dots, p^{s-1} \} \times \{0, \dots, p-1 \}$. We now distinguish two cases. If $r=0$, the binomial coefficient $A(p^sn,j)$ has the form $\binom{p\ell-h}{pt}$ where $$\ell= (p-1)(p^{s-1}n-p^{s-1}b+l), \qquad h=1, \qquad \text{and} \qquad t= p^{s-1}b-l.$$ By Corollary \[c1\], we get $$A(p^sn,j) \equiv A (p^{s-1}n, p^{s-1}b-l) \pmod p,$$ and the latter is divisible by $p$ by the inductive hypothesis. Assume now $1\leq r\leq p-1$. In this case, $$\label{t'} A(p^sn,j)= \binom{r-1 +pT'}{r+ p(p^{s-1} b -l)}$$ where $T'= (p-1)(p^{s-1}n-p^{s-1}b+l)-r$. Therefore, by Lemma \[l1\] we get $$\label{ko} A(p^sn,j) \equiv \binom{r-1}{r}\cdot \binom{T'}{p^{s-1}b-l} \pmod p.$$ The right side of Equation \[ko\] vanishes, since $r-1<r$, and this completes the proof. The Lemmas and Corollaries proved so far help reduce, in some particular cases, the number of potentially non-zero binomial coefficients in \[Rkn\] and in \[Skn\]. For instance, for any $(h,n) \in {{\mathbb{Z}}}\times {{\mathbb{N}}}_0$, relations of type $R({\epsilon},p^sh-\alpha_s,p^sn)$, where $$\alpha_s ={p^s-1 \over p-1} \qquad (s \geq 1),$$ only involve generators in the set $$\label{sem} \mathcal T_{({\epsilon},s)}= \{ z_{{\epsilon},p^sm-\alpha_s} \, \vert \, m \in {{\mathbb{Z}}}\}$$ as stated in the following Proposition. \[p1\] Let $({\epsilon}, k,n,s)$ be a fixed $4$-tuple in $\{0,1\} \times {{\mathbb{Z}}}\times {{\mathbb{N}}}_0 \times {{\mathbb{N}}}$. 
The polynomial $R({\epsilon}, p^sk-\alpha_s, p^sn)$ in \[Rkn\] is actually equal to $$z_{{\epsilon}, p^s(pk-1-n)-\alpha_s}z_{0,p^sk-\alpha_s} \,+\sum_j(-1)^j A(n,j) \, z_{{\epsilon},p^s(pk-1-j)-\alpha_s}z_{0,p^s(k-n+j)-\alpha_s}.$$ By definition (see \[Rkn\]), $R({\epsilon}, p^sk-\alpha_s,p^sn)$ is equal to $$z_{{\epsilon},p(p^sk-\alpha_s)-1-p^sn}z_{0,p^sk-\alpha_s} \,+\sum_l(-1)^l A(p^sn,l) \, z_{{\epsilon},p(p^sk-\alpha_s)-1-l}z_{0,p^sk-\alpha_s-p^sn+l}.$$ According to Lemma \[c3\], the only possible non-zero coefficients in the sum above occur when $l\equiv 0 \pmod{p^s}$. Thus, we set $l=p^sj$ and write $R({\epsilon}, p^sk-\alpha_s,p^sn)$ as $$z_{{\epsilon},p(p^sk-\alpha_s)-1-p^sn}z_{0,p^sk-\alpha_s} \,+\sum_j(-1)^{p^sj} A (p^sn, p^sj)z_{{\epsilon},p(p^sk-\alpha_s)-1-p^sj}z_{0,p^sk-\alpha_s-p^sn+p^sj}.$$ In this polynomial we can replace $z_{{\epsilon},p(p^sk-\alpha_s)-1-p^sn}$ and $z_{{\epsilon},p(p^sk-\alpha_s)-1-p^sj}$ by $$z_{{\epsilon},p^s(pk-1-n)-\alpha_s} \quad \text{and} \quad z_{{\epsilon},p^s(pk-1-j)-\alpha_s}$$ respectively, since $p\alpha_s+1=p^s+\alpha_s$. Finally, applying Equation \[plh\] as many times as necessary, and recalling that $p$ is odd, we get $$\label{apnj} (-1)^{p^sj} A(p^sn,p^sj) \equiv (-1)^j A(n,j) \pmod p.$$ As a consequence of Proposition \[p1\], the admissible expression of any non-admissible monomial with label $({\epsilon},p^sh_1-\alpha_s;0, p^sh_2-\alpha_s; \dots ;0, p^sh_m-\alpha_s)$ involves only generators in $\mathcal{T}_{({\epsilon},s)}$. 
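All the congruences used so far are easy to confirm experimentally. The following Python sketch (the helper names are ours; we treat a binomial coefficient with a negative upper entry as zero, which is an assumption on the conventions behind the finite sums above) implements Lemma \[l1\] digit by digit and the quantities $\alpha_s$ and $A(k,j)$:

```python
from math import comb

def p_adic_digits(m, p):
    """Digits gamma_i(m) of the p-adic expansion, least significant first."""
    digits = []
    while m:
        digits.append(m % p)
        m //= p
    return digits or [0]

def binom_mod_p(a, b, p):
    """C(a, b) mod p computed digitwise via Lucas' theorem (Lemma l1)."""
    da, db = p_adic_digits(a, p), p_adic_digits(b, p)
    width = max(len(da), len(db))
    da += [0] * (width - len(da))
    db += [0] * (width - len(db))
    result = 1
    for x, y in zip(da, db):
        result = (result * comb(x, y)) % p   # comb(x, y) = 0 when x < y
    return result

def alpha(s, p):
    """alpha_s = (p^s - 1)/(p - 1)."""
    return (p**s - 1) // (p - 1)

def A(k, j, p):
    """A(k, j) = C((p-1)(k-j) - 1, j), taken to vanish for a negative top."""
    top = (p - 1) * (k - j) - 1
    return comb(top, j) if top >= 0 else 0
```

Running it over a range of small values confirms Lemma \[l1\], Congruence \[bax\], Corollary \[c3\], the identity $p\alpha_s + 1 = p^s + \alpha_s$, and the congruence \[apnj\].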
This is why, for any non-negative integer $s$, there is a well-defined ${{\mathbb{F}}}_p$-algebra ${\mathcal Q}_s^0$ generated by the set $\{1\} \cup \mathcal{T}_{(0,s)}$ and subject to relations $$R(0,{p^sh-\alpha_s}, p^sn)=0 \quad \forall n\in {{\mathbb{N}}}_0.$$ Thus ${\mathcal Q}_0^0$ and ${\mathcal Q}_1^0$ are the subalgebras of ${\mathcal Q}(p)$ generated by the sets $$\{1\}\cup\{z_{0, h}\, \vert \, h\in{{\mathbb{Z}}}\} \quad \text{and} \quad \{1\}\cup\{z_{0, ph-1}\, \vert \, h\in{{\mathbb{Z}}}\}$$ respectively. The former has been simply denoted by ${\mathcal Q}^0$ in Section 1. The arithmetic identity $$\label{palfa} p^{s+1}h-\alpha_{s+1}=p^{s}(ph-1)-\alpha_{s},$$ implies that ${\mathcal Q}_s^0 \supset {\mathcal Q}_{s+1}^0$. \[lem2\] A monomial of type $$\label{admz} z_I=z_{{\epsilon},p^sh_1-\alpha_s}z_{0,p^sh_2-\alpha_s}\cdots \, z_{0,p^sh_m-\alpha_s}$$ is admissible if and only if $h_i \,\geq \, ph_{i+1}$ for any $i=1, \dots , m-1$. Admissibility for a monomial of type \[admz\] is tantamount to the condition $$p^sh_i - \alpha_s \geq p (p^sh_{i+1} - \alpha_s) \quad \forall i \in \{ 1, \dots, m-1 \}.$$ The inequalities above are equivalent to $$h_i \geq p h_{i+1} - \frac{p^s-1}{p^s} \quad \forall i \in \{ 1, \dots, m-1 \},$$ and the ceiling of the real number on the right side is precisely $ph_{i+1}$. \[p2\] An ${{\mathbb{F}}}_p$-linear basis for ${\mathcal Q}_s^0$ is given by the set $\mathcal B_{{\mathcal Q}_s^0}$ of its monic admissible monomials. The procedure for expressing any monomial in $\mathcal Q(p)$ as a sum of admissible monomials is explained in [@CL]. As Proposition \[p1\] shows, the generalized Adem relations required to carry out this procedure starting from a monomial in ${\mathcal Q}_s^0$ only involve generators actually available in the set at hand. 
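The ceiling computation in Lemma \[lem2\] and the identity \[palfa\] can both be checked by brute force over a range of values; a short Python sketch (with our own helper names):

```python
def alpha(s, p):
    """alpha_s = (p^s - 1)/(p - 1), with alpha_0 = 0."""
    return (p**s - 1) // (p - 1)

def admissible_step(h1, h2, s, p):
    """The raw admissibility inequality for consecutive factors
    z_{., p^s h1 - alpha_s} z_{0, p^s h2 - alpha_s} in Q_s^0."""
    return p**s * h1 - alpha(s, p) >= p * (p**s * h2 - alpha(s, p))
```

Looping over small $p$, $s$, $h_1$, $h_2$ confirms that the shifted inequality holds exactly when $h_1 \geq p h_2$, as Lemma \[lem2\] asserts.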
So far, we have established the existence of the following descending chain of algebra inclusions: $${\mathcal Q}^0 ={\mathcal Q}_0^0 \supset {\mathcal Q}_1^0 \supset {\mathcal Q}_2 ^0\supset \dots \supset {\mathcal Q}_s^0 \supset {\mathcal Q}_{s+1}^0\supset \dots$$ On the free ${{\mathbb{F}}}_p$-algebra ${{\mathbb{F}}}_p\langle \{1\} \cup \mathcal{T}_{(0,0)} \rangle$ we now define a monomorphism $\Phi$ acting on the generators as follows $$\Phi (1) =1 \qquad \text{and} \qquad \Phi (z_{0,k})= z_{0,pk-1}.$$ We set $\Phi^0= 1_{{{\mathbb{F}}}_p\langle \{1\} \cup \mathcal{T}_{(0,0)} \rangle}$ and $\Phi^s = \Phi \circ \Phi^{s-1}$ for $s \geq 1$. \[ppp\] For each $s \geq 0$, $$\label{fi1} \Phi^s(z_{0,i_1} \cdots \, z_{0,i_m})= z_{0,p^si_1-\alpha_s} \cdots \; z_{0,p^si_m-\alpha_s},$$ and $$\label{fi2} \Phi^s(R(0,k,n))=R(0,p^sk-\alpha_s, p^sn).$$ Equations \[fi1\] and \[fi2\] are trivially true for $s=0$. For $s\geq 1$, use an inductive argument taking into account Equation \[palfa\] and Proposition \[p1\]. \[ppp2\] Let $\pi : {{\mathbb{F}}}_p \langle \{1\} \cup \mathcal{T}_{(0,0)}\rangle \rightarrow \mathcal Q^0$ be the quotient map. There exists an algebra monomorphism $\phi$ such that the diagram $$\begin{diagram} \node{{{\mathbb{F}}}_p\langle \{1\} \cup \mathcal{T}_{(0,0)} \rangle} \arrow{e,t}{\Phi} \arrow{s,l}{\pi} \node{{{\mathbb{F}}}_p\langle \{1\} \cup \mathcal{T}_{(0,0)} \rangle} \arrow{s,r}{\pi} \\ \node{\mathcal{Q}^0} \arrow{e,b,..}{\phi} \node{\mathcal{Q}^0} \end{diagram}$$ commutes. By Equation \[fi2\], it follows in particular that $$\Phi (R(0,k,n))=R(0,pk-1, pn).$$ Therefore there exists a well-defined algebra map $$\phi: z_{0,i_1}z_{0,i_2}\cdots z_{0,i_m} \in \mathcal Q^0 \longmapsto z_{0,pi_1-1}z_{0,pi_2-1}\cdots z_{0,pi_m-1} \in \mathcal Q^0.$$ This map is injective since the set $\mathcal B_{{\mathcal Q}_0^0}$ – an ${{\mathbb{F}}}_p$-linear basis for $\mathcal Q^0$ according to Proposition \[p2\] – is mapped to admissible monomials by Lemma \[lem2\]. 
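The injectivity argument rests on $\phi$ preserving admissibility, which on labels reduces to elementary inequalities. A quick Python check (`is_admissible` and `phi_label` are our own encodings, not taken from the paper):

```python
def is_admissible(label, p):
    """label = [(eps_1, i_1), ..., (eps_m, i_m)]; admissible when
    i_j >= p * i_{j+1} + eps_{j+1} for every consecutive pair."""
    return all(i >= p * i2 + e2
               for (_, i), (e2, i2) in zip(label, label[1:]))

def phi_label(label, p):
    """The effect of phi on a label in Q^0: z_{0,k} -> z_{0, pk-1}."""
    return [(e, p * i - 1) for e, i in label]
```

Iterating `phi_label` reproduces the index formula $\phi^s(z_{0,k}) = z_{0,p^sk-\alpha_s}$ of \[fi1\], and a loop over small indices shows that $\phi$ both preserves and reflects admissibility.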
\[cz1\] The algebra $\mathcal Q^0_s$ is isomorphic to its subalgebra $\mathcal Q^0_{s+1}$. By Propositions \[ppp\] and \[ppp2\], we deduce that $\phi^s (\mathcal Q^0) = \mathcal Q^0_s$. Hence the map $$\phi\hspace{-0.15em}\mid_{{\mathcal Q}_s^0} : \operatorname{Im}\phi^s {{ \longrightarrow }}\operatorname{Im}\phi^{s+1}$$ gives the desired isomorphism. Corollary \[cz1\] proves Theorem \[teo1\] for ${\epsilon}=0$. A second descending chain of subalgebras ======================================== The aim of this Section is to provide a proof for the ${\epsilon}=1$ case of Theorem \[teo1\]. We follow as closely as possible the line of attack put forward in Section 2. \[s1\] Let $(k,n,s)$ be a fixed triple in ${{\mathbb{Z}}}\times {{\mathbb{N}}}_0 \times {{\mathbb{N}}}$. The polynomial $S(1, p^sk, p^sn)$ in \[Skn\] is actually equal to $$z_{1, p^s(pk-n)}z_{1,p^sk} +\sum_j(-1)^{j+1}A(n,j) \, z_{1,p^s(pk-j)}z_{1,p^s(k-n+j)}.$$ By definition (see \[Skn\]), $$\label{snj} S(1, p^sk,p^sn)=z_{1,p^s(pk-n)}z_{1,p^sk} +\sum_l(-1)^{l+1}A(p^sn,l) z_{1,p^{s+1}k-l}z_{1,p^sk-p^sn+l}.$$ According to Lemma \[c3\], the only possible non-zero coefficients in the sum above are those with $l\equiv 0 \pmod{p^s}$. Setting $l=p^sj$, the polynomial becomes $$z_{1,p^{s+1}k-p^sn}z_{1,p^sk} +\sum_j(-1)^{p^sj+1}A(p^sn, p^sj) z_{1,p^{s+1}k-p^sj}z_{1,p^sk-p^sn+p^sj}.$$ The result now follows from Equation \[apnj\]. Proposition \[s1\] implies that relations of type $S(1,p^sh,p^sn)$ only involve generators of type $z_{1,p^sm}$. 
Therefore the admissible expression of any non-admissible monomial with label $(1,p^sh_1;1, p^sh_2; \dots ;1, p^sh_m)$ only involves generators in the set $$\label{tem} \mathcal T'_{(1,s)}= \{ z_{1,p^sm} \, \vert \, m \in {{\mathbb{Z}}}\}.$$ So it makes sense to define ${\mathcal Q}_s^1$ as the ${{\mathbb{F}}}_p$-algebra generated by the set $\{1\} \cup \mathcal T'_{(1,s)}$ and subject to relations $$S(1,{p^sh}, p^sn)=0 \quad \forall n\in {{\mathbb{N}}}_0.$$ Each $\mathcal Q^1_s$ is actually a subalgebra of $\mathcal Q(p)$. We have inclusions ${\mathcal Q}_s^1 \supset {\mathcal Q}_{s+1}^1$. In Section 1, the algebra ${\mathcal Q}_0^1$ has been simply denoted by ${\mathcal Q}^1$. \[lem31\] A monomial of type $$\label{z1p} z_{1,p^sh_1}z_{1,p^sh_2}\cdots \, z_{1,p^sh_m}$$ in ${\mathcal Q}_s^1 \subset \mathcal Q(p)$ is admissible if and only if $ h_i \geq p h_{i+1} +1 \quad \forall i \in \{ 1, \dots, m-1 \}. $ By definition, the monomial is admissible if and only if $$p^sh_i \geq p (p^sh_{i+1}) +1 \quad \forall i \in \{ 1, \dots, m-1 \}.$$ The inequalities above are equivalent to $$h_i \geq p h_{i+1} + \frac{1}{p^s} \quad \forall i \in \{ 1, \dots, m-1 \},$$ and the ceiling of the real number on the right side is precisely $ph_{i+1}+1$. \[s2\] An ${{\mathbb{F}}}_p$-linear basis for ${\mathcal Q}_s^1$ is given by the set $\mathcal B_{{\mathcal Q}_s^1}$ of its monic admissible monomials. The proof follows verbatim that of Proposition \[p2\], just replacing “Proposition \[p1\]” by “Proposition \[s1\]” and $\mathcal Q^0_s$ by $\mathcal Q^1_s$. We are now going to prove that the subalgebras in the descending chain $${\mathcal Q}^1 ={\mathcal Q}_0^1 \supset {\mathcal Q}_1^1 \supset {\mathcal Q}_2 ^1\supset \dots \supset {\mathcal Q}_s^1 \supset {\mathcal Q}_{s+1}^1\supset \dots$$ are all isomorphic. 
To this aim we consider the injective endomorphism $\Psi$ of the free ${{\mathbb{F}}}_p$-algebra ${{\mathbb{F}}}_p \langle \{1\} \cup \mathcal T'_{(1,0)} \rangle$ defined by setting $$\Psi (1) =1 \qquad \text{and} \qquad \Psi (z_{1,k})= z_{1,pk}.$$ \[ppp3\] Let $\pi' : {{\mathbb{F}}}_p \langle \{1\} \cup \mathcal{T'}_{(1,0)}\rangle \rightarrow \mathcal Q^1$ be the quotient map. There exists an algebra monomorphism $\psi$ such that the diagram $$\begin{diagram} \node{{{\mathbb{F}}}_p\langle \{1\} \cup \mathcal{T'}_{(1,0)} \rangle} \arrow{e,t}{\Psi} \arrow{s,l}{\pi'} \node{{{\mathbb{F}}}_p\langle \{1\} \cup \mathcal{T'}_{(1,0)} \rangle} \arrow{s,r}{\pi'} \\ \node{\mathcal{Q}^1} \arrow{e,b,..}{\psi} \node{\mathcal{Q}^1} \end{diagram}$$ commutes. Since $ \Psi^s(z_{1,i_1} \cdots \, z_{1,i_m})= z_{1,p^si_1} \cdots \; z_{1,p^si_m}$, by Proposition \[s1\] we deduce that $$\label{fi4} \Psi^s(S(1,k,n))=S(1,p^sk, p^sn).$$ Therefore there exists a well-defined algebra map $$\psi: z_{1,i_1} \cdots \, z_{1,i_m} \in \mathcal Q^1 \longmapsto z_{1,pi_1} \cdots \, z_{1,pi_m} \in \mathcal Q^1.$$ This map is injective since the set $\mathcal B_{{\mathcal Q}_0^1}$ – an ${{\mathbb{F}}}_p$-linear basis for $\mathcal Q^1$ according to Proposition \[s2\] – is mapped to admissible monomials by Lemma \[lem31\]. \[cz3\] The algebra $\mathcal Q^1_s$ is isomorphic to its subalgebra $\mathcal Q^1_{s+1}$. By Equation \[fi4\] and Proposition \[ppp3\], we deduce that $\psi^s (\mathcal Q^1) = \mathcal Q^1_s$. 
Thus, the desired isomorphism is given by $$\psi\hspace{-0.15em}\mid_{{\mathcal Q}_s^1} : \operatorname{Im}\psi^s {{ \longrightarrow }}\operatorname{Im}\psi^{s+1}.$$ Further substructures ===================== For each $s\in {{\mathbb{N}}}_0$, we define $V_s$ to be the ${{\mathbb{F}}}_p$-vector subspace of ${\mathcal Q}(p)$ generated by the set of monomials $${\mathcal U}_s=\{z_{1,p^sh_1-\alpha_s}z_{0,p^sh_2-\alpha_s}\cdots \, z_{0,p^sh_m-\alpha_s}\, \vert \; m\geq 2, \; (h_1, \dots ,h_m) \in {{\mathbb{Z}}}^m \, \}.$$ Equation \[palfa\] implies that $V_s \supset V_{s+1}$. None of the $V_s$’s is a subalgebra of $\mathcal Q(p)$; nevertheless, by Proposition \[p1\] and the nature of Relations \[Rkn\], it follows that $V_s$ can be endowed with a right ${\mathcal Q}^0_s$-module structure just by considering multiplication in $\mathcal Q(p)$. By using once again Lemma \[lem2\] and the argument in the proof of Proposition \[p2\], we get \[p8\] An ${{\mathbb{F}}}_p$-linear basis for $V_s$ is given by the set $\mathcal B_{V_s}$ of its monic admissible monomials. 
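Before moving on, note that the analogous index bookkeeping for $\psi$ from Section 3 can be checked in the same spirit as before; a Python sketch (our own encoding):

```python
def psi_index(k, p):
    """The index map underlying psi: z_{1,k} -> z_{1, pk}."""
    return p * k

def admissible_pair_q1(i1, i2, p):
    """Admissibility of z_{1,i1} z_{1,i2} in Q^1: i1 >= p*i2 + 1."""
    return i1 >= p * i2 + 1
```

A brute-force loop confirms the equivalence in Lemma \[lem31\] and that $\psi$ preserves and reflects admissibility of consecutive pairs, which is what the injectivity argument for $\psi$ used.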
\[pult\] The map between sets $$z_{1,i_1}z_{0,i_2}\cdots \, z_{0,i_m} \in {\mathcal U}_0 \; \longmapsto \; z_{1,pi_1-1}z_{0,pi_2-1}\cdots \, z_{0,pi_m-1} \in {\mathcal U}_0$$ can be extended to a well-defined injective ${{\mathbb{F}}}_p$-linear map $\lambda : V_0 {{ \longrightarrow }}V_0.$ Moreover $$\label{lll} \lambda^s (V_0) = V_s \subset V_0.$$ As in the proof of Proposition \[ppp\], Equation \[palfa\] and Proposition \[p1\] show that the polynomial $R({\epsilon},k,n) \in {{\mathbb{F}}}_p \langle \mathcal S_p \rangle$ is mapped onto $R({\epsilon}, p^sk-\alpha_s,p^sn)$ through the $s$-th power of the ${{\mathbb{F}}}_p$-linear map $$\label{bda} \Lambda: z_{{\epsilon}_1,i_1}z_{{\epsilon}_2,i_2}\cdots \, z_{{\epsilon}_m,i_m} \, \in \, {{\mathbb{F}}}_p \langle \mathcal S_p \rangle \, \longmapsto \, z_{{\epsilon}_1,pi_1-1}z_{{\epsilon}_2,pi_2-1}\cdots \, z_{{\epsilon}_m,pi_m-1} \, \in \, {{\mathbb{F}}}_p \langle \mathcal S_p \rangle.$$ Hence there are two maps $\bar{\Lambda}$ and $\lambda$ such that the diagram $$\begin{tikzcd} {{\mathbb{F}}}_p \langle \mathcal S_p \rangle \arrow{r}{\Lambda} & {{\mathbb{F}}}_p \langle \mathcal S_p \rangle \\ {{\mathbb{F}}}_p \langle \mathcal U_0 \rangle \arrow{d}{\pi''} \arrow[hook]{u} \arrow[dotted]{r}{\bar{\Lambda}}& {{\mathbb{F}}}_p \langle \mathcal U_0 \rangle \arrow[hook]{u} \arrow{d}{\pi''} \\ V_0 \arrow[dotted]{r}{\lambda} & V_0 \end{tikzcd}$$ commutes, where $\pi'' : {{\mathbb{F}}}_p \langle \mathcal U_0 \rangle \rightarrow V_0$ is the quotient map. Finally, taking into account Equation \[palfa\], one checks that $$\label{lamb} \lambda^s(z_{1,i_1}z_{0,i_2}\cdots \, z_{0,i_m})= z_{1,p^si_1-\alpha_s}z_{0,p^si_2-\alpha_s}\cdots \, z_{0,p^si_m-\alpha_s}.$$ Since Equation \[lamb\] implies \[lll\], the proof is complete. We now introduce a category $\mathcal K$ whose objects are pairs $(M,R)$, with $R$ being any ring and $M$ any right $R$-module. 
A morphism between two objects $(M,R)$ and $(N,S)$ is given by a pair $(f, \omega)$, where $f : M \rightarrow N$ is a group homomorphism and $\omega: R \rightarrow S$ is a ring homomorphism such that $$f (mr) = f(m) \, \omega(r) \qquad \forall (m,r) \in (M,R).$$ The category $\mathcal K$ is partially ordered by “inclusions”. More precisely we say that $$(M,R) \subseteq (M',R')$$ if $M$ is a subgroup of $M'$ and $R$ is a subring of $R'$. \[t2\] The objects in $\mathcal K$ of the descending chain $$({V}_0, \mathcal Q_0^0) \supset ({V}_1, \mathcal Q_1^0) \supset \dots \supset ({V}_s, \mathcal Q_s^0) \supset ({V}_{s+1}, \mathcal Q_{s+1}^0) \supset \dots \,$$ are all isomorphic. By Proposition \[pult\] it follows that $ \lambda\hspace{-0.15em}\mid_{V_s} : V_s {{ \longrightarrow }}V_{s+1}$ is an isomorphism between ${{\mathbb{F}}}_p$-vector spaces. Thus, recalling Corollary \[cz1\], the desired isomorphism in $\mathcal K$ is given by $$(\lambda\hspace{-0.15em}\mid_{V_s}, \phi\hspace{-0.15em}\mid_{{\mathcal Q}_s^0} ) : ({V}_s, \mathcal Q_s^0) \; {{ \longrightarrow }}\; ({V}_{s+1}, \mathcal Q_{s+1}^0).$$ A final remark ============== Theorem 1.1 in [@BolMex] says that no strict algebra monomorphism in $\mathcal Q(p)$ exists when $p$ is odd. Hence there is no chance to find algebra endomorphisms of $\mathcal Q(p)$ extending the maps $\phi$ and $\psi$ defined in Sections 2 and 3 respectively. To give an idea of the obstructions one encounters, consider the ${{\mathbb{F}}}_p$-linear map $$\Theta : {{\mathbb{F}}}_p \langle \mathcal S_p \rangle \; {{ \longrightarrow }}\; {{\mathbb{F}}}_p \langle \mathcal S_p \rangle$$ defined on monomials as follows $$\Theta (z_{{\epsilon}_1,i_1} \cdots \, z_{{\epsilon}_m,i_m} ) = z_{{\epsilon}_1,pi_1} \cdots \, z_{{\epsilon}_m,pi_m}.$$ Neither the map $\Theta$ nor the map $\Lambda$ introduced in Section 4 stabilizes the entire set $\mathcal R_p$ of relations. 
Indeed, take for instance $$R(0, 0,0)= z_{0,-1}z_{0,0} \quad \text{and} \quad S(1, 0,0)= z_{1,0}z_{1,0}.$$ The polynomial $$\label{psi0} \Theta (R(0, 0,0)) =z_{0,-p}z_{0,0}$$ does not belong to the set $\mathcal R_p$. In fact, the only polynomial in $\mathcal R_p$ containing $z_{0,-p}z_{0,0}$ as a summand is $$R(0,0,p-1)= z_{0,-1-(p-1)}z_{0,0} + z_{0,-1}z_{0,-p+1}.$$ Similarly, the polynomial $$\label{phi0}\Lambda (S(1, 0,0))= z_{1,-1}z_{1,-1}$$ does not belong to the set $\mathcal R_p$, since it consists of a single admissible monomial, whereas each element in $\mathcal R_p$ always contains a non-admissible monomial among its summands. S. Araki, T. Kudo, *Topology of $H_n$-spaces and $H$-squaring operations*, Mem. Fac. Sci. Kyusyu Univ. Ser. A [**10**]{} (1956), 85–120. M. Brunetti, A. Ciampella, L. A. Lomonaco, *The Cohomology of the Universal Steenrod algebra*, Manuscripta Math. [**118**]{} (2005), 271–282. M. Brunetti, A. Ciampella, L. A. Lomonaco, *An Embedding for the $E_2$-term of the Adams Spectral Sequence at Odd Primes*, Acta Mathematica Sinica, English Series [**22**]{} (2006), no. 6, 1657–1666. M. Brunetti, A. Ciampella, *A Priddy-type Koszulness criterion for non-locally finite algebras*, Colloquium Mathematicum [**109**]{} (2007), no. 2, 179–192. M. Brunetti, A. Ciampella, L. A. Lomonaco, *Homology and cohomology operations in terms of differential operators*, Bull. London Math. Soc. [**42**]{} (2010), no. 1, 53–63. M. Brunetti, A. Ciampella, L. A. Lomonaco, *An Example in the Singer Category of Algebras with Coproducts at Odd Primes*, Vietnam J. Math. [**44**]{} (2016), no. 3, 463–476. M. Brunetti, A. Ciampella, L. A. Lomonaco, *Length-preserving monomorphisms for some algebras of operations*, Bol. Soc. Mat. Mex. [**23**]{} (2017), no. 1, 487–500. M. Brunetti, A. Ciampella, *The Fractal Structure of the Universal Steenrod Algebra: an invariant-theoretic description*, Applied Mathematical Sciences, Vol. [**8**]{} no. 133 (2014), 6681–6687. M. Brunetti, L. 
A. Lomonaco, *Chasing non-diagonal cycles in a certain system of algebras of operations*, Ricerche Mat. [**63**]{} (2014), no. 1, suppl., S57–S68. A. Ciampella, *On a fractal structure of the universal Steenrod algebra*, Rend. Accad. Sci. Fis. Mat. Napoli [**81**]{} (4) (2014), 203–207. A. Ciampella, L. A. Lomonaco, *The Universal Steenrod Algebra at Odd Primes*, Communications in Algebra [**32**]{} (2004), no. 7, 2589–2607. A. Ciampella, L. A. Lomonaco, *Homological computations in the universal Steenrod algebra*, Fund. Math. [**183**]{} (2004), no. 3, 245–252. I. Karaca, *Nilpotence relations in the mod p Steenrod algebra*, J. Pure Appl. Algebra [**171**]{} (2002), no. 2–3, 257–264. A. Liulevicius, *The factorization of cyclic reduced powers by secondary cohomology operations*, Mem. Amer. Math. Soc. [**42**]{} (1962). L. A. Lomonaco, *Dickson invariants and the universal Steenrod algebra*, Topology, Proc. 4th Meet., Sorrento/Italy 1988, Suppl. Rend. Circ. Mat. Palermo, II. Ser. [**24**]{} (1990), 429–443. J. P. May, *A General Approach to Steenrod Operations*, Lecture Notes in Mathematics [**168**]{}, Berlin: Springer, 153–231 (1970). J. P. May, *Homology operations on infinite loop spaces*, Algebraic topology (Proc. Sympos. Pure Math., Vol. XXII, Univ. Wisconsin, Madison, Wis., 1970), pp. 171–185. Amer. Math. Soc., Providence, R.I. (1971). K. G. Monks, *Nilpotence in the Steenrod algebra*, Bol. Soc. Mat. Mexicana (2) [**37**]{} (1992), no. 1–2, 401–416 (Papers in honor of José Adem). N. E. Steenrod, *Cohomology Operations*, lectures written and revised by D. B. A. Epstein, Ann. of Math. Studies [**50**]{}, Princeton Univ. Press, Princeton, NJ (1962).
--- abstract: 'We provide a mathematical study of the modified Diffusion Monte Carlo (DMC) algorithm introduced in the companion article [@DMC]. DMC is a simulation technique that uses branching particle systems to represent expectations associated with Feynman-Kac formulae. We provide a detailed heuristic explanation of why, in cases in which a stochastic integral appears in the Feynman-Kac formula (e.g. in rare event simulation, continuous time filtering, and other settings), the new algorithm is expected to converge in a suitable sense to a limiting process as the time interval between branching steps goes to $0$. The situation studied here stands in stark contrast to the “naïve” generalisation of the DMC algorithm which would lead to an exponential explosion of the number of particles, thus precluding the existence of any finite limiting object. Convergence is shown rigorously in the simplest possible situation of a random walk, biased by a linear potential. The resulting limiting object, which we call the “Brownian fan”, is a very natural new mathematical object of independent interest.' author: - Martin Hairer and Jonathan Weare bibliography: - './refs.bib' title: The Brownian fan --- =0.65cm Introduction ============ Consider a Markov chain $y_{t_k}$ with transition probabilities $\P(x,dy)$ on some state space $\CX$ and, in anticipation of developments below, indexed by a sequence of real numbers $t_0<t_1<t_2<\cdots.$ The nature of the state space does not matter (Polish is enough), but one should think of $\CX = \R^d$ for definiteness. 
Given functions $\v \colon \CX^2 \to \R$ and $f\colon \CX \to \R$ with sufficient integrability properties (say bounded), the Diffusion Monte Carlo (DMC) algorithm (see [@DMC] for a description of DMC compatible with the discussion below) computes an estimate of expectations of the form $$\label{discavet2} \langle f \rangle_t = \mathbf{E}\biggl( f(y_t) \exp\Bigl(- \sum_{t_k\leq t}\v(y_{t_k},y_{t_{k+1}}) \Bigr)\biggr)\;,$$ where $y$ is a realisation of the Markov chain described by $\P.$ Purely for convenience, we assume that $t$ is among the times $t_0,t_1,t_2,\dots.$ More precisely, at time $t,$ DMC produces a collection of $N_{t}$ copies of the underlying system, $\{x_{t}^{(i)}\}_{i=1}^{N_{t}}$, so that $$\mathbf{E} \sum_{i=1}^{N_{t}} f\bigl(x_{t}^{(i)}\bigr) = \langle f \rangle_t\;.$$ Of course, the expectation in \[discavet2\] can be computed by generating many independent sample trajectories of $y.$ However, in most cases the weights $\exp\bigl(- \sum_{t_k\leq t}v(y_{t_k},y_{t_{k+1}}) \bigr)$ will quickly degenerate so that a huge number of samples is required to generate a single statistically significant sample. At each time step DMC removes samples with very low weights and replaces them with multiple copies of samples with larger weights, focusing computational effort on statistically significant samples. Variants of DMC are used regularly in a wide range of fields including electronic structure, rare event simulation, and data assimilation (see [@HammersleySIS:1954; @defreitas05; @Allen:FFS:2006; @Johansen:SMCrare:2006; @DMC] for a small sample of these applications). The algorithm has also been the subject of significant mathematical inquiry [@DelMoral:FK:2011]. Continuous time limits have been considered [@Rousset:PhD:2006] for cases in which $\v$ scales like the time interval between branching steps. Perhaps because standard DMC does not have a limit in those cases, more general Feynman-Kac formulae do not seem to have been considered despite their appearance in applications. 
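The plain sampling strategy just described (before any branching is introduced) can be sketched in a few lines; the interface (`step_sample` draws one transition of the chain) and all names here are ours, not the authors':

```python
import math

def naive_fk_estimate(f, v, x0, n_steps, step_sample, n_samples):
    """Plain weighted Monte Carlo for <f>_t: average of
    f(y_t) * exp(-sum_k v(y_{t_k}, y_{t_{k+1}})) over independent paths."""
    total = 0.0
    for _ in range(n_samples):
        x, log_w = x0, 0.0
        for _ in range(n_steps):
            x_next = step_sample(x)      # one Markov transition
            log_w -= v(x, x_next)        # accumulate the Feynman-Kac exponent
            x = x_next
        total += f(x) * math.exp(log_w)
    return total / n_samples
```

With $\v(y,\tilde y) = V(\tilde y) - V(y)$ the accumulated exponent telescopes to $V(y_0)-V(y_t)$; for a generic $\v$ the weights can degenerate as described above, which is what DMC's resampling is designed to counter.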
For the purposes of this article, the main example one should keep in mind is that where $y$ is a time discretisation of a diffusion process obtained for example by applying an Euler scheme with fixed stepsize $\eps$: $$y_{t_{k+1}} = y_{t_k} + \eps\, F(y_{t_k}) + \sqrt{\eps}\, \Sigma(y_{t_k})\, \xi_{k+1}\;,$$ where $\eps = t_{k+1}-t_k$ and the $\xi_k$ are a sequence of i.i.d. random variables (not necessarily Gaussian). Regarding the function $\v$, we will mostly consider the case $\v(y,\tilde y) = V(\tilde y) - V(y)$. As described in [@DMC], this choice arises in the application of DMC to rare event simulation problems. Notice that, for this choice of $\v$ and for $y$ as above, when $\eps$ is small, $\v(y_{t_k},y_{t_{k+1}})\sim \sqrt {\eps}.$ This causes a dramatic failure in standard DMC [@DMC]. Yet, for small $\eps$ (with fixed $t$) the expectation $\langle f\rangle_t$ defined in \[discavet2\] has the perfectly nice limit $$\mathbf{E}\biggl( f(y_t) \exp\Bigl(- V(y_t) + V(y_0) \Bigr)\biggr)$$ where now $y_t$ is the continuous time limit of the discretised chain (the diffusion with drift $F$ and diffusion coefficient $\Sigma$). When a stochastic integral appears in the exponential the situation is completely analogous. It is natural then to search for a modification of DMC that can handle these settings. It was shown in [@DMC] that the following algorithm provides an unbiased estimator for $\langle f \rangle_t$ (it satisfies the unbiasedness property above) and is superior to DMC (it has lower variance and equivalent expected cost). [TDMC]{}\[tdmc\] Ticketed DMC 1. Begin with $M$ copies $x^{(j)}_0 = x_0$. For each $j=1,\dots,M$ choose an independent random variable $\theta^{(j)}_0\sim \mathcal{U}(0,1)$. 2. At step $k$ there are $N_{t_k}$ samples $(x^{(j)}_{t_k},\theta^{(j)}_{t_k})$. Evolve each of the $x^{(j)}_{t_k}$ one step to generate $N_{t_k}$ values $$\tilde x^{(j)}_{t_{k+1}}\sim \mathbf{P}\bigl( y_{t_{k+1}}\in dx\,|\, y_{t_k}=x^{(j)}_{t_k}\bigr).$$ 3. For each $j=1,\dots, N_{t_k}$, let $$P^{(j)} = e^{-\v(x^{(j)}_{t_k},\,\tilde x^{(j)}_{t_{k+1}})}\;.$$ 
If $P^{(j)} < \theta^{(j)}_{t_k}$ then set $$N^{(j)} = 0.$$ If $P^{(j)} \geq \theta^{(j)}_{t_k}$ then set $$N^{(j)} = \max\bigl\{ \bigl\lfloor P^{(j)} + u^{(j)} \bigr\rfloor ,\, 1 \bigr\}\;,$$ where $u^{(j)}$ are independent $\mathcal{U}(0,1)$ random variables. 4. For $j=1,\dots, N_{t_k}$, if $N^{(j)}>0$ set $$x^{(j,1)}_{t_{k+1}} = \tilde x^{(j)}_{t_{k+1}}\quad\text{and}\quad \theta^{(j,1)}_{t_{k+1}} = \frac{\theta^{(j)}_{t_k}}{P^{(j)}}$$ and for $i=2,\dots,N^{(j)}$ $$x^{(j,i)}_{t_{k+1}} = \tilde x^{(j)}_{t_{k+1}}\quad\text{and}\quad \theta^{(j,i)}_{t_{k+1}}\sim \mathcal{U}((P^{(j)})^{-1},1).$$ 5. Finally set $N_{t_{k+1}} = \sum_{j=1}^{N_{t_k}} N^{(j)}$ and list the $N_{t_{k+1}}$ vectors $\bigl\{x^{(j,i)}_{t_{k+1}}\bigr\}$ as $\bigl\{x^{(j)}_{t_{k+1}}\bigr\}_{j=1}^{N_{t_{k+1}}}$. 6. At time $t$ produce the estimate $$\widehat f _t = \frac{1}{M}\sum_{j=1}^{N_t} f(x^{(j)}_t)\;.$$ The aim of this article is to argue that one expects the application of Algorithm \[tdmc\] to the discretised diffusion above to converge to a finite continuous-time limiting particle process as $\eps \to 0$. Section \[sec:heuristic\] provides a very detailed heuristic as to why we expect this to be the case and gives a precise mathematical definition for the expected limiting object. In the special case where $d = 1$, $F = 0$, $\Sigma = 1$, and $V(x) = x$, the process in question is built recursively by successive realisations of a Poisson point process in a space of excursions of $y_t$. A precise definition is given in Definition \[def:fan\] and we call this object the *Brownian fan*. It is of particular interest since, similarly to the fact that every diffusion process looks locally like a Brownian motion, one would expect the general limiting objects described in Section \[sec:heuristic\] to locally “look like” the Brownian fan. The Brownian fan is interesting in its own right from a mathematical perspective and does not seem to have been studied before, though it is very closely related to the “Virgin island model” introduced in [@VirginIsland]. 
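Algorithm \[tdmc\] translates almost line by line into code. The following Python sketch implements steps 1–5 for a single run, together with an Euler transition kernel of the kind discussed above (all function names are ours, and this is an illustration rather than the authors' reference implementation):

```python
import math
import random

def euler_step(x, eps, F, Sigma):
    """One step of the Euler discretisation with Gaussian increments (1d)."""
    return x + eps * F(x) + math.sqrt(eps) * Sigma(x) * random.gauss(0.0, 1.0)

def tdmc(x0, M, n_steps, step_sample, v):
    """One run of Ticketed DMC; returns particle positions at the final time."""
    # Step 1: M copies of x0, each with an independent uniform ticket.
    particles = [(x0, random.random()) for _ in range(M)]
    for _ in range(n_steps):
        new_particles = []
        for x, theta in particles:
            x_new = step_sample(x)                  # Step 2: one Markov step
            P = math.exp(-v(x, x_new))              # Step 3: the weight P^(j)
            if P < theta:
                continue                            # N^(j) = 0: the particle dies
            n_off = max(math.floor(P + random.random()), 1)
            # Step 4: the surviving copy inherits the rescaled ticket ...
            new_particles.append((x_new, theta / P))
            # ... extra copies (possible only when P > 1) draw fresh tickets.
            for _ in range(n_off - 1):
                new_particles.append((x_new, random.uniform(1.0 / P, 1.0)))
        particles = new_particles                   # Step 5
    return [x for x, _ in particles]
```

Step 6 then averages $f$ over the returned particles and divides by $M$. With $\v \equiv 0$ every weight equals $1$, each ticket survives, and exactly one offspring is produced per particle, so the population stays constant and plain Monte Carlo is recovered.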
Loosely speaking, the Brownian fan is a branching process obtained in the following way. Start with a (usually finite) number of initial particles on $\R$, each of which furthermore comes with a tag $v\in \R$ such that the position $x$ satisfies $x > v$. Each of these particles, which we call the ancestor particles, performs an independent Brownian motion until it hits the barrier $x=v$, where it is killed. Each of these particles independently produces offspring according to the following mechanism. Denoting by $\CE$ the space of excursions on $\R$, let $\MM$ be the Poisson point process on $\R \times \CE$ with intensity measure $a\,dt \otimes \Q$ for some $a>0$, where $\Q$ is Itô’s Brownian excursion measure [@ItoExc]. If $(t,w)$ is one of the points of $\MM$ and the corresponding ancestor particle is alive at time $t$ and located at $x_t$, then it gives rise to an offspring which performs the motion described by $w$, translated so that the origin of the excursion lies at $(t,x_t)$. This mechanism is then repeated recursively for each new particle created in this way. The Brownian fan has a number of very nice properties. For example, as established by Theorem \[theo:Nt\], the continuous-time analogue of the number of particles at time $t$, $N_t$, corresponding to the Brownian fan satisfies $$\label{expbndonN} \sup_{t>0} \mathbf{E} \exp \left(\lambda_t N_t\right)< \infty$$ for some continuous decreasing function $\lambda_t>0$. As established in Proposition \[prop:bfWmodcont\], the bound in \[expbndonN\] implies that the continuous-time analogue of the workload, $$\mathcal{W}_t = \int_0^t N_s ds$$ is very nearly a differentiable function of time: $$\sup_{t\leq T} \lim_{h\rightarrow 0} \frac{ \left| \mathcal{W}_{t+h} - \mathcal{W}_t\right|}{h \left| \log h\right|} < \infty.$$ This is close to optimal, since there turns out to be a dense set of exceptional times at which there are infinitely many particles alive (see the argument given at the end of Section \[sec:numPart\] below). 
To conclude this round-up of its mathematical properties, we establish in Proposition \[prop:feller\] that the Brownian fan is a Feller process in a suitable state space. When combined with the continuity of its sample paths established in Proposition \[prop:Kolmogorov\], this implies that the Brownian fan is a strong Markov process. Note that the construction of such a state space is a non-trivial endeavour due to the fact that, while the number $N_t$ of particles alive at time $t$ has finite moments of all orders, there exists a *dense* set of exceptional times for which $N_t = \infty$! Offspring are continuously being created at infinite rate, thus making the Brownian fan quite different from a standard branching diffusion. The “meat” of this article, Sections \[sec:tightness\] and \[sec:convFan\], is then devoted to a rigorous proof of the convergence of the output of Algorithm \[tdmc\] to the Brownian fan for any sufficiently light-tailed one-step distribution for the random variables $\xi_k$. This result is formulated in Theorem \[theo:finalConv\]. The usual method by which one attempts to characterise the continuous-time limit of a sequence of discrete-time processes involves studying the limit of the generators of the discrete-time processes. In our case, as described in Section \[sec:noConv\], the discrete-time generators do *not* converge to the correct limit, at least when applied to a class of very natural-looking test functions. Surprisingly (and rather confusingly), they do actually converge to the generator of a Brownian fan, but unfortunately with the wrong parameters! En route to our convergence result we establish a number of important and extremely encouraging results about the behaviour of the process generated by Algorithm \[tdmc\] for finite $\eps$. 
For example, in Proposition \[prop:numPart\] we obtain that $$\sup_{\eps < \eps_0} \mathbf{E} \left| N_{t}\right|^p < \infty\;,$$ for any $p\geq 0$ and any sufficiently small $\eps_0>0$, where $N_t$ now refers to the number of particles generated by Algorithm \[tdmc\] and not to the Brownian fan (for which we have the even stronger exponential bound stated above). In Corollary \[cor:Kolmogorov\] we also prove a uniform (in $\eps$) form of continuity of the processes. Notations --------- For any Polish space $\CY$, we will denote by $\MM_+(\CY)$ the space of finite positive measures on $\CY$, endowed with the topology of weak convergence, together with the convergence of total mass. Given a random variable $X$, we denote its law by $\CD(X)$, except in some cases where we introduce dedicated notations. We will often use the notation $a \lesssim b$ as a shorthand for the inequality $a \le C b$ for some constant $C$. The dependence of $C$ on other quantities will usually be clear from the context, and will be indicated when ambiguities may arise. We will also use the standard notations $a \wedge b = \min \{a,b\}$, $a \vee b = \max\{a,b\}$, and $\lfloor a\rfloor = \max\{ i\in \mathbb{Z}:\,i\leq a\}$. Acknowledgements {#acknowledgements .unnumbered} ---------------- We would like to thank H. Weber for numerous discussions on this article and S. Vollmer for pointing out a number of inaccuracies found in earlier drafts. We would also like to thank J. Goodman and E. Vanden-Eijnden for helpful conversations and advice during the early stages of this work. Financial support for MH was kindly provided by EPSRC grant EP/D071593/1, by the Royal Society through a Wolfson Research Merit Award and by the Leverhulme Trust through a Philip Leverhulme Prize. JW was supported by NSF through award DMS-1109731.
The continuous-time limit and the Brownian fan {#sec:contLimit} ============================================== From now on, we restrict ourselves to the analysis of the case $\v(x,y) = V(y) - V(x)$ for some “potential” $V$ defined on the state space of the underlying Markov process. We argue that if the underlying process is obtained by approximating a diffusion process then, unlike the naïve generalisation of DMC (see [@DMC Algorithm DMC]), our modification, Algorithm \[tdmc\], converges to a non-trivial limiting process as the stepsize $\eps$ converges to $0$. We first provide a heuristic argument showing what kind of limiting process one would expect to obtain. The remainder of this article is then devoted to rigorously constructing the limiting process and proving convergence in the simple case in which the underlying Markov chain is a random walk (rescaled so that it converges to a standard Brownian motion) and the biasing potential $V$ is linear. In this case the limiting process is a very natural object that does not seem to have been studied in the literature so far. We call this object, which is closely related to the construction in [@VirginIsland], the *Brownian fan* (see Section \[sec:hairy\]). It also has a flavour very similar to the construction of the Brownian web [@BWeb] and the Brownian net [@MR2408586], although there does not seem to be an obvious transformation linking these objects. Heuristic derivation of the continuous-time limit {#sec:heuristic} ------------------------------------------------- Throughout this section, the underlying Markov chain will be given, just as in the introduction, by the following approximation to a diffusion: $$y_{(k+1)\eps} = y_{k\eps} + \eps\, F(y_{k\eps}) + \sqrt{\eps}\,\Sigma(y_{k\eps})\, \xi_{k+1}\;,\qquad y_{k\eps} \in \R^n\;,$$ where the $\xi_k$ are a sequence of i.i.d. (not necessarily Gaussian) centred random variables with law $\nu$ and the identity on $\R^n$ as their covariance matrix.
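The discretisation above is a standard Euler–Maruyama step with the noise scaled by $\sqrt{\eps}$; a minimal sketch, with Gaussian increments standing in for a general law $\nu$ and an illustrative Ornstein–Uhlenbeck-type drift:

```python
import numpy as np

def euler_chain(y0, F, Sigma, eps, n_steps, rng):
    """Simulate y_{(k+1)eps} = y_{k eps} + eps*F(y) + sqrt(eps)*Sigma(y) @ xi
    with i.i.d. standard-normal increments xi (identity covariance)."""
    y = np.asarray(y0, dtype=float)
    path = [y.copy()]
    for _ in range(n_steps):
        xi = rng.standard_normal(y.shape)
        y = y + eps * F(y) + np.sqrt(eps) * (Sigma(y) @ xi)
        path.append(y.copy())
    return np.array(path)

# example in R^2: F(y) = -y, Sigma = identity
rng = np.random.default_rng(0)
path = euler_chain([1.0, 0.0], lambda y: -y, lambda y: np.eye(2), 1e-3, 1000, rng)
```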
The functions $F$ and $\Sigma$ are sufficiently “nice”, but since this section is only heuristic, we do not state specific regularity, growth or non-degeneracy assumptions. We have slightly changed our notations by writing $y_{k\eps}$ instead of $y_k$ for the position of the Markov chain after $k$ steps. This is in order to make explicit the fact that, as $\eps \to 0$, one has convergence to a continuous-time process. The corresponding notational changes in Algorithm \[tdmc\] are straightforward. Concerning the function $\v$, we take $\v(x,y) = V(y) - V(x)$ for some regular potential $V \colon \R^n \to \R$. Recall now that, as long as a particle is alive, its ticket $\theta$ evolves under Algorithm \[tdmc\] as $$\theta^{(j)}_{(k+1)\eps} = \theta^{(j)}_{k\eps}\, \exp\bigl(V(x^{(j)}_{(k+1)\eps})-V(x^{(j)}_{k\eps})\bigr)\;.$$ It is therefore natural to replace $\theta$ by the quantity $v$ given by $$\exp\bigl(-v^{(j)}_{t}\bigr) = \theta^{(j)}_{t}\,\exp\bigl(-V(x^{(j)}_{t}) \bigr)\;.$$ In this way, the new “tag” $v$ does not change over time, but is assigned to a particle at the moment of its birth. Translating Steps 3 and 4 of the algorithm into this slightly different setting, we see that if a particle performs a step from $x$ to $y$ such that $V(y) < V(x)$, then it can potentially spawn one or more descendants. The tags $v$ of the descendants are then distributed in such a way that $$e^{-v} \sim \mathcal{U}\bigl(e^{-V(x)},\, e^{-V(y)}\bigr)\;,$$ and a particle with tag $v$ lives as long as it stays within the region $\{x\,:\, V(x)\le v\}$. ### Description of the limit {#sec:limit} For very small values of $\eps$, the process described above has the following features. Taking the limit $\eps \to 0$ in the evolution of the underlying chain, we observe that each particle follows a diffusion process, solving the equation $$dy_t = F(y_t)\,dt + \Sigma(y_t)\,dB_t\;,$$ where $B_t$ is a standard $d$-dimensional Brownian motion. If the particle has tag $v$, then this process is killed as soon as it exits the sublevel set $\{x\,:\, V(x) \le v\}$.
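The claim that the tag $v$ is constant in time can be checked directly: under the ticket update $\theta \mapsto \theta\, e^{V(x_{\mathrm{new}})-V(x)}$, the quantity $\theta_t\, e^{-V(x_t)}$ never changes along the trajectory. A small numerical sanity check (the slope of $V$ and the step size are illustrative; the demo does no killing, so $\theta$ is free to leave $(0,1)$ here, the point being only the invariance of $v$):

```python
import math
import random

# The tag v is defined through exp(-v) = theta_t * exp(-V(x_t)).  Under the
# ticket evolution theta <- theta * exp(V(x_new) - V(x)) it stays constant.
V = lambda x: -2.0 * x            # linear potential, illustrative slope
rng = random.Random(1)

x, theta = 0.3, 0.7
v_initial = -math.log(theta * math.exp(-V(x)))
for _ in range(100):
    x_new = x + 0.05 * rng.gauss(0.0, 1.0)
    theta *= math.exp(V(x_new) - V(x))    # ticket evolution along the path
    x = x_new
v_final = -math.log(theta * math.exp(-V(x)))
```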
Consider the following representation of the object produced by Algorithm \[tdmc\]. Denote by $\Q^\eps_{x,v}$ the law of the $\eps$-discretization of the limiting diffusion, generated by starting at $x$ and killed upon exiting the set $\{y\,:\, V(y) \le v\}$. Let $\tau$ and $\{w_{k\eps}\}_{k=0}^{\tau/\eps}$ be, respectively, the lifetime and trajectory of the original particle. The trajectories of the offspring of this initial trajectory are very nearly given (this becomes exact in the small-$\eps$ limit) by a realisation $\mu^{\eps,1}$ of a Poisson point process with intensity $$\CQ^\eps(w,\cdot) = \sum_{k=0}^{\tau/\eps-1} A^\eps(k) \int \Theta_{k\eps}^\star\, \Q^\eps_{w_{k\eps},\,V(w_{k\eps})+\delta}\;\eta(k,d\delta)\;,$$ where, according to the rule for generating new offspring in Algorithm \[tdmc\], $$A^\eps (k) = \begin{cases} \frac{1}{\sqrt{\eps}}\bigl(e^{-(V(w_{(k+1)\eps})-V(w_{k\eps}))}-1\bigr) & \text{if } V(w_{(k+1)\eps})<V(w_{k\eps})\\ 0 & \text{if }V(w_{(k+1)\eps})\geq V(w_{k\eps}) \end{cases}$$ and, according to the rule for generating offspring tickets in Algorithm \[tdmc\], $$\int f(\delta)\eta(k,d\delta) =\int_0^1 f\bigl( -\log\bigl( 1 + u(e^{-(V(w_{(k+1)\eps})-V(w_{k\eps}))}-1)\bigr)\bigr)\,du.$$ Here $\Theta_t$ is the map that shifts trajectories forward by time $t$. Since each offspring behaves independently, just like the original particle, this suggests that the $n$th generation $\mu^{\eps,n}$ of offspring is obtained recursively as a realisation of the Poisson point process with intensity given by $$\CQ^{\eps,n}(\cdot) = \int \CQ^\eps(w,\cdot)\,\mu^{\eps,n-1}(dw)\;.$$ At each “microscopic” step, the probability of creating a descendant is of order $\sqrt \eps$ so that, in the limit $\eps \to 0$, each particle spawns descendants at infinite rate. However, any such descendant is created at distance $\CO(\sqrt \eps)$ from the “barrier” $V(x) = v$. As a consequence, the probability that it survives for a time of order $1$ before being killed is itself only of order $\sqrt \eps$.
Therefore, the rate at which a particle creates descendants that actually survive for a time $\tau$ of order $1$ is finite, but tends to infinity as $\tau \to 0$. We now consider the small-$\eps$ limit of the object we have constructed. The trajectory $w_t$ becomes a sample path of the limiting diffusion, exiting the set $\{y\,:\, V(y) \le v\}$ at time $\tau$. Denote by $\Q_{x,v}$ the law of the diffusion starting at $x$ and killed upon exiting the set $\{y\,:\, V(y) \le v\}$, which is a probability measure on some space of excursions in $\R^n$. The characterisation of the standard Itô excursion measure (see for example [@RevYor Theorem 4.1] and [@PitmanYor]) then suggests that, for every $x \in \R^n$ such that $\nabla V(x) \neq 0$ and $\Sigma(x)$ is non-degenerate, the limit $$\Q_x = \lim_{\delta \to 0^+} {1\over \delta}\, \Q_{x,V(x)+\delta}$$ exists as a $\sigma$-finite measure, in the sense that ${1\over \delta} \Q_{x,V(x)+\delta}$ restricted to the set of excursions longer than a fixed length converges weakly to $\Q_x$ restricted to the same set. The discussion so far suggests that, for the limiting object, the trajectories of the first generation of offspring are given by a realisation $\mu^1$ of the Poisson point process with intensity measure $$\CQ(w,\cdot) = \int_0^\tau A(w_t)\, \Theta_t^\star\, \Q_{w_t}\,dt\;,$$ for some intensity $A\colon \R^n \to \R_+$ yet to be determined, and that the $n$th generation $\mu^n$ of offspring is obtained recursively as a realisation of a Poisson point process with intensity given by $$\CQ^n(\cdot) = \int \CQ(w,\cdot)\,\mu^{n-1}(dw)\;,$$ with $\CQ$ as above. In order to fully characterise the limiting object, it remains to provide an expression for the intensity function $A$. Let us start by replacing $\Q^\eps_{x,v}$ by $\Q_{x,v}$ in the expression for $\CQ^\eps$, i.e. by assuming that for small $\eps$, excursions of the discrete process are very similar to excursions of its continuous-time limit.
We then apply the relations $$\label{Papprox1} \Q_{x,V(x)+\delta} \approx \delta \Q_x$$ and $$V(w_{(k+1)\eps}) - V(w_{k\eps}) \approx \sqrt{\eps}\scal{\nabla V(w_{k\eps}),\Sigma(w_{k\eps}) \xi_{k+1}}\;,$$ with the $\xi_{k+1}$ as in the definition of the underlying chain. We then formally obtain $$\begin{gathered} \CQ^\eps(w,\cdot) \approx \eps \sum_{k=0}^{\tau/\eps-1} \mathbf{1}_{\scal{\nabla V(x),\Sigma(x) \xi_{k+1}} < 0}\scal{\nabla V(w_{k\eps}),\Sigma(w_{k\eps}) \xi_{k+1}}\\ \times \int_0^1 u \scal{\nabla V(w_{k\eps}),\Sigma(w_{k\eps}) \xi_{k+1}} du\, \Theta_{k\eps}^\star \Q_{w_{k\eps}}.\end{gathered}$$ Our arguments so far therefore suggest that $$A(x) = {1\over 2}\int_{\R^n} \mathbf{1}_{\scal{\nabla V(x),\Sigma(x) z} < 0}\, \scal{\nabla V(x),\Sigma(x) z}^2 \,\nu(dz)\;,$$ where the distribution $\nu$ has mean $0$ and identity covariance matrix. Assuming that $\nu$ is symmetric, this becomes $$A(x) = c\,\bigl\|\Sigma^T(x)\nabla V(x)\bigr\|^2\;,$$ where $c= {1\over 4}$. If $\nu$ is not symmetric, one might even expect a prefactor $c$ that depends on $x$. In fact, as we will see in a specific case in the remainder of this section, the correct value is $c = {1\over 2}$, whether the law of $\xi$ is symmetric or not. The reason for this discrepancy is that the relation $$\Q^\eps_{x,V(x)+\delta} \approx \delta \Q_x$$ used in our derivation is only valid if $\delta \gg \sqrt \eps$. In our case however, one precisely has $\delta \sim \sqrt \eps$, which introduces a correction factor that eventually gives rise to the value $c = {1\over 2}$. The aim of the next subsection is to show in more detail how this factor ${1\over 2}$ arises in the simplest situation where $F=0$ and $\Sigma = 1$. ### The case of Brownian motion {#sec:BMformal} We now consider the one-dimensional case, where the limiting underlying process is simple Brownian motion. Regarding the underlying discrete problem, we consider the Markov chain defined recursively by $$y_{(k+1)\eps} = y_{k\eps} + \sqrt{\eps}\, \xi_{k+1}\;,$$ for an i.i.d. sequence of centred random variables $\xi_k$ with law $\nu$ and variance $1$. For the potential function $V$, we choose $V(x) = -ax$ for some $a > 0$.
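The symmetric-$\nu$ value $c = {1\over 4}$ derived above is easy to verify numerically: writing $g = \Sigma^T(x)\nabla V(x)$, the formula for $A$ reduces to ${1\over 2}\,\E\bigl[\scal{g,z}^2\,;\,\scal{g,z}<0\bigr]$, which equals ${1\over 4}\|g\|^2$ for a standard Gaussian $z$. A Monte Carlo sketch (the vector `g` and sample size are illustrative):

```python
import numpy as np

# For symmetric nu (standard Gaussian here) the restricted second moment
# (1/2) E[ <g,z>^2 ; <g,z> < 0 ] equals (1/4) ||g||^2, i.e. c = 1/4.
rng = np.random.default_rng(2)
g = np.array([1.0, -2.0, 0.5])          # stands in for Sigma^T grad V
s = rng.standard_normal((1_000_000, 3)) @ g
lhs = 0.5 * np.mean(np.where(s < 0.0, s**2, 0.0))
rhs = 0.25 * g @ g
```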
In order to show that the constant $c$ appearing in the formula for $A$ is equal to ${1\over 2}$, we will now argue as follows. Denote by $\Q$ the standard Itô excursion measure (which we normalise in such a way that $\Q = \lim_{\eps \to 0} {1\over \eps} \Q_\eps$, where $\Q_\eps$ is the law of a standard Brownian motion starting at $\eps$ and killed when it hits the origin) and by $\Q^\eps_z$ the law of the random walk starting at $\sqrt \eps z$ and stopped as soon as it takes negative values. Then there exists a function $G$ such that $$\Q^\eps_z \approx \sqrt{\eps}\, G(z)\, \Q\;,$$ as $\eps\to 0$, when both sides are restricted to excursions that survive for at least some fixed amount of time. We will see that the function $G$ behaves like $G(z) \approx z$ for large values of $z$, but has a non-trivial behaviour for values of order $1$. In terms of our notation from the previous subsection (since $y_t$ is spatially homogeneous and $V$ is linear), this implies that the approximation $\Q_{x,V(x)+\delta} \approx \delta\,\Q_x$ should really have been replaced by $$\Q^\eps_{x,V(x)+\delta} \approx \sqrt{\eps}\, a\, G\Bigl({\delta \over a\sqrt{\eps}}\Bigr)\, \Q_x\;.$$ Since we assumed $a > 0$, our process creates offspring only when it performs a step towards the right, i.e. when $\xi_{k+1} > 0$. The probability that a new particle is created in the $k$-th step is approximately $a\sqrt{\eps}\,\xi_{k+1}$. Furthermore, the small-$\eps$ rule for the generation of tags implied by Algorithm \[tdmc\] is $$\delta \sim \mathcal{U}\bigl(0,\, a\sqrt{\eps}\,\xi_{k+1}\bigr)\;.$$ As a consequence, once we have identified the function $G$ in the approximation above, our arguments in the previous section lead to the formula $$A(x) = \int_0^\infty (a z) \biggl( \frac{1}{z} \int_0^z a G(y)\,dy\biggr)\,\nu(dz)\;,$$ where $\nu$ is the law of the steps $\xi_{k}$. If we can show that $$\int_0^\infty \int_0^z G(y)\,dy\,\nu(dz) = {1\over 2}\;,$$ then we will have $$A(x) = \frac{a^2}{2}\;,$$ a formula consistent with the value $c=\frac{1}{2}$ claimed above.
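The quantity $\gamma\,\bar P_{z,\gamma}$ introduced in the next paragraph is easy to probe by simulation: for a walk with unit-variance Gaussian steps it stabilises, for moderate $z$, at a value close to $z$ (the Brownian answer), with a correction coming from the overshoot at the barriers, which is exactly what $G$ encodes. A rough Monte Carlo sketch (sample sizes and tolerances are illustrative):

```python
import random

def gamma_P_bar(z, gamma, n_walks, rng):
    """Monte Carlo estimate of gamma * P_bar(z, gamma): a random walk with
    unit-variance Gaussian steps started at z, and the probability that it
    reaches [gamma, inf) before becoming negative."""
    hits = 0
    for _ in range(n_walks):
        y = z
        while 0.0 <= y < gamma:
            y += rng.gauss(0.0, 1.0)
        hits += (y >= gamma)
    return gamma * hits / n_walks

rng = random.Random(3)
est = gamma_P_bar(5.0, 40.0, 5000, rng)
# for Brownian motion instead of the walk, the answer would be exactly z = 5
```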
In order to identify $G$, we note that if a random walk starting from $\sqrt \eps \delta$ survives for some time of order $1$ before becoming negative then, with overwhelming probability, it will have reached a height of at least $\eps^{1/4}$ (say). Furthermore, if we condition the random walk $\Q^\eps_z$ to reach a level $\sqrt \eps \gamma$ with $1 \ll \gamma \ll \eps^{-1/2}$, one would expect its law to be well approximated by $\sqrt \eps \gamma \Q$ when restricted to excursions that survive for a time of order $1$. As a consequence, we expect that $$\Q^\eps_z \approx \bar P_{z,\gamma}\, \sqrt{\eps}\,\gamma\, \Q\;,\qquad \gamma \gg 1\;,$$ where $\bar P_{z,\gamma}$ denotes the probability that the simple random walk with $\eps = 1$ started at $z$ reaches the level $\gamma$ before becoming negative. The remainder of this section is devoted to the proof of the fact that if we define $\bar P_{z,\gamma}$ in this way then, under some integrability assumptions on the one-step distribution $\nu$, the limit $$G(z) = \lim_{\gamma \to \infty} \gamma\, \bar P_{z,\gamma}$$ exists and does indeed satisfy the integral identity $\int_0^\infty \int_0^z G(y)\,dy\,\nu(dz) = {1\over 2}$, independently of the choice of $\nu$. Actually, we will prove these statements for the quantity $P_{z,\gamma} = \bar P_{z, \gamma+z}$, which we interpret as the probability that the random walk starting at the origin reaches $[\gamma,\infty)$ before reaching $(-\infty,-z]$. Our first result is as follows: \[prop:defG\] Assume that the law $\nu$ satisfies $\nu(\{|x| \ge K\}) \le C \exp(- c K^\beta)$ for all $K \ge 0$ and some strictly positive constants $c$, $C$ and $\beta$. Then, the limit $$G(s) = \lim_{\gamma\to\infty} \gamma\, P_{s,\gamma}$$ exists and satisfies the relations $$G(s) = \int_{-s}^\infty G(s+z) \,\nu(dz)\;,\quad s \ge 0\;,\qquad \lim_{s \to \infty} {G(s) \over s} = 1\;.$$ Furthermore, for every $\delta > 0$ there exists $C$ such that the bound $$\bigl|(\gamma+s)P_{s,\gamma} - G(s)\bigr| \le C\, {1+s \over \gamma^{{1\over 2}-\delta}}$$ holds uniformly for all $s \ge 0$ and $\gamma \ge 1\vee s$. Denote by $y_k$ the $k$th step of the random walk starting at the origin.
Our main tool is the quantitative convergence result of [@ConvRateBM], which states that the supremum distance between a Wiener process and the diffusively rescaled random walk over $n$ steps is of order $n^{-1/4}$. As a consequence, we claim first that for every $\delta > 0$ there exists a constant $C$ such that, for every $a \in [{1\over 3},3]$, we have the bound $$\Bigl|P_{a\gamma,\gamma} - {a\over 1+a}\Bigr| \le C\,\gamma^{-{1\over 2}+\delta}\;,$$ valid for every $\gamma \ge 1$. Indeed, for any $n \ge 1$, it follows from the previously quoted convergence result that there exists a Brownian motion $B$ such that $|B_t - y_{\lfloor t \rfloor}| \le n^{1/4+\delta}$ for all $t \in [0,n]$ with probability greater than $1 - C/n^{q}$. Here, $\delta > 0$ and $q \ge 1$ are arbitrary, but the constant $C$ of course depends on them. Take $n$ such that $n^{1/4+\delta} \le \gamma$. If $y$ hits $[\gamma,\infty)$ before $(-\infty,-a\gamma]$, then either $\sup_{t \le n} |B_t - y_{\lfloor t \rfloor}| > n^{1/4+\delta}$, or $\sup_{t \le n} |B_t| \le 3\gamma$, or $B$ hits $[\gamma-n^{1/4+\delta},\infty)$ before it hits $(-\infty,-a\gamma - n^{1/4+\delta}]$. As a consequence, $$P_{a\gamma,\gamma} \le {a\gamma + n^{1/4+\delta} \over (1+a)\gamma} + {C\over n^{q}} + C\exp\bigl(- c\, n/\gamma^2\bigr) \;.$$ Reversing the roles of $\gamma$ and $a\gamma$, we thus obtain the bound $$\Bigl|P_{a\gamma,\gamma} - {a\over 1+a}\Bigr| \lesssim {n^{1/4+\delta}\over \gamma} + {1\over n^{q}} + \exp\bigl(- c\, n/\gamma^2\bigr) \;.$$ Choosing $\delta$ small enough and $n = \gamma^{2+\delta}$, the claim then follows. In order to obtain the convergence of $\gamma P_{s,\gamma}$ as $\gamma \to \infty$, we make use of the fact that, for $\bar \gamma > \gamma$, one has the identity $$P_{s,\bar\gamma} = P_{s,\gamma} \int_0^\infty P_{s+\gamma+ z,\,\bar\gamma- \gamma - z }\,\nu_\gamma(dz)\;,$$ where $\nu_\gamma$ is the law of the “overshoot” $y_n - \gamma$ at the first time $n$ such that $y_n \ge \gamma$, conditioned on never reaching below the level $-s$.
Since $P_{ s+\gamma + z,\bar \gamma - \gamma - z}$ is an increasing function of $z$, we immediately obtain the lower bound $$P_{s,\bar\gamma} \ge P_{s,\gamma}\, P_{s+\gamma,\,\bar\gamma-\gamma}\;.$$ If we choose $\gamma = a \bar \gamma$ for $a \in [{1\over 4},{1\over 2}]$, it then follows from the claim above that $$P_{s,\bar\gamma} \ge P_{s,\gamma} \Bigl({\gamma+ s \over \bar\gamma+ s} - {C\over \gamma^{{1\over 2}-\delta}}\Bigr)\;,$$ for all $\gamma$ sufficiently large and uniformly over all $s \in [0,\gamma]$. Setting $Q_{s,\gamma} = (\gamma + s)P_{s,\gamma}$, it thus follows that one has the bound $$Q_{s,\bar\gamma} \ge Q_{s,\gamma} \Bigl(1 - {C\over \gamma^{{1\over 2}-\delta}}\Bigr)\;,$$ possibly for a different constant $C$. Let $\gamma_0 \ge 1$ be such that the factor on the right of this equation is greater than $1/2$ for $\gamma \ge \gamma_0$. By the claim above, there then exists $s_0\ge \gamma_0$ such that for $s \ge s_0$ and $\gamma \in [s,2s]$ one has $Q_{s,\gamma} \ge C(1+s)$. Furthermore, for $s \le s_0$ and $\gamma \in [(1\vee s), 2(\gamma_0\vee s)]$, there exists a non-zero constant such that $Q_{s,\gamma} \ge C$. Iterating the lower bound, we then conclude that there exists a constant $C>0$ such that the bound $$Q_{s,\gamma} \ge C(1+s)$$ holds uniformly over all $s > 0$ and all $\gamma \ge 1 \vee s$. On the other hand, for arbitrary $\alpha>0$, it follows from the overshoot identity that one has the upper bound $$P_{s,\bar\gamma} \le \Bigl(\nu_\gamma(\{x > \gamma^\alpha\}) + P_{s+\gamma+ \gamma^\alpha,\,\bar\gamma- \gamma - \gamma^\alpha}\Bigr)\,P_{s,\gamma} \;.$$ In order to bound $\nu_\gamma(\{x > \gamma^\alpha\})$, we note that this event can happen only if either one of the first $\gamma^3$ increments exceeds $\gamma^\alpha$, or the random walk never exceeds the value $\gamma$ within these $\gamma^3$ steps. Similarly to before, it then follows that $$\nu_\gamma(\{x > \gamma^\alpha\}) \lesssim P_{s,\gamma}^{-1}\Bigl(\gamma^3 \exp(-c \gamma^{\alpha\beta}) + \gamma^{-q} + \exp(-c \gamma) \Bigr)\;,$$ for every $q > 0$ and uniformly over $s \le \gamma$. It follows from the lower bound on $P_{s,\gamma}$ obtained previously that $\nu_\gamma(\{x > \gamma^\alpha\}) \lesssim \gamma^{-q}$ for any power $q > 0$, so that we obtain the upper bound $$P_{s,\bar\gamma} \le P_{s,\gamma} \Bigl({\gamma+s \over \bar\gamma+s} + {C\over \gamma^{{1\over 2}-\delta}}\Bigr)\;,$$ with the same domain of validity as before.
Using a very similar argument as before, we obtain a constant $\bar C$ such that $\gamma P_{s,\gamma} \le \bar C (1+s)$ uniformly over $s > 0$ and $\gamma \ge 1\vee s$. Combining the bounds we just obtained, we get $$\bigl|\bar\gamma\, P_{s,\bar\gamma} - \gamma\, P_{s,\gamma}\bigr| \lesssim {1+s \over \gamma^{{1\over 2}-\delta}}\;,$$ uniformly over $\bar \gamma > \gamma > s$, from which it follows immediately that the sequence $\{\gamma P_{s,\gamma}\}_{\gamma \ge 1}$ is Cauchy, so that it has a limit $G(s)$. It remains to show that $G$ has the desired properties. The first one follows immediately from the identity $$P_{s,\gamma} = \int_\R P_{ s+z,\,\gamma-z}\,\nu(dz)\;,$$ which holds provided that we define the integrand to be $1$ for $z > \gamma$ and $0$ for $z < -s$. In order to show that $G(s) / s \to 1$, we fix some (large) value $s$ and choose $\gamma_n = 2^n s$. It then follows from the claim at the beginning of the proof that $$|Q_0 - s| \lesssim s^{{1\over 2}+\delta}\;,$$ where we used the notation $Q_n = (\gamma_n + s)P_{s,\gamma_n}$ as a shorthand. Furthermore, it follows immediately from the bounds above that there exists a constant $C$ independent of $s$ such that $|Q_n| \le C s$ uniformly in $n$. As a consequence, we obtain the recursive bound $$|Q_n-Q_{n-1}| \lesssim {s \over \gamma_{n-1}^{{1\over 2}-\delta}} \lesssim {s^{{1\over 2}+\delta} \over 2^{n({1\over 2}-\delta)}}\;.$$ Summing over $n$ yields $|Q_n - s| \lesssim s^{{1\over 2}+\delta}$, uniformly in $n$, so that the claim follows. The quantitative error bound follows in the same way. \[cor:boundRW\] In the same setting as above, one has the bound $$\Bigl|P_{s,\gamma} - {s \over \gamma+s}\Bigr| \lesssim {1+s^{{1\over 2}+\delta} \over \gamma}\;,$$ uniformly for all $s \ge 0$ and $\gamma \ge 1\vee s$. Combine the convergence bound of Proposition \[prop:defG\] with the bounds on $G(s) - s$ obtained at the end of the proof above. Somewhat surprising is the fact that the function $G$ obtained in Proposition \[prop:defG\] does indeed satisfy the required integral identity, independently of the choice of transition probability $\nu$, provided that we assume that $\nu$ has some exponential moment. \[prop:expG\] Let $G$ be as in Proposition \[prop:defG\] and assume that the law $\nu$ satisfies $$\int_\R e^{c|z|}\,\nu(dz) < \infty\;,$$ for some $c > 0$. Then, one has the identity $\int_0^\infty \nu([s,\infty))\,G(s)\,ds = {1\over 2}$.
Note that, by Fubini’s theorem, $$\int_0^\infty \nu([s,\infty))\,G(s)\,ds = \int_0^\infty \int_0^s G(y)\,dy\,\nu(ds)\;,$$ so that we do obtain the identity required in Section \[sec:BMformal\]. Integrating the relation $G(s) = \int_{-s}^\infty G(s+z)\,\nu(dz)$ from $0$ to an arbitrary value $K > 0$ and applying Fubini’s theorem, we obtain the identity $$\int_0^K G(s)\,ds = \int_0^\infty G(z)\, \nu([z-K,z])\,dz\;.$$ In this proof we denote by $\CI = \int_0^\infty G(z)\,\nu([z,\infty))\,dz$ the quantity of interest. Simple algebraic manipulations then yield $$\begin{aligned} \CI &= \int_0^\infty G(z)\,\nu([z-K,\infty))\,dz - \int_0^\infty G(z)\,\nu([z-K,z])\,dz\\ &= \int_0^\infty G(z)\,\nu([z-K,\infty))\,dz - \int_0^K G(z)\,dz\\ &= \int_0^\infty G(z)\, \bigl(\nu([z-K,\infty)) - \mathbf{1}_{z < K}\bigr)\,dz\;.\end{aligned}$$ Since this identity holds for every $K>0$, it follows in particular that one has $$\begin{aligned} \CI &= \int_0^\infty G(z) \int_0^\infty \bigl(\nu([z-K,\infty)) - \mathbf{1}_{z < K}\bigr)\, \eps\, e^{-\eps K}\,dK\,dz\\ &= \eps \int_0^\infty G(z) \Bigl(\int_0^\infty \nu([z-K,\infty))\, e^{-\eps K}\,dK - {e^{-\eps z}\over \eps}\Bigr)\,dz\;,\label{e:boundII}\end{aligned}$$ for every $\eps > 0$. At this stage, we note that one has the identity $$\int_0^\infty \nu([z-K,\infty))\, e^{-\eps K}\,dK = {e^{-\eps z}\, \E e^{\eps \xi} \over \eps} - e^{-\eps z} \int_{z}^\infty e^{\eps K}\, \nu([K,\infty))\,dK\;,$$ where $\xi$ denotes an arbitrary random variable with law $\nu$. Since $\nu$ has some exponential moment by assumption, $\nu([K,\infty))$ decays exponentially, so that the second term in this identity satisfies $$\Bigl|e^{-\eps z} \int_{z}^\infty e^{\eps K}\, \nu([K,\infty))\,dK\Bigr| \le C e^{-\gamma z}\;,$$ for some constants $\gamma, C>0$, provided that $\eps$ is small enough. Inserting this into the previous expression for $\CI$, it follows that $$\CI = \int_0^\infty G(z)\, e^{-\eps z}\, \bigl(\E e^{\eps \xi} - 1\bigr)\,dz + \CO(\eps)\;.$$ At this stage, we use again the fact that $\xi$ has exponential moments to deduce that $$\E e^{\eps \xi} - 1 = {\eps^2 \over 2} + \CO(\eps^3)\;,$$ where we used the fact that $\E \xi = 0$ and $\E \xi^2 = 1$, so that $$\CI = {\eps^2 \over 2} \int_0^\infty G(z)\, e^{-\eps z}\,dz + \CO(\eps)\;.$$ It then follows from the fact that $\lim_{s \to \infty} G(s)/s = 1$ and the dominated convergence theorem that $$\CI = \lim_{\eps \to 0} {\eps^2 \over 2} \int_0^\infty z\, e^{-\eps z}\,dz = {1\over 2}\;,$$ which is precisely the desired expression. It is clear that these results should hold under much weaker integrability conditions on $\nu$. However, since we need some exponential moments on $\nu$ at several places in the sequel, we did not try to improve on this.
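The two expansions used at the end of this proof can be checked numerically for a standard Gaussian $\xi$, where $\E e^{\eps\xi} = e^{\eps^2/2}$ exactly, together with the closing integral $\int_0^\infty z\,e^{-\eps z}\,dz = \eps^{-2}$. A small numeric sketch (the truncation and grid are illustrative but ample, since $e^{-\eps T} = e^{-50}$ is negligible):

```python
import math

# E[e^(eps*xi)] - 1 = eps^2/2 + O(eps^3) for standard Gaussian xi
for eps in (0.1, 0.01):
    lhs = math.exp(eps**2 / 2.0) - 1.0
    assert abs(lhs - eps**2 / 2.0) < eps**3

# (eps^2/2) * int_0^inf z e^{-eps z} dz = 1/2, checked on a truncated grid
eps = 0.01
T, n = 5000.0, 500_000
h = T / n
integral = h * sum(i * h * math.exp(-eps * i * h) for i in range(1, n))
half = (eps**2 / 2.0) * integral
```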
Some properties of the limiting process {#sec:hairy} --------------------------------------- In this section, we provide a rigorous definition of the limiting process loosely defined in Section \[sec:limit\], and we study some of its properties. In order to be able to use existing results on Brownian excursions, we restrict ourselves to the same situation as in Section \[sec:BMformal\], namely the case where the underlying diffusion is a Brownian motion and the potential $V(x) = -ax$ is linear. We call the resulting object the *Brownian fan*. ### Recursive Poisson point processes Before we give a formal definition of the Brownian fan, we define a “recursive Poisson point process”. Loosely speaking, this is a Crump-Mode-Jagers process [@CMJ] with Poisson distributed offspring, but where the number of offspring of any given individual is allowed to be almost surely infinite. Note again that our construction is very similar to the one given in [@VirginIsland]. Given a Polish space $\CX$ and a function $F \colon \CX \to \R_+$, we denote throughout this section by $\MM_+^F(\CX)$ the space of $\sigma$-finite measures $\mu$ on $\CX$ such that $$\mu\bigl(F^{-1}(0)\bigr) = 0\;,\qquad \mu\bigl(\{x: F(x) > \eps\}\bigr) < \infty\;,$$ for all $\eps > 0$. We endow this space with the topology of convergence in total variation on each set of the form $\{x\,:\, F(x) > 1/n\}$. Given a (measurable) map $\CQ$ from $\CX$ to $\MM_+^F(\CX)$, we can then build for every $x \in \CX$ a point process as follows. Define $\mu^0_x = \delta_x$ and, for $n \ge 1$, define $\mu^n_x$ recursively as a (conditionally independent of the $\mu^{\ell}_x$ with $\ell < n$) realisation of a Poisson point process with intensity measure $$\CQ_n = \int_\CX \CQ(y,\cdot)\,\mu^{n-1}_x(dy)\;,$$ where we view $\mu^n_x$ as a random $\sigma$-finite positive integer-valued measure on $\CX$. (In principle, it may happen that $\CQ_n\bigl(\{x\,:\, F(x) > \eps\}\bigr) = \infty$ for some $\eps>0$. In this case, our construction stops there.)
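A toy instance of this recursive construction can be simulated directly. Here we take $\CX = [0,\infty)$, $F(x) = e^{-x}$, and a kernel under which each point $x$ spawns a Poisson(1) number of offspring at $x + \mathrm{Exp}(1)$, so that $\int F(y)\,\CQ(x,dy) = \E\,e^{-(x+E)} = {1\over 2}F(x)$ and the contraction criterion stated just below (Lemma \[lem:intPPP\]) holds with $c = 1/2$. All distributional choices here are illustrative:

```python
import math
import random

def poisson1(rng):
    # Poisson(1) sample via inversion
    L, k, p = math.exp(-1.0), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def next_generation(points, rng):
    # each point x spawns Poisson(1) offspring located at x + Exp(1)
    return [x + rng.expovariate(1.0)
            for x in points for _ in range(poisson1(rng))]

# Monte Carlo check that E int F d(mu^l) = c^l F(x0) with F(x)=e^{-x}, c=1/2
rng = random.Random(4)
trials, gens = 5000, 3
total = 0.0
for _ in range(trials):
    pts = [0.0]
    for _ in range(gens):
        pts = next_generation(pts, rng)
    total += sum(math.exp(-x) for x in pts)
mean_F = total / trials            # should be close to (1/2)**3 = 0.125
```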
We then set $$\mu^{[n]}_x = \sum_{\ell=0}^n \mu^{\ell}_x\;,$$ and we call $\mu^{[n]}_x$ the *recursive Poisson point process* of depth $n$ with kernel $\CQ$. We will occasionally need to refer to the Brownian fan spawned by an initial Brownian motion $w$. For this purpose we will use the symbol $\mu^{[n]}_w$ (or $\mu^{n}_w$ for a specific generation) and rely on the context to differentiate $\mu^{[n]}_x$ and $\mu^{[n]}_w$. If in these symbols we omit the subscript entirely, then it is assumed that $x=0$. In general, there is no reason to expect the sequence $\mu^{[n]}_x$ to converge to a finite limit. However, one has the following simple criterion ensuring that this is the case: \[lem:intPPP\] Let $F$ and $\CQ$ be as above and assume that there exists $c < 1$ such that $\int F(y)\, \CQ(x,dy) \le c F(x)$ for every $x \in \CX$. Then, for every $x\in \CX$, there exists a random $\sigma$-finite measure $\mu^{[\infty]}_x$ on $\CX$ such that $\lim_{n \to \infty} \E \int F(y) \bigl(\mu^{[\infty]}_x - \mu^{[n]}_x\bigr)(dy) = 0$. Fix $x \in \CX$ and denote by $\FF_n$ the $\sigma$-algebra generated by $\mu^{[n]}_x$. It then follows from the definition of the $\mu^{[n]}_x$ that $$\E\Bigl(\int_\CX F(y)\,\mu^{\ell+1}_x(dy)\,\Big|\, \FF_\ell\Bigr) = \int_\CX \int_\CX F(y)\, \CQ(z,dy)\,\mu^{\ell}_x(dz) \le c \int_\CX F(z)\,\mu^{\ell}_x(dz)\;.$$ As a consequence, one has $\E \int_\CX F(y)\,\mu^{\ell}_x(dy) \le c^\ell F(x)$, and the claim follows. A useful identity is the following. Denote by $\{\tilde \mu^{[\infty]}_y\}_{y \in \CX}$ a collection of *independent* copies of recursive Poisson point processes with “initial conditions” $y$. Then one has the identity in law $$\mu^{[\infty]}_x \stackrel{\text{law}}{=} \delta_x + \int_\CX \tilde\mu^{[\infty]}_y \,\mu^{1}_x(dy)\;,$$ where, as before, $\mu^{1}_x$ is a realisation of a Poisson point process with intensity $\CQ(x,\cdot)$, which is itself independent of the $\tilde \mu^{[\infty]}_y$. This identity makes sense since the integral on the right is really just a countable sum. ### Construction of the Brownian fan {#sec:BFan} We now denote by $\CE$ the set of excursions with values in $\R$.
We consider elements of $\CE$ as triples $(s,t,y)$, where $s < t \in \R \cup \{+\infty\}$, and $y \in \CC(\R,\R)$ has the property that $y_{\tau} = y_t$ for $\tau \ge t$ and $y_{\tau} = y_{s}$ for $\tau \le s$. We also write $\CE_0$ for the subset of those triples $(s,t,y)$ such that $s = 0$. Denoting a generic excursion by $w$, we write $\s(w)$ for its starting time and $\e(w)$ for its end time, i.e. $\s(s,t,y) = s$ and $\e(s,t,y) = t$. We also denote by $\l(w)$ the lifetime of the excursion, which is the interval $\l(w) = [\s(w), \e(w)]$. In order to keep notations compact, we will also identify an excursion with its path component, making the abuse of notation $w_t = y_t$. There is a natural metric on $\CE$ given by $$d(w,w') = d_\l(w,w') + \sum_{k \ge 1} 2^{-k} \Bigl(1 \wedge \sup_{|t| \le 2^k} |w_t - w'_t|\Bigr)\;,$$ where the distance $d_\l$ between the supports is given by $$d_\l(w,w') = 1\wedge\bigl(|\s(w) - \s(w')| + |\e(w) - \e(w')|\bigr)\;.$$ The reason for this particular choice of metric is that it ensures that $\CE$ is a Polish space, while still allowing for infinite excursions. For $\tau \in \R$ and $v \in \CE$, we denote by $\Theta_{v,\tau}\colon \CE \to \CE$ the shift map given by $$\Theta_{v,\tau} (s,t,w) = \bigl(s + \tau,\; t + \tau,\; w_{\cdot - \tau} + v_\tau\bigr)\;,$$ which essentially changes the coordinate system so that the origin $(0,0)$ is mapped to $(\tau, v_{\tau})$. Denoting as before by $\Q$ the standard Itô excursion measure, we now give the following definition: \[def:fan\] The *Brownian fan* with intensity $a>0$ is the recursive Poisson point process on $\CE$ with kernel $$\CQ(w,\cdot) = {a \over 2} \int_{\s(w)}^{\e(w)} \Theta_{w,\tau}^\star\, \Q\; d\tau\;,$$ and initial condition given by a realisation of Brownian motion, starting at the origin and killed when it reaches the level $-L$, where $L$ is exponentially distributed with mean $a$. The reason for killing the original Brownian motion at this particular level is natural, due to the distribution of the initial tag in Step 1 of the algorithm. It is however essentially irrelevant to the mathematical construction.
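The metric $d$ above is straightforward to evaluate approximately; a direct transcription (excursions encoded as `(s, e, path)` with the path already extended constantly outside its lifetime, as per the convention above; the truncation `K` and grid resolution are illustrative):

```python
def d_lifetime(w, wp):
    # 1 ∧ (|s(w) - s(w')| + |e(w) - e(w')|)
    return min(1.0, abs(w[0] - wp[0]) + abs(w[1] - wp[1]))

def d_exc(w, wp, K=20, grid=200):
    # d(w,w') = d_l(w,w') + sum_{k>=1} 2^{-k} (1 ∧ sup_{|t|<=2^k} |w_t - w'_t|),
    # with the series truncated at K (error at most 2^{-K}) and each
    # supremum approximated on a finite grid
    total = d_lifetime(w, wp)
    for k in range(1, K + 1):
        ts = [-(2.0**k) + i * 2.0**(k + 1) / grid for i in range(grid + 1)]
        sup = max(abs(w[2](t) - wp[2](t)) for t in ts)
        total += 2.0**(-k) * min(1.0, sup)
    return total

# two excursions with identical (constant) paths but different end times
w  = (0.0, 1.0, lambda t: 0.0)
wp = (0.0, 1.5, lambda t: 0.0)
```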
Formally, the Brownian fan is a particular case of the Virgin Island Model [@VirginIsland] with $a$ playing the same role in both models, $h = a$, and $g = 1/2$. The differences are twofold. First, the case of constant non-vanishing $a$ does not actually fall within the framework of [@VirginIsland], since the author there uses the standing assumption that $a(0) = 0$. The other difference is mostly one of perspective: while we have so far defined the Brownian fan as a point process on a space of excursions, one of the purposes of this article is to show that it is also well-behaved as an actual Markov process with values in a suitable state space of (possibly infinite) point configurations. By only keeping track of the genealogy of the particles and not their precise locations, one can construct a “real tree” on top of which the Brownian fan is constructed (loosely speaking) by attaching a Brownian excursion to each branch. This is very similar in spirit to Le Gall’s construction of the Brownian map [@BrownianMap; @BrownianMap2], starting from Aldous’s continuous random tree [@CRT]. The scaling properties of the Brownian fan are however quite different. In particular, Theorem \[theo:Nt\] below implies that the underlying tree has Hausdorff dimension $1$, as opposed to the CRT, which has Hausdorff dimension $2$. Before we proceed, let us show that it is possible to verify the assumptions of Lemma \[lem:intPPP\], so that this object actually exists for every $a > 0$: The kernel $\CQ$ defined in Definition \[def:fan\] satisfies the assumption of Lemma \[lem:intPPP\] with the choice $$F(w) = e^{-\eta\, \s(w)} \bigl(1 - e^{-\eta\,|\l(w)|}\bigr)\;,$$ provided that $\eta$ is large enough. It follows from the properties of $\Q$ that $$\int_\CE F(w')\, \CQ(w,dw') = {a \over 2\sqrt{2\pi}} \int_{\s(w)}^{\e(w)} e^{-\eta \tau} \int_0^\infty (1 - e^{-\eta s})\, s^{-3/2}\,ds\,d\tau \le {Ca \over \sqrt{\eta}}\, \bigl(e^{-\eta\,\s(w)} - e^{-\eta\,\e(w)}\bigr) = {Ca \over \sqrt{\eta}}\, F(w)\;,$$ where $C$ is a constant independent of $a$ and $\eta$.
The claim then follows by choosing $\sqrt \eta > Ca$. The remainder of this section is devoted to a study of the basic properties of the Brownian fan. In particular, we will show that there exists a suitable space $\CX$ of particle configurations such that it can be viewed as a $\CX$-valued Markov process with continuous sample paths that satisfies the Feller property.

### Number of particles and workload rate {#sec:numPart}

Define the set $\CN_t \subset \CE$ of excursions that are “alive at time $t$” by
$$\CN_t = \{w \in \CE \,:\, \s(w) < t < \e(w)\}.$$
With this notation, the number of particles alive at time $t$ for the Brownian fan is given by
$$N_t = \mu^{[\infty]}(\CN_t),$$
which is in principle allowed to be infinite. \[theo:Nt\] There exist a constant $C>0$ and a strictly positive continuous decreasing function $\lambda \colon \R_+ \to \R_+$ such that
$$\E \exp\bigl(\lambda_t N_t\bigr) \le C,$$
holds uniformly over all $t>0$. We will see in the proof that one can choose $\lambda$ of the form
$$\lambda_t = K^{-1} e^{-Kt},$$
for $K$ sufficiently large. For $\lambda > 0$ and $s,t \ge 0$, set
$$N^\lambda_{s,t} = \log \E \exp\bigl(\lambda\, \mu^{[\infty]}_w(\CN_t)\bigr),$$
where $w$ is any excursion starting from $0$ with lifetime $s$. Since, by definition, the value of $N^\lambda_{s,t}$ does not depend on the precise choice of excursion, we do not include it in the notation. It also follows from the construction of $\mu^{[\infty]}_w$ that the function $t \mapsto N^\lambda_{t,t}$ is increasing in $t$ and that $N^\lambda_{s,t} = N^\lambda_{t,t}$ for $s \ge t$. We also define $M^\lambda_{t}$ by
$$M^\lambda_{t} = \log \E \exp\Bigl(\lambda \int_\CE \mu^{[\infty]}_w(\CN_t)\, \cM(dw)\Bigr),$$
where $\cM$ is a Poisson random measure with intensity measure $\Q$ and the realisations $\mu^{[\infty]}_w$ are independent of $\cM$ and of each other. While $N^\lambda$ measures the total number of offspring alive at time $t$ due to an excursion starting at time $0$, $M^\lambda$ measures the rate at which these offspring are created. 
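For intuition, the counting in the displays above is easy to mimic numerically for a finite family of lifetimes; the interval representation below is an illustrative simplification, not the actual random object.

```python
# Sketch: given finitely many excursion lifetimes (s, e), count the particles
# alive at time t and compute the workload W_t = \int_0^t N_u du exactly,
# using the fact that integrating the indicator of (s, e) over [0, t] gives
# the length of the overlap of the two intervals.
def n_alive(lifetimes, t):
    return sum(1 for s, e in lifetimes if s < t < e)

def workload(lifetimes, t):
    return sum(max(0.0, min(e, t) - max(s, 0.0)) for s, e in lifetimes)
```

With `lifetimes = [(0.0, 2.0), (0.5, 1.0), (0.8, 3.0)]`, for instance, all three particles are alive at $t = 0.9$ but only two at $t = 1.5$.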
Indeed, combining this with the definition of $\mu_{w}^1$ and the superposition principle for Poisson point processes, we have the identity
$$N^\lambda_{s,t} = \lambda\,\one_{s \ge t} + \frac{a}{2}\int_0^{s \wedge t} M^\lambda_{t-r}\,dr.$$
It therefore remains to obtain suitable bounds on $M_t^\lambda$. It follows from this and standard properties of Poisson point processes (see for example [@MR2356959 Theorem 6.3]) that one has the identity
$$M^\lambda_{t} = \int_\CE \bigl(\E \exp\bigl(\lambda\,\mu^{[\infty]}_w(\CN_t)\bigr) - 1\bigr)\,\Q(dw) = \int_\CE \bigl(e^{N^\lambda_{\e(w),t}} - 1\bigr)\,\Q(dw)
= \int_\CE \Bigl( \exp\Bigl(\lambda\,\one_{\CN_t}(w) + \frac{a}{2}\int_0^{\e(w)\wedge t} M^\lambda_{t-s}\,ds\Bigr) - 1\Bigr)\,\Q(dw).$$
At this stage, we note that the integrand appearing in this expression depends on $w$ only through $\e(w)$. It is then convenient to break the integral into a contribution coming from $\e(w) > t$, as well as its complement. Since, under $\Q$, the quantity $\e(w)$ is distributed according to the measure ${s^{-3/2}\over \sqrt{2\pi}}\,ds$, this yields
$$M^\lambda_{t} \le \Bigl( \exp\Bigl(\lambda + \frac{a}{2}\int_0^{t} M^\lambda_r\,dr\Bigr) - 1\Bigr)\,\Q\bigl(\e(w) \ge t\bigr)
+ \int_0^t \Bigl( \exp\Bigl(\frac{a}{2}\int_0^{s} M^\lambda_{t-r}\,dr\Bigr) - 1\Bigr)\,\frac{s^{-3/2}}{\sqrt{2\pi}}\,ds.$$
We now assume that both $\lambda$ and $t$ are sufficiently small so that
$$\lambda + \frac{a}{2}\int_0^{t} M^\lambda_r\,dr \le 1.$$
This assumption allows us to use the bound $e^x - 1 \le 2x$, so that we obtain the more manageable expression
$$\begin{aligned}
M^\lambda_{t} &\le \Bigl(2\lambda + a\int_0^{t} M^\lambda_r\,dr\Bigr)\,\Q\bigl(\e(w)\ge t\bigr) + a \int_0^t \int_0^{s} M^\lambda_{t-r}\,dr\,\frac{s^{-3/2}}{\sqrt{2\pi}}\,ds\\
&= \frac{4\lambda t^{-1/2}}{\sqrt{2\pi}} + \frac{2 a t^{-1/2}}{\sqrt{2\pi}}\int_0^{t} M^\lambda_r\,dr + a \int_0^t \int_s^{t} M^\lambda_r\,dr\,\frac{(t-s)^{-3/2}}{\sqrt{2\pi}}\,ds\\
&\le \frac{4\lambda t^{-1/2}}{\sqrt{2\pi}} + \frac{2 a t^{-1/2}}{\sqrt{2\pi}}\int_0^{t} M^\lambda_r\,dr + \frac{2a}{\sqrt{2\pi}} \int_0^t (t-s)^{-1/2} M^\lambda_s\,ds\\
&\le \frac{4\lambda t^{-1/2}}{\sqrt{2\pi}} + \frac{4a}{\sqrt{2\pi}} \int_0^t (t-s)^{-1/2} M^\lambda_s\,ds.
\end{aligned}$$
Writing $H^\lambda_t = t^{1/2} M^\lambda_{t}$, we thus obtain the bound
$$H^\lambda_t \le \frac{4\lambda}{\sqrt{2\pi}} + \frac{4a}{\sqrt{2\pi}}\, t^{1/2} \int_0^t (t-s)^{-1/2}s^{-1/2}\, H^\lambda_s\,ds.$$ 
We can now apply the fractional version of Gronwall’s lemma [@Nualart:2002p6052 Lemma 7.6] (with $b = {4a \over \sqrt{2\pi}}$, $a = {4\lambda \over \sqrt {2\pi}}$, and $\alpha = {1\over 2}$), so that there exists a constant $C>0$ depending on $a$ but independent of $\lambda$ such that
$$H^\lambda_t \le C\lambda \exp(Ct).$$
From this and the definition of $H^\lambda$, we immediately deduce a similar bound on $N^\lambda_{t,t}$. Choosing $\lambda = K^{-1}e^{-Kt}$ for $K$ sufficiently large then allows us to satisfy the smallness assumption above and to obtain $N^\lambda_{t,t}\le 2$, thus completing the proof. As a corollary, we obtain a rather sharp bound on the modulus of continuity of the total workload process $\CW_t = \int_0^t N_s\,ds$. \[prop:bfWmodcont\] For every $T>0$, one has
$$\sup_{t \le T}\limsup_{h \to 0} \frac{|\CW_{t+h} - \CW_t|}{h\, |\log h|} < \infty,$$
almost surely. It follows from the generalised Young inequality that, for every $a,b \in \R_+$ and every $\lambda, \eta > 0$, one has the inequality
$$ab \le \frac{\eta}{\lambda}\Bigl(e^{\lambda a} - 1 - \lambda a + (1+ b/\eta)\log(1+b/\eta) - b/\eta\Bigr),$$
so that
$$N_t \le \frac{\eta}{\lambda}\Bigl(e^{\lambda N_t} + (1+ 1/\eta)\log(1+1/\eta)\Bigr).$$
It follows immediately that
$$|\CW_{t+h} - \CW_{t}| \le \frac{\eta}{\lambda}\int_t^{t+h} e^{\lambda N_s}\,ds + \frac{h(1+\eta)}{\lambda}\,\log(1+1/\eta).$$
Setting $\eta = h$, we obtain the bound
$$|\CW_{t+h} - \CW_t| \le \frac{h}{\lambda}\int_0^{T+1} e^{\lambda N_s}\,ds + C_\lambda\, h\, |\log h|,$$
uniformly over all $h \le 1$ and all $t \in [0,T]$. The claim now follows immediately from Theorem \[theo:Nt\]. Although the number of particles alive at any deterministic time has exponential moments, there exists a *dense* set of exceptional times for which $N_t = \infty$. For one, this follows from the fact that, under $\Q$, $\e(w)$ is distributed proportionally to $s^{-3/2}\,ds$, so that every particle creates an infinite number of offspring in every time interval. Actually, one has the even stronger statement that there is a dense set of exceptional times at which the number of particles belonging to the *first* generation of offspring is infinite. 
Indeed, if we denote by $\MM$ a Poisson random measure on $\R_+^2$ with density $c\, s^{-3/2}\,dr\,ds$ for a suitable constant $c$, then the number $N^1_t$ of particles in the first generation of offspring is given by
$$N^1_t = \MM(A_t), \qquad A_t = \{(r,s) \in \R_+^2 \,:\, r \le t \le r+s\}.$$
For $k \ge 0$ and $d \in \{1,\ldots, 2^k\}$, we then set
$$A_{k,d} = [(d-1)2^{-k}, d2^{-k}] \times [4^{-k}, 4^{1-k}],$$
so that, by the scaling properties of $\MM$, the random variables $N_{k,d} = \MM(A_{k,d})$ form a sequence of i.i.d. Poisson random variables. For any given point $(r,s)$, we set $B_{(r,s)} = [r, r+s]$, which is the set of times $t$ such that $(r,s) \in A_t$, and we set
$$D_{(r,s)} = \bigl\{(k,d) \,:\, B_{(r',s')} \subset B_{(r,s)}\ \text{for all}\ (r',s') \in A_{k,d}\bigr\}.$$
Since the set $D_{(r,s)}$ is infinite for every $(r,s)$, we can then build a sequence $(r_n, s_n)$ recursively in the following way. Start with $(r_0, s_0) = (0,1)$ and then, given $(r_n, s_n)$ for some $n\ge 0$, define $(k_n, d_n)$ as the first (in lexicographic order) element $(k,d) \in D_{(r_n, s_n)}$ such that $N_{k,d} \ge 1$. We then set $(r_{n+1}, s_{n+1})$ to be one of the points of $\MM$ located in $A_{k_n,d_n}$. By construction, one then has $\cap_{n \ge 1} B_{(r_n,s_n)} = \{t\}$ for some $t \in [0,1]$, and $\MM(A_t) \ge \sum_n \MM(A_{k_n, d_n}) = \infty$, as stated. Of course, the interval $[0,1]$ in this procedure is arbitrary. If we want to show that there exists an exceptional time within any deterministic time interval $[t_0, t_1]$, it suffices to start the algorithm we just described with $r_0 = t_0$ and $s_0 = t_1 - t_0$.

The Brownian fan as a Markov process
====================================

In this section, we slightly shift our perspective. We no longer consider the Brownian fan as a point process of excursions, but we consider it as an evolving system of particles. Our system will therefore be described by a Markov process in some space of integer-valued measures on a subset of $\R^2$ corresponding to the admissible combinations of “position + tag”. 
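Returning to the exceptional-times construction: the measure $\MM$ with intensity $c\,s^{-3/2}\,dr\,ds$ is easy to sample once the lifetimes are truncated away from zero (an unavoidable cutoff, since the total mass near $s = 0$ is infinite). The sampler below is a hedged illustration; the parameter names and the cutoff `s_min` are choices made for this sketch.

```python
import math, random

def poisson_sample(lam, rng):
    # Knuth's method; adequate for the moderate means used here.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def sample_MM(c, t_max, s_min, rng):
    """Sample a Poisson random measure with intensity c * s^(-3/2) dr ds on
    [0, t_max] x [s_min, inf).  Above the cutoff s_min, the s-marginal has
    finite total mass: int_{s_min}^inf c s^(-3/2) ds = 2 c / sqrt(s_min)."""
    mass = t_max * 2.0 * c / math.sqrt(s_min)
    points = []
    for _ in range(poisson_sample(mass, rng)):
        r = rng.uniform(0.0, t_max)
        # inverse transform: P(S > s) = sqrt(s_min / s), so S = s_min / U^2
        s = s_min / rng.random() ** 2
        points.append((r, s))
    return points

def n_first_gen(points, t):
    # N^1_t = MM(A_t) with A_t = {(r, s) : r <= t <= r + s}
    return sum(1 for r, s in points if r <= t <= r + s)
```

As the cutoff `s_min` is sent to $0$, the number of sampled points blows up, which is exactly the mechanism behind the dense set of exceptional times.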
The problem is that, as we have seen in the previous section, there are exceptional times at which the limiting process consists of infinitely many particles. The first challenge is therefore to construct a space $\CX$ of integer-valued measures with a sensible topology which can still accommodate these “bad” configurations in such a way that the limiting process is continuous both as a function of time and as a function of its initial configuration. Once this space is defined, we show that the Brownian fan possesses the Feller property in $\CX$ (i.e. the corresponding Markov semigroup leaves the space of bounded continuous functions invariant). In fact, we will show that it preserves the space of Lipschitz continuous functions. The continuity property of the discrete-time process established below in Proposition \[prop:Kolmogorov\] also implies the time continuity of the Brownian fan (see Theorem \[theo:finalConv\] below). These properties allow us to conclude that the Brownian fan $t \mapsto \mu_t$ is in fact a strong Markov process. In Section \[sec:noConv\] below, we will furthermore compute its generator $A$ on a class of “nice” test functions. We will also present a “negative” result showing that if we denote by $T_\eps$ the one-step Markov operator corresponding to the evolution of Algorithm \[tdmc\], then one has $A \neq \lim_{\eps \to 0} \eps^{-1} \bigl(T_\eps - 1\bigr)$. This is in stark contrast with, for example, Euler approximations to stochastic differential equations, where such an equality would hold, at least when applied to sufficiently regular test functions. State space {#sec:statespace} ----------- Our construction is essentially the Wasserstein-$1$ analogue of the construction given in [@GigliFigalli]. Let $\CM\subset \R^n$ be a convex open set with boundary $\d\CM$. 
For $p \in (0,1]$, we then denote by $\ell^p(\CM)$ the set of all integer-valued measures $\mu$ on $\CM$ such that
$$\|\mu\|_p = \int_\CM d^p(y,\d\CM)\,\mu(dy) < \infty,$$
where $d(y,\d \CM)$ denotes the (Euclidean) distance from $y$ to the boundary of $\CM$. Note that since this quantity vanishes at the boundary, there are elements $\mu\in\ell^p(\CM)$ such that $\mu(\CM) = \infty$. We endow $\ell^p(\CM)$ with a slight modification of the Wasserstein-$1$ metric by setting
$$\|\mu-\nu\|_p = \sup_{f \in \Lip_p^0(\CM)} \Bigl(\int f(y)\,\mu(dy) - \int f(y)\,\nu(dy)\Bigr),$$
where we denoted by $\Lip_p^0(\CM)$ the set of all functions $f \colon \CM\to \R$ such that
$$|f(x) - f(y)| \le |x-y|^p,$$
for all $x,y \in \CM$, and $f(y) = 0$ for all $y \in \d \CM$. Our notation is consistent in the sense that if we take for $\nu$ the null measure, then we precisely recover $\|\mu\|_p$. This can be seen by taking $f(x) = d^p(x,\d \CM)$, which is optimal by the triangle inequality. If $\mu$ and $\nu$ happen to have the same (finite) mass, then the expression does not change when one adds a constant to $f$. In this case, we are thus reduced to the usual Wasserstein-$1$ distance between $\mu$ and $\nu$, but with respect to the modified distance function
$$d_p(x,y) = |x-y|^p \wedge \bigl(d^p(x,\d\CM) + d^p(y,\d\CM)\bigr).$$
Note that the completion of $\CM$ under the distance function $d_p$ consists of $\CM \cup \{\Delta\}$, where $\Delta$ is a single “point on the boundary” such that $d_p(x,\Delta) = d^p(x,\d\CM)$ for every $x \in \CM$. If one has $\mu(\CM) < \nu(\CM) < \infty$, then the distance $\|\cdot\|_p$ reduces to the Wasserstein-$1$ distance (again with respect to $d_p$) between $\bar \mu$ and $\nu$, where $\bar\mu$ is obtained from $\mu$ by placing a mass $\nu(\CM) - \mu(\CM)$ on the boundary point $\Delta$. The following alternative characterisation of this distance in the case of purely atomic measures is a version of the Monge-Kantorovich duality in this context: Consider a situation where $\mu = \sum_{i=1}^N \delta_{x_i}$ and $\nu = \sum_{i=1}^M \delta_{y_i}$. 
Then,
$$\|\mu-\nu\|_p = \min_{\sigma \in S_{N+M}} \sum_{i=1}^{N+M} d_p\bigl(x_i, y_{\sigma(i)}\bigr),$$
where $S_{N+M}$ is the group of permutations of $N+M$ elements and we set $x_j = \Delta$ for $j > N$ and $y_j = \Delta$ for $j>M$. See for example [@MR2369050]. This characterisation suggests the following “interpolation” procedure between elements in $\ell^p(\CM)$. Let $\mu = \sum_{i=1}^N \delta_{x_i}$ and $\nu = \sum_{i=1}^N \delta_{y_i}$, where we assumed that both measures charge the same number of points (this is something that we can always achieve by possibly adding points on $\Delta$). Assume furthermore that these points are ordered in such a way that
$$\|\mu-\nu\|_p = \sum_{i=1}^{N} |x_i- y_i|^p.$$
Again, this can always be enforced by suitably reordering the points and possibly adding points on the boundary. We then define, for $t \in (0,1)$, the “linear interpolation” $L_t(\mu,\nu)$ by
$$L_t(\mu,\nu) = \sum_{i=1}^N \delta_{z_i}, \qquad z_i = ty_i + (1-t)x_i.$$
Note that this procedure is not necessarily unique, but it is easy to resolve this ambiguity by optimising over the possible pairings $\{(x_i, y_i)\}$ realising the above construction, according to some arbitrary criteria. In any case, one can check that this construction has the property that
$$\|L_s(\mu,\nu)-L_t(\mu,\nu)\|_p \le |t-s|^p\, \|\mu-\nu\|_p,$$
for any $s,t \in [0,1]$, which will be a useful fact in the sequel.

Definition of the process {#sec:defFanMP}
-------------------------

For the remainder of this section, we set
$$\CM = \{(x,v) \in \R^2 \,:\, v > -ax\},$$
which is the natural configuration space for our process. We will use capital letters to distinguish points in $\CM$ from points in $\R$. By Theorem \[theo:Nt\], we already know that, for any fixed time $t$, the Brownian fan almost surely has only finitely many particles alive at time $t$. Define now the evaluation map $E_t \colon \CE \to \CM \cup \{\Delta\}$ by
$$E_t(w) = \begin{cases} \bigl(w_t, -a w_{\e(w)}\bigr) & \text{if } t \in [\s(w), \e(w)), \\ \Delta & \text{otherwise.}\end{cases}$$ 
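The permutation characterisation above lends itself to a direct (exponential-time, small-$N$) computation. The sketch below works in the configuration space $\CM = \{(x,v)\,:\, v > -ax\}$ just introduced, with $a = 1$ an arbitrary choice and `None` standing for the boundary point $\Delta$.

```python
import itertools, math

A = 1.0  # slope a in the boundary v = -a x (arbitrary choice for illustration)

def d_boundary(P):
    # Euclidean distance from P = (x, v) to the line v = -a x
    x, v = P
    return abs(v + A * x) / math.sqrt(1.0 + A * A)

def d_p(P, Q, p):
    # modified distance d_p(x, y) = |x - y|^p ∧ (d^p(x, ∂M) + d^p(y, ∂M)),
    # extended to the boundary point Δ (represented by None)
    if P is None and Q is None:
        return 0.0
    if P is None:
        return d_boundary(Q) ** p
    if Q is None:
        return d_boundary(P) ** p
    eucl = math.hypot(P[0] - Q[0], P[1] - Q[1])
    return min(eucl ** p, d_boundary(P) ** p + d_boundary(Q) ** p)

def dist_lp(mu, nu, p):
    """Brute-force ||mu - nu||_p: pad both configurations with copies of Δ
    and minimise the total d_p-cost over all pairings."""
    xs = list(mu) + [None] * len(nu)
    ys = list(nu) + [None] * len(mu)
    n = len(xs)
    return min(sum(d_p(xs[i], ys[sig[i]], p) for i in range(n))
               for sig in itertools.permutations(range(n)))
```

For $\mu = \delta_{(1,0)}$ and $\nu$ the null measure, `dist_lp` returns $d^p((1,0),\d\CM) = (1/\sqrt2)^p$, in agreement with the expression for $\|\mu\|_p$.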
For a given “initial condition” $(x,v) \in \CM$, we then set
$$\mu_t = E_t^*\, \mu^{[\infty]}_w,$$
which is an $\ell^p(\CM)$-valued random variable. Here, $w$ is a realisation of a Brownian motion starting at $x$ and killed when it hits $-v/a$, and $\mu^{[\infty]}_w$ is the corresponding realisation of the Brownian fan. (Just so that $E_t$ has the correct effect on $w$, one can for example set $\s(w) = -1$ and make sure that $w(-1) = -v/a$.) As a consequence of Theorem \[theo:Nt\] and of our definition of the Brownian fan, we then indeed have $\mu_t \in \ell^p(\CM)$ for every $p \le 1$. Note at this stage that we can simply discard the, typically infinite, mass on $\Delta$ by identifying measures that only differ on $\Delta$. As already mentioned earlier, this is consistent with the identification $\Delta \sim \d \CM$ already made in the interpretation of the construction of $\ell^p(\CM)$. This construction can be extended to any initial condition in $\ell^p(\CM)$ with finite total mass, by considering independent Brownian fans for each particle. As a consequence of the Markov property of the Brownian excursion and the independence properties of Poisson point processes, it is then straightforward to verify that $t \mapsto \mu_t$ is indeed a Markov process. Actually, by Proposition \[prop:numPart\], we know that for any fixed collection of deterministic times $\{t_1,\ldots, t_k\}$, one has $\mu_{t_k}(\CM) < \infty$ almost surely, so that our construction determines a probability measure on $\bigl(\ell^p(\CM)\bigr)^{\R_+}$ by Kolmogorov’s extension theorem. At this stage however, we know absolutely nothing about the continuity properties of this process, and this is the subject of the remainder of this section.

Feller property
---------------

We now show that the Brownian fan constructed in Sections \[sec:BFan\] and \[sec:defFanMP\] has the Feller property in $\ell^p(\CM)$ for every $p \le 1$. 
As in [@GigliFigalli], we could have defined spaces $\ell^p(\CM)$ in a natural way for $p > 1$. However, the Feller property would fail in this case because of the following simple heuristic argument. For any $p$, we can change the initial condition by an amount less than $\delta$ in $\ell^p$ by creating $N$ particles at distance $\eps = (\delta/N)^{1/p}$ from the boundary of $\CM$. For $\eps$ small, the probability that any such particle survives up to time $1$ (say) is bounded from below by $c\eps$ for some $c>0$. On average, the number of survivors will thus be on the order of $\eps N \sim \delta^{1/p} N^{(p-1)/p}$. Furthermore, at time $1/2$, each of these surviving particles will be at a distance of order $1$ from the boundary of $\CM$. As a consequence, by increasing $N$ but keeping $\delta$ fixed (or even sending $\delta$ to $0$ sufficiently slowly), the law of the process at time $1$ with an initial condition arbitrarily close to $0$ can be at arbitrarily large distance from $0$, so that the Feller property fails. For $p \le 1$ on the other hand, we have \[prop:feller\] For any $p \le 1$, the Brownian fan gives rise to a Feller process in $\ell^p(\CM)$. Even more, the corresponding Markov semigroup preserves the space of bounded Lipschitz continuous functions. For any two initial conditions $\mu_0$ and $\bar \mu_0$, write as before
$$\mu_0 = \sum_{j=1}^N \delta_{X_0^{(j)}}, \qquad \bar\mu_0 = \sum_{j=1}^N \delta_{\bar X_0^{(j)}},$$
with the $X_0^{(j)}\in\CM$ and $\bar X_0^{(j)}\in \CM$ chosen in such a way that
$$\|\mu_0- \bar\mu_0\|_p = \sum_{j=1}^{N} d_p\bigl(X_0^{(j)},\bar X_0^{(j)}\bigr).$$
Our aim now is to show that there exists a constant $C$ such that
$$\E\|\mu_t- \bar\mu_t\|_p \le C\, \|\mu_0-\bar\mu_0\|_p,$$
independently of $t \le 1$, where the pair $(\mu_t,\bar \mu_t)$ is a particular coupling between the Brownian fans starting from $\mu_0$ and $\bar \mu_0$ respectively. Denote by $\mu_t^{(j)}$ the contribution to $\mu_t$ originating from the initial particle $X_0^{(j)}$ and similarly for $\bar \mu_t^{(j)}$. 
Then, by the triangle inequality, one obtains the bound
$$\E\|\mu_t- \bar\mu_t\|_p \le \sum_{j \ge 1} \E\|\mu_t^{(j)}- \bar\mu_t^{(j)}\|_p,$$
so that the claim follows if we can show that
$$\E\|\mu_t^{(j)}- \bar\mu_t^{(j)}\|_p \le C\, d_p\bigl(X_0^{(j)}, \bar X_0^{(j)}\bigr).$$
In other words, it suffices to consider the special case when both $\mu_0$ and $\bar \mu_0$ consist of one single particle, which we denote by $X_0 = (x_0, v_0)$ and $\bar X_0 = (\bar x_0, \bar v_0)$ respectively. One then constructs a coupling between the two processes $\mu_t$ and $\bar \mu_t$ by running both particles with the same Brownian motion and spawning children according to the same Poisson process (as long as the corresponding particle is alive). We denote by $X_t$ and $\bar X_t$ the evolutions of the two initial particles in $\CM$, driven by the same realisation of a Brownian motion, and stopped when they reach $\d\CM$. We can assume without loss of generality that $v_0 + ax_0 < \bar v_0 + a\bar x_0$, so that the particle $X$ dies before the particle $\bar X$. Denoting by $\tau$ and $\bar \tau$ the respective lifetimes of these particles, one thus has $\tau \le \bar\tau$. Denote now by $\MM_t$ the (random) measure on $[0,t]\times \CM$ which is such that, for $I \subset [0,t]$ and $A \subset \CM$, $\MM_t(I\times A)$ is the number of particles in $A$ at time $t$ that are offspring of a particle created from the “ancestor particle” $\bar X$ at some time $s\in I$. With this notation, if we denote by $\Xi\colon \CM \to \CM$ the map
$$\Xi(x,v) = \bigl(x+ x_0 - \bar x_0,\; v - a ( x_0 - \bar x_0)\bigr),$$
then one has the decompositions
$$\begin{aligned}
\mu_t(A) &= \one_{t \le \tau}\, \delta_{X_t}(A) + \Xi^*\MM_t\bigl([0,\tau\wedge t] \times A\bigr),\\
\bar\mu_t(A) &= \one_{t \le \bar\tau}\, \delta_{\bar X_t}(A) + \MM_t\bigl([0,\bar\tau\wedge t] \times A\bigr).
\end{aligned}$$
Denote now by $\delta$ the *Euclidean* distance between the two initial particles, so that their $\ell^p$-distance is $\delta^p$. It then follows immediately from the above decomposition that one has the bound
$$\begin{aligned}
\E\|\mu_t - \bar\mu_t\|_p &\le \delta^p + \E\,\one_{t \in [\tau,\bar\tau]}\, d_p(\bar X_t,\d\CM) + \delta^p\, \E\MM_t\bigl([0,\tau\wedge t] \times \CM\bigr)\\
&\quad + \E\int_\CM d_p(Y,\d\CM)\,\MM_t\bigl([\tau\wedge t,\bar\tau\wedge t] \times dY\bigr).
\end{aligned}$$ 
Since $d(\bar X(\tau),\d \CM) = \delta$ by the definition of $\tau$ and $\delta$, it follows from Jensen’s inequality and the martingale property of (stopped) Brownian motion that one has the bound $\E \one_{t \in [\tau,\bar \tau]}d_p(\bar X_t,\d \CM) \le \delta^p$. It also follows from Theorem \[theo:Nt\] that $\E \MM_t([0,\tau\wedge t] \times \CM) < \infty$, independently of $\delta$. Finally, it follows from an argument very similar to the proof of Theorem \[theo:Nt\] that
$$\E\int_\CM d_p(Y,\d\CM)\,\MM_t(ds\, dY) \le C\, ds,$$
uniformly over $s \in [0,t]$. It follows that
$$\E\int_\CM d_p(Y,\d\CM)\,\MM_t\bigl([\tau\wedge t,\bar\tau\wedge t]\, dY\bigr) \le C\,\E|\bar\tau\wedge t - \tau\wedge t| \le C\delta,$$
where we used the fact that if $\tau_\delta$ is the first hitting time of $0$ by a Brownian motion starting at $\delta$, then $\E (\tau_\delta \wedge 1) \le C\delta$. Combining these bounds completes the proof.

Lack of convergence of the generators {#sec:noConv}
-------------------------------------

One standard method to prove convergence of a sequence of Markov processes to a limiting process, once tightness has been established, is to show that the corresponding generators converge in a suitable sense. In our situation, one actually does *not* expect the generator of the approximate process to converge to that of the limiting process, when testing it on “nice” test functions. We first argue at a technical level why this is the case, before providing an intuitive explanation. Inspired by [@EthKur86MP; @Dawson], we consider test functions of the form
$$F(\mu) = \exp\bigl(\langle \log f, \mu\rangle\bigr),$$
where $f \colon \CM \to \R_+$ is a sufficiently smooth function such that $f(x,v) = 1$ for $(x,v) \in \d \CM$. This boundary condition is required since elements $\mu \in \ell^p(\CM)$ can have infinite mass (and, as we have already seen, the limiting process really does acquire infinite mass at some exceptional times) accumulating near $\d \CM$. Being in $\ell^p(\CM)$ for $p \le 1$ does however ensure that smooth functions vanishing on $\d\CM$, such as $\log f$ above, are integrable. 
Exploiting the independence structure of the process as well as its spatial homogeneity, we can reduce ourselves to the case of an initial condition of the form $\mu_0 = \delta_{(x,v)}$ for some $v > -ax$. In this case, for sufficiently small $\eps > 0$, the probability that the original particle dies within the time interval $\eps$ is of the order $\eps^p$ for any $p>0$. We therefore only need to take into account the possibility of creating some descendant(s), with the killing mechanism being taken care of by the boundary condition of $f$. While the average number of “second generation” descendants is of order $\eps$, any such descendant will typically have travelled to a distance of order $\sqrt \eps$ from $\d\CM$, so that only the first generation has a chance of contributing to the generator. We then have
$$\frac{1}{\eps}\,\E\bigl(F(\mu_\eps) - F(\mu_0)\bigr) \approx A_0 f(x,v) + \frac{f(x,v)}{\eps}\,\E\Bigl(\exp\bigl(\langle\log f, \MM_\eps\rangle\bigr) - 1\Bigr),$$
where
$$A_0 = \tfrac{1}{2}\,\d_x^2,$$
is the generator of Brownian motion, and where $\MM_\eps$ is the (projection to time $\eps$ of the) Poisson point process yielding the first generation of offspring. Note now that since these offspring will be created near $\d \CM$ and since $f=1$ there, we can further approximate this expression by
$$\frac{1}{\eps}\,\E\bigl(F(\mu_\eps) - F(\mu_0)\bigr) \approx A_0 f(x,v) + \frac{f(x,v)}{\eps}\, f'(x)\, \E\int_\CM (\bar x + a^{-1}\bar v)\, \MM_\eps(d\bar x,d\bar v).$$
Here, we wrote $f'(x)$ as a shortcut for $\d_x f(x,v)\big|_{v = -ax}$. Denoting by $e_s$ the position, relative to its starting point, of an excursion of length $s$ and making use of the formula for the intensity measure of $\MM_\eps$, we obtain for the last term in this equation the expression
$$\frac{1}{\eps}\,\E\int_\CM (\bar x + a^{-1}\bar v)\, \MM_\eps(d\bar x,d\bar v) = \frac{a}{2\eps} \int_0^\infty \int_0^{s\wedge\eps} \E\, e_s(t)\,dt\, \frac{s^{-3/2}}{\sqrt{2\pi}}\,ds.$$
At this stage we note that, for a Brownian excursion of length $s$, we have for $t \le s$ the identity
$$\E\, e_s(t) = \sqrt{\frac{8}{\pi}}\,\sqrt{\frac{t(s-t)}{s}},$$
which can be computed using the explicit formula given for the transition probabilities of the Brownian excursion on page 59 of [@BMHandbook]. 
Inserting this into the expression above, a tedious but straightforward calculation then yields
$$\lim_{\eps \to 0} \frac{1}{\eps}\,\E\int_\CM (\bar x + a^{-1}\bar v)\,\MM_\eps(d\bar x,d\bar v) = \frac{a}{2},$$
so that we finally obtain for the generator $A$ the expression
$$A F(\mu_0) = A_0 f(x,v) + \frac{a}{2}\, f(x,v)\, f'(x).$$
Recall that this is for the particular case where $\mu_0 = \delta_{(x,v)}$. In the general case, we can use the independence structure of the process to obtain
$$A F(\mu_0) = F(\mu_0) \int_\CM \Bigl(\frac{A_0 f(x,v)}{f(x,v)} + \frac{a}{2}\, f'(x)\Bigr)\,\mu_0(dx,dv).$$
Compare this with the generator of a usual branching diffusion, where the term $a f'(x)/2$ would be replaced by $a(f(x) - 1)$, with $a$ the branching rate. On the other hand, if we denote by $T_\eps$ the Markov operator describing one step of Algorithm \[tdmc\], we might expect that one also obtains $A$ as the limit $\eps^{-1} \bigl(T_\eps - 1\bigr)$ as $\eps \to 0$. This would indeed be the case if there were no branching, or if branching occurred only at a finite rate. Consider again initial conditions of the form $\mu_0 = \delta_{(x,v)}$ for some $v > -ax$. By reasoning similar to that in the beginning of Section \[sec:contLimit\], we obtain
$$\lim_{\eps\to 0}\eps^{-1} \bigl(T_\eps F - F\bigr)(\mu_0) = A_0 f(x,v) + \frac{a}{2}\, f(x,v)\, f'(x) \int_0^\infty y^2\,\nu(dy),$$
which is always *different* from the generator computed above, and is actually what we would have obtained from the wrong guess. A possible reason for this discrepancy is that, while the Markov semigroup of the Brownian fan does indeed preserve test functions of the type considered above, we do not expect this to be true of the Markov operator $T_\eps$. Instead, the “correct” space of test functions for $T_\eps$ is of the same type, but the function $f$ should have a “boundary layer” near $\d\CM$. Another reason why the generator of the Brownian fan is not such a useful object is that many seemingly innocent observables, like for example the total number $\CN$ of particles, do *not* belong to its domain. 
This follows from the fact that if we consider again a simple initial condition $\mu_0$ as above, then $\E \CN(\mu_\eps) - 1 \approx \sqrt \eps$ for small $\eps$. The reason the total number of particles nevertheless remains finite (at least for fixed times) is that there are exceptional states where one or more particles are very near $\d\CM$ and for which $\E \CN(\mu_\eps) - 1 \approx -\CO(1)$. The remainder of the article is devoted to providing a rigorous proof of the fact that, in the situation of the previous two sections, the process given by Algorithm \[tdmc\] converges to the Brownian fan in $\CC([0,T], \ell^p(\CM))$ for any $T>0$ and $p \le 1$. The overall strategy of the proof is classical: we first prove a tightness result in Section \[sec:tightness\] and then show that finite-dimensional marginals converge to those of the Brownian fan in Section \[sec:convFan\]. Difficulties arise on two fronts. First, to prove the tightness result, it is convenient to have uniform moment bounds on the number of particles at fixed time for the approximating system. These turn out to be much more difficult to obtain for the approximating system than for the Brownian fan, which is mainly due to a lack of uniform exponential bounds. A second difficulty arises in the proof that finite-dimensional distributions converge to those of the Brownian fan. While it is intuitively clear that those excursions that survive for times of order $\CO(1)$ do converge to suitably normalised Brownian excursions, this result is rather technical and, surprisingly, does not seem to appear in the literature. Furthermore, no convergence result holds for the typical excursions which die very early. We therefore also need to argue that, both at the level of Algorithm \[tdmc\] and at the level of the Brownian fan, these small excursions do not matter in the limit. 
Tightness {#sec:tightness}
=========

As in the previous section, we restrict ourselves to the particular case when the underlying Markov process is given by a rescaled random walk, namely
$$y_{(k+1)\eps} = y_{k\eps} + \sqrt\eps\, \xi_{k+1},$$
where the $\xi_k$ are i.i.d. random variables with distribution $\nu$ having some exponential moments, and where the potential $V$ is given by a linear function, $V(x) = -ax$. Our aim is to show that as $\eps \to 0$, the sequence of birth and death processes obtained by running Algorithm \[tdmc\] is tight in a state space $\CX$, which we will now describe.

Formulation of the tightness result
-----------------------------------

With the construction of the previous section in mind, we choose as our state space $\CX = \ell^p(\CM)$, where $\CM = \{(x,v,n) \in \R^2 \times \N \,:\, v > -ax\}$, and $p \le 1$ is arbitrary. Here, the coordinate $n$ is used to keep track of the generation of a particle: direct offspring of a particle from the $n$th generation belong to the $(n+1)$st generation. We extend the Euclidean distance to $\R^2 \times \N$ by additionally postulating that the distance between particles belonging to different generations is given by the sums of the distances of the two particles to $\d\CM$. The boundary $\d \CM$ is given as before by $\d \CM = \{(x,v,n)\,:\, v = -ax\}$. We will use capital letters for elements of $\CM$ to differentiate them from elements of $\R$. In order to formulate our result, we will make use of the following notation. For $t = k\eps$ with $k$ an integer, we denote by $\mu_t^\eps$ the empirical measure of the particles alive at time $t$, and by $N_t$ the number of such particles. Sometimes, it will be convenient to consider the particles instead as a collection of elements $X_t^{(j)}\in\CM$, so that we write
$$\mu_t^\eps = \sum_{j=1}^{N_t} \delta_{X^{(j)}_t}.$$
We do not specify how exactly we order the particles, as this is completely irrelevant for our purpose. 
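As a concrete reference point, the underlying rescaled walk is straightforward to simulate. The Gaussian choice for $\nu$ below is an assumption made for illustration (the text only requires exponential moments), and no branching or killing is included in this sketch.

```python
import math, random

def rescaled_walk(eps, n_steps, a=1.0, y0=0.0, seed=0):
    """Simulate y_{(k+1)eps} = y_{k eps} + sqrt(eps) * xi_{k+1} with
    xi ~ N(0, 1), and record the linear potential V(y) = -a * y along
    the trajectory (the quantity that drives birth and death in the
    algorithm)."""
    rng = random.Random(seed)
    ys = [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] + math.sqrt(eps) * rng.gauss(0.0, 1.0))
    return ys, [-a * y for y in ys]
```

Over $n = t/\eps$ steps the increments have total variance $t$, so the trajectory approximates a Brownian path on $[0,t]$ as $\eps \to 0$.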
For $t \in (k\eps, (k+1)\eps)$, we define $\mu_t^\eps$ by using the “linear interpolation” procedure described in the previous section, setting
$$\mu_t^\eps = L_s\bigl(\mu_{k\eps}^\eps, \mu_{(k+1)\eps}^\eps\bigr), \qquad s = \eps^{-1}t - k.$$
The interpolation procedure $L_s$ is a very minor modification of the one described above, in the sense that we only connect particles belonging to the same generation. In this way, the process $t \mapsto \mu_t^\eps$ has continuous trajectories for every $\eps$. The main result of this section is as follows: \[theo:tight\] Let $p \le 1$ and denote by $\CL^\eps$ the law of the process $t \mapsto \mu_t^\eps$ described above, viewed as a family of probability measures on $\CC([0,1],\CX)$. Assume furthermore that there exists $c> 0$ such that $\int e^{c|y|}\nu(dy) < \infty$. Then, for any single particle initial condition $\mu_0^\eps = \delta_{X_0}$ with $X_0 \in \CM$, there exists $\eps_0>0$ such that the family $\{\CL^\eps\}_{\eps \le \eps_0}$ is tight. Combining [@Dawson Theorem 3.6.4] and [@Billingsley Theorem 8.3], we see that, in order to obtain tightness, it is sufficient to show that:

- For every $\delta > 0$, there exists a compact set $K_\delta \subset \CX$ such that $\P(\mu_t^\eps \in K_\delta) > 1-\delta$, uniformly over $t \in [0,1]$ and $\eps < \eps_0$.
- There exist $\alpha > 0$ and $C>0$ such that $\E \|\mu_t^\eps - \mu_s^\eps\|_p^q \le C|t-s|^{\alpha q}$, uniformly over all $s,t\in [0,1]$ and $\eps < \eps_0$.

The first claim then follows from Proposition \[prop:compact\] below, while the second claim is the content of Proposition \[prop:Kolmogorov\]. The proof of this result is the content of the remainder of this subsection and goes roughly as follows. In Section \[sec:boundsNPart\], we obtain a moment bound on the number of particles alive at any fixed time $t \in [0,1]$, which is uniform in $\eps > 0$. This then allows us to obtain the compactness at fixed time in Section \[sec:compactFixed\]. The verification of Kolmogorov’s continuity criterion is the content of Section \[sec:Kolmo\]. 
Moment bounds on the number of particles {#sec:boundsNPart}
----------------------------------------

In the sequel, we will denote by $\K_p(X)$ the $p$th cumulant of a random variable $X$ and by $\K_p(X\,|\, \FF)$ the same cumulant, conditioned on the $\sigma$-field $\FF$. We will use the important property that $\K_p(X+Y\,|\,\FF) = \K_p(X\,|\,\FF) + \K_p(Y\,|\,\FF)$, provided that $X$ and $Y$ are independent, conditionally on $\FF$. We will also use the fact that if $X$ is a positive random variable, then $\K_p(X) \le \E X^p$ and there exists a constant $C$ such that the bound
$$\E X^p \le C\sum_{q \le p} \bigl(\K_q (X)\bigr)^{p/q},$$
holds. Our aim now is to obtain a bound on the cumulants of the number of particles alive at time $t$ which is independent of $\eps$. We start with an initial configuration containing only one particle, which belongs to generation $0$, and we set $N^0_t \in \{0,1\}$, depending on whether or not this particle is still alive at some subsequent time $t$. We also define $N^n_t = \mu_t^\eps(\CM^n)$, where $\CM^n = \{(x,v,\ell) \in \CM\,:\, \ell=n\}$, which is the number of particles in the $n$th generation that are alive at time $t$. We also denote by $N^n_{s,t}$ the number of such particles that were created at time $s \le t$. For any $X_0 = (x,v)$ with $v > -ax$, we write $\E_{X_0}$ for expectations of observables for the process generated by starting Algorithm \[tdmc\] with the underlying dynamics above, started with one single initial particle in generation $0$ at location $x$ with tag $v$. Finally, for $x \in \R$, we write $\E_x$ for the same expectation, but where the initial particle has tag $v = +\infty$, meaning that it is “immortal”. We then have the following result: \[prop:numPart\] Consider the situation of Theorem \[theo:tight\]. 
For every $p \ge 0$, there exist $C_p>0$ and $\eps_0 > 0$ such that, under the rules of Algorithm \[tdmc\], the number $N_t = \mu_t^\eps(\CM)$ of particles alive at time $t$ satisfies $\E_{x} |N_t|^p \le 2\exp(C_p t)$, uniformly over all $\eps \le \eps_0$ and $x \in \R$. Furthermore, there exists $\rho > 0$ such that
$$\E_{x} N^n_t \le (n+1)^{t/\rho}\, 2^{-n}\;,$$
uniformly over all $\eps \le \eps_0$, $n \ge 0$, and $t>0$. Finally, denoting by $R^\gamma_t$ the number of offspring alive at time $t$ that have never been at distance more than $\gamma$ from $\d \CM$, we have the bound
$$\E_{x} R^\gamma_t \le C\gamma\;,$$
uniformly over $t \in [0,1]$, $\gamma \in (0,1]$, and $\eps \le \eps_0\wedge\gamma^2$.

The proof will make use of the following elementary fact where, for $\lambda = n + p$ with $n \in \N$ and $p \in (0,1]$, we denote by $\CI(\lambda)$ the law of a random variable $Y$ such that $Y= n$ with probability $1-p$ and $Y = n+1$ with probability $p$. (This is so that $\E Y = \lambda$.)

\[lem:unif\] Let $Y$ be a random variable with law $\CI(\lambda)$. Then, for any $q \ge 1$ and any $\lambda>0$, one has the bound $\E Y^q \le \lambda + (2\lambda)^q$.

By inspection, one has
$$\E Y^q = p\,(n+1)^q + (1-p)\, n^q\;.$$
In the case $n=0$, one then has $\E Y^q = \lambda$, so the statement is true. For $n \ge 1$, one uses the fact that both $n+1$ and $n$ are bounded by $2\lambda$, and the claim follows at once.

We restrict ourselves to times that are integer multiples of $\eps$. Furthermore, from now on, we fix an initial condition $x$, so that we just write $\E$ instead of $\E_x$. We also denote by $\FF^n$ the $\sigma$-algebra containing all information pertaining to particles in generations up to (and including) $n$.
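As a concrete illustration of the law $\CI(\lambda)$ and of the bound in Lemma \[lem:unif\], this law is elementary to simulate. The following sketch (in Python; the function name `sample_I` is illustrative and not part of the text) draws $Y$ so that $\E Y = \lambda$:

```python
import math
import random

def sample_I(lam, rng=random):
    """Draw Y with law I(lam): writing lam = n + p with n a nonnegative
    integer and p in (0,1], return n with probability 1-p and n+1 with
    probability p, so that E[Y] = lam."""
    n = math.floor(lam)
    p = lam - n
    if p == 0.0 and n > 0:  # integer lam: lam = (lam-1) + 1, so Y = lam a.s.
        n, p = n - 1, 1.0
    return n + (1 if rng.random() < p else 0)
```

Averaging many draws with, say, $\lambda = 2.3$ recovers the mean $2.3$, and the empirical second moment respects the bound $\E Y^2 \le \lambda + (2\lambda)^2$ of the lemma.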
With this notation at hand, we obtain for $N^n_t$ the bound
$$\begin{aligned}
\E\bigl( N^n_t\bigr)^p &\le C\,\E \sum_{q=1}^{p} \bigl(\K_q (N^n_t\,|\, \FF^{n-1})\bigr)^{p/q}\\
&= C\,\E \sum_{q=1}^{p} \Bigl(\sum_{\eps\ell \le t} \K_q \bigl(N^n_{\eps\ell, t}\,\big|\, \FF^{n-1}\bigr)\Bigr)^{p/q}\\
&\le C\,\E \sum_{q=1}^{p} \Bigl(\sum_{\eps\ell \le t} \E\bigl(|N^n_{\eps\ell, t}|^q\,\big|\, \FF^{n-1}\bigr)\Bigr)^{p/q}\;,\end{aligned}$$ \[e:boundNkp\]
where we used in the first step and the independence of the offspring in the second step. In the above expression, $\ell$ takes only integer values. Note now that
$$\sum_{\eps\ell \le t} \frac{\eps}{\sqrt{t(t + \eps-\eps\ell)}} \le C\;,$$
uniformly over all $t\ge \eps$. As a consequence, for any positive sequence $a_\ell$ and any power $r \ge 1$, one has the bound
$$\begin{aligned}
\Bigl(\sum_{\eps\ell \le t} a_\ell \Bigr)^r &= \Bigl(\sum_{\eps\ell \le t} \frac{\eps}{\sqrt{t(t + \eps-\eps\ell)}}\, \tilde a_\ell \Bigr)^r \le C^{r-1} \sum_{\eps\ell \le t} \frac{\eps}{\sqrt{t(t + \eps-\eps\ell)}}\, \tilde a^r_\ell\\
&= C^{r-1} \sum_{\eps\ell \le t}{t^{\frac{r-1}{2}}(t + \eps-\eps\ell)^{\frac{r-1}{2}} \over \eps^{r-1}}\, a^r_\ell\;,\end{aligned}$$
where we have set $\tilde a_\ell = \eps^{-1} \sqrt{t(t + \eps -\eps\ell)}\, a_\ell$ in the intermediate steps. Applying this inequality to , we obtain
$$\E\bigl( N^n_t\bigr)^p \le C \sum_{q=1}^{p} \E \sum_{\eps\ell \le t} {t^{\frac{p-q}{2q}}(t + \eps-\eps\ell)^{\frac{p-q}{2q}} \over \eps^{\frac{p-q}{q}}}\, \Bigl(\E\bigl( |N^n_{\eps\ell, t}|^q\,\big|\, \FF^{n-1}\bigr)\Bigr)^{p/q}\;.$$
Denote now by $M^{n,j}_s$ the number of particles in the $n$th generation created at time $s$ by the $j$th particle from the $(n-1)$st generation. We write $\GG_s$ for the $\sigma$-algebra generated by this additional data. Each of these particles yields a contribution to $N^n_{s,t}$ of either $1$ or $0$, depending on whether it survives or not. Furthermore, these contributions, which we will denote by $S^{n,j,i}_{s,t}$, are all independent and, for the same value of $j$, they are also identically distributed. By definition, we thus have the identity
$$N^n_{\eps\ell, t} = \sum_{j=1}^{N^{n-1}_{\eps(\ell-1)}}\; \sum_{i=1}^{M^{n,j}_{\eps\ell}} S^{n,j,i}_{\eps\ell,t}\;.$$
We now twice make use of the inequality
$$\Bigl(\sum_{j=1}^m a_j\Bigr)^q \le m^{q-1} \sum_{j=1}^m a_j^q\;,$$
which is valid for any $q \ge 1$, $m \ge 0$ and sequence of positive numbers $a_j$. This yields the bound
$$\bigl(N^n_{\eps\ell,t}\bigr)^q \le \bigl(N^{n-1}_{\eps(\ell-1)}\bigr)^{q-1} \sum_{j=1}^{N^{n-1}_{\eps(\ell-1)}} \bigl(M^{n,j}_{\eps\ell}\bigr)^{q-1} \sum_{i=1}^{M^{n,j}_{\eps\ell}} S^{n,j,i}_{\eps\ell,t}\;.$$
Note that since $S^{n,j,i}_{\eps\ell,t}$ can only take the values $0$ or $1$, raising it to the power $q$ makes no difference. By the “gambler’s ruin theorem” [@RWLawler Thm 5.1.7], we have the bound
$$\E\bigl(S^{n,j,i}_{\eps\ell,t}\,\big|\, \FF^{n-1}\vee\GG_{\eps\ell}\bigr) \le C\,\frac{\sqrt\eps\,(\xi^j + 1)}{\sqrt{t+\eps-\eps\ell}}\;,$$
where $\sqrt{\eps}\xi^j$ denotes the step performed by the $j$th particle of the $(n-1)$st generation between times $\eps(\ell-1)$ and $\eps\ell$. Regarding the number of offspring $M^{n,j}_{\eps\ell}$, it follows from the definition of the algorithm that its distribution is given by $\CI(\exp(a\sqrt{\eps}\xi^j_+) - 1)$, where $\xi^j_+$ denotes the positive part of $\xi^j$. Combining this with , Lemma \[lem:unif\], and , we thus obtain the bound
$$\E\bigl(|N^n_{\eps\ell,t}|^q \,\big|\, \FF^{n-1}\bigr) \le \frac{C\sqrt\eps}{\sqrt{t+\eps-\eps\ell}}\,\bigl|N^{n-1}_{\eps(\ell-1)}\bigr|^{q-1}\sum_{j=1}^{N^{n-1}_{\eps(\ell-1)}} \Bigl(\bigl(e^{a\sqrt\eps\xi^j_+}-1\bigr)^q + e^{a\sqrt\eps\xi^j_+}-1\Bigr)(\xi^j + 1)\;.$$
In order to simplify this expression, we use the fact that, for $x \ge 0$, one has
$$e^x - 1 \le x e^x\;,\qquad (e^x - 1)^q \le x e^{qx}\;,$$
for every $q \ge 1$. This yields
$$\E\bigl(|N^n_{\eps\ell,t}|^q \,\big|\, \FF^{n-1}\bigr) \le \frac{C\eps}{\sqrt{t+\eps-\eps\ell}}\,\bigl|N^{n-1}_{\eps(\ell-1)}\bigr|^{q-1} \sum_{j=1}^{N^{n-1}_{\eps(\ell-1)}} \one_{\xi^j \ge 0}\, e^{aq\sqrt\eps\xi^j}\,(\xi^j + 1)^2\;.$$
Using once again, we get the bound
$$\Bigl(\E\bigl(|N^n_{\eps\ell,t}|^q \,\big|\, \FF^{n-1}\bigr)\Bigr)^{p\over q} \le C\,\bigl|N^{n-1}_{\eps(\ell-1)}\bigr|^{p-1} \sum_{j=1}^{N^{n-1}_{\eps(\ell-1)}} \eps^{p\over q}\, {\one_{\xi^j \ge 0}\, e^{ap\sqrt\eps\xi^j} \over (t+\eps- \eps\ell)^{p\over 2q}}\,(\xi^j + 1)^{2p\over q}\;.$$
At this stage, we note that, conditional on the state of the system at time $\eps (\ell-1)$, the steps $\xi^j$ are all independent and identically distributed with law $\nu$. Setting
$$P_\eps = \int_0^\infty e^{ap\sqrt\eps z}\, (z+1)^{2p\over q}\,\nu(dz)\;,$$
it follows that
$$\E\Bigl(\E\bigl(|N^n_{\eps\ell,t}|^q \,\big|\, \FF^{n-1}\bigr)\Bigr)^{p\over q} \le C P_\eps\, {\eps^{p\over q} \over (t+\eps- \eps\ell)^{p\over 2q}}\;\E \bigl|N^{n-1}_{\eps(\ell-1)}\bigr|^p\;.$$
Inserting this into yields for $t \le 1$ the bound
$$\E\bigl( N^n_t\bigr)^p \le C P_\eps \sum_{\eps\ell \le t} \frac{\eps}{\sqrt{t+\eps-\eps\ell}}\;\E\bigl|N^{n-1}_{\eps(\ell-1)}\bigr|^p\;.$$
On the other hand, since we assumed that the initial particle is immortal, $N^{n-1}_s$ is stochastically increasing as a function of $s$, so that we obtain from the bound
$$\E\bigl( N^n_t\bigr)^p \le C P_\eps \sqrt{t}\; \E\bigl(N^{n-1}_t\bigr)^p\;.$$
Since, by the exponential integrability assumption on $\nu$, there exists $C > 0$ such that $P_\eps \le C$ for $\eps$ small enough, and since $N^0_t = 1$, we conclude that there exists some $\lambda_p$ such that, for all $n \ge 1$, one has the bound
$$\E\bigl( N^n_t\bigr)^p \le (\lambda_p t)^{n/2}\;.$$
It follows that there exists $t_\star > 0$ (depending on $p$) such that $\E (N_t)^p \le 2$ (say), for every $t \le t_\star$. To show that $\E (N_t)^p < \infty$ for every $t$, we use again , combined with the Markov property of the process, to conclude that $\E (N_{t+t_\star})^p \le 2 \E (N_t)^p$, from which the first claim follows.

To get , observe that if we choose $\rho$ small enough so that $\lambda_1 \rho \le 1/4$, the bound with $t \le \rho$ follows from . Denote now by $\CF_t$ the filtration generated by all events up to time $t$. For $t$ and $\rho$ that are multiples of $\eps$, it follows from the Markov property that
$$\E\bigl(N^n_t \,\big|\, \CF_{t-\rho}\bigr) \le \sum_{\ell=0}^n N^\ell_{t-\rho}\; \E N^{n-\ell}_\rho\;,$$
where the expectation on the right is taken with respect to an initial condition with one single immortal particle. The claimed bound for arbitrary $t>0$ therefore follows by induction.

It remains to obtain the bound on $R^\gamma_t$. Denote by $R^\gamma_{s,t}$ the number of offspring contributing to $R^\gamma_t$ that are created at time $s < t$, so that $R^\gamma_t = \sum_{\eps k \le t} R^\gamma_{k\eps,t}$. Denote now by $Q_{z,t}^{\gamma}$ the probability that, after time $t$, the random walk with initial condition $\sqrt \eps z$ has never exited the interval $[0,\gamma]$. It then follows that
$$\E\bigl(R^\gamma_{k\eps,t}\,\big|\, \CF_{k\eps}\bigr) = a\sqrt\eps\, N_{k\eps} \int_0^\infty\!\!\int_0^z e^{a\sqrt\eps y}\, Q^\gamma_{y,t-k\eps}\,dy\,\nu(dz)\;.$$
This is because, by Step 3 of Algorithm \[tdmc\], the expected number of offspring created by a particle performing an upward step of size $\sqrt \eps z$ is given by $e^{a\sqrt \eps z} - 1$ while, if we denote by $\sqrt \eps y$ the distance between the starting point of the offspring and the “wall” below which it is killed, the law of $y$ has density $c^{-1} e^{a\sqrt \eps y}$ on $[0,z]$, where $c$ is a normalising constant. Since $c = (e^{a\sqrt \eps z} - 1)/a\sqrt \eps$, follows.

Using again the gambler’s ruin theorem, combined with the Markov property of the random walk, we obtain for $Q_{z,t}^{\gamma}$ the bound
$$Q_{z,t}^{\gamma} \le C\, {\sqrt\eps\,(z+1) \over \gamma}\; \bar Q^\gamma_{t/2}\;,$$
where $\bar Q^\gamma_t$ is the probability that the random walk , starting at the origin, stays within $[-\gamma, \gamma]$ up to time $t$. Using the scaling properties of Brownian motion, combined with [@ConvRateBM], we conclude that one has the bound
$$\bar Q^\gamma_t \le 1 \wedge C_q \Bigl({\gamma \over \sqrt t}\Bigr)^q\;,$$
for every $q > 0$, so that
$$Q_{z,t}^{\gamma} \le C\,{\sqrt\eps\,(z+1) \over \gamma}\, \Bigl(1 \wedge \Bigl({\gamma\over\sqrt t}\Bigr)^q\Bigr)\;.$$
Inserting this bound into , using the previously obtained bounds on $N_{k\eps}$, and summing over all $k$, the claim follows at once.

It follows from Theorem 4.1 in [@DMC] that, in the particular case when the initial tag $v$ is distributed according to the logarithm of a uniformly distributed random variable, one has the identity $\E N_t = \E e^{a y_t}$, where $y_t$ is the rescaled random walk with steps $\nu$. The (at least one-sided) exponential integrability assumption on $\nu$ is therefore a necessary assumption in order to obtain any kind of moment bounds on $N_t$. Even if we assume that $\nu$ has Gaussian tails, and despite the result previously obtained for the Brownian fan in Theorem \[theo:Nt\], it is *not* true in general that $N_t$, defined by Algorithm \[tdmc\], has uniform exponential moments as $\eps \to 0$. This is because, even for the first step, the probability that the original particle performs a step of order $\eps^{-p}$ is of order $\exp(-\eps^{-2p-1})$.
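The offspring mechanism just described is easy to simulate. The sketch below (in Python; the function name is illustrative and not part of the algorithm's specification) draws the number of offspring with law $\CI(e^{a\sqrt\eps z}-1)$ and places each offspring's killing barrier at distance $\sqrt\eps y$, with $y$ sampled from the density $c^{-1}e^{a\sqrt\eps y}$ on $[0,z]$ by inversion of its distribution function:

```python
import math
import random

def spawn_offspring(a, eps, z, rng=random):
    """Branching step for an upward move of size sqrt(eps)*z (a sketch):
    the offspring count has law I(lam) with lam = exp(a*sqrt(eps)*z) - 1,
    and each offspring's distance to its killing barrier is sqrt(eps)*y,
    where y has density a*sqrt(eps)*exp(a*sqrt(eps)*y)/lam on [0, z]."""
    s = a * math.sqrt(eps)
    lam = math.exp(s * z) - 1.0
    n = math.floor(lam)
    p = lam - n
    count = n + (1 if rng.random() < p else 0)
    # inverse-CDF sampling: F(y) = (exp(s*y) - 1)/lam on [0, z]
    return [math.sqrt(eps) * math.log(1.0 + rng.random() * lam) / s
            for _ in range(count)]
```

By construction, the expected number of offspring returned is exactly $e^{a\sqrt\eps z}-1$, and every barrier distance lies in $[0,\sqrt\eps\, z]$.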
If this were to happen, the number of offspring created in this way would be of order $\exp(\eps^{-p})$, which immediately shows that exponential moments blow up as $\eps \to 0$.

Compactness at any fixed time {#sec:compactFixed}
-----------------------------

We now show that we can find a compact set $K_\delta$ such that $\mu_t^\eps$ belongs to $K_\delta$ with high probability, uniformly over $\eps$ and $t\in[0,1]$. Our first ingredient for this is the following moment bound:

\[prop:moments\] Consider the setting of Theorem \[theo:tight\]. Then, there exist constants $C$ and $\eps_0$ such that, for every $t \in [0,1]$ and every $\eps \le \eps_0$, the bound
$$\E_{X_0}\Bigl(\int_\CM |x-x_0|^{2p}\, \mu_t^\eps(dx,dv,dk)\Bigr) \le C t^p$$
holds uniformly over all initial conditions $X_0 = (x_0, v_0, 0) \in \CM$.

We restrict ourselves to the case when $t$ is an integer multiple of $\eps$, since the bound on the remaining times follows easily from our interpolation procedure. Furthermore, we can restrict ourselves to the case when the initial particle is immortal, which formally corresponds to setting $v = +\infty$. By translation invariance, we also restrict ourselves to the case where the initial particle is located at the origin, and we denote the corresponding expectation by $\E_0$. It follows from Theorem 4.1 in [@DMC] that if we choose $v = {1\over a} \log u$, where $u$ is drawn uniformly from $[0,1]$, and denote by $\tilde \mu_t^\eps$ the corresponding process, then one has the identity
$$\E_{0}\Bigl(\int_\CM x^{2p}\, \tilde\mu_t^\eps(dx,dv,dk)\Bigr) = \E_0 \bigl(e^{a y_t}\, (y_t)^{2p}\bigr)\;,$$
where $y_t$ is the simple random walk started at the origin. It follows immediately from the exponential integrability of $\nu$ that this quantity is bounded by $C t^p$ for $t \le 1$ and for $\eps$ small enough. On the other hand, one can realise the process $\mu_t^\eps$, which starts with an immortal initial particle, in the following way:

1. Consider the process $\tilde \mu_t^\eps$, where the $v$-component is as above.

2.
When the initial (generation $0$) particle is killed, replace it instantly by an immortal particle starting at the current location.

Let $x^0_t$ denote the trajectory of the initial particle and let $s$ be the time at which the initial particle is killed and replaced. Let $P_t = \P(s \le t)$. By the construction just outlined, we have the recursion relation
$$F_{t,0} = E_t + P_t\, \E\bigl(F_{t-s,\, x^0_s}\,\big|\, s\le t \bigr)\;,$$
where we set
$$\begin{aligned}
F_{t,x^0_s} &= \E_{0}\Bigl(\int_\CM (x + x^0_s)^{2p}\, \mu_t^\eps(dx,dv,dk)\Bigr)\;,\\
E_t &= \E_{0}\Bigl(\int_\CM x^{2p}\, \tilde\mu_t^\eps(dx,dv,dk)\Bigr)\;.\end{aligned}$$
(Remember that the difference between $E_t$ and $F_{t,0}$ is that, in order to compute $F$, we start with an immortal particle.) Setting $F_t = F_{t,0}$, using the fact that, for every $\delta > 0$, one can find $C_\delta$ such that $(x+x^0_s)^{2p} \le C_\delta (x^0_s)^{2p} + (1+\delta)x^{2p}$, and recalling that $E_t \le Ct^p$, we deduce that
$$F_t \le Ct^p + C_\delta P_t\, \E\bigl((x^0_s)^{2p}\, \E_{x^0_s} (N_{t-s})\,\big|\, s\le t\bigr) + (1+\delta)\, P_t\, \E\bigl(F_{t-s}\,\big|\, s\le t\bigr)\;,$$
where $N_t$ is the number of particles alive at time $t$ for the system started with an immortal particle. Note now that, for $t \le 1$, we know from Proposition \[prop:numPart\] that the expected number of particles alive at any given time is bounded by some constant uniform in $\eps$. Since this bound is also uniform in the initial condition, we have
$$P_t\, \E\bigl((x^0_s)^{2p}\, \E_{x^0_s} (N_{t-s})\,\big|\, s\le t\bigr) \le C P_t\, \E\bigl((x^0_s)^{2p}\,\big|\, s\le t\bigr) = C\,\E\bigl((x^0_s)^{2p}\, \one_{s\le t}\bigr)\;.$$
Defining $\tilde s = s \wedge t$, we then obtain the trivial bound
$$\E\bigl((x^0_s)^{2p}\, \one_{s\le t}\bigr) \le \E_0 |y_{\tilde s}|^{2p} \le \E_0 |y_t|^{2p} \le C t^p\;,$$
where we made use of the fact that $|y_t|^{2p}$ is a submartingale to obtain the second inequality. Setting now $\bar F_t = \sup_{s \le t} F_s$, we can combine all of these bounds to get the inequality
$$\bar F_t \le C_\delta t^p + (1+\delta)\, P_t\, \bar F_t\;.$$
Since one can check (for example by again using the fact that the random walk approximates a Brownian motion for small $\eps$) that $\sup_{\eps \le 1} \sup_{t\le 1} P_t = \sup_{\eps \le 1} P_1 < 1$, the claim follows at once by choosing $\delta$ sufficiently small.

This result can now be used to deduce the announced uniform tightness result over fixed times:

\[prop:compact\] Consider the setting of Theorem \[theo:tight\]. Then, for every $\delta>0$ there exists a compact set $K_\delta \subset \CX$ such that $\P_{X_0}(\mu_t^\eps \in K_\delta) > 1-\delta$, uniformly over $t \in [0,1]$, $\eps \le \eps_0$ and $X_0 \in \CM^0$.

For any $n\in \Z_+$ and $R \in \R_+$, denote
$$\CM^{[n]} = \bigl\{(x,v,\ell)\,:\, \ell\le n\bigr\}\;,\qquad B_R = \bigl\{(x,v,\ell)\,:\, |x| \vee |v| \le R\bigr\}\;.$$
For any such $n$ and $R$ and for $m \in \Z_+$, we then denote by $K_{n,m,R} \subset \CX$ the set of all integer-valued measures $\eta$ on $\CM$ such that $\eta(\CM \setminus \CM^{[n]}) = 0$, $\eta(\CM) \le m$, and $\eta(\CM \setminus B_R) = 0$. Since these sets are obviously compact, it suffices to find, for every $\delta > 0$, sufficiently large values $n$, $R$ and $m$ so that $\P_{X_0}(\mu_t^\eps \in K_{n,m,R}) > 1-\delta$ uniformly over all $\eps \le \eps_0$ and $t \in [0,1]$.

Note that $K_{n,m,R} \subset K_n^1 \cap K_m^2 \cap K_R^x \cap K_R^v$ where, for $K_n^1$ and $K_m^2$, we only enforce the conditions involving $n$ and $m$ respectively. The set $K_R^x$ consists of configurations such that the $x$-coordinate of every particle is less than $R$ in absolute value, while $K_R^v$ enforces that the $v$-coordinate be less than $R$. It follows immediately from that there exist $\gamma>0$ and a constant $C$ (depending in principle on the time we consider, but they can be chosen uniformly over $t \in [0,1]$), such that
$$\P_{X_0}\bigl(\mu_t^\eps \not\in K_n^1\bigr) \le C e^{-\gamma n}\;,$$
for every $n \ge 0$. Similarly, it follows from the moment bounds on $N_t$ obtained in Proposition \[prop:numPart\] that, for every $p>0$, there exists $C$ such that
$$\P_{X_0}\bigl(\mu_t^\eps \not\in K_m^2\bigr) \le {C \over m^p}\;.$$
In order to get a bound on the $x$-coordinate of the particles, we combine Proposition \[prop:moments\] with Chebyshev’s inequality, so that
$$\P_{X_0}\bigl(\mu_t^\eps \not\in K_R^x\bigr) \le {C \over R^{2p}}\;.$$
It remains to obtain a bound on the probability of not being in $K_R^v$. For this, we use the fact that, on the one hand, the label of a particle always satisfies $v > -ax$. On the other hand, any descendant of the initial particle will always satisfy $v \le -a\sup_{t \le 1} x^{0}_t$, where here we denote by $x^{0}_t$ the position of the original particle at time $t$. Since this particle was assumed to start at the origin, we obtain $v^2 \le a^2 x^2 + a^2\sup_{t \le 1} (x^0_t)^2$, so that we obtain the bound
$$\P_{X_0}\bigl(\mu_t^\eps \not\in K_R^v\bigr) \le {C \over R^{2p}}\;,$$
just as above. Combining all of these bounds, the claim follows by choosing $n$, $R$ and $m$ large enough.

Kolmogorov criterion {#sec:Kolmo}
--------------------

The aim of this section is to obtain the following bound on the time regularity of our process:

\[prop:Kolmogorov\] For every $p \le 1$ and $q \ge 1$, there exists a constant $C$ such that
$$\E_{X_0} \|\mu_\delta^\eps - \delta_{X_0}\|_p^q \le C\delta^{p q/2}\;,$$
where $X_0\in \CM^0$ is an initial condition with only one particle in the system, and $\delta < 1$ is an integer multiple of $\eps$.

As usual, the precise location of the particle is irrelevant by translation invariance, so the above bound is uniform over all choices of $X_0$. Before we proceed to the proof of Proposition \[prop:Kolmogorov\], we observe that the bound implies that Kolmogorov’s criterion holds for the process over a fixed interval of time:

\[cor:Kolmogorov\] For every $p \le 1$ and $q \ge 1$, one has the bound
$$\E_{X_0} \|\mu_{t+\delta}^\eps - \mu_t^\eps\|_p^q \le C\delta^{p q/2}\;,$$
uniformly over all $\delta, t, \eps \in [0,1]$.

Note first that we can restrict ourselves to the case when $t$ and $\delta$ are integer multiples of $\eps$.
Indeed, it follows from the definition of $\|\cdot\|_p$ and from that, if $k\eps \le s < t \le (k+1)\eps$, then
$$\|\mu_{s}^\eps - \mu_t^\eps\|_p \le \eps^{-p}\, |t-s|^p\, \|\mu_{k\eps}^\eps - \mu_{(k+1)\eps}^\eps\|_p\;.$$
We then obtain from Proposition \[prop:Kolmogorov\] the bound
$$\begin{aligned}
\E_{X_0} \|\mu_{t+\delta}^\eps - \mu_t^\eps\|_p^q &= \E_{X_0} \Bigl(\E_{X_0} \bigl( \|\mu^\eps_{t+\delta}- \mu^\eps_t\|_p^q\,\big|\, \CF_t\bigr)\Bigr)\\
&\le \E_{X_0} \Bigl(|N_t|^{q-1}\sum_{j=1}^{N_t}\E_{X_t^{(j)}} \|\mu_\delta^\eps - \delta_{X_t^{(j)}}\|_p^q\Bigr)\\
&\le C \delta^{pq/2}\, \E_{X_0} \bigl(|N_t|^{q}\bigr) \le C \delta^{pq/2}\;,\end{aligned}$$
where we made use of the Markov property, , and .

Denote by $X_t$ the location at time $t$ of a single particle starting at $X_0 = (x_0,v_0,0)$ and evolving under the rescaled random walk stopped when it reaches the boundary of $\CM$. Denote by $x_t$ the position in $\R$ corresponding to $X_t$. From the properties of the random walk and the definition of $\|\cdot\|_p$, for every $q>1$ there exists a constant $C$ such that the bound
$$\E_{X_0} \|\delta_{X_\delta}- \delta_{X_0}\|_p^q \le C\delta^{qp/2}$$
holds, independently of the initial condition and independently of $\eps \le 1$. Let us now bound the contribution from the descendants of the initial particle. Ordering the particles alive at time $t$ in such a way that the original particle has label $1$ (if it is not alive any more, we consider it as being located on the boundary, where it was stopped), we have the bound
$$\|\delta_{X_\delta}- \mu_\delta^\eps\|_p \le \sum_{j = 2}^{N_\delta} d_p\bigl(X_\delta^{(j)}, \d\CM\bigr)\;,$$
so that
$$\begin{aligned}
\E_{X_0} \|\delta_{X_\delta}- \mu_\delta^\eps\|_p^q &\le \E_{X_0} \Bigl(\sum_{j = 2}^{N_\delta} d_p\bigl(X_\delta^{(j)}, \d\CM\bigr)\Bigr)^q\\
&\le \Bigl(\Bigl(\E_{X_0} (N_\delta-1)^{2q-1}\Bigr)\, \Bigl(\E_{X_0} \sum_{j = 2}^{N_\delta} d_p^{2q}\bigl(X_\delta^{(j)}, \d\CM\bigr)\Bigr)\Bigr)^{1/2}\\
&\le C\Bigl(\E_{X_0} \sum_{j = 2}^{N_\delta} d_p^{2q}\bigl(X_\delta^{(j)}, \d\CM\bigr)\Bigr)^{1/2}\;,\end{aligned}$$
where we used the Cauchy–Schwarz inequality in the second step, and the last inequality follows from Proposition \[prop:numPart\]. Recall now that if $X_\delta^{(j)} = (x,v,n)$, then one has $d_p(X_\delta^{(j)}, \d \CM) = |x-v/a|^p$, where $v/a$ is guaranteed to take values between $\inf_{s \le \delta} x_s$ and $x$.
As a consequence, we have the bound
$$|x-v/a|^p \le \Bigl|x - \inf_{s \le \delta} x_s\Bigr|^p \le |x-x_0|^p + \sup_{s \le \delta} |x_s-x_0|^p\;,$$
so that
$$\begin{aligned}
\E_{X_0} \sum_{j = 2}^{N_\delta} d_p^{2q}\bigl(X_\delta^{(j)}, \d\CM\bigr) &\le C\,\E_{X_0}\Bigl( N_\delta\, \sup_{s \le \delta} |x_s-x_0|^{2pq}\Bigr)\\
&\quad + C\,\E_{X_0} \int_\CM |x-x_0|^{2pq}\, \mu_\delta^\eps(dx,dv,dk) \le C\delta^{pq}\;,\end{aligned}$$
where the last bound is a consequence of Propositions \[prop:numPart\] and \[prop:moments\], as well as standard bounds on the supremum of a random walk. The claim now follows at once.

Convergence of fixed-time distributions {#sec:convFan}
=======================================

In this section, we show that any limiting process obtained from the tightness result of the previous section necessarily coincides with the Brownian fan constructed in Section \[sec:defFanMP\]. With the notations of that section at hand, our convergence result can be formulated as follows.

\[theo:finalConv\] Consider the setting of Theorem \[theo:tight\] with an initial condition $X_0 = (x,v,0) \in \CM$, and consider $\mu_t$ as above with the “initial condition” for $\mu^{[\infty]}$ given by a Brownian motion starting at $x$, killed when it reaches the level $-v/a$. Then, for every $p \in (0,1]$, there exists a version of the process $\{\mu_t\}_{t \ge 0}$ which is a continuous Markov process with values in $\ell^p(\CM)$. Furthermore, denoting the law of its restriction to the time interval $[0,1]$ by $\CL$, the sequence of measures $\CL^\eps$ converges weakly to $\CL$ in $\CC([0,1], \ell^p(\CM))$.

It suffices to show that, for any fixed collection of times $\{t_1,\ldots, t_k\}$, the law of $\{\mu_{t_i}^\eps\}_{i \le k}$ converges weakly to that of $\{\mu_{t_i}\}_{i \le k}$. Indeed, Corollary \[cor:Kolmogorov\] then implies that the process $\mu_t$ satisfies Kolmogorov’s continuity criterion and therefore has a continuous version. By Theorem \[theo:tight\], we deduce weak convergence in $\CC([0,1], \ell^p(\CM))$ from the convergence of marginals.
Using the Markov property, the superposition property of the process, and Proposition \[prop:numPart\], we reduce ourselves to the case $k=1$ with an initial condition consisting of one single particle. Denote now by $\mu^{\eps,[\infty]}$ the random integer-valued measure on $\CE$ obtained by running Algorithm \[tdmc\] until time $1$. Observe that $\mu^{\eps,[\infty]}$ can be built in the following way. For an excursion $w \in \CE$, we build a random measure $\QQ^\eps(w)$ by the following procedure. For every $k \in \N$ with $\eps k > \s(w)$ and $\eps(k+1) < \e(w) \wedge 1$, we set
$$\Delta w_k = w_{(k+1)\eps}-w_{k\eps}\;.$$
If $\Delta w_k > 0$, we then draw a random variable $N^1_{k}$ with law $\CI(\exp(a\Delta w_k)-1)$ (see and Lemma \[lem:unif\]). For $j=1,\ldots, N^1_{k}$, we build i.i.d. excursions $w^{k,j} \in \CE$ by the following procedure. First, draw a uniform random variable $u \sim \CU(\exp(-a \Delta w_k),1)$ and set $v = \log u -a w_{k\eps}$. Then, denote by $\{y^{k,j}_{\ell\eps}\}_{\ell=0}^L$ an instance of the random walk , started at $w_{(k+1)\eps}$ and stopped just before it becomes smaller than $-v/a$ (so that $y^{k,j}_{L\eps} > -v/a$). The excursion $w^{k,j}$ is then given by $\s(w^{k,j}) = k\eps$, $\e(w^{k,j}) = (L+k+2)\eps$, $w^{k,j}_{\eps (\ell+k+1)} = y^{k,j}_{\ell\eps}$ for $\ell \in \{0,\ldots,L\}$, $w^{k,j}(\ell \eps) = -v/a$ for the remaining integer values of $\ell$, and linear interpolation in between integer values. We then set
$$\QQ^\eps(w) = \sum_{k=1}^\infty \sum_{j=1}^{N^1_k} \delta_{w^{k,j}}\;,$$
which is the point measure describing the children of the particle with trajectory $w$.

Similarly to before, we build $\mu^{\eps,[\infty]}$ recursively by the following procedure: Build an excursion $w^0 \in \CE$ as above, starting at $x$ and stopped at $-v/a$, where $(x,v,0)\in \CM$ is the initial condition appearing in the statement. Set $\mu^{\eps,0} = \delta_{w^0}$.
Given $\mu^{\eps,\ell}$, define $\mu^{\eps,\ell+1}$ by
$$\mu^{\eps,\ell+1} = \int_\CE \QQ^\eps(w)\,\mu^{\eps,\ell}(dw)\;,$$
where the $\{\QQ^\eps(w)\}_{w \in \CE}$ are all independent (and independent of the $\mu^{\eps,\ell'}$ with $\ell' \le \ell$). Note that the integral is actually a finite sum, so the construction makes sense. Set $\mu^{\eps,[\ell]} = \sum_{\ell'=0}^\ell \mu^{\eps,\ell'}$ for positive $\ell$ (including the case $\ell = \infty$). If we set
$$\mu_t^\eps = E_t^\star\, \mu^{\eps,[\infty]}\;,$$
where $E_t$ is defined as in Section \[sec:defFanMP\], then the process $\mu_t^\eps$ is indeed equal in law to the process considered in Section \[sec:tightness\][^1].

Denote now by $\CE_\gamma$ the set of excursions of height at least $\gamma$, namely
$$\CE_\gamma = \bigl\{w \,:\, \exists\, t\le \ell(w)\;:\; w_t \ge w(\s(w)) + \gamma\bigr\}\;.$$
We also write $\mu^{\eps,[n]}_\gamma$ for the measure obtained exactly like $\mu^{\eps,[n]}$, but where we replace $\QQ^\eps(w)$ by its restriction $\QQ^\eps_\gamma(w)$ to the set $\CE_\gamma$ at every step, so that $\mu^{\eps,[n]}_\gamma \le \mu^{\eps,[\infty]}$ almost surely. In words, $\QQ^\eps_\gamma$ is obtained from $\QQ^\eps$ by discarding all excursions of height less than $\gamma$, as well as the descendants of any such excursions. Combining Propositions \[prop:numPart\] and \[prop:moments\], we see that, for $\eps_0$ small enough, there exist constants $C$ and $\alpha > 0$ such that one has the bound
$$\P\bigl(E_t^\star\mu^{\eps,[\infty]} \neq E_t^\star\mu^{\eps,[n]}_\gamma\bigr) \le C \bigl(e^{-\alpha n} + \gamma^p\bigr)\;,$$
uniformly over $\eps \le \eps_0$ and $t \in [0,1]$. Following an argument along the lines of the proof of Theorem \[theo:Nt\], a similar bound can be shown to hold for $\P \bigl(E_t^\star\mu^{[\infty]} \neq E_t^\star\mu^{[n]}_\gamma\bigr)$, where $\mu^{[n]}_\gamma$ is the recursive Poisson process of depth $n$ constructed like $\mu^{[\infty]}$, but with $\CQ(w,\cdot)$ replaced by its restriction $\CQ_\gamma(w,\cdot)$ to $\CE_\gamma$.
As a consequence, we conclude that it is sufficient to show that, for every fixed $\gamma > 0$ and $n \ge 1$,
$$\lim_{\eps\to 0} \CD\bigl(\mu^{\eps,[n]}_\gamma\bigr) = \CD\bigl(\mu^{[n]}_\gamma\bigr)\;,$$
where $\CD(\cdot)$ denotes the law of a random variable and convergence is to be understood in the sense of weak convergence on the space $\MM_+(\CE)$ endowed with the Wasserstein-$1$ distance (see [@Villani] and Section \[sec:convRPP\] below).

Our aim then is to make use of Theorem \[theo:convPPP\] below, which gives a general convergence result for recursive Poisson point processes. The drawback is that $\mu^{\eps,[n]}_\gamma$ is itself not a recursive Poisson point process, due to the fact that $\QQ^\eps_\gamma(w)$ is not a realisation of a Poisson point process. However, it would be one if, in the construction of $\QQ^\eps(w)$, we had drawn $N^1_k$ according to a Poisson distribution with mean $\exp(a\Delta w_k)-1$. Denote by $\QQt(w)$ the Poisson point process obtained in this way, by $\QQtd$ its restriction to $\CE_\gamma$, and by $\tilde \mu^{\eps,[n]}_\gamma$ the recursive Poisson point process of depth $n$ obtained by iterating $\QQtd$. We then claim that it is possible to find a coupling between $\tilde \mu^{\eps,[n]}_\gamma$ and $\mu^{\eps,[n]}_\gamma$ such that
$$\P\bigl(\tilde\mu^{\eps,[n]}_\gamma \neq \mu^{\eps,[n]}_\gamma\bigr) \le C_{n,\gamma}\, \sqrt\eps\;,$$
where the constant $C_{n,\gamma}$ depends on $n$ and $\gamma$, but not on $\eps$. Indeed, if we denote by $U_\lambda$ a random variable with law $\CI(\lambda)$ and by $\bar U_\lambda$ a Poisson random variable with mean $\lambda$, then it is straightforward to check that there exists a constant $C$ and a coupling between $U_\lambda$ and $\bar U_\lambda$ such that
$$\P\bigl(|U_\lambda - \bar U_\lambda| = 1\bigr) \le C\bigl(1\wedge\lambda^2\bigr)\;,\qquad \P\bigl(|U_\lambda - \bar U_\lambda| > 1\bigr) \le C \bigl(1\wedge\lambda^3\bigr)\;.$$
Furthermore, by Proposition \[prop:defG\], the probability that the random walk started at $x$ reaches $\gamma$ before becoming negative is bounded from above by $C_\gamma (x+\sqrt \eps)$ for some constant $C_\gamma$ depending on $\gamma$.
As a consequence, one can construct a coupling between $\QQ^\eps_\gamma(w)$ and $\QQtd(w)$ such that
$$\P\bigl(\QQ^\eps_\gamma(w) \neq \QQtd(w)\bigr) \le C_\gamma \sum_{k} |\Delta w_k|^2\, \bigl(|\Delta w_k| + \sqrt\eps\bigr)\;.$$
Similarly, regarding the total mass of $\QQ^\eps_\gamma$, one has the bound
$$\E\, \QQ^\eps_\gamma(w,\CE) = \E\, \QQ^\eps(w,\CE_\gamma) \le C_\gamma \sum_{k} |\Delta w_k|^2\, e^{c |\Delta w_k|}\;,$$
for some constant $c$. If $w$ is an excursion created by the procedure above, the expected values of and are bounded by $C_\gamma \sqrt \eps$ and $C_\gamma$ respectively, for a possibly different constant depending on $\gamma$. Proceeding as in Lemma \[lem:intPPP\], it follows that, if we set
$$F_\eps(w) = 1 + {1\over \sqrt\eps}\sum_{k} |\Delta w_k|^3 + \sum_{k} |\Delta w_k|^2\, e^{c |\Delta w_k|}\;,$$
we obtain the bound $\E \int_\CE F_\eps(w)\, \mu^{\eps,[n]}_\gamma(dw) < C_{n,\gamma}$, uniformly over $\eps$. (But this constant might potentially grow very fast as $\gamma \to 0$ and $n \to \infty$!) Combining this with , the bound then follows at once.

As a consequence, it is sufficient to show, instead of , that
$$\lim_{\eps\to 0} \CD\bigl(\tilde\mu^{\eps,[n]}_\gamma\bigr) = \CD\bigl(\mu^{[n]}_\gamma\bigr)\;.$$
This is the content of Proposition \[prop:convFD\] below, which completes the proof.

The remainder of this section is devoted to the proof of . The outline of the proof goes as follows. First, in Section \[sec:convExc\], we show that the law of a single excursion of the random walk , conditioned on hitting a prescribed level $\gamma$, converges as $\eps \to 0$ to the Brownian excursion, conditioned to reach $\gamma$. In Section \[sec:convRPP\] we then provide a general criterion for the convergence of recursive Poisson point processes. Finally, in Section \[sec:convFD\] we combine these results in order to obtain .

Convergence of excursion measures {#sec:convExc}
---------------------------------

As before, we denote by $y_t$ the rescaled random walk given by
$$y_{(k+1)\eps} = y_{k\eps} + \sqrt\eps\, \xi_{k+1}\;,$$
where the $\xi_k$ are an i.i.d. sequence of random variables with law $\nu$. As before, we extend this to arbitrary times by linear interpolation.
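For reference, the rescaled walk above is straightforward to simulate; in the sketch below (Python) we take $\nu$ to be standard Gaussian purely for illustration, whereas the text only assumes exponential integrability of $\nu$:

```python
import math
import random

def rescaled_walk(y0, eps, n_steps, rng=random):
    """Simulate y_{(k+1)eps} = y_{k eps} + sqrt(eps) * xi_{k+1}, with the
    step law nu taken to be standard Gaussian (an illustrative choice)."""
    path = [y0]
    for _ in range(n_steps):
        path.append(path[-1] + math.sqrt(eps) * rng.gauss(0.0, 1.0))
    return path
```

Over the unit time interval ($1/\eps$ steps) the walk has variance of order $1$, consistent with the diffusive scaling that produces a Brownian limit.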
We also assume that $\nu$ has some exponential moment as before. The aim of this section is to show that if we start $y_t$ at some initial condition $x \sim \sqrt \eps$ and condition it on reaching a prescribed height $\gamma$ before becoming negative, then its law converges to that of an unnormalised Brownian excursion, conditioned to reach height $\gamma$ (call it $w^\gamma$). Throughout this whole section, we only consider the process on a fixed time interval, which for definiteness we choose to be equal to $[0,1]$. A more precise description of the law $\Q_\gamma$ of $w^\gamma$ is given by the identity
$$\Q_\gamma(\cdot) = {\int_0^\infty s^{-3/2}\, g_\gamma(s)\, \Q_{s,\gamma}(\cdot)\,ds \over \int_0^\infty s^{-3/2}\, g_\gamma(s)\,ds}\;,$$
where $\Q_{s,\gamma}$ denotes the law of a Brownian excursion of length $s$, conditioned to reach level $\gamma$, and
$$g_\gamma(s) = \Q_{s,0}\Bigl(\Bigl\{w\,:\, \sup_{t\le s} w_t \ge \gamma\Bigr\}\Bigr) = \Q\Bigl(\Bigl\{w\,:\, \sup_{t} w_t \ge \gamma\Bigr\}\,\Big|\, \ell(w) = s\Bigr)\;,$$
where $\Q$ is the standard Itô excursion measure. Since $g_\gamma(s) \to 0$ exponentially fast for small $s$, this does indeed define a probability measure on $\CC(\R_+, \R)$. We then turn this into a probability measure on $\CC([0,1],\R)$ by restriction.

We view $\CC([0,1],\R)$ as a subset of $\CE$ via the injection $\iota\colon \CC([0,1],\R)\hookrightarrow \CE$ given by
$$\s(\iota w) = 0\;,\qquad \e(\iota w) = 1 \wedge \inf\{t > 0\,:\, w(t) = 0\}\;,$$
and by setting the path component of $\iota w$ equal to $w$, stopped when it reaches the time $\e(\iota w)$. We endow $\CE$ as before with the distance $d$ given in . Since we only deal with excursions starting at $0$ and stopped before time $1$, the distance $d$ is equivalent on this set to the (pseudo-)distance $\bar d$ given by
$$\bar d (w,w') = 1 \wedge \Bigl(|\e(w) - \e(w')| + \sup_{t \le 1} |w_t - w'_t|\Bigr)\;.$$
Regarding the space $\CC([0,1],\R)$, we endow it throughout this section with the metric
$$d(w,w') = 1 \wedge \sup_{t \le 1} |w_t - w'_t|\;.$$
We furthermore denote by $\|\cdot\|_d$ the Wasserstein-$1$ metric associated with any distance function $d$, which is just the dual of the corresponding Lipschitz (semi-)norm.
The main theorem of this section is the following:

\[theo:convLaws\] Let $x > 0$ and denote by $\Q^{\eps}_{z,\gamma}$ the law of the random walk , starting at $z = \sqrt \eps x$ and conditioned to reach level $\gamma$ before becoming negative. Then, for every $\bar \beta < {1\over 16}$ there exists a constant $C$ such that
$$\|\Q^\eps_{z,\gamma} - \Q_\gamma\|_d \le C\, \eps^{\bar\beta}\;,\qquad \|\iota^\star\Q^\eps_{z,\gamma} - \iota^\star\Q_\gamma\|_{\bar d} \le C\, \eps^{2\bar\beta\over 5}\;,$$
uniformly over all $\eps \in (0,1]$, all $x \in [0, \eps^{-1/3}]$, and all $\gamma \in [\eps^{1/16},1]$.

Our main abstract ingredient in the proof is the following criterion for the convergence of conditional probabilities when the probability of the set on which the measures are conditioned converges to $0$:

\[lem:convCond\] Let $\mu$ and $\pi$ be two probability measures on some Polish space $\CY$ with metric $d$ and diameter $1$, and let $\CD_\mu\colon \CY \to [0,1]$ and $\CD_\pi\colon \CY \to [0,1]$. For $\rho > 0$, set
$$A_\rho = \bigl\{x \,:\, \exists y\;\; \CD_\pi(y) > \CD_\pi(x) + d(x,y)/\rho\bigr\}\;,\qquad \bar A_\rho = \bigl\{x \,:\, d(x,A_\rho) \le \rho\bigr\}\;.$$
Assume that $\delta$, $\eps_1$ and $\eps_2$ are such that
$$\int_\CY \CD_\pi(x)\,\pi(dx) \ge \delta\;,\qquad \|\mu - \pi\|_d \le \eps_1\;,\qquad \sup_x |\CD_\mu(x) - \CD_\pi(x)| \le \eps_2\;,$$
where $\|\cdot\|_d$ denotes the Wasserstein-$1$ distance with respect to $d$. Define measures $\tilde\mu$ and $\tilde \pi$ by
$$\tilde\mu(A) = c_\mu \int_A \CD_\mu(x)\,\mu(dx)\;,\qquad \tilde\pi(A) = c_\pi \int_A \CD_\pi(x)\,\pi(dx)\;,$$
where $c_\mu$ and $c_\pi$ are such that these are probability measures. Then, the bound
$$\|\tilde\mu - \tilde\pi\|_d \le {C\over\delta}\Bigl({3\eps_1\over\rho} + \eps_2 + 2\pi(\bar A_\rho)\Bigr)$$
holds for every $\rho \le 1$. In particular, one has $\int_\CY \CD_\mu(x)\, \mu(dx)>0$ whenever the right-hand side in is strictly smaller than $1$, so that the bound is non-trivial.

Let $f \colon \CY \to \R$ be a test function such that $\Lip_d(f) \le 1$. Since the diameter of $\CY$ is assumed to be $1$, we can assume without loss of generality (by possibly adding a constant to $f$) that $\sup_x |f(x)| \le {1\over 2}$. Since one has the identity
$$\|\tilde\mu - \tilde\pi\|_d = \sup_{\Lip_d(f) \le 1} \Bigl(c_\mu\int f(x)\,\CD_\mu(x)\,\mu(dx) - c_\pi\int f(x)\,\CD_\pi(x)\,\pi(dx)\Bigr) = \sup_{\Lip_d(f) \le 1} \CI_f\;,$$
our aim is to bound $\CI_f$, uniformly over $f$.
Note first that, by the bound on $f$, $$\CI_f \le {c_\pi\over 2}\Bigl|{1\over c_\mu} - {1\over c_\pi}\Bigr| + {c_\pi\over 2} \int |\CD_\mu(x)-\CD_\pi(x)|\,\mu(dx) + c_\pi\Bigl|\int f(x)\,\CD_\pi(x)\,\mu(dx)-\int f(x)\,\CD_\pi(x)\,\pi(dx)\Bigr|\;.$$ Note that the first term is nothing but a particular instance of the last term with $f = {1\over 2}$. Since the second term is furthermore trivially bounded by $\eps_2 / (2\delta)$, it suffices to bound the last term. The problem in bounding this term arises of course from the fact that $\CD_\pi$ is not Lipschitz continuous. For any $\rho > 0$, we can however “mollify” this function by setting $$\CD_\pi^\rho(x) = \sup_{y \in \CY} \Bigl(\CD_\pi(y) - {d(x,y)\over \rho}\Bigr)\;.$$ It is then straightforward to check that $\Lip_d (\CD_\pi^\rho) \le \rho^{-1}$, that $\CD_\pi (x) \le \CD_\pi^\rho(x) \le \sup_y\CD_\pi (y)$, and that furthermore $\CD_\pi^\rho(x) = \CD_\pi(x)$ for all $x \not \in A_\rho$. It then follows from the definition of $\eps_1$ that $$\Bigl|\int f(x)\,\CD_\pi^\rho(x)\,\mu(dx)-\int f(x)\,\CD_\pi^\rho(x)\,\pi(dx)\Bigr| \le \eps_1 \bigl(1 + (2\rho)^{-1}\bigr)\;.$$ Furthermore, $$\Bigl|\int f(x)\,\CD_\pi^\rho(x)\,\pi(dx)-\int f(x)\,\CD_\pi(x)\,\pi(dx)\Bigr| \le {\pi(A_\rho)\over 2}\;,$$ and similarly for the term with $\pi$ replaced by $\mu$. In order to bound $\mu(A_\rho)$, we set as above $$F^\rho(x) = \sup_{y \in \CY} \Bigl(\one_{A_\rho}(y) - {d(x,y)\over \rho}\Bigr)\;,$$ so that $$\mu(A_\rho) \le \int F^\rho(x)\, \mu(dx) \le \int F^\rho(x)\, \pi(dx) + {\eps_1\over \rho} \le \pi(\bar A_\rho) + {\eps_1\over \rho}\;,$$ where we used the fact that $F^\rho$ vanishes outside of $\bar A_\rho$. Collecting all of these bounds, the claim follows.

An alternative description of $w^\gamma$ is given by the following. Let $Y$ be a Bessel-$3$ process starting at the origin and let $\tau_\gamma$ be its first hitting time of $\gamma$, i.e. $\tau_\gamma = \inf\{t \ge 0\,:\, Y_t \ge \gamma\}$. Let furthermore $B$ be a Brownian motion independent of $Y$, which is stopped when it reaches the level $\gamma$. Then, one has the decomposition $$w^\gamma_t = \left\{\begin{array}{cl} Y_t & \text{for } t \le \tau_\gamma\;,\\ \gamma - B_{t-\tau_\gamma} & \text{for } t > \tau_\gamma\;.\end{array}\right.$$ This is a consequence of the symmetry of the Brownian excursion under time reversal, combined with [@RogWil Theorem 49.1] for example.
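The decomposition above suggests a direct way of sampling $w^\gamma$ on a time grid: run the norm of a $3$-dimensional Brownian motion until it first exceeds $\gamma$, then continue with $\gamma$ minus an independent Brownian motion until the path returns to $0$. The sketch below is an Euler-grid illustration of this (the function name and discretisation are ours, not from the text).

```python
import random

def sample_excursion(gamma, dt, rng):
    """Grid sketch of the decomposition: a Bessel-3 path (norm of a
    3-d Brownian motion started at the origin) run until it first
    exceeds gamma, followed by gamma minus an independent Brownian
    motion, stopped when the path returns to 0."""
    sd = dt ** 0.5
    # Bessel-3 part: norm of a 3-dimensional Brownian motion.
    bx = by = bz = 0.0
    w = [0.0]
    while w[-1] < gamma:
        bx += sd * rng.gauss(0.0, 1.0)
        by += sd * rng.gauss(0.0, 1.0)
        bz += sd * rng.gauss(0.0, 1.0)
        w.append((bx * bx + by * by + bz * bz) ** 0.5)
    # Brownian part: w = gamma - B, stopped on its return to 0.
    b = 0.0
    while gamma - b > 0.0:
        b += sd * rng.gauss(0.0, 1.0)
        w.append(gamma - b)
    return w

rng = random.Random(2)
w = sample_excursion(gamma=0.3, dt=1e-3, rng=rng)
```

Note that the second stage has a heavy-tailed duration (the hitting time of a level by Brownian motion has infinite mean), which is consistent with the $s^{-3/2}$ weighting of excursion lengths above.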
We can use the above decomposition to obtain the following bound: \[lem:contMdelta\] For any $\beta < {1\over 4}$, there exists a constant $C$ such that, for every $\gamma,\gamma' \in (0,1]$, one has the bound $$\|\Q_\gamma - \Q_{\gamma'}\|_d \le C\, |\gamma - \gamma'|^\beta\;.$$ The decomposition suggests a natural coupling between $w^\gamma$ and $w^{\gamma'}$ by building them from the same basic building blocks $Y$ and $B$. The characterisation of the Bessel-$3$ process as the norm of a $3$-dimensional Brownian motion, together with standard hitting estimates for Brownian motion, imply that $$\P\bigl(|\tau_\gamma - \tau_{\gamma'}| > \delta\bigr) \lesssim 1 \wedge \sqrt{|\gamma - \gamma'| / \delta}\;,$$ so that in particular $\P(|\tau_\gamma - \tau_{\gamma'}| \ge \sqrt{|\gamma - \gamma'|}) \le C|\gamma - \gamma'|^{1/4}$. The result now follows from the fact that both $B$ and $Y$ are almost surely $\alpha$-Hölder continuous for every $\alpha < {1\over 2}$.

\[lem:convBM\] Let $B^\gamma_z$ be a Brownian motion starting at $z$, conditioned to hit $\gamma$ before $0$, and stopped upon its return to $0$. Then, for every $\beta < 1$, there exists a constant $C$ depending on $\beta$ such that the bound $$\bigl\|\CD(B^\gamma_\eps) - \Q_\gamma\bigr\|_d \le C\eps^\beta\;,$$ holds uniformly over $\eps \in (0,\gamma\wedge 1]$ and $\gamma > 0$. Let $w^\gamma$ be as above and let $\tau_\eps$ be the first passage time of $w^\gamma$ through $\eps$. Then, it follows from the decomposition that one has the exact identity $$\CD\bigl(B^\gamma_\eps\bigr) = \CD\bigl(w^\gamma(\cdot + \tau_\eps)\bigr)\;.$$ It then follows from the small ball estimates of Brownian motion that, for every $\bar \zeta < 2$, one has the bound $$\P\bigl(\tau_\eps \ge \eps^{\bar \zeta}\bigr) \lesssim \exp\bigl(-c\,\eps^{\bar\zeta - 2}\bigr)\;.$$ The desired estimate then follows at once from the Hölder regularity of $w^\gamma$.

\[prop:convEpsAlpha\] Let $\alpha \in (0,{1\over 8})$. Suppose further that $\gamma > 0$ is fixed and denote by $y^{\gamma}_t$ the random walk $y_t$ conditioned to hit $[\gamma,\infty)$ before hitting $\R_-$ and stopped when it then hits $\R_-$. Assume that $y^{\gamma}_0 = \eps^\alpha$. Then the law of $y^\gamma$ converges weakly to $\Q_\gamma$ as $\eps \to 0$.
Furthermore, for every $\beta > 0$, there exists a constant $C$ such that the bound (y\^) - \_\_d C (\^[[18]{}-]{} + \^[-]{}), holds uniformly over all $\eps \le 1$ and $\gamma \in [\eps^\alpha,1]$. By Lemma \[lem:convBM\], it suffices to compare the law of $y^\gamma$ with that of $B^\gamma_{\eps^\alpha}$. The result will then be a consequence of Lemma \[lem:convCond\]. To see this, we partition the state space $\CY = \{w \in \CC([0,1],\R)\,:\, w_0 = \eps^\alpha\}$ into three sets in the following way. Given a continuous function $w$ with $w_0 \in (0,\gamma)$, we set $\tau = 1 \wedge \inf\{t > 0\,:\, w_t \not \in [0,\gamma]\}$, and we define sets $A^{(i)}$ with $i\in \{1,2,3\}$ by &&&&&\ & A\^[(1)]{} = {w: w\_ = 0},&& A\^[(2)]{} = {w: w\_ = },&& A\^[(3)]{} = {w: = 1}. Define furthermore functions $F_\gamma$ and $F_\gamma^\eps$ on $\CY$ by F\_(w) = [w\_]{} ,F\_\^(w) = { [cl]{} 0 &\ 1 &\ |P\_[,[w\_1]{}]{} & . where $\bar P_{z,\gamma}$ is defined as in the discussion before Proposition \[prop:defG\]. With these definitions at hand, if we set $\mu = \CD(y)$ with $y_0 = \eps^\alpha,$ $\pi = \CD(B_{\eps^\alpha})$, $\CD_\mu = F_{\gamma}^\eps$, and $\CD_\pi = F_\gamma$, then we are precisely in the setting of Lemma \[lem:convCond\] with $\tilde \mu = \CD(y^\gamma)$ and $\tilde \pi = \CD(B_{\eps^\alpha}^\gamma)$. Note first that, since $F_\gamma$ is precisely the probability that a Brownian motion started from $w_1$ hits $\gamma$ before $0$, we have = \_(w) (dw) = [\^]{}. Furthermore, it follows from Corollary \[cor:boundRW\] (and from the fact that $F_\gamma$ and $F_\gamma^\eps$ coincide outside of $A^{(3)}$) that \_1 = \_[w]{} |F\_(w) - F\_\^(w)| . (We could have replaced ${1\over 4}$ by any exponent less than ${1\over 2}$ here.) Regarding the distance between the unconditioned measures, we obtain from [@ConvRateBM] the bound \_2 = - \_d \^[18]{}. It thus remains to obtain a bound on $\bar A_\rho$. 
By the definition of $A_\rho$ and of $F_\gamma$, $w \in A_\rho$ implies that either $w \in A^{(1)} \cup A^{(3)}$ and $d(w,A^{(2)}) \le \rho$ or $w\in A^{(1)}$ and $d(w,A^{(3)}) \le \rho$. This implies that A\_{w: \_[t ]{} w\_t } {w: \_[t ]{} w\_t } , so that |A\_{w: \_[t ]{} w\_t } {w: \_[t ]{} w\_t } . Since (by the reflection principle) the law of the extremum of Brownian motion over a finite time interval has a smooth density with respect to Lebesgue measure, there exists a constant $C$ independent of $\eps$ and $\delta$ such that $\pi(\bar A_\rho) \le C\rho$. Inserting these bounds into Lemma \[lem:convCond\], we thus obtain the bound (y\^) - (B\_[\^]{}\^)\_d ([\^[14]{} ]{} + \^[18]{} + ). Setting $\rho = \eps^{1\over 8} \gamma^{-{1\over 2}}$ completes the proof. We now have all the ingredients required for the proof of Theorem \[theo:convLaws\]. Assume as in the previous proof that $y^\gamma_t$ is the random walk $y_t$ conditioned to hit $[\gamma,\infty)$ before hitting $\R_-$ and stopped when it then hits $\R_-$. In contrast to the above setup we now assume that $y^\gamma_0 = z = x\sqrt \eps$ for some $x\geq 0$. Let $k_0$ be given by k\_0 = {k &gt; 0: y\^\_[k]{} \^[116]{}}. It then follows from [@ConvRateBM], combined with standard small ball estimates for Brownian motion that, for every $\beta < {1\over 8}$ and every $p>0$ there exists a constant $C$ such that ¶(k\_0 &gt; \^) \^p, uniformly over $\eps \le 1$, for all $x$ such that $x\sqrt \eps \le \eps^{1\over 16}$. Furthermore, the probability that $y^\gamma_{k_0\eps} > 2\eps^{1\over 16}$ (say) is exponentially small in $\eps$, again uniformly over $x$. It then follows from Proposition \[prop:convEpsAlpha\] (choosing $\alpha = {1\over 16}$) that, for every $\bar\beta < {1\over 16}$ one can construct a joint realisation of $y^\gamma$ and $w^\gamma$ such that d(y\^, w\^(-k\_0)) \^[[|]{}]{}. (Here we extend $w^\gamma$ by setting it to $0$ for negative times.) 
On the other hand, the Hölder regularity of the sample paths of $w^\gamma$ together with implies that $$\E\, d\bigl(w^\gamma, w^\gamma(\cdot - k_0\eps)\bigr) \lesssim \eps^{\bar\beta}\;,$$ so that the bound on $\|\Q^{\eps}_{z,\gamma} - \Q_\gamma\|_d$ follows. In order to obtain the bound on $\|\iota^\star\Q^{\eps}_{z,\gamma} - \iota^\star\Q_\gamma\|_{\bar d}$, we make use of the same coupling between $y^\gamma$ and $w^\gamma$ as above, so that $\E d(y^\gamma, w^\gamma) \lesssim \eps^{\bar \beta}$. It then remains to obtain a bound on $$\E\,\bigl|\e(\iota y^\gamma) - \e(\iota w^\gamma)\bigr|\;.$$ For this, note first that, by Chebychev’s inequality, one has the bound $$\P\bigl(d(y^\gamma, w^\gamma) \ge \eps^\beta\bigr) \lesssim \eps^{\bar\beta - \beta}\;,$$ valid for every $\beta \le \bar \beta$. Consider now any two paths $y^\gamma$ and $w^\gamma$ at distance less than $\eps^\beta$ and define $$\tau_1 = 1 \wedge \inf\bigl\{t > \tau_\gamma\,:\, w^\gamma(t) \le \eps^\beta\bigr\}\;,\qquad \tau_2 = 1 \wedge \inf\bigl\{t > \tau_1\,:\, w^\gamma(t) < -\eps^\beta\bigr\}\;.$$ In this way, one has both $\e(\iota w^\gamma) \in [\tau_1,\tau_2]$ and $\e(\iota y^\gamma) \in [\tau_1,\tau_2]$, so that it remains to obtain a bound on $\tau_2 - \tau_1$. The explicit expression for the hitting time of a line for a Brownian motion yields $$\P\bigl(\tau_2 - \tau_1 \ge \eps^\alpha\bigr) \lesssim \eps^{\beta - {\alpha\over 2}}\;,$$ for any $\alpha < 2\beta$. Choosing $\beta = {3\bar\beta\over 5}$ and $\alpha = {2\bar \beta \over 5}$ and combining this with , we then obtain $$\P\Bigl(\bigl|\e(\iota y^\gamma) - \e(\iota w^\gamma)\bigr| \ge \eps^{2\bar\beta\over 5}\Bigr) \lesssim \eps^{2\bar\beta\over 5}\;,$$ from which the bound follows.

Convergence of recursive Poisson point processes {#sec:convRPP}
------------------------------------------------

The aim of this section is to provide a general result allowing us to bound the distance between two recursive Poisson point processes of the same depth $n$ in terms of their respective kernels. This result is the main abstract result on which the proof of the convergence result will be based. One difficulty that we have to overcome is that there is very little uniformity in the proximity of the kernel describing $\mu^{\eps,[n]}_\gamma$ to the one describing $\mu^{[n]}_\gamma$.
Throughout this section, given a Polish space $\CX$ with a distance function $d$ bounded by $1$, we define the Wasserstein-$1$ distance between any two positive measures with finite mass (and not just probability measures!) by $$\|\mu - \pi\|_1 = \sup_{\Lip(f) \le 1,\; \|f\|_\infty \le 1} \Bigl(\int_\CX f(x)\, \mu(dx) - \int_\CX f(x)\, \pi(dx)\Bigr)\;.$$ If $\mu$ and $\pi$ happen to have the same mass, then the additional constraint on the supremum norm $\|f\|_\infty$ of $f$ is redundant in the above expression, and we simply recover the usual Wasserstein-$1$ distance. In the case where the masses of $\mu$ and $\pi$ are different however, our choice of definitions ensures that $$\|\mu - \pi\|_1 \approx |\mu(\CX) - \pi(\CX)| + |\mu(\CX)|\, \Bigl\|{\mu\over \mu(\CX)} - {\pi\over \pi(\CX)}\Bigr\|_1\;,$$ where $\approx$ denotes that both quantities are bounded by multiples of each other, with proportionality constants that are independent of $\mu$ and $\pi$. The main result of this section is the following:

\[theo:convPPP\] Let $\CQ$ and $\bar \CQ \colon \CX \to \MM_+(\CX)$ be two measurable maps and assume that the Polish space $\CX$ is endowed with a metric $d$ bounded by $1$. Let $A \subset \CX$, $\eps \in (0,1]$ and $K \ge 1$ be such that the bounds $$\begin{aligned} \sup_{x\in \CX} \CQ(x,\CX) &\le K\;,&\qquad \|\CQ(x)- \CQ(y)\|_1 &\le K d(x,y)\;,\quad \label{e:ass1}\\ \sup_{x\in A} \bar\CQ(x,A^{{\text{c}}}) &\le \eps\;,&\qquad \sup_{x\in A} \|\CQ(x) - \bar \CQ(x)\|_1 &\le \eps\;, \label{e:ass2}\end{aligned}$$ hold, where we use the notation $A^{\text{c}} = \CX \setminus A$. Fix $n>0$, $\bar x \in A$ and $x \in \CX$, and denote by $\mu_x^{[n]}$ and $\bar \mu_{\bar x}^{[n]}$ the recursive Poisson point processes with respective kernels $\CQ$ and $\bar \CQ$. Then, there exists a coupling between $\mu_x^{[n]}$ and $\bar \mu_{\bar x}^{[n]}$ such that $$\E\bigl(1 \wedge \|\mu_{x}^{[n]}-\bar\mu_{\bar x}^{[n]}\|_1\bigr) \le C\, \bigl(\sqrt\eps + d(x,\bar x)\bigr)\;,$$ where the proportionality constant $C$ depends only on $K$ and $n$. One useful feature of the way that we set up the bounds in the statement is that we only require information about the kernel $\bar \CQ$ on the set $A$.
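For concreteness, the modified distance just defined can be computed exactly when both measures are supported on two fixed points: the supremum is then a linear programme over the polygon $\{|f_1|\le 1,\ |f_2| \le 1,\ |f_1 - f_2| \le d\}$ and is attained at one of its six vertices. The helper below is a small illustration of the definition (the function name is ours, not from the text).

```python
def w1_two_points(mu, pi, d):
    """Exact value of the truncated Wasserstein-1 distance
        sup { <f, mu - pi> : Lip(f) <= 1, |f| <= 1 }
    for positive measures mu = (m1, m2), pi = (p1, p2) supported on two
    points at distance d <= 1.  The objective is linear, so the sup is
    attained at a vertex of the constraint polygon; enumerate them."""
    a1, a2 = mu[0] - pi[0], mu[1] - pi[1]
    vertices = [(1.0, 1.0), (-1.0, -1.0),
                (1.0, 1.0 - d), (1.0 - d, 1.0),
                (-1.0, -1.0 + d), (-1.0 + d, -1.0)]
    return max(a1 * f1 + a2 * f2 for f1, f2 in vertices)

# Equal masses at distance d: the usual Wasserstein-1 distance.
print(w1_two_points((1.0, 0.0), (0.0, 1.0), 0.4))  # 0.4
# Same support, different masses: the mass difference dominates.
print(w1_two_points((1.0, 0.0), (0.5, 0.0), 1.0))  # 0.5
```

The two printed values illustrate the $\approx$ relation above: the distance behaves like the mass difference plus the transport cost of the common mass.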
In the application we have in mind, the kernel $\bar \CQ$ will be the one describing $\tilde \mu^{\eps,[n]}_\gamma$, while the set $A$ will consist of trajectories exhibiting “typical” behaviour at small scales. Before we turn to the proof of this theorem, we show that if $\pi_n \to \pi$ in the Wasserstein-$1$ sense, then the (usual) Poisson point processes with these intensity measures also converge to each other weakly in the Wasserstein-$1$ distance:

\[prop:distPPP\] Let $\pi$ and $\bar \pi$ be two finite positive measures on a Polish space $\CX$ endowed with a metric $d \le 1$ and let $\mu$ and $\bar \mu$ be the corresponding Poisson point processes on $\CX$. Then, there exists a constant $C$ and a coupling between $\mu$ and $\bar \mu$ such that $$\E\bigl(1 \wedge \|\mu - \bar\mu\|_1\bigr) \le C\, \bigl(\|\pi - \bar\pi\|_1 \wedge 1\bigr)\;.$$

The proof relies on the fact that, if $\CP(\lambda)$ denotes the Poisson distribution with parameter $\lambda$, one has the total variation bound $$d_{\mathrm{TV}}\bigl(\CP(\lambda), \CP(\bar\lambda)\bigr) \le |\lambda - \bar\lambda| \wedge 1\;,$$ see for example [@MR2221228]. We can construct $\mu$ (and similarly for $\bar \mu$) in the following way. First, draw a Poisson random variable $N$ with parameter $\pi(\CX)$. Then, draw $N$ independent random variables $\{X_1,\ldots,X_N\}$ with law $\pi / \pi(\CX)$ and set $$\mu = \sum_{k=1}^N \delta_{X_k}\;.$$ By , we can now construct a Poisson random variable $\bar N$ with parameter $\bar \pi(\CX)$ such that $$\P\bigl(\bar N \neq N\bigr) \le |\pi(\CX) - \bar\pi(\CX)| \wedge 1\;.$$ Assuming that $\bar N = N$, we can then draw random variables $\{\bar X_1,\ldots,\bar X_N\}$ in such a way that the pairs $(\bar X_k, X_k)$ are distributed according to a coupling between $\pi / \pi(\CX)$ and $\bar \pi / \bar\pi(\CX)$ that minimises their expected distance. If $\bar N \neq N$, then we simply draw $\{\bar X_1,\ldots,\bar X_N\}$ according to $\bar \pi / \bar\pi(\CX)$, independently of the $X_k$. If we then define $\bar \mu$ similarly to , it follows that $$\|\mu - \bar\mu\|_1 \le \left\{\begin{array}{cl} \sum_{k=1}^{N} d(X_k, \bar X_k) & \text{if } \bar N = N\;,\\ N + \bar N & \text{otherwise}\;.\end{array}\right.$$
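The construction of $\mu$ used above (draw $N$ with a Poisson law of parameter $\pi(\CX)$, then $N$ i.i.d. points from $\pi/\pi(\CX)$) is straightforward to implement. The sketch below uses Knuth's product-of-uniforms Poisson sampler, since the Python standard library does not provide one; all names are illustrative.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's product-of-uniforms Poisson sampler."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_ppp(total_mass, point_sampler, rng):
    """Poisson point process with intensity measure pi, given as its
    total mass pi(X) and a sampler for the normalised law pi / pi(X):
    draw N ~ Poisson(pi(X)), then N i.i.d. points."""
    n = poisson(total_mass, rng)
    return [point_sampler(rng) for _ in range(n)]

rng = random.Random(3)
# Example: intensity 2 * Lebesgue on [0, 1], so pi(X) = 2.
points = sample_ppp(2.0, lambda r: r.random(), rng)
```

Coupling two such processes as in the proof amounts to coupling the two Poisson counts and then the two point clouds pairwise, which is exactly what the case distinction above quantifies.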
As a consequence, we obtain the bound $$\E\bigl(1 \wedge \|\mu - \bar\mu\|_1\bigr) \le \P(N \neq \bar N) + \Bigl\|{\pi\over \pi(\CX)} - {\bar\pi\over \bar\pi(\CX)}\Bigr\|_1\, \E N\;,$$ so that the claim follows from , the definition of $N$, and . Note first that, by combining the first bound in with the second bound in , we obtain the bound $$\sup_{x \in A} \bar\CQ(x,\CX) \le K + \eps\;.$$ It follows that we have the recursive bound $$\E \bigl(\bar \mu^n_{\bar x}(A) \,|\, \bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) = 0\bigr) \le 2K \E \bigl(\bar \mu^{n-1}_{\bar x}(A) \,|\, \bar \mu_{\bar x}^{[n-2]}(A^{\text{c}}) = 0\bigr)\;,$$ so that, since $\bar \mu^0_{\bar x}(A) = 1$ by assumption, $$\label{e:bound1} \E \bigl(\bar \mu^n_{\bar x}(A) \,|\, \bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) = 0\bigr) \le (2K)^n\;.$$ On the other hand, defining the positive measures $\pi^n$ and $\bar \pi^n$ by $$\pi^{n+1} = \int_{\CX} \CQ(y) \mu^n_{x} (dy)\qquad \text{and}\qquad \bar \pi^{n+1} = \int_{\CX} \bar\CQ(y) \bar \mu^n_{\bar x} (dy)$$ we have the inequality $$\begin{aligned} \P(\bar \mu_{\bar x}^{[n]}(A^{\text{\tiny c}}) > 0) &\le \P(\bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) > 0) + \P(\bar \mu^n_{\bar x}(A^{\text{c}}) > 0\,|\, \bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) = 0)\\ &\le \P(\bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) > 0) + \E\bigl( \bar \pi^n(A^{\text{c}}) \,|\, \bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) = 0\bigr)\\ &\le \P(\bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) > 0) + \eps D + \P \bigl(\bar \mu^{n-1}_{\bar x}(A) > D \,|\, \bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) = 0\bigr)\\ &\le \P(\bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) > 0) + \eps D + {1\over D}\E \bigl(\bar \mu^{n-1}_{\bar x}(A) \,|\, \bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) = 0\bigr)\\ &\le \P(\bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) > 0) + \eps D + {(2K)^{n-1}\over D}\;,\end{aligned}$$ which is valid uniformly over all $D> 0$.
Choosing $D \sim 1/\sqrt\eps $, we thus obtain the recursion relation $$\P(\bar \mu_{\bar x}^{[n]}(A^{\text{c}}) > 0) \le \P(\bar \mu_{\bar x}^{[n-1]}(A^{\text{c}}) > 0) + C\sqrt \eps\;,$$ from which it follows that $$\label{e:bound2} \P(\bar \mu_{\bar x}^{[n]}(A^{\text{c}}) > 0) \le C\sqrt \eps\;,$$ where in both cases the constant $C$ depends on $K$ and $n$, but not on $\eps$. Note now that we have the bound $$\begin{aligned} \|\pi^{n+1} - \bar \pi^{n+1}\|_1 &\le \Bigl\|\int_\CX \CQ(y)\, \bigl(\mu^n_x - \bar \mu^n_{\bar x}\bigr)(dy)\Bigr\|_1 + \int_\CX \|\CQ(y) - \bar \CQ(y)\|_1 \bar \mu^n_{\bar x}(dy) \\ &\le \|\mu^n_x - \bar \mu^n_{\bar x}\|_1 \bigl(\Lip( \CQ) + \| \CQ\|_\infty\bigr) + \eps \bar \mu^n_{\bar x}(A) \\ &\quad + \int_{A^{\text{c}}} \|\CQ(y) - \bar \CQ(y)\|_1 \bar \mu^n_{\bar x}(dx) \;,\end{aligned}$$ so that the bound $$1 \wedge \|\pi^{n+1} - \bar \pi^{n+1}\|_1 \le 2K \bigl(\|\mu^n_x - \bar \mu^n_{\bar x}\|_1 \wedge 1\bigr) + \eps D + \one_{\bar\mu^n_{\bar x}(A) > D} + \one_{\bar\mu^n_{\bar x}(A^{\text{c}}) > 0}\;,$$ is valid for every $D>0$. Furthermore, by Chebyshev’s inequality and –, one has $$\begin{aligned} \P(\bar \mu^n_{\bar x}(A) > D) &\le \P(\bar \mu^n_{\bar x}(A) > D \,|\, \bar\mu_{\bar x}^{n-1}(A^{\text{c}}) = 0\bigr) + \P \bigl(\bar\mu_{\bar x}^{n-1}(A^{\text{c}}) > 0\bigr)\\ &\le {K^n\over D} + C\sqrt \eps\;.\end{aligned}$$ Choosing again $D \sim 1/\sqrt \eps$, we thus obtain the bound $$\E\bigl(1 \wedge \|\pi^{n+1} - \bar\pi^{n+1}\|_1\bigr) \le 2K\, \E\bigl(\|\mu^n_x - \bar\mu^n_{\bar x}\|_1 \wedge 1\bigr) + C\sqrt\eps\;.$$ Applying Proposition \[prop:distPPP\], we conclude that, given $\mu^n_x$ and $\bar \mu^n_{\bar x}$, it is possible to construct a coupling between $\mu^{n+1}_x$ and $\bar \mu^{n+1}_{\bar x}$ such that $$\E\bigl(1 \wedge \|\mu^{n+1}_x - \bar\mu^{n+1}_{\bar x}\|_1\bigr) \lesssim \E\bigl(1 \wedge \|\mu^{n}_x - \bar\mu^{n}_{\bar x}\|_1\bigr) + \sqrt\eps\;,$$ from whence the claim now follows at once.

Convergence of the truncated distributions {#sec:convFD}
------------------------------------------

We are now in a position to provide the proof of .
Again, throughout this section, we make the standing assumption that the one-step distribution $\nu$ for the random walk has some exponential moment. We also use the notations $\tilde \mu^{\eps,[n]}_\gamma$ and $\mu^{[n]}_\gamma$ from . Let $\CE$ be the space of real-valued excursions as before. We then introduce the map $\J^\eps \colon \CE \to \MM_+(\R^2)$ given by $$\J^\eps(w)(dz,dt) = a \sqrt\eps \sum_{k \ge 0} e^{a \sqrt\eps z}\, \delta_{k\eps}(dt)\, \one_{[0,\Dpe_k w]}(z)\,dz\;,$$ where $\delta_t$ denotes the Dirac measure located at $t$, $\Dpe_k w$ is defined by $$\Dpe_k w = {w_{(k+1)\eps}-w_{k\eps}\over \sqrt\eps}\;,$$ and we used the convention that $\one_{[0,z]} = 0$ if $z < 0$. As before, denote by $\Q_{z,\gamma}^{\eps}$ the law of the random walk , starting at $z\sqrt \eps$, and conditioned to hit level $\gamma$ before becoming negative. We stop it as soon as it hits $\R_-$, so that we interpret $\Q_{z,\gamma}^{\eps}$ as a measure on $\CE_0$. Recall also that $\bar P_{z,\gamma/\sqrt \eps}$, with $\bar P_{z,\gamma}$ defined as in Section \[sec:BMformal\], denotes the probability that this event actually happens. With this notation, the measure $\CQ^\eps_\gamma$ describing $\tilde \mu^{\eps,[n]}_\gamma$ (i.e. $\CQ^\eps_\gamma(w,\cdot)$ is the intensity measure of $\QQtd(w)$) is given by $$\CQ^\eps_\gamma(w, \cdot) = \int_{\R^2} \Theta_{w,t}^\star\, \Q^\eps_{z,\gamma}\; \bar P_{z,\gamma/\sqrt\eps}\; \J^\eps(w)(dz,dt) = \int_{\R^2} \Bigl({1\over \gamma}\, \Theta_{w,t}^\star\, \Q^\eps_{z,\gamma}\Bigr)\bigl(\gamma\, \bar P_{z,\gamma/\sqrt\eps}\bigr)\, \J^\eps(w)(dz,dt)\;.$$ Note now that if $w$ is a typical realisation of $\Q^\eps_{z,\gamma}$, then $\J^\eps(w)$ is expected to be close to the measure $$\J(w)(dz,dt) = \one_{[0,\l(w)]}(t)\,dt\; \hat\nu(dz)\;,$$ where $\hat \nu$ is the measure on $\R_+$ given by $$\int \hat\nu(dz)\, G(z) = \int_0^\infty\!\! \int_0^z G(y)\,dy\; \nu(dz)\;,$$ for any test function $G$. This is because $e^{a \sqrt \eps \Dpe_k w} \sim 1$ and the law of $\Dpe_k w$ would be given by $\nu$, were it not for the conditioning.
On the other hand, the kernel $\CQ_\gamma$ describing the truncated Brownian fan $\mu^{[n]}_\gamma$ with parameter $a$ is given by $$\CQ_\gamma(w, \cdot) = {a\over 2\gamma}\int_0^{\l(w)} \Theta_{w,t}^\star\, \Q_\gamma\, dt = {1\over \gamma}\int_{\R^2} \Theta_{w,t}^\star\, \Q_\gamma\; G(z)\, \J(w)(dz,dt)\;,$$ where $\Q_\gamma$ is the law of a Brownian excursion conditioned to reach level $\gamma$ and $G$ was defined in . This is the case because of the well-known fact that ${1\over \gamma}\Q_\gamma$ is the law of the unnormalised Brownian excursion *restricted* to the set of excursions reaching level $\gamma$. The second identity in is a consequence of the definition of $\hat \nu$, combined with Proposition \[prop:expG\]. With these notations at hand, we have:

\[prop:convFD\] Consider the setting and assumptions of Theorem \[theo:finalConv\]. For every $\delta < {1\over 32}$, there exists a constant $C$ depending on $\gamma$, $n$ and $\delta$ such that $$\E\bigl(1 \wedge \|\tilde\mu^{\eps,[n]}_\gamma-\mu^{[n]}_\gamma\|_1\bigr) \le C \eps^\delta\;,$$ uniformly over $\eps \le \eps_0$ for some $\eps_0$ small enough. We apply Theorem \[theo:convPPP\] with $\CQ = \CQ_\gamma$, $\bar \CQ = \CQ^\eps_\gamma$, and $A$ to be determined. Using the results obtained earlier in this section, it turns out that the assumptions are then relatively straightforward to check. For convenience, we introduce the notation $$\tJe(w)(dz,dt) = G(z)\, \J^\eps(w)(dz,dt)\;,$$ and similarly for $\tJ$. We also denote by $\Pi_2 \colon \R^2 \to \R$ the projection onto the second component, so that $\Pi_2^\star\tJe(w)$ (and similarly for $\tJ$) is the projection of $\tJe(w)$ onto the $t$-component. We also denote by $\Omega_\eps \subset \CE$ the subset of excursions given by $$\Omega_\eps = \Omega_\eps^{(1)} \cap \Omega_\eps^{(2)}\;,$$ where we set $$\Omega_\eps^{(1)} = \bigl\{w\,:\, \J^\eps(w)\bigl(\{z > \eps^{-1/3}\}\times\R\bigr) = 0\bigr\}\;,\qquad \Omega_\eps^{(2)} = \bigl\{w\,:\, \|\Pi_2^\star\tJe(w) - \Pi_2^\star\tJ(w)\|_1 \le \eps^{1/10}\bigr\}\;.$$ Also, in the definition of $\Omega_\eps^{(2)}$, the Wasserstein-$1$ distance is taken with respect to the somewhat unusual distance on $\R$ given by $$d(t, t') = 1 \wedge |t-t'|^{1/3}\;.$$
The reason for this seemingly strange choice will become clear later. The set $\Omega_\eps$ defined in this way will play the role of $A$ when applying Theorem \[theo:convPPP\]. Note first that $\CQ_\gamma(w,\CX) \le {a\over 2\gamma}$, which is bounded independently of $w$. Furthermore, one has the bound $$\begin{aligned} \|\CQ_\gamma(w,\cdot) - \CQ_\gamma(\bar w,\cdot)\|_1 &\le {a\over 2\gamma} \int_0^{\l(w)\wedge\l(\bar w)} \bigl\|\Theta_{w,t}^\star\, \Q_\gamma - \Theta_{\bar w,t}^\star\, \Q_\gamma\bigr\|_1\, dt\\ &\quad + {a\over 2\gamma}\, \Bigl(|\l(w) - \l(\bar w)| + \sup_{t \le 1}|w_t - \bar w_t|\Bigr)\;.\end{aligned}$$ Since furthermore $\|\Theta_{w,t}^\star \Q_\gamma - \Theta_{\bar w,t}^\star \Q_\gamma\|_1 \le |w_t - \bar w_t|$, it does indeed follow that $\CQ$ is globally Lipschitz continuous as required, so that holds with some constant $K$ depending on $\gamma$. It remains to check that holds, with $\eps$ replaced by some power of $\eps$. By Theorem \[theo:convLaws\], we already know that there exists a constant $C$, possibly depending on $\gamma$, such that $$\bigl\|\Q^\eps_{z,\gamma}- \Q_\gamma\bigr\|_1 \le C \eps^\delta\;,$$ for any exponent $\delta \in (0,{1\over 16})$, and uniformly over all $z \le \eps^{-1/3}$. As a consequence, for every $w \in \Omega_\eps$, we have the bound $$\begin{aligned} \|\CQ_\gamma(w,\cdot) - \CQ^\eps_\gamma(w,\cdot)\|_1 &\le {1\over\gamma}\int_{\R^2} \bigl|\gamma\, \bar P_{z,\gamma/\sqrt\eps} - \sqrt\eps\, G(z)\bigr|\; \J^\eps(w)(dz,dt)\\ &\quad + {1\over \gamma}\int_{\R^2} \bigl\|\Theta_{w,t}^\star\, \Q^\eps_{z,\gamma}-\Theta_{w,t}^\star\, \Q_\gamma\bigr\|_1\; \tJe(w)(dz,dt)\\ &\quad + {1\over \gamma}\Bigl\|\int \Theta_{w,t}^\star\, \Q_\gamma\,\bigl(\Pi_2^\star\tJe(w)(dt)-\Pi_2^\star\tJ(w)(dt)\bigr)\Bigr\|_1\\ &\le C_\gamma\,\eps^{1/5}\, \J^\eps(w)(\R^2) + C_\gamma\,\eps^\delta\, \tJe(w)(\R^2) + C_\gamma\,\eps^{1/10}\, \bigl(1 + \Lip\, \Theta_{w,t}^\star \Q_\gamma\bigr)\;,\end{aligned}$$ where $\Lip\, \Theta_{w,t}^\star \Q_\gamma$ denotes the quantity $$\Lip\, \Theta_{w,t}^\star \Q_\gamma = \sup_{t\neq t'} {\bigl\|\Theta_{w,t}^\star\, \Q_\gamma-\Theta_{w,t'}^\star\, \Q_\gamma\bigr\|_1 \over 1 \wedge |t-t'|^{1/3}}\;.$$ It is at this stage that the choice of distance function becomes clear: with respect to the usual Euclidean distance, the map $t \mapsto \Theta_{w,t}^\star \Q_\gamma$ would not be Lipschitz continuous. In this way however, it follows immediately from the Hölder continuity of Brownian motion that $\Lip\, \Theta_{w,t}^\star \Q_\gamma < \infty$. Furthermore, it follows from the definition of $\Omega_\eps^{(2)}$ that $\tJe(w)(\R^2)$ is bounded by a constant independent of $w$ and $\eps$.
Since $G(x)$ is bounded from below by a constant, this implies that the same is true of $\J^\eps(w)(\R^2)$. Combining these bounds, we then obtain $$\|\CQ_\gamma(w,\cdot) - \CQ^\eps_\gamma(w,\cdot)\|_1 \le C_\gamma\, \eps^\delta\;,$$ for some constant $C$ and any $\delta < {1\over 16}$. In order to complete the proof, it thus remains to show that $\inf_{w \in \Omega_\eps} \CQ^\eps_\gamma(w, \Omega^{\text{c}}_\eps) \to 0$ sufficiently fast as $\eps \to 0$, where $\Omega^{\text{c}}_\eps$ denotes the complement of $\Omega_\eps$. As a consequence of the definition of $\Omega_\eps$, this follows if we can show that $\Q^\eps_{z,\gamma}(\Omega^{\text{c}}_\eps)\to 0$ as $\eps \to 0$, uniformly over all $z \le \eps^{-1/3}$. Instead of considering $\Q^\eps_{z,\gamma}$, it is much easier to consider $\Q_z^{\eps}$, the law of the rescaled random walk over the time interval $[0,1]$. Observe that $\Q^\eps_{z,\gamma}$ is obtained from $\Q_z^{\eps}$ by first conditioning on the event that $\gamma$ is reached before the walk becomes negative and then stopping the walk. By Proposition \[prop:defG\], the probability of this event is bounded from below by $C\sqrt \eps$ for some constant $C$ depending on $\gamma$ but independent of $z \in [0,\eps^{-1/3}]$. Therefore it is sufficient to find a set $\Xi_\eps$ with the following two properties. First, there exists an exponent $\zeta > {1\over 2}$ and a constant $C$ such that $\Q_z^{\eps}(\CX\setminus\Xi_\eps) \le C \eps^\zeta$ for every $\alpha \le 1$ and every $z \in [0,\eps^{-1/3}]$. Second, for every $w \in \Xi_\eps$ and every $t \in [0,1]$, the path $\tilde w$ obtained from $w$ by stopping it at time $t$ belongs to $\Omega_\eps$. In order to determine whether a path $w$ belongs to $\Xi_\eps$, we “coarse-grain” it into pieces of length $\eps^{1/3}$ (the precise exponent is not very important as we do not endeavour to obtain optimal convergence rates) and we impose that the contribution of $\tJe(w)$ on each piece is very close to what it should be.
In other words, setting $I_k \subset [0,1]$ by $I_k = [\eps^{1/3} k,\eps^{1/3}(k+1))$, we define $\Xi_\eps$ as $$\Xi_\eps = \Omega_\eps^{(1)} \cap \bigcap_{k} \Bigl\{w\,:\, \Bigl|\Pi_2^\star\tJe(w)(I_k) - {a\over 2}\, |I_k|\Bigr| \le \sqrt\eps\Bigr\}\;,$$ where $|I_k|$ denotes the length of the interval $I_k$. We first note that if $w \in \Xi_\eps$ and $t \in [0,1]$, then the path $\tilde w$ obtained by stopping $w$ at time $t$ does belong to $\Omega_\eps$. Since $\tJe(\tilde w) \le \tJe(w)$, $w\in \Omega_\eps^{(1)}$ implies that $\tilde w \in \Omega_\eps^{(1)}$. Denote now by $\eta^\eps$ the measure $\eta^\eps = \Pi_2^\star\tJe(\tilde w)$ and by $\eta$ the target measure, namely $\eta = {a\over 2}\lambda|_{[0,t]}$, where $\lambda$ denotes the Lebesgue measure. By the assumption on $\tJe(w)$, it follows that one has $|\eta^\eps(I_k) - \eta(I_k)| \le \sqrt\eps$ for each of the intervals $I_k$, except possibly for the interval containing $t$. As a consequence, denoting by $\eta^\eps_k$ and $\eta_{k}$ the restrictions of $\eta^\eps$ and $\eta$ to $I_k$, one obtains for all $k$ such that $t \not \in I_k$ the bound $$\|\eta^\eps_{k} - \eta_{k}\|_1 \lesssim \eps^{{1\over 9} + {1\over 3}} + \eps^{1\over 2}\;.$$ (Recall that we use the distance function ; this is the reason for the exponent $1\over 9$.) Summing over $k$ and using the fact that $\|\eta^\eps_{k} - \eta_{k}\|_1 \le C \eps^{1/3}$ for the value $k$ such that $t \in I_k$, it follows that one has $\|\eta^\eps - \eta\|_1 \le C \eps^{1/9}$, which indeed implies that $\tilde w \in \Omega_\eps$, at least for $\eps$ small enough. It remains to show that $\Q_z^{\eps}(\CE\setminus\Xi_\eps) \le C \eps^\zeta$ for sufficiently large $\zeta$. Observe first that $\Q_z^{\eps}(\CE\setminus \Omega_\eps^{(1)}) \le C \eps^p$ for every $p>0$ thanks to the fact that the distribution $\nu$ of the steps of our random walk has exponential tails. To obtain the required bound on $\Q_z^{\eps}(\CE\setminus\Xi_\eps)$, we use the fact that $\Pi_2^\star \tJe(w)(I_k)$ consists of the sum of $\eps^{-2/3}$ i.i.d.
copies of a random variable $Y$ with law given by $$Y \,\stackrel{\text{\tiny law}}{=}\, a \sqrt\eps \int_0^{Z} e^{a\sqrt\eps z}\, G(z)\,dz\;,\qquad \CD(Z) = \nu\;,$$ where $\nu$ is the one-step distribution of the underlying random walk. Note now that, as a consequence of Proposition \[prop:expG\] and the fact that $\nu$ admits some exponential moment, one has $$\E Y = {a\over 2}\,\eps + \CO(\eps^{3/2})\;,$$ for all $\eps$ sufficiently small. Similarly, it is straightforward to check that $\E |Y|^p = \CO(\eps^p)$ for every $p > 0$. As a consequence, one has for every $p>0$ the bound $$\E\Bigl|\Pi_2^\star\tJe(w)(I_k) - {a\over 2}\, |I_k|\Bigr|^p \le C \eps^{2p/3}\;,$$ for a constant possibly depending on $p$. It immediately follows that $$\P\Bigl(\Bigl|\Pi_2^\star\tJe(w)(I_k) - {a\over 2}\, |I_k|\Bigr| > \sqrt\eps\Bigr) \le C \eps^p\;,$$ for every $p > 0$. Summing over $k$ and combining this with the previously obtained bound on $\Q_z^{\eps}(\CE\setminus \Omega_\eps^{(1)})$, we conclude that $\Q_z^{\eps}(\CE\setminus\Xi_\eps) \le C \eps^p$ for every $p \ge 0$, which concludes our proof. [^1]: Strictly speaking, the two processes agree only at times that are multiples of $\eps$ since the two interpolation procedures we used may differ when the trajectories of two particles from the same generation cross each other. This is irrelevant.
--- abstract: 'The dynamic network of relationships among corporations underlies cascading economic failures including the current economic crisis, and can be inferred from correlations in market value fluctuations. We analyze the time dependence of the network of correlations to reveal the changing relationships among the financial, technology, and basic materials sectors with rising and falling markets and resource constraints. The financial sector links otherwise weakly coupled economic sectors, particularly during economic declines. Such links increase economic risk and the extent of cascading failures. Our results suggest that firewalls between financial services for different sectors would reduce systemic risk without hampering economic growth.' author: - Dion Harmon - Blake Stacey - 'Yavni Bar-Yam' - 'Yaneer Bar-Yam' date: 'March 6, 2009; revised November 11, 2010' title: | Networks of Economic Market Interdependence\ and Systemic Risk --- The global economy is a highly complex system [@ref:dcs] whose dynamics reflects the connections among its multiple components, as found in other networked systems [@ref:barrat2008; @ref:christakis2008; @ref:hidalgo2007]. A common property of complex systems is the risk of cascading failures, where a failure of one node causes similar failures in linked nodes that propagate throughout the system, creating large scale collective failures. Economic risks associated with cascading financial losses are manifest in the current economic crisis [@ref:greenlaw2008] and the earlier Asian economic crisis [@ref:radelet1998], but are not considered in conventional measures of investment risk [@ref:jorion2006]. A central question is the role that complex systems science can play in informing regulatory policy that preserves the ability of markets to promote economic growth through freedom of investment, while protecting the public interest by preventing financial meltdowns due to systemic risk. 
Characterizing the network of economic dependencies and its relationship to risk is key [@ref:mantegna1999; @ref:garas2008; @ref:schweitzer2009; @ref:smith2009; @ref:emmert-streib2010]. The dependencies among organizations involve large numbers of factors, including competition for capital and labor, supply and demand relationships among organizations that deliver common end products or rely upon common inputs, natural disasters and climate conditions, acts of war and peace, changes of government or its policies including economic policy such as interest rates, and geographic association. Quantifying such dependencies, [*e.g.,*]{} through Leontief models [@ref:carvalho2008; @ref:leontief1986], is difficult because many of the dependencies are non-linear and driven by socio-economic events not included in these models. Also, behavioral economics [@ref:barberis2003; @ref:delong1990; @ref:delong1990b] suggests that under some conditions collective investor behavior, [ *e.g.,*]{} from perceptions of value, may have significant effects. Reflecting both fundamental and behavioral interactions, correlations in market value of firms can serve as a measure of the perceived aggregate financial dependence and quantify “herding” behavior in collective fluctuations. Moreover, price correlations are directly relevant to measures of risk. We constructed a network of dependencies among 500 corporations having the largest stock trading volume, augmented with several economic indices (oil prices, and bond prices reflecting interest rates). We formed a network where links are present for the highest correlations in daily returns in each year from 2003 to 2008. In order to display the effect of changes over time, we constructed a single network over all years, with each corporation in a particular year represented by a node linked to itself in the previous and next year. Each year is separately shown in Figure \[fig:1\]. 
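As an illustration of the construction just described, the sketch below computes Pearson correlations of daily log returns $\log(p(t)/p(t-1))$ for synthetic price series and keeps the top fraction of pairwise correlations as network links. The data, helper names, and the threshold used in the example are illustrative only (the paper uses the top 6.25% of correlations among 500 firms).

```python
import math
import random

def log_returns(prices):
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def correlation_links(price_series, fraction=0.0625):
    """Return the pairs (i, j) whose daily-log-return correlation is
    in the top `fraction` of all pairwise correlations."""
    rets = [log_returns(p) for p in price_series]
    pairs = [(pearson(rets[i], rets[j]), i, j)
             for i in range(len(rets)) for j in range(i + 1, len(rets))]
    pairs.sort(reverse=True)
    keep = max(1, int(fraction * len(pairs)))
    return [(i, j) for _, i, j in pairs[:keep]]

# Synthetic example: firms 0 and 1 share a common factor, firm 2 is noise.
rng = random.Random(5)
factor = [rng.gauss(0, 0.01) for _ in range(250)]
def walk(load):
    p, out = 100.0, [100.0]
    for f in factor:
        p *= math.exp(load * f + rng.gauss(0, 0.002))
        out.append(p)
    return out
series = [walk(1.0), walk(1.0), walk(0.0)]
links = correlation_links(series, fraction=0.34)
```

With three firms there are three pairs, so a threshold of one third keeps exactly the strongest link, which here is the common-factor pair.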
We included only economic sectors that are significantly self-correlated, as the larger network constructed from the entire market obscures key insights. Previous correlational analyses have described how correlations may arise from external forces across the market (arbitrage pricing theory [@ref:chamberlain1983; @ref:ross1976]) or used correlations to characterize sectors and market crashes (econophysics [@ref:mantegna2000; @ref:onnela2003]). This work lacks an understanding of the economic origins of changes in dependencies and their policy implications. We examine variations of within- and between-sector correlations, arising from non-linear effects, for information about changes in economic conditions prior to and during the economic crisis. ![\[fig:1\] Network of correlations of market daily returns for years as indicated. Dots represent individual corporations colored according to economic sector: technology (blue), basic materials including oil companies (light grey) and others (dark grey), and finance including real-estate (dark green) and other (light green). Links shown are the highest 6.25% of Pearson correlations of $\log(p(t)/p(t-1))$ time series, where $p(t)$ are adjusted daily closing prices of firms [@ref:yahoo], in each year. Larger dots are spot oil prices at Brent, UK and Cushing, OK (black) and the price of ten year treasury bonds (green). ](econ-network-paper-fig1){width="18cm"} The study of network community properties often requires careful analysis [@ref:fortunato2010]. In our case, the observations we describe are manifest visually and were also tested statistically. In particular, apparent trends were tested using the $t$-statistic of differences in link densities within and between sectors (merging), or the minimum of this statistic between one sector and each of the others (self-clustering). 
Sectors are statistically linked (unlinked) to an index if the $t$-statistic comparing links to the index relative to the link density of the graph is above 4 (below 2). The following observations and trends from 2003 through 2008 are apparent and quantifiable: In 2003 there is a separate cluster of real estate related financial institutions (dark green), which over time merges into the larger financial cluster (green) (not merged through 2004 quarter 4, $p<10^{-10}$; merged from 2007 quarter 2 to 2008 quarter 3, $p \geq 0.18$). The technology sector (blue), while strongly clustered during economic growth (2003-2006), becomes relatively weakly clustered during the economic crisis (2008) (the self-clustering statistic has negative slope, $p<10^{-66}$, and changes sign in 2008, $p<10^{-10}$). Interest rates (larger green dot) are sometimes related to the technology cluster (linked for 8 out of 26 quarters). The oil sector (grey) is highly clustered (any other sector is separate, $p<10^{-13}$), and over time becomes increasingly linked to the rest of the basic materials cluster (dark grey) (positive slope, $p<10^{-45}$), which itself becomes more connected to the technology cluster (positive slope, $p<10^{-64}$). The oil cluster is only sometimes correlated to oil prices (large black dots) (linked for 7 of 27 quarters). We will show that the network dynamics are consistent with the sequence of economic events of the financial crisis [@ref:greenlaw2008]. In traditional external factor models and models of collective behavior in interacting systems [@ref:dcs], correlations are constant over time, but recent models have introduced the fitting of dynamical correlations of market indices [@ref:cappiello2007; @ref:engle2002]. We will show that changes in correlations among corporations can be understood using intuitive models for this period of time. ![\[fig:2\] Market correlations and external events from 1985 to 2008.
[**A:**]{} The average strength of correlations within and between economic sectors. Sectors included are finance (green), technology (blue), and basic materials (grey), double colored lines are correlations between sectors (blue-grey, blue-green, and grey-green). Correlations are calculated using twelve month windows, shifted quarterly from Jan 1985 through Jan 2008. [**B:**]{} Average correlations among stocks from all economic sectors. Black to light grey lines omit in each 12 month period the highest 0, 2, 5, 10, 20 absolute average return days respectively. [**C:**]{} Financial sector correlations separated into real estate related (dark green), other (light green), and between these sectors (hatched light and dark green) using left axis scale. The arrow indicates the effective merger of the sectors. Also shown are a housing price indicator (red, using right axis scale) [@ref:sp] and the search volume on Google for “housing bubble” (blue, arbitrary units) [@ref:google]. [**D:**]{} Basic materials sector correlations separated into oil (light grey) and others (dark grey) as well as finance (green), with mixed color lines reflecting inter-sector correlations (left axis scale). Also shown are prices of spot oil in Brent, UK (light red) and Cushing, OK (dark red), aluminum (light blue) and copper (dark blue) normalized to maximum values (right axis, both oil prices are normalized to the maximum of Cushing, OK). Average correlation of oil price in OK with the oil sector is shown (red/grey hatched). Arrow is the merger of oil and other basic materials. [**E:**]{} Rolling average correlations of the sectors in A (blue, left axis scale) shown with market value change (green, left axis scale, the return of S&P500 index). Market declines (negative returns) coincide with higher than average market correlations ($p<0.02$). 
Also shown are effective limits on interbank loans (red, right axis scale, the difference between the London Interbank Offered Rate (LIBOR) and the Federal Funds Overnight Rate (annualized) at the beginning of each quarter, divided by the latter), having high values in the current economic crisis.](econ-network-paper-fig2){width="18cm"} Specific external events can be identified whose timing coincides with observed changes in correlations. Fig. \[fig:2\]C shows that the merger of the real estate and other financial sector stocks coincides with both a peak in search frequency for "housing bubble" on Google [@ref:choi2009], and a turning point in the behavior of housing prices ($p<10^{-3}$). This timing is consistent with the understanding [@ref:greenlaw2008] that the decline in housing prices triggered the financial system crisis due to large investments in mortgage backed securities across the financial sector. Fig. \[fig:2\]D shows a potential role of critical resources: first, in the changing coupling of the basic materials sector to other parts of the economy; second, in the changing coupling of the oil sector to oil prices, which are only one of the factors affecting the oil industry. Nonlinearity due to dramatic increases in prices can readily be explained because such prices are additive components of fundamental economic factors, [*i.e.,*]{} costs of production. When commodity prices are low, other components dominate, but when commodity prices are high they have larger effects, so the fractional variation is nonlinearly related to the total. The proximate coincidence of the severe commodities price increases [@ref:bernanke2008] with the housing crisis ($p<10^{-5}$) may be understood either through fears of commodity shortages due to rapid growth, or the transfer of investment from the housing sector to commodities [@ref:cabellero2008]—an investment demand rather than a use demand surge.
This is consistent with the observation that economic growth by itself does not cause high correlations. However, general considerations of the role of constraints imply that when growth encounters the limit of available resources, increased correlations should occur as changes in one sector impact resource availability for another. Note that the correlations are primarily positive—commodity values rise with increasing financial sector values—consistent with fears of growth causing shortages or increasing investment demand. Negative correlations would be expected if commodity prices actually constrained economic growth. Limiting investments ([*i.e.,*]{} limiting capital-to-asset ratios) in order to moderate risk directly influences opportunities for growth. However, our results also point to a different strategy, which recognizes that financial institutions cross-link otherwise weakly correlated economic sectors. The key is that economic couplings among companies propagate the effect of failures. If economic entity G fails in a financial obligation to entity H, the impact on H may affect other entities J and K that are linked to H, even if their activity has nothing to do with G. Conversely, while a small capital-to-asset ratio may be risky for a particular institution, if the investments are within a particular economic sector the failure of that institution is unlikely to cause economy-wide repercussions. Thus, segregating financial relationships, particularly among activities that are not otherwise related, or are weakly related, reduces systemic risk. The idea that separations between components of the financial sector contribute to economic stability was a key aspect of legislation to stabilize the American banking system after the market crash of 1929. The Glass–Steagall Act of 1933 [@ref:fdic2007; @ref:heakal] separated investment banking from consumer (retail) banking to prevent fluctuations from other parts of the economy from affecting consumer banking.
This Act was progressively eroded until its repeal in 1999 [@ref:gramm1999]. Other historical forms of separation imposed by law or by practice included the separation of savings and loan associations and insurance providers from commercial and investment banking, as well as geographic separation by state [@ref:fdic2007; @ref:gramm1999]. While many effects contribute to correlations in economic activity [@ref:carvalho2008; @ref:horbath1993], nonlinearities associated with investment during market declines support the historical intuition that regulating these dependencies is more critical than regulating those arising from, [*e.g.,*]{} supply chains. One of the arguments in favor of deregulation was that banks, by investing in diverse sectors, would have greater stability [@ref:heakal]. Our analysis implies that the investment across economic sectors itself creates increased cross-linking of otherwise much more weakly coupled parts of the economy, causing dependencies that increase, rather than decrease, risk. Quite generally, separation prevents failure propagation and connections increase risks of global crises. Subdivision is a universal property of complex systems [@ref:dcs; @ref:simon1997]. An increase in separation of financial services is likely to entail costs, and the cost-benefit tradeoffs of imposing particular types of separation are yet to be determined. In summary, complex systems science focuses on the role of interdependence, a key aspect of the dynamical behavior of economic crises as well as the evaluation of risks in both “normal” and rare conditions. We have analyzed the dynamics of correlational dependencies in rising and falling markets. The impact on the economic system of repeals of Depression-era government policies is becoming increasingly manifest through scientific analysis of the current economic crisis. 
Previous studies [@ref:pozen2008] showed that repeal of the "uptick rule" in 2007 reduced economic stability by reducing returns and increasing fluctuations of the securities market. This study suggests that erosion of the Glass–Steagall Act, the consolidation of banking functions, and cross sector investments eliminated "firewalls" that could have prevented the housing sector decline from triggering a wider financial and economic crisis. Acknowledgements: We thank James H. Stock, Jeffrey C. Fuhrer and Richard Cooper for helpful comments. [45]{} Y. Bar-Yam, Dynamics of Complex Systems (Perseus Press, Reading, 1997). A. Barrat, M. Barthélemy, A. Vespignani, Dynamical Processes on Complex Networks (Cambridge University Press, 2008). N. A. Christakis, J. H. Fowler, NEJM 358, 2249 (2008). C. A. Hidalgo, B. Klinger, A. L. Barabasi, R. Hausmann, Science 317, 482 (2007). D. Greenlaw, J. Hatzius, A. K. Kashyap, H. S. Shin, Leveraged losses: Lessons from the mortgage market meltdown (Proceedings of the U.S. Monetary Policy Forum, 2008). S. Radelet, J. D. Sachs, R. N. Cooper, B. P. Bosworth, Brookings Papers on Economic Activity 1, 1 (1998). P. Jorion, Value at Risk: The New Benchmark for Managing Financial Risk (McGraw-Hill, ed. 3, 2006). R. Mantegna, European Physical Journal B 11, 193–197 (1999). A. Garas, P. Argyrakis, S. Havlin, European Physical Journal B 63, 265–271 (2008). F. Schweitzer [*et al.*]{}, Economic Networks: The New Challenges, Science 325, 5939 (2009). R. D. Smith, Journal of the Korean Physical Society 54, 6, 2460–2463 (2009). F. Emmert-Streib, M. Dehmer, PLoS ONE 5, 9: e12884 (2010). V. M. Carvalho, thesis, University of Chicago (2008). W. W. Leontief, Input-output Economics (Oxford University Press, ed. 2, 1986). N. Barberis, R. H. Thaler, in Handbook of the Economics of Finance, G. M. Constantinides, M. Harris, R. M. Stulz, Eds. (Elsevier, ed. 1, 2003), vol. 1, no. 2, chap. 3. J. B. De Long, A. Shleifer, L. H. Summers, R. J.
Waldmann, Journal of Finance 45, 379 (1990). J. B. De Long, A. Shleifer, L. H. Summers, R. J. Waldmann, Journal of Political Economy 98, 703 (1990). G. Chamberlain, M. Rothschild, Econometrica 51, 1305 (1983). S. Ross, Journal of Economic Theory 13, 341 (1976). R. N. Mantegna, H. E. Stanley, An Introduction to Econophysics (Cambridge University Press, 2000). J. P. Onnela, A. Chakraborti, K. Kaski, J. Kertesz, A. Kanto, Phys. Rev. E 68, 056110 (2003). S. Fortunato, Physics Reports 486, 75–174 (2010). M. Carlson, A Brief History of the 1987 Stock Market Crash with a Discussion of the Federal Reserve Response (Finance and Economics Discussion Series, Divisions of Research & Statistics and Monetary Affairs, Federal Reserve Board, Washington, D.C., 2007). D. Acemoglu, A. Scott, J. Monetary Econ. 40, 501 (1997). G. Bekaert, G. Wu, Review of Financial Studies 13, 1 (2000). L. Cappiello, R. Engle, K. Sheppard, Journal of Financial Econometrics 4, 537 (2007). R. Engle, J. of Business and Economic Statistics 20, 339 (2002). L. Veldkamp, J. of Econ. Theory 124, 230 (2005). G. Wu, The Determinants of Asymmetric Volatility (Social Science Research Network, 2001; <http://ssrn.com/abstract=248285>). K. J. Forbes, R. Rigobon, Journal of Finance 57, 2223 (2002). H. Shin, Risk and liquidity in a system context (Bank for International Settlements Working Paper 212, 2006). J. M. Keynes, A Treatise on Money (Harcourt, Brace and Co., New York, 1930). H. Choi, H. Varian, Predicting the Present with Google Trends (Google Inc., 2009; <http://google.com/googleblogs/pdfs/google_predicting_the_present.pdf>). B. S. Bernanke, Remarks on the economic outlook (International Monetary Conference, Barcelona, Spain, 2008). R. J. Caballero, E. Farhi, P. O. Gourinchas, Financial Crash, Commodity Prices and Global Imbalances (National Bureau of Economic Research Working Paper, 2008). J. M. Poterba, J. of Econ. Perspectives 14, 99 (2000). Broker-Dealers Net Capital Requirements (48 Stat.
74, section 15c3-1, 1934). G. Bekaert, A. Ang, International Asset Allocation with Time-Varying Correlations (Social Science Research Network, 1999; <http://ssrn.com/abstract=156048>). S. R. Das, R. Uppal, Systemic Risk and International Portfolio Choice (American Financial Association, 2003 Washington, DC Meetings, 2002). K. E. Kroner, V. K. Ng, Rev. Financ. Stud. 11, 817 (1998). F. Longin, B. H. Solnik, Extreme Correlation of International Equity Markets (CEPR Discussion Papers, no. 2538, 2000). F. Longin, B. Solnik, Journal of Finance 56, 249 (2001). A. J. Patton, Journal of Financial Econometrics 2, 130 (2004). Important Banking Legislation (Federal Deposit Insurance Corporation, 2007; <http://www.fdic.gov/regulations/laws/important/>). Gramm-Leach-Bliley Financial Services Modernization Act, Pub. L. 106-102, 113 Stat. 1338, enacted November 12, 1999. M. Horvath, Review of Economic Dynamics 1, 781 (1998). R. Heakal, What Was The Glass-Steagall Act? (Investopedia; <http://www.investopedia.com/articles/03/071603.asp>). H. A. Simon, The Sciences of the Artificial (MIT Press, ed. 3, 1997). R. C. Pozen, Y. Bar-Yam, There's a Better Way to Prevent "Bear Raids" (The Wall Street Journal, November 18, 2008). Yahoo! finance (<http://finance.yahoo.com>). Case Shiller composite-10 home price index (Standard & Poor's; <http://www2.standardandpoors.com/portal/site/sp/en/us/page.topic/indices_csmahp/0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0.html>). Google trends (<http://www.google.com/trends>).
--- abstract: | We consider a two-asset non-linear model of option pricing in an environment where the correlation is not known precisely, as it varies between two known values. First we discuss the non-negativity of the solution of the problem. Next, we construct and analyze a positivity preserving, flux-limited finite difference scheme for the corresponding boundary value problem. Numerical experiments are discussed.\ [*Keywords*.]{} Two-asset worst-case option pricing model, fully non-linear parabolic equation, positive ODE system, van Leer flux-limiter, non-negativity preservation, stability author: - | Miglena N. Koleva, Lubin G. Vulkov\ \ title: 'A Positive Flux Limited Difference Scheme for Option Pricing 2D Fully Non-linear Parabolic Equation with Uncertain Correlation' --- Introduction ============ The correct specification of the respective model parameters is very important for the valuation of option pricing models. Some of them are given by the market or estimated from historic or forward looking data, but others are the result of calibration to market prices. These techniques lead to *non-linear* models with uncertain parameter values that are more realistic in practice, for example for the volatility, interest rate, dividend or correlation. Usually these parameters range between known upper and lower bounds, and consequently we may consider the highest and lowest option values, called *best* and *worst* values. These prices can be interpreted as *worst-case prices* for short and long positions, respectively. Well-known one-factor uncertain volatility models are derived by Avellaneda, Levy and Parás [@ALP]. Following Black-Scholes hedging and no-arbitrage arguments they construct a worst/best option pricing model where the value of the volatility depends on the sign of the second derivative, the Gamma greek ($\Gamma$).
The same idea, applied to the case of an uncertain interest rate or an uncertain dividend yield (independent of the asset price) in the case of continuous dividends, leads to non-linear one-asset uncertain parameter models, which give a consistent way to eliminate the dependence of a price on a parameter and to some extent reduce model dependence [@Wil1]. The same arguments [@Wil1 p.313] can be carried over to multi-asset models, strongly dependent on the correlation $\rho$ between the stochastic processes of the underlying state variables. The correlation is difficult to guess or calculate in practice, so it can be treated as uncertain. Following [@BS] and [@Wil1], this simple hedging strategy is realized in [@Top1] for a two-asset option pricing model. To be self-contained, we outline the derivation of the model presented in [@Top1]. Consider the correlation bounded by $-1\leq \rho_1\leq \rho \leq\rho_2\leq 1$ and define the price movements of two underlying assets $S_1$, $S_2$ (for time $t$, trends (drift rates) $\mu_1$, $\mu_2$, volatilities $\sigma_1$, $\sigma_2$ and increments $dX_1$, $dX_2$ of standard Wiener processes) $$\begin{aligned} dS_1=\mu_1S_1dt+\sigma_1S_1dX_1,\\ dS_2=\mu_2S_2dt+\sigma_2S_2dX_2,\end{aligned}$$ correlated by $E(dX_1dX_2)=\rho\, dt$. By Itô's Lemma we express an infinitesimal change in the portfolio ($\Pi$), consisting of a long position in one option and short positions in both underlyings.
Next, eliminating the risk, just as in the classical argument when deriving the Black-Scholes equation for the option price $V(S_1,S_2,t)$, we get $$d\Pi=\left(\frac{\partial V}{\partial t}+\frac12\sigma_1^2S_1^2\frac{\partial^2V}{\partial S_1^2}+\frac12\sigma_2^2S_2^2\frac{\partial^2V}{\partial S_2^2}+\rho\sigma_1\sigma_2S_1S_2\frac{\partial^2V}{\partial S_1\partial S_2}\right)dt.$$ In order to derive the worst-case scenario model we will be extremely pessimistic: in every infinitesimal time step we assume that the correlation leads to the smallest growth in the portfolio, i.e. $$\label{Pf0} \min\limits_{\rho} d\Pi=r\Pi\, dt, \ \ \hbox{where} \ \ r>0 \ \ \hbox{is the interest rate.}$$ Taking into account that the portfolio consists of a long position in one option and short positions in both underlyings, we have $$\label{Pf1} r\Pi \, dt=r\left(V(S_1,S_2,t)-\frac{\partial V}{\partial S_1}S_1- \frac{\partial V}{\partial S_2}S_2\right)dt$$ and $$\begin{aligned} \min\limits_{\rho} d \Pi&=&\min\limits_{\rho}\left\{\left(\frac{\partial V}{\partial t}+\frac12\sigma_1^2S_1^2\frac{\partial^2V}{\partial S_1^2}+\frac12\sigma_2^2S_2^2\frac{\partial^2V}{\partial S_2^2}+\rho\sigma_1\sigma_2S_1S_2\frac{\partial^2V}{\partial S_1\partial S_2}\right)dt\right\}\nonumber\\[-0.05in] \label{Pf2}\\[-0.05in] &=& \left\{\begin{array}{ll} \ds\frac{\partial V}{\partial t}+\frac12\sigma_1^2S_1^2\frac{\partial^2V}{\partial S_1^2}+\frac12\sigma_2^2S_2^2\frac{\partial^2V}{\partial S_2^2}+\rho_1\sigma_1\sigma_2S_1S_2\frac{\partial^2V}{\partial S_1\partial S_2}, & \ds \frac{\partial^2V}{\partial S_1\partial S_2}>0,\\ \ds \frac{\partial V}{\partial t}+\frac12\sigma_1^2S_1^2\frac{\partial^2V}{\partial S_1^2}+\frac12\sigma_2^2S_2^2\frac{\partial^2V}{\partial S_2^2}+\rho_2\sigma_1\sigma_2S_1S_2\frac{\partial^2V}{\partial S_1\partial S_2}, & \ds \frac{\partial^2V}{\partial S_1\partial S_2}<0.
\end{array}\right.\nonumber\end{aligned}$$ Combining the relations above and taking into account the dividends (denoted by $D_1$ and $D_2$) we obtain the worst-case pricing equation $$\begin{aligned} \begin{split}\label{Eq} &\frac{\partial V}{\partial t}+\frac12\sigma_1^2S_1^2\frac{\partial^2V}{\partial S_1^2}+\frac12\sigma_2^2S_2^2\frac{\partial^2V}{\partial S_2^2}+\rho(\Gamma_{cross})\sigma_1\sigma_2S_1S_2\frac{\partial^2V}{\partial S_1\partial S_2}\\&\hspace{0.6in}+(r-D_1)S_1\frac{\partial V}{\partial S_1}+(r-D_2)S_2\frac{\partial V}{\partial S_2}-rV=0, \ \ (S_1,S_2)\in \Omega=\mathbb{R}^+\times \mathbb{R}^+, \ \ 0\leq t<T; \end{split}\\[0.1in] \rho(\Gamma_{cross})=\left\{\begin{array}{ll} \rho_1,&\Gamma_{cross}>0,\\ \rho_2,&\Gamma_{cross}<0 \end{array}\right.,\ \ \ \ \Gamma_{cross}=\frac{\partial^2V}{\partial S_1\partial S_2},\ \ \ \ -1\leq \rho_1\leq \rho_2\leq 1.\hspace{0.2in}\label{Nonl}&\end{aligned}$$ In the best-case scenario for an investor with a long position, $\rho(\Gamma_{cross})$ is determined by $$\rho(\Gamma_{cross})=\left\{\begin{array}{ll} \rho_1,&\Gamma_{cross}<0,\\ \rho_2,&\Gamma_{cross}>0. \end{array}\right.$$ There are many numerical methods for [one-asset]{} uncertain parameter models available in the literature. For example, for the uncertain volatility model (which is identical to the Leland model of transaction costs [@Wil1]), a numerical iteration algorithm is developed in [@PFV]. A positivity preserving method is presented in [@K]. A fully-implicit, monotone discretization method is developed for the solution of an option pricing model with an uncertain drift rate in [@WWFV]. For multi-asset (or two-asset) *linear models*, various numerical methods can be found in the literature, e.g. [@CJFC], where the authors present a positivity preserving numerical approach for a two-asset linear option pricing stochastic volatility model.
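The switching rule for $\rho(\Gamma_{cross})$ in the worst- and best-case equations can be stated compactly. The following is a sketch of the selection logic only (function names are ours), under the convention $\rho_1\leq\rho_2$; at $\Gamma_{cross}=0$ the cross term vanishes, so either value may be used.

```python
def rho_worst(gamma_cross, rho1, rho2):
    """Pessimistic correlation for a long position: minimises the
    cross-derivative term rho * Gamma_cross (rho1 <= rho2 assumed)."""
    return rho1 if gamma_cross > 0 else rho2

def rho_best(gamma_cross, rho1, rho2):
    """Best-case counterpart: maximises the cross-derivative term."""
    return rho2 if gamma_cross > 0 else rho1
```

By construction, for every sign of $\Gamma_{cross}$ the worst-case choice contributes no more to the portfolio growth than the best-case one.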
Amid numerous publications related to the numerical solution of option pricing models, the investigations concerning non-linear multi-asset option pricing models are scarce. The only work (that we managed to find in the literature) related to the non-linear two-asset option pricing model with uncertain correlation is the paper of J. Topper [@Top1]. The author implements the collocation finite element method with cubic Hermite trial functions to solve the worst-case scenario for the considered problem. In [@Ma] a two-asset stochastic correlation model is considered, where the correlation coefficient is a random walk following the square root process. This leads to a linear model that is solved by a quasi-Monte Carlo method. In this paper we develop a second-order positivity preserving numerical method for the problem under consideration. We construct an implicit-explicit difference scheme, using different stencils depending on the sign of the correlation for the approximation of $\Gamma_{cross}$, and apply the van Leer flux limiter approach for the discretization of the first derivatives. Mild restrictions on the space and time mesh step sizes guarantee the stability and *positivity preserving property* of the numerical solution, i.e. starting from non-negative initial data we obtain a non-negative numerical solution at each time layer. The rest of the paper is organized as follows. In the next section, we formulate the differential problem on a bounded domain, after application of the exponential variable change [@E; @TR]. The non-negativity of the solution is discussed. Combining the monotone techniques in [@R; @Sam1] with flux limiting, we perform a space discretization of the problem in Section 3. A positive fully-discrete scheme is derived in the next section. Numerical results are discussed in Section 5 and the paper closes with some conclusions.
The differential problem ======================== Let now $\overline{\Omega}=\Omega\cup \partial\Omega=[L_W,L_E]\times[L_S,L_N]\subseteq \mathbb{R}^+\times \mathbb{R}^+$. Following the financial modelling in [@Top1] we consider the equation above, associated with the terminal and boundary conditions [@Top0; @Top1; @Top2; @Top3] $$\begin{aligned} V(S_1,S_2,T)&=&g_0(S_1,S_2)\geq 0\; \hbox{in}\; \Omega, \label{TC}\\ \frac{\partial V(S_1,S_2,t)}{\partial n}&=&g_1(S_1,S_2,t)\geq 0 \; \hbox{on}\; \partial \Omega_1, \label{BC1}\\ V(S_1,S_2,t)&=&g_2(S_1,S_2,t)\geq 0 \; \hbox{on}\; \partial \Omega_2 \not\equiv \emptyset,\ \ \partial \Omega_1\cup\partial \Omega_2=\partial \Omega.\label{BC2}\end{aligned}$$ Here $\partial/\partial n$ is the outward derivative with respect to $S_1$ or $S_2$ and $T$ is the time to maturity. Using the logarithmic prices $$\label{ChV} x_i=\ln S_i,\ \ i=1,2,\ \ \ \ \tau=T-t,$$ we introduce the operators $$\begin{aligned} \mathcal{L}_iu&=&-\frac12\sigma_1^2\frac{\partial^2u}{\partial x_1^2}-\frac12\sigma_2^2\frac{\partial^2u}{\partial x_2^2}-\rho_i\sigma_1\sigma_2\frac{\partial^2u}{\partial x_1\partial x_2}\\&&-(r-D_1-\frac12\sigma_1^2)\frac{\partial u}{\partial x_1}-(r-D_2-\frac12\sigma_2^2)\frac{\partial u}{\partial x_2}+ru, \ \ \ \ i=\{0,1,2\},\end{aligned}$$ where we formally set $\rho_0=\rho(\widetilde{\Gamma}'_{cross})$. Then the problem above is transformed into the following problem for $u(x_1,x_2,\tau)=V(S_1,S_2,t)$, $(x_1,x_2)\in \overline{\Omega'}=[\ln{L_W},\ln{L_E}]\times[\ln{L_S},\ln{L_N}]\subseteq \mathbb{R}^2$.
$$\begin{aligned} && \frac{\partial u}{\partial \tau}+\mathcal{L}_0u=0, \ \ (x_1,x_2,\tau)\in Q_T\equiv \Omega'\times (0,T); \label{EqT}\\ && \Gamma'_{cross}=e^{-(x_1+x_2)}\frac{\partial^2u}{\partial x_1\partial x_2},\ \ \ \ \widetilde{\Gamma}'_{cross}=\frac{\partial^2u}{\partial x_1\partial x_2},\hspace{1in}\label{NonlT} \end{aligned}$$\ $$\begin{aligned} u(x_1,x_2,0)&=&g'_0(x_1,x_2)\; \hbox{in}\; \Omega', \label{IC}\\ \frac{\partial u(x_1,x_2,\tau)}{\partial n'}&=&{g}'_1(x_1,x_2,\tau) \; \hbox{on}\; \partial \Omega'_1,\label{BC1T}\\ u(x_1,x_2,\tau)&=&g'_2(x_1,x_2,\tau) \; \hbox{on}\; \partial \Omega'_2,\ \ \partial \Omega'_1\cup\partial \Omega'_2=\partial \Omega',\label{BC2T} \end{aligned}$$ where $\partial/\partial n'$ is the outward derivative with respect to $x_1$ or $x_2$, ${g}'_0(x_1,x_2)=g_0(e^{x_1},e^{x_2})$, ${g}'_2(x_1,x_2,\tau)=g_2(e^{x_1},e^{x_2},\tau)$ and $${g}'_1(x_1,x_2,\tau)=\left\{\begin{array}{ll} e^{x_1}g_1(e^{x_1},e^{x_2},\tau),& x_1=\ln L_W \ \ \hbox{or}\ \ x_1=\ln L_E,\\ e^{x_2}g_1(e^{x_1},e^{x_2},\tau),& x_2=\ln L_S\ \ \hbox{or}\ \ x_2=\ln L_N. \end{array}\right.$$ The notation $(\cdot)'$ indicates the object $(\cdot)$ after the change of variables. Due to the complexity of the presented nonlinear model there are difficulties in obtaining existence and uniqueness results for the problem. In this paper we are not concerned with this aspect of the problem, but we shall discuss the minimum principle. We denote by $C^{m,q} (Q_{T})$ the space of functions defined on $Q_{T}$ that have continuous derivatives with respect to $x=(x_{1},x_{2})$ up to order $m$ and continuous derivatives with respect to $t$ up to order $q$. Typically, no $C^{2,1}$ solution exists on the whole domain $Q_{T}$ of the equation with the discontinuous function $\rho _0$. The particularity of the equation is that it shows degeneracy, because it is possible that $\widetilde{\Gamma}'_{cross} = 0$.
Thus it is natural to assume the existence of a set $\mathcal{S} (x_{1} , x_{2} , \tau) \subset Q_{T}$ on which $ \widetilde{\Gamma}'_{cross} ( x_{1} , x_{2} , \tau) =0$. This set (it is expected to be a surface) is not given in advance, so that we have a Stefan-like problem. But the problem is derived from stochastic finance, and therefore specific interface (internal boundary) conditions are needed. We assume $ u \in C^{2,1} (Q_{T} )$ across the phase-change surfaces, which is in accordance with the condition $ \widetilde{\Gamma}'_{cross} ( x_{1} , x_{2} , \tau) \vert_{\mathcal{S}} = 0 $. Away from the interface $ \mathcal{S} ( x_{1} , x_{2} , \tau) $ we assume even higher regularity, $ u \in C^{3,1} ( \Omega_{T} \backslash \mathcal{S})$. By $\partial \Omega_T^p$ we denote the parabolic boundary of $\overline{Q}_T$, i.e. $\partial \Omega_T^p=\{(x_1,x_2,\tau):(x_1,x_2)\in\partial \Omega', 0\leq \tau <T\}$, the boundary of $Q_T$ minus the interior of the top part of the boundary, $\Omega'\times \{\tau=T\}$. Also, by $Q_T^+$ ($Q_T^-$) we will denote the subset of $Q_T$ where $\widetilde{\Gamma}'_{cross} >0$ ($\widetilde{\Gamma}'_{cross} <0$). Suppose that the function $ u \in C ( {\overline{Q}}_{T} ) \cap C^{2,1} ( Q_{T} ) \cap C^{3,1} ( \Omega_{T} \backslash \mathcal{S} ) $ satisfies in $Q_{T} $ the problem above and $g'_0(x_1,x_2)\geq 0$ in $\Omega'$ and $g'_i(x_1,x_2,\tau)\geq 0$ on $\partial\Omega'_i$, $i=1,2$. Then $u$ cannot attain a negative local minimum in $ { \overline {Q}}_{T} \setminus \partial Q_T^p$ and $u\geq 0$ on $\overline{Q}_T$. **Proof.** Suppose that there exists a local minimum point $P_0(x_{1_0},x_{2_0},\tau_0)\in {Q}_T$ with $u(P_0)<0$. 1\. If $0<\tau_0<T$, then $P_0$ belongs to the interior of $Q_T$ and therefore $$\frac{\partial u}{\partial \tau}(P_0)=\frac{\partial u}{\partial x_1}(P_0)=\frac{\partial u}{\partial x_2}(P_0)=0,\label{e15}$$ and $$\frac{\partial^2 u}{\partial x_1^2}(P_0)\geq 0,\ \ \frac{\partial^2 u}{\partial x_2^2}(P_0)\geq 0.\label{e16}$$ 1.1.
Suppose $P_0\in \mathcal{S}$. Then $\frac{\partial^2 u}{\partial x_1 \partial x_2}=0$, and the relations above lead to $$\left( \frac{\partial u}{\partial \tau}+\mathcal{L}_0u\right)(P_0)<0,$$ which contradicts the equation. 1.2. Suppose that $P_0\in Q_T^+$ (the case $P_0\in Q_T^-$ is treated similarly). Then we have $$\begin{aligned} \begin{split} 0&=\left(\frac{\partial u}{\partial \tau}+\mathcal{L}_1u\right)(P_0)=\mathcal{L}_1u(P_0)\\ &=-\frac12\sigma_1^2\frac{\partial^2 u}{\partial x_1^2}(P_0)-\frac12\sigma_2^2\frac{\partial^2 u}{\partial x_2^2}(P_0)-\rho_1\sigma_1\sigma_2\frac{\partial^2 u}{\partial x_1 \partial x_2}(P_0)+ru(P_0).\label{e17} \end{split} \end{aligned}$$ Since $P_0$ is not on the boundary of $Q_T$, there is a neighborhood of $(x_{1_0},x_{2_0},\tau_0)$ within the domain $Q_T$ where we can use the Taylor expansion: $$\begin{aligned} & &u(x_{1_0}+\triangle x_1,x_{2_0}+\triangle x_2,\tau_0)=u(P_0)\\ &&+\frac12\left(\frac{\partial^2 u}{\partial x_1^2}(P_0)(\triangle x_1)^2+2\triangle x_1\triangle x_2\frac{\partial^2 u}{\partial x_1 \partial x_2}(P_0)+\frac{\partial^2 u}{\partial x_2^2}(P_0)(\triangle x_2)^2\right)+O((\triangle x_1)^3+(\triangle x_2)^3).
\end{aligned}$$ Taking into account that $u(x_{1_0}+\triangle x_1,x_{2_0}+\triangle x_2,\tau_0)> u(P_0)$ for all $\triangle x_1$ and $\triangle x_2$ that are small enough, we have $$\label{e18} \frac{\partial^2 u}{\partial x_1^2}(P_0)(\triangle x_1)^2+2\triangle x_1\triangle x_2\frac{\partial^2 u}{\partial x_1 \partial x_2}(P_0)+\frac{\partial^2 u}{\partial x_2^2}(P_0)(\triangle x_2)^2\geq 0.$$ Since $u(P_0)<0$, it follows that $$\label{e18a} \sigma_1^2\frac{\partial^2 u}{\partial x_1^2}(P_0)+2\rho_1\sigma_1\sigma_2\frac{\partial^2 u}{\partial x_1 \partial x_2}(P_0)+\sigma_2^2\frac{\partial^2 u}{\partial x_2^2}(P_0)< 0.$$ In order to compare with the Taylor expansion and obtain a contradiction, we rewrite the last inequality as $$\label{e19} \left(\frac{\sigma_1}{\sqrt{C}}\right)^2\frac{\partial^2 u}{\partial x_1^2}(P_0)+2\rho_1 \frac{\sigma_1}{\sqrt{C}}\frac{\sigma_2}{\sqrt{C}}\frac{\partial^2 u}{\partial x_1 \partial x_2}(P_0)+ \left(\frac{\sigma_2}{\sqrt{C}}\right)^2\frac{\partial^2 u}{\partial x_2^2}(P_0)< 0,$$ where $C>0$ is a constant. Next we take $$\triangle x_1=\frac{\sigma_1}{\sqrt{C}}\ \ \hbox{and} \ \ \triangle x_2=\frac{\sigma_2}{\sqrt{C}}.$$ This choice contradicts the non-negativity of the quadratic form above for sufficiently large $C$. 1.3. Suppose $P_0\in \partial\Omega'_1$ and for concreteness let $P_0=(\ln L_W,x_2,\tau)$, i.e. ${x_1}_0=\ln L_W$. Then, following considerations similar to those in Hopf's lemma [@Ev], we conclude that $\partial u/\partial n(P_0)>0$, where $n(P_0)$ is the outer normal. But $\partial u/\partial n (P_0)=-\partial u/\partial x_1 (P_0)=g_1(P_0)\leq 0$, so we get a contradiction. 2\. Now suppose $\tau_0=T$. Then we have $\frac{\partial u}{\partial \tau}(P_0)\leq 0$ instead of $\frac{\partial u}{\partial \tau}(P_0) =0$, and we once more deduce the contradiction as in the cases 1.1, 1.2 and 1.3. $\Box$ Space discretization ===================== In the present section we develop the numerical method, combining the idea of A. Samarskii et al.
[@Sam1] to use different stencils for the approximation of the mixed derivative with the flux limiter approach [@GGWC; @HV; @LV] in two space directions for the approximation of the first derivatives. We define a uniform mesh in space $\overline{\Omega}$: $$\begin{aligned} &\overline{\omega}_h=\left\{x=({x_1}_i,{x_2}_j):\; {x_1}_i=L_W+(i-1)h_1,\ \ {x_2}_j=L_S+(j-1)h_2,\right.\\ &\left. i=1,\dots,N_1,\; j=1,\dots,N_2,\ \ h_1={(L_E-L_W)}/{(N_1-1)}, \; h_2={(L_N-L_S)}/{(N_2-1)}\right\}\end{aligned}$$ and denote the numerical solution at point $({x_1}_{i},{x_2}_{j},\tau)$ by $u_{i,j}(\tau):=u({x_1}_{i},{x_2}_{j},\tau)$. Further, we use the notations $$\begin{aligned} &&\ds {u_{{\overline{x}_1}_{i,j}}}=\frac{u_{i,j}-u_{i-1,j}}{h_1}, \ \ {u_{{x_1}_{i,j}}}={u_{{\overline{x}_1}_{i+1,j}}}, \ \ {u_{{\overline{x}_2}_{i,j}}}=\frac{u_{i,j}-u_{i,j-1}}{h_2}, \ \ {u_{{x_2}}}_{i,j}={u_{{\overline{x}_2}_{i,j+1}}}, \\ &&\ds u_{\mathring{x}_{s_{i,j}}}=\frac12[{u_{{x_s}_{i,j}}}+{u_{{\overline{x}_s}_{i,j}}}],\ \ {u_{\overline{x}_sx_p}}={(u_{\overline{x}_s})_{x_p}},\ \ {u_{\mathring{x}_s\mathring{x}_p}}=({u_{\mathring{x}_s})_{\mathring{x}_p}}, \ \ s,p \in \mathbb{N},\\ &&\ds u_{{x_1x_2}_{i,j}}^-=\frac12[u_{{\overline{x}_1}{x_2}_{i,j}}+u_{x_1{{\overline{x}_2}_{i,j}}}], \ \ u_{{x_1x_2}_{i,j}}^+=\frac12[u_{x_1{x_2}_{i,j}}+u_{{\overline{x}_1}\,{\overline{x}_2}_{i,j}}],\ \ \hbox{see Figure \ref{f1}}.\end{aligned}$$ We may represent an arbitrary function $v$ in the form $v=v^+-v^-$ (and $|v|=v^++v^-$), where $v^+=\max\{0,v\}$ and $v^-=\max\{0,-v\}$. Thus, according to and for $\rho'_{i,j}:=\rho(\widetilde{\Gamma}'_{{cross}_{i,j}})$ we have $$\label{rho} \rho'_{i,j}={\rho'}_{i,j}^+-{\rho'}_{i,j}^-=\left\{\begin{array}{ll} \rho_1^+-\rho_1^-,& \widetilde{\Gamma}'_{{cross}_{i,j}}> 0,\\ \rho_2^+-\rho_2^-,& \widetilde{\Gamma}'_{{cross}_{i,j}} < 0.\\ \end{array}\right.$$ For the approximation of the first derivatives in we apply the van Leer flux limiter technique [@GGWC; @HV; @LV] in both space directions.
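The difference notations above map directly onto code. The following sketch (Python; the function names are our own, not from the text) illustrates the backward and central differences and the splitting $v=v^+-v^-$:

```python
def backward_diff(u, h):
    """u_xbar at node i: (u[i] - u[i-1]) / h, defined for i = 1,...,len(u)-1."""
    return [(u[i] - u[i - 1]) / h for i in range(1, len(u))]

def central_diff(u, h):
    """u_ring at node i: the average of forward and backward differences,
    which equals (u[i+1] - u[i-1]) / (2h), defined at interior nodes."""
    return [(u[i + 1] - u[i - 1]) / (2.0 * h) for i in range(1, len(u) - 1)]

def pos_neg_split(v):
    """Split v = v_plus - v_minus with v_plus = max(0, v), v_minus = max(0, -v),
    so that |v| = v_plus + v_minus."""
    return max(0.0, v), max(0.0, -v)
```

Both difference quotients reproduce the slope of a linear function exactly, which is a convenient sanity check for the indexing.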
Consider the conservative approximation of the derivatives $$\begin{aligned} \begin{split}\label{FD} &A_s\frac{\partial u}{\partial x_s}\simeq A_s\frac{U_{\mathrm{e}_s+1/2}-U_{\mathrm{e}_s-1/2}}{h_s},\ \ s=\{1,2\},\ \ \hbox{where}\\ &A_s=r-D_s-\frac12 \sigma_s^2, \ \ U_{\mathrm{e}_s\pm q}=\left\{\begin{array}{ll} u_{i\pm q,j},& s=1,\\ u_{i,j\pm q},& s=2, \end{array}\right.\ \ q\in \mathbb{R}. \end{split}\end{aligned}$$ Using gradient ratios $$\label{ratio} \theta_{\mathrm{e}_s+1/2}=\frac{u_{{x_s}_{i,j}}}{u_{{\overline{x}_s}_{i,j}}},$$ we define the van Leer flux limiter [@GGWC; @HV; @L74] $$\label{FL} \Phi(\theta)=\frac{|\theta|+\theta}{1+|\theta|}.$$ Observe that $\Phi(\theta)$ is Lipschitz continuous, continuously differentiable for all $\theta\neq 0$, and $$\label{Pr} \Phi(\theta) = 0,\ \ \hbox{if} \ \ \theta \leq 0 \ \ \hbox{and} \ \ \Phi(\theta)\leq2 \min\{1, \theta\}.$$ Note that at the extreme points of $u$, the slopes $u_{{x_s}_{i,j}}$ and $u_{{\overline{x}_s}_{i,j}}$ have opposite signs and $\Phi(\theta_{\mathrm{e}_s+1/2})=0$. Following [@GGWC], the numerical flux $U_{\mathrm{e}_s+1/2}$ is approximated in a non-linear way $$\label{U1} U_{\mathrm{e}_s+1/2}=U_{\mathrm{e}_s}+\frac12\Phi(\theta_{\mathrm{e}_s+1/2})(U_{\mathrm{e}_s}-U_{\mathrm{e}_s-1}).$$ Reflecting the indices that appear in $u_{i,j}$ about $i + 1/2$ or $j + 1/2$ yields [@GGWC] $$\label{U2} U_{\mathrm{e}_s+1/2}=U_{\mathrm{e}_s+1}+\frac12\Phi(\theta_{\mathrm{e}_s+3/2}^{-1})(U_{\mathrm{e}_s+1}-U_{\mathrm{e}_s+2}).$$ Similarly, the flux $U_{\mathrm{e}_s-1/2}$, corresponding to and , is defined by shifting the spatial index ($i$ or $j$).
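The limiter $\Phi$ defined above and its stated properties, including the symmetry $\Phi(\theta)=\theta\,\Phi(\theta^{-1})$ used in the next step, are easy to verify numerically; a minimal sketch:

```python
def van_leer(theta):
    """Van Leer flux limiter: Phi(theta) = (|theta| + theta) / (1 + |theta|).

    For theta <= 0 the limiter vanishes; for theta > 0 it equals
    2*theta / (1 + theta), which is bounded by 2*min(1, theta)."""
    return (abs(theta) + theta) / (1.0 + abs(theta))

def check_limiter_properties(thetas):
    """Return True if Phi(theta) = 0 for theta <= 0, Phi(theta) <= 2*min(1, theta)
    for theta > 0, and the symmetry Phi(theta) = theta * Phi(1/theta) holds."""
    for t in thetas:
        phi = van_leer(t)
        if t <= 0.0 and phi != 0.0:
            return False
        if t > 0.0:
            if phi > 2.0 * min(1.0, t) + 1e-12:
                return False
            if abs(phi - t * van_leer(1.0 / t)) > 1e-12:
                return False
    return True
```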
Using the symmetry property of the flux limiter $\Phi(\theta)=\theta \Phi(\theta^{-1})$ [@KT] and , we approximate $A_s\frac{\partial u}{\partial x_s}$ at point $({x_1}_{i},{x_2}_{j},\tau)$, applying and depending on the sign of $A_s=A_s^+-A_s^-$: $$\begin{aligned} \begin{split}\label{Flux} &\hspace{1in} A_s\frac{\partial u}{\partial x_s}\simeq A_s^+\Lambda_s^+u_{x_s}-A_s^-\Lambda_s^-u_{\overline{x}_s},\ \ s=\{1,2\},\\ &\Lambda_s^+=1+\frac12\Phi(\theta_{\mathrm{e}_s+1/2}^{-1})-\frac12\Phi(\theta_{\mathrm{e}_s+3/2}),\ \ \Lambda_s^-=1+\frac12\Phi(\theta_{\mathrm{e}_s+1/2})-\frac12\Phi(\theta_{\mathrm{e}_s-1/2}^{-1}), \end{split}\end{aligned}$$ where $0 \leq \Lambda_s^-\leq 2$ and $0 \leq \Lambda_s^+\leq 2$ in view of , . We implement the idea of [@R], using different stencils for the approximation of the second mixed derivative, and by we obtain the following discretization of at the point $({x_1}_{i},{x_2}_{j},\tau)$, $2<i<N_1-1$, $2<j<N_2-1$: $$\begin{aligned} \begin{split}\label{EqSD} &\frac{\partial u}{\partial \tau}-\frac12\sigma_1^2u_{\overline{x}_1x_1}-\frac12\sigma_2^2u_{\overline{x}_2x_2}-\sigma_1\sigma_2(\rho'^+u_{{x_1x_2}}^+-\rho'^-u_{{x_1x_2}}^-)\\ &\hspace{1.3in}-A_1^+\Lambda_1^+u_{x_1}+A_1^-\Lambda_1^-u_{\overline{x}_1}-A_2^+\Lambda_2^+u_{x_2}+A_2^-\Lambda_2^-u_{\overline{x}_2}+ru=0, \end{split}\end{aligned}$$ where $\widetilde{\Gamma}'_{{cross}_{i,j}}\simeq {u_{\mathring{x}_1\mathring{x}_2}}_{i,j}$ and $\rho'_{i,j}=\rho'({u_{\mathring{x}_1\mathring{x}_2}}_{i,j})$. For computing the gradient ratios at grid points with $i=\{2,N_1-1\}$ or $j=\{2,N_2-1\}$ we need the values of $u_{i,j}$ at the outer grid nodes $({x_1}_{0},{x_2}_{j},\tau)$, $({x_1}_{N_1+1},{x_2}_{j},\tau)$, $({x_1}_{i},{x_2}_{0},\tau)$ and $({x_1}_{i},{x_2}_{N_2+1},\tau)$ for $1<i<N_1$, $1<j<N_2$.
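As an illustration of the limited upwind factor $\Lambda_s^+$ above (the case $A_s>0$, on a 1-D grid), the following sketch computes $\Lambda^+$ from the gradient ratios; the guard against vanishing denominators is our addition:

```python
def van_leer(theta):
    return (abs(theta) + theta) / (1.0 + abs(theta))

def gradient_ratio(u, i):
    """theta_{i+1/2} = (u[i+1] - u[i]) / (u[i] - u[i-1]), guarded for flat steps."""
    BIG = 1e12
    num, den = u[i + 1] - u[i], u[i] - u[i - 1]
    if den == 0.0:
        return BIG if num > 0.0 else (-BIG if num < 0.0 else 1.0)
    return num / den

def lambda_plus(u, i):
    """Lambda^+ = 1 + Phi(theta_{i+1/2}^{-1})/2 - Phi(theta_{i+3/2})/2.

    Since 0 <= Phi <= 2, the factor always lies in [0, 2]."""
    t_half = gradient_ratio(u, i)        # theta_{i+1/2}
    t_3half = gradient_ratio(u, i + 1)   # theta_{i+3/2}
    inv = 1.0 / t_half if t_half != 0.0 else 0.0
    return 1.0 + 0.5 * van_leer(inv) - 0.5 * van_leer(t_3half)
```

On smooth monotone data the gradient ratios are close to $1$, so $\Lambda^+\approx 1$ and the limited difference reduces to the plain upwind quotient; on oscillatory data the limiter switches off.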
Then the second-order extrapolation formulas [@Sam] will be used $$\begin{aligned} u_{0,j}=3u_{1,j}-3u_{2,j}+u_{3,j}, \ \ u_{N_1+1,j}=3u_{N_1,j}-3u_{N_1-1,j}+u_{N_1-2,j},\\ u_{i,0}=3u_{i,1}-3u_{i,2}+u_{i,3}, \ \ u_{i,N_2+1}=3u_{i,N_2}-3u_{i,N_2-1}+u_{i,N_2-2}.\end{aligned}$$ It is trivial to incorporate Dirichlet boundary conditions on $\partial\Omega'_2$ in the numerical scheme. Thus, only for illustration, we consider the case $\partial\Omega'_1 \equiv \partial\Omega'$, $\partial\Omega'_2 \equiv \emptyset$ and impose on the whole boundary. *West boundary $\partial \Omega'_W$: $i=1$, $1<j<N_2$.* From we have $$\label{WB} -u_{\mathring{x}_{1_{1,j}}}={g}'_{1_{1,j}}(\tau),\ \ \hbox{and therefore}\ \ u_{0,j}=2h_1{g}'_{1_{1,j}}(\tau)+u_{2,j}, \ \ j=2,\dots,N_2-1.$$ Applying for $i=1$, $1<j<N_2$, where the term $-\Lambda_1^-u_{{\overline{x}_1}_{1,j}}$ is replaced by ${g}'_{1_{1,j}}(\tau)$ and $u_{0,j}$, $u_{0,j\pm 1}$ are eliminated from , we get $$\begin{aligned} \begin{split}\label{EqSD_W} &\frac{\partial u}{\partial \tau}-\frac{\sigma_1^2}{h_1}u_{x_1}-\frac12\sigma_2^2u_{\overline{x}_2x_2}-\frac{\sigma_1\sigma_2}{2}|\rho'|(u_{x_1x_2}-u_{x_1{\overline{x}_2}}) -A_1^+\Lambda_1^+u_{{x}_1}-A_2^+\Lambda_2^+u_{x_2}\\ &\hspace{0.2in}+A_2^-\Lambda_2^-u_{\overline{x}_2}+ru=A_1^-{g}'_{1}+\frac{\sigma_1^2}{h_1}g'_1-{\sigma_1}{\sigma_2}({\rho'}^+ g'_{1_{{\overline{x}_2}}}-{\rho'}^-g'_{1_{{x_2}}}),\ \ \ \ \rho'_{1,j}=\rho'({-g'_1}_{{{\mathring{x}_{2_{1,j}}}}}). 
\end{split}\end{aligned}$$ *North boundary $\partial \Omega'_N$: $1<i<N_1$, $j=N_2$.* Now is replaced by $$\label{NB} u_{\mathring{x}_{2_{i,N_2}}}={g}'_{1_{i,N_2}}(\tau)\ \ \Rightarrow \ \ u_{i,N_2+1}=2h_2{g}'_{1_{i,N_2}}(\tau)+u_{i,N_2-1}, \ \ i=2,\dots,N_1-1.$$ As before, from at point $(x_{1_{i}},x_{2_{N_2}},\tau)$, replacing $\Lambda_2^+u_{x_{2_{i,N_2}}}$ by ${g}'_{1_{i,N_2}}(\tau)$, we obtain $$\begin{aligned} \begin{split}\label{EqSD_N} &\frac{\partial u}{\partial \tau}-\frac12\sigma_1^2u_{\overline{x}_1x_1}+\frac{\sigma_2^2}{h_2}u_{\overline{x}_2}-\frac{\sigma_1\sigma_2}{2}|\rho'|(u_{\overline{x}_1\overline{x}_2}-u_{x_1{\overline{x}_2}}) -A_1^+\Lambda_1^+u_{{x}_1}+A_1^-\Lambda_1^-u_{\overline{x}_1}\\ &\hspace{0.2in}+A_2^-\Lambda_2^-u_{\overline{x}_2}+ru =A_2^+{g}'_{1}+\frac{\sigma_2^2}{h_2}g'_1+{\sigma_1}{\sigma_2}({\rho'}^+ g'_{1_{{{x}_1}}}-{\rho'}^-g'_{1_{{\overline{x}_1}}}),\ \ i=2,\dots,N_1-1, \ \ j=N_2. \end{split}\end{aligned}$$ *East boundary $\partial \Omega'_E$: $i=N_1$, $1<j<N_2$.* Similarly, is discretized by $$\label{EB} u_{\mathring{x}_{1_{N_1,j}}}={g}'_{1_{N_1,j}}(\tau)\ \ \hbox{and} \ \ u_{N_1+1,j}=2h_1{g}'_{1_{N_1,j}}(\tau)+u_{N_1-1,j}, \ \ j=2,\dots,N_2-1.$$ Thus, from written at grid node $(x_{1_{N_1}},x_{2_{j}},\tau)$, we get the approximation at the east boundary $$\begin{aligned} \begin{split}\label{EqSD_E} &\frac{\partial u}{\partial \tau}+\frac{\sigma_1^2}{h_1}u_{\overline{x}_1}-\frac12\sigma_2^2u_{\overline{x}_2x_2}-\frac{\sigma_1\sigma_2}{2}|\rho'|(u_{\overline{x}_1\overline{x}_2}-u_{\overline{x}_1{{x}_2}}) +A_1^-\Lambda_1^-u_{{\overline{x}}_1}-A_2^+\Lambda_2^+u_{x_2}\\ &\hspace{0.2in}+A_2^-\Lambda_2^-u_{\overline{x}_2}+ru=A_1^+{g}'_{1}+\frac{\sigma_1^2}{h_1}g'_1+{\sigma_1}{\sigma_2}({\rho'}^+ g'_{1_{{{x}_2}}}-{\rho'}^-g'_{1_{{\overline{x}_2}}}),\ \ \ \ \rho'_{N_1,j}=\rho'({g'_1}_{{{\mathring{x}_{2_{N_1,j}}}}}).
\end{split}\end{aligned}$$ *South boundary $\partial \Omega'_S$: $1<i<N_1$, $j=1$.* Now the corresponding discrete boundary condition in is $$\label{SB} u_{\mathring{x}_{2_{i,1}}}={g}'_{1_{i,1}}(\tau)\ \ \Rightarrow \ \ u_{i,0}=2h_2{g}'_{1_{i,1}}(\tau)+u_{i,2}, \ \ i=2,\dots,N_1-1.$$ The discretization corresponding to the south boundary is: $$\begin{aligned} \begin{split}\label{EqSD_S} &\frac{\partial u}{\partial \tau}-\frac12\sigma_1^2u_{\overline{x}_1x_1}-\frac{\sigma_2^2}{h_2}u_{{x}_2}-\frac{\sigma_1\sigma_2}{2}|\rho'|(u_{{x}_1{x}_2}-u_{\overline{x}_1{{x}_2}}) -A_1^+\Lambda_1^+u_{{x}_1}+A_1^-\Lambda_1^-u_{\overline{x}_1}\\ &\hspace{0.2in}-A_2^+\Lambda_2^+u_{{x}_2}+ru =A_2^-{g}'_{1}+\frac{\sigma_2^2}{h_2}g'_1-{\sigma_1}{\sigma_2}({\rho'}^+ g'_{1_{{{\overline{x}}_1}}}-{\rho'}^-g'_{1_{{{x}_1}}}),\ \ \ \ \rho'_{i,1}=\rho'(-{g'_1}_{{{\mathring{x}_{1_{i,1}}}}}). \end{split}\end{aligned}$$ *North-West corner node:* $i=1$, $j=N_2$. Following the same technique as before, we eliminate the artificial grid nodes arising in (written at point $i=1$, $j=N_2$), using boundary conditions for $j=N_2$ and for $i=1$, and replace $A_1^-\Lambda_1^-u_{\overline{x}_1}$ by $A_1^- g'_1$ and $A_2^+\Lambda_2^+u_{{x}_2}$ by $A_2^+ g'_1$. The treatment of the term $u_{0,N_2+1}$ is different: $$u_{0,N_2+1}=\left\{\begin{array}{ll} u_{2,N_2-1}+2h_2g'_{1_{2,N_2}}+2h_1g'_{1_{1,N_2+1}},&\hbox{applying first}\ \ \eqref{WB}, \ \ \hbox{then}\ \ \eqref{NB},\\ u_{2,N_2-1}+2h_2g'_{1_{0,N_2}}+2h_1g'_{1_{1,N_2-1}},&\hbox{applying first}\ \ \eqref{NB}, \ \ \hbox{then}\ \ \eqref{WB}.
\end{array}\right.$$ Averaging the above quantities, we obtain $$\begin{aligned} u_{0,N_2+1}&=&u_{2,N_2-1}+h_2g'_{1_{2,N_2}}+h_1g'_{1_{1,N_2+1}}+h_2g'_{1_{0,N_2}}+h_1g'_{1_{1,N_2-1}}\\ &&=u_{2,N_2-1}+2h_2g'_{1_{2,N_2}}+2h_1g'_{1_{1,N_2-1}}+2h_1h_2({g'_1}_{{{\mathring{x}_{2}}}}-{g'_1}_{{{\mathring{x}_{1}}}})_{1,N_2}.\end{aligned}$$ To compute $\rho'(u_{\mathring{x}_1\mathring{x}_2})$ at grid node $i=1$, $j=N_2$ we proceed similarly: $$u_{\mathring{x}_1\mathring{x}_2}=\left\{\begin{array}{rl}\ds g'_{1_{\mathring{x}_1}},& \hbox{applying}\ \ \eqref{WB},\\ \ds -g'_{1_{{\mathring{x}_2}}},& \hbox{applying}\ \ \eqref{NB}, \end{array}\right.\ \ u_{\mathring{x}_1\mathring{x}_2}\simeq 0.5(g'_{1_{\mathring{x}_1}}-g'_{1_{{\mathring{x}_2}}})\ \ \Rightarrow \ \ \rho'(u_{\mathring{x}_1\mathring{x}_2})\simeq \rho'(g'_{1_{\mathring{x}_1}}-g'_{1_{{\mathring{x}_2}}}),$$ as we need only the sign of $u_{\mathring{x}_1\mathring{x}_2}$. Consequently, the approximation at the North-West corner node is $$\begin{aligned} \begin{split}\label{EqSD_WN} \frac{\partial u}{\partial \tau}-\frac{\sigma_1^2}{h_1}u_{{x}_1}+\frac{\sigma_2^2}{h_2}u_{{\overline{x}}_2}+\sigma_1\sigma_2|\rho'|u_{{x}_1{\overline{x}}_2} -A_1^+\Lambda_1^+u_{{x}_1}+A_2^-\Lambda_2^-u_{{\overline{x}}_2}+ru\\ =(A_1^-+A_2^+){g}'_{1} +\left(\frac{\sigma_1^2}{h_1}+\frac{\sigma_2^2}{h_2}\right)g'_1+{\sigma_1}{\sigma_2}{\rho'^+}(g'_{1_{{{{x}}_1}}}-g'_{1_{{{\overline{x}}_2}}})+{\sigma_1}{\sigma_2}{\rho'^-}G_{NW},\ \ \hbox{where} \\ G_{NW}=g'_{1_{{{{x}}_1}}}-g'_{1_{{{\overline{x}}_2}}}+{g'_1}_{{{\mathring{x}_{2}}}}-{g'_1}_{{{\mathring{x}_{1}}}}\ \ \hbox{and} \ \ \rho'_{1,N_2}=\rho'[({g'_1}_{{{\mathring{x}_{1}}}}-{g'_1}_{{{\mathring{x}_{2}}}})_{1,N_2}]. \end{split}\end{aligned}$$ *North-East corner node:* $i=N_1$, $j=N_2$.
From , and at point $i=N_1$, $j=N_2$ we get $$\begin{aligned} \begin{split}\label{EqSD_NE} \frac{\partial u}{\partial \tau}+\frac{\sigma_1^2}{h_1}u_{{\overline{x}}_1}+\frac{\sigma_2^2}{h_2}u_{{\overline{x}}_2}-\sigma_1\sigma_2|\rho'|u_{{\overline{x}}_1{\overline{x}}_2} +A_1^-\Lambda_1^-u_{{\overline{x}}_1}+A_2^-\Lambda_2^-u_{{\overline{x}}_2}+ru\\ =(A_1^++A_2^+){g}'_{1} +\left(\frac{\sigma_1^2}{h_1}+\frac{\sigma_2^2}{h_2}\right)g'_1-{\sigma_1}{\sigma_2}{\rho'^+}G_{NE}-{\sigma_1}{\sigma_2}{\rho'^-}(g'_{1_{{{{\overline{x}}}_1}}}+g'_{1_{{{\overline{x}}_2}}}),\ \ \hbox{where} \\ G_{NE}=g'_{1_{{{{\overline{x}}}_1}}}+g'_{1_{{{\overline{x}}_2}}}-{g'_1}_{{{\mathring{x}_{2}}}}-{g'_1}_{{{\mathring{x}_{1}}}}\ \ \hbox{and} \ \ \rho'_{N_1,N_2}=\rho'[({g'_1}_{{{\mathring{x}_{1}}}}+{g'_1}_{{{\mathring{x}_{2}}}})_{N_1,N_2}]. \end{split}\end{aligned}$$ *South-East corner node:* $i=N_1$, $j=1$. Again, from , and at point $i=N_1$, $j=1$ we have $$\begin{aligned} \begin{split}\label{EqSD_SE} \frac{\partial u}{\partial \tau}+\frac{\sigma_1^2}{h_1}u_{{\overline{x}}_1}-\frac{\sigma_2^2}{h_2}u_{{{x}}_2}+\sigma_1\sigma_2|\rho'|u_{{\overline{x}}_1{{x}}_2} +A_1^-\Lambda_1^-u_{{\overline{x}}_1}-A_2^+\Lambda_2^+u_{{{x}}_2}+ru\\ =(A_1^++A_2^-){g}'_{1} +\left(\frac{\sigma_1^2}{h_1}+\frac{\sigma_2^2}{h_2}\right)g'_1-{\sigma_1}{\sigma_2}{\rho'^+}(g'_{1_{{{\overline{x}}_1}}}-g'_{1_{{{{{x}}}_2}}})-{\sigma_1}{\sigma_2}{\rho'^-}G_{SE},\ \ \hbox{where} \\ G_{SE}=g'_{1_{{{{\overline{x}}}_1}}}-g'_{1_{{{{x}}_2}}}-{g'_1}_{{{\mathring{x}_{1}}}}+{g'_1}_{{{\mathring{x}_{2}}}}\ \ \hbox{and} \ \ \rho'_{N_1,1}=\rho'[({g'_1}_{{{\mathring{x}_{2}}}}-{g'_1}_{{{\mathring{x}_{1}}}})_{N_1,1}]. \end{split}\end{aligned}$$ *South-West corner node:* $i=j=1$.
As before, from , and at point $i=1$, $j=1$ we obtain $$\begin{aligned} \begin{split}\label{EqSD_SW} \frac{\partial u}{\partial \tau}-\frac{\sigma_1^2}{h_1}u_{{{x}}_1}-\frac{\sigma_2^2}{h_2}u_{{{x}}_2}-\sigma_1\sigma_2|\rho'|u_{{{x}}_1{{x}}_2} -A_1^+\Lambda_1^+u_{{{x}}_1}-A_2^+\Lambda_2^+u_{{{x}}_2}+ru\\ =(A_1^-+A_2^-){g}'_{1} +\left(\frac{\sigma_1^2}{h_1}+\frac{\sigma_2^2}{h_2}\right)g'_1+{\sigma_1}{\sigma_2}{\rho'^+}(g'_{1_{{{{x}}_1}}}+g'_{1_{{{{{x}}}_2}}})+{\sigma_1}{\sigma_2}{\rho'^-}G_{SW},\ \ \hbox{where} \\ G_{SW}=g'_{1_{{{{{x}}}_1}}}+g'_{1_{{{{x}}_2}}}-{g'_1}_{{{\mathring{x}_{1}}}}-{g'_1}_{{{\mathring{x}_{2}}}}\ \ \hbox{and} \ \ \rho'_{1,1}=\rho'[(-{g'_1}_{{{\mathring{x}_{1}}}}-{g'_1}_{{{\mathring{x}_{2}}}})_{1,1}]. \end{split}\end{aligned}$$ Now we investigate conditions that guarantee the positivity-preserving property of the semi-discrete problem. Further, we need the following well-known results. Consider the initial value problem (IVP) for the ODE system $$u'(\tau)=g(\tau,u(\tau)), \ \ \tau\geq \tau_0,\ \ u(\tau_0)=u^0, \ \ \tau_0\in \mathbb{R},\ \ u^0\in \mathbb{R}^p, \ \ g: \mathbb{R}\times \mathbb{R}^p\rightarrow \mathbb{R}^p.\label{IVP}$$ The ODE in and the IVP are said to be positive if $g$ is continuous, the IVP has a unique solution for all $\tau_0$ and all $u^0$, and $u(\tau)\geq 0$ holds for all $\tau\geq\tau_0$ whenever $u^0\geq 0$. A semi-discretization of a given PDE (with non-negative solution) is called positive if it leads to a positive ODE system. \[L1\] Let $g$ be continuous and assume that has a unique solution for all $\tau_0$ and all $u^0$. The initial value problem is positive if and only if $$v_i=0, \ \ v_j\geq 0 \ \ \hbox{for all}\ \ j\neq i\ \ \Rightarrow \ \ g_i(\tau,v)\geq 0$$ holds for all $\tau$, any vector $v \in \mathbb{R}^p$, and all $i=1,\dots, p$. As a consequence of Lemma \[L1\] we have [**([@H1 p. 34])**]{.nodecor} \[C1\] A linear system $u'(\tau)=Au(\tau)$, $A=\{a_{i,j}\}$ is positive iff $a_{i,j}\geq 0$ for all $i\neq j$.
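Corollary \[C1\] admits a simple computational illustration: for a matrix with non-negative off-diagonal entries, explicit Euler steps with a sufficiently small step map non-negative states to non-negative states. A sketch (the matrices below are arbitrary examples, not from the text):

```python
def is_positive_system(A):
    """Corollary C1: u' = A u is positive iff a_ij >= 0 for all i != j."""
    n = len(A)
    return all(A[i][j] >= 0.0 for i in range(n) for j in range(n) if i != j)

def euler_step(A, u, dt):
    """One explicit Euler step u <- u + dt * A u."""
    n = len(A)
    return [u[i] + dt * sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]
```

With `dt` small enough that `1 + dt * a_ii > 0`, each component of the Euler update is a non-negative combination of non-negative components, which mirrors the "only if" direction of the corollary.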
Guided by these results, we can apply (just as in [@GGWC]) the statement of Lemma \[L1\] and Corollary \[C1\] to the numerical discretization of -, written in the form $$\begin{aligned} \frac{d u_{i,j}}{d\tau}=C_{i+1,j}u_{i+1,j}+C_{i-1,j}u_{i-1,j}+C_{i,j+1}u_{i,j+1}+C_{i,j-1}u_{i,j-1}+C_{i+1,j-1}u_{i+1,j-1}\nonumber\\+C_{i-1,j-1}u_{i-1,j-1} +C_{i-1,j+1}u_{i-1,j+1}+C_{i+1,j+1}u_{i+1,j+1}\label{PDS}\\-C_{i,j}u_{i,j}+g(\tau),\ \ \ \ i=1,\dots,N_1,\ \ j=1,\dots,N_2.\nonumber\end{aligned}$$ \[L2\] The ODE system defined by is positive if all coefficients $C_{\Sigma_{i,j}}=\{C_{i\pm1,j}, C_{i,j\pm1}$, $C_{i\pm1,j\pm1}\}$ are non-negative and $g(\tau)\geq 0$. **Proof.** The result follows from Lemma \[L1\]. $\Box$ \[Th1\] The numerical discretization , combined with Dirichlet boundary conditions (on $\partial \Omega'_2$) and approximations , , , and , , , of the Neumann boundary conditions, depending on the boundary $\partial \Omega'_1$, is positive if $$\begin{aligned} \begin{split}\label{RH} \frac{\sigma_1}{\sigma_2}\max\limits_{{1+b_W\leq i\leq N_1-b_E}\atop{1+b_S\leq j\leq N_2-b_N}}|\rho'| \leq\frac{h_1}{h_2}\leq\frac{\sigma_1}{\sigma_2\max\limits_{{1+b_W\leq i\leq N_1-b_E}\atop{1+b_S\leq j\leq N_2-b_N}}|\rho'|},\ \ \hbox{where}\\ b_Q=\left\{\begin{array}{ll} 1,& \partial \Omega'_Q \subseteq \partial \Omega'_2,\\ 0, & \hbox{elsewhere} \end{array}\right., \ \ Q=\{W,E,N,S\}. \end{split}\end{aligned}$$ **Proof.** First we consider the discretization at inner points: $2<i<N_1-1$, $2<j<N_2-1$.
Taking into account that $|\rho'_{i,j}|=\rho'^+_{i,j}+\rho'^-_{i,j}$, the coefficients corresponding to are $$\begin{aligned} &\ds C_{i\pm1,j}=\frac{\sigma_1^2}{2h_1^2}-\frac{\sigma_1\sigma_2|\rho'_{i,j}|}{2h_1h_2}+\frac{A_{1}^\pm\Lambda_{1_{i,j}}^\pm}{h_1},\\ &\ds C_{i,j\pm1}=\frac{\sigma_2^2}{2h_2^2}- \frac{\sigma_1\sigma_2|\rho'_{i,j}|}{2h_1h_2}+\frac{A_{2}^\pm\Lambda_{2_{i,j}}^\pm}{h_2},\\ &\ds C_{i-1,j+1}=C_{i+1,j-1}=\frac{\sigma_1\sigma_2\rho'^-}{2h_1h_2}, \ \ \ \ C_{i-1,j-1}=C_{i+1,j+1}=\frac{\sigma_1\sigma_2\rho'^+}{2h_1h_2}, \ \ g\equiv 0.\end{aligned}$$ To ensure the condition of Lemma \[L2\] we require $$\label{Es1} \frac{\sigma_1}{\sigma_2}\max\limits_{{1<i<N_1}\atop{1<j<N_2}}|\rho'| \leq\frac{h_1}{h_2}\leq\frac{\sigma_1}{\sigma_2\max\limits_{{1<i<N_1}\atop{1<j<N_2}}|\rho'|}.$$ For the equation corresponding to the Neumann condition imposed on the East boundary ($i=N_1$, $1<j<N_2$), from we have $$\begin{aligned} &\ds C_{N_1-1,j}=\frac{\sigma_1^2}{h_1^2}-\frac{\sigma_1\sigma_2|\rho'_{N_1,j}|}{h_1h_2}+\frac{A_{1}^-\Lambda_{1_{N_1,j}}^-}{h_1},\\ &\ds C_{N_1,j\pm1}=\frac{\sigma_2^2}{2h_2^2}- \frac{\sigma_1\sigma_2|\rho'_{N_1,j}|}{2h_1h_2}+\frac{A_{2}^\pm\Lambda_{2_{N_1,j}}^\pm}{h_2},\\ &\ds C_{N_1-1,j \pm 1}=\frac{\sigma_1\sigma_2|\rho'_{N_1,j}|}{2h_1h_2}, \ \ \ \ g_{N_1,j}=A_1^+{g}'_{1_{N_1,j}}+\frac{\sigma_1^2}{h_1}g'_{1_{N_1,j}}+{\sigma_1}{\sigma_2}({\rho'}^+ g'_{1_{{{x}_2}}}-{\rho'}^-g'_{1_{{\overline{x}_2}}})_{N_1,j}.\end{aligned}$$ It is easy to verify that $C_{\Sigma_{N_1,j}}\geq 0$ and $g_{N_1,j}\geq 0$ if $$\label{Es2} \frac{\sigma_1}{\sigma_2}\max\limits_{{1<j<N_2}}|\rho'_{N_1,j}| \leq\frac{h_1}{h_2}\leq\frac{\sigma_1}{\sigma_2\max\limits_{{1<j<N_2}}|\rho'_{N_1,j}|}.$$ Similarly, from , , , corresponding to the Neumann boundary conditions on $\partial \Omega'_{\{W,S,N\}}$, respectively, to guarantee that $C_{\Sigma_{\partial \Omega'_{\{W,S,N\}}}}\geq 0$ and $g_{\partial \Omega'_{\{W,S,N\}}}\geq 0$, we obtain the estimates $$\begin{aligned}
\begin{split}\label{Es3} &\hspace{1.5in}\frac{\sigma_1}{\sigma_2}\max\limits_{{1<j<N_2}}|\rho'_{1,j}| \leq\frac{h_1}{h_2}\leq\frac{\sigma_1}{\sigma_2\max\limits_{{1<j<N_2}}|\rho'_{1,j}|}, \\& \frac{\sigma_1}{\sigma_2}\max\limits_{{1<i<N_1}}|\rho'_{i,1}| \leq\frac{h_1}{h_2}\leq\frac{\sigma_1}{\sigma_2\max\limits_{{1<i<N_1}}|\rho'_{i,1}|},\ \ \frac{\sigma_1}{\sigma_2}\max\limits_{{1<i<N_1}}|\rho'_{i,N_2}| \leq\frac{h_1}{h_2}\leq\frac{\sigma_1}{\sigma_2\max\limits_{{1<i<N_1}}|\rho'_{i,N_2}|}. \end{split}\end{aligned}$$ A similar estimate is obtained from the discretization at a corner node where two Neumann boundaries intersect. For example, let $\{\partial \Omega'_N, \partial \Omega'_E \}\subseteq \partial \Omega'_1$; then from , for all elements of $C_{\Sigma_{N_1,N_2}}$ and $g_{N_1,N_2}$, we have $$\begin{aligned} &\ds C_{N_1-1,N_2}=\frac{\sigma_1^2}{h_1^2}-\frac{\sigma_1\sigma_2|\rho'_{N_1,N_2}|}{h_1h_2}+\frac{A_{1}^-\Lambda_{1_{N_1,N_2}}^-}{h_1},\\ &\ds C_{N_1,N_2-1}=\frac{\sigma_2^2}{h_2^2}- \frac{\sigma_1\sigma_2|\rho'_{N_1,N_2}|}{h_1h_2}+\frac{A_{2}^-\Lambda_{2_{N_1,N_2}}^-}{h_2},\ \ C_{N_1-1,N_2- 1}=\frac{\sigma_1\sigma_2|\rho'_{N_1,N_2}|}{h_1h_2}, \\ &\ds g_{N_1,N_2}=\left(A_1^++A_2^++\frac{\sigma_1^2}{h_1}+\frac{\sigma_2^2}{h_2}\right){g}'_{1_{N_1,N_2}} -{\sigma_1}{\sigma_2}[{\rho'^+}G_{NE}+{\rho'^-}(g'_{1_{{{{\overline{x}}}_1}}}+g'_{1_{{{\overline{x}}_2}}})]_{N_1,N_2}.\end{aligned}$$ The requirement $C_{\Sigma_{N_1,N_2}}\geq 0$ and $g_{N_1,N_2}\geq 0$ leads to the estimate $$\begin{aligned} \begin{split}\label{Es4} \frac{\sigma_1}{\sigma_2}|\rho'_{N_1,N_2}| \leq\frac{h_1}{h_2}\leq\frac{\sigma_1}{\sigma_2|\rho'_{N_1,N_2}|}.
\end{split}\end{aligned}$$ Similarly, from , , we get $$\label{Es5} \frac{\sigma_1}{\sigma_2}|\rho'_{1,N_2}| \leq\frac{h_1}{h_2}\leq\frac{\sigma_1}{\sigma_2|\rho'_{1,N_2}|}, \ \ \frac{\sigma_1}{\sigma_2}|\rho'_{N_1,1}| \leq\frac{h_1}{h_2}\leq\frac{\sigma_1}{\sigma_2|\rho'_{N_1,1}|},\ \ \frac{\sigma_1}{\sigma_2}|\rho'_{1,1}| \leq\frac{h_1}{h_2}\leq\frac{\sigma_1}{\sigma_2|\rho'_{1,1}|}.$$ Collecting all results -, we obtain . $\Box$ Full discretization =================== In this section we develop an implicit-explicit second-order numerical algorithm which preserves the positivity property of the solution. Semi-implicit and implicit methods are used for the diffusion terms (the non-linear term is computed at the old time level) and the reaction term, respectively, while the convection terms are approximated explicitly. The grid points over the time interval $[0,T]$ are defined by $\tau_{n}=\tau_{n-1}+\triangle\tau$, $n=1,2,\dots$, $\tau_0=0$. The approximation of $u({x_1}_i,{x_2}_j,\tau_n)$ is denoted by $u_{i,j}^n$, but further, for simplicity, we use the notations ${\widehat{u}}_{i,j}:=u^n_{i,j}$, ${u}_{i,j}:=u^{n-1}_{i,j}$ and $\widehat{u}_t:=(\widehat{u}-u)/\triangle\tau$. The full discretization of is $$\begin{aligned} \begin{split}\label{EqSD_FD} &\widehat{u}_t-\frac12\sigma_1^2\widehat{u}_{\overline{x}_1x_1}-\frac12\sigma_2^2\widehat{u}_{\overline{x}_2x_2}-\sigma_1\sigma_2(\rho'^+\widehat{u}_{{x_1x_2}}^+-\rho'^-\widehat{u}_{{x_1x_2}}^-)+r\widehat{u} =A_1^+\Lambda_1^+u_{x_1}-A_1^-\Lambda_1^-u_{\overline{x}_1}\\ &+A_2^+\Lambda_2^+u_{x_2}-A_2^-\Lambda_2^-u_{\overline{x}_2},\ \ i=2,\dots,N_1-1, \ \ j=2,\dots,N_2-1.
\end{split}\end{aligned}$$ For non-homogeneous Neumann boundaries (if any) we obtain from , , , the following discretization $$\begin{aligned} \begin{split}\label{EqSD_W_FD} &\widehat{u}_t-\frac{\sigma_1^2}{h_1}\widehat{u}_{x_1}-\frac12\sigma_2^2\widehat{u}_{\overline{x}_2x_2}-\frac{\sigma_1\sigma_2}{2}|\rho'|(\widehat{u}_{x_1x_2}-\widehat{u}_{x_1{\overline{x}_2}})+r\widehat{u}= A_1^+\Lambda_1^+u_{{x}_1}+A_2^+\Lambda_2^+u_{x_2}\\ &-A_2^-\Lambda_2^-u_{\overline{x}_2}+A_1^-{\widehat{g}}'_{1}+\frac{\sigma_1^2}{h_1}\widehat{g}'_1-{\sigma_1}{\sigma_2}({\rho'}^+ \widehat{g}'_{1_{{\overline{x}_2}}}-{\rho'}^-\widehat{g}'_{1_{{x_2}}}),\ \ \ i=1,\ \ j=2,\dots,N_2-1. \end{split}\\[0.1in] \begin{split}\label{EqSD_N_FD} &\widehat{u}_t-\frac12\sigma_1^2\widehat{u}_{\overline{x}_1x_1}+\frac{\sigma_2^2}{h_2}\widehat{u}_{\overline{x}_2}-\frac{\sigma_1\sigma_2}{2}|\rho'|(\widehat{u}_{\overline{x}_1\overline{x}_2}-\widehat{u}_{x_1{\overline{x}_2}})+r\widehat{u} =A_1^+\Lambda_1^+u_{{x}_1}-A_1^-\Lambda_1^-u_{\overline{x}_1}\\&-A_2^-\Lambda_2^-u_{\overline{x}_2}+ A_2^+{\widehat{g}}'_{1}+\frac{\sigma_2^2}{h_2}\widehat{g}'_1+{\sigma_1}{\sigma_2}({\rho'}^+ \widehat{g}'_{1_{{{x}_1}}}-{\rho'}^-\widehat{g}'_{1_{{\overline{x}_1}}}),\ \ i=2,\dots,N_1-1, \ \ j=N_2. \end{split}%\\[0.1in]\end{aligned}$$ $$\begin{aligned} \begin{split}\label{EqSD_E_FD} &\widehat{u}_t+\frac{\sigma_1^2}{h_1}\widehat{u}_{\overline{x}_1}-\frac12\sigma_2^2\widehat{u}_{\overline{x}_2x_2}-\frac{\sigma_1\sigma_2}{2}|\rho'|(\widehat{u}_{\overline{x}_1\overline{x}_2}-\widehat{u}_{\overline{x}_1{{x}_2}})+r\widehat{u}= -A_1^-\Lambda_1^-u_{{\overline{x}}_1}+A_2^+\Lambda_2^+u_{x_2}\\ &-A_2^-\Lambda_2^-u_{\overline{x}_2}+A_1^+{\widehat{g}}'_{1}+\frac{\sigma_1^2}{h_1}\widehat{g}'_1+{\sigma_1}{\sigma_2}({\rho'}^+ \widehat{g}'_{1_{{{x}_2}}}-{\rho'}^-\widehat{g}'_{1_{{\overline{x}_2}}}),\ \ i=N_1, \ \ j=2,\dots,N_2-1.
\end{split}\\[0.1in] \begin{split}\label{EqSD_S_FD} &\hspace{-0.08in}\widehat{u}_t-\frac12\sigma_1^2\widehat{u}_{\overline{x}_1x_1}-\frac{\sigma_2^2}{h_2}\widehat{u}_{{x}_2}-\frac{\sigma_1\sigma_2}{2}|\rho'|(\widehat{u}_{{x}_1{x}_2}-\widehat{u}_{\overline{x}_1{{x}_2}})+r\widehat{u} =A_1^+\Lambda_1^+u_{{x}_1}-A_1^-\Lambda_1^-u_{\overline{x}_1}\\ &+A_2^+\Lambda_2^+u_{{x}_2} +A_2^-{\widehat{g}}'_{1}+\frac{\sigma_2^2}{h_2}\widehat{g}'_1-{\sigma_1}{\sigma_2}({\rho'}^+ \widehat{g}'_{1_{{{\overline{x}}_1}}}-{\rho'}^-\widehat{g}'_{1_{{{x}_1}}}),\ \ i=2,\dots,N_1-1,\ \ j=1. \end{split}\end{aligned}$$ Finally, for the corner nodes, where two Neumann boundaries intersect, from , , , we have $$\begin{aligned} \begin{split}\label{EqSD_WN_FD} &\hspace{-0.65in}\widehat{u}_t-\frac{\sigma_1^2}{h_1}\widehat{u}_{{x}_1}+\frac{\sigma_2^2}{h_2}\widehat{u}_{{\overline{x}}_2}+\sigma_1\sigma_2|\rho'|\widehat{u}_{{x}_1{\overline{x}}_2}+r\widehat{u} =A_1^+\Lambda_1^+u_{{x}_1}-A_2^-\Lambda_2^-u_{{\overline{x}}_2}+(A_1^-+A_2^+){\widehat{g}}'_{1}\\& +\left(\frac{\sigma_1^2}{h_1}+\frac{\sigma_2^2}{h_2}\right)\widehat{g}'_1+{\sigma_1}{\sigma_2}{\rho'^+}(\widehat{g}'_{1_{{{{x}}_1}}}-\widehat{g}'_{1_{{{\overline{x}}_2}}})+{\sigma_1}{\sigma_2}{\rho'^-}\widehat{G}_{NW},\ \ i=1,\ \ j=N_2. \end{split}\\[0.1in] \begin{split}\label{EqSD_NE_FD} &\ \ \; \widehat{u}_t+\frac{\sigma_1^2}{h_1}\widehat{u}_{{\overline{x}}_1}+\frac{\sigma_2^2}{h_2}\widehat{u}_{{\overline{x}}_2}-\sigma_1\sigma_2|\rho'|\widehat{u}_{{\overline{x}}_1{\overline{x}}_2}+r\widehat{u} =-A_1^-\Lambda_1^-u_{{\overline{x}}_1}-A_2^-\Lambda_2^-u_{{\overline{x}}_2}+(A_1^++A_2^+){\widehat{g}}'_{1}\\&\hspace{0.7in} +\left(\frac{\sigma_1^2}{h_1}+\frac{\sigma_2^2}{h_2}\right)\widehat{g}'_1-{\sigma_1}{\sigma_2}{\rho'^+}\widehat{G}_{NE}-{\sigma_1}{\sigma_2}{\rho'^-}(\widehat{g}'_{1_{{{{\overline{x}}}_1}}}+\widehat{g}'_{1_{{{\overline{x}}_2}}}), \ \ i=N_1,\ \ j=N_2.
\end{split}\\[0.1in] \begin{split}\label{EqSD_SE_FD} &\widehat{u}_t+\frac{\sigma_1^2}{h_1}\widehat{u}_{{\overline{x}}_1}-\frac{\sigma_2^2}{h_2}\widehat{u}_{{{x}}_2}+\sigma_1\sigma_2|\rho'|\widehat{u}_{{\overline{x}}_1{{x}}_2}+r\widehat{u} =-A_1^-\Lambda_1^-u_{{\overline{x}}_1}+A_2^+\Lambda_2^+u_{{{x}}_2}+(A_1^++A_2^-){\widehat{g}}'_{1}\\ &\hspace{0.7in} +\left(\frac{\sigma_1^2}{h_1}+\frac{\sigma_2^2}{h_2}\right)\widehat{g}'_1-{\sigma_1}{\sigma_2}{\rho'^+}(\widehat{g}'_{1_{{{\overline{x}}_1}}}-\widehat{g}'_{1_{{{{{x}}}_2}}})-{\sigma_1}{\sigma_2}{\rho'^-}\widehat{G}_{SE},\ \ i=N_1,\ \ j=1. \end{split}%\\[0.1in]\end{aligned}$$ $$\begin{aligned} \begin{split}\label{EqSD_SW_FD} &\hspace{-0.75in}\widehat{u}_t-\frac{\sigma_1^2}{h_1}\widehat{u}_{{{x}}_1}-\frac{\sigma_2^2}{h_2}\widehat{u}_{{{x}}_2}-\sigma_1\sigma_2|\rho'|\widehat{u}_{{{x}}_1{{x}}_2}+r\widehat{u} =A_1^+\Lambda_1^+u_{{{x}}_1}+A_2^+\Lambda_2^+u_{{{x}}_2} +(A_1^-+A_2^-){\widehat{g}}'_{1}\\& +\left(\frac{\sigma_1^2}{h_1}+\frac{\sigma_2^2}{h_2}\right)\widehat{g}'_1+{\sigma_1}{\sigma_2}{\rho'^+}(\widehat{g}'_{1_{{{{x}}_1}}}+\widehat{g}'_{1_{{{{{x}}}_2}}})+{\sigma_1}{\sigma_2}{\rho'^-}\widehat{G}_{SW},\ \ i=1,\ \ j=1. \end{split}\end{aligned}$$ Next, we discuss positivity preserving property and stability of the numerical solution. 
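Both kinds of restrictions used in the positivity analysis, the mesh-ratio bound on $h_1/h_2$ from the previous section and the time-step bound on $\triangle\tau$ appearing below, are cheap to pre-check when setting up the grid; a sketch with helper names of our own choosing:

```python
def mesh_ratio_ok(h1, h2, sigma1, sigma2, rho_max):
    """Check (sigma1/sigma2)*rho_max <= h1/h2 <= sigma1/(sigma2*rho_max),
    where rho_max is the maximum of |rho'| over the relevant grid nodes."""
    if rho_max == 0.0:
        return True  # the mixed-derivative term vanishes, no constraint
    ratio = h1 / h2
    return (sigma1 / sigma2) * rho_max <= ratio <= sigma1 / (sigma2 * rho_max)

def max_time_step(h1, h2, A1, A2):
    """Largest dt satisfying dt <= h1*h2 / (2*(|A1|*h2 + |A2|*h1))."""
    return h1 * h2 / (2.0 * (abs(A1) * h2 + abs(A2) * h1))
```

For instance, with $\sigma_1=\sigma_2$ and $h_1=h_2$ the mesh-ratio condition holds for any $|\rho'|\leq 1$, while strongly anisotropic grids violate it once $|\rho'|$ grows.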
The system , associated with Dirichlet boundary conditions and the discretization - in the case of Neumann boundaries, can be written in the following compact form $$\begin{aligned} &\ds -C_{i+1,j}\widehat{u}_{i+1,j}-C_{i-1,j}\widehat{u}_{i-1,j}-C_{i,j+1}\widehat{u}_{i,j+1}-C_{i,j-1}\widehat{u}_{i,j-1}-C_{i+1,j-1}\widehat{u}_{i+1,j-1}\nonumber\\[-0.08in] & \label{CF}\\[-0.08in] & \ds -C_{i-1,j-1}\widehat{u}_{i-1,j-1} -C_{i-1,j+1}\widehat{u}_{i-1,j+1}-C_{i+1,j+1}\widehat{u}_{i+1,j+1}+C_{i,j}\widehat{u}_{i,j}=f_{i,j},\nonumber\end{aligned}$$ for $i=1,\dots,N_1$, $j=1,\dots,N_2$, and the equivalent matrix form $$\begin{aligned} & \ds\mathcal{M}\widehat{U}=\mathcal{F},\ \ \hbox{where}\\& \ds U=[\underbrace{u_{1,1},u_{2,1},\dots,u_{N_1,1}}_{j=1},\dots,\underbrace{u_{1,j},u_{2,j},\dots,u_{N_1,j}}_{2\leq j\leq N_2-1},\dots,\underbrace{u_{1,N_2},u_{2,N_2},\dots,u_{N_1,N_2}}_{j=N_2}]^T,\end{aligned}$$ where $\mathcal{M}=\{m_{k,p}\}$ is a square $N_1N_2\times N_1N_2$ matrix and $\mathcal{F}=\{f_k\}$, $k=i+(j-1)N_1$, is a column vector with $N_1N_2$ entries known from the previous time level. Following Corollary 3.20 [@V p.91], if $\mathcal{M}$ is a diagonally dominant matrix with $m_{k,p}\leq 0$ for all $k\neq p$ and $m_{k,k}>0$ for all $1\leq k\leq N_1N_2$, then $\mathcal{M}^{-1}>0$. Thus, if $\mathcal{F}\geq0$, we can conclude that $\widehat{U}\geq 0$. On this basis we can prove the following statement. If $g_s\geq 0$, $s=0,1,2$, holds and $$\label{RT} \triangle \tau\leq \frac{h_1h_2}{2(|A_{1}|h_2+|A_{2}|h_1)},$$ then the numerical solution of the problem - (respectively -), obtained by , associated with Dirichlet boundary conditions and discretization - (depending on $\partial\Omega$), is non-negative. **Proof.** We apply the induction method: the statement holds for $\tau_0=0$; we assume that it holds at time $\tau_{n-1}$ and prove that it holds at time $\tau_{n}$. Thus, via the time integration, the corresponding assertion holds at each time level. Let $u^{n-1}\geq 0$.
First, using the compact form of the presented numerical scheme, we show that $\mathcal{M}^{-1}>0$, which means that the matrix $\mathcal{M}$ possesses the above-mentioned properties, i.e. for all $i=1,\dots,N_1$ and $j=1,\dots,N_2$: $\mathcal{M}$ is diagonally dominant, which is equivalent to $|C_{i,j}|\geq \hspace{-0.3in}\sum\limits_{C_{i+s_1,j+s_2} \in C_{\sum_{i,j}}}\hspace{-0.3in}|C_{i+s_1,j+s_2}|$;\ $m_{k,p}\leq 0$ for all $k\neq p$, equivalent to $C_{{i+s_1,j+s_2}}\geq 0$ for all $C_{i+s_1,j+s_2} \in C_{\sum_{i,j}}$;\ and $m_{k,k}>0$ for all $1\leq k\leq N_1N_2$, equivalent to $C_{i,j}> 0$.\ Then we find the condition which guarantees the non-negativity of the right-hand side $\mathcal{F}$.\ At inner points $2\leq i \leq N_1-1$, $2\leq j \leq N_2-1$ from we get the corresponding coefficients of and $\mathcal{F}$ $$\begin{aligned} &\ds \hspace{-0.1in}C_{i,j}=\frac{1}{\triangle \tau}+\frac{\sigma_1^2}{h_1^2}+\frac{\sigma_2^2}{h_2^2}-\frac{\sigma_1\sigma_2|\rho'_{i,j}|}{h_1h_2}+r, \; C_{i\pm1,j}=\frac{\sigma_1^2}{2h_1^2}-\frac{\sigma_1\sigma_2|\rho'_{i,j}|}{2h_1h_2},\; C_{i,j\pm 1}=\frac{\sigma_2^2}{2h_2^2}- \frac{\sigma_1\sigma_2|\rho'_{i,j}|}{2h_1h_2},\nonumber\\ &\ds C_{i-1,j+1}=C_{i+1,j-1}=\frac{\sigma_1\sigma_2\rho'^-_{i,j}}{2h_1h_2}, \ \ \ \ C_{i-1,j-1}=C_{i+1,j+1}=\frac{\sigma_1\sigma_2\rho'^+_{i,j}}{2h_1h_2},\nonumber\\ [-0.1in] & \label{TP_1} \\ [-0.1in] &\ds f_{i,j}=\frac{1}{\triangle \tau}u_{i,j}+A_{1}^+\Lambda_{1_{i,j}}^+\frac{u_{i+1,j}-u_{i,j}}{h_1}-A_{1}^-\Lambda_{1_{i,j}}^-\frac{u_{i,j}-u_{i-1,j}}{h_1}\nonumber\\ &\ds +A_{2}^+\Lambda_{2_{i,j}}^+\frac{u_{i,j+1}-u_{i,j}}{h_2}-A_{2}^-\Lambda_{2_{i,j}}^-\frac{u_{i,j}-u_{i,j-1}}{h_2}.\nonumber\end{aligned}$$ Properties - are fulfilled, owing to . We have $|C_{i,j}|- \hspace{-0.3in}\sum\limits_{C_{i+s_1,j+s_2} \in C_{\sum_{i,j}}}\hspace{-0.3in}|C_{i+s_1,j+s_2}|=\frac{1}{\triangle \tau}+r$, $C_{i,j}\geq \frac{1}{\triangle \tau}+r >0$ and all $C_{i+s_1,j+s_2} \in C_{\sum_{i,j}}$ are non-negative.
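The inner-node coefficient formulas just listed, together with the identity $|C_{i,j}|-\sum|C_{i+s_1,j+s_2}|=1/\triangle\tau+r$, can be checked directly in code; a sketch with illustrative parameter values:

```python
def inner_coeffs(s1, s2, rho, h1, h2, dt, r):
    """Coefficients of the implicit system at an inner grid node.

    Returns (c_diag, c_x, c_y, c_anti, c_main): the diagonal entry, the
    (i+-1, j) and (i, j+-1) neighbours, and the two diagonal neighbour pairs."""
    rho_p, rho_m = max(0.0, rho), max(0.0, -rho)
    mixed = s1 * s2 * abs(rho) / (2.0 * h1 * h2)
    c_diag = 1.0 / dt + s1 ** 2 / h1 ** 2 + s2 ** 2 / h2 ** 2 - 2.0 * mixed + r
    c_x = s1 ** 2 / (2.0 * h1 ** 2) - mixed      # C_{i+-1, j}
    c_y = s2 ** 2 / (2.0 * h2 ** 2) - mixed      # C_{i, j+-1}
    c_anti = s1 * s2 * rho_m / (2.0 * h1 * h2)   # C_{i-1,j+1} = C_{i+1,j-1}
    c_main = s1 * s2 * rho_p / (2.0 * h1 * h2)   # C_{i-1,j-1} = C_{i+1,j+1}
    return c_diag, c_x, c_y, c_anti, c_main
```

With $\sigma_1=\sigma_2$ and $h_1=h_2$, the off-diagonal entries are non-negative for any $|\rho'|\leq 1$, and the diagonal exceeds the off-diagonal sum by exactly $1/\triangle\tau+r$.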
To ensure the property we require $$\frac{1}{\triangle \tau}-\frac{A_{1}^+\Lambda_{1_{i,j}}^+}{h_1}-\frac{A_{1}^-\Lambda_{1_{i,j}}^-}{h_1} -\frac{A_{2}^+\Lambda_{2_{i,j}}^+}{h_2}-\frac{A_{2}^-\Lambda_{2_{i,j}}^-}{h_2}\geq 0,$$ which leads to restriction . Let for instance $\partial \Omega'_E \subseteq \partial \Omega'_1$. Thus from we have $$\begin{aligned} &\ds \hspace{-0.1in}C_{N_1,j}=\frac{1}{\triangle \tau}+\frac{\sigma_1^2}{h_1^2}+\frac{\sigma_2^2}{h_2^2}-\frac{\sigma_1\sigma_2|\rho'_{N_1,j}|}{h_1h_2}+r, \ \ \ \ C_{N_1-1,j}=\frac{\sigma_1^2}{h_1^2}-\frac{\sigma_1\sigma_2|\rho'_{N_1,j}|}{h_1h_2},\nonumber\\ &\ds C_{N_1,j\pm 1}=\frac{\sigma_2^2}{2h_2^2}- \frac{\sigma_1\sigma_2|\rho'_{N_1,j}|}{2h_1h_2},\ \ \ \ C_{N_1-1,j\pm 1}=\frac{\sigma_1\sigma_2|\rho'_{N_1,j}|}{2h_1h_2}, \nonumber\\ [-0.1in] & \label{TP_2} \\ [-0.1in] &\ds f_{N_1,j}=\frac{1}{\triangle \tau}u_{N_1,j}+\left(A_{1}^++\frac{\sigma_1^2}{h_1}-\frac{\sigma_1\sigma_2|\rho'_{N_1,j}|}{h_2}\right)\widehat{g}'_{1_{N_1,j}}+\frac{\sigma_1\sigma_2\rho'^+_{N_1,j}}{h_2}\widehat{g}'_{1_{N_1,j+1}}+\frac{\sigma_1\sigma_2\rho'^-_{N_1,j}}{h_2}\widehat{g}'_{1_{N_1,j-1}}\nonumber\\ & \ds -A_{1}^-\Lambda_{1_{N_1,j}}^-\frac{u_{N_1,j}-u_{N_1-1,j}}{h_1} +A_{2}^+\Lambda_{2_{N_1,j}}^+\frac{u_{N_1,j+1}-u_{N_1,j}}{h_2}-A_{2}^-\Lambda_{2_{N_1,j}}^-\frac{u_{N_1,j}-u_{N_1,j-1}}{h_2},\nonumber\end{aligned}$$ As before - follows from . The right-hand side is non-negative if additionally to we have $$\frac{1}{\triangle \tau}-\frac{A_{1}^-\Lambda_{1_{N_1,j}}^-}{h_1} -\frac{A_{2}^+\Lambda_{2_{N_1,j}}^+}{h_2}-\frac{A_{2}^-\Lambda_{2_{N_1,j}}^-}{h_2}\geq 0 \ \ \hbox{and therefore restriction}\ \ \eqref{RT}.$$ From equations , and we obtain similar results. Consider now the corner node $i=N_1$, $j=N_2$, $\{\partial \Omega'_N,\partial \Omega'_E\} \subseteq \partial \Omega'_1$. 
From we determine $$\begin{aligned} &\ds \hspace{-0.1in}C_{N_1,N_2}=\frac{1}{\triangle \tau}+\frac{\sigma_1^2}{h_1^2}+\frac{\sigma_2^2}{h_2^2}-\frac{\sigma_1\sigma_2|\rho'_{N_1,N_2}|}{h_1h_2}+r, \ \ \ \ C_{N_1-1,N_2}=\frac{\sigma_1^2}{h_1^2}-\frac{\sigma_1\sigma_2|\rho'_{N_1,N_2}|}{h_1h_2},\nonumber \\ &\ds C_{N_1,N_2- 1}=\frac{\sigma_2^2}{h_2^2}- \frac{\sigma_1\sigma_2|\rho'_{N_1,N_2}|}{h_1h_2},\ \ \ \ C_{N_1-1,N_2- 1}=\frac{\sigma_1\sigma_2|\rho'_{N_1,N_2}|}{h_1h_2}, \nonumber %\\ \end{aligned}$$ $$\begin{aligned} &\ds f_{N_1,N_2}=\frac{1}{\triangle \tau}u_{N_1,N_2}+\left(A_{1}^++A_{2}^++\frac{\sigma_1^2}{h_1}+\frac{\sigma_2^2}{h_2}-\frac{\sigma_1\sigma_2|\rho'_{N_1,N_2}|}{h_2}-\frac{\sigma_1\sigma_2|\rho'_{N_1,N_2}|}{h_1}\right)\widehat{g}'_{1_{N_1,N_2}}\nonumber\\ [-0.1in] & \label{TP_3}\\ [-0.1in] &\ds +\frac{\sigma_1\sigma_2}{2h_2}\left[\left(|\rho'_{N_1,N_2}|+\rho'^-_{N_1,N_2}\right)\widehat{g}'_{1_{N_1,N_2-1}}+\rho'^+_{N_1,N_2}\widehat{g}'_{1_{N_1,N_2+1}}\right] \nonumber\\ &\ds +\frac{\sigma_1\sigma_2}{2h_1}\left[\left(|\rho'_{N_1,N_2}|+\rho'^-_{N_1,N_2}\right)\widehat{g}'_{1_{N_1-1,N_2}} +\rho'^+_{N_1,N_2}\widehat{g}'_{1_{N_1+1,N_2}}\right] \nonumber\\ & \ds -A_{1}^-\Lambda_{1_{N_1,N_2}}^-\frac{u_{N_1,N_2}-u_{N_1-1,N_2}}{h_1} -A_{2}^-\Lambda_{2_{N_1,N_2}}^-\frac{u_{N_1,N_2}-u_{N_1,N_2-1}}{h_2}.\nonumber\end{aligned}$$ Evidently, restrictions and guarantee properties - . Similar considerations can be applied for , and . $\Box$ The next results concern the stability of the presented numerical method. If $\partial \Omega_1\equiv \emptyset$, or $\partial \Omega_1\not \equiv \emptyset$ and $g_1=0$, and if $g_s\geq 0$, $s=0,2$, and both and hold, then the numerical solution of the problem - (respectively -), obtained by , associated with Dirichlet boundary conditions and discretization - (depending on $\partial\Omega$) is stable (in the maximal discrete norm) with respect to the initial and boundary conditions. **Proof.** Without loss of generality we will consider , and .
The estimates for the other parts of the boundary are similar. Let $\|u\|:=\max\limits_{i,j}|u_{i,j}|$. Taking into account restrictions and , from and we estimate $$\label{IP} \|\widehat{u}\|\leq \frac{1}{1+r\triangle \tau}\|u\|.$$ Similarly, from , and we again obtain . For homogeneous Neumann boundary conditions we apply the same considerations, and after the time integration procedure we obtain $$\hspace{1.95in} \|u\|\leq \max\{\|g'_0\|,T\max\limits_{\partial \Omega'_2}g'_2\}.\hspace{1.95in} \Box$$ If $g_s\geq 0$, $s=0,1,2$, $g_1\neq 0$, $\partial \Omega_1\not \equiv \emptyset$, , hold, then the numerical solution of the problem - (respectively -), obtained by , associated with Dirichlet boundary conditions and discretization - (depending on $\partial\Omega$) is stable (in the maximal discrete norm) with respect to the initial and boundary conditions. **Proof.** Again we consider , and . As before, at inner points we obtain the estimate . From , and , substituting $\frac{\sigma_1^2}{h_1}\widehat{g}'_{1_{N_1,j}}=\frac{\sigma_1^2}{h_1}\widehat{u}_{\mathring{x}_{1_{N_1,j}}}$, $\left(\frac{\sigma_1^2}{h_1}+\frac{\sigma_1^2}{h_2}\right)\widehat{g}'_{1_{N_1,N_2}}=\frac{\sigma_1^2}{h_1}\widehat{u}_{\mathring{x}_{1_{N_1,N_2}}}+\frac{\sigma_1^2}{h_2}\widehat{u}_{\mathring{x}_{2_{N_1,N_2}}}$ in view of and , we get $$\begin{aligned} &&\|\widehat{u}\|\leq \frac{1}{1+r\triangle \tau}\|u\|+\triangle \tau A_{1}^+\|\widehat{g'}_1\|, \\ &&\|\widehat{u}\|\leq \frac{1}{1+r\triangle \tau}\|u\|+\triangle \tau (A_{1}^++A_{2}^+)\|\widehat{g'}_1\|.\end{aligned}$$ Then, taking into account also the Dirichlet boundary conditions (if any), the time integration procedure in the general case leads to $$\hspace{1.0in} \|u\|\leq \max\{\|g'_0\|+C\max\limits_{\partial \Omega'_1}g'_1, T\max\limits_{\partial \Omega'_2}g'_2\}, \ \ \hbox{where} \ \ C = T(|A_{1}|+|A_{2}|).
\ \ \ \ \ \ \ \ \ \ \ \ \Box$$ Numerical Examples ================== In this section we test the accuracy, convergence rate and positivity preservation of the presented numerical methods for the model problem - (and -). Model parameters are $D_1=0.0487902$, $D_2=0$, $\sigma_1=\sigma_2=0.2$, $r=0.0953102$ [@Top1]. In agreement with we can choose $h=h_1=h_2$ ($N=N_1=N_2$). When we deal with an exact solution (Example 1), the convergence rate in the maximal discrete norm is computed using two consecutive meshes: $$\begin{aligned} CR_\infty=\log_2\frac{E^{N/2}_\infty}{E^N_\infty}, \ \ \ \ E^N_\infty=\max\limits_{{1\leq i,j \leq N_1}}|E_{i,j}^N|, \end{aligned}$$ where $E_{i,j}^N$ is the difference between the exact and the numerical solutions at point $(x_{1_i},x_{2_j},T)$ on a mesh with $N\times N$ grid nodes in space. Alternatively, if the exact solution is not available (Example 2), the convergence rate is computed by the same formula, but now $E_{i,j}^{N}$ is the difference between two *numerical* solutions, computed on meshes with $N$ and $2N$ grid nodes, respectively. In order to avoid division by zero in uniform flow regions, we add $\varepsilon \ll 1$ ($\varepsilon=10^{-30}$) to both the numerator and the denominator of the gradient ratio . **Example 1** () On the right-hand side of the equation we add an appropriate residual function and consider non-homogeneous Neumann boundary conditions on the East, North and South boundaries ($\partial \Omega'_1\equiv \partial \Omega'_E\cup \partial \Omega'_N \cup \partial \Omega'_S$) and Dirichlet boundary conditions on the West boundary ($\partial \Omega'_2\equiv \partial \Omega'_W$) such that $$u(x_1,x_2,\tau)=e^{-\tau/2}\cos(\pi x_1/3)\cos(\pi x_2/3)$$ is the exact solution of the modified problem -. The computations are performed in two domains, $$\overline{\Omega}'^A =[-1,1]\times[-1,1],\ \ \ \ \overline{\Omega}'^B \simeq [-\ln(200),\ln(200)]\times[-\ln(200),\ln(200)],$$ for $T=0.5$ and a time step $\triangle\tau=h^2$, fixed for all time levels.
The results for different values of $\rho_1$, $\rho_2$ in each domain $\overline{\Omega'}^A$ and $\overline{\Omega'}^B$ are given in Table \[t1\]. We observe a second-order convergence rate of the numerical method. [@rccccccccc]{}\ &&&&\ \ &&&&\ \ && $E^N_\infty$ & $CR_\infty$&& $E^N_\infty$ & $CR_\infty$&& $E^N_\infty$ & $CR_\infty$\ \ 21&& 6.48015e-4 & && 1.69489e-2 & && 1.69709e-2 &\ 41&& 1.58029e-4 & 2.0359 && 4.83743e-3 & 1.8089 && 4.84391e-3 & 1.8088\ 81&& 3.83190e-5 & 2.0441 && 1.21792e-3 & 1.9898 && 1.21971e-3 & 1.9896\ 161&& 9.38348e-6 & 2.0299 && 2.86828e-4 & 2.0862 && 2.87575e-4 &2.0845\ 321&& 2.32268e-6 & 2.0143 && 6.84829e-5 & 2.0664 && 6.86652e-5 & 2.0663\ **Example 2** () We solve - (and -) by the presented numerical method for different initial and boundary conditions. All computations are performed in $\overline{\Omega}'^B$ for $\rho_1=-0.2$, $\rho_2=0.6$. For the convergence test we take $\triangle\tau=h^2$ fixed and $T=2$, while the given plots are for different times and time steps, satisfying equality in . We denote by $E$ the exercise price, $w_i$ is the weight of the $i$-th asset, the ’cap’ parameter is used for capped-style options, and BS(Price, Strike, Time) is the Black-Scholes vanilla Put/Call option price. We consider the following test problems: - *European exchange option with pay-off:* $P(S_1,S_2)=\max\{0,S_2-S_1\}$. We use the pay-off function as the source for the Dirichlet condition [@KN]. Namely, $\partial \Omega'_1\equiv \emptyset$ and $g_2(S_1,S_2,t)=P(S_1,S_2)$. - *Worst-of two-asset Call option with barrier* [@ZVF]. Now $P(S_1,S_2)=\max\{0,\min\{S_1,S_2\}-E\}$ and $\partial \Omega'_1\equiv \emptyset$, $g_2(S_1,S_2,t)=P(S_1,S_2)$. - *Capped Put on a basket of two equities* [@Top0; @Top1].
The initial function is $g_0=\min\{\textnormal{cap},\max\{0,E-w_1S_1-w_2S_2\}\}$, and the boundary conditions ($\partial \Omega'_1\equiv \emptyset$) are $$g_2=\left\{\begin{array}{ll} 0 & \hbox{on} \ \ \partial\Omega_N\cup\partial\Omega_E,\\ BS(S_1,\frac{E}{w_1},t)-BS(S_1,\textnormal{cap},t)& \hbox{on} \ \ \partial\Omega_S,\\[0.035in] BS(S_2,\frac{E}{w_2},t)-BS(S_2,\textnormal{cap},t)& \hbox{on} \ \ \partial\Omega_W. \end{array}\right.$$ The boundary conditions at $\partial \Omega_W$ and $\partial \Omega_S$ represent the prices of capped European options with strike prices $E/w_2$ and $E/w_1$, respectively [@Top1]. - *Two-asset barrier options* [@Hn; @Top1]. We consider $\partial \Omega'_1\equiv \partial \Omega'_E\cup \partial \Omega'_N \cup \partial \Omega'_S$, $\partial \Omega'_2\equiv \partial \Omega'_W$, $g_0=\max\{0, w_1S_1-E\}$, $g_2=0$, $g_1=0$ on $\Omega_S\cup \Omega_N$, $g_1=1$ on $\Omega_E$. - *Capped Call on a Basket of two equities* [@Top0; @Top1]. In this case $\partial \Omega'_1\equiv \partial \Omega'_E\cup \partial \Omega'_N $, $\partial \Omega'_2\equiv \partial \Omega'_W\cup \partial \Omega'_S$, $g_0=\min\{\textnormal{cap},\max\{0,w_1S_1+w_2S_2-E\}\}$, $g_1=0$ on $\Omega_N\cup \Omega_E$ and $$g_2=\left\{\begin{array}{ll} BS(S_1,\textnormal{cap},t)-BS(S_1,\frac{E}{w_1},t)& \hbox{on} \ \ \partial\Omega_S,\\[0.035in] BS(S_2,\textnormal{cap},t)-BS(S_2,\frac{E}{w_2},t)& \hbox{on} \ \ \partial\Omega_W. \end{array}\right.$$ In Table \[t2\] we give the convergence rate ($CR_\infty$), computed on three consecutive meshes, for each test problem, with $E=100$, $w_1=w_2=1$, cap $=10$. [@ccccccc]{}\ && & & & &\ \ 21-41-81 && 1.4458 & 1.3809 & 0.7447 & 1.1625 & 0.7443\ 41-81-161 && 1.8038 & 1.5757 & 1.4963 & 1.4525 & 1.4732\ 81-161-321 && 2.0477 & 1.7639 & 1.8234 & 1.8884 & 1.8022\ We observe that the order of convergence is very close to 2 for all test problems.
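For reference, the tabulated rates can be reproduced directly from the error formula $CR_\infty=\log_2\bigl(E^{N/2}_\infty/E^N_\infty\bigr)$; the short sketch below applies it to the first error column of Table \[t1\] (values copied from the table, so the computed rates agree with the tabulated ones only up to the rounding of the printed errors).

```python
import math

def rate(e_coarse, e_fine):
    # CR_inf = log2(E^{N/2}_inf / E^N_inf) on two consecutive meshes
    return math.log2(e_coarse / e_fine)

# errors E^N_inf for N = 21, 41, 81, 161, 321 (first column of Table 1)
E = [6.48015e-4, 1.58029e-4, 3.83190e-5, 9.38348e-6, 2.32268e-6]
rates = [rate(E[k], E[k + 1]) for k in range(len(E) - 1)]
# agrees with the tabulated 2.0359, 2.0441, 2.0299, 2.0143 to about three decimals
print([round(x, 4) for x in rates])
```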
Conclusions {#conclusions .unnumbered} =========== In this paper we develop a second-order in space implicit-explicit finite difference method, based on the van Leer flux-limiter technique, for the worst-case pricing model in financial mathematics. Under mild time and space step restrictions the proposed method is stable (with respect to the initial and boundary conditions) and preserves the non-negativity of the numerical solution. Van Leer’s flux-limiter technique is implemented appropriately also for non-homogeneous Neumann boundary conditions, ensuring a second-order convergence rate and making it possible to guarantee the positivity-preserving property of the numerical solution. Various numerical examples confirm the theoretical statements and illustrate the second-order convergence in the space variables. The important question of finding the interface curve (in the one-dimensional case) or surface (in the two-dimensional case) where the sign of $\Gamma_{cross}$ changes, and on this basis constructing numerical methods for the corresponding linear problems on both sides of the interface, will be the main subject of our next work. Acknowledgement {#acknowledgement .unnumbered} =============== This research was supported by the European Union under Grant Agreement number 304617 (FP7 Marie Curie Action Project Multi-ITN STRIKE - Novel Methods in Computational Finance) and the Bulgarian National Fund of Science under Project DID 02/37-2009. [20]{} M. Avellaneda, A. Levy, A. Parás, Pricing and hedging derivative securities in markets with uncertain volatilities, Appl. Math. Fin. 2 (1995) 73–88. F. Black, M. Scholes, The pricing of options and corporate liabilities, J. Pol. Econ. 81 (1973) 637–659. R. Company, L. Jódar, M. Fakharany, M.-C. Casabán, Removing the Correlation Term in Option Pricing Heston Model: Numerical Analysis and Computing, Abstract and Applied Analysis 2013 (2013), Article ID 246724, 11 pages. Ehrhardt, M.
(Ed) [Nonlinear Models in Mathematical Finance: New Research Trends in Option Pricing]{}, Nova Science Publishers, N.Y. (2008). L. C. Evans, Partial Differential Equations, 2nd edition, American Mathematical Society, 2010. A. Gerisch, D.F. Griffiths, R. Weiner, and M.A.J. Chaplain, A positive splitting method for mixed hyperbolic–parabolic systems, Num. Meth. for PDEs 17(2) (2001), 152–168. Z. Horváth, Positivity of Runge-Kutta and diagonally split Runge-Kutta methods, Appl. Numer. Math. 28 (1998), 309–326. E. G. Haug, The Complete Guide to Option Pricing Formulas, New York, 1997. W. Hundsdorfer, Numerical Solution of Advection-Diffusion-Reaction Equations, Lecture Notes, Thomas Stieltjes Institute, CWI Amsterdam, 2000. W. Hundsdorfer, J. Verwer, [Numerical Solution of Time-Dependent Advection-Diffusion-Reaction Equations]{}, Springer Series in Computational Mathematics [33]{}, Springer-Verlag, Berlin, Heidelberg, New York, 2003. R. Kangro, R. Nicolaides, Far field boundary conditions for Black-Scholes equations, SIAM J. Numer. Anal. 38(4) (2000) 1357–1368. M. Koleva, Positivity preserving numerical method for non-linear Black-Scholes models, Lect. Notes Comp. Sci. 8236 (2013) 363–370. D. Kuzmin, S. Turek, High-resolution FEM-TVD schemes based on a fully multidimensional flux limiter, [J. Comp. Phys]{}. [198]{}(1) (2004), 131–158. B. van Leer, Towards the ultimate conservative difference scheme II. Monotonicity and conservation combined in a second order scheme, [J. Comput. Phys.]{} [14]{} (1974), 361–370. R.J. LeVeque, Numerical Methods for Conservation Laws, Birkhäuser, 1992. Jun Ma, A stochastic correlation model with mean reversion for pricing multi-asset options, Asia-Pacific Finan. Markets 16 (2009) 97–109. D.M. Pooley, P.A. Forsyth, K.R. Vetzal, Numerical convergence properties of option pricing PDEs with uncertain volatility, IMA J. Numer. Anal. 23 (2003) 241–267. I.V. Rybak, Monotone and conservative difference scheme for elliptic equations with mixed derivatives, Math.
Model. and Anal. 9(2) (2004) 169–178. A. A. Samarskii, [The Theory of Difference Schemes]{}, Marcel Dekker Inc., 2001. A. Samarskii, V. Mazhukin, P. Matus and G. Shishkin, Monotone difference schemes for equations with mixed derivatives, Mathematical Modeling 13(2) (2001) 17–26. J. Topper, Finite element modeling of exotic options, Discussion paper 216, Universität Hannover, 1998. J. Topper, Worst case pricing of rainbow options, Discussion paper 217, Fachbereich Wirtschaftswissenschaften, Universität Hannover, October 2001 (ISSN 0949-9962). J. Topper, Uncertain parameters and reverse convertibles, Risk 14 (2001) 1–14. J. Topper, Financial Engineering with Finite Elements, Chapter 10, p. 248, Wiley, 2005, 360p. D. Tavella, C. Randall, *Pricing Financial Instruments*, Wiley, New York, 2000. R.S. Varga, Matrix Iterative Analysis, Springer-Verlag, Berlin, Heidelberg, 2000 (Second Revised and Expanded Edition). P. Wilmott, Derivatives: The Theory and Practice of Financial Engineering, Chapter 27, pages 383–393, Wiley, 1998. H. Windcliff, J. Wang, P.A. Forsyth, K.R. Vetzal, Hedging with a Correlated Asset: Solution of a Nonlinear Pricing PDE, J. of Comp. and Appl. Math. 200 (2007) 86–115. R. Zvan, K.R. Vetzal, P.A. Forsyth, PDE methods for pricing barrier options, J. of Economic Dynamics $\&$ Control 24 (2000) 1563–1590.
--- abstract: 'An abstract chordal metric is defined on linear control systems described by their transfer functions. Analogous to a previous result due to Jonathan Partington [@Par] for $H^\infty$, it is shown that strong stabilizability is a robust property in this metric.' address: 'Department of Mathematics, London School of Economics, Houghton Street, London WC2A 2AE, United Kingdom.' author: - Amol Sasane title: A generalized chordal metric making strong stabilizability a robust property --- Introduction ============ The aim of this note is to give an extension of a result due to Jonathan Partington (recalled below in Proposition \[theorem\_partington\]) saying that [*strong*]{} stabilizability is a robust property of the plant in the chordal metric. The basic and almost unique ingredient in the proof of this fact is a result proved by Partington in [@Par92 Lemma 2.1, p.84] (which we have restated in Lemma \[lemma\_partington\]). The only new point is that we prove that the analogous result holds in an abstract setting, hence expanding the domain of applicability from the original setting of unstable plants over $H^\infty$ to ones over arbitrary rings of stable transfer functions satisfying mild assumptions. (Here, as is usual in the control engineering literature, $H^\infty$ denotes the Hardy algebra of bounded holomorphic functions defined in the complex open right half plane $\{s\in {\mathbb{C}}: \textrm{Re}(s)>0\}$.) We recall the general [*stabilization problem*]{} in control theory. Suppose that $R$ is an integral domain with identity (thought of as the class of stable transfer functions) and let ${\mathbb{F}}(R)$ denote the field of fractions of $R$. 
Then the stabilization problem is: - Given ${\mathbf{p}}\in {\mathbb{F}}(R) $ (an unstable plant transfer function), - find ${\mathbf{c}}\in {\mathbb{F}}(R)$ (a stabilizing controller transfer function), - such that (the closed loop transfer function) $$H({\mathbf{p}},{\mathbf{c}}):= \left[\begin{array}{cc} {\displaystyle \frac{{\mathbf{p}}}{1-{\mathbf{p}}{\mathbf{c}}} }_{\phantom{p}} & \displaystyle\frac{{\mathbf{p}}{\mathbf{c}}}{1-{\mathbf{p}}{\mathbf{c}}} \\ \displaystyle\frac{{\mathbf{p}}{\mathbf{c}}}{1-{\mathbf{p}}{\mathbf{c}}} & \displaystyle\frac{{\mathbf{c}}}{1-{\mathbf{p}}{\mathbf{c}}} \end{array}\right]$$ - belongs to $R^{2\times 2}$ (that is, it is stable). The demand above that $H({\mathbf{p}},{\mathbf{c}})\in R^{2\times 2}$ guarantees that the “closed loop” transfer function of the signal map $$\left[ \begin{array}{cc} u_1\\ u_2 \end{array}\right] \mapsto \left[ \begin{array}{cc} y_1 \\ y_2 \end{array}\right],$$ in the interconnection of ${\mathbf{p}}$ and ${\mathbf{c}}$ as shown in Figure \[figure\_feedback\_config\], is stable. (So after the interconnection, “nice” signals are indeed mapped to nice signals.) \[c\]\[c\][${\mathbf{p}}$]{} \[c\]\[c\][${\mathbf{c}}$]{} \[c\]\[c\][$u_1$]{} \[c\]\[c\][$u_2$]{} \[c\]\[c\][$y_1$]{} \[c\]\[c\][$y_2$]{} ![Feedback connection of the plant ${\mathbf{p}}$ with the controller ${\mathbf{c}}$.[]{data-label="figure_feedback_config"}](feedbackconfig.eps "fig:"){width="5.4"} A stronger version of the problem is when we require a [*stable*]{} controller ${\mathbf{c}}\in R$ which stabilizes ${\mathbf{p}}$. If such a ${\mathbf{c}}$ exists, then we say that ${\mathbf{p}}$ is [*strongly stabilizable*]{}. In the [*robust stabilization problem*]{}, one goes a step further than the stabilization problem. 
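Before turning to robustness, the stabilization condition above can be illustrated by a small computation (a toy example, not taken from the paper): over $R=H^\infty$, the hypothetical unstable plant ${\mathbf{p}}(s)=1/(s-1)$ is stabilized by the constant, hence stable, controller ${\mathbf{c}}=-2$, so this ${\mathbf{p}}$ is strongly stabilizable. For rational functions, membership in $H^\infty$ is just properness plus all poles in the open left half plane, which is easy to check symbolically:

```python
import sympy as sp

s = sp.symbols('s')
p = 1 / (s - 1)        # toy unstable plant: pole at s = 1 lies in the right half plane
c = sp.Integer(-2)     # a *stable* (constant) controller

def in_Hinf(f):
    # a rational f is in H^infty iff it is proper and all its poles satisfy Re(s) < 0
    num, den = sp.fraction(sp.cancel(f))
    proper = sp.degree(num, s) <= sp.degree(den, s)
    return proper and all(sp.re(q) < 0 for q in sp.solve(den, s))

# the distinct entries of the closed-loop transfer matrix H(p, c)
entries = [p / (1 - p * c), p * c / (1 - p * c), c / (1 - p * c)]
assert all(in_Hinf(h) for h in entries)   # H(p, c) is stable, so c stabilizes p
print([sp.cancel(h) for h in entries])
```

Here the first entry simplifies to $1/(s+1)$: the single unstable pole at $s=1$ has been moved to $s=-1$ by the feedback.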
One knows that the plant is just an approximation of reality, and so one would really like the controller ${\mathbf{c}}$ not only to stabilize the [*nominal*]{} plant ${\mathbf{p}}_0$, but also all plants ${\mathbf{p}}$ sufficiently close to ${\mathbf{p}}_0$. The question of what one means by “closeness” of plants thus arises naturally. So one needs a function $d$ defined on pairs of stabilizable plants such that 1. $d$ is a metric on the set of all stabilizable plants, 2. $d$ is amenable to computation, and 3. stabilizability is a robust property of the plant with respect to $d$. There are various known metrics which do the job, notably the gap metric ([@ZamElS]), the graph metric ([@Vid]) and the Vinnicombe $\nu$-metric (see [@Vin] for the rational transfer function case and [@BalSas], [@Sas] for its recent extension to nonrational transfer functions). This last metric is in some sense the “best” one, as it is comparatively easy to compute and admits some sharp robustness results. The Vinnicombe metric itself arose from a very natural idea of defining a metric between meromorphic functions in the complex right half plane, namely the pointwise chordal metric, defined below. This metric has been studied by function theorists (see for example [@Hay]), since it is a natural analogue of the $H^\infty$ distance between bounded analytic functions, and it can be used for functions with poles in a disk. The use of the chordal metric to study robustness of stabilizability was made by Ahmed El-Sakkary in [@Sak].
If ${\mathbf{p}}_1,{\mathbf{p}}_2$ are two meromorphic functions in the open right half plane, then the [*chordal distance*]{} $\kappa$ between ${\mathbf{p}}_1,{\mathbf{p}}_2$ is $$\kappa({\mathbf{p}}_1,{\mathbf{p}}_2):=\sup_{\substack{s\in {\mathbb{C}};\; \textrm{Re}(s)>0;\\ \textrm{either } {\mathbf{p}}_1(s)\neq \infty\textrm{ or } {\mathbf{p}}_2(s)\neq \infty}} \frac{|{\mathbf{p}}_1(s)-{\mathbf{p}}_2(s)|}{\sqrt{1+|{\mathbf{p}}_1(s)|^2} \sqrt{1+|{\mathbf{p}}_2(s)|^2}}.$$ This metric has the interpretation that it is the supremum of the pointwise Euclidean distance between the points ${\mathbf{p}}_1(s)$ and ${\mathbf{p}}_2(s)$ on the Riemann sphere. Recall that the stereographic projection allows the identification of the extended complex plane ${\mathbb{C}}\cup \{\infty\}$ with the unit sphere $\mathbf{S}$ of diameter $1$ in ${\mathbb{R}}^3$, where the point $z=0$ in the complex plane corresponds to the south pole $S$ of the sphere $\mathbf{S}$ and the point $z=\infty$ corresponds to the north pole $N$ of $\mathbf{S}$. Points $P_{\mathbb{C}}$ in the complex plane can be identified with a corresponding point $P_{\mathbf{S}}$ on the sphere $\mathbf{S}$, namely the one in $\mathbf{S}$ which lies on the straight line joining $P_{\mathbb{C}}$ and $N$. See Figure \[riemann\]. \[c\]\[c\][${\scriptscriptstyle P_{\scriptscriptstyle\mathbf{S}}}$]{} \[c\]\[c\][${\scriptscriptstyle P_{\scriptscriptstyle{\mathbb{C}}}\equiv z}$]{} \[c\]\[c\][${\scriptscriptstyle \textrm{S}\equiv 0}$]{} \[c\]\[c\][${\scriptscriptstyle \textrm{N}\equiv \infty}$]{} \[c\]\[c\][${\mathbb{C}}$]{} \[c\]\[c\][${\mathbf{S}}$]{} \[c\]\[c\][${\scriptscriptstyle \frac{1}{2}}$]{} ![The Riemann sphere with diameter $1$ and centre at $\left(0,0,\displaystyle \frac{1}{2}\right)$.](riemann.eps "fig:"){width="9"} \[riemann\] The following result was shown by Jonathan Partington (see [@Par92 Theorem 2.2, p.84] or [@Par Theorem 4.3.4, p.83]). 
\[theorem\_partington\] Let ${\mathbf{p}}_0, {\mathbf{p}}\in {\mathbb{F}}(H^\infty)$, and let ${\mathbf{c}}\in H^\infty$ be such that $ {\mathbf{g}}_0:= \displaystyle \frac{{\mathbf{p}}_0}{1-{\mathbf{c}}{\mathbf{p}}_0} \in H^\infty. $ Set $k:= \|{\mathbf{c}}\|_\infty$ and $g=\|{\mathbf{g}}_0\|_\infty$. If $$\kappa({\mathbf{p}},{\mathbf{p}}_0)< \displaystyle \frac{1}{3} \min\left\{1, \;\;\frac{1}{g}, \;\;\frac{1}{k(1+kg)} \right\},$$ then ${\mathbf{p}}$ is also stabilized by ${\mathbf{c}}$. This follows from the following key estimate, which gives a lower bound on the chordal distance; see [@Par92 Lemma 2.1, p.84] or [@Par Lemma 4.3.3]. \[lemma\_partington\] If $z_1,z_2\in {\mathbb{C}}$ and $0<a<1$, then $$\kappa(z_1,z_2) :=\frac{|z_1-z_2|}{\sqrt{1+|z_1|^2} \sqrt{1+|z_2|^2}}\geq \min \left\{ \frac{a^2}{1+a^2} |z_1-z_2|, \;\;\frac{a^2}{1+a^2} \left|\frac{1}{z_1}-\frac{1}{z_2} \right|,\;\; \frac{1-a^2}{1+a^2}\right\}.$$ Abstract set-up and main result ------------------------------- Our main result is given in Theorem \[main\_theorem\] below. We will assume throughout the following: - $R$ is a commutative ring without zero divisors and with identity. - $S$ is a complex, commutative, unital, semisimple Banach algebra. - $R\subset S$, that is, there is an injective ring homomorphism $\iota: R\rightarrow S$. - $R$ is a full subring of $S$, that is, if ${\mathbf{x}}\in R$ and $\iota ({\mathbf{x}})$ is invertible in $S$, then ${\mathbf{x}}$ is invertible in $R$. (A3) allows identification of elements of $R$ with elements of $S$. So in the sequel, if ${\mathbf{x}}$ is an element of $R$, we will simply write ${\mathbf{x}}$ (an element of $S$!) instead of $\iota({\mathbf{x}})$. We will denote by ${\mathbb{F}}(R)$ the field of fractions over $R$.
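The pointwise chordal distance and the lower bound of Lemma \[lemma\_partington\] lend themselves to a quick numerical sanity check (illustrative only, not part of any proof). In the sketch below, `kappa` is the pointwise chordal distance and `lower_bound` is the right-hand side of the lemma, evaluated at randomly sampled points, which are nonzero with probability one:

```python
import math, random

def kappa(z1, z2):
    # pointwise chordal distance: Euclidean distance between the stereographic
    # images of z1 and z2 on the Riemann sphere of diameter 1
    return abs(z1 - z2) / (math.sqrt(1 + abs(z1)**2) * math.sqrt(1 + abs(z2)**2))

def lower_bound(z1, z2, a):
    # right-hand side of the lemma, for 0 < a < 1 and z1, z2 nonzero
    c = a * a / (1 + a * a)
    return min(c * abs(z1 - z2), c * abs(1 / z1 - 1 / z2), (1 - a * a) / (1 + a * a))

assert abs(kappa(0, 1) - 1 / math.sqrt(2)) < 1e-12   # chord joining 0 and 1 on the sphere

random.seed(0)
for _ in range(10000):
    z1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    a = random.uniform(0.05, 0.95)
    # the lemma's estimate kappa(z1, z2) >= min{...}
    assert kappa(z1, z2) >= lower_bound(z1, z2, a) - 1e-12
```

Since the three cases in the proof of the lemma are collectively exhaustive, the estimate holds for every admissible pair, which is what the random sampling illustrates.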
An element ${\mathbf{p}}\in {\mathbb{F}}(R)$ is said to have a [*coprime factorization over $R$*]{} if $${\mathbf{p}}=\displaystyle \frac{{\mathbf{n}}}{{\mathbf{d}}},$$ where ${\mathbf{n}},{\mathbf{d}}\in R$, ${\mathbf{d}}\neq 0$ and there exist ${\mathbf{x}},{\mathbf{y}}\in R$ such that ${\mathbf{n}}{\mathbf{x}}+{\mathbf{d}}{\mathbf{y}}=1$. We define the subset of [*coprime factorizable plants over*]{} $R$ to be the set $${\mathbb{S}}(R):=\{{\mathbf{p}}\in {\mathbb{F}}(R): {\mathbf{p}}\textrm{ has a coprime factorization}\}.$$ The maximal ideal space of $S$ is denoted by $M(S)$. If ${\mathbf{x}}\in S$, then we denote by $\widehat{{\mathbf{x}}}$ the Gelfand transform of ${\mathbf{x}}$. Also, we set $$\|{\mathbf{x}}\|_\infty:=\max_{\varphi \in M(S)}|\widehat{{\mathbf{x}}}(\varphi)|.$$ If ${\mathbf{p}}_1,{\mathbf{p}}_2 \in {\mathbb{S}}(R)$, then the [*chordal distance*]{} $\kappa$ between ${\mathbf{p}}_1,{\mathbf{p}}_2$, which have coprime factorizations $${\mathbf{p}}_1=\displaystyle \frac{{\mathbf{n}}_1}{{\mathbf{d}}_1}\textrm{ and }{\mathbf{p}}_2=\displaystyle \frac{{\mathbf{n}}_2}{{\mathbf{d}}_2},$$ is $$\kappa({\mathbf{p}}_1,{\mathbf{p}}_2):=\sup_{\varphi \in M(S)} \frac{|\widehat{{\mathbf{n}}_1}(\varphi)\widehat{{\mathbf{d}}_2}(\varphi)-\widehat{{\mathbf{n}}_2}(\varphi)\widehat{{\mathbf{d}}_1}(\varphi)|}{ \sqrt{|\widehat{{\mathbf{n}}_1}(\varphi)|^2+|\widehat{{\mathbf{d}}_1}(\varphi)|^2} \sqrt{|\widehat{{\mathbf{n}}_2}(\varphi)|^2+|\widehat{{\mathbf{d}}_2}(\varphi)|^2}}.$$ The function $\kappa$ given by the above expression is well-defined. Indeed, if $${\mathbf{p}}_1=\frac{{\mathbf{n}}_1}{{\mathbf{d}}_1}=\frac{\widetilde{{\mathbf{n}}}_1}{\widetilde{{\mathbf{d}}}_1},$$ then ${\mathbf{n}}_1 \widetilde{{\mathbf{d}}}_1=\widetilde{{\mathbf{n}}}_1{\mathbf{d}}_1$, and so, for each $\varphi \in M(S)$, we have $ \widehat{{\mathbf{n}}_1}(\varphi) \widehat{\widetilde{{\mathbf{d}}}_1}(\varphi)=\widehat{\widetilde{{\mathbf{n}}}_1}(\varphi)\widehat{{\mathbf{d}}_1}(\varphi). 
$ Using this one can see that $$\frac{|\widehat{{\mathbf{n}}_1}(\varphi)\widehat{{\mathbf{d}}_2}(\varphi)-\widehat{{\mathbf{n}}_2}(\varphi)\widehat{{\mathbf{d}}_1}(\varphi)|}{ \sqrt{|\widehat{{\mathbf{n}}_1}(\varphi)|^2+|\widehat{{\mathbf{d}}_1}(\varphi)|^2} } = \frac{|\widehat{\widetilde{{\mathbf{n}}}_1}(\varphi)\widehat{{\mathbf{d}}_2}(\varphi)-\widehat{{\mathbf{n}}_2}(\varphi)\widehat{\widetilde{{\mathbf{d}}}_1}(\varphi)|}{ \sqrt{|\widehat{\widetilde{{\mathbf{n}}}_1}(\varphi)|^2+|\widehat{\widetilde{{\mathbf{d}}}_1}(\varphi)|^2} },$$ and so it follows that the expression in the definition of $\kappa$ is independent of any particular choice of a coprime factorization of either plant. We have the following result. \[main\_proposition\] $\kappa$ is a metric on ${\mathbb{S}}(R)$. The proof is straightforward, but we give the details as they elucidate the use of the basic assumptions in our abstract setting. (D1) If ${\mathbf{p}}_1,{\mathbf{p}}_2 \in {\mathbb{S}}(R)$, then it is clear from the expression for $\kappa({\mathbf{p}}_1,{\mathbf{p}}_2)$ that it is nonnegative. Furthermore, $\kappa({\mathbf{p}},{\mathbf{p}})=0$ for any ${\mathbf{p}}\in {\mathbb{S}}(R)$. Finally, if ${\mathbf{p}}_1, {\mathbf{p}}_2\in {\mathbb{S}}(R)$ are such that $\kappa({\mathbf{p}}_1,{\mathbf{p}}_2)=0$, then we must have, with ${\mathbf{p}}_1,{\mathbf{p}}_2$ having coprime factorizations $${\mathbf{p}}_1=\displaystyle \frac{{\mathbf{n}}_1}{{\mathbf{d}}_1}\textrm{ and } {\mathbf{p}}_2=\displaystyle \frac{{\mathbf{n}}_2}{{\mathbf{d}}_2},$$ that for all $\varphi \in M(S)$ that $ \widehat{{\mathbf{n}}_1}(\varphi)\widehat{{\mathbf{d}}_2}(\varphi)-\widehat{{\mathbf{n}}_2}(\varphi)\widehat{{\mathbf{d}}_1}(\varphi)=0, $ and by (A3) and the semisimplicity of the Banach algebra (A2), we obtain ${\mathbf{n}}_1{\mathbf{d}}_2={\mathbf{n}}_2{\mathbf{d}}_1$, that is, ${\mathbf{p}}_1={\mathbf{p}}_2$. 
(D2) If ${\mathbf{p}}_1,{\mathbf{p}}_2 \in {\mathbb{S}}(R)$, then it is clear from the expression for $\kappa$ that $\kappa({\mathbf{p}}_1,{\mathbf{p}}_2)=\kappa({\mathbf{p}}_2,{\mathbf{p}}_1)$. (D3) Let ${\mathbf{p}}_1, {\mathbf{p}}_2, {\mathbf{p}}_3\in {\mathbb{S}}(R)$ have coprime factorizations $${\mathbf{p}}_1=\displaystyle \frac{{\mathbf{n}}_1}{{\mathbf{d}}_1}, \quad {\mathbf{p}}_2=\displaystyle \frac{{\mathbf{n}}_2}{{\mathbf{d}}_2}, \quad {\mathbf{p}}_3=\displaystyle \frac{{\mathbf{n}}_3}{{\mathbf{d}}_3}.$$ Since the usual Euclidean distance in ${\mathbb{R}}^3$ satisfies the triangle inequality, it follows that $$\begin{aligned} \frac{|\widehat{{\mathbf{n}}_1}(\varphi)\widehat{{\mathbf{d}}_2}(\varphi)-\widehat{{\mathbf{n}}_2}(\varphi)\widehat{{\mathbf{d}}_1}(\varphi)|}{ \sqrt{|\widehat{{\mathbf{n}}_1}(\varphi)|^2+|\widehat{{\mathbf{d}}_1}(\varphi)|^2} \sqrt{|\widehat{{\mathbf{n}}_2}(\varphi)|^2+|\widehat{{\mathbf{d}}_2}(\varphi)|^2}} &\leq& \frac{|\widehat{{\mathbf{n}}_1}(\varphi)\widehat{{\mathbf{d}}_3}(\varphi)-\widehat{{\mathbf{n}}_3}(\varphi)\widehat{{\mathbf{d}}_1}(\varphi)|}{ \sqrt{|\widehat{{\mathbf{n}}_1}(\varphi)|^2+|\widehat{{\mathbf{d}}_1}(\varphi)|^2} \sqrt{|\widehat{{\mathbf{n}}_3}(\varphi)|^2+|\widehat{{\mathbf{d}}_3}(\varphi)|^2}} \\ &&+ \frac{|\widehat{{\mathbf{n}}_3}(\varphi)\widehat{{\mathbf{d}}_2}(\varphi)-\widehat{{\mathbf{n}}_2}(\varphi)\widehat{{\mathbf{d}}_3}(\varphi)|}{ \sqrt{|\widehat{{\mathbf{n}}_3}(\varphi)|^2+|\widehat{{\mathbf{d}}_3}(\varphi)|^2} \sqrt{|\widehat{{\mathbf{n}}_2}(\varphi)|^2+|\widehat{{\mathbf{d}}_2}(\varphi)|^2}}\end{aligned}$$ Consequently, $\kappa({\mathbf{p}}_1, {\mathbf{p}}_2) \leq \kappa({\mathbf{p}}_1, {\mathbf{p}}_3)+\kappa({\mathbf{p}}_3, {\mathbf{p}}_2)$. This completes the proof. Our main result is the following, which we will prove in the next section.
\[main\_theorem\] Suppose that ${\mathbf{p}}_0, {\mathbf{p}}\in {\mathbb{S}}(R)$ and ${\mathbf{c}}\in R$ is such that $ {\mathbf{g}}_0:= \displaystyle \frac{{\mathbf{p}}_0}{1-{\mathbf{c}}{\mathbf{p}}_0} \in R. $ Set $k:= \|{\mathbf{c}}\|_\infty$ and $g=\|{\mathbf{g}}_0\|_\infty$. If $$\kappa({\mathbf{p}},{\mathbf{p}}_0)< \displaystyle \frac{1}{3} \min\left\{1,\;\; \frac{1}{g}, \;\;\frac{1}{k(1+kg)} \right\},$$ then ${\mathbf{p}}$ is also stabilized by ${\mathbf{c}}$. Proof of the main result ======================== Lemma \[lemma\_partington\] plays a key role in the proof of Theorem \[main\_theorem\], and so we include its short proof (taken from [@Par92 Lemma 2.1, p.84]) here. Consider the three possible cases, which are collectively exhaustive: - $|z_1| \leq \displaystyle \frac{1}{a}$ and $|z_2|\leq \displaystyle\frac{1}{a}$. Then $\kappa(z_1,z_2) \geq \displaystyle\frac{a^2}{1+a^2} |z_1-z_2|$. - $|z_1| \geq a$ and $|z_2|\geq a$. Then $\displaystyle\frac{1}{|z_1|}\leq \displaystyle\frac{1}{a}$ and $ \displaystyle\frac{1}{|z_2|}\leq \displaystyle\frac{1}{a}$. As $\kappa(z_1,z_2)=\kappa\left( \displaystyle\frac{1}{z_1}, \displaystyle\frac{1}{z_2}\right)$, it follows from $\underline{1}^\circ$ above that $\kappa(z_1,z_2) \geq \displaystyle\frac{a^2}{1+a^2} \left|\frac{1}{z_1}-\frac{1}{z_2}\right|$. - $|z_1| \leq a$ and $|z_2|\geq \displaystyle\frac{1}{a}$, or vice versa. Since the distance between the spherical caps on the Riemann sphere corresponding to the regions $\{z\in {\mathbb{C}}: |z|\leq a\}$ and $\left\{z\in {\mathbb{C}}: |z| \geq \displaystyle\frac{1}{a}\right\}$ is $\kappa\left(a,\displaystyle\frac{1}{a}\right)= \displaystyle\frac{1-a^2}{1+a^2}$, it follows that $\kappa(z_1,z_2)\geq \displaystyle\frac{1-a^2}{1+a^2}$. This completes the proof. Let ${\mathbf{p}}_0=\displaystyle \frac{{\mathbf{n}}_0}{{\mathbf{d}}_0}$ and ${\mathbf{p}}=\displaystyle \frac{{\mathbf{n}}}{{\mathbf{d}}}$ be coprime factorizations of ${\mathbf{p}}_0$ and ${\mathbf{p}}$. 
Since ${\mathbf{c}}$ stabilizes ${\mathbf{p}}_0$, it follows in particular that $$\frac{1}{1-{\mathbf{p}}_0 {\mathbf{c}}}=\frac{{\mathbf{d}}_0}{{\mathbf{d}}_0-{\mathbf{n}}_0 {\mathbf{c}}} \in R \textrm{ and } \frac{{\mathbf{p}}_0}{1-{\mathbf{p}}_0 {\mathbf{c}}}=\frac{{\mathbf{n}}_0}{{\mathbf{d}}_0-{\mathbf{n}}_0 {\mathbf{c}}} \in R.$$ Moreover, since $({\mathbf{n}}_0,{\mathbf{d}}_0)$ are coprime in $R$, there exist ${\mathbf{x}},{\mathbf{y}}\in R$ such that $ {\mathbf{n}}_0\cdot {\mathbf{x}}+{\mathbf{d}}_0\cdot {\mathbf{y}}=1. $ Hence it follows that $$\frac{1}{{\mathbf{d}}_0-{\mathbf{n}}_0 {\mathbf{c}}}=\frac{{\mathbf{n}}_0\cdot {\mathbf{x}}+{\mathbf{d}}_0\cdot {\mathbf{y}}}{{\mathbf{d}}_0-{\mathbf{n}}_0 {\mathbf{c}}} =\frac{{\mathbf{p}}_0}{1-{\mathbf{p}}_0 {\mathbf{c}}} \cdot {\mathbf{x}}+\frac{1}{1-{\mathbf{p}}_0 {\mathbf{c}}}\cdot {\mathbf{y}}\in R.$$ So ${\mathbf{d}}_0-{\mathbf{n}}_0 {\mathbf{c}}$ is invertible as an element of $R$. In particular, it is also invertible as an element of $S$, and so $$\label{eq_0} \textrm{for all } \varphi \in M(S), \;\;\widehat{{\mathbf{d}}_0}(\varphi)-\widehat{{\mathbf{n}}_0}(\varphi) \widehat{{\mathbf{c}}}(\varphi) \neq 0.$$ Suppose first that ${\mathbf{d}}-{\mathbf{n}}{\mathbf{c}}$ is invertible as an element of $R$. Then $$\begin{aligned} \frac{1}{1-{\mathbf{p}}{\mathbf{c}}}={\mathbf{d}}\cdot ({\mathbf{d}}-{\mathbf{n}}{\mathbf{c}})^{-1} \in R, && \frac{{\mathbf{p}}}{1-{\mathbf{p}}{\mathbf{c}}}={\mathbf{n}}\cdot ({\mathbf{d}}-{\mathbf{n}}{\mathbf{c}})^{-1} \in R, \\ \frac{{\mathbf{c}}}{1-{\mathbf{p}}{\mathbf{c}}}={\mathbf{c}}\cdot {\mathbf{d}}\cdot ({\mathbf{d}}-{\mathbf{n}}{\mathbf{c}})^{-1} \in R, && \frac{{\mathbf{p}}{\mathbf{c}}}{1-{\mathbf{p}}{\mathbf{c}}}=-1+{\mathbf{d}}\cdot ({\mathbf{d}}-{\mathbf{n}}{\mathbf{c}})^{-1} \in R, \end{aligned}$$ and so $H({\mathbf{p}}, {\mathbf{c}}) \in R^{2\times 2}$, showing that ${\mathbf{p}}$ is also stabilized by ${\mathbf{c}}$, and we are done. 
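As a quick illustration of this reduction (ours, not part of the paper's argument), the four closed-loop formulas can be spot-checked numerically on the concrete data of the example section below, where ${\mathbf{p}}_0=z_1z_2/(z_1^2z_2^2-1)$ and ${\mathbf{c}}=z_1z_2$, so that ${\mathbf{d}}_0-{\mathbf{n}}_0{\mathbf{c}}=-1$ is a unit:

```python
# Numerical spot-check (an illustration, not part of the proof) that for the
# example plant used later in the paper -- p0 = z1*z2/(z1^2*z2^2 - 1) with
# stable controller c = z1*z2 -- the element d0 - n0*c is the unit -1, so all
# four closed-loop entries reduce to polynomials in w = z1*z2.
import random

def entries(z1, z2):
    n0, d0, c = z1 * z2, (z1 * z2) ** 2 - 1, z1 * z2
    u = d0 - n0 * c                       # identically -1 for this example
    # the four entries of H(p0, c), written via the coprime factorization
    return u, (d0 / u, n0 / u, c * d0 / u, -1 + d0 / u)

random.seed(0)
for _ in range(100):
    z1 = complex(random.uniform(-1, 1), random.uniform(-1, 1)) / 2
    z2 = complex(random.uniform(-1, 1), random.uniform(-1, 1)) / 2
    u, (h11, h12, h21, h22) = entries(z1, z2)
    w = z1 * z2
    assert abs(u + 1) < 1e-12
    # polynomial closed forms (computed by hand): 1 - w^2, -w, w - w^3, -w^2
    assert abs(h11 - (1 - w ** 2)) < 1e-12
    assert abs(h12 + w) < 1e-12
    assert abs(h21 - (w - w ** 3)) < 1e-12
    assert abs(h22 + w ** 2) < 1e-12
print("all closed-loop entries are polynomial, as expected")
```

All four entries being polynomial confirms $H({\mathbf{p}}_0,{\mathbf{c}})\in R^{2\times 2}$ for that example.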
So suppose that ${\mathbf{d}}-{\mathbf{n}}{\mathbf{c}}$ is not invertible as an element of $R$. Then ${\mathbf{d}}-{\mathbf{n}}{\mathbf{c}}$ is not invertible in $S$ either, since by assumption (A4), $R$ is a full subring of $S$. Thus there is a $\varphi_0\in M(S)$ such that $$\label{eq_1} \widehat{{\mathbf{d}}}(\varphi_0)-\widehat{{\mathbf{n}}}(\varphi_0) \widehat{{\mathbf{c}}}(\varphi_0) =0.$$ We consider the following cases. $\underline{1}^\circ$ If $\widehat{{\mathbf{d}}}(\varphi_0)=0$, then $\widehat{{\mathbf{n}}}(\varphi_0)\neq 0$ by the coprimeness of $({\mathbf{d}},{\mathbf{n}})$ and so by \[eq\_1\], $\widehat{{\mathbf{c}}}(\varphi_0)=0$. Hence by \[eq\_0\], $\widehat{{\mathbf{d}}_0}(\varphi_0)\neq 0$. So in this case we have $$\kappa({\mathbf{p}}, {\mathbf{p}}_0)\geq \frac{|\widehat{{\mathbf{d}}_0}(\varphi_0)|}{\sqrt{|\widehat{{\mathbf{n}}_0}(\varphi_0)|^2+|\widehat{{\mathbf{d}}_0}(\varphi_0)|^2}} = \kappa\left(\frac{\widehat{{\mathbf{n}}_0}(\varphi_0)}{\widehat{{\mathbf{d}}_0}(\varphi_0)}, \infty\right).$$ But since $\widehat{{\mathbf{c}}}(\varphi_0)=0$, we have $$\left|\frac{\widehat{{\mathbf{n}}_0}(\varphi_0)}{\widehat{{\mathbf{d}}_0}(\varphi_0)}\right| = \left|\frac{\widehat{{\mathbf{n}}_0}(\varphi_0)}{\widehat{{\mathbf{d}}_0}(\varphi_0)-\widehat{{\mathbf{n}}_0}(\varphi_0) \cdot \widehat{{\mathbf{c}}}(\varphi_0) }\right| = |\widehat{{\mathbf{g}}_0}(\varphi_0)|\leq \|{\mathbf{g}}_0\|_\infty =g.$$ Thus if $a$ is any number such that $0<a<1$, we have $$\label{eq_2} \kappa({\mathbf{p}}, {\mathbf{p}}_0)\geq \kappa (g,\infty)=\kappa \left( \frac{1}{g}, 0\right)\geq \min \left\{\frac{a^2}{1+a^2} \frac{1}{g},\;\;\frac{1-a^2}{1+a^2} \right\}.$$ $\underline{2}^\circ$ Now let $\widehat{{\mathbf{d}}}(\varphi_0)\neq 0$. Then using \[eq\_1\], it follows that $\widehat{{\mathbf{n}}}(\varphi_0)\neq 0$ and $\widehat{{\mathbf{c}}}(\varphi_0)\neq 0$. Suppose first that $\widehat{{\mathbf{d}}_0}(\varphi_0)=0$. 
By the coprimeness of $({\mathbf{d}}_0,{\mathbf{n}}_0)$, we have $\widehat{{\mathbf{n}}_0}(\varphi_0)\neq 0$. Then we have $$\kappa({\mathbf{p}}, {\mathbf{p}}_0)\geq \frac{|\widehat{{\mathbf{d}}}(\varphi_0)|}{\sqrt{|\widehat{{\mathbf{n}}}(\varphi_0)|^2+|\widehat{{\mathbf{d}}}(\varphi_0)|^2}} = \kappa\left(\frac{\widehat{{\mathbf{n}}}(\varphi_0)}{\widehat{{\mathbf{d}}}(\varphi_0)}, \infty\right)= \kappa\left(\frac{1}{\widehat{{\mathbf{c}}}(\varphi_0)}, \infty\right),$$ where we have used \[eq\_1\] to obtain the last equality. But $$g= \|{\mathbf{g}}_0\|_\infty =\sup_{\varphi \in M(S)} \left|\frac{\widehat{{\mathbf{n}}_0}(\varphi)}{ \widehat{{\mathbf{d}}_0}(\varphi)-\widehat{{\mathbf{n}}_0}(\varphi)\widehat{{\mathbf{c}}}(\varphi)}\right| \geq \left|\frac{\widehat{{\mathbf{n}}_0}(\varphi_0)}{ \widehat{{\mathbf{d}}_0}(\varphi_0)-\widehat{{\mathbf{n}}_0}(\varphi_0)\widehat{{\mathbf{c}}}(\varphi_0)}\right| =\frac{1}{|\widehat{{\mathbf{c}}}(\varphi_0)|}.$$ Thus if $a$ is any number such that $0<a<1$, we have $$\label{eq_3} \kappa({\mathbf{p}}, {\mathbf{p}}_0)\geq\kappa\left(\frac{1}{\widehat{{\mathbf{c}}}(\varphi_0)}, \infty\right) \geq \kappa (g,\infty)=\kappa \left( \frac{1}{g}, 0\right)\geq \min \left\{ \frac{a^2}{1+a^2} \frac{1}{g},\;\;\frac{1-a^2}{1+a^2} \right\}.$$ Finally, suppose that $\widehat{{\mathbf{d}}_0}(\varphi_0)\neq 0$. If $\widehat{{\mathbf{n}}_0}(\varphi_0)=0$, then $$\kappa({\mathbf{p}}, {\mathbf{p}}_0)\geq \frac{|\widehat{{\mathbf{n}}}(\varphi_0)|}{\sqrt{|\widehat{{\mathbf{n}}}(\varphi_0)|^2+|\widehat{{\mathbf{d}}}(\varphi_0)|^2}} =\kappa\left(\frac{1}{\widehat{{\mathbf{c}}}(\varphi_0)}, \infty\right),$$ using \[eq\_1\], and proceeding in the same manner as above, we obtain \[eq\_3\] once again. Suppose now that $\widehat{{\mathbf{n}}_0}(\varphi_0)\neq 0$. 
We have $$\kappa({\mathbf{p}}, {\mathbf{p}}_0)\geq \kappa \left( \frac{\widehat{{\mathbf{n}}}(\varphi_0)}{\widehat{{\mathbf{d}}}(\varphi_0)}, \frac{\widehat{{\mathbf{n}}_0}(\varphi_0)}{\widehat{{\mathbf{d}}_0}(\varphi_0)} \right).$$ Using \[eq\_1\] we have that $$\frac{\widehat{{\mathbf{n}}}(\varphi_0)}{\widehat{{\mathbf{d}}}(\varphi_0)}-\frac{\widehat{{\mathbf{n}}_0}(\varphi_0)}{\widehat{{\mathbf{d}}_0}(\varphi_0)} = \frac{1}{\widehat{{\mathbf{c}}}(\varphi_0)}-\frac{\widehat{{\mathbf{n}}_0}(\varphi_0)}{\widehat{{\mathbf{d}}_0}(\varphi_0)} = \frac{1}{\widehat{{\mathbf{c}}}(\varphi_0)}\left( 1-\widehat{{\mathbf{c}}}(\varphi_0)\cdot \frac{\widehat{{\mathbf{n}}_0}(\varphi_0)}{\widehat{{\mathbf{d}}_0}(\varphi_0)}\right) .$$ Clearly $$\left|\frac{1}{\widehat{{\mathbf{c}}}(\varphi_0)}\right|\geq \frac{1}{\|{\mathbf{c}}\|_\infty} =\frac{1}{k}.$$ Furthermore, $$\left|\frac{\widehat{{\mathbf{d}}_0}(\varphi_0)}{\widehat{{\mathbf{d}}_0}(\varphi_0)-\widehat{{\mathbf{n}}_0}(\varphi_0)\widehat{{\mathbf{c}}}(\varphi_0)}\right| = \left|1+\widehat{{\mathbf{c}}}(\varphi_0)\frac{\widehat{{\mathbf{n}}_0}(\varphi_0)}{\widehat{{\mathbf{d}}_0}(\varphi_0)-\widehat{{\mathbf{n}}_0}(\varphi_0)\widehat{{\mathbf{c}}}(\varphi_0)}\right| = |1+\widehat{{\mathbf{c}}}(\varphi_0)\widehat{{\mathbf{g}}_0}(\varphi_0)| \leq 1+kg.$$ Hence $$\label{eq_4} \left|\frac{\widehat{{\mathbf{n}}}(\varphi_0)}{\widehat{{\mathbf{d}}}(\varphi_0)}-\frac{\widehat{{\mathbf{n}}_0}(\varphi_0)}{\widehat{{\mathbf{d}}_0}(\varphi_0)}\right| \geq \frac{1}{k(1+kg)}.$$ Also, since $\widehat{{\mathbf{n}}_0}(\varphi_0)\neq 0$, we have $$\frac{\widehat{{\mathbf{d}}}(\varphi_0)}{\widehat{{\mathbf{n}}}(\varphi_0)}-\frac{\widehat{{\mathbf{d}}_0}(\varphi_0)}{\widehat{{\mathbf{n}}_0}(\varphi_0)} = \widehat{{\mathbf{c}}}(\varphi_0)-\frac{\widehat{{\mathbf{d}}_0}(\varphi_0)}{\widehat{{\mathbf{n}}_0}(\varphi_0)} = -\frac{1}{\widehat{{\mathbf{g}}_0}(\varphi_0)}.$$ Thus $$\label{eq_5} 
\left|\frac{\widehat{{\mathbf{d}}}(\varphi_0)}{\widehat{{\mathbf{n}}}(\varphi_0)}-\frac{\widehat{{\mathbf{d}}_0}(\varphi_0)}{\widehat{{\mathbf{n}}_0}(\varphi_0)}\right| \geq \frac{1}{\|{\mathbf{g}}_0\|_\infty} =\frac{1}{g}.$$ Combining \[eq\_4\] and \[eq\_5\], we obtain that if $a$ is any number such that $0<a<1$, we have $$\label{eq_6} \kappa({\mathbf{p}}, {\mathbf{p}}_0)\geq \min \left\{ \frac{a^2}{1+a^2} \frac{1}{g},\;\;\frac{a^2}{1+a^2} \frac{1}{k(1+kg)}, \;\;\frac{1-a^2}{1+a^2} \right\}.$$ Finally, \[eq\_2\], \[eq\_3\] and \[eq\_6\] show that \[eq\_6\] holds in [*all*]{} cases. With $a:=\displaystyle \frac{1}{\sqrt{2}}$, we obtain $$\kappa({\mathbf{p}}, {\mathbf{p}}_0)\geq \frac{1}{3} \min \left\{ \frac{1}{g},\;\; \frac{1}{k(1+kg)},\;\;1 \right\},$$ which contradicts the hypothesis. Hence ${\mathbf{d}}-{\mathbf{n}}{\mathbf{c}}$ is invertible as an element of $R$, and hence ${\mathbf{p}}$ is stabilized by ${\mathbf{c}}$. An example ========== Consider the bidisc ${\mathbb{D}}^2:={\mathbb{D}}\times {\mathbb{D}}=\{(z_1,z_2)\in {\mathbb{C}}^2: |z_1|< 1 \textrm{ and } |z_2|< 1\}$. Let $R:=W^{1}({\mathbb{D}}^2)$ be the Wiener algebra of the bidisc, that is, $$W^1 ({\mathbb{D}}^2):= \left\{f:= \sum_{k_1,k_2\geq 0} a_{k_1,k_2}z_1^{k_1} z_2^{k_2}: \|f\|_1:=\sum_{k_1,k_2\geq 0} |a_{k_1,k_2}|<+\infty\right\}.$$ This is a relevant class of stable transfer functions arising in the analysis and synthesis of multidimensional digital filters: membership in this class guarantees bounded-input bounded-output (BIBO) stability; see for example [@Bos §2.1, p.3-4]. Consider the nominal plant ${\mathbf{p}}_0$ given by $${\mathbf{p}}_0:=\frac{z_1 z_2}{z_1^2z_2^2-1},$$ which has the coprime factorization ${\mathbf{p}}_0=\displaystyle \frac{{\mathbf{n}}_0}{{\mathbf{d}}_0}$, where $ {\mathbf{n}}_0:=z_1 z_2, \; {\mathbf{d}}_0:=z_1^2z_2^2-1. 
$ A stable controller which stabilizes ${\mathbf{p}}_0$ is ${\mathbf{c}}:=z_1z_2\in W^1({\mathbb{D}}^2)$, and we have $${\mathbf{g}}_0:=\displaystyle \frac{{\mathbf{p}}_0}{1-{\mathbf{p}}_0{\mathbf{c}}}=-z_1z_2.$$ We take $S:=A({\mathbb{D}}^2)$, namely the bidisc algebra of functions continuous on $\overline{{\mathbb{D}}}\times \overline{{\mathbb{D}}}$ and holomorphic in ${\mathbb{D}}^2$, with pointwise operations and the supremum norm: $$\|f\|_\infty:= \sup_{z_1,z_2\in {\mathbb{D}}} |f(z_1,z_2)|, \quad f\in A ({\mathbb{D}}^2).$$ Then since the maximal ideal spaces of $W^1({\mathbb{D}}^2)$ and of $A({\mathbb{D}}^2)$ can both be identified with $\overline{{\mathbb{D}}}\times \overline{{\mathbb{D}}}$ [@Rud Theorem 11.7, p.279], it follows that $W^1({\mathbb{D}}^2)$ is a full subalgebra of $A({\mathbb{D}}^2)$. Clearly, $g:=\|{\mathbf{g}}_0\|_\infty =\|-z_1 z_2\|_\infty=1$ and $k:=\|{\mathbf{c}}\|_\infty=\|z_1z_2\|_\infty=1$. So for all ${\mathbf{p}}\in {\mathbb{S}}(W^1({\mathbb{D}}^2))$ satisfying $$\kappa({\mathbf{p}}, {\mathbf{p}}_0) < \frac{1}{3} \min\left\{1,\;\; \frac{1}{g}, \;\;\frac{1}{k(1+kg)} \right\}=\frac{1}{3} \min\left\{1, \;\; \frac{1}{1}, \;\; \frac{1}{1\cdot(1+1\cdot 1)}\right\}=\frac{1}{6},$$ ${\mathbf{p}}$ is also stabilized by ${\mathbf{c}}$. In particular, if we consider plants of the form $${\mathbf{p}}_\alpha:= \frac{z_1 z_2-\alpha}{z_1^2z_2^2-1},$$ for real $\alpha$ satisfying $|\alpha|<1$, then we can estimate $ \kappa({\mathbf{p}}_\alpha,{\mathbf{p}}_0)$ as follows. 
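As a numerical sanity check (ours, not from the text) on the estimate that follows, the pointwise chordal expression defining $\kappa({\mathbf{p}}_\alpha,{\mathbf{p}}_0)$ can be sampled on a grid; note that it depends on $(z_1,z_2)$ only through $w=z_1z_2$ with $|w|<1$. The sample value $\alpha=0.1$ and the grid resolution are arbitrary choices:

```python
import numpy as np

# kappa(p_alpha, p0) depends on (z1, z2) only through w = z1*z2, |w| < 1:
# pointwise value = |alpha||w^2-1| / (sqrt(|w-alpha|^2+|w^2-1|^2) sqrt(|w|^2+|w^2-1|^2)).
def pointwise(alpha, w):
    d = w ** 2 - 1
    num = abs(alpha) * abs(d)
    den = np.sqrt(abs(w - alpha) ** 2 + abs(d) ** 2) * np.sqrt(abs(w) ** 2 + abs(d) ** 2)
    return num / den

alpha = 0.1                        # sample value with |alpha| < 1/(4*sqrt(3)) ~ 0.144
r = np.linspace(0.0, 0.999, 400)
theta = np.linspace(0.0, 2 * np.pi, 400)
R, T = np.meshgrid(r, theta)
W = R * np.exp(1j * T)             # polar grid over the unit disc for w
est = pointwise(alpha, W).max()    # grid estimate of the supremum
print(est)                         # est <= 2|alpha|/sqrt(3) < 1/6
```

On this grid the sampled maximum stays below $2|\alpha|/\sqrt{3}\approx 0.115$, hence below $1/6$, consistent with the analytic bound derived next.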
We have $$\begin{aligned} \kappa({\mathbf{p}}_\alpha,{\mathbf{p}}_0)&=&\sup_{z_1,z_2\in {\mathbb{D}}} \frac{|\alpha| |z_1^2 z_2^2 -1|}{\sqrt{|z_1 z_2-\alpha|^2+|z_1^2 z_2^2 -1|^2}\sqrt{|z_1 z_2|^2+|z_1^2 z_2^2 -1|^2}} \\ &\leq& \sup_{z_1,z_2\in {\mathbb{D}}} \frac{|\alpha|}{\sqrt{|z_1 z_2|^2+|z_1^2 z_2^2 -1|^2}} \leq \sup_{0\leq t\leq 1} \frac{|\alpha|}{\sqrt{t^2+(1-t^2)^2}}=\frac{2}{\sqrt{3}}|\alpha|.\end{aligned}$$ Thus for $\alpha$ satisfying $|\alpha|<\displaystyle\frac{1}{4\sqrt{3}}$, ${\mathbf{p}}_\alpha$ is stabilized by ${\mathbf{c}}$. [**Acknowledgements:**]{} The author thanks Jonathan Partington for kindly providing a copy of [@Par92], and Rudolf Rupp for useful comments on a previous draft of the article. J.A. Ball and A.J. Sasane. Extension of the $\nu$-metric. [*Complex Analysis and Operator Theory*]{}, 6:65-89, no. 1, 2012. N.K. Bose. [*Multidimensional systems theory and applications*]{}. Second edition. With contributions by B. Buchberger and J. P. Guiver. Kluwer Academic Publishers, Dordrecht, 2003. A.K. El-Sakkary. The gap metric: robustness of stabilization of feedback systems. [*IEEE Transactions on Automatic Control*]{}, 30:240-247, no. 3, 1985. W.K. Hayman. [*Meromorphic functions*]{}. Oxford Mathematical Monographs. Clarendon Press, Oxford, 1964. J.R. Partington. Robust control and approximation in the chordal metric. In [*Robust Control*]{}, Proceedings of the workshop held in Tokyo, June 23–24, 1991, edited by S. Hosoe. Lecture Notes in Control and Information Sciences, 183. Springer-Verlag, Berlin, 1992. J.R. Partington. [*Linear operators and linear systems. An analytical approach to control theory.*]{} London Mathematical Society Student Texts 60, Cambridge University Press, Cambridge, 2004. W. Rudin. [*Functional analysis.*]{} Second edition. International Series in Pure and Applied Mathematics. McGraw-Hill, New York, 1991. A.K. El-Sakkary. Estimating robustness on the Riemann sphere. 
[*International Journal of Control*]{}, 49:561–567, no. 2, 1989. A.J. Sasane. Extension of the $\nu$-metric for stabilizable plants over $H^\infty$. [*Mathematical Control and Related Fields*]{}, 2:29-44, no. 1, March 2012. M. Vidyasagar. The graph metric for unstable plants and robustness estimates for feedback stability. [*IEEE Transactions on Automatic Control*]{}, 29:403-418, no. 5, 1984. G. Vinnicombe. Frequency domain uncertainty and the graph topology. [*IEEE Transactions on Automatic Control*]{}, 38:1371-1383, no. 9, 1993. G. Zames and A.K. El-Sakkary. Unstable systems and feedback: The gap metric. In [*Proceedings of the Eighteenth Allerton Conference on Communication, Control and Computing*]{} (Monticello, IL, 1980), 380-385, University of Illinois, Department of Electrical Engineering, Urbana-Champaign, IL, Oct. 1980.
--- abstract: 'For a topological group $G$ the intersection ${{\text{\sc{Kor}}}(G)}$ of all kernels of ordinary representations is studied. We show that ${{\text{\sc{Kor}}}(G)}$ is contained in the center of $G$ if $G$ is a connected pro-Lie group. The class ${{\text{\sc{Kor}}}(\mathcal{C})}$ is determined explicitly if $\mathcal{C}$ is the class ${\text{\sc Conn}{{\text{\sc Lie}}}}$ of connected Lie groups or the class ${\text{\sc almConn}{{\text{\sc Lie}}}}$ of almost connected Lie groups: in both cases, it consists of all compactly generated abelian Lie groups. Every compact abelian group and every connected abelian pro-Lie group occurs as ${{\text{\sc{Kor}}}(G)}$ for some connected pro-Lie group $G$. However, the dimension of ${{\text{\sc{Kor}}}(G)}$ is bounded by the cardinality of the continuum if $G$ is locally compact and connected. Examples are given to show that ${{\text{\sc{Kor}}}(\mathcal{C})}$ becomes complicated if $\mathcal{C}$ contains groups with infinitely many connected components.' author: - Markus Stroppel title: | Kernels of Linear Representations\ of Lie Groups, Locally Compact Groups,\ and Pro-Lie Groups --- The questions we consider and the answers that we have found ============================================================ In the present paper we study (Hausdorff) topological groups. If all else fails, we endow a group with the discrete topology. For any group $G$ one tries, traditionally, to understand the group by means of representations as groups of matrices. To this end, one studies the continuous homomorphisms from $G$ to ${\mathrm{GL}_{n}{{\mathbb C}}}$ for suitable positive integers $n$; so-called *ordinary representations*. This approach works perfectly for finite groups because any such group has a faithful ordinary representation but we may face difficulties for infinite groups; there do exist groups admitting no ordinary representations apart from the trivial (constant) one. See \[ex:Burnside\] below. 
The possible images of $G$ under ordinary representations are called *linear groups* over ${\mathbb C}$. More generally, one may study linear groups over arbitrary fields. See [@MR0335656] and [@MR1039816] for overviews of results in that direction. We just note here that for every free abelian group $A$ there exists at least one field $F$ such that $A$ is a linear group over $F$. However, there do exist abelian groups that are not linear over any field, see [@MR0013164], cf. [@MR0335656 2.2]. Note also that quotients of linear groups may fail to be linear, cf. [@MR0335656 Ch.6]. This phenomenon will play a role in \[realHeis\] below. In the present notes, we are mainly interested in that part of $G$ that cannot be understood by means of ordinary representations, namely, the intersection ${{\text{\sc{Kor}}}(G)}$ of all kernels of ordinary linear representations. A detailed overview of the results of the present paper will be given in \[overview\] below; we give some coarse indications here before we introduce more specific notation. We will show that ${{\text{\sc{Kor}}}(G)}$ is a central subgroup of $G$ if $G$ belongs to the class ${\text{\sc Conn}{{\text{\sc ProLie}}}}$ of all connected pro-Lie groups (in particular, if $G$ is locally compact and connected). Moreover, we investigate the class ${{\text{\sc{Kor}}}(\mathcal{G})}:={\{{{{\text{\sc{Kor}}}(G)}}\left|\vphantom{}\right.\, {G\in\mathcal{G}}\}}$ for different classes $\mathcal{G}$ of groups. For the class ${\text{\sc Conn}{{\text{\sc Lie}}}}$ of connected Lie groups, in particular, we show in \[KOconnLie\] that ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc Lie}}}})}$ is the class ${\text{\sc CgAL}}$ of compactly generated abelian Lie groups. Thus this class is as large as possible (after the observation that ${{\text{\sc{Kor}}}(G)}$ is central for each $G\in{\text{\sc Conn}{{\text{\sc Lie}}}}$, cf. \[connCenter\]). 
The class ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ contains all connected abelian pro-Lie groups and all compact abelian groups, see \[KOconnProLie\]. For the class ${\text{\sc Conn}{{\text{\sc LCG}}}}$ of connected locally compact groups it turns out that there is a somewhat surprising bound on the dimension of members of ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$, see \[KorConnLCGfinGen\]. For topological groups $G$, $H$ let ${\mathrm{Hom}_{}(G,H)}$ denote the set of continuous homomorphisms from $G$ to $H$. If $G$ and $H$ are (topological) vector spaces over ${\mathbb F}$, we write ${\mathrm{Hom}_{{\mathbb F}}(G,H)}$ for the set of continuous ${\mathbb F}$-linear homomorphisms. We put $${\mathrm{OR}(G)} := \bigcup\limits_{n\in{\mathbb N}}{\mathrm{Hom}_{}(G,{\mathrm{GL}_{n}{{\mathbb C}}})}, \text{ \ then \ } {{\text{\sc{Kor}}}(G)} = \bigcap\limits_{\rho\in{\mathrm{OR}(G)}}\ker\rho \,.$$ Clearly ${{\text{\sc{Kor}}}(G)}$ is a closed normal subgroup of $G$; in fact, it is fully invariant (i.e., each endomorphism of the topological group $G$ maps ${{\text{\sc{Kor}}}(G)}$ into itself, see \[fullyInv\] below). In order to keep notation simple, we also consider continuous homomorphisms from $G$ to the group ${\mathrm{GL}_{}{(V)}}$ of all linear bijections of a vector space $V$ of finite dimension $n$ over ${\mathbb F}\in\{{\mathbb R},{\mathbb C}\}$. Note that this does not mean that we consider our problem in greater generality because ${\mathrm{GL}_{}{(V)}}$ is isomorphic to ${\mathrm{GL}_{n}{{\mathbb F}}}\le{\mathrm{GL}_{n}{{\mathbb C}}}$. Our present problem bears some similarity to questions that arise in character theory. E.g., for a locally compact abelian group $G$ the ordinary representations may be reduced to collections of homomorphisms from $G$ into ${\mathrm{GL}_{1}{{\mathbb C}}}\cong{\mathbb C}^\times\cong {\mathbb R}\times{\mathbb T}$ where ${\mathbb T}:={\mathbb R}/{\mathbb Z}$ as usual. 
The elements of ${G^*}:={\mathrm{Hom}_{}(G,{\mathbb T})}$ are called *characters* of $G$ while those of ${\mathrm{Hom}_{}(G,{\mathbb R})}$ are the *real characters*[^1]. Since ${\mathrm{Hom}_{}({\mathbb R},{\mathbb T})}$ separates points we have $\bigcap_{\chi\in{G^*}}\ker\chi \subseteq \bigcap_{\rho\in{\mathrm{Hom}_{}(G,{\mathbb R})}}\ker\rho$. Pontryagin’s duality theory for locally compact abelian groups (cf. [@MR2226087 Ch.F]) uses characters; it rests on the fact that ${\mathrm{Hom}_{}(G,{\mathbb T})}$ separates points for each locally compact abelian group. This is no longer true for arbitrary topological abelian groups, cf. [@MR551496 23.32]. The intersection over the kernels of real characters has been identified in [@MR0214697], cf. [@MR0089361], [@MR2413959]. Our present problem disappears if we take a local (or, rather, an infinitesimal) point of view. Indeed Ado’s Theorem ([@MR0027753], see [@MR0030946] for an English translation, cf. also [@MR0028829], [@MR0032613], [@MR0201576]) asserts that every Lie algebra $\mathfrak{g}$ of finite dimension over ${\mathbb R}$ or ${\mathbb C}$ has a faithful ordinary representation; i.e., a faithful homomorphism into ${\mathrm{\mathfrak{gl}}_{n}{{\mathbb R}}}$ for some $n$. Thus a single, suitably chosen representation suffices to show that ${{\mathfrak{{Kor}}}(\mathfrak{g})}$ is trivial, where $${{\mathfrak{{Kor}}}(\mathfrak{g})} := \bigcap_{n\in{\mathbb N}}\quad\bigcap_{\rho\in{\mathrm{Hom}_{}(\mathfrak{g},{\mathrm{\mathfrak{gl}}_{n}{{\mathbb C}}})}}\ker\rho \,.$$ For a pro-Lie algebra $\mathfrak{g}$ (i.e., a projective limit $\mathfrak{g}$ of Lie algebras of finite dimension over ${\mathbb R}$ such that $\mathfrak{g}$ is complete as a topological vector space, cf. [@MR2337107 Ch.7]) one also knows that ${{\mathfrak{{Kor}}}(\mathfrak{g})}$ is trivial. 
In particular, there is no useful relationship between ${{\text{\sc{Kor}}}(G)}$ and ${{\mathfrak{{Kor}}}({{\mathfrak{L}}(G)})}$ if $G$ is a group which has Lie algebra ${{\mathfrak{L}}(G)}$ (in the sense of \[addLie\] below). However, the Lie algebra will be useful to construct ordinary representations of a pro-Lie group, see \[connCenter\]. The following will be of interest to us here: ${\text{\sc TG}}:$ : topological Hausdorff groups, ${\text{\sc ProLie}}:$ : pro-Lie groups, i.e., complete projective limits of Lie groups, ${\text{\sc CG}}:$ : compact groups, ${\text{\sc LCG}}:$ : locally compact groups, ${\text{\sc LCA}}:$ : locally compact abelian groups, ${\text{\sc Lie}}:$ : Lie groups (without separability assumptions, i.e., including *all* discrete groups), ${\text{\sc SepLie}}:$ : separable Lie groups (i.e., the $\sigma$-compact members of ${\text{\sc Lie}}$), ${\text{\sc CgAL}}= {\text{\sc LCA}}\cap{\text{\sc SepLie}}:$ : compactly generated abelian Lie groups, and the classes ${\text{\sc Ab}{\mathcal{G}}}$, ${\text{\sc Conn}{\mathcal{G}}}$ or ${\text{\sc almConn}{\mathcal{G}}}$ consisting of the abelian, connected or almost connected members of the class $\mathcal{G}$, respectively. Here a group $G$ is called *almost connected* if the quotient $G/G_0$ modulo the connected component $G_0$ is compact. For sentimental historical reasons, we write ${\text{\sc LCA}}$ and ${\text{\sc CA}}$ instead of the more systematic ${\text{\sc Ab}{{\text{\sc LCG}}}}$ and ${\text{\sc Ab}{{\text{\sc CG}}}}$, respectively. The diagram in Figure \[fig:inclusions\] indicates the inclusions between these classes. 
(Figure \[fig:inclusions\]: a diagram indicating the inclusions between the classes listed above; the diagram source is not reproduced here.) Note that we only consider Hausdorff groups; otherwise, the closure of the trivial subgroup would occur inside ${{\text{\sc{Kor}}}(G)}$ throughout. As is customary in the theory of locally compact groups, we do not include separability in the definition of a Lie group. This means that *every* discrete group is a Lie group, and it ensures (via the solution of Hilbert’s Fifth Problem, cf. [@MR0058607], [@MR0073104]) that every locally Euclidean group is a Lie group. Thus the additive group ${\mathbb R}_{\mathrm{discr}}$ of real numbers with the discrete topology belongs to ${{\text{\sc Lie}}\setminus{\text{\sc SepLie}}}$ (it is not a member of ${\text{\sc CgAL}}$), and the identity from ${\mathbb R}_{\mathrm{discr}}$ to ${\mathbb R}$ is a bijective morphism of Lie groups which is not open. If one wants to use the Open Mapping Theorem (which is indeed one of the major reasons to require separability), one has to be careful and make sure that the domain of the mapping is $\sigma$-compact. Note that every closed subgroup of an *almost connected* Lie group belongs to ${\text{\sc SepLie}}$. In our present context separability does not appear to be of much help (see \[ex:separabilityDoesNotHelp\]\[separabilityDoesNotHelp\], \[KOsepLie\]) while almost connectedness is a useful condition for Lie groups (cf. \[almConnLie\]), where it actually means that the number of connected components is finite. \[overview\] In the present paper, we obtain the following. - ${{\text{\sc{Kor}}}}$ is a functor that preserves products, see \[fullyInv\], \[cartProd\]. 
- For each $G\in{\text{\sc Conn}{{\text{\sc ProLie}}}}$ we have ${{\text{\sc{Kor}}}(G)}\subseteq{\mathrm{Z}(G)}\cap\closure{G'}$, see \[connCenter\] and \[KOinZcapComm\]. - ${{\text{\sc{Kor}}}({\text{\sc almConn}{{\text{\sc Lie}}}})} = {{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc Lie}}}})} = {\text{\sc CgAL}}$, see \[KOconnLie\] and \[almConnLie\]. - ${{\text{\sc{Kor}}}(G)}$ is trivial for every compact and every abelian proto-Lie group $G$, see \[KOCG\]. - For $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$ the connected component $({{\text{\sc{Kor}}}(G)})_0$ of ${{\text{\sc{Kor}}}(G)}$ has a finitely generated dense subgroup. Thus the weight of $({{\text{\sc{Kor}}}(G)})_0$ is bounded by $2^{\aleph_0}$, see \[connKOsmall\]. - If $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$ is solvable then $\closure{G'}$ has a finitely generated dense subgroup, cf. \[sKOfiniteIfSolv\]. This implies that the weight of ${{\text{\sc{Kor}}}(G)}$ is bounded by $2^{\aleph_0}$. - The inclusions ${\text{\sc Conn}{{\text{\sc Ab}{{\text{\sc ProLie}}}}}} \subset {\bm{\Pi}}({\text{\sc CgAL}}\cup{\text{\sc CA}}) \subseteq {{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})} \subseteq {\mathbf{\overline{S}}}({\text{\sc Conn}{{\text{\sc Ab}{{\text{\sc ProLie}}}}}})$ are established in \[KOconnProLie\] and \[prob:KOConnProLie\]. - ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$ contains those $A\in{\text{\sc LCA}}$ that possess a finitely generated dense subgroup, see \[KorConnLCGfinGen\]. - For each $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$ there exist $A\in{\text{\sc CA}}$ and natural numbers $e,f$ such that the connected component $A_0$ is monothetic and ${{\text{\sc{Kor}}}(G)}\cong{\mathbb Z}^f\times A\times{\mathbb R}^e$, see \[KorConnLCGfinGen\]. In particular, the dimension of members of ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$ is bounded by $2^{\aleph_0}$. 
The latter two of these results mean that ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$ is sandwiched between the class ${\text{\sc small}{{\text{\sc LCA}}}}$ of groups of the form ${\mathbb Z}^f\times F\times{\mathbb R}^e$ where $F$ is a compact group with a finitely generated dense subgroup and the class of groups of the form ${\mathbb Z}^f\times A\times{\mathbb R}^e$ where $A$ is compact and its connected component $A_0$ has a finitely generated dense subgroup; here $f$ and $e$ may be arbitrary natural numbers. The classes ${{\text{\sc{Kor}}}({\text{\sc SepLie}})}$ and ${{\text{\sc{Kor}}}({\text{\sc LCG}})}$ are large and complicated, see \[KOsepLie\], \[KOLCG\] and \[KOLCGnotClosedUnderQ\]. Some open problems are stated in Section \[sec:openQuestions\]. Relevant results about (non-) linear(ity of) groups are collected in Section \[sec:appendix\]. Basic results ============= In this section we discuss functoriality of ${{\text{\sc{Kor}}}}$ and the behavior of this functor with respect to homomorphisms (in particular quotients) and products. \[fullyInv\] For $\varphi\in{\mathrm{Hom}_{}(G,H)}$ we have $\varphi({{\text{\sc{Kor}}}(G)}) \le {{\text{\sc{Kor}}}(H)}$. In particular, mapping $G$ to ${{\text{\sc{Kor}}}(G)}$ and $\varphi$ to ${{\text{\sc{Kor}}}(\varphi)} := \varphi|_{{{\text{\sc{Kor}}}(G)}}^{{{\text{\sc{Kor}}}(H)}}$ implements a functor from the category of topological groups to itself. For each $\rho\in{\mathrm{OR}(H)}$ we have $\rho\circ\varphi\in{\mathrm{OR}(G)}$, and $\rho(\varphi(x))={\mathrm{id}}$ follows for each $x\in{{\text{\sc{Kor}}}(G)}$. \[KOquotients\] Let $G$ be a group, and let $N$ be a subgroup of $G$. 1. In any case the group ${{\text{\sc{Kor}}}(N)}$ is contained in ${{\text{\sc{Kor}}}(G)}$. 2. If $N$ is a normal subgroup of $G$ then ${{\text{\sc{Kor}}}(G)}N/N$ is contained in ${{\text{\sc{Kor}}}(G/N)}$. 3. 
\[wellBehavedQuotient\] If $N$ is normal and contained in ${{\text{\sc{Kor}}}(G)}$ then ${{\text{\sc{Kor}}}(G)}/N = {{\text{\sc{Kor}}}(G/N)}$. 4. \[radical\] The subgroup ${{\text{\sc{Kor}}}(G)}$ is a radical in the sense that ${{\text{\sc{Kor}}}(G/{{\text{\sc{Kor}}}(G)})}$ is trivial. The first two assertions follow from \[fullyInv\] using the inclusion map $\iota\colon N\to G$ and the canonical quotient map $q_N\in{\mathrm{Hom}_{}(G,G/N)}$. Now assume that $N$ is normal in $G$ and contained in ${{\text{\sc{Kor}}}(G)}$. For $x\in G\setminus{{\text{\sc{Kor}}}(G)}$ there exists $\rho\in{\mathrm{OR}(G)}$ with $\rho(x)\ne{\mathrm{id}}$, and $\rho$ factors as $\rho=\lambda\circ q_N$ because $N\le{{\text{\sc{Kor}}}(G)}\le\ker\rho$. Thus there exists $\lambda\in{\mathrm{OR}(G/N)}$ with $\lambda(xN)\ne{\mathrm{id}}$. This means ${{\text{\sc{Kor}}}(G)}/N \ge {{\text{\sc{Kor}}}(G/N)}$; the inclusion ${{\text{\sc{Kor}}}(G)}/N \le {{\text{\sc{Kor}}}(G/N)}$ is clear already. Thus \[wellBehavedQuotient\] is established, and the last assertion follows. From \[KOquotients\]\[radical\] we see that ${{\text{\sc{Kor}}}(G)}$ is a sort of “radical” of the group $G$. Note that ${{\text{\sc{Kor}}}(G/N)}$ may be much larger than ${{\text{\sc{Kor}}}(G)}N/N$ if $N$ is not contained in ${{\text{\sc{Kor}}}(G)}$; the example in \[realHeis\] is instructive here, again. The category ${\text{\sc TG}}$ and its full subcategories ${\text{\sc ProLie}}$, ${\text{\sc Conn}{{\text{\sc ProLie}}}}$, ${\text{\sc Ab}{{\text{\sc ProLie}}}}$ and ${\text{\sc Conn}{{\text{\sc Ab}{{\text{\sc ProLie}}}}}}$ are closed under arbitrary products, and these are as expected (i.e., cartesian products with the product topology). A category of topological groups may contain products (in the categorical sense) that are endowed with a topology that is different from the product topology; e.g., this happens in the categories ${\text{\sc LCG}}$ and ${\text{\sc LCA}}$ (cf. [@MR2226087 16.22]). 
However, products in ${\text{\sc Conn}{{\text{\sc LCG}}}}$ are the same as those in ${\text{\sc TG}}$ (in particular, they exist only if all but a finite number of the factors are compact), see [@MR2226087 16.23]. For $\mathcal{G} \subseteq {\text{\sc TG}}$ let ${\mathbf{P}}(\mathcal{G})$ denote the class of all cartesian products of *finitely* many members of $\mathcal{G}$. By ${\bm{\Pi}}(\mathcal{G})$ we mean the class of arbitrary cartesian products of members of $\mathcal{G}$. In any case, we use the product topology. The class ${\mathbf{S}}(\mathcal{G})$ consists of all subgroups of members of $\mathcal{G}$ while ${\mathbf{\overline{S}}}(\mathcal{G})$ contains only the closed subgroups (which is more reasonable if one studies classes of complete groups as we do here). By ${\mathbf{{Q}}}(\mathcal{G})$ we denote the class of all Hausdorff quotient groups of members of $\mathcal{G}$, i.e. the class of all groups $G/N$ where $G\in\mathcal{G}$ and $N$ is a *closed* normal subgroup of $G$. Finally, let ${\mathbf{{\widehat{Q}}}}(\mathcal{G})$ be the class of all (Hausdorff) completions of members of $\mathcal{G}$. A topological group need not have a completion, cf. [@MR979294 III§3$\cdot$4, Thm.1]. The classes ${\text{\sc ProLie}}$ and ${\text{\sc Conn}{{\text{\sc ProLie}}}}$ are not closed under ${\mathbf{{Q}}}$, see [@MR2337107 4.11]. However, Hausdorff quotients of pro-Lie groups are proto-Lie groups (see [@MR2337107 4.1]), i.e., they possess completions which belong to ${\text{\sc ProLie}}$, again. Thus ${\mathbf{{\widehat{Q}}}}({\text{\sc ProLie}})={\text{\sc ProLie}}\subsetneqq{\mathbf{{Q}}}({\text{\sc ProLie}})$. Locally compact groups are complete anyway (see [@MR979294 III§3$\cdot$3, Cor.1], cf. [@MR2337107 1.31] or [@MR2226087 8.25]), and ${\mathbf{{Q}}}({\text{\sc LCG}})={\text{\sc LCG}}={\mathbf{{\widehat{Q}}}}({\text{\sc LCG}})$. 
The following corollary to \[KOquotients\] is applicable to the class ${\text{\sc Conn}{{\text{\sc LCG}}}}$ and its subclass ${\text{\sc Conn}{{\text{\sc Lie}}}}$, cf. \[connCenter\]. The class ${\text{\sc Conn}{{\text{\sc ProLie}}}}$ is problematic because it is not closed under ${\mathbf{{Q}}}$. Some restriction of the sort “${{\text{\sc{Kor}}}(G)}\le{\mathrm{Z}(G)}$” is necessary, cf. \[KOLCGnotClosedUnderQ\]. \[quotients\] Consider $\mathcal{G}\subseteq{\text{\sc TG}}$ with ${\mathbf{{Q}}}(\mathcal{G})=\mathcal{G}$ and assume that ${{\text{\sc{Kor}}}(G)}\le{\mathrm{Z}(G)}$ holds for each $G\in\mathcal{G}$. Then ${{\text{\sc{Kor}}}(\mathcal{G})}={\mathbf{{Q}}}({{\text{\sc{Kor}}}(\mathcal{G})})$. \[proLieModLCGisComplete\] Let $N$ be a normal subgroup of a pro-Lie group $G$. If $N$ is locally compact then $G/N$ is complete, and thus a pro-Lie group. \[cartProd\] The functor ${{\text{\sc{Kor}}}}$ preserves products in the category ${\text{\sc TG}}$, in fact, we have ${{\text{\sc{Kor}}}(\prod_{j\in J}A_j)} = \prod_{j\in J}{{\text{\sc{Kor}}}(A_j)}$. For $\mathcal{G}\subseteq{\text{\sc TG}}$ this means ${\mathbf{P}}({{\text{\sc{Kor}}}(\mathcal{G})})={{\text{\sc{Kor}}}(\mathcal{G})}$ if ${\mathbf{P}}(\mathcal{G})=\mathcal{G}$ and ${\bm{\Pi}}({{\text{\sc{Kor}}}(\mathcal{G})})={{\text{\sc{Kor}}}(\mathcal{G})}$ if ${\bm{\Pi}}(\mathcal{G})=\mathcal{G}$. Let $\prod_{j\in J}A_j$ be a cartesian product; the indexing set $J$ may be infinite. For $m\in J$ let $\eta_m\colon A_m\to\prod_{j\in J}A_j$ be the natural inclusion, and let $\pi_m\colon\prod_{j\in J}A_j\to A_m$ be the natural projection. If $\rho\colon\prod_{j\in J}A_j\to{\mathrm{GL}_{n}{{\mathbb C}}}$ is an ordinary representation of the product then $\rho\circ\eta_m$ is an ordinary representation of $A_m$, and we obtain that the subgroup generated by $\bigcup_{j\in J}\eta_j({{\text{\sc{Kor}}}(A_j)})$ is contained in ${{\text{\sc{Kor}}}(\prod_{j\in J}A_j)}$. 
The product $\prod_{j\in J}{{\text{\sc{Kor}}}(A_j)}$ is the closure of that subgroup and thus also contained in ${{\text{\sc{Kor}}}(\prod_{j\in J}A_j)}$. Conversely, consider $x\in\prod_{j\in J}A_j\setminus\prod_{j\in J}{{\text{\sc{Kor}}}(A_j)}$. Then there exists $m\in J$ such that $\pi_m(x)\notin{{\text{\sc{Kor}}}(A_m)}$ and we find an ordinary representation $\rho_m$ of $A_m$ with ${\mathrm{id}}\ne\rho_m(\pi_m(x)) = (\rho_m\circ\pi_m)(x)$. Since $\rho_m\circ\pi_m$ is an ordinary representation of $\prod_{j\in J}A_j$ this shows $x\notin{{\text{\sc{Kor}}}(\prod_{j\in J}A_j)}$. \[CpCommSolv\] Let $G$ be a solvable connected (not necessarily closed) subgroup of ${\mathrm{GL}_{n}{{\mathbb C}}}$. Then the following hold. 1. \[invFlag\] There exists a sequence $V_0,\dots,V_n$ of $G$-invariant subspaces such that for all $j<n$ we have $V_j\le V_{j+1}$ and $\dim{V_j}=j$. 2. For any sequence as in \[invFlag\] the commutator group $G'$ acts trivially on each $V_{j+1}/V_j$. 3. There are no compact (in particular, no finite) subgroups in $G'$ except the trivial one. Replacing $G$ by its closure in ${\mathrm{GL}_{n}{{\mathbb C}}}$ we lose neither solvability nor connectedness, cf. [@MR2226087 2.9, 7.5]. For closed connected solvable subgroups of ${\mathrm{GL}_{n}{{\mathbb C}}}$ assertion \[invFlag\] is Lie’s Theorem, cf. [@MR514561 Thm.2.2, Ch.III]. Since $G$ acts as a subgroup of the abelian group ${\mathrm{GL}_{}{(V_{j+1}/V_j)}} \cong {\mathrm{GL}_{1}{{\mathbb C}}} \cong {\mathbb C}^\times$ on $V_{j+1}/V_j$, the commutator group $G'$ acts trivially on that quotient. Finally, let $C$ be a compact subgroup of $G'$. We proceed by induction to show that $C$ acts trivially on $V_j$ for each $j\le n$. Indeed, if $C$ acts trivially on $V_j$ it acts on $V_{j+1}$ as a subgroup of the group $N_j$ consisting of all $a\in{\mathrm{GL}_{}{(V_{j+1})}}$ acting trivially both on $V_j$ and on $V_{j+1}/V_j$. 
Now $N_j$ is isomorphic to the additive group ${\mathrm{Hom}_{{\mathbb C}}(V_{j+1}/V_j,V_j)}$ via $a\mapsto a-{\mathrm{id}}$: this map is additive because $ab-{\mathrm{id}}=(a-{\mathrm{id}})+(b-{\mathrm{id}})+(a-{\mathrm{id}})(b-{\mathrm{id}})$, where the last summand vanishes on $V_{j+1}$ since $(b-{\mathrm{id}})(V_{j+1})\le V_j$ and $(a-{\mathrm{id}})(V_j)=\{0\}$. This is the additive group of a vector space of finite dimension over ${\mathbb R}$, and does not contain compact subgroups apart from the trivial one. \[CpCommSolvCor\] Let $G$ be a solvable connected group. Then every compact subgroup of $G'$ is contained in ${{\text{\sc{Kor}}}(G)}$. Connectedness is a crucial assumption in \[CpCommSolvCor\], as finite groups show. Applications of \[CpCommSolvCor\] are given in \[realHeis\] and \[exa:HeisenbergWithCpCenter\] below. See also \[maltsev\]\[maltsev3\] and \[nahlus\]. Lie algebras and pro-Lie groups =============================== For a Lie group $L$ one model for the Lie algebra is the space ${\mathrm{Hom}_{}({\mathbb R},L)}$ of all one-parameter subgroups, cf. [@MR0409722]. This point of view works for quite general classes of topological groups, see [@MR2337107 Ch.2]. \[addLie\] For a topological group $G$ let ${{\mathfrak{L}}(G)}$ denote the space ${\mathrm{Hom}_{}({\mathbb R},G)}$ endowed with the compact-open topology (i.e., the topology of uniform convergence on compact sets). We call $\exp\colon{{\mathfrak{L}}(G)}\to G\colon X\mapsto X(1)$ the *exponential map* for $G$. Multiplication of $X\in{{\mathfrak{L}}(G)}$ by a scalar $r\in{\mathbb R}$ is given by $(r\,X)(t) := X(tr)$. Addition and the Lie bracket are more involved, and not defined for arbitrary topological groups. We say that $G$ *has a Lie algebra* if the following conditions are satisfied: 1. 
\[defAdd\] For all $X,Y\in{{\mathfrak{L}}(G)}$ there are elements $X+Y$ and $[X,Y]$ in ${{\mathfrak{L}}(G)}$ such that $$\begin{array}{rcl} (X+Y)(t) & = & \lim\limits_{n\to\infty} \left( X(\frac{t}{n}) \,Y(\frac{t}{n}) \right)^n \text{ \ and} \\{} [X,Y](t^2)& = & \lim\limits_{n\to\infty} {\operatorname{comm}\left(X(\frac{t}{n}),Y(\frac{t}{n})\right)}^{n^2} \end{array}$$ hold for all $t\in{\mathbb R}$; here ${\operatorname{comm}\left(g,h\right)}:=ghg^{-1}h^{-1}$ is the group commutator. 2. The set ${{\mathfrak{L}}(G)}$ is a topological Lie algebra with respect to these operations. We say that $G$ has a *generating Lie algebra* if it has a Lie algebra and the range of the exponential map generates a dense subgroup of the connected component $G_0$. In [@MR2337107 3.5] it is shown that every projective limit $G$ of Lie groups has a Lie algebra and, moreover, this Lie algebra ${{\mathfrak{L}}(G)}$ is a pro-Lie algebra: i.e., the filter basis of closed ideals of finite co-dimension in ${{\mathfrak{L}}(G)}$ converges to $0$ and ${{\mathfrak{L}}(G)}$ is complete as a topological vector space. In particular, this Lie algebra is *residually finite-dimensional*; the homomorphisms to finite-dimensional Lie algebras separate the points. Every almost connected pro-Lie group, and thus every almost connected locally compact group, has a generating Lie algebra, cf. [@MR2337107 4.22]. Our technical machinery culminates in the adjoint representation: \[connProLieSuff\] Let $G$ be a topological group. 1. For each $g\in G$ there is a unique bijection ${\mathrm{Ad}_{}}(g)$ of ${{\mathfrak{L}}(G)}$ onto itself such that $gX(t)g^{-1} = {\mathrm{Ad}_{}}(g)(X)(t)$ holds for all $t\in{\mathbb R}$. 2. The action $G\times{{\mathfrak{L}}(G)}\to{{\mathfrak{L}}(G)}\colon (g,X)\mapsto {\mathrm{Ad}_{}}(g)(X)$ is continuous. Now assume that $G$ has a Lie algebra. 1. For each $g\in G$ the bijection ${\mathrm{Ad}_{}}(g)$ is an automorphism of the Lie algebra ${{\mathfrak{L}}(G)}$. 2. 
The adjoint representation ${\mathrm{Ad}_{}}\colon G\to{\mathrm{Aut}({{\mathfrak{L}}(G)})}$ is a continuous linear representation, where ${\mathrm{Aut}({{\mathfrak{L}}(G)})}$ is endowed with the strong operator topology (i.e., the topology of pointwise convergence). 3. The kernel of ${\mathrm{Ad}_{}}$ is the centralizer of (the closure of) the subgroup generated by the range of the exponential function. 4. If $G$ is a pro-Lie group then each ideal of the Lie algebra ${{\mathfrak{L}}(G)}$ is invariant under the adjoint action of the connected component of $G$. \[connCenter\] If $G$ is a connected pro-Lie group then ${{\text{\sc{Kor}}}(G)}$ is contained in the center ${\mathrm{Z}(G)}$ of $G$ and is therefore abelian. According to \[connProLieSuff\] the adjoint representation ${\mathrm{Ad}_{}}$ induces ordinary representations on the finite-dimensional quotients of ${{\mathfrak{L}}(G)}$ that separate the points modulo $\ker{\mathrm{Ad}_{}}$, which equals the center of $G$. For every connected locally compact group $G$ the group ${{\text{\sc{Kor}}}(G)}$ is contained in the center ${\mathrm{Z}(G)}$. Using \[KOquotients\] and \[proLieModLCGisComplete\] we infer: \[quotientsKOConnProLie\] The class ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ is closed under quotients modulo locally compact groups while ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc Lie}}}})}$ and ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$ are closed under arbitrary (Hausdorff) quotients. \[ex:separabilityDoesNotHelp\] 1. \[separabilityDoesNotHelp\] Theorem \[connCenter\] does not extend to the case of arbitrary disconnected Lie groups even if we assume separability. In fact there are countable discrete groups $G$ with ${{\text{\sc{Kor}}}(G)}=G$, see \[ex:Burnside\] below. However, everything is fine for almost connected Lie groups, see \[almConnLie\]. 2. 
There are groups in ${\text{\sc LCG}}\setminus{\text{\sc ProLie}}$ such that ${\mathrm{Ad}_{}}$ is a faithful representation (of infinite degree) but every ordinary representation is trivial on the connected component, cf. \[powerD\]. The following observation will be useful to obtain restrictions on the structure and size (measured by the dimension, i.e., the rank of the Pontryagin dual) of members of ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$; cf. \[sKOfiniteIfSolv\], \[connKOsmall\] and \[sKOfinite\] below. The passage to the *closure* of the commutator group is essential, cf. \[SL2timesH\] and \[exa:monotheticConnCA\]. \[KOinZcapComm\] 1. If $G\in{\text{\sc ProLie}}$ then ${{\text{\sc{Kor}}}(G)}\le\closure{G'}$. 2. If $G\in{\text{\sc Conn}{{\text{\sc ProLie}}}}$ (in particular, if $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$) then ${{\text{\sc{Kor}}}(G)} \le {\mathrm{Z}(G)}\cap\closure{G'}$. The quotient $G/\closure{G'}$ is an abelian proto-Lie group (see [@MR2337107 4.1]). Thus ${\mathrm{OR}(G/\closure{G'})}$ separates the points, and ${{\text{\sc{Kor}}}(G/\closure{G'})}$ is trivial. For each $x\in G\setminus\closure{G'}$ we thus find some $\rho\in{\mathrm{OR}(G/\closure{G'})}$ with $x\,\closure{G'} \notin\ker\rho$, and composing $\rho$ with the quotient map we find $\rho'\in{\mathrm{OR}(G)}$ such that $x\notin\ker{\rho'}$. If $G$ is also connected then ${{\text{\sc{Kor}}}(G)}\le{\mathrm{Z}(G)}$ has been established \[connCenter\]. Almost connected Lie groups =========================== \[realHeis\] The following example has been around for quite some time, see [@MR0073104 4.14, p.191] or [@MR0327979]. We use it to show that ${\mathbb R}/{\mathbb Z}$ belongs to ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc Lie}}}})}$. Let $H$ be the real Heisenberg group; i.e., $H={\mathbb R}^3$ with the multiplication $(a,b,x)*(c,d,y)=(a+c,b+d,x+y+ad-bc)$. 
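The group operations can be checked by direct computation; the following short script (an illustrative sanity check on random integer triples, not part of the argument) verifies associativity, inverses, and the commutator formula ${\operatorname{comm}\left(g,h\right)}=(0,0,2(ad-bc))$:

```python
import random

# The Heisenberg group H = R^3 with (a,b,x)*(c,d,y) = (a+c, b+d, x+y+ad-bc).
def mul(g, h):
    a, b, x = g
    c, d, y = h
    return (a + c, b + d, x + y + a * d - b * c)

def inv(g):
    a, b, x = g
    return (-a, -b, -x)

def comm(g, h):
    """Group commutator g*h*g^{-1}*h^{-1}."""
    return mul(mul(g, h), mul(inv(g), inv(h)))

random.seed(0)
for _ in range(1000):
    g, h, k = [tuple(random.randint(-9, 9) for _ in range(3)) for _ in range(3)]
    assert mul(mul(g, h), k) == mul(g, mul(h, k))     # associativity
    assert mul(g, inv(g)) == (0, 0, 0)                # two-sided inverse
    (a, b, _), (c, d, _) = g, h
    assert comm(g, h) == (0, 0, 2 * (a * d - b * c))  # commutators are central
```

In particular every commutator lies in the central subgroup $\{0\}^2\times{\mathbb R}$.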
Clearly this is a connected Lie group, and the center $\{0\}^2\times{\mathbb R}$ coincides with the commutator group $H'$. The cyclic subgroup $Z := \{0\}^2\times{\mathbb Z}$ is closed and central in $H$. Thus $H/Z$ is a connected nilpotent (and thus solvable) Lie group. According to \[CpCommSolvCor\] the compact central subgroup $C:=(\{0\}^2\times{\mathbb R})/Z \cong {\mathbb R}/{\mathbb Z}$ of $(H/Z)'$ is contained in ${{\text{\sc{Kor}}}(H/Z)}$. On the other hand, the quotient $(H/Z)/C\cong H/(\{0\}^2\times{\mathbb R})\cong{\mathbb R}^2$ has a faithful ordinary representation. Therefore, we obtain ${{\text{\sc{Kor}}}(H/Z)} = C \cong {\mathbb R}/{\mathbb Z}$. For every connected Lie group $L$ with semi-simple Lie algebra one knows that ${{\text{\sc{Kor}}}(L)}$ is a discrete (and thus central) normal subgroup, cf. [@MR1064110 5.3.6 Thm.8, p.264]. In fact, for any such group one can read off ${{\text{\sc{Kor}}}(L)}$ from [@MR1064110 Table10, p.318f]. The relevant information is accessible algorithmically, see [@RealLIE], [@MR1270178]. For our present purposes, we require explicit knowledge of the case where $L$ is the simply connected covering of ${\mathrm{PSL}_{2}{{\mathbb R}}}$. \[Sl2\] Let $S$ denote the simply connected covering of ${\mathrm{PSL}_{2}{{\mathbb R}}}$. The center ${\mathrm{Z}(S)}$ of $S$ is infinite cyclic, but no proper covering of ${\mathrm{SL}_{2}{{\mathbb R}}}$ admits a faithful ordinary representation; see [@MR1064110 Table10, p.318f] (where the Lie algebra ${\mathrm{\mathfrak{sl}}_{2}{{\mathbb R}}}$ occurs in the guise of ${\mathrm{\mathfrak{sp}}_{2}{{\mathbb R}}} = {\mathrm{\mathfrak{sp}}_{4p+2}{{\mathbb R}}}$ for $p=0$), cf. also [@MR0327979] and [@MR1384300 95.9, 95.10]. Thus we obtain ${{\text{\sc{Kor}}}(S)} = {\{{z^2}\left|\vphantom{}\right.\, {z\in{\mathrm{Z}(S)}}\}} \cong {\mathbb Z}$. 
Passing from $S$ to the quotient $S_d := S/{\{{z^{2d}}\left|\vphantom{}\right.\, {z\in{\mathrm{Z}(S)}}\}}$ we find ${{\text{\sc{Kor}}}(S_d)} \cong {\mathbb Z}/d\,{\mathbb Z}$ for any nonnegative integer $d$. Instead of the simple group ${\mathrm{PSL}_{2}{{\mathbb R}}}$ we could use any other simple Lie group with infinite fundamental group (cf. \[simpleLieInfinitePi\]) for the construction in \[Sl2\]. \[SL2timesH\] Again, let $S$ denote the simply connected covering of ${\mathrm{SL}_{2}{{\mathbb R}}}$, and let $H$ be the real Heisenberg group (cf. \[realHeis\]). Pick a generator $\zeta$ for the center of $S$. Then the subgroup $K:= {\{{(\zeta^{-2z},(0,0,z))}\left|\vphantom{}\right.\, {z\in{\mathbb Z}}\}}$ is closed and central. Passing to the quotient $G:=({S\times H})/K$ amounts to an identification of $\zeta^2$ with $(0,0,1)$. Composing the inclusion maps of the two factors $S$ and $H$ with the quotient map modulo $K$ we obtain continuous homomorphisms $\eta_S$ and $\eta_H$ from $S$ and $H$, respectively, into $G$. Composition of ordinary representations of $G$ with $\eta_X$ then yields ordinary representations of $X\in\{S,H\}$. Any ordinary representation of $S$ has $\zeta^2$ in its kernel (cf. \[Sl2\]). Therefore, any ordinary representation $\varphi$ of $G$ yields a representation $\varphi\circ\eta_H$ of $H$ with $(0,0,1)$ in its kernel. According to \[realHeis\] this implies $\{0\}^2\times{\mathbb R}\le \ker(\varphi\circ\eta_H)$. Therefore, the subgroup $$R:={\{{(\zeta^{2z},(0,0,x))}\left|\vphantom{}\right.\, {z\in{\mathbb Z},x\in{\mathbb R}}\}} / K$$ is contained in ${{\text{\sc{Kor}}}(G)}$. Since $G/R\cong{\mathrm{SL}_{2}{{\mathbb R}}}\times{\mathbb R}^2$ has a faithful ordinary representation we obtain ${{\text{\sc{Kor}}}(G)}=R\cong{\mathbb R}$. The examples collected so far suffice to determine the class ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc Lie}}}})}$. 
For the proof of \[KOconnLie\] we need the following explicit description of the class ${\text{\sc CgAL}}$ (see also [@MR2542208]). \[membersCgAL\] The elements of ${\text{\sc CgAL}}$ are precisely those of the form $${\mathbb Z}^f\times \prod_{j\in J}({\mathbb Z}/d_j{\mathbb Z}) \times ({\mathbb R}/{\mathbb Z})^c \times{\mathbb R}^e \eqno{(*)}$$ where $f,e,c$ are nonnegative integers and $(d_j)_{j\in J}$ is a finite family of positive integers. The groups with a product decomposition as in $(*)$ are clearly compactly generated Lie groups. Among all locally compact abelian groups, those of the form $(*)$ are characterized by the property of being compactly generated and having no small subgroups, cf. [@MR2226087 21.17]. Since a Lie group has no small subgroups, the assertion follows. \[KOconnLie\] Exactly the compactly generated abelian Lie groups occur as ${{\text{\sc{Kor}}}(L)}$ for a suitable connected Lie group $L$. In other words, we have ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc Lie}}}})}={\text{\sc CgAL}}$. Each member of the class ${\text{\sc CgAL}}$ is isomorphic to a product $${\mathbb Z}^f\times \prod_{j\in J}({\mathbb Z}/d_j{\mathbb Z}) \times ({\mathbb R}/{\mathbb Z})^c \times{\mathbb R}^e$$ where $f,e,c$ are nonnegative integers and $(d_j)_{j\in J}$ is a finite family of positive integers, cf. \[membersCgAL\]. Thus the assertion of our theorem follows from \[cartProd\] together with the examples given in \[realHeis\], \[Sl2\], and \[SL2timesH\]. Explicitly, the group $$L:= S^f \times \prod_{j\in J} S_{d_j} \times (H/Z)^c \times ((S\times H)/K)^e$$ satisfies our requirements. In order to prove the converse it suffices to note that ${{\text{\sc{Kor}}}(G)}$ is a closed abelian subgroup of $G$ if $G\in{\text{\sc Conn}{{\text{\sc Lie}}}}$, and thus lies in ${\text{\sc CgAL}}$. 
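As a concrete instance of the construction in the proof (spelled out for illustration), take $f=c=e=1$ and a single $d_1=3$:

```latex
% Realizing A = Z x (Z/3Z) x (R/Z) x R as Kor(L) for a connected Lie group L:
L \;=\; S \,\times\, S_{3} \,\times\, (H/Z) \,\times\, \bigl((S\times H)/K\bigr),
\qquad
{{\text{\sc{Kor}}}(L)} \;\cong\; {\mathbb Z}\times({\mathbb Z}/3\,{\mathbb Z})\times({\mathbb R}/{\mathbb Z})\times{\mathbb R},
```

where the four factors contribute ${\mathbb Z}$, ${\mathbb Z}/3{\mathbb Z}$, ${\mathbb R}/{\mathbb Z}$, and ${\mathbb R}$ by \[Sl2\], \[realHeis\], and \[SL2timesH\], assembled via \[cartProd\].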
Even for a simply connected Lie group $L$ with simple complex Lie algebra it need not be true that $L/{{\text{\sc{Kor}}}(L)}$ has an *irreducible* faithful ordinary representation. For instance, consider the simply connected covering ${\mathrm{Spin}_{8}}{{\mathbb C}}$ of ${\mathrm{SO}_{8}{{\mathbb C}}}$; the center of that group is non-cyclic and cannot be mapped faithfully into the centralizer of an irreducible ordinary representation because that centralizer is a subgroup of the multiplicative group of Hamilton’s quaternions by Schur’s Lemma. However, the group ${\mathrm{Spin}_{8}}{{\mathbb C}}$ is linear, and ${{\text{\sc{Kor}}}({\mathrm{Spin}_{8}}{{\mathbb C}})}$ is trivial. \[finiteIndex\] If $U$ is an open normal subgroup of finite index in $G$ then ${{\text{\sc{Kor}}}(U)} = {{\text{\sc{Kor}}}(G)}$. For any $\rho\in{\mathrm{OR}(U)}$ the $U$-module $M_\rho$ associated with $\rho$ yields an induced module $L_\rho:={\mathrm{ind}_{U}^{G}}M_\rho$, cf. [@MR1357169 8.4, p.230f] or [@MR783636 XVIII 7.3]. The corresponding representation $\lambda\colon G\to{\mathrm{GL}_{}{(L_\rho)}}$ can now be combined with the faithful regular representation $\mu\colon G/U\to{\mathrm{GL}_{}{({\mathbb C}^{G/U})}}$ of the finite quotient to obtain an ordinary representation of $G$ whose kernel is contained in $\ker\rho$. Since $\rho\in{\mathrm{OR}(U)}$ was arbitrary we obtain ${{\text{\sc{Kor}}}(G)}\le{{\text{\sc{Kor}}}(U)}$. The reverse inclusion is clear from \[KOquotients\]. In an almost connected Lie group $G$ the discrete and compact quotient $G/G_0$ is finite. Thus \[finiteIndex\] yields: \[almConnLie\] If $G$ is an almost connected Lie group then ${{\text{\sc{Kor}}}(G)}={{\text{\sc{Kor}}}(G_0)}$. In particular, we have ${{\text{\sc{Kor}}}({\text{\sc almConn}{{\text{\sc Lie}}}})} = {{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc Lie}}}})} = {\text{\sc CgAL}}$. 
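In the finite setting the induction step in the proof above can be made completely explicit. The following toy computation (illustrative only; the symmetric group $\mathrm{Sym}(3)$ here is unrelated to the coverings $S_d$) induces a faithful character of the alternating subgroup of index $2$ and checks that the induced representation is again a homomorphism with trivial kernel:

```python
import cmath
from itertools import permutations

# Toy instance of the induction step: U = A_3 (even permutations of {0,1,2})
# has index 2 in G = Sym(3).  We induce a faithful character of U to G and
# check that the result is a homomorphism with trivial kernel.

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    r = [0] * 3
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

e = (0, 1, 2)
c = (1, 2, 0)                              # a 3-cycle generating A_3
w = cmath.exp(2j * cmath.pi / 3)
rho = {e: 1, c: w, compose(c, c): w * w}   # faithful character of A_3
reps = [e, (1, 0, 2)]                      # coset representatives of A_3

def induced(g):
    """Matrix of the induced representation: entry (i, j) equals
    rho(r_i^{-1} g r_j) when that element lies in A_3, and 0 otherwise."""
    return [[rho.get(compose(compose(inverse(ri), g), rj), 0)
             for rj in reps] for ri in reps]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-9
               for i in range(2) for j in range(2))

G = list(permutations(range(3)))
for g in G:
    for h in G:
        assert close(induced(compose(g, h)), matmul(induced(g), induced(h)))
identity = [[1, 0], [0, 1]]
assert [g for g in G if close(induced(g), identity)] == [e]  # trivial kernel
```

In the notation of the proof, `induced` plays the role of the representation on ${\mathrm{ind}_{U}^{G}}M_\rho$; since it already has trivial kernel here, combining with the regular representation of the finite quotient is not even needed in this small example.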
One would of course like to extend \[almConnLie\] to the classes ${\text{\sc almConn}{{\text{\sc LCG}}}} \subset {\text{\sc almConn}{{\text{\sc ProLie}}}}$. The ordinary representations of the compact quotient separate the points; thus ${{\text{\sc{Kor}}}(G)}\le G_0$ if $G\in{\text{\sc almConn}{{\text{\sc ProLie}}}}$. If one wants to proceed along the lines of the proof of \[finiteIndex\] then there remains the problem of extending a representation of $G_0$ via induction. This question is treated by Mackey [@MR0042420] where an invariant scalar product is assumed and the group in question is required to be locally compact *and separable*. Note that \[finiteIndex\] yields ${{\text{\sc{Kor}}}(U)}={{\text{\sc{Kor}}}(G)}$ for each open normal subgroup $U$ of $G$ (such a subgroup is automatically of finite index: it contains $G_0$, and $G/G_0$ is compact) but ${{\text{\sc{Kor}}}(G_0)}$ might still be smaller although $G_0$ is the intersection of those open normal subgroups (cf. [@MR2226087 6.8]). Examples that are not Lie groups ================================ A natural source of locally compact groups that are not Lie groups originates from the fact that the class ${\text{\sc CG}}$ is closed under arbitrary cartesian products. Another well-understood (and rich) class of locally compact groups is the class ${\text{\sc LCA}}$. Neither class contributes to ${{\text{\sc{Kor}}}({\text{\sc LCG}})}$: \[KOCG\]\[KOLCA\] The class ${{\text{\sc{Kor}}}({\text{\sc CG}})}\cup{{\text{\sc{Kor}}}({\text{\sc LCA}})}\cup{{\text{\sc{Kor}}}({\text{\sc Ab}{{\text{\sc ProLie}}}})}$ consists of the trivial group alone. In fact, ${{\text{\sc{Kor}}}(G)}$ is trivial for every abelian proto-Lie group $G$. The Peter-Weyl Theorem asserts that the ordinary representations separate the points in any compact group, cf. [@MR2226087 14.33]. Thus ${{\text{\sc{Kor}}}({\text{\sc CG}})}$ contains only the trivial group. It remains to consider $A\in{\text{\sc LCA}}$. 
Every character of $A$ is a continuous homomorphism from $A$ into ${\mathbb R}/{\mathbb Z}\cong{\mathrm{U}_{1}{{\mathbb C}}}<{\mathrm{GL}_{1}{{\mathbb C}}}$ and thus belongs to ${\mathrm{OR}(A)}$. Pontryagin duality (cf. [@MR2226087 22.6]) implies that the characters separate the points of $A$. Thus ${{\text{\sc{Kor}}}(A)}$ is trivial. For a proto-Lie group $G$ the continuous homomorphisms to Lie groups separate the points. If $G$ is abelian then it suffices to consider homomorphisms from $G$ to abelian Lie groups, and it follows that ${{\text{\sc{Kor}}}(G)}$ is trivial. \[powerD\] Let $C$ be a compact simple non-abelian group and let $D$ be any infinite discrete group. For instance, the group $C={\mathrm{SO}_{3}{{\mathbb R}}}$ would do; in any case, $C$ will be connected (see [@MR2226087 4.13]). The product topology turns $C^D$ into a compact connected group belonging to ${\text{\sc Conn}{{\text{\sc LCG}}}}\subset{\text{\sc Conn}{{\text{\sc ProLie}}}}$ but not to ${\text{\sc Lie}}$. We form the semidirect product $G:=D\ltimes C^D$ where conjugation with $d\in D$ maps $(c_j)_{j\in D} \in C^D$ to $(b_j)_{j\in D}$ with $b_j=c_{dj}$. Then $G\in{\text{\sc LCG}}$. The closed normal subgroups of $C^D$ are in one-to-one correspondence with the lattice of subsets of $D$, cf. [@MR1859180 3.12]: to $J\subseteq D$ we associate the product $\prod_{j\in D}B_j$ where $B_j=C$ if $j\in J$ and $B_j$ is trivial if $j\in D \setminus J$. Consequently, every non-trivial closed normal subgroup of $G$ contains the connected component $G_0\cong C^D$, and we have ${{\text{\sc{Kor}}}(G)}=G_0\cong C^D$ here. Our investigation of the normal subgroups also makes clear that $G$ does not belong to ${\text{\sc ProLie}}$. Let $S$ again denote the universal covering group of ${\mathrm{SL}_{2}{{\mathbb R}}}$. As in [@MR1391958 Ex.0.6] we consider the filter basis $\mathcal{N}(S)^\times$ of all *nontrivial* subgroups of the center of $S$. 
The projective limit $G:=\lim_{N\in\mathcal{N}(S)^\times}S/N$ is a connected locally compact group (see [@MR1391958 2.12]) with a center ${\mathrm{Z}(G)}$ isomorphic to the universal zero dimensional compactification of ${\mathbb Z}$. In other words ${\mathrm{Z}(G)}$ is isomorphic to $\prod_{p\in{\mathbb P}}{\mathbb Z}_p$ where ${\mathbb P}$ is the set of primes and ${\mathbb Z}_p$ is the additive group of $p$-adic integers. Note that ${\mathrm{Z}(G)}$ has a unique quotient of order $2$ because ${\mathrm{Z}(G)}/{\{{z^2}\left|\vphantom{}\right.\, {z\in{\mathrm{Z}(G)}}\}} \cong \prod_{p\in{\mathbb P}}{\mathbb Z}_p / \prod_{p\in{\mathbb P}}2\,{\mathbb Z}_p \cong {\mathbb Z}_2/2\,{\mathbb Z}_2 \cong {\mathbb Z}/2\,{\mathbb Z}$. From [@MR1391958 2.14] we infer that $G$ and $G/{\mathrm{Z}(G)}\cong{\mathrm{PSL}_{2}{{\mathbb R}}}$ have essentially the same Lie algebra. As $G/{\mathrm{Z}(G)}\cong{\mathrm{PSL}_{2}{{\mathbb R}}}$ is a simple group, the only proper normal subgroups of $G$ are those of ${\mathrm{Z}(G)}$. If $\rho$ is an ordinary representation of $G$ then $\rho({\mathrm{Z}(G)})$ is a pro-finite subgroup of a Lie group and thus finite. This means that $\ker\rho=\ker(\rho|_{{\mathrm{Z}(G)}})$ is co-finite in ${\mathrm{Z}(G)}$, and $G/\ker\rho$ is an extension of Lie groups (namely $G/{\mathrm{Z}(G)}\cong{\mathrm{PSL}_{2}{{\mathbb R}}}$ and the finite group ${\mathrm{Z}(G)}/\ker\rho$). Now $G/\ker\rho$ is a connected Lie group, has the same Lie algebra as ${\mathrm{PSL}_{2}{{\mathbb R}}}$ and possesses a faithful ordinary representation. Thus $|{\mathrm{Z}(G)}/\ker\rho|\le2$ and $\ker\rho$ contains ${\{{z^2}\left|\vphantom{}\right.\, {z\in{\mathrm{Z}(G)}}\}}\cong\prod_{p\in{\mathbb P}}{\mathbb Z}_p$. Since $G/{\{{z^2}\left|\vphantom{}\right.\, {z\in{\mathrm{Z}(G)}}\}}\cong{\mathrm{SL}_{2}{{\mathbb R}}}$ clearly has a faithful ordinary representation we obtain ${{\text{\sc{Kor}}}(G)}\cong\prod_{p\in{\mathbb P}}{\mathbb Z}_p$. 
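The identification ${\mathrm{Z}(G)}/{\{{z^2}\left|\vphantom{}\right.\, {z\in{\mathrm{Z}(G)}}\}}\cong{\mathbb Z}/2\,{\mathbb Z}$ used above rests on an elementary fact about the $p$-adic integers, recorded here for convenience:

```latex
% 2 is a unit in Z_p for every odd prime p, so 2Z_p = Z_p there;
% only the factor at p = 2 survives modulo squares:
\prod_{p\in{\mathbb P}}{\mathbb Z}_p \Big/ \prod_{p\in{\mathbb P}}2\,{\mathbb Z}_p
\;\cong\; {\mathbb Z}_2/2\,{\mathbb Z}_2 \;\cong\; {\mathbb Z}/2\,{\mathbb Z},
```

the last isomorphism being reduction of the constant term of the $2$-adic expansion.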
A topological group $G$ is called *monothetic* if there exists $g\in G$ such that the subgroup generated by $g$ is dense in $G$. Any such $g$ is called a *topological generator* of $G$. The class of all compact monothetic (necessarily abelian) groups will be denoted by ${\text{\sc monCA}}$. The one-parameter subgroups of $G$ are the elements of ${\mathrm{Hom}_{}({\mathbb R},G)}$. We say that $G$ has a *dense one-parameter subgroup* if there is $\varphi\in{\mathrm{Hom}_{}({\mathbb R},G)}$ with $\closure{\varphi({\mathbb R})}=G$. \[lemm:monothetic\] 1. A locally compact monothetic group is either compact or isomorphic to ${\mathbb Z}$. 2. If $A\in{\text{\sc LCA}}$ has a dense one-parameter subgroup then $A$ is either isomorphic to ${\mathbb R}$ or $A$ is a connected compact monothetic group. 3. \[dualMonothetic\] The Pontryagin duals of compact monothetic groups are the subgroups of the discrete group ${\mathbb Q}^{(2^{\aleph_0})}\times{\mathbb Q}/{\mathbb Z}$. 4. The duals of connected compact monothetic groups are the subgroups of the discrete group ${\mathbb Q}^{(2^{\aleph_0})} \cong {\mathbb R}_{\mathrm{discr}}$. The first two assertions are known as Weil’s Lemma, cf. [@MR2226087 6.26]. Assertion \[dualMonothetic\] is due to [@MR0006543]; we present the argument for the reader’s convenience. The existence of a dense cyclic subgroup means that there is an epimorphism $\eta\colon{\mathbb Z}\to A$ in the category ${\text{\sc LCA}}$ which upon dualizing gives a monomorphism ${\eta^*}\colon{A^*}\to{{\mathbb Z}^*}\cong{\mathbb R}/{\mathbb Z}$; cf. [@MR2226087 15.5, 15.7, 20.13]. Since $A$ is compact the dual ${A^*}$ is discrete (see [@MR2226087 20.6]), and we may interpret ${\eta^*}$ as an embedding of ${A^*}$ into the discrete group ${\mathbb R}_{\mathrm{discr}}/{\mathbb Z}\cong {\mathbb Q}^{(2^{\aleph_0})}\times{\mathbb Q}/{\mathbb Z}$. Now connectedness of $A$ implies that ${A^*}$ is torsion-free (cf. 
[@MR2226087 23.18]) and ${\eta^*}$ induces an embedding of ${A^*}$ into the quotient ${\mathbb Q}^{(2^{\aleph_0})}$ of ${\mathbb R}_{\mathrm{discr}}/{\mathbb Z}$ modulo its torsion group ${\mathbb Q}/{\mathbb Z}$. In order to prove the last assertion we dualize the epimorphism $\varphi\colon{\mathbb R}\to A$ to a monomorphism ${\varphi^*}\colon{A^*}\to{{\mathbb R}^*}\cong{\mathbb R}$ and then replace the range by the discrete group ${\mathbb R}_{\mathrm{discr}}$. \[exa:HeisenbergWithCpCenter\] Generalizing the construction described in \[realHeis\] we take $A\in{\text{\sc LCA}}$ with a dense one-parameter subgroup $\varphi\in{\mathrm{Hom}_{}({\mathbb R},A)}$ and define a multiplication $*$ on ${\mathbb R}^2\times A$ by $(a,b,x)*(c,d,y)=(a+c,b+d,x+y+\varphi(ad-bc))$. Then $H_\varphi:=({\mathbb R}^2\times A,*)$ is a connected locally compact group. If $A$ is compact we proceed as in \[realHeis\] to see ${{\text{\sc{Kor}}}(H_\varphi)}=\{0\}^2\times A\cong A$. \[exa:monotheticConnCA\] Again, let $S$ denote the universal covering group of ${\mathrm{SL}_{2}{{\mathbb R}}}$. Assume that $A$ is a compact monothetic group and let $c$ be a topological generator. Proceeding as in \[SL2timesH\] we construct the quotient $G$ of $S\times A$ such that $\zeta^2$ is identified with $c$. Then an argument as in \[SL2timesH\] shows that ${{\text{\sc{Kor}}}(G)}$ contains the closure of the subgroup generated by the image of $c$ in $G$. This is a subgroup isomorphic to $A$. 
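The circle ${\mathbb R}/{\mathbb Z}$ is the simplest nontrivial compact monothetic group: every irrational $\alpha$ is a topological generator. The following script (a numerical illustration with the arbitrary choice $\alpha=\sqrt2$, not part of the argument) measures the largest gap on the circle left by the first $1000$ multiples of $\alpha$:

```python
import math

# R/Z is monothetic: for irrational alpha the subgroup {n*alpha mod 1} is
# dense.  We measure the largest gap left by the first N multiples.
alpha = math.sqrt(2)
N = 1000
points = sorted((n * alpha) % 1.0 for n in range(1, N + 1))
# Gaps between consecutive points, including the wrap-around gap.
gaps = [b - a for a, b in zip(points, points[1:])]
gaps.append(points[0] + 1.0 - points[-1])
max_gap = max(gaps)
assert max_gap < 0.01, max_gap
print(f"largest gap after {N} multiples of sqrt(2) mod 1: {max_gap:.6f}")
```

By the three-distance theorem the gaps take at most three distinct values, all of which shrink to $0$ as $N$ grows.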
Put ${\mathop{\mathfrak{w}}}_0(X) := {\mathop{\mathfrak{w}}}(X)-1$; this coincides with ${\mathop{\mathfrak{w}}}(X)$ if the latter is infinite. Let $C_0(X,{\mathbb T})$ denote the set of all continuous functions from $X$ to ${\mathbb T}$ mapping the base point to $0$. This set endowed with the compact-open topology and the pointwise operations becomes a topological group. The quotient $\left[X,{\mathbb T}\right] := C_0(X,{\mathbb T}) / C_0(X,{\mathbb T})_0$ modulo the connected component $C_0(X,{\mathbb T})_0$ is discrete (cf. the paragraph preceding [@MR2261490 8.50]); its compact Pontryagin dual ${\left[X,{\mathbb T}\right]^*}$ plays a crucial role in the structure of the free compact abelian group $F(X)$. \[structureFreeCA\] For every nonsingleton pointed compact space $X$ the free compact abelian group $F(X)$ is isomorphic to $({{\mathbb Q}^*})^{{\mathop{\mathfrak{w}}}(X)^{\aleph_0}} \times \prod_{p\in{\mathbb P}}{\mathbb Z}_p^{{\mathop{\mathfrak{w}}}_0(X/\text{conn})} \times {\left[X,{\mathbb T}\right]^*}$. The group $\left[X,{\mathbb T}\right]$ is torsion-free, and its Pontryagin dual ${\left[X,{\mathbb T}\right]^*}$ is a quotient of some power of ${{\mathbb Q}^*}$. The structure of $F(X)$ is known from [@MR849093 1.5.4], cf. [@MR2261490 8.67]. For every compact Hausdorff space $X$ the group $\left[X,{\mathbb T}\right] \cong H^1(X,{\mathbb Z})$ is torsion-free, see [@MR849093 1.3.1], cf. [@MR2261490 8.50(ii)]. For $d:=\dim_{\mathbb Q}({\mathbb Q}\otimes\left[X,{\mathbb T}\right])$ we have an embedding of $\left[X,{\mathbb T}\right]$ into ${\mathbb Q}\otimes\left[X,{\mathbb T}\right]\cong{\mathbb Q}^{(d)}$ which dualizes to the quotient map in question. \[KOconnProLie\] The class ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ contains at least the class ${\bm{\Pi}}({\text{\sc CgAL}}\cup{\text{\sc CA}}) = {\bm{\Pi}}({\{{\mathbb Z},{\mathbb R}\}\cup{\text{\sc CA}}})$. 
In particular, we have ${\text{\sc Conn}{{\text{\sc Ab}{{\text{\sc ProLie}}}}}} \subset {{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$. The class ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$ contains the class ${\text{\sc small}{{\text{\sc LCA}}}} = {\mathbf{P}}(\{{\mathbb Z},{\mathbb R}\}\cup{\text{\sc monCA}})$ consisting of all groups of the form ${\mathbb Z}^f\times A\times{\mathbb R}^e$ with $e,f\in{\mathbb N}$ and a compact monothetic group $A$. Abelian pro-Lie groups are studied in [@MR2103546], cf. [@MR2337107 Ch.5]: in particular, the connected ones are of the form ${\mathbb R}^c\times C$ where $c$ is arbitrary (possibly infinite) and $C$ is a connected compact abelian group. Such a group belongs to ${\text{\sc Conn}{{\text{\sc LCA}}}}$ precisely if $c$ is finite, cf. [@MR2226087 24.9]. Let $C$ be a compact abelian group. Then $C$ is a quotient of the free compact abelian group $F(C)$. From \[structureFreeCA\] we know that $F(C)$ is a quotient of a product of compact monothetic groups. The class ${\text{\sc monCA}}$ of all compact monothetic groups is contained in ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})} \subset {{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ by \[exa:monotheticConnCA\] and \[exa:HeisenbergWithCpCenter\]. From \[cartProd\] and \[quotientsKOConnProLie\] we conclude ${\text{\sc CA}}= {\mathbf{{Q}}}{\bm{\Pi}}({\text{\sc monCA}}) \subseteq {{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ and ${\mathbf{{Q}}}{\mathbf{P}}({\text{\sc monCA}}) \subseteq {{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$. From \[Sl2\] and \[SL2timesH\] we recall that ${\mathbb Z}$ and ${\mathbb R}$ lie in ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}\subset{{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$; it remains to use products once again. 
The factor $A$ in \[KOconnProLie\] cannot be arbitrarily large; we show ${\mathop{\mathfrak{w}}}(A_0)\le 2^{\aleph_0}$ in \[connKOsmall\] below. If the answer to \[sKOfinite\] is affirmative then we know that ${\text{\sc small}{{\text{\sc LCA}}}}$ coincides with ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$, cf. \[KorConnLCGfinGen\]. We have mentioned in \[structureFreeCA\] that the factor ${\left[X,{\mathbb T}\right]^*}$ of $F(X)$ has a torsion-free dual. Conversely, every torsion-free abelian group $A$ is isomorphic to $\left[X,{\mathbb T}\right]$ for some compact connected Hausdorff space (namely, for the underlying space $X=|{A^*}|$ of the Pontryagin dual of $A$), see [@MR849093 1.3.2]. The corrections in [@MR927286] only concern assertions about cardinalities (dimension, rank) in  [@MR849093]. \[noLargeDiscrete\] Each discrete member of ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ is finitely generated. Each member of ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$ is compactly generated. Discrete central subgroups of connected pro-Lie groups are finitely generated by [@MR2475971 5.10]. Consider $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$. Any compact neighborhood in ${{\text{\sc{Kor}}}(G)}$ generates a central subgroup $C$ of $G$, and ${{\text{\sc{Kor}}}(G)}/C$ is a discrete subgroup of $G/C \in {\text{\sc Conn}{{\text{\sc LCG}}}} \subset {\text{\sc Conn}{{\text{\sc ProLie}}}}$. Now it remains to note that the class of compactly generated locally compact groups is closed under extensions, see [@MR2226087 6.11]. \[cpGenLCA\] Locally compact abelian groups are compactly generated precisely if they are contained in some almost connected locally compact group, see [@MR2542208 Cor.1]. The compactly generated members of ${\text{\sc LCA}}$ are of the form ${\mathbb Z}^f\times C \times{\mathbb R}^e$ with natural numbers $e,f$ and some $C\in{\text{\sc CA}}$, cf. [@MR2226087 23.11]. 
Note that *every* $C\in{\text{\sc CA}}$ is contained in a connected compact group because the characters separate the points: this yields an embedding into ${\mathbb T}^{{C^*}}$. Connected locally compact groups ================================ Our aim in this section is to establish a bound on the size of ${{\text{\sc{Kor}}}(G)}$ if $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$. To this end, we need some information on the weight (cf. \[defs:weight\]) and generating rank. If $X$ is discrete then ${\mathop{\mathfrak{w}}}(X)$ is just the cardinality $|X|$. The function ${\mathop{\mathfrak{w}}}$ is monotonic; i.e. each subspace $Y\subseteq X$ satisfies ${\mathop{\mathfrak{w}}}(Y)\le {\mathop{\mathfrak{w}}}(X)$. For an infinite compact group the weight coincides with the *local weight*, i.e., the minimal cardinality of a neighborhood basis. For $C\in{\text{\sc Conn}{{\text{\sc CA}}}}$ a finer invariant than the weight is the *rank* $\dim_{\mathbb Q}({\mathbb Q}\otimes{C^*})$ of its dual ${C^*}$. Note that ${C^*}$ embeds in ${\mathbb Q}\otimes{C^*}$ only if ${C^*}$ is torsion-free (i.e., if $C$ is connected, see [@MR2226087 23.18]). The rank of ${C^*}$ coincides with the topological dimension of $C$, cf. [@MR2261490 8.26]. Every compact abelian Lie group has finite dimension; its dual is finitely generated. \[exas:Weight\] 1. For each positive integer $n$ we have ${\mathop{\mathfrak{w}}}({\mathbb R}^n)=\aleph_0$. 2. \[weightDual\] The equality ${\mathop{\mathfrak{w}}}(A)={\mathop{\mathfrak{w}}}({A^*})$ holds for each $A\in{\text{\sc LCA}}$. 3. In particular, for $C\in{\text{\sc CA}}$ we have ${\mathop{\mathfrak{w}}}(C)=|{C^*}|$. 4. For $C\in{\text{\sc CA}}$ we have ${\mathop{\mathfrak{w}}}(C_0) = \max\{\aleph_0,\dim_{\mathbb Q}({\mathbb Q}\otimes{C^*})\}$ unless $C$ is trivial. 5. If $n$ is a positive integer and $C\in{\text{\sc Conn}{{\text{\sc CA}}}}$ then ${\mathop{\mathfrak{w}}}({\mathbb R}^n\times C) = \max\{\aleph_0,{\mathop{\mathfrak{w}}}(C)\}$. 6. 
If $G\in{\text{\sc CG}}$ and $N$ is a totally disconnected closed normal (and thus central) subgroup of $G_0$ then ${\mathop{\mathfrak{w}}}(G)={\mathop{\mathfrak{w}}}(G/N)$. The assertion on ${\mathop{\mathfrak{w}}}({\mathbb R}^n)$ is obvious from the fact that the underlying space is metrizable and the weight equals the local weight. See [@MR551496 24.14] or [@MR2261490 7.76] for assertion \[weightDual\]. The assertions on compact groups are taken from [@MR2261490 12.25] and [@MR1082789 3.2]. \[characterizeMonothetic\] For $C\in{\text{\sc CA}}$ we have ${\mathop{\mathfrak{w}}}(C_0)\le 2^{\aleph_0}$ precisely if $C_0$ is monothetic. If $C\in{\text{\sc CA}}$ has a finitely generated dense subgroup then ${\mathop{\mathfrak{w}}}(C)\le2^{\aleph_0}$. From \[exas:Weight\] we know ${\mathop{\mathfrak{w}}}(C_0)=\dim_{\mathbb Q}({\mathbb Q}\otimes{C_0^*})$. Since ${C_0^*}$ is torsion-free we have an embedding $\eta\colon{C_0^*}\to{\mathbb Q}\otimes{C_0^*}$. Thus ${C_0^*}$ is the dual of a monothetic group, see \[lemm:monothetic\]. Following [@MR1082789] (cf. [@MR2261490 12.1]), a subset $X$ of a topological group $G$ is called *suitable* if it does not contain the neutral element $1$, is discrete and closed in $G\setminus\{1\}$, and generates a dense subgroup of $G$. Every locally compact group possesses suitable sets by [@MR1082789 1.12]. We define the *generating rank* $s(G)$ as the minimum over the cardinalities of suitable sets. \[exas:genRank\] 1. For the additive group ${\mathbb R}$ any two elements that are linearly independent over ${\mathbb Q}$ form a suitable set. Thus $s({\mathbb R})=2$. 2. A suitable set for ${\mathbb R}^n$ needs at least $n+1$ elements because fewer vectors will either be linearly dependent (and thus contained in a proper closed subgroup) or form a basis (and then generate a discrete proper subgroup). It is known that there exists $v\in{\mathbb R}^n$ such that $v+{\mathbb Z}^n$ generates a dense subgroup of ${\mathbb R}^n/{\mathbb Z}^n$.
The standard basis together with any such $v$ forms a suitable set for ${\mathbb R}^n$. Thus $s({\mathbb R}^n)=n+1$. 3. For $G\in{\text{\sc Conn}{{\text{\sc CG}}}}$ the generating rank depends on the weight, see [@MR2261490 12.22]: 1. If ${\mathop{\mathfrak{w}}}(G)\le2^{\aleph_0}$ and $G$ is abelian (but not trivial) then $s(G)=1$. 2. If ${\mathop{\mathfrak{w}}}(G)\le2^{\aleph_0}$ and $G$ is not abelian then $s(G)=2$. 3. If ${\mathop{\mathfrak{w}}}(G)>2^{\aleph_0}$ then $s(G)=\min{\left\{{\beth}\left|\vphantom{\beth{\mathop{\mathfrak{w}}}(G)\le\beth^{\aleph_0}\strut}\right.\, {{\mathop{\mathfrak{w}}}(G)\le\beth^{\aleph_0}}\right\}}$. 4. In particular, the generating rank of any connected Lie group is finite. \[genRankModTotDisconnected\] Let $G\in{\text{\sc Conn}{{\text{\sc CG}}}}$ and let $N$ be a normal subgroup of $G$. Then $s(G/N)\le s(G)\le s(G/N)+s(N)$. If $N$ is totally disconnected then $s(G)=s(G/N)$. \[redMaxCp\] Let $G\in{\text{\sc almConn}{{\text{\sc LCG}}}}$. By the Maltsev–Iwasawa Theorem (see [@MR0029911 Thm.13] combined with the solution of Hilbert’s Fifth Problem [@MR0058607], cf. [@MR0073104] or [@MR0276398]) there exists a maximal compact subgroup $M$ of $G$ which is unique up to conjugacy. If $M\ne G$ then there is a positive integer $n$ (called the characteristic index in [@MR0029911]) and there are subgroups $R_1,\dots,R_n$ all isomorphic to ${\mathbb R}$ such that $G=R_1\cdots R_nM$. The number $n$ equals the topological dimension $\dim{G/M}$ of the coset space $G/M$ (which is actually a manifold). If $M=G$ we just have $n=0$. Note that $M$ is connected. Thus the weight of $G$ equals that of $M$ if $G=M$ and satisfies ${\mathop{\mathfrak{w}}}(G)=\max\{\aleph_0,{\mathop{\mathfrak{w}}}(M)\}$ otherwise. \[genRankAlmConnLCG\] For $G\in{\text{\sc almConn}{{\text{\sc LCG}}}}$ pick a maximal compact subgroup $M$ of $G$. Then $s(G)\le 2\dim{G/M}+s(M)$. In particular, the generating rank $s(G)$ is finite if ${\mathop{\mathfrak{w}}}(M)\le2^{\aleph_0}$. 
If $\psi\colon G\to H$ is a continuous homomorphism with dense range then $s(G)\ge s(H)$. We use subgroups $R_1,\dots,R_n$ as in \[redMaxCp\], where $n:=\dim{G/M}$. Combining a suitable set for $M$ with suitable sets for each $R_j$ we find $s(G)\le 2n+s(M)$. From \[exas:genRank\] we thus infer that $s(G)$ is finite if $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$ satisfies ${\mathop{\mathfrak{w}}}(G)\le2^{\aleph_0}$. Every suitable set $X$ constructed in this way will be relatively compact because only finitely many elements lie outside the compact group $M$. Now [@MR2261490 12.4] asserts that $\psi(X)\setminus\{1\}$ will be suitable in $H$ whenever $\psi\colon G\to H$ is a continuous homomorphism with dense range. \[estimateGenRankVsWeight\] For each non-trivial $G\in{\text{\sc almConn}{{\text{\sc LCG}}}}$ we have $s(G)\le {\mathop{\mathfrak{w}}}(G)\le s(G)^{\aleph_0}$. By the Maltsev–Iwasawa Theorem (cf. \[redMaxCp\]) we know that $G$ is homeomorphic to ${\mathbb R}^n\times M$ for a maximal compact subgroup $M$ and some nonnegative integer $n$. The estimates $s(M)\le {\mathop{\mathfrak{w}}}(M)\le s(M)^{\aleph_0}$ are valid for any non-trivial compact group $M$, see [@MR2261490 12.27]. If the group $G$ is compact it coincides with $M$. If $M$ is trivial but $n\ge1$ then \[genRankAlmConnLCG\] yields that $s(G)\le 2n$ is finite. Thus $s(G)\le \aleph_0 = {\mathop{\mathfrak{w}}}(G) < 2^{\aleph_0} = s(G)^{\aleph_0} $. There remains the case where $G$ is not compact and $M$ is not trivial. Then $n\ge1$ and ${\mathop{\mathfrak{w}}}(G)=\max\{\aleph_0,{\mathop{\mathfrak{w}}}(M)\}$. Now the estimates $s(G)\le 2n+s(M)\le \aleph_0+s(M) \le \aleph_0+{\mathop{\mathfrak{w}}}(M) = {\mathop{\mathfrak{w}}}(G) $ from \[genRankAlmConnLCG\] and ${\mathop{\mathfrak{w}}}(G) = \aleph_0+{\mathop{\mathfrak{w}}}(M) \le \aleph_0+s(M)^{\aleph_0} \le 2^{\aleph_0}+ s(M)^{\aleph_0} = s(G)^{\aleph_0}$ yield the claim. 
The inequality $s(G)\le {\mathop{\mathfrak{w}}}(G)$ holds for arbitrary $G\in{\text{\sc LCG}}$, see [@MR1082789 4.2]. \[IwasawaRevisited\] Let $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$. We consider a compact normal subgroup $K$ of $G$, the centralizer $C:={\mathrm{C}_{G}(K)}$, and its connected component $C_0$. Then $G=C_0K$. We abbreviate $D:=C\cap K$. From [@MR0029911 Thm.2] we know $CK=G$. The group $C$ is a closed subgroup of the $\sigma$-compact group $G$ and thus $\sigma$-compact itself, see [@MR2226087 6.10, 6.12]. The Open Mapping Theorem (cf. [@MR2226087 6.19]) yields that $C/D$ is isomorphic to $G/K$. Now $C/(C_0D)$ is totally disconnected (cf. [@MR2226087 6.9]) but also connected because it is a continuous image of $C/D \cong G/K$. Thus $C=C_0D$ and $C_0K = C_0DK = CK = G$. \[sKOfiniteIfAbK\] If $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$ has a compact normal subgroup $K$ such that $G/K$ is a Lie group and $K_0$ is abelian then $s(\closure{G'})$ is finite. Consequently, the group ${{\text{\sc{Kor}}}(G)}$ satisfies ${\mathop{\mathfrak{w}}}({{\text{\sc{Kor}}}(G)})\le 2^{\aleph_0}$ in this case. The compact abelian normal subgroup $K_0$ is contained in the center of $G$. A dense subgroup of $\closure{G'}$ is generated by $\gamma(S\times S)$ where $S$ is any suitable set for $G/K_0$ and $\gamma\colon G/K_0\times G/K_0 \to \closure{G'}\colon (aK_0,bK_0) \mapsto aba^{-1}b^{-1}$ is induced by the commutator map. We pick a maximal compact subgroup $M$ of $G$. Then $K_0\le M$ and by \[genRankAlmConnLCG\] there exists a nonnegative integer $n$ such that $s(G/K_0) \le 2n+s(M/K_0)$. Now $s(G/K_0)$ is finite because the compact connected Lie group $M/K$ has finite generating rank and $s(M/K_0)=s(M/K)$ by \[genRankModTotDisconnected\]. Thus we may choose a finite set $S$; then $\gamma(S\times S)\setminus\{1\}$ is a finite (and thus indeed closed and discrete) suitable set for $\closure{G'}$.
The bound for the weight of ${{\text{\sc{Kor}}}(G)}$ now follows from \[estimateGenRankVsWeight\], monotonicity of the weight function and the fact that ${{\text{\sc{Kor}}}(G)}$ is contained in $\closure{G'}$, see \[KOinZcapComm\]. For any solvable compact group the connected component is abelian, see [@MR0029911 Thm.2]. Thus \[sKOfiniteIfAbK\] yields: \[sKOfiniteIfSolv\] If $G$ is a solvable connected locally compact group then $s(\closure{G'})<\aleph_0$ and ${\mathop{\mathfrak{w}}}({{\text{\sc{Kor}}}(G)})\le{\mathop{\mathfrak{w}}}(\closure{G'})\le 2^{\aleph_0}$. \[exam:wZlarge\] Let $\aleph$ be any cardinal, let $S$ be a connected compact Lie group with simple Lie algebra and non-trivial center $Z$, and put $G := S^\aleph$. Then $Z$ is finite and $\closure{S'} = S$. Thus $G$ is a compact connected group with $\closure{G'}=G$ and ${\mathrm{Z}(G)}\cap\closure{G'} = {\mathrm{Z}(G)} \cong Z^\aleph$. Now ${\mathop{\mathfrak{w}}}({\mathrm{Z}(G)}\cap\closure{G'}) = {\mathop{\mathfrak{w}}}(Z^\aleph) = |Z^{(\aleph)}|$, and this cardinality may be arbitrarily large. \[connKOsmall\] For each $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$ we have ${\mathop{\mathfrak{w}}}({{\text{\sc{Kor}}}(G)}_0)\le2^{\aleph_0}$. In particular, the connected component of the maximal compact subgroup of ${{\text{\sc{Kor}}}(G)}$ is monothetic. Let $K$ be the maximal compact normal subgroup of $G$ (cf. [@MR0029911 Thm.14]). Then \[IwasawaRevisited\] says $G=C_0K$ where $C_0$ is the connected component of the centralizer $C:={\mathrm{C}_{G}(K)}$ of $K$ in $G$. Thus $\closure{G'} = \closure{C_0'K'} = \closure{C_0'}\,\closure{K'}$ because $\closure{K'}$ is compact, and $({\mathrm{Z}(G)}\cap\closure{G'})_0 \le C_0\cap\closure{G'}$ is contained in $\closure{C_0'}(C_0\cap\closure{K'}) \le C_0$. The locally compact group $C_0/(C_0\cap K)$ admits an injective continuous homomorphism into the Lie group $G/K$ and thus is a Lie group itself.
Moreover, we have that $C_0\cap K$ is contained in the center of $K$ and thus abelian. Thus \[sKOfiniteIfAbK\] applies to $C_0$ and $K$, yielding ${\mathop{\mathfrak{w}}}(\closure{C_0'})\le2^{\aleph_0}$. The group $B:=C_0\cap\closure{K'}$ is contained in the intersection of $\closure{K'}$ with the center of the compact group $K$. Thus $B$ is totally disconnected, see [@MR2261490 9.23], and the quotient $B\closure{C_0'}/\closure{C_0'} \cong B/(B\cap\closure{C_0'})$ is totally disconnected, as well. This yields that the connected component ${{\text{\sc{Kor}}}(G)}_0$ of ${{\text{\sc{Kor}}}(G)}$ is contained in $\closure{C_0'}$, and the bound ${\mathop{\mathfrak{w}}}({{\text{\sc{Kor}}}(G)}_0)\le2^{\aleph_0}$ is established. Now the connected component of the maximal compact subgroup of ${{\text{\sc{Kor}}}(G)}$ is monothetic by \[characterizeMonothetic\]. \[KorConnLCGfinGen\] For each $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$ there exist $A\in{\text{\sc CA}}$ and natural numbers $e,f$ such that $A_0$ is monothetic and ${{\text{\sc{Kor}}}(G)}\cong{\mathbb Z}^f\times A\times{\mathbb R}^e$. In particular, the dimension of members of ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$ is bounded by $2^{\aleph_0}$. We combine \[connKOsmall\] with \[noLargeDiscrete\], \[cpGenLCA\], and \[characterizeMonothetic\]. Partial results =============== The classes ${{\text{\sc{Kor}}}({\text{\sc SepLie}})}$, ${{\text{\sc{Kor}}}({\text{\sc Lie}})}$, ${{\text{\sc{Kor}}}({\text{\sc LCG}})}$ and ${{\text{\sc{Kor}}}({\text{\sc ProLie}})}$ are quite large and not very well understood. We will indicate some large subclasses and note (in \[KOLCGnotClosedUnderQ\]) that these classes are not closed under the operations ${\mathbf{\overline{S}}}$, ${\mathbf{{Q}}}$, and ${\mathbf{{\widehat{Q}}}}$. \[KOsepLie\] The class ${{\text{\sc{Kor}}}({\text{\sc SepLie}})}$ contains the following: 1. 
The class ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc Lie}}}})} = {{\text{\sc{Kor}}}({\text{\sc almConn}{{\text{\sc Lie}}}})} = {\text{\sc CgAL}}$, cf. [\[KOconnLie\]]{} and [\[almConnLie\]]{}. 2. \[SimpleCharP\] All countable discrete simple groups with infinite elementary abelian subgroups, cf. [\[ex:Burnside\]\[bounded\]]{}. 3. Countably infinite discrete simple groups with finitely many conjugacy classes, such as those constructed as HNN-extensions, see [\[ex:Burnside\]\[HNN\]]{}. Moreover, ${{\text{\sc{Kor}}}({\text{\sc SepLie}})}$ is closed under ${\mathbf{P}}$. Note that ${{\text{\sc{Kor}}}({\text{\sc SepLie}})}$ is considerably larger than ${{\text{\sc{Kor}}}({\text{\sc almConn}{{\text{\sc Lie}}}})} = {\text{\sc CgAL}}$. \[KOLCG\] The class ${{\text{\sc{Kor}}}({\text{\sc LCG}})}$ contains the following: 1. The class ${{\text{\sc{Kor}}}({\text{\sc SepLie}})} \cup {{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$, and thus ${\text{\sc CgAL}}$ and all compact monothetic groups. 2. All groups of the form $C^D$ where $C$ is a compact simple non-abelian group and $D$ is infinite, see [\[powerD\]]{}. 3. \[pAdicSimple\] All simple non-discrete totally disconnected locally compact groups. 4. \[tooLargeSimple\] All simple discrete groups of cardinality larger than $2^{\aleph_0}$. 5. All discrete simple groups with infinite elementary abelian subgroups. Moreover, ${{\text{\sc{Kor}}}({\text{\sc LCG}})}$ is closed under ${\mathbf{P}}$ but not under ${\bm{\Pi}}$. The class in \[KOLCG\]\[pAdicSimple\] includes the simple $p$-adic groups such as ${\mathrm{PSL}_{n}{{\mathbb Q}_p}}$. Among the groups in \[KOLCG\]\[tooLargeSimple\] we find, for instance, the simple classical groups over large fields such as ${\mathrm{PSL}_{n}{F}}$ where $F={\mathbb Q}(X)$ is a purely transcendental extension with a transcendency basis $X$ such that $|X|>|{\mathbb R}|$. 
\[noSOthree\] The groups ${\mathrm{SO}_{3}{{\mathbb R}}}$ and ${\mathrm{PSL}_{2}{{\mathbb C}}}$ do not belong to ${{\text{\sc{Kor}}}({\text{\sc TG}})}$. The Lie algebra ${{\mathfrak{L}}({\mathrm{SO}_{3}{{\mathbb R}}})}$ is isomorphic to the vector product algebra $({\mathbb R}^3,\times)$, and ${\mathrm{Ad}_{}}({\mathrm{SO}_{3}{{\mathbb R}}})\cong{\mathrm{SO}_{3}{{\mathbb R}}}$ contains all automorphisms of that algebra. Therefore, each automorphism of ${\mathrm{SO}_{3}{{\mathbb R}}}$ is an inner automorphism, see[^2]  [@MR2261490 6.59]. Now assume that there exists a group $G\in{\text{\sc TG}}$ with ${{\text{\sc{Kor}}}(G)}\cong{\mathrm{SO}_{3}{{\mathbb R}}}$. Then $G$ is the direct product of ${{\text{\sc{Kor}}}(G)}$ with its centralizer $C$, see \[IwasawaRevisited\]. Thus $G/C\cong{\mathrm{SO}_{3}{{\mathbb R}}}$ has a faithful ordinary representation, and ${{\text{\sc{Kor}}}(G)}\le C$. This is a contradiction. For the group ${\mathrm{PSL}_{2}{{\mathbb C}}}\cong{\mathrm{PGL}_{2}{{\mathbb C}}}$ we can proceed in the same way because this group also has only inner automorphisms, see [@54.0149.02], cf. [@MR606555]. \[KOLCGnotClosedUnderQ\] The classes ${{\text{\sc{Kor}}}({\text{\sc LCG}})}$ and ${{\text{\sc{Kor}}}({\text{\sc ProLie}})}$ are not closed under any one of the operations ${\mathbf{\overline{S}}}$, ${\mathbf{{Q}}}$, or ${\mathbf{{\widehat{Q}}}}$. The group ${\mathrm{PSL}_{2}{{\mathbb R}}}$ has outer automorphisms (induced by elements of ${\mathrm{GL}_{2}{{\mathbb R}}}$ with non-square determinant). Every group ${\mathrm{PSL}_{n}{F}}$ with $n>2$ over a commutative field $F$ has outer automorphisms induced by polarities of the projective space. Thus the argument used in the proof of \[noSOthree\] does not easily extend to arbitrary classical simple groups. 
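The outer automorphisms of ${\mathrm{PSL}_{2}{{\mathbb R}}}$ mentioned above can be written down explicitly; the following display (added here for illustration) is the standard example:

```latex
% Conjugation by g in GL_2(R) with det(g) = -1 induces an automorphism
% of PSL_2(R). Its class in PGL_2(R) lies outside PSL_2(R) because -1
% is not a square in R^*, so the automorphism is outer.
\[
  \alpha\colon \mathrm{PSL}_2(\mathbb{R})\to\mathrm{PSL}_2(\mathbb{R}),
  \qquad
  \alpha(x)=g\,x\,g^{-1},
  \qquad
  g=\begin{pmatrix}1&0\\0&-1\end{pmatrix},
  \quad \det g=-1\notin(\mathbb{R}^\times)^2 .
\]
```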
Open questions {#sec:openQuestions} ============== \[sKOfinite\]\[prob:ConnLCG\] Is it true that ${\mathop{\mathfrak{w}}}({{\text{\sc{Kor}}}(G)})\le 2^{\aleph_0}$ holds for every $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$ ? [*Comments on .*]{}. If the answer to this problem is affirmative then ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})} = {\text{\sc small}{{\text{\sc LCA}}}}$, cf. \[KOconnProLie\], \[noLargeDiscrete\] and \[cpGenLCA\]. In \[noLargeDiscrete\] we have seen that ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$ consists of compactly generated groups, and \[connKOsmall\] says that the connected component of the maximal compact subgroup of ${{\text{\sc{Kor}}}(G)}$ lies in ${\text{\sc small}{{\text{\sc CA}}}}$. For an affirmative answer to \[prob:ConnLCG\] it suffices to exclude totally disconnected compact groups $A$ with ${\mathop{\mathfrak{w}}}(A) > 2^{\aleph_0}$ from ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc LCG}}}})}$ because that class is closed under the passage to Hausdorff quotients, cf. \[quotients\]. For $G\in{\text{\sc Conn}{{\text{\sc LCG}}}}$ consider the maximal compact normal subgroup $K$ of $G$ and let $C_0$ be the connected component of the centralizer of $K$. Then the weight of $C_0\cap\closure{K'}$ may be arbitrarily large, as \[exam:wZlarge\] shows. It is therefore clear that we have to find a more subtle approach than \[KOinZcapComm\] if we want to give an affirmative answer to \[sKOfinite\]. \[prob:KOConnProLie\] Which abelian pro-Lie groups are in ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ ? [*Comments on .*]{}. Every element of ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ is contained in the center of a connected pro-Lie group (namely, $G$) and thus contained in some connected abelian pro-Lie group, cf. [@MR2337107 12.90]. 
From \[KOconnProLie\] we thus infer $${\text{\sc Conn}{{\text{\sc Ab}{{\text{\sc ProLie}}}}}} \subset {{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})} \subseteq {\mathbf{\overline{S}}}({\text{\sc Conn}{{\text{\sc Ab}{{\text{\sc ProLie}}}}}}) \,.$$ A discrete abelian group belongs to ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ precisely if it is finitely generated (and thus lies in ${\text{\sc CgAL}}$), see \[noLargeDiscrete\]. 1. Is ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ closed under ${\mathbf{{\widehat{Q}}}}$ ? 2. Is ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ closed under ${\mathbf{\overline{S}}}$ ? [*Comments on .*]{}. We know that ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ is not closed under ${\mathbf{{Q}}}$ because the group ${\mathbb R}^{\mathbb R}$ belongs to ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ but has a quotient which is not complete (cf. [@MR2337107 4.11]), and thus does not lie in ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})} \subseteq {\text{\sc Ab}{{\text{\sc ProLie}}}}$. From \[quotientsKOConnProLie\] we know that ${{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc ProLie}}}})}$ is closed under quotients modulo locally compact groups. What about ${{\text{\sc{Kor}}}(G)}$ if $G$ is an almost connected pro-Lie group, or an almost connected locally compact group? [*Comments on .*]{}. The conclusion of \[connCenter\] breaks down if we drop the assumption of connectedness, cf. \[powerD\]. In many questions about the structure of pro-Lie groups it is possible to weaken a connectedness hypothesis to “almost connected” (i.e., compactness of $G/G_0$). For instance, almost connected locally compact groups are pro-Lie groups, while for arbitrary disconnected locally compact groups the homomorphisms into Lie groups need not separate the points.
We have seen in \[almConnLie\] that ${{\text{\sc{Kor}}}({\text{\sc almConn}{{\text{\sc Lie}}}})} = {{\text{\sc{Kor}}}({\text{\sc Conn}{{\text{\sc Lie}}}})}$ is very well behaved. The examples in \[powerD\] fail to be almost connected, and also fail to be pro-Lie groups. The same applies to most of our examples of groups $G\in{\text{\sc LCG}}$ with ${{\text{\sc{Kor}}}(G)}=G$. Note that the discrete examples are in ${\text{\sc Lie}}\subset{\text{\sc ProLie}}$. For $\mathcal{G}\in\{{\text{\sc LCG}},{\text{\sc ProLie}}\}$ we ask: 1. Is ${\text{\sc CA}}$ completely contained in ${{\text{\sc{Kor}}}(\mathcal{G})}$ ? 2. Is ${\text{\sc LCA}}$ completely contained in ${{\text{\sc{Kor}}}(\mathcal{G})}$ ? 3. Which part of ${\text{\sc CG}}$ is contained in ${{\text{\sc{Kor}}}(\mathcal{G})}$ ? 4. Which discrete groups are in ${{\text{\sc{Kor}}}(\mathcal{G})}$ ? 5. What *is* ${{\text{\sc{Kor}}}(\mathcal{G})}$ ? [*Comments on .*]{}. If we drop all connectedness assumptions on $G$ we obtain examples $G$ where ${{\text{\sc{Kor}}}(G)}$ is not abelian. However, the inclusion ${{\text{\sc{Kor}}}(\mathcal{G})}\subset\mathcal{G}$ is far from being an equality. The class ${{\text{\sc{Kor}}}(\mathcal{G})}$ appears to be complicated and not accessible to an easy “constructive” description (such as: “take the following basic examples and use certain constructions like products or quotients”). Certainly, the class ${\text{\sc CG}}$ is not completely contained in ${{\text{\sc{Kor}}}(\mathcal{G})}$. For instance, we know that ${\text{\sc CG}}\cap{{\text{\sc{Kor}}}(\mathcal{G})}$ is not closed under ${\mathbf{\overline{S}}}$ or ${\mathbf{{Q}}}$, see \[noSOthree\]. It is also open whether arbitrary discrete groups are in ${{\text{\sc{Kor}}}({\text{\sc ProLie}})}$. Appendix: linear groups {#sec:appendix} ======================= We collect some known facts regarding the question whether a given group is *linear*, i.e., admits a faithful ordinary representation.
In our present terminology, this means that the trivial group is a member of ${\{{\ker\rho}\left|\vphantom{}\right.\, {\rho\in{\mathrm{OR}(G)}}\}}$. Examples like the additive group ${\mathbb Z}_p$ of $p$-adic integers or any infinite elementary abelian group show that this condition is, in general, much stronger than the condition that ${{\text{\sc{Kor}}}(G)}$ is trivial. By way of contraposition, we use the criteria for linearity in certain examples in order to determine ${{\text{\sc{Kor}}}(G)}$ or to even show ${{\text{\sc{Kor}}}(G)}=G$. See \[ex:Burnside\] but also \[Sl2\], \[SL2timesH\], \[exa:monotheticConnCA\]. \[maltsev\] Let $G$ be a connected Lie group. 1. The group $G$ is linear if, and only if, its solvable radical and its maximal semisimple subgroups are linear. 2. If $G$ is semisimple and linear and $Z$ is a discrete normal (i.e., central) subgroup of $G$ then $G/Z$ is linear, as well. 3. \[maltsev3\] If $G$ is solvable then it is linear if, and only if, it is the semidirect product of a maximal compact subgroup and a simply connected normal subgroup. \[nahlus\] Let $G$ be a connected Lie group with Lie algebra $\mathfrak{g}:={{\mathfrak{L}}(G)}$, let $\mathfrak{r}$ be the solvable radical of the commutator algebra $\mathfrak{g}'$, and let $\mathfrak{z}$ be the center of $\mathfrak{g}$. Choose a maximal torus $T$ of the solvable radical of $G$ and a maximal semisimple subgroup $S$. Then $G$ is linear precisely if $\mathfrak{r}\cap\mathfrak{z}\cap{{\mathfrak{L}}(T)}=\{0\}$ and $S$ is linear. We have seen that the structure of ${{\text{\sc{Kor}}}(G)}$ for a connected Lie group $G$ may depend essentially on the choice of $G$ as a quotient of its simply connected covering $\tilde{G}$. This raises the problem of characterizing those Lie algebras whose associated Lie groups are *all* linear. 
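The prototypical illustration of this dependence (a classical example, recalled here for convenience) is the Lie algebra $\mathfrak{sl}_2({\mathbb R})$: the quotient ${\mathrm{SL}_{2}{{\mathbb R}}}$ of the simply connected covering group is linear, while the covering group itself is not:

```latex
% sl_2(R) has linear groups (SL_2(R), PSL_2(R)) among its associated
% connected groups, but also a non-linear one: the universal covering
% group of SL_2(R) has infinite cyclic center and admits no faithful
% finite-dimensional representation.
\[
  \pi_1\bigl(\mathrm{SL}_2(\mathbb{R})\bigr)\cong\mathbb{Z},
  \qquad
  \mathrm{Z}\Bigl(\widetilde{\mathrm{SL}_2(\mathbb{R})}\Bigr)\cong\mathbb{Z},
  \qquad
  \widetilde{\mathrm{SL}_2(\mathbb{R})}\ \text{is not linear.}
\]
```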
Let $\mathfrak{g}$ be a Lie algebra of finite dimension over ${\mathbb R}$, let $\mathfrak{r}$ be the solvable radical of the commutator algebra $\mathfrak{g}'$, let $\mathfrak{z}$ be the center of $\mathfrak{g}$, and let $\mathfrak{s}$ be a maximal semisimple subalgebra of $\mathfrak{g}$. Then *every* connected Lie group with Lie algebra $\mathfrak{g}$ is linear precisely if the simply connected group associated to $\mathfrak{s}$ is linear and $\mathfrak{r}\cap\mathfrak{z}=\{0\}$. \[simpleLieInfinitePi\] Among the connected Lie groups with simple Lie algebra, the most obvious non-linear examples are those with infinite center. These are precisely those connected Lie groups with simple Lie algebra where the maximal compact subgroups have centralizers of positive dimension. Cf. [@MR514561 Ch.VIII, §6; Ch.X, §6]. Assume that $G$ is a connected Lie group, and that $G$ is linear. Then the holomorph ${\mathrm{Aut}(G)}\ltimes G$ is linear if, and only if, one of the following holds: 1. The nilradical of $G$ is simply connected. 2. The group $G$ is perfect (i.e., coincides with its commutator subgroup). 3. The quotient $G/G'$ is isomorphic to ${\mathbb R}/{\mathbb Z}$. \[Burnside\] 1. \[BurnsideBounded\] If $G\le{\mathrm{GL}_{n}{{\mathbb C}}}$ has finite exponent (i.e. if there exists $m\ge1$ such that ${\{{g^m}\left|\vphantom{}\right.\, {g\in G}\}}$ is trivial) then $G$ is a finite group. 2. \[BurnsideClasses\] If $H\le{\mathrm{GL}_{n}{{\mathbb C}}}$ contains only finitely many conjugacy classes then $H$ is finite. \[schurTorsion\] If $G\le{\mathrm{GL}_{n}{{\mathbb C}}}$ is a torsion group (i.e., if every element of $G$ has finite order) then every finitely generated subgroup of $G$ is finite. \[ex:Burnside\] Using Burnside’s results we provide examples of (discrete) simple non-abelian groups in ${{\text{\sc{Kor}}}({\text{\sc Lie}})}$ or even in ${{\text{\sc{Kor}}}({\text{\sc SepLie}})}$ — separability just means countability here. 1. 
\[bounded\] For each infinite field $F$ of positive characteristic $p$ the simple group ${\mathrm{PSL}_{2}{F}}$ does not admit any non-trivial ordinary representation because it contains an infinite group of exponent $p$, cf. \[Burnside\]\[BurnsideBounded\]. Thus ${{\text{\sc{Kor}}}({\mathrm{PSL}_{2}{F}})} = {\mathrm{PSL}_{2}{F}}$. 2. \[HNN\] There exists a countably infinite group $H$ such that every non-trivial element of $H$ has infinite order, and all these elements form a single conjugacy class, see [@MR0032641]. Clearly this group $H$ is simple, and ${{\text{\sc{Kor}}}(H)}=H$ follows from \[Burnside\]\[BurnsideClasses\]. 3. If $p$ is a sufficiently large prime then there exists a countably infinite group ${\mathrm{Olm}_{p}}$ such that every proper subgroup of ${\mathrm{Olm}_{p}}$ has order $p$ and any such subgroup contains a set of representatives for the conjugacy classes, see [@MR1191619 §19]. We may conclude ${{\text{\sc{Kor}}}({\mathrm{Olm}_{p}})}={\mathrm{Olm}_{p}}$ from any one of Burnside’s results as stated in \[Burnside\], and also from \[schurTorsion\]. I. D. Ado, *The representation of [L]{}ie algebras by matrices*, Uspehi Matem. Nauk (N.S.) **2** (1947), no. 6(22), 159–173, ISSN 0042-1316. I. D. Ado, *The representation of [L]{}ie algebras by matrices*, Amer. Math. Soc. Translation **2** (1949), 21, ISSN 0065-9290. S. Ardanza-Trevijano, M. J. Chasco, X. Dom[í]{}nguez, *The role of real characters in the [P]{}ontryagin duality of topological abelian groups*, J. Lie Theory **18** (2008), no. 1, 193–203, ISSN 0949-5932, <http://www.heldermann-verlag.de/jlt/jlt18/chascola2e.pdf>. R. B[ö]{}di and M.
Joswig, *[R]{}eal[LIE]{}. [A]{} software package for real representations of quasi-simple [L]{}ie groups*, <http://www.mathematik.tu-darmstadt.de/~joswig/RealLie/>. R. B[ö]{}di and M. Joswig, *Tables for an effective enumeration of real representations of quasi-simple [L]{}ie groups*, Sem. Sophus Lie **3** (1993), no. 2, 239–253, ISSN 0940-2268, <http://www.heldermann-verlag.de/jlt/jlt03/BOEDIPL.PDF>. N. Bourbaki, *General topology. [C]{}hapters 1–4*, Elements of Mathematics (Berlin), Springer-Verlag, Berlin, 1989, ISBN 3-540-19374-X. W. Burnside, *On criteria for the finiteness of the order of a group of linear substitutions*, Proc. London Math. Soc. (2) **3** (1905), 435–440. M. J. Collins, *Representations and characters of finite groups*, Cambridge Studies in Advanced Mathematics 22, Cambridge University Press, Cambridge, 1990, ISBN 0-521-23440-9. J. E. Diem and F. B. Wright, *Real characters and the radical of an abelian group*, Trans. Amer. Math. Soc. **129** (1967), 517–529, ISSN 0002-9947, <http://www.jstor.org/stable/1994605>. J. Dieudonn[é]{}, *On the automorphisms of the classical groups*, Memoirs of the American Mathematical Society 2, American Mathematical Society, Providence, R.I., 1980, ISBN 0-8218-1202-5. P. R. Halmos and H. Samelson, *On monothetic groups*, Proc. Nat. Acad. Sci. U. S. A. **28** (1942), 254–258, ISSN 0027-8424, <http://www.pnas.org/content/28/6/254.full.pdf>. Harish-Chandra, *Faithful representations of [L]{}ie algebras*, Ann. of Math. (2) **50** (1949), 68–76, ISSN 0003-486X. Harish-Chandra, *On faithful representations of [L]{}ie groups*, Proc. Amer. Math. Soc. **1** (1950), 205–210, ISSN 0002-9939, <http://www.jstor.org/stable/2031923>. S. Helgason, *Differential geometry, [L]{}ie groups, and symmetric spaces*, Pure and Applied Mathematics 80, Academic Press Inc., New York, 1978, ISBN 0-12-338460-5. E. Hewitt and K. A. Ross, *Abstract harmonic analysis. [V]{}ol.
[I]{}*, Grundlehren der [M]{}athematischen [W]{}issenschaften 115, Springer-Verlag, Berlin, 1979, ISBN 3-540-09434-2. G. Higman, B. H. Neumann, H. Neumann, *Embedding theorems for groups*, J. London Math. Soc. **24** (1949), 247–254, ISSN 0024-6107. K. H. Hofmann, *Analytic groups without analysis*, *Symposia [M]{}athematica, [V]{}ol. [XVI]{} ([C]{}onvegno sui [G]{}ruppi [T]{}opologici e [G]{}ruppi di [L]{}ie, [INDAM]{}, [R]{}ome, 1974)*, 357–374, Academic Press, London, 1975. K. H. Hofmann and S. A. Morris, *Free compact groups. [I]{}. [F]{}ree compact abelian groups*, Topology Appl. **23** (1986), no. 1, 41–64, ISSN 0166-8641. K. H. Hofmann and S. A. Morris, *Correction: “[F]{}ree compact groups. [I]{}. [F]{}ree compact abelian groups” \[[T]{}opology [A]{}ppl. [**23**]{} (1986), no. 1, 41–64; [MR]{}0849093 (88a:22011)\]*, Topology Appl. **28** (1988), no. 1, 101–102, ISSN 0166-8641. K. H. Hofmann and S. A. Morris, *Weight and [$c$]{}*, J. Pure Appl. Algebra **68** (1990), no. 1-2, 181–194, ISSN 0022-4049. K. H. Hofmann and S. A. Morris, *The structure of abelian pro-[L]{}ie groups*, Math. Z. **248** (2004), no. 4, 867–891, ISSN 0025-5874. K. H. Hofmann and S. A. Morris, *The structure of compact groups*, de [G]{}ruyter Studies in Mathematics 25, Walter de Gruyter & Co., Berlin, augmented edition, 2006, ISBN 978-3-11-019006-9; 3-11-019006-0. K. H. Hofmann and S. A. Morris, *The [L]{}ie theory of connected pro-[L]{}ie groups*, [EMS]{} Tracts in Mathematics 2, European Mathematical Society (EMS), Zürich, 2007, ISBN 978-3-03719-032-6. K. H. Hofmann, S. A. Morris, M. Stroppel, *Locally compact groups, residual [L]{}ie groups, and varieties generated by [L]{}ie groups*, Topology Appl. **71** (1996), no. 1, 63–91, ISSN 0166-8641. K. H. Hofmann and K.-H. Neeb, *Pro-[L]{}ie groups which are infinite-dimensional [L]{}ie groups*, Math. Proc. Cambridge Philos. Soc. **146** (2009), no. 2, 351–378, ISSN 0305-0041. K. H. Hofmann and K.-H.
Neeb, *The compact generation of closed subgroups of locally compact groups*, J. Group Theory **12** (2009), no. 4, 555–559, ISSN 1433-5883. K. Iwasawa, *On the representation of [L]{}ie algebras*, Jap. J. Math. **19** (1948), 405–426. K. Iwasawa, *On some types of topological groups*, Ann. of Math. (2) **50** (1949), 507–558, ISSN 0003-486X. I. Kaplansky, *Lie algebras and locally compact groups*, The University of Chicago Press, Chicago, Ill.-London, 1971. S. Lang, *Algebra*, Addison-Wesley Publishing Company Advanced Book Program, Reading, MA, 1984, ISBN 0-201-05487-6. D. H. Lee and T. S. Wu, *On faithful representations of the holomorph of [L]{}ie groups*, Math. Ann. **275** (1986), no. 3, 521–527, ISSN 0025-5831. G. W. Mackey, *On induced representations of groups*, Amer. J. Math. **73** (1951), 576–592, ISSN 0002-9327. A. I. Maltsev, *On linear [L]{}ie groups*, C. R. (Doklady) Acad. Sci. URSS (N. S.) **40** (1943), 87–89. D. Montgomery and L. Zippin, *Topological transformation groups*, Interscience Publishers, New York-London, 1955. M. Moskowitz, *A remark on faithful representations*, Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) **52** (1972), 829–831 (1973). N. Nahlus, *Note on faithful representations and a local property of [L]{}ie groups*, Proc. Amer. Math. Soc. **125** (1997), no. 9, 2767–2769, ISSN 0002-9939. A. Y. Olshanski[ĭ]{}, *Geometry of defining relations in groups*, Mathematics and its Applications (Soviet Series) 70, Kluwer Academic Publishers Group, Dordrecht, 1991, ISBN 0-7923-1394-1. A. L. Onishchik and È. B. Vinberg, *Lie groups and algebraic groups*, Springer Series in Soviet Mathematics, Springer-Verlag, Berlin, 1990, ISBN 3-540-50614-4. D. J. S. Robinson, *A course in the theory of groups*, Graduate Texts in Mathematics 80, Springer-Verlag, New York, 1996, ISBN 0-387-94461-3. W. Roelcke and S.
Dierolf, *Uniform structures on topological groups and their quotients*, McGraw-Hill International Book Co., New York, 1981, ISBN 0-07-0543412-8. H. Salzmann, D. Betten, T. Grundh[ö]{}fer, H. H[ä]{}hl, R. L[ö]{}wen, and M. Stroppel, *Compact projective planes*, de [G]{}ruyter Expositions in Mathematics 21, Walter de Gruyter & Co., Berlin, 1995, ISBN 3-11-011480-1. O. Schreier and B. L. van der Waerden, *Die [A]{}utomorphismen der projektiven [G]{}ruppen*, [A]{}bh. [M]{}ath. [S]{}em. [U]{}niv. [H]{}amburg **6** (1928), 303–322. I. Schur, *[Über Gruppen periodischer linearer Substitutionen.]{}*, Sitzungsber. Preuss. Akad. Wiss. (1911), 619–627. M. Stroppel, *Locally compact groups with many automorphisms*, J. Group Theory **4** (2001), no. 4, 427–455, ISSN 1433-5883. M. Stroppel, *Locally compact groups*, EMS Textbooks in Mathematics, European Mathematical Society (EMS), Zürich, 2006, ISBN 3-03719-016-7. W. T. van Est, *On [A]{}do’s theorem*, Nederl. Akad. Wetensch. Proc. Ser. A 69 = Indag. Math. **28** (1966), 176–191. B. A. F. Wehrfritz, *Infinite linear groups. [A]{}n account of the group-theoretic properties of infinite groups of matrices*, Springer-Verlag, New York, 1973. F. B. Wright, *Topological abelian groups*, Amer. J. Math. **79** (1957), 477–496, ISSN 0002-9327. H. Yamabe, *A generalization of a theorem of [G]{}leason*, Ann. of Math. (2) **58** (1953), no. 2, 351–365, ISSN 0003-486X. A. E. Zalesski[ĭ]{}, *Linear groups*, *Current problems in mathematics. [F]{}undamental directions, [V]{}ol. 37 ([R]{}ussian)*, Itogi Nauki i Tekhniki, 114–228, 236, Akad. Nauk SSSR Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1989. A substantial part of these notes was written while the author was a guest of SFB 478 “Geometrische Strukturen in der Mathematik”, Münster, Germany. [**Author’s address:** ]{}\ Markus Stroppel, Fachbereich Mathematik, Universität Stuttgart, D-70550 Stuttgart, Germany.
[^1]: One should not confuse this notion of real character (of topological abelian groups) with the usage of the term in the theory of characters of finite groups where it denotes a class function assuming real values, cf. [@MR1050762 p.56ff]. Take the direct product $\prod_{p\in{\mathbb P}}{\mathbb Z}(p^\infty)$ with the discrete topology. The direct sum $\bigoplus_{p\in{\mathbb P}}{\mathbb Z}(p^\infty) \cong {\mathbb Q}/{\mathbb Z}$ is the torsion subgroup, but the full product is isomorphic to ${\mathbb Q}^{(2^{\aleph_0})}\oplus{\mathbb Q}/{\mathbb Z}$. The latter is isomorphic to $({\mathbb R}/{\mathbb Z})_{\mathrm{discr}}$ and to $({\mathbb R}\times{\mathbb Q}/{\mathbb Z})_{\mathrm{discr}}$. The projection onto the torsionfree summand ${\mathbb Q}^{(2^{\aleph_0})}\cong{\mathbb R}_{\mathrm{discr}}$ is a real character, indeed. [^2]: The discussion of ${\mathrm{Aut}({{\mathfrak{L}}({\mathrm{SO}_{3}{{\mathbb R}}})})}$ in [@MR2261490 p.252] contains an error; indeed ${\mathrm{O}_{3}{{\mathbb R}}}\setminus{\mathrm{SO}_{3}{{\mathbb R}}} \not\subseteq {\mathrm{Aut}({{\mathfrak{L}}({\mathrm{SO}_{3}{{\mathbb R}}})})}$.
--- abstract: | In this paper we consider a model of particles jumping on a row of cells, called in physics the one dimensional totally asymmetric exclusion process (TASEP). More precisely we deal with the TASEP with open or periodic boundary conditions and with two or three types of particles. From the point of view of combinatorics a remarkable feature of this Markov chain is that it involves Catalan numbers in several entries of its stationary distribution. We give a combinatorial interpretation and a simple proof of these observations. In doing this we reveal a second row of cells, which is used by particles to travel backward. As a byproduct we also obtain an interpretation of the occurrence of the Brownian excursion in the description of the density of particles on a long row of cells. address: - 'E. D.: CAMS, EHESS, 52 bd Raspail, 75006 Paris, France' - 'G. S.: LIX, CNRS, École Polytechnique, 91128 Palaiseau, France' author: - Enrica Duchi - 'Gilles Schaeffer${}^*$' bibliography: - 'libre.bib' date: 'October 19, 2003, revised July 25, 2004' title: A combinatorial approach to jumping particles --- Jumping particles {#JumpingGilles} ================= We shall consider a model of jumping particles on a row of $n$ cells that was exactly solved in the early 90’s in physics, under the name *one dimensional totally asymmetric exclusion process with open boundaries*, or TASEP for short. Roughly speaking, the TASEP consists of black particles entering a row of $n$ cells from an infinite reservoir on the left-hand side and randomly hopping to the right with the simple exclusion rule that each cell may contain at most one particle. ![An informal illustration of the TASEP.[]{data-label="fig:rough"}](reservoir.eps){width="70mm"} The TASEP is usually defined as a continuous-time Markov process on a finite set of configurations of particles on a line. 
We shall use an alternative definition as a finite state Markov chain —with discrete time— which is more convenient for our combinatorial purpose. One could insist on calling our chain the TASEC, with “C” for chain instead of “P” for process, but as we will argue later, there is no need for this distinction. Another cosmetic modification we allow ourselves consists in putting a white particle in each empty cell, so as to make explicit the left-right particle-hole symmetry of the chain. Definition of the TASEP ----------------------- A *TASEP configuration* is a row of $n$ cells, each containing either one black particle or one white particle (see Figure \[fig:example-config\]). These cells are delimited by $n+1$ walls: the left border (or wall $0$), the $i$th separation wall for $i=1,\ldots,n-1$, and the right border (or wall $n$). ![A basic configuration with $n=10$ cells.[]{data-label="fig:example-config"}](example-basic-config.eps){width="40mm"} The TASEP is a Markov chain $S^0_{\alpha\beta\gamma}$ defined on the set of TASEP configurations for any three parameters $\alpha$, $\beta$ and $\gamma$ in the interval $]0,1]$. From time $t$ to $t+1$, the chain evolves from the configuration $\tau=S^0_{\alpha\beta\gamma}(t)$ to a configuration $\tau'=S^0_{\alpha\beta\gamma}(t+1)$ as follows: - A wall $i$ is chosen uniformly at random among the $n+1$ walls, and then may become *active* with probability $\lambda(i)$, with $\lambda(i)=\alpha$ for $i=1,\ldots,n-1$, $\lambda(0)=\beta$ and $\lambda(n)=\gamma$. - If the wall does not become active, then nothing happens: $\tau'=\tau$. - Otherwise from $\tau$ to $\tau'$ some changes may occur near the active wall: - If the active wall is not a border ($i\in\{1,\ldots,n-1\}$) and has a black particle on its left-hand side and a white one on its right-hand side, then these two particles swap: $\bullet{\ensuremath{|\!|}}\circ\to\circ|\bullet$. 
- If the active wall is the left border ($i=0$) and the leftmost cell contains a white particle, then this particle leaves the system and is replaced by a black one: ${\ensuremath{|\!|}}\circ\to|\bullet$. - If the active wall is the right border ($i=n$) and the rightmost cell contains a black particle, then this particle leaves the system and is replaced by a white one: $\bullet{\ensuremath{|\!|}}\to\circ|$. - Otherwise the configuration is left unchanged: $\tau'=\tau$. As illustrated by Figure \[fig:example-evolution\], black particles travel from left to right and white particles do the opposite. The entire chain for $n=3$ is shown in Figure \[fig:system-basic\]. The four cases $a,b,c,d$ define an application $\vartheta:(\tau,i)\mapsto\tau'$ from the set of configurations with an active wall into the set of configurations. The definition of the TASEP can be rephrased in terms of this application as: at time $t$ choose a random wall $i=I(t)$ and set $$S^0_{\alpha\beta\gamma}(t+1) =\left\{ \begin{array}{ll} \vartheta(S^0_{\alpha\beta\gamma}(t),i) & \textrm{with probability } \lambda(i),\\ S^0_{\alpha\beta\gamma}(t) & \textrm{otherwise}. \end{array} \right.$$ The parameters $\alpha$, $\beta$ and $\gamma$ control the rate at which particles try to move inside the system and at the borders. In particular, we shall call *maximal flow regime* the special case $\alpha=\beta=\gamma=1$, in which the rate at which particles try to move is maximal, and denote $S^0=S^0_{111}$ the corresponding chain. ![An example of an evolution, with $n=4$ and $\alpha=\beta=\gamma=1$. The active wall triggering each transition is indicated by the symbol ${\ensuremath{|\!|}}$.[]{data-label="fig:example-evolution"}](exemple-evolution.eps){width="90mm"} Continuous-time descriptions of the TASEP ----------------------------------------- In the physics literature, the TASEP is usually described in the following terms. 
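In code, the update rule and a chain step are straightforward. The sketch below is ours, not the authors' (configurations are tuples with 1 for a black particle and 0 for a white one):

```python
import random

def theta(tau, i):
    """Apply the transition rule (cases a-d) at active wall i."""
    n = len(tau)
    t = list(tau)
    if i == 0:
        if t[0] == 0:              # case b: a black particle enters on the left
            t[0] = 1
    elif i == n:
        if t[n - 1] == 1:          # case c: a black particle exits on the right
            t[n - 1] = 0
    elif t[i - 1] == 1 and t[i] == 0:
        t[i - 1], t[i] = 0, 1      # case a: black and white particles swap
    return tuple(t)                # case d: no matching rule, tau is unchanged

def tasep_step(tau, alpha, beta, gamma, rng=random):
    """One step of S^0_{alpha beta gamma}: pick a wall, maybe activate it."""
    n = len(tau)
    i = rng.randrange(n + 1)       # uniform choice among the n + 1 walls
    lam = beta if i == 0 else (gamma if i == n else alpha)
    return theta(tau, i) if rng.random() < lam else tau
```

For instance, `theta((1, 0, 1, 0), 1)` swaps the first two particles and returns `(0, 1, 1, 0)`.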
The time is continuous, and one considers each wall independently: during any small time interval $dt$, wall $i$ has probability $\lambda(i) dt$ to trigger a move $\omega\mapsto\vartheta(\omega,i)$. The rate $\lambda(i)$ takes the same values $\alpha$, $\beta$, $\gamma$ as previously. Following the probabilistic literature [@liggett], one can give a formulation which is equivalent to the previous one, but already closer to ours. In this description, each wall waits for an independent exponential random time with rate 1 before waking up (in other words, at any time, the probability that wall $i$ will still be sleeping after $t$ seconds is $e^{-t}$). When wall $i$ wakes up, it has probability $\lambda(i)$ to become active. If this is the case, then the transition $\omega\to\vartheta(\omega,i)$ is applied to the current configuration $\omega$. In any case the wall falls asleep again, restarting its clock. This continuous-time TASEP is now easily coupled to the Markov chain $S^0_{\alpha\beta\gamma}$. Let the time steps of $S^0_{\alpha\beta\gamma}$ correspond to the succession of moments at which a wall wakes up. Then in both versions, the index of the next wall to wake up is at any time a uniform random variable on $\{0,\ldots,n\}$, and when a wall wakes up the transition probabilities are identical. This implies that the stationary distributions of the continuous-time TASEP and its Markov chain replica are the same. A remarkable stationary distribution ------------------------------------ Among many results on the TASEP, Derrida *et al.* [@derrida; @pasquier] proved the following nice properties of the chain $S^0=S^0_{111}$, in which particles enter, travel and exit at the same maximal rate. First, $$\label{Mafalda} \textrm{Prob}(S^0(t) \textrm{ contains $0$ black particles}) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac{1}{C_{n+1}},$$ where $C_{n+1}=\frac1{n+2}{2n+2\choose n+1}$ is the $(n+1)$th Catalan number.
More generally, for all $0\leq k\leq n$, $$\label{Sapi} \textrm{Prob}(S^0(t)\textrm{ contains }k\textrm{ black particles}) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac{\frac{1}{n+1}{n+1\choose k}{n+1\choose n-k}}{C_{n+1}},$$ where the numerators are called Narayana numbers. The model is a finite state Markov chain which is clearly ergodic, so that the previous limits are in fact the probabilities of the same events in the unique stationary distribution of the chain [@markov]. More generally, Derrida *et al.* provided expressions for the stationary probabilities of the chain $S^0_{\alpha\beta\gamma}$ for generic $\alpha$, $\beta$, $\gamma$. Since their original work a number of papers have appeared providing alternative proofs and further results on correlations, time evolutions, etc. Recent advances and a bibliography can be found for instance in the article [@DLS03]. General books about this kind of particle process are [@liggett; @spohn]. However, the remarkable appearance of Catalan numbers in the stationary distribution of $S^0$ is not easily understood from the proofs in the physics literature. As far as we know, these proofs rely either on a *matrix ansatz*, or on a *Bethe ansatz*, both of which are then verified by a recursion on $n$. Combinatorial results --------------------- Our main ingredient to study the TASEP consists in the construction of a new Markov chain $X^0_{\alpha\beta\gamma}$ on a set $\Omega^0_n$ of *complete configurations* that satisfies two main requirements: on the one hand the stationary distribution of the chain $S^0_{\alpha\beta\gamma}$ can be simply expressed in terms of that of the chain $X^0_{\alpha\beta\gamma}$; on the other hand the stationary behavior of the chain $X^0_{\alpha\beta\gamma}$ is easy to understand. The complete configurations that we introduce for this purpose are made of two rows of $n$ cells containing black and white particles.
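Before turning to the complete chain, we note that the limits (\[Mafalda\])–(\[Sapi\]) can be checked numerically for small $n$. The sketch below is our own code (maximal flow regime $\alpha=\beta=\gamma=1$, with 1 encoding a black particle): it power-iterates the $2^n$-state chain and compares the result with the Narayana-over-Catalan prediction.

```python
from itertools import product
from math import comb

def theta(tau, i):
    # TASEP transition rule at active wall i (1 = black, 0 = white)
    n = len(tau)
    t = list(tau)
    if i == 0 and t[0] == 0:
        t[0] = 1
    elif i == n and t[-1] == 1:
        t[-1] = 0
    elif 0 < i < n and t[i - 1] == 1 and t[i] == 0:
        t[i - 1], t[i] = 0, 1
    return tuple(t)

def stationary(n, iters=5000):
    # Power iteration for the stationary law of S^0 = S^0_{111}; the chain
    # is ergodic, so any starting distribution converges to it.
    states = list(product((0, 1), repeat=n))
    idx = {s: a for a, s in enumerate(states)}
    pi = [1.0 / len(states)] * len(states)
    for _ in range(iters):
        new = [0.0] * len(states)
        for s in states:
            for i in range(n + 1):       # each wall is active with prob 1/(n+1)
                new[idx[theta(s, i)]] += pi[idx[s]] / (n + 1)
        pi = new
    return states, pi

def predicted(n, k):
    # Narayana numerator over the Catalan number C_{n+1}
    catalan = comb(2 * n + 2, n + 1) // (n + 2)
    return comb(n + 1, k) * comb(n + 1, n - k) / (n + 1) / catalan

n = 3
states, pi = stationary(n)
observed = [sum(p for s, p in zip(states, pi) if sum(s) == k)
            for k in range(n + 1)]
```

For $n=3$ the observed values agree, to numerical precision, with the predicted $(1/14,\,6/14,\,6/14,\,1/14)$ for $k=0,\ldots,3$.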
The first requirement is met by imposing that, disregarding what happens in the second row, the chain $X^0_{\alpha\beta\gamma}$ simulates in its first row the chain $S^0_{\alpha\beta\gamma}$. As illustrated by Figure \[fig:informal\], the second row will be used by black and white particles to return to their starting point, thus revealing a circulation of the particles. The second requirement is met by adequately choosing the complete configurations and the transition rules so that $X^0_{\alpha\beta\gamma}$ has a simple stationary distribution: in particular in the case $\alpha=\beta=\gamma=1$, $X^0=X^0_{111}$ will have a uniform stationary distribution. ![The circulation of black particles in the complete chain.[]{data-label="fig:informal"}](circulation.eps){width="\linewidth"} The chain $X^0_{\alpha\beta\gamma}$ is described in Section \[sec:completechain\], together with a fundamental property of its transition rules. Our main result, presented in Section \[sec:stationarydistribution\], is the combinatorial interpretation of the stationary distribution of the chain $S^0_{\alpha\beta\gamma}$, and in particular of Formulas (\[Mafalda\])–(\[Sapi\]). It is known in the literature that some of the results on the TASEP can be extended to models with three particle types [@ABL; @derrida]. We show that this is the case for Formulas (\[Mafalda\])–(\[Sapi\]) by adapting our approach in Section \[sec:3tasep\] to the 3-TASEP, a Markov chain $S_{\alpha\beta\gamma\varepsilon}$ in which there are 3 types of particles, $\bullet$, ${\ensuremath{{\!\times\!}}}$ and $\circ$, and transitions of the form $$\bullet{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}\to{\ensuremath{{\!\times\!}}}|\bullet, \qquad \bullet{\ensuremath{|\!|}}\circ\to\circ|\bullet, \quad \textrm{and}\quad{\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\circ\to\circ|{\ensuremath{{\!\times\!}}}.$$ Our main results for the 3-TASEP are obtained by a relatively simple modification of the complete chain.
In particular our combinatorial approach yields the following variant of Formulas (\[Mafalda\])–(\[Sapi\]) for the chain $S=S_{111\frac12}$: for any $k+\ell+m=n$, $$\textrm{Prob}(S(t) \textrm{ contains } \textrm{ $k$ $\bullet$, $\ell$ ${\ensuremath{{\!\times\!}}}$, and $m$ $\circ$}) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac{\frac{\ell+1}{n+1}{n+1\choose k}{n+1\choose m}}{\frac{1}{2} {2n+2\choose n+1}}.$$ The TASEP and 3-TASEP are sometimes also defined with periodic boundary conditions: instead of giving special rules for walls $0$ and $n$, one identifies these two walls and applies the same rule to every wall. With these boundary conditions, the stationary distribution of the TASEP is easily seen to be uniform. In Section \[sec:periodic\] we apply our method to study the more interesting distribution of the 3-TASEP with periodic boundary conditions. In this chain the number of particles of each type is fixed (since they cannot leave the system), and, for $k$ $\bullet$, $\ell$ $\times$, $m$ $\circ$, with $k+\ell+m=n$, we recover the known result: $$\textrm{Prob}(\widehat S(t)\;=\; |\underbrace{{\ensuremath{{\!\times\!}}}\cdots{\ensuremath{{\!\times\!}}}}_\ell |\underbrace{\circ\cdots\circ}_m |\underbrace{\bullet\cdots\bullet}_k|) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac1{{n \choose k}{n \choose m}}.$$ A different combinatorial proof of this latter formula was recently proposed by Omer Angel [@omer]. The complete chain {#sec:completechain} ================== Complete configurations. ------------------------ A *complete configuration* of $\Omega^0_n$ is a pair of rows of $n$ cells satisfying the following constraints: - *The balance condition*: The two rows contain together $n$ black and $n$ white particles. - *The positivity condition*: On the left of any vertical wall there are at least as many black particles as white ones.
An example of complete configuration is given in Figure \[fig:example-complete-config\], together with a pair of rows that violates the positivity condition. Given a complete configuration of length $n$, and an integer $j$, $0\leq j\leq n$, let $B(j)$ and $W(j)$ be respectively the numbers of black and white particles lying in the first $j$ columns (from left to right), and set $E(j)=B(j)-W(j)$. In other words, the quantities $B(j)$, $W(j)$ and $E(j)$ represent the number of black particles, the number of white particles, and their difference on the left of wall $j$. In particular, $E(0)=E(n)=0$, and Condition ($ii$) of the definition of complete configurations reads $E(j)\geq0$ for $j=0,\ldots,n$ (this is why we call it a positivity condition). Readers with background in enumerative combinatorics may have recognized here complete configurations with $n$ columns as bicolored Motzkin paths with $n$ steps, or Dyck paths with $2n+2$ steps in disguise [@stanley Chap. 6]. ![A complete configuration with $n=10$, and a pair of rows violating the positivity condition at wall 4.[]{data-label="fig:example-complete-config"}](example-complete-config.eps "fig:"){width=".25\linewidth"} ![A complete configuration with $n=10$, and a pair of rows violating the positivity condition at wall 4.[]{data-label="fig:example-complete-config"}](example-bad-config.eps "fig:"){width=".25\linewidth"} In particular these characterizations yield the following lemmas. A direct proof of these lemmas is given in Section \[sec:counting\] for completeness. \[lemma:olaf\] The number $|\Omega^0_{n}|$ of complete configurations is $C_{n+1}=\frac{1}{n+1} { 2n+2 \choose n}=\frac1{n+2}{2n+2\choose n+1}$. \[lemma:ciclico\] Let $k,m,n$ be nonnegative integers with $k+m=n$. The number $|\Omega^0_{k,m}|$ of complete configurations of $\Omega^0_{n}$ with $k$ black and $m$ white particles on the top row, and $m$ black and $k$ white particles on the bottom row is $\frac{1}{n+1}{n+1\choose k}{n+1\choose m}$.
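Both lemmas are easy to confirm by brute force for small $n$. The following enumeration sketch is ours (a column is encoded as a pair (top, bottom) with 1 = black and 0 = white, so that one column changes $E$ by $2(a+b)-2$):

```python
from itertools import product
from math import comb

def complete_configs(n):
    # All pairs of rows satisfying the balance and positivity conditions.
    cols = list(product((0, 1), repeat=2))     # (top, bottom) of one column
    result = []
    for cfg in product(cols, repeat=n):
        if sum(a + b for a, b in cfg) != n:    # balance: n black, n white
            continue
        e, ok = 0, True
        for a, b in cfg:                       # E(j) at walls j = 1, ..., n
            e += 2 * (a + b) - 2
            if e < 0:                          # positivity violated
                ok = False
                break
        if ok:
            result.append(cfg)
    return result

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# Lemma [lemma:olaf]: |Omega^0_n| = C_{n+1}
counts = [len(complete_configs(n)) for n in range(1, 7)]

# Lemma [lemma:ciclico] for n = 4: refine the count by the number k of
# black particles in the top row.
by_k = [sum(1 for cfg in complete_configs(4) if sum(a for a, _ in cfg) == k)
        for k in range(5)]
```

This yields `counts == [2, 5, 14, 42, 132, 429]` (the Catalan numbers $C_2,\dots,C_7$) and `by_k == [1, 10, 20, 10, 1]`, matching $\frac{1}{5}\binom{5}{k}\binom{5}{4-k}$.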
A first hint of our interest in complete configurations comes from comparing the lemmas with the probabilities in (\[Mafalda\]) and (\[Sapi\]). First definition of the complete chain {#sec:informal} -------------------------------------- The Markov chain $X^0_{\alpha\beta\gamma}$ on $\Omega^0_n$ will be defined in terms of an application $T$ from the set $\Omega^0_n \times \{0,\ldots,n\}$ to the set $\Omega^0_n$ that extends the application $\vartheta$. Given a complete configuration $\omega$ and an active wall $i$, the action of $T$ on the top row of $\omega$ does not depend on the second row, and mimics the action of $\vartheta$ as defined by cases $a$, $b$, $c$ and $d$ of the description of the TASEP. In particular in the top row, black particles travel from left to right and white particles from right to left. As opposed to that, in the bottom row, $T$ moves black and white particles backward, as illustrated by Figure \[fig:informal\]. In order to describe how these moves are performed, we first introduce the concept of sweep (see Figure \[fig:interpretation\]): - A *white sweep* between walls $i_1$ and $i_2$ consists in all white particles that are in the bottom row between walls $i_1$ and $i_2$ simultaneously hopping to the right (some black particles thus being displaced to the left in order to fill the gaps). For well-definedness a white sweep between $i_1$ and $i_2$ can occur only if the particle on the right-hand side of $i_2$ is black. - A *black sweep* between walls $i_1$ and $i_2$ consists in all black particles that are in the bottom row between walls $i_1$ and $i_2$ simultaneously hopping to the left (some white particles thus being displaced to the right in order to fill the gaps). For well-definedness a black sweep between $i_1$ and $i_2$ can occur only if the particle on the left-hand side of $i_1$ is white.
![A white sweep and a black sweep.[]{data-label="fig:interpretation"}](interpretation4.eps){width="90mm"} Next, given a complete configuration and a wall $i$, we distinguish the following walls: if there is a black particle on the left-hand side of wall $i$ in the top row, let $j_1<i$ be the leftmost wall such that there are only white particles in the top row between walls $j_1$ and $i-1$; if there is a white particle on the right-hand side of $i$ in the top row, let $j_2>i$ be the rightmost wall such that there are only black particles in the top row between walls $i+1$ and $j_2$. ![Sweeps occurring below the transition $(\bullet{\ensuremath{|\!|}}\circ\to\circ|\bullet)$.[]{data-label="fig:ambiguous"}](ambiguous.eps){width="110mm"} With these definitions, we are in a position to describe the action of $T$. Given a complete configuration $\omega\in\Omega^0_n$ and a wall $i\in\{0,\ldots,n\}$, the cases $a$, $b$, $c$ and $d$ of the transition rule $\vartheta$ describe the top row of the image $T(\omega,i)$, and they are complemented as follows to describe the bottom row of the image: - In this case $i\in\{1,\ldots,n-1\}$ and this wall separates a black and a white particle in the top row of $\omega$. The moves in the bottom row then depend on the particle on the bottom right of wall $i$ in $\omega$: if it is black, a white sweep occurs between $j_1$ and $i$, otherwise it is white and a black sweep occurs between $i+1$ and $j_2+1$ (or between $i+1$ and $n$ if $j_2=n$). These moves are illustrated by Figure \[fig:ambiguous\] (see also Figures \[move-right-sweep\]–\[move-left-sweep\], left and middle). - In this case $i=0$ and the leftmost particle of the top row of $\omega$ is white. Then the leftmost column of $\omega$ is a $|{\ensuremath{{\circ\atop\bullet}}}|$-column.
These two particles exchange (so that a black particle enters in the top row in agreement with rule $b$ for $\vartheta$), and a black sweep occurs between the left border and wall $j_2+1$, or between the left and right borders if $j_2=n$ (see Figure \[move-left-sweep\], right). - In this case $i=n$ and the rightmost particle of the top row of $\omega$ is black. Then the rightmost column of $\omega$ is a $|{\ensuremath{{\bullet\atop\circ}}}|$-column. These two particles exchange (so that a white particle enters in the top row in agreement with rule $c$ for $\vartheta$), and a white sweep occurs between wall $j_1$ and the right border (see Figure \[move-right-sweep\], right). - Otherwise nothing happens. The fact that the configuration $T(\omega,i)$ produced in each case satisfies the positivity constraint is not difficult to prove and it is explicitly checked in the next section. The Markov chain $X^0_{\alpha\beta\gamma}$ on the set $\Omega^0_n$ of complete configurations with length $n$ is defined from $T$ exactly as the TASEP is described from $\vartheta$: the evolution rule from time $t$ to $t+1$ consists in choosing $i=I(t)$ uniformly at random in $\{0,\ldots,n\}$ and setting $$X^0(t+1)=\left\{ \begin{array}{ll} T\left(X^0(t),i\right) &\textrm{ with probability $\lambda(i)$,}\\ X^0(t) &\textrm{ otherwise.} \end{array}\right.$$ By construction, the Markov chains $S^0_{\alpha\beta\gamma}$ and $X^0_{\alpha\beta\gamma}$ are related by $$S^0_{\alpha\beta\gamma}\;\equiv\;\textrm{top}(X^0_{\alpha\beta\gamma}),$$ where $\textrm{top}(\omega)$ denotes the top row of a complete configuration $\omega$, and the $\equiv$ is intended as identity in law at any time, provided $S_{\alpha\beta\gamma}^0(0)$ and $\textrm{top}(X_{\alpha\beta\gamma}^0(0))$ are equally distributed. 
An appealing interpretation from a combinatorial point of view is that we have revealed a circulation of the particles, which use the bottom row to travel backward and implement the infinite reservoirs, as illustrated by Figure \[fig:informal\]. An example of evolution is given by Figure \[fig:example-evolution-comp\]. The TASEP and complete chain with two particles for $n=3$ are represented in Figures \[fig:system-basic\]–\[fig:system-complete\]. ![An example of actual evolution with $n=4$ and $\alpha=\beta=\gamma=1$.[]{data-label="fig:example-evolution-comp"}](exemple-evolution-comp-two.eps){width="\linewidth"} Restatement of the transition rules: the bijection $\bar T$ {#sec:bijection-intro} ----------------------------------------------------------- \[thm:bijection\] The application $T$ is the first component $\Omega^0_n\times\{0,\ldots,n\}\to\Omega^0_n$ of a bijection $\bar T$ from $\Omega^0_n\times\{0,\ldots,n\}$ into itself. In order to define the application $\bar T$, we shall partition the set $\Omega^0_n\times\{0,\ldots,n\}$ into classes $A_{a'}$, $A_{a''}$, $A_{b}$, $A_{c}$, and $A_{d}$, and describe, for each class $A$, its image $B=\bar T(A)$. From now on in this section, $(\omega,i)$ and $(\omega',j)$ respectively denote an element of the current class and its image, and $j_1$ and $j_2$ are defined from $(\omega,i)$ as in Section \[sec:informal\]. We are going to describe the image $(\omega',j)$ of $(\omega,i)$ by $\bar T$ in terms of deletions and insertions of $|{\ensuremath{{\bullet\atop\circ}}}|$- or $|{\ensuremath{{\circ\atop\bullet}}}|$-columns or of ${\bullet\atop}|{\atop\circ}$-diagonals. One advantage of these operations is that they clearly preserve the balance and positivity conditions, so we will directly know in each case that the image $\omega'$ belongs to $\Omega^0_n$.
The reader is invited to check, using Figures \[move-right-sweep\] and \[move-left-sweep\], that the configuration $\omega'$ obtained in each case is, as claimed in the theorem, the same as the configuration $T(\omega,i)$ that was described in terms of sweeps in the previous section: - There are two cases depending on the type of the particle $R$ that is below $Q$ in $\omega$: - If $R$ is black, then $j=j_1$ and $\omega'$ is obtained by moving the $|{\ensuremath{{\circ\atop\bullet}}}|$-column $|{Q\atop R}|$ from the right-hand side of wall $i$ to the right-hand side of wall $j$ (Figure \[move-right-sweep\], left-middle). The image $\smash{B_{a'}}$ of the class $A_{a'}$ consists of pairs $(\omega',j)$ such that: the wall $j$ is the left border ($j=0$) or it has a black particle on its left-hand side in the top row, there is a $|{\ensuremath{{\circ\atop\bullet}}}|$-column on the right-hand side of wall $j$, and the sequence of white particles on the right-hand side of wall $j$ in the top row is followed by a black particle. - If $R$ is white, then $j=j_2$ and $\omega'$ is obtained by moving the particles $P$ and $R$ from wall $i$ (where they form a ${\bullet\atop}|{\atop\circ}$-diagonal) to wall $j$ so that they form a ${\bullet\atop}|{\atop\circ}$-diagonal if $j<n$ (Figure \[move-left-sweep\], left), or a $|{\ensuremath{{\bullet\atop\circ}}}|$-column if $j=n$ (Figure \[move-left-sweep\], middle). The image $B_{a''}$ of the class $A_{a''}$ consists of pairs $(\omega',j)$ with a $|{\ensuremath{{\circ\atop\circ}}}|$-column or the border on the right-hand side of wall $j$ of $\omega'$ and such that there is a non-empty sequence of black particles on the left-hand side of wall $j$ in the top row, followed by a white particle. ![Moves in the cases $A_a''$ and $A_{b}$. Below the two left-hand side configurations, the black sweep in the bottom row is illustrated on an example.[]{data-label="move-left-sweep"}](move-right-sweep.eps){width="\linewidth"} ![Moves in the cases $A_a''$ and $A_{b}$.
Below the two left-hand side configurations, the black sweep in the bottom row is illustrated on an example.[]{data-label="move-left-sweep"}](move-left-sweep.eps){width="\linewidth"} - The cell under $Q$ then contains a black particle $P$ (Figure \[move-left-sweep\], right). Then $j=j_2$ and $\omega'$ is obtained by moving $P$ and $Q$ to wall $j$ so that they form a ${\bullet\atop}|{\atop\circ}$-diagonal if $j<n$ or a $|{\ensuremath{{\bullet\atop\circ}}}|$-column if $j=n$. The image $B_{b}$ of the class $A_{b}$ consists of pairs $(\omega',j)$ with a $|{\ensuremath{{\circ\atop\circ}}}|$-column or the border on the right-hand side of wall $j$ of $\omega'$ and such that there is a non-empty sequence of black particles on the left-hand side of wall $j$ in the top row, ending at the left border. - The cell under $P$ then contains a white particle $Q$ (Figure \[move-right-sweep\], right). Then $j=j_1$ and $\omega'$ is obtained by moving $P$ and $Q$ to wall $j$ so that they form a $|{\ensuremath{{\circ\atop\bullet}}}|$-column on its right-hand side. The image $B_{c}$ of the class $A_{c}$ consists of pairs $(\omega',j)$ such that: the wall $j$ is the left border ($j=0$) or it has a black particle on its left-hand side in the top row, there is a $|{\ensuremath{{\circ\atop\bullet}}}|$-column on the right-hand side of wall $j$, and the sequence of white particles on the right-hand side of wall $j$ in the top row ends at the right border. - These configurations are left unchanged by $\bar T$, so that $B_d=\bar T(A_d)=A_d$. In each case of the definition, the transformation described is reversible: from $(\omega',j)$ in one of the image classes, the wall $i$ and then the configuration $\omega$ are easily recovered. The theorem thus follows from the fact that $\{B_{a'},B_{a''}, B_{b},B_{c}, B_{d}\}$ is a partition of $\Omega^0_n\times\{0,\ldots,n\}$.
Stationary distributions {#sec:stationarydistribution} ======================== The Markov chain $X^0_{\alpha\beta\gamma}$ is clearly aperiodic and we check in Section \[sec:paths\] that it is irreducible, *i.e.* that there is an evolution between any two configurations. This implies that the chain $X^0_{\alpha\beta\gamma}$ is ergodic, *i.e.* it has a unique stationary distribution, to which $X^0_{\alpha\beta\gamma}(t)$ converges as $t$ goes to infinity [@markov]. Our aim in this section is to find this distribution and to use it to give a combinatorial interpretation to that of $S^0_{\alpha\beta\gamma}$. We first deal with the maximal flow regime, for which all ingredients are now ready. Then we discuss the generic case. The maximal flow regime $\alpha=\beta=\gamma=1$ {#sec:maximal} ----------------------------------------------- \[thm:proba\] The Markov chain $X^0$ has a uniform stationary distribution. ![The $14$ complete configurations for $n=3$ and transitions between them. The starting point of each arrow indicates the wall triggering the transition (loop transitions are not indicated). For $\alpha=\beta=\gamma=1$, the stationary probabilities are uniform (equal to $1/14$) since each configuration has equal in- and out-degrees. Ignoring the bottom rows reduces this Markov chain to the chain of Figure \[fig:system-basic\].[]{data-label="fig:system-complete"}](Particle-basic-bis.eps){width=".8\linewidth"} ![The $14$ complete configurations for $n=3$ and transitions between them. The starting point of each arrow indicates the wall triggering the transition (loop transitions are not indicated). For $\alpha=\beta=\gamma=1$, the stationary probabilities are uniform (equal to $1/14$) since each configuration has equal in- and out-degrees. 
Ignoring the bottom rows reduces this Markov chain to the chain of Figure \[fig:system-basic\].[]{data-label="fig:system-complete"}](Particle-complex-ter.eps){width=".9\linewidth"} As illustrated by Figure \[fig:system-complete\], Theorem \[thm:bijection\] says that the vertices of the transition graph of the chain have equal in- and out-degrees. Moreover the $n+1$ possible transitions from a configuration $\omega$ are chosen with equal probability, since the active wall is chosen uniformly in $\{0,\ldots,n\}$. The uniform distribution on $\Omega^0_n$ hence clearly satisfies the local stationarity equation at each configuration $\omega$: assuming that at time $t$ the distribution is uniform, $$\textrm{Prob}(X(t)=\omega)=\frac1{|\Omega^0_n|}\qquad\textrm{for all}\;\omega,$$ then at time $t+1$, it remains uniform, since $${\textrm{Prob}(X(t+1)=\omega')} =\!\!\! \sum_{(\omega,i)\in T^{-1}(\omega')} \!\!\!\! \textrm{Prob}(X(t)=\omega)\cdot{\frac1{n+1}} \;=\; {\big| T^{-1}(\omega')\big|}\cdot\frac1{|\Omega^0_n|}\cdot\frac{1}{n+1} \;=\;\frac1{|\Omega^0_n|},$$ where $T^{-1}(\omega')$ denotes the set of preimages of $\omega'$ by $T$. The last equality follows from the facts that $T^{-1}(\omega')= \{ \bar T^{-1}(\omega', j) \mid j=0, \ldots, n\}$, and that $\bar T$ is a bijection. The relation $S^0\equiv\textrm{top}(X^0)$ now allows us to derive from Theorem \[thm:proba\] the announced combinatorial interpretation of Formulas (\[Mafalda\]) and (\[Sapi\]). Let $\textrm{top}(\omega)$ denote the top row of a complete configuration $\omega$.
Then for any initial distribution $S^0(0)$ and $X^0(0)$ with $S^0(0)\equiv\textrm{top}(X^0(0))$, and any TASEP configuration $\tau$, $$\textrm{Prob}(S^0(t) = \tau)\;=\; \textrm{Prob}(\textrm{top}(X^0(t)) =\tau) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac{\big|\{\omega\in\Omega^0_n\mid \textrm{top}(\omega)=\tau\}|} {|\Omega^0_{n}|}.$$ In particular, for any $k+m=n$, we obtain combinatorially the formula: $$\textrm{Prob}(S^0(t) \textrm{ contains } k \textrm{ black and } m \textrm{ white particles}) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac{|\Omega^0_{k,m}|}{|\Omega^0_n|} \;=\; \frac{\frac{1}{n+1}{n+1\choose k}{n+1\choose m}}{C_{n+1}}.$$ As discussed in Section \[sec:conclusion\], this interpretation sheds new light on some recent results of Derrida *et al.* connecting the TASEP to Brownian excursions [@Derrida03brownian]. Arbitrary $\alpha$, $\beta$ and $\gamma$ {#sec:arbitrary} ---------------------------------------- In order to express the stationary distribution of the general chain $X^0_{\alpha\beta\gamma}$, we associate a weight $q(\omega)$ to each complete configuration $\omega$, which is defined in terms of two combinatorial statistics. By definition, a complete configuration $\omega$ is a concatenation of four types of columns $|{\bullet\atop\bullet}|$, $|{\bullet\atop\circ}|$, $|{\circ\atop\bullet}|$ and $|{\circ\atop\circ}|$, subject to the balance and positivity conditions. In particular, the concatenation of two complete configurations of $\Omega^0_i$ and $\Omega^0_j$ with $i+j=n$ yields a complete configuration of $\Omega^0_n$. Let us call *prime* a configuration that cannot be decomposed in this way. A complete configuration $\omega$ can be uniquely written as a concatenation $\omega=\omega_1\cdots\omega_m$ of prime configurations.
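As a quick aside, the counting formula above can be cross-checked numerically (the sketch below is ours, not part of the paper): by Vandermonde's identity, the numbers $\frac{1}{n+1}{n+1\choose k}{n+1\choose m}$ with $k+m=n$ sum to $\frac{1}{n+1}{2n+2\choose n}=C_{n+1}=|\Omega^0_n|$, so the limiting probabilities indeed sum to $1$.

```python
from math import comb

def catalan(n):
    # n-th Catalan number: C_n = binom(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

for n in range(1, 12):
    # |Omega^0_{k,m}| summed over k + m = n, before dividing by n + 1
    total = sum(comb(n + 1, k) * comb(n + 1, n - k) for k in range(n + 1))
    assert total == comb(2 * n + 2, n)         # Vandermonde's identity
    assert total % (n + 1) == 0
    assert total // (n + 1) == catalan(n + 1)  # = |Omega^0_n|

print(catalan(4))  # 14
```

For $n=3$ this recovers the $14$ complete configurations of Figure \[fig:system-complete\].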
These prime factors can be of three types: $|{\ensuremath{{\bullet\atop\circ}}}|$-columns, $|{\ensuremath{{\circ\atop\bullet}}}|$-columns, and *blocks* of the form $|{\ensuremath{{\bullet\atop\bullet}}}|\omega'|{\ensuremath{{\circ\atop\circ}}}|$ with $\omega'$ a complete configuration. The inner part $\omega'$ of a block $\omega=|{\ensuremath{{\bullet\atop\bullet}}}|\omega'|{\ensuremath{{\circ\atop\circ}}}|$ is referred to as its *inside*. Now, given a complete configuration $\omega$, let us assign labels to some of the particles of its bottom row: first, each white particle is labeled $z$ if it is not in a block, and then, each black particle is labeled $y$ if it is not in the inside of a block and there are no $z$ labels on its left. The number of labels of type $y$ and the number of labels of type $z$ in a configuration $\omega$ will be denoted $n_y(\omega)$ and $n_z(\omega)$ respectively. Then the *weight* of a configuration $\omega$ is defined as $$q(\omega)\;=\;{\beta^n\gamma^n} \left(\frac{\alpha}{\beta}\right)^{n_y(\omega)} \left(\frac{\alpha}{\gamma}\right)^{n_z(\omega)} \;=\;\alpha^{n_y(\omega)+n_z(\omega)} \beta^{n-n_y(\omega)}\gamma^{n-n_z(\omega)}.$$ In other words, there is a factor $\alpha$ per label, a factor $\beta$ per unlabeled black particle and a factor $\gamma$ per unlabeled white particle. For instance, the weight of the configuration of Figure \[fig:qpar\] is $\alpha^{8}\beta^{10}\gamma^{16}$, and more generally the weight is a monomial of total degree $2n$. ![A configuration $\omega$ with weight $q(\omega)=\alpha^8\beta^{10}\gamma^{16}$. Labels are indicated below particles.
[]{data-label="fig:qpar"}](qpar.eps){width="70mm"} \[thm:generic\] The Markov chain $X^0_{\alpha\beta\gamma}$ has the following unique stationary distribution: $$\textrm{Prob}(X^0_{\alpha\beta\gamma}(t)=\omega) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac{q(\omega)}{Z^0_n}, \qquad \textrm{where } Z^0_n={\sum_{\omega' \in \Omega^0_n}q(\omega')},$$ where $q(\omega)$ is the previously defined weight on complete configurations. Since $X^0_{\alpha\beta\gamma}$ is aperiodic and irreducible, it is sufficient to show that the distribution induced by the weights $q$ is stationary. The result is based on a further property of the bijection $\bar T$ of which $T$ is the first component. \[lem:preserve\] The bijection $\bar T:\Omega^0_n\times\{0,\ldots,n\}\to\Omega^0_n\times\{0,\ldots,n\}$ transports the weights: $$\label{for:preserve} \lambda(i)\,q(\omega)\;=\;\lambda(j)\,q(\omega'),\qquad \textrm{for all $(\omega',j)=\bar T(\omega,i)$,}$$ where $\lambda(i)=\alpha$ for $i\in\{1,\ldots,n-1\}$, $\lambda(0)=\beta$ and $\lambda(n)=\gamma$. Let $\omega$ be a complete configuration belonging to $\Omega_n^0$. The following properties will be useful: - *In a local configuration $|{\bullet\atop?}|{\ensuremath{{\circ\atop\bullet}}}|$ the black particle in the bottom row never contributes a label $y$.* The black particle of a $|{\ensuremath{{\circ\atop\bullet}}}|$-column can contribute a label $y$ only if it is not in the inside of a block. This happens only if the particle $?$ is white and is not in a block. But then this particle carries a label $z$ which is on the left of the black particle. - *The bottom white particle of a $|{\ensuremath{{\circ\atop\circ}}}|$-column never contributes a label $z$.* This property is immediate since a $|{\ensuremath{{\circ\atop\circ}}}|$-column is always in a block. 
- *The deletion/insertion of a $|{\ensuremath{{\circ\atop\bullet}}}|$-column does not change the labels of other particles.* When a $|{\ensuremath{{\circ\atop\bullet}}}|$-column is inserted or removed in the inside of a block, the block structure is unchanged and there is no effect on labels. When it is inserted or removed at a position not included in a block, it may contribute a label $y$, but this has no effect on other labels. - *The deletion/insertion of a ${\bullet\atop}|{\atop\circ}$-diagonal taking the form $|{\bullet\atop?}|{\ensuremath{{\circ\atop\circ}}}|\leftrightarrow|{\circ\atop?}|$ does not change the labels of other particles.* The situation $|{\ensuremath{{\bullet\atop\circ}}}|{\ensuremath{{\circ\atop\circ}}}|\leftrightarrow |{\ensuremath{{\circ\atop\circ}}}|$ can be viewed as the insertion or deletion of a $|{\ensuremath{{\bullet\atop\circ}}}|$-column in the inside of a block, which has no effect. The other situation $|{\ensuremath{{\bullet\atop\bullet}}}|{\ensuremath{{\circ\atop\circ}}}|\leftrightarrow|{\ensuremath{{\circ\atop\bullet}}}|$ may occur outside a block, in which case a white particle is added or removed in the bottom row, but this particle lies inside the small block $|{\ensuremath{{\bullet\atop\bullet}}}|{\ensuremath{{\circ\atop\circ}}}|$ and hence carries no label. The relation is checked using these properties by comparing $q(\omega)$ and $q(\omega')$ in each case of the definition of the bijection $\bar T$. - If $j\neq0$ (Figure \[move-right-sweep\], left), according to Property 1 the particle $R$ contributes a label $y$ in neither $\omega$ nor $\omega'$. Moreover, according to Property i, the displacement of the $|{\ensuremath{{\circ\atop\bullet}}}|$-column does not affect the labels of other particles. Hence $q(\omega)=q(\omega')$, in agreement with $\lambda(i)=\lambda(j)=\alpha$.
If instead $j=0$ (Figure \[move-right-sweep\], middle), Property 1 applies only to $\omega$: in $\omega'$, the displaced $|{\ensuremath{{\circ\atop\bullet}}}|$-column is the leftmost one, so that its black particle contributes a supplementary $y$ label. Therefore $q(\omega')=q(\omega)\frac{\alpha}{\beta}$, in agreement with $\lambda(i)=\alpha$, $\lambda(0)=\beta$. - If $j\neq n$ (Figure \[move-left-sweep\], left), from Property 2 we see that the particle $R$ contributes a label $y$ in neither $\omega$ nor $\omega'$. Observe moreover that the displacement of a ${\bullet\atop}|{\atop\circ}$-diagonal does not affect the labels of other particles according to Property ii. Hence $q(\omega)=q(\omega')$, in agreement with $\lambda(i)=\lambda(j)=\alpha$. If $j=n$ (Figure \[move-left-sweep\], middle), Property 2 applies only to $\omega$: the move amounts to deleting a ${\bullet\atop}|{\atop\circ}$-diagonal and inserting a $|{\ensuremath{{\bullet\atop\circ}}}|$-column at the right border. The white particle of this column thus contributes a $z$ label. Therefore $q(\omega')=q(\omega)\frac{\alpha}{\gamma}$, in agreement with $\lambda(i)=\alpha$ and $\lambda(n)=\gamma$. - If $j\neq n$ (Figure \[move-left-sweep\], right), the move consists of deleting a $|{\ensuremath{{\circ\atop\bullet}}}|$-column, which is the leftmost and thus contributes a $y$ label in $\omega$, and inserting a ${\bullet\atop}|{\atop\circ}$-diagonal, which according to Property 2 does not contribute a $z$ label. According to Properties i and ii, the other labels are left unchanged. Therefore $q(\omega')=q(\omega)\frac{\beta}{\alpha}$, in agreement with $\lambda(0)=\beta$ and $\lambda(j)=\alpha$. If $j=n$, $q(\omega')=q(\omega)\frac{\beta}{\alpha} \frac{\alpha}{\gamma}= q(\omega)\frac{\beta}{\gamma}$, in agreement with $\lambda(0)=\beta$ and $\lambda(n)=\gamma$.
- If $j\neq 0$ (Figure \[move-right-sweep\], right), $\omega'$ is obtained by deleting a $|{\ensuremath{{\bullet\atop\circ}}}|$-column on the left-hand side of wall $n$ and inserting a $|{\ensuremath{{\circ\atop\bullet}}}|$-column on the right-hand side of $j_1$. According to Property i only the labels of displaced particles can be affected. Since the deleted $|{\ensuremath{{\bullet\atop\circ}}}|$-column is the rightmost column, its white particle contributes a $z$ label in $\omega$. By contrast, Property 1 forbids the $|{\ensuremath{{\circ\atop\bullet}}}|$-column from contributing a label in $\omega'$. Therefore $q(\omega')=q(\omega)\frac{\gamma}{\alpha}$, in agreement with $\lambda(n)=\gamma$ and $\lambda(j)=\alpha$. If $j=0$, $q(\omega')=q(\omega)\frac{\gamma}{\alpha} \frac{\alpha}{\beta}= q(\omega)\frac{\gamma}{\beta}$, in agreement with $\lambda(n)=\gamma$ and $\lambda(0)=\beta$. In order to see that the distribution induced by $q$ is stationary, let us assume that $$\label{eq:niciII} \textrm{Prob}(X^0_{\alpha\beta\gamma}(t)=\omega) \;=\frac{q(\omega)}{Z^0_n}, \qquad\textrm{for all }\omega\in\Omega^0_n,$$ and try to compute $\textrm{Prob}(X^0_{\alpha\beta\gamma}(t+1)=\omega')$. For this, recall that $I(t)$ denotes the random wall selected at time $t$ and define $J(t+1)$ as follows: if $I(t)$ becomes active so that $X^0_{\alpha\beta\gamma}(t+1)=T(X^0_{\alpha\beta\gamma}(t),I(t))$, then define $J(t+1)$ by $\bar T(X^0_{\alpha\beta\gamma}(t),I(t))=(X^0_{\alpha\beta\gamma}(t+1),J(t+1))$; otherwise set $J(t+1)=I(t)$.
Then, since $T$ is given as the first component of $\bar T$, $$\textrm{Prob}\big(X^0_{\alpha\beta\gamma}(t+1)=\omega'\big)\;=\; \sum_{j=0}^n\textrm{Prob}\big(X^0_{\alpha\beta\gamma}(t+1)=\omega',\; J(t+1)=j\big).$$ Now, by definition of the Markov chain $X^0_{\alpha\beta\gamma}$, for all $\omega'$ and $j$, $$\begin{aligned} \textrm{Prob}\big(X^0_{\alpha\beta\gamma}(t+1)=\omega',\;J(t+1)=j\big) &=& \lambda(i)\cdot \textrm{Prob}\big(X^0_{\alpha\beta\gamma}(t)=\omega,\;I(t)=i\big)\\ &&\quad\;+\; (1-\lambda(j))\cdot\textrm{Prob}\big(X^0_{\alpha\beta\gamma}(t)=\omega',\; I(t)=j\big), \end{aligned}$$ where $(\omega,i)=\bar T^{-1}(\omega',j)$. Since the random variable $I(t)$ is uniform on $\{0,\ldots,n\}$, we get $$\begin{aligned} \textrm{Prob}\big(X^0_{\alpha\beta\gamma}(t+1)=\omega',\;J(t+1)=j\big) &=& \lambda(i)\cdot \frac{q(\omega)}{Z^0_n}\frac1{n+1} \;+\; (1-\lambda(j))\cdot \frac{q(\omega')}{Z^0_n}\frac1{n+1}. \end{aligned}$$ But since $\bar T$ preserves the weights via Relation (\[for:preserve\]), $\lambda(i)q(\omega)=\lambda(j)q(\omega')$ and the terms involving $\lambda$ cancel. Finally $$\textrm{Prob}(X^0_{\alpha\beta\gamma}(t+1)=\omega')\;=\;\sum_{j=0}^n\frac{q(\omega')}{Z^0_n}\frac1{n+1} \;=\;\frac{q(\omega')}{Z^0_n},$$ and this completes the proof that the distribution induced by $q$ is stationary. The 3-TASEP {#sec:3tasep} =========== The combinatorial approach we developed in the previous sections can be extended to a slightly more general model, the 3-TASEP, which we now define. The 3-TASEP is similar to the TASEP but each time a black or a white particle exits, there is a certain probability $\varepsilon$ that the particle that enters in its place is a neutral particle $\times$. On the one hand, as in the TASEP, black particles always travel from left to right and white particles always do the opposite. On the other hand, neutral particles have no preferred direction and get displaced in opposite directions by black and white particles.
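Before formalizing the 3-TASEP, it is worth sanity-checking Theorem \[thm:generic\] numerically on the smallest nontrivial case, $n=2$. The sketch below is ours (encodings and helper names are not from the paper): it enumerates the five complete configurations of $\Omega^0_2$, applies the labeling rule of Section \[sec:arbitrary\] to compute $q$, projects the weights onto top rows, and compares with the stationary distribution of the corresponding TASEP chain obtained by power iteration.

```python
import numpy as np

# Columns of a complete configuration are (top, bottom) pairs, 'b' = black,
# 'w' = white.  A |b/b|-column opens a block, a |w/w|-column closes it.
def label_counts(cols):
    """Count the y- and z-labels of the bottom row."""
    n_y = n_z = 0
    z_seen = False
    depth = 0                       # block-nesting depth
    for top, bot in cols:
        closer = (top, bot) == ('w', 'w')
        if closer:
            depth -= 1              # the closing column still lies in its block
        if bot == 'w' and depth == 0 and not closer:
            n_z += 1                # white particle not in a block: label z
            z_seen = True
        if bot == 'b' and depth == 0 and not z_seen:
            n_y += 1                # black, not inside a block, no z on its left
        if (top, bot) == ('b', 'b'):
            depth += 1
    return n_y, n_z

def weight(cols, a, b, g):
    n_y, n_z = label_counts(cols)
    n = len(cols)
    return a ** (n_y + n_z) * b ** (n - n_y) * g ** (n - n_z)

a, b, g = 0.9, 0.5, 0.25            # alpha, beta, gamma

# The five complete configurations of Omega^0_2.
omega02 = [[('b', 'w'), ('b', 'w')], [('b', 'w'), ('w', 'b')],
           [('w', 'b'), ('b', 'w')], [('w', 'b'), ('w', 'b')],
           [('b', 'b'), ('w', 'w')]]
states = ['bb', 'bw', 'wb', 'ww']   # TASEP configurations for n = 2
proj = np.array([sum(weight(c, a, b, g) for c in omega02
                     if ''.join(t for t, _ in c) == s) for s in states])

# TASEP transition matrix: a wall in {0, 1, 2} is chosen with probability 1/3;
# wall 0 turns a leading white into a black with prob. beta, wall 2 turns a
# trailing black into a white with prob. gamma, wall 1 swaps 'bw' with prob. alpha.
P = np.zeros((4, 4))
for (s, t), p in {('bb', 'bw'): g, ('bw', 'wb'): a, ('wb', 'bb'): b,
                  ('wb', 'ww'): g, ('ww', 'bw'): b}.items():
    P[states.index(s), states.index(t)] = p / 3
P += np.diag(1 - P.sum(axis=1))     # probability of staying put

pi = np.full(4, 0.25)
for _ in range(5000):
    pi = pi @ P                     # power iteration to the stationary law

assert np.allclose(pi, proj / proj.sum(), atol=1e-10)
```

The assertion passes for generic rates, in line with the theorem: the stationary probability of a top row is proportional to the total weight of its preimages under $\textrm{top}$.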
An informal illustration of the 3-TASEP is given by Figure \[fig:rough-extended\]. ![The 3-TASEP.[]{data-label="fig:rough-extended"}](reservoir-bwx.eps){width=".5\linewidth"} Definition of the 3-TASEP ------------------------- A *3-TASEP configuration* is a row of $n$ cells, each containing one particle, which can be of type $\bullet$, $\times$ or $\circ$. An example of a configuration is given in Figure \[fig:example-config-bwv\]. The local configuration around a wall $i$ in a configuration $\tau$ is denoted $\tau[i]$: for $i\in\{1,\ldots,n-1\}$, $\tau[i]$ is the element of the set $\{\bullet{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}},\bullet{\ensuremath{|\!|}}\circ,\bullet{\ensuremath{|\!|}}\bullet,{\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\bullet,{\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}},{\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\circ,\circ{\ensuremath{|\!|}}\bullet,\circ{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}},\circ{\ensuremath{|\!|}}\circ\}$ that describes the two cells surrounding wall $i$; for $i=0$, $\tau[0]\in\{{\ensuremath{|\!|}}\bullet,{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}},{\ensuremath{|\!|}}\circ\}$, and for $i=n$, $\tau[n]\in\{\bullet{\ensuremath{|\!|}},{\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}},\circ{\ensuremath{|\!|}}\}$. ![A 3-TASEP configuration with $n=14$ cells.[]{data-label="fig:example-config-bwv"}](example-basic-three-config.eps){width="60mm"} The 3-TASEP is a Markov chain $S_{\alpha\beta\gamma\varepsilon}$ defined on the set of 3-TASEP configurations in terms of four parameters: $\alpha$, $\beta$ and $\gamma$ in $]0,1]$, and $\varepsilon$ in $[0,1]$. From time $t$ to $t+1$, the chain evolves from the configuration $\tau=S_{\alpha\beta\gamma\varepsilon}(t)$ to a configuration $\tau'=S_{\alpha\beta\gamma\varepsilon}(t+1)$ as follows: - A wall $i$ is chosen uniformly at random among the $n+1$ walls.
- Depending on the local configuration $\tau[i]$ around wall $i$, a transition may be triggered: - unstable local configurations in the middle ($i\in\{1,\ldots,n-1\}$): - Case $\bullet{\ensuremath{|\!|}}\circ$, a transition $\bullet{\ensuremath{|\!|}}\circ\to\circ|\bullet$ occurs with probability $\lambda(\bullet{\ensuremath{|\!|}}\circ):=\alpha$. - Case ${\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\circ$, a transition ${\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\circ\to\circ|{\ensuremath{{\!\times\!}}}$ occurs with probability $\lambda({\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\circ):=\beta$. - Case $\bullet{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}$, a transition $\bullet{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}\to{\ensuremath{{\!\times\!}}}|\bullet$ occurs with probability $\lambda(\bullet{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}):=\gamma$. - unstable local configurations on the left border ($i=0$): - Case ${\ensuremath{|\!|}}\circ$, the particle exits with total probability $\lambda({\ensuremath{|\!|}}\circ):=\beta$, in 2 possible ways: - a transition ${\ensuremath{|\!|}}\circ\to|\bullet$ occurs with probability $(1-\varepsilon)\,\beta,$ - or a transition ${\ensuremath{|\!|}}\circ\to|{\ensuremath{{\!\times\!}}}$ with probability $\varepsilon\beta$ (neutralization), - Case ${\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}$, a transition ${\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}\to|\bullet$ occurs with probability $\lambda({\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}):= (1-\varepsilon)\beta\,\gamma/\alpha$. 
- unstable local configurations on the right border ($i=n$): - Case $\bullet{\ensuremath{|\!|}}$, the particle exits with total probability $\lambda(\bullet{\ensuremath{|\!|}}):=\gamma$, in two possible ways: - a transition $\bullet{\ensuremath{|\!|}}\to\circ|$ occurs with probability $(1-\varepsilon)\,\gamma$, - or a transition $\bullet{\ensuremath{|\!|}}\to{\ensuremath{{\!\times\!}}}|$ with probability $\varepsilon\gamma$ (neutralization), - Case ${\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}$, a transition ${\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\to\circ|$ occurs with probability $\lambda({\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}):=(1-\varepsilon)\gamma\,\beta/\alpha$. - stable local configurations: - Cases $\bullet{\ensuremath{|\!|}}\bullet$, ${\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}$, $\circ{\ensuremath{|\!|}}\circ$, $\circ{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}$, $\circ{\ensuremath{|\!|}}\bullet$, ${\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\bullet$, ${\ensuremath{|\!|}}\bullet$ and $\circ{\ensuremath{|\!|}}$, no transition occurs: $\lambda(\bullet{\ensuremath{|\!|}}\bullet)=\lambda({\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}) =\lambda(\circ{\ensuremath{|\!|}}\circ)=\lambda(\circ{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}) =\lambda(\circ{\ensuremath{|\!|}}\bullet)=\lambda({\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\bullet) =\lambda({\ensuremath{|\!|}}\bullet)=\lambda(\circ{\ensuremath{|\!|}}):=0$. - If a transition occurs, the new configuration $\tau'$ is obtained from $\tau$ by applying the transition to the local configuration around the chosen wall. Otherwise, $\tau'=\tau$.
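The case analysis above can be condensed into a single function that returns, for a configuration and a selected wall, the possible successor configurations together with their probabilities. This is a sketch of our own (the 'b'/'x'/'w' encoding and the function name are not from the paper):

```python
def transitions(tau, i, alpha, beta, gamma, eps):
    """Successors of the 3-TASEP configuration tau (a list of 'b'/'x'/'w')
    when wall i in {0, ..., n} is selected, with their probabilities."""
    n = len(tau)
    if 0 < i < n:                        # unstable middle configurations
        left, right = tau[i - 1], tau[i]
        rate = {('b', 'w'): alpha, ('x', 'w'): beta, ('b', 'x'): gamma}
        if (left, right) in rate:        # swap the two particles
            return [(tau[:i - 1] + [right, left] + tau[i + 1:],
                     rate[left, right])]
    elif i == 0:                         # left border
        if tau[0] == 'w':                # white exits; black or neutral enters
            return [(['b'] + tau[1:], (1 - eps) * beta),
                    (['x'] + tau[1:], eps * beta)]
        if tau[0] == 'x':                # neutral exits; black enters
            return [(['b'] + tau[1:], (1 - eps) * beta * gamma / alpha)]
    else:                                # right border, i == n
        if tau[-1] == 'b':               # black exits; white or neutral enters
            return [(tau[:-1] + ['w'], (1 - eps) * gamma),
                    (tau[:-1] + ['x'], eps * gamma)]
        if tau[-1] == 'x':               # neutral exits; white enters
            return [(tau[:-1] + ['w'], (1 - eps) * gamma * beta / alpha)]
    return []                            # stable local configuration
```

For instance, `transitions(['b', 'w'], 1, 0.5, 0.3, 0.2, 0.1)` returns the single swap `[(['w', 'b'], 0.5)]`, while a stable pair such as `['b', 'b']` yields no transition at wall 1.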
In order to explain the role of the parameters $\alpha$, $\beta$, $\gamma$ and $\varepsilon$, a few remarks are useful: - The equality $\lambda({\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\circ)=\lambda({\ensuremath{|\!|}}\circ)$ translates the idea that a white particle feels the same attraction to the left in front of a neutral particle as it feels for exiting at the left border. A similar interpretation holds for $\lambda(\bullet{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}})=\lambda(\bullet{\ensuremath{|\!|}})$ and black particles. - The equality $\lambda({\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}})/\lambda({\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\circ) =(1-\varepsilon)\lambda(\bullet{\ensuremath{|\!|}})/\lambda(\bullet{\ensuremath{|\!|}}\circ)$ says that the ratio between entry and movement rates for white particles is the same in the presence of black or neutral particles. A similar interpretation holds for black particles. - The fact that the same quantity $\varepsilon$ controls the probability that a $\times$ particle enters instead of a black particle or instead of a white particle may be thought of as a curious restriction: it is dictated by technical considerations in the proof, and at present we do not know whether it can easily be circumvented. The TASEP with parameters $\alpha$, $\beta$ and $\gamma$ is recovered by taking $\varepsilon=0$. Indeed, in this case, after the initial neutral particles have exited the system, no new neutral particles are created and the rules are exactly those of the TASEP as presented in Section \[JumpingGilles\]. It will again be useful to reformulate the transitions of the 3-TASEP in terms of applications from the set of configurations with a chosen wall into the set of configurations.
Since there are two possible transitions in the cases ${\ensuremath{|\!|}}\circ$ and $\bullet{\ensuremath{|\!|}}$ we introduce the following two applications: - The application $\vartheta_1:(\tau,i)\to\tau'$ performing at wall $i$ the transitions prescribed by cases $a_1$, $a_2$ and $a_3$, $b'_1$ and $b_2$, $c'_1$ and $c_2$. - The application $\vartheta_2:(\tau,i)\to\tau'$ performing at wall $i$ the transitions prescribed by cases $a_1$, $a_2$ and $a_3$, $b''_1$ and $b_2$, $c''_1$ and $c_2$. Then the transitions of the chain $S_{\alpha\beta\gamma\varepsilon}$ can be described as follows: choose $i=I(t)$ uniformly at random in $\{0,\ldots,n\}$ and set $$S_{\alpha\beta\gamma\varepsilon}(t+1)=\left\{ \begin{array}{llc} \vartheta_1(\tau,i) & \textrm{ with probability} &(1-\varepsilon) \lambda(\tau[i]),\\ \vartheta_2(\tau,i) & \textrm{ with probability} &\varepsilon \lambda(\tau[i]),\\ \tau & \textrm{ otherwise,} \end{array}\right.$$ where $\tau=S_{\alpha\beta\gamma\varepsilon}(t)$, and $\tau[i]$ denotes the local configuration around wall $i$ in $\tau$. Complete configurations for the 3-TASEP --------------------------------------- The complete configurations for the 3-TASEP are concatenations of complete configurations for the TASEP separated by $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns: more explicitly, each complete configuration $\omega$ with $\ell$ $\times$-particles in the first row can be uniquely written $\omega_0|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|\omega_1\cdots|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|\omega_\ell$ where each $\omega_i$ belongs to $\Omega_{n_i}$ for some $n_i\geq0$. 
In other words, these complete configurations are pairs of rows of cells containing particles such that the $\times$-particles always form $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns and such that between two $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns the balance and positivity conditions are satisfied. Let $\Omega_n$ denote the set of complete configurations of length $n$. ![A complete configuration with $n=14$.[]{data-label="fig:example-complete-config-x"}](example-complete-three.eps){width="60mm"} An example of a complete configuration is given in Figure \[fig:example-complete-config-x\]: from left to right the subconfigurations have lengths 3, 0, 1, and 7. The local configuration around wall $i$ in a complete configuration $\omega$, describing the one or two columns surrounding wall $i$, is denoted $\omega[i]$. The following enumerative lemmas are proved in Section \[sec:counting\]. \[lem:count-all\] The cardinality of $\Omega_n$ is $\frac{1}{2}{2n+2\choose n+1}$. \[lem:count-all-refined\] For any $k+\ell+m=n$, the cardinality of the set $\Omega_{k,m}^\ell$ of complete configurations of $\Omega_n$ with $\ell$ $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns, and $k$ black and $m$ white particles on the top row is $\frac{\ell+1}{n+1}{n+1\choose k}{n+1\choose m}$. \[lemma:olaf-le-retour\] For any $\ell+p=n$, the cardinality of the set $\Omega_{n}^\ell$ of complete configurations of $\Omega_n$ with $\ell$ $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns is $\frac{\ell+1}{n+1}{2n+2\choose n-\ell}$. The complete chain $X_{\alpha\beta\gamma\varepsilon}$ ----------------------------------------------------- We shall directly define the chain $X_{\alpha\beta\gamma\varepsilon}$ in terms of two bijections $\bar T_1$ and $\bar T_2$ from $\Omega_n\times\{0,\ldots,n\}$ to itself.
To do this, we partition the set $\Omega_n\times\{0,\ldots,n\}$ into classes, and we first describe for each class $A$ the image $(\omega',j)$ of a pair $(\omega,i)\in A$ by $\bar T_1$. The bijection $\bar T_2$ is then described as a simple variation on $\bar T_1$. As in Section \[sec:informal\], given a complete configuration $\omega$ with top row $\tau$ and a wall $i$, we distinguish the following walls: if the local configuration $\tau[i]$ is $\bullet{\ensuremath{|\!|}}\circ$, ${\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}\circ$, $\bullet{\ensuremath{|\!|}}$ or ${\ensuremath{{\!\times\!}}}{\ensuremath{|\!|}}$, then let $j_1<i$ be the leftmost wall such that there are only white particles in the top row between walls $j_1$ and $i-1$; if the local configuration $\tau[i]$ is $\bullet{\ensuremath{|\!|}}\circ$, $\bullet{\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}$, ${\ensuremath{|\!|}}\circ$ or ${\ensuremath{|\!|}}{\ensuremath{{\!\times\!}}}$, then let $j_2>i$ be the rightmost wall such that there are only black particles in the top row between walls $i+1$ and $j_2$. ![Cases $\bullet{\ensuremath{|\!|}}\circ$ and $\times{\ensuremath{|\!|}}\circ$ for the bijections $\bar T_1$ and $\bar T_2$.[]{data-label="move-right-sweep-x"}](move-right-sweep-x.eps){width="\linewidth"} The action of $\bar T_1$ is described separately for the different cases of local configuration $\omega[i]$: - unstable local configurations in the middle ($i\in\{1,\ldots,n-1\}$): - Cases ${\bullet\atop?}{\ensuremath{|\!|}}{\ensuremath{{\circ\atop\bullet}}}$ and ${\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}{\ensuremath{|\!|}}{\circ\atop\bullet}$. Then $j=j_1$, and $\omega'$ is obtained by moving the $|{\ensuremath{{\circ\atop\bullet}}}|$-column from the right-hand side of wall $i$ to the right-hand side of wall $j$ (Figure \[move-right-sweep-x\]). 
The image $B_{a'}$ of this class consists of pairs $(\omega',j)$ such that: the wall $j$ is the left border ($j=0$) or there is a black or a $\times$ particle on its left-hand side, there is a $|{\ensuremath{{\circ\atop\bullet}}}|$-column on its right-hand side, and the sequence of white particles on the right-hand side of wall $j$ in the top row is followed by a black or a $\times$ particle. - Cases ${\bullet\atop?}{\ensuremath{|\!|}}{\ensuremath{{\circ\atop\circ}}}$ or ${\bullet\atop\circ}{\ensuremath{|\!|}}{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}$. Then $j=j_2$, and $\omega'$ is obtained by removing the two particles that form the ${\bullet\atop}|{\atop\circ}$-diagonal or the $|{\ensuremath{{\bullet\atop\circ}}}|$-column at wall $i$ and replacing them at wall $j$ so that they form a ${\bullet\atop}|{\atop\circ}$-diagonal if there is a white particle on the right-hand side of wall $j$ in the top row, or a $|{\ensuremath{{\bullet\atop\circ}}}|$-column otherwise (Figure \[move-left-sweep-x\]). The image $B_{a''}$ of this class consists of pairs $(\omega',j)$ with a $|{\ensuremath{{\circ\atop\circ}}}|$-column, an $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-column, or the border on the right-hand side of wall $j$ and such that there is a non-empty sequence of black particles on the left-hand side of wall $j$ in the top row, followed by a white or a $\times$ particle. ![Cases $\bullet{\ensuremath{|\!|}}\circ$ and $\bullet{\ensuremath{|\!|}}\times$ with black sweep for the bijections $\bar T_1$ and $\bar T_2$.[]{data-label="move-left-sweep-x"}](move-left-sweep-x.eps){width="\linewidth"} - unstable local configurations on the left border ($i=0$): - Case ${\ensuremath{|\!|}}{\ensuremath{{\circ\atop\bullet}}}$.
Then $j=j_2$, and $\omega'$ is obtained by removing the two particles that form the $|{\ensuremath{{\circ\atop\bullet}}}|$-column on the left border and replacing them at wall $j$ so that they form a ${\bullet\atop}|{\atop\circ}$-diagonal if there is a white particle on the right-hand side of wall $j$ in the top row, or a $|{\ensuremath{{\bullet\atop\circ}}}|$-column otherwise. The image $B_{b'}$ of the class $A_{b'}$ by $\bar T_1$ consists of pairs $(\omega',j)$ with a $|{\ensuremath{{\circ\atop\circ}}}|$-column, an $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-column, or the border on the right-hand side of wall $j$ of $\omega'$ and such that there is a non-empty sequence of black particles on the left-hand side of wall $j$ in the top row, ending at the left border. - Case ${\ensuremath{|\!|}}{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}$. Then $\omega'=\omega$ and $j=0$. The image $B_{b''}$ of the class $A_{b''}$ by $\bar T_1$ consists of pairs $(\omega',0)$ with a $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-column on the left border. - unstable local configurations on the right border ($i=n$): - Case ${\ensuremath{{\bullet\atop\circ}}}{\ensuremath{|\!|}}$. Then $j=j_1$, and $\omega'$ is obtained by removing the rightmost $|{\ensuremath{{\bullet\atop\circ}}}|$-column and forming a $|{\ensuremath{{\circ\atop\bullet}}}|$-column on the right-hand side of wall $j$. The image $B_{c'}$ of the class $A_{c'}$ by $\bar T_1$ consists of pairs $(\omega',j)$ such that: the wall $j$ is the left border ($j=0$) or it has a black or a $\times$ particle on its left-hand side, there is a $|{\ensuremath{{\circ\atop\bullet}}}|$-column on its right-hand side, and the sequence of white particles on the right-hand side of wall $j$ in the top row ends at the right border. - Case ${\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}{\ensuremath{|\!|}}$. 
Then $\omega'=\omega$ and $j=n$. The image $B_{c''}$ of the class $A_{c''}$ by $\bar T_1$ consists of pairs $(\omega',n)$ with a $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-column on the right border. ![The application $\bar T_2$ on the borders.[]{data-label="fig:enter-T2"}](enter-T1.eps){width=".95\linewidth"} ![The application $\bar T_2$ on the borders.[]{data-label="fig:enter-T2"}](enter-T2.eps){width=".95\linewidth"} Finally let $A_d$ denote the set of pairs $(\omega,i)$ such that the local configuration around wall $i$ of $\omega$ is stable. The mapping $\bar T_1$ has no effect on these pairs, and $B_d=A_d$. The application is invertible in each case and the sets $\{B_{a'},B_{a''},B_{b'},B_{b''},B_{c'},B_{c''},B_d\}$ form a partition of $\Omega_n\times\{0,\ldots,n\}$. Hence $\bar T_1$ is a bijection from $\Omega_n\times\{0,\ldots,n\}$ onto itself. The application $\bar T_2$ differs from $\bar T_1$ only at the borders. Consider the involution $Y$ on $\Omega_n\times\{0,\ldots,n\}$ that acts on a pair $(\omega,i)$ only by changing, if $i=0$ or $i=n$, the local configuration $\omega[i]$ according to the following rules: $$\textstyle {\ensuremath{|\!|}}{\ensuremath{{\circ\atop\bullet}}}\;\leftrightarrow\;{\ensuremath{|\!|}}{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}, \qquad\textrm{ and }\quad {\ensuremath{{\bullet\atop\circ}}}{\ensuremath{|\!|}}\;\leftrightarrow\;{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}{\ensuremath{|\!|}}.$$ Then the image of $(\omega,i)$ by $\bar T_2$ is defined to be the image of $Y(\omega,i)$ by $\bar T_1$. In particular $\bar T_2$, being the composition $\bar T_1\circ Y$ of two bijections, is itself a bijection. The action of $\bar T_2$ on the borders is illustrated by Figure \[fig:enter-T2\]. Now let $T_1$ and $T_2$ denote the first components of $\bar T_1$ and $\bar T_2$, respectively.
Then the Markov chain $X_{\alpha\beta\gamma\varepsilon}$ is defined in terms of $T_1$ and $T_2$ exactly as $S_{\alpha\beta\gamma\varepsilon}$ is defined in terms of $\vartheta_1$ and $\vartheta_2$: choose $i=I(t)$ uniformly at random in $\{0,\ldots,n\}$ and set $$X_{\alpha\beta\gamma\varepsilon}(t+1)=\left\{ \begin{array}{llc} T_1(\omega,i) & \textrm{ with probability } & (1-\varepsilon) \lambda(\tau[i]),\\ T_2(\omega,i) & \textrm{ with probability }& \varepsilon \lambda(\tau[i]),\\ \omega & \textrm{ otherwise,} \end{array}\right.$$ where $\omega=X_{\alpha\beta\gamma\varepsilon}(t)$, and $\tau=\textrm{top}(\omega)$. The stationary distribution of $X_{\alpha\beta\gamma\varepsilon}$ and $S_{\alpha\beta\gamma\varepsilon}$ -------------------------------------------------------------------------------------------------------- The parameters $n_y$ and $n_z$ of Section \[sec:arbitrary\] are extended in a straightforward way to complete configurations of $\Omega_n$ by putting labels independently in each subconfiguration delimited by $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns or borders. Then for $\omega\in\Omega_n$, set $$q(\omega)\;=\; \beta^n\gamma^n(1-\varepsilon)^n \left(\frac{\alpha}{\beta}\right)^{n_y(\omega)} \left(\frac{\alpha}{\gamma}\right)^{n_z(\omega)} \left(\frac{\alpha^2\varepsilon} {\beta\gamma(1-\varepsilon)}\right)^{\ell(\omega)},$$ where $\ell(\omega)$ denotes the number of $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns in $\omega$.
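For readers who prefer code, the transition rule and the weight $q(\omega)$ can be sketched in a few lines of Python. This is an illustrative skeleton only: the bijections `T1` and `T2`, the rate function `lam`, and the configuration statistics `n_y`, `n_z`, `ell` are assumed to be supplied by an implementation of the objects defined in the text.

```python
import random

def step(omega, n, T1, T2, lam, eps, rng=random):
    """One step of X_{alpha,beta,gamma,eps}: draw a wall i uniformly in
    {0,...,n}, then apply T1 with probability (1-eps)*lambda(tau[i]),
    T2 with probability eps*lambda(tau[i]), and stay put otherwise."""
    i = rng.randrange(n + 1)
    p = lam(omega, i)                 # rate of the local configuration tau[i]
    u = rng.random()
    if u < (1.0 - eps) * p:
        return T1(omega, i)
    if u < p:
        return T2(omega, i)
    return omega

def weight(n, n_y, n_z, ell, alpha, beta, gamma, eps):
    """Unnormalized stationary weight q(omega), given the statistics
    n_y(omega), n_z(omega) and the number ell(omega) of |xx|-columns."""
    return (beta**n * gamma**n * (1.0 - eps)**n
            * (alpha / beta)**n_y
            * (alpha / gamma)**n_z
            * (alpha**2 * eps / (beta * gamma * (1.0 - eps)))**ell)
```

At $\alpha=\beta=\gamma=1$ and $\varepsilon=1/2$ the weight reduces to $(1/2)^n$ for every $\omega$, which recovers the uniform stationary distribution of the corollary.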
Then Theorem \[thm:generic\] extends verbatim: \[thm:generic-3tasep\] The Markov chain $X_{\alpha\beta\gamma\varepsilon}$ has the following unique stationary distribution: $$\textrm{Prob}(X_{\alpha\beta\gamma\varepsilon}(t)=\omega) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac{q(\omega)}{Z_n}, \qquad \textrm{where } Z_n={\sum_{\omega' \in \Omega_n}q(\omega')},$$ where $q(\omega)$ is the previously defined weight on the complete configurations of the 3-TASEP. Again this theorem immediately yields a combinatorial interpretation of the stationary distribution of the chain $S_{\alpha\beta\gamma\varepsilon}$, via the relation $S_{\alpha\beta\gamma\varepsilon}= \mathrm{top}(X_{\alpha\beta\gamma\varepsilon})$. In particular in the case $\alpha=\beta=\gamma=1$, $\varepsilon=1/2$, we obtain the following corollary on $S=S_{111\frac12}$ and $X=X_{111\frac12}$: Let $\textrm{top}(\omega)$ denote the top row of a complete configuration $\omega$. Then for any initial distributions $S(0)$ and $X(0)$ with $\textrm{top}(X(0))=S(0)$, and any basic configuration $\tau$, $$\textrm{Prob}(S(t) = \tau)\;=\; \textrm{Prob}(\textrm{top}(X(t)) =\tau) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac{\big|\{\omega\in\Omega_n\mid \textrm{top}(\omega)=\tau\}|}{|\Omega_{n}|}.$$ In particular, for any $k+\ell+m=n$, we obtain combinatorially the formula: $$\textrm{Prob}(S(t) \textrm{ contains } k \textrm{ black and } m \textrm{ white particles}) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac{|\Omega^\ell_{k,m}|}{|\Omega_n|} \;=\; \frac{\frac{\ell+1}{n+1}{n+1\choose k}{n+1\choose m}}{\frac{1}{2} {2n+2\choose n+1}}.$$ Theorem \[thm:generic-3tasep\] is an easy consequence of the fact that the two bijections preserve weights in the sense of the following lemma. Recall that $\lambda$ describes the transition probabilities for each possible local configuration. 
\[lem:preserve-x\] The applications $\bar T_1$ and $\bar T_2$ transport together the weight $\lambda$ in the following sense: for all $(\omega',j)\in\Omega\times\{0,\ldots,n\}$, $$(1-\varepsilon)\lambda(\omega_1[i_1])\,q(\omega_1) +\varepsilon\lambda(\omega_2[i_2])\,q(\omega_2) \;=\;\lambda(\omega'[j])\,q(\omega'),$$ where $(\omega_1,i_1)=\bar T_1^{-1}(\omega',j)$ and $(\omega_2,i_2)=\bar T_2^{-1}(\omega',j)$. This lemma is easily verified by a case-by-case analysis similar to that of Lemma \[lem:preserve\]. The proof of Theorem \[thm:generic-3tasep\] then exactly mimics the proof of Theorem \[thm:generic\], using Lemma \[lem:preserve-x\] instead of Lemma \[lem:preserve\]. Periodic boundary conditions {#sec:periodic} ============================ A standard alternative to our definition of the TASEP is to consider periodic boundary conditions: the leftmost cell is considered on the right-hand side of the rightmost cell, or equivalently, the configurations are arranged on a circle (see Figure \[fig:informal-circle\]a; the circle is rigid, not subject to rotation). ![A basic and a complete configuration of the 3-TASEP with periodic boundary conditions.[]{data-label="fig:informal-circle"}](exemple-circle-with-directions.eps){width=".5\linewidth"} Since there are no border walls in these configurations, the Markov chain $\smash{\widehat S_{\alpha\beta\gamma}}$ is defined using only Cases $a_1$, $a_2$, $a_3$ of the transition of the 3-TASEP. Observe that the numbers $k$, $\ell$ and $m$ of black, ${\ensuremath{{\!\times\!}}}$ and white particles do not change during the evolution. The case without $\times$ particles is easily seen to have a uniform stationary distribution, so we concentrate on the case with at least one $\times$ particle. Our approach is easily adapted to deal with this case.
Let $\widehat\Omega_n$ be a new set of complete configurations that are made of two rows of cells arranged on a circle and that are such that the subconfigurations between two $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns, when read in clockwise direction, satisfy the balance and positivity constraints. More precisely, we are interested in the subset $\smash{\widehat\Omega^\ell_{k,m}}$ of configurations of $\smash{\widehat\Omega_n}$ that have $\ell$ $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns, $k$ black and $m$ white particles in the top row. The following lemma is proved in Section \[sec:counting\]. \[lemma:nici\] The cardinality of $\widehat \Omega^\ell_{k,m}$ is ${{n \choose k}{n \choose m}}$. Cases $A_{a'}$ and $A_{a''}$ of the definition of $\bar T_1$ allow one to define a bijection $\widehat T$ from $\smash{\widehat\Omega^\ell_{k,m}\times\{0,\ldots,n-1\}}$ to itself and an associated Markov chain $\widehat X_{\alpha\beta\gamma}$ such that $\widehat S_{\alpha\beta\gamma}\equiv\mathrm{top}(\widehat X_{\alpha\beta\gamma})$. The same argument as in Section \[sec:maximal\] for the chain $X^0$ then immediately yields the fact that $\widehat X=\widehat X_{111}$ has a uniform stationary distribution. In particular: $$\textrm{Prob}(\widehat X(t)=\omega) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac1{|\widehat\Omega^\ell_{k,m}|}\;=\;\frac1{{n \choose k}{n \choose m}}.$$ Furthermore, the statistics $n_y$ and $n_z$ are immediately extended to configurations of $\widehat\Omega_n$ by putting labels independently on every subconfiguration between $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns.
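For small $n$, Lemma \[lemma:nici\] can be checked by brute force. The Python sketch below uses an ad hoc encoding of ours, not the paper's notation: a circular complete configuration is a tuple of column types, `'xx'` for a column of two $\times$ particles and `'bw'`, `'wb'`, `'bb'`, `'ww'` for the top/bottom colours of the other columns; balance and positivity of a block are expressed through the running difference between black and white particles. Like the text, it assumes at least one $\times$ particle.

```python
from itertools import product

# effect of a column on the running difference (#black - #white)
STEP = {'bb': 2, 'ww': -2, 'bw': 0, 'wb': 0}

def block_ok(block):
    """Balance and positivity of a block between two xx-columns."""
    e, prefix = 0, []
    for c in block:
        e += STEP[c]
        prefix.append(e)
    return e == 0 and all(h >= 0 for h in prefix)

def count_circular(n, k, m):
    """Brute-force count of circular complete configurations with
    ell = n - k - m xx-columns, k black and m white particles in the top row."""
    ell, total = n - k - m, 0
    for cols in product(('xx', 'bb', 'ww', 'bw', 'wb'), repeat=n):
        xs = [i for i, c in enumerate(cols) if c == 'xx']
        if len(xs) != ell or ell == 0:
            continue
        if sum(c[0] == 'b' for c in cols) != k or sum(c[0] == 'w' for c in cols) != m:
            continue
        arcs = zip(xs, xs[1:] + [xs[0] + n])   # consecutive xx positions, clockwise
        if all(block_ok([cols[j % n] for j in range(a + 1, b)]) for a, b in arcs):
            total += 1
    return total

assert count_circular(2, 1, 0) == 2   # C(2,1) * C(2,0)
assert count_circular(3, 1, 1) == 9   # C(3,1) * C(3,1)
assert count_circular(3, 2, 0) == 3   # C(3,2) * C(3,0)
```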
Lemma \[lem:preserve-x\] adapts in a straightforward way (with $\varepsilon=0$), and allows one to express the stationary distribution in the general case: The Markov chain $\widehat X_{\alpha\beta\gamma}$ has the following unique stationary distribution: $$\textrm{Prob}(\widehat X_{\alpha\beta\gamma}(t)=\omega) \mathop{\longrightarrow}_{t\to\infty}\frac{q(\omega)}{\widehat Z_n}, \qquad\textrm{ where }\widehat Z_n=\sum_{\omega'\in\widehat \Omega_n}q(\omega'),$$ where $q(\omega)=\beta^n\gamma^n(\alpha/\beta)^{n_y(\omega)}(\alpha/\gamma)^{n_z(\omega)}$. Finally the stationary distribution of $\widehat S_{\alpha\beta\gamma}$ is recovered from the relation $\widehat S_{\alpha\beta\gamma}\equiv \mathrm{top}(\widehat X_{\alpha\beta\gamma})$. In particular, for $\widehat S=\widehat S_{111}$, $$\textrm{Prob}(\widehat S(t)=\tau) \;\mathop{\longrightarrow}_{t\rightarrow\infty}\; \frac{|\{\omega\in\widehat\Omega^\ell_{k,m} \mid\textrm{top}(\omega)=\tau\}|} {|\widehat\Omega^\ell_{k,m}|}.$$ For instance, a configuration $\tau$ of the form $$|{\ensuremath{{\!\times\!}}}| \underbrace{\circ\cdots\circ}_{m_1}| \underbrace{\bullet\cdots\bullet}_{k_1}| {\ensuremath{{\!\times\!}}}| \underbrace{\circ\cdots\circ}_{m_2}| \underbrace{\bullet\cdots\bullet}_{k_2}| {\ensuremath{{\!\times\!}}}|\cdots| {\ensuremath{{\!\times\!}}}| \underbrace{\circ\cdots\circ}_{m_\ell}| \underbrace{\bullet\cdots\bullet}_{k_\ell}|$$ for some $k_1+\ldots+k_\ell=k$, $m_1+\ldots+m_\ell=m$ corresponds to only one complete configuration $$\textstyle |{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}| \underbrace{\textstyle{\ensuremath{{\circ\atop\bullet}}}\cdots{\ensuremath{{\circ\atop\bullet}}}}_{m_1}| \underbrace{\textstyle{\ensuremath{{\bullet\atop\circ}}}\cdots{\ensuremath{{\bullet\atop\circ}}}}_{k_1}| {\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}| \underbrace{\textstyle{\ensuremath{{\circ\atop\bullet}}}\cdots{\ensuremath{{\circ\atop\bullet}}}}_{m_2}|
\underbrace{\textstyle{\ensuremath{{\bullet\atop\circ}}}\cdots{\ensuremath{{\bullet\atop\circ}}}}_{k_2}| {\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|\cdots| {\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}| \underbrace{\textstyle{\ensuremath{{\circ\atop\bullet}}}\cdots{\ensuremath{{\circ\atop\bullet}}}}_{m_\ell}| \underbrace{\textstyle{\ensuremath{{\bullet\atop\circ}}}\cdots{\ensuremath{{\bullet\atop\circ}}}}_{k_\ell}|$$ (because of the positivity constraints on blocks between $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns), and thus has probability $1/{n\choose k}{n\choose m}$ in the stationary distribution of $\widehat S$. Irreducibility {#sec:paths} ============== In this section we verify that the Markov chains $X^0$, $\widehat X$ and $X$ are irreducible, *i.e.* that there is a positive probability to go from any configuration $\omega$ to any other one $\omega'$. In other terms we need to prove that the transition graphs of these three chains are connected. The proof is based on an observation about iterating the bijections $\bar T$, or $\bar T_1$ or $\bar T_2$, and on induction on $n$. To every pair $(\omega,i)$ of $\Omega_n\times\{0,\ldots,n\}$ we associate a reduced configuration $\omega^i$ in $\Omega_{n-1}$, obtained from $\omega$ by deleting two particles around the wall $i$ in a way that depends on the local configuration: - Cases ${\bullet\atop?}{\ensuremath{|\!|}}{\ensuremath{{\circ\atop\bullet}}}$, ${\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}{\ensuremath{|\!|}}{\ensuremath{{\circ\atop\bullet}}}$ and ${\ensuremath{|\!|}}{\ensuremath{{\circ\atop\bullet}}}$. The reduced configuration $\omega^i$ is obtained by removing the $|{\ensuremath{{\circ\atop\bullet}}}|$-column on the right-hand side of wall $i$. - Case ${\bullet\atop?}{\ensuremath{|\!|}}{\ensuremath{{\circ\atop\circ}}}$. 
The reduced configuration $\omega^i$ is obtained by removing the two particles forming the ${\bullet\atop}|{\atop\circ}$-diagonal around wall $i$. - Cases ${\ensuremath{{\bullet\atop\circ}}}{\ensuremath{|\!|}}{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}$ and ${\ensuremath{{\bullet\atop\circ}}}{\ensuremath{|\!|}}$. The reduced configuration $\omega^i$ is obtained by removing the $|{\ensuremath{{\bullet\atop\circ}}}|$-column on the left-hand side of wall $i$. - Cases ${\ensuremath{|\!|}}{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}$ and ${\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}{\ensuremath{|\!|}}$. The reduced configuration is obtained by removing the $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-column on the border. \[lem:cycle\] Let $\tilde\omega$ be a configuration of $\Omega_{n-1}$. Let $S(\tilde\omega)$ be the set of pairs $(\omega,i)$ of $\Omega_n{\ensuremath{{\!\times\!}}}\{0,\ldots,n\}$ having $\tilde\omega$ as reduced configuration, *i.e.* such that $\omega^i=\tilde\omega$. In particular let $\omega_0$ be the configuration $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|\tilde\omega$ and $\omega_n$ be the configuration $\tilde\omega|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$, and define $S^0(\tilde\omega)=S(\tilde\omega)\setminus\{(\omega_0,0),(\omega_n,n)\}$. Then: - The set $S^0(\tilde\omega)$ is a cyclic orbit of $\bar T_1$: given $(\omega,i)\in S^0(\tilde\omega)$, all other elements of $S^0(\tilde\omega)$ can be reached by successive applications of $\bar T_1$. - The set $S(\tilde\omega)$ is a cyclic orbit of $\bar T_2$. - If $\tilde\omega\in\Omega^0_{n-1}$ then $S^0(\tilde\omega)\subset\Omega^0_n$ and $S^0(\tilde\omega)$ is a cyclic orbit of $\bar T$. 
As can be checked on Figures \[move-right-sweep\] and \[move-right-sweep-x\], starting from a pair $(\omega,i)$ of the corresponding classes and iterating $\bar T_1$, $\bar T_2$ or $\bar T$, the selected wall moves to the left with the pair of black and white particles, and successively stops on the right-hand side of every black or ${\ensuremath{{\!\times\!}}}$ particle of the top row, until it reaches the left border. Similarly, as can be checked on Figures \[move-left-sweep\] and \[move-left-sweep-x\], iterating $\bar T_1$, $\bar T_2$ or $\bar T$ from a pair $(\omega,i)$ of the corresponding classes, the selected wall moves to the right with the pair of black and white particles, stopping on the left-hand side of every white and ${\ensuremath{{\!\times\!}}}$ particle of the top row, until it reaches the right border. As shown by Figures \[fig:enter-T1\]–\[fig:enter-T2\], the application $\bar T_2$, and the applications $\bar T_1$ or $\bar T$ behave differently when the border is reached: $\bar T_2$ visits the configurations $\omega_0$ or $\omega_n$ while $\bar T_1$ and $\bar T$ skip them and immediately restart moving in the opposite direction. Starting from an element $(\omega,i)$, all other elements of $S(\tilde\omega)$ (respectively $S^0(\tilde\omega)$) are thus visited in a cycle by successive applications of $\bar T_2$ (respectively $\bar T_1$ or $\bar T$). Lemma \[lem:cycle\] provides us with cycles in the transition graph on $\Omega_n$, and each cycle is associated to a reduced configuration of $\Omega_{n-1}$. The next lemma transports transitions from $\Omega_{n-1}$ to $\Omega_n$. \[lem:releve\] Let $(\tilde\omega',j)=\bar T_1(\tilde\omega,i)$ be a transition between two configurations of $\Omega_{n-1}$. Then there exist $k,i_+,j_+$ and $\omega,\omega'$ such that $(\omega,k)\in S(\tilde\omega)$, $(\omega',k)\in S(\tilde\omega')$, and $(\omega',j_+)=\bar T_1(\omega,i_+)$. The same holds for $\bar T_2$.
For $\bar T_1$ observe that in each case of Figure \[move-right-sweep-x\], and on the second leftmost case of Figure \[fig:enter-T1\], a $|{\ensuremath{{\circ\atop\bullet}}}|$-column can be inserted on the left border without interfering with the action of $\bar T_1$: take $\omega=|{\ensuremath{{\circ\atop\bullet}}}|\tilde\omega$, $\omega'=|{\ensuremath{{\circ\atop\bullet}}}|\tilde\omega'$, $k=0$, $i_+=i+1$, $j_+=j+1$. Similarly in each case of Figure \[move-left-sweep-x\], and on the leftmost case of Figure \[fig:enter-T1\], a $|{\ensuremath{{\bullet\atop\circ}}}|$-column can be inserted on the right border without interfering with the action of $\bar T_1$: take $\omega=\tilde\omega|{\ensuremath{{\bullet\atop\circ}}}|$, $\omega'=\tilde\omega'|{\ensuremath{{\bullet\atop\circ}}}|$, $k=n$, $i_+=i$, $j_+=j$. For $\bar T_2$ observe that in each case of Figures \[move-left-sweep-x\]–\[fig:enter-T2\], an $|{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}|$-column can be inserted, either on the left or on the right border, without interfering with the action of $\bar T_2$. Lemma \[lem:releve\] gives a transition between an element of the cycle associated to $\tilde\omega$ and an element of the cycle associated to $\tilde\omega'$. Taking the connectivity of the transition graph on $\Omega_{n-1}$ as induction hypothesis, we conclude that all cycles of Lemma \[lem:cycle\] belong to the same connected component of the transition graph defined by $\bar T_2$ on $\Omega_n$. Since every element of $\Omega_n$ belongs to a cycle, this concludes the proof of the irreducibility of $X$. As opposed to this, the transition graph defined by $\bar T_1$ is seen to connect only configurations with the same number of $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns.
In particular the chain $X_{\alpha\beta\gamma0}$ with $\varepsilon=0$ is not irreducible, but the transition graph defined by $\bar T$ (or $\bar T_1$) on $\Omega^0_n$ is connected and the chain $X^0_{\alpha\beta\gamma}$ is irreducible. Finally the chain $\widehat X_{\alpha\beta\gamma}$ is seen to be irreducible in a similar manner as soon as there is at least one $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-column. The number of complete configurations and the cycle lemma {#sec:counting} ========================================================= **Lemma \[lem:count-all\]**. [ *The cardinality of $\Omega_n$ is $\frac{1}{2} {2n+2\choose n+1}$.*]{} Let $\Gamma_{n+1}$ be the set of (unconstrained) configurations of $n+1$ black and $n+1$ white particles distributed between two rows of $n+1$ cells, so that $|\Gamma_{n+1}|={2n+2 \choose n+1}$. Among these configurations, we restrict our attention to the subset $\overline{\Gamma}_{n+1}$ of those ending with a $|{\ensuremath{{\bullet\atop\circ}}}|$- or a $|{\ensuremath{{\bullet\atop\bullet}}}|$-column. Exchanging $\bullet$ and $\circ$ particles is a bijection between $\overline{\Gamma}_{n+1}$ and its complement in $\Gamma_{n+1}$, so that $|\overline{\Gamma}_{n+1}|=\frac{1}{2}{2n+2\choose n+1}$. The proof of the lemma consists in a bijection $\phi$ between $\Omega_n$ and $\overline{\Gamma}_{n+1}$ (see Figure \[fig:number\]). Given $\omega \in \Omega_n$, its image $\phi(\omega)$ is obtained as follows: First, if [the number of $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns of $\omega$ is even,]{} add a $|{\ensuremath{{\bullet\atop\circ}}}|$-column at the end of $\omega$, otherwise add to it an $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-column.
Then replace the first half of the $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns by $|{\ensuremath{{\circ\atop\circ}}}|$-columns, and the remaining half by $|{\ensuremath{{\bullet\atop\bullet}}}|$-columns (from left to right). By construction the resulting $\phi(\omega)$ belongs to $\overline{\Gamma}_{n+1}$. Conversely, consider $\gamma\in\overline\Gamma_{n+1}$, and let $d=\min(E(j))$ be the *depth* of $\gamma$. Then set $j_i=\min\{j\mid E(j)=-2i\}$, and $j'_i=\max\{j\mid E(j-1)=-2i\}$, for $i=1,\ldots,|d/2|$, and define the application $\psi$ that first changes columns $j_i$ and $j'_i$ into $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns for all $i=1,\ldots,|d/2|$, and then removes the last column. By construction the blocks between two of the modified columns of $\gamma$ satisfy the positivity condition, so that $\psi(\gamma)\in\Omega_{n}$. Finally the applications $\phi$ and $\psi$ are clearly inverses of each other. ![From (i) an element of $\overline\Gamma_{n+1}$, to (ii) one of $\Omega_n$. The $(B(j)-W(j))_{j=0..n+1}$ are given under both configurations and graphically represented.[]{data-label="fig:number"}](number-complete.eps){width="120mm"} **Lemmas \[lemma:ciclico\] and \[lem:count-all-refined\].** [*For any $k+\ell+m=n$, the cardinality of the set $\Omega^\ell_{k,m}$ of complete configurations with $\ell$ $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns, $k$ black and $m$ white particles in the top row is $\frac{\ell+1}{n+1}{n+1\choose k}{n+1\choose m}$.* ]{} The statement is verified using the cycle lemma (see [@lothaire Ch. 11], or [@stanley Ch. 5]). Denote by $\Delta^{\ell+1}_{n}$ the set of configurations with $p=n-\ell=k+m$ black and $p+2\ell+2$ white particles distributed between two rows of $n+1$ cells.
Then the cardinality of the subset $\Delta^{\ell+1}_{k,m}$ of elements of $\Delta^{\ell+1}_{n}$ that have $k$ black particles in the top row and the other $m$ in the bottom row is $\smash{{n+1 \choose k}{n+1 \choose m}}$. In such a configuration the number of white particles exceeds by $2\ell+2$ that of black particles, so that $E(n+1)=-2\ell-2$. Given $\omega$ in $\Delta^{\ell+1}_{k,m}$, let $d=\min(E(j))$ be the depth of $\omega$, and set $j_i=\min\{j\mid E(j)=d+2i\}$, for $i=0,\ldots,\ell$. By construction, these $\ell+1$ columns are $|{\ensuremath{{\circ\atop\circ}}}|$-columns. On the one hand, let $\bar{\Delta}^{\ell+1}_{k,m}$ be the set of pairs $(\omega,j)$ where $\omega\in\Delta^{\ell+1}_{k,m}$ and $j\in\{j_0,\ldots,j_\ell\}$, so that $|\bar{\Delta}^{\ell+1}_{k,m}|= {n+1 \choose k}{n+1\choose m}\cdot (\ell+1)$. On the other hand, define the set $\bar{\Omega}^{\ell+1}_{k,m}$ of pairs $(\omega',i)$ where $\omega'$ is obtained from an element of $\Omega^{\ell}_{k,m}$ by adding a final $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-column, and $i\in\{0,\ldots,n\}$. By construction, $|\bar{\Omega}^{\ell+1}_{k,m}|= |\Omega^\ell_{k,m}|\cdot (n+1)$. The proof of the lemma consists in a bijection $\phi$ between $\bar{\Delta}^{\ell+1}_{k,m}$ and $\bar{\Omega}^{\ell+1}_{k,m}$ (see Figure \[fig:ciclico\]). Given $(\omega,j) \in \bar{\Delta}^{\ell+1}_{k,m}$, let $\omega_1$ denote the first $j$ columns of $\omega$, and $\omega_2$ the $n+1-j$ others. Then by construction of $j$, the concatenation $\omega_2|\omega_1$ satisfies $E(i)>-2\ell-2$ for $i=1,\ldots,n$, and $E(n+1)=-2\ell-2$. This implies that $\omega_2|\omega_1$ decomposes as a sequence $\omega'_0,\omega'_1,\ldots,\omega'_{\ell}$ of $\ell+1$ (possibly empty) blocks that satisfy the positivity constraint, each followed by a $|{\ensuremath{{\circ\atop\circ}}}|$-column. 
Let $\omega'$ be obtained by replacing these $\ell+1$ $|{\ensuremath{{\circ\atop\circ}}}|$-columns by $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns. Then the map $(\omega,j)\to(\omega',n+1-j)$ is a bijection of $\bar\Delta^{\ell+1}_{k,m}$ onto $\bar{\Omega}^{\ell+1}_{k,m}$: the inverse bijection is readily obtained by first changing the $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns back into $|{\ensuremath{{\circ\atop\circ}}}|$-columns, and then recovering the factorization $\omega_2|\omega_1$ from the fact that $\omega_2$ has $n+1-j$ columns. ![(i) An element of $\bar\Delta^{\ell+1}_{k,m}$ (with $\ell=3$ and column $j=6$ colored), (ii) its conjugate (with column $n+1-j$ colored), and (iii) the corresponding element of $\Omega^\ell_{k,m}$. The sequence $(B(j)-W(j))_{j=0..n+1}$ is given under each configuration and graphically represented.[]{data-label="fig:ciclico"}](cycle-lemma-2.eps){width="120mm"} **Lemmas \[lemma:olaf\] and \[lemma:olaf-le-retour\].** [ *The cardinality of the set $\Omega^\ell_{n}$ of complete configurations of $\Omega_n$ that have $\ell$ $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns is $\frac{\ell+1}{n+1} { 2n+2 \choose n-\ell}$.* ]{} The proof uses the same arguments as the proof of Lemma \[lem:count-all-refined\]. The only difference is that, instead of counting elements of $\Delta^{\ell+1}_{k,m}$ with $k$ black particles in the top row and $m$ in the bottom row, we count elements of $\Delta^{\ell+1}_{n}$, the set of configurations of $n-\ell$ black particles and $n+2+\ell$ white particles distributed in two rows. Hence the previous factor $|\Delta^{\ell+1}_{k,m}|={n+1\choose k}{n+1\choose m}$ is replaced by $|\Delta^{\ell+1}_n|={2n+2 \choose n-\ell}$.
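The counting lemmas are mutually consistent, and the consistency is easy to check numerically: summing the formula of Lemma \[lem:count-all-refined\] over $k+m=n-\ell$ gives Lemma \[lemma:olaf\] (by the Vandermonde convolution), and summing over $\ell$ recovers $|\Omega_n|$ from Lemma \[lem:count-all\]. A quick verification in exact arithmetic:

```python
from fractions import Fraction
from math import comb

def refined(n, k, m):
    """|Omega^l_{k,m}| = (l+1)/(n+1) C(n+1,k) C(n+1,m), with l = n-k-m."""
    ell = n - k - m
    return Fraction((ell + 1) * comb(n + 1, k) * comb(n + 1, m), n + 1)

def by_ell(n, ell):
    """|Omega^l_n| = (l+1)/(n+1) C(2n+2, n-l)."""
    return Fraction((ell + 1) * comb(2 * n + 2, n - ell), n + 1)

for n in range(1, 9):
    for ell in range(n + 1):
        # summing over k + m = n - ell recovers Lemma olaf
        assert sum(refined(n, k, n - ell - k) for k in range(n - ell + 1)) == by_ell(n, ell)
    # summing over ell recovers |Omega_n| = (1/2) C(2n+2, n+1)
    assert sum(by_ell(n, ell) for ell in range(n + 1)) == Fraction(comb(2 * n + 2, n + 1), 2)
```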
**Lemma \[lemma:nici\].** [*The number $|\widehat\Omega^\ell_{k,m}|$ of configurations of $\widehat\Omega_{n}$ having $\ell$ $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns, $k$ black particles at the top, and $m$ at the bottom is ${n \choose k}{n \choose m}$.* ]{} Recall that $\Delta^\ell_{k,m}$ denotes configurations of length $n$ with $k$ black and $m+\ell$ white particles in the top row, and $m$ black and $k+\ell$ white particles in the bottom row, so that $\smash{|\Delta^\ell_{k,m}|={n \choose k}{n \choose m}}$. In order to prove the statement of the lemma we show that $\Delta^\ell_{k,m}$ and ${\widehat\Omega^\ell_{k,m}}$ are in bijection. Let $\smash{\delta \in \Delta^\ell_{k,m}}$, and consider its depth $d=\min(E(i))$ and the $\ell$ columns $j_i=\min\{j\mid E(j)=d+2i\}$, $i=0,\ldots, \ell-1$, as in the proof of Lemma \[lemma:olaf\]. By definition of these columns, the positivity condition is satisfied by each block between two of them. Moreover, by definition of $j_0$ and $j_{\ell-1}$, the positivity condition is also satisfied by the concatenation $\omega_\ell|\omega_0$ of the final block $\omega_\ell$ and the initial block $\omega_0$. Hence transforming the columns $j_0,\ldots,j_{\ell-1}$ into $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns, and arranging the two rows in a circle by fusing walls $0$ and $n$ at the apex yields a configuration $\phi(\delta)$ of $\widehat\Omega^\ell_{k,m}$ (recall that these configurations are not considered up to rotation). Conversely, given $\omega$ in $\widehat\Omega^\ell_{k,m}$, a unique element $\delta$ of $\Delta^\ell_{k,m}$ such that $\phi(\delta)=\omega$ is obtained by opening at the apex and transforming $|{\ensuremath{{{\ensuremath{{\!\times\!}}}\atop{\ensuremath{{\!\times\!}}}}}}|$-columns into $|{\ensuremath{{\circ\atop\circ}}}|$-columns.
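The proofs of this section all rely on the same marking device: compute the running difference $E(j)$ of black minus white particles among the first $j$ columns, take the depth $d=\min E$, and mark the first column at which $E$ reaches each of the levels $d, d+2, \ldots$ The illustrative helper below (the encoding and names are ours) makes the device concrete; in the situations used above the marked levels are negative, so each marked column is necessarily a column of two white particles.

```python
STEP = {'bb': 2, 'ww': -2, 'bw': 0, 'wb': 0}   # effect of a column on E

def depth_and_marks(cols, nmark):
    """Running E(j), depth d = min E, and the marked columns
    j_i = min{ j : E(j) = d + 2i } for i = 0,...,nmark-1 (columns 1-indexed)."""
    E = [0]
    for c in cols:
        E.append(E[-1] + STEP[c])
    d = min(E)
    marks = [min(j for j in range(1, len(E)) if E[j] == d + 2 * i)
             for i in range(nmark)]
    return E, d, marks

# one extra pair of white particles: E ends at -2, depth -2, one mark
E, d, marks = depth_and_marks(['bw', 'ww', 'wb'], 1)
assert (E, d, marks) == ([0, 0, -2, -2], -2, [2])
assert ['bw', 'ww', 'wb'][marks[0] - 1] == 'ww'   # a new low is reached on a ww-column
```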
Conclusions and relations to Brownian excursions {#sec:conclusion} ================================================ The starting point of this paper was a “combinatorial Ansatz”: the stationary distribution of the two-particle TASEP with boundaries can be expressed in terms of Catalan numbers and hence should have a nice combinatorial interpretation. In our interpretation, configurations of the TASEP are completed by a (usually hidden) second row in which particles go back. In the most interesting case $\alpha=\beta=\gamma=1$, the resulting chain has a uniform stationary distribution so that the probability of a given TASEP configuration just reflects the diversity of possible rows hidden below it. We do not claim that our combinatorial interpretation is of any physical relevance. However, apart from explaining the “magical” occurrence of Catalan numbers in the problem, it sheds new light on the recent results of Derrida *et al.* [@Derrida03brownian] connecting the TASEP with the Brownian excursion. More precisely, using explicit calculations, Derrida *et al.* show that the density of black particles in configurations of the two-particle TASEP can be expressed in terms of a pair $(e_t,b_t)$ of independent processes, a Brownian excursion $e_t$ and a Brownian motion $b_t$. In our interpretation these two quantities appear at the discrete level, associated to each complete configuration $\omega$ of $\Omega^0_n$: - The role of the Brownian excursion for $\omega$ is played by the halved differences $e(i)=\frac12(B(i)-W(i))$ between the number of black and white particles sitting on the left of wall $i$, for $i=0,\ldots,n$. By definition of complete configurations, $(e(i))_{i=0,\ldots,n}$ is a discrete excursion, that is, $e(0)=e(n)=0$, $e(i)\geq0$ and $|e(i)-e(i-1)|\in\{0,1\}$, for $i=1,\ldots, n$.
- The role of the Brownian motion is played for $\omega$ by the differences $b(i)=B_{top}(i)-B_{bot}(i)$ between the number of black particles sitting in the top and in the bottom row, on the left of wall $i$, for $i=0,\ldots, n$. This quantity $(b(i))_{i=0,\ldots,n}$ is a discrete walk, with $|b(i)-b(i-1)|\in\{0,1\}$ for $i=1,\ldots,n$. Since $e(i)+b(i)=2B_{top}(i)-i$, the functions $e$ and $b$ allow one to describe the cumulated number of black particles in the top row of a complete configuration. Accordingly, the density of black particles in a given segment $(i,j)$ is $(B_{top}(j)-B_{top}(i))/(j-i)=\frac12+\frac{e(j)-e(i)}{2(j-i)} +\frac{b(j)-b(i)}{2(j-i)}$. This is a discrete version of the quantity considered by Derrida *et al.* in [@Derrida03brownian]. Now the two walks $e(i)$ and $b(i)$ are correlated since one is stationary when the other is not, and vice-versa: $|e(i)-e(i-1)|+|b(i)-b(i-1)|=1$. Given $\omega$, let $I_e=\{\alpha_1<\ldots<\alpha_p\}$ be the set of indices of $|{\ensuremath{{\bullet\atop\bullet}}}|$- and $|{\ensuremath{{\circ\atop\circ}}}|$-columns, and $I_b=\{\beta_1<\ldots<\beta_q\}$ the set of indices of $|{\ensuremath{{\bullet\atop\circ}}}|$- and $|{\ensuremath{{\circ\atop\bullet}}}|$-columns ($p+q=n$). Then the walk $e'$ defined by $e'(0)=0$ and $e'(i)=e(\alpha_i)$ is the excursion obtained from $e$ by ignoring its stationary steps, and the walk $b'$, defined by $b'(0)=0$ and $b'(i)=b(\beta_i)$, is obtained from $b$ in the same way. Conversely, given a simple excursion $e'$ of length $p$, a simple walk $b'$ of length $q$ and a subset $I_e$ of $\{1,\ldots,p+q\}$ of cardinality $p$, two correlated walks $e$ and $b$, and thus a complete configuration $\omega$ can be uniquely reconstructed. The consequence of this discussion is that the uniform distribution on $\Omega^0_n$ corresponds to the uniform distribution of triples $(I_e,e',b')$ where, given $I_e$, the processes $e'$ and $b'$ are independent.
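Extracting the two discrete walks from a complete configuration is straightforward. In the sketch below (again an encoding of ours, with `'bb'`/`'ww'` the columns that move $e$ and `'bw'`/`'wb'` those that move $b$), the function returns $e$, $b$ and the triple $(I_e, e', b')$ of the preceding paragraph:

```python
def walks(cols):
    """Discrete excursion e(i) = (B(i)-W(i))/2, walk b(i) = Btop(i)-Bbot(i),
    and the decomposition (I_e, e', b') obtained by separating the two
    kinds of steps."""
    e, b = [0], [0]
    I_e, e_prime, b_prime = [], [0], [0]
    for i, c in enumerate(cols, start=1):
        de = {'bb': 1, 'ww': -1}.get(c, 0)
        db = {'bw': 1, 'wb': -1}.get(c, 0)
        e.append(e[-1] + de)
        b.append(b[-1] + db)
        if de != 0:
            I_e.append(i)
            e_prime.append(e[-1])      # e': e with its stationary steps removed
        else:
            b_prime.append(b[-1])      # b': b with its stationary steps removed
    assert e[0] == e[-1] == 0 and min(e) >= 0, "e must be a discrete excursion"
    return e, b, I_e, e_prime, b_prime

e, b, I_e, e_p, b_p = walks(['bb', 'bw', 'wb', 'ww'])
assert e == [0, 1, 1, 1, 0] and b == [0, 0, 1, 0, 0]
assert I_e == [1, 4] and e_p == [0, 1, 0] and b_p == [0, 1, 0]
```

One checks on this example that $e(i)+b(i)=2B_{top}(i)-i$, as stated above.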
A direct computation shows that in the large $n$ limit, with probability exponentially close to 1, a random configuration $\omega$ is described by a pair $(e',b')$ of walks of roughly equal lengths $n/2+O(n^{1/2+\varepsilon})$. In particular, up to multiplicative constants, the normalized pairs $(\frac{e'(tn/2)}{n^{1/2}},\frac{b'(tn/2)}{n^{1/2}})$ and $(\frac{e(tn)}{n^{1/2}},\frac{b(tn)}{n^{1/2}})$ both converge to the same pair $(e_t,b_t)$ of independent processes, with $e_t$ a standard Brownian excursion and $b_t$ a standard Brownian motion. Another possible outcome of our approach could be an explicit construction of a continuum TASEP by taking the limit of the Markov chain $X$, viewed as a Markov chain on pairs of walks. An appealing way to give a geometric meaning to the transitions in the continuum limit could be to use a representation in terms of parallelogram polyominoes [@stanley], using the process $e(t)$ (or $e_t$ in the continuum limit) to describe the width of the polyomino and the process $b(t)$ (or $b_t$ in the continuum limit) to describe the vertical displacement of its spine. Acknowledgments. {#acknowledgments. .unnumbered} ---------------- Referees are warmly thanked for their great help in improving the paper.
--- abstract: 'In a developmental framework, autonomous robots need to explore the world and learn how to interact with it. Without an a priori model of the system, this opens the challenging problem of having robots master their interface with the world: how to perceive their environment using their sensors, and how to act in it using their motors. The sensorimotor approach of perception claims that a naive agent can learn to master this interface by capturing regularities in the way its actions transform its sensory inputs. In this paper, we apply such an approach to the discovery and mastery of the visual field associated with a visual sensor. A computational model is formalized and applied to a simulated system to illustrate the approach.' author: - bibliography: - 'bibICDL.bib' title: Autonomous Grounding of Visual Field Experience through Sensorimotor Prediction --- Introduction {#sec:Introduction} ============ As advocated by the developmental approach of robotics, a truly autonomous agent should explore its environment and learn on its own how to interact with it [@cangelosi2015developmental]. For a *tabula rasa* agent this implies discovering that such an environment does exist, that it has a structure and properties, but also that there exists an interface [@hoffman2015interface] to this external world: the body which mediates perception and action. Having this knowledge emerge in a naive agent represents an intimidating challenge. This is in part why traditional robotic approaches prefer relying on engineers to hand-code this knowledge into robots in the form of sensory processing, decision making, and control algorithms [@brady2012robotics]. However, such an a priori definition of perception and action presents limitations and risks. It is too rigid to allow the robot to adapt to unforeseen conditions or to develop new ways to interact with the environment. 
Moreover, it is too complex for the system’s designers to encompass every aspect of the rich robot-environment interaction. And above all, engineers’ intuitions about the way animals perceive and act might simply be incorrect. In order to overcome those limitations, one has to tackle the initial challenge of understanding how a naive agent can learn to interact with the world. The problem is the following: how to make sense and use of the uninterpreted sensorimotor flow a naive agent has access to? Without a priori knowledge, the incoming sensory flow is like static snow on a screen - furthermore not necessarily encoding an image - and the outgoing motor flow is an unknown choice of actions whose consequences in the world are hidden. Confronted with this difficulty, the large majority of unsupervised approaches propose to re-encode sensory information based on its statistical properties. This is for instance the case of some currently very successful Deep Learning methods. However, as underlined by the final supervision stage of deep convolutional object recognition [@krizhevsky2012imagenet], those filtered sensory inputs remain uninterpreted for the naive agent. It is as if the static snow image had been filtered to generate a new (hopefully smaller) one. Notably, most of those approaches focus on processing the sensory flow and leave aside the other facet of the problem, the motor flow. Another approach is required to make initially uninterpreted sensory input useful and let a naive agent know how it can act in the world. The sensorimotor approach of perception, inspired by the sensorimotor contingencies theory (SMCT), is one promising proposal [@o2001sensorimotor]. It claims that perception is *mastering the way sensory inputs can be actively transformed by actions*.
Although the original SMCT does span a larger scope of arguments related to cognition and consciousness [@oregan2011red], our sensorimotor approach of perception focuses on the pragmatic aspect of this claim. Perception can be acquired by letting a robot explore the world and discover how its motor commands transform its sensory inputs. Intuitively, this means that a naive agent can explore its motor space and check how it changes the incoming static snow image. The structure in the external world will thus induce structure in the generated transformations. Contrary to passive sensory processing, though, this structure will later be useful to the agent, as it describes how the agent can navigate its sensorimotor space and allows it to select actions to reach a (sensorimotor) goal. This core idea also blurs the boundary between perception and action, as the agent learns to master its interface with the world, including both how it perceives (how sensory inputs can be actively transformed) and how it acts (the dual: how its actions transform sensory inputs). Numerous experimental results suggest that such a sensorimotor account of perception is relevant [@bach2003sensory; @kaspar2014experience; @witzel2015determines]. In particular, recent results show that humans learn to master their visual field, that is the way ocular saccades transform visual sensory inputs [@herwig2014predicting]. More precisely, they learn the relations between different sensory inputs encoding the same visual feature at different positions on the retina, as well as the motor commands between them. Such a relation is, for instance, how a vertical edge is precisely encoded in the fovea but coarsely encoded when projected in the periphery of the retina after a given saccade.
It has been shown in artificial setups that modified relations can be learned, leading to an altered visual field experience even after perceptive capacities have stabilized in adult subjects (for instance, associating two periodic visual features of different frequencies in the fovea and the periphery) [@herwig2014predicting]. Taking inspiration from this work and the sensorimotor approach of perception, this paper proposes a formalization of the visual field mastery problem. It is directly in line with previous work in which a simpler setup was considered [@laflaquiere2015fov]. Below, a computational model is proposed to describe how sensorimotor transformations related to a moving visual sensor can be discovered and captured by a naive agent. A simulation is introduced to illustrate and evaluate the approach on a visual search task. Finally results are analyzed and current limitations as well as future extensions of the approach are discussed. Problem formulation {#sec:Problem formulation} =================== ![Simplified illustration of a retina representing the non-uniform distribution of light-sensitive cells and the resulting variable sensory encoding of visual features. Visual features are projected on the retina and shift when the eye saccades. The sensory encoding of visual features depends on where their projection lands. Resolution is for instance significantly higher in the fovea (center of the retina) than in periphery.[]{data-label="fig:fig1"}](figure1.png){width="0.9\linewidth"} We are interested in agents equipped with a visual sensor, that is a 2D array of pixels collecting information from a limited part of the environment. Taking inspiration from the human retina, we consider that the sensor array is divided into multiple *receptive fields* gathering a small neighborhood of pixels [@lindeberg2013computational]. Each receptive field is considered as an independent sensor generating its own sensory inputs. 
Note that, as in the human retina, all receptive fields don’t necessarily share the same properties: number, relative positions, and nature of light-sensitive cells (or pixels in our case). The same *visual feature* - visual information coming from a small part of the environment - can thus be encoded differently depending on which receptive field processes it (see Fig. \[fig:fig1\]). Due to the physical embedding of the visual sensor in the world and the structure of the latter, the visual features projected on the retina shift from receptive field to receptive field when the sensor moves (or when the environment moves, which is a dual case we will not consider in this paper). Relations thus exist between the sensory inputs of the different receptive fields and the saccadic motor outputs. Initially the agent is naive and doesn’t know those relations. However, it can discover them by exploring its environment, collecting and modeling sensorimotor experiences shaped by those physical constraints. Doing so, the agent learns to control the interface with the world that the sensor constitutes: it knows which different sensory inputs in different parts of the retina correspond to the same visual feature and how to move visual features in the retina with motor commands. Formally, we denote a receptive field with a superscript $a$ and the sensory state generated by each of them as the multivariate random variable $\mathbf{S}^a$ that can take values: $$\mathbf{s}^a_i = [s^a_{i,1},\dots,s^a_{i,d^a}],$$ where $d^a$ is the number of pixels in receptive field $a$, and $s^a_{i,k}\in\mathbb{R}$ is the individual excitation of the $k^{\text{th}}$ pixel for state $i$.
In the same way, we denote the saccadic motor commands the agent can generate with a multivariate random variable $\mathbf{M}$ that can take values: $$\mathbf{m}_q = [m_{q,1},\dots,m_{q,d^m}],$$ where $d^m$ is the number of motors, and $m_{q,k}\in\mathbb{R}$ is the individual excitation of the $k^{\text{th}}$ motor contributing to the sensor movement $q$. Note that no superscript $a$ is needed, as all receptive fields are moved together during saccades. Moreover, motors could potentially be redundant with respect to the sensor’s displacements in the world, in which case the agent would have to discover its actual working space (see [@laflaquiere2015learning] for an example of such a sensorimotor structuring). The naive agents we consider have no specific policy to explore the world. Consequently, the agent’s exploration strategy is simply set to random commands, similar to the motor babbling observed in babies. We propose to formalize the agent’s learning process as the building of a predictive model: $$P \big( \mathbf{S}^b(t+1)=\mathbf{s}^b_j \;|\; \mathbf{S}^a(t)=\mathbf{s}^a_i \:,\: \mathbf{M}(t)=\mathbf{m}_q \big),$$ describing the probability of transition between a pre-saccadic state $\mathbf{s}^a_i$ in receptive field $a$ and a post-saccadic state $\mathbf{s}^b_j$ in receptive field $b$, given the realization of a saccade $\mathbf{m}_q$. The agent can estimate this probability by exploring the world and computing statistics on sensorimotor transition data: $$\Big( \mathbf{s}_i^a(t),\mathbf{m}_q(t)\rightarrow \mathbf{s}^b_j(t+1) \Big).$$ Note that the temporal variable $t$ is dropped in later developments to lighten notation. This probabilistic approach fits nicely in the framework of predictive modeling, which describes the brain as a predictive machine and can be applied to the capture of sensorimotor contingencies [@seth2014predictive].
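The motor-babbling collection of transition data described above can be sketched as follows. The `env` interface, its `state`/`do_saccade` methods, and the discrete state indices are illustrative stand-ins for the simulated system, not part of the original code.

```python
import random
from collections import defaultdict

def babble(env, receptive_fields, commands, n_saccades):
    """Random motor babbling: count transitions (s^a_i, m_q -> s^b_j).

    `env` is a hypothetical interface exposing the current discretized
    state of each receptive field and a method executing a saccade.
    """
    counts = defaultdict(int)  # keyed by (a, b, q, i, j)
    for _ in range(n_saccades):
        pre = {a: env.state(a) for a in receptive_fields}   # s^a_i for all a
        q = random.randrange(len(commands))                 # random command m_q
        env.do_saccade(commands[q])
        post = {b: env.state(b) for b in receptive_fields}  # s^b_j for all b
        for a, i in pre.items():
            for b, j in post.items():
                counts[(a, b, q, i, j)] += 1
    return counts
```

Normalizing these counts per pre-saccadic state then yields the estimate of the predictive model.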
![image](figure2.png){width="\linewidth"} The physical embedding of the sensor in the world induces constraints on the agent’s experience; these constraints should appear as a structure in the predictive model. More precisely, specific sensorimotor transitions should be significantly more probable than others, as they correspond to visual features shifting between receptive fields during saccades. To evaluate which receptive fields $(a,b)$ are coupled this way, we propose to evaluate the normalized mutual information between them, given a motor command (random variables are omitted when possible to shorten equations): $$I(\mathbf{S}^a;\mathbf{S}^b \:|\: \mathbf{m}_q) = \frac{H(\mathbf{S}^b\:|\:\mathbf{m}_q) - H(\mathbf{S}^b\:|\:\mathbf{S}^a,\mathbf{m}_q)}{H(\mathbf{S}^b\:|\:\mathbf{m}_q)}, \label{eq:mutualinformation}$$ with $H(\mathbf{S}^b\:|\:\mathbf{m}_q)$ the entropy of receptive field $b$ given the motor command $q$: $$H(\mathbf{S}^b\:|\:\mathbf{m}_q) = -\sum_j P\big(\mathbf{s}^b_j\:|\:\mathbf{m}_q\big) \log\Big(P\big(\mathbf{s}^b_j\:|\:\mathbf{m}_q\big)\Big),$$ and $H(\mathbf{S}^b\:|\:\mathbf{S}^a,\mathbf{m}_q)$ its entropy conditioned on the state of receptive field $a$: $$H(\mathbf{S}^b\:|\:\mathbf{S}^a,\mathbf{m}_q) = - \sum_{i,j} P\big(\mathbf{s}^b_j,\mathbf{s}^a_i|\mathbf{m}_q\big) \log\left(\frac{P\big(\mathbf{s}^b_j,\mathbf{s}^a_i|\mathbf{m}_q\big)}{P(\mathbf{s}^a_i|\mathbf{m}_q)} \right).$$ Entropy is a measure of the unpredictability of the post-saccadic variable $\mathbf{S}^b$, which can be conditioned on the outcome of the pre-saccadic variable $\mathbf{S}^a$. Mutual information $I(\mathbf{S}^a;\mathbf{S}^b\:|\:\mathbf{m}_q)$ is thus a measure of how much information $\mathbf{S}^a$ provides about $\mathbf{S}^b$, given a saccade $\mathbf{m}_q$. For each saccade, mutual information should thus be significantly higher for pairs of receptive fields $(a,b)$ between which visual features shift.
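The normalized mutual information above can be computed directly from a table of joint counts. A minimal sketch, assuming the `joint` count matrix for a fixed saccade $\mathbf{m}_q$ is given as input:

```python
import numpy as np

def normalized_mutual_information(joint):
    """Normalized MI between pre- and post-saccadic states for one saccade.

    `joint[i, j]` holds counts of pairs (s^a_i, s^b_j) observed for a fixed
    command m_q. Returns (H(S^b) - H(S^b|S^a)) / H(S^b), conditioned on m_q.
    """
    p = joint / joint.sum()       # joint distribution P(s^a_i, s^b_j | m_q)
    p_a = p.sum(axis=1)           # marginal P(s^a_i | m_q)
    p_b = p.sum(axis=0)           # marginal P(s^b_j | m_q)
    nz = p_b > 0
    h_b = -np.sum(p_b[nz] * np.log(p_b[nz]))          # H(S^b | m_q)
    mask = p > 0
    with np.errstate(divide='ignore', invalid='ignore'):
        cond = p / p_a[:, None]                       # P(s^b_j | s^a_i, m_q)
    h_b_given_a = -np.sum(p[mask] * np.log(cond[mask]))  # H(S^b | S^a, m_q)
    return (h_b - h_b_given_a) / h_b
```

A perfectly coupled pair (diagonal joint counts) yields 1, while independent fields (uniform joint counts) yield 0.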
Those relations are the way the sensor’s physical structure is accessible to the naive agent. Simulation {#sec:Simulation} ========== ![image](figure3_bigger.png){width="\linewidth"} System description {#sec:System description} ------------------ A simple agent-environment system is simulated in order to apply and evaluate the approach. As illustrated in Fig. \[fig:fig2\], it coarsely captures the kind of interaction a moving eye has with its environment. The agent is a camera with an $84^2$-pixel retina. This retina is divided into $7^2$ juxtaposed receptive fields of size $12^2$ pixels. Yet, in order to mimic the heterogeneous human retina, the resolution of each receptive field is artificially reduced as it lies further from the center of the retina. This is done by grouping the receptive fields into $4$ concentric layers with respective resolutions of $\big((\frac{1}{6},\frac{1}{3},\frac{1}{2},1)\times 12 \big)^2$ pixels. Practically, the resolution is reduced by keeping only one pixel every $(6,3,2,1)$ pixels, respectively, both for rows and columns, in the small $12^2$ image received by the receptive fields. This variable resolution reproduces the significant loss of information between the center of our retina - the *fovea* - and its periphery. Each pixel can originally generate an excitation $s^a_{i,k}$ in the interval $[0,255]$, with the two extremes respectively corresponding to no excitation (black pixel) and full excitation (white pixel). However, in order to emphasize the fact that the agent will not rely on implicit structure in the data, we apply two transformations to disrupt the sensory signal encoding (see Fig. \[fig:fig2\] center).
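The variable-resolution encoding described above amounts to a simple subsampling of each receptive field's patch. A minimal sketch; the step values mirror the $(6,3,2,1)$ scheme, and all names are illustrative:

```python
import numpy as np

def encode_receptive_field(patch, step):
    """Reduce a 12x12 patch's resolution by keeping one pixel every
    `step` pixels along both rows and columns."""
    return patch[::step, ::step]

# layer index -> subsampling step, mimicking the (1/6, 1/3, 1/2, 1) resolutions
LAYER_STEPS = {0: 6, 1: 3, 2: 2, 3: 1}
```

For example, a $12\times12$ patch in the outermost layer (step 6) reduces to a $2\times2$ array, while a foveal patch (step 1) is kept at full resolution.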
First, different linear transformations $g$ are independently applied to all pixels of the retina to modify their excitation functions: $$g(x)=\alpha x + \beta, \text{ with } \alpha\in\Big[-\frac{\beta}{255},1-\frac{\beta}{255}\Big], \beta\in[0,255].$$ The functions’ parameters $\alpha$ and $\beta$ are randomly generated for each pixel and ensure that the transformed input still lies in a subspace of $[0,255]$. Once drawn, the transformation is fixed and considered a property of the sensor (more precisely, of each pixel in the array). Second, the pixels are mixed up before forming the sensory state vector $\mathbf{s}^a_i$. This way, the order in which excitations appear in the vector doesn’t correspond to any topological organization of pixels in the receptive field. The mixed order is drawn randomly for each receptive field and then considered a fixed property of the sensor (more precisely, of each receptive field). The agent’s sensory experience is discretized by considering that each receptive field $a$ can be in a finite number of prototype states $\mathbf{s}^a_i$. The number $N^a$ of states in receptive fields is arbitrarily set according to their resolutions: $N^a = (\frac{1}{6},\frac{1}{3},\frac{1}{2},1) \times 60$. The states themselves are generated in a data-driven way by randomly collecting $2.5\times10^4$ sensory inputs and applying a simple K-means algorithm to cluster them into $N^a$ states $\mathbf{s}^a_i$ for each receptive field (see Fig. \[fig:fig2\]). Likewise, a finite number of $Q=8$ saccadic movements $\mathbf{m}_q$ are considered. They correspond to the sensor’s translations in the retina plane such that the central *foveal* receptive field exchanges positions with the $8$ receptive fields of layer 1 (see Fig. \[fig:fig2\]). They have been chosen so that visual features completely shift between receptive fields during saccades. A larger set of such saccades could be considered, for instance by having the fovea exchange position with all other receptive fields.
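The two sensory disruptions can be sketched as below; function names are illustrative, and the interval for $\alpha$ is taken as $[-\beta/255,\,1-\beta/255]$, which guarantees $g([0,255])\subseteq[0,255]$. The resulting disrupted vectors would then be clustered by K-means into the $N^a$ prototype states.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_disruption(n_pixels):
    """Draw the fixed per-pixel affine transform g(x) = alpha*x + beta and a
    fixed pixel permutation for one receptive field. With beta in [0, 255]
    and alpha in [-beta/255, 1 - beta/255], g maps [0, 255] into [0, 255]."""
    beta = rng.uniform(0, 255, n_pixels)
    alpha = rng.uniform(-beta / 255, 1 - beta / 255)
    perm = rng.permutation(n_pixels)  # fixed, non-topological pixel order
    return alpha, beta, perm

def disrupt(pixels, alpha, beta, perm):
    """Apply the fixed affine transforms, then shuffle pixel order."""
    return (alpha * pixels + beta)[perm]
```

Once drawn, `(alpha, beta, perm)` stay fixed, matching the paper's treatment of the disruption as a property of the sensor.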
However, $Q$ is purposefully kept low to reduce the computational cost of the simulation, which would otherwise require porting to GPU to remain efficient. Simple artificial environments are generated for the agent to explore. As illustrated in Fig. \[fig:fig2\], they are images made of a black background on top of which white squares of variable sizes are randomly distributed. Ten different environments are generated this way. During the simulation, the agent successively explores them for the same amount of time. This environmental simplicity allows us to easily keep track of what is being captured in the predictive model (see Sec. \[sec:Results\]). However, it shouldn’t be considered a drawback of the approach, as the focus of this paper lies in capturing the sensor structure, not environmental properties. Estimating the predictive model {#sec:Estimating the predictive model} ------------------------------- In order to estimate the predictive model $P(\mathbf{S}^b\:|\:\mathbf{S}^a,\mathbf{M})$, the agent explores each environment by randomly generating $10^5$ saccades. The number[^1] of individual sensorimotor transitions $(\mathbf{s}^a_i,\mathbf{m}_q\rightarrow\mathbf{s}^b_j)$ experienced by the agent is thus equal to $10^5\times10\times7^2\times7^2 = 2401\times10^6$. The probability of each sensorimotor transition $P(\mathbf{s}^b_j\:|\:\mathbf{s}^a_i,\mathbf{m}_q)$ is then estimated based on those data: $$P(\mathbf{s}^b_j|\mathbf{s}^a_i,\mathbf{m}_q) = \frac{\text{count}\big( (\mathbf{s}^a_i,\mathbf{m}_q\rightarrow\mathbf{s}^b_j) \big)}{\sum_{j=1}^{N^b} \text{count}\big( (\mathbf{s}^a_i,\mathbf{m}_q\rightarrow \mathbf{s}^b_j) \big)}. \label{eq:estimateP}$$ For convenience, those elementary predictive models are stored “by blocks” in the agent’s memory. This way, a block $(a,b,q)$ gathers the predictive models related to the receptive fields $a$ and $b$ and the saccade $q$.
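The count-based estimator above amounts to a row-wise normalization of each block's count matrix. A minimal sketch; the uniform fallback for unobserved pre-saccadic states is our own assumption, not part of the original model:

```python
import numpy as np

def estimate_block(counts):
    """Turn a count matrix for one block (a, b, q) into the predictive
    model P(s^b_j | s^a_i, m_q): each row i is normalized by its total.

    `counts[i, j]` is the number of observed transitions
    (s^a_i, m_q -> s^b_j).
    """
    totals = counts.sum(axis=1, keepdims=True)
    n_j = counts.shape[1]
    # rows with no observed pre-saccadic state i fall back to uniform over j
    return np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / n_j)
```

Each row of the returned matrix is the conditional distribution $P(\mathbf{S}^b\,|\,\mathbf{s}^a_i,\mathbf{m}_q)$ over post-saccadic states.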
It forms a small matrix where each row corresponds to a pre-saccadic state $i$ and each column to a post-saccadic state $j$ (see Fig. \[fig:fig3\]). According to Eq. , each row of the matrix thus defines a conditional distribution $P\big( \mathbf{S}^b\:|\:\mathbf{s}^a_i,\mathbf{m}_q \big)$ over the post-saccadic states $\mathbf{s}^b_j$. ![image](figure4_2x2.png){width="0.93\linewidth"} Visual search {#sec:Visual search} ------------- A visual search task is proposed in order to illustrate how the estimated sensorimotor predictive model is useful to the agent. The agent is placed in an environment similar to the ones explored during the learning process. Repeatedly, a desired foveal sensory input $\widehat{\mathbf{s}}^b_j$ (equivalent to a visual feature) is defined. The agent has to counterfactually search for it in the field of view and perform a saccade so that the desired sensory state is reached. Practically, the desired sensory inputs $\widehat{\mathbf{s}}^b_j$ are selected by looking at the visual features (un-encoded ground truth) received by other receptive fields and encoding them as if they were projected in the fovea. This ensures that the desired visual feature is present in the current field of view and that the desired sensory state can potentially be reached. Moreover, because only saccades of one receptive field width have been considered during the learning, only layer 1 of receptive fields, directly surrounding the fovea, is considered during this search (see Fig. \[fig:fig2\]). The agent estimates which motor command to perform as follows. First, for each peripheral receptive field $a$, a motor command $\mathbf{m}^a_{f}$ is determined such that: $$f = \text{argmax}_q \big( I(\mathbf{S}^a;\mathbf{S}^f|\mathbf{m}_q) \big),$$ where $\mathbf{S}^f$ corresponds to the post-saccadic foveal sensory state. The motor command $\mathbf{m}^a_{f}$ is thus the one that maximizes the mutual information between the receptive field $a$ and the fovea.
From an external point of view, $\mathbf{m}^a_{f}$ is the saccade that makes visual features shift from the receptive field $a$ to the fovea. Second, the saccade $\mathbf{m}^{a^*}_{f}$ the agent has to perform to foveate the desired visual feature is determined as: $$\mathbf{m}^{a^*}_{f} \;|\; a^* = \text{argmax}_a P\big(\widehat{\mathbf{s}}^b_j\:|\:\mathbf{s}^a_i,\mathbf{m}^a_{f}\big).$$ Intuitively, the agent selects the receptive field $a$ whose current sensory state $\mathbf{s}^a_i$ has the highest probability of transforming into the desired foveal state $\widehat{\mathbf{s}}^b_j$ after performing the saccade $\mathbf{m}^{a^*}_f$. Results {#sec:Results} ======= Predictive model and sensor structure ------------------------------------- As claimed in Sec. \[sec:Problem formulation\], the physical embedding of the sensor in the world should be translated in the predictive model as a structure of highly predictable sensorimotor transitions. As illustrated in Fig. \[fig:fig3\], this structure can indeed be observed in the block matrices $(a,b,q)$ of the predictive model: some blocks (for instance $(a=25,b=24,q=1)$) display stronger predictability patterns than others (for instance $(a=25,b=49,q=5)$). This can be formally measured and more easily visualized by looking at the normalized mutual information computed for each block $(a,b,q)$ according to Eq. . Depending on the executed saccade $\mathbf{m}_q$, different pairs of receptive fields $(a,b)$ display a significantly higher mutual information than others (see Fig. \[fig:fig3\]). From our omniscient point of view, we can confirm that those pairs correspond to receptive fields between which visual features shift during the corresponding saccade. The mutual information matrix displays regular patterns for each saccade because we purposefully organized its rows and columns according to the receptive fields’ order in the retina.
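The two-step saccade selection described in the visual search procedure can be sketched as follows; the `mi` table and `model` block matrices are assumed to have been estimated during exploration, and all names are illustrative:

```python
import numpy as np

def select_saccade(mi, model, current_states, target_j):
    """Two-step saccade selection for the visual search task.

    mi[a, q]          : normalized MI between field a and the fovea after q
    model[a][q]       : matrix P(s^fovea_j | s^a_i, m_q) for block (a, fovea, q)
    current_states[a] : current state index i of peripheral receptive field a
    target_j          : index of the desired foveal state
    """
    # Step 1: for each field a, the saccade m^a_f maximizing MI with the fovea.
    best_q = {a: int(np.argmax(mi[a])) for a in current_states}
    # Step 2: the field a* whose current state best predicts the desired state.
    a_star = max(current_states,
                 key=lambda a: model[a][best_q[a]][current_states[a], target_j])
    return best_q[a_star]
```

The returned command is the one the agent would execute to foveate the desired feature.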
Note, however, that the agent doesn’t have access to such knowledge and cannot take advantage of those patterns. Yet we can also observe that mutual information between receptive fields is not binary: a spectrum of intermediate values does exist. This variability is due to the fact that the predictive model not only captures structure induced by the visual sensor but also structure induced by the environment. Because the environment statistically displays local continuity, neighboring receptive fields can better predict the state of a target receptive field. Intuitively, receiving a uniform black visual feature in a receptive field allows the agent, for instance, to predict fairly accurately that neighboring receptive fields also receive a uniform black feature. Such a continuity property could be taken advantage of to estimate a topological map of the different receptive fields, as was proposed in [@kuipers2008drinking]. However, although it can help an external observer visualize the data, we see no incentive for a naive agent to build such a map. To demonstrate the impact of the environmental structure on the predictive model, the simulation has been run with a random noise environment (each pixel is initially drawn in $[0,255]$). Figure \[fig:fig3\] shows how the lack of environmental structure removes intermediate values of mutual information, leaving only the ones related to the actual sensor structure. In the coupled blocks $(a,b,q)$ induced by the sensor structure, the estimated predictive model also informs the agent about which sensory states $\mathbf{s}^a_i$ correspond to the same visual feature encoded in different parts of the retina. Figure \[fig:fig4\] shows a few example sensory states $\mathbf{s}^a_i$ and the different sensory states $\mathbf{s}^b_j$ they predict in the coupled receptive fields $b$ given the $8$ motor commands.
Alongside the sensory inputs the agent has access to, the actual visual features they correspond to are displayed (computed by inverting the sensory transformation described in Section \[sec:System description\]). One can observe that each group of those sensory states encodes the same visual feature, which can nonetheless be encoded with different resolutions. The agent can thus estimate that completely different sensory inputs actually correspond to the same visual feature (information about the world). Moreover, it knows which motor action transforms one into the other, which from an external point of view corresponds to making the visual feature shift on the retina. Inverting the sensory encoding also reveals why two sensory inputs are predicted most of the time in uncoupled blocks $(a,b,q)$ (see for instance block $(a=13,b=25,q=1)$ in Fig. \[fig:fig3\]). They correspond to the uniform white and black features, which are significantly more probable in these artificial environments. When receptive field $a$ doesn’t inform about receptive field $b$, predicting those two features is a safe estimate. Finally, one can also notice that, because of the resolution loss between the different retina layers, the association between lower-resolution pre-saccadic states $\mathbf{s}^a_i$ and higher-resolution post-saccadic ones $\mathbf{s}^b_j$ can be ambiguous. Intuitively, it simply corresponds to the fact that a blurry pattern can correspond to multiple sharp ones. This can be seen in the second panel of Fig. \[fig:fig4\], where sensory state $\mathbf{s}^{17}_{30}$ transforms into the foveal state $\mathbf{s}^{25}_{17}$ after saccade $\mathbf{m}_4$: the probability is only $\approx \! 0.52$ because another visual feature (white with two black rows at the top) can also be predicted, with a probability of $\approx \! 0.48$. Note that the opposite is not true, as higher-resolution patterns can unambiguously predict their lower-resolution counterparts.
We argue that this is the reason why an initially naive agent would naturally prefer the foveal encoding of a visual feature over every other one in the retina: it is the only encoding that can unambiguously predict all the other ones. That is also the reason why foveation seems like a sensible objective in the visual search task. The visual search task described in Sec. \[sec:Visual search\] was performed over $10^3$ successive iterations. The agent succeeded in reaching the desired foveal state in every trial. This remarkable success rate is due to the way the visual search task is designed: the desired sensory state is foveal (unambiguous) and can be reached at each iteration. However, it highlights the quality of the estimated predictive model, which always associates paired sensory states in different receptive fields. This simple search task illustrates how successfully the agent can use the predictive model it estimated to control its initially unstructured sensorimotor interaction with the environment. Discussion {#sec:Discussion} ========== This paper proposed a sensorimotor formalization of the problem of having a naive agent autonomously learn to master a visual field. A computational model has been defined and applied to a simulated system to illustrate how an agent can discover the regular transformations induced on its sensorimotor experiences by the physical embedding of its sensor in the world. Those transformations define both how different sensory inputs coming from different parts of the sensor encode the same visual features, and how motor commands transform some into the others. Those regularities are discovered by exploring the world and can later be used by the agent to internally simulate interactions with the world and consequently select the most favorable action to perform, as illustrated in a simple visual search task.
The simulation also showed how specific sensory states can naturally emerge among the different ones that encode the same visual feature in a heterogeneous sensor. In a retina-like sensor, this is the case of foveal encodings, which can accurately predict all other peripheral encodings due to their higher resolution. Future extensions of the approach should illustrate other core aspects of the sensorimotor approach of perception. In particular, we’ll address the problem of the autonomous emergence of semantics. So far, the simulated agent identified different sensory states that encode the same visual features, which can be seen as a first step towards semantics. However, a more convincing result will be to show how different features can be autonomously grouped together based on more abstract properties (for instance, grouping together all vertical or horizontal edges in the proposed simulation). Those properties can indeed be identified in the way the corresponding sensory states can be transformed through action. Future developments will also illustrate how sensorimotor transformations can be organized in a hierarchical model that can later be used for an efficient exploration of new environments. Finally, the algorithm proposed in this paper will be ported to a GPU to benefit from parallel computing, in order to evaluate a more realistic retina exploring more complex environments. [^1]: Number of saccades $\times$ Number of environments $\times$ Number of pre-saccadic receptive fields $\times$ Number of post-saccadic receptive fields
--- abstract: 'In this work, we study a cosmological model of a spatially homogeneous and isotropic accelerating universe which exhibits a transition from deceleration to acceleration. For this, the Friedmann-Robertson-Walker (FRW) metric is taken and the hybrid expansion law $a(t)=t^{\alpha} \exp(\beta t)$ is proposed and derived. We consider the universe to be filled with two types of fluids, barotropic and dark energy, which have variable equations of state. The evolution of the dark energy, Hubble, and deceleration parameters, etc., is described in the form of tables and figures. We use $581$ observed distance-modulus data points of SNe Ia type supernovae from the Union $2.1$ compilation to compare our theoretical results with observations, and find that the model satisfies current observational constraints. We have also calculated the time and redshift at which acceleration in the Universe commenced.' --- \ \ \ \ \ \ \ : 98.80.Jk; 95.30.Sf\ [*Keywords*]{}: FRW universe; SNe Ia data; Observational parameters; Accelerating universe.\ Introduction ============ The latest findings on observational grounds during the last three decades by various cosmological missions, such as observations of type Ia Supernovae (SNe Ia) [@ref1]$-$[@ref5], CMBR fluctuations [@ref6; @ref7], large scale structure (LSS) analysis [@ref8; @ref9], the SDSS collaboration [@ref10; @ref11], the WMAP collaboration [@ref12], the Chandra X-ray observatory [@ref13], the Hubble space telescope cluster supernova survey $V$ [@ref14], the BOSS collaboration [@ref15], the WiggleZ dark energy survey [@ref16] and the latest Planck collaboration results [@ref17], all confirm that our universe is undergoing an accelerating expansion. It is concluded that our universe is dominated by an exotic dark energy (DE). It has negative pressure, so it is repulsive and creates acceleration in the universe.
The inclusion of the cosmological constant in Einstein’s field equations regained importance once a positive cosmological constant was considered as a source of dark energy. $\Lambda$-CDM cosmology [@ref18; @ref19] is just the Eddington-Lemaitre model with the difference that the cosmological constant term acts as a source of dark energy with equation of state $p_{\Lambda} = -\rho_{\Lambda} = \frac{-\Lambda c^4}{8\pi G}$. However, the model suffers from, inter alia, the fine-tuning and cosmic coincidence problems [@ref20]. Any acceptable cosmological model must be able to explain the current expansion of the universe.\ In any cosmological model, we need to find the rate of the expansion of the universe, which is determined by the Hubble parameter. Observationally, we require high-precision measurements to estimate the Hubble constant. In general relativity, the energy conservation equation provides a relationship among the expansion rate, pressure, density, and temperature. The negative pressure and the density of dark energy are also included in it. There is a cosmological equation of state, i.e., a relationship between temperature, pressure, and the combined matter-energy and vacuum-energy density, for any region of space. The problem of the equation of state for baryonic matter has been solved by cosmologists by identifying the phases of the universe, such as stiff matter, the radiation dominated era, and the present dust dominated universe, but the determination of the equation of state for dark energy is one of the biggest problems in observational cosmology today. Carroll and Hoffman [@ref21] presented a dark energy (DE) model in which DE is treated in a conventional manner as a fluid described by the equation of state (EoS) parameter $\omega_{de} = \frac{p_{de}}{\rho_{de}}$, which is not necessarily constant. We need to investigate the EoS parameter for the whole span of the universe. At present, it is nearly equal to $-1$.
So far, the two main theories with a variable equation of state for dark energy are the quintessence and phantom models of dark energy. In the quintessence model $-1 \leq\omega_{de} < 0$, whereas in the phantom model $\omega_{de} \leq -1 $. Latest surveys such as the Supernovae Legacy Survey, the Gold sample of the Hubble Space Telescope [@ref22; @ref23], CMB (WMAP, BOOMERANG) [@ref24; @ref25] and large scale structure (Sloan Digital Sky Survey) data [@ref26] ruled out the possibility of $\omega_{de} \ll -1$, but $\omega_{de}$ may be a little less than $-1$. It is found that the present amount of DE is very small compared with the fundamental scale (the fine-tuning problem), yet it is comparable with the critical density today (the coincidence problem) [@ref18]. So we need different forms of dynamically changing DE with an effective equation of state (EoS), $\omega_{de} = p_{de}/\rho_{de} < -1/3$. SNe Ia data [@ref27] and a combination of SNe Ia data with CMBR anisotropy and galaxy clustering statistics [@ref9] put limits on $\omega_{de}$ as $-1.67 < \omega_{de} < -0.62$ and $-1.33 < \omega_{de} < - 0.79$, respectively.\ Komatsu et al. and Hinshaw et al. [@ref26; @ref28] estimated limits on $\omega_{de}$ as $-1.44 < \omega_{de} < -0.92$ at the $68\%$ confidence level. These observations were based on the combination of cosmological datasets coming from CMB anisotropies, luminosity distances of high redshift type Ia supernovae and galaxy clustering. Recently, Amirhashchi et al. [@ref29; @ref30], Pradhan et al. [@ref31], Saha et al. [@ref32], Pradhan [@ref33] and Kumar [@ref34] have studied FRW-based dark energy models in which they considered interacting and non-interacting two-fluid scenarios, one fluid for barotropic matter and the other for dark energy.\ In this work, we study a particular model which exhibits a transition from deceleration to acceleration. We consider baryonic matter, dark energy, and “curvature” energy. Both baryonic matter and dark energy have variable equations of state.
We study the evolution of the dark energy parameter within the framework of an FRW cosmological model filled with two fluids (barotropic matter and dark energy). The cosmological implications of this two-fluid scenario are discussed in detail, and the model is shown to satisfy current observational constraints. Basic field equations and their solutions ========================================= The dynamics of the universe is governed by Einstein’s field equations (EFEs), $$\label{1} R_{ij}-\frac{1}{2}Rg_{ij} =-\frac{8\pi G}{c^{4}}T_{ij},$$ where $R_{ij}$ is the Ricci tensor, $R$ is the scalar curvature, and $T_{ij}$ is the stress-energy tensor, taken as $T_{ij} = T_{ij}(m)+T_{ij}(de)$. We assume that our universe is filled with two types of perfect fluids (since homogeneity and isotropy imply that there is no bulk energy transport), namely an ordinary baryonic fluid and dark energy. The energy-momentum tensors of the contents of the universe are written as follows (with the subscripts $m$ and $de$ denoting ordinary matter and dark energy, respectively): $T_{ij}(m)=\left(\rho_m + p_m\right)u_{i}u_{j}-p_m g_{ij}$ and $T_{ij}(de)=\left(\rho_{de}+p_{de}\right)u_{i}u_{j}-p_{de} g_{ij}$. In standard spherical coordinates $x^{i} = (t, r, \theta, \phi)$, a spatially homogeneous and isotropic Friedmann-Robertson-Walker (FRW) line element has the form (in units $c = 1$) $$\label{2} ds{}^{2}=dt{}^{2}-a(t){}^{2}\left[\frac{dr{}^{2}}{(1+kr^{2})}+r^{2}({d\theta{}^{2}+\sin{}^{2}\theta \, d\phi{}^{2}})\right],$$ where $k=-1$, $k=1$ and $k=0$ correspond to a closed, an open, and a spatially flat universe, respectively. Solving the EFEs (\[1\]) for the FRW metric (\[2\]) and the energy-momentum tensors described above, we get the following dynamical equations: $2\frac{\ddot{a}}{a}+H^{2} = -8\pi G p + \frac{k}{a^{2}}$ and $H^{2} = \frac{8\pi G}{3}\rho + \frac{k}{a^{2}}$, where $H=\frac{\dot{a}}{a}$ is the Hubble constant. 
Here an overdot denotes differentiation with respect to cosmological time $t$. We have deliberately put the curvature term on the right-hand side of the EFEs, so that it acts like an energy term. To this end, we take the density and pressure of the curvature energy to be $\rho_{k}=\frac{3 k}{8\pi Ga^{2}}, ~~ p_{k}=-\frac{k}{8\pi Ga^{2}}$. With this choice, the EFEs read $$\label{3} 2\frac{\ddot{a}}{a}+H^{2} = -8\pi G (p+p_{k}) ~\&~ H^{2}=\frac{8\pi G}{3}\,(\rho+\rho_{k}).$$ The energy density $\rho$ comprises two types of energy, matter and dark energy, $\rho_m$ and $\rho_{de}$, whereas the pressure $p$ comprises the pressure due to matter and that due to dark energy: $\rho = \rho_m + \rho_{de}$ and $p =p_m+p_{de}$. The energy conservation law (ECL) $T^{ij}_{;j}=0$ provides the following well-known equation among the density $\rho$, pressure $p$ and Hubble constant $H$: $\dot{\rho}+3H(p+\rho)=0$, where $\rho=\rho_{m}+\rho_{de}+\rho_{k}$ and $p=p_{m}+p_{de}+p_{k}$ are the total density and pressure of the universe, respectively. We see that $\rho_{k}$ and $p_{k}$ satisfy the ECL independently, i.e., $\dot{\rho_{k}}+3H(p_{k}+\rho_{k})=0$, so that $\frac{d}{dt}{(\rho_{m}+\rho_{de})}+3H(p_{m}+p_{de}+\rho_{m}+\rho_{de})=0$. We assume that matter and dark energy are minimally coupled, so that they are conserved separately, i.e., $\dot{\rho_{m}}+3H(p_{m}+\rho_{m})=0$ and $\dot{\rho_{de}}+3H(p_{de}+\rho_{de})=0$. The ECLs are integrable once suitable relations between pressure and density are chosen: $p_{m}=\omega_{m}\rho_{m}$, $p_{de}=\omega_{de}\rho_{de}$ and $ p_{k}=\omega_{k}\rho_{k}$. In the early universe, radiation dominated, with $\omega_{m}=\frac{1}{3}$ and $\rho_{m} \varpropto a^{-4}$. Later, when the radiation decoupled from the baryons, the matter-dominated era began; in this epoch, $ \omega_{m}=0$ and $\rho_{m}\varpropto a^{-3}$. 
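For a constant EoS parameter $\omega$, the ECL $\dot{\rho}+3H(1+\omega)\rho=0$ integrates to $\rho \varpropto a^{-3(1+\omega)}$, which is the origin of the power laws quoted above. A minimal numerical cross-check (a Python sketch; the integrator and step count are arbitrary choices, not part of the model):

```python
# Numerical check that the continuity equation with a constant
# equation of state p = w*rho integrates to rho ~ a^(-3(1+w)).
# Rewriting rho_dot + 3H(1+w)rho = 0 with H = a_dot/a gives
# d(rho)/da = -3(1+w)*rho/a, which we integrate with classical RK4.

def integrate_rho(w, a_end=2.0, n=1000):
    """Integrate d(rho)/da = -3(1+w) rho / a from a = 1, rho = 1."""
    a, rho, h = 1.0, 1.0, (a_end - 1.0) / n
    f = lambda a, rho: -3.0 * (1.0 + w) * rho / a
    for _ in range(n):
        k1 = f(a, rho)
        k2 = f(a + h / 2, rho + h * k1 / 2)
        k3 = f(a + h / 2, rho + h * k2 / 2)
        k4 = f(a + h, rho + h * k3)
        rho += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        a += h
    return rho

for w, name in [(1 / 3, "radiation"), (0.0, "dust"), (-1 / 3, "curvature")]:
    exact = 2.0 ** (-3 * (1 + w))   # a^(-3(1+w)) evaluated at a = 2
    print(f"{name}: numeric = {integrate_rho(w):.8f}, exact = {exact:.8f}")
```

For $\omega = 1/3$, $0$ and $-1/3$ this recovers the radiation, dust and curvature scalings $a^{-4}$, $a^{-3}$ and $a^{-2}$, respectively.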
In order to describe the whole history of the universe through the equation of state, we may write $\rho_{m} = \rho_{dust} + \rho_{rad} = (\rho_{m})_{0} \left(\frac{a_0}{a}\right)^3\left[\mu+(1-\mu)\frac{a_0}{a}\right]$, where the parameter $\mu$ varies with the era of the universe but is constant within a particular era: $\mu=1$ corresponds to the present dust-filled era, and $\mu=0$ corresponds to the early era in which only radiation was present. Quantities with subscript zero are constants denoting present-day values. The equation of state for the curvature energy is $ p_{k}=\omega_{k}\rho_{k}$ with $\omega_{k}=-1/3$, which gives $\rho_{k} = (\rho_{k})_{0}\left(\frac{a_0}{a}\right)^2 \varpropto a^{-2}$. We use the relationship between scale factor and redshift, $\frac{a_0}{a} = 1+z$, to write variables in terms of the redshift $z$ rather than time. The critical density and the density parameters for matter, dark energy and curvature (in units $c = 1$) are, respectively, defined by $ \rho_{c}=\frac{3H^{2}}{8\pi G}$, $\Omega_{m}=\frac{\rho_{m}}{\rho_{c}}$, $ \Omega_{de}=\frac{\rho_{de}}{\rho_{c}}$, and $\Omega_{k}=\frac{\rho_{k}}{\rho_{c}}$. With these in hand, we can write the FRW field equations as follows: $$\label{4} H^2=H^{2}_{0}\left[(\Omega_{m})_{0} \left(\frac{a_0}{a}\right)^{3}\left[\mu+(1-\mu)\frac{a_0}{a}\right]+(\Omega_{k})_{0} \left(\frac{a_0}{a}\right)^{2} \right] + H^2 \Omega_{de},$$ and $$\label{5} 2q = 1 + \frac{3H^2_0}{H^2}\left[\omega_{m}(\Omega_{m})_{0} \left(\frac{a_0}{a}\right)^{3}\left[\mu+(1-\mu)\frac{a_0}{a}\right] - \frac{1}{3}(\Omega_{k})_{0} \left(\frac{a_0}{a}\right)^2\right]+ 3\omega_{de}\Omega_{de} ,$$ where $q$ is the deceleration parameter, defined by $ q=-\frac{\ddot{a}}{aH^2}$. It is important to mention here that this model represents a decelerating universe in the absence of dark energy. 
This is so because the deceleration parameter $q$ is positive when the dark energy is zero. We recall that $q > 1/2$, $q = 1/2$ and $q < 1/2$ for closed, flat and open universes, respectively. The $\Lambda$CDM model fits best with present-day observations. In this model, $\Lambda$ accounts for the vacuum energy, with energy density $\rho_{\Lambda}$ and pressure $p_{\Lambda}$ satisfying the equation of state (EoS) $ p_{\Lambda} = - \rho_{\Lambda} = -\frac{\Lambda}{8\pi G}$, i.e., $\omega_{de} = -1$. The purpose of this paper is to investigate the behaviour of $\omega_{de}$ as a function of time. We have only two equations, with the scale factor $a$, pressure $p$ and energy density $\rho$ to be determined, so we have to use an ansatz. In the e-print [@ref35], we developed the following hybrid expansion law (HEL) for the scale factor: $a(t)=t^{\alpha} \exp(\beta t )$, where $\alpha$ and $\beta$ are arbitrary constants. On the basis of the Planck observational results [@ref17] and the following differential equation obtained from the HEL, these were found to be $ \beta = 0.0397474 \approx 0.04$ and $\alpha = 0.415066 \approx 0.415$: $$\label{6} \alpha (1+z) H H_z = \alpha (q+1) H^2 = (H-\beta)^2 = \frac{\alpha^2}{t^2}.$$ Hubble constant $H$ -------------------- Eq. (\[6\]) is solved for the Hubble constant $H$ as a function of redshift $z$: $$\label{7} (H-\beta)^{\alpha}= A~\exp\left(\frac{\alpha\beta}{H-\beta}\right) (1+z),$$ where the constant of integration $A = 0.134$ is fixed by the present value of the Hubble constant, $H_0=0.07$ Gy$^{-1}$. A numerical solution of Eq. (\[7\]) shows that the Hubble constant is an increasing function of redshift.\ ![Plot of Hubble constant ($H$) versus redshift ($z$)](fig1.eps){width="10cm" height="5cm"} ![Plot of redshift ($z$) versus time ($t$) ](fig2.eps){width="10cm" height="5cm"} It is clear that the Hubble constant is in fact not constant; it varies slowly with redshift and time. 
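Eq. (\[7\]) is transcendental in $H$, so it must be inverted numerically at each redshift. The Python sketch below does this by bisection on the logarithm of Eq. (\[7\]), using the rounded parameters $\alpha \approx 0.415$, $\beta \approx 0.04$ and $H_0 = 0.07$ Gy$^{-1}$; the integration constant $A$ is recomputed from the condition $H(0)=H_0$ rather than taken as the rounded $0.134$, so the last digits can differ slightly from the tabulated values:

```python
import math

# Invert Eq. (7), (H - beta)^alpha = A exp(alpha*beta/(H - beta)) (1 + z),
# for H(z). Taking logarithms gives a function of H that is strictly
# increasing on H > beta, so simple bisection is sufficient.
alpha, beta, H0 = 0.415, 0.04, 0.07   # rounded HEL parameters; H0 in Gyr^-1

# A is fixed by requiring H(0) = H0; this reproduces A ~ 0.134.
A = (H0 - beta) ** alpha * math.exp(-alpha * beta / (H0 - beta))

def hubble(z):
    """Solve alpha*ln(H-beta) - alpha*beta/(H-beta) = ln(A*(1+z)) for H."""
    f = lambda H: (alpha * math.log(H - beta)
                   - alpha * beta / (H - beta)
                   - math.log(A * (1.0 + z)))
    lo, hi = beta + 1e-6, 1.0          # f is increasing on this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"A = {A:.3f}")                  # ~0.134, as quoted in the text
print(f"H(0.2) = {hubble(0.2):.4f}")   # close to the tabulated 0.0767 Gyr^-1
```

The same root-finder confirms that $H(z)$ increases monotonically with redshift, as stated above.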
Various researchers [@ref15; @ref16; @ref36; @ref37; @ref38; @ref39] have estimated values of the Hubble constant at different redshifts using the differential-age approach and the galaxy-clustering method. The following tables list the observed values Hob of the Hubble constant, with their uncertainties, for redshifts in the range $0\leq z \leq 2$, together with the corresponding theoretical values Hth of our model. The observed and theoretical values agree well, which supports our model. \[Table-1\]\ (The values of the Hubble constant ($H$) for redshift ($z$) ranging between $0$ and $2$ )\ (The Hubble constant is expressed in Gy$^{-1}$ ) $\begin{array}{|c|c|c|c|c|c|c|c|} \hline z & 0.07 & 0.1 & 0.12 & 0.17 & 0.179 & 0.199 \\ \hline Hob & .0706 \pm .0200 & .0706 \pm .0122 & .0701 \pm .0267 & .0848 \pm .0081 & .0766 \pm .0040 & .0766 \pm .0051 \\ \hline Hth & 0.0722454 & 0.0732392 & 0.0739125 & 0.0756344 & 0.0759503 & 0.0766589\\ \hline \end{array}$\ \ $\begin{array}{|c|c|c|c|c|c|c|c|} \hline z & 0.2 & 0.27 & 0.28 & 0.35 & 0.352 & 0.4 \\ \hline Hob & .0746 \pm .0302 & .0787 \pm .0143 & .0908 \pm .0374 & .0846 \pm .0085 & .0848 \pm .0143 & .0971 \pm .0173 \\ \hline Hth & 0.0766946 & 0.0792498 & 0.0796243 & 0.0823145 & 0.0823932 & 0.0843112\\ \hline \end{array}$\ $\begin{array}{|c|c|c|c|c|c|c|c|} \hline z & 0.44 & 0.48 & 0.57 & 0.593 & 0.60 & 0.68 \\ \hline Hob & .0844 \pm .0079 & .0991 \pm.0634 & .0984 \pm .0034 & .1064 \pm .0132 & .0899 \pm .0062 & .0934 \pm .0081 \\ \hline Hth & 0.0859548 & 0.0876406 & 0.0915919 & 0.0926377 & 0.092959 & 0.0967302\\ \hline \end{array}$\ $\begin{array}{|c|c|c|c|c|c|c|c|} \hline z & 0.73 & 0.781 & 0.875 & 0.88 & 0.90 & 1.037 \\ \hline Hob & .0989 \pm .0071 & 0.1073 \pm .0122 & 0.1278 \pm .0173 & .0920 \pm .0409 & .1196 \pm .0235 & .1575 \pm .0204 \\ \hline Hth & 0.0991824 & 0.101761 & 0.106723 & 0.106723 & 0.10809 & 0.115938 \\ \hline \end{array}$\ \ \ $\begin{array}{|c|c|c|c|c|c|c|c|} 
\hline z & 1.3 & 1.363 & 1.43 & 1.53 & 1.75 & 1.965 \\ \hline Hob & .1718 \pm .0173 & .1636 \pm .0343 & .1810 \pm .0184 & .1432 \pm .0143 & .2066 \pm .0409 & .1907 \pm .0515 \\ \hline Hth & 0.132796 & 0.137201 & 0.142047 & 0.149595 & 0.167577 & 0.187053 \\ \hline \end{array}$\ \ \ Figure $1$ depicts the variation of the Hubble constant $H$ with redshift $z$; $H$ increases with increasing redshift. In this figure, the cross signs are the $31$ observed values of the Hubble constant with their uncertainties, whereas the curve is the theoretical Hubble constant $H$ of our model. Figure $2$ plots the variation of redshift $z$ with time $t$, showing that the redshift was larger in the early universe than at present. DE parameter $\Omega_{de}$ and equation of state parameter $\omega_{de}$ for DE density ---------------------------------------------------------------------------------------- Now, from Eqs. (\[4\]), (\[5\]) and (\[6\]), the density parameter $\Omega_{de}$ and the EoS parameter $\omega_{de}$ for dark energy are given by $$\label{8} H^2 \Omega_{de} = H^2 - (\Omega_{m})_0 H^2_0 (1+z)^3 -(\Omega_{k})_0 H^2_0 (1+z)^2$$ and $$\label{9} \omega_{de}=\frac{(2-3\alpha)H^2-4\beta H + 2 \beta^2 + \alpha (\Omega_{k})_0 H^2_0 (1+z)^2 } {3 \alpha [H^2-H_0^2(\Omega_{m})_0(1+z)^{3}-H_0^2(\Omega_{k})_0(1+z)^{2}]},$$ where we have taken $\omega_{m}=0$ and $\mu=1$ for the present dust-filled universe. We present the following numerical tables \[$2$\] and \[$3$\], which display values of the density parameter $\Omega_{de}$ and the EoS parameter $\omega_{de}$ for DE versus redshift $z$ ranging between $0$ and $1$. 
\[Table-$2$\]\ (The values of $\Omega_{de}$ for redshift $z$ ranging between $0$ and $1$ ) $\begin{array}{|c|c|c|c|c|c|c|c|c|} \hline z & 0.0 & 0.2 & 0.4 & 0.6 & 0.8 & 1.0 \\ \hline \Omega_{de} & 0.695 & 0.671884 & 0.609804 & 0.506256 & 0.360992 & 0.176341 \\ \hline \end{array}$\ \[Table-3\]\ (The values of the EoS parameter for DE ($\omega_{de}$) for redshift ($z$) ranging between $0$ and $1$ ) $\begin{array}{|c|c|c|c|c|c|} \hline z & 0. & 0.1 & 0.2 & 0.3 & 0.4 \\ \hline \omega_{de} & -1.00673 & -0.961887 & -0.926735 & -0.90137 & -0.886524 \\ \hline \end{array}$\ $\begin{array}{|c|c|c|c|c|c|c|} \hline z & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1.0 \\ \hline \omega_{de} & -0.883867 & -0.896634 & -0.930997 & -0.999402 & -1.13007 & -1.40182 \\ \hline \end{array}$\ \ \ Figure $3$ depicts the variation of $\Omega_{de}$ with $z$, and Figure $4$ plots the variation of $\omega_{de}$ with $z$; these figures support the content of Tables $2$ and $3$.\ ![Plot of energy density parameter for DE ($\Omega_{de}$) versus redshift ($z$)](fig3.eps){width="10cm" height="5cm"} ![EoS parameter for DE ($\omega_{de}$) versus redshift ($z$)](fig4.eps){width="10cm" height="5cm"} It is found that at $z \simeq 1.16321$ there is a singularity in $\omega_{de}$, so the model works well during $ 0 \leq z \leq 1.1632$. Moreover, $ -1.68889\leq \omega_{de} \leq -0.910382 $ during $ 0 \leq z \leq 0.98$, consistent with the limits on $\omega_{de}$ found by various surveys. The recent supernova SN $1997ff$ at $z \simeq 1.7$ is consistent with a decelerated expansion at the epoch of its emission (Benitez et al. [@ref40], Turner & Riess [@ref41]). Time at which the energy density parameter for DE $\Omega_{de}$ opposes deceleration ---------------------------------------------------------------------------------- From Eqs. 
(\[7\]) and (\[8\]), we observe that\ $z \to 1.1633,~~\Omega_{de}\to -0.0000984892$, and when $z \to 1.1632 ,~~\Omega_{de}\to 0.0000157204.$\ This means that\ $\Omega_{de}\to 0$ when $z \to 1.16325$,   i.e.,   $t \to 4.91961$ Gyr.\ At this time $ q \to 0.092543 $. Thus, as per our model, dark energy begins its role of opposing deceleration and exerting negative pressure at $t \simeq 4.91961$ Gyr. Densities in our model ----------------------- From the ECLs for the matter and dark energy densities we obtain $\rho_{m}= (\Omega_{m})_0(1+z)^3[\mu+(1-\mu)(1+z)](\rho_{c})_0$ and $ \rho_{de}= \Omega_{de}\left[(\Omega_{m})_0 (1+z)^3[\mu+(1-\mu)(1+z)]+(\Omega_{k})_0 (1+z)^2+ \Omega_{de}\right](\rho_{c})_0,$ where $(\rho_{c})_0 = 1.88\times10^{-29}~\mbox{gm/cm}^{3}$. We now present the following table, which gives the dark energy density $\rho_{de}$ in units of $(\rho_{c})_0$ for redshift $z$ ranging between $0$ and $1$. \[Table-4\]\ (The values of the energy density for DE ($\rho_{de}$) for redshift ($z$) ranging between $0$ and $1$ ) $\begin{array}{|c|c|c|c|c|c|} \hline z & 0.0 & 0.1 & 0.2 & 0.3 & 0.4 \\ \hline \rho_{de}/(\rho_{c})_0 & 0.695 & 0.752362 & 0.80457 & 0.848396 & 0.879828 \\ \hline \end{array}$\ \ $\begin{array}{|c|c|c|c|c|c|c|} \hline z & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1.0 \\ \hline \rho_{de}/(\rho_{c})_0 & 0.893955 & 0.884863 & 0.845532 & 0.767754 & 0.642085 & 0.45784 \\ \hline \end{array}$ Transition from deceleration to acceleration ---------------------------------------------- We now present the following table, which describes the transition from deceleration to acceleration. 
\[Table-$5$\]\ (The values of the deceleration parameter ($q$) for redshift ($z$) ranging between $0$ and $10$ ) $\begin{array}{|c|c|c|c|c|c|} \hline z & 0 & 1 & 2 & 3 & 4 \\ \hline q & -0.552016 & 0.0253422 & 0.521904 & 0.849706 & 1.04866 \\ \hline \end{array}$ $\begin{array}{|c|c|c|c|c|c|c|} \hline z & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline q & 1.16973 & 1.24585 & 1.29563 & 1.32943 & 1.35318 & 1.37036\\ \hline \end{array}$ ![Deceleration parameter $q$ versus redshift $z$](fig5.eps){width="10cm" height="5cm"} Figure $5$ depicts the variation of the DP ($q$) with redshift $z$ based on the above table, which shows the transition clearly. Time at which acceleration began ------------------------------------ At $z=0.9557$ and $0.9558$, our model gives the following values of the Hubble constant $H$, the deceleration parameter $q$ and the corresponding time:\ $$H(0.9557)\to 0.111206,~ ~ q(0.9557)\to -0.0000124355, ~ ~t(0.9557)\to 5.81124 ,$$ and $$H(0.9558)\to 0.111212, ~ ~ q(0.9558)\to 0.0000450098, ~ ~ t(0.9558)\to 5.81078 .$$ This means that the acceleration began at $ z\to 0.95575$, $t\to 5.81104~ \mbox{Gyr}$, $H\to 0.111209 ~\mbox{Gyr}^{-1} $. At this time $ \Omega_{de}=0.220369 $ and $\omega_{de}=-1.54715$. Luminosity distance versus redshift relation ============================================= The redshift-luminosity distance relation [@ref21; @ref42] is an important observational tool for studying the evolution of the universe. The expression for the luminosity distance ($D_L$) is obtained in terms of redshift, as light coming from a distant luminous body gets redshifted due to the expansion of the universe. The flux of a source is determined with the help of the luminosity distance, which is given by $D_{L}=a_{0} r (1+z)$, where $r$ is the radial coordinate of the source. 
We consider a ray of light with initially $ \frac{d\theta}{ds}=0 ~ ~ \mbox{and} ~ ~ \frac{d\phi}{ds}=0$; the geodesic equations for the metric (\[2\]) then give $ \frac{d^2\theta}{ds^2}=0 ~ ~ \mbox{and} ~ ~ \frac{d^2\phi}{ds^2}=0$. So if we pick a light ray in a radial direction, it continues to move along the $r$-direction, and we get the following equation for the path of light: $ds^{2}= c^{2}dt^{2}- \frac{a^{2}}{1+kr^2}dr^2=0$. As we have seen, the effect of curvature is very small at present, $(\Omega_{k})_0=0.005$, so for simplicity we take $k=0$. From this we obtain ![ Luminosity distance ($D_{L}$) $-$ Redshift ($z$) Plot](fig6.eps){width="10cm" height="5cm"} $r = \int^r_{0}dr = \int^t_{0}\frac{c\,dt}{a(t)} = \frac{1}{a_{0}H_{0}}\int^z_0\frac{c\,dz}{h(z)}$, where we have used $ dt=dz/\dot{z}$, $\dot{z}=-H(1+z)$ and $h(z)=\frac{H}{H_0}$. We thus get the following expression for the luminosity distance: $$\label{10} D_{L}=\frac{c(1+z)}{H_{0}}\int^z_0\frac{dz}{h(z)}.$$ In our earlier work [@ref43; @ref44], we already obtained the luminosity distance.\ Solving Eqs. (\[7\]) $-$ (\[9\]) and (\[10\]) numerically, we get the following table of values of the luminosity distance ($D_{L}$) at various redshifts. \[Table-6\]\ ( Luminosity distances ($D_{L}$) at redshifts ($z$) in the range $0$ to $3$ ) $ \begin{array}{|c|c|c|c|c|c|c|c|c|} \hline z & 0.0 & 0.2 & 0.4 & 0.6 & 0.8 & 1.0 & 1.2 & 1.4 \\ \hline D_{L} & 0.0 & 0.229565 & 0.512198 & 0.839454 & 1.2038 & 1.59863 & 2.01827 & 2.4579 \\ \hline \end{array} $\ $ \begin{array}{|c|c|c|c|c|c|c|c|c|} \hline z & 1.6 & 1.8 & 2.0 & 2.2 & 2.4 & 2.6 & 2.8 & 3.0 \\ \hline D_{L} & 2.9135 & 3.38175 & 3.85993 & 4.34584 & 4.83769 & 5.33408 & 5.83385 & 6.33612 \\ \hline \end{array} $ Figure $6$ depicts the variation of the luminosity distance with redshift based on Table-$6$; the luminosity distance is an increasing function of redshift. 
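Table-$6$ can be reproduced by evaluating the integral in Eq. (\[10\]) numerically; the tabulated $D_L$ values are in units of $c/H_0$. The Python sketch below uses composite Simpson quadrature together with a bisection inversion of Eq. (\[7\]); the rounded HEL parameters are assumed, so the final digits differ slightly from Table-$6$:

```python
import math

# Evaluate the luminosity distance of Eq. (10), D_L = (1+z) * Int_0^z dz'/h(z'),
# in units of c/H0, with h = H/H0 obtained by inverting Eq. (7) numerically.
alpha, beta, H0 = 0.415, 0.04, 0.07   # rounded HEL parameters; H0 in Gyr^-1
A = (H0 - beta) ** alpha * math.exp(-alpha * beta / (H0 - beta))

def hubble(z):
    """Invert Eq. (7) for H at redshift z by bisection."""
    f = lambda H: (alpha * math.log(H - beta) - alpha * beta / (H - beta)
                   - math.log(A * (1.0 + z)))
    lo, hi = beta + 1e-6, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def lum_dist(z, n=100):
    """Composite-Simpson evaluation of Eq. (10); returns D_L in units of c/H0."""
    if z == 0.0:
        return 0.0
    h = z / (2 * n)                          # 2n subintervals of width h
    s = H0 / hubble(0.0) + H0 / hubble(z)    # endpoint terms 1/h(0) + 1/h(z)
    for i in range(1, 2 * n):
        s += (4 if i % 2 else 2) * H0 / hubble(i * h)
    return (1.0 + z) * s * h / 3.0

print(f"D_L(1.0) = {lum_dist(1.0):.4f} c/H0")   # Table-6 gives 1.59863
```

Changing the quadrature order or step count affects only the last digits; the monotonic growth of $D_L$ with redshift seen in Figure $6$ is reproduced directly.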
Distance modulus $\mu$ and apparent magnitude $m_{b}$ for type Ia supernovae (SNe Ia) ======================================================================================= The distance modulus $\mu$ of a source is defined as $ \mu = m_{b}-M$, where $m_{b}$ and $M$ are the apparent and absolute magnitudes of the source, respectively. The distance modulus is related to the luminosity distance through the formula $ \mu = m_{b}-M = 5 \log_{10} \left(\frac{D_L}{Mpc}\right) + 25 $. Type Ia supernovae (SNe Ia) are standard candles: they have a common absolute magnitude $M$ irrespective of the redshift $z$. For a supernova at very small redshift, the luminosity distance reduces to $D_L=\frac{cz}{H_0}$. ![Plot of apparent magnitude ($m_b$) versus redshift ($z$)](fig7) ![Plot of distance modulus ($\mu=m_b-M$) versus redshift ($z$)](fig8) In the literature there are many low-redshift supernovae whose apparent magnitudes are known; these determine the common absolute magnitude $M$ of all SNe Ia. In our earlier work [@ref43; @ref44], we obtained $M$ as $ M = 5\log_{10}\left(\frac{H_0}{.026 c}\right)- 8.92$. From this, we obtain $\log_{10}(H_{0}D_{L})= (m_{b} - 16.08)/5 + \log_{10}(.026c)$ and the following expression for the apparent magnitude $$\label{12} m_{b} = 16.08+ 5\log_{10}\left[\frac{1+z}{.026} \int^z_0\frac{dz}{h(z)}\right].$$ Solving Eqs. (\[7\]) $-$ (\[9\]) and (\[12\]) numerically, we present the following table describing the values of the apparent magnitude $m_{b}$ for redshift $z$ ranging between $0$ and $1.6$. \[Table-7\]\ (The values of apparent magnitude ($m_{b}$) for redshift ($z$) in the range $0$ to $1.6$ ) $ \begin{array}{|c|c|c|c|c|c|c|c|c|} \hline z & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 \\ \hline m_{b} & 19.1639 & 20.8097 & 21.8154 & 22.5523 & 23.1379 & 23.6251 & 24.0426 & 24.4079 \\ \hline \end{array} $\ $ \begin{array}{|c|c|c|c|c|c|c|c|c|} \hline z & 0.9 & 1. 
& 1.1 & 1.2 & 1.3 & 1.4 & 1.5 & 1.6 \\ \hline m_{b} & 24.7323 & 25.0239 & 25.2883 & 25.53 & 25.7523 & 25.958 & 26.149 & 26.3272 \\ \hline \end{array} $\ \ We consider $581$ data points of the observed distance moduli of various SNe Ia supernovae from the Union $2.1$ compilation [@ref14], with redshifts in the range $z\leq 1.414$, and calculate the corresponding theoretical values as per our model. Figures $7$ and $8$ depict the closeness of the observational and theoretical results, thereby justifying our model. Conclusions ============ In this work, efforts have been made to develop a cosmological model which satisfies the cosmological principle and incorporates the latest developments, which indicate that our universe is accelerating due to dark energy. We have also proposed a variable equation of state for dark energy in our model. We studied a model with radiation, dust and dark energy which shows a transition from deceleration to acceleration, and we have subjected the model to various observational tests. In a nutshell, we believe that this study will pave the way for more research in the future, in particular in the areas of the early universe, inflation and galaxy formation. The proposed hybrid expansion law may help in the investigation of hidden matter such as dark matter, dark energy and black holes. Acknowledgement {#acknowledgement .unnumbered} =============== The authors (G. K. Goswami & A. Pradhan) sincerely acknowledge the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India for providing facilities where part of this work was completed during a visit. A. Pradhan would also like to thank the University of Zululand, South Africa for providing facilities and support where part of this work was done. The authors would like to convey their sincere thanks to Sergei D. 
Odintsov, Kazuharu Bamba and Sunny Vagnozzi for useful suggestions and for providing references [@ref45]$-$[@ref52] for the improvement of the paper in its present form. [000]{} Perlmutter, S. et al., 1998. Discovery of a supernova explosion at half the age of the Universe, Nature 391, 51. Perlmutter, S. et al., 1999. Measurements of $\Omega$ and $\Lambda$ from $42$ high-redshift supernovae, Astrophys. J. 517, 5. Riess, A.G. et al., 1998. Observational evidence from supernovae for an accelerating universe and a cosmological constant, Astron. J. 116, 1009. Tonry, J.L. et al., 2003. Cosmological results from high-z supernovae, Astrophys. J. 594, 1. Clocchiatti, A. et al., 2006. Hubble Space Telescope and ground-based observations of type Ia Supernovae at redshift $0.5$: cosmological implications, Astrophys. J. 642, 1. de Bernardis, P. et al., 2000. A flat universe from high-resolution maps of the cosmic microwave background radiation, Nature 404, 955-959. Hanany, S. et al., 2000. MAXIMA-1: a measurement of the cosmic microwave background anisotropy on angular scales of $10'-5°$, Astrophys. J. 545, L5-L9. Spergel, D.N. et al. \[WMAP collaboration\], 2003. First year Wilkinson Microwave Anisotropy Probe (WMAP) observations: determination of cosmological parameters, Astrophys. J. Suppl. 148, 175. Tegmark, M. et al. \[SDSS collaboration\], 2004. Cosmological parameters from SDSS and WMAP, Phys. Rev. D 69, 103501. Seljak, U. et al., 2005. Cosmological parameter analysis including SDSS Ly$\alpha$ forest and galaxy bias: constraints on the primordial spectrum of fluctuations, neutrino mass and dark energy, Phys. Rev. D 71. Adelman-McCarthy, J.K. et al., 2006. The fourth data release of the Sloan Digital Sky Survey, Astrophys. J. Suppl. 162, 38. Bennett, C.L. et al., 2003. First year Wilkinson Microwave Anisotropy Probe (WMAP) observations: preliminary maps and basic results, Astrophys. J. Suppl. 148, 1-43. Allen, S.W. et al., 2004. 
Constraints on dark energy from Chandra observations of the largest relaxed galaxy clusters, Mon. Not. R. Astron. Soc. 353, 457. Suzuki, N. et al., 2012. The Hubble Space Telescope cluster supernova survey V: improving the dark-energy constraints above $z> 1$ and building an early-type-hosted supernova sample, Astrophys. J. 746, 85-115. Delubac, T. et al. \[BOSS Collaboration\], 2015. Baryon acoustic oscillations in the Ly$\alpha$ forest of BOSS DR$11$ quasars, Astron. Astrophys. 574, A59. Blake, C. et al. \[The WiggleZ Dark Energy Survey\], 2012. The WiggleZ Dark Energy Survey: joint measurements of the expansion and growth history at $z < 1$, Mon. Not. R. Astron. Soc. 425, 405-414. Ade, P.A.R. et al. \[Planck Collaboration\], 2016. Planck 2015 results XIV: dark energy and modified gravity, Astron. Astrophys. 594, A14. Copeland, E.J. et al., Dynamics of dark energy, Int. J. Mod. Phys. D 15, 1753-1935. Gr$\o$n, $\O$., Hervik, S., 2007. Einstein’s general theory of relativity with modern applications in cosmology (Springer). Weinberg, S., 1989. The cosmological constant problem, Rev. Mod. Phys. 61, 1. Carroll, S.M., Hoffman, M., 2003. Can the dark energy equation-of-state parameter $\omega$ be less than $-1$?, Phys. Rev. D 68, 023509. Riess, A.G. et al., 2004. Type Ia supernova discoveries at $z>1$ from the Hubble Space Telescope: evidence for past deceleration and constraints on dark energy evolution, Astrophys. J. 607, 665. Astier, P. et al., 2006. The Supernova Legacy Survey: measurement of $\Omega_{m}$, $\Omega_{\Lambda}$ and $\omega$ from the first year data set, Astron. Astrophys. 447, 31. Eisenstein, D.J. et al., 2005. Detection of the baryon acoustic peak in the large-scale correlation function of SDSS luminous red galaxies, Astrophys. J. 633, 560. MacTavish, C.J. et al., 2006. Cosmological parameters from the 2003 flight of BOOMERANG, Astrophys. J. 647, 799. Komatsu, E. et al., 2009. 
Five-year Wilkinson Microwave Anisotropy Probe observations: likelihoods and parameters from the WMAP data, Astrophys. J. Suppl. Ser. 180, 330. Knop, R.K. et al., 2003. New constraints on $\Omega_{m}, \Omega_{\Lambda}$ and $\omega$ from an independent set of $11$ high-redshift supernovae observed with the Hubble Space Telescope, Astrophys. J. 598, 102. Hinshaw, G. et al., 2009. Nine-year Wilkinson Microwave Anisotropy Probe (WMAP) observations: cosmological parameter results, Astrophys. J. Suppl. 180, 225. Amirhashchi, H., Pradhan, A., Saha, B., 2011. An interacting two-fluid scenario for dark energy in an FRW universe, Chin. Phys. Lett. 28, 039801. Amirhashchi, H., Pradhan, A., Zainuddin, H., 2011. Interacting and non-interacting two-fluid dark energy models in FRW universe with time dependent deceleration parameter, Int. J. Theor. Phys. 50, 3529. Pradhan, A., Amirhashchi, H., Saha, B., 2011. An interacting and non-interacting two-fluid scenario for dark energy in FRW universe with constant deceleration parameter, Astrophys. Space Sci. 333, 343. Saha, B., Amirhashchi, H., Pradhan, A., 2012. Two-fluid scenario for dark energy models in an FRW universe-revisited, Astrophys. Space Sci. 342, 257. Pradhan, A., 2014. Two-fluid atmosphere from decelerating to accelerating Friedmann–Robertson–Walker dark energy models, Indian J. Phys. 88, 215. Kumar, S., 2011. Some FRW models of accelerating universe with dark energy, Astrophys. Space Sci. 332, 449. Goswami, G.K., Pradhan, A., Beesham, A., 2019. FLRW accelerating universe with interactive dark energy, arXiv: 1906.00450. Zhang, C. et al., 2014. Four new observational $H (z)$ data from luminous red galaxies in the Sloan Digital Sky Survey data release seven, Res. Astron. Astrophys. 14, 1221-1233. Stern, D. et al., 2010. Cosmic chronometers: constraining the equation of state of dark energy. I. $H(z)$ measurements, Jour. Cosmo. Astropart. Phys. 02, 008. Moresco, M., 2015. 
Raising the bar: new constraints on the Hubble parameter with cosmic chronometers at $z\sim 2$, Mon. Not. R. Astron. Soc. 450, L16$-$L20. Simon, J. et al., 2005. Constraints on the redshift dependence of the dark energy potential, Phys. Rev. D 71, 123001. Benitez, N. et al., 2002. The magnification of SN 1997ff, the farthest known supernova, Astrophys. J. 577, L1. Turner, M., Riess, A.G., 2002. Do type Ia supernovae provide direct evidence for past deceleration of the universe?, Astrophys. J. 569, 18. Liddle, A.R., Lyth, D.H., 2000. Cosmological inflation and large-scale structure (Cambridge University Press). Goswami, G.K., Dewangan, R.N., Yadav, A.K., 2016. Anisotropic universe with magnetized dark energy, Astrophys. Space Sci. 361, 119. Goswami, G.K., Dewangan, R.N., Yadav, A.K., Pradhan, A., 2016. Anisotropic string cosmological models in Heckmann-Schucking space-time, Astrophys. Space Sci. 361, 47. Capozziello, S., Luongo, O., Saridakis, E.N., 2015. Transition redshift in $f(T)$ cosmology and observational constraints, Phys. Rev. D 91, 124037. Capozziello, S., Farooq, O., Luongo, O., Ratra, B., 2014. Cosmographic bounds on the cosmological deceleration-acceleration transition redshift in $f(R)$ gravity, Phys. Rev. D 90, 044016. Bamba, K., Capozziello, S., Nojiri, S., Odintsov, S.D., 2012. Dark energy cosmology: the equivalent description via different theoretical models and cosmography tests, Astrophys. Space Sci. 342, 155. Vagnozzi, S. et al., 2018. Constraints on the sum of the neutrino masses in dynamical dark energy models with $\omega(z) \geq -1$ are tighter than those obtained in $\Lambda$CDM, Phys. Rev. D 98, 083501. Capozziello, S. et al., 2006. Observational constraints on dark energy with generalized equations of state, Phys. Rev. D 73, 043512. Nojiri, S., Odintsov, S.D., 2004. The final state and thermodynamics of a dark energy universe, Phys. Rev. D 70, 103522. Nojiri S. 
and Odintsov, S.D., 2006. The new form of the equation of state for dark energy fluid and accelerating universe, Phys. Lett. B 639, 144-150. Nojiri, S., Odintsov, S.D., 2006. Unifying phantom inflation with late-time acceleration: scalar phantom-non-phantom transition model and generalized holographic dark energy, Gen. Rel. Grav. 38, 1285-1304.
--- abstract: 'In this article we investigate the permeability of a porous medium as given in Darcy’s law. The permeability is described by an effective hydraulic pore radius in the porous medium, the fluctuation in local hydraulic pore radii, the length of streamlines, and the fractional volume conducting flow. The effective hydraulic pore radius is related to a characteristic hydraulic length, the fluctuation in local hydraulic radii is related to a constriction factor, the length of streamlines is characterized by a tortuosity, and the fractional volume conducting flow from inlet to outlet is described by an effective porosity. The characteristic length, the constriction factor, the tortuosity and the effective porosity are thus intrinsic descriptors of the pore structure relative to direction. We show that the combined effect of our pore structure description fully describes the permeability of a porous medium. The theory is applied to idealized porous media, where it reproduces Darcy’s law for fluid flow derived from the Hagen-Poiseuille equation. We also apply this theory to full network models of Fontainebleau sandstone, where we show how the pore structure and permeability correlate with porosity for such natural porous media. This work establishes how the permeability can be related to porosity, in the sense of Kozeny-Carman, through fundamental and well-defined pore structure parameters: characteristic length, constriction, and tortuosity.' author: - Carl Fredrik Berg bibliography: - 'myreferences.bib' date: 'Accepted: 14 March 2014 in Transport in Porous Media' title: 'Permeability Description by Characteristic Length, Tortuosity, Constriction and Porosity' --- Introduction {#sec:introduction} ============ Understanding the process of flow through porous media is of great importance in many fields, including petroleum engineering and hydrology. 
Slow viscous fluid flow in porous media is traditionally described by Darcy’s law, a proportional relation that links the fluid discharge $Q$ to an applied piezometric head (hydraulic head) difference $\Delta h$: $$k= -\frac{Q \mu \Delta s}{A \rho g \Delta h}, \label{eq:darcy}$$ where $A$ is the cross-sectional area of the porous medium, $\Delta s$ is the length of the porous medium in the direction of the applied head difference, $\mu$ is the constant fluid viscosity, $\rho$ is the constant fluid density, $g$ is acceleration due to gravity, and $k$ is a constant for the porous medium called (intrinsic) *permeability* [@darcy1856determination; @bear1988dynamics; @dullien1992porous]. For simplicity, $\rho g h$ will also be called the piezometric head throughout this article. A fundamental question in flow through porous media is how the permeability $k$ can be related to the porosity $\phi$ through well-defined parameters of the pore structure. A well-known relation of this kind is the semi-empirical Kozeny-Carman equation. It was first proposed by Kozeny [@kozeny1927ueber] as $$k \propto \tau \frac{\phi^3}{(1-\phi)^2}d_w^2, \label{eq:kozeny}$$ where $d_w$ is an effective grain size, and $\tau$ is a tortuosity of the porous medium describing the relative difference between the microscopic (interstitial) head gradient along the streamline and the macroscopic head gradient. Kozeny derived his equation assuming that the porous medium could be viewed as a bundle of streamtubes [@kozeny1927ueber]. Carman noted that linking the microscopic fluid velocity to the Darcy velocity for the porous medium involves scaling with the factor $\tau$ [@carman1937fluid]. Carman therefore modified Kozeny’s Eq.  by multiplying by the tortuosity $\tau$ [@carman1937fluid]: $$k \propto \tau^2 \frac{\phi^3}{(1-\phi)^2}d_w^2. \label{eq:carman}$$ For a monodisperse sphere pack we have $d_w = 6(1-\phi)/S_0$, where $S_0$ is the specific surface area [@kozeny1927ueber]. 
This leads to a more general form of the Kozeny-Carman equation: $$k = c_0 \tau^2 \frac{\phi^3}{S_0^2} = c_0 \tau^2 r_h^2 \phi, \label{eq:kozeny-carman}$$ where $r_h = \phi/S_0$ is the (mean) *hydraulic radius* and $c_0$ is a coefficient called *Kozeny’s constant* [@kozeny1927ueber; @carman1937fluid; @bear1988dynamics; @dullien1992porous]. The hydraulic radius $r_h$ is assumed to represent an effective pore radius of the porous medium. It is a purely geometric length which does not take into account the effect on permeability from pore size variation or connectivity. Others have proposed length-scales more suitable for permeability description, among them the smallest pore along the most conductive percolating pathway [@katz1986quantitative], nuclear magnetic resonance relaxation time [@banavar1987magnetic], or grain size distribution [@berg1970method]. Johnson *et al.* [@johnson1986new] suggested a length obtained by weighting with the electric field; it is thereby related to (electrical) transport and is thus a *dynamical* length, in contrast to the *geometrical* hydraulic radius $r_h$. This length has been shown to be a better permeability descriptor than the hydraulic radius $r_h$ [@schwartz1993cross]; however, there is no fixed relation between electrical conductance and fluid flow [@saeger1991flow]. A dynamical length linked to fluid flow instead of electrical conductance was introduced in [@bear1967generalized]. This length was derived from the microscopic hydraulic conductance [@bear1967generalized; @bear1988dynamics], and is thus descriptive of fluid flow in porous media. For a porous medium, the (hydraulic) tortuosity is a measure of the deviation of the microscopic flow from the direction of the applied piezometric head difference, reflected by the length of the microscopic streamlines [@bear1988dynamics; @dullien1992porous; @adler1992porous]. 
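The two permeability relations above, Darcy’s law and the Kozeny-Carman form, translate directly into code. The following is a minimal sketch; the function names and the numerical values in the usage note are illustrative assumptions, not taken from the text:

```python
def darcy_permeability(Q, mu, ds, A, rho, g, dh):
    """Intrinsic permeability k = -Q*mu*ds / (A*rho*g*dh), the rearranged
    Darcy's law above. dh is negative in the flow direction, so k > 0.
    Units: Q [m^3/s], mu [Pa s], ds [m], A [m^2], rho [kg/m^3],
    g [m/s^2], dh [m] -> k [m^2]."""
    return -Q * mu * ds / (A * rho * g * dh)


def kozeny_carman(phi, S0, tau, c0):
    """Kozeny-Carman permeability k = c0 * tau^2 * phi^3 / S0^2,
    equivalently c0 * tau^2 * r_h^2 * phi with the mean hydraulic radius
    r_h = phi / S0. Here c0 (Kozeny's constant) and tau are empirical
    inputs; no particular values are implied by the text."""
    r_h = phi / S0  # mean hydraulic radius
    return c0 * tau**2 * r_h**2 * phi
```

For instance, water ($\mu = 10^{-3}$ Pa s, $\rho = 1000$ kg/m$^3$) discharging at $10^{-8}$ m$^3$/s through a 0.1 m core of $10^{-4}$ m$^2$ cross-section under a 0.5 m head drop gives $k \approx 2\times 10^{-12}$ m$^2$; the two algebraic forms of the Kozeny-Carman equation agree because $r_h^2\phi = \phi^3/S_0^2$.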
Tortuosity has been defined as $\Delta s / s_e$, where $\Delta s$ is the length of the porous medium in the direction of applied piezometric head difference, and $s_e$ is the effective streamline length [@bear1988dynamics; @koponen1996tortuous]. There is ambiguity associated with the derivation of $s_e$ [@bear1988dynamics; @koponen1996tortuous]; however, most formulations are a weighted average of the streamline lengths [@duda2011hydraulic]. Furthermore, the constricting and expanding nature of pore channels converges and diverges the streamlines, which leads to variation in fluid velocity along the streamlines and decreases the permeability. This effect has been treated for simplified porous media [@dullien1992porous]; for more complex materials the standard deviation of the cross-sectional area has been used as an estimate [@schopper1966theoretical], while for electrical conductance the effect of pore channel variation has been described in general [@berg2012]. We follow Ref. [@dullien1992porous] and use the term *constriction factor* to account for this porous medium property. Note that the term *constrictibility* is common when considering effective diffusion in porous media; see e.g. Ref. [@van1974analysis]. The effect on transport from constrictions is frequently lumped together with the effect from tortuosity; see e.g. Refs. [@schwartz1993cross; @bear1967generalized; @bear1988dynamics]. However, the pore structure effect on permeability from these two distinct geometrical properties can be separated for any porous medium, as shown in this article. The (geometrical) porosity $\phi=\Omega/V$ is the fraction of pore space $\Omega$ in the porous medium of total volume $V$. The pore space $\Omega$ is sometimes interpreted as the connected pore space, in which case both the permeability and porosity vanish at a percolation threshold at a finite geometrical porosity [@mavko1997effect]. 
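On a voxelized representation of the pore space (of the kind produced by imaging or reconstruction), the geometrical porosity $\phi = \Omega/V$ is a direct ratio of voxel counts. A minimal sketch (the function name and boolean-mask representation are assumptions):

```python
import numpy as np

def geometrical_porosity(pore_mask):
    """phi = Omega / V for a voxelized image, where pore_mask is a
    boolean array with True in pore voxels. Restricting the mask to the
    connected (or flowing) pore space would instead give an effective
    porosity."""
    pore_mask = np.asarray(pore_mask, dtype=bool)
    return float(np.count_nonzero(pore_mask)) / pore_mask.size
```

The same counting applies per direction-independent sample; directional quantities such as $\phi_s$ below additionally require the flow solution to identify which voxels conduct flow.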
Other authors have also excluded dead-end pores [@bear1988dynamics; @guo2012dependency], or considered an effective porosity based on the streamlines connected both to the inlet and the outlet of the porous medium [@duda2011hydraulic; @koponen1997permeability]. A large body of literature exists on the relation between macroscopic transport properties and pore structure; the interested reader is referred to [@dullien1992porous] and [@bear1988dynamics] for reviews of numerous relations. The recent progress in computing and imaging provides tools for advances on the topic. A number of methods for statistical [@adler1990flow; @yeong1998reconstructing; @jiao2009superior] or process-based [@oren2002process] reconstruction of three-dimensional porous media from two-dimensional thin section images have been proposed. Advances in micro-computed tomography (mCT) have made it possible to directly image many types of natural porous media with sufficient resolution to represent the three-dimensional pore structure [@schwartz1994transport; @arns2001accurate]. For flow and transport simulations, network analogs have been widely used to represent the pore space [@fatt1956network; @blunt2001flow; @oren1998extending], while the full microscopic pressure and velocity fields can also be computed both for idealized models [@lemaitre1990fractal; @schwartz1993cross; @zhang1995direct] and for natural porous media such as reservoir rocks [@arns2001accurate; @mostaghimi2013computations]. The aim of this work is to derive a comprehensive relation for porous media between the permeability and porosity from detailed pore structure information. In contrast with existing relations, the permeability will be fully defined by separable descriptors of the pore structure without introducing free parameters or constants. In Sect. \[section:local\_perm\] we introduce a microscopic permeability factor which describes the local contribution to the effectiveness of the pore space to conduct fluid flow. 
This permeability factor is decomposed into streamlines of the fluid flow, and further factorized into distinct contributions from characteristic length, constriction and tortuosity in Sect. \[sec:perm\_streamline\]. In Sect. \[sec:perm\_description\] we integrate the characteristic length, constriction and tortuosity from individual streamlines into effective parameters, all pore structure descriptors. In Sect. \[sec:hagen-poiseuille\] we use the Hagen-Poiseuille equation to demonstrate our approach on idealized porous media, while the methodology is applied to network analogs of Fontainebleau sandstone data in Sect. \[section:bentheimer\_network\]. Microscopic (Interstitial) Permeability {#section:local_perm} ======================================= Consider a porous medium $V$ of length $\Delta s$ in the direction of an applied piezometric head difference $\Delta h$, consisting of matrix and pore space $\Omega \subset V$ filled with an incompressible fluid. At the microscopic (interstitial) scale, a slow (creeping) flow is governed by the Stokes equation supplemented by the continuity equation: $$\begin{aligned} \mu \nabla^2 \vec{u} &= \rho g \nabla h, \label{eq:stokesa} \\ \nabla \cdot \vec{u} &= 0, \label{eq:stokesb}\end{aligned}$$ \[eq:stokes\] where $\vec{u}$ is the microscopic fluid velocity, and $h$ is the microscopic piezometric head [@dullien1992porous; @bear1988dynamics; @adler1992porous]. In the following we will refer to Eqs.  simply as the Stokes equations. Throughout this article the fluid flow is assumed to be governed by the Stokes equations. There are no time derivative terms in the Stokes equations; hence a constant piezometric head difference $\Delta h$ implies a steady-state flow. Let $\mathbb{S}$ denote the set of all streamlines $\mathcal{S}$ connected both to the inlet and the outlet of the porous medium, and let $\Omega_s = \{ x : x \in \mathcal{S} \text{ for some } \mathcal{S} \in \mathbb{S} \}$ be the subset of $\Omega$ where the fluid flows from inlet to outlet. 
The *effective porosity* is $\phi_s = \Omega_s/V$ [@koponen1997permeability; @duda2011hydraulic]. Due to the linearity of the Stokes equations, the streamlines $\mathcal{S}$ are independent of the magnitude of applied piezometric head drop $\Delta h$ and the constants $\rho$ and $\mu$, thus $\Omega_s$ and $\phi_s$ are only dependent on pore structure and direction of the applied piezometric head drop. Note that for dead-end pores we might have pore space that is not in $\Omega_s$ while the fluid velocity is still non-zero; this is called reentrant flow in Ref. [@duda2011hydraulic]. In our system the piezometric head difference $-\rho g \Delta h$ is the potential that drives the fluid through the porous medium, hence the rate of applied energy is $-\rho g \Delta h Q$. The potential driving the microscopic fluid flow is the piezometric head $\rho g h$, and the rate of work done by this piezometric head potential is given by $- \rho g \nabla h \cdot \vec{u}$. Applying the divergence theorem, and invoking that the fluid velocity $\vec{u}$ is a solenoidal vector field from the continuity equation Eq. , we obtain $$\begin{aligned} \int_{\Omega_s} \rho g \nabla h \cdot \vec{u} dV &= \int_{\Omega_s} \rho g \nabla h \cdot \vec{u} + \rho g h(\nabla \cdot \vec{u}) dV \notag \\ &= \int_{\Omega_s} \nabla \cdot \left( \rho g h \vec{u} \right) dV %\notag \\ = \int_{\delta \Omega_s} \rho g h \vec{u} \cdot \vec{n} dS \notag \\ &= \rho g (h_{out}Q - h_{in}Q) = \rho g \Delta h Q,\end{aligned}$$ where $\vec{n}$ is the outward pointing unit normal field of the boundary $\delta \Omega_s$, $h_{in}$ is the piezometric head at the inlet, and $h_{out}$ the head at the outlet. Here we use that $\vec{u} \cdot \vec{n} = 0$ except at the inlet and outlet. The permeability as given by Darcy’s law in Eq.  can then be expressed as follows: $$k= -\frac{Q \mu \Delta s}{A \rho g \Delta h} = \phi_s \frac{1}{\Omega_s} \int_{\Omega_s} -\mu \rho g \nabla h \cdot \vec{u} \left( \frac{\Delta s}{\rho g \Delta h} \right)^2 dV. 
\label{eq:darcy_frac_to_power}$$ We will denote the integrand in Eq.  as the *microscopic permeability factor*: $$\kappa = -\mu \rho g \nabla h \cdot \vec{u} \left( \frac{\Delta s}{\rho g \Delta h} \right)^2. \label{eq:kappa_def}$$ The microscopic permeability factor $\kappa$ is then the rate of work done by the piezometric head potential $-\rho g \nabla h \cdot \vec{u}$ multiplied by the factor $\mu \Delta s^2 /(\rho g \Delta h)^2$; however, it is a constant dependent only on the pore space $\Omega \subset V$ and the direction of the applied head difference $\Delta h$, in contrast to $-\rho g \nabla h \cdot \vec{u}$. We derive the *effective permeability factor* by integrating $\kappa$ over $\Omega_s$: $$\kappa_s = \frac{1}{\Omega_s}\int_{\Omega_s} \kappa dV = -\frac{1}{\Omega_s}\mu \rho g \Delta h Q\left(\frac{\Delta s}{\rho g \Delta h}\right)^2. \label{eq:kappa_glob_def}$$ Since the microscopic permeability factors $\kappa$ are constants dependent only on the pore space and the direction of the applied head difference $\Delta h$, so is the effective permeability factor $\kappa_s$. Moreover $$k= \kappa_s \phi_s. \label{eq:darcy_global}$$ We have thereby divided the permeability into two factors: the effective porosity $\phi_s$, yielding the pore space fraction where fluid flows from inlet to outlet, and the permeability factor $\kappa_s$, yielding the effectiveness of the pore space $\Omega_s$ to conduct fluid flow. Both $\phi_s$ and $\kappa_s$ are dependent on direction, leading to the anisotropy of the permeability. Following the same arguments as above we also have $$k=\hat{\kappa} \phi,$$ where $$\hat{\kappa} = \frac{1}{\Omega}\int_{\Omega} \kappa dV.$$ This implies that $$\int_{\Omega \setminus \Omega_s} \kappa dV = 0,$$ even though the permeability factor $\kappa$ might be non-zero in $\Omega \setminus \Omega_s$. Hence reentrant flow does not contribute to the effective permeability factor. 
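On a voxelized solution of the flow field, the split $k = \kappa_s \phi_s$ of Eq.  can be evaluated directly. A minimal sketch; the voxel discretization and names are assumptions, not from the text:

```python
import numpy as np

def permeability_from_kappa(kappa, flow_mask, voxel_volume):
    """k = kappa_s * phi_s (Eq. darcy_global). kappa is a 3-D array of
    microscopic permeability factors; flow_mask is a boolean array
    marking Omega_s, the voxels on streamlines connecting inlet to
    outlet."""
    V = kappa.size * voxel_volume                   # total sample volume
    Omega_s = np.count_nonzero(flow_mask) * voxel_volume
    phi_s = Omega_s / V                             # effective porosity
    kappa_s = float(kappa[flow_mask].mean())        # (1/Omega_s) * integral of kappa
    return kappa_s * phi_s
```

Averaging $\kappa$ over all pore voxels instead would give $\hat{\kappa}$ and, by the identity above, the same permeability, since reentrant flow integrates to zero.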
In the following we work with streamlines connected to inlet and outlet, thereby excluding the part of the pore space containing reentrant flow in addition to stagnant parts. Streamline Decomposition {#sec:perm_streamline} ======================== In this section, we show how the permeability factor can be decomposed onto streamlines, and furthermore how the permeability factor for each streamline can be segmented into parts describing the hydraulic conductance, the constrictions and the tortuosity along this streamline. We can discretize the space $\Omega_s$ into a disjoint union of simply connected spaces, such that each streamline is fully contained within one simply connected space. Using the continuity equation given by Eq. , there exist scalar functions $\Lambda$ and $X$ such that $\nabla \Lambda \times \nabla X = \vec{u}$ [@bear1988dynamics; @aris1989vectors]. The scalar functions $\Lambda$ and $X$ represent two families of stream surfaces whose intersections are the streamlines. Every point $x \in \Omega_s$ is uniquely described by the streamline $\mathcal{S}$ passing through $x$, and the distance $s$ along $\mathcal{S}$ from inlet to point $x$. The streamline $\mathcal{S} = \mathcal{S}(\Lambda=\lambda,X=\chi)$ is the intersection of the surfaces given by the constants $\lambda$ and $\chi$. This is similar to a Lagrangian frame of reference; however, we use distance instead of time to distinguish points on a streamline. A change of variables from the usual Cartesian coordinates $(x,y,z)$ to the streamline coordinates $(\Lambda,X,s)$ gives the Jacobian $$\left\lVert \frac{\delta (\Lambda,X,s)}{\delta (x,y,z)} \right\rVert = (\nabla \Lambda \times \nabla X) \cdot \nabla s = \vec{u} \cdot \frac{\vec{u}}{u} = u.$$ By vector calculus identities we have $\vec{u} = \nabla \Lambda \times \nabla X = \nabla \times \Lambda \nabla X$. 
Using Stokes’ theorem, the fluid discharge $\hat{Q}$ through a streamtube bounded by four stream surfaces represented by the constants $\lambda_1,\lambda_2,\chi_1$ and $\chi_2$ is then $$\hat{Q} = \int \int_S \vec{u} \cdot \vec{n} dS = \int_{\delta S} \Lambda \nabla X \cdot \vec{l} ds = (\lambda_2-\lambda_1)(\chi_2-\chi_1), \label{eq:streamtube_to_discharge}$$ where $S$ is a cross section of the streamtube, and $\vec{l}$ is the unit tangent to the surface boundary $\delta S$. In the last equality we use that either $\Lambda$ or $X$ is constant for each of the four line segments in $\delta S$. We can now define a permeability factor for the individual streamlines. Starting with Eq.  we have $$\begin{aligned} \kappa_s &= \frac{1}{\Omega_s} \int_{\Omega_s} \kappa dV = \frac{1}{\Omega_s} \int \int \int_{\mathcal{S}(\Lambda,X)} \frac{\kappa}{u} ds dX d\Lambda \notag \\ &= \frac{1}{\Omega_s} \int \int \int_{\mathcal{S}(\Lambda,X)} - \mu \rho g \frac{\nabla h \cdot \vec{u}}{u} \left( \frac{\Delta s}{\rho g \Delta h} \right)^2 ds dX d\Lambda \notag \\ &= \frac{1}{\Omega_s} \int_\mathbb{S} - \mu \rho g \Delta h \left( \frac{\Delta s}{\rho g \Delta h} \right)^2 dQ_\mathcal{S} = \frac{1}{\Omega_s} \int_\mathbb{S} \kappa(\mathcal{S}) dQ_\mathcal{S}, \label{eq:kappa_streamlines} \end{aligned}$$ where $\kappa(\mathcal{S}) = - \mu \Delta s^2/(\rho g \Delta h)$ is the permeability factor for the streamline $\mathcal{S}$. For the fourth equality we use that $\nabla h \cdot \vec{u} / u = \delta h/\delta s$, and that the total head difference along a streamline is equal to the applied head difference $\Delta h$. The infinitesimal fluid discharge for the infinitesimal streamtube given by $dX$ and $d\Lambda$ is denoted by $dQ_\mathcal{S}$, where $dX d\Lambda = dQ_\mathcal{S}$ from Eq. . 
Note that $\kappa(\mathcal{S})$ is a constant, and that $$\kappa_s = \frac{1}{\Omega_s} \int_\mathbb{S} \kappa(\mathcal{S}) dQ_\mathcal{S} = \frac{1}{\Omega_s} \kappa(\mathcal{S}) \int_\mathbb{S} dQ_\mathcal{S} = \frac{Q}{\Omega_s} \kappa(\mathcal{S}). \label{eq:kappa_by_streamlines}$$ Rewriting the expression for $\kappa(\mathcal{S})$, we obtain: $$\kappa(\mathcal{S}) = - \frac{\mu \Delta s^2}{\rho g \Delta h} = \left( \frac{\Delta s}{l_\mathcal{S}} \right)^2 \int_{\mathcal{S}} -\frac{\mu u}{\rho g \nabla h \cdot \vec{u}} ds \left( \frac{1}{l_\mathcal{S}^2} \Delta h \int_{\mathcal{S}} \frac{u}{\nabla h \cdot \vec{u}}ds \right)^{-1}, \label{eq:kappa_streamline_segmented}$$ where $l_\mathcal{S}$ is the length of the streamline $\mathcal{S}$. In the following, we will link the permeability factor for a streamline to descriptors of the pore structure, namely tortuosity, constriction and hydraulic conductance, all represented in Eq. . Tortuosity ---------- The *tortuosity* of the streamline $\mathcal{S}$ is given by $\tau(\mathcal{S})=\Delta s/l_\mathcal{S}$, i.e. the length of the porous medium divided by the length of the streamline [@bear1988dynamics; @koponen1996tortuous]. Due to the linearity of the Stokes equations, Eqs. , the streamline $\mathcal{S}$ is independent of the magnitude of applied piezometric head drop $\Delta h$ and the constants $\rho$ and $\mu$, thus $\tau(\mathcal{S})$ is only dependent on pore structure and direction of the applied piezometric head drop. A smaller tortuosity $\tau(\mathcal{S})$ means the fluid travels a longer distance, so more of the applied head potential is expended on transport distance. This increased energy expenditure is reflected in the smaller factor $\tau(\mathcal{S})^2=(\Delta s/l_\mathcal{S})^2$ in Eq. . A longer travel distance for the fluid decreases the effectiveness of the pore space to conduct flow. 
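For a streamline traced through a computed flow field and stored as a polyline of points, the tortuosity $\tau(\mathcal{S})=\Delta s/l_\mathcal{S}$ is straightforward to evaluate. A minimal sketch (the function name and polyline discretization are assumptions):

```python
import numpy as np

def streamline_tortuosity(points, ds):
    """tau(S) = ds / l_S for a streamline sampled as an (N, 3) array of
    points along it; l_S is the polyline arc length and ds the sample
    length in the direction of the applied head difference. With this
    convention tau(S) <= 1, and smaller values mean longer flow paths."""
    pts = np.asarray(points, dtype=float)
    # arc length: sum of the Euclidean lengths of the polyline segments
    l_S = float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
    return ds / l_S
```

A straight streamline spanning the sample gives $\tau=1$; any detour gives $\tau<1$ and reduces $\kappa(\mathcal{S})$ through the factor $\tau(\mathcal{S})^2$.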
Constriction Factor {#subsec:streamline_constriction_factor} ------------------- For a straight circular pore channel of length $L$ with cross-sectional area $A(x)$ at point $x$, the degree of variation in cross-sectional area can be measured by the constriction factor $$\begin{aligned} C &= \frac{1}{L^2} \int_0^L A(x)^2 dx \int_0^L \frac{1}{A(x)^2} dx \label{eq:const_old_def} \\ &= \frac{1}{L^2} \int_0^L \frac{Q}{\rho g \nabla h(x)} dx \int_0^L \frac{\rho g \nabla h(x)}{Q} dx \notag \\ &= \frac{1}{L^2} \int_0^L \frac{1}{\nabla h(x)} dx \int_0^L \nabla h(x) dx, \label{const_new_def}\end{aligned}$$ corresponding to definitions introduced in Refs. [@dullien1992porous; @berg2012]. For the second equality we assume the fluid flow is described by the Hagen-Poiseuille equation (see Eq. ), thus $Q/(\rho g \nabla h(x)) \propto A(x)^2$. When the fluid is incompressible, the total discharge $Q$ must be constant through all pore channel cross-sections due to mass balance, yielding Eq. . For porous media in general, the cross-sectional area $A(x)$ used in Eq.  is not straightforwardly defined. As seen in Sect. \[subsec:constriction\], $C$ represents the reduction in permeability due to the variation in cross-sectional area. Following Ref. [@berg2012] we propose a (hydraulic) *constriction factor* for streamline $\mathcal{S}$ by replacing the head gradient $\nabla h$ in Eq.  with the head derivative $\delta h /\delta s = \nabla h \cdot \vec{u} / u$ along the streamline: $$C(\mathcal{S}) = \frac{1}{l_\mathcal{S}^2} \int_\mathcal{S} \frac{u}{ \nabla h \cdot \vec{u}} ds \int_\mathcal{S} \frac{ \nabla h \cdot \vec{u} }{ u } ds = \frac{1}{l_\mathcal{S}^2} \Delta h \int_\mathcal{S} \frac{u }{ \nabla h \cdot \vec{u}} ds. \label{eq:const_one_path}$$ As with the tortuosity $\tau(\mathcal{S})$, the constriction factor $C(\mathcal{S})$ is only dependent on pore structure and direction. 
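For a straight channel with a sampled area profile $A(x)$, the area-based form of the constriction factor above can be evaluated numerically, e.g. with the trapezoidal rule. A minimal sketch (function names and discretization choices are assumptions):

```python
import numpy as np

def _trapz(y, x):
    # trapezoidal rule, written out to avoid NumPy version differences
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def constriction_factor(x, A):
    """C = (1/L^2) * int A(x)^2 dx * int A(x)^(-2) dx for a straight
    channel of length L = x[-1] - x[0] with cross-sectional area A(x).
    By the Cauchy-Schwarz inequality C >= 1, with C = 1 exactly when A
    is constant."""
    x = np.asarray(x, dtype=float)
    A = np.asarray(A, dtype=float)
    L = x[-1] - x[0]
    return _trapz(A**2, x) * _trapz(A**-2.0, x) / L**2

x = np.linspace(0.0, 1.0, 201)
C_uniform = constriction_factor(x, np.ones_like(x))               # = 1
C_varying = constriction_factor(x, 1.0 + 0.5 * np.sin(2 * np.pi * x))
```

Any variation in $A(x)$ pushes $C$ above one, and correspondingly reduces the permeability.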
When the fluid flows through a constriction, the head derivative $\delta h /\delta s$ along the streamline increases. A large variation in pore size along the streamline then translates into a large variation in the head derivative. The constriction factor $C(\mathcal{S})$ thus relates to the constricting and expanding nature of the pore space along the streamline $\mathcal{S}$, or equivalently the converging-diverging set of streamlines around streamline $\mathcal{S}$. For a larger constriction factor $C(\mathcal{S})$ the effectiveness of the pore space to conduct flow is reduced. Hydraulic Conductance --------------------- The microscopic hydraulic conductance is given by $$B = -\frac{\mu u^2}{\rho g \nabla h \cdot \vec{u}}, \label{eq:local_hydraulic_conductance}$$ and is related to the pore size and shape, and the location in the pore [@bear1967generalized; @bear1988dynamics]. Following Eq. , the *hydraulic conductance* for a streamline $\mathcal{S}$ is $$B(\mathcal{S}) = \int_{\mathcal{S}} B \frac{1}{u} ds = \int_{\mathcal{S}} -\frac{\mu u}{\rho g \nabla h \cdot \vec{u}} ds.$$ Observe that $B(\mathcal{S})$ is the second factor of Eq. . Also note that $B(\mathcal{S})$ is dependent on both the magnitude of the applied piezometric head drop $\rho g \Delta h$ and the viscosity $\mu$, in addition to pore structure and direction of the applied head drop. This is in contrast with the hydraulic conductance $B$, tortuosity $\tau(\mathcal{S})$ and constriction factor $C(\mathcal{S})$, which are only dependent on pore structure and direction. With the formulations above, Eq.  can now be rewritten as follows: $$\kappa(\mathcal{S}) = \frac{B(\mathcal{S}) \tau(\mathcal{S})^2}{C(\mathcal{S})}. \label{eq:kappa_eq_Btortconst_one_path}$$ The permeability factor for an individual streamline is then expressed by descriptors of the pore structure. 
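Once the three streamline descriptors are known, the factorization of $\kappa(\mathcal{S})$ above is a one-liner; a minimal sketch (the function name is illustrative):

```python
def streamline_permeability_factor(B_S, tau_S, C_S):
    """kappa(S) = B(S) * tau(S)^2 / C(S): the hydraulic conductance of
    the streamline, reduced by path length (tau <= 1) and by
    constrictions (C >= 1)."""
    return B_S * tau_S**2 / C_S
```

Since $\tau(\mathcal{S}) \le 1$ and, by the Cauchy-Schwarz inequality, $C(\mathcal{S}) \ge 1$, the hydraulic conductance $B(\mathcal{S})$ is an upper bound on $\kappa(\mathcal{S})$.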
Effective Permeability {#sec:perm_description} ====================== In this section we show how the permeability factor $\kappa_s$ can be segmented into a characteristic length, a constriction factor and a tortuosity by averaging over the streamline values. These are strictly related to fluid flow, based on the solution of the Stokes equations inside the porous medium. The effective hydraulic conductance is found as the volume-weighted average of the hydraulic conductance $B$ [@bear1967generalized; @bear1988dynamics]: $$B_s = \frac{1}{\Omega_s} \int_{\Omega_s} B dV. \label{eq_global_hydraulic_conductance}$$ Since $B$ is only dependent on pore space and direction, so is $B_s$. From $$B_s = \frac{1}{\Omega_s} \int_{\Omega_s} B dV = \frac{1}{\Omega_s}\int_\mathbb{S} B(\mathcal{S}) dQ_\mathcal{S}, \label{eq_global_hydraulic_conductance_streamlines}$$ we have a correspondence between the volume integral of $B$ and the streamline integral of $B(\mathcal{S})$. We define the *characteristic (hydraulic) length* as $L_h = \sqrt{8 B_s}$ to represent the effective hydraulic pore radius of the porous medium. Note that the characteristic length scales linearly with the size of the porous medium, as desired for a characteristic length. As seen in Sect. \[subsec:char\_length\], for a porous medium consisting of parallel circular tubes of radius $r$ and length $\Delta s$ connecting the opposite sides of a cube of side length $\Delta s$, we have $L_h = r$ and $k = \phi_s \kappa_s = \phi_s B_s = \phi_s L_h^2/ 8$, as desired for such a medium [@dullien1992porous]. Consider a porous medium for which $C(\mathcal{S})=1$ for all streamlines $\mathcal{S}$, e.g. a single tube of constant cross-sectional area. Following Eq. , for such a porous medium we desire a tortuosity $\tau_s^2$ that gives $k = \phi_s B_s \tau_s^2$ [@dullien1992porous; @bear1988dynamics]. Then $\kappa_s = B_s \tau^2_s$ by Eq. , thus invoking Eq.  
gives $$\tau^2_s = \frac{1}{\int_\mathbb{S} B(\mathcal{S}) dQ_\mathcal{S}}\int_\mathbb{S} \tau^2(\mathcal{S}) B(\mathcal{S}) dQ_\mathcal{S}. \label{eq:tort_path}$$ Hence the tortuosity squared $\tau_s^2$ is a weighted average of the streamline tortuosity squared. Note that the tortuosity $\tau^2_s$ is only dependent on the pore space $\Omega_s$ and the direction of applied piezometric head drop; it is dimensionless and scale invariant. The tortuosity is commonly formulated as a weighted average of the streamline lengths [@duda2011hydraulic; @koponen1997permeability], therefore let the tortuosity $\hat{\tau}^\alpha= \left( \int_\mathbb{S} \hat{w}(\mathcal{S}) dQ_\mathcal{S} \right)^{-1} \int_\mathbb{S} \tau^\alpha (\mathcal{S}) \hat{w}(\mathcal{S}) dQ_\mathcal{S}$ be another weighted average of the streamline tortuosity $\tau(\mathcal{S})$, now raised to a power $\alpha$. Consider a porous medium for which $C(\mathcal{S})=1$ and $\tau(\mathcal{S})$ is constant for all streamlines $\mathcal{S}$. We still want $k = \phi_s B_s \hat{\tau}^\alpha$, thus $\hat{\tau}^\alpha = \tau_s^2$, which yields $$\tau^\alpha(\mathcal{S}) = \frac{1}{\int_\mathbb{S} \hat{w}(\mathcal{S}) dQ_\mathcal{S}} \int_\mathbb{S} \tau^\alpha (\mathcal{S}) \hat{w}(\mathcal{S}) dQ_\mathcal{S} = \hat{\tau}^\alpha = \tau_s^2 = \tau^2(\mathcal{S}),$$ therefore $\alpha=2$. If we desire a streamline decomposition to hold for $B_s \hat{\tau}^2 = \int_\mathbb{S} \kappa(\mathcal{S}) dQ_\mathcal{S}$, then $(\int_\mathbb{S} B(\mathcal{S}) dQ_\mathcal{S} / \int_\mathbb{S} \hat{w}(\mathcal{S}) dQ_\mathcal{S}) \hat{w}(\mathcal{S}) = B(\mathcal{S})$, showing that $\tau_s^2$ is the unique tortuosity of the form described by $\hat{\tau}^\alpha$. Using Eqs. , , and , we have $$B_s \tau_s^2 = \frac{1}{\Omega_s} \int_\mathbb{S} B(\mathcal{S}) \tau^2(\mathcal{S}) dQ_\mathcal{S} = \frac{1}{\Omega_s} \int_\mathbb{S} \kappa(\mathcal{S}) C(\mathcal{S}) dQ_\mathcal{S} = \kappa_s \frac{1}{Q} \int_\mathbb{S} C(\mathcal{S}) dQ_\mathcal{S}. 
\label{eq:Btort_eq_kappa_const}$$ By factoring out the hydraulic conductance $B_s$ and tortuosity $\tau_s^2$, the remaining contribution to the permeability factor $\kappa_s$ is $$C_s = \frac{1}{Q} \int_\mathbb{S} C(\mathcal{S}) dQ_\mathcal{S}, \label{eq:const}$$ where $C_s$ is denoted the (hydraulic) *constriction factor*. The constriction factor is also only dependent on the pore space $\Omega_s$ and direction; it is dimensionless and scale invariant. While the hydraulic conductance represents an effective hydraulic pore radius, the constriction factor represents the fluctuation in hydraulic pore radii. From Eq.  and Eq.  we have $$\kappa_s = \frac{B_s \tau^2_s}{C_s} = \frac{\tau_s^2 L_h^2}{8 C_s}. \label{eq:kappa_eq_Btortconst_global}$$ Combining Eq.  with Eq.  then gives: $$k = \kappa_s \phi_s = \frac{\tau_s^2 B_s \phi_s}{C_s} = \frac{\tau_s^2 L_h^2 \phi_s}{8 C_s}. \label{eq:perm_as_Btort_const}$$ We thus have a full description of the porous medium permeability by pore structure related parameters. The permeability factor $\kappa_s$ gives the effectiveness of the pore space $\Omega_s$ to conduct flow. This effectiveness is reduced by longer flow paths (a smaller tortuosity $\tau_s$), by more variation in pore size along the flow paths (a larger constriction factor $C_s$), and by smaller pores (a smaller characteristic length $L_h$). Note that these factors are dependent on direction in addition to the pore structure, which leads to the anisotropy of the permeability. Single Tube Example {#sec:hagen-poiseuille} =================== Capillary tube bundle models are widely used as simplified representations of porous media. The single-tube examples in this section illustrate such model representations. Consider a straight cylindrical tube with constant cross-section of radius $R$. 
If the length of the tube is much larger than the radius, then the flow inside the tube is approximated by the Hagen-Poiseuille equation $$Q= -\frac{\pi R^4 \rho g \nabla h}{8 \mu}. \label{hagen-poiseuille_q}$$ Moreover, the flow velocity is given by $$u(r) =-\frac{(R^2-r^2) \rho g \nabla h}{4 \mu}, \label{hagen_poiseuille_u}$$ where $r$ is the distance from the center of the tube [@adler1992porous]. In the equation above and subsequently in this section, $\nabla h$ is also used to denote the scalar value $-\lVert \nabla h \rVert$. In the following subsections, the Hagen-Poiseuille equation is used to demonstrate our theory introduced above. Permeability Factor ------------------- Consider a straight cylindrical tube inside a cube of side-length $L$, and with an applied piezometric head difference $\Delta h$ over two opposite sides of the cube. Let the tube be of length $L$ and aligned with the applied head, then the piezometric head gradient inside the tube is $\nabla h = \Delta h/L$. Combining Darcy’s law with the Hagen-Poiseuille equation, Eqs.  and , the permeability of the cube is given by: $$k = -\frac{Q \mu L}{L^2 \rho g \Delta h} = \frac{\pi R^4}{8 L^2}. \label{hagen-poiseuille_darcy}$$ Using the flow velocity a distance $r$ from the tube center as given by Eq. , and the piezometric head gradient $\nabla h = \Delta h/L$, the permeability factor described in Eq.  is $\kappa(r) = (R^2-r^2)/4$. Taking the volume-weighted average then gives the effective permeability factor $$\kappa_s = \frac{1}{\pi R^2 L} \int_0^L \int_0^R \kappa(r) 2\pi r dr ds = \frac{R^2}{8}. \label{eq:int_of_kappa}$$ This gives $\kappa_s \phi = \pi R^4/(8 L^2)$, which is equal to the result from Darcy’s law in Eq. , hence our results are consistent with Eq. . Characteristic Length {#subsec:char_length} --------------------- Let our porous medium and applied piezometric head be as above. 
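The volume-weighted average in Eq.  can be verified numerically; the same radial average gives the hydraulic conductance $B_s$ used in this subsection. Below is a minimal sketch, in which the midpoint discretization and function name are assumptions:

```python
import numpy as np

def radial_average(R, n=200_000):
    """Volume-weighted average of f(r) = (R^2 - r^2)/4 over a circular
    cross-section of radius R, using a midpoint rule with area weight
    2*pi*r dr. For the straight tube both kappa_s and B_s reduce to
    this average, whose exact value is R^2/8."""
    r = (np.arange(n) + 0.5) * (R / n)   # midpoints of the radial bins
    dr = R / n
    integrand = (R**2 - r**2) / 4.0 * 2.0 * np.pi * r * dr
    return float(np.sum(integrand) / (np.pi * R**2))
```

Up to the discretization error, `radial_average(R)` returns $R^2/8$, confirming $\kappa_s = B_s = R^2/8$ and hence $L_h = \sqrt{8 B_s} = R$ for this medium.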
The permeability can be calculated using the individual contributions from the characteristic length, constriction, and tortuosity, as given by Eq. . The tortuosity and constriction are equal to $1$ when the fluid flow inside the tube is described by the Hagen-Poiseuille equation, while the hydraulic conductance is $B(r) = (R^2-r^2)/4$. The volume-weighted average of $B(r)$ is then equal to the integration of $\kappa(r)$ in Eq. , therefore $B_s = R^2/8$. This gives a characteristic length $L_h = \sqrt{8B_s} = R$. The characteristic length thus equals the radius of the tube, as desired. Since $\tau_s = 1$ and $C_s = 1$, we have $$\frac{\tau^2_s L_h^2 \phi}{8C_s} = \frac{\pi R^4}{8 L^2},$$ which agrees with the result from Darcy’s law in Eq. , hence our results are consistent with Eq. . Tortuosity ---------- ![Idealized porous medium with a tortuous tube.[]{data-label="tort_tube_img"}](straight_tort_example.ps){width="\linewidth"} ![Idealized porous medium with a single constriction.[]{data-label="constriction_img"}](constriction_tubes_w_box_flipped.ps){width="\linewidth"} Let our porous medium and applied piezometric head again be as above, except that the length of the straight circular tube with constant cross-sectional area is $s > L$, as shown in Fig. \[tort\_tube\_img\]. Then the piezometric head gradient inside the tube is $\nabla h = \Delta h/s$. Combining Eqs.  and  gives the permeability as: $$k = -\frac{Q \mu L}{L^2 \rho g \Delta h} = \frac{\pi R^4}{8 Ls}. \label{hagen-poiseuille_darcy_tort}$$ Note that a larger $s$ gives a smaller permeability $k$, while $s=L$ gives a permeability equal to Eq. . The calculations for the characteristic length in Sect. \[subsec:char\_length\] still hold, so $L_h = R$. The constriction factor $C_s$ is equal to $1$, while each streamline has length $s$, thus $\tau_s=\tau(\mathcal{S})=L/s$. We then have $$\frac{\tau^2_s L_h^2 \phi}{8C_s} = \frac{\pi R^4}{8 Ls},$$ which is equal to the result from Darcy’s law in Eq. 
, and consistent with Eq. . Note that by changing the orientation of the tube shown in Fig. \[tort\_tube\_img\], the tortuosity and the permeability will change, while in this idealized case the characteristic length and constriction factor stay constant. In this case the anisotropy of the permeability is captured by the tortuosity. Constriction {#subsec:constriction} ------------ We will now use an idealized porous medium to investigate the effect of constriction. Consider a porous medium consisting of two tube segments in sequence, aligned with the applied piezometric head, as depicted in Fig. \[constriction\_img\]. The two tube segments both have length $L/2$ inside a cube of side-length $L$, while the radii of the two segments are $R_1$ and $R_2$. Assuming $R_i \ll L/2$ for $i=1,2$, the flow inside the tube segments can be approximated by the Hagen-Poiseuille equations and the tortuosity can be approximated as $\tau_s = 1$. Invoking Eq.  we can show that the discharge is $Q = -(\pi R_1^4 R_2^4 \rho g \nabla h)/(4 \mu (R_1^4 + R_2^4)).$ Using Darcy’s law, Eq. , we then obtain $$k= - \frac{Q\mu L}{L^2 \rho g \Delta h} = \frac{\pi R_1^4 R_2^4}{4L^2 (R_1^4 + R_2^4)}. \label{hagen-poiseuille_darcy_const}$$ From Eq.  we have $\nabla h_i = -8 \mu Q /(\pi R_i^4 \rho g)$, and from Eq.  we derive the constriction factor as $$C_s = \frac{1}{L^2} \int_0^L \nabla h dx \int_0^L \frac{dx}{\nabla h} = \frac{1}{4}\frac{ (R_1^4 + R_2^4)^2}{R_1^4 R_2^4}.$$ For each tube section we have $B_i = R_i^2/8$, which gives $B_s= (R_1^4+R_2^4)/(8(R_1^2 + R_2^2))$. Since $\tau_s^2 = 1$, we have: $$k = \frac{\tau_s^2 B_s \phi_s}{C_s} = \frac{\pi R_1^4 R_2^4}{4L^2 (R_1^4 + R_2^4)},$$ consistent with Eq. . Note that a larger difference between $R_1$ and $R_2$ gives a larger constriction factor $C_s$, which in turn implies a lower permeability, while $C_s=1$ when $R_1=R_2$, as desired. We will now revisit the more general constriction example from Sect.
\[subsec:streamline\_constriction\_factor\], where we considered a tube with cross-sectional area $A(x)$ for $x \in [0,L]$. We still assume that the flow inside the tube is approximated by the Hagen-Poiseuille equations and that the tortuosity can be approximated as $\tau_s = 1$. Then $\nabla h (x) = -8 \mu Q \pi /(A(x)^2 \rho g)$, and from Eq.  we then have $$k = \frac{1}{8 \pi L \int_0^L \frac{1}{A(x)^2}dx}.$$ The constriction factor $$C_s = \frac{1}{L^2} \int_0^L \frac{1}{A(x)^2} dx \int_0^L A(x)^2 dx$$ equals Eq.  in Sect. \[subsec:streamline\_constriction\_factor\]. For the cross-section at point $x$ we have $B(x) = A(x)/(8 \pi)$, thus $B_s = \int A(x)^2 dx/(8 \pi \int A(x) dx)$. Now assume another porous medium with equal pore volume and with a constant cross-sectional area. Then the cross-sectional area is $\hat{A}=(1/L) \int A(x) dx$, $\hat{B}_s = \hat{A}/(8\pi)$ and $\hat{k} = \hat{A}^2/(8 \pi L^2)$. When factoring out the hydraulic conductance, the reduction in permeability due to the varying cross-sectional area is $$\frac{k/B_s}{\hat{k}/\hat{B}_s} = \frac{1}{C_s},$$ hence the reduction equals the inverse of the constriction factor. Fontainebleau Rock Example {#section:bentheimer_network} ========================== We next turn to natural porous media, such as those given by micro-CT (microtomography) images and rock models of Fontainebleau sandstone. Using the e-Core software [@e-core_v152] we generated three-dimensional rock models of Fontainebleau sandstone with porosities ranging from 8 to 26%. We used the exact same grain packing for all models, while we varied the amount of quartz cementation to achieve the variation in porosities. The rock modeling process is described in detail in Refs. [@oren2006digital; @berg2012]. The micro-CT images and rock models were 2.7 mm cubed with a resolution of 5.7 $\mu$m. ![Visualization of the network representation of the pore space of Fontainebleau sandstone mCTc in Table \[tab\_models\].
Balls represent pore bodies, and sticks represent pore throats. This network has 7413 pore bodies and 14254 pore throats.[]{data-label="network_img"}](network_mct_fountainebleau_18_bw.eps){width="7cm"} We extracted network analogs using the e-Core software [@e-core_v152]. The software extracts a pore network using a grain-based algorithm [@bakke19973], which segments the pore space into pore bodies and pore throats, each associated with a point $x \in V$, a volume, a shape factor $G$ [@mason1991capillary], and an inscribed radius $r$ [@patzek2001shape]. One such network is visualized in Fig. \[network\_img\]. The distance between two pore bodies $t1$ and $t2$ connected by a pore throat $t3$ is divided into three parts: lengths $l_{t1}$ and $l_{t2}$ associated with the pore bodies $t1$ and $t2$, respectively, and length $l_{t3}$ associated with the pore throat. Following Eq. (18) in Ref. [@oren1998extending], the hydraulic conductance of each part is approximated by $$g = \frac{3 r^4}{80 \mu G}. \label{eq:hydraulic_conductance}$$ The effective hydraulic conductance between two pore bodies $t1$ and $t2$ connected by a pore throat $t3$ is taken as the length-weighted harmonic average of the three parts $$g_t = l_t \left( \frac{l_{t1}}{g_{t1}} + \frac{l_{t3}}{g_{t3}} + \frac{l_{t2}}{g_{t2}} \right)^{-1},$$ where $l_t = l_{t1}+l_{t3}+l_{t2}$. Assuming a Hagen-Poiseuille type relation between the fluid discharge $Q_{t_{1,2}}$ from pore $t1$ to pore $t2$ and the head gradient $\nabla h_{t_{1,2}} = (h_{t2} - h_{t1})/l_t$ for each pore throat $t$, we have $$Q_{t_{1,2}} = -g_t \frac{\rho g(h_{t2}-h_{t1})}{l_t},$$ where $h_{ti}$ is the piezometric head associated with the pore body $ti$.

| Rock | $\phi$ | $\phi_s$ | $\kappa_s$ $[(\mu m)^2]$ | $\tau_s$ | $\tau$ | $C_s$ | $L_h$ $[\mu m]$ | $r_c$ $[\mu m]$ | $k$ $[(\mu m)^2]$ | $k$ $[mD]$ |
|------|--------|----------|--------------------------|----------|--------|-------|-----------------|-----------------|-------------------|------------|
| mCTa | 0.081 | 0.048 | 0.86 | 0.347 | 0.373 | 107.97 | 78.62 | 10.4 | 0.041 | 41.72 |
| mCTb | 0.128 | 0.114 | 4.569 | 0.41 | 0.425 | 28.41 | 78.5 | 15.1 | 0.521 | 528.06 |
| mCTc | 0.176 | 0.166 | 13.165 | 0.449 | 0.462 | 19.91 | 102.06 | 21.3 | 2.192 | 2220.86 |
| mCTd | 0.21 | 0.2 | 14.078 | 0.447 | 0.464 | 20.49 | 107.39 | 20.3 | 2.811 | 2848.57 |
| a | 0.086 | 0.056 | 0.766 | 0.337 | 0.355 | 105.3 | 75.28 | 9.1 | 0.043 | 43.44 |
| b | 0.101 | 0.079 | 1.865 | 0.363 | 0.381 | 58.97 | 81.73 | 13.2 | 0.147 | 149.22 |
| c | 0.125 | 0.111 | 3.841 | 0.387 | 0.405 | 40.51 | 91.08 | 15.3 | 0.427 | 432.61 |
| d | 0.153 | 0.143 | 9.573 | 0.431 | 0.451 | 26 | 103.6 | 18.8 | 1.371 | 1388.87 |
| e | 0.176 | 0.168 | 13.987 | 0.442 | 0.462 | 22.75 | 114.03 | 22.9 | 2.345 | 2376.21 |
| f | 0.206 | 0.198 | 22.428 | 0.461 | 0.48 | 18.36 | 124.6 | 26.3 | 4.449 | 4507.45 |
| g | 0.245 | 0.237 | 33.435 | 0.467 | 0.487 | 15.81 | 139.26 | 30.8 | 7.935 | 8040.4 |

: Pore-structure descriptors and permeability $k=\kappa_s\phi_s$ for the Fontainebleau micro-CT images (mCT) and rock models.[]{data-label="tab_models"}

The network model can now be viewed as a resistor-network analog, with a one-to-one correspondence between the pore throats in the porous medium and the resistors in the resistor network, and between the pore bodies and the network nodes. Each pore throat (resistor) $t$ is given a conductance $g_t/l_t$. Let $h_i$ be the piezometric head corresponding to pore body (node) $i$, and $\{ t_{ij} \}_{j=1}^{\alpha_i}$ the pore throats (resistors) connected to pore body $i$. We then solve for $h_i$ such that $$\sum_{j=1}^{\alpha_i}{ -g_{t_{ij}}\frac{\rho g(h_j-h_i)}{l_{t_{ij}}} } = \sum_{j=1}^{\alpha_i}{ Q_{t_{ij}}} = 0,$$ where we have fixed the piezometric head at the inlet and outlet boundaries.
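The head solve just described can be sketched as a small resistor-network computation. In the toy network below, the node indices, conductances, and boundary heads are made-up illustrative values, and the effective conductance `c` stands in for $g_t\rho g/l_t$; the mass-balance equations are relaxed by Gauss-Seidel iteration.

```python
# Toy resistor-network analog (made-up values): node 0 is the inlet,
# node 3 the outlet; Q_t = c * (h_i - h_j) with c ~ g_t * rho * g / l_t.
edges = {(0, 1): 2.0, (0, 2): 1.0, (1, 2): 0.5, (1, 3): 1.0, (2, 3): 2.0}
fixed = {0: 1.0, 3: 0.0}            # prescribed piezometric heads at the boundaries
h     = {0: 1.0, 1: 0.5, 2: 0.5, 3: 0.0}

def neighbours(i):
    for (a, b), c in edges.items():
        if a == i: yield b, c
        if b == i: yield a, c

# Gauss-Seidel sweeps enforcing sum_j c_ij (h_j - h_i) = 0 at interior nodes
for _ in range(2000):
    for i in h:
        if i in fixed:
            continue
        h[i] = (sum(c * h[j] for j, c in neighbours(i))
                / sum(c for _, c in neighbours(i)))

Q_in  = sum(c * (h[0] - h[j]) for j, c in neighbours(0))  # discharge entering
Q_out = sum(c * (h[j] - h[3]) for j, c in neighbours(3))  # discharge leaving
```

At convergence the inlet and outlet discharges agree, mirroring the mass conservation imposed at every pore body.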
![Plot showing the correspondence between porosity $\phi$ and $\phi_s$ for the rock models and the micro-CT (mCT) data, together with a linear fit to the model data.[]{data-label="por_vs_gradpor"}](plot_por_vs_gradpor.ps){width="7cm"} The network volume with non-zero head gradient can be calculated as $$\Omega_s = \sum_{h_{t1} \not= h_{t2}}{ \left( \frac{V_{t1}}{\alpha_{t1}} + \frac{V_{t2}}{\alpha_{t2}} + V_{t3} \right) }.$$ The values for porosity $\phi$ and $\phi_s = \Omega_s/V$ for the network representations of our micro-CT images and models are reported in Table \[tab\_models\]. In Fig. \[por\_vs\_gradpor\] we have plotted porosity $\phi$ versus $\phi_s$. A linear fit to the model data plotted in Fig. \[por\_vs\_gradpor\] gives the correspondence $$\phi_s(\phi) = 1.126(\phi - 0.030). \label{phi_path_to_phi}$$ The fluid velocity inside the network elements is not resolved; we therefore treat the fluid velocity as constant inside each network element. The average fluid velocities for parts $t1$ and $t2$ are given by $u_{ti} = g_{ti} \rho g \lvert h_{ti}-h_{it} \rvert \alpha_{ti} / V_{ti}$, where $i=1,2$, while for part $t3$ it is $u_{t3} = g_{t3} \rho g \lvert h_{1t}-h_{2t} \rvert / V_{t3}$. Here $h_{1t}$ and $h_{2t}$ are piezometric heads such that $g_{t1} \rho g \lvert h_{t1}-h_{1t} \rvert/l_{t1} = g_{t3} \rho g \lvert h_{1t}-h_{2t} \rvert /l_{t3} = g_{t2} \rho g \lvert h_{2t} -h_{t2} \rvert/l_{t2}$. The fluid velocity $\vec{u}$ is in the direction opposite to the gradient of the head, i.e. $\nabla h \cdot \vec{u} = - \lVert \nabla h \rVert u$. The local permeability factors for the sections $t_1, t_2, t_3$, as given by Eq. , are: $$\begin{aligned} \kappa_{ti} &= \frac{\mu\, g_{ti} (\rho g (h_{ti}-h_{it}))^2 \alpha_{ti}}{l_{ti} V_{ti}} \left(\frac{ \Delta s}{\rho g \Delta h}\right)^2 \text{ for } i=1,2, \text{ and} \notag \\ \kappa_{t3} &= \frac{\mu\, g_{t3} (\rho g (h_{1t}-h_{2t}))^2}{l_{t3} V_{t3}} \left(\frac{ \Delta s}{\rho g \Delta h}\right)^2.
\notag\end{aligned}$$ This enables the calculation of the effective permeability factor as the volume average of these local contributions: $$\kappa_s = \frac{1}{\Omega_s} \sum_{h_{t1} \not= h_{t2}}{ \left( \frac{V_{t1}}{\alpha_{t1}} \kappa_{t1} + \frac{V_{t2}}{\alpha_{t2}} \kappa_{t2} + V_{t3} \kappa_{t3} \right) }.$$ The results are reported in Table \[tab\_models\]. ![Plot showing porosity $\phi$ versus effective permeability factor $\kappa_s$ for the Fontainebleau rock models and micro-CT (mCT) data, together with Eq.  describing the correlation.[]{data-label="por_vs_kappa"}](plot_por_vs_kappa.ps){width="0.9\linewidth"} ![Plot showing porosity $\phi$ versus permeability $k$ for the Fontainebleau rock models and micro-CT (mCT) data, together with Eq.  describing the correlation.[]{data-label="por_vs_perm"}](plot_por_vs_perm.ps){width="0.9\linewidth"} The effective permeability factor $\kappa_s$ versus porosity $\phi$ is plotted in Fig. \[por\_vs\_kappa\]. The function $$\kappa_s(\phi)=1181(\phi-0.054)^{2.12}, \label{eq_por_vs_kappa}$$ is included as a fit to the model data. For $\phi = 0.054$ we have $\kappa_s(\phi) = 0$, which is interpreted as the percolation threshold for this sandstone [@mavko1997effect]. The permeability $k = \kappa_s \phi_s$, as given by Eq. , is also listed in Table \[tab\_models\]. Calculated porosity and permeability for the network representations of Fontainebleau sandstone are plotted in Fig. \[por\_vs\_perm\]. Combining Eqs. , and , we have: $$\begin{aligned} k(\phi) &= \kappa_s(\phi) \phi_s(\phi) = 1181(\phi-0.054)^{2.12} \cdot 1.126(\phi - 0.030) \notag \\ &=1329 (\phi-0.054)^{2.12} (\phi-0.030). \label{eq_darcy_is_por_and_kappa}\end{aligned}$$ This function is also included in Fig. \[por\_vs\_perm\], and provides a derived porosity-permeability relationship for the Fontainebleau samples.
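The fitted relations above combine in a few lines of Python; the sketch below evaluates them for sample mCTc ($\phi=0.176$), whose tabulated permeability is about $2.192\,(\mu m)^2$, and agrees to within a few percent (the sample choice and tolerance are ours).

```python
def phi_s(phi):      # effective (flow-conducting) porosity, linear fit
    return 1.126 * (phi - 0.030)

def kappa_s(phi):    # effective permeability factor in (micro-m)^2, power-law fit
    return 1181 * (phi - 0.054) ** 2.12

def k(phi):          # permeability k = kappa_s * phi_s, in (micro-m)^2
    return kappa_s(phi) * phi_s(phi)

pred = k(0.176)      # sample mCTc: tabulated k is about 2.192 (micro-m)^2
```

Note that `kappa_s` vanishes at $\phi=0.054$, reproducing the percolation threshold of the fit.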
We discretized the volume $\Omega_s$ into a disjoint union $\sqcup{\mathcal{S}}$, where $\mathcal{S}$ is a subvolume of a single series of pore throats $t_\mathcal{S} = \{ t \}$, with the first throat connected to an inlet boundary and the last connected to an outlet boundary. Each $\mathcal{S}$ transports a constant discharge $Q_\mathcal{S}$, and $\sum{Q_\mathcal{S}} = Q$. The discretization $\sqcup \mathcal{S}$ is a simplification of the streamlines, similar to a bundle-of-capillary-tubes model. In the network representation of the Fontainebleau sandstone the discretization $\Omega_s = \sqcup{\mathcal{S}}$ depends on the fluid flow across the network nodes; however, different ways of tracing streamlines across the network nodes were tested, yielding comparable results. For the sections $t_1, t_2$ and $t_3$ of constant hydraulic conductance associated with pore throat $t \in t_\mathcal{S}$, we have the corresponding parts $\{ \mathcal{S}_{ti} \}$ of $\mathcal{S}$. Each volume $\mathcal{S}_{ti} \subset V_{ti}$ then has the associated length $l_{ti}$. The three sections have $$\begin{aligned} B_{ti} &= -\frac{\mu u}{\rho g \nabla h} = \frac{\mu g_{ti} l_{ti} \alpha_{ti}}{ V_{ti}} \text{ for } i=1,2, \text{ and} \notag \\ B_{t3} &= -\frac{\mu u}{\rho g \nabla h} = \frac{\mu g_{t3} l_{t3}}{ V_{t3}}.
\notag\end{aligned}$$ For each $\mathcal{S}$ in $\Omega_s = \sqcup{\mathcal{S}}$ we calculated $$B(\mathcal{S})= \frac{1}{\mathcal{S}} \sum_{t \in t_\mathcal{S}}{\mathcal{S}_{t1} B_{t1} + \mathcal{S}_{t2} B_{t2} + \mathcal{S}_{t3} B_{t3}}.$$ We separately calculated the constriction factor $$C(\mathcal{S}) = \frac{\Delta h }{\left(\sum_{t \in t_\mathcal{S}}{l_t } \right)^2} \times \sum_{t \in t_\mathcal{S}}{ \left( \frac{l_{t1}^2}{ \lvert h_{t1}-h_{1t} \rvert }+ \frac{l_{t2}^2}{ \lvert h_{t2}-h_{2t} \rvert} + \frac{l_{t3}^2}{ \lvert h_{1t}-h_{2t} \rvert }\right) }, \label{eq:const_net_path}$$ and the tortuosity $$\tau(\mathcal{S}) = \frac{\Delta s}{\sum_{t \in t_\mathcal{S}}{ l_t}}.$$ We then obtain the characteristic length, constriction factor and tortuosity for the volume $\Omega_s$ as: $$\begin{aligned} L_h &= \sqrt{8 B_s} = \sqrt{8 \frac{1}{\Omega_s} \sum{\mathcal{S} B(\mathcal{S})} }, \label{eq:B_net_total} \\ C_s & = \frac{1}{Q} \sum{Q_\mathcal{S} C(\mathcal{S})}, \label{eq:const_net_total} \\ \tau^2_s & = \frac{1}{\sum \mathcal{S}B(\mathcal{S})} \sum{ \tau^2(\mathcal{S}) \mathcal{S}B(\mathcal{S})}. \label{eq:tort_net_total} \end{aligned}$$ The calculated values are reported in Table \[tab\_models\]. We see that $\tau^2_s L_h^2 / (8 C_s) = \kappa_s$, which is consistent with Eq. . For comparison we calculated the tortuosity $\tau$ as $$\tau =\frac{\Delta s}{ \frac{1}{Q} \sum_t Q_t l_t },$$ where $Q_t$ is the discharge through pore throat $t$ [@bear1988dynamics; @duda2011hydraulic]. Note that the values for $\tau_s$ and $\tau$ are similar; however, $\tau_s < \tau$. We also calculated the critical pore radius $r_c$, corresponding to the smallest network-element radius of the set of largest network elements that percolate through the network [@katz1986quantitative], where the radius of a network element is given by $r\sqrt[4]{3/(10 G \pi)}$. The values are reported in Table \[tab\_models\].
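The aggregation of per-streamline quantities into the bulk descriptors can be sketched as follows; all per-streamline numbers below are made-up illustrative values, with each streamline $\mathcal{S}$ reduced to a tuple of its volume, weighted conductance, constriction, tortuosity, and discharge.

```python
import math

# Per-streamline data (made-up values): volume V, conductance B(S),
# constriction C(S), tortuosity tau(S), discharge Q_S
streamlines = [  # (V, B, C, tau, Q)
    (1.0, 2.0e-10, 1.5, 0.50, 3.0),
    (0.5, 1.0e-10, 2.0, 0.40, 1.0),
    (1.5, 3.0e-10, 1.2, 0.55, 5.0),
]
Omega_s = sum(V for V, B, C, t, Q in streamlines)
Q_tot   = sum(Q for V, B, C, t, Q in streamlines)
VB      = sum(V * B for V, B, C, t, Q in streamlines)

B_s    = VB / Omega_s                                             # volume-weighted conductance
L_h    = math.sqrt(8 * B_s)                                       # characteristic length
C_s    = sum(Q * C for V, B, C, t, Q in streamlines) / Q_tot      # discharge-weighted constriction
tau_sq = sum(t**2 * V * B for V, B, C, t, Q in streamlines) / VB  # conductance-weighted tortuosity^2

kappa_s = tau_sq * L_h**2 / (8 * C_s)  # = tau_s^2 B_s / C_s
```

The design mirrors the three different weightings in the text: volume for $B_s$, discharge for $C_s$, and conductance for $\tau_s^2$.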
Such characteristic length scales are seen to be significantly lower than the hydraulic characteristic lengths $L_h$ calculated according to Refs. [@bear1967generalized; @bear1988dynamics] for the Fontainebleau networks, included in Fig. \[por\_vs\_char\_length\]. ![Plot showing porosity $\phi$ versus characteristic length squared $L_h^2$ and critical pore radius squared $r_c^2$ for the Fontainebleau rock models and the micro-CT (mCT) data.[]{data-label="por_vs_char_length"}](plot_por_vs_char_length.ps){width="10cm"} In Fig. \[por\_vs\_char\_length\] we have plotted porosity $\phi$ versus the characteristic length squared $L_h^2$ and the critical pore radius squared $r_c^2$, together with a functional fit $L_h^2(\phi) = \exp(7.650\phi+8.073)$. For high porosities, i.e., rocks with little cementation and thus larger pores, we have a larger characteristic length $L_h$ than for low porosities. The characteristic lengths $L_h$ for the micro-CT images scatter around the trend given by the simulated rock models. ![Plot showing porosity $\phi$ versus both tortuosity squared $\tau_s^2$ and inverse constriction $C_s^{-1}$ for the Fontainebleau rock models and the micro-CT (mCT) data.[]{data-label="por_vs_tort_const"}](plot_por_vs_tort_const.ps){width="10cm"} In Fig. \[por\_vs\_tort\_const\] we have plotted porosity $\phi$ versus both the tortuosity squared $\tau_s^2$ and the inverse constriction factor $C_s^{-1}$. For high porosities we have less permeability reduction due to both tortuosity and constriction, compared to low porosities. Cementation increases the ratio between pore-body and pore-throat cross-sectional area, which yields a larger fluctuation in pore size along the streamlines and therefore a higher constriction factor. When cementation blocks pore throats completely, it increases the length of the streamlines and reduces the tortuosity $\tau_s$.
A function $\tau^2_s(\phi) = 0.108\ln(\phi)+0.380$ gives a visual match to the tortuosity of the Fontainebleau sandstone models, while the constriction factor of the models follows the trend $C^{-1}_s(\phi) = 0.342(\phi-0.051)$. Both functions are also plotted in Fig. \[por\_vs\_tort\_const\]. The calculated tortuosity $\tau_s^2$ and inverse constriction $C_s^{-1}$ of the micro-CT images follow the trend given by the models. The calculated pore-structure descriptors for each Fontainebleau sandstone sample reveal a strong functional relation with respect to porosity. This is desirable for sensitive descriptors. The functional trends display non-trivial behavior at the percolation threshold derived in Eq. : the tortuosity $\tau_s^2$ and characteristic length $L_h^2$ indicate a non-zero value at the percolation threshold, while the inverse constriction factor $C_s^{-1}$ tends to zero at the percolation threshold. Conclusion ========== In this work we have given a fundamental description and calculation of the permeability of porous media. The permeability $k$ of a porous medium is equal to $\kappa_s \phi_s$. Here the effective porosity $\phi_s$ is the fractional volume conducting flow from inlet to outlet. The effective permeability factor $\kappa_s$ is given by the volume-weighted average of the microscopic permeability factors $$\kappa = -\mu \rho g \nabla h \cdot \vec{u} \left( \frac{\Delta s}{\rho g \Delta h} \right)^2.$$ This microscopic permeability factor $\kappa$ quantifies the local contribution of the pore structure to the effectiveness of the pore space in conducting fluid flow. We have shown that $\kappa_s = \tau_s^2 B_s/C_s = \tau_s^2 L_h^2/(8C_s)$, where the effective pore radius in the porous medium is described by the characteristic length $L_h$, the fluctuation in local hydraulic radii is described by the constriction factor $C_s$, and the effective length of the streamlines is described by the tortuosity $\tau_s$.
The characteristic length, constriction factor, and tortuosity are direction-dependent intrinsic descriptors of the pore structure. Their directional dependence leads to anisotropy of the permeability, i.e., the tensorial form of the permeability. We have shown that our methodology reproduces results for Hagen-Poiseuille flow in tubes. We also applied it to a natural porous medium given by a pore-network representation of Fontainebleau sandstone, showing how the distinct contributions to the permeability from characteristic length, constriction and tortuosity correlate with porosity. As long as the flow and piezometric head field can be obtained, this methodology is applicable to any porous medium. This work demonstrates how the permeability can be related to porosity, in the sense of Kozeny-Carman, through fundamental and measurable descriptors of the pore structure. Such a physical relation between permeability and porosity, derived from detailed pore-structure information, leads to a better fundamental understanding of structure-property relations in porous media. I would like to thank Rudolf Held (Statoil) for valuable discussions and contributions to the manuscript.
--- abstract: 'We obtain sharp estimates for the localized distribution function of ${\mathcal{M}}{\phi}$, when ${\phi}$ belongs to $L^{p,\infty}$, where ${\mathcal{M}}$ is the dyadic maximal operator. We obtain these estimates given the $L^1$ and $L^q$ norms, $q<p$, and certain weak-$L^p$ conditions.' author: - 'Eleftherios N. Nikolidakis' title: '**Sharp weak type inequalities for the dyadic maximal operator**' --- [*Keywords*]{}: Dyadic, Maximal Introduction ============ The dyadic maximal operator on ${\mathbb{R}}^n$ is a useful tool in analysis and is defined by: $$\begin{aligned} {\mathcal{M}}_d{\phi}(x)=\sup\bigg\{\frac{1}{|Q|}\int_Q|{\phi}(u)|du:x\in Q, \; Q\subseteq{\mathbb{R}}^n \ \ \text{dyadic cube}\bigg\} \label{eq1.1}\end{aligned}$$ for every ${\phi}\in L^1_{loc}({\mathbb{R}}^n)$, where $|\cdot|$ is the Lebesgue measure on ${\mathbb{R}}^n$ and the dyadic cubes are those formed by the grids $2^{-N}{\mathbb{Z}}^n$ for $N=1,2,{\ldots}\;.$ As is well known, it satisfies the following weak type (1,1) inequality $$\begin{aligned} |\{x\in{\mathbb{R}}^n:{\mathcal{M}}_d{\phi}(x)\ge{\lambda }\}|\le\frac{1}{{\lambda }}\int_{\{{\mathcal{M}}_d{\phi}\ge{\lambda }\}}|{\phi}(u)|du \label{eq1.2}\end{aligned}$$ for every ${\phi}\in L^1({\mathbb{R}}^n)$ and every ${\lambda }>0$, from which it is easy to get the following $L^p$ inequality: $$\begin{aligned} \|{\mathcal{M}}_d{\phi}\|_p\le\frac{p}{p-1}\|{\phi}\|_p, \label{eq1.3}\end{aligned}$$ for every $p>1$ and ${\phi}\in L^p({\mathbb{R}}^n)$. It is easy to see that the weak type inequality (\[eq1.2\]) is best possible, and it is proved in [@9] that (\[eq1.3\]) is also best possible (for general martingales see [@2] and [@3]).
In studying the dyadic maximal operator it is convenient to work with functions supported in the unit cube $[0,1]^n$ and, more generally, defined on a non-atomic probability measure space $(X,{\mu })$, where the dyadic sets are given by a family ${\mathcal{T}}$ of measurable subsets of $X$ that has a tree-like structure similar to the one in the dyadic case. Then we replace ${\mathcal{M}}_d$ by $$\begin{aligned} {\mathcal{M}}_{\mathcal{T}}{\phi}(x)=\sup\bigg\{\frac{1}{{\mu }(I)}\int_I|{\phi}|d{\mu }:x\in I\subseteq X,\; I\in{\mathcal{T}}\bigg\} \label{eq1.4}\end{aligned}$$ and (\[eq1.2\]) and (\[eq1.3\]) remain true and sharp in this setting. Actually, in this general setting (\[eq1.3\]) has been improved even more by inserting the $L^1$-norm of ${\phi}$ as a variable, giving the so-called Bellman functions of the dyadic maximal operator. In fact, in [@5] the following function of the variables $f,F$ has been explicitly computed $$\begin{aligned} B(f,F)=\sup\bigg\{\int_X({\mathcal{M}}_{\mathcal{T}}{\phi})^pd{\mu }:{\phi}\ge0, \; \int_X{\phi}d{\mu }=f,\; \int_X{\phi}^pd{\mu }=F\bigg\} \label{eq1.5}\end{aligned}$$ where $0<f^p\le F$. The related Bellman functions for the case $p<1$ have also been computed in [@6]. It is interesting now to ask what happens if we replace the $L^p$-norm with the quasi-norm $\|\cdot\|_{p,\infty}$ defined on $L^{p,\infty}$, where $$\begin{aligned} \|{\phi}\|_{p,\infty}=\sup\{{\lambda }{\mu }(\{{\phi}\ge{\lambda }\})^{1/p}:{\lambda }>0\} \label{eq1.6}\end{aligned}$$ for every ${\phi}$ such that this supremum is finite. It is known that $L^{p,\infty}\varsupsetneq L^p$ and ${\mathcal{M}}$ can be defined on $L^{p,\infty}$ with values in $L^{p,\infty}$. As a matter of fact, it is not difficult to see that ${\mathcal{M}}_{\mathcal{T}}$ satisfies the following $$\begin{aligned} \|{\mathcal{M}}_{\mathcal{T}}{\phi}\|_{p,\infty}\le\frac{p}{p-1}\|{\phi}\|_{p,\infty} \label{eq1.7}\end{aligned}$$ for every ${\phi}\in L^{p,\infty}$.
In [@8] it is proved that (\[eq1.7\]) is best possible. Actually, a stronger fact is proved there, namely that $$\begin{aligned} \sup\bigg\{\|{\mathcal{M}}_{\mathcal{T}}{\phi}\|_{p,\infty}:{\phi}\ge0,\int_X{\phi}d{\mu }=f,\;\|{\phi}\|_{p,\infty}=F\bigg\}=\frac{p}{p-1}F \label{eq1.8}\end{aligned}$$ for every $(f,F)$ such that $0<f\le\frac{p}{p-1}F$. That is, (\[eq1.7\]) is sharp, allowing every value for the $L^1$-norm of ${\phi}$. In the present paper we compute $$\sup\bigg\{{\mu }(\{{\mathcal{M}}_{\mathcal{T}}{\phi}\ge{\lambda }\}):{\phi}\ge0, \; \int_X{\phi}d{\mu }=f,\; \int_X{\phi}^qd{\mu }=A, \; \|{\phi}\|_{p,\infty}=F\bigg\} \label{eq1.9}$$ for a fixed $q$ such that $1<q<p$, and for all allowable values of $(f,A,F)$. In doing this we improve (\[eq1.2\]) even more, by inserting as variables the $L^q$-norm and the $L^{p,\infty}$-quasi-norm of ${\phi}$. From this we have as a consequence that $$\begin{aligned} \sup\bigg\{\|{\mathcal{M}}_{\mathcal{T}}{\phi}\|_{p,\infty}:{\phi}\ge0,\;\int_X{\phi}d{\mu }=f,\;\int_X{\phi}^qd{\mu }=A,\; &\|{\phi}\|_{p,\infty}=F\bigg\}\nonumber\\ &=\frac{p}{p-1}F, \label{eq1.10}\end{aligned}$$ that is, (\[eq1.7\]) is best possible allowing every possible value of the $L^1$ and $L^q$-norms. Finally, we mention that all the above calculations are independent of the measure space and the associated tree. We begin now with: Preliminaries ============= Let $(X,{\mu })$ be a non-atomic probability space. The following holds: \[lem2.1\] Let ${\phi}:X{\rightarrow}{\mathbb{R}}^+$ be measurable and $I\subseteq X$ be measurable with ${\mu }(I)>0$. Suppose that $\frac{1}{{\mu }(I)}\int\limits_I{\phi}d{\mu }=s$. Then for every $t$ such that $0<t\le{\mu }(I)$ there exists a measurable set $E_t\subseteq I$ with ${\mu }(E_t)=t$ and $\frac{1}{{\mu }(E_t)}\int\limits_{E_t}{\phi}d{\mu }=s$. Consider the measure space $(I,{\mu }/I)$ and let $\psi:I{\rightarrow}{\mathbb{R}}^+$ be the restriction of ${\phi}$ to $I$, that is, $\psi={\phi}/I$.
Then if $\psi^\ast:[0,{\mu }(I)]{\rightarrow}{\mathbb{R}}^+$ is the decreasing rearrangement of $\psi$, we have that $$\begin{aligned} \frac{1}{t}\int^t_0\psi^\ast(u)du\ge\frac{1}{{\mu }(I)}\int^{{\mu }(I)}_0\psi^\ast (u)du=s\ge\frac{1}{t}\int^{{\mu }(I)}_{{\mu }(I)-t}\psi^\ast(u)du. \label{eq2.1}\end{aligned}$$ Since $\psi^\ast$ is decreasing we get the inequalities in (\[eq2.1\]), while the equality is obvious since $$\int^{{\mu }(I)}_0\psi^\ast(u)du=\int_I{\phi}d{\mu }.$$ From (\[eq2.1\]) it is easily seen that there exists $r\ge0$ with $t+r\le{\mu }(I)$ such that $$\begin{aligned} \frac{1}{t}\int^{t+r}_r\psi^\ast(u)du=s. \label{eq2.2}\end{aligned}$$ It is also easily seen that there exists a measurable subset $E_t$ of $I$ such that $$\begin{aligned} {\mu }(E_t)=t \ \ \text{and} \ \ \int_{E_t}{\phi}d{\mu }=\int^{t+r}_r\psi^\ast(u)du \label{eq2.3}\end{aligned}$$ since $(X,{\mu })$ is non-atomic. From (\[eq2.2\]) and (\[eq2.3\]) we get the conclusion of the lemma. [$\quad\square$]{} We call two measurable subsets $A$ and $B$ of $X$ almost disjoint if ${\mu }(A\cap B)=0$. We give now the following \[def2.1\] A set ${\mathcal{T}}$ of measurable subsets of $X$ will be called a tree if the following conditions are satisfied. 1. $X\in{\mathcal{T}}$ and for every $I\in{\mathcal{T}}$ we have that ${\mu }(I)>0$. 2. For every $I\in{\mathcal{T}}$ there corresponds a finite or countable subset $C(I)\subseteq{\mathcal{T}}$ containing at least two elements such that: - the elements of $C(I)$ are pairwise almost disjoint subsets of $I$. - $I=\cup\, C(I)$. 3. ${\mathcal{T}}=\bigcup\limits_{m\ge0}{\mathcal{T}}_{(m)}$ where ${\mathcal{T}}_0=\{X\}$ and $${\mathcal{T}}_{(m+1)}=\bigcup_{I\in{\mathcal{T}}_{(m)}}C(I).$$ 4. ${\displaystyle}\lim_{m{\rightarrow}+\infty}\sup_{I\in{\mathcal{T}}_{(m)}}{\mu }(I)=0$.
[$\quad\square$]{} From [@5] we have the following \[lem2.2\] For every $I\in{\mathcal{T}}$ and every ${\alpha}$ such that $0<{\alpha}<1$ there exists a subfamily ${\mathcal{F}}(I)\subseteq{\mathcal{T}}$ consisting of pairwise almost disjoint subsets of $I$ such that $${\mu }\bigg(\bigcup_{J\in{\mathcal{F}}(I)}J\bigg)=\sum_{J\in {\mathcal{F}}(I)}{\mu }(J)=(1-{\alpha}){\mu }(I). \text{{$\quad\square$}}$$ Let now $(X,{\mu })$ be a non-atomic probability measure space and ${\mathcal{T}}$ a tree as in Definition \[def2.1\]. We define the maximal operator associated to the tree ${\mathcal{T}}$ as follows: for every ${\phi}\in L^1(X,{\mu })$ and $x\in X$, $${\mathcal{M}}{\phi}(x)={\mathcal{M}}_{\mathcal{T}}{\phi}(x)=\sup\bigg\{\frac{1}{{\mu }(I)}\int_I|{\phi}|d{\mu }:\;x\in I\in{\mathcal{T}}\bigg\}.$$ Domain of the extremal problem ============================== Our aim is to find for every ${\lambda }>0$ the following $$\begin{aligned} B(f,A,{\lambda })=\sup\bigg\{{\mu }(\{{\mathcal{M}}{\phi}\ge{\lambda }\}):{\phi}\ge0,\;&\int_X{\phi}d{\mu }=f,\; \int_X{\phi}^qd{\mu }=A,\nonumber\\ &\hspace*{2cm}\|{\phi}\|_{p,\infty}=\frac{p-1}{p}\bigg\}. \label{eq3.1}\end{aligned}$$ For this reason we define $$\begin{aligned} B_1(f,A,{\lambda })=\sup\bigg\{{\mu }(\{{\mathcal{M}}{\phi}\ge{\lambda }\}):{\phi}\ge0,\;&\int_X{\phi}^q d{\mu }=A, \nonumber\\ &\hspace*{-0.4cm}\|{\phi}\|_{p,\infty}\le\frac{p-1}{p}\bigg\}. \label{eq3.2}\end{aligned}$$ In order to find (\[eq3.1\]) and (\[eq3.2\]) it is necessary to find the allowable values of $f$ and $A$.
That is, the values for which there exists ${\phi}:(X,{\mu }){\rightarrow}{\mathbb{R}}^+$ such that $$\int_X{\phi}d{\mu }=f,\;\int_X{\phi}^qd{\mu }=A \ \ \text{and} \ \ \|{\phi}\|_{p,\infty}=\frac{p-1}{p} \ \ \text{or} \ \ \|{\phi}\|_{p,\infty} \le\frac{p-1}{p}.$$ To begin, let $f,A$ and ${\phi}$ be such that $$\int_X{\phi}d{\mu }=f, \; \int_X{\phi}^qd{\mu }=A, \ \ \|{\phi}\|_{p,\infty}\le\frac{p-1}{p}.$$ Consider the decreasing rearrangement of ${\phi}$, $g={\phi}^\ast:[0,1]{\rightarrow}{\mathbb{R}}^+$. Then for every ${\lambda }>0$, $|\{g\ge{\lambda }\}|={\mu }(\{{\phi}\ge{\lambda }\})$, where $|\cdot|$ is the Lebesgue measure on $[0,1]$. As a consequence $$\begin{aligned} \int^1_0g=f, \ \ \int^1_0g^q=A \label{eq3.3}\end{aligned}$$ and $$\begin{aligned} \sup\{{\lambda }|\{g\ge{\lambda }\}|^{1/p}:{\lambda }>0\}\le\frac{p-1}{p}. \label{eq3.4}\end{aligned}$$ (\[eq3.4\]) now gives, for every ${\lambda }>0$, that $$\begin{aligned} |\{g\ge{\lambda }\}|\le\bigg[\frac{(p-1)/p}{{\lambda }}\bigg]^p. \label{eq3.5}\end{aligned}$$ But if $\psi:(0,1]{\rightarrow}{\mathbb{R}}^+$ is defined by $\psi(t)=\frac{p-1}{p}t^{-1/p}$, then (\[eq3.5\]) means that $$\begin{aligned} g(t)\le\psi(t) \ \ \text{for every} \ \ t\in(0,1], \label{eq3.6}\end{aligned}$$ since $g$ is decreasing. Now from (\[eq3.6\]) we easily get $0<f\le1$. Fix such an $f$. Obviously from Hölder’s inequality $f^q\le A$. We search now for the minimum and maximum values of $A$ for which there exists a decreasing $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ such that (\[eq3.3\]) and (\[eq3.5\]) hold. We have the following simple \[lem3.1\] If $g$ satisfies (\[eq3.3\]) and (\[eq3.6\]) then $A\le{{\varGamma}}f^{(p-q)/(p-1)}$ where ${{\varGamma}}=\Big(\frac{p-1}{p}\Big)^q\frac{p}{p-q}$. It is easy to see that the function which gives the maximum value of $A$ for which there exists $g$ such that (\[eq3.3\]) and (\[eq3.6\]) hold (for a fixed $f$) is that with the largest possible values.
As a matter of fact, if $g$ does not have the largest possible values, we can arrange things in such a way as to produce a function $\tilde g$ with the same integral and bigger $L^q$-norm. This is done by increasing $g$ on suitable sets, in such a way that $\tilde g\le\psi$, and decreasing it analogously on other suitable sets. Then, since $q>1$, we easily get that the $L^q$-norm of $\tilde g$ is bigger than that of $g$. So we set $g_1:(0,1]{\rightarrow}{\mathbb{R}}^+$ such that $$g_1(t)=\psi(t)=\frac{p-1}{p}t^{-1/p}, \ \ t\in(0,c], \ \ \text{and} \ \ g_1(t)=0, \ \ t\in(c,1],$$ with $c<1$ suitable such that $\int\limits^c_0g_1(t)dt=f$, which is equivalent to $$\begin{aligned} \int^c_0\psi(t)dt=f\Leftrightarrow c^{1-\frac{1}{p}}=f\Leftrightarrow c=f^{p/(p-1)}. \label{eq3.7}\end{aligned}$$ Then $$\begin{aligned} \int^1_0g^q_1(t)dt&=\bigg(\frac{p-1}{p}\bigg)^q\int^c_0t^{-q/p}dt \\ &=\bigg(\frac{p-1}{p}\bigg)^q\frac{1}{1-\frac{q}{p}}c^{1-\frac{q}{p}}={{\varGamma}}f^{(p-q)/(p-1)}.\end{aligned}$$ After these comments and calculations we get the proof of the Lemma. [$\quad\square$]{} In a similar way, for a fixed $0<f\le1$ we need to find the smallest value of $A$ for which there exists $g$ such that (\[eq3.3\]) and (\[eq3.6\]) hold. This is done in the following steps \[lem3.2\] If $0<f\le1$ and $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ is decreasing with $g\le\psi$, $\int\limits^1_0g(t)dt=f$ and $\int\limits^1_0 g^q(t)dt=A$, then $A\ge A_f$ where $$A_f=\left\{\begin{array}{ccc} f^q, & \text{if} & 0<f\le\frac{p-1}{p} \\ \bigg(\frac{p-1}{p}\bigg)^q\frac{1}{p-q}\bigg\{p-q[p(1-f)]^{(p-q)/(p-1)}\bigg\}, & \text{if} & \frac{p-1}{p}<f\le1. \end{array}\right.$$ Indeed, since $f^q\le A$ for every $f\le1$, we need only check the case $\frac{p-1}{p}<f\le1$. (Notice that for $A_1=f^q$ and $g$ such that $g(t)=f$ for every $t\in[0,1]$, we have that $g\le\psi$, $\int\limits^1_0g(t)dt=f$ and $\int\limits^1_0g^q(t)dt=A_1$, in the case where $0<f\le\frac{p-1}{p}\Big)$.
As before, we need to find the $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ with the smallest values such that $\int\limits^1_0g(t)dt=f$ and $g\le\psi$. Arguing as before, we consider the function $g_2:[0,1]{\rightarrow}{\mathbb{R}}^+$ defined by: $$\begin{aligned} g_2(t)&=\frac{p-1}{p}c^{-1/p}, \ \ t\in(0,c] \\ &=\frac{p-1}{p}t^{-1/p}, \ \ t\in[c,1]\end{aligned}$$ where $c$ is such that $$\begin{aligned} \int^1_0g_2(t)dt=f. \label{eq3.8}\end{aligned}$$ Relation (\[eq3.8\]) now gives $\frac{p-1}{p}c^{1-\frac{1}{p}}+\int\limits^1_c\psi(t)dt=f\Rightarrow \frac{p-1}{p}c^{1-\frac{1}{p}}+\Big(1-c^{1-\frac{1}{p}}\Big)=f\Rightarrow c^{1-\frac{1}{p}}=p(1-f)\Rightarrow c=\Big[p(1-f)\Big]^{\frac{p}{p-1}}$. So we can easily see that $$\int^1_0g^q_2(t)dt=\bigg(\frac{p-1}{p}\bigg)^q\frac{1}{p-q}\{p-q[p(1-f)]^{\frac{p-q}{p-1}}\}$$ so the lemma is proved. [$\quad\square$]{} So we have proved that for every $f,A$ such that there exists a decreasing $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ with $\int\limits^1_0g=f$, $\int\limits^1_0g^q=A$, $g\le\psi$, we have that $f\le1$ and ${\mathcal{A}}_f\le A\le{{\varGamma}}f^{\frac{p-q}{p-1}}$. In fact we additionally proved that for every such $f$ there exist functions $g_1,g_2\le\psi$ such that $\int\limits^1_0g_i=f$ and $\int\limits^1_0g^q_1(t)dt={{\varGamma}}f^{\frac{p-q}{p-1}}$, $\int\limits^1_0g^q_2(t)dt={\mathcal{A}}_f$. We use this to prove the following \[lem3.3\] If $0<f\le1$ and ${\mathcal{A}}_f\le A\le{{\varGamma}}f^{\frac{p-q}{p-1}}$ then there exists $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ such that $$g\le\psi, \ \ \int^1_0g(t)dt=f \ \ \text{and} \ \ \int^1_0g^q(t)dt=A.$$ Let $0<f\le1$ and $g_1,g_2$ be as before. For every ${\ell}\in[0,1]$ we define $h_{\ell}:={\ell}g_1+(1-{\ell})g_2$. Then $h_{\ell}\le\psi$, $\int\limits^1_0h_{\ell}=f$ and $h_1=g_1$, $h_0=g_2$. Then we consider the function $T:[0,1]{\rightarrow}{\mathbb{R}}^+$ defined by $T({\ell})=\int\limits^1_0h^q_{\ell}$. It is obvious that $T$ is continuous on $[0,1]$ and that $T(0)={\mathcal{A}}_f\le A\le T(1)={{\varGamma}}f^{\frac{p-q}{p-1}}$.
As a consequence, there exists ${\ell}\in[0,1]$ such that $T({\ell})=\int\limits^1_0h^q_{\ell}(t)dt=A$. By setting $g=h_{\ell}$ the lemma is proved. [$\quad\square$]{} \[rem3.1\] Suppose that $f,A$ are such that there exists a decreasing $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ with $\int\limits^1_0g=f$, $\int\limits^1_0g^q=A$, $g\le\psi$. Then since $(X,{\mu })$ is non-atomic there exists ${\phi}:(X,{\mu }){\rightarrow}{\mathbb{R}}^+$ such that ${\phi}^\ast=g$. Then obviously $$\int_X{\phi}d{\mu }=f, \ \ \int_X{\phi}^q d{\mu }=A, \ \ \|{\phi}\|_{p,\infty}\le\frac{p-1}{p}.$$ We collect all the above in the following \[cor3.1\] For $f$ and $A$ positive constants the following are equivalent: 1. There exists ${\phi}:X{\rightarrow}{\mathbb{R}}^+$ measurable such that $$\int_X{\phi}d{\mu }=f, \ \ \int_X {\phi}^qd{\mu }=A, \ \ \|{\phi}\|_{p,\infty} \le\frac{p-1}{p}.$$ 2. $0<f\le1$ and ${\mathcal{A}}_f\le A\le{{\varGamma}}f^{\frac{p-q}{p-1}}$. We say then that $(f,A)\in D$. Actually, using the above arguments it is easy to see that the following is also true. \[cor3.2\] For $f$ and $A$ positive constants with $A\neq f^q$ the following are equivalent: 1. There exists ${\phi}:(X,{\mu }){\rightarrow}{\mathbb{R}}^+$ measurable such that $$\int_X{\phi}d{\mu }=f, \ \ \int_X{\phi}^q d{\mu }=A, \ \ \|{\phi}\|_{p,\infty}=\frac{p-1}{p}.$$ 2. $0<f\le1$ and ${\mathcal{A}}_f\le A\le{{\varGamma}}f^{\frac{p-q}{p-1}}$. The extremal problem ==================== Suppose now that $(f,A)\in D$ and ${\lambda }>0$.
We recall that $$\begin{aligned} B(f,A,{\lambda })=\sup\bigg\{{\mu }(\{{\mathcal{M}}{\phi}\ge{\lambda }\}):&{\phi}\ge0,\;\int_X{\phi}d{\mu }=f, \; \int_X{\phi}^qd{\mu }=A,\;\nonumber \\ &\hspace*{2.3cm}\|{\phi}\|_{p,\infty}=\frac{p-1}{p}\bigg\} \label{eq4.1}\end{aligned}$$ and $$\begin{aligned} B_1(f,A,{\lambda })=\sup\bigg\{{\mu }(\{{\mathcal{M}}{\phi}\ge{\lambda }\}):\;&{\phi}\ge0,\;\int_X{\phi}d{\mu }=f,\;\int_X{\phi}^q d{\mu }=A, \nonumber\\ &\hspace*{2.3cm}\|{\phi}\|_{p,\infty}\le\frac{p-1}{p}\bigg\}. \label{eq4.2}\end{aligned}$$ Our aim is to compute the above functions. First observe that $B(f,A,{\lambda })=B_1(f,A,{\lambda })=1$ for ${\lambda }<f$, so we may suppose that ${\lambda }\ge f$. Obviously $$\begin{aligned} B_1(f,A,{\lambda })\ge B(f,A,{\lambda }). \label{eq4.3}\end{aligned}$$ As we shall see later, we have equality in (\[eq4.3\]). We work out (\[eq4.2\]). Let ${\phi}$ be as in there and $E=\{{\mathcal{M}}{\phi}\ge{\lambda }\}$. Then $E$ is the almost disjoint union of elements $I_j$, $j=1,2,{\ldots}$, of ${\mathcal{T}}$. Indeed, we just need to consider those $I\in{\mathcal{T}}$ maximal under the condition $\frac{1}{{\mu }(I)}\int\limits_I{\phi}d{\mu }\ge{\lambda }$. For every $j$ we have that $$\int_{I_j}{\phi}d{\mu }\ge{\lambda }{\mu }(I_j).$$ Summing these inequalities over $j$ we get $$\begin{aligned} \int_E{\phi}d{\mu }\ge{\lambda }{\mu }(E). \label{eq4.4}\end{aligned}$$ We again consider the decreasing rearrangement ${\phi}^\ast:[0,1]{\rightarrow}{\mathbb{R}}^+$ of ${\phi}$. At this point we need a fact which is true on every non-atomic finite measure space and can be found in [@1]. Namely, for every ${\delta }\in[0,1]$ $$\int^{\delta }_0{\phi}^\ast(t)dt=\sup\bigg\{\int_K{\phi}d{\mu }:\ K \ \text{measurable subset of} \ X \ \text{such that} \ {\mu }(K)={\delta }\bigg\},$$ where the supremum is actually attained.
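The covering argument behind (\[eq4.4\]) can be illustrated concretely. The sketch below is an assumption-laden toy model, not the paper's setting: the general tree ${\mathcal{T}}$ is replaced by the dyadic subintervals of $[0,1]$, ${\phi}$ by a random step function, and all identifiers are ours. It selects the maximal dyadic intervals with average at least ${\lambda }$ and checks that their union $E$ satisfies $\int_E{\phi}\,d{\mu }\ge{\lambda }{\mu }(E)$.

```python
# Toy illustration of the covering argument: take T = dyadic subintervals of
# [0,1] and phi a random step function on n = 2^J equal cells. E is the union
# of the maximal dyadic intervals with average >= lam; summing the averages
# gives the analogue of (4.4):  integral of phi over E  >=  lam * mu(E).
import random
random.seed(1)
J = 10
n = 2 ** J
vals = [random.random() ** 2 for _ in range(n)]   # phi, constant on each cell
lam = 0.6

def avg(i, j):                   # average of phi over the cells i, ..., j-1
    return sum(vals[i:j]) / (j - i)

maximal = []                     # maximal dyadic intervals with average >= lam
stack = [(0, n)]
while stack:
    i, j = stack.pop()
    if avg(i, j) >= lam:
        maximal.append((i, j))   # maximal: every strict ancestor had average < lam
    elif j - i > 1:
        m = (i + j) // 2
        stack += [(i, m), (m, j)]

mu_E = sum(j - i for i, j in maximal) / n                 # mu(E)
int_E = sum(sum(vals[i:j]) for i, j in maximal) / n       # integral of phi over E
assert int_E >= lam * mu_E - 1e-12          # analogue of relation (4.4)
assert mu_E <= sum(vals) / n / lam + 1e-12  # the resulting weak-type bound
```

The selected intervals are pairwise disjoint, so both inequalities follow exactly as in the text.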
From this fact, in view of (\[eq4.4\]), we now get for $a={\mu }(E)$ that $\int\limits_0^a{\phi}^\ast(t)dt\ge{\lambda }a$, so if we define $$T(f,A,{\lambda })=\sup\left\{\begin{array}{ll} & \exists\; g:[0,1]{\rightarrow}{\mathbb{R}}^+ \ \ \text{decreasing} \\ a\in(0,1]: & \text{such that} \; \int^1_0g=f,\;\int^1_0g^q=A,\;g \le\psi\\ & \text{and} \; \int^a_0g\ge a{\lambda }\end{array} \right\}$$ we have that $$\begin{aligned} B_1(f,A,{\lambda })\le T(f,A,{\lambda }). \label{eq4.5}\end{aligned}$$ In fact, in relation (\[eq4.5\]) the converse inequality is also true. We state it as a \[lem4.1\] If $a\in(0,1]$ and $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ is decreasing with $$\int^a_0g(t)dt\ge a{\lambda },\;\int^1_0g(t)dt=f,\;\int^1_0g^q(t)dt=A, \ \ g\le\psi,$$ then there exists ${\phi}:(X,{\mu }){\rightarrow}{\mathbb{R}}^+$ measurable with $$\int_X{\phi}d{\mu }=f,\;\int_X{\phi}^qd{\mu }=A, \;\|{\phi}\|_{p,\infty}\le\frac{p-1}{p}$$ and with the additional property: $${\mu }(\{{\mathcal{M}}{\phi}\ge{\lambda }\})\ge a.$$ Indeed, from Lemma \[lem2.2\], setting $I=X$, we guarantee the existence of a sequence $(I_j)_j$ of pairwise almost disjoint elements of ${\mathcal{T}}$ in such a way that $$\begin{aligned} {\mu }(\bigcup_jI_j)=\sum{\mu }(I_j)=a. \label{eq4.7}\end{aligned}$$ We consider the measure space $([0,a],\;|\cdot|)$, where $|\cdot|$ is Lebesgue measure. Because $\int\limits^a_0g(t)dt\ge a{\lambda }$, applying Lemma \[lem2.1\] repeatedly we obtain the existence of a partition $S=\{A_j,\;j=1,2,{\ldots}\}$ of $[0,a]$, consisting of Lebesgue measurable subsets of $[0,a]$, such that $$\begin{aligned} |A_j|={\mu }(I_j) \ \ \text{and} \ \ \int_{A_j}g(t)dt\ge{\lambda }|A_j|. \label{eq4.8}\end{aligned}$$ For every $j=1,2,{\ldots}$ let now $g_j=(g|_{A_j})^\ast$, defined on $[0,|A_j|]$.
Since $(X,{\mu })$ is non-atomic and ${\mu }(I_j)=|A_j|$, we easily see that for every $j$ there exists ${\phi}_j:I_j{\rightarrow}{\mathbb{R}}^+$ measurable such that ${\phi}^\ast_j=g_j$. Additionally, let $g'=(g|_{(a,1]})^\ast$ and set $Y=X{\smallsetminus}\cup I_j$. Since ${\mu }(Y)=1-a$, for the same reasons we get a ${\phi}':Y{\rightarrow}{\mathbb{R}}^+$ such that $({\phi}')^\ast=g'$. Then, since the $I_j$ are pairwise almost disjoint, there exists a measurable function ${\phi}:X{\rightarrow}{\mathbb{R}}^+$ such that ${\phi}|_{I_j}={\phi}_j$ almost everywhere for every $j$ and ${\phi}|_Y={\phi}'$. Then it is easy to see that $${\phi}^\ast=g\le\psi, \ \ \int_{I_j}{\phi}d{\mu }=\int_{A_j}g\,dt\ge{\lambda }|A_j|={\lambda }{\mu }(I_j),$$ that is $$\frac{1}{{\mu }(I_j)}\int_{I_j}{\phi}d{\mu }\ge{\lambda }\ \ \text{for every} \ \ j=1,2,{\ldots}\;.$$ So $\{{\mathcal{M}}{\phi}\ge{\lambda }\}\supseteq \cup I_j$. As a consequence we get ${\mu }(\{{\mathcal{M}}{\phi}\ge{\lambda }\})\ge a$ and the lemma is proved. [$\quad\square$]{} Using now Lemma \[lem4.1\] we see that $$\begin{aligned} B_1(f,A,{\lambda })=T(f,A,{\lambda }). \label{eq4.9}\end{aligned}$$ In fact, (\[eq4.9\]) remains true even if we replace the inequality $$\int^a_0g(t)dt\ge a{\lambda }$$ in the definition of $T(f,A,{\lambda })$ by an equality, thus getting the function $S(f,A,{\lambda })$. We state this as \[rem4.1\] $B_1(f,A,{\lambda })=S(f,A,{\lambda })$ where $$S(f,A,{\lambda })=\sup\left\{\begin{array}{cl} & \exists\;g:[0,1]{\rightarrow}{\mathbb{R}}^+\;\text{decreasing such that} \\ a\in(0,1]: & \\ & \int^1_0g=f,\;\int^1_0g^q=A,\;g\le\psi\\ & \text{and} \; \int^a_0g(t)dt=a{\lambda }. \\ \end{array} \right\} \text{{$\quad\square$}}$$ This is true because if $g$ is as in the definition of $T(f,A,{\lambda })$, then of course $\int\limits^a_0g(t)dt\ge a{\lambda }$.
But then there exists ${\beta}\ge a$ such that $\int\limits^{\beta}_0g(t)dt={\beta}{\lambda }$, since ${\theta }:(0,1]{\rightarrow}{\mathbb{R}}^+$ defined by ${\theta }(t)=\frac{1}{t}\int\limits^t_0g(u)du$ is a decreasing function of $t$ ($g$ is decreasing). The remark then follows by applying Lemma \[lem4.1\] with ${\beta}$ in place of $a$. [$\quad\square$]{} We collect all the above as \[cor4.1\] For $(f,A)\in D$, $B_1(f,A,{\lambda })$ equals the supremum of all $a\in(0,1]$ for which there exist a decreasing $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ and $A_1,A_2\ge0$ such that $$\int^a_0g=f_1,\;\int^a_0g^q=A_1, \; \int^1_ag=f_2,\;\int^1_ag^q=A_2$$ and $g\le\psi$, where $A_1+A_2=A$, $f_1={\lambda }a$, $f_2=f-{\lambda }a$. [$\quad\square$]{} \[rem4.2\] Notice that we can drop the requirement that $g$ be decreasing in Corollary \[cor4.1\], since we can repeat the proof of Lemma \[lem4.1\] without this hypothesis. [$\quad\square$]{} Using now the techniques of Section 3 we can prove the following generalizations of Corollary \[cor3.1\]. \[prop4.1\] Let ${\alpha}\in(0,1]$ and let $f_1,A_1$ be positive numbers, where $f_1\le {\alpha}^{1-\frac{1}{p}}$. Then the following are equivalent: 1. $\exists\;g:[0,{\alpha}]{\rightarrow}{\mathbb{R}}^+$ Lebesgue measurable such that $g\le\psi$ on $[0,{\alpha}]$ and $$\int^{\alpha}_0g=f_1, \ \ \int^{\alpha}_0g^q=A_1.$$ 2. a\) If $0<f_1\le\dfrac{p-1}{p}{\alpha}^{1-\frac{1}{p}}$ then $\dfrac{f^q_1}{{\alpha}^{q-1}}\le A_1\le{{\varGamma}}f_1^{\frac{p-q}{p-1}}$.\ b) If $\dfrac{p-1}{p}{\alpha}^{1-\frac{1}{p}}\le f_1\le {\alpha}^{1-\frac{1}{p}}$ then $${{\varDelta}}_f({\alpha})\le A_1\le{{\varGamma}}f_1^{\frac{p-q}{p-1}}$$ where $${{\varDelta}}_f({\alpha})=\bigg(\frac{p-1}{p}\bigg)^q\frac{1}{p-q}\bigg\{p{\alpha}^{1-\frac{q}{p}}-q\Big[ p\Big({\alpha}^{1-\frac{1}{p}}-f_1\Big)\Big]^{\frac{p-q}{p-1}}\bigg\}.
\text{{$\quad\square$}}$$ \[prop4.2\] For ${\alpha}\in(0,1]$ and $f_2,A_2$ positive with $f_2\le1-{\alpha}^{1-\frac{1}{p}}$ the following are equivalent: 1. $\exists\;g:[{\alpha},1]{\rightarrow}{\mathbb{R}}^+$ Lebesgue measurable such that $$g\le\psi \ \ \text{on} \ \ [{\alpha},1] \ \ \text{and} \ \ \int^1_{\alpha}g=f_2,\;\int^1_{\alpha}g^q=A_2.$$ 2. a\) If $f_2\le(1-{\alpha})\dfrac{p-1}{p}$ then $\dfrac{f^q_2}{(1-{\alpha})^{q-1}}\le A_2\le E_{f_2}({\alpha})$ where $$E_{f_2}({\alpha})={{\varGamma}}\Big[\Big(f_2+{\alpha}^{1-\frac{1}{p}}\Big)^{\frac{p-q}{p-1}}-{\alpha}^{1-\frac{q}{p}}\Big].$$ b) If $(1-{\alpha})\dfrac{p-1}{p}\le f_2\le1-{\alpha}^{1-\frac{1}{p}}$ then $${{\varGamma}}_{f_2}({\alpha})\le A_2\le E_{f_2}({\alpha})$$ where $${{\varGamma}}_{f_2}({\alpha})=\bigg(\frac{p-1}{p}\bigg)^qc^{-q/p}(c-{\alpha})+{{\varGamma}}(1-c^{1-\frac{q}{p}})$$ and $c$ satisfies $$\begin{aligned} \frac{1}{p}c^{1-\frac{1}{p}}+\bigg(1-\frac{1}{p}\bigg){\alpha}c^{-1/p}=1-f_2. \text{{$\quad\square$}} \label{eq4.10}\end{aligned}$$ \[rem4.3\] Notice that since $(1-{\alpha})\frac{p-1}{p}\le f_2\le1-{\alpha}^{1-\frac{1}{p}}$, there exists a unique $c$ satisfying (\[eq4.10\]). [$\quad\square$]{} In light now of Corollary \[cor4.1\] and Proposition \[prop4.1\], for a fixed ${\lambda }>f$ we define the functions $T_{\lambda },S_{\lambda }:[0,1/{\lambda }^p]{\rightarrow}{\mathbb{R}}^+$ by $$T_{\lambda }({\alpha})=\left\{\begin{array}{lll} {\lambda }^{q}{\alpha}, & \text{for} & {\alpha}\le\Big[\frac{p-1}{p{\lambda }}\Big]^p \\ [1ex] {{\varDelta}}_f({\alpha}), & \text{for} & \Big[\frac{p-1}{p{\lambda }}\Big]^p<{\alpha}\le\frac{1}{{\lambda }^p} \end{array}\right\}$$ where $f_1={\lambda }{\alpha}$, and $S_{\lambda }({\alpha})={{\varGamma}}({\lambda }{\alpha})^{\frac{p-q}{p-1}}$. In light of Proposition \[prop4.2\] and Corollary \[cor4.1\] we also define $F_{\lambda },G_{\lambda }:[0,f/{\lambda }]{\rightarrow}{\mathbb{R}}^+$ for ${\alpha}$ such that $f-{\lambda }{\alpha}\le1-{\alpha}^{1-\frac{1}{p}}$.
1. If $0<f\le\dfrac{p-1}{p}$, then $$F_{\lambda }({\alpha})=\frac{(f-{\lambda }{\alpha})^q}{(1-{\alpha})^{q-1}} \ \ \text{and} \ \ G_{\lambda }({\alpha})=E_{f_2}({\alpha}).$$ 2. While if $\frac{p-1}{p}<f\le1$, $$F_{\lambda }({\alpha})=\left\{\begin{array}{lll} \frac{(f-{\lambda }{\alpha})^q}{(1-{\alpha})^{q-1}}, & \text{for} & \frac{f-\frac{p-1}{p}} {{\lambda }-\frac{p-1}{p}}\le{\alpha}\le\frac{f}{{\lambda }} \\ [1ex] {{\varGamma}}_{f_2}({\alpha}), & \text{for} & {\alpha}\le\frac{f-\frac{p-1}{p}}{{\lambda }-\frac{p-1}{p}} \end{array}\right\}$$ and $G_{\lambda }({\alpha})=E_{f_2}({\alpha})$, where $f_2=f-{\lambda }{\alpha}$. After giving the definitions of $T_{\lambda },S_{\lambda },F_{\lambda },G_{\lambda }$ we can rewrite Corollary \[cor4.1\] as \[cor4.2\] For $(f,A)\in D$ and ${\lambda }>f$, $B_1(f,A,{\lambda })$ equals the supremum of all ${\alpha}\in(0,1]$ such that ${\alpha}\le\min\{f/{\lambda },1/{\lambda }^p\}$ and $f-{\lambda }{\alpha}\le 1-{\alpha}^{1-\frac{1}{p}}$, for which there exist $A_1,A_2\ge0$ with $$\left.\begin{array}{l} T_{\lambda }({\alpha})\le A_1\le S_{\lambda }({\alpha}) \\ F_{\lambda }({\alpha})\le A_2\le G_{\lambda }({\alpha}) \end{array}\right\} \ \ \text{and} \ \ A=A_1+A_2.$$ \[rem4.4\] i) After stating Corollary \[cor4.2\] it is easy to see, in view of Propositions \[prop4.1\] and \[prop4.2\], that the supremum in (\[eq4.2\]) is actually a maximum. ii\) If ${\alpha}=B_1(f,A,{\lambda })$ then obviously ${\alpha}\le\frac{f}{{\lambda }}$, while also ${\alpha}\le\frac{1}{{\lambda }^p}$. This is true because, by i), there exists $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ such that $\int\limits^{\alpha}_0g={\alpha}{\lambda }$, $g\le\psi$, $\int\limits^1_0g=f$, $\int\limits^1_0g^q=A$. But then the first two relations easily give ${\alpha}\le\frac{1}{{\lambda }^p}$.
iii\) Notice also that, because of Propositions \[prop4.1\] and \[prop4.2\], $F_{\lambda }({\beta}),G_{\lambda }({\beta}),T_{\lambda }({\beta}),S_{\lambda }({\beta})$ have geometric interpretations as $L^q$-norms of essentially unique functions on the respective intervals, when ${\beta}\le\min\Big\{\frac{f}{{\lambda }},\frac{1}{{\lambda }^p}\Big\}$ and $f-{\lambda }{\beta}\le1-{\beta}^{1-\frac{1}{p}}$. We state now the following \[lem4.2\] For $(f,A)\in D$ such that ${\mathcal{A}}_f\lneq A$ and ${\alpha}({\lambda })=B_1(f,A,{\lambda })$, there exists ${\lambda }_1\ge\Big(\frac{1}{f}\Big)^{\frac{1}{p-1}}$ such that ${\alpha}({\lambda })=\frac{1}{{\lambda }^p}$ for every ${\lambda }\ge{\lambda }_1$. If ${\lambda }\ge\Big(\frac{1}{f}\Big)^{\frac{1}{p-1}}$ then $\frac{1}{{\lambda }^p}\le\frac{f}{{\lambda }}$. We consider the equation $$F_{\lambda }(1/{\lambda }^p)=A-\frac{{{\varGamma}}}{{\lambda }^{p-q}}.$$ We easily see that $$\lim_{{\lambda }{\rightarrow}\infty}F_{\lambda }(1/{\lambda }^p)={\mathcal{A}}_f<A=\lim_{{\lambda }{\rightarrow}+\infty}\bigg(A-\frac{{{\varGamma}}}{{\lambda }^{p-q}}\bigg).$$ For ${\lambda }={\lambda }_0=\Big(\frac{1}{f}\Big)^{\frac{1}{p-1}}$ we have that $$F_{{\lambda }_0}(1/{\lambda }^p_0)=F_{{\lambda }_0}\bigg(\frac{f}{{\lambda }_0}\bigg)=0\ge A-{{\varGamma}}f^{\frac{p-q}{p-1}}=A-\frac{{{\varGamma}}}{{\lambda }^{p-q}_0}.$$ So there exists ${\lambda }\ge{\lambda }_0$ such that $$F_{\lambda }(1/{\lambda }^p)=A-\frac{{{\varGamma}}}{{\lambda }^{p-q}}.$$ Let $${\lambda }_1=\inf\bigg\{{\lambda }\ge{\lambda }_0:F_{\lambda }(1/{\lambda }^p)=A-\frac{{{\varGamma}}}{{\lambda }^{p-q}}\bigg\},$$ which is obviously a minimum. Then $$\begin{aligned} F_{{\lambda }_1}\bigg(\frac{1}{{\lambda }^p_1}\bigg)=A-\frac{{{\varGamma}}}{{\lambda }_1^{p-q}}. \label{eq4.11}\end{aligned}$$ Consider the following function defined on $[0,1/{\lambda }^p_1]$: $g_1(t)=\psi(t)$, $0\le t\le1/{\lambda }^p_1$.
Applying Proposition \[prop4.2\] for ${\alpha}=\frac{1}{{\lambda }^p_1}$, $f_2=f-\frac{1}{{\lambda }^{p-1}_1}$, we obtain that there exists $g_2:\Big[\frac{1}{{\lambda }_1^p},1\Big]{\rightarrow}{\mathbb{R}}^+$ such that $$g_2\le\psi \ \ \text{on} \ \ \Big[\tfrac{1}{{\lambda }_1^p},1\Big], \; \int^1_{1/{\lambda }_1^p}g_2=f-\frac{1}{{\lambda }_1^{p-1}},\; \int^1_{1/{\lambda }_1^p}g_2^q=F_{{\lambda }_1}\bigg(\frac{1}{{\lambda }^p_1}\bigg).$$ But then, if $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ is defined by $g=\psi$ on $\Big[0,\frac{1}{{\lambda }_1^p}\Big]$ and $g=g_2$ on $\Big[\frac{1}{{\lambda }_1^p},1\Big]$, we have, because of (\[eq4.11\]), that $$\int^1_0g=f,\;\int^1_0g^q=A,\;g\le\psi,\;\int^{1/{\lambda }^p_1}_0g=\frac{1}{{\lambda }_1^{p-1}}=\frac{1} {{\lambda }_1^p}\cdot{\lambda }_1,$$ and according to Lemma \[lem4.1\] we have that ${\alpha}({\lambda }_1)=B_1(f,A,{\lambda }_1)\ge\frac{1}{{\lambda }^p_1}$. But of course ${\alpha}({\lambda }_1)\le\frac{1}{{\lambda }^p_1}$, so that ${\alpha}({\lambda }_1)=\frac{1}{{\lambda }_1^p}$. But then we easily see that ${\alpha}({\lambda })=\frac{1}{{\lambda }^p}$ for every ${\lambda }\ge{\lambda }_1$. This is true because $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ as above satisfies $$\begin{aligned} g\le\psi,\;\int^{1/{\lambda }^p}_0g=\frac{1}{{\lambda }^{p-1}}=\frac{1}{{\lambda }^p}\cdot{\lambda },\label{eq4.12}\end{aligned}$$ $\int\limits^1_0g=f$, $\int\limits^1_0g^q=A$. Applying now Lemma \[lem4.1\] we obtain the result, that is ${\alpha}({\lambda })=\frac{1}{{\lambda }^p}$ ${\forall}\;{\lambda }\ge{\lambda }_1$. [$\quad\square$]{} Let now ${\lambda }_2=\min\Big\{{\lambda }:{\alpha}({\lambda })=\frac{1}{{\lambda }^p}\Big\}$ and let ${\lambda }$ be such that ${\alpha}({\lambda })=\frac{1}{{\lambda }^p}$. Then $\frac{1}{{\lambda }^p}\le\frac{f}{{\lambda }}\Rightarrow{\lambda }\ge\Big(\frac{1}{f}\Big)^{\frac{1}{p-1}}={\lambda }_0$, so that ${\lambda }_2=\min\Big\{{\lambda }\ge{\lambda }_0:{\alpha}({\lambda })=\frac{1}{{\lambda }^p}\Big\}$.
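The threshold ${\lambda }_1$ of Lemma \[lem4.2\] can be located numerically. The sketch below is illustrative only: the sample parameters are ours, $F_{\lambda }$ is taken from the branch $0<f\le\frac{p-1}{p}$ of its definition, and the helper `Phi` is our name for the difference $F_{\lambda }(1/{\lambda }^p)+{{\varGamma}}/{\lambda }^{p-q}-A$, whose sign change on $[{\lambda }_0,\infty)$ is exactly what the proof exploits.

```python
# Locate lambda_1 of Lemma 4.2 numerically for sample parameters.
p, q = 3.0, 1.5
f, A = 0.5, 0.5          # f <= (p-1)/p, and A_f = f^q < A < Gamma f^{(p-q)/(p-1)}
Gamma = ((p - 1) / p) ** q * p / (p - q)

def Phi(lam):
    # F_lam(1/lam^p) + Gamma/lam^{p-q} - A, with f2 = f - 1/lam^{p-1};
    # the max() guards against a tiny negative f2 from rounding at lam = lam0.
    a = lam ** (-p)
    f2 = max(f - lam * a, 0.0)
    return f2 ** q / (1 - a) ** (q - 1) + Gamma / lam ** (p - q) - A

lam0 = (1 / f) ** (1 / (p - 1))
lo, hi = lam0, 100.0
assert Phi(lo) > 0 > Phi(hi)          # a sign change, as in the proof
for _ in range(200):
    mid = (lo + hi) / 2
    if Phi(mid) > 0:
        lo = mid
    else:
        hi = mid
lam1 = (lo + hi) / 2
assert lam1 > lam0 and abs(Phi(lam1)) < 1e-9
```

The bisection converges because `Phi` is continuous, positive at ${\lambda }_0$ (where $F_{{\lambda }_0}=0$) and negative for large ${\lambda }$ (where it tends to ${\mathcal{A}}_f-A<0$).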
Let ${\lambda }_1$ be as defined in Lemma \[lem4.2\]. Obviously ${\lambda }_1\ge{\lambda }_2$. We state now the following \[lem4.3\] If $(f,A)\in D$ and ${\lambda }>f$ are such that ${\alpha}={\alpha}({\lambda })=B_1(f,A,{\lambda })<\frac{1}{{\lambda }^p}$, then there exists $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ such that $g\le\psi$, $\int\limits^{\alpha}_0g={\alpha}{\lambda }$, $\int\limits^1_0g=f$, $\int\limits^1_0g^q=A$ and $\int\limits^1_{\alpha}g^q=A_2$, where $A_2=F_{\lambda }({\alpha})$. The existence of such a $g$ is guaranteed, but a priori only with some $A_2$ such that $F_{\lambda }({\alpha})\le A_2\le G_{\lambda }({\alpha})$. Suppose that $A_2>F_{\lambda }({\alpha})$. Then for a suitable ${\varepsilon }>0$, which will be chosen later, there is a $g_1:[0,1]{\rightarrow}{\mathbb{R}}^+$ such that $g_1\le\psi$, $\int\limits^1_0g_1=f$, $\int\limits^1_{\alpha}g_1^q=A_2-{\varepsilon }\ge F_{\lambda }({\alpha})$ and $g_1=g$ on $[0,{\alpha}]$. This is true because of Proposition \[prop4.2\]. Since now ${\alpha}({\lambda })<\frac{1}{{\lambda }^p}$, we have $g_1<\psi$ on a subset of $\Big[0,\frac{1}{{\lambda }^p}\Big]$ with positive measure; that is, there is space between $g_1$ and $\psi$ on $\Big[0,\frac{1}{{\lambda }^p}\Big]$. Indeed, if $g_1=\psi$ almost everywhere on $\Big[0,\frac{1}{{\lambda }^p}\Big]$ then we would have that $\int\limits^{1/{\lambda }^p}_0g_1=\int\limits^{1/{\lambda }^p}_0\psi=\frac{1}{{\lambda }^p}\cdot{\lambda }$, and so Lemma \[lem4.1\] would give ${\alpha}({\lambda })=\frac{1}{{\lambda }^p}$, which contradicts our assumption. So $$\bigg|\{g_1<\psi\}\cap\bigg[0,\frac{1}{{\lambda }^p}\bigg]\bigg|>0.$$ So, since $q>1$, we can increase $g_1$ to $g_2$ on $[0,1/{\lambda }^p]$ and decrease $g_1$ to $g_2$ on $[1/{\lambda }^p,1]$ in such a way that $\int\limits^1_0g_2=f$, $\int\limits^1_0g_2^q>\int\limits^1_0g_1^q=A-{\varepsilon }$, and there exists ${\beta}>{\alpha}$ such that $\int\limits^{\beta}_0g_2\ge{\beta}{\lambda }$.
Actually, if ${\varepsilon }>0$ is small enough, we can arrange everything so that $\int\limits^1_0g^q_2=A$. This gives $B_1(f,A,{\lambda })>{\alpha}$, which is a contradiction. So the lemma is proved. [$\quad\square$]{} Now let ${\lambda }_2$ be as before. Since ${\lambda }_2$ is the minimum positive ${\lambda }$ such that ${\alpha}({\lambda })=1/{\lambda }^p$, we have from Lemma \[lem4.3\], by continuity, that there exists $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ such that $$\int^{1/{\lambda }^p_2}_0g=\frac{1}{{\lambda }^{p-1}_2}=\frac{1}{{\lambda }^p_2}\cdot{\lambda }_2,\; \int^1_0g=f,\;\int\limits^1_0g^q=A,\;g\le\psi$$ and such that $$A_2=\int^1_{1/{\lambda }^p_2}g^q=F_{{\lambda }_2}({\alpha}),$$ where ${\alpha}={\alpha}({\lambda }_2)=\frac{1}{{\lambda }^p_2}$. But then $$T_{{\lambda }_2}({\alpha}({\lambda }_2))=S_{{\lambda }_2}({\alpha}({\lambda }_2))=\frac{{{\varGamma}}}{{\lambda }_2^{p-q}}.$$ Since $$T_{{\lambda }_2}({\alpha})\le A_1=\int^{1/{\lambda }^p_2}_0g^q\le S_{{\lambda }_2}({\alpha}),$$ we have that $$F_{{\lambda }_2}\bigg(\frac{1}{{\lambda }^p_2}\bigg)+\frac{{{\varGamma}}}{{\lambda }_2^{p-q}}=A,$$ that is ${\lambda }_1={\lambda }_2$. Let now ${\lambda }={\lambda }_1={\lambda }_2$. Then $$F_{\lambda }(1/{\lambda }^p)+\frac{{{\varGamma}}}{{\lambda }^{p-q}}=A,$$ and ${\lambda }\ge\Big(\frac{1}{f}\Big)^{\frac{1}{p-1}}$.
For ${\mu }>{\lambda }$ and ${\beta}=\frac{1}{{\mu }^p}$ we have that $$\frac{1}{{\mu }^p}\le\frac{f}{{\mu }} \ \ \text{and} \ \ f-{\mu }{\beta}\le1-{\beta}^{1-\frac{1}{p}}.$$ Then $F_{\mu }({\beta})=F_{\mu }(1/{\mu }^p)$ describes the minimum $L^q$-norm value of functions $g$ defined on $$\bigg[\frac{1}{{\mu }^p},1\bigg]=[{\beta},1] \ \ \text{for which} \ \ \int^1_{\beta}g=f-{\mu }{\beta}.$$ So $$F_{\mu }\bigg(\frac{1}{{\mu }^p}\bigg)+\frac{{{\varGamma}}}{{\mu }^{p-q}}=\int^1_0g^q_{\mu },$$ where $g_{\mu }$ is defined so that $$g_{\mu }=\psi \ \ \text{on} \ \ [0,{\beta}],\ \ \int^1_{\beta}g^q_{\mu }=F_{\mu }\bigg(\frac{1}{{\mu }^p}\bigg) \ \ \text{and} \ \ g_{\mu }\le\psi.$$ But then it is easy to see, because of the form of $g_{\mu }$, that $\int\limits^1_0g^q_{\mu }$ decreases as ${\mu }$ increases. So for every ${\mu }>{\lambda }$, $F_{\mu }\Big(\frac{1}{{\mu }^p}\Big)+\frac{{{\varGamma}}}{{\mu }^{p-q}}<A$. Summarizing all the above we obtain the following \[thm4.1\] If ${\alpha}=B_1(f,A,{\lambda })$ where $(f,A)\in D$ with ${\mathcal{A}}_f<A$, then: 1. ${\alpha}({\lambda })=\dfrac{1}{{\lambda }^p}$ for every ${\lambda }\ge{\lambda }_1$, where ${\lambda }_1$ is the unique root of the equation $$F_{\lambda }\bigg(\frac{1}{{\lambda }^p}\bigg)+\frac{{{\varGamma}}}{{\lambda }^{p-q}}=A \ \ \text{on the interval} \ \ \bigg(\bigg(\frac{1}{f}\bigg)^{\frac{1}{p-1}},+\infty\bigg).$$ 2. For every $f<{\lambda }<{\lambda }_1$, ${\alpha}$ equals the supremum of all ${\beta}$ such that ${\beta}\le\min\Big(\frac{f}{{\lambda }},\frac{1}{{\lambda }^p}\Big)$ and $f-{\lambda }{\beta}\le1-{\beta}^{1-1/p}$, for which $$T_{\lambda }({\beta})\le A-F_{\lambda }({\beta})\le S_{\lambda }({\beta}). \text{{$\quad\square$}}$$ We now analyze part (ii) of Theorem \[thm4.1\]. Let $f<{\lambda }<{\lambda }_1$, so that ${\alpha}={\alpha}({\lambda })=B_1(f,A,{\lambda })<\frac{1}{{\lambda }^p}$. Of course, we must also have that ${\alpha}\le\frac{f}{{\lambda }}$.
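The monotonicity used above, namely that ${\mu }\mapsto F_{\mu }(1/{\mu }^p)+{{\varGamma}}/{\mu }^{p-q}$ decreases, can also be probed numerically. The sketch below is illustrative only: it uses sample parameters with $0<f\le\frac{p-1}{p}$ (so that $F_{\mu }$ has the simple closed form of its first branch), and `Q` is our name for the quantity in question, evaluated on a grid of ${\mu }>{\lambda }_0$.

```python
# Probe the monotonicity of  mu -> F_mu(1/mu^p) + Gamma/mu^{p-q}  used above.
p, q, f = 3.0, 1.5, 0.5      # sample parameters with f <= (p-1)/p
Gamma = ((p - 1) / p) ** q * p / (p - q)

def Q(mu):
    # beta = 1/mu^p, f2 = f - mu*beta; max() guards a tiny negative f2 at mu = mu0
    a = mu ** (-p)
    f2 = max(f - mu * a, 0.0)
    return f2 ** q / (1 - a) ** (q - 1) + Gamma / mu ** (p - q)

mu0 = (1 / f) ** (1 / (p - 1))      # below mu0 the point 1/mu^p leaves [0, f/mu]
grid = [mu0 + 0.05 * k for k in range(1, 400)]
values = [Q(mu) for mu in grid]
assert all(x > y for x, y in zip(values, values[1:]))   # strictly decreasing
```

On this grid the quantity decreases strictly, consistent with the claim that it eventually drops below $A$ for ${\mu }>{\lambda }_1$.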
We now search for those ${\beta}\in\Big[0,\frac{1}{{\lambda }^p}\Big]$ such that $f-{\lambda }{\beta}\le1-{\beta}^{1-1/p}$. Consider $K$ defined on $\Big[0,\frac{1}{{\lambda }^p}\Big]$ by $K({\beta})=f-1+{\beta}^{1-\frac{1}{p}}-{\lambda }{\beta}$. Since $K'({\beta})=\frac{p-1}{p}{\beta}^{-1/p}-{\lambda }$, $K$ is increasing on $[0,{\beta}_0]$ and decreasing on $\Big[{\beta}_0,\frac{1}{{\lambda }^p}\Big]$, with maximum value at the point ${\beta}_0=\Big[\frac{p-1}{p{\lambda }}\Big]^p$. Then $$K({\beta}_0)=f-1+\bigg[\frac{p-1}{p{\lambda }}\bigg]^{p-1}\cdot\frac{1}{p},$$ which may be positive as well as negative. We first work in the case where $K({\beta}_0)>0$ and $\frac{p-1}{p}<f\le1$. From the above we have that there exist ${\beta}_1,{\beta}_2\le\frac{1}{{\lambda }^p}$ with ${\beta}_1<{\beta}_2$ such that $f-{\lambda }{\beta}_i=1-{\beta}_i^{1-\frac{1}{p}}$ for $i=1,2$, and for ${\beta}\le\frac{1}{{\lambda }^p}$ we have that $f-{\lambda }{\beta}\le1-{\beta}^{1-\frac{1}{p}}$ if and only if ${\beta}\in[0,{\beta}_1]\cup\Big[{\beta}_2,\frac{1}{{\lambda }^p}\Big]$. With the above hypothesis we prove the following \[lem4.4\] For $(f,A)$ such that $A>{\mathcal{A}}_f$ and $f<{\lambda }<{\lambda }_1$ we have that $${\alpha}=B_1(f,A,{\lambda })\in\bigg[{\beta}_2,\min\bigg\{\frac{f}{{\lambda }},\frac{1}{{\lambda }^p}\bigg\}\bigg].$$ Obviously, for ${\gamma }=\frac{f}{{\lambda }}$, $f-{\gamma }{\lambda }\le1-{\gamma }^{1-\frac{1}{p}}$, and ${\lambda }{\beta}_2\le{\beta}_2^{1-\frac{1}{p}}$, so by means of Proposition \[prop4.1\] there exists ${\phi}:[0,{\beta}_2]{\rightarrow}{\mathbb{R}}^+$ such that $$\int^{{\beta}_2}_0{\phi}={\lambda }{\beta}_2,\;{\phi}\le\psi,\;\int^{{\beta}_2}_0{\phi}^q=T_{\lambda }({\beta}_2).$$ Now since ${\beta}_2>{\beta}_0=\Big[\frac{p-1}{p{\lambda }}\Big]^p$, $T_{\lambda }({\beta}_2)={{\varDelta}}_f({\beta}_2)$. We now extend ${\phi}$ to $[0,1]$ by defining ${\phi}=\psi$ on $[{\beta}_2,1]$.
Then, since $f-{\lambda }{\beta}_2=1-{\beta}_2^{1-\frac{1}{p}}$, we have that $\int\limits^1_0{\phi}=f$. By the definition of ${\phi}$ and of ${{\varDelta}}_f({\beta}_2)$, the form of ${\phi}$ must be such that $\int\limits^1_0{\phi}^q={\mathcal{A}}_f$ (since $\int\limits^1_0{\phi}=f$). But we recall that $\int\limits^{{\beta}_2}_0{\phi}={\lambda }{\beta}_2$. Then, since $\int\limits^1_0{\phi}^q={\mathcal{A}}_f<A$, it is easy to construct a function $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ such that $g\le\psi$, $\int\limits^1_0g=f$, $\int\limits^1_0g^q=A$, which for some ${\gamma }>{\beta}_2$ satisfies $\int\limits^{\gamma }_0g={\gamma }{\lambda }$. But then, applying Lemma \[lem4.1\], $B_1(f,A,{\lambda })\ge{\gamma }>{\beta}_2$, that is ${\alpha}\in\Big[{\beta}_2,\min\Big\{\frac{f}{{\lambda }},\frac{1}{{\lambda }^p}\Big\}\Big]$, which is what we needed to prove. [$\quad\square$]{} Consider now the function $R_{\lambda }$ defined on $${{\varDelta}}=\bigg[{\beta}_2,\min\bigg\{\frac{f}{{\lambda }},\frac{1}{{\lambda }^p}\bigg\}\bigg]$$ by $R_{\lambda }({\beta})=F_{\lambda }({\beta})+S_{\lambda }({\beta})$, ${\beta}\in{{\varDelta}}$. Because of the definitions of $F_{\lambda }$ and $S_{\lambda }$, and having in mind their geometric interpretations, we easily see that $R_{\lambda }$ is increasing on ${{\varDelta}}$. In fact $S_{\lambda }({\alpha})$ represents the $L^q$-norm quantity of a function $g_1$ defined on $[0,{\alpha}]$ such that $\int\limits^{\alpha}_0g_1={\alpha}{\lambda }$ and $g_1=\bigg\{\begin{array}{ccc} \psi, & \text{on} & [0,c] \\ 0, & \text{on} & (c,{\alpha}] \end{array}$ for suitable $c$. Moreover, $F_{\lambda }({\alpha})$ represents the minimum $L^q$-norm quantity of all functions ${\phi}$ defined on $[{\alpha},1]$ such that $\int\limits^1_{\alpha}{\phi}=f-{\lambda }{\alpha}$.
Then there exists an essentially unique $g_2:[{\alpha},1]{\rightarrow}{\mathbb{R}}^+$ such that $\int\limits^1_{\alpha}g_2=f-{\lambda }{\alpha}$ and $F_{\lambda }({\alpha})=\int\limits^1_{\alpha}g^q_2$. We set $g=\bigg\{\begin{array}{ccc} g_1, & \text{on} & [0,{\alpha}] \\ g_2, & \text{on} & ({\alpha},1] \end{array}$. Increasing now ${\alpha}$, it is obvious that we increase $\int\limits^1_0g^q$, that is, $R_{\lambda }({\alpha})$ increases. Let now ${\alpha}=B_1(f,A,{\lambda })$; then, as we mentioned before, ${\alpha}\in{{\varDelta}}$, and of course $T_{\lambda }({\alpha})\le A-F_{\lambda }({\alpha})\le S_{\lambda }({\alpha})$ because of Lemma \[lem4.3\]. If ${\alpha}<\min\Big\{\frac{f}{{\lambda }},\frac{1}{{\lambda }^p}\Big\}$ and $F_{\lambda }({\alpha})+T_{\lambda }({\alpha})<A$, then, since $f-{\lambda }{\alpha}<1-{\alpha}^{1-\frac{1}{p}}$, there exists ${\gamma }$ with ${\alpha}<{\gamma }<\min\Big\{\frac{f}{{\lambda }},\frac{1}{{\lambda }^p}\Big\}$ such that $F_{\lambda }({\gamma })+T_{\lambda }({\gamma })<A$, and of course $$R_{\lambda }({\gamma })=F_{\lambda }({\gamma })+S_{\lambda }({\gamma })\ge A \ \ (R_{\lambda }\;\text{is increasing)}.$$ That is, $$T_{\lambda }({\gamma })\le A-F_{\lambda }({\gamma })\le S_{\lambda }({\gamma }).$$ Then Corollary \[cor4.2\] gives $B_1(f,A,{\lambda })\ge{\gamma }>{\alpha}$, a contradiction. That is, if ${\alpha}=B_1(f,A,{\lambda })<\min\Big\{\frac{f}{{\lambda }},\frac{1}{{\lambda }^p}\Big\}$, we must have that $F_{\lambda }({\alpha})+T_{\lambda }({\alpha})=A$. Consider now ${\lambda }_0=\Big(\frac{1}{f}\Big)^{\frac{1}{p-1}}$ and the function $h:E=\Big[f,\Big(\frac{1}{f}\Big)^{\frac{1}{p-1}}\Big]{\rightarrow}{\mathbb{R}}^+$ defined by $h({\lambda })=T_{\lambda }(f/{\lambda })$. Notice that for $f\le{\lambda }\le\Big(\frac{1}{f}\Big)^{\frac{1}{p-1}}$ we have that $\frac{f}{{\lambda }}\le\frac{1}{{\lambda }^p}$, so this definition makes sense.
Then $$h(f)=T_f(1)={\mathcal{A}}_f<A \ \ \text{and} \ \ h({\lambda }_0)={{\varGamma}}\frac{1}{{\lambda }^{p-q}_0}= {{\varGamma}}f^{\frac{p-q}{p-1}}\ge A.$$ Again, having in mind the geometric interpretation of $T_{\lambda }({\alpha})$, it is easy to see that $h$ is strictly increasing on $E$. So there exists a unique ${\lambda }_3\in E$ such that $T_{{\lambda }_3}(f/{\lambda }_3)=A$. Now for $f<{\lambda }\le{\lambda }_3$, $$T_{\lambda }(f/{\lambda })\le A=A-F_{\lambda }(f/{\lambda })\le{{\varGamma}}f^{\frac{p-q}{p-1}}=S_{\lambda }(f/{\lambda }),$$ that is, in view of Corollary \[cor4.2\], $B_1(f,A,{\lambda })=\frac{f}{{\lambda }}$. For ${\lambda }_3<{\lambda }<{\lambda }_1$ we obviously have that $$B_1(f,A,{\lambda })=\max\{{\alpha}\in{{\varDelta}}:F_{\lambda }({\alpha})+T_{\lambda }({\alpha})=A\}.$$ So we have found $B_1(f,A,{\lambda })$ in the case where $f<{\lambda }<{\lambda }_1$, $K({\beta}_0)>0$ and $\frac{p-1}{p}<f\le1$. The case $K({\beta}_0)=0$ is worked out in the same way, with ${\beta}_2$ replaced by ${\beta}_0$, while the case $K({\beta}_0)<0$ is worked out for $${{\varDelta}}=\bigg[0,\min\bigg\{\frac{f}{{\lambda }},\frac{1}{{\lambda }^p}\bigg\}\bigg].$$ Analogous results are obtained when $0<f\le\frac{p-1}{p}$, where $${{\varDelta}}=\bigg[0,\min\bigg\{\frac{f}{{\lambda }},\frac{1}{{\lambda }^p}\bigg\}\bigg],$$ since then $$f-{\lambda }{\beta}\le\frac{p-1}{p}(1-{\beta})\le1-{\beta}^{1-\frac{1}{p}} \ \ \text{for every} \ \ {\beta}\le\frac{1}{{\lambda }^p}.$$ We state all the above results in the following \[thm4.2\] If $(f,A)\in D$ and $A>{\mathcal{A}}_f$, then $B_1(f,A,{\lambda })$ is given by $$B_1(f,A,{\lambda })=\left\{\begin{array}{ll} 1, & 0<{\lambda }\le f \\ \frac{f}{{\lambda }}, & f<{\lambda }\le{\lambda }_3 \\ {\delta }, & {\lambda }_3<{\lambda }\le{\lambda }_1 \\ \frac{1}{{\lambda }^p}, & {\lambda }_1\le{\lambda }\end{array}\right.$$ where $${\delta }=\max\{{\gamma }\in{{\varDelta}}:F_{\lambda }({\gamma })+T_{\lambda }({\gamma })=A\}.
\text{{$\quad\square$}}$$ That is, we have found sharp estimates for the localized distribution function of ${\mathcal{M}}{\phi}$, given the $L^1$- and $L^q$-norms and the weak quasi-norm $\|\cdot\|_{p,\infty}$ of ${\phi}$, for $1<q<p$, as variables. \[rem4.5\] i\) The case where $A={\mathcal{A}}_f$ can be worked out separately, because there exists an essentially unique function $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ such that $\int\limits^1_0g=f$, $\int\limits^1_0g^q=A$, $g\le\psi$. ii\) We have that $B(f,A,{\lambda })=B_1(f,A,{\lambda })$ for $A\neq f^q$, as mentioned in the beginning of this section. This is true of course for ${\lambda }\ge{\lambda }_1$, that is for ${\lambda }$ such that ${\alpha}({\lambda })=B_1(f,A,{\lambda })=\frac{1}{{\lambda }^p}$. Now for ${\lambda }<{\lambda }_1$ let ${\alpha}=B_1(f,A,{\lambda })$. Then there exists $g:[0,1]{\rightarrow}{\mathbb{R}}^+$ such that $\int\limits^1_0g=f$, $\int\limits^1_0g^q=A$, $\int\limits^{\alpha}_0g={\alpha}{\lambda }$, $g\le\psi$. Then it is easy to see that for every ${\varepsilon }>0$ small enough we can change $g$ to $g_{\varepsilon }$ in such a way that $$\int^{{\alpha}-{\varepsilon }}_0g_{\varepsilon }\ge({\alpha}-{\varepsilon }){\lambda }, \ \ \int^1_0g_{\varepsilon }=f, \ \ \int^1_0g^q_{\varepsilon }=A+{\delta }_{\varepsilon }, \ \ \|g_{\varepsilon }\|_{p,\infty}=\frac{p-1}{p},$$ with ${\delta }_{\varepsilon }{\rightarrow}0$ as ${\varepsilon }{\rightarrow}0^+$. Using continuity arguments, this gives $B(f,A,{\lambda })={\alpha}$. iii\) In the statement of Theorem \[thm4.2\] it is not difficult to see (by some tedious calculations, according to the way that $F_{\lambda }$, $T_{\lambda }$ are defined) that for the range ${\lambda }_3<{\lambda }<{\lambda }_1$ there is in fact a unique ${\gamma }\in{{\varDelta}}$ such that $F_{\lambda }({\gamma })+T_{\lambda }({\gamma })=A$, because $F_{\lambda }+T_{\lambda }$ is increasing on ${{\varDelta}}$.
iv) Notice the continuity, at the point ${\lambda }={\lambda }_1$, of the function computed in Theorem \[thm4.2\]. As a matter of fact, ${\delta }$ is such that $F_{{\lambda }_1}({\delta })+T_{{\lambda }_1}({\delta })=A$. But ${\lambda }_1$ is such that $$F_{{\lambda }_1}\bigg(\frac{1}{{\lambda }_1^p}\bigg)+\frac{{{\varGamma}}}{{\lambda }_1^{p-q}}=A, \ \ \text{and} \ \ \frac{{{\varGamma}}}{{\lambda }^{p-q}_1}=S_{{\lambda }_1}\bigg(\frac{1}{{\lambda }_1^p}\bigg)=T_{{\lambda }_1} \bigg(\frac{1}{{\lambda }^p_1}\bigg).$$ So that $$F_{{\lambda }_1}({\delta })+T_{{\lambda }_1}({\delta })=F_{{\lambda }_1}\bigg(\frac{1}{{\lambda }^p_1}\bigg)+T_{{\lambda }_1} \bigg(\frac{1}{{\lambda }_1^p}\bigg)=A,$$ which, in view of Remark iii) above, gives ${\delta }=\frac{1}{{\lambda }^p_1}$. [$\quad\square$]{} We now state the following immediate corollary of Theorem \[thm4.2\]. \[thm4.3\] For $(f,A)\in D$, ${\lambda }>f$, $F=\frac{p-1}{p}$ and $A\gneq{\mathcal{A}}_f$ the following holds: $$\sup\bigg\{\|{\mathcal{M}}{\phi}\|_{p,\infty}:{\phi}\ge0,\;\int_X{\phi}d{\mu }=f,\;\int_X{\phi}^qd{\mu }=A, \;\|{\phi}\|_{p,\infty}=F\bigg\}=\frac{p}{p-1}F,$$ with the supremum attained. That is, (\[eq1.7\]) is best possible for every value of the $L^1$- and $L^q$-norms. [$\quad\square$]{}
--- abstract: 'The quality of Cadmium Zinc Telluride (CZT) detectors is steadily improving. For state of the art detectors, readout noise is thus becoming an increasingly important factor for the overall energy resolution. In this contribution, we present measurements and calculations of the dark currents and capacitances of 0.5 cm thick CZT detectors contacted with a monolithic cathode and 8$\times$8 anode pixels on a surface of 2$\times$2 cm$^2$. Using the NCI ASIC from Brookhaven National Laboratory as an example, we estimate the readout noise caused by the dark currents and capacitances. Furthermore, we discuss possible additional readout noise caused by pixel-pixel and pixel-cathode noise cross-coupling.' author: - 'Alfred Garson III$^{1}$, Qiang Li$^{1}$, Ira V. Jung$^{2}$, Paul Dowkontt$^{1}$, Richard Bose$^{1}$, Garry Simburger$^{1}$, and Henric Krawczynski$^{1}$[^1] [^2]' title: Leakage Currents and Capacitances of Thick CZT Detectors --- CZT, electronic noise, radiation detection. Introduction ============ There are multiple applications for the room-temperature semiconductor Cadmium Zinc Telluride (CZT), ranging from medical imaging and homeland security to astroparticle physics experiments. The high efficiency and good spectral and spatial resolution of CZT make it an attractive material for detecting and measuring photons in the energy range from a few keV to a few MeV. As the fractional yield of high-quality crystals increases (and the cost is reduced), CZT will become even more prolific in radiation detection systems. Limits on the performance of CZT detector systems depend on characteristics of both the detector and the readout electronics. State-of-the-art CZT detectors combine excellent homogeneity over typical volumes between 0.5$\times$2$\times$2 cm$^3$ and 1.5$\times$2$\times$2 cm$^3$ with high electron $\mu\tau$-products on the order of $10^{-2}$ cm$^2$ V$^{-1}$.
As the best thick CZT detectors now achieve 662 keV energy resolutions better than 1% FWHM (full width at half maximum), low-noise readout becomes increasingly more important. In the following, we will present leakage current and capacitance measurements performed on CZT detectors from the company Orbotech Medical Solutions [@Orb]. Orbotech uses the Modified Horizontal Bridgman process to grow the CZT substrates. The process gives substrates with excellent homogeneity, but a somewhat low bulk resistivity of 10$^9$ $\Omega$ cm. In earlier work, several groups, including ourselves, have shown that pixel-cathode dark currents can be suppressed efficiently by contacting the substrates with high-work-function cathodes [@Ira; @Jaesub]. We are currently testing Orbotech detectors with a wide range of thicknesses and with a range of pixel pitches (see Fig. 1, and Qiang et al., 2007). In this contribution, we present measurements of the dark currents and capacitances of an Orbotech CZT detector (0.5$\times$2$\times$2 cm$^3$, 8$\times$8 pixels, 2.4 mm pitch, 1.6 mm pixel side length), and discuss the resulting readout noise. In Sect. 2, the ASIC used as a benchmark for noise calculations is described, and the noise model parameters are given. The results of dark current and capacitance measurements are described in Sect. 3. In Sect. 4 the resulting noise is estimated, and in Sect. 5 pixel-pixel and pixel-cathode noise cross-coupling is discussed. In Sect. 6, the results are summarized. Noise Model =========== As a reference for our noise calculations, we use the “NCI ASIC” developed by Brookhaven National Laboratory and the Naval Research Laboratory for the readout of Si strip detectors (De Geronimo et al., 2007). The self-triggering ASIC comprises 32 channels. Each front-end channel provides low-noise charge amplification for pulses of selectable polarity, shaping with a stabilized baseline, adjustable discrimination, and peak detection with an analog memory.
The channels can process events simultaneously, and the readout is sparsified. The ASIC requires 5 mW of power per channel. ![Orbotech Cadmium Zinc Telluride (CZT) detectors. From left to right, the detectors have volumes of 1$\times$2$\times$2 cm$^3$, 0.75$\times$2$\times$2 cm$^3$, 0.5$\times$2$\times$2 cm$^3$, and 0.2$\times$2$\times$2 cm$^3$.[]{data-label="CZTs"}](CZTs){width="2.5in"} We use the following noise model to calculate the equivalent noise charge (ENC): $$ENC^{2} = \left[A_{1}\frac{1}{\tau_{P}}\frac{4kT}{g_{m}}+A_{3}\frac{K_{f}}{C_{G}}\right](C_{G}+C_{D})^{2} + A_{2}\tau_{P}2q(I_{L}+I_{RST})$$ where $A_{1}$, $A_{2}$, and $A_{3}$ characterize the pulse shaping filter, $\tau_{P}$ is the pulse peaking time, $C_{D}$ and $C_{G}$ are the detector and MOSFET capacitances, respectively, $g_{m}$ is the MOSFET transconductance, $K_{f}$ is the 1/f noise coefficient, $I_{L}$ is the detector leakage current, and $I_{RST}$ is the parallel noise of the reset system (De Geronimo et al., 2002). For a given detector ($C_{D},I_{L}$) and ASIC ($A_{1-3},C_{G},g_{m},K_{f}$), $\tau_{P}$ can be optimized to minimize the ENC. For the NCI ASIC, we use [@Ger; @Gianluigi-Detailed-Noise-Discussion]: $A_{1}$=0.89, $A_{2}$=$A_{3}$=0.52, $K_{f}$=$10^{-24}$, $C_{G}$=6pF, $g_{m}$=8mS, and $I_{RST}$=50pA. Dark Current and Capacitance in Orbotech CZT Detectors ====================================================== ![Current-voltage measurements of 0.5 cm thick CZT detectors with various cathode contact materials. For all materials, the leakage current is $<$0.2 nA/pixel at biases up to -1500 Volts.[]{data-label="IV"}](IV2){width="4.0in"} Figure \[IV\] shows the IV curves for one 2$\times$2$\times$0.5 cm$^3$ Orbotech CZT detector, for different cathode contact materials. The preferred cathode material is Au, as Au cathodes give leakage currents $<$0.2 nA/pixel at a cathode bias voltage of -1500 Volts, and give slightly better spectroscopic performance than other cathode materials.
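With the NCI ASIC parameters of Sect. 2 and the measured leakage currents, Equation (1) can be evaluated directly. The sketch below is purely illustrative: it assumes T = 300 K (not stated in the text), takes $I_L$ = 0.2 nA/pixel and $C_D$ = 1 pF as representative detector values, and minimizes over a simple grid of peaking times; it lands at the $\sim$0.1% FWHM level quoted later in Sect. 4.

```python
import math

# NCI ASIC noise-model parameters from Sect. 2
A1, A2, A3 = 0.89, 0.52, 0.52
Kf, CG, gm = 1e-24, 6e-12, 8e-3                  # 1/f coefficient, MOSFET capacitance [F], transconductance [S]
IRST = 50e-12                                    # reset-system parallel noise [A]
k, q, T = 1.380649e-23, 1.602176634e-19, 300.0   # assumed operating temperature T = 300 K

def enc_electrons(tau_p, CD, IL):
    """Equivalent noise charge of Eq. (1), in rms electrons."""
    C = CG + CD
    enc2 = (A1 / tau_p * 4 * k * T / gm + A3 * Kf / CG) * C**2 \
           + A2 * tau_p * 2 * q * (IL + IRST)
    return math.sqrt(enc2) / q

# representative detector values: CD ~ 1 pF, IL ~ 0.2 nA/pixel (Sect. 3)
CD, IL = 1e-12, 0.2e-9
# minimize over peaking times from 0.1 to 10 microseconds
enc = min(enc_electrons(10**(x / 100), CD, IL) for x in range(-700, -500))
fwhm_pct = 2.355 * enc * 4.64 / 662e3 * 100      # 4.64 eV per e-h pair, 662 keV line
print(f"ENC ~ {enc:.0f} e rms -> {fwhm_pct:.2f}% FWHM at 662 keV")
```

The optimum peaking time balances the series term ($\propto 1/\tau_P$) against the parallel term ($\propto \tau_P$); for these inputs it falls near 1.5 $\mu$s.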
We used a commercial capacitance meter to measure the capacitance between all pixels and the cathode. The measurement set-up is shown in Fig. \[Cap2\]. High-voltage blocking capacitors were used to protect the LCR meter from the detector bias voltage. A low pass filter was used to isolate the LCR-meter-detector circuit from the high voltage supply at the kHz frequencies used by the LCR meter. Largely independent of bias voltage, we measure a capacitance of 9 pF for all 64 pixels, corresponding to a pixel-cathode capacitance of 0.14 pF per pixel. The measured result agrees well with a simple estimate of the anode-to-cathode capacitance: using a dielectric constant of $\epsilon=$ 10 [@Spp], a parallel plate capacitor with the same dimensions as our detector has a capacitance of 8 pF. The measurements of the pixel-pixel capacitances resulted in upper limits of $<$1 pF. For inner pixels (non-border pixels) we estimated the pixel-pixel capacitances with the same 3-D Laplace solver that we use to model the response of CZT detectors of our own fabrication [@Ira]. The code determines the potential inside a large grounded box that houses the detector. The capacitance between one pixel and its neighbors is determined by setting the voltage of the one pixel to $\Delta V=$ 1 V while keeping the other pixels and the cathode at ground potential. The charge $\Delta Q$ on the biased pixel and on the neighboring pixels is determined with the help of Gauss’ law and appropriate Gaussian surfaces. The procedure gives the capacitances $C=\Delta Q/\Delta V$. We obtain a next-neighbor capacitance of 0.06 pF, and a diagonal-neighbor capacitance of 0.02 pF. ![Circuit diagram for the pixel-cathode capacitance measurements. Blocking capacitors are used to protect the LCR meter from the detector bias voltage. A low pass filter is used to decouple the detector from the high voltage supply at the frequencies used by the LCR meter.
The set-up makes it possible to measure the capacitance as a function of bias voltage. We used cathode biases down to -2000 V.[]{data-label="Cap2"}](Cap2){width="2.0in"} Equivalent Noise Charge of CZT ASIC Readout Electronics ======================================================= With the previous results, we can now evaluate Equation (1). In the context of 662 keV energy depositions (assuming 4.64 eV per electron-hole pair generation, [@Spp]), Fig. \[results\] plots the FWHM contribution (red line) of the readout electronics’ ENC as a function of dark current $I_{L}$ (top) and pixel capacitance $C_{D}$ (bottom). For the upper plot, $C_{D}$ is held constant at 1.0 pF, and for the lower plot $I_{L}$ is fixed at 1 nA. In both panels, the green line marks a readout noise contribution of 0.25% FWHM to the 662 keV energy resolution. At the $\lesssim$ 0.25% level, the contribution of the readout noise to the detector energy resolution is negligible. For the specific ASIC considered here, we see that leakage currents up to $\sim$3 nA and pixel capacitances up to $\sim$10 pF are acceptable. The leads between the readout ASIC and the detector should be sufficiently short to stay below 10 pF. ![The two panels show the electronic readout noise contribution to the total FWHM energy resolution as a function of detector dark current (top, red line) and detector capacitance (bottom, red line). The calculations are made for the NCI ASIC. A detector capacitance of 1 pF was assumed for the top plot. The electronic noise is constant ($\sim$0.1$\%$ FWHM) for leakage currents $<$0.2 nA/pixel.
The bottom figure assumes a leakage current of 1 nA, and the resulting electronic noise is constant ($\sim$0.15$\%$ FWHM) for detector capacitances smaller than 2 pF/pixel.[]{data-label="results"}](resultsii){width="3.5in"} Pixel-Pixel and Pixel-Cathode Noise Cross-Coupling ================================================== In this section we consider possible additional noise contributions arising from the capacitive coupling between adjacent pixels and between a pixel and the cathode. The capacitive coupling can result in amplifier noise from one channel being injected into the other channel. In the following we use the terminology of Spieler (2005), and assume that all pixels and the cathode are read out by identical preamplifiers. We first consider pixel-pixel noise cross-coupling (compare Fig. 5), taking into account the coupling between a pixel, its four nearest neighbors, and the cathode. The output noise voltage ($\nu_{n0}$) of an amplifier creates a noise current, $i_{n}$. $$i_{n} = \frac{\nu_{n0}}{\frac{1}{\omega C_{f}}+\frac{1}{\omega \left(4C_{SS}+C_{B}\right)}}$$ Here, $C_{SS}$, $C_{B}$, and $C_{f}$ are the pixel-pixel capacitance, the pixel-cathode capacitance, and the amplifier capacitance, respectively. The current is divided among the capacitively coupled channels in proportion to the coupling capacitance. The fraction of $i_{n}$ going to a nearest neighbor is $$\eta_{nn} = \left(4 + \frac{C_{B}}{C_{SS}}\right)^{-1} \approx \frac{1}{6}.$$ Adding the additional noise from the four nearest neighbors in quadrature, we find that the pixel-pixel crosstalk will increase the electronic noise by $$\sqrt{4}\,\nu_{nn} = 2\frac{\eta_{nn}i_{n}}{\omega C_{f}} \approx 8\%\,\nu_{n0}.$$ In most applications, one reads out the pixels [*and*]{} the cathode. For single-pixel events, the pixel-to-cathode signal ratio can be used to correct the anode signal for the depth of the interaction.
For multiple-pixel events, the time offset between the cathode signal and the pixel signals can be used to perform a proper depth-of-interaction correction for each individual pixel. The pixel-cathode noise cross-coupling can be more significant. The equivalent noise charge from cathode noise being injected into pixels, $Q_{CP}$, depends on the number of pixels ($n_{pix}$) and the ratio of the feedback capacitance to the detector capacitance [@Spp]: $$Q_{CP} = \frac{Q_{n0}}{1 + n_{pix}\frac{C_{d}}{C_{f}}}.$$ Here $n_{pix}$=64 is the number of pixels, $C_{d}$= 7 pF is the capacitance between the cathode and all the pixels, and $C_{f}$=50 fF is the preamplifier feedback capacitance. With these values, the cathode noise can increase the readout noise of the anode channels by 68%. Summary ======= We measured pixel-cathode dark currents and pixel-cathode capacitances, both as a function of detector bias voltage. The measurements give dark currents well below a nA per pixel, and a pixel-cathode capacitance of 0.14 pF per pixel. ![The diagram illustrates the cross-coupling between readout channels of an ASIC. The noise voltage, $\nu_{n0}$, at the output of the measurement pixel’s amplifier (center channel) sees an infinite resistance at the amplifier input. The resulting noise current, $i_{n}$, is divided into the cathode and nearest-neighbor pixel channels in proportion to their capacitances. []{data-label="Spieler"}](circuit2){width="3.5in"} The pixel-pixel capacitances were smaller than the accuracy of our measurements, and we determined them with the help of a 3D Laplace solver. We find that the pixel-pixel capacitance is 0.06 pF for direct neighbors and 0.02 pF for diagonal neighbors. For a state-of-the-art ASIC such as the NCI ASIC used as a benchmark here, the noise model predicts a very low level of readout noise.
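The capacitance figures quoted above can be cross-checked with a few lines of arithmetic. The sketch below is a rough back-of-the-envelope estimate, not the 3D Laplace computation: it reproduces the parallel-plate value of Sect. 3 and the nearest-neighbor coupling fraction $\eta_{nn} = 1/(4 + C_B/C_{SS}) \approx 1/6$ discussed in Sect. 5.

```python
# Parallel-plate estimate of the total anode-cathode capacitance (Sect. 3):
# 2 x 2 cm^2 area, 0.5 cm thickness, dielectric constant ~10 for CZT.
EPS0 = 8.854e-12                       # vacuum permittivity [F/m]
eps_r = 10.0
area, d = 0.02 * 0.02, 0.005
C_total = EPS0 * eps_r * area / d      # ~7-8 pF, close to the measured 9 pF
C_pixel = C_total / 64                 # ~0.11 pF, same order as the measured 0.14 pF/pixel

# Nearest-neighbor coupling fraction (Sect. 5), using the simulated/measured
# capacitances C_SS = 0.06 pF (pixel-pixel) and C_B = 0.14 pF (pixel-cathode):
C_SS, C_B = 0.06e-12, 0.14e-12
eta_nn = 1.0 / (4.0 + C_B / C_SS)      # ~1/6
print(f"C_total = {C_total*1e12:.1f} pF, C_pixel = {C_pixel*1e12:.2f} pF, eta_nn = {eta_nn:.3f}")
```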
With these nominal capacitance values, pixel-pixel noise cross-coupling is a minor effect, but cathode-pixel noise cross-coupling can be significant. In practice, the readout noise will be higher owing to additional stray capacitances from the detector mounting and the readout leads, and to pick-up noise. For the design of a readout system, short leads and a proper choice of the detector mounting board substrate are thus of utmost importance. Acknowledgments {#acknowledgments .unnumbered} =============== We gratefully acknowledge Gianluigi De Geronimo and Paul O’Connor for information concerning the NCI ASIC. This work is supported by NASA under contract NNX07AH37G, and by the DHS under contract 2007DN077ER0002. [1]{} Orbotech Medical Solutions Ltd., 10 Plaut St., Park Rabin, P.O. Box 2489, Rehovot, Israel, 76124. I.V. Jung, A. Garson III, H. Krawczynski, A. Burger, B. Groza, 28, 397 (2007) \[arXiv: 0710.2655\]. Hong, J., et al., SPIE Conference Proceedings “Hard X-Ray and Gamma-Ray Detector Physics IX”, 6706-10 (2007), in press \[arXiv:0709.2719\]. H. Spieler, *Semiconductor Detector Systems*, Oxford University Press (2005). G. De Geronimo, P. O’Connor, A. Kandasamy, J. Grosholz, Proc. SPIE 4784 (2002). [^1]: $^{1}$Washington University in St. Louis, 1 Brookings Drive, CB 1105, St. Louis, MO 63130, $^{2}$Friedrich-Alexander Universität Erlangen-Nürnberg [^2]: A. Garson: agarson3@hbar.wustl.edu
--- author: - Klaus Hornberger title: | Monitoring approach to open quantum dynamics\ using scattering theory --- Introduction ============ A truism of quantum physics says that no system is perfectly isolated, and it is therefore not surprising that the study of open quantum systems is a ubiquitous theme of present day quantum mechanics, see [[@Breuer2002a; @Carmichael1993a; @Gardiner2000a; @Weiss1999a]]{} and refs. therein. An important class of evolution equations for open systems are Markovian master equations. They imply that environmental correlations disperse fast, so that on a coarse-grained timescale the temporal change of the system state $\rho$ depends on the present state of the system, but not on its history. From the strict point of view of an operationalist (who dismisses the notion of a “system” altogether and takes $\rho$ as describing an equivalence class of preparations in the lab [[@Kraus1983a]]{}) one may even argue that any valid differential equation for the time evolution of $\rho$ must generate a completely positive dynamical semigroup [[@Alicki1987a]]{} and must hence be Markovian. Putting the pros and cons of Markovian vs. non-Markovian formulations aside, it is fair to say that a large class of open quantum systems is described appropriately by time-local master equations. At the same time, it is curious that the Markov property does not emerge naturally in standard microscopic derivations. Rather, one has to impose it “by hand”, usually by interpreting some quantities as correlation functions, which must then be assumed to be $\delta$-correlated. This may still be transparent in weak coupling calculations such as the Bloch-Redfield approach [[@Breuer2002a]]{}, but tends to be awkward if a non-perturbative treatment of the interaction with the environment is needed.
In the present letter I would like to motivate and exemplify a general method of obtaining master equations which do incorporate the microscopic interactions in a non-perturbative fashion. It differs from the standard approaches in that it takes the Markov assumption not as an approximation in the course of the calculation, but as a premise, implemented before tracing out the environment. It will be applicable whenever the interaction with the environment can reasonably be described in terms of individual interaction events or “collisions”, that is, if one can take the environment as consisting of independent (quasi)-particles which probe the system one at a time, in the sense that both the rate and the effect of an individual collision are separately physically meaningful and can be formulated microscopically. One may then implement the Markov requirement right from the outset by disregarding the change of the environmental state after each collision. This will be justified if the environment is sufficiently large and stationary, and in particular if many different environmental (quasi)-particles are involved, so that each has much time to carry away and disperse its correlation with the system. It is clear that the apparatus of time-dependent scattering theory [[@Taylor1972a]]{} is predestined for this type of description. Its microscopically defined S-matrix maps from the incoming to the exact outgoing asymptotes of the system-environment state without a temporal evolution, and a partial trace over the scattered environment yields the system state after a single collision. One would like to write the temporal change of the system as the rate of collisions multiplied by the change due to an individual scattering. The great difficulty with this is that, in general, the rate also depends on the system state, so that a naive implementation would yield a nonlinear equation.
Below, I describe how this is circumvented by applying the concept of generalized and continuous measurements. The use and strength of the method is then demonstrated by deriving the master equation for the internal quantum dynamics of an immobile system affected by a gaseous environment in terms of the multichannel scattering amplitudes. Monitoring approach =================== My first aim is to argue that the system $\rho$ evolves as $\partial_t \rho = \left( i \hbar \right)^{- 1} \left[ \mathsf{H}, \rho \right] + \mathcal{L} \rho$ with $$\begin{aligned} \mathcal{L} \rho & = & \frac{i}{2} {\ensuremath{\operatorname{Tr}}}_{{\ensuremath{\operatorname{env}}}} \left( \left[ \mathsf{T} + \mathsf{T}^{\dag}, \Gamma^{1 / 2} \left[ \rho \otimes \rho_{{\ensuremath{\operatorname{env}}}} \right] \Gamma^{1 / 2} \right] \right) \nonumber\\ & & + {\ensuremath{\operatorname{Tr}}}_{{\ensuremath{\operatorname{env}}}} \left( \mathsf{T} \Gamma^{1 / 2} \left[ \rho \otimes \rho_{{\ensuremath{\operatorname{env}}}} \right] \Gamma^{1 / 2} \mathsf{T}^{\dag} \right) \nonumber\\ & & - \frac{1}{2} {\ensuremath{\operatorname{Tr}}}_{{\ensuremath{\operatorname{env}}}} \left( \Gamma^{1 / 2} \mathsf{T}^{\dag} \mathsf{T} \Gamma^{1 / 2} \left[ \rho \otimes \rho_{{\ensuremath{\operatorname{env}}}} \right] \right) \nonumber\\ & & - \frac{1}{2} {\ensuremath{\operatorname{Tr}}}_{{\ensuremath{\operatorname{env}}}} \left( \left[ \rho \otimes \rho_{{\ensuremath{\operatorname{env}}}} \right] \Gamma^{1 / 2} \mathsf{T}^{\dag} \mathsf{T} \Gamma^{1 / 2} \right) . \label{eq:me1}\end{aligned}$$ Here $\mathsf{H}$ is the Hamiltonian of the isolated system and $\rho_{{\ensuremath{\operatorname{env}}}}$ the reduced single-particle state of the environment. The operator $\mathsf{T}$ is the nontrivial part of the two-particle S-matrix $\mathsf{S} = \mathsf{I} + i \mathsf{T}$ describing the effect of a single collision between environmental particle and system. 
The rate of collisions is described by $\Gamma$, a positive operator in the total Hilbert space, which determines the probability for a collision to occur in a small time interval $\Delta t$, $$\begin{aligned} {\ensuremath{\operatorname{Prob}}} \left( \text{C}_{\Delta t} | \rho \right) & = & \Delta t {\ensuremath{\operatorname{tr}}} \left( \Gamma \left[ \rho \otimes \rho_{{\ensuremath{\operatorname{env}}}} \right] \right) . \label{eq:probscat}\end{aligned}$$ Like the S-matrix, the operator $\Gamma$ can in principle be characterized operationally in independent experiments. Its microscopic formulation will in general involve a total scattering cross section and the current density operator of the relative motion (see below). To motivate the time evolution (\[eq:me1\]), we picture the environment as monitoring the system continuously by sending probe particles which scatter off the system at random times. The state-dependent collision rate can now be incorporated into the dynamical description by assuming that the system is encased by a hypothetical, minimally invasive detector with time resolution $\Delta t$. It tells at any instant whether a probe particle has passed by and is going to scatter off the system. The important point to note is that the information that a collision will take place changes our description of the impinging two-particle state. According to the theory of generalized measurements [[@Kraus1983a; @Busch1991a; @Jacobs2007a]]{} the new state is the normalized image of a norm-decreasing completely positive map $\mathcal{M} (\cdot | \text{C}_{\Delta t})$ in the total Hilbert space satisfying ${\ensuremath{\operatorname{tr}}} \left( \mathcal{M} (\varrho | \text{C}_{\Delta t}) \right) = \Delta t {\ensuremath{\operatorname{tr}}} \left( \Gamma \varrho \right)$.
For an efficient [[@efficientmeas]]{} and minimally-invasive detector it has the form $$\begin{aligned} \mathcal{M} (\varrho | \text{C}_{\Delta t}) & = & \Delta t \Gamma^{1 / 2} \varrho \Gamma^{1 / 2} . \label{eq:mestra}\end{aligned}$$ The significance of this measurement transformation is to imprint our improved knowledge about the incoming two-particle wave packet, and it may be viewed as enhancing those parts which head towards a collision. In principle, an efficient measurement (which introduces no classical noise by mapping pure states to pure states) may be given by a more general operator, $\mathcal{M} (\varrho | \text{C}_{\Delta t}) = \mathsf{M}_{\Delta t} \varrho \mathsf{M}_{\Delta t}^{\dag}$ as long as it satisfies $\mathsf{M}_{\Delta t}^{\dag} \mathsf{M}_{\Delta t} = \Delta t \Gamma$. The above ‘minimally-invasive’ choice of $\mathsf{M}_{\Delta t}$ is reasonable because a possible unitary part $\mathsf{U}_{\Delta t}$ in its general polar decomposition $\mathsf{M}_{\Delta t} = \mathsf{U}_{\Delta t} \Gamma^{1 / 2} \sqrt{\Delta t}$ would describe a reversible “back action” which has no physical justification in our case of a thought measurement invoked only to account for the state dependence of collision probabilities. Also the absence of a detection event during $\Delta t$ changes the state. The corresponding complementary map $\mathcal{M} (\cdot | \overline{\text{C}}_{\Delta t})$ satisfies ${\ensuremath{\operatorname{tr}}} \left( \mathcal{M} (\varrho | \overline{\text{C}}_{\Delta t}) \right) = 1 - \Delta t {\ensuremath{\operatorname{tr}}} \left( \Gamma \varrho \right)$ and the Kraus representation with time-invariant operators reads $\mathcal{M} (\varrho | \overline{\text{C}}_{\Delta t}) = \varrho - \Gamma^{1 / 2} \varrho \Gamma^{1 / 2} \Delta t$. We can now form the unconditioned system-probe state after time $\Delta t$ by allowing for the fact that the detection outcomes are not really available. 
Thus, the infinitesimally evolved state is given by the mixture of the colliding state transformed by the S-matrix and the untransformed non-colliding one, weighted with the respective probabilities, $$\begin{aligned} \varrho' \left( \Delta t \right) & = & {\ensuremath{\operatorname{Prob}}} \left( \text{C}_{\Delta t} | \rho \right) \mathsf{S} \frac{\mathcal{M} (\varrho | \text{C}_{\Delta t})}{{\ensuremath{\operatorname{tr}}} \left( \mathcal{M} (\varrho | \text{C}_{\Delta t}) \right)} \mathsf{S}^{\dag} \nonumber\\ & & + {\ensuremath{\operatorname{Prob}}} \left( \overline{\text{C}}_{\Delta t} | \rho \right) \frac{\mathcal{M} (\varrho | \overline{\text{C}}_{\Delta t})}{{\ensuremath{\operatorname{tr}}} \left( \mathcal{M} (\varrho | \overline{\text{C}}_{\Delta t}) \right)} \nonumber\\ & = & \mathsf{S} \Gamma^{1 / 2} \varrho \Gamma^{1 / 2} \mathsf{S}^{\dag} \Delta t + \varrho - \Gamma^{1 / 2} \varrho \Gamma^{1 / 2} \Delta t. \nonumber\end{aligned}$$ Using the unitarity of $\mathsf{S}$, which implies $i \left( \mathsf{T} - \mathsf{T}^{\dag} \right) = - \mathsf{T}^{\dag} \mathsf{T}$, the differential quotient can be written as $$\begin{aligned} \frac{\varrho_{}' \left( \Delta t \right) - \varrho}{\Delta t} = & \mathsf{T} \mathsf{\Gamma}^{1 / 2} \varrho \mathsf{\Gamma}^{1 / 2} \mathsf{T}^{\dag} - \frac{1}{2} \mathsf{T}^{\dag} \mathsf{T} \mathsf{\Gamma}^{1 / 2} \varrho \mathsf{\Gamma}^{1 / 2} \nonumber\\ & - \frac{1}{2} \mathsf{\Gamma}^{1 / 2} \varrho \mathsf{\Gamma}^{1 / 2} \mathsf{T}^{\dag} \mathsf{T} + i \left[ {\ensuremath{\operatorname{Re}}} \left( \mathsf{T} \right), \mathsf{\Gamma}^{1 / 2} \varrho \mathsf{\Gamma}^{1 / 2} \right] . \nonumber\end{aligned}$$ One arrives at (\[eq:me1\]) by tracing out the environment with $\varrho = \rho \otimes \rho_{{\ensuremath{\operatorname{env}}}}$, taking the limit of continuous monitoring $\Delta t \rightarrow 0$, and adding the generator of the free system evolution. 
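Both steps of this short calculation are easy to confirm numerically. The following sketch is purely illustrative (a random unitary $\mathsf{S}$ and a random positive $\Gamma$ on a toy 4-dimensional Hilbert space, not tied to any physical system): it checks that the monitored step $\varrho'(\Delta t)$ is trace preserving, and that the unitarity of $\mathsf{S}$ turns it into the stated generator with $\mathsf{T} = -i(\mathsf{S} - \mathsf{I})$.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 4  # toy total Hilbert space (system x environment)

def random_complex(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# random unitary S = I + iT via QR decomposition
S, _ = np.linalg.qr(random_complex(d))
T = -1j * (S - np.eye(d))

# random positive rate operator Gamma and its square root
B = random_complex(d)
Gamma = B @ B.conj().T
w, V = np.linalg.eigh(Gamma)
Gh = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

# random density operator (a product state rho (x) rho_env is a special case)
R = random_complex(d)
rho = R @ R.conj().T
rho /= np.trace(rho).real

dt = 1e-3
rho_dt = S @ Gh @ rho @ Gh @ S.conj().T * dt + rho - Gh @ rho @ Gh * dt
assert abs(np.trace(rho_dt) - 1) < 1e-10        # monitored step is trace preserving

# generator form obtained from i(T - T^dag) = -T^dag T
X = Gh @ rho @ Gh
ReT = (T + T.conj().T) / 2
gen = (T @ X @ T.conj().T
       - 0.5 * (T.conj().T @ T @ X + X @ T.conj().T @ T)
       + 1j * (ReT @ X - X @ ReT))
assert np.allclose((rho_dt - rho) / dt, gen)
```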
Thus, the collision rate with its state dependence is incorporated by the operators $\Gamma^{1 / 2}$, and they may be thought of, in a stochastic unravelling of the master equation [[@Carmichael1993a; @Gardiner1992a; @Molmer1993a; @Wiseman1996a; @Brun2002a; @Cresser2006a]]{}, as serving to weight each trajectory with the rate before it scatters. The operators $\mathsf{T}$ describe the individual microscopic interaction process [*without approximation*]{}. Note also that (\[eq:me1\]) generates a dynamical semigroup by construction, since $\mathcal{M} (\cdot | \text{C}_{\Delta t})$ and $\mathcal{M} (\cdot | \overline{\text{C}}_{\Delta t})$ are completely positive. To judge whether the trace in (\[eq:me1\]) yields a useful master equation one has to specify system and environment. A first application of this general equation can already be found in the recent Ref. [[@Hornberger2006b]]{}, where it is used to describe the motion of a distinguished, freely moving point particle in the presence of a gas. The above discussion thus serves to complete the derivation in [[@Hornberger2006b]]{}, where a quantum version of the linear Boltzmann equation was obtained which displays all expected limiting properties. In the following, I will demonstrate the use and generality of eq. (\[eq:me1\]) by posing a complementary question, namely, how the [*internal*]{} dynamics of an immobile system is affected by an environment of structureless gas particles. Application to an immobile system ================================= If the motional system degrees of freedom are disregarded, a single-particle S-matrix can be used to describe the (in general inelastic) interaction with the environmental particles. The resulting master equation should describe non-perturbatively both the coherent and the incoherent processes induced by this coupling. An example would be the collisional decay of molecular eigenstates into chiral configurations, or the phonon-induced decoherence of a quantum dot.
For concreteness, the environment is assumed to be an ideal Maxwell gas of density $n_{{\ensuremath{\operatorname{gas}}}}$, atomic mass $m$, and single particle state ${\rho_{{\ensuremath{\operatorname{env}}}} = \left( \lambda_{{\ensuremath{\operatorname{th}}}}^3 / \Omega \right)^{} \exp \left( - \beta \mathsf{p}^2 / 2 m \right)}$ with $\mathsf{p}$ the momentum operator, $\lambda_{{\ensuremath{\operatorname{th}}}} = \hbar \sqrt{2 \pi \beta / m}$ the thermal wave length, and $\Omega$ the normalization volume. In the language of scattering theory the free energy eigenstates of the non-motional degrees of freedom are called channels. In our case, they form a discrete basis of the system Hilbert space, and $| \alpha \rangle$ will be used to indicate internal (and possibly rotational), non-degenerate system eigenstates of energy $E_{\alpha}$. In this channel basis, $\rho_{\alpha \beta} = \langle \alpha | \rho | \beta \rangle$, the equation of motion (\[eq:me1\]) takes on the form of a general master equation of Lindblad type, $$\begin{aligned} \partial_t \rho_{\alpha \beta} = & \frac{E_{\alpha} + \varepsilon_{\alpha} - E_{\beta} - \varepsilon_{\beta}}{i \hbar} \rho_{\alpha \beta} + \sum_{\alpha_0 \beta_0} \rho_{\alpha_0 \beta_0} {\,}M_{\alpha \beta}^{\alpha_0 \beta_0} \label{eq:master}\\ & - \frac{1}{2} \sum_{\alpha_0 } \rho_{\alpha_0 \beta_{}} {\,}{\,}\sum_{\gamma} M_{\gamma \gamma}^{\alpha_0 \alpha} - \frac{1}{2} \sum_{\beta_0} \rho_{\alpha \beta_0} {\,}\sum_{\gamma} M_{\gamma \gamma}^{\beta \beta_0} \nonumber\end{aligned}$$ with energy shifts $\varepsilon_{\alpha}$ discussed below and rate coefficients $$\begin{aligned} M_{\alpha \beta}^{\alpha_0 \beta_0} = & \langle \alpha | {\ensuremath{\operatorname{Tr}}}_{{\ensuremath{\operatorname{env}}}} \left( \mathsf{T} \Gamma^{1 / 2} \left[ | \alpha_0 \rangle \langle \beta_0 | \otimes \rho_{{\ensuremath{\operatorname{env}}}} \right] \Gamma^{1 / 2} \mathsf{T}^{\dag} \right) | \beta \rangle . 
\nonumber\\ & \label{eq:M}\end{aligned}$$ To calculate these complex quantities we need to specify the rate operator $\Gamma$. In the present case it is naturally given in terms of the current density operator $\mathsf{j} = n_{{\ensuremath{\operatorname{gas}}}} \mathsf{p} / m$ of the impinging gas particles and the channel-specific total scattering cross sections $\sigma \left( {\ensuremath{\boldsymbol{p}}}, \alpha \right)$, $$\begin{aligned} \Gamma & = & \sum_{\alpha} | \alpha \rangle \langle \alpha | \otimes n_{{\ensuremath{\operatorname{gas}}}} \frac{\left| \mathsf{p} \right|}{m} \sigma \left( \mathsf{p}, \alpha \right) . \label{eq:Gamma}\\ & & \nonumber\end{aligned}$$ Defining the channel operator $\mathsf{c} = \sum_{\alpha} \alpha | \alpha \rangle \langle \alpha |$ one can thus write $\Gamma = | \mathsf{j} | \sigma \left( \mathsf{p}, \mathsf{c} \right)$. In principle, $\Gamma$ must also involve a projection to the subspace of incoming wave packets, attributing zero collision probability to any wave packet located far off the scattering center and travelling away from it. This is important because such an outgoing state will not remain invariant under $\mathsf{S}$. \[It may be strongly transformed since the definition of $\mathsf{S}$ involves a backward evolution.\] In practice, the microscopic definition of $\Gamma$ is easier if one takes care of the projection separately. This is easily done if $\rho_{{\ensuremath{\operatorname{env}}}}$ admits a convex decomposition into incoming and outgoing states. Alternatively, one may dispense with the projection by modifying the definition of $\mathsf{S}$ so that outgoing wave packets are kept invariant (see below). Let us now evaluate the rate coefficients $M_{\alpha \beta}^{\alpha_0 \beta_0}$ by using a decomposition of $\rho_{{\ensuremath{\operatorname{env}}}}$ that permits a separation of in- and out-going wave packets.
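Before doing so, a quick sanity check on the rate operator (\[eq:Gamma\]): for a momentum- and channel-independent cross section its thermal expectation must reduce to the textbook collision rate $n_{{\ensuremath{\operatorname{gas}}}}\,\sigma\,\bar{v}$ with the Maxwell mean speed $\bar{v} = \sqrt{8/(\pi \beta m)}$. A Monte Carlo sketch in arbitrary units ($k_B = 1$; all parameter values illustrative):

```python
import numpy as np

# thermal average of Gamma = n_gas * |p|/m * sigma for constant sigma
rng = np.random.default_rng(0)
m, beta = 1.0, 2.0
n_gas, sigma = 1.0, 1.0

# Maxwell-Boltzmann momenta: each Cartesian component Gaussian with variance m/beta
p = rng.normal(scale=np.sqrt(m / beta), size=(10**6, 3))
rate_mc = n_gas * sigma * np.mean(np.linalg.norm(p, axis=1)) / m

# analytic mean speed of a Maxwell gas
rate_analytic = n_gas * sigma * np.sqrt(8 / (np.pi * beta * m))
print(rate_mc, rate_analytic)
```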
As shown in [[@Hornberger2003b]]{} the thermal gas state can be written as a phase space integration over projectors onto minimum uncertainty Gaussian states $| \psi_{{\ensuremath{\boldsymbol{r}}}_0 {\ensuremath{\boldsymbol{p}}}_0} \rangle = \bar{\lambda}_{{\ensuremath{\operatorname{th}}}}^{3 / 2} \exp \left( - \bar{\beta} \left( \mathsf{p} -{\ensuremath{\boldsymbol{p}}}_0 \right)^2 / 4 m \right) |{\ensuremath{\boldsymbol{r}}}_0 \rangle$ whose spatial extension $\bar{\lambda}_{{\ensuremath{\operatorname{th}}}} = \hbar \sqrt{2 \pi \bar{\beta} / m}$ is determined by an inverse temperature $\bar{\beta} > \beta$, $$\begin{aligned} \rho_{{\ensuremath{\operatorname{env}}}} & = & {\,}\int {\mathrm{d}}{\ensuremath{\boldsymbol{p}}}_0 \hat{\mu} \left( {\ensuremath{\boldsymbol{p}}}_0 \right) \int_{\Omega} \frac{{\mathrm{d}}{\ensuremath{\boldsymbol{r}}}_0}{\Omega} {\,}| \psi_{{\ensuremath{\boldsymbol{r}}}_0 {\ensuremath{\boldsymbol{p}}}_0} \rangle \langle \psi_{{\ensuremath{\boldsymbol{r}}}_0 {\ensuremath{\boldsymbol{p}}}_0} |. \label{eq:rhops}\end{aligned}$$ Here $\hat{\mu} \left( {\ensuremath{\boldsymbol{p}}}_0 \right) = (2 \pi m / \hat{\beta})^{- 3 / 2} \exp (- \hat{\beta} {\ensuremath{\boldsymbol{p}}}_0^2 / 2 m)$ is the Maxwell-Boltzmann distribution corresponding to the temperature $\hat{\beta}^{- 1} = \beta^{- 1} - \bar{\beta}^{- 1}$, so that by choosing $\bar{\beta}$ one splits the gas temperature $\beta^{- 1}$ into a part determining the localization of the $| \psi_{{\ensuremath{\boldsymbol{r}}}_0 {\ensuremath{\boldsymbol{p}}}_0} \rangle$ and a part characterizing their motion. We choose $\bar{\beta}$ large and eventually take the limit $\bar{\beta} \rightarrow \infty$, $\hat{\beta} \rightarrow \beta$ of very extended wave packets so that $\hat{\mu}$ approaches the original Maxwell-Boltzmann distribution $\mu$.
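The temperature split $\hat{\beta}^{-1} = \beta^{-1} - \bar{\beta}^{-1}$ can be checked numerically: drawing wave-packet centers from $\hat{\mu}$ and momenta from the Gaussian packet distribution must reproduce Maxwell-Boltzmann statistics at the full temperature $\beta^{-1}$. The sketch below does this for one momentum component; all parameter values are illustrative model units:

```python
import numpy as np

# Monte Carlo check of the convex decomposition (rhops):
# centers p0 ~ mu_hat (inverse temperature beta_hat), packet momenta
# p | p0 ~ Gaussian of variance m/beta_bar.  The marginal of p must be
# Maxwell-Boltzmann at the full inverse temperature beta.
m, beta, beta_bar = 1.0, 2.0, 10.0          # model units, not physical values
beta_hat = 1.0 / (1.0 / beta - 1.0 / beta_bar)

rng = np.random.default_rng(1)
n = 400_000
p0 = rng.normal(scale=np.sqrt(m / beta_hat), size=n)   # centers ~ mu_hat
p = rng.normal(loc=p0, scale=np.sqrt(m / beta_bar))    # |<p|psi_{r0 p0}>|^2

print(np.var(p), m / beta)    # both close to 0.5
```

In the limit $\bar{\beta} \rightarrow \infty$ the packet width vanishes and $\hat{\beta} \rightarrow \beta$, so the sampled distribution reduces directly to the original $\mu$.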
Inserting (\[eq:rhops\]) into (\[eq:M\]) yields $$\begin{aligned} M_{\alpha \beta}^{\alpha_0 \beta_0} & = & \int {\mathrm{d}}{\ensuremath{\boldsymbol{p}}}_0 \hat{\mu} \left( {\ensuremath{\boldsymbol{p}}}_0 \right) \int_{\Omega} \frac{{\mathrm{d}}{\ensuremath{\boldsymbol{r}}}_0}{\Omega} {\,}m_{\alpha \beta}^{\alpha_0 \beta_0} \left( {\ensuremath{\boldsymbol{r}}}_0, {\ensuremath{\boldsymbol{p}}}_0 \right) . \label{eq:M1}\end{aligned}$$ Here the phase space function $$\begin{aligned} m_{\alpha \beta}^{\alpha_0 \beta_0} \left( {\ensuremath{\boldsymbol{r}}}_0, {\ensuremath{\boldsymbol{p}}}_0 \right) & {:=}& \int {\mathrm{d}}{\ensuremath{\boldsymbol{p}}} {\,}\langle \alpha | \langle {\ensuremath{\boldsymbol{p}}}| \mathsf{T} \Gamma_{}^{1 / 2} | \alpha_0 \rangle | \psi_{{\ensuremath{\boldsymbol{r}}}_0 {\ensuremath{\boldsymbol{p}}}_0} \rangle \nonumber\\ & & \times \langle \beta_0 | \langle \psi_{{\ensuremath{\boldsymbol{r}}}_0 {\ensuremath{\boldsymbol{p}}}_0} | \Gamma_{}^{1 / 2} \mathsf{T^{\dag}} | \beta \rangle |{\ensuremath{\boldsymbol{p}}} \rangle \label{eq:m1}\end{aligned}$$ gives the contribution of different phase space regions to the rate coefficient $M_{\alpha \beta}^{\alpha_0 \beta_0}$. This now makes it possible to restrict the calculation to incoming wave packets. Since the $m_{\alpha \beta}^{\alpha_0 \beta_0}$ are averaged over all available positions in (\[eq:M1\]), it is natural to confine this spatial average at fixed ${\ensuremath{\boldsymbol{p}}}_0$ to a cylinder pointing in the direction of ${\ensuremath{\boldsymbol{p}}}_0$, whose longitudinal support $\Lambda_{{\ensuremath{\boldsymbol{p}}}_0}$ vanishes at outgoing positions and whose transverse base area is given by an average cross section $\Sigma_{{\ensuremath{\boldsymbol{p}}}_0}$.
In terms of the longitudinal and transverse positions ${\ensuremath{\boldsymbol{r}}}_{\|{\ensuremath{\boldsymbol{p}}}_0} {:=}\left( {\ensuremath{\boldsymbol{r}}} \cdot {\ensuremath{\boldsymbol{p}}}_0 \right) {\ensuremath{\boldsymbol{p}}}_0 / p_0^2$ and ${\ensuremath{\boldsymbol{r}}}_{\bot {\ensuremath{\boldsymbol{p}}}_0} ={\ensuremath{\boldsymbol{r}}}-{\ensuremath{\boldsymbol{r}}}_{\|{\ensuremath{\boldsymbol{p}}}_0}$ we have $$\begin{aligned} M_{\alpha \beta}^{\alpha_0 \beta_0} & = & \int {\mathrm{d}}{\ensuremath{\boldsymbol{p}}}_0 \, \hat{\mu} \left( {\ensuremath{\boldsymbol{p}}}_0 \right) \int_{\Lambda_{{\ensuremath{\boldsymbol{p}}}_0}} \frac{{\mathrm{d}}{\ensuremath{\boldsymbol{r}}}_{\|{\ensuremath{\boldsymbol{p}}}_0}}{\Lambda_{{\ensuremath{\boldsymbol{p}}}_0}} {\,}{\,}\int_{\Sigma_{{\ensuremath{\boldsymbol{p}}}_0}} \frac{{\mathrm{d}}{\ensuremath{\boldsymbol{r}}}_{\bot {\ensuremath{\boldsymbol{p}}}_0}}{\Sigma_{{\ensuremath{\boldsymbol{p}}}_0}} \nonumber\\ & & \times m_{\alpha \beta}^{\alpha_0 \beta_0} \left( {\ensuremath{\boldsymbol{r}}}_{\|{\ensuremath{\boldsymbol{p}}}_0} +{\ensuremath{\boldsymbol{r}}}_{\bot {\ensuremath{\boldsymbol{p}}}_0}, {\ensuremath{\boldsymbol{p}}}_0 \right) . \label{eq:M2}\end{aligned}$$ In order to evaluate $m_{\alpha \beta}^{\alpha_0 \beta_0}$, insert momentum resolutions of unity between the $\mathsf{T}$ and $\Gamma$ operators in (\[eq:m1\]) and use the representation [[@Taylor1972a]]{} $$\begin{aligned} \langle \alpha_f | \langle {\ensuremath{\boldsymbol{p}}}_f | \mathsf{T} | \alpha_i \rangle |{\ensuremath{\boldsymbol{p}}} \text{$_i$} \rangle = & \frac{f_{\alpha_f \alpha_i} \left( {\ensuremath{\boldsymbol{p}}}_f, {\ensuremath{\boldsymbol{p}}}_i \right)}{2 \pi \hbar m}\, \delta \left( E_{p_f \alpha_f} - E_{p_i \alpha_i} \right) \nonumber\\ & \label{eq:Trep}\end{aligned}$$ in terms of the multi-channel scattering amplitude and total energies $E_{p \alpha_{}} = p^2 / 2 m + E_{\alpha}$. 
By transforming the new integration variables to midpoints and chords one obtains a Gaussian function which approaches, for large $\bar{\beta}$, a $\delta$-function in the midpoints. Integrating out the latter, one finds that the combination of the $\delta$-functions from (\[eq:Trep\]) confines the chord integration to a plane perpendicular to ${\ensuremath{\boldsymbol{p}}}_0$. Integrating out the parallel component leads to the factor ${\exp \left( - \bar{\beta} m \left( E_{\alpha} - E_{\alpha_0} - E_{\beta} + E_{\beta_0} \right)^2 / 8 p_0^2 \right)}$ which, again for large $\bar{\beta}$, can be replaced by $$\begin{aligned} \chi_{\alpha \beta}^{\alpha_0 \beta_0} & {:=}& \left\{ \begin{array}{ll} 1 & \text{if $E_{\alpha} - E_{\alpha_0} = E_{\beta} - E_{\beta_0}$}\\ 0 & \text{otherwise} . \end{array} \right. \nonumber\end{aligned}$$ The resulting expression is independent of ${\ensuremath{\boldsymbol{r}}}_{\|{\ensuremath{\boldsymbol{p}}}_0}$, $$\begin{aligned} m_{\alpha \beta}^{\alpha_0 \beta_0} \left( {\ensuremath{\boldsymbol{r}}}_0, {\ensuremath{\boldsymbol{p}}}_0 \right) = & \chi_{\alpha \beta}^{\alpha_0 \beta_0} \frac{n_{{\ensuremath{\operatorname{gas}}}}}{m^2} \int {\mathrm{d}}{\ensuremath{\boldsymbol{p}}} {\,}{\,}\int \frac{{\mathrm{d}}\tilde{{\ensuremath{\boldsymbol{p}}}}_{\bot {\ensuremath{\boldsymbol{p}}}_0}}{\left( 2 \pi \hbar \right)^2} \nonumber\\ & \times \exp \left( - \bar{\beta} \frac{\tilde{{\ensuremath{\boldsymbol{p}}}}_{\bot {\ensuremath{\boldsymbol{p}}}_0}^2}{8 m} - i \frac{{\ensuremath{\boldsymbol{r}}}_{0, \bot {\ensuremath{\boldsymbol{p}}}_0} \cdot \tilde{{\ensuremath{\boldsymbol{p}}}}_{\bot {\ensuremath{\boldsymbol{p}}}_0}}{\hbar} \right) \nonumber\\ & \times f_{\alpha \alpha_0} \left( {\ensuremath{\boldsymbol{p}}}, {\ensuremath{\boldsymbol{p}}}_0^+ \right) f_{\beta \beta_0}^{\ast} \left( {\ensuremath{\boldsymbol{p}}}, {\ensuremath{\boldsymbol{p}}}_0^- \right) \nonumber\\ & \times \delta \left( \frac{{\ensuremath{\boldsymbol{p}}}^2 - \left(
{\ensuremath{\boldsymbol{p}}}_0^+ \right)^2}{2 m} + E_{\alpha} - E_{\alpha_0} \right) \nonumber\\ & \times \sqrt{\left( 1 + \frac{\tilde{{\ensuremath{\boldsymbol{p}}}}_{\bot {\ensuremath{\boldsymbol{p}}}_0}^2}{4 p_0^2} \right) \sigma \left( {\ensuremath{\boldsymbol{p}}}_0^+, \alpha_0 \right) \sigma \left( {\ensuremath{\boldsymbol{p}}}_0^-, \beta_0 \right)} \nonumber\end{aligned}$$ with ${\ensuremath{\boldsymbol{p}}}_0^{\pm} {:=}{\ensuremath{\boldsymbol{p}}}_0 \pm \tilde{{\ensuremath{\boldsymbol{p}}}}_{\bot {\ensuremath{\boldsymbol{p}}}_0} / 2$. The ${\ensuremath{\boldsymbol{r}}}_{\|{\ensuremath{\boldsymbol{p}}}_0}$-integration in (\[eq:M2\]) yields an approximate two-dimensional $\delta$-function in $\tilde{{\ensuremath{\boldsymbol{p}}}}_{\bot {\ensuremath{\boldsymbol{p}}}_0}$ so that we obtain $$\begin{aligned} M_{\alpha \beta}^{\alpha_0 \beta_0} & = & \chi_{\alpha \beta}^{\alpha_0 \beta_0} \frac{n_{{\ensuremath{\operatorname{gas}}}}}{m^2} \int {\mathrm{d}}{\ensuremath{\boldsymbol{p}}} {\,}{\,}{\mathrm{d}}{\ensuremath{\boldsymbol{p}}}_0 \mu \left( {\ensuremath{\boldsymbol{p}}}_0 \right) f_{\alpha \alpha_0} \left( {\ensuremath{\boldsymbol{p}}}, {\ensuremath{\boldsymbol{p}}}_0 \right) \nonumber\\ & & \times f_{\beta \beta_0}^{\ast} \left( {\ensuremath{\boldsymbol{p}}}, {\ensuremath{\boldsymbol{p}}}_0 \right) \delta \left( \frac{{\ensuremath{\boldsymbol{p}}}^2 -{\ensuremath{\boldsymbol{p}}}_0^2}{2 m} + E_{\alpha} - E_{\alpha_0} \right), \nonumber\\ & & \label{eq:M3}\end{aligned}$$ provided we identify the average cross section of (\[eq:M2\]) with the geometric mean of the total cross sections of the involved channels, i.e., $\Sigma_{{\ensuremath{\boldsymbol{p}}}_0} = \sqrt{\sigma \left( {\ensuremath{\boldsymbol{p}}}_0 ; \alpha_0 \right) \sigma \left( {\ensuremath{\boldsymbol{p}}}_0 ; \beta_0 \right)} $. Moreover, the final limit $\bar{\beta} \rightarrow \infty$ replaced $\hat{\mu}$ by $\mu$ in (\[eq:M3\]). 
With the same method one shows that the first term in (\[eq:me1\]) merely modifies the unitary evolution. Its effect is to shift the system energies from $E_{\alpha}$ to $E_{\alpha} + \varepsilon_{\alpha}$ by a thermal average of the “forward scattering amplitudes”, $$\varepsilon_{\alpha} = - 2 \pi \hbar^2 \frac{n_{{\ensuremath{\operatorname{gas}}}}}{m} \int {\mathrm{d}}{\ensuremath{\boldsymbol{p}}}_0 \mu \left( {\ensuremath{\boldsymbol{p}}}_0 \right) {\ensuremath{\operatorname{Re}}} \left[ f_{\alpha \alpha} \left( {\ensuremath{\boldsymbol{p}}}_0, {\ensuremath{\boldsymbol{p}}}_0 \right) \right] . \label{eq:epsa}$$ It is reassuring that the explicit expressions (\[eq:M3\]) and (\[eq:epsa\]) can be shown to be equivalent to the more abstract master equation by Dümcke [[@Dumcke1985a]]{}, obtained in a “low-density limit” scaling approach [[@Breuer2002a; @Alicki1987a; @Alicki2003a]]{} for the special case of a factorizing interaction potential, $\mathsf{V}_{{\ensuremath{\operatorname{tot}}}} = \mathsf{A} \otimes \mathsf{B}_{{\ensuremath{\operatorname{env}}}}$, and for times large compared to all system time scales. The present approach thus generalizes this result to arbitrary interaction potentials (satisfying asymptotic completeness) and to arbitrary times as long as they are greater than the duration of a single collision. It is worth noting that the $M_{\alpha \beta}^{\alpha_0 \beta_0}$ can also be obtained in a more direct, though less rigorous, way if the [*diagonal*]{} momentum representation of $\rho_{{\ensuremath{\operatorname{env}}}}$ is used instead of (\[eq:rhops\]). A projection onto the incoming wave packets is then hard to implement and, as discussed above, the application of $\mathsf{S}$ to improper momentum states also leads to an unwanted transformation of their “outgoing components”.
As a consequence, the resulting expression for $M_{\alpha \beta}^{\alpha_0 \beta_0}$ is ill-defined, involving the square of the $\delta$-functions in (\[eq:Trep\]) and the normalization volume $\Omega$. This can be remedied by noting that any consistent modification of $\mathsf{S}$ which keeps outgoing wave packets invariant must conserve the probability current. This condition provides a simple rule for forming a well-defined expression [[@Hornberger2003b; @Hornberger2007RR]]{}, whose multichannel version yields the result (\[eq:M3\]) immediately for any momentum diagonal $\rho_{{\ensuremath{\operatorname{env}}}}$. The expression for the rate coefficients can be rewritten, for isotropic $\mu$, in terms of an average over the velocity distribution $\nu \left( v \right) = 4 \pi m^3 v^2 \mu \left( mv \right)$ and angular integrations, which introduce the velocity $v_{{\ensuremath{\operatorname{out}}}} = \sqrt{v^2 - 2 \left( E_{\alpha} - E_{\alpha_0} \right) / m}$ of the gas particle after a possibly inelastic collision. For rotationally invariant scattering amplitudes, $f_{\alpha \alpha_0} \left( \cos \left( {\ensuremath{\boldsymbol{p}}}, {\ensuremath{\boldsymbol{p}}}_0 \right) ; E = p_0^2 / 2 m \right)$, we have $$\begin{aligned} M_{\alpha \beta}^{\alpha_0 \beta_0} = & \chi_{\alpha \beta}^{\alpha_0 \beta_0} \int_0^{\infty} {\mathrm{d}}v\, \nu \left( v \right) n_{{\ensuremath{\operatorname{gas}}}} v_{{\ensuremath{\operatorname{out}}}} 2 \pi \int_{- 1}^1 {\mathrm{d}}\left( \cos \theta \right) \nonumber\\ & \times f_{\alpha \alpha_0} \left( \cos \theta ; \frac{m}{2} v^2 \right) f_{\beta \beta_0}^{\ast} \left( \cos \theta ; \frac{m}{2} v^2 \right) . \label{eq:M4}\end{aligned}$$ Limiting cases of (\[eq:master\]) then display the expected dynamics, as we now show.
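For purely elastic scattering the quadrature in (\[eq:M4\]) can be carried out explicitly. As a sketch (Python, illustrative model units; the constant s-wave amplitude $f = -a$ is an assumption made for illustration only), the diagonal rate reduces to $n_{{\ensuremath{\operatorname{gas}}}} \langle v \rangle \, 4 \pi a^2$, i.e. gas density times mean speed times total cross section:

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal quadrature (avoids numpy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Model parameters (illustrative units, not argon/silicon data)
m, beta, n_gas, a = 1.0, 1.0, 1.0, 0.5

# Maxwell speed distribution nu(v) = 4 pi m^3 v^2 mu(m v)
v = np.linspace(1e-6, 12.0, 4001)
nu = 4*np.pi * m**3 * v**2 * (beta/(2*np.pi*m))**1.5 * np.exp(-0.5*beta*m*v**2)

# Angular integral of |f|^2 for a constant amplitude f = -a
cos_t = np.linspace(-1.0, 1.0, 801)
angular = 2*np.pi * trap(np.full_like(cos_t, a**2), cos_t)   # = 4 pi a^2

# Eq. (M4) for the diagonal elastic rate, where v_out = v
M = trap(nu * n_gas * v * angular, v)

v_mean = np.sqrt(8 / (np.pi * m * beta))    # analytic mean speed
print(M, n_gas * v_mean * 4*np.pi*a**2)     # the two agree
```

Replacing $|f|^2$ by $|f_{\alpha\alpha} - f_{\beta\beta}|^2$ in the same quadrature gives the elastic decoherence rate; identical diagonal amplitudes then yield a vanishing rate, as expected.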
For the populations $\rho_{\alpha \alpha}$, Eq. (\[eq:master\]) reduces to a rate equation where the total cross sections $\sigma_{\alpha \alpha_0} \left( \frac{m}{2} v^2 \right) $ for scattering from channel $\alpha_0$ to $\alpha$ determine the transition rates, $M_{\alpha \alpha}^{\alpha_0 \alpha_0} = \int {\mathrm{d}}v \nu \left( v \right) n_{{\ensuremath{\operatorname{gas}}}} v_{{\ensuremath{\operatorname{out}}}} \sigma_{\alpha \alpha_0}$. In the case of purely elastic scattering, on the other hand, i.e., $M_{\alpha \beta}^{\alpha_0 \beta_0} = M_{\alpha \beta}^{\alpha \beta} \delta_{\alpha \alpha_0} \delta_{\beta \beta_0}$, the coherences decay exponentially, $\partial_t \left| \rho_{\alpha \beta} \right| = - \gamma_{\alpha \beta}^{{\ensuremath{\operatorname{elastic}}}} \left| \rho_{\alpha \beta} \right|$, with a rate determined by a [*difference*]{} of scattering amplitudes, $$\begin{aligned} \gamma_{\alpha \beta}^{{\ensuremath{\operatorname{elastic}}}} = & \pi \int {\mathrm{d}}v\, \nu \left( v \right) n_{{\ensuremath{\operatorname{gas}}}} v_{{\ensuremath{\operatorname{out}}}} \int_{- 1}^1 {\mathrm{d}}\left( \cos \theta \right) \\ & \times \left| f_{\alpha \alpha} \left( \cos \theta ; \frac{m}{2} v^2 \right) - f_{\beta \beta} \left( \cos \theta ; \frac{m}{2} v^2 \right) \right|^2 . \nonumber\end{aligned}$$ This shows clearly that, in this case, the better the scattering environment can distinguish between the system states $| \alpha \rangle$ and $| \beta \rangle$, the faster the coherence is lost. Conclusions =========== In conclusion, a general method of incorporating formal scattering theory into the dynamic description of open quantum systems was presented. Based on the theory of generalized measurements, it yields completely positive master equations which account for the environmental interaction in a non-perturbative fashion.
When applied to an immobile system in the presence of a gas, it provides a detailed and realistic account of the interplay between coherent system dynamics and the (possibly much faster) incoherent effects of the environment. I thank Bassano Vacchini for helpful discussions. This work was supported by the DFG Emmy Noether program.

H.-P. Breuer and F. Petruccione, *The Theory of Open Quantum Systems*, Oxford University Press, Oxford, 2002.
H. Carmichael, *An Open Systems Approach to Quantum Optics*, Springer, Berlin, 1993.
C. W. Gardiner and P. Zoller, *Quantum Noise*, Springer, New York, 2000.
U. Weiss, *Quantum Dissipative Systems*, World Scientific, Singapore, 2nd edition, 1999.
K. Kraus, *States, Effects and Operations: Fundamental Notions of Quantum Theory*, Springer, Berlin, 1983.
R. Alicki and K. Lendi, *Quantum Dynamical Semigroups and Applications*, Springer, Berlin, 1987.
J. R. Taylor, *Scattering Theory*, John Wiley & Sons, New York, 1972.
P. Busch, P. J. Lahti, and P. Mittelstaedt, *The Quantum Theory of Measurement*, Springer-Verlag, Berlin, 1991.
K. Jacobs and D. A. Steck, quant-ph/0611067 (2007), to appear in Contemp. Phys.
J. K. Breslin, G. J. Milburn, and H. M. Wiseman, Phys. Rev. Lett. **74**, 4827 (1995); C. A. Fuchs and K. Jacobs, Phys. Rev. A **63**, 062305 (2001).
C. W. Gardiner, A. S. Parkins, and P. Zoller, Phys. Rev. A **46**, 4363–4381 (1992).
K. Mølmer, Y. Castin, and J. Dalibard, J. Opt. Soc. Am. B **10**, 524–538 (1993).
H. M. Wiseman, Quantum Semiclass. Opt. **8**, 205–222 (1996).
T. A. Brun, Am. J. Phys. **70**, 719–737 (2002).
J. D. Cresser, S. M. Barnett, J. Jeffers, and D. T. Pegg, Opt. Comm. **264**, 353–361 (2006).
K. Hornberger, Phys. Rev. Lett. **97**, 060601 (2006).
K. Hornberger and J. E. Sipe, Phys. Rev. A **68**, 012105 (2003).
R. Dümcke, Commun. Math. Phys. **97**, 331–359 (1985).
R. Alicki and S. Kryszewski, Phys. Rev. A **68**, 013809 (2003).
K. Hornberger, *Introduction to Decoherence Theory*, eprint quant-ph/0612118.
--- abstract: 'Auger $LMM$ spectra and preliminary model simulations of Ar$^{9+}$ and metastable Ar$^{8+}$ ions interacting with a clean monocrystalline $n$-doped Si(100) surface are presented. By varying the experimental parameters, several spectroscopic features have been observed, providing valuable information for the development of an adequate interaction model. With our apparatus, the ion beam energy can be lowered until the impact energy stems almost entirely from image charge attraction. High data acquisition rates could still be maintained, yielding an unprecedented statistical quality of the Auger spectra.' address: - 'Institut für Kernphysik, Westfälische Wilhelms-Universität Münster, Wilhelm-Klemm-Str. 9, D-48149 Münster, Germany' - 'Laboratoire de Physico-Chimie Théorique, C.N.R.S. et Université de Bordeaux I, 351 Cours de Libération, 33405 Talence Cedex, France' author: - 'J. Ducrée[^1], J. Mrogenda, E. Reckels, M. Rüther, A. Heinen, Ch. Vitt, M. Venier, J. Leuker, and H.J. Andrä' - 'R. Díez Muiño' title: 'Interactions of Ar$^{9+}$ and metastable Ar$^{8+}$ with a Si(100) surface at velocities near the image acceleration limit' --- Introduction {#sec:intro} ============ The interactions of highly charged ions (HCI) with surfaces have attracted strong interest from several research groups in the past, receiving a strong boost in the last decade due to the increasing availability of high-performance HCI ion sources and improvements in other experimental equipment. In recent years, various technological applications of HCI surface collisions have been conceived, in particular for the wide field of microscopic and nanoscopic surface modification. In order to foster these efforts, a better understanding of the different stages of the scattering process has to be attained. Experimentalists hope to take advantage of the quick charge exchange processes and the release of the large amount of potential energy stored in the HCI.
Unfortunately, little consensus has been reached among researchers on the time scales and the location of these processes, although a comprehensive series of spectra and interpretations has already been published on this crucial issue. According to the classical overbarrier model [@Bur91; @Duc97], the neutralization of the HCI sets in at a critical distance of typically $R_c \simeq 15$ [Å]{} in front of the first bulk layer. $R_c$ depends on the target work function $W$ and the initial charge $q$ of the HCI. In the region below $R_c$, target band electrons are successively captured into resonant ionic Rydberg states with $n \simeq q \sqrt{R_\infty/W}$. As soon as more than two electrons have been transferred, the highly excited hollow atom starts to relax via autoionization processes yielding low-energy electrons. X-ray emission is strongly suppressed for light nuclei. Several studies [@Mey95; @Hat96] have been carried out showing that the overwhelming fraction of the reflected particles is neutral and suggesting that the projectile charge $q$ is already compensated on the incoming path. Nevertheless, it is commonly accepted by now [@Sch94] that the intra-atomic transition rates involved in the cascade are far too slow to complete the relaxation of the neutralized HCI in front of the surface. Autoionization spectra originating from highly charged ions containing initial inner-shell vacancies are characterized by a strong and intense low-energy region and a uniquely shaped high-energy branch which can unambiguously be ascribed to intra-atomic transitions involving the inner-shell vacancy. Despite the low transition rates, certain peak structures can even be associated with Auger emission from fully populated shells neighboring the initial core configuration.
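The overbarrier numbers quoted above are easy to reproduce. In the sketch below the critical distance is computed from the standard classical overbarrier expression $R_c \approx \sqrt{2q}/W$ (atomic units), an assumed form since the text only quotes the typical value, together with the capture-shell estimate $n \simeq q \sqrt{R_\infty/W}$ given in the text, for Ar$^{9+}$ on Si ($W = 4.6$eV):

```python
import math

HARTREE_EV = 27.211      # 1 hartree in eV
BOHR_ANGSTROM = 0.5292   # 1 bohr in Angstrom
RYDBERG_EV = 13.606      # Rydberg energy in eV

q, W_eV = 9, 4.6
W_au = W_eV / HARTREE_EV

# Assumed standard COB form for the critical distance (atomic units)
R_c = math.sqrt(2 * q) / W_au
# Capture-shell estimate as quoted in the text
n = q * math.sqrt(RYDBERG_EV / W_eV)

print(R_c * BOHR_ANGSTROM)   # ~13 Angstrom, the order of the quoted 15 Angstrom
print(n)                     # ~15: highly excited Rydberg states
```

The estimate confirms that the first electrons are captured into very high Rydberg shells, which is why the subsequent autoionization cascade is slow compared with the approach time.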
In order to clarify the evolution from Rydberg populations to fully occupied lower shells and motivated by new experimental findings [@Mey91; @Koe94; @Hst94] about large fractions of subsurface emission within the autoionization spectra, additional interaction mechanisms have been postulated and worked into simulations [@Lim95a; @Pag95; @Sto95]. A comparison between Auger spectra for the same HCI projectile impinging on different target species [@Lim96] and a new theoretical approach [@Arn95a] have also shed new light on the interaction scenario. It seems that the energetic positions of target and projectile electronic states play an important role in all direct inner-shell filling mechanisms below the surface. After the HCI has penetrated into the bulk, band electrons can shield the HCI core charge and directly feed the lower-lying hollow atom states while generating a plasmon or an electron hole pair [@Die95; @Die96] (so-called $MCV$ processes). For projectiles with high kinetic energies, electrons can be directly transferred from bulk atom levels into inner projectile levels, yielding a velocity-dependent filling rate [@Lim95a]. In an attempt to extract information on particular transition types from the spectra, experimentalists have analyzed $L$-Auger spectra of Ar$^{9+}$ ions impinging on tungsten [@Zwa87; @Fol89; @Zwa89; @And91A], copper [@Koe92] and gold [@Mey91]. These early efforts were hampered by the large number of initial $M$-shell configurations that had to be considered in the interpretation of $LMM$ spectra exhibiting only a few distinctive structures. In recent years, research activities focused on $K$-Auger spectra of hydrogenlike second row ions C$^{5+}$, N$^{6+}$, O$^{7+}$, F$^{8+}$, Ne$^{9+}$ [@Hst94; @And93; @Fol90; @Lim94a] and Ar$^{17+}$ [@Mey95] instead. Some clearly pronounced peak regions can be identified in most of these spectra and assigned to a comparatively small set of initial $L$-shell configurations.
A strong systematic dependence of the relative peak intensities of these $KLL$ spectra on the experimental conditions has provided valuable information about the contributing ionic shell configurations. In this paper, we present several series of $L$-Auger spectra emitted during the interaction of Ar$^{9+}$ and metastable “heliumlike” Ar$^{8+}$ ions impinging with beam energies between 8eV and 4.6 keV and different experimental geometries on an n-doped Si(100) crystal. For the first time, we discovered significant modifications of the shape of the autoionization spectra for different projectile energies well below 1 keV and for different observation and interaction geometries. They include two geometries that largely suppress the detection of all subsurface electron emission. The spectra obtained in this way exhibit a unique peak profile that deviates strongly from spectra taken under all other experimental geometries. These effects are very surprising because in the energy regime below 1 keV all collisional $M$-shell sidefeeding can generally be ruled out and $MCV$ rates can be treated in a static approximation. In order to understand the behavior of the spectra at different incident energies we developed an interaction model taking into account the special role of the $3d$ subshell, which mediates an efficient $M$-shell filling via valence band electrons within the bulk. By incorporating this model into a Monte Carlo simulation, the observed alterations in the subpeak intensities and positions can be qualitatively reproduced. This model is experimentally supported by a series of $L$-Auger spectra emitted by metastable Ar$^{8+}$ projectiles. Under the same experimental conditions, the Ar$^{8+}$ $LMM$ Auger peak structures turn out to be strikingly similar to the Ar$^{9+}$ $LMM$ structures. Section \[sec:setup\] introduces the experimental setup implemented for our measurements.
In Section \[sec:observations\], we display several sets of autoionization spectra as obtained under specified experimental conditions. Section \[sec:grouping\] will describe how the $LMM$ subpeaks in our Ar$^{9+}$ spectra can be assigned to particular groups of intra-atomic transitions. Section \[sec:subsim\] outlines the basic ingredients of the subsurface interaction model which we employ for the simulation of the Auger spectra. In Section \[sec:evolution\], we extract information about the evolution of the projectile neutralization and Auger emission from the combined analysis of experimental observations and the simulation results. Further experimental support for the proposed interaction mechanism will be given in the discussion of the $L$-Auger spectra of metastable Ar$^{8+}$ projectiles in Section \[sec:Ar8spectra\]. Finally, in Section \[sec:discussion\], we summarize the basic findings of this paper and give a short outlook on future research. Experimental setup {#sec:setup} ================== Highly charged ions are extracted by a fixed voltage of –20kV from an ECR ion source developed in our laboratory. The metallic vacuum chamber of the source can be floated on selectable potentials $U_Q$ with respect to earth potential. These ions are $q/m$-separated by a double-focusing sector magnet system including an aberration correction lens. Two electrostatic Einzel lenses convey the beam through the intermediate stages of a differentially pumped vacuum system which is needed to maintain the pressure gradient between the ECR source ($p \simeq 1 \times 10^{-6}$mbar) and the UHV target chamber ($p \simeq 5 \times 10^{-12}$mbar). Before hitting the grounded Si wafer, the ions pass through two deceleration lenses which are optimized for a maximum number of ions deposited on the target surface of approximately 1cm$^2$. The kinetic ion energy distribution is recorded by an ion spectrometer which is mounted on the beam axis close behind the movable target.
For Ar$^{9+}$ and Ar$^{8+}$ beams, the full width at half maximum never exceeded 2eV per charge. The center of the peak is a measure of the kinetic projectile energy after deceleration, $E_{\mbox{kin}} = q (U_Q + U_P)$, where $U_P$ is the plasma potential which builds up between the plasma and the walls of the ECR source. An average value of $U_P = 12$V has been observed with variations over months of less than $\pm$2V. The Si(100) surface has been prepared by successive cycles of Ar$^{1+}$ sputtering at grazing incidence and annealing until all impurities have disappeared from AES spectra and good LEED patterns have appeared. The geometry within the target chamber is displayed in Fig. \[fig:geometry\](a). The beam axis intersects the target surface at an angle $\Theta$. Electrons are detected by an electrostatic entrance lens followed by a $150^\circ$ spherical sector analyzer at an angle $\Psi$ with respect to the surface. In most measurements we chose $\Theta+\Psi=90^\circ$. As $\Psi$ approaches $0^\circ$ in Fig. \[fig:geometry\](b), the path length inside the solid for electrons which are emitted below the surface drastically increases such that the detection of above- or near-surface emission is clearly favored. Due to the chamber alignment and the large acceptance angle of $\eta = 16\pm6^{\circ}$ of our electron spectrometer entrance lens, below-surface emission is always observed, but to a much smaller extent than above- or near-surface emission. However, the absolute spectral intensity in the ($\Psi \simeq 0^\circ$)-geometries is greatly diminished. By rotating the target of Fig. \[fig:geometry\](a) around the ion beam axis with the surface normal pointing out of the image plane, the condition of $\Theta+\Psi=90^\circ$ could be relaxed, and geometries with $\Theta=5^\circ$ and $\Psi=0^\circ$ have been achieved.
The effective incident energy of the ions on the surface is given by $E_{\mbox{kin}}$ plus the energy gain resulting from the image charge acceleration [@Win93] $$E_{\mbox{im}} \simeq \frac{W}{3\sqrt{2}} q^{3/2}$$ where the work function $W$ equals 4.6eV for our Si target and $q = 9$. Accordingly, there will always remain a minimum incident energy of approx. 29eV, leading to an additional perpendicular projectile velocity component $\Delta v_{\perp} = \sqrt{2 \cdot E_{\mbox{im}}/m}$. Thus the interaction period of the ion in front of the surface cannot, in principle, be stretched beyond an upper limit depending on $q$ and $W$ even though the original perpendicular velocity component $v_{\perp} = \sqrt{2 E_{\mbox{kin}}/m} \cdot \sin(\Theta)$ of the projectile may vanish by selecting $U_Q = -U_P$ or $\Theta \mapsto 0^\circ$. When the incident energy $E_{\mbox{kin}}$ is lowered the beam spreads out at the target (Liouville’s theorem) and incident angles may deviate from their nominal values $\Theta$. In the energy domain $E_{\mbox{kin}} < E_{\mbox{im}}$, the projectile path is strongly bent by the attractive image acceleration causing increased effective incident angles $\Theta_{\mbox{eff}}$, especially for small $\Theta$. Hence the values given for $\Theta$ in this paper are intended to delineate the chamber geometry rather than the effective scattering geometry of an individual projectile. Projectile penetration depths at the stage of complete neutralization and deexcitation can be estimated by multiplying $v_\perp$ by a typical overall interaction time of $10^{-14}$s. With $E_\perp = \frac{1}{2} m v_\perp^2$ expressed in eV, the perpendicular path length $z_{pen}$ of the Ar projectile within the bulk can be obtained from $z_{pen} = 0.22$Å$\times \sqrt{E_\perp [\mbox{eV}]}$. This implies that at energies $E_\perp$ in the range of 100eV, $z_{pen}$ stays below one lattice constant, amounting to 5.43Å for Si.
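These estimates are straightforward to verify numerically. The sketch below reproduces the $\simeq 29$eV image-charge gain for $q=9$, $W=4.6$eV and the $0.22$Å prefactor of the penetration-depth formula; the $10^{-14}$s interaction time is the value assumed in the text:

```python
import math

AMU = 1.66054e-27    # kg
EV = 1.60218e-19     # J

W, q = 4.6, 9                       # Si work function (eV), Ar9+ charge
m = 40 * AMU                        # Ar mass
E_im = W / (3 * math.sqrt(2)) * q**1.5
print(E_im)                         # ~29 eV, as quoted

def z_pen_angstrom(E_perp_eV, tau=1e-14):
    """Perpendicular path length v_perp * tau in Angstrom."""
    v_perp = math.sqrt(2 * E_perp_eV * EV / m)   # m/s
    return v_perp * tau * 1e10

print(z_pen_angstrom(1.0))      # ~0.22 A: the prefactor quoted in the text
print(z_pen_angstrom(100.0))    # ~2.2 A, below the Si lattice constant 5.43 A
```

The 100eV case confirms that even the fastest projectiles considered here stop their perpendicular relaxation within the first monolayer.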
[TRIM]{} simulations [@Zie85] performed for a 10eV and a 100eV Ar$^{1+}$ beam impinging on a Si crystal at perpendicular incidence yield average lateral ranges of 3$\pm$1Å and 10$\pm$4Å, respectively. These distances refer to the total penetration depth until the ion is stopped within the bulk. Experimental observations {#sec:observations} ========================= Using the apparatus described in the preceding section, we have measured secondary electron spectra emitted by Ar$^{9+}$ and metastable Ar$^{8+}$ ions during their interaction with the Si wafer. In this work we will focus on examining the well-defined high-energy $L$-Auger peaks covering the interval between 120eV and 300eV. The spectra also feature a low-energy part which extends up to more than 100eV. The analysis of electron spectra in this energy domain is hampered by the lack of substructures, the superposition of kinetic and intra-atomic emission, and the sensitivity to stray electromagnetic fields. Regarding the high-energy branch, we point out that no background due to kinetic electron emission has to be considered for $E_{\mbox{kin}} \leq 121$ eV since the collision energies $E_{coll}=E_{\mbox{kin}}+E_{\mbox{im}}$ are smaller than the lower bound of the spectral region to be examined. By selecting $U_Q=-20$V$<-U_P= -12 \pm 2$V, we can prevent HCIs from reaching the grounded target. Only projectiles that are partially neutralized before the deceleration stages and secondary electrons which are generated by collisions of the HCIs with beam transport lens elements (these are on negative potentials) can hit the target where they may release secondary electrons. We discovered that both contributions are negligible. In Fig. \[fig:Ar9Si:45deg:energy:normreg\] we present three Ar$^{9+}$ spectra measured under $\Theta=45^\circ$ and with $E_{\mbox{kin}}=9$ eV, 121eV and 1953eV. This and all following spectra are normalized to the total intensity in the $L$-Auger region between 160eV and 240eV.
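This normalization is a simple per-spectrum rescaling; a minimal sketch (with a synthetic toy spectrum, not measured data) makes the convention explicit:

```python
import numpy as np

# Toy spectrum on 1 eV channels; the Gaussian peak at 211 eV only
# mimics the dominant LMM feature and is not measured data.
energy = np.arange(120, 301)
spectrum = np.exp(-((energy - 211) / 12.0) ** 2)

# Normalize to the total intensity in the L-Auger window 160-240 eV
window = (energy >= 160) & (energy <= 240)
spectrum_n = spectrum / spectrum[window].sum()

print(spectrum_n[window].sum())   # -> 1.0 by construction
```

Spectra normalized this way can be compared channel by channel, so that intensity shifts between transition subgroups appear directly as changes of relative peak height.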
Considering that at most one $L$-Auger process per ion takes place, this normalization is suited to displaying the intensity shifts between $L$-transition subgroups discussed in this paper. We note that the calibration of the spectra to the absolute beam intensity is prone to errors arising from the uncertainty in the correction factors compensating geometrical and kinetic effects. At first glance we recognize the general shape of an Ar$^{9+}$ $LMM$ spectrum, featuring a dominant peak at 211eV, a broad structure reaching down to about 120eV on the low-energy side, and a shoulder sitting on the high-energy tail of the spectrum. At $E_{\mbox{kin}}=9$ eV, this shoulder can be resolved into two subpeaks of almost equal height at 224eV and 232eV. Proceeding to higher $E_{\mbox{kin}}$, the 232eV-peak disappears and the 224eV-peak gains intensity. Presumably due to poor statistics, the latter 232eV-substructure cannot unambiguously be identified in de Zwart’s [@Zwa89] measurement[^2], which was taken under the same experimental geometry and roughly the same incident energy on a tungsten target. Our spectra exhibit remarkably good counting statistics. Beam-current shifts during measurements are compensated by an online normalization of the spectra to the overall charge current $I_q$ hitting the target. The accumulated counts per 1eV energy channel in the ($\Theta=45^\circ$, $E_{\mbox{kin}}=121$ eV)-spectrum amount to more than 200,000 at the 211eV-maximum, bringing the relative error below 0.3%. We note that each spectrum in Fig. \[fig:Ar9Si:45deg:energy:normreg\] has been recorded in a single five-minute run. This is possible due to the high current $I_q=125$nA on the target, which can be converted into a particle current $I_p$ by dividing $I_q$ by the projectile charge $q$ and applying a correction factor compensating for secondary-electron emission.
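The conversion from charge current to particle current can be sketched as follows. The paper only states that a secondary-electron correction factor is applied; the parameter `gamma` (electron yield per ion) and the sign convention `I_p = I_q / (q + gamma)` are our assumptions, based on each emitted electron adding one elementary charge to the measured current:

```python
E_CHARGE = 1.60218e-19  # elementary charge, C

def particle_rate(I_q_A, q, gamma=0.0):
    """Ions per second on the target from the measured current I_q.
    gamma is a hypothetical secondary-electron yield per incoming ion;
    with gamma = 0 the correction is switched off."""
    return I_q_A / ((q + gamma) * E_CHARGE)

# I_q = 125 nA of Ar9+ gives roughly 9e10 ions/s before the
# secondary-electron correction
rate = particle_rate(125e-9, 9)
```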
Multiplying $I_p$ by an appropriate geometrical factor, it can be shown that the overall experimental count rate in the high-energy branch roughly corresponds to the emission of one high-energy electron per incoming HCI. The spectral series of Ar$^{9+}$ ions impinging on Si(100) with constant $E_{\mbox{kin}}=121$ eV in Fig. \[fig:Ar9Si:121eV:angle:normreg\] displays the variation of the relative peak intensities with the experimental geometry. Recalling Fig. \[fig:Ar9Si:45deg:energy:normreg\], we find that the presence of a strong 232eV-subpeak is connected to minimum perpendicular velocities $v_\perp$. In the measurement under $\Theta=90^\circ$, the observation angle $\Psi$ is very flat and a second broad peak region evolves around 198eV. Switching to the other “grazing observation” alignment at $\Theta=5^\circ,\Psi=0^\circ$, this structure is preserved, showing that its presence is related to a small observation angle $\Psi$ rather than to the direction of incidence $\Theta$ or $\Theta_{\mbox{eff}}$. Under $\Theta=5^\circ$, the perpendicular projectile penetration into the bulk is in principle limited to less than one lattice constant. The pronounced discrepancy between the two spectra under $\Theta=5^\circ$ and $\Theta=5^\circ,\Psi=0^\circ$ in Fig. \[fig:Ar9Si:121eV:angle:normreg\] illustrates the extreme above-surface sensitivity of the ($\Psi=0^\circ$)-measurements, since the physics of the interaction is determined only by $\Theta_{\mbox{eff}}$ and $E_{\mbox{kin}}$, which remain constant. We deduce that the broad peak region is generated above or at least near the first bulk layer. Because this region loses its weight under $\Psi=45^\circ$, when electrons originating from all interaction phases are detected, above-surface processes supply only a minor fraction of the total high-energy emission.
Nevertheless, the ratio between the *detected* above- and below-surface emission is strongly enhanced at grazing observation $\Psi = 0^\circ$ and small projectile penetration depths. To obtain a quantitative estimate, we ran <span style="font-variant:small-caps;">TRIM</span> calculations [@Zie85] for Ar$^{1+}$ ions colliding with a Si target. The results show that a few percent of the incoming particles are reflected for $E_{\mbox{kin}}=121$ eV and 2 keV, consistent with the preceding interpretation of the ($\Psi = 0^\circ$)-spectra. We point out that one has to be careful about adopting these findings for HCI beams because the <span style="font-variant:small-caps;">TRIM</span> code employs potentials which are, strictly speaking, only valid for singly ionized ground-state projectiles. For incident energies of less than 10eV, when $E_{\mbox{im}} > E_{\mbox{kin}}$, the code fails to produce physically meaningful output since it misrepresents the potentials evolving from the complex coupling of the HCI-surface system. These potentials are decisive for the calculation of the HCI trajectory along the prolonged interaction period in front of the surface and for the reflection probability. At such low incident energies, no experimental data on reflection coefficients of Ar$^{q+}$ impinging on Si(100) are available in the literature; existing data refer to grazing-incidence conditions, where the physics of the interaction is different despite the similar vertical velocity components. The detection of the unique peak profile at grazing observation, combined with the discussion below, may be regarded as indirect experimental evidence for the existence of reflected projectiles. The shifts of the upper edge of the 211eV-peak in Fig.
\[fig:Ar9Si:121eV:angle:normreg\] can consistently be explained by an enhanced below-surface damping of the emitted electrons at $\Theta=90^\circ$, which is more effective than at $\Theta=5^\circ,\Psi=0^\circ$ due to the higher perpendicular velocity component $v_\perp$. In Fig. \[fig:Ar9Si:9eV:angle:normreg\] we show spectra of Ar$^{9+}$ ions impinging on a n-Si(100) surface under different incident angles with minimal kinetic energy, i.e., $E_{\mbox{kin}} = 9$ eV. For $\Theta=5^\circ$ and $\Theta=45^\circ$, the spectra are nearly identical, reflecting the fact that the image-charge attraction is greater than the kinetic projectile energy, so that the effective angle of incidence $\Theta_{\mbox{eff}}$ becomes almost independent of its nominal value $\Theta$. While approaching perpendicular incidence, the same broad region between 160eV and 205eV as in Fig. \[fig:Ar9Si:121eV:angle:normreg\] appears again. For the two different ($\Psi=0^\circ$)-geometries, the main peaks exhibit about the same height. Since $v_\perp$ is minimal in all four spectra, the upper edge of the 211eV-peak remains sharp and does not shift to lower energies due to bulk damping, as it does in Fig. \[fig:Ar9Si:45deg:energy:normreg\]. Moreover, the high-energy branches above 211eV coincide almost perfectly. Keeping in mind our particular choice of normalization and the minimal incident energy $E_{\mbox{kin}} = 9$ eV, the latter feature suggests that the peak intensity within the high-energy tail region results from above-surface emission, which is insensitive to bulk damping of the outgoing electrons. In Fig. \[fig:Ar9Si:92deg:energy:normreg\] we present another series of Ar$^{9+}$ spectra taken at a fixed angle $\Theta=90^\circ$ (i.e., $\Psi=0^\circ$) for different incident energies $E_{\mbox{kin}}$. As the point of emission moves deeper into the solid, below-surface contributions are successively filtered out by bulk damping.
The double-peak profile transforms into a single unstructured maximum, widening to the low-energy side as $E_{\mbox{kin}}$ increases. The low-energy bounds of the 198eV-maximum coincide at $E_{\mbox{kin}}=9$ eV and 121eV. The spectrum measured at $E_{\mbox{kin}}=121$ eV demonstrates that the appearance of the broad peak structure under $\Psi = 0^\circ$ and the 232eV-peak occurring solely at minimal $v_\perp$ are evidently not directly linked to each other. The combined analysis of the spectra in Figs. \[fig:Ar9Si:45deg:energy:normreg\]-\[fig:Ar9Si:92deg:energy:normreg\] yields the following preliminary picture, which will be supported by further evidence and simulations in the next sections. The dominant 211eV-peak originates from below-surface emission, since its center moves downward, it broadens, and its intensity decreases whenever long path lengths of the emitted electrons through the bulk to the spectrometer entrance can be assumed. Furthermore, it does not disappear with growing $v_\perp$. This also holds for the lower-lying part of the spectrum. Two equally intense subpeaks on the high-energy shoulder appear exclusively when $v_\perp$ is minimized. As $v_\perp$ increases, the 224eV-peak gains intensity while the 232eV-peak quickly vanishes. This behavior suggests a dependence of the 232eV-intensity on the above-surface interaction time, even though the resulting emission process may occur after surface penetration. The broad peak region between 160eV and 205eV under $\Psi=0^\circ$ and $E_{\mbox{kin}} \leq 121$ eV represents near- or above-surface emission, since the “detection window” is shallow and the chamber geometry simultaneously favors detection of above-surface transitions. Subsurface contributions are shielded by bulk damping. For reasons that will be given in Section \[sec:Ar8spectra\], it is likely made up of a small fraction of above-surface emission from partially screened incoming or ionized reflected particles.
The preceding experimental findings will play a crucial role in the conception of an interaction model in Section \[sec:evolution\].

Energetic grouping of atomic $LMM$ transitions {#sec:grouping}
==============================================

In this section we attribute some spectral features occurring in the energy range between 150eV and 300eV to distinct groups of $LMM$ Auger transitions. The energetic overlap between neighboring groups will “fortunately” turn out to be sufficiently small that relative peak intensities can be related to the participation of distinct Auger processes. Furthermore, certain projectile deexcitation mechanisms can definitively be ruled out if no intensity is measured in their respective energy ranges. By merely comparing peak energies, we obtain valuable information concerning the HCI-solid interaction which supplements the experimental observations of Section \[sec:observations\] *before* launching any simulation. At the present state of research, peak energies can be evaluated more accurately than transition rates for the HCI-solid system. We employ the well-known Cowan code [@Cow81] to simulate configuration energies based on spherically symmetrized wave functions for *free* atoms and ions. To calculate Auger transition energies *within the bulk*, we have to take into account the effect of the self-induced charge cloud of valence-band (VB) electrons which surrounds the HCI. First steps in this direction have been taken [@Arn95a; @Arn95] using density functional theory (DFT). The results show that the nonlinear screening effects due to the electron gas are, to a good approximation, equivalent to the screening by outer-shell “spectator” electrons in a free atom. The hollow atom entering the bulk loses all Rydberg-shell electrons due to the screening by the target electron gas.
The radii of the resonantly populated orbitals are of the order of the capture distances, i.e., about 10Å, and therefore much larger than the Thomas-Fermi screening length of less than 1Å derived in a free-electron-gas model. Therefore all Rydberg levels will be depleted, leaving behind the original $1s^22s^2p^5$ core configuration and possibly some $M$-, $N$- and $O$-shell electrons. The target electron gas swiftly takes over the role of the outer electrons in screening, and thereby neutralizing, the HCI charge. A good estimate for the reaction rate of the electron gas to the HCI “point charge” perturbation is provided by the plasmon frequency, which lies in the vicinity of $10^{16}$s$^{-1}$ for metals. This is far above typical rates of the other HCI bulk interaction processes, and we can thus assume that the screening of the HCI core by VB electrons is instantaneous. Except for the special handling of the transitions with $3d$ participation, which will be outlined below, all subsurface Auger transition energies given in this paper will hence be derived for neutral initial states possessing a total of $q$ $M$- and $N$-shell electrons and singly ionized final states. Let us now look at the grouping of $LMM$ transitions plotted in Fig. \[fig:Ar9:LMM:histo:spec\]. The histogram displays the energetic positions of all $LMM$ transitions originating from initial $2p^53s^xp^yd^z$ configurations ($n_M=x+y+z \leq 9$) of “hollow” Ar$^{9+}$ atoms which are neutralized via $q-n_M$ “spectator” electrons in the $N$-shell. Angular momentum coupling as in [@Sch94] is not taken into account. Each transition is weighted by unity in the plot, discarding transition rates and statistical factors due to different subshell occupations. For the sake of clarity, the whole spectrum is convoluted with a Gaussian function of constant width 2eV.
This smooths out conglomerations of Auger lines at certain energies which are an artifact of strictly applying the spectator-electron approximation. The width is sufficiently small not to lead to an additional overlap of $LMM$ subgroup intensities. Within the same group, Auger transition energies generally increase steadily with the overall shell population. For comparison, the dotted line in Fig. \[fig:Ar9:LMM:histo:spec\] represents an autoionization spectrum of Ar$^{9+}$ ions impinging on a Si(100) surface at $E_{\mbox{kin}}=121$ eV and $\Theta=\Psi=45^\circ$, as reproduced from the experimental data in Fig. \[fig:Ar9Si:45deg:energy:normreg\]. Fig. \[fig:Ar9:LMM:histo:spec\] reveals that $LMM$ Auger transitions involving a free and initially neutral Ar atom can cover the energy interval between 166eV ($2p^53s^24s^2p^5 \mapsto 2p^63s^04s^2p^5$) and 267eV ($2p^53d^9 \mapsto 2p^63d^7$). For convenience, the groups of $LMM$ transitions displayed in Fig. \[fig:Ar9:LMM:histo:spec\] and in the following part of the paper are classified by the angular momentum quantum numbers $\ell$ of the two participating $M$-shell electrons. In all cases, the final states are made up of the atomic $2p$ level, the remaining $M$-core states and an appropriate continuum state. For $LMM$ processes, we omit the $2p$ level in our notation. The low-energy part of the $LMM$ spectrum can be assigned to $3ss$- and $3sp$ transitions. The higher $3sp$ intensity can be explained by their statistical weight and their $3p$ contribution, which clearly enhances the transition rates. The fact that the two small peaks arising in some spectra between 190eV and 200eV fall into the $3sp$ peak region in Fig. \[fig:Ar9:LMM:histo:spec\] might be fortuitous. Due to our coarse resolution concerning the energetic grouping, we are not able to ascribe these peaks to particular $3sp$ transitions.
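The stick-spectrum broadening described above (every transition weighted by unity and convolved with a constant-width Gaussian) can be sketched in a few lines. We read the quoted 2eV width as a full width at half maximum; the stick energies below are illustrative, not actual Cowan-code output:

```python
import math

def broaden(sticks, energies, fwhm=2.0):
    """Convolve a stick spectrum with a normalized Gaussian of constant
    width. sticks: list of (energy_eV, weight); here each transition
    carries unit weight, as in the histogram of the text.
    energies: grid (eV) on which the broadened spectrum is evaluated."""
    sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))
    norm = 1 / (sigma * math.sqrt(2 * math.pi))
    return [
        sum(w * norm * math.exp(-0.5 * ((E - E0) / sigma) ** 2)
            for E0, w in sticks)
        for E in energies
    ]

# Three hypothetical lines near the 3pp group:
sticks = [(209.0, 1.0), (211.0, 1.0), (211.5, 1.0)]
grid = [200 + 0.5 * i for i in range(60)]   # 200-229.5 eV
spec = broaden(sticks, grid)
```

Because each Gaussian is normalized, the broadened curve integrates to the number of sticks, so relative subgroup intensities are preserved.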
Several observations indicate that the dominant peak region around 211eV is composed of $3pp$ transitions out of a massively occupied $M$-shell rather than $3sd$ transitions, whose energy range also covers this peak region. First, it is intuitively plausible, considering that all three bound-state wave functions possess the same angular momentum, that by far the highest $LMM$ rates are calculated for the $3pp$ group. Second, the sharp upper edge of the 211eV-maximum resembles the upper boundary of the $3pp$ curve, which is composed of $3pp$ transitions out of a completed $M$-shell. Due to level-filling statistics, a sharp edge is unlikely to form if its corresponding transitions take place out of intermediate shell occupations. Third, atomic-structure calculations yield that $3pp$ energies accumulate around 211eV for all initial $3s^2p^yd^z$ configurations ($y+z \geq 5$), regardless of the particular choice of $y$ and $z$. This automatically implies that, prior to the majority of all $3pp$ decays, either more than seven electrons have to be captured into the $M$-shell or the induced charge cloud provides an equivalent screening effect. According to the $LMM$ grouping in Fig. \[fig:Ar9:LMM:histo:spec\], we can assign the two subpeaks on the high-energy shoulder of the $LMM$-maximum to $3sd$- and $3pd$ transitions, respectively. $3pp$ processes are unlikely to contribute to the region above 213eV since they require at least one $3s$ vacancy along with a ninefold-occupied $M$-shell. Such initial configurations will immediately be converted into $3s^2$ configurations by the very fast super-Coster-Kronig (sCK) decay channel involving three $M$-shell electron levels. The spectral range of the $3pd$ peak is cut off at about 235eV, and $3dd$ transitions obviously do not produce enough intensity to appear as a distinct peak region in the spectra.
These observations provide experimental evidence that the $3d$ level cannot be completely populated within the bulk and that quick sCK transitions tend to carry $3d$ populations into lower-lying sublevels before $LMM$ transitions take place. The missing structures and the spectral range of the high-energy tail extending above 300eV suggest that it consists of the large variety of $LXY$ transitions with X,Y$\in${N, O} rather than $3dd$ transitions. The $LMM$ cut-off at 235eV can be understood by taking a closer look at the effective projectile potential $V_{\mbox{eff}}$ within the bulk (see Fig. \[fig:potential\]), which is deformed with respect to the corresponding free ionic Coulomb potential $V^{free}_{Coul}$. Close to the projectile nucleus ($r \ll a_0$), the effective potential $V_{\mbox{eff}}$ converges to $V^{free}_{Coul}$. At intermediate distances ($r \simeq a_0$), the screening by outer levels and the electron gas starts to act on the projectile levels. In this domain, $V_{\mbox{eff}}$ is well represented by a free-atom potential $V^{screen}_{Coul}$ which is screened by outer-shell spectator electrons. All $nl$ subshells with energies $E^{nl}_b$ are elevated by a subshell-dependent amount $\Delta E^{nl}_b$ with respect to $V^{free}_{Coul}$. Far away from the nucleus, the effective potential $V_{\mbox{eff}}$ merges into $V_0$, the bottom of the valence band. Fig. \[fig:bind\] displays the $M$-sublevel binding energies $E^{nl}_b$ of Ar$^{9+}$ as a function of the total $M$-shell population $n_M$. The values have been calculated with the Cowan code for spectator-electron configurations, i.e., for the potential $V^{screen}_{Coul}$. This modeling has proven to yield good agreement with experimental and more sophisticated theoretical results in the past. In a work by Schippers *et al.* [@Sch94], the main $KLL$ peak energies of the hydrogenlike second-row ions C$^{5+}$, N$^{6+}$, O$^{7+}$, F$^{8+}$ and Ne$^{9+}$ have been reproduced.
Arnau *et al.* [@Arn95] have demonstrated that the spectator-electron model agrees with DFT calculations including nonlinear screening effects for hydrogenlike Ne$^{9+}$ ions in an Al target. Detailed calculations even reveal that the induced charge density mimics the shape of the wave functions of the neighboring unoccupied atomic level. In Fig. \[fig:bind\] we added the $2p$ binding energies of hydrogenlike C$^{5+}$ and Ne$^{9+}$ as obtained from the spectator model and, for comparison, the DFT calculation for Ne$^{9+}$ as a function of the total $L$-shell population $n_L$. Following [@Arn95], the screening by the atomic spectator electrons resembles the screening by the VB electron gas because the inner atomic levels are energetically separated from the VB much as they are separated from the next-higher subshell in a free atom. This argument holds for the Ar$^{9+}$ $3s$- and $3p$ levels and also for nearly all $L$-shell levels in hydrogenlike HCIs, which are situated between the C$^{5+}$ and Ne$^{9+}$ curves. The evolution of the $3d$ sublevel energies with $n_M$ in Fig. \[fig:bind\] differs from that of the lower-lying subshells, though. We observe that the $3d$ binding energies are significantly closer to the VB and rise above $V_0$ as soon as more than five electrons populate the $M$-shell. We performed a DFT calculation showing that $3d$ electrons are already lost to the VB continuum for $n_M>4$. The spectral cut-off in the $3pd$ transition domain in Fig. \[fig:Ar9:LMM:histo:spec\] can now be explained by omitting all contributions from $3pd$ transitions with $n_M>4$. To correct for the shape of $V_{\mbox{eff}}$, which deviates strongly from $V_{Coul}^{screen}$ for $E_b^{n\ell} \simeq V_0$ (see Figs. \[fig:potential\] and \[fig:bind\]), we shift the atomic $3d$ level to $V_0$ for $n_M \leq 4$, obtaining higher transition energies than the mere spectator-electron model.
In this manner we reproduce the experimental $3sd$- and $3pd$ peak positions on the high-energy shoulder to within 2% and 1%, respectively.

Monte Carlo simulation of the subsurface interaction phase {#sec:subsim}
==========================================================

To elucidate the interaction mechanism which eventually generates the measured spectra, we developed a Monte Carlo simulation [@Kal86]. Our goal was to reproduce the intensity shifts of the observed spectra for different incident energies in Fig. \[fig:Ar9Si:45deg:energy:normreg\]. In analogy to previous simulations by Schippers et al. [@Sch94], Page et al. [@Pag95] and Stolterfoht et al. [@Sto95] on the $L$-shell filling of hydrogenlike highly charged ions at metal surfaces, we only keep track of the populations of the two innermost projectile shells containing at least one vacancy and focus on the most dominant transition rates. The ionic cores are neutralized by $N$-shell spectator electrons. Among all intra-atomic Auger processes, only those yielding an electron above the vacuum level are considered. During the simulation, the three $M$-subshell populations are recorded continuously. Transition rates, transition energies and sublevel energies are evaluated dynamically at each iteration step according to the particular $\{n_{3s}|n_{3p}|n_{3d}\}$ configuration. From one step to the next, only the fastest transition takes place, the waiting time of each transition being drawn statistically from its nominal rate. The Monte Carlo method implies averaging the simulation results over a sufficient number of projectiles. We find that the simulated spectra converge after $N \simeq 1 \times 10^5$ particle runs and chose $N=1 \times 10^6$. In our implementation of the subsurface cascade, each particle is started at the first bulk layer with a fixed angle of incidence $\Theta=45^\circ$ and energy $E_{\mbox{kin}}$. For $E_{\mbox{kin}}=121$ eV and 2 keV we assume an initially empty $M$-shell.
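The step rule just described, where each candidate transition draws a waiting time from its nominal rate and the fastest one occurs, corresponds to the "first-reaction" variant of kinetic Monte Carlo. The paper does not name its sampling scheme, so the following is our reading of it, not the authors' code; the rate table and labels are illustrative:

```python
import random

def next_transition(rates, rng=random):
    """One cascade step: draw an exponential waiting time for every
    allowed transition and let the fastest one take place.
    rates: dict mapping transition label -> rate in s^-1.
    Returns (label, waiting_time_s)."""
    times = {label: rng.expovariate(rate) for label, rate in rates.items()}
    winner = min(times, key=times.get)
    return winner, times[winner]

# Illustrative rates, orders of magnitude taken from the rate discussion
# in the following subsections (labels are ours):
RATES = {"sCK_MMM": 1e15, "MMN": 3e14, "LMM_3pp": 1e13}
```

Averaged over many particles, each transition fires with probability proportional to its rate, so the fast sCK channels dominate the early redistribution, as the text describes.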
Intra-atomic rates {#sec:intraatomic}
------------------

The $LMM$ rates are evaluated using a fit expression proposed by Larkins [@Lar71C] for free multiply ionized atoms possessing no $N$-shell spectator electrons. Accordingly, if one or two of the $n$ electrons of a subshell that could contain $n_0$ electrons are involved in an Auger process, the Auger rate calculated with the formulae appropriate for a filled shell, $\Gamma^{\mbox{filled}}_{\ell_1 \ell_2}$, is reduced by $n/n_0$ or $[n(n-1)]/[n_0(n_0-1)]$, respectively. Values for $\Gamma^{\mbox{filled}}_{\ell_1 \ell_2}$ are supplied in the literature only for $3ss$-, $3sp$- and $3pp$ transitions, which account for the greatest part of the overall $LMM$ intensity. For $3sd$-, $3pd$- and $3dd$ transitions, we scale the $LMM$ rates $\Gamma^{\mbox{filled}}_{3\ell d}$ to reproduce the experimental peak heights. Table \[tab:intrarates\] lists the six $\Gamma^{\mbox{filled}}_{\ell_1 \ell_2}$ rates, which are held constant across different simulations. These $LMM$ rates should not be greatly affected by the embedding of the HCI in the electron gas because they chiefly depend on the radii of the participating $M$-subshells, which remain fairly unchanged. To show this, we recall that the shape of the induced charge cloud is similar to that of the $N$-shell. Within the hydrogen-atom approximation, the radii of the screening cloud $r_{sc}$ and of the atomic shells (schematically inserted in Fig. \[fig:potential\]) both scale as $(n-1)^2 \{1+\frac{1}{2}[1-\frac{\ell(\ell+1)}{(n-1)^2}]\}$. The ratio $r_{sc}/r_{3p} = 2.5$ with $sc = 4p$ has to be compared with the ratio $r_{3p}/r_{3s}$, which amounts to 0.83. Due to its great extension, the screening electron cloud should therefore have only a minor impact on the $M$-shell orbitals and hence on the $LMM$ rates given in Table \[tab:intrarates\]. Since we do not resolve $N$-sublevels, Coster-Kronig $MMN$ transitions have to be handled by a global base rate for each $M$-level pair.
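The Larkins occupation scaling and the radius formula quoted earlier in this subsection can be written down directly; the function names are ours, and the code reproduces the ratios $r_{sc}/r_{3p} = 2.5$ and $r_{3p}/r_{3s} \simeq 0.83$ stated in the text:

```python
def rate_one_active(gamma_filled, n, n0):
    """Larkins scaling: one of the n electrons in a subshell of
    capacity n0 takes part in the Auger transition."""
    return gamma_filled * n / n0

def rate_two_active(gamma_filled, n, n0):
    """Larkins scaling: two electrons of the same subshell take part."""
    return gamma_filled * n * (n - 1) / (n0 * (n0 - 1))

def shell_radius(n, l):
    """Relative hydrogen-like shell radius
    (n-1)^2 {1 + 1/2 [1 - l(l+1)/(n-1)^2]} as quoted in the text."""
    return (n - 1) ** 2 * (1 + 0.5 * (1 - l * (l + 1) / (n - 1) ** 2))

ratio_sc_3p = shell_radius(4, 1) / shell_radius(3, 1)   # screening cloud (4p) vs 3p
ratio_3p_3s = shell_radius(3, 1) / shell_radius(3, 0)
```

For a filled subshell the scaling factors reduce to unity, so both expressions recover $\Gamma^{\mbox{filled}}_{\ell_1 \ell_2}$ in that limit.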
In a simple approach, we weight each $MMN$ base rate by the initial $M$-sublevel occupation and the final-state vacancies such that the average rate amounts to $3 \times 10^{14}$s$^{-1}$. For the purposes of this paper, only the order of magnitude with respect to the other transition types matters. We remark that Armen and Larkins [@Arm91] have calculated transition rates for $MMN$ decay channels which are of the order of $4 \times 10^{14}$s$^{-1}$, depending strongly on the angular coupling. This is in sufficiently good agreement with our assumption. Only $MMN$ transitions with a final state above the continuum level are included, leaving only $(3s)(3d)N$ transitions, which are of particular importance for the initial phase of the interaction. The sCK $MMM$ rates are known to be 10 to 100 times faster than the rates of Auger transitions possessing initial and final holes in different principal shells. In our simulation, they mainly serve to regroup any $M$-shell configuration into the appropriate $M$-shell ground state before $LMM$ transitions take place. To achieve this, we utilize a base rate of $1 \times 10^{15}$s$^{-1}$, which is scaled by the $M$-subshell occupation statistics. In Table \[tab:sCKrates\] we list the average number of $MMM$ processes per particle and the average $M$-sublevel occupation at the time of $MMM$ emission for the two sCK transitions relevant for our simulation.

$MCV$ filling within the bulk {#sec:filling}
-----------------------------

Target levels below $V_0$ can be filled by transitions involving electrons of valence-band states (C) which are perturbed by the ionic core. The energy gain is either conveyed to another VB electron, which is emitted into the continuum, or a collective excitation (plasmon) is created in the medium.
The theoretical approach including the charge displacement in the description of the excited outgoing electrons is much more complicated and, at present, only unperturbed valence-band states (V) are included in the calculations [@Die95; @Die96]. The VB electrons take on the role of outer shells in a free atom. Using DFT to describe the interaction between the ion and the metal valence band, and following the same scheme as in [@Die96], we have derived $MCV$ rates for the Ar$^{9+}$–Si system. Table \[tab:MCVrates\] lists the rates per spin state $\Gamma^{MCV}_{3\ell}$ into the three $M$-sublevels with the number of initial $M$-shell electrons $n_M$ as a parameter. These $MCV$ rates still have to be multiplied by the number of unoccupied final states in the particular $M$-sublevel to obtain actual transition rates between two atomic configurations. $\Gamma^{MCV}_{tot}$ denotes the overall $MCV$ rate into the $M$-shell after carrying out the appropriate statistics. Since sCK transitions are much faster than $MCV$s (cf.  Table \[tab:sCKrates\]), we only consider “Coster-Kronig final states” as initial configurations in the DFT calculation. The transition rates are independent of the projectile velocity $v_p$, equaling their static values for all incident energies occurring in this work. Table \[tab:MCVrates\] reveals that $\Gamma^{MCV}_{3d}$ assumes by far the highest values. Taking into account the high degeneracy of the $3d$ level, the effective rates $\Gamma^{MCV}_{3d}$ exceed $\Gamma^{MCV}_{3p}$ and $\Gamma^{MCV}_{3s}$ by more than one and two orders of magnitude, respectively. With increasing $n_M$, $MCV$ transfer into the $3p$ state accelerates, reaching the $\Gamma^{MCV}_{3d}$ values at low $n_M$. This is important considering that for $n_M>4$ the $3d$ shell vanishes and $MCV$s into the $3p$ level constitute the most effective $M$-shell filling mechanism, which is eventually responsible for the formation of the dominant 211eV-peak.
Collisional filling {#sec:collisions}
-------------------

For projectile energies above 1 keV, sidefeeding into the HCI $M$-shell due to direct electron transfer from target-atom core levels supplies a velocity-dependent filling rate. The transfer cross section increases with the energetic vicinity of inner projectile and target states [@Gre95], which is maximal for the Ar$^{9+}$ $3s$ level with the $2p$ bulk level of Si possessing $E_b^{2p}=109$ eV (cf.  Fig \[fig:bind\]). Experimentally, a Si-target $LMM$ Auger peak directly connected to the vacancy transfer can be observed in spectra with $E_{\mbox{kin}} \geq 1$ keV. For 2 keV projectiles traveling through a silicon crystal in the (100)-direction, collisional filling supplies a $3s$ sidefeeding rate of $\Gamma_{3s}^{coll} = v_p/d = 1.8 \times 10^{14}$s$^{-1}$, assuming one electron transfer per collision. In the energy range below 1 keV, collision frequencies are small and the distance of closest approach is too large, even for head-on collisions, to allow a sufficient level crossing for sidefeeding [@Gre95].

Simulation of the 121eV- and 2 keV-spectra {#sec:121eVspectrum}
------------------------------------------

In Fig. \[fig:Ar:8:9:Si:sim:exp\] we plot the experimental spectra from Fig. \[fig:Ar9Si:45deg:energy:normreg\] in three subplots and compare them with our simulation results, which are convoluted with a Gaussian function of 3eV width. In this section we look at the Ar$^{9+}$ spectra and postpone the discussion of the Ar$^{8+}$ spectra, displayed in the same plot, to Section \[sec:Ar8spectra\]. The difference between the simulated spectra in (a) and (b) stems from collisional filling, which is enabled exclusively for $E_{\mbox{kin}}=2$ keV. In addition, we convoluted the 2 keV-spectrum with an exponential function with a decay length of 3 a.u. to compensate for elastic and inelastic energy losses of the electrons on their way through the bulk region.
For $E_{\mbox{kin}} < 2$ keV, this damping becomes negligible due to the shallow projectile penetration. The intensity ratios among the different $LMM$ subgroups and their peak positions are approximately reproduced. The $3pp$ region displays too much intensity, though, which might be caused by the $LMM$-rate fit formula (cf. Section \[sec:intraatomic\]) overestimating the $3pp$ rates for high $M$-populations; see also [@Lar71C] (Table VI). The $3ss$ intensity is clearly too low, suggesting that other transition types not considered in our model may contribute to this region. The enhancement of the $3sd$ peak, in parallel with the disappearance of the $3pd$ peak and the intensity gain of the $3sp$ region towards the $E_{\mbox{kin}}=2$ keV-spectrum, is a clearly visible consequence of the collisional filling (cf.  Table \[tab:intrarates\]). The average $M$-sublevel populations at the time of $LMM$ emission (cf.  Table \[tab:intrarates\]) indicate that the high-energy shoulder is generated during the early subsurface interaction phase. The dominant $3pp$ peak, on the other hand, occurs at high $M$-populations, benefiting from the growing $MCV$ rates into the $3p$ level and the disappearance of the $3d$ level towards high $n_M$. The missing $3dd$ intensity confirms the presence of the fast $MMN$ and $MMM$ decay channels, which inhibit the buildup of $3d$ populations larger than one. In the experimental spectra, the low-energy tail displays much less structure than in the simulation, indicating that the mere spectator-electron model might be incomplete. We carried out other simulations in which 20% of the $LMM$ transitions start out from singly ionized initial configurations, such that the peak regions lose part of their intensity to the low-energy side. In this way the intensity dip around 200eV is partially ironed out and the low-energy tail stretches beyond 160eV.
A similar effect could be induced by the consideration of L$_{2,3}MMM$ double Auger processes [@Abe75], for which Carlson and Krause [@Car65] measured a relative contribution to all radiationless transitions of 10$\pm$2% and energy shifts of more than 10eV [@Sie69]. For the sake of clarity of the displayed simulation results, we did not implement this correction in Fig. \[fig:Ar:8:9:Si:sim:exp\].

Simulation for a statistical initial $M$-population {#sec:simulation}
---------------------------------------------------

It is very surprising that reducing the incident energy from about 121eV to 9eV still produces a significant shift in the relative peak intensities. On the one hand, velocity-dependent below-surface filling can be ruled out in this energy domain; on the other hand, this effect must originate from different subshell populations at the time of $LMM$ emission. Let us assume for the moment that the individual $M$-subshells of each particle are filled statistically (by a Poisson distribution which is cut off at the subshell degeneracy) at the first bulk layer according to their respective degeneracy, i.e., $\left<n_{3\ell}\right>$=2/18, 6/18 and 10/18, multiplied by the mean total $M$-shell population $\left<n_M\right>$, for the $3s$-, $3p$- and $3d$ level, respectively. In Fig. \[fig:Ar:8:9:Si:sim:exp\] we present results of a Monte Carlo simulation with $\left<n_M\right>=2$. For a large part of these initial configurations, new [*M*]{}-shell redistribution channels open up via $MMN$s and sCKs, which are energetically forbidden for $n_M=0$, and carry part of the $3d$ population immediately into the $3p$- rather than the $3s$ level. The simulations in Fig. \[fig:Ar:8:9:Si:sim:exp\](b,c) and Table \[tab:intrarates\] indeed reproduce the intensity shift from the $3sd$ peak to the $3pd$ peak at 232eV going from $E_{\mbox{kin}}=$121eV to 9eV.
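The statistical filling ansatz above can be sketched as independent, degeneracy-weighted truncated-Poisson draws per subshell. Rejecting draws that exceed the degeneracy is our reading of "cut off at the subshell degeneracy"; other truncation schemes are possible:

```python
import numpy as np

DEGENERACY = {"3s": 2, "3p": 6, "3d": 10}

def sample_initial_M_population(n_M_mean=2.0, rng=None):
    """Draw an initial (n_3s, n_3p, n_3d) configuration at the first bulk
    layer.  Each subshell mean is the degeneracy-weighted share of the
    total mean M-population <n_M>."""
    rng = np.random.default_rng() if rng is None else rng
    pops = {}
    for shell, g in DEGENERACY.items():
        mean = n_M_mean * g / 18.0          # 18 = total M-shell degeneracy
        n = rng.poisson(mean)
        while n > g:                        # truncate at the degeneracy
            n = rng.poisson(mean)
        pops[shell] = n
    return pops
```

With $\left<n_M\right>=2$ the $3d$ level receives the largest mean share (10/18 of the total), reflecting its high degeneracy.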
We remark that this simple model of an initial $M$-shell population before bulk penetration does not hold exactly for the Ar$^{8+}$ simulation, where we set $n_{3s}=1$, $\left<n_{3p}\right>=1$ and $\left<n_{3d}\right>=1$. We provide a physical motivation for the model in Section \[sec:Ar8spectra\].

The evolution of the subsurface cascade {#sec:evolution}
=======================================

According to the experimental clues and arguments of Sections \[sec:observations\] and \[sec:grouping\], the overwhelming part of the high-energy branch originates from below-surface emission. For this phase, we designed the simulation presented in the previous section. In the following we describe the evolution of the subsurface cascade on the basis of the simulation results combined with the experimental data. As the HCI penetrates into the crystal bulk region, all electrons that have previously been captured into outer Rydberg levels are lost, and band electrons neutralize the core charge over a distance of roughly the Debye screening length of the electron gas. Thus a second generation of hollow atoms emerges within the bulk. Prior to any electron capture, the $O$-shell of the Ar$^{9+}$ core is the uppermost ionic shell to still fit below $V_0$. As long as no more than two electrons populate inner levels, solely XCV transitions (with X$\in${L, M, N, O}) can proceed. Since the XCV transition probability increases with the effective screening and degeneracy of the final level, XCVs preferentially populate the [*O*]{}-shell. Before any significant NOO and MNO Auger emission can take place, the rapid XCV filling successively pushes the $O$- and $N$-shell above $V_0$. This period is accompanied by LCV, LNO, LMN transitions, etc., creating the smoothly decreasing part of the spectrum above the $3pd$ edge.
We note that this early phase of the neutralization may already start before complete bulk penetration, while the projectile travels through the vacuum tail of the valence band. The loss of whole atomic shells into the valence band stops when the $M$-shell is reached. At this point of the scenario, a low [*M*]{}-shell population with a statistical preference for the $3d$ level (due to its high degeneracy) is likely to occur. [*MMN*]{}-CK processes transfer these $3d$ electrons quickly into the $3s$ level before a large $3d$ population can accumulate. The other [*MMN*]{} transitions, $(3p)(3d)$N and later $(3s)(3p)$N, are energetically forbidden. This $M$-shell redistribution is accelerated by high-speed sCK processes with rates of the order of $10^{15}$s$^{-1}$. Whereas $3sdd$ transitions are immediately possible, $MMM$ transitions into the $3p$ level require $n_M > 3$. Along this early $M$-shell redistribution phase, the [*M*]{}-population nevertheless remains fairly constant at $n_M \simeq 2+n_{3s}$, because one $M$-electron is lost in each $MMM$ process. Thus $3sd$-$LMM$ processes out of initial $3s^2d$ constellations are characteristic for this phase, causing the 224eV-peak in the experimental spectra. This phase lasts comparatively long because the condition $n_M \simeq 2$ keeps the $MCV$ rates (cf. Table \[tab:MCVrates\]) minimal. The Ar$^{9+}$ core will always be surrounded by an induced VB charge cloud (C) because the number of bound states $n_b$ below $V_0$ is smaller than the projectile core charge $q=9$ (Fig. \[fig:bind\]). Hence $MCV$ processes continue to populate empty $M$-levels faster and faster with increasing $n_M$. As soon as $n_M>3$ is satisfied, $3pdd$ sCKs become energetically possible and a $3p$ population builds up, while the $3d$ population remains approximately at one due to the presence of the $MMN$ and sCK decay channels.
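The $n_M$-dependent opening and closing of redistribution channels described in this narrative can be condensed into a small helper. The thresholds below are read off the text and are purely illustrative; they are no substitute for the underlying level-energy calculation:

```python
def open_redistribution_channels(n_M):
    """Which fast M-shell redistribution channels are open for a given total
    M-population n_M (illustrative gating as read from the cascade narrative)."""
    channels = []
    d_level_exists = n_M <= 4        # for n_M > 4 the 3d level merges into the VB
    if d_level_exists and n_M >= 1:  # all channels are energetically forbidden for n_M = 0
        channels.append("3sdd sCK")          # immediately possible
        channels.append("(3s)(3d)N MMN")     # fast 3d -> 3s transfer
    if d_level_exists and n_M > 3:
        channels.append("3pdd sCK")          # builds up the 3p population
    return channels
```

Once the $3d$ level has vanished ($n_M > 4$), the fast redistribution channels close and the direct $LMM$ decays, dominated by $3pp$, take over.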
$3dd$ transitions require the transient formation of very unstable $M$-shell configurations that are unlikely to occur, so they do not appear in the spectra. At $n_M>4$, the $3d$ level vanishes into the valence band, thus interrupting further $3sd$- and $3pd$ emission. Since the $3pp$-$LMM$ transitions possess much higher rates than any other $LMM$ transitions, they clearly prevail during this later stage of the subsurface interaction. The dominant peak, which is centered at 211eV for $E_{\mbox{kin}} \leq 121$ eV in Fig. \[fig:Ar9Si:45deg:energy:normreg\] and corresponds to $3pp$ transitions with $n_M \geq 7$, provides evidence for the described mechanism, in particular for the high $MCV$ rates into the $3d$- and later the $3p$ level. The intensity gain of the $3sd$ peak with respect to the $3pd$ peak for high $E_{\mbox{kin}}$ is consistent with the greater time window of the former transition during the early interaction phase. This effect furthermore corroborates the assumption of collisional sidefeeding into the $3s$ level and thereby the 224eV-peak assignment itself. All phases are accompanied by $3ss$- and $3sp$-$LMM$ transitions, which constitute the low-energy tail and the region around the two faint subpeaks between about 180eV and 200eV, respectively.

Spectra of metastable Ar$^{8+}$ projectiles {#sec:Ar8spectra}
===========================================

Seeking to extract additional experimental evidence for the described Ar$^{9+}$ interaction mechanism, we performed a series of measurements involving metastable ($2p^53s$) Ar$^{8+}$ ions colliding at $\Theta=45^\circ$ and various kinetic energies with a Si crystal (Fig. \[fig:Ar8Si:45deg:energy:normreg\]). A direct comparison with the corresponding Ar$^{9+}$ series in Fig. \[fig:Ar9Si:45deg:energy:normreg\] shows that the general shape of the spectra is unaffected by the additional $3s$ electron, except for a slight enhancement of the $3ss$- and $3sp$ intensities.
In fact, the only new structure observed is a small peak arising at 247eV for $E_{\mbox{kin}}=8$ eV and, generally, for the lowest perpendicular projectile velocities $v_\perp$, which can also be deduced from Fig. \[fig:Ar8Si:8eV:angle:normreg\]. The 247eV-peak has been discussed in detail in [@Duc97L], along with corresponding peaks which occur under similar conditions in the spectra of second-row ions in $1s2s$ configurations. It can be assigned to so-called [*LMV*]{}$_W$ transitions, in the course of which the $3s$ electron jumps into the $2p$ vacancy. The emitted electron comes from a level with a binding energy equal to the target work function, $W=4.6$ eV for silicon. Due to the shape of the subsurface potential $V_{\mbox{eff}}$ (Fig. \[fig:potential\]), these levels cannot exist after the projectile has penetrated into the bulk. As mentioned earlier in Section \[sec:grouping\], the strong decrease in spectral intensity above 235eV gives evidence for this assertion. The identification of an above-surface [*LMV*]{}$_W$ peak suggests that inner atomic shells X$\in${M, N, O, $\ldots$} could be partially filled before bulk penetration by an autoionization process XV$_W$V$_W$. We mentioned earlier that $MCV$ processes also set in with continuously increasing rates as the HCI travels through the vacuum tail of the valence band. Compared to the $MCV$ filling within the bulk, however, these near-surface $M$-shell filling channels are likely to proceed considerably more slowly. Since sCK processes require certain minimum $M$-shell populations, they are largely inhibited for these constellations. One can thus expect that a very slow projectile might enter the bulk region with a low $M$-shell population $\left< n_M \right> \simeq 2$ favoring the $3d$ level due to its degeneracy. In this way we can motivate the ansatz for the simulation of the spectra at minimum $E_{\mbox{kin}}$ in Section \[sec:simulation\], even though explicit experimental evidence is still missing.
The astonishing similarity of the rest of the Ar$^{8+}$ and Ar$^{9+}$ data bears out our previous assumption of fast $MCV$, $MMN$ and sCK processes within the bulk, which swiftly redistribute any $M$-shell population into the $3s$ level. In order to compensate for the additional $3s$ electron in the Ar$^{8+}$ $M$-shell, sCKs have to proceed before an $LMM$ transition takes place. This automatically implies that the $M$-shell must be sufficiently populated and quickly replenished at this point. Because a large $M$-shell population far in front of the surface would be in contradiction to all previous experiments, we can exclude the above-surface zone as the origin of the emitted electrons. This obviously also holds for the $3pd$ peak at 232eV. We made use of the close correspondence of the Ar$^{8+}$ and Ar$^{9+}$ spectra to check the mechanisms and rates entering our interaction model. For the simulations of Ar$^{8+}$ projectiles, which are also shown in Fig. \[fig:Ar:8:9:Si:sim:exp\], we kept the same transition types and rates but added a $3s$ electron to the initial $M$-shell population. Within the accuracy of our interaction model, the similarity of the two series is well reproduced.

Summary and discussion {#sec:discussion}
======================

In this work we have presented detailed experimental results on the interaction of Ar$^{9+}$ and metastable Ar$^{8+}$ ions impinging on a Si(100) crystal. In doing so, we focused on autoionization spectra measured at low impact energies. In this energy domain, we identified several new spectral features which vary with the perpendicular projectile velocity component and with the angles of incidence and observation. A consistent interaction model has been suggested, for which $MCV$ processes and the energetic vicinity of the Ar$^{9+}$ $3d$ subshell to the bottom of the silicon valence band play a decisive role. The subsurface interaction phase has been simulated using a Monte Carlo code.
Feeding the code with realistic transition rates, we have been able to reconstruct the experimental peak positions and intensity shifts for different projectile energies. Our results give indirect evidence for a very effective below-surface $MCV$ filling, as postulated by theory. In contrast to $KLL$ spectra of hydrogenlike second-row ions impinging on metal surfaces, the main intensity of the Ar$^{9+}$ $LMM$ spectra is located on the high-energy side of the peak region, corresponding to a massively occupied $3p$ subshell. We demonstrated that this peculiar shape of the high-energy region is linked to the special role of the $3d$ subshell, which mediates a fast $M$-shell filling in the beginning and later disappears due to the screening by the valence-band electron gas. We presented spectra measured at small observation angles with respect to the surface parallel. They contain a high-intensity peak region which most likely originates from Auger emission of incoming or reflected projectiles that do not yet experience the full bulk screening. In addition, we spotted a distinct peak in the Ar$^{8+}$ spectra for the lowest perpendicular incident velocities, which can be explained by a unique above-surface process involving the $L$-vacancy and two electrons from the resonantly populated shells. HCI beams have been regarded as a candidate for future surface-modification techniques for some time. It has been demonstrated that single ions can give rise to nanoscale features on certain surfaces [@Par95]. Also, sputter yields on insulators could be significantly enhanced by using slow HCIs instead of fast singly charged projectiles. At very low kinetic energies, the energy deposition concentrates on a very small area which extends approximately one lattice constant in the vicinity of the first bulk layer.
In this manner, an energy of several keV can be carried into this zone where it might be converted into activation energy for processes like sputtering, crystal growth and surface catalysis. Research in this field is under way and first results have been presented already. Acknowledgments {#acknowledgments .unnumbered} =============== This work was sponsored by the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie under Contract No. 13N6776/4. We are also grateful for support from the Ministerium für Wissenschaft und Forschung des Landes Nordrhein-Westfalen. J. Burgdörfer, P. Lerner, and F. W. Meyer, Phys. Rev. A [**44**]{}, 5674 (1991). J. Ducrée, F. Casali, and U. Thumm, accepted by Phys. Rev. A (1998). F. W. Meyer, L. Folkerts, H. O. Folkerts, and S. Schippers, Nucl. Instrum. Methods Phys. Res., Sect. B [**98**]{}, 441 (1995). S. Hatke, A. Hoffknecht, S. Hustedt, J. Limburg, I. G. Hughes, R. Hoekstra, W. Heiland, and R. Morgenstern, Nucl. Instrum. Methods Phys. Res., Sect. B [ **115**]{}, 165 (1996). S. Schippers, J. Limburg, J. Das, R. Hoekstra, and R. Morgenstern, Phys. Rev. A [**50**]{}, 540 (1994). F. W. Meyer, S. H. Overbury, C. D. Havener, P. A. [Zeijlmans van Emmichoven]{}, and D. M. Zehner, Phys. Rev. Lett. [**67**]{}, 723 (1991). R. Köhrbrück, M. Grether, A. Spieler, N. Stolterfoht, R. Page, A. Saal, and J. Bleck-Neuhaus, Phys. Rev. A [**50**]{}, 1429 (1994). S. Hustedt, J. Freese, S. Mähl, W. Heiland, S. Schippers, J. Bleck-Neuhaus, M. Grether, R. Köhrbrück, and N. Stolterfoht, Phys. Rev. A [**50**]{}, 4993 (1994). J. Limburg, S. Schippers, I. Hughes, R. Hoekstra, R. Morgenstern, S. Hustedt, N. Hatke, and W. Heiland, Nucl. Instrum. Methods Phys. Res., Sect. B [ **98**]{}, 436 (1995). R. Page, A. Saal, J. Thomaschewski, L. Aberle, J. Bleck-Neuhaus, R. Köhrbrück, M. Grether, and N. Stolterfoht, Phys. Rev. A [**52**]{}, 1344 (1995). N. Stolterfoht, A. Arnau, M. Grether, R. Köhrbrück, A. Spieler, R. Page, A. Saal, J. 
Thomaschewski, and J. Bleck-Neuhaus, Phys. Rev. A [**52**]{}, 445 (1995). J. Limburg, S. Schippers, R. Hoekstra, R. Morgenstern, H. Kurz, M. Vana, F. Aumayr, and H. Winter, Nucl. Instrum. Methods Phys. Res., Sect. B [**115**]{}, 237 (1996). A. Arnau, P. A. [Zeijlmans van Emmichoven]{}, J. I. Juaristi, and E. Zaremba, Nucl. Instrum. Methods Phys. Res., Sect. B [**100**]{}, 279 (1995). R. [Díez Mui[ñ]{}o]{}, A. Arnau, and P. M. Echenique, Nucl. Instrum. Methods Phys. Res., Sect. B [**98**]{}, 420 (1995). R. [Díez Mui[ñ]{}o]{}, N. Stolterfoht, A. Arnau, A. Salin, and P. M. Echenique, Phys. Rev. Lett. [**76**]{}, 4636 (1996). S. T. de Zwart, Nucl. Instrum. Methods Phys. Res., Sect. B [**23**]{}, 239 (1987). L. Folkerts and R. Morgenstern, Journal de Physique [**Colloque C1, suppl. no 1**]{}, 541 (1989). S. T. de Zwart, A. G. Drentje, A. L. Boers, and R. Morgenstern, Surf. Sci. [**217**]{}, 298 (1989). H. J. Andrä, A. Simionovici, T. Lamy, A. Brenac, G. Lamboley, S. Andriamonje, J. J. Bonnet, A. Fleury, M. Bonnefoy, M. Chassevent, and A. Pesnelle, Z. Phys. D [**21, suppl.**]{}, 135 (1991). R. Köhrbrück, K. Sommer, J. P. Biersack, J. Bleck-Neuhaus, S. Schippers, P. Ronci, D. Lecler, F. Fremont, and N. Stolterfoht, Phys. Rev. A [**45**]{}, 4653 (1992). H. J. Andrä, A. Simionovici, T. Lamy, A. Brenac, and A. Pesnelle, Europhys. Lett. [**23**]{}, 361 (1993). L. Folkerts and R. Morgenstern, Europhys. Lett. [**13**]{}, 377 (1990). J. Limburg, J. Das, S. Schippers, R. Hoekstra, and R. Morgenstern, Phys. Rev. Lett. [**73**]{}, 786 (1994). H. Winter, C. Auth, R. Schuch, and E. Beebe, Phys. Rev. Lett. [**71**]{}, 1939 (1993). J. F. Ziegler, J. P. Biersack, and U. Littmark, in [*The [S]{}topping and [R]{}ange of [I]{}ons in [S]{}olids*]{}, edited by J. F. Ziegler (Pergamon Press, New York, 1985), Vol. 1. R. D. Cowan, [*The Theory of Atomic Structure and Spectra*]{} (University of California Press, Berkeley, 1981). A. Arnau, R. Köhrbrück, M. Grether, A. Spieler, and N.
Stolterfoht, Phys. Rev. A [**51**]{}, R3399 (1995). M. H. Kalos and P. A. Whitlock, [*Monte [C]{}arlo [M]{}ethods*]{} (John Wiley & Sons, New York, 1986), Vol. I: Basics. F. P. Larkins, J. Phys. B [**4**]{}, L29 (1971). G. B. Armen and F. P. Larkins, J. Phys. B [**24**]{}, 741 (1991). M. Grether, A. Spieler, R. Köhrbrück, and N. Stolterfoht, Phys. Rev. A [**52**]{}, 426 (1995). T. [Å]{}berg, in [*Atomic Inner-Shell Processes I: Ionization and Transition Probabilities*]{}, edited by B. Crasemann (Academic Press, New York, 1975), Chap. 9, pp. 353-375. T. A. Carlson and M. O. Krause, Bull. Amer. Phys. Soc. [**10**]{}, 455 (1965). K. Siegbahn, C. Nordling, A. Fahlman, R. Nordberg, K. Hamrin, J. Hedman, G. Johansson, T. Bergmark, L. O. Werme, R. Manne, and Y. Baer, [*ESCA Applied to Free Molecules*]{} (North-Holland, Amsterdam, 1969). J. Ducrée, J. Mrogenda, E. Reckels, M. Rüther, A. Heinen, Ch. Vitt, M. Venier, J. Leuker, and H. J. Andrä, submitted to Phys. Rev. Lett. (unpublished). D. C. Parks, R. Bastasz, R. W. Schmieder, and M. Stöckli, J. Vac. Sci. Technol. B [**13**]{}, 941 (1995).
  process   $\Gamma^{\mbox{filled}}_{3\ell_1\ell_2}$   $E_{\mbox{kin}}=9$ eV ($\left<n_M \right>=2$)   $E_{\mbox{kin}}=121$ eV   $E_{\mbox{kin}}=2$ keV
  --------- ------------------------------------------ ----------------------------------------------- ------------------------- -------------------------
  $3ss$     $3.31 \times 10^{12}$                      0.8% (2.0$|$4.5$|$0.1)                          1.0% (2.0$|$4.3$|$0.1)    2.0% (2.0$|$4.1$|$0.1)
  $3sp$     $5.29 \times 10^{13}$                      15.9% (1.6$|$5.1$|$0.0)                         17.1% (1.7$|$5.1$|$0.0)   22.1% (2.0$|$5.0$|$0.0)
  $3pp$     $1.98 \times 10^{14}$                      72.5% (1.4$|$5.5$|$0.0)                         70.0% (1.5$|$5.5$|$0.0)   66.8% (2.0$|$5.4$|$0.0)
  $3sd$     $6.20 \times 10^{14}$                      5.3% (1.2$|$0.5$|$1.4)                          7.2% (1.2$|$0.4$|$1.5)    7.4% (2.0$|$0.3$|$1.4)
  $3pd$     $1.65 \times 10^{15}$                      4.6% (0.6$|$1.5$|$1.4)                          3.7% (0.8$|$1.4$|$1.4)    1.4% (1.5$|$1.2$|$1.2)
  $3dd$     $4.13 \times 10^{14}$                      0.8% (0.4$|$0.4$|$2.4)                          1.0% (0.5$|$0.2$|$2.3)    0.4% (1.2$|$0.1$|$2.2)

  : Monte Carlo simulation results on $LMM$ processes for Ar$^{9+}$ impinging on Si(100) with $E_{\mbox{kin}}=9$ eV, 121eV and 2 keV. $\Gamma^{\mbox{filled}}_{3\ell_1\ell_2}$ gives the $LMM$ rate for a filled $M$-shell as required for the implemented fit formula [@Lar71C]. For each simulation, we list the relative intensity and, in brackets, the average ($n_{3s}|n_{3p}|n_{3d}$)-configuration at the time of $LMM$ decay, which provides information about the evolution of the subsurface cascade.[]{data-label="tab:intrarates"}

  process   $E_{\mbox{kin}}=9$ eV ($\left<n_M \right>=2$)   $E_{\mbox{kin}}=121$ eV   $E_{\mbox{kin}}=2$ keV
  --------- ----------------------------------------------- ------------------------- -------------------------
  $3sdd$    66.2% (0.2$|$0.4$|$2.4)                         81.8% (0.2$|$0.2$|$2.4)   11.7% (1.0$|$0.1$|$2.2)
  $3pdd$    16.3% (1.0$|$0.2$|$2.8)                         21.0% (1.0$|$0.2$|$1.5)   17.9% (1.5$|$0.1$|$2.4)

  : Monte Carlo simulation results on $MMM$ processes for Ar$^{9+}$ impinging on Si(100) with $E_{\mbox{kin}}=9$ eV, 121eV and 2 keV.
The table lists the average occurrence of each transition type and, in brackets, the average $M$-sublevel population at the time of $MMM$ emission. Other sCK transitions are energetically forbidden.[]{data-label="tab:sCKrates"}

  $n_M$   $\Gamma^{MCV}_{3s}$ \[s$^{-1}$\]   $\Gamma^{MCV}_{3p}$ \[s$^{-1}$\]   $\Gamma^{MCV}_{3d}$ \[s$^{-1}$\]   $\Gamma^{MCV}_{tot}$ \[s$^{-1}$\]
  ------- ---------------------------------- ---------------------------------- ---------------------------------- -----------------------------------
  0       $9.92 \times 10^{11}$              $2.07 \times 10^{12}$              $6.61 \times 10^{13}$              $8.10 \times 10^{14}$
  1       $1.21 \times 10^{13}$              $2.54 \times 10^{13}$              $9.18 \times 10^{13}$              $1.08 \times 10^{15}$
  2       -                                  $3.26 \times 10^{13}$              $1.22 \times 10^{14}$              $1.41 \times 10^{15}$
  3       -                                  $4.46 \times 10^{13}$              $1.44 \times 10^{14}$              $1.70 \times 10^{15}$
  4       -                                  $6.53 \times 10^{14}$              -                                  $2.61 \times 10^{15}$
  5       -                                  $5.78 \times 10^{14}$              -                                  $1.74 \times 10^{15}$
  6       -                                  $4.48 \times 10^{14}$              -                                  $9.17 \times 10^{14}$
  7       -                                  $3.27 \times 10^{14}$              -                                  $3.27 \times 10^{14}$

  : $MCV$ rates for the Ar$^{9+}$/Si system. The table lists $MCV$ transition rates per spin state $\Gamma^{MCV}_{3\ell}$ for each $M$-sublevel and the overall $MCV$ rate $\Gamma^{MCV}_{tot}$, taking into account occupation statistics, as evaluated by DFT calculations. $n_M$ gives the initial number of $M$-electrons. The rates refer to initial $M$-shell ground-state configurations. For $n_M=0$, $MCV$ processes filling the $3d$ level possess by far the highest rates. As the subsurface cascade proceeds and $n_M>4$, the $3d$ level vanishes and the $MCV$s into the $3p$ level rapidly populate the $M$-shell.[]{data-label="tab:MCVrates"}

[^1]: Author to whom correspondence should be addressed. Electronic address: ducree@uni-muenster.de

[^2]: There has obviously been a mistake in the calibration of the plot on the energy axis that has been corrected in [@Fol89].
---
abstract: 'The effect of fat content in cheese curds on their rheological properties was examined using dynamic shear measurements. Surplus fat addition to milk samples caused two distinct types of changes in the temperature dependence of the viscoelastic moduli of the resultant curds. The first was a significant reduction in the moduli over a wide temperature range, which is attributed to the presence of liquefied fat globules within the milk protein network. The second was the excess contribution to the low-temperature moduli owing to the reinforcing effect of solidified fat globules. An upward shift in the sol-gel phase transition temperature driven by an increased fat content was also observed.'
author:
- Hiroyuki Shima
- Morimasa Tanimoto
bibliography:
- 'HShima\_fatcontrol\_EFRT.bib'
date: 'Received: date / Accepted: date'
title: 'Effect of milk fat content on the viscoelasticity of mozzarella-type cheese curds'
---

Introduction
============

Fat and protein are the two primary components of raw milk. The fat content of bovine milk is nearly 4% by weight, and it is dispersed in milk serum as globules with diameters that range from 0.2 $\mu$m to 15 $\mu$m, ca. 4 $\mu$m on average [@Michalski2004]. Like fat globules, casein proteins (i.e., the major class of milk protein) exist as colloidal particles, known as casein micelles, with diameters that range from 50 nm to 500 nm (average 120 nm). These colloidal domains comprise almost 80 % of the total solid content in milk. Therefore, their structural stability and inter-particle interactions strongly affect the quality of dairy products such as cheese, yoghurt, and butter. In particular, the presence of fat in cheese is necessary to develop the characteristic flavour profile and the favoured mouth-feel.
The production of natural cheese is initiated by the addition of rennet to milk. The rennet-induced proteolysis of the surfaces of casein micelles leads to their aggregation, resulting in a three-dimensional protein network. The network exhibits non-uniform viscoelasticity in accordance with changes in temperature, pH, and protein concentration [@Nabulsi; @Catarino]. Cavities in the network are filled with fat globules and some whey; the total mixture of these materials comprises a cheese curd. Many fat globules in the curd remain stored, even after curd syneresis is completed, and they contribute to the desirable functional properties of the final cheese product. In fact, artificial removal of fat from the curd causes quality degradation, leading to a firm and dry cheese that melts poorly [@Mistry2001]. Toward quality improvement, numerous studies have focused on the effect of fat content or its reduction in cheese. Despite consumer enthusiasm for fat-free diets, these attempts have met with limited success [@Banks; @Childs; @Skeie]. A better understanding of the interplay between fat globules and the protein network is indispensable for developing a solution. Aside from the practical motivation, it is also interesting from an academic perspective to explore the effects of fat content on the rheology of cheese curds. An important feature of fat globules, which contributes to curd rheology, is the wide variety in size and melting temperature. The broad distribution of fat globule sizes allows them to interact with cheese microstructures in multiple ways. Large fat globules are likely to disrupt a portion of the protein network and suppress direct cross-linking between protein threads. Hence, if they are liquefied, large globules are expected to plasticize adjacent protein threads [@Johnstona1984], yielding a structurally loose matrix with reduced firmness.
In contrast, small globules tend to occlude the fine empty spaces in the network [@MichalskiLait2004] and are thought to act as reinforcing fillers [@Desai1994] if they are in the solid phase. However, a simple description of the temperature dependence of the fat contribution may be insufficient owing to the wide variety of fat melting points. There is no sharp difference between the liquid and solid states of fat globules in curds. A single fat globule encloses many kinds of triglyceride isomers with different melting points [@Jensen1991; @Lopez2005], and thus the solidity and fluidity of the globule are determined by the relative proportion of isomers. The actual melting temperature ranges from $-40 ^\circ$C to 40$^\circ$C, between which crystalline and liquid fat coexist in the curd [@Pilhofer1994]. It remains unclear how the two competing roles of fat globules, as plasticizers and reinforcing fillers, are manifested in thermally induced changes in cheese curd rheology. ![image](fig01a.eps){width="30.00000%"} ![image](fig01b.eps){width="30.00000%"} ![image](fig01c.eps){width="30.00000%"} In the present study, we address the effect of fat content and pH control on the viscoelastic moduli of rennet cheese curds. The pH control allows us to examine the effects of fat content under various structural conditions of the protein network. High pH conditions cause protein networks to become weaker and more porous. In contrast, low pH conditions result in network contraction, in which either or both of the effects as plasticizers or fillers may be enhanced. To verify our conjecture, we performed dynamic shear tests and measured the variation in the temperature dependence of the moduli with changes in pH. Particular attention was paid to the rheological behaviours below 20 $^\circ$C and above 50 $^\circ$C, wherein most fat globules are solidified and liquefied, respectively.
Material and method =================== Preparation of sample milk -------------------------- Figure \[fig\_diagram\] displays a flow chart summarizing the production of pH- and fat-controlled cheese curds. Raw milk was obtained from the Kiyosato Milk Plant located at the foothills of Mount Yatsugatake, Japan. To assess the effects of fat content on cheese rheology, two classes of fat-adjusted milk were prepared. Skim milk was produced with a fat content of less than 1% by weight. Fat-enriched milk was produced by adding 1 kg of fresh cream with 47 % fat into 17 kg of raw milk with 3.8 % fat. The fat-enriched milk contained 6.2 % fat by weight. Each milk sample (18 L) was first pasteurized by maintaining the sample at 65 $^\circ$C for 30 min. This process eliminates bacteria, thus preventing the degradation of milk proteins at high temperatures. After pasteurization, the samples were then cooled to 31 $^\circ$C. The sample pH was 6.65 at this stage, and was directly determined using an electrode-type pH meter (SK-620PH, skSATO, Tokyo, Japan). Starter insertion ----------------- Milk acidification was triggered by adding 18 mL of a Lactobacillus culture solution to the milk sample. The culture solution was a mixture of 0.3 g of Direct Vat Set (DVS) Lactobacillus starter (CHN-11, Chr. Hansen, Nosawa & Co., Ltd., Tokyo, Japan) with 300 mL of pasteurized milk that was prepared in advance. After adding the culture solution, the sample was maintained at 31 $^\circ$C, which is the temperature that produces the optimal Lactobacillus activity, for 30 min. The sample pH at this stage was 6.50. ![image](fig02.eps){width="70.00000%"} Cheese curd production ---------------------- Rennet was then added to the above sample that was slightly acidified by lactobacilli. In this experiment, 0.5 g of rennet (CHY-MAX, Chr. Hansen, Nosawa & Co., Ltd.) dissolved in sterile cold water was added to the sample, which was then maintained at 31 $^\circ$C for 30 min. 
After the milk started to coagulate, the sample was cut into cubes of $12\times 12 \times 12$ mm$^3$ to remove a portion of the whey from the curds. After cutting, the sample was gently agitated for 5 min to encourage the removal of whey. As a result, whey corresponding to one-third of the original sample weight was eliminated, yielding two classes of curd granules that differed in fat content. To complete the whey removal process, hot water was added gradually to the curd granules so that the sample temperature increased at a rate of 0.5 $^\circ$C/min. When it reached 38 $^\circ$C, the granules were gently agitated again for 45 min; eventually, all whey was removed. In the final step, the curds were aged until they reached the target pH (4.8-5.7). The time required to attain the curds with the lowest pH was approximately 6 h. After pH adjustment, the series of curds with different pH values were frozen and stored. Immediately before measurement, the curds were defrosted in a refrigerator and then stirred at 50-60 $^\circ$C.

Composition analysis
--------------------

The chemical composition of the fat-controlled curds is summarized in Table \[table01\]. For both the no-fat and high-fat samples, the highest pH among the samples examined is displayed, together with the analysis method. It is noteworthy that the amount of fat was suppressed to 2.7 g/100 g in the no-fat sample, while it increased to 30.1 g/100 g in the high-fat sample. The ratios of calcium to protein are presented in the bottom row of Table \[table01\] and indicate that fat control has no effect on the calcium content within the protein network.
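As a quick arithmetic check, the Ca/protein ratios quoted in the bottom row of Table \[table01\] follow directly from the Ca and protein rows:

```python
# Values from Table [table01], in g per 100 g of curd
samples = {
    "no-fat":  {"Ca": 1.23,  "protein": 38.8},
    "high-fat": {"Ca": 0.502, "protein": 16.3},
}

ratios = {name: s["Ca"] / s["protein"] for name, s in samples.items()}
for name, r in ratios.items():
    print(f"{name}: Ca/protein = {100.0 * r:.2f} x 10^-2")
# prints 3.17 for no-fat and 3.08 for high-fat, matching the table
```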
                                        No-fat        Hi-fat         Method of analysis
  ------------------------------------- ------------- -------------- --------------------------------------
  pH                                    5.60          5.70           pH meter
  Fat (g/100g)                          [**2.7**]{}   [**30.1**]{}   Acid hydrolysis method
  Protein (g/100g)                      38.8          16.3           Kjeldahl method
  Moisture (g/100g)                     53.1          48.2           Atmospheric heating drying method
  Ash (g/100g)                          3.7           1.7            Direct ashing method
  P (g/100g)                            0.805         0.346          $^*$ICP atomic emission spectroscopy
  Ca (g/100g)                           1.23          0.502          same as above
  Ratio Ca/Protein ($\times 10^{-2}$)   3.17          3.08

\[table01\]

Dynamic shear measurement
-------------------------

Viscoelastic moduli of the cheese curds, and their dependence on pH, temperature, and fat content, were evaluated by small-amplitude oscillatory shear tests. These are non-destructive tests for determining the viscoelasticity of a material [@Gunasekaran; @Tunick2011], and they have been widely used for analysing cheeses and other foodstuffs such as chocolate [@Vaart2013] and rice bran [@YHZhang2014]. Specifically, an oscillatory shear strain is applied to the sample at a constant frequency of 1 Hz and a constant strain amplitude of 0.1 %, which satisfies the linear viscoelastic condition. The temperature was decreased from 65 $^\circ$C to 5 $^\circ$C in a ramp fashion at a constant cooling rate of 2 $^\circ$C/min. The observed quantities were the temperature ($T$) dependences of the elastic (or storage) modulus, designated $G'(T)$, and the viscous (or loss) modulus, $G''(T)$. The former is a measure of the elastic energy stored per oscillation cycle; plainly stated, this parameter indicates the degree to which the sample gives a solid-like response to the dynamic load. The latter is a measure of the energy dissipated as heat per cycle, and indicates the degree to which a sample shows liquid-like behaviour. Empirical measurements were performed using a rheometer (Anton Paar MCR 302).
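For reference, the way $G'$ and $G''$ are obtained from an oscillatory test can be sketched by Fourier projection of the stress signal onto the in-phase and quadrature components of the strain. This is a generic textbook reconstruction on synthetic data, not the rheometer's internal routine:

```python
import numpy as np

def extract_moduli(t, strain_amp, stress, freq):
    """Storage modulus G' and loss modulus G'' from a stress record measured
    under the strain gamma(t) = strain_amp * sin(2*pi*freq*t).
    t must sample an integer number of periods uniformly (endpoint excluded)."""
    w = 2.0 * np.pi * freq
    g_storage = 2.0 / strain_amp * np.mean(stress * np.sin(w * t))  # in phase
    g_loss = 2.0 / strain_amp * np.mean(stress * np.cos(w * t))     # quadrature
    return g_storage, g_loss

# synthetic check: a material with G' = 1e4 Pa and G'' = 3e3 Pa at 1 Hz
gamma0, f = 1.0e-3, 1.0
t = np.arange(0.0, 5.0, 1.0e-4)            # five full cycles
stress = gamma0 * (1.0e4 * np.sin(2 * np.pi * f * t)
                   + 3.0e3 * np.cos(2 * np.pi * f * t))
G1, G2 = extract_moduli(t, gamma0, stress, f)
tan_delta = G2 / G1                         # loss tangent, here 0.3
```

With $\tan\delta$ in hand, a common criterion locates the sol-gel transition at the temperature where $\tan\delta$ crosses unity, i.e., where $G''=G'$.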
The samples were thinly sliced and sandwiched between two flat disk plates of 25 mm radius, facing each other and separated by a gap of 2 mm. The sample surface was coated with silicone oil to prevent evaporation of water during measurements. After coating, the sample was gradually cooled, during which $G'$ and $G''$ were measured by applying the oscillatory shear. From the $G'$ and $G''$ data, the loss tangent $\tan \delta \equiv G''/G'$ was also evaluated for each temperature and pH condition.

![image](fig03a.eps){width="45.00000%"} ![image](fig03b.eps){width="45.00000%"}

Result I: Plasticizer effect and reinforcing effect
===================================================

Figure \[fig01\] shows semi-logarithmic plots of $G'(T)$ and $G''(T)$ for samples under different pH conditions. The measured data for the no-fat and high-fat samples are plotted in Fig. \[fig01\](a) and Fig. \[fig01\](b), respectively. For each pH condition, 10 samples were analysed and only minor sample-to-sample variation was detected. The figure shows that quasi-static cooling of the samples from 60 $^\circ$C (or slightly above) to 5 $^\circ$C causes exponential increases in the magnitudes of both $G'(T)$ and $G''(T)$. This rigidity enhancement driven by slow cooling is attributed to the disappearance of thermally excited vibrations in the constituents. Upon cooling, the degree of thermal vibration in the protein threads as well as in the fat globules is depressed, and local detachment between protein threads becomes difficult. As a result, the samples become firmer as the temperature decreases, consistent with everyday experience. This rigidity enhancement appears to be universal for all pH conditions and fat contents. The effect of fat control on the magnitudes of $G'(T)$ and $G''(T)$ becomes clear when the high-fat data are compared with the no-fat data presented in Fig. \[fig01\].
At every $T$ and pH, the moduli for the high-fat samples are nearly one order of magnitude smaller than those for the no-fat samples. This fat-induced reduction in the moduli is explained by the plasticizer effect of fat globules. The globules tend to fill the voids of the protein network or get between the protein threads. The insertion of fat globules into voids or gaps between protein threads keeps the threads further apart and reduces the forces of attraction between them, thus making the whole curd more flexible. Such a plasticizer effect (as well as a possible mechanical cushion effect) is pronounced at moderately high temperatures ($> 20 ^\circ$C) because a large portion of the fat in the globules melts and becomes deformable. Furthermore, the plasticizer effect is enhanced at relatively large pH ($\sim$ 5.60 - 5.70), as the protein network is rather sparse and accordingly contains numerous voids into which fat globules can penetrate. Another important consequence of adding fat to the raw milk was a rapid growth in the low-temperature moduli upon cooling. As shown in Fig. \[fig01\](b), below 20 $^\circ$C the moduli rapidly increase with cooling for every pH. This rapid growth in the moduli results from the solidification of fat globules in the protein matrix. Below 20 $^\circ$C, the solid fraction of the fat pooled in the globules gradually increases with cooling; as a result, the globules begin to function as reinforcing fillers. A similar reinforcing effect was observed in our previous study [@Shima2015], in which the viscoelastic moduli of cheese curds free from fat control were examined. In the no-fat data shown in Fig. \[fig01\](a), the reinforcing effect disappears almost completely, and there is only a slight change in the slope of the moduli curves at approximately 20 $^\circ$C, owing to the minimal fat content in the skim milk.
In short, we identified the temperature ranges within which fat globules act as plasticizers and/or reinforcing fillers. The plasticizing effect was observed over the whole temperature range (5 $^\circ$C – 60 $^\circ$C), reducing both moduli $G'(T)$ and $G''(T)$ for the high-fat samples. The reinforcing effect was observed only below 20 $^\circ$C, leading to an excess contribution to the low-temperature moduli of the high-fat samples. The identification of these temperature ranges was the first main result of the present work.

Result II: Sol-gel structural phase transition
==============================================

![image](fig04a.eps){width="45.00000%"} ![image](fig04b.eps){width="45.00000%"}

Figure \[fig02\] shows the $T$-dependences of the loss tangent, $\tan \delta$, for the fat-controlled and pH-regulated samples, in the same way as in Fig. \[fig01\]. For both the high-fat and no-fat samples, the loss tangent exceeds unity in the following temperature ranges: $T>59 ^\circ$C for the no-fat case with pH 5.21 and $T>57 ^\circ$C for the high-fat case with pH 4.82. The high-temperature ranges showing $\tan \delta >1$ indicate a structural transition between a sol phase (liquid state) and a gel phase (solid state) [@CYMTung1982; @RossMurphy1995], above which the curds react to an external stress in a more viscous and fluidic, less elastic manner [@Vliet1989]. The heat-induced flowability is believed to result from higher molecular mobility and reduced cross-linkage within the casein network [@Mleko2005]. These two (and potentially other) physicochemical factors promote molecular alignment parallel to the tensile direction, enhancing the flow of the cheese curds at temperatures that satisfy $\tan \delta > 1$. The sol-gel transition demonstrated in Fig. \[fig02\] is consistent with the thermal softening of the casein network in fully coagulated mozzarella cheese reported in the literature [@Ak1996].
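The crossing criterion $\tan\delta = 1$ used to locate $T_C$ can be implemented as a linear interpolation between the two measured temperatures that bracket the crossing. The data values below are hypothetical and only illustrate the procedure around the reported high-fat crossing near 57 $^\circ$C.

```python
def find_tc(temps, tan_delta, threshold=1.0):
    """Locate the temperature where tan(delta) crosses the threshold.

    Assumes temps is increasing and tan(delta) increases with T near the
    crossing (gel below T_C, sol above); returns None if no crossing.
    """
    for (t0, y0), (t1, y1) in zip(zip(temps, tan_delta), zip(temps[1:], tan_delta[1:])):
        if y0 < threshold <= y1:
            # linear interpolation between the bracketing points
            return t0 + (threshold - y0) * (t1 - t0) / (y1 - y0)
    return None

# Hypothetical tan(delta)(T) values (illustrative, not measured):
temps = [50, 53, 55, 56, 58, 60]
tand  = [0.6, 0.75, 0.85, 0.9, 1.1, 1.3]
print(find_tc(temps, tand))  # crossing between 56 and 58, approx. 57
```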
Using the squeezing flow method, it has been reported that mozzarella cheese shows a decreased resistance to flow as the temperature increases; the relaxation time of mozzarella cheese was reduced severalfold as the temperature increased from 30 $^\circ$C to 60 $^\circ$C, corresponding to a monotonic decrease in $G'(T)$ and $G''(T)$ with increasing $T$ (see Fig. \[fig01\]). It is interesting to examine how the pH and fat content affect the sol-gel phase transition temperature $T_C$. First, sufficient acidification is required to observe the sol-gel transition. Reducing the pH promotes the dissociation of calcium ions from the bonding parts of protein molecules. Hence, the network becomes looser, enabling appreciable flow above $T_C$. In our high-fat condition, for instance, the sample at pH 4.82 undergoes the transition at $T=57 ^\circ$C, whereas no transition occurs for larger pH values. In the no-fat condition, a pH of 5.21 suffices for samples to go through the transition at $T=59 ^\circ$C. Second, the value of $T_C$ decreases as the fat content decreases. Indeed, our previous study has shown that $T_C=43 ^\circ$C for the sample free from fat control at a pH of 4.8. Additionally, based on Fig. \[fig02\](a), for no-fat samples with pH 4.8, $T_C$ is close to or less than that for the fat-control-free samples. The increase in $T_C$ with fat content (at fixed pH) is consistent with the adhesion of fat globules to the protein matrix. Since liquefied globules that are soft and deformable tend to maintain adhesion to the surrounding protein molecules, adjacent protein threads are glued together and cannot break apart. If the fat content decreases, this gluing mechanism is suppressed and the sol phase is favoured at moderately high temperatures. This explains why the $T_C$ values for the natural-fat and no-fat samples are smaller than that of the high-fat samples. In short, we revealed the effect of fat content on the sol-gel transition temperature $T_C$.
The addition of excess fat to raw milk results in an upward shift of the $T_C$ of cheese curds due to the adhesion of fat globules to protein threads. This is the second main result of the present article.

Conclusion
==========

We investigated the effect of fat content variation on the dependences of $G'(T)$ and $G''(T)$ on $T$ for mozzarella-type cheese curds. We observed two distinct effects. Specifically, we detected a fat-induced reduction in the moduli ([*i.e.,*]{} the plasticizing effect) at all temperatures between 5 $^\circ$C and 60 $^\circ$C, and a fat-induced excess contribution to the moduli ([*i.e.,*]{} the reinforcing effect) that is observable only below 20 $^\circ$C. The former effect is attributed to the presence of liquefied fat globules wrapped around the three-dimensional protein network, which reduce the attractive forces between the threads, making the system more flexible. With additional cooling, in contrast, a portion of the fat pooled in the globules solidifies and begins to function as a reinforcing filler; this results in the latter effect. In addition to these two effects, we revealed a fat-induced increase in the critical temperature $T_C$ for the sol-gel phase transition of the cheese curds. This increase in $T_C$ indicates the adhesion of liquefied fat globules to protein threads. The authors express their gratitude to Emeritus Prof. Ryoya Niki, Prof. Katsuyoshi Nishinari, Prof. Kaoru Sato, and Mr. Kunio Ueda for fruitful discussions and technical support. This work was supported by JSPS KAKENHI Grant Numbers 25390147 and 25560035. [ $\quad $None. ]{} [ $\quad$ This article does not contain any studies with human or animal subjects. ]{}
--- abstract: | The Herglotz problem is a generalization of the fundamental problem of the calculus of variations. In this paper, we consider a class of non-differentiable functions, where the dynamics is described by a scale derivative. Necessary conditions are derived to determine the optimal solution for the problem. Related problems are also considered: transversality conditions, the multi-dimensional case, higher-order derivatives, and the case of several independent variables. **Keywords**: calculus of variations; scale derivative. **Mathematics Subject Classification**: 49K05; 26A33. author: - | Ricardo Almeida\ `ricardo.almeida@ua.pt` date: | Center for Research and Development in Mathematics and Applications (CIDMA)\ Department of Mathematics, University of Aveiro, 3810–193 Aveiro, Portugal title: A Scale Variational Principle of Herglotz ---

Introduction
============

The calculus of variations deals with the optimization of a given functional, whose algebraic expression is the integral of a given function that depends on time, the trajectory, and its velocity: $$x \mapsto \int_a^b L(t,x(t),\dot x(t))\, dt.$$ The variational principle of Herglotz can be seen as an extension of this classical theory: instead of an integral, the functional is given as the solution of a differential equation (see [@Guenther2; @Herglotz]): $$\left\{\begin{array}{l} \dot{z}(t)=L(t,x(t),\dot x(t),z(t)), \quad \mbox{ with } \, t\in[a,b],\\ z(a)=z_a .\end{array}\right.$$ Without the dependence on $z$, we can convert this problem into a calculus of variations problem. In fact, integrating the differential equation $$\dot{z}(t)=L(t,x(t),\dot x(t))$$ from $a$ to $b$, we obtain $$z(b)=\int_a^b\left[L(t,x(t),\dot x(t))+\frac{z_a}{b-a}\right]\,dt.$$ Recently, further advances were made, namely Noether-type theorems were proved for the variational principle of Herglotz (see e.g. [@Georgieva1; @Georgieva2; @Georgieva3; @Guenther1; @Guenther2; @Orum]).
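The reduction above can be checked numerically by integrating $\dot z = L$ for a $z$-independent Lagrangian and comparing $z(b)$ with the integral formula. The Lagrangian, trajectory, and numbers below are assumptions chosen only for illustration (here $x(t)=t^2$, so $L(t,x,\dot x)=t\,x\,\dot x=2t^4$ on $[0,1]$).

```python
def integrate(f, a, b, n=100000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

a, b, z_a = 0.0, 1.0, 0.3
L = lambda t: t * (t**2) * (2 * t)   # L(t, x(t), xdot(t)) along x(t) = t^2

z_b = z_a + integrate(L, a, b)                            # z(b) from zdot = L, z(a) = z_a
z_b_formula = integrate(lambda t: L(t) + z_a / (b - a), a, b)  # the integral formula
print(abs(z_b - z_b_formula) < 1e-9)                      # both equal z_a + 2/5
```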
The aim of this paper is to consider the Herglotz problem, but where the trajectories $x(\cdot)$ may be non-differentiable functions. We believe that this setting may model certain physical problems, such as fractal phenomena, more faithfully. The organization of the paper is the following. In Section \[sec2\] we define the scale derivative, following the concept presented in [@Cresson1], and we present some of its main properties, such as the algebraic rules and the integration by parts formula. In Section \[sec3\] we prove our new results. After presenting the Herglotz scale problem, we prove a necessary condition that every extremizer must fulfill. Some generalizations of the main result are also presented to complete the study.

Scale calculus {#sec2}
==============

We review some definitions and the main results from [@Cresson1] that we will need. For more on the subject, see references [@Almeida1; @Cresson1; @Cresson2]. From now on, let $\alpha, \beta,h$ be reals in $]0,1[$ with $\alpha+\beta>1$ and $h \ll 1$, and consider $I:=[a-h,b+h]$. Let $f:I\rightarrow \mathbb{R}$ be a function. The delta derivative of $f$ at $t$ is defined by $$\Delta_h[f](t):=\frac{f(t+ h)-f(t)}{h}, \quad \mbox{for} \quad t\in[a-h,b],$$ and the nabla derivative of $f$ at $t$ is defined by $$\nabla_h[f](t):=\frac{f(t)-f(t-h)}{h}, \quad \mbox{for} \quad t\in[a,b+h].$$ If $f$ is differentiable, then $$\lim_{h\to 0}\Delta_{h}[f](t)=\lim_{h\to 0}\nabla_{h}[f](t) = f'(t).$$ These two operators can be combined into a single one, whose real part is the mean value of the two operators, and whose imaginary part measures the difference between them.
The $h$-scale derivative of $f$ at $t$ is given by $$\label{eq:def:cp} \frac{{\Box_{h}}f}{\Box t}(t) =\frac12 \left[ \left( \Delta_{h}[f](t) + \nabla_{h}[f](t) \right)+i \left( \Delta_{h}[f](t) - \nabla_{h}[f](t) \right) \right], \quad \mbox{for}\quad t \in [a,b].$$ For complex-valued functions $f$, this definition is extended by $$\frac{{\Box_{h}}f}{\Box t}(t)= \frac{{\Box_{h}}\mbox{Re}f}{\Box t}(t)+ i \frac{{\Box_{h}}\mbox{Im}f}{\Box t}(t).$$ We now explain how to drop the dependence on the parameter $h$ in the definition of the scale derivative. First, consider the set ${C^0_{conv}}([a,b]\times ]0,1[,\mathbb{C})$ of the functions $g\in {C^0}([a,b]\times ]0,1[,\mathbb{C})$ for which the limit $$\lim_{h\to 0}g(t,h)$$ exists for all $t\in [a,b]$, and let $E$ be a complementary space of ${C^0_{conv}}([a,b]\times ]0,1[,\mathbb{C})$ in ${C^0}([a,b]\times ]0,1[,\mathbb{C})$. Define $\pi$ as the projection of ${C^0_{conv}}([a,b]\times ]0,1[,\mathbb{C})\oplus E $ onto ${C^0_{conv}}([a,b]\times ]0,1[,\mathbb{C})$, $$\begin{array}{lcll} \pi: & {C^0_{conv}}([a,b]\times ]0,1[,\mathbb{C})\oplus E & \to & {C^0_{conv}}([a,b]\times ]0,1[,\mathbb{C})\\ &g:= g_{conv}+g_E & \mapsto & \pi(g)=g_{conv}. \end{array}$$ Using these definitions, we arrive at the main concept of [@Cresson1]. \[def:ourHD\] The scale derivative of $f\in {C^0}(I,\mathbb{C})$, denoted by $\frac{\Box f}{\Box t}$, is defined by $$\label{eq:scaleDer} \frac{\Box f}{\Box t}(t):=\left< \frac{{\Box_{h}}f}{\Box t} \right>(t), \quad t \in [a,b],$$ where $$\left< \frac{{\Box_{h}}f}{\Box t} \right>(t):= \lim_{h\to 0}\pi\left(\frac{{\Box_{h}}f}{\Box t}(t)\right).$$ Given $f: I^n= [a-nh,b+nh]\rightarrow \mathbb{C}$, define the higher-order scale derivative of $f$ by $$\frac{\Box^n f}{\Box t^n}(t)=\frac{\Box}{\Box t}\left( \frac{\Box^{n-1} f}{\Box t^{n-1}} \right)(t), \quad t\in[a,b],$$ where $\frac{\Box^1 f}{\Box t^1}:= \frac{\Box f}{\Box t}$ and $\frac{\Box^0 f}{\Box t^0}:=f$.
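A quick numerical illustration of the building block of this definition: for the smooth function $f(t)=t^2$ the delta and nabla derivatives are $2t+h$ and $2t-h$, so the $h$-scale derivative equals $2t+ih$, and as $h\to 0$ its real part tends to $f'(t)=2t$ while the imaginary part vanishes. (The function and the evaluation point are chosen only for illustration.)

```python
def delta(f, t, h):
    return (f(t + h) - f(t)) / h   # forward (delta) derivative

def nabla(f, t, h):
    return (f(t) - f(t - h)) / h   # backward (nabla) derivative

def box_h(f, t, h):
    """h-scale derivative: mean of delta/nabla plus i times half their difference."""
    d, n = delta(f, t, h), nabla(f, t, h)
    return complex((d + n) / 2, (d - n) / 2)

f = lambda t: t * t
val = box_h(f, 1.5, 1e-3)
print(val)  # real part ~ f'(1.5) = 3, imaginary part ~ h = 1e-3
```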
We will adopt the notation $\Box^n f(t)$ instead of $\frac{\Box^n f}{\Box t^n}(t)$ when there is no danger of confusion. Scale partial derivatives are also considered here. They are defined as in the standard case. Let $f:\prod_{i=1}^n[a_i-h,b_i+h]\to\mathbb{R}$ be a function. Define, for each $i\in\{1,\ldots,n\}$, $$\Delta_h^i[f](t_1,\ldots,t_n):=\frac{f(t_1,\ldots,t_{i-1},t_i+ h,t_{i+1},\ldots,t_n)-f(t_1,\ldots,t_{i-1},t_i,t_{i+1},\ldots,t_n)}{h},$$ for $t_i\in[a_i-h,b_i]$ and for $t_j\in[a_j-h,b_j+h] \, \mbox{ if } \, j\not=i$, and $$\nabla_h^i[f](t_1,\ldots,t_n):=\frac{f(t_1,\ldots,t_{i-1},t_i,t_{i+1},\ldots,t_n)-f(t_1,\ldots,t_{i-1},t_i-h,t_{i+1},\ldots,t_n)}{h},$$ for $t_i\in[a_i,b_i+h]$ and for $t_j\in[a_j-h,b_j+h], \, \mbox{ if } \, j\not=i.$ The $h$-scale partial derivative of $f$ with respect to the $i$-th coordinate is given by $$\frac{{\Box_{h}}f}{\Box t_i}(t_1,\ldots,t_n) =\frac12 \left[ \left( \Delta_{h}^i[f]+ \nabla_{h}^i[f] \right)+i \left( \Delta_{h}^i[f]- \nabla_{h}^i[f]\right) \right],$$ for $t_i \in [a_i,b_i].$ The partial scale derivatives $\Box f/\Box t_i$ are then defined analogously. In what follows, we will denote $$C^n_\Box([a,b], \mathbb{K}):= \{f \in C^0(I^n, \mathbb{K})\mid \frac{\Box^k f}{\Box t^k}\in C^0(I^{n-k}, \mathbb{C}), k=1,2,\ldots,n \}, \quad \mathbb{K}= \mathbb{R} \mbox{ or } \mathbb{K}= \mathbb{C}.$$ Let $f\in C^0(I,\mathbb{C})$ and $\alpha\in]0,1[$. We say that $f$ is Hölderian of Hölder exponent $\alpha$ if there exists a constant $C>0$ such that, for all $s,t \in I$, $$|f(t)-f(s)|\leq C |t-s|^\alpha,$$ and we write $f \in H^\alpha(I, \mathbb{C})$, or simply $f\in H^\alpha$ when there is no danger of confusion. We say that $f(t_1,\ldots,t_n)\in H^\alpha$ if $f(t_1,\ldots,t_{i-1},\cdot,t_{i+1},\ldots,t_n)\in H^\alpha$, for all $i\in\{1,\ldots,n\}$ and for all $t_j\in[a_j,b_j], \, \, j\not=i$.
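The Hölder condition can be probed numerically: for $f(t)=\sqrt t$ on $[0,1]$ one has $|\sqrt t-\sqrt s|\le |t-s|^{1/2}$, so $f\in H^{1/2}$ with $C=1$. The sketch below checks the ratio $|f(t)-f(s)|/|t-s|^{1/2}$ on a grid (a numerical illustration, not a proof).

```python
import math

def holder_ratio(f, pts, alpha):
    """Largest |f(t)-f(s)| / |t-s|^alpha over distinct grid points."""
    return max(
        abs(f(t) - f(s)) / abs(t - s) ** alpha
        for i, t in enumerate(pts) for s in pts[:i]
    )

pts = [k / 200 for k in range(201)]   # grid on [0, 1]
C = holder_ratio(math.sqrt, pts, 0.5)
print(C <= 1.0 + 1e-12)   # True: sqrt is 1/2-Hölder with constant 1
```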
\[LeibnizRule\] For all $f\in H^\alpha$ and $g\in H^\beta$, we have $$\frac{\Box (f.g)}{\Box t}(t)=\frac{\Box f}{\Box t}(t).g(t)+f(t).\frac{\Box g}{\Box t}(t), \quad t \in [a,b].$$ \[Barrow\] Let $f\in C^1_{\Box}([a,b],\mathbb{R})$ be such that $$\label{nec_condition} \lim_{h\to 0} \int_a^b \left(\frac{\Box_h f}{\Box t}\right)_E(t) \, dt=0,$$ where $ \frac{\Box_h f}{\Box t}:= \left(\frac{\Box_h f}{\Box t}\right)_{conv} + \left(\frac{\Box_h f}{\Box t}\right)_E.$ Then, $$\int_a^b \frac{\Box f}{\Box t}(t)\, dt=f(b)-f(a).$$ As a consequence, we have the following integration by parts formula. If $$\lim_{h\to 0} \int_a^b \left(\frac{\Box_h (f \cdot g)}{\Box t}\right)_E(t) \, dt=0,$$ where $f\in H^\alpha$ and $g \in H^\beta$, then $$\int_a^b \frac{\Box f}{\Box t}(t) \cdot g(t) dt = \left[f(t)g(t)\right]_a^b - \int_a^b f(t)\cdot \frac{\Box g}{\Box t}(t) dt.$$

The scale variational principle of Herglotz {#sec3}
===========================================

The (classical) variational principle of Herglotz is described in the following way. Consider the differential equation $$\left\{ \begin{array}{l} \dot{z}(t)=L(t,x(t),\dot x(t),z(t)), \quad \mbox{ with } \, t\in[a,b]\\ z(a)=z_a\\ x(a)=x_a, \, x(b)=x_b, \end{array}\right.$$ where $x,z$ and $L$ are smooth functions. We wish to find $x$ (and the corresponding solution $z$ of the system) such that $z(b)$ attains an extremum. The necessary condition is a second-order differential equation: $$\frac{d}{dt}\frac{\partial L}{\partial \dot x}=\frac{\partial L}{\partial x}+\frac{\partial L}{\partial z}\frac{\partial L}{\partial \dot x},$$ for all $t\in[a,b]$. This can be seen as an extension of the basic problem of calculus of variations. If $L$ does not depend on $z$, then integrating the differential equation along the interval $[a,b]$, we get $$\left\{ \begin{array}{l} \displaystyle \int_a^b \left[ L(t,x(t),\dot x(t))+\frac{z_a}{b-a} \right] \, dt \quad \to \quad \mbox{extremize} \\ x(a)=x_a, \, x(b)=x_b.
\end{array}\right.$$ As is well known, many physical phenomena are characterized by non-differentiable functions (e.g. generic trajectories of quantum mechanics [@Feynman], scale-relativity without the hypothesis of space-time differentiability [@Nottale]). The usual procedure is to replace the classical derivative by a scale derivative, and consider the space of continuous (and non-differentiable) functions. The scale calculus of variations approach was studied in [@Almeida1; @Cresson1; @Cresson2] for a certain concept of scale derivative $\Box x(t)$: $$\left\{ \begin{array}{l} \displaystyle \int_a^b L(t,x(t),\Box x(t))\quad \to \quad \mbox{extremize} \\ x(a)=x_a, \, x(b)=x_b. \end{array}\right.$$ Motivated by this problem, we define the fundamental scale variational principle of Herglotz. First we need to define what an extremum is. We say that $z\in C^1([a,b],\mathbb C)$ attains an extremum at $t=b$ if $z'(b)=0$. The problem is then stated in the following way. Consider the system $$\label{MainProblem}\left\{ \begin{array}{l} \dot{z}(t)=L(t,x(t),\Box x(t),z(t)), \quad \mbox{ with } \, t\in[a,b]\\ z(a)=z_a\\ x(a)=x_a, \, x(b)=x_b. \end{array}\right.$$ For simplicity, define $$[x,z](t):=(t,x(t),\Box x(t),z(t)).$$ We assume that 1. the trajectories $x$ are in $H^\alpha \cap C^1_\Box([a,b], \mathbb{R})$, $\Box x \in H^\alpha$ and the functional $z$ is in $C^2([a,b],\mathbb C)$, 2. for each $x$, there exists a unique solution $z$ of the system, 3. $z_a,x_a,x_b$ are fixed numbers, 4. the Lagrangian $L:[a,b]\times\mathbb R\times \mathbb C^2\to\mathbb C$ is of class $C^2$. Observe that the solution $z(t)$ is actually a function of three variables, namely $z=z(t,x(t),\Box x(t))$. When there is no danger of confusion, we will simply write $z(t)$. We are interested in finding a trajectory $x$ for which the corresponding solution $z$ is such that $z(b)$ attains an extremum. In particular, we seek necessary conditions that such solutions must fulfill.
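As a point of reference before the non-differentiable setting, consider a concrete instance of the classical (differentiable) Herglotz equation recalled earlier; the Lagrangian below is chosen only for illustration. With $L(t,x,\dot x,z)=\tfrac12\dot x^2+\gamma z$, where $\gamma$ is a constant, the necessary condition $\frac{d}{dt}\frac{\partial L}{\partial \dot x}=\frac{\partial L}{\partial x}+\frac{\partial L}{\partial z}\frac{\partial L}{\partial \dot x}$ reads

```latex
\frac{d}{dt}\,\dot x = 0 + \gamma\,\dot x
\quad\Longrightarrow\quad
\ddot x = \gamma\,\dot x
\quad\Longrightarrow\quad
x(t) = C_1 e^{\gamma t} + C_2 ,
```

with $C_1, C_2$ fixed by the boundary data $x(a)=x_a$, $x(b)=x_b$; for $\gamma=0$ one recovers the straight-line extremal of the free classical problem.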
Such necessary conditions are equations of Euler–Lagrange type. Again, the problem can be reduced to the scale variational problem in case $L$ is independent of $z$: $$\int_a^b \left[L(t,x(t),\Box x(t))+\frac{z_a}{b-a}\right]\,dt\quad \to \quad \mbox{extremize}.$$ \[TNC\]If the pair $(x,z)$ is a solution of problem , and $\frac{\partial L}{\partial \Box x}[x,z]\in H^\alpha(I,\mathbb{C})$ ($\alpha\in]0,1[$), then $(x,z)$ is a solution of the equation $$\label{NC}\frac{\Box}{\Box t}\left(\frac{\partial L}{\partial \Box x}[x,z](t)\right)=\frac{\partial L}{\partial x}[x,z](t)+\frac{\partial L}{\partial z}[x,z](t)\frac{\partial L}{\partial \Box x}[x,z](t),$$ for all $t\in[a,b]$. Let $\epsilon$ be an arbitrary real number, and consider variation functions of $x$ of the form $x(t)+\epsilon \eta(t)$, with $\eta\in H^\beta(I,\mathbb{R}) \cap C^1_{\Box}([a,b], \mathbb{R})$ ($\beta\in]0,1[$), $\eta(a)=\eta(b)=\Box \eta(a)=0$, and $$\lim_{h \to 0} \int_a^b\left( \frac{\Box_h}{\Box t}\left(\lambda(t)\frac{\partial L}{\partial \Box x}[x,z](t) \eta(t)\right)\right)_E \, dt =0.$$ The corresponding rate of change of $z$, caused by the change of $x$ in the direction of $\eta$, is given by $$\theta (t)=\frac{d}{d\epsilon} \left.z(t,x(t)+\epsilon\eta(t),\Box x(t)+\epsilon \Box \eta(t))\right|_{\epsilon=0}.$$ Then $$\begin{array}{ll} \dot{\theta}(t)&=\displaystyle\frac{d}{dt}\frac{d}{d\epsilon} \left.z(t,x(t)+\epsilon\eta(t),\Box x(t)+\epsilon\Box \eta(t))\right|_{\epsilon=0}\\ &=\displaystyle\frac{d}{d\epsilon} \left.L(t,x(t)+\epsilon\eta(t),\Box x(t)+\epsilon\Box \eta(t),z(t,x(t)+\epsilon\eta(t), \Box x(t)+\epsilon\Box \eta(t))\right|_{\epsilon=0}\\ &=\displaystyle\frac{\partial L}{\partial x}[x,z](t)\eta(t)+\frac{\partial L}{\partial \Box x}[x,z](t)\Box \eta(t)+\frac{\partial L}{\partial z}[x,z](t)\theta(t).
\end{array}$$ We obtain a first order linear differential equation on $\theta$, whose solution is $$\lambda(b)\theta(b)-\theta(a)=\int_a^b\lambda(t)\left[\frac{\partial L}{\partial x}[x,z](t)\eta(t)+\frac{\partial L}{\partial \Box x}[x,z](t)\Box\eta(t)\right]dt,$$ where $$\lambda(t)=\exp\left(-\int_a^t \frac{\partial L}{\partial z}[x,z](\tau)d\tau \right).$$ Using the fact that $\theta(a)=\theta(b)=0$, we get $$\int_a^b\lambda(t)\left[\frac{\partial L}{\partial x}[x,z](t)\eta(t)+\frac{\partial L}{\partial \Box x}[x,z](t)\Box\eta(t)\right]dt=0.$$ Integrating by parts the second term, we obtain $$\int_a^b\left[\lambda(t)\frac{\partial L}{\partial x}[x,z](t)-\frac{\Box}{\Box t}\left(\lambda(t)\frac{\partial L}{\partial \Box x}[x,z](t)\right)\right]\eta(t)dt+\left[\eta(t) \lambda(t)\frac{\partial L}{\partial \Box x}[x,z](t)\right]_a^b=0.$$ Since $\eta(a)=\eta(b)=0$, and $\eta$ is an arbitrary function elsewhere, $$\lambda(t)\frac{\partial L}{\partial x}[x,z](t)-\frac{\Box}{\Box t}\left(\lambda(t)\frac{\partial L}{\partial \Box x}[x,z](t)\right)=0,$$ for all $t\in[a,b]$. Since the function $t\mapsto \lambda(t)$ is differentiable, and the function $t\mapsto \frac{\partial L}{\partial \Box x}[x,z](t)$ is in $H^\alpha$, it follows that $$\lambda(t)\left(\frac{\partial L}{\partial x}[x,z](t)+\frac{\partial L}{\partial z}[x,z](t)\frac{\partial L}{\partial \Box x}[x,z](t)-\frac{\Box}{\Box t}\left(\frac{\partial L}{\partial \Box x}[x,z](t)\right)\right)=0.$$ Finally, since $\lambda(t)>0$, for all $t$, we get $$\frac{\Box}{\Box t}\left(\frac{\partial L}{\partial \Box x}[x,z](t)\right)=\frac{\partial L}{\partial x}[x,z](t)+\frac{\partial L}{\partial z}[x,z](t)\frac{\partial L}{\partial \Box x}[x,z](t),$$ for all $t \in[a,b]$. Assume that the set of state functions $x$ is $C^1([a,b],\mathbb R)$. 
Then equation becomes $$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}}[x,z](t)\right)=\frac{\partial L}{\partial x}[x,z](t)+\frac{\partial L}{\partial z}[x,z](t)\frac{\partial L}{\partial \dot{x}}[x,z](t),$$ which is the generalized variational principle of Herglotz as in [@Herglotz]. \[TNC2\] Let the pair $(x,z)$ be a solution of the problem , but now $x(b)$ is free. Then $(x,z)$ is a solution of the equation $$\frac{\Box}{\Box t}\left(\frac{\partial L}{\partial \Box x}[x,z](t)\right)=\frac{\partial L}{\partial x}[x,z](t)+\frac{\partial L}{\partial z}[x,z](t)\frac{\partial L}{\partial \Box x}[x,z](t),$$ for all $t\in[a,b]$, and verifies the transversality condition $$\frac{\partial L}{\partial \Box x}[x,z](b)=0.$$ Following the proof of Theorem \[TNC\], the Euler-Lagrange equation is deduced. Then $$\left[\eta(t) \lambda(t)\frac{\partial L}{\partial \Box x}[x,z](t)\right]_a^b=0.$$ Since $\eta(a)=0$ and $\eta(b)$ is arbitrary, we obtain the transversality condition. **Multi-dimensional case** For simplicity, we considered so far one state function $x$ only, but the multi-dimensional case $(x_1,\ldots,x_n)$ is easily studied. \[TNC3\] Let $\alpha\in]0,1[$ and let the vector $(x_1,\ldots,x_n,z)$ be a solution of the problem: find $(x_1,\ldots,x_n)$ that extremizes $z(b)$, with $$\label{System2}\left\{ \begin{array}{l} \dot{z}(t)=L(t,x_1(t),\ldots,x_n(t),\Box x_1(t),\ldots,\Box x_n(t),z(t)), \quad \mbox{ with } \, t\in[a,b]\\ z(a)=z_a\\ x_i(a)=x_{ia}, \, x_i(b)=x_{ib} \end{array}\right.$$ where, for all $i\in\{1,\ldots,n\}$, 1. the trajectories $x_i$ are in $H^\alpha \cap C^1_\Box([a,b], \mathbb{R})$, $\Box x_i \in H^\alpha$ and the functional $z$ in $C^2([a,b],\mathbb C)$, 2. $z_a,x_{ia},x_{ib}$ are fixed numbers, 3. $\frac{\partial L}{\partial \Box x_i}[x_1,\ldots,x_n,z]\in H^\alpha(I,\mathbb{C})$ 4. the Lagrangian $L:[a,b]\times\mathbb R^n\times \mathbb C^{n+1}\to\mathbb C$ is of class $C^2$. 
Then, for all $i\in\{1,\ldots,n\}$, $(x_1,\ldots,x_n,z)$ is a solution of the equation $$\frac{\Box}{\Box t}\left(\frac{\partial L}{\partial \Box x_i}[x_1,\ldots,x_n,z](t)\right)=\frac{\partial L}{\partial x_i}[x_1,\ldots,x_n,z](t)+\frac{\partial L}{\partial z}[x_1,\ldots,x_n,z](t)\frac{\partial L}{\partial \Box x_i}[x_1,\ldots,x_n,z](t),$$ for all $t\in[a,b]$. \[TNC4\] Let the vector $(x_1,\ldots,x_n,z)$ be a solution of the problem as stated in Theorem \[TNC3\], but now $x_i(b)$ is free, for all $i \in\{1,\ldots,n\}$. Then, for all $i\in\{1,\ldots,n\}$, $(x_1,\ldots,x_n,z)$ is a solution of the equation $$\frac{\Box}{\Box t}\left(\frac{\partial L}{\partial \Box x_i}[x_1,\ldots,x_n,z](t)\right)=\frac{\partial L}{\partial x_i}[x_1,\ldots,x_n,z](t)+\frac{\partial L}{\partial z}[x_1,\ldots,x_n,z](t)\frac{\partial L}{\partial \Box x_i}[x_1,\ldots,x_n,z](t),$$ for all $t\in[a,b]$, and verifies the transversality condition $$\frac{\partial L}{\partial \Box x_i}[x_1,\ldots,x_n,z](b)=0.$$ **Higher-order derivatives case** \[TNC5\] Let $\alpha\in]0,1[$ and let the pair $(x,z)$ be a solution of the problem: find $x$ that extremizes $z(b)$, with $$\left\{ \begin{array}{l} \dot{z}(t)=L(t,x,\Box x(t),\ldots,\Box^n x(t),z(t)), \quad \mbox{ with } \, t\in[a,b]\\ z(a)=z_a\\ \Box^i x(a)=x_{ia}, \, \Box^i x(b)=x_{ib}, \quad \mbox{for all } \, i\in\{0,\ldots,n-1\}, \end{array}\right.$$ where 1. the trajectories $x$ are in $H^\alpha \cap C^n_\Box([a,b], \mathbb{R})$, $\Box x \in H^\alpha$ and the functional $z$ is in $C^2([a,b],\mathbb C)$, 2. $z_a,x_{ia},x_{ib}$ are fixed numbers, for all $i\in\{0,\ldots,n-1\}$, 3. $\frac{\partial L}{\partial \Box^i x}[x,z]\in H^\alpha(I^n,\mathbb{C})$, for all $i\in\{1,\ldots,n\}$, 4. $[x,z](t)=(t,x,\Box x(t),\ldots,\Box^n x(t),z(t))$ and $[x](t)=(t,x,\Box x(t),\ldots,\Box^n x(t))$, 5. the Lagrangian $L:[a,b]\times\mathbb R\times\mathbb C^{n+1}\to\mathbb C$ is of class $C^2$.
Then, $(x,z)$ is a solution of the equation $$\lambda(t)\frac{\partial L}{\partial x}[x,z](t)+\sum_{i=1}^n (-1)^i \frac{\Box^{i}}{\Box t^{i}}\left(\lambda(t)\frac{\partial L}{\partial \Box^ix}[x,z](t)\right)=0,$$ for all $t\in[a,b]$. Let $x(t)+\epsilon \eta(t)$ be a variation function of $x$, with $\epsilon\in\mathbb R$ and $\eta\in H^\beta \cap C^n_\Box([a,b], \mathbb{R})$ ($\beta\in]0,1[$). Also, assume that the variations fulfill the conditions: 1. for all $i=0,\ldots,n-1$, $\Box^i \eta(a)= \Box^i \eta(b)=0$, and $\Box^n \eta(a)=0$, 2. for all $i=1, 2, \ldots, n$ and $k=0, 1,\ldots, i-1$, $$\lim_{h \to 0} \int_a^b\left( \frac{\Box_h}{\Box t}\left(\lambda(t)\frac{\Box^k}{\Box t^k}\left(\frac{\partial L}{\partial \Box^i x}[x,z](t)\right)\Box^{i-k-1} \eta(t) \right)\right)_E \, dt =0.$$ Define $$\theta (t)=\frac{d}{d\epsilon} \left.z(t,x(t)+\epsilon \eta(t),\Box x(t)+\epsilon \Box \eta(t),\ldots,\Box^n x(t)+\epsilon \Box^n \eta(t))\right|_{\epsilon=0}.$$ Then $$\dot{\theta}(t)=\frac{\partial L}{\partial x}[x,z](t)\eta(t)+\sum_{i=1}^n\frac{\partial L}{\partial \Box^ix}[x,z](t)\Box ^i\eta(t)+\frac{\partial L}{\partial z}[x,z](t)\theta(t).$$ Solving this linear ODE, we arrive at $$\int_a^b\lambda(t)\left[\frac{\partial L}{\partial x}[x,z](t)\eta(t)+\sum_{i=1}^n\frac{\partial L}{\partial \Box^ix}[x,z](t)\Box ^i\eta(t)\right]dt=0,$$ where $$\lambda(t)=\exp\left(-\int_a^t \frac{\partial L}{\partial z}[x,z](\tau)d\tau \right).$$ Integrating by parts $n$ times, we obtain the following: $$\begin{split} & \displaystyle\int_a^b \left[\lambda(t)\frac{\partial L}{\partial x}[x,z](t)+\sum_{i=1}^n (-1)^i \frac{\Box^{i}}{\Box t^{i}}\left(\lambda(t)\frac{\partial L}{\partial \Box^ix}[x,z](t)\right)\right] \eta(t) dt \\ & + \left[\sum_{i=1}^n \sum_{k=0}^{i-1}(-1)^k \frac{\Box^{k}}{\Box t^{k}}\left(\lambda(t)\frac{\partial L}{\partial \Box^ix}[x,z](t)\right)\Box^{i-1-k} \eta(t) \right]_a^b=0, \end{split}$$ and rearranging the terms, we get $$\begin{split} & \displaystyle\int_a^b 
\left[\lambda(t)\frac{\partial L}{\partial x}[x,z](t)+\sum_{i=1}^n (-1)^i \frac{\Box^{i}}{\Box t^{i}}\left(\lambda(t)\frac{\partial L}{\partial \Box^ix}[x,z](t)\right)\right] \eta(t) dt \\ & + \left[\sum_{i=1}^n \left[\sum_{k=i}^{n}(-1)^{k-i} \frac{\Box^{k-i}}{\Box t^{k-i}}\left(\lambda(t)\frac{\partial L}{\partial \Box^kx}[x,z](t)\right)\right]\Box^{i-1} \eta(t) \right]_a^b=0. \end{split}$$ Since $\Box^i \eta(a)= \Box^i \eta(b)=0$, for all $i\in\{0,\ldots,n-1\}$ and $\eta$ is arbitrary elsewhere, we get $$\lambda(t)\frac{\partial L}{\partial x}[x,z](t)+\sum_{i=1}^n (-1)^i \frac{\Box^{i}}{\Box t^{i}}\left(\lambda(t)\frac{\partial L}{\partial \Box^ix}[x,z](t)\right)=0,$$ for all $t\in[a,b]$. \[TNC6\] Let the pair $(x,z)$ be a solution of the problem as stated in Theorem \[TNC5\], but now $\Box^i x(b)$ is free, for all $i\in\{0,\ldots,n-1\}$. Then, $(x,z)$ is a solution of the equation $$\lambda(t)\frac{\partial L}{\partial x}[x,z](t)+\sum_{i=1}^n (-1)^i \frac{\Box^{i}}{\Box t^{i}}\left(\lambda(t)\frac{\partial L}{\partial \Box^ix}[x,z](t)\right)=0,$$ for all $t\in[a,b]$, and verifies the transversality condition $$\sum_{k=i}^{n}(-1)^{k-i} \frac{\Box^{k-i}}{\Box t^{k-i}}\left(\lambda(t)\frac{\partial L}{\partial \Box^kx}[x,z](t)\right)=0 \quad \mbox{at} \quad t=b,$$ for all $i\in\{1,\ldots,n\}$. **Several independent variables case** We generalize Theorem \[TNC\] for several independent variables. First we fix some notations. The variable time is $t\in[a,b]$, $x=(x_1,\ldots,x_n)\in \Omega:=\prod_{i=1}^n[a_i,b_i]$ are the space coordinates and the state function is $u:=u(t,x)$. 
\[TNC7\] Let $\alpha\in]0,1[$ and let the pair $(u,z)$ be a solution of the problem: find $u$ that extremizes $z(b)$, with $$\label{System4}\left\{ \begin{array}{l} \dot{z}(t)=\displaystyle \int_\Omega L\left(t,x,u,\frac{\Box u}{\Box t},\frac{\Box u}{\Box x_1},\ldots,\frac{\Box u}{\Box x_n},z(t)\right)\, d^nx, \quad \mbox{ with } \, t\in[a,b]\\ z(a)=z_a\\ u(t,x) \quad \mbox{takes fixed values,} \quad \forall t\in[a,b]\, \forall x \in \partial\Omega\\ u(t,x) \quad \mbox{takes fixed values,} \quad \forall t\in\{a,b\}\, \forall x \in \Omega,\\ \end{array}\right.$$ where, for all $i\in\{1,\ldots,n\}$, 1. the trajectories $u$ are in $H^\alpha(I\times\Omega, \mathbb{R}) \cap C^1_\Box([a,b]\times\Omega, \mathbb{R})$, $\frac{\Box u}{\Box t},\frac{\Box u}{\Box x_i} \in H^\alpha([a,b]\times\Omega, \mathbb{C})$ and the functional $z$ is in $C^2([a,b],\mathbb C)$, 2. $z_a$ is a fixed number, 3. $d^nx=dx_1\ldots dx_n$, 4. $\frac{\partial L}{\partial \Box t}[u,z],\frac{\partial L}{\partial \Box x_i}[u,z] \in H^\alpha(I\times\Omega, \mathbb{C})$, where $\frac{\partial L}{\partial \Box t}[u,z]$ denotes the partial derivative of $L$ with respect to the variable $\frac{\Box u}{\Box t}$, and $\frac{\partial L}{\partial \Box x_i}[u,z]$ denotes the partial derivative of $L$ with respect to the variable $\frac{\Box u}{\Box x_i}$, and $[u,z](t)=(t,x,u,\frac{\Box u}{\Box t},\frac{\Box u}{\Box x_1},\ldots,\frac{\Box u}{\Box x_n},z(t))$, 5. $L:[a,b]\times\Omega\times\mathbb R\times\mathbb C^{n+2}\to\mathbb C$ is of class $C^2$. Then, $(u,z)$ is a solution of the equation $$\frac{\partial L}{\partial u}[u,z](t)+\frac{\partial L}{\partial \Box t}[u,z](t) \int_\Omega \frac{\partial L}{\partial z}[u,z](t) \, d^nx-\frac{\Box}{\Box t}\left(\frac{\partial L}{\partial \Box t}[u,z](t)\right)-\sum_{i=1}^n \frac{\Box}{\Box x_i} \left(\frac{\partial L}{\partial \Box x_i}[u,z](t)\right)=0,$$ for all $t\in[a,b]$ and for all $x\in\Omega$.
Let $u(t,x)+\epsilon \eta(t,x)$ be a variation function of $u$, with $\epsilon\in\mathbb R$ and $\eta\in H^\beta(I\times\Omega, \mathbb{R}) \cap C^1_\Box([a,b]\times\Omega, \mathbb{R})$ ($\beta\in]0,1[$). Also, assume that the variations fulfill the conditions: 1. $\eta(t,x)=0,\quad \forall t\in[a,b]\,\forall x \in \partial\Omega$, 2. $\eta(t,x)=0,\quad \forall t\in\{a,b\}\, \forall x \in \Omega$, 3. $\frac{\Box \eta}{\Box t}(a,x)=\frac{\Box \eta}{\Box x_i}(a,x)=0,\quad \forall x \in \Omega$, 4. for all $i=1, 2, \ldots, n$, $$\lim_{h \to 0} \int_a^b\left( \frac{\Box_h}{\Box t}\left(\lambda(t)\frac{\partial L}{\partial \Box t}[u,z](t) \eta(t)\right)\right)_E \, dt =0.$$ and $$\lim_{h \to 0} \int_a^b\left( \frac{\Box_h}{\Box x_i}\left(\lambda(t)\frac{\partial L}{\partial \Box x_i}[u,z](t) \eta(t)\right)\right)_E \, dt =0,$$ where $$\lambda(t)=\exp\left(-\int_a^t\int_\Omega \frac{\partial L}{\partial z}[u,z](\tau) \, d^nx \, d\tau \right).$$ Let $$\theta (t)=\frac{d}{d\epsilon} \left.z\left(t,x,u+\epsilon\eta,\frac{\Box u}{\Box t}+\epsilon \frac{\Box \eta}{\Box t},\frac{\Box u}{\Box x_1}+\epsilon \frac{\Box \eta}{\Box x_1},\ldots,\frac{\Box u}{\Box x_n}+\epsilon \frac{\Box \eta}{\Box x_n}\right)\right|_{\epsilon=0}.$$ Proceeding with some calculations, we arrive at the ODE $$\dot{\theta}(t)-\int_\Omega \frac{\partial L}{\partial z}[u,z](t) \, d^nx \,\, \theta(t) =\int_\Omega \frac{\partial L}{\partial u}[u,z](t)\eta+\frac{\partial L}{\partial \Box t}[u,z](t)\frac{\Box\eta}{\Box t}+\sum_{i=1}^n\frac{\partial L}{\partial \Box x_i}[u,z](t)\frac{\Box \eta}{\Box x_i}\, d^nx.$$ Solving the ODE, and taking into consideration that $\theta(a)=\theta(b)=0$, we get $$\int_a^b \int_\Omega \lambda(t)\left[\frac{\partial L}{\partial u}[u,z](t)\eta+\frac{\partial L}{\partial \Box t}[u,z](t)\frac{\Box\eta}{\Box t}+\sum_{i=1}^n\frac{\partial L}{\partial \Box x_i}[u,z](t)\frac{\Box \eta}{\Box x_i}\right]\, d^nx \, dt=0.$$ Integrating by parts, and considering the boundary conditions 
over $\eta$, we get $$\int_a^b\int_\Omega \left[\lambda(t)\frac{\partial L}{\partial u}[u,z](t) -\frac{\Box}{\Box t}\left(\lambda(t)\frac{\partial L}{\partial \Box t}[u,z](t)\right)-\sum_{i=1}^n\frac{\Box}{\Box x_i} \left(\lambda(t)\frac{\partial L}{\partial \Box x_i}[u,z](t)\right)\right]\eta \, d^nx \, dt=0.$$ By the arbitrariness of $\eta$, it follows that for all $t\in[a,b]$ and for all $x\in\Omega$, $$\lambda(t)\frac{\partial L}{\partial u}[u,z](t) -\frac{\Box}{\Box t}\left(\lambda(t)\frac{\partial L}{\partial \Box t}[u,z](t)\right)-\sum_{i=1}^n\frac{\Box}{\Box x_i} \left(\lambda(t)\frac{\partial L}{\partial \Box x_i}[u,z](t)\right)=0.$$ Since $\lambda(t)>0$, this condition implies that $$\frac{\partial L}{\partial u}[u,z](t)+\frac{\partial L}{\partial \Box t}[u,z](t) \int_\Omega \frac{\partial L}{\partial z}[u,z](t) \, d^nx-\frac{\Box}{\Box t}\left(\frac{\partial L}{\partial \Box t}[u,z](t)\right)-\sum_{i=1}^n \frac{\Box}{\Box x_i} \left(\frac{\partial L}{\partial \Box x_i}[u,z](t)\right)=0,$$ for all $t\in[a,b]$ and for all $x\in\Omega$, and the theorem is proved. Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank the two reviewers for their insightful comments, which led to an improvement of this work. This work was supported by Portuguese funds through the CIDMA - Center for Research and Development in Mathematics and Applications, and the Portuguese Foundation for Science and Technology (FCT - Fundação para a Ciência e a Tecnologia), within project UID/MAT/04106/2013.
--- author: - 'V. Pérez-Mesa, O. Zamora, D. A. García-Hernández, B. Plez, A. Manchado, A. I. Karakas' - 'M. Lugaro' date: 'Received September 15, 1996; accepted March 16, 1997' title: Rubidium and zirconium abundances in massive Galactic asymptotic giant branch stars revisited --- [Luminous Galactic OH/IR stars have been identified as massive ($>$ 4-5 M$_{\odot}$) asymptotic giant branch (AGB) stars experiencing hot bottom burning and Li production. Their Rb abundances and \[Rb/Zr\] ratios, as derived from classical hydrostatic model atmospheres, are significantly higher than predictions from AGB nucleosynthesis models, posing a problem to our understanding of AGB evolution and nucleosynthesis.]{} [We report new Rb and Zr abundances in the full sample (21) of massive Galactic AGB stars, previously studied with hydrostatic models, by using more realistic extended model atmospheres.]{} [For this, we use a modified version of the spectral synthesis code Turbospectrum and consider the presence of a circumstellar envelope and radial wind in the modelling of the optical spectra of these massive AGB stars. The Rb and Zr abundances are determined from the 7800 [Å]{} Rb I resonant line and the 6474 [Å]{} ZrO bandhead, respectively, and we explore the sensitivity of the derived abundances to variations of the stellar (T$_{eff}$) and wind ($\dot{M}$, $\beta$ and $v_{exp}$) parameters in the pseudo-dynamical models. The Rb and Zr abundances derived from the best spectral fits are compared with the most recent AGB nucleosynthesis theoretical predictions.]{} [The Rb abundances derived with the pseudo-dynamical models are much lower (in the most extreme stars even by $\sim$1-2 dex) than those derived with the hydrostatic models, while the Zr abundances are similar. 
The Rb I line profile and Rb abundance are very sensitive to the wind mass-loss rate $\dot{M}$ (especially for $\dot{M}$ $\geq$ 10$^{-8}$ M$_{\odot}yr^{-1}$) but much less sensitive to variations of the wind velocity-law ($\beta$ parameter) and the expansion velocity $v_{exp}$(OH).]{} [We confirm the earlier preliminary results based on a smaller sample of massive O-rich AGB stars, that the use of extended atmosphere models can solve the discrepancy between the AGB nucleosynthesis theoretical models and the observations of Galactic massive AGB stars. The Rb abundances, however, are still strongly dependent on the wind mass-loss rate $\dot{M}$, which, unfortunately, is unknown in these AGB stars. Accurate mass-loss rates $\dot{M}$ (e.g., from rotationally excited lines of the CO isotopologues in the radio domain) in these massive Galactic AGB stars are needed in order to break the model degeneracy and obtain reliable (non-model-dependent) Rb abundances in these stars.]{} Introduction ============ The asymptotic giant branch (AGB; @herwig05 [@karakaslattanzio14]) is occupied by low- and intermediate-mass (0.8 $\leq$ M $\leq$ 8 M$_{\odot}$) stars in the last nuclear-burning phase. Towards the end of the AGB phase, these stars develop thermal pulses (TP) and suffer extreme mass loss. AGB stars are thus one of the main contributors to the enrichment of the interstellar medium (ISM) in light elements (e.g. Li, C, N, F) and heavy (*slow* neutron capture, *s*-process) elements, and so to the chemical evolution of galaxies [@busso99]. AGB stars are also one of the most prominent sources of dust in galaxies and the site of origin of the vast majority of meteoritic stardust grains [e.g. @hoppeott97; @nittler97; @lugaro17]. In low-mass AGB stars (M $<$ 4 M$_{\odot}$) $^{12}$C is produced during the TP-AGB phase and carried to the stellar surface via the third dredge-up (TDU), which can occur after each TP, transforming originally O-rich stars into C-rich stars (C/O$>$1) [e.g.
@herwig05; @karakas07; @lugaro11]. However, the more massive AGB stars (M $>$ 4-5 M$_{\odot}$) are O-rich (C/O$<$1) because the so-called “hot bottom burning” (hereafter, HBB) process is activated. HBB converts $^{12}$C into $^{13}$C and $^{14}$N through the CN cycle via proton captures at the base of the convective envelope, thus preventing the formation of a carbon star [@sackmann92; @mazzitelli99]. The *s*-process produces neutron-rich elements heavier than iron (*s*-elements such as Sr, Y, Zr, Ba, La, Nd, Tc, etc.) through slow neutron captures. In the low-mass AGB stars (roughly $<$ 4 M$_{\odot}$), the $^{13}$C($\alpha$, n)$^{16}$O reaction is the dominant neutron source [e.g. @abia01]. In the more massive AGB stars instead, neutrons are mainly released by the $^{22}$Ne($\alpha$, n)$^{25}$Mg reaction, resulting in a higher neutron density (up to 10$^{13}$ n/cm$^{3}$) and temperature environment than in lower mass AGB stars [@garcia-hernandez06]. The amount of Rb produced depends on the probability that $^{85}$Kr and $^{86}$Rb capture a neutron before decaying, thus acting as “branching points” [see @vanraai12 for more details]. The probability of this happening depends on the local neutron density [@beermacklin89]. The $^{87}$Rb/$^{85}$Rb isotopic ratio is a direct indicator of the neutron density at the production site, but the individual $^{87}$Rb and $^{85}$Rb isotopes cannot be distinguished in stellar spectra [@garcia-hernandez06]. However, the relative abundance of Rb to other nearby *s*-process elements such as Zr is very sensitive to the neutron density, and so is a good discriminant of the stellar mass and the neutron source at the $s$-process site [@lambert95; @abia01; @garcia-hernandez06; @vanraai12].
In other words, \[Rb/Zr\]$<$0 is observed in low-mass AGB stars where the main neutron source is the $^{13}$C($\alpha$, n)$^{16}$O reaction [@plez93; @lambert95; @abia01], while \[Rb/Zr\]$>$0 is observed in more massive AGB stars, where the neutrons are mainly released through the $^{22}$Ne($\alpha$, n)$^{25}$Mg reaction [@garcia-hernandez06; @garcia-hernandez07; @garcia-hernandez09]. Chemical abundance analyses using classical MARCS hydrostatic atmospheres [@gustafsson08] revealed strong Rb overabundances ($\sim$10$^{3}$-10$^{5}$ times solar) and high \[Rb/Zr\] ratios ($\geqslant$ 3-4 dex) in massive AGB stars (generally very luminous OH/IR stars) of our own Galaxy and the Magellanic Clouds (MC; @garcia-hernandez06 [@garcia-hernandez07; @garcia-hernandez09]). This observationally confirmed for the first time that the $^{22}$Ne neutron source dominates the production of *s*-process elements in these stars. However, the extremely high Rb abundances and \[Rb/Zr\] ratios observed in most of the massive stars (and especially in the lower metallicity MC AGB stars) have posed a “Rb problem”; such extreme \[Rb/Fe\] and \[Rb/Zr\] values are not predicted by the *s*-process AGB models [@vanraai12; @karakas12], suggesting fundamental problems in our present understanding of AGB nucleosynthesis and/or of the complex extended dynamical atmospheres of these stars [@garcia-hernandez09]. [@zamora14] constructed new pseudo-dynamical MARCS model atmospheres by considering the presence of a gaseous circumstellar envelope with a radial wind and applied them to a small sample of five O-rich AGB stars with different expansion velocities and metallicities. The Rb abundances and \[Rb/Zr\] ratios obtained were much lower than those obtained with classical hydrostatic models, in better agreement with the AGB nucleosynthesis theoretical predictions.
In this paper, we use the [@zamora14] pseudo-dynamical model atmospheres to obtain the abundances of Rb and Zr in the full sample of massive Galactic AGB stars previously analyzed with hydrostatic models [@garcia-hernandez06; @garcia-hernandez07]. These Rb and Zr abundances are then compared with the more recent AGB nucleosynthesis theoretical predictions available in the literature.

Sample and observational data
=============================

Our sample is composed of 21 massive Galactic AGB stars (most of them very luminous OH/IR stars) previously analyzed by @garcia-hernandez06 [@garcia-hernandez07]; we use their high-resolution ($R\sim$40,000$-$50,000) optical echelle spectra [^1]. The signal-to-noise ($S/N$) ratios achieved in the reduced spectra vary strongly from the blue to the red (typically $\sim$10-20 at 6000 Å and $>$100 at 8000 Å). The Rb and Zr abundances were determined from the resonant 7800 Å Rb I line and the 6474 Å ZrO bandhead, respectively, by using classical MARCS hydrostatic model atmospheres [@garcia-hernandez06; @garcia-hernandez07]. The Rb abundances and \[Rb/Zr\] ratios obtained from this chemical analysis are mostly in the range \[Rb/Fe\]$\sim$0.6$-$2.6 dex and \[Rb/Zr\]$\sim$0.1$-$2.1 dex. The atmospheric parameters and Rb abundances derived with the hydrostatic models, as well as other useful observational information such as the OH expansion velocity, variability period, and the presence of Li, are listed in Table \[table\_obs\_param\].

  IRAS name      $T_{eff}$ (K)   log $g$   $v_{exp}$(OH) (km s$^{-1}$)   Period (days)   Lithium   \[Rb/Fe\]$_{static}$   S/N at 7800 Å
  -------------- --------------- --------- ----------------------------- --------------- --------- ---------------------- ---------------
  01085$+$3022   3000$^{*}$      $-$0.5    13                            560             yes       2.0                    49
  04404$-$7427   3000            $-$0.5    8                             534             ...       1.3                    68
  05027$-$2158   2800            $-$0.5    8                             368             yes       0.4                    418
  05098$-$6422   3000            $-$0.5    6                             394             no        0.1                    309
  05151$+$6312   3000            $-$0.5    15                            ...             no        2.1                    161
  06300$+$6058   3000            $-$0.5    12                            440             yes       1.6                    127
  07222$-$2005   3000            $-$0.5    8                             1200            ...       0.6                    30
  09194$-$4518   3000            $-$0.5    11                            ...             ...       1.1                    25
  10261$-$5055   3000            $-$0.5    4                             317             no        $<-$1.0                595
  14266$-$4211   2900            $-$0.5    9                             389             no        0.9                    106
  15193$+$3132   2800            $-$0.5    3                             360             no        $-$0.3                 266
  15576$-$1212   3000            $-$0.5    10                            415             yes       1.5                    91
  16030$-$5156   3000            $-$0.5    7-14                          579             yes       1.3                    86
  16037$+$4218   2900            $-$0.5    4                             360             no        0.6                    115
  17034$-$1024   3300            $-$0.5    3-9                           346             no        0.2                    189
  18429$-$1721   3000            $-$0.5    7                             481             yes       1.2                    98
  19059$-$2219   3000            $-$0.5    13                            510             ...       2.3                    32
  19426$+$4342   3000            $-$0.5    9                             ...             ...       1.0                    19
  20052$+$0554   3000$^{*}$      $-$0.5    16                            450             yes       1.5                    47
  20077$-$0625   3000            $-$0.5    12                            680             ...       1.3                    19
  20343$-$3020   3000            $-$0.5    8                             349             no        0.9                    76

Chemical abundance analysis using pseudo-dynamical models
=========================================================

Modified version of the Turbospectrum spectral synthesis code
-------------------------------------------------------------

We have used the v12.2 version of the spectral synthesis code *Turbospectrum* [@alvarez98; @plez12], as modified by [@zamora14] to take into account the presence of a circumstellar gas envelope and a radial wind. The main modifications are the following: (i) the Doppler effect due to the extended atmosphere and velocity field is introduced in the routines that compute the line intensities at the stellar surface; (ii) the source function of the radiative transfer is assumed to be the same as computed in the static case [@gustafsson08]. The validity of this approximation was tested by comparison with Monte Carlo simulations [see @zamora14]; (iii) the scattering term of the source function ($\varpropto\sigma_{\lambda}J_{\lambda}$) is not shifted, to save computing time, and is only incorporated for the continuum.
This scattering term is computed as in the static case using the Feautrier method [@nordlund84; @gustafsson08]; and (iv) the velocity field is taken into account through a shift of the absorption coefficient $\kappa_{\lambda}$; the source function is built using the static $\sigma_{\lambda}J_{\lambda}$ and the shifted $\kappa_{\lambda}B_{\lambda}$. The emerging intensity is then computed in the observer frame by a direct quadrature of the source function. Extended atmosphere models -------------------------- For the analysis of each star in our sample, we have adopted the atmosphere parameters from @garcia-hernandez06 [@garcia-hernandez07] and the solar reference abundances by [@grevesse07]. We constructed our pseudo-dynamical models from the original MARCS hydrostatic atmosphere model structure, extending the atmosphere with a wind out to $\sim$5 stellar radii and imposing a radial velocity field [@zamora14]. In the MARCS hydrostatic model, $R_{\ast}$ is the radius corresponding to $r(\tau_{Ross}=1)$, where $r$ is the distance from the center of the star and $\tau_{Ross}$ is the Rosseland optical depth. We have computed the stellar wind following mass conservation (Eq. 1), radiative thermal equilibrium (Eq. 2), and a classical $\beta$-velocity law (Eq. 3), $$\rho(r) = \frac{\dot{M}}{4 \pi r^2 v(r)}$$ $$r\, T^2 = \mathrm{constant} = r_{out}T^2_{out}$$ $$v(r) = v_0+(v_{\infty}-v_0)\left(1-\frac{R_*}{r}\right)^{\beta} \,,$$ where $\rho(r)$ is the density of the envelope at radius $r$, $\dot{M}$ is the mass-loss rate, and $v(r)$ is the velocity of the envelope, calculated by means of Eq.(3). In Eq.(3), $v_{0}$ is a reference velocity for the beginning of the wind and $\beta$ is an arbitrary free parameter. We take $v_{0}=v(R_{\ast})$ for the onset of the wind, and the extension of the envelope begins at the outer radius of the hydrostatic model.
Using Eq.(2), the envelope is extended, layer by layer, out to the distance $r_{max}$, which corresponds to the maximum radius in our calculations, with $T_{min}=1000$ K. *Turbospectrum* cannot compute lower temperatures for numerical reasons [@zamora14]. ![Velocity vs. distance from the star in four of our AGB wind models. These velocity laws present different expansion velocities $v_{exp}$(OH), mass-loss rates $\dot{M}$ and $\beta$ exponents. The effective temperature $T_{eff}$ = 3000 K, gravity log $g$ = $-$0.5 and the solar chemical composition are the same in all models.[]{data-label="v_r"}](v_vs_r.png){width="9.3cm"} Resulting grids of synthetic spectra ------------------------------------ The synthetic spectra are generated with the modified version of *Turbospectrum* by using the extended pseudo-dynamical model atmospheres as input. We constructed a mini-grid of synthetic spectra for each sample star by adopting the atmospheric parameters (e.g., effective temperature, macroturbulence[^2]) from @garcia-hernandez06 [@garcia-hernandez07]. Basically, the stellar mass, gravity log $g$, microturbulent velocity $\xi$, metallicity \[Fe/H\], and C/O ratio are fixed to 2 M$_{\odot}$, $-$0.5 dex, 3 km s$^{-1}$, 0.0 dex, and 0.5, respectively [see @garcia-hernandez07 for more details]. On the other hand, for the mass-loss rate $\dot{M}$ and the exponent $\beta$, we use values between $\dot{M} \sim 10^{-9}-10^{-6} M_{\odot}yr^{-1}$ in steps of $0.5\times10^{-1}$ $M_{\odot}yr^{-1}$ and $\beta \sim 0.2-1.6$ in steps of 0.2. We have not considered the case where $\beta = 0.0$ because the expansion velocity would be constant at any $r$. We assume the OH expansion velocity ($v_{exp}$(OH); see Table \[table\_obs\_param\]) as the terminal velocity because the OH maser emission is found at very large distances from the central star [see e.g., @decin10].
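As an illustration (a minimal sketch under the assumptions above; the parameter values, the choice of the photosphere as the temperature reference point, and the unit conversions are our own, and this is not the actual *Turbospectrum* implementation), the wind structure of Eqs. (1)-(3) with the $T_{min}=1000$ K cut can be tabulated as:

```python
import numpy as np

# Minimal sketch of the wind structure of Eqs. (1)-(3); illustrative
# parameter values only, not the actual Turbospectrum implementation.
M_SUN_G = 1.989e33   # solar mass in g
YR_S = 3.156e7       # year in s

def wind_structure(r_star_cm, t_out, v0_kms, v_inf_kms, beta,
                   mdot_msun_yr, n_layers=50, r_max_factor=5.0,
                   t_min=1000.0):
    """Tabulate v(r), T(r), rho(r) from the photosphere out to r_max."""
    r = np.linspace(r_star_cm, r_max_factor * r_star_cm, n_layers)
    # Eq. (3): classical beta-velocity law, v0 at the wind onset.
    v = v0_kms + (v_inf_kms - v0_kms) * (1.0 - r_star_cm / r) ** beta
    # Eq. (2): radiative thermal equilibrium, r * T^2 = constant
    # (reference point taken at the photosphere in this sketch).
    t = t_out * np.sqrt(r_star_cm / r)
    # Eq. (1): mass conservation, rho = Mdot / (4 pi r^2 v).
    mdot_cgs = mdot_msun_yr * M_SUN_G / YR_S
    rho = mdot_cgs / (4.0 * np.pi * r**2 * np.maximum(v, 1e-3) * 1e5)
    # Drop layers cooler than the T_min = 1000 K numerical limit.
    keep = t >= t_min
    return r[keep], v[keep], t[keep], rho[keep]

# Arbitrary illustrative values (R* ~ 3e13 cm, Mdot = 1e-7 Msun/yr).
r, v, t, rho = wind_structure(r_star_cm=3e13, t_out=3000.0, v0_kms=1.0,
                              v_inf_kms=10.0, beta=0.2,
                              mdot_msun_yr=1e-7)
```

The tabulated $v(r)$ rises steeply for low $\beta$ and approaches the terminal velocity $v_{exp}$(OH) at a few stellar radii, which is the behaviour shown in Figure \[v\_r\].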
Figure \[v\_r\] shows examples of the $\beta$-velocity laws used in our pseudo-dynamical models based on the MARCS hydrostatic models. Finally, for the Rb and Zr abundances we used \[Rb/Fe\]$\sim-$2.6 to $+$3.0 dex and \[Zr/Fe\]$\sim-$1.0 to $+$1.0 dex, in steps of 0.1 and 0.25 dex, respectively. The resulting mini-grid ($\sim$4400 models) is compared to the observed spectrum in order to find the synthetic spectrum that best fits the 7800 Å Rb I line and the 6474 Å ZrO bandhead profiles and their adjacent pseudocontinua. In order to obtain the best fits, we made use of a procedure based on the comparison between synthetic and observed spectra, while in [@zamora14] the observed spectra were fitted by eye. The method is a modified version of the standard $\chi^{2}$ test, $$\chi^{2*}=\chi^{2} \times w = \left(\sum_{i=1}^{N}\dfrac{[Y^{obs}_{i}-Y^{synth}_{i}(x_{1}\ldots x_{M})]^2}{Y^{obs}_{i}} \right) \times w \,,$$ where $Y^{obs}_{i}$ and $Y^{synth}_{i}$ are the observed and synthetic data points, respectively, with $N$ the number of data points and $M$ the number of free parameters. On the other hand, $w$ is a weighting vector that, applied point by point, gives a stronger weight to the detailed spectral profiles of the Rb I line and the ZrO bandhead. This way, the lowest value of $\chi^{2*}$ gives us the best-fitting synthetic spectrum from the mini-grid for each sample star. The use of the $\chi^{2*}$ test to find the best fits to the observed spectra reveals the presence of important degeneracies in the resulting grids of pseudo-dynamical synthetic spectra; i.e., very similar synthetic spectra are obtained from different sets of wind parameters (see below for more details). Moreover, in some cases (IRAS 04404$-$7427, IRAS 05027$-$2158, IRAS 05098$-$6422, IRAS 06300$+$6058, IRAS 10261$-$5055, IRAS 18429$-$1721, IRAS 19059$-$2219 and IRAS 20343$-$3020) the use of the $\chi^{2*}$ test is not sufficient to obtain the synthetic spectrum that best reproduces the observed one, and the best fits have to be found by eye.
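The weighted $\chi^{2*}$ selection described above can be sketched as follows (a schematic toy illustration: the Gaussian line profile, noise level, weight choice, and one-parameter grid are our own stand-ins, not the actual pipeline or its data; the weight is applied point by point):

```python
import numpy as np

# Schematic chi^2* grid selection; toy data, illustrative only.
rng = np.random.default_rng(0)

wave = np.linspace(7790.0, 7810.0, 200)  # wavelength grid (Angstrom)

def toy_spectrum(depth):
    """Toy Rb I 7800 A absorption profile on a flat pseudo-continuum."""
    return 1.0 - depth * np.exp(-0.5 * ((wave - 7800.0) / 0.8) ** 2)

# Fake "observed" spectrum: known depth plus small noise.
y_obs = toy_spectrum(0.55) + 0.01 * rng.normal(size=wave.size)

# Weight vector w: stronger weight on the detailed Rb I line profile.
w = np.where(np.abs(wave - 7800.0) < 2.0, 5.0, 1.0)

def chi2_star(y_obs, y_synth, w):
    # Point-by-point weighted version of the chi^2 statistic.
    return np.sum(w * (y_obs - y_synth) ** 2 / y_obs)

# Mini-grid over one free parameter (line depth, standing in for [Rb/Fe]).
depths = np.arange(0.0, 1.0, 0.05)
scores = [chi2_star(y_obs, toy_spectrum(d), w) for d in depths]
best = depths[int(np.argmin(scores))]  # grid point with lowest chi^2*
```

With more free parameters (here $\dot{M}$, $\beta$, $v_{exp}$(OH), \[Rb/Fe\], \[Zr/Fe\]) several grid points can give nearly equal $\chi^{2*}$, which is the degeneracy discussed in the text.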
Unfortunately, the wind model parameters $\dot{M}$ and $\beta$ are generally not known for stars in our sample (see below), complicating the abundance analysis. Thus, here we study the sensitivity of the synthetic spectra and the abundance results to variations of the stellar and wind parameters. Sensitivity of the synthetic spectra to variations of the model parameters {#sec:variationofparameters} -------------------------------------------------------------------------- ![image](comparison_Rb_M.png){width="9.15cm" height="6.7cm"} ![image](comparison_Rb_Teff.png){width="9.15cm" height="6.7cm"} ![image](comparison_Rb_beta.png){width="9.15cm" height="6.7cm"} ![image](comparison_Rb_vexp.png){width="9.15cm" height="6.7cm"} ![image](comparison_Zr_M.png){width="9.15cm" height="6.7cm"} ![image](comparison_Zr_Teff.png){width="9.15cm" height="6.7cm"} ![image](comparison_Zr_beta.png){width="9.15cm" height="6.7cm"} ![image](comparison_Zr_vexp.png){width="9.15cm" height="6.7cm"} Here, we analyze how variations in the stellar (T$_{eff}$) and wind ($\dot{M}$, $\beta$ and $v_{exp}$(OH)) parameters influence the output synthetic spectra. Figures \[comparisons\_Rb\] and \[comparisons\_Zr\] show examples of synthetic spectra for different stellar and wind parameters in the spectral regions around the 7800 Å Rb I line and 6474 Å ZrO bandhead, respectively. We note that the fraction of the absorption at 7800 Å due to other species (e.g., TiO) is typically around 20%. The Rb I line profile is very sensitive to the wind mass-loss rate $\dot{M}$ (especially for $\dot{M}$ $\geq$ 10$^{-8}$ M$_{\odot}yr^{-1}$); the Rb I line is significantly deeper and blue-shifted with increasing $\dot{M}$ (Figure \[comparisons\_Rb\], top-left panel). However, the Rb I line profile is much less sensitive to changes of the wind velocity-law ($\beta$ parameter); it becomes only slightly deeper with increasing $\beta$ (Figure \[comparisons\_Rb\], bottom-left panel).
In addition, for $\beta$ values higher than $\sim$1.2 (shallower velocity profiles), the Rb I line profile is not sensitive to variations of the expansion velocity $v_{exp}$(OH) because the velocity profiles are very similar in our extended model atmosphere (up to $\sim$10$^{14}$ cm; see Figure 1). Variations in the expansion velocity $v_{exp}$(OH) mainly affect the blue-shift of the Rb I line and, in addition, for large $v_{exp}$(OH) values the core of the Rb I line is less deep (Figure \[comparisons\_Rb\], bottom-right panel). Finally, the Rb I absorption line is stronger with decreasing effective temperature T$_{eff}$ (as expected; Figure \[comparisons\_Rb\], top-right panel) but this time the wealth of TiO molecular lines and the pseudo-continua are also affected. We note that all these effects (variations in the Rb I profile in terms of depth and blue-shift) are more evident for extreme mass-loss rates ($\dot{M}$ $\geq$ 10$^{-7}$ M$_{\odot}yr^{-1}$) and higher Rb abundances. On the other hand, the ZrO bandhead profile is not sensitive to the wind parameters $\dot{M}$, $\beta$, and $v_{exp}$(OH) (see Figure \[comparisons\_Zr\]; top-left, bottom-left, and bottom-right panels, respectively). The ZrO bandhead profile (as well as the adjacent TiO lines and pseudo-continuum) are, again as expected, stronger with decreasing T$_{eff}$ (Figure \[comparisons\_Zr\], top-right panel). This is because ZrO is formed deeper than Rb I in the atmosphere, being much less affected by the circumstellar envelope and radial wind. Results {#sec:results} ======= As we have mentioned above, there are important degeneracies in the resulting mini-grids of synthetic spectra for each star. Two synthetic spectra with the same $T_{eff}$, $log g$ and $v_{exp}$(OH), but different $\beta$, $\dot{M}$, and \[Rb/Fe\] abundances could be practically identical in spite of the different wind parameters. 
This complicates the abundance analysis because the wind model parameters $\dot{M}$ and $\beta$ are generally not known for stars in our sample. In any case, we can use some observational constraints and previous results on a few similar OH/IR stars to limit the possible variation range of these wind parameters (in particular for the mass-loss rates $\dot{M}$, see below). By using multiple rotationally excited lines of both $^{12}$CO and $^{13}$CO, [@debeck10] provide accurate mass-loss rates $\dot{M}$ for a large sample of Galactic AGB stars. Unfortunately, only one star (IRAS 20077$-$0625) from our present sample of Rb-rich OH/IR massive AGB stars is included in their work, and we cannot fit this star with our pseudo-dynamical models (see below). There are seven massive AGB stars of OH/IR type (WX Psc, V669 Cas, NV Aur, OH 26.5$+$0.6, OH 44.8$-$2.3, IRC $-$10529 and OH 104.9$+$2.4) previously studied in the optical by @garcia-hernandez07. Their variability periods and mass-loss rates range from 552 to 1620 days[^3] and from 9.7$\times$10$^{-6}$ $M_{\odot}yr^{-1}$ to 1.8$\times$10$^{-5}$ $M_{\odot}yr^{-1}$, respectively. Interestingly, all these stars are extremely obscured in the optical, being too red or without optical counterpart[^4]; they have likely already entered the superwind phase. Thus, the $\dot{M}$ values in optically obscured OH/IR AGB stars can be taken as upper limits (i.e. $<$10$^{-6}$ $M_{\odot}yr^{-1}$) for our sample of OH/IR massive AGB stars with optical counterparts, i.e. with useful spectra around the 7800 Å Rb I line. Indeed, we generally find that lower mass-loss rates ($\sim$10$^{-7}-$10$^{-8}$ $M_{\odot}yr^{-1}$) give superior fits to the observed Rb I line profiles. Mass-loss rates of $\sim$10$^{-6}$ $M_{\odot}yr^{-1}$ (or higher) give strong Rb I absorption lines for solar Rb abundances (see also @zamora14), with the consequence that all stars in our sample of OH/IR massive AGB stars would be Rb-poor.
By combining the variability periods from Table 1 and the mass-loss rates estimated from the Rb I line profiles (mainly in the range $\sim$10$^{-7}-$10$^{-8}$ $M_{\odot}yr^{-1}$; see Table 2) into the AGB mass-loss formula by @vassiliadis93 (their Eq. (5)), we obtain reasonable current stellar masses in the range $\sim$2.5$-$6 $M_{\odot}$. In Table \[table\_masses\] we show the mass-loss rates obtained from the best spectral fits ($\dot{M}_{fit}$) and the current stellar masses obtained by using the mass-loss expression from [@vassiliadis93].

  -------------- -------- ---------------------- ---------------
  IRAS name       Period        $\dot{M}$         M$_{current}$
                  (days)   ($M_{\odot}yr^{-1}$)   ($M_{\odot}$)
  -------------- -------- ---------------------- ---------------
  01085$+$3022     560     1.0$\times$10$^{-7}$        4.6
  04404$-$7427     534     1.0$\times$10$^{-7}$        4.3
  05027$-$2158     368     1.0$\times$10$^{-7}$        2.7
  05098$-$6422     394     5.0$\times$10$^{-7}$        2.4
  05151$+$6312     ...     1.0$\times$10$^{-8}$        ...
  06300$+$6058     440     1.0$\times$10$^{-7}$        3.4
  07222$-$2005    1200     ...                         ...
  09194$-$4518     ...     ...                         ...
  10261$-$5055     317     1.0$\times$10$^{-9}$        3.8
  14266$-$4211     389     5.0$\times$10$^{-8}$        3.1
  15193$+$3132     360     1.0$\times$10$^{-9}$        4.2
  15576$-$1212     415     1.0$\times$10$^{-8}$        3.9
  16030$-$5156     579     1.0$\times$10$^{-8}$        5.6
  16037$+$4218     360     1.0$\times$10$^{-7}$        2.6
  17034$-$1024     346     1.0$\times$10$^{-8}$        3.2
  18429$-$1721     481     1.0$\times$10$^{-8}$        4.6
  19059$-$2219     510     5.0$\times$10$^{-8}$        4.3
  19426$+$4342     ...     ...                         ...
  20052$+$0554     450     5.0$\times$10$^{-7}$        2.9
  20077$-$0625     680     ...                         ...
  20343$-$3020     349     1.0$\times$10$^{-9}$        4.1
  -------------- -------- ---------------------- ---------------

  : Mass-loss rates estimated from the best spectral fits and current stellar masses obtained by using the [@vassiliadis93] mass-loss formula (their Eq. (5)).[]{data-label="table_masses"}

The $\beta$ parameter in our models (only up to $\sim$10$^{14}$ cm from the photosphere; see Figure 1) cannot be directly compared with other estimations of this parameter in the literature [e.g.
@decin10; @Danilovich15], which probe much more distant regions of the circumstellar envelope and usually obtain quite high and uncertain values (0 $\leq$ $\beta$ $\leq$ 5.0). However, the effect of the $\beta$ parameter on our synthetic spectra is minor compared to the mass-loss rate $\dot{M}$, and we keep it as a free parameter in our abundance analysis. We note also that the velocity profiles are very similar in our extended model atmosphere for $\beta$ $\geq$ 1.2; i.e., the Rb I line profile is no longer sensitive to variations of the expansion velocity, and the abundance results are very similar for $\beta$ $\geq$ 1.2. We generally find better fits with low $\beta$ values (or steeper velocity profiles; see Table \[table\_beta\_free\]). As mentioned above, the parameters of the hydrostatic models providing the best fit to the observations and the Rb abundances derived are shown in Table \[table\_obs\_param\]. The static models use the solar abundances from [@grevesse98] for computing the Rb abundances [see @garcia-hernandez06; @garcia-hernandez07], while our pseudo-dynamical models use the more recent solar abundances from [@grevesse07]. In [@zamora14] we compared the Rb abundances from static models using [@grevesse98] and [@grevesse07], and the Rb abundances obtained agree within $\sim$0.2 dex in most cases. Figure \[Rb\_Zr\] shows that our pseudo-dynamical atmosphere models reproduce the observed 7800 Å Rb I line profile much better than the classical hydrostatic models in four sample stars (see Appendix \[Append\_sample\] for the rest of the sample stars). On the other hand, the Zr abundances derived from the extended models are similar to those obtained with the hydrostatic models because the 6474 Å ZrO bandhead is formed deeper in the atmosphere and is less affected by the radial velocity field [@zamora14]. We could obtain the Rb and Zr abundances (or upper limits) for 17 sample stars.
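As a cross-check of the current masses listed in Table \[table\_masses\], the inversion of the @vassiliadis93 period–mass-loss relation can be sketched as follows (our own implementation; we assume their relation in the commonly quoted form $\log \dot{M} = -11.4 + 0.0123\,[P - 100\,(M/M_{\odot}-2.5)]$ for $M > 2.5$ $M_{\odot}$, solved here for the current mass $M$):

```python
import math

# Sketch (our own, hedged): invert the Vassiliadis & Wood (1993)
# period - mass-loss relation, assumed here in the form
#   log Mdot = -11.4 + 0.0123 * [P - 100 * (M/Msun - 2.5)],
# to estimate the current stellar mass from P (days) and Mdot (Msun/yr).
def current_mass(period_days, mdot_msun_yr):
    # Period a 2.5 Msun star would need to reach this mass-loss rate.
    p_eff = (math.log10(mdot_msun_yr) + 11.4) / 0.0123
    return 2.5 + (period_days - p_eff) / 100.0

# Examples using entries from Table 2 (reproduced to ~0.1 Msun):
m1 = current_mass(560.0, 1.0e-7)  # IRAS 01085+3022
m2 = current_mass(579.0, 1.0e-8)  # IRAS 16030-5156
```

For IRAS 01085$+$3022 ($P=560$ d, $\dot{M}=10^{-7}$ $M_{\odot}yr^{-1}$) this gives $\sim$4.5 $M_{\odot}$, consistent with the value in Table \[table\_masses\].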
The remaining sample stars (IRAS 07222$-$2005, IRAS 09194$-$4518, IRAS 19426$+$4342 and IRAS 20077$-$0625) seem to display different Rb I line profiles (e.g., with more than one circumstellar contribution or anomalously broad profiles with red-extended wings; see e.g., Figure \[no\_ajustado\]) that cannot be completely reproduced by the present version of our spectral synthesis code. In the two stars (IRAS 07222$-$2005 and IRAS 09194$-$4518) shown in Figure \[no\_ajustado\] we cannot fit the two Rb I components (circumstellar and photospheric) at the same time; e.g., we could only partially fit the blue-shifted circumstellar component by using larger mass-loss rates ($>$10$^{-6}$ $M_{\odot}yr^{-1}$). Curiously, these two stars have the longest periods (see Table \[table\_obs\_param\]) and they may be the most extreme and evolved stars in our sample, where our extended models do not work so well (e.g., because of even more extended atmospheres). It is not completely clear, however, whether the observed profiles are real, because these four sample stars have the lowest quality spectra (*S/N* $<$ 30 at 7800 Å; see Table \[table\_obs\_param\]). For the two sample stars with unknown OH expansion velocity, IRAS 16030$-$5156 and IRAS 17034$-$1024, we explore the velocity range displayed by other stars with similar variability periods (see Table \[table\_obs\_param\]). Similar fits can be obtained for $v_{exp}$(OH) $\sim$ 7$-$12 and 7$-$9 kms$^{-1}$ (in combination with slightly different wind parameters) for IRAS 16030$-$5156 and IRAS 17034$-$1024, respectively, and we thus adopt average velocities of 10 and 8 kms$^{-1}$, respectively, in the abundance analysis (see Table \[table\_beta\_free\]).
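The velocity profiles discussed above are commonly parameterized with a standard $\beta$-type law; the following is a hedged sketch (the exact parameterization implemented in our modified version of Turbospectrum may differ, and the photospheric radius `r0` and initial velocity `v0` below are illustrative placeholders, not fitted values):

```python
def beta_law(r, r0, v0, v_exp, beta):
    """Generic beta-type wind velocity law:
    v(r) = v0 + (v_exp - v0) * (1 - r0/r)**beta.
    A small beta gives a steep acceleration close to the photosphere (r0);
    a large beta gives a slow, gradual acceleration out to v_exp."""
    return v0 + (v_exp - v0) * (1.0 - r0 / r) ** beta

# At r = 2*r0 a beta = 0.2 wind is already much closer to v_exp
# than a beta = 1.2 wind:
v_steep = beta_law(2.0, 1.0, 0.0, 10.0, 0.2)    # ~8.7 (km/s)
v_shallow = beta_law(2.0, 1.0, 0.0, 10.0, 1.2)  # ~4.4 (km/s)
```

This illustrates why low $\beta$ values (steeper profiles) and high $\beta$ values produce different line shapes close to the photosphere, where the Rb I line forms.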
![image](Rb_line.png){width="9.1cm" height="6.7cm"} ![image](Zr_bandhead.png){width="9.1cm" height="6.7cm"}

![image](iras07222.png){width="9.15cm"} ![image](iras09194.png){width="9.15cm"}

Table \[table\_beta\_free\] shows the atmospheric and wind parameters as well as the Rb and Zr abundances (or upper limits) from the best fits to the observed spectra when the wind parameters $\dot{M}$ and $\beta$ are not fixed. In most cases, the best fit is obtained for both low $\beta$ ($\sim$0.2) and low $\dot{M}$ ($\sim$10$^{-9}$-10$^{-7}$ M$_{\odot}$ yr$^{-1}$) values. The new Rb abundances obtained from the extended models are lower than those obtained using the hydrostatic models, and the difference is larger for stars with higher hydrostatic Rb abundances. In addition, this difference is smaller for lower $v_{exp}$(OH) and increases with increasing $v_{exp}$(OH), as expected. On the other hand, in the case of Zr we obtain upper limits mostly between 0.0 and $+$0.25 dex, as derived from the hydrostatic models. Figure \[Rb\_beta\_free\] displays the hydrostatic and pseudo-dynamical Rb abundances versus the OH expansion velocity for the wind parameters that provide the best fits (Table \[table\_beta\_free\]). We plot the Rb abundances versus the expansion velocity because $v_{exp}$(OH) can be used as a mass indicator independent of the distance in OH/IR stars [@garcia-hernandez07]. In addition, in Figure \[Rb\_beta\_free\] we have marked the Li-rich stars [@garcia-hernandez07] with squares. About half of the stars with $v_{exp}$(OH) $>$ 6 km s$^{-1}$ are Li-rich, and most of these stars are also the most Rb-rich ones. We obtain pseudo-dynamical abundances lower than the hydrostatic ones and a worse correlation between[^5] the Rb abundances and $v_{exp}$(OH); the Rb-$v_{exp}$(OH) relationship is flatter (with a higher degeneracy) in the pseudo-dynamical case (see also Sect. \[comparison\_section\]).
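The correlation coefficients quoted for the Rb$-v_{exp}$(OH) relation are standard Pearson coefficients; a minimal sketch, using hypothetical illustrative data points (not our measured values), shows how a flatter, noisier relation yields a lower $r$:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical (v_exp, [Rb/Fe]) pairs for illustration only:
v_exp = [3, 4, 6, 8, 10, 13, 15]
rb_static = [-0.3, 0.1, 0.5, 0.9, 1.4, 2.0, 2.1]  # steep, tight -> higher r
rb_dyn = [-0.5, -0.2, 0.3, 0.0, 0.9, 0.5, 1.2]    # flatter, noisier -> lower r
```

With these illustrative points, `pearson_r(v_exp, rb_static)` exceeds `pearson_r(v_exp, rb_dyn)`, mirroring the behavior of the hydrostatic versus pseudo-dynamical cases.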
  -------------- --------------- --------- --------- ----------------------------------- ----------------------------- ------------------------------ --------------------------- ---------------------------
  IRAS name      $T_{eff}$ (K)   log $g$   $\beta$   $\dot{M}$ (M$_{\odot}$ yr$^{-1}$)   $v_{exp}$(OH) (km s$^{-1}$)   [\[]{}Rb/Fe[\]]{}$_{static}$   [\[]{}Rb/Fe[\]]{}$_{dyn}$   [\[]{}Zr/Fe[\]]{}$_{dyn}$
  -------------- --------------- --------- --------- ----------------------------------- ----------------------------- ------------------------------ --------------------------- ---------------------------
  01085$+$3022   3000\*          $-$0.5    0.2       1.0$\times$10$^{-7}$                13                            2.0                            0.6                         $\leq$ 0.0
  04404$-$7427   3000            $-$0.5    0.2       1.0$\times$10$^{-7}$                8                             1.3                            0.1                         $\leq$ 0.0
  05027$-$2158   2800            $-$0.5    0.4       1.0$\times$10$^{-7}$                8                             0.4                            $-$0.7                      $\leq$ +0.5
  05098$-$6422   3000            $-$0.5    1.4       1.0$\times$10$^{-8}$                6                             0.1                            $-$0.2                      $\leq$ +0.25
  05151$+$6312   3000            $-$0.5    1.0       1.0$\times$10$^{-8}$                15                            2.1                            1.3                         $\leq$ +0.25
  06300$+$6058   3000            $-$0.5    0.2       1.0$\times$10$^{-7}$                12                            1.6                            0.4                         $\leq$ 0.0
  07222$-$2005   3000            $-$0.5    ...       ...                                 8                             0.6                            ...                         ...
  09194$-$4518   3000            $-$0.5    ...       ...                                 11                            1.1                            ...                         ...
  10261$-$5055   3000            $-$0.5    0.2       1.0$\times$10$^{-9}$                4                             $<$ $-$1.0                     $<$ $-$1.1                  $\leq$ +0.25
  14266$-$4211   2900            $-$0.5    0.2       5.0$\times$10$^{-8}$                9                             0.9                            0.1                         $\leq$ 0.0
  15193$+$3132   2800            $-$0.5    1.6       1.0$\times$10$^{-9}$                3                             $-$0.3                         $-$0.5                      $\leq$ 0.0
  15576$-$1212   3000            $-$0.5    0.2       1.0$\times$10$^{-8}$                10                            1.5                            1.0                         $\leq$ 0.0
  16030$-$5156   3000            $-$0.5    0.2       1.0$\times$10$^{-8}$                10                            1.3                            0.6                         $\leq$ +0.25
  16037$+$4218   2900            $-$0.5    1.2       1.0$\times$10$^{-8}$                4                             0.6                            0.2                         $\leq$ +0.25
  17034$-$1024   3300            $-$0.5    0.8       1.0$\times$10$^{-8}$                8                             0.2                            $-$0.7                      $\leq$ 0.0
  18429$-$1721   3000            $-$0.5    0.2       1.0$\times$10$^{-8}$                7                             1.2                            0.9                         $\leq$ +0.25
  19059$-$2219   3000            $-$0.5    0.2       1.0$\times$10$^{-7}$                13                            2.3                            0.5                         $\leq$ +0.25
  19426$+$4342   3000            $-$0.5    ...       ...                                 9                             1.0                            ...                         ...
  20052$+$0554   3000\*          $-$0.5    0.2       5.0$\times$10$^{-7}$                16                            1.5                            0.0                         $\leq$ 0.0
  20077$-$0625   3000            $-$0.5    ...       ...                                 12                            1.3                            ...                         ...
  20343$-$3020   3000            $-$0.5    1.2       1.0$\times$10$^{-9}$                8                             0.9                            0.7                         $\leq$ 0.0
  -------------- --------------- --------- --------- ----------------------------------- ----------------------------- ------------------------------ --------------------------- ---------------------------

  : Atmospheric and wind parameters, and Rb and Zr abundances (or upper limits), from the best fits to the observed spectra when the wind parameters $\dot{M}$ and $\beta$ are left free.[]{data-label="table_beta_free"}

![Rb abundances derived both with hydrostatic (blue dots) and pseudo-dynamical model atmospheres (red triangles) with best-fit parameters plotted against the OH expansion velocity. The Li-rich stars are indicated by squares. A typical error bar of $\pm$0.7 dex is also displayed.[]{data-label="Rb_beta_free"}](Rb_vs_vexp_beta_free.png){width="9.55cm"}

  -------------- --------------- --------- ----------- ----------------------------------------------- ----------------------------- ------------------------------ ----------------------------- -----------------------------
  IRAS name      $T_{eff}$ (K)   log $g$   $\beta$     $\dot{M}$ (M$_{\odot}$ yr$^{-1}$)               $v_{exp}$(OH) (km s$^{-1}$)   [\[]{}Rb/Fe[\]]{}$_{static}$   [\[]{}Rb/Fe[\]]{}$_{dyn}$     [\[]{}Zr/Fe[\]]{}$_{dyn}$
  -------------- --------------- --------- ----------- ----------------------------------------------- ----------------------------- ------------------------------ ----------------------------- -----------------------------
  01085$+$3022   3000\*          $-$0.5    0.2 / 1.2   1.0$\times$10$^{-7}$ / 1.0$\times$10$^{-7}$     13                            2.0                            0.6 / 0.3                     $\leq$ 0.0
  04404$-$7427   3000            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-7}$ / 1.0$\times$10$^{-7}$     8                             1.3                            0.1 / 0.1                     $\leq$ 0.0
  05027$-$2158   2800            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-7}$ / 1.0$\times$10$^{-7}$     8                             0.4                            $-$0.6 / $-$1.1               $\leq$ +0.5
  05098$-$6422   3000            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-8}$ / 5.0$\times$10$^{-7}$     6                             0.1                            $-$0.2 / $-$2.1               $\leq$ +0.25
  05151$+$6312   3000            $-$0.5    0.2 / 1.2   5.0$\times$10$^{-7}$ / 5.0$\times$10$^{-7}$     15                            2.1                            0.0 / $-$0.5                  $\leq$ +0.25
  06300$+$6058   3000            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-7}$ / 1.0$\times$10$^{-7}$     12                            1.6                            0.4 / 0.3                     $\leq$ 0.0
  07222$-$2005   3000            $-$0.5    ...         ...                                             8                             0.6                            ...                           ...
  09194$-$4518   3000            $-$0.5    ...         ...                                             11                            1.1                            ...                           ...
  10261$-$5055   3000            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-9}$ / 1.0$\times$10$^{-9}$     4                             $<$ $-$1.0                     $<$ $-$1.1 / $<$ $-$1.1       $\leq$ +0.25
  14266$-$4211   2900            $-$0.5    0.2 / 1.2   5.0$\times$10$^{-8}$ / 1.0$\times$10$^{-7}$     9                             0.9                            0.1 / $-$0.4                  $\leq$ 0.0 / $\leq$ $-$0.25
  15193$+$3132   2800            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-9}$ / 1.0$\times$10$^{-9}$     3                             $-$0.3                         $-$0.4 / $-$0.4               $\leq$ 0.0
  15576$-$1212   3000            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-8}$ / 1.0$\times$10$^{-8}$     10                            1.5                            1.0 / 0.9                     $\leq$ 0.0
  16030$-$5156   3000            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-8}$ / 1.0$\times$10$^{-7}$     10                            1.3                            0.6 / $-$0.4                  $\leq$ +0.25
  16037$+$4218   2900            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-8}$ / 1.0$\times$10$^{-8}$     4                             0.6                            0.5 / 0.2                     $\leq$ +0.25
  17034$-$1024   3300            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-8}$ / 1.0$\times$10$^{-8}$     8                             0.2                            $-$0.7 / $-$0.8               $\leq$ 0.0
  18429$-$1721   3000            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-8}$ / 5.0$\times$10$^{-7}$     7                             1.2                            0.9 / $-$1.0                  $\leq$ +0.25 / $\leq$ 0.0
  19059$-$2219   3000            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-7}$ / 5.0$\times$10$^{-8}$     13                            2.3                            0.5 / 0.5                     $\leq$ +0.25
  19426$+$4342   3000            $-$0.5    ...         ...                                             9                             1.0                            ...                           ...
  20052$+$0554   3000\*          $-$0.5    0.2 / 1.2   5.0$\times$10$^{-7}$ / 5.0$\times$10$^{-7}$     16                            1.5                            0.0 / $-$0.7                  $\leq$ 0.0
  20077$-$0625   3000            $-$0.5    ...         ...                                             12                            1.3                            ...                           ...
  20343$-$3020   3000            $-$0.5    0.2 / 1.2   1.0$\times$10$^{-9}$ / 1.0$\times$10$^{-9}$     8                             0.9                            0.7 / 0.7                     $\leq$ 0.0
  -------------- --------------- --------- ----------- ----------------------------------------------- ----------------------------- ------------------------------ ----------------------------- -----------------------------

  : Wind parameters and Rb and Zr abundances obtained when fixing $\beta$ to 0.2 and 1.2 (double entries are quoted as $\beta$ = 0.2 / $\beta$ = 1.2).[]{data-label="table_beta_test"}

Also, we have carried out several tests with different $\beta$ and $\dot{M}$ values in order to check the sensitivity of the derived abundances to variations of the wind parameters. In Table \[table\_beta\_test\] we present the wind parameters and Rb abundances obtained when fixing $\beta$ to 0.2 and 1.2. Basically, the Rb abundances are lower in the $\beta$ = 1.2 case because a higher $\beta$ deepens the Rb I 7800 $\AA$ line for the same Rb abundance (see Figure \[comparisons\_Rb\]); in a few cases, however, the $\dot{M}$ of the best fit also changes, further affecting the determination of the Rb abundance. The Zr abundances (or upper limits) are similar in most cases; the upper limits only change when the best-fit $\dot{M}$ is not the same for the $\beta$ = 0.2 and 1.2 cases. Figure \[Rb\_beta\_fixed\] shows the Rb abundances obtained versus $v_{exp}$(OH) for $\beta$ = 0.2 and $\beta$ = 1.2. By comparing the Rb abundances from Figure \[Rb\_beta\_free\] and Figure \[Rb\_beta\_fixed\], it is clear that the Rb abundances are slightly lower in the $\beta$ = 1.2 case. Moreover, the correlation between the pseudo-dynamical Rb abundances and $v_{exp}$(OH) for different $\beta$ values is worse (i.e., flatter) than in the hydrostatic case.
In addition, the dispersion seems to be larger in the $\beta$ = 1.2 case.

![Rb abundances derived both with hydrostatic (blue dots) and extended model atmospheres with $\beta$ = 0.2 (magenta triangles) and $\beta$ = 1.2 (green squares) plotted against the OH expansion velocity.[]{data-label="Rb_beta_fixed"}](Rb_vs_vexp_betas.png){width="9.55cm"}

On the other hand, Figure \[Rb\_mass\_fixed\] displays the Rb results when $\dot{M}$ is fixed to 10$^{-8}$, 10$^{-7}$ and 10$^{-6}$ M$_{\odot}$ yr$^{-1}$. This would be equivalent to assuming that our AGB sample stars are at a similar evolutionary stage in terms of mass loss; of course, there is a strong degeneracy between the progenitor masses and the mass loss/evolutionary stage. In the particular case of the Li-rich AGB stars, statistical arguments suggest that these stars should have a narrow initial mass range [see @dicriscienzo16]: 4$-$5 or 5$-$6 M$_{\odot}$ according to the most recent ATON [@dicriscienzo16] or Monash [@karakaslugaro16] AGB nucleosynthesis models, respectively. The current stellar masses from Table \[table\_masses\] show, however, that there is a complicated interplay (degeneracy) between Li enhancement, progenitor mass and mass-loss rate, and that the progenitor mass range of these stars may actually be broader; e.g., their current stellar mass and Li abundance ranges are $\sim$2.7$-$5.6 M$_{\odot}$ and log$\varepsilon$(Li) $\sim$ 0.7$-$2.6 dex, respectively. Figure \[Rb\_mass\_fixed\] shows that the Rb abundances decrease with increasing $\dot{M}$ and that the dispersion of the Rb abundances is much lower when fixing $\dot{M}$. The slopes (and correlation coefficients) of the Rb-$v_{exp}$(OH) correlations are more similar to those obtained with hydrostatic models.
The Rb abundances from extended models approach the hydrostatic ones with decreasing $\dot{M}$ (both sets of Rb abundances are identical for $\dot{M}$ $\leq$ 10$^{-9}$ M$_{\odot}$ yr$^{-1}$) because the atmosphere is less extended for lower $\dot{M}$, as expected. Finally, we fixed both $\dot{M}$ and $\beta$, which would be equivalent to assuming that our AGB sample stars share the same mass-loss stage and velocity profile. Figure \[Rb\_beta02\_mass\_fixed\] displays the pseudo-dynamical Rb abundances versus $v_{exp}$(OH) for $\beta$ = 0.2 and $\dot{M}$ values of 10$^{-8}$, 10$^{-7}$ and 10$^{-6}$ M$_{\odot}$ yr$^{-1}$. The Rb results when fixing both $\dot{M}$ and $\beta$ are very similar (with a slightly tighter correlation with $v_{exp}$(OH)) to those obtained when only fixing $\dot{M}$ (see Figure \[Rb\_mass\_fixed\]) because $\beta$ = 0.2 is the most common value obtained from the best spectral fits (all wind parameters free). An exception is the AGB star IRAS 15193$+$3132 (with the lowest $v_{exp}$(OH) and a high $\beta$), for which only an upper limit to the Rb abundance (\[Rb/Fe\] $\leq$ 0.7) could be obtained, because the pseudo-dynamical model does not converge for such an unusual combination of wind parameters ($\dot{M}$ = 10$^{-6}$ M$_{\odot}$ yr$^{-1}$; $\beta$ = 0.2; $v_{exp}$(OH) = 3 km s$^{-1}$) coupled with \[Rb/Fe\] $<$ 0.7 dex (see Figure \[Rb\_beta02\_mass\_fixed\]).

![Rb abundances vs. the expansion velocity ($v_{exp}$(OH)) for extended model atmospheres with $\dot{M}$ = 10$^{-8}$, 10$^{-7}$ and 10$^{-6}$ M$_{\odot}$ yr$^{-1}$ (green triangles, yellow squares and cyan diamonds, respectively) in comparison with those obtained from hydrostatic models (blue dots).[]{data-label="Rb_mass_fixed"}](Rb_vs_vexp_masas.png){width="9.55cm"} ![Rb abundances vs.
the expansion velocity ($v_{exp}$(OH)) for extended model atmospheres with $\beta$ = 0.2 and $\dot{M}$ = 10$^{-8}$, 10$^{-7}$ and 10$^{-6}$ M$_{\odot}$ yr$^{-1}$ (green triangles, yellow squares and cyan diamonds, respectively) in comparison with those obtained from hydrostatic models (blue dots).[]{data-label="Rb_beta02_mass_fixed"}](Rb_vs_vexp_beta02_masas.png){width="9.55cm"}

Figure \[rb\_vs\_period\] displays the \[Rb/Fe\] abundances from the best spectral fits versus the variability periods *P*. As already mentioned, our sample is composed of AGB stars of different progenitor masses and evolutionary stages. Most stars with *P* $>$ 400 days are Li-rich and present some Rb enhancement[^6], which suggests that, on average, these stars are more massive stars experiencing HBB and/or more evolved stars (because of their longer periods) than the group of non-Li-rich (and generally Rb-poor) stars with *P* $<$ 400 days. The stars IRAS 05027$-$2158 (P $=$ 368 days) and IRAS 20343$-$3020 (P $=$ 349 days) are exceptions in the latter group. IRAS 05027$-$2158 is slightly Li-rich and Rb-poor, suggesting that it is a relatively massive AGB star (say $\sim$3.5$-$4.5 M$_{\odot}$[^7]) at the beginning of the TP phase (e.g., in an inter-pulse period just before or after the super Li-rich phase) that is not yet evolved enough for efficient Rb production [see @garcia-hernandez13]. On the other hand, IRAS 20343$-$3020 is slightly Rb-rich and Li-poor, which suggests a more advanced evolutionary stage and a slightly higher initial mass (say $\sim$4.0$-$5 M$_{\odot}$) than IRAS 05027$-$2158 [see Fig. 1 in @garcia-hernandez13]. ![\[Rb/Fe\] pseudo-dynamical abundances versus variability period (*P*). The Li-rich and Li-poor stars are marked with magenta dots and green triangles, respectively.
The two stars where Li could not be estimated are marked with cyan dots (see text).[]{data-label="rb_vs_period"}](rb_vs_period.png){width="9.1cm"}

Comparison with AGB nucleosynthesis models {#comparison_section}
==========================================

In Figure \[models\_RbZr\_M\] we compare our new \[Rb/Fe\] abundances and \[Rb/Zr\] ratios with solar-metallicity massive (3$-$9 M$_{\odot}$) AGB predictions from several nucleosynthesis models: [@vanraai12], [@karakas12], [@karakaslugaro16] (Monash), [@pignatari16] (NuGrid/MESA) and [@cristallo15] (FRUITY[^8]). The ranges of predicted \[Rb/Fe\] abundances and \[Rb/Zr\] ratios are 0.00$-$1.35 and $-$0.45 to $+$0.52 dex, respectively. The Monash models [@vanraai12; @karakas12; @karakaslugaro16] use the stellar evolutionary sequences calculated with the Monash version of the Mount Stromlo Stellar Structure Program [@frost96], which uses the [@vassiliadis93] mass-loss prescription on the AGB. A post-processing code is then used to compute in detail the nucleosynthesis of a large number of species, including the *s*-process abundances. Owing to convergence difficulties, the stellar evolution models used in the calculations are not always evolved until the end of the superwind phase, and synthetic models have been used to estimate the effect of the remaining TPs and to completely remove the envelope. We refer the reader to [@vanraai12], [@karakas12] and [@karakaslugaro16] for more details about the theoretical models.
Here we only summarize the main differences between these models, which are basically the following: i) the use of different nuclear networks, i.e., the total number of nuclear species considered and the values of some reaction rates and neutron-capture cross sections (see below); and ii) the use by [@karakas12] of a modified [@vassiliadis93] mass-loss prescription, which delays the beginning of the superwind phase until the pulsation period reaches values of 700$-$800 days (instead of the value of 500 days used in the other models), resulting in a higher Rb production. The NuGrid/MESA and FRUITY models assume AGB mass-loss prescriptions, nuclear physics inputs and treatments of convection different from those of the Monash models. In particular, the @blocker95 and @straniero06 mass-loss formulae for the AGB phase are assumed by the NuGrid/MESA and FRUITY models, respectively. Furthermore, these models produce the $^{13}$C neutron source self-consistently as a result of their different convective boundary mixing schemes and treatments of the convective borders, while in the Monash models the mixing required to produce the $^{13}$C neutron source is included in a parametrized way during the post-processing, and is typically not included in massive AGB stars, following theoretical [@gorielysiess04] and observational [@garcia-hernandez13] indications [see also @pignatari16; @cristallo15; @karakaslugaro16 for more details].
Regarding the main results: i) the NuGrid/MESA solar-metallicity massive AGB models are qualitatively similar to the Monash models in terms of HBB and of the light *s*-process element production (of the elements from Rb to Zr) seen at the stellar surface, the latter due to the activation of the $^{22}$Ne neutron source and the subsequent operation of the TDU; and ii) the FRUITY solar-metallicity massive AGB models differ from the Monash and NuGrid/MESA models because they experience very inefficient TDU; hence the signature of the nucleosynthesis due to the $^{22}$Ne neutron source is not visible at the stellar surface. Figure \[models\_RbZr\_M\] shows that the FRUITY massive AGB models predict final \[Rb/Fe\] $<$ 0.15, which does not explain the observed range of Rb abundances and \[Rb/Zr\] ratios; specifically, the \[Rb/Zr\] ratios remain negative for all masses. Another difference between the FRUITY models and the Monash and NuGrid/MESA models is that the FRUITY models do not predict HBB to occur in AGB stars unless the metallicity is very low, at least ten times lower than solar. However, spectroscopic observations of massive AGB stars demonstrate that they experience HBB, as evidenced by: i) the strong Li overabundances observed in massive AGB stars in the Galaxy [Fe/H=0.0; e.g., @garcia-hernandez07; @garcia-hernandez13], the Magellanic Clouds [Fe/H=$-$0.7$-$$-$0.3; e.g., @plez93; @smith95; @garcia-hernandez09] and the dwarf galaxy IC 1613 [Fe/H=$-$1.6; e.g., @menzies15]; and ii) the N enhancements and low $^{12}$C/$^{13}$C ratios in Magellanic Cloud Li-rich massive AGBs [e.g., @plez93; @mcsaveney07]. The lack of HBB in the FRUITY predictions is also at odds with the observations of the so-called type I planetary nebulae in very different metallicity environments and galaxies, which are expected to be the descendants of HBB massive AGB stars on the basis of their strong N and He overabundances [see e.g.
@stanghellini06; @karakas09; @leisy96; @garcia-rojas16 and references therein]. We note here that the various Monash AGB models [@vanraai12; @karakas12; @karakaslugaro16] mentioned above use notably different rates for the $^{22}$Ne($\alpha,n$)$^{25}$Mg reaction, which drives the production of *s*-process elements in massive AGB stars. In particular, [@karakaslugaro16] use the $^{22}$Ne($\alpha, n$)$^{25}$Mg reaction rate from [@iliadis10], the neutron-capture cross sections of the Zr isotopes from [@lugaro14], and a more extended nuclear network of 328 species (from H to S, and then from Fe to Bi). The [@vanraai12] models, instead, use a nuclear network of 166 species (up to Nb) and the $^{22}$Ne($\alpha, n$)$^{25}$Mg reaction rate from [@karakas06], while @karakas12 explored different networks (166, 172 and 320 species) and $^{22}$Ne($\alpha, n$)$^{25}$Mg reaction rates: those of @karakas06, [@iliadis10] and [@angulo99] (NACRE). The [@vanraai12] models (from 4 to 6.5 M$_{\odot}$ at $Z$ = 0.02; Fig. \[models\_RbZr\_M\]) show that both the \[Rb/Fe\] abundances and the \[Rb/Zr\] ratios increase with the initial mass of the AGB star, as the star becomes hotter and the $^{22}$Ne($\alpha, n$)$^{25}$Mg reaction is more efficiently activated. However, the \[Rb/Fe\] abundances at the last computed TP are too low (ranging from 0.0 to 0.26 dex). The corresponding Rb abundances (\[Rb/Fe\] $\sim$ 0.0$-$1.0 dex) from the synthetic evolution calculations cover most of the Rb abundances observed, although they cannot explain the star IRAS 05151$+$6312 with \[Rb/Fe\] = 1.3 dex. Such high Rb abundances can be reached by the synthetic calculations of the solar-metallicity 6 and 7 M$_{\odot}$ AGB models with delayed superwinds of [@karakas12] when using the faster NACRE rate for the $^{22}$Ne($\alpha, n$)$^{25}$Mg reaction. Finally, the [@karakaslugaro16] models (from 4.5 to 8 M$_{\odot}$ at $Z$ = 0.014[^9]; Fig.
\[models\_RbZr\_M\]) predict lower Rb abundances than the [@karakas12] models of the same mass and similar metallicity, mostly because of the implementation of the delayed superwind and the use of the NACRE rate in [@karakas12]. The NuGrid/MESA models (from 3 to 5 M$_{\odot}$ at $Z$ = 0.02; Fig. \[models\_RbZr\_M\]) reproduce the observed \[Rb/Fe\] and \[Rb/Zr\] ranges, up to 0.9 and 0.4 dex, respectively. However, we note that only the 5 M$_{\odot}$ NuGrid/MESA model shows the signature of HBB and predicts an O-rich star. The 3 and 4 M$_{\odot}$ models become C-rich stars and do not experience HBB, which is at odds with our sample of O-rich stars [@garcia-hernandez06]. Regarding the \[Rb/Zr\] ratios, also in this case the highest \[Rb/Zr\] ratios are obtained from the models with a delayed superwind (P = 700$-$800 days); however, these \[Rb/Zr\] ratios are still lower than our observed values. The maximum value from the AGB models is \[Rb/Zr\] = 0.52 for M = 5 M$_{\odot}$, while the maximum value from our observations is \[Rb/Zr\] = 1.05. A possible explanation is that Zr could be depleted into dust [see e.g. @vanraai12; @zamora14], producing the differences between the theoretical and observational \[Rb/Zr\] ratios. Abundance measurements of similar *s*-process elements such as Sr and Y would be needed in order to clarify this problem.

![image](Rb_M_stars.png){width="9.1cm"} ![image](RbZr_M_stars.png){width="9.1cm"}

Conclusions
===========

We have reported new Rb and Zr abundances determined from the 7800 $\AA$ Rb I line and the 6474 $\AA$ ZrO bandhead, respectively, in a complete sample of massive Galactic AGB stars previously studied with hydrostatic models, using more realistic extended atmosphere models and a modified version of the spectral synthesis code Turbospectrum that considers the presence of a circumstellar envelope with a radial wind.
The Rb abundances are much lower (in some cases by 1$-$2 dex) with the pseudo-dynamical models, while the Zr abundances are close to the hydrostatic ones because the 6474 Å ZrO bandhead is formed deeper in the atmosphere and is less affected than the 7800 Å Rb I resonance line by the circumstellar effects. We have studied the sensitivity of the determined abundances to variations in the stellar (T$_{eff}$) and wind ($\dot{M}$, $\beta$ and $v_{exp}$) parameters. The Rb abundances are very sensitive to the mass-loss rate $\dot{M}$ but much less so to the $\beta$ parameter and $v_{exp}$(OH). The Zr abundances, instead, are not affected by variations of the stellar and wind parameters. The Rb abundances from extended models are lower than those obtained from the hydrostatic ones, and the difference is larger in the stars with the highest Rb abundances in the hydrostatic case. We have plotted the hydrostatic and pseudo-dynamical Rb abundances against $v_{exp}$(OH), which can be used as a mass indicator independent of the distance, and we find a flatter correlation in the pseudo-dynamical case. The difference between the hydrostatic and pseudo-dynamical Rb abundances increases with increasing $v_{exp}$(OH), because the presence of a circumstellar envelope more strongly affects the more massive stars. Furthermore, the dispersion of the correlation between the Rb abundance and $v_{exp}$(OH) is larger in the pseudo-dynamical case. When we fix the wind parameter $\dot{M}$ (i.e., equivalent to assuming that our AGB sample stars are at a similar evolutionary stage in terms of mass loss) and/or $\beta$ (the same velocity profile), the dispersion is lower. The Monash nucleosynthesis predictions reproduce the range of the new Rb and Zr abundances, although \[Rb/Fe\] values above 1.0 can be matched only if the superwind is delayed until the period reaches 700$-$800 days.
We also note that the rate of the $^{22}$Ne($\alpha,n$)$^{25}$Mg reaction is crucial, but still hampered by large systematic uncertainties [see e.g. @bisterzo16; @massimi17]. Underground measurements, planned, e.g., at LNGS-LUNA (Laboratory for Underground Nuclear Astrophysics), will help to resolve the current issues. The FRUITY massive AGB models predict Rb abundances much lower than observed and negative \[Rb/Zr\] ratios, at odds with the observations. The NuGrid/MESA models of 4 and 5 M$_{\odot}$ predict \[Rb/Fe\] as high as 0.9 dex; however, the 4 M$_{\odot}$ model does not experience HBB and becomes C-rich, while our sample stars are clearly O-rich. The maximum observed \[Rb/Zr\] ratios are still more than a factor of two larger than predicted by the nucleosynthesis models. A possible explanation for this difference between the observations and the predictions is that Zr could be depleted into dust. Observations of other *s*-process elements, such as Sr and Y, belonging to the same first peak as Rb and Zr will help clarify this mismatch. In summary, the \[Rb/Fe\] abundances and \[Rb/Zr\] ratios previously derived with hydrostatic models are certainly not predicted by the most recent theoretical models of AGB nucleosynthesis. In particular, the highest \[Rb/Fe\] abundances and \[Rb/Zr\] ratios observed in massive Galactic AGBs are much larger than theoretically predicted. The new \[Rb/Fe\] abundances and \[Rb/Zr\] ratios obtained from our simple (but more realistic) pseudo-dynamical model atmospheres are much lower, in much better agreement with the theoretical predictions, significantly resolving the mismatch between the observations and the nucleosynthesis models in the more massive AGB stars. This confirms the earlier preliminary results of Zamora et al. on a smaller sample of massive O-rich AGB stars, but here we find that the Rb abundances are strongly dependent on the wind mass-loss rate $\dot{M}$, which is basically unknown for our AGB sample stars.
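Since bracket abundance ratios are logarithmic (dex), the gap between the maximum observed and maximum predicted \[Rb/Zr\] translates into a linear factor of 10$^{\Delta}$; a quick numerical check of the "more than a factor of two" statement above:

```python
# Maximum [Rb/Zr] values quoted in the comparison section (dex):
max_observed = 1.05   # maximum observed [Rb/Zr]
max_predicted = 0.52  # maximum model [Rb/Zr]

# Dex differences convert to linear ratios as 10**delta:
linear_factor = 10.0 ** (max_observed - max_predicted)
# 10**0.53 ~ 3.4, i.e. the observed maximum exceeds the predicted one
# by more than a factor of two in linear terms.
```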
Follow-up radio observations (e.g., of the rotational lines of the several CO isotopologues) of these massive Galactic AGB stars are encouraged in order to obtain precise mass-loss rate estimates, which are needed to break the current model degeneracy and obtain more reliable (less model-dependent) Rb abundances in massive AGB stars.

This work is based on observations at the 4.2 m William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de Los Muchachos of the Instituto de Astrofisica de Canarias. Also based on observations with the ESO 3.6 m telescope at La Silla Observatory (Chile). We thank Marco Pignatari and Umberto Battino for providing information about the Nugrid/MESA models. V.P.M. acknowledges the financial support from the Spanish Ministry of Economy and Competitiveness (MINECO) under the 2011 Severo Ochoa Program MINECO SEV-2011-0187. D.A.G.H. was funded by the Ramón y Cajal fellowship number RYC-2013-14182. V.P.M., O.Z., D.A.G.H. and A.M. acknowledge support provided by the MINECO under grant AYA-2014-58082-P. M.L. is a Momentum (“Lendület-2014” Programme) project leader of the Hungarian Academy of Sciences. M.L. acknowledges the Instituto de Astrofísica de Canarias for inviting her as a Severo Ochoa visitor during 2015 August, when part of this work was done. This paper made use of the IAC Supercomputing facility HTCondor (http://research.cs.wisc.edu/htcondor/), partly financed by the Ministry of Economy and Competitiveness with FEDER funds, code IACA13-3E-2493. This work benefited from discussions at The 12th Torino Workshop on Asymptotic Giant Branch Stars in August 2016, supported by the National Science Foundation under Grant No. PHY-1430152 (JINA Center for the Evolution of the Elements).

Abia, C., Busso, M., Gallino, R., et al. 2001, , 559, 1117 Alvarez, R., & Plez, B. 1998, , 330, 1109 Angulo, C., Arnould, M., Rayet, M., et al.
1999, Nuclear Physics A, 656, 3 Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, , 47, 481 Beer, H., & Macklin, R. L. 1989, , 339, 962 Bisterzo, S., Travaglio, C., Wiescher, M., et al. 2016, Journal of Physics Conference Series, 665, 012023 Busso, M., Gallino, R., & Wasserburg, G. J. 1999, , 37, 239 Blöcker, T. 1995, A&A, 297, 727 Cristallo, S., Piersanti, L., Straniero, O., et al. 2011, , 197, 17 Cristallo, S., Straniero, O., Piersanti, L., & Gobrecht, D. 2015, , 219, 40 Danilovich, T., Teyssier, D., Justtanont, K. et al. 2015, A&A, 581, A60 De Beck, E., Decin, L., de Koter, A., et al. 2010, , 523, A18 Decin, L., Justtanont, K., De Beck, E., et al. 2010, , 521, L4 Di Criscienzo, M., Ventura, P., Garc[í]{}a-Hern[á]{}ndez, D. A., et al. 2016, , 462, 395 Frost, C. A., & Lattanzio, J. C. 1996, , 473, 383 Garc[í]{}a-Hern[á]{}ndez, D. A., Garc[í]{}a-Lario, P., Plez, B., et al. 2006, Science, 314, 1751 Garc[í]{}a-Hern[á]{}ndez, D. A., Garc[í]{}a-Lario, P., Plez, B., et al. 2007, , 462, 711 Garc[í]{}a-Hern[á]{}ndez, D. A., Manchado, A., Lambert, D. L., et al. 2009, , 705, L31 Garc[í]{}a-Hern[á]{}ndez, D. A., Zamora, O., Yag[ü]{}e, A., et al.  2013, , 555, L3 Garc[í]{}a-Rojas, J., Peña, M., Flores-Durán, S., Hernández-Mart[í]{}nez, L., 2016, A&A, 586, A59 Goriely, S., & Siess, L. 2004, , 421, L25 Grevesse, N., & Sauval, A. J. 1998, , 85, 161 Grevesse, N., Asplund, M., & Sauval, A. J. 2007, , 130, 105 Gustafsson, B., Edvardsson, B., Eriksson, K., et al. 2008, , 486, 951 Herwig, F. 2005, , 43, 435 Iliadis, C., Longland, R., Champagne, A. E., & Coc, A. 2010, Nuclear Physics A, 841, 323 Hoppe, P., & Ott, U. 1997, American Institute of Physics Conference Series, 402, 27 Justtanont, K., Teyssier, D., Barlow, M. J., et al. 2013, , 556, A101 Karakas, A., & Lattanzio, J. C. 2007, , 24, 103 Karakas, A. I., & Lattanzio, J. C. 2014, , 31, e030 Karakas, A. I., van Raai, M. A., Lugaro, M., Sterling, N. C., Dinerstein, H. L. 2009, ApJ, 690, 1130 Karakas, A. I., Campbell, S. 
W., & Stancliffe, R. J. 2010, , 713, 374 Karakas, A. I., Garc[í]{}a-Hern[á]{}ndez, D. A., & Lugaro, M. 2012, , 751, 8 Karakas, A. I. 2014, , 445, 347 Karakas, A. I., & Lugaro, M. 2016, , 825, 26 Karakas, A. I., Lugaro, M. A., Wiescher, M., G[ö]{}rres, J., & Ugalde, C. 2006, , 643, 471 Lambert, D. L., Smith, V. V., Busso, M., Gallino, R., & Straniero, O. 1995, , 450, 302 Leisy, P., & Dennefeld, M. 1996, A&AS, 116, 95 Lugaro, M., & Chieffi, A. 2011, Lecture Notes in Physics, Berlin Springer Verlag, 812, 83 Lugaro, M., Karakas, A. I., Stancliffe, R. J., & Rijs, C. 2012, , 747, 2 Lugaro, M., Tagliente, G., Karakas, A. I., et al. 2014, , 780, 95 Lugaro, M., Karakas, A. I., Bruno, C. G., et al. 2017, Nature Astronomy, 1, 0027 Massimi, C., Altstadt, S., Andrzejewski, J., et al. 2017, Physics Letters B, 768, 1 Mazzitelli, I., D’Antona, F., & Ventura, P. 1999, , 348, 846 McSaveney, J. A., Wood, P. R., Scholz, M., Lattanzio, J. C., Hinkle, K. H. 2007, MNRAS, 378, 1089 Menzies, J. W., Whitelock, P. A., & Feast, M. W. 2015, MNRAS, 452, 910 Nittler, L. R., Alexander, O., Gao, X., Walker, R. M., & Zinner, E. 1997, , 483, 475 Nordlund, A. 1984, in Methods in Radiative Transfer, ed W. Kalkofen (Cambridge, New York: Cambridge University Press), 211 Pignatari, M., Herwig, F., Hirschi, R., et al. 2016, ApJS, 225, 24 Plez, B. 2012, Turbospectrum: Code for spectral synthesis, Astrophysics Source Code Library 1205.004 Plez, B., Smith, V. V., & Lambert, D. L. 1993, , 418, 812 Sackmann, I.-J., & Boothroyd, A. I. 1992, , 392, L71 Samus, N. N., Durlevich, O. V., et al. 2009, VizieR Online Data Catalog, 1 Smith, V. V., Plez, B., Lambert, D. L., Lubowich, D. A. 1995, ApJ, 441, 735 Stanghellini, L., Guerrero, M. A., Cunha, K., Manchado, A., Villaver, E. 2006, ApJ, 651, 898 Straniero, O., Gallino, R., & Cristallo, S. 2006, NuPhA, 777, 311 van Raai, M. A., Lugaro, M., Karakas, A. I., Garc[í]{}a-Hern[á]{}ndez, D. A., & Yong, D. 2012, , 540, A44 Vassiliadis, E., & Wood, P. R.
1993, , 413, 641 Watson, C. L. 2006, Society for Astronomical Sciences Annual Symposium, 25, 47 Wood, P. R., Bessell, M. S., & Fox, M. W. 1983, , 272, 99 Zamora, O., Garc[í]{}a-Hern[á]{}ndez, D. A., Plez, B., & Manchado, A. 2014, , 564, L4 Complete sample {#Append_sample} =============== ![image](Rb_line_appendix1.png){width="9.1cm" height="6.5cm"} ![image](Zr_bandhead_appendix1.png){width="9.1cm" height="6.5cm"} ![image](Rb_line_appendix2.png){width="9.1cm" height="6.5cm"} ![image](Zr_bandhead_appendix2.png){width="9.1cm" height="6.5cm"} ![image](Rb_line_appendix3.png){width="9.1cm" height="6.5cm"} ![image](Zr_bandhead_appendix3.png){width="9.1cm" height="6.5cm"} ![image](Rb_line_appendix4.png){width="9.1cm" height="6.5cm"} ![image](Zr_bandhead_appendix4.png){width="9.1cm" height="6.5cm"} [^1]: The high-resolution spectra were obtained using the Utrecht Echelle Spectrograph (UES) at the 4.2 m William Herschel Telescope (La Palma, Spain) and the CAsegrain Echelle SPECtrograph (CASPEC) of the ESO 3.6 m telescope (La Silla, Chile) during several observing periods in 1996-97 [see @garcia-hernandez07]. [^2]: The synthetic spectra are convolved with a Gaussian profile (with a certain FWHM typically between 250 and 400 mÅ) to account for macroturbulence as well as instrumental profile effects. [^3]: The variability periods of our sample stars are also lower, from $\sim$320 to 580 days (only two stars display periods in excess of 580 days; see Table 1). [^4]: The only exception is WX Psc as already noted by @zamora14. This star (with a mass-loss rate of $\sim$1.8 $\times$ 10$^{-5}$ $M_{\odot}yr^{-1}$, @debeck10; @justtanont13) has an extremely faint optical counterpart. The S/N around 7800 Å is too low for an abundance analysis but a strong Rb I absorption line is clearly detected in its optical spectrum. [^5]: The correlation coefficients are $r=$ 0.84 and 0.54 for the hydrostatic and pseudo-dynamic cases, respectively. 
[^6]: The only exceptions are IRAS 04404$-$7427 and IRAS 19059$-$2219, whose optical counterparts are too red to estimate their Li abundances [i.e., the S/N at 6708 Å is too low; see @garcia-hernandez07].
[^7]: The initial mass for HBB activation is model dependent; i.e., at solar metallicity HBB is activated at $\sim$3.5 and 4.5 M$_{\odot}$ depending on the mass-loss and convection prescriptions used in the models [see e.g. @garcia-hernandez13 for more details].
[^8]: FUll-Network Repository of Updated Isotopic Tables and Yields: http://fruity.oa-teramo.inaf.it/.
[^9]: According to the more recent solar abundances from @asplund09.
--- abstract: 'Unpolarized 1.047 GeV proton inelastic scatterings from the Ni isotopes $^{62}$Ni and $^{64}$Ni are analyzed phenomenologically, employing an optical potential model and the first-order collective model in the relativistic Dirac coupled channel formalism. The Dirac equations are reduced to Schrödinger-like second-order differential equations, and the effective central and spin-orbit optical potentials are analyzed by considering their mass-number dependence. Multistep excitation via the $2^+$ state is found to be important for the $4^+$ state excitation in the ground state rotational band in the proton inelastic scatterings from the Ni isotopes. The calculated deformation parameters for the 2$^+$ and 4$^+$ states of the ground state rotational band and the first 3$^-$ state are found to agree pretty well with those obtained in the nonrelativistic calculations.' author: - Sugie date: Received 2018 title: 'Dirac Phenomenological Analyses of 1.047 GeV Proton Inelastic Scatterings from $^{62}$Ni and $^{64}$Ni' --- [^1] INTRODUCTION ============ Relativistic Dirac approaches based on the Dirac equation have been very successful in describing intermediate-energy proton scatterings from nuclei, achieving better agreement with the experimental data than the nonrelativistic approaches based on the Schrödinger equation [@1; @2; @3; @4; @5; @6; @7; @8; @9; @10]. However, it is still necessary to analyze more nuclear scattering data using the Dirac approach in order to complete the systematic Dirac analyses and eventually to provide a reliable basis for replacing the nonrelativistic Schrödinger approach with the relativistic Dirac approach in the analyses of nuclear scatterings. In this work we performed a relativistic Dirac coupled channel analysis for the inelastic proton scatterings from the Ni isotopes $^{62}$Ni and $^{64}$Ni, using an optical potential model [@1] and the first-order collective model.
This work is a follow-up of our previous publication on the Dirac phenomenological analyses of the inelastic proton scatterings from the other Ni isotopes, $^{58}$Ni and $^{60}$Ni [@11]. Ni isotopes are of interest because they are known to have a doubly closed shell ($N=Z=28$) surrounded by only a few off-shell neutrons [@12]. The Dirac optical potential and the deformation parameters are searched to fit the experimental data using a computer program called ECIS [@13], where a Numerov method is employed to solve the complicated Dirac coupled channel equations. The Dirac equations are reduced to Schrödinger-like second-order differential equations, and the effective central and spin-orbit optical potentials are analyzed by considering the mass-number dependence. Theory and Results ================== Dirac phenomenological analyses are performed for the 1.047 GeV unpolarized proton inelastic scatterings from the Ni isotopes $^{62}$Ni and $^{64}$Ni, by employing an optical potential model and a first-order collective model. Ni isotopes are of interest because they have the closed proton shell, $Z=28$. They are known to have a closed $1f_{7/2}$ proton shell with a few off-shell neutrons outside the closed neutron $1f_{7/2}$ shell [@12]. $^{62}$Ni and $^{64}$Ni are spin-0 nuclei, and most of the theoretical procedures for the Dirac phenomenological calculation for proton scatterings from spin-0 nuclei are given in our previous publications [@3; @4; @8; @9; @10; @11; @14; @15]. Hence, they are omitted in this paper. The Dirac equation may be rewritten as two coupled equations for the upper ($\Psi_u$) and lower ($\Psi_l$) components of the Dirac wave function, $\Psi (r)$, and we let $$\Psi_u (r) = K(r)\psi(r) , \hspace{.5in} K(r)=A^{1/2} \exp [\int iU_V^r (r) dr ] \label{e3}$$ where $K(r)\rightarrow 1$ as $r \rightarrow \infty $, $A=(m+U_S +E-U_V^0 )/(m+E) $.
Here, $U_S$ is the scalar potential, and $U_V^r$ and $U_V^0$ are the space-like and the time-like vector potentials, respectively. Under this wave-function transformation we obtain the following Schrödinger-like second-order differential equation for $\psi(r)$, which can be compared with the conventional nonrelativistic Schrödinger equation. $$[p^2 + 2E(U_{cent} +U_{SO} {\bf \sigma} \cdot {\bf L} ) ]\psi (r)=[(E-V_c )^2- m^2 - \frac{2U_{AM} }{r} -\frac{\partial U_{AM} }{\partial r} -U_{AM}^2] \psi (r). \label{e4}$$ Here, the Schrödinger-equivalent (effective) central potential, which contains the Darwin potential, and the effective spin-orbit potential are defined as follows. $$\begin{aligned} U_{cent} & = & \frac{1}{2E} [2EU_V^0 +2mU_S -(U_V^{0})^2+U_S^2-2V_c U_V^0 \nonumber \\ & & +U_T^2 +2U_T U_{AM}-\frac{U_T+U_{AM}}{A}(\frac{\partial A}{\partial r}) \nonumber \\ & & +\frac{2U_T}{r}+2EU_{Darwin}] \nonumber \\ U_{Darwin} & = & \frac{1}{2E} [-\frac{1}{2r^2 A} \frac{\partial }{\partial r}(r^2 \frac{\partial A}{\partial r})+\frac{3}{4A^2}(\frac{\partial A}{\partial r})^2] \nonumber \\ U_{SO} & = & \frac{1}{2E}[\frac{1}{r A}(\frac{\partial A}{\partial r})+\frac{2}{r}(U_T+U_{AM})]\end{aligned}$$ Here, $U_{AM}(r)=\frac{k}{2m} \frac{\partial}{\partial r} V_c (r)$ and $k$ is the abnormal magnetic moment ($k=1.79$ for proton, $k=-1.91$ for neutron). Hence, in the Dirac approach the spin-orbit potential appears naturally when we reduce the Dirac equation to a Schrödinger-like second-order differential equation, while in the nonrelativistic Schrödinger approach we have to insert the spin-orbit potential by hand. The Dirac equations are solved numerically to obtain the parameters that best fit the experimental data, employing the minimum $\chi^2$ method. To obtain the optimal optical potential parameter set, we minimize the chi-square of the given scattering observables by iteratively varying the adjustable parameters in the coupled differential equations.
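The minimum-$\chi^2$ search just described can be sketched in a few lines of code. The toy below is purely illustrative: it varies a single Woods-Saxon strength parameter against synthetic "data" built from the $^{62}$Ni scalar real values of Table I, whereas the actual analysis varies twelve potential parameters inside the coupled-channel solver.

```python
import numpy as np

def woods_saxon(r, v0, radius, a):
    """Woods-Saxon radial shape: v0 / (1 + exp((r - radius)/a))."""
    return v0 / (1.0 + np.exp((r - radius) / a))

def chi_square(x_th, x_exp, dx_exp):
    """chi^2 = sum_i |x_th(theta_i) - x_exp(theta_i)|^2 / (dx_exp(theta_i))^2."""
    return np.sum(np.abs(x_th - x_exp) ** 2 / dx_exp ** 2)

# Synthetic "data": a Woods-Saxon potential with the Table I scalar real
# parameters for ^62Ni (strength -356.5 MeV, radius 3.284 fm,
# diffusiveness 0.6815 fm).
r = np.linspace(0.1, 10.0, 50)              # radial grid (fm)
data = woods_saxon(r, -356.5, 3.284, 0.6815)
errors = np.full_like(data, 1.0)            # flat 1 MeV "error" bars

# Brute-force one-parameter search over the strength v0.
grid = np.linspace(-500.0, -200.0, 301)
chi2 = [chi_square(woods_saxon(r, v0, 3.284, 0.6815), data, errors)
        for v0 in grid]
best_v0 = grid[int(np.argmin(chi2))]        # close to the input -356.5
```

In practice a gradient or simplex search replaces the brute-force grid, and each $\chi^2$ evaluation requires re-solving the coupled channel equations, which is what makes the fit expensive.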
When the number of experimental data is $n$ for the given angular distribution of scattering observables, the chi-square $\chi^2 $ is defined as $$\chi^2 = \sum_{i=1}^n {|x_{th} (\theta_i )-x_{exp} (\theta_i )|^2 \over (\Delta x_{exp} (\theta_i ) )^2 } ,$$ where $x_{th} $ denotes the theoretical value, $x_{exp} $ denotes the experimental value and $\Delta x_{exp} $ denotes the experimental error of the scattering observable which is the scattering differential cross section in this work. The experimental data are obtained from Ref. 16 for the 1.047 GeV unpolarized proton inelastic scatterings from $^{62}$Ni and $^{64}$Ni. The first $2^+$ and $4^+$ states are assumed to be members of the ground state rotational band (GSRB) ($J^{\pi}=0^+$) and also assumed to be collective rotational states. As a first step, the 12 parameters of the direct scalar and vector potentials in Woods-Saxon shapes are searched to reproduce the elastic scattering experimental data. The calculated results are shown as dotted lines in Figs. 1 and 2 for the elastic scatterings from $^{62}$Ni and $^{64}$Ni, respectively. It is seen that the results of the Dirac phenomenological calculations can reproduce the elastic experimental data quite well, showing better agreement with the data compared to the results obtained in the nonrelativistic calculations [@16]. In the figures, ‘cpd’ means ‘coupled’. ![Differential cross section of the low-lying excited states of the GSRB for 1.047 GeV p + $^{62}$Ni scattering. 
The dotted, dashed, dash-dot and solid lines represent the results of Dirac calculation where only the elastic scattering is considered, where the ground state and the $2^+$ state are coupled, where the ground state and the $4^+$ state are coupled, and where the ground state, the $2^+$ state and the $4^+$ state are coupled, respectively.[]{data-label="fig1"}](62nistf.eps){width="10.0cm"} ![Differential cross section of the low-lying excited states of the GSRB for 1.047 GeV p + $^{64}$Ni scattering. The dotted, dashed, dash-dot and solid lines represent the results of Dirac calculation where only elastic scattering is considered, where the ground state and the $2^+$ state are coupled, where the ground state and the $4^+$ state are coupled, and where the ground state, the $2^+$ state and the $4^+$ state are coupled, respectively.[]{data-label="fig1"}](64nistf.eps){width="10.0cm"}

  Potential          Nucleus     Strength (MeV)   Radius (fm)   Diffusiveness (fm)
  ------------------ ----------- ---------------- ------------- --------------------
  Scalar real        $^{62}$Ni   -356.5           3.284         0.6815
                     $^{64}$Ni   -38.56           7.204         0.8492
  Scalar imaginary   $^{62}$Ni   941.2            3.324         0.8398
                     $^{64}$Ni   1.822            5.109         0.5247
  Vector real        $^{62}$Ni   402.5            3.019         0.6310
                     $^{64}$Ni   22.58            6.817         1.0310
  Vector imaginary   $^{62}$Ni   -407.7           3.583         0.6141
                     $^{64}$Ni   -90.48           4.121         0.6189

  : Calculated optical potential parameters of a Woods-Saxon shape for 1.047 GeV proton elastic scatterings from $^{62}$Ni and $^{64}$Ni.
\[table1\]

  Potential          Nucleus     Strength (MeV)   Radius (fm)   Diffusiveness (fm)
  ------------------ ----------- ---------------- ------------- --------------------
  Scalar real        $^{62}$Ni   -34.24           5.088         0.4274
                     $^{64}$Ni   -127.9           6.653         0.9248
  Scalar imaginary   $^{62}$Ni   948.2            3.383         0.4829
                     $^{64}$Ni   72.50            5.022         0.2833
  Vector real        $^{62}$Ni   256.3            3.067         0.6277
                     $^{64}$Ni   63.79            6.562         1.0013
  Vector imaginary   $^{62}$Ni   -546.8           3.391         0.5754
                     $^{64}$Ni   -92.97           4.670         0.4454

  : Calculated optical potential parameters of a Woods-Saxon shape for 1.047 GeV proton inelastic scatterings from $^{62}$Ni and $^{64}$Ni, for the case where all three states, $0^+$, $2^+$ and $4^+$, are coupled.

\[table2\] The calculated optical potential parameters of the Woods-Saxon shape for the 1.047 GeV proton scatterings from $^{62}$Ni and $^{64}$Ni are shown in Tables I and II for the elastic scattering and for the inelastic scattering where all three states of the GSRB, $0^+$, $2^+$ and $4^+$, are coupled, respectively. Showing a pattern similar to that of spherically symmetric nuclei [@3], the real scalar potentials and the imaginary vector potentials are found to be large and negative, while the imaginary scalar potentials and the real vector potentials are large and positive, except for the imaginary scalar potential for the elastic scattering from $^{64}$Ni, which is found to be rather small. It is observed that the strength parameters of all four potentials mostly decrease as the mass number is increased from 62 to 64, for both elastic and inelastic scatterings, except for the real scalar potentials in the inelastic case. The radius parameters of the potentials increase as the mass number is increased from 62 to 64, as expected. As a first step in the inelastic scattering calculations, only the ground state and one excited state, the $2^+$ state or the $4^+$ state, are included at a time.
Next, the ground state, the $2^+$ state, and the $4^+$ state are all included in the inelastic scattering calculations to investigate the effect of the channel coupling between the excited states of the GSRB, which is known to be strong, as shown in our previous publications for proton scatterings from axially symmetric deformed nuclei [@9; @11]. The Dirac coupled channel equations are solved phenomenologically to obtain the optical potential and deformation parameters that best fit the experimental data, using the minimum $\chi^2$ method. The real and the imaginary $\beta_\lambda$ are set to be equal for a given potential type, so that $\beta_S$ and $\beta_V$ are determined for each excited state. In Figs. 1 and 2, the calculated results for the $2^+$ state and the $4^+$ state are also shown. For the $2^+$ state, the agreement with the experimental data did not change noticeably when the coupling with the $4^+$ state was added to the calculation. We observe that the $\chi^2$ for the $2^+$ state is reduced slightly when the coupling with the $4^+$ state is added. However, the agreement with the experimental data for the $4^+$ state improved significantly when the coupling with the $2^+$ state was added, indicating that multistep excitation via the $2^+$ state is important for the $4^+$ state excitation in the GSRB in the proton scatterings from both nuclei, $^{62}$Ni and $^{64}$Ni, the same feature found in the scatterings from $^{58}$Ni and $^{60}$Ni [@11]. $\chi^2/n$ for the $4^+$ state is reduced to about 1/3, from 10.02 to 3.12 for $^{62}$Ni and from 10.03 to 3.09 for $^{64}$Ni, when the coupling with the $2^+$ state is added to the calculations. However, the theoretical values are shifted slightly from the data at the first and second minima of the $4^+$ state distribution for the scattering from $^{64}$Ni.
This may be due to coupling with the $6^+$ state, which possibly belongs to the GSRB, or with higher excitations near the $4^+$ energy level, which are not included in this calculation. ![The effective central and spin-orbit potentials for the proton scattering from $^{62}$Ni. CR and CI denote central real and imaginary optical potentials, and SOR and SOI denote spin-orbit real and imaginary optical potentials, respectively.[]{data-label="fig2"}](ni62epc.eps){width="10.0cm"} ![The effective central and spin-orbit potentials for the proton scattering from $^{64}$Ni. CR and CI denote central real and imaginary optical potentials, and SOR and SOI denote spin-orbit real and imaginary optical potentials, respectively.[]{data-label="fig2"}](ni64epc.eps){width="10.0cm"} In Figs. 3 and 4, the effective central and spin-orbit potentials for the proton scatterings from $^{62}$Ni and $^{64}$Ni are shown. The dotted, dashed, and solid lines represent the results of the Dirac phenomenological calculations where only elastic scattering is considered, where the ground state and the $2^+$ state are coupled, and where the ground state, the $2^+$ state and the $4^+$ state are coupled, respectively. Surface-peaked phenomena are observed in the effective central potentials for the scattering from $^{62}$Ni, as previously seen in the scatterings from other axially deformed nuclei such as $^{20}$Ne and $^{24}$Mg [@8; @14], whereas no such peaks are observed in the effective central potentials for the scattering from $^{64}$Ni. The effective central potentials are observed to have about the same values near the surface, near 4 fermi, for all three cases and for both nuclei, while the spin-orbit potentials show small variations near the surface. Surface-peaked phenomena are seen in all the effective spin-orbit potentials, indicating that the spin-orbit interaction may be regarded as a surface-peaked interaction.
It is also observed that the peak position of the imaginary spin-orbit potential is near 3 fermi for the scattering from $^{62}$Ni, whereas the peak position of the real spin-orbit potential is near 6 fermi for the scattering from $^{64}$Ni. ![The effective central and spin-orbit optical potentials for the proton elastic scattering from Ni isotopes.[]{data-label="fig2"}](epnie.eps){width="10.0cm"} ![The effective central and spin-orbit optical potentials for the proton inelastic scattering from Ni isotopes, for the case where the ground state, the $2^+$ state and the $4^+$ state are coupled.[]{data-label="fig2"}](epni24c.eps){width="10.0cm"} In Fig. 5, the effective central and spin-orbit potentials for the proton elastic scattering from the Ni isotopes $^{58}$Ni, $^{60}$Ni [@11], $^{62}$Ni and $^{64}$Ni are compared with each other. In Fig. 6 the effective potentials for the proton inelastic scattering from the Ni isotopes are shown for the case where the ground state, the $2^+$ state and the $4^+$ state are coupled. The dotted, dash-dot, dashed, and solid lines represent the results of the Dirac phenomenological calculations for the proton scatterings from $^{58}$Ni, $^{60}$Ni, $^{62}$Ni and $^{64}$Ni, respectively. The peak position of the effective real spin-orbit potential shifts toward larger $r$ as the mass number is increased, but this tendency is not clearly seen in the effective imaginary spin-orbit potentials, for both cases, as shown in Figs. 5 and 6. The strength parameters of the real effective central and spin-orbit potentials decrease as the mass number is increased for the elastic scattering, but this tendency is not seen for the inelastic scatterings.
The real and the imaginary parts of the effective central potentials and the real parts of the spin-orbit potentials for the scattering from $^{62}$Ni are observed to have abnormal wiggling shapes near 3 fermi, indicating some inner structure of the nucleus. ![Differential cross section of the first 3$^-$ states of the 1.047 GeV proton scatterings from Ni isotopes. The solid lines represent the results of Dirac calculation where the ground state and the $3^-$ state are coupled.[]{data-label="fig1"}](ni3-st1047.eps){width="10.0cm"} ![The effective central and spin-orbit optical potentials for the proton inelastic scattering from Ni isotopes, for the case where the ground state and the $3^-$ state are coupled.[]{data-label="fig2"}](ni3-ef1047n.eps){width="10.0cm"}

  State          Target nucleus   Energy (MeV)   $\beta_S$   $\beta_V$   $\beta_{NR}$
  -------------- ---------------- -------------- ----------- ----------- --------------------------------------
  $2^+$ state    $^{58}$Ni        1.45           0.24        0.21        $0.233^{17}, 0.187^{18}, 0.207^{19}$
                 $^{60}$Ni        1.33           0.25        0.24        $0.211^{18}, 0.232^{19}, 0.255^{21}$
                 $^{62}$Ni        1.17           0.209       0.233       $0.193^{18}, 0.26^{12}$
                 $^{64}$Ni        1.35           0.188       0.199       $0.192^{18}, 0.22^{12}, 0.206^{21}$
  $4^+$ state    $^{58}$Ni        2.46           0.07        0.08        $0.093^{17}, 0.10^{20}$
                 $^{60}$Ni        2.50           0.11        0.10        $0.127^{21}$
                 $^{62}$Ni        2.34           0.037       0.054       $0.11^{12}$
                 $^{64}$Ni        2.61           0.046       0.051       $0.09^{12}$
  $3^-$ state    $^{58}$Ni        4.47           0.180       0.160       $0.173^{19}$
                 $^{60}$Ni        4.04           0.192       0.181       $0.186^{19}, 0.209^{21}$
                 $^{62}$Ni        3.76           0.206       0.194       $0.23^{12}$
                 $^{64}$Ni        3.55           0.180       0.191       $0.23^{12}, 0.203^{21}$

  : The calculated deformation parameters for the $2^+$ states and the $4^+$ states for 1.047 GeV proton scatterings from Ni isotopes are shown for the case where the ground state, the $2^+$ state and the $4^+$ state are coupled.
The calculated deformation parameters for the $3^-$ states for 1.047 GeV proton scatterings from Ni isotopes are also shown for the case where the ground state and the $3^-$ state are coupled. \[table3\] In Table III, we show the deformation parameters for the $2^+$ states and the $4^+$ states of the Ni isotopes. It also contains the results of our previous calculations for the proton scatterings from $^{58}$Ni and $^{60}$Ni [@11]. The deformation parameters for the $2^+$ state of $^{64}$Ni are smaller than those of $^{62}$Ni, as expected from the larger excitation energy of this state in $^{64}$Ni. We can say that the $2^+$ state is less strongly coupled to the ground state in the scattering from $^{64}$Ni than in the scattering from $^{62}$Ni. However, the deformation parameter $\beta_S$ for the $4^+$ state excitation in the scattering from $^{62}$Ni is found to be smaller than that of $^{64}$Ni, even though the excitation energy is smaller for $^{62}$Ni. It is found that $\beta_V$ is larger than $\beta_S$ for the scatterings from $^{62}$Ni and $^{64}$Ni, whereas for $^{58}$Ni and $^{60}$Ni this holds only for the 4$^+$ state excitation in $^{58}$Ni. We also performed the Dirac phenomenological calculation for the inelastic scatterings from the Ni isotopes considering the first 3$^-$ excitation, using the first-order vibrational collective model. The calculated results for the differential cross sections are shown in Fig. 7, and the effective potentials for the 3$^-$ state coupled case are shown in Fig. 8. It is clearly shown that the results of the Dirac phenomenological calculations give better agreement with the experimental data compared to those obtained in the nonrelativistic calculations [@16].
The deformation parameters for the $3^-$ states for 1.047 GeV proton scatterings from Ni isotopes are also shown in Table III, for the case where the ground state and the $3^-$ state are coupled. It is found that the deformation parameters for the 2$^+$ and 4$^+$ states of the GSRB and the first 3$^-$ state agree pretty well with those obtained in the nonrelativistic calculations [@12; @17; @18; @19; @20; @21], even though the theoretical bases are quite different. CONCLUSIONS =========== A relativistic Dirac phenomenological calculation using an optical potential model could reproduce the experimental data for the excited states of the GSRB in the 1.047 GeV unpolarized proton inelastic scatterings from the Ni isotopes $^{62}$Ni and $^{64}$Ni reasonably well, achieving slightly better agreement with the data than the results obtained in the nonrelativistic calculations. The Dirac equations are reduced to Schrödinger-like second-order differential equations to obtain the effective central and spin-orbit potentials, and surface-peaked phenomena are observed in the effective real central potentials for the scattering from $^{62}$Ni, as shown for the scatterings from $^{20}$Ne and $^{24}$Mg. The effective central potentials and the effective real spin-orbit potentials are found to have abnormal wiggling shapes at about 3 fermi for the scattering from $^{62}$Ni, indicating some inner structure of the nucleus. The first-order rotational collective model is employed to accommodate the low-lying excited states of the GSRB in these nuclei, and the calculated deformation parameters are compared with those obtained for the other Ni isotopes. Multistep excitation via the $2^+$ state is confirmed to be important for the $4^+$ state excitation of the GSRB in the proton scatterings from $^{62}$Ni and $^{64}$Ni, as previously shown in the proton scatterings from the other Ni isotopes, $^{58}$Ni and $^{60}$Ni.
The Dirac phenomenological calculation for the inelastic scatterings from the Ni isotopes considering the first 3$^-$ excitation is also performed using the first-order vibrational collective model. It is found that the deformation parameters for the 2$^+$ and 4$^+$ states of the GSRB and the first 3$^-$ state agree pretty well with those obtained in the nonrelativistic calculations. This work was supported by a research grant of Kongju National University in 2018. The author would like to thank Seong-Hyeon Jeong for his valuable technical help in the preparation of this paper. L. G. Arnold, B. C. Clark, R. L. Mercer, and P. Swandt, Phys. Rev. C [**23**]{}, 1949 (1981). J. A. McNeil, J. Shepard, and S. J. Wallace, Phys. Rev. Lett. [**50**]{}, 1439 (1983); [**50**]{}, 1443 (1983). S. Shim, Ph.D. dissertation, The Ohio State University, 1989; L. Kurth, B. C. Clark, E. D. Cooper, S. Hama, S. Shim, R. L. Mercer, L. Ray, and G. W. Hoffmann, Phys. Rev. C [**49**]{}, 2086 (1994). S. Shim, B. C. Clark, E. D. Cooper, S. Hama, R. L. Mercer, L. Ray, J. Raynal, and H. S. Sherif, Phys. Rev. C [**42**]{}, 1592 (1990). R. de Swiniarski, D. L. Pham, and J. Raynal, Z. Phys. A - Hadrons and Nuclei [**343**]{}, 179 (1992). D. L. Pham and R. de Swiniarski, Nuovo Cimento A [**107**]{}, 1405 (1994). J. J. Kelly, Phys. Rev. C [**71**]{}, 064610 (2005). S. Shim, M. W. Kim, B. C. Clark, and L. Kurth Kerr, Phys. Rev. C [**59**]{}, 317 (1999). S. Shim, Shin-Ho Ryu and Min-Soo Kim, J. Korean Phys. Soc. [**51**]{}, 271 (2007); S. Shim, Shin-Ho Ryu and Min-Soo Kim, J. Korean Phys. Soc. [**53**]{}, 1146 (2008). S. Shim and M. W. Kim, Int. J. Mod. Phys. E [**21**]{}, 1250098 (2012). S. Shim, to be published in Can. J. Phys. (2017). P. Beuzit, J. Delaunay, J.P. Fouan and N. Cindro, Nucl. Phys. A [**128**]{}, 594 (1969). J. Raynal, [*Computing as a Language of Physics*]{}, ICTP International Seminar Course, 281 (IAEA, Italy, 1972); J.
Raynal, [*Notes on ECIS94*]{}, Note CEA-N-2772, 1994. S. Shim and M. W. Kim, J. Korean Phys. Soc. [**64**]{}, 483 (2014). S. Shim, Can. J. Phys. [**95**]{}, 317 (2017). R. M. Lombard, G. D. Alkhazov and O. A. Domchenkov, Nucl. Phys. A [**360**]{}, 233 (1981). L. Ray, T. Kozlowski, D.G. Madland, C.L. Morris, J.C. Pratt [*et al.*]{}, Phys. Lett. [**83B**]{}, 275 (1979). E. Fabrii, S. Micheletti, M. Pignanelli, F. G. Resmini, R. De Leo [*et al.*]{}, Phys. Rev. C [**21**]{}, 844 (1980). A. Ingemarsson, T. Johansson and G. Tibell, Nucl. Phys. A [**365**]{}, 426 (1981). G.S. Kyle, N.M. Hintz, M.S. Oothoudt, M. Kaletka, P.M. Lang [*et al.*]{}, Phys. Lett. [**91B**]{}, 353 (1980). P.J. van Hall, S.D. Wassenaar, S.S. Klein, G.J. Nijgh, J.H. Polane and O.J. Poppema, J. Phys. G: Nucl. Part. Phys. [**15**]{}, 199 (1989). [^1]: Fax: +82-41-850-8489
--- abstract: 'The recent detection of ULASJ1342+0928, a bright QSO at $z=7.54$, provides a powerful probe of the ionisation state of the intervening intergalactic medium, potentially allowing us to set strong constraints on the epoch of reionisation (EoR). Here we quantify the presence of Ly$\alpha$ damping wing absorption from the EoR in the spectrum of ULASJ1342+0928. Our Bayesian framework simultaneously accounts for uncertainties on: (i) the intrinsic QSO emission (obtained from reconstructing the [Ly$\alpha$]{} profile from a covariance matrix of emission lines) and (ii) the distribution of [H[II]{}]{} regions during reionisation (obtained from three different 1.6$^3$ Gpc$^3$ simulations spanning the range of plausible EoR morphologies). Our analysis is complementary to that in the discovery paper (Ba[ñ]{}ados et al.) and the accompanying method paper (Davies et al.) as it focuses solely on the damping wing imprint redward of [Ly$\alpha$]{} ($1218 < \lambda < 1230$Å), and uses a different methodology for (i) and (ii). We recover weak evidence for damping wing absorption. Our intermediate EoR model yields the following constraints on the volume-weighted neutral hydrogen fraction at $z=7.5$: $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} = 0.21\substack{+0.17 \\ -0.19}$ (68 per cent). The constraints depend weakly on the EoR morphology. Our limits are lower than those presented by Ba[ñ]{}ados et al. and Davies et al., though they are consistent at $\sim$ 1 – 1.5 $\sigma$. We attribute this difference to: (i) a lower amplitude intrinsic [Ly$\alpha$]{} profile obtained from our reconstruction pipeline, driven by correlations with other high-ionisation lines in the spectrum which are relatively weak; and (ii) only considering transmission redward of Ly$\alpha$ when computing the likelihood, which reduces the available constraining power but makes the results less model-dependent. 
Our results are consistent with previous estimates of the EoR history, and support the picture of a moderately extended EoR.' author: - | Bradley Greig$^{1,2}$[^1], Andrei Mesinger$^{3}$, & Eduardo Ba[ñ]{}ados$^{4}$\ $^1$ARC Centre of Excellence for All-Sky Astrophysics in 3 Dimensions (ASTRO 3D), University of Melbourne, VIC 3010, Australia\ $^2$School of Physics, University of Melbourne, Parkville, VIC 3010, Australia\ $^3$Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa, Italy\ $^4$The Observatories of the Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, California 91101, USA bibliography: - 'Papers.bib' title: 'Constraints on reionisation from the $\bmath{z=7.5}$ QSO ULASJ1342+0928' --- cosmology: observations – cosmology: theory – dark ages, reionisation, first stars – quasars: general – quasars: emission lines Introduction ============ The epoch of reionisation ([EoR]{}) denotes the final major baryonic phase change in the Universe; when the pervasive, dense neutral hydrogen fog is lifted by the cumulative ionising radiation from the first stars, galaxies and QSOs. The timing and duration of the EoR can loosely be constrained from indirect measurements such as the integral constraints on the reionisation history ([H[II]{}]{} fraction) from the Thomson scattering of photons [e.g. @George:2015p5869; @Collaboration:2015p4320]. More direct, though often controversial, constraints on the latter stages of the EoR can be made from the absorption of [Ly$\alpha$]{} photons by lingering cosmic [H[I]{}]{} patches. However, the intergalactic medium (IGM) at $z\gtrsim 6$ becomes dense enough that even if a small fraction of hydrogen is neutral (${x_{\rm HI}}{\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}} \raise1pt\hbox{$>$}}}10^{-4}-10^{-5}$), the vast majority of photons which redshift into [Ly$\alpha$]{} resonance are absorbed. Thus the [Ly$\alpha$]{} forest saturates at high-$z$ [@Fan:2006p4005]. 
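The saturation argument can be made quantitative with the commonly quoted Gunn-Peterson scaling from @Fan:2006p4005. The short sketch below is illustrative only; the cosmological parameter defaults are round numbers assumed here for the example, not values adopted by this paper.

```python
def tau_gp(z, x_hi, h=0.7, omega_m=0.3, omega_b_h2=0.022):
    """Gunn-Peterson optical depth for neutral fraction x_hi at redshift z:
    tau ~ 1.8e5 h^-1 Omega_m^-1/2 (Omega_b h^2 / 0.02) ((1+z)/7)^1.5 x_hi."""
    return (1.8e5 / h / omega_m ** 0.5
            * (omega_b_h2 / 0.02)
            * ((1.0 + z) / 7.0) ** 1.5
            * x_hi)

# Even a neutral fraction of only 1e-4 at z = 6 gives tau of order 50,
# i.e. a transmitted fraction e^-tau that is effectively zero.
tau = tau_gp(6.0, 1e-4)
```

This is why the saturated forest can only place a weak lower limit of $x_{\rm HI} \gtrsim 10^{-4}$, motivating the damping-wing approach discussed next.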
A more versatile probe of the IGM neutral fraction is the [Ly$\alpha$]{} damping wing (e.g. @Rybicki1979 [@MiraldaEscude:1998p1041]). These Lorentzian wings of the [Ly$\alpha$]{} profile are extended and relatively smooth functions of frequency. The absorption cross-section in these wings is reduced by $\sim5$–6 orders of magnitude with respect to that at line centre, making the damping wing ideally suited to probing the order unity fluctuations in ${x_{\rm HI}}$ during the patchy EoR. Detecting damping wing absorption in galaxy spectra generally requires large statistical samples, as well as assumptions about their redshift evolution and/or clustering properties [e.g. @HaimanSpaans1999; @Ouchi:2010p1; @Stark:2010p1; @Pentericci:2011p1; @Ono:2012p1; @Caruana:2014p1; @Schenker:2014p1; @Mesinger:2015p1584; @SM15; @Mason:2018]. QSOs on the other hand are much rarer objects; however, they are also much brighter, allowing a damping wing, if present, to be recovered from a single observed spectrum. Using bright QSOs to constrain the EoR requires two key ingredients: (i) a knowledge of the intrinsic QSO emission; and (ii) a knowledge of the absorption caused by the EoR. Both (i) and (ii) need to be estimated statistically, with the uncertainties carefully quantified, since we are relying on a single object to place constraints on the EoR. We briefly discuss each in turn. The intrinsic spectrum can be estimated from a composite of lower redshift objects [e.g. @Francis:1991p5112; @Brotherton:2001p1; @VandenBerk:2001p3887; @Telfer:2002p5713]. However, a statistical reconstruction should take advantage of all of the available data from the QSO in question. In particular, bright high-$z$ QSOs seem to exhibit anomalously large [C[IV]{}]{} blueshifts [@Mazzucchelli:2017]; as the [Ly$\alpha$]{} blueshift is strongly correlated with the [C[IV]{}]{} blueshift, generic QSO templates are unlikely to fit the [Ly$\alpha$]{} lines of high-$z$ QSOs (e.g. @Bosman:2015p5005). 
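The quoted $\sim$5–6 order-of-magnitude suppression of the wing cross-section can be checked from standard [Ly$\alpha$]{} atomic data; a minimal sketch (the $10^4$ K gas temperature is an assumed fiducial value):

```python
import math

# Ly-alpha cross-section: thermally broadened Doppler core at line centre
# versus the Lorentzian damping wing ~1 Angstrom away. Atomic data are
# standard (CGS units); the IGM temperature T is an assumption.
c     = 2.998e10          # speed of light, cm/s
lam0  = 1215.67e-8        # Ly-alpha rest wavelength, cm
nu0   = c / lam0          # line-centre frequency, Hz
f_osc = 0.4164            # oscillator strength
Gamma = 6.265e8           # damping (decay) rate, 1/s
pref  = 0.02654 * f_osc   # (pi e^2 / m_e c) * f, cm^2 Hz

def sigma_core(T=1.0e4):
    """Thermally broadened cross-section at line centre (cm^2)."""
    b = math.sqrt(2.0 * 1.381e-16 * T / 1.673e-24)  # Doppler parameter, cm/s
    return pref / (math.sqrt(math.pi) * nu0 * b / c)

def sigma_wing(dlam_A):
    """Lorentzian-wing cross-section dlam_A Angstrom from line centre (cm^2)."""
    dnu = nu0 * (dlam_A * 1.0e-8) / lam0
    return pref * Gamma / (4.0 * math.pi**2 * dnu**2)

ratio = sigma_wing(1.0) / sigma_core()
print(f"wing/core suppression at 1 Angstrom: {ratio:.1e}")
```

About 1 Å from line centre the wing is suppressed by roughly six orders of magnitude relative to the Doppler core, in line with the estimate quoted above.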
In @Greig:2017a, we developed a reconstruction method which samples a covariance matrix of [Ly$\alpha$]{} and other strong emission line profiles from a sample of $\sim1700$ moderate-$z$ QSOs. This approach directly uses the strength and shape of the observed emission lines (e.g. [C[IV]{}]{}, [Si[IV]{}]{} and [C[III\]]{}]{}) to recover the intrinsic [Ly$\alpha$]{} profile, with a statistical characterisation of the recovery. Typical reconstruction errors are of order a few percent around the [Ly$\alpha$]{} line. A common alternative approach is to deconstruct the QSO emission into principal component vectors, and fit these to the spectrum [e.g. @Boroson:1992p4641; @Francis:1992p5021; @Suzuki:2005p5157; @Suzuki:2006p4770; @Lee:2011p1738; @Paris:2011p4774]. @Davies:2018a recently introduced a sophisticated version of this principal component analysis (PCA), decomposing the QSO spectrum into components redward and blueward of $\lambda = 1280$Å. They then reconstruct the [Ly$\alpha$]{} profile from the correlations between these red and blue side PCA components. This mapping from (unattenuated) red-side components to the blue-side components, and the associated errors, are trained on a large data set of $\sim$ 13,000 moderate-$z$ spectra. They obtain $\sim$ percent-level errors in the recovery, similar to @Greig:2017a. The second requirement for EoR constraints is a model for the attenuation by the EoR. The EoR damping wing can attenuate the source flux both on the blue and the red side of the [Ly$\alpha$]{} line. The source QSO is capable of highly ionising its environment, allowing some blue-side transmission to be seen (the so-called proximity zone). Inside the proximity zone, the attenuation is a combination of: (i) resonant absorption by a fluctuating [Ly$\alpha$]{} forest, and (ii) a smooth damping wing from the more distant cosmic [H[I]{}]{} patches. 
Modelling (i) requires high-resolution simulations of the local QSO environment, while modelling (ii) requires ultra large-scale simulations of the EoR morphology (e.g. @Mesinger:2004p5737 [@Maselli:2007p5744; @Bolton:2011p1063; @Keating:2015p5004; @Eilers:2017]). In contrast, the attenuation on the red-side of the [Ly$\alpha$]{} line is free from resonant absorption, requiring only an understanding of large-scale EoR morphology to compute the associated damping wing absorption. However, the damping wing imprint is weaker on the red side (far from the cosmic [H[I]{}]{} patches), making it more degenerate with the intrinsic QSO emission. In @Greig:2017b we combined the [Ly$\alpha$]{} reconstruction technique of @Greig:2017a with large-scale EoR simulations of @Mesinger:2016p1, constraining the hydrogen neutral fraction from the spectra of the $z=7.1$ QSO ULASJ1120+0641 [hereafter J1120; @Mortlock:2011p1049]. Our Bayesian framework recovered strong evidence for an IGM damping wing redward of [Ly$\alpha$]{} ($\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} = 0.40\substack{+0.21 \\ -0.19}$ at 68 per cent confidence). Subsequent analysis by @Davies:2018b found similar results, $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} = 0.48\substack{+0.26 \\ -0.26}$ at 68 per cent confidence. In this work, we apply the same analysis framework to the spectrum of the recently-discovered $z=7.5$ QSO, ULASJ1342+0928 (hereafter J1342; @Banados:2018). Using their own analysis method, which performs the reconstruction using blue+red PCA components and models the proximity zone in addition to the red-side damping wing imprint, @Davies:2018a finds $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} = 0.60\substack{+0.20 \\ -0.23}$ at 68 per cent confidence. 
As the analysis methods of @Greig:2017a and @Davies:2018a are different (we go into more detail below), this work, applied to the same input spectrum as @Davies:2018a, serves as an independent and complementary verification of the inferred EoR constraints from J1342. This work is structured as follows. In Section \[sec:Method\] we briefly outline our analysis pipeline and in Section \[sec:results\] we provide our main results and discussion. In Section \[sec:Conclusion\] we finish with our closing remarks. Throughout we adopt the background cosmological parameters: ($\Omega_\Lambda$, $\Omega_{\rm M}$, $\Omega_b$, $n$, $\sigma_8$, $H_0$) = (0.69, 0.31, 0.048, 0.97, 0.81, 68 km s$^{-1}$ Mpc$^{-1}$), consistent with cosmic microwave background anisotropy measurements by the Planck satellite [@Collaboration:2015p4320] and unless otherwise stated, distances are quoted in comoving units. Method {#sec:Method} ====== Reconstruction of the intrinsic [Ly$\alpha$]{} profile {#sec:Reconstruction} ------------------------------------------------------ In @Greig:2017a, we constructed a covariance matrix to characterise correlations between the emission line parameters[^2] from the four most prominent high ionisation lines, [Ly$\alpha$]{}, [C[IV]{}]{}, [Si[IV]{}+O[IV\]]{}]{} and [C[III\]]{}]{}. For both [Ly$\alpha$]{} and [C[IV]{}]{} we found a strong preference for a two-component (broad plus narrow) Gaussian to describe the line profile[^3]. Finally, we simultaneously fit a single power-law continuum. The dataset comprised 1673 moderate-$z$ ($2.08 < z < 2.5$)[^4], high signal-to-noise (S/N $>15$) QSOs from SDSS-III (BOSS) DR12 [@Dawson:2013p5160; @Alam:2015p5162]. 
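The key operation in this covariance-matrix reconstruction is standard multivariate-Gaussian conditioning: the joint distribution of line parameters is collapsed onto the [Ly$\alpha$]{} parameters given the measured red-side values. A toy sketch with made-up three-dimensional numbers (the actual matrix is 18-dimensional):

```python
import numpy as np

# Toy illustration of collapsing a line-parameter covariance matrix onto a
# conditional estimate of the Ly-alpha parameters. The 3-d mean/covariance
# below are invented numbers, not the actual 18-d matrix of the text.
mu  = np.array([1.0, 0.5, 2.0])          # [Lya_amp, CIV_amp, CIV_width]
cov = np.array([[0.40, 0.30, 0.10],
                [0.30, 0.50, 0.05],
                [0.10, 0.05, 0.30]])

obs_idx, hid_idx = [1, 2], [0]           # observed red-side params; hidden Lya
x_obs = np.array([0.8, 2.4])             # measured CIV amplitude and width

S_hh = cov[np.ix_(hid_idx, hid_idx)]
S_ho = cov[np.ix_(hid_idx, obs_idx)]
S_oo = cov[np.ix_(obs_idx, obs_idx)]

gain     = S_ho @ np.linalg.inv(S_oo)
mu_cond  = mu[hid_idx] + gain @ (x_obs - mu[obs_idx])   # conditional mean
cov_cond = S_hh - gain @ S_ho.T                         # conditional covariance

# Draw reconstructed Lya parameters from the conditional distribution
samples = np.random.default_rng(0).multivariate_normal(mu_cond, cov_cond, 1000)
```

The conditional variance (here reduced from 0.40 to $\approx$ 0.20) quantifies how much the red-side lines inform the [Ly$\alpha$]{} parameters; sampling the conditional distribution is the analogue of drawing reconstructed profiles.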
With this covariance matrix, we then perform our reconstruction of the intrinsic [Ly$\alpha$]{} profile of J1342 as follows: - We fit the rest-frame spectrum of J1342 at $\lambda > 1275$Å using the \[[C[II]{}]{}\] redshift [@Venemans:2017][^5], obtaining estimates of the continuum and the [Si[IV]{}+O[IV\]]{}]{}, [C[IV]{}]{} and [C[III\]]{}]{} emission line profiles (we simultaneously fit for absorption lines by modelling each with a single Gaussian profile). - Using these red-side component fits, we collapse the 18-dimensional (Gaussian distributed) covariance matrix into a six-dimensional estimate of the intrinsic [Ly$\alpha$]{} emission line profile (a two-component Gaussian, each with an amplitude, width and velocity offset). - We then draw intrinsic [Ly$\alpha$]{} profiles from this distribution, applying a flux prior within the range $1250 < \lambda < 1275$Å to ensure our reconstructed profiles fit the observed spectrum over this range.[^6] The IGM damping wing during the EoR {#sec:IGMDampingWing} ----------------------------------- We compute our IGM damping wing profiles using the Evolution of 21-cm Structure (EOS; @Mesinger:2016p1)[^7] 2016 simulations. These comprise semi-numerical reionisation simulations, 1.6 Gpc on a side, computed on a 1024$^3$ grid, including state-of-the-art sub-grid prescriptions for inhomogeneous recombinations and photo-heating suppression of star-formation. We consider three different EoR morphologies, characterised by different efficiencies of star-formation inside low-mass halos, and visualised in Fig. \[fig:xHslices\]: - [**[<span style="font-variant:small-caps;">Small [H[II]{}]{}</span>]{}**]{} – EoR driven by galaxies residing in $M_h {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}} \raise1pt\hbox{$>$}}}10^8 M_\odot$ haloes. In this scenario the EoR morphology is characterised by numerous small cosmic [H[II]{}]{} regions. 
- [**[<span style="font-variant:small-caps;">Intermediate [H[II]{}]{}</span>]{}**]{} – EoR driven by galaxies residing in $M_h {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}} \raise1pt\hbox{$>$}}}10^9 M_\odot$ haloes. An intermediate scenario between the [<span style="font-variant:small-caps;">Small [H[II]{}]{}</span>]{} and [<span style="font-variant:small-caps;">Large [H[II]{}]{}</span>]{} models. We consider this to be our fiducial model[^8]. - [**[<span style="font-variant:small-caps;">Large [H[II]{}]{}</span>]{}**]{} – EoR driven by galaxies residing in $M_h {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}} \raise1pt\hbox{$>$}}}10^{10} M_\odot$ haloes, with an EoR morphology characterised by more spatially extended [H[II]{}]{} structures. We note that in @Greig:2017b we mistakenly represented the [<span style="font-variant:small-caps;">Intermediate [H[II]{}]{}</span>]{} model as the [<span style="font-variant:small-caps;">Large [H[II]{}]{}</span>]{} model; however, the EoR morphology had limited impact on the results of that work, as we shall re-confirm below. We extract a total of $10^5$ synthetic IGM damping wing profiles, constructed from 10 randomly oriented sightlines emanating from the centres of $10^4$ haloes between $6\times10^{11} < M_h < 3\times10^{12}$ $M_\odot$ at $z=7.5$. When computing the cumulative contribution from all encountered [H[I]{}]{} patches, we exclude the first $\sim11$ comoving Mpc (1.3 physical Mpc) consistent with the estimated near-zone of J1342 [@Banados:2018]. The [IGM]{} neutral fraction at $z=7.5$ is then left as a free parameter by sampling the corresponding ionisation fields obtained from different redshift snapshots from the EoS simulations (i.e. different [$\bar{x}_{\rm HI}$]{}). 
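The cumulative damping-wing opacity from discrete cosmic [H[I]{}]{} patches can be sketched by summing the Lorentzian-wing cross-section over patch column densities. The patch redshifts and columns below are invented toy values of plausible magnitude, not drawn from the EoS sightlines:

```python
import math

# Toy cumulative damping-wing opacity redward of Ly-alpha from a few discrete
# neutral patches along a sightline. Patch redshifts and H I column densities
# are invented (though of plausible order for ~10 cMpc patches); real profiles
# come from the simulated sightlines described above.
lam_a = 1215.67                            # Ly-alpha rest wavelength, Angstrom
nu0   = 2.998e10 / (lam_a * 1e-8)          # line-centre frequency, Hz
pref  = 0.02654 * 0.4164                   # (pi e^2 / m_e c) * f, cm^2 Hz
Gamma = 6.265e8                            # damping rate, 1/s

def sigma_wing(dlam_A):
    """Lorentzian-wing cross-section dlam_A Angstrom from line centre (cm^2)."""
    dnu = nu0 * dlam_A / lam_a
    return pref * Gamma / (4.0 * math.pi**2 * dnu**2)

patches = [(7.35, 1.5e21), (7.10, 8.0e20)]  # (redshift, N_HI in cm^-2), toy

def tau_dw(lam_obs_A):
    """Total wing optical depth at an observed wavelength (Angstrom)."""
    tau = 0.0
    for z_p, N_HI in patches:
        dlam = lam_obs_A / (1.0 + z_p) - lam_a   # offset in the patch frame
        if dlam > 0.0:                           # redward of the patch centre
            tau += N_HI * sigma_wing(dlam)
    return tau
```

Real sightlines involve many such patches with simulated columns; the toy sum nonetheless shows the characteristic smooth, slowly declining, per-cent-level redward opacity that distinguishes the damping wing from resonant absorption.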
Jointly fitting the IGM damping wing and intrinsic [Ly$\alpha$]{} profile distributions {#sec:JointFitting} --------------------------------------------------------------------------------------- Finally, we infer the IGM neutral fraction by fitting the observed spectrum of J1342 by simultaneously sampling the distributions of both the intrinsic [Ly$\alpha$]{} line profile and the synthetic IGM damping wing profiles. Our procedure is as follows: 1. We draw $\sim10^5$ reconstructed [Ly$\alpha$]{} line profiles directly from the procedure outlined in Section \[sec:Reconstruction\]. 2. Each intrinsic profile is multiplied by the 10$^5$ synthetic damping wing opacities in Section \[sec:IGMDampingWing\], to produce $\sim10^{10}$ mock spectra for each [$\bar{x}_{\rm HI}$]{} snapshot and EoR morphology. 3. All $\sim10^{10}$ mock spectra are then compared to the observed spectrum of J1342 over $1218$Å$ < \lambda < 1230$Å (consistent with @Greig:2017b). 4. The resulting likelihood, averaged (i.e. marginalised) over all $\sim10^{10}$ mock spectra, is then assigned to that particular [$\bar{x}_{\rm HI}$]{}. 5. Steps (ii)–(iv) are then repeated for each [$\bar{x}_{\rm HI}$]{} to obtain a final 1D probability distribution function (PDF) of [$\bar{x}_{\rm HI}$]{} for each of the EoR morphologies. Results and Discussion {#sec:results} ====================== In Figure \[fig:Profile\] we provide our reconstructed intrinsic [Ly$\alpha$]{} emission line profile. The red curve corresponds to the best-fit (maximum likelihood; ML) reconstructed profile, while the 300 thin grey lines are posterior samples, illustrating the breadth of the uncertainties in the reconstruction pipeline. Our reconstructed intrinsic [Ly$\alpha$]{} profile is almost entirely dominated by a single, broad component Gaussian. 
This arises owing to the strong preference for a large, broad component Gaussian to characterise the [C[IV]{}]{} emission line in J1342 and the corresponding correlation between the [Ly$\alpha$]{}–[C[IV]{}]{} broad components. An extremely small, narrow component Gaussian can be identified near $1210$Å owing to the extreme [C[IV]{}]{} blueshift of J1342 ($\sim6600$ km/s). Though $z>6.5$ QSOs typically exhibit extreme [C[IV]{}]{} blueshifts [@Mazzucchelli:2017], J1342 itself is an outlier with a blueshift more than a factor of two larger than J1120. The covariance matrix developed by @Greig:2017a does not include QSOs with such extreme [C[IV]{}]{} blueshifts as J1342. However, in @Greig:2017b (see Fig. A1 and the associated discussion) we verified that the covariance matrix of emission line properties could be extrapolated to reconstruct QSOs within our dataset with [C[IV]{}]{} blueshifts similar to J1120 ($\sim2000$ km/s). Since the extrapolation works for J1120, we assume that it is equally valid for J1342. Our recovered intrinsic [Ly$\alpha$]{} profile for J1342 is similar to the SDSS/BOSS composite template constructed by @Banados:2018 (blue-dashed curve). Their composite was constructed from 46 QSOs with [C[IV]{}]{} blueshifts (relative to [Mg[II]{}]{}) and [C[IV]{}]{} equivalent widths similar to those of J1342. Likewise, the @Davies:2018a PCA-based reconstruction exhibits a qualitatively similar ML profile. Specifically, their reconstruction prefers a dominant contribution from a broad component-like feature for the [Ly$\alpha$]{} profile, with a secondary narrower component near $1210$Å. However, the distribution of reconstructed profiles differs in our two approaches. Their PCA method does not provide a direct estimate of the associated uncertainty in the fit. 
To estimate the uncertainty and be able to forward model the intrinsic emission, they construct a covariance matrix of fit errors for each spectral bin, by performing a reconstruction on their SDSS sample and comparing to the actual spectra. As a result, their profile samples (cf. the thin blue curves of their figure 8) have unphysical oscillatory features (though it is likely such features average out in their full analysis). Moreover, our reconstruction method prefers a notably broader distribution of reconstructed [Ly$\alpha$]{} profiles than that presented by @Davies:2018b. This broad scatter arises from the correlations amongst individual emission line profiles, which can have notable scatter [@Greig:2017a][^9]. The large spread in reconstructed intrinsic emission profiles translates to a broader PDF for the inferred IGM neutral fraction (discussed below). In the zoom-in panel of Figure \[fig:Profile\], we present the confidence intervals from our joint fitting of the IGM damping wing and reconstructed [Ly$\alpha$]{} profiles (Section \[sec:JointFitting\]). Here, the yellow (blue) shaded regions correspond to the 68 (95) percentiles, using the [<span style="font-variant:small-caps;">Intermediate [H[II]{}]{}</span>]{} EoR morphology. The red curve is the same best-fit reconstructed profile as shown in the left panel, and is used to illustrate the impact of the IGM damping wing on the intrinsic [Ly$\alpha$]{} profile. The offset between the red curve and the yellow/blue strips is suggestive of the presence of an IGM damping wing; however, the spread around the ML shown with the grey curves in the main panel makes such evidence weak. We quantify this in Figure \[fig:PDFs\], which presents the main results of this work: the 1D PDFs of the IGM neutral fraction for the three different EoR morphologies. 
In summary, for each EoR morphology we find: - [**[<span style="font-variant:small-caps;">Small [H[II]{}]{}</span>]{}**]{}; $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}}\sim0.14$, $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} < 0.28$ ($0.51$) at 68 (95) per cent - [**[<span style="font-variant:small-caps;">Intermediate [H[II]{}]{}</span>]{}**]{}; $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} = 0.21\substack{+0.17 \\ -0.19}$ (68 per cent), $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} < 0.61$ (95 per cent) - [**[<span style="font-variant:small-caps;">Large [H[II]{}]{}</span>]{}**]{}; $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} = 0.28\substack{+0.20 \\ -0.23}$ (68 per cent), $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} < 0.70$ (95 per cent). We do not find strong evidence for J1342 to be in a significantly neutral IGM. Depending on the EoR model, the spectrum is consistent with being in a fully-ionised Universe at $\sim$ 1–2$\sigma$. This broad distribution is driven in part by the aforementioned large scatter in the reconstruction. Similar to, but slightly stronger than, J1120 (also shown in the plot; @Greig:2017b), we find a weak dependence of the recovered IGM neutral fraction on the EoR morphology. Compared to J1120, the PDFs are shifted towards lower [$\bar{x}_{\rm HI}$]{} values. The sightline-to-sightline scatter in damping wing opacity increases with decreasing [$\bar{x}_{\rm HI}$]{} (see e.g. figure 3 in @Mesinger:2008p5748), driving broader PDFs. In this regime, the absorption is more sensitive to the incidence of the remaining rare neutral patches, whose sizes and separation depend on the source model. 
Although at first glance it might seem strange that the neutral fraction at $z=7.5$ preferred by J1342 is lower than the one preferred by J1120 at $z=7.1$, it is important to note that the distributions are quite broad. Thus a physically reasonable neutral fraction which evolves monotonically with redshift is perfectly consistent within the errors. Specifically, comparing to the EoR history constraints in Fig. 10 of @Greig:2017EORhist, we see that the [<span style="font-variant:small-caps;">Intermediate [H[II]{}]{}</span>]{} model constraint of $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} = 0.21\substack{+0.17 \\ -0.19}$ (68 per cent) falls comfortably within the 1 $\sigma$ range at $z=7.5$. Our results are in mild tension ($\sim$ 1 – 1.5$\sigma$) with the constraints in @Banados:2018 and @Davies:2018b. These authors find stronger evidence of an incomplete reionisation, $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} = 0.56\substack{+0.21 \\ -0.18}$ [Model A; @Banados:2018] and $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} = 0.60\substack{+0.20 \\ -0.23}$ [@Davies:2018a]. While it is difficult to do a direct comparison with their works, we can speculate on the main causes for this difference. The two approaches yield different results for both components in the analysis: (i) the reconstruction of the intrinsic emission profile; and (ii) the attenuation from the IGM. We discuss these briefly in turn. Although qualitatively similar, the PCA reconstruction in @Davies:2018b results in a somewhat higher amplitude intrinsic emission than we predict with our emission line covariance approach (see Figure \[fig:Profile\]). This would naturally require stronger attenuation of the intrinsic flux to achieve a fit to the observed spectrum. As a result, higher neutral fractions are preferred. 
The opposite is true for the reconstructions of J1120 by these authors. Furthermore, as mentioned previously, the scatter around the ML is smaller in their reconstruction, resulting in narrower PDFs of the inferred neutral fraction. The main difference in modelling the IGM attenuation lies in the treatment of the near zone transmission. Our analysis ignores the flux in the near zone, fitting only the damping wing redward of the line centre (and any probable infall). In contrast, the state-of-the-art approach of @Davies:2018b also uses the near zone flux of J1120 and J1342 when comparing to simulated spectra. They do this by performing a 1D radiative transfer through a 100 $h^{-1}$ Mpc [Ly$\alpha$]{} forest simulation, adding to this a smooth damping contribution from a large semi-numerical simulation of the EoR. Their radiative transfer assumes a constant ionising QSO luminosity, which is on for a fixed time; their final results marginalise over this quasar lifetime.[^10] Using the resulting near zone models when comparing to the observed spectra adds constraining power, but makes the results much more model-dependent. For example, the [Ly$\alpha$]{} forest simulations used in that work have a volume which is roughly a factor of ${\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}} \raise1pt\hbox{$>$}}}$ 300 too small to capture the rare, biased $\sim10^{12} {M_\odot}$ halos expected to host these QSOs. As a result, they do not simulate the biased environment of these QSOs, which might have important consequences for the corresponding near zone transmission profiles. Finally, we comment on the quality of the observed spectrum. In all works, the QSO spectrum used in the analysis has been the combined Magellan/FIRE and Gemini/GNIRS spectrum, which corresponds to a resolution of $R\sim1800$ [@Banados:2018]. 
This relatively low S/N spectrum results in numerous spurious features in the emission spectrum, which can hinder attempts to characterise the QSO continuum or to accurately fit the various emission lines, a prerequisite for this work and the PCA approach of @Davies:2018a. Further, features in emission (absorption) could artificially bias recovered IGM neutral fractions to lower (higher) values. Although attempts have been made to identify and mask problematic regions of the spectrum, this is made difficult owing to the lower resolution. It will therefore be fruitful to return to this analysis once a deeper spectrum is obtained. Conclusion {#sec:Conclusion} ========== With the recent detection of the $z=7.5$ QSO, J1342 [@Banados:2018], we perform an independent analysis quantifying the damping wing imprint from the EoR. @Banados:2018 and @Davies:2018b have already analysed this source using their own analysis pipelines, recovering $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} \sim 0.6$. In both previous works, the red and blue sides of the [Ly$\alpha$]{} line are used for the constraints (e.g. Models B and C of @Banados:2018). Here we focus only on the red side of the line ($>1218$Å). This is a conservative choice in that it is less constraining but more model-independent, as it is not sensitive to the complicated modelling of the near zone transmission. We use the same analysis pipeline that was applied to the $z=7.1$ QSO, J1120 [@Greig:2017b]. We perform a reconstruction of the intrinsic (unattenuated) QSO profile near [Ly$\alpha$]{} using a covariance matrix of correlations between various known emission lines [@Greig:2017a] and then couple these with synthetic IGM damping wing profiles extracted from large EoR simulations with different morphologies. We then fit $\sim10^{10}$ template profiles to the observed spectrum of J1342 between 1218 – 1230Å within a Bayesian framework. 
We recover systematically lower values than those presented by @Banados:2018 and @Davies:2018b, although they are consistent at $\sim$ 1 – 1.5 $\sigma$. Specifically, we find for our three EoR morphologies: - [**[<span style="font-variant:small-caps;">Small [H[II]{}]{}</span>]{}**]{}; $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}}\sim0.14$, $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} < 0.28$ ($0.51$) at 68 (95) per cent - [**[<span style="font-variant:small-caps;">Intermediate [H[II]{}]{}</span>]{}**]{}; $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} = 0.21\substack{+0.17 \\ -0.19}$ (68 per cent), $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} < 0.61$ (95 per cent) - [**[<span style="font-variant:small-caps;">Large [H[II]{}]{}</span>]{}**]{}; $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} = 0.28\substack{+0.20 \\ -0.23}$ (68 per cent), $\bar{x}_{{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi}{}} < 0.70$ (95 per cent). We suspect the primary differences arise from the reconstruction of the intrinsic QSO profile and the modelling of the host QSO environment. Our results are consistent within 1 $\sigma$ with previous estimates of the global EoR history, which are suggestive of a moderately extended reionisation. Acknowledgements {#acknowledgements .unnumbered} ================ We thank Fred Davies for comments on a draft version of this manuscript. Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. AM acknowledges funding support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 638809 – AIDA – PI: AM). 
[^1]: E-mail: greigb@unimelb.edu.au [^2]: Each component of the emission line is modelled as a Gaussian, fully described by the peak height, width and velocity offset from systemic. [^3]: Note that in the construction of this dataset we removed QSOs where the [Ly$\alpha$]{} line profile was not well characterised by two Gaussian components (see Appendix C in @Greig:2017a). In most cases, the primary cause of this was absorption features at or near [Ly$\alpha$]{}. In total, this resulted in $\sim150$ QSOs being excluded from our final dataset (i.e. $\lesssim10$ per cent). [^4]: This dataset uses the SDSS-III pipeline redshift to convert to rest-frame (see Appendix A of @Greig:2017a for discussions on the redshift choice). [^5]: While we have used the \[[C[II]{}]{}\] redshift for J1342, we do not have \[[C[II]{}]{}\] redshifts for the SDSS-III dataset. This difference in redshift choice can lead to biases in the recovered line blueshifts from the line fitting and reconstruction pipelines. However, this bias is sub-dominant compared to the scatter in the correlations in the emission line parameters and variations between the different lines of sight through our EoR simulations. [^6]: Note that this prior range differs from the $1230 < \lambda < 1275$Å used in @Greig:2017b. Here we are more conservative in our choice owing to the evidence of a stronger damping wing imprint that may extend beyond 1230Å and the lower S/N of the observed spectrum. If we were to relax the prior range used for J1120 [@Greig:2017b], the overall constraints would remain essentially unchanged; however, the PDFs would be slightly broader. [^7]: http://homepage.sns.it/mesinger/EOS.html [^8]: Although this is currently highly uncertain, here we motivate the [<span style="font-variant:small-caps;">Intermediate [H[II]{}]{}</span>]{} EoR morphology as a reasonable choice. 
Some groups have recently suggested there might be weak evidence of a turn-over starting to appear in the faint end of the lensed $z\sim6$ LFs (Yue et al. 2018; Atek et al. 2018). Moreover, the current consensus of EoR observations prefers a late, moderately extended reionisation, most naturally driven by galaxies of intermediate masses (e.g. Mitra et al. 2017; Fig. 11 in @Greig:2017EORhist). [^9]: It is also broader than the distribution shown for J1120 [@Greig:2017b] where we presented only the 68 percentiles while also using a more aggressive flux prior. [^10]: As pointed out in @Davies:2018b, our analysis effectively assumes a complicated prior over such a quasar lifetime. By ensuring that the surrounding [H[II]{}]{} regions in our EoR simulations are [*at least*]{} as large as the observed near zone, we are essentially assuming a minimum QSO contribution. For the damping wing redward of [Ly$\alpha$]{} used in our analysis, this is mainly relevant in very neutral universes in which galaxies could not by themselves carve out large enough [H[II]{}]{} bubbles surrounding the QSO. As a result, our analysis is slightly biased against very neutral Universes, which is what we called a “conservative” choice in @Greig:2017b. In any case, @Davies:2018b show that the assumed QSO lifetime has a negligible impact on their results for a reasonable range of values $10^4$ – $10^7$ yr (e.g. see their figures 7 and 10).
--- author: - 'Zhen-Tao Zhang[^1]' - 'Zheng-Yuan Xue' - Yang Yu title: 'Detecting fractional Josephson effect through $4\pi$ phase slip' --- Introduction ============ Topological superconductors with $p$-wave pairing are a hot topic in condensed matter physics. Such a system can host at its boundaries exotic quasiparticles, Majorana fermions (MFs), which are their own antiparticles. MFs have important applications in quantum information processing [@Kitaev01; @Nayak08; @Xue13; @Xue15]. Two spatially separated MFs can encode one physical qubit, known as a topological qubit. This non-locality makes the topological qubit immune to local environmental noise. To date, an intrinsic topological superconductor has yet to be found. However, MFs are also predicted to exist in composite systems, e.g., a topological insulator coupled to an s-wave superconductor via the proximity effect [@Fu08], or a spin-orbit-coupled semiconducting nanowire combined with superconductivity and a magnetic field [@Lutchyn10; @Oreg10]. Recently, several groups have claimed that they have observed important signatures of MFs in these systems [@Mourik12; @Deng12; @Das12]. However, the existence of MFs has not been confirmed due to the lack of smoking-gun evidence. A remarkable signature of MFs is the fractional Josephson effect. It is well known that the supercurrent through a conventional Josephson junction is $2\pi $ periodic in the phase difference across the junction. However, this statement is not always true for a topological Josephson junction, which is made of two weakly coupled topological superconductors instead of s-wave superconductors. Kitaev has predicted that the current-phase relation in a topological Josephson junction should be $4\pi $ periodic [@Kitaev01]. This period doubling of the Josephson current is protected by fermion parity conservation. The fermion parity would not change unless a quasiparticle excitation occurs. 
Unfortunately, non-equilibrium quasiparticles are found in superconducting systems even at very low temperatures, an effect known as quasiparticle poisoning [@Matveev93; @Joyez94]. It can break the parity conservation of the system and restore the $2\pi $ period of the current within a characteristic time. Therefore, an experiment to probe the $4\pi $ periodicity should be accomplished within the characteristic time of quasiparticle poisoning. On the other hand, the experimental duration is limited by the adiabatic condition and the measurement speed. Fast manipulation of the phase difference can excite transitions from the subgap Majorana bound states to the above-gap continuum states via Landau-Zener transitions. Therefore, it is challenging to experimentally detect the fractional Josephson effect. Recently, several theoretical proposals have been put forward to overcome the quasiparticle poisoning problem [@San-Jose12; @Houzet13; @Peng16]. Although these proposals are nearly insensitive to quasiparticle poisoning, they all require that the junction work in the ballistic regime, where the nanowire is nearly transparent, i.e., the conductance $D\sim 1$. In this regime a nontopological Josephson junction can also produce the fractional Josephson effect due to Landau-Zener transitions [@Sau12; @Sothmann13]. Therefore, it is desirable to find a scheme working in the tunneling regime of the junction ($D\ll 1$). In addition, most previous research has paid attention to the AC Josephson effect, where the junction is voltage or current biased. Actually, the fractional DC Josephson effect, which does not involve dissipation, is more useful in the context of quantum information processing. For instance, it can be employed to couple topological qubits with conventional superconducting qubits. Here we conceive a scheme for detecting the fractional DC Josephson effect. 
Compared with its AC analog [@Wiedenmann16; @Bocquillon16; @Bocquillon17], the DC effect is more susceptible to parity-breaking excitations and other imperfections. Generally, three mechanisms, namely conventional Josephson coupling [@Pekker13], quasiparticle poisoning, and the coupling between the two MFs belonging to one topological superconductor, result in a conventional $2\pi $ phase slip that screens the $4\pi $ slip of the topological Josephson energy. By carefully designing the parameters of the device and the experiment, we can overcome all three problems at the same time. First, the conventional Josephson coupling can be neglected for suitable circuit parameters, because the conventional Josephson energy $E_{J}$ depends on the junction parameters in a different manner than its topological analog $E_{m}$. When $E_{J}$ is much smaller than $E_{m}$, the $2\pi $ phase slips are inhibited. Second, our scheme can be implemented on a time scale much shorter than the characteristic time of quasiparticle poisoning. Finally, the circuit used in our scheme can be designed such that the interaction between the MFs of one topological superconductor is much smaller than the topological Josephson coupling. In this case, the $4\pi $ phase slip can overwhelm the conventional $2\pi $ slip.

System and Hamiltonian ====================== The system we consider is a superconducting loop interrupted by a junction. The junction is made by placing a spin-orbit-coupled semiconductor nanowire on two separate superconductors. The two segments of the nanowire in contact with the superconductors underneath become superconducting due to the proximity effect. Combined with a parallel magnetic field, the nanowire can be tuned into the topological phase. 
When the Zeeman splitting exceeds a critical value $B_{c}=\sqrt{\Delta ^{2}+\mu ^{2}}$ ($\Delta $ and $\mu $ are the superconducting gap and the chemical potential, respectively), the two segments of proximitized nanowire transition to topological superconductors and two pairs of MFs emerge at their boundaries (see Fig. \[circuit\]). Moreover, the two MFs at the junction couple with each other. The coupling Hamiltonian reads $$\label{eq1} H_{m}=i\gamma _{1}\gamma _{2}E_{m}\cos \frac{\varphi }{2},$$ in which $\gamma _{1},\gamma _{2}$ are Majorana operators, $\varphi $ is the phase difference across the junction, and $E_{m}=\Delta \sqrt{D}$ is the amplitude of the topological Josephson coupling energy, with $D$ the conductance of the quasi-one-dimensional nanowire. In addition, a conventional Josephson coupling of the junction may also be present, associated with the quasi-continuum states above the superconducting gap. For a single-channel nanowire, the conventional Josephson coupling can be written as $$\label{eq2} H_{J}=-\Delta \sqrt{1-D\sin ^{2}\frac{\varphi }{2}}.$$ In the low-conductance regime ($D\ll 1$), $H_{J}$ reduces to the celebrated tunneling Josephson coupling $H_{J}=-E_{J}\cos \varphi $ (up to a constant) with $E_{J}=\Delta D/4$. Therefore, it is straightforward to deduce the relation $E_{J}=E_{m}^{2}/4\Delta $. If $E_{m}$ is much smaller than the superconducting gap, we get $E_{J}\ll E_{m}$. In this case, we can safely ignore the $H_{J}$ term [@Hell16] and write the whole Hamiltonian as $$\label{eq3} H=E_{c}n^{2}+E_{L}(\varphi -\varphi _{e})^{2}+H_{m},$$ where $E_{c}=2e^{2}/C$ is the charging energy of the junction, $E_{L}=(\phi _{0}/2\pi )^{2}/2L$ is the inductive energy of the circuit with $\phi _{0}$ being the flux quantum, and $\varphi _{e}=2\pi \phi _{e}/\phi _{0}$, where $\phi _{e}$ denotes the external flux threading the loop. The Hamiltonian is the same as that of a flux qubit except for the Josephson coupling term. 
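The reduction of Eq. (\[eq2\]) to the tunneling form, and the resulting suppression $E_{J}/E_{m}=\sqrt{D}/4$, can be checked numerically. The sketch below is our own illustration (the values $\Delta =200$ GHz and $D=0.01$ are assumptions, not parameters fixed in the text); it compares the full expression with its low-transparency expansion.

```python
import numpy as np

# Illustrative values (our assumption): gap Delta, transparency D << 1
Delta, D = 200.0, 0.01        # GHz
E_J = Delta * D / 4.0         # tunneling-limit Josephson energy, E_J = Delta*D/4
E_m = Delta * np.sqrt(D)      # topological coupling amplitude, E_m = Delta*sqrt(D)

phi = np.linspace(0.0, 4.0 * np.pi, 2001)
H_full = -Delta * np.sqrt(1.0 - D * np.sin(phi / 2.0) ** 2)   # Eq. (2)
H_tun = -Delta + E_J * (1.0 - np.cos(phi))                    # expansion to O(D)

err = np.max(np.abs(H_full - H_tun))
print(E_J / E_m, err)   # E_J/E_m = sqrt(D)/4 = 0.025; err ~ Delta*D^2/8
```

The maximum deviation is of order $\Delta D^{2}/8$, confirming that for small $D$ the junction is well described by $-E_{J}\cos \varphi $ with $E_{J}=E_{m}^{2}/4\Delta \ll E_{m}$.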
As is well known, a pair of MFs forms one Dirac fermion, and $H_{m}$ can be expressed as $$\label{eq4} H_{m}=E_{m}\cos \frac{\varphi }{2}(2f^{\dag }f-1),$$ ![(Color online) Schematic of the circuit. The superconducting loop is interrupted by a superconductor-normal metal-superconductor junction. The loop is biased with an external flux $\protect\phi _{e}$. The junction is formed by a spin-orbit-coupled nanowire lying on two separate superconductors. When the magnetic field along the nanowire is larger than a critical value, the two segments of proximitized nanowire (orange sections) are topological superconductors. At their boundaries are four MFs $\protect\gamma _{1},\protect\gamma _{2},\protect\gamma _{3},\protect\gamma _{4}$. []{data-label="circuit"}](1.eps) in which we have defined $f=(\gamma _{1}+i\gamma _{2})/2$. The eigenvalue of $f^{\dag }f$ (0 or 1) determines the parity of the Dirac fermion (even or odd). The topological Josephson coupling given by $H_{m}$ has two distinctive features. First, the coupling is $4\pi $ periodic in the phase difference. As a result, charge tunnels through the junction in units of single electrons instead of Cooper pairs. Very recently, an experiment [@Albrecht16] examined this feature in the Coulomb blockade regime, in which $E_{c}\gg E_{m}$. In the opposite regime, $E_{c}\ll E_{m}$, the $4\pi $ phase slip dual to single-electron tunneling can occur. Second, the coupling depends on the fermion parity of the two MFs at the junction. This feature makes the $4\pi $ phase slip sensitive to fermion-parity-breaking events, such as quasiparticle poisoning. In the following sections, we present our scheme for uncovering this unique $4\pi $ feature of MFs.

Scheme ====== We now investigate how to observe the $4\pi $ phase slip with the system described in the last section. Without loss of generality, we assume that the parity of the MFs is restricted to the even subspace. 
Later on, we will consider the effect of unintended parity changes on the phase slips. Under this assumption, the potential energy of the whole Hamiltonian (Eq. (\[eq3\])) is $$\label{eq5} U=E_{L}(\varphi -\varphi _{e})^{2}-E_{m}\cos \frac{\varphi }{2}.$$ By tuning the parameter $\varphi _{e}$ we can control the shape of the potential. If $\varphi _{e}=0$, the potential has one global minimum at $\varphi =0$ (see Fig. 2A). If the flux is biased at $\varphi _{e}=2\pi $, a symmetric double-well potential is formed, similar to the potential of a flux qubit biased at $\varphi _{e}=\pi $. However, the separation of the two minima of the double well is $\sim 4\pi $ instead of $\sim 2\pi $ (see Fig. 2B). The lowest two energy eigenstates in the double well are symmetric and antisymmetric superpositions of the left and right local states; their energy splitting is denoted by $\Delta E$. To probe the $4\pi $ phase slip, we initially set $\varphi _{e}=0$. In the low-temperature limit, the system relaxes to the ground state in the well around $\varphi =0$. Then, we switch the bias to $\varphi _{e}=2\pi $ quickly, so that the system remains localized in the left well during this operation, and wait for a time $\Delta t\sim \frac{1}{\Delta E}$. During this period, resonant tunneling of the phase difference between the two wells can occur, and the state of the system oscillates coherently between the left and right local states of the double well. Finally, we bias the circuit away from $\varphi _{e}=2\pi $ and measure the total flux of the circuit. The resulting flux will be about either 0 or $2\phi _{0}$, corresponding to the left or right local state of the double well, respectively. The probability of finding ‘$2\phi _{0}$’ oscillates with $\Delta t$. In experiment, the total flux of the loop can be measured with another RF SQUID [@Spanton17]. 
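As a numerical illustration of the scheme, the tunnel splitting $\Delta E$ of the double well in Eq. (\[eq5\]) can be estimated by discretizing the phase and diagonalizing $H=E_{c}n^{2}+U(\varphi )$ with $n=-i\,d/d\varphi $. The sketch below uses the parameter values quoted later in the text ($E_{m}=25$ GHz, $E_{c}=3$ GHz, $E_{L}=1$ GHz); the grid size and window are our own choices, so the result is only indicative.

```python
import numpy as np

# Parameters quoted in the text (GHz; hbar = 1, energies as frequencies)
E_c, E_L, E_m = 3.0, 1.0, 25.0
phi_e = 2.0 * np.pi

# Discretize phi on a window wide enough that the bound states vanish at the edges
N = 1200
phi = np.linspace(-2.0 * np.pi, 6.0 * np.pi, N)
d = phi[1] - phi[0]

# Even-parity potential, Eq. (5): U = E_L (phi - phi_e)^2 - E_m cos(phi/2)
U = E_L * (phi - phi_e) ** 2 - E_m * np.cos(phi / 2.0)

# H = E_c n^2 + U with n = -i d/dphi, via a central finite difference
H = np.diag(2.0 * E_c / d**2 + U)
H += np.diag(-E_c / d**2 * np.ones(N - 1), 1)
H += np.diag(-E_c / d**2 * np.ones(N - 1), -1)

w = np.linalg.eigvalsh(H)
delta_E = w[1] - w[0]          # tunnel splitting of the lowest doublet
print(f"Delta E ~ {delta_E * 1e3:.1f} MHz")
```

The splitting comes out small compared with the plasma frequency of each well, consistent with the tens-of-MHz scale quoted in the text.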
The probability of the system being projected onto the $2\phi _{0}$ state can be obtained by repeating the above operations many times. Note that if the same operations were applied to a conventional or topologically trivial RF SQUID, the final measured flux would always be $\phi _{0}$, independent of $\Delta t$, because of the $2\pi $ periodicity of its Josephson coupling [@Lange15]. Hence, the oscillating $4\pi $ phase slip is a distinctive signature of a topological Josephson junction. In practice, however, the superconducting circuit is subject to unavoidable disturbances that might destroy this signature. Therefore, it is vital to investigate the robustness of our scheme. ![Potential energy configurations. The fermion parity is even, and the circuit is biased at $\protect\varphi _{e}=0$ (A) and $\protect\varphi _{e}=2\protect\pi $ (B). In (B), the potential is a symmetric double well whose lowest two energy eigenstates are symmetric and antisymmetric superpositions of the left and right local states. ](2.eps){width="7cm" height="4.5cm"}

Effect of quasiparticle poisoning --------------------------------- In Eq. (\[eq5\]), we have assumed that the parity of the MFs is conserved throughout the whole process. In reality, parity conservation can be broken by quasiparticle poisoning. Quasiparticles exist in various superconducting systems even at very low temperature. A single quasiparticle excitation event can alter the occupation of the in-gap states of a junction; for the topological Josephson junction, it flips the parity of the MFs. In our case, we prepare the MFs in the even-parity state, so an unwanted excitation will take them to the odd state. If this happens while the circuit is biased at $\varphi _{e}=2\pi $, the potential energy profile is changed, and the circuit will eventually settle into the ground state of the well with minimum at $\varphi =2\pi $. That is exactly the result for a conventional RF SQUID under the same bias sequence. 
Thus, the $4\pi $ phase slip disappears. Therefore, the experiment to observe the $4\pi $ phenomenon must be carried out within a period shorter than the quasiparticle poisoning time. Typically, the parity lifetime of the bound state in a proximitized semiconductor nanowire in an applied magnetic field exceeds 10 $\mu $s [@Albrecht16]. The time needed to implement our scheme is on the order of $1/\Delta E$. We choose the parameters as follows: $E_{m}=25$ GHz$\times h$, $E_{c}=3$ GHz$\times h$, $E_{L}=1$ GHz$\times h$. With this parameter configuration, we have numerically calculated the splitting $\Delta E=25$ MHz. This value means that the phase slips happen on a time scale of $40$ ns, which is at least two orders of magnitude shorter than the poisoning time. We stress that after each run of the experiment, the fermion parity is re-initialized to the even subspace. Therefore, we can claim that quasiparticles have little impact on our scheme.

A comment is in order. In our parameter set, the Josephson coupling energy is much larger than the inductive energy, with ratio $E_{m}/E_{L}=25$. Even so, the finiteness of this ratio makes the separation of the two minima of the symmetric double well smaller than $4\pi $; in fact, the separation is about $3\pi $ with our parameters. From this point of view, the expression ‘$4\pi $ phase slip’ is somewhat loose. Similarly, in a conventional RF SQUID the amplitude of the phase slip is not exactly $2\pi $ either ($<2\pi $). The names actually stem from the form of the corresponding Josephson couplings. In any case, we can distinguish these two kinds of phase slips without any confusion.

Effect of finite length of topological superconductor ----------------------------------------------------- It is known that the coupling between the two MFs of one topological superconductor oscillates with the length of the superconductor [@Cheng09; @Sarma12]. 
The oscillation amplitude decreases exponentially with the length $L$, $$\label{eq6} \varepsilon =\varepsilon _{0}e^{-L/\xi },$$ where $\varepsilon _{0}$ is a prefactor and $\xi $ is the superconducting coherence length. Generally, if the topological superconductor is much longer than its coherence length, this coupling is rather weak and can be neglected. That is why we did not include the interactions between $\gamma _{1}$($\gamma _{2}$) and $\gamma _{3}$($\gamma _{4}$) in Eq. (\[eq1\]). In practice, however, the length of a one-dimensional topological superconductor may be limited by fabrication techniques or by the size of the circuit. It is therefore necessary to investigate the effect of the coupling between $\gamma _{1}$($\gamma _{2}$) and $\gamma _{3}$($\gamma _{4}$) on the $4\pi $ phase slips.

Let us first look at the Josephson coupling energy in the absence of the interactions $\gamma _{1}\gamma _{3}$, $\gamma _{2}\gamma _{4}$, i.e., $H_{m}$ (Eq. (\[eq4\])). When the phase difference takes values $(2k+1)\pi $ ($k$ an integer), the even- and odd-parity states are degenerate. When the interactions are present, the potential energy can be written as $$\label{eq7} U^{\prime }=E_{L}(\varphi -\varphi _{e})^{2}-E_{m}\cos \frac{\varphi }{2}\sigma _{z}+\varepsilon \sigma _{x},$$ ![Two kinds of tunnelings. The solid lines describe the potential energy given by Eq. (\[eq7\]) after diagonalization in the parity subspace. The MF couplings $\protect\gamma _{1}\protect\gamma _{3},\protect\gamma _{2}\protect\gamma _{4}$ make transitions of the fermion parity of $\protect\gamma _{1}\protect\gamma _{2}$ possible. Tunneling 1 (dashed line) does not change the parity, while Tunneling 2 (dotted line) does.](3.eps){width="7cm" height="6cm"} where $\sigma _{x,z}$ are Pauli operators acting in the fermion parity space of $\gamma _{1},\gamma _{2}$. 
$\varepsilon $ denotes the coupling strength of $\gamma _{1}\gamma _{3}$ ($\gamma _{2}\gamma _{4}$), which is much smaller than $E_{m}$. It is easy to see that the odd-even degeneracies at $\varphi =(2k+1)\pi $ are lifted and anticrossings arise instead, which leads to mixing of the two parity states. When the circuit is biased at $\varphi _{e}=2\pi $ with the initial state being the ground state in the left well, there are two possible tunneling events: tunneling to the right well with the same parity (named Tunneling 1), and tunneling to the nearest well with opposite parity (Tunneling 2), as shown in Fig. 3. Tunneling 1 is a consequence of the topological Josephson coupling and signifies the $4\pi $ phase slip. In contrast, Tunneling 2 denotes the $2\pi $ phase slip, which is always associated with a topologically trivial Josephson junction. Therefore, if Tunneling 2 dominates the process, the $4\pi $ phase slip is masked and we cannot distinguish the topological phase from the topologically trivial one. One thus needs to clarify whether Tunneling 2 is weak enough to be neglected under experimentally feasible conditions. We now estimate the rate of Tunneling 2. The coexistence of parity switching and quantum fluctuations of the phase difference makes this task difficult, so we solve the problem in a quasiclassical manner. Since Tunneling 2 changes the fermion parity, it is reasonable to relate its rate to the transition rate between the parity states when $\varphi $ is treated as a classical quantity. With the circuit biased at $\varphi _{e}=2\pi $, the system is initially located in the left well, whose minimum is at $\varphi \sim \pi /2$ (not 0, owing to the finite value of $E_{m}/E_{L}$), with even parity. After Tunneling 2, the system localizes at $\varphi =2\pi $ with odd parity. Therefore, the tunneling rate is limited by the transition rate of the fermion parity at $\varphi =\pi /2$. 
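The parity-flip probability entering this quasiclassical estimate can be checked with a direct two-level simulation of $H=\Delta \sigma _{z}+\varepsilon \sigma _{x}$, where $\Delta =E_{m}\cos (\pi /4)$ is the parity splitting at $\varphi =\pi /2$. A minimal sketch, taking $\varepsilon /E_{m}=10^{-3}$ (the order of magnitude quoted below for a 2 $\mu $m wire):

```python
import numpy as np

# Two-level model of the parity flip at phi = pi/2:
# H = Delta*sigma_z + eps*sigma_x, Delta = E_m*cos(pi/4), eps/E_m ~ 1e-3
E_m = 25.0                          # GHz
Delta = E_m * np.cos(np.pi / 4.0)
eps = 1.0e-3 * E_m

sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
H = Delta * sz + eps * sx

vals, vecs = np.linalg.eigh(H)
Omega = np.sqrt(Delta**2 + eps**2)            # generalized Rabi frequency
times = np.linspace(0.0, np.pi / Omega, 2001)

psi0 = np.array([1.0, 0.0], dtype=complex)    # even-parity state
c0 = vecs.conj().T @ psi0
P_odd = [abs((vecs @ (np.exp(-1j * vals * t) * c0))[1]) ** 2 for t in times]
P_max = max(P_odd)
print(P_max, eps**2 / (eps**2 + Delta**2))    # both ~ 2e-6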
For convenience, we assume they are approximately equal. The calculation of parity transition rate is a typical two-level-system problem. Starting with even parity, the population of odd parity state is oscillating with time between 0 and $P$, with $P=\varepsilon \Big /\sqrt{% \varepsilon ^{2}+(E_{m}\cos \frac{\pi }{4})^{2}}$. According to Eq. ([eq6]{}) and the parameters in Ref. [@Deng16], when the nanowire is as long as $L=2$ $\mu m$ which is reachable in experiment, the MFs coupling $% \varepsilon $ is three orders of magnitude smaller than $E_{m}$. In this case, the maximum odd parity population $P\approx 0$, which means that even$% \rightarrow $odd transition rate is almost vanishing. One may argue that the initial state does not localize at $\varphi =\pi /2$, but spreads on a range even including the anticrossing point $\varphi =\pi $. In fact, the parity transition rate reach its maximum value of $\varepsilon $ at the anticrossing, which is the same order of magnitude as tunneling rate of Tunneling 1, i.e., $\Delta E$. However, the probability of the initial state be around the anticrossing is very small due to the large ratio $% E_{m}/\varepsilon $, thereby Tunneling 2 would rarely occur in the period of Tunneling 1. In other words, $4\pi $ phase slip will not be covered by $2\pi $ phase slip. Discussion and Conclusion ========================= We would like to discuss the feasibility of our scheme. The scheme is conceived based on the Hamiltonian of the system given by Eq. (\[eq3\]), in which we have neglected the conventional Josephson coupling of the topological junction. For justifying this approximation, we estimate the ratio $E_{J}/E_{m}$ with practical parameters. For the typical material NbN, its superconducting critical temperature is $\sim $10 K, which equals eight times of the value of $E_{m}$ chosen in this paper. This condition in turn leads to $E_{J}=E_{m}/32$. 
Consequently, the conventional Josephson coupling has little effect on the 4$\pi $ phase slips and can be ignored. In addition, the large ratio of $\Delta /E_{m}$ is helpful to prevent the subgap Majorana bound state being excited to the continuum states. The other issue is the viability of RF SQUID with a very small inductance energy. It is worth noting that a small value of the inductance energy and, thus, a large magnitude of *L* is essential for the observation of the $4\pi $ phase slip, since the large ratio $E_{m}/E_{L}$ can make the distance of the minima of the double well of the superconducting phase far exceed $2\pi $. Taken $E_{L}=1$ GHz, the inductance of the loop *L* is up to 100 nH. In experiment, we can design a large area superconducting loop, or make use of a array of Josepshon junctions playing the role of a superinductor, such as that in fluxonium qubit [@Pekker13]. Indeed, the requirement of the large inductance could be loosed at the expense of slightly reducing the amplitude of the phase slip.In conclusion, we have proposed a scheme for detecting fractional DC Josephson effect in topological RF SQUID system through $4\pi $ phase slip. To observe this phase slip, we take advantage of the resonant tunneling of the phase difference. Our calculations with reachable parameters show that the duration of the process of the scheme is much shorter than the quasiparticle poisoning time. More importantly, the $4\pi $ phase slip could overwhelm the topological trivial $2\pi $ phase slip with a practical nanowire length. Our scheme is experimentally feasible, and promising for exploring the interplay of topological superconductors and quantum computation. We thank the very helpful discussions with Shi-Liang Zhu. This work was funded by the National Science Foundation of China (No.11404156), the Startup Foundation of Liaocheng University (Grant No.318051325), the NFRPC (Grant No.2013CB921804), and the NKRDP of China (Grant No. 2016YFA0301800). [0]{} . . . . 
. . . . . . . . . . . . . . . . ; . . . . . . . . [^1]: zhzhentao@163.com
--- abstract: 'One of the principal models of magnetic sensing in migratory birds rests on the quantum spin-dynamics of transient radical pairs created photochemically in ocular cryptochrome proteins. We consider here the role of electron spin entanglement and coherence in determining the sensitivity of a radical pair-based geomagnetic compass and the origins of the directional response. It emerges that the anisotropy of radical pairs formed from spin-polarized molecular triplets could form the basis of a more sensitive compass sensor than one founded on the conventional hyperfine-anisotropy model. This property offers new and more flexible opportunities for the design of biologically inspired magnetic compass sensors.' author: - 'Hannah J. Hogben' - Till Biskup - 'P. J. Hore' bibliography: - 'abbrjrnl.bib' - 'entanglement.bib' title: 'Entanglement and Sources of Magnetic Anisotropy in Radical Pair-Based Avian Magnetoreceptors' --- The biophysics and biochemistry that allow birds to sense the direction of the geomagnetic field (25-65 $\mu$T) are for the most part obscure. One of the two currently popular hypotheses (the other involves biogenic iron-oxide nanostructures [@wink-jrsif-7-S273]) is founded on magnetically sensitive photochemical reactions in the retina [@schu-zpc-111-1]. It is thought that photo-induced radical pairs in cryptochrome, a blue-light photoreceptor protein, may constitute the primary magnetic sensor [@ritz-bj-78-707; @maed-pnas-109-4774] and a variety of supporting evidence has accumulated over the last few years (reviewed in [@rodg-pnas-106-353; @lied-jrsif-7-S147; @ritz-pchem-3-262; @mour-conb-22-343]). 
If this mechanism proves to be correct, it will incontrovertibly come under the umbrella of ‘quantum biology’ [@ball-n-474-242], as an instance of Nature using fundamentally quantum behaviour – in this case the coherent spin dynamics of radical pairs – to achieve something that would be essentially impossible by means of more conventional chemistry. For this reason, the avian magnetic compass has attracted the attention of quantum information theorists and others wishing to understand the role played by spin-entanglement and to determine whether the techniques of quantum control could shed light on this intriguing sensory mechanism [@cai-prl-104-220502; @gaug-prl-106-040503; @cai-pra-85-022315; @cai-pra-85-040304]. A fundamental property of radical pairs that allows sensitivity to magnetic interactions orders of magnitude smaller than $k_{\rm B}T$ is that their chemical transformations conserve electron spin. Radical pairs are therefore created with the same spin-multiplicity (singlet or triplet) as their precursors. Owing to electron-nuclear hyperfine (HF) interactions, neither singlets nor triplets are, in general, eigenstates of the spin Hamiltonian. Consequently, the radical pair starts out in a non-stationary superposition which evolves coherently at frequencies determined by the HF interactions and also, crucially for a magnetic sensor, by the electronic Zeeman interactions with an external magnetic field [@rodg-pnas-106-353]. Spin decoherence and spin relaxation can be slow enough to allow even an Earth-strength magnetic field to modulate the spin dynamics and hence alter the yields of the products formed by spin-selective reactions. The anisotropy of the HF interactions leads to anisotropic reaction yields and hence, in principle, a magnetic direction sensor [@cint-cp-294-385; @maed-n-453-387]. 
The singlet state – the initial state of the radical pairs formed photochemically in cryptochromes [@maed-pnas-109-4774; @webe-jpcb-114-14745] – is entangled: $$\begin{aligned} \label{eq:singletstate} |{\rm S}\rangle\langle{\rm S}| &=& \tfrac{1}{2}|\alpha_1\beta_2\rangle\langle\alpha_1\beta_2|+\tfrac{1}{2}|\beta_1\alpha_2\rangle\langle\beta_1\alpha_2|\\\nonumber &&-\tfrac{1}{2}|\alpha_1\beta_2\rangle\langle\beta_1\alpha_2|-\tfrac{1}{2}|\beta_1\alpha_2\rangle\langle\alpha_1\beta_2|\end{aligned}$$ ($\alpha$ and $\beta$ are the $m_S=\pm\frac{1}{2}$ spin states of the two unpaired electrons). But other initial states are also known to result in magnetically sensitive chemistry [@stei-cr-89-51]: do they too need to be entangled or is it sufficient if they are ‘merely’ coherent? Or is neither entanglement nor coherence necessary for a magnetic compass? Questions such as these have been addressed in two recent papers. Briegel and his group noted that randomly generated separable (i.e. not entangled) initial states could result in reaction product yields more anisotropic than those produced from an initial singlet state under the same conditions [@cai-prl-104-220502]. The other study, by Benjamin and colleagues, reached similar conclusions by analysing model radical pair systems, finding significant product yield anisotropies for the separable initial state [@gaug-prl-106-040503] $$\begin{aligned} \label{eq:separablestate} \tfrac{1}{2}|{\rm S}\rangle\langle{\rm S}| + \tfrac{1}{2}|{\rm T_0}\rangle\langle{\rm T_0}| &=& \tfrac{1}{2}|\alpha_1\beta_2\rangle\langle\alpha_1\beta_2|\\\nonumber && + \tfrac{1}{2}|\beta_1\alpha_2\rangle\langle\beta_1\alpha_2|\end{aligned}$$ in which ${\rm T}_0$ is the $m_S=0$ triplet spin state. Here we examine the role of initial entanglement and attempt to clarify the various sources of magnetic anisotropy that might form the basis of a radical pair compass sensor in birds. #### Initial radical pair states. 
We start by identifying chemically feasible initial electron spin states. Geminate radical pairs are normally formed by spin-conserving chemical reactions so that at the moment of their creation they are either pure singlet, described by the initial electron spin density matrix $\hat{\rho}_0=\hat{\rho}_0({\rm S})=|{\rm S}\rangle\langle{\rm S}|$, or pure triplet $\hat{\rho}_0=\hat{\rho}_0({\rm T})=\frac{1}{3}\left(\hat{\openone}-|{\rm S}\rangle\langle{\rm S}|\right)$. Occasionally, singlet and triplet formation channels operate in parallel [@maed-cc-47-6563], in which case $\hat{\rho}_0$ is a weighted sum of $\hat{\rho}_0({\rm S})$ and $\hat{\rho}_0({\rm T})$, i.e. of $|{\rm S}\rangle\langle{\rm S}|$ and $\hat{\openone}$: $$\begin{aligned} \label{eq:mixedstart} \hat{\rho}_0 &=& \mu\hat{\rho}_0({\rm S})+(1-\mu)\hat{\rho}_0({\rm T})\\\nonumber &=& \tfrac{1}{3}(4\mu-1)|{\rm S}\rangle\langle{\rm S}|+\tfrac{1}{3}(1-\mu)\hat{\openone} \end{aligned}$$ Eq.  is also appropriate for ‘F-pairs’ [@stei-cr-89-51] formed from radicals with uncorrelated spins (i.e. $\mu=\frac{1}{4}$). The operators $|{\rm S}\rangle\langle{\rm S}|$ and $\hat{\openone}$ and their linear combinations are invariant to rotations in the electron spin-space, meaning that all states that can be written in the form of Eq.  are isotropic. Any $\hat{\rho}_0$ that cannot be so written is necessarily anisotropic. Significantly different initial states can occur when the radical pair comes from a molecular triplet precursor formed by intersystem crossing (ISC). 
This route is common in photochemical reactions of the general type: $$\begin{aligned} \label{eq:reactionscheme} \text{AB}\xrightarrow{\;h\nu\;}\,^{\rm S}[{\rm AB}]^\ast\xrightarrow{\;\text{ISC}\;}\,^{\rm T}[{\rm AB}]^\ast\xrightarrow{\;\text{reaction}\;}{^{\rm T}[{\rm A^\bullet\;B^\bullet}]}\end{aligned}$$ in which the final step that creates the triplet radical pair could be homolysis (as shown) or inter- or intramolecular electron transfer, hydrogen atom transfer, etc. The formation of $^{\rm T}[{\rm AB}]^\ast$ from $^{\rm S}[{\rm AB}]^\ast$ requires the creation of spin angular momentum at the expense of orbital angular momentum. This process is mediated by spin-orbit coupling and is anisotropic in the molecular frame [@groo-mp-12-259]. That is, the three triplet sub-levels of $^{\rm T}[{\rm AB}]^\ast$ are differentially populated leading to a spin polarization in the molecular frame that is passed to the radical pair on its formation. In an appropriately chosen molecular axis system, the initial state of the radical pair may be written: $$\begin{aligned} \label{eq:molframe} \hat{\rho}_0 &=& \sum_{q=x,y,z}p_q |{\rm T}_q\rangle\langle{\rm T}_q|\end{aligned}$$ Anisotropic ISC is known to be responsible for a variety of spin-chemical and spin-polarization phenomena [@stei-cr-89-51; @atki-mp-27-1633; @kats-mp-100-1245; @koth-jpcb-114-14755]. Aside from linear combinations of Eqs  and , there are no other commonly occurring initial conditions for radical pairs subject to weak magnetic fields. #### Minimal radical pair model. Insights into the spin dynamics of the various initial states just identified can be obtained from a minimal model [@timm-mp-95-71] comprising two electron spins one of which is coupled to a spin- nucleus (e.g. $^1$H). The HF interaction is either isotropic or axially anisotropic according to the value of a dimensionless parameter, $\alpha$ [@cint-cp-294-385]. 
Two cases are considered specifically: $\alpha=0$ (isotropic) and $\alpha=-1$ (the anisotropic interaction that results in the largest reaction yield anisotropy for this 3-spin system [@cai-pra-85-040304]). To account for the chemical reactivity of the radical pair, we adopt the ‘exponential model’ [@timm-mp-95-71] in which singlet and triplet states react spin-selectively with the same first-order rate constant, $k$, to form distinct products. The quantum yields of these competing reactions are calculated using standard methods [@cint-cp-294-385; @timm-mp-95-71] (outlined in the Appendix). The two quantities of interest are $\Phi_{\rm S}$, the fractional yield of the product formed via the singlet pathway, referred to here as the ‘reaction yield’, and $\Delta\Phi_{\rm S}$, the magnitude of its anisotropy: $\Delta\Phi_{\rm S}=\text{max}\left\{\Phi_{\rm S}\right\}-\text{min}\left\{\Phi_{\rm S}\right\}$. The variation of $\Phi_{\rm S}$ with the orientation of the radical pair in a 50 $\mu$T magnetic field is the basis of the compass sensor. To begin, we choose the isotropic initial condition in Eq.  together with an anisotropic HF interaction ($\alpha=-1$). In the not unrealistic limit, $|a|\gg\omega\gg k$ [@cint-cp-294-385; @timm-cpl-334-387]: $$\begin{aligned} \label{eq:isotropic} \Phi_{\rm S} &=& \tfrac{1}{4}+\tfrac{1}{12}(4\mu-1)\cos^2\theta; \quad \Delta\Phi_{\rm S}=\tfrac{1}{12}|4\mu-1|\end{aligned}$$ where $a$ is the isotropic HF coupling constant, $\omega$ is the strength of the magnetic field, and $\theta$ is the angle between the symmetry axis of the HF tensor and the magnetic field vector. $\Phi_{\rm S}$ is anisotropic, and therefore potentially suitable as a magnetic compass, except when the initial state is a statistical ( : ) mixture of singlet and triplet ($\mu=\frac{1}{4}$). 
The maximum anisotropy ($\Delta\Phi_{\rm S}=\frac{1}{4}$) occurs when the initial state is pure singlet ($\mu=1$); for a pure triplet initial state ($\mu=0$), $\Delta\Phi_{\rm S}$ is smaller by a factor of three. These results were verified by exact numerical simulations (see Appendix). To quantify the entanglement of the various initial electron spin states considered here, we use the ‘concurrence’ $C(\hat{\rho}_0)$ proposed by Wootters [@woot-prl-80-2245] for a two-qubit density operator. For the initial condition in Eq. , $C(\hat{\rho}_0)$ is $2\mu-1$ when $\mu>\frac{1}{2}$ and zero when $\mu\le\frac{1}{2}$ (see Appendix). Thus, a singlet–triplet mixture must contain more than 50% singlet for the initial state to be entangled. The pure triplet state ($\mu=0$) is not entangled, but as we have just seen it gives rise to a significantly anisotropic reaction yield. We now turn to a different initial condition, a linear combination of Eq.  (with $\mu=0$) and Eq.  (with $p_x=p_y=0$; $p_z=1$): $$\begin{aligned} \label{eq:lincomb} \hat{\rho}_0 &=& \eta|{\rm S}\rangle\langle{\rm S}|+(1-\eta)|{\rm T}_z\rangle\langle{\rm T}_z|\end{aligned}$$ i.e. an anisotropic mixed singlet-triplet initial state in which the triplet component is 100% polarized along the molecular $z$-axis. In the same limit as before ($|a|\gg\omega\gg k$), but now for an *isotropic* HF interaction: $$\begin{aligned} \label{eq:anisotropic} \Phi_{\rm S} &=& \tfrac{3}{8}-\tfrac{1}{4}(1-\eta)\sin^2\theta; \quad \Delta\Phi_{\rm S}=\tfrac{1}{4}(1-\eta)\end{aligned}$$ where $\theta$ is now the angle between the triplet polarization axis ($z$) and the magnetic field vector. The anisotropy is maximised when $\eta=0$ (pure $|{\rm T}_z\rangle$ triplet, $\Delta\Phi_{\rm S}=\frac{1}{4}$) and is at a minimum when $\eta=1$ (pure singlet, $\Delta\Phi_{\rm S}=0$). Once again, these expressions were confirmed by numerical simulations (see Appendix). We note that Eqs  and predict identical maximum directional responses. 
The reaction yield is isotropic when $\eta=1$ because then both the initial state $|{\rm S}\rangle\langle{\rm S}|$ and the spin-Hamiltonian are isotropic. The angle-dependence in Eq.  clearly arises because the spin dynamics depend on the direction of the magnetic field with respect to the quantization ($z$) axis of the initial $|{\rm T}_z\rangle$ state [@kats-njp-12-085016]. The concurrence of the density operator in Eq.  is $2\eta-1$ when $\eta\ge\frac{1}{2}$ and $1-2\eta$ when $\eta\le\frac{1}{2}$. Pure singlet and pure $|{\rm T}_z\rangle$ triplet thus have the same degree of entanglement but lead to very different $\Delta\Phi_{\rm S}$. Hitherto we have taken the reaction rates of the singlet and triplet states ($k_{\rm S}$ and $k_{\rm T}$) to be identical. Once this restriction is lifted, it is even possible to have magnetic field effects when the initial state is a statistical mixture of singlet and triplet: $\hat{\rho}_0=\frac{1}{4}\hat{\rho}_0({\rm S})+\frac{3}{4}\hat{\rho}_0({\rm T})=\frac{1}{4}\hat{\openone}$. To illustrate this point, simulations for the minimal radical pair with an anisotropic HF coupling are included in the Appendix. $\Delta\Phi_{\rm S}$ is non-zero except when $k_{\rm S}=k_{\rm T}$. That is, a radical pair can exhibit magnetic compass properties even when its initial electron spin state is neither entangled nor coherent. In this case the coherence arises during the spin evolution as a result of the differential reactivity of the singlet and triplet states. #### Relation between compass properties and entanglement. A complex picture emerges from these simple considerations. Entangled initial states can give small or zero reaction yield anisotropy. Non-entangled initial states can lead to appreciable anisotropy. With two sources of anisotropic reaction yields – the initial state and the HF interactions – it is tricky to assess whether entanglement, or coherence in a given basis, is essential for magnetic compass action. 
For example, replacing $\hat{\rho}_0=|{\rm S}\rangle\langle{\rm S}|$ (Eq. ) by $\hat{\rho}_0=\frac{1}{2}|{\rm S}\rangle\langle{\rm S}|+\frac{1}{2}|{\rm T}_0\rangle\langle{\rm T}_0|$ (Eq. ) not only removes the initial entanglement, and the coherence in the $\left\{|\alpha_1\beta_2\rangle,|\beta_1\alpha_2\rangle\right\}$ basis, it also introduces anisotropy that was not present in $|{\rm S}\rangle\langle{\rm S}|$. Similarly, most randomly chosen initial states are anisotropic and some will give a larger $\Delta\Phi_{\rm S}$ than does $|{\rm S}\rangle\langle{\rm S}|$ under identical conditions. In short, it appears that initial entanglement is not a particularly helpful concept when assessing the sensitivity of a radical pair compass; nor is it straightforwardly illuminating to consider the behaviour of artificial initial states. #### A radical pair compass based on initial-state anisotropy. The above considerations suggest an alternative compass design in which the directionality comes from the initial condition rather than the HF interactions. In the minimal model, the initial state that gives the largest reaction yield anisotropy is $\hat{\rho}_0=|{\rm T}_q\rangle\langle{\rm T}_q|$ where $q=x,y,z$ (see Appendix). We therefore compare $|{\rm T}_q\rangle\langle{\rm T}_q|$ with $|{\rm S}\rangle\langle{\rm S}|$ using exact numerical simulations (see Appendix). The possibility that spin-polarized triplet radical pairs might offer some advantage over singlets has been noted before but without realistic suggestions for the chemical origin of such initial states [@kats-njp-12-085016]. ![\[fig:rp11\] Reaction yield anisotropy, $\Delta\Phi_{\rm S}$, calculated (see Appendix) for a radical pair in which one radical contains a $^1$H nucleus (spin-$\frac{1}{2}$) and a $^{14}$N nucleus (spin-1). $k=10^6\;\rm s^{-1}$ and $\omega = 50\;\rm\mu T$. 
The HF coupling parameters (in mT) are: $a_{\rm H} = -0.8$; $T_{{\rm H},xx} = 0.8\;\delta$; $T_{{\rm H},yy} = -0.6\;\delta$; $T_{{\rm H},zz} = -0.2\;\delta$; $a_{\rm N} = 0.4$; $T_{{\rm N},xx} = -0.5\;\delta$; $T_{{\rm N},yy} = -0.5\;\delta$; $T_{{\rm N},zz} = 1.0\;\delta$. $\hat{\rho}_0=|{\rm S}\rangle\langle{\rm S}|$ (black) and $\hat{\rho}_0=|{\rm T}_y\rangle\langle{\rm T}_y|$ (green). Also shown are representations of the hyperfine tensors for $\delta=0$ (left) and $\delta=1$ (right).](Fig2){width="3in"} Figure \[fig:rp11\] shows the reaction yield anisotropy of a radical pair inspired by the flavin adenine dinucleotide radical, FADH$^\bullet$, formed photochemically in cryptochromes [@lang-jacs-131-14274]. One radical contains $^1$H and $^{14}$N nuclei with isotropic HF couplings approximately equal to those of the proton and nitrogen (H5 and N5, see appendix) in the central ring of the tricyclic isoalloxazine ring system of FADH$^\bullet$ (these being the two largest HF interactions in FADH$^\bullet$ [@webe-jacs-123-3790]). The anisotropic components of the two interactions were also modelled on FADH$^\bullet$, but with a uniform scaling by a factor of $\delta$, in the range $0.001-1.0$. For the smaller values of $\delta$, the spin-Hamiltonian is essentially isotropic. When the initial state $\hat{\rho}_0$ is a 100% spin-polarized triplet, $\Delta\Phi_{\rm S}$ has significant magnitude for all values of $\delta$. In contrast, when $\hat{\rho}_0=|{\rm S}\rangle\langle{\rm S}|$, $\Delta\Phi_{\rm S}$ is essentially zero until the HF tensors become significantly anisotropic ($\delta\approx0.1$). By the time the HF anisotropy is comparable to that in FADH$^\bullet$ (i.e. $\delta\approx1.0$), both initial states give very similar directional responses to the 50 $\mu$T applied magnetic field. 
This suggests that a spin-polarized triplet geminate radical pair with isotropic HF interactions could operate as a compass sensor just as well as an initial singlet state with anisotropic HF interactions. Indeed, there are circumstances in which, other things being equal, the anisotropy of the initial state might offer a more sensitive compass than one based on HF anisotropy. Biologically plausible radical pairs are likely to have many magnetic nuclei (mostly $^1$H and $^{14}$N) with differently aligned HF tensors. Simulations suggest that the directional information potentially available from individual HF tensors tends to be scrambled in a multinuclear radical pair, resulting in a greatly reduced $\Delta\Phi_{\rm S}$ (see Appendix). A simple illustration of this effect is given in Fig. \[fig:rp40\] which shows simulations of the reaction yield anisotropy for a spin system in which one of the radicals contains four spin-$\frac{1}{2}$ nuclei with tetrahedrally disposed axial HF tensors. When all four tensors are identical ($\delta=1$), the reaction yield anisotropy for $\hat{\rho}_0=|{\rm S}\rangle\langle{\rm S}|$ vanishes, by symmetry. However, when the symmetry is reduced to $C_{\rm 3v}$, by scaling the principal components of one of the HF tensors by a factor $\delta$, the value of $\Delta\Phi_{\rm S}$ increases but does not approach that afforded by $\hat{\rho}_0=|{\rm T}_x\rangle\langle{\rm T}_x|$ until $|\log_{10}\delta|$ reaches ca. $1.0$. Thus it appears that the compass properties of a radical pair with many mutually cancelling HF interactions could be ‘rescued’ by having a triplet, rather than a singlet, initial condition, provided the triplet is spin-polarized by anisotropic intersystem crossing. ![\[fig:rp40\] Reaction yield anisotropy, $\Delta\Phi_{\rm S}$, calculated (see Appendix) for a radical pair in which one radical contains four $^1$H nuclei, all of which have axially anisotropic HF interactions with $a = 0$. 
The symmetry axes of the four HF tensors are directed towards the vertices of a tetrahedron. Three of the tensors have principal values: $T_{11} = T_{22} = -1.0$, $T_{33} = 2.0\rm\; mT$. The fourth is identical apart from a uniform scaling of the principal values by a factor $\delta$. $k=10^6\;\rm s^{-1}$ and $\omega = 50\rm\;\mu T$. $\hat{\rho}_0=|{\rm S}\rangle\langle{\rm S}|$ (black) and $\hat{\rho}_0=|{\rm T}_x\rangle\langle{\rm T}_x|$ (green). Also shown are representations of the hyperfine tensors for $\delta=0.5$, $\delta=1.0$, and $\delta=1.5$.](Fig3-smaller){width="3.375in"} #### Discussion. Having identified the initial spin-states in which radical pairs may be formed by chemical reaction, we revisited earlier attempts to determine the importance of entanglement and coherence as determinants of the anisotropic responses of radical pair magnetoreceptors. It appears that the use of artificial initial spin-states for this purpose is somewhat confounded by their intrinsic anisotropy, the effects of which may dominate the anisotropy conferred by the HF interactions. From these considerations it emerges that the anisotropy of radical pairs formed from spin-polarized molecular triplets could form the basis for a magnetic compass that is more sensitive than one based on the conventional HF-anisotropy model [@schu-zpc-111-1], in particular when the HF couplings are not strongly anisotropic or when the individual effects of multiple HF anisotropies tend to counteract one another. Would a triplet radical pair compass be compatible with cryptochrome as the primary magnetoreceptor? In the cryptochromes investigated hitherto (bacterial [@bisk-acie-50-12647], plant [@maed-pnas-109-4774] and frog [@webe-jpcb-114-14745]), flavin-tryptophan radical pairs are formed as *singlets*. However, avian cryptochromes may behave differently, and there are precedents for triplet radical pairs in other flavoproteins [@eise-jacs-130-13544; @tham-jacs-132-15542]. 
Superficially, it appears that flavins may be suitable for an initial triplet-state compass: intersystem crossing in both flavin mononucleotide and riboflavin at near-neutral pH results in fractional populations of the zero-field triplet sub-levels of $p_x=\frac{1}{3}, p_y=\frac{2}{3}, p_z=0$ [@kowa-jacs-126-11393]. Within the minimal model discussed above, this would lead to a high reaction yield anisotropy, two-thirds that of the maximum possible (see Appendix). The use of spin-polarized triplets should open new channels for the design of bio-inspired molecular devices for sensing the direction of weak magnetic fields. We thank DARPA (QuBE: N66001-10-1-4061) and the EPSRC for financial support. Appendix ======== #### Basis states. The spin dynamics of radical pairs may usefully be described in terms of two distinct sets of basis states. In both, singlet and triplet states are eigenstates of the total electron spin operator, $\hat{S}$: $$\begin{aligned} \label{eq:appdx1} \langle{\rm S}|\hat{S}|{\rm S}\rangle &=& 0\\\nonumber \langle{\rm T}_i|\hat{S}|{\rm T}_i\rangle &=& \sqrt{2} \quad (i=0,\pm 1\quad\text{or}\quad x,y,z)\end{aligned}$$ The triplet basis states are either the eigenstates of $\hat{S}_z$, the component of $\hat{S}$ along the $z$-axis: $$\begin{aligned} \label{eq:appdx2} \langle{\rm T}_m|\hat{S}_z|{\rm T}_m\rangle &=& m \quad (m=0,\pm 1)\end{aligned}$$ or are defined in terms of the three cartesian components of $\hat{S}$: $$\begin{aligned} \label{eq:appdx3} \langle{\rm T}_q|\hat{S}_q|{\rm T}_q\rangle &=& 0 \quad (q=x,y,z)\\\nonumber \langle{\rm T}_x|\hat{S}_y|{\rm T}_z\rangle &=& {\rm i} \quad (\text{and cyclic permutations of } x,y,z)\end{aligned}$$ The relations between the two are: $$\begin{aligned} \label{eq:appdx4} |{\rm T}_x\rangle &=& \tfrac{1}{\sqrt{2}}|{\rm T}_{-1}\rangle-\tfrac{1}{\sqrt{2}}|{\rm T}_{+1}\rangle \\\nonumber |{\rm T}_y\rangle &=& \tfrac{\rm i}{\sqrt{2}}|{\rm T}_{-1}\rangle+\tfrac{\rm i}{\sqrt{2}}|{\rm T}_{+1}\rangle 
\\\nonumber |{\rm T}_z\rangle &=& |{\rm T}_0\rangle\end{aligned}$$ $|{\rm S}\rangle$ and $|{\rm T}_m\rangle$ can also be written: $$\begin{aligned} \label{eq:appdx5} |{\rm S}\rangle &=& \tfrac{1}{\sqrt{2}}\left[|\alpha_1\beta_2\rangle - |\beta_1\alpha_2\rangle\right] \\\nonumber |{\rm T}_{+1}\rangle &=& |\alpha_1\alpha_2\rangle\\\nonumber |{\rm T}_0\rangle &=& \tfrac{1}{\sqrt{2}}\left[|\alpha_1\beta_2\rangle + |\beta_1\alpha_2\rangle\right] \\\nonumber |{\rm T}_{-1}\rangle &=& |\beta_1\beta_2\rangle\end{aligned}$$ where $|\alpha_j\rangle$ and $|\beta_j\rangle$ are defined by: $$\begin{aligned} \label{eq:appdx6} \langle\alpha_j|\hat{S}_{j,z}|\alpha_j\rangle &=& +\tfrac{1}{2} \\\nonumber \langle\beta_j|\hat{S}_{j,z}|\beta_j\rangle &=& -\tfrac{1}{2} \quad (j=1,2)\end{aligned}$$ The axis system in which $\hat{S}_x$, $\hat{S}_y$ and $\hat{S}_z$ and $\hat{S}_{j,z}$ are defined may be chosen to be the ‘laboratory frame’, in which the $z$-axis is commonly the direction of the applied magnetic field, or a ‘molecular frame’, which could, for example, be the principal axis system of one of the hyperfine (HF) tensors. The triplet state $|{\rm T}_q\rangle$ $(q=x,y,z)$, Eq. , is spin-polarized in the $q=0$ principal plane within the molecule [@groo-mp-12-259]. #### Initial state. The initial state of the radical pair spin system, $\hat{\rho}(0)$, is written as the direct product of the initial density operator for the two electron spins, $\hat{\rho}_0$, and identity operators for each of the nuclear spins ($i=1,2,\cdots$) to which the electrons are coupled: $$\begin{aligned} \label{eq:appdx7} \hat{\rho}(0) &=& \frac{1}{M}\hat{\rho}_0 \otimes \left\{\bigotimes_i\hat{\openone}_i\right\}\end{aligned}$$ ($M$ is the total dimension of the nuclear spin-space). It is assumed that the formation of the radical pair is not nuclear spin-dependent. 
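The basis-state relations above can be verified mechanically with explicit spin matrices. The following sketch, our own construction, builds the Zeeman-basis and cartesian triplet states and checks $\langle{\rm T}_q|\hat{S}_q|{\rm T}_q\rangle=0$ and $\langle{\rm T}_x|\hat{S}_y|{\rm T}_z\rangle={\rm i}$ together with its cyclic permutations:

```python
import numpy as np

# Single spin-1/2 operators and basis states.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5])
a = np.array([1.0, 0.0])   # |alpha>, m = +1/2
b = np.array([0.0, 1.0])   # |beta>,  m = -1/2
I2 = np.eye(2)

# Total electron spin components S = S_1 + S_2.
Sx, Sy, Sz = (np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz))

# Zeeman-basis triplets and the singlet.
Tp1 = np.kron(a, a)
T0  = (np.kron(a, b) + np.kron(b, a)) / np.sqrt(2)
Tm1 = np.kron(b, b)
S   = (np.kron(a, b) - np.kron(b, a)) / np.sqrt(2)

# Cartesian (zero-field) triplets.
Tx = (Tm1 - Tp1) / np.sqrt(2)
Ty = 1j * (Tm1 + Tp1) / np.sqrt(2)
Tz = T0
```

With these phase conventions, `Tx.conj() @ Sy @ Tz` evaluates to ${\rm i}$ and each diagonal matrix element $\langle{\rm T}_q|\hat{S}_q|{\rm T}_q\rangle$ vanishes, as required.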
In the absence of chemical reactivity and spin-decoherence, the probability that the radical pair is in a singlet state at time $t$ is given by the expectation value of the singlet projection operator, $\hat{P}^{\rm S}$: $$\begin{aligned} \label{eq:appdx8} \left\langle\hat{P}^{\rm S}\right\rangle\!(t) &=& \text{Tr}\left[\hat{\rho}(t)\hat{P}^{\rm S}\right] \\\nonumber &=& \text{Tr}\left[{\rm e}^{-{\rm i}\hat{H}t}\hat{\rho}(0){\rm e}^{+{\rm i}\hat{H}t}\hat{P}^{\rm S}\right]\end{aligned}$$ where $\hat{H}$ is the time-independent spin Hamiltonian and $$\begin{aligned} \label{eq:appdx9} \hat{P}^{\rm S} &=& \left\{|{\rm S}\rangle\langle{\rm S}|\right\}\otimes\left\{\bigotimes_i\hat{\openone}_i\right\}\end{aligned}$$ #### Minimal model. The principles of radical pair magnetoreception can be discussed using a simple model comprising two electron spins and one spin-$\frac{1}{2}$ nucleus (e.g. $^1$H) with an axially anisotropic HF coupling to one of the electron spins: $$\begin{aligned} \label{eq:appdx10} \hat{H}_\text{hfi} &=& \sum_{q=x,y,z}A_{qq}\hat{S}_{1,q}\hat{I}_q \\\nonumber &=& a\sum_{q=x,y,z}\hat{S}_{1,q}\hat{I}_q + a\alpha\left(\hat{S}_{1,x}\hat{I}_x+\hat{S}_{1,y}\hat{I}_y-2\hat{S}_{1,z}\hat{I}_z\right)\end{aligned}$$ $a$ is the isotropic part of the HF interaction (expressed as an angular frequency) and $\alpha$ is a dimensionless axiality parameter [@cint-cp-294-385]. Defined in this way, the HF interaction has cylindrical symmetry around the molecular $z$-axis. Two cases are considered specifically: $\alpha=0$ (isotropic HF interaction) and $\alpha=-1$ (the HF interaction that results in the largest anisotropy in the reaction yield of this 3-spin system [@cai-pra-85-040304]). 
The electron Zeeman interaction is included by means of the spin Hamiltonian: $$\begin{aligned} \label{eq:appdx11} \hat{H}_\text{Zeeman} &=& \omega\sum_{j=1,2}\left[\hat{S}_{j,z}\cos\theta+\hat{S}_{j,x}\sin\theta\right]\end{aligned}$$ in which $\omega$ is the strength of the applied magnetic field (expressed as an angular frequency) and $\theta$ specifies its direction with respect to the symmetry axis ($z$) of the HF tensor. It is assumed that the $g$-tensors of the two radicals are identical and isotropic, and that the nuclear Zeeman interactions are negligible. Both are excellent approximations for organic radicals subject to the weak magnetic fields of interest here. ![\[fig:minimalrp\] Reaction yield anisotropy, $\Delta\Phi_{\rm S}$, of the minimal radical pair model with a non-coherent initial state, $\hat{\rho}_0=\frac{1}{4}\hat{\openone}$. $\Delta\Phi_{\rm S}$ is shown as a function of the rate constants $k_{\rm S}$ and $k_{\rm T}$. $a = 1.0\;\rm mT$, $\alpha=-1$, $\omega = 50\;\rm \mu T$.](Fig1-smaller){width="3.375in"} ![\[fig:mfe\] Magnetic field effect on the reaction yield of the minimal radical pair model. $k=10^{-3}a$. (a) Anisotropic HF interaction ($\alpha=-1$). S (solid lines) and T (dashed lines) denote the initial radical pair states $\hat{\rho}_0=\hat{\rho}_0({\rm S})$ ($\mu=1$) and $\hat{\rho}_0=\hat{\rho}_0({\rm T})$ ($\mu=0$), respectively, in Eq. . The perturbation theory result in Eq. is valid in the shaded region where $|a|\gg\omega\gg k$. The dependence of $\Phi_{\rm S}$ on the strength of the applied magnetic field ($\omega/a$) is shown for various angles between the symmetry axis of the HF tensor and the magnetic field vector. The sharp features near $\log_{10}(\omega/a) = 0.3$ arise from level anti-crossings [@timm-cpl-334-387]. (b) Isotropic HF interaction ($\alpha = 0$). 
S (solid line) and T (dashed lines) denote $\hat{\rho}_0=|{\rm S}\rangle\langle{\rm S}|$ ($\eta=1$) and $\hat{\rho}_0=|{\rm T}_z\rangle\langle{\rm T}_z|$ ($\eta=0$), respectively, in Eq. . The perturbation theory result in Eq. is valid in the shaded region where $|a|\gg\omega\gg k$. The dependence of $\Phi_{\rm S}$ on the strength of the applied magnetic field ($\omega / a$) is shown for various angles between the triplet alignment axis and the magnetic field vector.](Fig4a){width="3.375in"} To account for the chemical reactivity of the radical pair within the minimal model, we use the ‘exponential model’ [@timm-mp-95-71] in which singlet and triplet states react spin-selectively with the same first-order rate constant, $k_{\rm S}=k_{\rm T}=k$, to form distinct products. Although unlikely to be strictly valid for any real magnetoreceptor, this approximation simplifies the algebra without distorting the underlying physics. The yield of the chemical product formed via the *singlet* pathway, and its anisotropy, are calculated as [@timm-mp-95-71]: $$\begin{aligned} \label{eq:appdx12} \Phi_{\rm S} &=&k\int_0^\infty\left\langle\hat{P}^{\rm S}\right\rangle\!(t){\rm e}^{-kt}{\rm d}t \\ \label{eq:appdx13} \Delta\Phi_{\rm S} &=& \text{max}\left(\Phi_{\rm S}\right) - \text{min}\left(\Phi_{\rm S}\right)\end{aligned}$$ so that $0\le\Phi_{\rm S}\le 1$. The corresponding yield for the triplet reaction channel is simply $1-\Phi_{\rm S}$. $\Phi_{\rm S}$ is referred to as the reaction yield and $\Delta\Phi_{\rm S}$ as the reaction yield anisotropy. The variation of $\Phi_{\rm S}$ with the orientation of the radical pair with respect to an external magnetic field forms the basis of the compass mechanism [@schu-zpc-111-1]. 
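In the eigenbasis of $\hat{H}$ the time integral for $\Phi_{\rm S}$ can be done analytically: writing $\omega_{mn}=E_m-E_n$ and using tildes for operators transformed to the eigenbasis, $\Phi_{\rm S}=\sum_{mn}\tilde{\rho}_{mn}(0)\,\tilde{P}^{\rm S}_{nm}\,k/(k+{\rm i}\omega_{mn})$. The sketch below implements this for the minimal model with a singlet-born pair; it is our own implementation with illustrative parameter values (in units of $|a|$), not the code used for the figures:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5]).astype(complex)
I2 = np.eye(2)

def op(m, site):
    """Embed a one-spin operator at `site` of the (e1, e2, nucleus) space."""
    mats = [I2, I2, I2]
    mats[site] = m
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def singlet_yield(a, alpha, omega, theta, k):
    # HF tensor: A_xx = A_yy = a(1+alpha), A_zz = a(1-2alpha).
    A = [a * (1 + alpha), a * (1 + alpha), a * (1 - 2 * alpha)]
    H = sum(A[q] * op(s, 0) @ op(s, 2) for q, s in enumerate((sx, sy, sz)))
    # Zeeman term, field at angle theta to the HF symmetry (z) axis.
    for j in (0, 1):
        H = H + omega * (np.cos(theta) * op(sz, j) + np.sin(theta) * op(sx, j))
    # Singlet projector: P_S = 1/4 - S1.S2 (tensored with the nuclear identity).
    PS = 0.25 * np.eye(8) - sum(op(s, 0) @ op(s, 1) for s in (sx, sy, sz))
    rho0 = PS / np.trace(PS).real          # singlet pair, unpolarized nucleus
    E, V = np.linalg.eigh(H)
    r = V.conj().T @ rho0 @ V
    p = V.conj().T @ PS @ V
    w = E[:, None] - E[None, :]
    return np.sum(r * p.T * (k / (k + 1j * w))).real
```

In the regime $|a|\gg\omega\gg k$ (e.g. `singlet_yield(1.0, -1.0, 0.01, theta, 1e-4)`) this reproduces, to a few per cent, the perturbative result $\Phi_{\rm S}=\frac{1}{4}+\frac{1}{4}\cos^2\theta$ quoted in the main text for a singlet-born pair.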
When $k_{\rm S}\ne k_{\rm T}$, the calculation of $\Phi_{\rm S}$ is performed in Liouville space: $$\begin{aligned} \label{eq:appdx14} \Phi_{\rm S} &=& k_{\rm S}\left\langle\hat{P}^{\rm S}|\hat{\hat{L}}^{-1}|\hat{\rho}(0)\right\rangle \\ \label{eq:appdx15} \hat{\hat{L}} &=& {\rm i}\left(\hat{H}\otimes\hat{\openone}_8-\hat{\openone}_8\otimes\hat{H}^{\sf T}\right) \\\nonumber && + \tfrac{1}{2}k_{\rm S}\left(\hat{P}^{\rm S}\otimes\hat{\openone}_8 + \hat{\openone}_8\otimes\hat{P}^{\rm S}\right) \\\nonumber && + \tfrac{1}{2}k_{\rm T}\left(\hat{P}^{\rm T}\otimes\hat{\openone}_8 + \hat{\openone}_8\otimes\hat{P}^{\rm T}\right)\end{aligned}$$ where $\hat{\openone}_8$ is the identity operator in the 8-dimensional spin-space and $\hat{P}^{\rm T}=\hat{\openone}_8-\hat{P}^{\rm S}$. Fig. \[fig:minimalrp\] shows simulations for the minimal radical pair with an anisotropic HF coupling and an initial state: $$\begin{aligned} \label{eq:appdx16} \hat{\rho}_0 &=& \tfrac{1}{4}\hat{\rho}_0({\rm S})+\tfrac{3}{4}\hat{\rho}_0({\rm T}) = \tfrac{1}{4}\hat{\openone}\end{aligned}$$ $\Delta\Phi_{\rm S}$ is non-zero except when $k_{\rm S}=k_{\rm T}$. The radical pair can exhibit magnetic compass properties even when its initial electron spin state is neither entangled nor coherent. The coherence arises during the spin evolution as a result of the differential reactivity of the singlet and triplet states. #### Perturbation theory. To obtain estimates of the maximum possible magnetic responses within the minimal model, we use a perturbative approach [@timm-cpl-334-387; @cint-cp-294-385], appropriate for weak applied magnetic fields and long-lived radical pairs. This approximation is valid when $|a|\gg\omega\gg k$. 
These conditions are not unrealistic: HF interactions are of the order of $10^8\;\rm rad\; s^{-1}$ ($\approx 500\;\rm\mu T$), the geomagnetic field is roughly $10^7\;\rm rad\; s^{-1}$ ($\approx 50\;\rm\mu T$), and plausible values of the rate constant $k$ are $10^5-10^6\;\rm s^{-1}$ [@rodg-pnas-106-353]. Eqs  and were verified by exact numerical simulations, the results of which are shown in Fig. \[fig:mfe\]. #### Multinuclear radical pairs. ![\[fig:anisoappdx\] Reaction yield anisotropy, $\Delta\Phi_{\rm S}$, calculated for a radical pair in which one radical contains a $^1$H nucleus (spin-$\frac{1}{2}$) and a $^{14}$N nucleus (spin-1). $k=10^6\;\rm s^{-1}$ and $\omega=50\;\rm\mu T$. The HF coupling parameters (in mT) are as in the caption for Fig. \[fig:rp11\]. $\hat{\rho}_0=|{\rm S}\rangle\langle{\rm S}|$ (black). $\hat{\rho}_0=|{\rm T}_x\rangle\langle{\rm T}_x|$ (a, green), $\hat{\rho}_0=|{\rm T}_y\rangle\langle{\rm T}_y|$ (b, green), $\hat{\rho}_0=|{\rm T}_z\rangle\langle{\rm T}_z|$ (c, green).](Fig5a){width="3in"} In the general case, the HF component of the spin Hamiltonian has the form: $$\begin{aligned} \label{eq:appdx17} \hat{H}_\text{hfi} &=& \sum_{j=1,2}\sum_k\left[\hat{\mathbf{S}}_j\cdot\mathbf{A}_{jk}\cdot\hat{\mathbf{I}}_k\right] \\\nonumber &=& \sum_{j=1,2}\sum_k\left[a_{jk}\hat{\mathbf{S}}_j\cdot\hat{\mathbf{I}}_k + \hat{\mathbf{S}}_j\cdot\mathbf{T}_{jk}\cdot\hat{\mathbf{I}}_k\right]\end{aligned}$$ where $a_{jk}$, $\mathbf{T}_{jk}$ and $\mathbf{A}_{jk}$ are, respectively, the isotropic HF coupling constant, the anisotropic HF tensor and the total HF tensor for nucleus $k$ coupled to the electron in radical $j$. The Zeeman term is: $$\begin{aligned} \label{eq:appdx18} \hat{H}_\text{Zeeman} &=& \omega\sum_{j=1,2}\left[\hat{S}_{j,x}\sin\theta\cos\phi\right. \\\nonumber &&\qquad\qquad + \left.\hat{S}_{j,y}\sin\theta\sin\phi + \hat{S}_{j,z}\cos\theta\right]\end{aligned}$$ where $\theta$ and $\phi$ specify the direction of the field in the molecular axis system. 
Fig. \[fig:anisoappdx\] shows, for completeness, versions of Fig. \[fig:rp11\] in which the three initial triplet states are compared with $|{\rm S}\rangle\langle{\rm S}|$. #### Concurrence. To quantify the entanglement of the various initial electron spin states $\hat{\rho}_0$, we use the ‘concurrence’ proposed by Wootters for a two-qubit density operator [@woot-prl-80-2245]: $$\begin{aligned} \label{eq:appdx19} C(\hat{\rho}_0) &=& \text{max}\left\{0,\lambda_1-\lambda_2-\lambda_3-\lambda_4\right\}\end{aligned}$$ where the $\lambda_i$ are the non-negative real square roots of the eigenvalues, in decreasing order, of: $$\begin{aligned} \label{eq:appdx20} \hat{\rho}_0\left(\hat{\sigma}_y\otimes\hat{\sigma}_y\right)\hat{\rho}_0^\ast\left(\hat{\sigma}_y\otimes\hat{\sigma}_y\right)\end{aligned}$$ in which $\hat{\sigma}_y$ is twice the $\hat{S}_y$ operator for a single electron spin, and $\hat{\rho}_0^\ast$ is the complex conjugate of $\hat{\rho}_0$. #### General initial conditions. Within the minimal model, the most general initial state consistent with both Eqs  and is: $$\begin{aligned} \label{eq:appdx21} \hat{\rho}_0 &=& \varepsilon|{\rm S}\rangle\langle{\rm S}| + (1-\varepsilon)\sum_{q=x,y,z}p_q|{\rm T}_q\rangle\langle{\rm T}_q|\end{aligned}$$ with $0\le\varepsilon\le 1$, $p_x,p_y,p_z \ge 0$ and $p_x+p_y+p_z=1$. Some such states are entangled and some are not. Almost all are anisotropic and may lead to compass behaviour even when the spin-Hamiltonian is isotropic. For the minimal model, with an isotropic HF coupling and $|a|\gg\omega\gg k$, the initial state in Eq.  
leads to: $$\begin{aligned} \label{eq:appdx22} \Phi_{\rm S} &=& \tfrac{1}{8}(2\varepsilon+1)+\tfrac{1}{4}(1-\varepsilon)\\\nonumber && \left(p_x\sin^2\theta\cos^2\phi + p_y\sin^2\theta\sin^2\phi + p_z\cos^2\theta\right) \\ \Delta\Phi_{\rm S} &=& \tfrac{1}{4}(1-\varepsilon)(p_\text{max}-p_\text{min})\end{aligned}$$ where $p_\text{max} = \text{max}\{p_x,p_y,p_z\}$ and $p_\text{min} = \text{min}\{p_x,p_y,p_z\}$. The maximum anisotropy is obtained when $\varepsilon = 0$, $p_\text{max}=1$ and $p_\text{min}=0$ (giving $\Delta\Phi_{\rm S}=\frac{1}{4}$). The concurrence $C(\hat{\rho}_0)$ of the state in Eq. is: $$\begin{aligned} \label{eq:appdx23} 2\varepsilon-1 \quad &&\text{when}\quad \varepsilon > \tfrac{1}{2} \\ 2p_\text{max}(1-\varepsilon)-1 \quad &&\text{when}\quad \varepsilon \le \tfrac{1}{2} \\\nonumber &&\quad\text{and}\quad p_\text{max}\ge\tfrac{1}{2(1-\varepsilon)} \\ 0 \quad && \text{otherwise}\end{aligned}$$ #### Laboratory-frame polarization Although not relevant for a geomagnetic compass sensor, radical pairs can be created with large laboratory-frame polarizations, i.e. with unequal populations of the triplet eigenstates in a strong magnetic field ($|{\rm T}_m\rangle$ ($m=0,\pm 1$), as defined by Eq. , with the $z$-axis being the direction of a strong applied magnetic field). For example, the Triplet Mechanism of Chemically Induced Dynamic Electron Polarization can result in large polarizations for radical pairs produced by triplet states formed by anisotropic intersystem crossing [@atki-mp-27-1633; @hore-cpl-69-563]. Another example, which also requires the electron spins to be quantized by strong electron Zeeman interactions, is seen in the EPR spectra of spin-correlated radical pairs with non-zero electron-electron exchange and/or dipolar interactions [@buck-cpl-135-307; @hore-cpl-137-495]. 
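The Liouville-space route of Eqs (appdx14)–(appdx15), required whenever $k_{\rm S}\ne k_{\rm T}$, can be implemented with a few Kronecker products: with row-major vectorization, $\text{vec}(\hat{A}\hat{X}\hat{B})=(\hat{A}\otimes\hat{B}^{\sf T})\,\text{vec}(\hat{X})$, so the superoperator takes exactly the form written in the Appendix. The sketch below is our own implementation for the minimal model (illustrative units of $|a|$); as a sanity check, for $k_{\rm S}=k_{\rm T}$ and $\hat{\rho}_0=\frac{1}{4}\hat{\openone}$ it returns the orientation-independent value $\Phi_{\rm S}=\frac{1}{4}$, consistent with the vanishing anisotropy at equal rates:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5]).astype(complex)
I2 = np.eye(2)

def op(m, site):
    """Embed a one-spin operator at `site` of the (e1, e2, nucleus) space."""
    mats = [I2, I2, I2]
    mats[site] = m
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def yield_liouville(a, alpha, omega, theta, kS, kT, rho0=None):
    """Singlet yield via Phi_S = kS <<P_S| L^{-1} |rho(0)>>."""
    A = [a * (1 + alpha), a * (1 + alpha), a * (1 - 2 * alpha)]
    H = sum(A[q] * op(s, 0) @ op(s, 2) for q, s in enumerate((sx, sy, sz)))
    for j in (0, 1):
        H = H + omega * (np.cos(theta) * op(sz, j) + np.sin(theta) * op(sx, j))
    I8 = np.eye(8)
    PS = 0.25 * I8 - sum(op(s, 0) @ op(s, 1) for s in (sx, sy, sz))
    PT = I8 - PS
    if rho0 is None:
        rho0 = PS / 2.0                  # singlet pair, unpolarized nucleus
    K = np.kron                          # row-major: vec(A X B) = (A kron B^T) vec(X)
    L = (1j * (K(H, I8) - K(I8, H.T))
         + 0.5 * kS * (K(PS, I8) + K(I8, PS.T))
         + 0.5 * kT * (K(PT, I8) + K(I8, PT.T)))
    return (kS * PS.conj().flatten() @ np.linalg.solve(L, rho0.flatten())).real
```

Solving the $64\times 64$ linear system is cheap here; for the multinuclear radical pairs discussed above the same construction applies, with the Liouville-space dimension growing as the square of the Hilbert-space dimension.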
--------- ---------- --------------- ----------- ----------- -----------
Nucleus   $a$ / mT   $T_{jj}$ / mT
N5        $0.393$    $-0.498$        $0.4380$    $0.8655$    $-0.2432$
                     $-0.492$        $0.8981$    $-0.4097$   $0.1595$
                     $0.990$         $-0.0384$   $0.2883$    $0.9568$
H5        $-0.769$   $-0.616$        $0.9819$    $0.1883$    $-0.0203$
                     $-0.168$        $-0.0348$   $0.2850$    $0.9579$
                     $0.784$         $-0.1861$   $0.9398$    $-0.2864$
--------- ---------- --------------- ----------- ----------- -----------

: \[tab:fadhf\] HF data for N5 and H5 in FADH$^\bullet$. Each principal value $T_{jj}$ is followed by the components of the corresponding principal axis in the molecular frame.

![image](FADrad){width="1.5in"} #### Intersystem crossing. The implications of anisotropic intersystem crossing for the behaviour and properties of radical pairs are discussed in detail by Steiner and Ulrich [@stei-cr-89-51] (pp. 109–112). In most cases, as here, intersystem crossing is assumed to be independent of HF-coupled nuclear spins. However, Kothe *et al.* [@koth-jpcb-114-14755], in an elegant study of quantum oscillations in an organic triplet state, have shown that the nuclei are in fact involved and that the appropriate molecular-frame triplet basis states are eigenstates of the combined zero-field and hyperfine Hamiltonians. We do not consider this possibility here. #### FADH$^\bullet$ radical. The HF interactions for FADH$^\bullet$ used to calculate the reaction yield anisotropies shown in Fig. \[fig:rp11\] are based on the data in Table \[tab:fadhf\] [@webe-jacs-123-3790].
--- abstract: 'We use a combination of numerical density matrix renormalization group (DMRG) calculations and several analytical approaches to comprehensively study a simplified model for a spatially anisotropic spin-1/2 triangular lattice Heisenberg antiferromagnet: the three-leg triangular spin tube (TST). The model is described by three Heisenberg chains, with exchange constant $J$, coupled antiferromagnetically with exchange constant $J''$ along the diagonals of the ladder system, with periodic boundary conditions in the shorter direction. Here we determine the full phase diagram of this model as a function of both spatial anisotropy (between the isotropic and decoupled chain limits) and magnetic field. We find a rich phase diagram, which is remarkably dominated by quantum states – the phase corresponding to the classical ground state appears only in an exceedingly small region. Among the dominant phases generated by quantum effects are commensurate and incommensurate coplanar quasi-ordered states, which appear in the vicinity of the isotropic region for most fields, and in the high field region for most anisotropies. The coplanar states, while not classical ground states, can at least be understood semiclassically. Even more strikingly, the largest region of phase space is occupied by a spin density wave phase, which has incommensurate collinear correlations along the field. This phase has no semiclassical analog, and may be ascribed to enhanced one-dimensional fluctuations due to frustration. Cutting across the phase diagram is a magnetization plateau, with a gap to all excitations and “up up down” spin order, with a quantized magnetization equal to 1/3 of the saturation value. In the TST, this plateau extends almost but not quite to the decoupled chains limit. Most of the above features are expected to carry over to the two dimensional system, which we also discuss. 
At low field, a dimerized phase appears, which is particular to the one dimensional nature of the TST, and which can be understood from quantum Berry phase arguments.' author: - Ru Chen - Hyejin Ju - 'Hong-Chen Jiang' - 'Oleg A. Starykh' - Leon Balents title: 'Ground states of spin-$\frac{1}{2}$ triangular antiferromagnets in a magnetic field' --- Introduction {#sec:intro} ============ The nearest-neighbor spin-1/2 Heisenberg antiferromagnet on the triangular lattice is an archetypal model of frustrated quantum magnetism. While the isotropic model in zero field is rather well-understood and is known to order into a coplanar “120$^\circ$” state[@PhysRevLett.69.2590], away from this limit the situation is less clear. Two deformations of the Hamiltonian are of particular physical and experimental importance: the application of an external magnetic field and the introduction of spatial anisotropy into the exchange interactions. The spatial anisotropy is introduced by decomposing the lattice into chains with bonds of strength $J$, arranged into a parallel array, with inter-chain interactions of strength $J'$ (see Fig. \[fig:lattice\]). Here, we define $R\equiv 1-J'/J$ as the degree of anisotropy, and $h$ measures the applied magnetic field. There have been many extensive studies that consider these effects separately. However, a two-dimensional (2d) phase diagram, taking both effects together, remains to be understood. This problem is of considerable experimental interest. The application of a magnetic field is one of the few general means to tune quantum magnets [*in situ*]{}, and provides very important information on the quantum dynamics, as well as clues to the underlying spin Hamiltonian, which is often not well-known. 
Two materials whose behavior in magnetic fields has been extensively studied are Cs$_2$CuCl$_4$ and Cs$_2$CuBr$_4$, which are known to be approximately described by the spatially anisotropic version of the model, with larger anisotropy in the chloride ($R \approx 0.7$) than the bromide ($R\approx 0.3-0.5$). Both materials exhibit a rich structure of multiple phases in applied magnetic fields, for which a theoretical view of the phase diagram would be quite helpful. The solution of the ground state of a fully two dimensional frustrated quantum spin model in a two-parameter phase space is quite ambitious. Here, we consider a somewhat simpler task, by concentrating on the problem defined by the model confined to a cylinder with a circumference of three lattice spacings (i.e. making $y$ periodic with period 3), which we refer to as a [*Triangular Spin Tube*]{}, or TST (see Fig. \[fig:lattice2\]). By a combination of analytical approaches and extensive numerical simulations using the Density Matrix Renormalization Group [@DMRG] (DMRG), we reveal a rich and complex phase diagram for the TST, shown in Fig. \[fig:phase\]. We argue in the Discussion (Sec.\[sec:discussion\]) that much of this diagram translates to the fully 2d model. Whenever possible, we use a nomenclature for the ground state phases which translates directly to two dimensions, though there are, of course, differences due to the absence of spontaneously broken continuous symmetry in one dimension. Different parts of this phase diagram will be discussed in detail in the bulk of the paper, but we will highlight a few aspects here, where strong quantum features occur. First, the isotropic line, $R=0$, as a function of magnetic field has been considered many times in the two-dimensional limit. 
There, semi-classical methods[@chubukov1991quantum; @alicea2009quantum] predict the stabilization of both coplanar spin configurations by quantum fluctuations, and, most interestingly, a magnetization plateau, at which the magnetization of the system is fixed (at $T=0$) at 1/3 of the saturation magnetization over a range of magnetic fields. We will refer to this state as the “1/3 plateau" throughout this text. On the plateau, the spins order into a collinear configuration. Stabilization of such a plateau is very much a quantum effect and is one of the more striking quantum features of the TST. The presence of the plateau has been confirmed for both the one-dimensional [@okunishi03; @dagotto2007; @hikihara2010] and the two-dimensional spin-1/2 Heisenberg models by exact diagonalization [@honecker2004magnetization], coupled-cluster [@farnell2009] and variational [@tay2010variational] methods. Our DMRG study of the TST is also consistent with the semi-classical picture along the $R=0$ line. We directly confirm the two “coplanar” phases, and accurately locate the boundaries of the 1/3 plateau. Another regime of strong quantum fluctuations occurs when $R$ is close to 1, where the system is composed of weakly coupled (strictly) one dimensional (1d) chains. There, an approach based on scaling and bosonization methods is possible, following Refs. . Those techniques (explained in this context in Sec. \[sec:weak-coupled\]) predict a [*spin density wave*]{} (SDW) state over a wide range of applied fields. In this SDW state, the dominant spin correlations are those of the Ising component parallel to the field, in sharp contrast to the classical behavior. Our DMRG simulations show that the SDW state dominates a remarkably broad region of the phase diagram, extending far beyond the decoupled line, $R=1$. In two dimensions, the quasi-1D approach of Refs.  
shows the existence of a (very narrow) 1/3 plateau arising out of the SDW phase, leading to the speculation that the plateau persists for all $R$ in two dimensions. In the TST, we find that the plateau is also very robust, and persists almost, but not quite, to the 1D limit. The suppression relative to two dimensions can be understood as a result of enhanced fluctuations due to the one-dimensionality of the TST. To check this, we have also carried out some DMRG studies of wider cylinders consisting of 6 and 9 sites in the periodic direction. Our results appear consistent with the existence of a plateau for all $R$ in two dimensions. The last quantum regime we discuss here is clearly specific to the periodic boundary conditions imposed around the TST. This occurs at zero field, where for all values of $R$, we observe a spontaneously dimerized ground state. The dimerization is most clearly observed in the entanglement entropy, which shows a pronounced oscillatory behavior along the chain. We argue that this can be understood as an effect of one-dimensional quantum fluctuations upon an underlying short-range spiral magnetically ordered state, somewhat similar to the formation of a Haldane gap in integer spin chains with collinear classical states. The elementary excitations of the dimerized state are solitons, and we show how the behavior at small magnetization can be understood in terms of a dilute system of such solitons. The remainder of the paper is organized as follows. In Sec. \[sec:DMRG\] we introduce the model and then describe key technical aspects of our DMRG simulations, including the procedure to determine the phase boundaries using the second derivative of the ground state energy and entanglement entropy, and careful finite size scaling. In Sec. \[sec:iso\], we review and compare the semi-classical predictions to the DMRG results in the isotropic limit. Next, we discuss the high field region in Sec. \[sec:high field\]. 
In the vicinity of the saturation field, the problem can be modeled as a dilute system of spin-flip bosons. We compare an analysis of this limit, built upon an analytic solution of the Bethe-Salpeter equation, to the DMRG, and find a transition between coplanar and cone phases, and a commensurate-incommensurate transition. In Sec. \[sec:weak-coupled\], we study the regime of weakly coupled chains, and in particular discuss the spin density wave (SDW) state and show that the 1/3 plateau terminates in a Kosterlitz-Thouless transition around $R\sim 0.7\pm0.1$ for the TST. We consider the low field region in Sec. \[sec:lowfield\], showing the persistent dimerization, the evidence for solitons at small magnetization, and the commensurate to incommensurate transition near $R=0$. DMRG numerical results will be presented throughout these sections, highlighting the important features used to identify each phase. Physical quantities, such as the entanglement entropy, vector chirality, and spin density profile, will be shown for representative large system sizes. Finally, we conclude in Sec. \[sec:discussion\] with a summary and discuss some generalizations of our results to larger spin and two-dimensional systems. Model and DMRG method {#sec:DMRG} ===================== Hamiltonian and notation {#sec:hamiltonian-notation} ------------------------ The explicit Hamiltonian studied in this paper is written as $$\begin{aligned} \label{eq:hami} H & = & \sum_{x,y} \left[ J \, \mathbf{S}_{x,y} \cdot \mathbf{S}_{x+1,y} + J' \, \mathbf{S}_{x,y} \cdot \left( \mathbf{S}_{x,y+1}+ \mathbf{S}_{x-1,y+1} \right) \right] \nonumber\\ && - h \sum_{x,y} S_{x,y}^z,\end{aligned}$$ where $x$ is the direction along the chains, $y$ is perpendicular to it, and $h$ is the magnetic field. Importantly, we choose coordinates, as shown in Fig. \[fig:lattice\]b, where the triangular lattice is “sheared” to embed it in a square one. 
This is convenient for the application of periodic boundary conditions in the TST. Many previous works on the anisotropic triangular lattice in two dimensions, including those by some of the authors[@starykh2010extreme; @schnyder2008spatially], use instead “cartesian” coordinates, as shown in Fig. \[fig:lattice\]a. Both for convenience in certain calculations (especially in the quasi-one-dimensional limit), and to clarify the connection to this prior work, we give the relation between the sheared and cartesian coordinates here. In cartesian coordinates, we take the distance between sites along the chains and the (normal) distance between chains to be unity. Defining the cartesian coordinates as ${\sf x}, {\sf y}$, and ${\bf\sf r}=({\sf x}, {\sf y})$, then $$\label{eq:9} {\sf x} = x + y/2, \qquad {\sf y} = y.$$ From this, we may also obtain the relationship between wavevectors in the two coordinate frames. We require ${\bf q}\cdot {\bf r} = {\bf\sf q}\cdot {\bf\sf r}$, which implies $$\label{eq:10sf} q_x = {\sf q}_x, \qquad q_y =\tfrac{1}{2} {\sf q}_x + {\sf q}_y.$$ DMRG {#sec:dmrg} ---- Throughout this paper, we rely extensively on DMRG simulations. For the present study, we kept up to $m=3072$ states in the DMRG block, performing more than 24 sweeps to obtain fully converged results. In doing so, we find that our truncation error is of the order $10^{-7}$. We also take advantage of the cylindrical boundary condition to study large systems and to reduce finite-size effects for a more reliable extrapolation to the thermodynamic limit. In particular, in the regions above the 1/3 plateau, we find that observables have much better convergence, with a truncation error of the order $10^{-9}$. Even in the regions below the 1/3 plateau not close to the dimerized phase, we find reasonable convergence, with a slightly larger truncation error on the order of $10^{-7}$. 
However, when we approach the dimerized phase near zero magnetization, finite-size effects dominate: system sizes up to $N=180 \times 3$ do not provide a reliable extrapolation to the thermodynamic limit. The phase boundaries in Fig. \[fig:phase\] were determined from the simulations. We describe the methodology for doing so here, leaving the characterization of the phases themselves for subsequent sections. For the case of continuous transitions, it is common to calculate the second derivative of the ground state energy, $\frac{\partial^2 E_0}{\partial R^2}$. The calculation follows the standard procedure of using three data points at $R+dR$, $R$, and $R-dR$, according to the formula $\partial^2E_0/\partial R^2=[E_0(R+dR)+E_0(R-dR)-2E_0(R)]/dR^2$. The derivative diverges when the infinite-size system undergoes a transition. For finite systems, however, one observes a finite peak that grows with system size. We then determine the phase boundaries numerically by locating the peak position as a function of the tuning parameter $R$. For example, as shown in Fig. \[fig:dE2\](a), sharp peaks are located at $R=0.6$. We observe that the peak value increases significantly with system size for all sizes studied. We have not attempted to carry out detailed finite size scaling analyses of the peaks, as our focus here is on the phases, not the critical behavior at the transitions between them. This transition corresponds to the upper dashed line in Fig. \[fig:phase\], where there is a transition between an incommensurate planar and a cone phase. We use similar procedures to determine phase boundaries at other magnetizations, e.g. $M/M_s = 1/2, 1/6$ in Figs. \[fig:dE2\](b,c) correspond to the middle and lower dashed lines in Fig. \[fig:phase\], respectively. In addition to these divergent peaks, there are some other features (which are [*[not]{}*]{} phase transitions) due to finite size effects. For example, in Fig. 
\[fig:dE2\](a) for $M/M_s = 5/6$, a broad peak near $R = 0.8$ actually decreases (and eventually goes to zero) in the thermodynamic limit. Therefore, we can confidently say that the cone phase dominates in the region $R > 0.6$, and that there is no transition at $R = 0.8$. Similarly, for Figs. \[fig:dE2\](b,c), the fluctuations in the plots near $R \approx 0.7, 0.45$, respectively, are finite size effects and vanish in the thermodynamic limit. Finally, we use the structure factor $$\label{eq:strfac} S^{\mu \mu}(q)=\frac{1}{N}\sum_{{\bf r},{\bf r}'} e^{-i {\bf q} \cdot ({\bf r}-{\bf r}')} \langle S_{\bf r}^{\mu} S_{{\bf r}'}^{\mu} \rangle$$ to determine the boundaries between the commensurate and incommensurate phases. For example, for small $R$, the transverse and longitudinal components of the structure factor peak at commensurate momenta ${\bf Q}=(4\pi/3,2\pi/3)$ and $(2\pi/3,4\pi/3)$, respectively. This defines the “C planar” regions in Fig. \[fig:phase\]. Semi-classical behavior in the isotropic case {#sec:iso} ============================================= Two-dimensional model --------------------- The isotropic model, $J'=J$, has been extensively studied in two dimensions, and it is believed that a semi-classical description, with weak quantum fluctuations included via spin wave theory, is qualitatively correct in this case[@chubukov1991quantum]. We find that the semi-classical analysis largely carries over to the TST, with small modifications to allow for one-dimensional fluctuations. Therefore we review the established semi-classical results first. In the classical limit, where spins are described as O(3) vectors, the isotropic problem is known to display an “accidental” degeneracy in a non-zero applied magnetic field [@kawamura1985]. 
This can be seen from the fact that this model can be rewritten as $$H = \frac{J}{2} \sum_{\bigtriangleup} \left( \mathbf{S}_{\bigtriangleup} - \frac{h}{3J}\hat{\mathbf{z}} \right)^2,$$ where $ \mathbf{S}_{\bigtriangleup} = \mathbf{S}_1 + \mathbf{S}_2 + \mathbf{S}_3$ is the sum of the spins on a triangle, and the sum runs over all up-pointing (equivalently, all down-pointing) triangles of the lattice, so that each bond is counted once. The ground state configuration is given by the constraint $$\mathbf{S}_{\bigtriangleup} - \frac{h}{3J} \hat{\mathbf{z}}= 0.$$ At zero magnetization, this constraint is solved by placing all spins in a plane, with the three spins in each triangle at $120^\circ$ angles to one another in a three sublattice structure. A specific ground state is specified by three angles, e.g. two determining the plane of the spins and one determining the angle within the plane. All such states are related by O(3) spin symmetry; so this is a symmetry-demanded degeneracy. A previous DMRG study[@HCJiang2009] on the 2d model also confirms the three sublattice structure. In a non-zero field, the ground states retain a three-sublattice structure, with three arbitrary angles remaining to determine the specific ground state. However, the presence of the field reduces the O(3) symmetry to O(2) (or U(1)), and only one of these angular degrees of freedom is symmetry demanded. The remaining two angular degrees of freedom constitute an [*accidental*]{} degeneracy. Two simple states within the degenerate manifold are the coplanar and umbrella ones, shown in Fig. \[fig:comm-planar\]. As first shown by Chubukov and Golosov [@chubukov1991quantum], this accidental degeneracy is lifted by quantum fluctuations. They showed by a $1/S$ spin wave expansion that the degeneracy is lifted in favor of the coplanar states. Additionally, they demonstrated the existence of the 1/3 plateau, in which the spins adopt a 3 sublattice “up up down” structure. 
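Returning to the triangle rewriting above: for classical spins it is a simple algebraic identity, which can be checked numerically. The sketch below (a minimal illustrative check, not code from the paper) assumes the sum runs over one up-pointing triangle per site of a small periodic isotropic lattice, so that each bond is counted once, and verifies that the triangle form and the Heisenberg-plus-Zeeman form differ only by a configuration-independent constant.

```python
import numpy as np

# Classical check (illustrative): (J/2) sum_tri (S_tri - h/(3J) z)^2, with one
# up-pointing triangle per site, equals the bond form up to a constant.
rng = np.random.default_rng(0)
L, J, h = 6, 1.0, 0.7
zhat = np.array([0.0, 0.0, 1.0])

def random_config():
    """Random classical unit spins on an L x L periodic (sheared) lattice."""
    S = rng.normal(size=(L, L, 3))
    return S / np.linalg.norm(S, axis=2, keepdims=True)

def triangle_form(S):
    E = 0.0
    for x in range(L):
        for y in range(L):
            # Up triangle based at (x, y): sites (x,y), (x+1,y), (x,y+1).
            tri = S[x, y] + S[(x + 1) % L, y] + S[x, (y + 1) % L]
            d = tri - (h / (3 * J)) * zhat
            E += 0.5 * J * (d @ d)
    return E

def bond_form(S):
    E = 0.0
    for x in range(L):
        for y in range(L):
            s0 = S[x, y]
            # Each bond (chain, vertical, diagonal) counted exactly once.
            E += J * s0 @ (S[(x + 1) % L, y] + S[x, (y + 1) % L]
                           + S[(x - 1) % L, (y + 1) % L])
            E -= h * s0[2]
    return E

S1, S2 = random_config(), random_config()
diff1 = triangle_form(S1) - bond_form(S1)
diff2 = triangle_form(S2) - bond_form(S2)
# Both differences equal the constant N * (3J/2 + h^2/(18J)), independent
# of the spin configuration.
```

Expanding the square term by term reproduces the bond couplings ($J$ per bond), the Zeeman term ($-h\sum S^z$, since each site belongs to three up triangles), and the constant $N(3J/2 + h^2/18J)$ for unit spins.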
Away from the plateau, the coplanar state retains a 3 sublattice structure with ordering wavevector ${\bf\sf Q} = (4\pi/3,0)$, or ${\bf Q} = (4\pi/3, 2\pi/3)$ [@chubukov1991quantum; @alicea2009quantum; @griset2011deformed]. Below the plateau, the 3 spins form a “Y” with one spin antiparallel to the field and two spins with equal positive projection onto the field but at opposite angles from each other. This can be viewed as a deformation of the $120^\circ$ state with spins in a plane containing the magnetic field. Here the spin configurations can be parametrized by $$\begin{aligned} \label{eq:comm-planar1} \langle S_{\mathbf{r}}^+ \rangle & = & a \, e^{i \theta} \sin \left( {\bf Q} \cdot {\bf r} \right) \nonumber \\ \langle S_{\mathbf{r}}^z \rangle & = & b - c \cos^2 \left( {\bf Q} \cdot {\bf r} \right),\end{aligned}$$ where $\theta$ is an arbitrary angle specifying the plane of the spins, while $a,b,c$ are constants dependent upon the field magnitude. Since $ {\bf Q} \cdot {\bf r} = 2\pi (2x+y)/3$, we see from Eq.  that when $2x+y$ is a multiple of 3, one of the spins is antiparallel to the magnetic field. Above the plateau one finds instead a “V” configuration, with two spins identical and the third chosen to give zero moment normal to $z$. In this case, we have $$\begin{aligned} \label{eq:comm-planar2} \langle S_{\mathbf{r}}^+ \rangle & = & a \, e^{i \theta} \cos \left( {\bf Q} \cdot {\bf r} \right) \nonumber \\ \langle S_{\mathbf{r}}^z \rangle & = & b - c \cos^2 \left( {\bf Q} \cdot {\bf r} \right).\end{aligned}$$ Note that the cosine in the first line of Eq.  never vanishes on lattice sites, so that spins are never parallel to the field in the V state. One dimension {#sec:one-dimension} ------------- We will see that the semi-classical results summarized in the previous subsection for the two-dimensional case remain qualitatively correct, at least at short distances, in the TST. 
However, we must still account for the effects of quantum fluctuations on long length scales, since the one dimensional system [*cannot*]{} break the U(1) spin-rotational symmetry about the field axis. Since the U(1) symmetry is unbroken in the plateau state, there are no essential effects of one-dimensional fluctuations there. However, they have qualitative effects in the Y and V phases, since $\langle S_r^+ \rangle=0$ there, in contrast to Eqs. (\[eq:comm-planar1\],\[eq:comm-planar2\]). Note that the modulation of $\langle S^z_r \rangle$ is perfectly consistent with one-dimensionality, and is expected to persist directly without qualitative modifications. To incorporate one-dimensional fluctuations, we regard the semiclassical results in Eqs. (\[eq:comm-planar1\],\[eq:comm-planar2\]) as defining the local spin ordering, with a [*fluctuating*]{} quantum phase $\theta(x,\tau)$ ($\tau$ is imaginary time), that is, we make the replacement $$\label{eq:1} S_r^+(\tau) \rightarrow a \, e^{i \theta(x,\tau)} \sin \left( {\bf Q} \cdot {\bf r} \right),$$ in the Y phase, and $$\label{eq:2} S_r^+(\tau) \rightarrow a \, e^{i \theta(x,\tau)} \cos \left( {\bf Q} \cdot {\bf r} \right),$$ in the V phase. Note that these formulae are [*not*]{} invariant under translations, reflecting the three-sublattice structure of the coplanar phases. This can also be seen from the oscillations in the $\langle S^z_r\rangle$ expectation values. Even when one-dimensional fluctuations are taken into account, translational symmetry is broken. This is still consistent with the Mermin-Wagner theorem, since the broken translational symmetry is discrete. Translating by one or two lattice spacings, one obtains two other symmetry related but distinct ground states. In both the Y and V phases, the field $\theta(x,\tau)$, representing the “would-be" Goldstone mode of the spontaneously broken U(1) symmetry, is governed by the usual massless free relativistic boson action, $$\label{eq:3} S_\theta = \int \! 
dx d\tau \left\{ \frac{v K}{2} (\partial_x \theta)^2 + \frac{K}{2v} (\partial_\tau \theta)^2\right\}.$$ Comparison to DMRG ------------------ We now turn to a comparison of the semi-classical predictions, corrected as in the previous subsection for one-dimensional fluctuations, to the DMRG. ### Entanglement entropy The simplest comparison arises immediately from Eq. : the low energy physics is that of a single massless scalar field, which is a conformal field theory with central charge $c=1$. This central charge can be directly measured using the entanglement entropy. According to conformal field theory[@Cardy], in a one dimensional critical system with open boundary conditions and total length $L$, the von Neumann entanglement entropy associated with a region of length $x$ and its complement of length $L-x$ is given by $$\label{eq:ent} S(x,L) = \frac{c}{6} \ln\left[ \frac{L}{\pi} \sin\left( \frac{\pi x}{L} \right) \right].$$ By plotting the entropy $S(x,L)$ versus the reduced coordinate $x'=\ln[\frac{L}{\pi} \sin\left( \frac{\pi x}{L} \right)]$, we can directly extract $c$ from the numerics. As shown in Fig. \[fig:EE\_isotropic\], we can indeed obtain $c=1$ with high accuracy for both the $\rm Y$ and $\rm V$ phases. For example, the obtained central charge is $c=0.98$ at $M/M_s=1/6$ in the $\rm Y$ phase below the plateau, and $c=0.97$ at $M/M_s=1/2$ in the $\rm V$ phase above the plateau. Both are consistent with the theoretical prediction. ### $S^z$ profile {#sec:s_z-profile} The modulation of $\langle S^z_{\mathbf{r}}\rangle$ predicted by the semi-classical theory in Eqs. (\[eq:comm-planar1\],\[eq:comm-planar2\]) can be directly compared to the DMRG results. This is shown in Figs. \[fig:Sz-iso\],\[fig:Sz-iso2\]. Note that a particular symmetry broken state is chosen in the simulations, presumably due to pinning by the boundaries, which explicitly break translational symmetry. The origin of the coordinate ${\bf r}$ in Eqs. 
(\[eq:comm-planar1\],\[eq:comm-planar2\]) must be appropriately chosen to match the chosen ground state. ### $S^\pm$ correlations {#sec:spm-correlations} Due to quantum fluctuations of the phase $\theta$, the single spin expectation value $\langle S_r^+\rangle=0$. Therefore, we must instead turn to correlation functions to detect the Y and V structure of the local ordering. Using Eq. , we obtain $$\label{eq:4} \langle S_{\mathbf{r}}^+ S_{{\mathbf{r}}'}^- \rangle \sim a^2 \sin \left( {\bf Q} \cdot {\bf r} \right) \sin \left( {\bf Q} \cdot {\bf r}' \right) \left\langle e^{i(\theta(x) - \theta(x'))}\right\rangle,$$ in the Y phase below the 1/3 plateau. A similar formula, with the sines replaced by cosines, describes the correlation function of the V phase above the plateau. The correlation function is evaluated with respect to Eq. , where a finite-size form, first derived in Ref. , is as follows $$\begin{aligned} \label{eq:ruadd1} \langle e^{i(\theta(x) - \theta(x'))}\rangle&=&C_\eta(x,x'),\end{aligned}$$ where $$\begin{aligned} \label{eq:ruadd1a} C_\eta(x,x')&=&a_0^\eta\frac{[f(2x)f(2x')]^{\eta/2}}{[f(x-x')f(x+x')]^{\eta}} ,\\ f(x)&=& \left[ \frac{2(L+1)}{\pi}\sin\left(\frac{\pi|x|}{2(L+1)}\right)\right].\nonumber\end{aligned}$$ Here $a_0$ is a cut-off dependent factor, which we can take to unity, absorbing the dependence in $a$ in Eq. . The function, $f(x)$, originates from a quantum average over the normal modes of the bosonic field $\theta$. One is now able to fit the DMRG measurement of the transverse spin-spin correlation function to Eqs. (\[eq:4\],\[eq:ruadd1\]) to obtain the ordering wave vector and the additional fit parameter, $\eta$. A comparison is plotted in Fig. \[fig:XY\_isotropic\], where we show the correlation function along each chain (i.e., $y=1,2,3$) for $R=0$ and $M/M_s = 1/6, 1/2$. The fitting in Fig. 
\[fig:XY\_isotropic\]a yields a commensurate wave vector ${\bf Q}=(4\pi/3,2\pi/3)$ and $\eta=0.65$ for $M/M_s=1/6$, which corresponds to the Y phase below the plateau. Above the plateau, in the $V$ phase shown in Fig. \[fig:XY\_isotropic\]b, the ordering wave vector still shows commensurability, ${\bf Q}=(4\pi/3,2\pi/3)$ with $\eta=0.43$. One can show that in the thermodynamic limit, the correlation function in Eq.  reduces to a simple power-law relation $\propto |x-x'|^{-\eta}$, which is reflected by our data for distances $|x-x'|\ll L/2$. Behavior for small non-zero $R$ {#sec:behavior-small-non} ------------------------------- If we perturb slightly away from the isotropic limit, i.e. $0<R\ll 1$, we expect the semi-classical picture to still hold. This has been analyzed in Refs. . Classically, the minimum energy spin configuration changes immediately when $R>0$ from a commensurate state to an incommensurate one, with an ordering wavevector ${\bf\sf Q} \neq (4\pi/3,0)$ or ${\bf Q} \neq (4\pi/3, 2\pi/3)$. However, we expect that quantum fluctuations will stabilize the commensurate state for a range of anisotropies for a generic value of the magnetic field. The reason is that coplanar phases break discrete translational symmetries of the lattice. Since there are three equivalent ground states connected by translations, the symmetry breaking can be described by a $\mathbb{Z}_3$ order parameter. Specifically, the combination $$\label{eq:5} \zeta_r = S_r^z e^{2\pi i (x+2y)/3},$$ defines a $\mathbb{Z}_3$ order parameter with $\langle \zeta_r\rangle = |\zeta| e^{i\vartheta}$ and $\vartheta=0,2\pi/3, 4\pi/3$ in the three distinct $\mathbb{Z}_3$ domains. To restore this discrete symmetry, a phase transition is required. More specifically, there are topological excitations of the coplanar state which are domain walls, also called solitons, connecting different symmetry broken states. 
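As a concrete illustration of how the $\mathbb{Z}_3$ order parameter $\zeta_r$ distinguishes the two regimes, the sketch below (using synthetic $\langle S^z_{\bf r}\rangle$ profiles with hypothetical amplitudes, standing in for DMRG data) shows that the lattice average of $\zeta_r$ is finite for a commensurate three-sublattice modulation and strongly suppressed for an incommensurate one.

```python
import numpy as np

# Hypothetical illustration: average of the Z_3 order parameter
# zeta_r = S^z_r exp[2*pi*i*(x + 2y)/3] over synthetic <S^z> profiles.
Lx, Ly = 12, 3
X, Y = np.meshgrid(np.arange(Lx), np.arange(Ly), indexing="ij")

def zeta_avg(Sz):
    return np.mean(Sz * np.exp(2j * np.pi * (X + 2 * Y) / 3))

# Commensurate three-sublattice profile, modulated at Q.r = 2*pi*(2x+y)/3
# (amplitudes chosen to mimic an up-up-down pattern; values illustrative).
Sz_comm = 1 / 6 - (2 / 3) * np.cos(2 * np.pi * (2 * X + Y) / 3)

# Incommensurate profile: chain modulation shifted away from 4*pi/3.
Sz_inc = 1 / 6 - (2 / 3) * np.cos(0.58 * 2 * np.pi * X + 2 * np.pi * Y / 3)

zc, zi = zeta_avg(Sz_comm), zeta_avg(Sz_inc)
# |zc| is finite, while |zi| is suppressed toward zero.
```

Translating the commensurate pattern by one or two lattice spacings rotates the phase of $\langle\zeta_r\rangle$ by $\pm 2\pi/3$, realizing the three $\mathbb{Z}_3$ domains.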
There is a non-zero energy gap to create a domain wall in any phase with long-range $\mathbb{Z}_3$ order. For the $\mathbb{Z}_3$ order to be destroyed, solitons must proliferate in the ground state. Small changes of parameters, such as $R$, cannot immediately bring the domain-wall gap to zero, which implies stability of the phase over a range of $R$ values. This holds at least away from the exceptional points $h=0$ (where the symmetry breaking becomes continuous) and $h=h_{\rm sat}$ (where the symmetry breaking vanishes). We will discuss the vicinity of these exceptional points in subsequent sections. In general, with increasing anisotropy, $R$, we will encounter a phase transition to an incommensurate phase, which corresponds to the proliferation of solitons and a vanishing of their gap. Beyond that point, $\langle \zeta_r\rangle$ becomes zero, and $S^z$ correlations peak at a wavevector other than $\mathbf{Q}=(4\pi/3,2\pi/3)$. This transition is discussed in Sec. \[subsec:incomm-comm\]. A useful test for this phase is the measurement of the central charge via the entanglement entropy. In the commensurate regions, even for $R>0$, we expect $c=1$, while incommensurate phases may have $c>1$. We observe this effect in Fig. \[fig:EE\_isotropic\], which shows $c=1$ in the commensurate state, whereas Fig. \[fig:cone-planar\] shows $c=2$ in the incommensurate state. In addition, we can check for commensurability using structure factor measurements, as discussed in Sec. \[sec:dmrg\]. Phenomenological analysis at low field {#sec:CI} -------------------------------------- We now address the region slightly away from $R=0$ and at low applied magnetic field. We begin the discussion from a 2d point of view, though it largely applies to the TST as well. 
Commensurate coplanar spin order is described by the order parameter ${\bf d} = {\bf n}_1 - i {\bf n}_2$, where ${\bf n}_1$, ${\bf n}_2$ are mutually orthogonal vectors with identical norm spanning the plane of the spin order. Then, a spin at coordinate ${\mathbf{r}}$ can be written as $$\label{eq:CI1} {\bf S}_{\mathbf{r}}= M \hat{\bf z} + {\rm Re}({\bf d} e^{ i {\bf Q}\cdot{\bf r}}) = M \hat{\bf z} + {\bf n}_1 \cos[{\bf Q}\cdot{\bf r} ] + {\bf n}_2 \sin[{\bf Q}\cdot{\bf r}].$$ Lattice translations transform ${\bf d} \to {\bf d} e^{-i 2\pi/3}$, while lattice inversion, ${\bf r} \to -{\bf r}$, results in complex conjugation, ${\bf d} \to {\bf d}^*$. The effective Ginzburg-Landau Hamiltonian describing the coplanar state should remain invariant under these operations (see Ref.  for a closely related discussion). Then, $$\begin{aligned} H_{\rm comm} &=& - r {\bf d}^* \cdot {\bf d} + a_0 |\partial_x {\bf d}|^2 + a_1 ({\bf d}^* \cdot {\bf d})^2 + a_2 |{\bf d} \cdot {\bf d}|^2 \nonumber\\ && + \chi_1 h^2 {\bf d}^* \cdot {\bf d} + \chi_2 |{\bf h} \cdot {\bf d}|^2 \nonumber\\ &&+ \frac{1}{2}\chi_3 [ ({\bf h} \cdot {\bf d})^3 + ({\bf h} \cdot {\bf d}^*)^3]. \label{eq:Hcomm}\end{aligned}$$ Here, at mean-field level, $r>0$ is required to obtain non-zero ${\bf n}_{1,2}$, and $a_{0,1} >0$, for stability in the ordered phase. Furthermore, $a_2 > 0$ energetically imposes the orthogonality condition ${\bf n}_1 \cdot {\bf n}_2 =0$ in zero field. Favoring coplanar (rather than umbrella) spin structures in a finite magnetic field requires $\chi_2 < 0$. We may expect that $\chi_2$ is a function of the anisotropy, being negative in the isotropic limit $R=0$ and changing sign to positive values for sufficiently large $R$, where the order by disorder physics favoring coplanar states gives way to the classical energetic preference for umbrella states. Here we restrict ourselves to the small anisotropy regime, for which we expect $\chi_2$ to remain negative. 
With the preference for coplanar states set by $\chi_2<0$, for field oriented along ${\hat z}$, the preferred configurations of ${\bf d}$ may be parametrized as $$\label{eq:18} {\bf d} = |d| e^{i\tilde\theta} \left[ {\bf \hat z} + i( \cos\theta {\bf \hat x} + \sin\theta {\bf \hat y})\right],$$ where $\theta$ describes the orientation of the plane of the spins, and $\tilde\theta$ the angle of the spins within that plane. With this form for ${\bf d}$, we obtain the spin operators as $$\begin{aligned} \label{eq:73} S^z_{x,y} & \sim & M + |d| \cos ({\bf Q}\cdot {\bf r} + \tilde\theta), \nonumber \\ S^+_{x,y} & \sim & - |d| e^{-i\theta} \sin({\bf Q}\cdot {\bf r} + \tilde\theta).\end{aligned}$$ The last term in Eq.  describes the commensurate locking of the spin to the lattice by the finite magnetic field. Using Eq. , it may be rewritten as a sine-Gordon term $$\label{eq:19} H_{sg} = \chi_3 |d|^3 h^3 \cos[3 \tilde\theta].$$ The sign, $\chi_3 > 0$, is fixed by the condition that one of the three spins in a sublattice must be oriented opposite to the external field in the commensurate state. Thus, in the commensurate state, $\tilde\theta = \pi$ in . Now we move away from the isotropic line to $R>0$. Here 3-fold rotational symmetry is broken, which allows the introduction of an additional term, linear in derivatives, into the effective Hamiltonian: $$\begin{aligned} H_{\rm incomm} &=& \frac{i}{2} b_1 ({\bf d}^* \cdot \partial_x {\bf d} - {\bf d} \cdot \partial_x {\bf d}^*) \nonumber\\ && = - b_1 |d|^2 \partial_x \tilde\theta.\end{aligned}$$ Since this term must vanish at $R=0$ and be analytic, $b_1 \sim R$. This term competes with the sine-Gordon term in Eq. , with the commensurate state with constant $\tilde\theta$ favored at small $R$ and destabilized at larger $R$. 
Thus the commensurate-incommensurate transition in two dimensions can be described by a Hamiltonian of the phase $$\label{eq:H-CIC} H_{\rm C-IC} = \int d^2 {\bf r} \{ \tilde{a}_0 (\partial_x \tilde\theta)^2 - \tilde{b}_1 \partial_x \tilde\theta +\tilde{\chi}_3 h^3 \cos[3 \tilde\theta]\}.$$ Here, the coefficients with tildes, $\tilde{a}_0, \tilde{b}_1, \tilde{\chi}_3$, are rescaled by unimportant factors, such as the amplitude $|d|$. The sine-Gordon model of the form in Eq.  appears in several guises in this paper, and is analyzed in Appendix \[sec:sine-gordon-model\]. It encodes a commensurate-incommensurate transition (CIT) with increasing $\tilde{b}_1$. This transition is mean-field like for $d=2$, and we may apply the results of Appendix \[sec:dgeq-2:-mean\]. This gives a critical value $\tilde{b}_{1,{\rm cr}} \sim \sqrt{\tilde{a}_0 \tilde{\chi}_3 h^3}$ for the onset of the incommensurate state, which translates to $$\label{eq:92} h_{\rm C-IC} \sim R^{2/3},$$ since $\tilde{b}_1 \sim R$. This is roughly consistent with the shape of the boundary in the lower left corner of Fig. \[fig:phase\]. For the TST, the situation is complicated by one-dimensional fluctuations. At zero field, $h=0$, we know that, in fact, the ground state is [*not*]{} a spiral but rather a dimerized phase. Hence, we cannot directly apply the above analysis at the lowest fields. The dimerized phase is destroyed fairly rapidly by the field, and so, above some small critical field, we may expect to be able to use results of this type. Even so, we should really use results for the $d=1$ case, where a non-mean-field analysis applies, as described in Appendix \[sec:d=1:-quant-fluct\]. Using Eq. 
, the critical value $\tilde{b}_{1,{\rm cr}}$ is suppressed by a factor of $(\tilde\chi_3 h^3/\tilde{a}_0)^{\Delta_3/(4-2\Delta_3)}$, so that the net result is $\tilde{b}_{1,{\rm cr}} \sim h^{\frac{3-\Delta_3}{2-\Delta_3}}$, and hence $$\label{eq:93} h_{\rm C-IC} \sim R^{\frac{2-\Delta_3}{3-\Delta_3}}.$$ Here, $\Delta_3$ is the scaling dimension of the $\cos 3\tilde\theta$ term. Assuming the commensurate phase is at all stable for small $R$ implies $\Delta_3 <2$, so that the cosine term is relevant in the isotropic case, $R=0$. It is also bounded below by zero, so that the exponent in Eq.  varies between $0$ and $2/3$. Once again, we caution that the expression must be taken with care, since it does not in fact apply at the lowest fields. High Field Region {#sec:high field} ================= Spin flip bosons {#subsec:dilute} ---------------- In this section, we study the phase diagram near saturation, i.e. for applied fields sufficiently large that the magnetization is close to its maximum of $1/2$ per site. [*At*]{} saturation, the ground state of the model is the trivial product state with all spins aligned in the direction selected by the field. For fields above the saturation field, this is the exact ground state, and the lowest excited states consist of single magnons, in which just one spin has been flipped relative to the saturated state. These magnons are bosons with $S^z=1$, and upon reducing the field to the saturation value, the minimum energy required to create a magnon vanishes. Below the saturation field, therefore, we can expect Bose-Einstein condensation (BEC) of these magnons. In the one-dimensional TST, strict BEC is not possible due to phase fluctuations, but these fluctuations are readily taken into account and a quasi-condensate description remains appropriate. 
To formalize the magnon BEC picture, one may transform the spin model to a bosonic one[@matsubara1956lattice; @batyev1984antiferromagnet; @batyev1986; @nikuni1995hexagonal; @ueda2009magnon; @kolezhuk2012], using the equivalence of the spin $s=1/2$ Hilbert space to that of hard-core bosons: $$\begin{aligned} \label{eq:bec1} S_{\mathbf{r}}^+ & = & {\mathcal P}_{\mathbf{r}}\, b_{\mathbf{r}}\, {\mathcal P}_{\mathbf{r}}\\ S_{\mathbf{r}}^z & = & \frac{1}{2} - n_{\mathbf{r}},\end{aligned}$$ where $n_{\mathbf{r}}= b_{\mathbf{r}}^\dagger b_{\mathbf{r}}^{\vphantom\dagger}$ is the boson occupation number, and one must project onto the space of no double boson occupancy, ${\mathcal P}_{\mathbf{r}}= |n_{\mathbf{r}}=0\rangle \langle n_{\mathbf{r}}=0| + |n_{\mathbf{r}}=1\rangle\langle n_{\mathbf{r}}=1|$. Eq.  is equivalent to the Holstein-Primakoff bosonization formula, truncated to quadratic order in boson operators and taking $s=1/2$, provided the no double occupancy constraint is imposed. The generalization to $s>1/2$ will be briefly discussed later in Sec. \[sec:highS\]. It is convenient to implement the no double occupancy constraint by first relaxing the constraint, adding an on-site interaction $U$ to the Hamiltonian, and then realizing the projection by taking the $U\rightarrow \infty$ limit. In this way we can proceed simply by rewriting the Heisenberg model using Eq. , forgetting the projection operators, i.e. taking ${\mathcal P}_{\mathbf{r}}\rightarrow 1$. We thereby obtain a boson Hamiltonian with hopping terms ($J$), on-site energies ($J,h$), and on-site ($U$) and nearest-neighbor ($J,J'$) interactions. 
Fourier transforming to diagonalize the quadratic terms, we find $$\begin{aligned} \label{eq:bec2} H = &&\sum_{\mathbf{k}}\left[ \epsilon( {\mathbf{k}}) - \mu \right] b_{\mathbf{k}}^\dagger b_{\mathbf{k}}^{\vphantom\dagger} + \nonumber\\ &&\frac{1}{2N} \sum_{{\mathbf{k}},{\mathbf{k}}',\mathbf{q}} V(\mathbf{q}) b_{{\mathbf{k}}+\mathbf{q}}^\dagger b_{{\mathbf{k}}'-\mathbf{q}}^\dagger b_{{\mathbf{k}}'}^{\vphantom\dagger}b_{\mathbf{k}}^{\vphantom\dagger},\end{aligned}$$ where $$\begin{aligned} \label{eq:bec3} \epsilon({\bf k}) & = & J( {\bf k} ) - J_{\text{min}},\\ \mu & = & h_{\text{sat}} - h, \text{ where } h_{\text{sat}} = J(0) - J_{\text{min}},\\ V({\mathbf{k}}) &=& 2\left( \epsilon({\mathbf{k}}) + U \right).\end{aligned}$$ Here, $J({\mathbf{k}})$ is the Fourier transform of the exchange interaction, $\mu$ is the bosonic chemical potential, and $h_{\text{sat}}$ is the saturation field. We will use this formalism to derive an effective action for the dilute bosons, and also to locate (if any) a transition between the planar and cone phases near saturation. Effective field theory for dilute bosons {#sec:effect-field-theory} ---------------------------------------- For $h>h_{\text{sat}}$, the vacuum is an exact ground state of this Hamiltonian, i.e. $b_{\bf k} | 0 \rangle = 0$. Below the saturation field, a finite density of magnons is introduced into the system, and a BEC or quasi-BEC is expected. The phase of the system, and correspondingly the magnetic order (correlations), is determined by the structure of this condensate (or quasi-condensate). To determine this structure, we construct an effective model. The lowest energy magnon excitations in the triangular lattice occur at non-zero momenta $\pm {\bf Q}$, which minimize the dispersion[@nikuni1995hexagonal; @ueda2009magnon]. 
In our (sheared) coordinates, the dispersion relation is $$\label{eq:11} J_{\rm TST}({\bf k}) = J \cos k_x + J' [ \cos k_y + \cos (k_y-k_x)].$$ In two dimensions, we can choose arbitrary $k_x$ and $k_y$, and the minima occur at ${\bf k}=\pm {\bf Q}_{2d}$, with ${\bf Q}_{2d}=(Q_{2d},Q_{2d}/2)$, and $$\label{eq:10} Q_{2d} = 2 \arccos \left[ -\frac{J'}{2J} \right].$$ Note that in the conventional cartesian coordinates this wavevector is ${\bf\sf Q}=(Q_{2d},0)$. For the TST, we must quantize $k_y=0,2\pi/3,4\pi/3$. With this restricted choice of $k_y$, the 2d wavevector ${\bf Q}_{2d}$ cannot generally be achieved. Instead, we find that the minimum energy wavevector is ${\bf k}_{\rm TST} = \pm {\bf Q}_{\rm TST} = \pm (Q_{1d},2\pi/3)$, with $$\label{eq:12} Q_{1d} = \pi + \arctan \left( \frac{\sqrt{3}J'}{2J-J'}\right).$$ The two wavevectors coincide when $J=J'$. In a low-energy description, the modes away from these two minima may be integrated out, leaving an effective theory in terms of two “flavors” of bosons, $\psi_1$ and $\psi_2$, defined via $$\label{eq:bec4} b_{\bf k} = \psi_{1,{\bf Q+k}} + \psi_{2,{\bf -Q+k}} + \bar{b}_{\bf k}.$$ Here, $\psi_{1,{\bf q}}$ ($\psi_{2,{\bf q}}$) is defined as a boson “centered” on the minimum energy momentum ${\bf Q}$ ($-{\bf Q}$), with weight only for small $|q|<\Lambda$, where $\Lambda \ll 2\pi$ is a cut-off introduced by integrating out the modes away from the minima. The third operator $\bar{b}_{\bf k}$ represents the high energy modes which remain uncondensed, and are integrated out. In two dimensions, Fourier transforming in $q_x,q_y$ back to real space leads to slowly varying continuum fields $\psi_a({\bf r})$, where ${\bf r}$ is a two dimensional spatial coordinate. For the TST, we need to keep only the mode with minimum energy $q_y$, and so, we Fourier transform only in $q_x$, which leads to a continuum field dependent only on the position along the chain, $x$. 
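The closed-form minima quoted above can be checked numerically. The sketch below (illustrative couplings $J=1$, $J'=0.6$; any $0<J'<J$ behaves the same way) minimizes $J_{\rm TST}({\bf k})$ on a grid, both over the full 2d Brillouin zone and along the quantized TST branches $k_y\in\{0,2\pi/3,4\pi/3\}$:

```python
import numpy as np

J, Jp = 1.0, 0.6  # illustrative couplings, J' < J (not from the text)

def disp(kx, ky):
    # Fourier transform of the exchange, J_TST(k), in sheared coordinates
    return J*np.cos(kx) + Jp*(np.cos(ky) + np.cos(ky - kx))

# 2d lattice: predicted minimum at (Q_2d, Q_2d/2), Q_2d = 2 arccos(-J'/2J)
Q2d = 2*np.arccos(-Jp/(2*J))
ks = np.linspace(0.0, 2*np.pi, 1201)
KX, KY = np.meshgrid(ks, ks)
grid_min = disp(KX, KY).min()

# TST: k_y quantized to {0, 2pi/3, 4pi/3}; predicted minimum along k_y = 2pi/3
Q1d = np.pi + np.arctan(np.sqrt(3)*Jp/(2*J - Jp))
kx = np.linspace(0.0, 2*np.pi, 200001)
branch = disp(kx, 2*np.pi/3)
kx_min = kx[np.argmin(branch)]

print(Q2d, Q1d, kx_min)
```

The grid minimum along $k_y=2\pi/3$ lands on $Q_{1d}$ and lies below the $k_y=0$ branch, while the unrestricted 2d minimum matches $(Q_{2d},Q_{2d}/2)$; the two wavevectors differ unless $J=J'$, as stated above.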
In this continuum limit, the boson fields are governed by an effective action of the form $$\begin{aligned} \label{eq:6} {\mathcal S} & = & \int d^d{\bf r} d\tau \, \Bigg\{ \psi_1^\dagger ( \partial_\tau -\frac{1}{2m}\nabla^2 ) \psi_1 + \psi_2^\dagger ( \partial_\tau -\frac{1}{2m}\nabla^2 ) \psi_2 - \mu \left( \rho_1 + \rho_2 \right) + \frac{1}{2} \Gamma_1 \left( \rho_1^2 + \rho_2^2 \right) + \Gamma_2 \rho_1 \rho_2 \Bigg\},\end{aligned}$$ where $\rho_\alpha = | \psi_\alpha |^2$. We have written the action, Eq. , in a form which includes both the TST ($d=1$) and two dimensional ($d=2$) cases. We expand to fourth order in $|\psi_a|$ and to lowest order in derivatives, which is justified near saturation due to the diluteness of the magnons. The quadratic terms in Eq.  can be readily extracted from the exact single-magnon dispersion, which is given in Eq.  (in general in two dimensions the quadratic term may have an anisotropic effective mass tensor [@ueda2009magnon], which is not explicitly shown in Eq. ). The quartic interaction terms are more subtle, because though the magnons may be assumed dilute, the lattice-scale interactions in Eq.  are not weak. Therefore the parameters $\Gamma_1, \Gamma_2$ must be obtained from a more careful analysis, which we return to below. Order parameter structure {#sec:order-param-struct} ------------------------- Taking for the moment the $\Gamma_a$ as phenomenological parameters, we discuss the structure of the condensed or quasi-condensed phase. If $\mu<0$, there are no bosons in the system, and the vacuum is the ground state. When $\mu>0$, a finite density of bosons is present. Depending upon their interactions, different phases may result [@nikuni1995hexagonal]. To discuss the nature of these phases, a mean field analysis of Eq.  is sufficient. We comment on the modifications to the mean field results at the end of this subsection. In mean field theory, we simply minimize ${\mathcal S}$ in Eq.  
for constant values of $\psi_\alpha$. When $\mu>0$ and $\Gamma_1 < \Gamma_2$, then $\rho_1 \neq 0, \rho_2 = 0$ or vice versa, which means that the magnons condense at one of the two minima: a single-Q condensate. Here, in minimizing the energy, one finds that $\rho_1 = \langle \rho_1\rangle = \mu/\Gamma_1$ and $E/N = -\mu^2/(2\Gamma_1)$. By taking $\psi_{1,2} = \sqrt{\rho_{1,2}} e^{i\theta_{1,2}}$, one can write the spin operator as follows $$\begin{aligned} \label{eq:bec6} S_{\mathbf{r}}^+ & = & \overline{\psi} \,e^{i({\bf Q}\cdot{\bf r}+\theta_1)}\\ S_{\mathbf{r}}^z & = & \frac{1}{2} - \langle \rho_1\rangle,\end{aligned}$$ where $\overline{\psi} = \sqrt{ \langle \rho_1\rangle}$ in mean field theory. We see that the $z$-component of the spins is non-zero but constant in space, while the $xy$ components rotate as one moves in space. Such a configuration is called a cone or umbrella phase, because the spins trace out a cone as one proceeds through the lattice, see Figure \[fig:comm-planar\](c). When $\Gamma_2 < \Gamma_1$, then $\rho_1 = \rho_2$, which means that the bosons condense at both $+{\bf Q}$ and $-{\bf Q}$. This is a double-Q condensate with equal densities $\langle \rho\rangle \equiv \langle \rho_1\rangle = \langle \rho_2\rangle = \mu/(\Gamma_1 + \Gamma_2)$ in mean field theory. Here, the energy $E/N = -\mu^2/(\Gamma_1+\Gamma_2)$, which lies below the single-Q value precisely when $\Gamma_2 < \Gamma_1$. Again, by letting $\psi_{1,2} = \sqrt{\rho_{1,2}} e^{i\theta_{1,2}}$ and $\theta_{1,2} = \theta \pm \tilde\theta$, $$\begin{aligned} \label{eq:bec7} S_{\mathbf{r}}^+ & = & 2 \overline{\psi} \, e^{i \theta} \cos \left( {\bf Q}\cdot{\bf r} + \tilde\theta \right)\\ S_{\mathbf{r}}^z & = & \frac{1}{2} - 4 \langle \rho\rangle \, \cos^2 \left( {\bf Q}\cdot{\bf r}+ \tilde\theta \right),\end{aligned}$$ where $\overline{\psi}= \sqrt{\langle\rho\rangle}$ in mean field theory. In this phase, the $z$-component of the spins is not constant, but the phase of $S_r^+$ is constant. This implies that the spins remain in a plane, i.e. this is a coplanar phase. 
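The mean-field minimization can be verified directly; a minimal numerical sketch (illustrative values of $\mu$ and the $\Gamma_a$, chosen only to put one case on each side of $\Gamma_1=\Gamma_2$) minimizes the uniform energy density on a grid over $(\rho_1,\rho_2)$:

```python
import numpy as np

def E(r1, r2, mu, G1, G2):
    # mean-field energy density for uniform psi_a, with rho_a = |psi_a|^2
    return -mu*(r1 + r2) + 0.5*G1*(r1**2 + r2**2) + G2*r1*r2

mu = 0.1                                  # illustrative chemical potential
r = np.linspace(0.0, 0.2, 801)
R1, R2 = np.meshgrid(r, r)

def minimizer(G1, G2):
    i, j = np.unravel_index(np.argmin(E(R1, R2, mu, G1, G2)), R1.shape)
    return R1[i, j], R2[i, j]

cone = minimizer(G1=1.0, G2=2.0)   # Gamma1 < Gamma2: single-Q, rho = mu/Gamma1
fan = minimizer(G1=2.0, G2=1.0)    # Gamma2 < Gamma1: double-Q, rho1 = rho2
print(cone, fan)
```

The grid minimum has one vanishing component for $\Gamma_1<\Gamma_2$ and equal components $\rho_1=\rho_2=\mu/(\Gamma_1+\Gamma_2)$ for $\Gamma_2<\Gamma_1$, with minimum energies $-\mu^2/(2\Gamma_1)$ and $-\mu^2/(\Gamma_1+\Gamma_2)$, respectively, confirming the single-Q (cone) versus double-Q (coplanar) selection.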
Instead of a cone, the spins in this phase sweep out a “fan” – so this is sometimes called a fan state. How much of this survives beyond mean field theory? In general, the dependence of the density on chemical potential is affected by fluctuations. Note that in the original spin problem, this dependence gives the behavior of the magnetization versus field in the vicinity of saturation, as is seen from Eq. . As is well-known[@Subirbook], the BEC transition at $\mu=0$ is a very simple example of a quantum critical point, whose upper critical dimension is $d=2$. Thus in two dimensions, the deviations from mean field theory are minimal and consist just of logarithmic corrections. However, in $d=1$ the corrections are much more significant, and the dependence of the density on chemical potential is quite different. In mean field theory, we see that there is a first order transition between the cone and fan states upon varying $\Gamma_1-\Gamma_2$ through zero. In fact, the location of this transition at $\Gamma_1=\Gamma_2$ is correct and, moreover, exact beyond mean field theory. To see this, note that when $\Gamma_1=\Gamma_2=\Gamma$, the interaction terms may be rewritten as $\frac{\Gamma}{2}(\rho_1+\rho_2)^2$, which implies that the action has an enlarged [*SU(2) symmetry*]{} under rotations $\psi_\alpha \rightarrow \sum_\beta U_{\alpha\beta} \psi_\beta$, where $U$ is an arbitrary $SU(2)$ matrix. This guarantees the degeneracy of the cone and fan states at this point, since one can be rotated into the other by such an $SU(2)$ rotation, and therefore fixes the location of the cone-to-coplanar transition. When $\Gamma_1 \neq \Gamma_2$, the $SU(2)$ symmetry of Eq.  is reduced to U(1)$\times$U(1), corresponding to independent phase rotations of $\psi_1$ and $\psi_2$. As a consequence, there will be one gapless mode in the theory described by Eq.  for each Bose field with non-zero amplitude, i.e. one in the cone state and two in the fan. 
The fluctuations of these gapless modes lead, in the one dimensional TST, to power-law correlations of the spin components transverse to the magnetic field, rather than the long range order (broken symmetry states) obtained in mean field. Physically, the overall U(1) symmetry under simultaneous and equal rotations of both fields reflects conservation of $S^z$, and is microscopically mandated by the Heisenberg model. The “orthogonal” symmetry under the rotation of the two boson fields by opposite phases is [*emergent*]{}, however. It is a consequence of the [*discrete*]{} translational symmetry of the lattice, and the (generically) incommensurate nature of the wavevector $Q$. In general, this symmetry is broken by terms (which should be added to $\mathcal{S}$ in Eq. ) of the form $$\label{eq:bec8} {\mathcal S}' = -\sum_n w_n \int d^d{\bf x} d\tau \, \left( \psi_1^\dagger \psi_2 \right)^n \, e^{-i n {\bf q}_n\cdot {\bf r}} + h.c.,$$ where naïvely ${\bf q}_n=2{\bf Q}$, but in fact we can take ${\bf q}_n=2{\bf Q}- {\bf K}/n$, where ${\bf K}$ is any reciprocal lattice (RL) vector, since ${\bf r}$ is a lattice coordinate. So henceforth we work with $$\label{eq:8} {\bf q}_n = {\rm min}_{{\bf K} \in {\rm RL}} [ 2{\bf Q}- {\bf K}/n],$$ i.e. we choose ${\bf K}$ to minimize the magnitude of ${\bf q}_n$. When the wavevector ${\bf Q}$ is incommensurate and the magnitudes of these terms are small, their oscillations average to zero over short distances, and they can thereby be neglected. However, if $2n{\bf Q}$ is close to a reciprocal lattice vector, then ${\bf q}_n$ is small and the corresponding $w_n$ term becomes slowly varying, and it can have effects that persist into the continuum theory. This occurs only if $2n{\bf Q}$ is close to a reciprocal lattice vector [*and*]{} the amplitudes of both $\psi_1$ and $\psi_2$ are non-zero, i.e. within the coplanar or fan state. This leads to commensurate-incommensurate transitions, discussed in Sec. \[subsec:incomm-comm\]. 
In the cone state, such effects are not important. In this case we expect one gapless “Goldstone” mode ($\theta_1$) and power-law transverse spin correlations. But actually there is some hidden long range order. Note that in Eq.  we have (arbitrarily) chosen the minimum with $\rho_1\neq 0$ and $\rho_2=0$, instead of the one with $\rho_1=0$, $\rho_2\neq0$. In doing so, the system spontaneously breaks discrete symmetries. In particular, for the TST, this choice breaks both inversion symmetry and a “charge conjugation” symmetry, the latter being the anti-unitary symmetry of the Schrödinger equation under complex conjugation of the wavefunction. Although the fluctuations of the phase $\theta_1$ above will reduce the mean field magnetic order to quasi-long-range order in the TST, the discrete symmetry breaking is robust to one dimensional fluctuations. This symmetry breaking can be most directly sensed by the vector chirality [@kolezhuk05; @hikihara2010], $$\label{eq:chirality} V_{x,y}=\hat{z}\cdot \langle{\bf S}_{x,y} \times {\bf S}_{x+1,y}\rangle.$$ Replacing $S_r^+$ in Eq.  by the ansatz in Eq. , we find $V = \overline{\psi}^2 \sin Q$, i.e. a non-zero and constant value in the cone state. The opposite sign would be obtained for the solution with $\rho_1=0$, $\rho_2\neq0$, so this serves as an Ising-type order parameter for the cone state. Incommensurate planar to cone state transition at the saturation {#subsec:planar to cone} ---------------------------------------------------------------- ### Bethe-Salpeter equation {#subsubsec:bs} Now that we have described the phases of Eq. , we will briefly outline the methods to compute $\Gamma_1, \Gamma_2$. When the external field is sufficiently close to the saturation field, then the density of magnons, or spin flips, is dilute. In this case, we can safely use the ladder approximation[@abrikosov1975methods; @beliaev1958application; @beliaev1958energy] to renormalize the interaction vertex in a controlled manner. 
In fact, strictly speaking, we analyze the interactions for fields [ *above*]{} the saturation field, where there are no bosons present in the ground state, and we consider just two bosons interacting pairwise above the vacuum. We require the behavior in the limit in which the saturation field is approached, i.e. in which the energy of the two interacting bosons approaches zero. This limit should be familiar from ultra-cold atomic systems, in which the complicated interactions between atoms can be replaced by one or a few scattering lengths, which represent the effective interactions in the dilute limit. Here we obtain the effective interactions from the Bethe-Salpeter (BS) equation, which reads $$\label{eq:bec9} \Gamma( k, k'; q ) = V(q) -\int_p \frac{ V(q-p) \Gamma(k,k';p) }{ \epsilon(k+p) + \epsilon(k'-p)+ \Omega}.$$ Here $\Gamma(k,k';q)$ is the irreducible four-point interaction vertex taken with all external frequencies equal to zero, and $\Omega = 2(h - h_{\text{sat}}) = -2\mu$. The $k, k'$ are the incoming momenta and $k+q, k'-q$ are the outgoing momenta, as shown in Fig. \[fig:ladder\]. From this, one obtains that $\Gamma_1 = \Gamma( Q, Q, 0 )$ and $\Gamma_2 = \Gamma(Q, -Q, 0 ) + \Gamma( Q, -Q, -2Q )$. In Eq. , we introduced a factor of $U$ into the definition of $V(q)$ to enforce the spin-$1/2$ constraint, which is equivalent to taking the limit $U \to \infty$. In the BS language, Eq. , this limit provides an additional constraint, which reads [@batyev1984antiferromagnet; @nikuni1995hexagonal] $$\label{eq:bec10} \int_p \frac{\Gamma(k,k';p)}{\epsilon(k+p) + \epsilon(k'-p)+\Omega} = 1.$$ Both Eq.  and Eq.  can be applied either in two or three dimensions, or for the one dimensional TST; in the latter case, the integral over $p$ should be regarded as an integral over $p_x$ and a [*sum*]{} over the discrete $p_y= 0, 2\pi/3, 4\pi/3$. 
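The small-$\Omega$ behavior of the ladder integral appearing in these equations can be illustrated numerically. The sketch below (an assumption-laden toy: quadratic dispersion $\epsilon(p)=p^2$, incoming momenta $k=k'=0$, momentum cutoff $\pi$, constant numerator) evaluates $\int d^dp\,[2p^2+\Omega]^{-1}$ in $d=1$ and $d=2$:

```python
import numpy as np

def trap(f, x):
    # simple trapezoidal rule (avoids version-dependent numpy helpers)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2)

def I_1d(Om, N=2_000_001):
    # 1d ladder-type integral with toy dispersion eps(p) = p^2, cutoff pi
    p = np.linspace(-np.pi, np.pi, N)
    return trap(1.0/(2*p**2 + Om), p)

def I_2d(Om, N=2_000_001):
    # 2d version, written as a radial integral with measure 2*pi*k dk
    k = np.linspace(0.0, np.pi, N)
    return trap(2*np.pi*k/(2*k**2 + Om), k)

slope_1d = np.log(I_1d(1e-5)/I_1d(1e-3)) / np.log(1e-5/1e-3)
growth_2d = I_2d(1e-5) - I_2d(1e-3)
print(slope_1d, growth_2d)
```

The $d=1$ integral diverges as a power law $\Omega^{-1/2}$ (log-log slope $\approx -1/2$), while the $d=2$ integral grows only logarithmically, $\tfrac{\pi}{2}\ln(1/\Omega)$, as $\Omega\to0^+$.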
Notice that in two or fewer dimensions, since $\epsilon(k) \sim k^2, V(k) \sim 1$ near $k=0$, the integral is at least logarithmically divergent when $\Omega$ approaches zero. This reflects the fact that weak interactions are marginally relevant at the zero density fixed point in $d=2$, and relevant for $d<2$. We use this to our advantage, since we are interested precisely in this limit: the singular parts dominate the vertex function as $\Omega \rightarrow 0^+$, and we extract these dominant singular terms analytically to obtain the asymptotic behavior. For $d > 2$, the integrals become non-singular, and one can directly take the $\Omega=0$ limit. ### Calculation of $\Gamma_1$ and $\Gamma_2$ in 2d {#sec:BS-2d-lattice} We first give a brief summary of our calculations for the 2d case. The dispersion minima occur at ${\bf k},{\bf k'} = \pm {\bf Q}_{2d} = \pm (Q_{2d},Q_{2d}/2)$, where $Q_{2d}$ is given in Eq. . To solve the BS equation, we use the following ansatz: $$\label{eq:ansatz} \Gamma(k,k';q;\Omega) = A_0 + A_1 \cos q_x + A_2 \sin q_x + A_3 \cos q_y+ A_4 \sin q_y+ A_5 \cos (q_y-q_x)+A_6 \sin(q_y-q_x),$$ where $A_i$ are coefficients dependent on $k, k', J, J'$ and $\Omega$. With Eqs. (\[eq:bec9\], \[eq:bec10\], \[eq:ansatz\]), one can solve a set of linear equations for the coefficients $A_i$, which gives an explicit form of $\Gamma(q)$ for a given set of $k,k',J,J'$ and $\Omega$. Details of the 2d case are given in Appendix \[app:2d\]. From the solution, we simply obtain $$\label{eq:2dgamma} \Gamma_1 > \Gamma_2, \qquad \textrm{for } 0<R< 1,$$ which implies that [*for the entire range of anisotropies, $0\leq R \leq 1$, the ground state near the saturation field is always an incommensurate planar (or fan) state*]{}. 
To see how the incommensurate planar state dominates over the cone state in the weakly coupled chains region, we expand the expressions for the $\Gamma$’s to leading order in both $1/\ln\Omega$ and $j \equiv J'/J$ $$\begin{aligned} \label{eq:gamma2d-decoup} \Gamma_1/J&=&[-4\pi j+\frac{\pi}{2}j^3+O(j^5)]\frac{1}{\ln\Omega} \nonumber \\ &&+[-8j\pi \ln(4j)+\alpha+O(j^3)]\frac{1}{(\ln\Omega)^2}+..., \nonumber \\ \Gamma_2/J&=&[-4\pi j+\frac{\pi}{2}j^3+O(j^5)]\frac{1}{\ln\Omega} \nonumber \\ &&+[-8j\pi \ln(4j)+O(j^3)]\frac{1}{(\ln\Omega)^2}+..., \nonumber \\ \alpha &=& \frac{8j\pi(24-16\ln2-3\pi\ln2)}{16+3\pi} > 0.\end{aligned}$$ Since the term $\alpha$ is always positive, $\Gamma_1 > \Gamma_2$, and the ground state always prefers the fan state in the limit of decoupled chains. One can analytically check this result in the same limit, $J' \ll J$. We discuss this extension in Appendix \[sec:BS-1d\]. ### Calculation of $\Gamma_1$ and $\Gamma_2$ in the TST {#sec:BS-TST} We now present a brief overview of our calculations on the TST. We consider an infinitely long system, where $q_x$ is continuous and $q_y=0,2\pi/3, 4\pi/3$ is discretized by periodic boundary conditions. The dispersion minima occur at ${\bf k},{\bf k'} = \pm {\bf Q}_{1d} = \pm (Q_{1d},2\pi/3)$, given in Eq. . We are now in a position to solve the BS equation, where we follow procedures similar to the two-dimensional case. We use the same ansatz, Eq. , to solve for the coefficients $A_i$. From these coefficients, we can obtain the explicit forms of $\Gamma(q)$, for which we provide details in Appendix \[app:tst\]. Our results are as follows $$\label{eq:2d4} \begin{array}{cc} \Gamma_1 > \Gamma_2, \qquad & \textrm{for } 0<R< 0.48, \\ \Gamma_1 < \Gamma_2, \qquad& \textrm{for } 0.48<R<1. \end{array}$$ This tells us that for $R < R_c = 0.48$, the incommensurate (fan) state is favored, while for $R > R_c$, the cone (umbrella) state is favored. 
This result is in agreement with the analytical result in Appendix \[sec:BS-1d\], where it was shown that spins order into a cone state in the decoupled chains limit. Commensurate-Incommensurate Transitions (CIT) {#subsec:incomm-comm} --------------------------------------------- In the previous subsection, we found that near saturation, the ground state of the two-dimensional model for all $R$ and of the TST for $R < 0.48$ is coplanar, with modulation of the $z$-component of the spin at wavevector $2Q$. As mentioned in Section \[sec:order-param-struct\], this implies spontaneous breaking of the discrete translational symmetry, which is sensitive to commensurability effects via the terms in Eq. . In particular, we expect that the wavevector $Q$ will [*lock*]{} to commensurate values, where $2Qn$ is a reciprocal lattice vector, over a finite range of field and anisotropy, $R$. We now turn to a description of these commensurate-incommensurate transitions (CITs), both in the 2d case and for the TST. To study the CITs, we must now consider the full action, Eqs. (\[eq:6\],\[eq:bec8\]), for $h<h_{\rm sat}$, i.e. for $\mu>0$, where the bosons are at non-zero density. In two dimensions, we can regard them as condensed, while in the TST, true condensation is impossible but the system can be viewed as a quasi-condensate or a Luttinger liquid. In either case, amplitude fluctuations of the $\psi_\alpha$ fields are small, and we can write the effective action in terms of the phases $\theta_\alpha$, where $\psi_\alpha \sim \psi_0 e^{-i\theta_\alpha}$ in the coplanar/fan region. Conceptually, the effective action for the phase fields is obtained by first following the renormalization of the system away from the zero density fixed point, $\mu=0$, where amplitude fluctuations are still important. Once the energy scale set by $\mu$ is reached, these fluctuations are quenched, and it is sufficient to consider only small fluctuations in the amplitudes. 
To achieve this, we simply make the assumption of small amplitude fluctuations in Eqs. (\[eq:6\],\[eq:bec8\]), but with the bare couplings replaced by [*fully renormalized ones, at the scale $\mu$*]{}. We believe this procedure properly captures the scaling for small $\mu$, though it is not quantitatively reliable. Because the low energy dispersion of the single magnon states is exactly known and described by the quadratic terms in Eq. , the corresponding couplings are unrenormalized. The interactions $\Gamma_1$ and $\Gamma_2$, however, are renormalized by multiple scatterings, which is exactly what is captured by the BS equation discussed in Sec. \[subsec:planar to cone\]. From this analysis, we simply take as our renormalized couplings $\Gamma_a(\Omega =2\mu)$. Note that this would be exactly correct if we replaced $\mu$ by $|\mu|$ for the case $\mu<0$, but on scaling grounds it should give the correct dependence even for $\mu>0$. The renormalized interactions can be approximately represented for small $\mu$ as $$\label{eq:21} \Gamma_\alpha(\mu) \sim \frac{u_\alpha}{1+ m u_\alpha/\zeta(m\mu)},$$ where $$\label{eq:31} \zeta(m\mu) = \left\{ \begin{array}{cc} (m\mu)^{1/2} &\qquad d=1 \\ 1/|\ln(m\mu)| & \qquad d=2\end{array}\right. ,$$ and $u_\alpha$ are constants related to the “bare” values of $\Gamma_\alpha$. We can in principle use the renormalized $\Gamma_\alpha(\mu)$ computed for the original lattice spin model, which have the same leading and first sub-leading terms for small $\mu$ (up to second order in $\zeta \ll 1$) as in Eq. , but with considerably more complicated coefficients. Beyond second order in $\zeta$, the lattice $\Gamma_\alpha$ differ somewhat, and the expression is unwieldy. The above form is sufficient for our purposes, and is exact for a continuum model. Once the $\Gamma_\alpha(\mu)$ are known, the analysis is straightforward [@ueda2009magnon]. 
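The consequences of this renormalization for the equation of state can be checked numerically. A short sketch (illustrative bare parameters $u_\alpha$, $m$; any positive values give the same asymptotics) combines Eqs. (21) and (31) with the mean-field density $\overline{\rho}=\mu/(\Gamma_1(\mu)+\Gamma_2(\mu))$ and extracts the log-log slope of $\overline{\rho}(\mu)$ at small $\mu$:

```python
import numpy as np

m, u1, u2 = 1.0, 2.0, 1.0   # illustrative bare parameters (not from the text)

def Gamma(u, mu, d):
    # renormalized coupling, Eq. (21), with zeta(m*mu) from Eq. (31)
    zeta = np.sqrt(m*mu) if d == 1 else 1.0/abs(np.log(m*mu))
    return u/(1.0 + m*u/zeta)

def rho(mu, d):
    # mean-field density rho_bar = mu / (Gamma1(mu) + Gamma2(mu))
    return mu/(Gamma(u1, mu, d) + Gamma(u2, mu, d))

mu_a, mu_b = 1e-8, 1e-6
slope1 = np.log(rho(mu_b, 1)/rho(mu_a, 1)) / np.log(mu_b/mu_a)
slope2 = np.log(rho(mu_b, 2)/rho(mu_a, 2)) / np.log(mu_b/mu_a)
print(slope1, slope2)
```

In $d=1$ the slope is $1/2$, i.e. $\overline{\rho}\sim\mu^{1/2}$, the free-fermion-like equation of state of hard-core bosons; in $d=2$ it is $1$ up to the logarithmic correction, reproducing the near-mean-field behavior quoted above.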
We write $\psi_\alpha = \left[\overline{\rho} + \sigma_\alpha\right]^{1/2} e^{-i\theta_\alpha}$, and assume small fluctuations in $\sigma_\alpha$ around the saddle point value for $$\label{eq:40} \overline{\rho}=\frac{\mu}{(\Gamma_1(\mu)+\Gamma_2(\mu))}.$$ (Here we assume $\Gamma_1(\mu)>\Gamma_2(\mu)$). Eq.  properly captures, through the dependence of $\Gamma_\alpha$ on $\mu$, the non-mean-field dependence of the boson density on chemical potential. In particular, it yields $\overline{\rho} \sim \mu^{1/2}$ in 1+1 dimensions, consistent with the fact that repulsively interacting bosons behave with an effective hard core at low density, and consequently have an equation of state similar to free fermions. Expanding the action to quadratic order in $\sigma_\alpha$ and neglecting irrelevant terms involving derivatives of $\sigma_\alpha$ and their couplings to higher derivatives of $\theta_\alpha$, we obtain (neglecting constant terms) $$\begin{aligned} \label{eq:32} \mathcal{S} & = & \int d^d{\bf r} d\tau\, \Bigg\{ i (\sigma_1 \partial_\tau \theta_1 + \sigma_2 \partial_\tau \theta_2) + \frac{\overline{\rho}}{2m}( |\nabla\theta_1|^2 + |\nabla\theta_2|^2) + \frac{\Gamma_1}{2} (\sigma_1^2+\sigma_2^2) + \Gamma_2 \sigma_1 \sigma_2 \Bigg\}. 
\end{aligned}$$ Next, we integrate out the $\sigma_\alpha$ fields, and express the resulting action in terms of new linear combinations, $$\label{eq:30} \theta=\theta_1+\theta_2, \qquad \tilde\theta=\theta_1-\theta_2.$$ The result is $$\begin{aligned} \label{eq:33} \mathcal{S} & = & \mathcal{S}_\theta + \mathcal{S}_{\tilde\theta},\end{aligned}$$ where $$\begin{aligned} \label{eq:34} \mathcal{S}_\theta &=& \int d^d{\bf r}\, d\tau\, \left\{ \frac{\kappa_c}{2}(\partial_\tau\theta)^2 + \frac{\rho_c}{2} (\nabla \theta)^2 \right\},\end{aligned}$$ with $$\label{eq:36} \kappa_c = \frac{1}{2(\Gamma_1(\mu)+\Gamma_2(\mu))}, \qquad \rho_c = \frac{\overline{\rho}}{2m},$$ and $$\begin{aligned} \label{eq:7} \mathcal{S}_{\tilde\theta} & =& \int d^d{\bf r}\, d\tau\, \Big\{ \frac{\kappa}{2}(\partial_\tau\tilde\theta)^2 + \frac{\rho}{2} (\nabla \tilde\theta)^2 \nonumber \\ && - \sum_n \lambda_n \cos [n (\tilde\theta - {\bf q}_n \cdot {\bf r})]\Big\},\end{aligned}$$ with $$\begin{aligned} \label{eq:37} \kappa & = & \frac{1}{2(\Gamma_1(\mu)-\Gamma_2(\mu))}, \qquad \rho = \frac{\overline{\rho}}{2m}, \nonumber \\ \lambda_n & = & 2 w_n \overline{\rho}^n.\end{aligned}$$ Here we have restored the term resulting from $\mathcal{S}'$ in Eq. . Note that the “charge” field $\theta$ describes the Goldstone mode of the broken (or quasi-broken in 1d) U(1) symmetry, and thus remains exactly massless. It completely decouples from the $\tilde\theta$ field, and can be neglected in the analysis of the CIT. We are now in a position to analyze the CIT using Eqs. (\[eq:7\],\[eq:37\]) and the results of Appendix \[sec:sine-gordon-model\]. This is strongly dimension dependent, so we treat the cases of two dimensions and one dimension separately. ### Two dimensions {#sec:two-dimensions} In two dimensions, we begin by presuming that [*one*]{} of the cosines in Eq.  is almost non-oscillating, i.e when one of the $q_n$ is close to zero. 
Generically, this will happen for one specific minimal $n$, when $$\label{eq:17} Q_{2d} = \frac{\pi m}{n} + \delta Q,$$ for some specific $m,n$, with $|\delta Q|\ll 1$. The other rapidly oscillating cosines can be neglected, and we retain only the weakly oscillatory one. Then, in the ${\sf x},{\sf y}$ coordinates, the action takes the form given in Eq. , with $\lambda_n=\lambda$, and $q=q_n = 2 \delta Q$. We can now directly apply the results of Appendix \[sec:dgeq-2:-mean\]. Using $\delta=\rho q = 2\rho \delta Q$, and Eq. , we obtain that the commensurate state is stable for $|\delta Q|< \delta Q_c$, which defines the location $\delta Q_c$ of the CIT as $$\begin{aligned} \delta Q_c & \sim & \sqrt{\lambda_n/\rho} \sim \sqrt{m w_n} \, \overline{\rho}^{(n-1)/2}, \nonumber \\ \label{eq:ic8} & \sim & \sqrt{m w_n} (\Upsilon(\mu)\mu)^{(n-1)/2} ,\end{aligned}$$ where we used Eq. (\[eq:37\]) for $d=2$, and, of course, we assume $\mu>0$. Here $$\label{eq:35} \Upsilon(\mu) = \frac{1}{\Gamma_1(\mu)+\Gamma_2(\mu)} \sim \frac{2|\ln (m\mu)|}{m} ~\text{for}~ \mu \ll 1,$$ is a weak logarithmic function of $\mu$. For the commensurate state centered around $R=0$ ($J'=J$), we have $n=3$, and the phase boundary for the C-IC transition is linear in $\mu$, up to logarithmic corrections. However, as $n$ increases, the widths of the commensurate phases decrease. ### One dimension {#sec:one-dimension-1} In the TST, to derive the 1d theory we must sum over discrete $y$. This restricts the $\lambda_n$ terms in Eq.  to $n$ which are multiples of $3$, so that the $y$ component of ${\bf q}_n$ ($=2nQ_y$) is a multiple of $2\pi$. Following the discussion for two dimensions, we again consider wavevectors $$\label{eq:94} Q_{1d} = \frac{\pi m}{n} + \delta Q,$$ with appropriate $m,n$ such that $|\delta Q|\ll 1$, and keep only the dominant cosine term of order $n$, which then matches the sine-Gordon form in Eq.  with $q= 2\delta Q$. Then we take over results from Appendix \[sec:d=1:-quant-fluct\]. 
According to that discussion, a commensurate phase is stabilized whenever the scaling dimension of the cosine term, $\Delta_n$, is less than two. Using the result in Eq.  together with Eq. , we obtain $$\label{eq:38} \Delta_n = \frac{n^2}{\sqrt{2}\pi} \left( \frac{\mu}{m}\right)^{1/4} \sqrt{\frac{u_1-u_2}{u_1 u_2}},$$ so that $\Delta_n \ll 1$ for $\mu \ll 1$. This shows that $\Delta_n<2$, and the commensurate phase is indeed realized. Note that if we approximate $\Delta_n=0$, then this becomes the same classical estimate as in the previous section, except that $\Gamma_a(\mu)$ has a different dependence in one dimension. While this is in principle appropriate for very small $\mu$, the $1/4$ exponent in Eq.  indicates that $\Delta_n$ can be substantial nonetheless, so we will proceed with the estimate taking $\Delta_n \neq 0$. Using $\delta= 2\rho \delta Q$ and the estimate for the critical $\delta_c$ in Eq. , and applying Eqs.  and , we find the location of the 1d CIT as $$\label{eq:39} \delta Q_c \sim \left(w_n m^{\frac{n+1}{2}} n^{\Delta_n} \mu^{\frac{n-1}{2}}\right)^{\frac{1}{2-\Delta_n}}.$$ For $n=3$ and assuming $\Delta_n \to 0$, this predicts $\delta Q_c \sim \mu^{1/2}$, which does not agree with the $\mu \sim R$ scaling of the C-IC boundary in the upper left corner of the phase diagram in Fig. \[fig:phase\]. However, the range of $\mu$ there is not particularly small: $h$ changes from $4.5$ to approximately $3$ as $R$ changes from $0$ to $0.1$. This observation calls for a more careful analysis of the behavior predicted by Eqs. (\[eq:38\],\[eq:39\]) for $\mu \sim O(1)$. We find that the numerical coefficients in Eqs. (\[eq:38\],\[eq:39\]) make $\Delta_{n=3}$ vary in the interval $0.5 - 1$ for $\mu$ relevant to the C-IC boundary in Fig. \[fig:phase\], resulting in an almost linear dependence $\delta Q_c \sim \mu$ away from the strict $\mu\to 0$ limit and in qualitative agreement between our analysis here and the numerical data in Fig. \[fig:phase\]. DMRG results ------------ In Sec. 
\[sec:order-param-struct\], we showed that the cone state corresponds to a single-Q condensate of the bosonic field, while the incommensurate planar state corresponds to a double-Q condensate. This is verified by the central charge measurement: we find $c= 2$ in the coplanar phase, as shown in Fig. \[fig:cone-planar\]a, as opposed to $c=1$ for the cone in Fig. \[fig:cone-planar\]b. The transverse spin-spin correlation function for the cone state can be written as $$\begin{aligned} \label{eq:ruadd2} \langle S_{\mathbf{r}}^+ S_{{\mathbf{r}}'}^- \rangle & \sim & \overline{\psi}^2 \cos\left({\bf Q} \cdot \left({\bf r}-{\bf r}'\right)\right) \left\langle e^{i(\theta({\mathbf{r}}) - \theta({\mathbf{r}}'))}\right\rangle, \nonumber \\ & \sim & \overline{\psi}^2 \cos\left({\bf Q} \cdot \left({\bf r}-{\bf r}'\right)\right) C_\eta(x,x')\end{aligned}$$ with $C_\eta(x,x')$ given in Eq. . We fit the DMRG results to this formula in Fig. \[fig:corr-cone\]b. The transverse correlation shows a clear sinusoidal pattern with incommensurate wavevector ${\bf Q}=(1.10\pi,2\pi/3)$, and the fit is excellent, yielding the exponent $\eta=0.37$ at $M/M_s=5/6$, $R=0.66$. The whole procedure is repeated for the incommensurate planar state, $$\begin{aligned} \label{eq:ruadd3} \langle S_{\mathbf{r}}^+ S_{{\mathbf{r}}'}^- \rangle & \sim & 4\overline{\psi}^2 \left\langle \cos\left({\bf Q} \cdot {\bf r}+{\tilde \theta(x)}\right) \cos\left({\bf Q} \cdot {\bf r}'+{\tilde \theta}(x')\right)\right\rangle \nonumber \\ &&\left\langle e^{i(\theta(x) - \theta(x'))}\right\rangle \nonumber \\ &=& \frac{\overline{\psi}^2}{2} \cos({\bf Q} \cdot ({\mathbf{r}}-{\mathbf{r}}')) C_{\eta+\tilde\eta}(x,x').\end{aligned}$$ The exponents $\eta$ and ${\tilde \eta}$ come from averaging the $\theta$ and $\tilde{\theta}$ fields, respectively. The fit yields ${\bf Q}=(1.26\pi,2\pi/3)$ and $\eta+{\tilde \eta }=0.54$ at $M/M_s=5/6$, $R=0.3$, as shown in Fig. \[fig:corr-cone\]a. 
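The fitting procedure itself is easy to illustrate on synthetic data. The sketch below (an assumption: the finite-size function $C_\eta(x,x')$ is replaced by its bulk power-law form $|x-x'|^{-\eta}$, and the quoted cone-state values $Q=1.10\pi$, $\eta=0.37$ are used only to generate the fake data) recovers the wavevector and exponent by a brute-force least-squares scan:

```python
import numpy as np

# synthetic "correlation data" mimicking Eq. (ruadd2) with a bulk power law
Q_true, eta_true = 1.10*np.pi, 0.37
x = np.arange(1, 200)
data = np.cos(Q_true*x) * x**(-eta_true)

# brute-force least-squares fit over a (Q, eta) grid
Qs = np.linspace(np.pi, 1.3*np.pi, 301)
etas = np.linspace(0.1, 0.8, 141)
best = min((np.sum((np.cos(Q*x)*x**(-e) - data)**2), Q, e)
           for Q in Qs for e in etas)
_, Q_fit, eta_fit = best
print(Q_fit/np.pi, eta_fit)
```

In practice one would fit the full $C_\eta(x,x')$ of the finite open chain rather than the bulk power law, but the scan structure is the same.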
Next we consider the vector chirality (VC), defined in Eq.  as $V_{x,y}=\hat{z}\cdot \langle S_{x,y}\times S_{x+1,y}\rangle$. As discussed in Sec. \[sec:order-param-struct\], since the cone state favors XY order, the VC should take a nonzero, constant value. Indeed, as shown in Fig. \[fig:spin-chiral\], the VC correlation function does not decay with distance in the cone state, i.e., $R=0.66$ and $0.80$, and the finite-size scaling (Fig. \[fig:spin-chiral\](b)) shows that the corresponding VC order parameter remains finite in the thermodynamic limit. In contrast, for planar states, the spins are confined to one plane, so the VC correlation decays exponentially (see the $R=0.4$ data in Fig. \[fig:spin-chiral\]). Weakly Coupled Chains {#sec:weak-coupled} ===================== Bosonization of a Heisenberg chain {#subsec:bosonization} ---------------------------------- In this section, we give a brief overview of applying Abelian bosonization to a single spin-1/2 Heisenberg chain in a magnetic field. The Hamiltonian of interest is $$\label{eq:chain} H_{ch}=J\sum\limits_{x=1}^{L} \mathbf{S}(x) \cdot \mathbf{S}(x+1)- h \sum_{x=1}^L S^z(x),$$ where the magnetic field is chosen along the $z$-direction, and the lattice spacing has been set to 1. Here, the magnetization, $M \equiv \frac{1}{L}\sum_x S^z(x)$, is conserved, and hence the magnetic field, $h$, can be treated as a chemical potential relating the properties at $h \neq 0$ to those at $h=0$. For any magnetization less than saturation, i.e. $M < M_{\rm sat} = 1/2$, the low energy theory can be described by a canonical pair of fields, a massless scalar field $\theta$ and its dual field $\phi$, $$\label{eq:Hchain} H_0=\int dx \frac{v}{2}((\partial_x\phi)^2+(\partial_x\theta)^2).$$ These two fields satisfy the familiar commutation relation $$\label{eq:comm} [\theta(x),\phi(x')]=-i\Theta(x-x'),$$ where $\Theta$ is the Heaviside step function. The spin velocity, $v$, in Eq.
, is a function of the magnetization, $M$. When $M = 0$, $v/J = \pi/2$, and the $SU(2)$ symmetry is restored. For $M > 0$, $v$ decreases continuously and is determined numerically from the Bethe ansatz integral equations (see Fig. 9 of Ref. ). At a fixed magnetization, both the longitudinal (along the field direction) and transverse (perpendicular to the field axis) spin fluctuations have gapless excitations. The longitudinal modes occur at the commensurate wave vector $k_x=0$ and the incommensurate ones $k_x = \pi \pm 2\delta$, where $\delta = \pi M$, while the transverse modes are at the commensurate wave vector $k_x = \pi$ and the incommensurate vectors $k_x = \pm 2\delta$. One can then expand the spin operator around these low energy gapless modes, i.e. $$\begin{aligned} \label{eq:spinops} S^z(x) &=& M+\mathcal{S}_0^z(x)+ e^{i(\pi-2\delta)x} \mathcal{S}_{\pi-2\delta}^z(x) \nonumber \\ &&+ e^{-i(\pi-2\delta)x} \mathcal{S}_{\pi+2\delta}^z(x), \nonumber \\ S^+(x) &=& e^{-i2\delta x} \mathcal{S}_{-2\delta}^+(x)+ e^{i2\delta x} \mathcal{S}_{2\delta}^+(x)\nonumber\\ &&+(-1)^x \mathcal{S}_{\pi}^+(x),\end{aligned}$$ where $\mathcal{S}_0^z$, $\mathcal{S}_{\pi\pm 2\delta}^z(x)$, $\mathcal{S}_{\pm 2\delta}^+(x)$ and $\mathcal{S}_{\pi}^+$ are operators whose scaling dimensions depend on $M$. One can rewrite these operators in terms of the bosonic fields, $\phi$ and $\theta$, $$\begin{aligned} \label{eq:bosontoS} \mathcal{S}_0^z(x) &=& \beta^{-1} \partial_x\phi , \nonumber \\ \mathcal{S}_{\pi-2\delta}^z(x) &=& -\frac{i}{2} A_1 e^{-2\pi i \phi/\beta} ,\nonumber \\ \mathcal{S}_{\pm 2\delta}^+(x) &=&\pm \frac{i}{2} A_2 e^{i\beta \theta} e^{\pm i 2\pi \phi/\beta} ,\nonumber \\ \mathcal{S}_{\pi}^+(x)&=& A_3 e^{i \beta \theta}.\end{aligned}$$ Here, the parameter $\beta\equiv 2\pi \mathcal{R}$ is related to the compactification radius $\mathcal{R}$ and can be calculated by solving the integral equations, which can be found in Refs. .
The compactification radius takes on a simple form, $2\pi \mathcal{R}^2 = 1$ at zero magnetization, and approaches $2 \pi\mathcal{R}^2 = 1/2$ as $M \to M_{\rm sat} = 1/2$. The constants $A_1$, $A_2$ and $A_3$ are determined numerically[@hikihara2004correlation]. Furthermore, at $M=0$, the scaling dimension of $\mathcal{S}_0^z$ and $\mathcal{S}_{\pm 2\delta}^+(x)$ is $1$, and these operators can be written in their $SU(2)$-symmetric form ${\bf M}={\bf J}_R+{\bf J}_L$. The scaling dimension of $\mathcal{S}_{\pi\pm 2\delta}^z(x)$ and $\mathcal{S}_{\pi}^+$, however, is $1/2$ at zero magnetization and is related to the staggered Néel order, ${\bf N}$, and the dimerization ${\bf \epsilon}$. Further details for the $M=0$ case are provided in Appendix \[sec:zero-field-analysis\]. Now, in order to compare our DMRG results to this analysis, we must enforce open boundary conditions (BC) along the chain direction to mimic the BC used in DMRG. This can be achieved by introducing two additional “phantom sites” at $x=0$ and $x=L+1$ [@OBCAffleck], at which we enforce the boundary conditions $\phi(x=0)=0$ and $\phi(x=L+1)=0$ on the bosonic field $\phi$. The sum in Eq.  now runs from site index $0$ to $L$, and we effectively obtain a periodicity of $L+1$ using these phantom sites. We can now substitute Eq.  into Eq.  and enforce the open boundary conditions. The spin operators can then be written as (for brevity, we suppress the chain index $y$) $$\begin{aligned} \label{eq:spinopscont} S^z(x) &=& \tilde{M}+\frac{1}{\beta}\frac{d\phi}{dx} - A_1 \sin(\frac{2\pi}{\beta}\phi(x)-(\pi-2\tilde{\delta})x), \nonumber \\ \label{eq:s+} S^+(x) &=& e^{i\beta \theta(x)}[A_3(-1)^x\nonumber\\ &&+A_2 \sin(\frac{2\pi}{\beta}\phi(x)+2\tilde{\delta} x)],\end{aligned}$$ where $\tilde{M}=M L/(L+1)$ and $\tilde{\delta}=\pi \tilde{M}$.
The bosonic field, $\phi$, can also be expanded in terms of its lattice modes as $$\label{eq:phi} \phi(x)=\sum\limits_{n=1}^{\infty}\frac{\sin(q_n x)}{\sqrt{\pi n}}(a_n+a_n^+),$$ where $q_n=\pi n/(L+1)$. Here, $a_n$ and $a_n^+$ are the annihilation and creation operators, satisfying the commutation relation $[a_{n},a_{n'}^+]=\delta_{n,n'}$. Triangular spin tube -------------------- We now extend our previous discussion to study the behavior of the TST, described by Eq. , in the limit of weak coupling, $J' \ll J$. Using the low energy expansions of the spin operators in Eq. , we can express the low energy Hamiltonian as $H = H_0 + H_1$, where $H_0$ is a sum over the free bosonic modes in Eq.  on each chain, and $H_1$ describes the interchain interactions, $$\begin{aligned} \label{eq:perturbH1} H_1&=& J'\sum\limits_{y=1}^{3} \int\limits_{x=0}^{L}dx \{2 \tilde{M}^2+2\mathcal{S}_{y;0}^z \mathcal{S}_{y+1;0}^z \\ &+&\sum_{\sigma=\pm} (1-e^{2i\sigma\tilde{\delta}})\mathcal{S}_{y;\pi+2\sigma\tilde{\delta}}^z \mathcal{S}_{y+1;\pi-2\sigma\tilde{\delta}}^z \nonumber \\ &+&\frac{1}{2}[\mathcal{S}_{y;\pi}^+ \partial_x \mathcal{S}_{y+1;\pi}^- + {\rm h.c.}] \nonumber\\ &+& \sum_{\sigma=\pm} \left[ \left(\frac{1+e^{2i\sigma\tilde{\delta}}}{2}\right) \mathcal{S}_{y;2\sigma\tilde{\delta}}^+ \mathcal{S}_{y+1;2\sigma \tilde{\delta}}^- +{\rm h.c.}\right]\}, \nonumber\end{aligned}$$ where again $\tilde{M}=M L/(L+1)$. The first term, $2\tilde{M}^2$, with scaling dimension 0, is the most relevant, but is trivially a constant. The second term is marginal, with scaling dimension 2, and renormalizes the Luttinger parameters and the velocities of the bosonic fields $\phi, \theta$ in Eq. . The third term is relevant at $\tilde{M} = 0$, with scaling dimension 1, and becomes marginal as the magnetization increases, approaching scaling dimension 2 as $\tilde{M} \to M_{\rm sat}$. This term is responsible for the SDW phase that arises when it is relevant.
The fourth term, which involves a derivative, is marginal at $\tilde{M} = 0$, with scaling dimension 2, and becomes increasingly relevant with increasing magnetization, saturating to a scaling dimension of 3/2 as $\tilde{M} \to M_{\rm sat}$. This is a “twist” term that favors the cone or XY phase that orders perpendicular to the magnetic field. The last term is always irrelevant, with scaling dimension $\ge 2$, and can be neglected in the analysis of this theory. Apart from the trivial constant term, the SDW and the “twist” terms are the most relevant ones and have competing scaling dimensions as the magnetization varies from $0$ to saturation. With the exception of some subtleties that arise from the TST boundaries (discussed in later subsections), standard scaling arguments can be made about these two operators. For small $M$, the SDW term dominates, and the system orders into a collinear SDW in which the ordering momentum, $\pi - 2 \tilde{\delta}$, scales linearly with the magnetization. The twist interaction dominates over the SDW at larger magnetization, and the system orders into a cone-like state. Since there is no spontaneous breaking of continuous symmetry in one dimension, the SDW and cone orders are not truly ordered states, but Luttinger liquids with one gapless mode. This competition between the cone and SDW phases was discussed for the 2d triangular lattice in Ref. , where the critical magnetization, $M_{\rm crit}$, at which the quantum phase transition from the SDW to the cone phase takes place, was evaluated. The TST has the same critical value $M_{\rm crit} = 0.64 M_{\rm sat}$ as the 2d case, except that the cone state obtained in this quasi-1d regime is smoothly connected to the cone phase obtained in the high field region in Sec. \[sec:high field\]. Eq.  is not complete, as it does not account for several less-obvious relevant terms which are allowed by the lattice symmetry of the problem. This will be considered in more detail later.
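The competition between the SDW and twist couplings can be made concrete with a toy calculation. A coupling of scaling dimension $\Delta$ grows under a change of scale $x \to b x$ as $b^{2-\Delta}$, so whichever operator has the smaller $\Delta$ dominates at long distances. The numbers below are illustrative placeholders of our own, not fitted values:

```python
def running_coupling(gamma0, delta, b):
    # leading-order scaling: a coupling of dimension delta grows as b**(2 - delta)
    return b ** (2.0 - delta) * gamma0

# illustrative dimensions at small magnetization: SDW strongly relevant
# (Delta close to 1), twist nearly marginal (Delta close to 2)
b = 100.0
g_sdw = running_coupling(0.01, 1.1, b)    # grows quickly under rescaling
g_twist = running_coupling(0.01, 1.9, b)  # grows slowly under rescaling
```

As the magnetization increases, the two dimensions cross (SDW toward 2, twist toward 3/2), and the same comparison then favors the cone phase; the crossing defines $M_{\rm crit}$.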
Within the SDW phase, it is possible to lock the SDW momentum to a commensurate value by accounting for high-order umklapp processes. The first of these leads to a commensurate SDW, which is in fact identical to the 1/3 plateau with the “up up down” structure. This is discussed extensively later in Sec. \[sec:plateau\]. Other more relevant intra-chain interaction terms may appear due to fluctuations that are not accounted for in the naïve bosonization in Eq. . We will discuss these effects in Appendix \[sec:cone\]. SDW {#subsec:sdw} --- In the region of low to intermediate magnetization and small $J'$, we can neglect all terms in $H_1$ except the marginal one and the SDW interaction. Using bosonization, Eq. , the Hamiltonian can be rewritten as $$\begin{aligned} \label{eq:hamisdw} H_{sdw} &=& \sum\limits_{y=1}^{3}\int dx \frac{v}{2} \left[(\partial_x\phi_y)^2 + (\partial_x\theta_y)^2\right] +\frac{2J'}{\beta^2} \partial_x \phi_y \partial_x \phi_{y+1} \nonumber\\ &+& \gamma_{\rm sdw}\cos[\frac{2\pi}{\beta}(\phi_y-\phi_{y+1})-\frac{\pi-2\tilde{\delta}}{2}],\end{aligned}$$ where the bare SDW coupling is given by $\gamma_{\rm sdw}=J'A_1^2\sin(\tilde{\delta}) > 0$. ### Scaling considerations {#sec:scal-cons} Renormalization group arguments give considerable insight into the physics of Eq. . All but the last term in $H_{sdw}$ are scale invariant, and can be considered a fixed point Hamiltonian. The remaining SDW term, proportional to $\gamma_{\rm sdw}$, is not, and renormalizes under the scale transformation $x \rightarrow b x$ according to the usual linearized relation $$\label{eq:29} \gamma_{\rm sdw}(b) = b^{2-\Delta_{\rm sdw}} \gamma_{\rm sdw},$$ where $b>1$ is an arbitrary scale factor. As discussed in the previous subsection, $\Delta_{\rm sdw}<2$, so that the SDW interaction is [*relevant*]{}, and grows in strength under rescaling. Eq. 
is valid for small dimensionless $\gamma_{\rm sdw}(b)$, and therefore the weak coupling regime is limited by the condition $\gamma_{\rm sdw}(b)< v$. This defines an “SDW correlation length” $\xi_{\rm sdw}$ such that $\gamma_{\rm sdw}(b)=v$: $$\label{eq:41} \xi_{\rm sdw} \sim (v/\gamma_{\rm sdw})^{1/(2-\Delta_{\rm sdw})}.$$ In the weakly coupled chain regime, $\gamma_{\rm sdw}$ is small and so $\xi_{\rm sdw}$ is large. On scales large compared to this correlation length, we expect the bosonic modes appearing inside the SDW term to become “pinned” to values which minimize this interaction. This pinning corresponds to the creation of well-established SDW order. Because $\xi_{\rm sdw}$ diverges as $J'\to 0$, however, the establishment of SDW order can be prevented by finite size effects, even for reasonably large systems accessible by DMRG. For a finite system of length $L$, we must compare the SDW correlation length to $L$, and it is expected that physical quantities will be functions of the dimensionless ratio $\Xi_{\rm sdw}\equiv \xi_{\rm sdw}/L$. For $\Xi_{\rm sdw} \ll 1$, SDW-like behavior is expected, but when $\Xi_{\rm sdw} \gtrsim 1$, there may be a non-trivial crossover. This occurs particularly in the case of the TST, for which an analysis, detailed below, shows that the crossover is [*discontinuous*]{}. ### $L = \infty$ {#sec:l-=-infty} For an infinitely [*long*]{} system, $\Xi_{\rm sdw}=0$, we can understand the nature of the SDW state by simply minimizing the $\gamma_{\rm sdw}$ term in Eq. . When the [*width*]{} is also infinite, i.e. in two dimensions, one can simultaneously minimize each cosine term (for each $y$) independently. This occurs by taking $$\label{eq:phicond} \left.\frac{2\pi}{\beta}\phi_y\right|_{\rm d=2}=\varphi+\frac{\pi-2\tilde{\delta}}{2}y ,$$ where $\varphi$ is an arbitrary constant ($x$- and $y$-independent) phase.
Allowing for small gradients of $\varphi$, which might be present due to fluctuations or perturbations, and substituting Eq.  into Eq. , we see that the spin operator can be represented as $$\label{eq:51} \left. S_y^z(x) \right|_{\rm d=2} \sim \tilde{M} + \frac{\partial_x \varphi}{2\pi} - A_1 \sin \big[ \varphi(x) - \tfrac{\pi-2\tilde{\delta}}{2} (2x-y)\big],$$ which indeed is the classic form for a spin density wave with wavevector $\frac{\pi-2\tilde{\delta}}{2}(-2,1)$. This corresponds to an ideal two dimensional SDW state, and $\varphi$ gives the “sliding” or “phason”[@chaikin2000principles] mode of the SDW. For generic irrational $\tilde\delta/\pi$, $\varphi$ remains a gapless pseudo-Goldstone mode associated with translational symmetry breaking. In two dimensions, the zero point fluctuations of this mode do not, however, destroy long-range SDW order. Now consider the case of the TST ladder, where $y=1,2,3$ and periodic boundary conditions are applied. In this case it is generically impossible to simultaneously minimize each cosine term separately. Instead, the minimum occurs when $$\label{eq:42} \left.\frac{2\pi}{\beta} \phi_y\right|_{L=\infty, {\rm TST}} = \varphi + \frac{2\pi}{3} y,$$ where again $\varphi$ is an arbitrary constant, reflecting the invariance of Eq.  under uniform translations of all the $\phi_y$. Again, one can express the spin operator using this form, $$\begin{aligned} \label{eq:52} \left. S_y^z(x) \right|_{L=\infty,{\rm TST}} & \sim & \tilde{M} + \frac{\partial_x \varphi}{2\pi} \\ & & - A_1 \sin \big[ \varphi(x) - (\pi-2\tilde{\delta})x +\tfrac{2\pi}{3} y\big]. \nonumber\end{aligned}$$ In contrast with Eq. , the minimum configuration in the TST, Eq. , is [*independent*]{} of $\tilde\delta$, manifesting in Eq.  as a different dependence on $y$ from that in Eq. .
The difference is due to the frustration of the intrinsic 2d SDW order by the periodic boundary conditions, which tend to lock the SDW order to a commensurate form in the $y$ direction. Interestingly, the two results coincide when $\tilde\delta = \pi/6$, which corresponds to the case $M=M_{\rm sat}/3$. At this point, the periodicity of the TST and the SDW order are compatible. As in the 2d case, at the level of Eq.  applied to the TST, the uniform translation mode $\varphi$ remains gapless. Unlike the 2d case, however, in one dimension the zero point fluctuations of this mode are sufficient to disrupt long range SDW order, which instead manifests as power law correlations. Nevertheless, the short distance physics is still that of an SDW, and moreover the 1d fluctuations are easily accounted for theoretically. This is accomplished simply by treating $\varphi$ as a free massless boson, as we discuss below in Sec. \[sec:finite-length-linfty\]. ### Finite length $L<\infty$ {#sec:finite-length-linfty} As we have discussed in Sec. \[subsec:bosonization\], for a finite length chain we must impose the boundary conditions $\phi_y(x=0) = \phi_y(x=L)=0$. These conditions are [*incompatible*]{} with the values in Eq.  which minimize the SDW term in the infinitely long case. This means that end effects strongly affect, and tend to suppress, SDW ordering. What do we expect? For short systems, where $\Xi_{\rm sdw} \gg 1$, the end effects will dominate, and the effects of the SDW interaction become negligible. In other words, all components $\phi_y$ will be largely unaffected by the SDW term, and the system should behave similarly to three decoupled chains of finite length. For long systems, $\Xi_{\rm sdw} \ll 1$, the SDW pinning should be effective far from the boundaries, and only the pseudo-Goldstone mode $\Phi_0$ will behave like a massless field (pinned at the boundaries). Let us now address the crossover.
It is convenient to first make a change of basis [@cabra1998magnetization] from $\phi_1,\phi_2,\phi_3$ to new fields $\Phi_0,\Phi_1,\Phi_2$: $$\begin{aligned} \label{eq:sdw11} \left( \begin{array}{c} \phi_1 \\ \phi_2 \\ \phi_3 \end{array} \right) = \begin{pmatrix} 1/\sqrt{3} & 1/\sqrt{2} & 1/\sqrt{6} \\ 1/\sqrt{3} & 0 & -2/\sqrt{6} \\ 1/\sqrt{3} & -1/\sqrt{2} & 1/\sqrt{6} \end{pmatrix} \left( \begin{array}{c} \Phi_0 \\ \Phi_1 \\ \Phi_2 \end{array} \right) .\end{aligned}$$ The dual fields $\theta_y$ transform similarly. Note that the center-of-mass field is just proportional to the SDW phase introduced earlier: $\Phi_0 = \frac{\sqrt{3}\beta}{2\pi} \varphi$. The boundary conditions $\phi_y=0$ at the ends translate to $\Phi_n=0$ at the ends. The SDW Hamiltonian now reads $H_{\rm sdw} = H_{\rm sdw}^{(0)} + H_{\rm sdw}^{(1)}$, where the harmonic part $$\label{eq:sdw12} H_{\rm sdw}^{(0)} = \sum\limits_{n=0}^{2} \int dx \left[ \frac{\tilde{v}_n}{2\kappa_n} (\partial_x\Phi_n)^2 + \frac{\tilde{v}_n\kappa_n}{2}(\partial_x\Theta_n)^2\right]$$ is expressed in terms of renormalized stiffnesses $\kappa_0^{-2} = 1 + 4 J'/(\beta^2 v)$ and $\kappa_{1,2}^{-2} = 1 - 2J'/(\beta^2 v)$ and velocities $\tilde{v}_n = v/\kappa_n$. Its interacting part (the analog of the second line of Eq. , written in the new basis) reads $$\begin{aligned} \label{eq:sdw13} H_{\rm sdw}^{(1)} &=& \gamma_{\rm sdw} \int dx \Big\{ 2 \cos[\frac{2\pi}{\sqrt{2}\beta} \Phi_1 - \frac{\pi-2\tilde{\delta}}{2}] \cos[\frac{2\pi}{\beta}\sqrt{\frac{3}{2}}\Phi_2] \nonumber\\ &+& \cos[\frac{2\pi}{\beta} \sqrt{2}\Phi_1 + \frac{\pi-2\tilde{\delta}}{2}] \Big\}.\end{aligned}$$ Note that the center-of-mass mode $\Phi_0 \propto \varphi$ does not enter in Eq. . Thus it behaves as a free massless boson, independent of the strength of the SDW coupling. The distinction between $\delta$ and $\tilde\delta$ is not important when analyzing the crossover, and will be dropped in this subsection from now on.
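As a consistency check, the transformation in Eq.  is orthogonal and diagonalizes the cyclic gradient coupling $\sum_y \partial_x\phi_y\,\partial_x\phi_{y+1}$, with eigenvalue $2$ on the center-of-mass mode and $-1$ on the two relative modes; this is the origin of the stiffness shifts $\kappa_0^{-2}=1+4J'/(\beta^2 v)$ and $\kappa_{1,2}^{-2}=1-2J'/(\beta^2 v)$. A minimal numerical check (our own, pure Python):

```python
import math

s3, s2, s6 = math.sqrt(3), math.sqrt(2), math.sqrt(6)
# columns: Phi_0 (center of mass), Phi_1, Phi_2
O = [[1/s3,  1/s2,  1/s6],
     [1/s3,  0.0,  -2/s6],
     [1/s3, -1/s2,  1/s6]]
# adjacency matrix of the 3-site ring: sum_y phi_y phi_{y+1} = (1/2) phi^T A phi
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(X):
    return [list(r) for r in zip(*X)]

OtO = matmul(transpose(O), O)           # should be the 3x3 identity
D = matmul(transpose(O), matmul(A, O))  # should be diag(2, -1, -1)
```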
To analyze the crossover, we first carry out the renormalization group procedure by integrating out fluctuations of the fields due to modes with wavelength less than the system size $L$. In doing so, we replace $\gamma_{\rm sdw}$ by its renormalized value at this scale, $$\begin{aligned} \label{eq:43} \gamma_{\rm sdw} & \rightarrow & \gamma_{\rm sdw}(L) = L^{-\Delta_{\rm sdw}} \gamma_{\rm sdw} .\end{aligned}$$ Note that we have done the coarse-graining step of the RG, integrating out modes, but have not rescaled any fields or coordinates, so as to keep the original units unchanged for clarity. Under this coarse-graining transformation, the quadratic terms in the Hamiltonian remain unmodified. In this renormalized Hamiltonian, it is appropriate to carry out a classical saddle point approximation for $\Phi_1$ and $\Phi_2$, which are the fields pinned by the SDW coupling. The SDW potential in Eq.  is minimized by $\Phi_2=0$, which is compatible with the boundary condition, and so we can impose this condition. Then only $\Phi_1$ enters the saddle point condition in a non-trivial way. For simplicity we specialize to the case $\delta=\pi/6$, or $M=M_{\rm sat}/3$. Then we may define $\Psi = \frac{2\pi}{\sqrt{2}\beta} \Phi_1 + \frac{2\pi}{3}$, for which the saddle point Hamiltonian, neglecting the decoupled $\Phi_0$ term, becomes $$\label{eq:sdw8} H_{\rm class} = \int_0^L dx\, \Big\{ K (\partial_x \Psi)^2 - \gamma_{\rm sdw}(L) (\cos[2 \Psi] + 2 \cos[\Psi]) \Big\},$$ with $K=\beta^2 \tilde{v}_1/4\pi^2 \kappa_1$. The $\gamma_{\rm sdw}$ term is clearly minimized by $\Psi=0$, while the open boundaries require $\Psi(0)=\Psi(L)=2\pi/3$, causing the strong suppression of SDW order by the ends. There can be a non-trivial configuration, $\Psi(x)$, which minimizes the functional $H_{\rm class}$.
To bring out the crossover physics, we transform to dimensionless coordinates, letting $$\begin{aligned} \label{eq:44} x & = & \sqrt{K/\gamma_{\rm sdw}(L)}\, z,\end{aligned}$$ which gives $$\label{eq:45} H_{\rm class} = \epsilon_0\int_0^{\tilde{L}} dz\, \Big\{ (\partial_z \Psi)^2 - (\cos[2 \Psi] + 2 \cos[\Psi]) \Big\},$$ with $$\begin{aligned} \label{eq:46} \epsilon_0 & = & \sqrt{K \gamma_{\rm sdw}(L) }, \\ \tilde{L} & = & (L/\xi_{\rm sdw})^{1-\Delta_{\rm sdw}/2} = \Xi_{\rm sdw}^{\Delta_{\rm sdw}/2-1}, \\ \xi_{\rm sdw} & = & (K/\gamma_{\rm sdw})^{1/(2-\Delta_{\rm sdw})}.\label{eq:47}\end{aligned}$$ Note that Eq.  agrees, at the level of scaling, with Eq.  obtained earlier from general arguments. For the purpose of minimization, the overall prefactor $\epsilon_0$ is irrelevant, so it is clear already from Eq.  that the properties are a function of the scaling variable $\Xi_{\rm sdw}$ only, as expected. We are now prepared for the saddle point approximation, which consists of minimizing Eq. . Starting from the Euler-Lagrange equation, which has the usual “energy” integral of motion, one obtains $$\left( \frac{d\Psi}{d z}\right)^2 = C - (2 \cos[\Psi] + \cos[2 \Psi]), \label{eq:sdw9}$$ where the integration constant (“energy”) $C$ is fixed by the condition $d\Psi(z=\tilde{L}/2)/dz = 0$ as $C = 2 \cos[\Psi_{1/2}] + \cos[2 \Psi_{1/2}]$, where we denote $\Psi_{1/2} \equiv \Psi(z=\tilde{L}/2)$. As a result, the mid-ladder value of $\Psi$ is implicitly given by the integral $$\begin{aligned} \label{eq:sdw10} &&\int_{\Psi_{1/2}}^{2\pi/3} \frac{d \varphi}{\sqrt{ 2 \cos[\Psi_{1/2}] + \cos[2 \Psi_{1/2}] - 2 \cos[\varphi] - \cos[2 \varphi]}} \nonumber\\ &&= \frac{\tilde{L}}{2}.\end{aligned}$$ The full crossover (in this saddle point approximation) is obtained from Eq. . First, we observe that in the limit $\Psi_{1/2} \to 0$, the above integral diverges logarithmically, implying that indeed $\Psi(\tilde{L}/2) =0$ in the infinite-size limit.
The short-system-size limit is less obvious. For small $\tilde{L}$, we must choose $\Psi_{1/2}$ to minimize the integral. However, if we make the obvious choice of letting $\Psi_{1/2} = 2\pi/3 - \epsilon$, with $\epsilon \to 0^+$, one finds that the integral in fact does not vanish but approaches the [*constant*]{} value $\pi/\sqrt{6}$. In fact, the integral as a function of $\Psi_{1/2}$ is non-monotonic, and its minimum value is $\approx 1.1436 < \pi/\sqrt{6}=1.2826$, achieved for $\Psi_{1/2}\approx 1.3178 < 2\pi/3 =2.0944$. Regardless, the lower bound on the integral implies that there is a minimum dimensionless length, $\tilde{L}_{\rm min} \geq 2.28$, such that for $\tilde{L}< \tilde{L}_{\rm min}$, the minimum action solution is simply $\Psi_{1/2}=2\pi/3$, i.e. $\Psi(z)=2\pi/3$ for [*all*]{} $z$. For such short systems, the boundary conditions [*completely*]{} disrupt the SDW order, and the system behaves as though it were just decoupled chains. The transition from $\tilde{L}< \tilde{L}_{\rm min}$ to $\tilde{L}> \tilde{L}_{\rm min}$ is evidently discontinuous, since $\Psi_{1/2}$ must jump from a value $\Psi_{1/2} \leq 1.3178$ at $\tilde{L}=\tilde{L}_{\rm min}+\epsilon$ to $\Psi_{1/2}=2\pi/3$ for shorter systems. Precisely determining the value of $\tilde{L}_{\rm min}$ requires a comparison of the actions of the non-trivial and trivial solutions to see where they cross. What are the consequences of this transition? In numerics, the transition can be probed by varying $L$ [*or*]{} by varying $J'/J$ at fixed $L$. In either case, on crossing the transition, one expects a sharp change from SDW-like behavior for $\tilde{L}>\tilde{L}_{\rm min}$ to decoupled chain-like behavior for $\tilde{L}<\tilde{L}_{\rm min}$. In the SDW-like regime, the two modes $\Phi_1,\Phi_2$ may be considered to have developed a gap, and consequently the entanglement entropy of a bipartite cut of the sample is reduced compared to the decoupled chain-like regime.
Specifically, in the SDW-like regime a logarithmic growth of the entanglement entropy with $L$ is expected, consistent with central charge $c=1$, while in the decoupled chain regime the behavior should be closer to $c=3$. [*At*]{} the transition, a sharp [*drop*]{} of the entanglement entropy with increasing $L$ is expected. More detailed predictions can be made for the spin density profile, $\langle S^z_y(x)\rangle$. We make such a comparison in the following subsection. DMRG results for SDW {#sec:dmrg-results-sdw} -------------------- A number of measurements in the DMRG give evidence of the SDW state. As discussed in the previous subsection, the SDW regime of long TSTs can be described by pinning the fields $\Phi_1=\Phi_2=0$, and allowing for gapless fluctuations of the free massless boson field $\Phi_0$. In the semiclassical approximation discussed in Sec. \[sec:finite-length-linfty\], one can do somewhat better by keeping the $\Phi_0$ fluctuations [*and*]{} replacing $\Phi_2 \rightarrow 0$ and $\Phi_1(x) \rightarrow \frac{\sqrt{2}\beta}{2\pi}(\Psi(x)-\frac{2\pi}{3})$, with $\Psi(x)$ given by the solution of Eq. . In this way, one obtains from Eq.  $$\begin{aligned} \label{eq:sdw5} \langle S_y^z(x)\rangle &=& \tilde{M} + \frac{2-y}{2\pi} \partial_x \Psi(x) \\ &-& \frac{A_1}{X^{\eta_{\rm sdw}}} \sin[(2-y)(\Psi(x)-\frac{2\pi}{3}) - (\pi - 2 \tilde\delta)x]. \nonumber\end{aligned}$$ Here the quantity $$\label{eq:sdw6} X = \frac{2(L+1)}{\pi}\sin\left(\frac{\pi|x|}{L+1}\right)$$ arises from the quantum average over the free boson field $\Phi_0$, which is evaluated along the lines of Ref. , with the result that the exponent is $$\label{eq:48} \eta_{\rm sdw} = \frac{\pi\kappa_0}{3\beta^2} = \frac{\kappa_0}{6}\frac{1}{2\pi{\cal R}^2}.$$ For $M=M_{\rm sat}/3$ and small $J'$, we estimate $\kappa_0 \approx 1$ and $2\pi{\cal R}^2 \approx 1 -1/(2\ln[6 \sqrt{8/(\pi e)}]) = 0.72$ (see Appendix A of Ref. 
), which leads to $\eta_{\rm sdw} \approx 0.23$, so the spin density profile decays quite slowly with distance from the boundary in the SDW regime. Note that the $y=2$ chain does not depend on $\Psi$, so one can directly compare the numerically obtained magnetization profile for the ‘non-frustrated’ chain with Eq. ; see Fig. \[fig:sz2M\] below. One may wonder about the selection of the $y=2$ chain. For the geometry of our simulations, the model has full translational symmetry, $y\rightarrow y+1$, in the $y$ direction. This symmetry is broken by our [*combined*]{} choice of the saddle point $\Psi=\Phi_0=0$ in the bulk [*and*]{} the boundary condition $\Phi_0=0$ at the edges. Examination of the interaction term in Eq.  shows that there are apparently two other minimum solutions, $\Psi=\pi$ and $\Phi_2 = \pm \beta/\sqrt{6}$. In the infinite system, these are [*equivalent*]{} to the one we have chosen, insofar as they give identical results for all operators if we make a suitable translation of $\Phi_0$. However, the choice of boundary condition for $\Phi_0$ prevents this translation and results in a broken symmetry state. By a different choice of the otherwise equivalent saddle points, we can obtain formulae analogous to Eq.  but with the $y=1$ or $y=3$ chain independent of $\Psi$. In principle, for a finite system even the discrete translational symmetry should be unbroken, but the restoration of this symmetry probably occurs only at extremely low energies, at which tunneling occurs between these minima, and indeed we find the symmetry to be spontaneously broken in our DMRG simulations. In the decoupled regime, $\tilde{L}<\tilde{L}_{\rm min}$, it is more appropriate simply to calculate the spin expectation value using the free theory, Eq. , for all three fields $\Phi_0,\Phi_1,\Phi_2$. Then we obtain, instead of Eq. 
, the result $$\label{eq:49} \langle S_y^z(x)\rangle = \tilde{M} + \frac{A_1}{X^{\eta_{\rm dc}}} \sin[(\pi - 2 \tilde\delta)x] ,$$ where the “decoupled chains” exponent is $$\label{eq:50} \eta_{\rm dc} = \frac{\pi(\kappa_0+\kappa_1+\kappa_2)}{3\beta^2}.$$ In the same small-$J'$ approximation, this gives $\eta_{\rm dc} \approx 3 \eta_{\rm sdw}$, so that $\eta_{\rm dc} \approx 0.610$. Note the much more rapid decay of the spin density profile from the boundary in this regime. We compare the spin density profile in Eq.  with our DMRG data and find reasonable agreement. Fig. \[fig:sz2M\] shows a comparison of the numerical data with the magnetization profile of the non-frustrated chain, i.e. the $y=2$ result of Eq. , while Fig. \[fig:sz-frustrated\] shows that of the frustrated chains, $y = 1,3$. We can also measure in DMRG the central charge via the entanglement entropy, which yields $c=1$ for the SDW phase as opposed to $c=3$ for decoupled chains. This is shown in Fig. \[fig:cc-sdw\], where the plots show that at magnetizations $M/M_s = 1/6, 1/2$ for $R = 0.5$, the central charges obtained from numerics are $c = 0.9, 0.95$, respectively. These values are very close to the predicted $c=1$, which gives evidence for the SDW. Another measurement we can perform is the transverse spin-spin correlation function, which should decay exponentially in the SDW state. We observe exactly this behavior in our simulations, as shown in Fig. \[fig:corrxy-sdw\]. Finally, power-law behavior is expected for the “octupolar” correlation function [@hikihara2010], $$\label{eq:16} \langle (\prod_{y=1}^3 S_y^+(x)) (\prod_{y=1}^3 S_y^-(x'))\rangle \sim C_{\eta_3}(x,x').$$ The operator $\prod_{y=1}^3 S_y^+(x)$ may be thought of as inserting a soliton – an extra period – into the SDW.
This correlation function decays in the thermodynamic limit with the power-law exponent $$\eta_3=\frac{3\beta^2}{2\pi \kappa_0} = \frac{1}{2\eta_{\rm sdw}}.$$ We indeed observe such power law behavior in the DMRG, as shown in Fig. \[fig:Sd3CorSdw\]. Fitting this data (for $M/M_s=1/2$, $R=0.7$) gives $\eta_3 = 3.1\pm 0.2$, while the $S^z$ profile in Fig. \[fig:sz2M\] for the same parameters is fit to $\eta_{\rm sdw} = 0.2\pm 0.1$, yielding the product $\eta_3 \eta_{\rm sdw} = 0.62 \pm 0.31$. The uncertainties for the exponents are crudely estimated by varying the number of boundary points excluded from the fit until the fit starts to deviate from the DMRG result. The slow decay of the $S^z$ profile and the strong boundary effects seen in Fig. \[fig:sz2M\] induce significant uncertainties in the estimate of $\eta_{\rm sdw}$, so we consider the degree of agreement with the expected value $\eta_3 \eta_{\rm sdw} =1/2$ satisfactory. $M=M_{\rm sat}/3$ plateau {#sec:plateau} ========================= Magnetization plateaux are observed frequently in models of frustrated magnetism, and in a number of experiments on such materials. Theoretically, we define a magnetization plateau as a ground state of a spin system in a magnetic field $h$ such that, for a range of fields $h_1<h<h_2$, the magnetization (along the field) $M(h) = M_{\rm p}$ is constant. This implies that the magnetization is a good quantum number and, since by assumption the only term in the Hamiltonian coupling to the applied field is $h M$, that the ground state wavefunction itself is independent of the field in this range. Moreover, since the magnetization $M$ is just the total spin $S^z_{\rm tot}$ along the field direction, the symmetry under rotations generated by $S^z_{\rm tot}$ is unbroken. Thus, there can be no spin expectation values normal to the field.
Furthermore, no other state may cross the ground state (in energy) in this field range, since the plateau state remains the ground state; because states with different magnetization must have energies depending linearly on the field, there must be a [*spin gap*]{} to excitations which carry non-zero spin $S^z$ relative to the plateau state. There are restrictions on such gapped states, following from the Lieb-Schultz-Mattis theorem and related arguments[@oshikawa1997magnetization]. One way to understand them is to map the spins to hard-core bosons, where the boson number is $n_i = S_i^z + 1/2$. A gapped, insulating ground state of bosons in one dimension must have an [*integer*]{} number of bosons [*per unit cell*]{}. This implies that the total spin $\sum_{i\in {\rm u.c.}} \langle S_i^z\rangle$ per unit cell must be an [*integer*]{} if the unit cell contains an even number of sites, and must instead be a [*half integer*]{} if the unit cell contains an odd number of sites. Often, such gapped plateau states may be considered as ordered states with spins arranged in some pattern parallel and antiparallel to the field within a unit cell. A prominent feature in the phase diagram we obtain is a magnetization plateau at one third of the saturation magnetization, $M=M_{\rm sat}/3$. This has been extensively studied in the literature for the isotropic model [@richter2009; @tay2010variational], $R=0$, where it is usually regarded as a result of quantum “order by disorder”. The structure of the plateau state in that case is indeed in agreement with a semi-classical approach [@chubukov1991quantum], and has a unit cell consisting of two up and one down spin, forming a three-sublattice enlargement of the primitive triangular lattice unit cell. 
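The boson-counting constraint can be illustrated with a short sketch (a toy check of the arithmetic only; the helper name `allowed_plateau` is ours, not from any library):

```python
from fractions import Fraction

def allowed_plateau(avg_sz_per_cell):
    """LSM-type test: a gapped plateau requires an integer number of
    hard-core bosons (n_i = <S_i^z> + 1/2) per unit cell."""
    n_bosons = sum(Fraction(1, 2) + s for s in avg_sz_per_cell)
    return n_bosons.denominator == 1  # integer filling

# Up-up-down cell of the 1/3 plateau: three sites, <S^z> = +1/2, +1/2, -1/2.
uud = [Fraction(1, 2), Fraction(1, 2), Fraction(-1, 2)]
assert allowed_plateau(uud)  # 2 bosons per cell: a gapped plateau is allowed

# Its magnetization is exactly one third of saturation.
assert sum(uud) / (len(uud) * Fraction(1, 2)) == Fraction(1, 3)

# By contrast, uniform magnetization M = M_sat/2 on the same three-site
# cell gives 9/4 bosons per cell, so no such plateau is allowed.
assert not allowed_plateau([Fraction(1, 4)] * 3)
```

The check reproduces why the three-site up-up-down cell is compatible with a gap while a generic filling is not.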
Based on a combination of our DMRG studies and an analytic analysis of the quasi-1d limit, $J'/J \ll 1$ (below), we show that, in the 2d system, the plateau state persists in the full range of anisotropies $0<R\leq 1$ and forms a single phase throughout. For the one-dimensional TST, however, we find that the plateau, while present in the isotropic regime, [ *terminates*]{} before reaching the decoupled chains limit. Both these results can be understood from the relation between the plateau state and the SDW phase, as will be explained in the next subsection. Plateau states from SDW {#subsec:1/3fromsdw} ----------------------- The collinear SDW state shares many of the expected elements of the plateau phase. It has an unbroken U(1) symmetry, even in the 2d limit, and exponentially decaying transverse correlations in the TST. It has rather long-range oscillating correlations of the component of the spin parallel to the field, and consequently a markedly modulated $\langle S^z_y(x)\rangle$ profile in finite systems. The distinction between the SDW and the plateau phase is that the former is generically incommensurate and gapless. Both these differences may be removed due to further interactions neglected up to now, which [*pin*]{} the gapless phason mode $\varphi$ at specific discrete values. This has been discussed at length already in Ref.  for the two dimensional case. There, it was argued that an infinite sequence of plateaux occur at $T=0$ within the SDW phase, the strongest of these being the 1/3 plateau, and that all these plateaux exist at arbitrarily small $J'/J$. In the two-dimensional system, the plateau width (in a magnetic field) can be estimated to scale as $J (J'/J)^{9/2}$, see Ref. . Here, we will restrict the discussion to the TST, and find that one-dimensional fluctuations suppress most of these plateaux, including the 1/3 plateau for sufficiently small $J'/J$. 
The plateau formation is due to additional interactions neglected in the sine-Gordon Hamiltonian presented so far in Eqs. (\[eq:sdw12\],\[eq:sdw13\]), which involve [*higher harmonics*]{} of the phason mode $\varphi$. The allowed terms are obtained directly from a symmetry analysis. The action of the symmetries of the problem on $\varphi$ may be understood directly from the expression for the spin operator in the SDW phase of the TST in Eq. . Under each symmetry, which is a lattice space group operation, $\varphi$ must be chosen to transform appropriately [*so that $S_y^z(x)$ is a scalar*]{}. This dictates the following transformation rules: 1. translation along $x$, $x\rightarrow x+1$: $\varphi \rightarrow \varphi + \pi-2\delta$. 2. translation along $y$, $y \to y+1$: $\varphi \rightarrow \varphi - 2\pi/3$. 3. 2D inversion, $x \to -x$, $y\to 2-y $: $\varphi \rightarrow -\pi/3 - \varphi$. In addition, there is a “gauge invariance” arising because of the ambiguity of $\varphi$ due to its definition as a phase variable, which forces the invariance of the Hamiltonian under [*local*]{} shifts of $\varphi$ by $2\pi$. Note that in this section, we always consider the infinite $L$ limit, and neglect the difference between $\tilde\delta$ and $\delta$. Using the local gauge invariance, we seek terms of the form $$\label{eq:53} H_{\rm pin} = \sum_n \int \! dx\, t_n \sin (n \varphi + \alpha_n),$$ where $t_n$ and $\alpha_n$ are arbitrary parameters. (In general we can also allow $\alpha_n$ to be a slowly varying linear function of $x$, which is important for a full analysis of commensurate to incommensurate transitions, but we do not require this here for the more limited purpose of just identifying the relevant plateau states.) Using the translational symmetry along $y$, we immediately obtain the constraint that $t_n=0$ unless $n$ is a multiple of $3$, and so we set $n=3 k$. 
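The $T_y$ constraint just derived can be verified numerically (a quick sketch; the grid size and offset $\alpha$ are arbitrary illustrative choices):

```python
import math

def invariant_under_Ty(n, alpha=0.3, samples=60):
    """Check whether sin(n*phi + alpha) is unchanged by the y-translation
    phi -> phi - 2*pi/3, sampling phi over one period."""
    shift = 2 * math.pi / 3
    return all(
        abs(math.sin(n * phi + alpha) - math.sin(n * (phi - shift) + alpha)) < 1e-9
        for phi in (2 * math.pi * j / samples for j in range(samples))
    )

# Only harmonics with n a multiple of 3 survive, so t_n = 0 otherwise.
assert [n for n in range(1, 10) if invariant_under_Ty(n)] == [3, 6, 9]
```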
The inversion symmetry then forces $\alpha_n=0$ (mod $2\pi$), so finally, we find $$\label{eq:54} H_{\rm pin} = \sum_{k \in \mathbb{Z}} \int \! dx\, t_k \sin 3k\varphi,$$ where we have redefined the $t_k$ appropriately. Now it remains to apply translation symmetry along $x$. This simply gives the condition that $3k( \pi - 2\delta)$ is an integer multiple of $2\pi$. Writing $\delta = \pi M = (\pi/2) M/M_{\rm sat}$, we have $$\label{eq:55} \frac{M}{M_{\rm sat}} = \frac{3k - 2p}{3k},$$ with $k,p$ integers. This gives a rational family of potential magnetization plateaux, whose strength decreases with increasing $k$. An actual plateau occurs for a given value of magnetization characterized by integers $k,p$ only if the associated term, $t_k$, is [*relevant*]{} [@hikihara2010], when considered as a perturbation to the low energy Hamiltonian of the SDW state, which is just the free massless field theory for $\varphi$. The scaling dimension of the operator in Eq.  is easily obtained as $\Delta_{3k} = 9 k^2 \eta_{\rm sdw} = 3\pi k^2 \kappa_0/\beta^2$, cf. Eq. , and therefore, under RG, we find $$\label{eq:56} t_k(b) = t_k b^{2-\Delta_{3k}} = t_k b^{2-9k^2\eta_{\rm sdw}}.$$ Thus $t_k$ is relevant, and a magnetization plateau appears, when $\Delta_{3k}<2$. Consider the case $k=1$, which corresponds to $M=M_{\rm sat}/3$, and small $J'/J$. There (recall Sec. \[sec:dmrg-results-sdw\]) $\eta_{\rm sdw} \approx 0.23$, so $\Delta_3 \approx 2.07>2$, and thus $t_1$ is [*irrelevant*]{}. Because $\Delta_{3k}$ increases quadratically with $k$, clearly all other potential plateaux with larger $k$ are absent in the quasi-1d limit. Thus we expect that for $J'/J \ll 1$, the SDW state remains stable, and there are no magnetization plateaux. With increasing $J'$, however, $\eta_{\rm sdw}$ decreases, owing to its dependence on $\kappa_0$ in Eq. . Including this dependence, and using the quasi-1d formula for $\kappa_0$ (in the text following Eq. 
), we obtain the condition that $t_1$ becomes relevant, i.e. $\Delta_3 <2$, when $J'/J > 0.17$. We believe that this is still in the domain where the quasi-1d approach is valid. The result predicts that the 1/3 plateau appears only for $R<0.83$ in the TST. At fixed $M=M_{\rm sat}/3$, the transition from the gapless SDW to gapped plateau state at this value of $R$ or $J'/J$ is in the Kosterlitz-Thouless universality class, as is well-known for the quantum sine-Gordon model. Consequently, the gap vanishes exponentially on approaching the transition from the more isotropic side, and the ground state energy itself shows only an unobservably weak essential singularity at the transition. We note that other potential plateaux with $n = 3k \geq 6$ are so strongly suppressed by fluctuations that we do not expect any to occur, at least in the quasi-1d regime. It is interesting to consider the spin structure on the plateau. This depends on the sign of $t\equiv t_1$. For $t>0$, the $\sin 3\varphi$ pinning term in Eq.  is minimized by three values with equal energy, $\varphi = -\pi/6 + 2\pi n/3$, with $n=0,1,2$. For these values, using Eq. , the spin density profile takes the form $$\label{eq:57} \langle S_y^z(x)\rangle_{t>0} = \tilde{M} + A_1 \sin \big[ \tfrac{\pi}{6}+ \tfrac{2\pi}{3}(x-y-n)\big].$$ This equation describes a three sublattice structure with two spins “up”, i.e. with $\langle S_y^z(x)\rangle > \tilde{M}$, when $x-y-n=0,1\, (\textrm{mod 3})$ and one spin “down”, when $x-y-n=2\, (\textrm{mod 3})$. This is the semi-classical up-up-down state, and has precisely the same qualitative structure as predicted semiclassically in the isotropic limit $J'=J$. 
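The up-up-down assignment for $t>0$ can be read off directly from the profile above (a toy sketch; $\tilde M = 0$ and $A_1 > 0$ are illustrative choices, since only the sign pattern relative to $\tilde M$ matters):

```python
import math

def sz_profile_t_pos(x, y=0, n=0, M_tilde=0.0, A1=0.3):
    """Spin density on the t>0 plateau: M + A1*sin(pi/6 + 2*pi/3*(x-y-n))."""
    return M_tilde + A1 * math.sin(math.pi / 6 + 2 * math.pi / 3 * (x - y - n))

# Sign pattern over one three-site period, relative to the mean M_tilde:
pattern = ["up" if sz_profile_t_pos(x) > 0 else "down" for x in range(3)]
assert pattern == ["up", "up", "down"]  # the semi-classical up-up-down state
```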
For the other case, $t<0$, the minima occur for $\varphi = +\pi/6 + 2\pi n/3$, and the spin density profile becomes $$\label{eq:58} \langle S_y^z(x)\rangle_{t<0} = \tilde{M} - A_1 \sin \big[ \tfrac{\pi}{6}- \tfrac{2\pi}{3}(x-y-n)\big].$$ This describes instead a three sublattice structure with two spins nominally “down”, with $x-y-n=0,2\, (\textrm{mod 3})$, and the remaining one up. This state does not have a natural semiclassical picture, and instead corresponds to the ‘quantum’ version of the plateau, discussed for the two-dimensional lattice in Ref. . A caricature of this state is a three site unit cell with two sites forming a spin singlet entangled pair, and the third (the “up” site) polarized along the field. Our DMRG results are consistent with the up-up-down configuration, Eq. , suggesting that the $t > 0$ case is realized. We should stress that, apart from the quantitative estimate of $\kappa_0$, nothing in this subsection depends upon the quasi-1d approach. The conditions for the existence and stability of the plateaux arising out of the SDW state are otherwise completely general results based only on symmetries of the TST and general arguments. DMRG results for plateau {#subsec:1/3fromDMRG} ------------------------ In this section, we discuss how we use DMRG to probe the 1/3 plateau. The first observation of its existence is the constant entanglement entropy over the range of $R$ on the 1/3 plateau, as shown in Fig. \[fig:ee-plateau\]. This indicates an ordered state, corresponding to central charge $c = 0$ in Eq. . Furthermore, we can measure the transverse spin-spin correlations, which should decay exponentially in the 1/3 plateau. We show this measurement in Fig. \[fig:cc-plateau\], for $R = 0.2$ as well as the isotropic case $R = 0$. In Fig. \[fig:Szprofile-plateau\](a,b), we plot the $S^z$ profile of the spins forming the three sublattices on the 1/3 plateau. 
Near $x = L/2$, we see a perfect up-up-down structure, with some boundary effects on the edges of the chain. This gives definitive evidence of the robustness of the 1/3 plateau in these ranges of anisotropies. Moreover, in Fig. \[fig:Szprofile-plateau\](c), we see that the plateau persists up until $R \approx 0.8$, at which point the system undergoes a Kosterlitz-Thouless transition that destroys the 1/3 plateau. As described in the previous subsections, this is a signature of the 1d TST only: in 2d, the plateau is even more robust, extending down to $R = 1$. This is further discussed in Sec. \[sec:discussion2D\]. To characterize the properties of the plateau as well as its width, we adopt the following method, which takes advantage of total spin conservation due to the $U(1)$ symmetry in a magnetic field along the $z$-axis. In this case, we can work in a given total spin sector $S^z=\sum_i S^z_i$ and obtain the corresponding ground state energy $E(S^z)$; in a field, $$\begin{aligned} E(S^z,h)=E(S^z)-h\cdot S^z.\end{aligned}$$ Then the energy difference between two adjacent spin $S^z$ sectors is given by $$\begin{aligned} \delta E(S^z,h)=E(S^z+1,h)-E(S^z,h).\end{aligned}$$ Generally, at small magnetic field $h$, $E(S^z+1,h)>E(S^z,h)$, so $\delta E(S^z,h)>0$. However, $E(S^z+1,h)\leq E(S^z,h)$ when $h$ is large enough, so $\delta E(S^z,h)\leq0$. Therefore, the boundaries of the plateau are determined by the condition $E(S^z+1,h)=E(S^z,h)$, with the upper boundary $h^2_c(S^z)$ and lower boundary $h^1_c(S^z)$ of the plateau given by $$\begin{aligned} h^2_c(S^z) &=& E(S^z+1)-E(S^z), \nonumber\\h^1_c(S^z) &=& E(S^z)-E(S^z-1).\label{eq:plateauboundary}\end{aligned}$$ Finally, the corresponding width of the plateau is $$\begin{aligned} W(S^z)=h^2_c(S^z)-h^1_c(S^z).\label{eq:plateauwidth}\end{aligned}$$ In DMRG, the boundaries of the 1/3 plateau can be computed using Eq.  by fixing the total spin to $S^z=\frac{NM_s}{3}$. 
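As a toy illustration of the boundary and width formulas above, with a hypothetical convex energy curve $E(S^z)$ standing in for the DMRG energies:

```python
def plateau_boundaries(E, Sz):
    """Critical fields bounding a plateau at total spin Sz:
    h_c^1 = E(Sz) - E(Sz-1)  (lower),  h_c^2 = E(Sz+1) - E(Sz)  (upper)."""
    return E(Sz) - E(Sz - 1), E(Sz + 1) - E(Sz)

# Hypothetical convex ground-state energies (illustrative only, not DMRG data).
E = lambda Sz: Sz ** 2

h1, h2 = plateau_boundaries(E, Sz=4)
assert (h1, h2) == (7, 9)   # lower and upper critical fields
assert h2 - h1 == 2         # plateau width W = h_c^2 - h_c^1 > 0 for convex E
```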
Here, $M_s=\frac{1}{2}$ is the saturation magnetization, and $N$ is the total number of sites. As shown in Figs. \[fig:DMRG-plateau\](a,b), both the upper and lower boundaries of the 1/3 plateau are determined for different system sizes and anisotropies. The corresponding width of the plateau is also given in Fig. \[fig:DMRG-plateau\](c) using Eq. . From this, we can see that the plateau is very robust: it remains finite when the anisotropy $R$ is small, and decreases with increasing $R$. Interestingly, the plateau remains finite even when $R$ is as large as $R=0.7$, although the width $W$ is very small. In the region $0.7<R\leq 1$, finite-size scaling of the data shows that the width of the plateau is zero within the numerical error, for example, at the decoupled chain limit $R=1$. Low field regime {#sec:lowfield} ================ At zero field, there is already considerable work on the spatially anisotropic Heisenberg model in two dimensions [@weng2006spin; @yunoki2006two; @pardini2008magnetic; @bishop2009; @heidarian2009spin; @tay2010variational; @SRWhite2011]. Away from the quasi-1d region, i.e. for $0 < R \lesssim 0.8 $, the ground state of the 2d model is unambiguously magnetically ordered, in a coplanar spiral with an incommensurate wavevector that varies continuously with $R$. With increasing anisotropy, the ground state is less clear, and is quite difficult to resolve numerically, owing to the fact that correlations between chains set in only at extremely long length scales for small $J'/J$. A controlled renormalization group approach predicts, however, that in the limit $0< J'/J \ll 1$, the system develops a [ *collinear*]{} magnetic state instead of the spiral one [@starykh2007ordering]. 
Such a collinear state is qualitatively distinguished from the spiral one by its pattern of symmetry breaking, which leaves a residual U(1) spin rotation symmetry about the ordering axis, in contrast to the spiral state which fully breaks SU(2) symmetry with no residual continuous invariance remaining. Here we turn to the situation in the one-dimensional TST. We argue that in this case the spiral order is converted by 1d quantum fluctuations into a fully gapped state with spontaneous [*staggered dimerization*]{}. The argument is quite general and is expected to hold for any 1d system with local non-collinear order and a half-integer spin per unit cell. Furthermore, specifically for the TST, we show that the tendency to short-range spiral order is [*more*]{} robust than in 2d, and unlike in 2d, it prevails over collinear order even in the limit of arbitrarily small $J'/J$. Thus staggered dimerization is predicted at zero field for all $0 \leq R < 1$ for the TST. See Appendix \[sec:zero-field-analysis\] for an alternative calculation that leads to the same conclusion as the one presented below. Given the presence of dimerization in zero field, we can discuss the behavior in low fields, or more properly for small magnetization, in terms of the elementary excitations of the symmetry broken dimerized state, which are domain wall solitons. We obtain in this way different gapless phases at low field, including the SDW state discussed previously from the quasi-1d point of view. Zero field dimerization from spiral order {#sec:dimer-from-spir} ----------------------------------------- In the following, we assume that on short space and time scales, the spins establish a similar spiral order to that of the 2d system. This notion can be made more systematic by considering spin tubes made by wrapping the triangular lattice into cylinders with larger circumference. 
Once the circumference is large enough compared to the correlation length of the spiral order, the latter should become well-established. It seems reasonable to regard this as being the case already for the circumference three TST studied here. This is corroborated also by the close correspondence of the phase diagram in the weakly anisotropic limit, $R \ll 1$, and the expected semi-classical one, as discussed already in Sec. \[sec:iso\]. With this assumption, the description of the TST should be that of a Non-Linear $\sigma$-Model ([NL$\sigma$M]{}) for the spiral order, confined to the finite width cylinder. This starting point is similar to the one of Haldane [@haldane1988] applied to unfrustrated spin chains of spin $S$, which locally establish collinear Néel order. From this formulation, Haldane established the existence of a featureless gapped state for integer $S$, while it is known that chains with half-integral $S$ harbor a gapless Bethe chain-like phase instead. The case of the TST is distinct from Haldane’s analysis, however, owing to the different symmetry of the order. While the collinear Néel case is described by a vector O(3) [NL$\sigma$M]{}, the spiral case is instead described by a [NL$\sigma$M]{}with a [ *matrix*]{} SO(3) order parameter [@dombre1989]. Here the matrix may be constructed from the local spin order, $$\label{eq:59} {\bf S}_i \sim m ({\bf\hat n}_1 \cos {\bf q}\cdot {\bf r}_i + {\bf\hat n}_2 \sin {\bf q}\cdot {\bf r}_i),$$ where ${\bf\hat{n}}_1$ and ${\bf\hat{n}}_2$ specify the plane of the spiral, with ${\bf\hat{n}}_1\cdot{\bf\hat{n}}_2=0$, ${\bf q}$ the spiral wavevector, and $m$ the amplitude of the quasi-static moment. One can construct from this the SO(3) matrix $$\label{eq:60} {\mathcal O} = \left( {\bf\hat{n}}_1 | {\bf\hat{n}}_2 | {\bf\hat{n}}_3 \right),$$ with ${\bf\hat{n}}_3 = {\bf\hat{n}}_1 \times {\bf\hat{n}}_2$. 
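The matrix order parameter can be verified to lie in SO(3) whenever $({\bf\hat n}_1, {\bf\hat n}_2)$ is an orthonormal pair (a short numeric sketch; the tilt angle of the spiral plane is an arbitrary illustrative choice):

```python
import numpy as np

def spiral_order_matrix(n1, n2):
    """Assemble O = (n1 | n2 | n3) with n3 = n1 x n2 from the spiral plane."""
    n1, n2 = np.asarray(n1, dtype=float), np.asarray(n2, dtype=float)
    return np.column_stack([n1, n2, np.cross(n1, n2)])

theta = 0.7  # arbitrary tilt of the spiral plane
n1 = np.array([1.0, 0.0, 0.0])
n2 = np.array([0.0, np.cos(theta), np.sin(theta)])  # orthonormal to n1

O = spiral_order_matrix(n1, n2)
assert np.allclose(O.T @ O, np.eye(3))    # orthogonal: O^T O = 1
assert np.isclose(np.linalg.det(O), 1.0)  # determinant +1: a proper rotation
```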
If on short space and time scales, spiral order is present, we expect that an appropriate effective [NL$\sigma$M]{} action is given by $$\begin{aligned} \label{eq:71} && S_{NL\sigma M} = \\ && \frac{1}{2g} \int \! dx \, d\tau\, \Big\{ \frac{1}{v} {\rm Tr} \left[\partial_\tau {\mathcal O}^T \partial_\tau {\mathcal O}\right] + v {\rm Tr} \left[\partial_x {\mathcal O}^T \partial_x {\mathcal O}\right]\Big\}.\nonumber\end{aligned}$$ Note that, for a quasi-1d system with circumference $L_y$, the effective coupling constant $g\sim c/L_y \ll 1$ for large $L_y$, with some constant two-dimensional coupling constant $c$. Famously, in Haldane’s analysis of spin chains with a vector O(3) order parameter, the naïve [NL$\sigma$M]{} action must be supplemented by a topological term [@haldane1988]. Topology of the order parameter is also important here, but its nature is rather distinct from Haldane’s case. For clarity, we compare and contrast the two situations here. The vector O(3) order parameter comprises a manifold isomorphic to the sphere $S^2$. Its topology is summarized by the homotopy groups $\Pi_1(S^2) = 0$ and $\Pi_2(S^2) = \mathbb{Z}$. The former implies that there are no non-trivial loops on the sphere, and correspondingly no [*singular*]{} point defects in two dimensions. The latter, second homotopy group implies that there are classes of non-trivial [*smooth*]{} configurations of the order parameter in two dimensions, parametrized by an integer. These configurations are skyrmions, lacking any singularity. Because of the lack of any singularity, the skyrmions appear in a continuum limit of the O(3) vector [NL$\sigma$M]{} , and modify the physics of the [NL$\sigma$M]{} through a topological $\theta$-term, which gives a phase factor to configurations with non-zero skyrmion number. Based on this [NL$\sigma$M]{} with $\theta$-term, Haldane postulated distinctly different behavior for integer and half-integer spin chains. 
In the matrix SO(3) case, the order parameter manifold is $S^3/\mathbb{Z}_2$, and the corresponding homotopy groups are $\Pi_1(S^3/\mathbb{Z}_2) = \mathbb{Z}_2$ and $\Pi_2(S^3/\mathbb{Z}_2) =0$. The trivial second homotopy group means that [*non-singular*]{} configurations of the order parameter have no topological distinctions. This implies that a continuum limit exists in which there are no topological defects and there is no topological term. Instead, the non-vanishing first homotopy group implies that there are [*singular*]{} point defects in two dimensions, with an Ising character. Note that in our theory, these are point defects in space-time, or [*instantons*]{}. Such defects are well-known in classical two-dimensional non-collinear magnets, and are known as $\mathbb{Z}_2$ vortices [@kawamura1984]. They do not appear in the continuum [NL$\sigma$M]{}, but are allowed in a lattice theory. Instead, the proper way to treat them is to [*embed*]{} the continuum theory in a larger one in which the defects appear as [*operator insertions*]{}, with some fugacity and selection rules. This situation is familiar from the Kosterlitz-Thouless analysis of the classical XY model, in which the naïve continuum theory is just the Gaussian spin-wave theory, and the defects are point vortices which are treated as a kind of Coulomb gas [@chaikin2000principles]. It occurs also in the quantum analysis of 2+1 dimensional collinear antiferromagnets, where the singular defects are hedgehogs or monopoles. The separation of these defects from the continuum theory is the basis of the theory of deconfined quantum criticality [@senthil2004]. With this understanding, we may first consider the SO(3) matrix [NL$\sigma$M]{} without any $\mathbb{Z}_2$ vortices, which is simply described by Eq. . There is no topological term. This SO(3) [NL$\sigma$M]{} is, like all [NL$\sigma$M]{}’s in two dimensions for non-abelian groups, asymptotically free. 
Lacking any quantum phase factors, we expect simply that it develops a gap at a length scale $\xi \sim e^{g_0/g} \sim e^{\frac{g_0}{c} L_y}$, and that order parameter (hence spin) correlations decay exponentially beyond this scale. The gap itself behaves as $\Delta \sim v/\xi$. Note the difference from Haldane’s case, where the $\theta$ term, which is non-trivial for half-integer spin, fundamentally alters the behavior of the continuum [NL$\sigma$M]{}, leading to gapless behavior in the half-integer spin case. Here, there is no topological term, and the system is always gapped with exponential spin correlations. Now we can consider the role of the $\mathbb{Z}_2$ vortex instantons. Such a vortex is described in the field theory by an operator, $\psi$, which inserts the vortex at a particular space-time point. It is crucial to consider the quantum numbers of a $\mathbb{Z}_2$ vortex, i.e. how the operator $\psi$ transforms under physical symmetries. The relevant operations are time-reversal, translation, and inversion. It can be argued (we discuss this in Appendix \[sec:transf-prop-mathbbz\]) that the vortex operator is invariant under time-reversal and translations along $y$, and transforms under the other two operations, translation along $x$, $T_x$, and inversion, $P$, according to $$\begin{aligned} \label{eq:63} & & T_x: x \rightarrow x+1, \; \psi \rightarrow (-1)^{L_y} \psi, \\ & & P: x\rightarrow -x, y\rightarrow -y, \; \psi \rightarrow (-1)^{L_y} \psi.\label{eq:64}\end{aligned}$$ From the above properties, we see that [*for odd $L_y$, $\psi$ has the transformation properties of a staggered dimerization operator*]{}. In general, two operators with the same symmetry are expected to have non-zero overlap in the operator sense, and their correlations will be proportional. Thus, for odd $L_y$, the $\mathbb{Z}_2$ vortex operator $\psi$ can be viewed as a staggered dimerization order parameter. Let us consider the correlations of $\psi$. 
Its two-point correlation function is obtained by inserting two $\mathbb{Z}_2$ vortices in the system at separated space-time points. When they are widely separated, the result should be just the product of two independent $\mathbb{Z}_2$ vortices. Naively, using Eq. , such a vortex has an action which diverges logarithmically with the system size. However, its [ *effective*]{} action is expected to be finite, due to the vanishing order and stiffness beyond the scale $\xi$. Roughly, the effective action for a single vortex is thus obtained by replacing the system size by $\xi$, so $S_v \sim \frac{1}{g} \ln \xi \sim g_0/g^2$. Then we expect that $$\label{eq:72} \lim_{x\rightarrow \infty} \langle \psi(x) \psi(0)\rangle \sim e^{-2S_v} \sim e^{-g_0/g^2} \sim e^{-c L_y^2},$$ with some constant $c$. The saturation to a finite value as $x\rightarrow \infty$ implies $\langle \psi\rangle \neq 0$, and hence, for odd $L_y$, the existence of staggered dimer order. For even $L_y$, there is no connection of $\mathbb{Z}_2$ vortices to dimerization, so although the former are present, the system forms simply a featureless gapped state. We can probe into this state by measuring the entanglement entropy in DMRG for a range of anisotropies at zero field. We show this in Fig. \[fig:EEm0\_0\], where an oscillatory behavior of period 2 gives clear evidence of the dimerized phase described above. Gapless states in low but non-zero field {#sec:gapless-states-low} ---------------------------------------- As argued in the previous subsection, the ground state in zero field is a non-magnetic dimerized state with a gap to all excitations. As a consequence of the gap, the ground state is unchanged by application of a sufficiently small field. The ground state changes when the field is large enough that a state with non-zero spin crosses the energy of the spin zero ground state. 
Generally, if the transition to a state of non-zero magnetization occurs continuously, we can think of the state with non-zero magnetization as consisting of a dilute set of elementary excitations above the zero field ground state. We must therefore consider the elementary excitations of the dimerized state, and in particular those which carry non-zero spin (as these couple to the field). The most important such excitations are the topological [*soliton*]{} excitations which are characteristic of the broken Ising symmetry of the dimerized state. Such solitons are domain walls, connecting the two distinct dimerized ground states. As is well-known from the study of the Majumdar-Ghosh chain [@sutherland1981], solitons of this type carry spin, and in particular for the TST, one can readily argue that the solitons carry half-integer spin, namely $S^z =$ 1/2, 3/2, as shown in Fig. \[fig:solitons\]. Both values of the spin are possible, and generally differ in energy. The solitons are topological excitations insofar as they are non-local: they cannot be created by the action of any local operator on a dimerized ground state. In addition to the topological soliton excitations, non-topological excitations carrying spin $S^z=1$ also exist. They can be visualized either by replacing a singlet dimer by a triplet of aligned spins, or as a bound pair of $S^z=1/2$ solitons. Generally, if the magnetized state is realized as a dilute system of [*non*]{}-topological $S^z=1$ triplons, then the dimerization is not disrupted and must persist for $M>0$. Numerically, however, the dimerization appears to be disrupted at all non-zero $M$. We will assume henceforth that the magnetized state (at small $M>0$) should be regarded as a collection of topological soliton excitations, and neglect the $S^z=1$ triplons. In general, the excitations can be characterized by spatial quantum numbers in addition to spin. 
For an excitation localized in $x$ in the TST, we may consider the transformations under translations along $y$, $T_y$, and under inversion, $P$. From Fig. \[fig:solitons\], it is clear that the $S^z=3/2$ soliton is invariant under both. However, this is not the case for the $S^z=1/2$ soliton, which has additional structure. In general, out of the three non-dimerized spins in the “core” of the domain wall, we can form three linearly independent states with $S^z=1/2$, $$\label{eq:66} |m\rangle = \frac{1}{\sqrt{3}}\left[ \zeta^m \begin{pmatrix} \downarrow \\ \uparrow \\ \uparrow \end{pmatrix} + \begin{pmatrix} \uparrow \\ \downarrow \\ \uparrow \end{pmatrix} + \frac{1}{\zeta^{m}} \begin{pmatrix} \uparrow \\ \uparrow \\ \downarrow \end{pmatrix} \right],$$ where $\zeta= e^{2\pi i/3}$ and $m=0,\pm 1$. These are simply momentum eigenstates along the 3-site chain. The state $|0\rangle$ is invariant under the $T_y$ and $P$ operations, while the [*chirality*]{} eigenstates $|\pm\rangle$ form a two-dimensional irreducible representation. In general, the chirality states would differ in energy from the scalar one. If we crudely model the soliton core as a three-site antiferromagnetic Heisenberg chain, then we see that the chirality states have lower energy, so we expect that the elementary solitons take this form. Consequently, there are two chirality “flavors” to the $S^z=1/2$ solitons. To understand the impact of the solitons, we will need the relation between the microscopic lattice operators and those which describe the solitons. The simplest to consider is the dimerization operator, or the bond kinetic energy, $B_{x,y} = \vec{S}_{x,y}\cdot \vec{S}_{x+1,y}$. This is negative on singlet bonds and has zero average on bonds with uncorrelated spins. In a ground state, it oscillates with period $2$ in the $x$ direction. 
However, the singlets are shifted over by one sublattice on crossing a soliton, so $$\label{eq:79} B_{x,y} \sim \overline{B} + (-1)^{x+N(x)} \epsilon_0,$$ where $\overline{B}$ is the non-zero average, and $\epsilon_0$ is the amplitude of the bond modulation. We have defined $N(x) = \sum_{x'<x} a_{+,x'}^\dagger a_{+,x'}^{\vphantom\dagger} + a_{-,x'}^\dagger a_{-,x'}^{\vphantom\dagger} + a_{3,x'}^\dagger a_{3,x'}^{\vphantom\dagger}$, which is the number of solitons to the left of the position $x$. The $N(x)$ factor accounts for the shift in the singlet position on crossing each domain wall. Next we turn to the spin density operator $S^z_{x,y}$. We are interested in its action on states which consist of a low density of solitons. It is helpful to consider a caricature of these states in which solitons are described by a wavefunction which is a product of columns of singlets, spaced by occasional non-singlet columns with either the chiral $S^z=1/2$ form, or fully aligned $S^z=3/2$ spins, as shown in Fig. \[fig:solitons\]. If the operator $S_{x,y}^z$ acts on a column $x$ which is part of a singlet, it converts that singlet to an $S^z=0$ triplet state. This triplet costs a non-zero energy equal to the zero field spin gap, and having $S^z=0$ gains no energy back from the magnetic field. Thus if we restrict our description to a low energy one, below the zero field spin gap, we can simply take $S^z_{x,y}$ to annihilate the state in this case. If, however, $x$ is located at the position of a soliton, then $S_{x,y}^z$ gives back a low energy state, which consists either of the original soliton or one with reversed chirality. Notably, in moving down the 1d system, solitons alternate between odd and even columns of the lattice. Thus a non-zero spin is only measured when $S^z_{x,y}$ acts on an even or odd site, if the number of solitons to the left of the position $x$ is fixed. 
This lets us write the following expression for the spin operator, $$\begin{aligned} \label{eq:75} && S_{x,y}^z \sim \left[1 + (-1)^{x+N(x)}\right] \big[ a_{+,x}^\dagger a_{+,x}^{\vphantom\dagger} + a_{-,x}^\dagger a_{-,x}^{\vphantom\dagger} \nonumber \\ &&\; + \zeta^y a_{+,x}^\dagger a_{-,x}^{\vphantom\dagger} + \zeta^{-y} a_{-,x}^\dagger a_{+,x}^{\vphantom\dagger} + a_{3,x}^\dagger a_{3,x}^{\vphantom\dagger}\big],\end{aligned}$$ where $a_{+,x}, a_{-,x}$ are annihilation operators for chiral $S^z=1/2$ solitons, and $a_{3,x}$ is an annihilation operator for an $S^z=3/2$ soliton. Finally we consider the spin raising operator, $S_{x,y}^+$, containing the XY components of the spin. Acting on a site which is part of a singlet bond, the raising operator converts the singlet to an $S^z=1$ triplet, with amplitude $\mp 1/\sqrt{2}$ depending upon whether the site is the left or right member of that singlet. The triplet with $S^z=1$ has overlap with the state of two adjacent $S^z=1/2$ solitons (as well as other states not in the low energy sector). Simple algebra shows that, for instance, $$\begin{aligned} \label{eq:76} && \begin{pmatrix} | s \rangle \\ |\uparrow\uparrow\rangle \\ |s\rangle \end{pmatrix} = \\ && \nonumber \frac{1}{3}\left[ |+\rangle|+\rangle + |-\rangle|-\rangle - \frac{1}{2}\left( |+\rangle|-\rangle + |-\rangle|+\rangle\right)\right]+\cdots,\end{aligned}$$ where on the left-hand side, $|s\rangle$ represents the singlet state, and the columns represent the three columns in the TST. On the right-hand side, the state has been decomposed into soliton states, and the ellipses represent higher energy states. Here we took the triplet to reside in the middle row. The other triplets can be obtained by translation, as the chirality states are translational eigenstates. From this construction, we obtain the analogous relation to Eq. 
, $$\begin{aligned} \label{eq:77} S^+_{x,y} &\sim & (-1)^{x+N(x)} \sum_{m=\pm} \Big[ \zeta^{m y} a_{m,x}^\dagger a_{m,x+(-1)^{x+N(x)}}^\dagger \nonumber \\ & & + a_{m,x}^\dagger a_{-m,x+(-1)^{x+N(x)}}^\dagger \Big].\end{aligned}$$ The low energy excited eigenstates will not consist of localized quasiparticles but delocalized ones, as solitons may hop between columns of the same sublattice, i.e. even or odd $x$. As a consequence, the states are eigenstates of the $x$-momentum $k_x$, which is defined [*modulo $\pi$*]{} rather than the usual $2\pi$, due to the doubled background unit cell of the dimerization. In the dilute limit we should consider only the states near the minimum energy of the corresponding energy bands. For the $S^z=3/2$ solitons, which are inversion symmetric, if this minimum is non-degenerate it must occur at $k_x=0$ or $k_x=\pi/2$. We expect it to occur at the latter, $k_x=\pi/2$ value, owing to the dominant antiferromagnetic spin correlations. For the $S^z=1/2$ solitons, inversion symmetry implies instead that if the positive chirality ($q=+1$) soliton has minimum energy at $k_x=q_0$, then the negative chirality soliton has its minimum energy at $k_x=-q_0$. We are not aware of a general argument to fix the momentum $q_0$, however, and expect it is generically non-zero. We have checked this by a crude and uncontrolled variational calculation of the soliton dispersion, which indeed gives minimum energy states with opposite non-zero momenta for opposite chirality (this calculation gives $q_0=\pi/6$, but we do not expect this to be accurate). With this in mind, we focus only on the minimum energy states and take a continuum limit, writing $$\begin{aligned} \label{eq:78} a_{\pm,x} & \sim & \psi_\pm(x) e^{\pm i q_0 x}, \\ a_{3,x} & \sim & \Psi(x) e^{i\frac{\pi}{2} x},\end{aligned}$$ where $\psi_m(x)$ and $\Psi(x)$ are taken as slowly varying continuum boson fields. Then Eqs.  
become $$\begin{aligned} \label{eq:80} S^z_{x,y} & \sim & \left[1 + (-1)^{x+N(x)}\right] \big[\sum_{m=\pm} \psi_m^\dagger \psi_m^{\vphantom\dagger} \nonumber \\ && + \sum_m e^{i m (2q_0 x + \frac{2\pi}{3} y)} \psi_m^\dagger \psi_{-m}^{\vphantom\dagger} + \Psi^\dagger \Psi^{\vphantom\dagger}\big], \\ S^+_{x,y} & \sim & 2i \sin q_0 \sum_m e^{i m (2q_0 x + \frac{4\pi}{3} y)} m (\psi_m^\dagger)^2 \nonumber \\ && + 2\cos q_0 (-1)^{x+N(x)} \sum_m e^{i m (2q_0 x + \frac{4\pi}{3} y)} (\psi_m^\dagger)^2 \nonumber \\ && + 2 \cos q_0 (-1)^{x+N(x)} \psi_+^\dagger \psi_-^\dagger.\end{aligned}$$ We are now in a position to write down an effective continuum theory to describe the low magnetization state in terms of bosonic field operators $\psi_m$ for $S^z=1/2$ solitons with chirality $m$ and $\Psi$ for the $S^z=3/2$ solitons, all taken near their band minima. By symmetry, it takes the form $$\begin{aligned} \label{eq:65} H_{\rm low} & = & \int \! dx\, \Big\{ \sum_{m=\pm } \psi_{m}^\dagger \big( -\frac{1}{2m_1}\partial_x^2+ \epsilon_{1/2 }-h/2\big) \psi_{m}^{\vphantom\dagger} + \Psi^\dagger \big( -\frac{1}{2m_2}\partial_x^2+ \epsilon_{3/2 }-3h/2\big) \Psi^{\vphantom\dagger} + V[\psi_+^\dagger \psi_+^{\vphantom\dagger}, \psi_-^\dagger \psi_-^{\vphantom\dagger}, \Psi^\dagger \Psi^{\vphantom\dagger} ] \Big\}. \nonumber \\ &&\end{aligned}$$ Here $V$ is a general potential of quartic order and higher in the fields, representing interactions of the solitons. We have dropped terms above which mix the different soliton species, e.g. ones which might annihilate one $S^z=3/2$ soliton while creating three $S^z=1/2$ solitons. Most such terms, at least at low order, are prohibited by various symmetries, such as translation and inversion symmetry, at least for a generic incommensurate wavevector $q_0$ for the $S^z=1/2$ solitons. Consider increasing the magnetic field $h$ from zero. The ground state remains the soliton vacuum, i.e. 
the dimerized state, until the energy of a state with non-zero solitons crosses the energy of the vacuum. Assuming repulsive interactions between solitons, this occurs when the energy of a single soliton vanishes, and this type of soliton will enter the system. We must compare the energies $\epsilon_{1/2} - h/2$ and $\epsilon_{3/2}- 3h/2$, and see which vanishes first on increasing $h$. If the $S^z=3/2$ soliton energy is large, $\epsilon_{3/2}>3 \epsilon_{1/2}$, then the $S^z=1/2$ solitons will appear, at $h=2\epsilon_{1/2}$. Conversely, if $\epsilon_{3/2}<3 \epsilon_{1/2}$, then the $S^z=3/2$ solitons will appear, at $h=2\epsilon_{3/2}/3$. The critical ratio $\epsilon_{3/2}/\epsilon_{1/2} = 3$ is valid at infinitesimal soliton density, i.e. $M\rightarrow 0^+$. At larger magnetization, interactions amongst solitons may become important, and will probably tend to disfavor the $S^z=1/2$ solitons further, since these must occur at a higher density and hence interact more strongly. Since in any case we do not know the energies $\epsilon_{3/2}, \epsilon_{1/2}$, we cannot actually use this criterion quantitatively. Instead, we simply consider both types of soliton liquids as possibilities, and determine their properties at a phenomenological level. Let us consider first the $S^z=1/2$ case. Then we can neglect the $\Psi$ particle, which has an energy gap even when the $\psi_q$ solitons enter the system. The structure of the solitonic state is determined to a degree by the potential $V$ in Eq. . By symmetry, it has the form $$\label{eq:74} V[n_+,n_-,0] = \frac{a}{2} (n_+^2 + n_-^2) + b n_+ n_-.$$ With $a>0$ for stability, the state depends upon the coefficient $b$. If we assume $b<a$, then it is favorable for both solitons to enter the system in equal amounts, and the system forms a one dimensional Bose liquid of particles with two flavors. 
Owing to the strong quantum fluctuations in one dimension, this is a Luttinger liquid phase with two independent massless bosonic modes, associated with the two conserved densities. In the CFT terminology, this is a state with central charge $c=2$. If instead $b>a$, it is preferable for the system to choose only one species of soliton. In this case there is a spontaneously broken discrete symmetry (inversion $P$), and only a single massless bosonic mode, or $c=1$. We focus on the former case, $b<a$, which we argue [*describes the same phase*]{} as the semi-classical incommensurate planar state. To see this, we show that the spin correlations in the two-flavor $S^z=1/2$ soliton liquid have the same form as those in the 1d incommensurate planar phase, described in Sec. \[sec:CI\]. In the soliton liquid, we can use the usual bosonization of bosons for each of the two species, $\psi_m \sim \sqrt{\bar{n}_s/2}e^{-i\theta_m}$, $\psi_m^\dagger \psi_m^{\vphantom\dagger} \sim \bar{n}_s/2 + \partial_x \phi_m/\pi$ (and $\Psi^\dagger \Psi=0$), where $\phi_m$ is the dual field to the boson phase $\theta_m$. With this, we may conveniently represent the non-local operator $N(x) = \bar{n}_s x + \sum_m \phi_m/\pi$, where $\bar{n}_s$ is the mean soliton density. Note that since each soliton carries $S^z=1/2$ spread over the TST of width $3$, the average magnetization [*per site*]{} is $M= \frac{1}{3}\bar{n}_s/2 = \bar{n}_s/6$. 
Then $$\begin{aligned} \label{eq:81} B_{x,y} & \sim & \overline{B} + \epsilon_0 \cos [(\pi + 2\delta)x + \varphi], \\ S^z_{x,y} & \sim & \left(1+ \cos[(\pi + 2\delta)x + \varphi]\right) \Big( M + \frac{\partial_x \varphi}{6\pi} \nonumber \\ && + n_s\cos [\theta_+-\theta_- + 2q_0 x + \frac{2\pi}{3}y]\Big), \\ S^+_{x,y} & \sim & 2i\sin q_0 \sum_m e^{im(2q_0 x + \frac{2\pi}{3}y)} m e^{2i\theta_m} \nonumber \\ && + 2\cos q_0 \cos[(\pi + 2\delta)x + \varphi] \Big( e^{i(\theta_+ + \theta_-)} \nonumber \\ && + \sum_m e^{im(2q_0 x + \frac{2\pi}{3}y)} e^{2i\theta_m} \Big).\end{aligned}$$ Here $2\delta = \pi \bar{n}_s = 2\pi M/3$ and $\varphi = \phi_+ + \phi_-$. We can compare the above to the semi-classical result. In the semi-classical limit, the bosonic phases $\theta_\pm$ are weakly fluctuating, while $\phi_\pm$ and hence $\varphi$ are strongly fluctuating. Then the dominant terms in the spin operators, with smallest scaling dimension, are those which do not contain any of the strongly fluctuating phases, $$\begin{aligned} \label{eq:82} S^z_{x,y} & \sim & M + n_s\cos [\tilde\theta + 2q_0 x + \frac{2\pi}{3}y], \\ S^+_{x,y} & \sim & -4\sin q_0\, e^{i\theta} \sin [\tilde\theta + 2q_0 x + \frac{2\pi}{3}y],\end{aligned}$$ where we defined $\theta=\theta_++\theta_-$ and $\tilde\theta = \theta_+ - \theta_-$. This can be directly compared to Eqs.  of Sec. \[sec:CI\]. We see that the [*form*]{} of the spin operators is identical to that in the incommensurate coplanar state. Thus we can regard the $S^z=1/2$ chiral soliton liquid as another limit of the same phase. Let us turn to the case of the $S^z=3/2$ soliton liquid. As there is no chirality quantum number in this case, the state can be simply viewed as a Luttinger liquid without spin, and is expected to be described by a $c=1$ theory of a single massless boson. We argue that this $S^z=3/2$ soliton liquid is in fact another SDW phase very similar to the one obtained by the quasi-one-dimensional approach of Sec. \[subsec:sdw\]. 
While one might have expected to find the [ *identical*]{} SDW phase in this way, we instead find that the $S^z=3/2$ soliton liquid is an SDW state with a different SDW wavevector, in particular with $Q_y=0$, contrasting with the value $Q_y=2\pi/3$ obtained from the quasi-1d approach. If the $S^z=3/2$ liquid indeed occurs, therefore, we presumably require a phase transition to the other SDW state upon increasing magnetization. To observe the SDW structure of the $S^z=3/2$ soliton liquid, we again consider the spin correlations. Now we have no chiral solitons, $\psi_m^\dagger \psi_m^{\vphantom\dagger}=0$. This immediately implies that [*there are no low energy excitations with spin $S^z=1$ and hence no low energy content to the $S^\pm$ operators*]{}. Thus XY correlations decay exponentially in this phase, exactly as in the SDW phase. To examine the $S^z$ correlations, we can bosonize the non-chiral bosons. This gives $\Psi \sim \sqrt{\overline{n}_s}e^{i\vartheta}$, $\Psi^\dagger\Psi^{\vphantom\dagger} \sim \overline{n}_s + \partial_x \varphi/\pi$, with dual phases $\varphi,\vartheta$. Now $N(x) = \overline{n}_s x + \varphi/\pi$, and we note the relation between the magnetization and soliton density is changed to $M= \overline{n}_s/2$, since the solitons have spin $S^z=3/2$. We see then that $$\label{eq:83} S^z_{x,y} \sim (1 + \cos[(\pi + 2\delta)x + \varphi]) (M + \frac{\partial_x\varphi}{2\pi}).$$ Higher harmonics of the above cosine also appear in a more careful treatment. Note that the incommensurability is different in this case: $2\delta = \pi \overline{n}_s = 2\pi M$. Eq.  can be compared to the corresponding formula, Eq. , for the quasi-1d SDW state in the TST. We see that it is identical, save for the presence of a factor $2\pi y/3$ inside the cosine in the quasi-1d case. This shows that the two states have the same structure, save for a difference in the SDW wavevector, as mentioned above. 
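The bookkeeping behind the two soliton liquids can be collected in a short numerical sketch (illustrative only; the gap energies $\epsilon_{1/2}$ and $\epsilon_{3/2}$ are unknown, so the values fed in below are placeholders): the species whose Zeeman-shifted gap closes first enters at $h=2\epsilon_{1/2}$ or $h=2\epsilon_{3/2}/3$, and, once present, the two liquids reach a given magnetization per site with densities $\bar n_s = 6M$ ($S^z=1/2$) versus $\bar n_s = 2M$ ($S^z=3/2$).

```python
# Illustrative sketch of two relations quoted in the text; the energies
# eps_half and eps_three_half are placeholders, not computed values.

def first_soliton(eps_half, eps_three_half):
    """Which soliton species condenses first as the field h is raised."""
    h_half = 2.0 * eps_half                # S^z=1/2 gap closes at h = 2*eps_{1/2}
    h_three = 2.0 * eps_three_half / 3.0   # S^z=3/2 gap closes at h = (2/3)*eps_{3/2}
    return ("S^z=1/2", h_half) if h_half < h_three else ("S^z=3/2", h_three)

def soliton_density(M, species):
    """Density needed to reach magnetization M per site: M = n_s/6 vs M = n_s/2."""
    return 6.0 * M if species == "S^z=1/2" else 2.0 * M

print(first_soliton(1.0, 4.0))  # eps_{3/2} > 3*eps_{1/2}: ('S^z=1/2', 2.0)
print(first_soliton(1.0, 2.0))  # eps_{3/2} < 3*eps_{1/2}: S^z=3/2 enters first
print(soliton_density(0.25, "S^z=1/2"))  # 1.5
print(soliton_density(0.25, "S^z=3/2"))  # 0.5
```

The factor-of-three density difference at fixed $M$ is what makes interactions disfavor the $S^z=1/2$ liquid at larger magnetization, as noted earlier.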
Discussion {#sec:discussion} ========== In this paper, we have presented a comprehensive analysis of the field-anisotropy phase diagram of the three-leg spin-1/2 triangular spin tube, of interest primarily as an approximation to the corresponding two dimensional Heisenberg model on the anisotropic triangular lattice. Pronounced quantum effects, strongly deviating from the expectations based on classical analysis, occur throughout the phase diagram. In this section, we will discuss the implications of our results for two dimensions, and how robust these quantum effects are to other modifications to the model. Implications for two dimensions {#sec:discussion2D} ------------------------------- Throughout the paper we have commented on how results obtained in the one-dimensional TST geometry apply to the two-dimensional spin-1/2 system. Here we summarize these connections, with particular attention to the phase diagram in 2d. With a few exceptions, the phases we obtained for the TST have straightforward analogs in 2d, and consequently we expect the 2d diagram to be only slightly modified. For example, on the isotropic line, $R=0$, away from very small field, all the phases we found are precisely those expected from the semiclassical analysis of Refs. . We expect the semiclassical analysis to only work better in 2d, so the same coplanar and plateau states, and their incommensurate analogs for small anisotropy, $0<R\ll 1$, should occur there as well. We note that all coplanar states are intrinsically stabilized by quantum effects. Perhaps the most striking feature amongst these states is the 1/3 magnetization plateau, which extends well beyond the semiclassical regime in our phase diagram for the TST, Fig. \[fig:DMRG-plateau\]. In addition to the TST, we have also studied the 1/3 magnetization plateau for $L_y=6$ cylinders, see Figure \[fig:plateau6leg\]. 
Close to the isotropic limit, $R\ll 1$, the plateau width is only slightly changed by the increase in width from $L_y=3$ to $L_y=6$, and its value $\Delta h \approx 0.7 J$ agrees well with previous numerical studies [@richter2009; @tay2010variational; @sakai2011; @miyashita2004]. This trend in width is consistent with our picture that for small $R$ the phases proximate to the plateau are commensurate planar [*ordered*]{} ones in the 2d limit. The broken $U(1)$ symmetry of these phases makes them sensitive to infrared quantum fluctuations in the 1d geometry, since of course continuous symmetries are unbroken in 1d. Hence in the thinner cylinders, the commensurate plateau state competes slightly more effectively against the planar phases than in two dimensions, leading to a wider plateau for smaller circumference. In the intermediate region, $0.2\lesssim R \lesssim 0.7$, the trend is much more striking and [ *opposite*]{} to that for small $R$: the plateau width is seen to increase significantly compared with that for $L_y=3$. The same is true in the larger anisotropy limit, $0.7\lesssim R <1$, for which our preliminary results, based on the finite-size scaling for quasi-2D systems with $L_x \approx L_y$ (for such highly anisotropic systems, we were unable to converge the $L_x \rightarrow \infty$ limit), still suggest a finite 1/3 plateau, consistent with analytical arguments put forward in [@starykh2010extreme] and in Sec. \[sec:plateau\]. The increase of the plateau width is understood as being due to a greater stability of crystal phases in two dimensions. Our DMRG results strongly support the existence of the 2d magnetization plateau state for [*all*]{} values of spatial anisotropy $0 < R < 1$. 
Several experimental spin-1/2 materials with the triangular lattice structure have indeed been observed to support a 1/3 magnetization plateau, including the well documented material Cs$_2$CuBr$_4$ [@ono2003; @fortune2009] as well as Ba$_3$CoSb$_2$O$_9$, studied more recently [@shirata2012]. A notable exception is Cs$_2$CuCl$_4$, which is isostructural to Cs$_2$CuBr$_4$, but does [*not*]{} exhibit a magnetization plateau[@tokiwa2006]. In our opinion, as explained in detail in Ref. , the plateau is destabilized in this case by three-dimensional coupling, which is stronger (relative to the appropriate $J$) in the Cs-based magnet in comparison with the Br-based one [@ono2005], with perhaps strong Dzyaloshinskii-Moriya (DM) interactions in Cs$_2$CuCl$_4$ playing an additional role [@povarov2011]. The SDW phase dominates a large fraction of the phase diagram for the TST. This is an entirely quantum phase (since it requires modulation of the [*length*]{} of the static moments), which in 2d exhibits incommensurate collinear long-range order along the field direction. Being of quantum origin, one may wonder whether the SDW persists into 2d. Based on renormalization group arguments, discussed extensively in Ref. , we know that the SDW indeed must exist in the quasi-1d regime, $J'\ll J$, when inter-chain correlations are relatively weak. We expect that the region occupied by the SDW may be somewhat curtailed in 2d relative to that in the TST, but that it still is quite large. This is based on intuition and numerical evidence that inter-chain correlations remain suppressed for relatively large $J'$ due to frustration. Experimental verification of this novel magnetic state is clearly called for. In this regard we would like to point out a recent series of experiments on quasi-1d spin-1/2 material LiCuVO$_4$. 
While much of the interest in this material stems from the high-field nematic phase predicted [@mzh2010] and observed [@svistov2011] to occur near the saturation field, several experimental studies [@buttgen2007; @masuda2011; @svistov2012; @takigawa2012] have found strong evidence in favor of an incommensurate longitudinal SDW phase in the intermediate range of magnetic fields. To understand this finding better, it is important to realize that the inter-chain exchange in this material is of zig-zag (triangular) type albeit of predominantly ferromagnetic sign [@enderle2005]. The considerations of Section \[sec:weak-coupled\] make it clear that the SDW phase is not sensitive to the sign of inter-chain $J'$ and should appear in the model with ferromagnetic $J'$ as well, see for example Ref.  for explicit calculations. We thus would like to posit that a recent neutron scattering study [@mourigal2012], which observed longitudinal spin fluctuations but no transverse ones, is very much consistent with the SDW phase scenario. Like the spin nematic phase, which is expected to occur at much higher magnetic fields, the SDW phase does not support low-energy transverse spin excitations. It would also be interesting to seek evidence of an SDW state in Cs$_2$CuBr$_4$. The above aspects of the TST and 2d phase diagrams are qualitatively similar. Qualitative differences are expected at low and high fields. At zero field, the TST exhibits a dimerized phase, which we attribute (Sec. \[sec:lowfield\]) to quantum fluctuation effects specific to one dimension. In 2d, most of the zero field line should exhibit incommensurate spiral order, with a small region of collinear antiferromagnet at small $J'/J$, as argued in Ref. . At high field, near saturation, where the TST shows both coplanar and cone phases, we saw in Sec. \[sec:BS-2d-lattice\] that in 2d only the coplanar state occurs. This is a rather surprising result, since the coplanar state might be considered more quantum than the cone. 
This observation poses a tricky problem of connecting the limit of field approaching saturation at fixed small $J'$, where the coplanar state is expected, to the limit of vanishing $J'$ at fixed field slightly below saturation, where we instead expect a cone state. In 2d, therefore, a phase boundary must emanate from the saturation point at $J'=0$, and we do not presently understand where this boundary extends to. Putting together all these considerations, we can construct schematic phase diagrams for two dimensions. The two simplest possibilities we could construct are shown in Fig. \[fig:schematicpds\]. The quasi-1d analysis, which was carried out directly in 2d in Ref. , demands the cone, SDW, and plateau phases at non-zero field and small $J'/J$. It also requires a collinear anti-ferromagnetic state at zero field and small $J'/J$. This collinear state is expected to be rapidly destroyed in favor of the SDW as the field is imposed. It is likely to become canted as it does so, but in the absence of a detailed description of this narrow region descending from the collinear antiferromagnet at zero field, we label it “quasi-collinear” in the figures. Near the isotropic line, the semi-classical description requires commensurate (C) planar and incommensurate (IC) planar states, as well as the 1/3 plateau. Finally, near saturation, the dilute spin-flip approach becomes exact, and the solution of the BS equation requires the IC planar phase. The shaded phases and the boundaries containing circles are taken from preliminary DMRG results for more two-dimensional systems with $L_y=6,9$ lattice spacings around the circumference. The remaining phase boundaries are drawn arbitrarily to connect the known regions demanded by the above reasoning in the simplest possible manner consistent with scaling. The principal uncertainty in the diagrams is the extent of the cone phase. 
We expect it to occupy a relatively small portion of the phase space, despite the fact that it is the classical ground state everywhere below saturation except on the $R=0$ line! In the first schematic, Fig. \[fig:schematicpds\]a, the cone state occupies the minimum possible area, while a more semi-classical situation might be as shown in Fig. \[fig:schematicpds\]b. ### Comparison to other work {#sec:comp-other-work} It is interesting to compare our results to those of Tay and Motrunich[@tay2010variational], which is the only other comprehensive study of the full anisotropy-field phase diagram of which we are aware. We caution that a strict comparison is not possible because both their and our predictions for 2d are somewhat schematic, being based on conjectural extrapolation of results for the 1d TST (us) and finite clusters (them). Nevertheless, one notices immediately similarities between their schematic 2d phase diagram, Fig. 10 of their paper, and our Fig. \[fig:phase\]. First, the region near the isotropic line is in both cases quite close to semiclassical predictions. Small differences appear at low fields, where indeed quantum effects of the finite systems studied in both works are probably maximal. Second, near the saturation field, they also find a wide range of incommensurate planar phase (called incommensurate V in their study). Our analytical BS analysis indicates that this phase in fact extends over the full range of anisotropy, a fact which was not resolved in their diagram. Third, both studies indicate the robustness of the 1/3 plateau. As already mentioned above, our results for the width of the plateau $\Delta h \approx 0.8 J$ at the isotropic point $R=0$ agree well with those of Refs. . The more recent exact diagonalization study [@sakai2011] predicts smaller width, about $0.5 J$, but this is based on extrapolating $\Delta h$ from small-size clusters. For $R > 0$, Ref.  is the only one we can compare with, and qualitative agreement is quite good. 
Our DMRG work completes the phase diagram, demonstrating the $1/3$ plateau existence for all $J'>0$. The major distinction between the two works is in our finding of the SDW state in a wide field anisotropy range, where Tay and Motrunich postulate separate spin liquid, spiral (corresponding to our cone state), and quasi-1d regimes. In our work, renormalization group arguments rather clearly establish the SDW phase in the small $J'/J$ regime in 2d, which is the quasi-1d region of Tay and Motrunich. We think it likely that even in 2d, the SDW phase extends to $R \approx 0.5$. Suppressing the quantum effects {#sec:suppr-quant-effects} ------------------------------- As remarked above, we predict two types of quantum states – coplanar phases and collinear SDWs – in the 2d S=1/2 model. While remarkably robust in this case, these quantum phases can be suppressed by other changes to the model: larger spins $S>1/2$, three-dimensional coupling, and Dzyaloshinskii-Moriya (DM) interactions. ### Higher spin {#sec:highS} We first consider $S>1/2$, and find that the quantum phases are strongly suppressed. We begin with the vicinity of the saturation field. In Sec. \[sec:BS-2d-lattice\], we showed that for $S=1/2$ the system forms a coplanar state in this limit for all $0<J'/J \leq 1$. This is surprising since except for the isotropic case, the coplanar phase is not a classical ground state. Using the calculations sketched below, we find that with increasing $S$, the classical results are recovered, with the coplanar phase restricted to increasingly narrow region near the isotropic limit, where it occurs due to classical degeneracy. 
To do so we use the representation below [@batyev1986], which is more convenient than the Holstein-Primakoff one: $$\begin{aligned} \label{eq:laS1} S_{\bf\sf r}^\dagger &=& \sqrt{2 S} [1 + (K_s - 1) b_{\bf\sf r}^\dagger b_{\bf\sf r}] b_{\bf\sf r} , \\ S_{\bf\sf r}^z &=& S - b_{\bf\sf r}^\dagger b_{\bf\sf r} ,\nonumber\end{aligned}$$ where $K_s = \sqrt{1 - 1/(2S)}$. This expression reproduces the matrix elements of spin raising and lowering operators between states with different magnetization [*exactly*]{} within the two-magnon (two spin flip) subspace. The advantage of this form is that it requires no $1/S$ expansion. Note that for $S=1/2$ this representation reduces to , thanks to the hard-core condition $(b_{\bf\sf r})^2 =0$, while for large $S\gg 1$ we recover the Holstein-Primakoff asymptote $K_s - 1 \sim -1/(4 S)$. Note that for $S \geq 1$ the hard-core constraint is not required and as a result the $U$-term is absent from the two-magnon Hamiltonian [@kolezhuk2012]. The Hamiltonian within the two-magnon subspace retains the form in but now the interaction term is a bit more complicated, $$\begin{aligned} &&V({\bf\sf k},{\bf\sf k}',{\bf\sf q}) = \frac{1}{2}\Big(J({\bf\sf q}) + J({\bf\sf k}+{\bf\sf q}-{\bf\sf k}')\Big) \\ && - S K_s \Big( J({\bf\sf k}+{\bf\sf q}) + J({\bf\sf k}'-{\bf\sf q}) + J({\bf\sf k}) + J({\bf\sf k}')\Big), \nonumber\\ &&J({\bf\sf k}) = 2 J \cos[{\sf k}_x] + 4 J' \cos[\frac{{\sf k}_x}{2}] \cos[\frac{\sqrt{3}{\sf k}_y}{2}]. \nonumber\end{aligned}$$ Numerical solution of the BS equation for the two-dimensional triangular lattice, which proceeds along the same lines as in Sec. \[subsec:planar to cone\], finds that for higher spins $S\geq 1$, near the saturation field the coplanar phase near the isotropic limit is limited to a region $J'>J'_{\rm cr}>0$, with a cone phase obtaining instead for $J'<J'_{\rm cr}$. The critical value monotonically increases with $S$, taking the values $J'_{\rm cr}/J \approx 0.1, 0.5, 0.61$ for $S=1$, $3/2$ and $2$, respectively. 
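As a quick numerical consistency check on this representation (using nothing beyond the formula $K_s=\sqrt{1-1/(2S)}$ quoted above): at $S=1/2$ the prefactor $1+(K_s-1)\,b^\dagger b$ collapses to the hard-core projector $1-b^\dagger b$, and at large $S$ the combination $K_s-1$ approaches the Holstein-Primakoff value $-1/(4S)$.

```python
# Check the two limits of K_s = sqrt(1 - 1/(2S)) used in the text.
import math

def k_s(S):
    return math.sqrt(1.0 - 1.0 / (2.0 * S))

print(k_s(0.5))  # 0.0: the prefactor becomes 1 - n, the hard-core form
for S in (5.0, 50.0, 500.0):
    # 4S*(K_s - 1) tends to -1, i.e. K_s - 1 ~ -1/(4S) at large S
    print(S, 4.0 * S * (k_s(S) - 1.0))
```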
These findings show that the absence of the cone state for $S=1/2$ found here is a very unusual feature of the most quantum case. Larger, more classical spins do recover the classically expected state, although still in a limited range of $J'/J$. We next turn to the SDW phase. Since this state is rooted in the one-dimensional limit, we consider just the limit of weakly coupled chains, for $S>1/2$, and in particular $S=1$. We find that the SDW is completely absent in this case. To see this, we consider a magnetic field above the lower critical field $h_\Delta$ needed to overcome the non-zero Haldane gap ($\Delta_{s=1} \approx 0.41 J$ for $J' \ll J$). This turns the gapped (and, essentially, decoupled – see Refs. ) spin-1 chains into critical Luttinger liquids[@konik02; @fath03; @kolezhuk05]. It turns out that these critical chains are characterized by a Luttinger parameter $K = 1/(4 \pi {\cal R}^2)\geq 1$ for all values of the magnetic field above the gap-closing $h_\Delta$.[@konik02; @fath03] This immediately implies that the scaling dimension of the longitudinal spin density operator $\mathcal{S}_{\pi-2\delta}^z(x)$ in is $K > 1$ as well, which makes inter-chain SDW coupling in (which has twice this scaling dimension) strictly irrelevant. As a consequence the SDW phase does not occur in the quasi-1d limit. Since this was its most stable regime in the $S=1/2$ case, it may well be that the SDW phase is totally absent for $S=1$! It would be interesting to check this in future simulations. What replaces the SDW? The large value of $K$ implies an increased tendency to spin ordering transverse to the field direction, and indeed the twist term (4th term in ) is instead always relevant, leading to stabilization of the cone state. This result is supported by analytical [@kolezhuk05] and numerical [@mcculloch08] studies of the spin 1 zigzag ladder. For example, Ref.  
finds a finite vector chirality (that is, a cone state) for all values of the magnetization in the case of the $J_1 - J_2$ spin-1 chain, along the $J_1 = J_2$ line. Note that above, we found that the cone state was also stabilized for small $J'/J$ in the vicinity of saturation. It is likely then that the cone phase evolves smoothly between the 1d limit $J'/J = 0^+$ and the approach to saturation at finite $J'/J$. Moreover, the presence of the cone state at small $J'$ implies the absence of any magnetization plateau in that regime. The predictions appear quite similar to those of the semiclassical analysis of Ref. , which suggests that the full phase diagram for $S=1$ might be well described semiclassically. It is clear that in particular the 1/3 plateau must terminate at some finite (and perhaps not particularly small) value of the $J'/J$ ratio in this case. ### Three dimensional coupling {#sec:three-dimens-coupl} Another experimentally-relevant modification of the spin-$1/2$ Hamiltonian is three dimensional coupling. We consider the simplest case of unfrustrated antiferromagnetic inter-plane exchange interaction $J''$ between identical triangular layers. Provided the three dimensional coupling is unfrustrated, we expect that the particular form is not too important. Such an interaction is expected to make the spin system more classical and thus to promote the classical cone state over the coplanar one. Considering again the regime near saturation, one may readily solve the BS equation, appropriately modified to the three-dimensional situation. We indeed find that the high-field co-planar configuration changes to the cone one for sufficiently large $J''/J$ ratio. When the triangular lattice is isotropic, $J'=J$, this occurs for $(J''/J)_{\rm cr} \approx 0.2$, in agreement with the calculation in Ref. . Not unexpectedly, the critical $J''$ becomes smaller for weaker inter-chain exchange $J'$. 
For example, for $J' /J = 0.75$, as perhaps appropriate for Cs$_2$CuBr$_4$, we find $(J''/J)_{\rm cr} \approx 0.15$ while for $J' /J = 0.34$ (the Cs$_2$CuCl$_4$ case), $(J''/J)_{\rm cr} \approx 0.034$. One-dimensional scaling arguments, described in Appendix \[sec:BS-1d\], suggest that $(J''/J)_{\rm cr} \sim (J'/J)^2$ when $J'/J \ll 1$, in agreement with the numerical values listed above. In the 1d limit, $J'/J \ll 1$, introduction of unfrustrated $J''/J \ll 1$ disfavors SDW order in favor of a cone phase. This is discussed in detail in Sec. V of Ref. . Thus three-dimensional coupling, if unfrustrated, tends to remove all quantum features of the phase diagram. ### Dzyaloshinskii-Moriya interactions {#sec:dzyal-moriya-inter} A variety of DM interactions can be present in anisotropic triangular lattice systems, depending upon the crystal symmetry and microscopic details. This can lead to diverse effects which are difficult to discuss without being more specific. For the materials Cs$_2$CuCl$_4$ and Cs$_2$CuBr$_4$, the symmetry allowed DM interactions were obtained and discussed in detail in Ref. . Here we describe only the effects of the dominant DM term in those materials, which can be written as $$\label{eq:92} H_{\rm DM} = \sum_{x,y} {\bf D} \cdot {\bf S}_{x,y}\times \left( {\bf S}_{x-1,y+1} - {\bf S}_{x,y+1}\right),$$ in the notation of this paper, with the DM-vector ${\bf D}= D {\bf\hat a}$ oriented along the crystallographic $a$ axis, normal to the triangular planes. Though small, a non-zero $D$ has significant effects in both zero field and when a magnetic field is applied normal to the triangular plane, i.e. parallel to the DM-vector. In these situations, unlike the $J'$ interchain coupling, it is not frustrated either by the dominant chain interactions $J$ or by the applied magnetic field. 
It tends to favor the cone state (or a spiral in zero field), and can obliterate the more quantum coplanar and SDW phases completely if sufficiently strong in this field orientation. Indeed, with this field orientation, an arbitrarily weak DM coupling inevitably forces the state in immediate proximity to the saturated state to be a cone phase, for all values of $J'/J$. This occurs because the DM coupling splits the degeneracy of the two minimum energy spin wave modes, already at the single spin wave level, making a two-component condensate impossible when the spin flip magnons are sufficiently dilute. We note, however, that when the magnetic field is applied normal to the $a$ axis, i.e. in the triangular plane, it itself frustrates the DM interaction. In this situation, the DM interaction is largely ineffective and has only minimal perturbative effects on the spin correlations. These field orientations are therefore optimal for observing quantum effects. Experimental implications and future directions {#sec:future-directions} ----------------------------------------------- Our study indicates that a number of “quantum” ordered states may be found in $S=1/2$ anisotropic triangular lattice systems. These states are not so exotic as quantum spin liquids, and are well characterized by their symmetries and associated order parameters. They are instead quantum in the weaker sense that they cannot be obtained in the classical limit. Most notably, we obtained a SDW state whose order involves (quasi-)periodic modulation of the [ *length*]{} of the spin expectation value, along the field direction. We suggest this state occupies a wide swath of the field-anisotropy phase diagram, provided perturbations to our model are not too strong. The particular material Cs$_2$CuBr$_4$ appears a good candidate for the observation of the SDW state, since three-dimensional coupling is known to be relatively weak there, and experiments have already identified the 1/3 magnetization plateau. 
Direct observation of the SDW would consist of observing the incommensurate ordering wavevector evolving monotonically with field, for fields above and below the plateau, and correlating this wavevector with the average magnetization. We expect it to approximately follow the 1d relation, $q= \pi (1- M/M_s)$, away from the plateau. Given its 1d origin, one might well also expect that the inelastic spectra retain 1d features, such as spinon continua, in the SDW state and even in the plateau state above the gap. Of course, at low energy, in the vicinity of the SDW wavevector, we expect the collective phason mode to dominate. There must therefore be significant rearrangement of the spectra on passing from low to high energy. A more detailed understanding of the spectral evolution with energy, field, and anisotropy may make an interesting subject for future study. In Cs$_2$CuBr$_4$, many additional features suggestive of phase transitions were identified above the 1/3 plateau in the magnetization process with an in-plane field.[@fortune2009]  Our study indicates that few such transitions should be expected in the pure $J-J'$ model. Likely additional DM interactions (beyond the one given in Eq. ) and perhaps further-neighbor couplings are at play. Study of their effects is a possible avenue for more research. More generally, the richness and surprisingly quantum nature of field-anisotropy phase diagram of the relatively weakly frustrated triangular lattice suggests that the behavior on more frustrated lattices such as the kagomé and pyrochlore may be even more interesting. The methods used here should be helpful in attacking these problems. We would like to thank A. Chubukov, R. Coldea, A. Kolezhuk, M. Mourigal, F. Mila, M. Takigawa, and M. Zhitomirsky for discussions and communications. We acknowledge support from the Center for Scientific Computing at the CNSI and MRL: an NSF MRSEC (DMR-1121053) and NSF CNS-0960316. 
This research was supported in part by the National Science Foundation under Grants NSF DMR-1206809 (LB, RC, and HJ), NSF PHY11-25915 (HCJ), and NSF DMR-1206774 (OAS).

Sine-Gordon model and commensurate-incommensurate transitions {#sec:sine-gordon-model}
=============================================================

In this appendix, we summarize the commensurate-incommensurate transition (CIT) within the sine-Gordon model; this analysis appears in multiple places throughout the manuscript. We consider the sine-Gordon action in $d+1$ dimensions, with the form $$\begin{aligned} \label{eq:84} \mathcal{S}_{\rm sg} & = & \int d^d{\bf x}\, d\tau\, \Bigg\{ \frac{\kappa}{2} (\partial_\tau \vartheta)^2+ \sum_{\mu} \frac{\rho_{\mu} }{2} (\partial_\mu\vartheta)^2 \nonumber \\ && - \lambda \cos \left[ n (\vartheta-q x)\right] \Bigg\} ,\end{aligned}$$ where $\vartheta$ is the sine-Gordon field. We can write an alternative expression in terms of the shifted field, $\hat\vartheta=\vartheta - qx$, so that $$\begin{aligned} \label{eq:85} \mathcal{S}_{\rm sg} & = & \int d^d{\bf x}\, d\tau\, \Bigg\{ \frac{\kappa}{2} (\partial_\tau \hat\vartheta)^2+ \sum_{\mu} \frac{\rho_{\mu} }{2} (\partial_\mu\hat\vartheta)^2 \nonumber \\ && + \delta \partial_x \hat\vartheta - \lambda \cos \left[ n \hat\vartheta\right] \Bigg\} ,\end{aligned}$$ with $\delta = \rho_x q$. In general, large $\delta$ favors an incommensurate state, where the field $\hat\vartheta$ is non-uniform and unpinned, while for small $\delta$, a commensurate phase occurs, where $\hat\vartheta$ is pinned to a fixed value by the cosine term. The detailed nature of the sine-Gordon model depends upon dimensionality, so we treat the $d=1$ and $d \geq 2$ cases separately.

$d\geq 2$: mean-field transition {#sec:dgeq-2:-mean}
--------------------------------

For $d \geq 2$, the fluctuations of the phase field $\hat\vartheta$ are small even in the absence of the sine-Gordon term, i.e. for $\lambda=0$.
This can be seen from the fact that, already at the Gaussian level, the free boson propagator is non-divergent at small momentum for $d\geq 2$. This implies that the fluctuations of $\vartheta$ are bounded, and one can therefore treat the entire problem by a saddle point approximation. Moreover, one can show that fluctuation effects are negligible in the (quantum) CIT for $d\geq 2$. More formally, $D=d+1=2+1$ is the upper critical dimension for the CIT. Therefore, in this case we may proceed by simply minimizing the action in Eq. . The minimum action configuration is independent of the $d-1$ coordinates normal to $x$ and $\tau$. This gives $$\label{eq:15} {\mathcal S}_{sg} = L_\perp^{d-1}\beta E_{1d},$$ where $L_\perp$ is the system width in the directions normal to $x$, and $\beta$ is the length of the imaginary time integration. The one-dimensional energy is then $$\label{eq:86} E_{1d} = \int dx\, \left\{ \frac{\rho}{2} (\partial_x\hat\vartheta)^2 + \delta \partial_x\hat\vartheta - \lambda\cos ( n\hat\vartheta)\right\},$$ where $\rho = \rho_x$. Notice that $\delta$ only appears as a boundary term, which means that the energy depends on $\delta$ only through the winding number, $N = \left(\hat\vartheta( x = L ) - \hat\vartheta( x = 0)\right)\frac{n}{2\pi}$. Consider the case $N = 0$. Then, the solution is uniform, i.e. $\hat\vartheta = 2\pi k/n$, with $k = 0, 1, 2, \ldots$. With $N=1$, one obtains a well-known soliton solution of the sine-Gordon model[@chaikin2000principles], which reads $$\label{eq:ic7} \hat\vartheta(x) = \frac{4}{n} \arctan \left\{ e^{\pm n \sqrt{\frac{\lambda}{\rho}}(x - x_0)} \right\},$$ where $x_0$ is the location of the center of the soliton. Note that the soliton has a width $w \sim \sqrt{\rho/\lambda} $, and energy $E \sim \sqrt{\rho\lambda}$.
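As a quick consistency check, the soliton profile above can be verified numerically against the Euler-Lagrange equation of $E_{1d}$, $\rho\,\hat\vartheta'' = n\lambda \sin(n\hat\vartheta)$ (the $\delta$ term, a total derivative, drops out of the equation of motion). The following sketch (the sample values of $n$, $\lambda$, $\rho$ are arbitrary) checks the residual by finite differences, along with the $2\pi/n$ winding of an $N=1$ soliton:

```python
import math

def soliton(x, n, lam, rho, x0=0.0):
    """Soliton profile (with the + sign): (4/n) * arctan(exp(n*sqrt(lam/rho)*(x - x0)))."""
    return (4.0 / n) * math.atan(math.exp(n * math.sqrt(lam / rho) * (x - x0)))

def eom_residual(x, n, lam, rho, h=1e-4):
    """Residual of rho * theta'' - n * lam * sin(n * theta), via central differences."""
    t = lambda y: soliton(y, n, lam, rho)
    second = (t(x + h) - 2.0 * t(x) + t(x - h)) / (h * h)
    return rho * second - n * lam * math.sin(n * t(x))

n, lam, rho = 3, 0.7, 1.3
# residual of the Euler-Lagrange equation is small across the soliton core
res = max(abs(eom_residual(x, n, lam, rho)) for x in (-1.0, -0.3, 0.0, 0.4, 1.2))
# winding: theta(+inf) - theta(-inf) = 2*pi/n, i.e. winding number N = 1
winding = soliton(40.0, n, lam, rho) - soliton(-40.0, n, lam, rho)
```

The same check confirms that the soliton width is set by $\sqrt{\rho/\lambda}$, since the residual is only nonzero within a few such lengths of $x_0$.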
The soliton energy gives a critical value, $$\label{eq:87} \delta_c = 4 \sqrt{\rho \lambda}/\pi,$$ such that, for $\delta < \delta_c$, domain wall solitons cost positive energy and so are unfavorable, resulting in a commensurate wavevector. For $\delta > \delta_c$, it is favorable for solitons to be present, and the minimum energy configuration will be an array of solitons, which characterizes an incommensurate phase. Eq.  defines the [*location*]{} of the CIT phase boundary. We may also discuss its critical properties. On the commensurate side, no solitons are present, which implies the winding number $N=0$ precisely, and the ground state energy and field configuration are independent of $\delta$. Thus, there is no visible critical behavior in the ground state (hence in equal time correlations) in the commensurate phase. On the incommensurate side, however, the minimum energy configuration of $\hat\vartheta(x)$ depends upon $\delta$. It can be considered as an array of solitons, whose main characteristic is the spacing $\ell$ between solitons. This spacing is determined by the balance of the negative energy to introduce a soliton (which favors many solitons with a short spacing) and the repulsive energy of interaction between solitons (which favors large spacing). The repulsive interaction is exponentially small in the separation $\ell$ in units of the width $w$. Hence the energy of the array is $$\label{eq:88} E_{1d} = E_{1d}^{C} - (\delta-\delta_c) \frac{2\pi L}{n \ell} + c \sqrt{\rho\lambda} \frac{L}{\ell} e^{-\ell/w},$$ where $c$ is an unimportant constant, and $L/\ell$ is the total number of solitons. Minimizing this over $\ell$, one finds the critical behavior, to leading logarithmic accuracy, $$\label{eq:89} \ell \sim w \ln \left[\frac{\delta_c}{\delta-\delta_c}\right],$$ for $0<\delta-\delta_c \ll \delta_c$.
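The logarithmic spacing can be reproduced numerically from the array energy above: writing $a \propto \delta-\delta_c$ for the energy gain per soliton and $b \propto \sqrt{\rho\lambda}$ for the interaction scale, stationarity of the energy per unit length $-a/\ell + (b/\ell)e^{-\ell/w}$ gives $a = b\,e^{-\ell/w}(1+\ell/w)$. A minimal sketch, with illustrative values of $a$, $b$, $w$:

```python
import math

def soliton_spacing(a, b, w):
    """Solve the stationarity condition of the array energy,
    a = b * exp(-l/w) * (1 + l/w), for the spacing l by bisection."""
    f = lambda l: b * math.exp(-l / w) * (1.0 + l / w) - a
    lo, hi = 1e-3 * w, 200.0 * w
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

b, w = 1.0, 1.0
l1 = soliton_spacing(1e-6, b, w)   # a ~ (delta - delta_c), close to the transition
l2 = soliton_spacing(1e-8, b, w)
# to leading logarithmic accuracy l ~ w*ln(b/a): shrinking a by 100 adds ~ w*ln(100)
```

Reducing $a$ by a factor of $100$ increases $\ell$ by close to $w\ln 100$, as the leading-log form requires.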
The presence of the soliton array implies that the average gradient of the phase $\hat\vartheta$ is non-zero, which defines the [*incommensurability*]{} wavevector $\overline{q}$: $$\label{eq:90} \overline{q} = \overline{\partial_x \hat\vartheta} = \frac{2\pi}{n\ell} \sim \frac{1}{w\,|\ln[(\delta-\delta_c)/\delta_c]|} \Theta(\delta-\delta_c).$$ The incommensurability $\overline{q}$ in the incommensurate phase gives the shift of the ordering wavevector from its commensurate value. Other critical properties at the CIT in $d\geq 2$ are readily obtained from the results above. For example, the ground state energy density is simply the saddle point value of $E_{1d}$, which scales as $$\label{eq:91a} \frac{E}{L} \sim -\frac{\delta-\delta_c}{|\ln(\delta-\delta_c)|} \Theta(\delta-\delta_c).$$

$d=1$: quantum fluctuations {#sec:d=1:-quant-fluct}
---------------------------

In the case $d=1$, fluctuations of the phase field cannot be neglected. This can be anticipated from the Gaussian level result that, in the absence of a sine-Gordon term, the free boson Green’s function is logarithmically divergent at small momentum, signalling large fluctuations of $\vartheta$. Hence we must deal directly with the 1+1-dimensional action, $$\begin{aligned} \label{eq:91} && \mathcal{S}_{\rm sg} = \int \! dx\, d\tau\! \Bigg\{ \frac{\kappa}{2} (\partial_\tau \hat\vartheta)^2+ \frac{\rho}{2} (\partial_x\hat\vartheta)^2 + \delta \partial_x \hat\vartheta - \lambda \cos n\hat\vartheta \Bigg\} .\nonumber \\\end{aligned}$$ Once again, $\delta$ is the coefficient of a pure boundary term, which simply counts the number of solitons in the system. A finite density of solitons will be generated, provided the energy of a soliton for $\delta=0$ is compensated by this boundary energy, which equals $2\pi \delta/n$. Thus we need the energy of a soliton at $\delta=0$, i.e. in the pure quantum sine-Gordon model. We estimate this as follows.
The scaling dimension of the cosine term, $\Delta_n$, is easily calculated, and is equal to $$\label{eq:24} \Delta_n = \frac{n^2}{4\pi \sqrt{\kappa\rho}}.$$ The cosine is relevant when $\Delta_n<2$, and irrelevant if $\Delta_n>2$. When it is irrelevant, there is no pinning of the phase field at low energies. A state of this type is known as a “floating phase”, and because of the lack of pinning, the state becomes immediately incommensurate for any non-zero $\delta$, i.e. $\delta_c=0$, and there is no CIT. When the cosine is relevant, then, for $\delta=0$, the phase is pinned at low energies, and the energy of a soliton is non-zero. We need to estimate this energy to locate the value $\delta_c$ which defines the CIT. We do this by renormalization group (RG) arguments. Renormalizing out to a length $\xi$, the cosine is reduced by fluctuations by an amount proportional to $\xi^{-\Delta_n}$, so $\lambda_{\rm eff} \sim \lambda \xi^{-\Delta_n}$. For a possible soliton of width $\xi$, the energy cost is of order $$\label{eq:25} \epsilon_s \sim \frac{\rho}{\xi} \left(\frac{2\pi}{n}\right)^2 - \lambda_{\rm eff} \xi.$$ The actual soliton size is determined by optimizing this over $\xi$, which gives $$\label{eq:26} \xi \sim \left( \frac{\rho}{\lambda n^2}\right)^{\frac{1}{2-\Delta_n}},$$ and thus an energy cost for the soliton of order $$\label{eq:27} \epsilon_s \sim \lambda^{\frac{1}{2-\Delta_n}} \left( \frac{\rho}{n^2}\right)^{\frac{1-\Delta_n}{2-\Delta_n}}.$$ This energy should equal the energy gain $2\pi\delta_c/n$ from the boundary term at the CIT, which gives $$\label{eq:28} \delta_c \sim \sqrt{\lambda\rho} \left(\frac{\lambda}{\rho}\right)^{\frac{\Delta_n}{4-2\Delta_n}}.$$ Note that this approaches the mean-field result of the previous subsection when $\Delta_n \rightarrow 0$, and becomes very suppressed when $\Delta_n \rightarrow 2^-$ (since we must assume $\lambda<\rho$ for consistency of the treatment).
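The $\lambda^{1/(2-\Delta_n)}$ scaling of the soliton energy can be checked by carrying out the balancing of the two terms above numerically: solve $A\rho/\xi = \lambda\,\xi^{1-\Delta_n}$ for $\xi$ (with $A=(2\pi/n)^2$) and extract the exponent of $\epsilon_s \sim A\rho/\xi$ in $\lambda$. A sketch, with illustrative values $\rho=1$, $n=2$, $\Delta_n = 3/2$:

```python
import math

def soliton_energy(lam, rho=1.0, n=2, delta_n=1.5):
    """Balance the gradient cost A*rho/xi against the renormalized gain
    lam * xi**(1 - delta_n) (= lambda_eff * xi); bisection in log(xi)."""
    A = (2.0 * math.pi / n) ** 2
    g = lambda xi: A * rho / xi - lam * xi ** (1.0 - delta_n)
    lo, hi = 1e-4, 1e16
    for _ in range(200):
        mid = math.sqrt(lo * hi)      # geometric bisection
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    xi = math.sqrt(lo * hi)
    return A * rho / xi

delta_n = 1.5
# extract the exponent of eps_s ~ lam**(1/(2 - delta_n)) from two sample values
p = (math.log(soliton_energy(1e-3)) - math.log(soliton_energy(1e-5))) / math.log(100.0)
```

For $\Delta_n = 3/2$ the extracted exponent is $1/(2-\Delta_n) = 2$, and the construction fails as it should when $\Delta_n \geq 2$, where no balance point exists.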
We now turn to the critical behavior, which in 1+1 dimensions is a storied problem in critical phenomena. It is sometimes referred to as a Pokrovsky-Talapov transition, due to the solution by those authors.[@PhysRevLett.42.65]  We recapitulate the essence of the argument. As in the mean-field case, for $\delta<\delta_c$, there are no solitons in the system, and the ground state energy is independent of $\delta$, i.e. there is no sign of criticality in any static quantity. However, the excitation gap for creating a soliton vanishes linearly with $\delta_c-\delta$. For $0<\delta-\delta_c \ll \delta_c$, we expect a low density of solitons to be present in the system, again determined by the balance of the (negative) single soliton energy and the repulsive soliton-soliton interactions. We must, however, in this case treat the problem quantum mechanically. In particular, we must consider the effects of interactions properly in the low density limit. In this limit, the kinetic energy and momentum of individual solitons are vanishingly small, and well-known results for low energy scattering apply. In particular, for short-range repulsively interacting particles in one dimension, the probability of transmission [*vanishes*]{} in the low energy limit. Thus effectively, regardless of the microscopic strength of the interaction, or of its short distance structure, the solitons behave at low densities as though they were [*hard core*]{} particles, which cannot pass one another. To model this behavior, we can treat the solitons as [*fermions*]{}. Interactions at longer distances beyond the local hard core are weak and unimportant, so the fermions are effectively [*free*]{}. The free fermion problem is trivially soluble, so we can easily obtain the critical behavior. When $\delta>\delta_c$, we simply fill the negative energy fermion states to form a Fermi sea.
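This Fermi-sea filling can be carried out numerically. In the dilute limit the solitons have a non-relativistic dispersion $k^2/2m$ measured from the soliton gap, with an effective chemical potential $\mu = 2\pi(\delta-\delta_c)/n$ supplied by the boundary term; the ground state energy per length is then a direct integral over the filled states. A sketch in units with $m=1$ and $n=2$:

```python
import math

def energy_density(dd, n=2, m=1.0, npts=4000):
    """Fermi-sea energy density: integrate (k^2/2m - mu) over |k| < k_F,
    with mu = 2*pi*dd/n the boundary-term gain per soliton (dd = delta - delta_c)."""
    mu = 2.0 * math.pi * dd / n
    kf = math.sqrt(2.0 * m * mu)          # Fermi momentum from E_sol(k_F) = 0
    h = 2.0 * kf / npts
    total = 0.0
    for i in range(npts):                 # midpoint rule over [-k_F, k_F]
        k = -kf + (i + 0.5) * h
        total += (k * k / (2.0 * m) - mu) * h
    return total / (2.0 * math.pi)

dd = 0.01
e_num = energy_density(dd)
# closed form of the same integral: E/L = -(2/(3*pi)) * sqrt(2m) * mu**(3/2)
e_exact = -(2.0 / (3.0 * math.pi)) * math.sqrt(2.0) * (2.0 * math.pi * dd / 2) ** 1.5
ratio = energy_density(4 * dd) / energy_density(dd)   # ~ 8 for a 3/2 power law
```

Quadrupling $\delta-\delta_c$ multiplies the energy density by $8$, the hallmark of the $(\delta-\delta_c)^{3/2}$ singularity of the free-fermion picture.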
The sine-Gordon model has Lorentz invariance, so the dispersion of the solitons must be relativistic, hence the energy for a single soliton is $$\label{eq:86a} E_{\rm sol} = \sqrt{\epsilon_s^2 + v^2 k^2} - \frac{2\pi}{n} \delta,$$ where the velocity $v=\sqrt{\rho/\kappa}$, and $\epsilon_s = 2\pi \delta_c/n$. The Fermi momentum $k_F$ is determined by the condition $E_{\rm sol}=0$. It will be small near the CIT, so we may expand the relativistic dispersion into its non-relativistic limit $$\label{eq:87a} E_{\rm sol}(k_F) = -\frac{2\pi}{n} (\delta-\delta_c) + \frac{k_F^2}{2m} = 0,$$ with $m= \epsilon_s/v^2$. This determines the Fermi momentum $$\label{eq:88a} k_F = \left[ \frac{4\pi m}{n} (\delta-\delta_c) \right]^{1/2} \sim \sqrt{\delta-\delta_c}.$$ The density of solitons is just $k_F/\pi$, as usual for spinless fermions, so the incommensurability is $$\label{eq:89a} \overline{q} = \frac{2\pi}{n} \frac{k_F}{\pi} = \frac{2k_F}{n} \sim \sqrt{\delta-\delta_c}.$$ The square-root behavior is quite distinct from the logarithmic one in $d\geq 2$. We may also easily obtain the behavior of the ground state energy density, as the total energy of the Fermi sea, $$\begin{aligned} \label{eq:90a} \frac{E}{L} & = & \int_{-k_F}^{k_F} \frac{dk}{2\pi} \left[ \frac{k^2}{2m} - \frac{2\pi}{n} (\delta-\delta_c)\right] \nonumber \\ &\sim & -(\delta-\delta_c)^{3/2} \Theta(\delta-\delta_c).\end{aligned}$$ Many more results, e.g. for correlations in the incommensurate phase, can be readily obtained from the free fermion formulation; we refer the reader to the literature for these.

Detailed calculations of BS {#app:ideal2d}
===========================

In this appendix, we present our solutions to the Bethe-Salpeter (BS) equation in Eq. . This equation applies only near the saturation field, where the system can be modeled as dilute (hard core) bosons. We substitute our ansatz, Eq. , into the BS equation. With the constraint equation, Eq.
, which enforces $s=1/2$, we obtain a set of linear equations for the constants $A_i$, which can be written in a matrix form as $$\label{eq:mat} \begin{pmatrix} \tau_{11} & \tau_{12} &\tau_{13} &\tau_{14} & \tau_{15} &\tau_{16} & \tau_{17} \\ 2J\tau_{21} & 2J\tau_{22}+1 &2J\tau_{23} &2J\tau_{24} & 2J\tau_{25} &2J\tau_{26} & 2J\tau_{27}\\ 2J\tau_{31} & 2J\tau_{32} &2J\tau_{33}+1 &2J\tau_{34} & 2J\tau_{35} &2J\tau_{36} &2J\tau_{37}\\ 2J'\tau_{41} & 2J'\tau_{42} &2J'\tau_{43} &2J'\tau_{44}+1 & 2J'\tau_{45} &2J'\tau_{46} & 2J'\tau_{47}\\ 2J'\tau_{51} & 2J'\tau_{52} &2J'\tau_{53} &2J'\tau_{54} & 2J'\tau_{55}+1 &2J'\tau_{56} &2J'\tau_{57} \\ 2J'\tau_{61} & 2J'\tau_{62} &2J'\tau_{63} &2J'\tau_{64} & 2J'\tau_{65} &2J'\tau_{66}+1 &2J'\tau_{67} \\ 2J'\tau_{71} & 2J'\tau_{72} &2J'\tau_{73} &2J'\tau_{74} & 2J'\tau_{75} &2J'\tau_{76} &2J'\tau_{77}+1 \\ \end{pmatrix} \left( \begin{array}{c} A_0 \\ A_1 \\A_2 \\A_3 \\A_4 \\A_5 \\A_6 \end{array} \right)= \left( \begin{array}{c} 1 \\ 2J \\ 0 \\ 2J' \\ 0 \\2J' \\0\end{array} \right)$$ where we have defined $$\begin{aligned} \label {eq:ru1} \tau_{lm}(k,k';\Omega) &\equiv& \int_{q}\frac{{\bf T}_l(q){\bf T}_m(q)}{\epsilon(k+q)+\epsilon(k'-q)+\Omega} \\ {\bf T}(q)&=&(1,\cos q_x , \sin q_x , \cos q_y, \sin q_y, \nonumber \\ &&\cos (q_x-q_y),\sin(q_x-q_y)),\end{aligned}$$ and $\Omega \propto | h - h_{\rm sat}|$. Although the $\tau_{lm}'s$ are integrals over simple trigonometric functions and other known quantities, e.g. the dispersion, these integrals are divergent in both one and two dimensions and must be treated with care. It is possible, however, to analyze them asymptotically. Once these integrals are evaluated, we can solve for the constants $A_i$ to obtain $\Gamma(q)$ from our ansatz, Eq. . In the next two subsections, we take the reader through our asymptotic analysis. 
Asymptotic behavior of $\tau_{lm}$ for the 2d case {#app:2d}
--------------------------------------------------

In this section, we calculate the $\tau_{lm}$’s for the 2d case. As aforementioned, we are interested in performing asymptotic analysis in the limit $\Omega \to 0$, as the full integrals are too complicated to evaluate fully. We can expand the integrals in their first two leading terms, $B_{lm} \ln(\Omega)+C_{lm}$, where the constants $B_{lm}, C_{lm}$ are independent of $\Omega$. We can consider two cases: one with the same incoming momenta, i.e. the cone phase with $\Gamma_1 = \Gamma({\bf Q},{\bf Q},0) = \Gamma(-{\bf Q},-{\bf Q},0)$, and the other with different incoming momenta, i.e. the coplanar phase with $\Gamma_2 = \Gamma({\bf Q}, -{\bf Q}, 0) + \Gamma({\bf Q},-{\bf Q},-2{\bf Q})$. Here, the wavevector ${\bf Q}$ minimizes the dispersion relation in Eq. , which can now be substituted into Eq. . After some algebraic simplifications, we obtain $$\label{eq:ru2} \tau_{lm}=\frac{1}{4\pi^2}\int_0^{2\pi} dq_x \int_0^{2\pi} dq_y\frac{{\bf T}_l(p){\bf T}_m(p)}{a+b\cos q_y}.$$ The exact forms of $a, b$ will depend on whether the incoming momenta are the same or different. In this appendix, we present only our results for $l = m = 1$, in which case we can integrate analytically over $q_y$ in Eq. , and obtain $$\label{eq:ru3} \tau_{11}=\frac{1}{2\pi}\int_0^{2\pi} dq_x \frac{1}{\sqrt{a^2-b^2}}.$$ To proceed further, we need to specify the exact form of $a$ and $b$. 1. [*Same incoming momenta*]{}: For the same incoming momenta, $a,b$ take on the following form $$\begin{aligned} \label{eq:ru3a} && a= \Omega+J(2+j^2-(2-j^2)\cos q_x) \nonumber \\ && b= J (-2j^2 \cos(q_x/2)),\end{aligned}$$ where we define $j \equiv J'/J$. The integrand diverges near $q_x=0$ like $1/q_x$ in the limit $\Omega \rightarrow 0$, and thus the integral is logarithmically divergent.
After some analysis, the integral takes on the following form, $$\label{eq:ru4} \tau_{11} \sim -\frac{1}{2\pi j \sqrt{4-j^2}}\ln(\Omega)+\frac{\ln(2j(4-j^2))}{\pi j \sqrt{4-j^2}} .$$ 2. [*Different incoming momenta*]{}: For this case, $a,b$ are as follows $$\begin{aligned} \label{eq:ru5} a&=& \Omega+J [2+j^2+(-2+j^2) \cos(q_x) \nonumber \\ &+&j\sqrt{4-j^2} \sin(q_x)] \nonumber \\ b&=& -2Jj \left[j\cos(q_x/2)+\sqrt{4-j^2}\sin(q_x/2)\right].\end{aligned}$$ The integrand now has two divergent points at $q_{x}=0$ and $q_{x}=-2\arccos(1-j^2/2)$. Therefore, in comparison with the previous case, the logarithmic term doubles, and the integral takes on the form $$\begin{aligned} \label{eq:ru6} \tau_{11} \sim -\frac{1}{\pi j \sqrt{4-j^2}}\ln(\Omega)+\frac{2\ln(j(4-j^2))}{\pi j \sqrt{4-j^2}} .\end{aligned}$$

$\tau_{lm}$ for the TST case {#app:tst}
----------------------------

In computing the $\tau_{lm}$’s for the TST, we turn the two-dimensional integral in the previous section into a single integral over $q_x$ and a sum over $q_y$. The asymptotic behavior in the TST case differs from the 2d one in that, in the limit $\Omega \to 0$, the integrals diverge as $1/\sqrt{\Omega}$. Therefore, the two leading terms of the integrals are $B_{lm}/\sqrt{\Omega}+C_{lm}$, where again, $B_{lm}, C_{lm}$ are independent of $\Omega$. We present our results for $l=m=1$ for the two cases of the same and differing incoming momenta. 1. For the [*same incoming momenta*]{}, we obtain the following expression $$\label{eq:7ru} \tau_{11} = \frac{1}{6 \sqrt[4]{j^2-j+1} \sqrt{\Omega }}+\frac{4}{3 \sqrt{9 j^2+24 \sqrt{(j-1) j+1} j-24 j+\frac{36 (j-1)}{(j-1) j+1}+36}}+O(\sqrt{\Omega}),$$ where again, $j\equiv J'/J$. 2.
We now compute $\tau_{11}$ for the case of [*differing incoming momenta*]{}, in which case the integral evaluates to $$\label{eq:ru8} \tau_{11}= \frac{1}{3 \sqrt[4]{(j-1) j+1} \sqrt{\Omega }}+\frac{1}{3 \sqrt{3} \sqrt{j\left(3 j+4 \sqrt{(j-1) j+1}-4\right)}}+O(\sqrt{\Omega}).$$

Weakly coupled chains limit {#sec:BS-1d}
---------------------------

Here, we analytically check the results of Sec. \[sec:BS-2d-lattice\] in the limit of weakly coupled chains, $J' \ll J$. Recall that the calculation was done for a full two-dimensional lattice. Hereafter, we will use Cartesian coordinates, $({\sf x},{\sf y})$, for convenience. In this limit, we can express the spin flip operator as a continuous function of ${\sf x}$, which is along the chain direction, while keeping the chain index ${\sf y} \in {\cal Z}$ discrete. Then, from Eq. , we write this operator as $\Psi_{\sf y}({\sf x}) \sim S^+_{{\sf y}, \pi}({\sf x})$, where its low energy theory is described by the following action $$\begin{aligned} \label{eq:bs1} {\mathcal S}_{\rm 1d} & = & \sum_{\sf y} \int d {\sf x} d\tau \Big\{ \Psi_{\sf y}^\dagger ( \partial_\tau -\frac{1}{2m}\partial_{\sf x}^2 - \mu)\Psi_{\sf y} \nonumber\\ && - t (\Psi_{\sf y}^\dagger i \partial_{\sf x} \Psi_{{\sf y}+1} + {\rm h.c.}) + u \Psi_{\sf y}^\dagger \Psi_{\sf y}^\dagger \Psi_{\sf y} \Psi_{\sf y} \nonumber\\ && + v \Psi_{\sf y}^\dagger \Psi_{{\sf y}+1}^\dagger \Psi_{{\sf y}+1} \Psi_{\sf y} \Big\}.\end{aligned}$$ The spin-flip (magnon) mass, $m = 1/J$, follows from the quadratic dispersion of the magnon mode near momentum $\pi$ in a fully polarized chain. Additional interaction terms describe the hard-core constraint ($u$ term) as well as the transverse ($t = J'_{xy}/2$) and longitudinal ($v = 2 J'_z$) parts of the interchain exchange interaction $J'$. Note that the $t$-term contains a spatial derivative with respect to ${\sf x}$, which reflects the frustration of the interchain exchange by the triangular geometry.
In addition, this term contains a factor of $i$ from the staggered factor $(-1)^x = e^{i \pi x}$ in Eq. , and from the fact that ${\sf x}$ takes half-integer values on odd chains (see Eq. , Fig. \[fig:lattice\](a), and Appendix D6 of Ref. ). We can analyze each term of Eq.  through simple dimensional analysis, which shows all these terms to be relevant under RG. Denoting the spatial scale along ${\sf x}$ as $L$, we conclude that $\tau \sim L^2$, $\Psi_{\sf y}$ scales as $1/\sqrt{L}$, while the three interaction terms, $t$, $u$, and $v$, scale as $L$. Hence, these are [*relevant*]{} interactions and must be included in our analysis of the low energy theory. We can Fourier transform Eq.  and write the Hamiltonian that corresponds to this action, $$\begin{aligned} \label{eq:bs2} H_{\rm 1d} &=& \sum_{\bf\sf k} \Psi_{\bf\sf k}^\dagger (\frac{{\sf k}_x^2}{2 m} + 2 t {\sf k}_x \cos[{\sf k}_y] - \mu)\Psi_{\bf\sf k} \nonumber\\ &&+ \frac{1}{2N}\sum_{{\bf\sf k},{\bf\sf k}',{\bf\sf q}} V({\bf\sf k},{\bf\sf k}',{\bf\sf q}) \Psi_{{\bf\sf k} + {\bf\sf q}}^\dagger \Psi_{{\bf\sf k}' - {\bf\sf q}}^\dagger \Psi_{{\bf\sf k}'} \Psi_{{\sf\bf k}}.\end{aligned}$$ Here $V({\bf\sf k},{\bf\sf k}',{\bf\sf q}) = V({\bf\sf q}) = 2 u + 2 v \cos[{\sf q}_y]$. Note that while the range of ${\sf k}_x$ is not restricted, $-\infty < k_x < \infty$, that of ${\sf k}_y$ is limited by the lattice, $-\pi \leq {\sf k}_y \leq \pi$. The single particle dispersion has two degenerate minima, at ${\bf\sf Q}_1 = (-2 t m, 0)$ and ${\bf\sf Q}_2 = (2 t m, \pi)$. We can now compute the renormalized couplings $\Gamma_1, \Gamma_2$ in a similar manner as in the previous subsections. However, we alter our ansatz of the BS equation, Eq.
, to take the form $\Gamma({\bf\sf q}) = A_0 + A_1 \cos[{\sf q}_y]$, because the odd contribution, $\propto \sin[{\sf q}_y]$, vanishes under the integral as the denominator in Eq.  is even for all combinations of incoming and transferred momenta. Computing $\Gamma_1, \Gamma_2$ requires one to solve two linear equations for $A_0, A_1$, which involve 2d integrals over functions with denominators like $[{\sf k}_x^2/m + 16 m t^2 \sin^2[{\sf k}_y/2] + \Omega]$ (for $\Gamma_1$) and $[({\sf k}_x - 4 m t \sin^2[{\sf k}_y/2])/m + 4 m t^2 \sin^2[{\sf k}_y] + \Omega]$ (for $\Gamma_2$). We first evaluate these integrals analytically by separating out the leading terms in $\ln[\Omega/(mt^2)]$, then taking the limit $u \to \infty$ to, again, enforce the $s=1/2$ constraint. The expressions are as follows $$\begin{aligned} \label{eq:bs3} \frac{\Gamma_1}{8 \pi t} &=& \frac{1 + \frac{4}{3}\gamma}{(1 + \frac{4}{3}\gamma) \ln\Upsilon + 4 \ln 2 + 4 \gamma (\frac{4}{3}\ln 2 -1)},\\ \frac{\Gamma_2}{8 \pi t} &=& \frac{1}{\ln\Upsilon + 2 \ln 2}, \label{eq:bs4}\end{aligned}$$ where $\gamma = v/(\pi t)$ and $\Upsilon = 16 m t^2/\Omega$. Given these forms, we can conclude that $\Gamma_1 > \Gamma_2$ for $\gamma \geq \gamma_c = 3 \ln 2/(6 - 4 \ln 2) \approx 0.644$. Since we are considering the isotropic Heisenberg model, where $\gamma = 4/\pi > \gamma_c$, we observe that the coplanar fan state prevails over the cone state in the $J' \ll J$ limit, in agreement with the full lattice approach in Eq. , once the parameters $m$,$t$,$v$ are expressed in terms of exchange integrals. With this approach, we can also estimate the width of the planar fan state near saturation field through simple dimensional analysis of Eq. . Since the chemical potential, $\mu = h_{\rm sat} - h$, scales as $L^2$ and the $t$ interaction scales as $L$, the phase boundary between the planar and the lower-field phase must scale as $\Delta h \sim (J')^2/J$. 
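The quoted value of $\gamma_c$ follows from equating the two expressions for $\Gamma_1$ and $\Gamma_2$ above: the $\ln\Upsilon$ terms cancel, leaving $\gamma_c = 3\ln 2/(6-4\ln 2)$ independent of $\Upsilon$. A quick numerical check:

```python
import math

def gammas(gamma, upsilon):
    """Gamma_1 and Gamma_2, in units of 8*pi*t, from the expressions above."""
    ln_u, ln2 = math.log(upsilon), math.log(2.0)
    g1 = (1 + 4 * gamma / 3) / ((1 + 4 * gamma / 3) * ln_u
                                + 4 * ln2 + 4 * gamma * (4 * ln2 / 3 - 1))
    g2 = 1.0 / (ln_u + 2 * ln2)
    return g1, g2

gamma_c = 3 * math.log(2.0) / (6 - 4 * math.log(2.0))   # ~ 0.644
g1, g2 = gammas(gamma_c, 1e4)        # the couplings coincide at gamma_c for any Upsilon
h1, h2 = gammas(4 / math.pi, 1e4)    # Heisenberg value gamma = 4/pi: Gamma_1 > Gamma_2
```

At the Heisenberg point, $\Gamma_1 > \Gamma_2$ for any large $\Upsilon$, so the conclusion that the coplanar fan state wins does not depend on the precise value of $\Omega$.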
The boundary $\Delta h \sim (J')^2/J$ separates the planar fan phase from the cone phase, a region in which a standard bosonization description of Sec. \[sec:weak-coupled\] becomes appropriate. Details of this analysis are presented in Appendix \[sec:cone\]. Similar reasoning allows one to estimate the stability of the planar fan state with respect to inter-layer coupling $J''$, which is always present in real materials. It is clear that (non-frustrated) inter-layer coupling corresponds to adding a simple single particle hopping term between layers with different ${\sf z}$ coordinates, $\int d\tau d{\sf x} \sum_{\sf z} J'' (\Psi^\dagger_{{\sf y},{\sf z}} \Psi_{{\sf y},{\sf z}+1} + {\rm h.c.})$, to the action in Eq. . Such a term also scales as $L^2$, which implies that the phase boundary between the planar and the cone phase in the $J' - J''$ plane takes on a quadratic shape, $J'' \sim (J')^2/J$.

Additional one dimensional analysis {#sec:addit-one-dimens}
===================================

The purpose of this appendix is to show that the TST geometry with 3 legs is unique in that the renormalized couplings generated through RG produce significantly different physics for $N=3$ compared to that of $N>3$, in the limit $J' \ll J$. Moreover, we show that the arguments given below further support our claims in Sec. \[sec:lowfield\] for the existence of a dimerized state near low field. Finally, we conclude this appendix with a more thorough analysis of the cone state near high fields.

Zero field analysis by quasi-1d methods {#sec:zero-field-analysis}
---------------------------------------

We start with the zero field case of Eq.  in the limit of decoupled chains $J' \ll J$, where each Heisenberg chain can be bosonized using the Wess-Zumino-Witten SU(2)$_1$ theory, with central charge $c = 1$.
In this theory, the spin operator can be decomposed into its uniform ${\mathbf{M}}_y(x) = {\mathbf{J}}_{R,y}(x)+{\mathbf{J}}_{L,y}(x)$ and staggered ${\mathbf{N}}_y(x)$ magnetizations $$\label{low:1} \mathbf{S}_{x,y} \to a_0 \left[ \mathbf{M}_y(x) + (-1)^x \mathbf{N}_y(x) \right],$$ and its scalar product can be written in the continuum limit $$\label{low:2} \mathbf{S}_{x,y} \cdot \mathbf{S}_{x+1,y} \to (-1)^x \epsilon_y(x),$$ where $\epsilon_y(x)$ is the staggered dimerization. With $J' = 0$, this theory describes the Luttinger liquid fixed point of the decoupled chains. The scaling dimensions of these continuum operators, ${\mathbf{M}}, {\mathbf{N}},$ and ${\epsilon}$, determine the relevance of each operator as it perturbs this fixed point. The uniform magnetization has scaling dimension 1, whereas both the staggered spin magnetization and the staggered dimerization have scaling dimension 1/2. These three continuum operators form a closed operator algebra with well-defined operator product expansions (OPEs) used widely in literature [@gogolin2004bosonization; @senechal2004introduction; @starykh2004dimerized; @starykh2005anisotropic; @starykh2007ordering; @schnyder2008spatially; @starykh2010extreme]. For instance, the product of ${\mathbf{J}}_R$ and ${\mathbf{N}}$ can be expanded as $$\label{eq:jn} J^a(x,\tau) N^b (x', \tau') = \frac{i \epsilon^{abc} N^c(x,\tau) - i \delta^{ab} {\epsilon}(x,\tau)}{4\pi\left( v (\tau-\tau') - i (x-x') + a_0 \sigma_\tau \right)},$$ where $\tau$ is the imaginary time, $v = \pi J a_0/2$ is the spin velocity, and $a_0$ is the short-distance cutoff. Let us now consider interchain Hamiltonian perturbing the decoupled Heisenberg chains, $$\label{low:3} V = J' \sum_{y = 1}^3 \sum_x \mathbf{S}_y(x) \left( \mathbf{S}_{y+1}(x) + \mathbf{S}_{y+1} (x-1) \right).$$ Perturbation theory is formulated by expanding the partition function $Z = \int e^{-S_0-\int d\tau V}$ up to quadratic order, i.e. 
$$\label{eq:pert} Z \simeq \int e^{-S_0} \left[ 1 - \int_\tau V + \frac{1}{2} \text{T} \int_{\tau_1}\int_{\tau_2} V(\tau_1) V(\tau_2) \right],$$ with an implied short time cutoff $\alpha = a_0/ v$. Here, T is the time-ordering operator. To utilize this perturbation theory and the OPEs, we express Eq.  in terms of continuum operators, Eqs.  and , $$\begin{aligned} \label{eq:v1} V_1 & = & 2a_0^2 J' \sum_{y = 1}^3 \sum_x {\mathbf{M}}_y(x) \cdot {\mathbf{M}}_{y+1}(x),\\ V_2 & = & - a_0^2 J' \sum_{y = 1}^3 \sum_x {\mathbf{M}}_y(x) \cdot {\partial_x}{\mathbf{M}}_{y+1}(x),\\ V_3 & = & a_0^2 J' \sum_{y = 1}^3 \sum_x {\mathbf{N}}_y(x) \cdot {\partial_x}{\mathbf{N}}_{y+1}(x), \\ \label{eq:v4} V_4 & = & -a_0^2 J' \sum_{y = 1}^3 \sum_x {\mathbf{N}}_y(x) \cdot \frac{1}{2} {\partial_x}^2 {\mathbf{N}}_{y+1}(x),\end{aligned}$$ where $V = V_1 + V_2 + V_3+V_4$. It is [*crucial*]{} to realize that the periodic boundary conditions enforced in the $y$-direction by the TST system, cf. Fig. \[fig:lattice2\], allow us to rewrite any operator $\mathcal{O}$ as $$\label{eq:v1a} \sum_{y=1}^3 \sum_x \mathcal{O}_y \mathcal{O}_{y+1} = \sum_{y=1}^3 \sum_x \mathcal{O}_y \mathcal{O}_{y+2}.$$ Using OPEs, one can show that the nearest neighbor chain couplings of the staggered magnetization and dimerization enter in the third power of $J'$, $$\label{eq:third} V = J_3 \sum_{y = 1}^3 \sum_x \left( {\mathbf{N}}_y (x) \cdot {\mathbf{N}}_{y+1} (x) - \frac{3}{2} {\epsilon}_y (x) {\epsilon}_{y+1} (x) \right),$$ where $J_3 > 0$ and $J_3 \propto (J')^3$. This is done by first generating ${\partial_x}{\mathbf{N}}_{y-1} {\partial_x}{\mathbf{N}}_{y+1}$ by terms quadratic in $V_3+V_4$. Next, this term is fused with $V_1$ to generate the $J_3 \propto (J')^3$ interaction. The calculations are similar to those described in Refs. , and we refer the reader to these papers for more details.
In a 2d system[@starykh2007ordering; @starykh2010extreme], however, we find that the generated term is instead quartic in $J'$, with interaction constant $J_4 \sim (J')^4/J^3$, which is of the opposite (negative) sign, $J_4 < 0$, in comparison with $J_3$ above. It turns out that $J_3 \sim (J')^3 > 0$ is a feature of the $N=3$ TST model [*only*]{}: wider tubes with $N>3$ are analogous to the 2d case, where the renormalized couplings $\sim (J')^4/J^3 < 0$. Note that this difference is important, as it implies that spin tubes with $N>3$ are not frustrated by the periodic BC along the $y$-direction. Going back to the $N=3$ TST, both of the generated interactions in Eq.  are strongly relevant (scaling dimension 1) and scale to strong coupling under RG transformations. It would appear that, because of the greater numerical coefficient of ${\epsilon}_y {\epsilon}_{y+1}$ in Eq. , it is the dimerized ground state that emerges from the competition at strong coupling. However, this argument is not complete, as it neglects the crucially important effect of the marginally irrelevant in-chain backscattering term, $\propto {\bf J}_R \cdot {\bf J}_L$, which in fact breaks the symmetry between the ${\mathbf{N}}_y \cdot {\mathbf{N}}_{y+1}$ and ${\epsilon}_y {\epsilon}_{y+1}$ interactions in favor of the first one[@starykh2007ordering]. This outcome is not unexpected, as it is well known that the marginal in-chain current-current interaction spoils the extended $SU(2)_R \times SU(2)_L$ symmetry of the Heisenberg chain by subleading logarithmic corrections, which modify chain spin correlations as follows [@voit1988; @affleck1989] $$\begin{aligned} \langle {\mathbf{N}}_y(x) {\mathbf{N}}_y(0)\rangle &\propto& (\ln[x])^{1/2} x^{-1}, \nonumber\\ \langle {\epsilon}_y(x) {\epsilon}_y(0)\rangle &\propto& (\ln[x])^{-3/2} x^{-1}.\end{aligned}$$ Essentially the same mechanism promotes the interchain ${\mathbf{N}}_y \cdot {\mathbf{N}}_{y+1}$ interaction over that of the staggered dimerizations.
In the infinite 2d lattice, this leads to the stabilization of the collinear antiferromagnetic phase [@starykh2007ordering], which, however, is not possible in the TST geometry. It is important to realize at this point that the relevant $J_3 \sum_y {\mathbf{N}}_y \cdot {\mathbf{N}}_{y+1}$ interaction, which describes the non-frustrated coupling of staggered magnetizations on neighboring chains, changes the geometry of the system into that of a [*rectangular*]{} spin tube. The renormalized, relevant coupling $J_3$ becomes comparable to the intrachain exchange $J$ under RG and forces the Néel vectors ${\mathbf{N}}_{1,2,3}$ to order into the familiar $120^\circ$ pattern on every rung. Our 1d reasoning stops at this scale, but further progress can be made by assuming that the spin tube with $J_3 \sim J$ can be accessed from the opposite limit of strong rung exchange $J_\perp \gg J$[@schulz1997]. In this limit, the spins on each rung form 3-spin triangles that interact via $J_\perp = J_3$ and are coupled to neighboring triangles by a weak exchange $J$. The ground state of each triangle is 4-fold degenerate and is characterized by [*two*]{} quantum numbers, the total spin $s_{\rm rung} = 1/2$ and the chirality $\tau$, which is itself another pseudo-spin-$1/2$ object. The physical meaning of $\tau$ is just the sense of either a clockwise or a counterclockwise rotation of the ‘unpaired’ spin-$1/2$ in the ground state of the individual triangle. In other words, in addition to spin 1/2, the ground state now carries finite momentum $\pm 2\pi/3$ due to chirality. Focusing on this low-energy subset of the triangle’s states, one can derive the spin-orbital Hamiltonian [@kawano1997] $$\begin{aligned} H_{\rm s-o} &=& \frac{J_\perp}{N} \sum_x {\bf s}_{\rm rung}(x) \cdot {\bf s}_{\rm rung}(x+1) \times\nonumber\\ &&\times [1 + \alpha_N (\tau^+_x \tau^-_{x+1} + \tau^-_x \tau^+_{x+1})] \label{eq:so}\end{aligned}$$ describing the correlated dynamics of spins and chiralities.
For the triangular ladder considered here, $N=3$ and $\alpha_N = 4$. The presented arguments remain valid for any [*odd*]{} $N$, however. See Ref.  for $N=5$ and Ref.  for $N>5$. Analytical [@schulz1997; @orignac2000] and numerical [@kawano1997; @pati2000; @fouet2006frustrated] studies of the model find a dimerized ground state, in agreement with our considerations in Section \[sec:dimer-from-spir\]. Fig. \[fig:EEm0\_0\], which shows oscillatory behavior of the entanglement entropy for different values of $R$, represents clear evidence of the dimerized ground state. Finally, we conclude by discussing the way to generate an interaction of the uniform magnetizations on next-nearest chains. This is done by fusing $V_1$ in Eq.  with itself, which yields, under Eq. , $$\begin{aligned} \label{eq:apB2} \delta H_{MM} = -\frac{(2J')^2}{2} \sum_y \int_x \int_{x'}&& \langle M^z_y(x,\tau) M^z_y(x',\tau')\rangle \nonumber \\ &&\times M^z_{y-1} M^z_{y+1}.\end{aligned}$$ Because the integral converges, the integration over the $y$-th chain correlation function can be extended to the full $x - v\tau$ plane. Using the important short-distance cutoff $\sim a_0 \rm{sign}(\tau)$ and the variable $y = v\tau$ (see Ref.  for a detailed discussion), this leads to $$\int_{-\infty}^\infty dx \int_{-\infty}^\infty dy \Big(\frac{1}{(y + i x + a_0 \rm{sign}(y))^2} + {\rm h.c.}\Big) = 4\pi.$$ As a result we obtain the amplitude $\delta \gamma_{\rm MM} = (J')^2/(\pi v)$, where $v$ is the magnetization-dependent spin velocity. Cone state {#sec:cone} ---------- We now turn on the magnetic field. When a large enough magnetic field is applied to the TST, the “twist" order, the fourth term in Eq. , becomes more relevant than the SDW. This was discussed in previous papers for the two-chain ladder[@kolezhuk05] as well as the 2d triangular lattice[@starykh2007ordering; @starykh2010extreme].
As both the SDW and the cone interaction amplitudes in are of the order $J'$, the relative importance of the two interactions can be estimated [@starykh2007ordering] from a comparison of their scaling dimensions, $\Delta_{\rm sdw} = 1/(2 \pi \mathcal{R}^2)$ and $\Delta_{\rm cone} = 1 + 2\pi \mathcal{R}^2$. These two dimensions are equal when $2 \pi \mathcal{R}^2 = (\sqrt{5}-1)/2$, which takes place at the sufficiently high magnetization $M\approx 0.6 M_{\rm sat}$. Because of the rather steep dependence $M(h)$ of the magnetization on the magnetic field near saturation, this value of magnetization corresponds to $h\approx 0.9 h_{\rm sat}$, see Fig. 2 in Ref. . A similar conclusion is obtained by comparing the mean-field transition temperatures of these two ordered states as functions of magnetization, see Ref. . These arguments, however, are not complete because they do not take into account the fluctuation-generated interactions between spin densities on [*next-nearest*]{} chains. The most important of these in the presence of an external magnetic field is given by $$\label{eq:cone1} V'_{\rm cone} = \delta \gamma_{\rm cone} \sum_y \int dx ~\mathcal{S}^+_{\pi, y} \mathcal{S}^{-}_{\pi, y+2} + {\rm h.c.}.$$ Even though the generated coupling constant is small, $\delta \gamma_{\rm cone} \ll J'/J \ll 1$, this interaction does not involve spatial derivatives and has scaling dimension $2\pi \mathcal{R}^2$, which approaches $1/2$ as $h \to h_{\rm sat}$. Thus, this is a strongly relevant term. In a 2d system[@starykh2007ordering; @starykh2010extreme], $\delta \gamma_{\rm cone} \sim (J')^4/J^3 < 0$, as discussed in the previous subsection. (Note that is written in the ‘sheared’ system of coordinates.) When translated into Cartesian coordinates, it implies antiferromagnetic (positive) exchange interactions between spins on next-nearest chains at the same position ${\sf x}$ along the chain [@starykh2007ordering].
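The crossing condition for the two scaling dimensions above reduces, with $x = 2\pi\mathcal{R}^2$, to $1/x = 1 + x$, i.e. $x^2 + x - 1 = 0$, whose positive root is the inverse golden ratio $(\sqrt{5}-1)/2$. A short numerical sketch (illustrative only, not part of the original analysis) confirms this:

```python
import math

# Scaling dimensions from the text, with x = 2*pi*R^2
delta_sdw = lambda x: 1.0 / x      # SDW operator
delta_cone = lambda x: 1.0 + x     # cone (twist) operator

# The two dimensions cross when 1/x = 1 + x, i.e. x^2 + x - 1 = 0,
# whose positive root is the inverse golden ratio.
x_star = (math.sqrt(5.0) - 1.0) / 2.0
assert abs(delta_sdw(x_star) - delta_cone(x_star)) < 1e-12

# Below x_star (i.e. at higher fields, since x decreases toward 1/2
# at saturation) the cone operator has the smaller scaling dimension
# and is therefore the more relevant one.
assert delta_cone(0.55) < delta_sdw(0.55)
assert delta_sdw(0.8) < delta_cone(0.8)
```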
Crucially, as emphasized in the previous section, the TST geometry allows for a stronger renormalized coupling, of the order of $\delta \gamma_{\rm cone} \equiv J_3 \sim (J')^3/J^2 > 0$. The difference is due to the slightly different routes to in the 2d and $N=3$ TST geometries. One can first show that, starting from the original cone interaction $$\label{eq:cone2} V_{\rm cone} = \gamma_{\rm cone} \sum_y \int dx ~\mathcal{S}^+_{\pi, y} \partial_x \mathcal{S}^{-}_{\pi, y+1} + {\rm h.c.},$$ one can couple the derivatives $\partial_x \mathcal{S}^{\pm}_{\pi}$ on the next-nearest chains $y$ and $y+2$, $$\label{eq:cone4} V''_{\rm cone} \sim \frac{\gamma_{\rm cone}^2}{v} \sum_y \int dx ~\partial_x \mathcal{S}^+_{\pi, y} \partial_x \mathcal{S}^{-}_{\pi, y+2} + {\rm h.c.}.$$ This step parallels the calculations leading to Eq. , with a minor variation due to the $U(1)$ symmetry of the system in the presence of an external magnetic field. In this situation the scaling dimension of the $\mathcal{S}_\pi$ field is smaller than $1/2$, which leads to a slightly different numerical prefactor in the renormalization. However, the functional dependence on $J'$ remains the same. Secondly, for all $N > 3$ one also needs to generate $$\label{eq:cone3} V'_{\rm MM} = -\delta \gamma_{\rm MM} \sum_y \int dx ~M^z_y M^z_{y+2},$$ which was described at the end of the previous subsection, Sec. \[sec:zero-field-analysis\]. Here, $\delta \gamma_{\rm MM} \sim (J')^2/J$. Fusing next and together leads to the result . In the $N=3$ TST, however, the second step is not required due to , and we end up with a larger coupling of the order $\delta\gamma_{\rm cone} \sim (J')^3/J^2 > 0$ in . To compare the original $V_{\rm cone}$ with the generated $V'_{\rm cone}$ quantitatively, we can estimate the RG scale $\ell$ at which the coupling constant of the interaction becomes of order one (in units of the spin velocity $v$).
For this is, with logarithmic accuracy, $\ell_{\rm cone} \sim -\ln(J')/(2 - \Delta_{\rm cone}) = -\ln(J')/(1 - 2\pi \mathcal{R}^2)$, while for it is $\ell_3 \sim -3 \ln(J')/(2 - 2\pi \mathcal{R}^2)$. We immediately conclude that $\ell_3 < \ell_{\rm cone}$ for all values of $2\pi \mathcal{R}^2 \in (1/2,1)$, i.e. that the generated cone interaction term is more relevant than the bare one for all values of magnetization in the case of the $N=3$ TST. A similar consideration allows us to analyze the competition between the generated cone interaction $V'_{\rm cone}$ and the SDW one, which is characterized by the RG scale $\ell_{\rm sdw} \sim - 2\pi \mathcal{R}^2 \ln(J' \sin[\delta])/(4\pi \mathcal{R}^2 - 1)$. We find that $\ell_{\rm sdw} < \ell_3$ for $1 \geq 2\pi \mathcal{R}^2 \geq \sqrt{7}-2 \approx 0.65$, which corresponds to the low-to-intermediate range of magnetization $M\gtrsim 0.25$. At higher $M$, however, the modified cone interaction takes over the SDW one. (For the 2d case, the comparison is less conclusive, as the result sensitively depends on numerical factors inside the argument of the logarithm [@starykh2010extreme].) We now investigate the consequences of the strong $J_3\equiv \delta\gamma_{\rm cone}$ interaction in Eq.  for the TST problem. In the high-field region where SDW fluctuations are suppressed, the Hamiltonian of the system is given by the sum of $H_0$ in Eq. , the generated direct coupling $V'_{\rm cone}$ in Eq. , and the original cone interaction $V_{\rm cone}$ in Eq. , which is now subleading in comparison with .
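The inequalities between the three RG scales above can be checked numerically. With logarithmic accuracy we set the arguments of all logarithms to a common $J'$, so that only the rational prefactors matter and the scales can be measured in units of $-\ln(J')$; the sketch below (an illustration under this simplification, not part of the original derivation) writes $x = 2\pi\mathcal{R}^2$:

```python
import math

# RG scales in units of -ln(J'), with x = 2*pi*R^2 (logarithmic accuracy:
# the arguments of all logarithms are set to a common J').
l_cone = lambda x: 1.0 / (1.0 - x)        # bare cone interaction
l_3    = lambda x: 3.0 / (2.0 - x)        # generated (J')^3 interaction
l_sdw  = lambda x: x / (2.0 * x - 1.0)    # SDW interaction

x_c = math.sqrt(7.0) - 2.0                # crossover value, ~0.65

for i in range(1, 1000):
    x = 0.5 + 0.5 * i / 1000.0            # scan x in (1/2, 1)
    # the generated cone term always reaches strong coupling before
    # the bare one
    assert l_3(x) < l_cone(x)
    # SDW wins for x above sqrt(7) - 2, the generated cone term below
    if x > x_c + 1e-9:
        assert l_sdw(x) < l_3(x)
    elif x < x_c - 1e-9:
        assert l_3(x) < l_sdw(x)
```

The first assertion reproduces $\ell_3 < \ell_{\rm cone}$ on the whole interval $x \in (1/2, 1)$, and the second pair reproduces the crossover at $x = \sqrt{7}-2$ quoted in the text.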
With this, we rewrite the interaction potential in abelian-bosonized form and arrive at the following expression, $$\begin{aligned} \label{eq:tst-cone} H^{\rm TST}_{\rm cone} &&= J_3 \int dx \{ \cos[\beta(\theta_1 - \theta_2)] + \cos[\beta(\theta_2 - \theta_3)] +\cos[\beta(\theta_3 - \theta_1)]\}\\ &&+ \frac{\beta J'}{2} \int dx \{ \partial_x(\theta_1 + \theta_2) \sin[\beta(\theta_1 - \theta_2)] +\partial_x(\theta_2 + \theta_3) \sin[\beta(\theta_2 - \theta_3)] + \partial_x(\theta_3 + \theta_1) \sin[\beta(\theta_3 - \theta_1)] \}.\nonumber\end{aligned}$$ For $J_3 \gg J'$, which is the appropriate regime according to our RG arguments above, this potential is minimized by configurations with $\cos[\beta(\theta_y - \theta_{y+1})] = -1/2$ for all $y$. This allows for two different values of the sine terms, $\sin[\beta(\theta_y - \theta_{y+1})] = \pm \sqrt{3}/2$. In fact, the different signs describe states with different vector chiralities, defined as $$\label{eq:tst-kappa} \kappa_y^z = \Big(\mathbf{S}_y \times \mathbf{S}_{y+1}\Big)_z \sim \sin[\beta(\theta_y - \theta_{y+1})] .$$ Thus, different signs of $\kappa_y^z$ correspond to different senses of rotation (clockwise or counterclockwise) of $e^{i\beta \theta_y}$ as we go from one chain to the next. These chiralities also represent a useful order parameter describing the two degenerate cone states [@sato07A]. To account for the subleading twist terms with spatial derivatives in , we shift $\theta_y \to \theta_y + \upsilon x$, where $\upsilon$ is determined by the requirement that in the new ground state, the bosonic field $\theta$ is twist-less, i.e. $\langle \partial_x \theta_y\rangle =0$.
Minimizing $H_0 + H^{\rm TST}_{\rm cone}$ over $\upsilon$, we find $$\label{eq:tst-shift} \upsilon = -\beta J' \langle \sin[\beta(\theta_y - \theta_{y+1})] \rangle \sim - J' \kappa_y^z.$$ This shows that the doubly-degenerate cone state is characterized by incommensurate transverse spin correlations, by virtue of the relation ${\cal S}_y^+ = (-1)^x e^{i\beta \theta_y} \to \exp[i (\pi + \upsilon) x + i \beta \theta_y]$. Depending on the spontaneously chosen vector chirality, Eq. , transverse spin correlations are peaked at either $Q_{1,x} = \pi + \upsilon$ (for $\kappa_y^z > 0$) or $Q_{2,x} = -\pi + \upsilon$ (for $\kappa_y^z < 0$) along the chain. Transformation properties of $\mathbb{Z}_2$ vortices {#sec:transf-prop-mathbbz} ==================================================== In this appendix, we address the transformation properties of the $\mathbb{Z}_2$ vortex instanton operator $\psi$. We give several arguments. First, these properties have been implicitly obtained, in the case of a three-leg spin tube slightly different from the one studied here, in Ref. . There, the authors explicitly evaluate the Berry phase contribution to the action for instantons on the lattice. Microscopically, the instantons are associated with columns of spatial [*links*]{} along the $x$-direction of the cylinder (see below how this arises in another formulation). They showed that, due to the Berry phase, a single pair of instantons (an odd number of instantons cannot occur) is accompanied by a weight, $$\label{eq:62} e^{iS_{BP}} =e^{2\pi i S (x-x')},$$ where $x$ and $x'$ are the locations of the instantons. For half-integer spins, this gives an oscillating factor equal to $+1$ or $-1$ if the separation between the instantons is even or odd, respectively. From this we can extract the transformation properties. If we translate [*one*]{} of the instantons, $x \rightarrow x+1$, we see that the weight in Eq.  changes sign. This requires $\psi \rightarrow -\psi$, in agreement with Eq. .
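The sign structure of the Berry-phase weight in Eq. (eq:62) can be made concrete with a few lines of arithmetic (an illustration only): for $S=1/2$ the weight reduces to $(-1)^{x-x'}$, which alternates with the parity of the instanton separation and flips sign when one instanton is translated by a single site.

```python
import cmath

def weight(S, d):
    """Berry-phase weight exp(2*pi*i*S*(x - x')) for instanton separation d."""
    return cmath.exp(2j * cmath.pi * S * d)

# For half-integer spin the weight alternates with the parity of the
# separation...
for d in range(-6, 7):
    w = weight(0.5, d)
    assert abs(w.imag) < 1e-12
    assert round(w.real) == (1 if d % 2 == 0 else -1)
    # ...and translating one instanton, x -> x + 1, flips its sign,
    # which is what forces psi -> -psi under translation
    assert abs(weight(0.5, d + 1) + w) < 1e-12

# For integer spin the weight is always +1 and no sign change occurs.
assert all(abs(weight(1, d) - 1) < 1e-12 for d in range(-6, 7))
```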
Under inversion, $P$, about a lattice site, the instantons, which live on the links, change from the even to odd sublattice of bonds and vice-versa. Inverting a single instanton, therefore, changes the parity of $x$, and hence also the sign of the weight in Eq. . Thus, again, $\psi$ is odd under inversion, in agreement with Eq. . Since the instantons do not move under time-reversal or translation along $y$, the invariance of $\psi$ under these operations is obvious. Thus, for the case $L_y=3$, for the model studied in Ref. , the symmetry of the instanton operator is determined as shown in the text. We turn now to an alternative derivation of the transformation laws, which gives the general result, and clarifies its generality. Here we follow the general strategy of Ref. , in which the $\mathbb{Z}_2$ vortices are explicitly separated from the smooth configurations of the SO(3) order parameter using a slave particle construction. This is achieved by writing the unit vectors defining the SO(3) matrix in terms of a “slave spinon” $z_\alpha$: $$\label{eq:61} {\bf\hat{n}}_1 + i {\bf\hat{n}}_2 = \epsilon_{\alpha\beta} z_\beta {\boldsymbol \sigma}_{\alpha\gamma} z_\gamma,$$ where the complex, two component vector $z_\alpha$ is constrained to have unit norm, $\sum_\alpha z_\alpha^* z_\alpha^{\vphantom*}=1$. This representation faithfully reproduces the orthonormality constraints on the ${\bf\hat n}_i$, but is two to one: the physical order parameter ${\mathcal O}$ is unchanged by the transformation $z_\alpha \rightarrow - z_\alpha$. This is actually a gauge invariance, since the transformation is made locally. The $\mathbb{Z}_2$ vortex is a configuration in which, on encircling the center of the defect, $z_\alpha$ returns not to itself but to $-z_\alpha$. As explained in Ref. 
, a low energy effective theory, appropriate to describe the regime with a local spiral order, as well as a quantum disordered phase, is a 2+1 dimensional $\mathbb{Z}_2$ gauge theory coupled to the spinon variables $z_\alpha$. We refer the reader to Ref.  for details. The $\mathbb{Z}_2$ vortex in this theory appears as a configuration of a spinon field which has a discontinuity $z_\alpha \rightarrow -z_\alpha$ across a semi-infinite “cut” emanating from the vortex. This $\mathbb{Z}_2$ vortex is accompanied by an Ising vortex, the so-called “vison", which is itself a defect with a non-zero Ising gauge field crossing the same semi-infinite “cut". In this way, the topological defects of the spiral magnet become identified with the visons of the $\mathbb{Z}_2$ gauge theory. The discussion in the previous paragraph applies to $\mathbb{Z}_2$ vortices in two-dimensional space, which are particles in the 2+1-dimensional theory. We need to go from this to the description of instantons in the 1+1-dimensional theory obtained by applying periodic boundary conditions in the $y$ direction. A 1+1-dimensional instanton can be viewed as an event in which a pair of $\mathbb{Z}_2$ vortices is nucleated: one of them winds around the cylinder and finally arrives back at the other $\mathbb{Z}_2$ vortex and annihilates it. We can, by the previous argument, consider the particles nucleated and annihilated to be visons in the gauge theory. Such a process was considered in Ref.  (in the Supplementary Material), where it was shown that the operator representing this process in the Ising gauge theory has the transformation properties in Eqs. , , i.e. this operator can be viewed as a staggered dimerization operator for odd $L_y$. There, a rectangular lattice gauge theory was studied, but the basic physics is quite general. Let us consider the translation. We ask about the amplitude to first wind one vison around the cylinder at position $x$, and then wind another at position $x+1$. 
The overall phase of the amplitude for both processes taken together gives the transformation property of the instanton operator under translation. The visons reside at the plaquette centers of the original lattice, and the winding trajectories form closed circles at fixed $x$, circumnavigating the cylinder. Together, these two events form two such circles that enclose one column of sites in the lattice. The fundamental property of a vison is that it has a mutual statistical interaction with “electric” gauge charges, with the wavefunction acquiring a phase of $\pi$ whenever one encircles the other. For a $S=1/2$ system, a unit gauge charge is present at every lattice site – this represents the physical spin at each site. The net effect of the two events together is that one vison is wound around each site of the lattice between the two circles, leading to an overall amplitude of $(-1)^{L_y}$ for the two processes together. Here, $L_y$ is the number of sites contained between the two circles. This gives the result in Eq. . Note that we may also roughly understand this phase factor by considering the smooth rotations of the microscopic spins between the two contours, all of which rotate by $2\pi$ and, due to their $s=1/2$ spinor transformation properties, each acquire a minus sign. A similar argument shows that spatial inversion gives the same phase factor. Explicit calculations for these factors in the Ising gauge theory can be found in Ref. . Note that these arguments do not depend at all on the interactions in the model, just on the presence of these symmetries and the fundamental statistics of the particles.
--- author: - Parna Roy - Parongama Sen title: 'Interplay of interfacial noise and curvature driven coarsening: Supplementary material' --- Snapshots and the density of interfaces ======================================= The snapshots for different values of $x$ are plotted in the figures below for system size $64\times 64$. For $x=0.5$ (Fig. \[snp0.5\]) we can see that there is no clear domain formation; only rough interfaces exist, as is well known. Dynamical evolution in this case is interfacial noise driven; in a finite system, a random fluctuation of large size ultimately leads to a consensus state. For values of $x$ close to unity, we note that there are two possibilities. Either the system reaches a consensus state in a short time or it may take a much larger time as the system evolves through metastable states with minimum curvature. ![Typical snapshots at different times for $x=0.5$ show that coarsening is interfacial noise driven.[]{data-label="snp0.5"}](snap_z0.5.eps){width="8.0cm" height="4.5cm"} ![Certain configurations show fast relaxation. Snapshots show such a configuration at different times for $x=0.9$.[]{data-label="snp0.9"}](snap_z0.9_fast.eps){width="8.0cm" height="4.5cm"} In the main text, we have shown the latter case for $x=0.9$. Here we show an example of the first case (Fig. \[snp0.9\]). In this case, no domains with nearly straight edges occur at intermediate times. For $x=1.0$ (Fig. \[snp1.0\]) certain configurations reach a frozen striped state and do not evolve further, as the coarsening is curvature driven. In Figs \[snp1.0\] and \[snp1.0\_f\], two different cases of coarsening for $x=1$ are shown, with and without freezing. ![Typical snapshots at different times for $x=1$ show that coarsening is curvature driven. This configuration reached a striped frozen state.[]{data-label="snp1.0"}](snap_z1.0.eps){width="8.0cm" height="4.5cm"} ![Typical snapshots at different times for $x=1$ show that coarsening is curvature driven. 
This configuration reached the consensus state.[]{data-label="snp1.0_f"}](snap_z1.0_fast.eps){width="8.0cm" height="4.5cm"} ![Plot of density of interfaces $n(t)$ for $x=0.5,0.55,0.6,0.7$.[]{data-label="dw_supp"}](dw_supp.eps){width="8.0cm" height="4.5cm"} In Fig. \[dw\_supp\] we have plotted the density of interfaces $n(t)$ for $x = 0.5$ and a few other values. In this figure, it can be seen clearly that the coarsening becomes much faster compared to $x=0.5$ even when the deviation of $x$ from 0.5 is small. It also clearly shows the existence of a kink for $x = 0.7$, and not for smaller values, supporting the conjecture that a crossover behaviour occurs at $x \approx 0.7$. Consensus time ============== In the main text, the consensus time distribution $D(\tau)$ for $L=32$ has been reported. In Fig. \[dist\_64\] we show the distributions for $L=32$, magnifying the region of small $\tau$ ($\tau \leq 1000$). In the inset we have shown $D(\tau)$ for $x=1$ for $L=64$. From this figure we can argue that for larger system sizes, in the Ising limit $x=1.0$, the exponential decay at larger $\tau$ is not present; the distribution contains only a sharply peaked symmetric function of finite width. In the main text we have reported that the conventional behaviour of $D(\tau)$ continues till $x_c\approx0.7$, beyond which $D(\tau)$ changes its behaviour considerably. Fig. \[time1\] supports this statement. For other values of $x$, only the average value of $\tau$ has been estimated so far for $L > 32$. In Fig. \[time\] we have plotted $\langle \tau \rangle$ as a function of $x$ for $L=48, 64, 80$. The results are qualitatively similar to those for $L=32$, and they support the conjecture that discontinuities occur at $x=0.5$ and $1$. In the inset we have plotted $\langle \tau \rangle$ as a function of system size $L$ for different values of $x$. 
Although for $x=0.5$ one gets the behaviour $\langle \tau\rangle \propto L^2 \log L$, it is difficult to determine the exact dependence for other values of $x$. For $x = 1$ the dependence is simply $L^2$, as is known. ![Plot of $D(\tau)$ at initial time scales. Inset shows the distribution of the consensus time $D(\tau)$ for $L=64$ for $x=1.0$.[]{data-label="dist_64"}](time_dist_SM.eps){width="9.0cm" height="5.5cm"} ![Plot of $D(\tau)$ for $x=0.65, 0.7, 0.75$.[]{data-label="time1"}](time_dist1.eps){width="9.0cm" height="5.5cm"} ![Plot of consensus time $\tau$ as a function of $x$ for $L=48, 64, 80$. Inset shows the variation of $\tau$ with system size for $x=0.5,0.6,0.7,0.8$.[]{data-label="time"}](time_size.eps){width="9.0cm" height="5.5cm"} ![Plot of $n(t)$ as a function of time for $L=32$ for $x=0.4$. Left inset shows the variation of $P(t)$ with $t$ and right inset shows the variation of the persistence exponent $b$ with $x$.[]{data-label="dw_per"}](dw_per.eps){width="8.0cm" height="4.5cm"} Results for $x<0.5$ =================== The focus of the paper has been on $x \geq 0.5$, for which absorbing states can be reached. However, the region $x < 0.5$ also yields some interesting results. As the state undergoes continuous evolution, the persistence probability goes to zero and the density of active bonds remains finite. In Fig. \[dw\_per\] we have plotted the density of active bonds $n(t)$ and in the inset we have plotted the persistence probability $P(t)$ as a function of time for $L=32$ for $x=0.4$. The persistence probability has an exponential decay ($P(t)\sim\exp(-bt)$), i.e., it decays faster than in the voter model. The parameter $b$ has a nonlinear dependence on $x$ (see inset of Fig. \[dw\_per\]).
--- abstract: 'Given a separably closed field $K$ of characteristic $p>0$ and finite degree of imperfection (often $1$), we study the $\sharp$-functor, which takes a semiabelian variety $G$ over $K$ to the maximal divisible subgroup of $G(K)$. We show that the $\sharp$-functor need not preserve exact sequences. We relate preservation of exactness to issues of descent as well as to model-theoretic properties of $G^{\sharp}$, and give an example where $G^{\sharp}$ does not have “relative Morley rank". We also mention characteristic $0$ versions of our results, where differential algebraic methods are more prominent.' author: - | Franck Benoist[^1]\ University of Leeds and Univ. Paris-Sud 11 - | Elisabeth Bouscaren\ CNRS - Univ. Paris-Sud 11 - | Anand Pillay[^2]\ University of Leeds title: 'Semiabelian varieties over separably closed fields, maximal divisible subgroups, and exact sequences' --- Introduction ============ For a semiabelian variety $G$ over a separably closed field $K$ of characteristic $p>0$ and finite degree of imperfection, the group ${{p^{\infty} G(K)}}$ = $\cap_{n} p^{n}(G(K))$ played a central role in Hrushovski’s proof of the function field Mordell-Lang conjecture in positive characteristic. The group ${{p^{\infty} G(K)}}$, which we also sometimes call $G^{\sharp}$, is [*type-definable*]{} in the structure $(K,+,\cdot)$. (Strictly speaking, $K$ should be taken to be “saturated" for this to be meaningful, and this will be assumed below.) It was claimed in [@Hrushovski] that ${{p^{\infty} G(K)}}$ always has [*finite relative Morley rank*]{} (see section 2.3 for the definition). One of the reasons or motivations for writing the current paper is to show that this is not the case: there are $G$ such that ${{p^{\infty} G(K)}}$ does not even have relative Morley rank. 
(However, ${{p^{\infty} G(K)}}$ [*does*]{} have finite $U$-rank, which suffices for results such as Proposition 4.3 of [@Hrushovski] to go through, hence the validity of the main results of [@Hrushovski] is unaffected.) As the second author noticed some time ago, the “relative Morley rank" problem is related in various ways to whether the $p^{\infty}$ (or ${\sharp}$)-functor preserves exact sequences. So another theme of the current paper is to give conditions on an exact sequence $0\to G_{1} \to G_{2} \to G_{3}\to 0$ of semiabelian varieties over $K$ which imply exactness of the sequence $0 \to G_{1}^{\sharp} \to G_{2}^{\sharp} \to G_{3}^{\sharp} \to 0$, as well as giving situations where the sequence of the $G_{i}^{\sharp}$ is [*not*]{} exact. A third theme relates the preservation of exactness by $\sharp$ to the issue of descent of a semiabelian variety $G$ over $K$ to the field of “constants" ${{K^{p^{\infty}}}}$ = $\cap_{n}K^{p^{n}}$ of $K$. If $K$ has degree of imperfection $e$ (meaning that $K$ has dimension $p^{e}$ as a vector space over its $p$th powers $K^{p}$), then $K$ can be equipped naturally with $e$ commuting iterative Hasse derivations. We will, for simplicity, mainly consider the case where $e=1$ (so for example where $K = {{\mathbb{F}_p}}(t)^{sep}$), in which case we have a single iterative Hasse derivation $(\partial_{n})_{n}$ whose field of absolute constants is ${{K^{p^{\infty}}}}$. This differential structure on $K$ will play a role in some proofs, by virtue of so-called $D$-structures on varieties over $K$. However, $p$-torsion and Tate modules will be our central technical tools in the positive characteristic case. The analogue in characteristic $0$ of the differential field $(K,(\partial_{n})_{n})$ is simply a differentially closed field $(K,\partial)$ (of characteristic zero). 
And for an abelian variety $G$ over our characteristic $0$ differentially closed field $K$ we have what is often called the “Manin kernel" for $G$, the smallest Zariski-dense “differential algebraic" subgroup of $G(K)$, which we denote again by $G^{\sharp}$. The issues of preservation of exactness by $\sharp$ and the relationship to descent to the field ${\cal C}$ of constants, make sense in characteristic $0$ too, and where possible we give uniform results and proofs. Our paper builds on earlier work by the second author and Françoise Delon [@BD2] where among other things, the groups $G^{\sharp}$ (in positive characteristic) are characterized as precisely the commutative divisible type-definable groups in separably closed fields. Our results, especially in characteristic $0$, are also influenced by and closely related to themes in the third author’s joint paper with Daniel Bertrand [@BePi]. Let us now describe the content and results of the paper. Section 2 recalls key notions and facts about differential fields, and semiabelian varieties over separably closed fields. We also discuss relative Morley rank, preservation of descent under isogeny, and some properties of $p^{\infty}G(K)$. In section 3 we introduce the $\sharp$-functor in all characteristics and begin relating relative Morley rank to exactness. We also make some observations about descent of semiabelian varieties, $D$-structures, $p$-torsion, and Tate modules, proving for example that in positive characteristic the semiabelian variety $G$ descends to the constants if and only if $G$ has a $D$-group structure if and only if, in the ordinary case, all of the (power of $p$)-torsion of $G$ is $K$-rational. Section 4 contains the main results of the paper. The key result, Proposition 4.2, characterizes the obstruction to preservation of exactness by the $\sharp$-functor, and is proved in all characteristics. 
Proposition 4.3 concludes that if $0\to G_1 \to G_2 \to G_3 \to 0$ is an exact sequence of semiabelian varieties (ordinary in characteristic $p$) such that $G_{1}$ and $G_{3}$ are defined over the constants, ${\cal C}$, then the sequence of $G_{i}^{\sharp}$’s is exact if and only if $G_{2}$ descends to ${\cal C}$. Together with results from Section 3 we are then able to present our example (in positive characteristic) of a semiabelian variety $G$ such that $G^{\sharp}$ does not have relative Morley rank (in fact the example is simply any nonconstant extension of a constant ordinary abelian variety by an algebraic torus). The remainder of Section 4 contains both positive and negative results about preservation of exactness by $\sharp$ in various situations. For example, in characteristic $0$, the $\sharp$-functor applied to any exact sequence of [*abelian varieties*]{} preserves exactness, whereas there is a counterexample in positive characteristic.

Elisabeth Bouscaren would like to thank particularly Ehud Hrushovski and Françoise Delon for numerous discussions in the past years on the questions addressed in this paper. Grateful thanks from all three authors go especially to Daniel Bertrand and Damian Rössler for numerous and enlightening discussions. Among the many others who have helped with explanations or discussions with some of the authors, let us give special thanks to Jean-Benoit Bost, Antoine Chambert-Loir, Marc Hindry, Minhyong Kim and Thomas Scanlon.

Preliminaries
=============

Hasse fields
------------

We summarise here basic facts and notation about the fields $K$ that concern us. More details can be found in [@BeDe], [@Ziegler1] for the characteristic $p$ case and [@marker] for the characteristic zero case. If $K$ is a separably closed field of characteristic $p >0$ then the dimension of $K$ as a vector space over the field $K^{p}$ of $p^{th}$ powers is infinite or a power $p^{e}$ of $p$.
In the second case, $e$ is called the degree of imperfection (we will just say the “invariant") of $K$ and we will be interested in the case when $e \geq 1$ (and often when $e=1$). For $e$ finite, a $p$-basis of $K$ is a set $a_{1},..,a_{e}$ of elements of $K$ such that $\{a_{1}^{n_{1}}a_{2}^{n_{2}}...a_{e}^{n_{e}}: 0\leq n_{i} < p\}$ forms a basis of $K$ over $K^{p}$. The first order theory of separably closed fields of characteristic $p>0$ and invariant $e$ (in the language of rings) is complete (and model complete). We call the theory $SCF_{p,e}$. It is also stable (but not superstable) and certain natural (inessential) expansions that we mention below have quantifier elimination. For $R$ an arbitrary ring (commutative with a $1$), an [*iterative Hasse derivation*]{} $\partial$ on $R$ is a sequence $(\partial_{n}:n = 0,1,...)$ of additive maps from $R$ to $R$ such that (i) $\partial_{0}$ is the identity, (ii) for each $n$, $\partial_{n}(xy) = \sum_{i+j = n}\partial_{i}(x)\partial_{j}(y)$, and (iii) for all $i,j$, $\partial_{i}\circ\partial_{j} = {i+j\choose i}\partial_{i+j}$. Note that $\partial_{1}$ is a derivation, and that when $R$ has characteristic $0$, $\partial_{n} = \partial_1^{n}/n!$. (So in the characteristic $0$ case the whole sequence $(\partial_{n})_{n}$ is determined by $\partial_{1}$.) By the [*constants*]{} of $(R,(\partial_{n})_{n\geq 0})$ one usually means $\{r\in R:\partial_{1}(r) = 0\}$ and by the [*absolute constants*]{} $\{r\in R: \partial_{n}(r) = 0$ for all $n>0\}$. In this paper, we will mainly consider the field of absolute constants, denoted ${\mathcal{C}}$, and refer to them in the sequel as “the constants”. If $\partial^{1}$ and $\partial^{2}$ are iterative Hasse derivations on $R$, we say that they commute if each $\partial^{1}_{i}$ commutes with each $\partial^{2}_{j}$.
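To make the axioms concrete, here is the standard iterative Hasse derivation on a polynomial ring, written out as a small computation (a routine verification added for illustration, not taken from the paper):

```latex
% The standard iterative Hasse derivation on \mathbb{F}_p[t],
% extended additively from its values on monomials:
%   \partial_n(t^m) = \binom{m}{n}\, t^{m-n}.
% Axiom (ii) (the Leibniz rule) is Vandermonde's identity:
\partial_n(t^a\,t^b) \;=\; \binom{a+b}{n}\,t^{a+b-n}
  \;=\; \sum_{i+j=n}\binom{a}{i}\binom{b}{j}\,t^{a+b-n}
  \;=\; \sum_{i+j=n}\partial_i(t^a)\,\partial_j(t^b).
% Axiom (iii) (iterativity) is the trinomial identity:
\partial_i\partial_j(t^m) \;=\; \binom{m}{j}\binom{m-j}{i}\,t^{m-i-j}
  \;=\; \binom{i+j}{i}\binom{m}{i+j}\,t^{m-i-j}
  \;=\; \binom{i+j}{i}\,\partial_{i+j}(t^m).
% In characteristic p the higher maps carry strictly more information
% than \partial_1: for instance \partial_1(t^p) = p\,t^{p-1} = 0 while
% \partial_p(t^p) = 1, so t^p is a constant but not an absolute constant.
```

In particular the constants of $\partial_{1}$ on $\mathbb{F}_p(t)$ are $\mathbb{F}_p(t^p) = \mathbb{F}_p(t)^{p}$, while the absolute constants are just $\mathbb{F}_p$, illustrating the distinction drawn above.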
\(i) If $K$ is a separably closed field of invariant $e \geq 1$, then there are commuting iterative Hasse derivations $\partial^{1},..,\partial^{e}$ on $K$ such that the common field of constants of $\partial_1^{1},..,\partial_1^{e}$ is $K^{p}$. In this case the common field of (absolute) constants of $\partial^{1},..,\partial^{e}$ is the field $K^{p^{\infty}} = \cap_{n}K^{p^{n}}$. (ii) Moreover in (i), if $a_{1},..,a_{e}$ is a $p$-basis of $K$, then each $\partial^{i}_{j}$ is definable in the field $K$ over parameters consisting of the $a_{1},..,a_{e}$ and their images under the maps $\partial^{n}_{m}$ ($n = 1,..,e$, $m \geq 0$). (iii) The theory $CHF_{p,e}$ of separably closed fields of degree of imperfection $e$, equipped with $e$ commuting iterative Hasse derivations $\partial^{1},..,\partial^{e}$ whose common field of constants is $K^{p}$, is complete, stable, with quantifier elimination (in the language of rings together with unary function symbols for each $\partial^{i}_{n}$, $i=1,..,e$, $n > 0$). Note that after adding names for a $p$-basis $a_{1},..,a_{e}$ of the separably closed field $K$, we obtain for each $n$ a basis $1,d_{1},..,d_{p^{n}-1}$ of $K$ over $K^{p^{n}}$, and the functions $\lambda_{n,i}$ such that $x = \sum_{i} (\lambda_{n,i}(x))^{p^{n}}d_{i}$ for all $x$ in $K$, are definable with parameters $a_{1},..,a_{e}$ in the field $K$. The theory of separably closed fields also has quantifier elimination in the language with symbols for a $p$-basis and for each $\lambda_{n,i}$. The relation between the $\lambda$-functions and the $\partial^{i}_{j}$ is given in section 2 of [@BeDe]. In the current paper we concentrate on the iterative Hasse derivation formalism. In fact, when we mention separably closed fields $K$ with an iterative Hasse structure, we will usually assume that $e = 1$ and so $K$ is equipped with a single iterative Hasse derivation $\partial = (\partial_{n})_{n}$.
The basic example is ${\mathbb F}_{p}(t)^{sep}$ (where $^{sep}$ denotes separable closure) with $\partial_{1}(t) = 1$ and $\partial_{i}(t) = 0$ for all $i>1$. The assumption that $e =1$ is made here for the sake of simplicity, as some of the results we will be quoting are only explicitly written out for this case, but it will be no real restriction, thanks to: (see for example [@BeDe]) Let $K_0$ be an algebraically closed field of characteristic $p$, and $K_1$ a finitely generated extension of $K_0$. Then there is a separably closed field $K$ of degree of imperfection $1$, extending $K_1$, and such that $K_0 = K^{p^\infty}$. Our characteristic $0$ analogue is simply a differentially closed field $(K,\partial)$ of characteristic $0$, where now $\partial$ is the single distinguished derivation (rather than a sequence). The corresponding first order theory is $DCF_{0}$, in the language of rings together with a symbol for $\partial$. The theory $DCF_{0}$ is complete with quantifier elimination, and is moreover $\omega$-stable.

Characteristic $p$
------------------

Let $K$ be a separably closed field of characteristic $p$ and finite degree of imperfection $e \geq 1$, and let $\overline K$ denote an algebraic closure of $K$.

### Separability and related issues

We first make some simple remarks about morphisms and varieties which are essential when working in characteristic $p$ over non-perfect fields. Recall that if $V$ and $W$ are two irreducible varieties over $K$, and $f$ is a dominant $K$-morphism from $V$ to $W$, then $f$ is said to be [*separable*]{} if the field extension $K(W) < K(V)$ is separable. If $V$ is a variety defined over $K$, $V(K)$ denotes the set of $K$-rational points of $V$. Recall that when $K$ is separably closed, $V(K)$ is Zariski dense in $V$. Let $G, H$ be two connected algebraic groups defined over $K$ and $f$ a surjective separable morphism from $G$ to $H$. Then $f$ takes $G(K)$ onto $H(K)$.
[**Claim 1**]{} We can suppose without loss of generality that $K$ is sufficiently saturated: Let $K_1 >K$ be saturated. Then $f$ extends uniquely to a surjective separable morphism $f_1$ from $G\times_K K_1$ to $H\times_K K_1$. Suppose we have proved that $f (G(K_1)) = H(K_1)$. This is a first order statement about $K_1$: $$\forall y \, (y \in H \rightarrow \exists x \, (x \in G \land f(x) = y)),$$ with parameters in $K$; hence, as $K < K_1$, it is also true in $K$. $\Box$ So we can suppose that $G, H$ and $f$ are all defined over some small $K_0 <K$ and that $K$ is $|K_0|^+$-saturated. [**Claim 2.**]{} If $h \in H(K)$ is a generic point of $H$ over $K_{0}$ (in the sense of algebraic geometry) then $h \in f(G(K))$. Proof: We can find a generic point $g$ of $G(\overline K)$ over $K_{0}$ such that $f(g) = h$. By separability of $f$, $K_{0}(g)$ is a separable extension of $K_{0}(h)$, so contained in a separable closure of $K_{0}(h)(a_{1},..,a_{n})$ for some $a_{i}$ which are algebraically independent over $K_{0}(h)$. Choosing, by saturation of $K$, $b_{1},..,b_{n}\in K$, algebraically independent over $K_{0}(h)$, and an isomorphism taking the separable closure of $K_{0}(h)(a_{1},..,a_{n})$ to the separable closure of $K_{0}(h)(b_{1},..,b_{n})$, we find $g'\in G(K)$ such that $f(g') = h$. $\Box$ Now let $h\in H(K)$ be arbitrary. By Zariski-denseness of $H(K)$ and saturation of $K$ we can find $h_{1}\in H(K)$, generic over $K_{0}(h)$ (in the sense of algebraic geometry). Let $h_{2} = h_{1}^{-1}h$, which is also in $H(K)$ and also a generic point of $H$ over $K_{0}(h)$. By Claim 2, both $h_{1}$ and $h_{2}$ are in the image of $G(K)$ under $f$. Hence $h$ is too.
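The separability hypothesis in the proposition just proved cannot be dropped; the following standard counterexample (our illustration, not from the paper) shows what goes wrong:

```latex
% The Frobenius morphism on the multiplicative group,
%   Fr : \mathbb{G}_m \to \mathbb{G}_m, \qquad x \mapsto x^p,
% is a surjective morphism of algebraic groups (every element of an
% algebraically closed field has a p-th root), but it is purely
% inseparable: K(x)/K(x^p) is an inseparable field extension.
% On K-rational points, for K separably closed but not perfect
% (e.g. K = \mathbb{F}_p(t)^{sep}):
Fr(\mathbb{G}_m(K)) \;=\; (K^{\times})^{p} \;=\; (K^{p})^{\times}
  \;\subsetneq\; K^{\times} \;=\; \mathbb{G}_m(K),
% since, for instance, t has no p-th root in K: the polynomial
% X^p - t is purely inseparable over \mathbb{F}_p(t).
```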
When we say that an [*exact sequence of algebraic groups*]{} $$0 \rightarrow G_1 \stackrel{g}{\rightarrow} G_2\stackrel{f}{\rightarrow} G_3\rightarrow 0$$ is defined over a field $K$, we mean that the algebraic groups $G_1,G_2, G_3$ are defined over $K$, that $f,g$ are morphisms of algebraic groups which are defined over $K$ [*and separable*]{}. Then $G_3$ is isomorphic (as an algebraic group) to $G_2/g(G_1)$ and we will often suppose that $G_1$ is a closed subgroup of $G_2$. ### Semiabelian varieties We now recall some very basic facts about semiabelian varieties (see for example [@Mumford]). We will be particularly interested in rationality issues, that is in the groups of $K$-rational points of some basic subgroups of $G(K)$. Recall that a [*semiabelian*]{} variety $G$ (over $K$) is an extension of an abelian variety by a torus, i.e. $$0 \rightarrow T \rightarrow G\rightarrow A\rightarrow 0$$ where $T$ is a torus defined over $K$, $A$ is an abelian variety defined over $K$ and the two morphisms are separable and defined over $K$ ($G$ is then also defined over $K$ in the usual sense as an algebraic group). The following facts hold when $K$ is separably closed: \(i) Let $T$ be a torus defined over $K$. Then $T$ is $K$-split, that is $T$ is isomorphic [*over $K$*]{} to some product of the multiplicative group, $({{\mathbb{G}_m}})^{\times n}$. Any closed subgroup of $T$ is then also defined over $K$.\ (ii) Semiabelian varieties are commutative and divisible, i.e. $G(\overline K ) $, the group of $\overline K$-rational points of $G$ is a commutative divisible group.\ (iii) Let $G$ be a semiabelian variety defined over $K$, then any closed connected subgroup of $G$ is defined over $K$. Over a separably closed field $K$ of characteristic $p>0$, the semiabelian varieties defined over $K$ are exactly the commutative divisible algebraic groups defined over $K$. The behaviour of the torsion elements of $G$ is particularly important. 
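Before the general statements, it may help to see the torus case worked out by hand (a routine check, included here for illustration):

```latex
% Take G = T = \mathbb{G}_m over a separably closed field K of
% characteristic p, so a = \dim A = 0 and t = \dim T = 1.
% For n prime to p, [n] : x \mapsto x^n is separable and
%   G[n] = \mu_n \cong \mathbb{Z}/n\mathbb{Z},
% of order n = n^{2a+t}; all n-th roots of unity lie in K because
% X^n - 1 is a separable polynomial.
% By contrast, [p] : x \mapsto x^p is purely inseparable of degree
% p = p^{2a+t}, with trivial kernel on points:
\mathbb{G}_m[p^n](\overline K) \;=\; \{x : x^{p^n} = 1\} \;=\; \{1\},
% so here r = 0 = a, and G[p'] = \bigcup_{(n,p)=1}\mu_n is an
% infinite, Zariski-dense, divisible subgroup of G(K).
```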
The next classical facts will enable us to fix some notation for the rest of the paper. Let $G$ be a semiabelian variety over $K$, written additively, and $$0 \rightarrow T \rightarrow G\rightarrow A\rightarrow 0,$$ with $\dim(A) = a$ and $\dim(T) = t$.\ [*1.*]{} If $n$ is prime to $p$, then $[n] : G \rightarrow G$, $x \mapsto nx$ is a separable isogeny of degree (= separable degree) $n^{2a + t}$. We denote by $G[n]$ the kernel of $[n]$, the points of $n$-torsion: $G[n] \cong ({\mathbb Z}/ n {\mathbb Z})^{2a +t}$. By separability, $G[n] = G[n](K)$.\ [*2.* ]{} $[p]: G \rightarrow G$ is an inseparable isogeny of degree $p^{2a +t}$, and of inseparable degree at least $p^{a+t}$. Hence there is some $r$, $0 \leq r \leq a$ such that, for every $n$, $$Ker [p^n] \, = \, G[p^n] \cong ({\mathbb Z}/ {p^n} {\mathbb Z})^{r}.$$ We say that $G$ is [*ordinary*]{} if $r = a$.\ As $G[p^n]$ is finite, it is contained in $G(\overline K)$, but not necessarily in $G(K)$.\ [*3.* ]{} Let $G[p^\infty]$ or $G[p^\infty](\overline K)$ denote the elements of $G$ with order a power of $p$, and $G[p']$ or $G[p'](\overline K)$ denote the elements of $G$ with order prime to $p$. Then $G[p'] = G[p'](K)$ is Zariski dense in $G$.\ Note that, even for $G$ ordinary, we may well have that $G[p^\infty](K) = \{0\}$. We will also need the following easy observations: Let $ 0 \rightarrow G_1 \rightarrow G_2\stackrel{f}{\rightarrow} G_3\rightarrow 0$ be an exact sequence of semiabelian varieties over $K$. Then\ 1. The restriction of $f$ to prime-to-$p$ torsion remains exact, i.e. $$0 \rightarrow G_1[p'] \rightarrow G_2[p'] \stackrel{f}{\rightarrow} G_3[p']\rightarrow 0,$$ 2. The restriction of $f$ to the $p^\infty$-torsion remains exact, i.e. $$0 \rightarrow G_1[p^{\infty}] \rightarrow G_2[p^{\infty}] \stackrel{f}{\rightarrow} G_3[p^{\infty}]\rightarrow 0,$$ 3.
It follows that $$0 \rightarrow Tor G_1 \rightarrow Tor G_2 \stackrel{f}{\rightarrow} Tor G_3\rightarrow 0,$$ where $Tor G$ denotes the group of all torsion elements of $G$. One need only check that for any $n$, if $a \in G_3[n]$, there is some $g \in G_2[n]$ such that $f(g) =a$: Let $h \in G_2(\overline K)$ be such that $f(h) = a$. Then $n f(h) = f(nh) = 0$, hence $n h \in G_1(\overline K)$. If $nh \not= 0$, by divisibility of $G_1(\overline K)$, let $t \in G_1(\overline K)$ be such that $nt = nh$ (if $nh = 0$, take $t = 0$). Then $f(h-t) = a$ and $h-t \in G_2[n]$. Divisibility by $p$ also behaves quite differently in $G(\overline K)$ and in $G(K)$. Let $$p^\infty G(K) := \bigcap_{n\geq 1} [p^n] G(K).$$ 1\. $G(K)$ is $n$-divisible for any $n$ prime to $p$.\ 2. For $n$ prime to $p$, for every $k$, $G[n] \subset [p^k] G(K)$.\ 3. $G[p']$ is a divisible subgroup of $G(K)$.\ 4. $p^\infty G(K)$ is $n$-divisible for any $n$ prime to $p$.\ 5. $p^\infty G(K)$ is infinite and Zariski dense in $G$.\ 6. $p^\infty G(K)$ is the biggest divisible subgroup of $G(K)$. 1\. Because $[n]$ is separable, $[n]$ induces a surjection from $G(K)$ onto $G(K)$ (by \[separablemorphisms\]).\ 2. Let $g \in G[n]$, $n$ prime to $p$. Let $a,b$ be integers such that $an+bp^k = 1$; then $an g + b p^k g = g = p^k (bg)$ and $bg \in G(K)$. Furthermore note that $bg$ has finite order prime to $p$.\ 3. Clear from the above.\ 4. For every $k$, $p^k G(K)$, being a homomorphic image of the group $G(K)$, is also $n$-divisible. It follows that $p^\infty G(K)$ is $n$-divisible.\ 5. By 2., $p^\infty G(K)$ contains $G[p']$, which is infinite and Zariski dense in $G$.\ 6. It suffices to show that $p^\infty G(K)$ is $p$-divisible. This will follow from the finiteness of $p^n$-torsion for every $n$.
Let $g$ be any element of $G(K)$ and consider the following tree, $T(g)$, indexed by finite sequences of elements of $\mathbb N$: $g_\emptyset = g$, and for any $g_s$ in the tree, the successors of $g_s$ are the finite number of elements $g_{s\smallfrown i}$ in $G(K)$ such that $[p] (g_{s\smallfrown i}) = g_s$ (by finiteness of $p$-torsion). Then: – for any $s$ of length $n>0$, $[p] g_{s\smallfrown i} = g_s$, in particular for any $s$ of length $n$, $[p^n] g_s = g$. Conversely, if, for some $n$, $[p^n] h = g$ and $h \in G(K)$, then $h = g_s$ for some $s$ of length $n$, – If $g \in p^\infty G(K)$, the tree $T(g)$ is infinite. It is a finitely branching tree, so by Koenig’s Lemma, it must have an infinite branch. – Conversely, suppose that there is an infinite branch in $T(g)$, then $g \in p^\infty G(K)$. So let $g \in p^\infty G(K)$ and let $X= \{g_s\}$ be the set of elements along an infinite branch of $T(g)$. Then for any $k$, $[p^k] g_s =g$ where $s$ has length $k$. And $T(g_s)$ has an infinite branch, $X_s = \{g_t \in X ; t \supset s\}$, so $g_s \in p^\infty G(K)$. In particular, taking $s$ of length $1$, we get $g = [p] g_{s}$ with $g_{s} \in p^\infty G(K)$, so $p^\infty G(K)$ is indeed $p$-divisible. Various equivalent characterizations of ${p^{\infty} G(K)}$ were given in [@BD2]. But the following one was omitted at the time. Recall that an [*infinitely definable*]{} set in $K^n$, denoted [[ $\Conj$-definable ]{}]{} set, is a subset of $K^n$ which is the intersection of a small (size strictly smaller than the cardinality of $K$) collection of definable subsets of $K^{n}$. Suppose that $K$ is $\omega_1$-saturated. Let $G$ be a semiabelian variety defined over $K$. Then ${p^{\infty} G(K)}$ is the smallest [ $\Conj$-definable ]{}subgroup of $G(K)$ which is Zariski dense in $G$. Let $H$ be any [ $\Conj$-definable ]{}subgroup of $G(K)$, also Zariski dense in $G$. By stability, $H$ is a decreasing intersection of definable subgroups of $G(K)$, $(H_i)_{i\in I}$. Certainly each $H_i$ is itself Zariski dense in $G$. By [@BD1], the connected component $C_i$ of $H_i$ is also definable in $G(K)$ and has finite index in $H_i$.
It follows that it is also Zariski dense in $G$. Now, for every $r\geq 1$ the (definable) subgroup $[p^r] C_i$ is also Zariski dense in $G$. It follows by compactness that $\cap_{n\geq 1} [p^n] C_i$ is also Zariski dense in $G$. But $\cap_{n\geq 1} [p^n] C_i$ is a divisible group, and ${p^{\infty} G(K)}$ is the unique divisible subgroup of $G(K)$ which is Zariski dense in $G$ ([@BD2], Prop. 3.6). So ${p^{\infty} G(K)}= \cap_{n\geq 1} [p^n] C_i$ for every $i$ and is hence contained in $H$.

### Isogenies and descent in characteristic $p$

We will not necessarily directly use all the classical facts about isogenies recalled below, but they give a picture of the various problems linked to descent questions in characteristic $p$. In this section, $K$ is any separably closed field of characteristic $p>0$, and $G$ and $H$ are semiabelian varieties defined over $K$. Note first that if $f$ is any morphism (= morphism of algebraic groups) from $G$ to $H$, then $f$ is automatically defined over $K$: by \[subgroups\], the graph of $f$, which is a closed connected subgroup of $G \times H$, is also defined over $K$. Recall that an [*isogeny*]{} is a surjective morphism of algebraic groups with finite kernel. It is classical that if $G$ is a semiabelian variety over $K$, for every $n$ the $n^{th}$-Frobenius isogeny $Fr^n : G \longrightarrow Fr^n G$ ($Fr^n G$ is then defined over $K^{p^n}$) admits a dual isogeny, the $n^{th}$-Verschiebung, denoted $V_n : Fr^n G \longrightarrow G$, such that $V_n \circ Fr^n = [p^n]_G$ and $Fr^n \circ V_n = [p^n]_{Fr^n G}$. It is easily seen, counting degrees, that: If $G$ is ordinary, then for every $n$, the Verschiebung $V_n$ is separable. Let $G$ be a semiabelian variety defined over $K$. Then if $a \in p^n G(K)$, there exists $b \in G(K)$ such that $a\in K(b^{p^n})$. So if $G$ is defined over $K^{p^n}$, then $[p^n] G(K) \subset G(K^{p^n})$ and in particular ${p^{\infty} G(K)}= p^{\infty} G(K^{p^n})$.
Consider the $n^{th}$-Verschiebung $V_n$, described above. If $a \in p^n G(K)$, then $a = p^n b$ for some $b \in G(K)$, and $a = V_n (b^{p^n})$. If $G$ is defined over $K^{p^n}$, then the Verschiebung is also defined over $K^{p^n}$ and $a \in K^{p^n}(b^{p^n}) = K^{p^n}$. Abelian varieties have one specific very important property: Let $A$ be an abelian variety defined over $K$. Then $A$ is isogenous over $K$ to a finite product of simple abelian varieties (i.e. ones which have no proper nontrivial closed connected subgroup). Let $K_0 < K_1$, with $K_0$ algebraically closed, and let $G_1$ be a semiabelian variety defined over $K_1$. We will say that $G_1$ [*descends*]{} to $K_0$ if there is a semiabelian variety $G_0$, defined over $K_0$, and an [**isomorphism**]{} $f$ between $G_1$ and $G_0 \times_{K_0} K_1$. In characteristic $0$, any semiabelian variety which is isogenous to one defined over some algebraically closed $K_0$ descends, in the sense above, to $K_0$ (the proof is identical to that of the following lemma). The situation is more complicated in characteristic $p$. Let $f$ be a separable isogeny from $G_1$ to $H_1$, both being semiabelian varieties. If $G_1$ is defined over some algebraically closed field $K_0$, then $H_1$ descends to $K_0$. As $f$ is a separable isogeny, the kernel of $f$ is a finite closed subgroup $H$ of $G_1(K_0)$, of cardinality the degree (= separable degree) of $f$. Then $G':= G_1/H$ is a semiabelian variety defined over $K_0$, and $f$ induces an isomorphism from $H_1$ onto $G'$. The following is more complicated but also classical. Let $K_0\subset K_1$, with $K_0$ algebraically closed. Let $A$ be an abelian variety defined over $K_1$, $B$ an abelian variety defined over $K_0$ and $f$ a separable isogeny from $A$ onto $B$. Then $A$ is isomorphic to $A'\times_{K_0} K_1$ for some abelian variety $A'$ over $K_0$. This is a particularly simple case of the “Proper base change theorem” (see for example [@SGA1] or [@Milne]).
Consider the kernel $N$ of $f$, which is a finite subgroup of $A(K_1)$. The set of abelian varieties over $K_{1}$ which contain $N$ and whose quotient by $N$ is isomorphic to $B\times_{K_{0}}K_{1}$ is parametrized by a certain cohomology group $H^1(B\times_{K_0} K_1,N)$. Now let $N'$ be an algebraic group (finite of course) defined over $K_{0}$ which is isomorphic to $N$. The base change theorem says that $H^1(B\times_{K_0} K_1, N)$ is isomorphic to $H^1(B,N')$, and through this isomorphism, $A$ will be isomorphic to some $A_0 \times_{K_0} K_1$, for $A_0$ defined over $K_0$. In the case of dimension one, one does not need the assumption that $f$ is separable: Let $K_0 < K$, with $K_0$ algebraically closed. Let $A$ be an elliptic curve defined over $K$, $B$ an elliptic curve defined over $K_0$, and $f$ an isogeny from $A$ onto $B$. Then $A$ is isomorphic to $B'\times_{K_0} K$ for some elliptic curve $B'$ over $K_0$. First go up to $\overline K$, the algebraic closure of $K$, and consider the situation over $\overline K$. By the remark at the beginning of the section, it suffices to show that there exists an isomorphism $g$, defined over $\overline K$, from $A\times_K \overline{K}$ to some $B'\times_{K_0} \overline{K}$ where $B'$ is defined over $K_0$. So we can suppose that $K$ itself is algebraically closed.\ Consider the inverse isogeny $h$ from $B$ onto $A$, defined over $K$. As $K$ is perfect, the isogeny $h$ factors through some power of the Frobenius (see for example [@silverman]): $$B \rightarrow Fr^n B \stackrel{g}\rightarrow \, A$$ where $g$ is now a separable isogeny from $Fr^n B$ onto $A$, defined over $K$. As $K_0$ is algebraically closed, and $Fr^n B$ is also defined over $K_0$, Lemma \[easyseparableisogeny\] now applies.
In fact, we have given here the proof for \[ellipticdescent\] as it is particularly simple, but the above result is also a direct consequence of the fact (Corollary \[ordinaryisogenydescent\]) that if $A$ is an ordinary abelian variety which is isogenous to one defined over some algebraically closed field $K_0$, then $A$ descends to $K_0$. In dimension bigger than 1, the above is no longer true for inseparable isogenies, in the case of non-ordinary abelian varieties: For any abelian variety $A$ there is a one-to-one correspondence between (isomorphism classes of) purely inseparable isogenies and sub-$p$-Lie algebras of $\mathrm{Lie}(A)$ (see [@Serre] or [@Mumford]). It follows that for any supersingular elliptic curve $E$ over $\overline {\mathbb F_p}$, there is an abelian variety $A$, isogenous to $E\times E$, which cannot be isomorphic to any abelian variety defined over $\overline {\mathbb F_p}$.

Relative Morley Rank
--------------------

In this section $T$ will be a complete theory, and we work in a given very saturated model $M$, of cardinality $\kappa$ say. We will here define [*relative*]{} Morley rank, namely Morley rank inside a given [[ $\Conj$-definable ]{}]{} set. This was called [*internal Morley dimension*]{} in [@Hrushovski]. By an [[ $\Conj$-definable ]{}]{} set (infinitely definable set) we mean a subset of some $M^{n}$ which is the intersection of a small (size $< \kappa$) collection of definable subsets of $M^{n}$ (that is, the set of realizations of a partial type over a small set of parameters). We will fix an [[ $\Conj$-definable ]{}]{} set $X\subseteq M^{n}$. If $X$ is an infinitely definable subset of $M^n$, by a [*relatively definable*]{} subset of $X$ we mean a subset of the form $Z = X\cap Y$ for $Y\subseteq M^{n}$ definable with parameters. Then we can define in the usual way Morley rank for relatively definable subsets $Z$ of $X$: (i) $RM_{X}(Z) \geq 0$ if $Z$ is nonempty.
(ii) $RM_{X}(Z) \geq \alpha + 1$ if there are $Z_{i}\subseteq Z$ for $i < \omega$ which are relatively definable subsets of $X$, such that $Z_{i}\cap Z_{j} = \emptyset$ for $i \neq j$ and $RM_{X}(Z_{i}) \geq \alpha$ for all $i$. (iii) for limit ordinal $\alpha$, $RM_{X}(Z) \geq \alpha$ if $RM_{X}(Z) \geq \delta $ for all $\delta < \alpha$. As in the absolute case we obtain (relative) Morley degree. Namely suppose that $RM_{X}(Z) = \alpha < \infty$. Then there is a greatest positive natural number $d$ such that $Z$ can be partitioned into $d$ relatively definable (in $X$) sets $Z_{i}$ such that $RM_{X}(Z_{i}) = \alpha$ for all $i$. We will say that $X$ has relative Morley rank if $RM_{X}(X) < \infty$. \(i) Suppose that $Y$ is a relatively definable subset of $X$. Then $RM_{X}(Y) = RM_{Y}(Y)$. (ii) We can also talk about the relative Morley rank $RM_{X}(p)$ of a complete type $p$ of an element of $X$ over a set of parameters. It will just be the infimum of the relative Morley ranks of the (relatively) definable subsets of $X$ which are in $p$. (iii) Suppose that $T$ is countable and $X$ is [[ $\Conj$-definable ]{}]{} over a countable set of parameters $A_{0}$. Then $X$ has relative Morley rank if and only if for any countable set of parameters $A\supseteq A_{0}$ there are only countably many complete types over $A$ extending $X$. Now suppose that $X,Y$ are [[ $\Conj$-definable ]{}]{} sets and $f: X \to Y$ is a surjective definable function. By definability of $f$ we mean that $f$ is the restriction to $X$ of some definable function on a definable superset of $X$. Note that then each fibre $f^{-1}(c)$ of $f$ is a relatively definable subset of $X$, so we can talk about its relative Morley rank (with respect to $X$ or to itself, which will be the same by Remark 2.2 (i)). Suppose $X,Y$ are [[ $\Conj$-definable ]{}]{} sets and $f:X\to Y$ is surjective and definable. (i) Suppose that $RM_{Y}(Y) = \beta$ and for each $c\in Y$, $RM_{X}(f^{-1}(c)) \leq \alpha$.
Then $RM_{X}(X)\leq \alpha(\beta + 1)$ if $\alpha > 0$, and $\leq \beta$ if $\alpha = 0$. (ii) $RM_{Y}(Y) \leq RM_{X}(X)$. \(i) This is proved in the definable (absolute) case by Shelah [@Shelah] (Chapter V, Theorem 7.8) and Erimbetov [@Erimbetov]. Martin Ziegler [@Ziegler] also gives a self-contained proof which adapts immediately to our more general context. (ii) is easier, and has the same inductive proof as in the definable (absolute) case, bearing in mind that because $f$ is the restriction to $X$ of a definable function on a definable superset of $X$, the preimage under $f$ of any relatively definable subset of $Y$ is a relatively definable subset of $X$. If $X = G$ is an [[ $\Conj$-definable ]{}]{} group with relative Morley rank then the general theory of totally transcendental groups applies, for example giving $DCC$ on (relatively) definable subgroups, theory of generics, stabilizers, connected components, etc. Likewise if $G$ has finite relative Morley rank then the general theory of definable groups of finite Morley rank applies. If one assumes stability of the ambient theory $T$, some of these facts may be easier to see (using for example the fact that $G$ will be an intersection of definable groups). As our intended application or example is the stable theory $CHF_{p,e}$, there is no harm in assuming stability, but we emphasize that it is not required. We now consider an exact sequence of [[ $\Conj$-definable ]{}]{} groups $1 \rightarrow G_{1} \rightarrow G_{2} \stackrel{h}{\rightarrow} G_{3} \rightarrow 1$. We will assume that $G_{1} = Ker(h) \subseteq G_{2}$, and note again that $G_{1}$ is then a relatively definable (normal) subgroup of $G_{2}$. With this notation we have: \(i) Suppose $G_{1}$ and $G_{3}$ have (finite) relative Morley rank. Then so does $G_{2}$. (ii) Moreover if $G_{1}$, $G_{3}$ have finite relative Morley ranks $k,s$ respectively, then $RM_{G_{2}}(G_{2}) = k+s$. \(i) follows immediately from Lemma \[mapRMR\].
(ii): By part (i) $G_{2}$ has finite relative Morley rank. But then the proof that $U$-rank and Morley rank coincide in definable groups of finite Morley rank (see [@Pillay-Pong], Remark B.2(iii) for example) goes through in the present context to show that for complete types of elements of $G_{2}^{eq}$, $U$-rank coincides with relative Morley rank (as defined in Remark 2.2(ii)). In particular relative Morley rank on types is additive, so if $b$ realizes the generic type of $G_{2}$ (over a base set of parameters), then as $h(b)$ realizes the generic type of $G_{3}$ and $tp(b/h(b))$ is the generic of a translate of $G_{1}$, we see, writing $RM(b)$ for the relative Morley rank of $tp(b)$ etc., that $RM(b) = RM(b,h(b)) = RM(b/h(b)) + RM(h(b))$. Hence $RM_{G_{2}}(G_{2}) = RM_{G_{1}}(G_{1}) + RM_{G_{3}}(G_{3})$.

The $\sharp$ functor and descent to the constants
=================================================

The $\sharp$ functor
--------------------

Here $K$ will be either a separably closed field of characteristic $p>0$ and finite degree of imperfection, or a differentially closed field of characteristic $0$ (so with distinguished derivation $\partial$). We distinguish the cases by “characteristic $p$", “characteristic $0$". In the characteristic $p$ case we will take $K$ to be, say, $\omega_{1}$-saturated. Definability will mean in the sense of the structure $K$. $G$ will be a semiabelian variety defined over $K$. In the characteristic $0$ case, as $DCF_{0}$ is $\omega$-stable we have $DCC$ on definable subgroups of a definable group, so any [[ $\Conj$-definable ]{}]{} group is definable. In the characteristic $p$ case, by stability, any [[ $\Conj$-definable ]{}]{} subgroup is an intersection of at most countably many definable groups. $G^{\sharp}$ is the smallest [[ $\Conj$-definable ]{}]{} subgroup of $G(K)$ which is Zariski-dense in $G$. By Proposition \[Zardense\], in characteristic $p$, $G^{\sharp}$ coincides with ${{p^{\infty} G(K)}}$.
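As a sanity check on the definition (our illustration, in the notation above), $G^{\sharp}$ can be computed directly for the multiplicative group in characteristic $p$:

```latex
% For G = \mathbb{G}_m over a separably closed K of characteristic p,
% [p^n] is x \mapsto x^{p^n}, so
[p^n]\,\mathbb{G}_m(K) \;=\; (K^{\times})^{p^n} \;=\; (K^{p^n})^{\times},
% and hence
\mathbb{G}_m^{\sharp} \;=\; p^{\infty}\mathbb{G}_m(K)
  \;=\; \bigcap_{n\geq 1} (K^{p^n})^{\times}
  \;=\; (K^{p^{\infty}})^{\times}
  \;=\; \mathbb{G}_m({\mathcal{C}}).
% So for a torus, G^{\sharp} is exactly the group of points in the
% field of (absolute) constants; this is consistent with the fact
% that \mathbb{G}_m is defined over the prime field, hence over C.
```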
In characteristic $0$, $G^{\sharp}$ is sometimes called the “Manin kernel" (see [@Marker-quaderni]). In any case alternative characterizations and key properties are given in the following lemma. \(i) $G^{\sharp}$ can also be characterized as the smallest [[ $\Conj$-definable ]{}]{} subgroup of $G(K)$ which contains the (prime-to-$p$, in the char. $p$ case) torsion of $G$. (ii) $G^{\sharp}$ is connected (no relatively definable subgroup of finite index), and of finite $U$-rank in char. $p$ and finite Morley rank in char. $0$. (iii) If $G$ is defined over the constants ${\mathcal{C}}$ of $K$, then $G^\sharp = G(\cal C )$. \(i) Note first that the (prime-to-$p$) torsion is contained in $G(K)$. In the characteristic $p$ case, $G^{\sharp} = {{p^{\infty} G(K)}}$ does contain the prime-to-$p$ torsion. On the other hand, as the prime-to-$p$ torsion is Zariski-dense in $G$, any subgroup of $G$ containing the prime-to-$p$ torsion is Zariski-dense. So the lemma is established in characteristic $p$. The characteristic $0$ case is well-known and due originally to Buium. See for example Lemma 4.2 of [@Pillay-countable] where it is proved that any definable Zariski-dense subgroup of a connected commutative algebraic group $G$ contains $Tor(G)$. (ii) $G^\sharp$ is connected as any finite index subgroup of a Zariski-dense subgroup is also Zariski-dense. In the characteristic $0$ case, Buium [@Buium1] showed that $G^\sharp$ has finite Morley rank. An account, using $D$-groups, appears in [@BePi]. In the characteristic $p$ case, finite $U$-rank of $G^{\sharp}$ was shown by Hrushovski in [@Hrushovski], and follows easily from Lemma \[divweil2\]. (iii) In characteristic $p$, this is a direct consequence of Lemma \[divweil2\]. In characteristic $0$ it can be seen as follows: Assume $G$ to be defined over ${\mathcal{C}}$. Note that $G(\cal C)$ is definable in the differentially closed field $K$. As ${\mathcal{C}}$ is algebraically closed, $G({\mathcal{C}})$ is Zariski-dense in $G(K)$.
(True for any variety defined over ${\mathcal{C}}$.) If $H$ is an [[ $\Conj$-definable ]{}]{} subgroup of $G(K)$, properly contained in $G(\cal C)$, then $H$ is clearly an algebraic subgroup of $G(\cal C)$; but then $H(K)$ is a proper algebraic subgroup of $G(K)$ containing $H$, so $H$ cannot be Zariski-dense in $G(K)$. Let $G,H$ be semiabelian varieties defined over $K$, and $f:G\to H$ a (not necessarily separable) rational homomorphism, also defined over $K$. Then (i) $f(G^{\sharp}) \subseteq H^{\sharp}$. (ii) If $f$ is (geometrically) surjective then $f(G^{\sharp}) = H^{\sharp}$. \(i) Let $Tor_{p'}(G)$ be the prime-to-$p$ torsion (so all the torsion in char. $0$). Note that $f(Tor_{p'}(G))\subseteq Tor_{p'}(H)$. If (i) fails then $C = f(G^{\sharp})\cap H^{\sharp}$ is a proper [[ $\Conj$-definable ]{}]{} subgroup of $f(G^{\sharp})$ which by Lemma 3.3 contains $f(Tor_{p'}(G))$. But then $f^{-1}(C)\cap G^{\sharp}$ is an [[ $\Conj$-definable ]{}]{} subgroup of $G(K)$ which contains $Tor_{p'}(G)$ and is properly contained in $G^{\sharp}$, contradicting Lemma \[othercharacterizations\]. (ii) If $f$ is geometrically surjective then (by $\omega_1$-saturation in the characteristic $p$ case) $f(G^{\sharp})$ is [[ $\Conj$-definable ]{}]{} and it must be Zariski-dense in $H$. By part (i), and the definition of $H^{\sharp}$, $f(G^{\sharp}) = H^{\sharp}$. (Characteristic $p$) Let $f:G\to H$ be as in the hypothesis of Lemma 3.4 (ii). If $f$ is separable (that is, induces a separable extension of function fields) then, as we remarked in Proposition \[separablemorphisms\], $f|G(K): G(K) \to H(K)$ is surjective. If $f$ is not separable, $f$ may no longer be surjective at the level of $K$-rational points, but nevertheless Lemma \[surjectivity\](ii) says it is surjective on the $\sharp$-points when $K$ is $\omega_1$-saturated.
By Lemma 3.4 (i) we can consider $\sharp$ as a functor from the category of semiabelian varieties over $K$ to the category of [[ $\Conj$-definable ]{}]{} groups in $K$. It is natural to ask whether $\sharp$ preserves exact sequences, and this is an important theme of the paper. Recall that by an [*exact sequence of algebraic groups*]{} defined over a field $K$, we mean that the homomorphisms are not only defined over $K$ but also separable. So we will be considering the situation of semiabelian varieties $G_{2}, G_{3}$ defined over $K$, a separable surjective rational homomorphism $f:G_{2}\to G_{3}$ defined over $K$, with $Ker(f) = G_{1}$ connected and thus a semiabelian subvariety of $G_{2}$ defined over $K$. Then the sequence $0 \to G_1(K) \to G_2(K) \to G_3(K) \to 0$ clearly remains exact (in the category of definable groups in $K$), using, say, 2.3 in the characteristic $p$ case. By Lemma \[surjectivity\] the sequence $$0 \to G_{1}^{\sharp} \to G_{2}^{\sharp} \to G_{3}^{\sharp} \to 0$$ will be exact if and only if $$G_{1}^{\sharp} = G_{1}(K)\cap G_{2}^{\sharp}.$$ So the group $(G_{1}(K)\cap G_{2}^{\sharp})/G_{1}^{\sharp}$ is the obstruction to exactness. In the characteristic $0$ case this group, which is clearly of finite Morley rank, can be seen to be connected and embeddable in a vector group. By Lemma 4.2 of [@Pillay-countable] for example, $G_1(K)/G_1^{\sharp}$ (as a group definable in $K$ by elimination of imaginaries) embeds definably in $(K,+)^{n}$ for some $n$. Hence $(G_2^{\sharp}\cap G_1(K))/G_1^{\sharp}$ also embeds in $(K,+)^{n}$, and as such is a (finite-dimensional) vector space over the field of constants of $K$. Hence $(G_2^{\sharp}\cap G_1(K))/G_1^{\sharp}$ is connected. Note that, as $G_1^{\sharp}$ is also connected, it follows that $G_2^{\sharp}\cap G_1(K)$ itself is also connected. The characteristic $p$ case is different in an interesting way.
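To keep a concrete case in mind (a standard $DCF_{0}$ example, not part of the argument above): the embedding of $G_1(K)/G_1^{\sharp}$ into a vector group can be made explicit for the multiplicative group, via the logarithmic derivative.

```latex
\ell : \mathbb{G}_m(K) \longrightarrow (K,+), \qquad \ell(x) = \frac{\partial x}{x},
\qquad
\operatorname{Ker}(\ell) = \{x \in K^{\times} : \partial x = 0\} = \mathbb{G}_m(\mathcal{C}) = \mathbb{G}_m^{\sharp},
\qquad
\mathbb{G}_m(K)/\mathbb{G}_m^{\sharp} \;\cong\; (K,+).
```

Here $\ell$ is a definable surjective homomorphism (surjectivity because $DCF_{0}$ realizes $\partial x = a x$ with $x\neq 0$ for each $a\in K$), its kernel is $\mathbb{G}_m(\mathcal{C})$, which equals $\mathbb{G}_m^{\sharp}$ by Lemma \[othercharacterizations\](iii), and the induced isomorphism is an instance of the definable embedding of $G_1(K)/G_1^{\sharp}$ into $(K,+)^{n}$.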
Note first that the group $(G_{1}(K)\cap G_{2}^{\sharp})/G_{1}^{\sharp}$ is not even infinitely definable; it is the quotient of two [ $\Conj$-definable ]{} groups. Such groups are usually called “hyperdefinable”. We will recall the (model-theoretic) definition of the connected component. First, if $G$ is an [ $\Conj$-definable ]{} group in a stable theory, then we have DCC on intersections of uniformly relatively definable subgroups (see [@poizat] or [@wagner]). What this means is that if $\phi(x,y)$ is a formula, then the intersection of all subgroups of $G$ relatively defined by some instance of $\phi(x,y)$ is a finite subintersection. It follows that, working in a saturated model say, the intersection of all relatively definable subgroups of $G$ of finite index is the intersection of at most $|L|$ many (where $L$ is the language). We call this intersection, $G^{0}$, the [*connected component*]{} of $G$. It is normal, and type-definable over the same set of parameters as $G$. Moreover $G/G^{0}$ is naturally a profinite group. In the $\omega$-stable case (or the relative finite Morley rank case as in section \[Morleyrank\]), by DCC on relatively definable subgroups, $G^0$ will itself be relatively definable and of finite index in $G$. (Characteristic $p$) Let $G_{1}$ be a semiabelian subvariety of the semiabelian variety $G_{2}$, both defined over $K$. Then $G_{1}^{\sharp}$ is the connected component of $G_{1}(K)\cap G_{2}^{\sharp}$. First, by \[surjectivity\], $G_{1}^{\sharp}$ is a subgroup of $G_{1}(K)\cap G_{2}^{\sharp}$. By Lemma \[othercharacterizations\], $G_{1}(K)\cap G_{2}^{\sharp}$ is [[ $\Conj$-definable ]{}]{} of finite $U$-rank. Hence, for any [ $\Conj$-definable ]{} subgroup $H$ of $G_{1}(K)\cap G_{2}^{\sharp}$, classical $U$-rank inequalities for groups give us that $U(H[n]) + U([n]H) = U(H)$. As for each $n$ the $n$-torsion of $H$ is finite, $U(H[n])= 0$; hence for any $n$, $[n]H$ has finite index in $H$.
It follows that any [[ $\Conj$-definable ]{}]{} subgroup of $G_{1}(K)\cap G_{2}^{\sharp}$ is connected iff it is divisible. But $G_{1}^{\sharp}$ is the maximal divisible subgroup of $G_{1}(K)$. Thus $G_{1}^{\sharp}$ must coincide with the connected component of $G_{1}(K)\cap G_{2}^{\sharp}$. By Lemma \[connectedcomponent\], the quotient $(G_{1}(K)\cap G_{2}^{\sharp})/G_{1}^{\sharp}$ is a profinite group. If $G_{2}^{\sharp}$ had relative Morley rank, the quotient would have to be finite (as remarked before Lemma 3.6). We will see in section \[sectionexactness\] an example where the quotient is infinite, and give an explicit description of this quotient in terms of suitable Tate modules. For the record we now mention cases (in characteristic $p$) where $G^{\sharp}$ has (finite) relative Morley rank. (Characteristic $p$). Let $G$ be a semiabelian variety over $K$. Then (i) If $G$ descends to ${{K^{p^{\infty}}}}$ (in particular if $G$ is an algebraic torus) then $G^{\sharp}$ has finite relative Morley rank. (ii) If $G = A$ is an abelian variety then $G^{\sharp}$ has finite relative Morley rank. \(i) We may assume that $G$ is defined over ${{K^{p^{\infty}}}}$. Then by \[divweil2\], $G^{\sharp} = {{p^{\infty} G(K)}} = G({{K^{p^{\infty}}}})$. As ${K^{p^{\infty}}}$ is a “pure” algebraically closed field inside $K$, $G({{K^{p^{\infty}}}})$ has relative Morley rank equal to the (algebraic) dimension of $G$. (ii) The abelian variety $A$ is isogenous to a product of simple abelian varieties. So we may reduce to the case where $A$ is simple. In that case $A^\sharp$ has no proper infinite definable subgroup (2.16 in [@Hrushovski] or Cor. 3.8 in [@BD2]). By stability, $A^\sharp$ has no proper infinite [[ $\Conj$-definable ]{}]{} subgroup. We will now use an appropriate version of Zilber’s indecomposability theorem to see that $A^{\sharp}$ has finite relative Morley rank.
As $A^{\sharp}$ has finite $U$-rank, there is some small submodel $K_{0}$ (over which $A^{\sharp}$ is defined) and a complete type $p(x)$ over $K_{0}$ extending “$x\in A^\sharp$”, which has $U$-rank $1$ (and is of course stationary). Let $Y\subseteq A^\sharp$ be the set of realizations of $p$. Then $Y$ is an [[ $\Conj$-definable ]{}]{} subset of $A^\sharp$ which is “minimal”, namely $Y$ is infinite and every relatively definable subset of $Y$ is either finite or cofinite. We claim that $Y$ is “indecomposable” in $A^\sharp$, namely for each relatively definable subgroup $H$ of $A^\sharp$, $|Y/H|$ is $1$ or infinite. For if not, then as remarked earlier the intersection of all the images of $H$ under automorphisms fixing $K_{0}$ pointwise will be a finite subintersection $H_{0}$, now defined over $K_{0}$, and we will have $|Y/H_{0}| >1$ and finite, contradicting stationarity (or even completeness) of $p$. Let now $X$ be a translate of $Y$ which contains the identity $0$. Then $X$ is still a minimal [[ $\Conj$-definable ]{}]{} subset of $A^\sharp$. Moreover Theorem 3.6.11 of [@wagner] applies to this situation, to yield that the subgroup $B$, say, of $A^\sharp$ which is generated by $X$ is [[ $\Conj$-definable ]{}]{} and moreover of the form $X + X + \cdots + X$ ($m$ times) for some $m$. As noted above, it follows that $B = A^\sharp$, and so the addition map $f:X^{m} \to A^\sharp$ is a definable surjective function between [[ $\Conj$-definable ]{}]{} sets, in the sense of section 2.3. But as $X$ is minimal, clearly $RM_{X}(X) = 1$ and $RM_{X^{m}}(X^{m}) = m$. By Lemma 2.18 (ii), $A^\sharp$ has finite relative Morley rank too.

D-structures and descent
------------------------

Here again, we consider a model $(K,\partial)$ of $DCF_0$ or of $CHF_{p,1}$, where in the latter case it is convenient to assume $\omega_1$-saturation.
In order to relate some properties of $G^{\sharp}$ with descent to the constants, we introduce the tool of prolongations and D-structures.\ We first give an ad hoc description of the prolongations. A more systematic definition can be found in [@Buium2] or [@Vojta].\ If $V\subseteq {\mathbb{A}}^m$ is a smooth irreducible algebraic variety over $K$, we define the $n$-th prolongation of $V$ to be the Zariski-closure of the image of $V(K)$ by $\partial_{\leq n}:=(\partial_0, \ldots,\partial_n)$, $$\Delta_nV:=\overline{\{\partial_{\leq n}(x) \colon x\in V(K)\}} \subseteq {\mathbb{A}}^{m(n+1)}.$$ This construction has functorial properties which allow us to build $\Delta_nV$ for any smooth irreducible variety over $K$, with the definable map $\partial_{\leq n}:V(K) \longrightarrow \Delta_n V(K)$ having Zariski-dense image. For $m\ge n \ge 0$, we have a natural projection morphism $\pi_{m,n} : \Delta_m V \longrightarrow \Delta_nV$ such that $\pi_{m,n}\circ \partial_{\le m} = \partial_{\le n}$.\ In the case where $V=G$ is a connected algebraic group, each $\Delta_n G$ has a natural structure of algebraic group and the maps $\partial_{\le n}$, $\pi_{m,n}$ are homomorphisms. Let $G$ be a connected algebraic group defined over $K$. A D-structure on $G$ is a sequence of homomorphic regular sections $s=(s_n)_{n\in {\mathbb{N}}}$ for the projective system $(\pi_{m,n} : \Delta_m G \longrightarrow \Delta_n G)_{m\ge n \ge 0}$, i.e. each $s_n : G \longrightarrow \Delta_n G$ is a regular homomorphism defined over $K$, and these homomorphisms satisfy $\pi_{m,n} \circ s_m = s_n$ and $s_0=\text{id}_G$. For $(G,s)$ an irreducible algebraic group with a D-structure over $K$, and $L$ an extension of $K$, we denote by $(G,s)^{\partial}(L)$ the [ $\Conj$-definable ]{} subgroup of $G(L)$, $$(G,s)^{\partial}(L)=\{x\in G(L) \colon \partial_{\le n}(x)=s_n(x) \text{ for all } n\ge 0\}.$$ Let $G$ be a semiabelian variety over $K$.
In order to define a D-structure on $G$, it suffices that, for some (any) generic point $g$ of $G^{\sharp}(L)$ over $K$ ($L$ an extension of $K$), for any $n\ge 0$, $\partial_n(g)\in K(g)$. Indeed, because $G^{\sharp}$ is Zariski-dense in $G$, such a property induces a rational map from $G(L)$ to $\Delta_n G (L)$, which can be extended to a homomorphism $s_n$ by a classical stability argument. We obtain in that way a D-structure on $G$ because $s_n$ coincides with $\partial_{\le n}$ on the Zariski-dense subgroup $G^{\sharp}$, and the $\partial_{\le n}$’s give a sequence of definable sections by definition.\ In particular, if $G$ is defined over the constants ${\mathcal{C}}$, for each $g\in G^{\sharp}=G({\mathcal{C}})$, $\partial_n(g)=0$ for $n\ge 1$, hence we can define a natural D-structure on $G$. The following two results are a converse of this observation. For each $n\ge 0$, the kernel of $\pi_{n,0}: \Delta_nG \longrightarrow G$ is a unipotent group (see [@Pillay-countable] in characteristic $0$ or [@Benoist] in arbitrary characteristic). It follows that $G$ admits at most one D-structure, since the difference between two sections is a homomorphism $G\longrightarrow \text{Ker}(\pi_{n,0})$ from a semiabelian variety to a unipotent group, hence zero. Let $G$ be a semiabelian variety over $K$ with a D-structure. Then $G$ descends to the constants. In the characteristic $0$ case, this result appears implicitly in [@Buium1], but see Lemma 3.4 in [@BePi] for more explanations.\ In the characteristic $p$ case, it is proved in [@BeDe] (proof of Theorem 4.4) that such a semiabelian variety $G$ descends to $K^{p^n}$ for every $n$ (this is actually equivalent). Then it is shown, using moduli spaces, that if $G$ is an abelian variety, $G$ descends to $K^{p^n}$ for every $n$ if and only if $G$ descends to ${\mathcal{C}}=\bigcap_n K^{p^n}$. The general case will follow from the lemma below.
(Characteristic $p$) Let us consider a semiabelian variety $G$, given by an exact sequence $$0 \longrightarrow T \longrightarrow G \stackrel{f}{\longrightarrow} A \longrightarrow 0,$$ and suppose that $G$ descends to $K^{p^n}$ for all $n$. Then the same is true for $A$, and both descend to ${\mathcal{C}}= K^{p^\infty}$. Let us consider the commutative diagram in which $\Delta_n f : \Delta_n G \longrightarrow \Delta_n A$ lies over $f: G \longrightarrow A$, via the projections $\pi_{n,0}$. From our hypothesis, there is a D-structure on $G$, given by sections $s_n:G\longrightarrow \Delta_n G$. We claim that there is an induced D-structure on $A$. Indeed, $s_n(T)$ has to lie inside the linear part $H$ of $\Delta_n G$, which is given by the exact sequence $$0 \longrightarrow H \longrightarrow \Delta_n G \stackrel{f \circ \pi_{n,0}}{\longrightarrow} A \longrightarrow 0,$$ and since $f \circ \pi_{n,0} = \pi_{n,0} \circ \Delta_n f$, $$0 \longrightarrow \text{Ker}(\Delta_n f) \longrightarrow H \stackrel{\Delta_n f}{\longrightarrow} \text{Ker}(\pi_{n,0}) \longrightarrow 0.$$ It follows that $\Delta_n f \circ s_n (T)$ lies in the unipotent group $\text{Ker}(\pi_{n,0})$, hence is $0$, as there is no nonzero homomorphism from the torus $T$ to a unipotent group. The homomorphism $\Delta_n f \circ s_n$ factors through $f$: we find $t_n: A \longrightarrow \Delta_n A$ such that $\Delta_n f \circ s_n = t_n \circ f$. It follows that $\pi_{n,0} \circ t_n \circ f = \pi_{n,0} \circ \Delta_n f \circ s_n = f \circ \pi_{n,0} \circ s_n = f$, and since $f$ is surjective, $\pi_{n,0} \circ t_n = \text{id}_A$. And for $m\ge n$, since $Ker(\pi_{n,0})$ is unipotent, the homomorphism $\pi_{m,n} \circ t_m - t_n:A \longrightarrow \text{Ker}(\pi_{n,0})$ is zero: we have obtained a D-structure on $A$.\ As explained above, this implies ([@BeDe]) that $A$ descends to the constants. But we also know from [@BeDe] that $G$ descends to $K^{p^n}$ for every $n$.
It is classical that $\text{Ext}(A,T)\simeq (\text{Ext}(A,{\mathbb{G}_m}))^t\simeq (\hat A)^t$, where $t=\text{dim}(T)$ and $\hat A$ is the dual abelian variety of $A$, also defined over ${\mathcal{C}}$ (see for example [@Serre]). It follows that the isomorphism type of $G$ is parametrized by a point in $\hat A(\bigcap_n K^{p^n})=\hat A({\mathcal{C}})$, that is, $G$ descends to the constants. \ In the following, we will only make explicit use of D-structures in characteristic $0$; in characteristic $p$, we will use more usual objects, namely the Tate modules.\ Note that in characteristic $0$, since $\partial_i=\frac{1}{i!}\partial_1^{i}$, it suffices to have $s=s_1:G \longrightarrow \Delta_1 G$ in order to define a D-structure; $\Delta_1 G$ is also known as the twisted tangent bundle of $G$. Let us quote the following from [@BePi], section 3.1. (Characteristic $0$) Let $G$ be a semiabelian variety. The universal extension $\tilde{G}$ of $G$ by a vector group (as defined in [@Rosenlicht]) admits a unique D-structure $s$. Let us write $\tilde G$ as $0 \longrightarrow W_G \longrightarrow \tilde G \longrightarrow G \longrightarrow 0$, and consider $U_G$, the maximal D-subgroup of $(\tilde G,s)$ which is a subgroup of $W_G$. We still denote by $s$ the D-structure induced on $\tilde G/U_G$. Then $G^{\sharp}$ is isomorphic to $(\tilde G/U_G ,s)^{\partial}$. It follows from Proposition \[Dstructuredescent\] that $G$ descends to the constants if and only if $G\simeq \tilde{G}/W_G$ has a D-structure if and only if $U_G=W_G$. Furthermore the $\partial$ functor is exact on the class of algebraic $D$-groups ([@kowalski-pillay]). In particular, ${(\tilde G/U_G,s)}^{\partial} \cong {(\tilde G, s)^{\partial}} / {(U_G ,s)^{\partial}}$.
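As a sanity check (a minimal example using only the definitions above), when $G$ is defined over the constants the natural D-structure mentioned earlier can be written down explicitly, in the ambient coordinates of $\Delta_n G$:

```latex
s_n : G \longrightarrow \Delta_n G, \qquad s_n(x) = (x,0,\ldots,0),
\qquad\text{so that}\qquad
(G,s)^{\partial}(K) = \{x \in G(K) : \partial_n(x) = 0 \text{ for all } n \ge 1\} = G(\mathcal{C}).
```

Each $s_n$ is a regular homomorphic section of $\pi_{n,0}$, since it agrees with $\partial_{\le n}$ on the Zariski-dense subgroup $G(\mathcal{C})$; and one recovers $G^{\sharp}=G(\mathcal{C})$ as in Lemma \[othercharacterizations\](iii).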
Torsion points, Tate modules and descent ---------------------------------------- We deal now with the characteristic $p$ case; $G$ being a semiabelian variety over any model $(K,\partial)$ of $CHF_{p,1}$, that is any separably closed field of degree of imperfection $1$. We define $\tilde G$ as the inverse limit $$\tilde{G}:=\mathop{\text{lim}}_{\leftarrow} (G\stackrel{[p]}{\longleftarrow} G \stackrel{[p]}{\longleftarrow} \ldots ).$$ In particular, for $L$ an extension of $K$ (we will mainly consider $L=K$ or $L=\overline K$), $$\tilde{G}(L)=\{(x_i)_{i\in {\mathbb{N}}}\in G(L)^{{\mathbb{N}}};\forall i\ge 0, x_i=[p]x_{i+1}\}.$$ Let $\pi_G$ be the projection on the “left component” $G$. The kernel of $\pi_G$ is called the Tate-module of $G$, denoted by $T_pG$.\ Its $L$-points in an arbitrary algebraically closed extension $L$ of $K$ coincide with the sequences of torsion points in $\overline K$, $$T_pG(\overline K)=\{(x_i)_{i\in {\mathbb{N}}}\in G(\overline K)^{{\mathbb{N}}};x_0=0,\forall i\ge 0, x_i=[p]x_{i+1}\}$$ Let us remark that for a given $g_0\in G(K)$, there is some $(x_i)_{i\in {\mathbb{N}}}\in \tilde{G}(K)$ with $g_0 = x_0$ if and only if $g_0\in G^{\sharp}$; we deduce from this the relation between the Tate-module of $G$ and $G^{\sharp}$. The morphism $\pi_G$ induces an exact sequence: $$0 \rightarrow T_pG(K) \rightarrow \tilde G(K) \stackrel{\pi_G}{\rightarrow} G^{\sharp} \rightarrow 0.$$ Objects such as $\tilde G(K)$ and $T_pG(K)$ are what are called “$*$-definable" groups in $K$, so the exact sequence in Lemma 3.16 is in the category of $*$-definable groups.\ In the case of ordinary semiabelian varieties, with dimension of the abelian part $a$, it is well-known that $T_pG(\overline K)\simeq {\mathbb{Z}}_p^a$ (see [@Mumford], chapter IV). We relate now the part of the $p^{\infty}$-torsion lying in $K$ with issues of descent. 
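Two standard computations (not taken from the text above, but consistent with the formula $T_pG(\overline K)\simeq {\mathbb{Z}}_p^{a}$) may help fix ideas:

```latex
% Torus: in characteristic p, x^{p^n}=1 \iff (x-1)^{p^n}=0 \iff x=1, so
\mathbb{G}_m[p^n](\overline K)=\{1\}, \qquad T_p\mathbb{G}_m(\overline K)=0;
% Ordinary elliptic curve E: E[p^n](\overline K)\simeq \mathbb{Z}/p^n\mathbb{Z}, so
T_pE(\overline K)=\varprojlim_n E[p^n](\overline K)\simeq \mathbb{Z}_p.
```

So the toric part of an ordinary semiabelian variety contributes nothing to $T_pG$, which is why only the dimension $a$ of the abelian part appears.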
Most of the following results seem to be well-known; see for example [@voloch] for the description of the torsion of $G$ for abelian schemes of maximal Kodaira-Spencer rank. But we have found no systematic exposition which we could quote, and furthermore we choose to give here particularly elementary proofs which are suitable for our purpose. Let $G$ be an ordinary semiabelian variety over $K$. Then for every $n$, $G[p^n](K) = G[p^n]$ if and only if $G$ descends to $K^{p^n}$. In particular, $G$ descends to $K^{p^\infty}$ if and only if $G[p^\infty](K) = G[p^\infty]$ if and only if $T_pG(K)=T_pG(\overline K)$. Let us fix $n\ge 1$. If $G$ descends to $K^{p^n}$, we may assume that $G$ is defined over $K^{p^n}$, as is the Verschiebung $V_n$. Since $G$ is ordinary, the kernel of $V_n$ consists of $K^{p^n}$-rational points, and since $[p^n]=V_n \circ Fr^n$, $G[p^n]=Fr^{-n}(\text{Ker}(V_n))\subseteq G(K)$.\ Conversely, assume that $G[p^n]\subseteq G(K)$. Since $V_n$ is separable, $G$ is isomorphic to the quotient $Fr^n G/\text{Ker}(V_n)$. But $\text{Ker}(V_n)=Fr^n(G[p^n])$ is a finite group of $K^{p^n}$-rational points, hence $Fr^nG/\text{Ker}(V_n)$ is defined over $K^{p^n}$.\ The “in particular” statement follows from Lemma \[descentsemiabelian\]. This was proved under the assumption that $K$ is $\omega_1$-saturated, but we can easily reduce to this situation: Let $L$ be an $\omega_1$-saturated elementary extension of $K$. Then $L$ is a separable extension of $K$, of the same degree of imperfection, and $L^{p^\infty}$ and $K$ are linearly disjoint over $K^{p^\infty}$. Applying \[descentsemiabelian\], we conclude that $G\times_{K} L$ descends to $L^{p^\infty}$, and by linear disjointness, that $G$ descends to $K^{p^\infty}$. Let $K_0$ be an algebraically closed field and $K_1 >K_0$ a finitely generated extension of $K_0$. Let $G$ be an ordinary semiabelian variety over $K_1$. If $G[p^\infty](K_1) = G[p^\infty]$, then $G$ descends to $K_0$.
As $K_0$ is algebraically closed, $K_1$ is a separable extension of $K_0$, hence it is contained in the separable closure of $K_0(t_1,\ldots, t_n)$ for $t_1,\ldots,t_n$ algebraically independent. Then (Fact \[invariant1\]) there is a separably closed field $K$ of degree of imperfection $1$, extending $K_1$ and such that $K_0 = K^{p^\infty}$. We can now apply Proposition \[abelianseparabletorsion\] to conclude that $G$ descends to $K^{p^\infty}$. This easily yields the following result, which we already mentioned in Section \[isogenies\]. Let $G$ be an ordinary semiabelian variety over some algebraically closed field $K_0$. If $H$ is any semiabelian variety over $K_1 > K_0$ such that there is an isogeny $f$ from $G$ to $H$, then $H$ descends to $K_0$. Let $K_2 <K_1$ be a finitely generated extension of $K_0$ over which $H$ and the isogeny $f$ from $G$ to $H$ are defined. We claim first that any point of $p^\infty$-torsion in $H$ is the image of a point of $p^\infty$-torsion in $G$: indeed let $h \in H[p^\infty]$, i.e. for some $m$, $[p^m] h = 0$. Let $g \in G(\overline K_2)$ be a preimage of $h$, $f(g) = h$. Then $[p^m] g \in Ker f$. If $f$ is purely inseparable, then $f$ is injective on $G(\overline K_2)$ and hence $g \in G[p^m]$. Otherwise, let $n$ be the order of the finite group $Ker f$ in $G(\overline K_2)$. Then $n = p^r d$, where $d$ is prime to $p$. By Bezout, $1 = u d + v p^m$, $u,v \in \mathbb Z$. Then $g = [ud] g + [vp^m] g$, so $f(g) = f([ud]g)$, and $[p^{m+r}][ud] g = [u]\big([p^r d][p^m]g\big) = 0$, as $[p^m]g$ lies in $Ker f$, whose exponent divides $n$. Hence $h = f(e)$ for some $e :=[ud]g\in G[p^\infty]$.\ Now as $G$ is defined over the algebraically closed field $K_0$, $G[p^\infty] \subseteq G(K_0)$, and hence by the above claim $H[p^\infty] = H[p^\infty](K_2)$. We can now apply Corollary \[allptorsion\] to $H$. Let $0 \longrightarrow C \longrightarrow B\longrightarrow A\longrightarrow 0$ be an exact sequence of ordinary abelian varieties with $A$ and $C$ defined over $K_0$ some algebraically closed field. Then $B$ descends to $K_0$.
By the Poincaré reducibility theorem, $B$ is isogenous to $A\times C$, which is defined over $K_0$, and we just have to apply Corollary \[ordinaryisogenydescent\]. (Thanks to A. Chambert-Loir and L. Moret-Bailly for pointing this out to us.) The example described in \[supersingular\] shows that the assumption that the varieties are ordinary in \[abeliandescent\] is essential. We remarked that there is an abelian variety $A$ isogenous to $E\times E$ for $E$ a supersingular elliptic curve (hence defined over a finite field), which itself does not descend to $\overline{ \mathbb F_p}$. Such an abelian variety $A$, which is of course not ordinary, yields an example of an element of $\text{Ext}(E_1,E_2)$, where $E_1, E_2$ are elliptic curves over $\overline {\mathbb{F}_p}$, which [*does not descend*]{} to $\overline {\mathbb{F}_p}$. To see this, note first that every proper abelian subvariety of $A$ must be isomorphic to an abelian variety defined over $\overline{ \mathbb F_p}$: let $\rho$ be the isogeny from $A$ onto $E\times E$ and consider $B <A$. Then through $\rho$, $B$ is isogenous to some proper abelian subvariety $C$ of $E\times E$, which itself is defined over $\overline {\mathbb{F}_p}$ (Fact \[subgroups\]). Both $C$ and $B$ must have dimension one, hence by Proposition \[ellipticdescent\], $B$ is isomorphic to some abelian variety defined over $\overline {\mathbb{F}_p}$. Now, pick any $E_1 < A$ of dimension one (there are some, as $A$ is isogenous to $E\times E$), and consider $E_2 := A/E_1$. By the Poincaré reducibility theorem, $E_2$ is isogenous to some $C < A$ such that $A = E_1 + C$ and $E_1 \cap C$ is finite. So again by \[ellipticdescent\], $E_2$ descends to $\overline {\mathbb{F}_p}$. We complete this section with some easy remarks on torsion in $G(K)/G^{\sharp}$ in characteristic $p$, which will immediately enable us to describe the link between the question of relative Morley rank and that of preservation of exactness.
(Characteristic $p$) Let $G$ be a semiabelian variety defined over $K$. (i) $G[p^{\infty}](K)$ (the group of elements of $G(K)$ with order a power of $p$) is a direct sum of a divisible group and a finite group. (ii) $G(K)/G^{\sharp}$ has finite torsion. (iii) If $G$ descends to ${{K^{p^{\infty}}}}$ then $G(K)/G^{\sharp}$ is torsion-free. (iv) If $G(K)$ has trivial $p$-torsion then $G(K)/G^{\sharp}$ is torsion-free. \(i) $G[p^{\infty}](K)$ is a subgroup of $G[p^{\infty}]$, which is a finite direct sum of copies of the Prüfer group ${\mathbb Z}_{p^{\infty}}$. As $G^{\sharp}$ is divisible, if $g\in G(K)$ and $ng\in G^{\sharp}$ then there is $h\in G^{\sharp}$ so that $ng = nh$, whereby $n(g-h) = 0$, so $g$ is congruent mod $G^{\sharp}$ to an element of order dividing $n$. We know that $G^{\sharp}$ contains all the prime-to-$p$ torsion of $G$. On the other hand, by (i), the image of $G[p^{\infty}](K)$ in $G(K)/G^{\sharp}$ is finite. This gives (ii) immediately. Similarly for cases (iii) and (iv), where $G^{\sharp}$ contains all the torsion of $G(K)$. (Characteristic $p$) Suppose that $K$ is $\omega_1$-saturated and let $G$ be a semiabelian variety over $K$, $0 \rightarrow T \rightarrow G \rightarrow A \rightarrow 0$. Then the following are equivalent: (i) $G^\sharp$ has relative Morley rank\ (ii) the sequence $0 \rightarrow T^{\sharp} \rightarrow G^\sharp \rightarrow A^\sharp \rightarrow 0$ is exact\ (iii) $(G^{\sharp} \cap T(K))/ T^{\sharp}$ is finite\ (iv) $G^{\sharp} \cap T(K)$ is divisible. By the previous lemma, as $T$ has no $p$-torsion, $T(K)/T^{\sharp}$ is torsion-free. Also note that $T^{\sharp} = T({\cal C})$ is divisible and is the connected component of $G^{\sharp}\cap T(K)$ (\[connectedcomponent\]). Hence $(T(K) \cap G^{\sharp})/T^{\sharp}$ is finite iff it is trivial iff the sequence $0 \rightarrow T^{\sharp} \rightarrow G^\sharp \rightarrow A^\sharp \rightarrow 0$ is exact. And moreover these conditions are equivalent to the divisibility of $G^{\sharp}\cap T(K)$.
This gives the equivalence of (ii), (iii), and (iv). On the other hand, if $G^{\sharp}$ has finite relative Morley rank, then every relatively definable subgroup is connected-by-finite, so (i) implies (iii). Conversely, we have seen (\[caseswithRMR\]) that both $T^{\sharp}$ and $A^{\sharp}$ have relative Morley rank. By \[RMRexact\], the exactness of the sequence implies that $G^{\sharp}$ also has relative Morley rank. Thus (ii) implies (i).

Exactness
=========

As before, $(K, \partial)$ is a model of $DCF_0$ or of $CHF_{p,1}$, which we will assume to be $\omega_1$-saturated in the characteristic $p$ case. We will now see some equivalent criteria for when the $\sharp$ functor preserves exact sequences, in all characteristics, and obtain as a corollary a result linking exactness and descent for (ordinary) semiabelian varieties (Section \[exactness and descent\]). Then we will look more closely at the case of abelian varieties (Section \[sectionabelianvarieties\]) and extensions of elliptic curves (Section \[sectionellipticcurves\]).

Exactness and descent
---------------------

For the sake of uniformity, we will harmonize the notation introduced in sections \[sectionDstructures\] and \[TorsionTate\] for characteristics $p$ and $0$. Let $K$ be of characteristic $p$ and let $G$ be a semiabelian variety over $K$. We will denote $T_pG(K)$ by $(U_G)^\partial$ and $\tilde G (K)$ by $\tilde G^\partial$. So again we emphasize that these are $*$-definable groups in $K$. From section \[sectionDstructures\] we now see that, in all characteristics, $$G^{\sharp} \mbox{ is isomorphic to } \tilde G^\partial /{(U_G)^\partial}$$ where, of course, by isomorphic we mean definably isomorphic in the relevant structure.
Notation: If $f : G \longrightarrow H$ is a morphism of semiabelian varieties defined over $K$, we denote by $\tilde f$ the induced morphism from $\tilde G$ to $\tilde H$.\ (Characteristic 0) If $H_1$, $H_2$ are algebraic groups with a $D$-structure, and $h : H_1 \longrightarrow H_2$ is a morphism of algebraic groups which respects the $D$-structure, we denote by $h^\partial$ the induced definable homomorphism from ${H_1}^\partial$ to ${H_2}^\partial$. When $G,H$ are semiabelian varieties, $\tilde G$ and $\tilde H$ have unique $D$-structures, and so for any $f:G\to H$, $\tilde f$ respects the $D$-structures, whereby ${\tilde f}^{\partial}$ is defined. (See section \[sectionDstructures\].)\ (Characteristic $p$) If $\tilde f : \tilde G \longrightarrow \tilde H$, for $G, H$ semiabelian varieties defined over $K$, we denote by ${\tilde f}^\partial$ the induced map from ${\tilde G}^\partial = \tilde G (K)$ to ${\tilde H}^\partial = \tilde H (K)$. Let $0\longrightarrow G_1 \longrightarrow G_2 \stackrel{f}\longrightarrow G_3\longrightarrow 0$ be an exact sequence of semiabelian varieties defined over $K$. Then the sequence $0\longrightarrow (\tilde G_1)^\partial \longrightarrow (\tilde G_2)^\partial \stackrel{{\tilde f}^\partial}\longrightarrow (\tilde G_3)^\partial \longrightarrow 0$ is also exact. In characteristic $0$, $\tilde G_i$ is the universal vectorial extension of $G_i$ (see section \[sectionDstructures\]) and the sequence $$0\longrightarrow \tilde G_1 \longrightarrow \tilde G_2 \stackrel{\tilde f}\longrightarrow \tilde G_3\longrightarrow 0$$ is also exact. Each $\tilde G_i$ admits a (unique) $D$-structure and the functor $H \mapsto H^\partial$ preserves exact sequences for the category of algebraic groups with a $D$-structure (section \[sectionDstructures\]). In characteristic $p$, $(\tilde G_i)^\partial = \tilde G_i (K) = \{ (x_n)_{n\in {\mathbb N}} \, : \, \forall n \, \, x_n \in G_i (K), x_n = [p] \, x_{n+1}\}$.
Clearly the kernel of $(\tilde f)^\partial$ is $\tilde G_1 (K)$. The surjectivity of $(\tilde f)^\partial$ is not as obvious. Let $K_0$ be a countable subfield of $K$ over which everything is defined. Then, for $(h_i)_{i\in {\mathbb{N}}} \in \tilde G_3 (K)$, we can realize in $K$ (which is $\omega_1$-saturated) the following type of length $\omega$ over $K_0((h_i)_{i \in {\mathbb{N}}})$: $$\bigwedge_{i\in {\mathbb{N}}} (x_i\in G_2 \wedge f(x_i)=h_i \wedge x_i=[p]\, x_{i+1}).$$ Indeed this type can be finitely realized in $K$: given $n$, choose some $g_{n+1}\in G_2(K)$ such that $f(g_{n+1})=h_{n+1}$ and, for $i\le n$, let $g_i=[p^{n+1-i}]\, g_{n+1}$. For a realisation $(g_i)_{i\in {\mathbb{N}}}$ of this type, we have $g_0\in G_1(K)$ (since $f(g_0)=h_0=0$), $(g_i)_{i\in {\mathbb{N}}}\in \tilde G_2 (K)$, hence $g_0\in p^{\infty}G_2(K)$ and $\tilde f ((g_i)_{i\in {\mathbb{N}}})=(h_i)_{i \in{\mathbb{N}}}$. The next proposition gives us a very useful equivalent condition for the exactness of the $\sharp$ functor. It should be noted that there is no assumption that any of the $U_{G_i}^\partial$’s or (in characteristic $0$) any of the $U_{G_i}$’s is nontrivial. Given the exact sequence $0\longrightarrow G_1 \longrightarrow G_2 \stackrel{f}\longrightarrow G_3\longrightarrow 0$, if $(\tilde f)^\partial$ is the induced map as above, let $(\tilde f_U)^\partial$ denote the restriction of $(\tilde f)^\partial$ to $(U_{G_2})^\partial$ and let $\tilde f_\pi$ denote the induced map from $G_2^\sharp$ to $G_3^\sharp$, when we identify $G_i^\sharp$ with $(\tilde G_i)^\partial/ (U_{G_i})^\partial$. Let $0\longrightarrow G_1 \longrightarrow G_2 \stackrel{f}\longrightarrow G_3\longrightarrow 0$ be an exact sequence of semiabelian varieties defined over $K$.
Then the following are equivalent: \(i) $0\longrightarrow {G_1}^\sharp \longrightarrow {G_2}^\sharp \stackrel{\tilde f_\pi}\longrightarrow {G_3}^\sharp \longrightarrow 0$ is exact\ (ii) $0\longrightarrow (U_{G_1})^\partial \longrightarrow( U_{G_2})^\partial \stackrel{(\tilde f_U)^\partial}\longrightarrow (U_{G_3})^\partial \longrightarrow 0$ is exact\ (iii) $(\tilde f_U)^\partial \, : (U_{G_2})^\partial \longrightarrow \, (U_{G_3})^\partial$ is surjective\ (iv) (in characteristic $0$) $0\longrightarrow U_{G_1} \longrightarrow U_{G_2} \stackrel{\tilde f_U}\longrightarrow U_{G_3}\longrightarrow 0$ is exact.\ Furthermore $(G_1(K) \cap {G_2}^\sharp) / {G_1}^\sharp \, \widetilde{\longrightarrow} \, (U_{G_3})^\partial / {(\tilde f_U)}^\partial ( ( U_{G_2})^\partial)$. From the previous lemma, one derives a commutative diagram of exact sequences (\*), in which the rows formed by the $(U_{G_i})^\partial$’s and by the $(\tilde G_i)^\partial$’s are connected by the inclusions, and by the vertical quotient maps $\pi_2, \pi_3$ onto $G_2^\sharp \stackrel{\tilde f_\pi}{\longrightarrow} G_3^\sharp$. [**Claim:**]{} $Ker ({\tilde f_U})^\partial = (U_{G_1})^\partial$. In char. $0$: $Ker \tilde f_U$ is a (unipotent) subgroup of $U_{G_2} \cap \tilde G_1$, hence of $W_1$, and contains $U_{G_1}$. It inherits a $D$-structure from $U_{G_2}$ and so by maximality of $U_{G_1}$, they must be equal. Now, going back to the definition, $(U_{G_1})^\partial = \{x \in U_{G_1}(K) \, ; \, s(x) = \partial (x) \} = (U_{G_2})^\partial \cap U_{G_1}(K)$, and so $Ker (\tilde f_U )^\partial = (U_{G_1})^\partial$.\ In char. $p$, $(U_{G_1})^\partial = T_p G_1 (K) = T_p G_2 (K) \cap \tilde G_1 (K)$. Let $S :=(U_{G_3})^\partial / (\tilde f_U)^\partial ( (U_{G_2})^\partial)$ (the cokernel of $(\tilde f_U )^\partial$).
Then the classical Snake Lemma applied to diagram (\*) gives the existence of a homomorphism $d$ from $Ker (\tilde f_\pi )$ to $S$, such that the sequence $0 \, \longrightarrow (U_{G_1})^\partial \, \longrightarrow \, {(\tilde G_1)}^\partial \, \longrightarrow \, Ker (\tilde f_\pi ) \, \stackrel{d}\longrightarrow \, S \, \longrightarrow \, 0$ is exact, fitting into a commutative diagram which extends (\*) by the kernel and cokernel terms (vertical projections $\pi_2$, $\pi_3$; horizontal maps $(\tilde f_U)^\partial$, $(\tilde f)^\partial$, $\tilde f_{\pi}$). This says exactly that\ $S = (U_{G_3})^\partial / (\tilde f_U)^\partial ( (U_{G_2})^\partial)$ is isomorphic to $Ker (\tilde f_\pi) / {((\tilde G_1 )^\partial / (U_{G_1})^\partial )}$, that is, to $(G_1(K) \cap {G_2}^\sharp )/ {G_1}^\sharp$. It follows in particular that $0\longrightarrow {G_1}^\sharp \longrightarrow {G_2}^\sharp \stackrel{\tilde f_\pi}\longrightarrow {G_3}^\sharp \longrightarrow 0$ is exact\ if and only if\ $0\longrightarrow (U_{G_1})^\partial \longrightarrow( U_{G_2})^\partial \stackrel{(\tilde f_U)^\partial}\longrightarrow (U_{G_3})^\partial \longrightarrow 0$ is exact\ if and only if $(\tilde f_U)^\partial$ is surjective. In characteristic $0$, this is equivalent to the exactness of the sequence $0\longrightarrow U_{G_1} \longrightarrow U_{G_2} \stackrel{\tilde f_U}\longrightarrow U_{G_3} \longrightarrow 0$. One direction follows because the $\partial$ functor is exact on groups with a $D$-structure. For the other direction, suppose that the sequence of the $(U_{G_i})^\partial$’s is exact. For each $i$, $(U_{G_i})^\partial$ is Zariski dense in $U_{G_i}$, and has transcendence degree (or Morley rank) equal to the dimension of the algebraic group $U_{G_i}$. It follows that $dim U_{G_1} + dim U_{G_3} = dim U_{G_2}$ and hence that $ dim \tilde f_U (U_{G_2}) = dim U_{G_3}$.
Being vector groups, all these groups are connected, and it follows that $\tilde f_U$ is surjective. We can now give the proof of the main theorem, which relates exactness of the $\sharp$ functor to questions of descent, restricted, in char. $p$, to the class of [*ordinary*]{} semiabelian varieties. Proposition \[Maindescent\] is no longer true without the ordinarity assumption (see Remark \[counterexampletoMaindescent\]). Let $0\rightarrow G_1 \rightarrow G_2 \rightarrow G_3\rightarrow 0$ be an exact sequence of (ordinary in char. $p$) semiabelian varieties defined over $ K $. Suppose that $G_1$ and $G_3$ descend to the constants of $K$. Then, $G_1(K) \cap G_2^\sharp = G_1^\sharp$ (i.e. the sequence with the $\sharp$’s remains exact) if and only if $G_2$ also descends to the constants. Let $K_0$ be a countable elementary submodel of $K$ over which everything is defined. By isomorphism, we can suppose that both $G_1$ and $G_3$ are actually defined over ${\cal C}\cap K_0$, the field of constants of $K_0$. If $G_2$ descends to the constants, then by isomorphism, we can suppose that $G_2$ is also defined over the constants, so for every $i$, ${G_i}^\sharp = G_i ( {\cal C})$. And then $G_1(K) \cap G_2^\sharp = G_1(K) \cap G_2 ({\cal C}) = G_1 ({\cal C}) = G_1^\sharp$. For the converse, suppose that $0\rightarrow {G_1}^\sharp \rightarrow {G_2}^\sharp \rightarrow {G_3}^\sharp \rightarrow 0$ is exact. In characteristic $0$, by Proposition \[exactnessofU\], $0\rightarrow U_{G_1} \rightarrow U_{G_2} \rightarrow U_{G_3}\rightarrow 0$ is then also exact. We know (see Fact \[Dunipotent\]) that, as $G_1$ and $G_3$ descend to the constants, $U_{G_1} = W_1$ and $U_{G_3} = W_3$. Consider the dimensions, as vector spaces, of the $U_{G_i}$’s. By exactness, $dim U_{G_2} = dim U_{G_1} + dim U_{G_3}$. But we also have that $dim W_2 = dim W_1 + dim W_3$ (this follows from Lemma \[exactnessofGtilde\]).
So $dim U_{G_2} = dim W_2$ and $U_{G_2} = W_2$, that is, $G_2$ descends to the constants. In characteristic $p$, our assumption that the $G_i$’s are ordinary ensures that for each $i$, $T_p G_i (\overline K) \cong {{\mathbb Z}_p}^{a_i}$, where $a_i$ is the dimension of the abelian part of $G_i$. If $G_1$ and $G_3$ descend to ${\cal C}$, then $T_p G_1 (K) = T_p G_1 ({\cal C})=T_p G_1 (\overline K ) $ and $T_p G_3 (K) =T_p G_3 ({\cal C})= T_p G_3 (\overline K )$. By Proposition \[exactnessofU\], the sequence $$0\longrightarrow T_p {G_1}(K) \longrightarrow T_p G_2 (K) \longrightarrow T_p G_3 (K) \longrightarrow 0$$ is exact. It follows that $T_p G_2 (K) \cong {{\mathbb Z}_p}^{a_1 + a_3}$. As $a_1 + a_3 = a_2$ (by exactness of $0\longrightarrow G_1 \longrightarrow G_2 \longrightarrow G_3\longrightarrow 0$), it follows that $T_p G_2 (K) = T_p G_2 (\overline K)$, and by Proposition \[abelianseparabletorsion\], that $G_2$ descends to the constants. For any ordinary abelian variety $A$ defined over the constants of $K$, there exists an exact sequence over $K$, $$0 \longrightarrow {\mathbb G}_m\longrightarrow H \longrightarrow A \longrightarrow 0$$ such that $${{\mathbb G}_m}^\sharp \not= H^\sharp \cap {\mathbb G}_m(K).$$ As in the proof of \[descentsemiabelian\], we use the fact that $EXT(A, {\mathbb G}_m)$ is parametrized (up to isomorphism) by the dual abelian variety of $A$, say $\hat A$, which is also defined over the constants (see [@Serrebook]). Then $H$ will descend to the constants $\cal C$ of $K$ if and only if $H$ corresponds to a $\cal C$-rational point of $\hat A$. So just pick some $K$-rational point of $\hat A$ which is not $\cal C$-rational. We have established in Proposition \[Morleysemiabelian\] the connection between exactness and relative Morley rank, and we can conclude that: (Characteristic $p$) There is an ordinary semiabelian variety $G$ such that $G^{\sharp}$ does not have relative Morley rank.
In fact, as above, for any ordinary abelian variety $A$ defined over $K^{p^\infty}$, there is some semiabelian variety $G$ in $EXT(A,\mathbb{G}_m)$ such that $G^\sharp$ does not have relative Morley rank. We will finish this section with some easy corollaries, in characteristic $p$, of Proposition \[exactnessofU\]. Again, $0\longrightarrow G_1 \longrightarrow G_2 \stackrel{f}\longrightarrow G_3\longrightarrow 0$ is an exact sequence of semiabelian varieties defined over $ K $. Recall from Proposition \[exactnessofU\] that $(G_1(K) \cap G_2^{\sharp}) /G_1^\sharp\, \cong \, T_pG_3 (K) / \tilde f (T_p G_2 (K))$. (Characteristic $p$) If $G_3[p^\infty](K)$ is finite, then the $\sharp$ sequence is exact. Since $G_3[p^{\infty}](K)$ is finite, $T_pG_3(K)=0$. If we have the extra assumption that $G_1(K)$ has no $p$-torsion, then the non-exactness can be read directly from the groups of $p^\infty$-torsion. As we will see in the next section (\[nonexactabelian\]), this is no longer true if $G_1(K)$ has some $p$-torsion. (Characteristic $p$) We assume now that $G_1(K)$ has no $p$-torsion. If $f(G_2[p^{\infty}](K))=G_3[p^{\infty}](K)$, then the $\sharp$ sequence is exact. We show that the hypothesis implies that for each $n$, $f(G_2[p^n](K))=G_3[p^n](K)$: consider $g\in G_3[p^n](K)$, and $h\in G_2[p^{\infty}](K)$ such that $f(h)=g$. Let $m$ be such that $[p^m]h=0$; if $m\le n$, the claim is proved. If $m >n$, we have that $[p^n]h\in G_1(K)$ and $[p^{m-n}]([p^n]h)=0$. Since $G_1(K)$ has no $p$-torsion, $[p^n]h=0$.\ It follows that $\tilde f (T_pG_2(K))=T_pG_3(K)$: let $(g_i)_{i\in {\mathbb{N}}}$ be in $T_pG_3(K)$ and consider the tree of sequences of length at most $\omega$, $(h_i)_{i<L}$, such that for all $i$, $h_i\in G_2[p^{\infty}](K)$, $f(h_i)=g_i$, $[p]h_i=h_{i-1}$ and $h_0=0$, ordered by initial segment.
This tree has finite branching, since $G_2[p](K)$ is finite, and has branches of arbitrary length: for every $n$, pick $h_n\in G_2[p^n](K)$ such that $f(h_n)=g_n$ and consider the sequence $(0,[p^{n-1}]h_n,\ldots,h_n)$. It follows by König’s Lemma that the tree has an infinite branch, which gives $(h_i)_{i\in {\mathbb{N}}}\in T_pG_2(K)$ such that $\tilde f ((h_i))=(g_i)$. If we add the assumption that the semiabelian varieties have relative Morley rank, we get the following characterization: (Characteristic $p$) Let $0 \longrightarrow G_1 \longrightarrow G_2 \longrightarrow G_3 \longrightarrow 0$ be an exact sequence of semiabelian varieties such that $G_2^\sharp$ has relative Morley rank. Then the following are equivalent: \(1) the sequence $0 \longrightarrow {G_1}^{\sharp} \longrightarrow {G_2}^\sharp \longrightarrow {G_3}^\sharp \longrightarrow 0$ is exact \(2) $ G_1 [p^\infty ] (K) \cap {G_2}^\sharp \, = \, G_1 [p^\infty ] (K) \cap {G_1}^\sharp$. In particular the sequence will be exact when $G_1$ descends to the constants, or, more generally, when $ G_1 [p^\infty ](\overline K ) = G_1[p^\infty ] (K)$, and also when $G_1[p^\infty ] (K) = \{0\}$. Recall that ${G_i}^\sharp = p^\infty G_i (K)$. We know that (1) holds if and only if ${G_1}^\sharp = {G_2}^\sharp \cap G_1 (K)$. So trivially, (1) implies (2). We know that ${G_1}^\sharp$ contains all the $p'$-torsion of $G_1(K)$. It follows that if (2) holds, then ${G_2}^\sharp \cap G_1(K)/ {G_1}^\sharp $ is torsion-free. As by assumption ${G_2}^\sharp $ has relative Morley rank, this quotient must be finite; being torsion-free, it is therefore trivial. If $ G_1 [p^\infty ](\overline K ) = G_1[p^\infty ] (K)$ then $G_1[p^\infty ] (K) \subset G_1^\sharp$.
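The finite-branching tree of partial lifts used in the proof above can be illustrated in a toy computational setting. In the sketch below (a simplified stand-in, not the actual varieties; all names are hypothetical), $G_2[p^\infty](K)$ is modeled by ${\mathbb{Z}}/p^N\times {\mathbb{Z}}/p$ and $G_3[p^\infty](K)$ by ${\mathbb{Z}}/p^N$, with $f$ the first projection: each node of the tree then has at most $p$ successors, dead-end branches occur, and yet a maximal compatible branch survives.

```python
# Toy model of the tree of partial lifts: G2 = Z/p^N x Z/p, G3 = Z/p^N, f = projection.
# A branch of length n is (h_0=0, h_1, ..., h_n) with f(h_i) = g_i and [p]h_i = h_{i-1}.
p, N = 3, 5
M = p**N

def f(h):                      # the surjection G2 -> G3 (first coordinate)
    return h[0]

def times_p(h):                # multiplication by p in G2; the Z/p coordinate dies
    return ((p * h[0]) % M, (p * h[1]) % p)

def lifts(g, prev):
    """All h in G2 with f(h) = g and [p]h = prev: at most p choices (finite branching)."""
    return [(g, t) for t in range(p) if times_p((g, t)) == prev]

# A compatible p-power system (g_i) in G3: g_i = p^(N-i) * u, so that [p]g_{i+1} = g_i.
u = 7
g = [(p**(N - i) * u) % M for i in range(N + 1)]    # g[0] = 0

# Grow the tree level by level; branches with no admissible lift are pruned.
branches = [[(0, 0)]]                                # h_0 = 0
for i in range(1, N + 1):
    branches = [b + [h] for b in branches for h in lifts(g[i], b[-1])]

print(f"{len(branches)} maximal branches of length {N + 1} found")
```

Each level offers $p$ candidate lifts but only the one with vanishing ${\mathbb{Z}}/p$ coordinate extends further, so the tree has genuine dead ends while still containing full-length branches, mirroring the König-type argument.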
If $G_1$ descends to the constants, then $G_1^\sharp = G_1 ({\mathcal{C}})$ and in particular, $ G_1[p^\infty ](K) = G_1[p^\infty ]({\mathcal{C}}) = G_1 [p^\infty ](\overline K)$.\ Abelian varieties ----------------- In characteristic $0$, the situation is completely different for abelian varieties, and exactness follows quickly from Proposition 4.2. (Characteristic 0) Let $0 \longrightarrow A \longrightarrow B \longrightarrow C \longrightarrow 0$ be an exact sequence of abelian varieties over $K$. Then the induced sequence $0 \longrightarrow A^\sharp \longrightarrow B^\sharp \longrightarrow C^\sharp \longrightarrow 0$ is also exact. By Poincaré complete reducibility, $A\times C$ is isogenous to $B$, inducing an isogeny of $\widetilde{A\times C} = \tilde A \times \tilde C$ with $\tilde B$. As this is also an isogeny of $D$-groups, it induces an isogeny between $U_{A\times C} = U_{A}\times U_{C}$ and $U_{B}$. As these are vector groups, it follows that the induced sequence $0 \longrightarrow U_{A} \longrightarrow U_{B} \longrightarrow U_{C} \longrightarrow 0$ is exact. Hence by Proposition 4.2, so is $0 \longrightarrow A^\sharp \longrightarrow B^\sharp \longrightarrow C^\sharp \longrightarrow 0$. In contrast to the characteristic $0$ case, in characteristic $p$ there are counterexamples to the exactness of $\sharp$, even for ordinary abelian varieties. They will have to be quite different from the counterexamples seen in the previous section for semiabelian varieties, as can be seen from the following direct corollary of Proposition \[torsioninkernel\]. Recall from Fact \[caseswithRMR\] that for all abelian varieties $A$, $A^\sharp$ has finite relative Morley rank. (Characteristic $p$) Let $0 \longrightarrow C \longrightarrow B\longrightarrow A\longrightarrow 0$ be an exact sequence of abelian varieties over $K$.
If $C(K)$ has no $p$-torsion, or if $C$ descends to the constants, then the sequence $0 \longrightarrow C^\sharp \longrightarrow B^\sharp \longrightarrow A^\sharp \longrightarrow 0$ is exact. From Corollary \[noptorsionabelian\] we see that Proposition \[Maindescent\] does not hold for non-ordinary (semi)abelian varieties. Indeed, consider again the example described in Remark \[counterexampletodescent\] of a (non-ordinary) abelian variety $A \in EXT(E_1,E_2)$, where $E_1, E_2$ are two elliptic curves over ${\mathbb F}_p$, and which itself does not descend to the constants. Nevertheless, by the above corollary, the sequence $0 \longrightarrow E_1^\sharp \longrightarrow A^\sharp \longrightarrow E_2^\sharp \longrightarrow 0$ is exact. There are still cases, not covered by Corollary \[noptorsionabelian\], where one obtains non-exactness, even in the ordinary case: (Characteristic $p$) There is an exact sequence of (ordinary) abelian varieties such that the induced $\sharp $ sequence is not exact. Let $A$ be an ordinary elliptic curve, defined over $K^p$, which does not descend to $K^{p^\infty}$, and $C$ an ordinary elliptic curve defined over $K^{p^\infty}$. Then we know by Proposition \[abelianseparabletorsion\] that $A [p](K) \cong {\mathbb{Z}}/p{\mathbb{Z}}\cong C[p](K)$ but $A[p^\infty ](K)$ is finite. Pick an isomorphism $f$ between $A [p](K)$ and $C[p](K)$.\ Let $H := \{ (a, - f(a)) \, ; \, a \in A[p](K)\} \subset A[p](K) \times C[p](K)$, and $B := (A \times C) / H$. Then $A$ is isomorphic to $A_1 := (A \times \{0\}) +H \subset B$. Consider the exact sequence: $$0 \longrightarrow A_1 \longrightarrow B \stackrel{g}\longrightarrow B/A_1 \longrightarrow 0.$$ Note that $C_1 := B/A_1$ is isogenous to $C$, hence, by \[ellipticdescent\] or \[ordinaryisogenydescent\], descends to $K^{p^\infty}$.\ We claim that the $p^\infty$ sequence is no longer exact, that is, we claim that $p^\infty A_1 (K) \not= p^\infty B(K) \cap A_1 (K)$.\ Pick some $c \not= 0$, $c \in C[p]$.
As $C$ is defined over $K^{p^\infty}$, $c\in C(K^{p^\infty}) = p^\infty C(K)$. It follows that:\ – $(0,c) + H \in p^\infty B(K)$;\ – $(0,c)+ H \in A_1[p](K)$ (since $(0,c) - (f^{-1}(c), 0) \in H$). By assumption $A_1[p^\infty] (K) $ is finite, so $A_1[p^\infty] (K) \cap p^\infty A_1(K) = \{0\}$, and hence $(0,c)+H \in (p^\infty B(K) \cap A_1(K)) \setminus p^\infty A_1(K)$. We can say more about the example described above: $$0 \longrightarrow A_1 \longrightarrow B \stackrel{g}\longrightarrow C_1 \longrightarrow 0$$ Let $e$ be any element in $C_1[p^n](K)$, and pick some preimage of $e$ in $B(K)$ of the form $(0,y) +H$. Then $[p^n] ((0,y)+H) \in Ker g = (A \times \{0\}) +H$, hence $(0, [p^n] y ) \in (A \times \{0\}) + H $ and $[p^n] y \in C[p](K)$. So $e$ is the image of an element in $B[p^{n+1}]$. From this we can conclude: \(i) $g( B[p^\infty](K)) = C_1 [p^\infty](K)$, which shows that \[noptorsion\] does not hold without the assumption on the torsion. \(ii) By an infinite tree argument, as in the proof of Cor. \[noptorsion\], we deduce that $[p] T_p C_1 (K) \subset \tilde g ( T_p B (K))$. We know that $(p^\infty B(K) \cap A_1 (K))/p^\infty A_1(K)$ is finite but nontrivial and (\[exactnessofU\]) that it is isomorphic to $T_p C_1 (K) / \tilde g (T_p B(K))$. It follows that it must be isomorphic to ${\mathbb Z} / p {\mathbb Z}$. The case when the abelian part of $G$ is an elliptic curve in characteristic $p$ -------------------------------------------------------------------------------- Here we consider only the case of characteristic $p$. Recall the following basic facts about $p$-torsion in elliptic curves (see for example [@silverman]): – If $E$ is ordinary, then for each $n$, $E[p^n] \cong {\mathbb{Z}}/ p^n {\mathbb{Z}}$, and $E[p^\infty] \cong {\mathbb{Z}}_{p^\infty}$. – If $E$ is not ordinary, then $E$ is supersingular, i.e. $E[p^\infty] = \{0\}$. In that case, $E$ is isomorphic to an elliptic curve defined over a finite field.
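The ordinary/supersingular dichotomy can be tested concretely by point counting over a prime field. The sketch below (curves chosen purely for illustration) uses the classical criterion that, for a prime $p\ge 5$, a smooth curve $y^2=x^3+ax+b$ over ${\mathbb F}_p$ is supersingular exactly when its trace $a_p = p+1-\#E({\mathbb F}_p)$ vanishes modulo $p$ (equivalently, $a_p=0$).

```python
# Brute-force point count of y^2 = x^3 + a*x + b over F_p (affine points plus infinity).
# For p >= 5, supersingular <=> trace a_p = p + 1 - #E(F_p) is divisible by p.
def count_points(a, b, p):
    roots = {}                              # square -> list of square roots mod p
    for y in range(p):
        roots.setdefault((y * y) % p, []).append(y)
    n = 1                                   # the point at infinity
    for x in range(p):
        n += len(roots.get((x**3 + a * x + b) % p, []))
    return n

p = 7
for (a, b) in [(1, 0), (1, 1)]:             # y^2 = x^3 + x  and  y^2 = x^3 + x + 1
    n = count_points(a, b, p)
    trace = p + 1 - n
    kind = "supersingular" if trace % p == 0 else "ordinary"
    print(f"y^2 = x^3 + {a}x + {b} over F_{p}: {n} points, trace {trace} ({kind})")
```

Over ${\mathbb F}_7$ the first curve has $8 = p+1$ points (trace $0$, supersingular), while the second has $5$ points (trace $3$, ordinary), matching the dichotomy recalled above.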
From Proposition \[abelianseparabletorsion\] and Corollary \[finitetorsionin$G_3$\], it is easy to conclude that: Let $E$ be an ordinary elliptic curve which does not descend to $K^{p^\infty}$. If $G_1, G_2 $ are semiabelian varieties over $K$ and if the sequence $0\longrightarrow G_1 \longrightarrow G_2 \longrightarrow E \longrightarrow 0$ is exact, then the sequence $0 \longrightarrow G_1^{\sharp} \longrightarrow G_2^\sharp \longrightarrow E^\sharp \longrightarrow 0$ is exact. We can now summarize exactly the situation for a semiabelian variety $G$ whose abelian part is an elliptic curve, $0 \longrightarrow T \longrightarrow G \longrightarrow E \longrightarrow 0$: Let $G$ be as above:\ (i) If $E$ is supersingular, then the $\sharp$ sequence remains exact and $G$ has relative Morley rank.\ (ii) If $E$ is ordinary and does not descend to the constants, then the $\sharp$ sequence remains exact and $G$ has relative Morley rank.\ (iii) If $E$ is ordinary and descends to the constants, the following are equivalent:\ – the $\sharp$ sequence is exact;\ – $G$ descends to the constants;\ – $G$ has relative Morley rank;\ – $G[p^\infty](K)$ is infinite. In the case when $G$ does not descend to the constants, $(G^\sharp \cap T(K) ) / T^\sharp $ is isomorphic to the profinite group ${\mathbb{Z}}_p$. Recall first that Proposition \[Morleysemiabelian\] says that in the present context $G^\sharp$ has relative Morley rank if and only if the $\sharp$ sequence is exact.\ (i) If $E$ is supersingular, it has no $p$-torsion and Corollary \[finitetorsionin$G_3$\] applies.\ (ii) If $E$ does not descend to the constants, Corollary \[elliptictraceless\] applies.\ (iii) If $E$ is ordinary and descends to $K^{p^\infty}$, by Proposition \[Maindescent\], the $\sharp$ sequence will be exact if and only if $G$ descends to $K^{p^\infty}$. As $T$ has no $p$-torsion, $G[p^\infty] \cong E[p^\infty] \cong {\mathbb{Z}}_{p^\infty}$.
So if $G$ descends to the constants, then $G[p^\infty] (K) = G[p^\infty]$, so is infinite.\ If $G$ does not descend to $K^{p^\infty}$, by Proposition \[abelianseparabletorsion\], for some $n$, $G[p^n](K)$ must be a proper subgroup of $G[p^n]\cong {\mathbb{Z}}/p^n {\mathbb{Z}}$. Since every infinite subgroup of $G[p^\infty]\cong {\mathbb{Z}}_{p^\infty}$ contains all the $G[p^m]$’s, this forces $G[p^\infty](K)$ to be finite. In particular $T_p G(K) = \{0\}$. By Proposition \[exactnessofU\], $ (G^\sharp \cap T(K) ) / T^\sharp $ is isomorphic to $ T_p E(K)/ \tilde f (T_pG(K)) \cong T_p E(K) = T_p E \cong {\mathbb{Z}}_p$, completing the proof of (iii). Additional remarks and questions ================================ 1\. In characteristic $p$, the counterexamples to exactness of the induced $\sharp$ sequence arise from the following situation: we have two connected commutative definable groups $H_1 <H_2$ which are not divisible. We consider $D_2$, the biggest divisible subgroup (which is infinitely definable) of $H_2$. The counterexamples are exactly the cases when $H_1 \cap D_2$ is [*not*]{} divisible. One can ask the same question also for other classes of groups, in particular for commutative algebraic groups: Given $G_1 <G_2$ two commutative connected algebraic groups defined over some algebraically closed field $K$ of characteristic $p$, consider $D <G_2$, the biggest divisible subgroup of $G_2$. It is easy to check that $D$ is a closed subgroup of $G_2$, also defined over $K$. Using the characterizations of the groups $p^\infty G(K)$, given in terms of the Weil restrictions $\Pi_{K/K^{p^n}} G$ in [@BeDe], one can deduce easily from our examples that the same phenomenon occurs for commutative algebraic groups. 2\. We will finish by mentioning a rather intriguing question, which, as far as we know, remains open. Let $A$ be an abelian variety defined over ${\mathbb F}_p(t)$ and let $K_0$ denote the separable closure of ${\mathbb F}_p(t)$. We can consider $A(K_0)$ and $p^\infty A(K_0)$.
As we recalled in section \[prelimsemiab\], $p^\infty A(K_0) $ is the biggest divisible subgroup of $A(K_0) $ and contains all the torsion of $A$ which is prime to $p$. We do not know if $p^\infty A(K_0)$ can contain any element which is not torsion. Note that if $A$ is defined over ${K_0}^{p^\infty} = \overline {{\mathbb F}_p}$, then $p^\infty A(K_0)= A(\overline {{\mathbb F}_p})$, where indeed every element is torsion. Note that, from the beginning of section 3, in characteristic $p$, when dealing with $A^\sharp = p^\infty A(K)$, we suppose that $K$ is $\omega_1$-saturated, which ensures that $A^\sharp$ contains elements which are not torsion. In characteristic $0$ there are results along these lines, sometimes going under the expression “Manin’s theorem of the kernel”. A formal statement and proof (depending on results of Manin, Chai, ...) appears in [@BePi] (Corollary K.3 of the Appendix), and says that if $A$ is an abelian variety over the algebraic closure $K_{0}$, say, of ${\mathcal{C}}(t)$, equipped with a derivation with field of constants ${\mathcal{C}}$, and $A$ has ${\mathcal{C}}$-trace $0$, then $A^{\sharp}(K_{0})$ is precisely the group of torsion points of $A$. F. Benoist, Théorie des modèles des corps munis d’une dérivation de Hasse, Ph.D. thesis, Univ. Paris 7, 2005. F. Benoist & F. Delon, Questions de corps de définition pour les variétés abéliennes en caractéristique positive, [*Journal de l’Institut de Mathématiques de Jussieu*]{}, 7 (2008), 623–639. D. Bertrand & A. Pillay, A Lindemann-Weierstrass theorem for semiabelian varieties over function fields, preprint 2008. E. Bouscaren & F. Delon, Groups definable in separably closed fields, [*Transactions of the A.M.S.*]{}, 354 (2002), 945–966. E. Bouscaren & F. Delon, Minimal groups in separably closed fields, [*The Journal of Symbolic Logic*]{}, 67 (2002), 239–259. A. Buium, [*Differential algebraic groups of finite dimension*]{}, Lecture Notes in Mathematics 1506, Springer-Verlag, 1992. A.
Buium, [*Differential Algebra and Diophantine Geometry*]{}, Hermann, Paris, 1994. M.M. Erimbetov, Complete theories with $1$-cardinal formulas, [*Algebra i Logika*]{}, 14 (1975), 245–257. E. Hrushovski, The Mordell-Lang conjecture for function fields, [*Journal of the American Mathematical Society*]{}, 9 (1996), 667–690. P. Kowalski & A. Pillay, Quantifier elimination for algebraic $D$-groups, [*Transactions of the A.M.S.*]{}, 358 (2006), 167–181. D. Marker, Model theory of differential fields, in Model Theory of Fields (second edition), Lecture Notes in Logic, ASL, AK Peters, 2006. D. Marker, Manin kernels, Quaderni di Matematica, vol. 6, Napoli, 2000, 1–21. J.S. Milne, [*Etale cohomology*]{}, Princeton University Press, 1980. D. Mumford, [*Abelian varieties*]{}, Published for the Tata Institute of Fundamental Research, Bombay, by Oxford University Press, 1985. A. Pillay, Differential algebraic groups and the number of countable differentially closed fields, in Model Theory of Fields, cited above. A. Pillay & W.-Y. Pong, On Lascar rank and Morley rank of definable groups, [*The Journal of Symbolic Logic*]{}, 67 (2002), 1189–1196. B. Poizat, [*Stable groups*]{}, Mathematical Surveys and Monographs, American Mathematical Society, 2001. M. Rosenlicht, Extensions of vector groups by abelian varieties, [*American Journal of Mathematics*]{}, 80 (1958), 685–714. A. Grothendieck & M. Raynaud, [*Revêtements étales et groupe fondamental*]{}, Séminaire de géométrie algébrique du Bois Marie (SGA1), 1960–61, Lecture Notes in Mathematics 224, Springer-Verlag, 1971. J.-P. Serre, [*Algebraic groups and class fields*]{}, Graduate Texts in Mathematics, Springer-Verlag, 1988. J.-P. Serre, Quelques propriétés des variétés abéliennes en caractéristique $p$, [*American Journal of Mathematics*]{}, 80 (1958), 715–739. S. Shelah, [*Classification Theory*]{}, 2nd edition, North Holland, 1990. J.H. Silverman, [*The arithmetic of elliptic curves*]{}, Graduate Texts in Mathematics, Springer-Verlag, 1986. P.
Vojta, Jets via Hasse-Schmidt derivations, in Diophantine Geometry, CRM Series 4, Edizioni della Normale, Pisa, 2007, 335–361. F. Voloch, Diophantine approximation on abelian varieties in characteristic $p$, [*American Journal of Mathematics*]{}, 117 (1995), 1089–1095. F.O. Wagner, [*Stable groups*]{}, London Mathematical Society Lecture Notes, Cambridge University Press, 1997. M. Ziegler, A remark on Morley rank, preprint 1997, http://home.mathematik.uni-freiburg.de/ziegler/Preprints.html M. Ziegler, Separably closed fields with Hasse derivations, [*The Journal of Symbolic Logic*]{}, 68 (2003), 311–318. F. Benoist franck.benoist@math.u-psud.fr\ Univ. Paris-Sud 11\ Department of Mathematics, Bat. 425\ F-91405 Orsay Cedex, France. E. Bouscaren elisabeth.bouscaren@math.u-psud.fr\ CNRS - Univ. Paris-Sud 11\ Department of Mathematics, Bat. 425\ F-91405 Orsay Cedex, France. A. Pillay pillay@maths.leeds.ac.uk\ University of Leeds\ Department of Pure Mathematics\ Leeds LS2 9JT, United Kingdom. [^1]: Supported by a Post Doctoral position of the Marie Curie European Network MRTN CT-2004-512234 (MODNET) (Leeds, 2007-2008). [^2]: Supported by a Marie Curie Excellence Chair 024052, and an EPSRC grant EP/F009712/1.
CERN-PH-TH/2012-323\ SISSA 31/2012/EP [ **A First Top Partner Hunter’s Guide** ]{}\ \[1.2cm\] [ <span style="font-variant:small-caps;">Andrea De Simone</span>$^{a,\,b}$, <span style="font-variant:small-caps;">Oleksii Matsedonskyi</span>$^c$, <span style="font-variant:small-caps;">Riccardo Rattazzi</span>$^d$, <span style="font-variant:small-caps;">Andrea Wulzer</span>$^{c\,,e}$]{}\ $^a$ *CERN, Theory Division, CH-1211 Geneva 23, Switzerland*\ $^b$ *SISSA and INFN, Sezione di Trieste, Via Bonomea 265, I-34136 Trieste, Italy*\ $^c$ *Dipartimento di Fisica e Astronomia and INFN, Sezione di Padova,\ via Marzolo 8, I-35131 Padova, Italy*\ $^d$ *Institut de Théorie des Phénomènes Physiques,\ École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland*\ $^e$ *Institute for Theoretical Physics, ETH Zurich, CH-8093 Zurich, Switzerland* **Abstract** > We provide a systematic effective lagrangian description of the phenomenology of the lightest top-partners in composite Higgs models. Our construction is based on symmetry, on selection rules and on plausible dynamical assumptions. The structure of the resulting simplified models depends on the quantum numbers of the lightest top partner and of the operators involved in the generation of the top Yukawa. In all cases the phenomenology is conveniently described by a small number of parameters, and the results of experimental searches are readily interpreted as a test of naturalness. We recast presently available experimental bounds on heavy fermions into bounds on top partners: LHC has already stepped well inside the natural region of parameter space. Introduction ============ The exploration of the weak scale at the Large Hadron Collider is set to unveil the dynamics of electroweak symmetry breaking. A giant step in that direction was achieved this year with the discovery of a bosonic resonance, whose features are remarkably compatible with those of the Standard Model Higgs boson. 
Whether we like it or not, the main question now facing us concerns the role of naturalness in the dynamics of the newly discovered boson. Theoretically we can think of two broad scenarios that concretely realize naturalness: supersymmetry and compositeness. In the case of supersymmetry, the implications and the search strategies have been worked out in much greater detail than in the case of compositeness. That is explained partly by the undisputable theoretical appeal of supersymmetry (gauge coupling unification, connection with string theory, etc.) and partly by the comfort of dealing with a perturbative setup. The difficulty in dealing with strong dynamics has instead, and for a long time, slowed down progress in the exploration of compositeness, and, in particular, progress on its objective phenomenological difficulties (mostly flavor, but also precision tests). Interesting ideas were indeed put forward early on [@Kaplan:1983fs; @partcomp], but the absence of a weakly coupled approach prevented more concrete scenarios from appearing. However, in the last decade, thanks in particular to the holographic perspective on compositeness, [*semi-perturbative*]{} scenarios have been depicted and studied [@Agashe:2004rs][^1]. Even though a very compelling single model did not cross our horizon, we believe we have learned how to broadly depict interesting scenarios, while remaining sufficiently agnostic on the details (see for instance [@silh]). The first aspect of an interesting setup is that the Higgs is a pseudo-NG-boson associated with the spontaneous breakdown of an approximate global symmetry. The second aspect is that flavor arises from [*partial compositeness*]{}: the quarks and leptons acquire a mass by mixing with composite fermions. Partial compositeness, although much more convincing than the alternatives, does not, by itself, lead to a fully realistic flavor scenario.
This is because of constraints from $\epsilon_K$ [@Csaki:2008zd], electric dipole moments and lepton flavor violation (see Ref. [@KerenZur:2012fr] for a recent appraisal). In a realistic scenario partial compositeness should likely be supplemented by additional symmetries. In any case, and regardless of details, a robust feature is that the Higgs potential is largely determined by the dynamics associated with the top quark and the composite states it mixes with, the so-called top partners. That is in a sense obvious and expected, as the top quark, because of its large coupling, color multiplicity and numerics, already contributes the leading quadratically divergent correction to the Higgs mass within the SM. It is nonetheless useful to have depicted a scenario that concretely realizes that expectation. The naturalness of electroweak symmetry breaking depends then on the mass of the fermionic top-partners. That is in close analogy with the supersymmetric case, where naturalness is largely controlled by the mass of the bosonic top partners, the stops. ![The assumed spectrum: one multiplet $\Psi$ of top partners parametrically lighter than the remaining composite states.](fig0.pdf "fig:") \[fig0\] The case of light stops in supersymmetry is being actively considered both theoretically and experimentally. One main goal is to effectively cover all regions of parameter space, without being swamped by the less relevant parameters. Simplified models or motivated assumptions like “natural susy” [@Barbieri:2009ev] offer a convenient way to achieve that goal. In the reduced parameter space (featuring stop mass parameters and possibly the gluino or lightest neutralino mass), the constraints from experimental searches offer a direct and largely model-independent appraisal of naturalness. The goal of this paper is to provide a similar simplified approach to describe the results of experimental searches for top partners. We will focus on the composite Higgs scenario based on the minimal coset $SO(5)/SO(4)$.
The basic simplifying assumption is that the spectrum has the structure depicted in figure \[fig0\], where one $SO(4)$ multiplet of colored Dirac fermions $\Psi$ is parametrically lighter than the other states. As already illustrated in Ref. [@Contino:2011np] for the case of bosonic resonances, in that limit one expects the dynamics of $\Psi$ to be described by a weakly coupled effective lagrangian. Therefore the simplified model, at leading order in an expansion in loops and derivatives, can be consistently described by a finite number of parameters. Moreover symmetry and selection rules, via the Callan-Coleman-Wess-Zumino (CCWZ) [@ccwz] construction, reduce the number of relevant parameters. It is obviously understood that the limiting situation presented by the simplified model is not expected to be precisely realized in a realistic scenario. However, a realistic situation where the splitting with the next-to-lightest multiplet is of the order $M_\Psi$ is qualitatively already well described by the simplified model. Only if the splitting were parametrically smaller than $M_\Psi$ would there be dramatic changes. We should also stress that our models are truly minimal, in that they do not even possess sufficient structure (states and couplings) to make the Higgs potential calculable. In principle we could add that structure. For instance by uplifting our multiplet $\Psi$ to a full split $SO(5)$ multiplet, like in a two site model, we could make the Higgs potential only logarithmically divergent, thus controlling its size in leading log approximation, and making the rough connection between $M_\Psi$ and naturalness more explicit along the lines of [@Matsedonskyi:2012ym] (see also [@gillioz; @babis; @dissertori] for a similar construction). We could even go as far as making the one loop Higgs potential finite with a three site model [@panico; @DeCurtis:2011yx], or by imposing phenomenological Weinberg sum-rules [@Marzocca:2012zn; @pomarolriva]. 
However in these less minimal models the first signals at the LHC would still be dominated by the lightest $SO(4)$ multiplet, whatever it may be. The point is that while the contribution of the heavier multiplets does not decouple when focussing on a UV sensitive quantity like the Higgs potential, it does decouple when considering the near-threshold production of the lightest states. For the purpose of presenting the results of the LHC searches in an eloquent way, the simplified model is clearly the way to go. There already exists a literature on simplified top partner models in generic composite Higgs scenarios [@Contino:2006nn; @continoservant; @mrazecwulzer], where the role of symmetry is not fully exploited. Focussing on the minimal composite Higgs model based on $SO(5)/SO(4)$, our paper aims at developing a systematic approach where all possible top partner models are constructed purely on the basis of symmetry and selection rules. In the end we shall derive exclusion plots in a reduced parameter space, which in general involves the mass and couplings of the top-partner $\Psi$. Now, even though these are not the parameters of a fundamental model, given their overall size, we can roughly estimate how natural the Higgs sector is expected to be. We can then read the results of searches as a test of the notion of naturalness. To make that connection, even if qualitative, we must specify the dynamics that gives rise to the top Yukawa. As discussed in [@Panico:2012uw], there are several options, each leading to a different structure of the Higgs potential and thus to a different level of tuning. The common feature of all scenarios is that the top partners need to be light for a reasonably natural theory; the way the tuning scales with the top-partners’ mass is instead different in each case.
In this paper we focus on the possibility that the right-handed top quark $t_R$ is an $SO(4)$ singlet belonging to the strong sector, so that the top Yukawa simply arises from an $SO(5)$ breaking perturbation of the form $$\lambda_L\, \bar q_L\, {\cal O}_R+{\textrm{h.c.}}\,.$$ Here ${\cal O}_R$ is a composite operator, which in the low energy theory maps to $Ht_R$, thus giving rise to a top Yukawa coupling $y_t\sim \lambda_L$. The operator ${\cal O}_R$ however also interpolates, in general, for massive states, the top partners. Now, from simple power counting, and also from explicit constructions [@panico], at leading order in the breaking parameter $\lambda_L$ we expect the Higgs potential to have the form \[powercount\] $$V(h)=\frac{N_c\,\lambda_L^2\, m_*^2}{16\pi^2}\left\{a\, h^2 +b\,\frac{h^4}{f^2}+c\,\frac{h^6}{f^4}+\dots\right\}\,,$$ where $N_c=3$ is the number of colors, $a,b,c,\dots$ are coefficients expected to be $O(1)$, $f$ is the decay constant of the $\sigma$-model, while $m_*$ broadly indicates the mass scale of the top partners. Then, since $\Psi$ is, ideally, the lightest top-partner we have $M_\Psi \lsim m_*$. Given $m_*$ and $f$, reproducing the measured values $v\equiv \langle h\rangle= 246$ GeV and $m_h= 125$ GeV may require a tuning of $a$ and $b$ below their expected $O(1)$ size. More explicitly one finds $$a= -\frac{\pi^2}{N_c}\left(\frac{2\, m_h}{\lambda_L\, m_*}\right)^2$$ \[a\] and, defining the top-partner coupling as $g_*\equiv m_*/f$ according to Ref. [@silh], $$b= \frac{2\pi^2}{N_c}\,\frac{m_h^2}{\lambda_L^2\, g_*^2\, v^2}\,.$$ \[b\] By these equations we deduce that in the most natural scenario the top partners should not only be light (say below a TeV) but also not too strongly coupled. While of course the whole discussion is very qualitative, we still believe eqs. (\[a\])-(\[b\]) give a valid rule of thumb for where the top partners should best be found. It is with eqs. (\[a\])-(\[b\]) in mind that one should interpret the results of the searches for top partners. Notice that while naturalness favors sub-TeV fermionic resonances, electroweak precision constraints favor instead bosonic resonances above 2-3 TeV. 
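As a rough numerical illustration of this rule of thumb, the short sketch below evaluates the required sizes of the potential coefficients, assuming the one-loop estimates $|a|=\frac{\pi^2}{N_c}\left(\frac{2m_h}{\lambda_L m_*}\right)^2$ and $b=\frac{2\pi^2}{N_c}\frac{m_h^2}{\lambda_L^2 g_*^2 v^2}$ stated above; the benchmark values of $m_*$ and $g_*$ are purely illustrative:

```python
import math

# Illustrative sketch: required sizes of the coefficients a and b of the
# one-loop Higgs potential, assuming
#   |a| = (pi^2/N_c) * (2 m_h / (lambda_L m_*))^2
#   b   = (2 pi^2/N_c) * m_h^2 / (lambda_L^2 g_*^2 v^2)
# Values of order one mean no tuning; much smaller values signal tuning.
Nc, v, mh = 3, 246.0, 125.0   # GeV
lam = 1.0                     # lambda_L ~ y_t

def required_a(mstar):
    return (math.pi**2 / Nc) * (2.0 * mh / (lam * mstar))**2

def required_b(gstar):
    return (2.0 * math.pi**2 / Nc) * mh**2 / (lam**2 * gstar**2 * v**2)

for mstar in (500.0, 1000.0, 2000.0):
    print(f"m_* = {mstar:6.0f} GeV  ->  |a| = {required_a(mstar):.3f}")
for gstar in (1.0, 3.0):
    print(f"g_* = {gstar:.1f}  ->  b = {required_b(gstar):.3f}")
```

For $m_*\simeq1$ TeV and $\lambda_L\simeq1$ the required $|a|$ is about $0.2$, i.e. still order one, while $m_*\simeq2$ TeV already pushes it to the few-percent level: this is the quantitative content of "light and not too strongly coupled".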
A technically natural and viable model should therefore be more complex than a generic composite model described by a single scale. This situation closely resembles that of supersymmetric models, where the light squark families and the gluinos are pushed up by direct searches, while technical naturalness demands the stops to be as light as possible. This paper is organized as follows. In Section \[sect:models\] we discuss the structure of the models and their main features, such as the mass spectrum and the couplings of the top partners. Then, in Section \[sec:TPP\] we turn to analyse the phenomenology of the top partners, their production mechanisms and decay channels, highlighting the most relevant channels to focus LHC searches on. The bounds on the model parameters are derived in Section \[sect:bounds\], using the LHC data available at present [^2]. Finally, our concluding remarks are collected in Section \[conclusions\]. The Models {#sect:models} ========== Our first goal is to develop a simplified description of the top partners, suited for studying the phenomenology of their production at the LHC. These simplified models should capture the robust features of more complete explicit constructions[^3] or, better, of a putative general class of underlying theories. In particular, robust, and crucial, features are the pNGB nature of the Higgs and the selection rules associated with the small breaking of the corresponding global symmetry. We will see below that these features strongly constrain the structure of the spectrum and of the couplings of the top partners, similarly to what was found in Ref. [@panico] for the case of partial $t_R$ compositeness. We thus assume that the Higgs is the pNGB of the minimal coset ${\textrm{SO(5)}}/{\textrm{SO(4)}}$ and construct Lagrangians that respect the non-linearly realized ${\textrm{SO(5)}}$ invariance. We follow the standard CCWZ construction [@ccwz], whose detailed formulation for our coset is described in Appendix A. 
The CCWZ methodology was first employed to model the top partners in Ref. [@Marzocca:2012zn]. The central objects are the Goldstone boson $5\times 5$ matrix $U$ and the $d_\mu$ and $e_\mu$ symbols constructed out of $U$ and its derivative. The top partner field $\Psi$ has definite transformation properties under the unbroken ${\textrm{SO(4)}}$ group. We will consider two cases, $\Psi$ transforming in the $r_\Psi={\mathbf{4}}$ or $r_\Psi=\mathbf{1}$ of ${\textrm{SO(4)}}$. In our construction the right-handed top quark $t_R$ emerges as a chiral bound state of the strong dynamics. $t_R$ must thus belong to a complete multiplet of the unbroken subgroup ${\textrm{SO(4)}}$, and, given that we do not want extra massless states, it must be a singlet. That does not yet fully specify its quantum numbers. This is because, in order to reproduce the correct hypercharge, one must enlarge the global symmetry by including an extra unbroken ${\textrm{U(1)}}_X$ factor and define the hypercharge as $Y\,=\,T_R^3\,+\,X$, where $T_R^3$ is the third ${\textrm{SU(2)}}_R$ generator of ${\textrm{SO(5)}}$.[^4] Therefore the coset is actually ${\textrm{SO(5)}}\times{\textrm{U(1)}_X}/{\textrm{SO(4)}}\times{\textrm{U(1)}_X}$; $t_R$ has $X$ charge equal to $2/3$ while the Higgs is $X$ neutral (its hypercharge coincides with its $T_R^3$ charge). A second assumption concerns the coupling of the elementary fields, [*[i.e.]{}*]{} the SM gauge fields $W_\mu$ and $B_\mu$ and the elementary left-handed doublet $q_L=(t_L,b_L)$, to the strong sector [^5]. The EW bosons are coupled by gauging the SM subgroup of ${\textrm{SO(5)}}\times{\textrm{U(1)}_X}$. The $q_L$ is assumed to be coupled *linearly* to the strong sector, following the hypothesis of partial compositeness [@partcomp]. 
In the UV Lagrangian this coupling has therefore the form $${\mathcal L}_{\textrm{mix}}^{\textrm{UV}}=y\, \overline{q}_L^\alpha\Delta^*_{\alpha\,I_{\mathcal{O}}}{\mathcal O}^{I_{\mathcal{O}}} +{\textrm h.c.}\equiv y \left(\overline{Q}_L\right)_{I_{\mathcal{O}}}{\mathcal O}^{I_{\mathcal{O}}} +{\textrm h.c.}\,, \label{lmix}$$ where ${\mathcal{O}}$ is an operator of the strong sector that transforms in some representation ${{r}}_{\mathcal{O}}$ of ${\textrm{SO(5)}}\times{\textrm{U(1)}_X}$. The choice of ${{r}}_{\mathcal{O}}$ is, to some extent, free. Minimality, and the aim of reproducing explicit models considered in the literature, led us to consider two cases: ${{r}}_{\mathcal{O}}={\mathbf{5}}_{\mathbf{2/3}}$ and ${{r}}_{\mathcal{O}}={\mathbf{14}}_{\mathbf{2/3}}$ [^6]. Notice that the ${\textrm{U(1)}}_X$ charge of the operators must be equal to the one of the $t_R$ in order for the top mass to be generated after EWSB. In total, depending on whether the top partners are in the ${\mathbf{4}}_{\mathbf{2/3}}$ or in the ${\mathbf{1}}_{\mathbf{2/3}}$ of the unbroken $SO(4)$, we will discuss four models, one for each combination of $r_\Psi$ and ${{r}}_{\mathcal{O}}$. The classification of the various models is summarized in Table \[models\]. The explicit breaking of $SO(5)$ due to $y$ in eq. (\[lmix\]) gives rise to a leading contribution to the Higgs potential $V(h)$. However, in order to be able to tune the Higgs vacuum expectation value $v$ to be much smaller than its natural scale $f$, one may need to tune against each other contributions to $V(h)$ with a different functional dependence on $h/f$. In the case of ${{r}}_{\mathcal{O}}={\mathbf{14}}_{\mathbf{2/3}}$, the top Yukawa seed $y$ itself gives rise to two independent structures, whose coefficients can be so tuned that $v/f\ll 1$. 
On the other hand, in the case of ${{r}}_{\mathcal{O}}={\mathbf{5}}_{\mathbf{2/3}}$, the leading contribution to the potential consists of just one structure $\propto \sin^2(h/f)\cos^2(h/f)$, with well-defined, non-tunable minima and maxima. In the latter case then, in order to achieve $v\ll f$, one should assume that there exists an additional $SO(5)$-breaking coupling whose contribution to the potential competes with that of the top. If this additional coupling does not involve the SM fields, which seems reasonable, then its contribution to $V$ will arise at tree level. In order for it not to outcompete the top contribution, which arises at loop level, this coupling should be so suppressed that its relative impact on strong sector quantities is of order $O(y^2/16\pi^2)$. The latter should be compared to the effects of relative size $(y/g_\Psi)^2$ induced at tree level by the mixing in eq. (\[lmix\]) and accounted for in this paper. We conclude that, even when an extra $SO(5)$ breaking coupling is needed, it is not likely to affect the phenomenology of top partners in a quantitatively significant way. Now back to the top partners. Our choices of their quantum numbers correspond to those obtained in explicit constructions. However our choice could also be motivated on general grounds by noticing that the operators ${\mathcal{O}}$ interpolate for particles with the corresponding quantum numbers. By decomposing ${\mathcal{O}}$ under the unbroken ${\textrm{SO(4)}}$ we obtain, respectively, ${\mathbf{5}}_{\mathbf{2/3}}={\mathbf{4}}_{\mathbf{2/3}}+{\mathbf{1}}_{\mathbf{2/3}}$ and ${\mathbf{14}}_{\mathbf{2/3}}={\mathbf{4}}_{\mathbf{2/3}}+ {\mathbf{1}}_{\mathbf{2/3}}+{\mathbf{9}}_{\mathbf{2/3}}$. In both cases we expect to find a ${\mathbf{4}}_{\mathbf{2/3}}$ and/or a ${\mathbf{1}}_{\mathbf{2/3}}$ in the low-energy spectrum. It could also be interesting to study top partners in the ${\mathbf{9}}_{\mathbf{2/3}}$, but this goes beyond the scope of the present paper.   
$r_{\mathcal{O}}={\mathbf{5}}_{\mathbf{2/3}}$ $r_{\mathcal{O}}={\mathbf{14}}_{\mathbf{2/3}}$ -------------------------------------- ----------------------------------------------- ------------------------------------------------ $r_\Psi={\mathbf{4}}_{\mathbf{2/3}}$ $r_\Psi={\mathbf{1}}_{\mathbf{2/3}}$ : []{data-label="models"} The coupling of eq. (\[lmix\]) breaks the ${\textrm{SO(5)}}\times{\textrm{U(1)}_X}$ symmetry explicitly, but it must of course respect the SM group. This fixes unambiguously the form of the tensor $\Delta$ and thus of the *embeddings*, $(Q_L)_{I_{\mathcal{O}}}=\Delta_{\alpha\,I_{\mathcal{O}}}q_L^\alpha$, of the elementary $q_L$ in ${\textrm{SO(5)}}\times{\textrm{U(1)}_X}$ multiplets. For the ${\mathbf{5}}$ and the ${\mathbf{14}}$, respectively the fundamental and the two-index symmetric traceless tensor, we have $$\left(Q_L^{\mathbf{5}}\right)_{I}=\frac{1}{\sqrt{2}}\begin{pmatrix} i\, b_L\\ b_L\\ i\, t_L\\ -\,t_L\\ 0 \end{pmatrix},\qquad \left(Q_L^{\mathbf{14}}\right)_{I,J}=\frac{1}{2}\begin{pmatrix} 0 & 0 & 0 & 0 & i\, b_L\\ 0 & 0 & 0 & 0 & b_L\\ 0 & 0 & 0 & 0 & i\, t_L\\ 0 & 0 & 0 & 0 & -t_L\\ i\, b_L & b_L & i\, t_L & -t_L & 0\\ \end{pmatrix}.$$ \[emb\] Though explicitly broken, the ${\textrm{SO(5)}}\times{\textrm{U(1)}_X}$ group still gives strong constraints on our theory. Indeed the elementary-composite interactions of eq. (\[lmix\]) formally respect the symmetry provided we formally assign suitable transformation properties to the embeddings. Under $g\in{\textrm{SO(5)}}$ we have $$\left(Q_L^{\mathbf{5}}\right)_{I}\rightarrow g_{I}^{\ I'}\left(Q_L^{\mathbf{5}}\right)_{I'}\,,\qquad \left(Q_L^{\mathbf{14}}\right)_{IJ}\rightarrow g_{I}^{\ I'}\, g_{J}^{\ J'}\left(Q_L^{\mathbf{14}}\right)_{I'J'}\,,$$ \[transemb\] while the ${\textrm{U(1)}_X}$ charge is equal to $2/3$ in both cases. We will have to take this symmetry into account in our constructions. Effective Lagrangians {#efflagr} --------------------- Based on the symmetry principles specified above, we aim at building phenomenological effective Lagrangians for the $q_L$, the composite $t_R$ and the lightest top partner states $\Psi$. 
The basic idea is that our Lagrangians emerge from a “complete” theory by integrating out the heavier resonances in the strong sector. We thus need to rely on some qualitative description of the dynamics in order to estimate the importance of the various effective operators. We follow the “SILH” approach of Ref. [@silh] and characterize the heavy resonances in terms of a single mass scale $m_*$ and of a single coupling $g_*=m_*/f$. As we already suggested in the introduction, parametrizing the strong sector in terms of a single scale is probably insufficient: a $125$ GeV Higgs suggests that the mass scale of the fermionic resonances should be slightly lower than that of the vectors. For our purposes the relevant scale $m_*$ should then be identified with the mass scale of the fermionic sector. We thus adopt the following power-counting rule $$\displaystyle {\mathcal L}=\sum\frac{m_*^4}{g_*^2}\left(\frac{y\, q_L}{m_*^{3/2}}\right)^{n_{\textrm{el}}} \left(\frac{g_* \Psi}{m_*^{3/2}}\right)^{n_{\textrm{co}}}\left(\frac{\partial}{m_*}\right)^{n_\partial}\left(\frac{\Pi}{f}\right)^{n_\pi}\,, \label{powc}$$ where $\Pi=\Pi^{1,\ldots,4}$ denotes the four canonically normalized real Higgs field components and $f$ is the Goldstone decay constant. Notice the presence of the coupling $y$ that accompanies (due to eq. (\[lmix\])) each insertion of the elementary $q_L$. Analogously the operators involving the SM gauge fields, omitted for brevity from eq. (\[powc\]), should be weighted by $g_{\textrm{SM}}/m_*$. The $t_R$ is completely composite and therefore it obeys the same power-counting rule as the top partner field $\Psi$. Two terms in our effective Lagrangian will *violate* the power-counting. One is the kinetic term of the elementary fields, which we take to be canonical, while eq. (\[powc\]) would assign it a smaller coefficient, $(y/g_*)^2$ in the case of fermions and $(g/g_*)^2$ in the case of gauge fields. 
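The power-counting rule can be turned into a one-line estimator. The following sketch (with purely illustrative benchmark values for $y$, $g_*$ and $f$) reproduces two statements made here: a Yukawa-type operator $\bar q_L\Psi\,\Pi$ is assigned a coefficient of order $y$, while the elementary fermion kinetic term would be assigned the suppressed coefficient $(y/g_*)^2$:

```python
# Estimator for the coefficient assigned by the power-counting rule of eq. (powc)
# to an operator with n_el elementary fields, n_co composite fermions,
# n_der derivatives and n_pi Goldstone-Higgs insertions.
# The numerical inputs below are illustrative benchmarks, not model predictions.
def coefficient(n_el, n_co, n_der, n_pi, y=1.0, gstar=3.0, f=800.0):
    mstar = gstar * f   # m_* = g_* f
    return (mstar**4 / gstar**2) * (y / mstar**1.5)**n_el \
           * (gstar / mstar**1.5)**n_co * (1.0 / mstar)**n_der * (1.0 / f)**n_pi

# Yukawa-type operator qbar_L Psi Pi (n_el = n_co = n_pi = 1): all powers of m_*
# cancel and the estimate collapses to y itself.
print(coefficient(1, 1, 0, 1))
# Elementary kinetic term qbar_L d qbar_L (n_el = 2, n_der = 1): the estimate is
# (y/g_*)^2, smaller than the canonical coefficient actually used in the text.
print(coefficient(2, 0, 1, 0))
```

The cancellation in the first case is exact because $m_*=g_*f$ removes every dimensionful factor, which is precisely why trilinear Goldstone couplings come out proportional to $y$ (or $c_1 g_\Psi$ below) rather than to $g_*$.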
This is because the elementary field kinetic term does not emerge from the strong sector: it was already present in the UV Lagrangian with ${\mathcal{O}}(1)$ coefficient. Indeed it is precisely because their kinetic coefficient is bigger than the power-counting estimate of eq. (\[powc\]) that the elementary fields have a coupling weaker than $g_*$. The other term violating power-counting is the *mass* of the top partners, which we denote by $M_\Psi$. We assume $M_\Psi<m_*$ in order to justify the construction of an effective theory in which only the top partners are retained while the other resonances are integrated out. The ratio $M_\Psi/m_*$ is our expansion parameter. We will therefore obtain accurate results only in the presence of a large separation, $M_\Psi\ll m_*$, between the lightest states and the other resonances [^7]. However already for a moderate separation, $M_\Psi\lesssim m_*$, or even extrapolating towards $M_\Psi\simeq m_*$, our models should provide a valid qualitative description of the relevant physics. Nevertheless, for a more careful study of the case of small separation our setup should be generalized by incorporating more resonances in the effective theory. ### Top partners in the fourplet First we consider the models in which the top partners are in the ${\mathbf{4}}_{\mathbf{2/3}}$. In this case the top partner field is $$\Psi=\frac{1}{\sqrt{2}}\begin{pmatrix} i\, B-i\,\Xft\\ B+\Xft\\ i\, T+i\,\Xtt\\ -T+\Xtt \end{pmatrix},$$ \[4plet\] and it transforms, following CCWZ, as $$\Psi_i\rightarrow h(\Pi; g)_i^{\ j}\,\Psi_j\,,$$ under a generic element $g$ of ${\textrm{SO(5)}}$. The $4\times4$ matrix $h$ is defined by eqs. (\[gtrans\]) and (\[hd\]) and provides a non-linear representation of the full ${\textrm{SO(5)}}$. The four $\Psi$ components decompose into two SM doublets $(T,B)$ and $(\Xft,\Xtt)$ of hypercharge $1/6$ and $7/6$ respectively. The first doublet has therefore the same quantum numbers as the $(t_L,b_L)$ doublet while the second one contains a state of exotic charge $5/3$ plus another top-like quark $\Xtt$. 
When the $q_L$ is embedded in the ${\mathbf{5}}_{\mathbf{2/3}}$, the leading order Lagrangian is $$\begin{aligned} {\cal L}^{{\mathbf{4}},{\mathbf{5}}}&=&i\,\bar q_L\slashed{D} q_L+i\,\bar t_R\slashed{D} t_R +i\,\bar\Psi\left(\slashed{D}+i\,\slashed{e}\right)\Psi-M_\Psi\bar\Psi\Psi\nonumber\\ &&+\left[i\,c_1\left(\bar\Psi_R\right)_i\gamma^\mu d_\mu^{\ i}\, t_R+y\, f\left(\bar Q_L^{\mathbf{5}}\right)^I U_{I\,i}\,\Psi_R^i+y\, c_2\, f\left(\bar Q_L^{\mathbf{5}}\right)^I U_{I\,5}\, t_R+{\textrm{h.c.}}\right], \label{eq:lagrangian2}\end{aligned}$$ where $c_{1,2}$ are coefficients expected to be of order $1$. The above Lagrangian with totally composite $t_R$ was first written in Ref. [@Marzocca:2012zn]. Notice the presence of the $\slashed{e}=e_\mu\gamma^\mu$ term which accompanies the derivative of the top partner field: it reconstructs the CCWZ covariant derivative defined in eq. (\[covder\]) and is essential to respect $SO(5)$. In the second line of the equation above we find, first of all, a *direct* interaction, not mediated by the coupling $y$, among the composite $t_R$ and the top partners. This term is entirely generated by the strong sector and would have been suppressed in the case of partial $t_R$ compositeness. It delivers, looking at the explicit form of $d_\mu$ in eq. (\[dande\]), couplings involving the top, the partners and the SM gauge fields. These will play an important role in the single production and in the decay of the top partners. The last two terms give rise, in particular, to the top quark mass but also to trilinear couplings contributing to the single production of top partners. Notice that the indices of the embedding $Q_L^{{\mathbf{5}}}$ *cannot* be contracted directly with those of $\Psi$ because they live in different spaces. The embeddings transform linearly under $\textrm{SO}(5)$ as reported in eq. (\[transemb\]) while $\Psi$ transforms under the non-linear representation $h$. For this reason one insertion of the Goldstone matrix, transforming according to eq. (\[gtrans\]), is needed. For brevity we omitted from eq. (\[eq:lagrangian2\]) the kinetic term of the gauge fields and of the Goldstone Higgs; the latter is given for reference in eq. (\[hkt\]). 
Moreover we have not yet specified the covariant derivatives $D_\mu$ associated with the SM gauge group; these are given by $$\begin{aligned} D_\mu q_L&=&\left(\partial_\mu-i\,g\, W_\mu^i\, \frac{\sigma^i}{2}-i\,\frac{1}{6}\,g'\, B_\mu-i\,g_S\,G_\mu\right)q_L\,,\nonumber\\ D_\mu t_R&=&\left(\partial_\mu-i\,\frac{2}{3}\,g'\, B_\mu-i\,g_S\,G_\mu\right)t_R\,,\nonumber\\ D_\mu\Psi&=&\left(\partial_\mu-i\,\frac{2}{3}\,g'\, B_\mu-i\,g_S\,G_\mu\right)\Psi\,, \label{cder}\end{aligned}$$ where $g,g'$ and $g_S$ are the ${\textrm{SU}}(2)_L\times {\textrm{U}}(1)_Y$ and ${\textrm{SU}}(3)_c$ gauge couplings. We remind the reader that the top partners form a color triplet, hence the gluon in the above equation. The Lagrangian is very similar when the $q_L$ is embedded in the symmetric traceless $Q_L^{{\mathbf{14}}}$. We have $$\begin{aligned} {\cal L}^{{\mathbf{4}},{\mathbf{14}}}&=&i\,\bar q_L\slashed{D} q_L+i\,\bar t_R\slashed{D} t_R +i\,\bar\Psi\left(\slashed{D}+i\,\slashed{e}\right)\Psi-M_\Psi\bar\Psi\Psi\nonumber\\ &&+\left[i\,c_1\left(\bar\Psi_R\right)_i\gamma^\mu d_\mu^{\ i}\, t_R+y\, f\left(\bar Q_L^{\mathbf{14}}\right)^{IJ} U_{I\,i}\,U_{J\,5}\,\Psi_R^i+\frac{1}{2}\,y\, c_2\, f\left(\bar Q_L^{\mathbf{14}}\right)^{IJ} U_{I\,5}\,U_{J\,5}\, t_R+{\textrm{h.c.}}\right]; \label{eq:lagrangian214}\end{aligned}$$ notice that the two indices of $Q_L^{{\mathbf{14}}}$ are symmetric and therefore the term that mixes it with $\Psi$ is unique. The factor ${1 \over 2}$ introduced in the last term is merely conventional. In both fourplet models the leading order Lagrangian contains four parameters, $\{M_\Psi,\, y,\, c_1,\, c_2\}$, on top of the Goldstone decay constant $f$. One parameter will however have to be fixed to reproduce the correct top mass, while the remaining three parameters could be traded for two physical masses, for instance $m_{\Xft}$ and $m_B$, and the coupling $c_1$. It will often be convenient to associate the mass $M_\Psi$ with a coupling $g_\Psi$ $$g_\Psi\equiv\frac{M_\Psi}{f}\,.$$ We will see below that $c_1\times g_\Psi$ controls the strength of the interactions between the top partners and the Goldstone bosons at energy $\sim M_\Psi$. In particular it controls the on-shell couplings relevant for single production and for two body decays. Notice that, as a function of energy, the effective strength of this trilinear interaction is instead $\sim c_1 E/f$. 
For $c_1=O(1)$, as suggested by power counting, the effective coupling is of order $g_*\equiv m_*/f$ at the energy scale of the heavier resonances, in accord with the principle of partial UV completion proposed in Ref. [@Contino:2011np]. Power counting and partial UV completion then equivalently imply $c_1=O(1)$ and therefore $c_1 g_\Psi < g_*$. This result obviously follows from the fact that the Higgs is a derivatively coupled pNGB. It would be lost if the Higgs were instead treated as a generic resonance. In the latter case the expected coupling would be independent of the mass and it would be larger, of order $g_*$. Moreover notice that, although on shell it leads to an effective Yukawa vertex, the interaction associated with $c_1$ does not affect the spectrum when $H$ acquires a vacuum expectation value. That again would not be true if we did not account for the pNGB nature of $H$. The pNGB nature of $H$ is not accounted for in the first thorough work on simplified top partner models [@continoservant] and in the following studies (see in particular [@mrazecwulzer; @AguilarSaavedra:2009es]). Notice that, a priori, one of the four parameters describing the simplified model could be complex. This is because we have at our disposal only $3$ chiral rotations to eliminate the phases from the Lagrangians (\[eq:lagrangian2\]) and (\[eq:lagrangian214\]). Nevertheless we are entitled to keep all the parameters real if we demand that the strong sector respect a CP symmetry defined in Appendix A. It is easy to check that CP requires the non-derivative couplings to be real while the coefficient of the term involving $d_\mu$ must be purely imaginary. CP conservation is an additional hypothesis of our construction; the broad phenomenology, however, does not significantly depend on it. ### Top partners in the singlet The Lagrangian is even simpler if the top partners are in the ${\mathbf{1}}_{\mathbf{2/3}}$. 
In this case we only have one exotic top-like state which we denote as ${\widetilde{T}}$. For the two models that we aim to consider, with ${{r}}_{\mathcal{O}}={\mathbf{5}}_{\mathbf{2/3}}$ and ${{r}}_{\mathcal{O}}={\mathbf{14}}_{\mathbf{2/3}}$, the Lagrangian reads, respectively, $$\begin{aligned} {\cal L}^{{\mathbf{1}},{\mathbf{5}}}&=&i\,\bar q_L \slashed{D} q_L+i\,\bar t_R \slashed{D} t_R +i\,\bar\Psi\slashed{D}\Psi-M_\Psi\bar\Psi\Psi\nonumber\\ &&+\left[y\, f\left(\bar Q_L^{\mathbf{5}}\right)^I U_{I\,5}\, t_R+y\, c_2\, f\left(\bar Q_L^{\mathbf{5}}\right)^I U_{I\,5}\,\Psi_R+{\textrm{h.c.}}\right],\nonumber\\ {\cal L}^{{\mathbf{1}},{\mathbf{14}}}&=&i\,\bar q_L \slashed{D} q_L+i\,\bar t_R \slashed{D} t_R +i\,\bar\Psi\slashed{D}\Psi-M_\Psi\bar\Psi\Psi\nonumber\\ &&+\left[y\, f\left(\bar Q_L^{\mathbf{14}}\right)^{IJ} U_{I\,5}\,U_{J\,5}\, t_R+y\, c_2\, f\left(\bar Q_L^{\mathbf{14}}\right)^{IJ} U_{I\,5}\,U_{J\,5}\,\Psi_R+{\textrm{h.c.}}\right]. \label{eq:lagrangian211}\end{aligned}$$ Notice that we could have also written a direct mixing between $t_R$ and $\Psi$ because the two fields now have identical quantum numbers. However this mixing can obviously be removed by a field redefinition. Apart from $f$, the singlet models contain three parameters, $\{M_{\Psi},\,y,\,c_2\}$, one of which must again be fixed to reproduce the top mass. We are left with two free parameters that correspond to the coupling $c_2$ and to the mass $m_{\widetilde T}$ of the partners. Notice that in this case all the parameters can be made real by chiral rotations without the need of imposing the CP symmetry. The latter symmetry is automatically respected in the singlet models. In order to complete the definition of our models let us discuss the theoretically expected size of their parameters. From the discussion in the introduction and from experience with concrete models, one can reasonably argue that the favored range for $M_\Psi$ is between $500$ GeV and $1.5$ TeV, while $g_\Psi$ is favored in the range $1\lsim g_\Psi \lsim 3$. It is also worth recalling the favored range of the decay constant $f\equiv M_\Psi/g_\Psi$, which is conveniently traded for the parameter $\xi$ defined in Ref. [@Agashe:2004rs], $$\xi=\frac{v^2}{f^2}\,,$$ where $v=2m_W/g=246$ GeV is the EWSB scale. Since $\xi$ controls the deviation from the SM at low energies it cannot be too large. Electroweak precision tests suggest $\xi\simeq 0.2$ or $\xi\simeq 0.1$, which corresponds to $f\simeq500$ GeV or $f\simeq800$ GeV. Smaller values of $\xi$ would of course require more tuning. Finally, the strength of the elementary-composite coupling $y$ is fixed by the need to reproduce the correct mass of the top quark. 
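For reference, the quoted correspondence between $\xi$ and $f$ follows immediately from $f=v/\sqrt{\xi}$; a trivial numerical check (giving $f\simeq550$ GeV and $f\simeq780$ GeV, consistent with the rounded values quoted above):

```python
# f = v / sqrt(xi): check the correspondence xi ~ 0.2 <-> f ~ 500 GeV
# and xi ~ 0.1 <-> f ~ 800 GeV quoted in the text.
v = 246.0  # GeV
for xi in (0.2, 0.1):
    print(f"xi = {xi:.1f}  ->  f = {v / xi**0.5:.0f} GeV")
```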
We will see in the following section that this implies $y\sim y_t=1$. A first look at the models -------------------------- Now that the models are defined let us start discussing their implications. The simplest aspects will be examined in the present section while a more detailed analysis of their phenomenology will be postponed to the following one. ### The Spectrum We start from the fourplet model with ${{r}}_{\mathcal{O}}={\mathbf{5}}_{\mathbf{2/3}}$ and we first focus on the fermionic spectrum. The mass-matrix after EWSB is easily computed from eqs. (\[eq:lagrangian2\]) and (\[emb\]) by using the explicit form of $U$ on the Higgs VEV obtained from eq. (\[uvev\]). By restricting to the sector of $2/3$-charged states we find $$\begin{pmatrix} \bar t_L\\ \bar T_L\\ \bar{\Xtt}_L \end{pmatrix}^{\!T} \begin{pmatrix} -\frac{c_2}{\sqrt{2}}\, y f\sin\epsilon & y f\cos^2\frac{\epsilon}{2} & y f\sin^2\frac{\epsilon}{2}\\ 0&-M_\Psi&0\\ 0&0&-M_\Psi \end{pmatrix} \begin{pmatrix} t_R\\ T_R\\ \Xtt_R \end{pmatrix}, \label{eq:mass231}$$ where $\epsilon = \langle h\rangle/f$ is defined as the ratio between the VEV of the Higgs field and the Goldstone decay constant. The relation between $\langle h\rangle$ and the EWSB scale is reported in eq. (\[VEV\]), from which we derive $$\xi=\frac{v^2}{f^2}=\sin^2\epsilon\,.$$ We immediately notice a remarkable feature of the mass-matrix (\[eq:mass231\]): only the *first line*, [*[i.e.]{}*]{} the terms which involve the $t_L$, is sensitive to EWSB while the rest of the matrix remains unperturbed. This is due to the fact that the Higgs is a pNGB and therefore its non-derivative interactions can only originate from the breaking of the Goldstone symmetry ${\textrm{SO}}(5)$. The ${\textrm{SO}}(5)$ invariant terms just produce derivative couplings of the Higgs and therefore they cannot contribute to the mass-matrix. Since the Goldstone symmetry is broken exclusively by the terms involving the elementary $q_L$, it is obvious that the mass-matrix must have the form of eq. (\[eq:mass231\]). Notice that this structure would have been lost if we had not taken into account the pNGB nature of the Higgs. 
Indeed if we had treated the Higgs as a generic composite ${\textrm{SO}}(4)$ fourplet, Yukawa-like couplings of order $g_*$ and involving $t_R$ and $\Psi$ would have been allowed. After EWSB those terms would have given rise to $(2,1)$ and $(3,1)$ mass matrix entries of order $g_* v$. The peculiar structure of the mass-matrix has an interesting consequence. It implies that only one linear combination of $T$ and $\Xtt$, with coefficients proportional to the $(1,2)$ and $(1,3)$ entries, mixes with the $q_L$, while the orthogonal combination does not mix either with the $q_L$ or with any other state. Explicitly, the two combinations are $$\begin{aligned} T'&=&\frac{1}{\sqrt{\cos^4\frac{\epsilon}{2}+\sin^4\frac{\epsilon}{2}}}\left(\cos^2\frac{\epsilon}{2}\, T+\sin^2\frac{\epsilon}{2}\,\Xtt\right),\nonumber\\ \primeXtt&=&\frac{1}{\sqrt{\cos^4\frac{\epsilon}{2}+\sin^4\frac{\epsilon}{2}}}\left(\cos^2\frac{\epsilon}{2}\,\Xtt-\sin^2\frac{\epsilon}{2}\, T\right). \label{XT}\end{aligned}$$ After this field redefinition the mass-matrix becomes block-diagonal $$\begin{pmatrix} \bar t_L\\ \bar T'_L\\ \bar{\primeXtt}_L \end{pmatrix}^{\!T} \begin{pmatrix} -\frac{c_2}{\sqrt{2}}\, y f\sin\epsilon & y f\sqrt{\cos^4\frac{\epsilon}{2}+\sin^4\frac{\epsilon}{2}} &0\\ 0&-M_\Psi&0\\ 0&0&-M_\Psi \end{pmatrix} \begin{pmatrix} t_R\\ T'_R\\ \primeXtt_R \end{pmatrix}, \label{eq:mass231prime}$$ so that the state $\primeXtt$ is already a mass eigenstate with mass $m_{\Xtt}=M_\Psi$. But the spectrum also contains a second particle with exactly the same mass. Indeed the $\Xft$ cannot mix because it is the only state with exotic charge and therefore it maintains the mass $m_{\Xft}=M_\Psi$ it had before EWSB. The $\Xtt$ and the $\Xft$ are thus *exactly degenerate*. This remarkable property is due to the pNGB nature of the Higgs and it would be generically violated, as previously discussed, if this assumption were relaxed. This result also depends on $t_R$ being a composite singlet. If $t_R$ were instead a partially composite state mixing with a non-trivial representation of $SO(5)$ (for instance a [**5**]{}) there would be additional entries in the mass matrix. [^8] In a sense our result depends on $y$ being the only relevant parameter that breaks $SO(5)$ explicitly. ![[]{data-label="spectrum"}](figureSp.pdf) Once the mass-matrix has been put in the block-diagonal form of eq. (\[eq:mass231prime\]) it is straightforward to diagonalize it and to obtain exact formulae for the rotation matrices and for the masses of the top and of the $T$ partner. 
However the resulting expressions are rather involved and we just report here approximate expressions for the masses. We have $$\begin{aligned} m_t&\simeq&\frac{c_2}{\sqrt{2}}\,\frac{y f\, M_\Psi}{\sqrt{M_\Psi^2+y^2f^2}}\,\sin\epsilon\,,\nonumber\\ m_T&\simeq&\sqrt{M_\Psi^2+y^2f^2}\,. \label{masss}\end{aligned}$$ From the above equation we obtain the correct order of magnitude for the top mass if, as anticipated, $y\sim y_t$ and $g_\Psi\gtrsim1$. In this region of the parameter space the corrections to the approximate formulae are rather small, being suppressed by both a factor $y^2/g_\Psi^2$ (which is preferentially smaller than one) and by $\xi\ll1$. However we will consider departures from this theoretically expected region and therefore we will need to use the exact formulae in the following sections. Similarly we can study the sector of $-1/3$ charge states. It contains a massless $b_L$, because we are not including the $b_R$ in our model, plus the heavy $B$ particle with a mass $$m_B=\sqrt{M_\Psi^2+y^2f^2}\,. \label{mb1}$$ This formula is exact and shows that the bottom sector does not receive, in this model, any contribution from EWSB. By comparing the equation above with the previous one we find that the splitting between $T$ and $B$ is typically small, $$m_B^2-m_T^2\simeq \frac{y^2f^2\sin^2\epsilon}{2}\,\frac{M_\Psi^2+\left(1-c_2^2\right)y^2f^2}{M_\Psi^2+y^2f^2}\,, \label{mdtb}$$ and positive in the preferred region $g_\Psi>y$, although there are points in the parameter space where the ordering $m_T>m_B$ can occur. The splitting between the two doublets is instead always positive, $m_B^2-m_{\Xft}^2=y^2f^2$. The typical spectrum of the top partners that we have in our model is depicted in figure \[spectrum\]. The situation is not much different in the model with ${{r}}_{\mathcal{O}}={\mathbf{14}}_{\mathbf{2/3}}$. The mass-matrix for charge $2/3$ states has again the form of eq. (\[eq:mass231\]), $$\begin{pmatrix} \bar t_L\\ \bar T_L\\ \bar{\Xtt}_L \end{pmatrix}^{\!T} \begin{pmatrix} -\frac{c_2}{2}\, y f\sin\epsilon\cos\epsilon & \frac{y f}{2\sqrt{2}}\left(\cos\epsilon+\cos2\epsilon\right) & \frac{y f}{2\sqrt{2}}\left(\cos\epsilon-\cos2\epsilon\right)\\ 0&-M_\Psi&0\\ 0&0&-M_\Psi \end{pmatrix} \begin{pmatrix} t_R\\ T_R\\ \Xtt_R \end{pmatrix}, \label{eq:mass232}$$ and again it can be put in a block-diagonal form by a rotation between the $T$ and the $\Xtt$ similar to the one in eq. (\[XT\]). Therefore also in this model the physical $\Xtt$ has mass $M_\Psi$ and it is degenerate with the $\Xft$. The approximate top and $T$ mass are given in this case by $$\begin{aligned} m_t&\simeq&\frac{c_2}{2}\,\frac{y f\, M_\Psi}{\sqrt{M_\Psi^2+\frac{y^2f^2}{2}}}\,\sin\epsilon\cos\epsilon\,,\nonumber\\ m_T&\simeq&\sqrt{M_\Psi^2+\frac{y^2f^2}{2}}\,.\end{aligned}$$ 
\[masss1\] Similarly we can compute the mass of the $B$ partner and we find $$m_B=\sqrt{M_\Psi^2+\frac{y^2f^2}{2}\cos^2\epsilon}\,.$$ In this case, differently from the ${\mathbf{5}}$ (see eq. (\[mb1\])), the mass of the $B$ is sensitive to EWSB. Apart from this little difference the spectrum is very similar to the one described in figure \[spectrum\]. The models with the singlet are much simpler because there is only one exotic state. The mass matrices read $$\begin{pmatrix} \bar t_L\\ \bar{\widetilde T}_L \end{pmatrix}^{\!T} \begin{pmatrix} -\frac{y f}{\sqrt{2}}\sin\epsilon & -\frac{c_2\, y f}{\sqrt{2}}\sin\epsilon\\ 0&-M_\Psi \end{pmatrix} \begin{pmatrix} t_R\\ \widetilde T_R \end{pmatrix}, \label{eq:mass1A}$$ $$\begin{pmatrix} \bar t_L\\ \bar{\widetilde T}_L \end{pmatrix}^{\!T} \begin{pmatrix} -\frac{y f}{2}\sin2\epsilon & -\frac{c_2\, y f}{2}\sin2\epsilon\\ 0&-M_\Psi \end{pmatrix} \begin{pmatrix} t_R\\ \widetilde T_R \end{pmatrix}, \label{eq:mass1B}$$ for ${{r}}_{\mathcal{O}}={\mathbf{5}}_{\mathbf{2/3}}$ and ${{r}}_{\mathcal{O}}={\mathbf{14}}_{\mathbf{2/3}}$ respectively. The mass eigenvalues for the ${\mathbf{5}}$ are $$\begin{aligned} m_t&\simeq&\frac{y f}{\sqrt{2}}\,\frac{M_\Psi}{\sqrt{M_\Psi^2+\frac{c_2^2\, y^2f^2}{2}\sin^2\epsilon}}\,\sin\epsilon\,,\nonumber\\ m_{\widetilde T}&\simeq&M_\Psi\sqrt{1+\frac{c_2^2\, y^2f^2}{2M_\Psi^2}\sin^2\epsilon}\,. \label{masss21}\end{aligned}$$ For the ${\mathbf{14}}$ instead we have $$\begin{aligned} m_t&\simeq&\frac{y f}{2}\,\frac{M_\Psi}{\sqrt{M_\Psi^2+\frac{c_2^2\, y^2f^2}{4}\sin^22\epsilon}}\,\sin2\epsilon\,,\nonumber\\ m_{\widetilde T}&\simeq&M_\Psi\sqrt{1+\frac{c_2^2\, y^2f^2}{4M_\Psi^2}\sin^22\epsilon}\,. \label{masss22}\end{aligned}$$ As one can see from the last expressions, the mass of the $\widetilde T$ receives positive contributions proportional to $y^2$ and hence for a fixed mass of the $\widetilde T$, $y$ must be limited from above. Unlike the models with fourplet partners, in the singlet case $y$ completely controls the couplings of the $\widetilde T$ with the top and bottom quarks (see Sec. \[gc\]). Therefore one can expect that for a given $m_{\widetilde T}$ there exists a maximal allowed coupling of the SM particles with the top partner and hence for small masses the single production of $\widetilde T$ is suppressed. In addition, small values of $m_{\widetilde T}$ become unnatural since they require a very small $y$ together with a very large $c_2$, needed to recover the correct top mass. By minimizing the largest eigenvalue of the mass matrix with respect to $M_{\Psi}$ for fixed $y$ and $f$ one can find a minimal allowed mass of the $\widetilde T$, which is given by $$\begin{aligned} m_{\widetilde T}^{\textrm{min}}&=&m_{t}+{1 \over 2}\, y f\sin\epsilon\,,\nonumber\\ m_{\widetilde T}^{\textrm{min}}&=&m_{t}+{1 \over 2\sqrt{2}}\, y f\sin2\epsilon\,, \label{minmassttilde}\end{aligned}$$ for the ${\mathbf{5}}$ and the ${\mathbf{14}}$ respectively. The bound given in eq. (\[minmassttilde\]) will affect the exclusion plots in the following. 
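The approximate mass formulae can be cross-checked numerically. The sketch below assumes the charge-$2/3$ mass matrix of the fourplet model with ${{r}}_{\mathcal{O}}={\mathbf{5}}_{\mathbf{2/3}}$, with first-row entries $-\tfrac{c_2}{\sqrt2}yf\sin\epsilon$, $yf\cos^2\tfrac{\epsilon}{2}$, $yf\sin^2\tfrac{\epsilon}{2}$, and purely illustrative benchmark inputs; it verifies that one singular value sits exactly at $M_\Psi$, as expected for the degenerate $\Xtt$, while the other two match the approximate $m_t$ and $m_T$ at the percent level for $\xi=0.1$:

```python
import numpy as np

# Illustrative cross-check of the approximate spectrum of the fourplet model
# (assuming the mass-matrix entries stated in the lead-in; inputs are benchmarks).
f, y, c2, MPsi = 800.0, 1.0, 1.0, 1200.0   # GeV
eps = np.arcsin(np.sqrt(0.1))              # xi = sin^2(eps) = 0.1

M = np.array([
    [-c2 * y * f * np.sin(eps) / np.sqrt(2.0),
     y * f * np.cos(eps / 2.0)**2,
     y * f * np.sin(eps / 2.0)**2],
    [0.0, -MPsi, 0.0],
    [0.0, 0.0, -MPsi]])
masses = np.sort(np.linalg.svd(M, compute_uv=False))   # ascending: (m_t, m_X23, m_T)

# Approximate formulae quoted in the text.
mt_approx = c2 * y * f * MPsi * np.sin(eps) / (np.sqrt(2.0) * np.hypot(MPsi, y * f))
mT_approx = np.hypot(MPsi, y * f)
print(masses, mt_approx, mT_approx)
```

The exact degeneracy of the middle singular value with $M_\Psi$ survives for any $\epsilon$: it is the numerical counterpart of the $\primeXtt$ decoupling discussed around eq. (\[eq:mass231prime\]).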
### Trilinear Couplings {#trilinear} Other interesting qualitative aspects of our models are discovered by inspecting the explicit form of the Lagrangians in unitary gauge. These are reported in Appendix \[ferc\], and are written in the “original” field basis used to define the Lagrangians in eqs. (\[4plet\], \[eq:lagrangian2\], \[eq:lagrangian214\], \[eq:lagrangian211\]), [*[i.e.]{}*]{} before the rotation to the mass eigenstates. Appendix \[ferc\] contains, for reference, the complete Lagrangian including all the non-linear and the derivative Higgs interactions. However the couplings that are relevant to the present discussion are the trilinears involving the gauge fields and the Higgs in the fourplet models, reported in eqs. (\[dcoup\]), (\[ecoup\]), (\[mc5\]) and (\[mc14\]). The first remarkable feature of eq. (\[ecoup\]) is that the coupling of the $Z$ boson to the $B$ is completely standard: it is not modified by EWSB effects and coincides with the familiar SM expression $g_Z=g/c_w(T_L^3-Q)$. In particular it coincides with the $Z\bar b_L b_L$ coupling, involving the elementary $b_L$, because $b_L$ and $B$ have the same $SU(2)\times U(1)$ quantum numbers. The $Z$-boson coupling to charge $-1/3$ quarks is therefore proportional to the identity matrix. Consequently the $Z$ interactions remain diagonal and canonical even after rotating to the mass eigenbasis. In particular, in the charge $-1/3$ sector, there will not be a neutral current vertex of the form $B\rightarrow Z b$. This property is due to an accidental parity, $P_{LR}$, defined in Ref. [@Contino:2011np] as the exchange of the Left and the Right ${\textrm{SO}}(4)$ generators. This symmetry is an element of ${\textrm{O}}(4)$ and it acts on the top partner fourplet of eq. (\[4plet\]) and on the Higgs field $\vec{\Pi}$ through the $4\times4$ matrix $$P_{LR}^{(4)}=\begin{pmatrix} -1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&1\\ \end{pmatrix}.$$ 
\[plr\] The action of $P_{LR}$ is readily uplifted to ${\textrm{O}}(5)$ with the $5\times5$ matrix $P_{LR}^{(5)}={\textrm{diag}}(-1,-1,-1,1,1)$. We see that $P_{LR}$ is not broken by the Higgs VEV, which only appears in the last component of the $\vec\Pi$ vector. In Ref. [@zbb], it was shown that $P_{LR}$ invariance protects the $Z$ couplings from tree-level corrections at zero momentum transfer. That case applied to $b$ quarks, but the statement generalizes straightforwardly: if all the particles with a given charge have the same $P_{LR}$, then, at tree level in the weak interactions, the neutral-current vertices in that charge sector are canonical and, in particular, diagonal. The Lagrangians (\[eq:lagrangian2\]) and (\[eq:lagrangian214\]) are approximately $P_{LR}$ invariant, with the breaking coming only from the weak gauge couplings and from the weak mixing $y$ between elementary and composite states. However at tree level, where the elementary fields can be treated as external spectators, even this weak breaking is ineffective in the charge $-1/3$ and $5/3$ sectors. Notice indeed that, according to eqs. (\[4plet\],\[plr\]), under $P_{LR}$ the $B$ and $X_{5/3}$ are odd, while $T$ and $-X_{2/3}$ are interchanged. Then, inspection of the embedding in eq. (\[emb\]) shows that while we cannot assign a consistent $P_{LR}$ to $t_L$, we can instead assign negative $P_{LR}$ to $b_L$. At tree level, $t_L$ will not affect processes involving only quarks with charge $-1/3$ and $5/3$, and therefore the associated explicit breaking of $P_{LR}$ will be ineffective. Analogously, the breaking in the gauge sector is seen not to matter at tree level. For a detailed discussion we refer the reader to section 2.4 of Ref. [@Mrazek:2011iu]. This explains the result previously mentioned for the $(b, \,B)$ sector and also predicts that the coupling of the $\Xft$ must be canonical as well. This is indeed what we see in eq. (\[ecoup\]). 
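The group-theoretic ingredients of the protection argument are elementary to verify explicitly. The following minimal check (the matrices are just eq. (\[plr\]) and its ${\textrm{O}}(5)$ uplift written out numerically) confirms that $P_{LR}$ is an orthogonal involution of determinant $-1$, i.e. an element of ${\textrm{O}}(4)$ outside ${\textrm{SO}}(4)$, and that it leaves the VEV direction, the last component, invariant:

```python
import numpy as np

# P_LR in the basis of the four real Higgs components Pi_1..Pi_4 (eq. [plr]),
# and its uplift to the SO(5) vector index.
P4 = np.diag([-1.0, -1.0, -1.0, 1.0])
P5 = np.diag([-1.0, -1.0, -1.0, 1.0, 1.0])

# Orthogonal involution with det = -1: an element of O(4) but not of SO(4).
print(np.allclose(P4 @ P4.T, np.eye(4)), np.linalg.det(P4))

# The Higgs VEV lies along the last component, so P_LR is unbroken by EWSB.
vev4 = np.array([0.0, 0.0, 0.0, 1.0])
vev5 = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
print(np.allclose(P4 @ vev4, vev4), np.allclose(P5 @ vev5, vev5))
```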
The same argument applies to $\widetilde T_R$ and $t_R$ in the singlet models  and . The $Z$-vertex of those states is not modified: in particular there is no $\bar{t}_R Z\widetilde T_R$ vertex, and production/decay through the $Z$ is always controlled by the left-handed coupling. On the other hand, for $\widetilde T_L$ and $t_L$ the argument does not apply, regardless of $P_{LR}$, since $\widetilde T_L$ and $t_L$ do not have the same $SU(2)\times U(1)$ quantum numbers. Another interesting property concerns the $W$ couplings of the $B$ with the charge $2/3$ states. We see in eq. (\[ecoup\]) that the linear combination of the $T$ and the $\Xtt$ that couples with the $B$ is exactly orthogonal to the physical (mass-eigenstate) $\Xtt'$ field defined in eq. (\[XT\]) for model . Only the $T'$ couples to $W\,B$, leading after the second rotation to transitions among the physical $t$, $T$ and $b$, $B$. Such couplings are instead absent for the physical $\Xtt$, which therefore cannot decay to $Wb$. This feature is not, as far as we can tell, the result of a symmetry, but rather an accidental feature of model . In model  instead the coupling is allowed because the physical $\Xtt$ (see the mass matrix in eq. (\[eq:mass232\])) is no longer orthogonal to the combination that couples to $W\,B$. Nevertheless the $\Xtt$-$B$ coupling is suppressed by $\langle h\rangle^2/f^2$ and therefore the decay $\Xtt\rightarrow Wb$, though allowed in principle, is phenomenologically irrelevant, as we will discuss in the following section. A final comment concerns the couplings of the physical Higgs field $\rho$. The couplings from the strong sector are only due to the $d_\mu$ term in eq. (\[dcoup\]) and are purely derivative. Therefore, because of charge conservation, they cannot involve the $B$ partner. Higgs couplings in the $-1/3$ charge sector could only emerge from the elementary-composite mixings. However they are accidentally absent in model , as shown by eq. (\[mc5\]). 
Therefore the decay of the $B$ to the Higgs is absent in this model. In model , on the contrary, this decay is allowed through the vertex in eq. (\[mc14\]). Top Partners Phenomenology {#sec:TPP} ========================== Let us now turn to discuss the main production mechanisms and decay channels of the top partners in the models under consideration. We will first of all, in sect. \[pd\], describe how the cross-sections of the production processes and the partial decay widths can be conveniently parametrized analytically in terms of a few universal functions, extracted from the Monte Carlo integration. This method, supplemented with tree-level event simulations to compute the acceptances associated with the specific cuts of each experimental search, will allow us to explore efficiently the multi-dimensional parameter space of our model, avoiding a time-consuming scan. Not all the production and decay processes that could be computed with this method are equally sizable, however. In sect. \[gc\] we will present an estimate of the various processes based on the use of the Goldstone boson Equivalence Theorem [@equivalence]; this will allow us to classify (in sect. \[mrc\]) the channels which are most promising for the search of the top partners at the LHC. Production and Decay {#pd} -------------------- Given that the partners are colored, they can be produced in pairs through the QCD interactions. The pair production cross-section is universal for all the partners and it can be parametrized by a single function $\sigma_{\textrm{pair}}(m_X)$, which depends only on the partner’s mass $m_X$ and for which we have an analytical interpolating formula. We have constructed $\sigma_{\textrm{pair}}$ by interpolation using the HATHOR code [@hathor], which incorporates perturbative QCD corrections up to NNLO. The values of the cross-section used in the fit are reported in Table \[tab:xsecpair\] for the LHC at $7$ and $8$ TeV center-of-mass energy. 
In this and all the other simulations we adopted the MSTW2008 set of parton distribution functions [@mstw].

  ---------------------- ------------------------------ ------------------------------
   $M\, [\textrm{GeV}]$        $\sqrt{s}=7$ TeV               $\sqrt{s}=8$ TeV
           400            (0.920) $1.41 \times 10^3$     (1.50) $2.30 \times 10^3$
           500                   (218) 330                      (378) 570
           600                  (61.0) 92.3                     (113) 170
           700                  (19.1) 29.0                    (37.9) 56.9
           800                  (6.47) 9.88                    (13.8) 20.8
           900                  (2.30) 3.55                    (5.33) 8.07
          1000                  (0.849) 1.33                   (2.14) 3.27
          1100                 (0.319) 0.507                  (0.888) 1.37
          1200                 (0.122) 0.196                  (0.375) 0.585
          1300            (4.62) $7.60 \times 10^{-2}$        (0.160) 0.253
  ---------------------- ------------------------------ ------------------------------

  : []{data-label="tab:xsecpair"}

The other relevant process is the single production of the top partners in association with either a top or a bottom quark. This originates, as depicted in Figure \[spd\], from a virtual EW boson $V=\{W^\pm,\,Z\}$ emitted from a quark line, which interacts with a gluon to produce the top partner and one third-family anti-quark. The possible relevance of single production was first pointed out in Ref. [@Willenbrock]. The relevant couplings have the form $$g_{Xt_R}\,\overline{X}_R\,\slashed{V}\, t_R+g_{Xt_L}\,\overline{X}_L\,\slashed{V}\, t_L+g_{Xb_L}\,\overline{X}_L\,\slashed{V}\, b_L\,, \[spc\]$$ where $X=\{T,B,\Xtt,\Xft,\Tt\}$ denotes generically any of the top partners. At each vertex the EW boson $V$ is understood to be the one of the appropriate electric charge. Notice that there is no vertex with the $b_R$ because the latter state is completely decoupled in our model; we expect this coupling to be negligible even in more complete constructions. It is important to point out that the couplings $g_{Xt_R}$, $g_{Xt_L}$ and $g_{Xb_L}$ can be computed analytically in our models. They arise from the interactions reported in Appendix B after performing the rotation to the physical basis of mass eigenstates. Since the rotation matrices can be expressed in closed form, the explicit formulae for the couplings are straightforwardly derived. 
The result is rather involved and for this reason it will not be reported here; however it is easily implemented in a [*[Mathematica]{}*]{} package. The single production cross-sections are quadratic polynomials in the couplings, with coefficients that encapsulate the effect of the QCD interactions, the integration over the phase-space and the convolution with the parton distribution functions. These coefficients depend uniquely on the mass of the partner and can be computed by Monte Carlo integration. Once the latter are known, we obtain semi-analytical formulae for the cross-sections. The production in association with the $\overline{b}$ is simply proportional to $g_{Xb_L}^2$, while the one with the $\overline{t}$ would be, a priori, the sum of three terms proportional to $g_{Xt_L}^2$, $g_{Xt_R}^2$ and $g_{Xt_L}\cdot g_{Xt_R}$, which account, respectively, for the effect of the left-handed coupling, of the right-handed one and of the interference among the two. However in the limit of a massless top quark, $m_t\ll m_X$, the processes mediated by the left-handed and by the right-handed couplings become physically distinguishable because the anti-top produced in association with $X$ will have opposite chirality in the two cases. Therefore in the limit $m_t\rightarrow0$ the interference term can be neglected. Moreover, the coefficients of the ${g_{Xt_L}}^2$ and ${g_{Xt_R}}^2$ terms will be equal because the QCD interactions are invariant under parity. Thus the cross-sections will be very simply parametrized as $$\sigma_{X\overline{t}}=\left(g_{Xt_L}^2+g_{Xt_R}^2\right)\sigma_{Vt}(m_X)\,,\qquad \sigma_{X\overline{b}}=g_{Xb_L}^2\,\sigma_{Vb}(m_X)\,, \[prod1\]$$ in terms of a few functions $\sigma_{Vt}(m_X)$ and $\sigma_{Vb}(m_X)$. The charge-conjugate processes, in which either $\overline{X}\,t$ or $\overline{X}\,b$ are produced, can be parametrized in terms of a similar set of coefficient functions. The only difference is the charge of the virtual $V$ emitted from the light quark line. 
We thus have $$\sigma_{\overline{X}t}=\left(g_{Xt_L}^2+g_{Xt_R}^2\right)\sigma_{V^\dagger t}(m_X)\,,\qquad \sigma_{\overline{X}b}=g_{Xb_L}^2\,\sigma_{V^\dagger b}(m_X)\,, \[prod2\]$$ where $V^\dagger$ denotes the charge conjugate of the vector boson $V$. A similar method for computing the cross sections of the $W$–$b$ fusion type of single production was employed in Ref. [@Godfrey], which adapted the fitting functions of Ref. [@Berger] to non-SM couplings.

  ---------------------- ------------------ ------------------ ------------------ ------------------
                             $t\,B$ production                  $b\,\widetilde{T}$ production
   $M\, [\textrm{GeV}]$   $\sqrt{s}=7$ TeV   $\sqrt{s}=8$ TeV   $\sqrt{s}=7$ TeV   $\sqrt{s}=8$ TeV
           400             (2.70) 3.10        (4.32) 4.92       (32.49) 43.47      (47.83) 61.43
           500             (1.49) 1.80        (2.50) 2.97       (15.85) 20.44      (24.10) 33.10
           600             (0.858) 1.06       (1.49) 1.84        (8.53) 12.89      (13.55) 18.80
           700             (0.511) 0.637      (0.928) 1.15       (4.60) 6.70        (7.92) 11.34
           800             (0.313) 0.399      (0.590) 0.745      (2.82) 4.01        (4.58) 7.22
           900             (0.194) 0.250      (0.377) 0.497      (1.60) 2.50        (2.89) 4.48
          1000             (0.121) 0.160      (0.246) 0.325      (0.956) 1.636      (1.81) 2.83
          1100             (0.075) 0.103      (0.164) 0.215      (0.604) 0.980      (1.181) 1.72
          1200             (0.048) 0.066      (0.107) 0.146      (0.377) 0.586      (0.726) 1.23
          1300             (0.031) 0.043      (0.072) 0.098      (0.234) 0.386      (0.463) 0.731
  ---------------------- ------------------ ------------------ ------------------ ------------------

  : []{data-label="tab:xsecsingle"}

One might question the validity of the zero-top-mass approximation which allowed us to neglect the interference and to parametrize the cross-section as in eq.s (\[prod1\]) and (\[prod2\]). We might indeed generically expect relatively large corrections, of order $m_t/m_X$. However the corrections are much smaller in our case: we have checked that they are around $1\%$ in most of the parameter space of our models. The reason is that the interference is further reduced because the left- and right-handed couplings are never comparable: one of the two always dominates over the other. This enhances the leading term, $g_{Xt_L}^2$ or $g_{Xt_R}^2$, in comparison with the interference $g_{Xt_L}\cdot g_{Xt_R}$. 
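The suppression of the interference for hierarchical couplings can be made quantitative: relative to the dominant quadratic terms, the interference enters at most as $2g_{Xt_L}g_{Xt_R}/(g_{Xt_L}^2+g_{Xt_R}^2)$, on top of the $m_t/m_X$ chirality suppression discussed above. A minimal numerical illustration (the coupling values are placeholders, not model predictions):

```python
def interference_fraction(gL, gR):
    """Bound on |interference| relative to the quadratic terms gL^2 + gR^2."""
    return 2.0 * gL * gR / (gL**2 + gR**2)

# As the hierarchy between the chiral couplings grows, the bound shrinks:
for gL, gR in [(1.0, 1.0), (1.0, 0.3), (1.0, 0.1), (1.0, 0.03)]:
    print(gL, gR, round(interference_fraction(gL, gR), 4))
```

With a factor-of-ten hierarchy the interference is already below $20\%$ before the additional chirality suppression, consistent with the percent-level corrections quoted in the text.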
Moreover this implies that eq.s (\[prod1\]) and (\[prod2\]) could be further simplified: in the sum it would be enough to retain the term which is dominant in each case. We will show in the following section that the dominant coupling is $g_{Xt_R}$ in the case of the fourplet (models  and ) and $g_{Xt_L}$ in the case of the singlet (models  and ). In total, all the single-production processes are parameterized in terms of $5$ universal coefficient functions $\sigma_{W^\pm t}$, $\sigma_{Z t}$ and $\sigma_{W^\pm b}$. Notice that a possible $\sigma_{Zb}$ vanishes because flavor-changing neutral couplings are forbidden in the charge $-1/3$ sector, as explained in the previous section. As such, the single production of the $B$ in association with a bottom quark does not take place. We have computed the coefficient functions $\sigma_{W^\pm t}$ and $\sigma_{W^\pm b}$, including the QCD corrections up to NLO, using the MCFM code [@mcfm]. To illustrate the results, we report in Table \[tab:xsecsingle\] the single production cross-section with the coupling set to unity, for different values of the heavy fermion mass, and for the $7$ and $8$ TeV LHC. The values in the table correspond to the sum of the cross sections for producing the heavy fermion and its antiparticle; on the left side we show the results for $t\,B$ production, on the right one we consider the case of $b\,\widetilde{T}$. In our parametrization of eq.s (\[prod1\]) and (\[prod2\]) the cross-sections in the table correspond respectively to $\sigma_{W^+ t}+\sigma_{W^- t}$ and to $\sigma_{W^+ b}+\sigma_{W^- b}$. We see that the production with the $b$ is one order of magnitude larger than the one with the $t$; this is not surprising because the $t$ production has a higher kinematical threshold and is therefore suppressed by the steep fall of the partonic luminosities. 
The values in the table do not yet correspond to the physical single-production cross-sections; they must still be multiplied by the appropriate couplings. The last coefficient function $\sigma_{Z t}$ cannot be computed in MCFM; to extract it we therefore used a LO cross section computed with [@madgraph], using the model files produced with the package of Ref. [@feynrules]. To account for QCD corrections in this case we used the k-factors computed with MCFM for the $t\, B$ production process. In order to quantify the importance of single production, we plot in figure \[fig:prod\] the cross-sections for the various production mechanisms in our models as a function of the mass of the partners and for a typical choice of parameters. We see that the single production rate can be very sizeable and that it dominates over the QCD pair production already at moderately high mass. This is again due to the more favorable, lower kinematical threshold, as carefully discussed in Ref. [@mrazecwulzer]. ![[]{data-label="fig:prod"}](xsectionscomparison2.pdf){width="50.00000%"} Let us finally discuss the decays of the top partners. The main channels are two-body decays to vector bosons and third-family quarks, mediated by the couplings in eq. (\[spc\]). For the partners of charge $2/3$ and $-1/3$ the decay to the Higgs boson is also allowed, and in some cases competitive with the others. This originates from the interactions of the partners with the Higgs reported in Appendix B, after the rotation to the physical basis of mass eigenstates. The relevant couplings can be computed analytically, similarly to the $g_{t_{L,R}X}$ and $g_{b_{L}X}$. Thus we easily obtain analytical tree-level expressions for the partial widths and eventually for the branching fractions. In principle cascade decays $X\rightarrow X' V$ or $X'H$ are also allowed; however these are never sizable in our model, as we will discuss in sect. \[mrc\]. 
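The semi-analytical machinery described in this section amounts to tabulating each coefficient function in the partner mass, interpolating, and rescaling by the model couplings as in eq. (\[prod1\]). A minimal sketch, using the unit-coupling $8$ TeV values for $t\,B$ production from Table \[tab:xsecsingle\] (log-log linear interpolation; units as in the table, and the $650$ GeV point and the coupling $g=0.5$ below are illustrative only):

```python
import numpy as np

# Unit-coupling coefficient function sigma_{W+t} + sigma_{W-t} at 8 TeV,
# read off the t B columns of Table [tab:xsecsingle].
m_tab = np.array([400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300.])
sigma_tab = np.array([4.92, 2.97, 1.84, 1.15, 0.745, 0.497,
                      0.325, 0.215, 0.146, 0.098])

def sigma_Wt(m):
    """Coefficient function, interpolated linearly in log-log space."""
    return np.exp(np.interp(np.log(m), np.log(m_tab), np.log(sigma_tab)))

def sigma_single(m, g):
    """Physical single-production rate: quadratic in the trilinear coupling."""
    return g**2 * sigma_Wt(m)

# e.g. a 650 GeV partner with coupling g = 0.5 (illustrative numbers):
print(sigma_single(650.0, 0.5))
```

The same two-line recipe, with the appropriate table and coupling, reproduces any of the single-production rates of eq.s (\[prod1\]) and (\[prod2\]) without rerunning the Monte Carlo.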
Couplings to Goldstone Bosons {#gc} ----------------------------- Let us now turn to classify the relative importance of the various production mechanisms and decay channels described in the previous section. Since the partners are much heavier than the EW bosons, $m_X\gg m_W$, their dynamics is conveniently studied using the Equivalence Theorem, which applies at energies $E\gg m_W$. To this end, we momentarily abandon the unitary gauge and describe our model in the $R_\xi$-gauge, where the Goldstone degrees of freedom associated with the unphysical Higgs components are reintroduced. The Higgs field is now parameterized as [^9] $$H=\begin{pmatrix} h_u\\ h_d \end{pmatrix}=\begin{pmatrix} \phi^+\\ \frac{1}{\sqrt{2}}\left(\langle h\rangle+\rho+i\phi^0\right) \end{pmatrix}. \[Hdoublet\]$$ The Equivalence Theorem states that, at high energies, the longitudinal components of the $W^\pm$ and of the $Z$ bosons are described, respectively, by the charged and the neutral Goldstone fields $\phi^\pm$ and $\phi^0$. The transverse polarizations are instead well described by vector fields $W^\pm_\mu$ and $Z_\mu$, in the absence of symmetry breaking. However the transverse components give a negligible contribution to our processes, for two reasons. First, their interactions emerge from the SM covariant derivatives and are therefore proportional to the EW couplings $g$ or $g'$. We will see below that the couplings of the longitudinal components, [*[i.e.]{}*]{} of the Goldstones, are typically larger than that. Second, the transverse components cannot mediate, before EWSB, any transition between particles in different multiplets of the gauge group. Indeed the couplings of the $W_\mu^\pm$ and $Z_\mu$ fields are completely fixed by gauge invariance and are therefore diagonal in flavor space. Only after EWSB do states from different multiplets mix and flavor-changing couplings like those in eq. (\[spc\]) arise. Therefore these effects must be suppressed by a power of $\epsilon=\langle{h}\rangle/f$. 
This means that the transverse gauge bosons basically do not participate in the production and decay of the top partners: the decay will mostly be to longitudinally polarized vectors, while the virtual $V$ exchanged in the single production diagram will be dominantly longitudinally polarized. For our purposes, we can thus simply ignore the vector fields and concentrate on the Goldstones. In the models with the fourplet,  (\[eq:lagrangian2\]) and  (\[eq:lagrangian214\]), the first source of Goldstone couplings is the term $ i\, c_1 \left(\bar{\Psi}_R\right)_i \,\slashed{d}^i \, t_R$. One would naively expect this interaction to be the dominant one because it originates entirely from the strong sector without paying any insertion of the elementary-composite coupling $y$. Before EWSB the couplings are $$i\,\frac{\sqrt{2}\,c_1}{f}\left(\overline{\Psi}_R\right)_i\slashed{\partial}\,\Pi^i\, t_R+{\textrm{h.c.}} \[gc0\]$$ It is not difficult to check that the interactions above respect not only the SM but also the full ${\textrm{SO}}(4)$ symmetry of the strong sector. Eq. (\[gc0\]) contains derivative operators and is therefore not yet suited to reading off the actual strength of the interactions. However it can be simplified, provided we work at tree-level order, by making use of the equations of motion of the fermion fields. [^10] After integrating by parts and neglecting the top mass, we find $$\frac{\sqrt{2}\,c_1}{f}\,M_\Psi\left(\overline{\Psi}_L\right)_i\Pi^i\, t_R+{\textrm{h.c.}}\,, \[gc1\]$$ showing that the strength of the interaction is controlled by the masses of the heavy fermions. Neglecting the elementary-composite coupling $y$, the masses all equal $M_\Psi$, and the coupling, modulo an $O(1)$ coefficient, is given by $ g_\Psi=M_\Psi/f$, as anticipated in the previous section. Once again we remark that this feature follows from the Goldstone boson nature of the Higgs. Indeed if the Higgs were a generic resonance, not a Goldstone, it could more plausibly have a Yukawa vertex $g_*\overline{\Psi}^i\Pi_it_R$ with strength dictated by the strong sector coupling $g_*$. Those of eq. 
(\[gc1\]) are the complete Goldstone interactions in the limit of negligible elementary-composite coupling $y$. However we cannot rely on this approximation because we will often be interested in relatively light top partners, with $g_\Psi\leq y\simeq y_t$. It is straightforward to incorporate the effect of $y$, due to the mixing terms in eq.s (\[eq:lagrangian2\]) and (\[eq:lagrangian214\]) for models  and , respectively. After diagonalizing the mass matrix, again neglecting EWSB, the Goldstone interactions for both models become $$\begin{tabular}{ | c | c | } \cline{2-2} \multicolumn{1}{c|}{} & \fA, \fB \\ \hline $ \phi^+{\barXft}_L \,t_R$ & $ \sqrt{2} c_1 g_{\psi}$ \\ $(\rho+i \phi^{0}){\barXtt}_L \,t_R $ & $c_1 g_{\psi}$ \\ $(\rho-i \phi^{0})\overline{T}_L \,t_R $ & $-c_1 \sqrt{y^2 + g_{\psi}^2} + {c_2 y^2 \over \sqrt 2 \sqrt{y^2 + g_{\psi}^2}}$ \\ $ \phi^- \overline{B}_L \,t_R$ & $c_1 \sqrt{2} \sqrt{y^2 + g_{\psi}^2} - {c_2 y^2 \over \sqrt{y^2 + g_{\psi}^2}}$ \\ \hline \end{tabular} \label{coup4}$$ which reduces to eq. (\[gc1\]) for $y\ll g_\Psi$. Notice that eq. (\[coup4\]) only contains couplings with the right-handed top quark. This is not surprising because the top partners live in SM doublets and therefore their only allowed Yukawa-like interactions are with the $t_R$ singlet. The couplings with the $q_L$ doublet emerge only after EWSB and are suppressed by one power of $\epsilon$. Therefore they typically do not play a major role in the phenomenology. Obviously the SM symmetry is respected in eq. (\[coup4\]); this explains the $\sqrt{2}$ suppression of the $\Xtt$ and $T$ couplings compared with those of the $\Xft$ and of the $B$. The situation is different in the models with the singlet,  and  (\[eq:lagrangian211\]). In that case there is no direct contribution from the strong sector to the Goldstone couplings and all the interactions are mediated by $y$. 
The couplings are $$\begin{tabular}{ | c | c| } \cline{2-2} \multicolumn{1}{c|}{} & \oA, \oB \\ \hline $(\rho+i \phi^{0})\overline{\widetilde{T}}_R \,t_L $ & $ { y \over \sqrt{2}} $ \\ $ \phi^+ \overline{\widetilde{T}}_R \,b_L$ & $y$ \\ \hline \end{tabular} \label{coup1}$$ The top partner $\Tt$ is now an SM singlet; therefore the interactions allowed before EWSB are those with the left-handed doublet. The $\sqrt{2}$ suppression of the coupling with the top is due, once again, to the SM symmetry. One important implication of eq. (\[coup1\]) is that the $\Tt$, contrary to the partners in the fourplet, can be copiously produced singly in association with a bottom quark. We will discuss this and other features of our models in the following section. The Most Relevant Channels {#mrc} -------------------------- We discuss here the most relevant production and decay processes of each top partner, identifying the best channels where these particles should be looked for at the LHC. Obviously one would need an analysis of the backgrounds to design concrete experimental searches for these promising channels and to establish their practical observability. We leave this to future work and limit ourselves to studying, in section 4, the constraints on the top partners that can be inferred from presently available LHC searches for similar particles. Let us first consider the models  and , and analyze separately each of the new fermions. - $\Xft$ $\Xft$, together with $\Xtt$, is the lightest top partner; it is therefore the easiest to produce. Production can occur in pairs, via QCD interactions, or singly in association with a top quark through its coupling with a top and a $W^+$. The coupling, see eq. (\[coup4\]), is controlled by $g_\psi=m_\Xft/f$, which grows with mass at fixed $f$. We thus expect single production to play an important role at high mass, where it is enhanced with respect to pair production both by kinematics and by a larger coupling (at fixed $f$). 
This is confirmed, for a particular but typical choice of parameters, by the plot in Figure \[fig:prod\]. Since it is the lightest partner, $\Xft$ decays to $W^+t$ with unit branching ratio. The relevant channel for its observation is $\Xft\rightarrow tW$ in association with a second top quark of opposite charge. The latter is present in both single and pair production processes. This results in clean signals consisting of either same-sign dileptons or trileptons, plus jets. In the following section we will recast the LHC searches for these signals and obtain a limit on $\Xft$ production. In addition to two top quarks and a $W$, pair production also leads to a second hard $W$, while single production (see Figure \[spd\]) features a light-quark jet associated with the virtual $W$ emission. Notice that the light-quark jet in single production is typically forward, with $p_T\lesssim m_W$, because the emission of the virtual $W$ is enhanced in this kinematical region [@mrazecwulzer]. In practice this jet has the same features as the “tag jets” in VBF Higgs production and in $WW$ scattering. The events are thus characterized by a forward isolated jet in one of the hemispheres. The relevant kinematical distributions are shown in Figure \[fig:forwardjet\] for the production of a $600$ GeV partner. As in VBF or $WW$ scattering, one might hope to employ the forward jet as a tag to discriminate single production from the background. Ref. [@mrazecwulzer] argued that the main source of forward jets in the background, QCD initial-state radiation, tends to produce more central and less energetic jets; however, further investigation is needed. Present LHC searches are designed for pair rather than for single production. Because of the $\eta^{jet}$ and $p_T^{jet}$ cuts that they adopt, they are thus only weakly sensitive to forward jets. We believe that it would be worth exploring the possible relevance of forward jets in designing the searches for top partners. 
![[]{data-label="fig:forwardjet"}](pteta.pdf "fig:"){width="49.00000%"} ![[]{data-label="fig:forwardjet"}](e.pdf "fig:"){width="49.00000%"}\ - $\Xtt$ $\Xtt$ is also light and therefore easier to produce than the heavier partners. At leading order, as eq. (\[coup4\]) shows, it couples with strength $c_1 g_{\psi}$ to the Higgs and $Z$ bosons. The dominant decay channels are thus $\Xtt\rightarrow Zt$ and $\Xtt\rightarrow ht$, with ${{\rm \,BR}}(\Xtt \to Z \, t) \approx {{\rm \,BR}}(\Xtt \to h\, t) \approx 0.5$. In model  the coupling to $Wb$ vanishes exactly, while in model  the coupling is non-zero but suppressed by $\epsilon\sim v/f$. The decay $\Xtt\to Wb$ is therefore typically sub-dominant and can become relevant only in a corner of parameter space characterized by low mass, $y \epsilon=O(1)$ and $c_1<1$. Given that $\Xtt\to ht$ is probably difficult to detect (see however Ref. [@ht] for recent analyses), the search for $\Xtt$ must rely on the decay mode $\Xtt\to Zt$, with the $Z$ further decaying to charged leptons. An extra suppression from the small branching ratio must then be paid. This disfavors the $\Xtt$ signal compared to that of the $\Xft$, for which the branching ratio needed to reach the leptonic final state is close to one. $\Xtt$ is produced in pairs via QCD interactions and singly via the $Z\Xtt t$ coupling. In the latter case a top quark is produced in association. Both production modes lead to a resonant $\Xtt\to Zt$ plus one top of opposite charge. In the case of single production there will be a forward jet, as previously discussed for the $\Xft$. In the case of pair production there will be either a Higgs or a $Z$ from the other partner. Another possible single production mode, in association with a $b$ quark rather than a $t$, is strictly forbidden in model  and is suppressed by the small coupling to $Wb$ in model . However single production in association with a $b$ is kinematically favored over that with a $t$. 
Kinematics then compensates for the suppressed coupling and makes the two rates typically comparable in model , as shown in Fig. \[fig:compareWBZT\]. By comparing with Fig. \[fig:prod\], we see that, in the case of the $\Xtt$, single production in association with a $t$ is suppressed compared to the case of the $\Xft$[^11]. This is mainly due to the $\sqrt{2}$ factor in charged-current versus neutral-current vertices, see eq. (\[coup4\]). Moreover, the difference between the $W$ and $Z$ couplings, taking into account the $u$- and $d$-type valence-quark content of the proton, further enhances the virtual $W$ emission rate by a factor $\sim 1.2$ with respect to the $Z$ rate. Combining this enhancement with the factor of $2$ in the squared coupling explains the relative sizes of the $\Xft$ and $\Xtt$ production cross sections. ![[]{data-label="fig:compareWBZT"}](compareWBZT "fig:"){width="50.00000%"}\ - $T$ $T$ is systematically heavier than the $\Xtt$, but its phenomenology is very similar. It will therefore merely give a subdominant contribution to the $\Xtt$ channels described in the previous paragraph. Indeed, by eq. (\[coup4\]), $T$ also couples at leading order with equal strength to the Higgs and to the $Z$, leading to ${{\rm \,BR}}(T \to Z \, t) \approx {{\rm \,BR}}(T \to h\, t) \approx 0.5$. The coupling to $Wb$ arises at order $\epsilon$, and it can be relevant, as explained for the $\Xtt$ above, thanks to the favorable kinematics of associated production with a $b$. One may in principle consider chain decays seeded by $T\to \Xtt Z$, $T\to \Xtt h$ or $T\to \Xft W$, since these channels are normally kinematically open. However the corresponding couplings are generically smaller than those controlling the direct decays to $t_R$. This is a straightforward consequence of the equivalence theorem and of $SU(2)$ selection rules. 
The decays to $t_R$ involve the longitudinally polarized vectors and $h$, living in the linear Higgs doublet $H$: given that the top partners are $SU(2)$ doublets and $t_R$ is a singlet, the coupling respects $SU(2)$ and so arises at zeroth order in $\epsilon$. On the other hand, transitions among top partners living in different $SU(2)$ doublets obviously require an extra insertion of the Higgs vacuum expectation value. The resulting amplitudes are therefore suppressed by one power of $\epsilon$ and the corresponding branching ratios are negligible. - $B$ $B$ is even heavier than the $T$, though the mass difference, $m_B-m_T \sim {y^2 v^2 / 4 m_B}$ (see eq. (\[mdtb\])), is typically rather small. The most relevant decay mode is $B\to W t$, mediated by the coupling $\sim c_1 g_\Psi$ in eq. (\[coup4\]). As in the case of the $T$, $SU(2)$ selection rules suppress the decay to $W\Xtt$. Moreover, the decay $B\to WT$, when kinematically allowed, proceeds either via a transverse $W$, with SM gauge coupling $g<g_\Psi$, or via a longitudinal $W$, with an effective coupling suppressed by $\epsilon$. Therefore this decay is also significantly suppressed. The decay $B\to Zb$ is forbidden because, as we explained in sect. \[trilinear\], flavor-changing neutral couplings are absent in the charge $-1/3$ sector. The $B\to hb$ channel is forbidden in model  and suppressed by $\epsilon$ in model . In the latter model it can play a role, but only in a corner of the parameter space. Single production, since the $ZBb$ vertex is absent, is always accompanied by a top quark. The signature of single $B$ production is therefore a resonant $B\to W t$ plus an opposite-charge top, the same final state as in single $\Xft$ production. In the end, $B$ production, single and pair, has the same signatures as $\Xft$ production: same-sign dileptons or trileptons plus jets. Let us now switch to models  and , where the only new heavy fermion is the $\tilde T$. 
- $\widetilde T$ $\widetilde T$ has a very rich phenomenology because it can be copiously produced through all three mechanisms described above. We see in eq. (\[coup1\]) that $\widetilde T$ couples to both $Zt$ and $Wb$, with a coupling of order $y\sim y_t/c_2$. It can therefore be singly produced either in association with a top or with a bottom quark. Notice that in the range $c_2\sim 1$ suggested by power counting, the trilinear coupling is of order $y_t$, which is expected to be generically smaller than the strong-sector coupling $g_\psi$ that controls the single production of top partners in a $(2,2)$. The bands in the left panel of Fig. \[fig:ttildesp2\] indicate the single production cross section[^12] for $0.5<c_2<2$: comparing the blue band to the corresponding case of $\Xtt t$ and $\Xft t$ production in models  and , one notices, as expected, a typically smaller rate for models  and . While $y\sim y_t$ ($c_2\sim 1$) is favored by naive power counting, one can entertain the possibility of choosing $y> y_t$ ($c_2<1$), for which the single production rate can be sizeable. However, for given values of $m_{\widetilde T}$ and $f$, there is a mathematical upper bound $y_{max}$ on $y$ determined by eq. (\[minmassttilde\]). The right plot in Fig. \[fig:ttildesp2\] shows that $y_{max}$ grows with $m_{\widetilde T}$ and that it is comparable in model  and model . In the left panel of Fig. \[fig:ttildesp2\], the green line and the blue line show, for $\tilde T b$ and $\tilde T t$ respectively, the maximal allowed cross section, which basically coincides with the choice $y=y_{max}$ [^13]. For such maximal values the single production cross section can be quite sizeable. Single production of a $\tilde T$-like partner was considered in the context of Little Higgs models in Refs. [@Perelstein; @Han], and more recently for composite Higgs models in Ref. 
[@Vignaroli], where the possibility of using a forward-jet tag as a handle for this kind of search was also considered. The total cross section in this channel is favored over single production with a $t$ both by kinematics and by the $\sqrt{2}$ factor in charged-current transitions. Indeed, as shown in Fig. \[fig:ttildesp2\], associated $\tilde T b$ production dominates even over pair production in the whole relevant mass range, while single production with the $t$ is rather small. The role of kinematics is especially important in this result, as the large $\tilde T b$ cross section is dominated by the emission of a soft $b$, with energy in the tens of GeV, a regime obviously unattainable in the similar process with a $t$. Indeed, performing a hard cut of order $m_t$ on the $p_T$ of the $b$ would make the $\tilde T b$ cross section comparable to that for $\tilde T t$. Unfortunately, the current LHC searches do not exploit the large inclusive rate of production with the $b$ quark because they are designed to detect pair production. We will show in the following section that the acceptance for single production, with the cuts presently adopted, is extremely low. We believe there is space for substantial improvement in the search strategy.

![ []{data-label="fig:ttildesp2"}](TZtXScomparison_AB "fig:"){width="48.00000%"} ![ []{data-label="fig:ttildesp2"}](TZtXScomparison_y "fig:"){width="46.20000%"}\

Also concerning decays, all the possible channels are important in the case of $\widetilde T$. It decays to $W b$, $Z t$ and $h t$ at zeroth order in $\epsilon$, with a fixed ratio of couplings. By looking at eq. (\[coup1\]) we obtain ${{\rm \,BR}}(\widetilde T \to Z\,t) \approx {{\rm \,BR}}(\widetilde T \to h\, t) \approx {1 \over 2}{{\rm \,BR}}(\widetilde T \to W\, b)\approx 0.25$. Actually, the branching fraction to $Wb$ is even further enhanced by the larger phase space, though this is only relevant for low values of $m_{\widetilde T}$.
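This pattern can be made explicit with the Goldstone equivalence theorem: for $m_{\widetilde T}\gg m_{W,Z,h}$ the $\widetilde T$ decays into the would-be Goldstones and the physical Higgs with a common strength. As a sketch, assuming a single effective coupling $\lambda$ (our notation, not used elsewhere in the text), the widths scale as

$$\Gamma(\widetilde T\to W^+ b)\,\approx\,\frac{\lambda^2\, m_{\widetilde T}}{32\pi}\,,\qquad
\Gamma(\widetilde T\to Z\, t)\,\approx\,\Gamma(\widetilde T\to h\, t)\,\approx\,\frac{\lambda^2\, m_{\widetilde T}}{64\pi}\,,$$

reproducing the $2:1:1$ ratio, i.e. ${{\rm \,BR}}\approx(0.5,\,0.25,\,0.25)$, up to phase-space corrections.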
Given the larger branching fraction, resonant $Wb$ production would ideally be the best channel in which to detect the $\widetilde T$. However, one must design a search strategy that rejects the background while retaining the signal; in particular, one should retain as much as possible of the contribution from the large single production in association with the $b$. A possibly cleaner decay channel could then be $\widetilde T \to Z\,t$ with a leptonic $Z$.

LHC Bounds {#sect:bounds}
==========

In this section we derive bounds on our models using the presently available LHC searches. Given that the top partners are heavy fermions coupled to the top and bottom, we focus on the experimental searches for $4^{th}$-family quarks, which present a somewhat similar phenomenology [^14]. We will make use of the following searches for $4^{th}$-family quarks performed by CMS: 1) $b'\to Wt$ with same-sign dileptons or trileptons in the final state [@cmsBWt]; 2) $t'\to Zt$ with trileptons in the final state [@cmsTZt]; 3) $t'\to Wb$ with two leptons in the final state [@cmsTWb]. In what follows we quantify the impact of these three searches on our models by adopting the following strategy. We compute separately the production cross-sections of the top partners, the branching fractions into the relevant channels, and the efficiencies associated with the selection cuts performed in each experimental search. The cross-sections and branching fractions at each point of the parameter space are encapsulated in semi-analytical formulae as described in section \[sec:TPP\]. The efficiencies must instead be obtained numerically through a Monte Carlo simulation. Not having at our disposal a reliable tool to estimate the response of the detector, a fully realistic simulation of the hadronic final states would not be useful. We therefore decided not to include showering and hadronization effects in our analysis, and we stopped at the parton level.
We applied the reconstruction (e.g., of $b$-jets and leptons) and selection cuts on the partonic events in order to get an estimate of the kinematical acceptance. Moreover, we included the efficiencies for $b$-tagging, lepton reconstruction and trigger through universal reweighting factors extracted from the experimental papers. In order to account for the possible merging of soft or collinear partons into a single jet, we applied the anti-$k_T$ clustering algorithm [@antikt] for jet reconstruction, with distance parameter $\Delta R=0.5$.

Search for $b^{\prime}\to W\,t$
-------------------------------

This search applies only to models , . The analysis of Ref. [@cmsBWt] aims at studying a $4^{th}$-family $b'$ that is pair-produced by QCD and is assumed to decay to $Wt$ with unit branching fraction. The search is performed in the final state with at least one tagged $b$-jet and either same-sign dileptons or trileptons ($e$ or $\mu$). Three or two additional jets are required, respectively, in the dilepton and trilepton channels. Apart from the usual isolation, hardness and centrality cuts on the jets and leptons, a hard cut is required on the scalar sum of the transverse momenta of the reconstructed objects and the missing $p_T$. Ref. [@cmsBWt] reports the observed number of events in the two categories and the expected SM background. From these elements, given the signal efficiency in the two channels, one puts a bound on the pair production cross-section and eventually on the mass of the $b'$. With $4.9\,fb^{-1}$ of data at $7$ TeV, the bound on the $b'$ mass is $611$ GeV at $95\%$ confidence level. Below we quantify the impact of this search on the parameter space of our models. The top partners contributing to the signal are $\Xft$ and $B$ because, as shown in the previous section, they lead to two tops of opposite charge and to at least one extra $W$ in both pair and single production.
To derive the bound we must compute, for each partner and production mode, the signal efficiency in the dilepton and trilepton channels as a function of the partner’s mass. The total production cross-sections are computed semi-analytically at each point of the parameter space. Combining the cross-sections with the efficiencies, we obtain the signal yield in the two channels, which must be compared with the observed number of events and with the expected background. We perform this comparison by computing the confidence level of exclusion (CL), defined through the $CL_s$ hypothesis test [@CLs], as explained in some detail in Appendix C. At the practical level, it is important that at the end of this procedure we obtain an *analytical* expression for the $CL$ as a function of the fundamental parameters of our model. This makes it very easy and fast to draw exclusion bounds even when working in a multi-dimensional parameter space.

### Efficiencies {#efficiencies .unnumbered}

The first step is to simulate the signal processes. Rather than employing our complete model, we have used a set of simplified models containing the SM fields and interactions plus the two relevant new particles – $\Xft$ and $B$ – with the appropriate couplings to $Wt$ responsible for single production and decay. We employ the right-handed $\Xft Wt_R$ or $BWt_R$ vertices because, as we have shown in section \[gc\], the top partners couple mainly to the $t_R$. However, to make contact with Ref. [@cmsBWt], we also simulated the case of left-handed vertices, because for a $4^{th}$-family $b'$ the coupling originates from an off-diagonal entry of the generalized $V_{CKM}$ matrix and is purely left-handed. We will see that the chirality of the couplings significantly affects the efficiencies. We generated parton-level events without showering, hadronization or detector simulation.
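For concreteness, the $CL_s$ test can be sketched for a single counting channel with no systematic uncertainties (a simplification of the two-channel combination detailed in Appendix C; the function names below are ours):

```python
import math

def pois_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    term, total = math.exp(-mu), 0.0
    for k in range(n + 1):
        total += term
        term *= mu / (k + 1)
    return total

def cls(n_obs, bkg, sig):
    """CLs = CL_{s+b} / CL_b for a single Poisson counting channel."""
    return pois_cdf(n_obs, sig + bkg) / pois_cdf(n_obs, bkg)

def excluded_95(n_obs, bkg, sig):
    """The signal hypothesis is excluded at 95% CL when CLs < 0.05."""
    return cls(n_obs, bkg, sig) < 0.05
```

In the actual analysis the dilepton and trilepton channels are combined and the $CL$ is kept as an analytic function of the model parameters; the sketch only shows the statistical criterion.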
The events were analyzed using the cuts and the identification/reconstruction efficiencies for $b$-tagging and leptons reported in [@cmsBWt]. We also included the trigger efficiency as an overall multiplicative factor. Not having enough information on how $\tau$ leptons were treated in the analysis, we accounted only for the missing energy from tau decays, while jet and lepton candidates coming from taus were simply rejected. We checked that the inclusion of $\tau$-jets does not introduce appreciable differences, but $\tau$-leptons might affect our results. We found that the most severe cut is the one on the transverse momenta of the lepton candidates, $p_T>20\, {{\rm \,GeV}}$. This is because most of the events that could contribute to di-(tri-)leptons contain exactly 2 (3) charged-lepton candidates, so losing just one of them causes the loss of the event. The number of generated jets per event is instead larger than the minimum required, and therefore the impact of the jet cut is less prominent. The signal efficiency is defined as the product of the cut efficiencies with the branching ratios of the $t$ and the $W$ to the required final states. The results are given in Tables \[tab:efficienciessimple2l\] and \[tab:efficienciessimple3l\] for different mass points. In these tables, the efficiencies of the pair-produced $b'$ obtained in Ref. [@cmsBWt] are compared with the ones obtained for a left-handed coupling with our method. The accuracy of our simplified treatment of QCD radiation and detector effects is quantified by the level of agreement between these results. We see that the discrepancy is below $10\%$ in the dilepton channel and around $30\%$ in the trilepton channel. In view of these results, we decided to be conservative and present exclusion limits computed both using our efficiency and using an efficiency reduced respectively by $10\%$ for dileptons and $30\%$ for trileptons.
From Tables \[tab:efficienciessimple2l\] and \[tab:efficienciessimple3l\] we also see that the efficiency in our model is significantly larger than the one for the $4^{th}$-family $b'$. This is because the right-handed top (and the left-handed anti-top) produced in the decay in our models tends to produce more energetic charged leptons than a left-handed top. The lepton $p_T$ distribution is therefore harder and the cut $p_T>20\, {{\rm \,GeV}}$ is more easily satisfied. Finally, we notice, somewhat surprisingly, that the efficiencies for the $\Xft$ and the $B$ partners are essentially identical. One would have expected some difference at least in the dilepton channel, since the two leptons come from the decay of a single heavy particle in the first case while they have a different origin in the second one. However, this makes no difference in practice.

  $M$ $[\textrm{GeV}]$   $\Xft$ partner $[\%]$   $B$ partner $[\%]$   4$^{\rm th}$ family $b'$ $[\%]$   $b'$ Ref. [@cmsBWt] $[\%]$
  ---------------------- ----------------------- -------------------- --------------------------------- ----------------------------
  450                    $1.90\pm0.05$           $1.93\pm0.05$        $1.65\pm0.04$                     $1.52\pm0.13$
  550                    $1.97\pm0.05$           $1.98\pm0.05$        $1.72\pm0.05$                     $1.71\pm 0.14$
  650                    $1.96\pm0.05$           $1.96\pm0.05$        $1.85\pm0.05$                     $1.71\pm 0.15$

  : []{data-label="tab:efficienciessimple2l"}

  $M$ $[\textrm{GeV}]$   $\Xft$ partner $[\%]$   $B$ partner $[\%]$   4$^{\rm th}$ family $b'$ $[\%]$   $b'$ Ref. [@cmsBWt] $[\%]$
  ---------------------- ----------------------- -------------------- --------------------------------- ----------------------------
  450                    $0.88\pm0.02$           $0.84\pm0.02$        $0.69\pm0.02$                     $0.47\pm 0.05$
  550                    $0.98\pm0.02$           $0.94\pm0.02$        $0.81\pm0.02$                     $0.56\pm 0.05$
  650                    $1.04\pm0.03$           $1.07\pm0.02$        $0.82\pm0.02$                     $0.63\pm 0.06$

  : []{data-label="tab:efficienciessimple3l"}

### Plots and results {#plots-and-results .unnumbered}

Using the event-analysis algorithm described above, we can now compute the signal efficiencies for same-sign dileptons and trileptons in the framework of the model with a totally composite right-handed top and top partners in a four-plet. For this we again employ simplified models with only two top partners, but this time we use the exact couplings corresponding to typical points in the parameter space. Therefore, apart from the right-handed coupling, which is still dominant, there is a small admixture of the left-handed one. We present the results for $\Xft$ and $B$ masses in the range $400 - 1000\, GeV$ in Tables \[tab:efficienciesPPtR\] and \[tab:efficienciesSPtR\] for pair and single production, respectively.
  ---------------------- ---------------- ------------------- ---------------- -------------------
  $M$ $[\textrm{GeV}]$   for $B$ $[\%]$   for $\Xft$ $[\%]$   for $B$ $[\%]$   for $\Xft$ $[\%]$
  400                    $1.67\pm0.03$    $1.61\pm0.04$       $0.66\pm0.01$    $0.67\pm 0.02$
  600                    $1.96\pm0.03$    $2.02\pm0.04$       $0.93\pm0.01$    $0.93\pm 0.01$
  800                    $1.81\pm0.03$    $1.86\pm0.04$       $0.98\pm0.02$    $0.97\pm 0.02$
  1000                   $1.63\pm0.03$    $1.63\pm0.04$       $0.99\pm0.02$    $0.96\pm 0.02$
  ---------------------- ---------------- ------------------- ---------------- -------------------

  : []{data-label="tab:efficienciesPPtR"}

  ---------------------- ---------------- ------------------- ---------------- -------------------
  $M$ $[\textrm{GeV}]$   for $B$ $[\%]$   for $\Xft$ $[\%]$   for $B$ $[\%]$   for $\Xft$ $[\%]$
  400                    $0.50\pm0.01$    $0.49\pm0.01$       $0.12\pm0.01$    $0.12\pm 0.01$
  600                    $0.68\pm0.01$    $0.69\pm0.01$       $0.22\pm0.01$    $0.22\pm 0.01$
  800                    $0.65\pm0.01$    $0.74\pm0.01$       $0.26\pm0.01$    $0.25\pm 0.01$
  1000                   $0.63\pm0.01$    $0.70\pm0.01$       $0.28\pm0.01$    $0.27\pm 0.01$
  ---------------------- ---------------- ------------------- ---------------- -------------------

  : []{data-label="tab:efficienciesSPtR"}

Now, by using the obtained efficiencies together with the method elaborated above for computing the cross sections, one can compute the number of signal events in dileptons and trileptons and check whether it falls into the region allowed by Fig. \[fig:exclusionN2N3\]. In Fig. \[fig:exclusionxiM\] we show the excluded region in the ($\xi$,$M_{X^{5/3}}$) plane, where $\xi = ( {v \over f})^2$, depending on whether the single production is suppressed ($c_1=0.3$) or enhanced ($c_1=3$) and on whether $B$ also contributes to the signal ($M_B \gtrsim M_{\Xft}$, $y=0.3$) or not ($M_B \gg M_{\Xft}$, $y=3$). Fig. \[fig:exclusioncM\] shows the exclusion in terms of $M_{\Xft}$ and $c_1$. Since, as discussed in sect. \[gc\], the leading contribution to the single production couplings is the same for models  and , the excluded regions are also similar for both models.
A difference shows up when $c_1\ll 1$ and the $h \bar B b$ vertex of model $\fB\ $ becomes important, thus decreasing BR($B \to W t$), and also when ${y \over g_{\psi}}\epsilon = {\cal O} (1)$ and higher-order effects modify the single production couplings. The excluded regions are almost symmetric under $c_1 \to -c_1$, which can be understood as follows. When only $\Xft$ production matters, the single production rate is proportional to $|c_1|^2$ at lowest order in $\epsilon$. Higher-order terms only matter in the region of small $|c_1|$, where the single production rate is anyway negligible and the bound is driven by pair production, which is insensitive to $c_1$. When $B$ production matters, it is because $m_B-m_\Xft \ll m_\Xft$, corresponding to $y\ll g_\psi$. From eq. (\[coup4\]) it is then evident that in this regime the couplings of both particles are approximately $\propto c_1$, so that the signal yield is again symmetric under $c_1 \to -c_1$. The gray regions in the plots correspond to an estimate of the above-mentioned error in the determination of the efficiencies.

![[]{data-label="fig:exclusionxiM"}](exclusion_BWt_xiM_4A "fig:"){width="49.00000%"} ![[]{data-label="fig:exclusionxiM"}](exclusion_BWt_xiM_4B "fig:"){width="49.00000%"}\

![ []{data-label="fig:exclusioncM"}](exclusion_BWt_cM_4A "fig:"){width="49.00000%"} ![ []{data-label="fig:exclusioncM"}](exclusion_BWt_cM_4B "fig:"){width="49.00000%"}\

Search for $t^{\prime}\to Z\, t$
---------------------------------

The search in Ref. [@cmsTZt] is designed to detect an up-type $4^{th}$-generation quark $t^{\prime}$ pair-produced by QCD and decaying to $Zt$. The search is performed in the trilepton channel, with two same-flavor, opposite-charge leptons with an invariant mass around the $Z$ pole. Moreover, at least two jets are required.
Apart from the usual hardness and isolation cuts for jets and leptons, an important event selection is performed with the variable $R_T$, defined as the scalar sum of the reconstructed momenta *without* including the two hardest leptons and the two hardest jets. $R_T$ is required to be above $80$ GeV. With $1.14\, fb^{-1}$ of $7$ TeV data the bound on the $t'$ mass is $475$ GeV. All the top partners of charge $2/3$ can contribute to this final state [^15]; these are $\Xtt$ and $T$ in models  and  and $\widetilde{T}$ in models  and . Remember, however, that the masses and couplings of $\Xtt$ and $T$ are closely tied to those of, respectively, $\Xft$ and $B$. Namely, the masses are similar (or equal) and the couplings at leading order (see eq. (\[coup4\])) differ by a factor of $\sqrt 2$. Therefore the search for charge-$2/3$ states constrains the same combinations of the fundamental parameters of the model. But the bound on the charge-$2/3$ partners that can be obtained using Ref. [@cmsTZt] is far less stringent than the one from the $b'\to Wt$ search [@cmsBWt] described above. Given approximately 5 times less analyzed data, the limit on the production cross section of Ref. [@cmsTZt] is significantly looser than the one of Ref. [@cmsBWt]. Moreover, in our model the production yield of the charge-$2/3$ states is typically lower than that of the $\Xft$ and the $B$, both because of the branching-ratio suppression needed to reach the $tZ$ final state and because the single production rate is smaller (see section \[mrc\]). Therefore we can safely ignore Ref. [@cmsTZt] when constraining models  and . Moreover, for the reasons described above, we expect that, even updating the search of Ref. [@cmsTZt] to the same integrated luminosity as Ref. [@cmsBWt], it would not become more important. Hence in the following we will only use the search for $t^{\prime}\to Z\, t$ to constrain the parameters of models  and .
As in the previous section, we will obtain semi-analytical formulae for the signal yield by computing the efficiencies at each mass point and multiplying by the production cross section computed in Section 3.1. Unlike the previous case, the search is performed in a single channel; therefore we will not need any statistical analysis, and we will just compare the computed signal yield with the $95\%$ CL limit obtained in Ref. [@cmsTZt], which corresponds to $9.6$ signal events.

### Efficiencies {#efficiencies-1 .unnumbered}

The efficiencies are computed with a model which incorporates ${\widetilde{T}}$ and its couplings to $Zt$, $ht$ and $Wb$. These are responsible for the two single production modes and for the decay. The ${\widetilde{T}}$ couples only to left-handed quarks, therefore in this case we employ left-handed couplings to compute the efficiencies. Our results can thus be directly compared with the efficiencies reported in Ref. [@cmsTZt], because the coupling is left-handed also in the case of a $4^{th}$-family quark. The ${\widetilde{T}}$ can contribute to the signal in both the pair and the single production mode, provided that at least one ${\widetilde{T}}$ decays to $Zt$, paying a branching fraction of around $1/4$ (see Section \[mrc\]). In the case of pair production, all the decay modes of the second produced ${\widetilde{T}}$ ($Zt$, $Wb$ or $ht$) are potentially relevant, and we have computed the efficiencies separately in all three cases. The methodology of the analysis, and in particular the treatment of the experimental efficiencies, closely follows that of the previous section. The results are shown in Table \[tab:efficienciesPPSPTZt\]; our efficiencies include the cut losses and the $W$, $Z$, $h$ branching fractions to the required final state. The efficiencies listed in the first column of the table can be directly compared with the ones of Ref.
[@cmsTZt]; we have checked that the discrepancy is around $25\%$, which corresponds to approximately $1.5\sigma$ of the signal uncertainty obtained in Ref. [@cmsTZt]. We see in Table \[tab:efficienciesPPSPTZt\] that the efficiency for single production with the $b$ is extremely low, below $1$ . This is because the single production signal (see Figure \[spd\]) is characterized by three leptons plus one hard ($b$) jet from the top decay, plus one forward jet from the virtual $W$ emission and a ${\overline{b}}$ from the gluon splitting. But the gluon splitting is enhanced in the collinear region, so the $b$-jet emitted from the gluon is also preferentially forward and has low $p_T$. In order for the event to pass the selection cut, which requires at least two jets with $p_T>25$ GeV and $|\eta|<2.4$, at least one of the two preferentially forward jets must be central and hard enough, implying a significant reduction of the cross-section. This is, however, not yet the dominant effect; the main reduction of the signal is due to the cut $R_T>80$ GeV discussed before. Indeed, $R_T$ is computed without including the two hardest leptons and the two hardest jets, which in our case, since we have only $3$ leptons and typically only $2$ jets, means that the momentum of the softest lepton must be above $80$ GeV. In the end, therefore, the signal is completely killed. The situation is better for single production with the $t$, since one typically has more particles produced in this case and the efficiencies are therefore comparable with those of pair production.
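In formulae, the pair-production exclusion criterion reads $N_{sig}= L\,\sigma_{pair}\sum_X w_X\,\epsilon_X > 9.6$, with $w_X$ the probability of the decay combination $(Zt,X)$. A minimal sketch follows; the efficiencies in the usage example are the $450$ GeV pair-production entries of Table \[tab:efficienciesPPSPTZt\], while the cross-section value is a placeholder, not one of our scan points:

```python
def nsig_pair(lumi_fb, sigma_pair_fb, eff):
    """Pair-production yield with at least one T~ -> Zt.

    eff: per-combination efficiencies keyed by the decay
    mode of the second leg ('Zt', 'Wb', 'ht').
    """
    br = {'Zt': 0.25, 'Wb': 0.5, 'ht': 0.25}   # fixed ratios, see sect. [mrc]
    weight = {'Zt': br['Zt']**2,               # both legs to Zt
              'Wb': 2 * br['Zt'] * br['Wb'],   # factor 2: either leg to Zt
              'ht': 2 * br['Zt'] * br['ht']}
    return lumi_fb * sigma_pair_fb * sum(weight[k] * eff[k] for k in weight)

def excluded(lumi_fb, sigma_pair_fb, eff, limit_events=9.6):
    """A point is excluded at 95% CL if its yield exceeds the quoted limit."""
    return nsig_pair(lumi_fb, sigma_pair_fb, eff) > limit_events
```

As a consistency check, the combination weights sum to $1-(3/4)^2=0.4375$, the probability that at least one of the two legs decays to $Zt$.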
  ---------------------- ----------------------------- ----------------------------- ----------------------------- ------------------ ------------------
  $M$ $[\textrm{GeV}]$   $T \bar T \to Zt\, Z\bar t$   $T \bar T \to Zt\, W\bar b$   $T \bar T \to Zt\, h\bar t$   $T\, \bar t\, j$   $T\, \bar b\, j$
  300                    $1.78$                        $1.22$                        $1.51$                        $1.13$             $0.03$
  350                    $1.93$                        $1.47$                        $1.64$                        $1.17$             $0.03$
  450                    $2.21$                        $1.81$                        $1.81$                        $1.25$             $0.05$
  550                    $2.34$                        $1.93$                        $1.95$                        $1.30$             $0.06$
  650                    $2.40$                        $2.12$                        $1.96$                        $1.35$             $0.08$
  ---------------------- ----------------------------- ----------------------------- ----------------------------- ------------------ ------------------

  : []{data-label="tab:efficienciesPPSPTZt"}

Although the efficiencies for single production with the $t$ are comparable with those of pair production (see Table \[tab:efficienciesPPSPTZt\]), we have seen in section \[mrc\] (see fig. \[fig:ttildesp2\]) that the rate of pair production is typically larger than that of single production with the top in the relevant mass range. Since the efficiencies are comparable, we do not expect a sizable contribution from this process. The signal is thus totally dominated by pair production, and ${{\rm \,BR}}(\widetilde T \to Z\,t)$ is fixed to be about 1/4, as discussed in section \[mrc\]. Therefore the bounds one can infer are mainly on $m_{\Tt}$, though a mild dependence on the other parameters ($\xi$ and $y$) remains through the BR. The resulting bound is about $m_{\Tt}\gtrsim 320$ GeV in both models $\oA, \oB$; it is maximized at large $\xi$ and small $y$, reaching $m_{\Tt}\gtrsim 350$ GeV in model $\oB$. These bounds are not competitive with those coming from the $t^{\prime}\to W\, b$ search, as we are going to discuss next.
Search for $t^{\prime}\to W\, b$
---------------------------------

![[]{data-label="fig:exclusionxiMsing"}](exclusion_TWb_xiM_1A "fig:"){width="48.00000%"} ![[]{data-label="fig:exclusionxiMsing"}](exclusion_TWb_xiM_1B "fig:"){width="48.00000%"}\

The last experimental study that we consider is the search for a $4^{th}$-generation $t^{\prime}$ quark decaying to $W b$ [@cmsTWb]. The search is performed in the channel with two opposite-sign leptons (away from the $Z$ pole) and two tagged bottom quarks. A very important selection cut, needed to suppress the background from top quark production, is that the invariant mass of all lepton and $b$-jet pairs, $M_{l b}$, be above $170$ GeV. This forbids the lepton and the $b$ from originating from the decay of a top quark. Using $5\, fb^{-1}$ of integrated luminosity, a lower bound of $557{{\rm \,GeV}}$ was set on the $t^{\prime}$ mass [@cmsTWb]. In our models, only $\widetilde{T}$ can decay to $Wb$ with a sizable branching fraction. We will therefore use Ref. [@cmsTWb] to put constraints on models  and [^16]. The single production mode with the $b$ is definitely not relevant in this case because it only leads to one lepton. The mode with the $t$ is also irrelevant, because the second lepton would come from the decay of the top quark and would not satisfy the cut on $M_{lb}$. We are therefore left with pair production. Moreover, because of the $M_{lb}>170$ GeV cut, and as was explicitly checked in Ref. [@Berger:2012ec], pair production contributes to the signal only if both $\widetilde{T}$’s decay to $Wb$. We are then left with the same channel, $\Tt\overline{\Tt}\to WbWb$, considered in Ref. [@cmsTWb]. The chirality of the coupling responsible for the decay is also the same as in the $4^{th}$-family case. Therefore the efficiencies can be extracted directly from Ref. [@cmsTWb] without any need for additional simulations.
Given the efficiency, and taking into account the branching fraction of the $\widetilde{T}$ to $Wb$, we can easily compute the signal yield and compare it with the bound obtained in [@cmsTWb].

### Plots and results {#plots-and-results-1 .unnumbered}

We show the excluded regions of the parameter space in terms of $\xi$ and $M_{\widetilde T}$ in Fig. \[fig:exclusionxiMsing\]. The exclusion is stronger for larger $y$ (and smaller $c_2$), due to the larger $BR(\widetilde T \to W b)$ in this case. As already discussed in Section \[gc\], the gauge interactions of model $\oB$ are similar to those of model $\oA$, and therefore the excluded regions are also similar. The difference is sizable in the region close to $\xi=0.5$, where in model $\oB$ the interactions with the Higgs boson vanish according to Eq. (\[scalarlagr1B\]), and therefore the $BR$ of the competing decay to $W b$ increases. The regions without solutions for $\widetilde T(y,\xi)$ at large $y$ correspond to those defined by Eq. (\[minmassttilde\]). Due to the larger amount of analyzed data and the higher $BR$ of the $\widetilde T\to W\,b$ decay mode, the search of Ref. [@cmsTWb] gives a better constraint on the parameters of our models than the previously considered search for $\widetilde T\to Z\,t$ [@cmsTZt]. However, one may expect that with an increased amount of analyzed data the search for $\widetilde T\to Z\,t$ could become competitive, due to its sensitivity to single production.

![[]{data-label="fig:exclusionMchart"}](exclMchartA "fig:"){width="49.00000%"} ![[]{data-label="fig:exclusionMchart"}](exclMchartB "fig:"){width="49.00000%"}\

Summary of exclusions
---------------------

The results of the searches described above can be conveniently summarized by scanning over the values of the model parameters and selecting the most and the least stringent bounds on the top-partners’ masses.
The highest excluded masses of $X_{5/3}$ and $X_{2/3}$ correspond to the lowest value of $y$ and the highest $c_1$ and $\xi$, and the opposite for the lowest exclusion. For $T$ and $B$ the highest exclusion corresponds to the highest $y$, $c_1$ and $\xi$, and the opposite for the lowest exclusion. The maximal $\widetilde T$ mass exclusion is reached when $y$ and $\xi$ are maximal, and the minimal exclusion is obtained for minimal $y$ and $\xi$. In Fig. \[fig:exclusionMchart\], we show our results for the maximal and minimal exclusions obtained by varying the parameters in the ranges $y\in[0.3,3]$, $c_1\in[0.3,3]$ and $\xi\in[0.1,0.3]$.

Conclusions
===========

In this paper we described an approach to systematically construct the low-energy effective lagrangian for the lightest colored fermion multiplet related to the UV completion of the top quark sector: the top partner. Our construction is based on robust assumptions, as concerns symmetries, and on plausible assumptions, as concerns the dynamics. Our basic dynamical assumption, following Ref. [@silh], is that the electroweak symmetry breaking sector, or at least the fermionic sector, is broadly described by a coupling $g_*$ and a mass scale $m_*$. This assumption implies a well-defined power counting rule. In particular, the derivative expansion is controlled by inverse powers of $m_*$. In the technical limit where the top partner multiplet $\Psi$ is parametrically much lighter than the rest of the spectrum ($M_\Psi\ll m_*$), our power counting provides a weakly coupled effective lagrangian description of the phenomenology of $\Psi$. The basic idea is that, in this case, the effects of the bulk of the unknown spectrum at the scale $m_*$ can be systematically described by an expansion in powers of $M_\Psi/m_*$. The lagrangian obtained in this limit defines our simplified description of the top partners.
One should however keep in mind that the most likely physical situation is one where $m_*-M_\Psi \sim M_\Psi$, in which an effective lagrangian is formally inappropriate. In practice, however, we expect it to be more than adequate for a first semi-quantitative description of the phenomenology, and certainly to assess experimental constraints. The comparison with explicit constructions supports this expectation. As concerns the symmetries of the strong sector, we considered the minimal composite Higgs based on the $SO(5)/SO(4)$ coset. Furthermore, we focussed on the simplest possibility where the right-handed top quark $t_R$ is itself a composite fermion. The leading source of breaking of $SO(5)$ is thus identified with the top quark Yukawa coupling $y_t$. In our construction, we have fully exploited the selection rules obtained by treating $y_t$ as a small spurion with definite transformation properties. For instance, the structure of the mass spectrum and the couplings are greatly constrained by symmetry and selection rules. In particular, the pNGB nature of the Higgs doublet implies that the couplings originating from the strong sector are purely derivative: at high energy, or for heavy on-shell fermions, these couplings are effectively quite sizeable, and yet they do not affect the spectrum even accounting for $\langle H\rangle \not =0$. If the Higgs were not treated as a pNGB, a large trilinear would be associated with a large Yukawa coupling, and the spectrum would necessarily be affected when $\langle H\rangle \not = 0$. Depending on the quantum numbers of the top partner multiplet $\Psi$ and of the composite operator ${\cal O}$ that seeds the top Yukawa in the microscopic theory, one can then consider a variety of models. We focussed on the four possibilities shown in Table \[models\], which could be considered the simplest ones. Our method can however be directly applied to perhaps more exotic possibilities.
For instance, one exotic but not implausible case would be ${\cal O}={\mathbf{14}}_{\mathbf{2/3}}$ with $\Psi$ in the symmetric traceless tensor of $SO(4)$, that is $\Psi={\mathbf{9}}_{\mathbf{2/3}}$. This case involves a top partner with electric charge $8/3$, performing a spectacular chain decay to $3W^+ +b$. Our effective lagrangian depends on a manageable number of parameters. Once the top mass is fixed, besides the Goldstone decay constant $f$ and the partner mass $M_\Psi$, there remain, depending on the model, only one or two additional parameters. These parameters, $c_{1,2}$, control the size of the trilinear couplings between $\Psi$, third-family fermions, and vector bosons or Higgs. They thus control the decay and the single production of top partners. Moreover, naive power counting suggests a preferred $O(1)$ range for these parameters. This fact, coupled with the constraints due to symmetry, robustly implies a definite structure for the interaction vertices in each model. For instance, in the case where $\Psi$ spans an $SO(4)$ quadruplet, the trilinear coupling to the Higgs doublet involves mostly a $t_R$ and is expected to be of the order of a strong sector coupling $g_\Psi=M_\Psi/f$. Moreover, it grows with $M_\Psi$, making single production even more important in the range of heavy $\Psi$. In the case of a singlet $\Psi$, the trilinear is of order $y_t$ and involves the left-handed doublet $(t,b)_L$. These details, including the chirality of the top, affect the collider phenomenology of the models and, consequently, the constraints from searches. Not only do we have few lagrangian parameters, but they also affect phenomenology mainly via their contribution to the trilinear couplings. Using this property we devised a semi-analytical way to efficiently simulate the contribution of single production to the signal. For any given mass $M_\Psi$, we numerically simulated single production once and for all, setting the trilinear coupling equal to unity.
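This simulate-once strategy can be sketched as follows: the unit-coupling cross section is tabulated on a mass grid and the physical rate follows from the analytic $|c|^2$ rescaling valid at leading order (cf. the $c_1\to -c_1$ symmetry discussed above); the grid numbers in the test are illustrative placeholders, not our simulation output:

```python
import bisect

def interp_xsec(mass, grid, ref_xsec):
    """Linear interpolation of the unit-coupling reference cross section,
    clamped to the endpoints of the tabulated mass grid."""
    i = bisect.bisect_left(grid, mass)
    if i == 0:
        return ref_xsec[0]
    if i == len(grid):
        return ref_xsec[-1]
    x0, x1 = grid[i - 1], grid[i]
    y0, y1 = ref_xsec[i - 1], ref_xsec[i]
    return y0 + (y1 - y0) * (mass - x0) / (x1 - x0)

def sigma_physical(mass, c_eff, grid, ref_xsec):
    """sigma(M, c) = |c|^2 * sigma_ref(M), at leading order in epsilon."""
    return abs(c_eff)**2 * interp_xsec(mass, grid, ref_xsec)
```

Because the coupling dependence is analytic, the exclusion contours in parameter space can then be drawn without re-running the Monte Carlo at each point.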
The physical cross section was then obtained by folding this numerical result with the analytical dependence of the physical trilinear coupling on the model parameters. Thus, once the efficiencies associated with a given experimental search are known, the constraint in parameter space can be obtained analytically. We implemented the calculation of the cross sections in a [*[Mathematica]{}*]{} notebook which is available on request. We applied our results to the presently available LHC searches. We focussed on the searches for $4^{th}$-family fermions, which have signatures similar to those of top partners, and recast them to constrain our models. The main results can be read from Fig. \[fig:exclusioncM\] and Fig. \[fig:exclusionxiMsing\]. The former figure shows that, in the relevant region $c_1=O(1)$, single production has a mild but non-negligible impact on the bounds. In the course of our analysis, it became evident that there exists significant space for improvement in the search strategy if one wants to best constrain this class of models. The searches we used were tailored to pair production of heavy quarks, while single production of top partners has different features. First of all, in single production there is only one hard decaying object, with the $t$ or $b$ produced in association being often less hard (much more so in the case of a $b$): the cuts performed in present searches tend to penalize single production. Secondly, single production is always associated with a very forward jet, originating from the collinear splitting of a (typically valence) quark into a longitudinally polarized vector boson. The resulting forward jet has exactly the same features as the tag jets of $WW$ scattering. There is a good chance that using the same tag in the searches for singly produced top partners would significantly extend the sensitivity. We also realized that, since single production in association with the $b$ is typically very large (see fig. 
\[fig:ttildesp2\]) in the case of a singlet top partner $\widetilde{T}$, tagging this production mechanism would significantly increase the LHC sensitivity to this kind of particle. Ideally, the best detection channel would be the resonant $Wb$ production from the $\widetilde{T}$ decay, accompanied by one forward jet from the longitudinal vector boson emission. The second $b$ quark present in the reaction, which comes from gluon splitting as in fig. \[spd\], is typically quite soft both in $p_\bot$ and in energy. Thus it is probably strongly affected by QCD initial-state radiation and difficult to detect. Building upon these considerations, it would be worthwhile to undertake a thorough experimental analysis, including the effects of radiation and detector simulation, suitably designed for the search for singly produced top partners. In the test of weak-scale naturalness, the search for all possible fermionic top partners represents the other half of the sky. In this paper we have introduced a first systematic description of top partner phenomenology. The simplicity of the result should hopefully serve as a basis for future systematic experimental studies. As seen from our theorist’s analysis, the present searches have already advanced well into the region suggested by naturalness. But there is no doubt somebody out there can do better. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Sascha Joerg for collaboration on part of this work and Francesco Tramontano for help with MCFM. The work of O.M. is supported by the European Programme Unification in the LHC Era, contract PITN-GA-2009-237920 (UNILHC). The work of R.R. is partially sponsored by the Swiss National Science Foundation under grant 200020-138131. The work of A.W. is supported in part by the ERC Advanced Grant no. 267985 Electroweak Symmetry Breaking, Flavour and Dark Matter: One Solution for Three Mysteries (DaMeSyFla). 
Explicit CCWZ construction for $\mathbf{{\textrm{SO}}(5)/{\textrm{SO}}(4)}$ {#append} =========================================================================== ### Generators and Goldstone Matrix {#generators-and-goldstone-matrix .unnumbered} The generators of $\textrm{SO}(5)$ in the fundamental representation are conveniently chosen to be $$(T^\alpha_{L,R})_{IJ} = -\frac{i}{2}\left[\frac{1}{2}\varepsilon^{\alpha\beta\gamma} \left(\delta_I^\beta \delta_J^\gamma - \delta_J^\beta \delta_I^\gamma\right) \pm \left(\delta_I^\alpha \delta_J^4 - \delta_J^\alpha \delta_I^4\right)\right]\,, \label{eq:SO4_gen}$$ $$T^{i}_{IJ} = -\frac{i}{\sqrt{2}}\left(\delta_I^{i} \delta_J^5 - \delta_J^{i} \delta_I^5\right)\,, \label{eq:SO5/SO4_gen}$$ where $T^{\alpha}_{L,R}$ ($\alpha = 1,2,3$) are the $\textrm{SO}(4) \simeq \textrm{SU}(2)_L \times \textrm{SU}(2)_R$ unbroken generators, while $T^{i}$ ($i = 1, \ldots, 4$) are the broken ones and parametrize the coset $\textrm{SO}(5)/\textrm{SO}(4)$. An equivalent notation for the unbroken generators, which we will use, is $T^{a}$ with $a = 1,\ldots,6$. The indices $IJ$ take the values $1, \ldots, 5$. The normalization of the $T^{A}$’s is chosen as ${\rm Tr}[T^A T^B] = \delta^{AB}$. The $T^{\alpha}_{L}$ and $T^{\alpha}_{R}$ generators span respectively the $\textrm{SU}(2)_L$ and $\textrm{SU}(2)_R$ subgroups, and obey the standard commutation relations $$\left[T^{\alpha}_{L,R}, T^{\beta}_{L,R}\right] = i \varepsilon^{\alpha\beta\gamma}\, T^{\gamma}_{L,R}\,.$$ The $T_L$’s are therefore identified as the generators of the SM $\textrm{SU}(2)_L$. Notice that in our parametrization the unbroken $T^a$’s are block-diagonal $$T^a=\left(\begin{array}{cc}t^a &0 \\ 0 &0 \end{array}\right)\,,$$ and the generators obey the following commutation relation $$\left[T^{a},T^{i}\right]=\left(t^{a}\right)_{{j}{i}}T^{j}\,. 
\label{crb}$$ With these generators, the parametrization of the Goldstone boson matrix is explicitly given by $$U = U(\Pi) = \exp\left[i\frac{\sqrt{2}}{f}\,\Pi_{i}T^{i}\right] = \left(\begin{matrix} \mathbf{1}_{4\times4}-\frac{\vec{\Pi}\vec{\Pi}^{T}}{\Pi^{2}}\left(1-\cos\frac{\Pi}{f}\right) & \frac{\vec{\Pi}}{\Pi}\sin\frac{\Pi}{f}\\ -\frac{\vec{\Pi}^{T}}{\Pi}\sin\frac{\Pi}{f} & \cos\frac{\Pi}{f} \end{matrix}\right) \label{gmatr}$$ where $\Pi^2 \equiv \vec{\Pi}^t \vec{\Pi}$. Under $g\in \textrm{SO}(5)$, the Goldstone matrix transforms as $$U(\Pi)\;\rightarrow\; U(\Pi^{(g)})=g\cdot U(\Pi)\cdot h^t(\Pi ; g)\,, \label{gtrans}$$ where $h(\Pi ; g)$ is block-diagonal in our basis $$h=\left(\begin{matrix}h_4 &0 \\ 0 &1 \end{matrix}\right)\,, \label{hd}$$ with $h_4\in \textrm{SO}(4)$. Under the unbroken $\textrm{SO}(4)$ the $\Pi$’s transform *linearly*: using eq. (\[crb\]) we get $\Pi^{i}\rightarrow {(h_4)^{i}}_{\ j}\Pi^{j}$. Given our embedding of the SM group, the $\Pi$ four-plet can be rewritten as $$\vec{\Pi}=\left(\begin{matrix}\Pi_1\\ \Pi_2\\ \Pi_3\\ \Pi_4\end{matrix}\right)= \frac1{\sqrt{2}}\left(\begin{matrix}-i\,(h_u-h_u^\dagger)\\ h_u+h_u^\dagger \\ i\,(h_d-h_d^\dagger)\\ h_d+h_d^\dagger \end{matrix}\right)\,, \label{Higgsfield}$$ where $$H=\left(\begin{matrix} h_u\\ h_d \end{matrix}\right)\,,$$ is the standard Higgs doublet of $+1/2$ Hypercharge. In the unitary gauge, in which $$h_u=0\,,\qquad h_d=\frac{\langle h\rangle+\rho}{\sqrt{2}}\,, \label{ugauge}$$ where $\rho$ is the canonically normalized physical Higgs field, the Goldstone boson matrix of eq. (\[gmatr\]) simplifies and becomes $$U=\left(\begin{matrix} 1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&\cos\frac{\langle h\rangle+\rho}{f}&\sin\frac{\langle h\rangle+\rho}{f}\\ 0&0&0&-\sin\frac{\langle h\rangle+\rho}{f}&\cos\frac{\langle h\rangle+\rho}{f} \end{matrix}\right)\,. \label{uvev}$$ Given that we will have to gauge the SM subgroup of $\textrm{SO}(5)$, we must consider also *local* transformations, $g=g(x)$, in the above equation. We also have to define *gauge sources* $A_\mu^A$ $$A_\mu = A_\mu^A T^A\;\rightarrow\;A_\mu^{(g)}=g\left[A_\mu + i\partial_\mu \right]g^t\,,$$ some of which we will eventually make dynamical while setting the others to zero. 
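As a numerical sanity check of the construction above, the broken generators of eq. (\[eq:SO5/SO4\_gen\]) can be built explicitly and exponentiated; in the unitary gauge the result must reduce to a rotation in the 4–5 plane, as in eq. (\[uvev\]). This is a small numpy sketch, with $f$ and the Higgs field value chosen arbitrarily for illustration.

```python
import numpy as np

f, h = 800.0, 246.0  # illustrative values (GeV); h plays the role of <h>+rho

# Broken SO(5)/SO(4) generators, eq. (eq:SO5/SO4_gen):
# (T^i)_IJ = -(i/sqrt(2)) (delta_I^i delta_J^5 - delta_J^i delta_I^5)
T = np.zeros((4, 5, 5), dtype=complex)
for i in range(4):
    T[i, i, 4] = -1j / np.sqrt(2)
    T[i, 4, i] = +1j / np.sqrt(2)

# Normalization check: Tr[T^i T^j] = delta^ij
for i in range(4):
    for j in range(4):
        assert abs(np.trace(T[i] @ T[j]) - (1.0 if i == j else 0.0)) < 1e-12

# Unitary gauge: Pi = (0, 0, 0, h).  U = exp(i sqrt(2)/f Pi_i T^i),
# computed here via its power series (M is small and real antisymmetric).
M = 1j * np.sqrt(2) / f * h * T[3]
U, term = np.eye(5, dtype=complex), np.eye(5, dtype=complex)
for n in range(1, 40):
    term = term @ M / n
    U = U + term
U = U.real  # the exponential of a real antisymmetric matrix is a real rotation

# U must be orthogonal and reduce to the cos/sin block of eq. (uvev)
assert np.allclose(U.T @ U, np.eye(5))
assert np.isclose(U[3, 3], np.cos(h / f)) and np.isclose(U[3, 4], np.sin(h / f))
```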
Explicitly, the dynamical part of $A_\mu$ will be $$A_\mu = \frac{g}{\sqrt{2}}W^+_\mu\left(T_L^1+i T_L^2\right)+\frac{g}{\sqrt{2}}W^-_\mu\left(T_L^1-i T_L^2\right)+g \left(\cw Z_\mu+\sw A_\mu \right)T_L^3+g' \left(\cw A_\mu-\sw Z_\mu \right)T_R^3\,, \label{gfd}$$ where $\cw$ and $\sw$ denote respectively the cosine and the sine of the weak mixing angle and $g$, $g'$ are the SM couplings of $\textrm{SU}(2)_L$ and $\textrm{U}(1)_Y$. Notice that $A_\mu$ belongs to the unbroken $\textrm{SO}(4)$ subalgebra; this will simplify the expressions for the $d$ and $e$ symbols that we will give below. ### The ${\mathbf{d}}$ and ${\mathbf{e}}$ symbols {#the-mathbfd-and-mathbfe-symbols .unnumbered} Still treating $A_\mu$ as a general element of the $\textrm{SO}(5)$ algebra, we can define the $d$ and $e$ symbols as follows. Start by defining $$\bar{A}_\mu\equiv A_\mu^{(U^t)}=U^t\left[A_\mu+i\partial_\mu \right] U\,,$$ which transforms under $\textrm{SO}(5)$ in a peculiar way $$\displaystyle \bar{A}_\mu\;\rightarrow\;A_\mu^{(h\cdot U^t\cdot{g^t}\cdot{ g})}=\bar{A}_\mu^{(h)}=h\left[\bar{A}_\mu+i\partial_\mu\right] h^t\,.$$ Since $h=h(\Pi ; g)$ is an element of $\textrm{SO}(4)$ as in eq. (\[hd\]), the shift term in the above equation, $ih\partial_\mu h^t$, lives in the $\textrm{SO}(4)$ subalgebra. Therefore, if we decompose $\bar{A}_\mu$ in broken and unbroken generators $$\bar{A}_\mu\equiv -\,d_\mu^{i} T^{i}-\,e_\mu^a T^a\,, \label{defde}$$ we have that $d_\mu^{i}$ transforms linearly (and in the fourplet of $\textrm{SO}(4)$) while the shift is entirely taken into account by $e_\mu^a$. We have $$d_\mu^{i}\;\rightarrow \left(h_4\right)^{i}_{\ {j}}d_\mu^{j}\; \;\;\;\;\;\textrm{and}\;\;\;\;\;e_\mu\equiv e_\mu^a t^a\;\rightarrow\;h_4\left[e_\mu-i\partial_\mu\right]h_4^t\,. \label{trrule}$$ Let us now restrict, for simplicity, to the case in which $A_\mu$ belongs to the $\textrm{SO}(4)$ subalgebra, as for our dynamical fields in eq. (\[gfd\]). 
It is not difficult to write down an explicit formula for $d$ and $e$; these are given by $$\begin{aligned} d_\mu^{i}&&=\sqrt{2}\left(\frac1{f}-\frac{\sin{\Pi/f}}\Pi\right)\frac{\vec{\Pi}\cdot \nabla_\mu\vec{\Pi}}{\Pi^2}\Pi^{i}+\sqrt{2}\frac{\sin{\Pi/f}}\Pi\nabla_\mu\Pi^{i}\nonumber\\ e_{\mu}^a&&=-A_\mu^a+4\,i\,\frac{\sin^2{(\Pi/2f)}}{\Pi^2}\vec{\Pi}^t t^a\nabla_\mu\vec{\Pi} \label{dande}\end{aligned}$$ where $\nabla_\mu\Pi$ is the “covariant derivative” of the $\Pi$ field: $$\nabla_\mu\Pi^{i}=\partial_\mu\Pi^{i}-i A_\mu^a\left(t^a\right)^{i}_{\ j}\Pi^{ j}\,.$$ The first use we can make of the $d_\mu$ symbol is to define the $\textrm{SO}(5)$-invariant kinetic Lagrangian for the Goldstone bosons; this is given by $$\mathcal{L}_{\Pi}=\frac{f^{2}}{4}\, d_\mu^{i}\, d^{\mu}_{i}\,. \label{hkt}$$ In the unitary gauge of eq. (\[ugauge\]), and using eq. (\[gfd\]) for $A_\mu$, the Goldstone Lagrangian becomes $$\mathcal{L}_{\Pi}=\frac{1}{2}\left(\partial_\mu\rho\right)^{2} + \frac{g^{2}}{4}\, f^{2} \sin^{2}{\frac{\langle h\rangle+\rho}{f}} \left(|W_\mu|^{2}+\frac{1}{2c_w^{2}}\,Z_\mu^{2}\right)\,, \label{hktug}$$ from which we can check that the field $\rho$ is indeed canonically normalized and read off the $W$ and $Z$ masses $m_W=\frac{g}{2}\,f\sin{\frac{\langle h\rangle}f}$ and $m_Z=m_W/c_w$. This fixes the relation between $\langle h\rangle$ and the EW scale $v=246$ GeV: $$v=f\sin{\frac{\langle h\rangle}{f}}\,. \label{VEV}$$ The $e_\mu$ symbol can instead be used to construct the CCWZ covariant derivatives, because the shift term in its transformation rule of eq. (\[trrule\]) compensates for the shift of the ordinary derivative. Consider for instance the field ${\Psi}$ defined in eq. (\[4plet\]) of the main text, which transforms in the ${\mathbf4}$ of $\textrm{SO}(4)$, [*[i.e.]{}*]{} like $\Psi\rightarrow h_4\cdot\Psi$. The covariant derivative is $$\nabla_\mu\Psi \,=\,\partial_\mu\Psi+i\,e_{\mu}^at^a\Psi\,. \label{covder}$$ ### The CP symmetry {#the-cp-symmetry .unnumbered} By looking at eq. (\[Higgsfield\]) and remembering that CP acts as $H(x)\rightarrow H^*(x^{(P)})$ on the Higgs doublet, we immediately obtain the action of the CP transformation on the Goldstone fields $\Pi$ and on the Goldstone matrix $U$. 
It is $$\vec{\Pi}(x)\;\rightarrow\;C_4\,\vec{\Pi}(x^{(P)})\,,\qquad U(x)\;\rightarrow\; C_5\,U(x^{(P)})\,C_5\,,$$ where $C_4$ and $C_5$ are respectively a $4\times4$ and a $5\times5$ diagonal matrix defined as $$C_4=\mathrm{diag}(-1,+1,-1,+1)\,,\qquad C_5=\mathrm{diag}(-1,+1,-1,+1,+1)\,.$$ In the above equations the superscript “$(P)$” denotes the action of ordinary spatial parity. Similarly, the ordinary action of CP on the SM gauge fields in eq. (\[gfd\]) is recovered if we take $$A_\mu\;\rightarrow\;C_5\,A_\mu^{(P)}\,C_5\,.$$ From the above equations it is straightforward to derive the CP transformations of the $d$ and $e$ symbols defined in eq. (\[defde\]), $$d_\mu^{i}\;\rightarrow\;\left(C_4\right)^{i}_{\ j}\,(d_\mu^{(P)})^{j}\,,\qquad e_\mu\;\rightarrow\;C_4\,(e_\mu^{(P)})\,C_4\,.$$ In the fermionic sector, adopting for definiteness the Weyl basis, the CP transformations of the $q_L$ and of the $t_R$ are the usual ones $$\chi(x)\;\rightarrow\;\chi^{(CP)}=i\gamma^0\gamma^2\chi^*(x^{(P)})\,, \label{orcp}$$ for $\chi=\{t_L,b_L,t_R\}$. For the top partners, in the case in which they transform in the fourplet of ${\textrm{SO(4)}}$ as in eq. (\[4plet\]), it is natural to define CP as $$\Psi_i\;\rightarrow\;\left(C_4\right)_{i}^{\ j}\,\Psi_j^{(CP)}\,,$$ while for the case of the singlet we simply have $\Psi\;\rightarrow\;\Psi^{(CP)}$. Notice that with this definition the charge eigenstate fields $\{T,B,\Xtt,\Xft\}$ defined in eq. (\[4plet\]) have “ordinary” CP transformations as in eq. (\[orcp\]). Fermion Couplings {#ferc} ================= In this appendix we report the explicit form of the fermion couplings to gauge bosons and to the Higgs, in the unitary gauge defined by eq. (\[ugauge\]), that arise in our four models , ,  and , defined respectively in eqs. (\[eq:lagrangian2\]), (\[eq:lagrangian214\]) and (\[eq:lagrangian211\]). All the couplings are given *before* the rotations that diagonalize the mass matrices. The first two terms, which are relevant for the models  and , are i\_R\^i\_it\_R=&&([\_[5/3]{}]{})\_R\^+t\_R -\_R\^-t\_R -\_Rt\_R\ && -([\_[2/3]{}]{})\_Rt\_R +it\_R, \[dcoup\] and the term with the $e_\mu$ symbol which we combine, for convenience, with the one from the covariant derivative in eq. 
(\[cder\]) (23 g’-)=&& (-12+13 s\_w\^2)B+(12-53 s\_w\^2) \_[5/3]{}\ &&+(12-23 s\_w\^2)T+(-12-23 s\_w\^2)[\_[2/3]{}]{}\ &&+{\^-.\ &&. +\_[5/3]{}\^++h.c.}\ &&+ . \[ecoup\] The couplings to the photon are not reported explicitly in the above equation because they are simply the standard ones, being completely fixed by the ${\textrm{U}}(1)_{\textrm{em}}$ residual gauge symmetry. In addition to the ones in eq. (\[ecoup\]), non-derivative couplings with the Higgs field emerge in model  from the terms y f(Q\^[**5**]{}\_L)\^[I]{} U\_[I i]{}\_[ R]{}\^i&=&y f |b\_L B\_R+y f\_L,\ y c\_2 f (Q\^[**5**]{}\_L)\^[I]{} U\_[I 5]{}t\_R&=& -\_L t\_R, \[mc5\] while in model  we have y f([\_L\^]{})\^[IJ]{} U\_[I i]{} U\_[J 5]{} \_R\^i&=& y f |b\_L B\_R+ 2\_L\ [y c\_2 f 2]{}([\_L\^]{})\^[IJ]{} U\_[I 5]{}U\_[J 5]{} t\_R&=& -\_L t\_R. \[mc14\] For the models with the singlet,  and , the only gauge-fermion interactions come from the covariant derivative 23g’&=&-23 s\_w\^2 +23e . The couplings with the Higgs come instead from y f([ \_L\^]{})\^[I]{} U\_[I 5]{} \_R &=& -\_L\_R,\ y c\_2 f([\_L\^]{})\^[I]{} U\_[I 5]{} t\_R &=& -\_L t\_R, \[scalarlagr1A\] for model  and from f([ \_L\^]{})\^[Ij]{} U\_[I 5]{} U\_[J 5]{} \_R &=& -\_L\_R,\ [y c\_2 2]{} f([\_L\^]{})\^[Ij]{} U\_[I 5]{} U\_[J 5]{} t\_R &=& -\_L t\_R. \[scalarlagr1B\] for model . Statistical tools ================= In the analysis performed in the Ref. [@cmsBWt] the $CL_s$ method is used to obtain the exclusion confidence intervals for the mass of the $b^{\prime}$ quark. However this exclusion is made in terms of the pair production cross section assuming some fixed ratio between the yield in dileptons and trileptons channels. In our case this ratio depends on the relative strength of the single and pair production and can significantly deviate from the one used in the Ref. [@cmsBWt]. 
Thus we want to re-do part of the experimental analysis in order to extract a more model-independent exclusion in terms of the numbers of di- and trileptons separately. Though we are not restricted to the $CL_s$ method, we think that it is well suited for constraining the parameter space of our model. To use the $CL_s$ we first construct a test statistic $q$ as a log-ratio of the probability density for the signal+background hypothesis to that for the background hypothesis: $$q = -2\sum_{i=2l,3l}\ln\frac{P\left( n_i \,|\, s_i+ b_i \right)}{P\left( n_i \,|\, b_i \right)}$$ where $n_i$ is the number of di- and trilepton events observed in the pseudo-experiment, and $s_i$ and $b_i$ are the numbers of signal and background events, respectively. The distribution $P$ for a small number of events can be taken as a Poissonian modified due to the presence of the uncertainties. The largest uncertainty in the experimental analysis comes from the background estimation and can be accounted for by taking a marginal probability density defined as $$P\left( n_i \,|\, s_i+ b_i \right) = \int {\cal P}\left( n_i \,|\, s_i + \nu_i b_i \right)\, \ln\!{\cal N}(\nu_i;1,\delta_{\nu_i})\, d\nu_i$$ where ${\cal P}$ stands for a Poissonian distribution of the observed number of events and $\ln{\cal N}$ for a log-normal distribution of the nuisance parameters $\nu_i$ centered at the value 1 with a variance corresponding to the relative error in the background estimation, $\delta_{\nu_i}^2=\delta_{b_i}^2 / b^2_i$. The analogous definition is taken for the background-only probability distribution $P \left( n_i | b_i \right)$. The confidence level of the signal+background (background-only) hypothesis is defined as $$CL_{sb(b)} = \int_{q^{obs}}^{\infty} P_{sb(b)}(q)\, dq$$ where $P_{sb(b)}(q)$ is the probability density of $q$ which corresponds to $n_i$ distributed according to the signal+background (background-only) hypothesis and $q^{obs}$ corresponds to the observed number of events $n_i^{obs}$. 
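A toy-Monte-Carlo evaluation of these quantities can be sketched as follows. This is an illustrative Python implementation, not the code used for our results; all yields below are placeholder numbers, and the log-normal nuisance is drawn with median 1 and relative width $\delta_{b_i}/b_i$, matching the definition above for small uncertainties.

```python
import math
import numpy as np

rng = np.random.default_rng(12345)

def poisson_pmf(n, lam):
    # Poisson probability mass, vectorized over the rate lam
    return np.exp(n * np.log(lam) - lam - math.lgamma(n + 1))

def marginal_p(n, s, b, db, nnuis=2000):
    """P(n | s+b) marginalized over a log-normal nuisance nu on the
    background, centered at 1 with relative width db/b (small-db/b limit)."""
    nu = rng.lognormal(0.0, db / b, size=nnuis)
    return float(np.mean(poisson_pmf(n, s + nu * b)))

def q_stat(n, s, b, db):
    """q = -2 sum_i ln[ P(n_i|s_i+b_i) / P(n_i|b_i) ] over the two channels."""
    return -2.0 * sum(
        math.log(marginal_p(n[i], s[i], b[i], db[i]) /
                 marginal_p(n[i], 0.0, b[i], db[i]))
        for i in range(2))

def cls_exclusion(nobs, s, b, db, ntoys=300):
    """CL_excl = 1 - CL_sb/CL_b, with the tail probabilities
    CL_{sb(b)} = P(q >= q_obs) estimated from toy pseudo-experiments."""
    qobs = q_stat(nobs, s, b, db)
    def tail(mu):
        qs = [q_stat(rng.poisson(mu), s, b, db) for _ in range(ntoys)]
        return float(np.mean(np.asarray(qs) >= qobs))
    cl_sb = tail(np.asarray(s) + np.asarray(b))
    cl_b = tail(np.asarray(b))
    return 1.0 - cl_sb / max(cl_b, 1e-12)

# Illustrative yields only: expected signal/background in the di- and
# trilepton channels, with 30% relative background uncertainty.
s, b, db = [10.0, 4.0], [10.0, 2.0], [3.0, 0.6]
cl = cls_exclusion(nobs=[10, 2], s=s, b=b, db=db)
```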
Finally the exclusion confidence level for the signal $s_i$ is: $$CL_{excl} = 1-\frac{CL_{sb}}{CL_{b}}\,.$$ The confidence intervals obtained in this way coincide with those given in Ref. [@cmsBWt], with a relative deviation of the excluded pair-production cross section of less than 5%. The difference can be caused by our simplified treatment of the nuisance parameters, i.e. neglecting the signal uncertainties and assuming that the backgrounds of di- and trileptons are completely uncorrelated. Using the above definition we find the region in the $(s_2,s_3)$ plane excluded at 95% CL (Fig. \[fig:exclusionN2N3\]). ![[]{data-label="fig:exclusionN2N3"}](excludedN2N3 "fig:"){width="50.00000%"}\ [99]{} D. B. Kaplan and H. Georgi, Phys. Lett.  B [**136**]{} (1984) 183. D. B. Kaplan, Nucl. Phys. B [**365**]{} (1991) 259–278. K. Agashe, R. Contino and A. Pomarol, Nucl. Phys. B [**719**]{}, 165 (2005) \[hep-ph/0412089\]. G. F. Giudice, C. Grojean, A. Pomarol and R. Rattazzi, JHEP [**0706**]{}, 045 (2007) \[hep-ph/0703164\]. C. Csaki, A. Falkowski and A. Weiler, JHEP [**0809**]{}, 008 (2008) \[arXiv:0804.1954 \[hep-ph\]\]. B. Keren-Zur, P. Lodone, M. Nardecchia, D. Pappadopulo, R. Rattazzi and L. Vecchi, Nucl. Phys. B [**867**]{}, 429 (2013) \[arXiv:1205.5803 \[hep-ph\]\]. S. Dimopoulos and G. F. Giudice, Phys. Lett. B [**357**]{} (1995) 573 \[hep-ph/9507282\]. A. G. Cohen, D. B. Kaplan and A. E. Nelson, Phys. Lett. B [**388**]{} (1996) 588 \[hep-ph/9607394\]. R. Barbieri and D. Pappadopulo, JHEP [**0910**]{}, 061 (2009) \[arXiv:0906.4546 \[hep-ph\]\]. R. Contino, D. Marzocca, D. Pappadopulo and R. Rattazzi, JHEP [**1110**]{}, 081 (2011) \[arXiv:1109.1570 \[hep-ph\]\]. S. Coleman, J. Wess and B. Zumino, Phys. Rev. 177 (1969) 2239; C. Callan, S. Coleman, J. Wess and B. Zumino, Phys. Rev. 177 (1969) 2247. O. Matsedonskyi, G. Panico and A. Wulzer, arXiv:1204.6333 \[hep-ph\]. M. Gillioz, Phys. Rev. D [**80**]{}, 055003 (2009) \[arXiv:0806.3450 \[hep-ph\]\]. C. Anastasiou, E. Furlan and J. 
Santiago, Phys. Rev. D [**79**]{}, 075003 (2009) \[arXiv:0901.2117 \[hep-ph\]\]. G. Dissertori, E. Furlan, F. Moortgat and P. Nef, JHEP [**1009**]{}, 019 (2010) \[arXiv:1005.4414 \[hep-ph\]\]. G. Panico and A. Wulzer, JHEP [**1109**]{}, 135 (2011) \[arXiv:1106.2719 \[hep-ph\]\]. S. De Curtis, M. Redi and A. Tesi, JHEP [**1204**]{} (2012) 042 \[arXiv:1110.1613 \[hep-ph\]\]. D. Marzocca, M. Serone and J. Shu, JHEP [**1208**]{} (2012) 013 \[arXiv:1205.0770 \[hep-ph\]\]. A. Pomarol and F. Riva, JHEP [**1208**]{} (2012) 135 \[arXiv:1205.6434 \[hep-ph\]\]. R. Contino, T. Kramer, M. Son and R. Sundrum, JHEP [**0705**]{}, 074 (2007) \[hep-ph/0612180\]. R. Contino and G. Servant, JHEP [**0806**]{}, 026 (2008) \[arXiv:0801.1679 \[hep-ph\]\]. J. Mrazek and A. Wulzer, Phys. Rev. D [**81**]{}, 075006 (2010) \[arXiv:0909.3977 \[hep-ph\]\]. G. Panico, M. Redi, A. Tesi and A. Wulzer, arXiv:1210.7114 \[hep-ph\]. \[ATLAS Collaboration\], ATLAS-CONF-2012-130 \[CMS Collaboration\], CMS-PAS-B2G-12-003. K. Agashe, R. Contino, L. Da Rold and A. Pomarol, Phys. Lett. B [**641**]{}, 62 (2006) \[hep-ph/0605341\]. J. Mrazek, A. Pomarol, R. Rattazzi, M. Redi, J. Serra and A. Wulzer, Nucl. Phys. B [**853**]{}, 1 (2011) \[arXiv:1105.5403 \[hep-ph\]\]. J. A. Aguilar-Saavedra, JHEP [**0911**]{}, 030 (2009) \[arXiv:0907.3155 \[hep-ph\]\]. J. M. Cornwall, D. N. Levin and G. Tiktopoulos, Phys. Rev. D 10, 1145 (1974) \[Erratum-ibid. D 11, 972 (1975)\]; C. E. Vayonakis, Lett. Nuovo Cim. 17, 383 (1976); M. S. Chanowitz and M. K. Gaillard, Nucl. Phys. B 261, 379 (1985). M. Aliev, H. Lacker, U. Langenfeld, S. Moch, P. Uwer and M. Wiedermann, Comput. Phys. Commun.  [**182**]{}, 1034 (2011) \[arXiv:1007.1327 \[hep-ph\]\]. A. D. Martin, W. J. Stirling, R. S. Thorne and G. Watt, Eur. Phys. J. C [**63**]{}, 189 (2009) \[arXiv:0901.0002 \[hep-ph\]\]. S. S. D. Willenbrock and D. A. Dicus, Phys. Rev. D [**34**]{}, 155 (1986). S. Godfrey, T. Gregoire, P. Kalyniak, T. A. W. Martin and K. 
Moats, JHEP [**1204**]{}, 032 (2012) \[arXiv:1201.1951 \[hep-ph\]\]. E. L. Berger and Q. -H. Cao, Phys. Rev. D [**81**]{}, 035006 (2010) \[arXiv:0909.3555 \[hep-ph\]\]. J. M. Campbell and R. K. Ellis, Phys. Rev. D [**62**]{}, 114012 (2000) \[hep-ph/0006304\]; J. M. Campbell, R. K. Ellis and F. Tramontano, Phys. Rev. D [**70**]{}, 094012 (2004) \[hep-ph/0408158\]; J. M. Campbell, R. Frederix, F. Maltoni and F. Tramontano, Phys. Rev. Lett.  [**102**]{}, 182003 (2009) \[arXiv:0903.0005 \[hep-ph\]\]; J. M. Campbell, R. Frederix, F. Maltoni and F. Tramontano, JHEP [**0910**]{}, 042 (2009) \[arXiv:0907.3933 \[hep-ph\]\]. J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer, JHEP [**1106**]{}, 128 (2011) \[arXiv:1106.0522 \[hep-ph\]\]. N.D. Christensen, C. Duhr, Comput. Phys. Commun.180 (2009) 1614-1641 \[arXiv:0806.4194 \[hep-ph\]\] A. Azatov, O. Bondu, A. Falkowski, M. Felcini, S. Gascon-Shotkin, D. K. Ghosh, G. Moreau and S. Sekmen, Phys. Rev. D [**85**]{}, 115022 (2012) \[arXiv:1204.0455 \[hep-ph\]\]; K. Harigaya, S. Matsumoto, M. M. Nojiri and K. Tobioka, Phys. Rev. D [**86**]{}, 015005 (2012) \[arXiv:1204.2317 \[hep-ph\]\]. M. Perelstein, M. E. Peskin and A. Pierce, Phys. Rev. D [**69**]{}, 075002 (2004) \[hep-ph/0310039\]. T. Han, H. E. Logan, B. McElrath and L. -T. Wang, Phys. Rev. D [**67**]{}, 095004 (2003) \[hep-ph/0301040\]. N. Vignaroli, Phys. Rev. D [**86**]{}, 075017 (2012) \[arXiv:1207.0830 \[hep-ph\]\]. C. Rogan, arXiv:1006.2727 \[hep-ph\]; S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Rev. D [**85**]{}, 012004 (2012) \[arXiv:1107.1279 \[hep-ex\]\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], JHEP [**1205**]{}, 123 (2012) \[arXiv:1204.1088 \[hep-ex\]\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Rev. Lett.  [**107**]{}, 271802 (2011) \[arXiv:1109.4985 \[hep-ex\]\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B [**716**]{}, 103 (2012) \[arXiv:1203.5410 \[hep-ex\]\]. M. Cacciari, G. P. Salam, and G. 
Soyez, JHEP [**04**]{} (2008) 063 \[arXiv:0802.1189\]. A. L. Read, J. Phys. G [**28**]{}, 2693 (2002). J. Berger, J. Hubisz and M. Perelstein, JHEP [**1207**]{}, 016 (2012) \[arXiv:1205.0013 \[hep-ph\]\]. [^1]: By [*semi-perturbative*]{} here we mean that in these models, typified by warped compactifications, there exists a sufficiently interesting subset of questions, even involving physics at energies well above the weak scale, that can be addressed using perturbation theory. [^2]: While this work was being completed ATLAS [@ATLASnew] and CMS [@CMSnew] presented dedicated searches for top partners, which we did not include in our analysis. From a preliminary investigation we expect mild changes in our results from these new data because both the ATLAS and the CMS searches are optimized to detect pair production. As we will discuss in the conclusions, a radical improvement of the bounds could perhaps be achieved, with the present energy and luminosity, but only with searches dedicated to single production. [^3]: See [@Panico:2012uw] for a complete calculable model with totally composite $t_R$, analogous holographic 5d models could be formulated following the approach of Ref. [@Agashe:2004rs]. [^4]: See Appendix A for the explicit form of the generators. [^5]: The light quark families and the leptons will not be considered here because their couplings are most likely very weak. [^6]: Another possible option considered in the literature is ${{r}}_{\mathcal{O}}={\mathbf{4}}_{\mathbf{1/6}}$. However this option is not available once $t_R$ is chosen to be a $SO(4)$ singlet: the top would not acquire a mass. It should also be remarked that, regardless of the nature of $t_R$, ${{r}}_{\mathcal{O}}={\mathbf{4}}_{\mathbf{1/6}}$ is disfavored when considering dangerous tree level corrections to the $Zb\bar b$ vertex [@zbb; @Mrazek:2011iu]. 
[^7]: An organizing principle, termed [*[partial UV completion]{}*]{} (PUV), to consistently construct an effective lagrangian for a parametrically light resonance was proposed in Ref. [@Contino:2011np]. There the focus was on the more involved case of vector and scalar resonances. According to PUV, the couplings involving the lighter resonance should roughly saturate the strength $g_*$ when extrapolated at the scale $m_*$. We refer to Ref. [@Contino:2011np] for a more detailed discussion. The effective lagrangians we construct in this paper automatically satisfy PUV in the range of parameters suggested by the power counting rule in eq. (\[powc\]). [^8]: The top partner’s spectrum with partially composite $t_R$ has been worked out in Ref. [@panico; @Matsedonskyi:2012ym]. [^9]: Notice that the Goldstone fields $\phi^{\pm,0}$ in eq. (\[Hdoublet\]) are not canonically normalized. Indeed the non-linearities in the Higgs kinetic term of eq. (\[hkt\]) lead to a kinetic coefficient equal to $\sin{\epsilon}/\epsilon$, with $\epsilon=\langle h\rangle/f$. However this is irrelevant for the purpose of the present discussion. [^10]: When considering a perturbation described by a small parameter $\eta$ to a Lagrangian, the use of the equations of motion of the unperturbed theory is equivalent to performing field redefinitions of the form $\Phi\to \Phi+\eta F[\Phi,\partial]$. For example, to deal with the first term of eq. (\[gc0\]), the relevant redefinition is $$\displaystyle \begin{matrix} T_R\;\rightarrow\;T_R+\frac{\sqrt{2}c_1}{f}h_d^\dagger t_R\\ t_R\;\rightarrow\;t_R-\frac{\sqrt{2}c_1}{f}h_d T_R \end{matrix}\,.$$ This eliminates the derivative interaction and makes the first term of eq. (\[gc1\]) appear. It also leads to new interactions with more fields that are, however, irrelevant for our processes at the tree level. [^11]: Even though the two plots correspond to different models, the couplings of the $\Xft$ and $\Xtt$ do not differ at leading order in models  and .  
[^12]: By fixing $m_t$, $\xi$, $c_2$ and $m_{\tilde T}$ the result for model   and  coincide. Indeed, by comparing the lagrangians (\[scalarlagr1A\]) and (\[scalarlagr1B\]), one notices that the gauge vertices and the mass spectrum of model   equal those of model   when the equality $y^{\text{\oA}} \sin \epsilon = y^{\text{\oB}} \sin 2\epsilon /2$ holds. [^13]: Note that, for a given $m_{\widetilde T}$, $y_{max}$ does indeed correspond to the maximal value of the $W \bar b \widetilde T$-coupling, while the coincidence is not exact in the case of the $Z \bar t \widetilde T$-coupling. [^14]: Significant bounds on the top partners could also emerge from unrelated studies like the searches of SUSY performed with the “razor” variable [@razor]. We thank M. Pierini for suggesting this possibility, obviously this is an interesting direction to explore. [^15]: Actually because of the quite loose cuts on the invariant mass of leptonic $Z$ used in this search also the $X^{5/3}$ and the $B$ could contribute, however this effect is subdominant. [^16]: Also pair produced $B$ and $\Xft$ decaying to $Wt$ contributes to the final states considered in Ref. [@cmsTWb]. However the resulting bound on these states is lower than the one obtained using Ref. [@cmsBWt]. In addition the signature used in Ref. [@cmsTWb] is insensitive to single production. Thus we do not expect any improvement of the bounds on the models  and  from this search.
--- abstract: 'We show that for each non-negative integer $k$, every bipartite tournament either contains $k$ arc-disjoint cycles or has a feedback arc set of size at most $7(k-1)$.' author: - Jasine Babu - Ajay Saju Jacob - 'R. Krithika' - Deepak Rajendraprasad title: 'A Note on Arc-Disjoint Cycles in Bipartite Tournaments' --- Introduction ============ Tournaments and bipartite tournaments form a mathematically rich subclass of directed graphs with interesting structural and algorithmic properties [@gutin; @moon; @digraphs]. A tournament is a directed graph obtained by assigning a unique orientation to each edge of an undirected complete simple graph. Similarly, a bipartite tournament is a directed graph obtained by assigning a unique orientation to each edge of an undirected complete bipartite simple graph. Tournaments and bipartite tournaments are tremendously useful in modelling competitions and thus problems on these graphs have several applications in areas like machine learning, voting systems and social choice theory. One such problem is <span style="font-variant:small-caps;">Feedback Arc Set</span>. A feedback arc set is a set of arcs of the given graph whose deletion results in an acyclic graph. Given a directed graph and a non-negative integer $k$, <span style="font-variant:small-caps;">Feedback Arc Set</span> is the problem of determining if the graph has a feedback arc set of size at most $k$. Finding a minimum feedback arc set in tournaments and bipartite tournaments is NP-hard [@fast-hard-alon; @fast-hard; @fast-hard3; @fasbt-nphard]. However, it is known that for each non-negative integer $k$, every tournament either contains $k$ arc-disjoint cycles or has a feedback arc set of size at most $5k$ [@mfcs19] and results from [@Chen15; @McDonald18] improve the bound of $5k$ to $3.7k$. 
In this note, we prove an analogous result for bipartite tournaments [^1].\ [**Preliminaries.** ]{} A [*directed graph*]{} (or [*digraph*]{}) is a pair consisting of a set $V$ of vertices and a set $A$ of arcs. An arc is specified as an ordered pair of vertices. We will consider only simple unweighted digraphs. For a digraph $D$, $V(D)$ and $A(D)$ denote the set of its vertices and the set of its arcs, respectively. Two vertices $u$, $v$ are said to be [*adjacent*]{} in $D$ if $(u,v) \in A(D)$ or $(v,u) \in A(D)$. For a vertex $v \in V(D)$, its [*out-neighborhood*]{}, denoted by $N^{+}(v)$, is the set $\{u \in V(D) \mid (v,u) \in A(D)\}$ and its [*in-neighborhood*]{}, denoted by $N^{-}(v)$, is the set $\{u \in V(D) \mid (u,v) \in A(D)\}$. This notation is extended to a subset $X$ of vertices as $N^{+}(X)=\cup_{v \in X}N^{+}(v)$ and $N^{-}(X)=\cup_{v \in X}N^{-}(v)$. For a set $X \subseteq V(D) \cup A(D)$, $D-X$ denotes the digraph obtained from $D$ by deleting $X$. A [*path*]{} $P$ in $D$ is a sequence $(v_1,\dots,v_k)$ of distinct vertices such that for each $i \in [k-1]$, $(v_i,v_{i+1}) \in A(D)$. A path $P$ is called an [*induced*]{} path if there is no arc in $D$ that is between two non-consecutive vertices of $P$. A [*cycle*]{} $C$ in $D$ is a sequence $(v_1,\dots,v_k)$ of distinct vertices such that $(v_1,\dots,v_k)$ is a path and $(v_k,v_1) \in A(D)$. A cycle $C=(v_1,\dots,v_k)$ is called an [*induced*]{} (or [*chordless*]{}) cycle if there is no arc in $D$ that is between two non-consecutive vertices of $C$ with the exception of the arc $(v_k,v_1)$. The length of a path or cycle $X$ is the number of vertices in it and is denoted by $|X|$. A cycle of length $q$ is called a $q$-cycle and a cycle on three vertices is also called a [*triangle*]{}. A digraph is called a [*directed acyclic graph*]{} if it has no cycles. 
Any directed acyclic graph $D$ has an ordering $\sigma$ of its vertices, called a [*topological ordering*]{}, such that for each $(u,v) \in A(D)$, $\sigma(u)<\sigma(v)$ holds. A bipartite digraph is a digraph $B$ whose vertex set can be partitioned into two sets $X$ and $Y$ such that every arc in $B$ has one endpoint in $X$ and the other endpoint in $Y$. We denote $B$ as $B[X,Y]$ where $X$ and $Y$ form the bipartition of the underlying bipartite graph. It is easy to see that a bipartite digraph has no triangle and any 4-cycle is an induced 4-cycle. Given a digraph $D$, $D^R$ denotes the digraph obtained from $D$ by reversing all the arcs. For a set of arcs $F \subseteq A(D)$, $F^R$ denotes the set $\{(u,v) : (v,u) \in F\}$. The following result is well-known. \[obs1\] A set of arcs $F$ is a feedback arc set of the digraph $D$ if and only if $F^R$ is a feedback arc set of $D^R$. Cycles and Feedback Arc Sets ============================ In this section, we show that for each non-negative integer $k$, every bipartite tournament either contains $k$ arc-disjoint cycles or has a feedback arc set of size at most $7(k-1)$. Consider a digraph $D$. Let $\mathcal{P}(D)$ denote the set of induced paths on four vertices in $D$. We define two equivalence relations $\sim_2$ and $\sim_3$ on $\mathcal{P}(D)$ as follows. For any two paths $P$ and $P'$ in $\mathcal{P}(D)$, - $P \sim_2 P'$ if and only if paths $P$ and $P'$ differ only in the second vertex. - $P \sim_3 P'$ if and only if paths $P$ and $P'$ differ only in the third vertex. For every triple $(x,y,z)$ of vertices in $D$, let $E_{D,2}[x,y,z]$ denote the $\sim_2$-equivalence class consisting of paths in $\mathcal{P}(D)$ with $x$ as first vertex, $y$ as third vertex and $z$ as fourth vertex. Similarly, let $E_{D,3}[x,y,z]$ denote the $\sim_3$-equivalence class consisting of paths in $\mathcal{P}(D)$ with $x$ as first vertex, $y$ as second vertex and $z$ as fourth vertex. 
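These equivalence classes can be enumerated directly on small digraphs. The sketch below (plain Python; the function names are ours) lists the induced paths on four vertices and extracts the class keys: a $\sim_2$-class is identified by its (first, third, fourth) vertices, a $\sim_3$-class by its (first, second, fourth) vertices.

```python
from itertools import permutations

def induced_p4s(V, A):
    """All induced paths (v1, v2, v3, v4): the three consecutive arcs exist
    and no other pair among the four vertices is adjacent in either direction."""
    Aset = set(A)
    adj = lambda u, v: (u, v) in Aset or (v, u) in Aset
    return [p for p in permutations(V, 4)
            if (p[0], p[1]) in Aset and (p[1], p[2]) in Aset
            and (p[2], p[3]) in Aset
            and not (adj(p[0], p[2]) or adj(p[0], p[3]) or adj(p[1], p[3]))]

def classes_2(V, A):
    """Keys (x, y, z) of the ~2-classes E_{D,2}[x,y,z]: x first, y third, z fourth."""
    return {(p[0], p[2], p[3]) for p in induced_p4s(V, A)}

def classes_3(V, A):
    """Keys (x, y, z) of the ~3-classes E_{D,3}[x,y,z]: x first, y second, z fourth."""
    return {(p[0], p[1], p[3]) for p in induced_p4s(V, A)}
```

Reversing every arc maps an induced path $(v_1,v_2,v_3,v_4)$ to the induced path $(v_4,v_3,v_2,v_1)$, so the $\sim_2$-classes of $D$ correspond to the $\sim_3$-classes of $D^R$ with the key reversed.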
By definition of $\sim_2$ and $\sim_3$, we have the following observation. \[obs2\] For every triple $(x,y,z)$ of vertices in $D$, $E_{D,2}[x,y,z] = E_{D^R,3}[z,y,x]$. Next, for a vertex $v \in V(D)$, define the following quantities. - $\operatorname{first}_D(v)$ is the number of $\sim_2$-equivalence classes consisting of paths in $\mathcal{P}(D)$ with $v$ as the first vertex. - $\sec_D(v)$ is the number of $\sim_3$-equivalence classes consisting of paths in $\mathcal{P}(D)$ with $v$ as the second vertex. \[obs3\] $\underset{v \in V(D)}\sum \operatorname{first}_D(v)$ is the number of $\sim_2$-equivalence classes on $\mathcal{P}(D)$ and $\underset{v \in V(D)} \sum \sec_D(v)$ is the number of $\sim_3$-equivalence classes on $\mathcal{P}(D)$. Observations \[obs2\] and \[obs3\] lead to the following. \[obs4\] $\underset{v \in V(D)}\sum \operatorname{first}_D(v) = \underset{v \in V(D^R)}\sum \sec_{D^R}(v)$ and $\underset{v \in V(D)}\sum \sec_{D}(v) = \underset{v \in V(D^R)} \sum \operatorname{first}_{D^R}(v)$. For a bipartite digraph $D[X,Y]$, let $\Lambda(D)$ denote the number of pairs $u, v$ of vertices in $D$ with $u \in X$, $v \in Y$ and neither $(u,v) \in A(D)$ nor $(v, u) \in A(D)$. Now, we relate the size of a minimum feedback arc set of a bipartite digraph $D$ that has no 4-cycles to the number $\Lambda(D)$. Similar results are known for digraphs that have no 3-cycles [@Chen15; @Chudnovsky08; @Dunkum11] and the proofs crucially use a double counting argument concerning induced paths on three vertices. Our proof for bipartite digraphs with no 4-cycles is along similar lines but involves a more intricate counting argument related to induced paths on four vertices. \[lem:bt-ep\] Let $D[X,Y]$ be a bipartite digraph in which for every pair $u \in X$, $v \in Y$ of distinct vertices, at most one of $(u,v)$ or $(v,u)$ is in $A(D)$. If $D$ has no 4-cycles, then we can compute a feedback arc set of $D$ of size at most $\Lambda(D)$ in polynomial time. 
We will prove the claim by induction on $|V(D)|$. The claim trivially holds for $|X|<2$ or $|Y| < 2$, as in these cases the empty set is a feedback arc set. Hence, assume that $|X| \geq 2$ and $|Y| \geq 2$. First we apply a simple preprocessing rule on $D$: if $D$ has a vertex $v$ that has either no in-neighbours or no out-neighbours, then delete $v$ from $D$ and let $D'$ denote the resulting digraph. As $v$ is not in any cycle of $D$, any feedback arc set of $D'$ is a feedback arc set of $D$. [**Case 1:**]{} Suppose $\sum_{v \in V(D)} \operatorname{first}_D(v) \leq \sum_{v \in V(D)} \sec_D(v)$. Then, there is a vertex $u \in V(D)$ such that $\operatorname{first}_D(u) \leq \sec_D(u)$. Without loss of generality, assume that $u \in X$. Consider the following sets of vertices of $D$: $Y_1 = N^{-}(u)$, $Y_2 = N^{+}(u)$, $Y_3=Y \setminus (Y_1 \cup Y_2)$, $X_2=N^+(Y_2)$ and $X_1=X \setminus (X_2 \cup \{u\})$. The following are properties of these sets. - $Y_1$ and $Y_2$ are non-empty due to the preprocessing. - There is no arc from a vertex $x \in X_2$ to a vertex $y \in Y_1$. Otherwise, $(u,y',x,y)$ is a 4-cycle, where $y' \in Y_2$ and $x \in N^+(y')$. - By the definition of $X_1$, there is no arc from a vertex $y \in Y_2$ to a vertex $x \in X_1$. Let $D_1$ denote the subgraph $D[X_1, Y_1 \cup Y_3 ]$ and $D_2$ denote the subgraph $D[X_2 \cup \{u\}, Y_2]$. As $D_1$ and $D_2$ are vertex-disjoint subgraphs of $D$, we have $\Lambda(D) \geq \Lambda(D_1) + \Lambda(D_2)$. Further, $\sec_D(u)$ is the number of non-adjacent pairs $a, c$ such that $a \in Y_1$ and $c \in X_2$. As $a \in V(D_1)$ and $c \in V(D_2)$, we have $\Lambda(D) \geq \Lambda(D_1) + \Lambda(D_2) + \sec_D(u)$. Let $E$ denote the set of arcs $(x,y)$ such that $x \in X_2$ and $y \in Y_3$. Let $F_1$ and $F_2$ be feedback arc sets of $D_1$ and $D_2$, respectively. We claim that $F = F_1 \cup F_2 \cup E$ is a feedback arc set of $D$. The sets $F_1$ and $F_2$ are obtained inductively and thus $F$ can be computed in polynomial time. 
Now, if there exists a cycle $C$ in the graph obtained from $D$ by deleting the arcs in $F$, then $C$ has an arc $(p,q)$ with $p \in V(D_1)$ and $q \in V(D_2)$ and an arc $(r,s)$ with $r \in V(D_2)$ and $s \in V(D_1)$. The following are properties of the vertices $r$ and $s$. - It is not possible that $r \in Y_2$ and $s \in X_1$, by the definition of $X_1$. - It is not possible that $r \in X_2$ and $s \in Y_1$, as $D$ has no 4-cycle. Therefore, it follows that $(r,s) \in E$, which leads to a contradiction. Hence, $F$ is a feedback arc set of $D$ of size $|F|=|F_1|+|F_2|+|E|$. Also, $|E| = \operatorname{first}_D(u)$ and, by the choice of $u$, we have $\operatorname{first}_D(u) \leq \sec_D(u)$. Hence, we can conclude that $|F| \leq |F_1|+ |F_2| + \sec_D(u)$ and, by the induction hypothesis, $|F_1| \leq \Lambda (D_1)$ and $|F_2| \leq \Lambda (D_2)$. It now follows that $|F| \leq \Lambda (D)$.\ [**Case 2:** ]{} Suppose $\sum_{v \in V(D)} \operatorname{first}_D(v) > \sum_{v \in V(D)} \sec_D(v)$. In this case, from Observation \[obs4\], it follows that $\sum_{v \in V(D)} \operatorname{first}_{D^R}(v) < \sum_{v \in V(D)} \sec_{D^R}(v)$. Then, there is a vertex $u \in V(D^R)$ such that $\operatorname{first}_{D^R}(u) \leq \sec_{D^R}(u)$. By an argument similar to that in Case 1, it follows that $D^R$ (and hence $D$, by Observation \[obs1\]) has a feedback arc set of size at most $\Lambda(D^R)=\Lambda(D)$. This leads to the following main result. For every non-negative integer $k$, every bipartite tournament $T$ either contains $k$ arc-disjoint 4-cycles or has a feedback arc set of size at most $7(k-1)$ that can be obtained in polynomial time. Suppose $\mathcal{C}$ is a maximal set of arc-disjoint 4-cycles in $T$ with $|\mathcal{C}|\leq k-1$ (if no such set exists, then $T$ contains $k$ arc-disjoint 4-cycles and we are done). Let $D$ denote the digraph obtained from $T$ by deleting the arcs that are in some 4-cycle in $\mathcal{C}$. By the maximality of $\mathcal{C}$, $D$ has no 4-cycle, and $\Lambda(D) \leq 4(k-1)$. 
From Lemma \[lem:bt-ep\], we know that $D$ has a feedback arc set $F$ of size at most $4(k-1)$. Next, consider a topological ordering $\sigma$ of $D-F$. Each 4-cycle of $\mathcal{C}$ contains at most three arcs which are backward in $\sigma$ (a cycle cannot consist entirely of backward arcs). If we denote by $F'$ the set of all the arcs of the 4-cycles of $\mathcal{C}$ which are backward in $\sigma$, then we have $|F'| \leq 3(k-1)$ and $F \cup F'$ is a feedback arc set of $T$. Therefore, $T$ has a feedback arc set of size at most $7(k-1)$. [10]{} N. Alon. . SIAM J. Discrete Math., 20(1), 137–142, 2006. J. Bang-Jensen and G. Gutin. . Congressus Numerantium, 115:131–170, 1996. J. Bang-Jensen and G. Gutin. . Springer-Verlag London, 2009. S. Bessy, M. Bougeret, R. Krithika, A. Sahu, S. Saurabh, J. Thiebaut and M. Zehavi. . In 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019), pages 27:1–27:14, 2019. P. Charbit, S. Thomass[é]{} and A. Yeo. . Comb Probab Comput., 16(1), pages 1–4, 2007. K. Chen, S. Karson, D. Liu and J. Shen . Electronic Journal of Linear Algebra, 28, 117-123, 2015. M. Chudnovsky, P. Seymour and B. Sullivan . Combinatorica, 28, 1–18, 2008. V. Conitzer. . In 21st National Conf. on Artificial Intelligence - Volume 1, pages 613–619, 2006. M. Dunkum, P. Hamburger and A. P[ó]{}r . Combinatorica, 31, Article 55, 2011. J. Guo, F. H[ü]{}ffner and H. Moser. . Information Processing Letters, 102(2), 62 - 65, 2007. A. S. Jacob and R. Krithika. . In 14th International Workshop on Algorithms and Computation (WALCOM 2020), pages 249–260, 2020. J. McDonald, G. J. Puleo and C. Tennenhouse. . Preprint arXiv:1806.08809v2, 2018. J.W. Moon. . Holt, Rinehart and Winston, New York, 1968. [^1]: This result is mentioned in [@walcom20] as Theorem 2, however, the proof has a gap.
--- author: - | Vladislav Khramtsov$^{1}$, Alexey Sergeyev$^{1,2}$, Chiara Spiniello$^{3,4}$, Crescenzo Tortora$^{5}$, Nicola R. Napolitano$^{3,6}$,\ Adriano Agnello$^{7}$, Fedor Getman$^{3}$, Jelte T. A. de Jong$^{8}$, Konrad Kuijken$^{9}$, Mario Radovich$^{10}$,\ HuanYuan Shan$^{11}$, and Valery Shulga$^{12,2}$ date: 'Submitted on June 03, 2019 ' title: 'KiDS-SQuaD II: Machine learning selection of bright extragalactic objects to search for new gravitationally lensed quasars' --- [The KiDS Strongly lensed QUAsar Detection project (KiDS-SQuaD) aims at finding as many previously undiscovered gravitationally lensed quasars as possible in the Kilo Degree Survey. This is the second paper of this series, in which we present a new, automatic object classification method based on machine learning techniques.]{} [The main goal of this paper is to build a catalogue of bright extragalactic objects (galaxies and quasars) from the KiDS Data Release 4, with minimum stellar contamination, preserving the completeness as much as possible, to then apply morphological methods to select reliable gravitationally lensed quasar candidates.]{} [After testing some of the most commonly used machine learning algorithms, namely decision-tree-based classifiers, we decided to use CatBoost, which was specifically trained with the aim of creating a sample of extragalactic sources as clean of stars as possible. We discuss the input data, define the training sample for the classifier, give quantitative estimates of its performance, and finally describe the validation results with *Gaia* DR2, AllWISE, and GAMA catalogues. ]{} [ We have built and make available to the scientific community the KiDS Bright EXtraGalactic Objects catalogue (KiDS-BEXGO), specifically created to find gravitational lenses. It is made of $\approx 6$ million sources classified as quasars ($\approx 200\,000$) and galaxies ($\approx 5.7$M), up to $r<22^m$. 
From this catalogue we selected ’Multiplets’: close pairs of quasars, or galaxies surrounded by at least one quasar, presenting the 12 most reliable gravitationally lensed quasar candidates to demonstrate the potential of the catalogue, which will be further explored in a forthcoming paper. We compared our search to the previous one, presented in the first paper of this series, showing that employing a machine learning method decreases the stellar contamination among the gravitationally lensed quasar candidates.]{} [Our work presents the first comprehensive identification of bright extragalactic objects in KiDS DR4 data, which is for us the first necessary step towards finding strong gravitational lenses in wide-sky photometric surveys, but it also has many other, more general astrophysical applications.]{} Introduction ============ Strong gravitationally lensed quasars are very rare objects, especially quadruply lensed ones [@Oguri10]. However, it has been clear since the first discovery [@Walsh1979] that these systems are extremely useful tools for observational cosmology, cosmography and extragalactic astrophysics. When the light coming from a distant quasar passes close to a massive galaxy, it gets deflected and forms multiple images of the same source, which are often also magnified, becoming brighter. The light of these different images travels along different paths, and the light-curves are thus offset by a measurable time-delay that depends on the cosmological distances between the observer, the lens and the source, and on the gravitational potential of the lens [@Refsdal64]. These time-delays return a one-step measurement of the expansion history of the Universe (primarily $H_0$), and also allow one to set constraints on the dark matter halo of the lens galaxy [@Suyu14]. 
Moreover, on top of the deflection caused by the lens, the light of the quasar can also be deflected by the gravitational field of other low-mass bodies moving along the line-of-sight ($10^{-6}<m/M_{\odot}<10^{3}$, e.g., single stars, brown dwarfs, planets, globular clusters, etc.). This phenomenon, known as microlensing, can be very effective for studying the inner structure of the source [@Anguita08; @Motta12; @Sluse11; @Guerras2013; @Braibant14], for estimating the masses of these compact bodies [@Kochanek04] or for studying the stellar content of the lens galaxies [@Schechter02; @Bate11; @Oguri14]. Unfortunately, in all these cases, and especially for cosmography, the biggest limitation lies in the relatively small number of confirmed lenses. Thus, taking advantage of the high spatial resolution ($0.2\arcsec$/pixel, @VST1) and stringent seeing constraints ($<0.8\arcsec$ in $r$-band) of the Kilo Degree Survey (KiDS, @kids_main [@KiDS3; @KiDS4]), we have recently started the KiDS Strongly lensed QUAsar Detection project, KiDS-SQuaD, presented in @Spiniello18, hereafter Paper I. We are carrying out a systematic census of lensed quasars with the final goal of building a statistically relevant sample of lenses, covering a wide range of parameters (geometrical configurations, deflector masses and morphologies, redshifts and nature of the sources) to study the dark matter halos of lens galaxies up to ${\rm z}\sim1$ [@Schechter02; @Bate11; @Suyu14] as well as the QSO-host galaxy co-evolution up to ${\rm z}\sim2$ [e.g., @Ding2017], to put constraints on the inner structure of the quasar accretion disk [size and thermal profile; e.g. @Anguita08; @Motta12] as well as the broad-line-region geometry [e.g., @Sluse11; @Guerras2013; @Braibant14], and finally for precise cosmography (e.g., @Suyu17). 
The first step to find gravitationally lensed quasars is, obviously, to classify objects, selecting quasars and galaxies while minimizing as much as possible the stellar contamination. More generally, the identification of extragalactic objects, quasars and galaxies at all redshifts, is a very important task that can help answer a wide range of astrophysical and cosmological questions, such as the relationship between active galactic nuclei (AGN) and host galaxies, the cosmic evolution of supermassive black holes [@Kauffmann00; @Haehnelt00; @Wyithe03; @Hopkins06; @Shankar09; @Shen09] or the formation and evolution of galaxies [@Gama1] across cosmic time. Spectroscopy is without any doubt a powerful way to unambiguously identify and classify extragalactic objects. The most comprehensive dataset of spectroscopically confirmed quasars to date is the Sloan Digital Sky Survey [@Paris18], and forthcoming spectroscopic surveys will exponentially increase the number of confirmed quasars, e.g. the Dark Energy Spectroscopic Instrument [DESI, @DESI] and the 4-meter Multi-Object Spectroscopic Telescope [4MOST, @4most]. However, spectroscopy comes at a price: it is time-expensive and effective only on small patches of the sky. Deep wide-field photometric sky surveys, on the other hand, nowadays offer an unprecedented opportunity to carry out this task on a much larger portion of the sky, modulo the development and use of sophisticated automatic methods (e.g., Decision Trees, @Quinlan86, Naive Bayes, @Duda73, Neural Networks, @Rumelhart86, Support Vector Machines, @vapnik [@Cortes95]) to process the very large amount of data produced. 
KiDS, in particular, is the ideal platform to identify and classify objects and, more specifically, to search for strong gravitational lenses, because of its excellent (for ground-based observation standards) seeing quality (mean $r$-band FWHM $\approx 0.70''$), depth ($25^m$ in $r$-band) as well as its wide field of view ($1350$ deg$^2$ have been covered and will be released with DR5). The power of KiDS in object classification has already been shown by @Nakoneczny2019 [hereafter N19,] who built and released a catalogue of quasars from KiDS DR3 ($440$ deg$^2$), classified with a random forest supervised machine learning model trained on Sloan Digital Sky Survey DR14 [SDSS DR14, @SDSS14] spectroscopic data. The approach we undertake in this paper is similar to the one presented in N19, as we also use KiDS data as input and SDSS as training sample, although we fine-tune and customize our pipelines to be more suitable for the search for lensed quasars. Moreover, the biggest difference between the two works is that we now have photometry available in nine bands. In fact, the optical data in KiDS are now (starting from DR4) complemented by infrared data from the VISTA Kilo-degree INfrared Galaxy (VIKING) survey, covering the same KiDS area in the $Z,Y,J,H,K_s$ near-infrared bands [@VIKING]. Thus, the KiDS$\times$VIKING photometric dataset provides a unique deep, wide coverage in nine bands ($u,g,r,i,Z,Y,J,H,K_s$), which has been proved to be extremely effective in separating quasars from stars using photometric characteristics (e.g. @Carrasco15). Indeed, one of the limitations of the first paper of this series was the manual optical colour selection we used to select quasar-like objects. In fact, in this way, the number of final lensed quasar candidates highly depends on the (somewhat arbitrary and often calibrated on previous findings) selection criteria. 
Moreover, this number is generally of the order of $10$–$30$ per deg$^2$, making the necessary second step of visual inspection very difficult and time-consuming. Thus, to make our search suitable for the larger amount of data coming from the fourth (and in the future the fifth) KiDS Data Release [@KiDS4] and also from new deep wide-field surveys, e.g. Euclid [@euclid] or LSST [@lsst], in this second paper we developed a method based on machine learning (ML) and on the combination of VIKING and KiDS data that allows us to more efficiently pinpoint high-redshift systems while eliminating as much stellar contamination as possible. ML methods are, in fact, more effective in identifying quasars (and, more generally, extragalactic sources, @Eyer05 [@Ball2006; @Elting2008; @Kim2011; @Gieseke2011; @ksz2012; @Brescia2015; @Carrasco15; @Peters15; @Krakowski2016; @Krakowski2018; @Viquar2018; @Khramtsov2018; @Nolte2019; @Bai2019]) than any manual colour cut. They allow one to explore, with little human intervention and affordable computing time, large datasets, thus selecting candidates with less stringent pre-selection criteria, maximizing the precision (recovery rate) and minimizing the stellar contamination. Recently, a specific class of classifiers, ensembles of decision trees, has shown its advantages in the identification of extragalactic sources, and in particular quasars [@Ball2006; @Carrasco15; @Hernitschek2016; @Schindler2017; @Schindler2018; @Sergeyev2018; @Jin2019], also, specifically, as already mentioned, within the KiDS collaboration (N19). 
Moreover, ML methods to search for strong gravitational lenses already exist, although we note that the large majority of them are built to find galaxy-galaxy lenses rather than lensed quasars, using a deep learning approach (e.g., @Cabanac07 [@Paraficz16; @Lanusse18; @Metcalf18; @Petrillo17; @Petrillo19a; @Petrillo19b]; but see also @Agnello15ml [@Ostrovski17; @KroneMartins2018; @Jacobs19] for lensed quasars). Moreover, many of these methods are based on the direct analysis of imaging data, rather than working at the catalogue level as we do in this paper. Nevertheless, we decided to build our own classifier in order to be able to fully customize the characteristics and parameters of the algorithm, also given the completeness and purity we require for the resulting catalogue. It is of crucial importance for us to build a catalogue of extragalactic objects (not only quasars, since in some cases the deflector can give a non-negligible contribution to the light of the whole system, or the multiple images can be blended together and thus produce in the KiDS catalogue an ’extended’ match rather than many ’point-like’ ones) that is as clean of stars as possible and, at the same time, as complete as possible. Thus, developing our own tool and releasing the resulting catalogue is the best possible choice. As the main result of the novel classification pipeline developed specifically for this task, we present here the catalogue of Bright EXtraGalactic Objects from KiDS DR4 – KiDS-BEXGO, which we then use to search for gravitationally lensed quasars, using some of the methods and ideas already presented in Paper I. This paper is organized as follows: in Section \[sec:catalog\] we give a general overview of the catalogues and data we use. In Section \[sec:classification\] we discuss the method to classify objects and isolate extragalactic ones, using optical and infrared deep photometry, and we introduce and describe our own classification pipeline. 
In Section \[sec:results\] we present the result of such a pipeline, the KiDS-BEXGO catalogue, and different validation techniques, based on external data, to test the performance of the classifier. Finally, in Section \[sec:lensing\] we focus on ’Multiplets’: close pairs of quasars, or galaxies surrounded by at least one quasar (within $5''$), which represent the primary input catalogue for our search for strong gravitational lenses. We present our conclusions and future perspectives in Section \[sec:conclusions\]. In addition, we present in the Appendix a direct comparison of three different machine learning methods, all based on decision trees. Data overview {#sec:catalog} ============= The input catalogue from KiDS DR4 --------------------------------- The Kilo-Degree Survey [KiDS, @kids_main] is a European Southern Observatory (ESO) public survey, carried out with the VLT Survey Telescope [VST, @VST1; @VST2], that covers $1350$ deg$^2$ on the sky in four optical broad-band filters, namely $u,g,r,i$. Optical data from KiDS are complemented with data from the VISTA Kilo-Degree Infrared Galaxy Survey [VIKING, @VIKING], which has already completed the observations in five near-infrared bands ($Z,Y,J,H,K_s$) within the same region of the sky. The latest KiDS data release (KiDS DR4, @KiDS4) encompasses all the survey tiles ($1006$ deg$^2$ in total) already released in the previous KiDS data releases [@KiDS3] with additional tiles covering $\approx 550$ new deg$^2$, thus doubling the area coverage of DR3. In addition, infrared photometric data from VIKING are also included in the KiDS DR4 release for the aperture-matched sources [@Kids_VIKING]. Typical magnitude limits for each band are 24.2, 25.1, 25.0, 23.6, 22.7, 22.0, 21.8, 21.1, 21.2 (AB magnitudes, $5\sigma$ in $2''$ aperture), with seeing generally below $1.0''$, in the $u,g,r,i,Z,Y,J,H,K_s$ bands [@Kids_VIKING; @KiDS4]. 
We started from the KiDS multi-band DR4 catalogue and selected $\approx 45$M sources that were detected in the $r$-band, which is the one with the best seeing ($0.7''$), and have a match in each of the other eight bands too. However, for the implementation of the classification method presented in this paper, we do not use the full catalogue but limit ourselves to 9583913 sources with $r<22^m$, covering $\approx$1000 deg$^2$ in all of the 9 filters, with small errors on each magnitude (we remove all the sources with `MAGERR_GAAP`$>1^m$ in any of the bands). In fact, as in N19, we also use spectroscopically confirmed objects from the Sloan Digital Sky Survey Data Release 14 [SDSS DR14, @SDSS14] as training sample and therefore limit our inference to bright objects to avoid any extrapolation to unseen regions of the feature space. Throughout this paper we always make use of the Gaussian Aperture and PSF [GAaP, @gaap1; @gaap2] magnitudes, corrected for extinction. Finally, as we describe in more detail below, we also use the magnitude-dependent parameter `CLASS_STAR` for the object classification. This was already proven to be a very important feature in N19. The histogram of the $r$-band magnitude distribution for the whole KiDS DR4 and for the spectroscopically confirmed objects that we use as training sample is shown in Figure \[fig:rmag\]. The training sample will be presented in detail in the next Section. The training sample from SDSS DR14 ---------------------------------- To provide accurate classification, we need to use a large sample of objects with known true classes. Such data can be obtained from spectroscopic surveys; for our purpose, following the approach of N19, we use the SDSS DR14[^1] catalogue. 
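As an illustration, the selection above (detection in all nine bands, $r<22^m$, and magnitude error below $1^m$ in every band) can be expressed as a simple per-row filter. This is a sketch only: the column names used here (`MAG_u`, `MAGERR_u`, …) are placeholders, not the actual KiDS DR4 column names.

```python
BANDS = ["u", "g", "r", "i", "Z", "Y", "J", "H", "Ks"]

def keep_source(row, r_limit=22.0, max_err=1.0):
    """Input-catalogue cut: nine-band detection, r < 22 mag,
    and magnitude error below 1 mag in every band."""
    # require a measurement in every band
    if any(row.get(f"MAG_{b}") is None for b in BANDS):
        return False
    # bright-end cut on the r-band magnitude
    if row["MAG_r"] >= r_limit:
        return False
    # per-band photometric-error cut
    return all(row[f"MAGERR_{b}"] <= max_err for b in BANDS)
```

Applied to a stream of catalogue rows, this reproduces the kind of cut described in the text, with the thresholds exposed as parameters.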
The SDSS DR14 catalogue contains 4311571 spectroscopically confirmed objects, classified on the basis of their spectra into three main classes: galaxies (2546963 objects), quasars (824548 objects) and stars (940060 objects), which we preserve in our classification, setting up a 3-label classification system, as we will describe in detail in Section \[sec:classification\]. We assume that a quasar (hereafter, `QSO`) is a point-like source[^2] with `QSO` class and `QSO` or `BROADLINE` subclass; a normal galaxy (hereafter, `GALAXY`) is an extended source that has a `GALAXY` class label without the `STARFORMING BROADLINE` and `STARBURST BROADLINE` subclasses. The star labeling in SDSS does not have subclasses, so we simply assumed that a source is a star (hereafter, `STAR`) if it has the class `STAR` in the catalogue. We cross-matched this catalogue with the catalogue of bright sources from KiDS DR4 described above, using a 1.0 arcsec radius, obtaining as a result a training sample composed of 183048 sources. However, some of them have dubious spectroscopic classifications. A careful cleaning is very important for our scientific purpose, but an automatic masking procedure eliminating all the dubious cases, which is often applied in classification pipelines to reach the highest possible purity, is not appropriate here because it might cause the loss of interesting objects with complex morphology and photometric properties, which may actually be good lens candidates[^3]. We therefore had to pay particular attention to the cleaning procedure, which we carried out in a rather manual and interactive way. In particular, we used this first “unclean” training sample to train the classifiers (which we will describe in the next section). We then visually inspected all the misclassified objects (of the order of a few hundred)[^4]. 
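The labeling rules above can be summarized as a small mapping function. This is our own sketch of the rules as stated in the text (the function name and the boolean morphology flag are ours); sources matching none of the rules are left unlabeled.

```python
def sdss_label(cls, subclass, point_like):
    """Map an SDSS DR14 (class, subclass) pair plus a morphology flag
    to the 3-label scheme; returns None for sources we do not label."""
    if cls == "QSO" and point_like and subclass in ("QSO", "BROADLINE"):
        return "QSO"
    if (cls == "GALAXY" and not point_like
            and subclass not in ("STARFORMING BROADLINE",
                                 "STARBURST BROADLINE")):
        return "GALAXY"
    if cls == "STAR":
        return "STAR"  # no subclass condition for stars
    return None
```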
Interestingly, during the inspection we discovered that SDSS DR14 indeed contains a few objects with wrong labels, possibly due to a somewhat imperfect labeling procedure (among these we found a few white dwarfs and a few compact galaxies labeled as `QSO`, blended sources where one of the components is a star, and stars projected onto a galaxy), and realized that classifiers trained on such a dataset can inherit these mistakes. Thus we removed all the sources whose assigned class did not fit their imaging and/or spectral properties and repeated the whole classification pipeline a few times (testing it also with different classifiers, see next section). We note that the total amount of removed sources does not exceed a few percent of the training sample, but that, still, the classification results before and after this iterative cleaning procedure are not identical, with the classifier trained on the “clean” training sample producing better results in terms of purity. Finally, another test we did to get a better handle on the importance of our assumptions in building the input catalogue and the best training sample for it was to change the chosen threshold on the photometric errors of each single band. In particular, we tested three different upper limits for the errors on the magnitudes of the training sample: $1^m$, $0.5^m$ and $0.3^m$. As for the cleaning, we trained the classifiers three times with three different training sets made of objects passing these three thresholds and then compared the performances. We found negligible differences in purity and completeness (at the 0.1% level) in the classification of the training sample. Then we also compared the predictions for the whole input catalogue obtained using the three different training samples, finding, again, no significant differences in the distribution of the sources among the classes. 
Thus, we decided to use the training sample with the largest number of objects and the same error threshold as the input catalogue ($1^m$). In conclusion, after removing the sources with 1) bad spectroscopic redshift estimation (for which $\texttt{zWarning}>0$), 2) one or more of the 9 optical-infrared magnitudes missing, 3) high photometric errors ($>1^m$ in any filter) and 4) accidental duplicates introduced by our cross-matching procedure, we ended up with 121375 sources, of which 24307 are classified as `STAR`, 12152 as `QSO`, and 84917 are labeled as `GALAXY`. This catalogue, which for the rest of the paper we will call SDSS$\times$KiDS, will be used in Section \[sec:learning\_process\] as training sample for the classifiers. ![Histogram of the $r$ magnitudes for the full KiDS DR4 catalogue (blue) and the training sample from SDSS$\times$KiDS (red).[]{data-label="fig:rmag"}](rmag_distr.png) Classification {#sec:classification} ============== The main idea behind the classification problem that we have to solve is that it is possible to separate objects into stars, quasars and galaxies on the basis of their photometry, because each family of objects has specific photometric characteristics that differ from those of objects belonging to a different family. Thus, our first task was to define the feature space where the quasars, galaxies and stars will be located in three different, well-separated regions. We identify optical-infrared colours as the most suitable features for object classification; since we have 9 magnitudes ($u,g,r,i,Z,Y,J,H,K_s$), there are 36 colours as pairwise differences of the various magnitudes. We note that this approach is physically motivated and model-driven. N19 showed that, although magnitudes contribute less to classification than colors and the stellarity index, the output based on colors only was different from that using also magnitudes. 
However, we have many more colors at our disposal, thanks to the five additional infrared bands available to us, which make it easier to properly separate stars from quasars and galaxies. Moreover, considering the fact that it is somewhat hard to find simple cuts in magnitudes that allow one to separate different classes of objects, we decided not to use magnitudes and to consider only colors, which are more effective in separating the different classes. Also, we add the `CLASS_STAR` flag to the feature set, corresponding to the ’stellarity’ of a source and derived from the KiDS $r$-band images, the ones with the best seeing. This KiDS parameter is a continuous measure of whether the object is extended (`CLASS_STAR`=0) or point-like (`CLASS_STAR`=1) and has been proved to be a very powerful feature in the classification (e.g., N19). As shown in @kids_main (Fig. 8), the `CLASS_STAR` parameter depends on the signal-to-noise ratio (SNR) and is an effective way to separate stars from galaxies only for data with SNR$>50$. Thus, an alternative for selecting the input data to classify, which would probably allow us to also investigate fainter magnitudes, might be to put a cut on the SNR rather than on the $r$-band magnitude. However, at the present stage, the more severe cut in magnitude is necessary given the limitations of the training sample that we use. The colors and stellarity values of the sources correspond to their coordinates in the high-dimensional feature space in which the classification is performed. As already specified in the introduction, we decided to define a 3-class problem, where the classes correspond to stars, quasars and galaxies. In fact, this 3-class labeling allows us to get the purest identification of quasars, unlike the 2-class (stars and quasars, assuming galaxies as extended sources) or the 4-class (stars, quasars, regular galaxies and galaxies with strong emission lines, e.g. starforming galaxies) schemes. 
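The feature construction described above — all $\binom{9}{2}=36$ pairwise colours plus the `CLASS_STAR` stellarity — can be sketched in a few lines (the function name is ours):

```python
from itertools import combinations

BANDS = ["u", "g", "r", "i", "Z", "Y", "J", "H", "Ks"]

def feature_vector(mags, class_star):
    """Build the 37-dimensional feature vector: the 36 pairwise colours
    (differences of the nine magnitudes) plus the CLASS_STAR stellarity."""
    colours = [mags[a] - mags[b] for a, b in combinations(BANDS, 2)]
    return colours + [class_star]
```

`itertools.combinations` over nine bands yields exactly the 36 unordered pairs, so the feature order is fixed and reproducible across sources.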
Also, we stress that a 2-class problem, which only separates stars from extragalactic sources, is not enough for our scientific purpose. It is true that to find gravitationally lensed quasars we need a catalogue that contains both galaxies and quasars, but we need to be able to separate these two classes properly in order to find suitable lens candidates (see Section \[sec:lensing\]). In the following sections we describe the final classification algorithm and calibration strategy that we use to build our catalogue, which is the end product of a large series of tests and experiments we carried out, also using different classification schemes, detailed in Appendix \[testing\_classifiers\]. In fact, we tested three classifiers based on decision trees (Random Forests and two different Gradient Boosting approaches). In the end, we decided to use CatBoost (@cat1 [@cat2]), one of the two Gradient Boosting [GB, @gradboost] ensemble algorithms that we tested, because it was the one providing the best performance during the training process, as described in the following Section. In general, GB is an ensemble algorithm that constructs a learner by iteratively fitting the gradients of the residuals of the predictions of the previously constructed learners, typically decision trees (gradient boosting decision tree, GBDT). CatBoost, in particular, is a novel, fast, scalable, high-performance open-source GBDT library[^5], developed by Yandex researchers and engineers[^6]. CatBoost has the great advantage, with respect to other Gradient Boosting algorithms, that it uses Ordered Boosting [@cat2] to avoid the overfitting problem, as we highlight in Appendix \[testing\_classifiers\]. To our knowledge, this is the first application of the CatBoost algorithm to an astronomical task. 
Fine-tuning and learning process {#sec:learning_process} -------------------------------- To be able to analyze the performance of a classifier, one needs to define a set of validation data and the type of learning with respect to the training-to-validation division. We therefore split the validation into two groups: out-of-fold (OOF) and hold-out. The hold-out sample consists of a random subsample of the initial training data, which we keep aside to assess the final performance of the classifiers. The remaining part of the initial training sample is used to train the classifiers with a $k$-fold cross-validation procedure. This method is one of the most commonly used ways to train classifiers and directly compare classification algorithms. Briefly, one divides the training sample into $k$ randomly partitioned disjoint equal parts. Then, the classification algorithm trains on $k-1$ parts and the remaining one is used as testing data. This process is repeated $k$ times, each time using a different one of the $k$ disjoint testing subsamples and obtaining a prediction from it. The combination of these $k$ predictions is the so-called OOF sample. Finally, to obtain the prediction on new data, one has to make $k$ predictions, one from each fold’s model, and average them. A schematic view of the learning process is visualized in Figure \[fig:10cv\]. Starting from the SDSS$\times$KiDS sample of 121376 sources, we randomly selected 20% of it as the hold-out sample and used the remaining 80% as the OOF training sample in the cross-validation process[^7]. We stress that, among the classifiers that we tested, CatBoost returned the best performance both on the hold-out and OOF samples. 
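The hold-out/OOF split described above can be sketched in a few lines. The 80/20 split and the use of $k$ disjoint folds follow the text; the function name, the round-robin fold assignment and the fixed seed are our illustrative choices.

```python
import random

def split_holdout_and_folds(n, holdout_frac=0.2, k=10, seed=42):
    """Reserve a random hold-out subset, then partition the remaining
    indices into k disjoint folds for cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_hold = int(round(n * holdout_frac))
    holdout, rest = idx[:n_hold], idx[n_hold:]
    folds = [rest[i::k] for i in range(k)]  # round-robin partition of the rest
    return holdout, folds
```

Each fold in turn serves as the test part while the other $k-1$ are used for training; concatenating the $k$ test-part predictions yields the OOF sample.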
![A schematic view of learning via the 10-fold cross-validation procedure and validation with the OOF and hold-out samples, drawn from the initial training sample.[]{data-label="fig:10cv"}](catboost/10foldCVDiagram4.png) Before training can take place, each classifier has a list of parameters that have to be tuned to reach the highest possible classification quality. This is true for each of the different classifiers that we tested (see the Appendix for more details on each of them). For this purpose, we performed an optimal hyperparameter search on 60% of the initial training sample via a 3-fold cross-validation, with a ’BayesSearch’ method for CatBoost (and XGBoost), while we used a ’GridSearch’ method for RF. While tuning the wide range of hyperparameters for CatBoost, we noticed that the most influential ones were the `max_depth` and the `early_stopping` parameters. We selected $\texttt{max\_depth}=8$ and $\texttt{early\_stopping}=150$ after the BayesSearch, with a maximum number of trees equal to $3500$. Moreover, we applied a weighting criterion to the loss function for the CatBoost model to further decrease the contamination by stars in the extragalactic objects catalogue (see Appendix \[GBDT\] for more details). After the fine-tuning described above, we finally trained CatBoost with the same training and validation data with a $10$-fold cross-validation (see Fig. \[fig:10cv\]). The performance of the final CatBoost model (after the fine-tuning) is presented, as confusion matrices, in Figure \[fig:confmatr\_end\] for the OOF sample (top) and the hold-out sample (bottom). Using the weighting for stars and galaxies, we obtained a significant improvement in the purity of the quasar sample; in fact, comparing the confusion matrices before and after weighting the loss function (see Appendix), one can see that the rate of stars classified as quasars decreased from $\approx 0.60\%$ to $\approx 0.30\%$. 
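For reference, the tuned values quoted above might translate into a configuration like the following. Only `max_depth`, the early-stopping patience and the tree budget come from the text; the loss function and the class-weight values are illustrative assumptions, since the paper does not report them.

```python
# Sketch of a CatBoost-style configuration for the 3-class problem.
# max_depth, early_stopping_rounds and iterations follow the text;
# the loss and the class weights below are hypothetical placeholders
# standing in for the star-penalising weighting described in the paper.
CATBOOST_PARAMS = {
    "loss_function": "MultiClass",
    "max_depth": 8,                 # selected via BayesSearch
    "iterations": 3500,             # maximum number of trees
    "early_stopping_rounds": 150,
    "class_weights": [2.0, 1.0, 1.5],  # illustrative: STAR, QSO, GALAXY
}
```

Up-weighting the `STAR` class in the loss makes star-to-quasar confusions more expensive during training, which is the mechanism behind the purity gain reported above.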
At the same time, CatBoost lost only $< 1.50\%$ of the quasars, thus only marginally decreasing the resulting completeness of this class. ![Importance of the 10 most significant features, calculated with CatBoost in each of the 10 folds. The dispersion of the importance of each feature is represented by horizontal ticks at each bar.[]{data-label="fig:fimportance"}](catboost/sns_imp2.png) ![Confusion matrices for the final version of CatBoost, after weighting the loss function, computed on the OOF sample (*top* panel) and the hold-out sample (*bottom* panel).[]{data-label="fig:confmatr_end"}](conf_matr/conf_matr_final_VK.png) ![Top panel: histogram of the $r$ magnitudes for the three classes of training sources. Bottom panel: the rate of stars misclassified as quasars (red curve) and galaxies (black curve), as a function of their $r$ magnitude. The plots are produced for the full initial training sample.[]{data-label="fig:contaminators"}](catboost/contaminators3.png) Another notable result that we can obtain with CatBoost is the relative importance that each feature has in the classification procedure. Feature importance, calculated with decision trees, measures the frequency with which a certain feature occurs in the trees: the higher the frequency, the larger the contribution of that feature to separating the sources, i.e. the higher its importance. An excellent example of this kind of analysis, together with a full description of the feature-importance technique, is presented in @DIsanto2018. Figure \[fig:fimportance\] shows the 10 most informative features for all of the CatBoost models, trained one by one via 10-fold cross-validation. Among these, `CLASS_STAR` is certainly the most important one, followed by $H-K_s$, $u-g$, $g-r$, and $J-K_s$. 
This is in perfect agreement with a number of results in the literature: e.g., using the $u-g$ vs $g-r$ colour diagram, it is possible to separate low-redshift quasars from stars [@quas_colors1; @Carrasco15]; these features, together with the stellarity, are in fact also the most important ones in other ML-based classifiers (e.g. ). Also, quasars at $z\approx2.5$ and $z\approx5.6$ may be recovered by employing $K$-band information in the colour space [@quas_colors2]. Moreover, it is well known, and also intuitively easy to understand, that morphological information (described here by the `CLASS_STAR` feature) allows us to clearly select galaxies, separating them from stars and quasars in the relatively bright magnitude range that we consider ($r<22^m$). The maximum rate of star contaminants per bin of $r$-band magnitude in the quasar catalogue is $\le0.6\%$ and is expected at the faint end of the sample ($r\approx22^m$). Instead, the stars misclassified as galaxies span the full optical $r$ magnitude range and do not exceed $0.1\%$. This is clearly shown in the bottom panel of Figure \[fig:contaminators\]. Finally, we checked the contamination rate against the signal-to-noise ratio (SNR) in the $u$-band, which is the noisiest one, finding that the relative contamination of stars decreases in each magnitude bin by $\sim2\%$ when we only consider objects with SNR $>100$. For the input data, whose distribution in the feature space should be similar to that of the training set, we expect a contamination of $0.3\%$ from stars and $0.1\%$ from galaxies in the sample of quasars, and $0.1\%$ from stars and $0.6\%$ from quasars in the sample of galaxies. Thus, we conclude that the algorithm is able to correctly classify up to $97.5\%$ of all the bright quasars in the KiDS DR4 data, and up to $99.8\%$ of the galaxies. 
Surely, these estimates are ideal and do not fully reflect the real situation, because they are only based on the training sample, which is a much smaller and simpler case than the full KiDS DR4 catalogue. A more realistic estimate of the quality of our extragalactic catalogue, in terms of purity and completeness, can be obtained by using external data to validate the resulting sample of classified sources, as we will do in Section \[sec:catalog\]. We stress that for our final scientific purpose of finding gravitationally lensed quasars, the most crucial point is to be able to get rid of the stellar contaminants. It is, in fact, of fundamental importance to separate stars from quasars as well as possible, since both are point-like sources. Since the KiDS DR4 input catalogue consists mostly of galaxies, there will be a non-negligible number of galaxies contaminating the quasar sample. However, as we will explain in more detail in Section \[sec:lensing\], strong lenses can be classified as `GALAXY`, if the deflector gives a non-negligible contribution to the light and the multiple images of the quasar are not deblended (thus, the whole system will result in a single extended object with colours that are a mix of the typical colours of galaxies and quasars), or they can be identified as multiple quasars. This is the main reason why we build and inspect a catalogue containing all the extragalactic sources (i.e. `QSO`+`GALAXY`), looking for ’multiplets’ (i.e. sources classified as `QSO` and with at least one nearby `QSO` companion) to find lenses belonging to the latter group, and looking for galaxies with at least two quasars nearby (within $5\arcsec$) to find lenses belonging to the former group. 
\[p\_cut\]

  cut-off   $p_{\texttt{QSO}}$   $p_{\texttt{GALAXY}}$   $p_{\texttt{STAR}}$
  --------- -------------------- ----------------------- ---------------------
  $>0.99$   62425                5538193                 3001287
  $>0.95$   112222               5605735                 3533787
  $>0.90$   128393               5629623                 3611762
  $>0.80$   145653               5655586                 3660368
  $>0.67$   161818               5673902                 3688514
  $>0.50$   181336               5690885                 3711692

  : Number of resulting objects in the classified KiDS DR4 catalogue, for each class and for different probability thresholds used to define class membership.

The Bright EXtraGalactic Objects Catalogue in KiDS DR4 (KiDS-BEXGO) {#sec:results} =================================================================== The outputs of the CatBoost classifier for each object are three numbers which represent the probability of belonging to the different classes of objects: $p_{\texttt{STAR}}$, $p_{\texttt{GALAXY}}$, $p_{\texttt{QSO}}$. In general, we assume that a source belongs to a given class when the probability of being in that class is the highest. With this simple assumption, starting from the input 9.5 million sources in the KiDS DR4 catalogue, we retrieved: 181336 quasars, 3711692 stars and 5690885 galaxies. Using instead a more severe threshold, i.e., considering that an object belongs to a class when the corresponding probability is $>0.8$, we obtain: 5655586 (59%) “sure” galaxies, 3660368 (38%) “sure” stars and 145653 (1.5%) “sure” quasars, plus 122306 objects (1.3%) with an “unsure” classification. We note that for the classification of objects in the final catalogue we stick to the original assumption that a source belongs to the class with the largest probability, without applying any further threshold, since “unsure” extragalactic sources (with $p_{\texttt{GALAXY}}\sim p_{\texttt{QSO}}$) could very well be good lens candidates where the deflector and the quasar images are blended and all contribute to the light of the system. 
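The class-assignment rule just described (argmax over the three probabilities, with an optional "sure" threshold) can be sketched as follows. The function and the `"UNSURE"` label are illustrative names of ours, not identifiers from the paper's pipeline.

```python
def assign_class(p_star, p_qso, p_galaxy, threshold=None):
    """Return the class with the highest probability; if `threshold`
    is given, flag the source instead when no class probability
    exceeds it (mirroring the 'sure' vs 'unsure' split in the text)."""
    probs = {"STAR": p_star, "QSO": p_qso, "GALAXY": p_galaxy}
    best = max(probs, key=probs.get)
    if threshold is not None and probs[best] <= threshold:
        return "UNSURE"
    return best
```

With `threshold=0.8` this reproduces the "sure" counts of the table row $>0.80$; with `threshold=None` it reproduces the plain argmax classification used for the final catalogue.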
![The upper panel shows the confusion matrices for the final version of CatBoost performed on the OOF sample using different probability thresholds in the object classification (see text for more details). The lower panel shows instead the completeness rate of the OOF sample as a function of the adopted probability threshold for each class.[]{data-label="fig:confmatr-pro"}](conf_matr/conf_matr_prob_CS.png "fig:")\ ![The upper panel shows the confusion matrices for the final version of CatBoost performed on the OOF sample using different probability thresholds in the object classification (see text for more details). The lower panel shows instead the completeness rate of the OOF sample as a function of the adopted probability threshold for each class.[]{data-label="fig:confmatr-pro"}](conf_matr/prob-thresh_CS.png "fig:") However, since the levels of completeness and purity depend on the chosen probability, and different scientific cases might require different levels, we provide the number of objects classified in each subsample for six different thresholds in Table \[p\_cut\]. We also show, in Fig. \[fig:confmatr-pro\], the confusion matrices obtained for the OOF training sample for the four highest probability levels (0.8, 0.9, 0.95, 0.99) and the completeness rate as a function of the probability threshold for the three classes, where the thresholds were applied to each of the classes. Finally, Fig. \[fig:triangle\] provides a visualization of the class distribution of the classified KiDS DR4 catalogue with a density plot, where each corner of the triangle represents the maximum probability of belonging to a given class. Objects within the region delimited by the dashed lines are “sure”, according to the threshold given above ($p>0.8$). ![Density plot of the final distribution of sources among the classes in the KiDS DR4 catalogue. 
The triangle corners show the maximum probability of belonging to a given family (left `QSO`, right `GALAXY`, top `STAR`), and colors indicate the number of objects. Dashed lines correspond to the $p=0.8$ threshold.[]{data-label="fig:triangle"}](catboost/triangle_simple.png) In the next section, we will only focus on the objects with $p_{\texttt{QSO}}>p_{\texttt{STAR}}$ or $p_{\texttt{GALAXY}}>p_{\texttt{STAR}}$, which form the Bright EXtraGalactic Objects Catalogue in KiDS DR4 (KiDS-BEXGO) that we will then use in Section \[sec:lensing\] for the gravitational lens search. We discuss here instead three of the many possible validation procedures, for one or more classes of objects, performed using external data (from the *Gaia* astrometric survey, from the AllWISE infrared catalogue, and from the GAMA survey). Using external datasets to validate catalogues obtained with ML techniques is a rather standard procedure, as e.g. already shown in and @Khramtsov18, although in the latter case the PMA [@pma] catalogue of proper motions was used to validate the purity of galaxies. Given the results presented in the tests below, together with the predictions on the hold-out sample, we are very confident that our ML classifier is able to minimize the stellar contamination in the BEXGO catalogue, which is the first, most crucial step when aiming at digging for gravitationally lensed quasars within very large catalogues. ![Distribution of the `CLASS_STAR` parameter for the sources from KiDS DR4, classified as galaxies (black), quasars (red) and stars (blue). 
[]{data-label="fig:classstar_dr4"}](catboost/class_star_dr4_chiara.png) ![image](gaia/k4_qso_pm.png) ![image](gaia/k4_qso_parallax.png) Astrometric validation {#sec:validation} ---------------------- Recently, the *Gaia* [@gc1] astrometric survey has provided an optical realization of the International Celestial Reference System, materialized with $\approx 500\,000$ quasars and named the *Gaia* DR2 Celestial Reference Frame [*Gaia*-CRF2, @gc2b; @gaia_lindegren]. The latest data release, *Gaia* DR2 [@gc2a], introduced 5 astrometric parameters (positions $\alpha, \delta$, proper motions $\mu_{\alpha}, \mu_{\delta}$, and parallaxes $\Bar{\omega}$) for $1.3$ billion sources, covering the whole celestial sphere up to $G<21^m$[^8]. The systematic errors in *Gaia* DR2, estimated with a large sample of quasars, do not exceed $0.03$ mas. Thus, *Gaia* DR2 provides an excellent means for testing the purity of our catalogue, especially for quasars. In fact, one of the main observational properties of quasars that can be used to validate the sample of candidates classified as such is their positional stationarity in the optical wavelength range. Since quasars are very distant sources, they have proper motions of only a few microarcseconds, due to different cosmological effects [@quasar_pm]. We cross-matched the KiDS DR4 sample of 9.5 million classified sources with the *Gaia* DR2 catalogue using a $0''.5$ radius, and retrieved a sample of sources with defined astrometric parameters, of which 52636 were classified as `QSO`, 2369414 as `STAR` and 25346 as `GALAXY`. We checked the proper motions and parallaxes of all the objects classified as quasars and with a match in *Gaia*, to test the assumption that quasars are indeed zero-proper-motion and zero-parallax sources within the systematic errors. The results of this test are shown in Figure \[fig:gaia\]. 
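The stationarity expectation used in this test can be sketched as a simple $n$-sigma consistency check on the five *Gaia* astrometric parameters relevant here. The function name and the 3-sigma default are our illustrative assumptions, not the paper's exact procedure.

```python
def is_stationary(pmra, pmdec, parallax,
                  pmra_err, pmdec_err, plx_err, nsigma=3.0):
    """True if both proper-motion components and the parallax are
    consistent with zero within nsigma, as expected for a quasar."""
    return (abs(pmra) <= nsigma * pmra_err
            and abs(pmdec) <= nsigma * pmdec_err
            and abs(parallax) <= nsigma * plx_err)
```

A star misclassified as `QSO` typically fails this check through a large proper motion or parallax, which is exactly the signature exploited in the figure discussed below.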
The behaviour of the proper motion components is consistent with the estimated contamination of stars within the quasar subsample of the KiDS DR4 catalogue (Fig. \[fig:contaminators\]). In fact, at the faintest magnitudes ($20^m.5 < G < 21^m.0$), the proper motion components deviate strongly, due to a larger contamination from stars. At the bright end of the magnitude range, the standard deviation of the mean of proper motions and parallaxes is also large, but this is rather due to the relatively small number of sources and, possibly, to contamination from stars. Also, it is important to note that the parallax (right plot) is biased for the sample of extragalactic sources towards the value of $-0.029$ mas [@gaia_lindegren]. According to the statistical measures of the astrometric parameters reported in Table \[gaia\_stats\], we can thus conclude that the sample of KiDS DR4 quasars mainly consists of motionless sources. There is some disagreement between the median and mean values of the parameters, which can be explained by the existence, within the sample of quasars, of stars with high proper motions (up to $\pm$ 40 mas yr$^{-1}$ in at least one of the components) and parallaxes (up to 35 mas). A more detailed astrometric analysis, providing a more quantitative estimate of the rate of contaminating stars, cannot be produced without accurate modeling and other external datasets, which goes beyond the purposes of this paper.

\[gaia\_stats\]

  Parameter                           Mean     Median   Standard deviation
  ----------------------------------- -------- -------- --------------------
  $\Bar{\omega}$, \[mas\]             -0.010   -0.014   1.125
  $\mu_{\alpha}$, \[mas yr$^{-1}$\]   -0.028   -0.018   2.104
  $\mu_{\delta}$, \[mas yr$^{-1}$\]   -0.104   -0.005   2.005

  : Basic statistics of the astrometric parameters for KiDS DR4 `QSO`, cross-matched with *Gaia* DR2.

To add a check for the galaxies, we use the very simple argument that, by construction, *Gaia* should contain no galaxies at all [@Robin2012]. 
Thus, all of the objects with high $p_{\texttt{GALAXY}}$ should have no match in *Gaia* DR2. This is, of course, only a rough approximation, since there might be a number of galaxies that *Gaia* still measures, for instance objects with bright cores. Among the $\approx$25000 `GALAXY` sources with a match in *Gaia*, we note that only 1784 have `CLASS_STAR`$>0.5$; these can be point-like sources in KiDS that our algorithm misclassified, or very compact galaxies below the KiDS resolution. In Figure \[fig:classstar\_dr4\] we show the distribution of the `CLASS_STAR` parameter for each class of the full KiDS DR4 catalogue. Assuming that galaxies are all extended objects, we would expect to find in KiDS no objects classified as `GALAXY` with `CLASS_STAR`$>0.5$. However, there are objects that are point-like, according to their `CLASS_STAR` value, but have been classified as `GALAXY` by our algorithm. The number of point-like galaxies from Figure \[fig:classstar\_dr4\] is larger than the couple of thousand predicted by the cross-match with *Gaia*. This slight disagreement might be explained by the better resolution of *Gaia* [@KroneMartins2018]: these sources might be seen as point-like in KiDS, but are extended and thus not identified in *Gaia*. Despite this, the majority of `GALAXY` sources with a *Gaia* match are indeed extended objects in KiDS, or sources near a bright star, as we directly verified on a random sample of $\approx$5000 objects, via the SDSS DR14 Navigate Tool[^9], and then also checking the KiDS $r$-band images, finding, for most of the sources, bright features (e.g., cores, regions in arms, etc.) that could be resolved only for galaxies with significant angular size. Validation of quasars with mid-infrared data from WISE ------------------------------------------------------ Using mid-infrared (MIR) colours is a very effective way to separate quasars from stars and passive galaxies. 
In fact, unlike stars and inactive galaxies, which show approximately zero MIR colours, AGN emission follows a power law in the MIR wavelength range, which produces redder MIR colours [@Elvis1994; @stern2005; @Assef2013]. As largely demonstrated by a number of published works, including Paper I, by using a combination of infrared colour and magnitude cuts it is possible to separate quasars from stars and galaxies (e.g. the two-colour criteria of [@Lacy2004], [@stern2005], and [@Donley2012] with *Spitzer* [@Werner2004] data; or the two-colour criteria of [@Jarrett2011] and [@Mateos2012] or the one-colour criteria of [@stern2012] and [@Assef2013] using data from the Wide-field Infrared Survey Explorer (WISE, @Wright2010)). Here, we decided to use the single infrared one-colour cut $W1-W2 > 0.8$ proposed by @stern2012, using data from WISE, the NASA space mission aimed at mapping the whole sky in 4 MIR bands: $W1,W2,W3,W4$ (3.4, 4.6, 12 and 22 $\mu$m, respectively). This criterion can separate quasars with a resulting purity of $\approx 95\%$, but allows one to select quasars only up to $z\approx3.5$ [@Guo2018]. We caution the reader that a sample selected with this criterion can be contaminated by brown dwarfs, which have similar colours. The more elaborate two-colour criterion of [@Mateos2012] allows one to reduce this contamination, but it requires reliable measurements in the $W3$ (12 $\mu$m) band, which would significantly decrease the total number of matched sources in our case. We note that, in general, it is harder to validate the purity of galaxies in the same way, since stars overlap with (non-active) galaxies in this dimension (see, e.g. Fig. 12 in @Wright2010). Finally, we clarify that in this paper the WISE data are only used to validate the quasar catalogue, not for the lens search. In fact, in Paper I, we highlighted that the bottleneck of our search was indeed the too severe WISE colour pre-selection. 
This could be caused by the fact that, in case the lens and the source are blended in WISE and the deflector gives a large contribution to the light, the colours of this [*effective*]{} source may no longer be quasar-like and indeed move toward lower $W1-W2$ values. Here we rely on a much more solid and trustworthy way to classify objects, our ML-based classifier, and thus we do not need to apply any cut, nor do we need to require a match with WISE to build our candidate list. We cross-matched the SDSS training sample as well as the catalogue of all the classified objects (Section \[sec:catalog\]) with the AllWISE [@Cutri2013] data release using a $2''.0$ radius. The resulting sample consists of 114773 quasars, 3289858 galaxies, and 2020768 stars for the classified objects, and of 8879 quasars, 78816 galaxies, and 13249 stars for the training data. Figure \[fig:allwise\] shows the histograms of the $W1-W2$ colour distribution for the KiDS DR4 objects classified in the three classes (left panels, solid lines), and for the corresponding training sample (right panel, dotted lines), colour-coded by their classification: red for `QSO`, black for `GALAXY` and blue for `STARS`. In general, the full catalogue shows a distribution similar to that of the training sample, with the peak of the `QSO` subsample shifted toward larger $W1-W2$ values, as expected. We note, however, that for the `GALAXY` and `STAR` classes, the distribution of the full catalogue is much broader than the distribution of the corresponding training sample, especially towards larger absolute values, both negative and positive. This might indicate a lower purity for these families, and consequently a larger contamination level in the `QSO` family, or the lack of a particular class of objects (e.g. active galaxies) in the training sample. 
As we show in the next subsection, the purity of the objects classified as `GALAXY` seems to be quite high, according to the external validation of this class performed via a cross-match with the Galaxy And Mass Assembly Survey Data Release 3 [GAMA DR3, @GamaDr3]. Deeper investigations of purity and completeness will be performed in the forthcoming paper of the KiDS-SQuaD series. Nevertheless, for the purposes of this paper, we are confident, and we will prove it in Section \[sec:lensing\], that our automatic classifier allowed us to obtain a starting catalogue of quasars and galaxies with a stellar contamination much smaller than the one obtained in Paper I, where we relied on simple and manual optical and infrared colour cuts. ![image](allwise/k4_log3.png) ![image](allwise/tr_log3.png) Validation of galaxies with GAMA -------------------------------- To validate the purity and completeness of the subsample of galaxies within the BEXGO catalogue, we cross-matched the final catalogue of classified objects with spectroscopically confirmed galaxies from GAMA. In particular, following the suggestions given on the GAMA website, we retrieved all the objects[^10] with redshifts $0.05<z<0.9$ and with a high “normalised” redshift quality ($nQ>1$). We matched these $\approx208$k sources with our final catalogue of classified objects from KiDS, finding 105334 systems in common. Among these, 105018 were indeed classified as `GALAXY` by CatBoost and 104970 have $p_{\texttt{GALAXY}} \ge 0.8$. Thus, only 0.3% of the common objects have been misclassified (123 as `STAR` and 181 as `QSO`). Although we are aware that this test is not definitive, and that it is not straightforward to directly translate the relative number of contaminants into a percentage of purity of the final galaxy catalogue, it shows that, at least for this small but representative sample of galaxies, our CatBoost classifier does a good job. 
We speculate that one of the reasons for the slight disagreement between the distribution of the galaxies from the training sample and that of the galaxies classified as such in the BEXGO catalogue in the $W1-W2$ space might be that, although we limited the analysis and classification to objects brighter than $r<22^m$, the SDSS galaxies are generally more luminous than the KiDS ones. We stress again that our final purpose is to create an automatic and effective method to build a catalogue of extragalactic objects, with the smallest possible contamination from stars, which is the first necessary step to search for strongly lensed quasars. We believe that these three validation steps with external data demonstrated that we succeeded in our goal, and thus we can now use the newly created catalogue to search for lens candidates. Searching for gravitationally lensed quasars {#sec:lensing} ============================================ Strong gravitationally lensed quasars are valuable but very rare objects (according to @Oguri10, one quasar in $\sim 10^{3.5}$ is expected to be strongly lensed for an $i$-band limiting magnitude deeper than $i\approx21^m$, see e.g. their Fig. 3 and Sec. 3.1) that provide direct, purely gravitational probes of cosmology and extragalactic astrophysics. Generally speaking, we can separate lensed quasars into three families: systems where the quasar images dominate (mainly low-separation couples/quadruplets with a faint deflector in between), objects where the deflector is a bright, usually red, galaxy that dominates the light budget of the system, and finally systems where both lens and source give a non-negligible contribution to the light. In the last two cases, CatBoost will most probably return multiple matches, of which at least one will be classified as extended (`GALAXY`), while in the first case it will classify them as `QSO`. 
However, we note that in cases where the separation between the quasar components is too small, the objects might not be resolved in the KiDS catalogue, and thus result in a single match/classification from our algorithm. Indeed, most of the known gravitationally lensed quasars with low separation between the multiple images, discovered in the SDSS, are identified as galaxies (in only a few cases as a single quasar), since the poor resolution does not allow deblending. Of course, the better image resolution of KiDS helps in this case; however, some lenses with very low separation are blended also in KiDS. This is why it is of crucial importance to have a catalogue of extragalactic objects that is as clean as possible from stellar contamination and as complete and efficient as possible in classifying galaxies and quasars. To demonstrate our statement that lensed quasars are not always classified as (multiple, nearby) `QSO` by ML-based algorithms that work in a magnitude-colour space, and at the same time to highlight the importance of having an extragalactic catalogue, we carried out a test on the recovery of known lenses, as already done in Paper I. 
  ID                     $p_{\texttt{STAR}}$   $p_{\texttt{QSO}}$   $p_{\texttt{GALAXY}}$
  ---------------------- --------------------- -------------------- -----------------------
  J004941.90-275225.87   2.0E-5                1.0E-5               **0.9999**
  J033238.22-275653.32   2.0E-5                1.0E-5               **0.9999**
  J115252.26+004733.11   2.0E-5                1.0E-5               **0.9999**
  J220132.76-320144.73   0.0004                4.0E-5               **0.9999**
  J234416.95-305625.98   0.0004                0.0012               **0.9983**
  J105644.89-005933.34   7.0E-4                **0.9978**           0.0015
  J112320.73+013747.53   0.0086                **0.9829**           0.0085
  J142758.89-012130.31   0.0035                **0.9831**           0.0134
  J025257.87-324908.65   0.0011                **0.9950**           0.0039
  J145847.59-020205.87   0.0004                0.0011               **0.9985**
  J145847.66-020204.86   2.0E-5                3.0E-5               **0.9999**
  J032606.87-312254.21   0.0019                **0.9944**           0.0037
  J032606.78-312253.52   0.0023                **0.9793**           0.0184
  J143228.96-010613.51   0.0006                **0.9980**           0.0014
  J143229.25-010615.98   0.0008                **0.9966**           0.0025
  J104237.27+002301.42   0.0477                **0.8637**           0.0886
  J104237.24+002302.76   0.0652                **0.7721**           0.1627
  J092455.82+021923.69   0.0078                **0.8909**           0.1012
  J092455.82+021925.30   0.0059                0.4543               **0.5397**
  J122608.10-000602.31   0.0046                **0.9610**           0.0343
  J122608.03-000602.25   0.0199                **0.5048**           0.4752
  J122608.13-000559.09   0.0009                0.0016               **0.9997**
  J133534.80+011805.61   0.0128                **0.9806**           0.0066
  J133534.87+011804.45   0.0056                **0.9797**           0.0066
  J133534.97+011809.32   0.0013                0.0050               **0.9937**
  J152720.14+014139.66   0.0058                **0.9617**           0.0325
  J152720.27+014140.96   0.0005                0.0006               **0.9999**
  ----------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------

: Known lenses in the KiDS DR4 footprint. All of them are recovered in our extragalactic catalogue; 8 have multiple matches. Of the single matches (upper block), 5 are classified as `GALAXY` and 4 as `QSO` (we highlight in bold the highest probability). For the multiple matches, in half of the cases all the components belong to the same family (middle block), and in the other half they belong to different families. For each component of each system we report the J2000 coordinates in the ID column (in the usual ’hhmmss.ss$\pm$ddmmss.ss’ format) and the probability to belong to each of the three classification families.[]{data-label="tab:known_lenses"}

We started from the same list of $\approx260$ confirmed lensed quasars that we used in Paper I, collected from the CfA-Arizona Space Telescope LEns Survey [CASTLES, @Munoz98] Project database and the SDSS Quasar Lens Search (SQLS, @Inada12), and updated it with systems recently discovered in wide-sky surveys [@Agnello18_atlas; @Agnello18_gaia; @Ostrovski18; @Anguita18; @Lemon18; @Spiniello19_des]. The cross-match between the full KiDS DR4 catalogue of objects with $r<22^m$ and the updated list of known lensed quasars (288 systems) gave us 17 known lenses. All of them have been retrieved in the KiDS-BEXGO catalogue, 10 classified as `QSO` and 7 as `GALAXY` (one of them with $p_{\texttt{QSO}}\approx0.45$). These lenses are reported in Table \[tab:known\_lenses\], together with the probability to belong to each class. We do not explicitly report their right ascensions and declinations in separate columns because their IDs already contain the J2000 coordinates in the usual ’hhmmss.ss$\pm$ddmmss.ss’ format.
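Since the table IDs encode the J2000 coordinates, they can be converted back to decimal degrees when needed. The following sketch is a hypothetical helper (not part of our pipeline) that parses the ’hhmmss.ss$\pm$ddmmss.ss’ format:

```python
import re

def id_to_deg(obj_id):
    """Convert an ID like 'J004941.90-275225.87' (J2000,
    hhmmss.ss±ddmmss.ss) into (ra, dec) in decimal degrees."""
    m = re.fullmatch(
        r"J(\d{2})(\d{2})(\d+\.?\d*)([+-])(\d{2})(\d{2})(\d+\.?\d*)", obj_id)
    if m is None:
        raise ValueError(f"unrecognised ID format: {obj_id}")
    hh, hm, hs, sign, dd, dm, ds = m.groups()
    # 1 hour of right ascension = 15 degrees
    ra = 15.0 * (int(hh) + int(hm) / 60.0 + float(hs) / 3600.0)
    dec = int(dd) + int(dm) / 60.0 + float(ds) / 3600.0
    return ra, -dec if sign == "-" else dec
```

For example, `id_to_deg("J004941.90-275225.87")` gives $(\mathrm{RA}, \mathrm{Dec}) \approx (12.4246, -27.8739)$ degrees.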
Based on this simple and qualitative test, it appears clear that selecting only quasars would allow one to find only lens systems where the contribution to the light from the quasars is much larger than that of the deflector (selecting only `QSO` we would retrieve roughly 65% of the known lensed-quasar population – 11 over 17 systems). Finally, although this goes beyond the scope of Paper II, we note that another important advantage of having an extragalactic source catalogue (rather than only quasars) is the possibility to search for galaxy-galaxy gravitational lenses. This type of gravitationally lensed object allows us to investigate in great detail the mass distribution in massive galaxies up to $z\sim1$, especially when combined with dynamics [@Koopmans06; @Koopmans09; @Spiniello11; @Spiniello15]. Morphological and photometric criteria can be used to find this kind of lenses: one should look for red extended objects (`GALAXY` with red colors) with blue extended objects (`GALAXY` with blue colors) within small circular apertures. We will work in this direction in a forthcoming paper, possibly using automatic, machine-learning-based routines for this purpose (e.g. @Petrillo17 [@Petrillo19a; @Petrillo19b]) and already available catalogues of luminous red galaxies in the Kilo-Degree Survey (e.g., @Vakili2019). Starting from the KiDS-BEXGO catalogue of 5880276 objects, we retrieve only systems belonging to the following distinct groups:

1. QSO-Multiplets: sources classified as `QSO` and with at least one near-by `QSO` companion (within a $5\arcsec$ circular aperture radius) with similar colors;

2. GALAXY-Multiplets: sources classified as `GALAXY` and surrounded by at least one object classified as `QSO` within a $5\arcsec$ circular aperture radius[^11].

This simple procedure allowed us to obtain 347 unique objects for the first group and 611 unique objects belonging to the second one, which we then visually inspected.
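The selection above can be sketched as follows. The catalogue structure, function names, and example coordinates are illustrative assumptions, and the small-angle separation formula is only adequate at the few-arcsec scales relevant here:

```python
import math

# Hypothetical minimal catalogue: (ra_deg, dec_deg, class) tuples.
catalogue = [
    (10.0000, -30.0000, "QSO"),
    (10.0008, -30.0003, "QSO"),     # ~2.7" from the first source
    (50.0000, -25.0000, "GALAXY"),
    (50.0010, -25.0002, "QSO"),     # ~3.4" from the galaxy
    (120.0,    0.0,     "GALAXY"),  # isolated source
]

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation on the sky, valid at few-arcsec scales."""
    dra = (ra2 - ra1) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = dec2 - dec1
    return math.hypot(dra, ddec) * 3600.0

def multiplets(cat, aperture=5.0):
    """Indices of QSO-Multiplet and GALAXY-Multiplet sources."""
    qso_mult, gal_mult = set(), set()
    for i, (ra1, dec1, cls1) in enumerate(cat):
        for j, (ra2, dec2, cls2) in enumerate(cat):
            if i == j or ang_sep_arcsec(ra1, dec1, ra2, dec2) > aperture:
                continue
            if cls1 == "QSO" and cls2 == "QSO":
                qso_mult.add(i)      # QSO with a near-by QSO companion
            elif cls1 == "GALAXY" and cls2 == "QSO":
                gal_mult.add(i)      # GALAXY with a near-by QSO
    return qso_mult, gal_mult
```

A production version would of course use a spatial index (or a library routine such as astropy's catalogue matching) rather than the quadratic loop shown here, and would additionally apply the color-similarity cut for the QSO-Multiplets.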
Among these, some were already known lenses, some are probably binary quasars, and some are simply contaminants appearing as close-by companions because of sky projection. Nevertheless, we found many very promising lens candidates, which two people of our team graded from 0 to 4, with 4 being a sure lens. We present the 12 candidates with grade $\ge2.5$ in Table \[tab:qso\_candidates\] (divided into the two Multiplet kinds). We publicly release their coordinates to facilitate spectroscopic follow-up, which is the last necessary step for unambiguous confirmation. Finally, the $gri$-combined KiDS cutouts of these 12 candidates are shown in Figure \[fig:candidates\]. The top rows show candidates belonging to the GALAXY-Multiplets family, while the bottom row shows QSO-Multiplets candidates. In the former group, the deflector gives a much larger contribution to the light, as can be seen from the images. KiDSJ0008-3237 seems to be a very reliable galaxy-galaxy candidate, while KiDSJ0215-2909, definitely among the most promising objects, might be a fold quadruplet, similar to the one recently found in the VST-ATLAS Survey, WISE 025942.9-163543 [@Schechter18], and very useful for cosmography studies (time-delay measurement of $H_0$; see e.g. the ’H0 Lenses in COSMOGRAIL’s Wellspring’[^12] results).

![image](lenses/Best_candidates_finalNEW.jpg)

We note that, of the 17 known lenses, only 8 have been selected as multiplets (2 as QSO-Multiplets and 6 as GALAXY-Multiplets). The other 9 systems have not been deblended in the KiDS catalogue, and thus only have one single match in our classification scheme. These numbers are perfectly in line with the results obtained in Paper I, where we found that the Multiplet method alone allowed the recovery of $\sim40$% of the known lens population.
In a forthcoming paper of this series, fully dedicated to the lens search, we will perform a more careful candidate selection, also based on improved color and magnitude criteria to select objects with similar colors, and applying to the full BEXGO catalogue the Blue and Red Offsets of Quasars and Extragalactic Sources (BaROQuES) scripts, already successfully tested in Paper I. We finally note that we re-discovered a very promising quadruplet, KIDS0239-3211, which was presented in an AAS research note [@Sergeyev2018] and was found by the first application of our ML-based classifier[^13]. The same system has also been detected by @Hartley17 using an image-based Support Vector Machine classifier and by @Petrillo19b using Convolutional Neural Networks; but since they did not release the coordinates in their papers, we re-discovered it in a completely independent way.

----------------   ---------------   --------------------   -----------   ---------   ------------   -------------------------
ID                 RA                DEC                    Multiplet     Num. of     Separation     Notes
                   (hh:mm:ss.ss)     ($\pm$dd:mm:ss.ss)     type          matches     (arcsec)
----------------   ---------------   --------------------   -----------   ---------   ------------   -------------------------
KIDSJ0008-3237     00:08:16.01       -32:37:15.80           GAL           3           2.7            Gravitational arc
KIDSJ0106-2917     01:06:49.80       -29:17:12.20           GAL           2           3.4            Double with large shear
KIDSJ0206-2855     02:06:30.86       -28:55:42.22           GAL           2           1.2            Low-separation double
KiDSJ0208-3203     02:08:53.16       -32:02:03.51           GAL           3           2.0            Cross Quad candidate
KIDSJ0215-2909     02:15:14.4        -29:09:25.6            GAL           3           3.2            Fold Quad candidate
KIDSJ1204+0034     12:04:56.58       +00:34:06.02           GAL           3           4.6            Large-separation double
KiDSJ1346+0017     13:46:12.38       +00:17:20.18           GAL           4           2.8            Double with large shear
KIDSJ1359+0129     13:59:43.98       +01:28:13.90           GAL           2           0.9            Low-separation double
KIDSJ0000-3502     00:00:57.10       -35:02:54.15           QSO           2           1.0            Low-separation double
KIDSJ0139-3103     01:39:59.08       -31:03:35.06           QSO           2           1.6            Low-separation double
KIDSJ0201-3208     02:01:15.44       -32:08:34.57           QSO           2           1.6            Low-separation double
KIDSJ1334-0120     13:34:11.18       -01:20:52.22           QSO           2           1.1            Low-separation double
----------------   ---------------   --------------------   -----------   ---------   ------------   -------------------------

Finally, we cross-matched the list of all the lens candidates found in Paper I with the BEXGO catalogue. We find that, among the 210 objects found in @Spiniello18, 148 are recovered in the extragalactic-objects catalogue ($\approx45\%$ classified as `QSO` and $\approx55\%$ as `GALAXY`) and 66 are also selected as Multiplets. Of the 62 remaining objects, 33 have $r>22^m$ and therefore were discarded at the input-catalogue creation stage, and 29 were classified as `STAR` by CatBoost; these 29 stars indeed also have a match in *Gaia*, and all of them have non-negligible proper motions and parallaxes. Finally, among the DR3 candidates that have not been found in the DR4 KiDS-BEXGO, 4 have been spectroscopically followed up and turned out to be stars[^14].
These numbers nicely demonstrate that the employment of an ML-based classifier further helps in decreasing the risk of stellar contamination among gravitationally lensed quasar candidates. Of the seven known lenses that we recovered in Paper I, six are still recovered. We only lose the Nearly Identical Quasar (NIQ) couple QJ0240-343 [@Tinney95; @Tinney97] behind the Fornax dwarf spheroidal galaxy, once again because it has $r=22.17^m$ and thus does not satisfy our initial conditions.

Results and Conclusions {#sec:conclusions}
=======================

In this second paper of the KiDS Strongly lensed QUAsar Detection Project (KiDS-SQuaD) we have presented a new machine-learning-based classifier to identify extragalactic objects in order to find lensed quasars within the KiDS DR4 data. The technique adopted in this paper has become quite standard in the extragalactic community for classifying objects in multi-band photometric surveys [@Gieseke2011; @ksz2012; @Brescia2015; @Carrasco15; @Peters15; @Krakowski2016; @Krakowski2018; @Viquar2018; @Khramtsov2018; @Barrientos2018; @Nolte2019; @Bai2019], which provide a very large amount of data, and has already been tested on the KiDS DR3 [@Nakoneczny2019]. In fact, @Nakoneczny2019 presented an ML-based pipeline that allowed them to classify objects into three classes (stars, galaxies and quasars) and successfully applied it to the KiDS DR3. Our work, although building on their findings, has been developed within a different framework, i.e. the search for lensed quasars, and it therefore differs from theirs in many aspects, from the assumption that quasars are point-like sources, to the cleaning procedure, optimization, and fine-tuning aimed at minimizing as much as possible the stellar contamination in the catalogue of extragalactic objects. Finally, here we also add infrared data, using deep photometry in nine bands (instead of four), which further helps in isolating stars.
Summary {#sec:summary}
-------

We provide here a general summary of the achieved results of this paper, highlighting with bullet points the main steps that we undertook, from the presentation of a new pipeline to the search for gravitationally lensed quasars. In particular, we have:

- used the full potential of machine-learning methods on broad optical-infrared photometry data, after having applied a careful cleaning to the training SDSS$\times$KiDS sample, also visually inspecting the ambiguous cases when necessary;

- performed an ad-hoc customization and fine-tuning of the parameters of the CatBoost algorithm, which we identified as the best possible classifier for our purposes, to reach the required levels of purity and completeness and to avoid overfitting problems. We also implemented a weighting procedure that allowed us to reach the best possible purity of quasars (decreasing the rate of stars classified as quasars from 0.6% to 0.3%);

- split the training dataset into a hold-out and an out-of-fold part to assess the performance of our classifier in terms of completeness and purity;

- defined (and then solved) a 3-class problem (`STAR`, `GALAXY`, `QSO`), working with a simple basic assumption for the classification, namely that quasars and stars are point-like sources while galaxies are extended. We therefore used the `CLASS_STAR` parameter – a ’stellarity’ index from the KiDS catalogue – which turned out to be the most important feature in our classification algorithm (as in ), together with optical and infrared colors;

- applied CatBoost to all the data from KiDS DR4 with magnitude brighter than $r=22^m$.
For each source, the classifier calculated the probability of belonging to the three different classes of objects, $p_{\texttt{STAR}}$, $p_{\texttt{GALAXY}}$ and $p_{\texttt{QSO}}$, and we then assumed that a source belongs to a given class when the probability of being in that class is the highest;

- studied the variation in completeness and purity as a function of the probability threshold used to assign an object to a given class;

- collected all the objects that were not classified as stars, building the KiDS DR4 Bright EXtraGalactic Objects catalogue (KiDS-BEXGO), which we then also validated using external data (*Gaia* DR2, AllWISE and GAMA);

- showed the potential of the KiDS-BEXGO catalogue for the gravitationally lensed quasar search, with a simple test on the recovery of known, confirmed lenses, and proved, in this way, that our method of selecting extragalactic sources (not only quasars) is a necessary condition to discover as many new systems as possible;

- used the KiDS-BEXGO catalogue to search for new, undiscovered gravitationally lensed quasars, looking for objects with a near-by companion. We have obtained a list of 958 ’Multiplets’ (347 `QSO` and 611 `GALAXY`) that we visually inspected, finding 12 very reliable lens candidates for which we release coordinates and KiDS images;

- showed the improvement, in terms of stellar contaminants in the final candidate list, with respect to what was obtained in Paper I, but at the same time also demonstrated the need for different methods to search for lens candidates within the catalogue (e.g. the BaROQuES scripts) and to directly analyze images (DIA). These methods will be investigated in a forthcoming publication.

In addition, we present in Appendix \[testing\_classifiers\] a direct comparison of some of the most used classifiers based on decision trees.
This test helped us to compare and quantify the performance of each of them on the same training sample, in order to choose the most suitable one for our purposes, namely CatBoost.

Future perspectives and improvements {#sec:future}
------------------------------------

From the predictions of @Oguri10, we estimate that $\approx50$ lensed quasars are expected in the KiDS DR4 footprint (1000 deg$^2$) when limiting to systems with $r<22^m$; 17 lenses are already known, thus, in principle, more than half are still undiscovered (and even more going to fainter magnitudes). In this Paper II, we focused on the first necessary step to find all the catchable gravitational lenses: an object classifier that allowed us to get rid of the very numerous stellar contaminants and will allow us to analyze very large datasets with minimal human intervention. We note that our classifier is built and trained for this specific purpose. A forthcoming paper within the KiDS consortium (Nakoneczny et al., in prep.) will present a machine-learning-based pipeline trained for general scientific purposes, providing photometric redshifts for galaxies and quasars in KiDS DR4 on top of the object classification, and testing machine-learning extrapolation to increase catalogue completeness at fainter magnitudes. Moreover, we also plan to further improve the classification model, working in a more complex and complete feature space and developing a more detailed classification scheme (e.g., splitting the classification of galaxies into late and early types, given that massive early types are more likely to act as deflectors, because they are on average more massive). In Paper III, already in preparation (Sergeyev et al., in prep.), we focus instead only on the gravitational lens search, presenting a more systematic, as automatic as possible, way to select reliable candidates from the KiDS-BEXGO catalogue. We will apply photometric and morphological criteria, e.g.
based on optical and infrared colors, or on the simple fact that a centroid offset of the same object among different surveys, covering different bands, is expected, since the deflector and the quasar images contribute differently in different wavelength ranges (BaROQuES). We will also exploit the full potential of the Direct Image Analysis (DIA, see Paper I for more details) to get precise astrometry and fit the photometry of our most reliable candidates. Finally, we have already started the necessary spectroscopic follow-up, to get a final, unambiguous confirmation of the lensing nature of as many systems as possible, and to obtain secure redshift measurements that will allow us to translate the lens-model results (e.g., Einstein radii) into physical mass measurements.

The authors wish to thank Maciej Bilicki for the interesting discussion and the very constructive comments that helped in improving the final manuscript. CS has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie actions grant agreement No 664931. CT acknowledges funding from the INAF PRIN-SKA 2017 program 1.05.01.88.04. KK acknowledges support by the Alexander von Humboldt Foundation. JTAdJ is supported by the Netherlands Organisation for Scientific Research (NWO) through grant 621.016.402. HYS acknowledges the support from the Shanghai Committee of Science and Technology grant No. 19ZR1466600.
This work is based on data products from observations made with ESO Telescopes at the La Silla or Paranal Observatories under programme ID(s) 177.A-3016(A), 177.A-3016(B), 177.A-3016(C), 177.A-3016(D), 177.A-3016(E), 177.A-3016(F), 177.A-3016(G), 177.A-3016(H), 177.A-3016(I), 177.A-3016(J), 177.A-3016(K), 177.A-3016(L), 177.A-3016(M), 177.A-3016(N), 177.A-3016(O), 177.A-3016(P), 177.A-3016(Q), 177.A-3016(S), 177.A-3017(A), 177.A-3018(A), 060.A-9038(A), 094.B-0512(A), and on data products produced by Target/OmegaCEN, INAF-OACN, INAF-OAPD and the KiDS production team, on behalf of the KiDS consortium. OmegaCEN and the KiDS production team acknowledge support by NOVA and NWO-M grants. Members of INAF-OAPD and INAF-OACN also acknowledge the support from the Department of Physics & Astronomy of the University of Padova, and of the Department of Physics of Univ. Federico II (Naples). This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration. This publication makes use of data products from the Sloan Digital Sky Survey IV. Funding for SDSS IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. 
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. This work has made use of data from the European Space Agency (ESA) mission *Gaia* (<https://www.cosmos.esa.int/gaia>), processed by the *Gaia* Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the *Gaia* Multilateral Agreement. This work has made use of data from Galaxy and MAss Assemby Survey. GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. 
The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is http://www.gama-survey.org/. [99]{} Abolfathi B., Aguado, D., Aguilar, G., et al. 2018, ApJS, 235, 42 Abraham, S., Philip, N., Kembhavi, A., Wadadekar, Y. G., & Sinha, R. 2012,, 419, 80 Agnello A., Treu, T., Ostrovski, F., et al. 2015,, 454, 1260 Agnello A., Kelly B. C., Treu T., Marshall P. J., 2015,, 448, 1446 Agnello, A., Schechter, P. L., Morgan, N. D., et al. 2018, , 475, 2086 Agnello A., Lin, H., Kuropatkin, N., et al. 2018,, 479, 4345 Agnello, A., & Spiniello, C. 2018, arXiv:1805.11103 Akhmetov, V., Fedorov, P., Velichko, A., & Shulga, V. 2017, , 469, 763 Amaro, V., Cavuoti, S., Brescia, M., et al. 2019,, 482, 3, 3116 Anguita, T., Schmidt, R. W., Turner, E. L., et al. 2008, , 480, 327 Anguita T., Schechter, P.L., Kuropatkin, N. et al. 2018,, 480, 5017 Assef, R. J., Stern, D., Kochanek, C. S., et al. 2013, ApJ, 772, 26 Bachchan, R. K., Hobbs, D. & Lindegren, L. 2016, A&A, 589, A71 Bai, Y., Liu, J., Wang, S., & Yang, F. 2019, , 157, 9 Baldry I. K., et al., 2018,, 474, 3875 Ball, N. M., Brunner, R. J., Myers, A. D., & Tcheng, D. 2006, , 650, 497 Barrientos, F., Pichara, K., Troncoso, P., et al. 2018, VST in the Era of the Large Sky Surveys, 9 Bate, N. F., Floyd, D. J. E., Webster, R. L., & Wyithe, J. S. B. 2011, , 731, 71 Blanton, M., Bershady, M., Abolfathi, B., et al. 2017, AJ, 154, 28 Bilicki, M., Hoekstra, H., Brown, M., et al. 2018, A&A, 616, A69, 22 Braibant, L., Hutsem[é]{}kers, D., Sluse, D., Anguita, T., & Garc[í]{}a-Vergara, C. J. 2014, , 565, L11 Breiman L. 
2001, Machine Learning, 45, 5 Brescia, M., Cavuoti, S., & Longo, G. 2015, , 450, 3893 Cabanac R. A., et al., 2007, A&A, 461, 813 Carrasco D., Barrientos, L., Pichara, K., et al. 2015, A&A, 584, A44 Capaccioli, M. & Schipani, P. 2011, The Messenger, 146, 2 Capaccioli, M., Schipani, P., de Paris, G., et al. 2012, in Science from the Next Generation Imaging and Spectroscopic Surveys, 1 Chen, T. & Guestrin, C. 2016, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, p. 785-794 Chiu, K., Richards, G., Hewett, P., and Maddox, N. 2007,, 375, 1180 Cortes, C. & Vapnik, V. 1995, Mach. Learn., 20, 273 Cutri, R., Wright, E., Conrow, T., et al. 2013, Explanatory Supplement to the AllWISE Data Release Products, Tech. rep. D’Isanto, A., Cavuoti, S., Gieseke, F., Polsterer, K. 2018, A&A, 616, A97, 21 de Jong, J. T. A., Verdoes Kleijn, G. A., Kuijken, K. H., & Valentijn, E. A. 2013, Exp. Astron., 35, 25 de Jong, J. T. A., Verdoes Kleijn, G. A., Erben, T., et al. 2017, A&A, 604, A134 DESI Collaboration, Aghamousa, A., Aguilar, J., et al. 2016, arXiv:1611.00036 Ding, X., Treu, T., Suyu, S. H., et al. 2017,, 465, 4634 Donley, J. L., Koekemoer, A. M., Brusa, M., et al. 2012, ApJ, 748, 142 Dorogush, A.V., Ershov, V., Gulin, A., arXiv:1810.11363 Driver S. P., et al., 2009, A&G, 50, 5.12 Duda, R. O. & Hart, P. E. 1973, Pattern classification and scene analysis (J. Wiley & Sons) Edge, A., Sutherland, W., Kuijken, K., et al. 2013, The Messenger, 154, 32 Elting, C., Bailer-Jones, C. A. L., & Smith, K. W. 2008, American Institute of Physics Conference Series, 1082, 9 Elvis, M., Wilkes, B. J., McDowell, J. C., et al. 1994, ApJS,95, 1 Eyer L., Blake C., 2005,, 358, 30 Friedman, J.H. 2000, Annals of Statistics, 29, 1189 *Gaia* Collaboration, (Prusti, T., et al.) 2016, A&A, 595, A1 *Gaia* Collaboration, (Brown, A., et al.) 2018, A&A, 616, A1 *Gaia* Collaboration, (Mignard, F., et al.) 2016, A&A, 616, A14 Gieseke, F., Polsterer, K. 
L., Thom, A., et al. 2011, arXiv:1108.4696 Guerras, E., Mediavilla, E., Jimenez-Vicente, J., et al. 2013, ApJ, 778, 123 Guo, S., Qi, Z., Liao, S. et al. 2018, A&A, 618, A144 Hartley P., Flamary R., Jackson N., Tagore A. S., Metcalf R. B., 2017,, 471, 3378 Haehnelt, M. G., & Kauffmann, G. 2000, , 318, L35 Hernitschek N., Schlafly, E., Branimir, S., et al. 2016, ApJ, 817, 73 Hopkins P. F., Hernquist L., Cox T. J., Robertson B., Springel V., 2006, ApJS, 163, 50 Inada N., Oguri, M., Shin, M.-S., et al., 2012, AJ, 143, 119 Ivezic, Ž., & LSST Science Collaboration 2013, LSST Science Requirements Document, <http://ls.st/LPM-17> Jacobs C., et al., 2019,, 484, 5330 Jarrett, T. H., Cohen, M., Masci, F., et al. 2011, ApJ, 735, 112 Jin, X., Zhang, Y, Zhang, J. 2019,, 485, 4539 Kauffmann, G., & Haehnelt, M. 2000, , 311, 576 Khramtsov, V., & Akhmetov, V. 2018, Proceedings of a IEEE XIIIth International Scientific and Technical Conference “CSIT”, 2018, 72 Khramtsov, V., Akhmetov, V., & Fedorov, P. 2018b, A&A, under review Kim, D.-W., Protopapas, P., Byun, Y.-I., et al. 2011, , 735, 68 Kovács, A., & Szapudi, I. 2015, , 448, 1305 Krakowski, T., Ma[ł]{}ek, K., Bilicki, M., et al. 2016, , 596, A39 Krakowski T., Ma[ł]{}ek K., Bilicki M., Siudek M., Pollo A., 2018, pas7.conf, 7, 252 Kochanek C. S., 2004, ApJ, 605, 58 Koopmans L. V. E., Treu T., Bolton A. S., Burles S., Moustakas L. A., 2006, ApJ, 649, 599 Koopmans L. V. E., Bolton, A., Treu, T., et al. 2009, ApJ, 703, L51 Krone-Martins, A., Delchambre, L., Wertz, O., et al. 2018, A&A, 616, L11 Kuijken, K. 2008, A&A, 482, 1053 Kuijken, K., Heymans, C., Hildebrandt, H., et al. 2015,, 454,3500 Kuijken, K., Heymans, C., Dvornik, A., et al. 2019, A&A, in press Lacy, M., Storrie-Lombardi, L. J., Sajina, A., et al. 2004, ApJS, 154, 166 Lanusse F., Ma Q., Li N., Collett T. E., Li C.-L., Ravanbakhsh S., Mandelbaum R., P[ó]{}czos B., 2018,, 473, 3895 Laureijs, R., Amiaux, J., Arduini, S., et al. 
2011, arXiv:1110.3193 Lindegren, L., Hernandez, J., Bombrun, A., et al. 2018 A&A, 616, A2 Lemon, C. A., Auger, M. W., McMahon, R. G., & Koposov, S. E. 2017, , 472, 5023 Lemon, C., Auger, M., McMahon, R., & Ostrovski, F. 2018, , 479, 5060 Mansour, Y. 1997, Proceedings of the 14th International Conference on Machine Learning, p.195-201 Mateos, S., Alonso-Herrero, A., Carrera, F. J., et al. 2012, , 426, 3271 Matthews, B. 1975 , Biochimica et Biophysica Acta (BBA) - Protein Structure, 405, 2, p. 442-451. Meylan, G., Jetzer, P., North, P., et al. 2006, Saas-Fee Advanced Course 33: Gravitational Lensing: Strong, Weak and Micro Metcalf R. B., et al., 2018, arXiv, arXiv:1802.03609 Motta, V., Mediavilla, E., Falco, E., & Mu[ñ]{}oz, J. A. 2012, , 755, 82 Mu[ñ]{}oz J. A., Falco E. E., Kochanek C. S., Leh[á]{}r J., McLeod B. A., Impey C. D., Rix H.-W., Peng C. Y., 1998, Ap&SS, 263, 51 Nakoneczny, S., Bilicki, M., Solarz, A. et al. 2019, A&A, 624, A13 Nolte, A., Wang, L., Bilicki, M., Holwerda, B., & Biehl, M. 2019, arXiv:1903.07749 Oguri M., et al., 2006, AJ, 132, 999 Oguri M., & Marshall P. J. 2010, , 405, 2579 Oguri, M., Rusu, C. E., & Falco, E. E. 2014, , 439, 2494 Ostrovski F., et al., 2017, , 465, 4325 Ostrovski F., et al., 2018, , 473, L116 Paraficz D., et al., 2016, A&A, 592, A75 Peters C. M., et al., 2015, ApJ, 811, 95 Petrillo C. E., Tortora, C., Chatterjee, S., et al. 2017, , 472, 1129 Petrillo C. E., Tortora, C., Chatterjee, S., et al., 2019a, , 482, 807 Petrillo C. E., Tortora, C., Vernardos, G., et al., 2019b, , 484, 3879 Proft, S. & Wambsganss, J. 2015, A&A, 574, A46 Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., and Gulin., A. 2018, in Advances in Neural Information Processing Systems, 31, 6638 Quinlan, J. R. 1986, Mach. Learn., 1, 81 Refsdal S., 1964, , 128, 307 Richard J., et al., 2019, Msngr, 175, 50 Rumelhart, D. E., Hinton, G. E., & Williams, R. J. 1986, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 
1, (Cambridge, MA, USA: MIT Press), 318 Suyu S. H., Bonvin, V., Courbin, F., et al. 2017,, 468, 2590 Wyithe, J. S. B., & Loeb, A. 2003, , 595, 614 P[â]{}ris I., Petitjean, P., Aubourg, [É]{}, et al. 2018, A&A, 613, A51 Richards, G. T., Fan, X., Newberg, H. J., et al. 2002, AJ, 123, 2945 Richards, G. T., Myers, A. D., Gray, A. G., et al. 2009, ApJS, 180, 67 Robin, A., Luri, X., Reylé, C., et al. 2012, A&A, 543, A100 Rusu, C. E., Berghea, C. T., Fassnacht, C. D., et al. 2018, arXiv:1803.07175 Shankar, F., Weinberg, D. H., & Miralda-Escud[é]{}, J. 2009, , 690, 20 Shen, Y., Strauss, M. A., Ross, N. P., et al. 2009, , 697, 1656 Schechter, P. L., & Wambsganss, J. 2002, , 580, 685 Schechter P. L., Anguita T., Morgan N. D., Read M., Shanks T., 2018, RNAAS, 2, 21 Schindler, J.-T., Fan, X., McGreer, I. D., et al. 2018, ApJ, 863, 144 Schindler, J.-T., Fan, X., McGreer, I. D., et al. 2017, ApJ, 851, 13 Sergeyev A., Spiniello C., Khramtsov V., et al. 2018, RNAAS, 2, 189 Sluse, D., Schmidt, R., Courbin, F., et al. 2011, , 528, A100 Spiniello C., Koopmans L. V. E., Trager S. C., Czoske O., Treu T., 2011,, 417, 3000 Spiniello C., Koopmans L. V. E., Trager S. C., Barnab[è]{} M., Treu T., Czoske O., Vegetti S., Bolton A., 2015,, 452, 2434 Spiniello, C., Agnello, A., Napolitano, N. R., et al. 2018, , 480, 1163 Spiniello C., Sergeyev, A., Marchetti, L., et al. 2019,.tmp..750S Stern, D., Eisenhardt, P., Gorjian, V., et al. 2005, ApJ, 631, 163 Stern, D., Assef, R. J., Benford, D. J., et al. 2012, ApJ, 753, 30 Suyu, S. H., Treu, T., Hilbert, S., et al. 2014, , 788, L35 Sutherland, W. 2012, Science from the Next Generation Imaging and Spectroscopic Surveys, 40 Treu T., Agnello A., Strides Team, 2015, AAS, 225, 318.04 Tinney, C. G. 1995, , 277, 609 Tinney, C. G., Da Costa, G. S., & Zinnecker, H. 1997, , 285, 111 Vakili M., et al., 2019,, in press. Vapnik, V. 1995, The nature of statistical learning theory (New York, USA: Springer-Verlag New York, Inc.) 
Viquar, M., Basak, S., Dasgupta, A., Agrawal, S., & Saha, S. 2018, arXiv:1804.05051 Walsh D., Carswell R. F., Weymann R. J. 1979, Nature, 279, 381 Werner, M. W., Roellig, T. L., Low, F. J., et al. 2004, ApJS, 154, 1 Wright, E. L., Eisenhardt, P.R.M., Mainzer, A. K., et al. 2010, AJ, 140, 1868 Wright, A.H., Hildebrandt, H., Kuijken, K., et al. 2018, arXiv:1812.06077

Testing different classifiers {#testing_classifiers}
=============================

The main result of the main body of the paper is a catalogue of bright objects from KiDS DR4 that we classified into three families, `STAR`, `QSO` and `GALAXY`, using a machine-learning-based classifier that uses the Gradient Boosting algorithm. In order to choose the best possible algorithm for our purposes, we tested two different approaches (and three different methods) based on ensembles of decision trees, namely the Gradient Boosting (GB, @gradboost) and Random Forest [RF, @RF] algorithms, which was the choice of . In this Appendix, we provide a more detailed description of the classifiers, their main characteristics, strengths and weaknesses, to give the reader a better understanding of the differences and similarities between them and to justify our final choice.

As already stated in the main body, the general classification problem can be simply explained by considering a training dataset $D$ with $n$ samples and $m$ features for each sample, with defined labels $y_i$: $D=\{\mathbf{x}_i,y_i\}$, where $i\in\{0,...,n-1\}$, $\mathbf{x}_i \in \mathbb{R}^m$, $y_i \in \mathbb{N}$. The goal is then to create an approximation function $F: \mathbf{x}\to y$. The two approaches mentioned above follow two different schemes of ensemble learning, which can be applied not only to decision trees. We describe them in detail in the following sections.
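As an illustration of this setup – and of the rule, used in the main body, of assigning each source to the class with the highest probability – the toy sketch below implements a deliberately simple stand-in for $F$ (a distance-weighted nearest-neighbour scorer); the feature values and class probabilities are made up and bear no relation to our actual training data:

```python
# Toy illustration of the D = {x_i, y_i} setup: x_i are feature vectors
# (here two made-up colors), y_i the labels. predict_proba is a trivial
# stand-in for the learned approximation function F: x -> y.
CLASSES = ["STAR", "QSO", "GALAXY"]

train_x = [(0.1, 0.2), (0.8, 0.9), (0.5, 0.1)]
train_y = ["STAR", "GALAXY", "QSO"]

def predict_proba(x):
    """Inverse-distance-weighted class probabilities (toy scheme)."""
    weights = {c: 0.0 for c in CLASSES}
    for xi, yi in zip(train_x, train_y):
        d2 = (x[0] - xi[0]) ** 2 + (x[1] - xi[1]) ** 2
        weights[yi] += 1.0 / (d2 + 1e-6)
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

def classify(x):
    """Assign the class with the highest probability, as done for BEXGO."""
    probs = predict_proba(x)
    return max(probs, key=probs.get)
```

The real classifier replaces `predict_proba` with the probability output of a trained tree ensemble, but the final assignment step is the same arg-max over the three class probabilities.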
(Figure \[fig:confmatr\]: confusion matrices for RF, XGBoost, and CatBoost, computed on the OOF sample, top row, and on the hold-out sample, bottom row.)

Gradient Boosting Decision Trees {#GBDT}
--------------------------------

XGBoost [@xgb] and CatBoost [@cat1; @cat2] are two Gradient Boosting [GB, @gradboost] algorithms that implement two different schemes for calculating gradients. Let us consider an ensemble of $K$ trees, where the predicted score for an input $\mathbf{x}$ is given by the sum of the values predicted by the individual $K$ trees: $\hat{y}^K(\mathbf{x})=\sum_{j=1}^K f_j(\mathbf{x})$, where $f_j$ is the output of the $j$-th decision tree. Building the $(K+1)$-th decision tree minimizes an objective function $L=\sum_{i=1}^n \ell\big(y_i, \hat{y_i}^K(\mathbf{x}_i) + f_{K+1}(\mathbf{x}_i)\big)+\Lambda(f_{K+1})$, where the minimization relies on the first (and, possibly, second) derivative of the loss function $\ell\big(y_i, \hat{y_i}^K(\mathbf{x}_i)\big)$, and $\Lambda(f_{K+1})$ is a regularization function that penalizes the complexity of the $(K+1)$-th tree to prevent overfitting. To build the $(K+1)$-th decision tree, the algorithm starts from a single decision node and iteratively adds the best split for each node, until a stopping criterion on tree growth is satisfied. XGBoost estimates the gradient value for all of the objects in a leaf and calculates the average gradient to determine the best split for each node. In this way, the gradient is estimated from the same data points on which the current decision tree is being built. In general, such a splitting procedure leads to gradient bias (due to the repeated use of the same objects through all iterations) and, as a result, to overfitting [@cat2].
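The additive update $\hat{y}^{K+1}(\mathbf{x})=\hat{y}^{K}(\mathbf{x})+f_{K+1}(\mathbf{x})$, with each new tree fitted to the gradient of the loss, can be sketched in plain Python. This is a deliberately minimal toy (depth-1 "stump" trees on 1-D data, squared loss, an invented learning rate); real GBDT libraries differ in many details:

```python
import statistics

def stump_fit(xs, residuals):
    """Fit a depth-1 regression tree (decision stump) to 1-D inputs:
    pick the threshold minimising the squared error of the two leaf means."""
    best = None
    for split in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lmean, rmean = statistics.mean(left), statistics.mean(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def gradient_boost(xs, ys, n_trees=20, learning_rate=0.5):
    """Additive model yhat^K(x) = sum_j f_j(x): each new stump is fitted to
    the current residuals, i.e. the negative gradient of the squared loss."""
    trees = []

    def predict(x):
        return sum(f(x) for f in trees)

    for _ in range(n_trees):
        residuals = [y - predict(x) for x, y in zip(xs, ys)]
        f = stump_fit(xs, residuals)
        trees.append(lambda x, f=f: learning_rate * f(x))
    return predict

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]
model = gradient_boost(xs, ys)
```

After enough boosting rounds the residuals shrink geometrically and the additive model reproduces the step function in the toy data.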
CatBoost, the algorithm chosen for this paper, implements instead a splitting technique called Ordered Boosting [@cat2], which overcomes this problem. With Ordered Boosting, the gradients are not calculated on the full training set at once, but on shuffled versions of it (so-called random permutations): the gradient for the $(j+1)$-th object is calculated based on the prediction of a model learnt only from the previous samples in the shuffled dataset. One of the limitations of GBDT algorithms is the wide range of parameters that have to be tuned to reach the highest classification quality; CatBoost has an advantage here too, since it performs well even without hyperparameter tuning. For our task, the most influential hyperparameters that have to be tuned in GBDT algorithms are: 1. `learning_rate` – the rate of gradient descent; 2. `min_split_loss` – the minimum loss reduction required to split a node of the tree; 3. `max_depth` – the maximum depth of the decision trees; 4. `min_child_weight` – the minimum number of samples in the node of the decision tree required to make a split; 5. `max_delta_step` – the maximum step controlling convergence during gradient descent; 6. `colsample_bytree` – the subsample ratio of the features used in building each decision tree; 7. `subsample` – the subsample ratio of the training objects. In particular, we noticed that the parameters that most affected the learning quality for our training dataset were `learning_rate` (larger values correspond to a steeper gradient descent, which accelerates learning but can lead to missing the loss minimum) and `max_depth` (larger values correspond to more complex trees and can lead to overfitting). Moreover, GB algorithms usually allow the use of a stopping criterion that terminates the learning when overfitting occurs (the so-called `early_stopping` parameter).
It is expressed as a number of constructed trees after which the quality metric no longer increases; usually this parameter ranges between 10 and 1000 trees, depending on `learning_rate`. If the early-stopping criterion is met, the GB algorithm keeps the number of trees that yielded the best score. Paying particular attention to the `early_stopping` parameter is the best way to avoid overfitting as much as possible. To quantify how the quality of the classification changes over the iterations of XGBoost and CatBoost, it is necessary to define a quality metric. This metric expresses the classification quality as a function of the complexity of the GBDT model and can easily be used to control overfitting. Widely used quality metrics are accuracy, precision, recall, and F1-score; however, these metrics are sensitive to the imbalance in the number of training sources among the different classes. Therefore, we decided to use the Matthews correlation coefficient [MCC, @MCC], which is instead insensitive to it. Finally, a good way to decrease the number of stars and galaxies classified by the algorithm as quasars is to apply a weight to the loss function for these two classes. This trick, applied to CatBoost, helped us improve the final purity of the quasar selection with only a minimal decrease of the completeness on the training set. In particular, we weighted the loss function for the `STAR` and `GALAXY` samples in the following way: $$L=\frac{1}{\sum_{i=1}^n w_i} \sum_{i=1}^n w_i \left[\ell\big(y_i, \hat{y_i}^K(\mathbf{x}_i) + f_{K+1}(\mathbf{x}_i)\big)+\Lambda(f_{K+1})\right]$$ where $w_i = 1$ if the source is a `QSO` and $w_i=4$ if the source is a `STAR` or a `GALAXY`.

Random Forest
-------------

Another ensemble-learning method based on decision trees is the Random Forest [RF, @RF] algorithm. This was the choice adopted in KiDS DR3.
The basic idea of RF is that a set of decision trees can form a robust classifier by averaging their decisions. In practice, the RF constructs a large number of decision trees and then takes the majority vote among them. The pipeline for each single decision tree consists of the following steps: 1. generation of different samples of the same size from the training set, using random subsets of all the objects; the repetition of some objects within a subsample is required to bring it to full size (a so-called random subsample with replacement); 2. training of the decision tree on the random subsample with replacement from step 1, using a randomly selected subset of $\approx \sqrt{m}$ features. The RF thus consists of many learning processes, each performed on a single decision tree using different random subsamples with replacement and randomly selected features. The class prediction for a given object is then a simple average of the predictions of all constructed decision trees (the bootstrap-aggregating, or bagging, method). The big advantage of RF is that it combines bagging (averaging the predictions of estimators learnt on random subsamples with replacement) with the training of each estimator on a random subset of the features. These procedures prevent overfitting and, in most cases, improve the classification performance and increase the generalization ability of the RF. The principal hyperparameters required for fitting the RF on the training dataset are: 1. `n_estimators` – the number of decision trees; 2. `max_features` – the maximum number of features to be used in a node; 3. `max_depth` – the maximum depth of the decision trees; 4. `min_samples_split` – the minimum number of samples in the node of the decision tree required to make a split; 5. `min_samples_leaf` – the minimum number of samples required to be in the leaf node (the end node in which the splitting finishes) of each tree; 6.
`class_weight` – the weights associated with each class (required in the case of an imbalanced training sample). The number of estimators and the maximum depth (and/or the minimum number of samples in a node) are mandatory hyperparameters. Moreover, fine tuning of parameters 3, 4, and 5 is crucial to avoid overfitting. For instance, setting these parameters to their common values of $\{\infty, 2, 1\}$, respectively, will most of the time lead to overfitting. In fact, if the depth of the decision tree is too large and the minimum number of samples required to be in the leaf node is too small, then each single object in the training set will end up in its own leaf, characterized by its own features. This makes it impossible to classify new, unknown objects, although the classification accuracy on the training set will reach about 100% [@Mansour1997]. Thus, to reduce overfitting, one has to limit the maximum depth of the decision trees (usually set to 3-20, depending on the amount and topology of the features), and/or the minimum number of samples in the nodes.

Performance of the classification algorithms {#sec:performance}
--------------------------------------------

To directly compare the performance of the three algorithms that we tested, we use confusion matrices. These show the relative number of objects predicted in each of the three classes with respect to their true classes. The confusion matrices for RF, XGBoost and CatBoost are shown in Figure \[fig:confmatr\]. As we can see, RF provides the highest completeness for quasars ($\approx 98.8\%$), but with a contamination of $\approx 0.8\%$ from stars and $\approx 0.1\%$ from galaxies; XGBoost shows the largest purity of the quasar selection, but with a very low completeness (only $\approx 75.8\%$ of the quasars were classified as quasars).
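The quantities used throughout this comparison (completeness, contamination, and the MCC) can all be read off a confusion matrix. A stdlib sketch, with invented counts and the multiclass generalization of the MCC usually attributed to Gorodkin (the binary coefficient is the one of @MCC):

```python
import math

# Toy 3-class confusion matrix C[t][p]: rows = true class, columns = predicted
# class, with classes ordered (STAR, QSO, GALAXY). The counts are invented.
C = [[95, 4, 1],
     [1, 97, 2],
     [2, 1, 97]]

def completeness(C, k):
    """Fraction of true class-k objects recovered (a.k.a. recall)."""
    return C[k][k] / sum(C[k])

def contamination(C, k, j):
    """Fraction of objects predicted as class k that are truly class j."""
    column_total = sum(C[t][k] for t in range(len(C)))
    return C[j][k] / column_total

def mcc(C):
    """Multiclass Matthews correlation coefficient (Gorodkin's R_K form)."""
    n = len(C)
    s = sum(sum(row) for row in C)                 # total number of samples
    c = sum(C[k][k] for k in range(n))             # correctly classified
    t = [sum(C[k]) for k in range(n)]              # true counts per class
    p = [sum(C[j][k] for j in range(n)) for k in range(n)]  # predicted counts
    num = c * s - sum(tk * pk for tk, pk in zip(t, p))
    den = math.sqrt((s * s - sum(pk * pk for pk in p)) *
                    (s * s - sum(tk * tk for tk in t)))
    return num / den if den else 0.0
</antml>```

For the invented matrix above, e.g. `completeness(C, 1)` is the QSO completeness (0.97) and `contamination(C, 1, 0)` is the stellar contamination of the QSO selection; a perfectly diagonal matrix gives an MCC of 1.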
For the `GALAXY` class, CatBoost and RF provide very similar completeness (higher than XGBoost), with a very low contamination from stars ($<0.1\%$), but CatBoost gives a much larger contamination from `QSO` ($\approx 0.25\%$). As one can see, RF and CatBoost show very similar results. To better understand which was the best choice for our scientific purposes, we compared, for each algorithm, the MCC value obtained on the training sample with the one obtained on the OOF sample. Usually, a large difference between these two scores (training and validation) indicates overfitting in the model, i.e. the classifier loses its generalization ability and is consequently able to correctly classify only the training data. For the RF, we obtained MCC values of 0.9925 and 0.9892 for the training and OOF samples, respectively (a difference of $0.0033$). For CatBoost, the MCC equals 0.9901 on the training data and 0.9894 on the OOF sample; in this case the difference ($0.0007$) is almost 5 times smaller. Thus, the better ability to generalize to unseen data, combined with the fact that CatBoost keeps the purity and the completeness of the quasar selection at a very high level and maximizes the galaxy completeness while removing as many stellar contaminants as possible, convinced us that CatBoost is the optimal classifier for our purposes.

[^1]: SDSS DR14 is the second release of the Sloan Digital Sky Survey IV phase [@sdssiv] and it includes data from all previous SDSS data releases.

[^2]: We note that this assumption has not been made in , that included in their `QSO` training sample the relatively near ($z<0.2$) AGNs and visible host galaxies.

[^3]: As a matter of fact, inspecting the misclassified data from SDSS, we found three interesting lensed quasar candidates that we selected for spectroscopic follow-up. If confirmed, the systems will be presented in a forthcoming publication.
[^4]: We use the Navigate SDSS visual tool (<http://skyserver.sdss.org/dr14/en/tools/chart/navi.aspx>) to inspect misclassified sources.

[^5]: <https://catboost.ai/>

[^6]: <https://yandex.com/company/>

[^7]: The hold-out and OOF samples are kept fixed for all the various algorithms that we tested (see Appendix).

[^8]: This limit corresponds to $r\approx 21^m$ for quasars at $z\leq3$ [@proft2015].

[^9]: <http://skyserver.sdss.org/dr14/en/tools/chart/navi.aspx>

[^10]: Also the ones observed by other surveys, i.e. we queried the table 'SpecAll'.

[^11]: The choice of a $5\arcsec$ circular aperture radius is motivated by the average separation of all the known lenses.

[^12]: <https://shsuyu.github.io/H0LiCOW/site/>

[^13]: We used in that case a Random Forest classifier, trained with spectroscopically confirmed objects from SDSS DR14.

[^14]: We have already started a spectroscopic follow-up campaign using different facilities (e.g. the NTT@La Silla, the SALT@Sutherland Observatory, the LBT@Mt. Graham). The detailed results will be presented in forthcoming dedicated papers.
---
abstract: 'This paper focuses on the problem of $L-$channel quadratic Gaussian multiple description (MD) coding. We recently introduced a new encoding scheme in [@our_ISIT] for the general $L-$channel MD problem, based on a technique called ‘Combinatorial Message Sharing’ (CMS), where every subset of the descriptions shares a distinct common message. The new achievable region subsumes the most well known region for the general problem, due to Venkataramani, Kramer and Goyal (VKG) [@VKG]. Moreover, we showed in [@our_ITW] that the new scheme provides a strict improvement of the achievable region for any source and distortion measures for which some 2-description subset is such that the Zhang and Berger (ZB) scheme achieves points outside the El-Gamal and Cover (EC) region. In this paper, we show a more surprising result: CMS outperforms VKG for a general class of sources and distortion measures, which includes scenarios where, for all 2-description subsets, the ZB and EC regions coincide. In particular, we show that CMS strictly extends the VKG region for the $L$-channel quadratic Gaussian MD problem for all $L\geq3$, despite the fact that the EC region is complete for the corresponding 2-descriptions problem. Using the encoding principles derived, we show that the CMS scheme achieves the complete rate-distortion region for several asymmetric cross-sections of the $L-$channel quadratic Gaussian MD problem, which have not been considered earlier.'
author:
- |
    Kumar Viswanatha, Emrah Akyol and Kenneth Rose\
    ECE Department, University of California - Santa Barbara\
    {kumar,eakyol,rose}@ece.ucsb.edu[^1]
bibliography:
- 'Journal\_Bibtex.bib'
title: On the Role of Common Codewords in Quadratic Gaussian Multiple Descriptions Coding
---

Multiple description coding, Combinatorial message sharing, Quadratic Gaussian multiple descriptions

Introduction\[sec:Introduction\]
================================

The multiple descriptions (MD) problem has been studied extensively, yielding a series of advances, ranging from achievability [@EGC; @ZB; @VKG; @Ramchandran; @our_ISIT; @our_ITW; @Binned_CMS_ITW] to converse results [@Ozarow; @Jun_Chen_ind_central; @Viswanatha_2_levels]. In the general MD setup, the encoder generates $L$ descriptions of the source for transmission over $L$ available channels, and it is assumed that the decoder receives a subset of the descriptions perfectly while the remaining ones are lost. The objective is to quantify the set of all achievable rate-distortion (RD) tuples for the $L$ rates $(R_{1},\ldots,R_{L})$ and the distortion levels corresponding to the $2^{L}-1$ possible description loss patterns $(D_{\mathcal{K}},\mathcal{K}\subseteq\{1,\ldots,L\})$. One of the first achievable regions for the 2-channel MD problem was derived by El-Gamal and Cover (EC) in 1982 [@EGC]. It was shown by Ozarow in [@Ozarow] that the EC region is complete when the source is Gaussian and the distortion measure is mean squared error (MSE). Zhang and Berger (ZB), however, later showed in [@ZB] that the EC coding scheme is strictly sub-optimal in general. In particular, for a binary source under Hamming distortion, sending a common codeword within the two descriptions can achieve points that are strictly outside the EC region. The converse to the ZB scheme is still not known for general sources and distortion measures.
Since then several researchers have worked on extending the EC and ZB approaches to the $L-$channel MD problem [@VKG; @Ramchandran; @Jun_Chen_ind_central; @Viswanatha_2_levels]. An achievable scheme, due to Venkataramani, Kramer and Goyal (VKG) [@VKG], directly builds on EC and ZB, and introduces a combinatorial number of refinement codebooks, one for each subset of the descriptions. Motivated by ZB, a *single* common codeword is also shared by all the descriptions, which assists in better coordination of the messages, improving the RD trade-off. We recently introduced a new coding scheme called ‘Combinatorial Message Sharing’ (CMS) in [@our_ISIT], wherein a distinct common codeword is shared by members of each subset of the transmitted descriptions. The new achievable RD region subsumes the VKG region for general sources and distortion measures. Moreover, we demonstrated in [@our_ITW] that CMS achieves a strictly larger region than VKG for all $L>2$, if there exists a 2-description subset for which ZB achieves points strictly outside the EC region. In particular, CMS achieves strict improvement for a binary source under Hamming distortion. Ozarow’s converse result [@Ozarow] motivated researchers to seek extended results for the $L-$channel quadratic Gaussian MD problem [@Jun_Chen_ind_central; @Viswanatha_2_levels]. It was shown in [@Jun_Chen_ind_central] that a special case of the VKG coding scheme, called the ‘correlated quantization’ scheme (a generalization of Ozarow’s encoding mechanism to $L-$channels), where *no common codewords are sent,* achieves the complete rate region, when only the individual and the central distortion constraints are imposed. A different and important line of attack focused on a practically interesting cross-section of the general MD problem, called the ‘symmetric MD problem’ (see [@Ramchandran]), based on encoding principles derived from Slepian and Wolf’s random binning techniques. 
In fact, CMS principles can be extended to incorporate such random binning techniques, to utilize the underlying symmetry in the problem setup, as illustrated recently in [@Binned_CMS_ITW]. However, in this paper, we restrict ourselves to the general asymmetric setup to demonstrate the potential gains of using the common codewords of CMS for the quadratic Gaussian MD problem. Optimality of EC for the 2-descriptions setup has led to a natural conjecture that common codewords do not play a necessary role in quadratic Gaussian MD coding, and all the achievable regions characterized so far neglect the common layer codewords (see, e.g., [@VKG; @Jun_Chen_ind_central; @Viswanatha_2_levels]). In this paper, we show that, surprisingly, CMS strictly outperforms VKG for a Gaussian source under MSE distortion. More generally, we show that strict improvement holds for a general class of sources and distortion measures, which includes several scenarios in which, for every 2-description subset, ZB and EC lead to the same achievable region. We also show that the common codewords of CMS play a critical role in achieving the complete RD region for several asymmetric cross-sections of the $L-$channel quadratic Gaussian MD problem. We note that, due to severe space constraints, in this paper, we avoid restating all the prior results and refer the reader to [@our_ISIT] and [@our_ITW] for a brief description of the EC, ZB and VKG schemes. In the following section, we begin with a brief description of the CMS coding scheme.

Formal definition and CMS coding scheme\[sec:Prior\_results-1\]
===============================================================

A source produces a sequence of $n$ iid random variables, denoted by $X^{n}=\left(X^{(1)},X^{(2)},\ldots,X^{(n)}\right)$. We denote $\mathcal{L}=\{1,\ldots,L\}$. There are $L$ encoding functions, $f_{l}(\cdot),\,\, l\in\mathcal{L}$, which map $X^{n}$ to the descriptions $J_{l}=f_{l}(X^{n})$, where $J_{l}\in\{1,\ldots, B_{l}\}$ for some $B_{l}>0$.
The rate of description $l$ is defined as $R_{l}=\log_{2}(B_{l})$. Each of the descriptions is sent over a separate channel and is either received at the decoder error-free or is completely lost. There are $2^{L}-1$ decoding functions, one for each possible received combination of the descriptions: $\hat{X}_{\mathcal{K}}^{n}=\left(\hat{X}_{\mathcal{K}}^{(1)},\ldots,\hat{X}_{\mathcal{K}}^{(n)}\right)=g_{\mathcal{K}}(J_{l}:l\in\mathcal{K})$, $\forall\mathcal{K}\subseteq\mathcal{L},\mathcal{K}\neq\phi$, where $\hat{X}_{\mathcal{K}}$ takes values in a finite set $\hat{\mathcal{X}}_{\mathcal{K}}$, and $\phi$ denotes the null set. When a subset $\mathcal{K}$ of the descriptions is received at the decoder, the distortion is measured as $D_{\mathcal{K}}=E\left[\frac{1}{n}\sum_{t=1}^{n}d_{\mathcal{K}}(X^{(t)},\hat{X}_{\mathcal{K}}^{(t)})\right]$ for some bounded distortion measures $d_{\mathcal{K}}(\cdot)$ defined as $d_{\mathcal{K}}:\mathcal{X}\times\hat{\mathcal{X}}_{\mathcal{K}}\rightarrow\mathcal{R}$. An RD tuple $(R_{i},D_{\mathcal{K}}:i\in\mathcal{L},\mathcal{K}\subseteq\mathcal{L},\mathcal{K}\neq\phi)$ is achievable if there exist $L$ encoding functions with rates $(R_{1}\ldots,R_{L})$ and $2^{L}-1$ decoding functions yielding distortions $D_{\mathcal{K}}$. The closure of the set of all achievable RD tuples is defined as the ‘*$L$-channel multiple descriptions RD region*’. In what follows, $2^{\mathcal{S}}$ denotes the set of all subsets (power set) of any set $\mathcal{S}$ and $|\mathcal{S}|$ denotes the set cardinality. Note that $|2^{\mathcal{S}}|=2^{|\mathcal{S}|}$. $\mathcal{S}^{c}$ denotes the set complement. For two sets $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$, we denote the set difference by $\mathcal{S}_{1}-\mathcal{S}_{2}=\{\mathcal{K}:\mathcal{K}\in\mathcal{S}_{1},\mathcal{K}\notin\mathcal{S}_{2}\}$. We use the shorthand $\{U\}_{\mathcal{S}}$ for $\{U_{\mathcal{K}}:\mathcal{K}\in\mathcal{S}\}$[^2].
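As a quick instance of this bookkeeping, all of which follows directly from the definitions above, consider the smallest case $L=2$:

```latex
% L = 2 reduces to the classical 2-descriptions problem of the introduction.
\mathcal{L}=\{1,2\},\qquad
2^{\mathcal{L}}-\phi=\bigl\{\{1\},\{2\},\{1,2\}\bigr\},\qquad
\bigl|2^{\mathcal{L}}-\phi\bigr| = 2^{2}-1 = 3,
% i.e. three decoding functions g_{\{1\}}, g_{\{2\}}, g_{\{1,2\}} and three
% distortion constraints, matching the (R_1, R_2, D_1, D_2, D_{12}) tuples
% of the EC/ZB discussion in the introduction.
```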
Before describing CMS, we define the following subsets of $2^{\mathcal{L}}$: $$\begin{aligned} \mathcal{I}_{W} & = & \{\mathcal{S}:\mathcal{S}\in2^{\mathcal{L}},\,\,|\mathcal{S}|=W\}\nonumber \\ \mathcal{I}_{W+} & = & \{\mathcal{S}:\mathcal{S}\in2^{\mathcal{L}},\,\,|\mathcal{S}|>W\}\label{eq:Iw}\end{aligned}$$ Let $\mathcal{B}$ be any non-empty subset of $\mathcal{L}$ with $|\mathcal{B}|\leq W$. We define the following subsets of $\mathcal{I}_{W}$ and $\mathcal{I}_{W+}$: $$\begin{aligned} \mathcal{I}_{W}(\mathcal{B}) & = & \{\mathcal{S}:\mathcal{S}\in\mathcal{I}_{W},\,\,\mathcal{B}\subseteq\mathcal{S}\}\nonumber \\ \mathcal{I}_{W+}(\mathcal{B}) & = & \{\mathcal{S}:\mathcal{S}\in\mathcal{I}_{W+},\,\,\mathcal{B}\subseteq\mathcal{S}\}\label{eq:Iwb}\end{aligned}$$ We also define $\mathcal{J}(\mathcal{K})=\bigcup_{l\in\mathcal{K}}\mathcal{I}_{1+}(l)$. Note that $\mathcal{J}(\mathcal{L})=2^{\mathcal{L}}-\phi$. Next, we briefly describe the CMS encoding scheme in [@our_ISIT]. Recall that, unlike VKG, CMS allows for ‘combinatorial message sharing’, i.e., a common codeword is sent in each (non-empty) subset of the descriptions. The shared random variables are denoted by ‘$V$’. The base and the refinement layer random variables are denoted by ‘$U$’. First, the codebook for $V_{\mathcal{L}}$ is generated. Then, the codebooks for $V_{\mathcal{S}}$, $|\mathcal{S}|=W$, are generated in the order $W=L-1,L-2,\ldots,2$: for each $\mathcal{S}$ with $|\mathcal{S}|=W$, $2^{nR_{\mathcal{S}}^{''}}$ codewords of $V_{\mathcal{S}}$ are independently generated conditioned on each codeword tuple of $\{V\}_{\mathcal{I}_{W+}(\mathcal{S})}$. This is followed by the generation of the base layer codebooks, i.e. $U_{l}$, $l\in\mathcal{L}$. Conditioned on each codeword tuple of $\{V\}_{\mathcal{I}_{1+}(l)}$, $2^{nR_{l}^{'}}$ codewords of $U_{l}$ are generated independently.
Then the codebooks for the refinement layers are formed by generating a single codeword for $U_{\mathcal{S}},\,\,|\mathcal{S}|>1$ conditioned on every codeword tuple of $(\{V\}_{\mathcal{J}(\mathcal{S})},\{U\}_{2^{\mathcal{S}}-\mathcal{S}})$. Observe that the base and the refinement layers in the CMS scheme are similar to that in the VKG scheme, except that they are now generated conditioned on a subset of the shared codewords. The encoder employs joint typicality encoding, i.e., on observing a typical sequence $x^{n}$, it tries to find a jointly typical codeword tuple, one from each codebook. As with VKG, the codeword index of $U_{l}$ (at rate $R_{l}^{'}$) is sent in description $l$. However, now the codeword index of $V_{\mathcal{S}}$ (at rate $R_{\mathcal{S}}^{''}$) is sent in *all* the descriptions $l\in\mathcal{S}$. Therefore the rate of description $l$ is: $$R_{l}=R_{l}^{'}+\sum_{\mathcal{K}\in\mathcal{J}(l)}R_{\mathcal{K}}^{''}\label{eq:main_rate-1}$$ We next formally state the achievable RD region. Let $\mathcal{Q}$ be any subset of $2^{\mathcal{L}}$. Then, we say that $\mathcal{Q}\in\mathcal{Q}^{*}$ if it satisfies the following property: $$\mathcal{K}\in\mathcal{Q}\,\,\Rightarrow\,\,\,\mathcal{I}_{|\mathcal{K}|+}(\mathcal{K})\subset\mathcal{Q}\label{eq:cond_Q_main}$$ $\forall\mathcal{K\in\mathcal{Q}}$. Further, we denote by $[\mathcal{Q}]_{1}$ the set of all elements of $\mathcal{Q}$ of cardinality $1$, i.e.,: $$[\mathcal{Q}]_{1}=\{\mathcal{K}:\mathcal{K}\in Q,\,|\mathcal{K}|=1\}$$ Let $(\{V\}_{\mathcal{J}(\mathcal{L})},\{U\}_{2^{\mathcal{L}}-\phi})$ be any set of $2^{L+1}-L-2$ random variables jointly distributed with $X$. 
For any set $\mathcal{Q}\in\mathcal{Q}^{*}$ we define: $$\begin{aligned} \alpha(\mathcal{Q}) & = & \sum_{\mathcal{K}\in\mathcal{Q}-[\mathcal{Q}]_{1}}H\left(V_{\mathcal{K}}|\{V\}_{\mathcal{I}_{|\mathcal{K}|+}(\mathcal{K})}\right)\nonumber \\ & & +\sum_{\mathcal{K}\in2^{[\mathcal{Q}]_{1}}-\phi}H\left(U_{\mathcal{K}}|\{V\}_{\mathcal{I}_{1+}(\mathcal{K})},\{U\}_{2^{\mathcal{K}}-\phi-\mathcal{K}}\right)\nonumber \\ & & -H\left(\{V\}_{\mathcal{Q}-[\mathcal{Q}]_{1}},\{U\}_{2^{[\mathcal{Q}]_{1}}-\phi}|X\right)\label{eq:alpha_defn_1-1}\end{aligned}$$ We follow the convention $\alpha(\phi)=0$. Next we state the rate-distortion region achievable by the CMS scheme for the $L-$descriptions framework. \[thm:main\]Let $(\{V\}_{\mathcal{J}(\mathcal{L})},\{U\}_{2^{\mathcal{L}}-\phi})$ be any set of $2^{L+1}-L-2$ random variables jointly distributed with $X$, where $U_{\mathcal{S}}$ and $V_{\mathcal{S}}$ take values in some finite alphabets $\mathcal{U}_{\mathcal{S}}$ and $\mathcal{V}_{\mathcal{S}}$, respectively $\forall\mathcal{S}$. Let $\mathcal{Q}^{*}$ be the set of all subsets of $2^{\mathcal{L}}-\phi$ satisfying (\[eq:cond\_Q\_main\]) and let $R_{\mathcal{S}}^{''},\,\,\mathcal{S}\in\mathcal{I}_{1+}$ and $R_{l}^{'},\,\, l\in\mathcal{L}$ be $2^{L}-1$ auxiliary rates satisfying: Then, the RD region for the $L-$channel MD problem contains the rates and distortions for which there exist functions $\psi_{\mathcal{S}}(\cdot)$, such that, $$\begin{aligned} R_{l} & = & R_{l}^{'}+\sum_{\mathcal{S}\in\mathcal{J}(l)}R_{\mathcal{S}}^{''}\label{eq:rate_condition_thm}\\ D_{\mathcal{S}} & \geq & E\left[d_{\mathcal{S}}\left(X,\psi_{\mathcal{S}}\left(U_{\mathcal{S}}\right)\right)\right]\label{eq:dist_condition_thm}\end{aligned}$$ The closure of the achievable tuples over all such $2^{L+1}-L-2$ random variables is denoted by $\mathcal{RD}_{CMS}$. $\mathcal{RD}_{CMS}$ can be extended to continuous random variables and well-defined distortion measures using techniques similar to [@Wyner]. 
We omit the details here and assume that the above region continues to hold even for well-behaved continuous random variables (for example, a Gaussian source under MSE). $\mathcal{RD}_{CMS}$ is convex, as a time-sharing random variable can be embedded in $V_{\mathcal{L}}$. Refer to [@our_ISIT].

Strict Improvement for a General Class of Sources and Distortion Measures\[sec:Prior\_results\]
===============================================================================================

We begin by defining $\mathcal{Z}_{ZB}$, the set of all sources (for given distortion measures at the decoders), for which there exists an operating point $(R_{1},R_{2},D_{1},D_{2},D_{12})$ that *cannot* be achieved by an ‘independent quantization’ mechanism using the ZB coding scheme. More specifically, $X\in\mathcal{Z}_{ZB}$, if there exists a strict suboptimality in the ZB region when the closure is defined only over joint densities for the auxiliary random variables satisfying the following conditions: $$\begin{aligned} P(U_{1},U_{2}|X,V_{12}) & = & P(U_{1}|X,V_{12})P(U_{2}|X,V_{12})\nonumber \\ E\left[d_{\mathcal{K}}(X,\psi_{\mathcal{K}}(U_{\mathcal{K}}))\right] & \leq & D_{\mathcal{K}},\,\;\;\;\mathcal{K}\in\{1,2,12\}\nonumber \\ U_{12} & = & f(U_{1},U_{2},V_{12})\label{eq:ZZB}\end{aligned}$$ where $f$ is any deterministic function. We will show in Theorem \[thm:General\_CMS\] that $\forall X\in\mathcal{Z}_{ZB}$, $\mathcal{RD}_{VKG}\subset\mathcal{RD}_{CMS}$. ![The cross-section that we consider in order to prove that CMS achieves points outside the VKG region for a general class of sources and distortion measures. CMS achieves the complete RD region for this setup for several distortion regimes of the quadratic Gaussian MD problem. \[fig:3\_des\_new\]](3_des_new) Before stating the result, we describe the particular cross-section of the RD region that we will use to prove strict improvement in Theorem \[thm:General\_CMS\].
Consider a 3-descriptions MD setup for a source $X$ wherein we impose constraints only on distortions $(D_{1},D_{2},D_{3},D_{12},D_{23})$ and set the rest of the distortions, $(D_{13},D_{123})$ to $\infty$. This cross-section is schematically shown in Fig. \[fig:3\_des\_new\]. To illustrate the gains underlying CMS, here we restrict ourselves to the setting wherein we further impose $D_{1}=D_{3}$ and $D_{12}=D_{23}$. The points in this cross-section, achievable by VKG and CMS, are denoted by $\overline{\mathcal{RD}}_{VKG}(X)$ and $\overline{\mathcal{RD}}_{CMS}(X)$, respectively. We note that the symmetric setting is considered *only* for simplicity. The arguments can be easily extended to the asymmetric framework. This particular symmetric cross-section of the 3-descriptions MD problem is equivalent to the corresponding 2-descriptions problem, in the sense that, one could use any coding scheme to generate bit-streams for descriptions 1 and 2, respectively. Description 3 would then carry a replica (exact copy) of the bits sent in description 1. Due to the underlying symmetry in the problem setup, the distortion constraints at all the decoders are satisfied. Hence an achievable region based on the ZB coding scheme can be derived as follows. Let $(G_{12},F_{1},F_{2},F_{12})$ be any random variables jointly distributed with $X$ and taking values over arbitrary finite alphabets. 
Then the following RD-region, for which there exist functions $(\psi_{1},\psi_{2},\psi_{12})$ such that $R_{1}=R_{3}$, $D_{1}=D_{3}$, $D_{12}=D_{23}$, is achievable: $$\begin{aligned} & R_{1}\geq I(X;F_{1},G_{12}),\,\, R_{2}\geq I(X;F_{2},G_{12})\nonumber \\ & R_{1}+R_{2}\geq2I(X;G_{12})+H(F_{1}|G_{12})+H(F_{2}|G_{12})\nonumber \\ & -H(F_{1},F_{2},F_{12}|X,G_{12})+H(F_{12}|F_{1},F_{2},G_{12})\nonumber \\ & D_{\mathcal{K}}\geq E\left[d_{\mathcal{K}}(X,\psi_{\mathcal{K}}(F_{\mathcal{K}}))\right],\,\,\mathcal{K}\in\{1,2,12\}\label{eq:RD(X)}\end{aligned}$$ The closure of achievable RD-tuples over all random variables $(G_{12},F_{1},F_{2},F_{12})$ is denoted by $\overline{\mathcal{RD}}(X)$. In the following theorem, we will show that $\overline{\mathcal{RD}}(X)\subseteq\overline{\mathcal{RD}}_{CMS}(X)$. We also show that the VKG coding scheme *cannot* achieve the above RD region, i.e., $\overline{\mathcal{RD}}_{VKG}(X)\subset\overline{\mathcal{RD}}(X)$, if $X\in\mathcal{Z}_{ZB}$. We note that in Theorem \[thm:General\_CMS\], we focus only on the 3-descriptions setting. However, the results can be easily extended to the general $L-$descriptions scenario. Also note that $\overline{\mathcal{RD}}_{CMS}(X)$ could be strictly larger than $\overline{\mathcal{RD}}(X)$, in general. \[thm:General\_CMS\](i) For the setup shown in Fig. \[fig:3\_des\_new\] the CMS scheme achieves $\overline{\mathcal{RD}}(X)$, i.e., $\overline{\mathcal{RD}}(X)\subseteq\overline{\mathcal{RD}}_{CMS}(X)$. \(ii) If $X\in\mathcal{Z}_{ZB}$, then there exist points in $\overline{\mathcal{RD}}(X)$ that **cannot** be achieved by the VKG encoding scheme, i.e., $\overline{\mathcal{RD}}_{VKG}(X)\subset\overline{\mathcal{RD}}(X)$. It directly follows from (i) and (ii) that $\mathcal{RD}_{VKG}\subset\mathcal{RD}_{CMS}$ for the $L-$channel MD problem $\forall L\geq3$, if $X\in\mathcal{Z}_{ZB}$. We first provide an intuitive argument to justify the claim and then follow it up with a formal argument.
Due to the underlying symmetry in the setup, CMS introduces common layer random variables $V_{123}=G_{12}$ and $V_{13}=F_{1}$. It then sends the codeword of $V_{13}$ in both descriptions 1 and 3 (i.e., $U_{1}=U_{3}=V_{13}$). Hence it is sufficient for the encoder to generate enough codewords of $U_{2}=F_{2}$ (conditioned on $V_{123}$) to maintain joint typicality with the codewords of $V_{13}=F_{1}$. However, VKG is forced to set the common layer random variable $V_{13}$ to a constant. Thus, in this case, the encoder needs to generate enough codewords of $U_{2}$ to maintain joint typicality individually with the codewords of $U_{1}$ and $U_{3}$, which are now generated independently conditioned on $V_{123}$, entailing some excess rate for $U_{2}$[^3]. Part (i) of the theorem is straightforward to prove. We set $V_{123}=G_{12}$, $V_{13}=F_{1}$, $U_{2}=F_{2}$, $U_{12}=U_{23}=F_{12}$ and $U_{1}=U_{3}=V_{13}$, and the rest of the random variables to constants, in the CMS achievable region in [@our_ISIT]. This leads to the RD region in (\[eq:RD(X)\]). We next prove (ii). We consider one particular boundary point of (\[eq:RD(X)\]) and show that it cannot be achieved by VKG. Let $D_{1}$, $D_{2}$ and $D_{12}$ be fixed. 
Consider the following quantity: $$\begin{aligned} R_{VKG}^{*}(D_{1},D_{2},D_{12})=\inf\,\,\Bigl\{ R_{2}:R_{1}=R_{3}=R_{X}(D_{1})\label{eq:Thm3_2}\\ (R_{1},R_{2},R_{3},D_{1},D_{2},D_{1},D_{12},D_{12})\in\overline{\mathcal{RD}}_{VKG}(X)\Bigr\}\nonumber \end{aligned}$$ Note that the corresponding quantity achievable using $\overline{\mathcal{RD}}_{CMS}(X)$ is given by the solution to the following optimization problem: $$\begin{aligned} R_{CMS}^{*}(D_{1},D_{2},D_{12})=\inf\,\,\Bigl\{ I(U_{2};X,U_{1}|V_{123})\nonumber \\ +I(X;V_{123})+I(U_{12};X|V_{123},U_{1},U_{2})\Bigr\}\label{eq:Thm3_3}\end{aligned}$$ where the infimum is over all joint densities $P(V_{123},U_{1},U_{2},U_{12}|X)$ such that $P(V_{123},U_{1}|X)$ is a joint density for which there exists a function $\psi_{1}(\cdot)$ satisfying: $$\begin{aligned} I(X;V_{123},U_{1})=R_{X}(D_{1}), & & E\left[d_{1}(X,\psi_{1}(U_{1}))\right]=D_{1}\label{eq:CMS_13_constraints-1}\end{aligned}$$ i.e., $(V_{123},U_{1})$ leads to an RD-optimal reconstruction of $X$ at $D_{1}$, and $P(U_{12},U_{2}|X,U_{1},V_{123})$ is any distribution for which there exist functions $\psi_{2}(\cdot)$ and $\psi_{12}(\cdot)$ satisfying the distortion constraints for $D_{2}$ and $D_{12}$, respectively. We will show that $R_{VKG}^{*}>R_{CMS}^{*}$. We next specialize and restate $\overline{\mathcal{RD}}_{VKG}(X)$ for the considered cross-section. Let $(V_{123},U_{1},U_{2},U_{3},U_{12},U_{23})$ be any random variables jointly distributed with $X$ taking values on arbitrary alphabets. 
Then, $\overline{\mathcal{RD}}_{VKG}$ contains all rates and distortions for which there exist functions $\psi_{1}(\cdot),\psi_{2}(\cdot),\psi_{3}(\cdot),\psi_{12}(\cdot),\psi_{23}(\cdot)$, such that: $$\begin{aligned} R_{i} & \geq & I(X;U_{i},V_{123}),\,\, i\in\{1,2,3\}\nonumber \\ R_{i}+R_{2} & \geq & 2I(X;V_{123})+I(U_{i};U_{2}|V_{123})\nonumber \\ & & +I(X;U_{i},U_{2},U_{i2}|V_{123}),\,\, i\in\{1,3\}\nonumber \\ R_{1}+R_{3} & \geq & 2I(X;V_{123})+H(U_{1}|V_{123})\nonumber \\ & & +H(U_{3}|V_{123})-H(U_{1},U_{3}|X,V_{123})\nonumber \\ R_{1}+R_{2}+R_{3} & \geq & 3I(X;V_{123})+\sum_{i=1}^{3}H(U_{i}|V_{123})\nonumber \\ & & +\sum_{\mathcal{K}\in\{12,23\}}H(U_{\mathcal{K}}|\{U\}_{\mathcal{K}},V_{123})\nonumber \\ & & -H(U_{1},U_{2},U_{3},U_{12},U_{23}|X,V_{123})\label{eq:RD_VKG_bar-1}\end{aligned}$$ $$\begin{aligned} E\left(d_{\mathcal{K}}(X,\psi_{\mathcal{K}}(U_{\mathcal{K}}))\right) & \leq & D_{\mathcal{K}},\,\,\mathcal{K}\in\{1,2,3,12,23\}\label{eq:RD_VKG_Dist-1}\end{aligned}$$ where $R_{1}=R_{3}$, $D_{1}=D_{3}$ and $D_{12}=D_{23}$. Observe that the random variables $U_{13}$ and $U_{123}$ have been set to constants as we do not impose the distortion constraints $D_{13}$ and $D_{123}$, respectively. We can further restrict the joint density $P(V_{123},U_{1},U_{2},U_{3},U_{12},U_{23}|X)$ to satisfy: $$\begin{aligned} & P(U_{12},U_{23}|X,V_{123},U_{1},U_{2},U_{3})=\nonumber \\ & P(U_{12}|X,V_{123},U_{1},U_{2})P(U_{23}|X,V_{123},U_{2},U_{3})\label{eq:U_1223_Mark}\end{aligned}$$ without any loss of optimality. 
Next, imposing $R_{1}=R_{3}=R_{X}(D_{1})$ in (\[eq:RD\_VKG\_bar-1\]) forces the joint density $P(V_{123},U_{1},U_{3}|X)$ to satisfy the following constraints: $$\begin{aligned} I(X;V_{123},U_{i}) & = & R_{X}(D_{1}),\,\, i\in\{1,3\}\nonumber \\ E\left[d_{i}(X,\psi_{i}(V_{123},U_{i}))\right] & = & D_{1},\,\, i\in\{1,3\}\label{eq:VKG_13_constraints}\\ P(U_{1},U_{3}|X,V_{123}) & = & P(U_{1}|X,V_{123})\times P(U_{3}|X,V_{123})\nonumber \end{aligned}$$ where the last condition is required to satisfy the constraint on $R_{1}+R_{3}$ in (\[eq:RD\_VKG\_bar-1\]). Therefore, using (\[eq:RD\_VKG\_bar-1\]) and (\[eq:U\_1223\_Mark\]), we have: $$\begin{aligned} R_{VKG}^{*}=\inf\,\,\Bigl\{ I(X;V_{123})+I(U_{2};U_{1},U_{3},X|V_{123})\nonumber \\ +I(X;U_{12}|U_{1},U_{2},V_{123})+I(X;U_{23}|U_{2},U_{3},V_{123})\Bigr\}\label{eq:R_s_VKG}\end{aligned}$$ where the infimum is over all joint densities $P(V_{123},U_{1},U_{2},U_{3},U_{12},U_{23}|X)$ satisfying (\[eq:VKG\_13\_constraints\]) for which there exist functions $\psi_{2}(\cdot),\psi_{12}(\cdot),\psi_{23}(\cdot)$ satisfying the distortion constraints in (\[eq:RD\_VKG\_Dist-1\]). From (\[eq:R\_s\_VKG\]) and (\[eq:Thm3\_3\]), it follows that $R_{VKG}^{*}$ is equal to $R_{CMS}^{*}$ if and only if the two quantities on the RHS of (\[eq:R\_s\_VKG\]) and (\[eq:Thm3\_3\]), respectively, are equal. However, for any joint density, we have $I(U_{2};U_{1},U_{3},X|V_{123})\geq I(U_{2};U_{1},X|V_{123})$ and $I(X;U_{23}|V_{123},U_{2},U_{3})\geq0$. Also note that the constraints in (\[eq:CMS\_13\_constraints-1\]) are a subset of the constraints in (\[eq:VKG\_13\_constraints\]). Hence, for $R_{VKG}^{*}$ to be equal to $R_{CMS}^{*}$, any joint density which achieves $R_{VKG}^{*}$ must satisfy the following conditions: \(i) The joint density of $(X,V_{123},U_{1},U_{2},U_{12})$ must be the same as the corresponding joint density which achieves $R_{CMS}^{*}$ (in (\[eq:Thm3\_3\])). 
(ii) $I(U_{2};U_{3}|V_{123},U_{1},X)=0$ and $I(X;U_{23}|V_{123},U_{2},U_{3})=0$.\ The constraint $I(X;U_{23}|V_{123},U_{2},U_{3})=0$ implies that $X$ and $U_{23}$ are independent given $V_{123}$, $U_{2}$ and $U_{3}$. Equivalently, this constraint implies that the reconstruction $\hat{X}_{23}$ can be written as a deterministic function of $V_{123}$, $U_{2}$ and $U_{3}$, i.e., for $R_{VKG}^{*}$ to be equal to $R_{CMS}^{*}$, there must exist a function $\tilde{\psi}_{23}(V_{123},U_{2},U_{3})$ such that $E\left(d_{23}(X,\tilde{\psi}_{23}(V_{123},U_{2},U_{3}))\right)\leq D_{23}=D_{12}$. On the other hand, the constraint $I(U_{2};U_{3}|V_{123},U_{1},X)=0$ implies that $H(U_{3}|V_{123},U_{1},X)=H(U_{3}|V_{123},U_{1},U_{2},X)$. However, the joint density of $(X,V_{123},U_{1},U_{3})$ must satisfy (\[eq:VKG\_13\_constraints\]) for $R_{1}=R_{3}=R_{X}(D_{1})$ to hold, i.e., $H(U_{3}|V_{123},U_{1},X)=H(U_{3}|V_{123},X)$. Hence, for $R_{VKG}^{*}$ to be equal to $R_{CMS}^{*}$, we require: $$H(U_{3}|V_{123},X)=H(U_{3}|V_{123},U_{1},U_{2},X)\label{eq:contra_cond2}$$ which implies that the Markov chain $U_{2}\leftrightarrow(X,V_{123})\leftrightarrow U_{3}$ must hold. Recall that the joint density $P(U_{3},V_{123}|X)$ is RD-optimal at $D_{1}$ and the joint density $P(U_{2},V_{123}|X,U_{1})$ must be identical to the joint density which achieves $R_{CMS}^{*}$ (from condition (i)). Hence, it follows that, if $X\in\mathcal{Z}_{ZB}$, there exists at least one RD tuple in $\overline{\mathcal{RD}}(X)$ that cannot be achieved if we constrain the joint density to simultaneously satisfy both conditions (i) and (ii), proving the theorem. 
**Discussion:** A direct consequence of the above theorem is that, if $X\in\mathcal{Z}_{ZB}$, then the common layer codewords of CMS provide a strict improvement in the achievable region as compared to not using them, i.e., if $X\in\mathcal{Z}_{ZB}$, $\mathcal{RD}_{VKG}\Bigr|_{V_{\mathcal{L}}=\Phi}\subseteq\mathcal{RD}_{VKG}\subset\mathcal{RD}_{CMS}$, where $\mathcal{RD}\Bigr|_{V_{\mathcal{L}}=\Phi}$ denotes the VKG region when the common layer random variable (denoted by $V_{\mathcal{L}}$) is set to a constant $\Phi$[^4]. In fact, it is possible to show that, whenever $X\in\mathcal{Z}_{EC}$, $\mathcal{RD}\Bigr|_{V_{\mathcal{L}}=\Phi}\subset\mathcal{RD}_{CMS}$, where $\mathcal{Z}_{EC}$ is defined as the set of all sources for which there exists an operating point (with respect to the given distortion measures) that *cannot* be achieved by an ‘independent quantization’ mechanism using the EC coding scheme, i.e., an operating point that *cannot* be achieved by EC using a joint density for the auxiliary random variables satisfying: $$\begin{aligned} P(U_{1},U_{2}|X) & = & P(U_{1}|X)P(U_{2}|X)\nonumber \\ E\left[d_{\mathcal{K}}(X,\psi_{\mathcal{K}}(U_{\mathcal{K}}))\right] & \leq & D_{\mathcal{K}},\,\,\,\mathcal{K}\in\{1,2,12\}\nonumber \\ U_{12} & = & f(U_{1},U_{2})\label{eq:ZEC}\end{aligned}$$ where $f$ is any deterministic function. Note that the set $\mathcal{Z}_{ZB}$ is a subset of $\mathcal{Z}_{EC}$. Also observe that if $X\notin\mathcal{Z}_{EC}$, the concatenation of two independent optimal quantizers is optimal in achieving a joint reconstruction. While this condition could be satisfied for specific values of $D_{1},D_{2}$ and $D_{12}$, it is seldom satisfied *for all* values of $(D_{1},D_{2},D_{12})$. Though such sources are of some theoretical interest, multiple descriptions encoding for them is degenerate. 
Hence, with some trivial exceptions, it can be asserted that the common layer codewords in CMS can be used to achieve a strictly larger region (compared to not using any common codewords) for all sources and distortion measures, $\forall L\geq3$.

Gaussian MSE Setting\[sec:Gaussian-MSE-Setting\]
================================================

In the following theorem we show that, under MSE, a Gaussian source belongs to $\mathcal{Z}_{ZB}$. \[thm:General\_CMS-1\](i) CMS achieves the **complete** RD region for the symmetric 3-descriptions quadratic Gaussian setup shown in Fig. \[fig:3\_des\_new\]. \(ii) The VKG encoding scheme cannot achieve all the points in the region. It follows from (i) and (ii) that $\mathcal{RD}_{VKG}\subset\mathcal{RD}_{CMS}$ for the $L-$channel quadratic Gaussian MD problem $\forall L>2$. Proof of (i) is straightforward and follows directly from the proof of Theorem 1. Hence, we only prove (ii). Specifically, we show that a Gaussian random variable, under MSE, belongs to $\mathcal{Z}_{ZB}$; (ii) then follows directly from Theorem 1. Consider the 2-descriptions quadratic Gaussian problem. It follows from Ozarow’s results (see also [@EGC]) that, if $D_{12}\leq D_{1}+D_{2}-1$, then the following rate region is achievable (and complete): $$\begin{aligned} R_{\mathcal{K}} & \geq & \frac{1}{2}\log\frac{1}{D_{\mathcal{K}}},\,\,\mathcal{K}\in\{1,2,12\}\label{eq:high_dist_region-1}\end{aligned}$$ i.e., there is no excess rate incurred due to encoding the source using two descriptions. Observe that the excess sum rate term in the ZB scheme must be set to zero to achieve the above rate region. We will show that, if we restrict the optimization to conditionally independent joint densities, then it is impossible to simultaneously satisfy all the distortions and achieve $I(U_{1};U_{2}|V_{12})=0$. 
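As a quick numerical illustration of the no-excess-rate regime (the distortion values below are arbitrary choices satisfying $D_{12}\leq D_{1}+D_{2}-1$; unit source variance), the corner point $(R_{1},R_{2})=(\frac{1}{2}\log\frac{1}{D_{1}},\frac{1}{2}\log\frac{D_{1}}{D_{12}})$ meets all three bounds of (\[eq:high\_dist\_region-1\]) with the sum rate tight:

```python
import math

# Illustrative distortions (unit-variance Gaussian source, MSE, rates in
# bits) chosen to satisfy Ozarow's no-excess-rate condition D12 <= D1+D2-1.
D1, D2, D12 = 0.7, 0.7, 0.3
assert D12 <= D1 + D2 - 1

# Corner point of the region: R1 is tight, R2 picks up the remaining rate
R1 = 0.5 * math.log2(1 / D1)
R2 = 0.5 * math.log2(D1 / D12)
print(R1, R2, R1 + R2)  # R1 + R2 = 0.5*log2(1/D12): no excess sum rate
```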
Recall that the ZB region achievable using any joint density $P(X,V_{12},U_{1},U_{2},U_{12})$ is given by: $$\begin{aligned} R_{i} & \geq & I(X;V_{12},U_{i}),\,\, i\in\{1,2\}\nonumber \\ R_{1}+R_{2} & \geq & I(X;V_{12})+I(U_{1};U_{2}|V_{12})\nonumber \\ & & +I(X;V_{12},U_{1},U_{2},U_{12})\nonumber \\ D_{\mathcal{K}} & \geq & E\left[(X-\psi_{\mathcal{K}}(U_{\mathcal{K}}))^{2}\right],\,\,\mathcal{K}\in\{1,2,12\}\label{eq:}\end{aligned}$$ Let us consider the corner point $P_{0}\triangleq(R_{1},R_{2})=(\frac{1}{2}\log\frac{1}{D_{1}},\frac{1}{2}\log\frac{D_{1}}{D_{12}})$ for some $(D_{1},D_{2},D_{12})$ satisfying $D_{12}\leq D_{1}+D_{2}-1$ and show that this point is not achievable by the ZB scheme when we restrict the joint densities to satisfy (\[eq:ZZB\]). First, as $I(X;V_{12},U_{1},U_{2},U_{12})\geq\frac{1}{2}\log\frac{1}{D_{12}}$, $P_{0}$ can be achieved only by joint densities that satisfy $I(X;V_{12})=0$. Hence, to prove the theorem, it is sufficient to show that $P_{0}$ is not achievable when we restrict the optimization to joint densities satisfying (\[eq:ZZB\]) and $I(X;V_{12})=0$. Let $P(V_{12},U_{1},U_{2},U_{12},X)$ be any such joint density and let $\mathcal{V}_{12}$ be the alphabet for $V_{12}$. 
Then the associated ZB achievable region can be rewritten as: $$\begin{aligned} R_{i} & \geq & \sum_{v_{12}\in\mathcal{V}_{12}}P(v_{12})I(X;U_{i}|V_{12}=v_{12}),\,\, i\in\{1,2\}\nonumber \\ R_{1}+R_{2} & \geq & \sum_{v_{12}\in\mathcal{V}_{12}}P(v_{12})\Bigl[I(U_{1};U_{2}|V_{12}=v_{12})\nonumber \\ & & +I(X;U_{1},U_{2},U_{12}|V_{12}=v_{12})\Bigr]\nonumber \\ D_{\mathcal{K}} & \geq & E\left[(X-\psi_{\mathcal{K}}(U_{\mathcal{K}}))^{2}\right],\,\,\mathcal{K}\in\{1,2,12\}\nonumber \\ & = & E\left[E\left[(X-\psi_{\mathcal{K}}(U_{\mathcal{K}}))^{2}\Bigl|V_{12}\right]\right]\label{eq:ZB_v12_split-1}\end{aligned}$$ We will next show that the optimization can be further restricted to joint densities $P(X,V_{12})Q(\tilde{U}_{1},\tilde{U}_{2},\tilde{U}_{12}|X,V_{12})$ such that $(X,\tilde{U}_{1},\tilde{U}_{2},\tilde{U}_{12})$ are jointly Gaussian given $V_{12}=v_{12}$, $\forall v_{12}\in\mathcal{V}_{12}$ and satisfying $Q(\tilde{U}_{1},\tilde{U}_{2}|X,V_{12})=Q(\tilde{U}_{1}|X,V_{12})Q(\tilde{U}_{2}|X,V_{12})$. First, note that, as $I(X;V_{12})=0$, $P(X|V_{12}=v_{12})$ is Gaussian $\forall v_{12}\in\mathcal{V}_{12}$. Next, recall that $P_{0}$ is obtained by first minimizing $R_{1}$ followed by minimizing $R_{2}$ given $R_{1}$ subject to all the distortion constraints. From (\[eq:ZB\_v12\_split-1\]), we have $R_{1}=\min\sum_{v_{12}\in\mathcal{V}_{12}}P(v_{12})I(X;U_{1}|V_{12}=v_{12})$, where the minimization is over all joint densities $P(X,V_{12},U_{1})$ satisfying the distortion constraint for $D_{1}$. Let $P(X,V_{12},U_{1})$ be any joint density satisfying the distortion constraint for $D_{1}$. Consider the joint density generated as $Q(X,V_{12},\tilde{U}_{1})=P(X,V_{12})Q(\tilde{U}_{1}|X,V_{12})$ where $(X,\tilde{U}_{1})$ are jointly Gaussian given $V_{12}=v_{12}$ and $K_{X,\tilde{U}_{1}|V_{12}=v_{12}}=K_{X,U_{1}|V_{12}=v_{12}}$, $\forall v_{12}\in\mathcal{V}_{12}$. Observe that $Q(\cdot)$ also satisfies the distortion constraint for $D_{1}$. 
As a Gaussian density over the relevant random variables maximizes the conditional entropy for a fixed covariance matrix [@Thomas], it follows that $I(X;U_{1}|V_{12}=v_{12})\geq I(X;\tilde{U}_{1}|V_{12}=v_{12})$, $\forall v_{12}\in\mathcal{V}_{12}$. Hence, to achieve the minimum $R_{1}$, it is sufficient to restrict the optimization to densities wherein $(X,U_{1})$ are jointly Gaussian given $V_{12}$. Next, consider minimizing $R_{2}$ given $R_{1}$. From (\[eq:ZB\_v12\_split-1\]), we have: $$\begin{aligned} R_{2} & = & \min\Bigl\{\sum_{v_{12}\in\mathcal{V}_{12}}P(v_{12})\Bigl[I(\tilde{U}_{1};U_{2}|V_{12}=v_{12})\nonumber \\ & & +I(X;\tilde{U}_{1},U_{2},U_{12}|V_{12}=v_{12})\Bigr]-R_{1}\Bigr\}\end{aligned}$$ where the minimization is over all joint densities $P(X,V_{12},U_{1})P(U_{2},U_{12}|X,V_{12},U_{1})$ satisfying (\[eq:ZZB\]) and $I(X;V_{12})=0$, and where $(X,U_{1})$ are jointly Gaussian given $V_{12}=v_{12}$, $\forall v_{12}\in\mathcal{V}_{12}$ (required to minimize $R_{1}$). It is easy to show using similar arguments that the above minimization is again achieved by a joint density where $(X,U_{1},U_{2},U_{12})$ are jointly Gaussian given $V_{12}=v_{12}$ and such that $Q(U_{1},U_{2}|X,V_{12})=Q(U_{1}|X,V_{12})Q(U_{2}|X,V_{12})$, $\forall v_{12}\in\mathcal{V}_{12}$. Hence, to achieve $P_{0}$ using a joint density that satisfies (\[eq:ZZB\]), it is sufficient to optimize the rates over joint densities satisfying the following properties:

1) $(X,U_{1},U_{2},U_{12})$ are jointly Gaussian given $V_{12}=v_{12}$, $\forall v_{12}\in\mathcal{V}_{12}$;

2) $I(X;V_{12})=0$;

3) $I(U_{1};U_{2}|X,V_{12})=0$;

4) $P(X,V_{12},U_{1},U_{2},U_{12})$ satisfies all the distortion constraints.

Observe that, for any joint density that satisfies all the above properties, it is impossible to achieve $I(U_{1};U_{2}|V_{12})=0$: conditioned on $V_{12}$, $U_{1}$ and $U_{2}$ remain correlated through $X$, since the distortion constraints force both of them to be correlated with $X$. Therefore, the excess sum rate term in the ZB scheme is non-zero, and we conclude that $P_{0}$ is not achievable by the ZB scheme using any independent quantization mechanism. 
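The obstruction in property 3) can be checked numerically. The sketch below (the noise variances are illustrative choices) takes $V_{12}$ constant and conditionally independent Gaussian quantizers $U_{1}=X+W_{1}$, $U_{2}=X+W_{2}$; since $W_{1}$ and $W_{2}$ are independent, $\mathrm{Cov}(U_{1},U_{2})=\mathrm{Var}(X)>0$, so the excess term $I(U_{1};U_{2}|V_{12})$ cannot vanish:

```python
import math

# Conditionally independent Gaussian quantizers: U1 = X + W1, U2 = X + W2
# with W1 independent of W2 and V12 constant. X has unit variance; the
# noise variances below are illustrative choices.
var_x, var_w1, var_w2 = 1.0, 0.25, 0.25
cov_u1u2 = var_x  # W1 indep. of W2 implies Cov(U1,U2) = Var(X) > 0
r = cov_u1u2 / math.sqrt((var_x + var_w1) * (var_x + var_w2))
I_u1u2 = -0.5 * math.log2(1 - r**2)  # mutual information of a Gaussian pair
print(I_u1u2)  # strictly positive: the excess term cannot vanish
```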
Therefore, a Gaussian random variable under MSE belongs to $\mathcal{Z}_{ZB}$, proving the theorem. Note that, as $\mathcal{Z}_{ZB}\subseteq\mathcal{Z}_{EC}$, a Gaussian source under MSE also belongs to $\mathcal{Z}_{EC}$. Hence, the ‘correlated quantization’ scheme (an extreme special case of VKG), which has been proven to be complete for several cross-sections of the $L-$descriptions quadratic Gaussian MD problem [@Jun_Chen_ind_central], is strictly suboptimal in general.

Points on the boundary\[sub:Points-on-the\]
===========================================

Before stating the results formally, we review Ozarow’s result for the 2-descriptions MD setting. Ozarow showed that the complete region for the 2-descriptions Gaussian MD problem can be achieved using a ‘correlated quantization’ scheme which imposes the following joint distribution for $(U_{1},U_{2},U_{12})$ in the EC scheme: $$\begin{aligned} U_{1}=X+W_{1}\nonumber \\ U_{2}=X+W_{2}\label{eq:Ozarow_Result-1}\end{aligned}$$ $U_{12}=E(X|U_{1},U_{2})$, where $W_{1}$ and $W_{2}$ are zero mean Gaussian random variables independent of $X$ with covariance matrix $K_{W_{1}W_{2}}$, and the functions $\psi_{\mathcal{K}}(U{}_{\mathcal{K}})$ are given by the respective MSE-optimal estimators, e.g., $\psi_{1}(U_{1})=E\left[X|U_{1}\right]$. The covariance matrix $K_{W_{1}W_{2}}$ is set to satisfy all the distortion constraints. 
Specifically, the optimum $K_{W_{1}W_{2}}$ is given by: $$K_{W_{1}W_{2}}=\left[\begin{array}{cc} \sigma_{1}^{2} & \rho_{12}\sigma_{1}\sigma_{2}\\ \rho_{12}\sigma_{1}\sigma_{2} & \sigma_{2}^{2} \end{array}\right]\label{eq:Ozarow_K-1}$$ where $\sigma_{i}^{2}=\frac{D_{i}}{1-D_{i}}$, $i\in\{1,2\}$, and the optimum $\rho_{12}$, denoted by $\rho_{12}^{*}$, is given by (see [@Zamir]): $$\begin{aligned} \rho_{12}^{*} & = & \begin{cases} -\frac{\sqrt{\pi D_{12}^{2}+\gamma}-\sqrt{\pi D_{12}^{2}}}{(1-D_{12})\sqrt{D_{1}D_{2}}} & D_{12}\leq D_{12}^{max}\\ 0 & D_{12}\geq D_{12}^{max} \end{cases}\nonumber \\ \gamma & = & (1-D_{12})\Bigl[(D_{1}-D_{12})(D_{2}-D_{12})\nonumber \\ & & +D_{12}D_{1}D_{2}-D_{12}^{2}\Bigr]\nonumber \\ D_{12}^{max} & = & D_{1}D_{2}/(D_{1}+D_{2}-D_{1}D_{2})\nonumber \\ \pi & = & (1-D_{1})(1-D_{2})\label{eq:other_defn-1}\end{aligned}$$ We denote the complete Gaussian-MSE $L$-descriptions region by $\mathcal{RD}_{G}^{L}$. The characterization of $\mathcal{RD}_{G}^{2}$ is given in [@EGC] (see also [@Zamir]) and we omit restating it explicitly here for brevity. In this section we show that CMS achieves the complete RD region for several cross-sections of the general quadratic Gaussian $L-$channel MD problem. We again begin with the 3-descriptions case and then extend the results to the $L$-channel framework. Recall the setup shown in Fig. \[fig:3\_des\_new\], i.e., a cross-section of the general 3-descriptions rate-distortion region wherein we impose constraints only on the distortions $(D_{1},D_{2},D_{3},D_{12},D_{23})$ and set the remaining distortions, $(D_{13},D_{123})$, to $1$. Here we consider the general asymmetric case, i.e., $D_{1}\neq D_{3}$ and $D_{12}\neq D_{23}$, and show that the CMS scheme achieves the complete rate region in several distortion regimes. In the following theorem, without loss of generality, we assume that $D_{1}\leq D_{3}$. If $D_{3}\leq D_{1}$, then the theorem holds by interchanging ‘$1$’ and ‘$3$’ everywhere. 
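The diagonal entries $\sigma_{i}^{2}=D_{i}/(1-D_{i})$ in (\[eq:Ozarow\_K-1\]) can be sanity-checked directly (a minimal sketch; the numeric $D_{1}$ is an arbitrary illustration, and the unit source variance is as assumed throughout): with $U_{1}=X+W_{1}$, the MMSE estimator $\psi_{1}(U_{1})=E[X|U_{1}]$ then attains exactly distortion $D_{1}$:

```python
# Sanity check of the diagonal entries of the optimal K_{W1W2}: with
# U1 = X + W1, Var(X) = 1 and Var(W1) = sigma1^2 = D1/(1-D1), the MMSE
# estimator psi1(U1) = E[X|U1] attains exactly distortion D1.
# D1 below is an illustrative choice.
D1 = 0.3
var_w1 = D1 / (1 - D1)
# Linear-Gaussian MMSE: Var(X|U1) = Var(X) - Cov(X,U1)^2 / Var(U1)
mmse = 1.0 - 1.0 / (1.0 + var_w1)
print(mmse)  # equals D1
```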
Let $D_{12}$ be any distortion such that $D_{12}\leq\min\{D_{1},D_{2}\}$. We define $D_{23}^{*}=D_{23}^{*}(D_{1},D_{2},D_{3},D_{12})$ as: $$D_{23}^{*}=\frac{\sigma_{2}^{2}\sigma_{3}^{2}\left(1-\rho{}^{2}\right)}{\sigma_{2}^{2}\sigma_{3}^{2}\left(1-\rho{}^{2}\right)+\sigma_{2}^{2}+\sigma_{3}^{2}-2\sigma_{2}\sigma_{3}\rho}\label{eq:defn_D23-1}$$ where $\sigma_{i}^{2}=\frac{D_{i}}{1-D_{i}}$, $i\in\{2,3\}$, and $$\rho=\rho_{12}^{*}\frac{\sigma_{1}}{\sigma_{3}}\label{eq:defn_rho}$$ where $\rho_{12}^{*}$ is defined in (\[eq:other\_defn-1\]). In the following theorem, we will show that CMS achieves the complete rate-region if $D_{23}=D_{23}^{*}$. \[thm:Sum\_Rate\_Gauss\]For the setup shown in Fig. \[fig:3\_des\_new\], let $D_{1}\leq D_{3}$. Then, \(i) CMS achieves the complete rate-region if: $$\begin{aligned} D_{23} & = & D_{23}^{*}(D_{1},D_{2},D_{3},D_{12})\label{eq:sum_rate_thm1}\end{aligned}$$ where $D_{23}^{*}$ is defined in (\[eq:defn\_D23-1\]). The rate region is given by: $$\begin{aligned} R_{i} & \geq & \frac{1}{2}\log\frac{1}{D_{i}},\,\, i\in\{1,2,3\}\nonumber \\ R_{1}+R_{2} & \geq & \frac{1}{2}\log\frac{1}{D_{1}D_{2}}+\delta(D_{1},D_{2},D_{12})\nonumber \\ R_{2}+R_{3} & \geq & \frac{1}{2}\log\frac{1}{D_{2}D_{3}}+\delta(D_{2},D_{3},D_{23})\label{eq:sum_rate_thm2}\end{aligned}$$ where $\delta(\cdot)$ is defined by: $$\delta(D_{1},D_{2},D_{12})=\frac{1}{2}\log\left(\frac{1}{1-(\rho_{12}^{*})^{2}}\right)\label{eq:delta_defn}$$ where $\rho_{12}^{*}$ is defined in (\[eq:other\_defn-1\]). \(ii) Moreover, CMS achieves the minimum sum-rate if one of the following holds: \(a) For a fixed $D_{12}$, $D_{23}\geq D_{23}^{*}(D_{1},D_{2},D_{3},D_{12})$ \(b) For a fixed $D_{23}$, $D_{12}\in\{D_{12}:\delta(D_{2},D_{3},D_{23})\geq\delta(D_{1},D_{2},D_{12})\}$ We note that the above rate region *cannot* be achieved by VKG. We omit the details of the proof here as it can be proved along the same lines as the proof of Theorem \[thm:General\_CMS-1\]. 
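The quantities appearing in the theorem are all explicitly computable. The sketch below is a minimal implementation of (\[eq:other\_defn-1\]), (\[eq:defn\_D23-1\]) and (\[eq:delta\_defn\]), evaluated at illustrative distortions (the same $D_{1},D_{2},D_{3}$ as the example at the end of this section; rates in bits):

```python
import math

def rho12_star(Da, Db, Dab):
    """rho12* of (eq:other_defn-1); unit-variance Gaussian source, MSE."""
    Dmax = Da * Db / (Da + Db - Da * Db)
    if Dab >= Dmax:
        return 0.0
    Pi = (1 - Da) * (1 - Db)
    g = (1 - Dab) * ((Da - Dab) * (Db - Dab) + Dab * Da * Db - Dab**2)
    return -(math.sqrt(Pi * Dab**2 + g) - math.sqrt(Pi * Dab**2)) / (
        (1 - Dab) * math.sqrt(Da * Db))

def D23_star(D1, D2, D3, D12):
    """D23* of (eq:defn_D23-1), with rho = rho12* sigma1/sigma3 per (eq:defn_rho)."""
    s1, s2, s3 = (math.sqrt(D / (1 - D)) for D in (D1, D2, D3))
    rho = rho12_star(D1, D2, D12) * s1 / s3
    num = s2**2 * s3**2 * (1 - rho**2)
    return num / (num + s2**2 + s3**2 - 2 * s2 * s3 * rho)

def delta(Da, Db, Dab):
    """Excess sum-rate term of (eq:delta_defn), in bits."""
    r = rho12_star(Da, Db, Dab)
    return 0.5 * math.log2(1 / (1 - r**2))

# Illustrative distortions (matching the example used later in this section)
D1, D2, D3, D12 = 0.1, 0.15, 0.2, 0.05
D23 = D23_star(D1, D2, D3, D12)
print(D23, delta(D1, D2, D12), delta(D2, D3, D23))
```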
An achievable rate-distortion region can be derived for general distortions using the encoding principles we derive as part of this proof. However, it is hard to prove outer bounds if the conditions in (\[eq:sum\_rate\_thm1\]) are not satisfied and hence we omit stating the results explicitly here. Both the CMS and the VKG schemes achieve the complete rate region when $D_{12}\geq D_{12}^{max}$ and $D_{23}\geq D_{23}^{max}$, where $D_{12}^{max}$ and $D_{23}^{max}$ are defined in (\[eq:other\_defn-1\]). It is easy to show that in this case an independent quantization scheme is optimal and the complete achievable rate-region is given by $R_{i}\geq\frac{1}{2}\log\frac{1}{D_{i}}\,\, i\in\{1,2,3\}$. It is easy to verify that CMS achieves the minimum sum-rate whenever $D_{12}=D_{23}$ for any $D_{1},D_{3}$. We begin with the proof of (i). The proof of (ii) then follows almost directly from (i). First we show the converse, which is quite obvious. Conditions on $R_{i}$ follow from the converse to the source coding theorem. Conditions on $R_{1}+R_{2}$ and $R_{2}+R_{3}$ follow from Ozarow’s result, to achieve $(D_{1},D_{2},D_{12})$ using descriptions $\{1,2\}$ and to achieve $(D_{2},D_{3},D_{23})$ using descriptions $\{2,3\}$ at the respective decoders. We next prove that CMS achieves the rate region in (\[eq:sum\_rate\_thm2\]) if (\[eq:sum\_rate\_thm1\]) holds. We first give an intuitive argument to explain the encoding scheme. Description 3 carries an RD-optimal quantized version of $X$ (which achieves distortion $D_{3}$). Description 1 carries all the bits embedded in description 3 along with ‘refinement bits’ which assist in achieving distortion $D_{1}\leq D_{3}$. This entails no loss in optimality as a Gaussian source is successively refinable under MSE [@Successive_Refinement]. Description 2 then carries a quantized version of the source which is correlated with the information in descriptions 1 and 3. 
We will show that if $D_{23}=D_{23}^{*}(D_{1},D_{2},D_{3},D_{12})$, then the correlations can be set such that description 2 is optimal with respect to both descriptions 1 and 3. Formally, to achieve the rate region in (\[eq:sum\_rate\_thm2\]), we set the auxiliary random variables in the CMS coding scheme as follows: $$\begin{aligned} V_{13} & = & X+W_{1}+W_{3}\nonumber \\ U_{3} & = & V_{13}\nonumber \\ U_{1} & = & X+W_{1}\nonumber \\ U_{2} & = & X+W_{2}\nonumber \\ U_{12}=\Phi & & U_{23}=\Phi\label{eq:Sum_rate_pf_3}\end{aligned}$$ and the functions $\psi(\cdot)$ as the respective MSE-optimal estimators, where $W_{1},W_{2},W_{3}$ are zero mean Gaussian random variables independent of $X$ with covariance matrix: $$K_{W_{1}W_{2}W_{3}}=\left[\begin{array}{ccc} \tilde{\sigma}_{1}^{2} & \rho_{12}\tilde{\sigma}_{1}\tilde{\sigma}_{2} & 0\\ \rho_{12}\tilde{\sigma}_{1}\tilde{\sigma}_{2} & \tilde{\sigma}_{2}^{2} & 0\\ 0 & 0 & \tilde{\sigma}_{3}^{2} \end{array}\right]\label{eq:Sum_rate_pf_4}$$ where $\tilde{\sigma}_{1}^{2}=\sigma_{1}^{2}=\frac{D_{1}}{1-D_{1}}$, $\tilde{\sigma}_{2}^{2}=\sigma_{2}^{2}=\frac{D_{2}}{1-D_{2}}$, $\tilde{\sigma}_{3}^{2}=\sigma_{3}^{2}-\sigma_{1}^{2}=\frac{D_{3}}{1-D_{3}}-\frac{D_{1}}{1-D_{1}}$. The correlation coefficient $\rho_{12}$ is set to achieve distortion $D_{12}$, i.e., $\rho_{12}=\rho_{12}^{*}$ as defined in (\[eq:other\_defn-1\]). Let $W_{13}=W_{1}+W_{3}$. Observe that the encoding for descriptions 2 and 3 resembles Ozarow’s correlated quantization scheme with $U_{2}=X+W_{2}$ and $U_{3}=X+W_{13}$. Let the correlation coefficient between $W_{2}$ and $W_{13}$ be denoted by $\rho$. 
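The covariance construction above can be verified numerically (a sketch under the stated choices; the numeric $\rho_{12}^{*}$ below is the value of (\[eq:other\_defn-1\]) evaluated at the illustrative distortions $D_{1}=0.1$, $D_{2}=0.15$, $D_{12}=0.05$). Gaussian conditioning gives $\mbox{Var}(X|U_{2},V_{13})$ directly from the joint covariance of $(X,U_{2},V_{13})$, and it matches $D_{23}^{*}$ of (\[eq:defn\_D23-1\]):

```python
import numpy as np

# Numerical check of the construction (eq:Sum_rate_pf_3)/(eq:Sum_rate_pf_4):
# U2 = X + W2 and V13 = X + W1 + W3, unit-variance X independent of the W's.
# rho12 below is the value of (eq:other_defn-1) evaluated at the
# illustrative distortions D1=0.1, D2=0.15, D12=0.05.
D1, D2, D3 = 0.1, 0.15, 0.2
rho12 = -0.23187
s1sq, s2sq, s3sq = D1 / (1 - D1), D2 / (1 - D2), D3 / (1 - D3)
s1t, s2t = np.sqrt(s1sq), np.sqrt(s2sq)   # sigma-tilde 1, sigma-tilde 2
s3tsq = s3sq - s1sq                        # sigma-tilde 3 squared; >= 0 iff D3 >= D1

# Joint covariance of (X, U2, V13); Cov(U2, V13) = 1 + Cov(W2, W1)
K = np.array([
    [1.0, 1.0,                        1.0],
    [1.0, 1.0 + s2sq,                 1.0 + rho12 * s1t * s2t],
    [1.0, 1.0 + rho12 * s1t * s2t,    1.0 + s3sq],
])
# Gaussian conditioning: Var(X|U2,V13) = K_xx - b K_obs^{-1} b^T
b = K[0, 1:]
var_cond = K[0, 0] - b @ np.linalg.solve(K[1:, 1:], b)
print(var_cond)  # matches D23* of (eq:defn_D23-1) at these distortions
```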
We have the following equation relating $\rho_{12}$ and $\rho$ (which is equivalent to (\[eq:defn\_rho\])): $$\rho_{12}^{*}\tilde{\sigma}_{1}=\rho\sqrt{\tilde{\sigma}_{1}^{2}+\tilde{\sigma}_{3}^{2}}\label{eq:rho_defn2}$$ Note that the above relation is derived using the independence of $W_{2}$ and $(W_{1},W_{3})$, which follows from our choice of $K_{W_{1}W_{2}W_{3}}$. Hence the minimum distortion $D_{23}$ achievable using the above choice for the joint density of the auxiliary random variables is given by: $$\begin{aligned} D_{23} & = & \mbox{Var}(X|U_{2},U_{3},V_{13})\nonumber \\ & = & \mbox{Var}(X|U_{2},V_{13})\nonumber \\ & = & D_{23}^{*}\label{eq:min_D23}\end{aligned}$$ We next derive the rates required by this choice of $K_{W_{1}W_{2}W_{3}}$. Direct application of Theorem \[thm:main\] using the above joint density leads to the following achievable rate region for any given distortions $D_{1},D_{2},D_{3},D_{12},D_{23}$: $$\begin{aligned} R_{13}^{''} & \geq & \frac{1}{2}\log\frac{1}{D_{3}}\\ R_{2}^{'} & \geq & \frac{1}{2}\log\frac{1}{D_{2}}\\ R_{1}^{'}+R_{13}^{''} & \geq & \frac{1}{2}\log\frac{1}{D_{1}}\\ R_{2}^{'}+R_{13}^{''} & \geq & H(V_{13})+H(U_{2})-H(V_{13},U_{2}|X)\\ & = & H(U_{3})+H(U_{2})-H(U_{3},U_{2}|X)\\ & = & \frac{1}{2}\log\frac{1}{D_{3}D_{2}}+\frac{1}{2}\log\left(\frac{1}{1-\rho{}^{2}}\right)\\ & = & \frac{1}{2}\log\frac{1}{D_{3}D_{2}}+\delta(D_{2},D_{3},D_{23}^{*})\end{aligned}$$ $$\begin{aligned} R_{1}^{'}+R_{2}^{'}+R_{13}^{''} & \geq & H(V_{13})+H(U_{1}|V_{13})+H(U_{2})\\ & & -H(U_{1},V_{13},U_{2}|X)\\ & = & I(X;U_{1},V_{13})+I(U_{2};X,U_{1},V_{13})\\ & =^{(a)} & I(X;U_{1})+I(X;U_{2})\\ & & +I(U_{2};U_{1},V_{13}|X)\\ & = & I(X;U_{1})+I(X;U_{2})\\ & & +I(U_{2};U_{1},U_{3}|X)\\ & =^{(b)} & I(X;U_{1})+I(X;U_{2})\\ & & +I(W_{2};W_{1},W_{1}+W_{3})\\ & = & I(X;U_{1})+I(X;U_{2})+I(W_{2};W_{1})\\ & & +I(W_{2};W_{3}|W_{1})\\ & =^{(c)} & I(X;U_{1})+I(X;U_{2})+I(W_{2};W_{1})\\ & = & 
\frac{1}{2}\log\frac{1}{D_{1}D_{2}}+\frac{1}{2}\log\left(\frac{1}{1-(\rho_{12}^{*})^{2}}\right)\\ & = & \frac{1}{2}\log\frac{1}{D_{1}D_{2}}+\delta(D_{1},D_{2},D_{12})\end{aligned}$$ $$\begin{aligned} R_{1} & = & R_{13}^{''}+R_{1}^{'}\nonumber \\ R_{2} & = & R_{2}^{'}\nonumber \\ R_{3} & = & R_{13}^{''}\label{eq:CMS_Gauss_ach_region}\end{aligned}$$ where $(a)$ follows from the Markov chain $X\leftrightarrow U_{1}\leftrightarrow V_{13}$, $(b)$ from the independence of $X$ and $(W_{1},W_{2},W_{3})$, and $(c)$ from the independence of $W_{3}$ and $(W_{1},W_{2})$. At first glance, it might be tempting to conclude that the region for the tuple $(R_{1},R_{2},R_{3})$ in (\[eq:CMS\_Gauss\_ach\_region\]) is equivalent to the region given by (\[eq:sum\_rate\_thm2\]). This is not the case in general, as the equations in (\[eq:CMS\_Gauss\_ach\_region\]) carry an implicit non-negativity constraint on the auxiliary rates $R_{13}^{''},R_{1}^{'},R_{2}^{'}\geq0$. However, we will show that if $D_{3}\geq D_{1}$, then the two regions are indeed equivalent. We denote the rate region given in (\[eq:sum\_rate\_thm2\]) by $\mathcal{R}$ and the region in (\[eq:CMS\_Gauss\_ach\_region\]) by $\mathcal{R}^{*}$. Clearly, $\mathcal{R}^{*}\subseteq\mathcal{R}$, as any $(R_{1},R_{2},R_{3})$ that satisfies (\[eq:CMS\_Gauss\_ach\_region\]) also satisfies (\[eq:sum\_rate\_thm2\]). We need to show that $\mathcal{R}^{*}\supseteq\mathcal{R}$. Towards proving this claim, note that both $\mathcal{R}$ and $\mathcal{R}^{*}$ are convex regions bounded by hyperplanes. Hence, it is sufficient for us to show that all the corner points of $\mathcal{R}$ lie in $\mathcal{R}^{*}$. 
Clearly, $\mathcal{R}$ has 6 corner points denoted by $P_{ijk}$, $i,j,k\in\{1,2,3\}$, defined as: $$\begin{aligned} P_{ijk} & = & \{r_{i},r_{j},r_{k}\}\nonumber \\ r_{i} & = & \min R_{i}\nonumber \\ r_{j} & = & \min_{R_{i}=r_{i}}R_{j}\nonumber \\ r_{k} & = & \min_{R_{i}=r_{i},R_{j}=r_{j}}R_{k}\end{aligned}$$ To prove $\mathcal{R}^{*}\supseteq\mathcal{R}$, we need to prove that every corner point $(r_{1},r_{2},r_{3})\in\mathcal{R}$ is achieved by some non-negative $(R_{13}^{''},R_{1}^{'},R_{2}^{'},R_{1},R_{2},R_{3})\in\mathcal{R}^{*}$ such that $R_{i}=r_{i},\,\, i\in\{1,2,3\}$. We set $R_{13}^{''}=R_{3}=r_{3}$ and $R_{2}^{'}=R_{2}=r_{2}$ and show that we can always find $R_{1}^{'}\geq0$ satisfying (\[eq:CMS\_Gauss\_ach\_region\]) such that $R_{1}=R_{1}^{'}+R_{13}^{''}=r_{1}$. Let us first consider the points $P_{213}=P_{231}$ given by: $$\begin{aligned} r_{1} & = & \frac{1}{2}\log\frac{1}{D_{1}}+\delta(D_{1},D_{2},D_{12})\nonumber \\ r_{2} & = & \frac{1}{2}\log\frac{1}{D_{2}}\nonumber \\ r_{3} & = & \frac{1}{2}\log\frac{1}{D_{3}}+\delta(D_{2},D_{3},D_{23})\end{aligned}$$ This can be achieved by using the following auxiliary rates, $R_{2}^{'}=r_{2}$, $R_{13}^{''}=r_{3}$ and $$\begin{aligned} R_{1}^{'} & = & \frac{1}{2}\log\frac{D_{3}}{D_{1}}+\delta(D_{1},D_{2},D_{12})\nonumber \\ & & -\delta(D_{2},D_{3},D_{23})\nonumber \\ & = & \frac{1}{2}\log\frac{(1-D_{1})D_{3}-(\rho_{12}^{*})^{2}D_{1}(1-D_{3})}{(1-D_{1})D_{1}(1-(\rho_{12}^{*})^{2})}\end{aligned}$$ It is easy to verify that $R_{1}^{'}\geq0$ if $D_{3}\geq D_{1}$. Hence $P_{213}=P_{231}\in\mathcal{R}^{*}$. 
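The nonnegativity of $R_{1}^{'}$ can be checked numerically (a sketch; the numeric $\rho_{12}^{*}$ below is the value of (\[eq:other\_defn-1\]) evaluated at the illustrative point $D_{1}=0.1$, $D_{2}=0.15$, $D_{12}=0.05$). The two expressions for $R_{1}^{'}$ above agree, and the result is nonnegative whenever $D_{3}\geq D_{1}$:

```python
import math

# Check that R1' >= 0 at the corner point P213 when D3 >= D1. The numeric
# rho below is rho12* of (eq:other_defn-1) evaluated at the illustrative
# point D1=0.1, D2=0.15, D12=0.05; D23 is set to D23*.
D1, D3 = 0.1, 0.2
rho = -0.23187
closed_form = 0.5 * math.log2(
    ((1 - D1) * D3 - rho**2 * D1 * (1 - D3)) /
    ((1 - D1) * D1 * (1 - rho**2)))
# Equivalent delta-form: 0.5*log2(D3/D1) + delta(D1,D2,D12) - delta(D2,D3,D23*),
# using rho23 = rho12* * sigma1/sigma3 from (eq:defn_rho)
s1sq, s3sq = D1 / (1 - D1), D3 / (1 - D3)
rho23 = rho * math.sqrt(s1sq / s3sq)
delta_form = (0.5 * math.log2(D3 / D1)
              + 0.5 * math.log2(1 / (1 - rho**2))
              - 0.5 * math.log2(1 / (1 - rho23**2)))
print(closed_form, delta_form)  # equal, and nonnegative since D3 >= D1
```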
Let us next consider the points $P_{132}=P_{312}$ given by: $$\begin{aligned} r_{1} & = & \frac{1}{2}\log\frac{1}{D_{1}}\nonumber \\ r_{2} & = & \frac{1}{2}\log\frac{1}{D_{2}}\nonumber \\ & & +\max\{\delta(D_{1},D_{2},D_{12}),\delta(D_{2},D_{3},D_{23})\}\nonumber \\ r_{3} & = & \frac{1}{2}\log\frac{1}{D_{3}}\end{aligned}$$ Again it is easy to show that $(R_{13}^{''},R_{1}^{'},R_{2}^{'})=(r_{3},r_{1}-r_{3},r_{2})$ belongs to $\mathcal{R}^{*}$. Finally, we consider the remaining two points $P_{123}$ and $P_{321}$. $P_{123}$ is given by: $$\begin{aligned} r_{1} & = & \frac{1}{2}\log\frac{1}{D_{1}}\nonumber \\ r_{2} & = & \frac{1}{2}\log\frac{1}{D_{2}}+\delta(D_{1},D_{2},D_{12})\nonumber \\ r_{3} & = & \frac{1}{2}\log\frac{1}{D_{3}}\nonumber \\ & & +\left(\delta(D_{2},D_{3},D_{23})-\delta(D_{1},D_{2},D_{12})\right)^{+}\end{aligned}$$ where $x^{+}=\max\{x,0\}$. Consider the following auxiliary rates: $R_{13}^{''}=r_{3}$, $R_{2}^{'}=r_{2}$ and $R_{1}^{'}=\frac{1}{2}\log\frac{D_{3}}{D_{1}}$. Clearly the first three constraints in (\[eq:CMS\_Gauss\_ach\_region\]) are satisfied by these auxiliary rates. The following inequalities prove that the last two constraints are also satisfied by these rates and hence $P_{123}\in\mathcal{R}^{*}$. 
$$\begin{aligned} R_{2}^{'}+R_{13}^{''} & = & r_{2}+r_{3}\\ & = & \frac{1}{2}\log\frac{1}{D_{2}D_{3}}+\delta(D_{1},D_{2},D_{12})\\ & & +\left(\delta(D_{2},D_{3},D_{23})-\delta(D_{1},D_{2},D_{12})\right)^{+}\\ & \geq & \frac{1}{2}\log\frac{1}{D_{2}D_{3}}+\delta(D_{2},D_{3},D_{23})\end{aligned}$$ $$\begin{aligned} R_{2}^{'}+R_{1}^{'}+R_{13}^{''} & = & \frac{1}{2}\log\frac{1}{D_{1}D_{2}}+\delta(D_{1},D_{2},D_{12})\nonumber \\ & & +\left(\delta(D_{2},D_{3},D_{23})-\delta(D_{1},D_{2},D_{12})\right)^{+}\nonumber \\ & \geq & \frac{1}{2}\log\frac{1}{D_{1}D_{2}}+\delta(D_{1},D_{2},D_{12})\end{aligned}$$ Next consider $P_{321}$: $$\begin{aligned} r_{1} & = & \frac{1}{2}\log\frac{1}{D_{1}}\nonumber \\ & & +\left(\delta(D_{1},D_{2},D_{12})-\delta(D_{2},D_{3},D_{23})\right)^{+}\nonumber \\ r_{2} & = & \frac{1}{2}\log\frac{1}{D_{2}}+\delta(D_{2},D_{3},D_{23})\nonumber \\ r_{3} & = & \frac{1}{2}\log\frac{1}{D_{3}}\end{aligned}$$ Using the same arguments as before, it is easy to show that $P_{321}\in\mathcal{R}^{*}$ by using the following auxiliary rates: $R_{13}^{''}=r_{3}$, $R_{2}^{'}=r_{2}$ and $R_{1}^{'}=\frac{1}{2}\log\frac{D_{3}}{D_{1}}+\left(\delta(D_{1},D_{2},D_{12})-\delta(D_{2},D_{3},D_{23})\right)^{+}$. Therefore, it follows that $\mathcal{R}=\mathcal{R}^{*}$ and hence CMS achieves the complete rate region, proving (i). We next prove (ii)(a). It follows from (i) that the following rate point is achievable $\forall D_{23}\geq D_{23}^{*}$: $$\begin{aligned} \left\{ R_{1},R_{2},R_{3}\right\} & = & \Bigl\{\frac{1}{2}\log\frac{1}{D_{1}},\frac{1}{2}\log\frac{1}{D_{2}}+\delta(D_{1},D_{2},D_{12}),\nonumber \\ & & \frac{1}{2}\log\frac{1}{D_{3}}\Bigr\}\end{aligned}$$ Also observe that $\forall D_{23}\geq D_{23}^{*}$, $\delta(D_{1},D_{2},D_{12})\geq\delta(D_{2},D_{3},D_{23})$ and hence a lower bound to the sum rate is $\frac{1}{2}\log\frac{1}{D_{1}D_{2}D_{3}}+\delta(D_{1},D_{2},D_{12})$. Therefore the above point achieves the minimum sum rate $\forall D_{23}\geq D_{23}^{*}$. 
The proof of (ii)(b) follows similarly by noting that if $D_{12}\in\{D_{12}:\delta(D_{2},D_{3},D_{23})\geq\delta(D_{1},D_{2},D_{12})\}$, the minimum sum rate is given by $\frac{1}{2}\log\frac{1}{D_{1}D_{2}D_{3}}+\delta(D_{2},D_{3},D_{23})$, which is achieved by the point: $$\begin{aligned} \left\{ R_{1},R_{2},R_{3}\right\} & = & \Bigl\{\frac{1}{2}\log\frac{1}{D_{1}},\frac{1}{2}\log\frac{1}{D_{2}}+\delta(D_{2},D_{3},D_{23}),\nonumber \\ & & \frac{1}{2}\log\frac{1}{D_{3}}\Bigr\}\end{aligned}$$ This proves the theorem. It is interesting to observe that the optimum encoding scheme introduces common codewords (creates an interaction) between descriptions 1 and 3, even though these two descriptions are never received simultaneously at the decoder. While common codewords typically imply redundancy in the system, in this case, introducing them allows for better co-ordination between the descriptions leading to a smaller rate for the common branch. Similar principles can be used to show that CMS achieves the complete RD region for the $L$-channel quadratic Gaussian MD problem, for several distortion regimes. ![Example: This figure denotes the regime of distortions wherein the CMS scheme achieves the complete rate region and the minimum sum rate. Here $D_{1}=0.1,D_{2}=0.15$ and $D_{3}=0.2$. The blue points correspond to the region of distortions wherein the CMS scheme achieves the complete rate region and the green points represent the region where the CMS scheme achieves the minimum sum rate \[fig:Example:-This-figure\].](Gauss_Eg) We consider an asymmetric setting where $D_{1}=0.1,D_{2}=0.15$ and $D_{3}=0.2$. Fig. \[fig:Example:-This-figure\] shows the regime of distortions where CMS achieves the complete rate region and minimum sum rate. The blue region corresponds to the set of distortion pairs $(D_{12},D_{23})$ wherein the CMS rate region is complete. The green region denotes the minimum sum rate points. 
It is evident from the figure that CMS achieves the minimum sum rate for a fairly large regime of distortions. Conclusion ========== In this paper, we showed that CMS achieves a strictly larger region compared to VKG for a general class of sources and distortion measures, which includes the quintessential setting of a Gaussian source under mean squared error. As a consequence, it follows that the ‘correlated quantization’ scheme (an extreme special case of VKG) is strictly suboptimal in general. We also showed that CMS achieves the complete rate region for the 3-description symmetric cross-section and several asymmetric cross-sections of the setup shown in Fig. \[fig:3\_des\_new\]. [^1]: The work was supported by the NSF under grants CCF-1016861 and CCF-1118075. [^2]: Note the difference between $\{U\}_{\mathcal{S}}$ and $U_{\mathcal{S}}$. $\{U\}_{\mathcal{S}}$ is a set of variables, whereas $U_{\mathcal{S}}$ is a single variable. [^3]: It might be tempting to conclude that the suboptimality in VKG is due to conditions for joint typicality of all the codewords, while for this cross-section, joint typicality of codewords of $U_{1}$ and $U_{3}$ is unnecessary. However, it is possible to show that common codewords provide strict improvement even when joint typicality only within prescribed subsets is imposed. The details are omitted here. [^4]: Note that setting $V_{\mathcal{L}}$ to a constant in VKG is equivalent to setting all the common layer random variables to constants in CMS.
--- abstract: | When the experimental data set is contaminated, we usually employ robust alternatives to common location and scale estimators, such as the sample median and Hodges-Lehmann estimators for location and the sample median absolute deviation and Shamos estimators for scale. It is well known that these estimators have high positive asymptotic breakdown points and are normally consistent as the sample size tends to infinity. To our knowledge, the finite-sample properties of these estimators, as functions of the sample size, have not been well studied in the literature. In this paper, we fill this gap by providing their closed-form finite-sample breakdown points and by calculating the unbiasing factors and relative efficiencies of these robust estimators through extensive Monte Carlo simulations for sample sizes up to 100. The numerical study shows that the unbiasing factor significantly improves the finite-sample performance. In addition, we provide predicted values for the unbiasing factors, obtained by the least squares method, which can be used when the sample size exceeds 100. **Keywords**: breakdown, unbiasedness, robustness, relative efficiency author: - | Chanseok Park  and Haewon Kim\ Applied Statistics Laboratory\ Department of Industrial Engineering\ Pusan National University\ Busan 46241, Korea - | Min Wang\ Department of Management Science and Statistics\ The University of Texas at San Antonio\ San Antonio, TX, USA bibliography: - 'REFm22.bib' title: '**Finite-sample properties of robust location and scale estimators**' --- Introduction ============ Estimation of the location and scale parameters of a distribution, such as the mean and standard deviation of a normal population, is a common and important problem in various branches of engineering, including biomedical, chemical, materials, mechanical and industrial engineering. 
The quality of the data plays an important role in estimating these parameters, yet in the engineering sciences the experimental data are often contaminated due to measurement errors, volatile operating conditions, etc. Thus, robust estimators are advocated as alternatives to commonly used location and scale estimators (e.g., the sample mean and sample standard deviation) for estimating the population parameters. For example, when some of the observations are contaminated by outliers, we usually adopt the sample median and Hodges-Lehmann [@Hodges/Lehmann:1963] estimators for the location parameter and the sample median absolute deviation [@Hampel:1974] and Shamos [@Shamos:1976] estimators for the scale parameter, because these estimators have a large breakdown point and thus perform well both in the presence and absence of outliers. The breakdown point is a common criterion for measuring the robustness of an estimator: the larger the breakdown point, the more robust the estimator. The finite-sample breakdown point [@Donoho/Huber:1983] is defined as the maximum proportion of incorrect or arbitrarily large observations that an estimator can tolerate without producing an egregiously incorrect value. For example, the breakdown points of the sample mean and the sample median are 0 and $1/2$, respectively. In general, the breakdown point can be written as a function of the sample size. In this paper, we provide the finite-sample breakdown points for the various location and scale estimators mentioned above. We show that when the sample sizes are small, they are noticeably different from the asymptotic breakdown point, which is the limit of the finite-sample breakdown point as the sample size approaches infinity. It deserves mentioning that for robust scale estimation, the MAD and the Shamos estimators not only have positive asymptotic breakdown points, but also are normally consistent as the sample size goes to infinity. 
However, when the sample size is small, they have serious biases and provide inappropriate estimates of the scale parameter. Some bias-correction techniques are commonly adopted to improve the finite-sample performance of these estimators. For instance, [@Williams:2011] studied finite-sample correction factors through computer simulations for several simple robust estimators of the standard deviation of a normal population, including the MAD, interquartile range, shortest half interval, and median moving range. Later on, [@Hayes:2014] obtained the finite-sample bias-correction factors of the MAD for the scale parameter. They have shown that finite-sample correction factors can significantly reduce the systematic biases of these robust estimators, especially when the sample sizes are small. To our knowledge, the finite-sample properties of the sample median absolute deviation (MAD) and Shamos estimators have received little attention in the literature, except for a few references covering this topic for small sample sizes. This observation motivates us to employ extensive Monte Carlo simulations to obtain the empirical biases of these estimators. Given that the empirical variance of an estimator is one of the important metrics for evaluating it, we also obtain the finite-sample variances of the median, Hodges-Lehmann, MAD and Shamos estimators under the standard normal distribution, which are not fully provided in the statistics literature. Numerical results show that the unbiasing factor significantly improves the finite-sample performance of the estimator. In addition, we provide predicted values for the unbiasing factors, obtained by the least squares method, which can be used when the sample size exceeds 100. The remainder of this paper is organized as follows. In Section \[section:02\], we derive the finite-sample breakdown points for robust location and scale estimators. 
Using extensive Monte Carlo simulations, we calculate the empirical biases of the MAD and Shamos estimators in Section \[section:03\] and the finite-sample variances of the median, HL, MAD, and Shamos estimators in Section \[section:04\]. Some concluding remarks are provided in Section \[section:05\]. Finite-sample breakdown point {#section:02} ============================= In this section, we derive the finite-sample breakdown points for the robust location estimators (the sample median and the Hodges-Lehmann (HL) estimator) in Subsection \[section:02:01\] and for the robust scale estimators (the MAD and the Shamos estimator) in Subsection \[section:02:02\]. Robust location estimators {#section:02:01} -------------------------- It is well known that the asymptotic breakdown points of the sample median and the HL estimator are 1/2 and $1-1/\sqrt{2}$, respectively. Note that these estimators are in a closed form and are location-equivariant in the sense that $\hat{\theta}(X_1+b,X_2+b,\ldots,X_n+b) = \hat{\theta}(X_1,X_2,\ldots,X_n)+b$. However, in many cases, the finite-sample breakdown point can be noticeably different from the asymptotic breakdown point, especially when the sample size $n$ is small. For instance, when $n = 10$, we observe from equation (\[EQ:e-median\]) that the finite-sample breakdown point of the median is 0.4, which is different from its asymptotic breakdown point of 0.5. Suppose that we have a sample of size $n$, $X_1, X_2, \ldots, X_n$, and let $\lfloor\cdot\rfloor$ denote the floor function ($\lfloor x \rfloor$ is the largest integer not exceeding $x$). Then we can make up to $\lfloor (n-1)/2 \rfloor$ of the sample observations arbitrarily large without making the median arbitrarily large. 
The finite-sample breakdown point of the median is given by $$\label{EQ:e-median} \epsilon_n = \frac{\lfloor (n-1)/2 \rfloor}{n}.$$ Using the fact that $\lfloor x \rfloor$ can be rewritten as $\lfloor x \rfloor = x - \delta$ where $0 \le \delta < 1$, we have $$\epsilon_n = \frac{\lfloor (n-1)/2 \rfloor}{n} = \frac{1}{2} - \frac{1}{2n} - \frac{\delta}{n}.$$ The asymptotic breakdown point of the median is obtained by taking the limit of the finite-sample breakdown point as $n\to\infty$, which provides that $\epsilon = \lim_{n\to\infty} \epsilon_n = 1/2$. The HL estimator is defined as the median of all pairwise averages of the sample observations and is given by $$\mathop{\mathrm{median}} \Big( \frac{X_i+X_j}{2} \Big).$$ Note that the median of all pairwise averages can be calculated for $i<j$, $i\le j$, and $\forall(i,j)$. We denote these three versions as $$\begin{aligned} \mathrm{HL1} &= \mathop{\mathrm{median}}_{i<j} \Big( \frac{X_i+X_j}{2} \Big), \ \ \mathrm{HL2} &= \mathop{\mathrm{median}}_{i\le j} \Big( \frac{X_i+X_j}{2} \Big), \ \ \mathrm{and} \ \ \mathrm{HL3} &= \mathop{\mathrm{median}}_{\forall(i,j)} \Big( \frac{X_i+X_j}{2} \Big),\end{aligned}$$ respectively. In what follows, we first derive the breakdown point for the $\mathrm{HL3}$ and then use a similar approach to derive the breakdown points for $\mathrm{HL1}$ and $\mathrm{HL2}$. Suppose that we make $k$ of the $n$ observations arbitrarily large with $0 \le k \le n$. Notice that there are $n\times n$ paired average terms (so-called Walsh averages) in the HL3 estimator: $(X_i+X_j)/2$, where $i,j = 1,2,\ldots,n$. Because the HL3 estimator is the median of the $n\times n$ values, the finite-sample breakdown point cannot be greater than $\lfloor (n^2-1)/2\rfloor/n^2$ due to equation (\[EQ:e-median\]). If we make $k$ of the $n$ observations arbitrarily large, then the number of arbitrarily large Walsh averages becomes $n^2-(n-k)^2$. 
These two facts provide the following relationship $$\frac{n^2-(n-k)^2}{n^2} \le \frac{\lfloor (n^2-1)/2\rfloor}{n^2},$$ which is equivalent to $k^2 - 2nk + \lfloor (n^2-1)/2\rfloor \ge 0$. The finite-sample breakdown point of the $\mathrm{HL3}$ is then given by $\epsilon_n = k^{*} / n$, where $$\label{EQ:kstar} k^{*} = \max\Big\{k \in \mathbb{N}: k^2-2nk+\lfloor(n^2-1)/2\rfloor\ge0 \textrm{~and~} 0 \le k \le n \Big\}.$$ To obtain an explicit formula for (\[EQ:kstar\]), we let $f(x) = x^2 - 2nx + \lfloor (n^2-1)/2\rfloor$. Since $f'(x)=2(x-n)$, $f(x)$ is decreasing for $x<n$. The roots of $f(x)=0$ are given by $x_1 = n-\sqrt{n^2 - \lfloor (n^2-1)/2\rfloor}$ and $x_2 = n+\sqrt{n^2 - \lfloor (n^2-1)/2\rfloor}$. Since $k$ is an integer and $k \le n$, we have $k^*=\lfloor x_1 \rfloor$, that is $k^* = \Big\lfloor n - \sqrt{n^2-\lfloor (n^2-1)/2\rfloor} \Big\rfloor.$ Then we have the closed-form finite-sample breakdown point of the $\mathrm{HL3}$ $$\label{EQ:e-HL3} \epsilon_n = \frac{\Big\lfloor n-\sqrt{n^2-\lfloor(n^2-1)/2\rfloor}\Big\rfloor}{n}.$$ The asymptotic breakdown point of $\mathrm{HL3}$ is given by $\epsilon = \lim_{n\to\infty} \epsilon_n$. Using $\lfloor x \rfloor = x-\delta$ where $0\le\delta<1$, we can rewrite (\[EQ:e-HL3\]) as $$\epsilon_n = \frac{n-\sqrt{n^2- (n^2-1)/2 +\delta_1}-\delta_2}{n},$$ where $0\le \delta_1 <1$ and $0\le \delta_2 <1$. Thus, we have $\epsilon = 1-1/\sqrt{2} \approx 29.3\%$. In the case of the $\mathrm{HL1}$ estimator, there are $n(n-1)/2$ Walsh averages. Since the $\mathrm{HL1}$ estimator is the median of the $n(n-1)/2$ Walsh averages, the finite-sample breakdown point cannot be greater than $\lfloor\{n(n-1)/2-1\}/2 \rfloor/ \{n(n-1)/2\}$ due to equation (\[EQ:e-median\]) again. If we make $k$ observations arbitrarily large with $k\le n-1$, then there are $n(n-1)/2 - (n-k)(n-k-1)/2$ arbitrarily large Walsh averages. 
Thus, the following inequality holds $$\frac{n(n-1)/2 - (n-k)(n-k-1)/2}{ n(n-1)/2} \le \frac{1}{n(n-1)/2} \Big\lfloor \frac{n(n-1)/2-1}{2} \Big\rfloor,$$ which is equivalent to $k^2 - (2n-1)k + 2\lfloor(n^2-n-2)/4\rfloor \ge 0.$ In a similar way as for equation (\[EQ:kstar\]), we let $k^*$ be the largest integer $k$ satisfying the above with $0 \le k \le n-1$. For convenience, we let $f(x)=x^2 - (2n-1)x + 2\lfloor(n^2-n-2)/4\rfloor$. Then $f(x)$ is decreasing for $x<n-1/2$ due to $f'(x)=2x-(2n-1)$ and the roots of $f(x)=0$ are given by $n-1/2 \pm \sqrt{(n-1/2)^2-2\lfloor(n^2-n-2)/4\rfloor}$. Thus, using a similar argument to that used for the $\mathrm{HL3}$ case, we have $k^* = \Big\lfloor n-1/2 - \sqrt{(n-1/2)^2-2\lfloor(n^2-n-2)/4\rfloor} \Big\rfloor.$ Then we have the closed-form finite-sample breakdown point of the $\mathrm{HL1}$ $$\label{EQ:e-HL1} \epsilon_n = \frac{\Big\lfloor n-1/2 - \sqrt{(n-1/2)^2-2\lfloor(n^2-n-2)/4\rfloor}\Big\rfloor}{n}.$$ It should be noted that we also have $\epsilon=\lim_{n\to\infty}\epsilon_n=1-1/\sqrt{2}$. Similar to the case of the $\mathrm{HL1}$, we obtain the closed-form finite-sample breakdown point of the $\mathrm{HL2}$ estimator $$\label{EQ:e-HL2} \epsilon_n = \frac{\Big\lfloor n+1/2 - \sqrt{(n+1/2)^2-2\lfloor(n^2+n-2)/4\rfloor}\Big\rfloor}{n}.$$ Robust scale estimators {#section:02:02} ----------------------- For robust scale estimation, we consider the MAD [@Hampel:1974] and the Shamos estimator [@Shamos:1976]. The MAD is given by $$\label{EQ:MAD} \mathrm{MAD} = \frac{\displaystyle{\mathop\mathrm{median}_{1\le i\le n}}|X_i-\tilde{\mu}|}{\Phi^{-1}({3}/{4})},$$ where $\tilde{\mu} = \mathrm{median}(X_i)$ and $\Phi^{-1}({3}/{4})$ is needed to make this estimator consistent under the normal distribution [@Rousseeuw/Croux:1993]. This resembles the median, and its finite-sample breakdown point is the same as that of the median in (\[EQ:e-median\]). 
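The closed-form breakdown points derived above (the MAD inherits that of the median) are easy to evaluate numerically; a minimal Python sketch (function names are ours) reproduces the entries of Table \[TBL:breakdown\]:

```python
from math import floor, sqrt

def bp_median(n):
    """Finite-sample breakdown point of the median (and the MAD)."""
    return (n - 1) // 2 / n

def bp_hl1(n):
    """Breakdown point of HL1 (and of the Shamos estimator)."""
    return floor(n - 0.5 - sqrt((n - 0.5) ** 2 - 2 * ((n * n - n - 2) // 4))) / n

def bp_hl2(n):
    """Breakdown point of HL2."""
    return floor(n + 0.5 - sqrt((n + 0.5) ** 2 - 2 * ((n * n + n - 2) // 4))) / n

def bp_hl3(n):
    """Breakdown point of HL3."""
    return floor(n - sqrt(n * n - (n * n - 1) // 2)) / n
```

For instance, `bp_median(10)` returns 0.4 and `bp_hl3(7)` returns $2/7 \approx 0.2857$, matching the tabulated values.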
The Shamos estimator is given by $$\label{EQ:Shamos} \mathrm{SH} = \frac{\displaystyle\mathop{\mathrm{median}}_{i < j} \big( |X_i-X_j| \big)}% {\sqrt{2}\,\Phi^{-1}(3/4)},$$ where $\sqrt{2}\,\Phi^{-1}(3/4)$ is needed to make this estimator consistent under the normal distribution [@Levy/etc:2011]. Of particular note is that the Shamos estimator resembles the HL1 estimator, with the Walsh averages replaced by the pairwise absolute differences. Thus, its finite-sample breakdown point is the same as that of the HL1 estimator in (\[EQ:e-HL1\]). In the case of the HL estimator, the median is calculated for $i<j$, $i\le j$, and $\forall(i,j)$, but the median in the Shamos estimator is calculated only for $i<j$ because $|X_i-X_j|=0$ for $i=j$. Note that the MAD and Shamos estimators are in a closed form and are scale-equivariant in the sense that $\hat{\theta}(aX_1+b,aX_2+b,\ldots,aX_n+b) = |a|\cdot\hat{\theta}(X_1,X_2,\ldots,X_n)$. In Table \[TBL:breakdown\], we provide the finite-sample breakdown points of the estimators considered in this paper. Also, we provide the plots of these values in Figure \[FIG:breakdown\]. ![The finite-sample breakdown points under consideration.\[FIG:breakdown\]](breakdown) Empirical biases {#section:03} ================ As mentioned above, the MAD in (\[EQ:MAD\]) and the Shamos estimator in (\[EQ:Shamos\]) are normally consistent; that is, as the sample size goes to infinity, each converges to the standard deviation $\sigma$ under the normal distribution $N(\mu,\sigma)$. However, when the sample size is small, they have serious biases. In this section, we obtain the unbiasing factors for the MAD and Shamos estimators through extensive Monte Carlo simulations. It deserves mentioning that the location estimators such as the median and the Hodges-Lehmann estimator have no bias under the normal distribution. For this simulation, we generated a sample of size $n$ from the standard normal distribution, $N(0,1)$, and calculated the MAD and Shamos estimates. 
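The two estimators used in this simulation can be sketched with the Python standard library, taking $\Phi^{-1}(3/4)$ from `statistics.NormalDist`; the function names are ours:

```python
from statistics import median, NormalDist

PHI34 = NormalDist().inv_cdf(0.75)  # Phi^{-1}(3/4) ~ 0.6744898

def mad(x):
    """Consistency-scaled median absolute deviation, Eq. (EQ:MAD)."""
    m = median(x)
    return median(abs(xi - m) for xi in x) / PHI34

def shamos(x):
    """Shamos estimator, Eq. (EQ:Shamos): median of |Xi - Xj| over i < j."""
    diffs = [abs(x[i] - x[j]) for i in range(len(x)) for j in range(i + 1, len(x))]
    return median(diffs) / (2 ** 0.5 * PHI34)
```

For the sample $(1,2,3,4,5)$, the raw median absolute deviation is 1, so `mad` returns $1/\Phi^{-1}(3/4) \approx 1.4826$.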
We repeated this simulation ten million times ($I=10^7$) to obtain the empirical biases of these two estimators. In Table \[TBL:eBias1\], we provide the empirical biases for $n=2,3,\ldots,100$. We also provide the plot of these empirical biases in Figure \[FIG:bias\]. Using these biases, we can easily obtain the unbiasing factors as follows. For convenience, let $A_n$ be the empirical bias of the $\mathrm{MAD}$. Then $1+A_n$ is the unbiasing factor and thus an empirically unbiased MAD is given by $$\frac{\mathrm{MAD}}{1+A_n}.$$ Similarly, an empirically unbiased Shamos estimator is given by $$\frac{\mathrm{SH}}{1+B_n},$$ where $B_n$ is the empirical bias of the $\mathrm{SH}$. ![Empirical biases of the MAD and Shamos estimators.\[FIG:bias\]](bias) For the case when $n > 100$, we suggest estimating them as follows. Since the MAD in (\[EQ:MAD\]) and Shamos in (\[EQ:Shamos\]) are normally consistent, $A_n$ and $B_n$ converge to zero as $n$ goes to infinity. For a large value of $n$, we suggest using the methods proposed by [@Hayes:2014] and [@Williams:2011]. [@Hayes:2014] suggests the use of $A_n \approx {a_1}/{n} + {a_2}/{n^2}$ and [@Williams:2011] suggests the use of $A_n \approx {a_3} {n}^{-a_4}$. Similarly, we can also estimate $B_n$ using $B_n \approx {b_1}/{n} + {b_2}/{n^2}$ and $B_n \approx {b_3} {n}^{-b_4}$. To estimate these, we obtained more empirical biases in Table \[TBL:eBias2\] for $n=109, 110, 119, 120, \ldots, 499, 500$. Using the values for the case of $n>50$, we can obtain the least squares estimate given by $$A_n = - \frac{0.76213}{n} - \frac{0.86413}{n^2}.$$ Also, we can obtain the least squares estimate of $A_n$ using the method of [@Williams:2011] after the logarithm transformation, which is given by $$A_n = -0.804168866 \cdot n^{-1.008922}.$$ Note that [@Hayes:2014] and [@Williams:2011] estimated $A_n$ for the case of odd and even values of $n$ separately. 
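The two fitted models for $A_n$ translate directly into code; a sketch using the coefficients quoted above (intended for large $n$; for $n\le 100$ the tabulated empirical biases should be preferred):

```python
def a_hayes(n):
    """Fitted MAD bias model of the Hayes-style form a1/n + a2/n^2."""
    return -0.76213 / n - 0.86413 / n ** 2

def a_williams(n):
    """Fitted MAD bias model of the Williams-style power-law form a3 * n^{-a4}."""
    return -0.804168866 * n ** -1.008922

def c5_model(n):
    """Model-based unbiasing factor c5(n) = 1 + A_n for the MAD (n > 100)."""
    return 1.0 + a_hayes(n)
```

For large $n$ the two fits agree closely; e.g., at $n=200$ both give $A_n \approx -0.0038$, so `c5_model(200)` is just below 1.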
However, for a large value of $n$, the gain in precision may not be noticeable, as Figure \[FIG:bias\] shows that there is no noticeable difference between the odd and even values of $n$. We can also obtain the least squares estimates of $B_n$ using [@Hayes:2014] and [@Williams:2011] for a large value of $n$ as follows $$\begin{aligned} B_n &= \frac{0.414253297}{n} + \frac{0.442396799}{n^2} \\ \intertext{and} B_n & = 0.435760656 \cdot n^{-1.0084443},\end{aligned}$$ respectively. In Table \[TBL:eBias2\], we provide the estimated biases of the MAD and Shamos estimators. These results show that the estimated biases are accurate to the fourth decimal place. Also, there is no noticeable difference between the two estimates by [@Hayes:2014] and [@Williams:2011]. It is well known that the sample standard deviation $S_n$ is not unbiased under the normal distribution. To make it unbiased, the unbiasing factor $c_4(n)$ is widely used so that $S_n/c_4(n)$ is unbiased. We suggest using the notations $c_5$ and $c_6$ for the unbiasing factors of the MAD and Shamos estimators, respectively. Then we can obtain the unbiased MAD and Shamos estimators for any value of $n$ given by $$\label{EQ:unbiased-MAD-SH} \frac{\mathrm{MAD}}{c_5(n)} \text{~~and~~} \frac{\mathrm{SH}}{c_6(n)},$$ where $c_5(n)=1+A_n$ and $c_6(n)=1+B_n$. Empirical variances {#section:04} =================== In this section, through extensive Monte Carlo simulations, we calculate the finite-sample variances of the median, HL, MAD and Shamos estimators under the standard normal distribution. We generated a sample of size $n$ from the standard normal distribution and calculated their empirical variances for a given value of $n$. We repeated this simulation ten million times ($I=10^7$) to obtain the empirical variance for each of $n=2, 3, \ldots, 100$. It should be noted that the values of the asymptotic relative efficiency (ARE) of various estimators are known. 
Here the ARE is defined as $$\label{EQ:ARE} \mathrm{ARE}(\hat{\theta}_2 | \hat{\theta}_1) = \lim_{n\to\infty} \mathrm{RE}(\hat{\theta}_2 | \hat{\theta}_1) ,$$ where $$\label{EQ:RE} \mathrm{RE}(\hat{\theta}_2 | \hat{\theta}_1) = \frac{\mathrm{Var}(\hat{\theta}_1)}{\mathrm{Var}(\hat{\theta}_2)},$$ and $\hat{\theta}_1$ is often a reference or baseline estimator. For example, under the normal distribution, we have $\mathrm{ARE}(\mathrm{median} | \bar{X}) = {2}/{\pi} \approx 0.6366 $, $\mathrm{ARE}(\mathrm{HL} | \bar{X}) = {3}/{\pi} \approx 0.9549$, $\mathrm{ARE}(\mathrm{MAD} | S_n) = 0.37$, and $\mathrm{ARE}(\mathrm{Shamos} | S_n) = 0.863$, where $\bar{X}$ is the sample mean and $S_n$ is the sample standard deviation. For more details, see [@Serfling:2011] and [@Levy/etc:2011]. Note that with a random sample of size $n$ from the standard normal distribution, we have $\mathrm{Var}(\bar{X})=1/n$ and $\mathrm{Var}(S_n)=1- c_4(n)^2$, where $c_4(n) = \{2/(n-1)\}^{1/2} \cdot \Gamma(n/2)/\Gamma( (n-1)/2 )$. Thus, we have $n\,\mathrm{Var}(\mathrm{median}) \approx 1/0.6366 \approx 1.57$, $n\,\mathrm{Var}(\mathrm{HL}) \approx 1/0.9549 \approx 1.0472$, $\mathrm{Var}(\mathrm{MAD})/(1-c_4(n)^2) \approx 1/0.37 \approx 2.7027$ and $\mathrm{Var}(\mathrm{Shamos})/(1-c_4(n)^2) \approx 1/0.863 \approx 1.15875$ for a large value of $n$. We provide these values for each $n$ in Tables \[TBL:nvar1\] and \[TBL:nvar2\]. In Figure \[FIG:nvar\], we also plotted these values. For the case when $n > 100$, we suggest estimating these values based on [@Hayes:2014] or [@Williams:2011], as we did for the biases in the previous section. 
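The factor $c_4(n)$ is conveniently evaluated through the log-gamma function to avoid overflow of $\Gamma(n/2)$ for large $n$; a minimal sketch:

```python
from math import exp, lgamma, sqrt

def c4(n):
    """Unbiasing factor for S_n under normality:
    c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)."""
    return sqrt(2.0 / (n - 1)) * exp(lgamma(n / 2) - lgamma((n - 1) / 2))

def var_sn(n):
    """Var(S_n) for a standard normal sample: 1 - c4(n)^2."""
    return 1.0 - c4(n) ** 2
```

For example, $c_4(2) = \sqrt{2/\pi} \approx 0.7979$ and $c_4(10) \approx 0.9727$, the values familiar from control-chart tables.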
We suggest the following models to obtain these values for $n>100$: $$\begin{aligned} n\,\mathrm{Var}(\mathrm{median}) &= 1.57 + \frac{a_1}{n} + \frac{a_2}{n^2} \\ n\,\mathrm{Var}(\mathrm{HL}) &= 1.0472 + \frac{a_3}{n} + \frac{a_4}{n^2} \\ \frac{\mathrm{Var}(\mathrm{MAD})}{1-c_4(n)^2} &= 2.7027 + \frac{a_5}{n} + \frac{a_6}{n^2} \\ \intertext{and} \frac{\mathrm{Var}(\mathrm{Shamos})}{1-c_4(n)^2}&= 1.15875+\frac{a_7}{n}+\frac{a_8}{n^2}.\end{aligned}$$ One can also use the method based on [@Williams:2011]. For brevity, we used the method based on [@Hayes:2014]. To estimate these values for $n>100$, we obtained the empirical REs in Table \[TBL:nvar3\] for $n=109$, $110$, $119$, $120$, $\ldots$, $499$, $500$. Notice that Figure \[FIG:nvar\] indicates that it is reasonable to estimate the values for the median and MAD in the case of odd and even values of $n$ separately. Using the large values of $n$, we can estimate the above coefficients. For this, we use the simulation results in Tables \[TBL:nvar2\] and \[TBL:nvar3\]. 
Then the least squares estimates based on the method of [@Hayes:2014] are given by $$\begin{aligned} %-- location n\,\mathrm{Var}(\mathrm{median}) &= 1.57 - \frac{0.6589}{n} - \frac{0.943}{n^2} \qquad (\textrm{for~odd~} n) \\ n\,\mathrm{Var}(\mathrm{median}) &= 1.57 - \frac{2.1950}{n} + \frac{1.929}{n^2} \qquad (\textrm{for~even~} n) \\ n\,\mathrm{Var}(\mathrm{HL1}) &= 1.0472 + \frac{0.1127}{n} + \frac{0.8365}{n^2} \\ n\,\mathrm{Var}(\mathrm{HL2}) &= 1.0472 + \frac{0.2923}{n} + \frac{0.2258}{n^2} \\ n\,\mathrm{Var}(\mathrm{HL3}) &= 1.0472 + \frac{0.2022}{n} + \frac{0.4343}{n^2} \\ %-- scale \frac{\mathrm{Var}(\mathrm{MAD})}{1-c_4(n)^2} &= 2.7027 +\frac{0.2996}{n} - \frac{149.357}{n^2} \qquad (\textrm{for~odd~} n) \\ \frac{\mathrm{Var}(\mathrm{MAD})}{1-c_4(n)^2} &= 2.7027 - \frac{2.417}{n} - \frac{153.010}{n^2} \qquad (\textrm{for~even~} n) \intertext{and} \frac{\mathrm{Var}(\mathrm{Shamos})}{1-c_4(n)^2}& = 1.15875 +\frac{2.822}{n} +\frac{12.238}{n^2}.\end{aligned}$$ In Tables \[TBL:RE1\] and \[TBL:RE2\], we also calculated the REs of the afore-mentioned estimators for $n=1,2,\ldots,100$ using the above empirical variances. For $n>100$, one can also easily obtain the REs using the above estimated variances. It should be noted that the REs of the median and HL estimators are one for $n=1,2$. When $n=1,2$, the median and the HL are essentially the same as the sample mean. Note that the HL1 is not available for $n=1$. Another noticeable result is that the RE of the HL1 is exactly one when $n=4$. When $n=4$, the HL1 is the median of $(X_1+X_2)/2$, $(X_1+X_3)/2$, $(X_1+X_4)/2$, $(X_2+X_3)/2$, $(X_2+X_4)/2$ and $(X_3+X_4)/2$. Then this is the same as the median of $(X_{(1)}+X_{(2)})/2$, $(X_{(1)}+X_{(3)})/2$, $(X_{(1)}+X_{(4)})/2$, $(X_{(2)}+X_{(3)})/2$, $(X_{(2)}+X_{(4)})/2$ and $(X_{(3)}+X_{(4)})/2$, where $X_{(i)}$ are order statistics. 
Because $(X_{(1)}+X_{(2)})/2 \le (X_{(1)}+X_{(3)})/2 \le (X_{(1)}+X_{(4)})/2$ and $(X_{(2)}+X_{(3)})/2 \le (X_{(2)}+X_{(4)})/2 \le (X_{(3)}+X_{(4)})/2$, we have $$\mathrm{HL}_1 = \frac{1}{2}\left( \frac{X_{(1)}+X_{(4)}}{2}+\frac{X_{(2)}+X_{(3)}}{2} \right) = \frac{X_1+X_2+X_3+X_4}{4} = \bar{X}.$$ Thus, the RE of the HL1 should be one. In this case, as expected, the finite-sample breakdown point is zero, as provided in Table \[TBL:breakdown\]. It should be noted that the $\mathrm{MAD}/c_5(n)$ and $\mathrm{Shamos}/c_6(n)$ are unbiased for $\sigma$ under the normal distribution, but their squared values are not unbiased for $\sigma^2$. Using the empirical and estimated variances, we can obtain the unbiased versions as follows. For convenience, we denote $v_5(n) = \mathrm{Var}(\mathrm{MAD})$ and $v_6(n) = \mathrm{Var}(\mathrm{Shamos})$, where the variances are obtained using a sample of size $n$ from the standard normal distribution $N(0,1)$ as mentioned earlier. Since the MAD and Shamos estimators are scale-equivariant, we have $\mathrm{Var}(\mathrm{MAD}) = v_5(n) \sigma^2$ and $\mathrm{Var}(\mathrm{Shamos}) = v_6(n) \sigma^2$ with a sample from the normal distribution $N(\mu, \sigma^2)$. It is immediate from (\[EQ:unbiased-MAD-SH\]) that $E(\mathrm{MAD}) = c_5(n) \sigma$ and $E(\mathrm{Shamos}) = c_6(n) \sigma$. 
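The relation $E(\mathrm{MAD}) = c_5(n)\,\sigma$ can be checked by a quick simulation; the sketch below estimates $c_5(n)$ under $N(0,1)$ (rather than taking it from the tables) and then verifies the scale-equivariance prediction on fresh $N(\mu,\sigma^2)$ data. The sample sizes and replication counts are illustrative choices, far smaller than the $I=10^7$ used in the paper:

```python
import random
from statistics import NormalDist, fmean, median

PHI34 = NormalDist().inv_cdf(0.75)

def mad(x):
    """Consistency-scaled MAD, as in Eq. (EQ:MAD)."""
    m = median(x)
    return median(abs(xi - m) for xi in x) / PHI34

rng = random.Random(1)
n, reps = 10, 20000

# Estimate c5(n) = E[MAD] under the standard normal distribution.
c5n = fmean(mad([rng.gauss(0, 1) for _ in range(n)]) for _ in range(reps))

# By scale equivariance, E[MAD] under N(mu, sigma^2) should be c5(n) * sigma.
sigma = 3.0
m_sig = fmean(mad([rng.gauss(5, sigma) for _ in range(n)]) for _ in range(reps))
ratio = m_sig / (c5n * sigma)  # should be close to 1
```

With $n=10$ the estimated $c_5(n)$ comes out near $0.91$, consistent with the empirical bias $A_{10}\approx-0.087$ reported in Table \[TBL:eBias1\].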
Considering $E(\hat{\theta}^2) = \mathrm{Var}(\hat{\theta}) + E(\hat{\theta})^2$, we have $E(\mathrm{MAD}^2) = v_5(n) \sigma^2 + \big\{ c_5(n) \sigma \big\}^2$ and $E(\mathrm{Shamos}^2) = v_6(n) \sigma^2 + \big\{ c_6(n) \sigma \big\}^2.$ Thus, the following estimators are unbiased for $\sigma^2$ under the normal distribution $$\frac{\mathrm{MAD}^2}{v_5(n) + c_5(n)^2} \text{~~~and~~~} \frac{\mathrm{Shamos}^2}{v_6(n) + c_6(n)^2} .$$ [Concluding remarks]{} {#section:05} ====================== In this paper, we studied the finite-sample properties of the sample median and Hodges-Lehmann estimators for location and of the sample median absolute deviation and Shamos estimators for scale. We first obtained closed-form finite-sample breakdown points for these robust location and scale estimators. We then calculated the unbiasing factors and relative efficiencies of the MAD and the Shamos estimators for the scale parameter through extensive Monte Carlo simulations for sample sizes up to 100. The numerical study showed that the unbiasing factor significantly improves the finite-sample performance. In addition, we provided predicted values for the unbiasing factors, obtained by the least squares method, which can be used when the sample size exceeds 100. To facilitate the implementation of the proposed methods, we developed an R package, which will be available on the author’s personal web page. Acknowledgment {#acknowledgment .unnumbered} ============== This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (No. NRF-2017R1A2B4004169). 
Appendix: Tables and Figures {#appendix-tables-and-figures .unnumbered} ============================ [1.0]{} ----- -- ---------------- ---------------- ----------- ----------- $n$ median and MAD HL1 and Shamos HL2 HL3 2 0.0000000 0.0000000 0.0000000 0.0000000 3 0.3333333 0.0000000 0.0000000 0.0000000 4 0.2500000 0.0000000 0.2500000 0.2500000 5 0.4000000 0.2000000 0.2000000 0.2000000 6 0.3333333 0.1666667 0.1666667 0.1666667 7 0.4285714 0.1428571 0.2857143 0.2857143 8 0.3750000 0.2500000 0.2500000 0.2500000 9 0.4444444 0.2222222 0.2222222 0.2222222 10 0.4000000 0.2000000 0.3000000 0.2000000 11 0.4545455 0.2727273 0.2727273 0.2727273 12 0.4166667 0.2500000 0.2500000 0.2500000 13 0.4615385 0.2307692 0.2307692 0.2307692 14 0.4285714 0.2142857 0.2857143 0.2857143 15 0.4666667 0.2666667 0.2666667 0.2666667 16 0.4375000 0.2500000 0.2500000 0.2500000 17 0.4705882 0.2352941 0.2941176 0.2352941 18 0.4444444 0.2777778 0.2777778 0.2777778 19 0.4736842 0.2631579 0.2631579 0.2631579 20 0.4500000 0.2500000 0.2500000 0.2500000 21 0.4761905 0.2380952 0.2857143 0.2857143 22 0.4545455 0.2727273 0.2727273 0.2727273 23 0.4782609 0.2608696 0.2608696 0.2608696 24 0.4583333 0.2500000 0.2916667 0.2916667 25 0.4800000 0.2800000 0.2800000 0.2800000 26 0.4615385 0.2692308 0.2692308 0.2692308 27 0.4814815 0.2592593 0.2962963 0.2592593 28 0.4642857 0.2857143 0.2857143 0.2857143 29 0.4827586 0.2758621 0.2758621 0.2758621 30 0.4666667 0.2666667 0.2666667 0.2666667 31 0.4838710 0.2580645 0.2903226 0.2903226 32 0.4687500 0.2812500 0.2812500 0.2812500 33 0.4848485 0.2727273 0.2727273 0.2727273 34 0.4705882 0.2647059 0.2941176 0.2647059 35 0.4857143 0.2857143 0.2857143 0.2857143 36 0.4722222 0.2777778 0.2777778 0.2777778 37 0.4864865 0.2702703 0.2702703 0.2702703 38 0.4736842 0.2631579 0.2894737 0.2894737 39 0.4871795 0.2820513 0.2820513 0.2820513 40 0.4750000 0.2750000 0.2750000 0.2750000 41 0.4878049 0.2682927 0.2926829 0.2926829 42 0.4761905 0.2857143 0.2857143 0.2857143 43 0.4883721 0.2790698 
0.2790698 0.2790698 44 0.4772727 0.2727273 0.2954545 0.2727273 45 0.4888889 0.2888889 0.2888889 0.2888889 46 0.4782609 0.2826087 0.2826087 0.2826087 47 0.4893617 0.2765957 0.2765957 0.2765957 48 0.4791667 0.2708333 0.2916667 0.2916667 49 0.4897959 0.2857143 0.2857143 0.2857143 50 0.4800000 0.2800000 0.2800000 0.2800000 ----- -- ---------------- ---------------- ----------- ----------- : Finite-sample breakdown points.\[TBL:breakdown\] [1.0]{} ----- -- -------------- ------------- -- ----- -- -------------- ------------- $n$ MAD Shamos $n$ MAD Shamos 1 NA NA 51 $-0.0152820$ $0.0082120$ 2 $-0.1633880$ $0.1831500$ 52 $-0.0149951$ $0.0081874$ 3 $-0.3275897$ $0.2989400$ 53 $-0.0146042$ $0.0079775$ 4 $-0.2648275$ $0.1582782$ 54 $-0.0145007$ $0.0078126$ 5 $-0.1781250$ $0.1011748$ 55 $-0.0140391$ $0.0076743$ 6 $-0.1594213$ $0.1005038$ 56 $-0.0139674$ $0.0075212$ 7 $-0.1210631$ $0.0676993$ 57 $-0.0136336$ $0.0074051$ 8 $-0.1131928$ $0.0609574$ 58 $-0.0134819$ $0.0072528$ 9 $-0.0920658$ $0.0543760$ 59 $-0.0130812$ $0.0071807$ 10 $-0.0874503$ $0.0476839$ 60 $-0.0129708$ $0.0070617$ 11 $-0.0741303$ $0.0426722$ 61 $-0.0126589$ $0.0069123$ 12 $-0.0711412$ $0.0385003$ 62 $-0.0125598$ $0.0067833$ 13 $-0.0620918$ $0.0353028$ 63 $-0.0122696$ $0.0066439$ 14 $-0.0600210$ $0.0323526$ 64 $-0.0121523$ $0.0065821$ 15 $-0.0534603$ $0.0299677$ 65 $-0.0118163$ $0.0064889$ 16 $-0.0519047$ $0.0280421$ 66 $-0.0118244$ $0.0063844$ 17 $-0.0467319$ $0.0262195$ 67 $-0.0115177$ $0.0062930$ 18 $-0.0455579$ $0.0247674$ 68 $-0.0114479$ $0.0061910$ 19 $-0.0417554$ $0.0232297$ 69 $-0.0111309$ $0.0061255$ 20 $-0.0408248$ $0.0220155$ 70 $-0.0110816$ $0.0060681$ 21 $-0.0376967$ $0.0208687$ 71 $-0.0108875$ $0.0058994$ 22 $-0.0368350$ $0.0199446$ 72 $-0.0108319$ $0.0058235$ 23 $-0.0342394$ $0.0189794$ 73 $-0.0106032$ $0.0057172$ 24 $-0.0335390$ $0.0182343$ 74 $-0.0105424$ $0.0056805$ 25 $-0.0313065$ $0.0174421$ 75 $-0.0102237$ $0.0056343$ 26 $-0.0309765$ $0.0166364$ 76 $-0.0102132$ $0.0055605$ 27 $-0.0290220$ 
$0.0160158$ 77 $-0.0099408$ $0.0055011$ 28 $-0.0287074$ $0.0153715$ 78 $-0.0099776$ $0.0053872$ 29 $-0.0269133$ $0.0148940$ 79 $-0.0097815$ $0.0053062$ 30 $-0.0265451$ $0.0144027$ 80 $-0.0097399$ $0.0052348$ 31 $-0.0250734$ $0.0138855$ 81 $-0.0094837$ $0.0052075$ 32 $-0.0248177$ $0.0134510$ 82 $-0.0094713$ $0.0051173$ 33 $-0.0236460$ $0.0130228$ 83 $-0.0092390$ $0.0050697$ 34 $-0.0232808$ $0.0127183$ 84 $-0.0092875$ $0.0049805$ 35 $-0.0222099$ $0.0122444$ 85 $-0.0091508$ $0.0048705$ 36 $-0.0220756$ $0.0118214$ 86 $-0.0090145$ $0.0048695$ 37 $-0.0210129$ $0.0115469$ 87 $-0.0088191$ $0.0048287$ 38 $-0.0207309$ $0.0113206$ 88 $-0.0088205$ $0.0047315$ 39 $-0.0199272$ $0.0109636$ 89 $-0.0086622$ $0.0046961$ 40 $-0.0197140$ $0.0106308$ 90 $-0.0085714$ $0.0046698$ 41 $-0.0188446$ $0.0104384$ 91 $-0.0084718$ $0.0046010$ 42 $-0.0188203$ $0.0100693$ 92 $-0.0083861$ $0.0045544$ 43 $-0.0180521$ $0.0098523$ 93 $-0.0082559$ $0.0045191$ 44 $-0.0178185$ $0.0096735$ 94 $-0.0082650$ $0.0044245$ 45 $-0.0171866$ $0.0094973$ 95 $-0.0080977$ $0.0044074$ 46 $-0.0170796$ $0.0092210$ 96 $-0.0080708$ $0.0043579$ 47 $-0.0165391$ $0.0089781$ 97 $-0.0078810$ $0.0043536$ 48 $-0.0163509$ $0.0088083$ 98 $-0.0078492$ $0.0042874$ 49 $-0.0157862$ $0.0086574$ 99 $-0.0077043$ $0.0042520$ 50 $-0.0157372$ $0.0084772$ 100 $-0.0077614$ $0.0041864$ ----- -- -------------- ------------- -- ----- -- -------------- ------------- : Empirical biases of the MAD and Shamos estimators ($n=2,3,\ldots,100$).\[TBL:eBias1\] [1.0]{} ----- -- -------------- -------------- ----------------- -- ----------- -------------- ----------------- $n$ MAD $A_n$(Hayes) $A_n$(Williams) Shamos $B_n$(Hayes) $B_n$(Williams) 109 $-0.0070577$ $-0.0070648$ $-0.0070753$ 0.0038374 0.0038377 0.0038425 110 $-0.0070262$ $-0.0069999$ $-0.0070104$ 0.0037996 0.0038025 0.0038073 119 $-0.0064893$ $-0.0064655$ $-0.0064756$ 0.0034984 0.0035124 0.0035170 120 $-0.0064342$ $-0.0064111$ $-0.0064212$ 0.0034691 0.0034828 0.0034875 129 $-0.0059226$ 
$-0.0059599$ $-0.0059693$ 0.0032441 0.0032379 0.0032422 130 $-0.0059018$ $-0.0059137$ $-0.0059230$ 0.0032241 0.0032127 0.0032170 139 $-0.0054913$ $-0.0055277$ $-0.0055362$ 0.0029854 0.0030031 0.0030070 140 $-0.0055067$ $-0.0054879$ $-0.0054963$ 0.0029548 0.0029815 0.0029854 149 $-0.0051140$ $-0.0051539$ $-0.0051615$ 0.0028230 0.0028002 0.0028036 150 $-0.0051018$ $-0.0051193$ $-0.0051267$ 0.0028080 0.0027814 0.0027847 159 $-0.0047958$ $-0.0048275$ $-0.0048340$ 0.0026355 0.0026229 0.0026258 160 $-0.0047790$ $-0.0047971$ $-0.0048035$ 0.0026154 0.0026064 0.0026093 169 $-0.0045272$ $-0.0045399$ $-0.0045455$ 0.0024503 0.0024667 0.0024692 170 $-0.0045260$ $-0.0045130$ $-0.0045185$ 0.0024402 0.0024521 0.0024545 179 $-0.0042827$ $-0.0042847$ $-0.0042894$ 0.0023257 0.0023281 0.0023301 180 $-0.0042517$ $-0.0042607$ $-0.0042653$ 0.0023122 0.0023151 0.0023170 189 $-0.0040825$ $-0.0040566$ $-0.0040605$ 0.0021780 0.0022042 0.0022058 190 $-0.0040837$ $-0.0040351$ $-0.0040389$ 0.0021673 0.0021925 0.0021941 199 $-0.0038386$ $-0.0038516$ $-0.0038546$ 0.0020904 0.0020928 0.0020940 200 $-0.0038277$ $-0.0038323$ $-0.0038352$ 0.0020786 0.0020823 0.0020835 249 $-0.0031016$ $-0.0030747$ $-0.0030745$ 0.0016628 0.0016708 0.0016704 250 $-0.0030911$ $-0.0030623$ $-0.0030621$ 0.0016562 0.0016641 0.0016636 299 $-0.0025435$ $-0.0025586$ $-0.0025562$ 0.0013875 0.0013904 0.0013889 300 $-0.0025436$ $-0.0025500$ $-0.0025476$ 0.0013822 0.0013858 0.0013842 349 $-0.0021615$ $-0.0021908$ $-0.0021869$ 0.0012033 0.0011906 0.0011884 350 $-0.0021702$ $-0.0021846$ $-0.0021806$ 0.0012013 0.0011872 0.0011849 399 $-0.0019214$ $-0.0019155$ $-0.0019106$ 0.0010331 0.0010410 0.0010383 400 $-0.0019273$ $-0.0019107$ $-0.0019058$ 0.0010287 0.0010384 0.0010357 449 $-0.0017199$ $-0.0017017$ $-0.0016960$ 0.0009196 0.0009248 0.0009217 450 $-0.0017210$ $-0.0016979$ $-0.0016922$ 0.0009170 0.0009227 0.0009197 499 $-0.0015134$ $-0.0015308$ $-0.0015247$ 0.0008413 0.0008319 0.0008286 500 $-0.0015061$ $-0.0015277$ $-0.0015216$ 
0.0008389 0.0008303 0.0008270 ----- -- -------------- -------------- ----------------- -- ----------- -------------- ----------------- : Empirical biases of the MAD and Shamos estimators along with their estimates ($n=109,110, 119, 120, \ldots,499, 500$).\[TBL:eBias2\] [1.0]{} ----- -- -------- -------- -------- -------- -- -------- -------- $n$ median HL1 HL2 HL3 MAD Shamos 1 1.0000 NA 1.0000 1.0000 NA NA 2 1.0000 1.0000 1.0000 1.0000 1.1000 2.2001 3 1.3463 1.0871 1.0221 1.0871 1.4372 2.3812 4 1.1930 1.0000 1.0949 1.0949 1.1680 1.6996 5 1.4339 1.0617 1.0754 1.0754 1.9809 1.8573 6 1.2882 1.0619 1.0759 1.0602 1.6859 1.7883 7 1.4736 1.0630 1.0814 1.0756 2.2125 1.6180 8 1.3459 1.0628 1.0728 1.0705 1.9486 1.5824 9 1.4957 1.0588 1.0756 1.0678 2.3326 1.5109 10 1.3833 1.0608 1.0743 1.0641 2.1072 1.4855 11 1.5088 1.0602 1.0693 1.0649 2.4082 1.4643 12 1.4087 1.0560 1.0670 1.0614 2.2112 1.4234 13 1.5195 1.0567 1.0685 1.0629 2.4570 1.4008 14 1.4298 1.0565 1.0663 1.0603 2.2848 1.3905 15 1.5249 1.0562 1.0645 1.0603 2.4952 1.3719 16 1.4457 1.0547 1.0637 1.0590 2.3412 1.3554 17 1.5302 1.0541 1.0633 1.0587 2.5217 1.3434 18 1.4585 1.0540 1.0621 1.0574 2.3846 1.3355 19 1.5333 1.0532 1.0605 1.0567 2.5447 1.3249 20 1.4702 1.0545 1.0620 1.0581 2.4185 1.3146 21 1.5383 1.0536 1.0611 1.0573 2.5611 1.3079 22 1.4770 1.0527 1.0596 1.0557 2.4475 1.3015 23 1.5420 1.0532 1.0597 1.0564 2.5758 1.2953 24 1.4850 1.0529 1.0594 1.0560 2.4699 1.2883 25 1.5438 1.0521 1.0586 1.0553 2.5873 1.2825 26 1.4896 1.0518 1.0578 1.0545 2.4886 1.2776 27 1.5462 1.0526 1.0582 1.0553 2.5960 1.2731 28 1.4954 1.0511 1.0567 1.0538 2.5030 1.2676 29 1.5476 1.0525 1.0581 1.0552 2.6070 1.2650 30 1.5005 1.0518 1.0571 1.0543 2.5199 1.2616 31 1.5482 1.0514 1.0564 1.0538 2.6132 1.2586 32 1.5057 1.0517 1.0567 1.0541 2.5335 1.2552 33 1.5516 1.0521 1.0571 1.0545 2.6208 1.2519 34 1.5091 1.0512 1.0560 1.0534 2.5442 1.2493 35 1.5515 1.0508 1.0554 1.0530 2.6285 1.2466 36 1.5123 1.0512 1.0557 1.0534 2.5545 1.2433 37 1.5531 1.0512 
1.0556 1.0534 2.6332 1.2415 38 1.5148 1.0502 1.0545 1.0522 2.5637 1.2393 39 1.5550 1.0513 1.0555 1.0533 2.6344 1.2364 40 1.5173 1.0507 1.0547 1.0526 2.5720 1.2357 41 1.5532 1.0498 1.0539 1.0518 2.6403 1.2325 42 1.5206 1.0510 1.0549 1.0528 2.5780 1.2315 43 1.5552 1.0504 1.0541 1.0522 2.6436 1.2287 44 1.5224 1.0500 1.0537 1.0518 2.5869 1.2284 45 1.5568 1.0504 1.0541 1.0522 2.6477 1.2260 46 1.5240 1.0496 1.0533 1.0514 2.5904 1.2248 47 1.5570 1.0504 1.0539 1.0521 2.6511 1.2232 48 1.5249 1.0493 1.0528 1.0510 2.5960 1.2214 49 1.5562 1.0495 1.0529 1.0512 2.6537 1.2199 50 1.5267 1.0499 1.0532 1.0514 2.6014 1.2184 ----- -- -------- -------- -------- -------- -- -------- -------- : The values of $n\times\mathrm{Var}(\hat{\theta})$ for the median and the Hodges-Lehmann, and the values of $\mathrm{Var}(\hat{\theta})/(1-c_4(n)^2)$ for the MAD and Shamos ($n=1,2,\ldots,50$).\[TBL:nvar1\] [1.0]{} ----- -- -------- -------- -------- -------- -- -------- -------- $n$ median HL1 HL2 HL3 MAD Shamos 51 1.5583 1.0502 1.0534 1.0517 2.6577 1.2199 52 1.5298 1.0499 1.0532 1.0515 2.6053 1.2174 53 1.5592 1.0501 1.0533 1.0517 2.6568 1.2160 54 1.5298 1.0489 1.0519 1.0503 2.6125 1.2156 55 1.5584 1.0493 1.0523 1.0508 2.6631 1.2144 56 1.5330 1.0497 1.0527 1.0512 2.6139 1.2132 57 1.5589 1.0496 1.0526 1.0510 2.6649 1.2126 58 1.5337 1.0495 1.0524 1.0509 2.6161 1.2098 59 1.5598 1.0501 1.0530 1.0515 2.6671 1.2095 60 1.5349 1.0489 1.0517 1.0503 2.6219 1.2095 61 1.5594 1.0492 1.0519 1.0505 2.6667 1.2073 62 1.5361 1.0492 1.0520 1.0505 2.6235 1.2071 63 1.5594 1.0485 1.0512 1.0498 2.6695 1.2064 64 1.5373 1.0494 1.0521 1.0507 2.6260 1.2050 65 1.5598 1.0488 1.0514 1.0500 2.6731 1.2067 66 1.5380 1.0496 1.0521 1.0508 2.6297 1.2036 67 1.5606 1.0494 1.0519 1.0506 2.6722 1.2034 68 1.5389 1.0491 1.0516 1.0503 2.6341 1.2030 69 1.5607 1.0479 1.0504 1.0491 2.6748 1.2025 70 1.5399 1.0490 1.0514 1.0502 2.6351 1.2016 71 1.5595 1.0482 1.0506 1.0494 2.6738 1.2005 72 1.5410 1.0491 1.0515 1.0503 2.6351 1.1993 73 1.5622 
1.0492 1.0515 1.0503 2.6754 1.1993 74 1.5426 1.0498 1.0521 1.0510 2.6395 1.1990 75 1.5619 1.0489 1.0512 1.0500 2.6763 1.1985 76 1.5415 1.0486 1.0509 1.0497 2.6411 1.1975 77 1.5616 1.0485 1.0508 1.0496 2.6780 1.1975 78 1.5434 1.0494 1.0516 1.0505 2.6453 1.1971 79 1.5639 1.0493 1.0515 1.0504 2.6794 1.1968 80 1.5445 1.0497 1.0519 1.0508 2.6453 1.1958 81 1.5612 1.0486 1.0507 1.0496 2.6815 1.1960 82 1.5444 1.0494 1.0515 1.0504 2.6472 1.1947 83 1.5626 1.0484 1.0505 1.0494 2.6815 1.1947 84 1.5449 1.0490 1.0511 1.0500 2.6475 1.1939 85 1.5630 1.0484 1.0504 1.0494 2.6831 1.1938 86 1.5441 1.0479 1.0499 1.0489 2.6505 1.1931 87 1.5643 1.0495 1.0514 1.0504 2.6830 1.1923 88 1.5448 1.0478 1.0497 1.0487 2.6535 1.1929 89 1.5640 1.0487 1.0506 1.0496 2.6857 1.1931 90 1.5463 1.0483 1.0503 1.0493 2.6562 1.1920 91 1.5634 1.0486 1.0505 1.0495 2.6853 1.1914 92 1.5477 1.0491 1.0509 1.0500 2.6567 1.1913 93 1.5631 1.0481 1.0500 1.0490 2.6859 1.1906 94 1.5482 1.0488 1.0507 1.0497 2.6584 1.1907 95 1.5629 1.0481 1.0499 1.0490 2.6878 1.1905 96 1.5466 1.0477 1.0495 1.0486 2.6576 1.1894 97 1.5636 1.0480 1.0498 1.0489 2.6881 1.1895 98 1.5477 1.0477 1.0495 1.0486 2.6613 1.1899 99 1.5642 1.0483 1.0501 1.0492 2.6888 1.1887 100 1.5484 1.0481 1.0498 1.0489 2.6604 1.1874 ----- -- -------- -------- -------- -------- -- -------- -------- : The values of $n\times\mathrm{Var}(\hat{\theta})$ for the median and the Hodges-Lehmann, and the values of $\mathrm{Var}(\hat{\theta})/(1-c_4(n)^2)$ for the MAD and Shamos ($n=51,52,\ldots,100$).\[TBL:nvar2\] [1.0]{} ----- -- -------- -------- -------- -------- -- -------- -------- $n$ median HL1 HL2 HL3 MAD Shamos 109 1.5655 1.0490 1.0506 1.0498 2.6889 1.1857 110 1.5508 1.0484 1.0500 1.0492 2.6657 1.1856 119 1.5651 1.0478 1.0492 1.0485 2.6936 1.1830 120 1.5526 1.0478 1.0493 1.0486 2.6717 1.1836 129 1.5661 1.0477 1.0490 1.0483 2.6953 1.1809 130 1.5541 1.0478 1.0492 1.0485 2.6727 1.1804 139 1.5671 1.0491 1.0503 1.0497 2.6963 1.1792 140 1.5567 1.0495 1.0508 1.0502 2.6770 
1.1794 149 1.5666 1.0484 1.0496 1.0490 2.7008 1.1789 150 1.5566 1.0484 1.0496 1.0490 2.6815 1.1788 159 1.5673 1.0484 1.0495 1.0490 2.7006 1.1768 160 1.5584 1.0485 1.0495 1.0490 2.6827 1.1765 169 1.5661 1.0474 1.0485 1.0479 2.7012 1.1757 170 1.5578 1.0476 1.0486 1.0481 2.6861 1.1755 179 1.5676 1.0477 1.0487 1.0482 2.7038 1.1750 180 1.5590 1.0480 1.0490 1.0485 2.6889 1.1750 189 1.5663 1.0473 1.0483 1.0478 2.7043 1.1743 190 1.5584 1.0473 1.0482 1.0478 2.6903 1.1741 199 1.5681 1.0481 1.0490 1.0486 2.7049 1.1741 200 1.5608 1.0482 1.0491 1.0486 2.6904 1.1732 249 1.5679 1.0472 1.0479 1.0476 2.7083 1.1705 250 1.5623 1.0479 1.0486 1.0483 2.6977 1.1709 299 1.5689 1.0477 1.0483 1.0480 2.7084 1.1673 300 1.5642 1.0479 1.0485 1.0482 2.6986 1.1670 349 1.5700 1.0479 1.0484 1.0481 2.7131 1.1673 350 1.5654 1.0479 1.0484 1.0481 2.7049 1.1675 399 1.5691 1.0475 1.0479 1.0477 2.7126 1.1650 400 1.5646 1.0469 1.0474 1.0472 2.7072 1.1651 449 1.5694 1.0474 1.0478 1.0476 2.7125 1.1639 450 1.5659 1.0475 1.0479 1.0477 2.7056 1.1645 499 1.5701 1.0475 1.0479 1.0477 2.7147 1.1637 500 1.5674 1.0482 1.0486 1.0484 2.7101 1.1646 ----- -- -------- -------- -------- -------- -- -------- -------- : The values of $n\times\mathrm{Var}(\hat{\theta})$ for the median and the Hodges-Lehmann, and the values of $\mathrm{Var}(\hat{\theta})/(1-c_4(n)^2)$ for the MAD and Shamos ($n=109,110, 119,120, \ldots, 499,500$).\[TBL:nvar3\] [1.0]{} ----- -- -------- -------- -------- -------- -- -------- -------- $n$ median HL1 HL2 HL3 MAD Shamos 1 1.0000 NA 1.0000 1.0000 NA NA 2 1.0000 1.0000 1.0000 1.0000 0.9091 0.4545 3 0.7427 0.9199 0.9784 0.9199 0.6958 0.4199 4 0.8382 1.0000 0.9133 0.9133 0.8562 0.5884 5 0.6974 0.9419 0.9299 0.9299 0.5048 0.5384 6 0.7763 0.9417 0.9295 0.9432 0.5932 0.5592 7 0.6786 0.9407 0.9248 0.9297 0.4520 0.6180 8 0.7430 0.9409 0.9322 0.9342 0.5132 0.6320 9 0.6686 0.9445 0.9297 0.9365 0.4287 0.6618 10 0.7229 0.9426 0.9308 0.9398 0.4746 0.6732 11 0.6628 0.9432 0.9352 0.9391 0.4153 0.6829 12 0.7098 
0.9470 0.9372 0.9422 0.4522 0.7026 13 0.6581 0.9464 0.9359 0.9408 0.4070 0.7139 14 0.6994 0.9465 0.9378 0.9432 0.4377 0.7192 15 0.6558 0.9468 0.9394 0.9432 0.4008 0.7289 16 0.6917 0.9482 0.9402 0.9443 0.4271 0.7378 17 0.6535 0.9486 0.9405 0.9445 0.3966 0.7444 18 0.6856 0.9487 0.9415 0.9457 0.4194 0.7488 19 0.6522 0.9495 0.9430 0.9463 0.3930 0.7547 20 0.6802 0.9483 0.9416 0.9451 0.4135 0.7607 21 0.6501 0.9491 0.9424 0.9458 0.3905 0.7646 22 0.6770 0.9499 0.9437 0.9472 0.4086 0.7684 23 0.6485 0.9495 0.9437 0.9466 0.3882 0.7720 24 0.6734 0.9498 0.9440 0.9470 0.4049 0.7762 25 0.6478 0.9504 0.9447 0.9476 0.3865 0.7798 26 0.6713 0.9507 0.9454 0.9483 0.4018 0.7827 27 0.6468 0.9501 0.9450 0.9476 0.3852 0.7855 28 0.6687 0.9514 0.9463 0.9490 0.3995 0.7889 29 0.6462 0.9501 0.9451 0.9477 0.3836 0.7905 30 0.6664 0.9507 0.9459 0.9485 0.3968 0.7926 31 0.6459 0.9511 0.9466 0.9489 0.3827 0.7945 32 0.6641 0.9508 0.9463 0.9486 0.3947 0.7967 33 0.6445 0.9505 0.9460 0.9483 0.3816 0.7988 34 0.6627 0.9513 0.9470 0.9493 0.3931 0.8004 35 0.6445 0.9516 0.9475 0.9496 0.3804 0.8022 36 0.6612 0.9513 0.9472 0.9493 0.3915 0.8043 37 0.6439 0.9513 0.9473 0.9493 0.3798 0.8055 38 0.6601 0.9522 0.9483 0.9504 0.3901 0.8069 39 0.6431 0.9512 0.9475 0.9494 0.3796 0.8088 40 0.6591 0.9518 0.9481 0.9500 0.3888 0.8093 41 0.6438 0.9525 0.9489 0.9507 0.3787 0.8113 42 0.6576 0.9515 0.9479 0.9498 0.3879 0.8120 43 0.6430 0.9521 0.9486 0.9504 0.3783 0.8138 44 0.6569 0.9524 0.9490 0.9508 0.3866 0.8141 45 0.6423 0.9520 0.9487 0.9504 0.3777 0.8157 46 0.6562 0.9527 0.9494 0.9511 0.3860 0.8164 47 0.6422 0.9520 0.9489 0.9505 0.3772 0.8175 48 0.6558 0.9530 0.9499 0.9515 0.3852 0.8187 49 0.6426 0.9529 0.9497 0.9513 0.3768 0.8197 50 0.6550 0.9525 0.9495 0.9511 0.3844 0.8208 ----- -- -------- -------- -------- -------- -- -------- -------- : Relative efficiencies of the median, Hodges-Lehmann, MAD and Shamos estimators to the sample mean under the normal distribution ($n=1,2,\ldots,50$).\[TBL:RE1\] [1.0]{} ----- -- -------- 
-------- -------- -------- -- -------- -------- $n$ median HL1 HL2 HL3 MAD Shamos 51 0.6417 0.9522 0.9493 0.9508 0.3763 0.8197 52 0.6537 0.9525 0.9495 0.9510 0.3838 0.8214 53 0.6414 0.9523 0.9494 0.9509 0.3764 0.8223 54 0.6537 0.9534 0.9506 0.9521 0.3828 0.8226 55 0.6417 0.9530 0.9503 0.9517 0.3755 0.8234 56 0.6523 0.9527 0.9499 0.9513 0.3826 0.8243 57 0.6415 0.9528 0.9501 0.9514 0.3752 0.8247 58 0.6520 0.9528 0.9502 0.9515 0.3822 0.8266 59 0.6411 0.9523 0.9497 0.9510 0.3749 0.8268 60 0.6515 0.9534 0.9508 0.9521 0.3814 0.8268 61 0.6413 0.9531 0.9506 0.9519 0.3750 0.8283 62 0.6510 0.9531 0.9506 0.9519 0.3812 0.8284 63 0.6413 0.9538 0.9513 0.9526 0.3746 0.8289 64 0.6505 0.9529 0.9505 0.9517 0.3808 0.8299 65 0.6411 0.9535 0.9511 0.9523 0.3741 0.8287 66 0.6502 0.9528 0.9505 0.9516 0.3803 0.8308 67 0.6408 0.9529 0.9507 0.9518 0.3742 0.8310 68 0.6498 0.9532 0.9509 0.9521 0.3796 0.8312 69 0.6408 0.9543 0.9520 0.9532 0.3739 0.8316 70 0.6494 0.9533 0.9511 0.9522 0.3795 0.8323 71 0.6412 0.9540 0.9518 0.9529 0.3740 0.8330 72 0.6489 0.9532 0.9510 0.9521 0.3795 0.8338 73 0.6401 0.9532 0.9510 0.9521 0.3738 0.8338 74 0.6483 0.9525 0.9504 0.9515 0.3789 0.8341 75 0.6402 0.9534 0.9513 0.9524 0.3736 0.8344 76 0.6487 0.9536 0.9516 0.9526 0.3786 0.8351 77 0.6404 0.9537 0.9517 0.9527 0.3734 0.8351 78 0.6479 0.9529 0.9509 0.9520 0.3780 0.8353 79 0.6394 0.9530 0.9510 0.9520 0.3732 0.8355 80 0.6475 0.9526 0.9507 0.9517 0.3780 0.8363 81 0.6405 0.9537 0.9517 0.9527 0.3729 0.8361 82 0.6475 0.9529 0.9510 0.9520 0.3778 0.8370 83 0.6400 0.9538 0.9520 0.9529 0.3729 0.8370 84 0.6473 0.9533 0.9514 0.9523 0.3777 0.8376 85 0.6398 0.9539 0.9520 0.9529 0.3727 0.8376 86 0.6476 0.9543 0.9525 0.9534 0.3773 0.8381 87 0.6393 0.9529 0.9511 0.9520 0.3727 0.8387 88 0.6473 0.9544 0.9526 0.9535 0.3769 0.8383 89 0.6394 0.9536 0.9518 0.9527 0.3723 0.8382 90 0.6467 0.9539 0.9521 0.9530 0.3765 0.8389 91 0.6396 0.9536 0.9519 0.9528 0.3724 0.8394 92 0.6461 0.9532 0.9515 0.9524 0.3764 0.8394 93 0.6398 0.9541 0.9524 
0.9533 0.3723 0.8399 94 0.6459 0.9535 0.9518 0.9526 0.3762 0.8399 95 0.6398 0.9541 0.9524 0.9533 0.3721 0.8400 96 0.6466 0.9545 0.9529 0.9537 0.3763 0.8407 97 0.6396 0.9542 0.9526 0.9534 0.3720 0.8407 98 0.6461 0.9545 0.9529 0.9537 0.3757 0.8404 99 0.6393 0.9539 0.9523 0.9531 0.3719 0.8412 100 0.6458 0.9541 0.9525 0.9533 0.3759 0.8422 ----- -- -------- -------- -------- -------- -- -------- -------- : Relative efficiencies of the median, Hodges-Lehmann, MAD and Shamos estimators to the sample mean under the normal distribution ($n=51,52,\ldots,100$).\[TBL:RE2\] ![The values of $n\,\mathrm{Var}(\mathrm{median})$, $n\,\mathrm{Var}(\mathrm{HL})$, ${\mathrm{Var}(\mathrm{MAD})}/{(1-c_4(n)^2)}$, and ${\mathrm{Var}(\mathrm{Shamos})}/{(1-c_4(n)^2)}$. \[FIG:nvar\]](nvar) ![The relative efficiencies under consideration.\[FIG:RE\]](RE)
--- abstract: 'We prove that a closed 4-manifold has shadow-complexity zero if and only if it is a kind of 4-dimensional graph manifold, which decomposes into some particular blocks along embedded copies of $S^2\times S^1$, plus some complex projective spaces. We deduce a classification of all 4-manifolds with finite fundamental group and shadow-complexity zero.' address: 'Dipartimento di Matematica “Tonelli”, Largo Pontecorvo 5, 56127 Pisa, Italy' author: - Bruno Martelli title: - 'Complexity of PL-manifolds' - 'Four-manifolds with shadow-complexity zero' --- Introduction ============ Piecewise-linear (equivalently, smooth) closed four-manifolds form an enormous set which is still poorly understood. In contrast with dimensions 2 and 3, even a conjectural picture which aims to describe this set globally is missing. Restricting to simply connected manifolds does not help much: Donaldson and Seiberg-Witten invariants have revealed the existence of infinitely many distinct simply-connected manifolds sharing the same topological structure; these *exotic* 4-manifolds have been constructed using various techniques, but a general procedure for constructing (and classifying) *all* simply connected 4-manifolds sharing the same topological structure is still not available. For an overview of this topic, see for instance [@Ste]. For an introduction to 4-manifolds see the books [@GoSti; @Sco]. We would like to study the set of all closed oriented 4-manifolds globally, by means of a suitable *complexity*. A complexity is a function which assigns to every compact manifold a non-negative integer that measures in some sense how “complicated” the manifold is. A complexity induces a *filtration* of the set of all 4-manifolds into subsets $\calM_0 \subset \calM_1 \subset \calM_2 \subset \ldots$ where $\calM_c$ is the set of all manifolds having complexity at most $c$. 
In such a setting, we would like to construct (and hopefully classify) all 4-manifolds lying in $\calM_c$, starting from $c= 0,1, \ldots $ There are of course various types of reasonable complexities, and different choices may lead to completely different filtrations. However, the problem of constructing and listing all the manifolds in $\calM_0$, $\calM_1, \ldots$ is hard for most of these choices. For instance, a natural complexity might be the minimum number of 4-simplexes in a simplicial (or semi-simplicial) triangulation: with this choice, it may be encouraging to know that $\calM_c$ is finite for all $c$. However, as far as we know, no one has attempted to classify 4-manifolds that can be triangulated with $2, 4, \ldots$ simplexes. In fact, triangulations seem too rigid and complicated for our purposes. In dimension 3, Matveev [@Mat] has used the somewhat dual notion of *simple spine* to define a complexity for all compact 3-manifolds which satisfies various nice properties: for instance, it is additive on connected sums. A two-dimensional polyhedron is *simple* when it has generic singularities, as in Fig. \[models:fig\]. Matveev defines the complexity $c(M)$ of a 3-manifold $M$ as the minimum number of vertices in a simple spine. The price to pay for using spines instead of triangulations is that we get infinitely many manifolds in each $\calM_c$. However, each set $\calM_c$ contains only finitely many “interesting” 3-manifolds (say, closed irreducible or bounded hyperbolic), which have been listed for low values of $c=0,1,2,3,\ldots $ by various authors, see [@survey; @Mat:book; @Mat11] and the references therein. \[models:fig\] Most 4-manifolds do not have two-dimensional spines, so Matveev’s definition cannot be extended *as is* to dimension 4. There are however two natural variations, which lead to two distinct complexities for compact (piecewise-linear) 4-manifolds. One natural variation is obtained by taking three-dimensional simple spines. 
This extension works in fact for piecewise-linear manifolds of arbitrary dimension $n$ (by taking simple spines of dimension $n-1$): the resulting complexity is introduced and studied in [@Ma:PL]. Let $\calM_0 \subset \calM_1 \subset \ldots$ be the induced filtration in dimension 4: as shown in [@Ma:PL], the set $\calM_0$ contains closed 4-manifolds with arbitrary fundamental group, and thus cannot be classified completely. Moreover, many (possibly all) simply-connected 4-manifolds lie in $\calM_0$, so even restricting to simply connected manifolds does not help much. The set $\calM_0$ is interesting, but is too big to be classified. Another variation consists of using 2-dimensional simple polyhedra not as spines but as more general objects, called *shadows*: following Turaev [@Tu0; @Tu], a shadow is a (locally flat) simple polyhedron $X$ in the interior of a compact 4-manifold $M$ such that $M$ is obtained from a regular neighborhood of $X$ by adding 3- and 4-handles. Every compact 4-manifold has a shadow, so it makes sense to define the complexity of a compact 4-manifold as the minimum number of vertices of a shadow. This notion has been recently introduced and studied by Costantino [@Co]. To avoid confusion, the two notions just introduced in dimension 4 may be called respectively *spine-complexity* and *shadow-complexity*. Spine-complexity was studied in [@Ma:PL]. We study here the shadow-complexity (which we call complexity for short) and its induced filtration, which we still denote by $\calM_0 \subset \calM_1 \subset \ldots$ In this paper we give a characterization of the set $\calM_0$ of all closed 4-manifolds having shadow-complexity zero. As we will see, such a set is considerably smaller than the one we obtain from spine-complexity. In particular, we can classify completely the manifolds in $\calM_0$ having finite fundamental group. The set $\calM_0$ is big enough to contain various interesting manifolds, and small enough to allow classifications. 
Shadow-complexity thus seems to be particularly well-behaved and it seems both feasible and interesting to pursue our program with $\calM_1, \calM_2, \ldots $ The most important discovery is that the set $\calM_0$ looks very much like the set of Waldhausen’s 3-dimensional graph manifolds [@Wa]. Recall that a Waldhausen graph manifold is any 3-manifold which decomposes into blocks homeomorphic to $D^2\times S^1$ or $P^2\times S^1$, where $P^2$ is the pair-of-pants. The manifold can indeed be described via a graph, with vertices of valence 1 and 3 encoding the blocks, and some data on the edges telling us how they are glued. There are many ways to extend this notion to higher dimensions. To preserve generality, we may take a fixed set of oriented $n$-manifolds $\calS = \{M_1,\ldots, M_k,\ldots\}$ and say that an oriented $n$-manifold $M$ is a *graph manifold generated by $\calS$* if $M$ decomposes (along codimension-1 submanifolds) into pieces (orientation-preservingly) PL-homeomorphic to these manifolds. The manifold $M$ can thus be described appropriately by a graph, with vertices of different types corresponding to the elements of $\calS$, and some information on the edges encoding the way they are glued. In dimension 4 there are various interesting choices for $\calS$, which lead to quite different notions of graph manifolds. For instance, Mozgova defined in [@Moz] a 4-dimensional graph manifold as a manifold generated by torus bundles over compact surfaces of negative Euler characteristic. The blocks are glued along torus bundles over $S^1$, such as the 3-torus. The generalization we propose here of Waldhausen’s graph manifolds is of a different kind. Each block has some boundary components, all homeomorphic to $S^2\times S^1$. The pieces are thus glued along copies of $S^2\times S^1$. 
A simple way to get 4-manifolds with such boundary consists of drilling a closed manifold along closed curves (thus removing a $D^3\times S^1$) or along spheres with Euler number zero (thus removing a $D^2\times S^2$). Consider the following blocks. - $M_{i_1\cdots i_k}$ is obtained from $S^3 \times S^1$ by drilling along a closed braid as in Fig. \[blocchi:fig\]. - $N_i$ is obtained from $S^2\times S^2$ by drilling $i$ parallel spheres of type $\{pt\}\times S^2$. The graph manifolds we consider here are generated by the following set: $$\calS_0 = \big\{M_1, M_{11}, M_2, M_{111}, M_{12}, M_3, N_1, N_2, N_3\big\}.$$ \[blocchi:fig\] We can now state the main result proved in this paper. For any integer $h > 0$ and any oriented $n$-manifold $N$, we denote by $\#_h N$ the connected sum of $h$ copies of $N$. When $h=0$ we set $\#_0 N = S^n$ and when $h<0$ we set $\#_h N = \#_{-h}\overline N$. \[main:teo\] A closed oriented 4-manifold $M$ has complexity zero if and only if $M= M'\#_h \matCP^2$ for some integer $h$ and some graph manifold $M'$ generated by $\calS_0$. We now investigate these graph manifolds: we would like to show that they indeed lie among “the simplest 4-manifolds” also from other viewpoints. A simple method for constructing non-trivial closed 4-manifolds consists of taking the double of a 4-dimensional 2-handlebody, *i.e.* a compact 4-manifold made of 0-, 1-, and 2-handles. The resulting manifolds may have arbitrary (finitely presented) fundamental group. The graph manifolds generated by $\calS_0$ belong to this set. Actually, they are doubles of the “simplest” types of 2-handlebodies: those which collapse to simple polyhedra without vertices, as the following shows. \[graph:prop\] Let $M$ be a closed oriented 4-manifold different from $\#_h (S^3\times S^1)$. The following conditions are equivalent. 1. $M$ is a graph manifold generated by $\calS_0$. 2. 
$M$ is the boundary of a compact oriented 5-manifold which collapses onto a simple polyhedron without vertices. 3. $M$ is the double of a compact oriented 4-manifold which collapses onto a simple polyhedron without vertices. Graph manifolds generated by $\calS_0$ bound 5-manifolds and thus have signature zero. Therefore the integer $h$ in the statement of Theorem \[main:teo\] equals the signature of $M$. We mention that most doubles of 2-handlebodies are *not* graph manifolds: the hypothesis that the collapsed 2-polyhedron has no vertices is quite strong. In some sense, graph manifolds are the “simplest” such doubles. In particular, graph manifolds generated by $\calS_0$ do not realize every possible fundamental group; see Proposition \[finite:teo\] below. There are various analogies between Waldhausen’s graph manifolds and those generated by $\calS_0$. Compare Proposition \[graph:prop\] to the following. Let $M$ be a closed oriented 3-manifold. The following conditions are equivalent. 1. $M$ is a graph manifold. 2. $M$ is the boundary of a compact oriented 4-manifold which collapses onto a locally flat simple polyhedron without vertices. Note also that Waldhausen’s graph manifolds are generated by the set $$\calS^{\rm Wald} = \big\{L_1, L_2, L_3 \big\}$$ where $L_i$ is obtained from $S^2\times S^1$ by drilling along $i$ parallel curves of type $\{pt\}\times S^1$. This set bears some resemblance to $\calS_0$. The following proposition holds also for Waldhausen’s manifolds. \[G:prop\] The set $\calG_0$ of all 4-dimensional graph manifolds generated by $\calS_0$ is closed under connected sum and finite coverings. That is, 1. if $M, M' \in \calG_0$ then $M\# M' \in \calG_0$; 2. if $M\in \calG_0$ and $\widetilde M \to M$ is a finite covering, then $\widetilde M \in \calG_0$. 
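Returning to Theorem \[main:teo\], the claim that the integer $h$ equals the signature of $M$ is a standard computation, which we spell out for the reader: it uses Novikov additivity of the signature under connected sum, together with the fact that an oriented 4-manifold bounding a compact oriented 5-manifold has vanishing signature. Since $M'$ bounds, $$\sigma(M) = \sigma\big(M'\#_h \matCP^2\big) = \sigma(M') + h\,\sigma\big(\matCP^2\big) = 0 + h\cdot 1 = h,$$ where $\sigma(\matCP^2)=1$ because the intersection form of $\matCP^2$ is $\langle 1 \rangle$; when $h<0$, each summand $\overline{\matCP^2}$ contributes $-1$ instead.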
In a weak sense, complexity in dimension 4 is similar to Gromov norm in dimension 3: Waldhausen’s graph manifolds are precisely the closed 3-manifolds having Gromov norm zero (thanks to geometrization!), while the graph manifolds generated by $\calS_0$ plus projective planes are precisely the closed 4-manifolds having complexity zero. Waldhausen introduced and also classified his graph manifolds in [@Wa]. We classify here the graph manifolds generated by $\calS_0$ having finite fundamental group. These manifolds are easily described as boundaries of some 5-manifolds, as follows. A finite presentation $\calP$ of a group defines a 2-dimensional polyhedron $X^2$ with one vertex, one edge for each generator, and one disc for each relator. Let $\calS(\calP)$ denote the set of all closed oriented 4-manifolds that are boundaries of some oriented 5-manifold that collapses onto $X^2$. The following is easily proved. Recall that an oriented 4-manifold is *spin* when its second Stiefel-Whitney class $w_2$ vanishes. \[presentation:prop\] The following holds. 1. The set $\calS(\calP)$ contains finitely many 4-manifolds, precisely one of which is spin. 2. The manifolds in $\calS(\calP)$ share the same cellular 3-skeleton: therefore all their homology groups and the homotopy groups $\pi_1$ and $\pi_2$ depend only on $\calP$. 3. If $\calP$ and $\calP'$ are related by Andrews-Curtis moves [@AnCu], then $\calS(\calP) = \calS(\calP')$. For instance, the trivial (empty) presentation $\calP = \langle\, |\, \rangle$ yields $\calS(\calP) = \{S^4\}$. A balanced presentation (*i.e.* having the same number of generators and relators) of the trivial group always yields a unique homotopy 4-sphere. The Andrews-Curtis conjecture states that every such presentation is related to the trivial one by AC-moves [@AnCu]. If this holds, then such a homotopy 4-sphere is always $S^4$.
However, such a conjecture is commonly believed to be false: one way to disprove it could be to construct a fake $S^4$ in this way. Consider the standard presentations $$\calC_n = \langle a | a^n \rangle, \quad \calD_{2n} = \langle a,b | a^2, b^2, (ab)^n \rangle$$ of the cyclic and dihedral groups. We classify the manifolds in $\calS(\calC_n)$ and $\calS(\calD_{2n})$ and assign them some names. \[finite:prop\] We have the following. $$\begin{aligned} \calS(\calC_n) & = & \left\{\begin{array}{ll} \left\{C_n^0, C_n^1 \right\} & {\rm \ if\ } n {\rm \ is \ even,}\\ \left\{C_n^0\right\} & {\rm \ if\ } n {\rm \ is \ odd.} \end{array}\right. \\ \calS(\calD_{2n}) & = & \left\{\begin{array}{ll} \left\{D_n^0, D_n^1, D_n^2, D_n^3\right\}& {\rm \ if\ } n=2 \\ \left\{D_n^{00}, D_n^{10}, D_n^{20}, D_n^{01}, D_n^{11}, D_n^{21} \right\} & {\rm \ if\ } n>2 {\rm \ is \ even.} \\ \left\{D_n^{0}, D_n^{1}, D_n^{2}\right\} & {\rm \ if\ } n>2 {\rm \ is \ odd.} \\ \end{array}\right.\end{aligned}$$ The manifolds $C^0_n, D^0_n, D^{00}_n$ are spin, the others are not. The manifolds $C_n^0$, $C_n^1$, $D^0_n$, $D^2_n$, $D^{00}_n$, $D^{10}_n$, $D^{20}_n$ are even, the others are odd. The universal covering of every manifold in the list is $\#_k (S^2\times S^2)$, for some $k$. Recall that a spin 4-manifold is always even, while the converse is true for simply connected manifolds, but not in general. Some non-spin manifolds in the list, like $D_2^1$ and $D_2^3$, have the same homotopy and homology groups, and intersection forms. We have distinguished them by counting the number of spin coverings. We may now deduce from Theorem \[main:teo\] a classification of all 4-manifolds with complexity zero and finite fundamental group.
\[finite:teo\] A closed 4-manifold $M$ with finite fundamental group has complexity zero if and only if $$M = N \#_h(S^2\times S^2) \#_k \matCP^2 \#_l \overline\matCP^2$$ for some $$N \in \calS(\calC_{2^n}) \cup \calS(\calC_{3\cdot 2^n}) \cup \calS(\calD_{2\cdot 2^n})$$ and $h,k,l,n \geqslant 0$. \[finite:cor\] A simply connected closed 4-manifold $M$ has complexity zero if and only if $M$ is a connected sum of copies of $S^4$, $S^2\times S^2$, and $\matCP^2$ (with both orientations). That is, $$M = \#_h (S^2\times S^2) \qquad {\rm or } \qquad M = \#_h \matCP^2 \#_k \overline\matCP^2$$ for some $h,k\geqslant 0$. It is worth emphasizing that Corollary \[finite:cor\] needs the whole proof of Theorem \[main:teo\], which is the core result of this paper. As far as we know, restricting to simply connected 4-manifolds (and thus shadows) does not help much: the whole machinery described in this paper is needed. We can easily calculate the classical topological invariants of the manifolds found. For every pair of integers $(\chi, \sigma)$ with $\chi+\sigma$ even there is a closed 4-manifold having complexity zero, signature $\sigma$, and Euler number $\chi$. The Euler characteristic of a graph manifold generated by $\calS_0$ is the sum of the characteristics of the blocks. All blocks have $\chi = 0$, except $\chi (N_1) = 2$ and $\chi (N_3) = -2$. Therefore the Euler characteristic of a graph manifold may be any even integer. Its signature is zero since it bounds a 5-manifold. Via connected sums with $\matCP^2$ we get manifolds with arbitrary $\sigma$. Note that $\chi + \sigma$ is even for every closed oriented 4-manifold. Concerning intersection forms, we get the following. Let $H$ denote the form $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. The intersection form of a closed 4-manifold having complexity zero is either $n[-1]\oplus m[+1]$ or $kH$. Graph manifolds have zero signature and thus an indefinite form which is either $n[-1]\oplus n[+1]$ or $kH$. 
By summing projective planes we get the result. The only intersection form admitted for 4-manifolds which has not yet been encountered is $2mE_8\oplus nH$. We thus ask the following. What are the manifolds of lowest complexity having intersection form $2mE_8\oplus nH$? Is the $K3$ among them? Which pairs $(m,n)$ do we get? As we said above, Matveev’s complexity induces a filtration $\calM_0\subset \calM_1\subset \ldots$ where each $\calM_c$ contains infinitely many 3-manifolds, but only finitely many interesting ones. This also holds for our $\calM_0$, if we decide that doubles of 2-handlebodies and non-irreducible 4-manifolds are not interesting. We conjecture that this holds for all values of $c$. \[finite:conj\] For every natural number $c$ there are only finitely many irreducible 4-manifolds of complexity $c$ that are not doubles of 2-handlebodies. In fact, constructing shadows with few vertices of doubles of 2-handlebodies is pretty easy and we expect that there are infinitely many of them for all $c$. One may reasonably argue that doubles of 2-handlebodies are interesting, since they might contain for instance fake copies of $S^4$, see [@AnCu]. We thus propose an alternative conjecture. \[simply:conj\] For every natural number $c$ there are finitely many irreducible simply connected 4-manifolds of complexity $c$. Inside $\calM_0$ we found only $\matCP^2$ and $S^2\times S^2$. Note that, by a result of Auckly, the number of such manifolds (if finite) grows faster than polynomially. The number $n_c$ of distinct manifolds lying in $\calM_c$ that are topologically homeomorphic to $K3$ is bigger than $c^{k\sqrt[3] c}$ for some constant $k$ and sufficiently big $c$. Note that a fixed simple polyhedron may give rise only to finitely many closed manifolds [@Ma:link], but there are infinitely many simple polyhedra with a given number of vertices.
One may try to attack the conjectures by proving that only finitely many simple polyhedra may yield “interesting” 4-manifolds. Finiteness may also be obtained *a priori* by defining a complexity which uses a much more restricted class of simple polyhedra, *i.e.* the *special* ones: see [@Au; @Co; @Ma:link]. This *special complexity* $c^{\rm spec}$ is only related to the complexity $c$ we use here via the obvious inequality $c(M^4)\leqslant c^{\rm spec}(M^4)$. The closed 4-manifolds $M^4$ having $c^{\rm spec}(M^4)\leqslant 1$ are $S^4$, $\matCP^2$, $\matCP^2\#\overline \matCP^2$, $\matCP^2\#\matCP^2$, and $S^2\times S^2$, see [@Co]. Finally, we show how complexity allows us to state three well-known conjectures in a similar form. We denote by P, AC, P4 respectively the (now proven) Poincaré conjecture, the Andrews-Curtis conjecture [@AnCu], and the (piecewise-linear) 4-dimensional Poincaré conjecture. We denote by $\sim$ the homotopy equivalence between manifolds. \[analogies:teo\] The following holds. 1. P holds $\Longleftrightarrow$ $c(M^3)=0$ for every 3-manifold $M^3\sim S^3$; 2. P4 holds $\Longleftrightarrow$ $c(M^4)=0$ for every 4-manifold $M^4\sim S^4$; 3. AC holds $\Longleftrightarrow$ $c(\calP)=0$ for every presentation $\calP$ of the trivial group. Complexity of presentations is defined in Section \[presentations:subsection\]. The three types of complexities mentioned in Theorem \[analogies:teo\] are all defined as the minimum number of vertices of some simple polyhedron. The equivalence (1) follows from Matveev’s seminal paper [@Mat], (2) follows from Corollary \[finite:cor\], and (3) is easily proved in Section \[presentations:subsection\]. Structure of the paper {#structure-of-the-paper .unnumbered} ---------------------- All the results stated in the introduction except Theorem \[main:teo\] are proved in Section \[simple:section\]. The rest of the paper is devoted to proving Theorem \[main:teo\].
An outline of the proof is presented in Section \[outline:subsection\]. In Section \[shadows:section\] we recall (a version of) the definition of Turaev’s shadows. We construct shadows (with boundary) without vertices of all the blocks in $\calS_0$ and of $\matCP^2$. In Section \[operations:section\] we prove that blocks can be assembled along their $(S^2\times S^1)$-boundaries and can be summed (via an internal connected sum) without increasing the complexity. This approach is very similar to the *bricks* construction used in [@MaPe] for 3-manifolds. Section \[moves:section\] collects some moves that relate two shadows of the same 4-manifold. We introduce there various new moves that are particularly useful when there are no vertices. In Section \[without:section\] we study simple shadows without vertices, their 4-dimensional thickening, and their 3-dimensional boundary. Sections \[reduction:section\] to \[proof:section\] contain the core of the proof of Theorem \[main:teo\]. We will always work in the piecewise-linear category. Every manifold and map is tacitly assumed to be PL. Acknowledgements {#acknowledgements .unnumbered} ---------------- The author would like to thank Francesco Costantino for the many discussions on this topic, and the Maths Department of Austin for its hospitality. Simple polyhedra {#simple:section} ================ We prove here all the assertions made in the introduction except Theorem \[main:teo\]. We introduce a graph notation to encode simple polyhedra without vertices which will also be used in the subsequent sections. Simple polyhedra with boundary ------------------------------ ![image](shadow.pdf) \[shadow:fig\] A *simple polyhedron with boundary* is a compact polyhedron $X$ where every point has a link homeomorphic to a circle with three radii, a circle with a diameter, a circle, or a segment. Star neighborhoods are shown in Fig. \[shadow:fig\]. The *boundary* $\partial X$ is the union of all points of type (4).
Points of type (1) are called *vertices*. The points of type (2) and (3) form respectively some manifolds of dimension 1 and 2: their connected components are called respectively *edges* and *regions*. The *singular part* $SX$ of $X$ is the union of all points of type (1), (2), and (4). For simplicity, we will often employ the term *simple polyhedron* to denote a simple polyhedron with boundary. Simple polyhedra without vertices {#simple:subsection} --------------------------------- In this paper we are concerned only with simple polyhedra $X$ without vertices. Consider one such polyhedron $X$. Each component of $SX$ is a circle. Its regular neighborhood $N$ has the structure of a $Y$-bundle over $S^1$, where $Y$ denotes the cone over 3 points. There are three topological types for $N$: its boundary may have 3, 2, or 1 components, and look respectively like $(111)$, $(12)$, and $(3)$ from Fig. \[sum\_no\_gleam:fig\]. We use the names $Y_{111}$, $Y_{12}$, and $Y_3$ to denote these three objects. Of course we have $Y_{111} \isom Y\times S^1$. After removing regular neighborhoods of the circles in $SX$ we are left with regions. These in turn decompose, as does every surface, into discs, Möbius strips, and pairs-of-pants. We denote such objects by $D^2$, $Y_2$, and $P^2$. The name $Y_2$ follows from analogy with Fig. \[sum\_no\_gleam:fig\]. We have proved the following. \[simple:prop\] Every simple polyhedron without vertices decomposes along simple closed curves into pieces homeomorphic to $D^2$, $P^2$, $Y_2$, $Y_{111}$, $Y_{12}$, and $Y_3$. \[vertices:fig\] A simple polyhedron $X$ without vertices is easily encoded by a graph $G$ with vertices as in Fig. \[vertices:fig\]. Vertices of type (D), (P), (2), (111), (12), (3) denote respectively pieces homeomorphic to $D^2$, $P^2$, $Y_2$, $Y_{111}$, $Y_{12}$, and $Y_3$. A vertex of type (B) encodes a boundary component of $X$.
Note that the vertex of type (12) is not symmetric: the edge marked with two lines should correspond to the region winding twice over the singular circle in $SX$. Every edge of $G$ denotes a gluing of two such pieces. There are two possible gluings, since there are two self-homeomorphisms of $S^1$ up to isotopy, one orientation-preserving and one reversing. This gives a map $\beta:H_1(G,\matZ_2)\to\matZ_2$. Each piece admits a self-homeomorphism that reverses the orientation of the boundary circles. Therefore the graph $G$ and $\beta$ together encode the simple polyhedron $X$. Since a surface can split along pants, discs, and Möbius strips in multiple ways, there are some moves that modify the graph while leaving the associated polyhedron unchanged. Some of these are shown in Fig. \[mosse\_innocue\_ungleamed:fig\]. \[mosse\_innocue\_ungleamed:fig\] Simple homotopy and presentations {#presentations:subsection} --------------------------------- A *simple homotopy* between two polyhedra $X, X'$ of dimension 2 is a composition of simplicial collapses and expansions that transform $X$ into $X'$. Two polyhedra $X$ and $X'$ are *3-deformation equivalent* if there is a simple homotopy between them which involves only collapses and expansions of simplexes of dimension $\leqslant 3$. Recall from the introduction that every presentation $\calP$ defines a 2-dimensional polyhedron $X_\calP$. \[bijection:teo\] The map $\calP \mapsto X_\calP$ defines a bijection between Andrews-Curtis classes of presentations and 3-deformation classes of 2-dimensional polyhedra. See [@HoMe] for a careful proof of this theorem and a nice introduction to the subject. We introduce the following definition. The *complexity* $c(\calP)$ of a presentation $\calP$ is the minimum number of vertices of a simple polyhedron $X$ with boundary which is 3-deformation equivalent to $X_\calP$.
This number is always finite, since every 2-dimensional polyhedron is easily seen to be 3-deformation equivalent to a simple one. By Theorem \[bijection:teo\], the number $c(\calP)$ depends only on the Andrews-Curtis class of $\calP$ and may also be interpreted as a complexity on 3-deformation classes of polyhedra. Thanks to Theorem \[bijection:teo\] we can safely shift from presentations (up to AC-equivalence) to 2-dimensional polyhedra (up to 3-deformation). Free products of presentations correspond to wedge products of polyhedra, and we denote both these operations by $\vee$. For the sake of clarity, we denote by $S^2$ the presentation $\langle a| a,a\rangle$ which indeed corresponds to $S^2$. Here we will need the following. \[AC:prop\] The presentations (up to AC-equivalence) of finite groups having complexity zero are precisely those of the form $\calP \vee_h S^2$ for some $h\geqslant 0$ and some $\calP = \calC_{2^n}$, $\calC_{3\cdot 2^n}$, or $\calD_{2\cdot 2^n}$ with $n\geqslant 0$. We will use at various points the following trick. Let $X$ be a simple polyhedron without vertices. It is described by a graph $G$ with vertices as in Fig. \[vertices:fig\]. Consider the move in Fig. \[wedge:fig\]. An edge of the graph determines a circle in a region of $X$. If we shrink the circle to a point (and $G$ is a tree), the resulting polyhedron is a wedge $X_1\vee X_2$ of two simple polyhedra, as described by the move. \[wedge:fig\] There is an obvious map $X\to X_1\vee X_2$ which induces a surjective map $$\pi_1(X) \to \pi_1(X_1) * \pi_1(X_2).$$ If $X$ is simply connected then both $X_1$ and $X_2$ also are, and if $\pi_1(X)$ is finite then either $\pi_1(X_1)$ or $\pi_1(X_2)$ is trivial (and the other is finite). \[AC\_mosse:fig\] Another fact that we will use is that both moves in Fig. \[AC\_mosse:fig\] can be realized via 3-deformations (this is easily seen). We will denote 3-deformation equivalence via the symbol $\sim$. We will now prove a general claim.
Let $Y_i$ be the simple polyhedron drawn in Fig. \[gira:fig\]-(1). Let $X$ be any simple polyhedron without vertices and with one boundary component (*i.e.*, we have $\partial X\isom S^1$). Let $\hat X$ be obtained from $X$ by capping the boundary with a disc. \[gira:fig\] *Claim. If $\pi_1(\hat X) = \{e\}$ then $X$ is 3-deformation equivalent (relative to $\partial X$) to $X' = Y_i\vee_h S^2$ for some $i,h\geqslant 0$.* By a 3-deformation equivalence relative to $\partial X$ we mean that collapses and expansions take place away from $\partial X$. Note that the claim easily implies the following. *Corollary. A simply connected simple polyhedron without boundary and without vertices is 3-deformation equivalent to $\vee_h S^2$.* We prove the claim. The polyhedron $X$ is described by a graph $G$ with vertices as in Fig. \[vertices:fig\]. There is precisely one vertex of type (B), corresponding to $\partial X$. A graph $\hat G$ for $\hat X$ is obtained simply by substituting this vertex with a vertex of type (D). Both graphs are trees since $H_1(\hat X,\matZ)$ is trivial. We prove the claim by induction on the number of vertices of $G$. The vertex (B) cannot be incident to a vertex of type (2) or (3), because the polyhedra obtained by capping a Möbius strip or a $Y_3$ piece with a disc are not simply connected. Therefore the vertex (B) is incident to one vertex of type (D), (P), (111), or (12). In the first case $X$ is a disc, *i.e.* $X = Y_0$ and we are done. In all other cases we conclude by induction, as follows. \[AC\_mosse2:fig\] Each of the moves in Fig. \[AC\_mosse2:fig\] transforms $X$ into one or two polyhedra which satisfy our induction hypothesis: we can easily conclude in each case. More precisely, move (1) transforms $X$ into two polyhedra $X_1$ and $X_2$. Consider the capped polyhedra $\hat X$, $\hat X_1$, and $\hat X_2$: we have $\hat X \sim \hat X_1 \vee \hat X_2$. Therefore $\{e\} = \pi_1(\hat X) = \pi_1(\hat X_1)* \pi_1(\hat X_2)$. Thus $\pi_1(\hat X_1) = \pi_1(\hat X_2) = \{e\}$ and our induction hypothesis applies to both $X_1$ and $X_2$.
Therefore $$\begin{aligned} X_1 & \sim & Y_i\vee_h S^2 \\ X_2 & \sim & Y_j \vee_k S^2 \end{aligned}$$ and we easily deduce that $$\begin{aligned} X & \sim & Y_{\min\{i,j\}} \vee_{h+k+1} S^2.\end{aligned}$$ Note that all the 3-deformations are performed away from $\partial X_1$ and $\partial X_2$ and therefore survive in $X$. We turn to move (2). The first trick described above gives a map $\hat X \to \hat X_1\vee\hat X_2$ which is surjective on fundamental groups, thus we conclude again that $X_1$ and $X_2$ fulfill the induction hypothesis. Again we get $$\begin{aligned} X_1 & \sim & Y_i \vee_h S^2 \\ X_2 & \sim & Y_j\vee_k S^2 \end{aligned}$$ which implies that $$\begin{aligned} \hat X & \sim & X_{\langle a | a^{2^i}, a^{2^j} \rangle} \vee_{h+k} S^2. \end{aligned}$$ Since $\hat X$ is simply connected, either $i=0$ or $j=0$. Suppose $i=0$: we then get $$\begin{aligned} X & \sim & Y_j\vee_{h+k} S^2. \end{aligned}$$ In move (3) the polyhedron $X$ is transformed into a polyhedron $X'$ such that $\hat X \sim \hat X'$, see Fig. \[AC\_mosse:fig\]-(2). Therefore $X'$ fulfills the hypothesis and we get $$\begin{aligned} X' & \sim & Y_i\vee_h S^2\end{aligned}$$ which implies that $$\begin{aligned} X & \sim & Y_{i+1}\vee_h S^2.\end{aligned}$$ Finally, in move (4) we have a map $\hat X \to \hat X'$ which is surjective on fundamental groups. Therefore $X'$ fulfills the hypothesis. We get $$\begin{aligned} X' & \sim & Y_i\vee_h S^2\end{aligned}$$ which implies that $$\begin{aligned} \hat X & \sim & X_{\langle a | a^{2^i}, a^2 \rangle} \vee_h S^2.\end{aligned}$$ Since $\pi_1(\hat X)=\{e\}$, we deduce that $i=0$. This implies that $X \sim Y_0\vee_h S^2$. We have proved the claim. It is now easy to deduce the proposition. Let $X$ be a simple polyhedron without vertices. It always collapses onto the union of a simple polyhedron without boundary and some 1-dimensional polyhedron.
Since $\pi_1(X)$ is finite, the 1-dimensional polyhedron also collapses and we are left either with a simple polyhedron without boundary, which we still call $X$, or with a point. In the latter case we are done. Represent $X$ via a graph $G$. Take an edge of $G$. It determines a loop $\gamma$ in a region of $X$, which separates $X$ into two polyhedra $X_1$, $X_2$ with $\partial X_1 = \partial X_2 = \gamma$. We apply the usual trick by shrinking $\gamma$ to a point. We get a surjective map from $\pi_1(X)$ to $\pi_1(\hat X_1)*\pi_1(\hat X_2)$. Since $\pi_1(X)$ is finite, either $\pi_1(\hat X_1)$ or $\pi_1(\hat X_2)$ is trivial. Suppose that $\pi_1(\hat X_1)$ is trivial. Then we apply the claim to $X_1$. We get $X_1 \sim Y_i\vee_h S^2$ relative to $\gamma$. We can apply this to every edge of $G$. It is easy to conclude that $X$ is 3-deformation equivalent to a polyhedron which may be represented via one single vertex $v$ from Fig. \[vertices:fig\] and a polyhedron of type $Y_i\vee_h S^2$ attached to each of the incident edges. We conclude as follows: - if $v$ is of type (D) then $X \sim X_{\calC_{2^i}}\vee_h S^2$; - if $v$ is of type (P) then $X \sim X_{\langle a,b | a^{2^i}, b^{2^j}, (ab)^{2^k} \rangle}\vee_h S^2$; - if $v$ is of type (2) then $X \sim X_{\calC_{2\cdot 2^i}}\vee_h S^2$; - if $v$ is of type (111) then $X \sim X_{\langle a | a^{2^i}, a^{2^j}, a^{2^k} \rangle}\vee_h S^2 \sim X_{\calC_{2^{\min \{i,j,k\}}}} \vee_{h+2} S^2$; - if $v$ is of type (12) then $X \sim X_{\langle a | a^{2\cdot 2^i}, a^{2^j} \rangle}\vee_h S^2 \sim X_{\calC_{2^{\min \{i+1,j\}}}} \vee_{h+1} S^2$; - if $v$ is of type (3) then $X \sim X_{\calC_{3\cdot 2^i}}\vee_h S^2$. In all cases we are done except when $v$ is of type (P). Recall that a group presented as $$\langle a,b \ | \ a^p, b^q, (ab)^r \rangle$$ is finite precisely when $1/p+1/q+1/r>1$.
Thus when $v$ is of type (P) and we take $i\leqslant j \leqslant k$ we get: - $(i,j,k) = (0,j,k)$, and $X \sim X_{\calC_{2^{\min\{j,k\}}}} \vee_{h+1} S^2$, - $(i,j,k) = (1,1,k)$, and $X \sim X_{\calD_{2\cdot 2^k}} \vee_h S^2$ as required. Five-dimensional thickenings ---------------------------- We now study 5-dimensional thickenings of simple polyhedra, and their 4-dimensional boundaries. Five-dimensional thickenings are easier to study than four-dimensional ones: this may explain why Proposition \[graph:prop\] is much easier to prove than Theorem \[main:teo\]. To prove the proposition we start with a general lemma (which is well-known to experts). \[23:lemma\] Let $X$ be a compact 2-dimensional polyhedron. Let $M$ be a closed oriented 4-manifold. The following conditions are equivalent. 1. $M$ is the boundary of a compact oriented 5-manifold which collapses on $X$; 2. $M$ is the double of a compact 4-manifold which collapses on $X$. \(2) $\Rightarrow$ (1). We have $M = DN$ for some 4-dimensional compact $N$ which collapses to $X$. Clearly the 5-dimensional $N\times [0,1]$ also collapses to $X$ and $\partial (N\times [0,1]) \isom M$. \(1) $\Rightarrow$ (2). We have $M= \partial W$ for some oriented 5-manifold $W$ which collapses to $X$. Choose a triangulation of $X$ and thicken it to a handle decomposition for $W$. Thicken arbitrarily the triangulation of $X$ to a handle decomposition of a 4-manifold $N$, and thicken it again to a handle decomposition of $N\times [0,1]$. The manifolds $W$ and $N\times [0,1]$ have the same 0- and 1-handles. Concerning 2-handles, their attaching circles are homotopic, and since they lie in some 4-dimensional manifold they are actually isotopic. The only thing that might differ between the handle decompositions of $W$ and $N\times [0,1]$ is the way each 2-handle is attached: there are two possibilities since $\pi_1(SO(3))=\matZ_2$. In dimension 4, there are infinitely many possibilities since $\pi_1(SO(2)) = \matZ$. 
A 2-handle for $N$ induces a 2-handle for $N \times [0,1]$ according to the surjective homomorphism $\pi_1(SO(2))\to \pi_1(SO(3))$ induced by a standard injective map $SO(2) \to SO(3)$. When constructing $N$, it suffices to choose on each 2-handle a framing with the right parity, coherent with the corresponding 2-handle of $W$. With this choice, we get $W \isom N\times [0,1]$, and we are done. In practice, to deal with graph manifolds we may use a smaller generating set $\calS_0' \subset \calS_0$, as the following shows. \[smaller:prop\] Every graph manifold generated by $\calS_0$ is a connected sum of $h\geqslant 0$ copies of $S^3\times S^1$ and $k\geqslant 0$ graph manifolds generated by the set $$\calS_0' = \big\{M_2, M_{111}, M_{12}, M_3, N_1, N_3\big\}.$$ A graph manifold generated by $\calS_0$ decomposes into blocks homeomorphic to those of $\calS_0'$ and $M_1, M_{11}, N_2$. Each block homeomorphic to $M_{11} = N_2 = S^2\times S^1 \times [0,1]$ may be simply removed or substituted with a pair of $N_1 = D^2\times S^2$ and $N_3=P^2\times S^2$. It remains to prove that we can also rule out the block $M_1 = D^3\times S^1$. Every self-diffeomorphism of $S^2\times S^1$ extends to $D^3\times S^1$, see [@LaPo]. Therefore there is only one way to glue this block to the adjacent block. Gluing $M_1$ consists of *filling*, the opposite of drilling along a curve. It is thus clear that by gluing $M_1$ to some piece $M_{i_1\cdots i_h}$ we get a simpler piece $M_{i_1\cdots \hat i_j \cdots i_h}$, or $M_\emptyset = S^3\times S^1$. So after finitely many simplifications we may suppose that each $M_1$ is glued only along a copy of $N_1$ or $N_3$. In the first case we get $S^4$. In the second case, it is easy to see that $$M_1 \cup N_3 \isom M_1 \# M_1$$ and we proceed by iteration. Every manifold in $\calS_0$ is easily seen to be a double and thus admits an orientation-reversing self-homeomorphism. For that reason the chosen orientation is not important. 
The same holds for every graph manifold generated by $\calS_0$. We may now prove Proposition \[graph:prop\]. A simple polyhedron without boundary in a 4-manifold is locally flat if it is locally contained in a 3-dimensional slice, see Definition \[properly:defn\]. \[graph2:prop\] Let $M$ be a closed oriented 4-manifold different from $\#_h (S^3\times S^1)$. The following conditions are equivalent. 1. $M$ is a graph manifold generated by $\calS_0$. 2. $M$ is the boundary of a compact oriented 5-manifold which collapses onto a simple polyhedron without vertices (and without boundary). 3. $M$ is the double of a compact 4-manifold which collapses onto a simple polyhedron without vertices (and without boundary). 4. $M$ is the double of a compact 4-manifold which collapses onto a locally flat simple polyhedron without vertices (and without boundary). The equivalence between (2) and (3) is settled by Lemma \[23:lemma\]. \(2) $\Rightarrow$ (1). Let $X$ be a simple polyhedron without vertices and $W^5$ a compact oriented 5-manifold collapsing to it. The polyhedron $X$ decomposes into pieces as stated by Proposition \[simple:prop\]. The pieces are homeomorphic to $D^2$, $P^2$, $Y_2$, $Y_{111}$, $Y_{12}$, or $Y_3$. The regular neighborhood $N(X)$ of $X$ in $W^5$ decomposes similarly into pieces obtained by thickening the pieces above. These pieces are homeomorphic respectively to $D^2\times D^3$, $P^2\times D^3$, $S^1\times D^4$, $S^1\times D^4$, $S^1\times D^4$, and $S^1\times D^4$ again. Each piece $P$ of $N(X)$ fibers over the corresponding piece $\pi(P)$ of $X$. The 4-dimensional boundary $\partial P$ decomposes into a “horizontal” part, which is contained in $\partial N(X)$, and a “vertical” part, consisting of $\pi^{-1}(\partial (\pi(P)))$. The vertical part is made of copies of $D^3\times S^1$ that are glued together to form properly embedded submanifolds of $N(X)$.
It is easy to check that the horizontal part is homeomorphic respectively to $N_1$, $N_3$, $M_2$, $M_{111}$, $M_{12}$, or $M_3$. Therefore $\partial N(X)$ is a graph manifold. Since $W^5$ collapses onto $X$, we have $W^5 \isom N(X)$ and we are done. \(1) $\Rightarrow$ (2). By Proposition \[smaller:prop\], every graph manifold $M\neq \#_k(S^3\times S^1)$ is a connected sum of some graph manifolds $Q_1,\ldots, Q_h$ generated by $\calS_0'$ and $h'$ copies of $S^3\times S^1$. (We have $h\geqslant 1$ and $h'\geqslant 0$ since $M\neq \#_k(S^3\times S^1)$.) Consider one $Q_i$. It decomposes into pieces homeomorphic to $M_2$, $M_{111}$, $M_{12}$, $M_3$, $N_1$, and $N_3$. As we have seen, every such piece is the horizontal boundary of a 5-dimensional block which fibers over some simple polyhedron with boundary without vertices. Every self-homeomorphism of $S^2\times S^1$ is isotopic to one which preserves the foliation in spheres and thus extends to $D^3\times S^1$. We can therefore glue correspondingly the 5-dimensional blocks. The resulting 5-manifold $W^5_i$ fibers (and collapses) to a simple polyhedron $X_i$ without boundary and without vertices. Its boundary $\partial W^5_i$ is homeomorphic to $Q_i$. By using $h-1$ times the move in Fig. \[sum\_no\_gleam:fig\] we construct from $X_1,\ldots, X_h$ a connected simple polyhedron $X$ such that the boundary-sum $W^5 = W_1^5\sharp \ldots \sharp W_h^5$ collapses onto $X$. Of course, we have $\partial W^5 = Q_1\#\ldots \#Q_h$. We then use $h'$ times Fig. \[sum\_no\_gleam:fig\] again to realize $h'$ self-connected sums and get the $\#_{h'}(S^3\times S^1)$ factors. \(4) $\Rightarrow$ (3). Obvious. \(2) $\Rightarrow$ (4). In the proof of Lemma \[23:lemma\], we have the freedom to construct a locally flat $X$. We can easily prove Proposition \[G:prop\]. The set $\calG_0$ of all 4-dimensional graph manifolds generated by $\calS_0$ is closed under connected sum and finite coverings. That is, 1. 
if $M, M' \in \calG_0$ then $M\# M' \in \calG_0$; 2. if $M\in \calG_0$ and $\widetilde M \to M$ is a finite covering, then $\widetilde M \in \calG_0$. If $W^5$ collapses onto a simple polyhedron $P^2$ and $W'^5$ collapses onto $P'^2$, then the $\partial$-connected sum $W\sharp W'$ collapses onto the simple polyhedron $R^2$ constructed in Fig. \[sum\_no\_gleam:fig\]. Since $\partial (W \sharp W') = \partial W \# \partial W'$, we get (1). We turn to (2). Since $W^5$ collapses onto a 2-dimensional polyhedron, it admits a decomposition with 0-, 1-, and 2-handles. Therefore the inclusion $\partial W^5 \to W^5$ induces an isomorphism on fundamental groups. Every covering of $\partial W^5$ is thus induced by a covering of $P^2$. The covering of a simple polyhedron without vertices is a simple polyhedron without vertices, hence we are done. \[sum\_no\_gleam:fig\] Finite fundamental groups ------------------------- We prove here Propositions \[presentation:prop\] and \[finite:prop\]. Let $\calS(X^2)$ denote the set of all closed 4-manifolds that are boundaries of some orientable 5-manifold that collapses onto $X^2$. \[AC:teo\] If $X$ and $X'$ are 3-deformation equivalent then $\calS(X) = \calS(X')$. The sets of 5-dimensional thickenings of $X$ and of $X'$ coincide, see [@AnCu; @HoMe]. Therefore the sets of their boundaries also coincide. The following shows that $\calS(X)$ is finite. \[w2:prop\] Let $X^2$ be a compact 2-dimensional polyhedron. For every class $\alpha\in H^2(X^2,\matZ_2)$ there is precisely one 5-dimensional manifold $W^5$ collapsing onto $X^2$ with $w_2(W^5) = \alpha$. See [@HaKrTe] for a proof. The 5-dimensional thickenings of a 2-dimensional polyhedron $X^2$ are thus in natural correspondence with the elements in $H^2(X^2,\matZ_2)$. We can now prove Propositions \[presentation:prop\] and \[finite:prop\]. The following holds. 1. The set $\calS(\calP)$ contains finitely many 4-manifolds, precisely one of which is spin. 2.
The manifolds in $\calS(\calP)$ share the same cellular 3-skeleton: therefore all their homology groups and the homotopy groups $\pi_1$ and $\pi_2$ depend only on $\calP$. 3. If $\calP$ and $\calP'$ are related by Andrews-Curtis moves [@AnCu], then $\calS(\calP) = \calS(\calP')$. Let $X^2$ be the polyhedron determined by $\calP$. Proposition \[w2:prop\] implies that $X^2$ thickens to finitely many 5-manifolds $W^5$, precisely one of which has vanishing $w_2(W^5)$. The map $i^*:H^2(W^5,\matZ_2)\to H^2(\partial W^5,\matZ_2)$ induced by inclusion is injective since $H^2(W^5,\partial W^5) \isom H_3(W^5)= 0$. By the naturality of the Stiefel-Whitney class we have $i^*(w_2(W^5)) = w_2(\partial W^5)$. Hence $W^5$ is spin if and only if $\partial W^5$ is spin, and (1) is proved. We turn to (2). The 1-skeleton of $X^2$ can be thickened in a unique way to a 5-manifold, whose boundary is $\#_k (S^3\times S^1)$. Such a boundary intersects the 2-cells of $X^2$ in a link. The set $\calS(\calP)$ consists of all the 4-manifolds that can be obtained by surgery along that link. Therefore these manifolds share the same 3-skeleton. (A surgery consists of removing $S^1\times D^3$ and then adding a 2-handle and a 4-handle. The 2-handle depends on a framing, but its core disc does not. By adding only the core discs we thus get a common 3-skeleton for all the manifolds in $\calS(\calP)$.) Finally, (3) follows from Theorem \[AC:teo\]. We have the following. $$\begin{aligned} \calS(\calC_n) & = & \left\{\begin{array}{ll} \left\{C_n^0, C_n^1 \right\} & {\rm \ if\ } n {\rm \ is \ even,}\\ \left\{C_n^0\right\} & {\rm \ if\ } n {\rm \ is \ odd.} \end{array}\right.
\\ \calS(\calD_{2n}) & = & \left\{\begin{array}{ll} \left\{D_n^0, D_n^1, D_n^2, D_n^3\right\}& {\rm \ if\ } n=2, \\ \left\{D_n^{00}, D_n^{10}, D_n^{20}, D_n^{01}, D_n^{11}, D_n^{21} \right\} & {\rm \ if\ } n>2 {\rm \ is \ even,} \\ \left\{D_n^{0}, D_n^{1}, D_n^{2}\right\} & {\rm \ if\ } n>2 {\rm \ is \ odd.} \\ \end{array}\right.\end{aligned}$$ The manifolds $C^0_n, D^0_n, D^{00}_n$ are spin, the others are not. The manifolds $C_n^0$, $C_n^1$, $D^0_n$, $D^2_n$, $D^{00}_n$, $D^{10}_n$, $D^{20}_n$ are even, the others are odd. The universal covering of every manifold in the list is $\#_k (S^2\times S^2)$, for some $k$. Let $X_\calP$ be the 2-dimensional polyhedron associated to some presentation $\calP$. Let $W^5$ be the 5-dimensional thickening of $X_\calP$, determined by its Stiefel-Whitney class $w_2 \in H^2(W^5,\matZ_2)\isom H^2(X_\calP,\matZ_2)$. By naturality, the Stiefel-Whitney class of $\partial W^5$ is the image $i^*(w_2)$ under the injective map $i^*:H^2(W^5,\matZ_2)\to H^2(\partial W^5,\matZ_2)$. The following holds: 1. the 4-manifold $\partial W^5$ is spin if and only if $i^*(w_2)(\alpha)=0$ for all $\alpha\in H_2(\partial W^5,\matZ_2)$; 2. the 4-manifold $\partial W^5$ is even if and only if $i^*(w_2)(\alpha)=0$ for all $\alpha\in H_2(\partial W^5,\matZ)$. Note that $i_*:H_2(\partial W^5) \to H_2(W^5)$ is surjective (because $H_2(W^5,\partial W^5) \isom H^3(W^5) \isom H^3(X_\calP) = 0$). Of course we have $i^*(w_2) (\alpha) = w_2(i_*(\alpha))$ for all $\alpha$. We can thus modify the two assertions above as follows. 1. the 4-manifold $\partial W^5$ is spin if and only if $w_2(\alpha)=0$ for all $\alpha\in H_2(W^5,\matZ_2)$; 2. the 4-manifold $\partial W^5$ is even if and only if $w_2(\alpha)=0$ for all $\alpha\in H_2(W^5,\matZ)$. We identify the homologies of $W^5$ and $X_\calP$. Let us now consider the case $\calP = \calC_n = \langle a |a^n \rangle$. We have the following.
$$H_2(X_{\calC_n}, \matZ_2) = \left\{\begin{array}{ll} \matZ_2 & {\rm \ if\ } n {\rm \ is \ even,} \\ 0 & {\rm \ if\ } n {\rm \ is \ odd.} \end{array}\right.$$ If $n$ is odd, there is only one 5-dimensional thickening; it is spin, and $\partial W^5$ is a spin manifold, which we denote by $C_n^0$. If $n$ is even, we have two possibilities: one spin manifold $C_n^0$ and one non-spin manifold $C_n^1$. We have $H_2(X_{\calC_n},\matZ)=0$: by what was just said, the manifold $C_n^1$ is even. Let us turn to dihedral manifolds, *i.e.* to $\calP = \calD_{2n} = \langle a, b| a^2, b^2, (ab)^n\rangle$. We first consider the very symmetric case $n=2$. We may picture $X=X_{\calD_4}$ after a small 3-deformation as a pair-of-pants with three projective planes attached. We have $$\begin{aligned} H_2(X,\matZ_2) & = & \matZ_2 + \matZ_2 + \matZ_2, \\ H_2(X,\matZ) & = & \matZ.\end{aligned}$$ A basis for $H_2(X,\matZ_2)$ is given by the three projective planes. We then get a dual basis for $H^2(X,\matZ_2)$. The modulo-2 map $H_2(X,\matZ) \to H_2(X,\matZ_2)$ sends 1 to $(1,1,1)$. Up to symmetries of $X$, there are four choices for $w_2 \in H^2(X,\matZ_2)$: 1. $(0,0,0)$ leads to a spin manifold $D_n^0$; 2. $(1,0,0)$ leads to a non-spin odd manifold $D_n^1$; 3. $(1,1,0)$ leads to a non-spin even manifold $D_n^2$; 4. $(1,1,1)$ leads to a non-spin odd manifold $D_n^3$. We need to distinguish $D_n^1$ from $D_n^3$. We do this by looking at their index-two coverings. Each $D_n^i$ has three such coverings, and it turns out that the number of spin manifolds among them is $3-i$. This is easily seen as follows: each covering $\pi:\tilde X \to X$ is determined by the choice of one projective plane $P$ in $X$. The polyhedron $\tilde X$ contains two projective planes fibering over $P$ and two spheres fibering over the two other projective planes in $X$. These four surfaces generate $H_2(\tilde X, \matZ_2)$. Let $p:\tilde W \to W$ be the covering of thickenings. We have $p^*(w_2)(\alpha) = w_2(p(\alpha))$.
If $\alpha$ is a sphere, it double-covers a projective plane $P'\subset X$ and we have $w_2(p(\alpha)) = w_2(2P') = 0$. If $\alpha$ is a projective plane over $P$ we get $p^*(w_2)(\alpha) = w_2(P)$. Thus the covering manifold is spin iff $w_2(P)=0$. Therefore $D^i_n$ has $3-i$ spin coverings of index two. The other dihedral manifolds are treated similarly. We always take $X$ to be a pair-of-pants with three discs attached along its boundary, winding 2, 2, and $n$ times. If $n>2$ is odd, we get $$\begin{aligned} H_2(X,\matZ_2) & = & \matZ_2 + \matZ_2, \\ H_2(X,\matZ) & = & \matZ.\end{aligned}$$ The modulo-2 map $H_2(X,\matZ) \to H_2(X,\matZ_2)$ sends 1 to $(1,1)$. Up to symmetries we have three choices for $w_2$: 1. $(0,0)$ leads to a spin manifold $D_n^0$; 2. $(1,0)$ leads to a non-spin odd manifold $D_n^1$; 3. $(1,1)$ leads to a non-spin even manifold $D_n^2$. If $n>2$ is even, we get $$\begin{aligned} H_2(X,\matZ_2) & = & \matZ_2 + \matZ_2 + \matZ_2, \\ H_2(X,\matZ) & = & \matZ.\end{aligned}$$ A basis for $H_2(X,\matZ_2)$ is given by two projective planes and one 2-cell winding $n$ times. The modulo-2 map $H_2(X,\matZ) \to H_2(X,\matZ_2)$ sends 1 to $(0,0,1)$ or $(1,1,1)$, depending on whether $n/2$ is even or odd. Suppose $n/2$ is even. Up to symmetries we have six choices for $w_2$: 1. $(0,0,0)$ leads to a spin manifold $D_n^{00}$; 2. $(1,0,0)$ leads to a non-spin even manifold $D_n^{10}$; 3. $(1,1,0)$ leads to a non-spin even manifold $D_n^{20}$; 4. $(0,0,1)$ leads to a non-spin odd manifold $D_n^{01}$; 5. $(1,0,1)$ leads to a non-spin odd manifold $D_n^{11}$; 6. $(1,1,1)$ leads to a non-spin odd manifold $D_n^{21}$. To distinguish them, we look at coverings determined by non-normal subgroups $H$ of order two. Up to conjugacy, there are only two such groups, generated by $a$ and $b$. Thus we get two coverings. As above, we see that the number of spin coverings of $D_n^{ij}$ is $2-i$, and we are done.
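These homology groups can be cross-checked by a short cellular calculation. The following sketch is ours and is not part of the original argument; it uses the chain complex of the presentation polyhedron of $\calD_{2n}$ (the groups are invariant under the 3-deformations used above), whose second boundary map records the exponent sums of the relators $a^2$, $b^2$, $(ab)^n$: $$\begin{aligned} \partial_2 : \matZ^3 \to \matZ^2, \qquad \partial_2(x,y,z) & = (2x+nz,\ 2y+nz), \\ H_2(X,\matZ) \isom \ker \partial_2 & \isom \matZ, \\ H_2(X,\matZ_2) \isom \ker (\partial_2 \otimes \matZ_2) & \isom \left\{\begin{array}{ll} \matZ_2 + \matZ_2 + \matZ_2 & {\rm \ if\ } n {\rm \ is \ even,} \\ \matZ_2 + \matZ_2 & {\rm \ if\ } n {\rm \ is \ odd.} \end{array}\right.\end{aligned}$$ Indeed, modulo 2 the map is $(x,y,z)\mapsto (nz,nz)$. A generator of $\ker\partial_2$ is $(-n/2,-n/2,1)$ for $n$ even and $(-n,-n,2)$ for $n$ odd; its reduction modulo 2 recovers the images $(0,0,1)$ or $(1,1,1)$, and $(1,1)$, of the modulo-2 maps used above.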
When $n/2$ is odd the discussion is the same, except that $(1,0,1)$ and $(1,0,0)$ are swapped: 1. $(1,0,1)$ leads to a non-spin even manifold $D_n^{10}$; 2. $(1,0,0)$ leads to a non-spin odd manifold $D_n^{11}$. Finally, the same arguments show that the universal covering of each such manifold is spin, since $H_2(\tilde X,\matZ_2)$ has a basis generated by spheres which cover the elements in $H_2(X,\matZ_2)$ an even number of times. Such a manifold is still a graph manifold generated by $\calS_0$, and thus it must be $\#_k(S^2\times S^2)$. Outline of the proof of Theorem \[main:teo\] {#outline:subsection} -------------------------------------------- Theorem \[main:teo\] says that $c(M)=0$ if and only if $M=M'\#_h\matCP^2$ for some graph manifold $M'$ generated by $\calS_0$ and some integer $h$. It is easy to see that every manifold of type $M'\#_h\matCP^2$ indeed has complexity zero using the following result. (A more detailed proof will be given in Section \[operations:section\], see Theorem \[easy:teo\].) \[bubble:fig\] \[bubble:prop\] Let a compact orientable 4-manifold $M$ collapse onto a simple polyhedron $X\subset \interior{M}$ without boundary. A shadow $DX$ for the double $DM$ of $M$ is constructed from $X$ by adding a bubble on each region as in Fig. \[bubble:fig\]. By Proposition \[graph2:prop\] we may suppose that $X$ is locally flat. We have two mirror copies $X_1$ and $X_2$ of $X$ inside $DM$. The complement of a regular neighborhood of $X_1$ in $DM$ collapses onto $X_2$. Take one point $x$ inside each region of $X_1$. Since $M$ collapses onto $X$, for each $x$ there is a natural properly embedded 2-disc $D\subset M$ intersecting $X_1$ in $x$. Its double gives a 2-sphere $S_x\subset DM$. Let $X_1'$ be $X_1$ plus the union of all these spheres $S_x$, one for each region of $X_1$. The polyhedron $X_1'$ intersects $X_2$ transversely in one point in each region of $X_2$.
Therefore the complement of a regular neighborhood of $X_1'$ in $DM$ collapses onto a 1-dimensional subpolyhedron of $X_2$. Thus this complement is made of 3- and 4-handles. To get a shadow it remains to perturb the double points $x$. This can be done as in Fig. \[perturb:fig\] below. The resulting polyhedron $DX$ is simple and is thus a shadow of $DM$. The result of the perturbation is that $DX$ is $X$ plus one bubble on each region. Let a compact 4-manifold $M$ collapse onto a simple polyhedron $X$ with $n$ vertices. We have $c(DM)\leqslant n$. Bubbles do not add vertices to a simple polyhedron. Therefore the shadow $DX$ for $DM$ has $n$ vertices. Proposition \[graph:prop\] implies that every graph manifold generated by $\calS_0$ has complexity zero. A shadow for $\matCP^2$ is also easily described (a projective line, which is homeomorphic to $S^2$). Finally, complexity is subadditive on connected sums, that is $$c(M\#N) \leqslant c(M) + c(N)$$ and it is hence clear that every manifold $M'\#_h\matCP^2$ in Theorem \[main:teo\] has complexity zero. Proving that these are the only manifolds we can get is considerably harder. In some sense, this result is quite surprising, because there are many complicated shadows without vertices of closed manifolds that are not of the type prescribed by Proposition \[bubble:prop\]. Many of them do not contain bubbles at all. For instance, let $X$ be the union of two (real) projective planes with an annulus connecting two non-trivial loops as in Fig. \[example:fig\]. It is easy to see that such a polyhedron is a shadow of the manifold $C_2^1$ introduced in Proposition \[finite:prop\]. However, it does not contain bubbles. \[example:fig\] The point is that there are various non-trivial moves that relate shadows of the same manifolds. The ones that we use here are collected in Fig. \[move\_all:fig\] below (or equivalently Fig. \[thickening:fig\]). For instance, using move (5) we transform the polyhedron $X$ from Fig.
\[example:fig\] into a projective plane with a bubble, which is indeed a shadow of the type prescribed by Proposition \[bubble:prop\]. Note that the graphs in the moves have (half-)integers decorating the edges. A shadow has a half-integer, called the *gleam*, decorating each region. Gleams make 4-dimensional thickenings much more complicated than 5-dimensional ones. Each of the listed moves can be applied only in the presence of appropriate gleams. The core proof of Theorem \[main:teo\] consists of showing that every shadow $X$ without vertices of a closed 4-manifold can be transformed into a nice shadow with bubbles as in Proposition \[bubble:prop\] by means of the moves listed in the pictures. When we find a shadow with a bubble on each region we can conclude that $M$ is a graph manifold generated by $\calS_0$. (Bubbles of course have appropriate gleams.) In the transformations, we sometimes need to remove some $\matCP^2$-summands. To find the appropriate moves that transform a given complicated shadow $X$ into a nice shadow with bubbles we adapt to this setting a technique of Neumann and Weintraub [@Neu]. Neumann and Weintraub proved that a plumbing of spheres plus a 4-handle can only give rise to connected sums of $S^2\times S^2$ and $\matCP^2$. The point was that the boundary of such a plumbing is forced to be $S^3$ (in order for a 4-handle to be attached). The plumbing describes $S^3$ as a graph manifold (two solid tori connected by a chain of products $T\times [0,1]$). Since $S^3$ is a “simple” 3-manifold, a “complicated” description of $S^3$ as a graph manifold must simplify somewhere. Luckily, the simplification of the boundary graph manifold translates into a simplification of the plumbing, and they may proceed by induction. We apply the same procedure here. Let $X$ be a complicated shadow without vertices of a closed 4-manifold $M$.
The boundary $\partial N(X)$ of the thickening of $X$ must be homeomorphic to $\#_h(S^2\times S^1)$, in order for the 3- and 4-handles to be attached. This is a very restrictive condition. As noted by Costantino and Thurston [@CoThu], the subdivision of $X$ into fundamental pieces described by Proposition \[simple:prop\] induces a decomposition of $\partial N(X)$ as a graph manifold. Since $\#_h(S^2\times S^1)$ is relatively “simple”, the description as a graph manifold must simplify somewhere. Ideally, this simplification translates into a move that transforms $X$ into a simpler shadow for $M$, and we proceed by induction. Unfortunately, not all simplifications translate from $\partial N(X)$ to $X$, and more work has to be done. Throughout the proof we use an approach similar to the one introduced in [@MaPe]. Namely, we extend the notion of shadows from closed manifolds to manifolds bounded by copies of $S^2\times S^1$: we call such a manifold a *block*. When simplifying $X$, we sometimes discard some blocks that belong to $\calS_0$. Shadows {#shadows:section} ======= In this section we recall Turaev’s definition of shadow [@Tu0; @Tu]. We then focus on manifolds whose boundary is a (possibly empty) union of copies of $S^2\times S^1$, which we call *blocks*. We then construct shadows for all the blocks contained in $\calS_0$, and for $\matCP^2$. Shadows {#shadows} ------- Let $M$ be a compact oriented 4-manifold (possibly with boundary) and $L\subset\partial M$ a (possibly empty) framed link. \[properly:defn\] A *properly embedded* simple polyhedron $X$ in $(M,L)$ is a simple polyhedron $X\subset M$ such that $\partial X = X\cap \partial M = L$ and $X$ is locally flat in $M$, *i.e.* it is locally embedded as $Q\times \{0\} \subset D^3\times D^1$ where $Q\subset D^3$ is one of the models of Fig. \[shadow:fig\]. \[horizontal:rem\] Let $X$ be a properly embedded simple polyhedron in a pair $(M,L)$.
The boundary $\partial N(X)$ of a regular neighborhood $N(X)$ of $X$ has a *vertical* part $\partial_{\rm vert}N(X) = N(X)\cap\partial M$, consisting of some solid tori, and a *horizontal* part $\partial_{\rm hor}N(X) = \overline {\partial N(X)\setminus \partial M}$. We will often use the following terminology. A *1-handlebody* is a (possibly disconnected) oriented 4-manifold made of 0- and 1-handles. Every connected component of a 1-handlebody is homeomorphic to either $D^4$ or the boundary-connected sum of some copies of $D^3\times S^1$. Gleams ------ Let $X$ be a simple polyhedron properly embedded in some pair $(M,L)$. Every region of $X$ is naturally equipped with a half-integer called *gleam*, defined by Turaev in [@Tu]. We recall its definition here. \[two-component:fig\] The singular part of $X$ thickens to a 1-handlebody. The rest of $X$ consists of some regions $f_1, \ldots, f_k$: each $f_i$ thickens to a $D^2$-bundle over $f_i$, see Fig. \[two-component:fig\]-(1). Take one $f=f_i$. The gleam of $f$ is defined by comparing this disc bundle with the interval bundle over $\partial f$ induced by $X$, see Fig. \[two-component:fig\]-(2). This is done as follows. The boundary of the $D^2$-bundle $B$ over $f$ consists of a horizontal part $\partial_{\rm hor} B$, an $S^1$-bundle over $f$, and a vertical part $\partial_{\rm vert}B$, the $D^2$-bundle over $\partial f$. The 3-manifold $\partial_{\rm hor}B$ is oriented as the boundary of $B$, which is in turn oriented since $M$ is. Fix a section $s$ of the $S^1$-bundle $\partial_{\rm hor}B$ over $f$ and an orientation on the $S^1$-fiber. The section $s$ induces on each boundary torus $T_i$ of $\partial_{\rm hor}B$ a homology basis $(\mu_i,\lambda_i)$ such that $\lambda_i$ is the oriented fiber and $\mu_i$ is contained in $\partial s$ and oriented so that $(\mu_i,\lambda_i)$ is a positive basis (with respect to the orientation on $T_i$ induced by that of $\partial_{\rm hor}B$).
Let $\gamma_i$ be one component of $\partial f$. If $\gamma_i$ is a component of $L$, the framing of $L$ induces a trivial $D^1$-subbundle of the $D^2$-bundle over $\gamma_i$. If $\gamma_i$ is not in $L$, there is a $D^1$-subbundle on $\gamma_i$ induced by $X$, which might be twisted: see Fig. \[two-component:fig\]-(2). In both cases we get an $S^0$-subbundle of the $S^1$-bundle $\partial_{\rm hor}B$ over $\partial f$. If the $S^0$-bundle is trivial, it consists of two parallel curves which are homologically described as $\mu_i+e_i\lambda_i$ for some integer $e_i$. If the bundle is twisted, it consists of one curve, homologically described as $2\mu_i + \bar e_i\lambda_i$ for some odd integer $\bar e_i$. In this case we set $e_i = \bar e_i/2$. If $f$ has at least one boundary component, the *gleam* of $f$ is defined as $\sum e_i$. (It does not depend on the chosen section and orientation on the $S^1$-fiber.) When $X=f$ is a closed surface, the gleam is defined as the Euler number $e$ of the $S^1$-fibration over $X$. If $X$ is orientable, this equals the self-intersection $[X]\cdot [X]$. Let a region $f$ of $X$ be *odd* or *even* if the number of twisted $D^1$-bundles on $\partial f$ is respectively odd or even. (This notion depends only on $X$ and not on its embedding.) Note that the gleam of $f$ is an integer or a half-odd, depending on whether $f$ is even or odd. \[orientation:rem\] If the orientation of $M$ is switched, all gleams change sign. \[clockwise:rem\] The framing of $L$ determines the gleams of the adjacent faces. If we change the framing of a component of $L$ by a clockwise twist, the gleam of the adjacent face of $X$ changes by $+1$. Shadows {#shadows-1} ------- The following definition is due to Turaev. A *shadow* is a simple polyhedron with boundary equipped with an integer (resp. half-odd) decorating each even (resp. odd) region. The discussion above shows that a simple polyhedron $X$ properly embedded in a pair $(M,L)$ is naturally a shadow.
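In summary (this compact restatement of Turaev’s definition is ours), if $t(f)$ denotes the number of twisted $D^1$-bundles along $\partial f$, then $$ {\rm gleam}(f) = \sum_i e_i \ \in\ \left\{\begin{array}{ll} \matZ & {\rm \ if\ } t(f) {\rm \ is \ even,} \\ \matZ + \frac 12 & {\rm \ if\ } t(f) {\rm \ is \ odd,} \end{array}\right. $$ since each twisted boundary component contributes a half-odd $e_i = \bar e_i/2$ and each untwisted one an integer $e_i$.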
A converse holds. We say that the pair $(M,L)$ is a *thickening* of $X$ if $M$ collapses onto $X$. \[Turaev:prop\] Every shadow has a unique thickening up to homeomorphism. Recall that every homeomorphism is implicitly assumed piecewise-linear. The boundary $\partial M$ of a thickening decomposes into a horizontal and a vertical part, see Remark \[horizontal:rem\]. Blocks {#blocks:subsection} ------ The only pairs $(M,L)$ we consider in this paper are the following. \[block:defn\] A *block* is a compact 4-manifold $M$ with (possibly empty) boundary made of some copies of $S^2\times S^1$. A *framed block* is a pair $(M,L)$ where $M$ is a block and $L$ consists of one fiber $\{pt\}\times S^1$ on each boundary component, with some framing. The link $L$ of a framed block $(M,L)$ is in fact determined up to isotopy by the block $M$, but its framing is not. The notion of shadow of a closed manifold was introduced by Turaev in [@Tu]. We extend it to blocks, in the spirit of [@MaPe]. \[shadow:defn\] A properly embedded simple polyhedron $X$ in a block $(M,L)$ is a *shadow of $(M,L)$* if $M$ is obtained from a regular neighborhood of $X\cup\partial M$ by adding $3$- and $4$-handles. When $M$ is closed, the link $L$ is empty and we get Turaev’s definition. \[1:handlebody:rem\] A properly embedded simple polyhedron $X$ in $(M,L)$ is a shadow of $(M,L)$ if and only if $M\setminus \interior{N(X)}$ is a 1-handlebody. A well-known result of Laudenbach and Poenaru together with Proposition \[Turaev:prop\] shows that a shadow of a closed 4-manifold determines the manifold. This result can be extended to blocks. \[unique:prop\] Let $X$ be a shadow of some framed block $(M,L)$. The framed block is determined by the thickening $(N(X),L)$ of $X$, and hence by $X$ itself. The shadow $X$ determines its thickening $(N(X),L)$ by Proposition \[Turaev:prop\]. The vertical boundary $\partial_{\rm vert}N(X)$ consists of one solid torus $V_i$ fibering over each component $\gamma_i$ of $\partial X$.
We can reconstruct the full boundary $\partial M$ by attaching a mirror copy $V_i'$ of $V_i$ along $\partial V_i$, so that $V_i\cup V_i'\isom S^2\times S^1$, see Fig. \[determina:fig\]. \[determina:fig\] The regular neighborhood $R= N(X\cup V_1'\cup \ldots \cup V_k') = N(X\cup\partial M)$ in $M$ is uniquely determined by collaring each $V_i'$. The complement of $R$ in $M$ consists of 3- and 4-handles: by Laudenbach-Poenaru’s theorem [@LaPo] the manifold $M$ does not depend on the way these handles are attached. Finally, the link $L$ is $\partial X$ and its framing is determined by the gleams of the incident faces, see Remark \[clockwise:rem\]. Proposition \[unique:prop\] talks about uniqueness. Actually, its proof also shows the following existence result. Recall that the boundary of a connected 1-handlebody is homeomorphic to $\#_h (S^2\times S^1)$, for some $h$. \[bordo:prop\] Let $X$ be a shadow. It is the shadow of some block $(M,L)$ if and only if the boundary $\partial N(X)$ of its thickening is homeomorphic to $\#_h (S^2\times S^1)$ for some $h\geqslant 0$. \[abuse:rem\] Let $X$ be a shadow of some framed block $(M,L)$. By modifying the gleams on the regions incident to $L$ we get a shadow of the same block $M$, with a possibly different framing $L'$, see Remark \[clockwise:rem\]. With a little abuse we therefore sometimes omit the gleams on these regions, and call the resulting partially decorated polyhedron a *shadow* of the (unframed) block $M$. (The unframed link $L$ is determined by $M$, so we also omit it.) Examples {#examples:subection} -------- The 4-sphere has a shadow without vertices. \[sphere:prop\] The 2-sphere with gleam 0 is a shadow for $S^4$. Its thickening is $S^2\times D^2$. By adding a 3- and a 4-handle we get $S^4$. Complex projective space and the blocks in $\calS_0$ have shadows without vertices. \[plane:prop\] Any complex line is a shadow for $\matCP^2$. It is a 2-sphere with gleam 1. 
The complement of an open regular neighborhood is a disc. The gleam equals its self-intersection number. We turn to the blocks in $\calS_0$. \[blocks:prop\] The (unframed) blocks $$M_{11}, M_2, M_{111}, M_{12}, M_3, N_1, N_2, N_3$$ have shadows homeomorphic to (respectively) $$Y_{11}, Y_2, Y_{111}, Y_{12}, Y_3, D^2, A^2, P^2.$$ It is easy to find a natural proper embedding of each polyhedron in the corresponding block. The complement (of an open regular neighborhood) of each polyhedron is then easily seen to collapse onto a 1-dimensional polyhedron: this implies that it is a 1-handlebody; we are hence done by Remark \[1:handlebody:rem\]. As an example, let us denote by $P^3$ the 3-dimensional pair-of-pants, *i.e.* the 3-sphere $S^3$ minus three open balls. We have $M_{111} = P^3\times S^1$. Let $Y$ be the cone over 3 points. The polyhedron $Y_{111}$ is homeomorphic to $Y\times S^1$. It is easy to visualize $Y_{111}$ as a shadow of $M_{111}$. Embed $Y$ inside $P^3$ as in Fig. \[spider:fig\]. Note that $P^3\setminus \interior{N(Y)} \isom D^3$. Therefore $M_{111}\setminus \interior{N(Y_{111})} \isom D^3\times S^1$, a 1-handlebody. \[spider:fig\] Operations with shadows {#operations:section} ======================= Two blocks can be combined to produce a new block in two ways: by an internal connected sum, or by glueing two boundary components (the latter operation is called an *assembling*, following the terminology of [@MaPe]). We show here how both these operations can be easily translated into some moves on shadows. An important feature of these moves is that they do not produce any new vertex. We recover another proof of the easy part of Theorem \[main:teo\], namely that every manifold of type $M'\#_h\matCP^2$ (with $M'$ graph manifold generated by $\calS_0$) has complexity zero. (Another proof was given in Subsection \[outline:subsection\].) 
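The counting behind this second proof can be stated in one line (our restatement, using the subadditivity of complexity under connected sums noted in Subsection \[outline:subsection\]): $$ c(M'\#_h\matCP^2) \ \leqslant\ c(M') + h\, c(\matCP^2) \ =\ 0, $$ since the blocks generating $M'$ and the shadow of $\matCP^2$ have no vertices, and the operations below produce none.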
Connected sum ------------- \[sum:fig\] A *connected sum* in a (possibly disconnected) framed block $(M,L)$ consists of removing the interiors of two $n$-discs and identifying the new boundary spheres via an orientation-reversing map. (We use this slightly more general definition instead of the usual one, where $M$ has two connected components each containing one ball.) \[sum:prop\] The move in Fig. \[sum:fig\] transforms a shadow $X_1$ of some framed block $(M_1,L_1)$ into a shadow $X_2$ of some other framed block $(M_2,L_2)$, and vice versa. The pair $(M_2,L_2)$ is a connected sum of $(M_1,L_1)$. Consider the 4-dimensional thickenings $N(X_1)$, $N(X_2)$ of $X_1$, $X_2$. Since the gleam of the disc is zero, the portion on the right embeds in a three-dimensional slice, *i.e.* in a 3-disc $D^3\subset N(X_2)$. The move in Fig. \[sum2:fig\] does not change the thickening of $X_2$. Therefore $N(X_2)$ is obtained from $N(X_1)$ by adding a 1-handle. This easily implies the assertion. \[sum2:fig\] Immersed shadows ---------------- An *immersed shadow* is a properly embedded polyhedron $X$ in $(M,L)$ which is everywhere simple, except at finitely many double points. More precisely, the link of every point $x$ of $X$ is either a circle with three radii, a circle with a diameter, a circle, a segment, or two circles. We require implicitly as above that $X$ be locally flat, *i.e.* the star of each point is standardly embedded. The first four types must be embedded in a 3-dimensional slice as in Fig. \[shadow:fig\], and the new type is embedded as two transverse discs intersecting in $x$. An immersed shadow $X$ is also equipped with gleams. It is naturally the image of a shadow $\widetilde X$ along a map $\widetilde X \to X$ which is everywhere injective except at the double points. The regular neighborhood of $X$ in $M$ can be naturally pulled back to an abstract regular neighborhood $N(\widetilde X)$ of $\widetilde X$, which induces some gleams on $\widetilde X$.
These gleams can then be projected to $X$. \[perturb:lemma\] Every double point of $X$ can be locally perturbed as in Fig. \[perturb:fig\], with the gleams changed as shown (there are two possible moves). The move does not change the regular neighborhood of the polyhedron. Locally at the double point, the polyhedron $X$ consists of two transverse discs in $D^4$. Then $X$ intersects $S^3 = \partial D^4$ in a Hopf link. The move substitutes the two transverse discs with $A\cup D$, where $A\subset S^3$ is an annulus spanning the Hopf link and $D\subset D^4$ is a properly embedded 2-disc intersecting the core of $A$ in $\partial D$. Since the core of $A$ is an unknot in $S^3$, the disc $D$ is obtained simply by pushing inside $D^4$ a spanning disc in $S^3$. The regular neighborhood does not change, because the removed piece (two transverse discs) and the new one $D\cup A$ both thicken to a 4-disc. There are two non-isotopic spanning annuli for the Hopf link, and they give rise to non-isotopic constructions. The gleam of $D$ is $\pm 1$ depending on the choice of $A$. The gleams of the incident faces change correspondingly by $\mp 1$. The gleams were calculated in [@CoThu]. \[perturb:fig\] The perturbation is the analogue, in half the dimensions, of perturbing a 4-valent vertex inside a surface (note that there are two possible moves here as well). Assembling {#assembling:subsection} ---------- Let $(M,L)$ be a (possibly disconnected) framed block. Let $N_1$ and $N_2$ be two boundary components of $M$. Each component contains a framed knot. An *assembling* of $(M,L)$ is the operation of identifying $N_1$ and $N_2$ via a map $\psi$ which preserves the framed knots. The result of this operation is a new framed block $(M',L')$. We now investigate the effect of this operation on shadows. We will need the following result, proved in [@LaPo].
\[1:handlebody:lemma\] Every 2-sphere $\Sigma\subset \partial H$ in the boundary of a $1$-handlebody $H$ bounds a properly embedded 3-disc $D^3\subset H$ such that $H\setminus \interior {N(D^3)}$ is a 1-handlebody. This is an easy consequence of Laudenbach-Poenaru’s theorem [@LaPo] which states that every self-homeomorphism of $\partial H$ extends to $H$. Recall that the 1-handlebody need not be connected. \[assembling:fig\] \[assembling:prop\] The move in Fig. \[assembling:fig\] transforms a shadow $X_1$ of some framed block $(M_1,L_1)$ into a shadow $X_2$ of some other framed block $(M_2,L_2)$, and vice versa. The pair $(M_2,L_2)$ is an assembling of $(M_1,L_1)$. The move in Fig. \[perturb2:fig\] transforms $X_2$ into an immersed simple polyhedron (with gleams) $Q\cup\Sigma$. Here $\Sigma$ is a 2-sphere with gleam zero. The regular neighborhoods of $X_2$ and $Q\cup\Sigma$ are the same by Lemma \[perturb:lemma\], so we may work with $Q\cup\Sigma$ instead of $X_2$. \[perturb2:fig\] Suppose $X_1$ is a shadow of some framed block $(M_1,L_1)$. The polyhedron $Q$ is obtained by gluing two components of $\partial X_1$ contained in two components $N'$, $N''$ of $\partial M_1$. This gluing map can be extended to a unique homeomorphism between $N'$ and $N''$ which preserves the framing. Let $(M_2,L_2)$ be the result of such an assembling. We have a natural embedding $Q\subset M_2$. The components $N'$ and $N''$ glue to form a submanifold $N\subset M_2$ homeomorphic to $S^2\times S^1$ and intersecting $Q$ in $\{pt\}\times S^1$. We also embed $\Sigma$ as $S^2\times \{pt\}$, see Fig. \[converse:fig\]-(1). Note that $N\setminus \interior{N(Q\cup \Sigma)}$ is homeomorphic to $D^3$. Therefore $M_2\setminus \interior{N(Q\cup\Sigma)}$ is obtained by adding a 1-handle to $M_1\setminus \interior{N(X_1)}$. Since the latter is a 1-handlebody, so is the former. By Lemma \[perturb:lemma\] the regular neighborhood $N(Q\cup\Sigma)$ is isotopic to $N(X_2)$.
Therefore $X_2$ is a shadow of $(M_2,L_2)$. \[converse:fig\] The converse is proved similarly. Given a shadow $X_2$ of $(M_2,L_2)$, we transform it into $Q\cup \Sigma$. The regular neighborhood $N(Q\cup\Sigma)$ has a 3-dimensional slice $N$ as in Fig. \[converse:fig\]-(2) homeomorphic to $S^2\times S^1$ minus an open ball. The boundary $\partial N$ is a 2-sphere in $\partial N(X)$. Since $M_2\setminus\interior{N(Q\cup\Sigma)}$ is a 1-handlebody $H$, it contains a properly embedded 3-disc $D^3$ with $\partial D^3 = \partial N$, such that $H\setminus \interior{N(D^3)}$ is again a 1-handlebody by Lemma \[1:handlebody:lemma\]. Therefore $N\cup D^3\isom S^2\times S^1$ and by cutting $(M_2,L_2,Q)$ along $N\cup D^3$ we get a triple $(M_1,L_1,X_1)$, with $X_1$ a shadow for $(M_1,L_1)$ as required. Filling ------- The block $D^3\times S^1$ plays a particular role here. We call the assembling of a framed block $(M,L)$ and a framed $D^3\times S^1$ along some component $N$ of $\partial M$ a *filling* of $(M,L)$. This operation consists of attaching a 3-handle and a 4-handle to $N$, so by the Laudenbach-Poenaru theorem [@LaPo], the filled block depends only on $(M,L)$ and $N$. In Section \[blocks:subsection\] we have described some shadows of all the blocks involved in Theorem \[main:teo\], except $D^3\times S^1$. In some sense, the natural shadow for this block is the *empty* shadow, whose complement in $D^3\times S^1$ is indeed made of 3- and 4-handles! We adapt Proposition \[assembling:prop\] to this particular situation. \[filling:fig\] \[filling:prop\] The move in Fig. \[filling:fig\] transforms a shadow $X_1$ of some framed block $(M_1,L_1)$ into a shadow $X_2$ of some framed block $(M_2,L_2)$, and vice versa. The block $(M_2,L_2)$ is a filling of $(M_1,L_1)$. As suggested by Fig. \[homeomorphic:fig\], there is a homeomorphism between $M_1\setminus N(X_1\cup\partial M_1)$ and $M_2\setminus N(X_2\cup\partial M_2)$.
Therefore $X_1$ is a shadow if and only if $X_2$ is, and it follows easily that $M_2$ is obtained by filling $M_1$. \[homeomorphic:fig\] Complexity zero --------------- We can now prove again the easy half of Theorem \[main:teo\] (another proof was given in Section \[outline:subsection\]). \[easy:teo\] Let $M$ be a graph manifold generated by $\calS_0$ and $h$ an integer. The manifold $M\#_h \matCP^2$ has complexity zero. By Proposition \[smaller:prop\], the manifold $M$ is a connected sum of $h\geqslant 0$ copies of $S^3\times S^1$ and $k\geqslant 0$ graph manifolds generated by $$\calS_0' = \big\{M_2, M_{111}, M_{12}, M_3, N_1, N_3\big\}.$$ If $h=k=0$ then $M=S^4$ which has a shadow without vertices, see Proposition \[sphere:prop\]. The blocks in $\calS_0'$ and $\matCP^2$ also have shadows without vertices, see Propositions \[plane:prop\] and \[blocks:prop\]. Assemblings and connected sums translate into moves for shadows that do not produce vertices by Propositions \[sum:prop\] and \[assembling:prop\]. It remains to show that every closed oriented 4-manifold having complexity zero is of this type. The rest of the paper is devoted to the proof of this non-trivial fact. Moves {#moves:section} ===== We describe here some moves that relate two shadows of the same block. Some basic moves are well-known: these were discovered by Turaev and are shown in Fig. \[move\_all:fig\]. The moves shown in Fig. \[mossa\_all:fig\] are new and more useful in our vertex-free context: they are proved in this section. They are more efficiently encoded in Fig. \[thickening:fig\]. \[move\_all:fig\] \[moves:prop\] The moves in Fig. \[move\_all:fig\] relate two shadows $X_1, X_2$ of the same block $(M,L)$. As shown by Turaev, the shadows $X_1$ and $X_2$ have homeomorphic thickenings. Therefore the blocks are also homeomorphic by Proposition \[unique:prop\]. \[mossa\_all:fig\] \[mosse:prop\] The moves in Fig. \[mossa\_all:fig\] relate two shadows $X_1, X_2$ of the same block $(M,L)$.
The annular regions of both portions in Fig. \[mossa\_all:fig\]-(1) have gleam zero. Therefore both portions may be embedded in a 3-dimensional slice $D^3$ as in the figure. Their regular neighborhoods are the same since they coincide in $D^3$. (Alternatively, use Fig. \[move\_all:fig\]-(2) a couple of times.) \[mossa2\_proof:fig\] The left portion in Fig. \[mossa\_all:fig\]-(2) is the perturbation of the left portion $Q\cup D$ in Fig. \[mossa2\_proof:fig\], see Fig. \[perturb:fig\]. The portion $Q$ can be embedded in a 3-dimensional slice $D^3$ because the disc has gleam zero, and the disc $D$ intersects the slice in an arc, as in the figure. Apply the move in Fig. \[sum2:fig\] as in Fig. \[mossa2\_proof:fig\]-right. The result is the union $D_1\cup D_2\cup D$ of three transverse discs. By perturbing the two intersection points $D_1\cap D$ and $D_2\cap D$ we get Fig. \[mossa\_all:fig\]-(2)-right. (Alternatively, the move may also be obtained as a combination of the basic moves in Fig. \[move\_all:fig\].) The portion of shadow in Fig. \[mossa\_all:fig\]-(3)-left can also be drawn as in Fig. \[triplet:fig\]-left, with a $+1$-gleamed disc attached along the red circle. We can apply the moves shown in Fig. \[triplet:fig\]. In the resulting portion the disc delimited by the red circle has gleam $+1-1/2 = 1/2$. The new portion can be described as in Fig. \[triplet2:fig\]-left, with an annulus attached to the red circle. A final step is then shown in Fig. \[triplet2:fig\]. \[triplet:fig\] \[triplet2:fig\] The move in Fig. \[mossa\_all:fig\]-(4) follows from the one in Fig. \[mossa\_all:fig\]-(3): it suffices to temporarily add an auxiliary annulus in order to transform the portion in Fig. \[mossa\_all:fig\]-(4)-left as in Fig. \[mossa\_all:fig\]-(3)-left. \[mossa5\_proof:fig\] \[mossa6\_proof:fig\] \[mossa6\_proof2:fig\] The move in Fig. \[mossa\_all:fig\]-(5) is constructed in Fig. \[mossa5\_proof:fig\] as a composition of the move in Fig.
\[move\_all:fig\]-(2) and its inverse. The move in Fig. \[mossa\_all:fig\]-(6) is constructed similarly: in order to apply Fig. \[mossa5\_proof:fig\] we first slide away the vertical annulus as shown in Fig. \[mossa6\_proof:fig\] (only the attaching of the annulus is shown, in red). Finally, note that a $+1$-gleamed disc is attached to the rightmost Möbius band, producing a projective plane: the projective plane and the two incident regions are drawn in Fig. \[mossa6\_proof2:fig\]-left. We can turn the red segment counterclockwise as in Fig. \[mossa6\_proof2:fig\] and get a portion as in Fig. \[mossa\_all:fig\]-(2)-right, as required. Shadows without vertices {#without:section} ========================= As shown in Section \[simple:subsection\], a simple polyhedron without vertices may be described via a graph. A shadow $X$ without vertices is thus encoded by a graph whose edges are decorated with half-integers. We briefly summarize here the moves introduced in the previous section using such decorated graphs. The boundary $\partial N(X)$ of the thickening of $X$ is a closed 3-manifold. As proved by Costantino and Thurston [@CoThu], the graph correspondingly describes a decomposition of $\partial N(X)$ as a graph manifold. Such a decomposition is described at the end of this section. Decorated graph {#decorated:subsection} --------------- Let a graph with vertices as in Fig. \[vertices:fig\] describe a simple polyhedron without vertices. Let $e$ be an edge of the graph. If precisely one of its endpoints is incident to a vertex of type (5) as an unmarked edge, then the *parity* of $e$ is odd. Otherwise, it is even. A *decorated graph* is a graph whose vertices are as in Fig. \[vertices:fig\], and whose edges are decorated with half-integers. The half-integer decorating an edge $e$ is an integer or a half-odd integer, depending on the parity of $e$. A graph determines a simple polyhedron $X$. Note that an edge of the graph determines a region of $X$.
(Many edges may determine the same region.) A decorated graph determines a shadow: the gleam of a region is the sum of all the half-integers decorating the edges that determine that region. The parity of the edges was defined above in order to be consistent with the parity of the regions of $X$, so the result is indeed a shadow. Every simple shadow $X$ without vertices can be described by some decorated graph in this way. Such a graph is not unique: some moves modify the graph while leaving the shadow unchanged, see Fig. \[mosse\_innocue:fig\]. \[mosse\_innocue:fig\] \[blocks:fig\] There are two types of 1-valent vertices and , and we call them respectively *flat* and *fat*. A flat vertex denotes a component of $\partial X$. When we want to describe a shadow $X$ of some (unframed) block $M$, we may omit decorations on the edges incident to flat vertices, according to Remark \[abuse:rem\]. As an example, the graphs in Fig. \[blocks:fig\] describe the shadows of the blocks in $\calS_0$ and of $\matCP^2$, see Propositions \[plane:prop\] and \[blocks:prop\]. Moves {#moves} ----- The moves described in Section \[moves:section\] can be easily visualized using decorated graphs. \[thickening:fig\] The moves in Fig. \[thickening:fig\] relate two shadows $X_1, X_2$ of the same block $(M,L)$. The moves (1-6) are the ones described in Fig. \[mossa\_all:fig\]. Move (7) corresponds to two different perturbations of a double point, see Fig. \[perturb:fig\]. Move (8) follows from Fig. \[sum\_ass:fig\]-(3) below: both $X_1$ and $X_2$ are shadows of the same block, obtained from another block by drilling along the same curve. \[sum\_ass:fig\] The moves in Fig. \[sum\_ass:fig\] transform a shadow $X$ of a block $(M,L)$ into a shadow $X'$ of a block $(M',L')$, and vice versa. The block $(M',L')$ is respectively a connected sum, assembling, or filling of $(M,L)$. This corresponds to Propositions \[sum:prop\], \[assembling:prop\], and \[filling:prop\].
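The rule defining gleams from a decorated graph is purely combinatorial: the gleam of a region is the sum of the half-integers on the edges determining that region. As an illustration only (the edge-to-region map, the edge names, and the use of Python `Fraction`s to represent half-integers are our own conveniences, not notation from the paper), this bookkeeping can be sketched as follows:

```python
from collections import defaultdict
from fractions import Fraction

def region_gleams(edge_decorations, edge_region):
    """Sum the half-integer decorations of all edges that determine
    each region.  `edge_decorations` maps an edge id to a Fraction
    (an integer or half-odd value), and `edge_region` maps an edge id
    to the region it determines.  Many edges may determine the same
    region, so their decorations accumulate."""
    gleams = defaultdict(Fraction)  # Fraction() == 0
    for edge, decoration in edge_decorations.items():
        gleams[edge_region[edge]] += decoration
    return dict(gleams)

# Two edges determining the same region R1, with decorations 1/2 and -3/2:
decorations = {"e1": Fraction(1, 2), "e2": Fraction(-3, 2), "e3": Fraction(2)}
regions = {"e1": "R1", "e2": "R1", "e3": "R2"}
print(region_gleams(decorations, regions))
# → {'R1': Fraction(-1, 1), 'R2': Fraction(2, 1)}
```

Note that the parity condition on edges guarantees that each resulting gleam is an integer or half-odd integer exactly when the corresponding region requires it.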
Decomposition into pieces {#decomposition:subsection} ------------------------- Let $X$ be a shadow without vertices and $N(X)$ its thickening. As shown by Costantino and Thurston [@CoThu], there is a natural map $\pi:\partial N(X) \to X$ which is a circle fibering over the non-singular points of $X$. (Such a map might actually be extended to the whole of $N(X)$, but we only need the boundary here.) Let $G$ be a decorated graph describing $X$. Recall that such a graph determines a decomposition into pieces of $X$, and each vertex of $G$ determines a piece of $X$. \[deco:prop\] The decorated graph $G$ describes a decomposition of the closed 3-manifold $\partial N(X)$ into pieces bounded by tori, as follows. 1. Every piece $Q$ of $X$ determines a “horizontal” piece $\overline{\pi^{-1}(\interior Q)}$: its homeomorphism type depends on $Q$ and is shown in Table \[pieces:table\]. 2. Every component $C$ of $\partial X$ determines a “vertical” solid torus $\pi^{-1}(C)$.

  $Q$ (name)             $D^2$             $P^2$             $Y_2$              $Y_{111}$         $Y_{12}$    $Y_3$
  $\pi^{-1}(Q)$ (name)   $D^2\times S^1$   $P^2\times S^1$   $Y_2\timtil S^1$   $P^2\times S^1$   $(A^2,2)$   $(D^2,3,3)$

(The “Vertex” and “picture” rows of the table consist of pictures and are not reproduced here.)

\[pieces:table\] The map $\pi:\partial N(X) \to X$ is a circle bundle on non-singular points. If $Q$ is a surface, the piece $\overline{\pi^{-1}(\interior Q)}$ is the orientable circle bundle over $Q$: this holds in cases , , and . The pieces corresponding to , , and are obtained by thickening the singular edge to a product $D^3\times S^1$. We can think of $Q$ as properly embedded inside $D^3\times S^1$, so that $\overline{\pi^{-1}(\interior Q)}$ consists of the boundary $S^2\times S^1$ minus an open regular neighborhood of $\partial Q$. The curves $\partial Q$ are the closed braids in $S^2\times S^1$ shown in the table. Note that the vertices and both give rise to solid tori.
However, they are positioned differently with respect to the fibration $\pi$: their meridian is respectively vertical (*i.e.* a fiber of $\pi$) and horizontal (*i.e.* a section of $\pi$). Analogously, the vertices and both yield a piece homeomorphic to $P^2 \times S^1$, but positioned differently: the fiber $\{pt\} \times S^1$ is respectively vertical and horizontal. Reduction to very simple polyhedra {#reduction:section} ================================== This section and the following ones are entirely devoted to the proof of Theorem \[main:teo\]. We start by eliminating some types of vertices. In this section we prove the following. \[reduction:fig\] \[very:simple:teo\] Let $X$ be a shadow of a block $(M,L)$, described via a decorated graph. - Suppose the graph contains a vertex of type . The move shown in Fig. \[reduction:fig\]-(1) transforms $X$ into a shadow $X'$ of a block $(M',L')$. - Suppose the graph contains a vertex of type . The move shown in Fig. \[reduction:fig\]-(2) transforms $X$ into a shadow $X'$ of a block $(M',L')$. - Suppose the graph contains a vertex of type . One of the two moves shown in Fig. \[reduction:fig\]-(3) and (4) transforms $X$ into a shadow $X'$ of a block $(M',L')$. In all cases, the original block $(M,L)$ is obtained from $(M',L')$ by a combination of assemblings or connected sums. This result allows us to restrict our investigation to a smaller class of shadows, whose underlying polyhedron is as follows. A *very simple polyhedron* is a simple polyhedron which may be described via a decorated graph with vertices of the types shown in Fig. \[vertices2:fig\]. \[vertices2:fig\] In other words, there are no pieces of type , , and from Fig. \[vertices:fig\]: these pieces can be ruled out thanks to Theorem \[very:simple:teo\], as the following corollary shows. (The notion of graph manifold generated by $\calS_0$ extends trivially to manifolds with non-empty boundary.)
\[very:simple:cor\] Every block $(M,L)$ having a shadow without vertices is obtained via connected sums and assemblings from $(M_1,L_1)\sqcup (M_2,L_2)$ where $M_1$ is a graph manifold generated by $\calS_0$ and $(M_2,L_2)$ has a very simple shadow. (Both $M_1$ and $M_2$ may be disconnected.) Let $X$ be a shadow without vertices of $(M,L)$. It may be described as a decorated graph $G$. If $G$ is as in Fig. \[blocks:fig\], then $M$ is a graph manifold. Otherwise, suppose it contains a vertex of type , , or . Theorem \[very:simple:teo\] applies: we can perform one of the moves in Fig. \[reduction:fig\] which simplifies the graph, and we conclude by induction. The rest of the section is mainly devoted to the proof of Theorem \[very:simple:teo\]. Horizontal and vertical compressing discs ----------------------------------------- Let $X$ be a shadow of some block $(M,L)$, encoded via a decorated graph $G$. Each edge of $G$ determines a simple closed curve $\gamma$ in a region of $X$ and a torus $T = \pi^{-1}(\gamma)\subset \partial N(X)$ fibering over $\gamma$ via the natural fibration $\pi:\partial N(X) \to X$, see Section \[decomposition:subsection\]. Such a torus has a compressing disc $D$ in $\partial N(X\cup \partial M)$ because of the following general fact. \[compressing:lemma\] Every torus $T$ inside $\#_k(S^2\times S^1)$ has a compressing disc. The fundamental group of $\#_k(S^2\times S^1)$ is a free group. A free group does not contain $\matZ\times\matZ$, so the map $\pi_1(T)\to\pi_1(\#_k(S^2\times S^1))$ is not injective, and $T$ has a compressing disc by Dehn’s Lemma. Such a compressing disc may be positioned in various ways with respect to the fibration $\pi$. We will be interested only in two special cases. \[horizontal:defn\] A compressing disc $D$ for $T$ is *vertical* (resp. *horizontal*) if $\partial D$ is isotopic to a fibre (resp. a section) of the fibration $\pi:T\to\gamma$. If the compressing disc of $T$ is horizontal or vertical, we can simplify the shadow, as the following shows.
\[hv:fig\] \[disc:prop\] Suppose that $T$ has a vertical (resp. horizontal) compressing disc. The move in Fig. \[hv:fig\]-(1) (resp. (2)) transforms $X$ into a shadow $X'$ of a block $(M',L')$. The original $(M,L)$ is obtained from $(M',L')$ by assembling (resp. connected sum). Let $H$ be the 1-handlebody $M\setminus \interior{N(X\cup\partial M)}$. We push the interior of the compressing disc $D$ slightly inside $H$, keeping $\partial D$ fixed. Now $H' = H \setminus \interior{N(D)}$ is homeomorphic to $H\cup($1-handle) and is hence still a 1-handlebody. We enlarge $D$ to a disc $D'\supset D$ with $\partial D' \subset X$, see Fig. \[push:fig\]. Set $Y = X \cup D'$. We have $M\setminus \interior{N(Y\cup\partial M)} \isom H'$. If $D$ is horizontal, the disc $D'$ is attached along $\alpha$ and $Y$ is thus simple. By construction, the disc $D'$ has gleam zero. Therefore a regular neighborhood of $D'$ looks like the right portion of Fig. \[sum:fig\], with some gleams $n$ and $-n$ added to the two adjacent regions (for some integer $n$, which depends on how many times $\partial D$ winds around the fiber). We can therefore apply the inverse of the move in Fig. \[sum:fig\], and the result is as in Fig. \[hv:fig\]-(2)-right. By Proposition \[sum:prop\], the result is a shadow of some $(M',L')$ of which $(M,L)$ is a connected sum. If $D$ is vertical, the curve $\partial D$ projects to a point $x\in \gamma$ and the whole of $\partial D'$ is thus identified with $x$. That is, the disc $D'$ actually closes up to a 2-sphere $\Sigma$ which intersects $X$ transversely in $x$. We can thus apply the converse of the moves shown in Fig. \[perturb2:fig\] and Fig. \[assembling:fig\]. The result follows from Proposition \[assembling:prop\]. \[push:fig\] Note that in most cases the compressing disc is neither horizontal nor vertical, and no move is possible. Proposition \[disc:prop\] is a key tool we will use to prove Theorem \[main:teo\] inductively.
Given a decorated graph, we look for horizontal or vertical compressing discs. If found, the graph may be simplified along one of the moves in Fig. \[hv:fig\], and we are done. Finding such a compressing disc is however hard: it is sometimes necessary to first modify the decorated graph with some of the moves listed in Fig. \[thickening:fig\]. The rest of the paper is mostly devoted to fulfilling this task. Eliminate some types of vertices {#no:236:subsection} -------------------------------- Let $X$ be a shadow of a block $(M,L)$ described by a decorated graph $G$. We prove here that a vertex of type , , or gives rise to a vertical or horizontal compressing disc. \[no:2:prop\] Consider a vertex of type . Let $T_1, T_2, T_3$ be the tori lying above the three incident edges. Either there is one $T_i$ which has a horizontal compressing disc, or every $T_i$ has a vertical compressing disc. The corresponding piece of $\partial N(X\cup\partial M)$ is homeomorphic to $P^2\times S^1$. Some standard arguments in 3-dimensional topology show that at least one boundary torus $T_i$ of $P^2\times S^1$ has a compressing disc $D$ whose boundary $\partial D$ is either isotopic to a fiber (*i.e.* vertical) or to a section (*i.e.* horizontal). In the first case, the compressing disc extends fiberwise also to the two other boundary tori. The argument is as follows. Every boundary component of $P^2\times S^1$ has a compressing disc. Suppose each of them is neither horizontal nor vertical. If all discs are directed outside of $P^2\times S^1$, then $\partial (N(X\cup\partial M))\isom \#_h (S^2\times S^1)$ has a summand which is a Seifert manifold with 3 singular fibers: a contradiction [@Sei]. If one disc is directed inside, after an isotopy it intersects $P^2\times S^1$ in an essential planar surface. However, such a surface in $P^2\times S^1$ must intersect one boundary component either horizontally or vertically [@Sei], contrary to our assumptions. \[no:36:prop\] Consider a vertex of type or .
The torus $T$ lying above the incident edge has a vertical compressing disc. The corresponding piece in $\partial N(X\cup\partial M)$ is $Y_2\timtil S^1\isom (D^2,2,2)$ or $(D^2,3,3)$, see Table \[pieces:table\]. Its boundary has a compressing disc $D$, directed outward. By Dehn filling the piece along the slope $\partial D$ we thus get some summands of $\partial N(X\cup \partial M) \isom \#_k (S^2\times S^1)$. Standard arguments on Seifert manifolds show that the $p/q$-Dehn filling on the knot shown in Table \[pieces:table\] is $\#_h (S^2\times S^1)$ if and only if $p/q=\infty $, *i.e.* when the meridional disc is vertical (and $h=1$ in this case). Therefore $D$ must be vertical. The two propositions just stated imply Theorem \[very:simple:teo\]. If the decorated graph contains a vertex of type or , the torus lying above the incident edge has a vertical compressing disc and hence we can apply Fig. \[hv:fig\]-(1). The result is a move as in Fig. \[reduction:fig\]-(1,2). If it contains a vertex of type , there are three tori above the edges. Either one has a horizontal compressing disc, or all three have vertical compressing discs. The corresponding move in Fig. \[hv:fig\] applies and the result is one of the moves in Fig. \[reduction:fig\]-(3,4). (Apply Fig. \[mosse\_innocue:fig\]-left.) Try to eliminate other types of vertices ---------------------------------------- Unfortunately, there is no result analogous to Propositions \[no:2:prop\] and \[no:36:prop\] for vertices of type , , or . A partial result for the 3-valent vertex is the following. \[scoppia:fig\] \[(4):prop\] Consider a vertex of type . It determines a piece in $\partial N(X\cup\partial M)$ homeomorphic to $P^2\times S^1$. Suppose that the fiber $\{pt\}\times S^1$ bounds a disc in $\partial N(X\cup\partial M)$. The move in Fig. \[scoppia:fig\] transforms $X$ into a shadow $X'$ of some $(M',L')$ of which $(M,L)$ is a twice connected sum. The compressing disc is actually a horizontal disc in this case!
We can therefore perform the move in Fig. \[hv:fig\]-(2) and the inverse of Fig. \[sum\_ass:fig\]-(1). The sequence of moves is shown in Fig. \[scoppia\_proof:fig\]. \[scoppia\_proof:fig\] A much weaker result concerning the 2-valent vertex is the following. \[boundary:fig\] \[boundary:prop\] Consider a vertex of type . The move in Fig. \[boundary:fig\] transforms $X$ into a shadow $X'$ of some other block $(M',L')$. See Fig. \[boundary\_proof:fig\]. \[boundary\_proof:fig\] The move shown in Fig. \[boundary:fig\] changes the block dramatically and thus cannot be used to simplify shadows. (The proof shows that $M'$ is obtained from $M$ by surgery, *i.e.* by substituting an $S^1\times D^3$ with an $S^2\times D^2$.) Trees with level functions {#trees:section} ========================== We make here another step towards the proof of Theorem \[main:teo\]. According to Corollary \[very:simple:cor\], we may restrict to blocks having very simple shadows. A very simple shadow is described via a decorated graph with vertices as in Fig. \[vertices2:fig\]. In this section, we show that we may further restrict to decorated graphs that are trees equipped with a *level function*. The level function is a function on vertices which is defined below. A decorated tree equipped with such a function is a *decorated tree with levels*. We prove here the following. \[leveled:teo\] Let $X$ be a very simple shadow of a block $(M,L)$. One of the following holds. 1. A move as in Fig. \[scoppia:fig\] transforms $X$ into a shadow $X'$ of a block $(M',L')$ such that $(M,L)$ is a twice connected sum of $(M',L')$. 2. The shadow $X$ can be encoded via a decorated tree with levels. We thus get a refinement of Corollary \[very:simple:cor\].
\[leveled:cor\] Every block $(M,L)$ having a shadow without vertices is obtained via connected sums and assemblings from $(M_1,L_1)\sqcup (M_2,L_2)$ where $M_1$ is a graph manifold generated by $\calS_0$ and $(M_2,L_2)$ has a shadow encoded by a decorated tree with levels. (Both $M_1$ and $M_2$ may be disconnected.) By Corollary \[very:simple:cor\], we may restrict to very simple shadows. Let $X$ be a very simple shadow. If it may be encoded as a tree with a level function we are done. Otherwise, the move in Fig. \[scoppia:fig\] applies: the number of vertices of type decreases and we proceed by induction. The rest of this section is devoted to the definition of a level function and to the proof of Theorem \[leveled:teo\]. The level function {#leveled:subsection} ------------------ Two vertices in a graph are *adjacent* if they are joined by an edge. A sequence of distinct vertices $v_1,\ldots, v_k$ forms a *line* if $v_i$ and $v_{i+1}$ are adjacent for all $i$. A *decorated tree* is a decorated graph $T$ without cycles. Let $T$ be a decorated tree which encodes a shadow $X$ of a block $(M,L)$. A *level function* on $T$ is a function which associates to each vertex $v$ a non-negative integer $l(v)$ such that the following holds. 1. there are $k\geqslant 2$ vertices having level zero, and they form a line $v_1,\ldots, v_k$ called the *root*; 2. every vertex $v$ of type or is adjacent to precisely one vertex $v'$ with $l(v')>l(v)$; 3. on every line $w_1,\ldots, w_h$ we have $l(w_i) \leqslant \max \{l(w_1),l(w_h)\}$ for all $i$. The third condition says that whenever the level starts increasing on a line, it remains non-decreasing from then on. There is also a fourth condition which relates the function $l$ with the induced decomposition of the closed 3-manifold $\partial N(X)$. To state it we first need to introduce some terminology and prove some easy facts. Consider a decorated tree $T$ and a function $l$ fulfilling the three requirements just stated. Let $v$ be a vertex.
Define $S_v$ as the set of all vertices $v'$ such that there is a line $$v=v_1, \ldots, v_k = v'$$ with $l(v_2)>l(v_1)$. We have $l(v')>l(v)$ for every $v'\in S_v$. The set $S_v$ is non-empty precisely when $v$ is of type or . When non-empty, it contains precisely one vertex adjacent to $v$, and spans a subtree of $T$; the vertex $v$ is the only one in $T\setminus S_v$ which is adjacent to some vertex in $S_v$. This follows easily from assumptions (1), (2), and (3) above. Recall from Proposition \[deco:prop\] that $T$ also encodes a decomposition of the closed 3-manifold $\partial N(X)$. Every vertex $v$ corresponds to a 3-dimensional piece $M_v\subset \partial N(X)$ bounded by tori according to Table \[pieces:table\]. If $S$ is a set of vertices of $T$, we set $M_S = \cup_{v\in S} M_v$. Let $v$ be a vertex of type or . The manifold $M_{S_v}$ is connected and has only one boundary torus, attached to one boundary torus of $M_v$. The set $S_v$ spans a subtree; thus the corresponding pieces in $\partial N(X)$ glue to form a connected manifold $M_{S_v}$. Since $v$ is the only vertex outside $S_v$ adjacent to some vertices of $S_v$, this manifold is bounded by a single torus attached to $M_v$. When $v$ is of type or , the piece $M_v$ is a Seifert manifold, homeomorphic to either $P^2\times S^1$ or $(A,2)$. (In both cases, the Seifert fibration is unique up to isotopy and induces a fibration on the boundary tori [@Sei].) Finally, we can state the fourth and last requirement for our level function $l$. 4. for every vertex $v$ of type or , the manifold $M_{S_v}$ is a solid torus, whose meridian is attached to a section of the fibration of $M_v$. A *level function* on $T$ is a function which fulfills all the requirements (1)-(4) listed above. A *decorated tree with levels* is a decorated tree $T$ which encodes a shadow $X$ of some block $(M,L)$, equipped with a level function.
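Conditions (1)-(3) of a level function are purely combinatorial and can be checked mechanically on any finite tree. As an illustration only (the dictionary encoding of the tree and all helper names are our own conveniences, not notation from the paper), condition (3) — once the level strictly increases along a line it may never decrease again — can be tested as follows:

```python
from itertools import combinations

def respects_condition_3(tree, level):
    """Check requirement (3): on every line w_1,...,w_h we have
    l(w_i) <= max(l(w_1), l(w_h)).  Equivalently, no simple path may
    strictly increase in level and later strictly decrease.
    `tree` maps each vertex to the set of its neighbours and is
    assumed to be a connected tree; `level` maps vertices to their
    non-negative integer levels."""
    def path(u, v):
        # The unique simple path between u and v in a tree, via DFS.
        stack = [(u, [u])]
        while stack:
            x, p = stack.pop()
            if x == v:
                return p
            for y in tree[x]:
                if y not in p:
                    stack.append((y, p + [y]))
    for u, v in combinations(tree, 2):
        levels = [level[w] for w in path(u, v)]
        went_up = False
        for a, b in zip(levels, levels[1:]):
            if b > a:
                went_up = True
            if b < a and went_up:
                return False  # level went up, then down: (3) fails
    return True

# A line a - b - c with levels 0, 1, 0 has an interior maximum at b,
# violating (3); levels 1, 0, 1 are fine.
tri = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(respects_condition_3(tri, {"a": 0, "b": 1, "c": 0}))  # → False
print(respects_condition_3(tri, {"a": 1, "b": 0, "c": 1}))  # → True
```

Condition (4), by contrast, is topological rather than combinatorial, since it refers to the pieces of the decomposition of $\partial N(X)$.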
Build a level function ---------------------- Let $T$ be a decorated tree with levels, encoding a shadow $X$ of a block $(M,L)$. As the following result shows, the level function puts some serious restrictions on the decomposition of $\partial N(X)$. \[4:consequence:prop\] For every vertex $v$ of type or , the manifold $M_v\cup M_{S_v}$ is homeomorphic to either $A\times S^1$ or $D^2\times S^1$. The manifold $\partial N(X)$ is either $S^3$ or $S^2\times S^1$. The manifold $M_v$ is homeomorphic to either $P^2\times S^1$ or $(A,2)$. The manifold $M_{S_v}$ is a solid torus attached to $M_v$, whose meridian is attached to a section of the fibration of $M_v$. The fibration on $M_v$ thus extends to $M_v\cup M_{S_v}$ without creating new exceptional fibers. Therefore $M_v\cup M_{S_v}$ is either homeomorphic to $A\times S^1$ or to $(D,2)\isom D^2\times S^1$. We may inductively simplify the decomposition of $\partial N(X)$ as follows: if $v$ is of type , simply delete $M_v\cup M_{S_v}$; if it is of type , substitute it with a single solid torus. After finitely many steps we end up with a decomposition containing only solid tori, and thus $\partial N(X)$ has Heegaard genus at most 1. Since $X$ is a shadow of some block, we must have $\partial N(X) = \#_h(S^2\times S^1)$. Therefore $h$ equals 0 or 1, as required. We can finally prove Theorem \[leveled:teo\]. The very simple shadow $X$ is encoded via a decorated graph $G$, whose vertices are of type , , , or . Correspondingly, this also encodes a decomposition of $\partial N(X)\isom \#_h(S^2\times S^1)$ into pieces homeomorphic to solid tori, solid tori, $P^2\times S^1$, and $(A,2)$. The theorem follows from a slightly more general result about decompositions of $\#_h(S^2\times S^1)$ into pieces homeomorphic to solid tori, $(A,2)$, and $P^2\times S^1$. Any such decomposition yields a graph with vertices of valence 1, 2, or 3, and the notion of level function applies *as is* to this more general context.
We prove the claim by induction on the number of pieces in the decomposition. If the decomposition consists of two solid tori then (2) holds and we are done. Otherwise, every solid torus $D^2\times S^1$ is adjacent to a $P^2\times S^1$ or $(A,2)$. If the meridian of one solid torus is attached to $P^2\times S^1$ along the fiber, then (1) holds and we are done. It cannot be attached to a fiber of $(A,2)$, since this would yield a projective plane, but there is no such surface in $\#_h(S^2\times S^1)$. Therefore we can suppose that the solid tori are not attached along fibers. Suppose one solid torus is attached along a section of the fibration of the adjacent $P^2\times S^1$ or $(A,2)$. The two pieces glued together are then homeomorphic to either $A\times S^1$ or $(D,2)\isom D^2\times S^1$. We can thus construct a simpler decomposition by removing these pieces and adding a $D^2\times S^1$ if the second case holds. By our induction hypothesis either (1) or (2) holds. If (1) holds in the new decomposition, it also holds in the old one, and we are done. If (2) holds, the new decomposition has a level function on its graph $G'$. The level function easily lifts from $G'$ to $G$, as follows. The graph $G$ is constructed from $G'$ with one of the moves shown in Fig. \[claim:fig\]. With move (1), assign $l(v_3) = \max \{l(v_1), l(v_2)\}$ and $l(v_4) = l(v_3)+1$. With move (2), assign $l(v_2) = l(v_1)$ and $l(v_3) = l(v_2)+1$. \[claim:fig\] We are left with the case in which every solid torus is attached along a curve which is neither a fiber nor a section of the adjacent $P^2\times S^1$ or $(A,2)$. This produces a new singular fiber. We thus get a decomposition into blocks that are either $P^2\times S^1$, an annulus with one singular fiber, a disc with two singular fibers, or $S^2$ with $3$ singular fibers.
By assembling blocks with matching fibers, we get either a Seifert manifold fibering over an orbifold with $\chi\leqslant 0$, and hence not homeomorphic to $S^2\times S^1$ or $S^3$, or a prime manifold with nontrivial JSJ decomposition: a contradiction in all cases. The claim is proved. Finally, we show how the claim implies Theorem \[leveled:teo\]. Our shadow $X$ may be represented as a decorated graph $G$, which also encodes a decomposition of $\partial N(X\cup\partial M)\isom \#_h(S^2\times S^1)$. The claim applies to $G$. If (1) holds, there is a piece $P^2\times S^1$ whose fiber bounds a compressing disc. It corresponds to a vertex of type and Proposition \[(4):prop\] applies. The move in Fig. \[scoppia:fig\] thus transforms $X$ into a shadow $X'$ of some $(M',L')$ of which $(M,L)$ is a twice connected sum. If (2) holds, the graph $G$ is a tree which may be equipped with a level function, and we are done again. Leaves, fruits, and branches {#leaves:section} ============================ We investigate here the decorated trees with levels defined in the previous section. We introduce some terminology – *leaves*, *fruits*, and *branches* – and we study some moves that transform one tree into another. Drawing and cutting ------------------- Let $T$ be a decorated tree with levels. We always draw $T$ with this convention: higher vertices in the picture have lower levels. As an example, see Fig. \[leveled:fig\]. \[leveled:fig\] Consider a vertex $v$ on $T$. Recall that $S_v$ spans a subtree, which is non-empty precisely when $v$ is of type or . Since the vertex may be oriented in two different ways, there are three possibilities, which we may picture as in Fig. \[Sv:fig\]. We start by investigating the first one. Fig. \[stacca:fig\] shows a way of *cutting* the subtree spanned by $S_v$. The result is a new decorated tree $T'$ with levels. The new level function $l'$ should be clear from the figure. (More precisely: let $v_2\in S_v$ be the vertex adjacent to $v$.
We set $l'(w) = 0$ and $l'(v_*) = l(v_*) - l(v_2)$ for each vertex $v_*\in S_v$. In particular, the vertices $w$ and $v_2$ belong to the root of $T'$.) \[Sv:fig\] \[stacca:fig\] \[cut:prop\] Let $T$ be a decorated tree with levels and $v$ a vertex of type . The cut in Fig. \[stacca:fig\] produces a new decorated tree $T'$ with levels. The new tree $T'$ encodes a shadow $X'$ with $\partial N(X') = S^3$. The axioms (1)-(4) descend easily from $T$ to $T'$, hence $T'$ is indeed a decorated tree with levels. The only non-trivial fact to prove is that $\partial N(X')=S^3$. The manifold $M_{S_v}$ is a solid torus, whose meridian is attached to a section of $M_v$. We have $\partial N(X') = M_w \cup M_{S_v}$. The meridian of the solid torus $M_w$ is in fact isotopic to the fiber of $M_v$. Therefore the meridians of the two solid tori $M_w$ and $M_{S_v}$ have intersection 1, and thus $\partial N(X')=S^3$. Leaves ------ Let $v$ be a vertex of type . If $S_v$ consists of a single vertex, this vertex is a *leaf*. A leaf is a vertex of valence 1 and is either flat or fat, see Fig. \[leaf:fig\]. The edge connecting the base $v$ with its leaf is decorated with some integer $n$. When the vertex is flat the integer is not very important since it only determines the framing on the corresponding component of $\partial X$. On a fat vertex, we must have $n=\pm 1$, as the following shows. \[leaf:fig\] \[leaf:prop\] Let $T$ be a decorated tree with levels. The edge joining a fat leaf and its base is decorated by $\pm 1$. If we cut the leaf as in Fig. \[stacca:fig\] we find a shadow $X' = S^2$ with gleam $n$. Therefore $\partial N(X')$ is the lens space $L(n,1)$. We must have $\partial N(X') = S^3$ by Proposition \[cut:prop\]: therefore $n= \pm 1$. The sign of $\pm 1$ can in fact be changed easily. Let $T$ be a decorated tree with levels, encoding a shadow $X$ of some block $(M,L)$. The moves in Fig. 
\[move\_leaf:fig\] transform $T$ into a decorated tree with levels $T'$ encoding a shadow $X'$ of the same block $(M,L)$. Move (1) is Fig. \[thickening:fig\]-(7). To get (2), first use Fig. \[thickening:fig\]-(8) to move the gleam $n$ to the right, and then use Fig. \[thickening:fig\]-(1). \[move\_leaf:fig\] Vertices of valence 2. ---------------------- Vertices of type are more difficult to treat than 3-valent vertices. We may eliminate them with a move which, however, changes the topology of the block dramatically. Each of the moves in Fig. \[elimina5:fig\] transforms a decorated tree $T$ with levels into another decorated tree $T'$ with levels. The move is taken from Fig. \[boundary:fig\]. In each move of Fig. \[elimina5:fig\] the levels of the new vertices in $T'$ can be deduced from the picture. They are arranged so that $T'$ is indeed equipped with a level function. (Note that every leaf is decorated with a gleam $\pm 1$, in accordance with Proposition \[leaf:prop\].) \[elimina5:fig\] Note that $T$ and $T'$ determine non-homeomorphic blocks $(M,L)$ and $(M',L')$ in general. We can now state a version of Proposition \[cut:prop\] for 2-valent vertices. Let $T$ be a decorated tree with levels and $v$ a vertex of type . Each cut in Fig. \[stacca\_generalized:fig\] produces a new decorated tree with levels $T'$. The new tree $T'$ encodes a shadow $X'$ such that $\partial N(X') = S^3$. First apply the corresponding move in Fig. \[elimina5:fig\] and then Proposition \[cut:prop\]. An example is shown in Fig. \[leveled\_example:fig\]. \[stacca\_generalized:fig\] \[leveled\_example:fig\] Nice flat vertices ------------------ We may suppose that flat vertices only occur in some “nice” position, which we now explain. \[flat:fig\] \[flat:prop\] Let $T$ be a decorated tree with levels, encoding a shadow $X$ of some block $(M,L)$. - Each of the moves in Fig.
\[flat:fig\]-(1,2,3,4) transforms $T$ into a decorated tree $T'$ with levels encoding a shadow $X'$ of some block $(M',L')$. The block $(M,L)$ is homeomorphic to $(M',L')$ or obtained from it via an assembling. - The tree $T$ cannot contain a portion as in Fig. \[flat:fig\]-(5). Move (1) is simply a change of level function. Move (2) is similar to Fig. \[move\_leaf:fig\]-(2). Moves (3) and (4) follow from Proposition \[disc:prop\]: the non-flat vertex gives a block $P^2\times S^1$ or $(A,2)$, which we see as a link complement from Table \[pieces:table\]. The flat vertices produce an $\infty$ Dehn filling. The result is a solid torus which yields a vertical compressing disc, so that Fig. \[hv:fig\]-(1) applies. On the other hand, an $\infty$ filling on the knot winding once in the picture representing $(A,2)$ does not give a solid torus. Proposition \[4:consequence:prop\] thus forbids Fig. \[flat:fig\]-(5). \[not\_nice:fig\] A flat vertex $v$ is *nice* if it is a leaf and is not contained in a portion as in Fig. \[not\_nice:fig\]. (In other words, $v$ is nice if it is adjacent to a 3-valent vertex $v'$ with $l(v')< l(v)$, which is not itself adjacent to another 3-valent vertex $v''$ with $l(v'')<l(v)$.) By the following result, we may suppose that every flat vertex is nice. \[flat:cor\] Let $T$ be a decorated tree with levels encoding a shadow $X$ of some block $(M,L)$. The block is obtained via assemblings from $(M_1,L_1)\sqcup (M_2,L_2)$ where $(M_1,L_1)$ is a graph manifold generated by $\calS_0$ and $(M_2,L_2)$ has a shadow $X'$ encoded via a decorated tree $T'$ with levels such that 1. every flat vertex of $T'$ is nice; 2. the tree $T'$ has no more vertices than $T$. We may suppose that every flat vertex is a leaf by using the moves in Fig. \[flat:fig\]-(1,3,4). Each such move de-assembles a 4-dimensional graph manifold. We then eliminate the configurations as in Fig. \[not\_nice:fig\] using Fig. \[flat:fig\]-(2).
Fruits ------ Take a decorated tree $T$ with levels and a 3-valent vertex $v$. If $S_v$ is as in Fig. \[fruit:fig\]-(1), we call it a *fruit*. The vertex $v$ is the *base* of the fruit. Note that a fruit encodes a projective plane in the shadow. \[fruit:fig\] \[move\_fruit:fig\] \[fruit:prop\] Let $T$ be a decorated tree with levels. A fruit is decorated as in Fig. \[fruit:fig\]-(2). The move in Fig. \[move\_fruit:fig\] transforms $T$ into another tree $T'$ with levels. Take a fruit, decorated with some gleams $a$ and $b-1/2$ as in Fig. \[fruit\_proof:fig\]-(1), with $a,b$ both integers. As in the proof of Proposition \[leaf:prop\], by cutting the lowest vertex as in Fig. \[stacca\_generalized:fig\]-(1) we find that we must have $a=\pm 1$. Up to switching both gleams we may set $a =-1$. The moves in Fig. \[fruit\_proof:fig\]-(2) produce a tree $T'$ with levels encoding some shadow $X'$. We can cut $X'$ as in Fig. \[fruit\_proof:fig\]-(3). The result is a shadow $X''$ with $\partial N(X'')=S^3$ by Proposition \[cut:prop\]. It is made of three discs with gleams $b$, $-1$, $-3$. This may be further transformed into two spheres intersecting transversely in a point, with Euler numbers $b-1$ and $-4$ using Fig. \[perturb:fig\]. Since $\partial N(X'')=S^3$, one such sphere must have Euler number zero, and hence $b=1$ as required. (See Lemma \[plumbing:lemma\].) Finally, the move in Fig. \[move\_fruit:fig\] is constructed in Fig. \[fruit\_proof:fig\]-(4). As above, recall that $T$ and $T'$ represent non-homeomorphic blocks in general. \[fruit\_proof:fig\] Branches -------- Take a decorated tree $T$ with levels and a vertex $v$ of type or . If $S_v$ is not contained in a leaf or in a fruit we call it a *branch*. See an example in Fig. \[leveled\_generalized:fig\]. \[leveled\_generalized:fig\] Depending on its base $v$, there are three types of branches, shown in Fig. \[Sv:fig\]. 
We will prove Theorem \[main:teo\] inductively by simplifying branches, starting from the ones of highest level. It is relatively easy to simplify a branch of type (1) or (3) from Fig. \[Sv:fig\]. Unfortunately, more work needs to be done to simplify branches of type (2). Therefore we call a branch as in Fig. \[Sv:fig\]-(2) a *bad branch*. We now analyze bad branches. Let $T$ be a decorated tree with levels defining a shadow $X$. Let a vertex $v$ be the base of a bad branch. It defines a block $M_v \isom (A,2)$ in the decomposition of $\partial N(X)$. The branch $S_v$ in turn defines a solid torus $M_{S_v}$ whose meridian is attached to a boundary component $T$ of $(A,2)$. The torus $T$ has a preferred homology basis: the meridian $\mu$ is the fiber $\pi^{-1} (x)$ of a point in $X$ along the natural projection $\pi:\partial N(X) \to X$. The longitude $\lambda$ is the fiber of the Seifert fibration $(A,2)$. The meridian of the solid torus $M_{S_v}$ is attached along a curve $\mu + q\lambda$. We call the integer $q$ the *torsion* of the bad branch. We show some examples (omitting the proofs). Two bad branches with torsion $q$ are shown in Fig. \[bad\_branch\_examples:fig\]-(1,2). \[bad\_branch\_examples:fig\] \[bad\_branch\_move1:fig\] \[bad:branch:prop\] Let $T$ be a decorated tree with levels containing a bad branch with torsion $q$. The move in Fig. \[bad\_branch\_move1:fig\] transforms $T$ into another decorated tree $T'$ with levels. If we replace a bad branch with torsion $q$ by another bad branch having the same torsion $q$, we get a new decorated tree with levels. Here, we replace the bad branch with the one in Fig. \[bad\_branch\_examples:fig\]-(1). Then we modify as in Fig. \[bad\_branch\_proof:fig\]. \[bad\_branch\_proof:fig\] \[bad\_branch\_move2:fig\] More can be done if $|q|\leqslant 1$. \[reducible:prop\] Let $T$ be a decorated tree with levels encoding a shadow $X$ of a block $(M,L)$. Let $T$ contain a bad branch with torsion $q$.
If $q=0$ (resp. $\pm 1$), the move in Fig. \[bad\_branch\_move2:fig\]-(1) (resp. (2)) produces a decorated tree $T'$ with levels encoding a shadow $X'$ of a block $(M',L')$. The block $(M,L)$ is an assembling (resp. connected sum) of $(M',L')$. If $q=0$, the meridian of the solid torus is vertical. If $q= \pm 1$, it is horizontal. This gives a vertical or horizontal compressing disc and Lemma \[compressing:lemma\] applies. The moves in Fig. \[hv:fig\] may be represented here as in Fig. \[bad\_branch\_move2:fig\]. Let us say that a bad branch is *reducible* if one of the following holds: - $q=0$ and the branch does not consist of a single flat vertex; - $q=\pm 1$ and the branch does not consist of a single fat vertex. A reducible bad branch can indeed be simplified thanks to Proposition \[reducible:prop\]. We will thus focus on non-reducible bad branches. Plumbing lines {#plumbing:section} ============== We will use some techniques that were inspired by a paper of Neumann and Weintraub [@Neu]. In that paper, the authors classified the closed 4-manifolds that may be obtained by adding a 4-handle to a plumbing of spheres. What we do here is in fact a generalization of that result, since a plumbing of spheres becomes a simple polyhedron without vertices after perturbing the double points as in Fig. \[perturb:fig\]. Our generalization is twofold: we consider any kind of simple polyhedron without vertices, and we also admit 3-handles. Recall that a *plumbing* of spheres in a $4$-manifold is a subspace consisting of some embedded oriented (locally flat) $2$-spheres with transverse intersections. Its regular neighborhood is encoded by the *plumbing graph*, having a vertex for each sphere, decorated with the Euler number of its normal bundle (*i.e.* its algebraic self-intersection), and an edge for each intersection, decorated with its sign. In particular, in a *plumbing line* as in Fig.
\[plumbing\_line:fig\] we can orient all spheres in order to get positive intersections, so that the plumbing is determined by the sequence of Euler numbers $(e_1,\ldots,e_n)$. The *boundary* of the plumbing is the $3$-dimensional boundary of its regular neighborhood. \[plumbing\_line:fig\] \[easy:plumbing:lemma\] Let $(e_1,\ldots,e_n)$ be a plumbing line, whose boundary is homeomorphic to $S^3$. We have $|e_i|\leqslant 1$ for at least one value of $i$. We need here the following stronger version of Lemma \[easy:plumbing:lemma\]. If $(e_1,\ldots,e_n)$ is a plumbing line, note that $(e_n,\ldots,e_1)$ and $(-e_1,\ldots,-e_n)$ are plumbing lines defining the same unoriented 4-manifold. \[plumbing:lemma\] Let $(e_1,\ldots,e_n)$ be a plumbing line, whose boundary is homeomorphic to either $S^3$ or $S^2\times S^1$. Up to reversing the sequence and/or changing all signs, one of the following holds. - $e_1 = 0$, - $e_1=1$ and $n=1$, - $e_1=1$ and $e_2\in \{0,1,2,3\}$, - $e_i=0$ for some $i\not\in \{1,n\}$ and $e_{i-1}e_{i+1}\leqslant 0$, - $e_i= 1$ for some $i\not\in \{1,n\}$ and $e_{i-1} \in\{0,1,2,3\}, e_{i+1}\geqslant 0$. We prove the assertion by contradiction. Therefore we suppose that - if $e_i=0$, then $i\not\in\{1,n\}$ and $e_{i-1}e_{i+1}>0$; - if $e_i=\pm 1$, one of the following holds: 1. there is a $j\in\{i-1, i+1\}\cap\{1,\ldots,n\}$ such that $e_ie_j<0$, or 2. we have $e_ie_j\geqslant 4$ for all $j\in\{i-1,i+1\}\cap \{1,\ldots,n\}$. and we conclude that the boundary of a regular neighborhood of the plumbing is neither homeomorphic to $S^3$ nor to $S^2\times S^1$. The fundamental group of the boundary is a cyclic group, whose order is the absolute value of the determinant of the bilinear form on $H_2$ (the order is infinite when this value is zero). 
This determinant is $$f(e_1,\ldots,e_n) = \det \left(\begin{array}{ccccc} e_1 & 1 & 0 & \ldots & 0 \\ 1 & e_2 & 1 & \ddots & \vdots \\ 0 & 1 & e_3 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & 1 \\ \phantom{\Big|}\!\! 0 & \ldots & 0 & 1 & e_n \end{array}\right).$$ We have the following equalities $$\begin{aligned} f(\emptyset) & = & 1 \\ f(e_1) & = & e_1 \\ f(e_1,\ldots,e_n) & = & e_1f(e_2,\ldots,e_n)-f(e_3,\ldots,e_n) \label{first:eqn} \\ f(\ldots,e_{i-1},0,e_{i+1},\ldots) & = & -f(\ldots,e_{i-1}+e_{i+1},\ldots), \\ f(0,e_2,e_3,\ldots ) & = & -f(e_3,\ldots), \\ f(\ldots,e_{i-1},1,e_{i+1},\ldots) & = & f(\ldots,e_{i-1}-1,e_{i+1}-1,\ldots), \\ f(1,e_2,\ldots) & = & f(e_2-1,\ldots), \\ f(\ldots,e_{i-1},1,1,e_{i+2},\ldots) & = & -f(\ldots, e_{i-1}+e_{i+2}-1, \ldots). \label{last:eqn}\end{aligned}$$ We prove now by induction on $n$ that $|f(e_1,\ldots,e_n)|\geqslant 2$. This implies that the boundary of the plumbing is neither $S^3$ nor $S^2\times S^1$. If $n=1$, we have $|f(e_1)|=|e_1|\geqslant 2$ by our hypothesis above. Suppose now $n>1$. If $|e_i|\geqslant 2$ for all $i$, equation (\[first:eqn\]) gives $|f(e_1,\ldots,e_{i+1})|>|f(e_1,\ldots,e_i)|$ for all $i$, and we are done. If there is a $e_i=0$, then $f(\ldots, e_{i-1},0,e_{i+1},\ldots) = -f(\ldots,e_{i-1}+e_{i+1},\ldots)$. By hypothesis $e_{i-1}e_{i+1}>0$, hence $|e_{i-1}+e_{i+1}|\geqslant 2$ and the shorter sequence $(\ldots,e_{i-1}+e_{i+1},\ldots)$ is easily seen to still satisfy our induction hypothesis (note that $e_{i-1}, e_{i+1}$, and $e_{i-1}+e_{i+1}$ all have the same sign and thus $|e_{i-1}+e_{i+1}|\geqslant |e_{i-1}|+|e_{i+1}|$). Therefore we conclude. We may now suppose that $e_i\neq 0$ for all $i$. Hence $e_i=\pm 1$ for some $i$, say $e_i=1$. We consider the case $i=1$. We have $f(1,e_2,\ldots) = f(e_2-1,\ldots)$. By our hypothesis we have either $e_2<0$ or $e_2\geqslant 4$. In the first case, the shorter sequence $(e_2-1,\ldots)$ still satisfies the induction hypothesis. 
In the second case, it also does, except if $(e_1,e_2,e_3, e_4,\ldots )= (1,4,1, e_4,\ldots)$ and $e_4\geqslant 4$. The new sequence is $(3,1,e_4,\ldots)$, which may in turn be shortened to $(2,e_4-1,\ldots)$. Again, we are done except when $(\ldots, e_4, e_5, e_6,\ldots) = (\ldots, 4, 1, e_6,\ldots)$ with $e_6\geqslant 4$. By repeating this argument we eventually end up with a sequence $(2,\ldots,2,e_{2k}-1,\ldots)$ with $e_{2k-1}>4$, or $(2,\ldots,2)$, or $(2,\ldots,2,3)$. Each of these satisfies our hypothesis, so we are done. Consider the case $i\not\in\{1,n\}$. One of the following holds. 1. we have $e_{i-1}<0$ (up to reversing the sequence), or 2. we have $e_{i-1}, e_{i+1}\geqslant 4$. We have $f(\ldots,e_{i-1},1,e_{i+1},\ldots) = f(\ldots,e_{i-1}-1,e_{i+1}-1,\ldots)$. Suppose (1) holds. The new sequence satisfies our hypothesis except if one of the following holds: - $e_{i+1} = 1$, - $(\ldots, e_{i+1}, e_{i+2}, e_{i+3},\ldots) = (\ldots,4,1,e_{i+3},\ldots )$ with $e_{i+3}\geqslant 4$. If $e_{i+1}=1$, we have $(\ldots, e_{i-1},1,1,e_{i+2},\ldots )$ with $e_{i+2}<0$. Equation (\[last:eqn\]) gives $f(\ldots,e_{i-1},1,1,e_{i+2},\ldots) = -f(\ldots, e_{i-1}+e_{i+2}-1, \ldots)$ and the new sequence fulfills the hypothesis. If the second case holds, we repeat our argument as above and end up with a shorter sequence of type $(\ldots, e_{i-1}-1,2,\ldots,2,e_h -1, \ldots)$ with $e_h>4$, or $(\ldots, e_{i-1}-1,2,\ldots,2)$, or $(\ldots,e_{i-1}-1,2,\ldots,2,3)$. Each of these satisfies the hypothesis. Suppose (2) holds. The new sequence fulfills the hypothesis, except if one of the following holds: - $(\ldots, e_{i+1}, e_{i+2}, e_{i+3},\ldots ) = (\ldots, 4,1,e_{i+3},\ldots)$ with $e_{i+3}\geqslant 4$, or - $(\ldots, e_{i-3}, e_{i-2}, e_{i-1},\ldots) = (\ldots, e_{i-3},1,4,\ldots)$ with $e_{i-3}\geqslant 4$. Both cases may hold; in each case we proceed as above. Proof of the theorem {#proof:section} ==================== Finally, we prove here Theorem \[main:teo\].
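The shortening identities for $f$ used in the lemma above are easy to sanity-check numerically. The sketch below (in Python; the function name `f_det` and the test ranges are our own choices, not part of the paper) computes $f$ via the recursion $f(e_1,\ldots,e_n)=e_1f(e_2,\ldots,e_n)-f(e_3,\ldots,e_n)$ and verifies three of the identities on all short sequences with small entries:

```python
import itertools

def f_det(seq):
    """Tridiagonal determinant f(e_1, ..., e_n) of a plumbing line,
    computed by the recursion
    f(e_1, ..., e_n) = e_1*f(e_2, ..., e_n) - f(e_3, ..., e_n),
    with f() = 1 and f(e_1) = e_1."""
    if len(seq) == 0:
        return 1
    if len(seq) == 1:
        return seq[0]
    return seq[0] * f_det(seq[1:]) - f_det(seq[2:])

# Check three of the shortening identities on all length-4 sequences
# with entries in {-3, ..., 3} (the ranges are an arbitrary choice).
for e in map(list, itertools.product(range(-3, 4), repeat=4)):
    # f(..., a, 0, b, ...) = -f(..., a+b, ...)
    assert f_det([e[0], 0, e[1], e[2]]) == -f_det([e[0] + e[1], e[2]])
    # f(1, e_2, ...) = f(e_2 - 1, ...)
    assert f_det([1] + e) == f_det([e[0] - 1] + e[1:])
    # f(..., a, 1, 1, b, ...) = -f(..., a+b-1, ...)
    assert f_det([e[0], 1, 1, e[1]]) == -f_det([e[0] + e[1] - 1])
```

All assertions pass; for instance $f(2,0,3,0)=1=-f(5,0)$, matching the middle-zero identity.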
We start with a lemma. \[local:fig\] \[contains:lemma\] Let $T$ be a decorated tree with levels. One of the following holds: - the tree contains a flat vertex which is not nice; - the tree contains a reducible bad branch; - the tree contains a portion as in Fig. \[local:fig\], possibly after applying some moves as in Fig. \[move\_leaf:fig\]. We suppose that every flat vertex is nice and that there are no reducible branches in $T$. We deduce that $T$ contains a portion as in Fig. \[local:fig\]. We start by claiming that $T$ contains a portion $Z$ as in Fig. \[branch:fig\]-(1), such that: \[branch:fig\] 1. every $G_i$ is either a leaf or a fruit, see Fig. \[branch:fig\]-(2); 2. the portion $A$ is one of those (a), (b), (c), (d) shown in Fig. \[branch:fig\]-(2); 3. the portion $B$ is one of those (e), (f) shown in Fig. \[branch:fig\]-(2). Let the *level* of a branch $S_v$ be the level $l(v)$ of its base vertex $v$. If there is no branch at all in $T$, then the whole tree $T$ is as in Fig. \[branch:fig\]-(1) (with $A$ of type (a) and B of type (e)) and we may take $Z=T$. Otherwise, consider a branch having the highest level among branches. The branch is as in Fig. \[branch:fig\]-(1) with B of type (e) and $A$ either of type (b), (c), or . We are done, except when the latter case holds, *i.e.* when the branch is bad. To avoid bad branches, we take $Z$ as a branch having the highest level among good branches. (Again, if there are no good branches, take $Z=T$.) The portion $Z$ is as required. \[plumbing\_line2:fig\] We now construct a plumbing line from $Z$. Actually, we construct a tree with levels as in Fig. \[plumbing\_line2:fig\], which in turn may be transformed into a plumbing line as in Fig. \[plumbing\_line:fig\] via the (inverse of the) move that perturbs double points, see Fig. \[perturb:fig\]. The tree with levels is constructed by substituting the pieces A and B as prescribed by Fig. \[ripeto:fig\]-(b,c,d,f), and each flat leaf and fruit as in Fig. 
\[ripeto:fig\]-(T,F). \[ripeto:fig\] If $A$ is of type (d) or $B$ is of type (f), it is a bad branch with some torsion $q$. By hypothesis, it is not reducible. In other words: - if $q=0$, the bad branch consists of a single flat vertex; - if $q=\pm 1$, the bad branch consists of a single fat vertex. Since every flat vertex is nice, the first case is excluded. Therefore $q\neq 0$. We end up with a decorated tree with levels as in Fig. \[plumbing\_line2:fig\], which determines a plumbing line as in Fig. \[plumbing\_line:fig\], with some integers $e_1,\ldots, e_n$. Now we apply Lemma \[plumbing:lemma\]. The sequence $(e_1,\ldots, e_n)$ contains one of the following subsequences: 1. $(0)$; 2. $(\pm 1)$; 3. $(0,e_2,\ldots)$; 4. $(\ldots,e_{n-1},0)$; 5. $(\ldots, e_{i-1},0,e_{i+1},\ldots)$ with $e_{i-1}e_{i+1}\leqslant 0$; 6. $(1,e_2,\ldots)$ with $e_2\geqslant 0$ not equal to 4; 7. $(\ldots,e_{n-1}, 1)$ with $e_{n-1}\geqslant 0$ not equal to 4; 8. $(-1,e_2,\ldots)$ with $e_2\leqslant 0$ not equal to $-4$; 9. $(\ldots,e_{n-1}, -1)$ with $e_{n-1}\leqslant 0$ not equal to $-4$; 10. $(\ldots,e_{i-1},1,e_{i+1},\ldots)$ with $e_{i-1}\geqslant 0$, $e_{i+1}\geqslant 0$, not both equal to 4; 11. $(\ldots,e_{i-1},-1,e_{i+1},\ldots)$ with $e_{i-1}\leqslant 0$, $e_{i+1}\leqslant 0$, not both equal to -4. In the first two cases (i) and (ii) the sequence has only one element. The subsequence identifies a portion of $Z$. We now show that this portion is one of those listed in Fig. \[local:fig\]. To preserve clarity, we first suppose that $Z$ does not contain flat leaves. The portions $A$, $B$, $G_i$ of $Z$ contribute to the plumbing line $(e_1,\ldots, e_n)$ as follows, see Fig. 
\[ripeto:fig\]: - portions of type $A$-(a) and $A$-(b) contribute in the same way; - a portion of type $A$-(c) contributes with $(-2, \ldots)$; - a portion of type $A$-(d) contributes with $(2,q,-2,\ldots)$; - a portion of type $B$-(f) contributes with $(\ldots,-2,q,2)$; - a fruit contributes with an integer $\pm 4$. Recall that $q$ is always non-zero. We consider first the case $k=0$, *i.e.* there is no $G_i$. The portion $Z$ thus consists of the pieces $A$ and $B$ glued together. The various possibilities are shown in Fig. \[AB:fig\]. We analyze each case separately: \[AB:fig\] - the sequence consists of a single number $(x)$. By hypothesis, $|x|\leqslant 1$ which leads to (18) or (19); - the portion $Z$ would be a leaf and not a branch: excluded; - the sequence is $(-2,x-1/2)$. Therefore $x=\pm 1/2$ which leads to (12); - the sequence is $(2,q,-2,x-1/2)$ with $q\neq 0$. Therefore $x=\pm 1/2$ which leads to (15); - like (de); - the sequence is $(x-1/2, -2, q, 2)$ with $q\neq 0$. As above, we get $x=\pm 1/2$. If $q= \pm 1$, the bad branch consists of a single vertex, and hence $Z$ is a fruit and not a branch: excluded. Therefore $|q|\geqslant 2$. However, the moves contained in the proof of Lemma \[plumbing:lemma\] show that this sequence does not give $S^3$ or $S^2\times S^1$: excluded; - the sequence is $(-2,x-1,-2,q,2)$ with $q\neq 0$. Therefore $x=0$ and again this sequence does not give $S^3$ or $S^2\times S^1$; - the sequence is $(2,q,-2,x-1,-2,q',2)$ with $q,q'\neq 0$. Therefore $x=0$ which leads to (21). We turn to the case $k>0$. We consider first the portion formed by $A$ and $G_1$. It is as in Fig. \[AG:fig\]. We use implicitly Fig. \[move\_leaf:fig\]-(1) at various points. We analyze each case separately: \[AG:fig\] - the sequence starts as $(x+1,\ldots )$. If $|x+1| \leqslant 1$ we get either (1) or (2); - the sequence starts as $(x+1,\ldots )$. If $|x+1|\leqslant 1$ we get either (5) or (6); - the sequence starts as $(-2,x+1/2, \ldots)$.
If $x+1/2 \in \{-1,0\}$ we get (13); - the sequence starts as $(2,q,-2,x+1/2,\ldots)$. If $x+1/2 \in \{-1,0\}$ we get (16); - the sequence starts as $(x, \pm 4, \ldots)$. If $x=0$ we get (7); - the sequence starts as $(x, \pm 4, \ldots)$. If $x=0$ we get (11); - the sequence starts as $(-2,x-1/2, \pm 4, \ldots)$. Two configurations both lead to (14): they are $(-2,x-1/2,-4,\ldots)$ with $x-1/2 = -1$ and $(-2,x-1/2,4)$ with $x-1/2 = 0$; - the sequence starts as $(2,q,-2,x-1/2, \pm 4)$; we get two configurations exactly as before, which lead to (17). The portion formed by $G_k$ and $B$ is treated analogously. We turn to a portion involving $G_i$ and $G_{i+1}$ as in Fig. \[GG:fig\]. We analyze each case: \[GG:fig\] - the sequence contains $(\ldots, x+2, \ldots)$. If $|x+2|\leqslant 1$ we get (3) or (4); - the sequence contains $(\ldots,\pm 4, x+1, \ldots)$. If $x+1 = 0$ we get (8); if $(\ldots, \pm 4, x+1, \ldots)$ equals $(\ldots, 4, 1, \ldots)$ or $(\ldots, -4, -1, \ldots)$ we get (9); - the sequence contains $(\ldots, \pm 4, x , \pm 4,\ldots)$ or $(\ldots, \pm 4, x, \mp 4,\ldots)$. In the second case, if $x=0$ we get (10). We are left to consider the presence of flat leaves. These do not contribute to the plumbing line $(e_1,\dots, e_n)$: we therefore conclude that the branch contains a portion of those already listed, plus maybe some additional flat leaves. In all the portions found, such leaves may be slid away by using the move in Fig. \[move\_leaf:fig\]-(2), except when the branch is very small: this happens in cases (12), (15), (18), (19), and (21). In all but the last case, the branch contains a portion of type (20). In the last case, it contains a portion of type (22). Neumann and Weintraub [@Neu] used Lemma \[easy:plumbing:lemma\] to simplify the plumbing line, via a move that eliminates the sphere with small Euler number. Here we do the same. As the following shows, all the portions listed in Fig. \[local:fig\] may be simplified. 
\[simplify:fig\] \[simplify2:fig\] \[simplify3:fig\] \[simplify4:fig\] \[simplify:prop\] Let $T$ be a decorated tree with levels encoding a shadow $X$ of a block $(M,L)$. Each of the moves in Figs. \[simplify:fig\], \[simplify2:fig\], \[simplify3:fig\], and \[simplify4:fig\] transforms $T$ into a new tree $T'$ with levels encoding a shadow $X'$ of some block $(M',L')$. The block $(M,L)$ is homeomorphic to $(M',L')$, or obtained from it via one assembling or connected sum. Move (1) is the inverse of Fig. \[sum\_ass:fig\]-(2), with one ($-1$)-gleamed disc attached on the left and some moves from Fig. \[mosse\_innocue:fig\]. Move (2) is the inverse of Fig. \[sum\_ass:fig\]-(1). Move (3) is Fig. \[thickening:fig\]-(2) followed by the inverse of Fig. \[sum\_ass:fig\]-(1). Move (4) is Fig. \[thickening:fig\]-(1) followed by (1). Move (5) is Fig. \[thickening:fig\]-(1). \[simplify\_proof:fig\] In move (6), consider the simple closed curve $\gamma$ determined by the edge in Fig. \[simplify\_proof:fig\]-(1). If we cut the branch as in Fig. \[stacca\_generalized:fig\] we get a tree with levels of a shadow $X'$ with $\partial N(X') = S^3$. The curve $\gamma$ bounds on the left of this tree a portion equal to the one in Fig. \[simplify:fig\]-(1)-top. It is easy to see that the torus over $\gamma$ has a vertical disc over that portion (on the left). Since we are in $S^3$, the torus over $\gamma$ bounds (on the right) another disc which intersects this vertical disc in a point: that is, it is horizontal. Therefore $\gamma$ bounds a horizontal disc on the right. It does so also in the original tree $T$. Since $\gamma$ bounds a horizontal disc we can perform the move in Fig. \[hv:fig\]-(2). The result is as in Fig. \[simplify\_proof:fig\]-(2)-left. It now suffices to apply Fig. \[thickening:fig\]-(2) and we are done. Move (7) is the inverse of Fig. \[sum\_ass:fig\]-(1) and Fig. \[thickening:fig\]-(4). Move (8) is the composition of Fig. \[thickening:fig\]-(2), Fig.
\[thickening:fig\]-(3), Fig. \[move\_leaf:fig\]-(1), and (2). Move (9) is Fig. \[thickening:fig\]-(1-3-2). Concerning (10), apply Fig. \[thickening:fig\]-(1-6), then (1) and Fig. \[flat:fig\]-(4). Move (11) is again Fig. \[thickening:fig\]-(1). Move (12) is Fig. \[thickening:fig\]-(4). Move (13) is Fig. \[thickening:fig\]-(3). Move (14) is Fig. \[thickening:fig\]-(6). Moves (15), (16), and (17) are similar. Concerning move (20), note that removing a flat vertex corresponds to filling by Fig. \[sum\_ass:fig\]-(3). The inverse operation is drilling along the curve $\gamma$ determined by the flat vertex. The curve $\gamma$ is null-homotopic since it is contained in a disc. Therefore drilling corresponds to making a connected sum with $S^2\times D^2$, whence move (20). Move (21) is Fig. \[thickening:fig\]-(5). The resulting Möbius strip determines a vertical disc and thus can be de-assembled by Proposition \[no:36:prop\]. Move (22) is a mixture of (20) and (21): we first remove the flat vertex and fill, then perform (21) and drill back the curve, which is now homotopic to the core of the Möbius strip. We finally prove the difficult part of Theorem \[main:teo\]. We actually prove a more general version, which includes blocks with boundary. Let $X$ be a shadow without vertices of some block $(M,L)$. We have $M = M'\#_h \matCP^2$ for some integer $h$ and some graph manifold $M'$ generated by $\calS_0$. By Corollary \[leveled:cor\], we may suppose that $(M,L)$ has a shadow encoded by a decorated tree $T$ with levels. We prove our theorem by induction on the number of vertices of $T$. By Corollary \[flat:cor\] we may suppose that every flat vertex in $T$ is nice. We may also suppose that every bad branch is non-reducible (otherwise we may simplify it by Proposition \[reducible:prop\] and decrease the number of vertices). We can now apply Lemma \[contains:lemma\] to ensure that the tree contains one of the 22 portions listed in Fig. \[local:fig\].
If the portion is (18) or (19), the shadow $X$ is a sphere with gleam $\pm 1$ or $0$, and $M$ is respectively $\pm \matCP^2$ or $S^4$. Otherwise, the portion may be simplified by Proposition \[simplify:prop\] and we conclude by our induction hypothesis. More precisely, in all cases except (5), (6), and (11) the number of non-flat vertices decreases. In cases (5), (6), and (11) the number of non-flat vertices may remain unchanged: however, there can be only finitely many such moves, since they strictly decrease the levels of some vertices (and leave the levels of the other vertices unchanged). <span style="font-variant:small-caps;">J. J. Andrews – M. L. Curtis</span>, *Free groups and handlebodies*, Proc. of the Amer. Math. Soc. **16** (1965), 192-195. <span style="font-variant:small-caps;">D. Auckly</span>, *The number of smooth 4-manifolds with a fixed complexity*, Int. Math. Res. Not. **17** (2007), Article ID rnm054, 19 p. <span style="font-variant:small-caps;">F. Costantino</span>, *Complexity of $4$-manifolds*, Experimental Math. **15** (2006), 237-249. <span style="font-variant:small-caps;">F. Costantino – D. Thurston</span>, *3-manifolds efficiently bound 4-manifolds*, Journal of Topology **1** (2008), 703-745. <span style="font-variant:small-caps;">R. E. Gompf – A. I. Stipsicz</span>, “$4$-manifolds and Kirby calculus,” Graduate Studies in Mathematics, 20. American Mathematical Society, Providence, RI, 1999. <span style="font-variant:small-caps;">I. Hambleton – M. Kreck – P. Teichner</span>, *Topological 4-manifolds with geometrically 2-dimensional fundamental groups*, [arXiv:0802.0995]{} <span style="font-variant:small-caps;">C. Hog-Angeloni – W. Metzler – A. J. Sieradski</span>, “Two-Dimensional Homotopy and Combinatorial Group Theory,” London Math. Soc. Lec. Notes Ser. **197**. <span style="font-variant:small-caps;">F. Laudenbach – V. Poenaru</span>, *A note on $4$-dimensional handlebodies*, Bull. Soc. Math. France **100** (1972), 337-344.
<span style="font-variant:small-caps;">B. Martelli</span>, *Links, two-handles, and four-manifolds*, Int. Math. Res. Not. **58** (2005), 3595-3623. <span style="font-variant:small-caps;">B. Martelli</span>, *Complexity of PL manifolds*, [arXiv:0810.5478]{} <span style="font-variant:small-caps;">B. Martelli</span>, *Complexity of 3-manifolds*, “Spaces of Kleinian groups,” London Math. Soc. Lec. Notes Ser. **329** (2006), 91-120. <span style="font-variant:small-caps;">B. Martelli – C. Petronio</span>, *Three-manifolds having complexity at most $9$*, Experimental Math. **10** (2001), 207-237. <span style="font-variant:small-caps;">S. V. Matveev</span>, *Special skeletons of piecewise linear manifolds*, (Russian) Mat. Sb. (N.S.) **92(134)** (1973), 282-293. <span style="font-variant:small-caps;">S. V. Matveev</span>, *Complexity theory of three-dimensional manifolds*, Acta Appl. Math. **19** (1990), 101-130. <span style="font-variant:small-caps;">S. V. Matveev</span>, “Algorithmic topology and classification of 3-manifolds,” Algorithms and Computation in Mathematics, **9**, Springer, Berlin, 2007. <span style="font-variant:small-caps;">S. V. Matveev</span>, *Tabulation of three-dimensional manifolds*, Russ. Math. Surv. **60** (2005), 673-698. <span style="font-variant:small-caps;">A. Mozgova</span>, *Non-singular graph-manifolds of dimension 4*, Alg. & Geom. Top. **5** (2005), 1051-1073. <span style="font-variant:small-caps;">W. D. Neumann – S. H. Weintraub</span>, *Four-manifolds constructed via plumbing*, Math. Ann. **238** (1978), 71-78. <span style="font-variant:small-caps;">A. Scorpan</span>, “The Wild World of 4-Manifolds,” American Mathematical Society, Providence, RI, 2005. <span style="font-variant:small-caps;">H. Seifert</span>, *Topologie dreidimensionaler gefaserter Räume*, Acta Math. **60** (1933), 147-238. <span style="font-variant:small-caps;">R. J. Stern</span>, *Will we ever classify simply-connected smooth 4-manifolds?*, Clay Mathematics Proceedings **5** (2006), Floer Homology, Gauge Theory and Low Dimensional Topology, CMI/AMS Book Series, 225-240. <span style="font-variant:small-caps;">D. Thurston</span>, *The algebra of knotted trivalent graphs and Turaev’s shadow world*, Geom. Topol. Monogr. **4** (2002), 337-362.
<span style="font-variant:small-caps;">V. Turaev</span>, *Shadow links and face models of statistical mechanics*, J. Differential Geom. **36** (1992), 35-74. <span style="font-variant:small-caps;">V. Turaev</span>, “Quantum invariants of knots and 3-manifolds,” de Gruyter, Berlin, 1994. <span style="font-variant:small-caps;">F. Waldhausen</span>, *Eine Klasse von $3$-dimensionalen Mannigfaltigkeiten*, Invent. Math. **3** (1967), 308-333; ibid. **4** (1967), 87-117.
--- abstract: 'In this paper, we will discuss the use of a sampling method to reconstruct impenetrable inclusions from electrostatic Cauchy data. We consider the cases of a perfectly conducting inclusion and of an impedance inclusion. In either case, we show that the Dirichlet-to-Neumann mapping can be used to reconstruct impenetrable sub-regions via a sampling method. We also propose a non-iterative method based on boundary integral equations to reconstruct the impedance parameter using the reconstructed boundary of the inclusion from the knowledge of multiple Cauchy pairs, which can be computed from the Dirichlet-to-Neumann mapping. Some numerical reconstructions are presented in two space dimensions.' --- [**A direct method for reconstructing inclusions and boundary conditions from electrostatic data**]{} Isaac Harris and William Rundell\ Department of Mathematics\ Texas A&M University\ College Station, Texas 77843-3368\ E-mail: iharris@math.tamu.edu and rundell@math.tamu.edu [**Keywords**]{}: inverse boundary value problems, shape reconstruction, boundary impedance, sampling methods, integral equations.\ [**AMS subject classifications:**]{} 35J05, 45Q05, 65R30 Introduction {#intro} ============ In this paper we use direct methods (otherwise known as non-iterative methods) to reconstruct impenetrable inclusions from electrostatic Cauchy data. This problem models the non-destructive testing for interior inclusions using voltage and current measurements on the accessible outer boundary. In particular, for a Dirichlet or impedance inclusion we derive a sampling algorithm to recover the inclusion from the knowledge of the Dirichlet-to-Neumann (DtN) mapping. We focus on the case of Laplace’s equation, but the techniques used in this paper still hold when the Laplacian is replaced with a uniformly elliptic operator in divergence form with sufficiently smooth coefficients.
This gives a computationally simple way to solve the inverse problem of reconstructing the inclusion from the knowledge of the DtN mapping. An important feature of sampling methods is that one does not need a priori information about the type or number of inclusions, unlike iterative methods, where one needs some a priori knowledge of the inclusions to ensure convergence. See [@iterative-inclusion; @uniqueness; @EIT-impedance] for examples of iterative methods applied to reconstructing impenetrable inclusions. Sampling algorithms have grown in popularity over the past two decades since their inception in [@CK] as a computationally simple way to recover obstacles. These methods were first used to recover unknown obstacles from time-harmonic scattering data (see the monographs [@CCbook; @kirschbook] and the references therein). Over the years these methods have been employed to solve similar problems in the time domain. In [@TDLSM-wave; @TDLSM-elastic] the Linear Sampling Method is applied to the acoustic and elastic wave equations, respectively. Recently, in [@impedance-heat], the Linear Sampling Method was applied to recovering an impedance inclusion in a heat conductor. Once the boundary of the inclusion is reconstructed, we then consider the problem of determining the boundary conditions on the interior boundary from the knowledge of the boundary and the DtN mapping. This amounts to solving our inverse problem in two steps: we first determine the boundary from the DtN mapping and then use the reconstructed boundary to determine the boundary conditions. Since the Cauchy data on the outer boundary uniquely determine the electrostatic potential by the unique continuation principle, we derive a system of boundary integral equations to reconstruct the Cauchy data on the interior boundary. From this one can determine the boundary condition on the interior boundary. 
We focus on the case of an impedance condition, for which we provide an inversion method for determining the impedance parameter from the recovered Cauchy data. In our investigation of this problem we are able to show that the DtN mapping uniquely determines the $L^\infty$ impedance parameter. It should be noted that uniqueness for both the inclusion and the impedance condition follows from two (suitably chosen) pairs of Cauchy data by [@unique-imp] in the case of an inclusion with $C^{2, \alpha}$ boundary and $C^{1 , \alpha}$ impedance function. The rest of the paper is structured as follows. We begin by formulating the direct and inverse problems under consideration. Next, we consider the problem of reconstructing the interior Dirichlet or impedance boundary from the electrostatic Cauchy data. To this end, a sampling method is derived to determine the inclusion. We then turn our attention to reconstructing the impedance parameter given the interior boundary and the DtN mapping. Uniqueness is proven and an inversion algorithm is described using boundary integral equations. Lastly, we provide numerical experiments in two dimensions to show the feasibility of our inversion algorithm. Statement of the Direct and Inverse Problem =========================================== We begin by considering the boundary value problems associated with the electrostatic problem with and without an impenetrable inclusion, as derived from the quasi-static Maxwell’s equations. Assume that $D \subset {\mathbb{R}}^d$ (for $d=2$ or 3) is a simply connected open set with $C^2$-boundary $\partial D$ with unit outward normal $\nu$. 
Now let $\Omega \subset D$ be a simply connected open set (or a finite union of such sets) with $C^2$-boundary $\partial \Omega$, where we assume that $\text{dist}(\partial D, \partial \Omega)>0.$ For a material without an inclusion, we define $u \in H^1(D)$ to be the unique solution to the following boundary value problem $$\begin{aligned} \Delta u=0 \quad \text{in} \quad D \quad \text{with} \quad u \big|_{\partial D}= f \label{healthy}\end{aligned}$$ for a given $f \in H^{1/2}(\partial D)$. The function $u$ is the electrostatic potential for the material without defects. Now, for the defective material with an impenetrable inclusion, we define $u_0 \in H^1(D \setminus \overline{\Omega})$ as the solution to $$\begin{aligned} \Delta u_0=0 \quad \text{in} \quad D \setminus \overline{\Omega} \quad \text{with} \quad u_0 \big|_{\partial D}= f \quad \text{and} \quad \mathcal{B}(u_0) \big|_{\partial \Omega}=0 \label{defective}\end{aligned}$$ for a given $f \in H^{1/2}(\partial D)$. Here the function $u_0$ is the electrostatic potential for the defective material and the boundary operator $\mathcal{B}$ is given by 1. $\mathcal{B}(u_0) = u_0$, the Dirichlet boundary condition on $\partial \Omega$, or 2. $\mathcal{B}(u_0) = \nu \cdot {\nabla}u_0 + \gamma(x) u_0$, the impedance boundary condition on $\partial \Omega$. We assume that the impedance parameter $\gamma(x)$ is a non-trivial function in $$L^{\infty }_{+}(\partial \Omega) := \Big\{ \gamma \in L^{\infty}(\partial \Omega) \, \, : \, \, \inf\limits_{\partial \Omega} \gamma (x) \geq 0 \Big\}.$$ Here we take $\nu$ to be the unit outward normal to the domain $D \setminus \overline{\Omega}$, see Figure \[dp-pic\]. ![ The electrostatic problem for material with an inclusion. []{data-label="dp-pic"}](EIT-Prob2-2) Assume that the ‘voltage’ $f$ is applied on the boundary $\partial D$ and the current $ \nu \cdot {\nabla}u_0 = \partial_\nu u_0$ is ‘measured’ on $\partial D$. 
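To make the direct problem concrete, the following minimal sketch solves the discrete analogue of the healthy problem on the unit square with the standard 5-point stencil. The square domain and the function name are illustrative assumptions (the paper's $D$ is a general $C^2$ domain, and a production solver would use a sparse system):

```python
import numpy as np

def solve_laplace_dirichlet(f_boundary, n=20):
    """Solve the discrete Laplace equation on the unit square with
    Dirichlet data f_boundary(x, y) on the boundary, via the 5-point
    stencil.  A sketch only: the square stands in for the domain D."""
    xs = np.linspace(0.0, 1.0, n + 2)
    u = np.zeros((n + 2, n + 2))
    # impose the boundary values f on all four sides
    for i, x in enumerate(xs):
        u[i, 0] = f_boundary(x, 0.0)
        u[i, -1] = f_boundary(x, 1.0)
        u[0, i] = f_boundary(0.0, xs[i])
        u[-1, i] = f_boundary(1.0, xs[i])
    # assemble the dense interior system A u_int = b
    N = n * n
    A = np.zeros((N, N))
    b = np.zeros(N)
    idx = lambda i, j: (i - 1) * n + (j - 1)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            k = idx(i, j)
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 1 <= ii <= n and 1 <= jj <= n:
                    A[k, idx(ii, jj)] = -1.0
                else:
                    b[k] += u[ii, jj]   # known boundary neighbor
    u[1:-1, 1:-1] = np.linalg.solve(A, b).reshape(n, n)
    return xs, u
```

Since the 5-point stencil is exact on harmonic quadratics, the data $f = x^2 - y^2$ reproduces its own harmonic extension up to solver precision, which gives a simple consistency check.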
From these measurements we wish to reconstruct the impenetrable inclusion $\Omega$ without any a priori knowledge of the number of inclusions or the boundary condition on $\partial \Omega$. Also, if it is known [*a priori*]{} that the boundary condition is of impedance type, we wish to also reconstruct the parameter $\gamma$. For the case of a perfectly conducting inclusion, i.e. a zero Dirichlet condition on $\partial \Omega$, it has been shown in [@uniqueness] that a single pair of voltage and current measurements on $\partial D$ can be used to determine $\partial \Omega$. In [@iterative-inclusion; @uniqueness] iterative methods based on conformal mapping are used to reconstruct a single perfectly conducting inclusion. The question of unique determination of the boundary $\partial \Omega$ and impedance condition $\gamma (x)$ is more involved than for the case of the perfectly conducting inclusion. It has been shown in [@unique-imp] that two pairs of voltage and current measurements on the boundary $\partial D$ are enough to determine $\partial \Omega$ and $\gamma(x)$, provided the currents are linearly independent and non-negative, assuming that $\gamma (x) \in C^{1,\alpha}(\partial \Omega)$ and $\partial \Omega$ is of class $C^{2,\alpha}$ for $0<\alpha <1$. See [@C-Map-imp; @EIT-impedance], where iterative methods are proposed to determine the inclusion and the impedance. The goal here is to develop a sampling method to reconstruct the boundary of the inclusion $\partial \Omega$; one can then reconstruct $\gamma(x)$ using a system of boundary integral equations, which is a computationally inexpensive way to recover the impedance parameter once the boundary is known. By our assumptions on the impedance parameter $\gamma(x)$ it can easily be shown that both and are well-posed using variational techniques (see Chapter 5 of [@evans]) for the Dirichlet or impedance boundary condition on the inclusion. 
By the linearity of the equation and boundary conditions we have that the voltage-to-electrostatic-potential mappings $$f \longmapsto u \quad \text{and} \quad f \longmapsto u_0$$ are bounded linear operators from $H^{1/2}(\partial D)$ to $H^1(D)$ and $H^1 \big(D \setminus \overline{\Omega} \big)$, respectively. We now define the DtN mappings from $H^{1/2}(\partial D)$ to $H^{-1/2}(\partial D)$ such that $$\Lambda f=\partial_{\nu} u \big|_{\partial D} \quad \text{ and } \quad \Lambda_0 f=\partial_{\nu} u_0 \big|_{\partial D}.$$ Due to the well-posedness of and , along with the Trace Theorem (see, e.g., [@evans]), it follows that the DtN mappings are bounded linear operators. The [*inverse shape problem*]{} we consider here is to reconstruct the support of the impenetrable inclusion $\Omega$ from the knowledge of the DtN mappings $\Lambda$ and $\Lambda_0$, i.e. we want to determine the boundary $\partial \Omega$ from the set of all possible measurements $(f \, , \partial_{\nu} u)$ and $(f \, , \partial_{\nu} u_{0})$ on $\partial D$. Moreover, for the case where $\mathcal{B}(u_0)$ is given by the impedance boundary condition on $\partial \Omega$, we consider the [*inverse impedance problem*]{} of recovering the impedance function $\gamma$ from the knowledge of the DtN mapping $\Lambda_0(\gamma)$. A Sampling Method for the Inverse Shape Problem =============================================== We now derive a sampling method for our inverse [shape]{} problem. That is, we consider the inverse problem of reconstructing $\partial \Omega$ from the knowledge of $\Lambda_0 = \Lambda_0(\partial \Omega)$. The goal is to first reconstruct the boundary $\partial \Omega$ via a sampling method and, provided that $\mathcal{B}(u_0)$ is given by the impedance boundary condition, to then reconstruct the impedance parameter using boundary integral equations in the following section. 
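For intuition about the DtN map $\Lambda$, it can be written down explicitly when $D$ is the unit disk (an illustrative assumption, not the general setting of the paper): the harmonic extension of $e^{\text{i}n\theta}$ is $r^{|n|}e^{\text{i}n\theta}$, so in Fourier variables $\Lambda$ acts as the multiplier $f_n \mapsto |n| f_n$. A short numerical sketch:

```python
import numpy as np

def dtn_disk(f_values):
    """Apply the DtN map Λ of the unit disk (no inclusion) to boundary
    data sampled at equispaced angles.  Λ is diagonalized by the Fourier
    basis with multiplier |n|; for a general domain D one would instead
    discretize the boundary value problem."""
    m = len(f_values)
    fhat = np.fft.fft(f_values)
    freqs = np.abs(np.fft.fftfreq(m, d=1.0 / m))   # |n| for each mode
    return np.real(np.fft.ifft(freqs * fhat))
```

For example, the trace $f(\theta) = \cos 2\theta$ is mapped to $2\cos 2\theta$, matching the multiplier $|n| = 2$.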
The sampling method is based on connecting the support of the unknown region to an ill-posed equation involving the operator defined by the measurements (i.e. the difference of the DtN mappings). First we decompose the difference of the DtN mappings and analyze the operators used in our decomposition; we then use the decomposition to derive an inversion algorithm that determines the support of the inclusion. For the case of multiple inclusions we let $\Omega= \bigcup_{m=1}^{M} \Omega_m$, where $\Omega_m \subset D$ is a simply connected open set with $C^2$-boundary $\partial \Omega_m$ such that $\Omega_i \cap \Omega_j$ is empty for all $i \neq j$. All of the analysis in the preceding sections holds, where the trace spaces $H^{\pm 1/2}(\partial \Omega)$ are understood as the product space $H^{\pm 1/2}(\partial \Omega_1) \times \cdots \times H^{\pm 1/2}(\partial \Omega_M)$. \[by-the-way\] The results in this section can easily be extended to the case when the Laplacian is replaced by ${\nabla}\cdot A(x) {\nabla}$, where $A(x) \in C^1(D, {\mathbb{C}}^{d \times d})$ is given by $$A(x)=\sigma(x) - \text{i} \, \omega {\epsilon}(x)$$ where the electric conductivity $\sigma(x)$ and electric permittivity ${\epsilon}(x)$ are symmetric real-valued matrices such that $$\overline{\xi} \cdot \sigma(x) \xi \geq \sigma_{\text{min} } |\xi|^2 \quad \text{and} \quad \overline{\xi} \cdot {\epsilon}(x) \xi \geq 0$$ for all $\xi\in{\mathbb C}^d$ and almost every $x\in D$, with frequency $\omega \geq 0$. We now develop a sampling method to reconstruct the support of $\Omega$ from a knowledge of the difference of the DtN operators $(\Lambda - \Lambda_0)$ for the Dirichlet or impedance boundary condition. As we will see later in this section, this method can be used without any a priori information about the type or number of inclusions. 
To do so, we decompose the difference of the DtN operators using two operators: the first maps the voltage $f$ to an appropriate boundary value on $\partial \Omega$, and the second maps the aforementioned boundary values on $\partial \Omega$ to the difference of the currents $\partial_{\nu} (u-u_0)$ on $\partial D$. Notice that the difference of the currents on $\partial D$ is given by $(\Lambda - \Lambda_0)f$. Therefore, consider the difference of the solutions $u-u_0$ in $H^1( D \setminus \overline{\Omega})$, which satisfies $$\begin{aligned} \Delta (u-u_0 )=0 \quad &\text{in} \quad D \setminus \overline{\Omega} \label{difference1} \\ (u-u_0) \big|_{\partial D}= 0 \quad &\text{and} \quad \partial_{\nu} (u-u_0 ) \big|_{\partial \Omega} \in H^{-1/2}(\partial \Omega). \label{difference}\end{aligned}$$ Now, motivated by equations -, we let $w \in H^1( D \setminus \overline{\Omega})$ be the unique solution to $$\begin{aligned} \Delta w=0 \quad &\text{in} \quad D \setminus \overline{\Omega} \quad \text{with} \quad w \big|_{\partial D}= 0 \quad \text{and} \quad \partial_{\nu} w \big|_{\partial \Omega} =h \label{equ-w}\end{aligned}$$ for a given $h \in H^{-1/2}(\partial \Omega)$. Again, using a variational method one can show that equation is well-posed, so we can define via the Trace Theorem the bounded linear operator $$G: H^{-1/2}(\partial \Omega) \longmapsto H^{-1/2}(\partial D) \quad \text{ given by } \quad Gh = \partial_{\nu} w \big|_{\partial D}$$ where $w$ is the unique solution to equation . Notice that for $h= \partial_{\nu} (u-u_0 ) \big|_{\partial \Omega}$ we have $Gh=(\Lambda - \Lambda_0)f$. We now define the bounded linear operators $H^{1/2}(\partial D) \longmapsto H^{-1/2}(\partial \Omega)$ such that $$L f = \partial_{\nu} u\big|_{\partial \Omega} \quad \text{ and } \quad L_0 f = \partial_{\nu} u_0 \big|_{\partial \Omega}$$ where $u$ and $u_0$ are the solutions of and respectively. This gives the following decomposition. 
\[decomp\] The difference of the DtN operators $$(\Lambda - \Lambda_0) : H^{1/2}(\partial D) \longmapsto H^{-1/2}(\partial D)$$ associated with and has the factorization $ (\Lambda - \Lambda_0) = G(L-L_0)$. We now analyze the operators used to factorize the difference of the DtN operators. We begin with the operator $$G: H^{-1/2}(\partial \Omega) \longmapsto H^{-1/2}(\partial D).$$ The following results establish the properties of this operator. \[G\] The operator $G: H^{-1/2}(\partial \Omega) \longmapsto H^{-1/2}(\partial D)$ [given by]{} $Gh = \partial_{\nu} w \big|_{\partial D}$, where $w$ is the unique solution to equation , is compact and injective. We begin by proving the compactness. Notice that by interior elliptic regularity (see, e.g., [@evans]) we have that for any $h \in H^{-1/2}(\partial \Omega)$ the solution to is in $H^2_{loc}(D \setminus \overline{\Omega}).$ Now let $D_0$ be a domain with $C^2$-boundary $\partial D_0$ such that $\overline{\Omega} \subset D_0$ and $ \overline{D}_0 \subset D$, and let $w$ be the solution to , which gives that the trace of $w$ on $\partial D_0$ is in $H^{3/2}(\partial D_0)$. This implies that $w$ satisfies $$\Delta w=0 \quad \text{in} \quad D \setminus \overline{D}_0 \quad \text{with} \quad w \big|_{\partial D}= 0 \quad \text{and} \quad w \big|_{\partial D_0} \in H^{3/2}(\partial D_0)$$ and by global elliptic regularity [@evans] we have that $w \in H^2(D \setminus \overline{D}_0)$. The Trace Theorem implies that $Gh=\partial_{\nu} w \big|_{\partial D} \in H^{1/2}(\partial D)$, and the compact embedding of $H^{1/2}(\partial D)$ into $H^{-1/2}(\partial D)$ proves the claim. Now we prove the injectivity. Let $h \in \text{Null} (G)$ and let $w$ be the solution to with boundary data $h$. 
Therefore, we have that $$\Delta w=0 \quad \text{in} \quad D \setminus \overline{\Omega} \quad \text{and} \quad w = \partial_{\nu} w =0 \quad \text{on} \quad \partial D.$$ By appealing to unique continuation we can conclude that $w=0$ in $D \setminus \overline{\Omega}$, and therefore the Trace Theorem gives that $h=0$, proving the claim. To analyze the operator $G$ further we now compute its transpose (dual) operator $G^{\top}$. To this end, let $\langle \cdot \, ,\cdot \rangle_{\Gamma}$ denote the dual pairing between $H^{1/2}(\Gamma)$ and $H^{-1/2}(\Gamma)$ (with $L^2(\Gamma)$ as the pivot space); by definition $$\langle \, G^{\top} \varphi , h \, \rangle_{\partial \Omega} = \langle \, \varphi , G h \, \rangle_{\partial D} = \int\limits_{\partial D} {\varphi} \, \partial_{\nu} w \, \text{d}s \quad \text{for all} \quad \varphi \in H^{1/2}(\partial D)\, \, \, \text{and} \, \, \, h \in H^{-1/2}(\partial \Omega).$$ Now take a lifting of the function $\varphi \in H^{1/2}(\partial D)$ such that $v \in H^1\big(D \setminus \overline{\Omega}\, \big)$ is the unique solution to $$\begin{aligned} \Delta v=0 \quad &\text{in} \quad D \setminus \overline{\Omega} \quad \text{ with } \quad v \big|_{\partial D}= \varphi \quad \text{and} \quad \partial_{\nu} v \big|_{\partial \Omega} =0. \label{equ-v}\end{aligned}$$ Applying Green’s 2nd Theorem and using the boundary value problems and gives $$\langle G^{\top} \varphi , \, h \, \rangle_{\partial \Omega} =\int\limits_{\partial D} {\varphi} \, \partial_{\nu} w \, \text{d}s = - \int\limits_{\partial \Omega} {v} \, h \, \text{d}s$$ and we can conclude that $$G^{\top} : H^{1/2}(\partial D) \longmapsto H^{1/2}(\partial \Omega) \quad \text{is given by} \quad G^{\top} \varphi = - v \big|_{\partial \Omega}$$ where $v$ is the unique solution to equation . Just as in the proof of Theorem \[G\], one can see from the unique continuation principle that $G^{\top}$ is injective, and therefore, since (see, e.g., 
[@McLean]) $$\overline{ \text{Range}(G) } = ^{a} \hspace{-0.025in}\text{Null}\big(G^{\top} \big)$$ (here $^a$ denotes the annihilator), we have the following result. The operator $G: H^{-1/2}(\partial \Omega) \longmapsto H^{-1/2}(\partial D)$ [given by]{} $Gh = \partial_{\nu} w \big|_{\partial D}$, where $w$ is the unique solution to equation , has dense range. Now we turn our attention to the injectivity of the operator $(L- L_0 )$. \[assume\] Assume that for any $g \in H^{-1/2}(\partial \Omega)$ the problem $$\Delta \phi = 0 \quad \text{in} \quad \Omega \ \quad \text{and} \quad \partial_{\nu} \phi + \gamma \phi = g \quad \text{on} \quad \partial \Omega$$ has a unique solution $\phi \in H^1(\Omega)$ depending continuously on the boundary data. Here $\nu$ is the unit inward normal to the boundary $\partial \Omega$. [ Since $\nu$ is the inward pointing normal, uniqueness is not automatic, since the boundary condition has the wrong sign for the positive impedance parameter. ]{} Note that Assumption \[assume\] is a common feature of sampling methods. In [@lsmaniso], where the linear sampling method is used to reconstruct anisotropic obstacles using time-harmonic acoustic measurements, one must assume that the corresponding wave number is not a so-called interior transmission eigenvalue of the obstacle. In our case Assumption \[assume\] says that $\lambda=1$ is not an associated weighted Steklov eigenvalue, i.e. a value $\lambda \in {\mathbb{R}}$ for which there is a nontrivial solution to $$\Delta \phi = 0 \quad \text{in} \quad \Omega \ \quad \text{and} \quad \partial_{\nu} \phi + \lambda \, \gamma(x) \phi =0 \quad \text{on} \quad \partial \Omega.$$ Since the set of eigenvalues is discrete, $\lambda=1$ is almost surely not an eigenvalue for a given domain $\Omega$ and impedance $\gamma(x)$. With Assumption \[assume\] we now consider the injectivity of the operator $(L- L_0 )$. 
The operator $$(L- L_0 ): H^{1/2}(\partial D) \longmapsto H^{-1/2}(\partial \Omega) \quad \text{given by} \quad (L - L_0) f = \partial_{\nu}(u- u_0) \big|_{\partial \Omega}$$ where $u$ and $u_0$ are the solutions of and , is injective. To prove the injectivity we split the proof into two parts for the two boundary conditions on $\partial \Omega$ under consideration. First assume that $\mathcal{B}$ is the Dirichlet boundary condition on $\partial \Omega$ and let $f \in \text{Null} (L-L_0)$; then by definition we have that $\partial_{\nu} (u-u_0) =0$ on $ {\partial \Omega}$, where $u$ and $u_0$ are the solutions of and respectively. This implies that the difference $u-u_0$ solves with boundary data $h=0$, and by well-posedness we conclude that $u=u_0$ in $D \setminus \overline{\Omega}$. We now have $$\Delta u=0 \quad \text{in} \quad \Omega \quad \text{and} \quad u = 0 \quad \text{on} \quad \partial \Omega$$ from which it follows that $u=0$ in $\Omega$. By unique continuation we have $u=0$ in $D$, which gives that $f=0$. Similarly, for the impedance boundary condition on $\partial \Omega$, if we let $f \in \text{Null} (L-L_0)$ then we can conclude that $u=u_0$ in $D \setminus \overline{\Omega}$. This implies $$\Delta u=0 \quad \text{in} \quad \Omega \quad \text{and} \quad \partial_{\nu} u + \gamma(x) u = 0 \quad \text{on} \quad \partial \Omega$$ which implies that $u=0$ in $\Omega$ by Assumption \[assume\]. By again appealing to unique continuation we conclude that $f=0$, proving the claim. Recall that the difference of the DtN operators has the decomposition $(\Lambda - \Lambda_0) = G(L-L_0)$. Since $L$ and $L_0$ are both bounded linear operators, by appealing to the previous results we have the following. \[compact/injective\] The difference of the DtN operators $$(\Lambda - \Lambda_0) : H^{1/2}(\partial D) \longmapsto H^{-1/2}(\partial D)$$ is compact and injective. We now derive a sampling method to solve our inverse problem. 
Sampling methods often connect the support of the region of interest to an ill-posed problem involving a singular solution of the background equation. The idea is to show that, due to the singularity, a particular equation is “not” solvable unless the singularity is contained in the region of interest. To this end, we prove the following results to derive our inversion method. \[dense-range\] The difference of the DtN operators $$(\Lambda - \Lambda_0) : H^{1/2}(\partial D) \longmapsto H^{-1/2}(\partial D)$$ is symmetric (i.e. equal to its transpose) and therefore has dense range. To begin, we let $f_j \in H^{1/2}(\partial D)$, where $u^{(j)} \in H^1(D)$ and $u_0^{(j)} \in H^1(D\setminus \overline{\Omega})$ are the unique solutions to and , respectively, for $j=1,2$. We now consider $$\big\langle f_1 , (\Lambda - \Lambda_0) f_2 \big\rangle_{\partial D}$$ where again $\langle \cdot \, ,\cdot \rangle_{ \partial D}$ denotes the dual pairing between $H^{1/2}(\partial D)$ and $H^{-1/2}(\partial D)$. 
By definition we have that $$\begin{aligned} \big\langle f_1 , (\Lambda - \Lambda_0) f_2 \big\rangle_{\partial D}&= \int\limits_{\partial D} f_1\, \partial_{\nu}u^{(2)} - f_1 \, \partial_{\nu}u^{(2)}_0 \, \text{d}s= \int\limits_{\partial D} u^{(1)} \partial_{\nu} u^{(2)} \, \text{d}s -\int\limits_{\partial D} u_0^{(1)} \partial_{\nu} u_0^{(2)} \, \text{d}s. \end{aligned}$$ Now, by Green’s 1st Theorem, $$\begin{aligned} \big\langle f_1 , (\Lambda - \Lambda_0) f_2 \big\rangle_{\partial D}= \int\limits_{ D} {\nabla}u^{(1)} \cdot {\nabla}u^{(2)} \, \text{d}x - \int\limits_{ D \setminus \overline{\Omega} } {\nabla}u_0^{(1)} \cdot {\nabla}u_0^{(2)} \, \text{d}x + \int\limits_{\partial \Omega } u_0^{(1)} \partial_{\nu} u_0^{(2)} \, \text{d}s.\end{aligned}$$ For the Dirichlet boundary condition on $\partial \Omega$ we obtain $$\big\langle f_1 , (\Lambda - \Lambda_0) f_2 \big\rangle_{\partial D}= \int\limits_{ D} {\nabla}u^{(1)} \cdot {\nabla}u^{(2)} \, \text{d}x - \int\limits_{ D \setminus \overline{\Omega} } {\nabla}u_0^{(1)} \cdot {\nabla}u_0^{(2)} \, \text{d}x$$ and for the impedance boundary condition we conclude that $$\big\langle f_1 , (\Lambda - \Lambda_0) f_2 \big\rangle_{\partial D}= \int\limits_{ D} {\nabla}u^{(1)} \cdot {\nabla}u^{(2)} \, \text{d}x - \int\limits_{ D \setminus \overline{\Omega} } {\nabla}u_0^{(1)} \cdot {\nabla}u_0^{(2)} \, \text{d}x - \int\limits_{\partial \Omega } \gamma(x) \, u_0^{(1)} u_0^{(2)} \, \text{d}s.$$ Therefore, the right-hand sides of the above expressions are symmetric bilinear forms, and hence $(\Lambda - \Lambda_0)$ is symmetric. By Corollary \[compact/injective\] we can conclude that $(\Lambda - \Lambda_0)$ has dense range. 
We define $\mathbb{G} (x,z)$ as the Green’s function for $D$, which is the solution to $$\Delta \mathbb{G} (\cdot \, , \, z) =- \delta(\cdot - z) \quad \text{in} \quad D \quad \text{and} \quad \mathbb{G} (\cdot \, , z) =0 \quad \text{on} \quad \partial D.$$ We now connect the support of the inclusion $\Omega$ to the range of the operator $G$. \[range\] $\partial_{\nu} \mathbb{G} (\cdot \, , z) \big|_{\partial D} \in \text{Range}(G)$ if and only if $z \in \Omega$. Notice that for $z \in \Omega$, $\mathbb{G}( \cdot \, ,z) \in H^1(D \setminus \overline{\Omega})$ is harmonic in the annular region and satisfies with $h_z=\partial_{\nu} \mathbb{G} (\cdot \, , z)$ on $\partial \Omega$. It is clear that $Gh_z=\partial_{\nu} \mathbb{G} (\cdot \, , z) \big|_{\partial D}$. Now, assume that $\partial_{\nu} \mathbb{G} (\cdot \, , z) \big|_{\partial D} \in \text{Range}(G)$ for some $z \in D \setminus \overline{\Omega}$. Then we can conclude that there is a $w_z$ solving such that $$\partial_{\nu} w_z=\partial_{\nu} \mathbb{G} (\cdot \, , z) \quad \text{on} \quad {\partial D}.$$ We now define $U_z=w_z-\mathbb{G}(\cdot \, , z)$, which satisfies $$\Delta U_z =0 \quad \text{in} \quad D \setminus \big(\overline{\Omega} \cup \{z\} \big) \quad \text{with} \quad U_z =\partial_{\nu} U_z=0 \quad \text{on} \quad \partial D.$$ Holmgren’s Theorem implies that $w_z= \mathbb{G} (\cdot \, , z)$ in $D \setminus \big(\overline{\Omega} \cup \{z\} \big)$, but interior elliptic regularity gives that $w_z$ is bounded as $x \rightarrow z$, whereas $|\mathbb{G} (x,z)| \rightarrow \infty$ as $x \rightarrow z$, proving the claim by contradiction. Next we turn our attention to showing that the ‘linear’ sampling method can be applied as an inversion method for our inverse problem. The linear sampling method was first derived in [@CK] as a way to reconstruct impenetrable obstacles using time-harmonic acoustic waves. We now show that a sampling algorithm can be used to reconstruct the inclusion. 
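The sampling test uses the trace $\partial_{\nu} \mathbb{G}(\cdot\,, z)\big|_{\partial D}$ of this Green's function. When $D$ is the unit disk (an illustrative assumption; for a general $C^2$ domain $\mathbb{G}$ must be computed numerically), $\mathbb{G}$ is available in closed form by the method of images, which the following sketch evaluates:

```python
import numpy as np

def green_disk(x, z):
    """Dirichlet Green's function G(x, z) of the unit disk in 2D, via
    the method of images: G(x, z) = Phi(x, z) - Phi(x, z*) + (1/2π) ln|z|
    with image point z* = z/|z|^2 and Phi(x, y) = -(1/2π) ln|x - y|."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    zstar = z / np.dot(z, z)                     # image point z* = z/|z|^2
    r = np.linalg.norm(x - z)                    # distance to the source
    r_img = np.linalg.norm(x - zstar) * np.linalg.norm(z)
    # G = -(1/2π) ln r + (1/2π) ln r_img, which vanishes when |x| = 1
    return (np.log(r_img) - np.log(r)) / (2.0 * np.pi)
```

On the boundary $|x| = 1$ one checks that $|x - z^*|\,|z| = |x - z|$, so the returned value vanishes there, as the defining boundary condition requires.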
From the analysis given in this section we now have all we need to derive a sampling method for reconstructing $\Omega$. To this end, consider the ill-posed ‘current-gap’ equation $$(\Lambda - \Lambda_0) f_z = \phi_z \quad \text{for} \quad z \in D \quad \text{where} \quad \phi_z = \partial_{\nu} \mathbb{G} (\cdot \, , z) \big|_{\partial D}. \label{PGE}$$ By Theorem \[dense-range\], for all $z \in D$ there exists an approximating sequence $\big\{f_{z , {\varepsilon}} \big\}_{{\varepsilon}>0}$ of solutions to where $$\big\| (\Lambda - \Lambda_0) f_{z , {\varepsilon}} - \phi_z \big\|_{H^{-1/2}(\partial D)} \longrightarrow 0 \quad \text{as } {\varepsilon}\rightarrow 0.$$ Now assume that $\| f_{z , {\varepsilon}} \|_{H^{1/2}(\partial D)}$ is bounded as ${\varepsilon}\rightarrow 0$. Since $f_{z , {\varepsilon}} \in H^{1/2}(\partial D)$ is a bounded sequence, there is a weakly convergent subsequence (still denoted by ${\varepsilon}$) such that $f_{z , {\varepsilon}} {\rightharpoonup}f_{z,0}$ as ${\varepsilon}\to 0$. Since $(\Lambda - \Lambda_0)$ is compact we can conclude that $$(\Lambda - \Lambda_0) f_{z , {\varepsilon}} \longrightarrow (\Lambda - \Lambda_0) f_{z,0} \quad \text{and} \quad (\Lambda - \Lambda_0) f_{z , {\varepsilon}} \longrightarrow \phi_z \quad \text{as} \quad {\varepsilon}\rightarrow 0 \quad \text{ in} \,\,\,H^{-1/2}(\partial D).$$ By the decomposition given in Theorem \[decomp\] this implies that $\phi_z \in$ Range$(G)$, which contradicts Theorem \[range\] if $z \notin \Omega$. From the above analysis we have derived a sampling method for recovering the unknown inclusion $\Omega$ by constructing approximate solutions to . \[LSM\] Let $\phi_z = \partial_{\nu} \mathbb{G} (\cdot \, , z) \big|_{\partial D}$. 
Then for any sequence $\big\{f_{z , {\varepsilon}} \big\}_{{\varepsilon}>0} \in H^{1/2}(\partial D)$ of approximate solutions of such that $$\big\| (\Lambda - \Lambda_0) f_{z , {\varepsilon}} - \phi_z \big\|_{H^{-1/2}(\partial D)} \longrightarrow 0 \quad \text{as } {\varepsilon}\rightarrow 0$$ we have $\| f_{z , {\varepsilon}} \|_{H^{1/2}(\partial D)} \longrightarrow \infty$ as ${\varepsilon}\rightarrow 0$ for all $z \notin \Omega$. Notice that Theorem \[LSM\] says that equation is not “approximately solvable” provided that $z \notin \Omega$, i.e. there is no sequence of approximate solutions whose (weak) limit satisfies . Since we assume that $(\Lambda - \Lambda_0)$ and $\phi_z$ are known, we can use a regularization strategy to find an approximate solution to the current-gap equation . Also notice that it does not matter whether one has the Dirichlet or impedance boundary condition on $\partial \Omega$: Theorem \[LSM\] is valid in either case. One can easily modify the analysis in this section to show that Theorem \[LSM\] is also valid for the perfectly insulated inclusion, where $\mathcal{B}(u_0)=\partial_\nu u_0$. This shows that the sampling method is robust, in the sense that it can be applied for multiple boundary conditions. The inversion algorithm for reconstructing the boundary $\partial \Omega$ is as follows. 1. Choose a grid of points in $D$. 2. For each grid point, ‘solve’ via a regularization strategy. 3. Plot $W(z)= \| f_{z , {\varepsilon}} \|^{-1}_{H^{1/2}(\partial D)}$, where $ f_{z , {\varepsilon}}$ is the regularized solution to . 4. Then the set ${\partial \Omega_\delta}=\big\{ z \in D \, \, : \, \, W(z) = \delta \ll1 \big\}$ should approximate $\partial \Omega$. One important theoretical question is whether the regularized solutions to satisfy Theorem \[LSM\], i.e. become unbounded as the regularization parameter tends to zero for $z \notin \Omega$. 
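Steps 2 and 3 of the algorithm above can be sketched in a discrete setting, with Tikhonov regularization as the regularization strategy. Here the matrix $A$ is a hypothetical discretization of $(\Lambda - \Lambda_0)$, `phi_z` a sampled version of $\partial_{\nu}\mathbb{G}(\cdot\,,z)\big|_{\partial D}$, and the Euclidean norm stands in for the $H^{1/2}(\partial D)$ norm, so this is only a sketch of the indicator, not a full Sobolev-norm implementation:

```python
import numpy as np

def sampling_indicator(A, phi_z, alpha=1e-8):
    """Tikhonov-regularized solve of the discrete current-gap equation
    A f_z = phi_z, returning the indicator W(z) = 1 / ||f_z||.
    Small W(z) signals that phi_z is 'hard to reach' through A,
    i.e. the sampling point z lies outside the inclusion."""
    n = A.shape[1]
    # normal equations of the Tikhonov functional ||A f - phi||^2 + alpha ||f||^2
    f_z = np.linalg.solve(alpha * np.eye(n) + A.T @ A, A.T @ phi_z)
    return 1.0 / np.linalg.norm(f_z)
```

A synthetic check mimics the compactness of $(\Lambda - \Lambda_0)$ with rapidly decaying singular values: a right-hand side well inside the numerical range gives a much larger indicator than one aligned with the smallest singular direction.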
An alternative sampling method is the factorization method (see [@kirschbook]), where one proves that the range of a known operator defined by the measurements uniquely determines the region $\Omega$, giving a simple numerical inversion algorithm. In [@EIT-inclusion] the factorization method has been used to reconstruct penetrable inclusions from electrostatic Cauchy data. In [@MUSICImp] the MUSIC algorithm, which can be seen as a discrete version of the factorization method, was derived to detect corrosion of a known interior boundary. For many inverse boundary value problems for elliptic equations the factorization method has been used to validate the linear sampling method using the eigenvalue decomposition of the measurements; see [@arens; @LSM-AL]. Integral Equations for the Inverse Impedance Problem {#BIE} ===================================================== In this section, we derive a non-iterative method for reconstructing the impedance parameter $\gamma (x)$. Even though we focus on the case of the impedance boundary condition, the reconstruction method presented in this section also works for determining whether the inclusion is perfectly conducting/insulated. To this end, we consider the inverse problem of reconstructing the boundary impedance from the knowledge of $\Lambda_0 (\gamma)$. We assume that the boundary $\partial \Omega$ is known and that the DtN mapping, which maps $u_0 = f$ on $\partial D$ to $\partial_{\nu} u_0 = g$ on $\partial D$, is given on some subset of $H^{1/2}(\partial D)$. The idea is to use the knowledge of the Cauchy data on $\partial D$ to recover the corresponding Cauchy data on $\partial \Omega$. Once we have the Cauchy data of $u_0$ on $\partial \Omega$, the impedance parameter can be determined by solving $\partial_{\nu} u_0 +\gamma(x) u_0=0$ on $\partial \Omega$. We begin this section by proving a uniqueness result for the inverse problem. 
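Before turning to uniqueness, the final recovery step just described (solving $\partial_{\nu} u_0 + \gamma(x) u_0 = 0$ on $\partial \Omega$ for $\gamma$) can be sketched numerically. The sketch assumes the Cauchy pairs on $\partial \Omega$ have already been reconstructed and are sampled at boundary nodes; with several pairs, a pointwise least-squares combination guards against nodes where a single $u_0$ is small:

```python
import numpy as np

def recover_impedance(u_traces, dnu_traces, reg=1e-12):
    """Pointwise least-squares recovery of γ on ∂Ω from reconstructed
    Cauchy pairs (u0, ∂_ν u0) sampled at boundary nodes.  The impedance
    condition ∂_ν u0 + γ u0 = 0 gives, at each node,
        γ = -Σ_j u^(j) ∂_ν u^(j) / Σ_j (u^(j))^2,
    summing over the available Cauchy pairs j.  reg avoids division by
    zero where all traces vanish; a sketch, not the paper's boundary
    integral equation machinery."""
    U = np.atleast_2d(np.asarray(u_traces, float))     # shape (pairs, nodes)
    dU = np.atleast_2d(np.asarray(dnu_traces, float))
    return -np.sum(U * dU, axis=0) / (np.sum(U * U, axis=0) + reg)
```

With synthetic traces manufactured from a known $\gamma$, the formula returns that $\gamma$ exactly, which serves as a basic consistency check.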
Since we assume that the DtN mapping is known, we wish to prove that the [*inverse impedance*]{} and [*inverse shape*]{} problems admit a unique solution. Since we assume that we have an infinite data set, we should be able to prove uniqueness under weaker regularity assumptions than are needed in [@unique-imp]. To do so, we first need the following theorem. \[dense-set\] The set $$\, {\mathcal U} := \Big\{ u_0 \big|_{\partial \Omega} \, \, : \, \, u_0 \in H^1( D \setminus \overline{\Omega}) \, \, \text{ solving } \eqref{defective} \, \, \text{ for all } \, \, f \in H^{1/2}(\partial D) \Big\}$$ is a dense subspace of $L^2(\partial \Omega)$. To prove the claim, let $\phi \in L^2(\partial \Omega)$ and assume that $$\int\limits_{\partial \Omega} u_0 \phi \, \text{d}s = 0 \quad \text{ for all } \quad f \in H^{1/2}(\partial D).$$ Now let $v \in H^1(D \setminus \overline{\Omega})$ be the unique solution to $$\Delta v=0 \quad \text{in} \quad D\setminus \overline{\Omega} \quad \text{ with } \quad v=0 \,\, \text{ on} \quad \partial D \quad \text{ and } \quad \partial_{\nu} v+ \gamma v= \phi \,\, \text{ on } \quad \partial \Omega.$$ Using Green’s 2nd Theorem we have that $$\begin{aligned} 0& = \int\limits_{D\setminus \overline{\Omega} } u_0 \Delta v - v\Delta u_0 \, \text{d}x = \int\limits_{\partial D} u_0 \partial_\nu v - v \partial_\nu u_0 \, \text{d}s + \int\limits_{\partial \Omega} u_0 \partial_\nu v - v \partial_\nu u_0 \, \text{d}s.\end{aligned}$$ Appealing to the boundary conditions for both $u_0$ and $v$ gives $$\begin{aligned} -\int\limits_{\partial D} f \partial_\nu v \, \text{d}s & = \int\limits_{\partial \Omega} u_0 ( \partial_\nu v + \gamma v) \, \text{d}s = \int\limits_{\partial \Omega} u_0 \phi \, \text{d}s = 0. \end{aligned}$$ This implies that $\partial_\nu v=0$ on $\partial D$, since it is orthogonal to all $f \in H^{1/2}(\partial D)$. 
Since $v$ has zero Cauchy data on $\partial D$, we have that $v=0$ in $D \setminus \overline{\Omega}$, and the Trace Theorem gives that $\phi = 0$, proving the claim. Notice that Theorem \[dense-set\] holds true for any dense subset of $H^{1/2}(\partial D)$. We can now prove that the impedance is uniquely determined by the knowledge of the DtN mapping on any dense subset of $H^{1/2}(\partial D)$. \[unique\] The DtN mapping $\Lambda_0 : H^{1/2}(\partial D) \longmapsto H^{-1/2}(\partial D)$ uniquely determines the impedance parameter $\gamma (x) \in L^{\infty}_{+}(\partial \Omega)$. Assume that $\Lambda_{0 }(\gamma_1) = \Lambda_{0 } ( \gamma_2)$ and let $u_0^{(j)}$ be the solution to with impedance $\gamma_j$ for $j=1,2$. Then $u_0^{(1)}$ and $u_0^{(2)}$ have the same Cauchy data on $\partial D$, which implies that $u_0 = u_0^{(1)}=u_0^{(2)}$ in $D \setminus \overline{\Omega}$ by unique continuation. We can conclude that $$\partial_{\nu} u_0 + \gamma_1 u_0 =0 \quad \text{and} \quad \partial_{\nu} u_0 + \gamma_2 u_0 =0 \quad \text{ on } \, \, \partial \Omega.$$ Subtracting the impedance conditions implies that $(\gamma_1 - \gamma_2) u_0 = 0$ on $\partial \Omega$ for all $f \in H^{1/2}(\partial D)$. We conclude that $(\gamma_1 - \gamma_2)$ is orthogonal to the set $\mathcal{U}$ and is therefore zero a.e. on $\partial \Omega$, proving the claim. The above proof is carried out in a variational setting, so the uniqueness holds for the case where the Laplacian is replaced with ${\nabla}\cdot A(x) {\nabla}$, where the symmetric coefficient matrix satisfies the same assumptions as in Remark \[by-the-way\]. We now turn our attention to deriving an inversion method for determining $\gamma (x)$ from the knowledge of the DtN mapping $\Lambda_0$ and $\partial \Omega$. Our inversion method requires us to write the electrostatic potential $u_0$ in terms of boundary integral operators. To this end, we adopt the notation $\partial D= \Gamma_{\text{m}}$ (i.e. 
the measurement boundary) and $\partial \Omega= \Gamma_{\text{i}}$ (i.e. the impedance boundary). Therefore, since both boundaries are assumed to be $C^2$, we define $$\mathcal{D}_{\text{m}}: H^{1/2}(\Gamma_{\text{m}}) \mapsto H^1(D) \cup H^1_{loc}({\mathbb{R}}^d \setminus \overline{D}) \quad \text{ and } \quad \widetilde{ \mathcal{D}}_{\text{i}}: H^{1/2}(\Gamma_{\text{i}}) \mapsto H^1(\Omega) \cup H^1_{loc}({\mathbb{R}}^d \setminus \overline{\Omega})$$ by the boundary integral operators $$(\mathcal{D}_{\text{m}}\, \varphi)(x) = 2\int\limits_{\Gamma_{\text{m}} } \varphi (y) \partial_{\nu(y)} \Phi(x,y) \, \text{d}s_y \quad \text{ for } \, \, x \in {\mathbb{R}}^d \setminus \Gamma_{\text{m}}$$ and $$(\widetilde{\mathcal{D}}_{\text{i}} \, \psi)(x) = 2 \int\limits_{\Gamma_{\text{i}} } \psi (y) \big[ \partial_{\nu(y)} \Phi(x,y) + |x|^{2-d} \big]\, \text{d}s_y \quad \text{ for } \, \, x \in {\mathbb{R}}^d \setminus \Gamma_{\text{i}}.$$ Recall that $\Phi(x,y)$ is the fundamental solution to Laplace’s equation in ${\mathbb{R}}^d$ given by $$\Phi(x,y)= - \frac{ 1 }{2 \pi } \ln | x-y | \, \text{ in } \, {\mathbb{R}}^2 \quad \text{and} \quad \Phi(x,y)=\frac{1}{4 \pi} \frac{1}{| x-y |} \, \text{ in } \, {\mathbb{R}}^3.$$ We refer to [@int-equ-book; @McLean] for the mapping properties and analysis of the above boundary integral operators. Since the double layer boundary integral operators satisfy Laplace’s equation in $D \setminus \overline{\Omega}$, we make the ansatz that $$\begin{aligned} u_0 = (\mathcal{D}_{\text{m}} \, \varphi )(x) + (\widetilde{\mathcal{D}}_{\text{i}} \, \psi )(x) \quad \text{ for } \, \, x\in D \setminus \overline{\Omega}. 
\label{int-rep}\end{aligned}$$ Using the jump relations for the double layer potentials in we have that $$\begin{aligned} ( I- K_{\text{mm}} ) \, \varphi - \widetilde{K}_{\text{im}} \, \psi&= -f \quad \text{ on } \, \, \Gamma_\text{m} \label{BIE1}\\ K_{\text{mi}} \, \varphi + (I+ \widetilde{K}_{\text{ii}}) \, \psi &= u_0 \big|_{\Gamma_\text{i}} \quad \text{ on } \, \, \Gamma_\text{i}\label{BIE2}\end{aligned}$$ where $$K_{\text{pq}} \varphi = (\mathcal{D}_{\text{p}} \varphi )(x) \quad \text{and} \quad \widetilde{K}_{\text{pq}} \psi= (\widetilde{\mathcal{D}}_{\text{p}} \psi )(x) \quad \text{ for } \, \, x \in \Gamma_\text{q}$$ with the indices $\text{p,q}=$m,i. Notice that we have used that $u_0=f$ on $\Gamma_{\text{m}}$ in equation . In order to proceed we must show that the system of integral equations in - is well-posed. To this end, define the operator $${\mathcal A} = \begin{bmatrix} ( I- K_{\text{mm}} ) & - \widetilde{K}_{\text{im}} \\ K_{\text{mi}} & (I+ \widetilde{K}_{\text{ii}}) \end{bmatrix} : H^{1/2}(\Gamma_{\text{m}}) \times H^{1/2}(\Gamma_{\text{i}}) \mapsto H^{1/2}(\Gamma_{\text{m}}) \times H^{1/2}(\Gamma_{\text{i}})$$ which represents the integral operator associated with -. The operator ${\mathcal A} : H^{1/2}(\Gamma_{\text{m}}) \times H^{1/2}(\Gamma_{\text{i}}) \mapsto H^{1/2}(\Gamma_{\text{m}}) \times H^{1/2}(\Gamma_{\text{i}})$ has a bounded inverse. To prove the claim we show that the operator satisfies the Fredholm alternative and is injective. We begin by proving the injectivity of ${\mathcal A}$. To this end, assume that $(\varphi_1 , \varphi_2)^{\top} \in \text{Null}({\mathcal A})$, which implies that $w(x)=(\mathcal{D}_{\text{m}} \varphi_{\text{1}} )(x) + (\widetilde{\mathcal{D}}_{\text{i}} \varphi_{\text{2}} )(x)$ satisfies Laplace’s equation in $D \setminus \overline{\Omega}$ with zero Dirichlet trace on the boundary. Uniqueness for Laplace’s equation with zero Dirichlet data implies that $w=0$ in $D \setminus \overline{\Omega}$. 
By the continuity of the normal derivative of the double layer potential across $\Gamma_{\text{m}}$, we have that $w$ satisfies Laplace’s equation in ${\mathbb{R}}^d \setminus \overline{D}$ with zero Neumann trace on $\Gamma_{\text{m}}$, and uniqueness gives that $w=0$ in ${\mathbb{R}}^d \setminus \overline{D}$. Now using the jump relation for the double layer potential we conclude that $\varphi_1 = 0$. This gives that $w(x)=(\widetilde{\mathcal{D}}_{\text{i}} \varphi_{\text{2}} )(x)$, and since $w$ has zero exterior Dirichlet trace on $\Gamma_{\text{i}}$, we obtain that $(I+ \widetilde{K}_{\text{ii}}) \varphi_{\text{2}}=0$. Since the operator $(I+ \widetilde{K}_{\text{ii}})$ is injective, we have that $\varphi_{\text{2}}=0$, proving the injectivity. We now show that ${\mathcal A}$ is a compact perturbation of an invertible operator. To this end, we notice that $${\mathcal A} = \begin{bmatrix} ( I- K_{\text{mm}} ) & 0 \\ 0 & (I+ \widetilde{K}_{\text{ii}}) \end{bmatrix} + \begin{bmatrix} 0 & - \widetilde{K}_{\text{im}} \\ K_{\text{mi}} & 0 \end{bmatrix}.$$ It is well known (see [@int-equ-book]) that both $( I- K_{\text{mm}} )$ and $(I+ \widetilde{K}_{\text{ii}})$ are invertible from $H^{1/2}(\Gamma_{p})$ to itself for $p=$m,i respectively. Next, we show that the operators $$K_{\text{mi}}: H^{1/2}(\Gamma_{\text{m}}) \longmapsto H^{1/2}(\Gamma_{\text{i}}) \quad \text{ and } \quad \widetilde{K}_{\text{im}} : H^{1/2}(\Gamma_{\text{i}}) \longmapsto H^{1/2}(\Gamma_{\text{m}})$$ are compact. Let $v= ({\mathcal{D}}_{\text{m}} \varphi_{\text{m}} )(x) $ for some $\varphi_{\text{m}} \in H^{1/2}(\Gamma_{\text{m}})$, which solves Laplace’s equation in $D$ and is therefore analytic in the interior of $D$. We can conclude that $v\big|_{\Gamma_\text{i}} =K_{\text{mi}} \varphi_{\text{m}} \in H^{3/2}(\Gamma_{\text{i}})$ and the compactness follows from the compact embedding of $H^{3/2}$ into $H^{1/2}$. 
A similar argument proves the compactness of the operator $\widetilde{K}_{\text{im}} : H^{1/2}(\Gamma_{\text{i}}) \mapsto H^{1/2}(\Gamma_{\text{m}})$, which proves the claim since ${\mathcal A} $ is injective and a compact perturbation of an invertible operator. Recall that $u_0 \big|_{\Gamma_\text{i}}$ is still unknown, so we use that $\partial_{\nu} u_0 = g$ on $\Gamma_\text{m}$ to determine the Dirichlet value of the electrostatic potential $u_0$ on $\Gamma_\text{i}$. Solving - for $(\varphi , \psi )^{\top}$ in terms of $u_0 \big|_{\Gamma_\text{i}}$, we have that is a representation of $u_0$ in terms of its Dirichlet data on $\Gamma_\text{i}$. Taking the normal derivative of on $\Gamma_\text{m}$ gives that $$\begin{aligned} g = T_{\text{mm}}\, \varphi + \widetilde{T}_{\text{im}} \, \psi \quad \text{ for } \, \, x \in \Gamma_\text{m} \label{eq-from-data}\end{aligned}$$ where the operators are given by $$T_{\text{mm}} \, \varphi = \partial_{\nu(x)} (\mathcal{D}_{\text{m}}\, \varphi )(x) \quad \text{and } \quad \widetilde{T}_{\text{im}} \, \psi = \partial_{\nu(x)} (\widetilde{\mathcal{D}}_{\text{i}} \, \psi )(x) \quad \text{ for } x \in \Gamma_\text{m}.$$ To recover $u_0 \big|_{\Gamma_\text{i}}$ one solves , which can be written as $$\begin{aligned} \label{data-completion-equation} {g} = \big[T_{\text{mm}} \quad \widetilde{T}_{\text{im}} \big] \begin{bmatrix} ( I- K_{\text{mm}} ) & - \widetilde{K}_{\text{im}} \\ K_{\text{mi}} & (I+ \widetilde{K}_{\text{ii}}) \end{bmatrix}^{-1} \begin{bmatrix} -f \\ u_0 \big|_{\Gamma_\text{i}} \end{bmatrix} \quad \text{ for } \, \, x \in \Gamma_\text{m}. \end{aligned}$$ Once $u_0 \big|_{\Gamma_\text{i}}$ is known, equation gives that $u_0$ is known for all $x \in D \setminus \overline{\Omega}$, and therefore $\partial_\nu u_0 \big|_{\Gamma_\text{i}}$ is given by taking the normal derivative of on $\Gamma_\text{i}$. 
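Once the four integral operators are discretized (e.g. by collocation or the Nyström method), assembling the block operator and solving for the layer densities is immediate. A minimal NumPy sketch, where the input matrices are assumed to be discretizations of $K_{\text{mm}}$, $\widetilde{K}_{\text{im}}$, $K_{\text{mi}}$ and $\widetilde{K}_{\text{ii}}$ (the function names are ours):

```python
import numpy as np

def assemble_block_operator(Kmm, Kim_t, Kmi, Kii_t):
    """Assemble the discretized operator A = [[I - Kmm, -Kim_t], [Kmi, I + Kii_t]]."""
    nm = Kmm.shape[0]    # number of nodes on Gamma_m
    ni = Kii_t.shape[0]  # number of nodes on Gamma_i
    return np.block([
        [np.eye(nm) - Kmm, -Kim_t],
        [Kmi,              np.eye(ni) + Kii_t],
    ])

def solve_densities(A, f, u0_i):
    """Solve A (phi, psi)^T = (-f, u0|_{Gamma_i})^T for the layer densities."""
    rhs = np.concatenate([-f, u0_i])
    sol = np.linalg.solve(A, rhs)
    nm = len(f)
    return sol[:nm], sol[nm:]
```

Since $\mathcal{A}$ is the identity plus a compact operator, the condition number of its discretization stays bounded under mesh refinement, so a direct solve is appropriate here (unlike for the data completion equation itself, which is ill-posed).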
Since the Cauchy data on $\Gamma_\text{i}$ is known, the impedance condition $\partial_{\nu} u_0 +\gamma(x) u_0=0$ can be used to reconstruct the unknown impedance parameter. One can solve for the impedance pointwise: $$\gamma(x_n) = - \frac{ \partial_{\nu} u_0 (x_n)}{u_0 (x_n)} \quad \text{for} \quad n=1, \cdots ,N \quad \text{with} \, \,\,\, x_n \in \Gamma_{\text{i}} .$$ One can also consider using a least squares method for recovering the impedance by $$\min\limits_{\gamma (x) \in L^{\infty}(\partial \Omega)} \sum\limits_{n=1}^N \Big| \partial_{\nu} u_0(x_n) +\gamma(x_n) u_0 (x_n) \Big|^2 \quad \text{ where }\quad \gamma(x) = \sum\limits_{m=1}^M c_m \Psi_m(x)$$ for some choice of basis functions $\Psi_m$ for $x \in \Gamma_{\text{i}}$. Since we assume that $\Lambda_0$ is known, we can apply this inversion procedure to multiple Cauchy pairs $f_j$ and $g_j =\Lambda_0 f_j$ and determine an impedance parameter $\gamma_j (x)$ for $j=1, \cdots ,M$. Therefore, we can take the reconstructed impedance parameter to be the average of these reconstructions. Numerical Validation {#numerics} ==================== We now provide some numerical examples of our inversion methods. To do so, we consider reconstructing both Dirichlet and impedance inclusions in the unit disk. Recall that $\phi_z$ is the normal derivative of the Green’s function for the unit disk with zero trace on the boundary and is therefore given by the Poisson kernel $$\phi_z (\theta) = \frac{1}{2\pi} \frac{1-|z|^2}{|z|^2 +1-2|z| \cos(\theta - \theta_z )}$$ where $\theta_z$ is the polar angle that the point $z$ makes with the positive $x$-axis. We begin by showing that Theorem \[LSM\] can be used to reconstruct the inclusion for both the Dirichlet and impedance boundary conditions. Once the inclusion is reconstructed by the sampling method, we turn to giving numerical reconstructions of the impedance parameter. 
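For the unit disk, the right-hand side $\phi_z$ of the sampling equation is explicit and cheap to evaluate. A short sketch (the function name is ours):

```python
import numpy as np

def poisson_kernel(theta, z):
    """Evaluate phi_z(theta) = (1/2pi)(1 - |z|^2)/(|z|^2 + 1 - 2|z|cos(theta - theta_z))
    for a sampling point z in the unit disk, given as a complex number."""
    r, theta_z = abs(z), np.angle(z)
    return (1.0 - r**2) / (2.0 * np.pi * (r**2 + 1.0 - 2.0 * r * np.cos(theta - theta_z)))
```

As a sanity check, $\phi_z$ integrates to one over $[0, 2\pi)$, which is the usual Poisson-kernel normalization.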
Reconstruction of a Dirichlet inclusion ---------------------------------------- For this case we only consider a simple example and give more substantial reconstructions for the case of an impedance condition. Assume that the boundary of the inclusion is $\partial \Omega=\rho \big( \cos(\theta) , \sin(\theta) \, \big)$ where $0 < \rho<1$. Since we assume that $D$ is the unit disk in ${\mathbb{R}}^2$, we attempt to find a representation of the electrostatic potential $u_0(r ,\theta)$ which solves the problem $$\begin{aligned} \Delta u_0(r, \theta)=0 \quad \text{for all} \quad \rho < r<1 \quad \text{ and } \quad \theta \in [0 , 2 \pi)\\ u_0(1, \theta) = f(\theta) \quad \text{and} \quad u_0(\rho , \theta)=0.\end{aligned}$$ Now, since $u_0(r , \theta)$ solves Laplace’s equation in an annular region, we assume it can be written as a linear combination of separable solutions in the annulus and therefore has the form $$\begin{aligned} u_0(r,\theta)=a_0 +b_0 \ln r + \sum_{|n|=1}^{\infty} \big(a_n r^{|n|} + b_n r^{-|n|} \big) \, \text{e}^{ \text{i} n \theta}. 
\end{aligned}$$ By applying the boundary conditions we have that (see [@iterative-inclusion]) $$\begin{aligned} u_0(r,\theta)=\frac{f_0}{\ln \rho} \ln \left( \frac{\rho}{r} \right) + \sum_{|n|=1}^{\infty} \frac{f_n}{1-\rho^{2|n|}} \left(r^{|n|} - r^{-|n|}\rho^{2|n|} \right) \text{e}^{ \text{i} n \theta}\end{aligned}$$ where the $f_n$ are the Fourier coefficients of $f$ given by $$f_n= \frac{1}{2 \pi} \int\limits_{0}^{2 \pi} f(\phi)\, \text{e}^{- \text{i} n \phi} \, \text{d} \phi .$$ Therefore, taking the derivative with respect to $r$ gives $$\partial_r u_0(1,\theta)= - \frac{f_0}{\ln \rho }+ \sum_{|n|=1}^{\infty} |n| f_n \, \frac{1+\rho^{2|n|}}{1-\rho^{2|n|}} \, \text{e}^{ \text{i} n \theta}.$$ It is clear that the electrostatic potential for a material without a perfectly conducting inclusion is given by $$\begin{aligned} u(r,\theta)={f_0} + \sum_{|n|=1}^{\infty} {f_n} r^{|n|} \text{e}^{ \text{i} n \theta} \quad \text{and} \quad \partial_r u(1,\theta)= \sum_{|n|=1}^{\infty} |n| f_n \text{e}^{ \text{i} n \theta}. \label{potential}\end{aligned}$$ This now gives that the difference of the DtN mappings is given by $$\begin{aligned} (\Lambda - \Lambda_0 ) f =\frac{f_0}{\ln \rho} -2 \sum_{|n|=1}^{\infty} |n| \frac{ \rho^{2|n|}}{1-\rho^{2|n|}} f_n \, \text{e}^{ \text{i} n \theta}. \label{dtn-series}\end{aligned}$$ By interchanging summation and integration we obtain that $$(\Lambda - \Lambda_0 ) f = \int\limits_{0}^{2 \pi} K(\theta , \phi) f(\phi) \, \text{d} \phi$$ where the kernel is given by $$\begin{aligned} K(\theta , \phi) = \frac{1}{ 2 \pi \ln \rho} -\frac{1}{\pi} \sum_{|n|=1}^{\infty} |n| \frac{ \rho^{2|n|}}{1-\rho^{2|n|}} \text{e}^{ \text{i} n( \theta - \phi)}. \label{def-k}\end{aligned}$$ We now consider the approximation of $(\Lambda - \Lambda_0)$ by a truncated series. In our experiments we take the terms for $0 \leq |n| \leq 19$. 
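Since $(\Lambda - \Lambda_0)$ acts diagonally on the Fourier modes, the truncated operator is simply a multiplier on the coefficients $f_n$. A sketch of this multiplier for the perfectly conducting disk of radius `rho` (the helper names are ours):

```python
import numpy as np

def dtn_difference_multiplier(n, rho):
    """Multiplier of (Lambda - Lambda_0) on the n-th Fourier mode e^{i n theta}:
    1/ln(rho) for n = 0 and -2|n| rho^{2|n|}/(1 - rho^{2|n|}) otherwise."""
    if n == 0:
        return 1.0 / np.log(rho)
    a = rho ** (2 * abs(n))
    return -2.0 * abs(n) * a / (1.0 - a)

def apply_dtn_difference(f_coeffs, rho):
    """Apply the truncated operator to a dict {n: f_n} of Fourier coefficients."""
    return {n: dtn_difference_multiplier(n, rho) * fn for n, fn in f_coeffs.items()}
```

For instance, with $\rho = 0.5$ the $n = \pm 1$ multiplier equals $-2 \cdot 0.25/0.75 = -2/3$, and the multipliers decay like $\rho^{2|n|}$, which is why only the modes $0 \leq |n| \leq 19$ carry usable information.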
In the following we see that the truncated series $(\Lambda - \Lambda_0)_N$ converges exponentially fast to $(\Lambda - \Lambda_0)$ as $N \rightarrow \infty$. \[DtN-approx\] Let $(\Lambda - \Lambda_0)_N : H^{1/2}(0,2 \pi) \longmapsto H^{-1/2}(0,2 \pi)$ be the truncated series approximation of $(\Lambda - \Lambda_0)$ given by then we have that $$\big\| (\Lambda - \Lambda_0) -(\Lambda - \Lambda_0)_N \big\| \leq C \rho^{2(N+1)}$$ where $\| \cdot \|$ is the operator norm on $\mathcal{L} \big( H^{1/2}(0,2 \pi) \, , \, H^{-1/2}(0,2 \pi) \big).$ To begin, let $f \in H^{1/2}(0,2 \pi)$; then by we have that $$\begin{aligned} (\Lambda - \Lambda_0)f -(\Lambda - \Lambda_0)_N f = -2 \sum_{|n|=N+1}^{\infty} |n| \frac{ \rho^{2|n|}}{1-\rho^{2|n|}} f_n \, \text{e}^{ \text{i} n \theta}.\end{aligned}$$ Now by the Cauchy-Schwarz inequality in $\ell^2$ we have that $$\begin{aligned} \Big| (\Lambda - \Lambda_0)f -(\Lambda - \Lambda_0)_N f \Big|^2 &\leq C \sum_{|n|=N+1}^{\infty} |n| \frac{ \rho^{4|n|}}{(1-\rho^{2|n|})^2} \big| \text{e}^{ \text{i} n \theta} \big|^2 \, \cdot \sum_{|n|=N+1}^{\infty} |n| |f_n|^2 \\ &\leq C \big\| f \big\|^2_{H^{1/2}(0,2 \pi)} \, \sum_{|n|=N+1}^{\infty} |n| \rho^{4|n|}. \end{aligned}$$ Here we have used that $$\big\| f \big\|^2_{H^{1/2}(0,2 \pi)} = \sum_{|n|=0}^{\infty} \big(1+ |n|^2\, \big)^{1/2}\, |f_n|^2.$$ Now, notice that $$\Big| (\Lambda - \Lambda_0)f -(\Lambda - \Lambda_0)_N f \Big|^2 \leq C\, \rho^{4(N+1)}\, \big\| f \big\|^2_{H^{1/2}(0,2 \pi)} .$$ Since the $H^{-1/2}(0,2 \pi)$-norm is bounded by the $L^{\infty}(0,2 \pi)$-norm, we can conclude that $$\big\| (\Lambda - \Lambda_0) -(\Lambda - \Lambda_0)_N \big\| \leq C\rho^{2(N+1)}$$ proving the claim. We approximate the difference of the DtN mappings $(\Lambda - \Lambda_0)$ and apply Theorem \[LSM\] to the discretized operator. We discretize the operator by using a simple collocation method with $64$ equally spaced points in the interval $[0 , 2\pi )$. 
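The estimate of Theorem \[DtN-approx\] can also be checked numerically: for moderate $\rho$ the multipliers decay monotonically, so the first dropped mode dominates the truncation error. A small sketch of this proxy (our assumption, consistent with the series above):

```python
import numpy as np

def truncation_proxy(rho, N):
    """Largest dropped multiplier |2 n rho^{2n} / (1 - rho^{2n})| with n = N + 1,
    a proxy for the operator-norm error of the truncated series (valid when the
    multipliers decay monotonically, e.g. for moderate rho)."""
    n = N + 1
    a = rho ** (2 * n)
    return 2.0 * n * a / (1.0 - a)
```

For $\rho = 0.5$ and $N = 19$ (the 20 modes used in the experiments) the proxy is of order $10^{-11}$, far below the $5\%$ noise level, so truncation is not the limiting error source.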
This gives a $64 \times 64$ matrix approximation of $(\Lambda - \Lambda_0)$, which we denote by ${\bf A}$, and a vector $ {\bf b}_z=[\phi_z(\theta_j)]_{j=1}^{64}$, where the $\theta_j$ are the collocation points. In our calculations we add random noise to the DtN mappings given by ${\bf A}^{\delta}_{i,j}={\bf A}_{i,j}\big( 1 +\delta {\bf E}_{i,j} \big)$, where ${\bf E}$ is a mean zero random matrix satisfying $\| {\bf E} \|_2 =1$. This gives a discretized version of $$\begin{aligned} {\bf A}^{\delta} {\bf f}_z = {\bf b}_z \quad \text{ for } \quad z \in D. \label{matrix-equ}\end{aligned}$$ Since the operator $(\Lambda - \Lambda_0)$ is compact, its matrix approximation is ill-conditioned. In order to solve we use Tikhonov regularization. To this end, we let $\sigma_i \in {\mathbb{R}}^+$ be the singular values and $ {\bf u}_i$ and $ {\bf v}_i$ in ${\mathbb{C}}^{64}$ the singular vectors of the matrix ${\bf A}^{\delta}$. We let $ {\bf f}_z^{\text{Tik}}$ be the Tikhonov-regularized solution to given by $$\begin{aligned} {\bf f}_z^{\text{Tik}} = \sum\limits_{i=1}^{64} \frac{\sigma_i}{\alpha(\delta) + \sigma^2_i} \, ( {\bf b}_z , {\bf v}_i)_{\ell^2} \, {\bf u}_i, \end{aligned}$$ where the regularization parameter $\alpha$ is chosen by the discrepancy principle. Therefore, to reconstruct the inclusions we define the indicator function $$W(z) =\big\| {\bf f}_z^{\text{Tik}} \big\|^{-1}_{\ell^2}.$$ Even though we use the weaker $\ell^2$-norm, we see that this is sufficient to approximate the inclusion $\Omega$. In the following experiments we take the uniformly distributed noise level $\delta=0.05$ and plot the indicator function $W(z)$; see Figures \[reconstruct1\] and \[reconstruct2\]. 
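The regularized solve is conveniently written through the SVD; the filter factors $\sigma_i/(\alpha + \sigma_i^2)$ are equivalent to solving the normal equations $({\bf A}^H {\bf A} + \alpha I)\,{\bf f} = {\bf A}^H {\bf b}$. A minimal sketch (we follow NumPy's SVD convention for the left/right singular vectors, which may label them differently from the notation above):

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Tikhonov-regularized solution: filter the SVD of A with sigma/(alpha + sigma^2)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    filt = s / (alpha + s**2)
    return Vh.conj().T @ (filt * (U.conj().T @ b))

def indicator(A, b, alpha):
    """Sampling indicator W(z) = 1 / ||f_z^Tik||_2."""
    return 1.0 / np.linalg.norm(tikhonov_svd(A, b, alpha))
```

As $\alpha \to 0$ the filter approaches the pseudoinverse, and large $\alpha$ suppresses the small singular values that amplify the noise.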
![Reconstruction of a perfectly conducting circular inclusion by the Sampling Method with radius $\rho=0.5$ []{data-label="reconstruct1"}](DC1.jpg) ![Reconstruction of a perfectly conducting circular inclusion by the Sampling Method with radius $\rho=0.25$ []{data-label="reconstruct2"}](DC2.jpg) Reconstruction of an impedance inclusion ---------------------------------------- We begin this section by considering the case when the impedance boundary is given by $\rho \big( \cos(\theta) , \sin(\theta) \, \big)$ where $0 < \rho<1$ and the impedance parameter $\gamma$ is constant. This implies that the electrostatic potential $u_0$ satisfies $$\begin{aligned} \Delta u_0(r, \theta)=0 \quad \text{for all} \quad \rho < r<1 \quad \text{ and } \quad \theta \in [0 , 2 \pi)\\ u_0(1, \theta) = f(\theta) \quad \text{and} \quad \big(-\partial_r + \gamma \big) u_0(\rho , \theta)=0.\end{aligned}$$ Just as in the previous section we assume $$\begin{aligned} u_0(r,\theta)=a_0 +b_0 \ln r + \sum_{|n|=1}^{\infty} \big(a_n r^{|n|} + b_n r^{-|n|} \big) \, \text{e}^{ \text{i} n \theta}. \end{aligned}$$ After some calculations we obtain that $$u_0 (r, \theta) = f_0 (1-\sigma_0 \ln r) + \sum_{|n|=1}^{\infty} \frac{f_n}{1+ \sigma_n \rho^{2|n|}} \left(r^{|n|} + \sigma_n r^{-|n|}\rho^{2|n|} \right) \text{e}^{ \text{i} n \theta}$$ where $$\sigma_0 = - \frac{\gamma}{\ln \rho - \rho^{-1} } \quad \text{ and } \quad \sigma_n= \frac{|n| - \rho \gamma}{|n| + \rho \gamma} \quad \text{ for } \quad n \neq 0.$$ This now gives that the difference of the DtN mapping is given by $$(\Lambda - \Lambda_0 ) f = \sigma_0 f_0 + 2 \sum_{|n|=1}^{\infty} |n| \frac{ \sigma_n \rho^{2|n|}}{1 + \sigma_n \rho^{2|n|}} f_n \, \text{e}^{ \text{i} n \theta}.$$ Just as in the previous section we see that the Fourier coefficients of the difference of the DtN mapping decay exponentially fast. 
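The impedance-case multipliers can be evaluated in the same way; their exponential decay is exactly what limits the usable number of Fourier modes in practice. A sketch for constant $\gamma$ (the helper name is ours):

```python
import numpy as np

def dtn_difference_multiplier_imp(n, rho, gamma):
    """Multiplier of (Lambda - Lambda_0) on the n-th Fourier mode for a circular
    impedance inclusion of radius rho with constant impedance gamma:
    sigma_0 for n = 0 and 2|n| sigma_n rho^{2|n|} / (1 + sigma_n rho^{2|n|}) otherwise."""
    if n == 0:
        return -gamma / (np.log(rho) - 1.0 / rho)
    sig = (abs(n) - rho * gamma) / (abs(n) + rho * gamma)
    a = sig * rho ** (2 * abs(n))
    return 2.0 * abs(n) * a / (1.0 + a)
```

As a consistency check, in the limit $\gamma \to \infty$ one has $\sigma_n \to -1$ and the multipliers reduce to the perfectly conducting ones of the previous section.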
Numerically this implies that the higher modes will not add any extra information to the reconstructions since they will be below any reasonable noise threshold. In our numerical experiments we only consider the first 20 Fourier modes, just as in the previous section. To reconstruct an inclusion with an impedance coefficient $\gamma\big(x(\theta)\big)$ we use a boundary integral equation to simulate the DtN mappings. To this end, we assume that $u_0$ can be written as a combination of a double layer potential on $\partial D=\Gamma_{\text{m}}$ and a single layer potential on $\partial \Omega = \Gamma_{\text{i}}$. Here $\Gamma_{\text{m}}$ is the boundary of the unit disk and $\Gamma_{\text{i}}$ is given by $x(\theta): [0,2 \pi] \mapsto {\mathbb{R}}^2$, a $2\pi$-periodic representation of the $C^2$ boundary. Applying the boundary conditions $$u_0 \big|_{\Gamma_{\text{m}}}=f \quad \text{ and } \quad \big( \partial_{\nu} u_0 + \gamma u_0 \big) \big|_{\Gamma_{\text{i}}}=0$$ gives a $2 \times 2$ system of boundary integral equations. The boundary integral equations are solved via the Nyström method using 32 equally spaced points, which gives a representation of $u_0$ in $D \setminus \overline{\Omega}$. This gives a sufficiently accurate approximation of the electrostatic potential due to the exponential convergence of the method (see [@int-equ-book]). We then compute $\Lambda_0\, \text{e}^{\text{i} n \theta}$, where $\Lambda_0$ is the DtN mapping for the material with the inclusion, by taking the normal derivative of $u_0$ on $\Gamma_{\text{m}}$. It is clear that $\Lambda \, \text{e}^{\text{i} n \theta} = |n| \text{e}^{\text{i} n \theta}$ for all $n \in {\mathbb{Z}}$. 
To obtain a discretized version of we consider $$f_z \approx \sum\limits_{n=0}^{19} f^z_n \text{e}^{\text{i} n \theta} \quad \text{which implies that } \quad \sum\limits_{n=0}^{19} f^z_n (\Lambda-\Lambda_0) \text{e}^{\text{i} n \theta} \approx \phi_z$$ and solve for the first 20 Fourier coefficients of the solution $f_z$ to . In our experiments we solve the above equation for $\theta_j \in [0 , 2\pi )$, where the $\theta_j$ are taken to be 20 equally spaced points. This gives a $20 \times 20$ linear system which is solved using a spectral cut-off, where the cut-off parameter is chosen based on the level of noise in the data. To visualize the inclusion as in the previous examples we let $$W(z)=\left[ \sum\limits_{n=0}^{19} \big| f^z_n \big|^2 \right]^{-1/2} .$$ We implement this for three different inclusions given by $$\begin{aligned} \text{Circular shaped inclusion:} \quad x(\theta) &= \big( 0.3 \cos(\theta) \, , \, 0.3 \sin(\theta) \big)\\ \text{Elliptical shaped inclusion:} \quad x(\theta) &= \big( 0.5 \cos(\theta) \, , \, 0.3 \sin(\theta) \big)\\ \text{Cardioid shaped inclusion:} \quad x(\theta) &= \frac{0.35+0.3\cos(\theta)+0.05\sin(2\theta)}{1+0.7\cos(\theta)} \, \big( \cos(\theta) \, , \, \sin(\theta) \big)\end{aligned}$$ with the impedance parameter $\gamma \big(x(\theta)\big) = 2-\sin^4(\theta)$, where $4\%$ mean zero uniformly distributed random noise is added to the flux data measurements; see Figures \[recon7\], \[recon8\] and \[recon9\]. ![Reconstruction of the circle via the Sampling Method with impedance parameter $\gamma\big(x(\theta)\big)=2-\sin^4(\theta)$ with cut-off parameter $10^{-4}$. []{data-label="recon7"}](cutoff-circle.jpg) ![Reconstruction of the ellipse via the Sampling Method with impedance parameter $\gamma\big(x(\theta)\big)=2-\sin^4(\theta)$ with cut-off parameter $10^{-4}$. 
[]{data-label="recon8"}](cutoff-ellipse.jpg) ![Reconstruction of the cardioid via the Sampling Method with impedance parameter $\gamma\big(x(\theta)\big)=2-\sin^4(\theta)$ with cut-off parameter $10^{-4}$.[]{data-label="recon9"}](cutoff-cardiod.jpg) Reconstruction of the impedance parameter ----------------------------------------- We now give a numerical example of recovering the impedance parameter using the method described in Section \[BIE\]. Here we present an example where the boundary has been reconstructed via Theorem \[LSM\]. We consider the ellipse $x(\theta) = \big( 0.5 \cos(\theta) \, , \, 0.3 \sin(\theta) \big)$ with impedance parameter $\gamma\big(x(\theta)\big)=2-\sin^4(\theta)$ from the previous section. In our calculations we first represent the reconstructed curve using trigonometric polynomials. To this end, we assume that the inclusion $\Omega$ is centered at the origin and, taking the values on the level curve given in Figure \[recon8\], we approximate $$x(\theta) = \big( x_1(\theta) , x_2(\theta) \big) \quad \text{ such that } \quad x_p (\theta) = \sum\limits_{m=1}^{M} a_m^{(p)} \cos (m \theta) +b_m^{(p)} \sin (m \theta)$$ where $p=1,2$. The coefficients $a_m^{(p)}$ and $b_m^{(p)}$ are solved for in the least squares sense with Tikhonov regularization such that $x(\theta)$ approximates the reconstructed curve. In our calculations we penalize the $H^2(0,2 \pi)$ norm of $x_p (\theta)$, taking the regularization parameter based on the level of noise in the data. Now that we have an approximation of $x(\theta)$, we can reconstruct the impedance using boundary integral equations. We apply the data completion algorithm described in Section \[BIE\] to recover the Cauchy data on the interior boundary $\Gamma_{\text{i}}$. Using the same method as in the previous section, for any given $f$ we can compute the corresponding $\Lambda_0 f$. 
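The curve fit above is an ordinary regularized least squares problem in the trigonometric basis. A sketch, where the $H^2(0, 2\pi)$ penalty is imposed through mode-dependent weights $(1+m^2)^2$ (our choice of an equivalent weighting; the function name is ours):

```python
import numpy as np

def fit_trig_curve(theta, pts, M, alpha):
    """Fit x_p(theta) = sum_{m=1}^M a_m cos(m theta) + b_m sin(m theta), p = 1, 2,
    by Tikhonov-regularized least squares; pts has shape (len(theta), 2).
    Returns the fitted curve evaluated at the sample angles."""
    modes = np.arange(1, M + 1)
    # design matrix: [cos(m theta), sin(m theta)] columns
    B = np.hstack([np.cos(np.outer(theta, modes)), np.sin(np.outer(theta, modes))])
    w = np.tile((1.0 + modes**2) ** 2, 2)  # proxy weights for the H^2 norm per mode
    coeffs = np.linalg.solve(B.T @ B + alpha * np.diag(w), B.T @ pts)
    return B @ coeffs
```

The penalty damps the high modes most strongly, which smooths out the jagged level-curve data without shifting the low-order shape of the reconstructed boundary.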
Note that our original data on $\Gamma_{\text{m}}$ is subject to $4\%$ mean zero random noise and these errors transfer to the reconstruction on $\Gamma_{\text{i}}$. Thus, to reconstruct $\gamma\big(x(\theta)\big)$ we must solve a discretized version of , where the Nyström method using 64 points is used to discretize the equation. Using a standard Tikhonov regularization scheme we solve the discretized version of , which allows us to determine $u_0$ and $\partial_{\nu} u_0$ on $\Gamma_{\text{i}}$ for a given $f$. In our calculations we take $f (\theta) = \cos (k \theta) \text{ and } \sin (k \theta)$ for $k=1, \cdots ,8$, which corresponds to having 16 voltage and current measurements. For each $f$ the impedance is computed by $$\gamma \big(x(\theta_j) \big ) = - \frac{ \partial_{\nu} u_0 \big(x(\theta_j) \big )}{u_0 \big(x(\theta_j) \big )} \quad \text{where} \quad \theta_j = \frac{2 j \pi}{64} \quad \text{ for }\quad j = 0,\cdots, 63.$$ In Figure \[recon-imped\] we show the approximation of the reconstructed ellipse as well as the plot of the reconstructed impedance, which is obtained by averaging the 16 results. ![On the left an approximation of the boundary of the inclusion for $M=7$. On the right is the reconstruction of the impedance $\gamma\big(x(\theta)\big)=2-\sin^4(\theta)$ from 16 Cauchy pairs.[]{data-label="recon-imped"}](recon-ellipse.jpg "fig:")![On the left an approximation of the boundary of the inclusion for $M=7$. On the right is the reconstruction of the impedance $\gamma\big(x(\theta)\big)=2-\sin^4(\theta)$ from 16 Cauchy pairs.[]{data-label="recon-imped"}](recon-imped.jpg "fig:") [99]{} H. Ammari et al., [A MUSIC-type algorithm for detecting internal corrosion from electrostatic boundary data]{}, [*Numer. Math.*]{} [**108**]{} (2008), 501-528. T. Arens, [Why linear sampling works]{}, [*Inverse Problems*]{} [**20**]{} (2004), 163-173. T. Arens and A. 
Lechleiter, Indicator functions for shape reconstruction related to the linear sampling method, [*SIAM Journal on Imaging Sciences*]{} [**8(1)**]{} (2015), 513-535. V. Bacchelli, Uniqueness for the determination of unknown boundary and impedance with the homogeneous Robin condition, [*Inverse Problems*]{} [**25**]{} (2009), 015004. F. Ben Hassen, Y. Boukari and H. Haddar, Inverse impedance boundary problem via the conformal mapping method: the case of small impedances, [*Revue ARIMA*]{} [**13**]{} (2010), 47-62. M. Brühl, Explicit characterization of inclusions in electrical impedance tomography, [*SIAM J. Math. Anal.*]{} [**32**]{}, No. 6 (2001), 1327-1341. F. Cakoni and D. Colton, *“A Qualitative Approach to Inverse Scattering Theory”*, Springer, Berlin 2014. F. Cakoni, D. Colton and H. Haddar, The linear sampling method for anisotropic media, [*J. Comput. Appl. Math.*]{} [**146**]{} (2002), 285-299. F. Cakoni, Y. Hu and R. Kress, Simultaneous reconstruction of shape and generalized impedance functions in electrostatic imaging, [*Inverse Problems*]{} [**30**]{} (2014), 105009. D. Colton and A. Kirsch, [A simple method for solving inverse scattering problems in the resonance region]{}, [*Inverse Problems*]{} [**12**]{} (1996), 383-393. D. Colton and R. Kress, Springer, New York, third edition, 2013. H. Haddar and R. Kress, [Conformal mappings and inverse boundary value problems]{}, [*Inverse Problems*]{} [**21**]{} (2005), 935-953. H. Haddar, A. Lechleiter and S. Marmorat, An improved time domain linear sampling method for Robin and Neumann obstacles, [*Applicable Analysis*]{} [**93**]{} (2014), 369-390. L. Evans, *“Partial Differential Equations”*, 2$^{nd}$ edition, AMS 2010. N. Khaji and S.H. Dehghan Manshadi, Time domain linear sampling method for qualitative identification of buried cavities from elastodynamic over-determined boundary data, [*Computers $\&$ Structures*]{} [**153**]{} (2015), 36-48. A. Kirsch and N. 
Grinberg, *“The Factorization Method for Inverse Problems”*, Oxford University Press, Oxford 2008. A. Kirsch, The factorization method for a class of inverse elliptic problems, [*Math. Nachr.*]{} [**278**]{} (2005), 258-277. R. Kress, Inverse Dirichlet problem and conformal mapping, [*Math. Comput. Simul.*]{} [**66**]{} (2004), 255-265. R. Kress, [*“Linear Integral Equations”*]{}, Springer, New York, third edition, 2014. W. McLean, [*“Strongly elliptic systems and boundary integral equations”*]{}, Cambridge University Press, Cambridge 2000. G. Nakamura and H. Wang, Numerical reconstruction of unknown Robin inclusions inside a heat conductor by a non-iterative method, [*Inverse Problems*]{} [**33**]{} (2017), 055002. W. Rundell, Recovering an obstacle and its impedance from Cauchy data, [*Inverse Problems*]{} [**24**]{} (2008), 045003.
--- abstract: 'In this paper, we study Manolescu’s construction of the relative Bauer-Furuta invariants arising from the Seiberg-Witten equations on 4-manifolds with boundary. The main goal of this paper is to introduce a new gauge fixing condition in order to apply the finite dimensional approximation technique. We also hope to provide a framework to extend Manolescu’s construction to general 4-manifolds.' address: 'Kavli Institute for the Physics and Mathematics of the Universe (WPI), Todai Institutes for Advanced Study, The University of Tokyo, 5-1-5 Kashiwa-No-Ha, Kashiwa, Chiba 277-8583, Japan' author: - Tirasan Khandhawit bibliography: - 'research.bib' title: 'A new gauge slice for the relative Bauer-Furuta invariants' --- Introduction ============ Stable homotopy invariants arising from gauge theory have provided many interesting results in low-dimensional topology. One of the first examples is Furuta’s $10/8$-theorem, which provides constraints on intersection forms of smooth 4-manifolds [@Furu]. Later, Bauer and Furuta constructed an invariant for a closed 4-manifold as an element in a certain stable cohomotopy group ([@BFII], [@BFI]). The basic idea of this construction is to consider the Seiberg-Witten map, rather than its moduli space of solutions, and then consider approximated maps between finite dimensional vector spaces to obtain a stable map between spheres. One useful observation for this construction is that the Seiberg-Witten map can be written as a sum of linear and compact operators. In 2003, Manolescu constructed a Floer spectrum for a rational homology 3-sphere [@Man1]. Roughly speaking, the construction comes from finite dimensional approximation of the Seiberg-Witten flow on an infinite-dimensional space. This allows one to extend the notion of Bauer-Furuta invariants to 4-manifolds with a rational homology sphere as a boundary. 
Let $X$ be a smooth, compact, connected, oriented 4-manifold with boundary $\partial X = Y$ and equip $X$ with a spin$^c$ structure whose restriction induces a spin$^c$ structure on $Y$. Conceptually, one can view the construction of the relative Bauer-Furuta invariant as a combination of finite dimensional approximation on both $X$ and $Y$ using the Seiberg-Witten map and the restriction map $$\begin{aligned} \mathcal{M}(X) \rightarrow \mathcal{B}(Y)\end{aligned}$$ from the moduli space of Seiberg-Witten solutions of $X$ to the quotient configuration space of $Y$ as a boundary term. An important step is to instead consider spaces of configurations with a certain gauge fixing condition so that we have a map with the Fredholm property between vector spaces. The main purpose of this paper is to introduce a new gauge fixing for a 4-manifold with boundary. An advantage of our gauge fixing condition, called the double Coulomb condition, is that the restriction map from the corresponding slice on $X$ to the Coulomb slice on $Y$ is linear. In contrast, the restriction map from the previously used Coulomb-Neumann slice on $X$ to the Coulomb slice on $Y$ is not linear and its nonlinear part is not compact. In fact, the boundary condition of our double Coulomb condition is motivated by this situation. In Section \[sec prelim\], we give a definition of the double Coulomb condition and prove its basic properties. In Section \[sec fredholm\], we show that the double Coulomb slice has several properties analogous to the Coulomb-Neumann slice. In Section \[sec main\], we apply finite dimensional approximation to this slice. At the end, we specialize to the case when $b_1(Y) = 0$ and reproduce Manolescu’s construction of the relative Bauer-Furuta invariant, denoted by $SWF(X)$. 
When $b_1 (Y) = 0$, the Seiberg-Witten map with boundary term on the double Coulomb slice gives an $S^1$-equivariant stable homotopy class of maps $$\begin{aligned} \mathit{SWF}(X) : \Sigma^{-b^+ (X)} Th_{Dir} (X) \rightarrow \mathit{SWF}(Y), \label{eq mainth}\end{aligned}$$ where $\mathit{SWF}(Y)$ is the Floer spectrum associated to $Y$ and $Th_{Dir} (X)$ is the Thom spectrum associated to a family of Dirac operators on $X$ parametrized by the Picard torus of $X$. In the special case when $b_1 (X) = 0$, we have $$\begin{aligned} \mathit{SWF}(X) : \mathbf{S}^{-b^+ (X)\mathbb{R} - \frac{\sigma(X)}{8}\mathbb{C}} \rightarrow \mathit{SWF}(Y), \label{eq mainb1=0}\end{aligned}$$ where $\mathbf{S}$ is the sphere spectrum. \[thm b1=0\] In particular, when $X$ is a cobordism between two 3-manifolds, one can use duality and reinterpret the relative Bauer-Furuta invariant as a morphism between Floer spectra. Suppose that $\partial X = \widebar{Y_1} \coprod Y_2$ and $b_1 (X) = b_1(Y_1) = b_1 (Y_2) = 0$; then we also have an $S^1$-equivariant stable homotopy class of maps $$\begin{aligned} \mathit{SWF}(X) : \mathit{SWF}(Y_1) \rightarrow \mathbf{S}^{b^+ (X)\mathbb{R} + \frac{\sigma(X)}{8}\mathbb{C}} \wedge \mathit{SWF}(Y_2). \nonumber\end{aligned}$$ \[cor cobord\] We point out that the construction can be applied directly to give a stable homotopy class of $\mathit{Pin}(2)$-equivariant maps when all manifolds are spin. This $\mathit{Pin}(2)$-version of Corollary \[cor cobord\] plays a crucial role in the recent applications of $\mathit{Pin}(2)$-equivariant stable homotopy invariants to low-dimensional topology [@Lin14; @Man13-2; @Man13-1]. Another goal of the paper is to provide a framework to extend Manolescu’s construction to a 4-manifold whose boundary can be any 3-manifold. The case $b_1 (Y) = 1$ was studied by Kronheimer and Manolescu in [@ManK]. We also hope that this new slice will help prove other important properties of the relative Bauer-Furuta invariant.
In Appendix \[App con\], we provide some background in Conley index theory. We also include the result regarding independence of index pairs in the construction (see Proposition \[prop maptocon\]), which, we believe, has not appeared before. This was part of the author’s Ph.D. thesis at Massachusetts Institute of Technology. The author would like to gratefully thank Tom Mrowka for his advice and support during the author’s graduate study. The author would also like to thank Mikio Furuta and Ciprian Manolescu for several helpful discussions. This work was supported by World Premier International Research Center Initiative (WPI), MEXT, Japan. Preliminaries: The double Coulomb condition {#sec prelim} =========================================== In this section, let $M$ be a compact, connected, oriented, Riemannian manifold with boundary $\partial M = \coprod N_i$. We will describe a variant of the Hodge decomposition of 1-forms in order to set up an appropriate slice for finite dimensional approximation of the Seiberg-Witten map. The inclusion $\partial M \rightarrow M$ gives a decomposition of a differential form of $M$ at the boundary into its tangential part and normal part $$\begin{aligned} \omega_{| \partial M} = \mathbf{t}\omega + \mathbf{n}\omega . \nonumber \end{aligned}$$ Thus, $\mathbf{t} \omega$ is the restriction of $\omega$ to the boundary. When $\partial M $ has more than one connected component, we also denote by $\mathbf{t}_i$ the restriction to the $i$-th component. Let $\star$ be the Hodge star and $d^*$ be the codifferential. With this notation, we recall the formula for integration by parts (namely, Green’s formula) $$\begin{aligned} \int_M \left\langle d \omega , \eta \right\rangle = \int_M \left\langle \omega , d^* \eta \right\rangle + \int_{\partial M} \mathbf{t} \omega \wedge \star \mathbf{n}\eta \label{eq green}\end{aligned}$$ and the identity $\star (\mathbf{n} \omega) = \mathbf{t} (\star \omega) $.
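Two immediate specializations of Green’s formula (\[eq green\]) will be used repeatedly below. Taking $\omega$ to be a 0-form $\xi$ and using the identity $\star (\mathbf{n} \eta) = \mathbf{t} (\star \eta)$, we obtain $$\begin{aligned} \int_M \left\langle d \xi , \eta \right\rangle = \int_M \left\langle \xi , d^* \eta \right\rangle + \int_{\partial M} \mathbf{t} \xi \wedge \mathbf{t} (\star \eta) , \nonumber\end{aligned}$$ and taking $\xi \equiv 1$ shows that every 1-form $\eta$ satisfies $$\begin{aligned} 0 = \int_M d^* \eta + \sum_i \int_{N_i} \mathbf{t}_i (\star \eta) . \nonumber\end{aligned}$$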
We now introduce a space of 1-forms with double Coulomb condition. We say that a 1-form $\alpha$ satisfies the double Coulomb condition if 1. $\alpha$ is coclosed ($d^* \alpha = 0$). 2. Its restriction to the boundary is coclosed, i.e. $d^* ( \mathbf{t} \alpha ) = 0$ on $\partial M$. 3. For each $i$, the integral $\int_{N_i} \mathbf{t}_i (\star \alpha)$ vanishes. Denote by $\Omega^1_{\mathit{CC}} (M)$ the space of all 1-forms satisfying the double Coulomb condition. \[def 1-bC\] When the metric of $M$ is cylindrical near the boundary, i.e. a neighborhood of the boundary is isometric to $(-\epsilon,0] \times \partial M$, one can decompose a 1-form $\omega$ in the collar neighborhood as $$\begin{aligned} \omega = \alpha(t) + \beta(t) + \gamma(t) dt , \label{eq 1formcylin} \end{aligned}$$ where $\alpha(t) , \beta(t) ,$ and $\gamma(t)$ are a time-dependent exact 1-form, coclosed 1-form, and 0-form on $\partial M$, respectively. One can see that the Neumann condition $\mathbf{n}\omega =0$ simply means $\gamma(0) = 0$, while the condition $d^* ( \mathbf{t}\omega ) = 0$ means $\alpha(0) = 0$, and the last condition in Definition \[def 1-bC\] becomes a condition on the integral of $\gamma(0)$ over each boundary component. Hodge theory on manifolds with boundary has been studied by many authors (cf. [@HodgeB; @GSchwarz]). However, the double Coulomb condition and the following decomposition appear to be new. Any 1-form $\alpha$ can be written uniquely as a sum $\alpha = \omega + d \xi$, where $\omega \in \Omega^1_\mathit{CC} (M)$ and $\xi$ is a 0-form.
In other words, there is an isomorphism $$\begin{aligned} \Omega^1 (M) \cong \Omega^1_\mathit{CC} (M) \oplus d \Omega^0 (M) \label{eq CCdecom} \end{aligned}$$ \[prop 1-decomp\] This decomposition is not orthogonal, as opposed to the standard Hodge decomposition $\Omega^1 (M) \cong \Omega^1_\mathit{CN} (M) \oplus d \Omega^0 (M)$, where $\Omega^1_\mathit{CN} (M)$ is the space of coclosed 1-forms with vanishing normal component ($\mathbf{n} \alpha = 0$). If the condition $\int_{N_i} \mathbf{t}_i (\star \alpha) = 0$ is omitted, a decomposition still exists but is not unique, with an ambiguity of dimension $b_0 (\partial M) - 1$. Let $\alpha$ be a 1-form. We will first find a function $\xi_1$ such that $\alpha - d\xi_1$ satisfies (i) and (ii) of Definition \[def 1-bC\], that is $$\begin{aligned} d^* ( d \xi_1) &= d^* \alpha, \\ d^* ( \mathbf{t} d\xi_1 ) &= d^* ( \mathbf{t} \alpha ). \end{aligned} \label{eq hodge1}$$ Since $d$ and $\mathbf{t}$ commute, we see that $d^* ( \mathbf{t} d\xi_1 ) = \Delta \mathbf{t} \xi_1 $. We can instead consider $$\begin{aligned} \Delta \xi_1 &= d^* \alpha, \\ \mathbf{t} \xi_1 &= G d^* ( \mathbf{t} \alpha ), \end{aligned}$$ where $G$ is a Green’s operator of the Laplacian of the boundary $\partial M$. This equation is precisely the Dirichlet problem for the Poisson equation and is uniquely solvable (cf. [@GSchwarz]). As a result, one can always find such $\xi_1$ as claimed. Next, consider a function $\xi$ which satisfies $$\begin{aligned} \Delta \xi &= 0, \\ \mathbf{t}_i \xi &= c_i , \end{aligned} \label{eq hodgeker}$$ where $c_i$ are constants. This is a homogeneous solution of (\[eq hodge1\]). Since this equation is also the Dirichlet problem, there is a unique solution $\xi$ for each vector $c =(c_1 , \ldots , c_{b_0 (\partial M)})$. Denote by $K$ the space of functions $\xi $ satisfying (\[eq hodgeker\]) for some vector $c$. This is a vector space of dimension $b_0 (\partial M) $, and $\xi$ is a constant function when the $c_i$’s are all equal.
Let us consider the map $ ev : K \rightarrow \mathbb{R}^{b_0 (\partial M)}$ defined by assigning the value $ \int_{N_i} \mathbf{t}_i (\star d \xi) $ to the $i$-th component. When $\eta = d \xi$ and $\omega$ is a nonzero constant function, Green’s formula (\[eq green\]) implies that $$\begin{aligned} 0 = \int_M \Delta \xi + \sum \int_{N_i} \mathbf{t}_i (\star d \xi). \nonumber\end{aligned}$$ Then, we see that the image of $ev$ is in the hyperplane $H_0 := \left\lbrace (r_i) \in \mathbb{R}^{b_0 (\partial M)} \: | \: \sum r_i = 0 \right\rbrace $. By plugging $(\omega , \eta ) = (\xi , d\xi)$ into Green’s formula, we find that the kernel of $ev$ consists of the constant functions. Since $ K_0 := \left\lbrace \xi \in K \: | \: \sum c_i = 0 \right\rbrace $ contains no nonzero constant function and has dimension $b_0 (\partial M) - 1 = \dim H_0$, the map $ev$ restricts to an isomorphism between $ K_0 $ and the hyperplane $ H_0 $. Finally, we notice that any coclosed 1-form $\omega$ satisfies $ 0 = \sum \int_{N_i} \mathbf{t}_i (\star \omega) $ by pairing $\omega$ with a nonzero constant function in Green’s formula. Note that this makes condition (iii) trivial when $\partial M$ has just one component. From the previous paragraph, we can now pick $\xi_2 \in K_0$ so that $\int_{N_i} \mathbf{t}_i (\star (\alpha - d \xi_1 - d \xi_2)) = 0$; note that subtracting $d \xi_2$ does not affect conditions (i) and (ii), since $\Delta \xi_2 = 0$ and $\mathbf{t} \xi_2$ is locally constant. Hence $\alpha - d \xi_1 - d \xi_2 \in \Omega^1_\mathit{CC} (M) $. The uniqueness of the decomposition follows from the fact that the kernel of $ev$ consists of the constant functions. For a coclosed 1-form $\alpha$, the condition $\int_{N_i} \mathbf{t}_i (\star \alpha) = 0$ is equivalent to $\alpha$ being orthogonal to $d \xi$ for all $\xi \in K$. Indeed, by Green’s formula with $d^* \alpha = 0$ and $\mathbf{t}_i \xi = c_i$, we have $$\int_M \left\langle d \xi , \alpha \right\rangle = \sum c_i \int_{N_i} \mathbf{t}_i (\star \alpha).$$ The Seiberg-Witten map with boundary terms {#sec fredholm} ========================================== From now on, let $X$ be a compact, connected, oriented, Riemannian 4-manifold with nonempty boundary $\partial X =Y$.
We choose a metric so that a neighborhood of the boundary is isometric to the cylinder $I \times Y$ for some interval $I = (-C ,0]$. Let $\mathfrak{s}_X$ be a spin$^c$ structure on $X$ and $\mathfrak{s}$ be the induced spin$^c$ structure on $Y$. Denote by $S_X = S^+ \oplus S^-$ the spinor bundle of $X$ and by $S$ the spinor bundle of $Y$. Denote by $\mathcal{A}_X$ the space of spin$^c$ connections on $S_X$, by $\Gamma(S^\pm)$ the space of sections of the spinor bundles, and by $\Omega_+^2 (X)$ the space of self-dual 2-forms. The Seiberg-Witten map is given by $$\begin{aligned} \mathit{SW}: \mathcal{A}_X \oplus \Gamma(S^+) &\rightarrow i \Omega_+^2 (X) \oplus \Gamma(S^-) \\ (A,\Phi) &\mapsto (\frac{1}{2} F^+_{A^t} - \rho^{-1} ((\Phi \Phi^* )_0), \slashed{D}_{A}^+ \Phi), \end{aligned}$$ where $F^+_{A^t}$ is the self-dual part of the curvature of the associated connection on the determinant bundle $\Lambda^2 S^+ $, $\slashed{D}_{A}^+$ is the Dirac operator, $(\Phi \Phi^* )_0$ is the trace-free part of the endomorphism $\Phi \Phi^*$, and $\rho$ is the Clifford multiplication. The gauge group $\mathcal{G} := Map(X , S^1)$ acts on the above spaces so that $SW$ is $\mathcal{G}$-equivariant. The action is given by $u \cdot A = A - u^{-1} d u$ on connections and by pointwise multiplication on spinors, whereas the action is trivial on 2-forms. There is also the gauge subgroup $\mathcal{G}^\bot := \left\{ e^\xi \; | \; \xi \in C^{\infty}(X; i\mathbb{R}) \text{ and } \int_X \xi = 0 \right\}$, which lies in the identity component of $\mathcal{G}$. With a reference connection $A_{0}$, the quotient of $\mathcal{A}_X \oplus \Gamma(S^+)$ by $\mathcal{G}^\bot$ can be identified with a global slice with the double Coulomb condition $$\begin{aligned} \mathit{Coul}^\mathit{CC}(X) = \left\{ (a , \phi ) \in i \Omega^1 (X) \oplus \Gamma (S^+) \; | \; a \in\Omega^1_\mathit{CC} (X) \right\}.
\nonumber\end{aligned}$$ This is a consequence of the decomposition (\[eq CCdecom\]) from Proposition \[prop 1-decomp\]. The Seiberg-Witten map then becomes $$\begin{aligned} SW: \mathit{Coul}^{CC}(X) &\rightarrow i \Omega_+^2 (X) \oplus \Gamma(S^-) \\ (a,\phi) &\mapsto (d^+ a - \rho^{-1} ((\phi \phi^* )_0)+ \frac{1}{2} F^+_{A_0^t} \, , \, \slashed{D}_{A_{0}}^+ \phi + \rho(a)\phi \, ) \end{aligned}$$ and we can write $SW = \hat{D} + \hat Q$, where $\hat D = (d^+ , \slashed{D}_{A_{0} }^+ )$ and $\hat Q$ is the sum of a quadratic map and the constant term $\frac{1}{2} F^+_{A_0^t}$. On the 3-manifold side, we also have the Coulomb slice $$\mathit{Coul}(Y) = \left\{ \left( b , \psi \right) \in i \Omega^1 (Y) \oplus \Gamma (S) \; | \; d^* b = 0 \right\}$$ which arises from the quotient of a configuration space by a gauge subgroup. For $a \in \Omega^1_{CC} (X)$, its restriction to the boundary is already coclosed, so that the restriction induces a map between the slices $$\begin{aligned} r : \mathit{Coul}^{CC}(X) \rightarrow \mathit{Coul}(Y). \end{aligned}$$ This gives a Seiberg-Witten map with boundary terms $$\begin{aligned} SW \oplus r: \mathit{Coul}^{\mathit{CC}}(X) &\rightarrow \left( i \Omega_+^2 (X) \oplus \Gamma(S^-) \right) \oplus \mathit{Coul}(Y). \end{aligned}$$ As usual, we will extend the above maps to maps between appropriate Sobolev spaces. For a fixed real number[^1] $k>3$, we consider the $L^{2}_{k+1}$ completion of the domain of $SW$, the $L^2_k$ completion of the codomain of $SW$, and the $L^2_{k + 1/2}$ completion of $Coul(Y)$, so that $\hat Q$ is compact and the restriction map $r$ is bounded. However, the linear part $\hat{D} \oplus r$ is not Fredholm. To obtain a Fredholm map, we need to consider the above operator with a spectral boundary condition as in the Atiyah-Patodi-Singer boundary-value problems.
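As a heuristic for why a spectral condition is needed, consider the model operator $\frac{d}{dt} + L$ on a half-cylinder $[0,\infty) \times Y$, where $L$ is a first-order self-adjoint elliptic operator with a complete system of eigenvectors $L \phi_\lambda = \lambda \phi_\lambda$; this is only the standard motivation for the Atiyah-Patodi-Singer condition, sketched here for the reader's convenience. Solutions of the model equation $(\frac{d}{dt} + L) x = 0$ have the form $$\begin{aligned} x(t) = \sum_\lambda a_\lambda e^{-\lambda t} \phi_\lambda , \nonumber\end{aligned}$$ and such a solution is square-integrable on the half-cylinder only if $a_\lambda = 0$ for all $\lambda \leq 0$. Prescribing the projection of the boundary value $x(0)$ onto the nonpositive eigenspace therefore controls the non-decaying modes, and this is what the spectral boundary condition accomplishes.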
On the boundary 3-manifold $Y$, we have a first-order self-adjoint elliptic operator $D$ acting on the Coulomb slice $$\begin{aligned} D : i {\ensuremath{\operatorname{Ker}(d^*)}} \oplus \Gamma (S) &\rightarrow i {\ensuremath{\operatorname{Ker}(d^*)}} \oplus \Gamma (S) \label{eq linearmap3d} \\ (b,\psi ) &\mapsto (* d b , \slashed{D}_{B_0} \psi ), \nonumber\end{aligned}$$ where the connection $B_0$ is the restriction of $A_0$. Denote by $H_0^-$ its nonpositive eigenspace and by $\Pi_0^-$ the projection onto $H_0^-$. We now consider a map of the form $$\begin{aligned} \hat{D} \oplus (\Pi^-_0 \circ r) : \mathit{Coul}^{\mathit{CC}} (X) \rightarrow \left( i \Omega_+^2 (X) \oplus \Gamma(S^-) \right) \oplus H^-_0 . \label{eq map-coul}\end{aligned}$$ We will show that this map, extended to the Sobolev completion, is Fredholm with an elliptic estimate. The proof resembles that of Proposition 17.3.2 from [@Mono]. The map $\hat{D} \oplus (\Pi^-_0 \circ r)$ in (\[eq map-coul\]) is Fredholm and its index is equal to $2\operatorname{Ind}_{\mathbb{C}} (\slashed{D}_{A_0}^+ ) +b_1(X) - b^+ (X) - b_1(Y) $. In addition, we have an estimate $$\begin{aligned} \left\| x \right\|_{L^2_{k+ 1}} \leq C \left( \left\|\hat{D} x \right\|_{L^2_{k}} + \left\| (\Pi^-_0 \circ r) x \right\|_{L^2_{k+ 1/2}} + \left\| x \right\|_{L^2} \right). \label{eq ellipest}\end{aligned}$$ \[prop bFred\] The main idea is to apply the Atiyah-Patodi-Singer boundary-value problem (cf. [@APSI]) to the extended operator coming from the gauge fixing condition. Then, we will compare projections onto different semi-infinite subspaces in the boundary terms. One subspace arises from a spectral boundary condition of the extended operator while another is a sum of an eigenspace on $\mathit{Coul}(Y)$ and a subspace characterizing the double Coulomb condition. 
We will also make use of the following observation: suppose that an operator $\widetilde{D}$ arises from two operators $D_1 : V \rightarrow W_1$ and $D_2 : V \rightarrow W_2$, in the sense that $$\begin{aligned} \widetilde{D} = ( D_1 , D_2 ) : V \rightarrow W_1 \oplus W_2 .\end{aligned}$$ It is not hard to check that $$\begin{aligned} {\ensuremath{\operatorname{Ker}(\widetilde{D} )}} &= {\ensuremath{\operatorname{Ker}({D_1}|_{{\ensuremath{\operatorname{Ker}(D_2)}}})}} = {\ensuremath{\operatorname{Ker}({D_2}|_{{\ensuremath{\operatorname{Ker}(D_1)}}})}} \\ {\ensuremath{\operatorname{Coker}(\widetilde{D} )}} &= {\ensuremath{\operatorname{Coker}(D_1)}} \oplus {\ensuremath{\operatorname{Coker}(D_2|_{{\ensuremath{\operatorname{Ker}(D_1)}}})}} \\ &= {\ensuremath{\operatorname{Coker}(D_1|_{{\ensuremath{\operatorname{Ker}(D_2)}}})}} \oplus {\ensuremath{\operatorname{Coker}(D_2)}} . \end{aligned} \label{eq sumFredholm}$$ Consequently, the map $D_1|_{{\ensuremath{\operatorname{Ker}(D_2)}}} : {\ensuremath{\operatorname{Ker}(D_2)}} \rightarrow W_1$ is Fredholm if $\widetilde{D} $ is Fredholm. Let us start by considering an elliptic operator $\widetilde{D}$ given by $$\begin{aligned} \widetilde{D} : i \Omega^1 (X) \oplus \Gamma (S^+) &\rightarrow i \Omega_+^2 (X) \oplus \Gamma(S^-) \oplus i \Omega^0 (X) \\ (a,\phi) &\mapsto (d^+ a , \slashed{D}_{A_0}^+ \phi , d^* a). \end{aligned}$$ This is the map $\hat D$ together with the summand $d^*$ for gauge fixing.
Then, we write $\widetilde{D} = D_0 + K$, where $K$ extends to an operator of order $0$ and $D_0$ has the form $$\begin{aligned} D_0 = \frac{d}{dt} + \widetilde{L}, \end{aligned}$$ in the collar neighborhood (up to isomorphisms) and the operator $\widetilde{L}$ is a first-order, self-adjoint elliptic operator given by $$\begin{aligned} \widetilde{L} : i \Omega^1 (Y) \oplus \Gamma (S) \oplus i \Omega^0 (Y) &\rightarrow i \Omega^1 (Y) \oplus \Gamma(S) \oplus i \Omega^0 (Y) \\ (b,\psi , c) &\mapsto (* d b - dc , \slashed{D}_{B_0} \psi , - d^* b).\end{aligned}$$ Using the Hodge decomposition, the restriction of $\widetilde{L}$ to $i \Omega^1 (Y) \oplus i \Omega^0 (Y) = i {\ensuremath{\operatorname{Im}(d)}} \oplus i{\ensuremath{\operatorname{Ker}(d^*)}} \oplus i \Omega^0 (Y)$ can be written as a block $$\begin{aligned} \begin{bmatrix} 0 & 0 & -d \\ 0 & *d & 0 \\ -d^* &0 &0 \end{bmatrix}. \end{aligned}$$ One can also rearrange and view the domain (and the codomain) of $\widetilde{L}$ as $\mathit{Coul}(Y) \oplus i {\ensuremath{\operatorname{Im}(d)}} \oplus i \Omega^0 (Y)$, so that $\widetilde{L} = D \oplus L_1$ (as in (\[eq linearmap3d\])) and $L_1$ is an operator on $i {\ensuremath{\operatorname{Im}(d)}} \oplus i \Omega^0 (Y)$ with a block form $$\begin{aligned} \begin{bmatrix} 0 & -d \\ -d^* &0 \end{bmatrix}. \end{aligned}$$ We now apply the Atiyah-Patodi-Singer boundary-value problem to the operator $\widetilde{D}$. Consequently, we have that the map $$\begin{aligned} \widetilde{D} \oplus (\widetilde{\Pi}^- \circ r) : i \Omega^1 (X) \oplus \Gamma (S^+) &\rightarrow i \Omega_+^2 (X) \oplus \Gamma(S^-) \oplus i \Omega^0 (X) \oplus \widetilde{H}^- \nonumber\end{aligned}$$ is Fredholm, where $\widetilde{\Pi}^-$ is the projection onto the nonpositive eigenspace of $\widetilde{L}$, denoted by $\widetilde{H}^- \subset \mathit{Coul}(Y) \oplus i {\ensuremath{\operatorname{Im}(d)}} \oplus i \Omega^0 (Y)$.
Let $H_1^- \subset i {\ensuremath{\operatorname{Im}(d)}} \oplus i \Omega^0 (Y)$ be the nonpositive eigenspace of $L_1$ and let $\Pi_1^-$ be its spectral projection, so that $\widetilde{\Pi}^- = \Pi_0^- \oplus \Pi_1^-$. Let $W \subset i \Omega^0 (Y)$ be the subspace of locally constant functions and let $\Pi_{2}$ be the projection from $i {\ensuremath{\operatorname{Im}(d)}} \oplus i \Omega^0 (Y)$ onto $ i {\ensuremath{\operatorname{Im}(d)}} \oplus W$ whose kernel is $\left\{ 0 \right\} \oplus \left\{ f \, | \, \int_{Y_i} f = 0 \right\}$. We would like to apply the earlier observation (\[eq sumFredholm\]) to the map $\widetilde{D} \oplus ((\Pi^-_0 \oplus \Pi_2) \circ r)$ because the kernel of $d^* \oplus (\Pi_2 \circ r)$ is precisely $\mathit{Coul}^{\mathit{CC}} (X)$. We start comparing ${\ensuremath{\operatorname{Im}(\Pi^-_1)}}$ and ${\ensuremath{\operatorname{Ker}(\Pi_2)}}$ by observing that $d d^*$ is positive on $i{\ensuremath{\operatorname{Im}(d)}}$. Consequently, the pairs $(b , d^* (d d^*)^{-1/2} b )$ and $(0,c)$ lie in $H_1^-$ for any $b \in i{\ensuremath{\operatorname{Im}(d)}}$ and for any locally constant function $c$. Moreover, the intersection of $H^-_1$ and $\left\{ 0 \right\} \oplus \left\{ f \, | \, \int_{Y_i} f = 0 \right\}$ is trivial, so we can see that any element in $i {\ensuremath{\operatorname{Im}(d)}} \oplus i \Omega^0 (Y)$ can be written uniquely as the sum of elements from these two subspaces. Consequently, the kernel of $\Pi^-_0 \oplus \Pi_{2}$ is complementary to the image of $\Pi^-_0 \oplus \Pi_1^-$. By Proposition 17.2.6 from [@Mono], we can conclude that the operator $$\widetilde{D} \oplus ((\Pi^-_0 \oplus \Pi_{2}) \circ r) : i \Omega^1 (X) \oplus \Gamma (S^+) \rightarrow i \Omega_+^2 (X) \oplus \Gamma(S^-) \oplus i \Omega^0 (X) \oplus (H^-_0 \oplus i {\ensuremath{\operatorname{Im}(d)}} \oplus W)$$ is Fredholm.
From (\[eq sumFredholm\]), we set $D_1= \hat{D}\oplus (\Pi^-_0 \circ r )$ and $D_2 = d^* \oplus (\Pi_2 \circ r)$ and deduce that the map $\hat{D} \oplus (\Pi^-_0 \circ r )$ is Fredholm with index $$\begin{aligned} \operatorname{Ind} (\hat{D} \oplus (\Pi^-_0 \circ r ) ) = \operatorname{Ind} (\widetilde{D} \oplus ((\Pi^-_0 \oplus \Pi_2) \circ r) ) + \dim{{\ensuremath{\operatorname{Coker}(D_2)}}}. \nonumber\end{aligned}$$ To find a formula for the index, one can observe that the operators $\widetilde{D} \oplus (\widetilde{\Pi}^- \circ r)$ and $\widetilde{D} \oplus ((\Pi^-_0 \oplus \Pi_{2}) \circ r) $ have the same index (see the proof of Proposition 17.2.6 in [@Mono]). From the proof of Proposition \[prop 1-decomp\], one can deduce that the cokernel of $D_2$ is isomorphic to the space of constant functions on $Y$. Hence, we obtain $\operatorname{Ind} (\hat{D} \oplus (\Pi^-_0 \circ r ) ) = \operatorname{Ind} (\widetilde{D} \oplus (\widetilde{\Pi}^- \circ r) ) +1 $. The index of $\widetilde{D} \oplus (\widetilde{\Pi}^- \circ r)$ can be computed from the index formulas for the two operators $d^+ + d^*$ and $\slashed{D}^+_{A_0}$ with the spectral boundary condition. For instance, the index for $d^+ + d^*$ is given by $$\begin{aligned} \operatorname{Ind}(d^+ + d^*) = -\frac{1}{2} \int_X \left( \frac{p_1 (X)}{3} + e(X)\right) + \frac{\eta_{sign} - k_{sign}}{2}, \nonumber \end{aligned}$$ where $p_1 (X)$, $e(X)$ are the Pontryagin class and the Euler class of $X$, and $\eta_{sign}$, $k_{sign}$ are the eta invariant and the dimension of the kernel of the odd signature operator on $Y$, respectively. The kernel of the odd signature operator is the space of harmonic $0$-forms and $1$-forms of $Y$.
Using the signature theorem and the Gauss-Bonnet theorem, we have $$\begin{aligned} \sigma(X) + \chi(X) = \int_X \left( \frac{p_1 (X)}{3} + e(X)\right) - \eta_{sign}, \nonumber \end{aligned}$$ which gives $$\begin{aligned} \operatorname{Ind}(d^+ + d^*) = - \frac{\sigma(X) + \chi(X) + b_0 (Y) + b_1(Y)}{2} . \nonumber \end{aligned}$$ One can extract from the cohomology long exact sequence of the pair $(X,Y)$ that $$\sigma(X) + \chi(X) + b_0 (Y) + b_1(Y) = 2 ( b_0 (X) - b_1(X) + b^+ (X) + b_1(Y)).$$ (For instance, when $X = B^4$ and $Y = S^3$, both sides of this identity equal $2$.) Putting everything together, we obtain the desired formula $$\begin{aligned} \operatorname{Ind} (\hat{D} \oplus (\Pi^-_0 \circ r ) ) &= 2\operatorname{Ind}_{\mathbb{C}} (\slashed{D}_{A_0}^+ ) - ( b_0 (X) - b_1(X) + b^+ (X) + b_1(Y)) +1 \nonumber \\ &= 2\operatorname{Ind}_{\mathbb{C}} (\slashed{D}_{A_0}^+ ) + b_1(X) - b^+ (X) - b_1(Y). \label{eq index4db}\end{aligned}$$ Finally, there is an elliptic estimate for $\widetilde{D} \oplus (\widetilde{\Pi}^- \circ r)$ as a consequence of the Atiyah-Patodi-Singer theorem. Since $\Pi^-_0 \oplus \Pi_{2}$ is an isomorphism on the image of $\Pi^-_0 \oplus \Pi_1^-$, we also have an estimate for the operator $\widetilde{D} \oplus ((\Pi^-_0 \oplus \Pi_{2}) \circ r) $ $$\begin{aligned} \left\| x \right\|_{L^2_{k+ 1}} \leq C \left( \left\| \widetilde{D} x \right\|_{L^2_{k}} + \left\| ((\Pi^-_0 \oplus \Pi_{2}) \circ r) x \right\|_{L^2_{k+ 1/2}} + \left\| x \right\|_{L^2} \right).\end{aligned}$$ Restricting $x$ to ${\ensuremath{\operatorname{Ker}(D_2)}} = \mathit{Coul}^{\mathit{CC}} (X)$, we obtain the desired estimate. We can also replace $\Pi^-_0$ by any projection $\Pi^-$ commensurate to $\Pi^-_0$, i.e. a projection such that $\Pi^- - \Pi^-_0$ is compact.
The index will change according to the formula $\operatorname{Ind}(\hat D \oplus (\Pi^- \circ r) ) = \operatorname{Ind}(\hat D \oplus (\Pi^-_0 \circ r) ) + \operatorname{Ind}(\Pi^- \Pi^-_0)$, where $\Pi^- \Pi^-_0$ denotes the Fredholm operator $ \Pi^- \Pi^-_0 : {\ensuremath{\operatorname{Im}( \Pi^-_0)}} \rightarrow {\ensuremath{\operatorname{Im}(\Pi^-)}} $. Furthermore, the constant in the estimate (\[eq ellipest\]) can be fixed for all such commensurate projections. Finite Dimensional Approximation for the Seiberg-Witten map with boundary {#sec main} ========================================================================= We briefly recall the construction of finite dimensional approximation on the boundary 3-manifold $Y$. Throughout the section, we work in a general setting with no restriction on $b_{1}(Y)$ (cf. [@TK1]). At the very end, we will specialize to the case $b_1(Y)=0$ (cf. [@Man1]). On the Coulomb slice $\mathit{Coul}(Y)$, we have a Seiberg-Witten vector field $F$ given by $$\begin{aligned} F(b,\psi) = \left( *db+ {\rho^{-1} \left( \psi \psi^* \right)_0} +\frac{1}{2} *F_{B_0^t } - d \bar{\xi}(\psi) \, , \, \slashed{D}_{B_0 } \psi +b \cdot \psi - \bar{\xi}(\psi) \psi \right), \nonumber\end{aligned}$$ where $\bar{\xi}(\psi)$ is the unique function that satisfies $\Delta \bar{\xi}(\psi) = d^* \left( {\rho^{-1} \left( \psi \psi^* \right)_0} + \frac{1}{2} *F_{B_0^t} \right)$ and $\int_Y \bar{\xi}(\psi) = 0$. This vector field arises from a (nonlinear) projection of the gradient of the Chern-Simons-Dirac functional onto the Coulomb slice. Note that $F$ can also be decomposed as a sum $F = D + Q$ where $D = (*d , \slashed{D}_{B_0} )$ is the linear operator from (\[eq linearmap3d\]) and the nonlinear term $Q$ has nice compactness properties. When necessary, we pick a perturbation $\mathfrak{q}$ on the 3-manifold $Y$. This induces a perturbation on the cylinder $I \times Y$.
On the 4-manifold $X$, we will also pick a perturbation $\hat{\mathfrak{p}}$ supported in the collar neighborhood of $Y$ such that the restriction to $\left\{ 0 \right\} \times Y$ is $\mathfrak{q}$. In addition, as in [@Mono §24.1], we assume that $\hat{\mathfrak{p}}$ is of the form $$\begin{aligned} \hat{\mathfrak{p}} = \beta \mathfrak{q} + \beta_0 \mathfrak{p}_0 , \nonumber\end{aligned}$$ where $\beta$ is a cut-off function with value 1 near the boundary, $\beta_0$ is a bump function supported in $(-C , 0)$, and $ \mathfrak{p}_0$ is another perturbation on $Y$. We will always choose $ \mathfrak{q} , \mathfrak{p}_0 $ from the Banach space of tame perturbations (cf. [@Mono §11]). For the rest of the paper, it is understood that the Seiberg-Witten map and the Chern-Simons-Dirac functional are perturbed. We will continue to write $SW = \hat D + \hat Q$ and $F = D + Q$, as we keep the linear parts the same and add the perturbation terms to the nonlinear parts. When the perturbations are tame, the nonlinear terms $\hat Q$ and $Q$ retain appropriate compactness properties. Choose a closed and bounded subset $\mathcal{R}$ in the $L^{2}_{k+1/2}$ completion of $\mathit{Coul}(Y)$ with the following property: if a trajectory $y(t)$ satisfies $$\begin{aligned} - \frac{\partial}{\partial t} y (t) = F (y(t)) \nonumber\end{aligned}$$ and lies in $\mathcal{R}$ for all time $t\in\mathbb{R}$, then $y(t)$ in fact lies in the interior of $\mathcal{R}$ for all time. This subset $\mathcal{R}$ can be viewed as an isolating neighborhood for the Seiberg-Witten flow. A key result (cf. [@TK1 Proposition 11]) for constructing the Floer spectrum is that the compact subset $\mathcal{R} \cap W$ of a sufficiently large finite dimensional subspace $W$ is an isolating neighborhood for a *compressed flow* on $W$ given by a projected vector field $$\begin{aligned} - \frac{\partial}{\partial t} y (t) = \pi_W F (y(t)), \nonumber\end{aligned}$$ where $\pi_W$ is a projection onto $W$.
When $b_1 (Y) = 0$, a large ball $B(2R)$ in $\mathit{Coul}(Y)$ can be chosen for such an isolating neighborhood (cf. [@Man1 Proposition 3]). For the construction of $\mathcal{R}$, we refer to [@ManK] for the case $b_1 (Y) = 1$ and to [@TK1 §4.4] for the general case. As a result, one can obtain the Conley index ${\ensuremath{\mathcal{I}(\mathcal{R} \cap W)} }$ with respect to this compressed flow (see Appendix \[App con\] for background on Conley index theory). This will allow us to construct the Floer spectrum later on. Now, we consider a map $$\begin{aligned} SW \oplus (\Pi^- \circ r) : \mathit{Coul}^{\mathit{CC}} (X) \rightarrow i \Omega_+^2 (X) \oplus \Gamma(S^-) \oplus H^- ,\end{aligned}$$ where $ \Pi^-$ is a projection onto a semi-infinite subspace $H^-$ commensurate to $H^-_0$ as introduced in Proposition \[prop bFred\] and in the subsequent remark. For convenience, we denote the 4-dimensional part of the codomain, $i \Omega_+^2 (X) \oplus \Gamma(S^-)$, by $\mathcal{V}_X$. Notice that there is a residual action on $\mathit{Coul}^{\mathit{CC}} (X)$ by the quotient $$\mathcal{G} / \mathcal{G}^\bot \simeq H^1(X;\mathbb{Z}) = \mathbb{Z}^{b_1 (X)},$$ which can be viewed as a group of harmonic maps $\mathcal{H}^1_{\mathit{CC}}(X)$ with double Coulomb condition. By Proposition \[prop 1-decomp\], there is a unique map $u$ with $u^{-1} du \in \Omega_{\mathit{CC}}^{1}(X)$ for each cohomology class. The 1-forms $u^{-1} du$ for $u \in \mathcal{H}^1_{\mathit{CC}}(X)$ span a subspace of dimension $b_1(X)$ in the slice $\mathit{Coul}^{\mathit{CC}} (X) $, and we will denote by $\mathcal{U}_X$ its orthogonal complement. This gives a decomposition $\mathit{Coul}^{\mathit{CC}} (X) \simeq \mathcal{U}_X \oplus \mathbb{R}^{b_1(X)}$ and one may view $\mathcal{U}_X$ as a fiber of this (trivial) bundle. If $X$ is closed or the Coulomb-Neumann condition is used, the subspace $\mathcal{U}_X$ can be identified with ${\ensuremath{\operatorname{Im}(d^*)}} \subset {\ensuremath{\operatorname{Ker}(d^*)}}$.
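Concretely, the identification with $H^1(X;\mathbb{Z})$ arises from the standard correspondence $$\begin{aligned} u \longmapsto \left[ \frac{1}{2\pi i} \, u^{-1} du \right] \in H^1(X;\mathbb{Z}) , \nonumber\end{aligned}$$ which induces a bijection between the connected components of $\mathcal{G} = Map(X , S^1)$ and classes in $H^1(X;\mathbb{Z})$; Proposition \[prop 1-decomp\] then singles out, within each component, a representative $u$ whose 1-form $u^{-1} du$ lies in the double Coulomb slice.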
The rest of the construction will closely follow the construction of the relative Bauer-Furuta invariant in [@Man1]. First, we pick a sufficiently large ball $B(R)$ of $\mathcal{U}_X$ in the $L^{2}_{k+1}$ topology. The image of this ball under the restriction map is bounded. We can pick a bounded isolating neighborhood $\mathcal{R}$ containing this image in the slice $\mathit{Coul}(Y)$. The choices of $B(R)$ and $\mathcal{R}$ will depend on universal constants in Corollary \[cor universalb\]. For each positive integer $n$, let $H^{-}_n$ be a semi-infinite subspace of $\mathit{Coul}(Y)$ such that its projection $\Pi^-_n$ is commensurate to $\Pi^-_0$. Since $\hat{D} \oplus (\Pi^-_n \circ r)$ is Fredholm, we pick a finite-dimensional subspace $V_n \oplus W_n$ of the codomain $\mathcal{V}_X \oplus H_n^-$ such that it contains the cokernel of this map. We will require that $H^-_n$, $V_n$, and $W_n$ are sequences of increasing subspaces approaching the whole spaces. In particular, we need that $\mathcal{R} \cap W_n$ is an isolating neighborhood of the compressed Seiberg-Witten flow on $W_n$. For example, as in [@Man1], we may choose $W_n$ to be the subspace containing all eigenspaces of $D$ with eigenvalues in an interval $[\lambda_n , \mu_n ]$ and $H^-_n$ to be the semi-infinite subspace containing all eigenspaces with eigenvalues in an interval $(-\infty , \mu_n ]$ with $-\lambda_n , \mu_n \rightarrow \infty$. Let $U_n$ be the preimage of $V_n \oplus W_n$ under the linear map $\hat{D}\oplus (\Pi^-_n \circ r)$. Consider the projected Seiberg-Witten map $$\begin{aligned} \pi_{V_n \oplus W_n} \circ ( SW \oplus (\Pi^-_n \circ r) ) : U_n \rightarrow V_n \oplus W_n \nonumber \end{aligned}$$ which gives a map $$\begin{aligned} B(R, U_n ) \rightarrow V_{n}\times (\mathcal{R} \cap W_n )\end{aligned}$$ when restricted to the ball.
Let $\epsilon_n$ be a sequence of positive numbers approaching $0$. We will show that, for $n$ sufficiently large, this induces a map of the form $$\begin{aligned} B(R, U_n ) / S(R , U_n) \rightarrow V_{n} / B(\epsilon_n , V_n )^C \wedge N_n / L_n,\end{aligned}$$ where $B(\epsilon_n , V_n )^C$ is the set of elements of $V_{n}$ outside the small ball and $(N_n,L_n)$ is an index pair for $\mathcal{R} \cap W_n$ with respect to the compressed Seiberg-Witten flow on $W_n$. Consequently, this will give a map from a sphere to a suspension of the Conley index $$\begin{aligned} S^{U_n} \rightarrow S^{V_n} \wedge {\ensuremath{\mathcal{I}(\mathcal{R} \cap W_n)} }. \nonumber\end{aligned}$$ A crucial part of the construction is to ensure that we can find such an index pair for which the induced map is well-defined. There are three main ingredients to establish such maps. First, we recall results regarding the moduli space of Seiberg-Witten solutions on a 4-manifold with boundary. Boundedness of X-trajectories ----------------------------- Let $X^* := X \cup_Y ( [0,\infty) \times Y)$ be a manifold with cylindrical ends. A Seiberg-Witten solution on $X^*$, also known as an *$X$-trajectory*, can be viewed as a pair of a solution on $X$ and a half-trajectory on $Y$ with a compatibility condition. In particular, there is a homeomorphism between moduli spaces (cf. [@Mono Lemma 24.2.2]) $$\begin{aligned} \mathcal{M}(X^*) \simeq \mathcal{M}(X) \times_{\mathcal{B}( Y)} \mathcal{M}([0,\infty) \times Y). \label{eq modulidecomp} \end{aligned}$$ For an $X$-trajectory $\gamma$, one can define its topological energy $\mathcal{E}(\gamma)$ (cf. [@Mono §24]).
When $\gamma$ is asymptotic to $\mathfrak{a}$ on the cylindrical end, we have that $$\begin{aligned} \mathcal{E}(\gamma) = C_X - {\mathcal{L}}(\mathfrak{a}), \label{eq Xenergy}\end{aligned}$$ where ${\mathcal{L}}$ is the (perturbed) Chern-Simons-Dirac functional on $Y$ and $C_X$ is a constant which depends only on $X$, a spin$^c$ structure, a metric, and a perturbation. We now state the compactness result in Seiberg-Witten theory. ([@Mono], Proposition 24.6.4) For $C>0$, the space of (broken) $X$-trajectories with energy $ < C$ is compact in the topology defined in [@Mono]. \[prop 4dcompact\] Next, we will show that an $X$-trajectory with finite energy actually has its energy bounded by a universal constant. There is a uniform bound for the energy of any $X$-trajectory with finite energy. When the spin$^c$ structure $\mathfrak{s}$ of $Y$ is nontorsion, we further require that the perturbation is regular in the sense that all moduli spaces of trajectories are regular. \[lem finiteenergy\] First, we observe that an $X$-trajectory with finite energy is always asymptotic to a critical point of ${\mathcal{L}}$ on the cylindrical ends. By (\[eq Xenergy\]), we only need to consider the value of ${\mathcal{L}}$ at its critical points. When $\mathfrak{s}$ is torsion, the statement follows immediately from compactness of the solutions to the 3-dimensional Seiberg-Witten equations modulo gauge. When $\mathfrak{s}$ is nontorsion, we also have to control the gauge action, which can be viewed as a homotopy class of an $X$-trajectory on the quotient configuration space. When the perturbation is regular, the set of critical points modulo gauge is a finite set. The following argument is similar to the finiteness result in monopole Floer homology. Let $[\mathfrak{a}] $ be a class of critical points and fix an $X$-trajectory $[\gamma_0 ]$ asymptotic to $[\mathfrak{a}]$ with homotopy class $\theta_0$. 
Suppose that $[\gamma]$ is an arbitrary element of a moduli space of $X$-trajectories asymptotic to $[\mathfrak{a}]$ with a homotopy class $\theta$. When this moduli space is regular, its dimension is given by a quantity $\operatorname{gr}_{\theta}(X,[\mathfrak{a}])$ with a relation $$\begin{aligned} \operatorname{gr}_{\theta}(X,[\mathfrak{a}]) = \operatorname{gr}_{\theta_0}(X,[\mathfrak{a}]) + \left( [u] \cup c_1 (\mathfrak{s}) \right) \left[Y\right], \label{eq gradediff}\end{aligned}$$ where $[u] \in H^1 (Y; \mathbb{Z})$ corresponds to a class $\theta \theta_0^{-1}$. On the other hand, the difference of energies of $[\gamma]$ and $[\gamma_0]$ is given by $$\begin{aligned} \mathcal{E}(\gamma) - \mathcal{E}(\gamma_0) &= {\mathcal{L}}(\mathfrak{a}) - {\mathcal{L}}(u \cdot \mathfrak{a}) \\ &= -2 \pi^2 \left( [u] \cup c_1 (\mathfrak{s}) \right) \left[Y\right].\end{aligned}$$ Since the dimension $\operatorname{gr}_{\theta}(X,[\mathfrak{a}])$ is nonnegative, the equation (\[eq gradediff\]) implies that $$\mathcal{E}(\gamma) \leq \mathcal{E}(\gamma_0) + 2 \pi^2 \operatorname{gr}_{\theta_0}(X,[\mathfrak{a}]).$$ This finishes the proof as there are finitely many classes of critical points. For the $Pin(2)$-equivariant case, we will also use equivariant perturbations as in the upcoming work of F. Lin [@FLin]. In this context, one considers Morse-Bott version of Floer homology, where a critical point $\mathfrak{a}$ is replaced by a critical submanifold $\mathfrak{C}$. There are analogous compactness and transversality results as well as the relative grading $\operatorname{gr}_{\theta}(X,[\mathfrak{C}])$. The argument above can be directly applied to a nonexact perturbation which is either balanced or positively monotone (cf. [@Mono §29]). We now deduce the uniform boundedness result for $X$-trajectories with finite energy[^2] with respect to a particular gauge fixing. 
A pair $(x,y)$ of a solution $x \in \mathcal{U}_X $ with $\mathit{SW}(x)=0 $ and a half-trajectory $y: [0, \infty) \rightarrow \mathit{Coul}(Y)$ satisfying $- \frac{\partial}{\partial t} y (t) = F (y(t))$ and $r(x) = y(0) $ gives rise to an $X$-trajectory. Moreover, there are constants $B_{k} $ and $ C_k$ such that, for any such pair $(x,y)$ with finite energy, we have - $\left\| x \right\|_{L^2_{k+ 1}} \leq B_k$. - For each $t \geq 0$, there is a harmonic gauge transformation $u_t \in H^1(Y;\mathbb{Z}) $ such that $\left\| u_t \cdot y(t) \right\|_{L^2_{k+ 1/2}} \leq C_k$. \[cor universalb\] The gauge transformation $u_t$ comes from a residual gauge action on $\mathit{Coul}(Y)$. For any $y = (\alpha, \psi) \in \mathit{Coul}(Y) $, there is a unique gauge transformation (up to a constant) $u$ such that $\alpha - u^{-1}du$ satisfies the period condition $\int \beta_j \wedge (\alpha - u^{-1}du) \in [0,2\pi) $, where $\{ \beta_j \}$ is a dual basis of $H_1 (Y;\mathbb{Z})$. This condition was used in the proof of the compactness results in [@Mono]. For the first part, we proceed similarly to the construction of the homeomorphism (\[eq modulidecomp\]); that is, we glue a solution on $X$ and a half-trajectory on $Y$ to obtain a solution on $X^*$. Recall that $y$ is a Coulomb projection of a Seiberg-Witten trajectory $\bar y$ with $\bar y (0) = y(0) $ (cf. [@TK1 §4.2]). We can write a solution $x$ near the boundary of $X$ and $\bar y$ in cylindrical coordinates as in (\[eq 1formcylin\]) $$\begin{aligned} x &= \left( \alpha_1(t) + \beta_1(t) + \gamma_1(t) dt , \psi_1(t) \right), \\ \bar y (t) &= \left( \alpha_2(t) + \beta_2(t) + \gamma_2(t) dt , \psi_2(t) \right).\end{aligned}$$ The hypotheses $x \in \mathcal{U}_X$ and $r(x) =y$ imply that $\alpha_1(0) = \alpha_2(0)$, $\beta_1(0) = \beta_2(0) = 0$, and $\psi_1(0) = \psi_2(0)$. The only remaining issue is that $\bar y$ is in temporal gauge, so that $\gamma_2(0) =0$, whereas $\gamma_1(0)$ is not necessarily zero. 
Let $f: [0,\infty) \rightarrow [0,\infty)$ be a smooth compactly supported function with $f(0) = 1$ and consider the gauge transformation $U(t) = e^{\int_0^t f(s) \gamma_1(0) ds}$ on $[0,\infty) \times Y$. We see that $U^{-1} d U = \int_0^t f(s)\, d_Y (\gamma_1(0))\, ds + f(t) \gamma_1(0) dt$ and $U(0) = id$; in particular, at $t=0$ the $dt$-part of $U^{-1} d U$ is $\gamma_1(0)\, dt$, which converts $\gamma_2(0) = 0$ into $\gamma_1(0)$, so that $x$ and $U(t) \bar y(t)$ now agree on the boundary $\{ 0\} \times Y$. Note that the resulting solution on $X^*$ is in a mixed gauge condition: the part on $X$ is in the double Coulomb gauge and the part on $[0,\infty) \times Y$ is in temporal gauge away from the boundary. One can always turn a solution on $X^*$ into a solution in this mixed gauge in a unique way. For the second part, Proposition \[prop 4dcompact\] combined with Lemma \[lem finiteenergy\] gives universal bounds on the Sobolev norms of $X$-trajectories with finite energy in the above mixed gauge. Since the restriction and Coulomb projection are continuous, we have universal bounds for the Sobolev norms of a pair $(x,y)$ described above, as desired. Approximated solutions ---------------------- Second, we will need a convergence result for approximated $X$-trajectories. The idea is to combine finite-dimensional approximation arguments for both closed 4-manifolds and closed 3-manifolds. For the rest of the section, we let $\left\{ \hat{\pi}_n \right\}$ be a sequence of projections onto finite-dimensional subspaces $V_n$ of $\mathcal{V}_X$ such that $\hat{\pi}_n \rightarrow id$ strongly. Similarly, let $\left\{ \Pi^-_n \right\}$ be a sequence of projections onto semi-infinite subspaces $H^-_{n}$ of $\mathit{Coul}(Y)$ with the following properties: - $H^-_{n}$ contains the nonpositive eigenspace $H^-_0$ of $D$, - $ \Pi^-_n$ is commensurate to $ \Pi^-_0$, the projection onto $ H^-_0$, - $\Pi^-_n \rightarrow id$ strongly. We also let $\left\{ \pi_n \right\}$ be a sequence of projections onto finite-dimensional subspaces $W_{n}$ of $H^-_{n}$ such that $\pi_n \rightarrow id$ strongly. 
In addition, we require that the commutator $D \pi_n - \pi_n D \rightarrow 0$ in operator norm. The following proof is almost the same as that of Proposition 6 in [@Man1], except for a slight simplification because we do not need to consider a nonlinear Coulomb projection in the argument. \[lemma 4dhalflimit\] Let $\left\{ x_n \right\}$ be a bounded sequence in the $L^2_{k+1}$ completion of $\mathcal{U}_X$ such that $\hat{D} x_n \in V_n$ and $(\Pi^-_n \circ r)x_n \in W_n$. Suppose that $ (\hat{D} + \hat{\pi}_n \hat{Q} ) x_n \rightarrow 0$ in $L^2_{k}$ and there is a sequence of half-trajectories $y_n : [0,\infty) \rightarrow W_n $ uniformly bounded in the $L^2_{k + 1/2}$ completion of $\mathit{Coul}(Y)$ such that $$\begin{aligned} - \frac{\partial}{\partial t} y_n (t) = \pi_n F y_n(t) ,\end{aligned}$$ together with $y_n (0) = (\Pi^-_n \circ r) x_n$. Then, after passing to a subsequence, the sequence $\left\{ x_n \right\}$ converges to a Seiberg-Witten solution $x$ in $L^2_{k+1}$ and there exists a Seiberg-Witten half-trajectory $y $ with $y(0) = r(x)$ such that $y_n(t)$ converges to $y(t)$ in $L^2_{k+ 1/2}$ for all $t \ge 0$. Since $\left\{ x_n \right\}$ is bounded, there is a subsequence of $\left\{ x_n \right\}$ that converges to some $x$ weakly in the $L^2_{k+1}$ norm. After passing to this subsequence, we have strong convergence $x_n \rightarrow x$ in $L^2_{k}$ by the Rellich lemma. Since a bounded linear map preserves weak limits and $\hat{Q}$ is continuous in $L^2_k$, we also see that $(\hat{D} + \hat{Q} ) x_n $ converges to $(\hat{D} + \hat{Q} ) x $ weakly in the $L^2_k$ norm. 
On the other hand, we have $$\begin{aligned} \left\| (\hat{D} + \hat{Q} ) x_n \right\|_{L^2_k} \leq \left\| (\hat{D} + \hat{\pi}_n \hat{Q} ) x_n \right\|_{L^2_k} + \left\| (1- \hat{\pi}_n ) \hat{Q} x_n \right\|_{L^2_k} .\end{aligned}$$ The first term goes to $0$ by the hypothesis while the second term also goes to $0$ because $(1- \hat{\pi}_n )$ converges to $0$ uniformly on a compact set (the image of a bounded set under $\hat{Q}$). Hence, $(\hat{D} + \hat{Q} ) x $ must be equal to $0$. Moreover, we have $$\begin{aligned} \left\| \hat{D}(x_n - x) \right\|_{L^2_k} \leq \left\| (\hat{D} + \hat{Q} ) x_n \right\|_{L^2_k} + \left\| \hat{Q} x - \hat{Q} x_n \right\|_{L^2_k} \rightarrow 0 . \label{eq converg1}\end{aligned}$$ Next, we move on to the 3-dimensional part. Similar to the proof of Proposition 3 in [@Man1], there is a half-trajectory $y : [0,\infty) \rightarrow \mathit{Coul}(Y) $ such that $y_n (t) \rightarrow y(t)$ in $L^2_{k+ 1/2}$ uniformly on any compact subset of the open half-line $(0,\infty) $ but the convergence holds only in $L^2_{k- 1/2}$ on a compact subset of the closed half-line $[0,\infty)$. In addition, $y$ is a Seiberg-Witten trajectory $$\begin{aligned} - \frac{\partial}{\partial t} y (t) = F y(t) .\end{aligned}$$ Applying the fundamental theorem of calculus to $ e^{t D} \Pi^-_0 \gamma(t)$, we have $$\begin{aligned} e^D \Pi^-_0 \gamma(1) - \Pi^-_0 \gamma (0) &= \int_0^1 e^{t D} \Pi^-_0 \left( D \gamma(t) + \frac{\partial}{\partial t} \gamma(t) \right) dt, \nonumber\end{aligned}$$ where we use the fact that $D$ and $\Pi^-_0$ commute. We will consider the integrand when $\gamma = y - y_n$ and use a decomposition $$\begin{aligned} (D + \frac{\partial}{\partial t} ) (y_n - y) = ( D \pi_n - \pi_n D ) y_n + \pi_n (Q y - Q y_n ) + (1 - \pi_n ) Q y . 
\label{eq converg2} \end{aligned}$$ For the last two terms, we use the fact that $e^{ D} \Pi^-_0$ and $Q$ are bounded maps on $L^2_{k+ 1/2}$, so that, for some constant $R_0$, $$\begin{aligned} \left\| e^{t D} \Pi^-_0 \left( \pi_n (Q y (t) - Q y_n (t)) + (1 - \pi_n ) Q y(t) \right) \right\|_{L^2_{k+ 1/2}} \leq R_0 , \label{eq R0ofQ}\end{aligned}$$ uniformly on $t \in [0,1]$. Note that $e^{ D} \Pi^-_0$ is bounded because we consider the exponential of $D$ restricted to its nonpositive eigenspace. Let us fix $\delta > 0$. By continuity of $Q$, we have that $Q y_n (t) \rightarrow Q y(t)$ in $L^2_{k+ 1/2}$ uniformly on $[\delta , 1]$. Moreover, $\left\| y(t) \right\|_{L^2_{k+ 1/2}}$ is uniformly bounded on this interval. By compactness of $Q$, we can conclude that $(1 - \pi_n ) Q y(t) \rightarrow 0$ in $L^2_{k+ 1/2}$ uniformly on $[\delta , 1]$ as well. As a result, for any $\epsilon >0$, we can find a sufficiently large integer $N_0$ depending on a fixed $\delta_0 < \epsilon/(2R_0)$ so that the integral $$\begin{aligned} \int_{\delta_0}^1 \left\| e^{t D} \Pi^-_0 \left( \pi_n (Q y (t) - Q y_n (t)) + (1 - \pi_n ) Q y(t) \right) \right\|_{L^2_{k+ 1/2}} dt < \epsilon/2\end{aligned}$$ when $n > N_0$. Using (\[eq R0ofQ\]), we add the integral on $[0,\delta_0]$ and obtain, for $n > N_0$, $$\begin{aligned} \int_{0}^1 \left\| e^{t D} \Pi^-_0 \left( \pi_n (Q y (t) - Q y_n (t)) + (1 - \pi_n ) Q y(t) \right) \right\|_{L^2_{k+ 1/2}} dt < \delta_0 R_0 + \epsilon/2 < \epsilon . \end{aligned}$$ For the first term on the right hand side of (\[eq converg2\]), we use the hypothesis that the commutator $D \pi_n - \pi_n D \rightarrow 0$ as a bounded operator on $L^2_{k+ 1/2}$. Since $\{ y_n \}$ is uniformly bounded, we see that $( D \pi_n - \pi_n D ) y_n (t) \rightarrow 0$ in $L^2_{k+ 1/2}$ uniformly on $[0, \infty )$. 
Putting everything together, we have $$\begin{aligned} \left\| \Pi^-_0 (y(0) - y_n (0)) \right\| &\leq \left\| e^D \Pi^-_0 (y(1) - y_n (1)) - \Pi^-_0(y(0) - y_n(0)) \right\| + \left\| e^D \Pi^-_0 (y(1) - y_n (1)) \right\| \\ &\leq \int_0^1 \left\| e^{t D} \Pi^-_0 (D + \frac{\partial}{\partial t} ) (y_n (t) - y(t)) \right\| dt + \left\| e^D \Pi^-_0 (y(1) - y_n (1)) \right\| \end{aligned}$$ and we can conclude that $\Pi^-_0 y_n (0) \rightarrow \Pi^-_0 y (0) $ in the $L^2_{k+ 1/2}$ topology. Since $\Pi^-_n r( x_n) = y_n (0) $ and $H^-_0 \subset H^-_n$, we have $\Pi^-_0 r (x_n) = \Pi^-_0 y_n (0) $. On the other hand, we see that $\Pi^-_0 r (x_n)$ converges to $\Pi^-_0 r(x)$ weakly in $L^2_{k+ 1/2}$ because $x_n$ converges to $x$ weakly in $L^2_{k+1}$ and $\Pi^-_0 r$ is a bounded linear map. Thus we must have $\Pi^-_0 r (x) = \Pi^-_0 y(0)$. Together with (\[eq converg1\]), the elliptic estimate (\[eq ellipest\]) implies that $x_n$ converges to $x$ in $L^2_{k+1}$. Since $r$ is a bounded linear map, we also have that $r(x_{n })$ converges to $r(x)$ in $L^2_{k+ 1/2}$. Then, we observe that $$\begin{aligned} \left\Vert y_n (0) - r(x)\right\Vert = \left\Vert \Pi^-_n r( x_n) - r(x)\right\Vert \leq \left\Vert \Pi^-_n \left(r( x_n) - r(x)\right)\right\Vert + \left\Vert \left( 1 - \Pi^-_n \right)r(x)\right\Vert \end{aligned}$$ so that $y_n(0)$ converges to $r(x) $ in $L^2_{k+ 1/2}$ because $\Pi^-_n $ converges to the identity strongly. Since $y_n(0)$ converges to $y(0)$ in $L^2_{k- 1/2}$, we must also have $ r (x) = y(0)$ and the convergence $y_n(0) \rightarrow y(0)$ in $L^2_{k+ 1/2}$. The main results ---------------- The last ingredient is a technical lemma from Conley index theory to guarantee the existence of an appropriate index pair. 
Recall that we want a map of the form $$\begin{aligned} B(R, U_n ) / S(R , U_n) \rightarrow V_{n} / B(\epsilon_n , V_n )^C \wedge N_n / L_n .\end{aligned}$$ The situation is almost the same as the setup of Proposition \[prop maptocon\] in Appendix \[App con\] except that there is a map in the first factor. Since we are collapsing everything outside of the ball in $V_n $, we can focus only on the second factor by considering $$\begin{aligned} \Pi^-_n \circ r: B(R, U_n ) \cap (\hat{\pi}_n SW)^{-1}( B(\epsilon_n , V_n )) \rightarrow \mathcal{R} \cap W_n . \label{eq preSW1} \end{aligned}$$ Note that the image of $B(R, U_n )$ under $\Pi^-_n \circ r$ is already in $W_n$ by the choice of $U_n$. It then remains to verify that this map satisfies the hypothesis of Proposition \[prop maptocon\] with $A =B(R, U_n ) \cap (\hat{\pi}_n SW)^{-1}( B(\epsilon_n , V_n )) $ and $B = S(R, U_n ) \cap (\hat{\pi}_n SW)^{-1}( B(\epsilon_n , V_n ))$. We now state the main result. Let us fix a sufficiently large radius $R$ and a sufficiently large isolating neighborhood $\mathcal{R}$ depending on the radius $R$. With the above notation, for $n$ sufficiently large, we obtain a map $$\begin{aligned} S^{U_n} \rightarrow S^{V_n} \wedge {\ensuremath{\mathcal{I}(\mathcal{R} \cap W_n)} } \label{eq finitemap}\end{aligned}$$ induced from the map $\pi_{V_n \oplus W_n} \circ ( SW \oplus (\Pi^-_n \circ r) ): B(R, U_n ) \rightarrow V_{n}\times (\mathcal{R} \cap W_n ) $. We argue by contradiction. Suppose that, after passing to a subsequence, there is a sequence of $V_n , W_n$, and $\epsilon_n$ such that the image of the map (\[eq preSW1\]) does not satisfy the hypothesis of Lemma \[relCon\]. This gives a sequence $x_n \in B(R, U_n ) \cap (\hat{\pi}_n SW)^{-1}( B(\epsilon_n , V_n ))$ whose image $\Pi^-_n \circ r (x_n)$ lies in the invariant set of the compressed flow on $\mathcal{R} \cap W_n$ in the positive time direction. 
In other words, we have a sequence of approximated half-trajectories $y_{n} : [0,\infty) \rightarrow \mathcal{R} \cap W_n $ with $- \frac{\partial}{\partial t} y_n (t) = \pi_n F y_n(t)$ and $y_n(0) = \Pi^-_n \circ r (x_n)$. We are now in the setup of Lemma \[lemma 4dhalflimit\]. As a result, the sequence $\{x_n\}$ converges to a 4-dimensional solution $x$ and $\{y_n\}$ converges to a Seiberg-Witten half-trajectory $y$ with $r(x) = y(0)$. Together, we have an $X$-trajectory with finite energy and universal constants as in Corollary \[cor universalb\]. There are two cases to consider. Case 1: $x_n \in S(R,U_n)$. Here, we choose $R$ larger than the universal constant $B_k$. From Corollary \[cor universalb\], this is a contradiction since we have an $X$-trajectory with $ \left\| x \right\|_{L^2_{k+ 1}} = R > B_k$. Case 2: There exist $t_n \geq 0$ such that $y_n(t_n) \in \partial \mathcal{R} \cap W_n $. Here, we choose an isolating neighborhood $\mathcal{R}$ arising from transverse cut-off of a union of balls $\mathbb{Z}^{b_1 (Y)} \cdot B(R' , \mathit{Coul}(Y))$ in the $L^2_{k+1/2}$ norm (cf. [@TK1] and [@ManK]) with $R'$ larger than the universal constant $C_k$. The limit $y(t)$ is asymptotic to a critical point $\mathfrak{a}$ on the cylindrical end with $\mathfrak{a} \in {\ensuremath{\operatorname{Int}(\mathcal{R})}}$. This implies that $\{ t_n \}$ is bounded, so that, after passing to a further subsequence, $t_{n} \rightarrow t_0 \geq 0$ and $y(t_0) \in \partial \mathcal{R}$. This is a contradiction as $\left\| u_{t_0} \cdot y(t_0 ) \right\|_{L^2_{k+ 1/2}} = R' > C_k$. Let us keep track of the choices made in the construction. The choice of $\epsilon_n$ does not matter as long as it is sufficiently small. From Proposition \[prop maptocon\], the map is independent of the choice of index pairs. After passing to stable maps, the map is also independent of the choice of $V_n $. 
For simplicity, we will specialize to the case when $W_n = V^{\mu_n}_{\lambda_n}$, the sum of eigenspaces of $D$ with respect to eigenvalues in an interval $[\lambda_n , \mu_n ]$, and $H^-_n = V^{\mu_n}_{-\infty}$ defined similarly. One can show that there is an isomorphism between Conley indices $$\begin{aligned} \Sigma^{-V^0_{\lambda_n}} {\ensuremath{\mathcal{I}(\mathcal{R} \cap V^{\mu_n}_{\lambda_n})} } \simeq \Sigma^{-V^0_{\lambda_{n+1}}} {\ensuremath{\mathcal{I}(\mathcal{R} \cap V^{\mu_{n+1}}_{\lambda_{n+1}})} }. \nonumber\end{aligned}$$ Consequently, we can desuspend the Conley index on the right hand side of (\[eq finitemap\]) by the corresponding negative eigenspace as above so that the resulting object, denoted by $E(\mathcal{R})$, does not depend on the choice of $V^{\mu_n}_{\lambda_n}$. Applying the index formula (\[eq index4db\]) to (\[eq finitemap\]), we obtain $$\begin{aligned} \mathbf{S}^{(- b^+ (X) - b_1(Y))\mathbb{R} + \operatorname{Ind}_{\mathbb{C}} (\slashed{D}_{A_0}^+ )\mathbb{C} } \rightarrow E(\mathcal{R}), \label{eq prefinal}\end{aligned}$$ where we note that $\mathcal{U}_X$ is a subspace of $\mathit{Coul}^{CC} (X)$ of codimension $b_1(X)$. For the rest of the section, let us consider the case when $b_1 (Y) = 0$. [(of Theorem \[thm b1=0\])]{} The same argument as in [@Man1] shows that different choices of sufficiently large radii $R$ and sufficiently large isolating neighborhoods $\mathcal{R}$ (which can be chosen to be the balls $B(2R)$ in $\mathit{Coul}(Y)$) give maps in the same stable homotopy class. Note that, in this $b_1(Y)=0$ case, we do not require the perturbation to be regular, so we can choose any $ \mathfrak{q} , \mathfrak{p}_0 $ from the Banach space of tame perturbations together with any suitable bump functions $\beta_0 , \beta$. 
Consequently, the choice of connections, metrics, and perturbations does not matter because the spaces of these choices are all contractible, except that we need to desuspend $E(B(2R))$ again by $\operatorname{Ind}_{\mathbb{C}} (\slashed{D}_{A_0}^+ ) + \frac{\sigma(X) - c_1 (\det S^+)^2}{8}$ complex dimensions to obtain the Floer spectrum $SWF(Y)$. Putting everything together, we obtain (\[eq mainb1=0\]) from (\[eq prefinal\]). Moreover, we obtain (\[eq mainth\]) by considering a family of the above maps parametrized by the Picard torus of $X$. Maps to Conley indices {#App con} ====================== In this appendix, we will briefly recall essential parts of Conley index theory. A thorough treatment can be found in [@Conley] and [@SalaCon]. Let $\phi$ be a flow on a finite-dimensional manifold $M$ (or, more generally, a locally compact Hausdorff topological space). Denote the flow action by $\phi(x,t)$ or $x \cdot t$ for $x\in M$ and $t \in \mathbb{R}$. Let $X$ be a subset of $M$. 1. The invariant subset in the positive direction is given by $A^+(X) := \left\{ x \in X | x \cdot \mathbb{R}^+ \subset X \right\}$. 2. The *maximal invariant subset* of $X$ is given by ${\ensuremath{\operatorname{Inv}(X)}} = \left\{ x \in X | x \cdot \mathbb{R} \subset X \right\}$. 3. A compact subset $X$ of $M$ is called an *isolating neighborhood* if ${\ensuremath{\operatorname{Inv}(X)}}$ is contained in ${\ensuremath{\operatorname{Int}(X)}}$, the interior of $X$. 4. A compact subset $S$ of $M$ is called an *isolated invariant set* if there is an isolating neighborhood $X$ so that ${\ensuremath{\operatorname{Inv}(X)}} = S$. Given an isolated invariant set or an isolating neighborhood, one can extract topological data called the Conley index, which can be viewed as a generalization of the Morse index. Now, we introduce the important concept of an index pair. \[def indexp1\] Let $S$ be an isolated invariant set. 
A pair of compact subsets $(N,L)$ is called an *index pair* for $S$ if the following conditions hold: 1. $S \subset {\ensuremath{\operatorname{Int}(\operatorname{cl}(N \backslash L ))}} $ and $S = {\ensuremath{\operatorname{Inv}(\operatorname{cl}(N \backslash L ) )}}$, 2. $L$ is positively invariant relative to $N$, i.e. the condition $x \in L$ and $x \cdot [0,t] \subset N$ implies $x \cdot [0,t] \subset L$. 3. $L$ is an exit set for $N$, i.e. if $x \in N $ but $x \cdot [0, \infty) \nsubseteq N$, then there exists $t>0$ such that $x \cdot [0,t] \subset N$ and $x \cdot t \in L$. For an isolating neighborhood $X$ with ${\ensuremath{\operatorname{Inv}(X)}} = S$, we will also call $(N,L)$ an index pair for $X$ if it is an index pair for $S$. Fundamental results in Conley index theory state that an index pair always exists and that all such pairs are homotopy equivalent. As a result, one may view the Conley index as an invariant which assigns the homotopy type of such index pairs to an isolated invariant set. However, it is also important to consider the Conley index as a collection of all index pairs and natural homotopy equivalences between them. One motivation for this is to reduce the ambiguity in the choice of index pairs in various constructions. For an isolated invariant set (or an isolating neighborhood) $S$, we define its *Conley index* ${\ensuremath{\mathcal{I}(S)} }$ as a collection of objects consisting of pointed spaces $ (N/L , [L])$ arising from index pairs $(N,L)$ for $S$. For any two index pairs, we also have a collection of *flow maps* induced from the flow. These flow maps are homotopy equivalences and are naturally homotopic to each other. Such a collection of spaces and maps between them is also known as a connected simple system. See [@Kur2] or [@SalaCon] for the details. In this paper, we will also need to construct maps from spaces to Conley indices. 
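To make the notion of an index pair concrete, here is an elementary textbook-style example (ours, not taken from the source): the repelling fixed point of a linear flow.

```latex
% Flow x \cdot t = e^{t} x on M = \mathbb{R}; the origin S = \{0\} is an
% isolated invariant set (every other point escapes to infinity). Take
%   N = [-1, 1]    (an isolating neighborhood),
%   L = \{-1, 1\}  (the exit set: every x \neq 0 in N leaves through \pm 1).
% The three conditions of the definition are checked directly; collapsing
% the exit set gives the pointed space
(N, L) = \bigl( [-1,1],\ \{-1,1\} \bigr), \qquad N / L \simeq S^1 .
```

The Conley index of this repeller is thus a pointed circle, matching the Morse index $1$ of the unstable critical point.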
Under certain hypotheses, a map from a space to an isolating neighborhood can give rise to a map to an index pair. We begin with a lemma shown in the Appendix of [@Man1]. \[relCon\] Let $X$ be an isolating neighborhood with ${\ensuremath{\operatorname{Inv}(X)}} = S$. If a pair $(A,B)$ of compact subsets of $X$ satisfies the following: 1. If $x \in A^+(X) \cap A$, then $x \cdot [0,\infty) \cap \partial X = \emptyset $, 2. $ B \cap A^+(X) = \emptyset$, then there exists an index pair $(N,L)$ of $S$ such that $A \subset N \subset X$ and $B \subset L$. Now, suppose that we have a map $f : A \rightarrow X$ and a compact subset $B$ of $A$. If the pair $(f(A) , f(B))$ satisfies the hypothesis of the above lemma, there exists an index pair $(N,L)$ containing $(f(A),f(B))$ and we obtain a map $f : A/B \rightarrow N/L $ induced from the inclusion. It remains to show that this map is independent (up to homotopy) of the choice of index pairs so that it gives a well-defined map from $A/B$ to the Conley index ${\ensuremath{\mathcal{I}(X)} }$. Given two index pairs $(N_1 , L_1)$ and $(N_2 , L_2)$ with $f(A) \subset N_1 \cap N_2$ and $f(B) \subset L_1 \cap L_2$, we wish to show that the diagram below commutes up to homotopy $$\begin{array}{ccc} A/B & \stackrel{f_1}{\longrightarrow} & N_1 / L_1 \\ & \underset{f_2}{\searrow} & \big\downarrow {\scriptstyle F} \\ & & N_2 / L_2 \end{array}$$ where $f_1 , f_2$ are maps induced by inclusions and $F$ is a flow map from $(N_1 , L_1)$ to $(N_2 , L_2)$. We point out that this is straightforward when $(N_1 , L_1) \subset (N_2 , L_2)$ because the inclusion $N_1 / L_1 \hookrightarrow N_2 / L_2$ is homotopic to a flow map $N_1 / L_1 \stackrel{F}{\rightarrow} N_2 / L_2$ (cf. [@Kur2 Proposition 3.1]). In the general case, we will construct a sequence of inclusions that relates $(N_1 , L_1)$ and $(N_2 , L_2)$ through index pairs which contain $(f(A),f(B))$. 
Since the subsets $N_{i}$ and $L_{i}$ are contained in $X$, we will consider a pair $(N_i \cup P(L_i , X) , P(L_i , X) )$ where $P(L_{i},X) := \left\{ y \cdot t \, | \, y \in L_i ,\ t \geq 0, \text{ and } y \cdot [0,t] \subset X \right\}$ is the *minimal positively invariant* set of $L_{i}$ relative to $X$. It is not hard to see that these are index pairs. In addition, the subsets $N_i \cup P(L_i , X)$ and $P(L_i , X)$ are positively invariant relative to $X$. Furthermore, we claim that the intersection $\bigcap_{i = 1,2} (N_i \cup P(L_i , X ) , P(L_i , X) )$ is also an index pair. Let us suppose that $x \in \bigcap_{i = 1,2} \left( N_i \cup P(L_i , X ) \right) $ and $x \cdot \mathbb{R}^+ \nsubseteq X$. By the exit set property of the pair $(N_i \cup P(L_i , X) , P(L_i , X) )$, there exists $t_i$ such that $x \cdot [0,t_i] \subset N_i \cup P(L_i , X)$ and $x \cdot t_i \in P(L_i , X)$ for $i = 1,2$. Without loss of generality, we may assume that $t_1 \geq t_2$. Since the subsets $N_2 \cup P(L_2 , X)$ and $P(L_2 , X)$ are positively invariant relative to $X$, we see that $x \cdot [0,t_1] \subset N_2 \cup P(L_2 , X)$ and $x \cdot t_1 \in P(L_2 , X)$ as well. This implies that $\bigcap_{i=1,2} P(L_i , X )$ is an exit set for $\bigcap_{i =1,2} \left( N_i \cup P(L_i , X ) \right)$. It is straightforward to check the other properties and verify that the intersection $\bigcap_{i = 1,2} (N_i \cup P(L_i , X ) , P(L_i , X) )$ is an index pair. Note that, in general, the intersection of two index pairs need not be an index pair. We now have a sequence of inclusions of index pairs containing $(f(A),f(B))$. This is shown in the diagram below (we abbreviate $P(L)$ for $P(L , X)$ in the diagram). $$\begin{array}{ccccc} (N_1 \cup P(L_1 ) , P(L_1 ) ) & \hookleftarrow & \bigcap_{i = 1,2} (N_i \cup P(L_i ) , P(L_i ) ) & \hookrightarrow & (N_2 \cup P(L_2 ) , P(L_2 ) ) \\ \cup & & & & \cup \\ (N_1 , L_1) & & & & (N_2, L_2) \end{array}$$ From the above discussion, we can conclude the following. Let $X$ be an isolating neighborhood and $B \subset A$ be compact sets. 
Suppose that there is a map $f : A \rightarrow X$ such that the pair $(f(A),f(B))$ satisfies the hypothesis of Lemma \[relCon\]. Then, we have a well-defined map $f : A/B \rightarrow {\ensuremath{\mathcal{I}(X)} }$ induced from the inclusion. \[prop maptocon\] [^1]: This $k$ corresponds to a half-integer $k+1/2 $ in [@Man1]. [^2]: This is analogous to the finite type condition in [@Man1]. However, the finite energy condition implies the finite type condition (cf. [@Mono §5]).
--- abstract: 'We study the state-resolved production of the neon ion after resonant photoionization of Ne via the $2s$–$3p$ Fano resonance. We find that, by tuning the photon energy across the Fano resonance, surprisingly high control over the alignment of the final $2p$ hole along the polarization direction can be achieved. In this way, hole alignments can be created that are otherwise very hard to achieve. The mechanism responsible for this hole alignment is the destructive interference of the direct and indirect (via the autoionizing $2s^{-1}3p$ state) ionization pathways of $2p$. By changing the photon energy the strength of the interference varies and $2p$-hole alignments with ratios up to 19:1 between $2p_0$ and $2p_{\pm 1}$ holes can be created—an effect normally only encountered in tunnel ionization using strong-field IR pulses. Including spin-orbit interaction does not change the qualitative feature and leads only to a reduction in the alignment by $2/3$. Our study is based on a time-dependent configuration-interaction singles (TDCIS) approach which solves the multichannel time-dependent Schrödinger equation.' author: - 'Elisabeth Heinrich-Josties' - Stefan Pabst - Robin Santra bibliography: - 'amo.bib' - 'books.bib' - 'notes.bib' title: '[Controlling the $2p$ Hole Alignment in Neon via the $2s$–$3p$ Fano Resonance]{}' --- Introduction {#sec1} ============ Fano resonances [@Fa-PhysRev-1961] appear in almost any field of physics, ranging from atomic physics to solid-state physics and optics [@MiFl-RMP-2010]. Their most characteristic feature is the asymmetric line profile [@KrPa-AJP-2014], which results from the coherent interference of a direct continuum channel and an indirect channel which involves a discrete quasi-bound state [@BaDu-AJP-2004]. These asymmetric line shapes were first discussed in atomic physics in the context of photoabsorption [@Be-ZPhys-1935] and electron scattering [@La-RadResSupp-1959]. 
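For reference, the asymmetric line shape mentioned above is commonly parametrized by Fano's formula $\sigma(\epsilon) \propto (q+\epsilon)^2/(1+\epsilon^2)$, with reduced energy $\epsilon = (E-E_r)/(\Gamma/2)$ and asymmetry parameter $q$ [@Fa-PhysRev-1961]. A minimal numerical sketch (ours; the parameter values below are illustrative placeholders, not those of the Ne $2s$–$3p$ resonance):

```python
def fano_profile(E, E_r, gamma, q, sigma_a=1.0, sigma_b=0.0):
    """Fano line shape: sigma_a * (q + eps)**2 / (1 + eps**2) + sigma_b,
    with reduced energy eps = (E - E_r) / (gamma / 2)."""
    eps = (E - E_r) / (gamma / 2.0)
    return sigma_a * (q + eps) ** 2 / (1.0 + eps ** 2) + sigma_b

# The profile minimum sits at eps = -q, i.e. E = E_r - q * gamma / 2, where
# the resonant contribution vanishes (maximally destructive interference).
print(fano_profile(9.5, E_r=10.0, gamma=0.5, q=2.0))   # minimum: 0.0
print(fano_profile(10.0, E_r=10.0, gamma=0.5, q=2.0))  # on resonance: q**2 = 4.0
```

Far from the resonance, $\epsilon \to \pm\infty$ and the profile approaches the background value $\sigma_a + \sigma_b$, recovering a flat continuum cross section.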
In the last decades, there has been an increasing interest in Fano resonances in the presence of strong-field [@LaZo-PRA-1981; @TaGr-PRA-2012] and ultrashort [@WiBu-PRL-2005; @WaCh-PRL-2010] pulses. Strong-field pulses modify the ionization continuum and alter [@OtKa-Science-2013] or even destroy [@TaGr-PRA-2012] the characteristic Fano profile. With attosecond and femtosecond pulses the electron motion of an autoionization process can be studied [@ArLi-PRL-2010; @GiCh-RPL-2010]. Also the interplay of Fano resonances with free-electron laser pulses has been investigated [@NiMe-PRA-2009]. With the rapid advances in laser technology, it is nowadays possible to study the influence of the details of the ionization process on the parent ion and not only on the ionized photoelectron [@LoLe-PRL-2007; @GoKr-Nature-2010]. Here, transient absorption spectroscopy has been used to probe population and coherences within the parent ion [@SaYa-PRA-2011]. The high control of the delay between the pump and probe pulses makes it possible to measure even the sub-cycle ionization dynamics and the hole population build-up [@LoGr-ChemPhys-2007; @WiGo-Science-2011; @PaSy-PRA-2012]. Also the magnitude of the magnetic quantum number of the hole can be resolved providing information about the hole alignment within an $nl$-subshell [@GoKr-Nature-2010; @SaYa-PRA-2011; @WiGo-Science-2011]. In this paper, we show how in photoionization the interference of the direct and indirect ionization pathways results in an unusual ionic state with a highly aligned ionic hole. Specifically, we consider photoionization of neon with a photon energy that is resonant with the autoionizing $2s^{-1}3p$ state. We investigate what influence this resonance has on the ionic hole that is eventually formed in the $2p$ shell. Our calculations are performed using the time-dependent configuration-interaction singles (TDCIS) approach [@GrSa-PRA-2010]. 
The correlation-driven autoionization process ($2s^{-1}3p_0 \rightarrow 2p^{-1}_m\,\varepsilon l_m$) produces the same final states as the direct $2p$ photoionization ($2s^22p^6 + \gamma \rightarrow 2p_m^{-1} \varepsilon\, l_m$). The constructive and destructive interferences of these two pathways lead to the characteristic Fano profile in the photoionization cross section [@CoMa-PR-1967; @ScDo-PRA-1996; @LaBe-JPB-1997; @ArLi-PRL-2010]. Also the photoelectron angular distribution (characterized by the asymmetry parameter $\beta$) varies strongly across the Fano resonance [@LaBe-JPB-1997]. In addition, as we demonstrate here, there is also a profound effect on the $2p$ hole, which cannot be deduced from the angular photoelectron distribution. When tuning the photon energy across the resonance, the hole alignment (the ratio between $2p_0$ and $2p_{\pm 1}$ hole populations) varies dramatically from ratios around 1.6 in the non-resonant case to ratios as large as 19 in the resonant case. These high ratios, signaling that the hole is dominantly located in the $2p_0$ orbital, are unusual in the XUV regime and are normally only encountered after tunnel ionization with strong-field IR pulses [@LoGr-ChemPhys-2007; @IvSp-JMO-2005; @Pa-EPJST-2013]. The maximum hole alignment occurs when the direct $2p$ photoionization pathway interferes most destructively with the indirect $2p$ ionization pathway ($2s^22p^6 + \gamma \rightarrow 2s^{-1}3p_0 \rightarrow 2p^{-1}_m\,\varepsilon l_m$). The photon energy where this destructive interference is the strongest coincides with the minimum position of the Fano profile. By tuning the photon energy above or below the $2s$–$3p$ resonance, one controls how these two pathways interfere and, consequently, one controls the $2p$ hole alignment. The rest of the manuscript is structured as follows: Section \[sec2\] discusses briefly our TDCIS approach. In Sec. \[sec3\], we present our results beginning in Sec. 
\[sec3.fano\] with a review of basic aspects of the $2s$–$3p$ Fano resonance in the energy (Sec. \[sec3.fano.spec\]) and time domains (Sec. \[sec3.fano.temp\]), and explaining in Sec. \[sec3.align\] the mechanism of the $2p$ hole alignment when targeting this Fano resonance. The influence of spin-orbit interaction on the hole alignment is studied in Sec. \[sec3.align.ls\]. Section \[sec4\] concludes the discussion. Atomic units are employed throughout unless otherwise indicated. Theory {#sec2} ====== Our implementation of the TDCIS approach to solve the multichannel Schrödinger equation has been described in previous publications [@GrSa-PRA-2010; @PaGr-PRA-2012]. We have applied our TDCIS approach to a wide spectrum of processes [@Pa-EPJST-2013], ranging from attosecond photoionization [@PaSa-PRL-2011] to nonlinear x-ray ionization [@SyPa-PRA-2012] and strong-field tunnel ionization [@WiGo-Science-2011; @PaSy-PRA-2012; @KaPa-PRA-2013]. The $N$-body TDCIS wave function reads $$\begin{aligned} \label{eq:tdcis} {\left|\Psi(t)\right>} &=& \alpha_0(t) \, {\left|\Phi_0\right>} + \sum_{a,i} \alpha^a_i(t) \, {\left|\Phi^a_i\right>} ,\end{aligned}$$ where ${\left|\Phi_0\right>}$ is the Hartree-Fock ground state and ${\left|\Phi^a_i\right>}= \hat c^\dagger_a \hat c_i {\left|\Phi_0\right>}$ are singly excited configurations with an electron removed from the initially occupied orbital $i$ and placed in the initially unoccupied orbital $a$. Since Eq.  describes all $N$ electrons in the atom, an electron can be removed from any orbital. This multichannel approach is very helpful in describing ionization processes with XUV and x-ray light where more than one orbital is accessible. By limiting the sum over $i$, specific occupied orbitals can be picked to be involved in the dynamics, thereby testing the multichannel character of the overall dynamics. Inserting Eq.  
into the full time-dependent Schrödinger equation, one finds the following equations of motion for the CIS coefficients: \[eq:eoms\] $$\begin{aligned} \label{eq:eoms.1} i\, \dot\alpha_0(t) =& -E(t)\, \sum_{a,i} {\left(\Phi_0\right|} \hat z {\left|\Phi^a_i\right)} \, \alpha^a_i(t) \\\nonumber \label{eq:eoms.2} i\, \dot\alpha^a_i(t) =& {\left(\Phi^a_i\right|} \hat H_0 {\left|\Phi^a_i\right)} \, \alpha^a_i(t) +\! \sum_{b,j} {\left(\Phi^a_i\right|} \hat H_1 {\left|\Phi^b_j\right)} \alpha^b_j(t) \\ & -E(t) \Big(\! {\left(\Phi^a_i\right|} \hat z {\left|\Phi_0\right)} \alpha_0(t) +\! \sum_{b,j} {\left(\Phi^a_i\right|} \hat z {\left|\Phi^b_j\right)} \alpha^b_j(t) \!\Big) ,\end{aligned}$$ where $\hat H_0= \sum_n \left[ \frac{\hat{\bf p}^2_n}{2} - \frac{Z}{|\hat{\bf r}_n|} + V_\textrm{MF}(\hat{\bf r}_n) - i\eta W(|\hat {\bf r}_n|)\right] - E_\textrm{HF} $ includes all one-particle operators: the kinetic energy, the attractive nuclear potential, the mean-field potential $\hat V_\textrm{MF}$ contributing to the standard Fock operator [@SzOs-book], and the complex absorbing potential $-i\eta W(\hat r)$, which prevents artificial reflections from the boundaries of the numerical grid. The entire energy spectrum is shifted by the Hartree-Fock energy $E_\textrm{HF}$ such that the Hartree-Fock ground state is at zero energy (for details see Refs. [@RoSa-PRA-2006; @GrSa-PRA-2010]). The nuclear charge is given by $Z$, and the index $n$ runs over all $N$ electrons in the system. The light-matter interaction for linearly polarized pulses in the electric-dipole approximation is given in the length gauge by $-E(t)\, \hat z$ with $\hat z = \sum_n \hat z_n$ [@Pa-EPJST-2013]. All electron-electron interactions that cannot be described by the mean-field potential $\hat V_\textrm{MF}$ are captured by $\hat H_1 = \frac{1}{2}\sum_{n,n'} \frac{1}{|\hat{\bf r}_n - \hat{\bf r}_{n'}|} - \sum_n \hat V_\textrm{MF}(\hat{\bf r}_n)$. 
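The equations of motion above have the generic structure of a linear system $i\,\dot{\boldsymbol\alpha} = [H_0 + H_1 - E(t)\,Z]\,\boldsymbol\alpha$ for the coefficient vector. The following toy sketch (ours, not the authors' implementation; all matrices are random placeholders, not neon matrix elements) illustrates this structure with a fourth-order Runge-Kutta propagator; without the complex absorbing potential the Hamiltonian is Hermitian, so the norm of the coefficient vector must be conserved:

```python
import numpy as np

# Toy illustration of coefficient equations i*da/dt = (H0 + H1 - E(t)*Z) a,
# the generic structure of the TDCIS equations of motion. The matrices below
# are hypothetical placeholders, not actual neon matrix elements.
rng = np.random.default_rng(0)
n = 6                                   # toy number of configurations
H0 = np.diag(np.linspace(0.0, 2.0, n))  # one-particle energies (a.u.)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H1 = 0.05 * (A + A.conj().T)            # Hermitian residual interaction
B = rng.normal(size=(n, n))
Z = 0.1 * (B + B.T)                     # real symmetric dipole matrix

def E_field(t, E0=0.05, t0=40.0, tau=10.0, w=1.5):
    """Gaussian-envelope pulse with arbitrary toy parameters (a.u.)."""
    return E0 * np.exp(-((t - t0) / tau) ** 2) * np.cos(w * (t - t0))

def deriv(t, a):
    return -1j * ((H0 + H1) @ a - E_field(t) * (Z @ a))

a = np.zeros(n, dtype=complex)
a[0] = 1.0                              # start in the ground configuration
t, dt = 0.0, 0.01
for _ in range(8000):                   # propagate to t = 80 a.u. with RK4
    k1 = deriv(t, a)
    k2 = deriv(t + dt / 2, a + dt / 2 * k1)
    k3 = deriv(t + dt / 2, a + dt / 2 * k2)
    k4 = deriv(t + dt, a + dt * k3)
    a += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

# With no complex absorbing potential the Hamiltonian is Hermitian,
# so the norm of the coefficient vector stays 1 up to RK4 round-off.
print(abs(np.linalg.norm(a) - 1.0) < 1e-6)
```

Adding the absorbing term $-i\eta W$ would make the norm decay, which is how outgoing photoelectron amplitude is removed at the grid boundary in such schemes.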
Introducing a local complex potential has the consequence that the symmetric inner product $\left(\cdot\right|,\left|\cdot\right)$ must be used instead of the Hermitian one $\left<\cdot\right|,\left|\cdot\right>$ [@RiMe-JPB-1993]. The second term in Eq. , which describes the electron-electron interaction, is the only term within the TDCIS theory that leads to many-body effects. Electronic correlation effects, which within TDCIS can only occur between the ionic state (index $i$) and the photoelectron (index $a$), are captured in the interchannel coupling terms ($i \neq j$) where both indices ($a$ and $i$) are changed simultaneously. This means that the ionic state changes due to the interaction with the excited electron. Intrachannel interactions do not change the ionic state ($i=j$) and describe the long-range $-1/r$ Coulomb potential for the excited electron. Intrachannel interaction can be viewed in terms of a one-particle potential and cannot lead to electron-electron correlations. The importance of many-body correlation effects [@PaSa-PRL-2011; @PaSa-PRL-2013] can be easily tested by either allowing (full TDCIS model) or prohibiting (intrachannel TDCIS model) interchannel interactions, which are captured in $\hat H_1$. Results {#sec3} ======= We begin in Sec. \[sec3.fano\] with a discussion of the spectral and temporal properties of the $2s$–$3p$ Fano resonance in neon, which we exploit in Sec. \[sec3.align\] to control the hole alignment by tuning the XUV pulse across the Fano resonance. $2s$–$3p$ Fano Resonance {#sec3.fano} ------------------------ ### Spectroscopic Features {#sec3.fano.spec} The photoabsorption cross section, $\sigma(\omega)$, of neon around the $2s$–$3p$ resonance obtained within TDCIS is shown in Fig. \[fig.cross\], both with and without interchannel coupling between the $2s$ and $2p$ shells. Both are calculated via an autocorrelation function (see Refs. [@Pa-EPJST-2013; @KrPa-AJP-2014]). 
Strictly speaking, the $2s$–$3p$ resonance has no line width in the intrachannel model, since the state $2s^{-1}3p$ cannot autoionize and, therefore, is infinitely long-lived. In Fig. \[fig.cross\], this resonance has a finite width that is artificial and has been introduced by hand for better visualization [^1]. ![(color online) Photoabsorption cross section of neon in the vicinity of the $2s\rightarrow3p$ resonance for the intrachannel TDCIS model (blue-dashed line) and the full TDCIS model (red-solid line). Fano profile fits [@LaBe-JPB-1997] give the resonance frequency for the intrachannel TDCIS model $\omega_\textrm{intra}$, and the resonance frequency for the full TDCIS model $\omega_\textrm{res}$. The curve for the intrachannel model is shifted up by +6 Mb for better visualization. []{data-label="fig.cross"}](fig_2.eps){width="\figwidth\linewidth"} With the addition of interchannel coupling of the electrons in the full model, the excited $2s^{-1}\,3p$ state autoionizes to a singly charged ionic state $2p^{-1} \, \varepsilon l$. This indirect ionization of a $2p$ electron ($2s^22p^6+\gamma \rightarrow 2s^{-1}3p \rightarrow 2p^{-1}\,\varepsilon l$) and the direct one-photon ionization of a $2p$ electron ($2s^22p^6+\gamma \rightarrow 2p^{-1}\,\varepsilon l$) can now interfere, resulting in an asymmetric Fano line shape [@Fa-PhysRev-1961] (see Fig. \[fig.cross\]). We fit both curves (with and without interchannel interactions) to the characteristic Fano profile [@Fa-PhysRev-1961; @LaBe-JPB-1997] given by $$\begin{aligned} \label{eq.fano} \sigma(\omega) =& \sigma_a\frac{(q+\epsilon)^2}{1+\epsilon^2} + \sigma_b , \qquad \textrm{with}\quad \epsilon = \frac{\omega-\omega_\textrm{res}}{\Gamma/2} ,\end{aligned}$$ where $q$ is the Fano parameter describing the asymmetry of the line shape, $\omega_\textrm{res}$ is the resonance frequency of the transition, and $\Gamma$ is the width of the resonance structure. 
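As a quick numerical cross-check (ours, not part of the paper), the extrema of the profile in Eq. (\[eq.fano\]) can be located for given fit parameters; using the full-model values fitted in the text ($q=-1.32$, $\Gamma=31.8$ meV, $\omega_\textrm{res}=45.538$ eV), the minimum and maximum reproduce the $\omega_\textrm{min}$ and $\omega_\textrm{max}$ quoted below, and the fitted width converts to the quoted $2s^{-1}3p$ lifetime:

```python
import numpy as np

# Evaluate the Fano profile sigma(eps) = sigma_a*(q+eps)^2/(1+eps^2) + sigma_b
# with the fitted full-model parameters quoted in the text.
q, gamma_eV, w_res = -1.32, 0.0318, 45.538
sigma_a, sigma_b = 1.0, 0.0        # overall scale and offset (arbitrary here)

w = np.linspace(45.45, 45.65, 200001)          # photon-energy grid (eV)
eps = (w - w_res) / (gamma_eV / 2.0)
sigma = sigma_a * (q + eps) ** 2 / (1.0 + eps ** 2) + sigma_b

# Analytically, the minimum sits at eps = -q and the maximum at eps = 1/q.
w_min = w[np.argmin(sigma)]   # destructive interference: cross-section minimum
w_max = w[np.argmax(sigma)]   # constructive interference: cross-section maximum
print(round(w_min, 3), round(w_max, 3))        # 45.559 45.526

# Linewidth-to-lifetime conversion for the 2s^-1 3p state: tau = hbar/Gamma.
hbar_eVs = 6.582119569e-16
tau_fs = hbar_eVs / gamma_eV * 1e15
print(round(tau_fs, 1))                        # 20.7
```

The minimum lands at 45.559 eV and the maximum at 45.526 eV, matching the visually determined values of 45.559 eV and 45.525 eV quoted in the text to within the reading accuracy, and the lifetime comes out as 20.7 fs.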
These fits give the transition frequencies for both models as well as the transition width and Fano parameter for the full model: $\omega_\textrm{intra}=45.511$ eV, $\omega_\textrm{res}=45.538$ eV, $\Gamma=31.8$ meV, and $q=-1.32$. The experimentally obtained values are 45.546 eV for the resonance position, 13 meV for the line width, and $q=-1.58$ for the Fano parameter [@LaBe-JPB-1997; @CoMa-PR-1967]. Since the experimental line width is less than half of our theoretical one, the spectral features presented in Figs. \[fig.holealign\_weak\]-\[fig.align.ls\] will in reality not be as broad. Qualitatively, however, this line width discrepancy has no effect on the results and the conclusions. At frequencies below $\omega_\textrm{res}$, the two ionization pathways constructively interfere and the overall $2p$ ionization is increased. Above $\omega_\textrm{res}$, the two pathways destructively interfere and the overall $2p$ ionization is suppressed. The photon energies at the minimum $\omega_\textrm{min}=45.559$ eV and the maximum $\omega_\textrm{max}=45.525$ eV are determined visually. ### Temporal Features {#sec3.fano.temp} In order to investigate the temporal character of the autoionization process, we resonantly excite neon with a relatively short $2.4$ fs pulse of frequency $\omega_\textrm{res}$ and a peak intensity of $5.6\times 10^{13}$ W/cm$^2$. The duration of this pulse is purposely chosen to be much shorter than the lifetime of the $2s$ hole, $T_{2s^{-1}3p}=1/\Gamma=20.7$ fs in our calculations. ![(color online) (a) Hole population of the $2s$ (red-solid line), $2p_0$ (green-dashed line), and $2p_{\pm 1}$ (blue-dashed line) orbitals as well as the ground state depopulation (pink-dashed line) for the full CIS model. The pulse has a Gaussian shape and is $2.4$ fs long (FWHM of the intensity), centered around $t=0$, and has the carrier frequency $\omega_\textrm{res}$. 
Also shown are scaled close-ups of the ground state depopulation (b), and of the hole populations of the different orbitals (c-e). []{data-label="fig.auger"}](fig_1.eps){width="\figwidth\linewidth"} The hole populations for the $2s$, $2p_0$, and $2p_{\pm 1}$ orbitals as well as the depopulation of the neon ground state are presented in Fig. \[fig.auger\]. Note that for linearly polarized light the sign of the magnetic quantum number $m$ is unimportant and the $+m$ and $-m$ electrons behave in exactly the same way when the initial state is an $M=0$ state, as is the case for closed-shell atoms. At the end of the pulse, all $2p_m$ hole populations increase while that of the $2s$ decreases. The total hole population, which is the sum over the $2s$ and all $2p_m$ orbitals, remains constant, indicating that the $2p$ and $2s$ hole populations vary equally but oppositely. Note that in Fig. \[fig.auger\](b-e) the time scale is changed to visually emphasize these temporal trends. This is also consistent within TDCIS, where the depopulation of the ground state can no longer change when the pulse is over \[see Eq. \]. Only the hole rearranges with time from the $2s$ orbital to the $2p$ orbitals. This hole rearrangement is the resonant Auger decay (or the autoionization process). The energy released by the hole movement, $26.9$ eV, is sufficient to knock the excited electron residing in the $3p$ shell, which has a binding energy of $2.9$ eV, into the continuum [^2]. Hole Alignment {#sec3.align} -------------- As we have seen in Sec. \[sec3.fano.spec\], the indirect ionization pathway via the autoionizing $2s^{-1}3p$ state interferes constructively or destructively with the direct ionization pathway depending on the detuning of the photon energy. The spectral information (in Fig. \[fig.cross\]) does not, however, contain channel-resolved cross sections. 
In particular, it cannot answer the question to what extent the interference affects all $2p_m$ ionization channels equally or whether there is a preferred $m$ ionization channel. A non-uniform behavior would result in different effective ionization rates for $2p_0$ and $2p_{\pm 1}$ and, consequently, in a modified ratio between $2p_0$ and $2p_{\pm 1}$ hole populations compared to the ratio expected for non-resonant one-photon ionization at similar photon energies. Theoretical and experimental studies of the angular distribution of the photoelectron [@LaBe-JPB-1997] have found a large variation of the asymmetry parameter $\beta$. Therefore, we also expect a variation in the ionic hole states. However, it is not possible to connect the angular photoelectron distribution directly with the ionic hole state. Theoretical studies [@LaBe-JPB-1997] showed that at $\omega_\textrm{min}$ an asymmetry parameter of $\beta=0$ is expected for the $2s$–$3p$ Fano resonance, meaning the photoelectron is in a pure $s$-wave state. For this special case, the photoelectron angular distribution can be related to the ionic hole alignment, since an $s$-wave photoelectron can only originate from the $2p_0$ orbital. Such a connection to the ionic state has, however, not been made in earlier studies. ![(color online) (a) Hole populations are shown as a function of the photon energy for the $2p_0$ orbital (dashed-dark-blue line), the $2p_{\pm 1}$ orbital (solid-red line) as well as for the two $2p_0$ ionization channels $2p_0^{-1}\varepsilon d_0$ (dotted-light-blue line) and $2p_0^{-1}\varepsilon s_0$ (dashed-dotted-violet line). (b) The ratio $2p_0/2p_{\pm 1}$ is shown for the full TDCIS model (solid-orange line) and the intrachannel TDCIS model (dashed-green line). A Gaussian pulse with a peak intensity of $3.5\times10^{13}$ W/cm$^2$ and a FWHM-duration of 174 fs has been used. []{data-label="fig.holealign_weak"}](fig_11.eps){width="\figwidth\linewidth"} In Fig. 
\[fig.holealign\_weak\](a), the $m$-resolved hole populations of the $2p$-shell are shown (thick lines). Next to the hole populations for $2p_0$ (dashed dark-blue line) and $2p_{\pm 1}$ (solid red line), the two partial-wave channels $2p_0^{-1}\varepsilon s_0$ (dashed-dotted violet line) and $2p_0^{-1}\varepsilon d_0$ (dotted light-blue line) are shown as well. For the $2p_{\pm 1}$ hole ionization, there exists only one ionization channel where the continuum electron is a $d$-wave (i.e., $2p_{\pm 1}^{-1}\varepsilon d_{\pm 1}$). As we can see from Fig. \[fig.holealign\_weak\](a), the $2p_m$ hole populations do vary across the resonance. In particular, the ionization of $2p_{\pm 1}$ is much more strongly suppressed at $\omega_\text{min}$ than that of $2p_{0}$. In the following, we investigate in more detail why the ionization of the $2p_m$ orbitals behaves so differently by taking a closer look at the partial-wave contributions leading to $s$-wave and $d$-wave photoelectrons. ### $d$-wave photoelectron The $2p_{\pm 1}$ ionization is much more suppressed than $2p_0$ around $\omega_\textrm{min}$ \[see Fig. \[fig.holealign\_weak\](a)\]. For $2p_{\pm 1}$, the destructive interference is so strong that it leads to a suppression of almost 2 orders of magnitude compared to non-resonant photon energies. All $2p_m^{-1}\varepsilon d_m$ partial-wave channels show the same degree of suppression. To be more precise, the ratio between $2p_0^{-1}\varepsilon d_0$ and $2p_{\pm 1}^{-1}\varepsilon d_{\pm 1}$ is exactly $4/3$. A detailed analysis shows that this ratio between the $m=0$ and $|m|=1$ channels appears in both the direct and the indirect ionization pathways and can be explained by the Wigner-Eckart theorem [@Zare-book]. Consequently, the behavior of constructive and destructive interference is exactly the same for all $d$-wave channels, $2p_m^{-1} \varepsilon d_m$. 
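The $4/3$ ratio can be reproduced with a few lines (our sketch): for a dipole transition $p_m \rightarrow d_m$ driven by linearly polarized light, the Wigner-Eckart theorem reduces the $m$-dependence to the squared Clebsch-Gordan coefficient $|\langle l\,m;1\,0|l{+}1\,m\rangle|^2 = ((l+1)^2-m^2)/[(2l+1)(l+1)]$:

```python
from fractions import Fraction

# Consistency check of the 4/3 ratio quoted in the text. By the Wigner-Eckart
# theorem, the m-dependence of a linear-polarization dipole transition
# l,m -> l+1,m is carried by the squared Clebsch-Gordan coefficient
#   |<l m; 1 0 | l+1 m>|^2 = ((l+1)^2 - m^2) / ((2l+1)(l+1)).
def cg2_up(l, m):
    return Fraction((l + 1) ** 2 - m * m, (2 * l + 1) * (l + 1))

# 2p_0 -> ed_0 versus 2p_±1 -> ed_±1  (l = 1 in both cases)
ratio = cg2_up(1, 0) / cg2_up(1, 1)
print(ratio)   # 4/3
```

Since the same geometric factor enters both the direct and the indirect pathway amplitudes, it cancels in the interference pattern, which is why all $d$-wave channels show identical constructive/destructive behavior.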
### $s$-wave photoelectron To generate $2p_0$ holes, there exists another ionization channel leading to an $s$-wave photoelectron, i.e., $2p_0^{-1} \varepsilon s_0$. The behavior of this partial-wave channel differs from that of the $2p_m^{-1} \varepsilon d_m$ partial-wave channels \[see Fig. \[fig.holealign\_weak\](a)\]. For $2p_0^{-1} \varepsilon s_0$, the destructive interference happens at $\omega_\textrm{max}$ and constructive interference occurs at $\omega_\textrm{min}$. The overall trend is dominated by $2p_m^{-1} \varepsilon d_m$, since the probability of ejecting an electron from a $p$-orbital into an $s$-continuum is generally much smaller than ejecting the electron into a $d$-continuum [@FaCo-RMP-1968]. Only around $\omega_\textrm{min}$, where the ionization into a $d$-continuum is strongly suppressed, does the situation change: ionization into the $s$-continuum becomes the dominant ionization channel (corresponding to an asymmetry parameter of $\beta=0$). The relative enhancement of the $2p_0^{-1} \varepsilon s_0$ partial-wave channel results in a ten times smaller overall suppression for $2p_0$ ionization than for $2p_{\pm 1}$ ionization \[see Fig. \[fig.holealign\_weak\]\]. ### The ratio of $2p_m$ hole populations In Fig. \[fig.holealign\_weak\](b), the hole population ratio $2p_0/2p_{\pm 1}$ is shown as a function of the photon energy for the full TDCIS model (orange-solid line) and the intrachannel TDCIS model (green-dashed line). This ratio is a direct measure of hole alignment, where 1 stands for an isotropic hole distribution, $\infty$ for perfect hole alignment along the polarization direction, and 0 for perfect hole antialignment in the plane perpendicular to the polarization direction. Strong variations of the hole alignment across the Fano resonance are found, resulting in ratios that vary by more than one order of magnitude (between 1.6 and 18). 
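Such a ratio translates directly into probabilities for where the hole sits. A minimal arithmetic sketch (ours, not the authors' code), assuming the $2p$ hole populations are proportional to $r:1:1$ for $m=0,+1,-1$:

```python
# Translate the alignment ratio r = 2p_0 / 2p_±1 into hole-location
# probabilities, assuming populations proportional to r : 1 : 1
# for m = 0, +1, -1 (our arithmetic check, not part of the paper).
def hole_fractions(r):
    total = r + 2.0
    return r / total, 2.0 / total      # P(2p_0), P(2p_+1 or 2p_-1)

p0, p1 = hole_fractions(18.0)          # peak ratio quoted in the text
print(p0, p1)                          # 0.9 0.1

p0_iso, p1_iso = hole_fractions(1.0)   # isotropic hole distribution
print(round(p0_iso, 3))                # 0.333
```

For the peak ratio of 18, the hole is found in $2p_0$ with 90% probability and in the two $2p_{\pm 1}$ orbitals with only 10% combined probability, compared to $1/3$ vs. $2/3$ for an isotropic distribution.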
A ratio of 18 means the $2p$ hole is primarily located in the $2p_0$ orbital, and only a 10% chance exists to find the hole in either the $2p_{+1}$ or $2p_{-1}$ orbital. Such strong hole alignment is normally only encountered in the strong field regime where tunnel ionization almost exclusively ionizes the outermost $p_0$ orbital (when using linearly polarized light) [@PaGr-PRA-2012; @IvSp-JMO-2005]. In the off-resonance limit, the intrachannel TDCIS model and the full TDCIS model approach the same value for the $2p_0/2p_{\pm 1}$ ratio (1.6). Such values are very common in the XUV and x-ray regimes where an almost isotropic distribution of the hole is found with a slight preference for the polarization direction (i.e., $m=0$). The maximum hole alignment is reached when the photon energy is $\omega_\textrm{min}$, located at the minimum of the Fano resonance, which is exactly the energy where the suppression of the dominant ionization channels (leading to $2p^{-1}_m\,\varepsilon d_m$) is most pronounced, and only $s$-wave photoelectrons are formed which leave a $2p_0$ hole behind. Spin-orbit coupling {#sec3.align.ls} ------------------- Up to now, we have ignored that the $2p$ shell is actually split due to spin-orbit coupling into two subshells $2p_j$ with $j=1/2$ and $j=3/2$. As a result, the hole alignment has to be defined with respect to $m_j$ and not $m_l$. In particular, the $2p^{m_j}_{3/2}$ hole populations for $m_j=\pm 1/2$ and $m_j=\pm 3/2$ have to be compared. Here, $m_j$ refers to the projection of the total angular momentum $j$ along the XUV polarization axis. In our TDCIS approach, we consider only the spin-orbit interaction within the ion, where it is the strongest, and we neglect it for the photoelectron (see Ref. [@PaSy-PRA-2012] for details). 
Figure \[fig.align.ls\] (a) shows the hole populations of $2p^{\pm0.5}_{0.5}$, $2p^{\pm0.5}_{1.5}$, and $2p^{\pm1.5}_{1.5}$, and (b) shows the ratio between $2p^{\pm0.5}_{1.5}$ and $2p^{\pm1.5}_{1.5}$ defining the hole alignment. Figure \[fig.align.ls\] shows the same trends as Fig. \[fig.holealign\_weak\]. The mixing of $2p_0$ and $2p_{\pm 1}$ orbitals in the spin-orbit case reduces the maximum hole alignment within the $2p_{3/2}$-shell by a factor of $\sim\!2/3$ in comparison to the non-spin-orbit case [^3], which results in a maximum alignment ratio of $\sim\!13$ instead of 18. ![(color online) (a) Hole population is shown as a function of the photon energy for the $2p^{\pm0.5}_{0.5}$ orbital (solid-red line), the $2p^{\pm0.5}_{1.5}$ orbital (dashed-blue line), and the $2p^{\pm1.5}_{1.5}$ orbital (dotted-light-blue line). (b) The ratio $2p^{\pm 0.5}_{1.5}/2p^{\pm1.5}_{1.5}$ is shown for the full TDCIS model (solid-brown line) and the intrachannel TDCIS model (dashed-green line). The same pulse parameters as in Fig. \[fig.holealign\_weak\] have been used. []{data-label="fig.align.ls"}](fig_ls.eps){width="\figwidth\linewidth"} The reduction factor of 2/3 can be easily explained when expressing the spin-orbit-split orbitals in terms of the non-spin-orbit-split orbitals. 
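The relation $r_\textrm{ls} = 1/3 + (2/3)\,r_\textrm{nols}$ behind this factor (given in a footnote) can be checked with a short sketch (ours; the $1/3$ and $2/3$ weights are the squared Clebsch-Gordan coefficients that mix $2p_{\pm 1}$ and $2p_0$ into the $m_j=\pm 1/2$ states of the $2p_{3/2}$ shell, as spelled out in the transformation below):

```python
from fractions import Fraction

# Check of r_ls = 1/3 + (2/3) r_nols using the squared Clebsch-Gordan
# weights: the m_j = ±1/2 hole of the 2p_3/2 shell carries 1/3 of the
# 2p_±1 and 2/3 of the 2p_0 population, while the m_j = ±3/2 hole is
# purely 2p_±1 (our sketch, not the authors' code).
def alignment_with_so(p0, p1):
    """p0, p1: hole populations of 2p_0 and 2p_±1 (one m, one spin)."""
    pop_mj_half = Fraction(1, 3) * p1 + Fraction(2, 3) * p0
    pop_mj_three_half = p1
    return pop_mj_half / pop_mj_three_half

# Verify the footnote formula for the off-resonant (1.6) and peak (18) ratios.
for r in (Fraction(8, 5), Fraction(18)):
    assert alignment_with_so(r, Fraction(1)) == Fraction(1, 3) + Fraction(2, 3) * r

print(alignment_with_so(Fraction(18), Fraction(1)))   # 37/3
```

For the peak ratio of 18 this gives $37/3 \approx 12.3$, consistent with the $\sim\!13$ read off from the full calculation.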
Specifically, the transformation between the spin-orbit-split (coupled basis) and non-spin-orbit-split (uncoupled basis) orbitals reads: \[eq:pop.ls\] $$\begin{aligned} \label{eq:pop.ls_0.5-0.5} {\left|2p^{\pm 0.5}_{0.5}\right>} =& \pm\sqrt{\frac{2}{3}} {\left|2p_{\pm 1,\mp {\frac{1}{2}}}\big.\right>} \mp \sqrt{\frac{1}{3}} {\left|2p_{0,\pm {\frac{1}{2}}}\big.\right>} \\ \label{eq:pop.ls_0.5-1.5} {\left|2p^{\pm 0.5}_{1.5}\right>} =& +\sqrt{\frac{1}{3}} {\left|2p_{\pm 1,\mp {\frac{1}{2}}}\big.\right>} + \sqrt{\frac{2}{3}} {\left|2p_{0,\pm {\frac{1}{2}}}\big.\right>} \\ \label{eq:pop.ls_1.5-1.5} {\left|2p^{\pm 1.5}_{1.5}\right>} =& {\left|2p_{\pm 1,\pm {\frac{1}{2}}}\big.\right>}\end{aligned}$$ where ${\left|2p_{m,\sigma}\right>}$ refers to the spatial $2p_m$ orbital with the spin projection $\sigma$. Note that in Sec. \[sec3.align\] we focused only on the spatial part of the orbitals because the spin-up and spin-down components behave exactly the same [^4]. The spin-orbit interaction is treated here in degenerate perturbation theory (see Refs. [@RoSa-PRA-2009; @PaSy-PRA-2012]), where only the impact on the angular momentum is considered. The radial part is unaffected by the spin-orbit interaction, which leads to errors of a few percent [@PaSa-JPB-submitted]. By using Eqs. (\[eq:pop.ls\_0.5-0.5\]–\[eq:pop.ls\_1.5-1.5\]), all populations shown in Fig. \[fig.align.ls\](a) can be written in terms of the non-spin-orbit-split populations shown in Fig. \[fig.holealign\_weak\](a), and, consequently, the alignment ratio in the case of spin-orbit splitting can also be expressed in terms of the ratio without spin-orbit splitting, as done earlier. Conclusion {#sec4} ========== We have shown that resonant excitation of the autoionizing $2s^{-1}3p$ state leads to a second ionization pathway that can interfere with the direct $2p$ photoionization pathway and strongly influence the state of the parent ion. This interference is well known as the origin of the characteristic Fano profile. 
Also the asymmetry parameter $\beta$ measuring the angular distribution of the photoelectron varies strongly across the Fano resonance, but a direct relation to the hole alignment cannot be made. We showed that this interference has destructive character at $\omega_\textrm{min}$ and creates a dark state in the photoelectron continuum. As a result, the $2p^{-1}\varepsilon d$ ionization channel is strongly suppressed, and the photoelectron is emitted as a pure $s$-wave. Consequently, the only orbital that is ionized is the $2p_0$ orbital. The imbalance of ionizing $2p_0$ and $2p_{\pm1}$ orbitals leads to a large hole alignment along the XUV polarization direction. The ratio between the populations of $2p_0$ and $2p_{\pm1}$ goes as high as 19, localizing the hole in the $2p_0$ orbital, and is significantly different from the off-resonant value ($1.6$), which corresponds to only a slight hole alignment. Strong hole alignments are usually only encountered after tunnel ionization with strong-field IR pulses, where the Keldysh parameter is well below 1 [@Pa-EPJST-2013]. Here, we used XUV pulses and we are in the perturbative one-photon regime, where the Keldysh parameter is well above 1 and large anisotropies in the hole states are not expected. When disabling interchannel coupling effects, i.e., disabling the correlation-driven autoionization mechanism of the excited $2s^{-1}3p$ state, no interference of the ionization pathways occurs and no hole alignment modulation appears when tuning across the $2s$–$3p$ resonance. Including spin-orbit interaction within the ion does not change the picture. Only the strong hole alignment within the $2p_{3/2}$-shell is reduced by a factor of $2/3$, which still results in a strong hole alignment with ratios up to $13\!:\!1$ between $2p_{3/2}^{\pm1/2}$ and $2p_{3/2}^{\pm3/2}$ hole populations. 
Controlling the hole alignment via the $2s$–$3p$ Fano resonance serves as an example of how correlation effects can be explicitly targeted and exploited to create new and exotic electronic states in atoms and molecules. Similarly, other Fano resonances can be used, where the strength of the resonance determines how strongly the hole alignment can be tuned. Furthermore, with a second pulse the Fano resonance could be modified within attoseconds [@OtKa-Science-2013] to gain even greater control over the electronic motion. The extension to high-intensity pulses is also interesting; it can be realized with currently available seeded free-electron lasers like FERMI [@MuNi-NatPhoton-2012] or sFLASH [@DeDr-PRL-2013]. Preliminary results we have obtained suggest that completely different ionization behavior occurs when a Fano resonance is driven by a high-intensity pulse. E. H.-J. would like to thank the DAAD RISE program and her mother Christa Heinrich-Josties for financial support. This work has been supported by the Deutsche Forschungsgemeinschaft (DFG) under grant No. SFB 925/A5. [^1]: The autocorrelation function is exponentially damped by hand. The damping factor is directly related to the width of the Lorentzian. [^2]: These energies were calculated using values found in the National Institute of Standards and Technology (NIST) database [@NIST_website]. [^3]: The exact transformation from the hole alignment ratio without spin-orbit coupling, $r_\textrm{nols}$, to the ratio when spin-orbit coupling is included, $r_\textrm{ls}$, reads: $r_\textrm{ls} = 1/3 + 2/3\, r_\textrm{nols}$. For large hole alignments, the constant can be ignored. [^4]: To be more precise, the non-spin-orbit-split hole populations $2p_m$ are the sums of the corresponding spin-up and spin-down hole populations.
--- author: - | *by\ [  ]{}\ Valentin Poénaru[^1]* date: July 2006 title: | On the 3-Dimensional Poincaré Conjecture and the 4-Dimensional Smooth Schoenflies Problem\ [– a double research announcement –]{} --- [**0 A foreword**]{} This is the announcement of an alternative approach to the 3-dimensional Poincaré Conjecture, different from Perelman’s big and spectacular breakthrough. No claims concerning the other parts of the Thurston Geometrization Conjecture come with our purely 4-dimensional line of argument. The format of the present paper is that of a rather informal letter which, conceivably, I might have written to some mathematical friend, let us say to David Gabai. Introduction {#sec1} ============ The present shortish paper is a double research announcement. On the one hand, my 4-dimensional program for proving the 3-dimensional Poincaré Conjecture, developed during several decades, is now finally completely finished. The last step which was missing is provided by Theorem 1 below. But then also, there is an outgrowth of this program, namely the proof that any smooth 4-dimensional Schoenflies ball is geometrically simply-connected, i.e. it possesses smooth handlebody decompositions without handles of index one. This is Theorem 2 below. My almost purely 4-dimensional techniques for the Poincaré Conjecture are completely independent, of course, of the Ricci flow approach of R. Hamilton and G. Perelman; see here [@Mo] for more extensive references. It is the two Theorems 1 and 2, mentioned above and stated precisely below, that are the novelties here, with respect to my 2004 informal and more tentative announcement in the Steklov Proceedings [@Po-S]. 
[**Theorem 1.**]{} – [*For any homotopy $3$-ball $\Delta^3$, we introduce the following, canonically attached, open smooth $4$-manifold $$\label{eq1.1} X^4 = {\rm int} \, [(\Delta^3 \times I) \, \# \, \infty \, \# \, (S^2 \times D^2)] \, .$$ IF this $X^4$ is geometrically simply connected, THEN so is $\Delta^3 \times I$ itself.*]{} So, the format of this statement is, actually, the following implication $$\label{eq1.2} X^4 ({\rm open}) \ {\rm g.s.c.} \Longrightarrow \Delta^3 \times I ({\rm compact}) \ {\rm g.s.c.}$$ Next, we consider Schoenflies 4-balls. By definition these are smooth compact 4-manifolds, which we will denote generically by $\Delta_{\rm Schoenflies}^4$, such that $$\partial \Delta_{\rm Schoenflies}^4 = S^3, \ \mbox{and there is a smooth embedding} \ \Delta_{\rm Schoenflies}^4 \subset S_{\rm standard}^4 \, .$$ With this, here is our [**Theorem 2.**]{} – [*Any $\Delta_{\rm Schoenflies}^4$ is geometrically simply connected.*]{} Remember here that, according to the classical work of Barry Mazur [@Ma], it is certainly known that such a $\Delta_{\rm Sch}^4$ is topologically the 4-ball. Even better, if we delete from it a boundary point, then what we get is diffeomorphic to the standard 4-ball, with a boundary point removed. The plan of the present paper is the following. In the next section 2 we will give a short bird’s-eye-view outline of my proof of the Poincaré Conjecture, showing in particular how Theorem 1 above fits into it. A much more detailed outline, but with Theorem 1 occurring there with a question mark, was given in the Steklov paper [@Po-S], and so we will be really very brief here. In the same [@Po-S], which may be considered a companion of the present announcement, Theorem 2 was only hinted at, as a possibility. Next, in the sections 3 and 4, we will give a glimpse of how the proofs of Theorems 1 and 2 go. 
As it stands, section 3 should give already a very first idea, while the even more impressionistic section 4 touches on some more technical issues. But the point is that both proofs can and will be presented, largely, simultaneously, and in the same breath. Here is how one should view their starting points. For Theorem 1, the starting point is an [*hypothesis*]{}, namely the $X^4$ (\[eq1.1\]) being g.s.c.; Theorem 2 starts from the fact that, as a consequence of Barry’s work mentioned above, the open smooth 4-manifold $\Delta^4 \cup (S^3 \times [0,\infty))$ ($= {\rm int} \, \Delta^4$), which we may as well call now $X^4$ again, [is]{} geometrically simply connected. Of course, Barry’s work really implies that it is the standard $R^4$. But only the g.s.c. property will be retained here, for our present purposes. I do believe that, afterwards, in order to show that any g.s.c. $\Delta_{\rm Schoenflies}^4$ is actually standard, the full strength of Barry’s result (as far as the $4^d$ DIFF category goes) should be needed. But this is another story. The complete detailed proofs of Theorems 1 and 2 are completely (hand-) written down, in a very long two-part paper, to which I will refer hereafter as “PoV-B”, listed as [@PoV-B]. I hope to be able to make it available in a typed version in a not too long time. I should also add that working on Theorem 2 has been, for me, a very good testing ground for the PoV-B technology of Theorem 1, with a lot of feed-back between the two items. I owe too much to too many people to start listing them all here and now. This notwithstanding, I do want to thank David Gabai, without the help of whom this work would not have been here. And then, I should also mention that the very first impetus for trying to link together things like in the two theorems above, came from a suggestion which Michael Freedman has made to David and me, way back in the Spring 1995. 
Finally, thanks are due to the IHÉS for generously offering me the possibility to use its typing facilities and to Cécile Cheikhchoukh and Marie-Claude Vergne for the typing and the drawings. A brief outline of the proof of the Poincaré Conjecture {#sec2} ======================================================= There are three distinct steps in the proof, each short to state but also each with a very long proof. I will present them here as follows. STEP I. Here, the climax is the following final result [**Theorem 3.**]{} – [*For any homotopy $3$-ball $\Delta^3$, the open smooth $4$-manifold $X^4$ from [(\[eq1.1\])]{} is geometrically simply connected.*]{} The complete detailed proof is contained in the series of papers [@PoI], [@PoII], [@PoIII], [@PoIV-A], [@PoIV-B] and [@PoV-A]. For this last paper, which proves a 4-dimensional result, completely independent of the rest, there is also a shorter version [@PoV-Aoutline]. For this and the next step, see also [@Ga], [@Po-B], [@Po-T]. Notice that what Theorem 3 does is to prove the hypothesis occurring in Theorem 1, i.e. in the implication (\[eq1.2\]). STEP II. Just as was already the case for Theorem 1, the main result takes here again the form of an implication, namely, now the following $$\label{eq2.1} \Delta^3 \times I \ \mbox{geometrically simply connected} \ \Longrightarrow \Delta^3 \ {\rm standard}.$$ More explicitly, this is the following [**Theorem 4.**]{} – [*Let $\Delta^3$ be a homotopy $3$-ball which is such that $\Delta^3 \times I$ is geometrically simply connected. Then $\Delta^3 = B^3$.*]{} A brief outline of the proof can be found in [@Po-S], [@Po-B]. The complete detailed proof is contained in [@PoVI]. Here are some words concerning the way in which the hypothesis that $\Delta^3 \times I$ is g.s.c. is being used in the proof of Theorem 4. At this point here is a little lemma; the various terms which are used in the statement are all explained in [@Ga] or [@PoI] (and [@PoII]). 
[**Lemma 5.**]{} – [*Let $\Delta^3$ be a homotopy $3$-ball which is such that $\Delta^3 \times I$ is g.s.c. (in the smooth category). Then there exists a collapsible pseudo-spine representation for $\Delta^3$, call it $K^2 \overset{f}{\longrightarrow} \Delta^3$, for which one can find a desingularization $\varphi$ having the following property. There exists a strategy for zipping $f$, which is COHERENT for $\varphi$.*]{} What “coherence” means here is that when, during the zipping process any two singularities $s_1 , s_2$ meet in a head-on collision, then their desingularizations are well-matched together: the $S(N)$ branches of $s_1$ match the $S(N)$ branches of $s_2$ (and not the $N(S)$ branches). The proof of Lemma 5 can be found in [@Ga], [@PoV-A] and the converse statement to Lemma 5, going from coherence to g.s.c. is true too; actually it is even easier (see [@Ga] and [@PoII]). Now, the point is that the starting point of the infinite processes via which Theorem 4 is proved, is a collapsible pseudo-spine representation for $\Delta^3$, having the coherence property. This is how the geometric simple connectivity of $\Delta^3 \times I$ comes in. STEP III. This step is our present Theorem 1. The ordering of our three steps above was chronology rather than logic. This being said, on the same lines as the references [@PoI] to [@PoIV-B] and [@PoV-A], all part of step I above, just after them and before [@PoVI] (step II), there should actually come now the [@PoV-B] of step III, containing the proof of Theorem 1 and, incidentally, of Theorem 2 too. Once one assumes all the three steps above, one can plug Theorem 3 (step I) into Theorem 1 (step III) and conclude that $\Delta^3 \times I$ is always geometrically simply connected. When this fact is plugged into Theorem 4 (step II), then this yields the following main result [**Theorem 6.**]{} (THE POINCARÉ CONJECTURE) – [*Every homotopy $3$-ball $\Delta^3$ is standard, i.e. 
$\Delta^3 = B^3$.*]{} Notice that, on the way, we have also proved the so-called COHERENCE THEOREM, stating that [*every*]{} homotopy 3-ball admits a collapsible pseudo-spine representation which is also coherent. The gap in an earlier attempted direct proof for the coherence theorem, which was detected in the Spring 1995 by Michael Freedman and David Gabai, is now completely filled in by the combination of [@PoV-A] (which in the presentation chosen here has been included inside Step I) together with the Theorem 1 above, i.e. by [@PoV-B]. Together, the PoV-A ([@PoV-A]) and PoV-B ([@PoV-B]) completely supersede the by now dead Orsay preprint 94-25 [@PoV], from 1994. One may also put these things as follows. The paper [@PoIV-B] proves that $\Delta^3 \times I$ has the property of being geometrically simply connected [at long distance]{} (this is a notion weaker than g.s.c., for which I refer to [@Po-S], [@Po-B]). Then what [@PoV-A] $+$ [@PoV-B] actually do for us, is to deduce the COHERENCE THEOREM from this last property. Some hints concerning the proof of Theorems 1 and 2 {#sec3} ======================================================= We will try, as much as possible, to present the two proofs in parallel; the fact that we may do this should be seen as a distinctive feature of the present approach. Everything now is in the smooth category and we will denote by $\Delta^4$ a compact bounded 4-manifold which is either $\Delta^3 \times I$ (with $\Delta^3$ a homotopy 3-ball) or $\Delta_{\rm Schoenflies}^4$. We start with the following sequence of nested spaces $$\label{eq3.1} \Delta^4 = \Delta_{\rm small}^4 \subset X_{\rm open}^4 \subset \Delta_{\rm large}^4 = \Delta_1^4 \, ,$$ where $\Delta_{\rm small}^4$ and $\Delta_{\rm large}^4$ are two copies of the same $\Delta^4$, separated by a product collar. 
The $X^4$ is, according to the case, either the ${\rm int} \, [(\Delta^3 \times I) \, \# \, \infty \, \# \, (S^2 \times D^2)]$ from (\[eq1.1\]) or, in the Schoenflies case it is $\Delta^4 \cup (\partial \Delta^4 \times [0,\infty))$. In both cases we have a splitting of the form $X^4 = X^3 \times R$, but there is of course no compact splitting like $\Delta^3 \times I$, in the Schoenflies case. From the very beginning we are presented here with two distinct features or structures, referred hereafter as “RED” and “BLUE” respectively. The RED feature of (\[eq3.1\]) is a “collapse” $X^4 \to \Delta^4$; but this requires some qualifications. On the one hand, since $X^4$ is not compact, we should rather talk about an infinite dilatation process, going the other way around. But then also, more seriously, in the case of $\Delta^4 = \Delta^3 \times I$, the collapse (and we drop the quotation marks from now on), certainly has some [defects]{}, namely the infinitely many $\# \, (S^2 \times D^2)$ of the corresponding $X^4$. In our present brief outline we will chose to rather ignore them. Of course, in a more realistic discussion they will have to be dealt with; but see here also the remark which follows after (\[eq3.33.2\]). But then also, they are absent in the Schoenflies context. The BLUE feature of (\[eq3.1\]) is that $X^4$, as such, is geometrically simply connected; here the compact $\Delta^4$ is altogether being ignored. In a combinatorial language, $X^4$ admits a smooth cell-decomposition with a 2-skeleton which is $$\mbox{(a collapsible infinite 2-complex)} + \mbox{(2-cells added)}.$$ What we may hope to achieve by combining the two features above would be to construct, inside the collar $\Delta_1^4 - {\rm int} \, \Delta^4$ coming with (\[eq3.1\]), a system of embedded exterior discs, in cancelling position with the 1-handles of $\Delta^4$. It is not very hard to show that this would imply that $\Delta^4$ is g.s.c., i.e. we would get both Theorems 1 and 2 this way. 
Presumably, the exterior discs should be gotten starting from the BLUE structure, and the connection with our $\Delta^4$, i.e. the cancelling property, should be gotten by invoking the RED feature too. This sounds more like a vague pipe-dream, of course, but it may still serve as a guide-line for what will be following next. But before we can really get off the ground, we will need to change the initial set-up (\[eq3.1\]), in several successive stages. STAGE I. Let us start by denoting by $\Delta^2$ the 2-spine of $\Delta^3 = \Delta^3 \times 0 \subset \Delta^3 \times I$, [*or*]{} the 2-skeleton of $\Delta_{\rm Schoenflies}^4$ (so as to avoid having to deal with the 3-handles of $\Delta_{\rm Schoenflies}^4$), according to the case. The point is that all we need is to show that the $4^d$ regular neighbourhood $N^4 (\Delta^2)$ is g.s.c. It is on $\Delta^2$, rather than on $\Delta^4$, that we will focus from now on. We may even denote $N^4 (\Delta^2)$ by $\Delta^4$. Now, a priori both the BLUE and the RED features are each expressible in terms of two smooth cell-decompositions of $X^4$, the two being independent of each other. But then, making use of the smooth Hauptvermutung of J.H.C. Whitehead [@W1] and also of some combinatorial arguments, into which we will not go here, one can produce a unique smooth cell-decomposition of $X^4$ which exhibits both the BLUE and the RED features. This will be done in terms of some combinatorial data to be explained now; this may be a bit lengthy, but it is unavoidable for our exposition. Let $X^2$ be the 2-skeleton of $X^4$, and let also $\Gamma (\infty) \subset X^2$ be the 1-skeleton. The $\Delta^2 \subset X^2$ is a subcomplex, with its finite 1-skeleton $\Gamma (1) \subset \Gamma (\infty)$. 
Inside $\Gamma (\infty)$ live two independent sets of points $$\label{eq3.2} R \ (\mbox{for red}) \subset \Gamma (\infty) \supset B \ (\mbox{for blue}) \, .$$ Both $\Gamma (\infty) - R$ and $\Gamma (\infty) - B$ are trees, so that a given edge $e \subset \Gamma (\infty)$ contains at most one $R_i \in R$ and one $B_j \in B$. When $R_i \in e \ni B_j$, it will be assumed that $R_i = B_j \in R \cap B$. One should think here of the $R_i , B_j$ as being 1-handles or, more precisely, 1-handle cocores, i.e. properly embedded 3-balls $(B^3 , \partial B^3) \subset (N^4 (\Gamma (\infty)) , \partial N^4 (\Gamma (\infty)))$. For further purposes, the following notations will be introduced too $$\label{eq3.3} R \, \cap \, \Gamma (1) = \{ R_1 , R_2 , \ldots , R_n \} \ \mbox{and} \ R - \{ R_1 , R_2 , \ldots , R_n \} = \{ h_1 , h_2 , h_3 , \ldots \} \, .$$ One gets the $X^2$ and/or the $N^4 (X^2)$ by adding 2-cells and/or 2-handles along an infinite framed link $$\{ \mbox{link} \} \subset \partial N^4 (\Gamma (\infty)) \approx \Gamma (\infty) \, .$$ The link comes with two independent disjoint partitions $$\begin{aligned} \label{eq3.4} \{ \mbox{link} \} &= &\sum_1^n \Gamma_i + \sum_1^{\infty} C_j + \sum_1^{\infty} \gamma_k^0 \quad (\mbox{RED partition}) \\ &= &\sum_1^{\infty} \eta_{\ell} + \sum_1^{\infty} \gamma_m^1 \quad (\mbox{BLUE partition}) \, . \nonumber\end{aligned}$$ For each element of the link we have an associated 2-cell and/or 2-handle, denoted in both cases by $D^2$ (curve). With this, (\[eq3.4\]) leads to the following decompositions $$\label{eq3.5} \Delta^2 = \Gamma (1) \cup \sum_1^n D^2 (\Gamma_i) \, , \quad \mbox{and}$$ $$\begin{aligned} X^2 &= &\Gamma (\infty) \cup \left( \sum_1^n D^2 (\Gamma_i) + \sum_1^{\infty} D^2 (C_j) + \sum_1^{\infty} D^2 (\gamma_k^0)\right) \nonumber \\ &= & \Gamma (\infty) \cup \left( \sum_1^{\infty} D^2 (\eta_{\ell}) + \sum_1^{\infty} D^2 (\gamma_m^1)\right) \, . 
\nonumber\end{aligned}$$ [**Remark.**]{} We have used the same $n$, which by all means will mean the cardinality of $R \cap \Gamma (1)$, for the cardinality of the 2-handles $D^2(\Gamma)$ of $\Delta^2$ too. Now, this is perfectly legitimate in the case $\Delta^3 \times I$, when $\Delta^2$ is the spine. In the Schoenflies case, $\Delta^2$ is the 2-skeleton and then actually $$\bar n = {\rm card} \, (D^2 (\Gamma)) > n = {\rm card} \, (R \cap \Gamma (1)) \, .$$ Once this is understood, there should be no problem concerning this ambiguity in notation. So, each curve and each disc comes with two independent labels, a red one and a blue one. The $X^4$ comes with a big RED collapsing flow (with possible defects in the $\Delta^3 \times I$ case). With this, the $D^2 (\gamma_k^0)$ are essentially (but see here also the remark below) those 2-cells which are killed by the $3^d$ RED collapse, while the $D^2 (C_j)$ are the 2-cells killed by the $2^d$ RED collapse. \[Remark. In the Schoenflies case, all the $D^2 (\gamma^0)$ are [*rigorously*]{} killed by the RED 3-dimensional flow. In the $\Delta^3 \times I$ case, the normal $D^2 (\gamma^0)$’s are, but then we also have non-trivial $D^2 (\gamma^0)$’s corresponding to the defects. Quite some care has to be devoted to them in real life. But in this exposition we will largely ignore them (as much as that will be possible).\] In terms of (\[eq3.3\]) and (\[eq3.4\]) we define the red geometric intersection matrix $C \cdot h$. 
We express the RED 2-dimensional collapse by stipulating that $C \cdot h$ is of the following [easy id + nilpotent]{} form (after appropriate re-indexing) $$\label{eq3.6} C_i \cdot h_j = \delta_{ij} + \xi_{ij}^0 \, , \ \mbox{where we can have $\xi_{ij}^0 \ne 0$ only if $i > j$} \, .$$ Finally, the BLUE feature is expressed by stipulating that $$\label{eq3.7} \mbox{The blue geometric intersection matrix $\eta \cdot B$ is also of the easy id $+$ nilpotent form.}$$ With all this, in the framework of a common, unique smooth cell-decomposition for $X^4$, we have encoded in a convenient combinatorial language both our red and blue features. [**Remark.**]{} There is also a notion of [difficult]{} id $+$ nilpotent. With notations like in (\[eq3.6\]), this means now that $$\xi_{ij}^0 \ne 0 \quad \mbox{only if} \quad i < j \, .$$ This, contrary to the easy id $+$ nil, is very far from “collapsible”, when we are in the infinite context. It can be shown without difficulty that the classical Whitehead manifold ${\rm Wh}^3$ [@W2] admits a handlebody decomposition with only handles of index one and two and with a geometric intersection matrix which is of this type. But then, in [@PoV-A] (and see [@PoV-Aoutline] too) where it occurs quite naturally, the difficult id $+$ nil turned out to be quite useful too. In the set-up which we have just introduced, we have the following two very useful objects $$\label{eq3.8} X_0^2 \underset{\rm def}{=} \Gamma (\infty) \cup \left( \sum_1^n D^2 (\Gamma_i) + \sum_1^{\infty} D^2 (C_j)\right) \supset \Gamma (\infty) \cup \sum_{1}^{\infty} D^2 (C_j) \, .$$ There is now a RED 2-dimensional collapse of $X_0^2$ onto $\Delta^2$. But bluewise, the $X_0^2$ is clearly limping. From now on, we take $N^4 (\Delta^2)$ as being $\Delta^4$ and, correspondingly, $N^4 (\Delta^2) \cup \{\mbox{collar}\}$ as being $\Delta_1^4$. 
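To fix ideas about the easy versus difficult dichotomy, here is a toy illustration (it is not part of the actual combinatorial data of our set-up). For just three handles, an easy id $+$ nilpotent geometric intersection matrix is lower triangular, $$C \cdot h = \begin{pmatrix} 1 & 0 & 0 \\ \xi_{21}^0 & 1 & 0 \\ \xi_{31}^0 & \xi_{32}^0 & 1 \end{pmatrix} \, ,$$ and so the 2-handles can be cancelled against the 1-handles in the order $C_1 , C_2 , C_3$, each $C_i$ meeting geometrically only $h_i$ and the already cancelled $h_j$’s with $j < i$. In the difficult case the non-trivial $\xi_{ij}^0$’s sit above the diagonal instead, so that, in the infinite context, a given $C_i$ may be blocked by infinitely many later $h_j$’s; this is why the difficult form is very far from “collapsible”. 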
With all this, we change the set-up (\[eq3.1\]) into the following, for the time being $$\label{eq3.9} \Delta^4 \subset N^4 (X_0^2) \subset N^4 (X^2) \subset \Delta_1^4 \, .$$ STAGE II. This will be, essentially, a refinement of the previous stage, in preparation for the next things to come. In the context of (\[eq3.8\]), (\[eq3.9\]) we consider the natural embedding $$\sum_1^{\infty} \gamma_k^0 \subset \partial N^4 (X_0^2) \subset N^4 (X_0^2) \subset \Delta_1^4 \, .$$ Of course, the $\gamma_k^0$ bounds the $D^2 (\gamma_k^0)$ in $\Delta_1^4 - {\rm int} \, N^4 (X_0^2)$. But we can do much better than that. Consider $$\sum_1^{\infty} \gamma_k^0 \subset X_0^2 \subset \Delta_1^4 \, .$$ We can (essentially) extend the $\underset{1}{\overset{\infty}{\sum}} \ \gamma_k^0$ to an embedded family of discs $$\label{eq3.10} \sum_1^{\infty} d_k^2 \longrightarrow \Delta_1^4 \, ,$$ which touches the $X_0^2 \subset \Delta_1^4$ only along $\underset{1}{\overset{\infty}{\sum}} \ \gamma_k^0$ and which also [smears itself arbitrarily tightly close]{} to $X_0^2$ (contrary to the $\underset{1}{\overset{\infty}{\sum}} \ D^2 ( \gamma_k^0)$ which clearly does not). The “essentially” here stems from the fact that, in the case $\Delta^3 \times I$, the $d_k^2 = d^2 (\gamma_k^0)$ in (\[eq3.10\]) are defined only for those normal $\gamma^0$’s not corresponding to the defects $\# \, \infty \, \# \, (S^2 \times D^2)$ of the infinite RED collar $X^4 - \Delta^4$. It will turn out, eventually, that there is no harm in this. The construction of (\[eq3.10\]) makes an essential use of the RED 3-dimensional collapsing flow, about which not much can be said at the level of the present smallish paper. Finally, we still have to mention one of the important ingredients of the present approach: there is a certain [compatibility]{} property between the RED 2-dimensional and 3-dimensional collapsing flows; we will not make it explicit here, but just refer to it, when necessary. 
Now, once we have (\[eq3.10\]), we will forget about $X^2$ and only retain the $X_0^2$ from (\[eq3.8\]). With the same (\[eq3.8\]), let us notice the following feature of our present set-up. Let us consider any of the RED 1-handles of $\Delta^2$, namely the $$\sum_1^n R_i \subset \Gamma (1) \subset \Gamma (\infty) \cup \sum_1^{\infty} D^2 (C_j) \, , \quad \mbox{with} \ \Gamma (1) - \sum_1^n R_i = {\rm tree} \, .$$ When one adds to any of these $R_i$ all the incoming trajectories of the RED 2-dimensional collapsing flow, then we get the object $$\label{eq3.11} \{ \mbox{extended cocore of} \ R_i \} \subset \Gamma (\infty) \cup \sum_1^{\infty} D^2 (C_j) \, ,$$ which is an infinite PROPERLY embedded tree, which splits locally the target. Even better, we get this way a PROPERLY and properly embedded copy of $B^3 - \{$a tame Cantor set of $\partial B^3 \}$, which we denote just like in (\[eq3.11\]), by $$\label{eq3.12} \{ \mbox{extended cocore} \ R_i \} \subset N^4 (\Gamma (\infty)) \cup \sum_1^{\infty} D^2 (C_j) \, .$$ As a matter of terminology, by “proper” we mean boundary to boundary and interior to interior, while by “PROPER”, in capital letters, we mean $f^{-1} ({\rm compact}) = {\rm compact}$. Now, the same kind of construction as for (\[eq3.12\]) also works perfectly well, in the following cases, for instance. i\) Consider the set of the $b_i \in B \cap \Gamma (1)$. For obvious reasons, we have $\# \, B \cap \Gamma (1) \geq \# \, R \cap \Gamma (1) = n$ and we may as well assume that $$\label{eq3.12.1} P \underset{\rm def}{=} \, \# \, B \cap \Gamma (1) > \# \, R \cap \Gamma (1) = n \, .$$ Any of these $b_i$’s also has an $\{$extended cocore $b_i\}$. Notice that the $b_i$’s are [not]{} exactly 1-handles of $\Delta^2$ (at least if we insist, as we normally do, to have a unique handle of index $0$). But the (\[eq3.12.1\]) is a [disbalance]{} between Red and Blue for $\Delta^4 (\approx \Delta^2)$ which, later on, we will have to deal with. 
ii\) Let $p \in X_0^2$ be any smooth point of some $D^2 (C)$ (but not of any $D^2 (\Gamma)$). Then $p$ also possesses a PROPERLY, but not quite properly, embedded $\{$extended cocore $(p)\}$, inside $X_0^2$ and/or $N^4 (X_0^2)$. Let us say that the embedding fails to be proper, along a small disc of $\partial B^3 - \{$the tame Cantor set$\}$. It is the RED 2-dimensional collapse $X_0^2 \to \Delta^2$ which creates, of course, these $\{$extended cocores$\}$, which are absent for $X^2$. Finally, one should notice two capital sins of our present set-up, as it is $$\begin{aligned} \label{eq3.13.1} &&\mbox{Our $X_0^2$ (or $X^2$ itself, for that matter), possesses two not everywhere well-defined} \\ &&\mbox{2-dimensional collapsing flows, the RED and the BLUE ones.} \nonumber\end{aligned}$$ But, generally speaking, and unfortunately for us as it turns out, the two kinds of trajectories cut through each other transversally. The global picture of the set $\{$RED 2-flow lines$\} \cup \{$BLUE 2-flow lines$\}$ is horribly complicated. $$\label{eq3.13.2} \mbox{There are no $\{$exterior cocore $q\}$ for points $q \in {\rm int} \, D^2 (\Gamma_i)$; and there is certainly no cure for this.}$$ But then, on the road to those embedded exterior discs in cancelling position which we are eventually after (see the very beginning of this section), we most likely have some provisional substitute discs which we call now $\delta^2$, neither quite embedded nor quite exterior. This last thing means transversal contacts $\delta^2 \cap X_0^2 \subset \Delta_1^4$. These contacts may take the form $$q \in \delta^2 \cap D^2 (\Gamma_i) \subset \Delta_1^4 \, , \eqno (*)$$ and here the lack of $\{$exterior cocore $q\}$ is, as we shall see, a serious potential danger. Hence, it would be very desirable to eliminate all the $q$’s like in ($*$) above. We will manage to do that, completely, in the case $\Delta^3 \times I$; see the stage III below. 
What we will manage to do in the Schoenflies case will be just to control the occurrences ($*$) to a sufficient extent so that they become manageable. This is a good place to stress one basic difference between the two levels of our discussion. In the $\Delta^3 \times I$ case, the $\Delta^2$ is embedded in dimension three; in the Schoenflies case this is certainly not so. Now, before we manage to start dealing with the two issues (\[eq3.13.1\]), (\[eq3.13.2\]), a lengthy parenthesis will have to be opened. STAGE III, a parenthesis on compactifications. We consider now a compact bounded smooth 4-manifold, with only handles of index one and two. Call it $\Delta^4$; this could, of course, be the $N^4 (\Delta^2)$ from (\[eq3.9\]), but we are supposed to be now at a higher level of generality. We consider, also, an open 4-manifold $$\label{eq3.14} X_0^4 = \Delta^4 + \{\mbox{handles of index one, called $h_i$, and handles of index two called} \ D^2 (C_j)\} \, .$$ Let us be slightly more specific about the way in which our handles are attached. We start by adding to $\Delta^4$ finitely many infinite trees, which we thicken in dimension four $$\label{eq3.14.1} \Delta^4 \cup \sum_1^f T_j \approx \Delta^4 \cup \sum_1^f N^4 (T_j) \, .$$ Next, one adds the 1-handles to (\[eq3.14.1\]) and, afterwards, the 2-handles too, to the resulting space. [**Lemma 7.**]{} – *We assume now that the geometric intersection matrix $C \cdot h$ is of the easy id $+$ nilpotent type.* There exists then a natural smooth compactification $\hat X_0^4$ of $X_0^4$, with the following properties $$\label{eq3.15.1} \mbox{We have a diffeomorphism} \ \hat X_0^4 = \Delta^4 \cup \{ \mbox{collar} \ \partial \Delta^4 \times [0,1]\} \, .$$ Here, in the RHS, the two pieces are glued along $\partial \Delta^4 \times \{ 0 \}$, so that $\partial \hat X_0^4 = \partial \Delta^4 \times \{ 1 \}$. 
\[It should be stressed here that the formula above is just a diffeomorphism, and that the real life embedding $\Delta^4 \subset X_0^4 \subset \hat X_0^4$, where $\partial \Delta^4 \cap \partial \hat X_0^4 \ne \emptyset$, is not quite the one which it may suggest.\] $$\label{eq3.15.2} \mbox{There is a compact subset $F \subset \partial \Delta^4 \times \{ 1 \}$ such that $X_0^4 = \hat X_0^4 - F$};$$ the next point describes the structure of $F$. $$\begin{aligned} \label{eq3.15.3} &&\mbox{There is a disjoint partition $F = F_0 \cup F_1$, with $\bar F_1 = F$ where $F_0$ is a tame Cantor set} \\ &&\mbox{and $F_1$ is the closed set of a $1$-dimensional {\ibf tame lamination} ${\mathcal L}$.} \nonumber\end{aligned}$$ The $F_0$ is actually the sum of the end-point spaces of the $T_i$’s. The (easy) id $+$ nil form of the matrix $C \cdot h$ puts the $C_i$’s and $h_i$’s into a natural bijection, each block $h_i \cup D^2 (C_i)$ being a 4-ball. We have, with “$C\ell$” standing for closure $$C\ell \left( X_0^4 - \Delta^4 \cup \sum_1^f N^4 (T_j) \right) = \sum_i (h_i \cup D^2 (C_i))$$ and, with this we define the following pair of smooth non-compact manifolds $$\label{eq3.16} (\mbox{LAVA}, \delta \, \mbox{LAVA}) = \left( \sum_i (h_i \cup D^2 (C_i)) , \partial \, \mbox{LAVA} \cap \partial \left( C\ell \left( X_0^4 - \Delta^4 \cup \sum_1^f N^4 (T_j) \right) \right) \right) \, .$$ The LAVA and the $\delta \, \mbox{LAVA}$ are non-compact bounded smooth manifolds, of dimensions four and three respectively. It is via the $\delta \, \mbox{LAVA} \subset \partial \, \mbox{LAVA}$ that our LAVA connects to the other world. In the context of Lemma 7, the key fact is the following $$\begin{aligned} \label{eq3.16.1} &&\mbox{The pair $(\mbox{LAVA}, \delta \, \mbox{LAVA})$ has the following {\ibf product property}. 
There is a diffeomorphism} \nonumber \\ &&(\mbox{LAVA} \cup \{\mbox{the lamination ${\mathcal L}$, added at infinity}\} , \delta \, \mbox{LAVA}) = (\delta \, \mbox{LAVA} \times [0,1] , \delta \, \mbox{LAVA} \times \{ 0 \}). \end{aligned}$$ It is important to notice here that our product property is a feature of the pair $(L , \delta L)$, and not just an absolute property of the space $L$ above. The space of endpoints $e \left( \overset{f}{\underset{1}{\sum}} \, T_i \right) = \overset{f}{\underset{1}{\sum}} \, e (T_i)$ glues naturally both to $\biggl\{ \overset{f}{\underset{1}{\sum}} \, T_i$ and/or to $\overset{f}{\underset{1}{\sum}} \, N^4 (T_i)$ and $\Delta^4 \cup \overset{f}{\underset{1}{\sum}} \, N^4 (T_i) \biggr\}$, and then also to LAVA $\cup \, {\mathcal L}$. This allows us to compactify LAVA itself into the following object $$\mbox{LAVA}^{\wedge} = \mbox{LAVA} \cup {\mathcal L} \cup \sum_1^f e (T_i) \, .$$ With all these things, the explicit definition of the $\hat X_0^4$ from Lemma 7 is, actually $$\begin{aligned} \label{eq3.16.2} \hat X_0^4 = \Delta^4 \cup \sum_1^f N^4 (T_i) \cup \mbox{LAVA}^{\wedge}\end{aligned}$$ where the second “$\cup$”, i.e. the way in which $\mbox{LAVA}^{\wedge}$ glues to the rest, requires some specifications which we will not explain here. The product property which, appropriately stated, is shared by $(\mbox{LAVA}^{\wedge} , \delta \, \mbox{LAVA})$ too, is [the]{} big virtue of lava, as far as we are concerned. With the explicit description of $\hat X_0^4$ given just above, the diffeomorphism (\[eq3.15.1\]) is now a consequence of the product property. As already said before, (\[eq3.15.1\]) is only a diffeomorphism; the real life formula behind it, from which also the correct embedding $\Delta^4 \subset \hat X_0^4$ is readable, is actually (\[eq3.16.2\]). 
[**Some comments.**]{} A) When Lemma 7 is applied to something like the explicit context of stage II, then the $\{$extended cocore $R_i\} \subset N^4 \left( \Gamma (\infty) \cup \underset{1}{\overset{\infty}{\sum}} \, D^2 (C_j) \right)$ or the $\{$extended cocore $p\} \subset N^4 (X_0^2)$ get themselves compactified into objects which we will call $$\label{eq3.16.3} \{\mbox{extended cocore}\}^{\wedge} = \{\mbox{extended cocore}\} \cup \{\mbox{Cantor set}\} = B_{\rm smooth}^3 \, .$$ B\) Now, simple-mindedly, one might think that, locally, our $\mbox{LAVA} \, \cup \, \{{\mathcal L} \ \mbox{at infinity}\}$ is nothing but an object like $$\{\mbox{some extended cocore}\}^{\wedge} \times [0,1] \, .$$ But this is certainly not so. It is indeed true that $(\mbox{LAVA}) \cup {\mathcal L}$ comes equipped with a surjection $$(\mbox{LAVA}) \cup {\mathcal L} \twoheadrightarrow \{\mbox{some highly non simply-connected train-track}\},$$ but the fibers jump here quite wildly, as one moves around the train-track in question. There is not, even locally, a product. C\) The little theory above can be generalized when we have handles of index one, two [*and*]{} three. Our $F = \underset{1}{\overset{f}{\sum}} \, e (T_i) \cup {\mathcal L}$ occurs then as accumulation points of a second, 2-dimensional lamination by planes, living at the infinity of a 4-dimensional object we call MAGMA. This may be useful in some situations, presumably. D\) The compactification above is considerably more simple-minded than the so-called [strange compactification]{} from [@PoVI]. The only similarity is that a lamination occurs there too; but that one has both nontrivial holonomy and some nasty singularities. All such things are absent here. 
E\) The following pair $$\label{eq3.17} \left( N^4 \left( \Gamma (\infty) \cup \sum_1^{\infty} D^2 (C_j) \right)^{\wedge} , \ \sum_1^n \{\mbox{extended cocore} \ R_i\}^{\wedge} \right) \, ,$$ is a standard connected sum of $n$ copies of $$(S^1 \times B^3 , (*) \times B^3) \, .$$ With this, $N^4 (X_0^2)^{\wedge}$ is now a smooth handlebody decomposition of $\Delta^4$, with $n$ red handles of index one and $\bar n$ handles of index two. Here, in the case $\Delta^3 \times I$ we have $\bar n = n$ and in the Schoenflies case $\bar n > n$. This kind of paradigm will be very useful, later on. STAGE IV. We go back now to the two sins (\[eq3.13.1\]), (\[eq3.13.2\]), starting actually with (\[eq3.13.2\]). In the beginning, our way of proceeding will be purely 2-dimensional and abstract. By “abstract”, we mean here that 4-dimensional incarnations, or even maps into $4$-manifolds, are not yet considered. So, starting from $$\Gamma (1) \subset \Delta^2 \subset X_0^2 \subset X^2 \, ,$$ let us define the following 2-complex which, at least for the time being will replace $X^2$ (think of it as being “$X^2$ (old)”) $$\label{eq3.18} X^2 ({\rm new}) = X^2 \cup [(\Gamma (1) \times [0 \geq \xi_0 \geq -1]) \cup (\Delta^2 \times (\xi_0 = -1))] \, ,$$ where the $\Gamma (1) \subset X^2$ is glued to $\Gamma (1) \times (\xi_0 = 0)$. We will decide, by decree, that from now on $\Delta^2 \times (\xi_0 = -1)$ is to be our $$\Delta^2 \approx \{\mbox{2-spine or 2-skeleton of $\Delta^4$}\} \, .$$ The point here is that, with this $$\Delta^2 = \Delta^2 \times (\xi_0 = -1) \subset X^2 ({\rm new}) \, ,$$ all our BLUE and RED features (at least the 2-dimensional ones, for the time being) are still with us. Red-wise, the $D^2 (\Gamma_i)$ are now the $D^2 (\Gamma_i) \times (\xi_0 = -1)$, the old $D^2 (\Gamma_i) \times (\xi_0 = 0)$ being declared $D^2 (\gamma^0)$’s. 
The idea is that, besides this, on $X^2 = X^2 ({\rm old}) \subset X^2 ({\rm new})$, the RED labels stay (essentially) put, and the RED collapse is proceeding according to the following general scheme $$X^2 ({\rm new}) \to (\Gamma (1) \times [0 \geq \xi_0 \geq -1]) \cup \Delta^2 \times (\xi_0 = -1) \to \Delta^2 \times (\xi_0 = -1) \, .$$ When we move to BLUE, the general scheme is to start by the following decree $$\label{eq3.18.1} \mbox{The 2-cells} \ D^2 (\Gamma_{i}) \times (\xi_0 = -1) \ \mbox{are now $D^2 (\gamma^1)$'s.}$$ With this one can crush all the $\Delta^2 \times [0 \geq \xi_0 \geq -1]$ part of $X^2 ({\rm new})$ onto $\Gamma (1) \times (\xi_0 = 0) = \Gamma (1) \subset X^2 ({\rm old})$ and then proceed on the remaining $X^2 ({\rm old})$, exactly as before, in the old case. Notice that, with our new RED story as set up above, comes also an $X_0^2 ({\rm new}) \subset X^2 ({\rm new})$, defined on the same lines as (\[eq3.8\]). The process $X^2 ({\rm old}) \Rightarrow X^2 ({\rm new})$ which we have just reviewed, is part of a bigger, still abstract transformation $$\label{eq3.19} X^2 ({\rm old}) \Longrightarrow X^2 ({\rm new}) = X^2 \Longrightarrow 2X^2 \supset 2X_0^2 \, ,$$ to be defined, explicitly, later on. For expository purposes, we will give now, before we move to (\[eq3.19\]), the gist of the way in which the transformation ${\rm old} \Rightarrow {\rm new}$, will be incarnated 4-dimensionally; only the easier case $\Delta^4 = \Delta^3 \times I$ will be discussed right now. In the situation $\Delta^3 \times I$, at least, we have suggested in Figure 1 an immersion $f$ of $X^2 ({\rm new})$ into $X^4 = X^3 \times R$. This is the $X^4$ from (\[eq3.1\]) and $f$ is part of the following diagram $$\xymatrix{ X^2 ({\rm new}) \ar[rr]^{f} \ar[drr]_{\pi \circ f} &&X^4 = X^3 \times R \ar[rr]^{\qquad\pi_0} \ar[d]^{\pi} &&R \\ &&X^3 }$$ which is actually a piece of the larger diagram (\[eq3.24\]) below. 
Here are some explanations concerning Figure 1, from which the reader is supposed to read the diagram above. The regular oblique grid, which one should imagine infinite in all directions, is suggesting the embedding $X^2 ({\rm old}) \subset X^3 \times R$. We have tilted it so as to make the map $\pi \circ f$ generic. In the present $\Delta^3 \times I$ context, $\Delta^2 \subset X^2 ({\rm old})$ lives at level $t=0$. It is suggested by the dotted fat line, along which $t=0$ and $\xi_0 = 0$ coincide. The plain fat line stands for $\pi \circ f (\Delta^2 \times (\xi_0 = -1))$, while the oblique thinly dotted lines stand for $\pi \circ f (\Gamma (1) \times [0 \geq \xi_0 \geq -1])$. Our $f$ is a generic immersion and some of the double points $fM^2 (f)$ are represented as fat points. The $$\xymatrix{ X^2 ({\rm new}) \ar[r]^{\ \ \pi \circ f} &X^3 }$$ itself, is a singular $2$-dimensional polyhedron, with undrawable singularities, in the sense of [@Ga] and/or [@PoI]. Anyway, with all these things we may define $N^4 (X_0^2 ({\rm new}))$ as being the regular neighbourhood of the immersion $f \mid X_0^2 ({\rm new})$, happily disregarding the double points of $f$. There is no harm with this, of course. $$\includegraphics[width=8cm]{POfig1.eps}$$ But Figure 1 is supposed to suggest other things too. Notice that in our drawing, the location of $f(\Gamma (1) \times [0 \geq \xi_0 \geq -1])$ breaks the symmetry between the past $(t < 0)$ and the future $(t > 0)$. The full incarnation of (\[eq3.19\]) will break this symmetry even further. The point here is the following. Later on, curves like (\[eq3.4\]), plus some others, will enter into a link projection (see (\[eq4.7\]), in the next section) which will have to be changed into a link diagram. 
It so turns out that our geometrical set-up, when restricted just to $X_0^2 ({\rm old})$, ensures that, in the passage from the link projection to the link diagram, there is a certain correlation between future and UP, and then also between past and DOWN. Of course, this convention, as such, could happily be reversed. But once it is there, then sending $[0 \geq \xi_0 \geq -1]$ to the past, like we did in Figure 1, will ensure that, in the context $\Delta^3 \times I$, the $\Delta^2 \times (\xi_0 = -1)$, actually the corresponding curves $\Gamma_i \times (\xi_0 = -1)$, will be constantly DOWN in the link diagram. For reasons to be explained later, this will turn out to be very good for us. At this precise point, the Schoenflies case is different and also more difficult, as it will turn out. At the level of Figure 1 we have suggested by arrows, very schematically, the RED 3-dimensional flow, crushing everything on $\Delta^2 \times (\xi_0 = -1)$. This flow is, in the context $\Delta^3 \times I$, perfectly “smooth”, like in the old context, [*except*]{} for folding singularities at $\xi_0 = 0$. These do create serious technical problems and we will only be able to survive with them at the price of a very heavy cure. In the Schoenflies case, the RED 3-dimensional collapsing flow comes with even more serious problems at $\xi_0 = -1$. We will come back to them later. But right now, we go back to the specific issue (\[eq3.13.2\]) for $\Delta^3 \times I$. It turns out that once the curves $\Gamma_{i \leq n} \times (\xi_0 = -1)$ are kept completely DOWN, from the viewpoint of our link diagram, then the $\Delta^2 \times (\xi_0 = -1)$ itself stays disjoint from all the action to come, in particular from the kind of accidents which will be described by (\[eq3.33.2\]) (and their yoked (\[eq3.33.1\])), below. But since this is a bit too technical to be explained here, we will also give a more heuristic and intuitively easy, albeit less tight, argument. 
In our $\Delta^3 \times I$ case, the $\Delta^2$ is embeddable in dimension three and, when we thicken in dimension four the immersion suggested by Figure 1, then, at the level of $N^4 (X_0^2 ({\rm new}))$ and/or $N^4 (X^2 ({\rm new}))$, we will get $$\Delta^2 \times (\xi_0 = -1) \subset \Delta^3 \subset \partial N^4 \, .$$ This should make it plausible, at least, that it is now out of trouble. So, we go now to $\Delta^4$ Schoenflies, having in the back of our minds the same concern (\[eq3.13.2\]). The first thing now is to make sure that (\[eq3.18.1\]) is [strictly]{} true. The issue here is the following. To begin with, once (\[eq3.18\]) has the BLUE features, something like (\[eq3.18.1\]) has to be there. But then, any further subdivision, and there will be many such, transforms, generally speaking, any $D^2 (\gamma^1)$ into a new, smaller $D^2 (\gamma^1)$, plus many $D^2 (\eta)$’s. Clearly this would spoil any initial strict (\[eq3.18.1\]). In consequence, some hard work is necessary in order to keep (\[eq3.18.1\]) strictly true. So, we assume this to be so from now on, and let us see what it can do for us. Like in (\[eq3.5\]) (and/or in (\[eq3.4\])), in the new context (\[eq3.18\]) we continue to have the following equality between infinite sets $$\label{eq3.19.1} \{ D^2 (\Gamma_i)\} + \{ D^2 (C_j)\} + \{ D^2 (\gamma_k^0)\} = \{ D^2 (\eta_{\ell}) \} + \{ D^2 (\gamma_m^1)\}$$ and so (\[eq3.18.1\]) implies that we also have $$\label{eq3.19.2} \{ D^2 (\eta_{\ell}) \} \subset \{ D^2 (C_j) \} + \{ D^2 (\gamma_k^0)\} \, ,$$ while $\{ D^2 (\eta_{\ell}) \} \cap \{ D^2 (\Gamma_i)\} = \emptyset$. As we shall see later, this is crucial for handling the issue (\[eq3.13.2\]) in the Schoenflies case. But, for the time being, we leave it at that. From now on, it will be understood that $X_0^2$, $X^2$ are the new ones (whenever the contrary is not explicitly said), and we move to the big issue (\[eq3.13.1\]). 
In order to deal with it, we will introduce something like a much grander version of (\[eq3.18\]), the so-called [doubling process]{}. Beware that what follows next is, for the time being, purely abstract, not yet incarnated 4-dimensionally. Let $e \subset \Gamma(\infty)$ be an edge which contains an element $b_j \in B$; such an edge will be denoted, generically, by $e(b)$ (or sometimes, more specifically, $e(b_j)$). Any other edge, i.e. one containing either something in $R-B$ or nothing in $B \cup R$, will be denoted generically by $e(r)$. For the next purposes, we introduce three quantities $$0 < r \, (\mbox{for RED}) < \beta < b \, (\mbox{for BLUE}) \, ,$$ with $\beta - r$ very small (compared to $b-r$). For $b_i \in B$, we consider the curve $c_i (b) = \partial (e(b_i) \times [r,b])$, boundary of the 2-cell $D^2 (c_i(b)) = e(b_i) \times [r,b]$. Similarly, we define the 2-cell $D^2 (c(r)) = e(r) \times [r,b]$, cobounding $c(r) = \partial D^2 (c(r))$. With this, we consider now the following infinite 2-complex $$\label{eq3.20} 2 X_0^2 = (X_0^2 \times r) \cup \{ \Gamma (\infty) \times [r,b] - \sum {\rm int} \, D^2 (c(b))\} \cup \left( \bigcup_{1}^{\infty} D^2 (\eta_{\ell}) \right) \times b \, ,$$ a formula which begs for some explanations. The $X_0^2 \times r$ is glued to the middle term $\{ \ldots \}$ along $\Gamma (\infty) \times r$, while the middle term is then glued to $$X_b^2 \underset{\rm def}{=} \left(\bigcup_{1}^{\infty} D^2 (\eta_{\ell}) \right) \times b$$ along $\Gamma (\infty) \times b$. Next, when in (\[eq3.20\]) we delete ${\rm int} \, D^2 (c_i (b))$, it should be understood that a boundary collar, thicker than $\beta - r$, is left in place, so that the inclusion $c_i (b) \subset \partial (2 X_0^2)$ should make sense. 
Completely similarly, when the interiors of the 2-cells $D^2 (\gamma_k^0)$ (see (\[eq3.4\]) and (\[eq3.5\])) are deleted from $X^2 \approx X^2 \times r$, so as to get $X_0^2 \times r \subset 2 X_0^2$, again a collar is left in place so that, eventually, we should get $$\label{eq3.20.1} \partial(2 X_0^2) = \sum_k \gamma_k^0 + \sum_i c_i (b) \, .$$ The 1-skeleton of $2 X_0^2$, which we denote by $2\Gamma (\infty)$, is $$2\Gamma (\infty) = (\Gamma (\infty) \times r) \cup (\Gamma_0 (\infty) \times [r,b]) \cup (\Gamma (\infty) \times b) \, ,$$ where $\Gamma_0 (\infty) \subset \Gamma (\infty)$ is the 0-skeleton. Schematically speaking, $2 X_0^2$ consists of a red side $X_0^2 \times r$, a blue side $X_b^2$, plus some intermediary stuff which is essentially $\Gamma (\infty) \times [r,b]$, but with some deletions. On the same lines as in (\[eq3.20\]), we introduce the following larger 2-complex $$\label{eq3.21} 2 X^2 = (X_0^2 \times r) \cup (\Gamma (\infty) \times [r,b] ) \cup X_b^2 \, .$$ In all this story, it should be understood that our object of interest, $\Delta^2 = \Delta^2 \times (\xi_0 = -1)$, lives now, naturally, in $X_0^2 \times r$; but the $D^2 (\eta)$’s have been transferred to the $b$-side. In the next lemma, the RED 3-dimensional collapse is ignored. [**Lemma 8.**]{} – [*At the present abstract $2$-dimensional level of $2 X^2$, all the desirable RED and BLUE features are preserved. But moreover, for $2 X^2 \supset 2 X_0^2$ we also have the following*]{} $$\label{eq3.22} \mbox{\it There are {\ibf no transversal intersections} between the RED flow-lines and the BLUE flow-lines.}$$ The whole purpose of the doubling was exactly to get (\[eq3.22\]). Here is also a sketch of the proof of Lemma 8. To begin with, on the $r$-side $$\Gamma (\infty) \times r \subset X_0^2 \times r$$ we keep all the $R,B$ as they are, as well as the labels $D^2 (\Gamma)$, $D^2 (C)$. 
Of course, the $D^2 (\gamma^0)$’s are gone, but they have left a thin collar and the useful boundary piece $\gamma^0$, in their place. The edges $\Gamma_0 (\infty) \times [r,b]$ carry no $R \cup B$ labels, while each edge $e \times b \subset \Gamma (\infty) \times b$ will carry a (newly created) $R \cap B$. The set $\{ D^2 (\Gamma) \}$ does not change, but we will have extended sets of $C$, $\eta$, namely $$\{\mbox{extended set of $C$'s}\} = \{ C \} + \{ c(r) \} + \{ \eta \times b \} \quad \mbox{and} \quad \{\mbox{extended set of $\eta$'s}\} = \{ \eta \times b \} + \{ c(b)\} + \{ c(r) \} \, .$$ In the same vein, we have $$\{\mbox{extended set of $\gamma^0$'s}\} = \{ \gamma^0 \} + \{ c(b) \} \, , \ \{\mbox{extended set of $\gamma^1$'s}\} = \{\Gamma \} + \{ C \} \, .$$ Notice that the first of these last two formulae really makes $2 X_0^2$ be the analogue of $X_0^2$, after doubling, with $2 X^2$ in the role of $X^2$. On the $X_0^2 \times r$ side, the old RED geometric intersection matrix is kept as such. In the extended $C \cdot h$, which one can check to be of the easy id $+$ nil type, the $c(r)$ is dual to the corresponding $e(r) \times b$, while $\eta_i \times b$ is dual to $b_i \times b$. In the new BLUE geometric intersection matrix, which is again of the easy id $+$ nil type, the $\eta_i \times b$ is dual to $b_i \times b$ and then, also, $c(b_i)$ is dual to $b_i \times r$ and $c(r)$ to $e(r) \times b$. By $e(\ldots)$ we may mean the corresponding, newly created $R \cap B$. The old matrix $\eta \cdot B$ finds itself transported now onto the $b$-side, making (\[eq3.22\]) possible. The much larger $2X_0^2$ collapses now onto our $\Delta^2$ and, with this, our discussion of Lemma 8 is finished. So far, all this was purely abstract stuff. In order to incarnate it, geometrically, we start with the natural embedding $X_0^2 ({\rm old}) \subset X^3 \times R$, just like in the discussion coming with Figure 1. 
Next, one has to find a good way to extend it to a generic immersion $$\label{eq3.24} \xymatrix{ \Delta^2 \subset 2X_0^2 \ar[rr]^{f} \ar[drr]_{\pi \circ f} &&X^4 = X^3 \times R \ar[rr]^{\qquad\pi_0} \ar[d]^{\pi} &&R \\ &&X^3 }$$ We will have to come back to (\[eq3.24\]), but let us pretend it is with us now. What we do next is the following item. $$\begin{aligned} \label{eq3.24.1} &&\mbox{Completely disregarding the double points of the immersion $f$, one can get the regular} \\ &&\mbox{neighbourhood $N^4 (2X_0^2)$ which is supposed to contain the {\ibf correct} $\Delta^4 = N^4 (\Delta^2)$. } \nonumber\end{aligned}$$ One can apply the compactification of Lemma 7 to $N^4 (2X_0^2)$ and get $N^4 (2X_0^2)^{\wedge}$ which comes then with a diffeomorphism $$N^4 (2X_0^2)^{\wedge} = N^4 (\Delta^2) \cup \{\mbox{collar}\} \, .$$ Now, at the level of stage II we had the embedding (\[eq3.10\]), which was smearing itself very tightly close to $X_0^2$. This last property means that we can carry now the $\underset{k}{\sum} \, d_k^2$ cobounding the $\underset{k}{\sum} \, \gamma_k^0 \subset \partial (2X_0^2)$ along, to our present context $$\label{eq3.24.2} \Delta^2 \subset X_0^2 \subset 2X_0^2 \hookrightarrow N^4 (2X_0^2)^{\wedge} \, .$$ In other words, the $\sum \gamma_k^0$ extends now to an immersion which, for better or for worse, replaces the by now deceased $\underset{k}{\sum} \, D^2 (\gamma_k^0)$ $$\label{eq3.24.3} \sum_k d_k^2 \overset{\mathcal J}{\longrightarrow} N^4 (2X_0^2)^{\wedge} \, .$$ For this generic immersion there are now both double points $M^2({\mathcal J}) \subset \underset{k}{\sum} \, d_k^2 \times \underset{\ell}{\sum} \, d_{\ell}^2 - \{\mbox{diagonal}\}$, at the source, inducing $$\label{eq3.24.4} x \in {\mathcal J} M^2 ({\mathcal J}) \subset N^4 (2X_0^2)^{\wedge} \, ,$$ at the target and, also, transversal contacts $$\label{eq3.24.5} z \in {\rm Im} \,{\mathcal J} \cap 2X_0^2 \subset N^4 (2X_0^2)^{\wedge} \, .$$ The contacts $z$ take the form $$z 
\in {\rm Im} \,{\mathcal J} \cap (X_0^2 \times r \cup \Gamma (\infty) \times [r,\beta]) \, ,$$ and in the case $\Delta^3 \times I$ (but not necessarily so in the Schoenflies case) they avoid $\Delta^2$ altogether, like in the previous discussion around (\[eq3.18\]). At this point, one should also keep in mind that (\[eq3.24.3\]) is the living memory of the RED 3-dimensional collapsing flow which, at least in the context (\[eq3.18\]), was still with us. It was already mentioned that in the Schoenflies context of (\[eq3.18\]), this RED 3-dimensional flow did have complications at $\xi_0 = -1$. The offshoot of these complications [*is*]{} the transversal contacts, hinted at above, $$\label{eq3.24.6} z \in {\rm Im} \, {\mathcal J} \cap \Delta^2 (\mbox{Schoenflies}) \, .$$ This is part of the issue (\[eq3.13.2\]) for Schoenflies, the discussion of which is not yet finished. Instead of coming directly to grips with the full diagram (\[eq3.24\]), a more indirect road will be profitable now. We will use the language of singular 2-dimensional polyhedra, their desingularizations and 4-dimensional thickenings, which is explained in great detail in [@Ga]; see also [@PoI] or [@PoII]. It is understood that, whatever $f$ itself may be, it is generic with respect to $\pi$, so that $$\label{eq3.25} \xymatrix{ 2X_0^2 \ar[r]^{\ \ \pi \circ f} &X^3 }$$ is a singular 2-dimensional polyhedron. The $\pi_0$-values in (\[eq3.24\]) naturally induce a desingularization $\varphi$ for (\[eq3.25\]). Essentially, this will be the following prescription $$\label{eq3.26} \mbox{$\varphi = S$ means high $\pi_0$ values and $\varphi = N$ means low $\pi_0$ values.}$$ With this comes a 4-dimensional thickening $\Theta^4$ which is diffeomorphic to the regular neighbourhood of the immersion $f$, i.e. $$\label{eq3.26.1} N^4 (2X_0^2) = \Theta^4 (2X_0^2 , \varphi) \, ,$$ an equality stemming directly from first principles. 
In the references [@Ga], [@PoI], [@PoII] it is explained, in detail, how, to any pair (singular 2-dimensional polyhedron, desingularization), a canonical 4-dimensional thickening $\Theta^4 ( \ldots , \ldots)$ is attached. We will rather concentrate here on the following restriction of (\[eq3.25\]), namely $$\label{eq3.26.2} \xymatrix{ (X_0^2 \times r) \cup \Gamma (\infty) \times [r,\beta] \ar[r]^{\qquad\qquad \pi \circ f} &X^3 \, , }$$ where most of our headaches will be concentrated. Here is a brief description of how (\[eq3.26.2\]) goes. Start with the restriction of (\[eq3.26.2\]) to $X_0^2 ({\rm old})$. There, our set-up is such that any of the undrawable singularities has to involve a purely spatial branch and a time-like branch. For these singularities, we will have, according to the case $$\label{eq3.27} \varphi (\mbox{future}) = S \, , \ \varphi (\mbox{past}) = N \, .$$ Out of $X_0^2 ({\rm old})$ grow branches $\Gamma (\infty) \times [r,\beta]$ and $\Gamma (1) \times [0 \geq \xi_0 \geq -1]$, creating new singularities, but never do we find, simultaneously, at a given singularity, a branch $t < 0$ and a branch $\xi_0 < 0$. The correct set-up now is the following extension of (\[eq3.27\]), which also supersedes it, whenever that is the case $$\label{eq3.28} \varphi (\Gamma (1) \times [0 \geq \xi_0 \geq -1]) = N \, , \ \varphi (\Gamma (\infty) \times [r,\beta]) = S \, .$$ We will not try to explain exactly here why this [is]{} the correct thing to do. Nor do we make explicit how to proceed at $\xi_0 = -1$. For the case $\Delta^3 \times I$ this is simple-minded, and rather clearly suggested by Figure 1. For the Schoenflies case, we also want the branches containing $+ \vec\xi_0$ to be always with $\varphi = S$, but this is now no longer a hundred per cent automatic, and some work is needed at this particular point. [**Remark.**]{} At any given singularity, exactly one of the four prescriptions in (\[eq3.27\]) $+$ (\[eq3.28\]) applies. 
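For the reader's convenience, the combined prescriptions (\[eq3.27\]) $+$ (\[eq3.28\]) may be gathered into a single display; this is only a restatement of the rules above, with the understanding, already stated, that the values coming from (\[eq3.28\]) supersede those of (\[eq3.27\]) whenever both could apply:

```latex
\varphi = \begin{cases}
S & \mbox{on future branches } (t > 0) \mbox{ of } X_0^2 ({\rm old}) \, , \\
N & \mbox{on past branches } (t < 0) \mbox{ of } X_0^2 ({\rm old}) \, , \\
N & \mbox{on } \Gamma (1) \times [0 \geq \xi_0 \geq -1] \, , \\
S & \mbox{on } \Gamma (\infty) \times [r,\beta] \, .
\end{cases}
```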
With all these things, the climax of the present stage IV will be to replace the set-up (\[eq3.1\]) by the following one $$\label{eq3.29} \Delta^4 = N^4 (\Delta^2) \subset N^4 (2X_0^2) \subset \Delta_1^4 \overset{\mathcal J}{\longleftarrow} \sum_k d_k^2 \, ,$$ where we take as ambient space $\Delta_1^4$ a slightly larger copy of $N^4 (2X_0^2)^{\wedge}$, into which ${\mathcal J} d^2$ is pushed, rel its boundary, as much as is possible. STAGE V. ON THE WAY TO A SYSTEM OF EMBEDDED EXTERIOR DISCS, IN CANCELLING POSITION. The three adjectives just used will certainly not apply to the discs from Lemma 9 below; they remain, for the time being, at the level of a pipe dream. We start now with the set $B$ [at level (\[eq3.18\])]{} (and not at the fully doubled level (\[eq3.20\]) or (\[eq3.21\])), and we will organize it according to its natural BLUE order, which comes from the easy id $+$ nil property of the matrix $\eta \cdot B$ (\[eq3.18\]). Let us denote this by $$B = \{ b_1 , b_2 , b_3 , \ldots \} \, .$$ Still at level (\[eq3.18\]), with the same index $i$, we also have $e(b_i) \subset \eta_i \subset D^2 (\eta_i) \subset X_0^2$. Generically, the curve $\eta_i$ also has some other edges $e(b_j) \subset \eta_i$, with $j < i$. Now, in terms of the equality (\[eq3.19.1\]), to the blue 2-cell $D^2 (\eta_i)$ corresponds a red cell which may be a $D^2 (\Gamma)$, a $D^2 (C)$ or a $D^2 (\gamma^0)$. Let us call this red cell $[D^2 (\eta_i)]$, but with the understanding that, in the particular $D^2 (\gamma_k^0)$ case, $$[D^2 (\eta_i)] = \{\mbox{the surviving collar of $D^2 (\gamma_k^0)$ in $X_0^2$}\} \, .$$ In that case $\gamma_k^0$ occurs as an additional boundary piece of $[D^2 (\eta_i)]$. 
We move now to $2X_0^2$ and, at its level, for every $b_i \in B$ we define the following disc with holes, which is properly embedded inside $2X_0^2$ $$\begin{aligned} \label{eq3.30} &&B_i^2 = \{ [D^2 (\eta_i)] \times r \} \cup \{ \eta_i \times [r,b] , \ \mbox{with} \ D^2 (c_i (b)) \ \mbox{and the} \ D^2 (c_{j < i} (b)) \ \mbox{replaced by their} \\ &&\mbox{corresponding surviving collars, creating thus boundary pieces} \ c_i (b) , c_j (b) \ \mbox{for} \ B_i^2 \} \cup \{ D^2 (\eta_i) \times b\} . \nonumber\end{aligned}$$ As a notational remark, do not mix up the $B_i^2$ from (\[eq3.30\]) with the Blue set $B$; they are not at all the same thing. Notice that $$\label{eq3.31} \partial B_i^2 = c_i (b) + \sum \{\mbox{some lower} \ c_{j<i} (b) \} + \{\mbox{possibly,} \ \gamma^0_k \} \, .$$ Let us go back now, for a minute, to the immersion (\[eq3.24.3\]), which also appears in (\[eq3.29\]). The map ${\mathcal J}$ of the $d_k^2$ into $\Delta_1^4$ is guided by another map, which really does come from the RED 3-flow, namely $$\label{eq3.32} \sum_k d_k^2 \overset{F}{\longrightarrow} X_0^2 \subset 2X_0^2 \, .$$ Think of $F$ as being more or less immersive too (but it certainly has folds) and, with these things, the ${\mathcal J}$ of (\[eq3.24.3\]) is the lift of $F$ into $\Delta_1^4$. [**Lemma 9.**]{} – 1) [*For each curve $c_i (b)$ there is a disc $\delta_i^2$, with $\partial \delta_i^2 = c_i (b)$, coming with a map $$\label{eq3.32.1} \sum \delta_i^2 \overset{F}{\longrightarrow} 2X_0^2 \, .$$ We construct*]{} (\[eq3.32.1\]) [*by induction, along the natural BLUE order. So, for given $b_i$, assume that*]{} (\[eq3.32.1\]) $\vert \, (j < i)$ [*is already well-defined. Then, for $c_i (b)$ we consider the $B_i^2$, with which $F\delta_i^2$ will start. Next, we have to fill in the missing boundary pieces occurring in*]{} (\[eq3.31\]). 
[*We fill in every $c_{j<i} (b)$ with $F \delta_j^2$ and $\gamma_k^0$ (if it is there) with $Fd_k^2$.*]{} 2\) [*One can lift $F$ off $2X_0^2$ into an immersion which, essentially, extends [(\[eq3.24.3\])]{}, namely*]{} $$\label{eq3.32.2} \sum \delta_i^2 \overset{\mathcal J}{\longrightarrow} \Delta_1^4 \, .$$ Our ${\mathcal J}$, which rests on $N^4 (2X_0^2)^{\wedge} \subset {\rm int} \, \Delta_1^4$ (\[eq3.29\]) exactly along $\sum c_i (b) \subset \partial N^4 (2X_0^2)^{\wedge}$ has, generally speaking, ACCIDENTS extending (\[eq3.24.4\]), (\[eq3.24.5\]), namely $$\label{eq3.33.1} \mbox{Double points} \ x \in {\mathcal J} M^2 ({\mathcal J}) \subset \Delta_1^4 \, , \ \mbox{and}$$ $$\label{eq3.33.2} \mbox{Transversal contacts} \ z \in {\mathcal J} \delta^2 \cap 2X_0^2 \subset \Delta_1^4 \, .$$ A VERY IMPORTANT REMARK. In the case $\Delta^3 \times I$, we also have non-trivial $D^2 (\gamma_k^0)$ corresponding to the defects $\# \, \infty \, \# \, (S^2 \times D^2)$, and we will denote them, generically, by $D^2 (\gamma_{k(\beta)}^0)$. Obviously, our little story above would get into deep trouble every time we found that $$[D^2 (\eta_i)] = D^2 (\gamma_{k(\beta)}^0) \, . \eqno (*_1)$$ So, here is a hint of how we get rid of this problem. To begin with, not the full $\underset{1}{\overset{\infty}{\sum}} \, \delta_i^2$ will actually be needed, but only a very high finite truncation $\underset{1}{\overset{M}{\sum}} \, \delta_i^2$ of it. The quantity $M$ has to be large enough, if (\[eq3.32.2\]) is to be good enough for our purposes. Next, at the level of (\[eq3.5\]), there is a certain margin of flexibility for fixing the infinite subset $\sum D^2 (\gamma_{k(\beta)}^0) \subset \sum D^2 (\gamma_k^0)$. This also turns out to be an issue where there is no difference between $X^2 ({\rm old})$ and $X^2 ({\rm new})$. But the real point is now the following: we can fix $\sum D^2 (\gamma_{k(\beta)}^0)$, [after]{} having decided on the size of $M$. 
And then, we can also choose $\sum D^2 (\gamma_{k(\beta)}^0)$ sufficiently close to infinity so that, for $i \leq M$, the $(*_1)$ should not occur. There is no such problem in the Schoenflies case, of course. This ends our remark. Keep in mind that each $\delta^2$ is made out of spare parts, possibly occurring with multiplicities, each of them being a $B_i^2$ or a $d_k^2$. As a result of the tri-dimensionality of $\Delta^3$ and of the passage old $\Rightarrow$ new, in the $\Delta^3 \times I$ context we will find that $$\label{eq3.34} {\mathcal J} \delta^2 \cap \Delta^2 = \emptyset \, .$$ This is the complete happy end as far as the issue (\[eq3.13.2\]) is concerned, in the case $\Delta^3 \times I$. As already noticed, on the other hand, in the Schoenflies case we do have $$\label{eq3.34.S1} {\mathcal J} d^2 \cap \Delta^2 \ne \emptyset \, .$$ But then, once we have a [strict]{} (\[eq3.18.1\]), and see here also the discussion coming with (\[eq3.19.1\]), (\[eq3.19.2\]), we will also find that $$\label{eq3.34.S2} {\mathcal J} B^2 \cap \Delta^2 = \emptyset \, .$$ The two formulae above, in particular the last one, are what takes care of the issue (\[eq3.13.2\]) in the Schoenflies case. Incidentally, it was quite an illumination for me when, during the spring of 2003 in Princeton, I realized that (\[eq3.18.1\]), leading to (\[eq3.34.S2\]), was the key to the until then locked door for Theorem 2. [**Remark.**]{} A priori, we might have tried to invoke (\[eq3.18.1\]) for the case $\Delta^3 \times I$ too. For some technical reasons, connected to the 3-dimensional RED collapsing flow, we have chosen not to proceed that way. 
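To keep the two situations apart at a glance, the state of the issue (\[eq3.13.2\]) reached so far may be tabulated as follows; this is only a restatement of (\[eq3.34\]), (\[eq3.34.S1\]) and (\[eq3.34.S2\]):

```latex
\begin{array}{ll}
\Delta^3 \times I : & {\mathcal J} \delta^2 \cap \Delta^2 = \emptyset \, ; \\[1mm]
\Delta^4 \ \mbox{Schoenflies} : & {\mathcal J} d^2 \cap \Delta^2 \ne \emptyset \, ,
\ \mbox{but} \ {\mathcal J} B^2 \cap \Delta^2 = \emptyset \, .
\end{array}
```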
As things stand right now, for our present $c_i (b) = \partial \delta_i^2$, when we move from the $B$ (\[eq3.18\]) to the actual larger set of blue 1-handles $B$ of $2X_0^2$, the contacts $c_i (b) \cdot B$ are exactly the following two $$\begin{aligned} \label{eq3.35} &&c_i (b) \cdot (b_i \times r) = 1, \ \mbox{on the $X_0^2 \times r$ side, and} \\ &&c_i (b) \cdot (b_i \times b) = 1 , \mbox{on the $X_b^2$ side}. \nonumber\end{aligned}$$ Let us also introduce the notation $$\label{eq3.36} \Delta^2 \cap B = \Gamma (1) \cap B = \{ b_{i_1} , b_{i_2} , \ldots , b_{i_P} \} \subset \{ b_1 , b_2 , \ldots , b_M \} \subset B \ \mbox{(\ref{eq3.18})} \, .$$ The quantity $P$ here is the same one as in (\[eq3.12.1\]). In what follows next, in various successive steps, we will vastly change the system (\[eq3.32.2\]). For these vastly transformed $\delta_i^2$’s, the boundary curve, which so far is the $c_i (b)$ above, will be denoted by $\eta_i ({\rm green}) = \partial \delta_i^2 \subset \partial N^4 (2X_0^2)^{\wedge}$. Also, with the large $B = B(2X_0^2)$, we will be very much focusing on the geometric intersection matrix $$\label{eq3.37} \eta ({\rm green}) \cdot B \, .$$ Careful here: the $\delta^2$ is an [exterior]{} disc (or at least a candidate thereof). So, one should not mix up the [external]{} BLUE geometric intersection matrix (\[eq3.37\]) with the related [internal]{} BLUE geometric intersection matrices $\eta \cdot B$ (\[eq3.18\]) or $\eta \cdot B (2X_0^2)$. 
[**Lemma 10.**]{} – [*By sliding the external system of discs $\underset{1}{\overset{M}{\sum}} \, \delta_i^2$ over the internal $2$-handles contained in $$X_b^2 \cup \Gamma (\infty) \times [r,b] \subset 2X_0^2$$ we can create a new system of external discs, which, after further restriction from $M$ to $P$ [(\[eq3.36\])]{}, we denote $$\label{eq3.38} \sum_{\ell = 1}^P \delta_{i_{\ell}}^2 \overset{\mathcal J}{\longrightarrow} \Delta_1^4 \, , \quad \partial \delta_{i_{\ell}}^2 = \eta_{i_{\ell}} ({\rm green}) \, ,$$ which is such that the following [blue diagonality]{} condition should be satisfied: $$\label{eq3.39} \mbox{For $\alpha , \beta \leq P$ we have $\eta_{i_{\alpha}} ({\rm green}) \cdot b_{i_{\beta}} = \delta_{\alpha\beta}$, and} \ \sum_{\ell = 1}^P \eta_{i_{\ell}} ({\rm green}) \cdot (B(2X_0^2) - \Delta^2 \cap B (2X_0^2)) = 0 \, .$$* ]{} This operation increases, a priori, the bag of ACCIDENTS to be considered afterwards. Also, both very importantly and less trivially than it might seem, there is no obstruction to performing this BLUE diagonalization. We will have to come back to this issue. Once our diagonalization (\[eq3.39\]) has been performed, we can afford to go to a simpler notation, namely $$\{ b_1 , b_2 , \ldots , b_P \} = \{ b_{i_1} , \ldots , b_{i_P} \} \ {\rm and} \ \eta_{i_j} ({\rm green}) = \eta_j ({\rm green}) \, , \ j \leq P \, .$$ At this point, there are some very serious problems to be faced, which we list below. $$\begin{aligned} \label{eq3.40.1} &&\mbox{We do have ACCIDENTS. Dealing with them is actually the hardest and longest part} \\ &&\mbox{of the proofs in PoV-B (\cite{PoV-B}).} \nonumber\end{aligned}$$ But, for expository purposes, we will pretend in the next Stage VI that the accidents have already been dealt with. In the next Section 4 some hints will be given concerning the operation of killing all the accidents, which in real life will have to precede the $R/B$-balancing of Stage VI. 
$$\begin{aligned} \label{eq3.40.2} &&\mbox{So, assume there are no accidents, and the (\ref{eq3.38}) is really a system of exterior discs.} \\ &&\mbox{We also want them to be in cancelling position with the 1-handles of $\Delta^4$. But then,} \nonumber \\ &&\mbox{these are the $R_1 + \cdots + R_n$ and {\ibf not} the $b_1 + \cdots + b_P$; the $\Gamma (1) - \underset{1}{\overset{P}{\sum}} \, b_i$ is {\ibf not} connected.} \nonumber\end{aligned}$$ The reader may check that, in an ideal world where (\[eq3.40.1\]) would have already been dealt with and where we would also have $P=n$, we would be done, by now. But $P > n$, in real life. Now, (\[eq3.40.1\]) has to be dealt with before we come to grips with (\[eq3.40.2\]). We will show how to handle (\[eq3.40.2\]) in the next Stage VI, but this will come then with another new, very serious problem, as we shall see. In a nutshell, this will be that $$\begin{aligned} \label{eq3.40.3} &&\mbox{Once (\ref{eq3.40.2}) will have been dealt with, the blue diagonalization (\ref{eq3.39}) will no longer} \\ &&\mbox{be good enough, and another GRAND BLUE DIAGONALIZATION will be needed.} \nonumber\end{aligned}$$ This will be one of the topics of the next Section 4. STAGE VI. A CHANGE OF VIEWPOINT CONCERNING $\Delta^4$. During the change of viewpoint in question, the product structure $\Delta^3 \times I$ will get blurred too, but that is fine since by now, at this stage of the game, it has already served its purpose. In our context used so far, we had, remember $$\Delta_1^4 = \{\mbox{ambient space} \ N^4 (2 X_0^2)^{\wedge} \cup ({\rm collar})\} = \Delta^4 \cup ({\rm collar}) \, ,$$ where the first equality is a definition and the second one a diffeomorphism. 
Let us say that, up to now, our context has been $$\label{eq3.41} \Delta^4 \subset N^4 (2X_0^2) (\mbox{non-compact}) \subset {\rm int} \, \Delta_1^4 \subset \Delta_1^4 \, ,$$ with $\Gamma (1) =\{$1-skeleton of $\Delta^4 \} \subset \Gamma (\infty) = \{$1-skeleton $\Gamma (\infty) \times r$ of $X_0^2 \times r \} \subset 2\, \Gamma (\infty) = \{$1-skeleton of $2X_0^2 \}$. With this, as things stand now, we also have $\Gamma (1) \cap B = \underset{1}{\overset{P}{\sum}} \, b_i$, $\Gamma (1) \cap R = \underset{1}{\overset{n}{\sum}} \, R_i$, where $P \geq n$ and where also, since the case $P=n$ is easier, it will be assumed that $P > n$. [**The $R/B$ balancing Lemma 11.**]{} – [*Staying all the time embedded inside the ambient space $\Delta_1^4$ and also keeping $\Delta^4$ fixed, we can submit the $N^4 (2X_0^2)$ in [(\[eq3.41\])]{} to the following kind of compact changes, localized inside $X_0^2 \times r$ $$\label{eq3.42.1} \mbox{Pick up a certain well-chosen family of $1$-handles $\{ y_1 , y_2 , \ldots , y_{P-n}\} \subset \Gamma (\infty) \cap h - B \subset R-B$} \, ,$$ the dual $C$-curves of which we will denote by $C(1), C(2) , \ldots , C(P-n)$. $$\begin{aligned} \label{eq3.42.2} &&\mbox{We will perform now embedded $1$-handle slidings, dragging along the corresponding} \\ &&\mbox{$2$-handles, at the level of $N^4 (\Gamma (\infty)) \subset N^4 (2\Gamma (\infty))$. 
We will slide, in succession, each of the} \nonumber \\ &&\mbox{$y_1 , y_2 , \ldots , y_{P-n}$ over a second family of well-chosen elements $x \in h-B \subset R-B$ which are} \nonumber \\ &&\mbox{always such that, in the natural RED order of $C \cdot h$ {\rm (\ref{eq3.18})}, and hence of $C \cdot h \, (2X_0^2)$ too, we} \nonumber \\ &&\mbox{should have the following inequality, every time $y$ slides over $x$} \nonumber\end{aligned}$$ $$x < \{\mbox{the $y$ which slides}\}$$ $$\begin{aligned} \label{eq3.42.3} &&\mbox{At the end of the sliding move, we have a $\Gamma (\infty) \subset 2 \Gamma (\infty)$ which has changed (but we do} \\ &&\mbox{not bother to denote these objects differently). The point is that $\Gamma (1)$ has been replaced} \nonumber \\ &&\mbox{by a $\Gamma (3) \subset \{\mbox{new} \ \Gamma (\infty)\}$, which is {\ibf well-balanced}, in the sense that the $\Gamma (3) \cap B = \underset{1}{\overset{P}{\sum}} \, b_i$} \nonumber \\ &&\mbox{(as before) and the $\Gamma (3) \cap R = \underset{1}{\overset{n}{\sum}} \, R_i + \underset{1}{\overset{P-n}{\sum}} \, y_j$ are now two sets of the same cardinality,} \nonumber \\ &&\mbox{with $\Gamma (3) - B$, $\Gamma (3) - R$ being, both, trees.} \nonumber\end{aligned}$$ $$\label{eq3.42.4} \mbox{We also have, more globally, that $2\Gamma (\infty) - R$ and $2\Gamma(\infty)-B$ are trees.}$$* ]{} Up to a diffeomorphism which does not budge $\Delta^4 , \Delta_1^4$, the sequence (\[eq3.41\]) does not feel the effect of the balancing process. Ideally, we would be quite happy if we could argue, at this point, as follows. Decide that the 1-skeleton of $\Delta^4$ is now $\Gamma (3)$, which comes equipped with two sets of 1-handles among which we might choose one. These are the BLUE $b_1 , b_2 , \ldots , b_P$ and the RED $R_1 , R_2 , \ldots , R_n$, $R_{n+1} = y_1 , \ldots$, $R_P = y_{P-n}$. 
The $\Delta^4$ should have now a handlebody decomposition which we will call “ideal”, with 2-handles $$\label{eq3.43} \{ D^2 (\Gamma_i) \} \, , \ D^2 (C(1)) , \ldots , D^2 (C (P-n)) \, .$$ Notice that not only have the $y_1 , \ldots , y_{P-n}$ been [promoted]{} to 1-handles of $\Delta^4$, but their duals $D^2 (C(1)) , \ldots$, $D^2 (C(P-n))$ have been promoted to 2-handles of $\Delta^4$ too. Since, clearly $$\{ C(1) , \ldots , C(P-n) \} \cdot \{ y_1 , \ldots , y_{P-n}\} = {\rm id} + {\rm nilpotent} \, ,$$ formally at least we are O.K. Also, provided that the ACCIDENTS (\[eq3.33.1\]), (\[eq3.33.2\]) have been killed, Lemma 10 would then provide us with exterior, embedded 2-handles $\delta^2$, in cancelling position with the blue 1-handles. This little ideal scenario has a very serious flaw, which is the following $$\label{eq3.44} \mbox{The 2-handles $D^2 (C(1)) , \ldots , D^2 (C(P-n))$ are, generally speaking, {\ibf not} directly attached to $\Gamma (3)$.}$$ Moreover, (\[eq3.44\]) is [irreparable]{} because of the following item. Notice, to begin with, that it is not hard to concoct a RED diagonalization process which would let some internal RED 2-handles $D^2 (C_a)$ slide over lower $D^2 (C_b)$, with $b < a$ in the natural red order, so that we should achieve $$C(i) \subset \partial N^4 (\Gamma (3)) \, . \eqno (*)$$ But here comes a fact, which will be explained in Section 4 $$\begin{aligned} \label{eq3.44.1} &&\mbox{Contrary to the BLUE diagonalization which has led to (\ref{eq3.39}), the kind of RED} \\ &&\mbox{diagonalization leading to $(*)$, which was envisioned above, is {\ibf forbidden}.} \nonumber\end{aligned}$$ But before we can explain what we will do now, in order to cope with these issues, we have to be more precise about our notations concerning the cardinalities of the handles of $\Delta^4$. 
The two $n = {\rm card} (\Gamma (1) \cap R)$, $P = {\rm card} (\Gamma (1) \cap B) = {\rm card} (\Gamma (3) \cap B)$ should be unambiguously clear and we use for them the same notations in both the contexts $\Delta^3 \times I$ and $\Delta^4$ Schoenflies. The cardinality of the set $\{ D^2 (\Gamma_i)\}$ used so far is the same $n$ as above, in the case $\Delta^3 \times I$ but then also, it is some $\bar n > n$ in the Schoenflies case. So, once the ideal scenario has collapsed, here is what we will do, in the real world. Consider, for the time being, the [purely abstract promotion]{}, for $1 \leq i \leq P-n$ $$\label{eq3.45} y_i \Longrightarrow R_{n+i} \ (\mbox{1-handle of} \ \Delta^4) \, , \ {\rm and}$$ $C(i) \Longrightarrow \Gamma_{n+i}$ (case $\Delta^3 \times I)$, respectively $C(i) \Longrightarrow \Gamma_{\bar n + i}$ (case $\Delta^4$ Schoenflies); in both of these last two formulae, $\Gamma$ is the same physical curve as the $C$, but considered now as an internal attaching curve of $\Delta^4$. With this promotion, which so far is barely more than a notational device, $\Delta^4$ is endowed now, abstractly speaking, with $P$ handles of index one (a RED collection and also a BLUE one) and with the 2-handles $$\sum_1^P D^2 (\Gamma_i) \ \mbox{in the case} \ \Delta^3 \times I , \ \mbox{respectively} \ \sum_{1}^{P+(\bar n - n)} D^2 (\Gamma_i) \ \mbox{in the Schoenflies case.}$$ Keep in mind that this is only abstract, so far, in the sense that for our physical $\Delta^4 = N^4 (\Delta^2)$, as such, no bona fide handlebody decomposition on the lines above is available. On the other hand, in this abstract context, when we look at the geometric intersection matrices all the desirable RED and BLUE features continue to be satisfied, provided that we also accompany the promotion (\[eq3.45\]) by the following other related transformation, concerning now the whole of $N^4 (2\Gamma (\infty))$, after the $R/B$-balancing and promotion.
$$\label{eq3.46} \mbox{By decree, the new families $h(2X_0^2)$ and $C(2X_0^2)$ are now the very slightly reduced}$$ $$h - \sum_{1}^{P-n} y_i \, , \ C- \sum_1^{P-n} C(i) \, , \ \mbox{respectively}.$$ By the same decree we exclude from LAVA the $P-n$ copies of $B^4$ consisting of the $h_1 \cup D^2 (C(1)) , \ldots , h_{P-n} \cup D^2 (C(P-n))$. The new, slightly reduced lava, call it again $({\rm LAVA} , \delta \, {\rm LAVA})$, continues to have the product property. Moreover, via its $\delta \, {\rm LAVA}$, this new LAVA glues to $$\label{eq3.47} N^4 (2\Gamma (\infty) \ (\mbox{after $R/B$ balancing}) - h \ \mbox{(\ref{eq3.46}) (after promotion})) \supset N^4 (\Gamma (3)) \, .$$ The next lemma, which should be compared to the comment E) at the end of Stage III, is our way of meeting the difficulty (\[eq3.44\]). [**Lemma 12.**]{} – 1) *In the context of [(\[eq3.47\])]{} above, we introduce the following large smooth compact $4$-manifold $$\begin{aligned} \label{eq3.48} \bar N^4 (\Gamma (3)) &= &\{ N^4 (2\Gamma (\infty) (\mbox{after $R/B$ balancing})) - h \mbox{\rm (\ref{eq3.46})}\} \cup ({\rm LAVA}^{\wedge}) \\ &= &(N^4 (2\Gamma (\infty)(\mbox{after balancing}) \cup \sum_{1}^{\infty} D^2 (C_i) (\mbox{after promotion}))^{\wedge} \, , \nonumber\end{aligned}$$ where in the second term, the two pieces are glued along $\delta \, {\rm LAVA}$.* This $\bar N^4 (\Gamma (3))$ is a smooth compact handlebody of genus $P$, by which we mean that it is a $P \, \# \, (S^1 \times B^3)$.
2\) [*The $\bar N^4 (\Gamma (3))$ also comes equipped with a properly embedded system of $P$ $3$-balls ($=$ $1$-handle cocores), namely the $$\label{eq3.49} \sum_1^P \{\mbox{extended cocore} \, b_i \}^{\wedge} \subset \bar N^4 (\Gamma (3)) \, ,$$ and with this, the pair defined by [(\[eq3.49\])]{} is [standard]{}.*]{} 3\) [*With our abstract promotion presented above in mind, we introduce now the following quantity $$\bar P \underset{\rm def}{=} P \, (\mbox{in the case} \ \Delta^3 \times I) , \bar P \underset{\rm def}{=} P + (\bar n - n) (\mbox{in the $\Delta^4$ Schoenflies case}).$$ With this, the $2$-handles $\underset{1}{\overset{\bar P}{\sum}} \ D^2 (\Gamma_i)$ (which include now the promoted $D^2 (C(1)) , \ldots , D^2 (C(P-n))$) are, quite naturally, directly attached to $\bar N^4 (\Gamma (3))$. Also, there is a diffeomorphism $$\label{eq3.50} \Delta^4 \underset{\rm DIFF}{=} \bar N^4 (\Gamma (3)) + \sum_1^{\bar P} D^2 (\Gamma_i) \, ,$$ where, remember, $\Delta^4$ is here our $N^4 (\Delta^2)$ and, in the rest of the paper, its incarnation will be the RHS of the formula*]{} (\[eq3.50\]). In other words, $\Delta^4$ has now a smooth handlebody decomposition with $P$ 1-handles $\underset{1}{\overset{P}{\sum}} \{\mbox{extended cocore} \, b_i\}^{\wedge}$ and with 2-handles $\underset{1}{\overset{\bar P}{\sum}} \, D^2 (\Gamma_j)$ (taking the promotion here into account). Here are some comments. To begin with, the sets $(h,C)$ occurring here are the ones from after doubling, slightly diminished by the promotion. Also, the RED 1-handles $\underset{1}{\overset{P}{\sum}} R_i$ of $\Delta^4$ can and will be forgotten.
Assuming the accidents already killed, our latest transformation of (\[eq3.1\]), after the (\[eq3.9\]) and (\[eq3.29\]), is now the following, with the same $\Delta_1^4$ as in (\[eq3.29\]) $$\label{eq3.51} \bar N^4 (\Gamma (3)) \cup \sum_1^{\bar P} D^2 (\Gamma_i) \subset \Delta_1^4 \overset{\mathcal J}{\longleftarrow} \sum_1^P \delta_i^2 \, ,$$ where ${\mathcal J}$ is an embedding into $$\Delta_1^4 - {\rm int} (\bar N^4 (\Gamma (3)) \cup \sum_1^{\bar P} D^2 (\Gamma_i)) \, ,$$ and where, isotopically speaking $$\bar N^4 (\Gamma (3)) + \sum_1^{\bar P} D^2 (\Gamma_i) = N^2 (2X_0^2)^{\wedge} = \{\mbox{closure of} \ N (2X_0^2) \subset \Delta_1^4 \} \, .$$ Moreover, as a consequence of the BLUE diagonalization (\[eq3.39\]), for the $\eta_i ({\rm green}) = \partial \delta_i^2 \subset \partial (\bar N^4 (\Gamma (3)) \cup \underset{1}{\overset{\bar P}{\sum}} \, D^2 (\Gamma_i))$ we have now $$\label{eq3.52} \eta_i ({\rm green}) \cdot b_j = \delta_{ij} \ {\rm if} \ i,j \leq P \ {\rm and} \ \sum_1^P \eta_i ({\rm green}) \cdot \left( B(2X_0^2) - \sum_1^P b_i \right) = 0 \, .$$ But the new problem which has been created now is that the 1-handles of $\bar N^4 (\Gamma (3)) \cup \underset{1}{\overset{\bar P}{\sum}} \, D^2 (\Gamma_i)$ are not exactly the BLUE $\underset{1}{\overset{P}{\sum}} \, b_i$, but the more exotic $\underset{1}{\overset{P}{\sum}} \, \{\mbox{extended cocore} \ b_i\}^{\wedge}$. This is the difficulty mentioned in (\[eq3.40.3\]). What we find now is exactly the following $$\begin{aligned} \label{eq3.53} &&\eta_i ({\rm green}) \cdot \{\mbox{extended cocore} \, b_j\}^{\wedge} = \delta_{ij} + \{\mbox{an additional, call it off-diagonal, term coming} \\ &&\mbox{from those contacts} \ \eta_i ({\rm green}) \cdot h_k \ {\rm with} \ h_k \in h-B \ {\rm and} \ h_k \subset \{\mbox{extended cocore} \ b_j \}\} \, . \nonumber\end{aligned}$$ There are finitely many $h_k$’s involved in (\[eq3.53\]), all living inside $X_0^2 \times r \subset 2X_0^2$.
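Let us record, for orientation, the elementary algebra which lies behind statements like ${\rm id} + {\rm nilpotent}$ and behind the various diagonalizations; this is completely standard handle-theory bookkeeping, and the little sketch below assumes, for simplicity, that the off-diagonal terms are strictly triangular in the given order. If $$\eta_i \cdot b_j = \delta_{ij} + n_{ij} \, , \quad \mbox{with} \ n_{ij} = 0 \ \mbox{unless} \ i < j \, ,$$ then the matrix $I + N$, with $N = (n_{ij})$, is invertible over $\mathbb{Z}$, since $$(I+N)^{-1} = I - N + N^2 - \cdots + (-1)^k N^k \, , \quad \mbox{where} \ N^{k+1} = 0 \, ,$$ and the finitely many elementary row operations realizing this inverse correspond, formally, to slides of the $\eta$’s over each other. At the purely algebraic level, then, a condition $\eta \cdot B = {\rm id} + {\rm nil}$ always suffices for diagonalization. The whole geometric difficulty, here and in what follows, is whether the corresponding slides can actually be performed by honest embedded isotopies, without creating forbidden contacts; it is exactly at this level that the additional term in (\[eq3.53\]) is an obstruction.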
At the point which we have reached now, there are still two main obstacles between us and what we want to achieve, namely 1\) We still have to get rid of the accidents. 2\) After that has been done, we still have to achieve the GRAND BLUE DIAGONALIZATION, by which we mean the following $$\eta_i ({\rm green}) \cdot \{\mbox{extended cocore} \ b_j \}^{\wedge} = \delta_{ij} \, ,$$ which, of course, is equivalent to $$\eta_i ({\rm green}) \cdot \{\mbox{extended cocore} \ b_j \} = \delta_{ij} \, .$$ The next section is entirely devoted to these two pending issues. But since, in real life, this story is considerably more technical than what has been going on so far, the exposition will be even more sketchy and impressionistic. Some additional technicalities {#sec4} ============================== We consider now the stage when the little blue diagonalization (\[eq3.52\]) has been already achieved, but all the accidents (\[eq3.33.1\]), (\[eq3.33.2\]) of $$\label{eq4.1} \sum_1^P \delta_i^2 \overset{\mathcal J}{\longrightarrow} \Delta_1^4 \, , \quad \partial \delta_i^2 = \eta_i ({\rm green})$$ are still with us. Normally, the double points $x$ (\[eq3.33.1\]) and the transversal contacts $z$ (\[eq3.33.2\]) come [yoked]{} together, and here is a toy-model for a typical system of yoked accidents. In some coordinate neighbourhood $U \subset \Delta_1^4$, the set $2X_0^2 \cap U$ consists of two transversal planes $Q_1 , Q_2$ with $Q_1 \cap Q_2 = P$, while ${\rm Im} \, {\mathcal J} \cap U$ consists of two smooth branches $A_1 , A_2$, parallel copies of $Q_1 , Q_2$ respectively, coming with $$\label{eq4.2} A_1 \cap Q_2 = z_2 \, , \ A_2 \cap Q_1 = z_1 \, , \ A_1 \cap A_2 = x \, ;$$ the notations of (\[eq3.33.1\]), (\[eq3.33.2\]) are being used here. Let us start with the following remark.
$$\label{eq4.3} \mbox{Assume that $z_1$ possesses an $\{$extended cocore $z_1 \} \subset 2X_0^2$.}$$ Then, we can push $A_2$ over the compactified $\{$extended cocore $z_1 \}^{\wedge}$, as suggested, with dotted lines, in Figure 2, and get rid of $z_1$. This process does not change $\eta ({\rm green})$ at all, and we will certainly make use of it. Also, depending on conditions, $x$ might be destroyed together with $z_1$ too. But, at this point, it is not hard to see, and the reader should certainly try to figure this out alone, that even if both $\{$extended cocore $z_1 \}$ and $\{$extended cocore $z_2 \}$ are present (which will be the case, most of the time), we still [cannot]{} use the mechanism (\[eq4.3\]) in order to completely destroy the yoked system $(z_1 , z_2 , x)$. What one should retain is that even with the extended cocore mechanism available at $z_1$, something else is still necessary for dealing with $z_2$. This finishes the discussion of the toy-model, and we go back now to the singular 2-dimensional polyhedron (\[eq3.26.2\]) with its desingularization (\[eq3.26\]) (see here (\[eq3.27\]) and (\[eq3.28\]) too). Notice that the quantity $\beta$ in $(X_0^2 \times r) \cup (\Gamma (\infty) \times [r,\beta])$ is close enough to $r$ so as not to see the deletions $2X^2 - 2X_0^2$. The 1-skeleton of $(X_0^2 \times r) \cup (\Gamma (\infty) \times [r,\beta])$ is the following object $$\label{eq4.4} \Gamma (\infty) = \Gamma (\infty) \times r \, , \ \mbox{with a little arc} \ P \times [r,\beta] \ \mbox{sticking out of each vertex} \ P \in \Gamma (\infty) \, .$$ With this, we will now review the construction of $$\label{eq4.5} \Theta^4 ((X_0^2 \times r) \cup (\Gamma (\infty) \times [r,\beta]) , \varphi) \subset \Theta^4 (2X_0^2 , \varphi) = N^4 (2X_0^2) \, ,$$ which, as far as accidents go, is the most important part, since it houses $Fd^2$.
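For the reader who wants to see the yoked system $(z_1 , z_2 , x)$ of (\[eq4.2\]) completely explicitly, here is a minimal coordinate model; the specific coordinates below are, of course, purely illustrative. Take $U = \mathbb{R}^4$ with coordinates $(u_1 , u_2 , v_1 , v_2)$ and $$Q_1 = \{v_1 = v_2 = 0\} \, , \quad Q_2 = \{u_1 = u_2 = 0\} \, , \quad Q_1 \cap Q_2 = P = \{0\} \, .$$ For small constants $c \ne 0 \ne c'$, the parallel copies $$A_1 = \{v_1 = c \, , \ v_2 = 0\} \, , \quad A_2 = \{u_1 = c' \, , \ u_2 = 0\}$$ meet exactly as prescribed in (\[eq4.2\]), namely $$A_1 \cap Q_2 = (0,0,c,0) = z_2 \, , \quad A_2 \cap Q_1 = (c',0,0,0) = z_1 \, , \quad A_1 \cap A_2 = (c',0,c,0) = x \, ,$$ each of these being a single transversal intersection point. One sees here, concretely, why the $x$ is yoked to the $z$’s: any push of $A_2$ meant to kill $z_1$, like the one of (\[eq4.3\]), drags along the very branch which is responsible for $x$.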
We will refer now to the procedures explained in [@Ga] (see also [@PoI], [@PoII]) and use the singular 3-dimensional version, rather than the 2-dimensional one. We may assume that the restriction of the map $\pi \circ f$ (\[eq3.26.2\]) to the 1-skeleton (\[eq4.4\]) is an embedding. Its regular neighbourhood is an infinite solid torus $N^3 (\Gamma (\infty))$, coming with some additional structures. We will introduce the notation $$\label{eq4.6} \Sigma_{\infty}^2 = \partial N^3 (\Gamma (\infty)) \, ,$$ and this infinite open surface $\Sigma_{\infty}^2$ comes with a PROPERLY embedded system of small disks, which we call generically $\beta$. This is the trace of the $\underset{P}{\sum} \, P \times [r,\beta]$. Next, $\Sigma_{\infty}^2$ comes equipped with an infinite [link projection]{} $$\label{eq4.7} \sum_{1}^{\bar P} \Gamma_i + \sum_1^{\infty} C_j + \sum_1^{\infty} \gamma_k^0 + \sum_1^{\infty} [c_{\ell} (b \, {\rm or} \, r)] \overset{j}{\longrightarrow} \Sigma_{\infty}^2 \, ,$$ with the following specifications. Each $[c_{\ell} (b \, {\rm or} \, r)]$ is a piece of the corresponding $c_{\ell} (b \, {\rm or} \, r)$, essentially $$\mbox{``} c_{\ell} (b \, {\rm or} \, r) \cap [(X_0^2 \times r) \cup (\Gamma (\infty) \times [r,\beta])]\mbox{''.}$$ Concretely, $j [c_{\ell} (b \, {\rm or} \, r)]$ is an arc hooked at two spots $\beta$. The $j$ is a generic immersion (in particular it has no triple points), and it injects on each connected component of the L.H.S. of (\[eq4.7\]). Never mind here that the interiors of $D^2 (\gamma_k^0)$, $D^2 (c_{\ell} (b))$ have been deleted; their very useful surviving collars and boundary pieces are still with us. We consider the double points $s \in jM^2 (j) \subset \Sigma_{\infty}^2$, and the main facts are here the following $$\begin{aligned} \label{eq4.8} &&\mbox{There is a canonical bijection} \\ &&\mbox{$\{$the undrawable singularities of the singular $2$-dimensional polyhedron (\ref{eq3.26.2})$\} \approx jM^2 (j)$} \, .
\nonumber\end{aligned}$$ $$\label{eq4.9} \mbox{Each $s \in jM^2(j)$ lives, inside $\Sigma_{\infty}^2 = \partial N^3 (\Gamma (\infty))$, close to some {\it canonically} attached vertex $P \in \Gamma (\infty)$.}$$ We can consider, inside $X^3$, source of $f$ (\[eq3.26.2\]), a disjoint system of 3-balls $B^3 (P)$ each centered at $f(P)$, with radii much larger than the width of $N^3 (\Gamma (\infty))$ and, with this, all the interesting part of the link projection (\[eq4.7\]) lives inside $$\sum_P B^3 (P) \cap \Sigma_{\infty}^2 \, .$$ Now, what we know from first principles is that the desingularization $\varphi$ (\[eq3.26\]) of (\[eq3.26.2\]), gives a recipe for undoing the double points $s \in jM^2 (j)$. At each $s$, the ${\rm Im} \, j$ has two branches and, keeping in mind (\[eq4.8\]), we pull UP, towards the observer, the branch coming with $\varphi = S$ and then, according to this, we push DOWN the one with $\varphi = N$. Our set-up in (\[eq3.28\]) makes that $$\label{eq4.9bis} \varphi [c_{\ell} (b \, {\rm or} \, r)] = S$$ so that, whenever this makes sense, “UP” looks towards $b$ and “DOWN” towards $r$, with $b,r $ standing for blue and red, respectively, too. Keep in mind that all these are mere useful conventions. We go now 4-dimensional and for this, we start by changing $N^3 (\Gamma (\infty))$ into $N^4 (\Gamma (\infty)) = N^3 (\Gamma (\infty)) \times [0,1]$, with $\partial N^4 (\Gamma (\infty))$ equal to the double of $N^3 (\Gamma (\infty))$.
Very explicitly, we have now a [splitting]{} $$\label{eq4.10} \partial N^4 (\Gamma (\infty)) = \partial^- N^4 (\Gamma (\infty)) \cup \partial^+ N^4 (\Gamma (\infty)), \mbox{with} \ \partial^- N^4 (\Gamma (\infty)) \cap \partial^+ N^4 (\Gamma (\infty)) = \Sigma_{\infty}^2 \, .$$ We make precise the distinction between $\partial^- N^4$ and $\partial^+ N^4$ by specifying that the $\beta$’s are now 3-balls living, together with the now embedded system $\underset{\ell}{\sum} \, [c_{\ell} (b \, {\rm or} \, r)]$ hooked at them, entirely inside ${\rm int} \, \partial^+ N^4 (\Gamma (\infty))$, while, for the time being at least, the rest of the link diagram lives entirely inside ${\rm int} \, \partial^- N^4 (\Gamma (\infty))$. With appropriate framings, this is enough for reconstructing $$\Theta^4 ((X_0^2 \times r) \cup (\Gamma (\infty) \times [r,\beta]) , \varphi) \underset{\rm DIFF}{=} N^4 (X_0^2) \, ,$$ but we can do better than that too. Starting from the 3-balls $\beta$ inside $\partial^+ N^4 (\Gamma (\infty))$ we get back the whole $N^4 (2\Gamma (\infty))$, which comes now with a splitting by [*the same*]{} surface $\Sigma_{\infty}^2$, $$\label{eq4.11} \partial N^4 (2\Gamma (\infty)) = \partial^- N^4 (2\Gamma (\infty)) \cup \partial^+ N^4 (2\Gamma (\infty)) \, ,$$ with $\partial^- N^4 (2\Gamma (\infty)) = \partial^- N^4 (\Gamma (\infty))$, but with a much larger $\partial^+ N^4$.
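Notice, parenthetically, that the splitting (\[eq4.10\]) is nothing but the standard decomposition of the boundary of a product; the explicit choice below is only one of several equivalent ones. For any $3$-manifold $N^3$ with boundary we have, after the corners are smoothed, $$\partial (N^3 \times [0,1]) = (N^3 \times \{0\}) \cup (\partial N^3 \times [0,1]) \cup (N^3 \times \{1\}) \, ,$$ which is a copy of the double of $N^3$. In our case $N^3 = N^3 (\Gamma (\infty))$ and we may take, say, $$\partial^- N^4 (\Gamma (\infty)) = N^3 (\Gamma (\infty)) \times \{0\} \, , \quad \partial^+ N^4 (\Gamma (\infty)) = (\partial N^3 (\Gamma (\infty)) \times [0,1]) \cup (N^3 (\Gamma (\infty)) \times \{1\}) \, ,$$ meeting along the copy $\partial N^3 (\Gamma (\infty)) \times \{0\}$ of $\Sigma_{\infty}^2$, like in (\[eq4.10\]).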
We have now a grand link, with the $\eta ({\rm green})$ thrown in too, for further purposes, coming with the following [normal confinement conditions]{} $$\label{eq4.12} \sum_{\ell} c_{\ell} (r \, {\rm or} \, b) + \sum_{\ell} \eta_{\ell} + \sum_1^P \eta_i ({\rm green}) \subset {\rm int} \, \partial^+ N^4 (2\Gamma (\infty)) \, , \ \sum_{i=1}^{\bar P} \Gamma_i + \sum_{j=1}^{\infty} C_j + \sum_{k=1}^{\infty} \gamma_k^0 \subset {\rm int} \, \partial^- N^4 (2\Gamma (\infty)) \, .$$ Here, the $c_{\ell} (r)$, $\Gamma_i$, $C_j$, $\eta_{\ell}$ are attaching zones of internal 2-handles of $N^4 (2X_0^2)$, they have canonical framings, and via all this we can reconstruct $N^4 (2X_0^2)$. The $\eta_i ({\rm green})$ themselves bound exterior discs $\delta_i^2$, and we will not focus here and now on the close connection which $\eta ({\rm green})$ and/or $\delta^2$ may have, or may have had with $c(b)$ (and even with $\gamma^0$). Now, in real life, we will need a certain finite change of the normal confinement conditions (\[eq4.12\]). There will be a finite system of curves, called generically $\overline{C_f}$, with $$\sum \overline{C_f} \subset \sum_1^{\bar P} \Gamma_i + \sum_1^{\infty} C_j \, ,$$ which are UP at all their corners $P$, and which will be moved isotopically from $\partial^- N^4 (2\Gamma (\infty))$ to $\partial^+ N^4 (2\Gamma (\infty))$. The reasons for this change $$\label{eq4.13} (\bar C \subset \partial^- N^4 (2\Gamma (\infty))) \Rightarrow (\bar C \subset \partial^+ N^4 (2\Gamma (\infty)))$$ will soon become clear. But the point here is that (\[eq4.13\]) is actually a [forced]{} transformation: the $\overline{C_f}$ may not a priori be UP at all their $P$’s and, anyway, some [*global*]{} measures will be necessary in order to be able to fit (\[eq4.13\]) into our whole machinery. In particular, an infinite, locally fine subdivision of $X^2$, [before doubling]{}, coming with a [complete redefinition]{} of the BLUE labelling, will be needed.
What will make this kind of thing possible is the basic commutativity property specific to 2-dimensional collapsing: after any arbitrary collapse, a collapsible 2-dimensional complex stays collapsible. IMPORTANT REMARK. As one has already seen, there are quite a number of successive steps in our approach. It is of paramount importance to keep them in correct order, like for instance performing old $\Rightarrow$ new before (\[eq4.13\]), and (\[eq4.13\]) itself before doubling$\ldots$ The (\[eq4.13\]) has, of course, to stay compatible with the rest of our construction. Once it is performed, it leads to the real life, forced confinement conditions, which will supersede (\[eq4.12\]), from now on. They are the following $$\label{eq4.14} \sum_{\ell} c_{\ell} (b \, {\rm or} \, r) + \sum_j \eta_j + \sum_f \overline{C_f} \subset \partial^+ N^4 (2\Gamma (\infty)) \supset \sum_1^P \eta_{\alpha} ({\rm green}) \, ,$$ $$\left( \sum_i \Gamma_i + \sum_j C_j - \sum_f \overline{C_f} \right) + \sum \gamma_k^0 \subset \partial^- N^4 (2\Gamma (\infty)) \, .$$ We will call this from now on the LINK, i.e. the set of internal attaching curves occurring on the LHS of the formula above. There will never be any other violations of the final confinement above; the splitting is sacrosanct, and nothing is ever allowed to cross $\Sigma_{\infty}^2$. With this we can open a small parenthesis, going back to (\[eq3.44.1\]) which can be explained now. Imagine we would perform a RED diagonalization which would lead to a system of curves $$\Gamma_{\bar n + 1} = C(1) \, , \ \Gamma_{\bar n + 2} = C(2) , \ldots , \Gamma_{P+(\bar n - n)} = C(P-n)$$ which would be attached directly to $\Gamma (3)$. This hypothetical RED diagonalization would have to use both curves $\bar C$ and $(C - \bar C)$, involving thereby serious trespassing through $\Sigma_{\infty}^2$. This contradicts the sacrosanct principles and, hence, it is [forbidden]{}. This proves (\[eq3.44.1\]).
We go back now to the map $$\sum_1^P \delta_i^2 \overset{F}{\longrightarrow} 2X_0^2 \subset \Delta_1^4$$ from Lemma 9 (see (\[eq3.32.1\])). This $F$ admits a not everywhere well-defined lift to $\partial N^4 (2X_0^2)$, denoted by the same letter, which we now have to look into a bit more closely. To begin with, there is a piece which we call ${\rm body} \, \delta_i^2 \subset \delta_i^2$, and which via the lift of $F$ to $\partial N^4 (2X_0^2)$ goes into $\partial N^4 (2\Gamma (\infty))$. Of course, $\eta_i ({\rm green}) \subset \partial \, {\rm body} \, \delta_i^2$ and also, $\delta^2 - {\rm body} \, \delta^2$ gets nicely embedded by $F$ into the various lateral surfaces of the 2-handles of $N^4 (2X_0^2)$. We have $$\begin{aligned} \label{eq4.15} &&\partial \, {\rm body} \, \delta_i^2 = \eta_i ({\rm green}) + \sum \{\mbox{the various attaching zones,} \\ &&\mbox{call them generically $C_i$, which are such that $F \delta_i^2$ uses $D^2 (C_i)\}$.} \nonumber\end{aligned}$$ The interesting part of $F$ is a [*not everywhere well-defined*]{} immersion, denoted again by the same letter $$\label{eq4.16} \sum {\rm body} \, \delta_i^2 \overset{F}{\longrightarrow} \partial N^4 (2\Gamma (\infty)) \, .$$ Notice that the $C_i$ in (\[eq4.15\]) is part of our LINK, coming with $C_i \subset \partial N^4 (2\Gamma (\infty))$. Careful here, this “$C_i$” is just a generic notation for a curve which may be an honest $C \subset \partial^- N^4 (2\Gamma (\infty))$, but which might well be, also, a $\Gamma$ or a $\bar C \subset \partial^+ N^4 (2\Gamma (\infty))$. With all this, the spots where $F$ (\[eq4.16\]) is [not]{} really well-defined correspond to the transversal contacts $F ({\rm body} \, \delta_i^2) \cap C_j$, which we will call [punctures]{}.
With all these things, here is a typical accident situation, and the description below is supposed to supersede the toy-model considered in connection with (\[eq4.2\]), in the beginning of this section, $$\label{eq4.17} \mbox{Let $L=F ({\rm body} \, \delta_i^2) \cap F ({\rm body} \, \delta_j^2)$ be a {\ibf clasp}, bounded by two punctures}$$ $$p_1 \in F ({\rm body} \, \delta_i^2) \cap C_j \quad {\rm and} \quad p_2 \in F ({\rm body} \, \delta_j^2) \cap C_i \, .$$ In this situation, we also have, once one goes to dimension four, two transversal contacts $$z_1 \in {\mathcal J} \delta_i^2 \cap (D^2 (C_j) \subset 2X_0^2) \, , \ z_2 \in {\mathcal J} \delta_j^2 \cap (D^2 (C_i) \subset 2X_0^2) \, ,$$ living over $p_1 , p_2$ respectively, as well as a double point $$x \in {\mathcal J} \delta_i^2 \cap {\mathcal J} \delta_j^2 \, .$$ Figure 2, which lives in a 2-dimensional section through $\Delta_1^4$, may help one understand the yoked system of accidents $(z_1 , z_2 , x)$ from (\[eq4.17\]). Unlike what we had in the context of the toy-model (\[eq4.2\]), the present $p_1 , p_2$ now live at two distinct endpoints of an edge of $2\Gamma (\infty)$, which we call again $p_1 , p_2$. The $F$ (\[eq4.16\]) is a generic immersion with $$M^3 (F) = \emptyset \ {\rm and} \ FM^2 (F) = \{{\rm clasps} \} \cup \{{\rm ribbons}\}$$ which, generally speaking, may create a dense web which is highly connected.
We will now be a bit more specific and discuss at some length a typical harder case of (\[eq4.17\]) where (after a possible permutation of $(1,2)$) we have two pieces $B_i^2 \subset \delta_i^2$, $d_j^2 \subset \delta_j^2$ such that, in the context of (\[eq4.17\]) we should have $$\label{eq4.18} F ({\rm body} \, \delta_i^2) \mid L \subset F ({\rm body} \, B_i^2) \, , \ F ({\rm body} \, \delta_j^2) \mid L \subset F \, d_j^2 \, .$$ We know, already, that the transversal contacts ${\mathcal J} \delta^2 \cap \Delta^2 = {\mathcal J} \delta^2 \cap D^2 (\Gamma)$ can only come from the pieces $d^2 \subset \delta^2$ (and moreover this only in the Schoenflies context), which means that with our present specifications, the $\{$extended cocore $z_1\}$ exists always and for sure. We will use it, like in (\[eq4.3\]). For the sake of the present exposition, let us also assume that $x$ is killed together with $z_1$, leaving us to deal with $z_2$, afterwards, i.e. now. The point is that, in the context (\[eq4.18\]) we will also have, by construction, $$\label{eq4.19} p_2 \in C_i = \bar C_i \subset \partial^+ N^4 (2\Gamma (\infty)) \, .$$ $$\includegraphics[width=8cm]{POfig2.eps}$$ [**Remarks.**]{} A) The reason for (\[eq4.13\]) was, in retrospect, exactly, to have this (\[eq4.19\]). B\) The $d^2$’s only concern $(X_0^2 \times r) \cup (\Gamma (\infty) \times [r,\beta])$, which is the reason why this piece, on which we have focused in (\[eq3.26.2\]), is more complicated to deal with than $(\Gamma (\infty) \times [\beta , b]) \cup X_b^2$. Here is the general idea of how one deals with the $z_2$, which lives over $p_2$. We will need an arc $\lambda \subset F \delta_j^2$, joining $p_2$ to some point $q_2 \in \eta_j ({\rm green})$. Eventually, we will want $\lambda$ to live inside $\partial N^4 (2\Gamma (\infty))$, in fact inside $\partial^+ N^4 (2\Gamma (\infty))$, but let us choose to ignore these issues right now, for the purpose of the exposition.
The general idea is to use a sliding move of $\eta_j ({\rm green})$ along $\lambda$, until it gets on the other side of $p_2 \in \overline{C_i}$, dragging ${\mathcal J} \delta_j^2$ with it in the process, and thereby destroy the contact $z_2$. In a more precise language, what we mean here is this. Start with the mapping cylinder $${\rm Map} \, ({\mathcal J} \delta_j^2 \approx \delta_j^2 \overset{F}{\longrightarrow} F \delta_j^2 \subset 2X_0^2) \subset \Delta_1^4 \, ,$$ which is, topologically speaking, essentially $F \delta_j^2 \times [0,\varepsilon]$; then consider a very thin neighbourhood $\lambda \subset U \subset F \delta_j^2$, biting a small arc centered at $q_2$ from $\eta_j ({\rm green})$. Finally, delete the piece of the mapping cylinder living over $U$; this changes $({\mathcal J} \delta_j^2 , F \delta_j^2 , \eta_j ({\rm green}))$ so that $z_2$ disappears. With this general idea in mind, we go back now to the arc $\lambda$. $$\begin{aligned} \label{eq4.20} &&\mbox{There will be two successive pieces of $\lambda$, first a {\ibf green arc} $\lambda_1 \subset F d_j^2$, connecting $p_2$} \\ &&\mbox{to some $r_2 \in \gamma_j^0$ and next, a {\ibf dual arc} $\lambda_2 \subset F (\delta_j^2 - d_j^2)$ connecting $r_2$ to $q_2$.} \nonumber \\ &&\mbox{So $\lambda$ takes the form of a composition of paths $\lambda = \lambda_1 \cdot \lambda_2$.} \nonumber\end{aligned}$$ By adding some extra folds to the “immersion with folds” $F$, we can arrange this set-up in (\[eq4.20\]), so that $$\lambda_1 \subset Fd^2 \subset X_0^2 \times r \, , \ \lambda_2 \subset F (\delta^2 - d^2) \subset \Gamma (\infty) \times [r,b] \cup X_b^2 \, .$$ The two endpoints $p_2 , q_2$ of $\lambda$ certainly live inside $\partial^+ N^4 (2\Gamma (\infty))$ but, a priori, we may well find that, unless we do something special about it, we have $$\label{eq4.21} \lambda \cap \{ \partial^- N^4 (2\Gamma (\infty)) \cup [\mbox{lateral surfaces of the $2$-handles}]\} \ne \emptyset \, .$$ We will come back to
this unpleasant problem (\[eq4.21\]) later on. One should remember, at this point, that ${\mathcal J} d^2$, $Fd^2$ were constructed using the 3-dimensional RED collapsing flow, which was still with us up to the $X^2 ({\rm new})$ from (\[eq3.18\]), but which we have lost by doubling. Now, although this 3-flow has, physically speaking, disappeared, its surviving trace on $Fd_j^2$ will be used in order to construct the green arc $\lambda_1$. Also, dually so to say, the 2-dimensional BLUE flow is used for constructing $\lambda_2$. From the beginning, the 2-dimensional and the 3-dimensional RED collapsing flows were supposed to be compatible (and we do not explain that now in more detail, the word should suffice here), by doubling we have gained (\[eq3.22\]), and then, finally the extended cocores use the 2-dimensional RED flow. The net result of all these facts put together is the following basic item $$\label{eq4.22} \{\mbox{extended cocore} \, z_1\}^{\wedge} \cap \lambda = \emptyset \, .$$ This means that our two procedures, via which we want to deal with the two ends of the clasp $L$, do not clash with each other. In the same vein, let us notice that, with the green arc $\lambda_1$ guided by the RED flow, and with the dual arc $\lambda_2$ likewise guided by the BLUE flow, if we had not made sure of (\[eq3.22\]), via the doubling $X^2 \Rightarrow 2X^2$, then in lieu of the normal $\lambda_1 \cap \lambda_2 = \{ r_2 \}$, we would have found, also, plenty of transversal intersections $\lambda_1 \cap \lambda_2$, with disastrous results. As a more general comment, the doubling process (and actually the whole sequence (\[eq3.19\])) seems to be an essential ingredient for dealing with the accidents. A lot of various hot issues concerning the accidents have hardly been mentioned so far, and the only thing we can do now is to list at least some of them.
$$\label{eq4.23.1} \mbox{We certainly want to get rid of (\ref{eq4.21}), with which we cannot live, and achieve}$$ $$\lambda \subset \partial^+ N^4 (2\Gamma (\infty)) \, ,$$ instead. Here there will be two distinct procedures, one for $\lambda_1$ and another one for $\lambda_2$. For $\lambda_2$, the only issue is to avoid the lateral surfaces of 2-handles. This is an easier issue which can be dealt with by some appropriate subdivisions performed, this time, after doubling, at the level of $X_b^2$ above. \[Any pre-doubling subdivision gets automatically “doubled” too.\] The issue of moving $\lambda_1$ into $\partial^+ N^4 (2\Gamma (\infty))$ is considerably harder and requires some acrobatics which we do not explain here. $$\begin{aligned} \label{eq4.23.2} &&\mbox{One of the offshoots of the compatibility between the RED 2-dimensional and 3-dimensional} \\ &&\mbox{flows, will be that $\lambda_1 \cap h = \emptyset$. But we will have contacts $\lambda_1 \cap B \ne \emptyset$. These may threaten} \nonumber \\ &&\mbox{the highly sacrosanct condition $\eta \cdot B = {\rm id} + {\rm nil}$, and so they are dangerous. They hence} \nonumber \\ &&\mbox{need a treatment, which will not be explained here.} \nonumber\end{aligned}$$ After all these things, there is no special issue concerning $\lambda_2 \cap (R \cup B)$. $$\begin{aligned} \label{eq4.23.3} &&\mbox{Then, there is also a {\ibf ribbon} analogue of the {\ibf clasp}-accident (\ref{eq4.17}) and this is certainly} \\ &&\mbox{not a trivial thing, contrary to what one may think.} \nonumber\end{aligned}$$ The correct viewpoint here is to consider the following dense highly connected system (see (\[eq4.16\])) $$FM^2 (F) = \{{\rm clasps}\} \cup \{{\rm ribbons}\} \subset \bigcup_i F ({\rm body} \, \delta_i^2)$$ which, among other things, raises the hot issue of the unavoidable contacts $$\{\mbox{green arcs}\} \cap \{\mbox{clasps and {\ibf ribbons}}\} \ne \emptyset \, ,$$ into which we will not go here.
It so happens that the big complications of the accidents are all concentrated along $(X_0^2 \times r) \cup (\Gamma (\infty) \times [r,\beta])$. We give here a complete description of how accidents ever reach into the region $(\Gamma (\infty) \times [\beta , b]) \cup X_b^2$. This is the following precise local model. $$\begin{aligned} \label{eq4.24} &&\mbox{On the same lines as in (\ref{eq4.17}), we have a clasp $L$, going now along some edge} \\ &&\mbox{$P \times [r,b]$ and involving two $FB^2$'s.} \nonumber\end{aligned}$$ We have an $FB_i^2$ and an $FB_j^2$, with $z_1$ localized at $P \times r$ and $z_2$ localized at $P \times b$. We treat $(z_1 , x)$ as a single block, just like we have done it for (\[eq4.17\]), except that this is now in earnest, not just an expository pretence. For $z_2$ we use a dual arc $\lambda_2$ confined inside $X_b^2$, without any green arc $\lambda_1$ being necessary here. One can set up things so that there are no $x \in {\mathcal J} M^2 ({\mathcal J})$ localized at $X_b^2$, and all this is more like a simple toy-model of the more difficult case discussed earlier. [**Remark.**]{} A) It would seem, a priori, that when we are dealing with something like (\[eq4.17\]), we are free to treat $x$ together with $z_1$ [or]{} with $z_2$. This is not quite so in real life. B\) In terms of (\[eq4.12\]), as it stands (and the present discussion is insensitive to the change (\[eq4.13\])), we consider $({\rm LINK}) \cap \partial^- N^4 (2\Gamma (\infty))$ and its corresponding part of the link projection and link diagram, the only ones which will be discussed now.
We know, also, from (\[eq4.9\]) that $$\{\mbox{link diagram}\} = \sum_P \{\mbox{link diagram}\} \mid P \, .$$ With all this comes now another sacrosanct principle, by which our constructions always have to abide, namely the following $$\begin{aligned} \label{eq4.25} &&\mbox{For any individual vertex $P \in \Gamma (\infty)$, inside the corresponding $\{\mbox{link diagram}\} \mid P$, there is} \\ &&\mbox{never an individual line which has both crossings where it is UP and crossings where it is DOWN.} \nonumber\end{aligned}$$ C\) (A short discussion of (\[eq4.25\])). So, with (\[eq4.25\]), any individual line in $\{\mbox{link diagram}\} \mid P$ carries an unmistakable label UP, DOWN, or neutral. Before any (\[eq3.19\]) is in effect, here is how this could (and actually will) be implemented, at the level of Stage I, in the previous section. Remembering that $X^2$ is the 2-skeleton of (some cell-decomposition of) $X^4 = X^3 \times R$ and that $X^3$ itself comes with a submersion into $R^3$ (for which, in the context $\Delta^3 \times I$, we have to invoke Smale-Hirsch), we may always assume that, locally at least, the cell-decomposition of $X^4$ is of the form $$\label{eq4.26} \{\mbox{a cubically-crystalline decomposition of} \ X^3 \} \times \{\mbox{any subdivision of} \ R\} \, .$$ A lot of combinatorial work is required in order to have both $\{$the desirable BLUE and RED features$\}$ AND (\[eq4.26\]) lumped together inside a single cell-decomposition. But the point here is that with a cell-decomposition like (\[eq4.26\]), it is not hard to see that (\[eq4.25\]) is more or less automatically fulfilled. Now, all this was [*before*]{} we go to the move (\[eq3.18\]) in Stage IV (and our (\[eq4.25\]), which has concerned $[({\rm LINK})$ (\[eq4.12\])$] \cap \partial^- N^4 (2\Gamma (\infty))$, is insensitive to whatever may happen in Stage IV strictly [*after*]{} the transformation $X^2 ({\rm old}) \Rightarrow X^2 ({\rm new})$).
Now, when we go to the real-life situation, this passage $X^2 ({\rm old}) \Rightarrow X^2 ({\rm new})$ turns out to be much more complex than what formula (\[eq3.18\]), as such, may suggest, particularly because [*we have to abide by*]{} (\[eq4.25\]). We actually have to use two distinct procedures, once we look into the seams of (\[eq3.18\]): one for $\Delta^3 \times I$ and then another one for $\Delta^4$ Schoenflies. The difference comes, again, from the existence of the compact product structure in the first of the two cases. This is about as much as we will say here concerning the implementation of (\[eq4.25\]). We will rather say a few words now about what (\[eq4.25\]) brings to us. When we consider any $B_i^2$ (\[eq3.30\]) and we also focus on some $P \in \Gamma (\infty) \times r$ which $B_i^2$ may touch, then $B_i^2 \mid P$ is completely identified by one of the arcs $$A \subset \{\mbox{link diagram}\} \mid P \, ,$$ the connection being that $B_i^2 \mid P = \{$a little triangle spanned by $A$ and the vertex $P \times \beta \}$. When we deal with the accidents, the various $B_i^2 \mid P$ will be (most of the time) dealt with as independent units and, with (\[eq4.25\]) being satisfied (and lumping, for simplicity, neutral with UP, let us say), each $B_i^2 \mid P$ is $$\label{eq4.27} (B_i^2 \mid P) ({\rm UP}) \qquad {\rm OR} \qquad (B_i^2 \mid P)({\rm DOWN}) \, ,$$ and never both simultaneously. Each of these two cases will have to receive a different treatment, which is why we want to keep them distinct. As an illustration of these different treatments, in (\[eq4.18\]) the $B_i^2 = B_i^2 ({\rm DOWN})$, while at $P \times r$ in (\[eq4.24\]) we have $B_i^2 ({\rm DOWN}), B_j^2 ({\rm UP})$.
D\) Our handling of accidents obviously has to change the topology of the subset $$2X_0^2 \cup \sum_1^P {\mathcal J} \delta_i^2 \subset \Delta_1^4 \, ,$$ but then, in some cases, it also has to involve changes of the topology of the ambient space $\Delta^4_1$ itself. More explicitly, we may have to use moves which, without touching $\Delta^4$, of course, are locally at least a brutal change of topology, and which will turn out, afterwards and this time for global reasons, to leave the pair $(\Delta_1^4 , \Delta^4)$ intact up to diffeomorphism. Without going right now into any particulars, here is how such a [brutal]{} move may look. Consider, in terms of the link diagram, a crossing of two curves, neither of which is of type $\Gamma_i$. Then interchange the UP/DOWN values at the crossing. Clearly, this changes neither the geometric intersection matrices nor, of course, $N^4 (\Delta^2)$. Something quite horrible has, quite clearly, happened locally, but up to diffeomorphism the global topology of the pairs of type $$(\Delta_1^4 = N^4 (\Delta^2) \cup \{{\rm collar}\} , N^4 (\Delta^2))$$ stays intact. So, provided that we do not otherwise conflict with the other structures and/or principles, this is an acceptable move. With this we close, at the level of the present account, the discussion of the accidents, which we assume, from now on, to have been dealt with already. The last item on our agenda is to give now a very sketchy outline of the grand blue diagonalization. We will be starting from the diagram (\[eq3.51\]), and the little BLUE diagonalization (\[eq3.52\]) is already, and will also constantly be, with us. Call this the [initial]{} level.
We have here $\Gamma (3) \subset 2\Gamma (\infty)$, with two families of 1-handles $$\label{eq4.28} R({\rm initial}) \subset 2\Gamma (\infty) \supset B \, ,$$ where we discard the $R_1 + R_2 + \cdots + R_n$ from (\[eq3.3\]), as well as the promoted $y_1 + \cdots + y_{P-n} = R_{n+1} + \cdots + R_P$, decreeing that $$\sum_1^P b_i \subset R({\rm initial}) \cap B \, ,$$ which is perfectly legitimate since $\Gamma (3) - \overset{P}{\underset{1}{\sum}} \, b_i$ is now a tree. We do not write $B({\rm initial})$ in (\[eq4.28\]) since, contrary to what will happen with the $R({\rm initial})$, in the colour-changing process, call it initial $\Rightarrow$ final, which follows next, the $B$ will not change at all. We use again the notation $$h({\rm initial}) = R({\rm initial}) - \sum_1^P b_i$$ and, at our present initial stage, we have the disjoint partitions $$\label{eq4.29.initial} h = (h-B) + h \cap B \, , \ B = (B-R) + B \cap R \, .$$ Here $h,R$ are, of course, $h({\rm initial})$, $R({\rm initial})$. With all this, we have $h({\rm initial}) \subset {\rm LAVA} ({\rm initial})$ and we may rewrite (\[eq3.48\]) as follows $$\label{eq4.30} \bar N^4 (\Gamma (3)) = (N^4 (2\Gamma (\infty) - h({\rm initial})) \cup {\rm LAVA}^{\wedge} ({\rm initial}) \, ,$$ the two pieces on the RHS being glued along $\delta \, {\rm LAVA} ({\rm initial})$. In the formula above we apply the prescriptions from Stage III, meaning that $$\label{eq4.31} ( {\rm LAVA} ({\rm initial}) , \delta \, {\rm LAVA} ({\rm initial})) = \left( \bigcup_i h_i ({\rm initial}) \cup D^2 (C_i) \, , \ \partial \, {\rm LAVA} ({\rm initial}) \cap \partial (N^4 (2\Gamma (\infty) - h ({\rm initial}))\right) \, .$$ The $C \cdot h ({\rm initial})$ is of the easy id $+$ nil form, and hence the pair (\[eq4.31\]) has the product property.
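The tree condition invoked here for $\Gamma (3) - \overset{P}{\underset{1}{\sum}} \, b_i$ (and again, later, for $2\Gamma (\infty) - R({\rm final})$) rests on a standard graph-theoretic fact, which we record only as background; it is not part of the source's argument:

```latex
\Gamma - F \ \mbox{is a tree} \iff \Gamma - F \ \mbox{is connected and every cycle of} \ \Gamma \ \mbox{uses at least one edge of} \ F \, ;
```

for a finite connected graph $\Gamma$ this forces ${\rm card} \, F = {\rm rk} \, H_1 (\Gamma ; \mathbb{Z})$, and in the locally finite, infinite setting used here the same cycle-by-cycle criterion applies.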
So, at our initial stage we start from $$\label{eq4.32} [(N^4 (2\Gamma (\infty)-h ({\rm initial})) \cup {\rm LAVA}^{\wedge} ({\rm initial})] + \sum_1^{\bar P} D^2 (\Gamma_i) \subset \Delta_1^4 \overset{\mathcal J}{\longleftarrow} \sum_1^P \delta_i^2 \, ,$$ where the following things happen: a\) inside the $[ \ldots \cup \ldots]$, the two corresponding terms are glued together along $\delta \, {\rm LAVA} (\rm {initial})$, b\) between the two compact spaces from the LHS, there is just a collar, c\) we also have $$\label{eq4.33} \partial \sum_1^P \delta_j^2 = \sum_1^P \eta_j ({\rm green}) \subset \partial [N^4 (\ldots) \cup {\rm LAVA}^{\wedge} ({\rm initial})] - \sum_1^{\bar P} \Gamma_i \, ,$$ d\) and finally, apart from (\[eq4.33\]), ${\mathcal J} \delta^2$ is disjoint from $\Delta^4 \subset \Delta_1^4$, the ${\mathcal J}$ itself being an embedding into this last space. This last point expresses the fact that the accidents are, by now, killed. Very much as in (\[eq4.14\]), slightly re-arranged and also considered now at the present initial level, after the accidents have been dealt with, we have the following BIG LINK (initial) $$\label{eq4.34} \sum_{\ell} c_{\ell} (b \, {\rm or} \, r) + \sum_j \eta_j + \sum_f \overline{C_f} + \sum_1^P \eta_{\alpha} ({\rm green}) \subset \partial^+ N^4 (2\Gamma (\infty)),$$ $$\left( \sum_{1}^{\bar P} \Gamma_i + \sum_1^{\infty} C_j - \sum_f \overline{C_f} \right) + \sum_1^{\infty} \gamma_k^0 \subset \partial^- N^4 (2\Gamma (\infty)) \, .$$ To complete the picture at the initial level, let us add the following two items too. The small blue diagonalization (\[eq3.52\]) is, and will still constantly be, with us from now on.
Finally, the only obstruction which has been left in our way now is the following finite set, already identified in (\[eq3.53\]), namely $$\begin{aligned} \label{eq4.34.1} &&\mbox{the $h_k \in h ({\rm initial}) - B$ such that $h_k \subset \overset{P}{\underset{1}{\sum}} \ \{\mbox{extended cocore} \, b_j \}^{\wedge}$} \\ &&\mbox{(the $1$-handles of $\Delta^4$) and which, also, are touched by $\overset{P}{\underset{1}{\sum}} \ \eta_i ({\rm green})$.} \nonumber\end{aligned}$$ With all these things, what comes next is a big geometric transformation, which we call $$\label{eq4.35} \mbox{The CHANGE OF COLOUR initial $\Rightarrow$ final,}$$ at the final level of which we will find a context analogous (modulo some important changes) to the one of the initial level, but where now the GRAND BLUE DIAGONALIZATION IS IN PLACE. The transformation (\[eq4.35\]) will leave $(2\Gamma (\infty) , B , \eta ({\rm green}))$, eventually, invariant. By “eventually” we mean here that the initial and final levels for these objects will be rigorously the same, but with a lot of drastic transformations occurring in between. The (\[eq4.35\]) also comes with a change $$\label{eq4.36} \{ h({\rm initial}) , ({\rm LAVA} ({\rm initial}) , \delta \, {\rm LAVA} ({\rm initial}))\} \Rightarrow \{h ({\rm final}) , ({\rm LAVA} ({\rm final}) , \delta \, {\rm LAVA} ({\rm final}))\}$$ satisfying the usual condition $$\label{eq4.37} \delta \, {\rm LAVA} ({\rm final}) = \partial \, {\rm LAVA} ({\rm final}) \cap \partial (N^4 (2\Gamma (\infty)) - h({\rm final})) \, .$$ It will turn out now that the change of colour process (\[eq4.35\]) brings with it a quite serious violation of the RED $C \cdot h = {\rm id} + {\rm nil}$ feature. This means that, in order to retain the product property for the pair $$({\rm LAVA} ({\rm final}) , \delta \, {\rm LAVA} ({\rm final})) \eqno (*)$$ we can no longer use the exact prescription from Stage III.
We had used these prescriptions for (\[eq4.31\]), but in order to define $(*)$ correctly, some modifications of the standard prescriptions will be needed. Without going into that right now, with $[N^4 (2\Gamma (\infty) - h ({\rm initial}))] \cup {\rm LAVA}^{\wedge} ({\rm initial})$ replaced now by $$\label{eq4.38} [N^4 (2\Gamma (\infty)) - h ({\rm final})] \cup {\rm LAVA}^{\wedge} ({\rm final}) \, ,$$ and with $(*)$ still retaining the product property, we have, at the final level, a context just like in (\[eq4.32\]). In particular, the product property provides us with a properly embedded system of 3-balls $$\label{eq4.39} \sum_1^P \{\mbox{extended cocore} \, b_i\}^{\wedge} ({\rm final}) \, .$$ The next lemma should explain the term “change of colour”. [**Lemma 13.**]{} [*For any finite subset $S \subset h({\rm initial}) - B$, we can find a larger, still finite subset $$\label{eq4.40} S \subset S_1 \subset h({\rm initial}) - B$$ for which there exists an $$\label{eq4.41} S_2 \subset B - R({\rm initial}) , \ \mbox{with} \ {\rm card} \, S_2 = {\rm card} \, S_1$$ such that the following things happen.*]{} 1\) [*If one defines $$\label{eq4.42} R({\rm final}) = \sum_1^P b_i + (h({\rm initial}) - S_1) + S_2 , \mbox{and hence also} \ h({\rm final}) = (h({\rm initial}) - S_1) + S_2 \, ,$$ then, just like $2\Gamma (\infty) - B$ and $2\Gamma (\infty) - R({\rm initial})$, the $2\Gamma (\infty) - R({\rm final})$ is again a tree.*]{} 2\) [*In a manner which the very sketchy proof below will make explicit (at least to a certain extent), this comes with the items [(\[eq4.36\]), (\[eq4.37\]), (\[eq4.38\]), (\[eq4.39\])]{}, i.e. with the following analogue of [(\[eq4.32\])]{}, at the final level $$\label{eq4.43} [(N^4 (2\Gamma (\infty) - h ({\rm final})) \cup {\rm LAVA}^{\wedge} ({\rm final})] + \sum_1^{\bar P} D^2 (\Gamma_i) \subset \Delta_1^4 \overset{\mathcal J}{\longleftarrow} \sum_1^P \delta_i^2 \, ,$$ satisfying the analogue of*]{} (\[eq4.33\]).
3\) (PUNCH LINE) [*If $S$ is large enough so as to contain the finite set from [(\[eq4.34.1\])]{}, then we also have*]{} $$\label{eq4.44} \sum_1^P \eta_i ({\rm green}) \cap {\rm LAVA}^{\wedge} ({\rm final}) = \emptyset \, .$$ Before we go to a very sketchy and impressionistic outline of proof for this last lemma, let us notice that, modulo everything said so far, (\[eq4.44\]) should clinch the proofs of the two Theorems 1 and 2. With this we now list the kind of steps invoked in the proof of Lemma 13. This is just a sketchy outline, of course. I\) At the 1-dimensional level, (\[eq4.35\]) is a sequence of embedded transformations of $N^4 (2\Gamma (\infty))$ inside $\Delta_1^4$, which are 1-handle slides keeping the [splitting]{} intact all the time and moving the positions of 1-handle cocores around. In the end (but [only]{} in the end), we find ourselves with exactly the same $N^4 (2\Gamma (\infty))$ as in the beginning, and also with the transformation of pairs $$(N^4 (2\Gamma (\infty)) , R({\rm initial})) \Rightarrow (N^4 (2\Gamma (\infty)) , R({\rm final})) \, .$$ But, in this eventual transformation, the $B$ stays put (although it might have done horrible things at intermediary stages, a leitmotif in this present story). So, it is only $h-B$ which actually changes when we move from “initial” to “final”. II\) The 1-handle slides from I) drag along the curves and the 2-handles, internal or external, attached along them.
The confinement conditions are [never]{} violated and, with a lot of intermediary stages, we get a transformation at the level of (\[eq4.34\]) $$\label{eq4.45} \mbox{BIG LINK (initial) $\Rightarrow$ BIG LINK (final),}$$ at the end of which (but only at the end) we find that, as subsets of $2\Gamma (\infty)$, we have the equality $$\label{eq4.46} \{\mbox{BIG LINK (initial)}\} \cap \partial^+ N^4 (2\Gamma (\infty)) = \{ \mbox{BIG LINK (final)}\} \cap \partial^+ N^4 (2\Gamma (\infty)) \, .$$ III\) Our set-up is such that, once we choose to forget the intermediary stages and only look at the initial and final levels, both the geometric intersection matrices $\eta \cdot B$ and $\eta ({\rm green}) \cdot B$ stay put. But not so on the RED side, where we will actually have $$\label{eq4.47} C({\rm final}) \cdot h ({\rm final}) \ne {\rm id} + {\rm nilpotent} \, .$$ [**Remarks.**]{} Both $C ({\rm initial})$ and $C({\rm final})$ contain curves $\bar C \subset \partial^+ N^4 (2\Gamma (\infty))$. Also, in a sense which is not too hard to make precise, the $C ({\rm initial}) \cdot h({\rm initial})$ and $C ({\rm final}) \cdot h({\rm final})$ only differ by a [finite]{} matrix; let us say that the [violation]{} of the RED feature id $+$ nil displayed above is compact. IV\) Once we have lost the RED id $+$ nil, we can no longer proceed exactly as in the context of Stage III. So now we will have, by definition $$\label{eq4.48} {\rm LAVA} ({\rm final}) = \left[ \sum_i (h_i ({\rm final})) \cup D^2 (C_i ({\rm final})) \right] \cup \{\mbox{some {\ibf additional} pieces which we call {\ibf lava bridges}}\}.$$ The $\sum$ (lava bridges) is compact and its role is to ensure that the following happens $$\begin{aligned} \label{eq4.49} &&\mbox{The pair $({\rm LAVA} ({\rm final}) , \delta \, {\rm LAVA} ({\rm final}))$ continues to have the product property,} \\ &&\mbox{actually with the same lamination as before, i.e. 
${\mathcal L} ({\rm initial}) = {\mathcal L} ({\rm final})$.} \nonumber\end{aligned}$$ V\) There is a geometric transformation $$\label{eq4.50} ({\rm LAVA} ({\rm initial}) , \delta \, {\rm LAVA} ({\rm initial})) \Rightarrow ({\rm LAVA} ({\rm final}) , \delta \, {\rm LAVA} ({\rm final}))$$ which has the virtue that its various intermediary stages exhibit, explicitly, the conservation of the product property. Both the initial and final levels of (\[eq4.50\]) are naturally embedded inside the ambient $\Delta_1^4$, but [not]{} so all the intermediary steps. Let us say here that (\[eq4.50\]) is an “[allowable lava move]{}”. Also, in order for the product property to be preserved, notwithstanding the fact that we only have a compact violation of id $+$ nil in the context of (\[eq4.50\]), some global conditions, involving the totality of $N^4 (2X_0^2)$, will have to be paid attention to. VI\) We will constantly have $$\label{eq4.51} \{\mbox{lava bridges}\} \cap \partial N^4 (2\Gamma (\infty)) \subset \partial^- N^4 (2\Gamma (\infty)) \, ,$$ far from $\eta ({\rm green})$. VII\) Here is a heuristic argument suggesting why we should have (\[eq4.44\]) (the punch line). We already know that $$\eta ({\rm green}) \cdot \left( B - \sum_1^P b_j \right) = 0$$ and, quite clearly, the $S_2$ (\[eq4.41\]) $\subset \, B - \overset{P}{\underset{1}{\sum}} \, b_i$. Assume now that $S \supset \{\mbox{the finite set of $h_k$'s from (\ref{eq4.34.1}), call it}$ $S_3\}$. It is also only through $S_3$ that $\eta ({\rm green})$ touches LAVA (initial). So, if $S_3$ is changed into a piece of the BLUE $S_2$, then we get (\[eq4.44\]); end of proof! The real-life argument is, of course, a bit more complex, but this is, anyway, the idea. VIII\) All the steps outlined so far may seem a bit mysterious, and so I would like to focus now, for a minute or so, on the exact moment when the actual change of colour takes place, and in particular on the geometry which comes with it.
We are supposed to be now somewhere in the middle of the process (\[eq4.35\]), at a time which I will call “intermediary”. This comes with a $2\Gamma (\infty)$ (intermediary) quite different from $2\Gamma (\infty)$, but which is split along a $\Sigma_{\infty}^2$ (intermediary) as follows $$\label{eq4.52} \partial N^4 (2\Gamma (\infty) ({\rm intermediary})) = \partial^+ N^4 \cup \partial^- N^4 \, , \quad \partial^- N^4 \cap \partial^+ N^4 = \Sigma_{\infty}^2 ({\rm intermediary}) \, .$$ There is also a BIG LINK (intermediary) satisfying the obvious confinement condition, and an $R$ (intermediary), containing some precise, interesting RED element $$\label{eq4.53} h_i \in (R({\rm intermediary}) - B) \cap S_1$$ which is such that the time has come for trading it for its BLUE counterpart $B_i \in S_2 - R$ (intermediary). And it is the geometry of this trading step $h_i \leftrightarrow B_i$ which we want to explain now. But before we can do this we have to start by unravelling one of the main virtues of step (I) above. Some notations will be necessary here. Let us consider the 3-ball $B^3$, together with the splitting of its boundary by the equatorial circle, call it $$\partial B^3 = \partial^+ B^3 \cup \partial^- B^3$$ where $\partial^{\pm} B^3$ are the two hemispheres.
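In coordinates, this boundary splitting is just the equatorial one; the normalization below is ours, introduced only to fix the picture, and is not notation from the source:

```latex
B^3 = \{ x \in \mathbb{R}^3 : |x| \le 1 \} \, , \qquad \partial^{\pm} B^3 = \{ x \in \partial B^3 : \pm x_3 \ge 0 \} \, ,
```

so that $\partial^+ B^3 \cap \partial^- B^3 = \{ |x| = 1 , \ x_3 = 0 \}$ is the equatorial circle along which the two hemispheres are glued.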
Next, consider a long cylinder $B^3 \times [-N,N]$, with $[-N,N] \subset \{\mbox{some $x$-axis}\}$, and along this $x$-axis we consider the four quantities $$-N < r < b < N \, .$$ Notice the following splitting for the lateral surface of our cylinder $$\label{eq4.54} \partial B^3 \times [-N,N] = (\partial^- B^3 \times [-N,N]) \cup (\partial^+ B^3 \times [-N,N]) \, .$$ With all these things, what step (I) does for us is to generate an embedding $$\label{eq4.55} (B^3 \times [-N,N] , \partial B^3 \times [-N,N] ) \overset{\ell}{\longrightarrow} (N^4 (2\Gamma (\infty) ({\rm intermediary})) , \partial N^4 (2\Gamma (\infty) ({\rm intermediary}))) \, ,$$ which is such that $$\label{eq4.55.1} \ell (B^3 \times r) = h_i \, , \ \ell (B^3 \times b) = B_i \ \mbox{and, moreover} \ {\rm Im} \, \ell \cap (B \cup R) = \{ h_i , B_i \} + \{\mbox{some harmless} \, B-R\} \, .$$ $$\label{eq4.55.2} \mbox{The embedding $\ell$ is compatible with the splittings (\ref{eq4.54}) (at the source) and (\ref{eq4.52}) (at the target), i.e.}$$ $$\ell (\partial^{\pm} B^3 \times [-N,N]) \subset \partial^{\pm} N^4 (2\Gamma (\infty) ({\rm intermediary})) \, .$$ We may omit writing the “$\ell$” explicitly from now on. At the intermediary moment where we find ourselves now, (\[eq4.46\]) is, most likely, violated. What we have, instead, are the following two sets, to be considered next $$\label{eq4.56} \Lambda^+ = \{\mbox{BIG LINK (intermediary)}\} \cap \partial^+ B^3 \times [-N,N] \ {\rm and}$$ $$\Lambda^- = \{\mbox{BIG LINK (intermediary)} + [\mbox{lava bridges}]\} \cap \partial^- B^3 \times [-N,N] \, .$$ Here are some comments concerning (\[eq4.55\]) and (\[eq4.56\]).
$$\begin{aligned} \label{eq4.56.1} &&\mbox{We have $B^3 \times r \subset {\rm LAVA}$ , $B^3 \times b \not\subset {\rm LAVA}$, the lava under discussion now} \\ &&\mbox{being the one at the intermediary moment, before any action.} \nonumber\end{aligned}$$ $$\label{eq4.56.2} \mbox{Except for $(\eta ({\rm green}) + c(b)) \cap \Lambda^+$ and for $(\Gamma + \gamma^0) \cap \Lambda^-$, everything else in $\Lambda^{\pm}$ is lava.}$$ $$\label{eq4.56.3} \mbox{Consider any $h_u$ and any curve $C_v$ which is lava (which, in terms of (\ref{eq4.34}), may mean (actual) $C,\eta$ or $c(r)$).}$$ Any contact $C_v \cdot h_u$ a priori [sticks]{}, in the sense that if we sever it, then we might well destroy the product property of the lava. Now, it so happens that our LAVA has a bit more internal structure than has been displayed so far. One of the consequences of this not-yet-explained additional structure is that, if $C_v \subset \partial^+ N^4 (2\Gamma (\infty))$ and if $h_u$ pertains to $X_0^2 \times r$, then the contacts $C_v \cdot h_u$ do not stick; one can sever them without destroying the product property. With all this we now consider an internal transformation $T$ of $N^4 (2\Gamma (\infty) ({\rm intermediary}))$, operating as follows.
$$\begin{aligned} \label{eq4.57.1} &&\mbox{The transformation $T$ applied to the space $N^4 (2\Gamma (\infty) ({\rm intermediary}))$ is a simple isotopic} \\ &&\mbox{diffeomorphism, respecting the splitting, and having all of its active part concentrated} \nonumber \\ &&\mbox{inside $B^3 \times [-N,N]$.} \nonumber\end{aligned}$$ $$\begin{aligned} \label{eq4.57.2} &&\mbox{If $x$ is the coordinate along $[-N,N]$, then $T$ is the identity on the factor $B^3$, while along} \\ &&\mbox{$[-N,N]$ it is the translation $T(x) = x + (b-r)$, dampened so that it becomes the identity,} \nonumber \\ &&\mbox{again, in the neighbourhood of $\pm N$.} \nonumber\end{aligned}$$ $$\label{eq4.57.3} \mbox{So, geometrically speaking} \ T (B^3 \times r) = B^3 \times b \, ,$$ which we accompany by the following decrees. To begin with, we change $R ({\rm intermediary})$ into $R$ (intermediary) $ - \{ h_i \} + \{ B_i \}$, deciding, also, that now $B_i \in R \cap B$. Next, we also decree that $$B^3 \times b \subset {\rm LAVA} \, ,$$ and we completely discard the $h_i$, as such, from the rest of our procedure. $$\begin{aligned} \label{eq4.57.4} &&\mbox{Declaring that $B^3 \times b$ is LAVA is a less innocent operation than it may look at first sight,} \\ &&\mbox{since $B^3 \times b$ certainly comes with contacts $(B^3 \times b) \cap \Lambda^{\pm}$ of its own, which we have to worry} \nonumber \\ &&\mbox{about now.} \nonumber\end{aligned}$$ But let us first describe the action of $T$ on $\Lambda^{\pm}$, which will be done in (\[eq4.57.5\]) below. $$\begin{aligned} \label{eq4.57.5} &&\mbox{({\ibf The main step}) On the side of $\partial^- B^3 \times [-N,N]$, we let $\Lambda^-$ go solidarily with $B^3 \times r$,} \\ &&\mbox{i.e. we apply to it the same geometrical move $x \mapsto x+(b-r)$ as in (\ref{eq4.57.2}) above. But} \nonumber \\ &&\mbox{then on the $\partial^+ B^3 \times [-N,N]$ side we take $T \mid \Lambda^+ = {\rm identity}$, i.e. 
{\ibf we leave $\Lambda^+$ in place},} \nonumber \\ &&\mbox{\ibf without budging it.} \nonumber\end{aligned}$$ All this requires some explaining. The $B^3 \times r$ has taken along with it, to its new position $B^3 \times b$, all the contacts with $\Lambda^-$ which it had. This includes, of course, lava, which now comes on top of $B^3 \times b$, i.e. new lava connections. But that is fine, since $B^3 \times r$ has just been changed into $B^3 \times b$. Also, at the same time, the same $x \mapsto x + (b-r)$ removes all the old connections $(B^3 \times b) \cap \Lambda^-$ which might have been there. One can easily see that, at the local level of $\partial^- B^3 \times [-N,N]$, all this is OK, lava-wise. Also, we have dragged along the non-lava part of $\Lambda^-$, which, inside its confinement site $\partial^- N^4$, is, generally speaking, entangled with the rest of $\Lambda^-$. This way, we have avoided the danger of tearing apart the topology of $$[N^4 (2\Gamma (\infty) - h) \cup {\rm LAVA}^{\wedge}] + \sum D^2 (\Gamma) \, .$$ This is all we have to say on the $\partial^- N^4$ side. On the $\partial^+ N^4$ side, the lava connections coming from $(B^3 \times r) \cap \Lambda^+$ have been severed, but this is allowable, via (\[eq4.56.3\]). Then also, new lava connections coming with $(B^3 \times b) \cap \Lambda^+$ have been established, and this other brutal move is also allowable, because of another global property of the lava, dual to (\[eq4.56.3\]) and quite similar to it. So, our whole move $T$, which has exchanged $h_i \in S_1$ for $B_i \in S_2$, has kept the product property of the lava intact while, at the same time, achieving the following basic thing $$\label{eq4.58} \mbox{Any contact $\eta ({\rm green}) \cdot h_i$ has gone, and no contact $\eta ({\rm green}) \cdot B_i$ has appeared instead.}$$ This is obviously the kind of thing we need for getting (\[eq4.44\]).
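The dampened translation of (\[eq4.57.2\]) admits a completely explicit model; the cut-off function below is our illustration of what “dampened so that it becomes the identity near $\pm N$” can mean, not a formula taken from the source. Pick a smooth $\varphi : [-N,N] \to [0,1]$ with $\varphi \equiv 0$ near $\pm N$, $\varphi \equiv 1$ on $[r,b]$ and $|\varphi'| < 1/(b-r)$ everywhere (possible as soon as the end intervals $[-N,r]$ and $[b,N]$ are longer than $b-r$); then

```latex
T(p,x) = (p \, , \ x + (b-r) \, \varphi (x)) \, , \qquad p \in B^3 \, , \ x \in [-N,N] \, ,
```

is a diffeomorphism of $B^3 \times [-N,N]$, since $1 + (b-r) \varphi' (x) > 0$; it is the identity near $B^3 \times \{\pm N\}$, and $\varphi (r) = 1$ gives $T(B^3 \times r) = B^3 \times b$, as in (\[eq4.57.3\]).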
[**A final remark.**]{} Notice that it is the splitting $+$ confinement, both sacrosanct principles in this paper, which allow us to operate independently on $\Lambda^+$ and $\Lambda^-$ without getting them entangled with each other. But then, splitting $+$ confinement are necessary all over the place in the proof of Lemma 13, for instance for restoring, at the final level, the $$(N^4 (2\Gamma (\infty)) , B , \eta ({\rm green}) (\mbox{together with the (BIG LINK)} \, \cap \partial^+ N^4)) \, ,$$ exactly as they were at the initial one. [999]{} , Valentin Poénaru’s Program for the Poincaré Conjecture, in the volume [*Geometry Topology and Physics for Raoul Bott*]{} (ed. by S.T. Yau), International Press, pp. 139-169 (1994). , On embedding of spheres, [*BAMS*]{}, [**65**]{}, pp. 59-65 (1959). , Recent progress on the Poincaré Conjecture and the classification of 3-manifolds, [*BAMS*]{}, [**42**]{}, pp. 57-78 (2004). (\[PoI\]) [V. Poénaru]{}, The collapsible pseudo-spine representation theorem, [*Topology*]{}, vol. 31, [**3**]{}, pp. 625-636 (1992). (\[PoII\]) [V. Poénaru]{}, Infinite processes and the 3-dimensional Poincaré Conjecture, II: The Honeycomb representation theorem, [*Prépublications d’Orsay*]{}, 93-14 (1993). (\[PoIII\]) [V. Poénaru]{}, Infinite processes and the 3-dimensional Poincaré Conjecture, III: The algorithm, [*Prépublications d’Orsay*]{}, 92-10 (1992). (\[PoIV-A\]) [V. Poénaru]{}, Processus infini et conjecture de Poincaré en dimension trois, IV: Le théorème de non sauvagerie lisse (The smooth tameness theorem), Part A, [*Prépublications d’Orsay*]{}, 93-83 (1992). (\[PoIV-B\]) [V. Poénaru]{}, Processus infini et conjecture de Poincaré en dimension trois, IV: Le théorème de non sauvagerie lisse (The smooth tameness theorem), Part B, [*Prépublications d’Orsay*]{}, 95-33 (1995). This \[PoV\], [*Prépublications d’Orsay*]{}, 94-25 (1994) is completely superseded by [@PoV-A], [@PoV-B]. (\[PoV-A\]) [V. 
Poénaru]{}, Geometric simple connectivity in four-dimensional differential topology, Part A, [*IHES Prépublications*]{} M/01/45 (2001), http://www.ihes.fr/PREPRINTS.M01/Resu/resu-M01-45.html , Geometric simple connectivity in four-dimensional differential topology: An outline, Preprint Trento Univ. UTM 649 (2003), http://eprints.biblio.unitn.it/archive/00000660/02/UTM649-pdf (\[PoV-B\]) [V. Poénaru]{}, Manuscript. (\[PoVI\]) [V. Poénaru]{}, The strange compactification theorem, Part A, [*IHES Prépublications*]{} M/95/15 (1995); Part B, [*IHES Prépublications*]{} M/96/43 (1996); Part C, [*IHES Prépublications*]{} M/97/43 (1997); Part D, [*IHES Prépublications*]{} M/97/59 (1997); Part E is in process of being typed at IHES. , A program for the Poincaré Conjecture and some of its ramifications, in the volume [*Topics in Low-Dimensional Topology*]{} (ed. A. Banyaga, H. Movahedi-Lankarani, R. Wells), World Scientific, pp. 65-88 (1999). , Geometric Simple Connectivity and Low-Dimensional Topology, [*Proceedings of the Steklov Institute*]{}, [**247**]{}, pp. 195-208 (2004). , Three lectures on higher-dimensional methods in three-dimensional topology, [*Proceedings of the F. Tricerri Memorial Conference*]{}, Suppl. ai Rendiconti del Circolo Matematico di Palermo, S-II N49, pp. 203-217 (1997). , On $C^1$-complexes, [*Ann. of Math.*]{} [**41**]{}, pp. 809-824 (1940). , A certain open manifold whose group is unity, [*Q. J. of Math.*]{}, [**6**]{}, pp. 268-279 (1935). [^1]: Université de Paris-Sud, Mathématiques 425, Topologie et Dynamique, 91405 Orsay Cedex, France. Universitá degli Studi di Trento, Dipartimento di Matematica, 38050 Povo-Trento, Italia. This paper has been partially supported by the NSF grant DMS-0071852.
--- abstract: 'Understanding the evolution of spin-orbit torque (SOT) with increasing heavy-metal thickness in ferromagnet/normal metal (FM/NM) bilayers is critical for the development of magnetic memory based on SOT. However, several experiments have revealed an apparent discrepancy between damping enhancement and damping-like SOT regarding their dependence on NM thickness. Here, using linewidth and phase-resolved amplitude analysis of vector network analyzer ferromagnetic resonance (VNA-FMR) measurements, we simultaneously extract damping enhancement and both field-like and damping-like inverse SOT in [Ni$_{80}$Fe$_{20}$]{}/Pt bilayers as a function of Pt thickness. By enforcing an interpretation of the data which satisfies Onsager reciprocity, we find that both the damping enhancement and damping-like inverse SOT can be described by a single spin diffusion length ($\approx$ ), and that we can separate the spin pumping and spin memory loss contributions to the total damping. This analysis indicates that less than 40% of the angular momentum pumped by FMR through the [Ni$_{80}$Fe$_{20}$]{}/Pt interface is transported as spin current into the Pt. On account of the spin memory loss and corresponding reduction in total spin current available for spin-charge transduction in the Pt, we determine the Pt spin Hall conductivity (${\sigma_{\mathrm{SH}}}= $ ) and bulk spin Hall angle (${\theta_{\mathrm{SH}}}= $ ) to be larger than commonly-cited values. These results suggest that Pt can be an extremely useful source of SOT if the FM/NM interface can be engineered to minimize spin loss. Lastly, we find that self-consistent fitting of the damping and SOT data is best achieved by a model with Elliott-Yafet spin relaxation and extrinsic inverse spin Hall effect, such that both the spin diffusion length and spin Hall conductivity are proportional to the Pt charge conductivity.' author: - 'Andrew J. Berger' - 'Eric R. J. Edwards' - 'Hans T. 
Nembach' - Olof Karis - Mathias Weiler - 'T. J. Silva' bibliography: - 'Physics.bib' title: 'Determination of spin Hall effect and spin diffusion length of Pt from self-consistent fitting of damping enhancement and inverse spin-orbit torque measurements' --- [^1] Introduction ============ The use of nonmagnetic metals with strong spin-orbit coupling (SOC) to generate pure spin currents via spin-orbit effects is currently an area of intense focus, driven largely by the promise of efficient electrically-controllable magnetic memory. For this application, the spin current or spin accumulation generated by SOC in a non-magnetic layer can be used to exert a torque on an adjacent ferromagnetic (FM) layer—so-called spin-orbit torque (SOT)—in order to excite magnetization dynamics or cause switching. Central to this field of study is proper characterization of the spin-to-charge conversion that occurs in heavy metal films such as Pt, Ta, W, and Au. There are many techniques for measuring this conversion, including ferromagnetic resonance (FMR) spin pumping [@saitoh_conversion_2006], non-local spin valves [@valenzuela_direct_2006; @sagasta_tuning_2016], thermal spin injection via the spin Seebeck effect [@qu_self-consistent_2014], spin Hall magnetoresistance [@nakayama_spin_2013], spin torque FMR [@liu_spin-torque_2011], and harmonic analysis of Hall effect voltage measurements [@garello_symmetry_2013]. Several groups, using various techniques [@nakayama_geometry_2012; @feng_spin_2012; @rojas-sanchez_spin_2014; @nan_comparison_2015; @conca_lack_2017], have uncovered a discrepancy when comparing the excess damping and the spin-to-charge conversion by inverse spin Hall effect (iSHE) contributed by the normal metal (NM) layer. Specifically, the FM damping exhibits a steep increase with the introduction of only a very thin ($<$ ) NM film [@boone_spin-scattering_2015; @azzawi_evolution_2016; @caminale_spin_2016]. 
Meanwhile, the measured SOT, characterized by either spin-to-charge conversion via DC iSHE or harmonic Hall technique, develops over a much longer length scale [@azevedo_spin_2011; @nguyen_spin_2016]. Magneto-optical measurements also demonstrate an interfacial spin accumulation in Pt due to SHE with a spin diffusion length of about [@stamm_magneto-optical_2017]. Spin memory loss (SML) [@kurt_spin-memory_2002; @rojas-sanchez_spin_2014] and proximity-induced magnetic moments at the FM/NM interface [@caminale_spin_2016] have been invoked to explain the large damping enhancement caused by thin NM films even when the NM thickness is less than its spin diffusion length. In this model, spin loss at the FM/NM interface acts as an additional parallel spin relaxation pathway to that of spin pumping and diffusion into the Pt bulk. From damping measurements alone, the relative contributions of these mechanisms are not resolvable. In this work, we show that a self-consistent fit of Gilbert damping and damping-like iSOT versus Pt thickness—where both sets of data are described by the same spin diffusion length ${\lambda_{\mathrm{s}}}$—makes it possible to separate these sources of damping. Furthermore, this data analysis methodology allows for unambiguous determination of the spin-mixing conductance ${G^{\uparrow\downarrow}}$ at the FM/NM interface. We can therefore ascertain the spin Hall conductivity (or spin Hall angle) without having to refer to spin transport parameters ${G^{\uparrow\downarrow}}$ and ${\lambda_{\mathrm{s}}}$ determined from measurements performed on dissimilar samples or theoretical idealized values. For our samples of Pt deposited on [Ni$_{80}$Fe$_{20}$]{} (or Permalloy, Py), only % of the total damping enhancement from the Pt film is attributable to spin pumping into the Pt layer when ${d_{\mathrm{Pt}}}\gg {\lambda_{\mathrm{s}}}$. 
Experimental Technique ====================== The data presented in this work are based on the spectroscopic and complex amplitude information encoded in VNA-FMR spectra, which yield a measure of the damping and SOT, respectively. FMR damping extracted from a spectral linewidth analysis [@kalarickal_ferromagnetic_2006] has been used extensively to study the damping enhancement due to the spin pumping effect into an NM adjacent to the FM layer [@tserkovnyak_enhanced_2002; @mizukami_effect_2002; @heinrich_role_2003; @schoen_magnetic_2017-1]. If such spectra are measured inductively with phase-sensitive VNA-FMR, it is also possible to analyze the phase and amplitude information of those spectra to quantitatively extract the field-like (FL) and damping-like (DL) SOT conductivities, as we have previously described [@berger_inductive_2018]. These conductivities, ${\sigma_{\mathrm{FL}}}^{\mathrm{SOT}}$ and ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$, quantify the AC charge currents produced in the NM layer via iSHE or inverse Rashba-Edelstein effect (iREE) in response to driven magnetization dynamics in the FM layer. Direct coupling to the magnetization dynamics via Faraday’s law also drives AC charge currents in the NM layer, quantified by ${\sigma_{\mathrm{FL}}}^{\mathrm{F}}$. The superposition of these charge currents presents a complex inductive load to the microwave coplanar waveguide (CPW) used in VNA-FMR measurements, altering the amplitude and phase of the transmitted microwave signal. By Onsager reciprocity, ${\sigma_{\mathrm{FL}}}^{\mathrm{SOT}}$ and ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$ measured inductively via inverse spin-charge conversion processes are equivalent to the spin torque efficiency per unit applied electric field used by Nguyen et al. in Ref. to describe the forward SOT process [@berger_inductive_2018]. 
Samples ------- To study the Pt-thickness dependence of damping and damping-like iSOT, we prepared two sample sets, with sputter-deposited metal multilayers consisting of substrate/Ta(1.5)/Py($d_{\mathrm{Py}}$)/Pt($d_{\mathrm{Pt}}$)/Ta(3), where thicknesses are indicated in nanometers and are calibrated with X-ray reflectivity measurements. In the first sample set, the thickness $d_{\mathrm{Py}}$ was varied from to while $d_{\mathrm{Pt}} = $ was fixed. In the second set, the thickness $d_{\mathrm{Pt}}$ was varied from to with fixed $d_{\mathrm{Py}} = $ . For each sample, an identical control sample was prepared, where Pt is substituted with Cu. The Cu thicknesses were chosen to match the sheet resistance of the corresponding Pt layer, so as to control for Faraday effect induced currents in the NM layer. Results and Discussion ====================== Py thickness series ------------------- From the Py thickness series we focus on three quantities: (1) the FM contribution to the sample inductance ($L_{\mathrm{FM}}$, as in Ref. ), (2) the effective magnetization ${M_{\mathrm{eff}}}$, and (3) the Gilbert damping parameter $\alpha$. From $L_{\mathrm{FM}}$ as a function of Py thickness (Fig. \[fig:LFM\_dPy\]), we are able to extract the dead layer thickness, and therefore determine the effective magnetic thickness of the FM layer. From ${M_{\mathrm{eff}}}$, we are able to determine the saturation magnetization ${M_{\mathrm{s}}}$ (Fig. \[fig:Meff\_dPy\]). Lastly, from the Gilbert damping as a function of Py thickness, we can separate the intrinsic and interfacial contributions to $\alpha$ (Fig. \[fig:alpha\_dPy\]). This is a critical first step to determine the spin pumping and SML contributions to the total damping. 
### Ferromagnetic dead layer measurement In inductive VNA-FMR measurements, the FM layer contributes a frequency-independent inductance to the $S_{21}$ measurement according to [@berger_inductive_2018; @silva_characterization_2016]: $$L_{\mathrm{FM}} = \frac{\mu_0 l d_{\mathrm{FM}}}{4 W_{\mathrm{wg}}} \eta^2(z, W_{\mathrm{wg}})$$ where $\mu_0$ is the permeability of free space, $l$ is the sample length along the CPW signal propagation direction, $d_{\mathrm{FM}}$ is the deposited FM thickness, $W_{\mathrm{wg}}$ is the CPW signal line width, and $\eta(z,W_{\mathrm{wg}}) \equiv (2/\pi)\arctan(W_{\mathrm{wg}}/2z)$ is the spacing loss, ranging from 0 to 1, due to a finite distance $z$ between sample and CPW. When plotted vs. $d_{\mathrm{FM}}$, the $L_{\mathrm{FM}} = 0$ intercept indicates the magnetic dead layer thickness. From the data in Fig. \[fig:LFM\_dPy\], we find $d_{\mathrm{dead}} = $ for Py/Pt samples. Also shown are the data for Py/Cu control samples, which exhibit a similar dead layer thickness of , suggesting that the Py dead layer exists primarily at the Ta/Py interface. ![Py-thickness dependent zero-frequency inductance for both Py/Pt and Py/Cu control samples.[]{data-label="fig:LFM_dPy"}](LFM_dPy_v2.eps){width="0.5\linewidth"} ### Determination of ${M_{\mathrm{s}}}$ The effective magnetization ${M_{\mathrm{eff}}}$ as a function of applied microwave frequency is extracted from the FMR spectral fits and the Kittel FMR condition for magnetization oriented out of the film plane [@kittel_introduction_2004]: $$H_{\mathrm{res}} = \frac{\omega}{\gamma \mu_0} + {M_{\mathrm{eff}}}$$ where $H_{\mathrm{res}}$ is the center field of the resonant absorption line, $\omega$ is the applied microwave frequency, and $\gamma = g \mu_{\mathrm{B}}/\hbar$ is the gyromagnetic ratio. 
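The dead-layer extraction above is a linear fit: $L_{\mathrm{FM}}$ scales with the magnetically active thickness $(d_{\mathrm{FM}} - d_{\mathrm{dead}})$, so the $L_{\mathrm{FM}} = 0$ intercept of $L_{\mathrm{FM}}$ vs. deposited thickness gives $d_{\mathrm{dead}}$. A minimal sketch of this procedure (function names and the geometry values in the test are illustrative, not taken from the paper):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def spacing_loss(z, w_wg):
    """Spacing loss eta(z, W_wg) = (2/pi) arctan(W_wg / 2z), between 0 and 1."""
    return (2.0 / np.pi) * np.arctan(w_wg / (2.0 * z))

def l_fm(d_fm, d_dead, length, w_wg, z):
    """Zero-frequency FM inductance; only the magnetically active
    thickness (d_fm - d_dead) contributes to the inductive signal."""
    return MU0 * length * (d_fm - d_dead) / (4.0 * w_wg) * spacing_loss(z, w_wg) ** 2

def dead_layer_from_fit(d_fm, L_meas):
    """Linear fit of L_FM vs. deposited FM thickness; the x-intercept
    (where L_FM = 0) is the dead-layer thickness."""
    slope, intercept = np.polyfit(d_fm, L_meas, 1)
    return -intercept / slope
```

Applied to the measured Py/Pt and Py/Cu series, the two intercepts can then be compared, as done above to argue that the dead layer sits primarily at the Ta/Py interface.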
Assuming the Py has no bulk anisotropy, ${M_{\mathrm{eff}}}$ is determined by the saturation magnetization ${M_{\mathrm{s}}}$ of the material, and the interfacial anisotropy energy $K_{\mathrm{int}}$ according to Ref. : $$\mu_0 {M_{\mathrm{eff}}}= \mu_0 {M_{\mathrm{s}}}- \frac{2 K_{\mathrm{int}}}{{M_{\mathrm{s}}}} \left( \frac{1}{d_{\mathrm{FM}} - d_{\mathrm{dead}}} \right)$$ Therefore, a linear fit of ${M_{\mathrm{eff}}}$ vs. inverse effective FM thickness (Fig. \[fig:Meff\_dPy\]) provides a measurement of the saturation magnetization ${M_{\mathrm{s}}}$. We find $\mu_0 M_{\mathrm{s}} = $ , comparable to previous findings [@schoen_magnetic_2017]. Similarly, for Py/Cu we find $\mu_0 M_{\mathrm{s}} = $ . ![${M_{\mathrm{eff}}}$ vs. inverse effective FM thickness $(d_{\mathrm{Py}} - d_{\mathrm{dead}})$ for Py($d_{\mathrm{Py}})$/Pt(6) and Py($d_{\mathrm{Py}})$/Cu(3.3). Dead layer thickness is determined from Fig. \[fig:LFM\_dPy\].[]{data-label="fig:Meff_dPy"}](Meff_dPy_v2.eps){width="0.5\linewidth"} ### Determination of intrinsic Gilbert damping constant The total Gilbert damping due to intrinsic and interfacial contributions can be described by: $$\alpha = \alpha_{\mathrm{int}} + {G^{\uparrow\downarrow}}_{\mathrm{eff}}\left(\frac{\gamma \hbar^2}{2 {M_{\mathrm{s}}}d_{\mathrm{FM}} e^2}\right) \label{eq:alphaTot}$$ where $\gamma = g \mu_{\mathrm{B}}/\hbar$ is the gyromagnetic ratio, $g$ is the spectroscopic g factor, $\mu_{\mathrm{B}}$ is the Bohr magneton, $\hbar$ is Planck’s constant divided by $2 \pi$, and $e$ is the electron charge. $M_{\mathrm{s}}$ and $d_{\mathrm{FM}}$ for the Py layer are determined as described above. For the thin FM layers studied here, we can ignore the contribution from radiative damping [@schoen_radiative_2015]. When plotted vs. $1/(d_{\mathrm{Py}} - d_{\mathrm{dead}})$, we can extract $\alpha_{\mathrm{int}}$ as the infinite-thickness limit of the measured damping. We calculate the intercept of the data in Fig. 
\[fig:alpha\_dPy\] using linear regression in order to fix $\alpha_{\mathrm{int}} = 0.0041 \pm 0.0001$, in good agreement with a previous systematic study of damping in magnetic alloys [@schoen_magnetic_2017]. ![Circuit model for angular momentum flow sourced by FMR excitation in Ta/Py/Pt trilayer. Spin current is drawn into parallel resistance channels provided by spin pumping into the Ta seed and Pt spin sink layers, as well as spin memory loss.[]{data-label="fig:SpinPumping_CurrentDivider"}](SpinPumping_CurrentDivider_v4.eps){width="\linewidth"} For the interfacial contribution to the damping (second term in Eq. ), the full model we use for the effective spin-mixing conductance ${G^{\uparrow\downarrow}}_{\mathrm{eff}}$ includes contributions from spin pumping into Pt via the spin-mixing conductance ${G^{\uparrow\downarrow}}_{\mathrm{Py/Pt}}$, spin pumping into the Ta seed layer via the spin-mixing conductance ${G^{\uparrow\downarrow}}_{\mathrm{Py/Ta}}$, and spin memory loss (SML). (In all instances where we invoke the spin-mixing conductance, it is to be understood that we are only considering the real part of said quantity). $${G^{\uparrow\downarrow}}_{\mathrm{eff}} = \frac{{G^{\uparrow\downarrow}}_{\mathrm{Py/Pt}}}{1 + \dfrac{2 \lambda_{\mathrm{s,Pt}} {G^{\uparrow\downarrow}}_{\mathrm{Py/Pt}}}{\sigma_{\mathrm{Pt}}(d_{\mathrm{Pt}}) \tanh\left(\dfrac{d_{\mathrm{Pt}}}{\lambda_{\mathrm{s,Pt}}}\right)}} + \frac{{G^{\uparrow\downarrow}}_{\mathrm{Py/Ta}}}{1 + \dfrac{2 \lambda_{\mathrm{s,Ta}} {G^{\uparrow\downarrow}}_{\mathrm{Py/Ta}}}{\sigma_{\mathrm{Ta}}(d_{\mathrm{Ta}}) \tanh\left(\dfrac{d_{\mathrm{Ta}}}{\lambda_{\mathrm{s,Ta}}}\right)}} + {G^{\uparrow\downarrow}}_{\mathrm{Py/Pt}} \Delta_{\mathrm{SML}} \label{eq:Geff}$$ This model is depicted as a network of series and parallel conductance channels for the flow of angular momentum, treating FMR as an angular momentum potential source, as depicted in Fig. \[fig:SpinPumping\_CurrentDivider\] (also see Ref. ). 
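The parallel-channel structure of the effective spin-mixing conductance above can be sketched directly. This is a schematic implementation assuming real-valued conductances and SI-consistent units; the numeric values in the test are placeholders, not fitted results:

```python
import numpy as np

def spin_pump_channel(g_mix, lambda_s, sigma_nm, d_nm):
    """One spin-pumping term of the model: the interfacial spin-mixing
    conductance in series with the thickness-dependent NM spin
    resistance (the tanh factor from the spin diffusion equation)."""
    return g_mix / (1.0 + 2.0 * lambda_s * g_mix
                    / (sigma_nm * np.tanh(d_nm / lambda_s)))

def g_eff(g_pt, g_ta, delta_sml, lam_pt, lam_ta,
          sigma_pt, sigma_ta, d_pt, d_ta):
    """Effective spin-mixing conductance: spin pumping into Pt, spin
    pumping into the Ta seed, and spin memory loss, in parallel."""
    return (spin_pump_channel(g_pt, lam_pt, sigma_pt, d_pt)
            + spin_pump_channel(g_ta, lam_ta, sigma_ta, d_ta)
            + g_pt * delta_sml)
```

In the thick-film limit ($d \gg \lambda_{\mathrm{s}}$) each pumping channel saturates at $G^{\uparrow\downarrow}/(1 + 2\lambda_{\mathrm{s}}G^{\uparrow\downarrow}/\sigma)$, while the SML term is thickness independent, which is why damping data alone cannot separate the two contributions.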
The first two terms of Eq. represent spin pumping into the Pt and Ta layers, respectively. Within those layers, spin is pumped through series resistances set by the interfacial spin-mixing conductance and the thickness-dependent spin resistance (which accounts for the exponential spin accumulation profile in the NM layer, as a solution to the spin diffusion equation, subject to the boundary condition that no spin current can flow through the distant interface). The final term represents a spin memory loss channel, where the phenomenological parameter $\Delta_{\mathrm{SML}}$ can be arbitrarily large. By multiplying Eq. by the bracketed term in Eq. , conductances are converted to the unitless damping parameters $\alpha_{\mathrm{sp,Pt(Ta)}}$ (due to spin pumping into Pt (or Ta)) and $\alpha_{\mathrm{SML}}$ (due to spin memory loss). Taken together, Eqs. and describe both the NM and FM thickness dependencies of the damping. As a part of our self-consistent fitting routine (described in Section \[sec:selfConsistent\]), and using the previously determined value for $\alpha_{\mathrm{int}}$, we fit the Py thickness dependence of $\alpha$ simultaneously with the Pt thickness dependence (Fig. \[fig:sigmaSOTandAlpha\](b)), with ${G^{\uparrow\downarrow}}$ and $\Delta_{\mathrm{SML}}$ as fit parameters. The result of that simultaneous fit is shown in Fig. \[fig:alpha\_dPy\]. Also shown for comparison are damping data for the Py/Cu controls, which exhibit a drastically reduced spin pumping contribution (slope) and slightly increased intrinsic contribution ($\alpha_{\mathrm{int}} = 0.0054 \pm 0.0001$). ![Total Gilbert damping vs. inverse effective FM thickness $(d_{\mathrm{Py}} - d_{\mathrm{dead}})$ for Py($d_{\mathrm{Py}}$)/Pt(6) (circles) and Py($d_{\mathrm{Py}}$)/Cu(3.3) (squares). Dead layer thickness is determined from Fig. 
\[fig:LFM\_dPy\].[]{data-label="fig:alpha_dPy"}](alpha_dPy_v2.eps){width="0.5\linewidth"} Pt thickness series {#sec:PtThickness} ------------------- For samples where the Pt thickness is varied, the measured values for ${\sigma_{\mathrm{FL}}}^{\mathrm{SOT}}$ and ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$—extracted from our quantitative VNA-FMR complex amplitude analysis [@berger_inductive_2018]—are shown as a function of NM thickness in Fig. \[fig:sigmaFL\_DL\]. Two corrections must be made to these values in order to extract the iSOT due to Pt. First, we subtract the values for ${\sigma_{\mathrm{FL}}}$ and ${\sigma_{\mathrm{DL}}}$ obtained from the Cu control samples (blue squares) from those of the Pt samples (red circles). Since we used Cu thicknesses to match the sheet resistance of the Pt samples, this removes the Faraday contribution. This subtraction also removes any FL or DL iSOT due to the Ta seed and capping layers. While we do not completely understand the Cu thickness dependence of ${\sigma_{\mathrm{DL}}}$, the DL signal is essentially eliminated for Py in isolation, without seed or capping layers, which suggests details of the iSOT from cap and/or seed layers are responsible for the peculiar behavior (see discussion and measurements in Section \[sec:ControlSamples\]). In Fig. \[fig:ShuntCorr\](a), ${\sigma_{\mathrm{FL}}}$ and ${\sigma_{\mathrm{DL}}}$ after Cu reference subtraction are plotted. ![Measured quantities for FL and DL conductivities, for both Pt and Cu control samples, extracted from complex inductance analysis of VNA-FMR data [@berger_inductive_2018]. (a) ${\sigma_{\mathrm{FL}}}$ as a function of either Pt (top axis) or Cu thickness (bottom axis). Linear dependence on NM thickness at large thicknesses indicates dominance of ${\sigma_{\mathrm{FL}}^{\mathrm{F}}}$ term. (b) Same as (a), but for ${\sigma_{\mathrm{DL}}}$. 
[]{data-label="fig:sigmaFL_DL"}](sigmaFL_DL_v3.eps){width="\linewidth"} ![(a) FL and DL iSOT conductivities, after subtraction of Cu control samples. (b) Measured sheet resistance of metallic layers, as a function of Pt thickness. Inset: the sample sheet resistance acts as a parallel shunting path to the signal generating component of $I_{\mathrm{SOT}}$, which flows through the characteristic impedance $Z_0$ and $R_{\square}$.[]{data-label="fig:ShuntCorr"}](ShuntCorr_v3.eps){width="\linewidth"} Second, we correct for shunting effects of the iSOT currents. The data of Fig. \[fig:ShuntCorr\](a) attenuate as the Pt thickness is increased. This is attributed to the decreasing sheet resistance of the metallic stack, which effectively shunts the AC iSOT currents, therefore producing a weaker inductive response. This is functionally similar to the current divider effect observed in DC voltage iSHE spin pumping experiments [@nakayama_geometry_2012; @feng_spin_2012; @jiao_spin_2013; @wang_scaling_2014]. However, in our AC iSOT experiments with the sample placed on a CPW with characteristic impedance of , the sample sheet resistance acts as a shunt path in parallel with the CPW characteristic impedance (inset of Fig. \[fig:ShuntCorr\](b)). We therefore multiply the ${\sigma_{\mathrm{FL}}}$ and ${\sigma_{\mathrm{DL}}}$ results of Fig. \[fig:ShuntCorr\](a) by the shunt factor $(1 + Z_0/R_{\square})$, where $R_{\square}$ is the measured sheet resistance of the multilayer stack (Fig. \[fig:ShuntCorr\](b)). After application of the shunting correction, the final results for ${\sigma_{\mathrm{FL}}}^{\mathrm{SOT}}$ and ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$ are presented in Fig. \[fig:sigmaSOTandAlpha\](a). These results are shown adjacent to the dependence of the measured Gilbert damping parameter $\alpha$ on Pt thickness in Fig. \[fig:sigmaSOTandAlpha\](b) to compare their evolution with $d_{\mathrm{Pt}}$. The DL conductivity increases monotonically with Pt thickness. 
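The shunting correction itself is a single multiplicative factor. A one-function sketch (the CPW characteristic impedance $Z_0$ must be supplied by the caller; its numeric value does not survive in this text, so no default is assumed):

```python
def shunt_correct(sigma_meas, r_sheet, z0):
    """Undo the current-divider attenuation: the sample sheet resistance
    R_sq shunts the CPW characteristic impedance Z0, so the measured
    iSOT conductivity is scaled up by the factor (1 + Z0 / R_sq)."""
    return sigma_meas * (1.0 + z0 / r_sheet)
```

For example, a sheet resistance equal to twice the line impedance yields a correction factor of 1.5, while a sheet resistance equal to the line impedance doubles the measured value.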
Meanwhile, the FL conductivity remains more or less constant, consistent with the presumption of an interfacial source of spin-charge conversion such as iREE, where additional Pt beyond does not increase the charge signal further. From Fig. \[fig:sigmaSOTandAlpha\](b), it is clear that if the enhanced damping (second term in Eq. ) were ascribed entirely to spin pumping into the Pt, the length scale necessary to capture the rapid increase in $\alpha$ above the intrinsic value must be much shorter than the length scale over which ${\sigma_{\mathrm{DL}}}$ is seen to increase in Fig. \[fig:sigmaSOTandAlpha\](a). In other words, using only the data for Pt-thickness dependence of damping (Fig. \[fig:sigmaSOTandAlpha\](b)), it is impossible to separate the different contributions to ${G^{\uparrow\downarrow}}_{\mathrm{eff}}$. Several other groups have observed this apparent discrepancy when comparing damping with DC voltages measured by iSHE [@nakayama_geometry_2012; @feng_spin_2012; @rojas-sanchez_spin_2014]. In this work, we are able to resolve the discrepancy through a self-consistent fit of both the damping data and ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$ versus Pt thickness. ![(a) Final values for ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$ and ${\sigma_{\mathrm{FL}}}^{\mathrm{SOT}}$ for Py(3.5)/Pt(${d_{\mathrm{Pt}}}$). The FL torque remains constant over the range of studied thicknesses, whereas the DL torque increases with a characteristic length scale. (b) Gilbert damping for the same sample series (error bars are smaller than symbols). Color coding indicates different contributions to the Gilbert damping. Both the SOT conductivity and damping are fit to four different models, where spin relaxation is either EY or DP, and the spin Hall effect arises from intrinsic (int) or extrinsic (ext) processes. 
In both cases EY + ext (black solid line) provides the best fit, as determined by a $\chi^2$ analysis.[]{data-label="fig:sigmaSOTandAlpha"}](sigmaSOTandAlpha_v5.eps){width="\linewidth"} Although we measure only a 3% enhancement of damping as ${d_{\mathrm{Pt}}}$ increases from to , given the high signal-to-noise ratio of the damping data, it can be fit with Eq. and by use of the same spin diffusion length that describes the behavior of ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$, as discussed in detail later. Because of the better dynamic range of the ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$ data, we use it as the basis for establishing ${\lambda_{\mathrm{s}}}$ by fitting with a model provided by Haney et al. [@haney_current_2013]: $${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}} = {\sigma_{\mathrm{SH}}}\left\{\frac{(1 - e^{-d_{\mathrm{NM}}/\lambda_{\mathrm{s}}})^2}{(1 + e^{-2 d_{\mathrm{NM}}/\lambda_{\mathrm{s}}})} \frac{|\tilde{G}^{\uparrow\downarrow}|^2 + \mbox{Re}(\tilde{G}^{\uparrow\downarrow})\tanh^2\left(\dfrac{d_{\mathrm{NM}}}{\lambda_{\mathrm{s}}}\right)} {|\tilde{G}^{\uparrow\downarrow}|^2 + 2 \mbox{Re}(\tilde{G}^{\uparrow\downarrow})\tanh^2\left(\dfrac{d_{\mathrm{NM}}}{\lambda_{\mathrm{s}}}\right) + \tanh^4\left(\dfrac{d_{\mathrm{NM}}}{\lambda_{\mathrm{s}}}\right)}\right\} \epsilon \label{eq:StilesHaney}$$ where $\tilde{G}^{\uparrow\downarrow} = {G^{\uparrow\downarrow}}2 \lambda_{\mathrm{s}} \tanh(d_{\mathrm{NM}}/\lambda_{\mathrm{s}})/\sigma$, $\sigma$ represents the NM charge conductivity, and $\epsilon \equiv \alpha_{\mathrm{sp,Pt}}/(\alpha_{\mathrm{sp,Pt}} + \alpha_{\mathrm{SML}})$ represents the fraction of spin current pumped out of the FM that is available for spin-charge conversion in the Pt layer, as determined by the spin current divider model applied to the first and last terms of Eq. . 
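The Haney et al. expression above is straightforward to evaluate numerically. A sketch, restricted to a real-valued $\tilde{G}^{\uparrow\downarrow}$ (so that $|\tilde{G}^{\uparrow\downarrow}|^2 = (\tilde{G}^{\uparrow\downarrow})^2$); the test values are illustrative, not the paper's fitted parameters:

```python
import numpy as np

def sigma_dl_sot(d_nm, sigma_sh, g_mix, lambda_s, sigma_nm, epsilon):
    """Damping-like iSOT conductivity vs. NM thickness in the Haney et al.
    drift-diffusion form, for a real-valued spin-mixing conductance.
    epsilon is the fraction of pumped spin current surviving SML."""
    t = np.tanh(d_nm / lambda_s)
    g = g_mix * 2.0 * lambda_s * t / sigma_nm  # dimensionless G-tilde
    geom = (1.0 - np.exp(-d_nm / lambda_s)) ** 2 / (1.0 + np.exp(-2.0 * d_nm / lambda_s))
    ratio = (g ** 2 + g * t ** 2) / (g ** 2 + 2.0 * g * t ** 2 + t ** 4)
    return sigma_sh * geom * ratio * epsilon
```

In the thick-film limit ($\tanh \rightarrow 1$, geometric factor $\rightarrow 1$) this saturates at $\sigma_{\mathrm{SH}}\,\epsilon\,\tilde{G}^{\uparrow\downarrow}/(\tilde{G}^{\uparrow\downarrow} + 1)$, consistent with the monotonic rise of ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$ with Pt thickness.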
Self-consistent fit routine of damping and SOT {#sec:selfConsistent} ---------------------------------------------- To perform the self-consistent fits of ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$ and $\alpha$, an initial fit of ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$ is performed to extract the Pt spin diffusion length $\lambda_{\mathrm{s,Pt}}$. This is then used as a fixed parameter in Eq. and when fitting $\alpha$. With this constraint on $\lambda_{\mathrm{s,Pt}}$, the Pt and Py thickness series (Figs. \[fig:sigmaSOTandAlpha\](b) and \[fig:alpha\_dPy\], respectively) are fitted simultaneously with Eq. and to determine ${G^{\uparrow\downarrow}}_{\mathrm{Py/Pt}}$ and $\Delta_{\mathrm{SML}}$. These are then put back into Eq. to re-fit ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$ and extract refined values for ${\sigma_{\mathrm{SH}}}$ and $\lambda_{\mathrm{s,Pt}}$. This process is iterated until the change in fit parameters is less than 0.01%. Our self-consistent data analysis is tantamount to enforcing Onsager reciprocity on the spin-to-charge interconversion processes of spin pumping and spin torque [@brataas_spin_2012]. If the enhanced damping of Fig. \[fig:sigmaSOTandAlpha\](b) were ascribed purely to spin pumping, it would imply that the Pt already draws a maximum amount of spin current from the precessing FM at thicknesses of only $\approx$ . By contrast, a damping-like torque conductivity that continues to increase for thicknesses up to (Fig. \[fig:sigmaSOTandAlpha\](a)) suggests that the Pt layer can continue to generate (or draw) increasingly larger spin current for thicknesses well beyond . The use of unequal length scales to describe diffusive spin current flow due to spin pumping and spin-orbit torque generation would violate the reciprocity of spin-to-charge interconversion. Equations and can be used either with an Elliott-Yafet (EY) [@elliott_theory_1954; @yafet_conduction_1983] or D’yakonov-Perel’ (DP) [@dyakonov_spin_1972] spin relaxation model. 
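The iteration described above is a fixed-point loop on the shared spin diffusion length, with the paper's 0.01% convergence criterion. A schematic sketch with the actual least-squares fits abstracted into caller-supplied callables (all names and signatures are illustrative):

```python
def self_consistent_fit(initial_sot_fit, damping_fit, sot_refit,
                        tol=1e-4, max_iter=100):
    """Iterate the damping and sigma_DL^SOT fits until the shared
    spin diffusion length changes by less than tol (0.01%).

    initial_sot_fit() -> (sigma_sh, lambda_s)            # first sigma_DL fit
    damping_fit(lambda_s) -> (G_mix, delta_sml)          # alpha fits, lambda fixed
    sot_refit(G_mix, delta_sml) -> (sigma_sh, lambda_s)  # refined sigma_DL fit
    """
    sigma_sh, lam = initial_sot_fit()
    for _ in range(max_iter):
        g_mix, delta_sml = damping_fit(lam)
        sigma_sh, lam_new = sot_refit(g_mix, delta_sml)
        if abs(lam_new - lam) / lam < tol:
            return {"sigma_sh": sigma_sh, "lambda_s": lam_new,
                    "G_mix": g_mix, "delta_sml": delta_sml}
        lam = lam_new
    raise RuntimeError("self-consistent loop did not converge")
```

The loop terminates as soon as successive estimates of $\lambda_{\mathrm{s,Pt}}$ agree to within the tolerance, returning the last refined parameter set.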
In the EY case, the spin diffusion length is a function of the charge conductivity: ${\lambda_{\mathrm{s}}}(\sigma(d_{\mathrm{NM}})) = (\sigma(d_{\mathrm{NM}})/\sigma_{\mathrm{bulk}}){\lambda_{\mathrm{s}}}^{\mathrm{max}}$. The thickness-dependent conductivity and bulk conductivity $\sigma_{\mathrm{bulk}}$ are both determined by four-probe resistance measurements (see Section \[sec:t\_depR\]). By contrast, for DP spin relaxation, ${\lambda_{\mathrm{s}}}$ is independent of charge conductivity [@boone_spin-scattering_2015]. Additionally, the spin Hall conductivity in Eq. can be attributed to intrinsic or extrinsic SOC. For intrinsic spin Hall, ${\sigma_{\mathrm{SH}}}$ is independent of charge conductivity, while for extrinsic SHE, ${\sigma_{\mathrm{SH}}}(d_{\mathrm{NM}}) = {\theta_{\mathrm{SH}}}\sigma(d_{\mathrm{NM}})$, where ${\theta_{\mathrm{SH}}}$ is fixed. Fits using the four combinations of these models are shown in Fig. \[fig:sigmaSOTandAlpha\], with results collected in Table \[tab:SpinParams\]. To distinguish between the quality of fit for the different models, we utilize a $\chi^2$ test. The $\chi^2$ values for each fit of the SOT and damping data are calculated as $\chi^2 \equiv \sum_i^n (y_i - f_i)^2/\sigma_i^2$, where $y_i$ is the measured value, $f_i$ is the calculated value based on the fit model, and $\sigma_i^2$ is the measured variance, for each of $n$ measurements. Results are shown in Table \[tab:chi-sq\]. Using the cumulative distribution function (CDF) of a $\chi^2$ distribution for each fit, with $\nu = n - p$ degrees of freedom, and $p$ fit parameters, we also calculate the joint probability with which we can reject the null hypothesis in which there is no relationship between our measurements and the given model. The CDF is determined by $$\mbox{CDF}(\chi^2) = \int\limits_0^{\chi^2} \frac{t^{\nu/2-1} e^{-t/2}}{\Gamma(\nu/2) 2^{\nu/2}} dt$$ where $\Gamma$ is the gamma function ($\Gamma(x) = (x-1)!$ for positive integer $x$). 
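The $\chi^2$ statistic and the CDF above can be sketched directly from their definitions, evaluating the integral numerically; a midpoint rule also handles the integrable singularity at $t = 0$ when $\nu < 2$. Function names are illustrative:

```python
import math

def chi_square(y, f, var):
    """chi^2 = sum_i (y_i - f_i)^2 / sigma_i^2 over n measurements."""
    return sum((yi - fi) ** 2 / vi for yi, fi, vi in zip(y, f, var))

def chi2_cdf(x, dof, steps=200_000):
    """CDF of the chi-square distribution with dof = nu degrees of
    freedom, via midpoint-rule integration of the pdf from 0 to x."""
    if x <= 0:
        return 0.0
    norm = math.gamma(dof / 2.0) * 2.0 ** (dof / 2.0)
    dt = x / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt  # midpoints avoid evaluating the pdf at t = 0
        total += t ** (dof / 2.0 - 1.0) * math.exp(-t / 2.0)
    return total * dt / norm

def joint_probability(chi2s, dofs):
    """Product of (1 - CDF(chi^2, nu)) over the fits, as in the text."""
    p = 1.0
    for c, v in zip(chi2s, dofs):
        p *= 1.0 - chi2_cdf(c, v)
    return p
```

For $\nu = 2$ the CDF reduces to $1 - e^{-x/2}$, which provides a quick sanity check of the numerical integration.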
The joint probability is calculated as the product of $(1 - \mbox{CDF}(\chi^2))$ for the two fits. The EY/extrinsic model provides the highest confidence that we can reject the null hypothesis. Because this analysis reveals EY spin relaxation with extrinsic SHE as the best fit to our data, we focus on the fitted parameters from that model combination in the discussion below. [l | c | c | c ]{} Fit model & $\chi^2$ (SOT) & $\chi^2$ (damping) & Joint Probability\ EY + ext & 0.668 & 3.696 & 0.89\ EY + int & 3.123 & 12.032 & 0.11\ DP + ext & 8.149 & 5.715 & 0.07\ DP + int & 8.819 & 6.101 & 0.05\ By choosing to enforce reciprocity, we find that the fraction of spin current absorbed by the Pt layer (which produces the damping-like AC charge currents) reaches a maximum of $(37 \pm 6)$% for the thickest Pt layers. This is comparable to previous findings of large SML at Co/Pt interfaces [@nguyen_spin-flipping_2014] and Pt/Cu interfaces [@kurt_spin-memory_2002]. The different contributions to the total measured damping are represented as shaded areas in Fig. \[fig:sigmaSOTandAlpha\](b), with a color code to match Fig. \[fig:SpinPumping\_CurrentDivider\]. Note that only the contribution from spin pumping into Pt is Pt-thickness dependent. The self-consistent fit also results in a spin diffusion length of $\lambda_{\mathrm{s,Pt}}^{\mathrm{max}} =$ , ${G^{\uparrow\downarrow}}= $, which is in good agreement with the maximum theoretical value for Pt of ${G^{\uparrow\downarrow}}= $ [@liu_interface_2014], given the estimated error, and ${\sigma_{\mathrm{SH}}}^{\mathrm{bulk}} = $ . This corresponds to a spin Hall angle of $0.387 \pm 0.008$. 
While this ${\theta_{\mathrm{SH}}}$ is among the largest reported for Pt [@zhang_role_2015; @pai_dependence_2015], it is a necessary logical conclusion that with less spin current driven into the NM (on account of SML), a larger spin-to-charge conversion efficiency is required to fit the data than would be otherwise obtained if the SML were negligible. We furthermore stress that the phenomenological value for ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$ (the asymptotic value in Fig. \[fig:sigmaSOTandAlpha\](a)) is comparable to that measured with other techniques ( for AlO$_x$(2)/Co(0.6)/Pt(3) [@garello_symmetry_2013], for Ta(2)/Pt(4)/Co$_{50}$Fe$_{50}$(0.5)/MgO(2)/Ta(1) [@pai_dependence_2015], and $\approx$ for Ta(1)/Pt($d_{\mathrm{Pt}}$)/Co(1)/MgO(2)/Ta(1) [@nguyen_spin_2016]). This indicates consistency of the SOC strength of the Pt layers in each of these experiments, and stresses the importance of characterizing spin loss mechanisms to optimize SOT for magnetic switching. Our finding that the data are best fit with an extrinsic SHE model is somewhat surprising, given that it conflicts with some previous experimental work [@nguyen_spin_2016] and theoretical expectations [@guo_intrinsic_2008]. Qualitatively, both intrinsic and extrinsic SHE models are seen to describe the data quite well, given that the fit parameters can adjust to compensate for differences in the models, as is seen by the various fits in Fig. \[fig:sigmaSOTandAlpha\](a). Nevertheless, the $\chi^2$ analysis makes a clear distinction. Finally, the value for ${\sigma_{\mathrm{SH}}}$ determined here is more than 5 times larger than the prediction by Guo et al. (using their result of $\sigma_{xy} = 2.2 \times 10^5 (\hbar/e)$ , and setting ${\sigma_{\mathrm{SH}}}= 2 \sigma_{xy}$ to account for the total spin current due to both up and down spins [@guo_intrinsic_2008]). 
This implies that the extrinsic effect dominates in our sputtered thin film systems where interfaces and crystal defects likely play a major role in determining the spin-orbit physics [@amin_spin_2016-1]. It is possible that some amount of intrinsic SHE is present in addition to the extrinsic effect, as discussed by Sagasta, et al. [@sagasta_tuning_2016]. In that work, the authors show that the total effective spin Hall conductivity ${\sigma_{\mathrm{SH}}}^{\mathrm{eff}}$ can be described by: $${\sigma_{\mathrm{SH}}}^{\mathrm{eff}} = {\sigma_{\mathrm{SH}}}^{\mathrm{int}} + {\theta_{\mathrm{SH}}}\sigma_{\mathrm{Pt}} \label{eq:sigmaSH_combined}$$ where ${\sigma_{\mathrm{SH}}}^{\mathrm{int}}$ is the intrinsic spin Hall conductivity, and the second term describes the extrinsic effect as we have modeled it here. The Pt conductivities studied here (from $\approx$ to ) fall within the transition from intrinsic- to extrinsic-regimes, as described in Ref. . Therefore, depending on the details of the spin and momentum scattering that govern ${\theta_{\mathrm{SH}}}$, the extrinsic term in Eq. can easily be the dominant effect. Furthermore, we see no evidence of a large interfacial source of spin Hall conductivity, as in Ref. , which would manifest as a non-zero intercept of ${\sigma_{\mathrm{DL}}}^{\mathrm{SOT}}$ in the limit of $d_{\mathrm{Pt}} \rightarrow 0$. 
[l | c | c | c | c | c ]{} Fit model & ${G^{\uparrow\downarrow}}\left(\times 10^{14}\right.$ ) & $\epsilon = \dfrac{\alpha_{\mathrm{sp,Pt}}}{(\alpha_{\mathrm{sp,Pt}} + \alpha_{\mathrm{SML}})}$ & ${\lambda_{\mathrm{s}}}$ (nm) & ${\sigma_{\mathrm{SH}}}\left(\times 10^{6}\right.$ )& ${\theta_{\mathrm{SH}}}= \dfrac{{\sigma_{\mathrm{SH}}}}{\sigma_{\mathrm{Pt}}}$\ EY + ext & & & & &\ EY + int & & & & &\ DP + ext & & & & &\ DP + int & & & & &\ Isolating the normal-metal layer contribution to sample inductance {#sec:ControlSamples} ------------------------------------------------------------------ To better understand the influence of the normal metal layers (Ta seed, Pt or Cu spin sink, and Ta cap) on the perturbative inductance—and hence, the extracted FL and DL conductivities—that the sample contributes to a VNA-FMR measurement, we measured several control samples. First, we inserted an AlO$_x$ layer between the Py and the Pt in order to block spin pumping into the Pt [@zink_efficient_2016]. To do so, of Al was sputter deposited onto the Py and subsequently oxidized for 10 minutes under of O$_2$. The AlO$_x$ layer deposition and oxidation steps were repeated 1, 2, or 3 times, to ensure complete blocking of spin pumping. As can be seen in Fig. \[fig:sigma\_controls\](b), the AlO$_x$ layers effectively reduce the damping by blocking spin pumping. This reduction correlates strongly with a reduction in ${\sigma_{\mathrm{DL}}}$, confirming its signature as the damping-like conductivity. By contrast, ${\sigma_{\mathrm{FL}}}$ actually changes sign with the introduction of the AlO$_x$ layers (Fig. \[fig:sigma\_controls\](a)). The contribution to ${\sigma_{\mathrm{FL}}}$ by Faraday-type pickup in the Pt cannot be eliminated by the AlO$_x$ barrier, since the Pt can still inductively couple to the precessing magnetization in the Py. 
The Faraday contribution clearly adds a negative contribution to ${\sigma_{\mathrm{FL}}}$, as ${\sigma_{\mathrm{FL}}}$ becomes increasingly negative with thicker Pt and Cu layers, as in Fig. \[fig:sigmaFL\_DL\](a). Therefore, the AlO$_x$ barrier might eliminate the ${\sigma_{\mathrm{FL}}}^{\mathrm{SOT}}$ contribution at the top Py interface. Nevertheless, even for Py deposited directly on SiO$_2$ (open square symbol), there remains a negative total ${\sigma_{\mathrm{FL}}}$, perhaps due to the interface asymmetry that remains between the top and bottom Py interfaces. The control samples also elucidate the impact of the Ta layers on our measurements. We note that Eq. does not explicitly include SML at the Ta interface. Using the data from Fig. \[fig:sigma\_controls\](b), we find that this simplification is justified. For these samples, we measured a total damping of $\alpha_{\mathrm{tot}} = 0.0104 \pm 0.0002$. If we set ${G^{\uparrow\downarrow}}_{\mathrm{Py/Ta}} = $ (the Sharvin value for Ta [@liu_interface_2014]), and use our measured conductivity of $\sigma_{\mathrm{Ta}} =$ , we obtain $\alpha_{\mathrm{sp,Ta}} = 0.004$ (the amount depicted in Fig. \[fig:SpinPumping\_CurrentDivider\]). Therefore, when damping pathways into the Pt are blocked, the intrinsic damping plus spin pumping into the Ta accounts for all but 0.0023 of the total damping. Assigning this small amount of excess damping to SML at the Ta interface would reduce the contribution of SML at the Pt interface by less than 20% and the values for spin Hall conductivity and spin Hall angle in Pt by only 10%. Finally, we fabricated samples without any seed or capping layers. For Py(5) deposited directly onto SiO$_2$, ${\sigma_{\mathrm{DL}}}$ is only 5% of its value for Py(3.5)/Pt(6) (see open circle data point in Fig. \[fig:sigma\_controls\](b)). 
The residual damping (beyond the intrinsic value) and damping-like conductivity for this sample could stem from the oxidized top surface or interfacial asymmetries, as well as less-than-optimal Py crystal structure, since no Ta seed layer was used. In the cases of both ${\sigma_{\mathrm{FL}}}$ and ${\sigma_{\mathrm{DL}}}$, some residual signal remains even when spin pumping into the Pt is effectively blocked, or the seed and capping layers are eliminated entirely. Therefore, it is not surprising that even for our control samples in which Pt is replaced with Cu (with its weak spin-orbit interaction), some weak sources of spin-to-charge conversion (interfacial or otherwise) persist.

![(a) FL and (b) DL conductivities for samples with AlO$_x$ $[\times n]$ (where $n$ = 1, 2, or 3) blocking layers inserted between Py(3.5) and Pt(6). Also shown is a sample in which Py(5) is deposited directly onto SiO$_2$ (open symbols). Note that Py(3.5)/Pt(6) (direct contact) was re-grown and re-measured as a part of the AlO$_x$ series (duplicate data points for zero AlO$_x$ repeats). The lower data point for both ${\sigma_{\mathrm{FL}}}$ and ${\sigma_{\mathrm{DL}}}$ at zero AlO$_x$ layers is that from the main text.[]{data-label="fig:sigma_controls"}](sigma_controls_v3.eps){width="\linewidth"}

Conclusion
==========

To summarize, by use of simultaneously acquired damping and iSOT data, we are able to properly assign the portions of damping enhancement incurred by a FM/NM bilayer due to the parallel channels of SML and spin pumping into the NM. These results suggest that Pt is indeed a promising material for spintronic applications. Our data also validate previous suggestions that interface engineering will be crucial for the optimization of SOT in multilayer systems [@nguyen_spin-flipping_2014; @rojas-sanchez_spin_2014; @pai_dependence_2015; @zhang_role_2015]. 
Appendix
========

Pt thickness-dependent resistivity {#sec:t_depR}
----------------------------------

To extract the Pt contribution to the total measured stack resistance, we have developed a model for the metallic multilayer stack to account for different conductivities in the bulk and at the metal interfaces. In this model, the interfacial conductivity $\sigma_{\mathrm{int}}$ at the Py/Pt interfaces decays exponentially to the Pt bulk value, $1/\rho_0$, with increasing distance from the interface. The position-dependent conductivity through the Pt thickness can therefore be approximated as the sum of bulk and interfacial contributions: $$\sigma(z) = \frac{1}{\rho_0}\left[1 - \exp\left(\dfrac{-z}{\sigma_{\mathrm{int}} \rho_0 \lambda}\right)\right] + \sigma_{\mathrm{int}}\exp\left(\dfrac{-z}{\sigma_{\mathrm{int}} \rho_0 \lambda}\right)$$ where $\rho_0$ is the bulk resistivity, $\sigma_{\mathrm{int}}$ is the interfacial conductivity, and $\lambda$ is the bulk mean free path. The length scale $\sigma_{\mathrm{int}} \rho_0 \lambda$ describes the effective thickness over which the conductivity is determined by $\sigma_{\mathrm{int}}$. When $\sigma(z)$ is integrated over the Pt thickness from $z=0$ to $z=d_{\mathrm{Pt}}$, and the result is combined in parallel with the fixed-thickness layers, we obtain the thickness-dependent resistivity: $$\rho(d_{\mathrm{Pt}}) = \frac{\rho_0}{\left[1 + \left(\dfrac{\sigma_{\mathrm{int}} \rho_0 \lambda}{d_{\mathrm{Pt}}}\right)(\rho_0 \sigma_{\mathrm{int}} - 1) \left[1 - \exp\left(\dfrac{-d_{\mathrm{Pt}}}{\sigma_{\mathrm{int}} \rho_0 \lambda}\right)\right] + \left(\dfrac{1}{R_{\mathrm{other}}}\right) \left(\dfrac{\rho_0}{d_{\mathrm{Pt}}}\right) \right]}$$ where $R_{\mathrm{other}}$ represents the sheet resistances of any fixed-thickness metallic layers (here, Py and Ta). We use a calculated mean free path for our samples () by scaling a literature value [@fischer_mean_1980] () by the ratio of our measured bulk resistivity to the literature value for bulk resistivity. From the fit in Fig. 
\[fig:RPt\_dPt\], we obtain $\sigma_{\mathrm{int}} = $ , $\rho_0 = $ , and $R_{\mathrm{other}} = $ . These values are used to obtain the thickness-dependent conductivity of the Pt layer, required in Eqs. and .

![Thickness-dependent resistivity, measured for substrate/Ta(1.5)/Py(3.5)/Pt($d_{\mathrm{Pt}}$)/Ta(3) as a function of Pt thickness.[]{data-label="fig:RPt_dPt"}](RPt_dPt_v3.eps){width="0.5\linewidth"}

[^1]: Contribution of the National Institute of Standards and Technology; not subject to copyright.
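As a numerical sanity check, the closed-form expression for $\rho(d_{\mathrm{Pt}})$ above can be evaluated directly. The sketch below uses illustrative parameter values only (the fitted values for our samples are those quoted above, with their units elided here); it confirms the expected limiting behavior: $\rho \to \rho_0$ for thick Pt, and strong shunting by the fixed-thickness layers ($R_{\mathrm{other}}$) for thin Pt.

```python
import math

def rho_pt(d, rho0, sigma_int, lam, r_other):
    """Thickness-dependent Pt resistivity from the interfacial-conductivity model.

    d         : Pt thickness (m)
    rho0      : bulk resistivity (ohm*m)
    sigma_int : interfacial conductivity (S/m)
    lam       : bulk mean free path (m)
    r_other   : sheet resistance of the fixed-thickness layers (ohm/sq)
    """
    t_int = sigma_int * rho0 * lam  # effective interfacial thickness
    bracket = (1.0
               + (t_int / d) * (rho0 * sigma_int - 1.0)
                 * (1.0 - math.exp(-d / t_int))
               + (1.0 / r_other) * (rho0 / d))
    return rho0 / bracket

# Hypothetical parameters, chosen only to illustrate the limiting behavior.
rho0, sigma_int, lam, r_other = 2.0e-7, 1.0e7, 1.0e-8, 100.0

thin = rho_pt(1e-9, rho0, sigma_int, lam, r_other)
thick = rho_pt(1e-6, rho0, sigma_int, lam, r_other)
print(thin, thick)  # thin-film resistivity is suppressed by the parallel shunt
```

For thick Pt the bracket tends to unity, so the measured resistivity recovers the bulk value; for thin Pt the $R_{\mathrm{other}}$ term dominates and the extracted resistivity is controlled by the Py/Ta shunt, which is why the parallel-conductor correction matters when isolating the Pt conductivity.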
--- abstract: | Motivated by the recent neutron diffraction experiment on $YVO_3$, we consider a microscopic model where each $V^{3+}$ ion is occupied by two $3d$ electrons of parallel spins with twofold degenerate orbital configurations. The mean field classical solutions of the spin-orbital superexchange model predict an antiferro-orbital ordering at a higher temperature followed by a C-type antiferromagnetic spin ordering at a lower temperature. Our results are qualitatively consistent with the observed orbital phase transition at $\sim 200$K and the spin phase transition at $\sim 114$K in $YVO_3$. author: - 'Theja N. De Silva$^{a,b}$, Anuvrat Joshi$^{c}$, Michael Ma$^{a,d}$, and Fu Chun Zhang$^{a,e}$' title: 'Theory for spin and orbital orderings in high temperature phases in $YVO_3$' ---

I. Introduction
===============

The transition metal perovskite oxides exhibit many interesting physical phenomena. In some of these compounds, the orbital degrees of freedom play an important role in their magnetic properties due to the strong spin-orbital coupling [@mott; @tokura; @imada]. Examples include the Mott-Hubbard type insulators $YVO_{3}$ and $LaVO_{3}$, which show very unusual magnetic properties. Although the early experiments on $YVO_{3}$ and $LaVO_{3}$ were reported back in the mid 1970’s [@boruk; @zubkov1; @zubkov2], there has been renewed interest in these materials in the past decade [@kawano; @nguyen; @mahajan; @kikuchi; @cin; @corti; @ren; @miyasaka; @blake]. There are two magnetic phases in $YVO_{3}$: C-type antiferromagnetic order (ferromagnetic chains along the z-axis which stagger within the x-y plane) at temperatures $114$K$>T>77$K, and G-type antiferromagnetic order (staggered in all three directions) at temperatures $T<77$K [@boruk; @zubkov1; @zubkov2; @kawano]. The magnetic order in $LaVO_{3}$ is always C-type. 
The microscopic mechanism leading to the difference between these two compounds is still under investigation, and it might be related to the fact that at room temperature the cubic crystal structure is significantly distorted in $YVO_{3}$ but almost undistorted in $LaVO_{3}$. It is generally believed that the relevant orbital degrees of freedom, the degenerate or almost degenerate $3d$-$t_{2g}$ states, are crucial to the observed magnetic properties. There have also been interesting theoretical studies related to these magnetic behaviors [@mizokawa1; @mizokawa2; @sawada; @giniyat; @shen; @castellani; @khomskii; @kugel; @euro]. In particular, Khaliullin $\mathit{et~al.}$ [@giniyat] considered a spin-orbital Hamiltonian starting with 3-fold degenerate $t_{2g}$ orbitals, and compared the free energies between the C-type and G-type spin states in $YVO_3$ by including an explicit Jahn-Teller energy in the model. Very recently, Blake $\mathit{et~al.}$ [@blake] reported a neutron diffraction experiment on $YVO_3$, which shows clear evidence that the orbital ordering has a sudden change from high temperature G-type to low temperature C-type at the $77$K magnetic phase transition, manifested by a change in the Jahn-Teller type of distortion. The data also show clear evidence for the orbital transition from the high temperature disordered phase to the G-type ordered phase at $\sim 200$K. This has motivated us to study the spin-orbital ordering in $YVO_3$. In this paper, we consider a microscopic model for insulating $YVO_{3}$, where each V-ion has two electrons with parallel spins favored by the large Coulomb repulsion and the Hund’s coupling. The distortion in the cubic crystal structure already present at room temperature splits the degeneracy of the three $t_{2g}$ orbitals so that the $d_{xy}$ orbital is favored by the crystal field. In our model, we take it as always singly occupied, while the other electron occupies either the $d_{xz}$ or the $d_{yz}$ state. 
This description is consistent with the neutron diffraction experiment [@blake]. We consider the superexchange interaction of the model and derive an effective Hamiltonian for $YVO_{3}$. We then study the mean field classical solutions of the model, and examine the spin and orbital orderings. We find a G-type orbital ordering at a higher temperature followed by an additional C-type spin ordering at a lower temperature. Our result is consistent with the observed orbital phase transition at $\sim 200$K and the spin phase transition at $114$K in $YVO_{3}$. In this scenario, the orbital ordering at $\sim 200$K is of electronic origin, and the lattice distortion at $\sim 200$K observed in the experiment is a consequence of the orbital ordering and the electron-lattice coupling. The superexchange interaction alone considered in our model does not explain the phase transition at $77$K, which may require other interactions such as the Jahn-Teller effect as proposed in previous articles [@mizokawa2; @giniyat]. This paper is organized as follows. In Section II, we examine a multi-band Hubbard model at a density of two electrons per site, and consider the limit of large Coulomb repulsion and large Hund’s coupling. We then derive an effective Hamiltonian based on the superexchange mechanism. In Section III, we discuss the mean field classical solutions of the model, and examine the phase diagram for the orbital and spin orderings. A brief summary is given in Section IV.

II. Model
==========

In $YVO_{3}$, the vanadium electron configuration is $3d^{2}$. The compound has a cubic crystal structure, and each $V$ ion is surrounded by six oxygen ions. Due to the cubic crystal field, the five-fold degenerate $3d$ orbitals are split into a higher energy doublet of $e_{g}$ orbitals and a lower energy triplet of $t_{2g}$ orbitals. At low temperatures and for low energy physics, the relevant orbitals are the three-fold $t_{2g}$ orbitals: $d_{xy}$, $d_{yz}$, $d_{zx}$. 
In the strong coupling limit, where the on-site Coulomb repulsion between the two electrons in the $3d$ states and the Hund’s coupling are much larger than the intersite electron hopping amplitudes, the system is a Mott insulator with each V-ion having two localized electrons of parallel spins in two out of three degenerate $t_{2g}$ orbitals. This scenario appears to be consistent with experiments. As indicated in the recent diffraction experiment [@blake], the cubic crystal is distorted at room temperature. As a result, the V-O bond distances are anisotropic. Here we consider the structure at room temperature, where the V-O bond distance along the $c$-axis (perpendicular axis) is the smallest (see Fig. 1). This crystal structure further splits the $t_{2g}$ states. The $d_{xy}$ orbital has a lower energy, and becomes always singly occupied. The other d-electron is in either the $d_{yz}$ or the $d_{zx}$ orbital. In the diffraction experiment [@blake], the data also indicate a smaller difference in V-O bond lengths in the $xy$ plane, which we shall neglect here for simplicity. The atomic Hamiltonian [@castellani] is then given by $H_0 = \sum_i H_i$, where the sum over $i$ runs over all the V-sites, and $$\begin{aligned} H_{i} &=&\frac{1}{2}\sum_{mm^{\prime },\sigma \sigma ^{\prime }}(1-\delta _{mm^{\prime }}\delta _{\sigma \sigma ^{\prime }})U_{mm^{\prime }}n_{im\sigma }n_{im^{\prime }\sigma ^{\prime }} \nonumber \\ &&-J\sum_{mm^{\prime },\sigma }\biggl( n_{im\sigma }n_{im^{\prime }\sigma }+c_{im\sigma }^{\dagger }c_{im-\sigma }c_{im^{\prime }-\sigma }^{\dagger }c_{im^{\prime }\sigma } \nonumber \\ &&-c_{im^{\prime }-\sigma }^{\dagger }c_{im^{\prime }\sigma }^{\dagger }c_{im\sigma }c_{im-\sigma }\biggr) +\sum_{m,\sigma }\Delta _{m}n_{im\sigma }.\end{aligned}$$ In the above Hamiltonian, $c_{im\sigma }^{\dagger }(c_{im\sigma })$ creates (annihilates) an electron of orbital $m$ and spin $\sigma$ at site $i$, and $n_{im\sigma }=c_{im\sigma }^{\dagger }c_{im\sigma }$. 
$\Delta _{1}=\Delta _{2}=0$, and $\Delta _{3}=\Delta <0$, with $m=1,2,3$ representing orbitals $d_{yz}$, $d_{zx}$, $d_{xy}$ respectively. $U_{mm^{\prime }}$ is the on-site direct interaction, and $J$ is the exchange interaction, or the Hund’s coupling. For the $t_{2g}$ orbitals, $U_{mm}=U=U_{mm^{\prime }}+2J$ for $m^{\prime }\neq m$. In the case $U,J>\Delta $, this Hamiltonian leads to an atomic ground state with each $V-3d^{2}$ ion having a total spin $S=1$ with two-fold degenerate orbital configurations ($d_{xy},d_{xz}$) and ($d_{xy},d_{yz}$). This last restriction in orbital configurations is valid for $YVO_3$ with strong lattice distortion but not for $LaVO_3$, where the cubic structure is almost undistorted at room temperature. We next introduce the intersite hopping Hamiltonian $H_{t}$, given by $$\begin{aligned} H_{t}=\sum_{\langle ij\rangle }\sum_{mm^{\prime },\sigma }\biggl( t_{m,m^{\prime }}^{ij}c_{i,m,\sigma }^{\dagger }c_{j,m^{\prime },\sigma }+h.c.\biggr)\end{aligned}$$ where the sum runs over all the nearest neighbor V-V pairs, and $t_{m,m^{\prime }}^{ij}$ is the electron hopping integral between two sites $i$ and $j$ from orbital $m$ to orbital $m^{\prime }$. Since the most important contribution to the hopping integrals is from the path via the $2p$ state of the O-ion between the two neighboring V-ions, the hopping integrals are diagonal in the present problem due to the cubic symmetry. Namely, we have $t_{m,m^{\prime }}^{ij}=t_{m}^{ij}\delta _{m,m^{\prime }}$. Therefore, there are only two independent hopping parameters, $t_{11}^{z}=t_{22}^{z}=t_{\perp }$ and $t_{22}^{x}=t_{33}^{x}=t_{11}^{y}=t_{33}^{y}=t_{\parallel }$, with the super-index indicating the direction of the two sites. In the limit $t_{\perp },t_{\parallel }\ll U,\,J,\,\Delta $, the system is an insulator with spin 1 on each V-ion. However, the virtual hopping introduces an effective intersite coupling of spins and the occupied orbitals. 
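The restriction to the two ground-state configurations $(d_{xy},d_{xz})$ and $(d_{xy},d_{yz})$ can be checked by enumerating the diagonal energies of the parallel-spin two-electron states; the exchange and pair-hopping terms of $H_i$ do not mix these states, so the diagonal count is exact for them. A minimal sketch, with purely illustrative parameter values, using $U_{mm'}=U-2J$ for $m\neq m'$ and the Hund term $-J$ for parallel spins:

```python
from itertools import combinations

# Orbitals: 1 = d_yz, 2 = d_zx, 3 = d_xy; the crystal field favors d_xy.
U, J, Delta = 4.5, 0.68, -0.3   # illustrative values (eV); Delta_3 = Delta < 0
level = {1: 0.0, 2: 0.0, 3: Delta}

def energy(m, mp):
    """Diagonal energy of two parallel-spin electrons in orbitals m != mp:
    direct term U_mm' = U - 2J, Hund term -J, plus crystal-field shifts."""
    return (U - 2 * J) - J + level[m] + level[mp]

spectrum = sorted((energy(m, mp), (m, mp)) for m, mp in combinations([1, 2, 3], 2))
ground = [cfg for e, cfg in spectrum if abs(e - spectrum[0][0]) < 1e-12]
print(spectrum)
# The two degenerate ground configurations both contain orbital 3 (d_xy);
# the gap to the (d_yz, d_zx) configuration is |Delta|.
```

The twofold degeneracy of the lowest configurations, each containing the $d_{xy}$ orbital, is exactly the pseudospin-1/2 degree of freedom used in the effective Hamiltonian below.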
The effective Hamiltonian for $H=H_{0}+H_{t}$ can be derived by applying perturbation theory to second order in $t_{\perp }$ or $t_{\parallel }$. Let $|\phi _{ij}\rangle =|s_{i}^{z},\tau _{i}^{z},s_{j}^{z},\tau _{j}^{z}\rangle $ be a ground state of $H_{0}$ for two V-ions $i,j$, where $s^{z}=1,\,0,\,-1$ is the spin z-component, and $\mathbf{\tau }$ is a pseudospin-1/2 operator for the orbitals: $\tau ^{z}=1/2$ if $d_{yz}$ is occupied, and $\tau ^{z}=-1/2$ if $d_{xz}$ is occupied. The matrix elements between the unperturbed ground states of the two V-ions can be calculated within second order perturbation theory, and are given by $$\begin{aligned} \langle \phi_{kl}| H_{eff} |\phi_{ij} \rangle = \sum_{I} \frac{\langle \phi_{kl}|H_t| I \rangle\langle I | H_t |\phi_{ij}\rangle}{E_0 - E_I}\end{aligned}$$ where the sum is over all the intermediate eigenstates $| I \rangle$ of $H_0$ corresponding to the eigenenergy $E_I$, and $E_0$ is the ground state energy of $H_0$. Two-electron states with total spin $S=1$ are given in ref. 21. The electronic configuration of the intermediate state $| I \rangle$ is $3d^3$ on one V-ion and $3d^1$ on the other. In the Appendix, we list all the states for $V-3d^3$, and the corresponding energy difference $E_I - E_0$. The effective Hamiltonian can be derived from these matrix elements. Defining for each site a spin-1 operator $\mathbf{S}$ and a pseudospin-1/2 operator $\mathbf{\tau }$ that act on the $s^z$ and $\tau^z$ degrees of freedom, it can then be expressed as $$\begin{aligned} H_{eff}=\sum_{\left\langle ij\right\rangle ,\nu }[I^{\nu }(\mathbf{\tau }_{i},\mathbf{\tau }_{j})\mathbf{S}_{i}\cdot \mathbf{S}_{j}+L^{\nu }(\mathbf{\tau }_{i},\mathbf{\tau }_{j})]\end{aligned}$$ where $\nu =x,y,z$ gives the direction of the bond $\left\langle ij\right\rangle $. The first term corresponds to spin-dependent orbital couplings while the second corresponds to orbital couplings which are spin independent. 
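The matrix-element formula above is just degenerate second-order perturbation theory. The toy sketch below (generic random matrices, not the $t_{2g}$ model itself) builds such an effective Hamiltonian in a degenerate ground manifold from a weak hopping-like perturbation and checks it against exact diagonalization:

```python
import numpy as np

def heff_second_order(E0, EI, V):
    """Second-order effective Hamiltonian in a degenerate ground manifold.

    E0 : ground-manifold energy (g-fold degenerate)
    EI : intermediate-state energies, shape (n,)
    V  : perturbation matrix elements <I|H_t|phi>, shape (n, g)
    """
    # Heff[a, b] = sum_I V*[I, a] V[I, b] / (E0 - E_I)
    return (V.conj().T * (1.0 / (E0 - EI))) @ V

rng = np.random.default_rng(0)
g, n = 2, 6
E0, EI = 0.0, rng.uniform(5.0, 10.0, n)   # large gap to intermediate states
V = 0.1 * rng.standard_normal((n, g))     # weak coupling (t << U-like scale)

Heff = heff_second_order(E0, EI, V)

# Compare with the g lowest eigenvalues of the full Hamiltonian.
H = np.block([[np.zeros((g, g)), V.conj().T],
              [V, np.diag(EI)]])
exact = np.sort(np.linalg.eigvalsh(H))[:g]
approx = np.sort(np.linalg.eigvalsh(Heff))
print(exact, approx)
```

Because the coupling is second order in $t$ while the corrections are fourth order, the two spectra agree to within $O(t^4/\Delta E^3)$; this is the sense in which $H_{eff}$ of Eq. (4) reproduces the low-energy physics of $H_0+H_t$.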
The first term also shows that the effective spin-spin couplings depend on orbital configuration. Equivalently, by defining $I^{\nu }=K_{+}^{\nu }+K_{-}^{\nu }$ and $L^{\nu }=K_{+}^{\nu }-K_{-}^{\nu }$, $H_{eff}$ can be written as $$\begin{aligned} H_{eff}=\sum_{\left\langle ij\right\rangle ,\nu }[K_{+}^{\nu }(\mathbf{\tau }_{i},\mathbf{\tau }_{j})(\mathbf{S}_{i}\cdot \mathbf{S}_{j}+1) \\ \nonumber +K_{-}^{\nu }(\mathbf{\tau }_{i},\mathbf{\tau }_{j})(\mathbf{S}_{i}\cdot \mathbf{S}_{j}-1)]\end{aligned}$$ so that $2K_{+}^{\nu }$ and $2K_{-}^{\nu }$ are interpreted as the intersite orbital couplings for parallel spins ($s_{i}^{z}=1$, $s_{j}^{z}=1$) and antiparallel spins ($s_{i}^{z}=1$, $s_{j}^{z}=-1$) respectively. We choose the energy unit to be $t_{\parallel }^{2}/U$, and denote $\eta =J/U$, $\eta _{\frac{3}{2}}=1/(1-3\eta )$, $\eta _{\frac{1}{2}}=1/(1+2\eta )$, and $Q=t_{\perp }/t_{\parallel }$. $K_{\pm }^{\nu }$ can be expressed in terms of the parameters $Q$ and $\eta $, and are given below. $$\begin{aligned} &&K_{+}^{(x,y)}=\eta _{\frac{3}{2}}(\tau _{iz}\tau _{jz}-\frac{1}{4}), \nonumber \\ &&K_{+}^{z}=2Q^{2}\eta _{\frac{3}{2}}(\vec{\tau}_{i}\cdot \vec{\tau}_{j}-\frac{1}{4}), \nonumber \\ &&K_{-}^{(x,y)}=\alpha (\tau _{iz}\tau _{jz}-\frac{1}{4})+\frac{3}{4}(1+\eta _{\frac{1}{2}}) \nonumber \\ &&+\frac{l_{x,y}}{4}(1+\eta _{\frac{1}{2}})(\tau _{iz}+\tau _{jz}), \nonumber \\ &&K_{-}^{z}=Q^{2}\{2\alpha (\tau _{iz}\tau _{jz}-\frac{1}{4})+\frac{1}{2}(1+\eta _{\frac{1}{2}}) \nonumber \\ &&-\frac{1}{3}(\eta _{\frac{3}{2}}-1)(\tau _{i}^{+}\tau _{j}^{-}+\tau _{i}^{-}\tau _{j}^{+}) \nonumber \\ &&-\frac{1}{2}(1-\eta _{\frac{1}{2}})(\tau _{i}^{+}\tau _{j}^{+}+\tau _{i}^{-}\tau _{j}^{-})\}.\end{aligned}$$ In the above equations, $\alpha =-\frac{1}{6}(1+2\eta _{\frac{3}{2}}-3\eta _{\frac{1}{2}})$, and $l_{x}=-1$ and $l_{y}=+1$ respectively. 
For a bond in the $z$ direction, where the $d_{xy}$ orbital is inert due to zero hopping amplitude and the $d_{xz}$ and $d_{yz}$ hopping is isotropic, our model is similar to the original Kugel-Khomskii model [@khomskii] with two differences. The first is the replacement of spin $1/2$ by spin $1$. The second is the effect of the $d_{xy}$ occupation, which changes the Hund’s coupling contribution to the on-site energies. We first discuss the intersite pseudospin couplings between two parallel spins. In this case, the pseudospin coupling has an $SU(2)$ symmetry along the $z$-direction. Along the $x$- or $y$-direction, however, the virtual hopping integral for orbital $2$ or orbital $1$ vanishes, so there is no exchange term in the pseudospin, and $K^{(x,y)}_+$ is of the Ising form. The pseudospin coupling between the two V-ions of antiparallel spins is quite different. There is a linear term $(\tau_{iz} + \tau_{jz})$ along the $x$- or $y$-direction, which favors either $d_{zx}$ or $d_{yz}$ orbital occupation to gain energy via the virtual hopping process. The pseudospin coupling along the $z$-direction includes both the exchange term $(\tau_i^+ \tau_j^- +h.c.)$ and the pair flip term $(\tau_i^+ \tau_j^+ + h.c.)$. In spite of the isotropic hopping matrix in the $z$-direction, the orbital Hamiltonian is not $SU(2)$ symmetric because of the presence of Hund’s coupling. In particular, the pair flip term is related to superexchange processes involving those intermediate states listed in Table II that are split in energy due to Hund’s coupling. To illustrate this point further, we consider a pair of V-ions along the $z$-direction with antiparallel spins and pseudospins $\tau_{iz}=\tau_{jz}=1/2$. The relevant intermediate states in the superexchange are the states listed in the second and the fifth rows in Table II. Because these states have different energies, there is non-zero amplitude for the pseudospins to flip to $\tau_{iz}=\tau_{jz}=-1/2$. 
The pseudospin pair flip process is actually quite common in orbital physics. For example, there are pair flip terms in the effective Hamiltonian for spin-1/2 systems with orbital degeneracy derived by Castellani et al. [@castellani]. In the limit $J/U=0$, $\eta _{\frac{3}{2}}=\eta _{\frac{1}{2}}=1$ and $\alpha =0$, and so we have $K_{-}^{z}=Q^{2}$ and $K_{-}^{(x,y)}=-\frac{l_{x,y}}{2}(\tau _{iz}+\tau _{jz})+\frac{3}{2}$. The orbital coupling between the two V-ions of antiparallel spins vanishes, and the orbital coupling between the two V-ions of parallel spins remains pseudospin $SU(2)$ symmetric along the $z$-direction and pseudospin Ising symmetric along the $x$- or $y$-directions. For $J/U=0$, the lack of the global pseudospin $SU(2)$ symmetry is due to the anisotropic hopping integrals in the system. Our effective Hamiltonian here is similar to the Hamiltonian proposed previously by Khaliullin et al. [@giniyat] with the following differences. These authors considered a model with 3-fold $t_{2g}$ orbital degeneracy, while we consider a 2-fold orbital degeneracy with the $d_{xy}$ orbital always singly occupied. Below we shall compare the two Hamiltonians by considering the spin-orbital coupling between two V-ions along the $z$-direction, namely $H_{ij}$, with $j=i+z$. This may be carried out by imposing that the $d_{xy}$ orbital is always singly occupied in the Hamiltonian of Khaliullin et al. We find that the $\tau _{iz}\tau _{jz}$ terms are the same in the two theories. However, the Hamiltonian of Khaliullin et al. does not include the pseudospin flip term $(\tau _{i}^{+}\tau _{j}^{+}+h.c.)$. As we illustrated above, the pseudospin flip term is non-zero. As we shall see in the next section, this pair flip term does not affect the mean field results, which depend only on the z-components of pseudospin in the present case. It will be an interesting question to examine if the pseudospin flip term is important to the orbital fluctuations. III. 
Mean field theory and the phase diagram ============================================ We start with the classical solutions of $H_{eff}$. The Hamiltonian has a global $SU(2)$ symmetry in spin space, so that we can assume the spin ordering along the z-direction. The Hamiltonian is invariant under the simultaneous transformation of global $Z(2)$ (reversing orbitals at all sites) and a $90^{\circ}$ rotation of the lattice about the $z$-axis. In general, we should consider orbital ordering along an arbitrary orientation. However, for the present problem, the orbital z-component terms are always larger than or equal to the x- or y-component terms in $K_{\pm }^{\nu }$ of Eq. (6). Therefore, we can discuss the classical solutions by considering the z-component of the orbital ordering only [@delta]. In other words, the classical solutions are the same as the Ising solutions in the present case. In Table I, we show the energies per site in various classical states. Note that $\eta _{\frac{1}{2}}\leq 1$ and $\eta _{\frac{3}{2}}\geq 1$, where the equality holds if and only if $J=0$. We consider below the case with non-zero Hund’s coupling $J>0$. In this case, the two states listed in Table I with C-type antiferro-orbital (CO) configuration have higher energies. Also, we can see that the G-type antiferromagnetic spin (GS) and G-type antiferro-orbital (GO) phase has a higher energy than the CS-GO phase. Therefore, the ground state is either ferromagnetic spin (FS) and GO, or CS-GO. In both the FS-GO and CS-GO phases, the orbital ordering is antiparallel in all three directions, favored by the combination of the symmetries in hopping integrals (due to the cubic crystal symmetry) and the Hund’s coupling. The asymmetry between the spin configuration along the z-axis and in the x-y plane is a result of the splitting of the $d_{xy}$ orbital level from the other two $t_{2g}$ orbitals. 
As expected, the FS-GO phase is energetically more favored at larger $J$, where the Hund’s coupling dominates, and the CS-GO phase is more favored at smaller $J$. It may be helpful to understand these two possible ground states by examining the following limiting cases more explicitly. In the limit of large Hund’s coupling, $\eta \rightarrow 1/3$, the terms in the energy expression in Table I proportional to $\eta _{\frac{3}{2}}$ dominate. Hence the FS-GO phase has the lowest energy. In the limit $J\rightarrow 0^{+}$, $\eta _{\frac{1}{2}}\rightarrow 1-0^{+}$ and $\eta _{\frac{3}{2}}\rightarrow 1+0^{+}$, so that the CS-GO phase has the lowest energy. It should be noted that although the $d_{xy}$ orbital is always singly occupied, the virtual hopping of the $d_{xy}$ electron means that our model is not identical to a model with one electron occupying degenerate $d_{xz}$ and $d_{yz}$ orbitals. In that model, for G-type orbital ordering, the ground state with non-zero Hund’s coupling would always be ferromagnetic. This is also true if we compare our model to a model with one electron occupying triply degenerate $t_{2g}$ orbitals [@kugel2; @khaliullin]. 
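These limiting cases can be verified directly from the classical energies listed in Table I. A short check, using our parametrization $\eta_{\frac{3}{2}}=1/(1-3\eta)$, $\eta_{\frac{1}{2}}=1/(1+2\eta)$ and the CS-GO and FS-GO entries from the table (energies in units of $t_\parallel^2/U$):

```python
def eta32(eta):
    return 1.0 / (1.0 - 3.0 * eta)

def eta12(eta):
    return 1.0 / (1.0 + 2.0 * eta)

def e_csgo(eta, Q):
    """C-type spin / G-type orbital energy per site (Table I)."""
    return -(2.0 / 3.0) * (5.0 + (1.0 + Q**2) * eta32(eta) + 3.0 * eta12(eta))

def e_fsgo(eta, Q):
    """Ferro spin / G-type orbital energy per site (Table I)."""
    return -2.0 * (1.0 + Q**2) * eta32(eta)

Q = 1.3
# Small Hund's coupling: CS-GO wins; near eta -> 1/3: FS-GO wins.
print(e_csgo(0.01, Q) < e_fsgo(0.01, Q), e_fsgo(0.30, Q) < e_csgo(0.30, Q))  # → True True
```

The crossover between the two ground states as $\eta$ grows is exactly the first-order CS-GO/FS-GO boundary discussed below.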
  -------------------------------------------------------------------------------------------------------
  **Phase**                                  **Energy per site**
  -----------------------------------------  -------------------------------------------------------------
  G-type spin and G-type orbital (GS-GO)     $-\frac{2}{3}\biggl(5+2Q^2+(1+Q^2)\eta_{\frac{3}{2}}+3\eta_{\frac{1}{2}}\biggr)$

  C-type spin and G-type orbital (CS-GO)     $-\frac{2}{3}\biggl(5+(1+Q^2)\eta_{\frac{3}{2}}+3\eta_{\frac{1}{2}}\biggr)$

  G-type spin and C-type orbital (GS-CO)     $-\frac{1}{3}\biggl(10+3Q^2+2\eta_{\frac{3}{2}}+3(2+Q^2)\eta_{\frac{1}{2}}\biggr)$

  C-type spin and C-type orbital (CS-CO)     $-\frac{2}{3}\biggl(5+\eta_{\frac{3}{2}}+3\eta_{\frac{1}{2}}\biggr)$

  Ferro spin and G-type orbital (FS-GO)      $-2(1+Q^2)\eta_{\frac{3}{2}}$
  -------------------------------------------------------------------------------------------------------

We now discuss the finite temperature phases and their transitions. We introduce three types of order parameters, namely the spin order parameter $m_{i}=\langle S_{iz}\rangle $, the orbital order parameter $r_{i}=\langle \tau _{iz}\rangle $, and the spin-orbital order parameter $q_{i}=\langle S_{iz}\tau _{iz}\rangle $. We shall consider the order parameters corresponding to the FS-GO and CS-GO phases, since other ordered states are not energetically favorable. In both the FS-GO and CS-GO phases, we divide the lattice into two sublattices $A$ and $B$ accordingly. For the FS-GO ordering, we consider $m_{i}=m$ for all the sites $i$, and $r_{i}=r$ and $q_{i}=q$ for $i$ at sublattice $A$ and $r_{i}=-r$ and $q_{i}=-q$ for $i$ at sublattice $B$. For the CS-GO ordering, we consider $m_{i}=m$, $r_{i}=r$, and $q_{i}=q$ for $i$ at sublattice $A$, and $m_{i}=-m$, $r_{i}=-r$, and $q_{i}=q$ for $i$ at sublattice $B$. We use a mean field theory to examine the thermodynamically stable phases described by these order parameters, and neglect both quantum and thermal fluctuations. 
The effective Hamiltonian $H_{eff}$ is then approximated by $$\begin{aligned} H_{MF} = \sum_{i}(aS_{iz}+b\tau_{iz}+cS_{iz}\tau_{iz}+d)\end{aligned}$$ In the above equation, the coefficients $a,\,b,\,c,\,d$ are functions of $\eta$ and $Q$, as well as of the mean fields $m$, $r$, and $q$. They are given below, with the upper ($-$) and lower ($+$) subscript signs corresponding to the CS-GO and FS-GO phases, respectively: $$\begin{aligned} a &=&A_{\mp }mr^{2}+D_{\mp }m, \nonumber \\ b &=&A_{\mp }m^{2}r-B_{\mp }r, \nonumber \\ c &=&-A_{\mp }(2m^{2}r^{2}+q), \nonumber \\ 2d &=&-C_{\mp }m^{2}r^{2}+E_{\mp }m^{2}+Br^{2}-A_{\mp }q^{2},\end{aligned}$$ where $B=B_{-}$, and $$\begin{aligned} A_{\pm }&=&4(\eta_{\frac{3}{2}}+\alpha )(Q^{2}\pm 1), \nonumber \\ B_{\pm }&=&4(\eta_{\frac{3}{2}}-\alpha )(Q^{2}\mp 1), \nonumber \\ C_{\pm }&=&4(\eta_{\frac{3}{2}}+\alpha )(Q^{2}\pm \frac{1}{2}), \\ D_{\pm }&=&(1+\eta_{\frac{1}{2}})(Q^{2}\pm 3)-(\eta_{\frac{3}{2}}+\alpha )(Q^{2}\pm 1), \nonumber \\ E_{\pm }&=&-(1+\eta_{\frac{1}{2}})(Q^{2}\pm \frac{3}{2})+(\eta_{\frac{3}{2}}+\alpha )(Q^{2}\pm \frac{1}{2}). \nonumber\end{aligned}$$ The mean field Hamiltonian can be solved easily to obtain the thermal averages of $S_{iz}$, $\tau_{iz}$, and $S_{iz}\tau_{iz}$, from which we obtain the following self-consistent equations for the order parameters $m$, $r$, and $q$ (with $\beta$ the inverse temperature): $$\begin{aligned} \langle S_{iz} \rangle &=& m = -\frac{2\sinh (\beta a)}{(1+2\cosh (\beta a))} \nonumber \\ \langle \tau_{iz} \rangle &=& r = -\frac{1}{2}\tanh (\beta b/2) \nonumber \\ \langle S_{iz}\tau_{iz} \rangle &=& q = - \frac{\sinh (\beta c/2)}{(1+2\cosh (\beta c/2))}.\end{aligned}$$ The free energy per site in the mean field theory is given by $$\begin{aligned} f &=& -\frac{1}{\beta}\ln \biggl(4\cosh(\beta b/2)[1+2\cosh(\beta c/2)] \nonumber \\ &&\times [1+2\cosh(\beta a)]\biggr) + d.\end{aligned}$$ We solve the self-consistent equations for different ordered states at various temperatures. 
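The self-consistent equations above can be solved by simple fixed-point iteration. The following sketch does this for the CS-GO branch (upper signs), with illustrative values $\eta=0.15$ and $Q=1.3$ and temperature in units of $t_\parallel^2/U$; it reproduces the saturated low-temperature solution $(m,r,q)\to(1,\tfrac{1}{2},\tfrac{1}{2})$ and the disordered high-temperature one.

```python
import math

def coeffs(eta, Q):
    e32, e12 = 1/(1 - 3*eta), 1/(1 + 2*eta)
    alpha = -(1 + 2*e32 - 3*e12) / 6
    # Upper (-) signs: CS-GO branch.
    A = 4*(e32 + alpha)*(Q**2 - 1)
    B = 4*(e32 - alpha)*(Q**2 + 1)
    D = (1 + e12)*(Q**2 - 3) - (e32 + alpha)*(Q**2 - 1)
    return A, B, D

def solve(beta, eta=0.15, Q=1.3, iters=500):
    A, B, D = coeffs(eta, Q)
    m, r, q = 1.0, 0.5, 0.5                 # ordered initial guess
    for _ in range(iters):
        a = A*m*r**2 + D*m                  # spin mean field
        b = A*m**2*r - B*r                  # orbital mean field
        c = -A*(2*m**2*r**2 + q)            # spin-orbital mean field
        m = -2*math.sinh(beta*a) / (1 + 2*math.cosh(beta*a))
        r = -0.5*math.tanh(beta*b/2)
        q = -math.sinh(beta*c/2) / (1 + 2*math.cosh(beta*c/2))
    return m, r, q

print(solve(beta=20.0))   # ordered:    m ~ 1, r ~ 1/2, q ~ 1/2
print(solve(beta=0.1))    # disordered: all ~ 0
```

Scanning $\beta$ between these extremes locates the separate orbital and spin ordering temperatures; in practice we also compare the converged free energies of the competing branches to pick the stable phase, as described next.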
The phases studied are (i) the paramagnetic spin (PS) and para-orbital (PO) state (PS-PO) with $m=r=q=0$; (ii) the CS-PO state with C-type spin ordering $m\neq 0$ and $r=q=0$; (iii) the PS-GO state with $r\neq 0$ and $m=q=0$; (iv) the CS-GO state; and (v) the FS-GO state. In the states (iv) and (v), $m,r,q\neq 0$. When more than one set of MF solutions exists at a given temperature, we compare their free energies to determine the thermodynamically stable phase diagram. In Fig. 2 and Fig. 3, we plot the phase diagrams obtained from the mean field theory for $Q=1.3$ and $Q=0.5$, respectively. The phase diagram for $Q=1$ is qualitatively the same as that for $Q=1.3$. The ground state is found to be CS-GO for smaller $J/U$, and FS-GO for larger $J/U$, consistent with our previous discussions. In general, the spin and orbital ordering occur at different temperatures. This feature is in contrast to the model for $V_{2}O_{3}$, for which spin and orbital order at the same temperature [@mila; @joshi]. Within the mean field theory, the phase transition between CS-GO and FS-GO is first order, and all other transitions between the different phases in Figs. 2 and 3 are second order. (The lattice distortion associated with the orbital ordering, which is not included in our model, may change the nature of the phase transition.) In passing, we note that the spin-orbital order parameter $q$ is always zero unless both the spins and the orbitals are ordered in the present system. This result indicates that the spin-orbital order parameter $q$ introduced in our mean field theory in addition to the spin order parameter $m$ and the orbital order parameter $r$ may not be as significant in the present problem in altering the qualitative physics as in the $SU(4)$ model [@li]. In what follows, we shall focus on the phases relevant to $YVO_{3}$, and discuss the sequential phase transitions from PS-PO to CS-GO. As we can see from Fig. 
2, as the temperature decreases from the disordered state PS-PO, the system first undergoes a transition to the G-type antiferro-orbital ordered phase. Only at lower temperatures does the spin become C-type antiferromagnetically ordered. For smaller $Q$, the phase transitions depend on the Hund’s coupling, as we can see from Fig. 3. At intermediate $J/U$, the orbital transition temperature is higher than that of the spin, while at smaller $J/U$, the spin transition temperature is higher than that of the orbital. Since for $YVO_{3}$ $Q\geq 1$ and the estimated values for $U$ and $J$ are $U\sim 4.5$ eV and $J\sim 0.68$ eV [@mizokawa2; @sawada], our theory suggests orbital ordering at a higher temperature followed by spin ordering at a lower temperature in $YVO_{3}$. This is qualitatively consistent with the experimental findings for $YVO_{3}$ above 77K. As recently reported by Blake $\mathit{et~al.}$ [@blake], the neutron diffraction experiment shows that the orbital ordering in $YVO_3$ takes place at $T_{GO}=200$K, which is far above the antiferromagnetic ordering temperature $T_{CS}=116$K. The orbital ordering is evidenced by the changes of the V-O bond lengths in the xy-plane. Our theory is consistent with these observations. Orbital ordering and lattice distortion are often observed simultaneously in experiments. It is usually difficult to distinguish if the lattice distortion is due to the orbital ordering or vice versa. Our theory suggests a scenario in which the orbital ordering is of electronic origin, and the lattice distortion observed above 77K is a consequence of the orbital ordering and the electron-phonon coupling. In $YVO_{3}$, as temperature decreases further, there is another phase transition at a lower temperature $T_{GS}=77$K, below which the system is in the G-type antiferromagnetic and C-type antiferro-orbital state. 
There have been proposals [@mizokawa2; @giniyat] to attribute this lower temperature phase transition to the Jahn-Teller energy, which favors C-type orbital ordering. It is an interesting issue to further understand the nature of the low temperature phase transition.

IV. Summary
===========

In summary, we have studied the electronic structure of insulating $YVO_{3}$, and derived an effective Hamiltonian based on the superexchange interaction. We started with the atomic limit, where each $V-3d^{2}$ ion has spin-1 and two-fold degenerate orbital configurations $(d_{xy}d_{xz})$ and $(d_{xy}d_{yz})$. This consideration is consistent with the recent neutron diffraction experiment at $T>77$ K. We studied the classical solutions of the model within mean field theory, and found G-type antiferro-orbital ordering at a higher temperature, followed by a second phase transition at which the spins become C-type ordered. Our theory explains the orbital and spin ordering of $YVO_{3}$ at temperatures $T>77$ K. While our model does not explain the lower temperature phase of G-type spin ordering, which may require considerations in addition to the superexchange interaction, our theory provides a starting point for understanding the unusual magnetic properties of $YVO_{3}$. After we completed the present calculations, we learned of a very recent inelastic neutron scattering experiment of Ulrich et al. [@keimer]. They reported an energy gap in the spin wave spectrum of $YVO_{3}$ in the C-type spin ordering phase, and interpreted it as evidence for an orbital Peierls state along the z-direction. The classical solutions we study here do not predict any spin or orbital Peierls transition. Quantum fluctuations or electron-lattice interactions may be responsible for this unusual state [@keimer; @shen]. This work was in part supported by NSF Grant \#0113574, the URC Summer Student Fellowship at the University of Cincinnati, and by the Chinese Academy of Sciences.
MM acknowledges the hospitality of the Hong Kong University of Science and Technology.

V. Appendix
===========

In this Appendix, we present all the intermediate eigenstates $| I \rangle$, and their corresponding energy differences with the ground state, $E_I - E_0$, of the four $t_{2g}$ electrons in two V ions. These states are used in the second order perturbation theory to derive the effective Hamiltonian $H_{eff}$ in the text. The atomic ground states of the system are $6 \times 6$-fold degenerate, with each ion occupied by two electrons of parallel spins in the orbital configurations $(d_{xy}d_{xz})$ or $(d_{xy}d_{yz})$. The atomic ground state energy is $E_0 = 2 (U-3J)$. The excited states $| I \rangle$ are of the form $(V-3d^1)-(V-3d^3)$, with a 6-fold degeneracy of the $V-3d^1$ ion. Here we consider the limiting case $U, \, J \gg \Delta$, and neglect the effect of $\Delta$ on the $V-3d^3$ states. Within this approximation, these excited states are split into three multiplets by the Hund's coupling: $4\times 6$ states with energy $(U-3J+E_0)$, $10 \times 6$ states with energy $(U+E_0)$, and $6 \times 6$ states with energy $(U+2J+E_0)$. The spin and orbital configurations of $V-3d^3$ are listed in Table II.
  **Eigenstate**                                                                                                                                                                                                                          **$E_I - E_0$**
  --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------
  $(c^{\dagger}_{1\uparrow}c^{\dagger}_{2\uparrow}c^{\dagger}_{3\uparrow})|0\rangle$                                                                                                                                                        $U-3J$
  $\frac{1}{\sqrt{3}}(c^{\dagger}_{1\uparrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\uparrow}+c^{\dagger}_{1\downarrow}c^{\dagger}_{2\uparrow}c^{\dagger}_{3\uparrow}+c^{\dagger}_{1\uparrow}c^{\dagger}_{2\uparrow}c^{\dagger}_{3\downarrow})|0\rangle$     $U-3J$
  $\frac{1}{\sqrt{3}}(c^{\dagger}_{1\downarrow}c^{\dagger}_{2\uparrow}c^{\dagger}_{3\downarrow}+c^{\dagger}_{1\uparrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\downarrow}+c^{\dagger}_{1\downarrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\uparrow})|0\rangle$     $U-3J$
  $(c^{\dagger}_{1\downarrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\downarrow})|0\rangle$                                                                                                                                                  $U-3J$
  $\frac{1}{\sqrt{2}}(c^{\dagger}_{1\uparrow}c^{\dagger}_{1\downarrow}c^{\dagger}_{3\uparrow}-c^{\dagger}_{2\uparrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\uparrow})|0\rangle$                                                            $U$ (3-fold)
  $\frac{1}{\sqrt{2}}(c^{\dagger}_{1\uparrow}c^{\dagger}_{1\downarrow}c^{\dagger}_{3\downarrow}-c^{\dagger}_{2\uparrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\downarrow})|0\rangle$                                                        $U$ (3-fold)
  $\frac{1}{\sqrt{6}}(2c^{\dagger}_{1\uparrow}c^{\dagger}_{2\uparrow}c^{\dagger}_{3\downarrow}-c^{\dagger}_{1\uparrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\uparrow}-c^{\dagger}_{1\downarrow}c^{\dagger}_{2\uparrow}c^{\dagger}_{3\uparrow})|0\rangle$        $U$
  $\frac{1}{\sqrt{6}}(2c^{\dagger}_{1\downarrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\uparrow}-c^{\dagger}_{1\downarrow}c^{\dagger}_{2\uparrow}c^{\dagger}_{3\downarrow}-c^{\dagger}_{1\uparrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\downarrow})|0\rangle$   $U$
  $\frac{1}{\sqrt{2}}(c^{\dagger}_{1\uparrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\uparrow}-c^{\dagger}_{1\downarrow}c^{\dagger}_{2\uparrow}c^{\dagger}_{3\uparrow})|0\rangle$                                                            $U$
  $\frac{1}{\sqrt{2}}(c^{\dagger}_{1\uparrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\downarrow}-c^{\dagger}_{1\downarrow}c^{\dagger}_{2\uparrow}c^{\dagger}_{3\downarrow})|0\rangle$                                                        $U$
  $\frac{1}{\sqrt{2}}(c^{\dagger}_{1\uparrow}c^{\dagger}_{1\downarrow}c^{\dagger}_{3\uparrow}+c^{\dagger}_{2\uparrow}c^{\dagger}_{2\downarrow}c^{\dagger}_{3\uparrow})|0\rangle$                                                            $U+2J$ (3-fold)
  $\frac{1}{\sqrt{2}}(c^{\dagger}_{1\uparrow}c^{\dagger}_{1\downarrow}c^{\dagger}_{3\downarrow}+c^{\dagger}_{2\downarrow}c^{\dagger}_{2\uparrow}c^{\dagger}_{3\downarrow})|0\rangle$                                                        $U+2J$ (3-fold)
  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

N. F. Mott, Metal-Insulator Transitions (Taylor and Francis, London, 1990).
For a review, see Y. Tokura and N. Nagaosa, Science 288, 462 (2000).
M. Imada, A. Fujimori, and Y. Tokura, Rev. Mod. Phys. 70, 1039 (1998).
A. S. Borukhovich, G. V. Bazuv, and G. P. Shveikin, Sov. Phys. Solid State 16, 191 (1974).
V. G. Zubkov, A. S. Borukhovich, G. V. Bazuv, I. I. Matveenko, and G. P. Shveikin, Sov. Phys. JETP 39, 896 (1974).
V. G. Zubkov, G. V. Bazuv, and G. P. Shveikin, Sov. Phys. Solid State 18, 1165 (1976).
H. Kawano, H. Yoshizawa, and Y. Ueda, J. Phys. Soc. Jpn. 63, 2857 (1994).
H. C. Nguyen and J. B. Goodenough, Phys. Rev. B 52, 324 (1995).
A. V. Mahajan et al., Phys. Rev. B 46, 10966 (1992).
J. Kikuchi, H. Yosuoka, Y. Kokubo, and Y. Ueda, J. Phys. Soc. Jpn. 63, 3577 (1994).
F. Cintolesi, M. Corti, A. Rigamonti, G. Rossetti, P. Ghigna, and A. Lascialfari, J. Appl. Phys. 79, 6624 (1996).
M. Corti, F. Cintolesi, A. Lascialfari, and F. Rossetti, J. Appl. Phys. 81, 5286 (1997).
Y. Ren, T. T. M. Palstra, D. I. Khomskii, A. A. Nugroho, A. A. Menovsky, and G. A. Sawatzky, Phys. Rev. B 62, 6577 (2000).
S. Miyasaka, T. Okuda, and Y. Tokura, Phys. Rev. Lett. 85, 5388 (2000).
G. R. Blake, T. T. M. Palstra, Y. Ren, A. A. Nugroho, and A. A. Menovsky, Phys. Rev. Lett. 87, 245501 (2001).
T. Mizokawa and A. Fujimori, Phys. Rev. B 54, 5368 (1996).
T. Mizokawa, D. I. Khomskii, and G. A. Sawatzky, Phys. Rev. B 60, 7309 (1999).
H. Sawada, N. Hamada, K. Terakura, and T. Asada, Phys. Rev. B 53, 12742 (1996).
G. Khaliullin, P. Horsch, and A. M. Oles, Phys. Rev. Lett. 86, 3879 (2001).
S. Q. Shen, X. C. Xie, and F. C. Zhang, Phys. Rev. Lett. 88, 027201 (2002).
C. Castellani, C. R. Natoli, and J. Ranninger, Phys. Rev. B 18, 4945 (1978).
K. I. Kugel and D. I. Khomskii, Sov. Phys. JETP 37, 725 (1973).
K. I. Kugel and D. I. Khomskii, Sov. Phys. Solid State 17, 285 (1975).
G. Khaliullin and S. Maekawa, Phys. Rev. Lett. 85, 3950 (2000).
K. I. Kugel and D. I. Khomskii, Sov. Phys. JETP 52, 501 (1981).
F. Mila and F. C. Zhang, Eur. Phys. J. B 16, 7 (2000).
We have examined the linear orbital operator term in Eq. (6). For the spin configurations considered in this paper, namely the C- or G-type antiferromagnets or the ferromagnet, the contribution of this linear term to the energy vanishes.
F. Mila, R. Shiina, F. C. Zhang, A. Joshi, M. Ma, V. Anisimov, and T. M. Rice, Phys. Rev. Lett. 85, 1714 (2000).
A. Joshi, M. Ma, and F. C. Zhang, Phys. Rev. Lett. 86, 5743 (2001).
Y. Q. Li, D. N. Shi, M. Ma, and F. C. Zhang, Phys. Rev. Lett. 81, 3527 (1998).
C. Ulrich, G. Khaliullin, J. Sirker, M. Reehuis, M. Ohl, S. Miyasaka, Y. Tokura, and B. Keimer, cond-mat/0211589 (2002).
---
author:
- 'N. Peretto, G. A. Fuller, Ph. André, D. Arzoumanian, V. M. Rivilla, S. Bardeau, S. Duarte Puertas, J. P. Guzman Fernandez, C. Lenfestey, G.-X. Li, F. A. Olguin, B. R. Röck, H. de Villiers, J. Williams'
bibliography:
- 'references.bib'
date: 'Received; accepted'
title: 'SDC13 infrared dark clouds: Longitudinally collapsing filaments?[^1]'
---

[The formation of stars is now believed to be tightly linked to the dynamical evolution of the interstellar filaments in which they form. In this paper we analyze the density structure and kinematics of a small network of infrared dark filaments, SDC13, observed in both dust continuum and molecular line emission with the IRAM 30m telescope. These observations reveal the presence of 18 compact sources, among which the two most massive, MM1 and MM2, are located at the intersection point of the parsec-long filaments. The dense gas velocity and velocity dispersion observed along these filaments show smooth, strongly correlated gradients. We discuss the origin of the SDC13 velocity field in the context of filament longitudinal collapse. We show that the collapse timescale of the SDC13 filaments (from 1 Myr to 4 Myr depending on the model parameters) is consistent with the presence of Class I sources in them, and argue that, on top of bringing more material to the centre of the system, collapse could generate additional kinematic support against local fragmentation, helping the formation of starless super-Jeans cores. ]{}

Introduction
============

In recent years, interstellar filaments have received special attention. The far-infrared [*Herschel*]{} space observatory revealed the ubiquity of filaments in both quiescent and active star-forming clouds. The detailed analysis of large samples of filaments identified with [*Herschel*]{} suggests that they represent a key stage towards the formation of low-mass prestellar cores [@arzoumanian2011].
In the picture proposed by @andre2010, these cores form out of filaments which have reached the thermal critical mass per unit length, $M_{line,crit}^{th}=2c_s^2/G$ [@ostriker1964], above which filaments become gravitationally unstable. Filaments with $M_{line} >M_{line,crit}^{th}$ are called supercritical filaments. While this scenario might apply to the bulk of the low-mass prestellar cores, it can hardly account for the formation of super-Jeans cores [e.g. @sadavoy2010], whose mass is several times larger than the local thermal Jeans mass $M_{J}^{th}$ ($\sim1$ M$_{\odot}$ for critical 10 K filaments). Additional support (magnetic, kinematic) and/or significant subsequent core accretion is necessary to explain the formation of cores with masses $M_{core} \gg M_{J}^{th}$. Several high-resolution studies of filamentary cloud kinematics have been performed in the past three years towards nearby, low-mass star-forming regions [e.g. @duartecabral2010; @hacar2011; @kirk2013 Arzoumanian et al. 2013]. These studies demonstrate the importance of cloud kinematics in the context of filament evolution and core formation. For instance, @kirk2013 showed that filamentary gas accretion towards the central cluster of young stellar objects (YSOs) in Serpens South could sustain the star formation rate observed in this cloud, providing enough material to constantly form new generations of YSOs. But it is not clear whether these inflows have any impact in determining the mass of individual low-mass cores. On the high-mass side, infrared dark clouds (IRDCs) have been privileged targets [e.g. @miettinen2012; @ragan2012; @henshaw2013; @busquet2013; @peretto2013]. These sources are typically located at a distance of $\sim4$ kpc from the Sun [@peretto2010a], making the analysis of filament kinematics more difficult.
Thanks to the high sensitivity and resolution of ALMA, @peretto2013 showed that the global collapse of the SDC335.579-0.292 IRDC is responsible for the formation of an early O-type star progenitor ($M\simeq545$ M$_{\odot}$ in 0.05 pc) sitting at the cloud centre. In this paper, we present results on an IRDC (hereafter called SDC13) composed of three Spitzer Dark Clouds from the Peretto & Fuller (2009) catalogue (SDC13.174-0.07, SDC13.158-0.073, SDC13.194-0.073). The near kinematical distance of SDC13 is $d=3.6 (\pm 0.4)$ kpc using the @reid2009 model. From 8 $\mu$m extinction we estimate a total gas mass of $\sim600$ M$_{\odot}$ above $N_{H_2}>10^{22}$ cm$^{-2}$. Like any IRDC, SDC13 shows little star formation activity, except in the central region where extended infrared emission is observed (see Fig. 1 from @peretto2010a). The high extinction contrast, the rather simple geometry, and the large aspect ratios of the SDC13 filaments are remarkable, and provide a great opportunity to better understand the impact of filament kinematics on the formation of cores in an intermediate mass regime. ![Spitzer 8 $\mu$m image of SDC13 in grey scale, on top of which we overlaid the IRAM 30m MAMBO 1.2mm dust continuum contours (from 3 mJy/beam to 88 mJy/beam in steps of 5 mJy/beam). The positions of the identified 1.2mm compact sources within the SDC13 filaments are marked as crosses, red for starless sources and blue for protostellar sources. []{data-label="continuum"}](sdc13_mambo_spitzer.pdf){width="7.cm"}

Millimeter observations
=======================

Dust continuum data
-------------------

In December 2009 we observed SDC13 at the IRAM 30m telescope using the MAMBO bolometer array at 1.2mm. We performed on-the-fly mapping of a $5\arcmin\times5\arcmin$ region. The sky opacity at 225 GHz was measured to be between 0.08 and 0.26 depending on the scan. The pointing accuracy was better than 2$\arcsec$ and the calibration on Uranus was better than 10%.
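As a rough cross-check on the beam sizes quoted in this section (this check is not part of the original analysis), the single-dish diffraction limit can be evaluated in a few lines. The prefactor of 1.2 in HPBW $\approx 1.2\,\lambda/D$ is only an approximation to the true illumination-dependent value:

```python
import math

def hpbw_arcsec(wavelength_m, dish_m=30.0, factor=1.2):
    """Approximate half-power beam width of a single dish, in arcsec."""
    return math.degrees(factor * wavelength_m / dish_m) * 3600.0

c = 2.998e8  # speed of light (m/s)
# MAMBO continuum at 1.2 mm, and N2H+(1-0) at 93.2 GHz (~3.2 mm)
print(round(hpbw_arcsec(1.2e-3)), round(hpbw_arcsec(c / 93.2e9)))
```

This gives roughly 10$\arcsec$ and 27$\arcsec$, consistent with the 11$\arcsec$ MAMBO beam and the 27$\arcsec$ beam at 93.2 GHz quoted in the text.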
We used MOPSIC to reduce these MAMBO data, obtaining a final rms noise level on the reduced image of $\sim1$ mJy per 11$\arcsec$ beam. We also make use of publicly available [*Spitzer*]{} GLIMPSE [@churchwell2009] and 24 $\mu$m MIPSGAL [@carey2009] data.

Molecular line data
-------------------

In September 2011, during the IRAM 30m summer school, we observed SDC13 with the IRAM 30m telescope using the EMIR heterodyne receiver. We performed 31 single-pointing observations along the SDC13 filaments in N$_2$H$^+$(1-0) at 93.2 GHz (i.e. a 27$\arcsec$ beam). The off positions, taken 5$\arcmin$ away from SDC13, were checked to ensure that no emission was present there. The pointing was better than 5$\arcsec$ and the sky opacity varied between 0.3 and 2 at 225 GHz. We used the FTS spectrometer with a 50 kHz spectral resolution. The data have been reduced in CLASS[^2]. The resulting spectra have an rms noise varying from 0.05 to 0.1 K in a 0.16 km/s channel width. ![(left): Spitzer 8 $\mu$m image of SDC13 in grey scale, on top of which we marked the 31 single-pointing positions we observed in N$_2$H$^+$(1-0), and the skeletons of each filament (thick solid lines). The size of the black circles represents the 30m beam size at 3.2mm. (right): Examples of three N$_2$H$^+$(1-0) spectra (displayed in T$_a^*$ temperature scale) observed at three different positions (1, 10, and 24). Their corresponding hyperfine structure fits are displayed as yellow solid lines. The remaining spectra are displayed in online Fig. \[allspec\]. The vertical dashed line corresponds to the systemic velocity of the cloud (i.e. $V_{sys}=37.0$ km/s) as measured from the isolated component of the hyperfine structure.[]{data-label="spectra"}](sdc13_fil_fewspec.pdf){width="9.1cm"}

Analysis
========

Mass partition in SDC13
-----------------------

The MAMBO observations of SDC13 are presented in Fig. \[continuum\]. We can see that the 1.2mm dust continuum emission towards SDC13 matches the mid-infrared extinction very well.
In the remainder of this paper we focus only on the emission towards these dark filaments, and ignore any other significant emission peaks since we do not have kinematical information for them. We segmented the emission in this dust continuum map into four filaments: Fi-SE, Fi-NW, Fi-NEs, and Fi-NEn. The skeletons of these filaments (Fig. \[spectra\]-left) have been obtained by fitting polynomial functions to the MAMBO emission peaks along them. The splitting of Fi-NE into two parts is due to its dynamical structure (cf Sect. 3.2). These filaments are barely resolved in our MAMBO map and are responsible for most of the 1.2mm flux in the region. Although ground-based dust continuum observations filter out all emission whose spatial scale is larger than the size of the bolometer array ($\sim2\arcmin$ for MAMBO), inspection of [*Herschel*]{} Hi-GAL observations [@molinari2010] shows that the SDC13 mass is indeed mostly concentrated within the filaments. The observed properties of the filaments (as measured within the 3 mJy/beam contour) are presented in Table 1[^3]. Using the source identification scheme presented in Appendix A of @peretto2009, we further identified 18 compact sources sitting within the filaments. The observed properties of these sources are given in Table 2. Note that a number of sources have sizes that are close to, or even smaller than, the beam size (all sources with dashes in Table 2 are in this situation). These sources have significant emission peaks ($>3\sigma$), but because they are blended with nearby sources their measured sizes appear smaller than the beam, and their properties are very uncertain. All these sources are confirmed with a higher-resolution extinction map and SABOCA data (not shown here). The flux uncertainties in Table 2 reflect the difficulty of estimating the separation between the sources and the underlying filament.
It is calculated by taking the average of the clipping and bijection flux estimates [@rosolowsky2008]. We further characterize the compact sources by checking their association with [*Spitzer*]{} 24 $\mu$m point-like sources, and their fragmentation as seen in the 8 $\mu$m extinction map (see Fig. 1 from Peretto & Fuller 2010). We find that most of the sources are starless (13 out of 18) and that only MM10 appears to be sub-fragmented in extinction. The results of this analysis are also quoted in Table 2. Assuming that the dust emission is optically thin at 1.2mm, the measured fluxes provide a direct measurement of the source masses. In Tables 3 and 4 we give the masses of all structures, estimated using a specific dust opacity at 1.2mm of $\kappa_{1.2mm}=0.005$ cm$^2$g$^{-1}$, and dust temperatures of 12 K for filaments and starless sources and 15 K for protostellar ones [@peretto2010b]. We further provide their volume densities, and the mass per unit length of the filaments. All uncertainties result from the propagation of the flux uncertainties. They do not take into account systematic uncertainties on the dust temperature/opacity, which account for an additional factor of $\le2$ uncertainty on all calculated properties. ![Spitzer 8 $\mu$m image of SDC13 in grey scale, on top of which we symbolized the results of the N$_2$H$^+$(1-0) HFS fitting as circles. The colour of the symbols codes the gas velocity, while their sizes code the gas velocity dispersion (FWHM). []{data-label="velocity"}](sdc13_spit_vel.pdf){width="7.2cm"}

Dense gas projected velocity field
----------------------------------

The N$_2$H$^+$(1-0) spectra observed along the SDC13 filaments are displayed in Fig. \[spectra\] and online Fig. \[allspec\]. All spectra show strong detections ($S/N\ge4$) for all hyperfine line components, and symmetric line profiles. Based on these observations, we estimate the systemic velocity of SDC13 to be +37 km/s (near position 8 in Fig. \[spectra\]).
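The optically thin dust-mass conversion used in Sect. 3.1, $M = F_\nu d^2 / (\kappa_\nu B_\nu(T))$, can be sketched as follows. This is a sketch rather than the paper's pipeline: it uses the quoted $\kappa_{1.2mm}=0.005$ cm$^2$g$^{-1}$, $d=3.6$ kpc, and the MM1 integrated flux from Table 2, and gives a few tens of solar masses:

```python
import math

# Physical constants (SI)
h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8
M_sun, pc = 1.989e30, 3.086e16

def planck(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2 * h * nu**3 / c**2 / math.expm1(h * nu / (k_B * T))

def dust_mass(flux_jy, d_pc, T, kappa_cm2_g=0.005, wavelength=1.2e-3):
    """Optically thin dust mass M = F_nu d^2 / (kappa_nu B_nu(T)), in M_sun."""
    nu = c / wavelength
    F = flux_jy * 1e-26            # Jy -> W m^-2 Hz^-1
    d = d_pc * pc                  # pc -> m
    kappa = kappa_cm2_g * 0.1      # cm^2/g -> m^2/kg
    return F * d**2 / (kappa * planck(nu, T)) / M_sun

# MM1: integrated flux 94.5 mJy at d = 3.6 kpc, T = 15 K (protostellar)
print(round(dust_mass(0.0945, 3600.0, 15.0)))  # ~60 M_sun with these assumptions
```

The result is of the same order as the $\sim60$ M$_{\odot}$ effective Jeans mass discussed in Sect. 4.3, but the exact value depends on the adopted opacity and temperature (a factor of $\le2$ systematic uncertainty, as noted above).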
Note that from positions 18 to 24 and 28 to 31 we observe a secondary velocity component, offset by $+17$ km/s. In Fig. \[spectra\] it can be identified as a small bump at $V\simeq+54$ km/s in spectrum \#24. This higher velocity component most likely belongs to another cloud on the eastern side of SDC13, which overlaps, in projection, with the Fi-NE filament. For the remainder of the paper we decided to ignore this higher velocity component. We used the hyperfine structure line fitting routine of the CLASS software to derive line parameters such as the centroid gas velocity and the gas velocity dispersion (see Fig. \[spectra\]). We represent the results from this fitting procedure in Fig. \[velocity\]. The colour and size of the circular symbols code the line-of-sight velocity and linewidth, respectively, of the gas as measured in N$_2$H$^+$(1-0). The exact values are given in Table 5 for each position. Figure \[velocity\] shows that both the dense gas velocity and the velocity dispersion vary smoothly along each filament, from one end to the other. The two quantities are strongly correlated. Along Fi-NE, we can actually see that the smooth variation of the velocity reaches a minimum (around position 25) before increasing again towards position 31. We believe that this shows that the northern and southern parts of Fi-NE are dynamically distinct, and this is the reason why we decided to subdivide this filament into two, Fi-NEn and Fi-NEs. The Fi-NE filaments are blueshifted with respect to the systemic velocity of SDC13, while the Fi-NW and Fi-SE filaments are redshifted. The filaments spatially and dynamically converge near position 8 at the systemic velocity of the system, i.e. +37 km/s. At this same position, the linewidth reaches a maximum of $\sim2$ km/s, while at the filament ends the linewidth is narrowest, i.e. $\sim0.7$ km/s.
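The observed linewidths can be converted into total (thermal + non-thermal) velocity dispersions $\sigma_{tot}$ following the usual convention [e.g. @arzoumanian2013]. The sketch below is ours, not the paper's code; it assumes a 12 K gas temperature, a mean molecular weight of 2.33, and the N$_2$H$^+$ molecular weight of 29:

```python
import math

k_B, m_H = 1.381e-23, 1.673e-27   # SI units

def sigma_tot(fwhm_kms, T=12.0, mu_obs=29.0, mu_gas=2.33):
    """Total velocity dispersion (km/s) from an observed N2H+ FWHM.

    Removes the thermal width of the N2H+ line (molecular weight mu_obs)
    and adds back the thermal dispersion of the mean gas particle (mu_gas).
    """
    sigma_obs = fwhm_kms / (2 * math.sqrt(2 * math.log(2)))   # FWHM -> sigma
    sig_th_line = math.sqrt(k_B * T / (mu_obs * m_H)) / 1e3   # N2H+ thermal width
    sig_nt2 = sigma_obs**2 - sig_th_line**2                   # non-thermal part
    c_s = math.sqrt(k_B * T / (mu_gas * m_H)) / 1e3           # gas sound speed
    return math.sqrt(sig_nt2 + c_s**2)

# FWHM ~2 km/s at the filament junction, ~0.7 km/s at the filament ends
print(round(sigma_tot(2.0), 2), round(sigma_tot(0.7), 2))
```

With these assumptions the 2 km/s linewidth at the junction corresponds to $\sigma_{tot}\simeq0.85$-0.9 km/s, consistent with the $\sigma_{tot}\simeq0.8$ km/s quoted in Sect. 4.3.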
Discussion
==========

Timescales and filament stability
---------------------------------

The [*Spitzer*]{} protostellar objects we identify in the SDC13 filaments are typically embedded Class I (or young Class II) objects (see Appendix B). The age $\tau_{CI}$ of such objects, from the prestellar core stage to the Class I protostellar stage, is $1~\rm{Myr}\leq\tau_{CI}\leq2$ Myr [e.g. @kirk2005; @evans2009], consequently setting a lower limit of 1 Myr on the age of the filaments themselves. Their widths[^4] as measured on the MAMBO map are $\sim 0.3$ pc. Taking a 3D velocity dispersion $\sigma_{3D}=\sqrt{3}\sigma_{tot}=1.0(\pm0.3)$ km/s, we obtain a crossing time $t_{cross}=0.3(\pm0.1)$ Myr. Given their lower age limit of 1 Myr, the SDC13 filaments must therefore either be gravitationally bound or confined by ram/magnetic pressure. The thermal critical mass per unit length for a 12 K isothermal filament is $M_{line,crit}^{th}=19$ M$_{\odot}$/pc; the observed values for the SDC13 filaments are higher by a factor of 4 to 8 (cf Table 3), meaning that they are thermally supercritical. However, if we consider non-thermal motions as an extra support against gravity, the picture slightly changes. We define the effective critical mass per unit length as $M^{eff}_{line,crit}=2\overline{\sigma_{tot}}^2/G$, where $\overline{\sigma_{tot}}$ is the average total velocity dispersion along the filaments. Taking $\overline{\sigma_{tot}}=0.6$ km/s (cf Table 5), we obtain $1\leq \frac{M^{eff}_{line,crit}}{M_{line}} \leq 2$ for all filaments, making them just bound. This ratio is probably slightly overestimated, since the longitudinal collapse of the filaments (see the following sections) probably contributes to the observed line widths in the form of non-supportive bulk motions. The SDC13 filaments match the relationship between $M_{line}$ and $M^{eff}_{line,crit}$ found by @arzoumanian2013 for thermally supercritical filaments.
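The stability numbers above can be reproduced with a few lines; this is a sketch under the stated assumptions (12 K, a mean molecular weight of 2.33, $\overline{\sigma_{tot}}=0.6$ km/s, $W\sim0.3$ pc, $\sigma_{3D}=1.0$ km/s), and the exact thermal value depends slightly on the adopted mean molecular weight:

```python
import math

G = 6.674e-11                     # m^3 kg^-1 s^-2
k_B, m_H = 1.381e-23, 1.673e-27
M_sun, pc, Myr = 1.989e30, 3.086e16, 3.156e13
mu = 2.33                         # mean molecular weight (assumed)

# Thermal critical mass per unit length for a 12 K isothermal filament
c_s = math.sqrt(k_B * 12.0 / (mu * m_H))        # isothermal sound speed (m/s)
M_line_th = 2 * c_s**2 / G / (M_sun / pc)       # ~19-20 M_sun/pc

# Effective critical value with sigma_tot = 0.6 km/s
M_line_eff = 2 * (0.6e3)**2 / G / (M_sun / pc)

# Crossing time for W ~ 0.3 pc and sigma_3D ~ 1.0 km/s
t_cross = 0.3 * pc / 1.0e3 / Myr                # ~0.3 Myr

print(round(M_line_th), round(M_line_eff), round(t_cross, 2))
```

$M^{eff}_{line,crit}\simeq170$ M$_{\odot}$/pc is indeed within a factor of 1-2 of filaments that are 4-8 times above the 19 M$_{\odot}$/pc thermal value.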
![(left): Line-of-sight velocity profile of the dense gas within each filament as a function of position from the well centres, i.e. position 8 for Fi-NW (purple dotted line), Fi-SE (red dashed-dotted line), and Fi-NEs (blue dashed line), and position 31 for Fi-NEn (cyan solid line). (right): Same as panel (left), but for the total (thermal+non-thermal) dense gas velocity dispersion. []{data-label="profiles"}](sdc13_fil_dist_vel.pdf "fig:"){width="4.4cm"} ![(left): Line-of-sight velocity profile of the dense gas within each filament as a function of position from the well centres, i.e. position 8 for Fi-NW (purple dotted line), Fi-SE (red dashed-dotted line), and Fi-NEs (blue dashed line), and position 31 for Fi-NEn (cyan solid line). (right): Same as panel (left), but for the total (thermal+non-thermal) dense gas velocity dispersion. []{data-label="profiles"}](sdc13_fil_dist_width.pdf "fig:"){width="4.4cm"}

Dynamical evolution scenarios
-----------------------------

The smooth velocity structure observed along the SDC13 filaments shows that these filaments are dynamically coherent. The number of physical processes leading to a well-organized parsec-scale velocity pattern, as observed in SDC13, is rather limited. Collapse, expansion, rotation, soft filament collision, and wind-driven acceleration are the five main possibilities. Rotation would imply that the SDC13 filaments rotate about an axis going through the filament junction. The resulting geometry is unrealistic in the context of interstellar cloud dynamics; moreover, differential rotation would tear apart each filament in a crossing time $t_{cross}\simeq 0.3$ Myr (cf Sect. 4.1). No expanding source of energy (such as an HII region or a cluster of outflows[^5]) is located at the centre of SDC13, rendering the expansion scenario unlikely, while colliding filaments cannot account for the velocity pattern observed along Fi-NEn. Finally, wind-driven acceleration powered by a nearby stellar cluster is a credible scenario.
In particular, as we can see in Fig. \[continuum\], a mid-infrared nebulosity, along with a number of compact infrared sources, is observed east of MM1, probably interacting with it and impacting its velocity through expanding winds. However, it probably does not impact the dynamics of the entire network of filaments. As with the colliding flow scenario, the wind-driven scenario can hardly explain the smooth velocity field observed in these filaments, and in particular the one along Fi-NEn. We therefore consider that longitudinal collapse is the best option to explain the velocity structure observed in SDC13. Gravity can naturally account for a velocity gradient along the filaments as the gas collapses towards the centre of the system. In this context, one can argue for the presence of a primary gravitational potential well centre near position 8, towards which all the gas is collapsing, and a secondary one near position 31, towards which only Fi-NEn is infalling. In this collapse scenario, the redshifted filaments, i.e. Fi-NW and Fi-SE, are in the foreground with respect to the primary centre of the system, while the blueshifted ones, Fi-NEs and Fi-NEn, are in the background. The dense gas velocity observed along the filaments is therefore interpreted as a projected infall velocity profile. Figure \[profiles\] displays these profiles for all four filaments. We can see that for each filament the velocity is a linear function of the distance to the centre (the gradients are given in Table 3) up to $\sim1.5$ pc, and then flattens out. This is expected in the case of homologous free-fall collapse of filaments having centrally concentrated density profiles along their crests (see Fig. 6 from @pon2011, Fig. 5 from Peretto et al. 2007, and Fig. 1 from @myers2005). ![Time evolution of free-falling filaments in the $Z-\nabla V$ parameter space. All filaments have the same initial density $n_0=4\times10^4$ cm$^{-3}$.
Each solid line represents the evolution of a filament with a different initial aspect ratio $A_0$, indicated at its bottom end. Dashed lines represent isochrones from 0.5 Myr to 4 Myr, separated by 0.5 Myr. The solid and empty squares mark the positions of the four filaments, for projection angles of 45° and 67°, respectively (cf text and Appendix A).[]{data-label="model"}](dist_grad_time.pdf){width="7.cm"} In the context of longitudinal collapse, the observed velocity gradients are an indication of the time evolution of each filament, as the gradient becomes steeper during the collapse. We can therefore potentially constrain the age of a filament by measuring its velocity gradient. [@pon2012] showed that the collapse time of free-falling filaments is $\tau_{1D} = \sqrt{2/3}\,A\,\tau_{3D}$, where $\tau_{3D}$ is the spherical free-fall time at the same volume density, and $A$ is the aspect ratio of the filament. This equation shows that the collapse time of a filament can be significantly longer than the standard $\tau_{3D}$. One can compute the time evolution of the filament velocity gradient as a function of the semi-major axis $Z$, for a given initial density $n_0$, initial aspect ratio $A_0$, and initial semi-major axis $Z_0$, and then compare these to the values observed for the SDC13 filaments (see Appendix A for more details). Figure \[model\] shows that the velocity gradients of the modelled filaments are consistent with the observed ones after 1 to 4 Myr of evolution, depending on the projection angles. Similar collapse timescales have recently been reported for the massive star-forming filament NGC6334 [@zernickel2013]. Also, the fact that Fi-NEn seems dynamically younger is consistent with the idea that it is collapsing towards a secondary potential well centre around position 31. Despite the limitations of this comparison (SDC13 is more of a hub-filament system than a single filament - cf Fig.
\[infpic\]), these timescales are consistent with the presence of Class I protostellar sources in the SDC13 filaments.

Gravity-driven turbulence and core formation
--------------------------------------------

Figure \[profiles\](right) shows the total velocity dispersion (including thermal and non-thermal motions - cf Arzoumanian et al. 2013) along each filament. We can see a strong correlation between this quantity and the dense gas velocity, the velocity dispersion increasing towards each of the two centres of collapse (positions 8 and 31). During the collapse of the filaments, the gravitational energy of the gas is converted into kinetic energy [e.g. @peretto2007; @vazquezsemadeni2007]. This has been proposed to be the main process by which radially collapsing filaments manage to keep a constant width [@arzoumanian2013]. The velocity dispersion resulting from gravitational energy conversion is then a function of the infall velocity and the gas density [@klessen2010]. The global dynamical evolution of a supercritical filament initially at rest, with uniform density, can be described in two stages [e.g. @peretto2007]. In the first stage, the gas at the filament ends is accelerated more efficiently, and the filament develops a linear velocity gradient increasing from centre to ends. In the second stage, as matter accumulates at the centre, the velocity gradient starts to reverse, because the acceleration close to the central mass becomes larger than that at the filament ends. It is clear that in the second stage both the infall velocity and the velocity dispersion will increase faster at the centre than at the filament ends, while in the first stage we might expect the velocity dispersion and the infall velocity to be larger at the filament ends. In the case of SDC13, the infall velocity is larger at the filament ends, but the velocity dispersion is larger at the centre.
This could be explained by the fact that the SDC13 filaments are in an intermediate evolutionary stage, and/or that the initial density profile of SDC13 is not uniform. In fact, SDC13 as a whole has a more complex geometry and density structure than those of a single filament; it resembles more a hub-filament system [@myers2009]. In such a configuration, the potential well is dominated by the central mass of the hub. The resulting hub density profile might then favour an early increase of the velocity dispersion during the hub collapse. However, this is speculation and remains to be tested. ![Schematic representation of the SDC13 velocity field (arrows) (a) as viewed on the plane of the sky; (b) as viewed sideways, with the observer on the left-hand side of the plot. The two centres of collapse are symbolised by black dashed circles with a cross at their centres.[]{data-label="infpic"}](pictinf.pdf){width="9cm"} The MM1 and MM2 sources, the most massive ones in SDC13, are located near the centre of the system. Within the context of the free-fall collapse scenario, these sources must have formed within a tenth of a pc of their current position, suggesting that MM1 and MM2 have basically formed in situ, near the converging point of the filaments. This location is privileged as far as source mass growth is concerned, since flows of dense gas are running towards it, both bringing more material to be accreted and locally increasing the velocity dispersion. We can quantify the impact both processes have on core formation. The mass infall rate due to material running through the filaments can be estimated as $\dot{M}_{inf}=\pi (W/2)^2\rho_{fil}V_{inf}$, where $W$ is the filament width, $\rho_{fil}$ the filament density, and $V_{inf}$ the infall velocity. Using the average values given in Table 3, along with an infall velocity $V_{inf}\simeq0.2$ km/s at the core locations, we find that $\dot{M}_{inf}\simeq2.5\times10^{-5}$ M$_{\odot}$/yr.
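The infall-rate estimate above can be reproduced numerically. Since the Table 3 values are not repeated here, the sketch below assumes a representative mean filament density of $n\simeq3\times10^4$ cm$^{-3}$ (an assumed round number, not a value quoted in the text) together with $W\sim0.3$ pc and a mean molecular weight of 2.33:

```python
import math

m_H, M_sun, pc, yr = 1.673e-27, 1.989e30, 3.086e16, 3.156e7
mu = 2.33  # mean molecular weight (assumed)

def infall_rate(width_pc, n_cm3, v_inf_kms):
    """Mdot = pi (W/2)^2 rho_fil V_inf, returned in M_sun/yr."""
    area = math.pi * (width_pc * pc / 2.0) ** 2   # filament cross-section (m^2)
    rho = n_cm3 * 1e6 * mu * m_H                  # number -> mass density (kg/m^3)
    return area * rho * v_inf_kms * 1e3 * yr / M_sun

# W ~ 0.3 pc, n ~ 3e4 cm^-3 (assumed), V_inf ~ 0.2 km/s at the core positions
print(f"{infall_rate(0.3, 3e4, 0.2):.1e}")  # ~2.5e-5 M_sun/yr for these values
```

A density of a few $10^4$ cm$^{-3}$ indeed yields a rate of order $2.5\times10^{-5}$ M$_{\odot}$/yr, i.e. $\sim25$ M$_{\odot}$ per filament per Myr, as stated in the following paragraph.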
This means that over a million years each filament would bring $\sim25~M_{\odot}$ to the SDC13 centre. This is probably an upper limit since the gas has not been infalling for a million years at its current infall velocity. Now, if we consider that the gravity-driven turbulence acts as a support against gravity, then the effective Jeans mass becomes ${\rm M_{J}^{eff}}=0.9~{\rm M_{\odot}} (\sigma_{tot}/0.2~{\rm km\,s^{-1}})^3$. With a gas velocity dispersion $\sigma_{tot}\simeq 0.8$ km/s towards the centre of the system, the effective Jeans mass goes up to $\rm{M_{J}^{eff}}\simeq 60$ M$_{\odot}$. This is close to the observed masses for MM1 and MM2, and suggests that filament longitudinal collapse might lead to the formation of super-Jeans cores more as a result of enhanced turbulent support than as a result of large-scale accretion.

Conclusion
==========

The presence of protostellar sources and organised velocity gradients along the SDC13 filaments suggests that local and large-scale longitudinal collapse are simultaneously taking place. As a result of the high aspect ratios of the SDC13 filaments, the timescale of the former process is much shorter than that of the latter. Here we propose that the large-scale longitudinal collapse mostly contributes to the increase of the turbulent support towards the centre of collapse, where starless cores with masses an order of magnitude larger than the thermal Jeans mass can form. This study therefore suggests that super-Jeans prestellar cores may form at the centre of collapsing clouds. Despite this increase of support at the centre of SDC13, the MM1 and MM2 masses are still much lower than the mass of the O-type star-forming core sitting at the centre of SDC335 [@peretto2013]. We speculate that the major difference resides in the mass and morphology of the gas reservoirs surrounding these sources.
Dense, collapsing, spherical clouds of cold gas are more efficient in concentrating matter at their centres in a short amount of time. The study of a larger sample of massive star-forming clouds presenting all sorts of morphologies is required to test this assertion.

--------- -------------------------- ------- ------------- -------------------
Name      Sizes                      PA      R$_{eff}$     $F^{int}_{1.2mm}$
          ($\arcsec\times\arcsec$)   (°)     ($\arcsec$)   (Jy)
Fil-NEn   $104\times20$              +38     30.6          $0.259\pm0.032$
Fil-NEs   $95\times19$               +19     29.5          $0.213\pm0.030$
Fil-NW    $152\times16$              -31     29.6          $0.198\pm0.031$
Fil-SE    $107\times22$              -56     32.8          $0.219\pm0.032$
--------- -------------------------- ------- ------------- -------------------

------ ------------- ------------- ------------------ ------------------- ------------- ------------- ------ ----------- --------------- -------------
Name   RA            Dec           $F^{pk}_{1.2mm}$   $F^{int}_{1.2mm}$   Maj.          Min.          PA     R$_{eff}$   Protostellar?   Fragmented?
       (J2000)       (J2000)       (mJy/beam)         (mJy)               ($\arcsec$)   ($\arcsec$)   (°)    ($\arcsec$)
MM1    18:14:30.86   -17:33:20.4   51.8               $94.5\pm34.3$       28.5          11.6          +27    15.7        yes             –
MM2    18:14:28.53   -17:33:30.9   48.3               $73.2\pm23.3$       14.0          11.9          +21    12.8        no              no
MM3    18:14:35.53   -17:30:53.4   34.6               $22.3\pm16.5$       11.3          6.4           +26    7.7         no              no
MM4    18:14:30.63   -17:33:59.0   31.2               $23.0\pm13.8$       10.5          6.8           -49    8.4         yes             –
MM5    18:14:33.89   -17:31:14.4   28.6               $10.3\pm8.4$        8.4           3.8           +39    5.6         yes             no
MM6    18:14:35.30   -17:30:36.0   28.4               $8.8\pm7.7$         6.5           3.7           -45    5.2         no              no
MM7    18:14:32.96   -17:34:19.9   26.3               $26.0\pm17.6$       15.6          7.2           -60    9.9         yes             no
MM8    18:14:29.93   -17:33:48.4   20.4               –                   –             –             –      –           no              –
MM9    18:14:32.73   -17:31:39.0   19.9               $9.8\pm7.0$         7.7           5.6           +43    6.6         no              no
MM10   18:14:32.26   -17:32:24.5   18.8               $31.0\pm17.3$       18.7          10.5          +27    13.0        no              yes
MM11   18:14:27.37   -17:32:42.0   17.9               $5.7\pm4.0$         7.00          3.9           -32    5.2         no              no
MM12   18:14:34.59   -17:30:21.9   17.9               –                   –             –             –      –           no              –
MM13   18:14:25.72   -17:32:07.0   17.8               $6.5\pm4.2$         6.1           4.8           -44    5.6         no              no
MM14   18:14:34.83   -17:34:41.0   17.7               $14.0\pm11.0$       10.5          8.2           -9     8.6         no              –
MM15   18:14:26.67   -17:32:27.9   16.2               –                   –             –             –      –           no              no
MM16   18:14:24.56   -17:31:42.4   16.1               $8.5\pm5.6$         12.2          4.1           -35    6.8         no              no
MM17   18:14:24.79   -17:31:56.4   14.6               –                   –             –             –      –           no              no
MM18   18:14:29.46   -17:33:13.4   13.8               –                   –             –             –      –           yes             –
------ ------------- ------------- ------------------ ------------------- ------------- ------------- ------ ----------- --------------- -------------

--------- ----------------- -------------- --------------- -------------------------- ------------------------- -------------------------
Name      Sizes$_{dec}$     Aspect ratio   Mass            Density                    M$_{line}$                $\nabla V$
          ($pc\times pc$)                  (M$_{\odot}$)   ($\times10^4$ cm$^{-3}$)   (M$_{\odot}$ pc$^{-1}$)   (km s$^{-1}$ pc$^{-1}$)
Fil-NEn   $1.8\times0.3$    6              $256\pm32$      $3.5\pm0.4$                $142\pm18$                $0.22\pm0.07$
Fil-NEs   $1.6\times0.3$    5              $211\pm30$      $3.2\pm0.5$                $117\pm19$                $0.63\pm0.19$
Fil-NW    $2.6\times0.2$    13             $195\pm31$      $4.1\pm0.7$                $75\pm12$                 $0.36\pm0.11$
Fil-SE    $1.9\times0.3$    6              $216\pm32$      $2.8\pm0.4$                $114\pm17$                $0.62\pm0.14$
--------- ----------------- -------------- --------------- -------------------------- ------------------------- -------------------------
------ ----------------- --------------- --------------------------
Name   R$_{eff}^{dec}$   Mass            Density
       (pc)              (M$_{\odot}$)   ($\times10^4$ cm$^{-3}$)
MM1    0.26              $74.8\pm27.1$   $1.8\pm0.6$
MM2    0.21              $81.1\pm25.8$   $3.6\pm1.1$
MM3    0.10              $24.7\pm18.3$   $10.2\pm7.6$
MM4    0.11              $18.1\pm10.9$   $5.6\pm3.4$
MM5    $<0.05$           $8.2\pm6.6$     $>27.2\pm21.9$
MM6    $<0.05$           $9.7\pm8.5$     $>32.1\pm28.2$
MM7    0.15              $20.2\pm13.9$   $2.5\pm1.7$
MM8    –                 –               –
MM9    0.06              $10.9\pm7.7$    $20.9\pm14.8$
MM10   0.21              $34.3\pm19.2$   $1.5\pm0.9$
MM11   $<0.05$           $6.3\pm4.4$     $>20.9\pm14.6$
MM12   –                 –               –
MM13   $<0.05$           $7.3\pm4.7$     $>24.2\pm15.6$
MM14   0.12              $15.6\pm12.2$   $3.8\pm2.9$
MM15   –                 –               –
MM16   0.07              $9.4\pm6.1$     $11.3\pm7.4$
MM17   –                 –               –
MM18   –                 –               –
------ ----------------- --------------- --------------------------

----- ------------- ------------- ---------- -------- --------------- ----------------
Id    RA            Dec           Velocity   FWHM     $\sigma_{NT}$   $\sigma_{tot}$
      (J2000)       (J2000)       (km/s)     (km/s)   (km/s)          (km/s)
P1    18:14:35.29   -17:34:41.9   37.83      0.87     0.37            0.42
P2    18:14:34.50   -17:34:33.7   37.89      0.95     0.40            0.45
P3    18:14:33.77   -17:34:25.8   37.88      1.32     0.56            0.60
P4    18:14:32.95   -17:34:18.0   37.68      1.58     0.67            0.70
P5    18:14:32.16   -17:34:10.3   37.58      1.60     0.68            0.71
P6    18:14:31.36   -17:34:02.8   37.38      1.57     0.67            0.70
P7    18:14:30.54   -17:33:58.1   37.25      1.76     0.75            0.78
P8    18:14:29.69   -17:33:49.4   37.09      1.87     0.79            0.82
P9    18:14:28.53   -17:33:38.0   37.19      1.92     0.81            0.84
P10   18:14:28.11   -17:33:25.8   37.24      1.97     0.84            0.86
P11   18:14:27.84   -17:33:12.7   37.48      1.50     0.64            0.67
P12   18:14:27.50   -17:32:59.9   37.41      1.22     0.52            0.56
P13   18:14:27.29   -17:32:46.1   37.49      1.03     0.43            0.48
P14   18:14:26.71   -17:32:35.2   37.60      1.03     0.43            0.48
P15   18:14:25.34   -17:32:15.0   37.39      0.68     0.28            0.35
P16   18:14:30.74   -17:33:38.2   36.83      1.72     0.73            0.76
P17   18:14:29.58   -17:33:32.2   36.96      1.86     0.79            0.82
P18   18:14:30.83   -17:33:18.7   36.82      1.85     0.78            0.81
P19   18:14:31.09   -17:33:05.9   36.59      1.52     0.64            0.67
P20   18:14:31.36   -17:32:52.8   36.48      1.35     0.57            0.61
P21   18:14:31.67   -17:32:39.7   36.33      0.88     0.37            0.42
P22   18:14:32.01   -17:32:26.9   36.19      1.06     0.45            0.49
P23   18:14:32.17   -17:32:13.1   36.21      1.01     0.43            0.48
P24   18:14:32.17   -17:31:59.6   36.17      0.87     0.37            0.42
P25   18:14:32.17   -17:31:45.7   36.18      0.66     0.27            0.34
P26   18:14:32.38   -17:31:32.6   36.19      0.66     0.27            0.34
P27   18:14:32.93   -17:31:20.6   36.34      0.79     0.33            0.39
P28   18:14:33.61   -17:31:10.8   36.37      1.01     0.43            0.48
P29   18:14:34.47   -17:31:04.4   36.44      1.29     0.55            0.59
P30   18:14:35.23   -17:30:56.2   36.44      1.38     0.58            0.62
P31   18:14:35.65   -17:30:43.1   36.47      1.44     0.61            0.64
----- ------------- ------------- ---------- -------- --------------- ----------------

We thank the anonymous referee whose report helped improve the quality of this paper. We would like to thank Alvaro Hacar for helping with the MAMBO data reduction. And finally, we want to thank the IRAM 30m staff for their support during the observing runs.

Homologous free-fall collapse of filaments
==========================================

The equation of motion for a filament edge in homologous free-fall collapse is given by [@pon2012]: $$\frac{d^2Z(t)}{dt^2}=-\frac{GM}{Z(t)^2}$$ where $Z(t)$ is the semi-major axis of the filament, and $M$ its mass. Multiplying each side of Eq. (A.1) by $\frac{dZ(t)}{dt}$ and integrating over $t$, we obtain the following equation: $$\frac{dZ(t)}{dt}=V(t)=-\sqrt{2GM[Z(t)^{-1}+\alpha]}$$ where $\alpha$ is the constant of integration. Now, if we consider a filament initially at rest, we have $V(t=0)=0$, and therefore $\alpha= -Z_0^{-1}$, where $Z_0=Z(t=0)$ is the initial semi-major axis of the filament. We can solve for $Z(t)$ by setting $Z(t)/Z_0=\cos^2\beta$ in Eq. (A.2) and integrating over $t$ again. We then obtain the following equation: $$\beta+\frac{1}{2}\sin2\beta = t\sqrt{2GM/Z_0^3}$$ or, replacing the mass by $M=2\pi R^2Z_0\rho_0$, $$\beta+\frac{1}{2}\sin2\beta = \frac{t}{A_0}\sqrt{4\pi G\rho_0}$$ where $A_0$ is the initial aspect ratio of the filament and $\rho_0$ its initial density. Note that the filament free-fall time $\tau_{1D}$ is calculated for $Z(t)/Z_0=0$, that is, when $\beta=\pi/2$.
We therefore obtain: $$\tau_{1D}=A_0\sqrt{\frac{\pi}{16G\rho_0}}=\sqrt{2/3}A_0\tau_{3D}$$ where $\tau_{3D}$ is the spherical free-fall time at the same density. Finally, given that the velocity gradient is linear during the homologous collapse of a filament, we can compute the velocity gradient $\nabla V=V(t)/Z(t)$. In order to compute the different values plotted in Fig. \[model\] we sampled $\beta$ between 0 and $\pi/2$ and used the relevant equations presented in this appendix. We further assumed that the radius of the filaments stays the same during the collapse. For the purpose of the calculations we took $R=0.15$ pc. Figure \[model\] shows the evolution of seven filaments (solid lines) all having the same initial density $n_0=4\times10^4$ cm$^{-3}$ but a different initial aspect ratio[^6]. Nearly all quantities presented in this paper are affected by projection effects. We can only correct for them if we know the projection angle $\theta$ of the filaments with respect to the line of sight. Unfortunately, it is impossible to know the exact value of $\theta$ for a given filament. However, for randomly sampled filament orientations one has $<\theta>=67\degr$. So for this letter, we considered two cases, the case where $\theta=45\degr$ and the case where $\theta=67\degr$. The corrected values for the velocity, semi-major axis, and velocity gradients are: $V_{corr}= V_{obs}/\cos(\theta)$, $L_{corr}=L_{obs}/\sin(\theta)$, and $\nabla V_{corr}=\nabla V_{obs} \tan(\theta)$. For $\theta=45\degr$ these corrections mean that the observed velocity gradients remain unaffected, but the semi-major axis becomes larger by a factor 1.4. For $\theta=67\degr$, the velocity gradient increases by a factor 2.4, and the semi-major axis by a factor 1.1. The points in Fig. \[model\] corresponding to the four SDC13 filaments have been corrected by such factors.
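The parametric collapse solution and the projection corrections above can be checked numerically. The sketch below verifies that setting $\beta=\pi/2$ in Eq. (A.4) reproduces the closed form $\tau_{1D}=A_0\sqrt{\pi/(16G\rho_0)}$, and evaluates the $\theta=67\degr$ correction factors; the density and aspect ratio are the fiducial values used in this appendix.

```python
import math

G = 6.674e-8      # cgs gravitational constant
M_H = 1.673e-24   # g

# Fiducial values from the appendix: n0 = 4e4 cm^-3, and an example A0 = 6.
n0 = 4.0e4
rho0 = 2.8 * M_H * n0
A0 = 6.0

def t_of_beta(beta):
    """Time at which the homologous collapse reaches angle beta (Eq. A.4):
    beta + 0.5*sin(2*beta) = (t/A0) * sqrt(4*pi*G*rho0)."""
    return (beta + 0.5 * math.sin(2.0 * beta)) * A0 / math.sqrt(4.0 * math.pi * G * rho0)

# 1D free-fall time: beta = pi/2 (i.e. Z -> 0) vs. the closed form tau_1D.
tau_1d = t_of_beta(math.pi / 2.0)
tau_closed = A0 * math.sqrt(math.pi / (16.0 * G * rho0))
assert abs(tau_1d - tau_closed) <= 1e-9 * tau_closed  # identical, as derived

# Projection corrections for theta = 67 deg to the line of sight:
theta = math.radians(67.0)
grad_factor = math.tan(theta)           # ~2.4: gradients are underestimated
length_factor = 1.0 / math.sin(theta)   # ~1.1: semi-major axes slightly larger
```

Sampling `t_of_beta` over $0\le\beta<\pi/2$ (with $Z/Z_0=\cos^2\beta$) reproduces the model tracks of Fig. \[model\] up to the stated projection factors.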
Note that much larger angles would lead to very large velocity gradients which have never been observed on parsec scales, while much smaller angles would imply very large initial aspect ratios and semi-major axes. Even though we cannot rule out such configurations, they are unlikely.

Protostellar classes of SDC13 sources
=====================================

Infrared spectral indexes have been used for more than 20 years to classify YSOs according to their evolutionary stages. The idea is that the peak of a YSO spectral energy distribution (SED) evolves from submillimeter to infrared wavelengths during the first few million years of its life. Calculating the slope, i.e. the spectral index, of the SED over a critical wavelength range allows the quantification of this evolution. YSO classes are traditionally defined [e.g. @lada1984; @wilking1989] between 2 µm and 10 µm by calculating the quantity $\alpha_{[2-10]} = \frac{d(\log(\lambda S_{\lambda}))}{d(\log(\lambda))}$, Class I sources having $\alpha_{[2-10]} > 0$, Class II sources $0> \alpha_{[2-10]} > -1.6$, and Class III sources $\alpha_{[2-10]} < -1.6$. ![(left): Spitzer 24 µm image of SDC13 (greyscale) on which we overplotted the MAMBO 3 mJy/beam contour (yellow). On this image we also report the positions of the seven protostellar sources identified along the SDC13 filaments (purple circles), along with the detection of two extra sources located on the outer edge of the contour (blue circles). (right): 8 µm image of SDC13 (greyscale). The contour is the same as in the left panel. The purple circles mark the positions of the nine 8 µm sources identified towards the 24 µm sources. The meaning of the symbols' colours is the same as in the left panel.[]{data-label="24mic"}](sdc13_spit8_24_ext.pdf){width="9.cm"} In the context of this study we focussed on sources identified at 24 µm. We only considered sources which are spatially coincident with the SDC13 filaments (purple circles on Fig.
\[24mic\]), limiting potential background/foreground contamination. To extract sources we used the Starfinder algorithm [@diolaiti2000] and found seven 24 µm sources, five of which have 8 µm counterparts[^7] (see Fig. \[24mic\]). We then calculated their spectral index as estimated between 8 µm and 24 µm (fluxes of multiple 8 µm sources have been combined, and for those with no 8 µm counterpart we used the 8 µm detection limit of 2 mJy). One advantage provided by these two wavelengths is that they are similarly affected by extinction [@flaherty2007; @chapman2009], which implies that the estimated values of $\alpha_{[8-24]}$ are largely insensitive to extinction. As a result, we find three sources with $-0.4<\alpha_{[8-24]}<0$, and another four with $0<\alpha_{[8-24]}<2.2$. Note that we also report the detection of two additional protostellar sources located on the outer edge of the filament (blue circles in Fig. \[24mic\]). Their spectral indexes are $\sim -1.5$, indicating that they might well be background/foreground sources. Note that we do not have any detection towards MM1 at 24 µm. This is due to the fact that the 24 µm emission is rather diffuse towards this source, and Starfinder failed to find any compact sources, a requirement for protostar detection. We then used the YSO C2D database of Serpens, Ophiuchus, and Perseus [@evans2009] in order to compare the $\alpha_{[2-10]}$ spectral index with $\alpha_{[8-24]}$. The results of this comparison are displayed in Fig. \[index\_comp\]. Although there is some dispersion and not a one-to-one correlation, there is still a linear correlation between the two indexes. Despite the dispersion in $\alpha_{[8-24]}$, this plot strongly suggests that the seven protostellar sources identified within the SDC13 filaments have mid-infrared spectral indexes consistent with Class I and/or young Class II objects.
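As a minimal illustration of the spectral-index definition used above, the snippet below evaluates $\alpha = d(\log(\lambda S_{\lambda}))/d(\log\lambda)$ between two bands from their measured flux densities; the 8 µm and 24 µm fluxes are hypothetical, not taken from the SDC13 catalogue.

```python
import math

def spectral_index(lam1_um, f1_mjy, lam2_um, f2_mjy):
    """alpha = d log(lambda * S_lambda) / d log(lambda) between two bands.

    Since S_lambda is proportional to S_nu / lambda^2, in terms of the
    measured flux densities S_nu we have lambda * S_lambda ~ S_nu / lambda.
    """
    return (math.log10(f2_mjy / lam2_um) - math.log10(f1_mjy / lam1_um)) / \
           (math.log10(lam2_um) - math.log10(lam1_um))

# Hypothetical fluxes for illustration only:
alpha = spectral_index(8.0, 5.0, 24.0, 40.0)   # SED rising to 24 um
print(f"alpha_[8-24] = {alpha:+.2f}")          # positive -> Class I range
```

A flux density rising from 8 µm to 24 µm yields $\alpha_{[8-24]}>0$, the Class I regime in the classification quoted above.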
We further analysed the properties of the nine 8 µm sources (see Fig. \[24mic\]) by constructing a colour-colour diagram using all IRAC [*Spitzer*]{} bands between 3.6 µm and 8 µm. Depending on their colours, sources of different classes are expected to sit in different regions of such a diagram [@allen2004; @megeath2004]. In Fig. \[colcol\] we can see that seven sources sit in the Class I region of the diagram, while the two remaining sources are clearly sitting in the Class III/stellar contamination region. The latter are in fact the two sources located on the edge of the filament (i.e. the blue circles in Fig. \[24mic\]), already identified as likely foreground/background sources from our spectral index analysis. Note also that extinction can significantly affect the source locations in this plot, mostly artificially increasing their \[3.6\]-\[4.5\] colour. However, the extracted sources are red enough that even for an extinction of $A_v=50$ most identified sources would remain in the Class I region of the diagram, the other ones just moving to the other side of the Class II/Class I border. This is consistent with the $\alpha_{[8-24]}$ analysis performed above. Finally, based on the 24 µm flux of protostellar sources and an assumed extinction, one may obtain an estimate of the source bolometric luminosity [@dunham2008]. Following @parsons2009 we estimate that, with extinctions in the range of $10<\rm{A_v}<50$ (matching the range of observed dust column density in SDC13) and 24 µm fluxes between 10 mJy and 60 mJy for all but one source, the SDC13 protostellar sources have bolometric luminosities in the range of 10 L$_{\odot}$ to a few 100 L$_{\odot}$. Such luminosities stand at the high end of the luminosity distribution of nearby low-mass protostars [@evans2009], possibly indicating a slightly shorter lifetime of the SDC13 protostellar sources compared to the standard Class I low-mass protostar lifetime.
![Mid-infrared spectral indexes for all $\sim1000$ YSOs from the C2D catalogue [@evans2009]. For each source the spectral index has been calculated for two wavelength ranges: from 2 µm to 10 µm, and from 8 µm to 24 µm. []{data-label="index_comp"}](specind_c2d.pdf){width="7.cm"} ![Mid-infrared \[3.6\]-\[4.5\] vs \[5.8\]-\[8.0\] colour-colour diagram for the nine sources extracted at 8 µm. The colour coding is the same as in Fig. \[24mic\]. The arrow indicates an extinction of $A_v=10$. The different regions of the plot corresponding to the different protostellar classes are also shown. []{data-label="colcol"}](SDC13_spitzer-diagram.pdf){width="7.cm"}

[^1]: Based on observations carried out with the IRAM 30m Telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).

[^2]: http://www.iram.fr/IRAMFR/GILDAS

[^3]: The sizes are estimated in the same way as for the IRDC catalogue of @peretto2009 [see Appendix A], meaning that we calculate the matrix of moment of inertia of the pixels within the 3 mJy/beam contour for each filament. Then we diagonalise the matrix and get two values corresponding to the sizes along the major and minor axes of the filaments. The radius R$_{eff}$ corresponds to the radius of the disc having the same area as the filament. Note that the 3 mJy/beam contour encompasses all filaments; we therefore artificially divided the MAMBO emission at the filament junctions.

[^4]: Note that these widths are compatible with the standard 0.1 pc FWHM width observed towards nearby interstellar filaments [@arzoumanian2011] since the SDC13 measurements we provide here extend beyond the FWHM of the filament profiles.

[^5]: We observed a few core positions in SiO(2-1), a well-known tracer of protostellar outflows, but did not detect any wing emission.

[^6]: Note that since we fixed the filament radius, $A_0$ and $Z_0$ are not independent parameters. Also, the aspect ratios given in Table 3 correspond to half of this $A_0$ parameter.
[^7]: Note that two 24 µm sources split up into two 8 µm sources
---
abstract: 'An improved Iterative Physical Optics (IPO) image approximation method has been presented to dramatically increase the accuracy of the approximation and extend its applicability to PEC surfaces with smaller radii or larger curvatures. Starting from the first-order conventional PO image approximation, the IPO image approximation method iteratively corrects the surface current to compensate for the deviation of the electric field boundary condition on the PEC surfaces, making use of the local plane wave approximation. Numerical validations with two popular PEC surfaces, [*i*.*e*.]{}, the parabolic dish antennas and the PEC spheres, are carried out and the results show that the IPO approximation method increases the surface current accuracy by more than two orders of magnitude, compared to the conventional PO image approximation method.'
author:
- 'Shaolin Liao^\*,\ 1^ and Lu Ou^2^'
title: 'IPO: Iterative Physical Optics Image Approximation'
---

[^1] [^2]

Introduction {#sec:intro}
============

The Physical Optics (PO) image approximation has been widely used for smooth Perfect Electric Conductor (PEC) surfaces, due to its efficiency and its accuracy for relatively smooth surfaces [@liao_image_2006; @liao_near-field_2006; @shaolin_liao_new_2005; @liao_fast_2006; @liao_cylindrical_2006; @liao_beam-shaping_2007; @liao_fast_2007; @liao_validity_2007; @liao_high-efficiency_2008; @liao_four-frequency_2009; @vernon_high-power_2015; @liao_multi-frequency_2008; @liao_fast_2007-1; @liao_sub-thz_2007; @liao_miter_2009; @liao_fast_2009; @liao_efficient_2011; @liao_spectral-domain_2019]. Compared to other advanced Computational Electromagnetics (CEM) methods such as the Method of Moments (MoM), the application of the PO image approximation can dramatically reduce the number of unknowns and memory requirements for the electromagnetic scattering problem of electrically large objects [@ma_efficient_2012].
The PO image approximation finds important applications in antenna design and analysis [@chuan_liu_design_2013; @yurduseven_compact_2011], beam-shaping [@liao_beam-shaping_2007; @liao_high-efficiency_2008; @liao_four-frequency_2009; @vernon_high-power_2015; @liao_multi-frequency_2008; @liao_fast_2007-1; @liao_sub-thz_2007; @liao_miter_2009], and Radar Cross Section (RCS) calculation [@emhemmed_analysis_2019; @wang_radar_2017]. In particular, the author's group has applied the PO method to design a beam-shaping mirror system in the millimeter-wave regime to shape multi-mode TE waves (TE$_{22,6}$/110 GHz, TE$_{23,6}$/113 GHz, TE$_{24,6}$/116 GHz and TE$_{25,6}$/118.8 GHz) into Gaussian beams, resulting in efficiencies $> 98\%$ [@liao_beam-shaping_2007; @liao_high-efficiency_2008; @liao_four-frequency_2009; @vernon_high-power_2015; @liao_multi-frequency_2008]. Although useful in the above important applications, the PO image method is only exact for planar PEC surfaces and is approximate for non-planar but relatively smooth PEC surfaces: the larger the radii or the smaller the curvatures of the surfaces, the better the approximation [@liao_image_2006; @liao_near-field_2006]. So it would be beneficial if the PO image approximation could be extended to PEC surfaces of smaller radii or larger curvatures with satisfactory accuracy, which is the focus of this letter. ![The improved IPO approximation method for the electromagnetic scattering from relatively smooth PEC surfaces: a) the parabolic dish antenna; b) the PEC spheres; and c) the IPO algorithm.[]{data-label="fig:problem"}](all.jpg){width="80.00000%"}

Problem Formulation {#sec:problem}
===================

The electromagnetic scattering problem from a PEC surface is shown in Fig. \[fig:problem\].
In general, the unknown surface current can only be solved through rigorous CEM methods such as the MoM, by imposing the zero total tangential electric field boundary condition on the metallic surface as follows, \[eqn:BC\] $$\hat{n} \times \left( \overline{E}^{i} + \overline{E}^{s} \right) = 0,$$ where $\hat{n}$ is the unit surface normal of the PEC surface; also, the superscripts $s$ and $i$ denote the scattering and incident electric fields, respectively. For a perfectly planar surface, the exact surface current is given by the PO as follows, \[eqn:PO\] $$\overline{J}^{PO} = \hat{n} \times \overline{H} = \hat{n} \times \left( \overline{H}^{i} + \overline{H}^{s} \right) = 2\, \hat{n} \times \overline{H}^{i},$$ and the boundary condition in Eq. (\[eqn:EFIE\]) reduces to the following, \[eqn:BC2\] $$\hat{n} \times \left( \overline{E}^{i} + \mathcal{L}\left\{ \overline{J}^{PO} \right\} \right) = 0,$$ where $\mathcal{L}$ is the operator that computes the scattering electric field from the surface current. For smooth PEC surfaces, PO is only approximate [@liao_image_2006; @liao_near-field_2006; @liao_multi-frequency_2008]. So it would be beneficial if the accuracy of the conventional PO image approximation method could be improved so that it can be extended to PEC surfaces with smaller radii or larger curvatures.

The Scattering Electric Field
=============================

The scattering electric field $\overline{E}^s$ can be expressed as the convolution of the surface current $\overline{J}$ with the electric dyadic Green's function $\overline{\overline{G}}_e$ as follows, \[eqn:field\_convolution\] $$\overline{E}^{s} = \mathcal{L}\left\{ \overline{J} \right\} = \overline{\overline{G}}_e \circledast \overline{J} = - j \omega \mu \int_{S} \overline{\overline{G}}_e(R) \cdot \overline{J}(\overline{r}')\, dS',$$ where $\circledast$ denotes the 2D convolution operation; $R = |\overline{r} - \overline{r}'|$, with $\overline{r}$ and $\overline{r}'$ being the observation point and source point, respectively; $\omega$ is the angular frequency; $\mu$ is the permeability; and the dyadic Green's function is given as Eq. (\[eqn:dyadic\_green\]), \[eqn:dyadic\_green\] $$\overline{\overline{G}}_e(R) = \overline{\overline{I}}\, g(R) + \frac{\nabla \nabla}{k^2}\, g(R); \quad g(r) = \frac{e^{-jkr}}{4 \pi r}, \quad r = |\overline{r}|; \quad k = |\overline{k}| = \omega \sqrt{\mu \epsilon},$$ with $\overline{\overline{I}}$ being the identity matrix and $k$ being the magnitude of the wave vector $\overline{k} = [k_x, k_y, k_z]$.

The EFIE
========

With the scattering electric field given in Eq.
(\[eqn:field\_convolution\]), the EFIE can be obtained from the boundary condition of Eq. (\[eqn:BC\]) as follows, \[eqn:EFIE\] $$\hat{n} \times \left( \overline{E}^{i} + \mathcal{L}\left\{ \overline{J} \right\} \right) = 0,$$ where the surface current $\overline{J}$ remains to be solved.

The Iterative PO (IPO) Approximation
====================================

The first-order IPO approximation for the surface current $\overline{J}$ is the PO image theorem given in Eq. (\[eqn:PO\]), \[eqn:PO1\] $$\overline{J}_1^{IPO} = \overline{J}^{PO} = 2\, \hat{n} \times \overline{H}^{i}.$$ The deviation of the boundary condition of Eq. (\[eqn:BC\]) is then given by, \[eqn:dE\] $$\delta \overline{E}_1^{s} = \overline{E}^{i} + \mathcal{L}\left\{ \overline{J}_1^{IPO} \right\}.$$ Approximating the local electromagnetic field as a local plane wave, the electric field deviation $\delta \overline{E}_1^s$ is related to the magnetic field deviation $\delta \overline{H}_1^s$ as follows, \[eqn:dH\] $$\delta \overline{H}_1^{s} = \frac{1}{\eta}\, \hat{n} \times \delta \overline{E}_1^{s}; \quad \eta = \sqrt{\frac{\mu}{\epsilon}}.$$ Substituting Eq. (\[eqn:dH\]) into Eq. (\[eqn:dE\]), the following is obtained, \[eqn:dEH\] $$\eta\, \delta \overline{H}_1^{s} = \hat{n} \times \delta \overline{E}_1^{s} = \hat{n} \times \left( \overline{E}^{i} + \mathcal{L}\left\{ \overline{J}_1^{IPO} \right\} \right).$$ Now, to compensate for the deviation of the electric field $\delta \overline{E}_1^s$, the surface current can be corrected by $\delta \overline{J}_1^{IPO}$ as follows, $$\delta \overline{J}_1^{IPO} = - \hat{n} \times \delta \overline{H}_1^{s} = - \frac{1}{\eta}\, \hat{n} \times \left( \hat{n} \times \delta \overline{E}_1^{s} \right),$$ which can be written in terms of the tangential electric field deviation, \[eqn:current\_iteration\] $$\delta \overline{J}_1^{IPO} = \frac{1}{\eta}\, \delta \overline{E}_{1}^{s//}; \quad \delta \overline{E}_{1}^{s//} = - \hat{n} \times \left( \hat{n} \times \delta \overline{E}_1^{s} \right),$$ where $\delta \overline{E}_{1}^{s//}$ is the tangential projection of the electric field deviation on the PEC surface. The surface current correction of Eq. (\[eqn:current\_iteration\]) is applied iteratively until a satisfactory result is obtained, \[eqn:current\_update\] $$\overline{J}_{i+1}^{IPO} = \overline{J}_{i}^{IPO} + \delta \overline{J}_{i}^{IPO}.$$
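The fixed-point structure of the iteration can be illustrated with a toy scalar analogue. This is only a sketch: the matrix `L` below is a made-up, well-conditioned stand-in for the discretized scattering operator $\mathcal{L}$ (no actual Green's-function integration, and the $1/\eta$ scaling is absorbed into the residual); for the nearly planar surfaces targeted by IPO, the operator is close to the identity and the residual-correction loop converges.

```python
import numpy as np

# Toy analogue of the IPO loop: enforce the boundary condition E_i + L.J = 0
# by repeatedly correcting J with the residual field, as in the current
# update of the text. L is an assumed near-identity matrix, NOT the real
# electromagnetic scattering operator.
rng = np.random.default_rng(0)
n = 50
L = np.eye(n) + 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)
E_i = rng.standard_normal(n)

J = -E_i                      # "first-order PO" guess: exact if L were I
for _ in range(30):
    dE = E_i + L @ J          # residual of the boundary condition
    J = J - dE                # correct the current to cancel the residual

residual = np.linalg.norm(E_i + L @ J)
print(f"boundary-condition residual after iteration: {residual:.2e}")
```

Because the iteration map is $J \mapsto (I-L)J - E_i$, it contracts whenever the spectral radius of $I-L$ is below one, which is the toy counterpart of the smooth-surface (near-planar) condition assumed by IPO.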
![Parabolic dish antennas: left) Surface current deviation $\eta \left| \delta \overline{J}_{x, i}^{IPO} \right|$ at different iterations of the IPO approximation method for the focus length of $F=100 \lambda$; and right) Surface current deviation of the conventional PO image approximation method $\eta \left| \delta \overline{J}_x^{PO} \right|$ and the IPO approximation method $\eta \left| \delta \overline{J}_x^{IPO} \right|$ for parabolic dish antennas of different focus lengths $F$.[]{data-label="fig:parabolic"}](parabolic_F100.jpg){width="100.00000%"}

Algorithm
=========

Fig. \[fig:problem\]c) shows the algorithm of the improved IPO image approximation: it starts with the first-order PO image approximation; then it calculates the scattering electric field according to Eq. (\[eqn:field\_convolution\]), followed by updating of the surface current according to Eq. (\[eqn:current\_iteration\]) and Eq. (\[eqn:current\_update\]); finally the algorithm ends when a satisfactory result is met. ![PO surface current deviations for a parabolic dish antenna of a focus length of $F = 100 \lambda$ (left to right): $\eta \left| \delta \overline{J}_x^{PO} \right|$; $\eta \left| \delta \overline{J}_y^{PO} \right|$; and $\eta \left| \delta \overline{J}_z^{PO} \right|$.[]{data-label="fig:dJ_PO"}](dJx_1.jpg){width="105.00000%"} ![IPO surface current deviations for a parabolic dish antenna of a focus length of $F = 100 \lambda$ (left to right): $\eta \left| \delta \overline{J}_x^{IPO} \right|$; $\eta \left| \delta \overline{J}_y^{IPO} \right|$; and $\eta \left| \delta \overline{J}_z^{IPO} \right|$.[]{data-label="fig:dJ_IPO"}](dJx_IPO.jpg){width="105.00000%"} ![PEC spheres: left) Surface current deviation $\eta \left| \delta \overline{J}_{x, i}^{IPO} \right|$ at different iterations of the IPO approximation method for the radius of $R = 60 \lambda$; and right) Surface current deviation of the conventional PO image approximation method $\eta \left| \delta \overline{J}_x^{PO} \right|$ and
the IPO approximation method $\eta \left| \delta \overline{J}_x^{IPO} \right|$ for PEC spheres of different radii $R$.[]{data-label="fig:sphere"}](sphere_R60.jpg){width="100.00000%"} ![PO surface current deviations for a PEC sphere of a radius of $R = 60 \lambda$ (left to right): $\eta \left| \delta \overline{J}_x^{PO} \right|$; $\eta \left| \delta \overline{J}_y^{PO} \right|$; and $\eta \left| \delta \overline{J}_z^{PO} \right|$.[]{data-label="fig:dJ_PO_sphere"}](dJx_1_sphere.jpg){width="105.00000%"} ![IPO surface current deviations for a PEC sphere of a radius of $R = 60 \lambda$ (left to right): $\eta \left| \delta \overline{J}_x^{IPO} \right|$; $\eta \left| \delta \overline{J}_y^{IPO} \right|$; and $\eta \left| \delta \overline{J}_z^{IPO} \right|$.[]{data-label="fig:dJ_IPO_sphere"}](dJx_IPO_sphere.jpg){width="105.00000%"}

Numerical Validation
====================

Two PEC surfaces are used to show the efficiency of the IPO image approximation with a Gaussian beam of waist $w = 2 \lambda$ as the incident wave: 1) the parabolic dish antennas with different focus lengths $F$ (Fig. \[fig:problem\] a); and 2) the PEC spheres of different radii $R$ (Fig. \[fig:problem\]b). The PEC surface of the parabolic dish antenna [@chuan_liu_design_2013; @yurduseven_compact_2011] and that of the PEC sphere are $$Z_{parabolic} = \frac{x^2 + y^2}{4F}; \quad Z_{spherical} = \sqrt{R^2 - x^2 - y^2},$$ where $F$ is the focus distance of the parabolic dish antenna and $R$ is the radius of the PEC sphere. The left plot of Fig. \[fig:parabolic\] shows the surface current deviation $\eta \left| \delta \overline{J}_{x, i}^{IPO} \right|$ from the exact surface current obtained by the MoM at different iterations of the IPO approximation method for the parabolic dish antennas with a focus length of $F = 100 \lambda$, showing the convergence of the IPO method. Also, the right plot of Fig.
\[fig:parabolic\] shows the surface current deviations of the conventional PO image approximation method $\eta \left| \delta \overline{J}_{x}^{PO} \right|$ and those of the IPO approximation method $\eta \left| \delta \overline{J}_{x}^{IPO} \right|$ for different focus lengths $F = [50, 150] \lambda$, from which it can be seen that more than two orders of magnitude increase in accuracy has been achieved. In addition, the surface current deviation of the PO image approximation and that of the IPO approximation for the parabolic dish antenna with a focus length of $F = 100 \lambda$ have been shown in Fig. \[fig:dJ\_PO\] and Fig. \[fig:dJ\_IPO\] respectively. Similarly, the left plot of Fig. \[fig:sphere\] shows the surface current deviation $\eta \left| \delta \overline{J}_{x, i}^{IPO} \right|$ at different iterations of the IPO approximation method for the PEC spheres with a radius of $R = 60 \lambda$; and the right plot of Fig. \[fig:sphere\] shows the surface current deviations of the conventional PO image approximation method $\eta \left| \delta \overline{J}_x^{PO} \right|$ and those of the IPO approximation method $\eta \left| \delta \overline{J}_x^{IPO} \right|$ for different radii $R = [30, 100] \lambda$, from which it can be seen that the accuracy has likewise been improved by more than two orders of magnitude. Finally, the surface current deviation of the PO image approximation and that of the IPO approximation for the PEC sphere with a radius of $R = 60 \lambda$ have been shown in Fig. \[fig:dJ\_PO\_sphere\] and Fig. \[fig:dJ\_IPO\_sphere\] respectively.

Conclusion {#sec:con}
==========

The improved IPO image approximation method has been presented to increase the accuracy of the conventional PO image approximation method. The IPO method iteratively corrects the surface current to compensate for the deviation of the electric field boundary condition on the PEC surfaces, assuming a local plane wave approximation.
Numerical experiments with parabolic dish antennas and PEC spheres show that the IPO method can increase the accuracy of the surface current by more than two orders of magnitude, compared to the conventional PO image approximation method. [00]{} Shaolin Liao and R. J. Vernon. On the Image Approximation for Electromagnetic Wave Propagation and PEC Scattering in Cylindrical Harmonics. Progress In Electromagnetics Research, 66:65-88, 2006. Publisher: EMW Publishing. Shaolin Liao and Ronald J. Vernon. The Near-Field and Far-Field Properties of the Cylindrical Modal Expansions with Application in the Image Theorem. In 2006 Joint 31st International Conference on Infrared Millimeter Waves and 14th International Conference on Teraherz Electronics, pages 260-260, September 2006. ISSN: 2162-2035. Shaolin Liao and R.J. Vernon. A new fast algorithm for calculating near-field propagation between arbitrary smooth surfaces. In 2005 Joint 30th International Conference on Infrared and Millimeter Waves and 13th International Conference on Terahertz Electronics, volume 2, pages 606-607 vol. 2, September 2005. ISSN: 2162-2035. Shaolin Liao, Henry Soekmadji, and Ronald J. Vernon. On Fast Computation of Electromagnetic Wave Propagation through FFT. In 2006 7th International Symposium on Antennas, Propagation EM Theory, pages 1-4, October 2006. Shaolin Liao and Ronald J. Vernon. The Cylindrical Taylor-Interpolation FFT Algorithm. In 2006 Joint 31st International Conference on Infrared Millimeter Waves and 14th International Conference on Teraherz Electronics, pages 259-259, September 2006. ISSN: 2162-2035. Shaolin Liao. Beam-shaping PEC Mirror Phase Corrector Design. PIERS Online, 3(4):392-396, 2007. Shaolin Liao. Fast Computation of Electromagnetic Wave Propagation and Scattering for Quasi-cylindrical Geometry. PIERS Online, 3(1):96-100, 2007. Shaolin Liao. On the Validity of Physical Optics for Narrow-band Beam Scattering and Diffraction from the Open Cylindrical Surface. 
[^1]: \* Corresponding author: Shaolin Liao (sliao5@iit.edu).

[^2]: ^1^ S. Liao is with the Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616 USA. ^2^ L. Ou (oulu9676@gmail.com) is with the College of Computer Science and Electronic Engineering, Hunan University, Changsha, Hunan, China 410082.
--- author: - 'Shai Vardi [^1]' bibliography: - 'Vardi\_PhD\_Bibliography.bib' title: The Secretary Returns --- [^1]: Blavatnik School of Computer Science, Tel Aviv University. E-mail: [shaivar1@post.tau.ac.il]{}. This research was supported in part by the Google Europe Fellowship in Game Theory.
--- abstract: 'We introduce various probablistic finiteness conditions for profinite groups related to positive finite generation (PFG). We investigate completed group rings which are PFG as modules, and use this to answer [@KV Question 1.2] on positively finitely related groups. Using the theory of projective covers, we define and characterise a probabilistic version of the ${\operatorname{FP}}_n$ property for profinite groups, called ${\operatorname{PFP}}_n$. Finally, we prove how these conditions are related to previously defined finiteness conditions and each other.' author: - 'Ged Corob Cook, Matteo Vannacci' title: | Probabilistic Finiteness Properties\ for Profinite Groups --- [^1] Introduction {#introduction .unnumbered} ============ Motivation {#motivation .unnumbered} ---------- The study of finiteness properties of abstract groups has a long history; see [@Brown] for some background. Analogously to abstract groups, a profinite group $G$ is said to be of *type ${\operatorname{FP}}_n$* over a profinite ring $R$ if there is a projective resolution $\ldots \to P_n \to \ldots \to P_1\to P_0 \to R \to 0$ with $P_0, \ldots, P_n$ finitely generated profinite $R \llbracket G \rrbracket$-modules. Even the first steps into studying this property run into some difficulties for profinite groups that do not occur in the abstract case; for instance ${\operatorname{FP}}_1$ and finite generation are not equivalent (see [@Damian]). In this paper we consider an alternative notion of finite generation. The class of *positively finitely generated* groups (PFG groups for short) was introduced in [@Mann] and it consists of those profinite groups $G$ where, for some $k$, $k$ Haar-random elements generate $G$ with positive probability (cf. Section \[sec:pfgpfrmore\]). *Positive finite relatedness* (PFR) was introducted in [@KV] as a higher analogue of PFG (see Section \[sec:pfgpfrmore\] for the definition). 
In the same spirit we will define and study some related ‘probabilistic’ finiteness properties, such as *positively finitely presented* profinite groups, and we introduce a new family of higher finiteness properties for profinite groups via the concept of *PFG modules* and *modules of type ${\operatorname{PFP}}_n$*. Main results {#main-results .unnumbered} ------------ For convenience, we provide a diagram showing the relationships between the various conditions studied in the paper. $$\xymatrix@C=50pt{ \text{PFG} \ar@/^1.0pc/@{=>}[rr]^-{\text{finitely presented, Proposition \ref{prop:PFGimpliesPFR}}} \ar@{=>}[d]_{\text{\cite[Theorem 4.4]{Damian}}}^{(\subsetneq, \text{\cite[Example 4.5]{Damian}})} && \txt{positively\\finitely\\presented} \ar@{=>}[ll]^-{(\subsetneq,\text{\cite[Remark 2]{Vannacci}})} \ar@{=>}[d]^-{(\subsetneq, \text{Theorem \ref{PFRnotPFG}})} \\ \text{APFG} \ar@{=>}[dr]_-*!/^12pt/[@]{\labelstyle \text{Proposition \ref{APFGring}}} \ar@{=>}[dd]_{\text{Lemma \ref{APFGPFP1}}} && \text{PFR} \ar@{=>}[ll]_{\text{\cite[Theorem 5.6]{KV}, Proposition \ref{APFGring}}}^{(\subsetneq,\text{\cite[Remark 2]{Vannacci}})} \ar@{=>}[dd]^{\text{Lemma \ref{pfp2}}} \\ & \text{UBERG} \ar@{=>}[ur]^(0.45)*!/_16pt/[@]{\labelstyle \text{finitely presented,}}_*!/^12pt/[@]{\labelstyle \text{\cite[Theorem 5.6]{KV}}} \\ {\operatorname{PFP}}_1 && {\operatorname{PFP}}_2 \ar@{=>}[ll]_{(\subsetneq, \text{Proposition} \ref{FPnnotn+1})} & \cdots \ar@{=>}[l]_{(\subsetneq, \text{Proposition} \ref{FPnnotn+1})} }$$ Implications in this diagram which follow trivially from the definitions have been left without references; for those marked $(\subsetneq,-)$, this reference provides a counterexample showing the reverse implication fails; for those marked ‘finitely presented’, the implication holds for finitely presented profinite groups, though not in general. 
In Section \[sec:APFGdefn\] we consider two natural modules associated to a profinite group $G$: the group ring $\hat{\mathbb{Z}}\llbracket G \rrbracket$ and the augmentation ideal $I_{\hat{\mathbb{Z}}}\llbracket G \rrbracket$. When $\hat{\mathbb{Z}}\llbracket G \rrbracket$ is PFG (as a $\hat{\mathbb{Z}}\llbracket G \rrbracket$-module; see Section \[sec:APFGdefn\]) we say that $G$ has *UBERG*. If the ideal $I_{\hat{\mathbb{Z}}}\llbracket G \rrbracket$ is PFG as a $\hat{\mathbb{Z}}\llbracket G \rrbracket$-module we say that $G$ is *APFG*. We show that APFG implies UBERG, and the converse is true if $G$ has type ${\operatorname{FP}}_1$ over $\hat{\mathbb{Z}}$ (see Proposition \[APFGring\]). In Section \[sec:PFGPFR\] we answer [@KV Question 1.2], by showing: If $G$ is finitely presented and PFG, it is PFR, but not all PFR groups are PFG. In Section \[sec:posfinpres\] we introduce the notion of *positively* *finitely* *presented* profinite groups, as a natural analogue of the definition of finitely presented groups – that is, groups that have a ‘PFG presentation’ (see Definition \[defn:posfinpres\]) – based on the fact that all PFG groups admit an epimorphism from a PFG projective group. We prove some important properties of positively finitely presented groups (see Section \[sec:frattinicovers\] for the definition of universal Frattini cover of a profinite group). \[thm:posfinpres\] (i) The class of positively finitely presented profinite groups is closed under extensions (see Lemma \[pfp-extension\]). (ii) A profinite group $G$ is positively finitely presented if and only if the kernel of the universal Frattini cover $\tilde{G} \to G$ of $G$ is positively normally finitely generated in $\tilde{G}$ (see Proposition \[pfp-frattinicover\]). (iii) In the class of PFG groups, positive finite presentability is equivalent to PFR (see Corollary \[pfp=pfr\]). 
In Section \[sec:PFP\_modules\] we start our investigation of positively finite generated (PFG) *modules* and modules of *type ${\operatorname{PFP}}_n$*(see Sections \[sec:APFGdefn\] and \[sec:PFPn\_modules\]). As for groups, any PFG module admits an epimorphism from a PFG projective module, so we can form projective resolutions from these, and define a module to have type ${\operatorname{PFP}}_n$ if it has a projective resolution $P_\ast$ in which $P_0,\ldots,P_n$ are PFG. Additionally, we develop several fundamental tools to work with PFG modules and we obtain a characterisation (in the spirit of the Mann-Shalev Theorem) of modules of type ${\operatorname{PFP}}_n$ in terms of growth conditions on the sizes of certain $\mathrm{Ext}$-groups (see Theorem \[extcondition\]). As usual, there is a similar definition of type ${\operatorname{PFP}}_n$ for groups. The first crucial observation, using the theory of projective covers, is that a profinite group has type ${\operatorname{PFP}}_0$ over a commutative ring $R$ if and only if $R$ is PFG; and it turns out that type ${\operatorname{PFP}}_n$ coincides with type ${\operatorname{FP}}_n$ in the class of prosoluble groups (see Remark \[ex:PFPnprosoluble\]). Our next main result is an equivalent cohomological characterisation of type ${\operatorname{PFP}}_n$ obtained by applying Theorem \[extcondition\]. For a commutative profinite ring $R$ and $k\in \mathbb{N}$, denote by $\mathcal{S}^{R \llbracket G \rrbracket}_k$ the set of irreducible $R \llbracket G \rrbracket$-modules of order $k$. \[extconditionintro\] Let $G$ be a profinite group and $R$ a commutative profinite ring. Then, $G$ has type ${\operatorname{PFP}}_n$ over $R$ if and only if $\sum_{S \in \mathcal{S}^{R \llbracket G \rrbracket}_k} (\lvert H_R^m(G,S) \rvert-1)$ has polynomial growth in $k$ for all $m \leq n$. 
When $G$ is known to have type ${\operatorname{FP}}_n$ over $R$, we get another characterisation of type ${\operatorname{PFP}}_n$, from Corollary \[Damian3\]. Let $G$ be a profinite group of type ${\operatorname{FP}}_n$ and $R$ a commutative profinite ring. Then, $G$ has type ${\operatorname{PFP}}_n$ over $R$ if and only if $$\lvert \{ S \in \mathcal{S}^{R \llbracket G \rrbracket}_k: H_R^m(G,S) \neq 0\} \rvert$$ has polynomial growth in $k$ for all $m \leq n$. Next, we show that the class of groups of type ${\operatorname{PFP}}_n$ is closed under commensurability and finite direct products. The proof of the next theorem can be found in Proposition \[prop:PFPncommensurability\] and Proposition \[prop:PFPndirectprod\]. (i) Let $G$ be a profinite group and let $H$ an open subgroup of $G$. Then $G$ is of type ${\operatorname{PFP}}_n$ if and only if $H$ is. (ii) Let $G_1$ and $G_2$ be profinite groups of type ${\operatorname{PFP}}_n$ and ${\operatorname{PFP}}_m$ over $R$, respectively. Then $G_1\times G_2$ is of type ${\operatorname{PFP}}_{\min(n,m)}$ over $R$. Up to this point, property ${\operatorname{PFP}}_n$ seems to enjoy several nice properties, but it still remains mysterious. To address this shortcoming, in Section \[sec:APFG\] and \[PFPvsPFP2\] we start examining properties ${\operatorname{PFP}}_1$ and ${\operatorname{PFP}}_2$ in more detail. In Section \[sec:APFG\] we show that APFG implies ${\operatorname{PFP}}_1$ (Lemma \[APFGPFP1\]). Using this, we can give an example of a group of type ${\operatorname{PFP}}_1$ that is not PFG (see Example \[ex:damian\]). In Section \[PFPvsPFP2\], we investigate the relation between PFR and type ${\operatorname{PFP}}_2$. We show in Lemma \[pfp2\] that PFR implies ${\operatorname{PFP}}_2$, and in Proposition \[PFP2=absubgps\] that type ${\operatorname{PFP}}_2$ can be detected by considering the minimal presentation of a group. 
In the case of modules for abstract groups, or the usual (non-probabilistic) definition of finite generation for profinite modules, we have the following nice property: if $M$ is an $H$-module with $H \leq G$, $M$ is finitely generated if and only if ${\operatorname{Ind}}^G_H M$ is. But it is not hard to show that an analogous property fails for positive finite generation. For example, $\hat{\mathbb{Z}}$ is PFG, but by Proposition \[prop:free\_not\_PFP\], ${\operatorname{Ind}}_1^{F_n} \hat{\mathbb{Z}}$ is not PFG, for $F_n$ the free profinite group on $n>1$ generators. We confront this problem in Section \[sec:relpfpn\]. Specifically, we define a relative version of type ${\operatorname{PFP}}_n$. Given a profinite group $G$ and a closed normal subgroup $H$ of $G$, we say that $H$ has *relative type ${\operatorname{PFP}}_n$ in $G$* if all PFG projective $R \llbracket G/H \rrbracket$-modules have type ${\operatorname{PFP}}_n$ over $R \llbracket G \rrbracket$. Note that in the analogous type ${\operatorname{FP}}_n$ case, relative type ${\operatorname{FP}}_n$ is equivalent to type ${\operatorname{FP}}_n$. Suppose that $H$ has relative type ${\operatorname{PFP}}_m$ in $G$ over $R$. (i) If $G$ has type ${\operatorname{PFP}}_n$ over $R$, then $G/H$ has type ${\operatorname{PFP}}_{\min(m+1,n)}$ over $R$. (ii) If $G/H$ has type ${\operatorname{PFP}}_n$ over $R$, then $G$ has type ${\operatorname{PFP}}_{\min(m,n)}$ over $R$. This is shown in Corollary \[relPFPn2\]. Incidentally, the proof of the previous theorem also works in showing the corresponding result for abstract groups and our proof does not depend on the usual spectral sequence argument, which may have some independent interest. We finish with Section \[examples\], where we produce some novel examples to distinguish some of the aforementioned classes of groups. 
First, for any prime $p$, we give examples of type ${\operatorname{PFP}}_1$ groups $G$ over $\mathbb{Z}_p$ such that the group ring $\mathbb{Z}_p\llbracket G \rrbracket$ is not PFG (see Proposition \[prop:finsimplegroup\]). We would like to give examples of this behaviour over $\hat{\mathbb{Z}}$, to distinguish between the classes of groups with UBERG and type ${\operatorname{PFP}}_1$ over $\hat{\mathbb{Z}}$, but for now the question remains open. Such examples cannot appear among pronilpotent groups (see Proposition \[prop:pronilp\] and the related Question \[quest:prosoluble\] about the prosoluble case). Although, trivially, if a module has type ${\operatorname{PFP}}_n$ it has type ${\operatorname{FP}}_n$, we show that the converse is not true, by showing in Section \[FP1vsPFP1\] that the free profinite group on $m$ generators, $1 < m < \infty$, has type ${\operatorname{FP}}_1$ but not type ${\operatorname{PFP}}_1$ over $\hat{\mathbb{Z}}$. Finally, in Proposition \[FPnnotn+1\], we distinguish the classes of groups of type ${\operatorname{PFP}}_n$ over $\hat{\mathbb{Z}}$ by constructing, for each $n \geq 0$, a prosoluble group of type ${\operatorname{PFP}}_n$ but not ${\operatorname{PFP}}_{n+1}$. Preliminaries and notation ========================== We state now some conventions which will be in force for the rest of this article. All subgroups and submodules will be assumed to be closed. Generation will always be intended in the topological sense. All homomorphisms will be continuous. Modules will be assumed to be left modules. Homology theory for profinite groups ------------------------------------ In the course of this work, we will need the usual ‘homological lemmas’ for profinite groups, such as snake lemma, horseshoe lemma, Schanuel’s lemma, Lyndon-Hochschild-Serre spectral sequence, the long exact sequence in cohomology, and Ext groups – see for instance [@RZ]. 
Other tools such as the mapping cone construction, valid in all abelian categories, can be found in [@Weibel] for example. Haar measure ------------ For a profinite group $G$, we denote by $\mu_G$ the (left) *Haar measure* of $G$, see [@LS Chapter 11] for basic properties. We will always consider the *normalised* Haar measure, in this way we can turn a profinite group into a probability space. For a profinite group $G$, the direct power $G^k$ can be viewed as a profinite group, and as such it supports a Haar measure. For $g_1,\ldots,g_k \in G$, we denote by $\langle g_1,\ldots,g_k \rangle$ the closed subgroup of $G$ they generate. Now we define the set $$X(G,k) = \{(g_1,\ldots,g_k)\in G^k \mid \langle g_1\ldots,g_k\rangle = G\},$$ so that $P(G,k) = \mu_{G^k}(X(G,k))$ is non-zero for some $k$ if and only if $G$ is PFG. \[lem:11.1.1\] Let $G$ be a profinite group, let $K$ be a closed normal subgroup of $G$ and $\pi:G\to G/K$ be the natural projection. If $X$ is a closed subset of $G$, then $\mu_{G/K}(\pi(X))\ge \mu_G(X)$; if $Y$ is a closed subset of $G/K$, then $\mu_G(\pi^{-1}(Y)) = \mu_{G/K}(Y)$. PFG, PFR and more {#sec:pfgpfrmore} ----------------- We say that a profinite group $G$ is *PFG* if there is a positive integer $k$ such that the probability of $k$ Haar-random elements of $G$ generating the whole group is positive. This condition has been studied extensively and here we only mention the Mann-Shalev theorem [@MS Theorem 4]: a profinite group $G$ is PFG if and only if it has polynomial maximal subgroup growth. In the spirit of the Mann-Shalev theorem, the authors of [@KV] study a related property called PFR. We list below some of the conditions considered there that we will need; the interested reader may check [@KV] for more details. 
A profinite group $G$: (i) is *PFR* if it is finitely generated, and for every epimorphism $f:H \to G$ with $H$ finitely generated, the kernel of $f$ is positively finitely normally generated in $H$; (ii) has PMEG, if the number of isomorphism classes of minimal extensions of $G$ of order $n$ grows polynomially in $n$; (iii) has UBERG, if the number of irreducible $\hat{\mathbb{Z}}\llbracket G \rrbracket$-modules of order $n$ grows polynomially in $n$. \[KV\] UBERG is equivalent to the group algebra $\hat{\mathbb{Z}}\llbracket G \rrbracket$ being PFG, for all profinite groups $G$. PFR and PMEG are equivalent for $G$ finitely generated. All the conditions are equivalent for $G$ finitely presented. Note that the equivalence of UBERG to $\hat{\mathbb{Z}}\llbracket G \rrbracket$ being PFG is only stated in [@KV] for finitely generated groups, but the proof for general groups goes through without change. We will see later that there are groups with UBERG which are not PFG; the question of whether there are non-finitely generated groups with UBERG remains open. But we do have the following result. \[UBERG=cb\] Suppose $G$ has UBERG, then it is countably based. Suppose by contradiction that $G$ is not countably based. Then, for some index $n$, there are infinitely many open normal subgroups of index $n$ in $G$ and so $G$ has infinitely many irreducible representations of dimension at most $n$ (over fields of characteristic $p > n$). PFG modules ----------- For a module $M$, as for a profinite group, the direct power $M^k$ can be viewed as an abelian profinite group, and supports a Haar measure which we will denote by $\mu_{M^k}$. For $m_1,\ldots,m_k \in M$, we denote by $\langle m_1,\ldots,m_k \rangle_R$ the closed submodule of $M$ they generate. As for groups, for a positive integer $k$, we define the set $$X_R(M,k) = \{(m_1,\ldots,m_k)\in M^k \mid \langle m_1\ldots,m_k\rangle_R = M\}.$$ and $P_R(M,k) = \mu_{M^k}(X_R(M,k))$. 
A profinite module $M$ is said to be *PFG* if there is some $k\in \mathbb{N}$ such that $P_R(M,k) > 0$. In [@LS Proposition 11.2.1] it is shown that the class of PFG groups is closed under quotients and extensions. The same is true for PFG modules with an analogous proof which we omit. \[lem:PFGquotext\] PFG modules are closed under quotients and extensions. APFG {#sec:APFGdefn} ---- It will be useful here to compare our conditions on profinite groups to another condition introduced in [@Damian]. Recall that the *augmentation map* $\varepsilon: \hat{\mathbb{Z}}\llbracket G \rrbracket \to \hat{\mathbb{Z}} $ is induced by $\varepsilon(g)=1$, for $g\in G$. Define the *augmentation ideal* as $I_{\hat{\mathbb{Z}}}\llbracket G \rrbracket = \ker \varepsilon$, so we have a short exact sequence $$\label{eq:augmentation} 0 \to I_{\hat{\mathbb{Z}}}\llbracket G \rrbracket \to \hat{\mathbb{Z}} \llbracket G \rrbracket \to \hat{\mathbb{Z}} \to 0.$$ Note that the group $G$ has type ${\operatorname{FP}}_1$ if and only if its augmentation ideal $I_{\hat{\mathbb{Z}}}\llbracket G \rrbracket$ is finitely generated. A profinite group $G$ is said to be *APFG* if the augmentation ideal $I_{\hat{\mathbb{Z}}}\llbracket G \rrbracket$ is PFG as a $\hat{\mathbb{Z}} \llbracket G \rrbracket$-module. There is an error in the statement of [@Damian Corollary 2.5] (which does not affect the rest of [@Damian]): it should say “$I_{\hat{\mathbb{Z}}}\llbracket G\rrbracket$ is finitely generated as a $\hat{\mathbb{Z}} \llbracket G \rrbracket$-module if and only if there exists $d \in \mathbb{N}$ such that $\delta_G(M)/r_G(M) \leq d$ for any $M \in \mathrm{Irr}(\mathbb{F}_p\llbracket G \rrbracket)$ and $p \in \pi(G)$”. Recall that $\hat{\mathbb{Z}} \llbracket G \rrbracket$ is PFG if and only if $G$ has UBERG. We can now state the following fundamental result. \[prop:damian\_thm4.4\] Every PFG group is APFG. 
It is shown in [@Damian Theorem 3] that, if $G$ is countably based and has type ${\operatorname{FP}}_1$ over $\hat{\mathbb{Z}}$, $G$ is APFG if and only if the number of irreducible $\hat{\mathbb{Z}} \llbracket G \rrbracket$-modules has polynomial growth (and, hence, if and only if $G$ has UBERG). Here we may generalise this theorem by dropping the countably based requirement. \[APFGring\] Let $G$ be a profinite group. If $G$ is APFG, then $G$ has UBERG. If $G$ has UBERG and type ${\operatorname{FP}}_1$, then $G$ is APFG. Suppose that the augmentation ideal $I_{\hat{\mathbb{Z}}}\llbracket G\rrbracket$ of $G$ is PFG. Then the group ring $\hat{\mathbb{Z}} \llbracket G \rrbracket$ fits into the exact sequence and it is PFG as an extension of two PFG modules. On the other hand, if $\hat{\mathbb{Z}} \llbracket G \rrbracket$ is PFG, then $\hat{\mathbb{Z}} \llbracket G \rrbracket$-modules are PFG if and only if they are finitely generated; for $G$ of type ${\operatorname{FP}}_1$, the augmentation ideal is finitely generated and hence PFG. Note that profinite groups of type ${\operatorname{FP}}_1$ over $\hat{\mathbb{Z}}$ which are not countably based exist, by [@CC Example 7.1], but they do not have UBERG by Proposition \[UBERG=cb\]. Combining the last two propositions, we get: \[PFGtoUBERG\] If $G$ is PFG, it has UBERG. \[rem:prop6.1notfingen\] All finitely generated prosoluble groups by [@KV Corollary 6.12] are PFG, and hence have UBERG. Non-abelian free profinite groups do not have UBERG by [@KV Proposition 6.14]. The Schur multiplier and Schur covers {#Schur} ------------------------------------- The Schur multiplier of an abstract group is an important source of group-theoretic information. Basic facts we use about the Schur multiplier in this paper may be found in [@BT] and [@KarpSM]. The profinite version is less well known, so we introduce it here. 
Recall that the Schur multiplier, for an abstract group $G$, is $$H_2(G,\mathbb{Z}) \cong \frac{R \cap [F,F]}{[F,R]},$$ for a presentation of $G$ given by the exact sequence $1 \to R \to F \to G \to 1$ with $F$ free. The same argument as in the abstract case (see, for example, [@Weibel Presentations 6.8.7]) shows that, for $G$ profinite and $1 \to \hat{R} \to \hat{F} \to G \to 1$ a profinite presentation, we have $$H_2(G,\hat{\mathbb{Z}}) \cong \frac{\hat{R} \cap [\hat{F},\hat{F}]}{[\hat{F},\hat{R}]}.$$ Here, by $[\hat{F},\hat{F}]$ and $[\hat{F},\hat{R}]$, we mean the closure of the abstract subgroups generated by these commutators. We may write $M(G)$ for the Schur multiplier of a profinite group $G$. For $G$ finite, $H_2(G,\mathbb{Z})$ (with $G$ considered as an abstract group) is isomorphic to $H_2(G,\hat{\mathbb{Z}})$ (with $G$ considered as a profinite group). It is well-known that the Schur multiplier for abstract groups is equal to $H^2(G,\mathbb{C}^\times)$. Calculating cohomology using the bar resolution, each term in the cochain complex ${\operatorname{Hom}}(G^n,\mathbb{C}^\times)$ is equal to ${\operatorname{Hom}}(G^n,\mathbb{Q}/\mathbb{Z})$ because the image of each element of $G^n$ has finite order; we deduce that the Schur multiplier is isomorphic to $H^2(G,\mathbb{Q}/\mathbb{Z})$. By considering the bar resolution again, this is the same as $H^2(G,\mathbb{Q}/\mathbb{Z})$ when we think of $G$ as a profinite group because every abstract homomorphism in ${\operatorname{Hom}}(G^n,\mathbb{Q}/\mathbb{Z})$ is continuous, which is isomorphic to $H_2(G,\hat{\mathbb{Z}})$ by Pontryagin duality, [@RZ Proposition 6.3.6]. Thus for finite groups we can talk about ‘the’ Schur multiplier, to mean the group defined (up to isomorphism) by both our definitions. An important part of the theory of Schur multipliers for abstract groups is the existence of Schur covers. 
There does not seem to be a good reference for Schur covers of profinite groups, but we will content ourselves here with giving the facts we will need: the proofs go through just as they do for abstract groups. A Schur cover of a profinite group $G$ is an exact sequence $M(G) \rightarrowtail H \twoheadrightarrow G$ which is a stem extension: an extension in which $M(G)$ is contained in the centre of $H$ and in the derived subgroup $\overline{[H,H]}$. Schur covers are precisely the maximal stem extensions of $G$. The universal coefficient theorem gives a short exact sequence $$0 \to {\operatorname{Ext}}_{\hat{\mathbb{Z}}}^1(G_{ab},M(G)) \to H^2(G,M(G)) \to {\operatorname{Hom}}(M(G),M(G)) \to 0$$ (here all the cohomological functors are understood to be derived functors of the abstract group of homomorphisms, defined by taking a projective resolution in the first variable), and the preimage of $\mathrm{id}_{M(G)}$ in $H^2(G,M(G))$ gives all the isomorphism classes of Schur covers of $G$ (see [@BT Proposition II.3.2]). So the number of such isomorphism classes is at most $|{\operatorname{Ext}}_{\hat{\mathbb{Z}}}^1(G_{ab},M(G))|$. In particular, for $G$ a perfect group, there is a unique Schur cover. \[perfectcover\] Let $G$ be perfect, and $H$ its Schur cover. Then $H$ is perfect, and its Schur multiplier $M(H)$ is trivial. Frattini covers and PFG {#sec:frattinicovers} ----------------------- For a profinite group $G$, write $\Phi(G)$ for the Frattini subgroup of $G$: that is, the intersection of all the maximal open subgroups of $G$. An epimorphism $f: H \to G$ is called a *Frattini cover* of $G$ if $\ker(f) \leq \Phi(H)$. The Frattini covers of $G$ form an inverse system whose inverse limit, called the *universal Frattini cover* of $G$, is again a Frattini cover of $G$ and is a projective profinite group. See [@FJ Chapter 22] for background on this. \[fratcover\] A Frattini cover $H$ of a profinite group $G$ is PFG if and only if $G$ is. 
Without loss of generality, we can assume that both $H$ and $G$ are finitely generated. In fact, we just have to observe that, for any generating set $S$ of $G$, any lift of $S$ to $H$ generates $H$, since the kernel is contained in the Frattini subgroup of $H$. The claim follows by Lemma \[lem:11.1.1\]. \[Schurcover\] A Schur cover of a profinite group $G$ is a Frattini cover. This holds by the same argument as [@KarpSM Lemma 2.4.5]. An open question of Kionke-Vannacci {#sec:PFGPFR} =================================== We first answer the second part of [@KV Open Question 1.2]. \[prop:PFGimpliesPFR\] Let $G$ be a finitely presented profinite group. If $G$ is PFG, then it is PFR. By Proposition \[prop:damian\_thm4.4\], $G$ is APFG. Hence $\hat{\mathbb{Z}} \llbracket G \rrbracket$ fits into and it is PFG. Now [@KV Theorem A] shows that $G$ is PFR. In the rest of the section, we give an example of a PFR group which is not PFG, answering the first half of [@KV Question 1.2]. Consider the group $G = \prod_{n \geq N} A_n^{2^n}$, where $A_n$ is the alternating group on $n$ letters. For the sake of concreteness, we will take $N = 15$; it is entirely possible to take any $N \geq 5$, at the expense of a little extra complexity in the argument, and the features described here will not change, except some constants like the number of generators needed for $G$ and the constant $3$ in the statement of Theorem \[PFRnotPFG\]. We have tried to make explicit where assumptions on $N$ are actually being used. $G$ is $2$-generated but not PFG. The argument of [@Mann Example 2] shows $G$ is not PFG. By [@KaLu Lemma 5], it is enough to show $A_n^{2^n}$ is $2$-generated for each $n \geq N$. Let $S$ be a non-abelian finite simple group, and let $D(S)$ be the number of elements of the product $S^2$ which generate $S$. By [@KaLu Corollary 7], if $m \leq |D(S)|/|Aut(S)|$, then $S^m$ is $2$-generated. 
Now for $n \geq 7$, it is well-known that $Aut(A_n) = S_n$, the symmetric group; for $n \geq 5$ we have $|D(A_n)| \geq (1-1/n-8.8/n^2)|A_n|^2$ by [@MRD Theorem 1.1]. So $|D(S)|/|Aut(S)| \geq (1-1/n-8.8/n^2)n!/4$, which is easily checked to be $>2$ for $n \geq 7$. On the other hand, for $n=6$ we have $|Aut(A_n)|=2|S_n|$, and the computation using $|D(A_n)| = 0.588|A_n|^2$ from [@MRD Table 1] shows $A_6^{2^6}$ is not $2$-generated. (The corresponding calculation for $n=5$ shows $A_5^{2^5}$ is not $2$-generated either.) For $n \geq 8$, the Schur multiplier of $A_n$ has order $2$ by [@KarpSM Theorem 2.12.5], and we write $A^\ast_n$ for the (unique, because $A_n$ is perfect) Schur cover of $A_n$. By [@KarpSM Theorem 2.2.10], the Schur multiplier of a finite product of alternating groups $\prod_i A_{n_i}$ is the product of the Schur multipliers $\prod_i M(A_{n_i}) = \prod_i \mathbb{Z}/2\mathbb{Z}$; with [@KarpSM Theorem 2.8.5], it follows that the Schur cover of $\prod_i A_{n_i}$ is $\prod_i A^\ast_{n_i}$. Since profinite homology groups commute with inverse limits, we get that $M(G) = \prod_{n \geq N} (\mathbb{Z}/2\mathbb{Z})^{2^n}$. For the Schur cover, we use the functoriality of our construction: $H^2(G,M(G)) = \prod_j \bigoplus_i H^2(A_{n_i},M(A_{n_j}))$, where $i$ and $j$ index the simple factors of $G$. Now, as described in Section \[Schur\], pick the preimage of $\mathrm{id}_{M(G)}$ given by picking $0 \in H^2(A_{n_i},M(A_{n_j}))$ for $i \neq j$ and a preimage of $\mathrm{id}_{M(A_{n_i})}$ in $H^2(A_{n_i},M(A_{n_j}))$ when $i=j$. We deduce that the Schur cover $\tilde{G}$ of $G$ is $\prod_{n \geq N} (A^\ast_n)^{2^n}$ (again, this is unique because $G$ is perfect). Now $\tilde{G}$ is a Frattini cover of $G$ by Lemma \[Schurcover\], so it is $2$-generated and not PFG. We will imitate [@Damian Example 4.5] to prove: $\tilde{G}$ has UBERG. We will show the number $g(n)$ of irreducible $\tilde{G}$-modules of order $n$ is polynomial in $n$. 
Suppose $M$ is an irreducible $\tilde{G}$-module with $|M|=n=p^r$. The action of $\tilde{G}$ on $M$ factors through some finite product $A^\ast_{n_1} \times \cdots \times A^\ast_{n_t}$ such that none of the $A^\ast_{n_i}$ act trivially on it. Set $u=n_i$ for some $1 \leq i \leq t$ and consider an $A^\ast_u$-composition series for $M$: a filtration $0=M_0 \leq M_1 \leq \cdots \leq M_k=M$ of $A^\ast_u$-submodules such that the factors $K_{l}=M_{l}/M_{l-1}$ are irreducible $A^\ast_u$-modules. Note that $|K_l| > p$ for some $1 \leq l \leq k$, since otherwise we would have a non-trivial homomorphism $A^\ast_u \to U(r,p)$, the group of unitriangular matrices in $GL(r,p)$, which is a $p$-group; this is impossible because the only non-trivial quotients of $A^\ast_u$ are itself and $A_u$. So $K_l$ is a non-trivial irreducible $A^\ast_u$-module. By [@KL Proposition 5.3.1, Proposition 5.3.7], for $u \geq 9$ we get $|K_l| \geq p^{u-2}$. Thus $u-2 \leq \dim M = r$, so $n_i \leq r+2$ for $1 \leq i \leq t$, and every irreducible $\tilde{G}$-module of order $n=p^r$ is an irreducible $H_r$-module, where $H_r = \prod_{j=N}^{r+2}(A^\ast_j)^{2^j}$. The rest of the proof of [@Damian Example 4.5] goes through without change. Since $M(G)$ is not finitely generated, $G$ does not have type ${\operatorname{FP}}_2$ over $\hat{\mathbb{Z}}$, so it is not finitely presented. On the other hand, by [@KV Theorem A], to show $\tilde{G}$ is PFR, it suffices to show it is finitely presented. We will do this using the equivalent condition given in [@Lubotzky Theorem 0.3]: a finitely generated profinite group $H$ is finitely presented if and only if there exists a positive constant $C$ such that, for every prime $p$ and every irreducible $\mathbb{F}_p\llbracket H \rrbracket$-module $M$, $\dim H^2(H,M) \leq C \dim M$. \[PFRnotPFG\] For $M$ an irreducible $\tilde{G}$-module, $\dim H^2(\tilde{G},M) \leq 3\dim M$. Thus $\tilde{G}$ is finitely presented, and hence PFR. We will prove this in several steps. 
If $M$ is a trivial $\tilde{G}$-module, $H^2(\tilde{G},M)=0$. By Pontryagin duality, [@RZ Proposition 6.3.6], this is equivalent to showing $H_2(\tilde{G},M^\ast)=0$, where $M^\ast = {\operatorname{Hom}}(M,\mathbb{Q}/\mathbb{Z})$ is the Pontryagin dual of $M$. We have $M \cong \mathbb{F}_p$ for some $p$, so $M^\ast \cong \mathbb{F}_p$ too, also with trivial $\tilde{G}$-action. The short exact sequence $0 \to \hat{\mathbb{Z}} \to \hat{\mathbb{Z}} \to \mathbb{F}_p \to 0$ gives a long exact sequence $$\cdots \to M(\tilde{G})=H_2(\tilde{G},\hat{\mathbb{Z}}) \to H_2(\tilde{G},M^\ast) \to H_1(\tilde{G},\hat{\mathbb{Z}})=\tilde{G}_{ab} \to \cdots,$$ and we have $M(\tilde{G})=\tilde{G}_{ab}=0$ by Proposition \[perfectcover\], because $\tilde{G}$ is the Schur cover of a perfect group. So for the theorem, we only need to consider $M$ with a non-trivial action. Suppose $M$ is a non-trivial irreducible $\mathbb{F}_p\llbracket \tilde{G} \rrbracket$-module. As before, the action of $\tilde{G}$ on $M$ factors through some finite product $L = A^\ast_{n_1} \times \cdots \times A^\ast_{n_t}$ such that none of the $A^\ast_{n_i}$ act trivially on it, and we write $K$ for the product of all the other quasisimple factors, so $\tilde{G}=K \times L$. $\dim H^2(\tilde{G},M) \leq \dim H^2(L,M)$. By the Lyndon-Hochschild-Serre spectral sequence we know $$\begin{gathered} \dim H^2(\tilde{G},M) \leq \dim H^2(K,M^L) + \\ \dim H^1(K,H^1(L,M)) + \dim H^2(L,M)^K.\end{gathered}$$ Since $M$ is irreducible and non-trivial as an $L$-module, $M^L=0$. Since the actions of $K$ on $L$ and $M$ are trivial, the action of $K$ on $H^1(L,M)$ is trivial. Moreover, since $H^1(L,M)$ is abelian and $K$ is perfect, we have $H^1(K,H^1(L,M))={\operatorname{Hom}}(K,H^1(L,M))=0$. So $$\dim H^2(\tilde{G},M) \leq \dim H^2(L,M)^K \leq \dim H^2(L,M).$$ Write $J$ for the kernel of the $L$-action on $M$: $J$ is a finite product of copies of $\mathbb{Z}/2\mathbb{Z}$.
\[Jcohom\] If $p \neq 2$ or $J$ is trivial, $H^j(J,M)$ is $0$ for $j \geq 1$. If $p=2$ and $J \neq 1$, $H^j(J,M)$ is isomorphic, as an $L/J$-module, to a finite direct sum of copies of $M$. For $p \neq 2$, this is [@RZ Corollary 7.3.3]; for $J$ trivial, it is trivial. For $p=2$, we can use the explicit description of the $L/J$-action given in [@Brown Section III.8]. Using the notation there, we have $\alpha=\mathrm{id}_J$ because $J$ is central in $L$, so we can calculate the $L/J$-action using a finite type free resolution $F_\ast$ of $\mathbb{F}_2$ as a $J$-module, and the action of $L/J$ on ${\operatorname{Hom}}_J(F_\ast,M)$ is given by $(g \cdot f)(x) = g(f(x))$. Now after fixing a basis for $F_j$, ${\operatorname{Hom}}_J(F_j,M)$ is clearly a finite direct sum of copies of $M$ indexed by this basis, on which $L/J$ acts diagonally. Since this is a cochain complex of semisimple $L/J$-modules, its homology groups are semisimple, with all their simple factors isomorphic to $M$. $\dim H^2(L,M) \leq 3\dim M$. Once again, the Lyndon-Hochschild-Serre spectral sequence gives $$\begin{gathered} \dim H^2(L,M) \leq \dim H^2(L/J,M) + \\ \dim H^1(L/J,H^1(J,M)) + \dim H^2(J,M)^{L/J}.\end{gathered}$$ We know $H^2(J,M)^{L/J}=M^{L/J}=0$ by Lemma \[Jcohom\]. If $t>1$, $H^1(L/J,M)=0$ and $\dim H^2(L/J,M) \leq (1/4)\dim M$, by [@GKKL Lemma 5.2(4)], and by Lemma \[Jcohom\] we deduce $H^1(L/J,H^1(J,M))=0$, so $\dim H^2(L,M) \leq (1/4)\dim M$. If $t=1$ (so $L$ is $A^\ast_n$ for some $n$) and $p \neq 2$, by Lemma \[Jcohom\] we have $H^1(L/J,H^1(J,M))=0$, and then either $J$ is trivial and $H^2(L,M) = H^2(L/J,M) = 0$ by [@GKKL Lemma 4.1(2)], or $J$ is non-trivial, so $J=\mathbb{Z}/2\mathbb{Z}$, $L/J=A_n$, and $$\dim H^2(L,M) = \dim H^2(L/J,M) \leq \dim M$$ by [@GKKL Lemma 4.1(2), Theorem 6.2(1)(2)]. 
If $t=1$ and $p=2$, by [@GKKL Lemma 4.1(3)], $J$ is non-trivial, so $J=\mathbb{Z}/2\mathbb{Z}$ and $$\begin{aligned} \dim H^2(L,M) &\leq \dim H^2(L/J,M) + \dim H^1(L/J,M) \\ &\leq (35/12)\dim M + (1/12)\dim M = 3 \dim M\end{aligned}$$ for $n \geq 15$ by [@GKKL Theorem 6.1(1), Theorem 6.2(3)]. This proves the theorem. Positively finitely presented groups {#sec:posfinpres} ==================================== Given that the idea of PFR is a higher analogue of PFG, an alternative condition would require that $G$ has a ‘PFG presentation’. By Lemma \[fratcover\], every PFG group admits a short exact sequence of the form $$\label{eq:posfinpres} 1 \to R \to P \to G \to 1$$ with $P$ a PFG projective profinite group. In this section, we will think of such sequences as presentations for $G$. Let $G$ be a profinite group and let $A$ be a normal subgroup of $G$. We say that $A$ is *positively finitely normally generated* in $G$ if there exists $k\in \mathbb{N}$ such that, defining the set $$X^{G}(A,k)=\{(a_1,\ldots,a_k)\in A^k \mid \langle a_1,\ldots,a_k \rangle^G = A \},$$ we have $P^{G}(A,k) :=\mu_{A^k}(X^{G}(A,k)) >0$. It is easy to see, by the same argument as [@KV Section 3.3], that $A$ is positively finitely normally generated in $G$ if and only if the number of open maximal $G$-stable subgroups of index $n$ in $A$ grows polynomially in $n$. \[defn:posfinpres\] A profinite group $G$ is said to be *positively finitely presented*[^2] if $G$ is PFG and for every short exact sequence $1 \to R \to P \to G \to 1$ with $P$ a PFG projective profinite group, $R$ is PFG as a normal subgroup of $P$. We justify our use of the term positively finitely presented by showing that groups satisfying this condition are finitely presented. \[prop:PFG and fin pres\] A profinite group is positively finitely presented if and only if it is PFG and finitely presented. Given a positively finitely presented group $G$, fix a presentation $1 \to R \to P \to G \to 1$ with $P$ PFG projective.
Finitely generated projective profinite groups are finitely presented by [@Lubotzky Proposition 1.1], so this exhibits $G$ as a quotient of a finitely presented group by a normally finitely generated group: it is standard that such groups are finitely presented. Conversely, PFG together with finite presentation implies PFR by Proposition \[prop:PFGimpliesPFR\], and PFR together with PFG implies positive finite presentation by [@KV Lemma 3.4]. In particular, PFG projective groups are positively finitely presented. \[pfp=pfr\] For PFG groups, positive finite presentation is equivalent to PFR. PFR groups are finitely presented, which with PFG implies positive finite presentation by Proposition \[prop:PFG and fin pres\]. Positively finitely presented groups are PFG and finitely presented, which implies PFR by Proposition \[prop:PFGimpliesPFR\]. The next two results show that the class of positively finitely presented profinite groups is well behaved. \[pfp-extension\] For $N$ a positively finitely presented normal subgroup of $G$, $G$ is positively finitely presented if and only if $G/N$ is. $N$ is finitely presented, so $G$ is finitely presented if and only if $G/N$ is; $N$ is PFG, so $G$ is PFG if and only if $G/N$ is. So if one of $G$ and $G/N$ is positively finitely presented, the other is finitely presented and PFG, so it is positively finitely presented by Proposition \[prop:PFG and fin pres\]. Compare this to the class of PFR groups: it remains an open question whether this is closed under extensions ([@KV p.3]). We conclude this section by showing that $G$ being positively finitely presented is witnessed by its universal Frattini cover. Compare this to the class of PFR groups: in general minimal presentations are not sufficient to determine whether a group is PFR ([@KV Section 7]). The following lemma is a generalisation of [@LS Proposition 11.2.1] to *positively finitely normally generated subgroups*. \[lem:pfng\] Let $G$ be a profinite group, $B \lhd A \lhd G$ with $B$ normal in $G$.
Suppose that $A/B$ is positively finitely normally generated in $G/B$ and $B$ is positively finitely normally generated in $G$. Then $A$ is positively finitely normally generated in $G$. We first consider the case with $A$ (and hence $B$) finite. Let $\pi: G^k \to (G/B)^k$ be the obvious projection, and pick $\underline{a} \in \pi^{-1}(X^{G/B}(A/B,k))$, $\underline{b}\in X^G(B,l)$ and $\underline{u} \in (\langle\underline{a} \rangle^G)^l$; then $\langle \underline{b}\cdot \underline{u},\underline{a}\rangle^G =A$ (the product $\underline{b}\cdot \underline{u}$ is componentwise). Thus we can estimate the probability of normally generating $A$ in $G$ by counting the possible choices for the elements $\underline{a}$, $\underline{b}$ and $\underline{u}$: there are at least $$\vert B \vert^k \cdot \vert X^{G/B}(A/B,k) \vert \cdot \vert A/B \vert^{l} \cdot \vert X^G(B,l) \vert$$ choices of $k+l$ elements generating $A$, and we conclude that $P^G(A,k+l)$ is at least $$\frac{\vert B \vert^k \vert X^{G/B}(A/B,k) \vert}{\vert A \vert^k} \frac{\vert A/B \vert^{l} \vert X^G(B,l) \vert}{\vert A\vert^l}= P^{G/B}(A/B,k) P^G(B,l).$$ Now suppose $A$ is profinite. Note that $A$ has a neighbourhood basis $\mathcal{N}$ of the identity consisting of open normal subgroups which are $G$-invariant: this can be achieved by intersecting a basis of open normal subgroups of $G$ with $A$. For a subset $X\subset A$, $\langle X\rangle^G = A$ if and only if $\langle XN/N \rangle^{G/N}=A/N$ for all $N\in \mathcal{N}$ (this is [@Wilson Proposition 4.1.1] with minor modifications).
This implies that $X^G(A,k)$ is the inverse limit over $\mathcal{N}$ of $X^{G/N}(A/N,k)$ and hence $$\begin{aligned} P^G(A,k+l) &= \mu_{A^{k+l}}(X^G(A,k+l)) \\ &= \inf_{N\in \mathcal{N}} \frac{\vert X^{G/N}(A/N,k+l) \vert}{\vert A/N\vert^{k+l}} \\ &= \inf_{N\in \mathcal{N}} P^{G/N}(A/N,k+l) \\ &\geq \inf_{N\in \mathcal{N}} P^{G/BN}(A/BN,k) P^{G/N}(BN/N,l) \\ &= P^{G/B}(A/B,k) P^G(B,l),\end{aligned}$$ which is positive for some choice of $k$ and $l$ by hypothesis. \[pfp-frattinicover\] Let $G$ be a PFG profinite group and let $f: \tilde{G}\to G$ be the universal Frattini cover of $G$. Write $R$ for the kernel of this map. If $R$ is positively finitely normally generated in $\tilde{G}$, $G$ is positively finitely presented. Since $\tilde{G}$ is a projective cover of $G$, if $1 \to S \to Q \to G \to 1$ is another presentation of $G$ with $Q$ PFG, then $Q \to G$ factors into an epimorphism $Q \to \tilde{G}$ and $f$. The diagram $$\xymatrix{ S \ar[r] \ar@{->>}[d] & Q \ar[r] \ar@{->>}[d] & G \ar@{=}[d] \\ R \ar[r] & \tilde{G} \ar[r] & G }$$ has exact rows; writing $T$ for the kernel of $Q \to \tilde{G}$, we get $S/T \cong R$ by the Nine Lemma. Since $R$ is positively finitely normally generated in $\tilde{G}$, it has polynomial maximal $\tilde{G}$-stable subgroup growth. $\tilde{G}$ is positively finitely generated and projective, hence positively finitely presented, so $T$ has polynomial maximal $Q$-stable subgroup growth. By Lemma \[lem:pfng\] applied to $S$ as an extension of $T$ by $R$, we have that $S$ is positively finitely normally generated in $Q$, as required. Modules of type PFPn {#sec:PFP_modules} ==================== Let $R$ be a profinite ring. In this section, all modules will be profinite $R$-modules. Projective covers of PFG modules -------------------------------- A submodule $N$ of an $R$-module $M$ is *superfluous* if, for any submodule $H$ of $M$, $H+N=M$ implies $H=M$.
A homomorphism $P\to M$, with $P$ projective, is said to be a *projective cover* of $M$ if its kernel is a superfluous submodule of $P$. It is easy to see that, if $P_1\to M$ and $P_2\to M$ are two projective covers of $M$, then $P_1\cong P_2$. So we may abuse terminology by referring to $P$ itself as the projective cover of $M$, instead of the homomorphism $P\to M$. Profinite modules have projective covers by [@SW Remark 3.4.3(i)]. \[projcover\] $M$ is PFG if and only if its projective cover $P$ is. This is the same argument as Lemma \[fratcover\]: we can assume that both $M$ and $P$ are finitely generated and observe that, for any generating set $S$ of $M$, any lift of $S$ to $P$ generates $P$, since the kernel is superfluous. Modules of type PFPn {#sec:PFPn_modules} -------------------- The previous lemma suggests the following definition. An $R$-module $M$ has *type ${\operatorname{PFP}}_n$* if it has a projective resolution $P_\ast$ $$\ldots \to P_n \to \ldots \to P_1\to P_0 \to M \to 0 $$ with $P_0, \ldots, P_n$ PFG $R$-modules. The module $M$ has type ${\operatorname{PFP}}_\infty$ if it has a projective resolution $P_\ast$ with $P_n$ PFG for all $n$. \[Schanuel\] Suppose we have two partial resolutions $$P_{n-1} \to \cdots \to P_0 \text{ and } Q_{n-1} \to \cdots \to Q_0$$ of $M$ with each $P_i$ and $Q_i$ PFG projective. Then $\ker(P_{n-1} \to P_{n-2})$ is PFG if and only if $\ker(Q_{n-1} \to Q_{n-2})$ is. Schanuel’s lemma. It follows that $M$ has type ${\operatorname{PFP}}_\infty$ if and only if it has type ${\operatorname{PFP}}_n$ for all $n$. (i) By Lemma \[projcover\], an $R$-module has type ${\operatorname{PFP}}_0$ if and only if it is PFG. (ii) Clearly, type ${\operatorname{PFP}}_n$ implies type ${\operatorname{FP}}_n$ for all $n$. (iii) Note that, if $R$ is PFG as an $R$-module, then all finitely generated $R$-modules are PFG. Thus, type ${\operatorname{PFP}}_n$ coincides with type ${\operatorname{FP}}_n$ for PFG rings. 
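As a toy illustration of the definition, added here for concreteness (it is not taken from the surrounding text): over $R = \mathbb{Z}_p$, the module $\mathbb{F}_p$ admits the free resolution $$0 \to \mathbb{Z}_p \xrightarrow{\,\cdot p\,} \mathbb{Z}_p \to \mathbb{F}_p \to 0,$$ in which both terms are PFG (a Haar-random element of $\mathbb{Z}_p$ generates it precisely when it is a unit, which happens with probability $1-1/p$), so $\mathbb{F}_p$ has type ${\operatorname{PFP}}_\infty$ over $\mathbb{Z}_p$. This is consistent with remark (iii), since $\mathbb{Z}_p$ is PFG as a module over itself.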
We will now show that the properties defined above behave well with respect to short exact sequences. See [@Weibel] for more detail on the constructions used. \[moduletypes\] Let $0 \to A \xrightarrow[]{f} B \xrightarrow[]{g} C \to 0$ be a short exact sequence of profinite $R$-modules. (i) If $A$ has type ${\operatorname{PFP}}_{n-1}$ and $B$ has type ${\operatorname{PFP}}_n$, $C$ has type ${\operatorname{PFP}}_n$. (ii) If $B$ has type ${\operatorname{PFP}}_{n-1}$ and $C$ has type ${\operatorname{PFP}}_n$, $A$ has type ${\operatorname{PFP}}_{n-1}$. (iii) If $A$ and $C$ have type ${\operatorname{PFP}}_n$, so does $B$. Recall that the class of PFG modules is closed under extensions by Lemma \[lem:PFGquotext\]. (i) Take a type ${\operatorname{PFP}}_{n-1}$ resolution $P'_\ast$ of $A$ and a type ${\operatorname{PFP}}_n$ resolution $P_\ast$ of $B$. There is a map $P'_\ast \to P_\ast$ extending $A \to B$. The mapping cone of $P'_\ast \to P_\ast$ is a type ${\operatorname{PFP}}_n$ resolution of $C$. (ii) Fix a map $Q \stackrel{q}{\twoheadrightarrow} B$ with $Q$ PFG projective. Note that $Q$ has type ${\operatorname{PFP}}_\infty$. We have a diagram $$\xymatrix{ \ker(q) \ar[r] \ar[d] & Q \ar@{->>}[r]^{q} \ar@{=}[d] & B \ar@{->>}[d]^{g} \\ \ker(g \circ q) \ar[r] & Q \ar@{->>}[r] & C;}$$ with exact rows. By Proposition \[Schanuel\], $\ker(q)$ is of type ${\operatorname{PFP}}_{n-2}$ and $\ker(g\circ q)$ is of type ${\operatorname{PFP}}_{n-1}$. By the snake lemma, $0 \to \ker(q) \to \ker(g\circ q) \to A \to 0$ is exact. By (i), $A$ has type ${\operatorname{PFP}}_{n-1}$. (iii) Given a type ${\operatorname{PFP}}_n$ resolution for $A$ and another for $C$, the resolution for $B$ constructed using the horseshoe lemma has type ${\operatorname{PFP}}_n$. Growth conditions ----------------- The famous Mann-Shalev Theorem in [@MS] characterises PFG groups algebraically as those profinite groups with “few” open maximal subgroups. 
We would like to mimic this theorem as well as producing a cohomological criterion for when modules have type ${\operatorname{PFP}}_n$. For $R$ a profinite ring and $M$ a profinite $R$-module, let $m^R_k(M)$ be the number of maximal (open) submodules of $M$ of index $k$. We say that $M$ has *polynomial maximal submodule growth*, or *PMSMG* for short, if there is some constant $c>0$ such that $m^R_k(M) \leq k^c$ for all $k$. For a profinite ring $R$ and $k\in \mathbb{N}$, denote by $\mathcal{S}^R_k$ the set of irreducible $R$-modules of order $k$. \[prop:PMSMG\] Let $M$ be a profinite $R$-module. Then the following conditions are equivalent: (i) $M$ is PFG. (ii) $M$ has PMSMG. (iii) $\sum_{S \in \mathcal{S}^R_k} (\lvert {\operatorname{Hom}}_R(M,S) \rvert-1)$ has polynomial growth in $k$. (i)$\Leftrightarrow$ (ii): Imitate [@KV Proposition 6.1, (2)$\Rightarrow$(3)] and [@KV Proposition 3.5]. We will now show that (ii) is equivalent to (iii). First, note that for each maximal submodule in $M$ we get a quotient map to an irreducible module, so we have an injection from the set of maximal submodules of index $k$ to the set of surjective (or equivalently, non-trivial) maps to irreducible modules of order $k$. Hence, $$m^R_k(M) \leq \sum_{S \in \mathcal{S}^R_k} (\lvert {\operatorname{Hom}}_R(M,S) \rvert-1).$$ Conversely, if $M$ has PMSMG, we have shown it is $d$-generated for some $d$. Hence $\lvert {\operatorname{Hom}}_R(M,S) \rvert \leq \lvert S \rvert^d$, so $$\sum_{S \in \mathcal{S}^R_k} (\lvert {\operatorname{Hom}}_R(M,S) \rvert-1) \leq k^dm^R_k(M).$$ Since $\mathrm{Hom}(M,S)$ is just ${\operatorname{Ext}}^0_R(M,S)$, we can now apply the proposition to give conditions equivalent to a module having type ${\operatorname{PFP}}_n$. \[extcondition\] Let $M$ be an $R$-module. Then, $M$ has type ${\operatorname{PFP}}_n$ if and only if $\sum_{S \in \mathcal{S}^R_k} (\lvert {\operatorname{Ext}}_R^m(M,S) \rvert-1)$ has polynomial growth in $k$ for all $m \leq n$. 
For an $R$-module $M$, we write $$f_m^M(k)= \sum_{S \in \mathcal{S}^R_k} (\lvert {\operatorname{Ext}}_R^m(M,S) \rvert-1).$$ The case $n=0$ is Proposition \[prop:PMSMG\]. Now, suppose that $n\ge 1$ and the theorem is true for every $m\le n-1$. Let $M$ be an $R$-module of type ${\operatorname{PFP}}_n$. By hypothesis, $f_m^M(k)$ is polynomial in $k$ for $m \leq n-1$; it remains to check that $f_n^M(k)$ is polynomial in $k$. By Lemma \[projcover\], we have a short exact sequence $$0 \to K \to P \to M \to 0$$ with $P$ PFG projective. $K$ has type ${\operatorname{PFP}}_{n-1}$, so by hypothesis $f_{n-1}^K(k)$ has polynomial growth in $k$; since ${\operatorname{Ext}}_R^n(P,-)=0$ for $n \geq 1$, we have that $f_n^P(k)=0$. Using the long exact sequence in cohomology, we see that $f_n^M(k) \leq f_{n-1}^K(k) + f_n^P(k)$, and we are done. Note that $\mathcal{S}^R_k$ may be infinite, and the sum $\sum_{S \in \mathcal{S}^R_k} (\lvert {\operatorname{Ext}}_R^n(M,S) \rvert-1)$ may nonetheless be finite. We will see in Section \[examples\] that, for an infinite product $G=\prod H$ with $H$ a finite group, there are infinitely many values of $k$ such that $$\sum_{S \in \mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}_k} (\lvert H_{\hat{\mathbb{Z}}}^1(G,S) \rvert-1) = 0$$ even though $\mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}_k$ is infinite. Groups of type PFPn {#pfpn_groups} =================== In this section $R$ will be a commutative profinite ring. A profinite group $G$ has *type ${\operatorname{PFP}}_n$ over $R$* if $R$ has type ${\operatorname{PFP}}_n$ as $R \llbracket G \rrbracket$-module. Unless specified otherwise, type ${\operatorname{PFP}}_n$ will mean over $\hat{\mathbb{Z}}$. \[ex:PFPnprosoluble\] We get immediately that ${\operatorname{PFP}}_n$ and ${\operatorname{FP}}_n$ are equivalent when $R \llbracket G \rrbracket$ itself is PFG as an $R \llbracket G \rrbracket$-module.
By Corollary \[PFGtoUBERG\], the ring $\hat{\mathbb{Z}} \llbracket G \rrbracket$ is PFG as a $\hat{\mathbb{Z}} \llbracket G \rrbracket$-module whenever $G$ is PFG; by Remark \[rem:prop6.1notfingen\], this occurs for all finitely generated prosoluble groups $G$. So ${\operatorname{PFP}}_n$ and ${\operatorname{FP}}_n$ over $\hat{\mathbb{Z}}$ coincide for such groups (cf. Conjecture \[quest:prosoluble\] below). Every group is of type ${\operatorname{FP}}_0$ over every ring. This is false for type ${\operatorname{PFP}}_0$. A profinite group $G$ has type ${\operatorname{PFP}}_0$ over $R$ if and only if $R$ is PFG as an $R$-module. Any subset of $R$ generates it as an $R$-module if and only if it generates it as an $R \llbracket G \rrbracket$-module, since the $G$-action is trivial. Next we show that the class of groups of type ${\operatorname{PFP}}_n$ is closed under commensurability. Recall that for an $R \llbracket G \rrbracket$-module $M$, we denote by ${\operatorname{Res}}^G_H M$ the $R \llbracket H \rrbracket$-module with the same underlying set as $M$, obtained by restricting the action of $G$ to $H$. \[prop:PFPncommensurability\] Let $G$ be a profinite group and let $H$ be an open subgroup of $G$. Then $H$ has type ${\operatorname{PFP}}_n$ over $R$ if and only if $G$ does. We first claim that an $R \llbracket G \rrbracket$-module $M$ is PFG if and only if ${\operatorname{Res}}^G_H M$ is PFG. Indeed, clearly any set of generators for ${\operatorname{Res}}^G_H M$ as an $R \llbracket H \rrbracket$-module generates $M$ as an $R \llbracket G \rrbracket$-module. Conversely, say $\lvert G:H \rvert = c$. A set of $t$ generators for $M$ generates an open submodule of ${\operatorname{Res}}^G_HM$ of index at most $c^t$, so if $P_{R \llbracket G \rrbracket}(M,t) > 0$ for some $t$, $P_{R \llbracket H \rrbracket}({\operatorname{Res}}^G_HM,t+c^t) > 0$.
It now follows by the same techniques as for abstract modules (see [@Brown VIII, Proposition 5.1]) that $M$ has type ${\operatorname{PFP}}_n$ if and only if ${\operatorname{Res}}^G_HM$ does, and in particular this holds for $M=R$. Minimal resolutions ------------------- We can imitate the methods of [@Damian] to give some more detail about the type ${\operatorname{PFP}}_n$ conditions. Fix a projective resolution $P^G_\ast$ of $\hat{\mathbb{Z}}$ as a $\hat{\mathbb{Z}}\llbracket G \rrbracket$-module such that each $P^G_n$ is a projective cover of the kernel $K^G_{n-1}$ of the map $P^G_{n-1} \to P^G_{n-2}$ (with $K^G_0$ the kernel of $P^G_0 \to \hat{\mathbb{Z}}$). Thus $G$ has type ${\operatorname{PFP}}_n$ over $\hat{\mathbb{Z}}$ if and only if $K^G_m$ is PFG for all $m<n$. Write $\mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}$ for the set of irreducible $\hat{\mathbb{Z}}\llbracket G \rrbracket$-modules. For $M \in \mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}$: (i) $r_G(M)$ is the dimension of $M$ over the field ${\operatorname{End}}_G(M)$; (ii) $\delta_G(M)$ is the number of non-Frattini chief factors $G$-isomorphic to $M$ in a chief series of $G$; (iii) $h_G^n(M)$ is the dimension of $H^n(G,M)$ over ${\operatorname{End}}_G(M)$, for $n \geq 1$; (iv) $h'_G(M)$ is the dimension of $H^1(G/C_G(M),M)$ over ${\operatorname{End}}_G(M)$. For $N$ any other (profinite) $\hat{\mathbb{Z}}\llbracket G \rrbracket$-module, we write $i_N(M)$ for the number of factors $G$-isomorphic to $M$ in the (semisimple) *head* $N/\operatorname{rad}(N)$ of $N$. By [@Wilson Proposition 7.4.5], $N/\operatorname{rad}(N)$ is isomorphic to a product of simple modules and we can write $N/\operatorname{rad}(N) = \prod_{i\in I} H_i$, where $H_i$ is a power of a simple $\hat{\mathbb{Z}}\llbracket G \rrbracket$-module $M_i$ and $M_i \not\cong M_j$ for $i\neq j$. The profinite $\hat{\mathbb{Z}}\llbracket G \rrbracket$-module $H_i$ is called a homogeneous component of the head of $N$.
Parallel to [@Damian Theorem 1], we get: For $N$ a $\hat{\mathbb{Z}}\llbracket G \rrbracket$-module, the minimal size $d_{\hat{\mathbb{Z}}\llbracket G \rrbracket}(N)$ of a generating set of $N$ is $$\sup_{M \in \mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}} \left \lceil \frac{i_N(M)}{r_G(M)} \right\rceil.$$ Any subset of $N$ generates $N$ if and only if it generates the head of $N$, if and only if its projection into each homogeneous component of the head generates that homogeneous component (see [@Damian Section 3]). Each homogeneous component is a product of $i_N(M)$ copies of some $M \in \mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}$, and so the homogeneous component is generated by $\left \lceil i_N(M)/r_G(M) \right\rceil$ elements by [@CGK Lemma 1]. We can apply this theorem to the kernels $K^G_n$ in our minimal projective resolution. $$d_{\hat{\mathbb{Z}}\llbracket G \rrbracket}(K^G_{n-1}) = \sup_{M \in \mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}} \left\lceil \frac{h_G^n(M)}{r_G(M)} \right\rceil.$$ $i_{K^G_{n-1}}(M) = h_G^n(M)$ by the argument of [@Gruenberg Lemma 2.11]: differentials in the cochain complex ${\operatorname{Hom}}_G(P^G_\ast,M)$ are trivial for $M$ irreducible. $$d_{\hat{\mathbb{Z}}\llbracket G \rrbracket}(K^G_0) = \sup_{M \in \mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}} \left \lceil \frac{\delta_G(M)+h'_G(M)}{r_G(M)} \right \rceil.$$ $h_G^1(M) = \delta_G(M)+h'_G(M)$, by [@AG (2.10)]. Note that in the two corollaries it is enough to take the supremum over $\mathbb{F}_p$-modules for $p$ dividing the order of $G$, since otherwise $h_G^n(M)=0$ by [@RZ Corollary 7.3.3].
As expected, we deduce that $K^G_0$ is finitely generated if and only if $I_{\hat{\mathbb{Z}}}\llbracket G \rrbracket$ is, and comparing our theorem to [@Damian Theorem 1], we see that $$d_{\hat{\mathbb{Z}}\llbracket G \rrbracket}(K^G_0) \leq d_{\hat{\mathbb{Z}}\llbracket G \rrbracket}(I_{\hat{\mathbb{Z}}}\llbracket G \rrbracket) \leq d_{\hat{\mathbb{Z}}\llbracket G \rrbracket}(K^G_0)+1.$$ For $N$ a $\hat{\mathbb{Z}}\llbracket G \rrbracket$-module, and any $k$, $$P_{\hat{\mathbb{Z}}\llbracket G \rrbracket}(N,k) = \prod_{M\in \mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}} \prod_{i=0}^{i_N(M)-1} \left(1-\frac{\vert {\operatorname{End}}_G(M) \vert^i}{\vert M \vert^k}\right).$$ (If $i_N(M)=0$ for some $M$, we take the corresponding factor to be the empty product, i.e. $1$.) The proof echoes that of [@Damian Theorem 2] exactly, except for the following two points. Firstly, it is unnecessary to assume $k \geq d_{\hat{\mathbb{Z}}\llbracket G \rrbracket}(N)$: if not, by the argument of the previous theorem there is some $M \in \mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}$ such that $$i_N(M)/r_G(M) > k,$$ so there is some $i < i_N(M)$ such that $$\vert M \vert^k = \vert {\operatorname{End}}_G(M) \vert^{r_G(M)k} = \vert {\operatorname{End}}_G(M) \vert^i,$$ and hence for that value of $i$, the factor $1-\vert {\operatorname{End}}_G(M) \vert^i/\vert M \vert^k$ vanishes, and we correctly conclude $P_{\hat{\mathbb{Z}}\llbracket G \rrbracket}(N,k) = 0$. If $N$ is not finitely generated, our formula gives $P_{\hat{\mathbb{Z}}\llbracket G \rrbracket}(N,k) = 0$ for all $k$. Secondly, it is unnecessary to assume $G$ is countably based. If not, we may choose a (transfinite) sequence $\{N_\alpha\}$ of normal closed subgroups of $G=N_0$, such that $N_{\alpha+1}$ is open in $N_\alpha$ and $N_\alpha = \bigcap_{\beta<\alpha} N_\beta$ for $\alpha$ a limit. Then, arguing inductively on $\alpha$ in the same way as [@Damian Theorem 2], we reach the required conclusion.
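As a concrete sanity check on this product formula (an illustration under simplifying assumptions, not part of the cited proof): take $G$ trivial and $N = M^j$ with $M = \mathbb{F}_p$, so that $i_N(M) = j$, ${\operatorname{End}}_G(M) = \mathbb{F}_p$ and $r_G(M) = 1$. The formula predicts $P(N,k) = \prod_{i=0}^{j-1}(1-p^{i-k})$, the classical probability that $k$ uniformly random vectors span $\mathbb{F}_p^j$; a brute-force enumeration (assuming $p$ prime; the function names are ours) confirms this for small parameters:

```python
from itertools import product
from math import prod

def spans(vectors, p, j):
    # Rank of the given tuple of vectors over F_p, via Gaussian elimination;
    # assumes p is prime (so nonzero pivots are invertible mod p).
    rows = [list(v) for v in vectors]
    rank = 0
    for col in range(j):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)
        rows[rank] = [x * inv % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % p:
                c = rows[r][col]
                rows[r] = [(x - c * y) % p for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank == j

def exhaustive_probability(p, j, k):
    # Fraction of k-tuples of vectors in F_p^j that span (i.e. generate) F_p^j.
    vecs = list(product(range(p), repeat=j))
    good = sum(spans(tup, p, j) for tup in product(vecs, repeat=k))
    return good / p ** (j * k)

def formula(p, j, k):
    # Product formula specialised to i_N(M) = j, |End_G(M)| = |M| = p.
    return prod(1 - p ** (i - k) for i in range(j))

for p, j, k in [(2, 2, 2), (2, 2, 3), (3, 2, 2), (2, 3, 3)]:
    assert abs(exhaustive_probability(p, j, k) - formula(p, j, k)) < 1e-12
```

For instance, with $p=2$, $j=2$, $k=3$ both computations give $42/64 = (1-2^{-3})(1-2^{-2})$.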
These arguments apply to the statement of [@Damian Theorem 2]: the extra conditions there can be removed. For any $k$, $$P_{\hat{\mathbb{Z}}\llbracket G \rrbracket}(K^G_{n-1},k) = \prod_{\mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}} \prod_{i=0}^{h_G^n(M)-1} \left(1-\frac{\vert {\operatorname{End}}_G(M) \vert^i}{\vert M \vert^k}\right).$$ As before, note that $i_{K^G_{n-1}}(M) = h_G^n(M)$. Finally, by applying the same arguments as [@Damian Theorem 3], we get: For $N$ a finitely generated $\hat{\mathbb{Z}}\llbracket G \rrbracket$-module, $N$ is PFG if and only if the number of irreducible $G$-modules $M$ such that $i_N(M) \geq 1$ is polynomially bounded as a function of the order. Note that the inequality $\sum g(n)/n^t \leq A_G(t)$ in the proof of [@Damian Theorem 3] (in the notation used there) is not quite true: it assumes that $i_G(M) \geq 1$ for all $M \in \mathcal{S}^{\hat{\mathbb{Z}}\llbracket G \rrbracket}$. Since this holds for all non-trivial $M$, and since the number of trivial irreducible $G$-modules of order $n$ is polynomially bounded as a function of $n$ (by the polynomial $1$), the conclusion holds. \[Damian3\] Assume $G$ has type ${\operatorname{FP}}_n$. Then $K^G_{n-1}$ is PFG (and $G$ has type ${\operatorname{PFP}}_n$ over $\hat{\mathbb{Z}}$) if and only if the number of irreducible $G$-modules $M$ such that $h_G^n(M) \geq 1$ is polynomially bounded as a function of the order. In particular, for $G$ of type ${\operatorname{FP}}_1$, $G$ has type ${\operatorname{PFP}}_1$ if and only if the number of irreducible $G$-modules $M$ such that $\delta_G(M)+h'_G(M) \geq 1$ is polynomially bounded as a function of the order. APFG and PFP1 {#sec:APFG} ------------- As the reader might guess, APFG and ${\operatorname{PFP}}_1$ are closely related. From Proposition \[APFGring\] we deduce: \[APFGPFP1\] If $G$ is APFG, then it has type ${\operatorname{PFP}}_1$ over $\hat{\mathbb{Z}}$. 
By Proposition \[APFGring\], $\hat{\mathbb{Z}} \llbracket G \rrbracket$ is PFG. So the exact sequence of PFG modules shows that $G$ has type ${\operatorname{PFP}}_1$. \[PFP1\] If $G$ is PFG, then it has type ${\operatorname{PFP}}_1$ over $\hat{\mathbb{Z}}$. By Proposition \[prop:damian\_thm4.4\], $G$ being PFG implies it is APFG. The result follows by the previous lemma. Using the relation between APFG and ${\operatorname{PFP}}_1$, we can give an example of a group of type ${\operatorname{PFP}}_1$ that is not PFG. \[ex:damian\] In [@Damian Example 4.5] it is shown that, for $N$ large enough, the group $\prod_{n\ge N} \mathrm{Alt}(n)^{2^n}$ is APFG but not PFG. This example therefore has type ${\operatorname{PFP}}_1$ over $\hat{\mathbb{Z}}$, and has UBERG, but is not PFG. Minimal extensions, PFR and type PFP2 {#PFPvsPFP2} ------------------------------------- \[pfp2\] PFR implies ${\operatorname{PFP}}_2$ over $\hat{\mathbb{Z}}$. Let $G$ be PFR. Then $G$ is finitely presented and has UBERG, by Proposition \[KV\]. Hence $G$ has type ${\operatorname{PFP}}_2$ if and only if it has type ${\operatorname{FP}}_2$. Since $G$ is finitely presented, it has type ${\operatorname{FP}}_2$. Recall that group extensions $K \to E \to G$ and $K \to E' \to G$ with $K$ abelian are said to be *equivalent* if there is a commutative diagram $$\xymatrix{ K \ar[r] \ar@{=}[d] & E \ar[r] \ar[d] & G \ar@{=}[d] \\ K \ar[r] & E' \ar[r] & G;}$$ they are said to be *isomorphic* if there is a commutative diagram $$\xymatrix{ K \ar[r] \ar^\cong[d] & E \ar[r] \ar[d] & G \ar@{=}[d] \\ K \ar[r] & E' \ar[r] & G.}$$ So equivalent abelian extensions are isomorphic. Moreover, isomorphic abelian extensions induce a $G$-automorphism of $K$; since irreducible $G$-modules are $1$-generated, the number of $G$-automorphisms is at most $\lvert K \rvert$. Therefore the number of equivalence classes of minimal abelian extensions of degree $n$ in one isomorphism class is at most $n$.
We conclude: \[eq/iso\] Polynomial minimal abelian extension growth (that is, polynomial growth in the number of isomorphism classes of minimal extensions of $G$ with abelian kernel) is equivalent to polynomial growth in the number of equivalence classes of minimal extensions by non-isomorphic $G$-modules. Recall that, by the usual correspondence between the second cohomology of a group and its extensions, this second condition is equivalent to saying that $\sum_S \vert H^2(G,S) \vert$ grows polynomially in $k$, where the sum ranges over all irreducible $G$-modules of order $k$. We can use this idea for an alternative proof that PFR implies type ${\operatorname{PFP}}_2$. It is shown in [@KV Theorem 3.9] that for finitely generated profinite groups, PFR is equivalent to PMEG (see Section \[sec:pfgpfrmore\]) – that is, polynomial growth in the number of isomorphism classes of minimal extensions. Clearly this implies polynomial minimal abelian extension growth, and in fact it is equivalent to it among finitely presented groups, by [@KV Theorem 4.1, Proposition 5.2, Proposition 5.4]. By our proposition, this is equivalent to polynomial growth in $\sum_S \vert H^2(G,S) \vert$; by Theorem \[extcondition\], groups of type ${\operatorname{PFP}}_1$ (which includes PFR groups by Proposition \[APFGring\] and Lemma \[APFGPFP1\]) have type ${\operatorname{PFP}}_2$ (over $\hat{\mathbb{Z}}$) if and only if $\sum_S (\vert H^2(G,S) \vert-1)$ grows polynomially, so the result follows. We also observe, in contrast to [@KV Section 7], that we can check the ${\operatorname{PFP}}_2$ property by considering the minimal presentation of a group. Imitating [@KV Section 3], we say that a presentation of $G$ has polynomial maximal $P$-stable *abelian* subgroup growth if the number of maximal $P$-stable subgroups $S$ of $R$ of index $n$ with $R/S$ abelian grows polynomially in $n$. We can now state an equivalent formulation of property ${\operatorname{PFP}}_2$. 
\[PFP2=absubgps\] Let $G$ be a profinite group of type ${\operatorname{PFP}}_1$ and let $\tilde{G}\to G$ be the universal Frattini cover of $G$. Then $G$ has type ${\operatorname{PFP}}_2$ if and only if $R = \ker(\tilde{G} \to G)$ has polynomial maximal $\tilde{G}$-stable abelian subgroup growth. We may use the argument of [@KV Proposition 7.1]: the maximal $\tilde{G}$-stable subgroups of $R$ of index $n$ correspond precisely to the isomorphism classes of non-split minimal abelian extensions of $G$ of degree $n$ by [@Hill (3.2)]; the number of these grows polynomially in $n$ if and only if the number of equivalence classes of non-split minimal abelian extensions of $G$ of degree $n$ does, just as in Proposition \[eq/iso\]; since $G$ has type ${\operatorname{PFP}}_1$, this condition is equivalent to having type ${\operatorname{PFP}}_2$ by Theorem \[extcondition\]. Relative type PFPn {#sec:relpfpn} ------------------ A closed normal subgroup $H$ of a profinite group $G$ has *relative type ${\operatorname{PFP}}_n$ in $G$ over $R$* if all PFG projective $R \llbracket G/H \rrbracket$-modules have type ${\operatorname{PFP}}_n$ over $R \llbracket G \rrbracket$. Notice that in the analogous ${\operatorname{FP}}_n$ case, $H$ has relative type ${\operatorname{FP}}_n$ in all groups if and only if $H$ has type ${\operatorname{FP}}_n$, whereas when $G$ does not have type ${\operatorname{PFP}}_1$, the trivial subgroup (of type ${\operatorname{PFP}}_\infty$) does not have relative type ${\operatorname{PFP}}_1$ in $G$, so we need the two different definitions. \[relPFPn\] Let $G$ be a profinite group, $H$ be normal in $G$ and let $M$ be a profinite $R \llbracket G/H \rrbracket$-module. Suppose $H$ has relative type ${\operatorname{PFP}}_m$ in $G$ over $R$. (i) If $M$ has type ${\operatorname{PFP}}_n$ over $R \llbracket G \rrbracket$ (via restriction), then it has type ${\operatorname{PFP}}_{k}$ over $R \llbracket G/H \rrbracket$, where $k = \min(m+1,n)$.
(ii) If $M$ has type ${\operatorname{PFP}}_n$ over $R \llbracket G/H \rrbracket$, then it has type ${\operatorname{PFP}}_{l}$ over $R \llbracket G \rrbracket$, where $l = \min(m,n)$. First, note that $M$ is PFG as an $R \llbracket G \rrbracket$-module if and only if it is PFG as an $R \llbracket G/H \rrbracket$-module, because the action is by restriction. (i) Use induction on $n$. When $n=0$ we are done. Take a PFG projective $R \llbracket G/H \rrbracket$-module $P$ and an epimorphism $P \to M$ with kernel $K$. The module $K$ has type ${\operatorname{PFP}}_{\min(m,n-1)}$ over $R \llbracket G \rrbracket$ by Proposition \[moduletypes\], so by hypothesis it has type $$\min(m+1,\min(m,n-1)) = \min(m,n-1)$$ over $R \llbracket G/H \rrbracket$. Therefore $M$ has type $\min(m,n-1)+1=k$ over $R \llbracket G/H \rrbracket$. (ii) Use induction on $n$. When $n=0$ we are done. Take a PFG projective $R \llbracket G/H \rrbracket$-module $P$ and an epimorphism $P \to M$ with kernel $K$. Now $K$ has type ${\operatorname{PFP}}_{n-1}$ over $R \llbracket G/H \rrbracket$ by Proposition \[moduletypes\], so by hypothesis it has type ${\operatorname{PFP}}_{\min(m,n-1)}$ over $R \llbracket G \rrbracket$. Also $P$ has type ${\operatorname{PFP}}_m$ over $R \llbracket G \rrbracket$, so by Proposition \[moduletypes\] $M$ has type ${\operatorname{PFP}}_{\min(m,n)}$ over $R \llbracket G \rrbracket$. In particular this holds for $M=R$. \[relPFPn2\] Suppose that $H$ has relative type ${\operatorname{PFP}}_m$ in $G$ over $R$. (i) If $G$ has type ${\operatorname{PFP}}_n$ over $R$, then $G/H$ has type ${\operatorname{PFP}}_{\min(m+1,n)}$ over $R$. (ii) If $G/H$ has type ${\operatorname{PFP}}_n$ over $R$, then $G$ has type ${\operatorname{PFP}}_{\min(m,n)}$ over $R$. Exactly the same approach gives the analogous classical result (see [@Bieri Proposition 2.7] for example) that relates type ${\operatorname{FP}}_n$ conditions for extensions and quotients of abstract groups.
As far as we know, this is the first proof of this result which avoids the use of spectral sequences; those unfamiliar with the mysteries of spectral sequences may find this new perspective enlightening. At the moment the property of relative type ${\operatorname{PFP}}_n$, and thus the behaviour of type ${\operatorname{PFP}}_n$ under extensions, remains mysterious. But we have the following result: \[prop:PFPndirectprod\] Suppose $G$ and $H$ are profinite groups. Let $M$ be an $R \llbracket G \rrbracket$-module and $N$ an $R \llbracket H \rrbracket$-module. Suppose $M$ has type ${\operatorname{PFP}}_m$ and $N$ has type ${\operatorname{PFP}}_n$. Then $M \otimes_R N$ has type ${\operatorname{PFP}}_{\min(m,n)}$ as an $R \llbracket G \times H \rrbracket$-module. We will show that if $P$ is a PFG projective $R \llbracket G \rrbracket$-module and $Q$ a PFG projective $R \llbracket H \rrbracket$-module, then $P \otimes_R Q$ is PFG as an $R \llbracket G \times H \rrbracket$-module. The result follows by taking tensor products of partial PFG projective resolutions for $M$ and $N$. Each maximal submodule of $P$ gives an epimorphism onto an irreducible $k\llbracket G \rrbracket$-module for some field $k$ which is a quotient of $R$. Now define the function $f^G_n(P,k')$ over all finite extensions $k'$ of fields $k$ which appear as quotients of $R$, as follows. Let $\mathcal{S}_n(G,k')$ be the set of absolutely irreducible representations of $G$ of dimension $n$ which are defined over $k'$; we think of elements of $\mathcal{S}_n(G,k')$ as $R \llbracket G \rrbracket$-modules via restriction along $R \to k'$. [@KV Lemma 6.7] gives a bijection $\Phi_k$ from Galois orbits of irreducible $\bar{k}\llbracket G \rrbracket$-modules to irreducible $k\llbracket G \rrbracket$-modules, where $\bar{k}$ is the algebraic closure of $k$, and we identify $k'$ with a subfield of $\bar{k}$.
Then $f^G_n(P,k') = \sum_{M \in \mathcal{S}_n(G,k')} (|{\operatorname{Hom}}_{R \llbracket G \rrbracket}(P,\Phi_k(M))|-1)$. Exactly the same approach as [@KV Lemma 6.8] shows that $P$ is PFG if and only if there is some $b$ such that $f^G_n(P,k') \leq |k'|^{bn}$ for all $n$ and all $k'$ where it is defined. As in the proof of [@KV Theorem 6.4], the absolutely irreducible representations of $G \times H$ over $k'$ are precisely the tensor products of absolutely irreducible representations of $G$ and $H$ over $k'$. As there, we deduce that if $f^G_n(P,k') \leq |k'|^{bn}$ for some $b$, and similarly for $f^H_n(Q,k')$, there exists a $c$ such that $f^{G \times H}_n(P \otimes_R Q,k') \leq |k'|^{cn}$, and $P \otimes_R Q$ is PFG, as required. If $G$ has type ${\operatorname{PFP}}_m$ over $R$ and $H$ has type ${\operatorname{PFP}}_n$ over $R$, then $G \times H$ has type ${\operatorname{PFP}}_{\min(m,n)}$ over $R$. Compare this to the result in [@KV Theorem 6.4] that UBERG is preserved by (finite) direct products. Examples ======== In this section we will construct some examples of groups of type ${\operatorname{PFP}}_1$ over $R$ for which the group ring $R \llbracket G \rrbracket$ is not PFG, examples of groups of type ${\operatorname{FP}}_1$ over $\hat{\mathbb{Z}}$ which do not have type ${\operatorname{PFP}}_1$, and examples of groups of type ${\operatorname{PFP}}_n$ but not ${\operatorname{PFP}}_{n+1}$ over $\hat{\mathbb{Z}}$ for all $n$. G PFP1 != R\[\[G\]\] PFG ------------------------ The examples studied in [@Damian] suggest the strategy of looking at products of finite groups. \[prop:finsimplegroup\] Let $S$ be a finite group and let $G$ be an infinite product of copies of $S$. For any prime $p$ not dividing the order of $S$, $\mathbb{Z}_p \llbracket G \rrbracket$ is not PFG, but $G$ has type ${\operatorname{PFP}}_1$ over $\mathbb{Z}_p$.
The group $G$ has infinitely many irreducible representations of dimension $\leq \vert S\vert$ over $\mathbb{F}_p$, so $\mathbb{Z}_p \llbracket G \rrbracket$ is not PFG. On the other hand, [@RZ Corollary 7.3.3] shows that $H^1(G,M)$ is trivial for all irreducible $\mathbb{F}_p\llbracket G\rrbracket$-modules $M$, because $p$ does not divide the order of $S$. Therefore, by Theorem \[extcondition\], we can see that $G$ has type ${\operatorname{PFP}}_1$ over $\mathbb{Z}_p$. Similarly, if we denote by $\pi= \pi(S)$ the set of prime divisors of the order of $S$, the group $S^\mathbb{N}$ has type ${\operatorname{PFP}}_1$ over $\mathbb{Z}_{\pi'} = \prod_{p\notin \pi} \mathbb{Z}_p$. By varying the group $S$, we get examples of this behaviour over $\mathbb{Z}_p$ for all primes $p$. We do not know of any such examples over $\hat{\mathbb{Z}}$, and leave it as a question. Are there groups of type ${\operatorname{PFP}}_1$ over $\hat{\mathbb{Z}}$ which do not have UBERG? By Corollary \[Damian3\], a necessary and sufficient condition for this is that the number of irreducible modules $M$ of order $n$ for such a group $G$ would grow faster than polynomially in $n$, but the number for which $\delta_G(M)+h'_G(M)$ is non-zero would grow polynomially. [@Damian] constructs groups with UBERG which are not PFG, but another remaining question is whether such groups must be finitely generated. Are there groups with UBERG which are not finitely generated? On the other hand, examples like the above cannot appear among pronilpotent groups. In future work with S. Kionke, we show: \[prop:pronilp\] Let $G$ be a pronilpotent group. Then $G$ has UBERG if and only if $G$ is finitely generated. The class of prosoluble groups appears often in these contexts as groups where pathological behaviour cannot occur. For example, finitely generated prosoluble groups are PFG, and prosoluble groups have type ${\operatorname{PFP}}_1$ if and only if they are finitely generated by Corollary \[PFP1\] and [@CC2 Remark 3.5(a)].
So it is very natural to ask: \[quest:prosoluble\] Are all prosoluble groups with UBERG finitely generated? Type FP1 and type PFP1 {#FP1vsPFP1} ---------------------- Clearly type ${\operatorname{PFP}}_1$ over $R$ implies type ${\operatorname{FP}}_1$ over $R$; we show the converse does not hold. \[prop:free\_not\_PFP\] Let $F_n$ be the free profinite group on $n$ generators. For $n > 1$, $F_n$ does not have type ${\operatorname{PFP}}_1$ over $\hat{\mathbb{Z}}$. We first give a proof for $n=3$; the case for $n>3$ is proved similarly. We think of $F_3$ as the profinite free product $\hat{\mathbb{Z}} \ast F_2$, for some fixed copy of $\hat{\mathbb{Z}}$. We use the Mayer-Vietoris sequence of [@RZ Proposition 9.2.13]. Note that our profinite free product is proper by [@RZ Example 9.2.6], so the Mayer-Vietoris sequence applies. Now $F_3$ has type ${\operatorname{FP}}_1$ because it is finitely generated. To show it does not have type ${\operatorname{PFP}}_1$ we use the cohomological characterisation: we will show $\sum_{S \in \mathcal{S}^{F_3}_k} (|H^1(F_3,S)|-1)$ grows faster than polynomially in $k$. By the Mayer-Vietoris sequence, it suffices to show $\sum_{S \in \mathcal{S}^{F_3}_k} (|H^1(\hat{\mathbb{Z}},S)|-1)$ grows faster than polynomially in $k$. Let $\mathcal{T}^{F_3}_k$ be the set of irreducible $F_3$-modules of order $k$ on which restriction to $\hat{\mathbb{Z}}$ gives the trivial action. By the universal property of free products, we can identify this with $\mathcal{S}^{F_2}_k$. The sequence $|\mathcal{S}^{F_2}_k|$ grows faster than polynomially in $k$ by [@KV Lemma 6.16].
It is well-known (see [@Weibel]) that for any group $G$ and any trivial $G$-module $A$, $H^1(G,A) = {\operatorname{Hom}}(G,A)$, so for $S \in \mathcal{T}^{F_3}_k$, $H^1(\hat{\mathbb{Z}},S) = S$ has order $k$ and hence $$\sum_{S \in \mathcal{S}^{F_3}_k} (|H^1(\hat{\mathbb{Z}},S)|-1) \geq \sum_{S \in \mathcal{T}^{F_3}_k} (|H^1(\hat{\mathbb{Z}},S)|-1) = (k-1)\,|\mathcal{T}^{F_3}_k|$$ grows faster than polynomially in $k$, as required. Finally, for $n=2$, we note by [@RZ Theorem 3.6.2] that proper open subgroups of $F_2$ are free profinite groups of higher rank, which therefore do not have type ${\operatorname{PFP}}_1$; we conclude $F_2$ does not have type ${\operatorname{PFP}}_1$ by Proposition \[prop:PFPncommensurability\]. Type PFPn but not type PFPn+1 ----------------------------- In [@CC2 Proposition 4.6], a family $\{A_n\}$ of pro-$\mathcal{C}$ groups of type ${\operatorname{FP}}_n$ but not ${\operatorname{FP}}_{n+1}$ is constructed over $\mathbb{Z}_{\hat{\mathcal{C}}}$, the pro-$\mathcal{C}$ completion of $\hat{\mathbb{Z}}$, for any class $\mathcal{C}$ of finite groups closed under subgroups, quotients and extensions. \[FPnnotn+1\] Let $\mathcal{C}$ be the class of finite soluble groups, so that $\mathbb{Z}_{\hat{\mathcal{C}}} = \hat{\mathbb{Z}}$. Then $A_n$ has type ${\operatorname{PFP}}_n$ but not type ${\operatorname{PFP}}_{n+1}$ over $\hat{\mathbb{Z}}$. By Remark \[ex:PFPnprosoluble\], for finitely generated prosoluble groups, type ${\operatorname{PFP}}_n$ over $R$ is equivalent to type ${\operatorname{FP}}_n$ over $R$. Since $A_n$ is finitely generated for $n \geq 1$, we conclude that $A_n$ has type ${\operatorname{PFP}}_n$ but not type ${\operatorname{PFP}}_{n+1}$ over $\hat{\mathbb{Z}}$ for $n \geq 1$. For $n=0$, $A_0$ is not finitely generated, hence not of type ${\operatorname{FP}}_1$ over $\hat{\mathbb{Z}}$ by [@Damian Corollary 2.4], hence not of type ${\operatorname{PFP}}_1$.
As an additional example, we consider the iterated wreath products described in [@Vannacci]. Let $G$ be the infinitely iterated wreath product of copies of the alternating group $A_{36}$ defined in [@Vannacci Remark 2]. $G$ is PFG by the proof of [@Quick Theorem A], so it has type ${\operatorname{PFP}}_1$ over $\hat{\mathbb{Z}}$, and has UBERG – though it is not finitely presented, by [@Vannacci Remark 2]. $G$ does not have type ${\operatorname{PFP}}_2$ over $\hat{\mathbb{Z}}$. Since $G$ has UBERG, it is enough to show $G$ does not have type ${\operatorname{FP}}_2$. $A_{36}$ has Schur multiplier of size $2$, so for $W_n$ the iterated wreath product of copies of $A_{36}$, iterated $n$ times, we get $H_2(W_n,\hat{\mathbb{Z}}) = (\mathbb{Z}/2\mathbb{Z})^n$ by [@Read Theorem 3]. Taking inverse limits over $n$, $H_2(G,\hat{\mathbb{Z}})$ is an infinite product of copies of $\mathbb{Z}/2\mathbb{Z}$. In particular, it is not finitely generated, so $G$ does not have type ${\operatorname{FP}}_2$ by [@CC2 Lemma 4.5]. [99]{} M. Aschbacher and R. Guralnick. Some applications of the first cohomology group. 90(2):446–460, 1984. F. R. Beyl and J. Tappe. Group extensions, representations, and the Schur multiplicator, volume 958 of [*Lecture Notes in Mathematics*]{}. Springer-Verlag, 1982. R. Bieri. Homological dimension of discrete groups, in *Queen Mary College Mathematical Notes*. Queen Mary College, London 1981. K.S. Brown. Cohomology of groups, volume 87 of [*Graduate Texts in Mathematics*]{}. Springer-Verlag, 1994. G. Corob Cook. Bieri-Eckmann criteria for profinite groups. , 212:857–893, 2016. G. Corob Cook. On profinite groups of type ${\operatorname{FP}}_\infty$. , 294:216–255, 2016. J. Cossey, K.W. Gruenberg, L.G. Kovács. The presentation rank of a direct product of finite groups. , 28(3):597–603, 1974. E. Damian. The generation of the augmentation ideal in profinite groups. , 186:447–476, 2011. M.D. Fried and M. Jarden.
Field arithmetic, volume 11 of [*A Series of Modern Surveys in Mathematics*]{}. Springer-Verlag, Berlin, 2008. K.W. Gruenberg. Relation Modules of Finite Groups, Conference Board of the Mathematical Sciences Regional Conference Series in Mathematics, No. 25. American Mathematical Society, Providence, RI, 1976. R. Guralnick, W. Kantor, M. Kassabov and A. Lubotzky. Presentations of finite simple groups: profinite and cohomological approaches. , 1:469–523, 2007. V. Hill. Split and minimal abelian extensions of finite groups. , 172:329–337, 1972. W. Kantor and A. Lubotzky. The probability of generating a finite classical group. , 36(1):67–87, 1990. P. Kleidman and M. Liebeck. , LMS Lecture Note Series 129. Cambridge University Press, Cambridge, 1990. G. Karpilovsky. , LMS Monographs 2. Oxford University Press, Oxford, 1987. S. Kionke and M. Vannacci. Positively finitely related profinite groups. , 225(2):743–770, 2018. A. Lubotzky. Pro-finite presentations. , 242:672–690, 2001. A. Lubotzky and D. Segal. , volume 212 of [*Progr. Math*]{}. Birkhäuser Verlag, Basel, 2003. A. Mann. Positively finitely generated groups. 8(8):429–460, 1996. A. Mann and A. Shalev. Simple groups, maximal subgroups, and probabilistic aspects of profinite groups. , 96:449–468, 1996. L. Morgan and C. Roney-Dougal. A note on the probability of generating alternating or symmetric groups. , 105:201–204, 2015. M. Quick. Probabilistic generation of wreath products of non-abelian finite simple groups, II. , 16(3):493–503, 2006. E.W. Read. On the Schur multiplier of a wreath product. , 20(3):456–466, 1976. L. Ribes and P. Zalesskii. Profinite groups, volume 40 of *A Series of Modern Surveys in Mathematics*. Springer-Verlag, Berlin, 2010. P. Symonds and T. Weigel. Cohomology of $p$-adic analytic groups. In [*New horizons in pro-[$p$]{} groups*]{}, volume 184 of [*Progr. Math.*]{}, pages 347–408. Birkhäuser Boston, Boston, MA, 2000. M. Vannacci. 
On hereditarily just infinite profinite groups obtained via iterated wreath products. 19:233–238, 2016. C Weibel. An introduction to homological algebra, volume 38 of *Cambridge Studies in Advanced Mathematics*. Cambridge University Press, Cambridge, 1994. J.S. Wilson. Profinite groups, volume 19 of *London Mathematical Society Monographs, New Series*. Oxford University Press, New York, 1998. [^1]: The first author was supported by ERC grant 336983 and Basque government grant IT974-16. The second author acknowledges support from the research training group *GRK 2240: Algebro-Geometric Methods in Algebra, Arithmetic and Topology*, funded by the DFG [^2]: We choose to avoid abbreviating this to PFP because of potential clashes with some future paper about a positively type ${\operatorname{FP}}$ condition.
--- abstract: 'Data generated from real-world events are usually temporal and contain multimodal information such as audio, visual, depth, sensor, etc., which are required to be intelligently combined for classification tasks. In this paper, we propose a novel generalized deep neural network architecture where temporal streams from multiple modalities are combined. There are in total M+1 components in the proposed network, where M is the number of modalities. The first component is a novel temporally hybrid Recurrent Neural Network (RNN) that exploits the complementary nature of the multimodal temporal information by allowing the network to learn both modality specific temporal dynamics as well as the dynamics in a multimodal feature space. M additional components are added to the network which extract discriminative but non-temporal cues from each modality. Finally, the predictions from all of these components are linearly combined using a set of automatically learned weights. We perform exhaustive experiments on three different datasets spanning four modalities. The proposed network is relatively 3.5%, 5.7% and 2% better than the best performing temporal multimodal baseline for UCF-101, CCV and Multimodal Gesture datasets respectively.' author: - 'Ankit Gandhi$^{1\,*}$, Arjun Sharma$^{1\,*}$, Arijit Biswas$^2$,' - Om Deshmukh$^1$ bibliography: - 'sig-alternate-sample.bib' title: 'GeThR-Net: A Generalized Temporally Hybrid Recurrent Neural Network for Multimodal Information Fusion' --- Introduction ============ Humans typically perceive the world through multimodal sensory information [@stein2009neural] such as visual, audio, depth, etc. For example, when a person is running, we recognize the event by looking at how the body posture of the person is changing with time as well as by listening to the periodic sound of his/her footsteps. Human brains can seamlessly process multimodal signals and accurately classify an event or an action.
However, it is a challenging task for machines to exploit the complementary nature of multimodal information and combine it optimally. Recently, deep neural networks have been extensively used in computer vision, natural language processing and speech processing. LSTM [@hochreiter1997long], a Recurrent Neural Network (RNN) [@williams1989learning] architecture, has been extremely successful in temporal modelling and classification tasks such as handwriting recognition [@graves2009offline], action recognition [@baccouche2011sequential], image and video captioning [@Chen_2015_CVPR; @Vinyals_2015_CVPR; @yao2015describing] and speech recognition [@graves2013speech; @graves2014towards]. RNNs can also be used to model multimodal information. These methods fall under two broad categories: (a) Early-Fusion: modality specific features are combined to create a feature representation and fed into an LSTM network for classification. (b) Late-Fusion: each modality is modelled using individual LSTM networks and their predictions are combined for classification [@wu2015modeling]. Since early-fusion techniques do not learn any modality specific temporal dynamics, they fail to capture the discriminative temporal cues present in each modality. On the other hand, late-fusion methods cannot extract the discriminative temporal cues which might be available in a multimodal feature representation. In this paper, we propose a novel generalized temporally hybrid Recurrent Neural Network architecture called GeThR-Net which models the temporal dynamics of individual modalities (late fusion) as well as the overall temporal dynamics in a multimodal feature space (early fusion). GeThR-Net has one temporal and $M$ ($M$ is the total number of modalities) non-temporal components. The novel temporal component of GeThR-Net models the long-term temporal information in a multimodal signal whereas the non-temporal components take care of situations where explicit temporal modelling is difficult.
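To make the two baseline categories concrete, here is a schematic numpy sketch. This is illustrative only, not any of the cited systems: a plain tanh recurrence stands in for an LSTM, and all modality names, dimensions and random weights are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def rnn(xs, Wxh, Whh):
    # Plain tanh recurrence standing in for an LSTM; returns the last hidden state.
    h = np.zeros(Whh.shape[0])
    for x in xs:
        h = np.tanh(Wxh @ x + Whh @ h)
    return h

T, H, C = 6, 4, 3                        # time steps, hidden size, classes
feats = {"audio": rng.standard_normal((T, 5)),    # per-timestep features,
         "visual": rng.standard_normal((T, 8))}   # one array per modality

# (a) Early fusion: concatenate per-timestep features, run ONE temporal model.
xs_early = np.concatenate([feats["audio"], feats["visual"]], axis=1)  # (T, 13)
W_e = rng.standard_normal((H, xs_early.shape[1])) * 0.1
U_e = rng.standard_normal((H, H)) * 0.1
V_e = rng.standard_normal((C, H)) * 0.1
p_early = softmax(V_e @ rnn(xs_early, W_e, U_e))

# (b) Late fusion: one temporal model per modality, then combine predictions.
preds = []
for m, xs in feats.items():
    Wm = rng.standard_normal((H, xs.shape[1])) * 0.1
    Um = rng.standard_normal((H, H)) * 0.1
    Vm = rng.standard_normal((C, H)) * 0.1
    preds.append(softmax(Vm @ rnn(xs, Wm, Um)))
p_late = np.mean(preds, axis=0)          # e.g. uniform combination weights
```

Both `p_early` and `p_late` are length-`C` probability vectors; the hybrid architecture described next keeps both kinds of temporal pathway.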
The temporal component consists of three layers. The first layer models each modality using individual modality-specific LSTM networks. The second layer combines the hidden representations from these LSTMs to form a multimodal feature representation corresponding to each time step. In the final layer, one multimodal LSTM is trained on the multimodal features obtained from the second layer. The output from the final layer is fed into a softmax layer for category-wise confidence prediction. We observe that in many real world scenarios, the temporal modelling of individual or multimodal information is extremely hard due to the presence of noise or high intra-class temporal variation. We address this issue by introducing $M$ additional components to GeThR-Net which model modality specific non-temporal cues by ignoring the temporal relationship across features extracted from different time-instants. The predictions corresponding to all $M+1$ components in the proposed network are combined using a weight vector learned from the validation dataset. We note that GeThR-Net can be used with any kind of modality information without any restriction on the number of modalities. The main contributions of this paper are: - We propose a generalized deep neural network architecture called GeThR-Net that could intelligently combine multimodal temporal information from any kind and any number of streams. - Our objective is to propose a general framework that could work with modalities of any kind. We demonstrate the effectiveness and wide applicability of GeThR-Net by evaluation of classification performance on three different action and gesture classification tasks, UCF-101 [@soomro2012ucf101], Multimodal Gesture [@escalera2013multi] and Columbia Consumer videos [@jiang2011consumer]. Four different modalities such as audio, appearance, short-term motion and skeleton are considered in our experiments.
We find that GeThR-Net is relatively 3.5%, 5.7% and 2% better than the best temporal multimodal baseline for UCF-101, CCV and Multimodal Gesture datasets respectively. The full pipeline of the proposed approach is shown in Figure \[fig:pipeline\]. We discuss the relevant prior work in Section \[sec:related\_work\] followed by the details of GeThR-Net in Section \[sec:proposed\_approach\]. The details of experimental results are provided in Section \[sec:exp\_results\]. ![image](pipeline){width="0.8\linewidth"} Related Work {#sec:related_work} ============ In this section, we describe the relevant prior work on generic multimodal fusion and multimodal fusion using deep learning. A good survey of different fusion strategies for multimodal information is in [@atrey2010multimodal]. We discuss a few relevant papers here. The authors in [@xie2013multimodal] provide a general theoretical analysis for multimodal information fusion and implement novel information theoretic tools for multimedia applications. [@wu2004optimal] proposes a two-step approach for an optimal multimodal fusion, where in the first step statistically independent modalities are found from raw features and in the second step, super-kernel fusion is used to find the optimal combination of individual modalities. In [@jhuo2014discovering], the authors propose a method for detecting complex events in videos by using a new representation, called bi-modal words, to explore the representative joint audio and visual patterns. [@jiang2009short] proposes a method to extract a novel representation, the Short-term Audio-Visual Atom (S-AVA), for improved semantic concept detection in videos. The authors in [@ye2012robust] propose a rank minimization method to fuse the predicted confidence scores of multiple models based on different kinds of features.
Their goal is to find a shared rank-2 pairwise relationship matrix (for the test samples) based on which each original score matrix from each individual model can be decomposed into the common rank-2 matrix and sparse deviation errors. [@snoek2005early] proposes an early and a late fusion scheme for audio, visual and textual information fusion for semantic video analysis and demonstrates that the late fusion method works slightly better. In [@paleari2006toward], the authors propose a multimodal fusion technique and describe a way to implement a generic framework for multimodal emotion recognition. In [@ngiam2011multimodal], the authors propose a deep autoencoder network that is pretrained using sparse Restricted Boltzmann Machines (RBM). The proposed method is used to learn multimodal feature representation for the task of audio-visual speech recognition. The authors in [@srivastava2012multimodal] propose a Deep Boltzmann Machine (DBM) for learning a generative model of data that consists of multiple and diverse input modalities. [@sohn2014improved] proposes a multimodal representation learning framework that minimizes the variation information between data modalities through shared latent representations. In [@wu2014exploring], the authors propose a unified deep neural network, which jointly learns feature relationships and class relationships, and simultaneously carries out video classification within the same framework utilizing the learned relationships. [@mao2014explain; @mao2014deep] proposes an approach for generating novel image captions given an image. This approach directly models the probability distribution of a word given previous words and an image using a network that consists of a deep RNN for sentences and a deep CNN for images. [@wu2014multimodal] proposes a novel bi-modal dynamic network for gesture recognition.
High-level audio and skeletal-joint representations, extracted using dynamic Deep Belief Networks (DBN), are combined using a perceptron layer. However, none of these approaches use RNNs for both multimodal and temporal data fusion and hence cannot learn features which truly represent the complementary nature of multimodal features along the temporal dimension. The authors in [@chen2015multi] propose a multi-layer RNN for multi-modal emotion recognition. However, the number of layers in the proposed architecture is equal to the number of modalities, which restricts the maximum number of modalities which can be used simultaneously. The authors in [@wu2015modeling] propose a hybrid deep learning framework for video classification that can model static spatial information, short-term motion, as well as long-term temporal clues in the videos. The spatial and the short-term motion features extracted from CNNs are combined using a regularized feature fusion network. LSTM is used to model only the modality specific long-term temporal information. However, in the proposed GeThR-Net, the temporally hybrid architecture can automatically combine temporal information from multiple modalities without requiring any explicit feature fusion framework. We also point out that unlike [@wu2015modeling], in GeThR-Net, the multimodal fusion is performed at the LSTM network level. To the best of the authors’ knowledge, there are no prior approaches where multimodal information fusion is performed at the RNN/LSTM level. GeThR-Net is the first method to use a temporally hybrid RNN which is capable of learning features from modalities of any kind without any upper-bound on the number of modalities. Proposed Approach {#sec:proposed_approach} ================= In this section, we provide the details of the proposed deep neural network architecture GeThR-Net. First, we discuss how LSTM networks usually work.
Next, we provide the descriptions of the temporal and non-temporal components of our network followed by how we combine predictions from all these components. Long Short Term Memory Networks {#sec:lstm} ------------------------------- Recently, a type of RNN, called Long Short Term Memory (LSTM) Networks, has been successfully employed to capture long-term temporal patterns and dependencies in videos for tasks such as video description generation, activity recognition, etc. RNNs [@williams1989learning] are a special class of artificial neural networks, where cyclic connections are also allowed. These connections allow the networks to maintain a memory of the previous inputs, making them suitable for modelling sequential data. In LSTMs, this memory is maintained with the help of three non-linear multiplicative gates which control the in-flow, out-flow, and accumulation of information over time. We provide a detailed description of RNNs and LSTM networks below. Given an input sequence $\textbf{x}=\{x_{t}\}$ of length $T$, the fixed-length hidden state or memory $\textbf{h}$ of an RNN is given by $$h_{t} = g(x_{t}, h_{t-1}) \quad t = 1,\dots,T \label{eq:hidden}$$ We use $h_{0} = 0$ in this work. Multiple such hidden layers can be stacked on top of each other, with $x_t$ in equation \[eq:hidden\] replaced with the activation at time $t$ of the previous hidden layer, to obtain a ‘deep’ recurrent neural network. The output of the RNN at time $t$ is computed using the state of the last hidden layer at $t$ as $$y_{t} = \theta(W_{yh}h_{t}^{n} + b_{y}) \label{eq:output}$$ where $\theta$ is a non-linear operation such as sigmoid or hyperbolic tangent for binary classification or softmax for multiclass classification, $b_y$ is the bias term for the output layer and $n$ is the number of hidden layers in the architecture.
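As a concrete single-hidden-layer illustration of equations \[eq:hidden\] and \[eq:output\], the sketch below unrolls the recurrence in numpy with a tanh transition for $g$ and a softmax output; all dimensions and random weights are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

D, H, C, T = 3, 4, 2, 6        # input dim, hidden dim, classes, sequence length
Whx = rng.standard_normal((H, D)) * 0.1
Whh = rng.standard_normal((H, H)) * 0.1
bh = np.zeros(H)
Wyh = rng.standard_normal((C, H)) * 0.1
by = np.zeros(C)

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

h = np.zeros(H)                            # h_0 = 0, as in the text
for t in range(T):
    x_t = rng.standard_normal(D)           # stand-in for the input at time t
    h = np.tanh(Whx @ x_t + Whh @ h + bh)  # h_t = g(x_t, h_{t-1})

y = softmax(Wyh @ h + by)                  # y_T = theta(W_yh h_T + b_y)
```

`y` is a length-`C` probability vector; for sequence classification only the output at the final time step is typically used.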
The output of the RNN at desired time steps can then be used to compute the error, and the network weights are updated based on the gradients computed using Back-propagation Through Time (BPTT). In simple RNNs, the function $g$ is computed as a linear transformation of the input and the previous hidden state, followed by an element-wise non-linearity: $$g(x_{t}, h_{t-1}) = \theta(W_{hx}x_{t} + W_{hh}h_{t-1} + b_{h}).$$ Such simple RNNs, however, suffer from the vanishing and exploding gradient problem [@hochreiter1997long]. To address this issue, a novel form of recurrent neural network, the Long Short Term Memory (LSTM) network, was introduced in [@hochreiter1997long]. The key difference between simple RNNs and LSTMs is in the computation of $g$, which is done in the latter using a memory block. An LSTM memory block consists of a memory cell $c$ and three multiplicative gates which regulate the state of the cell: a forget gate $f$, an input gate $i$ and an output gate $o$. The memory cell encodes the knowledge of the inputs that have been observed up to that time step. The forget gate controls whether the old information should be retained or forgotten. The input gate regulates whether new information should be added to the cell state, while the output gate controls which parts of the new cell state to output. Like simple RNNs, LSTM networks can be made deep by stacking memory blocks. The output layer of the LSTM network can then be computed using equation \[eq:output\]. We refer the reader to [@hochreiter1997long] for more technical details on LSTMs. Temporal Component of GeThR-Net ------------------------------- In this subsection, we describe the details of the temporal component, which is a temporally hybrid LSTM network that models modality-specific temporal dynamics as well as the multimodal temporal dynamics. This network has three layers. The first layer models the modality-specific temporal information using individual LSTM layers. 
Features from different modalities do not interact in this layer. In the second layer, the hidden representations from all the modalities are combined using a linear function, followed by a sigmoid non-linearity, to create a single multimodal feature representation corresponding to each time step. Finally, in the third layer, an LSTM network is fed with the learned multimodal features from the second layer. The output from the third layer is fed into a softmax layer for estimating the classification confidence scores corresponding to each label. This component is trained fully end-to-end and does not require any explicit feature fusion modelling. We assume that there are $M$ different modalities and $T$ time-steps in total. The feature representation for modality $m$ at time instant $t$ is denoted by $x^m_{t}$. Now, we describe the mathematical details of these layers: - [**First Layer:**]{} The input to this layer is $x^m_{t}$ for modality $m$ at time instant $t$. If $LSTM_{m}$ denotes the LSTM layer for modality $m$ and $h^m_{t}$ denotes the corresponding hidden representation at time $t$, then: $$h^m_{t} = LSTM_{m}(x^m_{t})$$ - [**Second Layer:**]{} In this layer, the hidden representations are combined using a linear function followed by a sigmoid non-linearity. The objective of this layer is to combine features from multiple temporal modalities. Let $z_{t}$ denote the concatenated hidden representation from all the modalities at time-step $t$. $W_z$ (shared across all time-steps $t$) denotes the weight matrix which combines the multimodal features and creates a representation $p_t$ at time instant $t$, $b_z$ denotes a linear bias and $\sigma$ is the sigmoid function. 
$$z_{t} = (h^1_{t},\cdots,h^M_{t}), \hspace{1cm} p_{t} = \sigma(W_{z}z_{t} + b_{z})$$ - [**Third Layer:**]{} In this layer, one modality-independent LSTM layer is used to model the overall temporal dynamics of the multimodal feature representation $p_{t}$. Suppose $LSTM_{c}$ denotes the combined LSTM and $h^c_t$ denotes the hidden representation from this LSTM layer at time $t$. $W_o$ is the weight matrix that linearly transforms the hidden representation. The output is propagated through a softmax function $\theta$ to obtain the final classification confidence values $y^c_t$ at time $t$, and $b_{o}$ is a linear bias vector. $$h^c_t = LSTM_{c}(p_{t}), \hspace{1cm} y^c_t = \theta(W_{o}h^c_t + b_{o})$$ Non-temporal Component of GeThR-Net ----------------------------------- Although it is important to model the temporal information in multimodal signals for accurate classification and other tasks, in real-world scenarios multimodal information often contains a significant amount of noise and large intra-class variation along the temporal dimension. For example, videos of the activity ‘cooking’ often contain action segments such as ‘changing thermostat’ or ‘drinking water’ which are in no way related to the actual label of the video. In those cases, modelling only the long-term temporal information in the video could lead to inaccurate results. Hence, it is important that we allow the proposed deep network to learn the non-temporal features too. We analyze videos from multiple datasets and observe that a simple classifier trained on ‘frame-level’ features (the definition of a frame can vary according to the features) can give reasonable accuracy, especially when videos contain unrelated temporal segments. Please refer to Section \[sec:discussion\_results\] for more experimental results on this. 
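The frame-level strategy just described can be sketched as follows (our illustrative helper, not the exact classifiers used in the experiments): run any per-frame classifier and average its per-frame class scores over the whole sequence.

```python
import numpy as np

def sequence_score(frame_scores):
    """Average per-frame classification scores over all time steps.

    frame_scores: (T, C) array of per-frame class confidences for one
    sequence; returns the (C,) averaged scores and the predicted label."""
    avg = np.asarray(frame_scores, dtype=float).mean(axis=0)
    return avg, int(avg.argmax())
```

For example, per-frame scores `[[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]` average to `[0.6, 0.4]`, so the sequence is assigned label 0 even though one frame votes otherwise.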
Since our objective is to propose a generic deep network that can work with any kind of multimodal information, we add additional components to GeThR-Net, which explicitly model the modality-specific non-temporal information. During training, for each modality $m$, we train a classifier where the set $\{x^m_{t}\}$, $\forall t$, is used as the set of training examples corresponding to the class of the multimodal signal. At test time, for a given sequence, the predictions across all the time-steps are averaged to obtain the classifier confidence scores corresponding to all of the classes. In this paper, we have explored four different modalities: appearance, short-term motion, audio (spectrogram and MFCC) and skeleton. For appearance, short-term motion and audio-spectrogram, we use fine-tuned CNNs, and for audio-MFCC and skeleton, we use SVMs as the non-temporal classifiers. Combination ----------- There are in total $M+1$ components in GeThR-Net, where the first one is the temporally hybrid LSTM network and the remaining $M$ are the non-temporal modality-specific classifiers corresponding to each modality. Once we independently train these $M+1$ classifiers, their prediction scores are combined and a single class label for each multimodal temporal sequence is predicted. We use a validation dataset to determine the relevant weights corresponding to each of the $M+1$ components. Experiments {#sec:exp_results} =========== Our goal is to demonstrate that the proposed GeThR-Net can be effectively applied to any kind of multimodal fusion. To achieve that, we perform a thorough experimental evaluation and provide the details of the experimental results in this section. Dataset Details --------------- The dataset details are provided in this subsection. **UCF-101 [@soomro2012ucf101]:** UCF-101 is an action recognition dataset containing realistic action videos from YouTube. The dataset has 13,320 videos annotated into 101 different action classes. 
The average video length in this dataset is 6-7 seconds. The dataset possesses various challenges and diversity in terms of large variations in camera motion, object appearance and pose, cluttered background, illumination, viewpoint, etc. We evaluate the performance on this dataset following the standard protocol [@wu2015modeling; @soomro2012ucf101] by reporting the mean classification accuracy across three training and testing splits. We use the appearance and short-term motion modalities for this dataset [@SimonyanZ14; @wu2015modeling]. **CCV [@jiang2011consumer]:** The Columbia Consumer Videos (CCV) dataset has 9,317 YouTube videos distributed over 20 different semantic categories. The dataset has events like ‘baseball’, ‘parade’, ‘birthday’, ‘wedding ceremony’, scenes like ‘beach’, ‘playground’, etc., and objects like ‘cat’, ‘dog’, etc. The average video length in this dataset is 80 seconds. For our experiments, we have used 7,751 videos (3,851 for training and 3,900 for testing) as the remaining videos are not available on YouTube presently. In this dataset, the performance is measured by average precision (AP) for each class and the overall measure is given by mAP (mean average precision over the 20 categories). In this dataset, we use three different modalities, i.e., appearance, short-term motion and audio. **Multimodal Gesture Dataset [@escalera2013multi] (MMG):** The ChaLearn-2013 multimodal gesture recognition dataset is a large video database of 13,858 gestures from a lexicon of 20 Italian gesture categories. The focus of the dataset is on user-independent multiple gesture learning. The dataset has RGB and depth images of the videos, user masks, a skeletal model, and the audio information (utterance of the corresponding gesture by the actor), which are synchronous with the gestures performed. The dataset has 393 training, 287 validation, and 276 testing sequences. Each sequence is of duration between 1-2 minutes and contains 8-20 gestures. 
Furthermore, the test sequences also contain ‘distracter’ (out of vocabulary) gestures apart from the 20 main gesture categories. For this dataset, we use the audio and skeleton modalities for fusion because some of the top-performing methods [@escalera2013multi] on this dataset also used these two modalities. The loose temporal boundaries of the gestures in a sequence are available during the training and validation phases; however, at test time, the goal is also to predict the correct order of gestures within the sequence along with the gesture labels. The final evaluation is defined in terms of the edit distance (insertion, deletion, or substitution) between the ground truth sequence of labels and the predicted sequence of labels. The overall score is the sum of the edit distances for all testing videos, divided by the total number of gestures in all the testing videos [@escalera2013multi]. Modality Specific Feature Extraction {#modality_specific_feature} ------------------------------------ In this section, we describe the feature extraction methods for the different modalities used in this paper across the three datasets: appearance, short-term motion, audio, and skeleton. - [**Appearance Features:**]{} We adopted the VGG-16 [@simonyan2014very] architecture to extract the appearance features. In this architecture, we change the number of neurons in the fc7 layer from 4096 to 1024 to get a compressed, lower-dimensional representation of an input. We finetune the final three fully connected layers (fc6, fc7, and fc8) of the network pretrained on ImageNet using the frames of the training videos. The activations of the fc7 layer are taken as the visual representation of the frame provided as input. While finetuning, we use minibatch stochastic gradient descent with a fixed momentum of 0.9. The input size of the frame to our model is 224 $\times$ 224 $\times$ 3. Simple data augmentations such as cropping and mirroring are also done [@jia2014caffe]. 
We adopt a dropout ratio of 0.5. The initial learning rate is set to 0.001 for fc6, and 0.01 for the fc7 and fc8 layers, as the weights of the last two layers are learned from scratch. The learning rate is reduced by a factor of 10 after every 10,000 iterations. - [**Short-Term Motion Features:**]{} To extract these features, we adopted the method proposed in the recent two-stream CNN paper [@SimonyanZ14]. This method stacks the optical flows computed between pairs of adjacent frames over a time window and provides them as input to a CNN. We used the same VGG-16 architecture (as above) with 1024 neurons in the fc7 layer, pre-trained on ImageNet, for the extraction of short-term motion features. However, unlike the previous case (where the input to the model was an RGB image comprising three channels), the input to this network is a 10-frame stacking of optical flow fields (x and y directions), and thus the convolution filters in the first layer are different from those of the appearance network. We adopt a high dropout rate of 0.8 and set the initial learning rate to 0.001 for all the layers. The learning rate is reduced by a factor of 10 after every 10,000 iterations. - [**Audio Features:**]{} We use two different kinds of feature extraction methods for the audio modality. - [**Spectrogram Features:**]{} In this method, we extract spectrogram features from the audio signal using a convolutional neural network [@van2013deep]. We divide the video into multiple overlapping 1-second clips and then apply the Short Time Fourier Transform to convert each one-second 1-d audio signal into a 2-D image (namely a log-compressed mel-spectrogram with 128 components), with the horizontal and vertical axes being the time and frequency scales respectively. The features are extracted from these spectrogram images by providing them as input to a CNN. In this case, we use the AlexNet [@krizhevsky2012imagenet] architecture and the network was pre-trained on ImageNet. 
We finetune the final three layers of the network on the spectrogram images of the training videos to learn ‘spectrogram-discriminative’ CNN features. We also change the number of nodes in the fc7 layer to 1024 and use the activations of the fc7 layer as the representation of a spectrogram image. The learning rate and dropout parameters are the same as in the appearance feature extraction case. - [**MFCC Features:**]{} We use MFCC features for the MMG dataset. The spectrogram-based CNN features were not used for this dataset as the temporal extent of each gesture is very short (1-2 seconds), making it difficult to extract multiple spectrograms along the temporal dimension. In this method, the speech signal of a gesture is analyzed using a 20ms Hamming window with a fixed frame rate of 10ms. Our feature consists of 12 Mel Frequency Cepstral Coefficients (MFCCs) along with the log energy ($MFCC_0$) and their first- and second-order delta values to capture the spectral variation. We concatenated 5 adjacent frames together in order to adhere to the 20fps of the videos in the MMG dataset. Hence, we have a feature of dimension $39 \times 5 = 195$ for each frame of the video. The data was also normalized such that each of the extracted features (coefficients, energy and derivatives) has zero mean and unit variance. - [**Skeleton Features:**]{} We use skeleton features for the MMG dataset. We employ the feature extraction method proposed in  [@wu2014multimodal; @yang2012eigenjoints] to characterize the action information, which includes posture, motion and offset features. Out of the 20 skeleton joint locations, we use only the 9 upper-body joints as they are the most discriminative for recognizing gestures. Methods Compared {#sec:methods_compared} ---------------- ![image](figure_baseline){width="0.6\linewidth"} To establish the efficacy of the proposed approach, we compare GeThR-Net with several baselines. 
The baselines were carefully designed to cover several temporal and non-temporal feature fusion methods. We provide the architectural details of these baselines in Figure \[fig:figure\_baseline\] for easy understanding of their differences. 1. [**NonTemporal-M:**]{} In this baseline, we train modality specific non-temporal models and predict label of a temporal sequence based on the average over all predictions across time. For appearance, short-term motion and audio spectrogram, we use CNN features (Section \[modality\_specific\_feature\]) followed by a softmax layer for classification. For audio MFCC and Skeleton, we use the features extracted using the methods described in \[modality\_specific\_feature\] followed by SVM classification. Multimodal fusion is not performed for label prediction in these baselines. 2. [**Temporal-M:**]{} For this baseline, we feed the modality specific features (as described in the last subsection), to LSTM networks for the temporal modelling and label prediction. Here also, features from multiple modalities are not fused for classification. 3. [**NonTemporal-AM (all modality combined):**]{} In this baseline, the outputs from the modality specific non-temporal baselines (CNN/SVM) are linearly combined for classification. The combination weights are automatically learned from validation datasets. 4. [**Temporal-AM (late fusion, all modality combined):**]{} Here also, the outputs from the modality specific temporal baselines (LSTMs) are linearly combined for classification. This is a late fusion approach. 5. [**TemporalEtoE-AM (late fusion, all modality combined):**]{} In this baseline, we add a linear layer on top of the modality specific temporal baselines and use an end-to-end training approach for learning the weights of the combination layer. This is also a late fusion approach. 6. 
[**Temporal-AM (early fusion, all modality combined):**]{} Features from multiple modalities are linearly combined and then forward propagated through an LSTM for classification. This is an early fusion approach. 7. [**Temporal-AM+NonTemporal-AM (all modality combined):**]{} In this baseline, the outputs from all the modality-specific temporal and non-temporal baselines are combined for the final label prediction. Here also, we use a validation dataset for determining the optimal weights corresponding to each of these components. 8. [**TemporallyHybrid-AM (proposed, all modality combined):**]{} This method uses only the temporally hybrid component of the proposed approach. The non-temporal components’ outputs are not used. This network is completely trained in an end-to-end fashion (see the temporal component in Figure \[fig:pipeline\]). 9. [**GeThR-Net:**]{} This is the proposed approach (see Figure \[fig:pipeline\]). Implementation Details ---------------------- We used an initial learning rate of 0.0002 for all LSTM networks. It is reduced by a factor of 0.9 every epoch, starting from the 6th epoch. We set the dropout rate to 0.3. For the baseline methods of temporal modelling, Temporal-M, Temporal-AM and TemporalEtoE-AM, we tried different combinations of the number of hidden layers and the number of units in each layer, and chose the one which led to the optimal performance on the validation set. Since the feature dimension is high (1024) in the UCF-101 and CCV datasets, the number of units in each layer is varied from 256 to 768 in intervals of 32, while for MMG, it is varied from 64 to 512 in the same intervals. The number of layers in the baselines was varied between 1 and 3 for all of the datasets. 
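For concreteness, a toy forward pass of the three-layer temporally hybrid component can be sketched as follows. This is our own plain-numpy illustration: tanh recurrences stand in for the LSTM blocks, so only the shapes and data flow (per-modality recurrence, sigmoid fusion, combined recurrence, softmax) match the description in the Proposed Approach section, not the exact gating or the trained hyperparameters.

```python
import numpy as np

def rnn_layer(xs, W_in, W_rec, b):
    """Simple tanh recurrence standing in for an LSTM layer."""
    h, hs = np.zeros(W_rec.shape[0]), []
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h + b)
        hs.append(h)
    return np.array(hs)                        # shape (T, hidden)

def temporally_hybrid_forward(xs_per_modality, params):
    # First layer: one recurrent layer per modality (no cross-modal mixing).
    hs = [rnn_layer(xs, *params['mod'][m])
          for m, xs in enumerate(xs_per_modality)]
    # Second layer: concatenate and fuse with a sigmoid, per time step.
    z = np.concatenate(hs, axis=1)             # z_t = (h^1_t, ..., h^M_t)
    W_z, b_z = params['fuse']
    p = 1.0 / (1.0 + np.exp(-(z @ W_z.T + b_z)))
    # Third layer: combined recurrence over fused features, then softmax.
    hc = rnn_layer(p, *params['comb'])
    W_o, b_o = params['out']
    logits = hc @ W_o.T + b_o
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)    # (T, classes) confidences
```

The `params` dictionary (our naming) holds per-modality recurrence weights under `'mod'`, the fusion weights under `'fuse'`, and the combined-layer and output weights under `'comb'` and `'out'`.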
For the proposed temporally hybrid network (TemporallyHybrid-AM) component, the number of units in the First-layer LSTM corresponding to each modality, the number of units in the linear Second-layer and the number of units in the Third-layer multimodal LSTM are also chosen based upon the performance on the validation dataset. For the UCF-101 dataset, the First-layer has 576 units for both the appearance and short-term motion modalities. The Second-layer has 768 units and the Third-layer has 448 units. For the CCV dataset, all three modalities, appearance, short-term motion and audio, have 512 units in the First-layer. In CCV, the Second-layer has 896 units and the Third-layer has 640 units. For the MMG dataset, the First-layer has 256 units for the skeleton modality and 192 units for the audio modality. The Second-layer has 384 units and the Third-layer has 256 units. Note that these parameters differ across the datasets due to the variation in the input feature size and the inherent complexity of the datasets. 

  Dataset   Modalities Used
  --------- ----------------------------------------------------
  UCF-101   Appearance (M1), Short-term Motion (M2)
  CCV       Appearance (M1), Short-term Motion (M2), Audio (M3)
  MMG       Audio (M1), Skeleton (M2)

  : Comparison of GeThR-Net with baseline methods on UCF-101, CCV and Multimodal Gesture recognition (MMG) dataset. UCF-101: M1 is appearance, M2 is short-term motion and classification accuracy is reported. CCV: M1 is appearance, M2 is short-term motion, M3 is audio and mean average precision (mAP) is reported. MMG: M1 is audio, M2 is skeleton and normalized edit distance is reported. 
  Methods               UCF-101        CCV            MMG
  --------------------- -------------- -------------- ---------------
  NonTemporal-M1        76.3           76.7           0.988
  NonTemporal-M2        86.8           57.3           0.782
  NonTemporal-M3        -              30.3           -
  Temporal-M1           76.6           71.7           0.284
  Temporal-M2           85.5           55.1           0.361
  Temporal-M3           -              28.5           -
  NonTemporal-AM        89.9           78.5           0.776
  TemporalEtoE-AM
  TemporallyHybrid-AM   89.0           74.0           [**0.152**]{}
  GeThR-Net             [**91.1**]{}   [**79.3**]{}   [**0.152**]{}

  : Comparison of GeThR-Net with baseline methods on UCF-101, CCV and Multimodal Gesture recognition (MMG) dataset. UCF-101: M1 is appearance, M2 is short-term motion and classification accuracy is reported. CCV: M1 is appearance, M2 is short-term motion, M3 is audio and mean average precision (mAP) is reported. MMG: M1 is audio, M2 is skeleton and normalized edit distance is reported.

\[tab:results\] Discussion on Results {#sec:discussion_results} --------------------- In this section, we compare GeThR-Net with the various baseline methods (Section \[sec:methods\_compared\]) and several recent state-of-the-art methods on three different datasets. The results corresponding to all the baselines and the proposed approach are summarized in Table \[tab:results\]. In the first two blocks of the table, results from individual modalities are shown using the temporal and non-temporal components. In the next three blocks, results for different fusion strategies across modalities are shown for both the temporal and non-temporal components. In the final block of the table, results obtained from the proposed temporally hybrid component and GeThR-Net are shown. - [**UCF-101 [@soomro2012ucf101]:**]{} For UCF-101, we report the test video classification accuracy. GeThR-Net achieves an absolute improvement of 3.1%, 2.7% and 4.6% over the Temporal-AM (late fusion), TemporalEtoE-AM (late fusion) and Temporal-AM (early fusion) baselines respectively. 
This empirically shows that the proposed approach is significantly better at capturing the complementary temporal aspects of different modalities compared to the late and early fusion based methods. GeThR-Net also gives an absolute improvement of 0.9% over a strong baseline method that combines temporal and non-temporal aspects of different modalities (Temporal-AM+NonTemporal-AM). This further establishes the efficacy of the proposed architecture. We also compare the results produced by GeThR-Net with several recent papers which reported results on UCF-101 (see Table \[tab:results\_ucf-101\]). Out of the seven approaches we compare with, we are better than five of them and comparable to two ([@WangXW015] and [@wu2015modeling]). As pointed out earlier, the goal of this paper is to develop a general deep learning framework which can be used for multimodal fusion in different kinds of tasks. The results on UCF-101 clearly show that GeThR-Net can be effectively used for the short action recognition task (average duration 6-7 seconds). - [**CCV [@jiang2011consumer]:**]{} We also perform experiments on the CCV dataset to show that GeThR-Net can also be used for longer action recognition (average duration 80 seconds). On this dataset, we report the mean average precision (on a scale of 0-100) for all the algorithms which we compare. On CCV also, GeThR-Net is better than the Temporal-AM (late fusion), TemporalEtoE-AM (late fusion) and Temporal-AM (early fusion) baselines by an absolute mAP of 4.3, 6.8 and 6.2 respectively. However, GeThR-Net performs comparably (mAP of 79.3 compared to 79.2) to a strong baseline method that combines temporal and non-temporal aspects of different modalities (Temporal-AM+NonTemporal-AM). We also wanted to compare GeThR-Net with several recent approaches which reported results on the CCV dataset. However, a fair comparison was not possible because several videos from CCV were unavailable from YouTube. 
We used only 7,751 videos for training and testing as opposed to the 9,317 videos in the original dataset. In spite of that, to get an approximate idea of how GeThR-Net performs compared to these methods, we provide some comparisons. The mAPs reported on CCV by some of the recent methods are: 70.6 [@Wu2014], 64.0 [@ye2012robust], 63.4 [@Ma2014], 60.3 [@Xu2013], 68.2 [@Liu2013], 64.0 [@MVA:audiovisual] and 83.5 [@wu2015modeling]. We perform better (mAP of 79.3) than six of these methods. - [**MMG [@escalera2013multi]:**]{} For this dataset, we report the normalized edit distance (lower is better) [@escalera2013multi] corresponding to each method. The normalized edit distance obtained by GeThR-Net is lower than that of the other multimodal baselines, Temporal-AM (late fusion), TemporalEtoE-AM (late fusion), Temporal-AM (early fusion) and Temporal-AM+NonTemporal-AM, by 0.004, 0.003, 0.038 and 0.003 respectively. We are also significantly better than the modality-specific temporal baselines, e.g., GeThR-Net gives a normalized edit distance of only 0.152 compared to 0.284 and 0.361 produced by Temporal-M1 (audio) and Temporal-M2 (skeleton) respectively. The results on this dataset demonstrate that GeThR-Net performs well in fusing multimodal information from audio-MFCC and skeleton. The edit distance obtained by GeThR-Net is one of the top three edit distances reported in the ChaLearn-2013 multimodal gesture recognition competition [@escalera2013multi]. 

  IDT + FV [@Wang_2013_ICCV]   IDT + HSV [@PengWWQ14]   Two-stream [@SimonyanZ14]   LSTM [@NgHVVMT15]   TDD + FV [@WangQT15a]   Two-stream2 [@WangXW015]   Fusion [@wu2015modeling]   GeThR-Net
  ---------------------------- ------------------------ --------------------------- ------------------- ----------------------- -------------------------- -------------------------- -----------
  85.9                         87.9                     88.0                        88.6                90.3                    91.4                       91.3                       91.1

  : Comparison of GeThR-Net with state-of-the-art methods on UCF-101. 
\[tab:results\_ucf-101\] From the results on these datasets, it is clear that GeThR-Net is effective in fusing different kinds of multimodal information and is applicable to different end-tasks such as short action recognition, long action recognition and gesture recognition. This empirically shows the generalizability of the proposed deep network. Conclusion ========== In this paper, we propose a novel deep neural network called GeThR-Net for multimodal temporal information fusion. GeThR-Net has a temporally hybrid recurrent neural network component that models modality-specific temporal dynamics as well as the temporal dynamics in a multimodal feature space. The other components in GeThR-Net are used to capture the non-temporal information. We perform experiments on three different action and gesture recognition datasets and show that GeThR-Net performs well for any general multimodal fusion task. The experiments are performed on four different modalities, with at most three modalities fused at a time. However, GeThR-Net can be used for any kind of modality fusion, without any upper bound on the number of modalities that can be combined.
--- abstract: 'We give necessary and sufficient conditions for a Banach space $X$ having the Radon-Nikodým property in terms of polynomial spline sequences.' address: 'Institute of Analysis, Johannes Kepler University Linz, Austria, 4040 Linz, Altenberger Strasse 69' author: - Markus Passenbrunner bibliography: - 'radon.bib' title: 'Spline characterizations of the Radon-Nikodým property' --- Introduction and Preliminaries ============================== The aim of this paper is to prove new characterizations of the Radon-Nikodým property for Banach spaces in terms of polynomial spline sequences in the spirit of the corresponding martingale results (see Theorem \[thm:rnp\_dentable\]). We thereby continue the line of research about extending martingale results to also cover (general) spline sequences that is carried out in [@Shadrin2001; @PassenbrunnerShadrin2014; @Passenbrunner2014; @MuellerPassenbrunner2017; @Passenbrunner2017; @KeryanPassenbrunner2017]. We refer to the book [@DiestelUhl1977] by J. Diestel and J.J. Uhl for basic facts on martingales and vector measures; here, we only give the necessary notions to define the Radon-Nikodým property below. Let $(\Omega,\mathcal A)$ be a measure space and $X$ a Banach space. Every $\sigma$-additive map $\nu:\mathcal A\to X$ is called a *vector measure*. The *variation* $|\nu|$ of $\nu$ is the set function $$|\nu|(E) = \sup_\pi \sum_{A\in\pi} \|\nu(A)\|_X,$$ where the supremum is taken over all partitions $\pi$ of $E$ into a finite number of pairwise disjoint members of $\mathcal A$. If $\nu$ is of bounded variation, i.e., $|\nu|(\Omega)<\infty$, the variation $|\nu|$ is $\sigma$-additive. If $\mu:\mathcal A \to [0,\infty)$ is a measure and $\nu:\mathcal A\to X$ is a vector measure, $\nu$ is called *$\mu$-continuous* if $\lim_{\mu(E)\to 0} \nu(E)=0$ for all $E\in\mathcal A$. 
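A standard example of such a vector measure, added here for orientation: every Bochner integrable $f:\Omega\to X$ induces a $\mu$-continuous vector measure of bounded variation by indefinite integration, $$\nu_f(A) = \int_A f{\,\mathrm{d}}\mu,\qquad A\in\mathcal A, \qquad\text{with}\qquad |\nu_f|(\Omega) = \int_\Omega \|f\|_X {\,\mathrm{d}}\mu < \infty.$$ The Radon-Nikodým property defined below asks precisely for the converse: every $\mu$-continuous vector measure of bounded variation should arise in this way.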
In the following, $L^p_X = L^p_X(\Omega,\mathcal A,\mu)$ will denote the Bochner-Lebesgue space of $p$-integrable Bochner measurable functions $f:\Omega\to X$ and if $X=\mathbb{R}$, we simply write $L^p$ instead of $L^p_\mathbb R$. A Banach space $X$ has the *Radon-Nikodým property (RNP)* if for every measure space $(\Omega,\mathcal A)$, for every positive measure $\mu$ on $(\Omega,\mathcal A)$ and for every $\mu$-continuous vector measure $\nu$ of bounded variation, there exists a function $f\in L^1_X(\Omega,\mathcal A,\mu)$ such that $$\nu(A) = \int_A f{\,\mathrm{d}}\mu,\qquad A\in\mathcal A.$$ Additionally, recall that a sequence $(f_n)$ in $L^1_X$ is *uniformly integrable* if the sequence $(\|f_n\|_X)$ is bounded in $L^1$ and, for any $\varepsilon>0$, there exists $\delta>0$ such that $$\mu(A) < \delta \quad \Longrightarrow \quad \sup_n \int_A \|f_n\|_X {\,\mathrm{d}}\mu < \varepsilon, \qquad A\in \mathcal A.$$ We have the following characterization of the Radon-Nikodým property in terms of martingales, see e.g. [[@Pisier2016 p. 50]]{}. \[thm:rnp\_dentable\] For any $p\in (1,\infty)$, the following statements about a Banach space $X$ are equivalent: (i) $X$ has the Radon-Nikodým property (RNP), (ii) every $X$-valued martingale bounded in $L^1_X$ converges almost surely, (iii) every uniformly integrable $X$-valued martingale converges almost surely and in $L^1_X$, (iv) every $X$-valued martingale bounded in $L^p_X$ converges almost surely and in $L^p_X$. For the above equivalences, it is enough to consider $X$-valued martingales defined on the unit interval with respect to Lebesgue measure and the dyadic filtration (cf. [@Pisier2016 p. 54]). Now, we describe the general framework that allows us to replace properties (ii)–(iv) with their spline versions. 
A sequence of $\sigma$-algebras $(\mathcal F_n)_{n\geq 0}$ in $[0,1]$ is called an *interval filtration* if $(\mathcal F_n)$ is increasing and each $\mathcal F_n$ is generated by a finite partition of $[0,1]$ into intervals of positive Lebesgue measure. For an interval filtration $(\mathcal F_n)$, we define $\Delta_n := \{\partial A: A \text{ atom of }\mathcal F_n\}$ to be the set of all endpoints of atoms in $\mathcal F_n$. For a fixed positive integer $k$, set $$\begin{aligned} S_n^{(k)} = \{ f\in C^{k-2}[0,1]:\quad &f \text{ is a polynomial of order $k$} \text{ on each atom of } \mathcal F_n \}, \end{aligned}$$ where $C^n[0,1]$ denotes the space of $n$ times continuously differentiable, real valued functions on $[0,1]$ and the order $k$ of a polynomial $p$ is related to the degree $d$ of $p$ by the formula $k=d+1$. The finite dimensional space $S_n^{(k)}$ admits a very special basis $(N_i)$ of non-negative and uniformly bounded functions, called B-spline basis, that forms a partition of unity, i.e. $\sum_i N_i(t) = 1$ for all $t\in [0,1]$, and the support of each $N_i$ consists of the union of $k$ neighboring atoms of $\mathcal F_n$. If $n\geq m$ and $(N_i),(\tilde{N}_i)$ are the B-spline bases of $S_n^{(k)}$ and $S_m^{(k)}$ respectively, we can write each $f\in S_m^{(k)}$ as $f= \sum a_i \tilde{N}_i = \sum b_i N_i$ for some coefficients $(a_i),(b_i)$ since $S_m^{(k)}\subset S_n^{(k)}$. Those coefficients are related to each other in the way that each $b_i$ is a convex combination of the coefficients $(a_i)$. For more information on spline functions, see [@Schumaker2007]. Additionally, we let $P_n^{(k)}$ be the orthogonal projection operator onto $S_n^{(k)}$ with respect to $L^2[0,1]$ equipped with the Lebesgue measure $|\cdot|$. 
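To make the simplest case concrete, here is a small numerical illustration of our own (not part of the paper's argument): for $k=1$, $S_n^{(1)}$ consists of functions that are constant on each atom of $\mathcal F_n$, and the $L^2$ projection onto it simply replaces $f$ by its average over each atom; on a uniform grid this reads:

```python
import numpy as np

def project_piecewise_constant(f_vals, atom_ids):
    """Discretized L^2([0,1]) projection onto S_n^(1), the piecewise
    constants over the atoms of an interval partition: replace f by its
    average over each atom (i.e. conditional expectation w.r.t. F_n).

    f_vals: samples of f on an equispaced grid; atom_ids: for each grid
    point, the index of the atom containing it."""
    f_vals = np.asarray(f_vals, dtype=float)
    atom_ids = np.asarray(atom_ids)
    out = np.empty_like(f_vals)
    for a in np.unique(atom_ids):
        mask = atom_ids == a
        out[mask] = f_vals[mask].mean()   # average over atom a
    return out
```

Applying the projection twice gives the same result as applying it once, reflecting that $P_n^{(1)}$ is a projection operator.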
Since each space $S_n^{(k)}$ is finite dimensional and B-splines are uniformly bounded, $P_n^{(k)}$ can be extended to $L^1$ and $L^1_X$ satisfying $P_n^{(k)}(f\otimes x) = (P_n^{(k)}f) \otimes x$ for all $f\in L^1$ and $x\in X$, where $f\otimes x$ denotes the function $t\mapsto f(t)x$. Moreover, by $S_{n}^{(k)}\otimes X$, we denote the space ${\operatorname{span}}\{f\otimes x : f\in S_n^{(k)}, x\in X\}$. Let $X$ be a Banach space and $(f_n)_{n\geq 0}$ be a sequence of functions in $L^1_X$. Then, $(f_n)$ is an ($X$-valued) *$k$-martingale spline sequence adapted to $(\mathcal F_n)$* if $(\mathcal F_n)$ is an interval filtration and $$P_n^{(k)} f_{n+1} =f_n,\qquad n\geq 0.$$ This definition resembles the definition of martingales with the conditional expectation operator replaced by $P_n^{(k)}$. For splines of order $k=1$, i.e. piecewise constant functions, the operator $P_n^{(k)}$ is even the conditional expectation operator with respect to the $\sigma$-algebra $\mathcal F_n$. Many of the results that are true for martingales (such as Doob’s inequality, the martingale convergence theorem or Burkholder’s inequality) in fact carry over to $k$-martingale spline sequences corresponding to an *arbitrary* interval filtration, as the following two theorems show: \[thm:splines\] For any positive integer $k$, any interval filtration $(\mathcal F_n)$ and any Banach space $X$, the following assertions are true: (i) \[it:splines1\] there exists a constant $C_k$ only depending on $k$ such that $$\sup_n\| P_n^{(k)} : L^1_X \to L^1_X \| \leq C_k,$$ (ii) \[it:splines2\] there exists a constant $C_k$ only depending on $k$ such that for any $X$-valued $k$-martingale spline sequence $(f_n)$ and any $\lambda>0$, $$|\{ \sup_n \|f_n\|_X > \lambda \}| \leq C_k \frac{\sup_n\|f_n\|_{L^1_X}}{ \lambda},$$ (iii) \[it:splines3\] for all $p\in (1,\infty]$ there exists a constant $C_{p,k}$ only depending on $p$ and $k$ such that for all $X$-valued $k$-martingale spline sequences $(f_n)$, 
$$\big\| \sup_n \|f_n\|_X \big\|_{L^p} \leq C_{p,k} \sup_n\|f_n\|_{L^p_X},$$ (iv) \[it:splines4\] if $X$ has the RNP and $(f_n)$ is an $L^1_X$-bounded $k$-martingale spline sequence, $(f_n)$ converges a.s. to some $L^1_X$-function. Assertion (i) is proved in [@Shadrin2001] and (ii)–(iv) are proved (effectively) in [@PassenbrunnerShadrin2014; @MuellerPassenbrunner2017]. \[thm:splines5\] For all $p\in(1,\infty)$ and all positive integers $k$, scalar-valued $k$-spline-differences converge unconditionally in $L^p$, i.e. for all $f\in L^p$, $$\big\|\sum_n \pm (P_n^{(k)} - P_{n-1}^{(k)})f \big\|_{L^p} \leq C_{p,k} \|f\|_{L^p},$$ for some constant $C_{p,k}$ depending only on $p$ and $k$. The martingale version of Theorem \[thm:splines5\] is Burkholder’s inequality, which, in the vector-valued setting, holds precisely for UMD-spaces $X$ (by the definition of UMD-spaces). It is an open problem whether Theorem \[thm:splines5\] holds for UMD-valued $k$-martingale spline sequences in this generality, but see [@KamontMuller2006] for a special case. For more information on UMD-spaces, see e.g. [@Pisier2016]. Let $X$ be a Banach space, $(\mathcal F_n)$ an interval filtration and $k$ a positive integer. Then, $X$ has the *$((\mathcal F_n),k)$-martingale spline convergence property (MSCP)* if all $L^1_X$-bounded $k$-martingale spline sequences adapted to $(\mathcal F_n)$ admit a limit almost surely. In this work, we prove the following characterization of the Radon-Nikodým property in terms of $k$-martingale spline sequences. \[thm:char\] Let $X$ be a Banach space, $(\mathcal F_n)$ an interval filtration, $k$ a positive integer and $V$ the set of all accumulation points of $\cup_n \Delta_n$. 
Then, $((\mathcal F_n),k)$-$\operatorname{MSCP}$ characterizes RNP if and only if $|V|>0$, i.e., $$|V|>0\Longleftrightarrow\big( X \text{ has RNP} \Longleftrightarrow X \text{ has } ((\mathcal F_n),k)\text{-MSCP}\big).$$ If $|V|>0$, it follows from Theorem \[thm:splines\] that RNP implies $( (\mathcal F_n), k)$-MSCP for any positive integer $k$ and any interval filtration $(\mathcal F_n)$. The reverse implication for $|V|>0$ is a consequence of Theorem \[thm:general\]. We even have that if $X$ does not have RNP, we can find an $(\mathcal F_n)$-adapted $k$-martingale spline sequence that fails to converge at every point $t\in E$ for a subset $E\subset V$ with $|E|=|V|$. We simply have to choose $E:=\limsup E_n$ with $(E_n)$ being the sets from Theorem \[thm:general\]. If $|V|=0$, it is proved in [@MuellerPassenbrunner2017] that any Banach space $X$ has $((\mathcal F_n),k)$-MSCP. We also have the following spline analogue of Theorem \[thm:rnp\_dentable\]: \[thm:rnpspline\] For any positive integer $k$ and any $p\in (1,\infty)$, the following statements about a Banach space $X$ are equivalent: (i) \[it:rnp\] $X$ has the Radon-Nikodým property, (ii) \[it:L1as\] every $X$-valued $k$-martingale spline sequence bounded in $L^1_X$ converges almost surely, (iii) \[it:unif\] every uniformly integrable $X$-valued $k$-martingale spline sequence converges almost surely and in $L^1_X$, (iv) \[it:Lp\] every $X$-valued $k$-martingale spline sequence bounded in $L^p_X$ converges almost surely and in $L^p_X$. (i)$\Rightarrow$(ii): Theorem \[thm:splines\]. (ii)$\Rightarrow$(iii): clear. (iii)$\Rightarrow$(iv): if $(f_n)$ is a $k$-martingale spline sequence bounded in $L^p_X$ for $p>1$, then $(f_n)$ is uniformly integrable, therefore it has a limit $f$ (a.s. and in $L^1_X$), which, by Fatou’s lemma, is also contained in $L^p_X$. By Theorem \[thm:splines\](iii), $\sup_n \|f_n\|_X\in L^p$ and we can apply dominated convergence to obtain $\|f_n - f\|_{L^p_X}\to 0$. (iv)$\Rightarrow$(i): follows from Theorem \[thm:general\]. 
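For orientation, the simplest case $k=1$ of the objects appearing in these statements can be made concrete: there $P_n^{(1)}$ is the conditional expectation, i.e. averaging over the atoms of $\mathcal F_n$, and the martingale spline property $P_n^{(k)} f_{n+1}=f_n$ can be checked directly on a discretization. A minimal numerical sketch (our own discretization, not from the paper):

```python
import numpy as np

def project_k1(f, grid, breakpoints):
    """L^2-orthogonal projection of the sampled function `f` onto
    piecewise constant splines (order k = 1) for the interval partition
    with the given breakpoints: on each atom the projection is the
    average of f, i.e. the conditional expectation E[f | F_n]."""
    g = np.empty_like(f)
    for a, b in zip(breakpoints[:-1], breakpoints[1:]):
        mask = (grid >= a) & (grid < b)
        g[mask] = f[mask].mean()
    return g

# midpoint grid on [0,1] avoids endpoint bookkeeping
m = 1200
grid = (np.arange(m) + 0.5) / m
f = np.sin(2 * np.pi * grid) + grid**2

coarse = [0, 0.5, 1]               # atoms of F_0
fine = [0, 0.25, 0.5, 0.75, 1]     # atoms of F_1, a refinement of F_0

f1 = project_k1(f, grid, fine)     # "f_1" of a 1-martingale spline sequence
f0 = project_k1(f1, grid, coarse)  # P_0 f_1
```

Since the fine partition refines the coarse one, $P_0 f_1 = P_0 f$, which is exactly the martingale property $P_n^{(k)} f_{n+1} = f_n$ in this discretized setting.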
The rest of the article is devoted to the construction, for Banach spaces $X$ without RNP, of a suitable $X$-valued $k$-martingale spline sequence, adapted to an arbitrary given filtration $(\mathcal F_n)$, so that the associated martingale spline differences are bounded away from zero on a large set, which, more precisely, takes the following form: \[thm:general\] Let $X$ be a Banach space without RNP, $(\mathcal F_n)$ an interval filtration, $V$ the set of all accumulation points of $\cup_n \Delta_n$ and $k$ a positive integer. Then, there exists a positive number $\delta$ such that for all $\eta\in (0,1)$, there exist an increasing sequence of positive integers $(m_j)$, an $L_X^\infty$-bounded $k$-martingale spline sequence $(f_j)_{j\geq 0}$ adapted to $(\mathcal F_{m_j})$ with $f_j \in S_{m_j}^{(k)}\otimes X$, and a sequence $(E_n)$ of measurable sets $E_n\subset V$ with $|E_n|\geq (1-2^{-n}\eta)|V|$ so that for all $n\geq 1$ $$\|f_{n}(t) - f_{n-1}(t) \|_X \geq \delta,\qquad t\in E_n.$$ We will use the concept of dentable sets to prove Theorem \[thm:general\] and recall its definition: \[def:dentable\] Let $X$ be a Banach space. A subset $D\subset X$ is called *dentable* if for any $\varepsilon>0$ there is a point $x\in D$ such that $$x\notin \overline{\operatorname{conv}}\big(D\setminus B(x,\varepsilon)\big),$$ where $\overline{\operatorname{conv}}$ denotes the closure of the convex hull and where $B(x,\varepsilon)= \{y\in X : \|y-x\| < \varepsilon\}$. If $D$ is a bounded non-dentable set, then the closed convex hull $\overline{\operatorname{conv}}(D)$ is also bounded and non-dentable. Thus, we may assume that $D$ is convex. 
Moreover, we may also assume that each $x\in D$ can be expressed as a finite convex combination of elements of $D\setminus B(x,\delta)$ for some $\delta>0$: if $D\subset X$ is a convex set such that $x\in\overline{\operatorname{conv}}\big(D \setminus B(x,\delta)\big)$ for all $x\in D$, then the enlarged set $\widetilde{D} = D + B(0,\eta)$ is also convex and satisfies $$x\in\operatorname{conv}\big(\widetilde{D} \setminus B(x,\delta-\eta)\big),\qquad x\in\widetilde{D}.$$ The reason why we are able to use the concept of dentability in the proof of Theorem \[thm:general\] is the following geometric characterization of the RNP (see for instance [@DiestelUhl1977 p. 136]). \[thm:dentable\] For any Banach space $X$ we have that $X$ has the RNP if and only if every bounded subset of $X$ is dentable. We record the following (special case of the) basic composition formula for determinants (see for instance [@Karlin1968 p. 17]): \[lem:comp\] Let $(f_i)_{i=1}^n$ and $(g_j)_{j=1}^n$ be two sequences of functions in $L^2$. Then, $$\begin{aligned} \det \Big( \int_0^1 f_i(t) &g_j(t){\,\mathrm{d}}t \Big)_{i,j=1}^n \\ &=\int_{0\leq t_1 < \cdots < t_n\leq 1} \det ( f_i(t_\ell) )_{i,\ell=1}^n \cdot \det ( g_j(t_\ell))_{j,\ell=1}^n{\,\mathrm{d}}(t_1,\ldots,t_n). \end{aligned}$$ We also note the following simple lemma. \[lem:Gamma\] Let $I\subset [0,1]$ be an interval and $V$ an arbitrary measurable subset of $[0,1]$. 
Then, for all $\varepsilon_1,\varepsilon_2>0$, there exists a positive integer $n$ so that for the decomposition of $I$ into intervals $(A_\ell)_{\ell=1}^n$ with $\sup A_\ell \leq \inf A_{\ell+1}$ and $n|A_\ell\cap V| = |I\cap V|$ for all $\ell$, the index set $\Gamma = \{ 2 \leq \ell \leq n-1 : \max(|A_{\ell-1}|, |A_{\ell}|, |A_{\ell+1}|) \leq \varepsilon_1 \}$ satisfies $$\sum_{\ell\in \Gamma} |A_{\ell} \cap V| \geq (1-\varepsilon_2) |I\cap V|.$$ Construction of non-convergent spline sequences =============================================== In this section, we prove Theorem \[thm:general\]. In order to do that, we begin by fixing an interval filtration $(\mathcal F_n)$, the corresponding endpoints of atoms $(\Delta_n)$ and a positive integer $k$. For the space $S_n^{(k)}$, we will suppress the (fixed) index $k$ and write $S_n$ instead. We will apply the same convention to the corresponding projection operators $P_n = P_n^{(k)}$. We also let $V\subset[0,1]$ be the closed set of all accumulation points of $\cup_n \Delta_n$. The main step in the proof of Theorem \[thm:general\] consists of an inductive application of the construction of a suitable martingale spline difference in the following lemma: \[lem:moments\] Let $(x_j)_{j=1}^M$ be in the Banach space $X$, $\bar x\in S_N\otimes X$ for some non-negative integer $N$ such that $\bar x = \sum_{j=1}^M \alpha_j\otimes x_j$ with $\sum_{j=1}^M \alpha_j \equiv 1$, $\|x_j\|\leq 1$, $\alpha_j\in S_N$ having non-negative B-spline coefficients for all $j$ and let $I\subset [0,1]$ be an interval so that $|I\cap V| >0$. 
Then, for all $\varepsilon\in (0,1)$, there exists a positive integer $K$ and a function $g\in S_K\otimes X$ with the properties (i) \[eq:lem2-1\] $\int_I t^j g(t){\,\mathrm{d}}t = 0$ for all $j=0,\ldots,k-1$, (ii) \[eq:lem2-2\] ${\operatorname{supp}}g\subset \operatorname{int} I$, (iii) \[eq:lem2-3\] we have a splitting of the collection $\mathscr A = \{A\subset I: A \text{ is atom in }\mathcal F_K\}$ into $\mathscr A_1 \cup \mathscr A_2$ so that (a) if the functions $\alpha_j$ are all constant, then on each $J\in \mathscr A_1$, $\bar x + g$ is constant with a value in $\cup_i \{ x_i \}$; otherwise we still have that on each $J\in\mathscr A_1$, $\bar{x} + g$ is constant with a value in $\operatorname{conv}\{x_i : 1\leq i\leq M\}$, (b) $|\cup_{J\in\mathscr A_1} J\cap V| \geq (1-\varepsilon)|I\cap V|$, (c) on each $J\in \mathscr A_2$, $\bar x + g = \sum_\ell \lambda_\ell\otimes y_\ell$ for some functions $\lambda_\ell \in S_K$ having non-negative B-spline coefficients with $\sum_\ell \lambda_\ell\equiv 1$ and $y_\ell\in \operatorname{conv}\{x_j: 1\leq j\leq M\} + B(0,\varepsilon)$. The first step of the construction gives a function $g$ satisfying the desired conditions but only having mean zero instead of the vanishing moments in property (i). In the second step, we use this result to construct a function $g$ whose moments also vanish. Step 1: We start with the (simpler) construction of $g$ when the functions $\alpha_j$ are not constant and condition (a) has the form that on each $J\in\mathscr A_1$, $\bar x + g$ is constant with a value in $\operatorname{conv}\{x_i : 1\leq i\leq M\}$. First, decompose $I$ into intervals $(A_\ell)_{\ell=1}^n$ satisfying $n |A_\ell\cap V| = |I\cap V|$ with $\sup A_\ell \leq \inf A_{\ell+1}$ and $n\geq 4/\varepsilon$. Then, choose $K\geq N$ so large that $A_1,A_2,A_{n-1},A_n$ each contain at least $k+1$ atoms of $\mathcal F_K$. 
Denoting by $(N_j)$ the B-spline basis of $S_K$, we can write $$\alpha_\ell \equiv \sum_j \alpha_{\ell,j} N_j, \qquad \ell=1,\ldots,M$$ for some non-negative coefficients $(\alpha_{\ell,j})$. Define $$h_\ell \equiv \sum_{j : \cup_{i=2}^{n-1} A_i \cap {\operatorname{supp}}N_j\neq \emptyset} \alpha_{\ell,j} N_j.$$ Observe that ${\operatorname{supp}}h_\ell \subset \operatorname{int} I$ and $h_\ell \equiv \alpha_\ell$ on $\cup_{i=2}^{n-1} A_i$. Letting $\widetilde{x} = \sum \beta_\ell x_\ell$ for $ \beta_\ell = \int h_\ell/\big(\sum_j \int h_j\big) \in [0,1],$ we define $$g := - \sum_{\ell=1}^M h_\ell \otimes x_\ell + \Big(\sum_{j=1}^M h_j \Big) \otimes \widetilde{x}.$$ This is a function of the desired form when defining $\mathscr{A}_1 := \{A \subset \cup_{i=2}^{n-1} A_i : A\text{ is atom in }\mathcal F_K\}$ and $\mathscr{A}_2 = \mathscr{A}\setminus \mathscr{A}_1$, as we will now show by proving $\int g=0$ and properties (ii), (iii). The fact that $\int g=0$ follows from a simple calculation. Property (ii) is satisfied by the definition of the functions $h_\ell$. Property (a) follows from the fact that $\bar x(t) + g(t) = \tilde{x}\in\operatorname{conv}\{x_j : 1\leq j\leq M\}$ for $t\in \cup_{i=2}^{n-1} A_i$ since $h_\ell \equiv \alpha_\ell$ on that set for any $\ell=1,\ldots, M$. Since $|(A_1\cup A_2\cup A_{n-1}\cup A_n)\cap V|= 4|I\cap V|/n \leq \varepsilon|I\cap V|$, (b) also follows from the construction of $\mathscr A_1$. Since $$\bar x(t)+g(t) = \sum_{\ell=1}^M \big(\alpha_\ell(t) - h_\ell(t)\big) x_\ell + \Big(\sum_{j=1}^M h_j(t)\Big) \tilde{x},$$ $\tilde{x}\in \operatorname{conv}\{x_j : 1\leq j\leq M\}$, $h_\ell \leq \alpha_\ell$ and $\sum_\ell \alpha_\ell \equiv 1$, (c) is also proved. The next step is to construct the desired function $g$ when the functions $\alpha_j$ are assumed to be constant and (a) has the form that on each $J\in\mathscr A_1$, $\bar x + g$ is constant with a value in $\cup_i \{ x_i \}$. 
Here, the idea is to construct a function of the form $g(t) = \sum f_j(t) (x_j - \bar x)$ with $f_j\in S_K$ for some $K$ and $\int f_j \simeq C\alpha_j$ for all $j$ and some constant $C$ independent of $j$, in order to employ the assumption $\sum \alpha_j (x_j - \bar x)=0$, which implies $\int g = 0$. We begin this construction by successively choosing parameters $\varepsilon_3 \ll \varepsilon_1 \ll \tilde{\varepsilon}<\varepsilon$ obeying certain conditions depending on $\varepsilon$, $\bar x$, $(x_j)$, $(\alpha_j)$, $|I\cap V|$ and $|I|$. First, set $\tilde{\varepsilon} = \varepsilon |I\cap V|/(3|I|)>0$ and $$\label{eq:eps1} \varepsilon_1 = \frac{\varepsilon\tilde{\varepsilon}(1-\varepsilon/3) | I\cap V|}{72 M}.$$ Now, we apply Lemma \[lem:Gamma\] with the parameters $\varepsilon_1$ and $\varepsilon_2=\varepsilon/3$ to get a positive integer $n$ and a partition $(A_{\ell})_{\ell=1}^{n}$ of $I$ consisting of intervals with $n|A_\ell\cap V|=|I\cap V|$ for all $\ell=1,\ldots,n$ so that $$\Gamma = \{ 2 \leq \ell \leq n-1 : \max(|A_{\ell-1}|, |A_{\ell}|, |A_{\ell+1}|) \leq \varepsilon_1 \}$$ satisfies $$\label{eq:applemma} \big( 1 - \frac{\varepsilon}{3}\big)|I\cap V|\leq \sum_{\ell\in \Gamma} |A_{\ell} \cap V|.$$ Finally, we put $\varepsilon_3= \varepsilon_1/(2n)$. Next, for each $\ell=1,\ldots,n$, we choose a point $p_{\ell}\in \operatorname{int} A_{\ell}$ and an integer $K_\ell$ so that the intersection of $\operatorname{int} A_{\ell}$ and the $\varepsilon_3$-neighborhood $B(p_\ell,\varepsilon_3)$ of $p_{\ell}$ contains at least $k+1$ atoms of $\mathcal F_{K_\ell}$ to the left as well as to the right of $p_{\ell}$. This is possible since $|A_\ell\cap V|=|I\cap V|/n$ and $V$ is the set of all accumulation points of $\cup_j \Delta_j$. Then we set $K=\max_\ell K_\ell$ and let $u_\ell\in A_{\ell}$ be the leftmost point of $\Delta_K$ contained in $B(p_\ell,\varepsilon_3)\cap \operatorname{int} A_{\ell}$. 
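The decomposition supplied by Lemma \[lem:Gamma\], namely intervals $A_\ell$ with equal measure $|A_\ell\cap V| = |I\cap V|/n$, can be computed by inverting the nondecreasing cumulative measure $t\mapsto |[0,t]\cap V|$. A small sketch for $I=[0,1]$ and $V$ a finite union of intervals (hypothetical data, our own function names):

```python
import numpy as np

def equal_measure_breakpoints(v_intervals, n):
    """Breakpoints 0 = s_0 <= ... <= s_n = 1 splitting [0,1] into
    intervals A_l = (s_{l-1}, s_l] with equal measure |A_l ∩ V|,
    where V is the union of the disjoint intervals `v_intervals`."""
    v = sorted(v_intervals)
    total = sum(b - a for a, b in v)

    def cum(t):  # |[0, t] ∩ V|
        return sum(min(t, b) - a for a, b in v if t > a)

    points = [0.0]
    for j in range(1, n):
        target = j * total / n
        lo, hi = 0.0, 1.0
        for _ in range(60):  # bisection on the nondecreasing function cum
            mid = (lo + hi) / 2
            if cum(mid) < target:
                lo = mid
            else:
                hi = mid
        points.append((lo + hi) / 2)
    points.append(1.0)
    return points

V = [(0.1, 0.3), (0.6, 0.9)]          # |V| = 0.5
s = equal_measure_breakpoints(V, 5)   # five atoms, each with |A_l ∩ V| = 0.1
```

This only illustrates the existence of the partition; the selection of the index set $\Gamma$ of short intervals is then a matter of checking the lengths $|A_{\ell-1}|,|A_\ell|,|A_{\ell+1}|$.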
Similarly, let $v_\ell\in A_{\ell}$ be the rightmost point of $\Delta_K$ contained in $B(p_\ell,\varepsilon_3)\cap \operatorname{int} A_{\ell}$. Next, for $2\leq \ell\leq n-1$, we put $B_{\ell} := (v_{\ell-1}, u_{\ell+1}) \subset A_{\ell-1} \cup A_{\ell} \cup A_{\ell+1}$. Observe that the construction of $u_\ell$ and $v_\ell$ implies that $B_\ell \cap B_{j}=\emptyset$ for all $|\ell - j| \geq 2$. Next, let $(N_i)$ be the B-spline basis of the space $S_K$ and let $(\ell(i))_{i= 1}^{L}$ be the increasing sequence of integers so that $\Gamma= \{\ell(i) : 1\leq i\leq L\}$ for $L=|\Gamma|\leq n$. We then define the set $$\Lambda( r,s) := \Big\{ j : {\operatorname{supp}}N_j \cap \Big(\bigcup_{i=r}^s B_{\ell(i)}\Big) \neq \emptyset \Big\}$$ to consist of those B-spline indices so that the support of the corresponding B-spline function intersects the set $\bigcup_{i=r}^s B_{\ell(i)}$. Observe that by \[eq:applemma\], $$\label{eq:epsest} \big(1-\frac{\varepsilon}{3}\big) |I\cap V| \leq \sum_{\ell\in \Gamma} |A_{\ell}\cap V| = \Big| \bigcup_{\ell\in \Gamma} A_{\ell}\cap V\Big| \leq \Big| \bigcup_{i} B_{\ell(i)}\cap V\Big| \leq \Big| \bigcup_{i} B_{\ell(i)}\Big|.$$ Thus, the definition \[eq:eps1\] of $\varepsilon_1$ in particular implies $$\label{eq:cons_est} 72\varepsilon_1M \leq \varepsilon\cdot\tilde{\varepsilon}\cdot \Big| \bigcup_i B_{\ell(i)} \Big|.$$ We continue by defining the functions $(f_j)$ contained in $S_K$ using a stopping time construction and first set $j_0=-1$ and $C=(1-\tilde{\varepsilon}/3)\big| \bigcup_{i} B_{\ell(i)}\big|>0$. For $1\leq m\leq M$, if $j_{m-1}$ is already chosen, we define $j_m$ to be the smallest integer $\leq L$ so that the function $$\label{eq:cond_def_fm} f_m := \sum_{j\in \Lambda(j_{m-1}+2,j_m) } N_j \qquad \text{satisfies}\qquad \int f_m(t) {\,\mathrm{d}}t > C\alpha_m.$$ If no such integer exists, we set $j_m = L$ (however, we will see below that for the current choice of parameters, such an integer always exists). 
Additionally, we define $$f_{M+1} := \sum_{j\in \Lambda(j_{M} + 2, L)} N_j.$$ Observe that by the locality of the B-spline basis $(N_i)$, ${\operatorname{supp}}f_\ell \cap {\operatorname{supp}}f_m = \emptyset$ for $1\leq \ell < m\leq M+1$. Based on the collection of functions $(f_m)_{m=1}^{M+1}$, we will define the desired function $g$. But before we do that, we make a few comments about $(f_m)_{m=1}^{M+1}$. Note that for $m=1,\ldots, M$, by the minimality of $j_m$, $$\int \sum_{j\in \Lambda(j_{m-1}+2,j_m -1) } N_j(t){\,\mathrm{d}}t \leq C\alpha_m,$$ and therefore, again by the locality of the B-splines $(N_i)$, $$\label{eq:alpha_upper} \int f_m(t){\,\mathrm{d}}t \leq C\alpha_m + \int\sum_{j\in \Lambda(j_m,j_m) } N_j(t){\,\mathrm{d}}t \leq C\alpha_m + 3\varepsilon_1.$$ Additionally, employing also the definition of $u_\ell$ and $v_\ell$ and the fact that the B-splines $(N_i)$ form a partition of unity, $$\label{eq:unionintegral} \begin{aligned} \Big|\bigcup_{i=j_{m-1}+2}^{j_m}B_{\ell(i)}\Big|&\leq \int f_m(t){\,\mathrm{d}}t \leq \Big|\bigcup_{i=j_{m-1} + 2}^{j_{m} } (p_{\ell(i)-1}, p_{\ell(i)+1})\Big| \\ &\leq\Big|\bigcup_{i=j_{m-1}+2}^{j_m}B_{\ell(i)}\Big| + 2n\varepsilon_3. \end{aligned}$$ Next, we will show $$\label{eq:B_ell_estimate} (1-\tilde{\varepsilon}) \Big|\bigcup_{i} B_{\ell(i)}\Big| \leq \Big|\bigcup_{i\leq j_{M}} B_{\ell(i)}\Big| \leq (1-\tilde{\varepsilon}/6) \Big|\bigcup_{i} B_{\ell(i)}\Big|.$$ Indeed, on the one hand, we calculate by \[eq:unionintegral\] and \[eq:alpha_upper\], $$\begin{aligned} \Big|\bigcup_{i\leq j_{M}} B_{\ell(i)}\Big| &\leq \sum_{m=1}^{M}\Big| \bigcup_{i=j_{m-1} + 2}^{j_m} B_{\ell(i)}\Big| + \sum_{m=1}^{M} |B_{\ell(j_m +1)}| \leq \sum_{m=1}^{M} \int f_m(t){\,\mathrm{d}}t + 3\varepsilon_1 M \\ &\leq \sum_{m=1}^{M} (C\alpha_m + 3\varepsilon_1) + 3\varepsilon_1 M = C + 6\varepsilon_1 M. \end{aligned}$$ Recalling now that $C= (1-\tilde{\varepsilon}/3) \big| \bigcup_i B_{\ell(i)}\big|$ and using \[eq:cons_est\] yields the right hand side of \[eq:B_ell_estimate\]. 
On the other hand, employing \[eq:cond_def_fm\] and \[eq:unionintegral\], $$\begin{aligned} \Big|\bigcup_{i\leq j_{M}} B_{\ell(i)}\Big| &\geq \sum_{m=1}^{M} \Big|\bigcup_{i=j_{m-1} + 2}^{j_m} B_{\ell(i)}\Big| \geq \sum_{m=1}^{M} \Big(\int f_m(t){\,\mathrm{d}}t - 2n\varepsilon_3 \Big)\\ &\geq C\sum_{m=1}^{M} \alpha_m - 2nM\varepsilon_3 = C - 2nM\varepsilon_3. \end{aligned}$$ The definition of $C = (1-\tilde{\varepsilon}/3)\big| \bigcup_i B_{\ell(i)} \big|$ and $\varepsilon_3 = \varepsilon_1 /(2n)$, combined with \[eq:cons_est\], gives the left hand inequality in \[eq:B_ell_estimate\]. The inequality on the right hand side of \[eq:B_ell_estimate\], combined with \[eq:cons_est\] again, allows us to give the following lower estimate of $\int f_{M+1}$: $$\label{eq:intfM} \begin{aligned} \int f_{M+1}(t){\,\mathrm{d}}t &\geq \Big| \bigcup_{i\geq j_{M}+2} B_{\ell(i)} \Big| \geq \Big| \bigcup_{i> j_{M}} B_{\ell(i)} \Big| - 3\varepsilon_1 \geq \frac{\tilde{\varepsilon}}{12} \Big| \bigcup_i B_{\ell(i)} \Big|. \end{aligned}$$ We are now ready to define the function $g\in S_K\otimes X$ as follows: $$\label{eq:defg} g\equiv \sum_{j=1}^{M} f_j\otimes (x_j - \bar x) + f_{M+1}\otimes\sum_{j=1}^M \beta_j (x_j-\bar x) ,$$ where $$\label{eq:defbeta} \beta_j = \frac{C\alpha_j - \int f_j(t){\,\mathrm{d}}t}{\int f_{M+1}(t){\,\mathrm{d}}t}, \qquad 1\leq j\leq M.$$ We proceed by proving $\int g=0$ and properties (ii), (iii) for $g$: The fact that $\int g=0$ follows from a straightforward calculation using \[eq:defbeta\] and the assumption $\sum_{j=1}^M \alpha_j (x_j - \bar x)=0$. Property (ii) follows from ${\operatorname{supp}}g \subset [p_1, p_{n}]\subset \operatorname{int}I$. Next, observe that by definition of $g$ and $f_1,\ldots, f_{M+1}$, on each $\mathcal F_K$-atom contained in the set $B:=\cup_{m=1}^{M} \cup_{i=j_{m-1} +2}^{j_m} B_{\ell(i)}$, the function $\bar x + g$ is constant with a value in $\cup_i\{x_i\}$. Setting $\mathscr A_1= \{A \subset B : A\text{ atom in }\mathcal F_K\}$ and $\mathscr A_2 = \mathscr A\setminus \mathscr A_1$ now shows (a). 
Moreover, by \[eq:eps1\], \[eq:epsest\] and \[eq:B_ell_estimate\], $$\begin{aligned} \Big| \bigcup_{J\in\mathscr{A}_1} J\cap V \Big| &= \Big| \bigcup_{m=1}^{M} \bigcup_{i=j_{m-1} +2}^{j_m} B_{\ell(i)} \cap V\Big| \geq \Big| \bigcup_{i\leq j_{M}} B_{\ell(i)} \cap V \Big| - 3M\varepsilon_1 \\ &\geq \Big| \bigcup_i B_{\ell(i)}\cap V \Big| - \Big| \bigcup_{i>j_{M}} B_{\ell(i)}\Big| - \frac{\varepsilon |I\cap V|}{24} \\ &\geq \big( 1 - \frac{\varepsilon}{3} \big) |I\cap V| - \tilde{\varepsilon} \Big|\bigcup_i B_{\ell(i)}\Big| - \frac{\varepsilon|I\cap V|}{24}. \end{aligned}$$ Since $\tilde{\varepsilon}|\cup_i B_{\ell(i)} |\leq \tilde{\varepsilon} |I|\leq \varepsilon|I\cap V|/3$ by definition of $\tilde{\varepsilon}$, we conclude $|\cup_{J\in \mathscr A_1} J\cap V| \geq (1-\varepsilon)|I\cap V|$, proving also (b). Next, we note that for $t\in {\operatorname{supp}}f_j$ with $j\leq M$, we have $$\bar x + g(t) = \bar x + f_j(t) (x_j - \bar x) = f_j(t) x_j + \big(1-f_j(t)\big) \bar x.$$ Since $f_j(t)\in [0,1]$ and $\bar x$ is a convex combination of the elements $(x_j)$, we get (c) in this case. If $t\in {\operatorname{supp}}f_{M+1}$, we calculate $$\label{eq:uniformestimate_g} \begin{aligned} \bar x + g(t) &= \bar x +f_{M+1}(t) \sum_{j=1}^M \beta_j (x_j - \bar x) \\ &= \big(1-f_{M+1}(t)\big) \bar x + f_{M+1}(t) \big( \bar x + \sum_{j=1}^M \beta_j (x_j - \bar x)\big). \end{aligned}$$ By the lower estimate \[eq:intfM\] for $\int f_{M+1}$ and by \[eq:alpha_upper\], we have $$\begin{aligned} \sum_{j=1}^M |\beta_j| &\leq \frac{12}{\tilde{\varepsilon} \big|\bigcup_i B_{\ell(i)}\big|} \sum_{j\leq M} \Big(\int f_j - C\alpha_j\Big) \leq \frac{12}{\tilde{\varepsilon} \big|\bigcup_i B_{\ell(i)}\big|} ( 3\varepsilon_1 M ), \end{aligned}$$ which, by \[eq:cons_est\], is smaller than $\varepsilon/2$. Therefore, combining this with \[eq:uniformestimate_g\] yields property (c) for $t\in {\operatorname{supp}}f_{M+1}$ by setting $\lambda_1=1-f_{M+1}$, $\lambda_2 = f_{M+1}$, $y_1 = \bar x$, $y_2 = \bar x + \sum_j \beta_j (x_j - \bar x)$. 
Thus, we have finished Step 1, the construction of a function $g$ with mean zero and properties (ii), (iii). The next step is to construct a function $g$ so that, additionally, all of its moments up to order $k-1$ vanish. Step 2: Set $\tilde{\varepsilon} = 1- (1-\varepsilon)^{1/3}>0$. We write $a=\inf I$, $b=\sup I$ and choose $c\in I$ so that $R := (c,b)$ satisfies $0<|R \cap V|= \tilde{\varepsilon}|I\cap V|$. Define $L=I\setminus R$. Let $(N_i)$ be the B-spline basis of $S_{K_R}$, where we choose the integer $K_R$ so that we can select B-spline functions $(N_{m_i})_{i=0}^{k-1}$ with ${\operatorname{supp}}N_{m_i} \subset \operatorname{int} R$ for any $i=0,\ldots,k-1$ and ${\operatorname{supp}}N_{m_i}\cap {\operatorname{supp}}N_{m_j} = \emptyset$ for $i\neq j$. We then form the $k\times k$-matrix $$A = \Big( \int_R t^i N_{m_j}(t){\,\mathrm{d}}t \Big)_{i,j=0}^{k-1}.$$ The matrix $(t_\ell^i)_{i,\ell=0}^{k-1}$ is a Vandermonde matrix having positive determinant for $t_0 < \cdots< t_{k-1}$. Moreover, the matrix $(N_{m_j}(t_\ell))_{j,\ell=0}^{k-1}$ is a diagonal matrix having positive entries if $t_\ell\in \operatorname{int}{\operatorname{supp}}N_{m_\ell}$ for $\ell=0,\ldots, k-1$. For other choices of $(t_\ell)$ with $t_0<\cdots<t_{k-1}$, the determinant of $(N_{m_j}(t_\ell))_{j,\ell=0}^{k-1}$ vanishes. Therefore, Lemma \[lem:comp\] implies that $\det A\neq 0$ and $A$ is invertible. 
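The invertibility of $A$ rests on Lemma \[lem:comp\]; replacing the integrals by sums over a finite grid turns that lemma into the Cauchy-Binet formula $\det(FG^T)=\sum_S \det F_S \det G_S$ exactly, which allows a quick numerical sanity check with random data (our own naming, not from the paper):

```python
import itertools
import numpy as np

def gram_det(F, G):
    """det of the n x n matrix ( sum_t F[i,t]*G[j,t] )_{i,j},
    the discrete analogue of det( \\int f_i g_j )."""
    return np.linalg.det(F @ G.T)

def composition_sum(F, G):
    """Sum over ordered configurations t_1 < ... < t_n of
    det(F[i, t_l]) * det(G[j, t_l]), as in Lemma [lem:comp]."""
    n, T = F.shape
    return sum(np.linalg.det(F[:, c]) * np.linalg.det(G[:, c])
               for c in itertools.combinations(range(T), n))

rng = np.random.default_rng(0)
F = rng.normal(size=(3, 7))   # values f_i(t) on a 7-point grid
G = rng.normal(size=(3, 7))   # values g_j(t) on the same grid
```

In the application above, every ordered configuration contributes a non-negative product of determinants and at least one contributes a positive one, which is why $\det A>0$.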
Next, we choose $\varepsilon_1= \tilde{\varepsilon}/\big(k(1+\tilde{\varepsilon}) \| A^{-1} \|_\infty |L| \big)$ and apply Lemma \[lem:Gamma\] with the parameters $\varepsilon_1$, $\varepsilon_2 = \tilde{\varepsilon}$ and the interval $L$ to obtain a positive integer $n$ so that for the partition $(A_\ell)_{\ell=1}^n$ of $L$ with $n|A_\ell \cap V| = |L\cap V|$ and $\sup A_{\ell-1} = \inf A_{\ell}$, the set $\Gamma = \{ 2\leq \ell \leq n-1 : \max(|A_{\ell-1}|, |A_{\ell}|, |A_{\ell+1}|) \leq \varepsilon_1 \}$ satisfies $$\sum_{\ell\in \Gamma} |A_{\ell} \cap V| \geq (1-\tilde{\varepsilon}) |L\cap V|.$$ We now apply the construction of Step 1 on every set $A_{\ell}$, $\ell\in \Gamma$, with the parameters $\bar x$, $(x_j)_{j=1}^M$, $(\alpha_j)_{j=1}^M$, $\tilde{\varepsilon}$ to get functions $(g_{\ell})$ with zero mean having properties (ii), (iii) with $I$ replaced by $A_{\ell}$. On $L$, we define the function $$g(t) := \sum_{\ell\in \Gamma} g_\ell(t),\qquad t\in L.$$ Let $z_j:= \int_L t^j g(t){\,\mathrm{d}}t$ for $j=0,\ldots,k-1$. Observe that, since $\int_{A_{\ell}} g_\ell(t){\,\mathrm{d}}t=0$, $\|g_\ell\|_{L^\infty_X} \leq 1+\tilde{\varepsilon}$ by property (iii) and $|A_\ell|\leq \varepsilon_1$ for $\ell\in\Gamma$, we get for all $j=0,\ldots,k-1$, $$\begin{aligned} \| z_j \| &= \Big\| \sum_{\ell\in\Gamma} \int_{A_{\ell}} t^jg_\ell(t){\,\mathrm{d}}t\Big\| = \Big\| \sum_{\ell\in\Gamma} \int_{A_{\ell}} \big(t^j-(\inf A_\ell)^j\big)\cdot g_\ell(t){\,\mathrm{d}}t\Big\| \\ &\leq j \sum_{\ell\in\Gamma} |A_{\ell}|\int_{A_{\ell}} \| g_\ell(t) \| {\,\mathrm{d}}t \\ &\leq j \varepsilon_1 (1+\tilde{\varepsilon}) |L| \leq \tilde{\varepsilon}\cdot \|A^{-1}\|_\infty^{-1}. 
\end{aligned}$$ In order to have $\int_I t^j g(t){\,\mathrm{d}}t = 0$ for all $j=0,\ldots,k-1$, we want to define $g$ on $R=I\setminus L$ so that $$\label{eq:condR} \int_R t^j g(t){\,\mathrm{d}}t = - z_j,\qquad j=0,\ldots,k-1.$$ Assume that $g$ on $R$ is of the form $$g(t) = \sum_{i=0}^{k-1} N_{m_i}(t)w_i,\qquad t\in R$$ for some $(w_i)_{i=0}^{k-1}$ contained in $X$. Then, \[eq:condR\] is equivalent to $$Aw = -z$$ by writing $w=(w_0,\ldots,w_{k-1})^T$ and $z=(z_0,\ldots,z_{k-1})^T$. Defining $w := -A^{-1} z$ and employing the estimate for $\|z\|_\infty$ above, we obtain $$\label{eq:esty} \|w\|_\infty \leq \|A^{-1}\|_\infty \|z\|_\infty \leq \tilde{\varepsilon}.$$ The definition of $g$ immediately yields properties (i), (ii). From the application of the construction in Step 1 to each $A_{\ell}$, $\ell\in\Gamma$, we obtained collections $\mathscr A_1(\ell)$ of disjoint subintervals of $A_\ell$ that are atoms in $\mathcal F_{K_\ell}$ for some positive integer $K_\ell\geq N$, satisfying that $\bar x + g_\ell$ is constant on each $J\in \mathscr A_1(\ell)$ taking values in $\operatorname{conv}\{ x_i : 1\leq i\leq M \}$ and $|\cup_{J\in\mathscr A_1(\ell)} J\cap V| \geq (1-\tilde{\varepsilon}) |A_\ell \cap V|$. Let $B:= \cup_\ell\cup_{J\in\mathscr A_1(\ell)} J$ and define $\mathscr A_1$ to be the collection $\{J\subset B : J\text{ atom in }\mathcal F_K\}$ where $K:= \max(\max_\ell K_\ell, K_R )$ and define $\mathscr A := \{J\subset I:J\text{ atom in }\mathcal F_K\},$ $\mathscr A_2 := \mathscr A\setminus \mathscr A_1$. Then, (a) is satisfied by the corresponding property of each $g_\ell$. (b) follows from the calculation $$\begin{aligned} \Big| \bigcup_{J\in\mathscr A_1} J \cap V\Big| & \geq (1-\tilde{\varepsilon})\sum_{\ell\in\Gamma}|A_{\ell}\cap V| \geq (1-\tilde{\varepsilon})^2 |L\cap V| \\ &\geq (1-\tilde{\varepsilon})^3 |I\cap V|= (1-\varepsilon)|I\cap V|. \end{aligned}$$ Property (c) on $L$ is a consequence of property (c) for the functions $g_\ell$. 
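The moment-killing step, i.e. solving $Aw=-z$ so that the correction on $R$ cancels the moments accumulated on $L$, is easy to reproduce numerically. The following sketch uses simple hat functions with pairwise disjoint supports in place of the B-splines $N_{m_i}$ and a hypothetical mean-zero-type function $g_0$ on $L$; it illustrates only the linear algebra, not the full construction.

```python
import numpy as np

k = 3
grid = np.linspace(0, 1, 20001)
dt = grid[1] - grid[0]

def integrate(h):
    # simple Riemann sum; linearity is all that matters here
    return float(np.sum(h) * dt)

# a (hypothetical) function g0 supported on L = [0, 0.7]
# with non-vanishing moments z_j = \int t^j g0 dt
g0 = np.where(grid <= 0.7, np.sin(6 * np.pi * grid), 0.0)
z = np.array([integrate(grid**j * g0) for j in range(k)])

# k hat functions with pairwise disjoint supports inside R = (0.7, 1),
# playing the role of the B-splines N_{m_i}
def hat(c, r):
    return np.clip(1 - np.abs(grid - c) / r, 0.0, None)

N = [hat(0.75 + 0.08 * i, 0.03) for i in range(k)]

# moment matrix A = ( \int_R t^i N_j dt ); invertible as in Lemma [lem:comp]
A = np.array([[integrate(grid**i * Nj) for Nj in N] for i in range(k)])
w = np.linalg.solve(A, -z)   # coefficients of the correction on R

g = g0 + sum(wi * Ni for wi, Ni in zip(w, N))
moments = np.array([integrate(grid**j * g) for j in range(k)])
```

By construction, all moments of $g$ up to order $k-1$ vanish up to quadrature roundoff, mirroring property (i) of the lemma.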
We can write $\alpha_j \equiv \sum_\ell \alpha_{j,\ell} N_\ell$ for some non-negative coefficients $(\alpha_{j,\ell})$ that have the property $\sum_{j=1}^M \alpha_{j,\ell} =1$ for each $\ell$. Therefore, on $R$, we have $$\bar x(t) + g(t) = \sum_{j=1}^M \alpha_j(t)x_j + \sum_{i=0}^{k-1} N_{m_i}(t) w_i = \sum_\ell N_\ell(t)\Big( \sum_{j=1}^M \alpha_{j,\ell} x_j + \sum_{i=0}^{k-1} \delta_{\ell,m_i}w_i\Big) , $$ which, since $\|w\|_\infty \leq \tilde{\varepsilon}\leq \varepsilon$ and $\sum_{j=1}^M \alpha_{j,\ell} = 1$ for each $\ell$, implies (c) on $R$. We now use Lemma \[lem:moments\] inductively to prove Theorem \[thm:general\]. We assume that $X$ does not have the RNP. Then, by Theorem \[thm:dentable\], the ball $B(0,1/2)\subset X$ contains a non-dentable convex set $D$ satisfying $$x\in \overline{\operatorname{conv}}\big( D \setminus B(x,2\delta) \big),\qquad x\in D$$ for some $\delta>0$. Defining $D_0 = D+ B(0,\delta/2)$ and, for $j\geq 1$, $D_j = D_{j-1} + B(0,2^{-j-1}\delta)$, we use the remark after Definition \[def:dentable\] to get that all the sets $(D_j)$ are contained in $B(0,1)$, are convex and $$x\in \operatorname{conv}\big(D_j \setminus B(x,\delta)\big),\qquad x\in D_j,\ j\geq 0.$$ We will assume without restriction that $\eta\leq \delta$. Let $x_{0,1}\in D_0$ be arbitrary and set $f_0 \equiv {\ensuremath{\mathbbm 1}}_{[0,1]} \otimes x_{0,1}\in S_{m_0}\otimes X$ on $I_{0,1} := [0,1]$ for $m_0 = 0$. By $P_j$, we will denote the $L^1_X$-extension of the orthogonal projection operator onto $S_{m_j}$, where we assume that $(m_j)_{j=1}^n$ and $(f_j)_{j=1}^n$ with $f_j \in S_{m_j}\otimes X$ for each $j=1,\ldots,n$ are constructed in such a way that for all $j=0,\ldots,n$, 1. $P_{j-1} f_j = f_{j-1}$ if $j\geq 1$, 2. 
on all atoms $I$ in $\mathcal F_{m_j}$, $f_j$ has the form $$f_j \equiv \sum_{\ell} \lambda_\ell \otimes y_\ell,\qquad \text{finite sum}$$ for functions $\lambda_\ell\in S_{m_j}$ with non-negative B-spline coefficients, $\sum_\ell \lambda_\ell\equiv 1$ and some $y_\ell\in D_j$, 3. there exists a finite collection of disjoint intervals $(I_{j,i})_i$ that are atoms in $\mathcal F_{m_j}$ so that (setting $C_j = \cup_i I_{j,i}$) 1. for all $i$, $f_j \equiv x_{j,i}\in D_j$ on $I_{j,i}$, 2. $\|f_j - f_{j-1}\|_X \geq \delta$ on $C_j\cap C_{j-1}$ if $j\geq 1$, 3. $|C_j\cap C_{j-1}\cap V| \geq (1 -2^{-j}\eta)|V| $ if $j\geq 1$, 4. $|C_j \cap V| \geq (1 -2^{-j-2}\eta)|V| $, 5. $|I_{j,i}\cap V|>0$ for every $i$. We will then perform the construction of $m_{n+1}$, $f_{n+1}$ and the collection $(I_{n+1,i})$ of atoms in $\mathcal F_{m_{n+1}}$ having properties (1)–(3) for $j=n+1$. Define the collection $\mathscr C = \{A \text{ atom of }\mathcal F_{m_{n}} : |A\cap V|>0\}$. We will distinguish the two cases $B\in \mathscr C_1 := \{A\in \mathscr C: A= I_{n,i}\text{ for some }i\}$ and $B\in \mathscr C_2:=\mathscr C\setminus \mathscr C_1$. Case 1: $B\in \mathscr{C}_1$. Here, $B=I_{n,i}$ for some $i$ and we use the fact that on $B$, $f_n = x_B:=x_{n,i}\in D_n$ and write $$x_{B} = \sum_{\ell=1}^{M_B} \alpha_{B,\ell} x_{B,\ell}$$ with some positive numbers $(\alpha_{B,\ell})$ satisfying $\sum_\ell \alpha_{B,\ell} = 1$, some $x_{B,\ell}\in D_n$ and $\|x_{B} - x_{B,\ell}\|\geq \delta$ for any $\ell=1,\ldots, M_B$. We apply Lemma \[lem:moments\] to the interval $B$ with this decomposition and with the parameter $\varepsilon=\eta_n:= 2^{-n-3}\eta$. 
This yields a function $g_{B}\in S_{K_{B}}\otimes X$ for some positive integer $K_{B}$ that has the properties (i) $\int t^\ell g_{B}(t){\,\mathrm{d}}t = 0, \qquad 0\leq \ell\leq k-1$, (ii) ${\operatorname{supp}}g_{B} \subset \operatorname{int} B$, (iii) we have a splitting of the collection $\mathscr A_B = \{A\subset B: A \text{ is atom in }\mathcal F_{K_B}\}$ into $\mathscr A_{B,1} \cup \mathscr A_{B,2}$ so that (a) on each $J\in \mathscr A_{B,1}$, $f_n+g_B=x_B + g_B$ is constant on $J$ taking values in $\cup_\ell \{ x_{B,\ell} \}$, (b) $|\cup_{J\in\mathscr A_{B,1}} J\cap V| \geq (1-\eta_n)|B\cap V|$, (c) on each $J\in \mathscr A_{B,2}$, the function $f_n + g_B$ can be written as $$f_n(t) + g_B(t) = x_B + g_B(t) = \sum_\ell \lambda_{B,\ell}(t) y_{B,\ell}$$ for some functions $\lambda_{B,\ell} \in S_{K_B}$ having non-negative B-spline coefficients with $\sum_\ell \lambda_{B,\ell}\equiv 1$ and $y_{B,\ell}\in \operatorname{conv}\{x_{B,j}: 1\leq j\leq M_B\} + B(0,\eta_n)$. <span style="font-variant:small-caps;">Case 2:</span> $B\in \mathscr C_2$: on $B$, $f_n$ is of the form $$f_n(t) = \sum_{\ell=1}^{M_B} \lambda_{\ell}(t)y_{\ell}$$ for some functions $\lambda_{\ell}\in S_{m_n}$ having non-negative B-spline coefficients with $\sum_\ell \lambda_{\ell}\equiv 1$ and some $y_{\ell}\in D_n$.
Applying Lemma \[lem:moments\] with the parameter $\eta_n =2^{-n-3}\eta$, we obtain a function $g_B\in S_{K_B}\otimes X$ (for some positive integer $K_B$) that has the properties (i) $\int t^\ell g_{B}(t){\,\mathrm{d}}t = 0, \qquad 0\leq \ell\leq k-1$, (ii) ${\operatorname{supp}}g_{B} \subset \operatorname{int} B$, (iii) we have a splitting of the collection $\mathscr A_B = \{A\subset B: A \text{ is atom in }\mathcal F_{K_B}\}$ into $\mathscr A_{B,1} \cup \mathscr A_{B,2}$ so that (a) for each $J\in \mathscr A_{B,1}$, $f_n + g_B$ is constant on $J$ taking values in $\operatorname{conv}\{y_\ell : 1\leq \ell\leq M_B\}$, (b) $|\cup_{J\in\mathscr A_{B,1}} J\cap V| \geq (1-\eta_n)|B\cap V|$, (c) for each $J\in \mathscr A_{B,2}$, the function $f_n + g_B$ can be written as $$f_n(t) + g_B(t) = \sum_\ell \lambda_{B,\ell}(t) y_{B,\ell}$$ for some functions $\lambda_{B,\ell} \in S_{K_B}$ having non-negative B-spline coefficients with $\sum_\ell \lambda_{B,\ell}\equiv 1$ and $y_{B,\ell}\in \operatorname{conv}\{y_j: 1\leq j\leq M_B\} + B(0,\eta_n)$. Having treated those two cases, we define the index $m_{n+1}:=\max\{ K_B : B\in\mathscr C\}$ and $$f_{n+1} = f_n + \sum_{B\in\mathscr{C}} g_{B}.$$ The new collection $(I_{n+1,i})$ is defined to be the decomposition of the set $\cup_{B\in\mathscr C} \cup_{J\in \mathscr A_{B,1}} J$ (from the above construction) into $\mathcal F_{m_{n+1}}$-atoms, after deleting those $\mathcal F_{m_{n+1}}$-atoms $I$ with $|I\cap V|=0$. Since $D_n$ is convex and $\eta\leq \delta$, the corresponding function values of $f_{n+1}$ are contained in $D_n + B(0,\eta_n)\subset D_{n+1}$ and we will enumerate them as $(x_{n+1,i})_i$ accordingly. We additionally set $C_{n+1} := \cup_i I_{n+1,i}$. With these definitions, we will successively show properties (1)–(3) for $j=n+1$. 
Since the function $g=P_n f_{n+1} \in S_{m_n}\otimes X$ is characterized by the condition $$\int g(t)s(t){\,\mathrm{d}}t = \int f_{n+1}(t)s(t){\,\mathrm{d}}t,\qquad s\in S_{m_n},$$ property (1) for $j=n+1$ follows if we show that $\int g_{B}(t)s(t){\,\mathrm{d}}t = 0$ for any $s\in S_{m_n}$ and any $B\in\mathscr C$. But this is a consequence of (i) for $g_{B}$ (in both case 1 and case 2), since $s\in S_{m_n}$ is a polynomial of order $k$ on $B$. Property (2) now is a consequence of (iii) (again for both cases 1 and 2), we just remark once again that $D_{n}+ B(0,\eta_n) \subset D_{n+1}$ due to $\eta\leq \delta$. Properties (3a), (3b) and (3e) are direct consequences of the construction. (3d) follows from (iii)(b) in cases 1 and 2 since $$\begin{aligned} |C_{n+1} \cap V| &=\Big| \bigcup_{B\in\mathscr C} \bigcup_{J\in \mathscr A_{B,1}} J\cap V\Big| = \sum_{B\in\mathscr C} \Big| \bigcup_{J\in \mathscr A_{B,1}} J\cap V\Big| \\ &\geq (1-\eta_n) \sum_{B\in \mathscr C} |B\cap V| = (1-\eta_n)|V| \end{aligned}$$ and $\eta_n = 2^{-n-3}\eta$. For property (3c), we calculate $$|C_{n+1} \cap C_n \cap V| \geq (1-\eta_n) |C_n\cap V| \geq (1-\eta_n) (1-2^{-n-2}\eta) |V|$$ by (iii)(b) in case 1 and by induction hypothesis. Since $\eta_n = 2^{-n-3}\eta$, we get $(1-\eta_n)(1-2^{-n-2}\eta)\geq 1-2^{-(n+1)}\eta$ and this proves (3c) for $j=n+1$. Finally, we note that due to (2) and (3)(c), the sequence $(m_n)$, the $k$-martingale spline sequence $(f_n)$ and the sets $E_n:= C_n \cap C_{n-1}\cap V$ have the properties that are desired in the theorem. Acknowledgments {#acknowledgments .unnumbered} --------------- The author is grateful to P. F. X. Müller for many fruitful discussions related to the underlying article. This research is supported by the FWF-project Nr.P27723.
--- abstract: 'We show that Lusztig’s $a$-function of a Coxeter group is bounded if the Coxeter group has a complete graph (i.e. any two vertices are joined) and the cardinalities of finite parabolic subgroups of the Coxeter group have a common upper bound.' address: | HUA Loo-Keng Key Laboratory of Mathematics and Institute of Mathematics\ Chinese Academy of Sciences\ Beijing, 100190\ China author: - Nanhua XI title: 'Lusztig’s $a$-function for Coxeter groups with complete graphs' --- [^1] Introduction ============ \[sec:Intro\] Lusztig’s $a$-function for a Coxeter group is defined in \[L2\] and is a very useful tool for studying cells in Coxeter groups and related topics such as representations of Hecke algebras. For an affine Weyl group, Lusztig showed in \[L2\] that the $a$-function is bounded by the length of the longest element of the corresponding Weyl group. It might be true that for any Coxeter group of finite rank the $a$-function is bounded by the length of the longest element of certain finite parabolic subgroups of the Coxeter group. In this paper we first show that this property implies that the Coxeter group has a lowest two-sided cell (Theorem 1.5). We then show that Lusztig’s $a$-function of a Coxeter group has this property (Theorem 2.1) if the Coxeter group has a complete graph (i.e. any two different simple reflections of the Coxeter group are not commutative) and the cardinalities of finite parabolic subgroups of the Coxeter group have a common upper bound. For Coxeter groups of rank 3, Peipei Zhou showed an analogous result using the approach of this paper. These facts support part (iv) of Question 1.13 in \[X2\]. Preliminaries ============= [**1.1.**]{} Let $(W,S)$ be a Coxeter group. We use $l$ for the length function and $\le$ for the Bruhat order of $W$. The neutral element of $W$ will be denoted by $e$. Let $q$ be an indeterminate.
The Hecke algebra $H$ of $(W,S)$ is a free $\cala=\BZ[q^{\frac12},q^{-\frac12}]$-module with a basis $T_w,\ w\in W$ and the multiplication relations are $(T_s-q)(T_s+1)=0$ if $s$ is in $S$, $T_wT_u=T_{wu}$ if $l(wu)=l(w)+l(u)$. For any $w\in W$ set $\tilde T_w=q^{-\frac{l(w)}2}T_w$. For any $w,u\in W$, write $$\tilde T_w\tilde T_u=\sum_{v\in W}f_{w,u,v}\tilde T_v,\qquad f_{w,u,v}\in\cala.$$ The following fact is known and implicit in \[L2, 8.3\]. [(a)]{} For any $w,u,v\in W$, $f_{w,u,v}\in\cala $ is a polynomial in $q^{\frac12}-q^{-\frac12}$ with non-negative coefficients and $f_{w,u,v}=f_{u,v^{-1},w^{-1}}=f_{v^{-1},w,u^{-1}}$. Its degree is less than or equal to min$\{l(w),l(u), l(v)\}$. Proof. Note that $f_{x,y,e}=0$ if $xy\ne e$ and $f_{x,x^{-1},e}=1$ for any $x,y\in W$. Then it is easy to verify $$f_{w,u,v}f_{v,v^{-1},e}=f_{w,w^{-1},e}f_{u,v^{-1},w^{-1}}.$$ So we have $f_{w,u,v}=f_{u,v^{-1},w^{-1}}=f_{v^{-1},w,u^{-1}}$. It is clear that $f_{w,u,v}$ is a polynomial in $q^{\frac12}-q^{-\frac12}$ with non-negative coefficients and deg$f_{w,u,v}$ is less than or equal to min$\{l(w),l(u)\}$. The second assertion follows. For any $w,u,v$ in $W$, we shall regard $f_{w,u,v}$ as a polynomial in $\xi=q^{\frac12}-q^{-\frac12}$. The following fact is noted by Lusztig \[L3, 1.1 (c)\]. \(b) For any $w,u,v$ in $W$ we have $f_{w,u,v}=f_{u^{-1},w^{-1},v^{-1}}.$ [**Lemma 1.2.**]{} Let $(W,S)$ be a Coxeter group and let $I$ be a subset of $S$. The following conditions are equivalent. \(a) The subgroup $W_I$ of $W$ generated by $I$ is finite. \(b) There exists an element $w$ of $W$ such that $sw\le w$ for all $s$ in $I$. \(c) There exists an element $w$ of $W$ such that $w\le ws$ for all $s$ in $I$. Proof. Clear. We set $L(w)=\{s\in S\,|\, sw\le w\}$ and $R(w)=\{s\in S\,|\, ws\le w\}$ for any $w\in W$. [**Lemma 1.3.**]{} Let $w$ be in $W$ and let $I$ be a subset of $L(w)$ (resp. $R(w)$). Then $l(w_Iw)+l(w_I)=l(w)$ (resp. $l(ww_I)+l(w_I)=l(w)$), here $w_I$ is the longest element of $W_I$.
Proof. Clear. [**1.4.**]{} For any $y,w\in W$, let $P_{y,w}$ be the Kazhdan-Lusztig polynomial. Then all the elements $C_w=q^{-\frac{l(w)}2}\sum_{y\le w}P_{y,w}T_y$, $w\in W$, form a Kazhdan-Lusztig basis of $H$. It is known that $P_{y,w}=\mu(y,w)q^{\frac12(l(w)-l(y)-1)}+$ lower degree terms if $y<w$ and $P_{w,w}=1$. For any $w,u$ in $W$, write $$C_wC_u=\sum_{v\in W}h_{w,u,v}C_v,\ h_{w,u,v}\in\cala.$$ Following \[L2\], for any $v\in W$ we define $$a(v)=\max\{i\in\BN\,|\, i=\text{deg}h_{w,u,v}, \ w,u\in W\},$$ here the degree is in terms of $q^{\frac12}$. Since $h_{w,u,v}$ is a polynomial in $q^{\frac12}+q^{-\frac12}$, we have $a(v)\ge 0$. We are interested in the bound of the function $a:W\to\BN$. Clearly, $a$ is bounded if $W$ is finite. The following fact is known (see \[L3\]) and easy to verify. \(a) The $a$-function is bounded by a constant $c$ if and only if deg$f_{w,u,v}\le c$ for any $w,u,v\in W$. Lusztig showed that for an affine Weyl group the $a$-function is bounded by the length of the longest element of the corresponding Weyl group. This fact is important in studying cells in affine Weyl groups. One consequence is that an affine Weyl group has a lowest two-sided cell \[S1\]. We will show that the boundedness of the $a$-function is also interesting in general. Assume now that the $a$-function is bounded and its maximal value is $c$. Let $w,u,v$ be elements in $W$. We shall regard $h_{w,u,v}$ as a polynomial in $\eta=q^{\frac12}+q^{-\frac12}$. Following Lusztig \[L2\], write $h_{w,u,v}=\gamma_{w,u,v}\eta^{a(v)}+\delta_{w,u,v}\eta^{a(v)-1}+$ lower degree terms. Then $\gamma_{w,u,v}$ and $\delta_{w,u,v}$ are integers. Let $\Omega$ be the subset of $W$ consisting of all elements $w$ with $a(w)=c$. Assume that $v\in\Omega$. For $w,u\in W$, we have $f_{w,u,v}=\gamma_{w,u,v}\xi^{c}+$ lower degree terms. Using 1.1(a) and 1.1(b) we get \(b) $\gamma_{w,u,v}=\gamma_{u,v^{-1},w^{-1}}=\gamma_{v^{-1},w,u^{-1}}=\gamma_{u^{-1},w^{-1},v^{-1}}$ for $w,u,v\in \Omega$.
\(c) Let $w,u\in W$ and $v\in\Omega$. If $\gamma_{w,u,v}\ne 0$, then $w,u$ are in $\Omega$ and $\gamma_{w,u,v}=\gamma_{u,v^{-1},w^{-1}}=\gamma_{v^{-1},w,u^{-1}}=\gamma_{u^{-1},w^{-1},v^{-1}}$ is positive. Since $h_{w,u,v}\ne 0$ implies that $w\rl v,\ u\ll v$, by (c) we obtain \(d) Let $w,u\in W$ and $v\in\Omega$. If $\gamma_{w,u,v}\ne 0$, then $w\er v$, $w\el u^{-1}$ and $u\el v$. In particular, $w,u,v$ are in the same two-sided cell. [**Theorem 1.5.**]{} Let $(W,S)$ be a Coxeter group. Assume that the $a$-function is bounded by the length of the longest element $w_0$ of a finite parabolic subgroup $P$ of $W$. Then the two-sided cell of $W$ containing $w_0$ is the lowest two-sided cell of $W$. Moreover, the lowest two-sided cell contains all elements $w$ in $W$ with $a(w)=l(w_0)$. Proof. We first show that $x\lrl w_0$ for any $x\in W$. (We refer to \[KL\] for the definitions of the preorders $\lrl, \rl, \ll$ and the equivalences $\el$, $\elr$ on $W$.) Let $x\in W$ be such that $l(xw_0)=l(x)-l(w_0)$. We first show that $x$ and $w_0$ are in the same left cell. Clearly $x\ll w_0$. Let $y=xw_0$. Then $$\tt_{x^{-1}}\tt_x=\tt_{w_0}(\sum_{z\in W}f_{y^{-1},y,z}\tt_z)\tt_{w_0}.$$ Since $f_{y^{-1},y,e}=1$, $f_{w_0,w_0,w_0}$ has degree $l(w_0)$ as a polynomial in $\xi=q^{\frac12}-q^{-\frac12}$ and $f_{w,u,v}$ has non-negative coefficients as a polynomial in $\xi$ for any $w,u,v\in W$, by 1.4(a) we conclude that $f_{x^{-1},x,w_0}$ has degree $l(w_0)$. Thus $h_{x^{-1},x,w_0}$ has degree $l(w_0)$ as a Laurent polynomial in $q^{\frac12}$. In particular, $h_{x^{-1},x,w_0}$ is non-zero, so $w_0\ll x$. Hence $x$ and $w_0$ are in the same left cell. Now assume that $x$ is an arbitrary element in $W$. Clearly there exists $w\in P$ such that $l(xw)=l(x)+l(w)$ and $l(xww_0)=l(xw)-l(w_0)$. We then have $ w_0\el xw\rl x$. 
Hence, the two-sided cell containing $w_0$ is the lowest one among the two-sided cells of $W$ (with respect to the partial order $\lrl$ on the set of two-sided cells of $W$). Now we show that the lowest two-sided cell contains all elements $w$ in $W$ with $a(w)=l(w_0)$. Assume that $a(w)=l(w_0)$. Then there exist $x,y\in W$ such that $\gamma_{x,y,w}\ne 0$ and $x,y,w$ are in the same two-sided cell. By 1.4(c), $a(x)=a(y)=l(w_0).$ Choose $u\in P$ such that $l(yu)=l(y)+l(u)=l(yuw_0)+l(w_0)$. It is easy to see that $l(wu)=l(w)+l(u)=l(wuw_0)+l(w_0)$. Since $\tt_x\tt_{yu}=(\tt_x\tt_y)\tt_u$, we have $\gamma_{x,yu,wu}\ge \gamma_{x,y,w}$. Thus $x,yu,wu$ are in the same two-sided cell. But we have seen that $yu$ and $w_0$ are in the same two-sided cell. The theorem is proved. [**Corollary 1.6.**]{} Let $(W,S)$ be a Coxeter group. Assume that the $a$-function is bounded by the length of the longest element $w_0$ of a finite parabolic subgroup $P$ of $W$. Then $$\{x\in W\,|\, l(xw_0)=l(x)-l(w_0)\}$$ is a left cell of $W$. Proof. It follows from the proof of Theorem 1.5. [*Remark.*]{} For affine Weyl groups, this result is due to Lusztig \[L2\]. [**1.7.**]{} Let $(W,S)$ be a Coxeter group. Assume that the $a$-function is bounded by the length of the longest element $w_0$ of a finite parabolic subgroup $P$ of $W$. Denote the left cell containing $w_0$ by $\Gamma$. Then $\Gamma=\{w\in W\,|\, l(w)=l(ww_0)+l(w_0)\}.$ Let $J_{\Gamma\cap\Gamma^{-1}}$ be the free $\BZ$-module with a basis $\{t_w\,|\, w\in \Gamma\cap\Gamma^{-1}\}$. Define $t_wt_u=\sum_{v\in \Gamma\cap\Gamma^{-1}}\gamma_{w,u,v}t_v$. Then $J_{\Gamma\cap\Gamma^{-1}}$ is an associative ring with unit $1=t_{w_0}$. Let $\Omega$ be the subset of $W$ consisting of all elements $w$ with $a(w)=l(w_0)$. We can define $J_{\Omega}$ and the multiplication in $J_{\Omega}$ similarly. The multiplication is associative.
However, $J_{\Omega}$ has no unit in general, since $\Omega$ contains infinitely many left cells in general, as shown in \[B, Be\], see also Proposition 3.2. [**1.8. Remark.**]{} Keep the assumption of Theorem 1.5. Motivated by the work of Shi \[S1, S2\], we give some conjectures. It is likely that the lowest two-sided cell is exactly the set of elements $w$ in $W$ with $a(w)=l(w_0)$. Further, it is likely that the lowest two-sided cell coincides with the set of elements of $W$ of the form $xwy$ such that $l(xwy)=l(x)+l(w)+l(y)$, $l(w)=l(w_0)$ and $w$ is the longest element of a finite parabolic subgroup of $W$. Let $D'$ be the set consisting of all elements $x\in W$ such that \(1) $x=wy$ for some $w$ in a finite parabolic subgroup of $W$ with length $l(w_0)$ and $y\in W$ and $l(x)=l(w)+l(y)$, \(2) for any $s$ in $L(w)$, there are no $z,z',u\in W$ such that $sx=zuz'$, $l(sx)=l(z)+l(u)+l(z')$ and $u$ is in a finite parabolic subgroup of $W$ with length $l(w_0)$. For any $x\in D'$, let $\Gamma_x$ be the subset of $W$ consisting of all elements $zx$ satisfying $l(zx)=l(z)+l(x)$. It is likely that $\Gamma_x$ is a left cell in the lowest two-sided cell of $W$ and the map $x\to \Gamma_x$ is a bijection between the set $D'$ and the set of left cells in the lowest two-sided cell. Also, the set $D=\{y^{-1}wy\,|\, wy\in D'\}$ should be the set of distinguished involutions in the lowest two-sided cell, here $wy$ satisfies the above (1) and (2). When the Coxeter graph of $W$ is connected we also conjecture that the set $D$ is finite if and only if $W$ is finite or is an affine Weyl group or $st$ has infinite order for any different simple reflections $s,t\in S$. Assume that $wy$ satisfies (1) and (2). Let $zw\in W$ be such that $l(zw)=l(z)+l(w)$. Then we should have $C_{zw}C_{wy}=h_{w,w,w}C_{zwy}$. Also we should have $\mu(z'wy,zwy)=\mu(z'w,zw)$ if $l(z'w)=l(z')+l(w)$. For affine Weyl groups, these equalities are true, see \[X1, SX\].
If $(W,S)$ is crystallographic, then the function $a$ is constant on a two-sided cell \[L2\]. Since $a(w_0)=l(w_0)$ (see \[L2\]), we see that the lowest two-sided cell is exactly the set $\{w\in W\,|\, a(w)=l(w_0)\}$. For an affine Weyl group $W$, thanks to \[S1, S2\], we know that (a) the lowest two-sided cell of $W$ coincides with the set of elements of $W$ of the form $xwy$ such that $l(xwy)=l(x)+l(w)+l(y)$, $l(w)=l(w_0)$ and $w$ is the longest element of a finite parabolic subgroup of $W$; (b) $D$ is the set of distinguished involutions in the lowest two-sided cell. In section 3 we will show that the above conjectures are true for certain Coxeter groups with complete graphs. Coxeter groups with complete graphs =================================== Throughout this section $(W,S)$ is a Coxeter group and any two simple reflections in $S$ are not commutative. In other words, the Coxeter graph of $(W,S)$ is a complete graph. Another main result of this article is the following. [**Theorem 2.1.**]{} Let $(W,S)$ be a Coxeter group. Assume that any two different simple reflections are not commutative and the cardinalities of finite parabolic subgroups of $W$ have a common upper bound. Then Lusztig’s $a$-function on $W$ is bounded by the length of the longest element of certain finite parabolic subgroups of $W$. The remainder of this section is devoted to the proof of the theorem. [**Lemma 2.2.**]{} Let $r,s,t$ be simple reflections such that the orders of $rs,rt, st$ are greater than 2. Then there is no element $w$ in $W$ such that $w=w_1r=w_2st$ and $l(w)=l(w_1)+1=l(w_2)+2$. Proof. We use induction on $l(w)$. When $l(w)=0,1,2,3$, the lemma is clear. Now assume that the lemma is true for $u$ with length $l(w)-1$. Since $r,t\in R(w)$, by Lemma 1.2, we know that the subgroup $W_{rt}$ of $W$ generated by $r,t$ is finite. Let $w_{rt}$ be the longest element in $W_{rt}$. By Lemma 1.3, $w=w_3w_{rt}=w_4trt$ for some $w_3,w_4\in W$ and $l(w)=l(w_3)+l(w_{rt})$, $l(w)=l(w_4)+3$.
So we get $w_4tr=w_2s$. Clearly we have $l(w_4tr)=l(w)-1=l(w_4)+2=l(w_2)+1$. By the induction hypothesis, such an element $w_4tr=w_2s$ does not exist, hence $w$ does not exist. The lemma is proved. [**Lemma 2.3.**]{} Keep the assumption of Theorem 2.1. Let $x\in W$ and $t_1t_2\cdots t_m$ ($m\ge 2)$ be a reduced expression of an element in $W$. Assume that $xt_1\le x$, $xt_2\cdots t_{m-1}t_m\le xt_2\cdots t_{m-1}$, and $l(xt_2\cdots t_{m-1})=l(x)+m-2$. If for any reduced expression $s_1s_2\cdots s_m$ of $t_1t_2\cdots t_m$ with $xs_1\le x$ we have $l(xs_2\cdots s_{m-1})=l(x)+m-2$, then $t_1t_2\cdots t_m$ is in a finite parabolic subgroup of $W$ generated by two simple reflections. Proof. If $m=2$, by Lemma 1.2, the result is clear. Now assume that $m\ge 3$. Let $s=t_{m-1}$, $t=t_m$, and $y=xt_2\cdots t_{m-1}$. Then $s,t\in R(y)$. By Lemma 1.3, $y=y_1 s^a(ts)^b$ and $l(y)=l(y_1)+a+2b$, here $a=0$ or 1 and $s^a(ts)^b$ is the longest element of the subgroup $W_{st}$ of $W$ generated by $s,t$. Write $t_1\cdots t_{m-1}=t_1\cdots t_is^d(ts)^c$, $t_i\ne t,s$, $d=0$ or 1, $c\ge 0$ and $d+2c+i=m-1$. We understand that $t_1\cdots t_i$ is the neutral element $e$ of $W$ if $i=0$. We need to show that $i=0$. Since $t_m=t$ and $t_1t_2\cdots t_m$ is a reduced expression, we must have $d+2c<a+2b$. Assume $a+2b=d+2c+1$. If $i\ge 1$, then $t_1t_2\cdots t_{m-1}t_m=t_1\cdots t_is^a(ts)^b$ and $R(xt_2\cdots t_i)\cap\{s,t\}$ contains exactly one element, denoted by $r$. (We understand that $t_2\cdots t_i=e$ if $i=1$.) Then $t_1t_2\cdots t_m$ has a reduced expression of the form $t_1t_2\cdots t_ir\cdots$. Since $xt_1\le x$ and $i\le m-2$, by the assumptions of the lemma, we know that $xt_2\cdots t_ir$ has length $l(x)+i$, which contradicts $xt_2\cdots t_ir\le xt_2\cdots t_i$. So $i=0$ in this case. Assume $a+2b>d+2c+1$; then $b>c$ since $0\le a,d\le 1$. We have $y_1s^a(ts)^b=(xt_1)t_1t_2\cdots t_is^d(ts)^c$. So $y_1s^a(ts)^{b-c}s^d=(xt_1)t_1t_2\cdots t_i$ and $l(s^a(ts)^{b-c}s^d)\ge 2$.
Then $y_1s^a(ts)^{b-c}s^d=y_3st$ or $y_3ts$ for some $y_3$ with $l(y_1s^a(ts)^{b-c}s^d)=l(y_3)+2$. If $i\ge 1$, we have $y_1s^a(ts)^{b-c}s^dt_i\le y_1s^a(ts)^{b-c}s^d$. Since $t_i\ne s,t$, by Lemma 2.2, this is impossible. The contradiction gives $i=0$. That is, all $t_1,t_2,...,t_m$ are in $\{s,t\}$. The lemma is proved. [*Remark.*]{} The lemma is not true in general. For instance, let $(W,S)$ be of type $A_3$, where $s_1,s_2,s_3$ are simple reflections such that $s_1s_3=s_3s_1$. Consider $x=t_1t_2t_3t_4=s_2s_1s_3s_2$. [**Lemma 2.4.**]{} Let $x\in W$ and $t_1\cdots t_m\cdots t_n$ ($1<m<n$) be a reduced expression of an element in $W$. Assume that (1) $xt_2\cdots t_{m-1}t_{m+1}\cdots t_{n-1}$ has length $l(x)+n-3$, (2) $xt_1\le x$, (3) $xt_2\cdots t_{m-1}t_m\le xt_2\cdots t_{m-1}$, and (4) $xt_2\cdots t_{m-1}t_{m+1}\cdots t_{n-1}t_n\le xt_2\cdots t_{m-1}t_{m+1}\cdots t_{n-1}$. Further, assume that $t_1t_2\cdots t_m$ (resp. $t_m\cdots t_n$) is in a parabolic subgroup $P$ (resp. $Q$) of $W$ with rank 2. Then $P=Q$ is finite and $n=m+1$. In particular, $t_1\cdots t_n$ is in a finite parabolic subgroup of $W$ generated by two simple reflections. Proof. Let $t_m=s$ and $t_{m-1}=r$. Then $R(xt_2\cdots t_{m-1})$ contains $r,s$. Since the graph of $W$ is complete, any parabolic subgroup of $W$ generated by more than two simple reflections is infinite, so by Lemma 1.2 we know that $R(xt_2\cdots t_{m-1})$ is exactly $\{r,s\}$ and $P=<r,s>$ (the subgroup of $W$ generated by $r,s$) is finite. Assume that $Q$ is generated by $s,t$. Clearly $t_{m+1}\ne r,s$, so $t_{m+1}=t$. Let $xt_2\cdots t_{m-1}=yt_m$. Then $l(yt_m)=l(y)+1$ and $R(y)$ does not contain $s$. We must have $t\in R(y)$. Otherwise, $R(y)\cap\{s,t\}$ is empty and $xt_2\cdots t_{m-1}t_{m+1}\cdots t_{n}=yt_mt_{m+1}\cdots t_n$ has length $l(x)+n-2$.
It contradicts the assumption $xt_2\cdots t_{m-1}t_{m+1}\cdots t_{n-1}t_{n}\le xt_2\cdots t_{m-1}t_{m+1}\cdots t_{n-1}.$ Therefore $xt_2\cdots t_{m-1}=yt_m=y_1ts=y_2srs$ has length $l(y_1)+2=l(y_2)+3$. So $y_1t=y_2sr$ has length $l(y_1)+1=l(y_2)+2$. By Lemma 2.2 we must have $t=r$ and then $n=m+1$. The lemma is proved. [**Lemma 2.5.**]{} Let $x,w,y$ be elements in $W$. Assume that $w$ is in a parabolic subgroup generated by two simple reflections $r,s\in S$, $l(w)\ge 3$ and $r,s$ are not in $R(x)\cup L(y)$. Then $l(xwy)=l(x)+l(w)+l(y).$ Proof. By Lemma 2.2, $R(xw)=R(w)$. Let $t_1\cdots t_n$ be a reduced expression of $y$. Assume that $l(xwt_1\cdots t_{m-1})=l(x)+l(w)+m-1$, $xwt_1\cdots t_{m-1}t_m$ $\le xwt_1\cdots t_{m-1}$, and $m\le n$ is minimal for all reduced expressions of $y$. Then $m\ge 2$. By Lemma 2.3, there exists $t_0\in R(w)$ such that $t_0t_1\cdots t_{m-1}$ is in the finite parabolic subgroup of $W$ generated by $t_0,t_1$. Since $l(w)\ge 3$ and $r,s$ are not in $R(x)\cup L(y)$, by Lemma 2.2, $R(xwt_0)$ does not contain $t_1\in L(y)$. Thus $t_0t_1\cdots t_{m-1}$ is the longest element of the parabolic subgroup of $W$ generated by $t_0,t_1$ and $R(xwt_1\cdots t_{m-1})=\{t_0,t_1\}$. So $t_0t_1\cdots t_{m-1}=t_1\cdots t_m$. Thus $t_0\in L(y)$. This contradicts $r,s\not\in R(x)\cup L(y)$. The lemma is proved. [**Corollary 2.6.**]{} Let $r,s$ be simple reflections and $x,y,z\in W$ such that $x=yrs$ with $l(x)=l(y)+2$, $R(x)=\{s\}$, $R(yr)=\{r\}$, $r,s\not\in L(z)$. Then $l(xz)=l(x)+l(z)$. Proof. It follows from the proof of the above lemma. [**Lemma 2.7.**]{} Assume that $w,u$ are elements of a finite parabolic subgroup $P$ of $W$ generated by two simple reflections. Then deg$f_{w,u,v}\le l(v)$ for $v\in P$ and $f_{w,u,v}=0$ if $v\not\in P$. (Recall that $f_{w,u,v}$ is a polynomial in $q^{\frac12}-q^{-\frac12}$.) Proof. The first assertion follows from 1.1(a) and the second assertion is clear.
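Fact 1.1(a) and the degree bound of Lemma 2.7 are easy to check by machine in a small example. The sketch below is our addition, not part of the paper: the finite dihedral parabolic is taken to be of type $A_2$, realized as the symmetric group $S_3$, all names are ours, and the multiplication rule is the one from 1.1, $\tilde T_w\tilde T_s=\tilde T_{ws}$ if $l(ws)>l(w)$ and $\tilde T_{ws}+\xi\tilde T_w$ otherwise, with $\xi=q^{\frac12}-q^{-\frac12}$.

```python
# Sanity check (ours, not part of the paper): structure constants f_{w,u,v}
# of the Hecke algebra of a dihedral parabolic of type A2, realized as the
# symmetric group S_3 with simple reflections s1 = (0 1), s2 = (1 2).
# Polynomials in xi = q^{1/2} - q^{-1/2} are stored as coefficient tuples.

def compose(p, q):
    # the element of the word "pq": (p*q)(i) = p(q(i))
    return tuple(p[i] for i in q)

S1, S2 = (1, 0, 2), (0, 2, 1)
E = (0, 1, 2)
W = [E, S1, S2, compose(S1, S2), compose(S2, S1), compose(S1, compose(S2, S1))]
W0 = compose(S1, compose(S2, S1))          # longest element, length 3

def length(w):                             # Coxeter length = inversion count
    return sum(w[i] > w[j] for i in range(3) for j in range(i + 1, 3))

def inverse(w):
    v = [0, 0, 0]
    for i, x in enumerate(w):
        v[x] = i
    return tuple(v)

def padd(a, b):                            # add coefficient tuples
    n = max(len(a), len(b))
    return tuple((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                 for i in range(n))

def times_gen(h, s):
    # \tilde T_w \tilde T_s = \tilde T_{ws}                 if l(ws) > l(w),
    #                       = \tilde T_{ws} + xi \tilde T_w  otherwise
    out = {}
    for w, c in h.items():
        ws = compose(w, s)
        out[ws] = padd(out.get(ws, ()), c)
        if length(ws) < length(w):
            out[w] = padd(out.get(w, ()), (0,) + tuple(c))
    return out

def reduced_word(w):
    word = []
    while w != E:                          # peel right descents
        s = S1 if length(compose(w, S1)) < length(w) else S2
        word.append(s)
        w = compose(w, s)
    word.reverse()
    return word

def f_consts(x, y):
    # all f_{x,y,z} at once, as a dict z -> coefficient tuple in xi
    h = {x: (1,)}
    for s in reduced_word(y):
        h = times_gen(h, s)
    return h

def deg(c):
    nz = [i for i, a in enumerate(c) if a]
    return max(nz) if nz else -1

F = {(x, y): f_consts(x, y) for x in W for y in W}
```

Running it, $f_{w_0,w_0,w_0}$ comes out as $\xi+\xi^3$, of degree $3=l(w_0)$, while every $f_{w,u,v}$ has non-negative coefficients, satisfies deg$f_{w,u,v}\le$ min$\{l(w),l(u),l(v)\}$, and obeys the symmetry $f_{w,u,v}=f_{u,v^{-1},w^{-1}}$ of 1.1(a).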
[**Lemma 2.8.**]{} Let $r,s,t$ be simple reflections and $x,y,z\in W$. Assume that $x=yrs$, $R(yr)=\{r,t\}$, $R(x)=\{s\}$, $R(y)=\{t\}$. If $r,s\not\in L(z)$, then deg $f_{x,z,w}\le 1$ for all $w$ in $W$. Proof. If $l(xz)=l(x)+l(z)$, there is nothing to prove. Assume that $l(xz)<l(x)+l(z)$. Let $t_1t_2\cdots t_n$ be a reduced expression of $z$. Then we can find a positive integer $m$ such that $xt_1\cdots t_{m-1}t_m\le xt_1\cdots t_{m-1}$. By the assumptions of the lemma, clearly we have $m\ge 2$. We choose the reduced expression of $z$ so that $m$ is minimal among all possibilities. According to Lemma 2.3, $st_1\cdots t_{m-1}$ is in the parabolic subgroup of $W$ generated by $s,t_1$. We claim that $t_1= t$. Otherwise, since $r,s\not\in L(z)$, Lemma 2.2 implies that the element $st_1\cdots t_{m-1}=t_1\cdots t_m$ is the longest element of the subgroup $<s,t_1>$ of $W$ generated by $s,t_1$. This contradicts that $s\not\in L(z)$. Let $y_1\in W$ be such that $x=y_1trts$. Then $l(x)=l(y_1)+4$. By Lemma 2.2 we know that $R(y_1tr)=\{r\}$. So $tst_1\cdots t_{m-1}$ is the longest element $w_{st}$ in $<s,t>$. Then (recall that $\xi=q^{\frac12}-q^{-\frac12}$) $$\tt_x\tt_z=\xi\tt_{y_1trw_{st}}\tt_{t_{m+1}\cdots t_n}+\tt_{y_1trw_{st}t_m}\tt_{t_{m+1}\cdots t_n}.$$ We must have $s,t\not\in L(t_{m+1}\cdots t_n)$. Otherwise $t_1\cdots t_{m+1}$ is the longest element of $<s,t>$ and $s\in L(z)$, which contradicts our assumptions. Since $l(w_{st})\ge 3$, by Lemma 2.5, we have $$\tt_{y_1trw_{st}}\tt_{t_{m+1}\cdots t_n}=\tt_{y_1trw_{st}t_{m+1}\cdots t_n}.$$ If $l(w_{st}t_m)\ge 3,$ using Lemma 2.5, we get $$\tt_{y_1trw_{st}t_m}\tt_{t_{m+1}\cdots t_n}=\tt_{y_1trw_{st}t_mt_{m+1}\cdots t_n}.$$ We are done in this case. Assume now $l(w_{st}t_m)=2$; then $w_{st}=tst$, $m=2$, $t_1=t$, $t_2=s$. So $y_1trw_{st}t_m=y_1trst.$ If the longest element $w_{rt}$ of $<r,t>$ has length at least 4, then $w_{rt}t$ has length at least 3.
Since $s,t\not\in L(t_3\cdots t_n)$ and $r,s\not\in L(t_1\cdots t_n)$ we see that $r,t\not\in L(stt_3\cdots t_n)$. Write $y_1trw_{st}t_m=y_2w_{rt}tst$; then $l(y_2w_{rt}tst)=l(y_2)+l(w_{rt}t)+2$. By Lemma 2.5, we know that $$\tt_{y_1trw_{st}t_m}\tt_{t_{m+1}\cdots t_n}=\tt_{y_1trw_{st}t_mt_{m+1}\cdots t_n}.$$ We are done in this case. Assume now $w_{st}=sts$ and $w_{rt}=rtr$. Let $u=y_1trw_{st}t_m=y_1trst.$ Assume that $s_1\cdots s_{n-2}$ is a reduced expression of $t_3\cdots t_n$ and $us_1\cdots s_{i-1}s_i\le us_1\cdots s_{i-1}$ with $i$ minimal among all possibilities. Note that $R(u)=\{t\}$. By Lemma 2.3, $ts_1\cdots s_{i-1}$ is in a parabolic subgroup of $W$ of rank 2. Since $s,t\not\in L(t_3\cdots t_n)$, we have $i\ge 2$. Also we have $s_1=r$ and $R(ut)=\{r,s\}$. Otherwise, Lemma 2.2 implies that $ts_1\cdots s_{i-1}=s_1\cdots s_i$ is the longest element in $<t,s_1>$, so $t\in L(t_3\cdots t_n)$, a contradiction. Now we have $R(u)=\{t\}$, $R(ut)=\{r,s\}$, $R(uts)=\{r\}$ and $s,t\not\in L(t_3\cdots t_n)$. So we can use induction on $l(z)$ to see that the lemma is true in this case. The lemma is proved. [**2.9.**]{} Now we prove Theorem 2.1. Let $x,y\in W$ and consider $$\tilde T_x\tilde T_y=\sum_{z\in W}f_{x,y,z}\tilde T_z.$$ We will prove that deg$f_{x,y,z}\le a_0$, here $a_0$ is the maximal number among the lengths of the longest elements of all finite parabolic subgroups of $W$. Let $t_1t_2\cdots t_k$ be a reduced expression of $y$. We may assume that $xt_1\le x$, otherwise we replace $x$ by $xt_1$. We may further assume that $xs_1\le x$ for any reduced expression $s_1\cdots s_m$ of $y$. We use induction on $k$. For $k=0,1$, the result is clear. Now assume that $k>1$. If $xt_2\cdots t_k$ has length $l(x)+k-1$, then we have $$\tilde T_x\tilde T_y=\xi \tilde T_{xt_2\cdots t_{k}}+\tt_{xt_1}\tt_{t_1y},$$ where $\xi=q^{\frac12}-q^{-\frac12}$. Using the induction hypothesis we see that the theorem is true in this case.
Now assume that $xt_2\cdots t_{m-1}t_m\le xt_2\cdots t_{m-1}$ for some $2\le m\le k$. We may require that $m'\ge m>1$ if $s_1s_2\cdots s_k$ is another reduced expression of $y$ and $xs_2\cdots s_{m'-1}s_{m'}\le xs_2\cdots s_{m'-1}$. By Lemma 2.3, $t_1\cdots t_m$ is in the parabolic subgroup $P$ generated by $t_1,t_2$. Let $x_1$ (resp. $y_1$) be the element in the coset $xP$ (resp. $Py$) with minimal length. Let $w,u\in P$ be such that $x=x_1w$ and $y=uy_1$. Then we have $$\tt_x\tt_y=\sum_{v\in P}f_{w,u,v}\tt_{x_1v}\tt_{y_1}.$$ By Lemma 2.7, deg$f_{w,u,v}\le l(v)$ and $v\in P$ if $f_{w,u,v}\ne 0$. If $l(v)\ge 3$, by Lemma 2.5, $l(x_1vy_1)=l(x_1v)+l(y_1)$. Hence $\tt_{x_1v}\tt_{y_1}=\tt_{x_1vy_1}$. If $l(v)=2$, using Corollary 2.6 and Lemma 2.8 we see that deg$f_{x_1v,y_1,z}\le 1$ for any $z$. If $l(v)=0$, by the induction hypothesis, we see that the degrees of $f_{x_1,y_1,z}$ are not greater than $a_0$ for any $z\in W$. Now consider the case $l(v)=1$. In this case $v$ is a simple reflection. We have $l(x_1v)=l(x_1)+1$ and $l(vy_1)=l(y_1)+1<l(y)$ since $m\ge 2$. Applying the induction hypothesis to the equality $$\tt_{x_1v}\tt_{vy_1}=\xi\tt_{x_1v}\tt_{y_1}+\tt_{x_1}\tt_{y_1},$$ we see that deg$f_{x_1v,y_1,z}\le a_0-1$ for any $z\in W$. Theorem 2.1 is proved. [**Corollary 2.10.**]{} Keep the assumption of Theorem 2.1. Let $a_0$ be the maximal number among the lengths of the longest elements of all finite parabolic subgroups of $W$. Then $a(w)=a_0$ if and only if $w=xuy$ for some $x,y\in W$ and $u$ the longest element of a finite parabolic subgroup with $l(w)=l(x)+l(y)+l(u)$ and $l(u)=a_0$. This is clear from the proof of Theorem 2.1. Some consequences I - the lowest two-sided cell =============================================== In this section $(W,S)$ is a Coxeter group with complete graph and the cardinalities of finite parabolic subgroups of $W$ have a common upper bound. We discuss the lowest two-sided cell of $W$.
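Before turning to the lowest two-sided cell, the bound of Theorem 2.1 can be probed numerically. The sketch below is our addition, not part of the paper: it works in the affine Weyl group of type $\tilde A_2$, where the three simple reflections satisfy $m(s_i,s_j)=3$ for all $i\ne j$, so the Coxeter graph is complete and $a_0=3$. Elements are encoded as the integer matrices of the geometric representation, lengths are computed by breadth-first search in the Cayley graph, and all structure constants $f_{x,y,z}$ with $l(x),l(y)\le 4$ are computed; the cut-offs 4 and MAXLEN are our choices.

```python
# Brute-force probe (ours) of the bound deg f_{x,y,z} <= a_0 = 3 from the
# proof of Theorem 2.1, in the affine Weyl group of type ~A2.

def gen_matrix(i):
    # geometric representation: s_i(a_i) = -a_i, s_i(a_j) = a_j + a_i
    # (the off-diagonal entry is 2cos(pi/3) = 1 since every m(s_i,s_j) = 3)
    M = [[int(r == c) for c in range(3)] for r in range(3)]
    M[i][i] = -1
    for j in range(3):
        if j != i:
            M[i][j] = 1
    return tuple(map(tuple, M))

GENS = [gen_matrix(i) for i in range(3)]
E = tuple(tuple(int(r == c) for c in range(3)) for r in range(3))

def matmul(A, B):
    return tuple(tuple(sum(A[r][k] * B[k][c] for k in range(3))
                       for c in range(3)) for r in range(3))

# breadth-first search: graph distance from E in the Cayley graph is l(w)
MAXLEN = 9                      # covers the supports of the products below
LEN = {E: 0}
frontier = [E]
for d in range(MAXLEN):
    nxt = []
    for w in frontier:
        for M in GENS:
            ws = matmul(w, M)
            if ws not in LEN:
                LEN[ws] = d + 1
                nxt.append(ws)
    frontier = nxt

def padd(a, b):                 # add coefficient tuples (powers of xi)
    n = max(len(a), len(b))
    return tuple((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                 for i in range(n))

def f_consts(x, y):
    # multiply \tilde T_x by the letters of a reduced word of y
    word, w = [], y
    while w != E:               # peel right descents of y
        s = next(M for M in GENS if LEN[matmul(w, M)] < LEN[w])
        word.append(s)
        w = matmul(w, s)
    h = {x: (1,)}
    for s in reversed(word):
        out = {}
        for v, c in h.items():
            vs = matmul(v, s)
            out[vs] = padd(out.get(vs, ()), c)
            if LEN[vs] < LEN[v]:           # picks up a factor xi
                out[v] = padd(out.get(v, ()), (0,) + tuple(c))
        h = out
    return h

small = [w for w, l in LEN.items() if l <= 4]
max_deg, all_nonneg = 0, True
for x in small:
    for y in small:
        for c in f_consts(x, y).values():
            all_nonneg = all_nonneg and all(a >= 0 for a in c)
            nz = [i for i, a in enumerate(c) if a]
            if nz:
                max_deg = max(max_deg, max(nz))
```

The maximal degree comes out as $3=a_0$, attained already by $f_{w_0,w_0,w_0}$ for $w_0=s_is_js_i$ the longest element of a rank-2 parabolic, and never exceeded, consistent with Theorem 2.1 and Corollary 2.10.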
Let $a_0$ be the maximal value of the lengths of the longest elements of finite parabolic subgroups of $W$ and let $\Lambda$ be the set consisting of all the longest elements of finite parabolic subgroups of the maximal cardinality (which is $2a_0$). Let $D'$, $D$ and $\Gamma_x$ ($x\in D')$ be as in subsection 1.8. [**Proposition 3.1.**]{} Keep the assumptions and notations above. We have \(a) The lowest two-sided cell of $W$ coincides with the set $\{w\in W\,|\, a(w)=a_0\}$. So for any $x$ in the lowest two-sided cell, there exist $y,z\in W$ and $u\in \Lambda$ such that $x=zuy$ and $l(x)=l(z)+l(u)+l(y)$. \(b) The map $x\to \Gamma_x$ defines a bijection between the set $D'$ and the set of left cells in the lowest two-sided cell $c_0$. \(c) The set $D=\{y^{-1}uy\,|\, u\in\Lambda,\ uy\in D',\ l(uy)=l(u)+l(y) \}$ is the set of distinguished involutions in the lowest two-sided cell. \(d) Let $z,z'\in W$ and $uy\in D'$ be such that $u\in \Lambda$, $l(zuy)=l(z)+l(u)+l(y)$ and $l(z'uy)=l(z')+l(u)+l(y).$ Then $C_{zu}C_{uy}=h_{u,u,u}C_{zuy}$ and $\mu(z'uy,zuy)=\mu(z'u,zu)$. Proof. Let $$\Omega=\{zuy\in W\,|\, y,z\in W,\ u\in\Lambda,\text{ and } l(zuy)=l(z)+l(u)+l(y)\}.$$ We claim that $\Omega$ is the lowest two-sided cell. Since $\Lambda\subset\Omega$, it suffices to prove that $\Omega$ is a two-sided cell. Let $x\in \Omega$. Then there exist $y,z\in W$ and $u\in \Lambda$ such that $x=zuy$ and $l(x)=l(z)+l(u)+l(y)$. It is no harm to assume that $uy$ is in $D'$. By computing $\tt_{zu}\tt_{uy}$ we see easily that $\gamma_{zu,uy,w}\ne 0$ if and only if $w=x$, and $\gamma_{zu,uy,x}=1.$ This implies that $C_{zu}C_{uy}=h_{u,u,u}C_{zuy}$. The first part of (d) is proved. Let $w\in W$ and $w\elr x$. Then there exist $w=w_1,w_2,...,w_n=x$ such that $\mu(w_i,w_{i+1})\ne 0$ or $\mu(w_{i+1},w_i)\ne 0$, and $L(w_i)\not\subset L(w_{i+1})$ or $R(w_i)\not\subset R(w_{i+1})$ for all $i=1,2,...,n-1$. We show that all $w_i$ are in $\Omega$.
It is no harm to assume that $n=2$ and $L(w)\not\subset L(x)$. Let $s$ be the simple reflection in $L(w)-L(x)$. Then $C_{w}$ appears in $C_sC_x$ with coefficient $\mu(w,x)$. Using the identity $C_{zu}C_{uy}=h_{u,u,u}C_{zuy}$, we see that there exists $z_1\in W$ such that $l(z_1u)=l(z_1)+l(u)$, $C_{z_1u}$ appears in $C_sC_{zu}$ and $\gamma_{z_1u,uy,w}\ne 0$. We must have $w=z_1uy$ and $\mu(z_1u,zu)=\mu(w,x)$ or $\mu(zu,z_1u)=\mu(x,w)$. So $w\in \Omega$. Part (a) is proved. Also we showed that $\Gamma_{uy}=\{zuy\,|\,z\in W,\ l(zuy)=l(z)+l(u)+l(y)\}$ is a left cell of $W$. Let $u_1y_1\in D'$, $l(u_1y_1)=l(u_1)+l(y_1)$, where $u_1$ has length $a_0$ and is the longest element of a finite parabolic subgroup of $W$. If $u_1y_1\in\Gamma_{uy}$, then $u_1y_1=zuy$ for some $z\in W$ and $l(zuy)=l(z)+l(u)+l(y)$. By the definition of $D'$ we see that $z=e$. So $u_1y_1=uy$. Part (b) is proved. Comparing the coefficients of $\tilde T_e$ in both sides of the equality $C_{zu}C_{uy}=h_{u,u,u}C_{zuy}$, we see that $l(zuy)-2$deg$P_{e,zuy}-a(u)\ge 0$. Moreover, $l(zuy)-2$deg$P_{e,zuy}-a(u)= 0$ if and only if $z=y^{-1}$. In this case, the coefficient of the term $q^{l(y)}$ is 1. Part (c) is proved. Now we prove the second part of (d). Let $E_y,F_y\in H$ be such that $C_uF_y=C_{uy}$ and $E_yC_u=C_{y^{-1}u}$. Then $C_{zu}F_y=C_{zuy}$ and $E_yC_{uz^{-1}}=C_{(zuy)^{-1}}$. Thus $h_{uz^{-1},z'u,w}=h_{y^{-1}uz^{-1}, z'uy,y^{-1}wy}$. Assume that $z'uy< zuy$. Comparing the coefficients of $\tilde T_e$ in both sides of the equality $C_{(z'uy)^{-1}}C_{zuy}=\sum_{w\in W}h_{(z'uy)^{-1},zuy,w}C_{w}$, as in \[SX, 2.2\], we see that the second part of (d) is true. The proposition is proved. [**Proposition 3.2.**]{} Let $(W,S)$ be a Coxeter group with complete Coxeter graph such that the cardinalities of finite parabolic subgroups of $W$ have a common upper bound. Assume that the cardinality of $S$ is greater than 2 and the order of $st$ is finite for some simple reflections $s,t$ in $S$.
Then the number of left cells in the lowest two-sided cell of $W$ is finite if and only if $W$ is an affine Weyl group of type $\tilde A_2$. Proof. The if part is clear (see \[L2\]). Now assume that $W$ is not of type $\tilde A_2$. Let $s,t$ be simple reflections such that the order of $st$ is finite and maximal among all such pairs. Let $w$ be the longest element of the subgroup $<s,t>$ of $W$ generated by $s,t$. Then $w$ is in the lowest two-sided cell of $W$. If $w$ has length at least 4, then $w(rst)^k$ is in $D'$ (see 1.8 for the definition of $D'$) for any positive integer $k$, here $r$ is a simple reflection in $S-\{s,t\}$. By Proposition 3.1 (b), we know that the number of left cells in the lowest two-sided cell of $W$ is infinite. If $w$ has length 3, then either $|S|\ge 4$ or one of $rs$, $rt$ has infinite order for $r\in S-\{s,t\}$, since $W$ is not of type $\tilde A_2$ and the length of $w$ is maximal among the longest elements of finite parabolic subgroups of $W$. In the first case, we can find two different simple reflections $r,v$ in $S-\{s,t\}$. Then $w(rvst)^k$ is in $D'$ for any positive integer $k$. In the second case, let $r\in S$ be different from $s,t$. We may assume that $rs$ has infinite order. Then $w(rs)^k$ is in $D'$ for any positive integer $k$. By Proposition 3.1 (b), in both cases the number of left cells in the lowest two-sided cell of $W$ is infinite. The proposition is proved. Some consequences II - other results ==================================== In this section $(W,S)$ is a Coxeter group such that no two simple reflections commute, unless otherwise specified. We shall give some other consequences of Theorem 2.1. In \[L1\], Lusztig showed that the elements in $W$ with unique reduced expressions form a two-sided cell of $W$. If the order of $st$ is $\infty$ for any two different simple reflections $s,t$ of the Coxeter group $(W,S)$, then $W$ has only two two-sided cells: $\{e\},\ W-\{e\}$, see \[L3\]. 
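To make the infinite family in the proof of Proposition 3.2 concrete, here is a small illustration (our own, not from the original text) in a hypothetical rank-3 system; the length computation assumes, as in the second case of the proof, that the displayed words are reduced:

```latex
% Hypothetical Coxeter system: S = {r,s,t} with m(s,t) = 3 and
% m(r,s) = m(r,t) = infinity. The finite rank-2 parabolic subgroups are
% conjugates of <s,t>, so a_0 = 3 and w = sts = tst is the longest
% element of <s,t>.
\[
  w(rs)^k = sts\,\underbrace{rsrs\cdots rs}_{2k\ \text{letters}},
  \qquad
  l\bigl(w(rs)^k\bigr) = 3 + 2k, \qquad k = 1, 2, \ldots
\]
% Each w(rs)^k lies in D', so by Proposition 3.1(b) the lowest two-sided
% cell contains infinitely many left cells, consistent with W not being
% of type \tilde A_2.
```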
[**Proposition 4.1.**]{} Let $m\ge 3$ be a positive integer. Assume that the order of $st$ is either $m$ or $\infty$ for any two different simple reflections $s,t$ of the Coxeter group $(W,S)$ and that the order of some $st$ is $m$. Then $W$ has only three two-sided cells. Proof. If $w\in W$ has different reduced expressions, then there exist simple reflections $s,t$ in $W$ and $x,y\in W$ such that $st$ has order $m$ and $w=xuy$, $l(w)=l(x)+l(u)+l(y)$, where $u$ is the longest element in the subgroup of $W$ generated by $s,t$. By Theorem 2.1, $m$ is the maximal value of the $a$-function on $W$. According to Proposition 3.1, $w$ is in the lowest two-sided cell of $W$. Therefore, $W$ has only the following three two-sided cells: $\{e\},$ $\{$elements in $W$ with unique reduced expression$\}$, $\{$elements in $W$ having different reduced expressions$\}$. The proposition is proved. [**4.2.**]{} Assume that no two simple reflections in $W$ commute. Let $O$ be the set of isomorphism classes of finite parabolic subgroups of $W$ with rank 2. It is likely that the number of the two-sided cells of $W$ is $|O|+2$, here $|O|$ denotes the cardinality of $O$. Proposition 4.1 supports this conjecture. Below we will see that when $W$ is crystallographic, the conjecture is also true. We first establish some lemmas. [**Lemma 4.3.**]{} Let $P$ and $Q$ be two different finite parabolic subgroups of $W$ with rank 2. Denote their longest elements by $w$ and $u$ respectively. Assume $l(w)\le l(u)$. Let $x,y\in Q$ be such that $l(wx)=l(w)+l(x)$, $l(wx)=l(wxu)-l(u)$, $l(ywx)=l(y)+l(wx)$ and $l(y)=l(wx)-l(u)-1$. Then $\mu(u,ywx)=1$. Proof. The existence of $x$ is clear. Since $l(w)\ge 3$, by Lemma 2.5, $y$ exists. Using the formulas (2.2.c) and (2.3.g) in \[KL\] we can prove this lemma by a direct computation. [**Corollary 4.4.**]{} Let $P$ and $Q$ be two finite parabolic subgroups of $W$ with rank 2. Denote their longest elements by $w$ and $u$ respectively. 
Then $u\lrl w$ if $l(u)\ge l(w)$. In particular, $w$ and $u$ are in the same two-sided cell if $l(w)=l(u)$. Proof. Let $y,x$ be as in Lemma 4.3. Since $l(w)\le l(u)$, we have $l(y)<l(x)$ and $L(ywx)$ is a proper subset of $L(u)$. Lemma 4.3 then implies that $u\ll ywx$. Clearly, $ywx\lrl w$. So $u\lrl w$. The corollary is proved. [**Proposition 4.5.**]{} Let $(W,S)$ be a crystallographic Coxeter group with complete Coxeter graph and let $O$ be the set of isomorphism classes of finite parabolic subgroups of $W$ with rank 2. Then the number of the two-sided cells of $W$ is $|O|+2$. Proof. By Theorem 2.1, the maximal value of the $a$-function on $W$ is at most 6. For $i=3,4,6$, let $W_i$ be the set of elements $x$ of $W$ with the properties below: (1) $x=zuy$ for some $z,y\in W$, where $u$ has length $i$ and is the longest element of a parabolic subgroup of $W$; (2) $l(x)=l(z)+l(u)+l(y)$. We have two obvious two-sided cells: $\{e\}$ and $\{$elements in $W$ with unique reduced expression$\}$. We claim that $W_6$, $W_4-W_4\cap W_6$, $W_3-W_3\cap(W_4\cup W_6)$ are two-sided cells whenever they are not empty. First assume that $W_6$ is not empty. According to Proposition 3.1, $W_6$ is the lowest two-sided cell of $W$. We claim that $W_4-W_4\cap W_6$ is a two-sided cell of $W$ if it is not empty. Clearly, $a(x)\ge 4$ for any $x\in W_4$. From the argument for Theorem 2.1 we see easily that $a(x)\le 4$ if $x$ is not in $W_6$. Since $W$ is crystallographic, by \[L3, Corollary 1.9\], for $x\in W_4-W_4\cap W_6$ we have $x\elr u$, here $x=zuy$ is as in (1). Using Corollary 4.4 we know that $W_4-W_4\cap W_6$ is contained in a two-sided cell. Let $w$ be in $W_3$ but not in $W_4\cup W_6$. According to \[L3, Proposition 1.4\], $\gamma_{w^{-1},w,d}=1$, here $d$ is the distinguished involution in the left cell containing $w$. 
Using the positivity of the Kazhdan-Lusztig polynomials of $W$ and of $h_{w,w',w''}$ for $w,w',w''$ in $W$, we see that the argument for Theorem 2.1 implies that $v\in W_6$ if $\deg h_{w^{-1},w,v}\ge 4$. Since $a(w)=a(d)\ge 3$ and $d$ is not in $W_6$, we must have $a(w)=3$ and $w$ is not in the two-sided cell containing $x$. Therefore, $W_4-W_4\cap W_6$ is a two-sided cell if it is not empty. We also showed that $W_3-W_3\cap(W_4\cup W_6)$ is a two-sided cell if it is not empty. If $W_6$ is empty, the discussion is similar and simpler. The proposition is proved. Some comments ============= In this section we propose two questions. Let $(W,S)$ be an arbitrary Coxeter group. In \[L2\], it is shown that $a(w)\le l(w)$ for any $w$ in a Weyl group. This result was extended to arbitrary crystallographic Coxeter groups by Springer; see \[L3\] for a proof. It is natural to suggest that $a(w)\le l(w)$ for $w$ in an arbitrary Coxeter group. Assume that $(W,S)$ is connected (i.e. its Coxeter graph is connected). Let $P$ and $Q$ be two finite parabolic subgroups of $W$. It is likely that the longest elements of $P$ and $Q$ are in the same two-sided cell of $W$ if $P$ and $Q$ are isomorphic Coxeter groups. Bédard, R.: [*Left V-cells for hyperbolic Coxeter groups,*]{} Comm. Alg. **17** (1989), no. 12, 2971-2997. Belolipetsky, M.: [*Cells and representations of right-angled Coxeter groups,*]{} Selecta Math. (N.S.) **10** (2004), no. 3, 325-339. Kazhdan, D., Lusztig, G.: [*Representations of Coxeter groups and Hecke algebras,*]{} Invent. Math. **53** (1979), 165-184. Lusztig, G.: [*Some examples of square integrable representations of semisimple $p$-adic groups,*]{} Trans. Amer. Math. Soc. **277** (1983), no. 2, 623-653. Lusztig, G.: [*Cells in affine Weyl groups,*]{} in “Algebraic groups and related topics", Advanced Studies in Pure Math., vol. **6**, Kinokuniya and North Holland, 1985, pp. 255-287. Lusztig, G.: [*Cells in affine Weyl groups, II,*]{} J. Alg. **109** (1987), 536-548. 
Scott, L. and Xi, N.: [*Some non-trivial Kazhdan-Lusztig coefficients of an affine Weyl group of type $\tilde A_n$,*]{} Science China Mathematics **53** (2010), no. 8, 1919-1930. Shi, J.: [*A two-sided cell in an affine Weyl group,*]{} J. London Math. Soc. (2) **36** (1987), no. 3, 407-420. Shi, J.: [*A two-sided cell in an affine Weyl group, II,*]{} J. London Math. Soc. (2) **37** (1988), no. 2, 253-264. Xi, N.: [*The based ring of the lowest two-sided cell of an affine Weyl group,*]{} J. Alg. **134** (1990), 356-368. Xi, N.: [*Representations of affine Hecke algebras,*]{} Lecture Notes in Mathematics, **1587**, Springer-Verlag, Berlin, 1994. Zhou, Peipei: [*Lusztig’s $a$-function for Coxeter groups of rank 3,*]{} in preparation.
--- abstract: 'We study a force-based hybrid method that couples atomistic models with nonlinear Cauchy-Born elasticity models. We show that the proposed scheme converges quadratically to the solution of the atomistic model, as the ratio between the lattice parameter and the characteristic length scale of the deformation tends to zero. Convergence is established for general short-ranged atomistic potentials and for simple lattices in three dimensions. The convergence analysis is based on consistency and stability. General tools for the stability analysis of multiscale atomistic-continuum coupling methods in arbitrary dimension are developed in the framework of pseudo-difference operators.' address: - | Department of Mathematics\ Courant Institute of Mathematical Sciences\ New York University\ New York, NY 10012\ email: jianfeng@cims.nyu.edu - | LSEC, Institute of Computational Mathematics and Scientific/Engineering Computing\ AMSS, Chinese Academy of Sciences\ No. 55, Zhong-Guan-Cun East Road\ Beijing 100190, China\ email: mpb@lsec.cc.ac.cn author: - Jianfeng Lu - Pingbing Ming bibliography: - 'quasicont.bib' date: 'February 8, 2011' title: 'Convergence of a force-based hybrid method for atomistic and continuum models in three dimension' --- [^1] Introduction ============ Multiscale methods for the mechanical deformation of materials have been investigated intensely in recent years. The main spirit of these methods is to use atomistic models in regions containing defects, and continuum models in regions where the material is smoothly deformed. We refer to the recent review [@MillerTadmor:2009] for various methods and the book [@E:book] for a general discussion of multiscale modeling. There are two different ways of coupling atomistic and continuum models. One is based on energy, and the other is based on force. The energy-based method defines an energy which is a mixture of the atomistic energy and the continuum elasticity energy. 
The energy functional is then minimized to obtain the solution. The force-based method works instead at the level of force balance equations. The forces derived from the atomistic and continuum models are coupled together, and the force balance equations are solved to obtain the deformed state of the system. From a numerical analysis point of view, one of the key issues for these multiscale methods is the consistency and stability of the coupled schemes. Take one of the most successful multiscale methods, the quasicontinuum method [@TadmorOrtizPhillips:1996; @KnapOrtiz:2001], as an example: one of the main issues is the so-called ghost force problem [@ShenoyMillerTadmorPhillipsOrtiz:1999], namely the artificial nonzero forces that the atoms experience at their equilibrium state. In the language of numerical analysis, it means that the scheme lacks consistency at the interface between the atomistic and continuum regions [@ELuYang:2006]. In [@MingYang:2009], it was shown that the ghost forces may lead to a finite-size error in the gradient of the solution. The stability analysis for the coupling schemes is so far limited to one-dimensional systems, in which case a direct calculation is possible thanks to the simple one-dimensional lattice structure and pairwise interaction potential. This is no longer the case in two and three dimensions, and the extension is by no means easy. More general tools for stability analysis are needed to address multiscale hybrid methods in general. In this work, based on existing ideas in the literature, we formulate a force-based hybrid scheme for general short-ranged potentials (with some natural assumptions) in three dimensions. We focus on the numerical analysis of the hybrid method, which is a representative of a general class of multiscale methods. 
The solution of the proposed method converges quadratically to the solution of the atomistic model as the ratio between the lattice parameter and the characteristic length scale of the mechanical deformation goes to zero. To the best of our knowledge, this is the first convergence result for multiscale methods coupling atomistic and continuum models in three dimensions. The convergence result is based on the analysis of consistency and linear stability. To achieve this, we study the linearized operator in the framework of pseudo-difference operators. We obtain the stability estimate by combining the regularity estimates of pseudo-difference operators, the consistency of the linearized operator, and the stability of the continuous problem. The tools developed here should help in understanding multiscale methods in general. Before we present the formulation of the method and the main theorem in Section \[sec:formulation\], we need some preliminaries and notation. Lattice function and norms -------------------------- We will consider only Bravais lattices (see for example [@AshcroftMermin:1976]) in this work, denoted as ${\mathbb{L}}$. Let $d$ be the dimension. Let $\{a_j\} \subset {\mathbb{R}}^d$, $j=1, \cdots, d$ be basis vectors of the lattice ${\mathbb{L}}$, hence $${\mathbb{L}}= \{ x \in {\mathbb{R}}^d \mid x = \sum_j n_j a_j,\, n \in \mathbb{Z}^d \}.$$ Let $\{b_j\} \subset {\mathbb{R}}^d$, $j = 1, \cdots, d$ be the reciprocal basis vectors, given by $$a_j \cdot b_k = 2\pi \delta_{jk}.$$ The reciprocal lattice ${\mathbb{L}}^{\ast}$ is then $${\mathbb{L}}^{\ast} = \{ x \in {\mathbb{R}}^d \mid x = \sum_j n_j b_j, \, n \in \mathbb{Z}^d \}.$$ Denote the unit cells of ${\mathbb{L}}$ and ${\mathbb{L}}^{\ast}$ as $\Gamma$ and $\Gamma^{\ast}$ respectively. 
$$\begin{aligned} & \Gamma = \{ x \in {\mathbb{R}}^d \mid x = \sum_j c_j a_j,\; 0 \leq c_j < 1, \, j = 1, \cdots, d\}; \\ & \Gamma^{\ast} = \{ x \in {\mathbb{R}}^d \mid x = \sum_j c_j b_j,\; -1/2 \leq c_j < 1/2, \, j = 1, \cdots, d\}.\end{aligned}$$ For ${\varepsilon}= 1/n, \, n \in {\mathbb{Z}}_+$, we will consider the lattice system ${\varepsilon}{\mathbb{L}}$ inside the domain $\Omega = \Gamma \subset {\mathbb{R}}^d$, denoted as $ \Omega_{{\varepsilon}} = \Omega \cap {\varepsilon}{\mathbb{L}}$. Note that the lattice constant is ${\varepsilon}$, so that the number of points in $\Omega_{{\varepsilon}}$ is $1/ {\varepsilon}^d$. We will restrict to periodic boundary conditions in this work; general boundary conditions are left for future publications. For a lattice function $u$ defined on ${\varepsilon}{\mathbb{L}}$, we say it is *$\Omega_{{\varepsilon}}$-periodic* if $$u(x) = u(x'), \qquad \forall \, x, x' \in {\varepsilon}{\mathbb{L}},\, x - x' = a_j \text{ for some } j \in \{1, \cdots, d\}.$$ In particular, an $\Omega_{{\varepsilon}}$-periodic function is determined by its restriction on $\Omega_{{\varepsilon}}$. Functions defined on $\Omega_{{\varepsilon}}$ can be easily extended to $\Omega_{{\varepsilon}}$-periodic functions defined on ${\varepsilon}{\mathbb{L}}$. We also define the reciprocal lattice associated with $\Omega_{{\varepsilon}}$. Let ${\mathbb{L}}^{\ast}_{{\varepsilon}} = {\mathbb{L}}^{\ast} \cap ( \Gamma^{\ast} / {\varepsilon}) $. 
Define $K_{{\varepsilon}}$, a subset of ${\mathbb{Z}}^d$, by $$K_{{\varepsilon}} = \{ \mu \in {\mathbb{Z}}^d \mid \sum_j {\varepsilon}\mu_j b_j \in \Gamma^{\ast} \},$$ hence ${\mathbb{L}}^{\ast}_{{\varepsilon}}$ is given by $${\mathbb{L}}^{\ast}_{{\varepsilon}} = \{ x \in {\mathbb{R}}^d \mid x = \sum_j \mu_j b_j,\, \mu \in K_{{\varepsilon}}\}.$$ For $\mu \in {\mathbb{Z}}^d$, the translation operator $T^{\mu}_{{\varepsilon}}$ is defined as $$(T^{\mu}_{{\varepsilon}} u)(x) = u\Bigl(x + {\varepsilon}\sum_j \mu_j a_j\Bigr), \quad \text{for } x \in {\mathbb{R}}^d.$$ We define the forward and backward discrete gradient operators as $${D_{{\varepsilon}, s}^+} ={\varepsilon}^{-1}(T^{\mu}_{{\varepsilon}}-I)\qquad\text{and}\qquad {D_{{\varepsilon}, s}^-}={\varepsilon}^{-1}(I-T^{-\mu}_{{\varepsilon}}),$$ where $s = \sum_i \mu_i a_i$ and $I$ denotes the identity operator. It is easy to see ${D_{{\varepsilon}, -s}^+}=-{D_{{\varepsilon}, s}^-}$. We say $\alpha$ is a multi-index if $\alpha \in {\mathbb{Z}}^d$ and $\alpha \geq 0$. We will use the notation $${\lvert\alpha\rvert} = \sum_{j=1}^d \alpha_j.$$ For a multi-index $\alpha$, the difference operator $D_{{\varepsilon}}^{\alpha}$ is given by $$D^{\alpha}_{{\varepsilon}} = \prod_{j=1}^d ({D_{{\varepsilon}, a_j}^+})^{\alpha_j}.$$ When no confusion can occur, we will omit the subscript ${\varepsilon}$ in the notations $T^{\mu}_{{\varepsilon}}$, ${D_{{\varepsilon}, s}^+}$, ${D_{{\varepsilon}, s}^-}$ and $D^{\alpha}_{{\varepsilon}}$ for simplicity. We will use various norms for functions defined on the lattice $\Omega_{{\varepsilon}}$. For integer $k\geq 0$, define the difference norm $${\lVertu\rVert}_{{\varepsilon}, k}^2 = \sum_{0 \leq {\lvert\alpha\rvert} \leq k} {\varepsilon}^d \sum_{x \in \Omega_{{\varepsilon}}} {\lvert(D_{{\varepsilon}}^{\alpha} u)(x)\rvert}^2.$$ It is clear that ${\lVert\cdot\rVert}_{{\varepsilon}, k}$ is a discrete analog of the Sobolev norm associated with $H^k(\Omega)$. 
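As a quick sanity check on these conventions (our own addition, with $s=\sum_i\mu_i a_i$ as above), composing a forward and a backward difference in the same direction produces the familiar second-order difference:

```latex
% T^{\mu} u(x) = u(x + \varepsilon s) and T^{-\mu} u(x) = u(x - \varepsilon s),
% hence
\[
  D_{{\varepsilon}, s}^{+} D_{{\varepsilon}, s}^{-}\, u(x)
  = {\varepsilon}^{-2}\bigl(T^{\mu}_{{\varepsilon}} - I\bigr)\bigl(I - T^{-\mu}_{{\varepsilon}}\bigr) u(x)
  = \frac{u(x + {\varepsilon}s) - 2\,u(x) + u(x - {\varepsilon}s)}{{\varepsilon}^{2}},
\]
% the discrete analogue of the second directional derivative along s;
% taking s = a_j and summing over j gives a discrete Laplacian.
```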
Hence, we denote the corresponding spaces of lattice functions as $H_{{\varepsilon}}^k(\Omega)$ and $L_{{\varepsilon}}^2(\Omega)$ when $k = 0$. We also need the uniform norms on the lattice $\Omega_{{\varepsilon}}$, given by $$\begin{aligned} & {\lVertu\rVert}_{L^{\infty}_{{\varepsilon}}} = \max_{x \in \Omega_{{\varepsilon}}} {\lvertu(x)\rvert}, \\ & {\lVertu\rVert}_{W^{k,\infty}_{{\varepsilon}}} = \sum_{0 \leq {\lvert\alpha\rvert} \leq k} \max_{x \in \Omega_{{\varepsilon}}} {\lvert(D^{\alpha}_{{\varepsilon}} u)(x)\rvert}.\end{aligned}$$ In the above definitions, we have identified lattice function $u$ with its $\Omega_{{\varepsilon}}$-periodic extension to function defined on ${\varepsilon}{\mathbb{L}}$, and hence the differences are well-defined. These norms extend to vector-valued functions as usual. Define the discrete Fourier transform for lattice functions $f$ as $$\label{eq:discF} {\widehat{f}}(\xi) = {\varepsilon}^d (2\pi)^{-d/2} \sum_{x \in \Omega_{{\varepsilon}}} e^{-{\imath}\xi \cdot x} f(x), \quad \xi \in \mathbb{L}^{\ast}_{{\varepsilon}},$$ and its inverse as $$\label{eq:invdiscF} f(x) = (2\pi)^{d/2} \sum_{\xi \in \mathbb{L}^{\ast}_{{\varepsilon}}} e^{{\imath}x \cdot \xi} {\widehat{f}}(\xi), \quad x \in \Omega_{{\varepsilon}}.$$ We need a symbol which plays the same role for difference operators that $\Lambda^2(\xi) = 1 + \Lambda_0^2(\xi) = 1 + {\lvert\xi\rvert}^2$ plays for differential operators. 
For ${\varepsilon}> 0, \, \xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}$, let $$\Lambda_{j, {\varepsilon}}(\xi) = \frac{1}{{\varepsilon}} {\lvert e^{{\imath}{\varepsilon}\xi_j} - 1\rvert}, \qquad j = 1, \cdots, d,$$ and $$\Lambda_{{\varepsilon}}^2(\xi) = 1 + \Lambda_{0, {\varepsilon}}^2(\xi) = 1 + \sum_{j=1}^d \Lambda_{j, {\varepsilon}}^2(\xi) = 1 + \sum_{j=1}^d \frac{4}{{\varepsilon}^2} \sin^2\Bigl( \frac{{\varepsilon}\xi_j}{2} \Bigr).$$ It is not hard to check that for any $\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}$ it holds that $$\label{eq:symbolcompare} c\Lambda^2(\xi) \leq \Lambda_{{\varepsilon}}^2(\xi) \leq \Lambda^2(\xi),$$ where the positive constant $c$ depends on $\{b_j\}$. The $L^2_{{\varepsilon}}$ norm of a lattice function can be rewritten as $${\lVertf\rVert}_{{\varepsilon}, 0}^2 = (2\pi)^d \sum_{\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}} {\lvert{\widehat{f}}(\xi)\rvert}^2.$$ Indeed, using the Poisson summation formula, $$\begin{aligned} \sum_{\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}} {\lvert{\widehat{f}}(\xi)\rvert}^2 & = \sum_{\xi\in{\mathbb{L}}_{{\varepsilon}}^{\ast}} {\varepsilon}^{2d} (2\pi)^{-d} \sum_{x \in \Omega_{{\varepsilon}}} e^{{\imath}\xi\cdot x} f^{\ast}(x) \sum_{x' \in \Omega_{{\varepsilon}}} e^{-{\imath}\xi \cdot x'}f(x') \\ & = \sum_{x, x' \in \Omega_{{\varepsilon}}} {\varepsilon}^{2d} (2\pi)^{-d} f^{\ast}(x) f(x') \sum_{\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}} e^{{\imath}\xi \cdot(x - x')} \\ & = \sum_{x \in \Omega_{{\varepsilon}}} (2\pi)^{-d} {\varepsilon}^d {\lvertf(x)\rvert}^2 = (2\pi)^{-d}{\lVertf\rVert}_{{\varepsilon}, 0}^2. 
\end{aligned}$$ Moreover, notice that for $\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}$, we have $${\widehat{D_{{\varepsilon}, a_j}^+ f}}(\xi) = \frac{1}{{\varepsilon}} ( e^{{\imath}{\varepsilon}\xi \cdot a_j} - 1) {\widehat{f}}(\xi).$$ Therefore, discrete Sobolev norms have equivalent representations using the discrete Fourier transform: $$c {\lVertf\rVert}_{{\varepsilon}, k}^2 \leq \sum_{\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}} \Lambda_{{\varepsilon}}^{2k}(\xi) {\lvert{\widehat{f}}(\xi)\rvert}^2 \leq C {\lVertf\rVert}_{{\varepsilon}, k}^2,$$ with positive constants $c$ and $C$ depending on $k$ and $\{ a_j \}$. For $k>d/2$, we have the following discrete Sobolev imbedding inequality ([@Frank:1971], Proposition 6): $${\lVertf\rVert}_{L^{\infty}_{{\varepsilon}}}\le C{\lVertf\rVert}_{{\varepsilon},k},$$ where $C$ depends on $k$ and $\Omega$. Atomistic model and Cauchy-Born rule ------------------------------------ In this work, we will restrict our attention to classical empirical potentials. For atoms located at $\{ y_1, \cdots, y_N\}$, the interaction potential energy between the atoms is given by $$V(y_1,\cdots,y_N),$$ where $V$ often takes the form: $$V(y_1,\cdots,y_N)=\sum_{i,j}V_2(y_i/{\varepsilon}, y_j/{\varepsilon}) +\sum_{i,j,k}V_3(y_i/{\varepsilon},y_j/{\varepsilon},y_k/{\varepsilon})+\cdots.$$ Here we have omitted interactions of more than three atoms. Different potentials are chosen for different materials. In this paper, we will work with general atomistic models, and we will make the following assumptions on the potential functions $V$ as in [@EMing:2007]: 1. $V$ is translation invariant. 2. $V$ is invariant with respect to rigid body motion. 3. $V$ is smooth in a neighborhood of the equilibrium state. 4. $V$ has finite range and consequently we will consider only interactions that involve a finite number of atoms. The first two assumptions are general [@BornHuang:1954], while the latter two are specific technical assumptions. 
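As a concrete (and purely illustrative) instance of assumptions 1-4, consider a smoothly truncated Lennard-Jones pair term; the parameters $\epsilon_0$, $\sigma$, $r_1$, $r_c$ below are our own notation, not from the original text:

```latex
% With r = |y_i - y_j|, take V_2(r^2) = \phi(r)\,\chi(r), where
\[
  \phi(r) = 4\epsilon_0 \Bigl[ \Bigl(\frac{\sigma}{r}\Bigr)^{12}
                             - \Bigl(\frac{\sigma}{r}\Bigr)^{6} \Bigr],
\]
% and \chi is a smooth cutoff with \chi \equiv 1 for r \le r_1 and
% \chi \equiv 0 for r \ge r_c. Translation and rigid-body invariance
% (assumptions 1-2) hold because V_2 depends only on |y_i - y_j|;
% smoothness near equilibrium (assumption 3) holds since \phi is smooth
% away from r = 0; and the cutoff radius r_c enforces the finite range
% (assumption 4).
```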
In fact, for simplicity of notation and clarity of presentation, we will limit ourselves to potentials that contain only two-body and three-body terms. Actually, we will sometimes only make explicit the three-body terms in the expressions for the potential and omit the two-body terms. It is straightforward to extend our results to potentials with interactions of more atoms that satisfy the above conditions, following the discussion on the three-body terms. By invariance with respect to rigid body motion, the potential function $V$ is a function of interatomic distances and angles [@Keat:1965]. Therefore, we may write $$\begin{aligned} V_2(y_i,y_j)&=V_2{\left(\,{\lverty_i-y_j\rvert}^2\,\right)},\\ V_3(y_i,y_j,y_k)& =V_3{\left(\,{\lverty_i-y_j\rvert}^2, {\lverty_i-y_k\rvert}^2,{\left\langley_i-y_j,y_i-y_k\right\rangle}\,\right)},\end{aligned}$$ where ${\left\langle\cdot,\cdot\right\rangle}$ denotes the inner product over ${\mathbb{R}}^d$. We write the two-body and three-body potentials this way to make the formulas in our calculations easier to read. We assume that the atoms are located at $\Omega_{{\varepsilon}}$ in equilibrium, with $x$ denoting the equilibrium position ($x \in \Omega_{{\varepsilon}}$). Positions of the atoms under deformation will be viewed as a function defined over $\Omega_{{\varepsilon}}$, denoted as $y(x) = x + u(x)$. Hence, $u: \Omega_{{\varepsilon}} \to {\mathbb{R}}^d$ is the displacement of the atoms. We extend $u$ as an $\Omega_{{\varepsilon}}$-periodic function defined on ${\varepsilon}{\mathbb{L}}$. 
Denote the space of atom positions $y$ as $$X_{{\varepsilon}} = \{ y: {\varepsilon}{\mathbb{L}}\to {\mathbb{R}}^d \mid y = x + u,\, u\ \Omega_{{\varepsilon}}\text{-periodic},\, \sum_{x\in\Omega_{{\varepsilon}}} u(x) = 0 \}.$$ Hence, $y \in X_{{\varepsilon}}$ satisfies $$y(x) - y(x') = x - x', \qquad \forall \, x, x' \in {\varepsilon}{\mathbb{L}},\, x - x' = a_j \text{ for some } j \in \{1, \cdots, d\}.$$ The atomistic problem is formulated as follows. For given $f: \Omega_{{\varepsilon}} \to {\mathbb{R}}^d$, find $y \in X_{{\varepsilon}}$ such that $$\label {atom:min} y = \arg\min_{z \in X_{{\varepsilon}}} I_{{\mathrm{at}}}(z),$$ where $$I_{{\mathrm{at}}}(z) = \dfrac{1}{3!} {\varepsilon}^d \sum_{x\in\Omega_{{\varepsilon}}}\sum_{(s_1,s_2) \in S} V_{(s_1, s_2)}[z] - {\varepsilon}^d \sum_{x \in \Omega_{{\varepsilon}}} f(x) z(x),$$ with $$V_{(s_1, s_2)}[z] = V{\left(\,{\lvert{D_{s_1}^+}z(x)\rvert}^2, {\lvert{D_{s_2}^+}z(x)\rvert}^2,{\left\langle{D_{s_1}^+}z(x),{D_{s_2}^+}z(x)\right\rangle}\,\right)}.$$ Here $S$ is the set of all possible $(s_1, s_2)$ within the range of the potential. By our assumptions, $S$ is a finite set. Note that, as remarked above, we only make explicit the three-body terms in the potential. In $I_{{\mathrm{at}}}$, ${\varepsilon}^d$ is a normalization factor, so that $I_{{\mathrm{at}}}$ is actually the energy of the system per atom. 
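Before the general Euler-Lagrange equations, a one-dimensional sanity check (our own, for a pure pair potential $V(\lvert D^+ z\rvert^2)$, not from the original text) exhibits the structure of the resulting equations:

```latex
% d = 1, pair interaction with a single shift s: differentiating the
% potential part of I_at with respect to z(x), the two terms that contain
% z(x) (those at x' = x and x' = x - \varepsilon s) combine into
\[
  \frac{\partial}{\partial z(x)}
  \sum_{x'} V\bigl(\lvert D_{s}^{+} z(x')\rvert^{2}\bigr)
  = - D_{s}^{-}\Bigl( 2\, V'\bigl(\lvert D_{s}^{+} z\rvert^{2}\bigr)
                      \, D_{s}^{+} z \Bigr)(x),
\]
% so the force balance reads -D_s^-( 2 V'(|D_s^+ y|^2) D_s^+ y )(x) = f(x),
% the discrete analogue of the continuum equation -(W'(y'))' = f with
% stored energy W(p) = V(p^2) and stress W'(p) = 2 p V'(p^2).
```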
The Euler-Lagrange equations for the atomistic problem are then $$\label{atom:eq} {\mathcal{F}}_{{\mathrm{at}}}[y](x) = f(x),\qquad x\in\Omega_{{\varepsilon}},$$ where $$\begin{aligned} {\mathcal{F}}_{{\mathrm{at}}}[y](x) &= -\sum_{(s_1,s_2)\in S} \Bigl( {D_{s_1}^-}{\left(\,2{\partial}_1 V_{(s_1, s_2)}[y](x){D_{s_1}^+}y(x)+{\partial}_3V_{(s_1, s_2)}[y](x){D_{s_2}^+}y(x)\,\right)}\\ &\phantom{\sum_{(s_1,s_2)}}\qquad+{D_{s_2}^-}{\left(\,2{\partial}_2 V_{(s_1, s_2)}[y](x){D_{s_2}^+}y(x)+{\partial}_3V_{(s_1, s_2)}[y](x){D_{s_1}^+}y(x)\,\right)}\Bigr),\end{aligned}$$ where for $i=1,2,3$, we denote by $${\partial}_i V_{(s_1, s_2)}[y](x)=\partial_i V{\left(\,{\lvert{D_{s_1}^+}y(x)\rvert}^2,{\lvert{D_{s_2}^+}y(x)\rvert}^2, {\left\langle{D_{s_1}^+}y(x),{D_{s_2}^+}y(x)\right\rangle}\,\right)}$$ the partial derivative with respect to the $i$-th argument of $V$. To introduce the continuum Cauchy-Born (CB) elasticity problem [@BornHuang:1954; @Ericksen:1984; @Ericksen:2008], we fix some more notation. For any positive integer $k$, we denote by $W^{k,p}(\Omega;{\mathbb{R}}^d)$ the Sobolev space of mappings $y{:}\;\Omega\to{\mathbb{R}}^d$ such that $\|y\|_{W^{k,p}}<\infty$. In particular, $W_{\sharp}^{k,p}(\Omega;{\mathbb{R}}^d)$ denotes the Sobolev space of periodic functions whose distributional derivatives of order up to $k$ are in the space $L^p(\Omega)$. 
For any $p>d$ and $m\ge 0$, we define $X$ as $$X=\{y: \Omega \to {\mathbb{R}}^d \mid y = x + v,\, v\in W^{m+2,p}(\Omega;{\mathbb{R}}^d) \cap\,W_{\sharp}^{1,p}(\Omega;{\mathbb{R}}^d), \, \int_{\Omega}v=0 \}.$$ As in [@EMing:2007], we have the Cauchy-Born elasticity problem: find $y\in X$ such that $$\label {cb:min} y = \arg \min_{z\in X}I(z),$$ where the total energy functional $I$ is given by $$I(z)=\int_{\Omega} {\left(\,{W_{\mathrm{CB}}}({\nabla}v(x))-f(x)z(x)\,\right)} {{\,\mathrm{d}}x},$$ where $v(x) = z(x) - x$ and the Cauchy-Born stored energy density ${W_{\mathrm{CB}}}$ is given by $${W_{\mathrm{CB}}}(A)=\dfrac{1}{3!}\sum_{(s_1,s_2)\in S} W_{(s_1, s_2)}(A),$$ where for $A \in {\mathbb{R}}^{d\times d}$, $$W_{(s_1, s_2)}(A) = V{\left(\,{\lverts_1+s_1 A\rvert}^2,{\lverts_2+s_2A\rvert}^2,{\left\langles_1+s_1A,s_2+s_2A\right\rangle}\,\right)}.$$ The range $S$ is the same as that in the atomistic potential. We have used the deformed position $y$ instead of the more usual displacement field $u$ as the variable, in order to be parallel with the atomistic problem. The Euler-Lagrange equation for the Cauchy-Born elasticity model is then $$\label {cb:eq} {\mathcal{F}}_{{\mathrm{CB}}}[y](x)=f(x),$$ where $${\mathcal{F}}_{{\mathrm{CB}}}[y](x)=-{\operatorname{\nabla\cdot}}{\left(\,D_A{W_{\mathrm{CB}}}({\nabla}v(x))\,\right)}, \qquad v(x)=y(x)-x.$$ Here $D_A {W_{\mathrm{CB}}}(A)$ denotes differentiation of ${W_{\mathrm{CB}}}(A)$ with respect to $A$. Since we are primarily interested in the coupling between the atomistic and continuum regions, we will take the finite element discretization $\mathcal{T}_{{\varepsilon}}$ to be a triangulation of $\Omega$ with each atom site in $\Omega_{{\varepsilon}}$ as an element vertex and with element size ${\varepsilon}$. The triangulation is chosen so that it is translation invariant. 
The approximation space ${\widetilde{X}}_{{\varepsilon}}$ is defined as $${\widetilde{X}}_{{\varepsilon}}=\bigl\{ y \in W^{1,p}_{\sharp}(\Omega;{\mathbb{R}}^d) \mid y|_T\in P_1(T), \ \forall\, T\in{\mathcal{T}}_{{\varepsilon}}\bigr\},$$ where $P_1(T)$ is the space of linear functions on the element $T$. Force-based hybrid method {#sec:formulation} ------------------------- We are ready to formulate the force-based hybrid method. We take $\varrho: \Omega \to [0, 1]$ as a smooth standard cutoff function. The atomistic region corresponds to the zero level set of $\varrho$: $\Omega_{a} = \{x \mid \varrho(x) = 0\}$, and the continuum region corresponds to the region where $\varrho$ equals $1$: $\Omega_{c} = \{ x \mid \varrho(x) = 1 \}$. The region in between is a buffer between the atomistic and continuum regions. The force-based hybrid method is given as: find $y(x)\in X_{{\varepsilon}}$ such that $$\label {sqc:eq} {\mathcal{F}}_{{\mathrm{hy}}}[y](x)\equiv (1 - \varrho(x)) {\mathcal{F}}_{{\mathrm{at}}}[y](x) + \varrho(x) {\mathcal{F}}_{{\varepsilon}}[y](x) = f(x),\qquad x\in\Omega_{{\varepsilon}},$$ where ${\mathcal{F}}_{{\varepsilon}}$ is the force from the finite element approximation of the Cauchy-Born elasticity problem. Due to the choice of $\varrho$, in the atomistic region $\Omega_{a}$ the force acting on the atoms is just that of the atomistic model, while in the continuum region $\Omega_c$ the force is calculated from the finite element approximation of the Cauchy-Born elasticity. The proposed scheme works in dimension $d \leq 3$ for general short-range interaction potentials. The main result of this work is the following quadratic convergence result for the force-based hybrid method. 
\[thm:main\] Under Assumptions \[assump:stabatom\] and \[assump:stabCB\], there exist positive constants $\delta$ and $M$, so that for any $p > d$ and $ f \in W^{15, p}(\Omega) \cap W^{1, p}_{\sharp}(\Omega) $ with ${\lVertf\rVert}_{W^{15, p}} \leq \delta$, we have $$\label{eq:main} {\lVerty_{{\mathrm{hy}}} - y_{{\mathrm{at}}}\rVert}_{{\varepsilon},2} \leq M {\varepsilon}^2.$$ While we do not attempt in this work to optimize the regularity assumption on $f$, we note that it is easy to relax the assumption to $f \in W^{5, p}(\Omega)$ with $p>d$ following the remarks in the proof below. The sharp stability conditions, Assumptions \[assump:stabatom\] and \[assump:stabCB\], will be given in Section \[sec:regularity\]. These assumptions are quite natural and physical. We refer to Section \[sec:regularity\] and also [@EMing:2007] for more discussion of the stability conditions and their link to the physics literature. The proof of Theorem \[thm:main\], which will be viewed as a convergence result for (nonlinear) finite difference schemes, follows the spirit of Strang’s work [@Strang:1964]. In short, consistency and linear stability imply convergence. The heart of the matter lies in the analysis of consistency and stability, which will be the focus of the proof. The rest of the paper is organized as follows. In the next subsection, we review some related works. Section \[sec:consistency\] discusses the consistency of the scheme. The linear stability is proved in Section \[sec:stability\]. The stability estimate is based on the regularity estimates of finite difference schemes in Section \[sec:regularity\], which are established in the framework of pseudo-difference operators [@LaxNirenberg:1966; @Thomee:1964; @BubeStrikwerda:1983]. With the preparation of the consistency and linear stability analysis, the proof is concluded in Section \[sec:convergence\]. 
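For concreteness, one admissible blending function (our own example; the formulation only requires $\varrho$ smooth with values in $[0,1]$, and the buffer width $w_b$ below is our own notation) can be built from a mollified step:

```latex
% \chi is the standard C^\infty monotone step function
\[
  \chi(t) =
  \begin{cases}
    0, & t \le 0, \\[2pt]
    \displaystyle\int_0^{t} e^{-1/(s(1-s))}\,{\mathrm{d}}s \Big/
    \displaystyle\int_0^{1} e^{-1/(s(1-s))}\,{\mathrm{d}}s, & 0 < t < 1, \\[6pt]
    1, & t \ge 1,
  \end{cases}
  \qquad
  \varrho(x) = \chi\Bigl( \frac{\operatorname{dist}(x, \Omega_a)}{w_b} \Bigr),
\]
% where w_b > 0 is the width of the buffer region; the integrand extends
% smoothly by zero at s = 0 and s = 1, so \chi is C^\infty and monotone,
% \varrho = 0 on \Omega_a, and \varrho = 1 at distance >= w_b from \Omega_a.
```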
Related works ------------- Many recent papers discuss various atomistic/continuum coupling strategies, as summarized in the reviews [@RuddBroughton:2000; @CurtinMiller:2003; @MillerTadmor:2009; @HMMreview]. We will only mention some of the works that are closely related to ours and refer the readers to these reviews and the references therein. The hybrid method resembles several methods in the literature. The most closely related method is the quasicontinuum (QC) method [@TadmorOrtizPhillips:1996; @KnapOrtiz:2001], which is among the most popular methods for modeling the mechanical deformation of crystalline solids. The QC method contains the following ingredients: decomposition of the whole domain into atomistic and continuum regions, with the defects covered by the atomistic regions; degree reduction by adaptive selection of representative atoms (rep-atoms), with fewer atoms selected in regions of smooth deformation; and the application of the Cauchy-Born approximation in the continuum region to reduce the complexity involved in computing the total energy of the system. Both the proposed method and the QC method couple atomistic models with the nonlinear Cauchy-Born elasticity model. In some sense, the proposed method can be viewed as a smoothed modification of the force-based QC method. Indeed, the original force-based QC method amounts to taking $\varrho$ to be a characteristic function (so that there is no buffer region). The force-based QC is free of ghost force, and it was proven in [@Ming:2008; @DobsonLuskinOrtner:2010a] that, for the one-dimensional problem, the force-based method converges quadratically. However, its convergence behavior remains open for higher-dimensional problems. As will be proved later in the paper, the proposed method is stable and also converges quadratically in three dimensions. This work may also provide some new tools and insights for the understanding of the original force-based QC. 
The Arlequin method [@BenDhia:1998; @Bauman:2008] and the bridging domain method [@BelytschkoXiao:2003] also adopt a smooth transition between atomistic and continuum regions. The difference from the proposed scheme, however, is that these methods are energy-based, so the mixing is done at the energy level, while the current method is force-based. Moreover, these two methods enforce the consistency between the atomistic and continuum regions by imposing certain constraints, while there are no such constraints in our method. These methods suffer from ghost-force problems, as shown in [@MillerTadmor:2009], while the proposed method is consistent at the interface. The proposed method also shares certain common traits with the concurrent AtC coupling method (AtC) proposed in [@BadiaBochvLehoucqParksFishNuggehallyGunzburger:2007]. The AtC method also uses a smooth transition between atomistic and continuum regions and is force-based. However, the proposed method differs from AtC in the following aspects: (1) our method employs Cauchy-Born elasticity while AtC uses linear elasticity, and (2) our method is free of ghost force while AtC is plagued by ghost force, as demonstrated in [@MillerTadmor:2009]. Most of the existing analysis of these multiscale methods is limited to the quasicontinuum method. In [@EMing:2007], the Cauchy-Born rule for crystalline solids is verified under sharp stability conditions. In the language of QC, the authors of [@EMing:2007] actually proved the convergence of local QC (the whole computational domain is treated as the local region). Explicit convergence rates for the local QC can be found in [@EMing:2004; @EMing:2005]. For the QC method that couples atomistic and continuum models (the nonlocal QC method for short), error estimates can be found in [@MingYang:2009; @DobsonLuskin:2009b] and the references therein. 
All these works dealt only with the one-dimensional problem, and moreover, except for [@MingYang:2009], the analysis was limited to quadratic potential models, so that the system is linear. To the best of the authors’ knowledge, there is no analysis of the nonlocal QC method or other coupling schemes for high-dimensional problems with a general potential (usually a many-body potential). The main difficulties lie in the analysis of consistency and stability. For the one-dimensional problem, the lattice structure is very simple and the pairwise potential function can be handled by a direct calculation. However, such an approach cannot be easily extended to high-dimensional problems with general potentials, because the lattice structure and the potential function are then much more involved. One of the main contributions of the current paper is the development of general tools for the analysis of consistency and stability. Finally, we remark that in this work the analysis of the proposed method, especially the stability analysis, is based on the analysis of finite difference schemes. The reader might wonder why the analysis is *not* done in the framework of the finite element method: after all, we are dealing with static problems, the systems to be solved are “elliptic”, and moreover, the continuum region is discretized by the finite element method. The reason actually lies in the atomistic part, since the force balance equations derived from the energy of discrete lattice systems are intrinsically of finite difference type. To the best of our knowledge, there is not yet a successful way to put the atomistic equations into the framework of finite element analysis. Therefore, to be consistent, we view the finite element approximation in the continuum region also as a finite difference approximation. The proof hence relies on the analysis of finite difference schemes. 
This is reminiscent of the early history of finite element analysis, when the finite element method was also analyzed in the framework of finite difference schemes [@StrangFix:1973]. Since the theory of adaptive meshes is well established for the finite element method, it is an interesting question whether one can adopt the finite element analysis framework to analyze these multiscale coupling methods. Consistency {#sec:consistency} =========== We study the consistency of the force-based hybrid method in this section. The key is the following lemma, which is a refined version of [@EMing:2007]\*[Lemma 5.1]{}. \[lem:consCB\] For any $y=x+u(x)$ with $u$ smooth, we have $$\label{cons:eq1} {\lVert{\mathcal{F}}_{{\mathrm{at}}}[y] - {\mathcal{F}}_{{\mathrm{CB}}}[y]\rVert}_{L^{\infty}_{{\varepsilon}}} \leq C {\varepsilon}^2 {\lVertu\rVert}_{W^{16, \infty}},$$ where the constant $C$ depends on $V$ and ${\lVertu\rVert}_{L^\infty}$, but is independent of ${\varepsilon}$. The consistency estimate is presented in the form of  for later use in the proof of Proposition \[prop:Hcons\]. A bound involving lower-order derivatives of $u$ is possible; in fact, it is not hard to see from the proof that we have $$\label{cons:lowdiff} {\lVert{\mathcal{F}}_{{\mathrm{at}}}[y] - {\mathcal{F}}_{{\mathrm{CB}}}[y]\rVert}_{L^{\infty}_{{\varepsilon}}} \leq C {\varepsilon}^2,$$ where $C$ depends on $V$ and ${\lVertu\rVert}_{W^{6, \infty}}$. The price, however, is that the dependence of $C$ on ${\lVertu\rVert}_{W^{6, \infty}}$ is nonlinear. 
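Before turning to the proof, the ${\mathcal{O}}({\varepsilon}^2)$ consistency can be observed numerically in the simplest setting. The sketch below uses a one-dimensional periodic chain with the illustrative nearest-neighbor pair potential $\Phi'(s) = s + 0.3 s^3$ (our own choice, not the paper's model) and compares the atomistic force $\bigl(\Phi'(D^+y) - \Phi'(D^-y)\bigr)/{\varepsilon}$ with the Cauchy-Born force $\Phi''(y')\,y''$ for a smooth deformation; halving ${\varepsilon}$ reduces the maximal difference by roughly a factor of four.

```python
import numpy as np

# Numerical check of the O(eps^2) consistency between the atomistic and
# Cauchy-Born forces on a 1D periodic chain.  The pair potential and the
# deformation are illustrative choices.
def max_consistency_error(n):
    eps = 1.0 / n
    x = np.arange(n) * eps
    u = 0.1 * np.sin(2 * np.pi * x)                 # smooth periodic displacement
    dphi = lambda s: s + 0.3 * s**3                 # Phi'(s)
    s_fwd = 1 + (np.roll(u, -1) - u) / eps          # forward strain of y = x + u
    s_bwd = 1 + (u - np.roll(u, 1)) / eps           # backward strain
    f_at = (dphi(s_fwd) - dphi(s_bwd)) / eps        # atomistic force density
    yp = 1 + 0.2 * np.pi * np.cos(2 * np.pi * x)    # exact y'
    ypp = -0.4 * np.pi**2 * np.sin(2 * np.pi * x)   # exact y''
    f_cb = (1 + 0.9 * yp**2) * ypp                  # d/dx Phi'(y') = Phi''(y') y''
    return np.abs(f_at - f_cb).max()

e1, e2 = max_consistency_error(64), max_consistency_error(128)
ratio = e1 / e2                                     # close to 4 for O(eps^2) error
```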
For any $x\in\Omega_{{\varepsilon}}$, and for $i=1,2$, Taylor expansion at $x$ gives $${D_{s_i}^+}y(x) = {\nabla}_{s_i}^1[y](x)+{\varepsilon}{\nabla}_{s_i}^2[y](x)+{\varepsilon}^2 R_{2,s_i}[y](x),$$ where, for convenience, we have introduced the short-hands for the Taylor series and its remainder: $$\begin{aligned} & {\nabla}_{s_i}^j[y](x) = \dfrac{1}{j!}(s_i\cdot{\nabla})^j y(x), \\ & R_{k,s_i}[y](x) =\int_0^1(k+1)(1-t)^k{\nabla}_{s_i}^{k+1} y(x+{\varepsilon}ts_i){{\,\mathrm{d}}t},\quad k\in{\mathbb{N}},\end{aligned}$$ provided that the terms on the right hand side are well defined. Obviously, we may write $$\label {operexpan} {D_{s_i}^+}={\nabla}_{s_i}^1+{\varepsilon}{\nabla}_{s_i}^2+{\varepsilon}^2 R_{2,s_i},\quad {D_{s_i}^-}={\nabla}_{s_i}^1-{\varepsilon}{\nabla}_{s_i}^2-{\varepsilon}^2 R_{2,-s_i}.$$ For $i=1,2,3$ and $t \in [0,1]$, let $$\begin{aligned} F_i(t) ={\partial}_i V_{(s_1,s_2)}\Bigl( & {\lvertt{D_{s_1}^+}y(x)+(1-t)(s_1\cdot{\nabla})y(x)\rvert}^2,\\ & \quad {\lvertt{D_{s_2}^+}y(x) +(1-t)(s_2\cdot{\nabla})y(x)\rvert}^2,\\ & \quad \bigl\langle t{D_{s_1}^+}y(x)+(1-t)(s_1\cdot{\nabla})y(x), \\ & \qquad\qquad t{D_{s_2}^+}y(x) +(1-t)(s_2\cdot{\nabla})y(x)\bigr\rangle \Bigr).\end{aligned}$$ Using Taylor expansion, we get $$\label{eq:expandF} F_i(1)=F_i(0)+F^{\prime}_i(0)+R_1[F_i](0).$$ Here for $F_i: [0, 1] \to {\mathbb{R}}$, we have introduced a similar short-hand for the remainder $$R_k[F_i](0)=\int_0^1 \frac{(1-t)^k}{k!}{\nabla}^{k+1}F_i(t) {\,\mathrm{d}}t.$$ Notice that by definition we have $$\begin{aligned} F_i(1) & = \partial_i V_{(s_1, s_2)}\bigl({\lvert{D_{s_1}^+}y(x)\rvert}^2, {\lvert{D_{s_2}^+}y(x)\rvert}^2, {\left\langle{D_{s_1}^+}y(x),{D_{s_2}^+}y(x)\right\rangle} \bigr) \\ & = \partial_i V_{(s_1, s_2)}[y](x); \\ F_i(0) & = \partial_i V_{(s_1, s_2)}\bigl({\lvert(s_1\cdot{\nabla})y(x)\rvert}^2, {\lvert(s_2\cdot{\nabla})y(x)\rvert}^2, {\left\langle(s_1\cdot{\nabla})y(x),(s_2\cdot{\nabla})y(x)\right\rangle} \bigr) \\ & = \partial_i W_{(s_1, 
s_2)}(\nabla u(x)).\end{aligned}$$ Therefore, we can rewrite as $$\label{force-expan} \begin{aligned} {\partial}_iV_{(s_1,s_2)}[y](x)&={\partial}_iW_{(s_1,s_2)}({\nabla}u(x))+{\varepsilon}a_j{\partial}_{ij}W_{(s_1,s_2)}({\nabla}u(x))\\ &\quad+{\left(\,{\varepsilon}^2 b_j{\partial}_{ij}W_{(s_1,s_2)}({\nabla}u(x))+R_1[F_i](0)\,\right)}\\ &\quad+{\varepsilon}^3c_j{\partial}_{ij}W_{(s_1,s_2)}({\nabla}u(x))\\ &\equiv{\mathcal{Q}}_{i, (s_1, s_2)}[{\nabla}u](x), \end{aligned}$$ where for $j = 1, 2, 3$, $$\begin{aligned} a_j&=2{\left\langle(s_j\cdot{\nabla})y,{\nabla}_{s_j}^2[y]\right\rangle}(1-\delta_{j3})\\ &\quad+{\left(\,{\left\langle(s_1\cdot{\nabla})y,{\nabla}_{s_2}^2[y]\right\rangle} +{\left\langle(s_2\cdot{\nabla})y,{\nabla}_{s_1}^2[y]\right\rangle}\,\right)}\delta_{j3},\\ b_j&=2{\left\langle(s_j\cdot{\nabla})y,{\nabla}_{s_j}^3[y]\right\rangle}(1-\delta_{j3})\\ &\quad+{\left(\,{\left\langle(s_1\cdot{\nabla})y,{\nabla}_{s_2}^3[y]\right\rangle} +{\left\langle(s_2\cdot{\nabla})y,{\nabla}_{s_1}^3[y]\right\rangle}\,\right)}\delta_{j3},\\ c_j&=2{\left\langle(s_j\cdot{\nabla})y,R_{2,s_j}[y]\right\rangle}(1-\delta_{j3})\\ &\quad+{\left(\,{\left\langle(s_1\cdot{\nabla})y,R_{2,s_2}[y]\right\rangle} +{\left\langle(s_2\cdot{\nabla})y,R_{2,s_1}[y]\right\rangle}\,\right)}\delta_{j3}.\end{aligned}$$ Substituting the equations  into ${\mathcal{F}}_{{\mathrm{at}}}[y](x)$, we obtain $$\begin{aligned} {\mathcal{F}}_{{\mathrm{at}}}[y]=& \\ \sum_{(s_1, s_2) \in S} &({\nabla}_{s_1}^1-{\varepsilon}{\nabla}_{s_1}^2-{\varepsilon}^2R_{2,-s_1}) \Bigl\{2{\partial}_1V_{(s_1,s_2)}[y]({\nabla}_{s_1}^1+{\varepsilon}{\nabla}_{s_1}^2+{\varepsilon}^2R_{2,s_1})[y]\\ &\phantom{({\nabla}_{s_1}^1-{\varepsilon}{\nabla}_{s_1}^2+{\varepsilon}^2R_{2,-s_1})}\qquad +{\partial}_3V_{(s_1, s_2)}[y]({\nabla}_{s_2}^1+{\varepsilon}{\nabla}_{s_2}^2+{\varepsilon}^2R_{2,s_2})[y]\Bigr\}\\ +&({\nabla}_{s_2}^1-{\varepsilon}{\nabla}_{s_2}^2-{\varepsilon}^2R_{2,-s_2})\Bigl\{2{\partial}_2 
V_{(s_1,s_2)}[y]({\nabla}_{s_2}^1+{\varepsilon}{\nabla}_{s_2}^2+{\varepsilon}^2R_{2,s_2})[y]\\ &\phantom{({\nabla}_{s_2}^1-{\varepsilon}{\nabla}_{s_2}^2+{\varepsilon}^2R_{2,-s_2})}\qquad +{\partial}_3V_{(s_1,s_2)}[y]({\nabla}_{s_2}^1+{\varepsilon}{\nabla}_{s_2}^2+{\varepsilon}^2R_{2,s_2})[y]\Bigr\}.\end{aligned}$$ Next, substituting  into the above equation, we have $$\begin{aligned} {\mathcal{F}}_{{\mathrm{at}}}[y](x)= &\\ \sum_{(s_1, s_2) \in S} & ({\nabla}_{s_1}^1-{\varepsilon}{\nabla}_{s_1}^2-{\varepsilon}^2R_{2,-s_1}) \Bigl\{2{\mathcal{Q}}_{1, (s_1, s_2)}[{\nabla}u]({\nabla}_{s_1}^1+{\varepsilon}{\nabla}_{s_1}^2+{\varepsilon}^2 R_{2,s_1})[y]\\ &\phantom{({\nabla}_{s_1}^1-{\varepsilon}{\nabla}_{s_1}^2+{\varepsilon}^2R_{2,-s_1})}\ +{\mathcal{Q}}_{3, (s_1, s_2)}[{\nabla}u] ({\nabla}_{s_2}^1+{\varepsilon}{\nabla}_{s_2}^2+{\varepsilon}^2 R_{2,s_2})[y]\Bigr\}\\ + & ({\nabla}_{s_2}^1-{\varepsilon}{\nabla}_{s_2}^2-{\varepsilon}^2R_{2,-s_2}) \Bigl\{2{\mathcal{Q}}_{2, (s_1, s_2)}[{\nabla}u]({\nabla}_{s_2}^1+{\varepsilon}{\nabla}_{s_2}^2 +{\varepsilon}^2 R_{2,s_2})[y]\\ &\phantom{({\nabla}_{s_2}^1-{\varepsilon}{\nabla}_{s_2}^2+{\varepsilon}^2R_{2,-s_2})}\ +{\mathcal{Q}}_{3, (s_1, s_2)}[{\nabla}u]({\nabla}_{s_1}^1+{\varepsilon}{\nabla}_{s_1}^2+{\varepsilon}^2 R_{2,s_1})[y]\Bigr\}.\end{aligned}$$ Collecting the terms of the same order, we get $$\label {opexpansion:eq} {\mathcal{F}}_{{\mathrm{at}}}[y](x)= {\mathcal{L}}_0[u](x) +{\varepsilon}{\mathcal{L}}_1[u](x)+{\varepsilon}^2{\mathcal{L}}_2[u](x)+{\mathcal{O}}({\varepsilon}^3).$$ If we change ${\varepsilon}$ to $-{\varepsilon}$, the left-hand side of  is invariant, and hence the terms of odd powers of ${\varepsilon}$ on the right-hand side of  automatically vanish. 
Therefore, we have $${\mathcal{F}}_{{\mathrm{at}}}[y](x)={\mathcal{L}}_0[u](x) +{\varepsilon}^2{\mathcal{L}}_2[u](x)+{\mathcal{O}}({\varepsilon}^4).$$ The explicit form of ${\mathcal{L}}_0$ can be written as $$\begin{aligned} {\mathcal{L}}_0[u](x) &= -2(s_1\cdot{\nabla})\bigl[(s_1+(s_1\cdot{\nabla})u){\partial}_1W_{(s_1,s_2)}({\nabla}u(x))\bigr]\\ &\quad-(s_1\cdot{\nabla})\bigl[(s_2+(s_2\cdot{\nabla})u) {\partial}_3W_{(s_1,s_2)}({\nabla}u(x))\bigr]\\ &\quad-2(s_2\cdot{\nabla})\bigl[(s_2+(s_2\cdot{\nabla})u) {\partial}_2W_{(s_1,s_2)}({\nabla}u(x))\bigr]\\ &\quad-(s_2\cdot{\nabla})\bigl[(s_1+(s_1\cdot{\nabla})u){\partial}_3W_{(s_1,s_2)}({\nabla}u(x))\bigr].\end{aligned}$$ We see that ${\mathcal{L}}_0$ is the same as the operator that appears in the Euler-Lagrange equation of . The proof that ${\mathcal{L}}_2$ is of divergence form is similar. In fact, ${\mathcal{L}}_2$ is a quasilinear operator, which accounts for the linear dependence on ${\lVertu\rVert}_{W^{16, \infty}}$ on the right-hand side of . To prove , it remains to estimate the terms of ${\mathcal{O}}({\varepsilon}^2)$, which are a combination of terms of the form: for ${\alpha},\beta=1,2$, $$\begin{aligned} &{\nabla}_{s_{\alpha}}^k{\left(\,{\partial}_iW_{(s_1,s_2)}({\nabla}u){\nabla}_{s_\beta}^l u\,\right)},\quad l+k=4,l,k\in{\mathbb{N}},\\ &{\nabla}_{s_{\alpha}}^k{\left(\,a_j{\partial}_{ij}W_{(s_1,s_2)}({\nabla}u){\nabla}_{s_\beta}^l u\,\right)},\quad l+k=3,l,k\in{\mathbb{N}},\\ &{\nabla}_{s_{\alpha}}^1{\left(\,b_j{\partial}_{ij}W_{(s_1,s_2)}({\nabla}u){\nabla}_{s_\beta}^1 u+R_1[F_i](0){\nabla}_{s_{\beta}}^1u\,\right)}.\end{aligned}$$ We only give the estimate for the first term; the other two can be bounded similarly. 
By the chain rule and Leibniz’s rule, ${\nabla}_{s_{\alpha}}^k{\left(\,{\partial}_iW_{(s_1,s_2)}({\nabla}u){\nabla}_{s_\beta}^l u\,\right)}$ is a linear combination of terms of the form $$\begin{aligned} T&=\prod_{i=1}^3{\left(\,\dfrac{{\partial}}{{\partial}x_i}\,\right)}^{{\operatorname{sgn}}{\delta_i}}{\partial}_i W_{(s_1,s_2)}({\nabla}u)\\ &\phantom{=\prod_{i=1}^3}\quad{\times}(s_{\alpha}\cdot{\nabla})^{{\gamma}_1}P_{\delta_1}(s_{\alpha}\cdot{\nabla})^{{\gamma}_2}P_{\delta_2} (s_{\alpha}\cdot{\nabla})^{{\gamma}_3}P_{\delta_3}(s_{\beta}\cdot{\nabla})^{4-{\lvert{\gamma}\rvert}}u,\end{aligned}$$ where ${\gamma}\in{\mathbb{N}}^3$ are multi-indices with ${\lvert{\gamma}\rvert}=\sum_{i=1}^3{\lvert{\gamma}_i\rvert}$ and ${\lvert{\gamma}\rvert}\le 3$. Here $$P_1={\lverts_1+(s_1\cdot{\nabla})u\rvert}^2,\quad P_2={\lverts_2+(s_2\cdot{\nabla})u\rvert}^2,\quad P_3={\left\langles_1+(s_1\cdot{\nabla})u,s_2+(s_2\cdot{\nabla})u\right\rangle}.$$ Using the chain rule once again, we get, for $i=1,2,3$, $$\begin{aligned} {\lVert(s_{\alpha}\cdot{\nabla})P_i\rVert}_{L^\infty}&\le C(s_{\alpha})(1+{\lVert{\nabla}u\rVert}_{L^\infty}){\lVert{\nabla}^2 u\rVert}_{L^\infty},\\ {\lVert(s_{\alpha}\cdot{\nabla})^2P_i\rVert}_{L^\infty}&\le C(s_{\alpha}){\left(\,(1+{\lVert{\nabla}u\rVert}_{L^\infty}){\lVert{\nabla}^3 u\rVert}_{L^\infty} +{\lVert{\nabla}^2 u\rVert}_{L^\infty}^2\,\right)},\\ {\lVert(s_{\alpha}\cdot{\nabla})^3P_i\rVert}_{L^\infty}&\le C(s_{\alpha}){\left(\,(1+{\lVert{\nabla}u\rVert}_{L^\infty}){\lVert{\nabla}^4 u\rVert}_{L^\infty}+{\lVert{\nabla}^2 u\rVert}_{L^\infty}^2{\lVert{\nabla}^3 u\rVert}_{L^\infty}^2\,\right)}.\end{aligned}$$ Using the Gagliardo-Nirenberg inequality [@Nirenberg:1959], $${\lVert{\nabla}^j u\rVert}_{L^\infty}\le C{\lVert{\nabla}^m u\rVert}_{L^\infty}^{j/m}{\lVertu\rVert}_{L^\infty}^{1-j/m},\quad 0<j<m,$$ we have $${\lVert(s_{\alpha}\cdot{\nabla})^kP_i\rVert}_{L^\infty}\le C(s_{\alpha}){\left(\,{\lVert u\rVert}_{L^\infty}{\lVert{\nabla}^{k+2} 
u\rVert}_{L^\infty}+{\lVert{\nabla}^{k+1} u\rVert}_{L^\infty}\,\right)}.$$ Using the above inequality, we conclude $$\begin{aligned} {\lVertT\rVert}_{L^\infty}&\le C\max_{2\le{\lvert{\gamma}\rvert}\le 4}{\lVert{\partial}_{{\gamma}}W_{(s_1,s_2)}({\nabla}u)\rVert}_{L^\infty}{\lVert{\nabla}^{4-{\lvert{\gamma}\rvert}} u\rVert}_{L^\infty}\\ &\quad{\times}\Biggl\{ (1+{\lVertu\rVert}_{L^\infty}^3)\prod_{i=1}^3{\lVert{\nabla}^{{{\gamma}_i}+2}u\rVert}_{L^\infty} +\prod_{i=1}^3{\lVert{\nabla}^{{{\gamma}_i}+1}u\rVert}_{L^\infty}\\ &\phantom{{\times}\Biggl\{}\qquad+(1+{\lVertu\rVert}_{L^\infty}^2)\sum_{i,j,k=1}^3{\lVert{\nabla}^{{{\gamma}_i}+2}u\rVert}_{L^\infty} {\lVert{\nabla}^{{{\gamma}_j}+2}u\rVert}_{L^\infty}{\lVert{\nabla}^{{{\gamma}_k}+1}u\rVert}_{L^\infty}\\ &\phantom{{\times}\Biggl\{}\qquad+(1+{\lVertu\rVert}_{L^\infty})\sum_{i,j,k=1}^3{\lVert{\nabla}^{{{\gamma}_i}+2}u\rVert}_{L^\infty} {\lVert{\nabla}^{{{\gamma}_j}+1}u\rVert}_{L^\infty}{\lVert{\nabla}^{{{\gamma}_k}+1}u\rVert}_{L^\infty}\Biggr\}.\end{aligned}$$ Invoking Gagliardo-Nirenberg inequality again, we obtain $$\begin{aligned} {\lVertT\rVert}_{L^\infty}&\le C({\lVertu\rVert}_{L^\infty}^3+{\lVertu\rVert}_{L^\infty}^6){\lVert{\nabla}^{10}u\rVert}_{L^\infty}\\ &\quad+C({\lVertu\rVert}_{L^\infty}^3+{\lVertu\rVert}_{L^\infty}^5){\lVert{\nabla}^9u\rVert}_{L^\infty}\\ &\quad+C({\lVertu\rVert}_{L^\infty}^3+{\lVertu\rVert}_{L^\infty}^4){\lVert{\nabla}^8u\rVert}_{L^\infty}\\ &\quad+C{\lVertu\rVert}_{L^\infty}^3{\lVert{\nabla}^7u\rVert}_{L^\infty}\\ &\le C\sum_{i=3}^6{\lVertu\rVert}_{L^\infty}^i{\lVertu\rVert}_{W^{10,\infty}}.\end{aligned}$$ Proceeding along the same line, we can obtain the similar bounds for the higher-order terms, while ${\lVertu\rVert}_{W^{16,\infty}}$ arises from the following term $$R_{{\alpha},-s_{\alpha}}{\left(\,{\partial}_i W_{(s_1,s_2)}({\nabla}u)R_{\beta,s_{\beta}}[y]\,\right)}.$$ Summing up all terms of ${\mathcal{O}}{({\varepsilon}^2)}$, we get . 
\[coro:consFE\] For any $y=x+u(x)$ with $u$ smooth, we have $${\lVert{\mathcal{F}}_{{\varepsilon}}[y] - {\mathcal{F}}_{{\mathrm{CB}}}[y]\rVert}_{L^{\infty}_{{\varepsilon}}} \leq C{\varepsilon}^2 {\lVertu\rVert}_{W^{16, \infty}},$$ where the constant $C$ depends on $V$ and ${\lVertu\rVert}_{L^\infty}$, but is independent of ${\varepsilon}$. The corollary follows from Lemma \[lem:consCB\] by the observation that we can view the energy functional of the finite element discretization as a particular choice of atomistic potential energy. To be more concrete, let us consider the case $d = 2$, so that each element $T \in {\mathcal{T}}_{{\varepsilon}}$ has three vertices. It is straightforward to extend the argument below to higher dimensions, at the cost of more complicated notation. Let $y_{{\varepsilon}} \in {\widetilde{X}}_{{\varepsilon}}$ be the approximation of $y$ such that $y_{{\varepsilon}}(x) = y(x)$ for any $x \in \Omega_{{\varepsilon}}$, and let $u_{{\varepsilon}} = y_{{\varepsilon}} - x$. Obviously, we have $u_{{\varepsilon}}(x) = u(x)$ for any $x \in \Omega_{{\varepsilon}}$. Now, for each $T \in {\mathcal{T}}_{{\varepsilon}}$, $\nabla u_{{\varepsilon}} \vert_{T}$ is a linear function of $y_{{\varepsilon}}$ on the vertices of $T$. Denote the three vertices of $T$ by $x_0, x_1, x_2$, and set $s_1 = (x_1 - x_0)/{\varepsilon}$, $s_2 = (x_2 - x_0) / {\varepsilon}$; then $\nabla u_{{\varepsilon}}\vert_{T}$ is the solution of the linear system $$\begin{cases} s_1 + s_1 A = D_{{\varepsilon}, s_1}^+ y_{{\varepsilon}}(x_0), \\ s_2 + s_2 A = D_{{\varepsilon}, s_2}^+ y_{{\varepsilon}}(x_0). \end{cases}$$ Therefore, let us denote by $$\nabla u_{{\varepsilon}} \vert_{T} = A_{(s_1, s_2)}(y_{{\varepsilon}}(x_0)/{\varepsilon}, y_{{\varepsilon}}(x_1)/{\varepsilon}, y_{{\varepsilon}}(x_2) / {\varepsilon})$$ the solution of the above system. Notice that, due to linearity, the map $A_{(s_1, s_2)}$ is independent of ${\varepsilon}$. 
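The small linear system above can be solved directly; the following sketch (in the same row-vector convention, with all names our own illustrative choices) recovers the deformation gradient of an affine deformation exactly, illustrating that $A_{(s_1, s_2)}$ depends only on the scaled differences.

```python
import numpy as np

# Recover A = grad u|_T from the two forward differences, in the row-vector
# convention of the system  s_i + s_i A = D^+_{eps, s_i} y_eps(x_0).
def element_gradient(s1, s2, d1, d2):
    S = np.vstack([s1, s2])                    # rows: edge vectors s_i
    D = np.vstack([d1, d2])                    # rows: forward differences
    # Stacking the two equations gives S (I + A) = D, hence:
    return np.linalg.solve(S, D) - np.eye(len(s1))

# Sanity check on an affine deformation y(x) = x + x B, for which grad u = B
# and the recovery is exact, independently of eps.
B = np.array([[0.10, 0.02], [-0.03, 0.05]])
y = lambda x: x + x @ B
eps = 0.25
x0 = np.array([0.3, 0.7])
s1, s2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])   # element edge vectors
d1 = (y(x0 + eps * s1) - y(x0)) / eps
d2 = (y(x0 + eps * s2) - y(x0)) / eps
A = element_gradient(s1, s2, d1, d2)
```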
Hence, for $x \in T$, we can write $$\label{eq:Wcbpotential} \begin{aligned} W_{{\mathrm{CB}}}(\nabla u_{{\varepsilon}}(x)) & = W_{{\mathrm{CB}}}\bigl(A_{(s_1, s_2)}(y_{{\varepsilon}}(x_0)/{\varepsilon}, y_{{\varepsilon}}(x_1)/{\varepsilon}, y_{{\varepsilon}}(x_2) / {\varepsilon})\bigr) \\ & = W_{{\mathrm{FE}}, (s_1, s_2)}(y_{{\varepsilon}}(x_0)/{\varepsilon}, y_{{\varepsilon}}(x_1)/{\varepsilon}, y_{{\varepsilon}}(x_2) / {\varepsilon}), \end{aligned}$$ where $W_{{\mathrm{FE}}, (s_1, s_2)} \equiv W_{{\mathrm{CB}}}\circ A_{(s_1, s_2)}$. Denote $S_{{\mathrm{FE}}}$ as the set of all pairs $(s_1, s_2)$ such that $\{x_0, x_0+{\varepsilon}s_1, x_0 + {\varepsilon}s_2 \}$ forms the vertices of an element $T \in {\mathcal{T}}_{{\varepsilon}}$ containing $x_0$ (it is easy to see that $S_{{\mathrm{FE}}}$ is independent of ${\varepsilon}$). Then, using , we have $$\begin{aligned} \int_{\Omega} & W_{{\mathrm{CB}}}(\nabla u_{{\varepsilon}}(x)) \\ & = \sum_{T \in {\mathcal{T}}_{{\varepsilon}}} {\lvertT\rvert} W_{{\mathrm{CB}}}(\nabla u_{{\varepsilon}} \vert_T) \\ & = \frac{1}{3!}\sum_{x \in \Omega_{{\varepsilon}}} \sum_{(s_1, s_2) \in S_{{\mathrm{FE}}}} {\varepsilon}^d {\lvertT_{(s_1, s_2)}\rvert} W_{{\mathrm{FE}}, (s_1, s_2)}\left(\frac{y_{{\varepsilon}}(x)}{{\varepsilon}}, \frac{y_{{\varepsilon}}(x + {\varepsilon}s_1)}{{\varepsilon}}, \frac{y_{{\varepsilon}}(x + {\varepsilon}s_2)}{{\varepsilon}}\right) \\ & = \frac{1}{3!}{\varepsilon}^d \sum_{x \in \Omega_{{\varepsilon}}} \sum_{(s_1, s_2) \in S_{{\mathrm{FE}}}} V_{{\mathrm{FE}}, (s_1, s_2)}\left(\frac{y_{{\varepsilon}}(x)}{{\varepsilon}}, \frac{y_{{\varepsilon}}(x + {\varepsilon}s_1)}{{\varepsilon}}, \frac{y_{{\varepsilon}}(x + {\varepsilon}s_2)}{{\varepsilon}}\right), \end{aligned}$$ where $V_{{\mathrm{FE}}, (s_1, s_2)} ={\lvertT_{(s_1, s_2)}\rvert} W_{{\mathrm{FE}}, (s_1, s_2)}$ and $T_{(s_1, s_2)}$ is the triangle formed by vectors $s_1$ and $s_2$. 
This indicates that we can view the energy functional of the finite element discretization as a particular atomistic potential model, given by the three-body interactions $V_{{\mathrm{FE}}, (s_1, s_2)}$, by identifying the values of $y$ at the nodes with the deformed atom positions. It is immediately clear that the Cauchy-Born energy density corresponding to the atomistic potential just constructed is again $W_{{\mathrm{CB}}}$. Indeed, for a homogeneously deformed system with deformation gradient $A$, by definition, the energy of the system is just $W_{{\mathrm{CB}}}(A) {\lvert\Omega\rvert}$, and hence the Cauchy-Born energy density is given again by $W_{{\mathrm{CB}}}(A)$. With this viewpoint of the finite element discretization as an atomistic potential, we obtain the conclusion as an immediate corollary of Lemma \[lem:consCB\]. \[coro:consqc\] For any $y=x+u(x)$ with $u$ smooth, we have $$\label{eq:compareqcat} {\lVert{\mathcal{F}}_{{\mathrm{hy}}}[y] - {\mathcal{F}}_{{\mathrm{at}}}[y]\rVert}_{L^{\infty}_{{\varepsilon}}} \leq C {\varepsilon}^2 {\lVertu\rVert}_{W^{16, \infty}},$$ and $$\label{eq:compareqcCB} {\lVert{\mathcal{F}}_{{\mathrm{hy}}}[y] - {\mathcal{F}}_{{\mathrm{CB}}}[y]\rVert}_{L^{\infty}_{{\varepsilon}}} \leq C {\varepsilon}^2 {\lVertu\rVert}_{W^{16, \infty}},$$ where the constant $C$ depends on $V$ and ${\lVertu\rVert}_{L^\infty}$, but is independent of ${\varepsilon}$. 
The inequality  follows from Lemma \[lem:consCB\], Corollary \[coro:consFE\], and $$\begin{aligned} {\lVert{\mathcal{F}}_{{\mathrm{hy}}}[y] - {\mathcal{F}}_{{\mathrm{at}}}[y]\rVert}_{L^{\infty}_{{\varepsilon}}} & = {\lVert \varrho(x) ({\mathcal{F}}_{{\varepsilon}}[y](x) - {\mathcal{F}}_{{\mathrm{at}}}[y](x)) \rVert}_{L^{\infty}_{{\varepsilon}}} \\ & \leq {\lVert {\mathcal{F}}_{{\varepsilon}}[y] - {\mathcal{F}}_{{\mathrm{at}}}[y]\rVert}_{L^{\infty}_{{\varepsilon}}} \\ & \leq {\lVert {\mathcal{F}}_{{\mathrm{at}}}[y] - {\mathcal{F}}_{{\mathrm{CB}}}[y]\rVert}_{L^{\infty}_{{\varepsilon}}} + {\lVert {\mathcal{F}}_{{\varepsilon}}[y] - {\mathcal{F}}_{{\mathrm{CB}}}[y]\rVert}_{L^{\infty}_{{\varepsilon}}}, \end{aligned}$$ where we have used $\varrho(x) \in [0, 1]$. Similarly,  follows from Lemma \[lem:consCB\], Corollary \[coro:consFE\], and $$\begin{aligned} {\lVert{\mathcal{F}}_{{\mathrm{hy}}}[y] - {\mathcal{F}}_{{\mathrm{CB}}}[y]\rVert}_{L^{\infty}_{{\varepsilon}}} & \leq {\lVert \varrho(x) ({\mathcal{F}}_{{\varepsilon}}[y](x) - {\mathcal{F}}_{{\mathrm{CB}}}[y](x)) \rVert}_{L^{\infty}_{{\varepsilon}}} \\ & \qquad + {\lVert(1 - \varrho(x)) ({\mathcal{F}}_{{\mathrm{at}}}[y](x) - {\mathcal{F}}_{{\mathrm{CB}}}[y](x))\rVert}_{L^{\infty}_{{\varepsilon}}} \\ & \leq {\lVert {\mathcal{F}}_{{\varepsilon}}[y] - {\mathcal{F}}_{{\mathrm{CB}}}[y] \rVert}_{L^{\infty}_{{\varepsilon}}} + {\lVert {\mathcal{F}}_{{\mathrm{at}}}[y] - {\mathcal{F}}_{{\mathrm{CB}}}[y]\rVert}_{L^{\infty}_{{\varepsilon}}}. \end{aligned}$$ Regularity estimate {#sec:regularity} =================== To analyze the stability of the proposed force-based hybrid method, we use the framework of pseudo-difference operators [@Thomee:1964; @LaxNirenberg:1966]. In this section, we establish the regularity estimate, Theorem \[thm:regularity\], for the force-based hybrid method. This will be one of the key ingredients used to prove the stability estimate in the next section. 
We study the linearized operator of ${\mathcal{F}}_{{\mathrm{hy}}}$. Let ${\mathcal{H}}_{{\mathrm{hy}}}[u]$ denote the linearization of ${\mathcal{F}}_{{\mathrm{hy}}}$ at the state $u$: $${\mathcal{H}}_{{\mathrm{hy}}}[u] = \frac{\delta {\mathcal{F}}_{{\mathrm{hy}}}}{\delta y} \bigg\vert_{y = x + u},$$ so that ${\mathcal{H}}_{{\mathrm{hy}}}[u]$ is a linear operator on lattice functions $w$, given by $${\mathcal{H}}_{{\mathrm{hy}}}[u] w = \lim_{t\to0} \frac{\partial {\mathcal{F}}_{{\mathrm{hy}}}[x + u + t w]} {\partial t}.$$ It is convenient to rewrite ${\mathcal{H}}_{{\mathrm{hy}}}$ in the form of a pseudo-difference operator as $${\mathcal{H}}_{{\mathrm{hy}}}[u] = \sum_{\mu \in {\mathcal{A}}} h_{{\mathrm{hy}}}[u](x, \mu) T^{\mu},$$ where the coefficient $h_{{\mathrm{hy}}}[u](x,\mu)$ is a $d\times d$ (possibly nonsymmetric) matrix for each $x$ and $\mu \in {\mathcal{A}}$, given by $$\label{eq:defhqc} (h_{{\mathrm{hy}}}[u])_{\alpha\beta}(x, \mu) = \frac{\partial ( {\mathcal{F}}_{{\mathrm{hy}}}[y] )_{\alpha}(x)} {\partial (T^{\mu} y)_{\beta}(x)} \bigg\vert_{y = x + u},$$ where $\alpha, \beta = 1, \cdots, d$ are indices. Here ${\mathcal{A}}$ is the range of the pseudo-difference stencil (note that $0 \in {\mathcal{A}}$), which is finite by assumption. By the definition of ${\mathcal{F}}_{{\mathrm{hy}}}$, we have $$\label{eq:linearh} h_{{\mathrm{hy}}}[u](x, \mu) = (1 - \varrho(x)) h_{{\mathrm{at}}}[u](x, \mu) + \varrho(x) h_{{\varepsilon}}[u](x, \mu),$$ where $h_{{\mathrm{at}}}[u]$ and $h_{{\varepsilon}}[u]$ are given by the analogues of  with ${\mathcal{F}}_{{\mathrm{hy}}}$ replaced by ${\mathcal{F}}_{{\mathrm{at}}}$ and ${\mathcal{F}}_{{\varepsilon}}$, respectively. 
Define ${\widetilde{h}}_{{\mathrm{hy}}}[u](x, \xi)$ as the symbol of the pseudo-difference operator ${\mathcal{H}}_{{\mathrm{hy}}}[u]$ given as $${\widetilde{h}}_{{\mathrm{hy}}}[u](x, \xi) = \sum_{\mu\in{\mathcal{A}}} h_{{\mathrm{hy}}}[u](x, \mu) \exp({\imath}{\varepsilon}\sum_j \mu_j a_j \cdot \xi) \qquad \text{for } \xi \in {\mathbb{L}}^{\ast}_{{\varepsilon}},$$ and similarly for ${\widetilde{h}}_{{\varepsilon}}[u]$ and ${\widetilde{h}}_{{\mathrm{at}}}[u]$. By definition, we have for any $x \in \Omega_{{\varepsilon}}$, $$({\mathcal{H}}_{{\mathrm{hy}}}[u] e_k e^{{\imath}x\cdot \xi})_j(x) = ({\widetilde{h}}_{{\mathrm{hy}}}[u])_{jk}(x, \xi) e^{{\imath}x\cdot \xi},$$ for $1 \leq j,k \leq d$ and similarly for ${\widetilde{h}}_{{\varepsilon}}[u]$ and ${\widetilde{h}}_{{\mathrm{at}}}[u]$. Here $\{e_k\}$ are the canonical basis of ${\mathbb{R}}^d$. It is also clear that  implies $$\label{eq:linearwth} {\widetilde{h}}_{{\mathrm{hy}}}[u](x, \xi) = (1 - \varrho(x)) {\widetilde{h}}_{{\mathrm{at}}}[u](x, \xi) + \varrho(x) {\widetilde{h}}_{{\varepsilon}}[u](x, \xi).$$ In the case that we linearize around the equilibrium state $u = 0$, we will simplify the notation as $${\mathcal{H}}_{{\mathrm{hy}}} = {\mathcal{H}}_{{\mathrm{hy}}}[0], \quad h_{{\mathrm{hy}}} = h_{{\mathrm{hy}}}[0], \quad {\widetilde{h}}_{{\mathrm{hy}}} = {\widetilde{h}}_{{\mathrm{hy}}}[0],$$ and similarly for those defined for atomistic model and finite element discretization. We observe that by the translation invariance of the total energy $I_{\mathrm{at}}$ at the state $u = 0$, $$h_{{\mathrm{at}}}(x, \mu) = h_{{\mathrm{at}}}(\mu), \quad h_{{\varepsilon}}(x, \mu) = h_{{\varepsilon}}(\mu).$$ The coefficients are independent of position $x$, and hence similarly for ${\widetilde{h}}_{{\mathrm{at}}}$ and ${\widetilde{h}}_{{\varepsilon}}$. 
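As a concrete scalar instance of these definitions, consider the one-dimensional second-difference operator; the sketch below (an illustrative example with our own parameter choices) checks that plane waves are eigenfunctions with eigenvalue equal to the symbol.

```python
import numpy as np

# A concrete scalar symbol: the 1D second-difference operator
#   (H w)(x) = (w(x + eps) - 2 w(x) + w(x - eps)) / eps^2,
# with stencil coefficients h(1) = h(-1) = 1/eps^2, h(0) = -2/eps^2.
eps = 0.1
coeffs = {-1: 1 / eps**2, 0: -2 / eps**2, 1: 1 / eps**2}

def symbol(xi):
    # h~(xi) = sum_mu h(mu) exp(i eps mu xi)
    return sum(c * np.exp(1j * eps * mu * xi) for mu, c in coeffs.items())

def apply_H(w, x):
    return (w(x + eps) - 2 * w(x) + w(x - eps)) / eps**2

# Plane waves are eigenfunctions of H with eigenvalue h~(xi):
xi, x0 = 3.0, 0.4
w = lambda x: np.exp(1j * xi * x)
lhs = apply_H(w, x0)
rhs = symbol(xi) * w(x0)
```

For this symmetric real stencil the symbol is the real function $-\frac{4}{{\varepsilon}^2}\sin^2({\varepsilon}\xi/2)$, consistent (in the scalar case) with the Hermitian-symbol lemma below.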
We also denote by ${\mathcal{H}}_{{\mathrm{CB}}}$ the linearization of ${\mathcal{F}}_{{\mathrm{CB}}}$ at the equilibrium state $u = 0$, and define ${\widetilde{h}}_{{\mathrm{CB}}} = {\widetilde{h}}_{{\mathrm{CB}}}(x, \xi)$ as its symbol. Note that due to the periodic boundary condition assumed on $\Omega$, $\xi$ here only takes values in ${\mathbb{L}}^{\ast}$. Again, due to the translation invariance of the total energy, ${\widetilde{h}}_{{\mathrm{CB}}}$ is independent of $x$. Let us start the analysis with the operator ${\mathcal{H}}_{{\mathrm{hy}}}$. First, we show that the matrix ${\widetilde{h}}_{{\mathrm{hy}}}$ is Hermitian. The matrices ${\widetilde{h}}_{{\mathrm{at}}}(\xi)$, ${\widetilde{h}}_{{\varepsilon}}(\xi)$, and hence ${\widetilde{h}}_{{\mathrm{hy}}}(x, \xi)$, are Hermitian for any ${\varepsilon}> 0$, $x \in \Omega_{{\varepsilon}}$ and $\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}$. It suffices to prove the result for ${\widetilde{h}}_{{\mathrm{at}}}(\xi)$, as the argument for ${\widetilde{h}}_{{\varepsilon}}(\xi)$ is the same and the conclusion for ${\widetilde{h}}_{{\mathrm{hy}}}(x, \xi)$ follows immediately from . 
Since $({\mathcal{F}}_{{\mathrm{at}}}[y])_{\alpha}(x) = - \partial I_{{\mathrm{at}}}[y] / \partial y_{\alpha}(x)$, we have $$\begin{aligned} (h_{{\mathrm{at}}})_{\alpha\beta}(\mu) & = - \frac{\partial^2 I_{{\mathrm{at}}}[y]} {\partial y_{\alpha}(x) \partial (T^{\mu} y)_{\beta}(x)} \bigg\vert_{y = x} \\ & = - \frac{\partial^2 I_{{\mathrm{at}}}[y]} {\partial y_{\alpha}(x) \partial y_{\beta}(x + {\varepsilon}\mu_j a_j)} \bigg\vert_{y = x} \\ & = - \frac{\partial^2 I_{{\mathrm{at}}}[y]} {\partial (T^{-\mu}y)_{\alpha}(x+{\varepsilon}\mu_j a_j) \partial y_{\beta}(x + {\varepsilon}\mu_j a_j)} \bigg\vert_{y = x} \\ & = - \frac{\partial^2 I_{{\mathrm{at}}}[y]} {\partial (T^{-\mu}y)_{\alpha}(x) \partial y_{\beta}(x)} \bigg\vert_{y = x} = (h_{{\mathrm{at}}})_{\beta\alpha}(-\mu), \end{aligned}$$ where the last line follows from translational invariance of the unperturbed system. Therefore, $$\begin{aligned} ({\widetilde{h}}_{{\mathrm{at}}})_{\alpha\beta}(\xi) & = \sum_{\mu} (h_{{\mathrm{at}}})_{\alpha\beta}(\mu) \exp({\imath}{\varepsilon}\sum_j \mu_j a_j \cdot \xi) \\ & = \sum_{\mu} (h_{{\mathrm{at}}})_{\beta\alpha}(-\mu) \exp({\imath}{\varepsilon}\sum_j (-\mu_j) a_j \cdot (-\xi)) \\ & = \biggl( \sum_{\mu} (h_{{\mathrm{at}}})_{\beta\alpha}(-\mu) \exp({\imath}{\varepsilon}\sum_j (-\mu_j) a_j \cdot \xi) \biggr)^{\ast} = ({\widetilde{h}}_{{\mathrm{at}}})_{\beta\alpha}^{\ast}(\xi), \end{aligned}$$ for any $\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}$, where we have used the fact that $h_{{\mathrm{at}}}$ are real matrices. This proves the Lemma. 
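The mechanism of this proof, namely that real coefficients satisfying $h(-\mu) = h(\mu)^T$ produce a Hermitian symbol, can be checked numerically; the stencil and coefficients below are random illustrative choices of our own.

```python
import numpy as np

# Check: real matrix coefficients with h(-mu) = h(mu)^T yield a Hermitian
# symbol h~(xi) = sum_mu h(mu) exp(i eps mu . xi).
rng = np.random.default_rng(0)
d, eps = 2, 0.1

M0 = rng.standard_normal((d, d))
h = {(0, 0): M0 + M0.T}                      # h(0) must be symmetric
for mu in [(1, 0), (0, 1), (1, 1)]:
    M = rng.standard_normal((d, d))
    h[mu] = M
    h[tuple(-k for k in mu)] = M.T           # the translation-invariance relation

def symbol(xi):
    return sum(c * np.exp(1j * eps * np.dot(mu, xi)) for mu, c in h.items())

H = symbol(np.array([0.7, -1.3]))            # a 2x2 complex Hermitian matrix
```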
We make the following stability assumptions about the atomistic potential and the finite element discretization of the Cauchy-Born elasticity model: \[assump:stabatom\] ${\widetilde{h}}_{{\mathrm{at}}}(\xi)$ is positive definite and there exists $a_{{\mathrm{at}}}>0$ such that for any ${\varepsilon}>0$ and any $\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}$, $$\det {\widetilde{h}}_{{\mathrm{at}}}(\xi) \geq a_{{\mathrm{at}}} \Lambda_{0, {\varepsilon}}^{2d}(\xi).$$ \[assump:stabCB\] ${\widetilde{h}}_{{\varepsilon}}(\xi)$ is positive definite and there exists $a>0$ such that for any ${\varepsilon}>0$ and any $\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}$, $$\det {\widetilde{h}}_{{\varepsilon}}(\xi) \geq a \Lambda_{0, {\varepsilon}}^{2d}(\xi).$$ Assumptions \[assump:stabatom\] and \[assump:stabCB\] will be assumed in the sequel without further indication. These assumptions are quite natural and physical. In fact, Assumption \[assump:stabatom\] is just the phonon stability condition (for a simple Bravais lattice) identified in [@EMing:2007], represented using the notion of pseudo-difference operators. Assumption \[assump:stabCB\] is the usual stability condition of a finite element discretization of the continuous problem derived from the Cauchy-Born rule. We note that, as a consequence of these stability assumptions, the continuous Cauchy-Born elasticity problem is also elliptic, as indicated by Corollary \[coro:CB\] below. From a mathematical point of view, Assumptions \[assump:stabatom\] and \[assump:stabCB\] can be seen as uniform ellipticity conditions for the difference operators. Next, we prove a lower bound for the symbol ${\widetilde{h}}_{{\mathrm{hy}}}$, which is crucial for the regularity and stability estimates. 
Let us recall an inequality proved by Ky Fan: \[thm:KyFan\] Let $A$, $B$ be positive definite matrices; then for any $\lambda \in [0, 1]$, $$\det( \lambda A + (1 - \lambda) B) \geq (\det A)^{\lambda} (\det B)^{1-\lambda}.$$ \[cor:lowbound\] For any ${\varepsilon}> 0$, $x \in \Omega_{{\varepsilon}}$ and any $\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}$, we have $$\det {\widetilde{h}}_{{\mathrm{hy}}}(x, \xi) \geq \min(a, a_{{\mathrm{at}}}) \Lambda_{0, {\varepsilon}}^{2d}(\xi).$$ This is an immediate corollary of Theorem \[thm:KyFan\]. Since for any $x$, $\varrho(x) \in [0, 1]$, we have $$\begin{aligned} \det {\widetilde{h}}_{{\mathrm{hy}}}(x, \xi) & = \det \bigl( (1 - \varrho(x)) {\widetilde{h}}_{{\mathrm{at}}}(\xi) + \varrho(x) {\widetilde{h}}_{{\varepsilon}}(\xi) \bigr) \\ & \geq (\det {\widetilde{h}}_{{\mathrm{at}}}(\xi))^{1 - \varrho(x)} (\det {\widetilde{h}}_{{\varepsilon}}(\xi))^{\varrho(x)} \\ & \geq a_{{\mathrm{at}}}^{1-\varrho(x)} a^{\varrho(x)} \Lambda_{0, {\varepsilon}}^{2d}(\xi) \\ & \geq \min(a, a_{{\mathrm{at}}}) \Lambda_{0, {\varepsilon}}^{2d}(\xi). \end{aligned}$$ With these preparations, we now establish the regularity estimate of the quasi-continuum approximation. The regularity theory of discrete elliptic systems rests on a fundamental result on finite difference approximations by Bube and Strikwerda [@BubeStrikwerda:1983], who extended the regularity estimates of Thom[é]{}e and Westergren [@ThomeeWestergren:1968] from a single elliptic equation to elliptic systems. Let us introduce the notion of a regular discrete elliptic system following [@BubeStrikwerda:1983]; the concept is parallel to that of a regular continuous elliptic system [@AgmonDouglisNirenberg:1959]. For $i, j = 1, \cdots, d$, let $L_{ij}$ be a difference operator with symbol $l_{ij}(x, \xi)$. 
The system of difference equations $$\label{eq:ellipsys} \sum_{j=1}^d L_{ij} v_j(x) = f_i(x), \qquad i = 1, \cdots, d,$$ is a *regular discrete elliptic system*, if there are sets of integers $\{\sigma_i\}_{i=1}^d$ and $\{\tau_j\}_{j=1}^d$ such that each $L_{ij}$ is a difference operator of order at most $\sigma_i + \tau_j$, and if there are positive constants $C, \xi_0, {\varepsilon}_0$ such that $${\lvert \det l_{ij}(x, \xi) \rvert} \geq C \Lambda_{{\varepsilon}}^{2p}(\xi)$$ for $ 0 < {\varepsilon}\leq {\varepsilon}_0$, $\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}$, and $\max_{1\leq i \leq d} {\lvert\xi_i\rvert} \geq \xi_0$, where $2p = \sum_{i} (\sigma_i + \tau_i)$. We will say that the system is regular elliptic of order $(\sigma, \tau)$. By Corollary \[cor:lowbound\], we immediately have \[prop:elliptic\] Under Assumptions \[assump:stabatom\] and \[assump:stabCB\], the finite difference system $$\label{eq:ellipsys2} {\mathcal{H}}_{{\mathrm{hy}}} v = f$$ is a regular discrete elliptic system of order $(0,2)$. For the regular discrete elliptic system , we have the following regularity estimate. \[thm:regularity\] Under Assumptions \[assump:stabatom\] and \[assump:stabCB\], for any $v \in H^2_{{\varepsilon}}(\Omega)$, we have $$\label{eq:regularity} {\lVertv\rVert}_{{\varepsilon},2} \leq C ({\lVert{\mathcal{H}}_{{\mathrm{hy}}} v\rVert}_{{\varepsilon},0} + {\lVertv\rVert}_{{\varepsilon},0}).$$ The constant $C$ is independent of $v$ and ${\varepsilon}$. Theorem \[thm:regularity\] is analogous to the interior regularity estimate for elliptic partial differential equations given in [@AgmonDouglisNirenberg:1964]. The statement of the theorem is just a rewriting of Theorem 2.1 in [@BubeStrikwerda:1983] in the current notation. We note that in [@BubeStrikwerda:1983], Bube and Strikwerda proved interior regularity estimates, which clearly imply the *a priori* estimate for the periodic case here. 
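As a simple illustration of the definition (a sketch, assuming for concreteness the standard convention $\Lambda_{{\varepsilon}}^2(\xi) = 1 + \frac{4}{{\varepsilon}^2} \sum_{k=1}^d \sin^2({\varepsilon}\xi_k/2)$ for the discrete symbol), consider the decoupled system given by the five-point (central-difference) discrete Laplacian, $L_{ij} = -\delta_{ij} \Delta_{{\varepsilon}}$, whose symbol is $$l_{ij}(x, \xi) = \delta_{ij}\, \frac{4}{{\varepsilon}^2} \sum_{k=1}^d \sin^2({\varepsilon}\xi_k / 2).$$ Using $\sin^2({\varepsilon}\xi_k/2) \geq ({\varepsilon}\xi_k/\pi)^2$ for ${\lvert{\varepsilon}\xi_k\rvert} \leq \pi$, one checks that $\det l_{ij}(x, \xi) \geq C \Lambda_{{\varepsilon}}^{2d}(\xi)$ whenever $\max_{1 \leq i \leq d} {\lvert\xi_i\rvert} \geq \xi_0$, so this system is regular discrete elliptic of order $(0, 2)$, i.e., with $\sigma_i = 0$ and $\tau_j = 2$. 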
Stability {#sec:stability} ========= The main theorem we will prove in this section is the following stability estimate. \[thm:stability\] Under Assumptions \[assump:stabatom\] and \[assump:stabCB\], for any $v \in H^2_{{\varepsilon}}(\Omega)$, we have $$\label{eq:stability} {\lVertv\rVert}_{{\varepsilon},2} \leq C {\lVert{\mathcal{H}}_{{\mathrm{hy}}} v\rVert}_{{\varepsilon},0}.$$ Let us make some remarks about the stability result. In general, we do not know whether a stability estimate like is valid for the force-based quasicontinuum method in general dimension (see [@DobsonLuskinOrtner:2010b; @DobsonOrtnerShapeev] for some studies in one dimension). From a pseudo-difference operator point of view, the continuity in the $x$ variable of the symbol of the linearized operator is crucial for the validity of the strong stability. This is also the main motivation to use a smooth transition function $\varrho(x)$ in the current scheme. The strong stability property of the scheme will facilitate the numerical solution based on iterative methods. The strong stability is also crucial for the extension of the current scheme to the time-dependent case, where it plays the role of a Gårding inequality. We will leave this to future publications. To obtain the stability estimate from the regularity estimate of Theorem \[thm:regularity\], we need to eliminate ${\lVertv\rVert}_{{\varepsilon}, 0}$ on the right hand side of . In spatial dimension one, this can be achieved by the discrete maximum principle for the finite difference equation. This is however no longer the case for higher dimensions, as then we are dealing with an elliptic system. The argument we will use is instead similar in spirit to the arguments used in [@AgmonDouglisNirenberg:1959; @Schechter:1959] for passing from regularity estimates to uniqueness results for elliptic systems. 
The difficulty, however, is that a compactness argument as in [@Schechter:1959] cannot be applied to the finite difference system, as we need a uniform estimate for different ${\varepsilon}$. Therefore, instead of using compactness, the proof is based on the uniqueness of the continuous system from ellipticity, the consistency of the finite difference scheme with the continuous system, and the regularity estimate of Theorem \[thm:regularity\]. We note that a similar approach was considered by Martin [@Martin:94]. In order to connect the finite difference system with the continuous PDE, we need to extend grid functions on $\Omega_{{\varepsilon}}$ to continuous functions defined in $\Omega$. For this purpose, let us define an interpolation operator $Q_{{\varepsilon}}$ as follows.[^2] For any lattice function $u$ on $\Omega_{{\varepsilon}}$, we define $Q_{{\varepsilon}} u \in L^2(\Omega)$ as $$\label{eq:Qvepsdef} (Q_{{\varepsilon}} u)(x) = (2\pi)^{d/2} \sum_{\xi \in \mathbb{L}^{\ast}_{{\varepsilon}}} e^{{\imath}x \cdot \xi} {\widehat{u}}(\xi), \quad x \in \Omega.$$ Comparing with , we know that $Q_{{\varepsilon}} u$ agrees with $u$ on $\Omega_{{\varepsilon}}$. We have the following properties of $Q_{{\varepsilon}}$. For $k \geq 0$, there exist constants $c_k, C_k > 0$ such that for any $u$, $$c_k {\lVertu\rVert}_{H^k_{{\varepsilon}}(\Omega)} \leq {\lVertQ_{{\varepsilon}} u\rVert}_{H^k(\Omega)} \leq C_k {\lVertu\rVert}_{H^k_{{\varepsilon}}(\Omega)}.$$ The conclusion follows immediately from the definition and . Let $\chi$ be a standard nonnegative cut-off function on ${\mathbb{R}}^d$, which is smooth and compactly supported, with ${\lVert\chi\rVert}_{L^1} = 1$. Let $\chi_{{\varepsilon}}$ be the scaled version $$\chi_{{\varepsilon}}(x) = {\varepsilon}^{-(\alpha d)} \chi( {\varepsilon}^{-\alpha} x),$$ for some $\alpha$ with $0 < \alpha < 1$. The choice of the value of $\alpha$ will be specified later in the proof of Proposition \[prop:contcons\]. 
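For later use, let us record the Fourier transform of the scaled cut-off; with the $(2\pi)^{-d/2}$ normalization of the Fourier transform used throughout, the change of variables $z = {\varepsilon}^{-\alpha} x$ gives $${\widehat{\chi_{{\varepsilon}}}}(\xi) = (2\pi)^{-d/2} \int_{{\mathbb{R}}^d} e^{-{\imath}x \cdot \xi}\, {\varepsilon}^{-\alpha d} \chi({\varepsilon}^{-\alpha} x) {\,\mathrm{d}}x = (2\pi)^{-d/2} \int_{{\mathbb{R}}^d} e^{-{\imath}{\varepsilon}^{\alpha} z \cdot \xi}\, \chi(z) {\,\mathrm{d}}z = {\widehat{\chi}}({\varepsilon}^{\alpha} \xi),$$ which is the identity behind the low-pass filter introduced below. 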
Define a low-pass filter operator $L_{{\varepsilon}}$ for $f \in L^2(\Omega)$ using ${\widehat{\chi_{{\varepsilon}}}}$ as Fourier multiplier: $${\widehat{L_{{\varepsilon}} f}}(\xi) = (2\pi)^{d/2} {\widehat{f}}(\xi) {\widehat{\chi_{{\varepsilon}}}}(\xi) = (2\pi)^{d/2} {\widehat{f}}(\xi) {\widehat{\chi}}({\varepsilon}^{\alpha} \xi).$$ In real space, $L_{{\varepsilon}}$ convolves $f$ with $\chi_{{\varepsilon}}$. Note that, using integration by parts, it is easy to see that $$\begin{aligned} & \label{eq:chidecay} {\lvert{\widehat{\chi_{{\varepsilon}}}}(\xi)\rvert} \leq C_k {\lvert{\varepsilon}^{\alpha} \xi\rvert}^{-k}, \quad \forall\ k \in {\mathbb{Z}}_+, \\ & \label{eq:chiunity} (2\pi)^{d/2} {\widehat{\chi_{{\varepsilon}}}}(0) = 1.\end{aligned}$$ Hence, $L_{{\varepsilon}}$ is indeed a low-pass filter. For simplicity of notation, we will denote $${\overline{u}}_{{\varepsilon}} = L_{{\varepsilon}} Q_{{\varepsilon}} u_{{\varepsilon}},$$ for lattice function $u_{{\varepsilon}}$ on $\Omega_{{\varepsilon}}$. We state and prove a consistency result for the linearized operator in terms of symbols. \[prop:Hcons\] There exists ${\varepsilon}_0 > 0$ and $s > 0$ such that for any ${\varepsilon}\leq {\varepsilon}_0$ and $\xi, \eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}$, we have $${\lvert{\widehat{h}}_{{\mathrm{CB}}}(\xi, \eta) - {\widehat{h}}_{{\mathrm{hy}}}(\xi, \eta)\rvert} \leq C {\varepsilon}^2 ({\lvert\eta\rvert} + 1)^s.$$ By definition, for $1 \leq j, k \leq d$, $$\begin{aligned} ({\widehat{h}}_{{\mathrm{hy}}})_{jk}(\xi, \eta) & = {\varepsilon}^d (2\pi)^{-d/2} \sum_{x \in \Omega_{{\varepsilon}}} e^{-{\imath}\xi \cdot x} ({\widetilde{h}}_{{\mathrm{hy}}})_{jk}(x, \eta) \\ & = {\varepsilon}^d (2\pi)^{-d/2} \sum_{x \in \Omega_{{\varepsilon}}} e^{-{\imath}(\xi+\eta) \cdot x} ({\mathcal{H}}_{{\mathrm{hy}}} (e_k f_{\eta}))_j(x). 
\end{aligned}$$ where $f_{\eta}(x) = e^{{\imath}x\cdot \eta}$ for $x \in \Omega$ and $$\begin{aligned} ({\widehat{h}}_{{\mathrm{CB}}})_{jk}(\xi, \eta) & = (2\pi)^{-d/2} \int_{\Omega} e^{- {\imath}\xi\cdot x} {\,\mathrm{d}}x ({\widetilde{h}}_{{\mathrm{CB}}})_{jk}(\eta) \\ & = {\varepsilon}^d (2\pi)^{-d/2} \sum_{x \in \Omega_{{\varepsilon}}} e^{-{\imath}\xi \cdot x} ({\widetilde{h}}_{{\mathrm{CB}}})_{jk}(\eta) \\ & = {\varepsilon}^d (2\pi)^{-d/2} \sum_{x \in \Omega_{{\varepsilon}}} e^{-{\imath}(\xi+\eta) \cdot x} ({\mathcal{H}}_{{\mathrm{CB}}} (e_k f_{\eta}))_j(x), \end{aligned}$$ where we have used the fact that ${\widetilde{h}}_{{\mathrm{CB}}}(x, \eta) = {\widetilde{h}}_{{\mathrm{CB}}}(\eta)$ due to translational symmetry. Note that we obtain the second line from the first line in the above equation using the fact that $\xi$ takes values in ${\mathbb{L}}^{\ast}_{{\varepsilon}}$, so that the integral equals the sum. Hence, taking the difference of the above two equations, we obtain the bound $${\lvert{\widehat{h}}_{{\mathrm{hy}}}(\xi, \eta) - {\widehat{h}}_{{\mathrm{CB}}}(\xi, \eta) \rvert} \leq C \sup_{1\leq k \leq d} {\lVert{\mathcal{H}}_{{\mathrm{hy}}} (e_k f_{\eta}) - {\mathcal{H}}_{{\mathrm{CB}}} (e_k f_{\eta})\rVert}_{L^{\infty}_{{\varepsilon}}}.$$ Note that by the definition of the linearized operators ${\mathcal{H}}_{{\mathrm{hy}}}$ and ${\mathcal{H}}_{{\mathrm{CB}}}$, we have $${\mathcal{H}}_{{\mathrm{hy}}} (e_k f_{\eta}) - {\mathcal{H}}_{{\mathrm{CB}}} (e_kf_{\eta}) = \lim_{t \to 0^+} \frac{1}{t} \bigl( {\mathcal{F}}_{{\mathrm{hy}}}[x + t (e_k f_{\eta})] - {\mathcal{F}}_{{\mathrm{CB}}}[x + t (e_k f_{\eta})] \bigr).$$ Hence, $$\begin{aligned} {\lVert{\mathcal{H}}_{{\mathrm{hy}}} (e_kf_{\eta}) - {\mathcal{H}}_{{\mathrm{CB}}} (e_kf_{\eta})\rVert}_{L^{\infty}_{{\varepsilon}}} & = \lim_{t \to 0^+} \frac{1}{t} {\lVert{\mathcal{F}}_{{\mathrm{hy}}}[x + t (e_kf_{\eta})] - {\mathcal{F}}_{{\mathrm{CB}}}[x + t (e_kf_{\eta})]\rVert}_{L^{\infty}_{{\varepsilon}}} \\ & \leq 
C {\varepsilon}^2 {\lVerte_k f_{\eta}\rVert}_{W^{16, \infty}} \leq C {\varepsilon}^2 {\lVerte_k f_{\eta}\rVert}_{H^{s}} \leq C {\varepsilon}^2 (1 + {\lvert\eta\rvert})^{s}, \end{aligned}$$ where $s$ is chosen so that the Sobolev inequality $${\lVertf\rVert}_{W^{16, \infty}(\Omega)} \leq C {\lVertf\rVert}_{H^{s}(\Omega)}$$ holds for any $f \in H^{s}(\Omega)$ ($s$ depends on the dimension). Here, we have used Corollary \[coro:consqc\], noticing that ${\lVertt e_k f_{\eta}\rVert}_{L^{\infty}}$ is uniformly bounded in $\eta$ as $t \to 0$. This concludes the proof. The proof of Proposition \[prop:Hcons\] actually gives, for any ${\varepsilon}\leq {\varepsilon}_0$, $x \in \Omega_{{\varepsilon}}$ and $\eta \in {\mathbb{L}}_{{\varepsilon}}^{\ast}$, $$\label{eq:Hcons} {\lvert {\widetilde{h}}_{{\mathrm{hy}}}(x, \eta) - {\widetilde{h}}_{{\mathrm{CB}}}(\eta) \rvert} \leq C {\varepsilon}^2 (1 + {\lvert\eta\rvert})^s.$$ Combined with Corollary \[cor:lowbound\], we get as a corollary \[coro:CB\] ${\widetilde{h}}_{{\mathrm{CB}}}(\xi)$ is positive definite and there exists $a_{{\mathrm{CB}}}>0$ such that for any $\xi \in {\mathbb{L}}^{\ast}$, $$\det {\widetilde{h}}_{{\mathrm{CB}}}(\xi) \geq a_{{\mathrm{CB}}}\Lambda_0^{2d}(\xi).$$ Fix $\xi \in {\mathbb{L}}^{\ast}$ and take ${\varepsilon}_1$ sufficiently small, so that for ${\varepsilon}< {\varepsilon}_1$, $\xi \in {\mathbb{L}}_{{\varepsilon}}^{\ast}$ (it suffices to take ${\varepsilon}_1$ so small that ${\varepsilon}_1 \xi \in \Gamma^{\ast}$). Without loss of generality, we can take ${\varepsilon}_1$ less than ${\varepsilon}_0$ in Proposition \[prop:Hcons\]. 
From the continuous dependence of matrix determinants on matrix elements, we get from that for any ${\varepsilon}\leq {\varepsilon}_1$ and $x \in \Omega_{{\varepsilon}}$, $${\lvert\det {\widetilde{h}}_{{\mathrm{hy}}}(x, \xi) - \det {\widetilde{h}}_{{\mathrm{CB}}}(\xi)\rvert} \leq C {\varepsilon}^2 (1 + {\lvert\xi\rvert})^s.$$ Combining the last inequality with Corollary \[cor:lowbound\], we get the desired estimate by taking ${\varepsilon}\to 0$. With these preparations, let us now state the key proposition that will be used in the proof of Theorem \[thm:stability\]. \[prop:contcons\] For $\{v_{{\varepsilon}}\}_{{\varepsilon}> 0}$ such that $v_{{\varepsilon}}\in H^2_{{\varepsilon}}(\Omega)$ and ${\lVertv_{{\varepsilon}}\rVert}_{{\varepsilon}, 2}$ is uniformly bounded, we have $$\lim_{{\varepsilon}\to 0+} {\lVert{\mathcal{H}}_{{\mathrm{CB}}} {\overline{v}}_{{\varepsilon}}- {\overline{{\mathcal{H}}_{{\mathrm{hy}}} v_{{\varepsilon}}}}\rVert}_{L^2(\Omega)} = 0.$$ Assuming the validity of Proposition \[prop:contcons\], to which we will come back at the end of this section, the proof of Theorem \[thm:stability\] proceeds by *reductio ad absurdum*. Suppose does not hold; then there is a sequence of functions $\{w_k\}$ and ${\varepsilon}_k > 0$ such that $$\begin{aligned} & {\lVertw_k\rVert}_{{\varepsilon}_k, 2} \to \infty, && \text{as } k \to \infty; \\ & {\lVert{\mathcal{H}}_{{\mathrm{hy}}} w_k\rVert}_{{\varepsilon}_k, 0} \leq c, && \text{for all } k; \\ & \sum_{x\in \Omega_{{\varepsilon}_k}} w_k(x) = 0, && \text{for all } k. \end{aligned}$$ Setting $v_k = w_k / {\lVertw_k\rVert}_{{\varepsilon}_k, 2}$, we then have $$\begin{aligned} \label{eq:useq1} & {\lVertv_k\rVert}_{{\varepsilon}_k, 2} = 1 && \text{for all } k;\\ \label{eq:useq2} & {\lVert{\mathcal{H}}_{{\mathrm{hy}}} v_k\rVert}_{{\varepsilon}_k, 0} \to 0, && \text{as } k \to \infty; \\ \label{eq:useq3} & \sum_{x\in \Omega_{{\varepsilon}_k}} v_k(x) = 0, && \text{for all } k. 
\end{aligned}$$ Write $${\mathcal{H}}_{{\mathrm{CB}}} {\overline{v}}_k = {\overline{{\mathcal{H}}_{{\mathrm{hy}}} v_k}} + ( {\mathcal{H}}_{{\mathrm{CB}}} {\overline{v}}_k - {\overline{{\mathcal{H}}_{{\mathrm{hy}}} v_k}}).$$ Since ${\lVert{\mathcal{H}}_{{\mathrm{hy}}} v_k\rVert}_{{\varepsilon}_k, 0} \to 0$, we have $${\lVert{\overline{{\mathcal{H}}_{{\mathrm{hy}}} v_k}}\rVert}_{L^2(\Omega)} \to 0, \quad \text{as } k \to \infty.$$ Moreover, by Proposition \[prop:contcons\], $${\lVert{\mathcal{H}}_{{\mathrm{CB}}} {\overline{v}}_k - {\overline{{\mathcal{H}}_{{\mathrm{hy}}} v_k}}\rVert}_{L^2(\Omega)} \to 0, \quad \text{as } k \to \infty.$$ Hence ${\lVert{\mathcal{H}}_{{\mathrm{CB}}} {\overline{v}}_k \rVert}_{L^2(\Omega)} \to 0$. Note also that the average of ${\overline{v}}_k$ is zero, since ${\widehat{{\overline{v}}_k}}(0) = 0$. By the invertibility of ${\mathcal{H}}_{{\mathrm{CB}}}$ on the subspace orthogonal to the constant functions, ${\lVert{\overline{v}}_k\rVert}_{L^2(\Omega)} \to 0$, as $ k \to \infty$, while ${\lVertv_k\rVert}_{{\varepsilon}_k, 2} = 1$. It then follows that ${\lVertv_k\rVert}_{{\varepsilon}_k, 0} \to 0$. 
Indeed, since $${\lVertv_k\rVert}_{{\varepsilon}_k, 1}^2 = \sum_{\xi \in {\mathbb{L}}^{\ast}_{{\varepsilon}_k}} \Lambda^2_{{\varepsilon}_k}(\xi) {\lvert{\widehat{v_k}}(\xi)\rvert}^2 \leq 1,$$ for any $\delta > 0$, there exist $\Xi>0$ and $k_1$, such that for any $k \geq k_1$, $$\label{eq:k1} \sum_{\xi \in {\mathbb{L}}^{\ast}_{{\varepsilon}_k},\, {\lvert\xi\rvert} \geq \Xi} {\lvert{\widehat{v_k}}(\xi)\rvert}^2 < \delta / 2.$$ On the other hand, due to , there exists $k_2$, such that for $k \geq k_2$ $$\label{eq:k2} \sum_{\xi \in {\mathbb{L}}^{\ast}_{{\varepsilon}_k}, \, {\lvert\xi\rvert} < \Xi} \bigl\lvert {\lvert{\widehat{v_k}}(\xi)\rvert}^2 - {\lvert{\widehat{{\overline{v}}_k}}(\xi)\rvert}^2 \bigr\rvert \leq \delta / 4.$$ Moreover, as ${\lVert{\overline{v}}_k\rVert}_{L^2} \to 0$, there exists $k_3$, such that for $k \geq k_3$, $$\label{eq:k3} \sum_{\xi \in {\mathbb{L}}^{\ast}_{{\varepsilon}_k}, {\lvert\xi\rvert} < \Xi} {\lvert{\widehat{{\overline{v}}_k}}(\xi)\rvert}^2 \leq \delta / 4.$$ Combining – , we have for $k \geq \max(k_1, k_2, k_3)$, $${\lVertv_k\rVert}_{{\varepsilon}_k, 0}^2 = \sum_{\xi \in {\mathbb{L}}^{\ast}_{{\varepsilon}_k}} {\lvert{\widehat{v_k}}(\xi)\rvert}^2\leq \delta.$$ Hence, $\lim_{k \to \infty} {\lVertv_k\rVert}_{{\varepsilon}_k, 0} = 0$. From Theorem \[thm:regularity\], this implies $$\lim_{k\to \infty} {\lVertv_k\rVert}_{{\varepsilon}_k, 2} = 0.$$ The contradiction with the choice of $v_k$ proves the Theorem. Using a perturbation argument, we may extend the results of Theorem \[thm:stability\] to a deformed state $u$. 
\[thm:ustability\] Under Assumptions \[assump:stabatom\] and \[assump:stabCB\], there exists $\delta > 0$, such that for any ${\varepsilon}> 0$, any $u$ with ${\lVertu\rVert}_{W^{2, \infty}_{{\varepsilon}}} \leq \delta$, and any $v \in H^2_{{\varepsilon}}(\Omega)$, we have $$\label{eq:ustability} {\lVertv\rVert}_{{\varepsilon},2} \leq C {\lVert{\mathcal{H}}_{{\mathrm{hy}}}[u] v\rVert}_{{\varepsilon},0},$$ where the constant depends on $\delta$, but is independent of $u$, $v$ and ${\varepsilon}$. This theorem follows by a perturbation argument from Theorem \[thm:stability\]. Denote by $v_0$ the solution of $${\mathcal{H}}_{{\mathrm{hy}}}[0] v_0=f,$$ where $f = {\mathcal{H}}_{{\mathrm{hy}}}[u] v$. We immediately have $${\mathcal{H}}_{{\mathrm{hy}}}[0](v-v_0)={\left(\,{\mathcal{H}}_{{\mathrm{hy}}}[0]-{\mathcal{H}}_{{\mathrm{hy}}}[u]\,\right)}v.$$ Using Theorem \[thm:stability\], we have $${\lVertv-v_0\rVert}_{{\varepsilon},2}\le C{\lVert{\left(\,{\mathcal{H}}_{{\mathrm{hy}}}[0]-{\mathcal{H}}_{{\mathrm{hy}}}[u]\,\right)}v\rVert}_{{\varepsilon},0}\le C {\lVert{\nabla}u\rVert}_{W^{1,\infty}_{{\varepsilon}}}{\lVertv\rVert}_{{\varepsilon},2}.$$ By the triangle inequality, we have $$\begin{aligned} {\lVertv\rVert}_{{\varepsilon},2}&\le{\lVertv_0\rVert}_{{\varepsilon},2}+{\lVertv-v_0\rVert}_{{\varepsilon},2}\\ &\le C{\lVert{\mathcal{H}}_{{\mathrm{hy}}}[0] v_0\rVert}_{{\varepsilon},0}+C {\lVert{\nabla}u\rVert}_{W^{1,\infty}_{{\varepsilon}}}{\lVertv\rVert}_{{\varepsilon},2}\\ &=C{\lVert{\mathcal{H}}_{{\mathrm{hy}}}[u] v\rVert}_{{\varepsilon},0}+C{\lVert{\nabla}u\rVert}_{W^{1,\infty}_{{\varepsilon}}}{\lVertv\rVert}_{{\varepsilon},2}\\ &\le C{\lVert{\mathcal{H}}_{{\mathrm{hy}}}[u] v\rVert}_{{\varepsilon},0}+C\delta{\lVertv\rVert}_{{\varepsilon},2},\end{aligned}$$ which gives  by choosing $\delta=1/(2C)$. We conclude this section with the proof of Proposition \[prop:contcons\]. We work in the Fourier domain. 
By definition, $$({\mathcal{H}}_{{\mathrm{CB}}} {\overline{v}}_{{\varepsilon}})(x) = \sum_{\xi \in {\mathbb{L}}^{\ast}_{{\varepsilon}}} e^{{\imath}x \cdot \xi} {\widetilde{h}}_{{\mathrm{CB}}}(x, \xi) {\widehat{\chi}}({\varepsilon}^{\alpha} \xi) {\widehat{v_{{\varepsilon}}}}(\xi).$$ Hence, taking Fourier transform, $$\begin{aligned} {\widehat{{\mathcal{H}}_{{\mathrm{CB}}} {\overline{v}}_{{\varepsilon}}}}(\xi) & = (2\pi)^{-d/2} \int_{\Omega} e^{-{\imath}\xi \cdot x} \sum_{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}} e^{{\imath}x \cdot \eta} {\widetilde{h}}_{{\mathrm{CB}}}(x, \eta) {\widehat{\chi}}({\varepsilon}^{\alpha} \eta) {\widehat{v_{{\varepsilon}}}}(\eta) {\,\mathrm{d}}x \\ & = \sum_{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}} {\widehat{h}}_{{\mathrm{CB}}}(\xi - \eta, \eta) {\widehat{\chi}}({\varepsilon}^{\alpha} \eta) {\widehat{v_{{\varepsilon}}}}(\eta), \end{aligned}$$ where $${\widehat{h}}_{{\mathrm{CB}}}(\xi, \eta) = (2\pi)^{-d/2} \int_{\Omega} e^{-{\imath}\xi \cdot x} {\widetilde{h}}_{{\mathrm{CB}}}(x, \eta) {\,\mathrm{d}}x$$ is the Fourier transform of the symbol with respect to $x$. 
On the other hand, for the discrete system, we have $$\begin{aligned} {\widehat{{\overline{{\mathcal{H}}_{{\mathrm{hy}}} v_{{\varepsilon}}}}}}(\xi) & = {\widehat{\chi}}({\varepsilon}^{\alpha} \xi) {\varepsilon}^d (2\pi)^{-d/2} \sum_{x \in \Omega_{{\varepsilon}}} e^{-{\imath}\xi \cdot x} \sum_{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}} e^{{\imath}x \cdot \eta}{\widetilde{h}}_{{\mathrm{hy}}}(x, \eta) {\widehat{v_{{\varepsilon}}}}(\eta) \\ & = {\widehat{\chi}}({\varepsilon}^{\alpha} \xi) \sum_{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}} {\widehat{h}}_{{\mathrm{hy}}}(\xi - \eta, \eta) {\widehat{v_{{\varepsilon}}}}(\eta), \end{aligned}$$ where $${\widehat{h}}_{{\mathrm{hy}}}(\xi, \eta) = {\varepsilon}^d (2\pi)^{-d/2} \sum_{x \in \Omega_{{\varepsilon}}} e^{-{\imath}\xi \cdot x} {\widetilde{h}}_{{\mathrm{hy}}}(x, \eta).$$ Let us compare the difference between ${\mathcal{H}}_{{\mathrm{CB}}} {\overline{v}}_{{\varepsilon}}$ and ${\overline{{\mathcal{H}}_{{\mathrm{hy}}} v_{{\varepsilon}}}}$. We write $$\begin{aligned} \left\lvert {\widehat{{\mathcal{H}}_{{\mathrm{CB}}} {\overline{v}}_{{\varepsilon}}}}(\xi) \right. & - \left. 
{\widehat{{\overline{{\mathcal{H}}_{{\mathrm{hy}}} v_{{\varepsilon}}}}}} (\xi) \right\rvert \\ & = \biggl\lvert \sum_{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}} \Bigl( {\widehat{\chi}}({\varepsilon}^{\alpha} \eta) {\widehat{h}}_{{\mathrm{CB}}}(\xi - \eta, \eta) - {\widehat{\chi}}({\varepsilon}^{\alpha} \xi) {\widehat{h}}_{{\mathrm{hy}}}(\xi - \eta, \eta)\Bigr) {\widehat{v_{{\varepsilon}}}}(\eta) \biggr\rvert \\ & \leq {\lvert{\widehat{I_1}}(\xi)\rvert} + {\lvert{\widehat{I_2}}(\xi)\rvert}, \end{aligned}$$ where $$\begin{aligned} & {\widehat{I_1}}(\xi) = \sum_{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}} \bigl( {\widehat{\chi}}({\varepsilon}^{\alpha} \xi) - {\widehat{\chi}}({\varepsilon}^{\alpha} \eta) \bigr) {\widehat{h}}_{{\mathrm{CB}}}(\xi - \eta, \eta) {\widehat{v_{{\varepsilon}}}}(\eta), \\ & {\widehat{I_2}}(\xi) = {\widehat{\chi}}({\varepsilon}^{\alpha} \xi) \sum_{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}} \bigl( {\widehat{h}}_{{\mathrm{CB}}}(\xi - \eta, \eta) - {\widehat{h}}_{{\mathrm{hy}}}(\xi - \eta, \eta) \bigr) {\widehat{v_{{\varepsilon}}}}(\eta). \end{aligned}$$ It suffices to prove that the $L^2$ norms of $I_1$ and $I_2$ both go to zero as ${\varepsilon}\to 0$. Let us estimate $I_1$ first. 
By the smoothness of $\chi$, we have ${\lvert{\widehat{\chi}}({\varepsilon}^{\alpha}\xi) - {\widehat{\chi}}({\varepsilon}^{\alpha}\eta)\rvert} \leq C {\varepsilon}^{\alpha}{\lvert\xi - \eta\rvert}$, hence $${\lvert{\widehat{I_1}}(\xi)\rvert} \leq C {\varepsilon}^{\alpha} \sum_{\eta\in{\mathbb{L}}^{\ast}_{{\varepsilon}}} {\lvert\xi - \eta\rvert} \bigl\lvert \Lambda^{-2}(\eta) {\widehat{h}}_{{\mathrm{CB}}}(\xi - \eta, \eta) \bigr\rvert {\lvert\Lambda^2(\eta) {\widehat{v_{{\varepsilon}}}}(\eta)\rvert}.$$ Define $\theta(\xi)$ as $$\theta(\xi) = {\lvert\xi\rvert} \sup_{\eta \in {\mathbb{L}}^{\ast}} \bigl\lvert \Lambda^{-2}(\eta) {\widehat{h}}_{{\mathrm{CB}}}(\xi, \eta) \bigr\rvert.$$ By the smoothness of ${\widetilde{h}}_{{\mathrm{CB}}}(x, \xi)$ with respect to $x$ and the fact that ${\mathcal{H}}_{{\mathrm{CB}}}$ is a second order operator, we have ${\lvert\xi \Lambda^{-2}(\eta) {\widehat{h}}_{{\mathrm{CB}}}(\xi, \eta)\rvert} \leq C {\lvert\xi\rvert}^{-d-1}$ uniformly in $\eta$. Hence, $\theta \in l^1({\mathbb{L}}^{\ast})$ as a function of $\xi$. Therefore, $$\begin{aligned} {\lVertI_1\rVert}_{L^2(\Omega)} = {\lVert{\widehat{I_1}}\rVert}_{l^2({\mathbb{L}}^{\ast})} & \leq C {\varepsilon}^{\alpha} {\lVert\theta\rVert}_{l^1({\mathbb{L}}^{\ast})} \biggl( \sum_{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}} \Lambda^4(\eta) {\lvert{\widehat{v_{{\varepsilon}}}}(\eta)\rvert}^2 \biggr)^{1/2} \\ & \leq C {\varepsilon}^{\alpha} {\lVert\theta\rVert}_{l^1({\mathbb{L}}^{\ast})} {\lVertQ_{{\varepsilon}} v_{{\varepsilon}}\rVert}_{H^2(\Omega)} \\ & \leq C {\varepsilon}^{\alpha} {\lVert\theta\rVert}_{l^1({\mathbb{L}}^{\ast})} {\lVertv_{{\varepsilon}}\rVert}_{H^2_{{\varepsilon}}(\Omega)}, \end{aligned}$$ where the first inequality results from Young’s inequality. This proves that ${\lVertI_1\rVert}_{L^2(\Omega)}$ goes to zero as ${\varepsilon}\to 0$. Let us consider $I_2$ next. 
Taking $\alpha_1 \in (\alpha, 1)$, we break $I_2$ into three parts $${\widehat{I_2}}(\xi) = {\widehat{I_{21}}}(\xi) + {\widehat{I_{22}}}(\xi) + {\widehat{I_{23}}}(\xi),$$ where $$\begin{aligned} & {\widehat{I_{21}}}(\xi) = 1_{{\lvert\xi\rvert} \geq \pi {\varepsilon}^{-\alpha_1}} {\widehat{\chi}}({\varepsilon}^{\alpha} \xi) \sum_{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}} \bigl( {\widehat{h}}_{{\mathrm{CB}}}(\xi - \eta, \eta) - {\widehat{h}}_{{\mathrm{hy}}}(\xi - \eta, \eta) \bigr) {\widehat{v_{{\varepsilon}}}}(\eta), \\ & {\widehat{I_{22}}}(\xi) = 1_{{\lvert\xi\rvert} < \pi {\varepsilon}^{-\alpha_1}} {\widehat{\chi}}({\varepsilon}^{\alpha} \xi) \sum_{\substack{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}, \\ {\lvert\eta\rvert} \geq 2\pi {\varepsilon}^{-\alpha_1}}} \bigl( {\widehat{h}}_{{\mathrm{CB}}}(\xi - \eta, \eta) - {\widehat{h}}_{{\mathrm{hy}}}(\xi - \eta, \eta) \bigr) {\widehat{v_{{\varepsilon}}}}(\eta), \\ & {\widehat{I_{23}}}(\xi) = 1_{{\lvert\xi\rvert} < \pi {\varepsilon}^{-\alpha_1}} {\widehat{\chi}}({\varepsilon}^{\alpha} \xi) \sum_{\substack{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}, \\ {\lvert\eta\rvert} < 2\pi {\varepsilon}^{-\alpha_1}}} \bigl( {\widehat{h}}_{{\mathrm{CB}}}(\xi - \eta, \eta) - {\widehat{h}}_{{\mathrm{hy}}}(\xi - \eta, \eta) \bigr) {\widehat{v_{{\varepsilon}}}}(\eta). \end{aligned}$$ We will control each term: $I_{21}$ is small due to the decay property of ${\widehat{\chi}}$; $I_{22}$ is small since $\xi$ and $\eta$ are well separated; $I_{23}$ is small due to consistency. 
- Define $w$ by $${\widehat{w}}(\xi) = \sum_{\eta\in{\mathbb{L}}^{\ast}_{{\varepsilon}}} \bigl({\widehat{h}}_{{\mathrm{CB}}}(\xi-\eta, \eta) - {\widehat{h}}_{{\mathrm{hy}}}(\xi-\eta, \eta) \bigr) {\widehat{v_{{\varepsilon}}}}(\eta).$$ We observe that ${\widehat{w}}(\xi)$ is the Fourier transform of $$w(x) = ({\mathcal{H}}_{{\mathrm{CB}}} Q_{{\varepsilon}} v_{{\varepsilon}})(x) - (Q_{{\varepsilon}}({\mathcal{H}}_{{\mathrm{hy}}} v_{{\varepsilon}}))(x).$$ Hence, ${\lVertw\rVert}_{L^2(\Omega)} \leq C{\lVertv_{{\varepsilon}}\rVert}_{{\varepsilon}, 2}$. By , we have $${\lvert{\widehat{\chi}}({\varepsilon}^{\alpha} \xi)\rvert} \leq C_k {\varepsilon}^{k(\alpha_1 - \alpha)}, \quad \forall\, {\lvert\xi\rvert} \geq \pi {\varepsilon}^{-\alpha_1},$$ for any positive integer $k$. Since ${\widehat{I_{21}}}(\xi) = 1_{{\lvert\xi\rvert} \geq \pi {\varepsilon}^{-\alpha_1}} {\widehat{\chi}}({\varepsilon}^{\alpha}\xi) {\widehat{w}}(\xi)$, we conclude that ${\lVertI_{21}\rVert}_{L^2(\Omega)} \to 0$. - We have $$\begin{gathered} \label{eq:I22} {\lvert{\widehat{I_{22}}}(\xi)\rvert} \leq C \sum_{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}} } {\lvert\Lambda^{-2}(\eta) {\widehat{h}}_{{\mathrm{CB}}}(\xi - \eta, \eta)\rvert} {\lvert\Lambda^2(\eta) {\widehat{v_{{\varepsilon}}}}(\eta)\rvert} 1_{{\lvert\xi - \eta\rvert} > \pi{\varepsilon}^{-\alpha_1}} \\ + C \sum_{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}} } {\lvert\Lambda^{-2}_{{\varepsilon}}(\eta) {\widehat{h}}_{{\mathrm{hy}}}(\xi - \eta, \eta)\rvert} {\lvert\Lambda^2_{{\varepsilon}}(\eta) {\widehat{v_{{\varepsilon}}}}(\eta)\rvert} 1_{{\lvert\xi - \eta\rvert} > \pi{\varepsilon}^{-\alpha_1}}. \end{gathered}$$ The arguments for the two terms are analogous; let us focus on the first term. 
Consider $\varphi(\xi)$ given by $$\varphi(\xi) = \sup_{\eta \in{\mathbb{L}}^{\ast}} {\lvert\Lambda^{-2}(\eta) {\widehat{h}}_{{\mathrm{CB}}}(\xi, \eta)\rvert}.$$ Since ${\widetilde{h}}_{{\mathrm{CB}}}(x, \eta)$ is smooth with respect to $x$ and ${\mathcal{H}}_{{\mathrm{CB}}}$ is a second-order operator, we have $\varphi \in l^1({\mathbb{L}}^{\ast})$ as a function of $\xi$. Hence $$\lim_{{\varepsilon}\to 0} {\lVert\varphi(\xi) 1_{{\lvert\xi\rvert} > \pi {\varepsilon}^{-\alpha_1}}\rVert}_{l^1({\mathbb{L}}^{\ast})} = 0.$$ Therefore, using Young’s inequality, the first term on the right hand side of is bounded by $ C {\lVert\varphi(\xi) 1_{{\lvert\xi\rvert} > \pi {\varepsilon}^{-\alpha_1}}\rVert}_{l^1({\mathbb{L}}^{\ast})} {\lVertQ_{{\varepsilon}} v_{{\varepsilon}}\rVert}_{H^2(\Omega)}$, which goes to zero as ${\varepsilon}\to 0$. Hence, $I_{22}$ goes to zero in the $L^2$ norm. - From Proposition \[prop:Hcons\], we have $${\lvert{\widehat{h}}_{{\mathrm{CB}}}(\xi, \eta) - {\widehat{h}}_{{\mathrm{hy}}}(\xi, \eta)\rvert} \leq C {\varepsilon}^2 ({\lvert\eta\rvert} + 1)^s$$ for some $s \geq 0$. 
As ${\lvert\eta\rvert} < 2\pi {\varepsilon}^{-\alpha_1}$, we have $${\lvert{\widehat{h}}_{{\mathrm{CB}}}(\xi, \eta) - {\widehat{h}}_{{\mathrm{hy}}}(\xi, \eta)\rvert} \leq C {\varepsilon}^{(2-s\alpha_1)}.$$ Therefore, $$\begin{aligned} \sum_{\xi\in{\mathbb{L}}^{\ast}} {\lvert{\widehat{I_{23}}}(\xi)\rvert}^2 & \leq C \sum_{\substack{\xi \in {\mathbb{L}}^{\ast}, \\ {\lvert\xi\rvert} < \pi {\varepsilon}^{-\alpha_1}}} \biggl( \sum_{\substack{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}, \\ {\lvert\eta\rvert} < 2\pi {\varepsilon}^{-\alpha_1}}} \bigl( {\widehat{h}}_{{\mathrm{CB}}}(\xi - \eta, \eta) - {\widehat{h}}_{{\mathrm{hy}}}(\xi - \eta, \eta) \bigr) {\widehat{v_{{\varepsilon}}}}(\eta) \biggr)^2 \\ & \leq C \sum_{\substack{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}, \\ {\lvert\eta\rvert} < 2\pi {\varepsilon}^{-\alpha_1}}} {\lvert{\widehat{v_{{\varepsilon}}}}(\eta)\rvert}^2 \sum_{\substack{\xi \in {\mathbb{L}}^{\ast}, \\ {\lvert\xi\rvert} < \pi {\varepsilon}^{-\alpha_1}}} \bigl \lvert {\widehat{h}}_{{\mathrm{CB}}}(\xi - \eta, \eta) - {\widehat{h}}_{{\mathrm{hy}}}(\xi - \eta, \eta) \bigr\rvert^2 \\ & \leq C {\varepsilon}^{4 - (2s + d) \alpha_1} \sum_{\substack{\eta \in {\mathbb{L}}^{\ast}_{{\varepsilon}}, \\ {\lvert\eta\rvert} < 2\pi {\varepsilon}^{-\alpha_1}}} {\lvert{\widehat{v_{{\varepsilon}}}}(\eta)\rvert}^2. \end{aligned}$$ Hence, by choosing $\alpha_1$ (and also $\alpha$) sufficiently small so that $ \alpha_1 < 4 / (2s + d)$, we have ${\lVertI_{23}\rVert}_{L^2} \to 0$ as ${\varepsilon}\to 0$. To sum up, we have proved that both ${\lVertI_1\rVert}_{L^2(\Omega)}$ and ${\lVertI_2\rVert}_{L^2(\Omega)}$ go to zero as ${\varepsilon}\to 0$. The proposition is proved. Convergence of the force-based hybrid method {#sec:convergence} ============================================ With the consistency and stability results prepared in the last three sections, we are now ready to prove the main result, Theorem \[thm:main\]. 
The proof follows the spirit of Strang’s convergence proof of nonlinear finite difference schemes [@Strang:1964]. As a direct consequence of Corollary \[coro:consqc\], we have the following. \[coro:highorder\] Under the same assumptions as in Theorem \[thm:main\], there exist positive constants $\delta$ and $M$ such that for any $p > d$ and $ f \in W^{15, p}(\Omega) \cap W^{1, p}_{\sharp}(\Omega) $ with ${\lVertf\rVert}_{W^{15, p}} \leq \delta$, if we set ${\widetilde{y}}=x+u(x)$ with $u$ the solution of the Cauchy-Born elasticity problem , then $${\lVert{\mathcal{F}}_{{\mathrm{hy}}}[{\widetilde{y}}] - f\rVert}_{L^{\infty}_{{\varepsilon}}} \leq M {\varepsilon}^2.$$ Using the remark under Lemma \[lem:consCB\], the regularity assumption on $f$ can be relaxed to $W^{5, p}(\Omega)$ with $p>d$. We take ${\widetilde{y}}$ to be that given by Corollary \[coro:highorder\]. It is easy to see that $$\int_0^1{\mathcal{H}}_{{\mathrm{hy}}}[ty+(1-t){\widetilde{y}}](x){{\,\mathrm{d}}t}\cdot(y-{\widetilde{y}}) ={\mathcal{F}}_{{\mathrm{hy}}}[y]-{\mathcal{F}}_{{\mathrm{hy}}}[{\widetilde{y}}].$$ Hence $y$ is the solution of  if and only if $$\int_0^1{\mathcal{H}}_{{\mathrm{hy}}}[ty+(1-t){\widetilde{y}}](x){{\,\mathrm{d}}t}\cdot(y-{\widetilde{y}}) =f-{\mathcal{F}}_{{\mathrm{hy}}}[{\widetilde{y}}].$$ For any $\kappa\in (3/2, 2)$, we define $$B={\left\{\,y\in X_{{\varepsilon}}\,\mid\,{\lVerty-{\widetilde{y}}\rVert}_{{\varepsilon},2}\le{\varepsilon}^{\kappa}\,\right\}}.$$ We define a map $T: B \to B$ as follows: for any $y\in B$, let $T(y)$ be the solution of the linear system $$\label {linear} \int_0^1{\mathcal{H}}_{{\mathrm{hy}}}[ty+(1-t){\widetilde{y}}](x){{\,\mathrm{d}}t}\cdot{\left(\,T(y)-{\widetilde{y}}\,\right)} =f-{\mathcal{F}}_{{\mathrm{hy}}}[{\widetilde{y}}].$$ We first show that $T$ is well defined.
We have $${\lVertty+(1-t){\widetilde{y}}-{\widetilde{y}}\rVert}_{{\varepsilon},2}\le t{\lVerty-{\widetilde{y}}\rVert}_{{\varepsilon},2}\le{\varepsilon}^{\kappa},$$ which gives that for sufficiently small ${\varepsilon}$ and $d\le 3$, there holds $${\lVertty+(1-t){\widetilde{y}}-{\widetilde{y}}\rVert}_{W_{{\varepsilon}}^{2,\infty}}\le{\varepsilon}^{\kappa-d/2} <\delta,$$ where $\delta$ is the constant appearing in Theorem \[thm:ustability\]. It follows from Theorem \[thm:ustability\] that the problem  is solvable and $$\label{err:eq} \begin{aligned} {\lVertT(y)-{\widetilde{y}}\rVert}_{{\varepsilon},2}&\leq C{\lVertf-{\mathcal{F}}_{{\mathrm{hy}}}[{\widetilde{y}}]\rVert}_{{\varepsilon},0}\\ &\le C{\lVertf-{\mathcal{F}}_{{\mathrm{at}}}[{\widetilde{y}}]\rVert}_{{\varepsilon},0} +C{\lVert{\mathcal{F}}_{{\mathrm{at}}}[{\widetilde{y}}]-{\mathcal{F}}_{{\mathrm{hy}}}[{\widetilde{y}}]\rVert}_{{\varepsilon},0}\\ &\le C{\varepsilon}^2, \end{aligned}$$ where we have used Corollary \[coro:highorder\]. For sufficiently small ${\varepsilon}$, we have $${\lVertT(y)-{\widetilde{y}}\rVert}_{{\varepsilon},2}\le{\varepsilon}^\kappa.$$ Therefore, $T(y)\in B$ and $T$ is well defined, which in turn implies $T(B)\subset B$ for sufficiently small ${\varepsilon}$. Now the existence of $y$ follows from the Brouwer fixed point theorem. The solution $y$ is locally unique since the Hessian at $y$ is nondegenerate. Denoting the solution by $y_{{\mathrm{hy}}}$, we then have from  that $$\label{err:eq2} {\lVert{\widetilde{y}}-y_{{\mathrm{hy}}}\rVert}_{{\varepsilon},2}\le C{\varepsilon}^2.$$ Proceeding along the same lines that lead to  and using Lemma \[lem:consCB\], we get $$\label{err:highorder} {\lVert{\widetilde{y}}-y_{{\mathrm{at}}}\rVert}_{{\varepsilon},2}\le C{\varepsilon}^2.$$ Finally, we conclude that $y_{{\mathrm{hy}}}$ satisfies  by combining  and .
[^1]: Part of the work was done during J.L.’s visit to State Key Laboratory of Scientific and Engineering Computing, Chinese Academy of Sciences. J.L. appreciates its hospitality. The work of P.B.M. was supported by National Natural Science Foundation of China under grants 10871197, 10932011, and by the funds from Creative Research Groups of China through grant 11021101, and by the support of CAS National Center for Mathematics and Interdisciplinary Sciences. We thank Weinan E and Robert V. Kohn for helpful discussions. [^2]: Usual linear interpolations are not sufficient for our purpose as we need high regularity of the interpolated functions.
---
author:
- 'Peng Jiang, Ren-Dong Nan, Lei Qian, You-Ling Yue'
bibliography:
- 'paperef.bib'
title: 'System solutions study on the fatigue of the FAST cable-net structure caused by form-changing operation $^*$'
---

Introduction {#sect:intro}
============

The Five-hundred-meter Aperture Spherical Radio Telescope (abbreviated as FAST), one of the key scientific projects of the national 11th Five-year Plan, was approved for construction by the National Development and Reform Commission on July 10, 2007. FAST will make observations at frequencies from 70 MHz to 3 GHz. The design resolution and pointing accuracy will be $2.9'$ and $8''$, respectively. To achieve these technical specifications, the root-mean-square value of the out-of-plane error of the reflector should be no more than 5$mm$ (@1 [@2]). According to the geometric optical principle of FAST (illustrated in Figure \[fig:1\]), the supporting structure of the reflector system should be capable of forming a parabolic surface from a spherical surface. This is the most prominent special requirement of the telescope beyond those of conventional structures. The National Astronomy Observatory of China has been carrying out a rigorous feasibility study of critical technologies since 1994. More than 100 scientists and engineers from 20 institutions are involved in the project. An extensive comparative analysis of several design plans for the supporting structure of the reflector system was performed (@3 [@4]). An Arecibo-type plan was selected because a cable-net structure can easily accommodate the complex topography of Karst terrain, thus avoiding heavy civil engineering works between the actuators and the ground (see Fig. \[fig:2\]) (@5).

![Geometric optical principle of FAST[]{data-label="fig:1"}](p1.eps){width="50.00000%"}

Later, an extensive numerical comparative analysis was performed among several different cell types, such as three-dimensional cells, Kiewitt cells, and geodesic triangular cells.
Geodesic triangular cells were selected because their stress distribution is more even (@5 [@6]). The cable net comprises 6670 steel cables and approximately 2225 cross nodes. The lengths of the cables range from 10.5 to 12.5$m$, the total weight of the net is 1300 tons, and the cross sections of the main cables take 16 different areas ranging from 280 to 1319$mm^{2}$. The cables are not interconnected, which allows us to set the cross section of each cable according to its load. The outer edge of the cable net is suspended from a steel ring beam whose diameter is 500$m$ (see Fig. \[fig:2\]). The cross nodes of the cable net are used as control points. Each is connected to an actuator by a down-tied cable. By controlling the actuator using feedback from the measurement and control system, the positions of the cross nodes can be adjusted to form an illuminated aperture with a diameter of up to 300$m$. This illuminated aperture moves along the spherical surface according to the zenith angle of the target objects (see S$_{1}$ and S$_{2}$ in Figure \[fig:1\]).

![Concept of the adaptive cable-net structure[]{data-label="fig:2"}](p2.eps){width="50.00000%"}

The above description clearly indicates that long-term observations by FAST will lead to long-term, frequent shape-changing operation of the cable-net structure. Previous research has shown that such shape-changing operation introduces stress ranges on the order of 500$MPa$, nearly twice the standard authorized value (@7 [@8; @9]). The cable-net structure is thus the most critical and fragile part of the FAST reflector system. To improve the reliability and service life of FAST, the present work, on the basis of the final design, searches for a way to reduce the stress range acting on the cables during shape-changing operation and develops a new type of cable system with ultra-high fatigue resistance that meets the requirements of the FAST observation principle.
Optimization of the deformation strategy
========================================

Analysis of the main influencing factors
----------------------------------------

In general, the illuminated aperture is a parabolic surface, whose profile can be expressed as $$x^{2}=2py+c \label{eqn:1}$$ The variables and parameters of the problem are the weight of the reflector element $w$, the density of the cable $\rho$, the elastic modulus of the cable $E$, the area of the cable cross section $A_{i}$, the geometric parameters of the illuminated aperture $p$ and $c$, the diameter of the illuminated aperture $d$, the radius of the cable-net structure $R$, and the diameter of the ring beam $D$. The stress amplitude can thus be expressed as $$\Delta\sigma=f(w, E, \rho, A_{i}, p, c, D, d, R). \label{eqn:2}$$ Among these governing parameters, $E$, $d$ and $\rho$ have independent dimensions. By applying the $\pi$ theorem from dimensional analysis (@10), we obtain $$\frac{\Delta \sigma}{E}=\prod\left(\frac{w}{\rho d}, \frac{A_{i}}{d^{2}}, \frac{p}{d}, \frac{c}{d^{2}}, \frac{D}{d}, \frac{R}{d}\right). \label{eqn:3}$$ In our final design, $w$, $d$, $R$, $D$, and $A_{i}$ are already determined; the weight of the reflector element $w$ is approximately $17 kg/m^{2}$, the diameter of the illuminated aperture $d$ is 300$m$, the radius of the cable net $R$ is 300$m$, the diameter of the ring beam $D$ is 500$m$, and all the cross sections of the cables $A_{i}$ have been determined according to their loads. For a steel cable, $\rho$ is approximately $7850 kg/m^{3}$. Thus, Eq. (\[eqn:3\]) can be simplified as $$\frac{\Delta\sigma}{E}=\prod\left(\frac{p}{d}, \frac{c}{d^{2}}\right) \label{eqn:4}$$ Furthermore, the nodes of the outer edge of the cable net are fixed to the ring beam and, in contrast to the case for the other cross nodes, their positions cannot be adjusted by the actuators.
To expand the observation zenith angle as much as possible, the outer edge of the illuminated aperture should coincide with the basic spherical surface. Only then can the outer edge of the illuminated aperture reach the outer edge of the cable net. Under such a constraint, we can derive that $$c=22500+2p\sqrt{67500}. \label{eqn:5}$$ Equation (\[eqn:4\]) can then be further simplified as $$\frac{\Delta\sigma}{E}=\prod\left(\frac{p}{d}\right) \label{eqn:6}$$ The elastic modulus of steel cable is about 200$GPa$, and thus the only variable remaining in the implicit function of Eq. (\[eqn:6\]) is $p/d$, which is the focal ratio of the telescope. From Eq. (\[eqn:6\]) we know that the fatigue stress range of the FAST cables depends chiefly on the focal ratio of the telescope. A different focal ratio leads to a different relative position between the illuminated aperture and the base plane, which is directly related to the internal forces of the cable net and the stroke of the actuators. In our earlier work (@6), we proposed three deformation strategies, namely strategies I, II, and III. The positions of the illuminated aperture relative to the spherical base plane in these three strategies are shown in Figure \[fig:3\].

![Relative positions of the parabolic surface and the base spherical surface in three previously proposed deformation strategies[]{data-label="fig:3"}](p3.eps){width="50.00000%"}

Among the strategies, strategies I and II respectively have the shortest actuator stroke and the minimum peak distance from the illuminated aperture to the base spherical surface. The actuator stroke is approximately 0.67$m$ in strategy I, and the maximum deviation is approximately 0.47$m$ in strategy II. Strategy III is based on the principle of equal arc lengths; i.e., the profile arc length of the illuminated aperture is equal to that of the spherical base plane. The focal ratios corresponding to these three strategies are respectively 0.4665, 0.4611, and 0.4613.
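As a quick numerical check (an illustrative sketch using only the numbers stated above; the variable names are ours), the constraint of Eq. (\[eqn:5\]) indeed pins the parabola to the sphere edge for any $p$:

```python
import math

# Illustrative check of Eq. (5): with c = 22500 + 2 p sqrt(67500), the parabola
# x^2 = 2 p y + c passes through the outer edge of the 300 m illuminated
# aperture on the R = 300 m sphere (sphere centered at the origin, observation
# direction as the axis), i.e. through the point (150, -sqrt(300^2 - 150^2)).
R, d = 300.0, 300.0
x_edge = d / 2.0
y_edge = -math.sqrt(R ** 2 - x_edge ** 2)           # = -sqrt(67500)

def c_from_p(p):
    return 22500.0 + 2.0 * p * math.sqrt(67500.0)   # Eq. (5)

# p values follow the text's identification of p/d with the focal ratio
for focal_ratio in (0.4611, 0.4613, 0.4665):        # strategies II, III, I
    p = focal_ratio * d
    residual = x_edge ** 2 - (2.0 * p * y_edge + c_from_p(p))
    assert abs(residual) < 1e-6                     # parabola meets sphere edge
```

The residual vanishes identically in $p$, which is why the focal ratio drops out of the edge condition and survives as the only free parameter in Eq. (\[eqn:6\]).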
In the preliminary design of the FAST cable-net structure, deformation strategy I was recommended as the preferred control scheme simply because it has the shortest actuator stroke. Obviously, it is unreasonable to omit the fatigue problem from the optimization of the deformation strategy, especially in the present case, where the stress range is of the order of 500$MPa$. The present work thus establishes a relation between the focal ratio and the deformation stress range. By considering both the actuator stroke and the stress range of the cables, we can reconsider our recommended deformation strategy for FAST observations.

Simplified analysis method
--------------------------

We use ANSYS software to build a finite element model of the entire supporting structure, which comprises the cable net, down-tied cables, ring beam, and steel pillars (see Fig. \[fig:2\]). Link10 and Beam188 elements are respectively used to simulate the response of the cable-net structure and the steel ring-beam structure. When not in operation, the FAST cable net should hold a spherical shape under the combined loads of gravity, initial stress, and the down-tied cables. To derive such a state, inverse iteration is applied. When FAST is making observations, the deformation procedure for forming the illuminated aperture from the spherical shape can be simulated by employing a conventional iteration method. According to the working principle of FAST, the motion of the cable-net cross nodes during the form-changing operation is very slow. If we take the sphere center as the origin of coordinates and take the observation direction as the polar axis, the polar equation of the illuminated aperture can be expressed as $$\sin^{2}\theta\cdot\rho^{2}-553.294\cdot\cos\theta\cdot\rho-166250=0 \label{eqn:7}$$ where $\rho$ is the distance from the cross node to the sphere center and $\theta$ is the polar angle of the cross node (see Fig. \[fig:1\]).
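Eq. (\[eqn:7\]) can be checked numerically. The sketch below is ours, not the authors' ANSYS model: solving the quadratic for each $\theta$ in a numerically stable way shows the surface sits about 0.47$m$ off the 300$m$ sphere at the aperture center and returns to the sphere at the 30-degree aperture edge, consistent with Eq. (\[eqn:5\]) and the peak deviations quoted above. With the coefficients exactly as printed, the root of magnitude ${\sim}300$ comes out negative, which we read as a sign-convention artifact and take in absolute value:

```python
import math

# Eq. (7) is a quadratic a*rho^2 + b*rho + c = 0 in rho for each polar angle
# theta. We solve it stably via the product of the roots (the other root is
# spurious and very large), and take the magnitude of the physical root.
def rho(theta):
    a = math.sin(theta) ** 2            # degenerates (linear) at theta = 0
    b = -553.294 * math.cos(theta)
    c = -166250.0
    big = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # spurious root
    return abs(c / (a * big))           # product of the two roots equals c/a

center = 166250.0 / 553.294             # theta -> 0 limit (equation turns linear)
edge = rho(math.radians(30.0))          # edge: half-angle 30 deg for d = R = 300 m
print(f"rho(center) = {center:.3f} m, rho(edge) = {edge:.3f} m")

# ~0.47 m off the 300 m sphere at the center; back on the sphere at the edge.
assert 300.0 < center < 300.5
assert abs(edge - 300.0) < 0.05
```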
In form-changing operation, the tangential displacement of the cross nodes is negligibly small. The velocity and acceleration of a cross node can then be expressed as $$\left\{\begin{array}{l} v(\theta)=\frac{d\rho}{d\theta}\cdot\frac{d\theta}{dt} \\ a(\theta)=\frac{d\left(\frac{d\rho}{d\theta}\cdot\frac{d\theta}{dt}\right)}{d\theta}\cdot\frac{d\theta}{dt} \end{array}\right. \label{eqn:8}$$ The tracking angular velocity of FAST is 15 degrees per hour. Substituting this angular velocity into Eq. (\[eqn:8\]), we find that the maximum speed of a cross node is no more than 0.58$mm/s$. Thus, the form-changing operation of the FAST cable net is treated as a quasi-static process here.

![Illustration of cross nodes used as discrete points to describe the continuous trajectory path of the illuminated aperture center[]{data-label="fig:4"}](p4.eps){width="45.00000%"}

According to the FAST working principle, the illuminated aperture center is restricted to a certain region, as shown in Fig. \[fig:4\]. This region contains 550 cross nodes, with the interval between any two adjacent nodes being no more than 12.5$m$, which corresponds to a central angle of only about 1 degree for the 300-$m$-radius reflector; this interval is much smaller than the illuminated aperture. We assume that the distribution of the 550 discrete points is sufficiently dense, so that any possible observation state is approximately equivalent to one of 550 deformation states whose illuminated aperture centers correspond to these 550 cross nodes. The peak stress range of each cable can then be easily derived from the simulation results for the 550 deformation states. To verify whether the distribution of discrete points is sufficiently dense, a comparative analysis was performed for deformation strategy II using the abovementioned 550 discrete points and a denser distribution of discrete points.
In the latter case, the discrete points include not only the cross nodes but also the midpoint of each cable and the center of each triangular element; thus, more than 2200 deformation states need to be simulated for the one deformation strategy. The calculation results derived using the two sets of discrete points described above are shown in Fig. \[fig:5\]a and b. The peak stress ranges derived using 550 and 2200 discrete points are respectively 488 and 491$MPa$, a difference of less than 1%. We thus assume that using 550 cross nodes provides sufficient accuracy in our case. The subsequent work in this paper is based on this assumption.

![Comparison of the simulation results of deformation strategy II. (a) Result obtained using 550 discrete points. (b) Result obtained using 2200 discrete points.[]{data-label="fig:5"}](p5.eps){width="80.00000%"}

Analysis results
----------------

Employing the same procedure used in the previous section, the peak stress ranges generated by several deformation strategies with different focal ratios can be derived (see Table \[tab:1\]). The peak stress range, taken as an analytical factor, is obtained for different focal ratios. The simulation results reveal that the stress range has a minimum value of 455$MPa$ when the focal ratio is 0.4621. To verify that the optimum focal ratio is 0.4621 for the fatigue problem of the cable net, focal ratios of 0.4620 and 0.4622 were also investigated. The simulations showed that the peak stress ranges when employing these two deformation strategies are respectively 462 and 460$MPa$, both slightly higher than that resulting when employing the deformation strategy with a focal ratio of 0.4621.
Therefore, we have reason to believe that the deformation strategy with a focal ratio of 0.4621 is very close to the optimum deformation strategy having the minimum stress range. It should also be noted that the actuator stroke in this strategy is 0.89$m$, which is about 50$mm$ less than that in strategy II. By comparing the stress ranges and actuator strokes of the deformation strategies (see Table \[tab:1\]), we recommend the strategy with a focal ratio of 0.4621 for FAST application.

| Focal ratio | 0.4603 | 0.4611 | 0.4613 | 0.4620 | **0.4621** | 0.4622 | 0.4633 | 0.4665 |
|---|---|---|---|---|---|---|---|---|
| Maximum stress range ($MPa$) | 512 | 488 | 482 | 462 | **455** | 460 | 488 | 547 |
| Actuator stroke ($m$) | 0.9890 | 0.9450 | 0.9341 | 0.8966 | **0.8914** | 0.8861 | 0.8291 | 0.6741 |

Assessment of fatigue resistance
================================

According to the working principle of FAST, the problem of fatigue of the cable-net structure arises from the form-changing operation. The time history of the cable stress depends directly on the trajectory of the illuminated aperture. Therefore, the present work on the assessment of fatigue of the cable-net structure can be divided into several parts. First, according to the scientific goals and observation model of the telescope, we roughly plan the trajectory of the illuminated aperture over the telescope's 30-year service life. Second, employing a simplified finite element method, the stress–time history curve of each cable is derived from the trajectory of the illuminated aperture. Finally, using a reasonable mathematical statistical method to process the stress–time history curves, we can count the approximate number of fatigue cycles of each cable.
Planning the observation trajectory
-----------------------------------

Regarding the scientific goals of FAST, observations made with the telescope can be classified into five classes: pulsars, neutral hydrogen, molecular spectral lines, very-long-baseline interferometry, and the search for extraterrestrial intelligence. The observing mode can then be divided into three types: pulsar searching and monitoring, neutral-hydrogen large-area and small-area scanning, and other observations. We assume that each of the three types of observations accounts for one-third of the observation time. According to unofficial statistics for the first half of 2009, the observation uptime of the Green Bank Radio Telescope is approximately 70% to 80%, and the uptime of the Xinjiang 25-meter radio telescope is approximately 74%. Considering the system complexity in the FAST design, more maintenance time will be needed for FAST. It is thus reasonable to assume that the FAST uptime will be no more than 70%. Making the above assumptions and employing a randomization principle, we can roughly plan the trajectory of the illuminated aperture during the service life of the telescope. A trajectory data file is then created. The data include a total of 228,715 observations and 3,410,008 tracking points. The interval between tracking points is 120 seconds, and the corresponding central angle at the reflector surface is about 0.5 degrees. Using the trajectory of the parabolic center to describe this problem, we can use MATLAB software to draw the observation trajectory over different time periods, as shown in Figure \[fig:6\]. According to the design principle of the reflector system, the edge of the cable net is fixed on the ring beam. The illuminated region cannot extend beyond the 500-meter diameter, and the maximum observation zenith angle is 26.4 degrees. The trajectory of the illuminated aperture is thus limited to a certain region near the reflector center.
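A few consistency checks on these figures (back-of-the-envelope arithmetic on the stated totals only, not output of the trajectory simulation; the variable names are ours):

```python
# Stated figures: 228,715 observations; 3,410,008 tracking points; 120 s
# sampling interval; 15 deg/h tracking rate; 30-year service life.
n_obs, n_pts, dt = 228_715, 3_410_008, 120.0

step_deg = 15.0 / 3600.0 * dt                  # angle swept per 120 s interval
avg_minutes = n_pts / n_obs * dt / 60.0        # mean tracking time per observation
track_years = n_pts * dt / (365.25 * 86400.0)  # accumulated tracking time

print(f"angular step per tracking point: {step_deg:.3f} deg")
print(f"mean tracking per observation: {avg_minutes:.1f} min")
print(f"accumulated tracking: {track_years:.1f} yr of 30 "
      f"({100.0 * track_years / 30.0:.0f}% duty)")
```

The 0.5-degree spacing follows directly from the tracking rate, each observation tracks for about half an hour on average, and the accumulated tracking time of roughly 13 years (about 43% of the service life) is consistent with the assumed uptime of no more than 70%.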
![Estimated trajectories of the parabolic center[]{data-label="fig:6"}](p6.eps){width="60.00000%"}

Estimation of fatigue cycles
----------------------------

It is unrealistic and unnecessary to use time-history analysis to simulate the form-changing operation process. Fortunately, the form-changing operation of the FAST cable net can be treated as a quasi-static process. We can thus simplify the continuous tracking process as a series of discrete deformation states. We verified above that any possible observation state is approximately equivalent to one of the 550 deformation states. Thus, with the above 3,410,008 tracking points, the stress–time curve of each cable can be estimated from the simulation results of the 550 deformation states. Currently, the rain-flow counting method is the method most commonly used to analyze a fatigue stress spectrum (@11 [@12]). We can use this method to derive both the stress range and the number of cycles. A program was thus written to apply the rain-flow counting method in the processing of the stress–time histories of all cables. The number of load cycles for each cable is thus derived. The statistical results show that, in the abovementioned 228,715 observations, each cable went through between 840,107 and 1,020,054 cycles. The effect of the mean stress level is negligible when the mean stress is between 15% and 40% of the ultimate tensile strength (@13). Our cable strength design fits well within this range, so the effect of the mean stress is neglected here. In general, certain materials have a fatigue limit or endurance limit that represents a stress level below which the material does not fail and can be cycled indefinitely. Therefore, fatigue cycles at higher stress ranges are of more concern here, especially when the stress range exceeds 300$MPa$. The distribution of the number of cycles for which the stress range exceeds 300$MPa$ is plotted in Figure \[fig:7\].
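The rain-flow counting step can be sketched as follows. This is a simplified counter in the style of ASTM E1049 written for illustration; it is not the authors' program:

```python
from collections import defaultdict

def turning_points(series):
    """Compress a stress history to its local peaks and valleys."""
    pts = [series[0]]
    for x in series[1:]:
        if x == pts[-1]:
            continue
        if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) > 0:
            pts[-1] = x                      # still rising/falling: extend the run
        else:
            pts.append(x)                    # direction reversed: new turning point
    return pts

def rainflow(series):
    """Simplified ASTM E1049 rain-flow counter.

    Returns {stress_range: count}, counting 1.0 for each closed cycle and
    0.5 for each residual half cycle.
    """
    stack, counts = [], defaultdict(float)
    for p in turning_points(series):
        stack.append(p)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])   # most recent range
            y = abs(stack[-2] - stack[-3])   # previous range
            if x < y:
                break
            if len(stack) == 3:              # y contains the starting point
                counts[y] += 0.5
                stack.pop(0)
            else:                            # y is a closed cycle
                counts[y] += 1.0
                del stack[-3:-1]
    for i in range(len(stack) - 1):          # leftover ranges: half cycles
        counts[abs(stack[i + 1] - stack[i])] += 0.5
    return dict(counts)

# Reproduces the cycle counts of the standard ASTM E1049 example history.
history = [-2, 1, -3, 5, -1, 3, -4, 4, -2]
assert rainflow(history) == {3: 0.5, 4: 1.5, 6: 0.5, 8: 1.0, 9: 0.5}
```

Applied to each cable's 3,410,008-point stress history, such a counter yields the per-cable range/cycle tallies discussed below.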
Figure \[fig:7\] shows that the stress range exceeds 300$MPa$ for 270,027 cycles. There is an obvious regularity in that there are more fatigue cycles in the form-changing operation closer to the center of the reflector. It is worth noting that the highest stress range and the maximum number of cycles are located near one another.

![Distribution of the stress cycles for a stress range exceeding 300$MPa$[]{data-label="fig:7"}](p7.eps){width="45.00000%"}

By comparing the stress range and number of fatigue cycles of each cable, we can select a characteristic cable that we recognize as having a greater chance of failing, on which to perform more detailed analysis. Figure \[fig:8\] shows the statistical results for the number of cycles versus the stress range in the abovementioned 228,715 observations. There are $1.8\times 10^{5}$, $1.7\times 10^{5}$ and $1.1\times 10^{5}$ cycles for stress ranges of 200$\sim$300, 300$\sim$400 and 400$\sim$455$MPa$, respectively. There are approximately $4.6\times 10^{5}$ fatigue cycles for which the stress range exceeds 200$MPa$. Including the cycles with stress ranges below 200$MPa$, the total number of cycles for this cable comes to $1.02\times 10^{6}$.

![Statistical result of the number of fatigue cycles versus the stress range[]{data-label="fig:8"}](p8.eps){width="50.00000%"}

Fatigue experiment
==================

According to the above results of numerical analysis, the cable stress range generated by shape-changing operation is about twice the standard authorized value. The designed service life of FAST is 30 years. However, in accordance with international practice, such a large radio telescope would in general see at least 50 years of service; Arecibo, for example, has been in service for several decades. Meanwhile, the trajectory path of the illuminated aperture on the spherical cable net is difficult to estimate accurately.
For safety reasons, the FAST project has proposed rigorous engineering requirements for the cables; i.e., the steel cable used in FAST construction should be able to endure $2\times10^{6}$ cycles of a fatigue test under a stress range of 500$MPa$. This is a serious challenge for the steel cable, with no prior successful experience for reference. Therefore, we start the present work from the basic tensile elements of the steel cable system, i.e., the steel wire and the steel strand.

Steel wire
----------

Previous research (@14) has shown that there is a direct relation between the fatigue resistance of a steel cable and that of its steel wires. It is necessary, especially in the case of the high fatigue-resistance requirement of FAST, to start the investigation by focusing on the steel wire that is the base material of the steel cable system. The Post-Tensioning Institute has specified that the minimum fatigue test strength of a single steel wire is 370$MPa$ under $2\times10^{6}$ load cycles (@8). In general, the fatigue strength of a steel cable system, because of fretting fatigue and the non-uniform stress distribution, is somewhat lower than that of its steel wires. Therefore, the abovementioned fatigue strength of a single steel wire would be unable to satisfy the requirement of the FAST cable. Fortunately, improved materials have become available. We thus perform a fatigue test using the latest high-performance steel wire supplied domestically in China.

![Photograph of the fatigue test conducted on a single Super 82B galvanized steel wire[]{data-label="fig:9"}](p9.eps){width="50.00000%"}

Super 82B galvanized steel wire (1860-$MPa$ grade), manufactured by Baoshan Iron and Steel Company, Ltd., was selected for the fatigue test. The tensile load fluctuates with a constant-amplitude sinusoidal shape, and the stress range is set at 600$MPa$ (from 144 to 744$MPa$).
The purpose of the redundant 100$MPa$ is to allow for a reduction in fatigue strength after the cable system is manufactured from the steel wires. The specimen is 200$mm$ in length and 5.20$mm$ in diameter (see Fig. \[fig:9\]). The loading frequency in the experiment is 10$Hz$. The experimental results show that all six specimens endured $2\times 10^{6}$ cycles in the fatigue test. We thus conclude that the fatigue strength of a single steel wire has obviously improved beyond the historical value given above. However, it should be noted that fretting fatigue between adjacent wires will obviously affect the fatigue resistance. Further experimental investigation of the steel strand is still needed.

Steel strand
------------

Fretting fatigue is the most important factor affecting the fatigue resistance of a steel strand or cable system. We thus have reason to believe that the type of coating of the steel wires will play an important role. Consequently, strands with different types of coating were tested in the present work: uncoated, galvanized, and epoxy-coated strands. In general, epoxy-coated steel wire strands can be classified as filled epoxy-coated steel wire strands and individually epoxy-coated steel wire strands. The latter is selected in the present work mainly because its steel wires are individually isolated from each other by an epoxy coating, which efficiently reduces the stress concentration and friction damage on the surfaces of the steel wires. All samples are in the form of a $1\times 7$ strand with a nominal diameter of 15.2$mm$ and a length of 1000$mm$. The stress range is conservatively set to 550$MPa$, which is 50$MPa$ higher than our requirement for the steel cable system.

![Photograph of the fatigue test on a steel strand[]{data-label="fig:10"}](p10.eps){width="40.00000%"}

The tested strands were anchored by a wedge-type anchor, a common anchorage mechanism.
The fatigue test was performed on an MTS hydraulic fatigue machine frame (see Fig. \[fig:10\]). To eliminate the uneven distribution of stress, the specimen was subjected to initial loading from zero to about 80% of the guaranteed ultimate tensile strength. Dynamic loading was then applied between 13.12% and 40% of the guaranteed ultimate tensile strength. The fatigue test was performed at a loading frequency of approximately 10$Hz$. The results for the different strands are listed in Table \[tab:2\].

| Coating | Load range | Stress range | Cycles |
|---|---|---|---|
| No coating | 27.28-104.28 $kN$ | 550 $MPa$ | $3.0\times10^{5}$ |
| No coating | 27.28-104.28 $kN$ | 550 $MPa$ | $2.88\times10^{5}$ |
| No coating | 27.28-104.28 $kN$ | 550 $MPa$ | $2.07\times10^{5}$ |
| No coating | 27.28-104.28 $kN$ | 550 $MPa$ | $2.8\times10^{5}$ |
| No coating | 27.28-104.28 $kN$ | 550 $MPa$ | $1.5\times10^{5}$ |
| Galvanized coating | 27.28-104.28 $kN$ | 550 $MPa$ | $4.56\times10^{5}$ |
| Galvanized coating | 27.28-104.28 $kN$ | 550 $MPa$ | $5.58\times10^{5}$ |
| Epoxy coating | 27.28-104.28 $kN$ | 550 $MPa$ | $2.0\times10^{6}$ |
| Epoxy coating | 27.28-104.28 $kN$ | 550 $MPa$ | $2.0\times10^{6}$ |
| Epoxy coating | 27.28-104.28 $kN$ | 550 $MPa$ | $2.0\times10^{6}$ |

Table \[tab:2\] shows that all of the uncoated strand samples broke within 300,000 cycles, and the two galvanized strands both broke at about 500,000 cycles. The present test results revealed an obvious reduction in the fatigue resistance of these two types of strands compared with that of their steel wires, which may result from fretting fatigue between adjacent wires. We then took a steel wire from one broken strand sample to observe the friction on its surface. Figure \[fig:11\] shows an obvious scratch on the surface of this steel wire. Under repeated fatigue loading, such scratches are most likely to be the sources of initial cracks, and thus reduce the fatigue strength of the steel strand.
![Wear and scratching on the surface of a steel wire acquired from a steel strand broken in a fatigue test[]{data-label="fig:11"}](p11.eps){width="80.00000%"}

In the case of an individually epoxy-coated steel wire strand, the stress concentration and friction damage on the steel wire surface are efficiently reduced by the epoxy coating between the steel wires. Therefore, this type of strand was found to have high fatigue resistance. In our test, all three samples of this type of strand endured $2\times10^{6}$ fatigue cycles under a stress range of 550$MPa$.

S-N curve
---------

The cable-net structure is the most critical and expendable part of the FAST reflector system, and the service life of FAST directly depends on the residual fatigue life of the structure. FAST has a designed life of 30 years, and it is necessary to develop a fatigue damage monitoring system for such an expensive device. The S-N curve is the basic data used in the evaluation of structural fatigue life. However, it would be most impractical and expensive to perform stay-cable-system fatigue tests at various numbers of load cycles and stress ranges to establish S-N curves. Fortunately, significant experience has verified that the performance of an individual prestressing element, anchored with the actual anchorage details of the stay cable system, can be used as an indication of the approximate performance of the stay cable system (@8). Therefore, fatigue tests for different stress ranges were performed on epoxy-coated steel strands. Here, the upper stress was fixed at 744$MPa$, and different stress ranges of 550, 560, 570, 580, and 600$MPa$ were applied by changing the lower stress. Three samples were tested for each case. The test results reveal that the fatigue life of a strand obviously declines with an increase in the stress range. At a stress range of 600$MPa$, strands generally broke after approximately 150,000 cycles.
At a stress range of 580$MPa$, the three samples broke between 320,000 and 460,000 cycles. At a stress range of 560$MPa$, the three samples broke between 1,130,000 and 1,270,000 cycles. The relationship between the applied stress range and fatigue life can be plotted in log-log coordinates, as shown in Fig. \[fig:12\]. Employing the least-squares method, the S-N curve of the type of strand investigated can be derived as $$\log(N)=87.736-31.192\cdot\log(\Delta\sigma), \label{eqn:9}$$ where $\Delta\sigma$ is the stress range and $N$ is the number of cycles to failure.

![Linear regression of Log ($\Delta\sigma$) versus Log ($N$) obtained in our experiment on the epoxy-coated strand[]{data-label="fig:12"}](p12.eps){width="50.00000%"}

Cable system
------------

For stay cable systems to achieve the specified minimum test performance requirements, the stay cable anchorage systems need to be carefully designed and detailed. In this work, traditional extruding anchoring technology was improved by adding a layer of cushioning material between the cable and the anchoring device, and the anchoring of the cable was realized by internal squeezing. Employing the improved anchoring, $3\times\Phi15.2$ and $6\times\Phi15.2$ cables, having effective cross-section areas of 420 and 840$mm^{2}$ respectively, were fabricated. The cross sections of the tested cables were selected with reference to those actually used in the FAST cable-net structure. Tests were carried out at the Supervision and Test Center for Product Quality of the Ministry of Railways and at the Chinese Railway Bridge Bureau Group Corporation. Figure \[fig:13\] shows a tested cable. According to the requirements of the relevant standards, the free segment length of the six cables was 3$m$. To eliminate the uneven distribution of stress, each specimen was first subjected to an initial loading from zero to about 80% of the guaranteed ultimate tensile strength.
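For illustration, the least-squares fit behind Eq. \[eqn:9\] can be reproduced in a few lines. The sketch below uses rough (stress range, cycles-to-failure) pairs read from the text rather than the full per-sample data, so the fitted coefficients only approximate those reported above:

```python
import math

# Approximate (stress range [MPa], cycles to failure) pairs read from the
# strand tests described above; the per-sample values are assumptions made
# for illustration only.
data = [(600, 150_000), (580, 390_000), (560, 1_200_000)]

xs = [math.log10(s) for s, _ in data]   # log(stress range)
ys = [math.log10(n) for _, n in data]   # log(fatigue life)

# Ordinary least squares for log(N) = intercept + slope * log(stress range)
xm, ym = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
         / sum((x - xm) ** 2 for x in xs))
intercept = ym - slope * xm

print(f"log(N) = {intercept:.3f} {slope:+.3f} * log(stress range)")
```

The fitted slope comes out close to the reported value of about $-31$, confirming the strong sensitivity of fatigue life to the stress range; the same least-squares procedure applied to the full per-sample data underlies Eq. \[eqn:9\].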
The fatigue tests were performed under a fatigue stress amplitude of 500$MPa$, a maximum stress of 744$MPa$, and a loading frequency of 3$Hz$. The number of fatigue loading cycles reached 2 million without failure. Table \[tab:3\] gives the experimental results for the six cables.

![Photograph of the location of the cable test[]{data-label="fig:13"}](p13.eps){width="40.00000%"}

| Cable specifications | Stress amplitude | Number of cycles | Location                                                              |
|----------------------|------------------|------------------|-----------------------------------------------------------------------|
| $3\times\Phi15.2$    | 500 $MPa$        | 2 million        | Supervision and Test Center for Product Quality, Ministry of Railways |
| $3\times\Phi15.2$    | 500 $MPa$        | 2 million        | Supervision and Test Center for Product Quality, Ministry of Railways |
| $3\times\Phi15.2$    | 500 $MPa$        | 2 million        | Supervision and Test Center for Product Quality, Ministry of Railways |
| $6\times\Phi15.2$    | 500 $MPa$        | 2 million        | Chinese Railway Bridge Bureau Group Corporation                       |
| $6\times\Phi15.2$    | 500 $MPa$        | 2 million        | Chinese Railway Bridge Bureau Group Corporation                       |
| $6\times\Phi15.2$    | 500 $MPa$        | 2 million        | Chinese Railway Bridge Bureau Group Corporation                       |

Conclusion
==========

During FAST observations, the stress range generated by the shape-changing operation is more than twice the standard authorized value. To improve the reliability and service life of FAST, we carried out an extensive numerical and experimental investigation. The following conclusions are drawn from the results of the investigation.

1. The focal ratio is the key factor influencing the stress range generated in the FAST cable-net structure by the shape-changing operation. A focal ratio of 0.4621 is suggested as most appropriate, leading to a reduction in the stress range of approximately 30$MPa$ and a reduction in the actuator stroke of 50$mm$.

2. The tracking trajectory was planned according to the demands of the scientific objectives of FAST during its service life of 30 years. The technical requirements of the cables were obtained by mathematically simulating the tracking trajectory of FAST.

3. Compared with historical values, there was an obvious improvement in the fatigue performance of the steel wire.
In our fatigue test on the Super 82B steel wire, all six samples endured $2\times 10^{6}$ cycles under a stress range of 600$MPa$.

4. Because of fretting fatigue, the type of coating on the steel wire surface can markedly affect the fatigue performance of a strand. In our experiments on three types of coated strand, the best performance was found for the individual epoxy-coated steel wire strand, with all three samples enduring $2\times10^{6}$ fatigue cycles under a stress range of 550$MPa$.

5. The S-N curve of the individual epoxy-coated steel wire strand was derived, giving basic data for the evaluation of the fatigue life of the FAST cable net in future operation.

6. The steel cable system was subjected to fatigue tests under a stress range of 500$MPa$ for 2 million loading cycles without failure. We thus developed a steel cable system that can operate under high stress amplitude, meeting the relevant engineering requirements of FAST.

This work was supported by the Young Scientist Project of the Natural Science Foundation of China (Grant No. 11303059) and the Chinese Academy of Sciences Youth Innovation Promotion Association. We would like to thank all our colleagues for their contributions to our study.
--- abstract: 'We study a variant of decision-theoretic online learning in which the set of experts that are available to Learner can shrink over time. This is a restricted version of the well-studied sleeping experts problem, itself a generalization of the fundamental game of prediction with expert advice. Similar to many works in this direction, our benchmark is the ranking regret. Various results suggest that achieving optimal regret in the fully adversarial sleeping experts problem is computationally hard. This motivates our relaxation where any expert that goes to sleep will never again wake up. We call this setting “dying experts” and study it in two different cases: the case where the learner knows the order in which the experts will die and the case where the learner does not. In both cases, we provide matching upper and lower bounds on the ranking regret in the fully adversarial setting. Furthermore, we present new, computationally efficient algorithms that obtain our optimal upper bounds.' author: - | Hamid Shayestehmanesh[^1]\ Department of Computer Science\ University of Victoria - | Sajjad Azami\ Department of Computer Science\ University of Victoria - | Nishant A. Mehta\ Department of Computer Science\ University of Victoria\ `{hamidshayestehmanesh, sajjadazami, nmehta}@uvic.ca` bibliography: - 'bibliography.bib' title: 'Dying Experts: Efficient Algorithms with Optimal Regret Bounds' ---

Introduction {#sec:intro}
============

Decision-theoretic online learning (DTOL) ([@littlestone1994weighted; @vovk1990aggregating; @vovk1998game; @freund1997decision]) is a sequential game between a learning agent (hereafter called *Learner*) and Nature. In each round, Learner plays a probability distribution over a fixed set of experts and suffers loss accordingly. However, in a wide range of applications, this “fixed” set of actions shrinks as the game goes on.
One way this can happen is because experts either get disqualified or expire over time; a key scenario of contemporary relevance is in contexts where experts that discriminate are prohibited from being used due to existing (or emerging) anti-discrimination laws. Two prime examples are college admissions and deciding whether incarcerated individuals should be granted parole; here the agent may rely on predictions from a set of experts in order to make decisions, and naturally experts detected to be discriminating against certain groups should not be played anymore. However, the standard DTOL setting does not directly adapt to this case, i.e., for a given round it does not make sense nor may it even be possible to compare Learner’s performance to an expert or action that is no longer available. Motivated by cases where the set of experts can change, a reasonable benchmark is the *ranking regret* ([@kleinberg2010regret; @kale2016hardness]), for which Learner competes with the best ordering of the actions (see Section \[sec:prob-setup-rel-work\] for a formal definition). The situation where the set of available experts can change in each round is known as the *sleeping experts* setting, and unfortunately, it appears to be computationally hard to obtain a no-regret algorithm in the case of adversarial payoffs (losses in our setting) and adversarial availability of experts ([@kanade2014learning]). This motivates the question of whether the optimal regret bounds can be achieved efficiently for the case where the set of experts can only shrink, which we will refer to as the “dying experts” setting. Applying the results of [@kleinberg2010regret] to the dying experts problem only gives ${\mathcal{O}}(\sqrt{TK\log K})$ regret, for $K$ experts and $T$ rounds, and their strategy is computationally inefficient.
In more detail, the strategy in [@kleinberg2010regret] is to define a permutation expert (our terminology) that is identified by an ordering of experts, where a permutation expert’s strategy is to play the first awake expert in the ordering. They then run Hedge ([@freund1997decision]) on the set of all possible permutation experts over $K$ experts. Although this strategy competes with the best ordering, the per-round computation of running Hedge on $K!$ experts is ${\mathcal{O}}(K^K)$ if naïvely implemented, and the results of [@kanade2014learning] suggest that no efficient algorithm — one that uses computation $\mathrm{poly}(K)$ per round — can obtain regret that simultaneously is $o(T)$ and $\mathrm{poly}(K)$. However, in the dying experts setting, we show that many of these $K!$ orderings are redundant and only ${\mathcal{O}}(2^K)$ of them are “effective”. The notion of effective experts (formally defined in Section \[sec:number-of-effective\]) is used to refer to a minimal set of orderings such that each ordering in the set will behave uniquely in hindsight. The behavior of an ordering is defined as how it uses the initial experts in its predictions over $T$ rounds. Interestingly, it turns out that this structure also allows for an efficient implementation of Hedge which, as we show, obtains optimal regret in the dying experts setting. The key idea that enables an efficient implementation is as follows. Our algorithms group orderings with identical behavior into one group, where there can be at most $K$ groups at each round. When an expert dies, the orderings in one of the groups are forced to predict differently and therefore have to redistribute to the other groups. This splitting and rejoining behavior occurs in a fixed pattern which enables us to efficiently keep track of the weight associated with each group. In certain scenarios, Learner might be aware of the order in which the experts will become unavailable. 
For example, in online advertising, an ad broker has contracts with their providers and these contracts may expire in an order known to Learner. Therefore, we will study the problem in two different settings: when Learner is aware of this order and when it is not.

#### Contributions.

Our first main result is an upper bound on the number of effective experts (Theorem \[thm:number-of-effective-experts\]); this result will be used for our regret upper bound in the known order case. Also, in preparation for our lower bound results, we prove a fully non-asymptotic lower bound on the minimax regret for DTOL (Theorem \[thm:dtol-lower-bound\]). Our main lower-bound contributions are minimax lower bounds for both the unknown and known order-of-dying cases (Theorems \[thm:unknown-lower\] and \[thm:known-lower\]). In addition, we provide strategies to achieve optimal upper bounds for unknown and known order of dying (Theorems \[thm:unknown-upper\] and \[thm:known-upper\] respectively), along with efficient algorithms for each case. This is particularly interesting since, in the framework of sleeping experts, the results of [@kanade2014learning] suggest that no-regret learning is computationally hard, but we show that it is efficiently achievable in the restricted problem. Finally, in Section \[sec:extend-alg\], we show how to generalize our algorithms to other algorithms with adaptive learning rates, either adapting to unknown $T$ or achieving far greater forms of adaptivity like in AdaHedge and FlipFlop ([@derooij2014follow]). All formal proofs not found in the main text can be found in the appendix.

Background and related work {#sec:prob-setup-rel-work}
===========================

The DTOL setting ([@freund1997decision]) is a variant of prediction with expert advice ([@littlestone1994weighted; @vovk1990aggregating; @vovk1998game]) in which Learner receives an example $x_t$ in round $t$ and plays a probability distribution $\bm{p}_t$ over $K$ actions.
Nature then reveals a loss vector $\bm{\ell}_t$ that indicates the loss for each expert. Finally, Learner suffers a loss $\hat{\ell}_t := \bm{p}_t \cdot \bm{\ell}_t = \sum_{i=1}^{K} p_{i,t}\ell_{i,t}$. In the dying experts problem, we assume that the set of experts can only shrink. More formally, for the set of experts $E = \{ e_1, e_2, \dots e_K \}$, at each round $t$, Nature chooses a non-empty set of experts $E_a^t$ to be available such that $E^{t+1}_a \subseteq E_a^t$ for all $t \in \{1, \dots, T-1\}$. In other words, in some rounds Nature sets some experts to sleep, and they will never be available again. Similar to [@kleinberg2010regret; @kanade2009sleeping; @kanade2014learning], we adopt the ranking regret as our notion of regret. Before proceeding to the definition of ranking regret, let us define $\pi$ to be an ordering over the set of initial experts $E$. We use the notion of orderings and permutation experts interchangeably. Learner can now predict using $\pi \in \Pi$, where $\Pi$ is the set of all the orderings. Also, denote by $\sigma^t(\pi)$ the first alive expert of ordering $\pi$ in round $t$; expert $\sigma^t(\pi)$ is the action that will be played by $\pi$. The cumulative loss of an ordering $\pi$ with respect to the available experts $E_a^t$ is the sum of the losses of $\sigma^t(\pi)$ at each round. We can now define the ranking regret: $$\label{eq:ranking-regret-definition} R_\Pi(1,T) = \sum_{t=1}^{T} \hat{\ell}_t - \min_{\pi \in \Pi} \sum_{t=1}^T \ell_{\sigma^t(\pi),t} \,\, .$$ Since we will use the notion of classical regret in our proofs, we also provide its formal definition: $$\label{eq:classical-regret-definition} R_E(1,T) = \sum_{t=1}^{T} \hat{\ell}_t - \min_{i\in[K]} \sum_{t=1}^{T} \ell_{i,t} \,\, .$$ We use the convention that the subscript of a regret notion $R$ represents the set of experts against which we compare Learner’s performance. Also, the argument in parentheses represents the set of rounds in the game. 
For example, $R_\Pi(1,T)$ represents the regret over rounds $1$ to $T$ with the comparator set being all permutation experts $\Pi$. Also, we assume that $\ell_{i,t} \in [0,1]$ for all $i \in [K], t \in [T]$. Similar to the definition of $E_a^t$, let $E_d^t := E \setminus E_a^t$ be the set of dead experts at the start of round $t$. We refer to a round as a “night” if any expert becomes unavailable on the next round. A “day” is defined as a contiguous subset of rounds that starts with the round after a night and ends with the next night. As an example, if any expert becomes unavailable at the beginning of round $t$, we refer to round $t-1$ as a night (and we say the expert dies on that night) and the set of rounds $\{t, t+1 \dots, t'\}$ as a day, where $t'$ is the next night. We denote by $m$ the number of nights throughout a game of $T$ rounds.

#### Related work.

The papers [@freund1997using] and [@blum1997empirical] initiated the line of work on the sleeping experts setting. These works were followed by [@blum2007external], which considered a different notion of regret and a variety of different assumptions. In [@freund1997using], the comparator set is the set of all probability vectors over $K$ experts, while we compare Learner’s performance to the performance of the best ordering. In particular, the problem considered in [@freund1997using] aims to compare Learner’s performance to the best mixture of actions, which also includes our comparator set (orderings). However, in order to recover an ordering as we define it, one needs to assign very small probabilities to all experts except for one (the first alive action), which makes the bound in [@freund1997using] trivial. As already mentioned, we assume the set $E_a^t$ is chosen adversarially (subject to the restrictions of the dying setting), while in [@kanade2009sleeping] and [@neu2014online] the focus is on the (full) sleeping experts setting with adversarial losses but *stochastic* generation of $E_a^t$.
For the case of adversarial selection of available actions (which is more relevant to the present paper), [@kleinberg2010regret] studies the problem in the cases of stochastic and adversarial rewards with both full information and bandit feedback. Among the four settings, the adversarial full-information setting is most related to our work. They prove a lower bound of $\Omega(\sqrt{TK\log K})$ in this case and a matching upper bound by creating $K!$ experts and running Hedge on them, which, as mentioned before, requires computation of order ${\mathcal{O}}(K^K)$ per round. For the bandit setting, they prove an upper bound of ${\mathcal{O}}(K\sqrt{T \log K})$, optimal to within a log factor, using a similar transformation of experts. A similar framework in the bandit setting, introduced in [@chakrabarti2009mortal], is called “mortal bandits”; we do not discuss this work further as the results are not applicable to our case, given that they do not consider adversarial rewards. Another line of work considers the opposite direction of the dying experts game. That setting is usually referred to as “branching” experts, in which the set of experts can only expand. In particular, part of the inspiration for our algorithms came from [@gofer2013regret; @mourtada2017efficient]. The hardness of the sleeping experts setting is well-studied ([@kanade2014learning; @kale2016hardness; @kleinberg2010regret]). First, [@kleinberg2010regret] showed for a restricted class of algorithms that there is no efficient no-regret algorithm for the sleeping experts setting unless $RP = NP$. Following this, [@kanade2014learning] proved that the existence of a no-regret efficient algorithm for the sleeping experts setting implies the existence of an efficient algorithm for the problem of PAC learning DNFs, a long-standing open problem.
For the similar but more general case of online sleeping combinatorial optimization (OSCO) problems, [@kale2016hardness] showed that an efficient and optimal algorithm for “per-action” regret in OSCO problems implies the existence of an efficient algorithm for PAC learning DNFs. Per-action regret is another natural benchmark for partial availability of actions, in which the regret with respect to an action is computed only over the rounds in which that action was available.

Number of effective experts in dying experts setting {#sec:number-of-effective}
====================================================

In this section, we consider the number of effective permutation experts among the set of all possible orderings of initial experts. The idea behind this is that, given the structure in dying experts, not all the orderings will behave uniquely in hindsight. Formally, the *behavior* of $\pi$ is a sequence of predictions $(\sigma^1(\pi), \sigma^2(\pi), \dots, \sigma^T(\pi))$. This means that the behaviors of two permutation experts $\pi$ and $\pi'$ are the same if they use the same initial experts in *every* round. We define the set of effective orderings ${\mathcal{E}}\subseteq \Pi$ to be a set such that, for each unique behavior of orderings, there exists exactly one ordering in ${\mathcal{E}}$. To clarify the definition of unique behavior, suppose initial expert $e_1$ is always awake. Then two orderings $\pi_1 = (e_1,e_2,\dots)$ and $\pi_2 = (e_1,e_3,\dots)$ will behave the same over all the rounds, making one of them redundant. Let us clarify that behavior is not defined based on losses, e.g., if $\pi_1 = (e_i,\dots)$ and $\pi_2 = (e_j,\dots )$ where $i\neq j$ both suffer identical losses over all the rounds (i.e. their performances are equal) while using different original experts, then they are not considered redundant and hence both of them are said to be *effective*. Let $d_i$ be the number of experts dying on the $i\operatorname*{^{\text{th}}}$ night.
Denote by $A$ the number of experts that will always be awake, so that $A = K - \sum_{i=1}^{m} d_i $. We are now ready to find the cardinality of set ${\mathcal{E}}$. \[thm:number-of-effective-experts\] In the dying experts setting, for $K$ initial experts and $m$ nights, the number of effective orderings in $\Pi$ is $f(\{ d_1, d_2, \dots d_m \}, A) = A \cdot \prod_{s=1}^{m} (d_s+1)$. In the special case where no expert dies ($m = 0$), we use the convention that the (empty) product evaluates to 1 and hence $f(\{ \}, A) = A$. We mainly care about $|{\mathcal{E}}|$ as we use it to derive our upper bounds; hence, we should find the maximum value of $f$. We can consider the maximum value of $f$ in three regimes. 1. In the case of a fixed number of nights $m$ and fixed $A$, the function $f$ is maximized by equally spreading the dying experts across the nights. As the number of dying experts might not be divisible by the number of nights, some of the nights will get one more expert than the others. Formally, the maximum value is $(\left\lceil \frac{D}{m} \right\rceil^{D\bmod{m}} \cdot \left\lfloor \frac{D}{m} \right\rfloor^{m - (D \bmod{m})} \cdot A )$, where $D = K - A + m$ and $K - A \le m$. 2. In the case of a fixed number of dying experts (fixed $A$), the maximum value of $f$ is $(2^{K - A}\cdot A)$ which occurs when one expert dies on each night. The following is a brief explanation on how to get this result. Denote by $B = (d_1, d_2, \dots, d_b)$ a sequence of numbers of dying experts where more than one expert dies on some night and $B$ maximizes $f$ (for fixed $A$), so that $F = f\left(\{d_1, d_2, \dots, d_b \}, A\right)$. Without loss of generality, assume that $d_1 > 1$. Split the first night into $d_1$ days where one expert dies at the end of each day (and consequently each of those days becomes a night). Now $F' = f\left(\{1, 1, \dots ,1 , d_2, \dots, d_b \}, A\right)$ where 1 is repeated $d_1$ times. If $d_1 > 1$ then $F' = F\cdot2^{d_1}/(d_1 + 1) > F$. 
We see that by splitting the nights we can achieve a larger effective set. 3. In the case of a fixed number of nights $m$, similar to the previous cases, the maximum value is obtained when each night has equal impact on the value of $f$, i.e., when $A = d_1 + 1 = d_2 + 1 = \dots = d_m + 1$; however, it might not be possible to distribute the experts in a way to get this, in which case we should make the allocation $\{ A, d_1 + 1, d_2 + 1, \dots, d_m + 1 \}$ as uniform as possible. By looking at cases 2 and 3, we see that by increasing $m$ and the number of dying experts, we can increase $f$; thus, the maximum value of $f$ with no restriction is $2^{K-1}$ and is achieved when $m = K-1$ and $A = 1$. Regret bounds for known and unknown order of dying {#sec:bounds} ================================================== In this section, we provide lower and upper bounds for the cases of unknown and known order of dying. In order to prove the lower bounds, we need a non-asymptotic minimax lower bound for the DTOL framework, i.e., one which holds for a finite number of experts $K$ and finite $T$. During the preparation of the final version of this work, we were made aware of a result of Orabona and Pál (see Theorem 8 of [@orabona2015optimal]) that does give such a bound. However, for completeness, we present a different fully non-asymptotic result that we independently developed; this result is stated in a simpler form and admits a short proof (though we admit that it builds upon heavy machinery). We then will prove matching upper bounds for both cases of unknown and known order of dying. Fully non-asymptotic minimax lower bound for DTOL ------------------------------------------------- We analyze lower bounds on the minimax regret in the DTOL game with $K$ experts and $T$ rounds. We assume that all losses are in the interval $[0,1]$. Let $\Delta_K := \Delta([K])$ denote the simplex over $K$ outcomes. 
The minimax regret is defined as $$\begin{aligned} \inf_{\bm{p}_1 \in \Delta_K} \sup_{\bm{{\ell}}_1 \in [0,1]^K} \ldots \inf_{\bm{p}_T \in \Delta_K} \sup_{\bm{{\ell}}_T \in [0,1]^K} \left\{ \sum_{t=1}^T \bm{p}_t \cdot \bm{{\ell}}_t - \min_{j \in [K]} \sum_{t=1}^T {\ell}_{j,t} \right\} . \label{eqn:minimax-regret} \end{aligned}$$ \[thm:dtol-lower-bound\] For a universal constant $L$, the minimax regret is lower bounded by $$\begin{aligned} \frac{1}{L} \min \left\{ \sqrt{(T/2) \log K}, T \right\} . \end{aligned}$$ The proof (in the appendix) begins similarly to the proof of the often-cited Theorem 3.7 of [@cesa2006prediction], but it departs at the stage of lower bounding the Rademacher sum; we accomplish this lower bound by invoking Talagrand’s Sudakov minoration for Bernoulli processes ([@talagrand1993regularity; @talagrand2005generic]).

Unknown order of dying {#sec:unknown}
----------------------

For the case where Learner is not aware of the order in which the experts die, we prove a lower bound of $\Omega(\sqrt{mT\log K})$. Given that we have $E^{t+1}_a \subseteq E_a^t$, the construction for the lower bound of [@kleinberg2010regret] cannot be applied to our case. In other words, our adversary is much weaker than the one in [@kleinberg2010regret], but, surprisingly, we show that the previous lower bound still holds (by setting $m = K$) even with the weaker adversary. We then analyze a simple strategy to achieve a matching upper bound. In this section, we further assume that $\sqrt{(T/2)\log K} < T$ for every $T$ and $K$ so that there is hope to achieve regret that is sublinear with respect to $T$. We now present our lower bound on the regret for the case of unknown order of dying. \[thm:unknown-lower\] When the order of dying is unknown, the minimax regret is $\Omega(\sqrt{mT\log K})$. We construct a scenario where each day is a game decoupled from the previous ones.
This means that the algorithm will be forced to have no prior information about the experts at the beginning of each day. First, partition the $T$ rounds into $m+1$ days of equal length. The days are split into two halves. In each round of the first half, each expert suffers a loss drawn i.i.d. from a Bernoulli distribution with $p = 1/2$. At the end of the first half of the day, we choose the expert with the lowest cumulative loss up to that round, and that expert will suffer no loss on the second half. For any other expert $e_i$, we use the loss $\ell^{(1)}_{i,t}$ of $e_i$ on the $t\operatorname*{^{\text{th}}}$ round of the first half to define the loss $\ell^{(2)}_{i,t}$ of $e_i$ on the $t\operatorname*{^{\text{th}}}$ round of the second half; specifically, we choose the setting $\ell^{(2)}_{i,t} := 1 - \ell^{(1)}_{i,t}$. We show that the ranking regret of the set of orderings over $T$ rounds is obtained by summing the classical regrets of each day over the set of days. A natural strategy in the case of unknown dying order is to run Hedge over the set of initial experts $E$ and, after each night, reset the algorithm. We will refer to this strategy as “Resetting-Hedge”. Theorem \[thm:unknown-upper\] gives an upper bound on the regret of Resetting-Hedge. \[thm:unknown-upper\] Resetting-Hedge enjoys a regret of $R_\Pi(1,T) = {\mathcal{O}}(\sqrt{mT\log K})$. Let $\tau_s$ be the set of round indices of day $s$; hence, we have $\sum_{s=1}^{m+1} |\tau_s| = T$. The overall ranking regret can be upper bounded by the sum of classical regrets for every interval. Hence, the analysis is as follows: $$\begin{aligned} R_\Pi(1,T) \leq \sum_{s=1}^{m+1} \sqrt{|\tau_s| \log(K-s)} \leq \sqrt{\log K} \sum_{s=1}^{m+1} \sqrt{|\tau_s|} \leq \sqrt{(m+1) T \log K} ; \end{aligned}$$ the last inequality is essentially from the Cauchy-Schwarz inequality (see Lemma \[lemma:cauchy-sumofsqrt\]). Although the basic Resetting-Hedge strategy adapts to $m$, it has many downsides.
For example, resetting can be wasteful in practice. Another natural strategy, simply running Hedge on the set of all $K!$ permutation experts, is non-adaptive (obtaining regret ${\mathcal{O}}(\sqrt{T K \log K})$) and computationally inefficient if implemented naïvely. However, as we show in Section \[sec:algorithm-unknown\], this algorithm can be implemented efficiently (with runtime linear in $K$ rather than $K!$) and also, as we show in Section \[sec:extend-alg\], by running Hedge on top of several copies of Hedge (one per specially chosen learning rate), we can obtain a guarantee that is far better than Theorem \[thm:unknown-upper\]. Moreover, our efficient implementation of Hedge can be extended to adaptive algorithms like AdaHedge and FlipFlop ([@derooij2014follow]).

Known order of dying
--------------------

A natural question is whether Learner can leverage information about the order of experts that are going to die to achieve a better regret. We show that the answer is positive: the bound can be improved by a logarithmic factor. We also give a matching lower bound for this case (so both bounds are tight). Similar to the unknown setting, we provide a construction to prove a lower bound on the ranking regret in this case. We still assume that $\sqrt{(T/2)\log K} < T$. \[thm:known-lower\] When Learner knows the order of dying, the minimax regret is $\Omega(\sqrt{mT})$. Our construction involves first partitioning all the rounds into $m/2$ days of equal length. On day $s$, all experts will suffer loss 1 on all the rounds except for experts $e_{2s-1}$ and $e_{2s}$, who will suffer losses drawn i.i.d. from a Bernoulli distribution with success probability $p = 1/2$. Experts $e_{2s-1}$ and $e_{2s}$ will die at the end of day $s$, and therefore, each “day game” effectively has 2 experts; our lower bound holds even when Learner knows this fact. Furthermore, Learner will be aware that these two experts ($e_{2s-1}$ and $e_{2s}$) will die at the end of day $s$.
Similar to the proof of Theorem \[thm:unknown-lower\], the minimax regret is lower bounded by the sum of the minimax regrets over each day game. Although the proof is relatively simple, it is at least a little surprising that knowing such rich information as the order of dying only improves the regret by a logarithmic factor. To achieve an optimal upper bound, using the results of Theorem \[thm:number-of-effective-experts\], the strategy is to create $2^m(K-m)$ experts (those that are effective) and run Hedge on this set. \[thm:known-upper\] For the case of known order of dying, the strategy described above achieves a regret of ${\mathcal{O}}\big(\sqrt{T (m + \log K)}\big)$. Hedge has regret ${\mathcal{O}}(\sqrt{T\log K})$ for $K$ experts. Therefore, running Hedge on $2^m(K-m)$ experts yields the desired bound. Though the order of computation in the above strategy is better than ${\mathcal{O}}(K^K)$, it is still exponential in $K$. In the next section, we introduce algorithms that simulate these strategies but in a computationally efficient way. (Algorithms \[alg:hedge\_perm\] and \[alg:hedgepermknown\] both initialize $c_{i,1} := 1$ for all $i \in [K]$ and $E_a := \{ e_1, e_2, \dots e_K \}$; they differ in that Algorithm \[alg:hedge\_perm\] sets $h_{i, 1} := (K-1)!$ while Algorithm \[alg:hedgepermknown\] sets $h_{i,1} := \lceil{2^{K-i-1}}\rceil$.)

Efficient algorithms for dying experts {#sec:algorithm}
======================================

The results of [@kanade2014learning] imply computational hardness of achieving no-regret algorithms in sleeping experts; yet, we are able to provide efficient algorithms for dying experts in the cases of unknown and known order of dying. For the sake of simplicity, we initially assume that only one expert dies each night. Later, in Section \[sec:extend-alg\], we show how to extend the algorithms to the general case where multiple experts can die each night. We then show how to extend these algorithms to adaptive algorithms such as AdaHedge ([@derooij2014follow]).
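Before moving to the efficient implementations, the baseline Resetting-Hedge strategy analyzed in Theorem \[thm:unknown-upper\] can be captured in a short sketch. This is our own illustration (the function name and interface are assumptions, not the pseudocode of the algorithms above):

```python
import math

def resetting_hedge(loss_seq, alive_seq, eta):
    """Sketch of Resetting-Hedge: run Hedge over the currently alive experts
    and reset the weights to uniform whenever a night has just passed.

    loss_seq[t][i] -- loss of expert i in round t, assumed in [0, 1]
    alive_seq[t]   -- set of experts available in round t (only shrinks)
    Returns Learner's total expected loss over the game.
    """
    total, w = 0.0, {}
    for losses, alive in zip(loss_seq, alive_seq):
        if set(w) != alive:                # first round, or a night just passed
            w = {i: 1.0 for i in alive}    # reset to uniform weights
        z = sum(w.values())
        total += sum(w[i] / z * losses[i] for i in alive)  # expected loss
        for i in alive:                    # exponential-weights update
            w[i] *= math.exp(-eta * losses[i])
    return total
```

Resetting discards all weight information accumulated before a night, which is one of the downsides noted in Section \[sec:unknown\] and part of the motivation for simulating Hedge over $\Pi$ directly.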
The algorithms for both cases are given in Algorithms \[alg:hedge\_perm\] and \[alg:hedgepermknown\]. Unknown order of dying {#sec:algorithm-unknown} ---------------------- We now show how to efficiently implement Hedge over the set of all the orderings. Even though Resetting-Hedge is already efficient and achieves optimal regret, it has its own disadvantages. The issue arises when one needs to extend Resetting-Hedge to adaptive algorithms. This is particularly important in real-world scenarios, where Learner wants to adapt to the environment (such as stochastic or adversarial losses). We show that Algorithm \[alg:hedge\_perm\], Hedge-Perm-Unknown (HPU), can be adapted to AdaHedge ([@vanerven2011adaptive]) and, therefore, we can simulate FlipFlop ([@derooij2014follow]). Next, we give the main idea of how the algorithm works, after which we prove that Algorithm \[alg:hedge\_perm\] efficiently simulates running Hedge over $\Pi$. Before proceeding further, let us recall how Hedge operates in round $t$. It first assigns to expert $i$ the probability $$p_{i,t} = \frac{w_{i,t-1}}{\sum_{j=1}^K w_{j,t-1}} \,\, ,$$ and then, after observing the losses, updates the weights using $w_{i,t} = w_{i,t-1} e^{-\eta \ell_{i,t}}$. Recall that $e_1, e_2, \dots, e_K$ denote the original experts while $\pi_1, \pi_2, \dots, \pi_{K!}$ denote the orderings. Denote by $w_{\pi}^t$ the weight that Hedge assigns to $\pi$ in round $t$. Define $\Pi_{i}^t \subseteq \Pi$ to be the set of orderings that predict expert $e_i$ in round $t$. The main ideas behind the algorithm are as follows: 1. When $\pi$ and $\pi'$ have the same prediction $e$ in round $t$ (i.e. $\sigma^t(\pi) = \sigma^t(\pi') = e$), we do not need to know $w_{\pi}^t$ and $w_{\pi'}^t$ individually; we use the sum $w_{\pi}^t + w_{\pi'}^t$ for the weight of $e$. 2.
The algorithm maintains $\sum_{\pi \in \Pi_{j}^t}e^{-\eta L_{\pi}^{t-1}}$, where $\eta$ is the learning rate and $L_{\pi}^{t}$ is the cumulative loss of ordering $\pi$ up until round $t$, i.e., $L_{\pi}^{t} = \sum_{s=1}^t \ell_{\sigma^s(\pi), s}$. We will discuss how to tune $\eta$ later. Let $J = \{j_1, \dots, j_m\}$ represent the rounds on which any expert will die. Denote by $j_t$ the last night observed by the end of round $t$, formally defined as $j_t = \max\,\{j \in J : j \leq t\}$. For each original expert $e_i$, the algorithm maintains in round $t$ a tuple $(h_{i, t}, c_{i, t})$, where $h_{i,t}$ is the sum of the non-normalized weights of the orderings in $\Pi_i^t$ at round $j_t$, and $c_{i,t}$ is the weight factor accumulated from round $j_t + 1$ to round $t-1$, which is common to every ordering in $\Pi_i^t$ (between nights, every ordering in $\Pi_i^t$ predicts $e_i$). Formally: $$h_{i,t} = \sum_{\pi \in \Pi_{i}^t} e^{-\eta (\sum_{s=1}^{j_t}\ell_{\sigma^s(\pi),s})}, \hspace{0.05\linewidth} c_{i,t} = e^{-\eta (\sum_{s=j_t+1}^{t-1}\ell_{i,s})} \,\, .$$ It is easy to verify that $h_{j,t} \cdot c_{j,t} = \sum_{\pi \in \Pi_{j}^t} e^{-\eta L_{\pi}^{t-1}}$. The computational cost of the algorithm is ${\mathcal{O}}(K)$ per round. We claim that HPU behaves the same as executing Hedge on $\Pi$. We use induction on rounds to show the weights are the same in both algorithms. By “simulating” we mean that the weights over the original experts will be maintained identically to how Hedge maintains them. \[thm:HPU\] At every round, HPU simulates running Hedge on the set of experts $\Pi$. The main idea is to group the permutation experts with the same prediction (the first alive expert in the permutation) into one group. Hence, initially there will be $K$ groups. Then, if expert $e_j$ dies, every ordering in the group associated with $e_j$ is moved to another group and the empty group is deleted.
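As an illustration, the bookkeeping just described might be sketched as follows (our own sketch with hypothetical names, assuming one death per night; `kill` uses the uniform redistribution of a dead group's weight that the symmetry argument in the proof justifies):

```python
import math

class HPU:
    """Sketch of Hedge-Perm-Unknown: Hedge over all K! orderings in O(K)/round."""

    def __init__(self, experts, eta):
        self.eta = eta
        k = len(experts)
        # h: group weight frozen at the last night; c: factor accrued since then.
        self.h = {e: float(math.factorial(k - 1)) for e in experts}
        self.c = {e: 1.0 for e in experts}

    def probs(self):
        # Group weight h * c is the total Hedge weight of the orderings
        # currently predicting that expert.
        w = {e: self.h[e] * self.c[e] for e in self.h}
        z = sum(w.values())
        return {e: wi / z for e, wi in w.items()}

    def update(self, losses):
        # Multiply each alive group's running factor by e^{-eta * loss}.
        for e, loss in losses.items():
            if e in self.c:
                self.c[e] *= math.exp(-self.eta * loss)

    def kill(self, dead):
        # Fold c into h and give every surviving group an equal share of
        # the dead group's weight (the symmetry of the orderings).
        mass = self.h[dead] * self.c[dead]
        del self.h[dead], self.c[dead]
        share = mass / len(self.h)
        for e in self.h:
            self.h[e] = self.h[e] * self.c[e] + share
            self.c[e] = 1.0
```

For small $K$ one can check directly that the group weights produced this way coincide with brute-force Hedge over all $K!$ orderings.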
We prove that the orderings distribute to the other groups symmetrically after a night. Using this fact, we show that we do not need to know the elements of a group; we only maintain the sum of the weights given to all the orderings in each group. Known order of dying -------------------- For the case of known order of dying, we propose Algorithm \[alg:hedgepermknown\], Hedge-Perm-Known (HPK), which is slightly different from HPU. In particular, the weight redistribution (when an expert dies) and the initialization of the coefficients $h_{i, 1}$ are different. In the proof of Theorem \[thm:HPU\], we showed that when the set of experts includes all the orderings, the weight of a dying expert $e_j \in E$ is distributed equally among the alive initial experts. But when the set of experts contains only the effective orderings, this no longer holds. In this section, we assume without loss of generality that the experts die in the order $e_1, e_2, \dots$ and recall that ${\mathcal{E}}$ denotes the set of effective orderings. Based on Theorem \[thm:number-of-effective-experts\], the number of experts starting with $e_i$ in ${\mathcal{E}}$ is $\ceil{2^{K-i-1}}$; we denote the set of such experts as ${\mathcal{E}}_{e_i}$. \[thm:HPK\] At every round, HPK simulates running Hedge on the set of experts ${\mathcal{E}}$. #### Remarks for tuning learning rates. For both algorithms, we assume $T$ is known beforehand. So, the learning rate for HPU is $\eta = \sqrt{(2\log (K!))/T}$ and for HPK it is $\eta = \sqrt{(2\log (2^m(K-m)))/T}$. One can use a time-varying learning rate to adapt to $T$ in case it is not known. Extensions for algorithms {#sec:extend-alg} ------------------------- As we mentioned at the beginning of Section \[sec:algorithm\], for the sake of simplicity we initially assumed that only one expert dies each night. First, we discuss how to handle a night with more than one death.
Afterwards, we explain how to extend/modify HPU and HPK to implement the Follow The Leader (FTL) strategy. We then introduce a new algorithm which simulates FTL efficiently and maintains $L^*_t$ as well, where $L^*_t$ is the cumulative loss of the best permutation expert through the end of round $t$. Finally, using $L^*_t$, we explain how to simulate AdaHedge and FlipFlop ([@derooij2014follow]) by slightly extending HPU and HPK. #### More than one expert dying in a night. We handle nights with more than one death as follows. We have one of the experts die on that night, and, for each expert $j$ among the other experts that should have died that night, we create a “dummy round”, give all alive experts (including expert $j$) a loss of zero, keep the learning rate the same as the previous round, and have expert $j$ die at the end of the dummy round (which hence becomes a “dummy night”). Even though the number of rounds increases with this trick, it is easy to see that the regret is unchanged since in dummy rounds all experts have the same loss (and also the learning rate after the sequence of dummy rounds is the same as what it would have been had there been no dummy rounds). Moreover, since now one expert dies on each night (some of which may be dummy nights), we may use Theorems \[thm:HPU\] or \[thm:HPK\] to conclude that our algorithm correctly distributes any dying experts’ weights among the alive experts. #### Beyond adaptivity to $\mathbf{m}$. Consider the case of unknown order and let the number of nights $m$ be unknown. As promised, we show that we can improve on the simple Resetting-Hedge strategy. \[thm:quantile\] Consider running Hedge on top of $K$ copies of HPU where, for $r \in \{0, 1, \ldots, K-1\}$, we set $\varepsilon_r = \prod_{l=0}^{r-1} \frac{1}{K - l}$ and the $r\operatorname*{^{\text{th}}}$ copy of HPU uses learning rate $\eta^{\varepsilon_r}_t := \sqrt{8 \log (1/\varepsilon_r)/t}$. 
Let $\pi^*$ be a best permutation expert in hindsight and suppose that the sequence $(\sigma^1(\pi^*), \ldots, \sigma^T(\pi^*))$ changes experts at most $l$ times. Then the regret of this algorithm is ${\mathcal{O}}\bigl( \sqrt{T (l + 1) \log K} \bigr)$. Note that this theorem does more than adapt to $m$: with $m$ nights we always have $l \leq m$, but $l$ can in fact be much smaller than $m$ in practice. Hence, Theorem \[thm:quantile\] recovers and can improve upon the regret of Resetting-Hedge and, moreover, wasteful resetting is avoided. Also, while the computation increases by a factor of $K$, it is easy to see that one can instead use an exponentially spaced grid of size $\log_2(K)$ to achieve regret of the same order. #### Follow the Leader. FTL is perhaps the most natural algorithm in online learning. In round $t$ the algorithm plays an expert whose cumulative loss through round $t-1$ attains the minimum $L^*_{t-1}$. By setting $\eta = \infty$ in Hedge and, similarly, in HPU and HPK, we recover FTL; hence, our algorithms can simulate FTL. The motivation for FTL is that it achieves constant regret (with respect to $T$) when the losses are i.i.d. stochastic and there is a gap in mean loss between the best and second-best (permutation) experts. Our algorithms do not maintain $L^*_t$, but we need $L^*_t$ to implement AdaHedge (which we discuss in the next extension). Here, we propose a simple algorithm to perform FTL on the set of orderings. The algorithm works as follows: 1. [Perform FTL on the alive initial experts and keep track of their cumulative losses $(L_1^t, L_2^t, \dots, L_K^t)$, while ignoring the dead experts;]{} 2. [If expert $j$ dies in round $t'$, then for every alive expert $i$ where $L_i^{t'} > L_j^{t'}$ do: $L_i^{t'} := L_j^{t'}$. ]{} This not only performs the same as FTL but also explicitly keeps track of $L^*_t$. We will use this implementation to simulate AdaHedge. #### AdaHedge.
The following change to the learning rate in HPU/HPK recovers AdaHedge. Let $\hat{L}_t = \sum_{r=1}^{t} \hat{\ell}_r$, where $\hat{\ell}_r$ is the loss of the algorithm in round $r$. For round $t$, AdaHedge on $N$ experts sets the learning rate as $\eta_t = (\ln N)/\Delta_{t-1}$ and $\Delta_t = \hat{L}_t - M_t$, where $M_t = \sum_{r=1}^{t} m_r$ and $m_r = -\frac{1}{\eta_r} \ln(\bm{w_r}\cdot e^{-\eta_r \bm{\ell}_r})$; here, $m_r$ can easily be computed using the weights from HPU/HPK. As we have the loss of the algorithm at each round, we can calculate $M_t$. Also, using the implementation of FTL described above, we can maintain $L^*_t$. Finally, we can compute $\Delta_t$ and hence the AdaHedge learning rate for HPU/HPK. #### FlipFlop. By combining AdaHedge and FTL, [@derooij2014follow] proposes FlipFlop, which can do as well as either AdaHedge (minimax guarantees and more) or FTL (for the stochastic i.i.d. case). We can adapt HPK and HPU to FlipFlop by implementing AdaHedge and FTL as described above and switching between the two based on $\Delta_t^{\mathrm{ah}}$ and $\Delta_t^{\mathrm{ftl}}$, where $\Delta_t^{\mathrm{ftl}}$ is defined similarly to $\Delta_t^{\mathrm{ah}}$ but the learning rate associated with $m_t$ for FTL is $\eta^{\mathrm{ftl}} = \infty$ while for AdaHedge it is $\eta^{\mathrm{ah}}_t = \frac{\ln K}{\Delta_{t-1}}$. \[cl:corollary\] By combining FTL and AdaHedge as described above, HPU and HPK simulate FlipFlop over the set of experts $A$ (where $A = \Pi$ for HPU and $A= {\mathcal{E}}$ for HPK) and achieve regret $$R_A (1, T) < \min\left\{ C_0 R_A^{\mathrm{ftl}}(1,T) + C_1, C_2 \sqrt{\frac{L^*_T(T - L^*_T)}{T} \ln \left(\left|A\right|\right)} + C_3\ln\left(\left|A\right|\right) \right\} ,$$ where $C_0, C_1, C_2, C_3$ are constants. The interest in FlipFlop is that in the real world we may not know whether losses are stochastic or adversarial. This motivates using an algorithm that detects and adapts to easier situations. Conclusion {#sec:conclusion} ========== In this work, we introduced the dying experts setting.
We presented matching upper and lower bounds on the ranking regret for both the known and unknown order of dying. In the case of known order, we saw that the reduction in the number of effective orderings allows our bounds to be reduced by a $\sqrt{\log K}$ factor. While it appears to be computationally hard to obtain sublinear regret in the general sleeping experts problem, in the restricted dying experts setting we provided efficient algorithms with optimal regret bounds for both cases. Furthermore, we proposed an efficient implementation of FTL for dying experts which, combined with efficiently maintaining mix losses, enabled us to extend our algorithms to simulate AdaHedge and FlipFlop. It would be interesting to see if the notion of effective experts can be extended to other settings such as multi-armed bandits. Furthermore, it might be interesting to study the problem in regimes in between known and unknown order. ### Acknowledgments {#acknowledgments .unnumbered} This work was supported by the NSERC Discovery Grant RGPIN-2018-03942. Proofs for Section \[sec:number-of-effective\] ============================================== Proof of Theorem \[thm:number-of-effective-experts\] ---------------------------------------------------- We define a new operator, denoted by $+$, which takes an expert $e$ on the left-hand side and a multi-set of orderings $\Pi$ on the right-hand side and returns a new multi-set of orderings in which $e$ is prepended to every ordering $\pi \in \Pi$. Let $J = \{j_1, \dots, j_m\}$ be the rounds on which any expert will die. Without loss of generality, assume that experts die in the order of their indices, i.e., $e_1$ dies first, $e_2$ second, …, and $e_{K-A}$ dies last. We use mathematical induction on $m$ to prove the claim. *Induction Basis:* For $m=0$, or the case that no expert dies (i.e. $A = K$), the number of effective permutation experts is equal to $A$, the number of experts that never die.
Hence, $$f\left(\{\}, A\right) = A .$$ *Induction Hypothesis:* We assume that the number of effective experts when there are only $i-1$ nights is equal to $$f\left(\{ d_1, d_2, \ldots, d_{i-1} \}, A\right) = A \prod_{s=1}^{i-1} \left(d_s+1\right) .$$ Denote the set of the effective permutations created in the induction hypothesis as ${\mathcal{E}}_{i-1}$. *Induction Step:* First, notice that any expert $e \in E_{d}^{j_1+1}$ has an impact on the behavior of a permutation expert $\pi$ only if $e = \sigma^1(\pi)$. If we ignore the first night and remove every $e \in E_{d}^{j_1+1}$ from the orderings, the problem behaves as though those experts do not exist and there are only $i-1$ nights. Due to the induction hypothesis we know that $$f(\{ d_2, \dots, d_{i} \}, A) = A \prod_{s=2}^{i} (d_s+1) .$$ Denote by $F$ the number of effective orderings $\pi$ where $e = \sigma^1(\pi)$ for some $e \in E_{d}^{j_1+1}$. It is easy to see that $$f(\{d_1, d_2, \dots, d_{i} \}, A) = F + f(\{ d_2, \dots, d_{i} \}, A) .$$ On the other hand, the effective orderings which start with $e_s \in E_{d}^{j_1+1}$ can be constructed as $(e_s) + {\mathcal{E}}_{i-1}$, so $$\left|(e_s) + {\mathcal{E}}_{i-1}\right| = \left|{\mathcal{E}}_{i-1}\right| = f\left(\{ d_2, \dots, d_{i} \}, A\right) ,$$ and it follows that ${\mathcal{E}}_i = (\cup_{e_s \in E_{d}^{j_1+1}} ((e_s) + {\mathcal{E}}_{i-1})) \cup {\mathcal{E}}_{i-1}$. Since $\lvert E_{d}^{j_1+1} \rvert = d_1$, this gives $F = d_1 f(\{ d_2, \dots, d_{i} \}, A)$, and due to the induction hypothesis we get: $$\begin{aligned} f(\{ d_1, d_2, \dots, d_{i} \}, A) &= (d_1+1) f(\{ d_2, \dots, d_{i} \}, A) = A \prod_{s = 1}^{i} (d_s+1)\label{effectiveinductionlaststep} \end{aligned}$$ This completes the induction step and concludes the proof. Proofs for Section \[sec:bounds\] ================================= Proof of Theorem \[thm:dtol-lower-bound\] ----------------------------------------- Our lower bound strategy is similar[^2] to the proof of Theorem 3.7 of [@cesa2006prediction] until our equation \[eqn:rademacher-sum\].
Any strategy of Learner over $T$ rounds can be represented as a sequence $\bm{p}$ of $T$ maps $\bm{p}_1, \ldots, \bm{p}_T$, where $$\begin{aligned} \bm{p}_t : [0,1]^{t-1} \rightarrow \Delta_K . \end{aligned}$$ By representing Learner’s strategy in this way, we can write the above minimax regret as: $$\begin{aligned} \inf_{\mathbf{p}} \sup_{\bm{{\ell}}_1, \ldots, \bm{{\ell}}_T} \left\{ \sum_{t=1}^T \bm{p}_t \cdot \bm{{\ell}}_t - \min_{j \in [K]} \sum_{t=1}^T {\ell}_{j,t} \right\} . \end{aligned}$$ The minimax regret can only decrease by replacing the supremum over the experts’ losses by an expectation over random i.i.d. losses ${\ell}_{j,t}$, where, for all $j \in [K]$ and $t \in [T]$, we take ${\ell}_{j,t}$ to be independently drawn uniformly from $\{0,1\}$; hence, the above is lower bounded by $$\begin{aligned} \inf_{\mathbf{p}} \operatorname{\mathsf{E}}\left[ \sum_{t=1}^T \bm{p}_t \cdot \bm{{\ell}}_t - \min_{j \in [K]} \sum_{t=1}^T {\ell}_{j,t} \right] &= \operatorname{\mathsf{E}}\left[ \frac{T}{2} - \min_{j \in [K]} \sum_{t=1}^T {\ell}_{j,t} \right] \\ &= \frac{1}{2} \operatorname{\mathsf{E}}\left[ \max_{j \in [K]} \sum_{t=1}^T \left( 1 - 2 {\ell}_{j,t} \right) \right] . \end{aligned}$$ Now, observe that each random variable $(1 - 2 {\ell}_{j,t})$ has the same law as an independent Rademacher random variable (i.e. uniform over $\pm 1$). Therefore, the above is equal to $$\begin{aligned} \frac{1}{2} \operatorname{\mathsf{E}}\left[ \max_{j \in [K]} \sum_{t=1}^T \varepsilon_{j,t} \right] , \label{eqn:rademacher-sum} \end{aligned}$$ where the $\varepsilon_{j,t}$ are independent Rademacher random variables. Our approach will be to express the above as the expected maximum of a Bernoulli process. To this end, for each $j \in [K]$ define the matrix $\tau^{(j)} \in {\mathbb{R}}^{T \times K}$ whose $j\operatorname*{^{\text{th}}}$ column is equal to the ones vector and whose remaining columns each are equal to the zero vector. 
Our lower bound on the minimax regret now can be rewritten as (one half of) $$\begin{aligned} \operatorname{\mathsf{E}}\left[\max_{j \in [K]} \sum_{t=1}^T \varepsilon_{j,t}\right] &= \operatorname{\mathsf{E}}\left[\max_{j \in [K]} \sum_{i=1}^K \sum_{t=1}^T \varepsilon_{i,t} \tau^{(j)}_{i,t}\right] . \end{aligned}$$ This is just the expected supremum of a Bernoulli process indexed by $\{\tau^{(1)}, \ldots, \tau^{(K)}\}$. A result of [@talagrand2005generic], restated as Lemma \[lemma:sudakov-bernoulli\] after this proof, can be used to lower bound this process in terms of $T$ and $K$. In our setting, treating the matrices $\tau^{(j)}$ as vectors by stacking their columns, we see that the vectors $(\tau^{(j)})_j$ satisfy - $\| \tau^{(i)} - \tau^{(j)} \|_2 \geq \sqrt{2 T}$ for all distinct $i, j \in [K]$; - $\| \tau^{(j)} \|_\infty \leq 1$ . Hence, Lemma \[lemma:sudakov-bernoulli\] implies that the minimax regret is lower bounded by $$\begin{aligned} \frac{1}{2 L} \min \left\{ \sqrt{2 T \log K}, 2 T \right\} , \end{aligned}$$ for $L$ a universal constant. The above proof uses the following powerful result of Talagrand on Sudakov minoration for Bernoulli processes; here, the most convenient form is stated as Theorem 4.2.4 in [@talagrand2005generic], but the result first appeared as Proposition 2.2 of [@talagrand1993regularity] in a quite different form. \[lemma:sudakov-bernoulli\] Let $a, b > 0$ and $\tau^{(1)}, \ldots, \tau^{(K)} \in \ell^2$ satisfy the conditions: - $\| \tau^{(i)} - \tau^{(j)} \|_2 \geq a$ for all distinct $i, j \in [K]$ ; - $\| \tau^{(j)} \|_\infty \leq b$ for all $j \in [K]$ , Let $\varepsilon_1, \varepsilon_2, \ldots$ be i.i.d Rademacher random variables. Then for a universal constant $L$ we have $$\begin{aligned} \operatorname{\mathsf{E}}\sup_{j \leq K} \sum_{s \geq 1} \tau^{(j)}_s \varepsilon_s \geq \frac{1}{L} \min \left\{ a \sqrt{\log K} , \frac{a^2}{b} \right\} . 
\end{aligned}$$ Proof of Theorem \[thm:unknown-lower\] -------------------------------------- We prove the theorem using the following construction. Recall that we refer to a round as a “night” if an expert dies on that round and to each segment between two nights as a “day”. First, partition the $T$ rounds into $m+1$ days of length $T' = T/(m+1)$, where $m$ is the number of nights. The goal is to construct a scenario where each day is a game decoupled from the previous ones. This means that the algorithm will be forced to have no prior information about the experts at the beginning of each day. Recall that $\tau_s$ is the set of time-step indices of day $s$, i.e., $\tau_s = \{t \mid (s-1)T' < t \leq sT'\}$. Each day is divided into two equal parts. Denote by $\tau^1_s$ and $\tau^2_s$ the sets of time-step indices of the first half and the second half of day $s$, respectively. Let $\bm{\ell}_{\tau^1_s,i}$ and $\bm{\ell}_{\tau^2_s,i}$ be the sequences of losses of an expert $i$ on the first and second halves of day $s$, respectively. On the first half of day $s$, each expert suffers losses drawn i.i.d. from a Bernoulli distribution with $p=1/2$. At the end of the first half of the day, we choose the expert with the lowest cumulative loss so far, denoted by $e^*_s$. This expert will suffer no loss in the second half. Also, the adversary forces every other expert $e_i \neq e^*_s$ to suffer losses according to the loss sequence $\bm{\ell}_{\tau^2_s,i} = \mathbf{1} - \bm{\ell}_{\tau^1_s,i}$ on the second half of the day, where the subtraction is element-wise. Denote by $E_a(s)$ the set of experts alive on day $s$. We now analyze the ranking regret of any algorithm for this construction over $T$ rounds. Without loss of generality, suppose the order in which the experts die is ${\mathcal{D}}= (e_1,e_2,\dots,e_m)$. Also, denote by $\pi^* \in \Pi$ the best ordering over $T$. From the construction, it is clear that $\pi^* = ({\mathcal{D}}, \dots)$.
Therefore, the ranking regret over $T$ rounds satisfies $$\label{eq:ranking-over-T} R_\Pi(1,T) = \hat{L} - L_{\pi^*} = \hat{L} - \sum_{s=1}^{m+1}\sum_{t \in \tau_s} \ell_{\sigma^t(\pi^*),t}$$ where $L_{\pi^*}$ is the cumulative loss of playing according to ordering $\pi^*$. Now we write $R_\Pi(1,T)$ in terms of a sum of classical regrets over each day. Since in our construction the best expert of the day dies at the end of that day, for all rounds in a given day, $\sigma^t(\pi^*)$ yields the same expert as the one that is best for that day according to the ordinary regret. Therefore, we have: $$\begin{aligned} \hat{L} - \sum_{s=1}^{m+1}\sum_{t \in \tau_s} \ell_{\sigma^t(\pi^*),t} &= \sum_{s=1}^{m+1} \left(\sum_{t \in \tau_s} \hat{\ell}_t - \min_{s \leq i \leq K} \sum_{t \in \tau_s} \ell_{i,t}\right)\nonumber\\ &= \sum_{s=1}^{m+1} R_{E_a(s)}(\tau_s)\label{eq:normal-regret-over-T} \end{aligned}$$ where the last equality is obtained from the fact that in our construction each day is independent of the others, meaning the history of losses of the experts does not matter. Combining \[eq:ranking-over-T\] and \[eq:normal-regret-over-T\], we have: $$\label{eq:ranking-equals-sum-of-normals} R_\Pi(1,T) = \sum_{s=1}^{m+1} R_{E_a(s)}\left(\tau_s\right)$$ Now it remains to analyze the regret of each day separately. For this, we first lower bound the regret over half of each day. Denote by $\hat{L}_s^1$ and $\hat{L}_s^2$ the cumulative losses of the algorithm on the first and second halves of day $s$, respectively. It is easy to verify that $R_{E_a(s)}(\tau_s) \geq R_{E_a(s)}(\tau^1_s)$, hence for the regret of each day we have $$\begin{aligned} R_{E_a(s)}(\tau_s) = \hat{L}_s^1 + \hat{L}_s^2 - \sum_{t \in \tau_s} \ell_{\sigma^t(\pi^*),t} &\geq \hat{L}^1_s - \sum_{t \in \tau^1_s} \ell_{\sigma^t(\pi^*),t}\nonumber\\ &\geq \frac{1}{L}\min\{\sqrt{T'/2 \log (K-s)}, T'\}\label{eq:lower-on-each-day} \end{aligned}$$ where the last inequality is based on (the proof of) Theorem \[thm:dtol-lower-bound\].
Combining \[eq:ranking-equals-sum-of-normals\] and \[eq:lower-on-each-day\], we have: $$\begin{aligned} R_\Pi(1,T) &\geq \sum_{s=1}^{m+1} \frac{1}{L}\min\{\sqrt{T'/2 \log (K-s)}, T'\} \nonumber\\ &= \frac{1}{L}\sum_{s=1}^{m+1} \sqrt{T/2 (m+1) \log (K-s)} = \Omega\left(\sqrt{Tm \log K}\right) , \end{aligned}$$ where the equality holds for $T$ large enough that the square-root term attains each minimum, yielding the desired bound. The proof of Theorem \[thm:unknown-lower\] uses the following lemma. \[lemma:cauchy-sumofsqrt\] For $x_1, \dots, x_m > 0$ subject to $\sum_{i=1}^{m} x_i = T$, we have: $$\sum_{i=1}^{m} \sqrt{x_i} \leq \sqrt{mT}$$ Denote by $\mathcal{T}$ the vector $\left[ \sqrt{x_1} , \sqrt{x_2} , \dots , \sqrt{x_m}\right]$ and by $I = \left[1,1,\dots,1\right]$ the vector of length $m$ whose elements are all equal to one. We have: $$\begin{aligned} \sum_{i=1}^{m} \sqrt{x_i} = \mathcal{T} \cdot I^T \leq ||\mathcal{T}||\cdot||I|| &= \sqrt{m} \sqrt{(\sqrt{x_1})^2 + (\sqrt{x_2})^2 + \dots + (\sqrt{x_m})^2 } = \sqrt{mT} \end{aligned}$$ where the inequality follows from the Cauchy-Schwarz inequality. Proof of Theorem \[thm:known-lower\] ------------------------------------ The construction for this case is similar to the one we used for the unknown order of dying. We divide the $T$ rounds into $m/2$ days, each of length $T' = 2 T / m$. On each day $s$ we choose two experts $\{e_{2s - 1}, e_{2s}\}$, who suffer losses drawn i.i.d. from a Bernoulli distribution with success probability $p=1/2$. Every expert $e_i \not\in \{e_{2s - 1}, e_{2s}\}$ suffers constant loss $1$ during day $s$. At the end of day $s$, both experts $\{e_{2s - 1}, e_{2s}\}$ die. Therefore, at the beginning of each day, all the experts have the same loss history and consequently, each day is decoupled from the previous ones. Additionally, we give the algorithm the extra information that the best expert of day $s$ is one of the two experts $\{e_{2s - 1}, e_{2s}\}$. Thus, the algorithm needs to track only two experts on a single day. Denote by $E_a(s)$ the set of experts alive on day $s$.
In the following, we analyze the ranking regret of this construction. We will use the same result from the proof of Theorem \[thm:unknown-lower\] to connect the ranking regret to the classical regret over each day. Hence, using \[eq:ranking-equals-sum-of-normals\], we have: $$\label{eq:ranking-equals-sum-of-normals-known} R_\Pi(1,T) = \sum_{s=1}^{m/2} R_{E_a(s)}(\tau_s)$$ Using the bound we obtained from Theorem \[thm:dtol-lower-bound\], for each day $s$, with $K=2$ and $T'$ rounds, we have: $$\label{eq:lower-on-day-regret-known} R_{E_a(s)}(\tau_s) \geq \frac{1}{L}\min\{\sqrt{T'/2 \log 2},T'\}$$ Combining \[eq:ranking-equals-sum-of-normals-known\] and \[eq:lower-on-day-regret-known\], we obtain the bound on the ranking regret over time horizon $T$ as follows: $$R_\Pi(1,T) \geq \sum_{s=1}^{m/2} \frac{1}{L}\min\{\sqrt{T'/2 \log 2}, T'\} = \Omega\left(\sum_{s=1}^{m/2} \sqrt{T/m}\right) = \Omega\left(\sqrt{mT}\right)$$ The theorem follows. Proofs for Section \[sec:algorithm\] ==================================== Wherever we refer to Theorem \[thm:number-of-effective-experts\] in this section, we assume that only one expert dies each night; therefore for $m$ nights (consequently, $m$ dying experts) the value of $f$ (the function defined in Theorem \[thm:number-of-effective-experts\]) is $2^m (K-m)$, where $K$ is the number of experts. Proof of Theorem \[thm:HPU\] ---------------------------- We show that the losses and weights of the two algorithms are the same at each round; therefore, their regrets are the same. Define $\Pi_D$ to be the set of all possible orderings of elements in set $D$ of length $\lvert D \rvert$. We claim that, based on the update rules of HPU for $h_{j,t}$ and $c_{j,t}$, for every round and expert we have $\sum_{\pi \in \Pi_j^t} e^{-\eta L_\pi ^{t-1}} = h_{j,t} \cdot c_{j,t}$. *Induction Basis*: At round $t=1$, in Hedge, every expert has a non-normalized weight of 1. The size of each $\Pi_{j}^1$ is $(K-1)!$. The algorithm assigns $h_{i,1} = (K-1)!$ and $c_{i,1} = 1$, therefore the claim holds.
*Induction Hypothesis*: At the beginning of round $t - 1$, for every alive expert $e_j$, the following holds: $$\sum_{\pi \in \Pi_j^ {t-1}} e^{-\eta L_\pi^{t-2}} = h_{j,t-1} \cdot c_{j,t-1}$$ *Induction Step*: This step is divided into two cases: first, when $E_{a}^t = E_{a}^{t-1}$; second, when an expert dies, so $|E_{a}^t| = |E_{a}^{t-1}| - 1$. *Case I*: If no expert dies at the end of round $t-1$, then for every $i$ we have $\Pi_i^t = \Pi_i^{t-1}$ and $h_{i, t} = h_{i, t-1}$; thus for every alive expert $e_j$ the following holds: $\sum_{\pi \in \Pi_j^t} e^{-\eta L_\pi^{t-2}} = h_{j,t-1} \cdot c_{j,t-1}$. After the update in Hedge, the weight on $\Pi_j^t$ is $ \underset{\pi \in \Pi_{j}^{t}}{\sum} e^{-\eta (L_\pi^{t-2} + \ell_{j, t-1})}$. On the other hand, in the HPU algorithm, we have the following: $$\begin{aligned} h_{j,t}\cdot c_{j,t} = e^{-\eta \ell_{j, t-1}} \cdot c_{j,t-1} \cdot h_{j,t-1} = e^{-\eta \ell_{j, t-1}} \sum_{\pi \in \Pi_{j}^{t-1}} e^{-\eta L_{\pi}^{t-2}} \\ = \sum_{\pi \in \Pi_j^t} e^{-\eta (L_{\pi}^{t-2} + \ell_{j, t-1})} = \sum_{\pi \in \Pi_j^t} e^{-\eta L_{\pi}^{t-1} } \end{aligned}$$ where the second equality follows from the induction hypothesis. It can be observed that the weights are identical to the ones from running Hedge on $K!$ experts. *Case II*: The second case is when expert $j$ dies at the end of round $t-1$. Let $i, k$ be arbitrary alive experts not equal to $j$. Observe that any $\pi \in \Pi_i^t \cap \Pi_j^{t-1}$ takes the form $(\pi_d, e_j, \pi_{d'}, e_i, \pi_{R_i})$, where, for some disjoint $D, D' \subseteq E_d^t$ and $R_i := E \setminus ( D \cup D' \cup \{e_j, e_i \} )$, we have that $\pi_d \in \Pi_D$, $\pi_{d'} \in \Pi_{D'}$, and $\pi_{R_i} \in \Pi_{R_i}$.
Then $\Pi_k^t \cap \Pi_j^{t-1}$ contains a unique element $\pi' = (\pi_d, e_j, \pi_{d'}, e_k, \pi_{R_k})$, where $D$ and $D'$ are taken as before, $R_k := E \setminus ( D \cup D' \cup \{e_j, e_k \} )$ as before, and $\pi_{R_{k}}$ is created from $\pi_{R_{i}}$ only by replacing $e_k$ with $e_i$. Moreover, since their behavior is the same over the first $t-1$ rounds, $\pi$ and $\pi'$ satisfy $L_{\pi}^{t-1} = L_{\pi'}^{t-1}$. Therefore, by symmetry, we can obtain \[eq:first-update\] from \[eq:update-symmetry\]: $$\begin{aligned} \underset{\pi \in \Pi_{i}^{t}}{\sum} e^{-\eta L_{\pi}^{t-1}} &= \underset{\pi \in \Pi_{i}^{t-1}}{\sum} e^{-\eta L_{\pi}^{t-1}} + \underset{\pi \in (\Pi_{i}^{t} {\textstyle\cap} \Pi_{j}^{t-1})}{\sum} e^{-\eta L_{\pi}^{t-1}} \label{eq:update-symmetry}\\ &= \underset{\pi \in \Pi_{i}^{t-1}}{\sum} e^{-\eta L_{\pi}^{t-1}} +\frac{1}{{\left\lvert E_{a}^{t}\right\rvert}} \left(\underset{\pi \in \Pi_{j}^{t-1}}{\sum} e^{-\eta L_{\pi}^{t-1}} \right) \label{eq:first-update}\\ &= h_{i, t} \cdot c_{i, t} \nonumber \end{aligned}$$ Notice that expert $j$ dies at the end of round $t-1$; hence, $\sum_{\pi \in \Pi_{j}^{t-1}} e^{-\eta L_{\pi}^{t-1}}$ is computable at that point. Therefore, HPU always maintains the weights correctly. Proof of Theorem \[thm:HPK\] ---------------------------- Here we follow a construction similar to the proof of Theorem \[thm:HPU\], i.e., we do induction on $t$. Before proceeding to the proof, define $\lambda(\pi, t)$ as a function that removes the ineffective elements of a permutation expert at round $t$. An element is said to be ineffective if it is dead or will never be used for prediction in that permutation. Recall that in this section we assumed that the experts die in order: $e_1$ dies first and $e_{K-A}$ last.
For example, $(e_4) = \lambda((e_4, e_3, e_2, e_1 ), t)$ and $(e_1, e_3, e_4) = \lambda((e_1, e_3, e_2, e_4), 1)$ with respect to the assumption we made earlier on the order of dying, and if $e_1$ dies at $t=1$ and $e_3$ dies at $t=5$, then $(e_3, e_4) = \lambda((e_1, e_3, e_2, e_4), 3)$ and $(e_4) = \lambda((e_1, e_3, e_2, e_4), 6)$. Naturally, $\lambda({\mathcal{E}}, t)$ applies the function $\lambda(\pi,t)$ to every permutation $\pi \in {\mathcal{E}}$. The output of $\lambda({\mathcal{E}}, t)$ is a multi-set, not a set. *Induction Basis*: At round $t=1$, each of the permutation experts has the same weight. Due to Theorem \[thm:number-of-effective-experts\], we know the number of orderings starting with expert $e_i$ is equal to $\ceil{2^{K - i - 1}}$. Therefore, in Hedge, the cumulative non-normalized weight put on ${\mathcal{E}}_{e_i}^1$ is $\ceil{2^{K - i - 1}}$, which is equal to $h_{i, 1} \cdot c_{i,1}$ in HPK. *Induction Hypothesis*: At the beginning of round $t-1$, in Hedge, the cumulative non-normalized weight put on ${\mathcal{E}}_{e_i}^{t-1}$, for every $e_i \in E_a^{t-1}$, is equal to $h_{i, t-1} \cdot c_{i,t-1}$. *Induction Step*: As in the proof of Theorem \[thm:HPU\], for round $t$, we split the step into two cases. In the first case, no expert dies, i.e., $E_a^t = E_a^{t-1}$. In the second case, one expert dies at the end of round $t-1$. The proof for the first case is omitted as it is identical to the proof for *Case I* of Theorem \[thm:HPU\]. For the case that an expert dies at the end of round $t-1$, we show that the weight distribution works correctly. Let $E^{(i+1,K)} = \{e_{i+1}, \dots, e_K\}$. First, by the discussion in Section \[sec:number-of-effective\], the number of initial orderings starting with $e_i$ is $\lceil 2^{K-i-1} \rceil$.
Second, due to Lemma \[lemma:effectivetime\], if $e_i$ dies at round $t-1$, we have: $$\lambda\left({\mathcal{E}}_{e_i}^{t-1},\ \ t\right) =\lambda\left(\underset{e \in E^{(i+1,K)}}{\textstyle\bigcup}{\mathcal{E}}_{e}^{t-1},\ \ t\right)$$ therefore, for every $e_j$ with $j > i$, a $(| {\mathcal{E}}_{e_j}^{1} |)/({ |{\mathcal{E}}_{e_i}^{1}|})$ fraction of $h_{i, t-1} \cdot c_{i,t-1}$ must be added to the weight of ${\mathcal{E}}_{e_j}^{t}$ to maintain it correctly. Before proceeding to Lemma \[lemma:effectivetime\], recall that the operator $+$ acts on an expert $e$ on the LHS and a multi-set of orderings $\Pi$ on the RHS, and returns a new multi-set of orderings in which $e$ is added to the left side of every ordering $\pi \in \Pi$. \[lemma:effectivetime\] At round $t$, where $E_d^t = \{e_1, e_2, \dots e_{i-1}\}$ are dead and the rest of the experts are alive, we have: $$\lambda\left({\mathcal{E}}_{e_i}^{t},\ \ t\right) = (e_i)+\lambda\left(\underset{e \in E^{(i+1,K)}}{\textstyle\bigcup}{\mathcal{E}}_{e}^{t},\ \ t\right)$$ and therefore: $$\left|{\mathcal{E}}_{e_i}^{ t}\right| = \left|\lambda\left(\underset{e \in E^{(i+1,K)}}{\textstyle\bigcup} {\mathcal{E}}_{e}^{t}, \ \ t\right)\right|.$$ Before proving the statement, let us define two new operators. For ${\mathcal{E}}$ a set of permutation-experts, ${\mathcal{E}}- \{ e_i \}$ removes element $e_i$ from every permutation $\pi \in {\mathcal{E}}$. Also, ${\mathcal{E}}' = x {\mathcal{E}}$ is a multi-set in which each item of ${\mathcal{E}}$ is copied $x$ times. As a result, we trivially have $|{\mathcal{E}}'| = x\cdot|{\mathcal{E}}|$. Recall that we assumed that the experts die in order. Due to the constructive structure of Theorem \[thm:number-of-effective-experts\], ${\mathcal{E}}_{e_i}^{1}$ is obtained by adding $e_i$ as the first element of every permutation in ${\mathcal{E}}_{E^{(i+1,K)}}^{1}$.
$$\lambda\left(({\mathcal{E}}_{e_i}^{1} ),\ \ 1\right) = (e_i) + \lambda\left( \underset{{e \in E^{(i+1,K)}}}{\textstyle\bigcup}{\mathcal{E}}_{e}^{1},\ \ 1\right)$$ Therefore, the claim holds for $t=1$ and we have $|\lambda({\mathcal{E}}_{e_i}^{1}, 1)| = |{\mathcal{E}}_{e_i}^{1}|$. It is easy to verify that: $$\left|\lambda\left(\underset{{e \in E^{(i+1,K)}}}{\textstyle\bigcup}{\mathcal{E}}_{e}^{1},\ \ 1\right)\right| = \left|\underset{e \in E^{(i+1,K)}}{\textstyle\bigcup}{\mathcal{E}}_{e}^{1}\right|$$ Due to Lemma \[lemma:copy\], a similar claim holds for round $t$ when $\{ e_1, \dots, e_{i-1} \}$ are dead: $\lambda({\mathcal{E}}_{e_i}^{t}, t)$ consists of $2^{i-1}$ copies of $\lambda({\mathcal{E}}_{e_i}^{1}, 1)$, and similarly $$\lambda\left(\underset{e \in E^{(i+1,K)}}{\textstyle\bigcup}{\mathcal{E}}_{e}^{t},\ \ t\right) = 2^{i-1} \lambda\left(\underset{e \in E^{(i+1,K)}}{\textstyle\bigcup}{\mathcal{E}}_{e}^{1},\ \ 1\right)$$hence: $$\begin{aligned} 2^{i-1} \lambda\left({\mathcal{E}}_{e_i}^{1},\ \ 1\right) &= (e_i) + 2^{i-1} \lambda\left(\underset{{e \in E^{(i+1,K)}}}{\textstyle\bigcup}{\mathcal{E}}_{e}^{1},\ \ 1\right)\\ \lambda\left({\mathcal{E}}_{e_i}^{t},\ \ t\right) &= (e_i) +\lambda\left(\underset{{e \in E^{(i+1,K)}}}{\textstyle\bigcup}{\mathcal{E}}_{e}^{t},\ \ t\right) \end{aligned}$$ \[lemma:copy\] At round $t$, when $m$ experts have died so far and $e_j \in E_a^t$, $\lambda({\mathcal{E}}_{e_j}^{t}, t)$ is equal to $2^m$ copies of $\lambda({\mathcal{E}}_{e_j}^{1}, 1)$. Recall that $\Pi_D$ is the set of all possible orderings of elements in set $D$ of length $|D|$ and, similarly, ${\mathcal{E}}_D$ is the set of all effective orderings with respect to $\Pi_D$.
Due to the constructive building of ${\mathcal{E}}_{e_i}^{1}$, $\lambda({\mathcal{E}}_{e_i}^{1},\ 1)$ is equal to $$\label{basicrule} (e_i) + \lambda\left({\mathcal{E}}_{\{ e_{i+1}, \dots, e_K \}}^{1},\ \ 1\right) = (e_i) + \lambda\left(\underset{i+1 \le j \le K}{\textstyle\bigcup} {\mathcal{E}}_{e_j}^{1},\ \ 1\right) = (e_i) + \underset{i+1 \le j \le K}{\textstyle\bigcup} \lambda\left({\mathcal{E}}_{e_j}^{1},\ \ 1\right)$$ We use induction on $m$ to prove the claim. *Induction Basis:* The claim trivially holds when $m=0$. *Induction Hypothesis:* When $e_1, e_2, \dots, e_{i-1}$ are dead before round $t-1$, for any $j \ge i$ we have $\lambda({\mathcal{E}}_{e_j}^{t-1},\ \ t-1) = 2^{i-1} \lambda({\mathcal{E}}_{e_j}^{1},\ \ 1)$. *Induction Step:* Assume that at round $t-1$, $e_i$ dies. $$\begin{aligned} \lambda\left({\mathcal{E}}_{e_i}^1,\ \ 1\right) &= (e_i) + \underset{e \in E^{(i+1,K)}}{\textstyle\bigcup} \lambda\left({\mathcal{E}}^1_e,\ \ 1\right)\nonumber\\ 2^{i-1} \lambda\left({\mathcal{E}}_{e_i}^1,\ \ 1\right) &= (e_i) + 2^{i-1}\underset{e \in E^{(i+1,K)}}{\textstyle\bigcup} \lambda\left({\mathcal{E}}^1_e,\ \ 1\right)\nonumber\\ 2^{i-1} \lambda\left({\mathcal{E}}_{e_i}^1,\ \ 1\right) &= (e_i) + \underset{e \in E^{(i+1,K)}}{\textstyle\bigcup} 2^{i-1}\lambda\left({\mathcal{E}}^1_e,\ \ 1\right)\nonumber\\ \lambda\left({\mathcal{E}}_{e_i}^{t-1},\ \ t-1\right) &= (e_i) + \underset{e \in E^{(i+1,K)}}{\textstyle\bigcup} \lambda\left({\mathcal{E}}^{t-1}_e,\ \ t-1\right) \end{aligned}$$ where the second and fourth equalities follow by applying the induction hypothesis to the left- and right-hand sides of the first and third lines, and the first equality holds due to Section \[sec:number-of-effective\].
Therefore, when $e_i$ dies, for any $\pi \in {\mathcal{E}}^{t-1}_{e_i}$ we have $\pi \in {\mathcal{E}}^{t}_{e}$ for some $e \in E^{(i+1,K)}$; hence $$\underset{e \in E^{(i+1,K)}}{\textstyle\bigcup} \lambda\left({\mathcal{E}}^{t}_e,\ \ t\right) = \underset{e \in E^{(i+1,K)}}{\textstyle\bigcup} 2\lambda\left({\mathcal{E}}^{t-1}_e,\ \ t-1\right) = \underset{e \in E^{(i+1,K)}}{\textstyle\bigcup}2^i \lambda\left({\mathcal{E}}^{1}_e,\ \ 1\right)$$ where the second equality is from the induction hypothesis. It is easy to see that each set in the union is independent of the others, so $\lambda({\mathcal{E}}_{e_j}^t,\ \ t) = 2^i \lambda({\mathcal{E}}_{e_j}^1,\ \ 1)$ for $j>i$. Proof of Theorem \[thm:quantile\] (Adapting to the number of nights $\mathbf{m}$) --------------------------------------------------------------------------------- The idea is to use a simple counting argument. Let $\pi$ be a best permutation expert (it typically will not be unique). For the dying sequence that actually occurs, we will lower bound how many other permutations have the same behavior as this one (we call these behavioral copies). First, observe that if there are $m$ nights, then each permutation expert can change the actual expert it uses for prediction at most $m$ times (for a total of $m+1$ different experts). Suppose that, over the course of the game, $\pi$ predicts as $e_{i_1}, e_{i_2}, \ldots, e_{i_l}$, where $l \leq m+1$. Now, consider those permutations that actually *begin* with $e_{i_1}, e_{i_2}, \ldots, e_{i_l}$. As the first $l$ positions are fixed, there are $(K - l)!$ such permutations and hence at least $(K - l)!$ behavioral copies of $\pi$. Hence, if $\pi$ is the best expert, then we can compete with it using an $\varepsilon$-quantile bound with $\varepsilon = \frac{(K-l)!}{K!} = \prod_{r=0}^{l-1} \frac{1}{K - r} = \varepsilon_l$.
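The counting identity above is easy to check numerically. The following sketch (the values of $K$, $l$, and the prefix are illustrative assumptions, not taken from the text) enumerates the permutations sharing a fixed prefix and compares the resulting quantile with the product form:

```python
from itertools import permutations
from math import factorial

# Illustrative sizes (assumptions for this sketch).
K, l = 6, 3
prefix = (0, 1, 2)  # stands for the actual predictors e_{i_1}, ..., e_{i_l}

# Number of permutations of K experts whose first l positions match the prefix;
# these are the "behavioral copies" of the best permutation expert.
copies = sum(1 for pi in permutations(range(K)) if pi[:l] == prefix)

# eps_l two ways: as (K - l)!/K! and as prod_{r=0}^{l-1} 1/(K - r).
eps_ratio = copies / factorial(K)
eps_product = 1.0
for r in range(l):
    eps_product *= 1.0 / (K - r)
```

Both expressions give the same quantile, confirming that fixing the $l$ actually-used predictors leaves exactly $(K-l)!$ behavioral copies.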
Although we do not know the best choice of $\varepsilon$ in advance, we run Hedge on top of $K$ copies of Hedge-Perm-Unknown; since one of the copies uses the quantile level $\varepsilon_l$, we can compete with this optimally tuned copy at an additional regret overhead of $\sqrt{T \log K}$. Moreover, a basic quantile-bound exercise shows that the regret of the optimally tuned copy is ${\mathcal{O}}(\sqrt{T (l+1) \log K})$, where $l \leq m$. [^1]: equal contribution [^2]: But it is simpler since we do not consider the more general game of prediction with expert advice.
--- author: - 'Mohammad Bakhshalipour$^\ddagger{}$$^\S{}$ HamidReza Zare$^\ddagger{}$ Pejman Lotfi-Kamran$^\S{}$ Hamid Sarbazi-Azad$^\ddagger{}$$^\S{}$' bibliography: - 'ref.bib' title: 'Die-Stacked DRAM: Memory, Cache, or MemCache?' ---
--- abstract: 'We study a switching synchronization phenomenon taking place in one-dimensional memristive networks when the memristors switch from the high to the low resistance state. It is assumed that the distributions of threshold voltages and switching rates of the memristors are arbitrary. Using the Laplace transform, a set of non-linear equations describing the memristor dynamics is solved exactly, without any approximations. The time dependencies of memristances are found, and it is shown that the voltage drops across the memristors are proportional to their threshold voltages. A compact expression for the network switching time is derived.' author: - 'V. A. Slipko' - 'Y. V. Pershin' bibliography: - 'memcapacitor.bib' title: 'Switching synchronization in 1-D memristive networks: An exact solution' --- Introduction ============ The collective effects in networks of resistors with memory (memristors [@chua76a]) have attracted significant attention in recent years  [@pershin11d; @pershin13a; @pershin13b; @vourkas2014generalization; @slipko15a; @Caravelli17a; @Slipko17a], driven by their possible applications. In particular, the most commonly studied networks, consisting of two nonvolatile memristors, exhibit the remarkable functionality of universal Boolean logic [@borghetti10a; @kvatinsky2014memristor]. Some volatile memristors [@pershin17a] offer an alternative (but less practical) approach for the same application. It has been demonstrated theoretically that 2D memristive networks can be used to solve the shortest-path [@pershin13b] and maze [@pershin11d] problems in a single step. Moreover, it has been found that the simplest 1D memristive networks exhibit complex switching dynamics [@pershin13a] involving a switching synchronization phenomenon [@slipko15a], and 1D networks combining memristors and resistors can transfer and process information [@Slipko17a]. Some of the recent advances in the area of memristor-based nanoelectronic networks are summarized in Refs.
[@adamatzky2013memristor; @vourkas2016memristor]. The switching synchronization [@slipko15a] is a collective phenomenon taking place in 1D memristive networks (Fig. \[fig1\]) in the regime when the threshold-type memristors with unequal switching rates switch from the high to the low resistance states. It was shown in Ref. [@slipko15a] that the switching of such memristors is synchronized in the sense that the faster-switching memristors 'wait' for the switching of slower ones. The details of the switching synchronization phenomenon can be found in Ref. [@slipko15a], which studies a network of memristors with unequal switching rates but the same threshold voltages. The corresponding analytical results were derived in the limit when the applied voltage per memristor just slightly exceeds the memristor threshold voltage. The goal of the present paper is to extend our previous result (Ref. [@slipko15a]) to the case of memristors with unequal threshold voltages, unequal switching rates, and an arbitrary voltage applied to the network. In what follows, the equations describing the network dynamics are solved exactly using the Laplace transform. Based on this approach, we have been able to find the time dependencies of the memristances exactly. Moreover, a generalized synchronization rule has been formulated. The main result of this paper is given by Eqs. (\[RiqFinal\])-(\[DenomF\]), which represent the time dependencies of memristances in a parametric form. ![\[fig1\] (Color online) One-dimensional network of $N$ memristive systems M$_i$ connected with the same polarity to the voltage source $V$.
It is assumed that at the initial moment of time the memristors are in their high resistance states.](fig1){width=".85\columnwidth"} Voltage-controlled memristive systems are a class of two-terminal devices with memory defined by [@chua76a] $$\begin{aligned} I&=&R^{-1}\left( x, V, t\right) V, \label{eq3} \\ \dot{x}&=&f\left(x, V, t\right), \label{eq4}\end{aligned}$$ where $I$ and $V$ are the current through and voltage across the system, respectively, $R\left( x, V, t\right)$ is the memristance, $x$ is an $n$-component vector of internal state variables, and $f\left(x, V, t\right)$ is a vector-function. For our purposes, it is sufficient to use the following simple model of memristors, which nevertheless incorporates several important aspects of the physics of memristive devices, such as threshold-type switching, limited resistance states, and a finite switching time [@pershin11a; @di2013physical]. For the $i$-th memristor, this model is formulated as [@pershin09b] $$\begin{aligned} I_i&=&R_i^{-1}V_i \label{eq:model1} \\ \frac{\textnormal{d}R_i}{\textnormal{d}t}&=& \begin{cases} \pm\textnormal{sgn}(V_i)\beta_i(|V_i|-V_{t,i}) \;\textnormal{if} \;\; |V_i|>V_{t,i} \\ 0 \;\;\;\;\;\;\;\;\;\;\;\; \textnormal{otherwise} \end{cases} , \label{eq:model2}\end{aligned}$$ where $I_i$ and $V_i$ are the current through and voltage across the $i$-th memristor, the memristance $R_i$ is used as the internal state variable [@chua76a], $\beta_i$ is the switching rate, $V_{t,i}$ is the (positive) threshold voltage, $\textnormal{sgn}(.)$ is the sign of the argument, and the $+$ or $-$ sign is selected according to the device connection polarity. Additionally, it is assumed that the memristance is limited to the interval \[$R_{on}$, $R_{off}$\], where $R_{on}$ and $R_{off}$ are the low and high resistance states of the memristors, respectively. The model {#sec2} ========= Equations --------- We start from the system of $N$ nonlinear equations describing the evolution of Fig.
\[fig1\] memristors that are coupled through the current (or, equivalently, the total resistance $R$): $$\begin{aligned} \dot{R}_i(t)&=&-\beta_i\left[ V \frac{R_i(t)}{R(t)}-V_{t,i}\right],~i=1,...,N, \label{Ri(t)def}\\ R&=&\sum_{i=1}^{N}R_i. \label{Rdef}\end{aligned}$$ Moreover, it is assumed that at the initial moment of time $t=0$, all voltage drops $V_i$ across the memristors are larger than the corresponding (positive) threshold voltages $V_{t,i}$, namely, $V_i(t=0)> V_{t,i}$. In this case, one can show that $V_i(t)> V_{t,i}$ at all later times as well. Indeed, if at some moment of time $V_i(t)=V_{t,i}$ for some $i$, then this particular memristance $R_i$ does not change during the subsequent infinitesimal time interval (see Eq. (\[Ri(t)def\])). However, the ratio $R_i(t)/R(t)$ can only increase, resulting in $V_i>V_{t,i}$ at the next moment of time. In order to realize this regime of operation, the applied voltage $V$ should exceed the total threshold voltage, $V>V_{t}=\sum_i V_{t,i}$. It is convenient to introduce a new independent variable $q$ instead of $t$, $q=\int_0^t V \textnormal{d}t'/R(t')$, which represents the charge that has flowed through the network by time $t$. Then, the system of Eqs. (\[Ri(t)def\])-(\[Rdef\]) can be transformed into $$\begin{aligned} \frac{\textnormal{d}R_i}{\textnormal{d}q}&=&-\beta_i\left[R_i-\frac{V_{t,i}}{V}R\right], \label{Ri(q)def} \\ t&=&\frac{1}{V}\int\limits_0^q \textnormal{d}q' R(q'). \label{t(q)def}\end{aligned}$$ Importantly, one can notice from the above equations that this change of independent variable has linearized the system of Eqs. (\[Ri(t)def\])-(\[Rdef\]). Laplace transform solution -------------------------- Next, we introduce the Laplace transforms of $R_i$ and $R$ as $F_i(p)=\int_0^\infty R_i(q)\exp(-pq)\textnormal{d}q$ and $F(p)=\int_0^\infty R(q)\exp(-pq)\textnormal{d}q$, respectively. Applying the Laplace transformation to Eq.
(\[Ri(q)def\]) yields $$F_i(p)=\frac{R_i(0)}{p+\beta_i}+\frac{V_{t,i}\beta_i}{V}\frac{F(p)}{p+\beta_i}, \label{Fi}$$ where $R_i(0)$ is the initial memristance of the $i$-th memristor. Summing Eqs. (\[Fi\]) over $i=1,...,N$ results in the following expression for the Laplace transform of the total memristance: $$F(p)=\frac{\sum\limits_{i=1}^{N}\frac{R_i(0)}{p+\beta_i}} {1-\frac{1}{V}\sum\limits_{i=1}^{N}\frac{V_{t,i}\beta_i}{p+\beta_i}}. \label{F}$$ One can notice that $F(p)$ is a rational function that approaches zero at infinity as $$F(p)=\frac{1}{p}\sum_{i=1}^{N}R_i(0)+O\left(\frac{1}{p^2}\right),~ p\rightarrow\infty. \label{Finfty}$$ Therefore, we can safely apply the inverse Laplace transform to obtain the total memristance $R$ as a function of the charge $q$ flown through the network. This results in $$\begin{aligned} R(q)=\sum_{k} \textnormal{Res}(F(p)e^{pq};p_k)\nonumber\\ =-\textnormal{Res}(F(p)e^{pq};p=\infty), \label{R(q)}\end{aligned}$$ where $p_k$ are the singular points of the function (\[F\]). ![\[fig2\] (Color online) Schematics of the denominator of $F(p)$ (Eq. (\[F\])) for the case of $N=3$.](fig2){width=".8\columnwidth"} Then, by using Eq. (\[t(q)def\]), one finds the time as a function of charge $$\begin{aligned} t(q)=\frac{1}{V}\sum_{k} \textnormal{Res}\left(\frac{F(p)}{p}(e^{pq}-1);p_k\right)\nonumber\\ =-\textnormal{Res}\left(\frac{F(p)}{p}e^{pq};p=\infty\right). \label{t(q)}\end{aligned}$$ The positivity of the threshold voltages $V_{t,i}$ and rates $\beta_i$ guarantees that the points $p=-\beta_i$ are regular points of the function $F(p)$: $$\lim\limits_ {p\rightarrow -\beta_i} F(p)=-\frac{R_i(0)V}{V_{t,i}\beta_i}. \label{limbetai}$$ So, the only singular points of the function (\[F\]) are the zeroes of its denominator. Clearly, there are no more zeroes than the number of distinct values among the rates $\beta_i$. Moreover, all these zeroes are negative, and they are separated by the corresponding rates $-\beta_i$.
This can be clearly seen from the graph of the denominator of Eq. (\[F\]) as a function of $p$ (see Fig. \[fig2\]). Let us assume that all $\beta_i$ are different. In this case, there are exactly $N$ different singular points $p_1, p_2, ..., p_N$, which are simple poles satisfying the following inequalities: $$\begin{aligned} -\beta_N<p_N<-\beta_{N-1}<\cdots<-\beta_{1}<p_1<0 . \label{ineq}\end{aligned}$$ In order to find $R_i(q)$, we apply the inverse Laplace transform to $F_i(p)$ given by Eq. (\[Fi\]) and obtain $$\begin{aligned} R_i(q)=\frac{V_{t,i}\beta_i}{V}\sum_{k} \textnormal{Res}\left(\frac{F(p)}{p+\beta_i}e^{pq};p_k\right)\nonumber\\ =R_i(0)e^{-\beta_i q}-\frac{V_{t,i}\beta_i}{V} \textnormal{Res}\left(\frac{F(p)}{p+\beta_i}e^{pq};p=\infty\right). \label{Riq}\end{aligned}$$ Using the above-mentioned information about the singular points of the function $F(p)$, the final results can be presented as finite sums over these singular points: $$\begin{aligned} R_i(q)&=&\frac{V_{t,i}\beta_i}{V}\sum_{k} \frac{f_{k}}{p_k+\beta_i}e^{p_k q}, \label{RiqFinal}\\ t(q)&=&\frac{1}{V}\sum_{k} \frac{f_k}{p_k}e^{p_k q}+T_0, \label{tq}\\ R(q)&=&\sum_{k}f_ke^{p_k q}, \label{Rq}\end{aligned}$$ where the constants $f_k$ and $T_0$ are given by $$\begin{aligned} f_k&=&\textnormal{Res}(F(p);p_k)=\frac{V\sum_{i=1}^{N}\frac{R_i(0)}{p_k+\beta_i}} {\sum_{i=1}^{N}\frac{V_{t,i}\beta_i}{(p_k+\beta_i)^2}},\\ \label{fk} T_0&=&-\frac{1}{V}\sum_k \textnormal{Res}\left(\frac{F(p)}{p};p_k\right)\nonumber\\ &=&\frac{1}{V}\textnormal{Res}\left(\frac{F(p)}{p};p=0\right) =\frac{\sum_{i=1}^{N}\frac{R_i(0)}{\beta_i}}{V-\sum_{i=1}^{N}V_{t,i}}. \label{T0}\end{aligned}$$ and the $p_k$ are the roots of the equation $$\begin{aligned} 1-\frac{1}{V}\sum_{i=1}^{N}\frac{V_{t,i}\beta_i}{p+\beta_i}=0. \label{DenomF}\end{aligned}$$ The exact solution (\[RiqFinal\])-(\[DenomF\]) of the system of Eqs. (\[Ri(t)def\])-(\[Rdef\]) determines the time dependencies of all memristances in parametric form.
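The exact solution is straightforward to evaluate numerically. The sketch below (the parameter values are illustrative assumptions, in units of $V_0$, $\beta_0$, and $R_{on}$) locates the roots $p_k$ of Eq. (\[DenomF\]) by bisection in the intervals (\[ineq\]) and checks the result against two exact identities: $R(0)=\sum_i R_i(0)$ and $t(0)=0$.

```python
import math

# Illustrative N = 3 network (assumed values, not prescribed by the text).
beta = [0.7, 0.9, 1.3]       # switching rates beta_i
Vt = [0.97, 1.07, 0.90]      # threshold voltages V_{t,i}
R0 = [100.0, 100.0, 100.0]   # initial memristances R_i(0) = R_off
V = 3.3                      # applied voltage; must satisfy V > sum(Vt)

def denom(p):
    # Left-hand side of Eq. (DenomF); its zeros are the poles p_k of F(p).
    return 1.0 - sum(vt * b / (p + b) for vt, b in zip(Vt, beta)) / V

def bisect(f, a, b, n=200):
    # Simple bisection; f must change sign on (a, b).
    fa = f(a)
    for _ in range(n):
        m = 0.5 * (a + b)
        if (f(m) > 0.0) == (fa > 0.0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

# One root per interval: -beta_k < p_k < -beta_{k-1} (with "beta_0" = 0),
# since denom -> -inf just above each pole and denom(0) > 0 when V > V_t.
eps = 1e-9
edges = [0.0] + [-b for b in beta]
p = [bisect(denom, edges[k + 1] + eps, edges[k] - (eps if k else 0.0))
     for k in range(len(beta))]

# Residues f_k (Eq. (fk)) and characteristic switching time T0 (Eq. (T0)).
f = [V * sum(r / (pk + b) for r, b in zip(R0, beta))
     / sum(vt * b / (pk + b) ** 2 for vt, b in zip(Vt, beta)) for pk in p]
T0 = sum(r / b for r, b in zip(R0, beta)) / (V - sum(Vt))

def R(q):   # total memristance, Eq. (Rq)
    return sum(fk * math.exp(pk * q) for fk, pk in zip(f, p))

def t(q):   # elapsed time, Eq. (tq)
    return sum(fk * math.exp(pk * q) / pk for fk, pk in zip(f, p)) / V + T0

def Ri(i, q):  # individual memristance, Eq. (RiqFinal)
    return Vt[i] * beta[i] / V * sum(
        fk * math.exp(pk * q) / (pk + beta[i]) for fk, pk in zip(f, p))
```

Here $\sum_k f_k$ reproduces the initial total resistance (consistent with Eq. (\[Finfty\])), $t(0)$ vanishes, and each $R_i(0)$ recovered from Eq. (\[RiqFinal\]) matches its initial value; note the sketch ignores the lower bound $R_{on}$, so it is meaningful only while all $R_i(q)>R_{on}$.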
It is interesting to note that $T_0$ (given by Eq. (\[T0\])) can be considered a characteristic switching time for the network, being the time at which all resistances would reach zero. In reality, however, the switching of any individual memristor stops when its resistance reaches the lowest possible value $R_{on}$. Clearly, in the limit $R_{on}\ll R_{off}$, the time $T_0$ provides a good estimate for the network switching time. Asymptotic behavior ------------------- Next, we consider the asymptotic behavior of the exact solution at long times. This is possibly the most interesting case, as the short-time behavior is strongly influenced by the initial conditions. Consider the case $\beta_1 q\gg 1$. Then, as follows from the inequalities (\[ineq\]), all the terms in the sums (\[RiqFinal\])-(\[Rq\]) except for the first ones can be neglected, as the contributions of the higher-order terms (corresponding to $p_i$ with $i\geq 2$) are exponentially small. Eliminating the charge $q$, we find the asymptotic behavior of the total resistance $R$ and the individual memristances $R_i$: $$\begin{aligned} R=V|p_1|(T_0-t), \label{R_large}\\ R_i=\frac{V_{t,i}}{V}\frac{\beta_i}{(\beta_i+p_1)}R. \label{Ri_large}\end{aligned}$$ It follows from Eq. (\[R\_large\]) that at long times the total resistance of the network decreases linearly with time at the rate defined by $V |p_1|$. Moreover, Eq. (\[Ri\_large\]) demonstrates that the individual memristance $R_i$ evolves similarly to the total resistance $R$. (a)![image](fig4a){width=".8\columnwidth"} (c)![image](fig4c){width=".8\columnwidth"}\ (b)![image](fig4b){width=".8\columnwidth"} (d)![image](fig4d){width=".8\columnwidth"} Generally, the root $p_1$ of smallest magnitude of Eq. (\[DenomF\]) cannot be found analytically. Consider the special case of the applied voltage slightly exceeding the combined threshold voltage of the network, namely, $\delta V=V-V_t\rightarrow +0$. In this case, it is possible to show that $|p_1|\rightarrow 0$.
Consequently, $p_1$ can be calculated by expanding the left-hand side of Eq. (\[DenomF\]) with respect to small $p$: $$\begin{aligned} p_1=-\frac{\delta V}{\sum\limits_{i=1}^{N}\frac{V_{t,i}}{\beta_i}} \left[1- \frac{\sum\limits_{i=1}^{N}\frac{V_{t,i}}{\beta_i^2}} {\left(\sum\limits_{i=1}^{N}\frac{V_{t,i}}{\beta_i}\right)^2} \delta V\right]+O(\delta V^3), \label{p1}\end{aligned}$$ $\delta V\rightarrow +0$, where the second term in the brackets should be small in comparison with the first one, i.e., with $1$. In this limiting case, Eq. (\[Ri\_large\]) can be simplified. Neglecting $|p_1|$ compared to $\beta_1$, we obtain $$\frac{R_{i}}{V_{t,i}}=\frac{R}{V}. \label{Ri_sinch}$$ Thus, in this case, the ratio of the memristance $R_i$ of an individual memristor to its threshold voltage $V_{t,i}$ does not depend on the index $i$. This observation can be considered the generalized synchronization effect. Figure \[fig4\](a)-(c) shows differently normalized memristances in a sample $N=3$ network, calculated using Eqs. (\[RiqFinal\])-(\[DenomF\]). In order to obtain these plots, the following set of parameter values was used: $R_i(t=0)=R_{off}=100R_{on}$, $V_{t,1}=0.97V_0$, $V_{t,2}=1.07V_0$, $V_{t,3}=0.9V_0$, $\beta_1=0.7 \beta_0$, $\beta_2=0.9 \beta_0$, $\beta_3=1.3 \beta_0$, $V=1.1NV_0$. From this figure we notice that while the individual memristances $R_i(t)$ (Fig. \[fig4\](a)) exhibit quite different evolution, normalizing by the threshold voltages $V_{t,i}$ (Fig. \[fig4\](b)), or even better by the combination $R_i(\beta_i+p_1)/(V_{t,i}\beta_i)$ (Fig. \[fig4\](c)), puts the curves very close to each other (after an initial time interval). Fig. \[fig4\](d) presents the total memristance calculated with Eq. (\[Rq\]) (solid line) alongside its asymptotic behavior (dotted line, Eq. (\[R\_large\])). Clearly, there is excellent agreement between these two results, especially at longer times.
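The quality of the expansion (\[p1\]) can be checked directly for this parameter set (working in units where $V_0=\beta_0=1$); the following sketch compares the exact smallest-magnitude root of Eq. (\[DenomF\]), found by bisection, with its asymptotic estimate:

```python
# Parameter set of Fig. 4, in units where V0 = beta0 = 1.
beta = [0.7, 0.9, 1.3]
Vt = [0.97, 1.07, 0.90]
V = 1.1 * 3  # V = 1.1 * N * V0

def denom(p):  # left-hand side of Eq. (DenomF)
    return 1.0 - sum(vt * b / (p + b) for vt, b in zip(Vt, beta)) / V

# Exact p1: the unique root in (-beta_1, 0), found by bisection;
# denom -> -inf as p -> -beta_1^+, while denom(0) > 0 since V > V_t.
a, b = -beta[0] + 1e-9, 0.0
for _ in range(100):
    m = 0.5 * (a + b)
    if denom(m) < 0.0:
        a = m
    else:
        b = m
p1_exact = 0.5 * (a + b)

# Asymptotic p1 from Eq. (p1), to first order in dV = V - V_t.
dV = V - sum(Vt)
s1 = sum(vt / b_ for vt, b_ in zip(Vt, beta))     # sum_i V_{t,i} / beta_i
s2 = sum(vt / b_**2 for vt, b_ in zip(Vt, beta))  # sum_i V_{t,i} / beta_i^2
p1_asym = -(dV / s1) * (1.0 - (s2 / s1**2) * dV)
```

Both values come out near $-0.1\,\beta_0$ and agree to within a few percent, consistent with the error quoted in the text; the agreement is reasonable given that $\delta V/V_t\approx 0.12$ here is not especially small.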
It is interesting to note that for the selected set of parameters, the smallest-magnitude root of Eq. (\[DenomF\]) is approximately $p_1=-0.096\beta_0$. Its magnitude is small compared to the rate $\beta_1=0.7 \beta_0$. Therefore, one can also use the asymptotic Eq. (\[p1\]), which results in $p_1=-0.098\beta_0$, so that the relative error is less than $1.6$ percent. Discussion ========== (a)![\[fig3\] (Color online) Time dependencies of memristances (a) and normalized memristances (b) found in numerical simulations of the network dynamics. See the text for details.](fig3a "fig:"){width=".8\columnwidth"}\ (b)![\[fig3\] (Color online) Time dependencies of memristances (a) and normalized memristances (b) found in numerical simulations of the network dynamics. See the text for details.](fig3b "fig:"){width=".8\columnwidth"} The exact solution of the nonlinear problem of network dynamics (Eqs. (\[Ri(t)def\])-(\[Rdef\])) presented in Sec. \[sec2\] is given by Eqs. (\[RiqFinal\])-(\[Rq\]), which are the main mathematical result of this paper. According to these equations, the time dependencies of the memristances are expressed in parametric form through the charge that has flowed through the network. This result is quite interesting by itself, as the $R_M(q)$ dependence is the main characteristic of the ideal memristor model [@chua71a]. While, in general, the memristors described by Eqs. (\[eq:model1\])-(\[eq:model2\]) are not ideal, their collective dynamics (in the situation considered in this work) keeps them operating in the ideal memristor mode during the network switching process. Moreover, we have identified the generalized switching synchronization condition (Eq. (\[Ri\_sinch\])). This condition extends the result of Ref. [@slipko15a] to the case of distributed threshold voltages. Eq. (\[Ri\_sinch\]) shows that the ratio of the memristance to the threshold voltage is the same for all memristors in the network at any moment of time (at longer times).
In other words, the voltage across any memristor stabilizes in the proximity of its threshold voltage and stays at this value until the switching ends. Such behavior can be explained by the collective feedback of the network. In order to demonstrate this feature graphically for a larger memristive network, we have performed numerical simulations of a network of $N=30$ memristors assuming flat distributions of switching rates and threshold voltages. According to Fig. \[fig3\](a), the switching synchronization cannot be recognized in the dynamics of the memristances, which evolve differently. The generalized switching synchronization phenomenon is clearly visible in Fig. \[fig3\](b), presenting the memristances normalized by the threshold voltages. Starting at $t\sim 200 \tau$, where $\tau=R_{on}/(\beta_0V_0)$, the normalized memristances decrease coherently at the same rate. Fig. \[fig3\] was obtained with the threshold voltages and switching rates of the memristors selected (with uniform distributions) from the intervals $[V_0-\delta V,V_0+\delta V]$ and $[\beta_0-\delta \beta,\beta_0+\delta \beta]$, respectively. The following set of parameter values was used: $R_{off}=100R_{on}$, $\delta V=0.2V_0$, $\delta \beta=0.3 \beta_0$, $V=1.1NV_0$. Clearly, the results presented in Fig. \[fig3\] confirm our analytical predictions. Conclusion ========== In conclusion, we have considered a simple 1D memristive network that nevertheless exhibits a very interesting switching synchronization phenomenon. Using a suitable change of variable and the method of the Laplace transform, we were able to solve a complex nonlinear problem analytically, which is surprising in itself, as exact analytical results are known for only a small number of nonlinear problems. Our analytical results are in agreement with numerical simulations. Acknowledgment {#acknowledgment .unnumbered} ============== This work has been partially supported by the Russian Science Foundation grant No. 15-13-20021.
--- abstract: 'Assuming that the particle with mass $\sim 126$ GeV discovered at LHC is the Standard Model Higgs boson, we find that the stability of the EW vacuum strongly depends on new physics interactions at the Planck scale $M_P$, despite the fact that they are higher-dimensional interactions, apparently suppressed by inverse powers of $M_P$. In particular, for the present experimental values of the top and Higgs masses, if $\tau$ is the lifetime of the EW vacuum, new physics can turn $\tau$ from $\tau \gg T_U$ to $\tau \ll T_U$, where $T_U$ is the age of the Universe, thus weakening the conclusions of the so-called meta-stability scenario.' author: - Vincenzo Branchina and Emanuele Messina title: 'Stability, Higgs Boson Mass and New Physics' --- [*Introduction.—*]{} When the particle with mass $\sim 126$ GeV discovered at LHC[@atlas; @cms] is identified with the Standard Model (SM) Higgs boson, serious and challenging questions arise. Among them is the vacuum stability issue. The Higgs effective potential $V_{eff}(\phi)$ bends down for values of $\phi$ larger than the EW minimum, an instability due to top-quark loop corrections. By requiring stability, lower bounds on the Higgs mass $M_H$ were found[@cab; @sher; @jones; @sher2; @alta; @quiro; @shapo]. ![In this picture we repeat the analysis of[@isido; @isiuno; @isidue], which is done in the absence of new interactions at the Planck scale. The $M_H-M_t$ plane is divided in three sectors: absolute stability, metastability and instability regions. The dot indicates $M_H\sim 126$ GeV and $M_t\sim 173.1$ GeV. []{data-label="bounn"}](PLANE.eps "fig:"){width=".44\textwidth"} A variation on this picture is the so-called metastability scenario[@sher; @isido; @isiuno; @isidue]. For $\phi$ much larger than $v$ (the location of the EW minimum), $V_{eff}(\phi)$ develops a new minimum at $\phi_{min}^{(2)}$.
When $M_H$ and $M_t$ are such that $V_{eff}(v) < V_{eff}(\phi_{min}^{(2)})$, the EW minimum is [*stable*]{}; otherwise, it is a false vacuum that should decay into the true vacuum (at $\phi_{min}^{(2)}$) in a finite amount of time. Depending on the values of $M_H$ and $M_t$, the lifetime $\tau$ of the EW vacuum can be larger or smaller than the age of the Universe $T_U$. For $\tau > T_U $, we may well live in the metastable EW minimum. This is the metastability scenario. The aim of this Letter is to study the influence of new physics interactions (at the Planck scale) on $\tau$. Tree-level and quantum-fluctuation contributions are taken into account. In this Letter, however, we limit ourselves to considering the quantum corrections from the Higgs sector only. This is sufficient to illustrate our main point. The complete analysis is left for a forthcoming paper. Let us begin with fig.\[bounn\], where we repeat the usual analysis[@isido; @isiuno; @isidue] and draw the phase diagram in the $M_H-M_t$ plane. The latter is divided into three different sectors: an [*absolute stability*]{} region ($V_{eff}(v) < V_{eff}(\phi_{min}^{(2)})$), a [*metastability*]{} region ($\tau > T_U $) and an [*instability*]{} region ($\tau < T_U $). The dashed line separates the stability and the metastability sectors and is obtained for $M_H$ and $M_t$ such that $V_{eff}(v) = V_{eff}(\phi_{min}^{(2)})$. The dashed-dotted line separates the metastability and the instability regions and is obtained for $M_H$ and $M_t$ such that $\tau = T_U $. For $M_t \sim 173.1$ GeV and $M_H \sim 126$ GeV, the SM lies within the metastability region. It is then concluded that the present experimental values of $M_H$ and $M_t$ allow for a Standard Model valid all the way up to the Planck scale. Let $V_{eff}(\phi)$ be normalized so as to vanish at $\phi=v$.
At a much larger value $\phi=\phi_{inst}$, $V_{eff}(\phi_{inst})$ vanishes again (for $M_H \sim 126$ GeV, $M_t \sim 173.1$ GeV, this happens at $\phi_{inst} \sim 10^{10}$ GeV). For $\phi > \phi_{inst}$, the potential becomes negative, later developing a new minimum. It is assumed that the actual behavior of $V_{eff}(\phi)$ for $\phi$ beyond $\phi_{inst}$ has no impact on $\tau$. More precisely, it is stated that even if $V_{eff}(\phi)$ at $\phi=M_P$ is still negative (and the new minimum forms at a scale much larger than $M_P$), new physics interactions around the Planck scale must stabilize the potential (eventually bringing the new minimum around $M_P$), while $\tau$ does not depend on the detailed form of $V_{eff}(\phi)$ beyond $\phi_{inst}$[@isido]. In this respect, it is worth noting that for $M_H \sim 126$ GeV and $M_t\sim 173.1$ GeV, not only is the effective potential at the Planck scale negative, but it also continues to go down beyond $M_P$. The new minimum is formed at $\phi_{min}^{(2)} \sim 10^{31}$ GeV (see fig.\[instab\]). Note also that the instability of the effective potential occurs at very large values of $\phi$ ($\phi_{inst} \sim 10^{10}$ GeV). In this range, $V_{eff}(\phi)$ is well approximated by keeping only the quartic term[@sher2]. Accordingly, following[@cole1; @cole2], the tunneling time $\tau$ is computed by considering the bounce solutions to the euclidean equation of motion for the potential $V(\phi)= \frac{\lambda}{4}\phi^4$ with negative $\lambda$, a good approximation in this range. ![For $M_H \sim 126$ GeV and $M_t\sim 173.1$ GeV, the running of $\lambda(\mu)$ as determined by SM interactions only (solid line) and in the presence of $\lambda_6$ and $\lambda_8$. Dashed-dotted line: $\lambda_6(M_P)=1$ and $\lambda_8(M_P)=0.5$. Dashed line: $\lambda_6(M_P)=-2$ and $\lambda_8(M_P)=2.1$. Clearly, the three lines coincide for values of $\mu$ below the Planck scale.
[]{data-label="lam"}](bilambda.eps "fig:"){width=".44\textwidth"} [*Lifetime of the EW vacuum.—*]{} In order to study the impact of new physics interactions at the Planck scale, we add two higher-dimensional operators $\phi^6$ and $\phi^8$ to the SM Higgs potential: $$\begin{aligned} \label{newpot} V(\phi)&=&\frac{\lambda}{4}\phi^4 +\frac{\lambda_6}{6}\frac{\phi^6}{M_P^2} +\frac{\lambda_8}{8}\frac{\phi^8}{M_P^4}\,\,.\end{aligned}$$ Naturally, we could also consider additional higher-dimensional operators. However, the examples we are going to study (with different choices of $\lambda_6$ and $\lambda_8$) are sufficient to illustrate some interesting cases we can face when new physics interactions at the Planck scale are considered. The influence of $\lambda_6$ and $\lambda_8$ on the RG flow of the quartic coupling $\lambda(\mu)$, for values of $\mu$ below $M_P$, is negligible (see fig.\[lam\]). The RG functions for the SM parameters at the two-loop level (with the corresponding boundary conditions) can be found, for instance, in[@isiuno; @riotto]. Further (slight) improvement is obtained by considering three-loop contributions[@isiuno; @isidue; @bla]. Let us now consider two different representative cases. For $\lambda_6 (M_P)=-2$ and $\lambda_8 (M_P)=2.1$, the potential is given by the dashed line of fig.\[instab\]. Due to the large range of scales involved, the plot is done in a double logarithmic scale. As $\lambda_6$ is negative, when $\phi$ approaches $M_P$, $V_{eff}^{new}(\phi)$, the renormalization group improved effective potential in the presence of $\lambda_6$ and $\lambda_8$, bends down much more steeply than $V_{eff}(\phi)$ and forms a new minimum at about $\phi_{min}^{(2)}\sim M_P$. This is clear from the zoom around the Planck scale in panel (b) of fig.\[instab\]. The second case we consider is when $\lambda_6$ and $\lambda_8$ are both positive. 
For $\lambda_6 (M_P)=1$ and $\lambda_8 (M_P)=0.5$, the potential is given by the dotted-dashed line of fig.\[instab\]. As $\lambda_6$ is positive, when $\phi$ approaches $M_P$ the potential $V_{eff}^{new}(\phi)$ lies above (rather than below) $V_{eff}(\phi)$. In both cases, the potential is stabilized at the Planck scale by new physics terms. However, it is commonly believed that, although such a stabilization has to take place, the presence of new physics interactions has no impact on the EW vacuum lifetime[@isido]. We shall see that this is not generically true. When $V_{eff}^{new}(\phi)$ lies above $V_{eff}(\phi)$, which in our example is realized with $\lambda_6(M_P) > 0$ and $\lambda_8(M_P) > 0$, $\tau$ is almost insensitive to the presence of these new terms. On the contrary, when $V_{eff}^{new}(\phi)$ lies below $V_{eff}(\phi)$, which in our example is realized with $\lambda_6(M_P) < 0$ and $\lambda_8(M_P) > 0$, $\tau$ strongly depends on new physics. The tunneling time $\tau$ is given by[@cole1; @cole2],[@isido]: $$\begin{aligned} \label{tunneling} \frac{1}{\tau} = T_U^{3}\,\frac{S[\phi_b]^2}{4\pi^2} \left|\frac{ {\rm det'}\left[-\partial^2+V''(\phi_b)\right]} {\mbox{det}\left[-\partial^2+V''(v)\right]}\right|^{-1/2} e^{-S[\phi_b]} \end{aligned}$$ where $\phi_b(r)$ is the $O(4)$ bounce solution to the euclidean equation of motion ($r^2=x_\mu x_\mu $), $S[\phi_b]$ the action for the bounce, $\left[-\partial^2+V''(\phi_b)\right]$ the fluctuation operator around the bounce ($V''$ is the second derivative of $V$ with respect to $\phi$). The prime in the ${\rm det^{'}}$ means that in the computation of the determinant the zero modes are excluded and $\frac{S[\phi_b]^2}{4\pi^2}$ comes from the translational zero modes. ![image](bipot.eps){width="75.00000%"}\ (a) ![image](bizoom2.eps){width="75.00000%"}\ (b) Let us compute $\tau$ for the potential of Eq.(\[newpot\]) with $\lambda_6(M_P) =-2$ and $\lambda_8(M_P) =2.1$. 
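Before turning to the full computation, a quick numerical sanity check on Eq.(\[newpot\]) for these couplings is instructive. The sketch below is a deliberate simplification: it holds $\lambda$ fixed at $-0.014$ (the value quoted below for $\lambda(\mu)$ at the relevant scale) instead of using the RG-improved running coupling, yet it already reproduces a deep new minimum just below $M_P$:

```python
# Locate the new minimum of Eq.(newpot) with lambda held constant
# (units M_P = 1; constant lambda = -0.014 is an illustrative assumption,
# not the RG-improved coupling used in the actual analysis).
lam, lam6, lam8 = -0.014, -2.0, 2.1

def V(phi):
    return lam/4*phi**4 + lam6/6*phi**6 + lam8/8*phi**8

# brute-force scan of phi in (0, 1.2]; fine enough for a sanity check
grid = [i * 1e-4 for i in range(1, 12001)]
phi_min = min(grid, key=V)
```

Even in this crude approximation the scan lands close to the value $0.979\,M_P$ quoted below, with $V(\phi_{min})<0$, because the $\phi^6$ and $\phi^8$ terms dominate the position of the Planck-scale minimum.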
When $V_{eff}(\phi)$ (the usual SM Higgs potential without new interaction terms) is computed within the $\overline {MS}$ scheme and the renormalization scale $\mu$ is chosen to coincide with the inverse of the bounce size $R_{max}$ that maximizes the tunneling probability[@isido], we have that $\mu=2\,R_{max}^{-1}\,e^{-\gamma_E} =1.32\,\cdot \, 10^{17}$ GeV ($\gamma_E$ is the Euler–Mascheroni constant) and the coupling constant $\lambda(\mu)$ is $\lambda(\mu)\simeq-0.014$. For our potential, we find that, up to the scale $\eta\simeq 0.780 M_P$, it is very well approximated by an upside-down quartic parabola, $V_{eff}^{new}(\phi)=\frac{\lambda_{eff}}{4}\phi^4$, with $\lambda_{eff}=\lambda+\frac{2}{3}\lambda_6\frac{\eta^2}{M_P^2} +\frac{1}{2}\lambda_8\frac{\eta^4}{M_P^4} \simeq -0.437$. For $\phi > \eta$, $V_{eff}^{new}(\phi)$ bends down very steeply (see fig.\[instab\](b)), eventually creating a new minimum very close to $M_P$: $\phi_{min}^{(2)}=0.979 M_P$. Therefore, for values of $\phi$ larger than (but close to) $\eta$, $\phi \gtrsim \eta$, the potential can be linearized and we get: $V(\phi)=\left[\frac{\lambda_{eff}}{4} \eta^4-\frac{\lambda_{eff} \eta^3}{\gamma} \left(\phi-\eta\right)\right]$, where $$\begin{aligned} \label{gamma} \gamma= -\, \lambda_{eff} \,\,\eta^3 \, \left(\lambda\eta^3 +\lambda_6\frac{\eta^5}{M_P^2}+\lambda_8\frac{\eta^7} {M_P^4}\right)^{-1}\,\, .\end{aligned}$$ Interestingly, in order to compute $\tau$, this is all we need to know[@wein]. Moreover, the parameter $\gamma$ plays an essential role in determining when new physics interactions influence $\tau$. 
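The numbers entering the linearized potential follow directly from the definitions above; a short check, in units $M_P = 1$ and with the inputs as quoted in the text:

```python
# Reproduce lambda_eff and gamma from the expressions in the text
# (units M_P = 1; lam(mu) ~ -0.014, eta ~ 0.780 as quoted).
lam, lam6, lam8, eta = -0.014, -2.0, 2.1, 0.780

# lambda_eff = lambda + (2/3) lambda_6 eta^2 + (1/2) lambda_8 eta^4
lam_eff = lam + (2.0/3.0)*lam6*eta**2 + 0.5*lam8*eta**4

# gamma = -lambda_eff eta^3 / (lambda eta^3 + lambda_6 eta^5 + lambda_8 eta^7)
gamma = -lam_eff * eta**3 / (lam*eta**3 + lam6*eta**5 + lam8*eta**7)
```

Both values land on the quoted $\lambda_{eff} \simeq -0.437$ and $\gamma \simeq -0.963$; note that $-1 < \gamma < 0$, so the bounce solution discussed below exists.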
The euclidean equation of motion admits the following bounce solution: $$\begin{aligned} \label{boun} \phi_b(r)=\left\{\begin{array}{cc} 2 \eta -\eta^2 \sqrt{\frac{|\lambda_{eff}|}{8}} \frac{r^2+\overline R^2}{\overline R} & \qquad 0<r<\overline r\\ \sqrt{\frac{8}{|\lambda_{eff}|}}\frac{\overline R}{r^2+\overline R^2} & \qquad r>\overline r \end{array}\right.\end{aligned}$$ where $$\label{erre} \overline r^2=\frac{8\gamma}{\lambda_{eff} \eta^2}(1+\gamma) \,\,\,\,\,\,\,\, , \,\,\,\,\,\,\,\, \overline R^2=\frac{8}{|\lambda_{eff}|}\frac{\gamma^2}{\eta^2}\,.$$ From Eq.$(\ref{erre})_1$ we see that the solution (\[boun\]) exists only for $-1 < \gamma < 0$. $\overline R$ is the size of the bounce (\[boun\]) and the action at $\phi_b$ is: $$\begin{aligned} \label{bounceone} S[\phi_b]= (1 - (\gamma + 1)^4)\,\,\frac{8\pi^2}{3|\lambda_{eff}|}\,.\end{aligned}$$ There are also other bounce solutions: $$\begin{aligned} \label{bouncetwo} \phi_b^{(2)}(r)=\sqrt{\frac{2}{|\lambda_{eff}|}}\frac{2\,R}{r^2+R^2}\end{aligned}$$ where $R$, the size of these bounces, can take any value in the range $\sqrt{\frac{8}{|\lambda_{eff}|}}\frac{1}{\eta} < R < \infty$. A numerical analysis (presented in detail in a forthcoming paper) shows that these latter solutions are related to the approximation that we are considering for $V(\phi)$. Nevertheless, for $|\phi| \ll M_P$ (which in turn means very large values of $R$), configurations of the kind given in Eq.(\[bouncetwo\]), with $\lambda_{\rm eff}$ replaced with $\lambda$, are good approximate solutions of the euclidean equation of motion, and, in principle, should be taken into account in the computation of $\tau$. 
Their action is degenerate with $R$ and is $$\begin{aligned} \label{newactione} S=\frac{8\pi^2}{3|\lambda|}\,.\end{aligned}$$ However, even if for a moment we limit ourselves to the tree level contribution only, from Eqs.(\[tunneling\]), (\[bounceone\]), and (\[newactione\]) we see that for those values of $\gamma$ such that the solution (\[boun\]) exists ($-1<\gamma<0$), the contribution to the tunneling probability coming from the bounces (\[bouncetwo\]) (with $\lambda_{\rm eff}$ replaced with $\lambda$) is exponentially suppressed with respect to the contribution of (\[boun\]). For $\lambda$, $\lambda_6(M_P)$, $\lambda_8(M_P)$ and $\eta$ given above we have: $\gamma\simeq -0.963$. Let us now compute the fluctuation determinant in Eq.(\[tunneling\]) for the bounce (\[boun\]) and for $\lambda_6(M_P)=-2$ and $\lambda_8(M_P)=2.1$, which is the case of interest for us. Due to radial symmetry, $V''(\phi_b)$ in $\left[-\partial^2+V''(\phi_b)\right]$ only depends on $r$. Following [@yaglom], the logarithm of the fluctuation determinant is obtained (see below for some specifications) as: $$\begin{aligned} \label{yaglom} \ln\left(\frac{{\rm det'}\left[-\partial^2+V''(\phi_b)\right]} {\mbox{det}\left[-\partial^2+V''(v)\right]}\right)^{1/2}= \frac{1}{2}\sum_{l= 0}^{\infty} (l+1)^2\ln\rho_l\,,\qquad \rho_l =\lim_{r\to\infty} \rho_l(r)\,,\end{aligned}$$ where each $\rho_l(r)$ is the solution of the differential equation: $$\begin{aligned} \label{yagloeq} \rho_l''(r)+\frac{\left(2l+d-1\right)} {r}\rho_l'(r)-V''(\phi_b(r))\rho_l(r)=0\end{aligned}$$ with boundary conditions $\rho_l(0)=1$ and $\rho_l'(0)=0$ ($\rho_l''(r)$ is the second derivative of $\rho_l(r)$ with respect to $r$, and $d=4$ is the number of euclidean dimensions). Eq.(\[yaglom\]) is ill defined in three respects. The eigenvalue related to $l=0$ is a negative mode ($\rho_0<0$), while the $l=1$ modes correspond to the four translational zero modes. We exclude the $l=0$ and $l=1$ modes from the above sum. They can be separately treated in a standard way[@yaglom; @dunne]. Finally, the sum in Eq.(\[yaglom\]) is divergent. This is the usual UV divergence and we know how to take care of it through renormalization[@dunne]. Let us now consider Eq.(\[yagloeq\]) for $l>1$. 
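Equation (\[yagloeq\]) is a standard Gelfand–Yaglom radial problem: integrate from $r=0$ with $\rho_l(0)=1$, $\rho_l'(0)=0$ and read off the large-$r$ value. The sketch below implements such an integrator and validates it on the exactly solvable free massive case $V''=m^2$, where for $d=4$, $l=0$ one has $\rho_0(r)=2I_1(mr)/(mr)$; the actual bounce background of the paper is not reproduced here:

```python
import math

def gelfand_yaglom(Vpp, l=0, d=4, r_max=2.0, n=4000):
    """Integrate rho'' + (2l+d-1)/r rho' - Vpp(r) rho = 0,
    rho(0)=1, rho'(0)=0, by RK4; returns rho(r_max)."""
    r = 1e-6                                # start from the small-r series
    a = Vpp(r) / (2.0 * (2*l + d))          # rho ~ 1 + a r^2 near r = 0
    y, dy = 1.0 + a*r*r, 2.0*a*r
    h = (r_max - r) / n
    f = lambda r, y, dy: (dy, Vpp(r)*y - (2*l + d - 1)/r*dy)
    for _ in range(n):
        k1 = f(r, y, dy)
        k2 = f(r + h/2, y + h/2*k1[0], dy + h/2*k1[1])
        k3 = f(r + h/2, y + h/2*k2[0], dy + h/2*k2[1])
        k4 = f(r + h, y + h*k3[0], dy + h*k3[1])
        y += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dy += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        r += h
    return y

def bessel_I(nu, x, terms=40):
    # power series for the modified Bessel function I_nu(x)
    return sum((x/2.0)**(2*k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

rho_num = gelfand_yaglom(lambda r: 1.0)          # V'' = m^2 with m = 1
rho_exact = 2.0 * bessel_I(1, 2.0) / 2.0         # 2 I_1(m r)/(m r) at r = 2
```

The same routine, given $V''(\phi_b(r))$ on the bounce background and run to large $r_{max}$ for each $l$, yields the $\rho_l$ entering Eq.(\[yaglom\]).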
We can easily solve this equation numerically for each value of $l$ (for increasing $l$, the $\rho_l$’s rapidly converge to one). Following[@dunne], the $\overline {MS}$ renormalized sum in Eq.(\[yaglom\]) is given by: $$\begin{aligned} \label{renorm} &&\left[\frac{1}{2}\sum_{l>1}^{\infty} (l+1)^2\ln \rho_l\right]_{r}= \frac{1}{2}\sum_{l>1}^{\infty}(l+1)^2\ln\rho_l\nonumber\\ -&\frac{1}{2}& \sum_{l=0}^{\infty}(l+1)^2\left[\frac{\int^{\infty}_0 dr r V''}{2(l+1)}-\frac{\int^{\infty}_0 dr r^3 (V'')^2}{8(l+1)^3}\right]\nonumber\\ -&\frac{1}{8}&\int^{\infty}_0 dr r^3 (V'')^2 \left[\ln\left(\frac{\mu r}{2}\right)+\gamma_E+1\right]\,,\end{aligned}$$ where $\mu$ is the renormalization scale. We then get: $$\begin{aligned} \label{ya} \left[\frac{1}{2}\sum_{l>1}^{\infty} (l+1)^2\ln \rho_l\right]_{r}= -2.49 - 5.27\,\, \ln \left(\frac{1.48\mu}{M_P}\right)\, . \,\, \end{aligned}$$ This result is obtained by truncating the sum at a value of $l$ where it shows saturation (standard renormalization procedure). Strictly speaking, the “angular momentum” cutoff $L$ in this sum is given by $L={\overline R} M_P$, which, from Eq.$(\ref{erre})_2$, is $L \sim 5$. However, the series in Eq.(\[renorm\]) converges very fast. Even truncating it at $l=5$ we get a difference of less than 3 percent from the result of Eq.(\[ya\]). The standard renormalization procedure is then well justified. For $l=0$, $\rho_0$ has to be replaced with its absolute value[@dunne]. Solving Eq.(\[yagloeq\]), we find that its contribution to the sum in Eq.(\[yaglom\]) is: $\frac12\ln |\rho_0| = -0.806$. Finally, the contribution of the zero modes ($l=1$) is also obtained in a standard manner[@dunne]. The solution of Eq.(\[yagloeq\]) for $l=1$ vanishes in the $r \to \infty$ limit: $\rho_1=0$. Hence, $\rho_1$ has to be replaced with $\rho_1'$, defined as: $$\rho_1'=\lim_{k\to 0} \frac{\rho_1^{k}}{k^2}\,,$$ where $\rho_1^{k}$ is obtained by solving Eq.(\[yagloeq\]) with $V''(\phi_b)$ replaced by $V''(\phi_b) + k^2$. 
Note that $\rho'_1$ has the dimension of a length squared and is given in terms of $\overline R$, the size of the bounce (\[boun\]). The zero-mode contribution to the sum in Eq.(\[yaglom\]) finally is: $\frac12\,\, 4\,\,\ln \rho'_1 = 2 \,\ln\,(0.0896 \overline R^2)$. For the purposes of comparing our results (from $V_{eff}^{new}(\phi)$) with those obtained with $V_{eff}(\phi)$, it is useful to choose the same renormalization scale as before, $\mu=1.32\,\cdot\, 10^{17}$ GeV. Then, collecting the different results, from Eq.(\[tunneling\]) we find: $$\label{newtau} \tau = 5.45 \,\cdot\, 10^{-212} \,\, T_U\,,$$ a ridiculously small fraction of a second! This result is at odds with fig.\[bounn\], where for $M_H\sim 126$ GeV and $M_t\sim 173.1$ GeV the EW vacuum lies inside the metastability region, close to the stability line; it shows that the phase diagram of fig.\[bounn\] has to be reconsidered. Indeed, when the EW vacuum lifetime for these values of $M_H$ and $M_t$ is computed in the absence of new physics interactions, we have: $$\tau = 1.49 \,\cdot\, 10^{714} \,\, T_U\,. \label{tauloro}$$ Accordingly, the EW vacuum would be an extremely long-lived metastable state. This is why it is often stated that, for the present experimental values of $M_H$ and $M_t$, the SM is an effective theory that is valid all the way up to the Planck scale. Eq.(\[newtau\]) shows that this is not generically true. As a result of the presence of new physics interactions, the EW vacuum may turn from a very long-lived metastable state into a highly unstable one. As we have already seen, in fact, when $V_{eff}^{new}(\phi)$ lies above $V_{eff}(\phi)$, $\tau$ is not dramatically affected by new physics. On the contrary, when $V_{eff}^{new}(\phi)$ lies below $V_{eff}(\phi)$, the $UV$ completion of the Standard Model has a very strong impact on $\tau$, turning it from $\tau \gg T_U$ to $\tau \ll T_U$. [*Conclusions and outlook.—*]{} In this Letter we show that the lifetime $\tau$ of the EW vacuum strongly depends on new physics. 
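Most of the gap between Eq.(\[newtau\]) and Eq.(\[tauloro\]) is already visible at tree level, where $\ln\tau$ is controlled by the bounce action. A rough estimate (prefactors and fluctuation determinants ignored, so the figure is indicative only):

```python
import math

# Tree-level bounce actions: SM quartic alone vs. the linearized potential
S_sm = 8*math.pi**2 / (3*0.014)                      # |lambda(mu)| ~ 0.014
gamma, lam_eff = -0.963, -0.437                      # values quoted in the text
S_new = (1 - (gamma + 1)**4) * 8*math.pi**2 / (3*abs(lam_eff))

# change in log10(tau) coming from the exponential factor alone
dlog10 = (S_sm - S_new) / math.log(10.0)
```

In this crude estimate the exponent alone accounts for roughly $10^{790}$ of the change in $\tau$; the remaining orders of magnitude separating Eqs.(\[newtau\]) and (\[tauloro\]) come from the prefactor and the determinant factors of Eq.(\[tunneling\]).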
The metastability scenario (which is based on the assumption that $\tau$ does not depend on new physics) and the whole phase structure of fig.\[bounn\] have to be entirely reconsidered. Clearly, when the quantum fluctuations from other sectors of the $SM$ are taken into account, the specific value of $\tau$ in Eq.(\[newtau\]) is modified. However, this does not change the core result of the present analysis, namely the huge influence of new physics on $\tau$. A very important outcome of our result is that it poses constraints on possible candidates for the $UV$ completion of the $SM$. In this respect, we note also that a similar analysis can be done when the new physics scale lies below (even much below) the Planck scale. Finally, we note that the considerations developed in this Letter should be relevant for related scenarios, such as a Higgs potential with two degenerate minima[@niel] and Higgs-driven inflation scenarios[@ber; @infla]. In all of these cases, in fact, the physical scale relevant to the mechanism involved is dangerously close to the Planck scale and we expect high sensitivity to new physics interactions. [99]{} ATLAS Collaboration, Phys. Lett. B710 (2012) 49. CMS Collaboration, Phys. Lett. B710 (2012) 26. N. Cabibbo, L. Maiani, G. Parisi, R. Petronzio, Nucl. Phys. B158 (1979) 295. R.A. Flores, M. Sher, Phys. Rev. D27 (1983) 1679; M. Lindner, Z. Phys. 31 (1986) 295; M. Sher, Phys. Rep. 179 (1989) 273; M. Lindner, M. Sher, H. W. Zaglauer, Phys. Lett. B228 (1989) 139. C. Ford, D.R.T. Jones, P.W. Stephenson, M.B. Einhorn, Nucl. Phys. B395 (1993) 17. M. Sher, Phys. Lett. B317 (1993) 159. G. Altarelli, G. Isidori, Phys. Lett. B337 (1994) 141. J.A. Casas, J.R. Espinosa, M. Quirós, Phys. Lett. B342 (1995) 171; Phys. Lett. B382 (1996) 374. F. Bezrukov, M. Yu. Kalmykov, B. A. Kniehl, M. Shaposhnikov, JHEP 1210 (2012) 140. G. Isidori, G. Ridolfi, A. Strumia, Nucl. Phys. B609 (2001) 387. J. Elias-Miro, J.R. Espinosa, G.F. Giudice, G. Isidori, A. Riotto, A. 
Strumia, Phys. Lett. B709 (2012) 222. G. Degrassi, S. Di Vita, J. Elias-Miro, J.R. Espinosa, G.F. Giudice, G. Isidori, A. Strumia, JHEP 1208 (2012) 098. S. Coleman, Phys. Rev. D15 (1977) 2929. C. G. Callan, S. Coleman, Phys. Rev. D16 (1977) 1762. J.R. Espinosa, G. F. Giudice, A. Riotto, JCAP 0805 (2008) 002. L.N. Mihaila, J. Salomon and M. Steinhauser, Phys. Rev. Lett. 108 (2012) 151602; K. Chetyrkin and M. Zoller, JHEP 06 (2012) 033. K. Lee, E.J. Weinberg, Nucl. Phys. B267 (1986) 181. G. Dunne, J. Phys. A: Math. Theor. 41 (2008) 304006. G.V. Dunne, H. Min, Phys. Rev. D72, (2005) 125004; J.Phys.A39 (2006) 11915. D.L. Bennett, H.B. Nielsen and I. Picek, Phys. Lett. B 208 (1988) 275; C. D. Froggatt and H. B. Nielsen, Phys. Lett. B 368 (1996) 96; C.D. Froggatt, H. B. Nielsen, Y. Takanishi, Phys.Rev. D64 (2001) 113014. F.L. Bezrukov, M. Shaposhnikov, Phys.Lett. B659 (2008) 703; JHEP 0907 (2009) 089; F.L. Bezrukov, A. Magnin, M. Shaposhnikov, Phys.Lett. B675 (2009) 88. G. Isidori, V. S. Rychkov, A. Strumia, N. Tetradis, Phys. Rev. D77 (2008) 025034; I. Masina, A. Notari, Phys.Rev. D85 (2012) 123506.
--- abstract: 'The magnetic states of the non-centrosymmetric, pressure induced superconductor CeCoGe$_3$ have been studied with magnetic susceptibility, muon spin relaxation ($\mu$SR), single crystal neutron diffraction and inelastic neutron scattering (INS). CeCoGe$_3$ exhibits three magnetic phase transitions at $T_{\rm{N1}}$ = 21 K, $T_{\rm{N2}}$ = 12 K and $T_{\rm{N3}}$ = 8 K. The presence of long range magnetic order below $T_{\rm{N1}}$ is revealed by the observation of oscillations of the asymmetry in the $\mu$SR spectra between 13 K and 20 K and a sharp increase in the muon depolarization rate. Single crystal neutron diffraction measurements reveal magnetic Bragg peaks consistent with propagation vectors of **k** = (0,0,$\frac{2}{3}$) between $T_{\rm{N1}}$ and $T_{\rm{N2}}$, **k** = (0,0,$\frac{5}{8}$) between $T_{\rm{N2}}$ and $T_{\rm{N3}}$ and **k** = (0,0,$\frac{1}{2}$) below $T_{\rm{N3}}$. An increase in intensity of the (1 1 0) reflection between $T_{\rm{N1}}$ and $T_{\rm{N3}}$ also indicates a ferromagnetic component in these phases. These measurements are consistent with an equal moment, two-up, two-down magnetic structure below $T_{\rm{N3}}$, with a magnetic moment of 0.405(5) $\rm{\mu_B}$/Ce. Above $T_{\rm{N2}}$, the results are consistent with an equal moment, two-up, one-down structure with a moment of 0.360(6) $\rm{\mu_B}$/Ce. INS studies reveal two crystal-field (CEF) excitations at $\sim$ 19 and $\sim$ 27 meV. From an analysis with a CEF model, the wave-functions of the J = $\frac{5}{2}$ multiplet are evaluated along with a prediction for the magnitude and direction of the ground state magnetic moment. Our model correctly predicts that the moments order along the $c$ axis but the observed magnetic moment of 0.405(5) $\rm{\mu_B}$ is reduced compared to the predicted moment of 1.01 $\rm{\mu_B}$. This is ascribed to hybridization between the localized Ce$^{3+}$ f-electrons and the conduction band. 
This suggests that CeCoGe$_3$ has a degree of hybridization between that of CeRhGe$_3$ and the non-centrosymmetric superconductor CeRhSi$_3$.' author: - 'M. Smidman' - 'D. T. Adroja' - 'A. D. Hillier' - 'L. C. Chapon' - 'J. W. Taylor' - 'V. K. Anand' - 'R. P. Singh' - 'M. R. Lees' - 'E. A. Goremychkin' - 'M. M. Koza' - 'V. V. Krishnamurthy' - 'D. M. Paul' - 'G. Balakrishnan' bibliography: - 'CeCoGe3\_final.bib' title: 'Neutron scattering and muon spin relaxation measurements of the non-centrosymmetric antiferromagnet CeCoGe$_3$ ' --- Introduction ============ The coexistence of superconductivity (SC) and magnetism in heavy fermion (HF) compounds has attracted considerable research interest recently. In particular, several HF systems appear to exhibit unconventional SC close to a quantum critical point (QCP). On tuning the electronic ground state of these systems by doping, pressure or the application of magnetic fields, the SC appears in regions where the magnetic order is being suppressed. [@NatureMagSC; @PfleidererRMP] There is great interest therefore in understanding this phenomenon and in particular the role of magnetic fluctuations in potentially mediating the SC of these compounds. Most of the compounds which display HF SC have centrosymmetric crystal structures, in which the Cooper pairs condense in either spin-singlet or spin-triplet states. However, several cerium based compounds with non-centrosymmetric structures have been recently reported to exhibit SC. The first HF NCS reported was CePt$_3$Si, where antiferromagnetic (AFM) order ($T_{\rm{N}}$ = 2.2 K) and SC ($T_{\rm{c}}$ = 0.75 K) coexist at ambient pressure. [@CePt3Si2004] In non-centrosymmetric superconductors (NCS), a finite antisymmetric spin-orbit coupling (ASOC) lifts the spin degeneracy of the conduction bands, allowing for the mixture of spin singlet and triplet pairing states. 
[@NCSGorkov] We report results of neutron scattering and muon spin relaxation ($\mu$SR) measurements of the NCS CeCoGe$_3$. This is a member of the Ce$TX_3$ (T = transition metal, X = Si or Ge) series of compounds which crystallize in the non-centrosymmetric, tetragonal BaNiSn$_3$ type structure (space group $I4mm$). In particular, the lack of a mirror plane perpendicular to \[0 0 1\] leads to a Rashba type ASOC. [@BauerNCS] CeCoGe$_3$ orders antiferromagnetically at ambient pressure, with three magnetic phases ($T_{\rm{N1}}$ = 21 K, $T_{\rm{N2}}$ = 12 K, $T_{\rm{N3}}$ = 8 K).[@CeCoGe31993; @CeCoGe32005] $T_{\rm{N1}}$ decreases as a function of applied pressure and there is an onset of SC for $p~>$ 4.3 GPa, with a $T_{\rm{c}}$ of 0.7 K at 5.5 GPa. [@CeCoGe3SC] SC is also observed in CeRhSi$_3$ ($p~>$ 1.2 GPa) [@CeRhSi3SC], CeIrSi$_3$ ($p~>$ 1.8 GPa) [@CeIrSi3SC] and CeIrGe$_3$ ($p~>$ 20 GPa). [@CeIrGe3SC] The superconducting states of these compounds display highly unconventional properties. As well as regions of coexistence with AFM order, the upper critical field is highly anisotropic, vastly exceeding the Pauli limiting field along the $c$ axis. [@CeCoGe3Hc2] However some members of the Ce$TX_3$ family such as CeCoSi$_3$ and CeRuSi$_3$ do not order magnetically and are intermediate valence compounds. [@CeCoGe31998; @CeCoSi32007] The range of observed magnetic properties in the Ce$TX_3$ series has previously been discussed in the context of the Doniach phase diagram [@BauerNCS; @Doniach1977; @HFSC2007; @CeTX32008], with competition between the intersite Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction which favors magnetic ordering and the on-site Kondo effect which leads to a non-magnetic singlet ground state. However, further studies are necessary to characterize the magnetic states of the Ce$TX_3$ series. 
Knowledge of the magnetic ground states and crystal electric field (CEF) levels will aid in understanding the relationship between SC and magnetism in the Ce$TX_3$ compounds and allows detailed comparisons between members of the series. In particular, the role of hybridization in determining the phase diagram can be examined. CeCoGe$_3$ can be considered a strongly correlated system with an electronic specific heat coefficient $\gamma~=32~\rm{mJ~mol^{-1}~K^{-2}}$ and an enhanced cyclotron mass of $10 m_{\rm e}$, where $m_{\rm e}$ is the free electron mass. [@CeCoGe32005; @CeCoGe3FS] The proximity of the compound to quantum criticality has been studied in the CeCoGe$_{3-x}$Si$_x$ system, where the substitution of Si increases the chemical pressure. Interestingly whilst antiferromagnetism is suppressed for $x~=~1.2$ and a quantum critical region with non-Fermi liquid behaviour is observed for $1~<~x~<~1.5$, no SC was reported down to 0.5 K. [@CeCoGe31998; @CeCoGe32002] This is in contrast to the superconducting behavior observed for the $x~=~0$ compound with applied hydrostatic pressure. As well as being an unconventional superconductor [@CeCoGe3Hc2], CeCoGe$_3$ also has the highest magnetic ordering temperature ($T_{\rm{N1}}$ = 21 K) of any of the Ce$TX_3$ compounds and exhibits a complex temperature-pressure phase diagram. [@CeCoGe3HP; @CeCoGe3LP] Specific heat measurements of single crystals reveal that under a pressure of $p~=~0.8$ GPa, a fourth transition is observed at 15.3 K in addition to those observed under ambient conditions. [@CeCoGe3HP] The temperature of this transition does not shift with pressure whilst $T_{\rm{N1}}$ is suppressed until it meets the pressure induced phase at $p~=~1.5$ GPa. In turn, the transition temperature of this phase is suppressed upon further increasing pressure until it merges with $T_{\rm{N2}}$. The $T~-~P$ phase diagram shows a series of step-like decreases in the magnetic ordering temperature. 
A total of six phases in the phase diagram were suggested from single crystal measurements up to 7 GPa, whilst eight were observed in polycrystalline samples up to 2 GPa. [@CeCoGe3HP] The magnetic order is suppressed at p = 5.5 GPa and there is a region of coexistence with SC. The lack of step-like transitions above 3.1 GPa could indicate a change in magnetic structure which may be important for understanding the emergence of SC in the system. The magnetic structure of CeCoGe$_3$ has previously been studied at ambient pressure using single crystal neutron diffraction in zero field where two propagation vectors were observed at 2.9 K, **k$_1$** = (0,0,$\frac{1}{2}$) and **k$_2$** = (0,0,$\frac{3}{4}$). [@CeCoGe3SCND] Powder neutron diffraction measurements also indicate the presence of **k$_1$** at 2 K. [@CeCoGe3PND] In this study, we have determined the magnetic propagation vector in zero field for each of the three magnetic phases using single crystal neutron diffraction. We are then able to propose magnetic structures for the phases above $T_{\rm{N2}}$ and below $T_{\rm{N3}}$. We report the temperature dependence of magnetic Bragg reflections from 2 - 35 K. The presence of long range magnetic order is also revealed by $\mu$SR measurements, where oscillations are observed in the spectra below $T_{\rm{N1}}$. Single crystal susceptibility and magnetization measurements were previously used to suggest a CEF scheme with a ground state doublet consisting of the $ |\pm \frac{1}{2} \rangle$ states. [@CeCoGe32005] We use INS to directly measure transitions from the ground state to the excited CEF levels and are able to find an energy level scheme and a set of wave functions compatible with both INS and magnetic susceptibility measurements. We are also able to compare the degree of hybridization in CeCoGe$_3$ with other compounds in the series. 
Experimental Details ==================== Polycrystalline samples of CeCoGe$_3$ and LaCoGe$_3$ were prepared by arc-melting the constituent elements (Ce : 99.99%, La : 99.99%, Co : 99.95%, Ge : 99.999%) in an argon atmosphere on a water cooled copper hearth. After being flipped and remelted several times, the boules were wrapped in tantalum foil and annealed at 900 $^{\circ}$C for a week under a dynamic vacuum, better than 10$^{-6}$ Torr. Powder X-ray diffraction measurements were carried out using a Panalytical X-Pert Pro diffractometer. Single crystals were grown by melting polycrystalline material with a bismuth flux following the previously reported technique [@CeCoGe32005]. Plate-like single crystals were obtained with faces perpendicular to \[0 0 1\] and checked using an X-ray Laue imaging system. Excess bismuth was removed by washing the crystals with a 1 : 1 solution of nitric acid. The correct stoichiometry of the crystals was confirmed by scanning electron microscopy measurements. Magnetic susceptibility measurements were made using a Quantum Design MPMS SQUID magnetometer. Inelastic neutron scattering and $\mu$SR measurements were performed at the ISIS facility at the Rutherford Appleton Laboratory, UK. INS measurements were carried out on the MARI and MERLIN spectrometers. The samples were wrapped in thin Al-foil and mounted inside a thin-walled cylindrical Al-can, which was cooled down to 4.5 K inside a closed cycle refrigerator (CCR) with He-exchange gas around the samples. Incident energies of 10 and 40 meV were used on MARI whilst 15 meV was used on MERLIN, selected via a Fermi chopper. Further low energy INS measurements were carried out on the IN6 spectrometer at the Institut Laue-Langevin, France, with an incident energy of 3.1 meV. $\mu$SR measurements were carried out on the MuSR spectrometer with the detectors in the longitudinal configuration. 
Spin-polarized muon pulses were implanted into the sample and positrons from the resulting decay were collected in positions either forward or backward of the initial muon spin direction. The asymmetry is calculated by $$G_z(t) = \frac{N_F - \alpha N_B}{N_F + \alpha N_B}\,,$$ where $N_F$ and $N_B$ are the number of counts at the detectors in the forward and backward positions and $\alpha$ is a constant determined from calibration measurements made in the paramagnetic state with a small applied transverse magnetic field. The maximum asymmetry for an ideal pair of detectors is $\frac{1}{3}$ but this is lower for a real spectrometer. [@MuonRef2011] The sample was mounted on a silver plate using GE varnish and cooled in a standard cryostat down to 1.5 K, with He exchange gas around the sample. Single crystal neutron diffraction measurements were carried out on the D10 instrument at the Institut Laue-Langevin, France. The sample was mounted on an aluminium pin and cooled in a helium-flow cryostat operating down to 2 K. The instrument was operated in the four-circle configuration. An incident wavelength of 2.36 $\rm{\AA}$ was selected using a pyrolytic graphite monochromator. A vertically focused pyrolytic graphite analyzer was used to reduce the background signal. After passing through the analyzer, neutrons were detected using a single $^3$He detector. Results and discussion ====================== ![X-ray powder diffraction measurements of polycrystalline CeCoGe$_3$ and LaCoGe$_3$. 
The solid lines show the Rietveld refinements, the results of which are given in Table \[XRDTab\].[]{data-label="XRD"}](Fig1.eps){width="0.99\columnwidth"}

|                 | CeCoGe$_3$  | LaCoGe$_3$ |
|-----------------|-------------|------------|
| $a~(\rm{\AA})$  | 4.32042(4)  | 4.35083(7) |
| $c~(\rm{\AA})$  | 9.83484(11) | 9.87155(2) |
| $R_{\rm{wp}}$   | 10.33       | 8.86       |

| Compound    | Site | $x$ | $y$ | $z$       |
|-------------|------|-----|-----|-----------|
| CeCoGe$_3$  | Ce   | 0   | 0   | 0         |
|             | Co   | 0   | 0   | 0.666(7)  |
|             | Ge1  | 0   | 0   | 0.4281(6) |
|             | Ge2  | 0   | 0.5 | 0.7578(5) |
| LaCoGe$_3$  | La   | 0   | 0   | 0         |
|             | Co   | 0   | 0   | 0.6628(7) |
|             | Ge1  | 0   | 0   | 0.4285(6) |
|             | Ge2  | 0   | 0.5 | 0.7556(5) |

: Results of the refinements of powder x-ray diffraction measurements on CeCoGe$_3$ and LaCoGe$_3$. The lattice parameters, weighted profile factor ($\rm{R_{wp}}$) and the atomic positions are displayed.[]{data-label="XRDTab"}

Powder X-ray diffraction ------------------------ Powder X-ray diffraction measurements were carried out on polycrystalline samples of CeCoGe$_3$ and the isostructural non-magnetic LaCoGe$_3$ at 300 K. A Rietveld refinement was carried out on both samples using the TOPAS software. [@TOPAS] The data and refinement are shown in Fig. \[XRD\]. One small impurity peak was detectable in CeCoGe$_3$ ($\sim$ 1% of the intensity of the maximum sample peak) whilst none were observed in LaCoGe$_3$, indicating that the samples are very nearly single phase. The site occupancies were all fixed at 100$\%$. The results of the refinements are displayed in Table \[XRDTab\]. The values of the lattice parameters are in agreement with previously reported values. [@CeCoGe31993; @CeCoGe3PND] Muon spin relaxation -------------------- ![$\mu$SR spectra measured at three temperatures. At 19 K, two frequencies could be observed whilst at 15 K only one frequency was observed. At 1.4 K no oscillations in the spectra were observed. The solid lines show the fits as described in the text.[]{data-label="specT"}](Fig2.eps){width="0.8\columnwidth"} ![$\mu$SR spectra measured at 20 and 21 K. 
At 20 K one frequency is observed in the spectrum and the initial asymmetry is reduced whilst at 21 K no oscillations are observed and the initial asymmetry reaches the full value for the instrument. The solid lines show the fits as described in the text.[]{data-label="specTn"}](Fig3.eps){width="0.8\columnwidth"} ![(a) The muon depolarization rate as a function of temperature. (b) The internal fields deduced from the frequencies of the oscillations observed in zero-field $\mu$SR spectra. The solid curve is a fit of $B_1$ to a mean field model described in the text.[]{data-label="BFields"}](Fig4.eps){width="0.99\columnwidth"} To investigate the nature of magnetic ordering in CeCoGe$_3$, we measured the zero-field muon spin relaxation of a polycrystalline sample. In the range 13 K $<$ $T$ $<$ 20 K, oscillations of the asymmetry are observed in the $\mu$SR spectra, indicating the presence of long-range magnetic order (Fig. \[specT\] and \[specTn\]). The presence of an oscillation at 20 K (Fig. \[specTn\]) as well as a reduced initial asymmetry indicates that the system is ordered at 20 K. However at 21 K, no oscillations are observed and the initial asymmetry reaches the full value for the instrument, indicating that $T_{\rm{N1}}$ lies between 20 and 21 K. The spectra were fitted with $$G_z(t) = \sum_{i=1}^{n} A_i \cos(\gamma_\mu B_i t+\phi)e^{- \frac{(\sigma_i t)^2}{2}} + A_0e^{-\lambda t} + A_{\rm{bg}}\,,$$ where the $A_{\rm{i}}$ are the amplitudes of the oscillatory components, $A_0$ is the initial amplitude of the exponential decay, $B_{\rm{i}}$ are the magnetic fields at the muon sites $\rm{i}$, $\sigma_{\rm{i}}$ are the Gaussian decay rates, $\lambda$ is the muon depolarization rate, $\phi$ is the common phase, $\rm{{\gamma_\mu}}/{2\pi}$ = 135.53 MHz T$^{-1}$ and $A_{\rm{bg}}$ is the background. 
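The fitting function can be written out directly; a minimal sketch, with $t$ in $\mu$s and $B$ in tesla (the parameter values below are hypothetical, chosen only to illustrate the scales involved):

```python
import math

GAMMA_MU = 2 * math.pi * 135.53   # rad us^-1 T^-1  (gamma_mu / 2pi = 135.53 MHz/T)

def G_z(t, osc, A0, lam, phi, A_bg):
    """osc: list of (A_i, B_i, sigma_i) oscillatory components."""
    s = sum(A * math.cos(GAMMA_MU * B * t + phi) * math.exp(-0.5 * (sig * t)**2)
            for A, B, sig in osc)
    return s + A0 * math.exp(-lam * t) + A_bg

# one-component example: B ~ 889 G = 0.0889 T -> precession at ~12 MHz,
# i.e. an oscillation period of ~0.08 us (illustrative amplitudes)
params = dict(osc=[(0.10, 0.0889, 5.0)], A0=0.05, lam=0.2, phi=0.0, A_bg=0.02)
```

At $t=0$ the model reduces to $\sum_i A_i\cos\phi + A_0 + A_{\rm bg}$, which is the initial asymmetry discussed above; the field $B_i$ sets the oscillation frequency through $\gamma_\mu B_i$.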
All the oscillatory spectra could be fitted with one internal magnetic field ($n~=~1$) apart from at 19 K, when they were fitted with two internal magnetic fields ($n~=~2$). This implies that there are at least two muon sites, but below 19 K it is likely that $B_2$ exceeds the maximum internal field detectable on the MuSR spectrometer due to the pulse width of the ISIS muon beam. Below 13 K the spectra were fitted with just an exponential decay term. The temperature dependence of one of the internal fields was fitted with $$B(T) = B(0)\left(1-\left(\frac{T}{T_N}\right)^\alpha \right)^\beta$$ With $\beta$ fixed at 0.5 for a mean field magnet, values of $B(0)$ = 889(16) G, $\alpha$ = 4.7(4) and $T_{\rm{N}}$ = 20.12(8) K were obtained (Fig. \[BFields\]). A good fit with $\beta$ = 0.5 means the observations are consistent with those of a mean field magnet. The large value of $\alpha$ indicates complex interactions between the magnetic moments. It was also possible to fit the data with $\beta$ = 0.367 and 0.326 for 3D Heisenberg and Ising models respectively. [@Blundell] However, fits with both these values of $\beta$ gave values of $T_{\rm{N}}~<~20$ K and poor fits were obtained for $T_{\rm{N}}~>~20$ K. Since the presence of long-range magnetic order has been observed at 20 K (Fig. \[specTn\]), the data are incompatible with these models. The muon depolarization rate ($\lambda$) was found to increase suddenly at $T_{\rm{N1}}$, indicating a transition between the paramagnetic and ordered states. However $\lambda$ does not show a significant anomaly at either $T_{\rm{N2}}$ or $T_{\rm{N3}}$, where there is a rearrangement of the spins and a change in the magnetic structure. ![The normalized longitudinal component of the initial asymmetry ($A_{\rm{z}}$) as a function of an applied magnetic field at 1.4 K.
The solid line shows a fit described in the text.[]{data-label="LFmuSR"}](Fig5.eps){width="0.99\columnwidth"} The initial value of the asymmetry ($A_{\rm{z}}$) as a function of applied longitudinal field at 1.4 K is shown in Fig. \[LFmuSR\]. This is the longitudinal component and has been normalized such that $A_{\rm{z}}~=~1$ corresponds to the muon being fully decoupled from its local environment. A fit has been made using the expression described in Ref. . An internal field of 1080(40) G was obtained which is in approximate agreement with that deduced from the zero field data, despite a change in magnetic structure between 13 K and 1.4 K. Single crystal neutron diffraction ---------------------------------- ![Elastic scans made across (1 0 $l$) at four temperatures. No peak is observed above $T_{\rm{N1}}$. Below 2 K a peak is observed at $l~=~\frac{1}{2}$, which shifts to $l~=~\frac{3}{8}$ at 10 K and $l~=~\frac{1}{3}$ at 14 K. []{data-label="hklscan"}](Fig6.eps){width="0.99\columnwidth"} Single crystal neutron diffraction measurements were carried out in each of the three magnetically ordered phases, on the D10 diffractometer . Fig. \[hklscan\] shows elastic scans made across (1 0 $l$) at different temperatures. This reveals that below 20 K, additional peaks for non-integer $l$ are observed, indicating the onset of antiferromagnetic ordering. At 2 K the additional peak is at $l~=~\frac{1}{2}$, at 10 K it is at $l~=~\frac{3}{8}$ and at 14 K it is at $l~=~\frac{1}{3}$. Since the (1 0 0) peak is forbidden for a body-centred structure, this indicates a propagation vector of **k** = (0,0,$\frac{1}{2}$) below $T_{\rm{N3}}$, **k** = (0,0,$\frac{5}{8}$) for $T_{\rm{N3}}~<~T~<~T_{\rm{N2}}$, and **k** = (0,0,$\frac{2}{3}$) for $T_{\rm{N2}}~<~T<~T_{\rm{N1}}$. Fig. \[110peak\] shows the intensity of the (1 1 0) reflection between 2 and 25 K. 
The increase in integrated intensity of this nuclear peak for $T_{\rm{N3}}~<~T~<~T_{\rm{N1}}$ indicates the presence of an additional ferromagnetic (FM) component for these two magnetic phases. The propagation vector of **k** = (0,0,$\frac{1}{2}$) agrees with the previous single crystal neutron diffraction measurements. [@CeCoGe3SCND] However, as shown in Fig. \[hklscan\], we do not see a peak at (1 0 $\frac{1}{4}$) as previously observed, nor do we observe any evidence for a two-component magnetic structure. However at 8 K, just above $T_{\rm{N3}}$, coexistence of the (1 0 $\frac{1}{2}$) and (1 0 $\frac{3}{8}$) reflections is observed (Fig. \[8Kpeaks\]), indicating a first-order transition between the phases. This is also supported by the observation of hysteresis in magnetic isotherms at 3 K.[@CeCoGe31993] ![The temperature dependence of the integrated intensity of the (1 1 0) reflection. An increase in the intensity between $T_{\rm{N3}}$ and $T_{\rm{N1}}$ indicates there is a ferromagnetic contribution in these phases.[]{data-label="110peak"}](Fig7.eps){width="0.99\columnwidth"} ![Elastic scans made across (1 0 $l$) at 8 K. At this temperature there is a coexistence between the peaks at $l~=~\frac{3}{8}$ and $l~=~\frac{1}{2}$.[]{data-label="8Kpeaks"}](Fig8.eps){width="0.99\columnwidth"} At 35 K, in the paramagnetic state, the intensities were collected for all the allowed, experimentally accessible reflections ($h~k~l$). In each magnetic phase, intensities were collected for the reflections ($h~k~l$) $\pm$ **k**. The intensities of 104 magnetic reflections were collected at 2 and 14 K whilst 57 were collected at 10 K. No magnetic peaks were observed corresponding to (0 0 $l$), indicating that in all three phases the magnetic moments point along the $c$ axis. A symmetry analysis of each phase using SARA$h$ [@Sarah] shows that $\rm{\Gamma_2}$ is the only irreducible representation of the little group (G$_k$) with the moments along the $c$ axis.
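The indexing of the magnetic peaks described above can be made explicit with a few lines of code: magnetic satellites appear at nuclear positions $\pm$**k**, and for a body-centred cell the reflection condition is $h+k+l$ even, so the peaks observed at (1 0 $l$) must be satellites of the allowed (1 0 1) rather than of the forbidden (1 0 0). A sketch (the helper names are ours, not from any analysis package):

```python
from fractions import Fraction


def satellites(nuclear, k_prop):
    """Magnetic satellite positions (h, k, l) +/- k_prop of a nuclear reflection."""
    return [tuple(n + s * q for n, q in zip(nuclear, k_prop)) for s in (+1, -1)]


def allowed_bcc(hkl):
    """Body-centred reflection condition: h + k + l even."""
    return sum(hkl) % 2 == 0


# (1 0 0) is forbidden while (1 0 1) is allowed, so the observed peaks at
# (1 0 1/2), (1 0 3/8) and (1 0 1/3) are satellites of (1 0 1):
assert not allowed_bcc((1, 0, 0)) and allowed_bcc((1, 0, 1))
for kz, l_obs in [(Fraction(1, 2), Fraction(1, 2)),
                  (Fraction(5, 8), Fraction(3, 8)),
                  (Fraction(2, 3), Fraction(1, 3))]:
    assert (1, 0, l_obs) in satellites((1, 0, 1), (0, 0, kz))
```

Each observed peak position thus fixes the propagation vector quoted in the text for the corresponding phase.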
Both the crystal and magnetic structures of each phase were fitted using FullProf. [@Fullprof1] With the scale factor and extinction parameters fixed from the results of the crystal structure refinement, the only free parameter in the refinements of the magnetic phases was the magnetic moment on the Ce atoms. An R factor of 10.9 was obtained for the refinement of the crystal structure, 21.5 for the magnetic phase at 2 K, 24.3 at 10 K and 22 at 14 K. Plots of F$_{\rm{calc}}$ vs F$_{\rm{obs}}$ for all the refinements are shown in Fig. \[Fcalc\]. ![Plots of the calculated vs observed values of F$_{\rm{hkl}}$ for the refinement of (a) the crystal structure at 35 K and (b) - (d) the magnetic structure in the three magnetic phases. The solid lines indicate where F$_{\rm{calc}}$ = F$_{\rm{obs}}$.[]{data-label="Fcalc"}](Fig9.eps){width="0.99\columnwidth"} The introduction of a global phase $\phi$ to a magnetic structure leaves the neutron diffraction pattern unchanged. However for the phase at 2 K with **k** = (0,0,$\frac{1}{2}$), selecting $\phi$ = $\pi$/4 gives an equal moment on each Ce site of 0.405(5) $\rm{\mu_B}$. This structure has a two-up, two-down configuration along the $c$ axis (Fig. \[CrysStruc\](c)). Similarly for the phase at 14 K with **k** = (0,0,$\frac{2}{3}$), selecting $\phi$ = 0 gives a modulated structure along the $c$ axis with an up moment of 0.485(6) $\rm{\mu_B}$ followed by two down moments of 0.243(3) $\rm{\mu_B}$. The addition of a FM component of $-$0.125 $\rm{\mu_B}$/Ce gives a constant moment, two-up, one-down configuration as shown in Fig. \[CrysStruc\](a). A FM component is observed in this phase (Fig. \[110peak\]) and this equal moment solution is compatible with magnetization results. [@CeCoGe32005] For the phase at 10 K with **k** = (0,0,$\frac{5}{8}$), we were unable to deduce a global phase $\phi$ to which a FM component could be added to give an equal moment solution. 
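The effect of the global phase described above can be illustrated numerically. With Ce sites stacked at $z = 0, \frac{1}{2}, 1, \ldots$ (in units of $c$, from the body centring) and moments $m_n \propto \cos(2\pi k_z z_n + \phi)$, the quoted choices reproduce the equal-moment two-up, two-down sequence at 2 K and the one-up, two-half-size-down sequence at 14 K; a uniform FM shift of one quarter of the amplitude then equalizes the moments, consistent with the FM component discussed in the text. A sketch (illustrative, not the refinement code):

```python
import numpy as np


def moments(kz, phi, n_sites=12):
    """Ce moments along c for propagation vector (0, 0, kz) and global
    phase phi; sites at z = 0, 1/2, 1, ... in units of c."""
    z = 0.5 * np.arange(n_sites)
    return np.cos(2 * np.pi * kz * z + phi)


# 2 K: k = (0,0,1/2), phi = pi/4 -> equal moments, ++-- (two-up, two-down)
m2K = moments(0.5, np.pi / 4)

# 14 K: k = (0,0,2/3), phi = 0 -> one up moment, two down moments of half size
m14K = moments(2 / 3, 0.0)

# a uniform FM component of -1/4 of the amplitude gives equal moments
m14K_fm = m14K - 0.25
```

The 2:1 amplitude ratio at 14 K matches the refined up moment of 0.485 $\rm{\mu_B}$ followed by two down moments of 0.243 $\rm{\mu_B}$, and the quarter-amplitude FM shift corresponds to the $\sim$0.125 $\rm{\mu_B}$/Ce component quoted in the text.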
A simple three-up, one-down structure as was previously suggested for this phase from magnetization measurements [@CeCoGe32005] is not compatible with this propagation vector. The antiferromagnetic component with $\phi$ = 0 is shown in Fig. \[CrysStruc\](b) for half of the magnetic unit cell. However as shown in Fig. \[110peak\], there is also a ferromagnetic component in this phase and further measurements of the nuclear reflections at 10 K would be required to determine the size of this contribution. ![(Color online) The crystal structure of CeCoGe$_3$ where the Ce atoms are in red, the Co in blue and the Ge in grey. The arrows depict the magnetic moments on the Ce atoms. (a) The proposed magnetic structure at 14 K consisting of the antiferromagnetic component with a global phase $\phi$ = 0 and a ferromagnetic component to give an equal moment, two-up, one-down structure. (b) The antiferromagnetic component ($\phi$ = 0) at 10 K for one half of the magnetic unit cell. (c) The magnetic structure at 2 K, with $\phi$ = $\pi$/4 to give an equal moment, two-up, two-down structure.[]{data-label="CrysStruc"}](Fig10.eps){width="0.99\columnwidth"}

Inelastic neutron scattering
----------------------------

![(Color online) Color coded plots of the inelastic neutron scattering intensity with an incident energy of 40 meV (in units of mb sr$^{-1}$ meV$^{-1}$ f.u$^{-1}$) for (a) CeCoGe$_3$ at 4 K, (b) CeCoGe$_3$ at 25 K and (c) LaCoGe$_3$ at 5 K. The magnetic scattering of CeCoGe$_3$ at 4 K obtained by subtracting the phonon contribution of CeCoGe$_3$ (see text) is shown in (d).[]{data-label="colourplots"}](Fig11.eps){width="0.99\columnwidth"} ![The temperature dependence of the quasielastic linewidth (HWHM) obtained from fitting data measured with an incident energy of 15 meV (see text).
A linear fit of the data between 20 K and 150 K is displayed.[]{data-label="GammaT"}](Fig12.eps){width="0.99\columnwidth"} ![(Color online) Cuts of $S_{\rm{mag}} (Q, \omega)$ with an incident energy of 40 meV integrated over **$|$Q$|$** from 0 to 3 $\rm{\AA^{-1}}$. The solid lines show fits made to a CEF model described in the text. The components of the fits are shown with dashed lines.[]{data-label="40meVFits"}](Fig13.eps){width="0.99\columnwidth"} ![(Color online) Cuts of $S_{\rm{mag}} (Q, \omega)$ with an incident energy of 10 meV integrated over **$|$Q$|$** from 0 to 2 $\rm{\AA^{-1}}$. Fits are made to a CEF model (see text). The components of the fits are shown with dashed lines.[]{data-label="10meVFits"}](Fig14.eps){width="0.65\columnwidth"} ![(Color online) The single crystal susceptibility between 20 and 390 K with an applied field of 1000 G. The solid lines show fits to a CEF model (see text). The CEF parameters were fixed from the INS data but anisotropic molecular fields ($\lambda_{ab}$ and $\lambda_{c}$) and temperature independent susceptibilities were fitted. The inset shows a self-consistent mean field calculation of the magnetization per cerium atom using the fitted CEF parameters and a molecular field parameter of 38.9 mol/emu.[]{data-label="ChiT"}](Fig15.eps){width="0.99\columnwidth"} To obtain information about the CEF scheme and the magnetic excitations of the ordered state, INS measurements were carried out on polycrystalline samples of CeCoGe$_3$ and LaCoGe$_3$ using the MARI spectrometer with incident neutron energies (E$_i$) of 10 and 40 meV. LaCoGe$_3$ is non-magnetic and isostructural to CeCoGe$_3$ and the measurements were used to estimate the phonon contribution to the scattering. Color coded plots of the INS intensity of CeCoGe$_3$ are shown in Fig. \[colourplots\](a) and \[colourplots\](b) at 4 and 25 K respectively, whilst the scattering of LaCoGe$_3$ is shown in Fig. \[colourplots\](c).
In both the magnetically ordered and paramagnetic states, two inelastic excitations are observed with a significant intensity at low scattering vectors ($\textbf{Q}$). These are absent in the scattering of non-magnetic LaCoGe$_3$, indicating they are magnetic in origin. The excitations have a maximum intensity at approximately 19 and 27 meV. These can be seen in Fig. \[colourplots\](d), which shows the magnetic scattering ($S_{\rm{mag}} (Q, \omega)$) obtained from $S_{\rm{Ce}}(Q, \omega)$ - $\rm{\alpha}$ $S_{\rm{La}}(Q, \omega)$, where $\alpha$ = 0.9, the ratio of the scattering cross sections of CeCoGe$_3$ and LaCoGe$_3$. The scattering intensity decreases with $|\textbf{Q}|$, as expected for CEF excitations. The presence of two CEF excitations is expected for a Ce$\rm^{3+}$ ion in a tetragonal CEF since, according to Kramers theorem, provided time reversal symmetry is preserved, the energy levels of a system with an odd number of electrons must remain doubly degenerate. Therefore the sixfold $J~=~\frac{5}{2}$ ground state can be split into a maximum of three doublets in the paramagnetic state. Also revealed in the 4 K data is an additional excitation with a maximum at around 4.5 meV. This excitation is not present at 25 K (Fig. \[colourplots\](b)), where instead the elastic line is broader. This indicates the presence of spin waves in the ordered state at 4 K with an energy scale of approximately 4.5 meV for the zone boundary magnons. Interestingly, the spin wave peak in CeRhGe$_3$ is observed at around 3 meV and the compound orders at $T_{\rm{N1}}$ = 14.5 K. [@CeRhGe32012] Therefore the spin wave energy appears to scale similarly with $T_{\rm{N1}}$ in both CeRhGe$_3$ and CeCoGe$_3$. Additional low energy measurements on IN6 with an incident energy of 3.1 meV display a lack of magnetic scattering below 2 meV at 4 K, indicating a spin gap in the magnon spectrum.
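The phonon subtraction described above is simple enough to state as code; a minimal sketch (array names are illustrative, and the subtraction is applied bin-by-bin to the measured $S(Q,\omega)$ histograms):

```python
import numpy as np


def magnetic_scattering(s_ce, s_la, alpha=0.9):
    """Estimate S_mag(Q, w) = S_Ce(Q, w) - alpha * S_La(Q, w), using the
    non-magnetic, isostructural La compound as the phonon reference,
    scaled by the cross-section ratio alpha = 0.9."""
    return np.asarray(s_ce, dtype=float) - alpha * np.asarray(s_la, dtype=float)
```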
In the paramagnetic state, the spectral weight is shifted towards the elastic line and quasielastic scattering (QES) is observed. This is additional magnetic scattering, centred on the elastic line but with a linewidth broader than the instrument resolution. Further measurements were made in the paramagnetic state between 20 and 200 K on the MERLIN spectrometer with an incident energy of 15 meV. The temperature dependence of the half width at half maximum ($\Gamma$) is shown in Fig. \[GammaT\]. The data were fitted with an elastic line resolution function and an additional Lorentzian function to model the quasielastic component. The widths of the elastic component were fixed from measurements of vanadium with the same incident energy and frequency of the Fermi chopper. An estimate of the Kondo temperature ($T_{\rm{K}}$) can be obtained from the value of $\Gamma$ at 0 K. From a linear fit to the data we estimate $T_{\rm{K}}$ = 11(3) K. This is of the same order as the ordering temperature $T_{\rm{N1}}$ = 21 K. A linear dependence of the QES linewidth with temperature is expected until the thermal energy approaches the splitting of the first excited CEF level. [@Linewidth] The first CEF excitation is at 19 meV ($\sim$220 K), which may explain the deviation from linear behaviour observed at 190 K. It was also possible to fit the data to a $T^{\frac{1}{2}}$ dependence. This behaviour has been observed in the linewidth of the QES scattering in other HF systems. [@LinewidthHalf] However this fit yields a negative value of $\Gamma(0)$, for which we have no physical interpretation, and it has therefore not been displayed. Cuts of $S_{\rm{mag}} (Q, \omega)$ were made by integrating across low values of **$|$Q$|$** (0 to 3 $\rm{\AA^{-1}}$). These are shown for $E_{\rm{i}}$ = 40 meV in Fig. \[40meVFits\] and for $E_{\rm{i}}$ = 10 meV in Fig. \[10meVFits\].
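The Kondo-temperature estimate above follows from extrapolating the linear $\Gamma(T)$ trend to $T=0$ and using $\Gamma(0)=k_{\rm B}T_{\rm K}$. A sketch of the arithmetic with synthetic linewidths (the numbers below are made up to illustrate the procedure, not the measured MERLIN data):

```python
import numpy as np

K_B = 0.08617  # Boltzmann constant in meV/K

# synthetic quasielastic linewidths (meV) versus temperature (K),
# chosen to follow a linear trend with Gamma(0) = 0.95 meV
T = np.array([20.0, 50.0, 100.0, 150.0])
gamma = 0.95 + 0.010 * T

# extrapolate the linear fit to T = 0 and convert to a temperature scale
slope, gamma0 = np.polyfit(T, gamma, 1)
T_K = gamma0 / K_B  # Kondo temperature; ~11 K for these illustrative inputs
```

An intercept near 0.95 meV corresponds to $T_{\rm K} \approx 11$ K, the same scale as the estimate quoted in the text.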
The data were analyzed with the following Hamiltonian for a Ce$\rm^{3+}$ ion at a site with tetragonal point symmetry: $$\mathcal{H}_{\rm{CF}} = B_2^0{\rm{O_2^0}} + B_4^0{\rm{O_4^0}} + B_4^4{\rm{O_4^4}}$$ where $B_n^m$ are CEF parameters and $\rm{O_n^m}$ are the Stevens operator equivalents. Using the fact that Stevens operator equivalents can be expressed in terms of angular momentum operators, the CEF wavefunctions and energies may be determined by diagonalizing $\mathcal{H}_{\rm{CF}}$. [@StevensCEF; @HutchingsCEF] We sought to find a CEF scheme compatible with both INS and magnetic susceptibility data. For isotropic exchange interactions, $B_2^0$ can be estimated from the high temperature magnetic susceptibility [@Jensenrare] using the relation: $$B_2^0 = \frac{10k_B(\theta_{ab} - \theta_c)}{3(2J-1)(2J+3)}$$ where $\theta_{ab}$ and $\theta_{c}$ are the Curie-Weiss temperatures for fields applied in the $ab$ plane and along the $c$ axis respectively. Using the previously obtained values [@CeCoGe32005], $B_2^0$ is calculated to be $-$0.376 meV. In particular, since $\theta_{ab}$ $<$ $\theta_{c}$, a negative $B_2^0$ is anticipated. We then fitted the INS data in the paramagnetic state with $E_i$ = 10 and 40 meV to obtain values of $B_n^m$. Initially we fixed $B_2^0$ = $-$0.376 meV and varied $B_4^0$ and $B_4^4$. In the final fit, all three CEF parameters were varied. The fits are shown in Figs. \[40meVFits\](b)-(d) and \[10meVFits\](b) and it can be seen that there is a good fit to the INS data. Using these values of $B_n^m$, a fit was made to the single crystal susceptibility data, which shows reasonably good agreement (Fig. \[ChiT\]). Simultaneously fitting the magnetic susceptibility and the INS data at 25 K led to similar values of $B_n^m$. At 4 K, in the ordered state, an additional peak is observed in $S_{\rm{mag}} (Q, \omega)$ at around 4.5 meV.
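The level scheme implied by the fitted parameters can be checked by direct diagonalization of $\mathcal{H}_{\rm{CF}}$ in the $|J=\frac{5}{2},m\rangle$ basis, using the standard Stevens operator-equivalent matrices. The sketch below uses the 25 K parameters from Table \[CEFTab\] and reproduces Kramers doublets split by roughly 19.3 and 26.4 meV from the ground state:

```python
import numpy as np

J = 2.5
m = np.arange(J, -J - 1, -1)  # m = 5/2, 3/2, ..., -5/2
jj = J * (J + 1)

# angular momentum matrices in the |J, m> basis
Jz = np.diag(m)
Jp = np.diag([np.sqrt(jj - mm * (mm + 1)) for mm in m[1:]], k=1)  # <m+1|J+|m>
Jm = Jp.T
I6 = np.eye(len(m))

# Stevens operator equivalents appearing in the tetragonal Hamiltonian
O20 = 3 * Jz @ Jz - jj * I6
O40 = (35 * np.linalg.matrix_power(Jz, 4)
       - (30 * jj - 25) * Jz @ Jz + (3 * jj**2 - 6 * jj) * I6)
O44 = 0.5 * (np.linalg.matrix_power(Jp, 4) + np.linalg.matrix_power(Jm, 4))

# 25 K CEF parameters (meV) from Table [CEFTab]
B20, B40, B44 = -0.61, -0.007, 0.463
H = B20 * O20 + B40 * O40 + B44 * O44

E = np.sort(np.linalg.eigvalsh(H))
E = E - E[0]  # doublets at 0, ~19.3 and ~26.4 meV
```

The off-diagonal $\rm{O_4^4}$ term mixes $|\pm\frac{5}{2}\rangle$ with $|\mp\frac{3}{2}\rangle$, which is the origin of the admixed ground-state doublet discussed in the text.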
Although the full treatment of these data would require a calculation of the spin-wave excitations, we sought to determine if the addition of an internal magnetic field could satisfactorily account for this peak in the ordered state. Since the magnetic moments lie along the $c$ axis below $T_{\rm{N1}}$, we fitted $S_{\rm{mag}}(Q, \omega)$ with a finite internal field $B_{\rm{z}}$, allowing $B_4^0$ and $B_4^4$ to vary. A small change in the CEF parameters was allowed below $T_{\rm{N}}$. This is expected due to small changes in the lattice parameters upon magnetic ordering. As shown in Figs. \[40meVFits\](a) and \[10meVFits\](a), a $B_{\rm{z}}$ of 1.69(9) meV gives a good fit to the data. The resulting CEF parameters are shown in Table \[CEFTab\]. The wavefunctions calculated for the paramagnetic state are $$\begin{multlined} |\psi_1^{\pm} \rangle = 0.8185\left|\pm \frac{5}{2} \right \rangle - 0.5745\left|\mp \frac{3}{2} \right \rangle \\ \\ |\psi_2^{\pm} \rangle = \left|\pm \frac{1}{2} \right \rangle \\ \\ |\psi_3^{\pm} \rangle = 0.8185\left|\pm \frac{3}{2} \right \rangle + 0.5745\left|\mp \frac{5}{2} \right \rangle \end{multlined}$$ $\psi_1 (\Gamma_6(1))$ is predicted to be the ground state (GS) wavefunction whilst $\psi_2 (\Gamma_7)$ is 19.3 meV and $\psi_3 (\Gamma_6(2))$ is 26.4 meV above the GS. The GS magnetic moments of the cerium atoms in the $ab$ plane ($\langle \mu_x \rangle$) and along the $c$ axis ($\langle \mu_z \rangle$) can be calculated from $$\begin{multlined} \langle \mu_z \rangle = \langle \psi_1^{\pm} | g_JJ_z | \psi_1^{\pm} \rangle \\ \\ \langle \mu_x \rangle = \langle \psi_1^{\mp}| \frac{g_J}{2}(J^+ + J^-) \left | \psi_1^{\pm} \right \rangle \end{multlined}$$ The magnitude of $\langle \mu_z \rangle$ is calculated to be 1.01 $\rm{\mu_B}$ whilst the magnitude of $\langle \mu_x \rangle$ is calculated to be 0.9 $\rm{\mu_B}$. A self-consistent mean field calculation of the magnetization shown in the inset of Fig.
\[ChiT\], gives a ground state magnetic moment of 1.3 $\rm{\mu_B}$. A molecular field parameter of $\lambda~=~38.9~$mol/emu was chosen to correctly reproduce the observed value of $T_{\rm{N1}}$ and this is in good agreement with the values shown in Table \[CEFTab\]. However the refinement of the single crystal neutron diffraction data at 2 K predicts a moment along the $c$ axis of 0.405(5) $\rm{\mu_B}$. This implies there is a reduction in the cerium moment due to hybridization between the GS and the conduction electrons. By considering the magnetocrystalline anisotropy energy ($E_{\rm{a}}$), the moment is predicted to lie along the $c$ axis for a negative $B_2^0$ and the $\psi_1$ GS. [@MagAniso1990] Therefore our CEF model correctly predicts the direction of the observed magnetic moment. From previous studies of the magnetic susceptibility, a CEF scheme with a GS of $|\pm \frac{1}{2} \rangle$ was suggested. [@CeCoGe32005] These CEF parameters give rise to energy level splittings from the GS of 9.8 and 27.3 meV, which are incompatible with our INS measurements. We were unable to find a CEF scheme with this GS configuration that fitted both the INS and magnetic susceptibility data.

                                               4 K           25 K
  ------------------------------------------ ------------- -------------
  $B_2^0$ (meV)                               $-$0.61       $-$0.61(4)
  $B_4^0$ (meV)                               $-$0.013(3)   $-$0.007(2)
  $B_4^4$ (meV)                               0.412(8)      0.463(8)
  $\Gamma_{\rm{QES}}$ (meV)                   –             1.9(3)
  $\Gamma_{\psi_2}$ (meV)                     2.5(2)        1.6(3)
  $\Gamma_{\psi_3}$ (meV)                     2.3(2)        2.9(3)
  $\lambda_{ab}$ (mol/emu)                    –             $-$40.9
  $\lambda_{c}$ (mol/emu)                     –             $-$52.0
  $\chi_0^{ab}$ ($\times 10^{-3}$ emu/mol)    –             $-$0.404
  $\chi_0^c$ ($\times 10^{-3}$ emu/mol)       –             $-$1.936

  : The parameters obtained from fitting $S_{\rm{mag}}(Q, \omega)$ from INS and magnetic susceptibility data. $B_n^m$ were obtained from fitting the INS data. At 4 K, the value of $B_2^0$ was fixed whilst the other two CEF parameters were allowed to vary.
The Lorentzian linewidths of the quasielastic scattering ($\Gamma_{\rm{QES}}$) and the first and second CEF excitations ($\Gamma_{\psi_2}$ and $\Gamma_{\psi_3}$) are also displayed. The remaining parameters are obtained from fitting the magnetic susceptibility with anisotropic molecular field parameters ($\lambda_{ab}$ and $\lambda_{c}$) as well as temperature independent susceptibilities ($\chi_0^{ab}$ and $\chi_0^c$).[]{data-label="CEFTab"} We may now compare our results with those obtained from isostructural Ce$TX_3$ compounds. Like CeCoGe$_3$, the CEF model for CeRhGe$_3$ predicts a GS which is an admixture of $\left|\pm \frac{5}{2} \right \rangle$ and $\left|\mp \frac{3}{2} \right \rangle$. [@CeRhGe32012] Both compounds have a significant $B_4^4$, 0.463 meV for CeCoGe$_3$ and 0.294 meV for CeRhGe$_3$, which leads to this mixing. In CeRhGe$_3$, the $\left|\pm \frac{3}{2} \right \rangle$ states are the largest components in the GS whilst for CeCoGe$_3$ it is $\left|\pm \frac{5}{2} \right \rangle$. In both compounds, the moments in the magnetically ordered state align along the $c$ axis. However $B_2^0$ is positive for CeRhGe$_3$ and a consideration of E$_a$ predicts a moment lying in the $ab$ plane. The alignment of the moment along $c$ is ascribed to two-ion anisotropic exchange interactions. Unlike in CeCoGe$_3$, the easy axis of the magnetic susceptibility is in the $ab$ plane despite the moment alignment along $c$ below T$_{\rm{N}}$. The calculated value of $\langle \mu_z \rangle$ closely agrees with the result obtained from the magnetic neutron diffraction measurements and there is no evidence of a reduction of the cerium moment due to hybridization. In contrast to this, the CEF model for CeCoGe$_3$ correctly predicts the alignment of the ordered moment and the easy axis of the magnetic susceptibility. However the observed moment is significantly reduced compared to the calculated value of $\langle \mu_z \rangle$.
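The calculated value of $\langle \mu_z \rangle$ discussed above follows directly from the fitted ground-state doublet: with $g_J = 6/7$ for Ce$^{3+}$, $\langle\mu_z\rangle = g_J\langle\psi_1^{\pm}|J_z|\psi_1^{\pm}\rangle$ and $\langle\mu_x\rangle = (g_J/2)\langle\psi_1^{\mp}|J^+ + J^-|\psi_1^{\pm}\rangle$. A short check of the quoted 1.01 and 0.9 $\rm{\mu_B}$:

```python
import numpy as np

g_J = 6.0 / 7.0          # Lande factor for Ce3+ (L = 3, S = 1/2, J = 5/2)
a, b = 0.8185, -0.5745   # |psi1+> = a|+5/2> + b|-3/2> from the CEF fit

m = np.arange(2.5, -3.5, -1.0)  # basis |5/2, 3/2, 1/2, -1/2, -3/2, -5/2>
Jz = np.diag(m)
Jp = np.diag([np.sqrt(2.5 * 3.5 - mm * (mm + 1)) for mm in m[1:]], k=1)
Jpm = Jp + Jp.T                  # J+ + J-

psi_p = np.zeros(6); psi_p[0], psi_p[4] = a, b  # components on |5/2>, |-3/2>
psi_m = np.zeros(6); psi_m[5], psi_m[1] = a, b  # Kramers partner on |-5/2>, |3/2>

mu_z = g_J * psi_p @ Jz @ psi_p           # ~ 1.01 (in units of mu_B)
mu_x = 0.5 * g_J * psi_m @ Jpm @ psi_p    # magnitude ~ 0.90
```

The hybridization argument in the text is then the comparison of this 1.01 $\rm{\mu_B}$ single-ion value with the refined 0.405(5) $\rm{\mu_B}$ moment.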
The reduction in moment is not as drastic as in the other pressure-induced NCS superconductors CeRhSi$_3$ and CeIrSi$_3$. For example a CEF model of CeRhSi$_3$ [@CeRhSi3CEF] predicts a moment of 0.92 $\rm{\mu_B}$/Ce in the $ab$ plane whilst a moment of 0.12 $\rm{\mu_B}$/Ce in that direction is actually observed through neutron diffraction studies. [@CeRhSi3ND] This compound also has a very different magnetic structure, a spin-density wave with propagation vector (0.215,0,$\frac{1}{2}$). These results suggest that CeCoGe$_3$ has a degree of hybridization between that of CeRhGe$_3$ and CeRhSi$_3$. This is consistent with the fact that CeRhSi$_3$ is closer to a QCP, having an onset of superconductivity at 1.2 GPa [@CeRhSi3SC] whilst CeCoGe$_3$ becomes superconducting at 5.5 GPa [@CeCoGe3SC] and CeRhGe$_3$ [@CeTX32008] does not become superconducting up to 8.0 GPa. The linewidths of the CEF excitations give an indication of the hybridization strength between the conduction electrons and the excited states. The linewidths obtained for CeCoGe$_3$ at 25 K were 1.6(3) and 2.9(3) meV for transitions from the GS to $\psi_2$ and $\psi_3$ respectively. This compares to values of 1.4(2) and 2.2(3) meV obtained for CeRhGe$_3$. [@CeRhGe32012] The linewidth of the excitation to $\psi_2$ was similar in both compounds whilst the excitation to $\psi_3$ was broader in CeCoGe$_3$ than CeRhGe$_3$. However linewidths of 3.9(2) and 9.2(4) meV were obtained for the CEF excitations of CeRhSi$_3$ [@CeRhSi3ExpRep], indicating stronger hybridization of all the states in the $J~=~\frac{5}{2}$ multiplet.

Conclusions
===========

We have studied the magnetic ordering in CeCoGe$_3$ using single crystal neutron diffraction, inelastic neutron scattering, $\mu$SR and magnetic susceptibility. The transition to magnetic ordering below $T_{\rm{N1}}$ is observed with the emergence of oscillations in zero-field $\mu$SR spectra.
We fitted the temperature dependence of the internal magnetic fields to a mean field model. Single crystal neutron diffraction measurements reveal magnetic ordering with a propagation vector of **k** = (0,0,$\frac{1}{2}$) below $T_{\rm{N3}}$, **k** = (0,0,$\frac{5}{8}$) for $T_{\rm{N3}}~<~T~<~T_{\rm{N2}}$, and **k** = (0,0,$\frac{2}{3}$) for $T_{\rm{N2}}~<~T~<~T_{\rm{N1}}$. From a refinement of the integrated intensities we suggest a two-up, two-down magnetic structure below $T_{\rm{N3}}$ with moments of 0.405(5) $\rm{\mu_B}$/Ce along the $c$ axis. Measurements of the (1 1 0) reflection indicate a ferromagnetic component between $T_{\rm{N3}}$ and $T_{\rm{N1}}$. From this we suggest a two-up, one-down structure for the phase between $T_{\rm{N2}}$ and $T_{\rm{N1}}$. INS measurements of polycrystalline CeCoGe$_3$ at low temperatures indicate two CEF excitations at 19 and 27 meV. At 4 K, we observe an additional peak at 4.5 meV due to spin wave excitations. Above $T_{\rm{N1}}$, this peak is not present but quasielastic scattering is observed. A linear fit to the temperature dependence of the quasielastic linewidth gives an estimate of $T_K~=~11(3)$ K. From an analysis of INS and magnetic susceptibility data with a CEF model, we propose a CEF scheme for CeCoGe$_3$. We are also able to account for the spin wave peak at 4.5 meV by the addition of an internal field along the $c$ axis. The CEF scheme correctly predicts the direction of the ordered moment but the observed magnetic moment at 2 K of 0.405(5) $\rm{\mu_B}$/Ce is reduced compared to the predicted moment of 1.01 $\rm{\mu_B}$/Ce. We believe that the reduced moment is due to hybridization between the localized Ce$^{3+}$ f-electrons and the conduction band. From considering the moment reduction, we deduce that CeCoGe$_3$ has a hybridization strength between that of the localized antiferromagnet CeRhGe$_3$ and the NCS CeRhSi$_3$.
CeRhSi$_3$ exhibits SC at lower applied pressure than CeCoGe$_3$ whilst CeRhGe$_3$ does not exhibit SC up to at least 8.0 GPa. This is evidence for the important role of hybridization in the unconventional superconductivity of the Ce$TX_3$ series. We acknowledge the EPSRC, UK, for providing funding (grant number EP/I007210/1). DTA/ADH thank CMPC-STFC (grant number CMPC-09108) for financial support. We thank T.E. Orton for technical support, S. York for compositional analysis and P. Manuel, B.D. Rainford and K.A. McEwen for interesting discussions. Some of the equipment used in this research at the University of Warwick was obtained through the Science City Advanced Materials: Creating and Characterising Next Generation Advanced Materials Project, with support from Advantage West Midlands (AWM) and part funded by the European Regional Development Fund (ERDF).
--- address: - | Physics Department, University of Connecticut, 2152 Hillside Road, Storrs, CT, 06269-3046, USA\ RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, New York 11973, USA - 'Physics Department, Columbia University, New York, New York 10027, USA' - | Department of Physics, Nagoya University, Nagoya 464-8602, Japan\ Nishina Center, RIKEN, Wako, Saitama 351-0198, Japan - | Physics Department, Brookhaven National Laboratory, Upton, New York 11973, USA\ RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, New York 11973, USA - 'Physics Department, Brookhaven National Laboratory, Upton, New York 11973, USA' - | Universität Regensburg, Fakultät für Physik, 93040, Regensburg, Germany\ Physics Department, Brookhaven National Laboratory, Upton, New York 11973, USA author: - Thomas Blum and Luchang Jin - Norman Christ - Masashi Hayakawa - Taku Izubuchi - Chulwoo Jung - Christoph Lehner title: 'Hadronic light-by-light contribution to the muon anomalous magnetic moment from lattice QCD' --- Introduction ============ The anomalous magnetic moment of the muon is providing an important test of the Standard Model. An ongoing experiment at Fermilab (E989) and one planned at J-PARC (E34) aim to reduce the experimental uncertainty by a factor of four, and [similar efforts](http://www.int.washington.edu/PROGRAMS/19-74W/) are underway on the theory side. A key part of the latter is to compute the hadronic light-by-light (HLbL) contribution from first principles using lattice QCD. Such a calculation, with all errors under control, leaves no room for doubt when the ultimate comparison arrives. The anomalous magnetic moment is an intrinsic property of a spin-1/2 particle, and is defined through its interaction with an external magnetic field. 
Lorentz and gauge symmetries dictate the form of the interaction, $$\begin{aligned} \langle \mu (\vec p^\prime) | J_\nu(0) |\mu(\vec p)\rangle &=& -e \bar u(\vec p^\prime)\left(F_1(q^2)\gamma_\nu+i\frac{F_2(q^2)}{4 m}[\gamma_\nu,\gamma_\rho] q_\rho\right)u(\vec p), \label{eq:ff}\end{aligned}$$ where $J_\nu$ is the electromagnetic current, and $F_1$ and $F_2$ are form factors, giving the charge and magnetic moment at zero momentum transfer ($q=p^\prime-p=0$). The anomalous part of the magnetic moment is given by $F_2(0)$ alone, $$\begin{aligned} a_\mu &\equiv& (g-2)/2 = F_2(0).\end{aligned}$$ The desired matrix element in (\[eq:ff\]) is conventionally extracted in quantum field theory from a correlation function of fields as depicted in Fig. \[fig:hlbl feyn diags\]. Here we work in coordinate (Euclidean) space and use Lattice QCD for the hadronic part, which is intrinsically non-perturbative. QED is treated in two ways, first on a discrete, finite, lattice (QED$_L$) and second in the continuum and infinite volume (QED$_\infty$). Note that we always work in a perturbative framework with respect to QED, $i.e.$, only diagrams where the hadronic part is connected to the muon by three photons enter the calculation. ![Leading contributions from hadronic light-by-light scattering to the muon anomaly. The shaded circles represent quark loops containing QCD interactions to all orders. Horizontal lines represent muons. Quark-connected (left) and disconnected (right) diagrams are shown. Ellipses denote diagrams obtained by permuting the photon contractions with the muons and diagrams with three and four quark loops with photon couplings.[]{data-label="fig:hlbl feyn diags"}](hlbl.pdf "fig:"){width="30.00000%"}   +   ![Leading contributions from hadronic light-by-light scattering to the muon anomaly. The shaded circles represent quark loops containing QCD interactions to all orders. Horizontal lines represent muons.
Quark-connected (left) and disconnected (right) diagrams are shown. Ellipses denote diagrams obtained by permuting the photon contractions with the muons and diagrams with three and four quark loops with photon couplings.[]{data-label="fig:hlbl feyn diags"}](hlbl-disc.pdf "fig:"){width="30.00000%"}   $+~~~\cdots$

QED$_L$ Method
==============

Here the muon, photons, quarks, and gluons are treated on a single finite, discrete lattice. The method is described in great detail in Ref. [@Blum:2015gfa], and the (quark-connected) diagrams to be computed are shown in Fig. \[fig:point src method\]. It is still not possible to do all of the sums over coordinate space vertices exactly with currently available compute resources. Therefore we resort to a hybrid method where two of the vertices on the hadronic loop are summed stochastically: point source propagators from coordinates $x$ and $y$ are computed, and their sink points are contracted at the third internal vertex $z$ and the external vertex $x_{\rm op}$. Since the propagators are calculated to all sink points, $z$ and $x_{\rm op}$ can be summed over the entire volume. The sums over vertices $x$ and $y$ are then done stochastically by computing many ($O(1000)$) random pairs of point source propagators. To do the sampling efficiently, the pairs are chosen with an empirical distribution designed to probe the summand frequently where it is large and less frequently where it is small. Since QCD has a mass gap, we know the hadronic loop is exponentially suppressed according to the distance between any of the vertices, including $|x-y|$. As we will see, the main contribution comes from distances less than about 1 fm. The muon line and photons are computed efficiently using FFTs; however, because they must be calculated many times, the cost is not trivial. Two additional, but related, parts of the method bear mentioning. First, the form dictated by the right hand side of Eq.
\[eq:ff\] suggests the limit $q\to0$ is unhelpful since the desired $F_2$ term is multiplied by 0. Second, in our Monte Carlo lattice QCD calculation the error on the $F_2$ contribution blows up in this limit. The former is avoided by evaluating the first moment with respect to $\vec{x}_{\mathrm{op}}$ at the external vertex and noticing that an induced extra term vanishes exponentially in the infinite volume limit [@Blum:2015gfa]. This moment method allows the direct calculation of the correlation function at $q=0$, and hence $F_2(0)$. The second issue is avoided by enforcing the Ward Identity exactly on a configuration-by-configuration basis, $i.e.$, before averaging over gauge fields. This makes the factor of $q$ in Eq. (\[eq:ff\]) exact for each measurement and not just in the average. The Ward Identity is enforced by inserting the external photon at all possible locations on the quark loop. The three distinct possibilities are shown in Fig. \[fig:point src method\]. We note that it is the Ward Identity that guarantees the unwanted term in the moment method vanishes. ![Correlation functions. Sums over $x$ and $y$ are computed stochastically. The third internal vertex $z$ and the external vertex $x_{\rm op}$ are summed over exactly. The sums on the muon line are done exactly using FFT’s. Strong interactions to all orders are not shown.[]{data-label="fig:point src method"}](chlbl-wi.pdf){width="\textwidth"} Implementing the above techniques produces an $O(1000)$-fold improvement in the statistical error over the original non-perturbative method for the hadronic light-by-light scattering contribution [@Blum:2014oka]. disconnected diagrams --------------------- The quark-disconnected diagrams that occur at $O(\alpha^3)$ are shown in Fig. \[fig:disco diags\]. All but the upper-leftmost diagram vanish in the $SU(3)$ flavor limit and are suppressed by powers of $m_{u,d} - m_s$ depending on the number of loops with a single photon attached.
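The distance-weighted stochastic sampling of vertex pairs described above can be sketched as follows. This is a minimal illustration on a toy $L^4$ lattice, assuming a simple exponential acceptance weight; the empirical distribution actually used in the calculation is not reproduced here.

```python
import math
import random

def pair_weight(r, m=0.7):
    # Illustrative acceptance weight: probe separations |x - y| more often
    # where the exponentially suppressed hadronic loop is expected to be
    # large (small r), less often where it is small (large r).
    return math.exp(-m * r)

def sample_pairs(L, n_pairs, seed=1):
    """Draw n_pairs of sites (x, y) on an L^4 periodic lattice, with the
    separation |x - y| importance-sampled via pair_weight.  Each accepted
    pair carries the reweighting factor 1/weight, keeping the stochastic
    estimate of the double sum unbiased."""
    rng = random.Random(seed)
    pairs = []
    while len(pairs) < n_pairs:
        x = tuple(rng.randrange(L) for _ in range(4))
        y = tuple(rng.randrange(L) for _ in range(4))
        # periodic (nearest-image) distance on the torus
        r = math.sqrt(sum(min(abs(a - b), L - abs(a - b)) ** 2
                          for a, b in zip(x, y)))
        w = pair_weight(r)
        if rng.random() < w:      # accept with probability w <= 1
            pairs.append((x, y, 1.0 / w))
    return pairs

pairs = sample_pairs(L=8, n_pairs=100)
```

In the actual calculation each accepted pair corresponds to two point-source propagator solves, so the acceptance function is tuned against the known exponential falloff of the loop rather than the toy weight used here.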
For now we ignore them and concentrate on the leading diagram which is computed with a method similar to the one described in the previous section [@Blum:2016lnc]. ![Disconnected diagrams contributing to the muon anomaly. The top leftmost is the leading one, and does not vanish in the $SU(3)$ flavor limit. Strong interactions to all orders, including gluons connecting the quark loops, are not shown.[]{data-label="fig:disco diags"}](fig-45 "fig:"){width="25.00000%"} ![Disconnected diagrams contributing to the muon anomaly. The top leftmost is the leading one, and does not vanish in the $SU(3)$ flavor limit. Strong interactions to all orders, including gluons connecting the quark loops, are not shown.[]{data-label="fig:disco diags"}](fig-44 "fig:"){width="25.00000%"} ![Disconnected diagrams contributing to the muon anomaly. The top leftmost is the leading one, and does not vanish in the $SU(3)$ flavor limit. Strong interactions to all orders, including gluons connecting the quark loops, are not shown.[]{data-label="fig:disco diags"}](fig-47 "fig:"){width="25.00000%"}\ ![Disconnected diagrams contributing to the muon anomaly. The top leftmost is the leading one, and does not vanish in the $SU(3)$ flavor limit. Strong interactions to all orders, including gluons connecting the quark loops, are not shown.[]{data-label="fig:disco diags"}](fig-46 "fig:"){width="25.00000%"} ![Disconnected diagrams contributing to the muon anomaly. The top leftmost is the leading one, and does not vanish in the $SU(3)$ flavor limit. Strong interactions to all orders, including gluons connecting the quark loops, are not shown.[]{data-label="fig:disco diags"}](fig-48 "fig:"){width="25.00000%"} ![Disconnected diagrams contributing to the muon anomaly. The top leftmost is the leading one, and does not vanish in the $SU(3)$ flavor limit. 
Strong interactions to all orders, including gluons connecting the quark loops, are not shown.[]{data-label="fig:disco diags"}](fig-49 "fig:"){width="27.50000%"} To ensure loops are connected by gluons, explicit vacuum subtraction is required. However, in the leading diagram the moment at $x_{\rm op}$ implies that the left-hand loop in Fig. \[fig:disco diags\] vanishes due to parity symmetry, and the vacuum subtraction is done to reduce noise. As for the connected case, two point sources (at $y$ and $z$ in Fig. \[fig:disco diags\]) are chosen randomly, and the sink points are summed over. $M$ point source propagators are computed, and all $M^2$ combinations are used to perform the stochastic sum. This “$M^2$ trick” is crucial to bring the statistical fluctuations of the disconnected diagram under control. lattice setup ------------- The simulation parameters are given in Tab. \[tab:ensembles\]. All particles have their physical masses (but not including isospin breaking for the up and down quark masses). The discrete Dirac operator is known as the (Möbius) domain wall fermion ((M)DWF) operator. Similarly, the discrete gluon action is given by the plaquette plus rectangle Iwasaki gauge action. Three ensembles with larger lattice spacing employ the dislocation-suppressing-determinant-ratio (DSDR) to soften explicit chiral symmetry breaking effects for MDWF. The muons and photons take discrete free-field forms. The muons are DWF with infinite size in the extra fifth dimension, and the photons are non-compact in the Feynman gauge. In the latter all modes with $\vec q=0$ are dropped, a finite volume formulation of QED known as QED$_L$ [@Hayakawa:2008an]. 48I 64I 24D 32D 48D ------------------------ -------- -------- -------- -------- ------- $a^{-1}$ (GeV) 1.73 2.359 1.015 1.015 1.015 $a$ (fm) 0.114 0.084 0.2 0.2 0.2 $L$ (fm) 5.47 5.38 4.8 6.4 9.6 $L_s$ 48 64 24 24 24 $m_\pi$ (MeV) 139 135 140 140 140 $m_\mu$ (MeV) 106 106 106 106 106 \# meas (conn., disc.)
65, 99 43, 44 87, 80 64, 68 62, 0 : 2+1 flavors of MDWF gauge field ensembles generated by the RBC/UKQCD collaborations [@Blum:2014tka]. \[tab:ensembles\] test in pure QED ---------------- Before moving to the hadronic case, we tested the method in pure QED [@Blum:2015gfa]. Results for several lattice spacings and box sizes are shown in Fig. \[fig:qed test\]. The systematic uncertainties are large, but under control. Note that the finite volume errors are polynomial in $1/L$ and not exponential. The data are well fit to the form $$F_2(a,L)=F_2\left(1-\frac{b_1}{(m_\mu L)^2}+\frac{b_2}{(m_\mu L)^4}\right)(1-c_1 a^2+c_2 a^4). \label{eq:qed extrap}$$ The continuum and infinite volume limit is $F_2(0)=46.6(2) \times 10^{-10}$ for the case where the lepton mass in the loop is the same as the muon mass, which is quite consistent with the well-known perturbative value [@Laporta:1991zw], $46.5\times10^{-10}$. ![QED light-by-light scattering contribution to the muon anomaly. Lattice spacing decreases from bottom to top. Solid lines are from a fit using Eq. (\[eq:qed extrap\]).[]{data-label="fig:qed test"}](qed-lbl){width="50.00000%"} results for QCD --------------- Our physical point calculation [@Blum:2016lnc] started on the $48^3$, $a^{-1}=1.730$ GeV, Iwasaki ensemble listed in the first column of Tab. \[tab:ensembles\], for which we found $a_\mu^{\rm cHLbL} = (11.60\pm0.96)\times 10^{-10}$, $a_\mu^{\rm dHLbL} = (-6.25\pm0.80)\times 10^{-10}$, and $a_\mu^{\rm HLbL} = (5.35\pm1.35)\times 10^{-10}$ for the connected, leading disconnected, and total HLbL contributions to the muon anomaly, respectively. The errors quoted are purely statistical. We have since improved the statistics on the leading disconnected diagram with measurements on 34 additional configurations, and the contribution becomes $-6.15 (61)\times 10^{-10}$. Since then we have computed on several additional ensembles in order to take the continuum and infinite volume limits (see Tab. \[tab:ensembles\]).
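The combined continuum and infinite-volume fit of Eq. (\[eq:qed extrap\]) with $b_2=c_2=0$ becomes an ordinary linear least-squares problem once the product is expanded, since the model is then linear in $(F_2,\,F_2 b_1,\,F_2 c_1,\,F_2 b_1 c_1)$. A minimal sketch on synthetic, noise-free data; the parameter values and sample points are illustrative, not the paper's:

```python
def design_row(u, a2):
    # F2*(1 - b1*u)*(1 - c1*a2)
    #   = F2 - (F2*b1)*u - (F2*c1)*a2 + (F2*b1*c1)*u*a2
    return [1.0, -u, -a2, u * a2]

def solve_lstsq(rows, ys):
    """Solve min ||A p - y||^2 via the normal equations A^T A p = A^T y,
    using Gaussian elimination with partial pivoting."""
    n = len(rows[0])
    M = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    v = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(M[k][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for k in range(col + 1, n):
            f = M[k][col] / M[col][col]
            for c in range(col, n):
                M[k][c] -= f * M[col][c]
            v[k] -= f * v[col]
    p = [0.0] * n
    for k in range(n - 1, -1, -1):
        p[k] = (v[k] - sum(M[k][c] * p[c] for c in range(k + 1, n))) / M[k][k]
    return p

# Synthetic noise-free data on grids of u = 1/(m_mu L)^2 and a2 = a^2
F2_true, b1_true, c1_true = 46.6, 0.8, 0.3
points = [(u, a2) for u in (0.05, 0.12, 0.30) for a2 in (0.05, 0.18, 0.33)]
ys = [F2_true * (1 - b1_true * u) * (1 - c1_true * a2) for u, a2 in points]
p = solve_lstsq([design_row(u, a2) for u, a2 in points], ys)
F2_fit, b1_fit, c1_fit = p[0], p[1] / p[0], p[2] / p[0]
```

With noise-free inputs the true parameters are recovered to machine precision; with real data one would weight each row by its statistical error.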
We next computed on a $64^3$, $a^{-1}=2.359$ GeV, companion to the original $48^3$ Iwasaki ensemble with roughly the same volume. This allows a continuum limit at finite volume, $a_\mu^{\rm cHLbL} =16.94(3.78)$, $a_\mu^{\rm dHLbL}=-12.29(3.35)$, and $a_\mu^{\rm HLbL}=4.66(4.39)$, all in units of $10^{-10}$. Notice there is a large cancellation between the connected and disconnected diagrams that persists for $a\to0$. Even though the individual contributions are relatively well resolved, the total is not. The cancellation is expected since hadronic light-by-light scattering in this case is dominated by the $\pi^0$, which contributes to both diagrams, but with opposite sign [@Bijnens:2016hgx; @Jin:2016rmu; @Gerardin:2017ryf]. Notice also that the $a^2$ corrections are individually large but also tend to cancel in the sum. Next, the infinite volume limit must be taken. To do this we add another set of ensembles with a slightly different gauge action (Iwasaki+DSDR) and larger lattice spacing so that large physical volumes can be realized (roughly 4.8, 6.4, and 9.6 fm boxes). See Tab. \[tab:ensembles\] for details. The results are displayed in Fig. \[fig:qedl extrap\] along with curves obtained from Eq. (\[eq:qed extrap\]) with $b_2=c_2=0$. We first extrapolate the two Iwasaki ensembles to $a\to 0$, as before, then combine with the I-DSDR ensembles to take the infinite volume limit. We find for the connected, disconnected, and total contributions, $a_\mu^{\rm cHLbL} = (27.16\pm6.25)\times 10^{-10}$, $a_\mu^{\rm dHLbL} = (-20.20\pm5.65)\times 10^{-10}$, $a_\mu^{\rm HLbL} = (6.96\pm7.40)\times 10^{-10}$, respectively. Similar to the non-zero lattice spacing errors, there are large finite volume corrections for the individual contributions, which again largely cancel in the sum. ![Infinite volume extrapolation. Connected (left), disconnected (middle) and total (right).[]{data-label="fig:qedl extrap"}](con-48D.pdf "fig:"){width="30.00000%"} ![Infinite volume extrapolation.
Connected (left), disconnected (middle) and total (right).[]{data-label="fig:qedl extrap"}](discon.pdf "fig:"){width="30.00000%"} ![Infinite volume extrapolation. Connected (left), disconnected (middle) and total (right).[]{data-label="fig:qedl extrap"}](total-48D "fig:"){width="30.00000%"} While the large relative error on the total is a bit unsatisfactory, we emphasize that our result represents an important estimate on the hadronic light-by-light scattering contribution to the muon anomaly, with all systematic errors controlled (below we show the omitted non-leading disconnected diagrams are likely negligible). It appears that this contribution cannot “rescue” the Standard Model (or the E821 experiment). In fact, we can do even a bit better with the data on hand. As seen in Fig. \[fig:cumulative\], which shows the cumulative sum of all contributions up to a given separation of the two sampled currents in the hadronic loop, the total connected contribution saturates at a distance of about 1 fm for all ensembles. This suggests the region $r\gtrsim 1$ fm adds mostly noise and little signal, and the situation gets worse in the limits. A more accurate estimate can be obtained by taking the continuum limit for the sum up to $r=1$ fm, and above that by taking the contribution from the relatively precise $48^3$ ensemble. We include a systematic error on this long distance part since it is not extrapolated to $a=0$. The infinite volume limit is taken as before. This procedure yields $a_\mu^{\rm cHLbL}=27.61(3.12)(0.32)\times 10^{-10}$, with a statistical error that is roughly $2\times$ smaller and a small systematic error. Unfortunately, a similar procedure for the disconnected diagram is not reliable, as can be seen in the right panel of Fig. \[fig:cumulative\]. The curves do not saturate at 1 fm, but instead tend to increase significantly up to 2 fm, or more. Once the cut moves beyond 1 fm it is no longer effective.
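The hybrid estimate just described — continuum-extrapolated contributions below the $r=1$ fm cut, the single most precise ensemble above it — amounts to simple bookkeeping on the binned integrand. The numbers below are illustrative stand-ins, not lattice data:

```python
def cumulative(values):
    # Running (cumulative) sum of per-bin contributions, as plotted in
    # Fig. [fig:cumulative].
    out, s = [], 0.0
    for v in values:
        s += v
        out.append(s)
    return out

def hybrid_total(r, f_continuum, f_single, r_cut=1.0):
    """Sum the continuum-extrapolated contribution for r <= r_cut and the
    single-ensemble contribution for r > r_cut."""
    short = sum(fc for ri, fc in zip(r, f_continuum) if ri <= r_cut)
    long_ = sum(fs for ri, fs in zip(r, f_single) if ri > r_cut)
    return short + long_

# Illustrative r bins (fm) and per-bin contributions: the continuum curve
# is noisy at long distance, the single fine ensemble is precise everywhere.
r = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4]
f_cont = [4.0, 9.0, 8.0, 4.0, 2.0, 0.5, 0.1]
f_48 = [3.8, 8.7, 7.8, 3.9, 2.0, 0.6, 0.2]
total = hybrid_total(r, f_cont, f_48)   # 27.0 (short) + 0.8 (long) = 27.8
```

The systematic error on the long-distance piece quoted in the text accounts for the fact that this part is not extrapolated to $a=0$.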
The different behavior between the two stems from the different sampling strategies used for each [@Blum:2015gfa]. Using the improved connected result, we find our final result for QED$_L$, $$a_\mu^{\rm HLbL} = (7.41\pm6.33)\times 10^{-10},$$ where the error is mostly statistical and includes a small systematic, added in quadrature, for the hybrid continuum extrapolation of the connected diagram. ![Cumulative contributions to the muon anomaly, connected (left) and disconnected (right). $r$ is the distance between the two sampled currents in the hadronic loop (the other two currents are summed exactly). $24^3$ IDSDR (squares), $32^3$ IDSDR (crosses), $48^3$ Iwasaki (diamonds), and $64^3$ Iwasaki (plusses).[]{data-label="fig:cumulative"}](cons-plot-48D.pdf "fig:"){width="40.00000%"} ![Cumulative contributions to the muon anomaly, connected (left) and disconnected (right). $r$ is the distance between the two sampled currents in the hadronic loop (the other two currents are summed exactly). $24^3$ IDSDR (squares), $32^3$ IDSDR (crosses), $48^3$ Iwasaki (diamonds), and $64^3$ Iwasaki (plusses).[]{data-label="fig:cumulative"}](discon-result.pdf "fig:"){width="39.00000%"} QED$_\infty$ Method =================== A method to compute the two-loop QED integrals directly in infinite volume and the continuum was first proposed by the Mainz group [@Green:2015sra; @Asmussen:2016lse]. This is similar to what is done in the lattice calculation of the hadronic vacuum polarization contribution to the muon anomaly [@Blum:2002ii]. The advantage is that the leading finite volume error is exponentially suppressed instead of $O(1/L^2)$. Our group subsequently developed a similar method, adding extra terms to reduce the residual scaling errors induced by the hadronic part [@Blum:2017cer]. The key idea of these methods is to pre-compute the QED part shown in Fig.
\[fig:qedinf\], as a function of the coordinates $x,y,z$, which lie on the QCD lattice used for the hadronic part. However, this function is computed using continuum photon and muon propagators evaluated in an infinite space-time volume. This grid, computed in the continuum, is smoothly interpolated for each set of points used to compute the hadronic part. ![The QED part of the light-by-light scattering amplitude, computed in infinite volume, in the continuum limit. An analytic integral expression as a function of the coordinates $x,y,z$ is pre-computed and tabulated for later use [@Blum:2017cer] with the hadronic amplitude computed on a discrete, finite lattice.[]{data-label="fig:qedinf"}](qedinf.pdf){width="40.00000%"} A test of the method in QED with the loop living on a discrete lattice reproduces the well known perturbative results for loop masses the same as, and $2\times$, the muon mass, respectively [@Blum:2017cer]. results ------- Before discussing preliminary results for QED$_\infty$, we mention we have found generally that the statistical noise associated with the photons grows with the volume. We therefore expect the QED$_\infty$ method to be noisier than QED$_L$, and this is, in fact, the case. In order to combat the problem we introduce another hybrid approach for the long distance contributions. That is, we compute the dominant $\pi^0$ contribution separately and combine with the full lattice value below some cut. This long-distance $\pi^0$ part is calculated from a model (LMD) [@Knecht:1999gb] for now, but eventually will come from a completely separate, and model independent, lattice calculation. Since the model value is in accord with model independent dispersive results, the results shown below are not expected to change when all lattice computations are used. Figure \[fig:qedinf results\] shows both connected and disconnected contributions as the cut between lattice and model contributions is varied. 
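The tabulate-then-interpolate bookkeeping can be sketched in two dimensions with bilinear interpolation on a uniform grid; the grid, the reduced variables, and the weight function here are illustrative placeholders for the actual pre-computed QED kernel:

```python
def make_table(f, x0, dx, nx, y0, dy, ny):
    # Tabulate f once on a uniform grid (stand-in for the pre-computed
    # continuum, infinite-volume QED weight).
    return [[f(x0 + i * dx, y0 + j * dy) for j in range(ny)]
            for i in range(nx)]

def bilerp(table, x0, dx, y0, dy, x, y):
    """Bilinear interpolation of a uniformly tabulated function."""
    i = min(int((x - x0) / dx), len(table) - 2)
    j = min(int((y - y0) / dy), len(table[0]) - 2)
    tx = (x - (x0 + i * dx)) / dx
    ty = (y - (y0 + j * dy)) / dy
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

# A function linear in x and y is reproduced exactly by bilinear
# interpolation, which makes a convenient self-check.
f = lambda x, y: 2.0 * x + 3.0 * y + 1.0
tab = make_table(f, 0.0, 0.5, 5, 0.0, 0.5, 5)
val = bilerp(tab, 0.0, 0.5, 0.0, 0.5, 0.3, 0.7)   # f(0.3, 0.7) = 3.7
```

The actual kernel depends on three coordinate separations, so the production code interpolates in more variables, but the principle — pay the expensive continuum integrals once, then interpolate cheaply for every lattice point set — is the same.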
The QCD box in this example is large, roughly 6.4 fm on a side, with a spacing of 0.2 fm ($a^{-1}=1$ GeV). At large $R_{\rm max}$ in the figure, the total is lattice dominated with large uncertainty, and as $R_{\rm max}\to0$, the contribution comes entirely from the model. Since the model is only correct at long distance where the $\pi^0$ dominates, at some point the combined result may become constant, yielding an accurate and more precise result than the lattice value alone. One sees that over the range 1–3 fm, lattice (green points) and model results change substantially, but the total remains roughly constant. The respective total values are also roughly compatible with QED$_L$ in the infinite volume and continuum limits. This suggests that residual finite volume and discretization errors are much smaller for QED$_\infty$ (compare to the crosses in Fig. \[fig:cumulative\]). This is as expected for the finite volume errors; the smaller discretization errors turn out to be due to the extra terms added to the QED weighting function (two-loop integral) which vanish in the $a\to0$ limit [@Blum:2017cer]. A similar reduction can be easily seen in the case of pure QED [@Blum:2017cer]. Finally, we have investigated the size of the next-to-leading disconnected diagram shown in the middle of the top row in Fig. \[fig:disco diags\]. As expected and shown in Fig. \[fig:3+1 disco\], this diagram is severely suppressed compared to the leading contributions. Summary and Outlook =================== We have presented preliminary results for the hadronic light-by-light scattering contribution to the muon $g-2$ from Lattice QCD+QED calculations using physical masses, large boxes, and improved measurement algorithms. Both finite volume and infinite volume QED methods are being investigated.
For the former, large discretization and finite volume corrections are apparent but under control, and the value in the continuum and infinite volume limits is compatible with previous model and dispersive treatments, albeit with a large statistical error. Despite the large error, which results after a large cancellation between connected and disconnected diagrams, our systematic calculation suggests that light-by-light scattering cannot be behind the approximately 3.7 standard deviation discrepancy between the Standard Model and the BNL experiment E821. Future calculations will reduce the error significantly. We have also presented calculations using the QED$_\infty$ method. When combined with a separate calculation of the dominant $\pi^0$ contribution, QED$_\infty$ is statistically effective. It also has much smaller finite volume and discretization errors compared to QED$_L$ for the same QCD box, even for large lattice spacing. These calculations strengthen the exciting test of the Standard Model promised by the new experiments ongoing at Fermilab and planned at J-PARC. Acknowledgments {#acknowledgments .unnumbered} =============== This work was partially supported by the US Department of Energy. Computations were carried out on the Mira supercomputer at the ALCF at Argonne National Lab. [99]{} T. Blum, N. Christ, M. Hayakawa, T. Izubuchi, L. Jin, C. Jung and C. Lehner, Phys. Rev. Lett.  [**118**]{}, no. 2, 022005 (2017) doi:10.1103/PhysRevLett.118.022005 T. Blum, N. Christ, M. Hayakawa, T. Izubuchi, L. Jin and C. Lehner, Phys. Rev. D [**93**]{}, no. 1, 014503 (2016) doi:10.1103/PhysRevD.93.014503 T. Blum, S. Chowdhury, M. Hayakawa and T. Izubuchi, Phys. Rev. Lett.  [**114**]{}, no. 1, 012001 (2015) doi:10.1103/PhysRevLett.114.012001 \[arXiv:1407.2923 \[hep-lat\]\]. M. Hayakawa and S. Uno, Prog. Theor. Phys.  [**120**]{}, 413 (2008) doi:10.1143/PTP.120.413 T. Blum [*et al.*]{} \[RBC and UKQCD Collaborations\], Phys. Rev. D [**93**]{}, no.
7, 074505 (2016) doi:10.1103/PhysRevD.93.074505 S. Laporta and E. Remiddi, Phys. Lett. B [**265**]{}, 182 (1991). doi:10.1016/0370-2693(91)90036-P J. Bijnens and J. Relefors, JHEP [**1609**]{}, 113 (2016) doi:10.1007/JHEP09(2016)113 L. Jin, T. Blum, N. Christ, M. Hayakawa, T. Izubuchi, C. Jung and C. Lehner, PoS LATTICE [**2016**]{}, 181 (2016) doi:10.22323/1.256.0181 A. Gérardin, J. Green, O. Gryniuk, G. von Hippel, H. B. Meyer, V. Pascalutsa and H. Wittig, Phys. Rev. D [**98**]{}, no. 7, 074501 (2018) doi:10.1103/PhysRevD.98.074501 J. Green, O. Gryniuk, G. von Hippel, H. B. Meyer and V. Pascalutsa, Phys. Rev. Lett.  [**115**]{}, no. 22, 222003 (2015) doi:10.1103/PhysRevLett.115.222003 N. Asmussen, J. Green, H. B. Meyer and A. Nyffeler, PoS LATTICE [**2016**]{}, 164 (2016) doi:10.22323/1.256.0164 T. Blum, Phys. Rev. Lett.  [**91**]{}, 052001 (2003) doi:10.1103/PhysRevLett.91.052001 T. Blum, N. Christ, M. Hayakawa, T. Izubuchi, L. Jin, C. Jung and C. Lehner, Phys. Rev. D [**96**]{}, no. 3, 034515 (2017) doi:10.1103/PhysRevD.96.034515 M. Knecht, S. Peris, M. Perrottet and E. de Rafael, Phys. Rev. Lett.  [**83**]{}, 5230 (1999) doi:10.1103/PhysRevLett.83.5230
--- abstract: 'We have discovered strong gravitational lensing by the galaxy [ESO325–G004]{}, in images obtained with the [*Advanced Camera for Surveys*]{} on the [*Hubble Space Telescope*]{}. The lens galaxy is a boxy group-dominant elliptical at $z=0.0345$, making this the closest known galaxy-scale lensing system. The lensed object is very blue ($B-I_c\approx1.1$), and forms two prominent arcs and a less extended third image. The Einstein radius is $R_{\rm Ein}=1.9$kpc ($\sim3$arcsec on the sky, cf. 12arcsec effective radius of the lens galaxy). Assuming a high redshift for the source, the mass within $R_{\rm Ein}$ is $1.4\times10^{11}\,M_\odot$, and the mass-to-light ratio is $1.8 (M/L)_{\odot,I}$. The equivalent velocity dispersion is $\sigma_{\rm lens}=310$ km s$^{-1}$, in excellent agreement with the measured stellar dispersion $\sigma_v=320$ km s$^{-1}$. Modeling the lensing potential with a singular isothermal ellipse (SIE), we find close agreement with the light distribution. The best-fit SIE model reproduces the ellipticity of the lens galaxy to $\sim$10%, and its position angle within 1$^\circ$. The model predicts the broad features of the arc geometry as observed; the unlensed magnitude of the source is estimated at $I_c\sim23.75$. We suggest that one in $\sim$200 similarly massive galaxies within $z<0.1$ will exhibit such a luminous multiply-imaged source.' author: - 'Russell J. Smith' - 'John P. Blakeslee' - 'John R. Lucey' - John Tonry title: 'Discovery of Strong Lensing by an Elliptical Galaxy at z=0.0345' --- Introduction ============ Gravitational lensing of background sources can yield valuable information on the mass profiles of galaxies, groups and clusters. In contrast to other methods, lensing constraints are independent of assumptions about the dynamical or hydrodynamic state of tracer material (e.g. stars, galaxies, X-ray gas).
On galaxy scales, lensing helps to lift the degeneracy between the potential and the orbital anisotropy that plagues dynamical mass estimates. For distant galaxies, lensing constrains the mass enclosed at large radii, beyond the reach of stellar dynamical studies (e.g. Treu & Koopmans, 2004). The optimal configuration for lensing is with the deflecting potential at half the distance to the source. Hence lensing is usually observed for systems at intermediate redshift, $z\sim0.3$, where the number of potential background sources is very large. At low redshifts ($z\la0.1$), strong lensing systems are rare. On cluster scales, a number of lensed arcs have been discussed (e.g. Blakeslee et al. 2001). The nearest known galaxy-scale strong-lensing system, prior to this [*Letter*]{}, was Q2237+0305 (Huchra et al. 1985), a four-image QSO lensed by a $z=0.039$ spiral. Among galaxy lenses with extended arcs, the lowest known lens redshift is $z=0.205$, for SDSS1402+6321 (Bolton et al. 2005). This system was discovered on the basis of anomalous emission lines in the lens galaxy spectrum. A more direct approach to finding extended arcs is, of course, through imaging observations. Blakeslee et al. (2004) have discussed the excellent prospects for serendipitous discovery of strong lens galaxies with the [*Advanced Camera for Surveys*]{} (ACS) on the [*Hubble Space Telescope*]{}. In this [*Letter*]{}, we report ACS discovery of a new galaxy-scale lens, with multiple extended images, at $z=0.0345$. To our knowledge, the elliptical [ESO325–G004]{} is the nearest known strong lensing galaxy, and provides for the first time a low-redshift analogue of distant galaxy-scale arc systems. We adopt the WMAP cosmological parameters, i.e. $H_0=71$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m}$=0.27, $\Omega_{\Lambda}$=0.73 (Bennett et al. 2003).
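The adopted cosmology fixes the arcsec-to-kpc conversion used throughout. A minimal numerical check at the lens redshift $z=0.0345$, using simple midpoint quadrature for the comoving distance (ample accuracy at these redshifts):

```python
import math

C_KMS = 299792.458   # speed of light, km/s

def angular_scale_kpc_per_arcsec(z, H0=71.0, Om=0.27, OL=0.73, n=10000):
    """Physical scale (kpc per arcsec) at redshift z in flat LambdaCDM."""
    E = lambda zz: math.sqrt(Om * (1.0 + zz) ** 3 + OL)
    dz = z / n
    # comoving distance in Mpc, by midpoint rule
    Dc = (C_KMS / H0) * sum(dz / E((i + 0.5) * dz) for i in range(n))
    Da = Dc / (1.0 + z)                     # angular-diameter distance, Mpc
    arcsec = math.pi / (180.0 * 3600.0)     # one arcsec in radians
    return Da * 1000.0 * arcsec             # kpc per arcsec

scale = angular_scale_kpc_per_arcsec(0.0345)   # ~0.678 kpc/arcsec
```

This reproduces the 0.678 kpc arcsec$^{-1}$ scale used for the lens galaxy below.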
The lens galaxy ESO325–G004 {#sec:envir} =========================== The massive boxy elliptical galaxy [ESO325–G004]{} (13$^h$43$^m$33$\fs$20, $-$38$^\circ$10$^m$33$\farcs$6) lies at the center of the poor cluster Abell S0740 (Abell, Corwin & Olowin 1989), and is probably the dominant galaxy of that group. The galaxy has radial velocity $cz=10420$ km s$^{-1}$ in the Cosmic Microwave Background frame (Smith et al. 2000), corresponding to an angular scale of 0.678 kpc arcsec$^{-1}$. The galactic extinction is $E(B-V)=0.06$ (Schlegel, Finkbeiner & Davis 1998). The stellar velocity dispersion of [ESO325–G004]{} was measured by Smith et al. (2000), within an aperture 3.8$\times$3.0arcsec$^2$; the average of their two measurements is $\sigma_v=320\pm7$ km s$^{-1}$. Smith et al. (2001) report the effective radius as $R_{\rm eff}=12.5$arcsec (8.5kpc) from R-band imaging. Understanding the environment of the lens galaxy can be critical for correct interpretation of lensing constraints. Figure \[fig:envir\] shows the projected galaxy density around [ESO325–G004]{}, and the distribution of published redshifts. Although not complete, the redshift data suggest that S0740 is distinct from the neighboring cluster Abell 3570 ($\sim$40arcmin away, and with mean redshift $\sim$1000 km s$^{-1}$ larger). ![Environment of the lensing galaxy. The main figure shows contours of projected galaxy density from the 2MASS extended source catalog. Small points mark galaxies with measured redshifts in the NASA Extragalactic Database, within 1.5$^\circ$ of [ESO325–G004]{}. Galaxies with redshifts within 500 km s$^{-1}$ of [ESO325–G004]{} are shaded. The histogram above shows the same redshifts, with the same redshift interval highlighted. []{data-label="fig:envir"}](f1.eps){width="85mm"} ESO325-G004 was observed with ACS in January 2005, as part of our program measuring surface-brightness fluctuations in the Shapley Foreground region.
The total integration times were 18882 seconds in F814W (22 individual exposures) and 1101 seconds (three exposures) in F475W. The frames were combined using [multidrizzle]{} in [stsdas]{} to yield the stacks used in this study (Figure \[fig:image\]). Inspection of the images revealed two long, narrow arcs at $\sim$3arcsec separation from the galaxy center (Figure \[fig:lensmod\]). A third object, less obviously distorted, is present at similar radius. The arcs are prominent even in the much shallower F475W exposure (Figure \[fig:lensmod\]a), due to favorable color contrast between the blue arcs and the red light of the foreground elliptical. ![The deep (18882s) F814W image of [ESO325–G004]{} and the surrounding field. The white square indicates the central 8$\times$8arcsec$^2$ region covered by Figure \[fig:lensmod\].[]{data-label="fig:image"}](f2b.eps){width="85mm"} ![Observations and models of lensing in [ESO325–G004]{}. Panel (a) shows the 1101s F475W image, with color map optimized to show the arcs. In Panel (b), the 18882s F814W image is shown after subtracting a smooth boxy-elliptical profile model. In (c), we show the color-subtracted image, with results from a lensing model for a singular isothermal ellipse (see text).[]{data-label="fig:lensmod"}](f3b.eps){width="77mm"} For a preliminary photometric analysis of the lens galaxy, we applied [iraf]{} tasks based on the ellipse-fitting algorithm of Jedrzejewski (1987). The surface photometry model includes the $c_4$ Fourier coefficient, which is necessary to describe the strong boxiness of the isophotes. All other high-order Fourier terms were forced to zero. The luminosity profile follows a $R^{1/4}$ law out to at least $\sim$1arcmin ($\sim5R_{\rm eff}$). Transformed to the Johnson/Cousins system (following Sirianni et al. 2005), and correcting for extinction and $k$-dimming, the color is $B-I_c\approx2.5$, typical for an old, metal rich stellar population. 
At the radius of the arcs, the isophotal position angle is 68$^\circ$, and the ellipticity $e=1-\frac{b}{a}\approx0.25$. Although the arcs are seen clearly in the original images (Figure \[fig:lensmod\]a), their visibility is enhanced by subtracting a model for the lens galaxy light. We have explored a number of approaches to this subtraction. The simplest method uses the ellipse+$c_4$ fits described above, subtracting independent models for the F814W and F475W images. The resulting F814W residual image is shown in Figure \[fig:lensmod\]b. This method leaves a strong spoke pattern; there is also a risk of introducing tangential residuals unrelated to lensing. A more robust approach is to use the color contrast between the arcs and the galaxy, subtracting a scaled version of the F814W image from the F475W image. The color residual image (Figure \[fig:lensmod\]c) is limited by noise from the shallow F475W exposure, but is systematically very clean; in particular the spoke pattern and many of the point sources are removed. In Section \[sec:lensmod\], we will show that the observed geometry of arcs ‘A’, ‘B’ and ‘C’ is indeed consistent with a single background source lensed by an elliptical potential. We have determined approximate magnitudes and colors for the arcs, based on the residual images, as summarized in Table \[tab:arcprops\]. In particular, note that the arcs have consistent colors, with $B-I_c=1.10\pm0.05$ (extinction corrected). Such a blue color can be reproduced with star-forming spectral templates at $z\la0.45$, with \[O [ii]{} $\lambda$3727Å\] contributing to the F475W flux. The arcs have irregular surface brightness distributions, with numerous ‘knots’, suggestive of star-forming regions. In ‘B’ we can identify at least seven knots, while ‘A’ shows a symmetric structure as expected for a two-image arc, and ‘C’ appears double. 
[cccccc]{} A & 2.8 & 3.4 & 22.06 & 22.93 & 1.07\ B & 3.4 & 1.5 & 22.92 & 23.88 & 1.21\ C & 2.6 & 0.9 & 22.61 & 23.45 & 1.03\ \[tab:arcprops\] Mass-to-light ratio from lensing {#sec:lensmod} ================================ We have used simple mass models to test the assumption that all three images are generated by gravitational lensing, and to derive initial estimates for the mass enclosed within the arcs. To test parametrized forms for the lensing potential, we applied the ‘curve-fitting’ method as implemented within the [gravlens/lensmodel]{} software (Keeton 2001). This method takes as input a set of points on each observed arc, and optimizes a lensing potential which maps points from each curve to counter-images on the other curves. The input curves were obtained by manually tracing arcs ‘A’, ‘B’ and ‘C’. To determine a well-defined Einstein radius for an elliptical potential, Rusin, Kochanek & Keeton (2003) have suggested a scheme based on fitting a singular isothermal sphere (SIS) plus external shear. The model has three parameters: the SIS Einstein radius, and the magnitude and position angle of the shear. This parameter space was explored using a combination of grid-search and simplex optimization methods. The best SIS+shear fit yields $R_{\rm Ein}\approx2.85$arcsec (1.93kpc or 0.23$R_{\rm eff}$). From this we can determine the mass interior to $R_{\rm Ein}$ to be $M_{\rm Ein}\approx1.4(D_{\rm s}/D_{\rm ls})\times10^{11}\,M_\odot$. Here $D_{\rm s}$ is the angular-diameter distance to the source, and $D_{\rm ls}$ the angular-diameter distance between lens and source. The enclosed mass measurement thus depends on the source redshift, which is as yet unknown (but see below). For a given lens, the source is most likely to be at high redshift, where the surface density of potential sources is very large. In this section, we consider the limiting case of a distant source, such that $D_{\rm s}/D_{\rm ls}\approx1$.
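The quoted enclosed mass can be cross-checked directly from the fitted Einstein radius. A minimal sketch in the distant-source limit $D_{\rm s}/D_{\rm ls}\approx1$; the lens angular-diameter distance $D_{\rm l}\approx139.8$ Mpc is an assumed input from the adopted cosmology at $z=0.0345$, not re-derived here:

```python
import math

C = 2.99792458e8      # speed of light, m/s
G = 6.674e-11         # Newton's constant, m^3 kg^-1 s^-2
MPC = 3.0857e22       # metres per Mpc
MSUN = 1.989e30       # solar mass, kg
ARCSEC = math.pi / (180.0 * 3600.0)   # one arcsec in radians

theta_E = 2.85 * ARCSEC   # fitted SIS Einstein radius, radians
D_l = 139.8 * MPC         # lens angular-diameter distance (assumed)

# Physical Einstein radius: R_Ein = theta_E * D_l  (~1.93 kpc)
R_Ein_kpc = theta_E * D_l / MPC * 1e3

# For a circularly symmetric lens with D_s/D_ls ~ 1:
#   M_Ein = (c^2 / 4G) * theta_E^2 * D_l   (~1.4e11 M_sun)
M_Ein = (C ** 2 / (4.0 * G)) * theta_E ** 2 * D_l / MSUN
```

Both numbers match the values quoted in the text, confirming the internal consistency of the fit.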
The observed magnitudes interior to $R_{\rm Ein}$ are $F814W=13.5$ and $F475W=15.6$. Transforming these to Johnson/Cousins bandpasses following Sirianni et al. (2005), we have $I_c=12.7$ and $B=15.1$ (corrected for extinction and $k$-dimming). Converting to luminosities we have $L_{\rm Ein, I_c}=8.0\times10^{10}\,L_{\odot,I_c}$ and $L_{\rm Ein, B}=3.0\times10^{10}\,L_{\odot,B}$, and the mass-to-light ratios at the Einstein radius are $\Upsilon_{\rm Ein, I_c}=1.8\,(M/L)_{\odot,I_c}$ and $\Upsilon_{\rm Ein, B}=4.7\,(M/L)_{\odot,B}$. Such values are typical for stellar mass-to-light ratios in old populations. Thus the mass budget within the Einstein radius appears to be dominated by the observed stellar mass[^1]. The SIS mass model yields $\sigma_{\rm SIS}=310$ km s$^{-1}$, in excellent agreement with the measured stellar velocity dispersion. Considering the morphology of the galaxy, and the expectation that the mass within $R_{\rm Ein}$ will be dominated by stars, a more realistic model for the lensing potential is the Singular Isothermal Ellipse (SIE; e.g. Kormann, Schneider & Bartelmann 1994). In fitting the SIE model, we force the center of the potential to align with the observed galaxy, but allow the mass normalization, position angle and ellipticity to vary. Formally, the best-fitting model has ellipticity $e=0.28$ with position angle 68$^\circ$, very similar to the equivalent parameters for the galaxy light. Figure \[fig:lensmod\]c shows the results of this best-fitting SIE model. The reconstructed source position is $\sim$0.3arcsec north of the lens center (small circle), and likely straddles the inner ‘astroid’ caustic of the lens model (inner cusped curve). The figure shows the predicted image locations for such a source. Qualitatively, the SIE potential reproduces the broad features of the observed geometry, and confirms image ‘C’ as a counter-image of ‘A’ and ‘B’. The long arc ‘A’ comprises two images crossing the critical curve (outer ellipse).
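The equivalent dispersion follows from inverting the SIS Einstein-angle relation $\theta_{\rm Ein}=4\pi(\sigma/c)^2\,D_{\rm ls}/D_{\rm s}$. A quick check in the distant-source limit; the small difference from the quoted value reflects rounding of $\theta_{\rm Ein}$ and the $D_{\rm s}/D_{\rm ls}$ factor:

```python
import math

C_KMS = 299792.458                             # speed of light, km/s
theta_E = 2.85 * math.pi / (180.0 * 3600.0)    # Einstein radius, radians

# SIS relation with D_ls/D_s ~ 1: sigma = c * sqrt(theta_E / (4*pi))
sigma_sis = C_KMS * math.sqrt(theta_E / (4.0 * math.pi))   # ~314 km/s
```

This sits between the quoted $\sigma_{\rm SIS}$ and the measured stellar dispersion of 320 km s$^{-1}$, as expected for a nearly isothermal, stellar-dominated inner mass profile.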
While broadly successful, it is also clear that the model does not match the data perfectly at a more detailed level. In particular, the SIE does not match the relative lengths of the three observed arcs. For ‘C’, moreover, the model predicts a position displaced towards ‘B’, relative to the observed location. To improve the reconstruction, more sophisticated models could take into account the intrinsic morphology of the source, the boxiness of the lens galaxy, and a possible external shear term.

Discussion
==========

It is interesting to consider whether the lensing configuration observed in [ESO325–G004]{} is intrinsically unusual, or whether many other nearby galaxies might reveal such bright arcs when observed in sufficient detail. The cross-section for four-image lensing under the SIE mass-model is $\sim$1arcsec$^2$, i.e. roughly the area of the ‘astroid’ caustic on the source plane. The estimated magnification factors for the separated arcs ‘B’ and ‘C’ are $\sim$4, suggesting the unlensed magnitude of the source is $I_c\sim23.75$. To this limit, the integrated I-band galaxy counts are $\sim10^5$deg$^{-2}$ (Postman et al. 1998), so the probability that a given galaxy has such a bright background source aligned for quadruple imaging is $\sim$0.6%. To estimate the total number of massive galaxies available to serve as low-redshift lenses, we use the 2MASS J-band luminosity function of Cole et al. (2001). [ESO325–G004]{} has $M_J-5\log h=-24.16$, and thus a luminosity $\sim7L_J^\star$. Integrating the luminosity function, parametrized as a Schechter function, the space density of galaxies above $7L_J^\star$ is $2.8\times10^{-5}\,{\rm Mpc}^{-3}$ (for $h=0.71$). This density implies $\sim10^4$ galaxies, at least as luminous as [ESO325–G004]{}, within $z=0.1$. Combining these estimates, we find the total expected number of ‘similar’ low-redshift lens systems to be $\sim$60, with only $\sim$3 at the distance of [ESO325–G004]{} or closer.
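The chain of estimates in this paragraph (lensing probability per galaxy, number of sufficiently luminous galaxies, expected number of lenses) is simple enough to reproduce directly. The sketch below takes the quoted source counts and space density as inputs and uses a low-redshift Euclidean volume, so the round numbers differ slightly from the quoted values.

```python
import math

H0, c = 71.0, 2.998e5          # km/s/Mpc and km/s (h = 0.71)

# Probability that a given galaxy has a background source inside the
# ~1 arcsec^2 quadruple-imaging caustic
counts = 1e5                   # galaxies per deg^2 down to I_c ~ 23.75
p_lens = counts / 3600**2      # caustic area of 1 arcsec^2, in deg^2

n_gal = 2.8e-5                 # Mpc^-3, quoted space density above 7 L*_J

def n_within(z):
    """Number of such galaxies inside redshift z (Euclidean, low z)."""
    D = c * z / H0             # comoving distance in Mpc
    return n_gal * (4.0 / 3.0) * math.pi * D**3

print(p_lens)                        # ~0.8%, of order the quoted ~0.6%
print(n_within(0.10))                # ~9000 galaxies, i.e. ~10^4
print(n_within(0.10) * p_lens)       # ~60-70 expected lens systems
print(n_within(0.0345) * p_lens)     # ~3 at the lens distance or closer
```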
As noted above, the lens model normalization depends on the source redshift, through the factor $D_{\rm s}/D_{\rm ls}$. We have re-examined the raw spectra obtained by Smith et al. (2000) at the Anglo-Australian Telescope, which cover the range $4925-5740$Å, with the slit intercepting arc ‘C’. No anomalous emission lines are seen at the expected position of the arc, which at face value suggests $z_{\rm src}>0.54$ or $z_{\rm src}<0.32$ (absence of \[O [ii]{} $\lambda$3727Å\]), but also $z_{\rm src}>0.15$ (absence of \[O [iii]{} $\lambda$5007Å\] and H$\beta$). Over the allowed redshift interval $0.15-0.32$, the lensing mass correction factor $D_{\rm s}/D_{\rm ls}$ ranges from 1.30 to 1.13. It is however quite possible that the exposures were too short (600sec), and the slit too wide (3arcsec), to detect emission from the arc against the high background of the lens galaxy. Finally, we note that our images show four extremely faint tangential arc candidates at greater separation from [ESO325–G004]{} (radius 9.4arcsec, at position angles $-95^\circ$, $90^\circ$, $-35^\circ$, $150^\circ$). The surface brightness of these features is very low ($\ga24.5$magarcsec$^{-2}$ in I). If confirmed, the outer arcs could provide additional constraints on the mass distribution at larger radius, and on the relative contribution of the group potential to the lensing mass.

Conclusions {#sec:concs}
===========

We have discovered strong gravitational lensing by the elliptical galaxy [ESO325–G004]{}, which at $z=0.0345$ is the nearest known galaxy-scale lens. The multiply-imaged background source appears to be a star-forming galaxy, with prominent substructure. If the source is very distant, the arc radius is consistent with a stellar-dominated mass-to-light ratio within radius $\sim{}\frac{1}{4}R_{\rm eff}$. An elliptical isothermal mass model recovers the position angle and ellipticity of the galaxy, independent of the observed luminosity.
The best-fit mass scale is consistent with the measured stellar velocity dispersion. Future modeling of the system should incorporate structural information from the clumpy arc morphology. We intend to obtain integral-field spectroscopy for [ESO325–G004]{}, with 8m-class telescopes, to secure the source redshift and enclosed mass estimate. Additionally, these observations will yield extended stellar dynamics for the lens galaxy, and enable high-contrast imaging of the arcs by emission-line mapping.

RJS thanks Mike Hudson and Laura Parker for useful discussions about this work, and the Anglo-Australian Observatory for retrieving the raw AAT spectra of [ESO325–G004]{}. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.

Abell, G. O., Corwin, H. G., & Olowin, R. P. 1989, , 70, 1
Bennett, C. L., et al. 2003, , 148, 1
Blakeslee, J. P., Metzger, M. R., Kuntschner, H., & Côté, P. 2001, , 121, 1
Blakeslee, J. P., et al. 2004, , 602, L9
Bolton, A. S., Burles, S., Koopmans, L. V. E., Treu, T., & Moustakas, L. A. 2005, , 624, L21
Carlberg, R. G., Yee, H. K. C., Morris, S. L., Lin, H., Hall, P. B., Patton, D. R., Sawicki, M., & Shepherd, C. W. 2001, , 552, 427
Cole, S. M., et al. 2001, , 326, 255
Huchra, J., Gorenstein, M., Kent, S., Shapiro, I., Smith, G., Horine, E., & Perley, R. 1985, , 90, 691
Jedrzejewski, R. I. 1987, , 226, 747
Keeton, C. R. 2001, astro-ph/0102340
Kormann, R., Schneider, P., & Bartelmann, M. 1994, , 284, 285
Postman, M., Lauer, T.
R., Szapudi, I., & Oegerle, W. 1998, , 506, 33
Rusin, D., Kochanek, C. S., & Keeton, C. R. 2003, , 595, 29
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, , 500, 525
Sirianni, M., et al. 2005, , [*submitted*]{}
Smith, R. J., Lucey, J. R., Hudson, M. J., Schlegel, D. J., & Davies, R. L. 2000, , 313, 469
Smith, R. J., Lucey, J. R., Schlegel, D. J., Hudson, M. J., Baggley, G., & Davies, R. L. 2001, , 327, 249
Treu, T., & Koopmans, L. V. E. 2004, , 611, 739

[^1]: Given independent information on the stellar population (e.g. from spectroscopic line indices), this result could yield an upper limit on the mass contribution from dark matter associated with [ESO325–G004]{} and/or the surrounding cluster S0740.
---
abstract: 'The Brownian diffusion of micron-scale inclusions in freely suspended smectic A liquid crystal films a few nanometers thick and several millimeters in diameter depends strongly on the air surrounding the film. Near atmospheric pressure, the three-dimensionally coupled film/gas system is well described by Hughes/Pailthorpe/White hydrodynamic theory but at lower pressure ($p \alt 70$ torr), the diffusion coefficient increases substantially, tending in high vacuum toward the two-dimensional limit where it is determined by film size. In the absence of air, the films are found to be a nearly ideal physical realization of a two-dimensional, incompressible Newtonian fluid.'
author:
- Zhiyuan Qi
- Cheol Soo Park
- 'Matthew A. Glaser'
- 'Joseph E. Maclennan'
- Noel A. Clark
bibliography:
- 'vacuumref.bib'
title: Experimental Realization of an Incompressible Newtonian Fluid in Two Dimensions
---

Theoretical hydrodynamics has progressed through the invention of a series of abstract fluids (perfect, inviscid, incompressible, and so on) that enable the tractable description of certain physical aspects of three-dimensional ($3$D) fluid systems [@Lamb1916]. Among the most useful of these idealizations has been that of the incompressible Newtonian fluid, which models the low-Reynolds number flow of simple and weakly-associated liquids, for example. While there are many physical realizations of such fluids in $3$D, there have been none that satisfy the basic requirements in $2$D, i.e., that are homogeneous in density and viscosity and obey the laws of conservation of mass, energy, and momentum.
Currently studied $2$D fluids include soap films [@Goldburg2002], which are highly compressible in-plane due to their facile response to stress (resulting in thickness changes); and few-nanometer thick, freely suspended, fluid smectic liquid crystal films [@Young1978] which, by virtue of their lamellar structure, are quantized in thickness to an integral number of layers, stabilizing hydrodynamic parameters such as density and viscosity to an extent comparable to that of $3$D fluids. Both systems exchange momentum and energy with a surrounding gas but the low vapor pressure [@Deschamps2008; @Poole2014] of smectic films enables the possibility of pressure reduction to the microtorr regime and thereby the approach to, and study of, the ideal incompressible, isotropic, Newtonian limit of $2$D fluids (2DIIN limit). The experiments on smectic films reported here explore the evolution to this hydrodynamic regime as the surrounding gas pressure is reduced and investigate the anomalies arising from reduced dimensionality in this limit. Hydrodynamic behavior in $2$D has received extensive theoretical attention [@Goldenfeld2007] and is of broad interest in the context of $2$D flows in $3$D systems, ranging from wires falling in $3$D viscous fluids [@White1946] to the large scale motion of oceans and the atmosphere [@Boffetta2012]. There is also increasing interest in the flow of $2$D films *per se* in connection with understanding the dynamical behavior of defects [@Muzny1992; @Pargellis1992], textures [@Lee2006; @Dolganov2014] and inclusions [@Nguyen2010; @Schulz2014; @Qi2014], and of transport in biological membranes [@Simons1997; @Hormel2014], all of which benefit from experimental information at or near the $2$DIIN limit. As an example, the recent experiments of May et al. [@May2012] reveal a dramatic alteration of the shape dynamics of free-floating bubbles as a result of a partial suppression of in-plane compressibility.
The coupling of an incompressible Newtonian $2$D fluid to the surrounding media was first treated by Saffman and Delbrück (SD) [@Saffman1975]. They developed a continuum hydrodynamic model to describe the mobility $\mu$ of an inclusion of radius $a$, in a fluid film of viscosity $\eta$, surrounded by bulk fluid of different viscosity, $\eta^\prime$. They showed that flow of the film about a moving inclusion is limited to a radius on the film of characteristic dimension $l_S = \eta h/\eta^\prime$, the Saffman length, where $h$ is the film thickness. SD treated the case $a < l_S$, finding that $\mu$ is controlled by the film viscosity and the film exhibits $2$D flow as if bounded at $l_S$. Hughes, Pailthorpe, and White (HPW) [@Hughes1981] extended SD theory to describe inclusions of arbitrary radius, showing that for large inclusions ($a > l_S$) $\mu$ is determined by friction with the surrounding fluid, exhibiting something more like $3$D Stokes behavior ($\mu \sim 1/a$) [@Nguyen2010]. Aspects of these SD/HPW predictions have since been tested in experiments by several groups [@Nguyen2010; @Cheung1996; @Petrov2012]. In the absence of surrounding fluid the film flow behavior should be $2$D, marked by extremely long-ranged (logarithmic) hydrodynamic interactions and inclusion mobilities that depend logarithmically on system size. In this Letter, we describe the Brownian diffusion of silicone oil droplet inclusions in smectic films as the ambient air pressure is varied from $633$ torr down to $10^{-4}$ torr. The experiments confirm that, while at atmospheric pressure the mobilities are limited by the surrounding gas, in the high vacuum $2$DIIN limit the hydrodynamics are controlled by film size. In addition, predictions in the $2$DIIN limit describe well the dependence of the mobility of inclusions on distance from the film boundary.
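For the films studied here, the Saffman length can be estimated from the material parameters given later in the text. The sketch below assumes an air viscosity $\eta^\prime\approx1.8\times10^{-5}$ Pa s and includes a factor of 2 for air on both sides of the film; the exact $O(1)$ prefactor depends on convention.

```python
# Order-of-magnitude Saffman length for a thin smectic film in air.
# 8CB parameters are taken from the experimental section; the air
# viscosity and the factor of 2 (air on both faces) are assumptions.
eta_film = 0.052          # Pa s, 8CB smectic-A viscosity
layer = 3.17e-9           # m, smectic layer thickness
N = 3                     # number of layers
eta_air = 1.81e-5         # Pa s, air at room temperature (assumed)

h = N * layer
l_S = eta_film * h / (2 * eta_air)   # ~14 um
print(l_S * 1e6)
```

The result, of order 10 µm, is comparable to the 2–50 µm droplet radii, so both the SD ($a < l_S$) and HPW ($a > l_S$) regimes are experimentally accessible.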
Since friction from the air plays such an important role in determining the hydrodynamic behavior of inclusions in smectic films, understanding how the inclusion-air interactions can be tuned by varying the ambient pressure is of fundamental interest. As the air pressure is reduced, the mean free path $\lambda$ of the air molecules is expected to increase, as indicated in Fig. \[fig2\](d). At sufficiently low pressure, the surrounding air cannot be regarded simply as an incompressible, continuous fluid and the well-established SD/HPW model based on low Reynolds number hydrodynamics can no longer be used to predict the mobilities of inclusions. Here we explore the effects of varying the ambient air pressure on the Brownian motion of inclusions in freely suspended liquid crystal (FSLC) films, showing that as the air is removed, the system evolves from a pseudo-$3$D regime where coupling to the air is dominant to a regime in which the hydrodynamics are determined by confinement at the boundaries, as predicted for an ideal $2$D fluid. ![\[fig1\] Experimental apparatus for observing inclusions in smectic liquid crystal films at low pressure. A resistive filament coated with silicone oil is briefly heated with an electric current to generate an oil vapor, part of which then condenses as droplets on the film. The buffer chamber shields the film chamber from sudden changes in pressure.](fig1){width="7.5cm"} Homogeneous FSLC films a few molecular layers thick are robust preparations [@Young1978] that provide an ideal platform for studying hydrodynamics in reduced dimensions [@Muzny1992]. In previous experiments, we described the Brownian motion of silicone oil droplets embedded in such films with the ambient air at atmospheric pressure [@Qi2014]. These droplets form lens-shaped inclusions that are insoluble in the liquid crystal and whose size remains constant over long time intervals, typically for more than half an hour, which far exceeds the time required to perform a typical measurement.
The liquid crystal material used in our experiment is $8$CB (4$'$-n-octyl-4-cyanobiphenyl), which is in the fluid smectic A phase at room temperature. The saturation vapor pressure of $8$CB is very low (around $10^{-7}$ torr [@Deschamps2008]), and we are able to maintain stable films of constant thickness over a wide range of air pressures (from atmospheric pressure to $10^{-6}$ torr), enabling us to study the microrheology of inclusions in the film over a wide range of Knudsen number ($\lambda/2R$, the reduced mean free path). The density and viscosity of $8$CB are $\rho \approx 0.96 \; \mathrm{g/cm}^3$ [@Leadbetter1976] and $\eta = 0.052~{\rm Pa} \cdot {\rm s}$ [@Schneider2006], respectively. Each smectic layer is $3.17\,{\rm nm}$ thick [@Davidov1979]. Freely suspended films were formed by spreading a small amount of the liquid crystal across a $4\,{\rm mm}$-diameter hole in a glass cover slip and were then observed using reflected light video microscopy. The film thickness $h$, an integral number $N$ of smectic layers (typically $2 \le N \le 6$), is determined precisely by comparing the reflectivity of the film with that of black glass [@Sirota1987]. A resistive filament coated with silicone oil is then electrically heated in order to generate an oil vapor, some of which makes its way to the film where it eventually condenses and forms visible droplets such as those shown in Fig. \[fig2\]a, with radii between $2$ and $50\, \mu$m. A double-sealed rotary pump is then used to reduce the pressure to $3 \times 10^{-3}$ torr, after which a turbo pump is used to bring the film chamber down to $10^{-4}$ torr. The shape and thickness of the oil droplets are measured by analyzing the interference fringes (see Fig. \[fig2\]b) formed in monochromatic light [@Schuring2002].
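The Knudsen-number scaling can be illustrated with the hard-sphere mean-free-path formula $\lambda = kT/(\sqrt{2}\pi d^2 p)$; the effective molecular diameter $d\approx0.37$ nm for air is an assumed round number, so the values below are order-of-magnitude estimates.

```python
import math

# Hard-sphere mean free path of air, lambda = kT / (sqrt(2) pi d^2 p)
k, T, d = 1.381e-23, 293.0, 3.7e-10   # J/K, K, m (d is assumed)
TORR = 133.32                          # Pa per torr

def mfp(p_torr):
    """Mean free path in meters at pressure p_torr (in torr)."""
    return k * T / (math.sqrt(2) * math.pi * d**2 * p_torr * TORR)

R_film = 2e-3                          # m, film radius (4 mm aperture)
for p in (760.0, 70.0, 0.02):
    lam = mfp(p)
    print(p, lam, lam / (2 * R_film))  # pressure, lambda, Knudsen number
```

At atmospheric pressure $\lambda$ is tens of nanometers, while at $0.02$ torr it reaches the millimeter scale, comparable to the film diameter.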
Once oil droplets appear on the film, the chamber is carefully tilted in order to maneuver a droplet of desired radius $a$ into the center of the film (of radius $R$), after which the film is leveled to minimize gravitational drift, enabling us to capture several thousand images at a video frame rate of $24$ fps while the droplet is in the field of view and far from the film boundaries, as shown schematically in Fig. \[fig2\](c). After reducing the pressure of the film chamber to $10^{-4}$ torr, closing the valve between the pump and the vacuum buffer chamber allows us to maintain quasi-constant pressure around the film for dozens of minutes, during which we are able to make video recordings of droplet motion. The pressure is then gradually increased by injecting small amounts of air into the system, allowing us to obtain a series of movies showing the Brownian motion of the droplet as the chamber pressure increases in steps from $3\times 10^{-4}$ torr to $633$ torr. These videos are decomposed into sequential images and the size and position of the droplet are determined using algorithms based on Canny’s method for edge detection [@Canny1986] and Taubin’s method for object identification [@Taubin1991]. The diffusion coefficient is obtained after analytically removing any drift [@Nguyen2010]. ![\[fig2\] Effect of surrounding air pressure on droplet diffusion. (a) Oil droplets on a three-layer, freely suspended liquid crystal film viewed in reflection. (b) Interference fringes in a large oil droplet. (c) Cartoon of an oil droplet of radius $a$ near the center of a film of radius $R$. (d) Diffusion coefficient of a single droplet ($a=8 \, \mu$m) near the center of a film ($R=2 \,$mm, $N=3$ layers) as a function of surrounding air pressure (symbols). The green curve corresponds to SD/HPW theory. The black curve shows the diffusion predicted by a model assuming free air molecules impinging on the film. 
The horizontal dashed red line shows the $2$D confinement limit predicted by Saffman for vanishingly small air viscosity and no-slip boundaries. The dashed black line shows the mean free path of the air molecules scaled by the film diameter ($\lambda_\mathrm{reduced}=\lambda/(2R)$) as a function of pressure. The background shading indicates three distinct behavioral regimes corresponding to different air pressure ranges: free-molecular, slip, and continuum. (e) Reduced mobility of single oil droplets in a film in high vacuum as a function of reduced film radius. The model curve is Saffman’s prediction for a $2$D fluid with no-slip boundaries.](fig2){width="7.0cm"} The diffusion coefficient of a typical droplet near the center of the film is plotted as a function of ambient pressure in Fig. \[fig2\](d). The green curve shows the corresponding SD/HPW theory with the air viscosity corrected for pressure [@Johnston1951]. The observed variation of the diffusion coefficient is well described by this model at pressures close to atmospheric ($p\gtrsim70$ torr) but the experimental data deviate significantly from the theory at lower pressure, increasing monotonically as the pressure is reduced before saturating in very high vacuum at the limit corresponding to $2$D boundary confinement (horizontal red dashed line in Fig. \[fig2\](d)). The observed behavior falls into three distinct hydrodynamic regimes: (1) Near atmospheric pressure ($p\gtrsim 70$ torr), the mean free path of the air molecules ($\lambda \sim 7 \,\mu$m) is much less than the diameter of the film. In this regime, the air may be regarded as a continuous fluid and SD/HPW theory gives the diffusion coefficient $D$ with no adjustable parameters, using the known air viscosity $\eta^{\prime}$, the film viscosity $\eta$, the measured film thickness $h$, and the measured hydrodynamic radius (see Supporting Information) of the inclusions [@Nguyen2010].
(2) Below about $70$ torr, the effective viscosity of the air decreases as the pressure falls, a phenomenon attributable to slip of the air layers [@Johnston1951] over the surfaces of the film and oil droplet. (3) At very low pressure ($p \lesssim 0.02$ torr), the mean free path is several mm long, a distance comparable to the diameter of the film. In this regime, the ambient air can be regarded as an ensemble of collisionless molecules that obey a Maxwell-Boltzmann velocity distribution [@Clark19751; @Clark19752]. In order to model the behavior of droplets at the lowest pressures, we may approximate the total drag force $F$ as the sum of two terms, one arising from confinement by the boundaries and the other due to friction from the air, or $F=F_b + F_\mathrm{air}$. The confinement term is given by $F_b = {4\pi\eta hU}/{(\ln({R}/{a}) - 0.5)}$ [@Saffman1975]. The air drag on an inclusion moving in the film at speed $U$ depends both on the direct frictional force resulting from the impingement of air molecules on the inclusion, and on indirect frictional forces resulting from changes of streamlines in the film caused by collisions with air molecules. Calculations based on kinetic theory [@Epstein1924] indicate that the frictional force per unit area as a function of droplet speed $U$ and surrounding air pressure $p$ may be written $F_\mathrm{air} = p\sqrt{\pi m/(2kT)} \, U$, where $m$ is the mass of an air molecule, $k$ the Boltzmann constant, and $T$ the temperature. The net inverse droplet mobility may be written as $1/\mu = 1/\mu_{b} + 1/\mu_\mathrm{air}$. Since $\mu_b$ is independent of pressure, the mobility can be expressed in the form $\mu = 1/(\mu_b^{-1}+ \mathrm{const}\times p)$, where the constant can be found by fitting the experimental data at low pressure. This model is plotted as the black curve in Fig. \[fig2\](d).
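A minimal numerical sketch of this two-channel model follows, with an assumed effective area $\sim2\pi a^2$ (both faces of the droplet) in the Epstein term standing in for the constant that is fitted to the data in the experiment.

```python
import math

# Two-channel mobility model: mu(p) = 1 / (1/mu_b + C p).
# mu_b is Saffman's confinement expression; C is an assumed
# free-molecule (Epstein) coefficient, fitted in the experiment.
eta, h = 0.052, 3 * 3.17e-9      # film viscosity (Pa s), 3-layer thickness (m)
R, a = 2e-3, 8e-6                # film and droplet radii (m)
k, T, m_air = 1.381e-23, 293.0, 4.8e-26

mu_b = (math.log(R / a) - 0.5) / (4 * math.pi * eta * h)
C = 2 * math.pi * a**2 * math.sqrt(math.pi * m_air / (2 * k * T))

def mu(p_pa):
    return 1.0 / (1.0 / mu_b + C * p_pa)

D_limit = k * T * mu_b           # Einstein relation: 2D confinement limit
print(D_limit * 1e12)            # ~3 um^2/s
print(mu(0.0) / mu_b, mu(133.32) / mu_b)   # 1 at p = 0; reduced at 1 torr
```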
In an ideal $2$D fluid of finite size, therefore, the only drag experienced by a disk-like inclusion should come from confinement forces arising from long-range hydrodynamic interactions with the fluid boundaries. Our experiments confirm that in high vacuum ($p \lesssim 0.003$ torr), the frictional drag from the remaining air molecules is much smaller than the hydrodynamic confinement force and can be neglected. In this regime, the freely suspended SmA liquid crystal film approaches a true $2$D fluid and exhibits purely $2$D hydrodynamics. To verify that we were really in the $2$D limit, we analyzed the Brownian motion of droplets of different sizes in films of different radii under high vacuum. The reduced mobility $m = 4\pi\eta h \mu$ of these inclusions as a function of reduced film radius $R/a$ is plotted in Fig. \[fig2\](e). The observed mobility follows the predictions of SD theory quite well, increasing logarithmically with the reduced film radius as expected for a system with $2$D hydrodynamic behavior. The observed mobilities are slightly larger than predicted by the model, an effect which might be due to deviations from ideal, no-slip boundary conditions resulting from the presence of a meniscus [@Picano2000]. ![\[fig3\] Reduced mobility of oil-drop inclusions diffusing parallel (red) and perpendicular (blue) to a straight film boundary in high vacuum. $a$ is the radius of the inclusion and $d$ the distance from the boundary. The model curves are analytical predictions from Eq. \[eq:jeffrey\]. The two mobilities are different, in agreement with theory, increasing logarithmically with distance from the boundary as expected for pure $2$D hydrodynamics.](fig3){width="7.5cm"} In both $3$D [@Banerjee2005; @Lele2011] and $2$D fluids, inclusions near a rigid boundary experience a “wall effect” which reduces their mobility.
To study the $2$D “wall effect”, in the 1940s White measured the drag on metal wires falling sideways through viscous liquids confined between two vertical bounding walls and found that at low Reynolds number, the presence of the walls affected the mobility of the wires even when they were many hundreds of wire diameters away, with the mobility depending logarithmically on the ratio of wall separation to wire radius [@White1946]. Recent measurements of inclusion mobility in very thick smectic films, in which the Saffman length is greater than the film size and the influence of the air is relatively small, also showed the effects of the boundary [@Eremin2011]. Eliminating the environmental drag on a thin smectic film by removing the ambient air seemed a promising way of studying wall effects in a true $2$D fluid. We therefore measured the mobilities of included oil droplets both parallel and perpendicular to a straight boundary in high vacuum. The experimental observations, shown in Fig. \[fig3\], were compared with the model of Jeffrey and Onishi [@Jeffrey1981], who solved the Navier-Stokes equations for $2$D flow around a translating cylinder near a plane wall, assuming small Reynolds number flow. For translation respectively parallel and perpendicular to the wall, the predicted reduced mobilities are: \[eq:jeffrey\] $$\begin{aligned} m_\parallel & = 4\pi\eta h \mu_\parallel & = \ln \left [\frac{d+\sqrt{d^{2}-a^{2}}}{a} \right ] \, , \label{eq:parallel} \\ % & \mathrm{and} \nonumber \\ m_\perp & = 4\pi\eta h \mu_\perp & = \ln \left [\frac{d+\sqrt{d^{2}-a^{2}}}{a} \right ] -\frac{\sqrt{d^{2}-a^{2}}}{d} . \label{eq:perp}\end{aligned}$$ For large values of $d/a$, these expressions simplify to $m_\parallel \approx \ln[2d/a]$ and $m_\perp \approx \ln[2d/a]-1$.
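A quick numerical check of Eq. \[eq:jeffrey\] and the quoted asymptotic forms:

```python
import math

# Jeffrey-Onishi reduced mobilities near a plane wall, and a check of
# the large-d/a asymptotics m_par ~ ln(2d/a), m_perp ~ ln(2d/a) - 1.
def m_par(d, a):
    return math.log((d + math.sqrt(d**2 - a**2)) / a)

def m_perp(d, a):
    return m_par(d, a) - math.sqrt(d**2 - a**2) / d

a, d = 1.0, 100.0
print(m_par(d, a), math.log(2 * d / a))          # nearly identical
print(m_perp(d, a), math.log(2 * d / a) - 1.0)   # nearly identical
print(m_par(d, a) - m_perp(d, a))                # anisotropy -> 1 far away
```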
In contrast to $3$D fluids, where the wall effect on mobility decays within a few dozen inclusion radii [@Carbajal2007], the influence of the boundary extends a long distance into a $2$D fluid and the hydrodynamic behavior of inclusions is predicted to remain anisotropic at large distances from the wall. Our experimental results confirm this behavior, as seen in Fig. \[fig3\]. The measured mobilities are on average slightly higher than the theory but are generally in good agreement, except very close to the wall. This may be due to deviations from true no-slip boundary conditions [@Keh2008] at the meniscus, as mentioned previously. In summary, we have described the Brownian motion of single inclusions in freely suspended smectic A liquid crystal films as the pressure of the surrounding air is reduced from one atmosphere to a high vacuum. The inclusion mobility was characterized in three hydrodynamic regimes: near atmospheric pressure, where diffusion follows HPW theory, in partial vacuum (the slip regime), and in high vacuum, where we observe motion limited by $2$D confinement effects. The parallel and perpendicular mobilities of an inclusion in high vacuum near the edge of the film increase logarithmically with distance from the boundary as predicted by theory, with an anisotropic character that persists far into the film. The observations suggest that thin, freely suspended smectic films in high vacuum are a nearly ideal experimental realization of a two-dimensional fluid. This work opens the way for more general hydrodynamic studies in the 2DIIN limit, of such phenomena as driven microrheological flow, high Reynolds number turbulence, energy cascades, and jets. This work was supported by NASA Grant NNX-13AQ81G and NSF MRSEC Grants DMR-0820579 and DMR-1420736.
---
abstract: 'A recently developed self-healing diffusion Monte Carlo algorithm \[PRB [**79**]{}, 195117\] is extended to the calculation of excited states. The formalism is based on an excited-state fixed-node approximation and the mixed estimator of the excited-state probability density. The fixed-node ground state wave-functions of inequivalent nodal pockets are found simultaneously using a recursive approach. The decay of the wave-function into lower energy states is prevented using two methods: i) The projection of the improved trial-wave function into previously calculated eigenstates is removed; and ii) the reference energy for each nodal pocket is adjusted in order to create a kink in the global fixed-node wave-function which, when locally smoothed, increases the volume of the higher energy pockets at the expense of the lower energy ones until the energies of every pocket become equal. This reference energy method is designed to find nodal structures that are local minima for arbitrary fluctuations of the nodes within a given nodal topology. It is demonstrated in a model system that the algorithm converges to many-body eigenstates in bosonic and fermionic cases.'
author:
- Fernando Agustín Reboredo
title: 'Systematic reduction of sign errors in many-body problems: generalization of self-healing diffusion Monte Carlo to excited states'
---

Introduction
============

Although several important chemical and physical properties of matter are determined by the lowest energy electronic configuration (or ground state), a significant number of physical properties are crucially dependent on the excitation spectra. These properties range from electronic optical excitations to transport and thermodynamic behavior. While elegant theories that take advantage of the variational principle have been formulated for the ground state, [@hohenberg; @kohn] the theories of the excitation spectra are far more complex.
[@hedin65] Therefore, although excited states are extremely important, our understanding of them is limited as compared with the ground state. Diffusion quantum Monte Carlo (DMC) is the method of choice to obtain the ground state energy of systems with more than $\sim\!20$ electrons. The DMC algorithm [@ceperley80] transforms the calculation of an excited state (e.g., the fermionic ground state) into a ground state calculation. The accuracy of the method depends, however, on a previous estimate of the zeros (nodes) of the wave-function. The ground state wave-function of most many-body Hamiltonians $\mathcal{H({\bf R})}$ is a bosonic (symmetric) wave-function without nodes. Any other eigenstate of a many-body Hamiltonian $\mathcal{H({\bf R})}$ must have nodes in order to be orthogonal to the bosonic ground state. In the case of fermions (e.g., electrons), the ground state must be antisymmetric. Therefore, the electronic ground state is an excited state of the many-body Hamiltonian $\mathcal{H({\bf R})}$ and must have nodes (hyper-surfaces in $3N_e$ space where the wave-function becomes zero and changes sign, where $N_e$ is the number of particles). The standard DMC approach [@ceperley80] finds the lowest energy $E^{DMC}_T$ of all the wave-functions that share the nodes $S_T({\bf R})$ of a trial wave-function $\Psi_T({\bf R})$, where ${\bf R}$ is a point in the $3N_e$ coordinate space. This lowest energy wave-function is denoted as the fixed-node ground state $\Psi_{FN}({\bf R})$. Since “no nodes” is an easy condition to satisfy, the ground state energy of a bosonic system can be found with a precision limited only by statistical and time-step errors. For any other eigenstate $\Psi_n({\bf R})$, a good approximation of its nodal surface $S_n({\bf R})$ must be provided in order to avoid systematic errors.
Departures in $S_T({\bf R})$ from the exact nodes $S_n({\bf R})$ cause, in general, errors in the energy as compared with the exact eigenstate energy. [@foulkes99] For the fermionic ground state, the standard DMC algorithm provides only an upper bound of the ground state energy. [@anderson79; @reynolds82] Moreover, if $\Psi_n({\bf R})$ is nondegenerate, any departure of $S_T({\bf R})$ from $S_n({\bf R})$ creates a kink in the fixed-node ground state. [@keystone] Accordingly, accurate many-body calculations require methods to obtain and improve $S_T({\bf R})$. The problem of finding the exact nodes $S_n({\bf R})$, the surfaces in $3N_e$ space where the wave-function of an arbitrary eigenstate $n$ changes sign, is one of the outstanding problems in condensed matter theory. [@ceperley91] This paper is the natural conclusion of earlier work. In Ref. we showed that even the [*exact*]{} Kohn-Sham[@kohn] wave-functions [*cannot*]{} be expected to provide accurate nodal structures for DMC calculations. However, we also showed that an optimal Kohn-Sham-like nodal potential exists. Subsequently in Ref. we demonstrated that the nodes of the fermionic ground state wave-function can be found in an iterative process by locally smoothing the kinks of the fixed-node wave-function. We also showed that an effective nodal potential can be found to obtain a compact representation of an optimized trial wave-function with good nodes. While some details are rederived here, reading those papers before this one is [*highly*]{}[@fn:highly] recommended. In this paper the self-healing diffusion Monte Carlo method (SHDMC) is extended to find the nodes, wave-functions, and energies of low-energy eigenstates of bosonic and fermionic systems.

The simple SHDMC algorithm for the ground state
===============================================

This paper describes how to extend the “simple SHDMC algorithm” (as described in Section III.C of Ref. ) to excited states.
An extension to optimize the multi-determinant expansion (see Section IV in Ref. ) is clearly possible and will be explained elsewhere. The ground state SHDMC algorithm builds upon the importance sampling DMC method. [@ceperley80] The standard diffusion Monte Carlo approach is based on the Ceperley-Alder[@ceperley80] equation: [@units] $$\begin{aligned} \label{eq:ceperleyalder} \frac{\partial f({\bf R},\tau )}{\partial \tau} \! & =& \! {\bf \nabla_R^2 } f({\bf R},\tau )- \! {\bf \nabla_R } \left( f({\bf R},\tau ) {\bf \nabla_R } \ln \left| \Psi_T({\bf R}) \right|^2 \right) \nonumber \\ &&- \left[ E_L({\bf R})-E_{T} \right] f({\bf R},\tau ) \; ,\end{aligned}$$ where $E_L({\bf R}) = [\hat\mathcal{H} \Psi_T({\bf R})]/\Psi_T({\bf R}) $ is the “local energy,” $\hat\mathcal{H}$ is the many-body Hamiltonian operator, ${\bf R} $ denotes a point in $3N_e$ space, and $E_T$ is a reference energy. Equation (\[eq:ceperleyalder\]) is often solved numerically[@ceperley80] using a large number $N_c$ of electron configurations (or walkers), which are points ${{\bf R}_i}$ in the $3N_e$ space. These walkers i) randomly diffuse according to the first term in Eq. (\[eq:ceperleyalder\]) and ii) drift according to the second term for a time $\delta \tau$. In addition, iii) the walkers branch (or die) with probability $p=1-\exp[(E_L({\bf R}_i)-E_{T})\delta \tau]$ (or $p=\exp[(E_L({\bf R}_i)-E_{T})\delta \tau]-1$). To prevent large fluctuations in the population of walkers and excessive branching or killing, a statistical weight is often assigned to each walker. A detailed review of the numerical methods used for minimizing errors and accelerating DMC calculations is given in Ref. . 
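As an illustration, steps i)-iii) above can be sketched in a pure-weights scheme (a minimal sketch, not the authors' code; `drift` and `local_energy` are hypothetical user-supplied callables, and the diffusion constant is taken as 1, so the Gaussian move has variance $2\delta\tau$):

```python
import numpy as np

def dmc_step(walkers, weights, drift, local_energy, e_t, dtau, rng):
    """One importance-sampled DMC step, pure-weights variant.

    walkers      : (N_c, 3*N_e) array of configurations R_i
    drift(R)     : drift velocity  grad ln|Psi_T(R)|^2   (hypothetical callable)
    local_energy : E_L(R) = [H Psi_T](R) / Psi_T(R)      (hypothetical callable)
    """
    # i) random diffusion (first term) + ii) drift (second term), for a time dtau
    noise = rng.normal(scale=np.sqrt(2.0 * dtau), size=walkers.shape)
    new_walkers = walkers + dtau * drift(walkers) + noise
    # iii) reweight instead of branching/killing: w *= exp[-(E_L - E_T) dtau]
    e_l = local_energy(new_walkers)
    new_weights = weights * np.exp(-(e_l - e_t) * dtau)
    return new_walkers, new_weights
```

In a branching implementation, step iii) would instead duplicate or remove walkers with the probabilities quoted above; the weight factor shown here reproduces, to first order in $\delta\tau$, the expected multiplicity of such a branching step.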
In the limit of $\tau \rightarrow \infty$, the distribution function of the walkers in an importance sampling DMC algorithm is given by[@ceperley80] $$\begin{aligned} \label{eq:fr} f({\bf R},\tau \rightarrow \infty) & = & \Psi_T^*({\bf R})\Psi_{FN}({\bf R}) \; e^{-(E^{DMC}_T-E_T)\tau} \\ & = & \lim_{N_c \rightarrow \infty} \lim_{j \rightarrow \infty} \frac{1}{N_c} \sum_i^{N_c} W_i^j(j) \; \delta \left({\bf R-R}_i^j \right) \; . \nonumber \end{aligned}$$ The ${\bf R}_i^j$ in Eq. (\[eq:fr\]) correspond to the positions of walker $i$ at step $j$ for an equilibrated DMC run of $N_c$ configurations. The original SHDMC method for the ground state was implemented in a scheme that mixed branching with weights. For reasons that will become clear below, it is easier to formulate a method for excited states with a constant number of walkers with weights $W_i^j(k)$, which are given by $$\label{eq:weights} W_i^j(k)=e^{-\left[ E_i^j(k)-E_{T} \right] k \; \delta\tau},$$ where $k$ is a number of steps, $\delta\tau$ is the time step, and $$\label{eq:enaverage} E_i^j(k) = \frac{1}{k}\sum_{\ell=0}^{k-1} E_L({\bf R}_i^{j-\ell}) \; .$$ The energy reference $E_T$ in Eq. (\[eq:weights\]) is adjusted so that $\sum_i W_i^j(k)\approx N_c$, assuming a constant $E_T$ for $k$ steps. Note that setting all $W_i^j(k)= 1$ in Eq. (\[eq:fr\]) gives at equilibrium, by construction, a distribution $f({\bf R})= |\Psi_T({\bf R})|^2$, because this is equivalent to setting $E_L({\bf R})= E_{T}$ in Eq. (\[eq:ceperleyalder\]). If one sets the initial distribution of walkers as $f({\bf R},0)= |\Psi_T({\bf R})|^2$, then the distribution of walkers at imaginary time $\tau = k \delta \tau$ is given by $$\begin{aligned} \label{eq:evoltau} f({\bf R},\tau) &= &\Psi_T({\bf R}) \left[ e^{-\tau \hat \mathcal{H}_{FN} } \Psi_T({\bf R}) \right] \\ \nonumber & = & \Psi_T({\bf R}) \Psi_T({\bf R},\tau) \\ \nonumber & = & \lim_{N_c \rightarrow \infty} \frac{1}{N_c} \sum_i^{N_c} W_i^j(k) \delta \left({\bf R-R}_i^j \right) \; . 
\end{aligned}$$ Therefore, at equilibrium and in a no-branching approach, the weights $W_i^j(k)$ contain all the difference between $f({\bf R},\tau)$ and $|\Psi_T({\bf R})|^2$. In Eq. (\[eq:evoltau\]) $e^{-\tau \hat \mathcal{H}_{FN} }$ is the fixed-node evolution operator, which is a function of the fixed-node Hamiltonian operator $ \hat \mathcal{H}_{FN} $ given by $$\label{eq:hfn} \hat \mathcal{H}_{FN} = \hat \mathcal{H}-E_T + \! \infty \ \lim_{\epsilon \rightarrow 0} \theta\left\{\epsilon- d_m[S_T({\bf R'})- {\bf R}] \right\} \; .$$ The third term on the right-hand side of Eq. (\[eq:hfn\]) adds an infinite potential at the points ${\bf R}$ whose minimum distance $d_m[S_T({\bf R'})- {\bf R}]$ to any point of the nodal surface is smaller than $\epsilon$. [@fn:nodelta] Using Eq. (\[eq:evoltau\]) one can formally obtain $$\label{eq:phiTtau} \langle{\bf R}|\Psi_T(\tau)\rangle = \Psi_T({\bf R},\tau)= e^{-\tau \hat \mathcal{H}_{FN} } \Psi_T({\bf R}) = \frac{f({\bf R},\tau )}{\Psi_T({\bf R})} \; ,$$ and using Eq. (\[eq:fr\]) one obtains $$\langle{\bf R}|\Psi_{FN} \rangle=\Psi_{FN}({\bf R})= \lim_{\tau \rightarrow \infty} \Psi_T({\bf R},\tau) e^{(E^{DMC}_T-E_T)\tau} \; .$$ The trial wave-function $\Psi_T({\bf R})$ is commonly a product of an antisymmetric function $\Phi_T({\bf R})$ and a Jastrow[@fn:Jastrow] factor $e^{J({\bf R})}$. Often $\Phi_T({\bf R})$ is a truncated sum of Slater determinants or Pfaffians $\Phi_n({\bf R})$: $$\label{eq:psit} \langle{\bf R}|\Psi_T\rangle= \Psi_T({\bf R})=e^{J({\bf R})} \sum_n^{\sim} \lambda_n \Phi_n({\bf R}) \; .$$ In Ref. we proved that we can evaluate $ e^{-\tau \hat\mathcal{H}} |\Psi_T\rangle$ for $ \tau \rightarrow \infty$ using a numerically stable algorithm. 
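The trial wave-function of Eq. (\[eq:psit\]) can be evaluated in a few lines (a minimal sketch; `basis` and `jastrow` are hypothetical user-supplied callables standing in for the $\Phi_n$ and $J$):

```python
import numpy as np

def psi_trial(R, lambdas, basis, jastrow):
    """Evaluate Psi_T(R) = exp(J(R)) * sum_n lambda_n * Phi_n(R).

    R       : configuration, an array of 3*N_e coordinates
    lambdas : expansion coefficients lambda_n
    basis   : list of callables Phi_n(R) (e.g., Slater determinants; hypothetical)
    jastrow : callable J(R) (hypothetical)
    """
    phi = np.array([phi_n(R) for phi_n in basis])
    return np.exp(jastrow(R)) * np.dot(lambdas, phi)
```

Since only the coefficients $\lambda_n$ change from one SHDMC iteration to the next, the basis values could in principle be reused between iterations.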
The analytical derivation of the algorithm[@keystone] can be summarized[@fn:highly] here as $$\begin{aligned} \nonumber |\Psi_0\rangle & = & \lim_{\tau \rightarrow \infty} e^{-\tau \hat\mathcal{H}} |\Psi_T^{\ell=0}\rangle \\ \label{eq:prevground} & = & \lim_{\stackrel{\ell \rightarrow \infty}{\tau \rightarrow \infty}} \prod_{\ell} (e^{-\delta \tau^{\prime} \hat \mathcal{H}} e^{-\tau \hat \mathcal{H}^{(\ell-1)}_{FN}}) | \Psi_T^{\ell=0}\rangle \\ \label{eq:algorithmground} & = & \lim_{\stackrel{\ell \rightarrow \infty}{\tau \rightarrow \infty}} \prod_{\ell} (\tilde D e^{-\tau \hat \mathcal{H}^{(\ell-1)}_{FN}}) | \Psi_T^{\ell=0}\rangle \\ \nonumber & = & | \Psi_T^{\ell \rightarrow \infty }\rangle \;.\end{aligned}$$ The operator $\tilde D$ is defined in Eq. (\[eq:deltaexp\]). Equation (\[eq:algorithmground\]) means that the ground state $|\Psi_0\rangle $ [@fn:groundstate] can be obtained recursively by generating a new trial wave-function $|\Psi_T^{\ell} \rangle$ from a fixed-node DMC calculation that uses the previous trial wave-function $|\Psi_T^{\ell-1} \rangle$, which is given by $$\begin{aligned} \label{eq:psitnew} |\Psi_{T}^{\ell} \rangle & = & \tilde D \lim_{\tau \rightarrow \infty} e^{-\tau \mathcal{H}^{(\ell-1)}_{FN}} |\Psi_T^{\ell-1} \rangle \\ \nonumber & = & \tilde D |\Psi_{FN}^{\ell} \rangle \; .\end{aligned}$$ Equation (\[eq:psitnew\]) means that new coefficients $\lambda_n$ of a truncated expansion of a trial wave-function of the form given in Eq. 
(\[eq:psit\]) are obtained [*numerically*]{} from the distribution of walkers of a DMC run as $$\label{eq:sampcoeff} \langle \lambda_n \rangle = \frac{1}{N_c} \sum_{i=1}^{N_c} W_i^j(k \gg 1) \; \xi_n^*({\bf R}_i^j) \; \gamma({\bf R}_i^j) \; ,$$ where $$\label{eq:xi} \xi_n({\bf R})= e^{-2J({\bf R})} \frac {\Phi_n ({\bf R})} { \Phi_T ({\bf R})}$$ and [@keystone; @umrigar93] $$\label{eq:gamma} \gamma ({\bf R})= \frac{-1 + \sqrt{1 + 2 |{\bf v}|^2 \tau}} {|{\bf v} |^2 \tau} \text{ with } {\bf v} = \frac{\nabla \Psi_T({\bf R})} {\Psi_T({\bf R})} \;.$$ A complete explanation of our method is given in Ref. . Briefly here, our method systematically improves the nodes for three main reasons: 1\) The projectors in Eq. (\[eq:xi\]) include only functions $\Phi_{n}({\bf R})$ that retain all symmetries of the ground state. In more technical terms, the ground state is expanded only with functions that belong to the same irreducible representation. This means that if the $\Phi_n({\bf R})$ are determinants, for example, the bosonic ground state is excluded. Therefore, fluctuations that depart from the fermionic Hilbert space are filtered and do not propagate into the trial wave-function from one DMC run to the next SHDMC iteration. 2\) The projection of $\Psi_{FN}({\bf R})$ into a finite set of $\Phi_{n}({\bf R})$ with low non-interacting energy can be shown[@keystone] to be equivalent to locally smoothing the kinks at the node of the fixed-node wave-function with a function of the form $$\label{eq:deltaexp} \langle{\bf R}|\tilde D |{\bf R^\prime}\rangle = \tilde \delta \left({\bf R,R^\prime} \right) = \sum_n^{\sim} \Phi_n({\bf R}) \Phi_n^*({\bf R^{\prime}}) \; .$$ We proved that a large class of local smoothing functions have the same effect on the nodes as a Gaussian, under certain conditions, which includes the case of Eq. (\[eq:deltaexp\]). In turn, in Ref. 
we proved that, to linear order in $\sqrt{\delta\tau^{\prime}}$, the convolution of a Gaussian with any continuous function has the same effect on the nodes as the imaginary time propagator $ e^{-\delta \tau^{\prime} \hat{\mathcal{H}}}$ \[this allows replacing Eq. (\[eq:prevground\]) by Eq. (\[eq:algorithmground\])\]. Thus our method can be viewed as the recursive application of two operators on the trial wave-function: i) $e^{-\tau \mathcal{H}_{FN}}$, which turns $|\Psi_{T} \rangle$ into $|\Psi_{FN} \rangle$, and ii) $\tilde D$, which samples and truncates the expansion and changes the nodes as $ e^{-\tau \hat{\mathcal{H}}}$ does. Accordingly, our method is formally related to the shadow wave-function [@shadow] and the A-function approach [@bianchi93; @bianchi96] \[see Eq. (\[eq:prevground\])\]. 3\) Finally, we argued that the method is robust against statistical noise, because the kink should increase with the distance between the exact node $S({\bf R})$ and the node of the trial wave-function $S_T({\bf R})$ \[the kink must disappear for $S_T({\bf R})= S({\bf R})$\]. In addition, we took the relative error in $\lambda_n$ as the truncation criterion for $\tilde D$. Extension of the Self-Healing DMC algorithm to excited states ============================================================= A detailed explanation of the advantages and limitations of the standard fixed-node approximation for excited states is given in Ref. . This paper explores the possibility of overcoming these limitations in calculating excited states by excluding the projection of lower energy states from the set of $\xi_n({\bf R})$. However, in order to follow this path, the problem of inequivalent nodal pockets has to be addressed. Inequivalent nodal pockets -------------------------- The expression “nodal pocket” denotes a volume in $3N_e$ space enclosed by the nodal surface $S_T({\bf R})$. 
It has been shown [@ceperley91] that the ground state of any fermionic Hamiltonian with a local potential has nodal pockets that belong to the same class, meaning that the complete $3N_e$ space can be covered by applying all symmetry operations (e.g., particle permutations) to just one nodal pocket. Therefore, if the trial wave-function is obtained from such a Hamiltonian, all nodal pockets are equivalent by symmetry. For the ground state, one can obtain the fixed-node wave-function in just one pocket and map it to the rest of the $3N_e$ space using permutations of the particles and other symmetries of $\hat\mathcal{H}$. In the case of arbitrary excited states, there are inequivalent nodal pockets that present a challenge to the fixed-node approach. [@HLRbook] Due to this inequivalent pocket problem, alternatives to and variations of the fixed-node method have been tried. [@ceperley88; @barnett91; @blume97; @dasilva01; @nightingale00; @luchow03; @schautz04; @umrigar07; @purwanto09] Self-healing DMC[@keystone] implicitly takes advantage of the equivalence of nodal pockets in the fermionic ground state and must be extended to the inequivalent pocket case. For this reason a nonbranching formulation is used in the excited state case. Equilibration of walkers in inequivalent nodal pockets ------------------------------------------------------ A first complication of the nonbranching fixed-node approximation, one with a simple solution, is that the number of walkers in each nodal pocket is also fixed by the nodes. As a consequence of the drift or “quantum force” term \[second term in Eq. (\[eq:ceperleyalder\])\], the walkers are repelled from the regions where the wave-function is zero and they cannot cross the node for $\delta \tau \rightarrow 0$. The fact that the population in each nodal pocket is fixed has no consequence for the ground state because all nodal pockets are equivalent. 
For the ground state it is not important in which nodal pocket the walker is trapped, because particle permutations can move every walker into the same nodal pocket and the projectors $\xi_n({\bf R})$ in Eq. (\[eq:xi\]) are invariant under such permutations. However, in the case of excited states, which have more nodes than those required by symmetry, [@fn:permutations] there are inequivalent nodal pockets. In a nonbranching DMC scheme with weights, the population is locked from the start in a set of pockets. If the initial distribution of $N_c$ walkers is chosen with a Metropolis algorithm to match $|\Psi_T({\bf R})|^2$, there would be random variations in the starting population of the order of $\sqrt{N_c/N_p}$, where $N_p$ is the number of inequivalent nodal pockets. This would cause systematic errors if the wave-function coefficients $\lambda_n$ were sampled without taking preventive measures. Moreover, even if the initial numbers of walkers in each pocket were set “by hand” (to be proportional to the integral of $|\Psi_T({\bf R})|^2$ in each pocket), the resolution of the sampling cannot be better than $1/N_c$. The importance of this error grows if $N_c$ is small or if the number of inequivalent nodal pockets is large. To prevent this error from occurring, some walkers are simply allowed to cross the node after the wave-function coefficients are sampled. At the end of a sub-block of $k$ steps, for every walker $i$ at ${\bf R}_i$, a random move ${\bf \Delta R}_i$ is generated with a Gaussian distribution using $\sigma^2= \delta\tau^{\prime}$, [ *without*]{} the drift velocity contribution. This move is accepted only if the wave-function changes sign, with the Metropolis probability $p=\min\left\{ 1,[\Psi_T({\bf R}_i {\bf +\Delta R}) /\Psi_T({\bf R}_i)] ^2 \right\} $. 
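The drift-free node-crossing move just described can be sketched as follows (a minimal illustration; `psi_t` is a hypothetical callable returning $\Psi_T({\bf R})$ for one configuration, and the acceptance uses the standard Metropolis probability $\min\{1,[\Psi_T({\bf R}_i+{\bf \Delta R})/\Psi_T({\bf R}_i)]^2\}$):

```python
import numpy as np

def node_cross_moves(walkers, psi_t, dtau_prime, rng):
    """Propose drift-free Gaussian moves (sigma^2 = dtau') after a sub-block,
    accepting only moves for which Psi_T changes sign.

    psi_t : callable returning Psi_T(R) for a single configuration (hypothetical)
    """
    for i, r in enumerate(walkers):
        dr = rng.normal(scale=np.sqrt(dtau_prime), size=r.shape)
        old, new = psi_t(r), psi_t(r + dr)
        if old * new < 0:  # candidate moves must cross the node
            # standard Metropolis acceptance on |Psi_T|^2
            if rng.random() < min(1.0, (new / old) ** 2):
                walkers[i] = r + dr
    return walkers
```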
This ensures that i) the distribution of walkers remains proportional to $|\Psi_T({\bf R})|^2$ and ii) the average number of walkers in each pocket is proportional to the integral of $|\Psi_T({\bf R})|^2$ as the number of sub-blocks $M$ tends to $\infty$. Unequal fixed-node energies in inequivalent nodal pockets --------------------------------------------------------- A second complication of the fixed-node approach for the general case of excited states appears because small departures of $S_T({\bf R})$ from the exact nodes $S_n({\bf R})$ often result in inequivalent nodal pockets having fixed-node solutions with different fixed-node energies. When nodal pockets are not equivalent, a standard DMC algorithm will converge to a “single nodal pocket” population. In this case, the lowest energy pocket will contain all the walkers in a branching algorithm \[or all the significant weights ($W_i^j(k) \ne 0 $) in a weights scheme\]. Accordingly, the average energy sampled will correspond to the lowest energy nodal pocket, which will differ from the true excited-state energy (see Chapter 6 in Ref. and references therein). If the coefficients of an excited-state fixed-node wave-function are sampled with the same procedure used for the ground state[@keystone] \[see Eq. (\[eq:sampcoeff\])\], they would correspond to a function that is nonzero only in the class of nodal pockets with the lowest DMC energy and zero everywhere else. This function will not be, in general, orthogonal to the lower energy states. Moreover, this will result in kinks at the nodes separating the lowest energy nodal pockets from inequivalent ones in the wave-function sampled with Eq. (\[eq:sampcoeff\]). A first preventive measure against a single-pocket population is to avoid the limit $\tau \rightarrow \infty$ in Eqs. (\[eq:algorithmground\]) and (\[eq:psitnew\]), which replaces $|\Psi_{FN}^{\ell} \rangle $ by $e^{-k \delta \tau \mathcal{H}^{(\ell-1)}_{FN}} |\Psi_T^{\ell-1} \rangle $ in Eq. 
(\[eq:psitnew\]). As a result, $k$ in Eq. (\[eq:sampcoeff\]) is limited to small values, which brings all values of $W_i^j(k)$ closer to $1$. Since the approach is recursive, the limit $\tau \rightarrow \infty$ is reached as $\ell \rightarrow \infty$ (since successive applications of the algorithm are accumulated in $|\Psi_T^{\ell}\rangle$). In addition, to prevent the wave-function from falling into lower energy states, two techniques are used: i) direct projection and ii) unequal reference energies. Direct projection ----------------- While the trial wave-function can be forced to be orthogonal to the ground state, or any other excited state calculated before, the fixed-node wave-function can develop a projection onto lower energy states, because the DMC algorithm only requires $\Psi_{FN}({\bf R})$ to be zero at the nodes $S_{T}({\bf R})$. To prevent excited states from drifting into lower energy states, let me assume, for a moment, that approximate expressions for the excited states $ \langle{\bf R}|e^{\hat J}|\breve\Phi_n\rangle = \Psi_n({\bf R})=e^{J({\bf R})}\breve \Phi_n({\bf R})$ with $n \le \nu$ can be obtained and used to build the projector $$\label{eq:orthogonality} \hat P = e^{\hat J}\left[ 1 - \sum_n^{\nu} |\breve \Phi_n\rangle \langle\breve \Phi_n^*|\right] e^{-\hat J} \; \;,$$ where the operator $e^{\hat J}$ is multiplication by a Jastrow. Since the $|\breve \Phi_n\rangle $ will be obtained statistically, they will have errors and will not form an orthogonal basis in general. Therefore, the $\langle\breve \Phi_n^*|$ are elements of the conjugate basis that satisfy $\langle\breve \Phi_n^*|\breve \Phi_m\rangle = \delta_{n,m}$. 
They can be constructed by inverting the overlap matrix $S_{n,m}= \langle\breve \Phi_n|\breve \Phi_m\rangle$ as $$\langle\breve \Phi_n^*|= \sum_m S^{-1}_{n,m} \langle\breve \Phi_m| \; .$$ Then, the extension of the self-healing algorithm to the next excited state $|\Psi_{\nu+1}\rangle$ can be rederived analytically as follows: $$\begin{aligned} \label{eq:SHDMCexcited} \nonumber |\Psi_{\nu+1}\rangle & = & \lim_{\tau \rightarrow \infty} \hat P \; e^{-\tau \hat\mathcal{H} }\hat P |\Psi_{T,\nu+1}^{\ell=0}\rangle \\ \nonumber & = & \lim_{\ell \rightarrow \infty} \hat P \; \prod_{\ell} \left( e^{-(\delta \tau^{\prime}+k \delta\tau ) \hat \mathcal{H}} \hat P \right) | \Psi_{T,\nu+1}^{\ell=0}\rangle \\ \nonumber & = & \lim_{\ell \rightarrow \infty} \hat P \; \prod_{\ell} \left( e^{-\delta\tau^{\prime} \hat \mathcal{H}} e^{-k \delta\tau \hat \mathcal{H}_{FN}^{(\ell-1)}} \hat P \right) | \Psi_{T,\nu+1}^{\ell=0}\rangle \\ & \simeq & \lim_{\ell \rightarrow \infty} \hat P \; \prod_{\ell} \left( \tilde D e^{-k \delta\tau \hat \mathcal{H}^{(\ell-1)}_{FN}} \hat P \right) | \Psi_{T,\nu+1}^{\ell=0}\rangle \\ \nonumber & = & | \Psi_{T,\nu+1}^{\ell \rightarrow \infty }\rangle.\end{aligned}$$ Equation (\[eq:SHDMCexcited\]) means that for any initial trial wave-function $|\Psi_{T,\nu+1}^{\ell=0}\rangle$ with $\hat P |\Psi_{T,\nu+1}^{\ell=0}\rangle \ne 0 $, one can obtain the next excited state $|\Psi_{\nu+1}\rangle$ recursively. The numerical implementation of the algorithm for excited states (see Section \[sc:algorithm\] for details) is almost identical to the ground state version[@keystone] with three differences: i) there is no branching and the product $k \delta \tau$ is chosen so that $W_i^j(k) \simeq 1 $ \[see Eq. (\[eq:sampcoeff\])\], ii) the projection of the vector of coefficients $\lambda_n$ onto the ones corresponding to eigenstates calculated earlier is removed with $\hat P$, and iii) some walkers cross the node after $k$ time steps (see above). Eq. 
(\[eq:SHDMCexcited\]) holds in the limit of $N_c \rightarrow \infty$, $\delta \tau \rightarrow 0$, $\delta \tau^{\prime} \rightarrow 0$, $\ell k \delta \tau \rightarrow \infty$, and $\ell \delta \tau^{\prime} \rightarrow \infty$. In the derivation of Eq. (\[eq:SHDMCexcited\]), the following properties were used: $\hat P^2=\hat P$ and $[\hat \mathcal{H},\hat P] \simeq 0$. In Ref. it was shown that, under certain conditions, $$S \left[ e^{- \delta\tau^{\prime} \hat \mathcal{H} } e^{-k \delta\tau \hat \mathcal{H}^{(\ell-1)}_{FN}} \hat P | \Psi_T^{\ell}\rangle \right ] \simeq S \left[ \tilde D e^{-k \delta\tau \hat \mathcal{H}^{(\ell-1)}_{FN}} \hat P | \Psi_T^{\ell}\rangle \right ] \; ;$$ that is, the nodes of the two functions in the brackets are approximately the same. Note that the second term in the brackets of Eq. (\[eq:orthogonality\]) has precisely the form given in Eq. (\[eq:deltaexp\]). By construction, this term would generate a function with nodes corresponding to a linear combination of lower energy eigenstates. The projector $\hat P$, instead, excludes any change in the wave-functions introduced by the projection and sampling operator $\tilde D$, or by $e^{-\tau \mathcal{H}^{(\ell-1)}_{FN}} $, in the direction of lower energy wave-functions (which includes their nodes). Adjusting the reference energy in each nodal pocket --------------------------------------------------- If walkers on one side of the node have more weight than on the other (because of inequivalent pockets with different fixed-node energies), the propagated wave-function obtained by sampling the walkers will be multiplied by a larger (smaller) factor on the low (high) energy side of the nodal surface. This generates an additional contribution to the kink at the node that, when locally smoothed, increases (diminishes) the volume of the lower (higher) energy pockets. 
This, in turn, will have an impact on the kinetic energy: due to quantum confinement effects, the difference in fixed-node energies will increase in the next iteration. This very interesting effect in fact acts to our advantage by helping us to find the ground state even when starting from a very poor wave-function. [@keystone] For excited states, this effect is prevented by i) limiting the maximum value of $k$ and ii) the projector $\hat P$ in Eq. (\[eq:SHDMCexcited\]). However, the eigenstates $|\Psi_n\rangle$ will have statistical errors that can create systematic errors in the higher states. To partially prevent these errors, and to limit the number of orthogonality constraints, the energy reference can be changed in order to invert this contribution to the kink to our advantage. While a single reference energy $E_T$ can still be used for the DMC run in each block, the projectors of Eq. (\[eq:sampcoeff\]) are redefined using a reference energy dependent on the nodal pocket. In addition, following a suggestion of C. Umrigar, [@umrigar_private] the change in the coefficients $\delta \lambda_n $ is sampled instead of the total value $ \lambda_n $. $$\begin{aligned} \label{eq:sampnew} \lambda_n^{\ell} &= & \lambda_n^{\ell-1}+ \langle \delta \lambda_n \rangle \\ \nonumber \langle \delta \lambda_n \rangle & = & \frac{1}{N_c} \sum_{i=1}^{N_c} (W_i^j(k) e^{-\beta \left[ E_{T}-\bar E_i^j(j_0) \right]k \; \delta\tau } -1)\; \xi_n^*({\bf R}_i^j) \; \gamma({\bf R}_i^j) \; ,\end{aligned}$$ where $\beta$ is an adjustable parameter and $$\bar E_i^j(j_0) = \frac{\sum_{m=j_0}^j W_i^m(k)\gamma({\bf R}_i^m) E_L({\bf R}^m_i)} {\sum_{m=j_0}^j W_i^m(k)\gamma({\bf R}_i^m) }$$ is the weighted average of the local energy during the lifetime of the walker $i$ since the start of the block or the last time it crossed the node at step $j_0$. If $\beta = 1 $ is selected in Eq. 
(\[eq:sampnew\]), the factor $e^{-\beta \left[ E_{T}-\bar E_i^j(j_0) \right] k \; \delta\tau}$ simply replaces $E_{T}$ by $\bar E_i^j(j_0)$ in the definition of the weights \[see Eq. (\[eq:weights\])\]. The energy $\bar E_i^j(j_0)$ for $j-j_0 \gg k$ is expected to converge to the fixed-node energy of the nodal pocket where the walker $i$ is trapped; however, only the last two-thirds of the block are used to accumulate values, to allow $\bar E_i^j(j_0)$ to equilibrate. It was argued before that, for $\beta = 0 $, the differences in the fixed-node energies of neighboring nodal pockets create a contribution to the kink that, when locally smoothed, increases the volume of nodal pockets with low fixed-node energy. For $\beta > 1$, it is likely that this contribution to the kink is inverted, so that the volume of the lower (higher) energy pockets is reduced (increased) by the smoothing function (\[eq:deltaexp\]). Therefore, it can be assumed that a value of $\beta > 1$ should stabilize the higher energy nodal pockets, increasing their volume and, thus, reducing their energy. This process will stop when the fixed-node energy of all nodal pockets becomes equal. Note that by introducing this artificial contribution to the kink, one may stabilize some nodal structures, preventing nodal fluctuations that reduce the energy of one nodal pocket at the expense of the others. However, fluctuations that lower the energy of every nodal pocket are not prevented. Therefore, if several eigenstates have the same nodal topology, higher energy states could drift into lower energy ones if orthogonality constraints \[see Eq. (\[eq:orthogonality\])\] are not imposed. Finally, note that choosing $\beta > 1$ can also cause problems if the quality of the wave-function is not good or if the statistics are poor. For example, a small statistical fluctuation in the values of $\lambda_n$ could create a new nodal pocket with high energy. 
In successive blocks (as $\ell$ increases), this pocket will grow at the expense of the others, causing the total energy to rise. SHDMC algorithm for excited states {#sc:algorithm} ================================== A basis of $\Phi_n({\bf R})$ must be constructed, taking advantage of all the symmetries of $\hat \mathcal{H}$.[@fn:permutations] The $\Phi_n({\bf R})$ should be selected to be eigen-functions of a noninteracting many-body system [@keystone] belonging to the same irreducible representation for every symmetry group of $\hat \mathcal{H}$. The calculation must be repeated for each irreducible representation. Note that the same algorithm is used for bosons or fermions: the only difference is the basis used to expand the wave-functions. The calculation of excited states with SHDMC is composed of a sequence of blocks. Each block $\ell$ has $M$ sub-blocks with $k$ standard DMC steps. The basic algorithm is the following: 1. An initial set of coefficients for the expansion of the trial wave-function is selected. 2. The changes $\delta \lambda_n$ are accumulated \[see Eqs. (\[eq:xi\]) and (\[eq:sampnew\])\] at the end of each sub-block. Some walkers near the node can cross it at the end of each sub-block. 3. At the end of each block $\ell$, the error in $\delta \lambda_n$ is evaluated. If this error is larger than 25% of $\lambda_n + \delta \lambda_n$, then $\lambda_n $ is set to zero; [@keystone] otherwise, $\lambda_n $ is set to $\lambda_n + \delta \lambda_n$. 4. A new trial wave-function is constructed at the end of each block $\ell$ using the new values of the coefficients sampled after removing with $\hat P$ the projection into eigenstates calculated earlier. 5. If the scalar product between the vector of new $\delta\lambda_n$ with the one obtained in the previous block ($\ell-1$) is positive, the number of sub-blocks $M$ is increased by one. Otherwise, $M$ is [*multiplied*]{} by a factor larger than one (e.g., $1.25$). 
This factor increases the statistics, reducing the impact of noise. [@fn:changes] 6. Steps 2-6 are repeated until the variance of the weights $W_i^j(k)$ is smaller than a prescribed tolerance (see Fig. \[fg:variance\] in Section \[sc:modeltest\]). 7. The projector $\hat P$ is updated to include the new excited state. 8. Steps 1-7 are repeated until a desired number of excited states is obtained. Remarks ------- Some points about the application of the algorithm should be addressed before discussing the results. - In this paper, to test the method, intentionally poor trial wave-functions have been selected as a starting point. Good initial wave-functions and a good Jastrow are advised in real production runs on large systems. Methods to select good initial trial wave-functions will be discussed elsewhere. - Time-step errors and, in particular, persistent walker configurations[@umrigar93] can cause significant problems. When this happens, it often results in an increase in the error bar of every $\lambda_n$, which causes a large reduction in the number of coefficients retained in the trial wave-function. This problem is avoided in the algorithm by discarding the entire block if a 50% reduction in the number of basis functions retained is detected. Nevertheless, if the quality of the initial $\Psi_T({\bf R})$ is poor, it is strongly recommended to reduce the time step $\delta \tau$. As the quality of the wave-function improves with successive iterations, one can increase $\delta\tau$. For fast convergence, $\sqrt{k \;\delta\tau}$ should be of the order of the interparticle distance. - As a strategy, it is better to run at first using $\beta=0$ in Eq. (\[eq:sampnew\]), including every state calculated before in $\hat P$ \[see Eq. (\[eq:orthogonality\])\]. Once the wave-function $\Psi_T({\bf R})$ is converged, one can set $\hat P=1$ and $\beta=1$ and monitor whether $\Psi_T({\bf R})$ evolves into a subset of lower energy states. 
To prevent the propagation of errors from every lower energy state included in $\hat P$ into the next excited state, a run including only this subset in $\hat P$ can be performed. - To obtain accurate total energies, a long run with large $k$ is required (this is almost a standard DMC run). - SHDMC should not be used blindly as a library routine. The calculation of excited states with SHDMC is a task that will probably remain limited to quantum Monte Carlo experts. While, in contrast, approximate density-functional methods have become very easy to use, it is not quite clear to the author that requiring expertise and a deep understanding is a disadvantage. Any new code using SHDMC should be tested on a small system where analytical solutions or results from an alternative approach[@umrigar07] are available. The comparison with a soluble model is presented in the next section. Applications to Model Systems {#sc:modeltest} ============================= This section compares excited-state calculations using the SHDMC methods described above with full configuration interaction (CI) calculations for the model system used in Refs. and . Briefly, the lower energy eigenstates are found for two electrons moving in a two-dimensional square of side length $1$ with a repulsive interaction potential of the form[@units] $V({\bf r},{\bf r^{\prime}}) = 8 \pi^2 \gamma \cos{[\alpha \pi(x-x^{\prime})]}\cos{[\alpha \pi(y-y^{\prime})]}$ with $\alpha= 1/ \pi$ and $\gamma = 4$. The many-body wave-function is expanded in functions $\Phi_n({\bf R})$ that are eigenstates of the noninteracting system. The $\Phi_n({\bf R})$ are linear combinations of functions of the form $\prod_{\nu} \sin(m_{\nu} \pi x_{\nu})$ with $m_{\nu} \le 7$. Full CI calculations are performed to obtain a nearly exact expression of the lower energy states of the system $\Psi_n({\bf R})= \sum_m a_m^n \Phi_m({\bf R})$. We solve the problem for both the singlet and the triplet case. 
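For concreteness, the model interaction potential quoted above can be written as a short function (a sketch in the paper's units; the parameter values are those given in the text):

```python
import numpy as np

ALPHA = 1.0 / np.pi  # alpha = 1/pi, as in the text
GAMMA = 4.0          # gamma = 4

def interaction(r1, r2, alpha=ALPHA, gamma=GAMMA):
    """V(r, r') = 8 pi^2 gamma cos[alpha pi (x - x')] cos[alpha pi (y - y')]
    for two electrons in the unit square."""
    dx, dy = r1[0] - r2[0], r1[1] - r2[1]
    return (8.0 * np.pi**2 * gamma
            * np.cos(alpha * np.pi * dx) * np.cos(alpha * np.pi * dy))
```

With $\alpha = 1/\pi$ the cosines reduce to $\cos(x-x')\cos(y-y')$, so the repulsion is maximal, $32\pi^2$, when the two electrons coincide.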
The singlet state of this system is bosonic-like, since the ground state wave-function has no nodes. The lowest energy excitations of the noninteracting problem $\Phi_n({\bf R})$ that have the same symmetry (that is, that are invariant under exchange of particles and under all symmetry operations of the group $D_4$) are selected to expand $\hat \mathcal{H}$. For the case of the triplet, the wave-function must change sign under permutations of the particles. The ground state is, however, degenerate (it belongs to the $E$ representation of $D_4$). The $E$ representation can be described by wave-functions that are even (odd) under reflections in $x$ and odd (even) under reflections in $y$. We choose the wave-functions that are odd in the $x$ direction, which belong to a $D_2$ subgroup of the $D_4$ symmetry. For more details on the triplet ground state calculations, see Refs. and . To facilitate the comparison with the full CI results, the projectors $\xi_n({\bf R})$ are constructed with the same basis functions used in the CI expansion. For the same reason, no Jastrow function is used \[$J=0$ in Eq. (\[eq:xi\])\]. To test the method, poor initial trial wave-functions are intentionally chosen as follows: For the ground state, the lowest energy function of the noninteracting system is selected. For the $n^{th}$ ($n = \nu+1$) excited state, the initial trial wave-function $ |\Psi_{T,n}^{\ell=0}\rangle $ was constructed by completing the first $\nu$ columns of a determinant with the first $\nu+1$ coefficients of the $\nu$ eigenstates calculated before. Subsequently, the vector of cofactors of the last column was calculated. The coefficients of this vector are used to construct a trial wave-function orthogonal to all the eigenstates calculated earlier. ![(Color online) Self-healed DMC run obtained for successive eigenstates belonging to the $A_1$ (trivial) irreducible representation of the group $D_4$ in the singlet state. Black lines denote the average value of the local energy. 
The horizontal blue dashed lines mark the energy of the corresponding excitation in the full CI calculation. \[fg:dmcCIrun:sglex\]](Fig1rockandroll.eps){width="1.00\linewidth"}

Figure \[fg:dmcCIrun:sglex\] shows the results of successive SHDMC runs for the singlet ground state and the next $8$ excitations that belong to the same symmetry (total spin $S=0$ and irreducible representation $A_1$ of the group $D_4$). The SHDMC calculations were done using $N_c = 200$ walkers with a sub-block length $k=50$, a time step[@units] $\delta \tau = 0.0002$, $\delta \tau^{\prime} = 0.002$ (for the ground state $\delta \tau^{\prime} = 0$), and $\beta =1$ in Eq. (\[eq:sampnew\]). The lines in Fig. \[fg:dmcCIrun:sglex\] join the values obtained for the weighted average of the local energy $E_L({\bf R})$ at each time step. The horizontal dashed lines mark the energy of the nearly analytical result obtained with full CI. The agreement between SHDMC and full CI is extremely good. As higher energy eigenstates are calculated, however, and the number of nodal pockets and nodal surfaces increases, time-step errors start to play a dominant role. In particular, for the $9^{th}$ excitation (not shown) $\delta \tau$ must be reduced. The occasional peaks (or drops) observable in the data are correlated with the updates of $\Psi_T({\bf R})$, and their reduction also reflects a systematic improvement in the trial wave-function. At the end of each block, the trial wave-function coefficients $\lambda_n$ are updated and all weights are reset to 1. The weights gradually reach equilibrium values as new energies are sampled, completing a sub-block of length $k$. As a result, at the beginning of each block, the energy sampled is the average of the trial wave-function energy, which is often different from the DMC energy sampled thereafter (though it can be smaller or larger for a bad trial wave-function with small $N_c$).
One interesting result is that some orthogonality constraints are not required to obtain some excited states. This is the case, for example, of the first excited state calculated with $\beta = 1$. This is presumably because the number of nodal pockets is different for the excited state and the ground state, and the decay path from the first excited state to the ground state is obstructed by the formation of a kink between inequivalent nodal pockets if a value of $\beta \approx 1$ is used. This is also the case for states $6$ and $7$, which were obtained [*before*]{} state 5 despite the fact that they have higher energy. A similar effect is observed in some triplet excitations. Due to the choice of initial trial wave-function and the kink induced by $\beta=1$, the $3^{rd}$ excitation is found before the $2^{nd}$, and the $5^{th}$ is obtained before the $2^{nd}$ and the $4^{th}$. This interesting effect disappears if $\beta=0$ is chosen. Table \[tb:overlap\] shows the logarithm of the residual projection $$\label{eq:lrp} L_{rp}=\log\left(1-|\langle\Psi_n^{CI}|\Psi_n\rangle| \right)$$ of the excited state wave-function $|\Psi_n\rangle$ sampled with SHDMC onto the corresponding full CI result $|\Psi_n^{CI}\rangle$ as a function of the number of iterations for different eigenstates. The states are ordered as they first appear in the calculation. In addition, Table \[tb:overlap\] compares the values of the eigen-energies obtained with CI and SHDMC. The agreement is very good. In some cases the difference is larger than the error bar, which might signal that small nodal errors remain. Note that there is no upper-bound theorem for excited states, only for the ground state within an abelian irreducible representation.[@foulkes99]

  State   Spin   Rep.    $L_{rp}$ (a)   $L_{rp}$ (b)   $L_{rp}$ (c)   CI Energy   SHDMC Energy
  ------- ------ ------- -------------- -------------- -------------- ----------- ---------------
  0       S      A$_1$                  -14.84         -15.05         328.088     328.089 (2)
  1       S      A$_1$                  -6.80          -8.85          374.106     374.103 (6)
  2       S      A$_1$                  -7.23          -8.69          409.960     409.954 (3)
  3       S      A$_1$                  -4.42          -6.07          418.508     418.66 (2)
  4       S      A$_1$                  -3.65          -5.01          454.630     454.84 (2)
  6       S      A$_1$   -.–            -4.85          -6.22          477.019     477.100 (5)
  7       S      A$_1$                  -3.90          -5.26          492.216     491.98 (1)
  5       S      A$_1$                  -5.60          -6.17          468.854     468.845 (13)
  8       S      A$_1$                  -5.09          -6.49          503.805     503.92 (1)
  0       T      E                      -8.49          -8.71          342.137     342.191 (5)
  1       T      E                      -4.37          -4.35          385.908     387.80 (1)
  3       T      E                      -3.06          -3.35          422.670     423.60 (2)
  5       T      E                      -4.04          -5.48          438.791     438.70 (1)
  2       T      E                      -2.31          -2.31          411.887     416.07 (1)

  : Values obtained for $L_{rp}$ \[see Eq. (\[eq:lrp\])\] for a total of (a) $4 \times 10^4$, (b) $8 \times 10^4$, and (c) $12 \times 10^4$ DMC steps, and the corresponding eigen-energies for two electrons in a square box with a model interaction. The logarithm of the residual projection $L_{rp}$ of the SHDMC wave-function onto the corresponding full CI result is given for different eigenstates belonging to the same symmetry as the ground state, as a function of the number of steps used to sample the wave-function. The states are listed in the order they were obtained. \[tb:overlap\]

![(Color online) Logarithm of the residual projection \[see Eq. (\[eq:lrp\])\] for the ground (square), first (diamond), second (up triangle) and third (down triangle) eigenstates with $A_1$ symmetry and S=0. \[fg:lrp\]](Fig2rockandroll.eps){width="1.\linewidth"}

Figure \[fg:lrp\] shows $L_{rp}$ at the end of each block for the ground state and low-lying excitations of the system as a function of the total number of SHDMC steps. The calculations were done by first running $\sim\!40\,000$ SHDMC steps for each eigenstate before starting the calculation of the next. Subsequently, an additional set of $\sim\!40\,000$ SHDMC steps was run, improving the projector $\hat P$.
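The residual-projection diagnostic of Eq. (\[eq:lrp\]) is a one-line computation on the coefficient vectors; a minimal sketch (the base-10 logarithm and the function name are assumptions of this sketch):

```python
import numpy as np

def residual_projection(psi, psi_ci):
    """L_rp = log(1 - |<Psi_CI|Psi>|) for coefficient vectors (base 10 assumed)."""
    psi = np.asarray(psi, dtype=float) / np.linalg.norm(psi)
    psi_ci = np.asarray(psi_ci, dtype=float) / np.linalg.norm(psi_ci)
    return np.log10(1.0 - abs(psi_ci @ psi))
```

With a base-10 logarithm, a value near $-15$ corresponds to an overlap within $10^{-15}$ of unity, i.e., a sampled state essentially identical to the CI reference.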
The kinks in the data around $\sim\!40\,000$ steps are due to the changes in the coefficients of the lower energy states involved in $\hat P$ \[see Eq. (\[eq:orthogonality\])\]. One important conclusion of Table \[tb:overlap\] and Figure \[fg:lrp\] is that errors in the determination of lower energy states calculated earlier propagate only “locally” because of the orthogonality constraints in Eq. (\[eq:orthogonality\]). This error does not have a strong impact on much higher energy excitations, apparently because each newly calculated excitation tends to occupy the part of the Hilbert space left vacant by the lower excitations due to their statistical error. This is clear, for example, for the $5^{th}$ and $8^{th}$ excitations, which have an error much smaller than several excitations calculated earlier (e.g., the $3^{rd}$ and $4^{th}$). The error in the $3^{rd}$ and $4^{th}$ excitations is mainly due to mixing among themselves. This result is important because it means that the present method can be used to calculate several higher excitations in spite of the errors in lower energy ones.

![(Color online) Change in the values of the multi-determinant expansion as the DMC self-healing algorithm progresses for the $5^{th}$ excited state of the singlet state of $A_1$ symmetry. Light gray colors denote older coefficients, whereas darker ones denote more converged results. The full CI results are highlighted in small red diamonds.\[fg:dmcsltex\]](Fig3rockandroll.eps){width="1.\linewidth"}

Figure \[fg:dmcsltex\] shows the evolution of the values of the coefficients $\lambda_n^{\ell}$ of $|\Psi_T^{\ell}\rangle$ as a function of the coefficient index $n$ for the $5^{th}$ excited state corresponding to the singlet configuration of the $A_1$ representation of the group $D_4$. The shade of gray is light for the older (small $\ell$) coefficients and deepens to black for the final results (large $\ell$).
The calculation started from a trial wave-function orthogonal to the states calculated before, as described above. The coefficients of the wave-function sampled with SHDMC overlap with the ones obtained with full CI (see Table \[tb:overlap\]). Similar results are obtained for all the other excited states calculated. An important observation is that the coefficients $\lambda_n$ evolve continuously towards the exact solution, which suggests the possibility of accelerated algorithms that extrapolate the values of $\delta \lambda_n$. Some eigenstates are significantly more difficult to calculate than others. This is typically the case for eigenstates with similar eigenvalues (e.g., the $6^{th}$ excitation in the singlet case). A bigger challenge, however, arises when $E_L({\bf R})$ is ill behaved, as for the $2^{nd}$, $4^{th}$, and $6^{th}$ excitations of the triplet state. Even the full CI wave-function with 300 basis functions has a large variance of $E_L({\bf R})$. In that case the coefficients obtained with SHDMC and CI are different. This is because the two methods minimize different quantities: CI minimizes $\langle\Psi_n|(\hat{\mathcal{H}}-E_n)^2|\Psi_n\rangle$ on a truncated basis, and SHDMC minimizes $\int E_L({\bf R}) f({\bf R},\tau)\, d{\bf R}$ with $\langle \Psi_T | \hat P | \Psi_T \rangle = \langle \Psi_T | \Psi_T \rangle$. Accordingly, the fact that the results are different indicates that neither calculation, CI or SHDMC, is converged with the basis chosen. The $4^{th}$ and $6^{th}$ excitations with $E$ symmetry in the triplet case obtained with SHDMC are a linear combination of the corresponding ones in full CI. ![\[fg:drop\] (Color online) Average of the local energy $E_L({\bf R})$ as a function of the number of DMC time steps for two SHDMC runs with $\hat P= 1$ starting from a converged trial wave-function corresponding to the $8^{th}$ singlet excitation of $A_1$ symmetry with a) $\beta = 1.05$ and b) $\beta =0$ in Eq.
(\[eq:sampnew\]). The dotted lines mark the beginning of some of the fixed-node DMC blocks of the SHDMC run for the $\beta =0$ case. Same conventions as in Fig. \[fg:dmcCIrun:sglex\].](Fig4rockandroll.eps){width="1.\linewidth"}

Figure \[fg:drop\] shows the effect of $\hat P$ and $\beta$ \[see Eq. (\[eq:sampnew\])\] on a SHDMC run. The figure shows the average of the local energy $E_L({\bf R})$ for two calculations that start from the final trial wave-function obtained for the $8^{th}$ singlet excitation with $A_1$ symmetry (compare with Fig. \[fg:dmcCIrun:sglex\]). Both calculations were run with the same parameters as in Fig. \[fg:dmcCIrun:sglex\] with two exceptions: i) $\hat P= 1$ was used, which removes the orthogonality constraints, and ii) one calculation was run with $\beta = 1.05$ and the other with $\beta=0$ in Eq. (\[eq:sampnew\]). An initial number of blocks $M=20$ was used. Both calculations depart from the initial configuration. However, the run with $\beta = 0$ falls very quickly to the singlet ground state, whereas the calculation with $\beta=1.05$ remains much longer in the vicinity of the $8^{th}$ excitation. This clearly shows the stabilizing effect of unequal energy references on excited states. Since the $8^{th}$ excitation is presumably not the minimum of its nodal topology, it finally drifts away. For the $\beta=1.05$ case with $\delta\tau =0.0002$, the algorithm becomes numerically unstable to noise after $\sim\!50\,000$ time steps because the variance in the distribution of the walker weights increases and the statistics is dominated by a reduced number of walkers. In contrast, the first excitation does not drift with $\beta \simeq 1$ and $\hat P = 1$ (not shown).
Coulomb interaction results and discussion
------------------------------------------

![\[fg:coulomb\] (Color online) Average of the local energy $E_L({\bf R})$ of 200 walkers as the SHDMC algorithm converges to the ground, first, and second eigenstates with $A_1$ symmetry and S=0 of two electrons with Coulomb interactions in a square box.](Fig5rockandroll.eps){width="1.\linewidth"}

The use of a simplified electron-electron interaction facilitates the CI calculations and the validation of the optimization method. However, it is also important to test the convergence and stability of the method with a realistic Coulomb interaction, as was done for the ground state.[@keystone] The results shown in this section were obtained with an interaction potential of the form[@units] $V({\bf r},{\bf r}^{\prime}) = 20 \pi^2 /|{\bf r}-{\bf r}^{\prime}|$ as in Ref. . To mimic the difficulties that the algorithm would have to overcome in larger or more realistic systems, the Jastrow term is [*not*]{} included, i.e., $J=0$. Most SHDMC parameters are the same as in the model interaction case. All calculations with Coulomb interactions were run with $\beta=0$, the initial number of sub-blocks $M=6$, and the time step reduced to $\delta \tau = 0.0001$. The initial trial wave-functions were selected with the criteria used for the model case. Figure \[fg:coulomb\] shows the average of the local energy $E_L({\bf R})$ obtained for the ground state and the first two excitations with the same symmetry (singlet $A_1$). The results are qualitatively similar to those obtained with the model potential. It is evident from the data that the variance of $E_L({\bf R})$ and its average are reduced as the wave-function is optimized. Occasionally, $E_L({\bf R})$ might rise when $\hat P$ is updated (improving the description of lower energy states). The energy of the singlet ground state is 400.749 $\pm$ 0.013, which is only slightly smaller than the lowest triplet energy[@keystone] of 402.718 $\pm$ 0.008 with symmetry $E$.
These energies are very close because the Coulomb repulsion dominates over the kinetic energy, which forces the particles to be well separated, so that the cost of a node in the triplet state is small. This result is consistent with the choice of parameters that sets the system in the highly correlated regime. The energies obtained for the first and second excitations are[@units] $468.56 \pm 0.09$ and $515.50 \pm 0.08$, respectively.

![(Color online) Logarithm of the variance of the walker weights as a function of the SHDMC block index $\ell$ for the $2^{nd}$ excitation with $A_1$ symmetry with Coulomb interaction (see Fig. \[fg:coulomb\]). The lines are visual guides. \[fg:variance\]](Fig6rockandroll.eps){width="1.00\linewidth"}

While Figs. \[fg:dmcCIrun:sglex\] and \[fg:coulomb\] are qualitatively similar, the results shown in Fig. \[fg:dmcCIrun:sglex\] are more convincing since they are directly compared with full CI calculations and they are less noisy, as noted by one referee. When the model interaction potential is replaced by a Coulomb interaction, full CI calculations are still possible, but they involve the numerical calculation of $16471$ integrals with Coulomb singularities. CI calculations are typically done using a Gaussian basis,[@dupuis] which mitigates the impact of these singularities on the matrix-element integrals. However, as the size of the system increases, CI calculations become numerically too expensive. Accordingly, self-reliant methods to validate the quality of the SHDMC wave-functions must be developed. As noted earlier, in a fixed-population scheme the weights contain all the difference between $f({\bf R},\tau)$ and $|\Psi_T({\bf R})|^2$. Since $f({\bf R},\tau)$ and $|\Psi_T({\bf R})|^2$ should be equal if $\Psi_T({\bf R})$ is an eigenstate, the variance of the weights can be used to measure the quality of the wave-function.
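A minimal sketch of this weight-variance diagnostic (the function name and the normalization by the total number of weights are choices of this sketch, not the paper's exact normalization):

```python
import numpy as np

def weight_variance_log(weights):
    """Log of the rms deviation of walker weights from 1.

    If the trial state were an exact eigenstate, f(R, tau) = |Psi_T(R)|^2 and
    all weights would stay at 1; larger values signal a poorer wave-function.
    """
    w = np.asarray(weights, dtype=float)
    return np.log(np.sqrt(np.mean((w - 1.0) ** 2)))
```

Tracking this quantity over blocks, as in Fig. \[fg:variance\], gives a CI-free measure of how close the sampled trial state is to an eigenstate.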
Figure \[fg:variance\] shows the evolution of the logarithm of the variance $L_{var}$ of the walker weights $W_i^j(k)$ \[see Eq. (\[eq:weights\])\] as a function of the SHDMC block index $\ell$. $L_{var}$ is evaluated as $$\label{eq:logvar} L_{var} = \log{\sqrt{\frac{1}{N_c}\sum_{i,j} \left(W_i^{j}(k) -1\right)^2 }} \; .$$ Using a linear-order expansion in $\delta\tau$ in Eq. (\[eq:weights\]) together with Eq. (\[eq:enaverage\]), it is straightforward to relate Eq. (\[eq:logvar\]) to the variance of $E_i^j(k)$, which is an average of $E_L({\bf R})$. A common measure of the quality of a ground state wave-function is the variance of $E_L({\bf R})$. The results shown in Fig. \[fg:variance\] correspond to the $2^{nd}$ singlet excitation with $A_1$ symmetry (see Fig. \[fg:coulomb\]). Similar results are obtained for the ground state and the first excitation (not shown). The error bar in $L_{var}$ is smaller than the size of the symbols. The fluctuations in $L_{var}$ result from the random fluctuations of the coefficients $\lambda_n$, which are obtained statistically. Note that in spite of the noise, a clear trend shows the improvement of the quality of the wave-function and $E_T$ as the SHDMC algorithm progresses. However, these improvements are not uniform, which is reflected by the oscillations in $L_{var}$ in Fig. \[fg:variance\] and in the amplitude of $E_L({\bf R})$ in Fig. \[fg:coulomb\]. A careful user of SHDMC should track $L_{var}$ and use the best-quality wave-function to calculate energies and $\hat P$.

Summary {#sc:discussion}
=======

An algorithm to obtain the approximate nodes, wave-functions, and energies of arbitrary low-energy eigenstates of many-body Hamiltonians has been presented. This algorithm is a generalization of the “simple” self-healing diffusion Monte Carlo method developed for the calculation of the ground state of fermionic systems,[@keystone] which in turn is built upon the standard DMC method.
[@ceperley80] At least in the case of the tested system, wave-functions and energies that continuously approach fully converged configuration interaction calculations can be obtained, limited only by the computational time. The wave-function, in turn, allows the calculation of any observable. It is found that some special eigenstates, presumably the minimum-energy eigenstates for a given nodal topology, can be obtained without calculating the lower excitations by artificially generating a kink in the propagated function using unequal energy references in different nodal pockets. The present method can be implemented easily in existing codes. Ongoing tests of the ground state method[@keystone] in larger systems give serious hope[@fn:tests] that the current generalization will also be useful. While there are methods to obtain the excitation spectra of a many-body Hamiltonian in a variational Monte Carlo context,[@kent98; @umrigar07] they require obtaining the Hamiltonian and overlap matrix elements. This requirement would present a challenge for very large systems. SHDMC is a complementary technique that could potentially scale better for larger sizes. The evaluation and storage of the matrix elements of $\hat{\mathcal{H}}$ is not required. The number of quantities sampled \[the projectors $\xi_n ({\bf R})$, Eq. (\[eq:xi\])\] is equal to the number of basis functions $n_b$. In contrast, energy minimization methods or configuration interaction (CI) require the evaluation of $n_b^2$ matrix elements. In addition, the solution of a generalized eigenvalue problem with statistical noise is avoided. This can be an advantage in very large systems, since algorithms for eigenvalue problems are difficult to scale to take maximum advantage of large supercomputers. In contrast, the sampling of a large number of determinants can be trivially distributed over different processors.
Moreover, recent advances in determinant evaluation could facilitate sampling a very large number of projectors $\xi_n ({\bf R})$.[@nukala09] An apparent disadvantage of SHDMC is that the method is recursive. This disadvantage is partially removed since i) the number of blocks $M$ used to collect data is increased only if necessary to improve the wave-function significantly,[@fn:changes] and ii) the propagation to large imaginary times is avoided precisely by this recursive approach, which accumulates the propagation in successive blocks. In addition, a small value of $k \, \delta \tau$ limits large fluctuations in the weights, which have recently been claimed to cause an exponential cost in the convergence of DMC results.[@nemec09] The dominant cost of the present algorithm to obtain the wave-functions and their nodes scales as $N_e^3 \times n_{max} \times n_b \times n_{st}$, with $n_{max}$ being the number of excited states, $n_b$ the number of projectors $\xi_n({\bf R})$ sampled, and $n_{st}$ the total number of SHDMC steps. Of course, the error and the cost depend on the quality of the method used to construct $\Phi_n({\bf R})$ and the quality of the initial trial wave-functions. Systematic errors decrease when $n_b$ is large, and the statistical error decreases when $n_{st}$ increases. For a fixed absolute error, $n_b$ is expected to increase exponentially with the number of electrons $N_e$.[@keystone] Note that in order to describe an arbitrary wave-function of a system with $N_e$ electrons and a typical size $L$ in $D>1$ dimensions with a resolution $R_s$, one needs approximately $(L/R_s)^{(D\, N_e)}$ basis functions. The nodal surface alone requires $(L/R_s)^{(D\, N_e-1)}$ degrees of freedom. Therefore, finding an algorithm to obtain the nodes $S_n({\bf R})$ of any eigenstate $n$ with an arbitrary interaction in a time polynomial in $N_e$ is potentially a “Philosopher’s Stone” quest.
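The basis-size counting above can be made concrete with illustrative numbers (the function name and the chosen resolution are this sketch's assumptions):

```python
# Number of basis functions needed to describe an arbitrary wave-function of
# N_e electrons in D dimensions with resolution R_s in a box of size L:
# approximately (L / R_s)^(D * N_e).  Illustrative numbers only.
def basis_count(l_over_rs, d, n_e):
    return l_over_rs ** (d * n_e)

# With 10 resolvable points per direction in D = 3, every additional electron
# multiplies the basis size by 10^3:
growth = [basis_count(10, 3, n_e) for n_e in (1, 2, 3)]  # [1000, 1000000, 1000000000]
```

This is the exponential wall referred to in the text: the per-electron factor $(L/R_s)^D$ is fixed by the resolution, so no polynomial number of samples can cover the full basis for large $N_e$.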
However, if exponential factors actually control the accuracy of the DMC approach, as claimed,[@nemec09] even a rock-solid method to find the nodes that simultaneously improves the wave-function (reducing the population fluctuations) could be considered a satisfactory solution. The present work could be the basis of such a method. In ongoing work, SHDMC methods are being developed and tested in larger systems.

Acknowledgments {#acknowledgments .unnumbered}
===============

The author would like to thank C. Umrigar for suggesting the sampling of $\delta \lambda_n$ instead of the absolute value of the coefficients. The author also thanks R. Q. Hood, M. Bajdich, and P. R. C. Kent for a critical reading of the manuscript and for related discussions. Finally, the author thanks the anonymous referee who inspired the calculations presented in Figs. \[fg:drop\] and \[fg:variance\]. Research performed at the Materials Science and Technology Division, sponsored by the Department of Energy and the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

[99]{} P. Hohenberg and W. Kohn, Phys. Rev. [**136**]{}, B864 (1964). W. Kohn and L. J. Sham, Phys. Rev. [**140**]{}, A1133 (1965). L. Hedin, Phys. Rev. [**139**]{}, A796 (1965). D. M. Ceperley and B. J. Alder, Phys. Rev. Lett. [**45**]{}, 566 (1980). W. M. C. Foulkes, R. Q. Hood, and R. J. Needs, Phys. Rev. B [**60**]{}, 4558 (1999). J. B. Anderson, Int. J. Quantum Chem. [**15**]{}, 109 (1979). P. J. Reynolds, D. M. Ceperley, B. J. Alder, and W. A. Lester, J. Chem. Phys. [**77**]{}, 5593 (1982). F. A. Reboredo, R. Q. Hood, and P. R. C. Kent, Phys. Rev. B [**79**]{}, 195117 (2009). D. M. Ceperley, J. Stat. Phys. [**63**]{}, 1237 (1991). F. A. Reboredo and P. R. C. Kent, Phys. Rev. B [**77**]{}, 245110 (2008). Understanding sections I, II, III, V and VI of Ref. is required before reading this article.
The energy unit is $\hbar^2/(2 m)$. C. J. Umrigar, M. P. Nightingale, and K. J. Runge, J. Chem. Phys. [**99**]{}, 2865 (1993). Note that the limit $\epsilon \rightarrow 0$ is taken after the amplitude of the potential tends to $\infty$. Thus, this potential does not have the $\delta({\bf R})$ form, and every eigenstate of $\mathcal{H}_{FN}$ must be zero at $S_T({\bf R})$. The Jastrow factor does not change the nodes but accelerates convergence and improves the algorithm’s numerical stability. The ground state $|\Psi_0\rangle$ is formally obtained by applying the evolution operator $e^{-\tau \hat{\mathcal{H}}}$ to a trial wave-function $|\Psi_T^{\ell=0}\rangle$ in the limit $\tau \rightarrow \infty$. S. Vitiello, K. Runge, and M. H. Kalos, Phys. Rev. Lett. [**60**]{}, 1970 (1988). R. Bianchi, D. Bressanini, P. Cremaschi, M. Mella, and G. Morosi, J. Chem. Phys. [**98**]{}, 7204 (1993). R. Bianchi, D. Bressanini, P. Cremaschi, M. Mella, and G. Morosi, Int. J. Quant. Chem. [**57**]{}, 321 (1996). B. L. Hammond, W. A. Lester, Jr., and P. J. Reynolds, [*Monte Carlo Methods in Ab Initio Quantum Chemistry*]{} (World Scientific, Singapore, 1994). D. M. Ceperley and B. Bernu, J. Chem. Phys. [**89**]{}, 6316 (1988). R. N. Barnett, R. P. Reynolds, and W. A. Lester, J. Chem. Phys. [**96**]{}, 2141 (1991). D. Blume, M. Lewerenz, P. Niyaz, and K. B. Whaley, Phys. Rev. E [**55**]{}, 3664 (1997). W. D. da Silva and P. H. Acioli, J. Chem. Phys. [**114**]{}, 9720 (2001). M. P. Nightingale and V. Melik-Alaverdian, Phys. Rev. Lett. [**87**]{}, 043401 (2001). A. Lüchow, D. Neuhauser, J. Ka, R. Baer, J. Chen, and V. A. Mandelshtam, J. Phys. Chem. A [**107**]{}, 7175 (2003). F. Schautz, F. Buda, and C. Filippi, J. Chem. Phys. [**121**]{}, 5836 (2004). C. J. Umrigar, J. Toulouse, C. Filippi, S. Sorella, and R. G. Hennig, Phys. Rev. Lett. [**98**]{}, 110201 (2007). W. Purwanto, S. Zhang, and H. Krakauer, J. Chem. Phys. [**130**]{}, 094107 (2009).
All symmetries of $\hat{\mathcal{H}}$ must be considered, which includes space group symmetries, spin, and particle permutations. C. Umrigar, private communication. A description of the benefits of his suggested improvement for the ground state will be published elsewhere. If the change in the wave-function coefficients is dominated by random noise, the scalar product between the old and the new $\delta\lambda_n$ can be negative and $M$ is multiplied by a factor larger than 1. M. Dupuis and J. A. Montgomery, J. Comput. Chem. [**14**]{}, 1347 (1993). P. K. V. V. Nukala and P. R. C. Kent, J. Chem. Phys. [**130**]{}, 204105 (2009). A SrLi dimer with $13$ electrons has been compared with energy minimization calculations.[@umrigar07] We also have a proof of principle for C$_{20}^{+2}$ (78 electrons and 700 determinants). P. R. C. Kent, R. Q. Hood, M. D. Towler, R. J. Needs, and G. Rajagopal, Phys. Rev. B [**57**]{}, 15293 (1998). N. Nemec, arXiv:0906.0501.
--- author: - | Joanna L. Karczmarek and Curtis G. Callan, Jr.\ Department of Physics, Jadwin Hall\ Princeton University\ Princeton, NJ 08544, USA\ title: Tilting the Noncommutative Bion ---
--- bibliography: - 'MassPartProd.bib' --- [**Vacua on the Brink of Decay**]{} [Guilherme L. Pimentel$^{1}$, Alexander M. Polyakov$^{2}$ and Grigory M. Tarnopolsky$^{3}$]{} *$^1$ Institute of Theoretical Physics, University of Amsterdam,\ Science Park 904, Amsterdam, 1098 XH, The Netherlands* *$^{2}$ Joseph Henry Laboratories, Princeton University,\ Princeton, NJ 08544, USA* *$^{3}$ Department of Physics, Harvard University,\ Cambridge, MA 02138, USA* ------------------------------------------------------------------------ [**Abstract**]{}\ We consider free massive matter fields in static scalar, electric and gravitational backgrounds. Tuning these backgrounds to the brink of vacuum decay, we identify a term in their effective action that is singular. This singular term is universal, being independent of the features of the background configuration. In the case of gravitational backgrounds, it can be interpreted as a quantum mechanical analog of Choptuik scaling. If the background is tuned slightly above the instability threshold, this singular term gives the leading contribution to the vacuum decay rate. ------------------------------------------------------------------------ [*Dedicated to the memory of Ludwig Faddeev*]{}

Introduction
============

In this article, we study strong background fields which may be able to destroy their own environment. This happens when the mass gap of the Quantum Field Theory (QFT) in question, due to the external field, tends to zero and eventually becomes negative. We identify a universal singularity in the effective action of the background field, which signals instability of the vacuum as the mass gap vanishes. Background field configurations that lead to particle production are associated with the formation of a “horizon", i.e., a length scale at which it becomes energetically more favorable to produce particles than to sustain the field configuration.
This definition of the horizon is more general than the one usually discussed in the literature. For example, it implies the existence of electromagnetic horizons. As a typical example of an electric horizon, consider electrically charged particles of mass $m$ in a background electrostatic potential $a_t(x)$. It is clear that if the “voltage" $A\equiv a_t(+\infty)-a_t(-\infty)$ satisfies $A>2 m$, particles will be produced and not much will remain of the vacuum[^1]. For a gravitational background, the location of the horizon defined in this new way agrees with that of the causal horizon (see, e.g., [@Wald:1984rg]). Technically, the phenomenon of particle production can be diagnosed by calculating the vacuum decay rate, given by the imaginary part of the one-loop effective action obtained after integrating out massive matter fields. Many results are available for the effective action in a constant background field; for an incomplete list of references, see [@Schwinger:1951nm; @Parker:1969au; @Hawking:1974sw; @Gibbons:1977mu; @Savvidy:1977as; @Nielsen:1978rm; @Polyakov:2007mm], and [@Birrell:1982ix; @Dunne:2004nc] for reviews. In these examples, the horizon is always present[^2]. In order to study particle production for field configurations near the threshold, we must consider a gapped matter sector coupled to background fields. The mass gap acts as a barrier, preventing particle production for weak backgrounds. We would like to find terms in the effective action that become singular as we approach the particle-production threshold. In this regime, the effective action is real. If we dial the background field strength above the threshold, the effective action acquires an imaginary piece coming from the singular term. This imaginary part gives the vacuum decay rate. In this article, we consider different scenarios of strong background fields. We consider background scalar, electric and gravitational fields.
We couple these backgrounds to free massive scalar matter, and determine the singular terms in the one-loop determinant of the matter fields in the background geometry tuned to the vicinity of the threshold. The fixed backgrounds are not necessarily a solution of the source-free equations of motion; we assume that there are suitable sources that sustain the static background configuration, and focus on the quantum mechanical response of matter fields to the background. The electromagnetic threshold singularity might be experimentally testable in the near future by producing strong electric pulses with lasers. The gravitational threshold singularity is a quantum analogue of Choptuik scaling [@Choptuik:1992jv]. Choptuik numerically simulated the gravitational collapse of a distribution of dust particles. If the initial data is tuned above a critical value, the final state has a black hole. Choptuik recognized a remarkable scaling law in the mass of the black hole, as a function of how much above criticality the initial data is. For various setups of initial data, he found scaling laws for the black hole mass with the same exponent. Our result shares the same robustness to the shape of the gravitational potential. While the exact result for the effective action depends on the details of the background field, we argue that the threshold singularity is universal. The physical reason for that is the following: right above threshold, the first pair production event will be very soft, and the pair will have a very long wavelength. As the wavelength of the excitation is very long, it is not sensitive to the fine features of the background. For an early discussion of these ideas, see [@PiTPlec]. For recent work that partially overlaps with the results presented here, see [@Gies:2016coz], where universality of the particle production rate is found for electric fields slightly above threshold. 
Our below-threshold singularity, when extrapolated above the threshold, agrees with the result reported in [@Gies:2016coz].

#### Outline

In section \[s:threshold\], we compute the threshold singularity for three different backgrounds: scalar, electric and gravitational fields. In section \[s:conclusions\], we present our conclusions. In appendix \[app:a\], we derive in detail a formula for the gravitational effective action in terms of the transmission coefficient; all other cases follow a similar derivation. In appendix \[app:b\], we discuss the behavior of the transmission coefficient when the mass gap is small. Finally, in appendix \[app:c\], we quote the exact transmission coefficients for some electric and gravitational backgrounds. The exact results agree with our general considerations in the main text.

Threshold Singularities {#s:threshold}
=======================

Having set the stage, let us study the threshold singularity for various quantum field theories in background fields. In this section, we determine the piece in the effective action which becomes singular as a parameter in the external field configuration reaches the threshold. At the threshold, a very long wavelength pair is produced, which can only probe the rough features of the external field configuration. This allows us to find a universal answer for the singularity, regardless of the precise shape of the external field. The nature of the singularity is slightly different for scalar, vector and gravitational external fields.

Scalar Fields
-------------

We first consider a $1+1$-dimensional quantum field theory of a free massive scalar field in a static, position-dependent background $U(x)$. We want to compute the one-loop effective action $$\langle {\rm out} | {\rm in}\rangle = e^{i S_{\rm eff}(U)}=\int D\phi \, \exp\left(\frac{i}{2}\int \dd t\, \dd x \left((\partial_{t}\phi)^{2}-(\partial_{x}\phi)^{2}-(m^{2}+U(x))\phi^{2}\right)\right),$$ where we take $U(x)$ to be an arbitrary smooth function with asymptotic values $U(\pm\infty)\to 0$ (see figure \[scalarbackp1\]).
A very similar model was studied from a different point of view in [@migdal1978fermions].

![\[scalarbackp1\] Plot of a schematic form of the potential $U(x)$. As its parameters are tuned, the potential becomes deeper. At threshold, the background is too strong, and a bound state of the quantum field is spontaneously produced.](./figures/scalarp1.pdf "fig:"){width="48.00000%"}

If we consider a family of potentials controlled by some parameters, and tune these parameters in $U(x)$ to a certain threshold value, the effective action acquires an imaginary part. In this case there is a simple way to argue that the threshold singularity will be of square-root type and related to the lowest bound state in the potential $U(x)$. The effective action is proportional to the logarithm of the determinant of the Schrödinger operator, $$i S_{\rm eff}(U) = -\frac{T}{2}\int \frac{\dd\omega}{2\pi}\,{\rm tr}\log\left(\partial_{x}^{2}+\omega^{2}-m^{2}-U(x)\right),$$ where we used the Fourier representation for the time coordinate and, as we work in the approximation that the background is static, we obtain a factor of $T$ from the amount of time that the background has been switched on. Assuming that $E_{n}(U)$ is the spectrum of the operator $-\partial_x^2+U(x)$, we find \[seffscdrap\] $$i S_{\rm eff}(U) = -\frac{T}{2}\sum_{n}\int_{{\cal C}} \frac{\dd\omega}{2\pi}\log\left(\omega^{2}-m^{2}-E_{n}(U)\right),$$ where the index $n$ labels discrete and continuous eigenstates.
The contour ${\cal C}$ in the complex $\omega$-plane is chosen according to the Feynman $i\epsilon$-prescription $m\to m-i\epsilon$, $\epsilon >0$. On the real axis we have multiple branch points $\omega = \pm \sqrt{m^{2}+E_{n}(U)}$, where we assume that the lowest bound state satisfies $E_{0}(U)>-m^{2}$; the contour ${\cal C}$ goes above the branch cuts for $\omega >0$ and below the branch cuts for $\omega<0$, as shown in fig. \[scalarbackp3\].

![\[scalarbackp3\] The integration contour ${\cal C}$ in the complex $\omega$-plane. At the threshold when $E_{0}(U)\to -m^{2}$, the contour ${\cal C}$ is pinched by two branch points $\omega = \pm \sqrt{m^{2}+E_{0}}$.](./figures/scalarp3.pdf "fig:"){width="48.00000%"}

Because we assumed that $E_{0}(U)>-m^{2}$, we can Wick rotate the contour ${\cal C}$ onto the imaginary axis, $\omega\to i\omega$, and we obtain a manifestly real expression for the effective action, $$i S_{\rm eff}(U) = -\frac{iT}{2}\sum_{n}\int_{-\infty}^{+\infty} \frac{\dd\omega}{2\pi}\log\left(\omega^{2}+m^{2}+E_{n}(U)\right).$$ We see that the possibility to Wick rotate is related to the vacuum stability. When the lowest bound state $E_{0}(U)$ approaches $-m^{2}$, the branch points start pinching the contour ${\cal C}$, and this leads to the appearance of singular terms in the effective action. So one can easily compute \[scseffres\] $$S_{\rm eff}(U) = -\frac{T}{2}\sqrt{m^{2}+E_{0}(U)}+\dots,$$ where we omitted less singular and non-singular terms.
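The upshot is that the threshold is governed by a single number, the lowest eigenvalue $E_{0}(U)$ of $-\partial_x^2+U(x)$. For any concrete family of wells this is easy to locate numerically; the sketch below (an illustration of ours, not from the paper) discretizes the Schrödinger operator by central finite differences for a Gaussian well of tunable depth, tracks $E_{0}$ as the well deepens, and detects the crossing of $-m^{2}$.

```python
import numpy as np

def lowest_eigenvalue(depth, L=10.0, n=801):
    """Ground-state energy E0 of -d^2/dx^2 + U(x) for the illustrative
    Gaussian well U(x) = -depth * exp(-x^2), via central finite differences
    with Dirichlet boundaries on [-L, L]."""
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    U = -depth * np.exp(-x**2)
    H = (np.diag(2.0 / dx**2 + U)
         + np.diag(-np.ones(n - 1) / dx**2, 1)
         + np.diag(-np.ones(n - 1) / dx**2, -1))
    return np.linalg.eigvalsh(H)[0]  # eigvalsh returns eigenvalues in ascending order

m = 1.0
depths = np.linspace(0.5, 6.0, 12)
e0 = np.array([lowest_eigenvalue(d) for d in depths])
assert np.all(np.diff(e0) < 0)      # deeper well => lower E0
critical = depths[e0 < -m**2][0]    # first sampled depth past the threshold E0 = -m^2
print("E0 crosses -m^2 near depth", critical)
```

The well shape, grid size and depth range are arbitrary choices; only the crossing of $E_0$ through $-m^2$ matters for the threshold.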
It is instructive to see how this singularity arises when we express the effective action through the scattering data related to the potential $U(x)$. Namely, below we are going to show that the effective action can be expressed in terms of the logarithm of the transmission coefficient of a wave passing the potential $U(x)$. For the electric and gravitational cases this method will be more convenient. We begin by differentiating the effective action with respect to the mass, and obtain \[dereqsc\] $$i\frac{\partial S_{\rm eff}(U)}{\partial m^{2}}=-\frac{i}{2}\int \dd t\, \dd x\, G_F(t,x;t,x)\,,$$ where $G_F(t,x;t',x')\equiv \langle \phi(x,t)\phi(x',t')\rangle$ is the Feynman Green’s function[^3]. We can use the Fourier representation for the time components of the Green’s function, which then satisfies \[GFsc\] $$\left(\partial_x^{2}+\omega^{2}-m^{2}-U(x)\right) G_F(\omega;x,x')=i\delta(x-x')\,.$$ By defining mode functions $f_{\rm in}(x)$ and $f_{\rm out}(x)$, which are annihilated by the Schrödinger operator, \[deffifiout\] $$\left(\partial_x^{2}+\omega^{2}-m^{2}-U(x)\right)f_{\rm in/out}(x)=0,$$ and satisfy the following boundary conditions, \[finout\] $$f_{\rm in}(x)\xrightarrow{x\to-\infty}\frac{1}{\sqrt{2p}}e^{-ipx}, \quad f_{\rm out}(x)\xrightarrow{x\to+\infty}\frac{1}{\sqrt{2p}}e^{-ipx}, \quad p=\sqrt{\omega^{2}-m^{2}},$$ we can express the Green’s function as \[grfsc\] $$G_F(\omega;x,x')=i\,\frac{f_{\rm in}(x)f_{\rm out}^{*}(x')\theta(x'-x)+(x\leftrightarrow x')}{W(f_{\rm in},f^{*}_{\rm out})}\,,$$ where the Wronskian is $W(f_{\rm in},f^*_{\rm out}) \equiv f_{\rm in}(x)\partial_{x}f_{\rm out}^{*}(x)- f_{\rm out}^{*}(x)\partial_{x} f_{\rm in}(x)$. The functions $f_{\rm in}$ and $f_{\rm out}$ are related by Bogoliubov coefficients $\alpha$ and $\beta$, $$f_{\rm in}(x)=\alpha f_{\rm out}(x)+\beta f_{\rm out}^{*}(x),$$ where $|\alpha|^{2}-|\beta|^{2}=1$, and a simple computation gives $W(f_{\rm in},f_{\rm out}^*)=i\alpha$. We see that $1/\alpha$ is the transmission coefficient, which depends on $\omega$, $m$ and $U(x)$; it can be obtained by solving the quantum mechanical scattering problem (\[deffifiout\]). It is possible to show that the effective action $S_{\rm eff}(U)$ is controlled entirely by the coefficient $\alpha$ [@Nikishov:2002ez] (see also [@Polyakov:2007mm; @Krotov:2010ma]).
In order to evaluate the effective action $S_{\rm eff}(U)$, we see from (\[dereqsc\]) and (\[grfsc\]) that we must compute $\int \dd x\, f_{\rm in}(x) f_{\rm out}^*(x)$. It is possible to express this integral through the coefficient $\alpha$. For this we write the left-hand side of (\[deffifiout\]) for $f_{\rm in}(x)$ with $m^2$, and multiply the equation by $f^*_{\rm out}(x)$, which is a solution of the same equation but with $m^{2}+\delta m^{2}$. Analogously, we multiply the equation for $f^*_{\rm out}(x)$ with $m^2+\delta m^2$ by $f_{\rm in}(x)$ with $m^2$. We subtract both expressions, integrate the result over $x$, and keep the first nontrivial terms in $\delta m^2$. This gives $\int \dd x f_{\rm in}(x) f^*_{\rm out}(x) = -i \partial \alpha /\partial m^{2}$, where we used the Feynman $i\epsilon$-prescription $m\to m-i\epsilon$, $\epsilon >0$. We finally obtain \[EAfsc\] $$S_{\rm eff}(U)=\frac{iT}{2}\int_{\cal C} \frac{\dd\omega}{2\pi}\log\alpha(\omega)\,,$$ where the choice of the contour ${\cal C}$ is explained above and shown in figure \[scalarbackp3\]. In appendix \[app:a\], we give a detailed derivation of this formula for the gravitational case, which is technically the most complicated. As we see from (\[EAfsc\]), finding $S_{\rm eff}(U)$ has now been reduced to a 1-D scattering problem. Again the singularity arises in the integral (\[EAfsc\]) when the contour $\cal C$ is pinched by branch points. The result (\[EAfsc\]) is not so surprising: in scattering theory it is well known that the transmission coefficient $1/\alpha$ is an analytic function of the energy $E$ on the physical sheet of $\sqrt{E}$ ($\textrm{Im}\sqrt{E}>0$), except at the points of the discrete spectrum $E=E_{n}$, where the amplitude has simple poles. Thus the coefficient $\alpha \sim (p-i\sqrt{|E_{0}|})/(p+i\sqrt{|E_{0}|})$ near the pinching branch points, and computing the integral (\[EAfsc\]) for $E_{0}(U) \to -m^2$ one recovers the result (\[scseffres\]). So we see that in the scattering approach the singularity mechanism is similar.
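Since the problem has been reduced to 1-D scattering data, the coefficient $\alpha$ can be obtained numerically by elementary means. The sketch below is an illustrative transfer-matrix computation of ours (the function name, the slab representation and the square-barrier example are not from the paper): it finds the reflection and transmission amplitudes of $-\partial_x^2+U(x)$ at energy $\omega^2-m^2$ for a piecewise-constant potential, and checks the relation $|\alpha|^{2}-|\beta|^{2}=1$ in the equivalent flux-conservation form $|t|^{2}+|r|^{2}=1$, with $|\alpha|=1/|t|$ and $|\beta|=|r|/|t|$ for equal asymptotic momenta.

```python
import numpy as np

def amplitudes(slabs, omega, m=1.0):
    """Reflection/transmission amplitudes r, t for -f'' + U(x) f = (omega^2 - m^2) f,
    where U is a sequence of constant slabs (width, U0) laid out from x = 0
    and vanishes outside, computed by composing interface transfer matrices."""
    def k(u):  # local wavenumber (complex sqrt also handles evanescent regions)
        return np.sqrt(complex(omega**2 - m**2 - u))
    ks = [k(0.0)] + [k(u) for _, u in slabs] + [k(0.0)]
    xs = np.concatenate([[0.0], np.cumsum([w for w, _ in slabs])])
    M = np.eye(2, dtype=complex)
    for j, x in enumerate(xs):          # match f and f' at each interface
        k1, k2 = ks[j], ks[j + 1]
        a, b = 1 + k1 / k2, 1 - k1 / k2
        M = 0.5 * np.array(
            [[a * np.exp(1j * (k1 - k2) * x), b * np.exp(-1j * (k1 + k2) * x)],
             [b * np.exp(1j * (k1 + k2) * x), a * np.exp(-1j * (k1 - k2) * x)]]) @ M
    r = -M[1, 0] / M[1, 1]              # no incoming wave from the right
    t = M[0, 0] + M[0, 1] * r
    return r, t

r, t = amplitudes([(1.0, 0.5)], omega=1.8)   # a hypothetical square barrier
print(abs(r)**2 + abs(t)**2)                 # flux conservation: -> 1
```

For a single slab the output can be cross-checked against the textbook square-barrier formulas; the pinching of the contour is then visible as the growth of $1/|t|$ when a bound state approaches $-m^2$.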
More generally, the relation between scattering data and the determinant of the Schrödinger operator is well known and has been thoroughly investigated [@Faddeev:1959yc].

Electric Fields
---------------

Let us now consider the case of a free massive complex scalar in a strong electric field. We work in $1+1$ dimensions, but some of our results can be generalized to higher dimensions. The one-loop effective action is given by $$\langle {\rm out} | {\rm in}\rangle = e^{i S_{\rm eff}(a)}=\int D\phi\, D\bar{\phi}\, \exp\left(i\int \dd t\, \dd x \left(|\partial_{t}\phi+i a_{t}\phi|^{2}-|\partial_{x}\phi|^{2}-m^{2}|\phi|^{2}\right)\right),$$ where we picked the static gauge $a=a_t(x)\dd t$ for our background configuration, and chose $a_t(x)$ to be a smooth and monotonic function with asymptotic values $a_t(-\infty)=-A/2$ and $a_t(+\infty)=+ A/2$ (see fig. \[electricbranchesp1\]).

![\[electricbranchesp1\] Plot of a schematic form of the potential $a_t(x)$. We assume that $A<2m$.](./figures/electricp1.pdf "fig:"){width="48.00000%"}

The asymptotic values $\pm A/2$ can be taken symmetric without loss of generality, by a simple shift of the potential. Other than monotonicity, we do not require the curve $a_t(x)$ to have any special property. We will see that particle production becomes favorable if $A> 2m$. If $A< 2m$, the effective action will be purely real and the vacuum is stable. As $A\to 2m$ from below, we will show that the effective action acquires a logarithmic singularity. To gain some intuition about the pair-production threshold in the electric case, we analyze the classical equations of motion, using band theory, in the asymptotic regimes $x\to \pm\infty$. The energies of excitations are given by $$\omega_{\pm}=a_{t}(x)\pm\sqrt{p^{2}+m^{2}}\,.$$
In figure \[electricbranchesp2\], we see that the maximum and minimum points of the energy move as one goes from $x=-\infty$ to $x=+\infty$. When the top of the valence band comes up to the bottom of the conduction band, it becomes energetically favorable to disrupt the vacuum by pair production. This implies that the threshold is crossed when $${\rm max}(\omega_{-})-{\rm min}(\omega_{+}) = A-2m>0\,.$$

![\[electricbranchesp2\] Plot of the “bands” of the matter field. When the background is too strong, the top of the valence band comes up to the bottom of the conduction band and it becomes energetically favorable to trigger the tunneling and disrupt the vacuum by pair production.](./figures/electricp2.pdf "fig:"){width="48.00000%"}

Now we proceed with the calculation of the effective action. Once again, it is convenient to differentiate it with respect to the mass, \[dereq\] $$i\frac{\partial S_{\rm eff}(a)}{\partial m^{2}}=-i\int \dd t\, \dd x\, G_F(t,x;t,x)\,,$$ where $G_F(t,x;t',x')\equiv \langle \phi^{*}(x,t)\phi(x',t')\rangle$ is the Feynman Green’s function. We can use the Fourier representation for the time components of the Green’s function, which then satisfies \[GF\] $$\left(\partial_{x}^{2}+(\omega-a_{t}(x))^{2}-m^{2}\right) G_F(\omega;x,x')=i\delta(x-x')\,.$$ Finding $S_{\rm eff}(a)$ has now been reduced to a 1-D scattering problem similar to the scalar case we treated above.
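The band picture above is easy to verify numerically. The snippet below is an illustrative check of ours (the kink profile $a_t(x)=(A/2)\tanh(x/l)$ and all parameter values are our choices, not taken from the paper): it scans $\omega_{\pm}(p,x)=a_t(x)\pm\sqrt{p^2+m^2}$ on a grid and confirms that the global band gap is $2m-A$, so the gap closes exactly at $A=2m$.

```python
import numpy as np

# Scan the bands omega_pm(p, x) = a_t(x) +/- sqrt(p^2 + m^2) for an illustrative
# kink profile a_t(x) = (A/2) tanh(x/l) and measure the global band gap.
m, A, l = 1.0, 1.5, 2.0                      # below threshold: A < 2m
x = np.linspace(-40.0, 40.0, 1001)           # wide enough to reach the asymptotics
p = np.linspace(-20.0, 20.0, 1001)
a_t = 0.5 * A * np.tanh(x / l)
disp = np.sqrt(p**2 + m**2)
omega_plus = a_t[:, None] + disp[None, :]    # conduction band on the (x, p) grid
omega_minus = a_t[:, None] - disp[None, :]   # valence band
gap = omega_plus.min() - omega_minus.max()
print(gap)  # -> 2m - A = 0.5; pair production requires the gap to close (A >= 2m)
```

The gap is attained between the band edges at the two asymptotic regions, which is why only the asymptotic values of $a_t(x)$ enter.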
So we define mode functions $f_{\rm in}$ and $f_{\rm out}$ which are annihilated by the operator on the left-hand side of (\[GF\]). In terms of $f_{\rm in}$ and $f_{\rm out}$, the Green’s function is given by $$G_F(\omega;x,x')=i\,\frac{f_{\rm in}(x)f_{\rm out}^{*}(x')\theta(x'-x)+(x\leftrightarrow x')}{W(f_{\rm in},f^{*}_{\rm out})}\,.$$ The functions $f_{\rm in}$ and $f_{\rm out}$ satisfy the following boundary conditions, \[finout\] $$f_{\rm in}(x)\xrightarrow{x\to-\infty}\frac{1}{\sqrt{2p_{-}}}e^{-ip_{-}x}, \quad f_{\rm out}(x)\xrightarrow{x\to+\infty}\frac{1}{\sqrt{2p_{+}}}e^{-ip_{+}x}\,,$$ where $p_{\pm}=\sqrt{(\omega\mp A/2)^2-m^2}$. The two solutions are related by Bogoliubov coefficients, $f_{\rm in}(x)=\alpha f_{\rm out}(x)+\beta f_{\rm out}^*(x)$. Using the same method as in the scalar case, we obtain for the effective action \[EAf\] $$S_{\rm eff}(a)=iT\int_{\cal C} \frac{\dd\omega}{2\pi}\log\alpha(\omega)\,.$$ The contour ${\cal C}$ must be chosen according to the Feynman $i\epsilon$-prescription $m\to m-i\epsilon$, $\epsilon>0$, which gives $p_{\pm}\to p_{\pm}+i\epsilon$. There are multiple branch cuts on the real $\omega$-axis. They start at the points corresponding to zeros of $p_{-}$ and $p_{+}$, and also to zeros of $\alpha$. Therefore the contour ${\cal C}$ should go below the branch cuts for $\omega\to -\infty$ and above the branch cuts for $\omega \to +\infty$, and pass between the left and right branch cuts near $\omega=0$ (see fig. \[electricbranchesp3\]). In general we may have a branch point which corresponds to $\alpha=0$, but when $A$ is very close to $2m$ the branch cuts corresponding to $p_{-}=p_{+}=0$ will pinch the contour ${\cal C}$ first. We see that this mechanism is different from the scalar case, where the effect is due to branch cuts corresponding to $\alpha=0$.

![\[electricbranchesp3\] The integration contour ${\cal C}$ in the complex $\omega$-plane.
The branch cuts here correspond to points where $p_{-}=p_{+}=0$. When the electric background is near the threshold $A\to 2m$, the branch points $\omega =\pm(m-A/2)$ pinch the contour ${\cal C}$.](./figures/electricp3.pdf "fig:"){width="48.00000%"}

So as $A\to2m$ the threshold is reached when the contour $\cal C$ is pinched by the branch points at $\omega=+m-A/2$ and $\omega=-m+A/2$. This already hints at some universality: the most singular piece in the effective action will be largely agnostic about the particular shape of $a_t(x)$, and will depend only on how close to threshold its maximum value is. Since the singularity appears as the branch points pinch the contour $\cal C$, we need to look at $\alpha(\omega\approx \pm(m-A/2))$. As the band gap closes, we can use an argument which gives a general form of the coefficient $\alpha$. Leaving the details to appendix \[app:b\], when $\omega \approx \pm(m-A/2)$ and $A\approx 2m$, the coefficient $\alpha$ is given by an infinite series in small $p_{+}$ and $p_{-}$ and has the following form \[alpapro\] $$\alpha = \frac{-ic_{0}+c_{-}p_{-}+c_{+}p_{+}+ic_{+-}p_{+}p_{-}+\dots}{2\sqrt{p_{+}p_{-}}}\,,$$ with $p_\pm\equiv\sqrt{(\omega\mp A/2)^2-m^2}$, and $(c_0, c_-, c_+,c_{+-})$ being shape-of-$a_t(x)$-dependent, but mass- and frequency-independent, real numbers. Other coefficients in this expansion are not important for the singular terms in the effective action. The conservation of current implies $c_{+}c_{-}-c_{0}c_{+-}=1$. Having an expression for $\alpha$, we can evaluate the effective action.
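A concrete cross-check of this structure is the one case where everything is elementary: for the sharp step $a_t(x)=(A/2)\,{\rm sgn}(x)$, matching the mode function and its derivative at $x=0$ gives $\alpha=(p_{+}+p_{-})/2\sqrt{p_{+}p_{-}}$ and $\beta=(p_{+}-p_{-})/2\sqrt{p_{+}p_{-}}$, i.e. $c_{0}=c_{+-}=0$ and $c_{+}=c_{-}=1$, consistent with $c_{+}c_{-}-c_{0}c_{+-}=1$. A small numerical sketch of ours (not from the paper) verifies $|\alpha|^{2}-|\beta|^{2}=1$ directly:

```python
import numpy as np

def step_bogoliubov(omega, A, m=1.0):
    """Bogoliubov coefficients for the sharp step a_t(x) = (A/2) sgn(x),
    obtained by matching f and f' at x = 0 (the quenching case, c_0 = 0)."""
    p_minus = np.sqrt(complex((omega + A / 2)**2 - m**2))
    p_plus = np.sqrt(complex((omega - A / 2)**2 - m**2))
    norm = 2 * np.sqrt(p_plus * p_minus)
    return (p_plus + p_minus) / norm, (p_plus - p_minus) / norm

alpha, beta = step_bogoliubov(omega=2.5, A=1.0)
print(abs(alpha)**2 - abs(beta)**2)  # -> 1 for propagating modes
```

Here $\omega$ is chosen so that both $p_{\pm}$ are real; near the pinch, $p_{\pm}\to 0$ and $\alpha$ grows like $1/\sqrt{p_{+}p_{-}}$, as in the general expansion.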
It is convenient to differentiate $\log \alpha(\omega)$ once with respect to $m^{2}$, obtaining $$\begin{aligned} \frac{\partial\log \alpha(\omega)}{\partial m^{2}}=&-\frac{1}{2}\left(\frac{c_{+}+ic_{+-}p_{-}}{p_{+}}+\frac{c_-+ic_{+-} p_{+}}{p_{-}}+\dots\right) \nonumber\\&\times\frac{1}{-ic_0 +c_- p_{-}+c_+ p_{+}+\dots}+\frac{1}{4}\left(\frac{1}{p_{-}^{2}}+\frac{1}{p_{+}^{2}}\right).\label{mlog}\end{aligned}$$ At this point the integral $\int_{\cal C} \dd\omega \frac{\partial}{\partial m^{2}}\log \alpha(\omega) $ is convergent and well defined. The last term on the right-hand side of (\[mlog\]) has no branch cuts and can be evaluated in closed form; it is an uninteresting, non-singular piece of $S_{\rm eff}(a)$. So let us consider the first term in (\[mlog\]). We expect to obtain singular terms from the vicinity of the points $\omega \approx \pm(m-A/2)$. It is possible to extract the non-analytic part from the various integrals contributing to $\partial S_{\textrm{eff}}(a) /\partial m^{2}$. For instance, it is not difficult to show that $$\begin{aligned} &\int_{m-A/2}^{2m}\frac{d\omega}{p_{-}(-ic_{0}+c_{+}p_{+}+c_{-}p_{-}+ic_{+-}p_{+}p_{-}+\dots)} =\notag\\ &\qquad\qquad=k_{0}-\frac{c_{+}}{c_{0}^{2}}(2m-A) \log \left(\frac{2m-A}{2m}\right)+k_{1}(2m-A) +\dots\,, \label{intexample}\end{aligned}$$ where the coefficients $k_{0}$ and $k_{1}$ depend on $c_{0},c_{+},c_{-},c_{+-},\dots$ and on the upper limit of the integral, but the singular term depends only on $c_{0}$ and $c_{+}$.
Analyzing the various types of integrals arising from (\[mlog\]) and similar to (\[intexample\]), we finally obtain the most singular non-analytic term of the effective action, $$\begin{aligned} &\frac{\partial S_{\textrm{eff}}(a)}{\partial m^{2}} = -\frac{T}{2\pi}\frac{c_{+} c_{-} - c_{0}c_{+-}}{2c_0^{2}}\left(\frac{2m-A}{2m}\right)\log \left(\frac{2m-A}{2m}\right)+\dots\, , \end{aligned}$$ and so it follows that $$\begin{aligned} \label{singel} S_{\textrm{eff}}(a) = \frac{T m^3}{2\pi c_0^{2}} \left(\frac{2m-A}{2m}\right)^{2}\log\left(\frac{2m-A}{2m}\right)+\dots\,,\end{aligned}$$ where we have omitted less singular and non-singular terms. Let us make a few comments about (\[singel\]):

- The term $1/c_{0}^2$ is proportional to the transmission amplitude of the effective potential; thus for long, smooth gauge fields it is exponentially damped.

- This term in the effective action is neither local in space (as in the usual derivative expansion) nor in momentum space (as in the Euler-Heisenberg effective action). Neither of these representations can capture the threshold singularity, as we are always below threshold in the former case, and always above threshold in the latter.

- Despite depending on $A$, the effective action is gauge invariant, as $A=\int_{-\infty}^{+\infty}\dd x\, E(x)$.

- ${\rm Im}\,S_{\rm eff}$ can be reliably obtained by analytic continuation from (\[singel\]), once we go slightly above the threshold, with $(A-2m)\ll m$. The amount of phase space available to pair produce depends on the dimension of the spacetime. A quick estimate gives $${\rm Im}\, S_{\rm eff}(a) \sim \int_0^{k_{\rm max}} \dd^{d-2} k\, \left(A-2 m_{\rm eff}(k)\right)^{2} \sim (A-2m)^{\frac{d+2}{2}}\,,$$ where $m_{\rm eff}(k)\equiv \sqrt{m^2+k^2}$ is the effective mass of the produced particles, and the integral over transverse momenta runs over a finite range, determined by the condition $A-2m_{\rm eff}(k_{\rm max})=0$. For $d=4$, ${\rm Im}\, S_{\rm eff}(a) \sim (A-2m)^3$, as argued in [@Gies:2016coz].
Notice that $S_{\rm eff}(a)$ will contain a factor of $V_{d-2}$, the volume of the transverse directions, in higher dimensions.

- The expression (\[singel\]) is clearly invalid if $c_0=0$. In the regime $c_{0}\ll (2m-A) \ll m$ one finds a different type of singularity, $$S_{\rm eff}(a) = -\frac{T m}{2\pi}\left(\frac{2m-A}{2m}\right)\log\left(\frac{2m-A}{2m}\right)+\dots\,.$$ We recover the expression above for a “quenching” electric field, with $a_t(x)=A/2\,\textrm{sgn}(x)$, where one has exactly $c_0=0$.

As a particular example of the formulas presented above, we can determine the precise form of the Bogoliubov coefficient for a family of potentials $a_t(x)=(A/2)\tanh(x/l)$, parametrized by the width $l$ of the potential. The result is presented in appendix \[app:c\]. This family of potentials includes the quenching example when $l\to 0$, and we can show that the singularity is different in that case, confirming our last bullet point above. Expanding the Bogoliubov coefficient around small $p_+$ and $p_-$, we find the same structure argued for in this section, given by (\[alpapro\]).

Gravitational Fields
--------------------

Finally, we consider the case of a massive free scalar field in a strong gravitational background. Again we work in $1+1$ dimensions, and we would like to determine the one-loop effective action $$\begin{aligned} \langle {\rm out} | {\rm in}\rangle = e^{iS_{\rm eff}(v)} = \int D\phi \, \exp\left(\frac{i}{2}\int \dd t \dd x \sqrt{-g} \left(g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-m^{2}\phi^{2}\right)\right)\,.\end{aligned}$$ For our purposes it is convenient to describe the metric $g_{\mu\nu}$ in Painlevé-Gullstrand coordinates, \[PG\] $$\dd s^2=g_{\mu\nu}\dd x^{\mu}\dd x^{\nu}=\dd t^{2}-(\dd x-v(x)\dd t)^{2},$$ where we choose $v(-\infty)=0$ and let $v(x)$ increase smoothly for growing $x$ up to some value $v(+\infty)=V$, similarly to the electrostatic potential $a_{t}(x)$ (see figure \[gravbranchesp1\]).
![\[gravbranchesp1\] Plot of a schematic form of the gravitational potential $v(x)$. We assume that $V<1$.](./figures/gravitationalp1.pdf "fig:"){width="48.00000%"}

This geometry would have a horizon at $x_h$ if $v(x_h)=1$ and thus $g_{tt}=0$. As we will show later, the vanishing of $g_{tt}$ coincides with the criterion for vacuum decay. For $x$ at which $v(x)<1$, we can interpret $v(x)$ as the escape velocity from the position $x$ [@Hamilton:2004au]. So our criterion for vacuum stability is that $v(x)<1$ for all $x$. Therefore we assume that $V<1$, but we tune $V$ to the threshold value, i.e. $V\to1$. This case is mathematically closest to the electric case of the previous section, with a potential of fixed asymptotics. We will show that the effective action acquires a square-root singularity when $V\to1$. Let us consider the semiclassical analysis for the gravitational case. The nature of the gap is slightly different than in the electric case, due to the different structure of the single-particle Hamiltonian [@PiTPlec; @Volovik:2008ww; @Volovik:2009eb]. The energies of excitations are given by $$\omega_{\pm}(p,x) = p\, v(x)\pm\sqrt{p^{2}+m^{2}}\,,$$ and the threshold corresponds to $V\to1$, as $${\rm min}(\omega_{+})-{\rm max}(\omega_{-}) = 2m\sqrt{1-V^{2}}\,.$$ The purpose of the mass term is just to open a gap between positive and negative energy bands (see fig. \[gravbranchesp2\]), as particle production can occur for gravitational fields without a horizon when the matter sector is gapless [@Pimentel:2015iiv].
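The location of the band edge is easy to check directly: for $|V|<1$, minimizing $\omega_{+}(p)=pV+\sqrt{p^{2}+m^{2}}$ over $p$ gives $m\sqrt{1-V^{2}}$, attained at $p=-mV/\sqrt{1-V^{2}}$, which is the scale $\omega \sim m\sqrt{1-V^{2}}$ of the pinching branch points discussed in the text. A brute-force numerical sketch (an illustration of ours, assuming the dispersion just quoted):

```python
import numpy as np

# Band edge at x -> +infinity: minimize omega_+(p) = p*V + sqrt(p^2 + m^2)
# over p and compare with the closed form m*sqrt(1 - V^2), valid for |V| < 1.
m = 1.0
p = np.linspace(-100.0, 100.0, 2_000_001)
for V in (0.0, 0.5, 0.9, 0.99):
    edge = np.min(p * V + np.sqrt(p**2 + m**2))
    print(V, edge, m * np.sqrt(1 - V**2))  # numeric edge vs closed form
```

As $V\to 1$ the minimizing momentum runs off to large $|p|$ and the gap closes, which is the quantum counterpart of the horizon forming at $v=1$.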
![\[gravbranchesp2\] Here we show the bands and the dominant tunneling event, which comes from the top of the lower blue band touching the bottom of the upper blue band.](./figures/gravitationalp2.pdf "fig:"){width="48.00000%"}

Notice that when $v(x)=1$, $g_{tt}=0$, which, in these coordinates, is the usual definition of the horizon. In other words, our criterion that the horizon sits where it becomes energetically favorable to pair produce coincides with the definition coming from the causal structure of spacetime. Also, notice that a shift $v\to v+C$ is physical in the case of a static metric: in order to remove the constant $C$, one could use a Galilean transformation, but this would imply a redefinition of the time coordinate. That is, the criterion for vacuum instability depends not just on the difference between the asymptotic values of the velocity field, as in the electric field example, but also on its absolute values as $x\to \pm \infty$. As usual, to compute the effective action we differentiate it with respect to the mass, $$\begin{aligned} i\frac{\partial S_{\rm eff}(v)}{ \partial m^{2}} = -\frac{i}{2} \int \dd t\, \dd x\, G_{F}(x,t;x,t)\,,\end{aligned}$$ where $G_{F}(x,t;x',t')\equiv\langle \phi(x,t)\phi(x',t')\rangle$ is the Feynman Green’s function, and we used that for our metric $\sqrt{-g}=1$.
The Green’s function obeys the equation $$\begin{aligned} \left(\partial_{x}((1-v^{2})\partial_{x}) -2 v \partial^{2}_{xt}-(\partial_{x}v)\partial_{t}-\partial_{t}^{2} -m^{2}\right)G_{F}(x,t;x',t') = i \delta(x-x')\delta(t-t')\,.\label{Fgrfgr}\end{aligned}$$ This equation contains a term with a first order derivative in $x$, which naively makes it difficult to apply the previous strategy of expressing the effective action as an integral over the logarithm of the transmission coefficient. Nevertheless, by properly changing variables, we are able to obtain a similar $\log \alpha$ formula for the effective action. To proceed one can check that the Green’s function $G_{F}(x,t;x',t')$ can be written in the form $$\begin{aligned} G_{F}(x,t;x',t') = \int \frac{\dd\omega}{2\pi}\frac{e^{i\omega(t-t')}e^{i(\chi(x)-\chi(x'))}}{\sqrt{(1-v^{2}(x))(1-v^{2}(x'))}}G_{\omega}(x,x')\,, \label{grgranz}\end{aligned}$$ where the new Green’s function $G_{\omega}(x,x')$ obeys a Schrödinger-like equation $$\begin{aligned} \label{ngrfgr} \left(\partial_{x}^{2} + \frac{\omega^2-m^2(1-v^2)+(\partial_x v)^2}{(1-v^2)^2}+\frac{v \, \partial_x^2 v}{1-v^2}\right)G_{\omega}(x,x')= i\delta(x-x')\,.\end{aligned}$$ and the function $\chi$ is defined as $\partial_{x}\chi =\omega v/(1-v^{2})$. It is easy to check that (\[grgranz\]) indeed satisfies (\[Fgrfgr\]). Therefore for the effective action we obtain $$\begin{aligned} \frac{\partial S_{\rm eff}(v)}{ \partial m^{2}} &=-\frac{T}{2} \int \frac{\dd\omega}{2\pi} \int \dd x \frac{G_{\omega}(x,x)}{1-v^{2}(x)}\,. 
\label{grefact2}\end{aligned}$$ The Green’s function $G_{\omega}(x,x')$ as usual can be expressed through the $f_{\rm in}$ and $f_{\rm out}$ functions, $$\begin{aligned} G_{\omega}(x,x') = i\frac{f_{\textrm{in}}(x) f_{\textrm{out}}^{*}(x')\theta(x'-x)+(x\leftrightarrow x') }{W(f_{\textrm{in}},f^{*}_{\textrm{out}})}\,, \label{Ggrthf}\end{aligned}$$ where $f_{\rm in}$ and $f_{\rm out}$ are annihilated by the Schrödinger-like operator in (\[ngrfgr\]) and have asymptotics $$\begin{aligned} &f_{\rm in}(x) \xrightarrow{x\to-\infty}\displaystyle \frac{1}{\sqrt{2p_{-}}}e^{-ip_{-}x}, \quad f_{\rm out}(x) \xrightarrow{x\to+\infty}\displaystyle \frac{1}{\sqrt{2p_{+}}}e^{-ip_{+}x}\,, \label{asexp}\end{aligned}$$ where we denoted $$\begin{aligned} p_{-}= \sqrt{\omega^{2}-m^{2}}, \quad p_{+} =\frac{1}{1-V^{2}}\sqrt{\omega^{2}-m^{2}(1-V^{2})}\,,\end{aligned}$$ and, as usual, we define $\alpha$ and $\beta$ by $f_{\rm in}(x) =\displaystyle \alpha f_{\rm out}(x)+\beta f_{\rm out}^{*}(x)$. As shown in appendix \[app:a\], we can bring the formula (\[grefact2\]) to our usual form, $$\begin{aligned} S_{\rm eff}(v) = \frac{1}{2}iT \int_{\cal C} \frac{d\omega}{2\pi}\log \alpha(\omega)\,. \label{grlogaf}\end{aligned}$$ The pinching singularity comes again from very small frequencies near the points where $p_{+}=0$, and we need to determine $\alpha$ for $\omega \sim m \sqrt{1-V^2}$ (see figure \[gravbranchesp3\]).

![\[gravbranchesp3\] The integration contour ${\cal C}$ in the complex $\omega$-plane. Branch points $\omega =\pm m \sqrt{1-V^{2}}$ corresponding to $p_{+}=0$ pinch the contour ${\cal C}$ near $\omega \approx 0$ when $V \to1$.
This “pinching” region determines the singular piece of the effective action.](./figures/gravitationalp3.pdf "fig:"){width="48.00000%"}

Notice that the relevant feature is the behavior of the mode functions as $x\to\infty$, and for $\omega \sim m \sqrt{1-V^2}$ the mode function has very small modulation with $\omega$. This means that the transmission coefficient behaves like $\alpha \approx (-id_{0}+d_{+}p_{+}+\dots)/2\sqrt{p_{+}p_{-}}$, with $p_-\approx m$, and the model dependence is encoded in the coefficients $(d_0, d_+)$. We can once again take derivatives of the effective action to isolate its singular piece. It turns out that differentiating once with respect to $m^2$ is enough to isolate the singular term. Following the same steps as in the electric case, we arrive at \[gravres\] $$S_{\rm eff}(v) = \frac{m T}{2\pi}\sqrt{1-V^{2}}+\dots\,.$$ In appendix \[app:c\] we find the exact $\alpha(\omega)$ for the step potential $v(x)=V\theta(x)$. In this case one can calculate the integral over $\omega$ in (\[grlogaf\]) exactly and obtain the result (\[gravres\]). This singular part of the effective action can be written in a “local” form[^4] due to the different tunneling pattern when the vacuum breaks down: pairs are produced at large $x$, rather than at both small and large $x$. There is also a term analogous to the electric threshold result, $\sim (1-V^2)\log (1-V^2)$, but it is subleading to (\[gravres\]). Notice that even in the special case $d_0=0$ we obtain the same singularity, albeit with a different overall coefficient.
Another interesting feature is that the leading term is insensitive to the detailed coefficients $d_{0,+}$: as long as they are nonzero, the only relevant data from the metric is the value of $V$. This is unlike the electric case, where the leading singular term depends on $2m-A$ but also on $c_0$. This threshold singularity is a quantum analogue of Choptuik scaling. Choptuik considered a family of initial data labeled by a parameter $p$. Under time evolution using Einstein’s equations, he found [@Choptuik:1992jv] that the final state had a black hole of mass $M\sim (p-p_{\rm cr})^\gamma$, for $p>p_{\rm cr}$. The exponent $\gamma$ is largely independent of the details of the family of initial data. Above criticality, the formation of a black hole indicates the appearance of a horizon. Our critical exponent is entirely analogous, but is a quantum diagnostic of the appearance of the horizon. In our case, we look at $S_{\rm eff}(v)$ rather than $M$, criticality is reached when $V=1$, and the critical exponent is $\gamma=1/2$ [^5].

Conclusions {#s:conclusions}
===========

In this paper, we argued that the crossover between the quantum-mechanical stability and instability of background fields has certain universal features. This is largely because the first unstable process triggered right above threshold has very long wavelength and low energy. This soft emission process probes only the roughest features of the external background, and the threshold singularity can be expressed in terms of rough background data alone. There are many avenues for further investigation:

- Our analysis was restricted to gaussian matter fields. How would interactions in the matter sector change the critical exponents in the threshold singularity?

- Can we connect our results to existing methods for treating backreaction in black holes [@Volovik:1999fc; @Parikh:1999mf]?
It would be nice to incorporate our threshold singularity into the problem of black hole formation, in order to see whether vacuum polarization delays the formation, or prevents it altogether for initial data close enough to threshold. - Finally, it would also be interesting to find the threshold singularity for more realistic field configurations: for example, a spherically symmetric configuration, like a star, where we take a mass shell to be very close to its Schwarzschild radius. We leave these fascinating problems to the near future. #### Acknowledgements We would like to thank Daniel Baumann, Garrett Goon, Diego Hofman, Viatcheslav Mukhanov, Andrew Strominger, Leon Takhtajan and Grigory Volovik for useful discussions, and Daniel Baumann, Garrett Goon and Diego Hofman for comments on a draft. The work of G.T. was supported by the MURI grant W911NF-14-1-0003 from ARO and by DOE grant DE-SC0007870. G.P. acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement number 751778. The work of G.P. is part of the Delta ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). G.P. also acknowledges support from a Starting Grant of the European Research Council (ERC STG Grant 279617). Derivation of $S_{\rm eff}=i\int d\omega \log\alpha$ {#app:a} ==================================================== In this appendix, we derive in detail the formula for the gravitational effective action in terms of the Bogoliubov coefficient $\alpha$. The electromagnetic and scalar cases follow straightforwardly from this.
Using the expression (\[Ggrthf\]) for $G_\omega(x,x')$ and formula (\[grefact2\]) we obtain for the effective action $$\begin{aligned} \frac{\partial S_{\rm eff}(v)}{\partial m^{2}} =-i\frac{T}{2} \int \frac{\dd\omega}{2\pi} \int \dd x \frac{f_{\rm in}(x)f_{\rm out}^{*}(x)}{(1-v^{2}(x))W(f_{\rm in},f^{*}_{\rm out})}\,, \label{intoverx}\end{aligned}$$ where one can easily calculate $W(f_{\rm in},f^{*}_{\rm out})=i\alpha$. We must now calculate the integral over $x$ in (\[intoverx\]). In order to proceed, consider the equations $$\begin{aligned} &\partial_{x}^{2}f_{{\rm in},m^{2}}(x) + \left(\frac{\omega^2-m^2(1-v^2)+(\partial_x v)^2}{(1-v^2)^2}+\frac{v \, \partial_x^2 v}{1-v^2}\right)f_{{\rm in},m^{2}}(x)=0\,,\notag\\ &\partial_{x}^{2}f^{*}_{{\rm out},m^{2}+\delta m^{2}}(x) + \left(\frac{\omega^2-(m^2+\delta m^2)(1-v^2)+(\partial_x v)^2}{(1-v^2)^2}+\frac{v \, \partial_x^2 v}{1-v^2}\right)f^{*}_{{\rm out},m^{2}+\delta m^{2}}(x)=0\,.\end{aligned}$$ Multiplying the first equation by $f^{*}_{{\rm out},m^{2}+\delta m^{2}}(x)$ and the second by $f_{{\rm in},m^{2}}(x)$ and subtracting, we obtain $$\begin{aligned} \partial_{x}(f_{{\rm in},m^{2}}\partial_{x}f^{*}_{{\rm out},m^{2}+\delta m^{2}}-f^{*}_{{\rm out},m^{2}+\delta m^{2}}\partial_{x}f_{{\rm in},m^{2}}) = \delta m^{2} \frac{f_{{\rm in},m^{2}}(x)f^{*}_{{\rm out},m^{2}+\delta m^{2}}(x)}{1-v^{2}(x)}\,.\end{aligned}$$ Integrating both sides over $x$ from $-L$ to $L$, with $L\to +\infty$, we get $$\begin{aligned} \delta m^{2} \int_{-L}^{+L} \dd x \frac{f_{{\rm in},m^{2}}(x)f^{*}_{{\rm out},m^{2}+\delta m^{2}}(x)}{1-v^{2}(x)} = (f_{{\rm in},m^{2}}\partial_{x}f^{*}_{{\rm out},m^{2}+\delta m^{2}}-f^{*}_{{\rm out},m^{2}+\delta m^{2}}\partial_{x}f_{{\rm in},m^{2}})|_{-L}^{+L}\,.
\end{aligned}$$ Because we take $L\to \infty$, we can use the asymptotic expressions for $f_{\rm in,out}$ (\[asexp\]) and find $$\begin{aligned} &(f_{{\rm in},m^{2}}\partial_{x}f^{*}_{{\rm out},m^{2}+\delta m^{2}}-f^{*}_{{\rm out},m^{2}+\delta m^{2}}\partial_{x}f_{{\rm in},m^{2}})|_{-L}^{+L} =\notag\\ &=\delta m^{2}\left(-i\frac{\partial \alpha}{\partial m^{2}}-L \alpha \frac{\partial (p_{-}+p_{+})}{\partial m^{2}} -\frac{1}{2}i\Big(\beta^{*}\frac{\partial \log p_{-}}{\partial m^{2}}e^{2iLp_{-}}-\beta\frac{\partial\log p_{+}}{\partial m^{2}}e^{2iLp_{+}}\Big)\right)+\dots\,. \end{aligned}$$ We use the Feynman $i\epsilon$-prescription $p_{\pm}\to p_{\pm}+i\epsilon$ with infinitesimal $\epsilon>0$, so the oscillating terms above vanish for large $L$. Finally we obtain $$\begin{aligned} &(f_{{\rm in},m^{2}}\partial_{x}f^{*}_{{\rm out},m^{2}+\delta m^{2}}-f^{*}_{{\rm out},m^{2}+\delta m^{2}}\partial_{x}f_{{\rm in},m^{2}})|_{-L}^{+L} =-i\delta m^{2}\alpha \frac{\partial}{\partial m^{2}}\Big(\log \alpha +L (p_{-}+p_{+})\Big)\,. \end{aligned}$$ Putting this together, we find $$\begin{aligned} \frac{\partial S_{\rm eff}(v)}{\partial m^{2}} =i\frac{T}{2} \int_{\cal C} \frac{\dd\omega}{2\pi}\frac{\partial}{\partial m^{2}}\Big(\log \alpha +L (p_{-}+p_{+})\Big).\end{aligned}$$ Our last task is to argue that the terms proportional to $L$ are unimportant. By that we mean that they only carry uninteresting dependence on the background. The term proportional to $L p_-$ is harmless, depending only on $m$, but the term proportional to $Lp_+$ seems to have nontrivial dependence on $V$. Let us write it more explicitly: $$\int_{\cal C} \frac{\dd\omega}{2\pi}\, L\, p_{+}=\frac{L}{1-V^{2}}\int_{\cal C} \frac{\dd\omega}{2\pi}\, \big(\omega^{2}-m^{2}(1-V^{2})\big)^{1/2}\,.$$ If we change variables $\omega= \omega'(1-V^2)^{1/2}$ then the $V$ dependence drops out of the integral and it is exactly equal to the $L p_-$ integral. This argument is too quick, as the integral is UV divergent. The correct argument is that the cutoff is background dependent.
For the $p_-$ integral, we are at $x\to-\infty$, so we choose some hard cutoff $\Lambda$ in frequency space. At $x\to+\infty$, the metric is $\dd t^2 (1-V^2)$, so we must choose the cutoff $\Lambda/(1-V^2)^{1/2}$ in frequency space, to take into account the warping of time intervals. With this choice, the $Lp_+$ integral has no interesting dependence on $V$. In summary, up to unimportant terms, we find $$\begin{aligned} S_{\rm eff}(v) = \frac{1}{2}iT \int_{\cal C} \frac{d\omega}{2\pi}\log \alpha(\omega)\,.\end{aligned}$$ Behavior of $\alpha$ Near Threshold {#app:b} =================================== In this appendix we derive the behavior of the Bogoliubov coefficient $\alpha$ when the effective mass gap is very small. In other words, we find the first few terms in an expansion of $\alpha$ around vanishing mass gap. To start, let us consider the Schrödinger equation $$\begin{aligned} (\partial_{x}^{2}+U(x)-m^2)f =0\,, \label{mq2}\end{aligned}$$ where $U(x)$ either switches off or asymptotes to some fixed values $U(\pm \infty)$ in a smooth way. We are interested in the cases where the effective mass gap at infinity, $p_{\pm}^{2}\equiv U(\pm\infty)-m^{2}$, is very small, namely $p_{\pm}^2\ll m^2$. We consider here the case in which both $p_{\pm}^2$ are small. In the main text, the gravitational background is such that the mass gap vanishes only at one end; applying our formulas to that example is straightforward. Outside the region where $U(x)$ varies, the mass term is either $p_+^2$ or $p_-^2$, both of which we assume to be small. Neglecting those terms, we get $$\begin{aligned} \partial_{x}^{2}f =0\,,\end{aligned}$$ so the solutions of the equations of motion are $$\begin{aligned} f = a_{1}+b_{1}x, \quad x\ll 0, \qquad f = a_{2}+b_{2}x, \quad x\gg 0 \label{ap1}\end{aligned}$$ where the potential varies significantly only close to $x=0$.
The coefficients $(a_{1},b_{1})$ and $(a_{2}, b_{2})$ are linearly related, $$\begin{aligned} a_{1}= c_- a_{2} + c_{+-} b_{2}, \quad b_{1}=c_0 a_{2}+ c_+ b_{2}\,, \label{con}\end{aligned}$$ where the coefficients $(c_0, c_+, c_-, c_{+-})$ are independent of $p_{\pm}$ (as $p_{\pm}$ do not appear in the differential equation whose solutions are the linear functions), and, from the conservation of current, it follows that we can choose $(c_0, c_+, c_-, c_{+-})$ to be real, with $c_+ c_- - c_0 c_{+-} =1$. Then, matching the solutions (\[ap1\]) with the asymptotic solutions in terms of plane waves, we obtain $$\begin{aligned} a_{1} = \frac{1}{\sqrt{2p_{-}}}, \quad b_{1} = \frac{-ip_{-}}{\sqrt{2p_{-}}}, \quad a_{2} = \frac{\alpha+\beta}{\sqrt{2p_{+}}}, \quad b_{2}= \frac{-ip_{+}(\alpha-\beta)}{\sqrt{2p_{+}}}\,,\end{aligned}$$ and solving the equations (\[con\]) we get $$\begin{aligned} \label{expralph} &\alpha =\frac{-i c_0 +c_- p_{-} +c_+ p_{+}+i c_{+-} p_{-}p_{+}}{2\sqrt{p_{-}p_{+}}}\,,\quad \beta =\frac{i c_0 -c_- p_{-} +c_+ p_{+}+i c_{+-} p_{-}p_{+}}{2\sqrt{p_{-}p_{+}}}\,.\end{aligned}$$ With $\alpha$ in hand, we can evaluate the effective action. These expressions are valid only for $|p_\pm|\ll m$. Exact Solutions {#app:c} =============== Electric example ---------------- In the electric case, an exact solution is available for the gauge field profile $a_t(x) = A/2\tanh(x/l)$. In this case one can obtain an exact Bogoliubov coefficient $\alpha$; it is given by \[alphatanh\] = , where $\rho = \frac{1}{2}+\frac{1}{2}\sqrt{1-A^{2}l^{2}}$ and $p_{-},p_{+}$ are defined below (\[finout\]). If we take $l\to 0$ we obtain the step potential $a_t(x)=A/2\, \textrm{sgn}(x)$ and the Bogoliubov coefficient (\[alphatanh\]) simplifies to $\alpha = (p_{-}+p_{+})/2\sqrt{p_{+}p_{-}}$. On the other hand, if $l$ is fixed and we are in the regime where $p_{+}$ and $p_{-}$ are small, we find = , where $\psi(x)$ is the digamma function, and $\gamma_E$ is the Euler-Mascheroni constant.
This form of the $\alpha$ coefficient agrees with (\[expralph\]). Gravitational example --------------------- In this subsection we find the coefficient $\alpha$ for the step potential $v(x)=V\theta(x)$. To proceed, it is convenient to write the Schrödinger equation for the operator (\[ngrfgr\]) as $$\begin{aligned} \partial_{x}\left((1-v^{2})\partial_{x}\Big(\frac{f_{\rm in}(x)}{\sqrt{1-v^{2}}}\Big)\right)+\frac{\omega^{2}-m^{2}(1-v^{2})}{(1-v^{2})^{3/2}}f_{\rm in}(x) =0\,. \label{she2}\end{aligned}$$ Integrating this equation from $x=-\delta$ to $x=\delta$ with $\delta \to 0$, we find the boundary conditions for $f_{\rm in}(x)$ and $\partial_{x}f_{\rm in}(x)$ at $x=0$: $$\begin{aligned} f_{\rm in}(0^{+}) = \sqrt{1-V^{2}}f_{\rm in}(0^{-}), \quad \sqrt{1-V^{2}}\partial_{x}f_{\rm in}(0^{+}) = \partial_{x}f_{\rm in}(0^{-})\,.\end{aligned}$$ Using these boundary conditions one finds $$\begin{aligned} \alpha = \frac{p_{-}+(1-V^{2})p_{+}}{2\sqrt{(1-V^{2})p_{-}p_{+}}}\,,\end{aligned}$$ where $p_{-}=\sqrt{\omega^{2}-m^{2}}$ and $p_{+}=\sqrt{\omega^{2}-m^{2}(1-V^{2})}/(1-V^{2})$. [^1]: Similar considerations apply to magnetically charged particles in a background magnetostatic field. [^2]: The critical field strength associated with Schwinger $e^+e^-$ pair production, $E_c=m_e^2/e$, gives the electric field value at which the pair production rate becomes non-exponentially suppressed. The pair production rate is, however, nonzero for any value of the constant background electric field, $\Gamma\sim (e^2 E)\exp(-E_c/E)$. [^3]: We omit the time ordering symbol of the Feynman Green’s function to avoid confusion with the time $T$ during which the background is switched on. [^4]: The volume form on the timelike surface $x\to +\infty$ is given by $\sqrt{1-V^2}=\sqrt {g_{\rm ind}}$. [^5]: Interestingly, an analysis of black hole formation in a different context gives the same critical exponent [@Strominger:1993tt].
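As a quick numerical sanity check of the near-threshold expressions (\[expralph\]): for real matching coefficients obeying the current-conservation constraint $c_+c_- - c_0 c_{+-}=1$, the Bogoliubov unitarity relation $|\alpha|^2-|\beta|^2=1$ holds identically. A minimal sketch (the particular coefficient and momentum values below are arbitrary illustrations, not taken from the paper):

```python
import cmath

def bogoliubov(c0, cp, cm, cpm, pm, pp):
    """Near-threshold alpha and beta from the matching coefficients.

    Implements alpha = (-i c0 + c- p- + c+ p+ + i c+- p- p+) / (2 sqrt(p- p+))
    and the corresponding beta, with real c's and p-, p+ > 0.
    """
    denom = 2 * cmath.sqrt(pm * pp)
    alpha = (-1j * c0 + cm * pm + cp * pp + 1j * cpm * pm * pp) / denom
    beta = (1j * c0 - cm * pm + cp * pp + 1j * cpm * pm * pp) / denom
    return alpha, beta

# Arbitrary real coefficients obeying current conservation:
# c+ c- - c0 c+- = 2*1 - 0.5*2 = 1.
alpha, beta = bogoliubov(c0=0.5, cp=2.0, cm=1.0, cpm=2.0, pm=0.01, pp=0.03)
print(abs(alpha) ** 2 - abs(beta) ** 2)  # ≈ 1.0, as required by unitarity
```

A short algebra check confirms this in general: $|\alpha|^2-|\beta|^2 = c_+c_- - c_0c_{+-}$, so the constraint on the $c$'s is exactly the unitarity of the Bogoliubov transformation.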
Introduction ============ Cognates are words in different languages that derive from the same origin. The study of cognates plays a crucial role in comparative approaches to historical linguistics, in particular in establishing language relatedness and tracking the interaction and evolution of multiple languages over time. A cognate instance in the Indo-European languages is the word group *night* (English), *nuit* (French), *noche* (Spanish) and *nacht* (German). Existing studies on cognate detection involve experiments that decide, for a pair of words, whether they are cognates or non-cognates [@ciobanu2014building; @List:2012:LAD:2388655.2388671]. These studies do not approach the problem of predicting the cognate in a target language when the cognate in the source language is given. For example, given the word *nuit*, could an algorithm predict the appropriate German cognate within the huge German wordlist? This paper tackles this problem by incorporating heuristics from the probabilistic ranking functions of information retrieval. Information retrieval addresses the problem of scoring a document against a given query, the core operation of every search engine. One can view the above problem as the construction of a suitable search engine, through which we want to find the cognate counterpart of a word (the query) in a lexicon of another language (the documents). This paper deals with the intersection of information retrieval and approximate string similarity (as in the cognate detection problem), which is largely under-explored in the literature. Retrieval methods also provide a variety of alternative heuristics which can be chosen for the desired application areas [@Fang:2011:DEI:1961209.1961210]. Taking advantage of the flexibility of these models, combining approximate string similarity operations with an information retrieval system could be beneficial in many cases.
We demonstrate how the notion of information retrieval can be incorporated into the approximate string similarity problem by breaking a word into smaller units. In this regard, Nguyen et al. have argued that segmented words are a more practical way to query large databases of sequences than conventional query methods. This further motivates the heuristic attempt in this paper to impose an information retrieval model on the cognate detection problem. Our main contribution is the design of an information retrieval based scoring function (see section 4) which can capture the complex morphological shifts between cognates. We tackle this by proposing a shingling (chunking) scheme which incorporates positional information (see section 2) and a graph-based error modelling scheme to capture the transformations (see section 3). Our test harness focuses not only on distinguishing between a pair of cognates, but also on the ability to predict the cognate in a target language (see section 5). Positional Character-based Shingling ==================================== This section describes how to convert a string into a shingle set that encodes positional information. In this paper, we denote by $S$ the shingle set of the cognate from the source language and by $T$ the shingle set of the cognate from the target language. The overlap between these split sets is denoted by $S \cap T$. For example, the source-language set $S$ (Romanian) could be the shingle set of the word *rosmarin* and the target-language set $T$ (Italian) that of *romarin*. **K-gram shingling:** Set-based string similarity measures usually compare the overlap between the shingles of two strings. Shingling is a way of viewing a string as a document by considering $k$ characters at a time.
For example, the shingle set of the word *rosmarin* with $k = 2$ is: $S = \left\lbrace \langle \textit{s} \rangle \textit{r, ro, os, sm, ma, ar, ri, in, n} \langle / \textit{s} \rangle \right\rbrace$. Here, $\langle \textit{s} \rangle$ is the start sentinel token and $\langle / \textit{s} \rangle$ is the stop sentinel token. For simplicity, we ignore the sentinel tokens, which gives: $S = \left\lbrace \textit{r, ro, os, sm, ma, ar, ri, in, n} \right\rbrace$. This method splits the strings into smaller $k$-grams without any positional information. Positional Shingling from 1 End ------------------------------- We argue that unordered *k-gram* splitting can lead to inefficient matching of strings, since a shingle set is treated as a bag of words. Given this, we propose a positional k-gram shingling technique, which attaches position numbers to the splits to capture the sequence of the tokens. For example, the word *rosmarin* can be split position-wise with $k = 2$ as: $S = \left\lbrace \textit{1r, 2ro, 3os, 4sm, 5ma, 6ar, 7ri, 8in, 9n} \right\rbrace$. Thus, the member *4sm* means that it is the fourth member of the set. The motivation behind this modification is that it retains the positional information, which is useful in probabilistic retrieval ranking functions. Positional Shingling from 2 Ends -------------------------------- The main disadvantage of positional shingling from a single end is that any mismatch can completely disturb the order of the rest, leading to low similarity. For example, if the query is *rosmarin* with cognate *romarin*, the corresponding split sets would be $\left\lbrace \textit{1r, 2ro, 3os, 4sm, 5ma, 6ar, 7ri, 8in, 9n} \right\rbrace$ and $\left\lbrace \textit{1r, 2ro, 3om, 4ma, 5ar, 6ri, 7in, 8n} \right\rbrace$. The positions of the members after *2ro* are shifted, so this leads to low similarity between the two cognates.
Only $\left\lbrace \textit{1r, 2ro} \right\rbrace$ is common between the cognates. Considering this, we propose positional shingling from both ends, which is robust against such displacements. We attach the position number to the left if the numbering begins from the start, and to the right if the numbering begins from the end. Then the smaller of the two position numbers is selected. If the position numbers are equal, we select the left position number as a convention. Figure \[algo\] illustrates this algorithm with the splits of *romarin* and *rosmarin*. ![The process of positional tokenisation from both ends. On the left, the algorithm segments the Romanian word *romarin* into the split set $\left\lbrace \textit{1r, 2ro, 3om, 4ma, ar4, ri3, in2, n1} \right\rbrace$. On the right, the algorithm segments *rosmarin* into $\left\lbrace \textit{1r, 2ro, 3os, 4sm, 5ma, ar4, ri3, in2, n1} \right\rbrace$. []{data-label="algo"}](drawing.pdf){width="40.00000%"} Figure 1 shows the split sets of *rosmarin* and *romarin*. Taking their intersection, we get $\left\lbrace \textit{1r, 2ro, ar4, ri3, in2, n1} \right\rbrace$, indicating a higher similarity. Graphical Error Modelling ========================= Once shingle sets are created, common set-overlap measures like set intersection, Jaccard [@jarvelin2007s], XDice [@XDice] or TF-IDF [@tfidf] could be used to measure similarities between two sets. However, these methods only focus on the similarity of the two strings. For cognate detection, it is also crucial to understand how substrings are transformed from the source language to the target language. This section discusses how to model this “dissimilarity” by creating a graphical error model. Algorithm 1 explicates the process of graphical error modelling. For illustration purposes, we walk through the procedure with a Romanian-Italian cognate pair (*mesia*, *messia*).
If the source language is Romanian, then $S = \left\lbrace \textit{1m, 2me, 3es, si3, ia2, a1} \right\rbrace$, the split set of *mesia*. Let the target language be Italian. Then the split set of the Italian word *messia*, denoted $T$, is $\left\lbrace \textit{1m, 2me, 3es, 4ss, si3, ia2, a1} \right\rbrace$, and $|S \cap T|$ is the number of common terms. The term matches are $S \cap T = \left\lbrace \textit{1m, 2me, 3es, si3, ia2, a1} \right\rbrace$. We are interested in examining the “dissimilarity”, that is, the leftover terms in the sets. This means we need to infer patterns from the leftover sets, which are $S - \lbrace S \cap T \rbrace$ and $T - \lbrace S \cap T \rbrace$. From these we can draw mappings that gather information about the corrections. Let *top* and *bottom* be the **ordered sets** referring to $S - \lbrace S \cap T \rbrace$ and $T - \lbrace S \cap T \rbrace$ respectively. In the example, $T - S \cap T = \left\lbrace \textit{4ss} \right\rbrace$ is a *bottom* set, and $S - S \cap T = \left\lbrace \right\rbrace$ is a *top* set. We then follow the instructions in Algorithm 1. ------------------------------------------------------------------------ **Algorithm 1**: Graphical Error Model ------------------------------------------------------------------------ The **Graphical Error Model** takes the two sets derived from the shingling variants, namely *top* and *bottom*. The objective is to output a graphical structure showing connections between members of the *top* and *bottom* sets. 1. **Initialization of *top* and *bottom*:** If the given sets *top* and *bottom* are empty, we initialize them by inserting an empty token ($\phi$) into those sets. **Running example:** This step transforms the *top* set into $\left\lbrace \phi \right\rbrace$ and leaves *bottom* as $\left\lbrace \textit{4ss} \right\rbrace$. 2.
**Equalization of the set cardinalities:** The cardinalities of the sets *top* and *bottom* are made equal by inserting empty tokens ($\phi$) into the middle of the smaller set. **Running example:** The set cardinalities of *top* and *bottom* were already equal, so the output of this step is the *top* set $\left\lbrace \phi \right\rbrace$ and *bottom* $\left\lbrace \textit{4ss} \right\rbrace$. 3. **Inserting the mappings of the set members into the graph:** The empty graph is initialized as $graph = \lbrace \rbrace$. Directed edges are generated, originating from every member of *top* to every member of *bottom*. This results in a complete directed bipartite graph between the *top* and *bottom* sets. Each edge is assigned a probability $P(e)$, which is discussed in a later section. **Running example:** The output of this step is the complete directed bipartite graph between the *top* and *bottom* sets, which is $\left\lbrace \phi \rightarrow \textit{4ss} \right\rbrace$. Another example is provided in Figure \[graph\]. ------------------------------------------------------------------------ **Intuition:** The edges of this graph are used for the probabilistic calculations detailed in section \[error\]. Intuitively, $\phi \rightarrow \textit{4ss}$ means that if the letter *s* is added at position 4 of the source word *mesia*, one obtains the target word *messia*.
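The shingling and graph-construction steps above can be sketched in a few lines of Python. The function names, the sentinel handling, and the use of sorted order for the leftover shingles (the paper keeps them in word order) are our own illustrative choices, not the authors' code:

```python
def kgrams(word, k=2):
    """k-gram shingles with sentinels stripped: "rosmarin" -> r, ro, ..., n."""
    padded = "^" * (k - 1) + word + "$" * (k - 1)
    return [padded[i:i + k].strip("^$") for i in range(len(padded) - k + 1)]

def positional_2end(word, k=2):
    """Two-ended positional shingling: keep the smaller index; ties go left.

    Left-numbered members carry the index as a prefix ("4sm"); right-numbered
    members carry it as a suffix ("ar4"), matching the notation in Figure 1.
    """
    grams = kgrams(word, k)
    n = len(grams)
    out = set()
    for i, g in enumerate(grams):
        left, right = i + 1, n - i
        out.add(f"{left}{g}" if left <= right else f"{g}{right}")
    return out

PHI = "φ"  # empty token

def error_graph(S, T):
    """Algorithm 1: complete bipartite edges between the leftover shingles."""
    common = S & T
    top = sorted(S - common) or [PHI]        # step 1: initialize if empty
    bottom = sorted(T - common) or [PHI]
    while len(top) < len(bottom):            # step 2: equalize cardinalities
        top.insert(len(top) // 2, PHI)
    while len(bottom) < len(top):
        bottom.insert(len(bottom) // 2, PHI)
    return [(u, v) for u in top for v in bottom]  # step 3: all mappings

S, T = positional_2end("rosmarin"), positional_2end("romarin")
print(sorted(S & T))  # ['1r', '2ro', 'ar4', 'in2', 'n1', 'ri3']
print(error_graph(positional_2end("mesia"), positional_2end("messia")))
```

The last line reproduces the running example: the only leftover member is *4ss* on the target side, giving the single edge $\phi \rightarrow \textit{4ss}$.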
![The figure shows the bipartite graph output of the algorithm when the source cognate is *stupor* and the target cognate is *stupeur*.[]{data-label="graph"}](graph.pdf){width="40.00000%"} Evaluation Function =================== The design of our evaluation function rests on two main components: set-based similarity (see section \[sim\]) and probabilistic calculation through a graphical model (see section \[error\]). Similarity Function {#sim} ------------------- The similarity between two sets is usually computed with metrics like Jaccard, Dice and XDice [@XDice]. Dynamic programming based methods like edit distance and LCSR (Longest Common Subsequence Ratio, [@lcsr]) are also often used to calculate the similarity between two strings. Ranking functions incorporate more complex but necessary features which are needed to distinguish between documents. In this paper, we use BM25 and a Dirichlet smoothing based ranking function to compute the similarity. BM25 considers term frequency, inverse document frequency, and length-normalization penalties in its similarity calculations. The Dirichlet smoothing function [@Robertson:2009:PRF:1704809.1704810] makes use of language modelling features and a tunable parameter which aids in Bayesian smoothing of unseen shingles in the split sets [@Blei:2003:LDA:944919.944937]. Error Modelling Function {#error} ------------------------ Knowledge of the common morphological transformations between cognates of two different languages helps in determining whether a pair of words could be cognates. Based on the graphs of cognate pairs between Italian and Romanian (section 3), which model the morphological shifts between the cognates in the two languages, we define an error modelling function on any pair of words from the two languages.
The split set from the source language is denoted by $S$ and that from the target language by $T$; the probabilistic function is then: $$\begin{aligned} \pi \textit{(S, T)} = \frac{1}{|G(S,T)|} \sum_{e \in G(S,T)} \textit{P(e)}^q \end{aligned}$$ where $G(S,T)$ is the constructed graph of $S$ and $T$, $q$ is a strength parameter with range $(0,\infty)$, and $P(e)$ is the probability of edge $e$ occurring between two cognates, estimated by its frequency of being observed in the graphs of cognate pairs in the training set. ![From the graph created in figure \[graph\], we calculate the probability of each edge (by computing frequencies and smoothing) and then aggregate the probabilities of all edges in the graph.[]{data-label="adding"}](adding.pdf){width="30.00000%"} Figure \[adding\] illustrates the aggregation of edges in the graph and figure \[final\] shows the final output of the error modelling function after normalizing. ![After aggregating, we normalize the sum and output the graph conversion score.[]{data-label="final"}](final.pdf){width="30.00000%"} $\pi \textit{(S, T)}$ is called the error modelling function of the word pair; it is an intuitive calculation of the probability that a pair of words are cognates, obtained by estimating their transformations. $q$ is a tunable parameter that controls the effect of the probabilistic frequencies $P(e)$ observed in the training set, often useful in avoiding overfitting. $\frac{1}{|G(S,T)|}$ is the normalization factor that allows us to compare the quantity across different word pairs. Combining Error Modelling and Similarity Function metrics {#balance} --------------------------------------------------------- In this subsection, we merge the notions of similarity and dissimilarity.
We combine a set-based similarity function (discussed in section \[sim\]) and the error modelling function (discussed in section \[error\]) into a score function by a weighted sum, $$\label{main} \textit{score}(S, T) = \lambda \times \textit{sim(S, T)} + (1 - \lambda) \times \pi \textit{(S, T)}$$ where $\lambda \in [0, 1]$ is a weighting hyperparameter, $sim(S, T)$ is a set-based similarity between $S$ and $T$, and $\pi (S, T)$ is the graphical error modelling function defined above. Test Harness ============ Table \[my-label\] summarizes the results of the experimental setup. The elements of the test harness are as follows: Setup Description ----------------- **Dataset:** The experiments in this paper are performed on the dataset used by Ciobanu *et al.* The dataset consists of 400 pairs of cognates and non-cognates for Romanian-French (Ro - Fr), Romanian-Italian (Ro - It), Romanian-Spanish (Ro - Es) and Romanian-Portuguese (Ro - Pt). The dataset is divided in a 3:1 ratio for training and testing. Hyperparameters and thresholds for all the algorithms and baselines were tuned fairly using cross-validation. **Experiments:** Two experiments are included in the test harness. 1. We provide a pair of words and the algorithm aims to detect whether they are cognates. Accuracy on the test set is used as the evaluation metric. 2. We provide a source cognate as the input and the algorithm returns a ranked list as the output. The performance of the algorithm depends on the rank of the desired target cognate. This is measured by MRR (Mean Reciprocal Rank), defined as $MRR = \frac{1}{|dataset|}\sum_{i=1}^{|dataset|} \frac{1}{rank_i}$, where $rank_i$ is the rank of the true cognate in the ranked list returned for the $i^{th}$ query. This dataset is prepared by taking the search candidates to be the entire lexicon of the target language.
Given the input cognate, the algorithm outputs possible matches after evaluating the whole lexicon list. Thus, the collection of documents is the lexicon (the search space) and the queries are the cognates. Baselines --------- **String Similarity Baselines:** It is natural to compare our methods with the prevalent string similarity baselines, since the notions behind cognate detection and string similarity are closely related. Edit distance is often used as a baseline in cognate detection papers [@lcsr]. It computes the number of operations required to transform the source cognate into the target cognate. We have also included XDice [@XDice], a set-based similarity measure that operates on the shingle sets of two strings. Hidden alignment conditional random fields (CRFs), often used in transliteration, serve as a generative sequential model to compute probabilities between the cognates, analogous to a learnable edit distance [@mccallum2012conditional]. Among these baselines, the CRF performs best in accuracy and MRR. **Orthographic Cognate Detection:** Work along these lines usually aligns substrings and feeds the alignments into classifiers like support vector machines [@ciobanu2015automatic; @ciobanu2014building] or hidden Markov models [@bhargava2009multiple]. We include Ciobanu *et al.* as a baseline, which employs dynamic programming based methods for sequence alignment, after which features are extracted from the mismatches in the word alignments. These features are plugged into a classifier like an SVM, which results in decent accuracy, with an average of 84%, but only 16% MRR. This result is due to the fact that a large number of features leads to overfitting, and the scoring function is not able to single out the appropriate cognate.
**Phonetic Cognate Detection:** Research in automatic cognate identification using phonetic aspects involves the computation of similarity by decomposing phonetically transcribed words [@kondrak2000new], acoustic models [@mielke2012phonetically], phonetic encodings [@rama2015automatic], and aligned segments of transcribed phonemes [@list2012lexstat]. We implemented Rama’s approach, which employs a Siamese convolutional neural network to learn phonetic features jointly with language relatedness for cognate identification, achieved through phoneme encodings. Although it performs well on accuracy, it shows poor results on MRR, possibly for the same reason as the SVM. Ablation experiments -------------------- We experiment with variables such as the length of substrings, the ranking function, the shingling technique, and the graphical error model, as detailed in Table \[my-label\]. Among the shingling techniques, we found that character bigrams with 2-ended positioning give better results. Adding trigrams to the database does not have a major effect on the results. The results clearly indicate that adding the graphical error model features greatly improves the test results. Among the ranking functions, Dirichlet smoothing tends to give better results, possibly because it requires fewer parameters to tune and is able to capture sequential data (like substrings) better than other ranking functions [@Fang:2011:DEI:1961209.1961210]. The hyperparameter $\lambda$ mentioned in section \[balance\] was tuned to around 0.6, indicating a 60% contribution from the similarity function and a 40% contribution from the dissimilarity. Overall, the combination of bigrams with 2-ended positional shingling and graphical error modelling with the Dirichlet ranking function gives the best performance, with an average of 86% accuracy and 60% MRR.
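For concreteness, the scoring pipeline of section 4 and the MRR evaluation can be sketched as follows. The Dirichlet-smoothed similarity below is the standard query-likelihood form; the values of $\mu$, $\lambda$, $q$, the background probabilities, and the floor for unseen shingles are illustrative assumptions, not the tuned settings from the paper, and in practice the similarity would be normalized to a scale comparable with $\pi$ before combining:

```python
import math
from collections import Counter

def dirichlet_sim(query, doc, bg_prob, mu=2.0):
    """Dirichlet-smoothed log-likelihood of the query shingles under a doc.

    bg_prob maps shingles to background (collection) probabilities; unseen
    shingles fall back to a small floor so the log stays finite.
    """
    counts, n = Counter(doc), len(doc)
    return sum(math.log((counts[t] + mu * bg_prob.get(t, 1e-6)) / (n + mu))
               for t in query)

def pi_score(edges, P, q=1.0):
    """Error-modelling function: pi = (1/|G|) * sum over edges of P(e)^q."""
    return sum(P.get(e, 0.0) ** q for e in edges) / len(edges)

def score(sim_st, pi_st, lam=0.6):
    """Eq. (2): lambda * sim + (1 - lambda) * pi."""
    return lam * sim_st + (1 - lam) * pi_st

def mean_reciprocal_rank(ranks):
    """MRR = (1/N) * sum of 1/rank_i over the N queries."""
    return sum(1.0 / r for r in ranks) / len(ranks)

print(round(mean_reciprocal_rank([1, 2, 4]), 3))  # 0.583
```

Ranking a query cognate then amounts to computing `score` against every word in the target lexicon and sorting; the rank of the true cognate feeds `mean_reciprocal_rank`.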
Conclusions =========== We approached the harder problem in which the algorithm aims to find the target cognate when a source cognate is given. Positional shingling outperformed non-positional shingling based methods, which demonstrates that the inclusion of positional information about substrings is important. The addition of the graphical error model boosted the test results, which shows that it is crucial to add dissimilarity information in order to capture the transformations of the substrings. Methods whose scoring functions rely only on complex machine learning algorithms like CNNs or SVMs tend to give worse results when searching for a cognate, due to the huge output space. Acknowledgements {#acknowledgements .unnumbered} ================ This work would not have been possible without the support of my parents. I would like to thank the NLP community for providing open-sourced resources to help an underprivileged and naive student like me. Finally, I would like to thank the reviewers, mentors, and organizers of ACL-SRW for supporting student research. Special thanks to my classmate Chun Sik Chan and SRW mentor Sam Bowman for providing excellent critiques of this paper, and to Alina Ciobanu for providing the dataset.
--- abstract: 'A geometrical conclusion: the Sierpinski gasket, and two, three, and four Sierpinski gaskets in a line are self-similar, but five Sierpinski gaskets in a line is not. This is proved in this paper.' author: - Sheng Zhang title: | Five-Collinear Sierpinski Gasket is\ Not Self-Similar --- \[section\] \[theorem\][Proposition]{} \[theorem\][Corollary]{} \[theorem\][Lemma]{} Introduction ============ The Sierpinski gasket, and two, three, and four Sierpinski gaskets in a line are each the attractor of some contractive IFS consisting of similitudes, but five Sierpinski gaskets in a line is not. The proof is based on induction. Five Sierpinski gaskets in a line has non-integral (fractal) dimension, which makes the situation a little more complicated. The idea of the proof is to use figures of shape similar to five Sierpinski gaskets in a line, but of integral dimension, to analyze its properties. Notations And Definitions ========================= Notations --------- **$\mathbb{Z}$** = $\{\dots, -2, -1, 0, 1, 2, \dots \}$.\ **$\mathbb{N}$** = $\{0, 1, 2, \dots \}$.\ **$\mathbb{N}_+$** = $\{1, 2, \dots \}$.\ **$\blacktriangle P_i P_j P_k$**: The solid triangle in $\mathbb{R}^2$ with vertices $P_i$, $P_j$, and $P_k$.\ \ Below, let $A$, $B$ be sets and $f$, $g$ be maps:\ **$d(x,y)$**: The distance between point $x$ and point $y$.\ **$A-B$** = $\{x\mid x\in A, x\not\in B\}$.\ **$diam(A)$** = $\sup_{x\in A, y\in A}\{d(x,y)\}$, where $A$ is a nonempty subset of $\mathbb{R}^2$.\ **$d(A,B)$** = $\inf_{x\in A, y\in B}\{d(x,y)\}$, where $A$, $B$ are nonempty subsets of $\mathbb{R}^2$.\ **$f\circ g$**: The composition of $f$ and $g$, which maps $x$ to $f(g(x))$.\ **$f^{\circ k}$**: The composition of $k$ copies of $f$ ($k\in \mathbb{Z}$; for negative $k$ this denotes compositions of $f^{-1}$, when $f$ is bijective). 
Definitions {#def} ----------- **Similitude**: a map $\mathbb{R}^2\to \mathbb{R}^2$ which is the composition of scaling, rotation, translation, and possibly reflection. That is, $f$ is a similitude if and only if there exist $\theta \in [0,2\pi)$, $k \in (0, +\infty)$, and $x_0, y_0\in \mathbb{R}$, such that $$f{x \choose y}=k \left( \begin{array}{cc} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{array} \right) {x \choose y} + {x_0 \choose y_0}$$ or $$f{x \choose y}=k \left( \begin{array}{cc} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{array} \right) {x \choose y} + {x_0 \choose y_0},$$ where $k$ is called the **scaling factor** of the similitude. If $k$ is strictly less than $1$, then the similitude is called $\textbf{contractive}$.\ **IFS** (Iterated Function System) [@Hutchinson]: $F=\{ \mathbb{R}^2;f_1,f_2,\cdots,f_n\}$, where $f_1,f_2,\cdots,f_n:\mathbb{R}^2\to \mathbb{R}^2$ are continuous maps.\ **Contractive IFS consisting of similitudes**: $F=\{ \mathbb{R}^2;f_1,f_2,\cdots,f_n\}$, where $f_1,f_2,\cdots,f_n:\mathbb{R}^2\to \mathbb{R}^2$ are contractive similitudes.\ **Sierpinski gasket**: The attractor of the IFS $F=\{\mathbb{R}^2; f_1{x \choose y}= \left( \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{array} \right) {x \choose y}, \\ f_2{x \choose y}= \left( \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{array} \right) {x \choose y}+{\frac{1}{4} \choose \frac{\sqrt{3}}{4}}, f_3{x \choose y}= \left( \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{array} \right) {x \choose y} + {\frac{1}{2} \choose 0}\}$ (see Figure \[figure1\]). **Two-Sierpinski**: The union of the Sierpinski gasket and the Sierpinski gasket translated along the positive $x$ axis by $1$ unit (see Figure \[figure2\]). **$n$-Sierpinski**: The union of $(n-1)$-Sierpinski and the Sierpinski gasket translated along the positive $x$ axis by $n-1$ units (see Figure \[figure3\] when $n=3$). 
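As an illustration of these definitions (not part of the proof), the attractor of the Sierpinski-gasket IFS above can be approximated numerically with the standard chaos-game iteration:

```python
import random

# The three similitudes f_1, f_2, f_3 defining the Sierpinski gasket,
# exactly as in the definition above (scaling 1/2 plus translations).
F = [
    lambda p: (0.5 * p[0], 0.5 * p[1]),
    lambda p: (0.5 * p[0] + 0.25, 0.5 * p[1] + 3 ** 0.5 / 4),
    lambda p: (0.5 * p[0] + 0.5, 0.5 * p[1]),
]

def chaos_game(n_points=10000, seed=0):
    """Approximate the attractor by iterating randomly chosen maps."""
    random.seed(seed)
    p, pts = (0.0, 0.0), []
    for _ in range(n_points):
        p = random.choice(F)(p)
        pts.append(p)
    return pts

pts = chaos_game()
# Every iterate stays inside the unit solid triangle containing the gasket.
assert all(0 <= x <= 1 and 0 <= y <= 3 ** 0.5 / 2 for x, y in pts)
```

Plotting `pts` reproduces Figure \[figure1\]; translating the point cloud by $1, 2, \dots$ units in $x$ gives the $n$-Sierpinski figures.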
Five-Sierpinski will be denoted by **$E$** in this paper (see Figure \[figure4\]). Let **$C$** be the figure obtained from the Sierpinski gasket dilated by a factor of $8$. Then $E \subset C$ (see Figure \[figure5\]; the whole figure is $C$, and the brown part is $E$). **Figures $A_n$ and $B_n$** ($n\in \mathbb{Z}$ and $n \ge -3$): Since $C$ is a “bigger” version of the Sierpinski gasket, it can be constructed as the intersection of a sequence of sets, the first four of which are listed in Figure \[figure6\]. Denote this sequence of sets by $A_n$ ($n\in \mathbb{Z}$ and $n \ge -3$). Notice that each $A_n$ is the union of $3^{n+3}$ solid equilateral triangles of the same size, whose topological interiors are pairwise disjoint. Denote the union of all vertices of these solid equilateral triangles by $B_n$ ($n\in \mathbb{Z}$ and $n \ge -3$), the first four of which are listed in Figure \[figure7\]. **Map $T$**: $\mathbb{R}^2 \to \mathbb{R}^2$, $$\label{def t} T{x \choose y}=\frac{1}{2} \left( \begin{array}{cc} \cos\frac{2\pi}{3} & -\sin\frac{2\pi}{3} \\ \sin\frac{2\pi}{3} & \cos\frac{2\pi}{3} \end{array} \right) {x \choose y} + {\frac{3}{4} \choose \frac{\sqrt{3}}{4}}.$$ Then $T$ is a contractive similitude with scaling factor $\frac{1}{2}$, and $T$ is bijective. Key Theorem =========== As indicated in Figure \[figure1\], Figure \[figure2\], and Figure \[figure3\], the Sierpinski gasket, two-Sierpinski, and three-Sierpinski are each the attractor of some contractive IFS consisting of similitudes. These IFSs can be constructed as follows. In each figure, find similitudes that map the whole figure to a region of the same colour. The number of similitudes equals the number of different colours. The desired IFS is then constructed from these similitudes. We can also construct a contractive IFS consisting of similitudes having four-Sierpinski as its attractor. This case is similar to three-Sierpinski. 
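As a quick numerical sanity check (again, not part of the proof), one can verify that the map $T$ of (\[def t\]) halves all pairwise distances, i.e. that it is indeed a similitude with scaling factor $\frac{1}{2}$:

```python
import math

def T(p):
    """The map T of eq. (def t): scaling by 1/2, rotation by 2*pi/3,
    followed by the translation (3/4, sqrt(3)/4)."""
    c, s = math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3)
    x, y = p
    return (0.5 * (c * x - s * y) + 0.75,
            0.5 * (s * x + c * y) + math.sqrt(3) / 4)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# A similitude with scaling factor 1/2 halves every pairwise distance.
p, q = (0.3, 1.7), (4.2, -0.5)
assert abs(dist(T(p), T(q)) - 0.5 * dist(p, q)) < 1e-12
```

Since the linear part of $T$ is $\frac{1}{2}$ times a rotation matrix, this distance ratio holds exactly for any pair of points, up to floating-point error.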
However, as indicated in the following theorem, five-Sierpinski cannot be the attractor of any contractive IFS consisting of similitudes. \[theorem1\] Five-Sierpinski is not the attractor of any contractive IFS consisting of similitudes. The proof of this theorem needs several lemmas. \[lemma1\] Denote five-Sierpinski by $E$. If $f$ is a similitude with scaling factor $k\le \frac{1}{80}$ such that $f(E)\subset E$ and $f(E)\cap \blacktriangle P_6 P_7 P_8 \neq \emptyset$ (see Figure \[figure4\]), then $f(E)\subset \blacktriangle P_1 P_4 P_5 \cap E$. Furthermore, $T^{-1}(f(E))\subset \blacktriangle P_1 P_2 P_3 \cap E \subset E$. First observe that $diam(E)=5$. Since $f$ is a similitude, for any $x,y\in \mathbb{R}^2$, $d(f(x),f(y))=k d(x,y)$. So, $diam(f(E))=\sup_{x\in f(E),y\in f(E)}\{d(x,y)\}=\sup_{x\in E,y\in E}\{d(f(x),f(y))\}=\sup_{x\in E,y\in E}\{k d(x,y)\}=k \sup_{x\in E,y\in E}\{d(x,y)\}=k\cdot diam(E)\le \frac{5}{80}=\frac{1}{16}$. As $f(E)\cap \blacktriangle P_6 P_7 P_8 \neq \emptyset$, choose $x_0 \in f(E)\cap \blacktriangle P_6 P_7 P_8$. Then, for all $x_1 \in f(E)$, $d(x_1,x_0)\le diam(f(E))=\frac{1}{16}$. Since $d(\blacktriangle P_6 P_7 P_8 , E-\blacktriangle P_1 P_4 P_5)=\frac{\sqrt{3}}{16}$, for all $x_2 \in (E-\blacktriangle P_1 P_4 P_5 )$, we have $d(x_2,x_0)\ge \frac{\sqrt{3}}{16}$. So, $f(E) \cap (E-\blacktriangle P_1 P_4 P_5 )= \emptyset$. Now, $f(E)\subset E$, so $f(E)\subset \blacktriangle P_1 P_4 P_5 \cap E$. Since $\blacktriangle P_1 P_4 P_5 \cap E\subset T(E)$ ($T$ is defined in \[def t\]), $f(E)\subset \blacktriangle P_1 P_4 P_5 \cap T(E)$. So, $T^{-1}(f(E))\subset T^{-1}(\blacktriangle P_1 P_4 P_5 \cap T(E))=T^{-1}(\blacktriangle P_1 P_4 P_5 )\cap E=\blacktriangle P_1 P_2 P_3 \cap E \subset E$. \[lemma2\] Denote five-Sierpinski by $E$. 
If $f$ is a contractive similitude such that $f(E)\subset E$, then there exists $m\in \mathbb{N}_+$ such that the scaling factor of $f$ is $\frac{1}{2^m}$ and $f(P_1),f(P_2),f(P_3)\in B_m$ (see Figure \[figure4\] and Figure \[figure7\]). As shown in Figure \[figure5\], $E\subset C$, so $f(E)\subset E\subset C$. Because $C=\cap^{\infty}_{n=-3}A_n$, $f(E)\subset A_n$ for all $n\in \{ -3,-2,-1,0,\cdots \}$. Each $A_n$ is the union of $3^{n+3}$ solid equilateral triangles of the same size, whose topological interiors are pairwise disjoint (see Figure \[figure6\]). Consider the three points $f(P_1),f(P_2),f(P_3)$. Now $f(P_1),f(P_2),f(P_3)$ are distinct points since $f$ is bijective. They are either in the same solid triangle or not. Suppose $N$ is the greatest integer in $\{ -3,-2,-1,0,\cdots \}$ such that $f(P_1),f(P_2),f(P_3)$ are in the same solid triangle of $A_N$ (note that such $N$ exists because $f(P_1),f(P_2),f(P_3)$ are at positive distance from each other and the diameter of the solid triangles of $A_n$ tends to $0$ as $n$ tends to $\infty$). Then $f(P_1),f(P_2),f(P_3)$ are not in the same solid triangle of $A_{N+1}$. Suppose $\blacktriangle Q_1 Q_4 Q_6$ is the solid triangle of $A_N$ containing $f(P_1),f(P_2),f(P_3)$ (see Figure \[figure8\]). Then $\blacktriangle Q_1 Q_2 Q_3$, $\blacktriangle Q_2 Q_4 Q_5$, $\blacktriangle Q_3 Q_5 Q_6$ are solid triangles of $A_{N+1}$. So $f(P_1),f(P_2),f(P_3)$ are not all in the same one of $\blacktriangle Q_1 Q_2 Q_3$, $\blacktriangle Q_2 Q_4 Q_5$, $\blacktriangle Q_3 Q_5 Q_6$. Suppose two of $f(P_1),f(P_2),f(P_3)$ are in the same one of $\blacktriangle Q_1 Q_2 Q_3$, $\blacktriangle Q_2 Q_4 Q_5$, $\blacktriangle Q_3 Q_5 Q_6$ and the other one of $f(P_1),f(P_2),f(P_3)$ is in another one of $\blacktriangle Q_1 Q_2 Q_3$, $\blacktriangle Q_2 Q_4 Q_5$, $\blacktriangle Q_3 Q_5 Q_6$. Without loss of generality, suppose $f(P_1),f(P_2)\in \blacktriangle Q_2 Q_4 Q_5$ and $f(P_3)\in \blacktriangle Q_3 Q_5 Q_6$. 
Since $f$ is a similitude, $f$ maps straight lines to straight lines. Because $f(E)\subset A_{N+1}$, $f$ maps segment $P_1 P_3$ to a segment in $A_{N+1}$ and maps segment $P_2 P_3$ to another segment in $A_{N+1}$. Now, segment $f(P_1)f(P_3)\subset A_{N+1}$ and segment $f(P_2)f(P_3)\subset A_{N+1}$. If $f(P_3)=Q_3$, then $f(P_1)=Q_2$ and $f(P_2)=Q_5$, or $f(P_1)=Q_5$ and $f(P_2)=Q_2$. Both cases are impossible since $f(P_8)\in f(E)\subset A_{N+1}$ (see Figure \[figure4\] for $P_8$). If $f(P_3)\neq Q_3$, then $f(P_1),f(P_2),f(P_3)$ are in segment $Q_4 Q_6$, which is also impossible. Therefore, $f(P_1),f(P_2),f(P_3)$ are in different ones of $\blacktriangle Q_1 Q_2 Q_3$, $\blacktriangle Q_2 Q_4 Q_5$, $\blacktriangle Q_3 Q_5 Q_6$. Without loss of generality, suppose $f(P_1)\in \blacktriangle Q_1 Q_2 Q_3$, $f(P_2)\in \blacktriangle Q_2 Q_4 Q_5$, $f(P_3)\in \blacktriangle Q_3 Q_5 Q_6$. Then, using similar arguments as above, we can prove $f(P_1)=Q_1$, $f(P_2)=Q_4$, $f(P_3)=Q_6$. Now, the scaling factor of $f$ is $\frac{1}{2^N}$ and $f(P_1),f(P_2),f(P_3)\in B_N$. As $f$ is contractive, $N\ge 1$. So, the lemma is proved. \[lemma3\] For all $k\in \mathbb{N}_+$, $$T^{\circ (k+1)}(\blacktriangle P_6 P_7 P_8)\subset T^{\circ k}(\blacktriangle P_6 P_7 P_8)\subset \cdots \subset \blacktriangle P_6 P_7 P_8$$ and $$T^{\circ (k+1)}(\blacktriangle P_6 P_7 P_8 \cap E)\subset T^{\circ k}(\blacktriangle P_6 P_7 P_8 \cap E)\subset \cdots \subset \blacktriangle P_6 P_7 P_8 \cap E.$$ First observe that $T(\blacktriangle P_6 P_7 P_8)\subset \blacktriangle P_6 P_7 P_8$. For all $k\in \mathbb{N}_+$, by applying $T$, $T^{\circ 2}$, $\cdots$, $T^{\circ k}$ to both sides of this relation respectively, we have $T^{\circ 2}(\blacktriangle P_6 P_7 P_8)\subset T(\blacktriangle P_6 P_7 P_8)$, $T^{\circ 3}(\blacktriangle P_6 P_7 P_8)\subset T^{\circ 2}(\blacktriangle P_6 P_7 P_8)$, $\cdots$, $T^{\circ (k+1)}(\blacktriangle P_6 P_7 P_8)\subset T^{\circ k}(\blacktriangle P_6 P_7 P_8)$. 
Thus, $T^{\circ (k+1)}(\blacktriangle P_6 P_7 P_8)\subset T^{\circ k}(\blacktriangle P_6 P_7 P_8)\subset \cdots \subset \blacktriangle P_6 P_7 P_8$. Similarly, since $T(\blacktriangle P_6 P_7 P_8 \cap E)\subset \blacktriangle P_6 P_7 P_8 \cap E$, we have $T^{\circ (k+1)}(\blacktriangle P_6 P_7 P_8 \cap E)\subset T^{\circ k}(\blacktriangle P_6 P_7 P_8 \cap E)\subset \cdots \subset \blacktriangle P_6 P_7 P_8 \cap E$. \[lemma4\] If $f$ is a contractive similitude such that $f(E)\subset E$ and the scaling factor of $f$ is $\frac{1}{2^m}$ for some $m\in \mathbb{N}_+$, then $T^{\circ m}(\blacktriangle P_6 P_7 P_8)\cap f(E)=\emptyset$. When $m\le 6$, according to Lemma \[lemma2\], $f(P_1),f(P_2),f(P_3)\in B_m$. Since $f$ is a similitude in $\mathbb{R}^2$, it is uniquely determined by the images of the three noncollinear points $P_1,P_2,P_3$. As $f(P_1),f(P_2),f(P_3)\in B_m$ and $B_m$ consists of finitely many points, the possible choices of $f(P_1),f(P_2),f(P_3)$ are finite. So, the possible choices of $f$ are finite. Checking directly among all possible choices of $f$, we find that the statement $T^{\circ m}(\blacktriangle P_6 P_7 P_8)\cap f(E)=\emptyset$ always holds. For any integer $M\ge 6$, suppose the lemma holds when $m=M$. When $m=M+1$, $f$ has scaling factor $\frac{1}{2^m}=\frac{1}{2^{M+1}}\le \frac{1}{2^7}\le \frac{1}{80}$. If $f(E)\cap \blacktriangle P_6 P_7 P_8 =\emptyset$, then the lemma is proved since $T^{\circ m}(\blacktriangle P_6 P_7 P_8)\subset \blacktriangle P_6 P_7 P_8$. If $f(E)\cap \blacktriangle P_6 P_7 P_8 \neq \emptyset$, then according to Lemma \[lemma1\], $T^{-1}(f(E))\subset E$. Let $g=T^{-1}\circ f$. Then $g$ is a contractive similitude such that $g(E)\subset E$ and the scaling factor of $g$ is $\frac{1}{2^{m-1}}=\frac{1}{2^M}$. By the induction hypothesis, $T^{\circ M}(\blacktriangle P_6 P_7 P_8)\cap g(E)=\emptyset$. 
Since $T$ is bijective, $T^{\circ m}(\blacktriangle P_6 P_7 P_8)\cap f(E)=T^{\circ (M+1)}(\blacktriangle P_6 P_7 P_8)\cap T(g(E))=T(T^{\circ M}(\blacktriangle P_6 P_7 P_8)\cap g(E))=T(\emptyset)=\emptyset$. The lemma is proved by induction. \[lemma4’\] If $f$ is a contractive similitude such that $f(E)\subset E$, then there exists $K\in \mathbb{N}_+$ such that $T^{\circ K}(\blacktriangle P_6 P_7 P_8)\cap f(E)=\emptyset$. Suppose $f$ has scaling factor $k$. According to Lemma \[lemma2\], there exists $m\in \mathbb{N}_+$ such that $k=\frac{1}{2^m}$. According to Lemma \[lemma4\], $T^{\circ m}(\blacktriangle P_6 P_7 P_8)\cap f(E)=\emptyset$. Let $K=m$ and the lemma is proved. \[lemma4”\] If $f$ is a contractive similitude such that $f(E)\subset E$, then there exists $K\in \mathbb{N}_+$ such that for all $k\ge K$, $T^{\circ k}(\blacktriangle P_6 P_7 P_8)\cap f(E)=\emptyset$. This lemma follows directly from Lemma \[lemma3\] and Lemma \[lemma4’\]. We can now prove the key theorem. Suppose there exists a contractive IFS $F=\{\mathbb{R}^2 ;f_1 ,f_2 ,\cdots ,f_n \}$ ($f_1 ,f_2 ,\cdots ,f_n$ are contractive similitudes) such that $E$ is the attractor of $F$. Then $f_1(E)\cup f_2(E)\cup \cdots \cup f_n(E)=E$. So, for all $m=1,2,\cdots ,n$, $f_m(E)\subset E$. According to Lemma \[lemma4”\], there exists $K_m \in \mathbb{N}_+$ such that for all $k\ge K_m$, $T^{\circ k}(\blacktriangle P_6 P_7 P_8)\cap f_m(E)=\emptyset$. Let $K=\max \{K_1,K_2,\cdots, K_n\}$. Then for all $m=1,2,\cdots ,n$, $T^{\circ K}(\blacktriangle P_6 P_7 P_8)\cap f_m(E)=\emptyset$. So, $T^{\circ K}(\blacktriangle P_6 P_7 P_8)\cap (f_1(E)\cup f_2(E)\cup \cdots \cup f_n(E))=\emptyset$, or $$T^{\circ K}(\blacktriangle P_6 P_7 P_8)\cap E=\emptyset.\label{theorem1 eq1}$$ According to Lemma \[lemma3\], $T^{\circ K}(\blacktriangle P_6 P_7 P_8 \cap E)\subset \blacktriangle P_6 P_7 P_8 \cap E\subset E$. 
Since $\blacktriangle P_6 P_7 P_8 \cap E\neq \emptyset$, $T^{\circ K}(\blacktriangle P_6 P_7 P_8 \cap E)\neq \emptyset$. Thus, $T^{\circ K}(\blacktriangle P_6 P_7 P_8 \cap E) \cap E=T^{\circ K}(\blacktriangle P_6 P_7 P_8 \cap E)\neq \emptyset$. As $T^{\circ K}(\blacktriangle P_6 P_7 P_8 \cap E)\subset T^{\circ K}(\blacktriangle P_6 P_7 P_8)$, $T^{\circ K}(\blacktriangle P_6 P_7 P_8)\cap E\neq \emptyset$, which contradicts (\[theorem1 eq1\]). The contradiction implies that five-Sierpinski is not the attractor of any contractive IFS consisting of similitudes. [99]{} J. E. Hutchinson, [*Fractals and Self-Similarity*]{}, Indiana University Mathematics Journal 30: 713-747, 1981.
--- author: - 'Goutam Das,' - 'M. C. Kumar' - and Kajal Samanta bibliography: - 'references.bib' title: 'Resummed inclusive cross-section in Randall-Sundrum model at NNLO+NNLL' --- Introduction {#sec:introduction} ============ The Standard Model (SM) of particle physics is now well established after the discovery of the scalar Higgs boson [@Aad:2012tfa; @Chatrchyan:2012xdj] at the Large Hadron Collider (LHC). The properties of the Higgs boson are being tested with very high accuracy in the hope of finding new physics beyond the SM (BSM). A large class of BSM scenarios is motivated by the large hierarchy between the electroweak symmetry breaking scale and the Planck scale, known as the gauge hierarchy problem. A wide class of theories has been proposed to address this problem through the introduction of large extra dimensions in TeV-scale brane-world scenarios. In particular, the models with a warped extra dimension proposed by Randall and Sundrum (RS) [@Randall:1999ee] are attractive candidates to solve this gauge hierarchy problem. In its simplest version, the RS model predicts spin-2 Kaluza-Klein (KK) excitations in the TeV mass range, which could be accessible at the LHC or at future hadron or electron-positron colliders. The search for physics beyond the SM has been an important objective of the LHC physics program. Precision physics plays an important role in this regard, allowing cross-sections and distributions to be predicted accurately within the perturbative framework. Processes like Higgs and pseudo-scalar Higgs boson production [@Harlander:2002wh; @Harlander:2002vv; @Anastasiou:2002yz; @Anastasiou:2002wq; @Ravindran:2003um; @Harlander:2003ai] and DY production [@Hamberg:1990np; @Harlander:2002wh] are already known at NNLO accuracy. The large perturbative corrections for Higgs production at NNLO have even pushed the calculation to N$^3$LO order [@Anastasiou:2015ema; @Mistlberger:2018etf; @Duhr:2019kwi]. 
Recently the DY production has also been calculated to third order in the strong coupling [@Duhr:2020seh]. Exclusive observables like rapidity are also being calculated to the same accuracy (see for example [@Anastasiou:2004xq; @Anastasiou:2011qx; @Buehler:2012cu; @Dulat:2018bfe; @Cieri:2018oms; @Anastasiou:2003yy; @Anastasiou:2003ds; @Catani:2009sm; @Melnikov:2006kv; @Gavin:2012sy]). In order to achieve perturbative stability, it is instructive to go beyond NNLO by computing the soft-virtual (SV) cross-section at N$^3$LO order. The SV corrections constitute a significant part of the full cross-section and have been successfully computed for many processes in the SM, for example, Higgs production [@Anastasiou:2014vaa; @Moch:2005ky; @Laenen:2005uz; @Ravindran:2005vv; @Ravindran:2006cg; @Idilbi:2005ni; @Li:2014afw; @Ahmed:2014cha], DY production [@Ravindran:2006bu; @Ahmed:2014cla], and associated production [@Kumar:2014uwa] to N$^3$LO, as well as in the BSM domain, e.g. pseudo-scalar production [@Ahmed:2015qda] in the 2HDM. Similar accuracy has also been achieved for rapidity distributions [@Ravindran:2006bu; @Ravindran:2007sv; @Ahmed:2014uya; @Ahmed:2014era]. In the threshold region, where the partonic variable $z\to 1$, the truncated fixed-order cross-section becomes unreliable due to the presence of large logarithms. These large logarithms arise from the constrained phase space available to soft gluons. In order to obtain a reliable prediction in these corners of phase space as well, it is thus essential to resum these large logarithms to all orders. Threshold resummation has been performed successfully for inclusive Higgs production [@Catani:2003zt; @Moch:2005ky; @Catani:2014uta; @Bonvini:2014joa; @Ahmed:2015sna; @Bonvini:2016frm; @H:2019dcl], DY production [@Moch:2005ky; @Catani:2014uta], DIS [@Moch:2005ba], as well as for pseudo-scalar production [@Schmidt:2015cea; @deFlorian:2007sr; @Ahmed:2016otz] up to N$^3$LL accuracy. 
The first results towards N$^4$LL corrections have also recently become available for DIS in [@Das:2019btv]. Moreover, for differential observables like rapidity, resummation is known to NNLL accuracy for many important processes (see for example [@Catani:1989ne; @Westmark:2017uig; @Banerjee:2017cfc; @Banerjee:2018vvb; @Lustermans:2019cau; @Ebert:2017uel]). In the context of large extra dimensions, the NLO corrections are known for many important processes at the LHC [@Mathews:2004xp; @Kumar:2007af; @Kumar:2008dn; @Kumar:2008pk; @Kumar:2009nn; @Agarwal:2009xr; @Agarwal:2010sp; @Kumar:2011yta; @Agarwal:2009zg; @Agarwal:2010sn; @Mathews:2005bw] within the Arkani-Hamed-Dimopoulos-Dvali (ADD) [@ArkaniHamed:1998rs] and RS [@Randall:1999ee] models. It is observed in the NLO QCD computation [@Mathews:2004xp] that the K-factors for di-lepton production are potentially large and range up to 60%. NLO results matched with parton showers are also known for di-final-state processes in the ADD [@Frederix:2012dp; @Frederix:2013lga] and RS [@Das:2014tva] models. The associated production [@Kumar:2010kv] as well as triple gauge boson production processes [@Kumar:2011jq] are also known. In the RS model, the triple neutral gauge boson productions are available [@Das:2015bda] at ME+PS accuracy in the M[AD]{}G[RAPH]{} framework. Moreover, generic universal and non-universal spin-2 production processes are automated [@Das:2016pbk] in the F[[EYNRULES]{}]{} [@Alloul:2013bka] - M[AD]{}G[RAPH]{}5\_[A]{}MC@NLO [@Alwall:2014hca] framework, providing NLO accuracy for inclusive and exclusive cross-sections for all relevant channels at the LHC. The first attempt to go beyond NLO accuracy was made in [@deFlorian:2013wpa], calculating the SV corrections at NNLO. This was made possible by the calculation of the spin-2 form factor [@deFlorian:2013sza] at the same order. 
Shortly after, the complete NNLO corrections were computed in [@Ahmed:2016qhu] using the method of reverse unitarity [@Anastasiou:2002yz], and a phenomenological study was performed in the context of the ADD model. There it was found that the NNLO correction changes the cross-section by 21% over the NLO result and constrains the scale uncertainty to $1.6\%$. Similar accuracy is also available for non-universal spin-2 production [@Banerjee:2017ewt], where the spin-2 particle couples with different strengths to the SM fields. The first attempt at calculating the SV corrections beyond NNLO can be seen in [@Das:2019bxi] in the context of the ADD model for the DY invariant mass distribution, after the completion of the three-loop quark and gluon form factors [@Ahmed:2015qia]. The perturbative coefficients are the same for any spin-2 production with universal coupling to the SM. There it was noticed that the N$^3$LO SV cross-section changes the NNLO result by -0.7% at $Q = 1500$ GeV ($Q$ being the invariant mass of the lepton pair). Moreover, the authors also performed threshold resummation up to N$^3$LL accuracy; the corrections are found to be around 1% over NNLO, with the scale uncertainty reduced to $1.5\%$. In this article, we focus on massive KK production in the RS framework. Since the spin-2 RS KK excitations also couple universally to the SM stress-energy tensor, the analytical perturbative coefficients are the same as in the generic universal spin-2 case like ADD. Phenomenologically, however, the RS KK states provide a very distinctive signature compared to the ADD model at the LHC. Whereas the di-lepton invariant mass distribution in the ADD model is a continuum, in the RS model one finds well-separated massive KK resonances. Using the coefficients already obtained in the ADD scenario, we first study the invariant mass distribution for DY production at NNLO accuracy in the RS model. 
Next we study the impact of the N$^3$LO SV correction as well as the NNLL resummed effects over the NNLO correction within this model. The article is organized as follows: in sec. (\[sec:model\]) we briefly describe the RS model and present the interaction Lagrangian. In sec. (\[sec:dyinv\]) we set up the theoretical framework for the invariant mass distribution for di-lepton production in the RS model, followed by the discussion of the SV cross-section in sec. (\[sec:svxsect\]) and threshold resummation in sec. (\[sec:resum\]). In sec. (\[sec:numerics\]), we present the distribution at NNLO and the results for the N$^3$LO SV as well as the resummed results matched at NNLO+NNLL accuracy. Finally we conclude in sec. (\[sec:conclusion\]). Theoretical Framework {#sec:theory} ===================== The Model {#sec:model} --------- The RS background is a warped metric and can be parametrized [@Davoudiasl:1999jd] as $$\begin{aligned} ds^{2} = e^{-2\kappa r_{c} |\phi|} \eta_{\mu\nu} dx^{\mu}dx^{\nu} - r_{c}^{2}d\phi^{2} \,,\end{aligned}$$ where $\eta_{\mu\nu}$ is the flat Minkowski metric and $\phi$ is the extra-dimensional coordinate with periodicity $0\leq \phi \leq \pi$, compactified on a $S^1/\mathbb{Z}_2$ orbifold with radius $r_c$. The curvature of the $AdS_5$ space-time is denoted by $\kappa$. In the RS model, there are two 3-branes located at the two orbifold fixed points $\phi = 0$ and $\phi=\pi$ of the fifth-dimensional coordinate, known as the ‘Planck brane’ and the ‘TeV brane’ respectively. All the SM particles are confined to the TeV brane and only gravity is allowed to propagate into the fifth dimension. With this set-up, the hierarchy between the electroweak scale and the Planck scale is naturally explained if the compactification radius ($r_c$) and the $AdS$ curvature ($\kappa$) satisfy the condition $\kappa r_c \sim {\cal O}(10)$ [@Goldberger:1999un; @Goldberger:1999uk]. The massive KK modes will therefore produce observable effects in LHC processes in the TeV range. 
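To see the warping mechanism numerically, a back-of-the-envelope sketch follows; the value of $\kappa r_c$ below is illustrative only, chosen to show how $\kappa r_c \sim {\cal O}(10)$ bridges the hierarchy:

```python
import math

# The warped metric suppresses mass scales on the TeV brane by the factor
# exp(-pi * k * r_c).  The value k*r_c ~ 11.27 below is an illustrative
# assumption, chosen so that a Planck-scale input (~1.2e19 GeV) is warped
# down to the TeV range.
k_rc = 11.27
warp = math.exp(-math.pi * k_rc)
M_planck = 1.2e19  # GeV (the reduced Planck mass differs by a factor)
print(M_planck * warp)  # a scale of order a few TeV
```

The exponential sensitivity is the point: an ${\cal O}(10)$ value of $\kappa r_c$ generates the $\sim 10^{16}$ hierarchy without fine-tuned input parameters.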
These massive KK states ($Y_{\mu\nu}^{(n)}$) interact with the SM fields through the stress-energy tensor ($T^{\mu\nu}$), and the interaction Lagrangian is given [@Han:1998sg; @Giudice:1998ck] as, $$\begin{aligned} \mathcal{L}_{RS} = - \frac{1}{\overline{M}_{Pl}} T^{\mu\nu} (x)Y^{(0)}_{\mu\nu}(x) - \frac{\overline{c}_{0}}{m_{0}} T^{\mu\nu} (x)\sum^{\infty}_{n=1} Y^{(n)}_{\mu\nu}(x) \,.\end{aligned}$$ The interaction with the zeroth KK mode ($Y_{\mu\nu}^{(0)}$) is suppressed by the reduced Planck mass ($\overline{M}_{Pl}$) and can thus be neglected for practical purposes. The higher KK modes, however, couple with strength $\frac{\overline{c}_{0}}{m_{0}}$, where $\overline{c}_{0}=\frac{\kappa}{\overline{M}_{Pl}}$ and $m_{0}=\kappa e^{-\kappa r_{c} \pi}$. The masses of the KK modes are given by $M_n = x_n\,\kappa\,e^{-\pi\kappa r_c}$, with $x_n$ being the zeros of the Bessel function $J_1(x)$. The effective graviton propagator is found after summing over all the massive KK modes except the zeroth one, and it takes the form [@Davoudiasl:2000wi; @Kumar:2009nn], $$\begin{aligned} D_{eff}(s_{ij}) &= \sum_{n=1}^{\infty} \frac{1}{s_{ij} - M_n^2 + i \Gamma_n M_n} \nonumber \\ &= \frac{1}{m_{0}^{2}}\sum_{n=1}^{\infty} \frac{\left(x^{2}-x_{n}^{2}\right) -i x_{n}\frac{\Gamma_{n}}{m_{0}}}{\left(x^{2}-x_{n}^{2}\right)^{2} +x_{n}^{2}\left(\frac{\Gamma_{n}}{m_{0}}\right)^{2}} \,,\end{aligned}$$ where $s_{ij}=(p_i+p_j)^2$, $x=\sqrt{s_{ij}}/m_0$, and $\Gamma_n$ denotes the width of the resonance with mass $M_n$ (see [@Han:1998sg; @KumarRai:2003kk]). In the RS model, the individual KK resonances are well separated and can be probed, for example, in the invariant mass distribution of lepton pairs in DY production. 
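A minimal numerical sketch of the truncated KK-mode sum in the effective propagator follows; the values chosen for $m_0$ and for a common width-to-mass ratio are illustrative assumptions ($\Gamma_n$ in general depends on $n$):

```python
import numpy as np
from scipy.special import jn_zeros

def D_eff(sqrt_s, m0=1500.0, n_kk=50, gamma_over_m0=0.01):
    """Effective RS propagator summed over the first n_kk KK modes.

    m0 (in GeV) and the common Gamma_n/m0 ratio are illustrative inputs,
    not values fixed by the text.
    """
    x = sqrt_s / m0
    xn = jn_zeros(1, n_kk)        # zeros of J_1: KK masses are M_n = x_n * m0
    g = gamma_over_m0
    num = (x**2 - xn**2) - 1j * xn * g
    den = (x**2 - xn**2) ** 2 + xn**2 * g**2
    return np.sum(num / den) / m0**2

# On the first KK resonance (sqrt_s = x_1 * m0) the propagator is
# width-dominated, so its magnitude far exceeds the off-resonance value.
x1 = jn_zeros(1, 1)[0]
on_peak = abs(D_eff(x1 * 1500.0))
off_peak = abs(D_eff(0.5 * x1 * 1500.0))
assert on_peak > 10 * off_peak
```

Scanning `sqrt_s` through successive $x_n m_0$ values reproduces the well-separated resonance structure that distinguishes the RS signal from the ADD continuum.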
Drell-Yan invariant mass distribution {#sec:dyinv} ------------------------------------- The invariant mass distribution for DY production at the hadron collider is given by, $$\begin{aligned} \label{eq:dylhc} 2 S~{d \sigma^{} \over d Q^2}\left(\tau,Q^2\right) &=\sum_{ab={q,\overline q,g}} \int_0^1 dx_1 \int_0^1 dx_2~ \int_0^1 dz \,\, \delta(\tau-z x_1 x_2) \nonumber\\[2ex] & \times {\cal L}_{ab}(x_1,x_2,\mu_f^2) \sum_{I} \Delta^{I}_{ab}(z,Q^2,\mu_f^2) \,.\end{aligned}$$ Here $S$ and $\hat{s}$ denote the squared centre-of-mass energies of the hadronic and partonic frames respectively. The mass-factorized partonic coefficient function $\Delta^{I}_{ab}$ is convoluted with the parton luminosity ${\cal L}_{ab}$, built from the parton distribution functions $f_a^{P_1}(x_1,\mu_f^2)$ and $f_b^{P_2}(x_2,\mu_f^2)$ of the two incoming protons. The summation over $I$ accounts for the SM and RS contributions. The hadronic and partonic threshold variables $\tau$ and $z$ are defined as $$\begin{aligned} \tau=\frac{Q^2}{S}, \qquad z= \frac{Q^2}{\hat{s}} \,.\end{aligned}$$ They are thus related by $\tau = x_1 x_2 z$. To all orders in the strong coupling, the partonic cross-section in [eq. (\[eq:dylhc\])]{} can be decomposed as the sum of a soft-virtual (SV) piece and a regular piece (up to the normalization by the Born contribution), $$\begin{aligned} \Delta^{I}_{ab} \equiv \sum_n \Delta^{I,(n)}_{ab} = {\cal F}^{(0)}_I \Big(\Delta^{({\rm sv},I)}_{ab} + \Delta^{({\rm reg},I)}_{ab} \Big) \,.\end{aligned}$$ At each order of the strong coupling, the SV terms contain all the leading singular terms, consisting of ‘plus-distributions’ $\Big[ \frac{\ln^i (1-z)}{(1-z)}\Big]_+$ and the delta function $\delta(1-z)$. The regular piece, on the other hand, is finite as $z\to 1$. 
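The convolution in eq. (\[eq:dylhc\]) can be sketched numerically: integrating out the delta function sets $z=\tau/(x_1 x_2)$ and leaves a double integral over the momentum fractions. The inputs below are placeholders, not real PDFs or coefficient functions:

```python
from math import log
from scipy.integrate import dblquad

def hadronic_xsec(tau, lumi, delta):
    """Convolution of eq. (eq:dylhc): the delta function fixes
    z = tau/(x1*x2) and contributes a Jacobian factor 1/(x1*x2)."""
    integrand = lambda x2, x1: lumi(x1, x2) * delta(tau / (x1 * x2)) / (x1 * x2)
    # x1 runs over [tau, 1]; for each x1, x2 runs over [tau/x1, 1].
    val, _ = dblquad(integrand, tau, 1.0, lambda x1: tau / x1, lambda x1: 1.0)
    return val

# Cross-check with trivial inputs (lumi = delta = 1), where the integral
# has the closed form (1/2) ln^2(1/tau).
tau = 0.01
assert abs(hadronic_xsec(tau, lambda a, b: 1.0, lambda z: 1.0)
           - 0.5 * log(1 / tau) ** 2) < 1e-6
```

Summing such calls over the channels $ab$ with the appropriate ${\cal L}_{ab}$ and $\Delta^I_{ab}$ would give $2S\,d\sigma/dQ^2$ up to the prefactors discussed next.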
The pre-factor ${\cal F}^{(0)}$ takes the following forms for the neutral vector bosons in the SM and for the spin-2 (RS) boson respectively, $$\begin{aligned} {\cal F}_{\rm SM}^{(0)} &= {4 \alpha^2 \over 3 Q^2} \Bigg[Q_q^2 - {2 Q^2 (Q^2-M_Z^2) \over \left((Q^2-M_Z^2)^2 + M_Z^2 \Gamma_Z^2\right) c_w^2 s_w^2} Q_q g_e^V g_q^V {\nonumber}\\ &+ {Q^4 \over \left((Q^2-M_Z^2)^2+M_Z^2 \Gamma_Z^2\right) c_w^4 s_w^4}\Big((g_e^V)^2 + (g_e^A)^2\Big)\Big((g_q^V)^2+(g_q^A)^2\Big) \Bigg]\,, {\nonumber}\\ {\cal F}^{(0)}_{\rm RS} &= \frac{2Q^2}{\Lambda_\pi^2}\,,\end{aligned}$$ where $\alpha$ is the fine structure constant, $Q$ is the invariant mass of the lepton pair, $M_Z$ and $\Gamma_Z$ are the mass and the decay width of the $Z$-boson, and $c_w,s_w$ are the cosine and sine of the Weinberg angle respectively. The vector and axial-vector parts of the weak boson couplings are given as, $$\begin{aligned} g_a^A = -\frac{1}{2} T_a^3 \,, \qquad g_a^V = \frac{1}{2} T_a^3 - s_w^2 Q_a \,,\end{aligned}$$ $Q_a$ being the electric charge and $T_a^3$ the weak isospin of the electron or quarks. Note that the SM contribution consists of contributions from $\gamma$ and $Z$ as well as their interference. For the invariant mass distribution, however, the spin-2 production decouples from the SM one [@Mathews:2004xp] and thus there is no interference between them. The complete SM contribution to the DY invariant mass distribution is known up to second order in the strong coupling [@Hamberg:1990np; @Altarelli:1978id; @Matsuura:1987wt; @Matsuura:1988sm]. 
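For orientation, the SM prefactor above can be evaluated numerically; the electroweak inputs below are illustrative assumed values, not parameters quoted in this article:

```python
# Illustrative electroweak inputs (assumed values, not from this paper).
ALPHA, MZ, GZ, SW2 = 1 / 132.5, 91.1876, 2.4952, 0.2312

def couplings(T3, Q):
    """g^V and g^A as defined above: gV = T3/2 - sw^2*Q, gA = -T3/2."""
    return 0.5 * T3 - SW2 * Q, -0.5 * T3

def F0_SM(Q2, Qq=2 / 3, T3q=0.5):
    """Born prefactor F^(0)_SM for a single quark flavour (up-type by
    default), following the gamma/Z/interference structure quoted above."""
    cw2, sw2 = 1 - SW2, SW2
    geV, geA = couplings(-0.5, -1.0)   # electron: T3 = -1/2, Q = -1
    gqV, gqA = couplings(T3q, Qq)
    dz = (Q2 - MZ**2) ** 2 + MZ**2 * GZ**2
    res = Qq**2
    res -= 2 * Q2 * (Q2 - MZ**2) / (dz * cw2 * sw2) * Qq * geV * gqV
    res += Q2**2 / (dz * cw2**2 * sw2**2) * (geV**2 + geA**2) * (gqV**2 + gqA**2)
    return 4 * ALPHA**2 / (3 * Q2) * res

# The Z pole dominates: the prefactor is far larger at Q = M_Z than
# well above the resonance.
assert F0_SM(MZ**2) > 50 * F0_SM((MZ + 40) ** 2)
```

Away from the pole the pure-photon term $Q_q^2$ takes over, which is the continuum on top of which the RS resonances of ${\cal F}^{(0)}_{\rm RS}$ would sit.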
Up to two loops the contribution from RS spin-2 can be written as, $$\begin{aligned} 2 S{\frac{d \sigma^{}_{RS}}{dQ^2}}(\tau,Q^2)&= \sum_{q,\bar q,g}{\cal F}^{(0)}_{RS} \int_0^1 {d x_1 } \int_0^1 {dx_2} \int_0^1 dz~ \delta(\tau-z x_1 x_2) \nonumber\\ \times &\Bigg[ {\cal L}_{q{\bar q}} \sum\limits_{n=0}^{2} a_{s}^{n} \Delta^{RS, (n)}_{q{\bar q}} + {\cal L}_{g g} \sum\limits_{n=0}^{2} a_{s}^{n} \Delta^{RS, (n)}_{gg} {\nonumber}\\& + \Big( {\cal L}_{gq} + {\cal L}_{qg} \Big) \sum\limits_{n=1}^{2} a_{s}^{n} \Delta^{RS, (n)}_{gq} \nonumber\\&+ {\cal L}_{q q} \sum\limits_{n=2}^{2} a_{s}^{n} \Delta^{RS, (n)}_{qq} + {\cal L}_{q_{1} q_{2}} \sum\limits_{n=2}^{2} a_{s}^{n} \Delta^{RS, (n)}_{q_{1}q_{2}} \Bigg]\,,\end{aligned}$$ with $$\begin{aligned} \label{eq:32} {\cal L}_{q \bar q}(x_1,x_2,\mu_f^2)&= f_q^{P_1}(x_1,\mu_f^2) f_{\bar q}^{P_2}(x_2,\mu_f^2) +f_{\bar q}^{P_1}(x_1,\mu_f^2)~ f_q^{P_2}(x_2,\mu_f^2)\,, \nonumber\\ {\cal L}_{q q}(x_1,x_2,\mu_f^2)&= f_q^{P_1}(x_1,\mu_f^2) f_{q}^{P_2}(x_2,\mu_f^2) +f_{\bar q}^{P_1}(x_1,\mu_f^2)~ f_{\bar q}^{P_2}(x_2,\mu_f^2)\,, \nonumber\\ {\cal L}_{q_1 q_2}(x_1,x_2,\mu_f^2)&= f_{q_1}^{P_1}(x_1,\mu_f^2) \Big( f_{q_2}^{P_2}(x_2,\mu_f^2) + f_{\bar q_2}^{P_2}(x_2,\mu_f^2) \Big) \nonumber\\&+f_{\bar q_1}^{P_1}(x_1,\mu_f^2)~ \Big( f_{q_2}^{P_2}(x_2,\mu_f^2) + f_{\bar q_2}^{P_2}(x_2,\mu_f^2) \Big)\,, \nonumber\\ {\cal L}_{g q}(x_1,x_2,\mu_f^2)&= f_g^{P_1}(x_1,\mu_f^2) \Big(f_q^{P_2}(x_2,\mu_f^2) +f_{\bar q}^{P_2}(x_2,\mu_f^2)\Big)\,, \nonumber\\ {\cal L}_{q g}(x_1,x_2,\mu_f^2)&= {\cal L}_{g q}(x_2,x_1,\mu_f^2)\,, \nonumber\\ {\cal L}_{g g}(x_1,x_2,\mu_f^2)&= f_g^{P_1}(x_1,\mu_f^2)~ f_g^{P_2}(x_2,\mu_f^2)\,.\end{aligned}$$ Notice that the computation of the partonic coefficients at second order requires the evaluation of the matrix elements as well as the phase-space integrals for the di-lepton pair. 
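The luminosity combinations of eq. (\[eq:32\]) can be sketched with toy parton densities (the PDF shapes below are purely illustrative), which also makes their built-in symmetries explicit:

```python
# Toy parton densities standing in for f_a^P(x, mu_f^2); the shapes are
# purely illustrative and carry no physics content.
pdf = {
    "q":    lambda x: x ** -0.5 * (1 - x) ** 3,
    "qbar": lambda x: x ** -0.5 * (1 - x) ** 5,
    "g":    lambda x: x ** -1.0 * (1 - x) ** 5,
}

def L_qqbar(x1, x2):
    return pdf["q"](x1) * pdf["qbar"](x2) + pdf["qbar"](x1) * pdf["q"](x2)

def L_gq(x1, x2):
    return pdf["g"](x1) * (pdf["q"](x2) + pdf["qbar"](x2))

def L_qg(x1, x2):
    return L_gq(x2, x1)

def L_gg(x1, x2):
    return pdf["g"](x1) * pdf["g"](x2)

# The symmetries built into eq. (32) for identical beams: L_qqbar and
# L_gg are symmetric under x1 <-> x2, while L_qg mirrors L_gq.
x1, x2 = 0.1, 0.4
assert abs(L_qqbar(x1, x2) - L_qqbar(x2, x1)) < 1e-15
assert abs(L_qg(x1, x2) - L_gq(x2, x1)) < 1e-15
```

With both protons described by the same densities ($P_1 = P_2$), the $x_1\leftrightarrow x_2$ symmetry of the flavor-diagonal luminosities is exact.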
Using the method of reverse unitarity [@Anastasiou:2002yz], in which the phase space integrals are converted to loop integrals, the latter has been performed very recently for generic spin-2 production [@Ahmed:2016qhu]. The advantage is that one can then reuse all the techniques developed for multi-loop computations. The analytical result obtained in this way holds for any spin-2 production with universal coupling to the SM. We use these coefficients to predict the complete NNLO cross-section for the RS model. Soft-Virtual cross-section {#sec:svxsect} -------------------------- It is important to consider corrections beyond NNLO in order to obtain perturbative stability. The first step, as discussed earlier, is to calculate the third order soft-virtual cross-section. This can be achieved by calculating the spin-2 form factor at three loops [@Ahmed:2015qia] as well as the soft function at the same order. The soft function, being maximally non-abelian up to three loops, can be extracted from the known Higgs [@Anastasiou:2014vaa; @Li:2014afw] and DY [@Ahmed:2014cla; @Catani:2014uta] results. Using this information, the third order coefficients for the SV corrections have been obtained [@Das:2019bxi] for generic spin-2 coupling and applied to the ADD model to predict the DY distribution to N$^3$LO. The analytical coefficients can also be used to predict the SV cross-section for RS graviton production at the same perturbative order. The SV cross-section at each order consists of plus-distributions and a delta function, which are the most singular terms in the partonic coefficient function. One can write the SV coefficient as a perturbative expansion in the strong coupling, $$\begin{aligned} \Delta_{ab}^{{(\rm sv},I)} &= \sum_{n=0}^{\infty} a_s^n \Delta_{ab}^{(n),I}\,.\end{aligned}$$ The SV coefficients arise only in the flavor diagonal channels, i.e. with either a $q\bar{q}$ or a $gg$ Born process. 
At any order $n$ in the strong coupling, the SV coefficients have the following structure in terms of plus-distributions and a delta function, $$\begin{aligned} \Delta_{ab}^{(n),I} &= c_{n,\delta}^I ~\delta(1-z) + \sum_{i=0}^{2n-1} c_{n,i}^I ~ \bigg[\frac{\ln^i(1-z)}{(1-z)} \bigg]_+ \,.\end{aligned}$$ In the SM, only $q\bar{q}$ contributes, whereas for the RS scenario both the $q\bar{q}$ and $gg$ channels contribute to the SV coefficient. Here we present only the leading order term ($n=0$) in this series, which fixes the overall normalization of the coefficients, $$\begin{aligned} \Delta_{\rm q\bar{q}}^{(0),SM} &= \frac{2\pi}{n_c} \delta(1-z)\,, {\nonumber}\\ \Delta_{\rm q\bar{q}}^{(0),RS}&= \frac{\pi}{8 n_c} \delta(1-z)\,, {\nonumber}\\ \Delta_{\rm gg}^{(0),RS} &= \frac{\pi}{2 (n_c^2-1)} \delta(1-z)\,.\end{aligned}$$ Up to two loops, these coefficients have been known for quite some time and can be found in [@deFlorian:2013wpa]. Recently we calculated the three-loop pieces [@Das:2019bxi] for generic spin-2 couplings. In this article we use these three-loop coefficients for the third order phenomenological predictions in the RS model. Threshold resummation {#sec:resum} --------------------- The NNLO cross-section can be improved with the contribution of threshold logarithms at all orders. In particular, in the partonic limit $z\to 1$ the contribution from these singular terms becomes large and unreliable and thus needs to be resummed to all orders. The resummation is usually performed in conjugate space, where all the convolutions become simple products and are therefore easy to calculate. We follow the standard approach and evaluate the resummed coefficients in Mellin-$N$ space. The threshold limit there translates into ${{{\,\overline{\!{N}}}}}\to\infty$ with ${{{\,\overline{\!{N}}}}}= N\exp(\gamma_E^{})$, where $N$ is the Mellin conjugate to $z$ and $\gamma_E^{}$ is the Euler-Mascheroni constant. 
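The plus-distributions appearing above are defined by their action on a smooth test function, $\int_0^1 dz\, [g(z)]_+ f(z) = \int_0^1 dz\, g(z)\,[f(z)-f(1)]$, which renders the $z\to1$ singularity integrable. A small numerical sketch (ours, using a simple midpoint rule) checks two textbook values for the test function $f(z)=z$: $\int_0^1 dz\,[1/(1-z)]_+\, z = -1$ and $\int_0^1 dz\,[\ln(1-z)/(1-z)]_+\, z = 1$.

```python
# Midpoint-rule evaluation of a plus-distribution against a test function f:
#   int_0^1 dz [g(z)]_+ f(z)  =  int_0^1 dz g(z) * (f(z) - f(1))
import math

def plus_integral(g, f, n=200_000):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * h          # midpoint of each cell avoids z = 1 exactly
        total += g(z) * (f(z) - f(1.0)) * h
    return total

f = lambda z: z
I0 = plus_integral(lambda z: 1.0 / (1.0 - z), f)                 # exact: -1
I1 = plus_integral(lambda z: math.log(1.0 - z) / (1.0 - z), f)   # exact: +1
print(I0, I1)
```

The subtraction of $f(1)$ is precisely what distinguishes $[1/(1-z)]_+$ from the divergent $1/(1-z)$, and it is why the SV coefficients yield finite hadronic cross-sections despite their singular appearance.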
Up to the Born factor, the partonic resummed cross-section can be organized as [@Catani:1996yz; @Catani:1989ne; @Moch:2005ba], $$\begin{aligned} \label{eq:resum-parton} (d\hat{\sigma}_N/dQ)/(d\hat{\sigma}_{\rm LO}/dQ) = g_0^I \exp \Big( G_{{{{\,\overline{\!{N}}}}}}^I \Big) \,.\end{aligned}$$ The normalization $(d\hat{\sigma}_{\rm LO}/dQ)$ is given by, $$\begin{aligned} (d\hat{\sigma}_{\rm LO}/dQ) &= {\cal F}^{(0)}_{\rm SM} \frac{Q}{S} \bigg\{ \frac{2\pi}{n_c}\bigg\} \, ~ \qquad\qquad\qquad~ \text{for SM,} {\nonumber}\\ &= {\cal F}^{(0)}_{\rm RS} \frac{Q}{S} \bigg\{ \frac{\pi}{8 n_c}, \frac{\pi}{2(n_c^2-1)}\bigg\} \qquad \text{for} ~~ \{q\bar{q},gg\}~ \text{in RS.}\end{aligned}$$ The exponent admits the following representation in Mellin space in terms of the universal cusp anomalous dimensions $A^I$ [@Moch:2004pa; @Lee:2016ixa; @Moch:2017uml; @Grozin:2018vdn; @Henn:2019rmi; @Bruser:2019auj; @Davies:2016jie; @Lee:2017mip; @Gracey:1994nn; @Beneke:1995pq; @Moch:2017uml; @Moch:2018wjh; @Henn:2019rmi; @Lee:2019zop; @Henn:2019swt] and the constants $D^I$ [@Catani:2003zt; @Moch:2005ba; @Das:2019bxi], $$\begin{aligned} G_{{{{\,\overline{\!{N}}}}}}^I = \int_0^1 dz \frac{z^{N-1} - 1}{1 - z} \bigg[ \int_{{\mu_f}^2}^{Q^2(1-z)^2} \frac{d\mu^2}{\mu^2} 2~A^I(a_s(\mu^2)) + D^I(a_s(Q^2(1-z)^2))\bigg] \,.\end{aligned}$$ Performing the integration, one can organize it as a resummed series by realizing that $\ln {{{\,\overline{\!{N}}}}}\sim 1/a_s$, $$\begin{aligned} \label{eq:resum-exponent} G_{{{{\,\overline{\!{N}}}}}}^I = \ln {{{\,\overline{\!{N}}}}}~g_1^I({\omega}) + g_2^I({\omega}) + a_s ~g_3^I({\omega}) + a_s^2~ g_4^I({\omega}) + \cdots \,,\end{aligned}$$ where ${\omega}= 2 \beta_0 a_s \ln {{{\,\overline{\!{N}}}}}$. The expressions for the $g^I_i$ required up to N$^3$LL resummation can be found in [@H.:2020ecd; @Das:2019btv; @H:2019dcl]. 
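To give a feel for the size of the resummation variable, the following sketch evaluates ${\omega}=2\beta_0 a_s \ln\bar{N}$ for representative values. We assume the one-loop coefficient $\beta_0=(11C_A-2n_f)/3$ in the convention matching $a_s=\alpha_s/(4\pi)$ (defined in the numerical section); the input numbers are illustrative only.

```python
import math

CA, nf = 3.0, 5.0
# One-loop beta coefficient in the a_s = alpha_s/(4 pi) convention (our assumption):
beta0 = (11.0 * CA - 2.0 * nf) / 3.0   # = 23/3 for nf = 5
GAMMA_E = 0.5772156649015329           # Euler-Mascheroni constant

def omega(alpha_s, N):
    """omega = 2 beta0 a_s ln(Nbar), with Nbar = N exp(gamma_E)."""
    a_s = alpha_s / (4.0 * math.pi)
    Nbar = N * math.exp(GAMMA_E)
    return 2.0 * beta0 * a_s * math.log(Nbar)

w = omega(0.118, 10.0)   # moderate Mellin moment, alpha_s at the Z pole
print(w)
```

Even for moderate moments $\omega$ is an ${\cal O}(0.4)$ quantity rather than ${\cal O}(a_s)$, which is why the tower $\ln\bar{N}\,g_1(\omega) + g_2(\omega) + \cdots$ must be kept as functions of $\omega$ instead of being truncated order by order in $a_s$.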
The process dependent coefficient $g_0^I$ can be extracted from the SV results in Mellin-$N$ space and written as a perturbative series in the strong coupling, $$\begin{aligned} \label{eq:g0b} g_0^I = 1 + \sum_{n=1}^{\infty} a_s^n g_{0n}^I \,.\end{aligned}$$ We have extracted these coefficients up to the third order from the known SV coefficients at the same order for a generic spin-2 coupling; they can be found in [@Das:2019bxi]. The first term in [eq. (\[eq:resum-exponent\])]{}, together with the first term in [eq. (\[eq:g0b\])]{}, defines the leading logarithmic (LL) accuracy. Similarly, the first two terms in [eq. (\[eq:resum-exponent\])]{} along with the terms up to ${\cal O}(a_s)$ in [eq. (\[eq:g0b\])]{} define the next-to-leading logarithmic (NLL) accuracy, and so on. Note that the expansion in [eq. (\[eq:resum-exponent\])]{} differs from that of [@Moch:2005ba; @Catani:2003zt], where the series is organized in terms of $N$ instead of ${{{\,\overline{\!{N}}}}}$. Both are physically equivalent in the large-$N$ limit; numerically, however, the ${{{\,\overline{\!{N}}}}}$ exponentiation has been seen to show better perturbative convergence for DIS [@Das:2019btv] as well as for DY [@H.:2020ecd]. The partonic resummed cross-section has to be Mellin-inverted with suitable $N$-space PDFs and finally matched with the known fixed order results. The general expression for the matched cross-section can be written as follows: $$\begin{aligned} \label{eq:matched} \bigg[\frac{d\sigma}{dQ}\bigg]_{N^nLO+N^nLL} &= \bigg[\frac{d\sigma}{dQ}\bigg]_{N^nLO} + \sum_{ab\in\{q,\bar{q},g\}} \frac{d\hat{\sigma}_{LO}}{dQ} \int_{c-i\infty}^{c+i\infty} \frac{dN}{2\pi i} (\tau)^{-N} {\nonumber}\\ & \delta_{ab}f_{a,N}({\mu_f}^2) f_{b,N}({\mu_f}^2) \times \bigg( \bigg[ \frac{d\hat{\sigma}_N}{dQ} \bigg]_{N^nLL} - \bigg[ \frac{d\hat{\sigma}_N}{dQ} \bigg]_{tr} \bigg) \,.\end{aligned}$$ The first term in the above [eq. (\[eq:matched\])]{} represents the fixed order result known to N$^n$LO. 
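The inverse Mellin transform in eq. (\[eq:matched\]) can be carried out by straightforward quadrature along the vertical contour $N=c+it$. The sketch below (our own toy example, not the code used for the results in this paper) inverts the analytically known transform of $f(x)=1-x$, whose Mellin moments are $f_N = 1/[N(N+1)]$, using a contour at $c = 1.9$ as quoted in the text.

```python
# Numerical inverse Mellin transform along N = c + i t:
#   f(x) = (1 / 2 pi) Int_{-inf}^{+inf} dt  x^{-(c + i t)} f_N(c + i t)
import cmath, math

def f_moments(N):
    # Mellin transform of f(x) = 1 - x:  Int_0^1 x^{N-1} (1 - x) dx
    return 1.0 / (N * (N + 1.0))

def inverse_mellin(x, c=1.9, t_max=1000.0, dt=0.01):
    n_steps = int(2.0 * t_max / dt)
    total = 0.0
    for k in range(n_steps + 1):
        t = -t_max + k * dt
        N = complex(c, t)
        # x^{-N} = exp(-N ln x); take the real part of the integrand
        total += (cmath.exp(-N * math.log(x)) * f_moments(N)).real * dt
    return total / (2.0 * math.pi)

approx = inverse_mellin(0.5)   # should reproduce f(0.5) = 0.5
print(approx)
```

For physical PDFs the moments have a Landau singularity on the real axis, which is why the minimal prescription mentioned below places the contour to its left; this toy transform has no such singularity, so a plain truncated contour suffices.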
The last term in the bracket represents the resummed result truncated to N$^n$LO accuracy, which removes all double counting of the singular terms already present in the fixed order. $f_{i,N}({\mu_f}^2)$ are the Mellin space PDFs, which can be evolved using the publicly available code QCD-P[egasus]{} [@Vogt:2004ns]; however, we followed the prescription provided in [@Catani:1996yz] relating the $N$-space PDFs to derivatives of the $x$-space PDFs, for simplicity and for the flexibility of using the [lhapdf]{} [@Buckley:2014ana] libraries. To avoid the Landau pole in the Mellin-inversion integration, we have followed the minimal prescription [@Catani:1996yz] and chosen the contour accordingly. All the necessary analytical ingredients are now available to perform the numerical study, which we report in the next section. In our work, all the algebraic computations have been done with the latest version of the symbolic manipulation system [Form]{} [@Vermaseren:2000nd; @Ruijl:2017dtg]. Numerical Results {#sec:numerics} ================= In this section we present our numerical results for the di-lepton production cross section in the RS model at the LHC. The LO, NLO and NNLO parton level cross sections are convolved order by order with the corresponding parton distribution functions (PDFs) taken from [lhapdf]{} [@Buckley:2014ana]. The corresponding strong coupling constant $a_s(\mu_r^2) = \alpha_s(\mu_r^2)/(4\pi)$ is also provided by [lhapdf]{}. The fine structure constant and the weak mixing angle are chosen to be $\alpha_{\rm em} = 1/128$ and $\sin^2\theta_w = 0.227$, respectively. The results are presented for $n_f = 5$ massless quark flavors. The default choice for the center of mass energy of the protons is 13 TeV and the default PDF set is MMHT2014 [@Harland-Lang:2014zoa]. Except for the scale variations, we have set the factorization ($\mu_f$) and renormalisation ($\mu_r$) scales to the invariant mass of the di-lepton, i.e. 
$\mu_f = \mu_r = Q$. We also note that there have been several experimental searches at the LHC for warped extra dimensions in the past, yielding stringent bounds on the RS model parameters: the mass of the first resonance mode ($M_1$) and the coupling strength ($\bar{c}_{0}$) [@Khachatryan:2016zqb; @Aaboud:2017yyg]. Such analyses have already used the K-factors computed in the extra dimension models. In this work, for our phenomenological study assessing the impact of the QCD corrections, we choose $M_1 = 1.5$ TeV and $\bar{c}_{0} = 0.05$. The computational details of the QCD corrections presented here are model independent, and a numerical estimate of the theory predictions for any other choice of the model parameters is straightforward. For completeness, we also study the dependence of the invariant mass distributions on the model parameters, considering the recent bounds on $M_1$ for different $\bar{c}_{0}$ values. Fixed order corrections {#sec:fo} ----------------------- [![Different subprocess contributions for the RS model at NNLO QCD right at the resonance for different $M_1$ values keeping $\bar{c}_{0}$ fixed at $0.05$.[]{data-label="subprocess_contribution"}](plots/nnlo/gr_nnlo_subprocess.pdf "fig:"){width="55.00000%" height="48.00000%"}]{} First we present in [fig. (\[subprocess\_contribution\])]{} the contributions from the different subprocesses for the pure RS graviton (GR) at NNLO, right at the resonance, varying the first resonance mass $M_1$ and keeping $\bar{c}_{0} = 0.05$. At this order in QCD, six different subprocesses contribute in the GR case: $q\bar{q}$, $gg$, $qg$, $qq$, $q_1q_2$ and $q_1\bar{q}_2$. The dominant contribution comes from the $gg$-subprocess, and it remains dominant for resonance masses as large as $4.5$ TeV. The next largest contribution comes from the $qg$-subprocess, but it is negative over this entire mass range. 
This is followed by the quark initiated processes, with $q\bar{q}$ being the largest in this category. For a typical choice of first resonance mass $M_1 = 2500$ GeV, we find that the total cross-section is $0.63 \times 10^{-5}$ pb, of which the dominant $gg$ subprocess alone amounts to $151\%$. The $q\bar{q}$, $qq$, $q_1q_2$ and $q_1\bar{q}_2$ channels contribute a further $24.7\%$, $2.7\%$, $2.2\%$ and $0.7\%$ of the total cross-section, respectively. As stated earlier, only the $qg$ channel contributes negatively, by about $-82\%$ of the total cross-section. [![Di-lepton invariant mass distribution up to NNLO QCD for pure RS model (left panel) and for the signal (right panel).[]{data-label="nnlo_inv_distribution"}](plots/nnlo/gr_inv_nnlo.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} 0.05cm [![Di-lepton invariant mass distribution up to NNLO QCD for pure RS model (left panel) and for the signal (right panel).[]{data-label="nnlo_inv_distribution"}](plots/nnlo/signal_inv_nnlo.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} Next, we present in [fig. (\[nnlo\_inv\_distribution\])]{} the di-lepton invariant mass distribution ($d\sigma/dQ$) as a function of the invariant mass $Q$ for GR and for the signal (SM+GR). The width of the resonance depends on $\bar{c}_{0}$, and near the resonance region the signal receives most of its contribution from the pure RS graviton. Far away from the resonance region, the RS contribution is found to be comparable to the SM background for $Q > 3500$ GeV. 
[![The K-factors up to NNLO in QCD for RS model (left panel) and for the signal (right panel).[]{data-label="nnlo_k_distribution"}](plots/nnlo/gr_k_nnlo.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} 0.05cm [![The K-factors up to NNLO in QCD for RS model (left panel) and for the signal (right panel).[]{data-label="nnlo_k_distribution"}](plots/nnlo/signal_k_nnlo.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} $$\begin{aligned} \label{eqk} \text{K}_\text{NLO} = \frac{ d\sigma^{\rm NLO}/dQ} {d\sigma^{\rm LO}/dQ } \,,~~ \text{K}_\text{NNLO} = \frac{ d\sigma^{\rm NNLO}/dQ} {d\sigma^{\rm LO}/dQ } \,.\end{aligned}$$ In [fig. (\[nnlo\_k\_distribution\])]{} we present the K-factors, defined in [eq. (\[eqk\])]{}, for both the GR and the signal cases. In an earlier work we presented third order SV and resummed results for di-lepton production at the LHC in the ADD model [@Das:2019bxi]. We note that the same virtual graviton exchange process contributes in both the ADD and the RS model. The leading order processes are similar and the QCD corrections are model independent. However, the two models differ in the summation over the tower of KK gravitons and in the overall warped factor. Consequently, the relative weight of the graviton contribution in the two models differs across invariant mass regions, resulting in different mass-dependent K-factors in the ADD and RS models. The NLO corrections for the pure RS case at $Q=1000$ GeV are found to contribute about 57% of LO, while the NNLO corrections add a further $18\%$ of LO to the invariant mass distribution. In [tab. (\[table1\])]{} we present the signal K-factors up to NNLO QCD for different $Q$ values. For the signal case, the NLO corrections at $Q=1000$ GeV contribute about 34% of LO and the NNLO corrections add a further $6\%$ of LO to the invariant mass distribution. 
However, right at the resonance, the NNLO corrections are found to enhance the production cross section by an additional 20% of the LO result. This shows that NNLO corrections are indeed essential in order to make reliable predictions for this process. [![Invariant mass distribution of di-lepton for the SM, RS model and for the signal (left) and their corresponding K-factors (right).[]{data-label="nnlo_model_compare"}](plots/nnlo/model_inv_nnlo.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} 0.05cm [![Invariant mass distribution of di-lepton for the SM, RS model and for the signal (left) and their corresponding K-factors (right).[]{data-label="nnlo_model_compare"}](plots/nnlo/model_k_nnlo.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} In [fig. (\[nnlo\_model\_compare\])]{} we present the di-lepton invariant mass distribution for the SM, GR and signal cases. The behavior of the signal K-factor is governed by the respective coupling constants in the SM and RS as well as by the parton fluxes. As discussed earlier, in the RS case the graviton contribution is significant near the resonance region and therefore the signal K-factor there is controlled by RS. In the off-resonance region at high $Q$, the RS and SM contributions are comparable and hence the signal K-factor receives contributions from both. The behavior of the mass dependent K-factor for the signal in the RS model is therefore very distinct from that in the ADD model [@Das:2019bxi]. 
[![Dependence of the di-lepton invariant mass distribution for the signal on the RS model parameters $\bar{c}_0$ (left) and the first resonance mass $M_1$ (right).[]{data-label="nnlo_model_parameters"}](plots/nnlo/signal-c0-variation_inv_nnlo.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} 0.05cm [![Dependence of the di-lepton invariant mass distribution for the signal on the RS model parameters $\bar{c}_0$ (left) and the first resonance mass $M_1$ (right).[]{data-label="nnlo_model_parameters"}](plots/nnlo/signal-m1-variation_inv_nnlo.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} We also study the dependence of our results on the RS model parameters $M_1$ and $\bar{c}_0$. In the left panel of [fig. (\[nnlo\_model\_parameters\])]{} we present the di-lepton invariant mass distribution, varying $\bar{c}_0$ from $0.03$ to $0.1$ while keeping $M_1$ fixed at $1.5$ TeV. In the right panel we present the results obtained by varying $M_1$ from $1.5$ TeV to $3.0$ TeV for fixed $\bar{c}_0 = 0.05$. The width of the resonance depends on $\bar{c}_0$; however, right at the resonance the dependence of the production cross section on $\bar{c}_0$ cancels, so the height of the peak for any given $M_1$ is independent of $\bar{c}_0$. Consequently, the NNLO K-factors near the resonance region depend on $\bar{c}_0$, but right at the resonance they do not. These signal K-factors right at the resonance are presented in [tab. (\[table2\])]{} for different $M_1$ values. The NNLO corrections increase the K-factors substantially compared to NLO, underlining the importance of the higher order corrections for this process. [![$7$-point scale variation in the signal is shown up to NNLO for the di-lepton invariant mass distribution.[]{data-label="nnlo_scale"}](plots/nnlo/scale_test_fo.pdf "fig:"){width="60.00000%" height="50.00000%"}]{} We have considered different sources of theoretical uncertainty in our analysis. 
First, we considered the uncertainties due to the presence of the two unphysical scales $\mu_r$ and $\mu_f$ in the theory, and then those coming from the non-perturbative parton distribution functions. For the scale uncertainties we vary $\mu_r$ and $\mu_f$ simultaneously from $Q/2$ to $2Q$, with the constraint that the ratio of the unphysical scales stays below $2$: $$\begin{aligned} \Big|\text{ln}\frac{\mu_r}{Q}\Big| \leq \text{ln }2, \quad \Big|\text{ln}\frac{\mu_f}{Q}\Big| \leq \text{ln }2, \quad \Big|\text{ln}\frac{\mu_r}{\mu_f}\Big| \leq \text{ln }2. \label{eqscale}\end{aligned}$$ The last condition in [eq. (\[eqscale\])]{} ensures that no unusual scale combination is considered within the range. This results in $7$ different scale combinations, $\left(\mu_r/Q, \mu_f/Q \right) = (1/2, 1/2), (1/2, 1), (1, 1/2)$, $(1, 1), (2, 1), (1, 2), (2, 2)$. With this choice, we estimate the $7$-point scale uncertainties in the di-lepton invariant mass distribution up to NNLO; the results are depicted in [fig. (\[nnlo\_scale\])]{}. The scale uncertainties are found to be reduced significantly from LO to NNLO over the full invariant mass region. For $Q = 1500$ GeV, right at the first resonance, the scale uncertainties are $\pm{13.3\%}$ at LO and $\pm{5.7\%}$ at NLO, and they further reduce to $\pm{2.3\%}$ at NNLO. Away from the resonance, the uncertainty also decreases order by order; for example at $Q = 3000$ GeV, the scale uncertainties are reduced from $\pm{14.8\%}$ at LO to about $\pm{2.5\%}$ at NNLO. In the off-resonance region the uncertainty in general increases with increasing $Q$, which can be tamed by including further higher order terms in the perturbation theory. We also estimate the uncertainties coming from the non-perturbative PDFs. For this we calculate the uncertainty due to the intrinsic errors in the PDFs, which result from the various experimental errors entering the global fits. 
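The $7$-point prescription above can be checked mechanically: starting from the nine combinations of $\mu_r,\mu_f \in \{Q/2, Q, 2Q\}$, the last condition of eq. (\[eqscale\]) removes exactly the two combinations with extreme ratio $\mu_r/\mu_f = 4$ or $1/4$, leaving the seven listed in the text. A short sketch:

```python
from math import log

ratios = [0.5, 1.0, 2.0]   # allowed values of mu/Q
seven_point = [
    (mur, muf)
    for mur in ratios
    for muf in ratios
    # third condition of eq. (eqscale); small slack guards against rounding
    if abs(log(mur / muf)) <= log(2.0) + 1e-12
]
excluded = [(mur, muf) for mur in ratios for muf in ratios
            if (mur, muf) not in seven_point]
print(seven_point, excluded)
```

The excluded pairs $(1/2, 2)$ and $(2, 1/2)$ are precisely the "unusual" combinations the constraint is designed to remove.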
In this case we use the PDF sets [ABMP16]{} [@Alekhin:2013nda], [CT14]{} [@Dulat:2015mca], [MMHT2014]{} [@Harland-Lang:2014zoa], [NNPDF31]{} [@Ball:2017nwa], and [PDF4LHC15]{} [@Butterworth:2015oua] provided by [lhapdf]{}. The central predictions of the different PDF groups also differ, due to the different underlying assumptions in their global fits. We calculate the intrinsic PDF uncertainties using $51$ sets for [MMHT2014]{}, $57$ sets for [CT14]{}, $101$ sets for [NNPDF31]{}, $30$ sets for [ABMP16]{} and $31$ sets for [PDF4LHC15]{}. To this end we use all PDF sets extracted at the NNLO level. In [tab. (\[table3\])]{} we present these uncertainties for the di-lepton invariant mass distribution at NNLO. We find that around the resonance $M_1 =1500$ GeV the PDF uncertainty is well within $5\%$, except for [CT14]{} which shows a relatively larger uncertainty. In the high invariant mass region, however, the uncertainty increases due to the unavailability of sufficient data in that region. 0.05cm [![K-factors are presented up to N$^3$LO$_{sv}$ for the RS model (left) and for the signal (right).[]{data-label="n3lo_kfac"}](plots/n3lo/signal_k_n3lo.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} From the above observations we note that the NNLO corrections to RS production are too large to truncate the perturbation theory at this order, necessitating the computation of higher order corrections for the convergence of the perturbation series. As a first step beyond NNLO, we studied the three-loop SV correction to the di-lepton production channel using the universality of the SV coefficients for generic spin-2 couplings. In [fig. (\[n3lo\_kfac\])]{} we present these three-loop SV corrections in terms of the corresponding $K$-factors up to N$^3$LO$_{\rm sv}$ as a function of the di-lepton invariant mass for the pure RS case (left panel) as well as for the signal (right panel). We use the [MMHT2014nnlo]{} set for this analysis. 
These three-loop SV corrections are found to contribute an additional $-0.7\%$ of LO to the NNLO result at the first resonance $M_1 =1500$ GeV for the pure RS case, demonstrating very good convergence of the perturbation theory at this order. In the high invariant mass region away from the RS resonance, however, the correction due to the third order SV terms is about $1\%$ of the LO cross-section. We also note that the three-loop SV corrections are negative in the low-$Q$ region, while in the high-$Q$ region they are positive because of the threshold enhancement. The 7-point scale uncertainty is seen to increase in the lower invariant mass region, whereas towards higher invariant mass it improves. To further constrain the scale uncertainty, N$^3$LO PDFs are essential at this order. The missing sub-leading pieces are also important, in particular in the low-$Q$ region (see e.g. [@deFlorian:2014vta; @Das:2020adl]). The $\mu_r$ uncertainty, however, is seen to improve over the whole invariant mass region. Keeping $\mu_f=Q=M_1$, we observe an uncertainty of $\pm 0.9\%$ around the resonance $M_1=1500$ GeV for variations in the range $(1/2,2)M_1$. Resummed results {#sec:resummation} ------------ We now study the effect of the threshold logarithms by resumming them to NNLL accuracy and matching to the NNLO cross-section computed in [sec. (\[sec:fo\])]{}. The same choices of SM and RS model parameters have been used as in the fixed order computation. For the inverse Mellin transformation in [eq. (\[eq:matched\])]{}, we use $c = 1.9$. In [fig. (\[res\_inv\_distribution\])]{} we present the di-lepton invariant mass distribution for GR and for the signal at different logarithmic accuracies. 
[![Di-lepton invariant mass distribution up to NNLO+NNLL for RS model (left) and for the signal (right).[]{data-label="res_inv_distribution"}](plots/resummation/gr_inv_res.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} 0.05cm [![Di-lepton invariant mass distribution up to NNLO+NNLL for RS model (left) and for the signal (right).[]{data-label="res_inv_distribution"}](plots/resummation/signal_inv_res.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} $$\begin{aligned} \label{resum_k} \text{K}_{\rm LL} = \frac{ d\sigma^{\rm LO+LL}/dQ} {d\sigma^{\rm LO}/dQ } \,, ~ \text{K}_{\rm NLL} = \frac{ d\sigma^{\rm NLO+NLL}/dQ} {d\sigma^{\rm LO}/dQ } \,,~ \text{K}_{\rm NNLL} = \frac{ d\sigma^{\text{N$^2$LO+N$^2$LL}}/dQ} {d\sigma^{\rm LO}/dQ } \,.\end{aligned}$$ To quantify the resummation effects, we define the resummed K-factors in [eq. (\[resum\_k\])]{} and present them in [tab. (\[table4\])]{} for different $Q$ values. The enhancement due to the threshold logarithms in the signal is significant for all $Q$ values, but most significant in the resonance region. This is because of the underlying Born processes for graviton production in the RS model. At the Born level, the RS graviton can be produced via the quark-antiquark annihilation process (DY-like) as well as the gluon fusion channel (Higgs-like). It is well known that the QCD corrections, particularly the threshold enhancement, are different in these two channels and more pronounced for the gluon fusion channel. Here, the signal receives contributions from RS (DY-like as well as Higgs-like) and from the SM background (DY-like). However, in the resonance region GR dominates over the SM background by several orders of magnitude and hence the threshold enhancement due to the gluon fusion channel becomes prominent. Far off the resonance region, the signal is essentially dominated by the SM background and assumes a DY-like threshold enhancement. For completeness, we present these resummed K-factors for the GR case in [fig. 
(\[res\_k\_distribution\])]{}. $$\begin{aligned} \label{resum_ratio} \text{R}_{2} = \frac{ d\sigma^{\rm NLO+NLL}/dQ} {d\sigma^{\rm NLO}/dQ } \,,~ \text{R}_{3} = \frac{ d\sigma^{\text{N$^2$LO+N$^2$LL}}/dQ} {d\sigma^{\rm NNLO}/dQ } \,.\end{aligned}$$ In order to further study the enhancement due to threshold resummation for the signal, we consider the ratios of the resummed results to the fixed order results defined in [eq. (\[resum\_ratio\])]{}. We observe that at the resonance ($Q=M_1=1500$ GeV), NNLO+NNLL provides an additional $4\%$ enhancement over NNLO. These ratios are presented in [fig. (\[res\_k\_distribution\])]{}. Moreover, in [tab. (\[table5\])]{} we present the resummed $K$-factors right at the resonance for different values of the resonance mass $M_1$. [![Resummed K-factors for the di-lepton invariant mass distribution as defined in [eq. (\[resum\_k\])]{} and the corresponding ratios as defined in [eq. (\[resum\_ratio\])]{} (right).[]{data-label="res_k_distribution"}](plots/resummation/signal_k_res.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} 0.05cm [![Resummed K-factors for the di-lepton invariant mass distribution as defined in [eq. (\[resum\_k\])]{} and the corresponding ratios as defined in [eq. (\[resum\_ratio\])]{} (right).[]{data-label="res_k_distribution"}](plots/resummation/test1_sig.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} Next, we estimate the theoretical uncertainties in the resummed predictions due to the unphysical scales $\mu_r$ and $\mu_f$ as well as due to the non-perturbative PDFs. The conventional $7$-point scale uncertainties for the signal are presented in [fig. (\[res\_scale\])]{} at different logarithmic accuracies. In the resonance region, these scale uncertainties are estimated to be about $\pm 17.5\%$, $\pm 8.1\%$ and $\pm 3.4\%$ at LO+LL, NLO+NLL and NNLO+NNLL, respectively. These uncertainties are a bit larger than the corresponding ones for the fixed order results presented in [fig. (\[nnlo\_scale\])]{}. 
[![$7$-point scale uncertainties in the signal are shown up to NNLO+NNLL for di-lepton invariant mass distribution.[]{data-label="res_scale"}](plots/resummation/scale_test_res.pdf "fig:"){width="60.00000%" height="50.00000%"}]{} To assess these uncertainties at the NNLO level and beyond, we contrast the scale uncertainties of the resummed results with those of the fixed order results in the left panel of [fig. (\[scale\_resum\_vs\_fo\])]{}. For the resummed case, the scale uncertainty at NNLO+NNLL for $Q=M_1$ is about $3.4\%$, larger than the $2.3\%$ at the NNLO level. This increase in scale uncertainty can be understood from the fact that in the resummation formalism only the threshold logarithms, which are significant in the limit $z \to 1$, have been resummed to all orders in QCD, but not the logarithms of the unphysical scales. Moreover, it has been observed that for Higgs-like processes resummation does not improve the scale uncertainties over the fixed order ones [@Bonvini:2014joa] for any choice of central scales. In the present context, graviton production at the resonance receives a significant contribution from this Higgs-like gluon fusion process, hence the associated large scale uncertainty. However, the uncertainties due to the renormalization scale $\mu_r$ alone are found to be reduced from $\pm 1.2\%$ at the fixed order NNLO level at $Q=1500$ GeV to $\pm 0.5\%$ at the resummed NNLO+NNLL level (see right panel of [fig. (\[scale\_resum\_vs\_fo\])]{}). [![Comparison of $7$-point scale uncertainties at the signal for NNLO and NNLO+NNLL (left). The uncertainty only due to $\mu_r$ scale variation around the central scale $Q$ at NNLO and NNLO+NNLL (right) for fixed $\mu_f=Q$.[]{data-label="scale_resum_vs_fo"}](plots/resummation/scale_res_vs_fo.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} 0.05cm [![Comparison of $7$-point scale uncertainties at the signal for NNLO and NNLO+NNLL (left). 
The uncertainty only due to $\mu_r$ scale variation around the central scale $Q$ at NNLO and NNLO+NNLL (right) for fixed $\mu_f=Q$.[]{data-label="scale_resum_vs_fo"}](plots/resummation/mur_res_vs_fo.pdf "fig:"){width="48.00000%" height="40.00000%"}]{} Further, we estimate the uncertainty in our predictions due to the non-perturbative PDF inputs. These uncertainties are obtained for each PDF group by systematically calculating the cross section for each of the available sets. The PDF uncertainties are presented for different resonance mass $M_1$ values in [tab. (\[table6\])]{}. For the kinematic range considered and the PDF groups studied, this uncertainty is smallest for [NNPDF31]{} at $M_1=1500$ GeV and largest for [CT14]{} at $M_1=3500$ GeV. Conclusions {#sec:conclusion} =========== In the absence of any signature of new physics at the LHC, it is high time to explore possible scenarios where a potential discovery of physics beyond the SM could be made. In particular, the RS model proves to be a very good candidate in the search for massive spin-2 resonances. In the literature, the NLO QCD corrections to this process have been found to be quite substantial in the di-lepton channel, implying the need for higher order corrections to provide precise theory predictions that augment the search for RS gravitons at collider experiments. In this work, we have studied the NNLO QCD corrections to the di-lepton production process through the graviton propagator and have presented results for the di-lepton invariant mass distribution up to $Q$ values as high as $3.5$ TeV. The underlying Born contributions to this process receive both DY-like and Higgs-like contributions, and hence the corresponding QCD corrections to the signal in the resonance region are very significant, while the QCD corrections off the resonance are mostly SM DY-like. This results in K-factors that depend strongly on the invariant mass of the di-lepton. 
We have presented these mass-dependent K-factors at NNLO and beyond for the 13 TeV LHC. We find that while the NLO correction is about $53\%$ of LO, the NNLO correction increases the cross section by an additional $21\%$. The scale uncertainty in the NNLO result in the resonance region is also significantly reduced, to as small as $2\%$ for $Q=M_1=1500$ GeV. Further, we have extended our work to include the important SV corrections at the N$^3$LO level. We find that the SV contribution at this order for $Q=1500$ GeV is about $0.7\%$ of LO in magnitude but negative in sign, thus demonstrating a very good convergence of the perturbation series. In addition, we have studied threshold resummation by resumming all the large threshold logarithms to NNLL accuracy. We have presented these results by matching the NNLL resummed results to the fixed-order NNLO ones. We find that the resummed results contribute an additional $7\%$ of LO on top of the NNLO ones. To conclude, we note that our results are the most precise theoretical predictions available to date and that these mass-dependent K-factors will be useful in the search for RS graviton resonances in experimental data analyses using di-lepton events at the LHC. Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank V. Ravindran for useful discussions. The research of G.D. is supported by the *Deutsche Forschungsgemeinschaft* (DFG) within the Collaborative Research Center TRR 257 (*Particle Physics Phenomenology after the Higgs Discovery*).
--- abstract: 'Smoothed Dissipative Particle Dynamics (SDPD) is a mesoscopic method which allows one to select the level of resolution at which a fluid is simulated. The aim of this work is to extend SDPD to chemically reactive systems. To this end, an additional progress variable is attached to each mesoparticle and evolves according to chemical kinetics. This reactive SDPD model is illustrated with numerical studies of the shock-to-detonation transition in nitromethane as well as the stationary behavior of the reactive wave.' author: - Gérôme Faure - 'Jean-Bernard Maillet' title: Simulations of detonation waves with smoothed dissipative particle dynamics --- Introduction {#sec:introduction} ============ With the development of new architectures and massively parallel codes, Molecular Dynamics (MD) simulations have been applied to systems of increasing sizes, up to billions of atoms, and times, up to a few nanoseconds [@glosli_2007; @kadau_2010]. Nevertheless, MD simulations still cannot reach the time and length scales at which some complex phenomena, such as the build-up of reactive waves in molecular systems, occur. A variety of mesoscopic methods have been designed to stretch these scales by several orders of magnitude, at the expense of a decreased predictive power compared to MD. They generally consider fewer degrees of freedom and allow for larger timesteps, since they do not need to track intramolecular vibrations and they use softer potentials. Smoothed Dissipative Particle Dynamics (SDPD) [@espanol_2003] has been introduced as a top-down coarse-graining approach that combines Smoothed Particle Hydrodynamics (SPH) [@lucy_1977; @monaghan_1977], a particle Lagrangian discretization of the Navier-Stokes equations, and the thermal fluctuations of mesoscopic models such as Dissipative Particle Dynamics with Energy conservation (DPDE) [@avalos_1997; @espanol_1997]. This makes the model thermodynamically suitable to study hydrodynamics at the nanoscale.
Besides, it has been shown to give results consistent with MD for a wide range of resolutions, at equilibrium and for shock waves [@faure_2016], as well as for dynamical properties such as the diffusion coefficient of a colloid in a SDPD bath [@vazquez_2009; @litvinov_2009]. In particular, SDPD has been used to study colloids [@vazquez_2009; @bian_2012] or polymer suspensions [@litvinov_2008]. Detonation waves have been simulated successfully with MD for model systems [@brenner_1993; @rice_1996; @holian_2002; @heim_2007; @herring_2010] in order to study the shock-to-detonation transition and the structure of the reactive waves. The formation of a detonation wave is a complex phenomenon which requires handling chemical reactions along with the hydrodynamic behavior of the material, two processes that occur at very different time and length scales. Due to the cost of atomistic reactive potentials, MD is limited to a rather short spatial and temporal domain, while hydrodynamic methods cannot provide an accurate description of chemical reactions. It was recently proposed to use a moving-window technique to study detonation waves for longer times with a limited number of atoms [@zhakhovsky_2014]. The modeling of reactive systems at the mesoscopic scale is of great interest to deal with more realistic time and length scales. DPDE [@avalos_1997; @espanol_1997] is such a model, where one can coarse-grain a molecule into a single particle, reducing the number of degrees of freedom explicitly described. Thanks to its energy conservation, it is able to deal with shock waves [@stoltz_2006] and it has also been extended to reactive materials [@maillet_2007; @brennan_2014]. Reactive DPDE has proved to give insight into the shock-to-detonation transition for nitromethane [@maillet_2011]. Reactive mechanisms have also been included in other coarse-grained dynamics [@antillon_2014] and in SPH for the simulation of detonation waves [@yang_2013].
We adapt the mechanisms proposed in [@maillet_2007] for DPDE to the SDPD setting. We aim to illustrate the ability of SDPD to gain access to scales MD cannot reach while retaining some of its features, like consistent thermodynamic fluctuations. This article is organized as follows. We first present in Section \[sec:equations\] the equations of SDPD as reformulated in [@faure_2016]. In Section \[sec:chemistry\], we introduce our reactive mechanism for SDPD, which is inspired by the work of [@maillet_2007] for DPDE. We illustrate the reactive SDPD model with the simulation of a detonation wave in nitromethane in Section \[sec:results\]. Smoothed Dissipative Particle Dynamics {#sec:equations} ====================================== At the hydrodynamic scale, the dynamics of the fluid is governed by the Navier-Stokes equations, which read in their Lagrangian form when heat conduction is neglected (for time $t\geq0$ and position ${\boldsymbol{x}}$ in a domain $\Omega\subset \mathbb{R}^3$): $$\label{eq:navier-stokes} \begin{aligned} {\rm D}_t\rho + \rho\,{{\rm div}}_{{\boldsymbol{x}}}{\boldsymbol{v}} &= 0,\\ \rho {\rm D}_t{\boldsymbol{v}} &= {{\rm div}}_{{\boldsymbol{x}}}\left({\boldsymbol{\sigma}}\right),\\ \rho{\rm D}_t\left(u + \frac12{\boldsymbol{v}}^2\right) &= {{\rm div}}_{{\boldsymbol{x}}}\left({\boldsymbol{\sigma}}{\boldsymbol{v}}\right).
\end{aligned}$$ The material derivative used in the Lagrangian description is defined as $${\rm D}_t f(t,{\boldsymbol{x}}) = \partial_t f(t,{\boldsymbol{x}}) + {\boldsymbol{v}}(t,{\boldsymbol{x}})\cdot{{\boldsymbol{\nabla}}}_{{\boldsymbol{x}}}f(t,{\boldsymbol{x}}).$$ The unknowns are $\rho(t,{\boldsymbol{x}}) \in \mathbb{R}$, the density of the fluid, ${\boldsymbol{v}}(t,{\boldsymbol{x}}) \in \mathbb{R}^3$, its velocity, $u(t,{\boldsymbol{x}}) \in \mathbb{R}$, its internal energy, and ${\boldsymbol{\sigma}}(t,{\boldsymbol{x}}) \in \mathbb{R}^{3\times 3}$, the stress tensor: $$\label{eq:stress-tensor} {\boldsymbol{\sigma}} = -P{{\boldsymbol{\mathrm{Id}}}}+ \eta({{\boldsymbol{\nabla}}}{\boldsymbol{v}} + ({{\boldsymbol{\nabla}}}{\boldsymbol{v}})^T) + \left(\zeta-\frac23\eta\right){{\rm div}}({\boldsymbol{v}}){{\boldsymbol{\mathrm{Id}}}},$$ where $P$ is the pressure of the fluid, $\eta$ the shear viscosity and $\zeta$ the bulk viscosity. In the following, we first present the principles of the particle discretization of the Navier-Stokes equations with SPH in Section \[sec:sph\]. We then introduce in Section \[sec:sdpd\] the SDPD equations reformulated in terms of internal energies [@faure_2016]. Particle discretization {#sec:sph} ----------------------- Smoothed Particle Hydrodynamics [@lucy_1977; @monaghan_1977] is a Lagrangian discretization of the Navier-Stokes equations (\[eq:navier-stokes\]) on a finite number $N$ of fluid particles playing the role of interpolation nodes. These fluid particles are associated with a portion of fluid of mass $m$. They are located at positions ${\boldsymbol{q}}_i \in \Omega$ and have a momentum ${\boldsymbol{p}}_i \in\mathbb{R}^{3}$. The internal degrees of freedom are represented by an internal energy $\varepsilon_i \in \mathbb{R}$.
### Approximation of field variables and their gradients {#sec:approx-sph} In the SPH discretization, the field variables are approximated as the average of their values at the particle positions weighted by a smoothing kernel function $W$ with finite support. We introduce the smoothing length $h$ defined such that $W({\boldsymbol{r}})=0$ if ${\left\vert {\boldsymbol{r}} \right\vert} \geq h $. In the sequel, we use the notation $r = {\left\vert {\boldsymbol{r}} \right\vert}$. In this work, we rely on a cubic spline [@liu_2003], whose expression reads $$\label{eq:sdpd-cubic-w} W({\boldsymbol{r}}) = \left\{ \begin{array}{cl} \displaystyle \frac{8}{\pi h^3} \left(1-6\frac{r^2}{h^2}+6\frac{r^3}{h^3}\right) & \displaystyle \text{ if } r \leq \frac{h}{2},\\[1em] \displaystyle \frac{16}{\pi h^3} \left(1-\frac{r}{h}\right)^3 & \displaystyle \text{ if } \frac{h}{2} \leq r \leq h,\\[1em] 0 & \displaystyle \text{ if } r \geq h. \end{array} \right.$$ The field variables are then approximated as $$\label{eq:sph-approx} f({\boldsymbol{x}}) \approx \sum_{i=1}^N f_i W({\boldsymbol{x}}-{\boldsymbol{q}}_i),$$ where $f_i$ denotes the value of the field $f$ on the particle $i$. 
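As a quick sanity check on the kernel (\[eq:sdpd-cubic-w\]), one can verify numerically that it is normalized over its support in three dimensions. A minimal sketch (illustrative, not the authors' code):

```python
import math

def cubic_spline_W(r, h):
    """Cubic-spline smoothing kernel of Eq. (eq:sdpd-cubic-w), 3D-normalized."""
    q = r / h
    if q <= 0.5:
        return 8.0 / (math.pi * h**3) * (1.0 - 6.0 * q**2 + 6.0 * q**3)
    if q <= 1.0:
        return 16.0 / (math.pi * h**3) * (1.0 - q)**3
    return 0.0  # finite support: W vanishes beyond r = h

def kernel_norm(h, n=20000):
    """Midpoint quadrature of 4*pi*r^2 W(r) over [0, h]; should return ~1."""
    dr = h / n
    return sum(4.0 * math.pi * ((i + 0.5) * dr)**2
               * cubic_spline_W((i + 0.5) * dr, h) * dr
               for i in range(n))
```

Both branches agree at $r=h/2$ (where $W = 2/\pi h^3$), and the integral of $4\pi r^2 W(r)$ over $[0,h]$ equals $1$, as required for the approximation of a constant field to be exact on average.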
The approximation of the gradient ${{\boldsymbol{\nabla}}}_{{\boldsymbol{x}}} f$ is obtained by differentiating Equation (\[eq:sph-approx\]), which yields $${{\boldsymbol{\nabla}}}_{{\boldsymbol{x}}} f({\boldsymbol{x}}) \approx \sum_{i=1}^N f_i {{\boldsymbol{\nabla}}}_{{\boldsymbol{x}}}W({\boldsymbol{x}}-{\boldsymbol{q}}_i).$$ In order to have more explicit expressions, we introduce the function $F$ such that ${\boldsymbol{\nabla}}_{{\boldsymbol{r}}} W({\boldsymbol{r}}) = -F({\left\vert {\boldsymbol{r}} \right\vert}){\boldsymbol{r}}$, which in the case of the cubic spline (\[eq:sdpd-cubic-w\]) is given by $$F(r) = \left\{ \begin{array}{cl} \displaystyle \frac{48}{\pi h^5} \left(2-3{\frac{r}{h}}\right) & \displaystyle \text{ if } r \leq \frac{h}{2},\\[1em] \displaystyle \frac{48}{\pi h^4} \frac1{r} \left(1-{\frac{r}{h}}\right)^2& \displaystyle \text{ if } \frac{h}{2} \leq r \leq h,\\[1em] 0 & \displaystyle \text{ if } r \geq h. \end{array} \right.$$ The expression of the approximated gradient finally becomes $${{\boldsymbol{\nabla}}}_{{\boldsymbol{x}}} f({\boldsymbol{x}}) \approx -\sum_{i=1}^N f_i F({\left\vert {\boldsymbol{x}}-{\boldsymbol{q}}_i \right\vert})({\boldsymbol{x}}-{\boldsymbol{q}}_i).$$ In order to simplify the notation, we define the following quantities for two particles $i$ and $j$: $${\boldsymbol{r}}_{ij} = {\boldsymbol{q}}_i - {\boldsymbol{q}}_j,\quad r_{ij} = {\left\vert {\boldsymbol{r}}_{ij} \right\vert},\quad {\boldsymbol{e}}_{ij} = \frac{{\boldsymbol{r}}_{ij}}{r_{ij}},\quad F_{ij} = F(r_{ij}).$$ We can associate a density $\rho_i$ and a volume $\mathcal{V}_i$ to each particle as $$\label{eq:sdpd-rho-v} \rho_i({\boldsymbol{q}}) = \sum_{j=1}^N mW({\boldsymbol{r}}_{ij}),\quad \mathcal{V}_i({\boldsymbol{q}}) = \frac{m}{\rho_i({\boldsymbol{q}})}.$$ The corresponding approximations of the density gradient evaluated at the particle positions read $$\label{eq:gradient-rho} {{\boldsymbol{\nabla}}}_{{\boldsymbol{q}}_j} \rho_i = \left\{ \begin{array}{cl} m
F_{ij}{\boldsymbol{r}}_{ij} & \text{ if } j\neq i,\\[.5em] -m \sum\limits_{k\neq i} F_{ik}{\boldsymbol{r}}_{ik} & \text{ if } j=i. \end{array} \right.$$ The smoothing length needs to be adapted to the size of the SDPD particles, defined as $\displaystyle K=\frac{m}{m_0}$ with $m_0$ the mass of a single microscopic particle (typically a molecule). This is essential for the approximations (\[eq:sph-approx\]) to remain meaningful. In order to keep the average number of neighbors roughly constant in the smoothing sum, we associate a smoothing length $h_K$ with each particle size $K$ through $$h_K = \mathfrak{h}\left(\frac{m_K}{\rho}\right)^{\frac13},$$ where $m_K = Km_0$ is the mass of a particle of size $K$. In this work, we have taken $\mathfrak{h}=2.5$, which corresponds to a typical number of 60-70 neighbors, a commonly accepted value [@liu_2003]. ### Thermodynamic closure {#sec:thermo-closure} As in Navier-Stokes hydrodynamics, an equation of state is required to close the set of equations provided by the SPH discretization. This equation of state relates the entropy $S_i$ of the mesoparticle $i$ with its density $\rho_i({\boldsymbol{q}})$ (as defined by (\[eq:sdpd-rho-v\])) and its internal energy $\varepsilon_i$ through an entropy function $$\label{eq:sdpd-eos} S_i(\varepsilon_i,{\boldsymbol{q}})=\mathcal{S}(\varepsilon_i,\rho_i({\boldsymbol{q}})).$$ It is then possible to assign to each particle a temperature $$T_i(\varepsilon_i,{\boldsymbol{q}}) = \left[\frac1{\partial_{\varepsilon}\mathcal{S}}\right](\varepsilon_i,\rho_i({\boldsymbol{q}})),$$ a pressure $$P_i(\varepsilon_i,{\boldsymbol{q}}) = -\frac{\rho_i({\boldsymbol{q}})^2}{m}\left[\frac{\partial_{\rho}\mathcal{S}}{\partial_{\varepsilon}\mathcal{S}}\right](\varepsilon_i,\rho_i({\boldsymbol{q}})),$$ and a heat capacity at constant volume $$C_{v,i}(\varepsilon_i,{\boldsymbol{q}}) = -\left[\frac{(\partial_{\varepsilon} \mathcal{S})^2}{\partial_{\varepsilon}^2\mathcal{S}}\right](\varepsilon_i,\rho_i({\boldsymbol{q}})).$$ To simplify the notation, we omit in Section \[sec:eom-sdpd\] the dependence of
$T_i$, $P_i$ and $C_{v,i}$ on the variables $\varepsilon_i$ and ${\boldsymbol{q}}$. Equations of motion for SDPD {#sec:sdpd} ---------------------------- Smoothed Dissipative Particle Dynamics [@espanol_2003] is a top-down mesoscopic method relying on the SPH discretization of the Navier-Stokes equations, with the addition of thermal fluctuations which are modeled by a stochastic force. In its reformulated form [@faure_2016], SDPD is a set of stochastic differential equations for the following variables: the positions ${\boldsymbol{q}}_i\in\Omega\subset\mathbb{R}^{3}$, the momenta ${\boldsymbol{p}}_i\in\mathbb{R}^{3}$ and the energies $\varepsilon_i\in \mathbb{R}$ for $i=1\dots N$. The dynamics can be split into two elementary dynamics, the first one being a conservative dynamics derived from the pressure gradient in the stress tensor (\[eq:stress-tensor\]), and the second a set of pairwise fluctuation and dissipation dynamics stemming from the viscous terms in (\[eq:stress-tensor\]) coupled with random fluctuations. ### Conservative forces {#sec:conservative-sdpd} The elementary force between particles $i$ and $j$ arising from the discretization of the pressure gradient in the Navier-Stokes momentum equation reads $$\label{eq:cons-forces} {\boldsymbol{\mathcal{F}}}_{{\rm cons},ij} = m^2\left(\frac{P_i}{\rho_i^2}+\frac{P_j}{\rho_j^2}\right)F_{ij}{\boldsymbol{r}}_{ij}.$$ This part of the dynamics preserves the entropies $S_i$ along with the total energy $$E({\boldsymbol{q}},{\boldsymbol{p}},\varepsilon) = \sum_{i=1}^N \left(\varepsilon_i + \frac{{\boldsymbol{p}}_i^2}{2m}\right).$$ As a consequence, the variation of the internal energy only emerges from the variation of the particle volume as $$\begin{aligned} {{\rm d}}\varepsilon_i &= - P_i{{\rm d}}\mathcal{V}_i,\\ &= -\sum_{j\neq i}\frac{m^2P_i}{\rho_i({\boldsymbol{q}})^2}F_{ij}{\boldsymbol{r}}_{ij}\cdot{\boldsymbol{v}}_{ij}\,{{\rm d}}t.
\end{aligned}$$ This allows us to write the conservative part of the dynamics as $$\label{eq:sdpd-cons} \left\{\begin{aligned} {{\rm d}}{\boldsymbol{q}}_i &= \frac{{\boldsymbol{p}}_i}{m}\,{{\rm d}}t,\\ {{\rm d}}{\boldsymbol{p}}_i &= \sum_{j\neq i} {\boldsymbol{\mathcal{F}}}_{{\rm cons},ij}\,{{\rm d}}t,\\ {{\rm d}}\varepsilon_i &= -\sum_{j\neq i}\frac{m^2P_i}{\rho_i({\boldsymbol{q}})^2}F_{ij}{\boldsymbol{r}}_{ij}\cdot{\boldsymbol{v}}_{ij}\,{{\rm d}}t. \end{aligned}\right.$$ ### Fluctuation and Dissipation {#sec:fd-sdpd} The viscous term in the Navier-Stokes equations translates into a dissipative force in the equations of motion. This term is coupled with a fluctuation force, which distinguishes SDPD from a mere particle discretization of the Navier-Stokes equations such as SPH. In order to give the expression of the viscous and fluctuating part of the dynamics, we define the relative velocity for a pair of particles $i$ and $j$ as $${\boldsymbol{v}}_{ij} = \frac{{\boldsymbol{p}}_i}{m}-\frac{{\boldsymbol{p}}_j}{m}.$$ In the spirit of DPDE, we choose a pairwise fluctuation and dissipation term for $i<j$ of the following form $$\label{eq:sdpd-simple-fluct} \left\{ \begin{aligned} {{\rm d}}{\boldsymbol{p}}_i &= -{\boldsymbol{\Gamma}}_{ij}{\boldsymbol{v}}_{ij}\,{{\rm d}}t + {\boldsymbol{\Sigma}}_{ij}{{\rm d}}{\boldsymbol{B}}_{ij},\\ {{\rm d}}{\boldsymbol{p}}_j &= {\boldsymbol{\Gamma}}_{ij}{\boldsymbol{v}}_{ij}\,{{\rm d}}t - {\boldsymbol{\Sigma}}_{ij}{{\rm d}}{\boldsymbol{B}}_{ij},\\ {{\rm d}}\varepsilon_i &= \frac12\left[{\boldsymbol{v}}_{ij}^T{\boldsymbol{\Gamma}}_{ij}{\boldsymbol{v}}_{ij} - \frac{{\textrm{Tr}}({\boldsymbol{\Sigma}}_{ij}{\boldsymbol{\Sigma}}_{ij}^T)}{m}\right]{{\rm d}}t -\frac12 {\boldsymbol{v}}_{ij}^T{\boldsymbol{\Sigma}}_{ij}{{\rm d}}{\boldsymbol{B}}_{ij},\\ {{\rm d}}\varepsilon_j &= \frac12\left[{\boldsymbol{v}}_{ij}^T{\boldsymbol{\Gamma}}_{ij}{\boldsymbol{v}}_{ij} - \frac{{\textrm{Tr}}({\boldsymbol{\Sigma}}_{ij}{\boldsymbol{\Sigma}}_{ij}^T)}{m}\right]{{\rm d}}t
-\frac12 {\boldsymbol{v}}_{ij}^T{\boldsymbol{\Sigma}}_{ij}{{\rm d}}{\boldsymbol{B}}_{ij}, \end{aligned} \right.$$ where ${\boldsymbol{B}}_{ij}$ is a $3$-dimensional vector of standard Brownian motions, and ${\boldsymbol{\Gamma}}_{ij}$ and ${\boldsymbol{\Sigma}}_{ij}$ are $3\times3$ symmetric matrices. In the dynamics (\[eq:sdpd-simple-fluct\]), the equations acting on the momenta preserve the total momentum in the system. Furthermore, as in DPDE, the equations for the energy variables are determined to ensure the conservation of the total energy $E({\boldsymbol{q}},{\boldsymbol{p}},\varepsilon)$. Since $\displaystyle {{\rm d}}\varepsilon_i = -\frac12 {{\rm d}}\left(\frac{{\boldsymbol{p}}_i^2}{2m} + \frac{{\boldsymbol{p}}_j^2}{2m}\right)$, Itô calculus yields the resulting equations in (\[eq:sdpd-simple-fluct\]). We consider friction and fluctuation matrices of the form $$\label{eq:fluct-gamma} \begin{aligned} {\boldsymbol{\Gamma}}_{ij} &= \gamma^{\parallel}_{ij}{{\boldsymbol{P}}}^{\parallel}_{ij} + \gamma^{\perp}_{ij}{{\boldsymbol{P}}}^{\perp}_{ij},\\ {\boldsymbol{\Sigma}}_{ij} &= \sigma^{\parallel}_{ij}{{\boldsymbol{P}}}^{\parallel}_{ij} + \sigma^{\perp}_{ij}{{\boldsymbol{P}}}^{\perp}_{ij}, \end{aligned}$$ with the projection matrices ${{\boldsymbol{P}}}^{\parallel}_{ij}$ and ${{\boldsymbol{P}}}^{\perp}_{ij}$ given by $${{\boldsymbol{P}}}_{ij}^{\parallel} = {\boldsymbol{e}}_{ij}\otimes{\boldsymbol{e}}_{ij},\quad {{\boldsymbol{P}}}_{ij}^{\perp} = {{\boldsymbol{\mathrm{Id}}}}- {{\boldsymbol{P}}}_{ij}^{\parallel}.$$ Introducing the coefficients $$\begin{aligned} a_{ij} &=\left(\frac{5\eta}{3}-\zeta\right)\frac{m^2F_{ij}}{\rho_i\rho_j},\\ b_{ij}+\frac{a_{ij}}3 &= 5\left(\frac{\eta}3+\zeta\right)\frac{m^2F_{ij}}{\rho_i\rho_j}, \\ d_{ij} &= k_{\rm B}\frac{T_iT_j}{(T_i+T_j)^2}\left(\frac1{C_{v,i}}+\frac1{C_{v,j}}\right).
\end{aligned}$$ defined from the fluid viscosities $\eta$ and $\zeta$ appearing in the stress tensor (\[eq:stress-tensor\]), a possible choice for the friction and fluctuation coefficients is $$\label{eq:sdpd-gamma-sigma} \begin{aligned} \gamma_{ij}^{\parallel} &= \left(\frac43a_{ij}+b_{ij}\right) \left( 1 - d_{ij}\right),\\ \gamma_{ij}^{\perp} &= a_{ij}\left( 1 - d_{ij} \right),\\ \sigma_{ij}^{\theta} &= 2\sqrt{\frac{\gamma_{ij}^{\theta}}{1-d_{ij}} k_{\rm B}\frac{T_iT_j}{T_i+T_j}}. \end{aligned}$$ This ensures that measures of the form $$\label{eq:sdpd-energy-minv} \begin{aligned} &\mu({{\rm d}}{\boldsymbol{q}}\,{{\rm d}}{\boldsymbol{p}}\,{{\rm d}}\varepsilon)\\ &\,= g\left(E({\boldsymbol{q}},{\boldsymbol{p}},\varepsilon),\sum\limits_{i=1}^N{\boldsymbol{p}}_i\right)\prod_{i=1}^N\frac{\exp\left(\frac{S_i(\varepsilon_i,{\boldsymbol{q}})}{k_{\rm B}}\right)}{T_i(\varepsilon_i,{\boldsymbol{q}})}\,{{\rm d}}{\boldsymbol{q}}\,{{\rm d}}{\boldsymbol{p}}\,{{\rm d}}\varepsilon \end{aligned}$$ are left invariant by the elementary dynamics (\[eq:sdpd-simple-fluct\]), as shown in [@faure_2016]. While other forms of these coefficients are possible (for instance constant $\sigma$ parameters), the relations (\[eq:sdpd-gamma-sigma\]) allow one to recover the same dissipation as in the original SDPD [@espanol_2003].
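The algebraic structure of (\[eq:fluct-gamma\]) and (\[eq:sdpd-gamma-sigma\]) is easy to check numerically: the two projectors are orthogonal and complementary, so ${\boldsymbol{\Sigma}}_{ij}{\boldsymbol{\Sigma}}_{ij}^T$ decomposes with weights $(\sigma^{\parallel}_{ij})^2$ and $(\sigma^{\perp}_{ij})^2$, and each $\sigma$ is tied to the corresponding $\gamma$ by the fluctuation–dissipation balance. A sketch with illustrative, made-up parameter values (not taken from the paper):

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

# Illustrative pair state (hypothetical values)
Ti, Tj = 320.0, 300.0             # particle temperatures, K
Cvi, Cvj = 5.0e-21, 5.0e-21       # heat capacities, J/K
g_par, g_perp = 2.0e-13, 1.5e-13  # hypothetical friction coefficients

d_ij = kB * Ti * Tj / (Ti + Tj)**2 * (1.0 / Cvi + 1.0 / Cvj)

e = np.array([1.0, 2.0, 2.0]) / 3.0   # unit vector e_ij
P_par = np.outer(e, e)                # projector along e_ij
P_perp = np.eye(3) - P_par            # projector orthogonal to e_ij

def sigma(gamma):
    # Eq. (eq:sdpd-gamma-sigma): fluctuation amplitude from the friction
    return 2.0 * np.sqrt(gamma / (1.0 - d_ij) * kB * Ti * Tj / (Ti + Tj))

Sigma = sigma(g_par) * P_par + sigma(g_perp) * P_perp
SST = Sigma @ Sigma.T
```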
### Complete equations of motion {#sec:eom-sdpd} As a result, the complete set of equations of motion for SDPD, reformulated in the position, momentum and internal energy variables, reads $$\label{eq:sdpd-energy} \left\{ \begin{aligned} {{\rm d}}{\boldsymbol{q}}_i =&\, \frac{{\boldsymbol{p}}_i}{m}\,{{\rm d}}t,\\ {{\rm d}}{\boldsymbol{p}}_i =& \sum_{j\neq i} m^2\left(\frac{P_i}{\rho_i^2}+\frac{P_j}{\rho_j^2}\right)F_{ij}{\boldsymbol{r}}_{ij}\,{{\rm d}}t - {\boldsymbol{\Gamma}}_{ij}{\boldsymbol{v}}_{ij}\,{{\rm d}}t\\ &+ {\boldsymbol{\Sigma}}_{ij}{{\rm d}}{\boldsymbol{B}}_{ij},\\ {{\rm d}}\varepsilon_i =& \sum_{j\neq i} -\frac{m^2P_i}{\rho_i^2}F_{ij}{\boldsymbol{r}}_{ij}^T{\boldsymbol{v}}_{ij}\,{{\rm d}}t\\ &+ \frac12 \left[{\boldsymbol{v}}_{ij}^T{\boldsymbol{\Gamma}}_{ij}{\boldsymbol{v}}_{ij} -\frac1{m}{\textrm{Tr}}({\boldsymbol{\Sigma}}_{ij}{\boldsymbol{\Sigma}}_{ij}^T)\right]{{\rm d}}t\\ & - \frac12 {\boldsymbol{v}}_{ij}^T{\boldsymbol{\Sigma}}_{ij}{{\rm d}}{\boldsymbol{B}}_{ij}, \end{aligned} \right.$$ where ${\boldsymbol{\Sigma}}_{ij}$ and ${\boldsymbol{\Gamma}}_{ij}$ are given by (\[eq:fluct-gamma\]) and (\[eq:sdpd-gamma-sigma\]). The dynamics (\[eq:sdpd-energy\]) preserves the total momentum $\sum\limits_{i=1}^N{\boldsymbol{p}}_i$ and the total energy $E({\boldsymbol{q}},{\boldsymbol{p}},\varepsilon)$, since all the elementary sub-dynamics ensure these conservation properties. The time integration of the SDPD equations of motion can be performed thanks to a splitting strategy, as described in [@faure_2016]. We resort to a Velocity-Verlet scheme for the conservative part given by Equation (\[eq:sdpd-cons\]), while for the fluctuation/dissipation part in Equation (\[eq:sdpd-simple-fluct\]) each pair is handled successively, following the ideas introduced for DPD in [@shardlow_2003] and for DPDE in [@stoltz_2006]. This scheme ensures good energy conservation, though linear energy drifts are observed in the long term.
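The role of the pairwise treatment can be illustrated on a single pair in one dimension: the momenta are updated by an Euler–Maruyama step of the fluctuation/dissipation dynamics, and the kinetic-energy change is redistributed equally to the two internal energies, so that total momentum and total energy are conserved exactly at the discrete level. This is a schematic 1D analogue with a scalar friction, not the Shardlow-type integrator used in the paper:

```python
import random

def fd_pair_step(pi, pj, ei, ej, m, gamma, sig, dt, rng):
    """One pairwise fluctuation/dissipation update (1D analogue).
    The internal energies absorb exactly minus the kinetic-energy change."""
    v = (pi - pj) / m                  # relative velocity v_ij
    dB = rng.gauss(0.0, dt**0.5)       # Brownian increment over dt
    dp = -gamma * v * dt + sig * dB    # impulse on particle i
    pi_new, pj_new = pi + dp, pj - dp  # total momentum unchanged
    dK = (pi_new**2 + pj_new**2 - pi**2 - pj**2) / (2.0 * m)
    return pi_new, pj_new, ei - 0.5 * dK, ej - 0.5 * dK

rng = random.Random(0)
m = 1.0
pi, pj, ei, ej = 2.0, -1.0, 5.0, 5.0
E0 = (pi**2 + pj**2) / (2.0 * m) + ei + ej
for _ in range(1000):
    pi, pj, ei, ej = fd_pair_step(pi, pj, ei, ej, m, 0.5, 0.3, 1.0e-3, rng)
E1 = (pi**2 + pj**2) / (2.0 * m) + ei + ej
```

Friction damps the relative velocity while the noise re-injects energy; whatever the realization of the noise, the invariants are preserved to machine precision.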
Chemical reactions {#sec:chemistry} ================== We present the chemical mechanism included in SDPD, inspired by the reactive DPDE introduced in [@maillet_2007; @maillet_2011]. We first present the modeling of the chemical kinetics in Section \[sec:kinetics\]. The introduction of chemical reactions means that the material should be able to change its properties as it reacts. This is achieved by means of a reactive equation of state, presented in Section \[sec:reac-eos\], that switches between the reactants and the products as the reaction occurs. Finally, we handle the exothermicity of the reaction in Section \[sec:exothermicity\]. Kinetics of the chemical reaction {#sec:kinetics} --------------------------------- In the spirit of [@maillet_2007; @maillet_2011], where chemical reactions were included in DPDE, we model the progress of chemical reactions by adding a progress variable $\lambda_i\in [0,1]$ to each mesoparticle. Considering a model chemical reaction $$A \rightleftharpoons B,$$ we associate $\lambda=0$ with the reactant $A$ and $\lambda=1$ with the product $B$. The progress variable can be seen as the portion of the mesoparticle that has reacted. This statistical point of view gains a clearer meaning as the size of the mesoparticle increases. The evolution of the progress variable is governed by a kinetics that can be freely chosen to model the chemical reaction. In this work, we adopt second-order kinetics, in which mesoparticles interact with neighbouring particles. The progress variable thus evolves as $$\label{eq:kinetics} \begin{aligned} \frac{{{\rm d}}\lambda_i}{{{\rm d}}t} =& \sum_{j\neq i} \mathcal{K}_{0\to1}\left(T_{ij}\right)(1-\lambda_i)(1-\lambda_j)W(r_{ij})\\ &- \mathcal{K}_{1\to0}\left(T_{ij}\right)\lambda_i\lambda_jW(r_{ij}), \end{aligned}$$ where $\mathcal{K}_{0\to1}$ and $\mathcal{K}_{1\to0}$ are the reaction rates for the forward and backward reactions, respectively.
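The kinetics (\[eq:kinetics\]) can be sketched with an explicit Euler update of the progress variables for a handful of interacting mesoparticles. For an irreversible reaction ($\mathcal{K}_{1\to0}=0$) every $\lambda_i$ grows monotonically toward 1, and the $(1-\lambda_i)(1-\lambda_j)$ factors keep it in $[0,1]$. A toy sketch with made-up, temperature-independent rates and kernel weights:

```python
def euler_kinetics(lams, k_fwd, k_bwd, weights, dt, steps):
    """Explicit Euler integration of d(lambda_i)/dt from Eq. (eq:kinetics);
    weights[i][j] plays the role of W(r_ij)."""
    lams = list(lams)
    n = len(lams)
    for _ in range(steps):
        dlam = [
            sum(k_fwd * (1.0 - lams[i]) * (1.0 - lams[j]) * weights[i][j]
                - k_bwd * lams[i] * lams[j] * weights[i][j]
                for j in range(n) if j != i)
            for i in range(n)
        ]
        lams = [l + dt * d for l, d in zip(lams, dlam)]
    return lams

# three particles, uniform kernel weight, irreversible forward reaction
W0 = [[0.0 if i == j else 1.0 for j in range(3)] for i in range(3)]
lams = euler_kinetics([0.0, 0.1, 0.2], k_fwd=5.0, k_bwd=0.0,
                      weights=W0, dt=1.0e-3, steps=2000)
```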
The reaction rates depend on the mean temperature $T_{ij} = \frac12\left(T_i+T_j\right)$ according to an Arrhenius law: $$\label{eq:arrhenius} \mathcal{K}_{\mathcal{X}}(T_{ij}) = Z_{\mathcal{X}}\exp\left(-\frac{E_{\mathcal{X}}}{k_{\rm B}T_{ij}}\right),$$ with an activation energy $E_{\mathcal{X}}$, which represents the energy barrier a particle needs to overcome during the reaction, and a prefactor $Z_{\mathcal{X}}$ that governs the frequency of the reaction. The extension of this model to several chemical reactions would be straightforward. Reactive equation of state {#sec:reac-eos} -------------------------- Since the equations of state for the reactants and for the products are different, we need to define a mixed equation of state when $0 < \lambda < 1$. In the following, we denote all quantities related to the reactant by a superscript 0 and to the products by a superscript 1. The functions yielding temperature and pressure from the equation of state (\[eq:sdpd-eos\]) are thus denoted by $\mathcal{T}^0$ and $\mathcal{P}^0$ for the reactant. The internal energy of a mesoparticle, due to its extensivity, can be expressed as $$\label{eq:mixing-ei} \varepsilon_i = \varepsilon_i^0 + \varepsilon_i^1,$$ where $\varepsilon_i^0$ and $\varepsilon_i^1$ are the energies, respectively, of the reactant (a $1-\lambda_i$ portion of mesoparticle $i$) and the products (a $\lambda_i$ portion of the mesoparticle). The density, being an intensive variable, is given as a weighted average of the density of the reactant $\rho_i^0$ and of the products $\rho_i^1$: $$\label{eq:mixing-rho} \rho_i = (1-\lambda_i)\rho_i^0 + \lambda_i\rho_i^1.$$ Note that we only have access to the internal energy $\varepsilon_i$ and density $\rho_i$ (through Equation (\[eq:sdpd-rho-v\])) of the whole mesoparticle, along with its progress variable $\lambda_i$.
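Recovering the per-component states from $(\varepsilon_i, \rho_i, \lambda_i)$ requires imposing thermal and mechanical equilibrium between the two components, as detailed next. To make that inversion concrete, one can pick deliberately simple ideal-gas-like equations of state, $\mathcal{T}^k(\varepsilon,\rho) = \varepsilon/C_v^k$ and $\mathcal{P}^k(\varepsilon,\rho) = \Gamma^k\rho\,\varepsilon$ (toy choices, not those of the paper), and solve the equal-temperature and equal-pressure conditions with a hand-rolled two-dimensional Newton iteration:

```python
def solve_mixing(eps, rho, lam, Cv0=2.0, Cv1=3.0, G0=0.4, G1=0.25,
                 tol=1e-12, max_iter=50):
    """Newton iteration for (eps0, rho0) such that the two components of a
    mesoparticle share the same temperature and pressure.
    Toy EOS: T^k = eps^k / Cv^k, P^k = G^k * rho^k * eps^k."""
    def residual(x):
        e0, r0 = x
        e1 = eps - e0                        # energy is extensive
        r1 = (rho - (1.0 - lam) * r0) / lam  # density is a weighted average
        return [e0 / Cv0 - e1 / Cv1, G0 * r0 * e0 - G1 * r1 * e1]

    x = [0.5 * eps, rho]  # initial guess
    for _ in range(max_iter):
        f = residual(x)
        if abs(f[0]) < tol and abs(f[1]) < tol:
            break
        # numerical Jacobian by central differences
        J = [[0.0, 0.0], [0.0, 0.0]]
        for k in range(2):
            h = 1e-7 * (abs(x[k]) + 1.0)
            xp, xm = list(x), list(x)
            xp[k] += h
            xm[k] -= h
            fp, fm = residual(xp), residual(xm)
            for r in range(2):
                J[r][k] = (fp[r] - fm[r]) / (2.0 * h)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        x = [x[0] - (f[0] * J[1][1] - f[1] * J[0][1]) / det,
             x[1] - (J[0][0] * f[1] - J[1][0] * f[0]) / det]
    return x

e0, r0 = solve_mixing(eps=10.0, rho=1.2, lam=0.4)
```

For these toy parameters the equal-temperature condition is linear and fixes the energy split, after which the pressure condition fixes the reactant density; the iteration converges in a few steps.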
In order to obtain the temperature $T_i$, pressure $P_i$ and heat capacity $C_{v,i}$ of each mesoparticle, we need to determine the state of each chemical species ($\varepsilon_i^0$, $\rho_i^0$ and $\varepsilon_i^1$, $\rho_i^1$) thanks to a mixing law. If $\lambda_i=0$ or $\lambda_i=1$, the mesoparticle is actually composed purely of either $A$ or $B$. Hence we may use the equation of state for the pure chemical species. In the other cases ($0<\lambda_i<1$), we consider the two components to be at thermal and mechanical equilibrium inside a mesoparticle, which means that $$\begin{aligned} \mathcal{T}^0(\varepsilon_i^0,\rho_i^0) &= \mathcal{T}^1(\varepsilon_i^1,\rho_i^1), \\ \mathcal{P}^0(\varepsilon_i^0,\rho_i^0) &= \mathcal{P}^1(\varepsilon_i^1,\rho_i^1). \end{aligned}$$ Using relations (\[eq:mixing-ei\]) and (\[eq:mixing-rho\]) to express $\rho_i^1$ and $\varepsilon_i^1$ as a function of the global state $\varepsilon_i$, $\rho_i$ and of the state of the other component $\varepsilon_i^0$, $\rho_i^0$, this amounts to $$\label{eq:mixing-isotp} \begin{aligned} \mathcal{T}^0(\varepsilon_i^0,\rho_i^0) - \mathcal{T}^1\left(\varepsilon_i-\varepsilon_i^0,\frac{\rho_i-(1-\lambda_i)\rho_i^0}{\lambda_i}\right) &= 0, \\ \mathcal{P}^0(\varepsilon_i^0,\rho_i^0) - \mathcal{P}^1\left(\varepsilon_i-\varepsilon_i^0,\frac{\rho_i-(1-\lambda_i)\rho_i^0}{\lambda_i}\right) &= 0. \end{aligned}$$ The computation of the energy $\varepsilon_i^0$ and density $\rho_i^0$ generally requires resorting to a numerical inversion, such as Newton's method, so that Equation (\[eq:mixing-isotp\]) holds. This finally yields the temperature $T_i=\mathcal{T}^0(\varepsilon_i^0,\rho_i^0)$ and pressure $P_i=\mathcal{P}^0(\varepsilon_i^0,\rho_i^0)$ that are used in Equation (\[eq:kinetics\]) for the chemical reactions and in the usual equations of motion of SDPD (\[eq:sdpd-energy\]). Exothermicity {#sec:exothermicity} ------------- Chemical reactions are called exothermic if they release chemical energy (as heat) as they occur.
It is naturally important to take such effects into account: an exchange between the chemical energy and the other degrees of freedom occurs as the reaction progresses. The exothermicity, which is the energy liberated by the reaction of a single molecule, is given as $$E_{\rm exo} = E_{1\to0}- E_{0\to1},$$ and the total energy in our reactive system now reads $$E({\boldsymbol{q}},{\boldsymbol{p}},\varepsilon,\lambda) = \sum_{i=1}^N \left(\varepsilon_i + \frac{{\boldsymbol{p}}_i^2}{2m} + (1-\lambda_i)KE_{\rm exo}\right).$$ Note that the chemical energy scales with the particle size $K$. Since we require that $E$ be exactly preserved as the reaction progresses, the exothermicity is progressively transferred to the internal energy, inducing an evolution of the internal energy given by $${{\rm d}}\varepsilon_i = KE_{\rm exo}\,{{\rm d}}\lambda_i.$$ It would also be possible to release this energy into the kinetic energy, at the cost of the conservation of the total momentum. In practice, the exchange of energy between the internal and external degrees of freedom quickly leads to an equilibration between the kinetic and internal energies. This reactive mechanism is coupled with the equations of motion of SDPD (\[eq:sdpd-energy\]). In order to integrate the reactive SDPD model, we use the SSA scheme described in [@faure_2016]. An additional step is included in the integration scheme, where the progress variables are updated with an explicit Euler scheme. The internal energies are finally evolved by taking into account the exothermicity with $$\varepsilon_i^{n+1} = \varepsilon_i^n + (\lambda_i^{n+1}-\lambda_i^n)KE_{\rm exo}.$$ This ensures that the total energy is preserved when integrating the reactive part of the dynamics. Application to nitromethane {#sec:results} =========================== We assess the validity of the reactive SDPD model by simulating the propagation of a detonation wave in nitromethane.
We model the decomposition of nitromethane by a single irreversible exothermic reaction: $${\rm NiMe} \to {\rm products}.$$ Compared to the more generic framework of Section \[sec:kinetics\] for reversible reactions, the irreversibility of the reaction is achieved by taking $Z_{1\to0}=0$. The reaction rate follows the Arrhenius law specified in Equation (\[eq:arrhenius\]). The activation energy is $E_a = \num{3e-19}$ J and the exothermicity $E_{\rm exo} = \num{4.78e-19}$ J/molecule, as in [@maillet_2011]. The influence of the prefactor $Z$ is investigated in Section \[sec:prefactor\]. Inert nitromethane is represented by an equation of state obtained from Monte Carlo molecular simulations [@desbiens_2008] with a force field optimized to reproduce the properties of nitromethane under shock [@desbiens_2007]. The analytic form of the equation of state is given by $$\mathcal{S}_{\rm NiMe}(\varepsilon,\rho) = C_{V}\log\left[\frac{\varepsilon-\mathcal{E}_{\rm ref}(\rho)}{C_{V}} + \theta(\rho)\right] + C_{V}\Gamma_0\frac{\rho_0}{\rho}, \label{eq:eos-hz}$$ with $$\theta(\rho) = (T_0 - T_{00})\exp\left[\Gamma_0\left(1-\frac{\rho_0}{\rho}\right)\right],$$ and $$\mathcal{E}_{\rm ref}(\rho) = \frac12\frac{c_0^2x^2}{1-sx} \times \left\{ \begin{array}{cl} \displaystyle 1+\frac{sx}3-s\left(\Gamma_0-s\right)\frac{x^2}6 & \displaystyle \text{ if } x \geq 0,\\ \displaystyle 1 & \displaystyle \text{ if } x < 0, \end{array} \right.$$ where $\displaystyle x=1-\frac{\rho_0}{\rho}$, and $T_0$ and $T_{00}$ are two constants, defined as the standard temperature $T_0 = 298.13$ K and the temperature $T_{00}$ on the reference curve $\mathcal{E}_{\rm ref}$. The latter is determined as $T_{00} = \frac{E_0}{C_V}$, where $E_0$ is the energy in standard conditions (density $\rho_0$ and pressure $P_0 = 10^5$ Pa). The parameters of Equation (\[eq:eos-hz\]) are summarized in Table \[tab:eos-hz\].
| Parameter  | Value      | Unit                  |
|------------|------------|-----------------------|
| $\Gamma_0$ | $1$        | -                     |
| $\rho_0$   | $1140$     | kg.m$^{-3}$           |
| $c_0$      | $1358.47$  | m.s$^{-1}$            |
| $C_{V}$    | $1211$     | J.K$^{-1}$.kg$^{-1}$  |
| $s$        | $2.000184$ | -                     |

: Parameters of the equation of state  for nitromethane.[]{data-label="tab:eos-hz"}

The products of the reaction are modeled by a Jones-Wilkins-Lee (JWL) equation of state, introduced in [@book_lee_1968]. It reads $$\mathcal{S}_{\rm JWL}(\varepsilon,\rho) = C_{V}\log\left[\frac{\varepsilon-\mathcal{E}_k(\rho)}{C_{V}}\right] - C_{V}\Gamma_0\log(\rho), \label{eq:eos-jwl}$$ with $$\begin{aligned} \mathcal{E}_k(\rho) = &\frac{a}{\rho_0R_1}\exp\left[-R_1\frac{\rho_0}{\rho}\right] + \frac{b}{\rho_0R_2}\exp\left[-R_2\frac{\rho_0}{\rho}\right]\\ &+ \frac{\mathfrak{K}}{\rho_0\Gamma_0}\left(\frac{\rho_0}{\rho}\right)^{-\Gamma_0} + C_{\rm ek}, \end{aligned}$$ using the parameters from [@book_dobratz_1985], which are gathered in Table \[tab:eos-jwl\].

| Parameter    | Value            | Unit                  |
|--------------|------------------|-----------------------|
| $\Gamma_0$   | $0.3$            | -                     |
| $\rho_0$     | $1128$           | kg.m$^{-3}$           |
| $E_0$        | $0$              | J                     |
| $D_{\rm CJ}$ | $6280$           | m.s$^{-1}$            |
| $P_{\rm CJ}$ | $\num{1.25e10}$  | Pa                    |
| $T_{\rm CJ}$ | $3000$           | K                     |
| $C_{V}$      | $2764.23$        | J.K$^{-1}$.kg$^{-1}$  |
| $a$          | $\num{2.092e11}$ | Pa                    |
| $b$          | $\num{5.689e9}$  | Pa                    |
| $R_1$        | $4.4$            | -                     |
| $R_2$        | $1.2$            | -                     |

: Parameters of the JWL equation of state  for reacted nitromethane (products).[]{data-label="tab:eos-jwl"}

In order to define the constants $\mathfrak{K}$ and $C_{\rm ek}$, we first introduce $$\left\{ \begin{aligned} \rho_{\rm CJ} &= \rho_0 \frac{\rho_0D_{\rm CJ}^2}{\rho_0D_{\rm CJ}^2-P_{\rm CJ}},\\ E_{\rm CJ} &= E_0 + \frac12P_{\rm CJ}\left(\frac1{\rho_0}-\frac1{\rho_{\rm CJ}}\right),\\ P_{\rm k1CJ} &= a\exp\left(-R_1\frac{\rho_0}{\rho_{\rm CJ}}\right) + b\exp\left(-R_2\frac{\rho_0}{\rho_{\rm CJ}}\right).
\end{aligned} \right.$$ Then, $$\mathfrak{K} = \left(P_{\rm CJ} - P_{\rm k1CJ} - \frac1mC_v\Gamma_0T_{\rm CJ}\rho_{\rm CJ}\right)\left(\frac{\rho_0}{\rho_{\rm CJ}}\right)^{\Gamma_0+1},$$ and $$\begin{aligned} C_{\rm ek} =& E_{\rm CJ} -\frac{a}{\rho_0R_1}\exp\left(-R_1\frac{\rho_0}{\rho_{\rm CJ}}\right)\\ &-\frac{b}{\rho_0R_2}\exp\left(-R_2\frac{\rho_0}{\rho_{\rm CJ}}\right) -\frac{P_{\rm CJ}-P_{\rm k1CJ}}{\rho_{\rm CJ}\Gamma_0}. \end{aligned}$$ In order to observe a detonation wave in our simulations, we first create a shock wave in the neat nitromethane, which transforms into a detonation wave provided the shock velocity is high enough. The time step $\Delta t$ is chosen such that the particles behind the initial shock wave do not move by more than $10\%$ of the characteristic inter-particle distance during one step. We first study in Section \[sec:shock-to-deto\] the transition from a shock wave to a detonation wave before turning to the analysis of the stationary behavior of the detonation wave in Section \[sec:stationary\]. We conclude by studying the influence of the Arrhenius prefactor in Section \[sec:prefactor\]. Shock to detonation transition {#sec:shock-to-deto} ------------------------------ The system we consider is formed of $N=86400$ particles initially distributed on a $12\times12\times594$ grid at the nitromethane equilibrium density $\rho = 1104$ kg.m$^{-3}$. The initial velocities and internal energies are chosen so that the initial temperature in the system is $300$ K. Periodic boundary conditions are used in the $x$- and $y$-directions. In the $z$-direction, two walls, formed of “virtual” SDPD particles as described in [@bian_2012; @faure_2016], are located at each end of the system. These virtual particles interact with the real SDPD particles through the conservative forces (\[eq:cons-forces\]) and a repulsive Lennard-Jones potential that ensures the impermeability of the walls.
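The constants $\mathfrak{K}$ and $C_{\rm ek}$ defined above follow directly from the JWL parameters of Table \[tab:eos-jwl\]. The sketch below evaluates them in Python; note that $C_V$ is taken per unit mass here, which absorbs the $1/m$ factor of the expression for $\mathfrak{K}$ (an assumption about the units intended in that formula).

```python
import numpy as np

# JWL parameters for the reaction products (Table tab:eos-jwl)
GAMMA0, RHO0, E0 = 0.3, 1128.0, 0.0
D_CJ, P_CJ, T_CJ = 6280.0, 1.25e10, 3000.0
CV = 2764.23                      # J/K/kg (taken per unit mass)
A, B, R1, R2 = 2.092e11, 5.689e9, 4.4, 1.2

# Chapman-Jouguet state behind the detonation front
rho_cj = RHO0 * RHO0 * D_CJ**2 / (RHO0 * D_CJ**2 - P_CJ)
e_cj = E0 + 0.5 * P_CJ * (1.0 / RHO0 - 1.0 / rho_cj)
p_k1cj = A * np.exp(-R1 * RHO0 / rho_cj) + B * np.exp(-R2 * RHO0 / rho_cj)

# Constants of the JWL reference energy E_k(rho)
k_frak = (P_CJ - p_k1cj - CV * GAMMA0 * T_CJ * rho_cj) \
         * (RHO0 / rho_cj)**(GAMMA0 + 1)
c_ek = (e_cj
        - A / (RHO0 * R1) * np.exp(-R1 * RHO0 / rho_cj)
        - B / (RHO0 * R2) * np.exp(-R2 * RHO0 / rho_cj)
        - (P_CJ - p_k1cj) / (rho_cj * GAMMA0))
```

As a sanity check, the CJ density is larger than the initial density (the CJ state is compressed) and the CJ energy is positive.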
After the system equilibration during $\tau_{\rm therm} = 100$ ps, the lower wall is given a constant velocity $v_{\rm P} = 1764$ m.s$^{-1}$ in the $z$-direction. We choose the viscosity parameter $\eta = \num{2e-3}$ Pa.s so that the shock profile is smooth and no spurious oscillations are observed after the shock front. We first use a prefactor $Z=10^{15}$ s$^{-1}$ that is large enough to observe the shock-to-detonation transition in the spatio-temporal window of the simulation. We carry out several simulations for different particle sizes $K$. Since we keep the dimensions of the system constant in reduced units, the overall size in physical units increases with the particle size. Due to the different time and length scales, the time step also depends on $K$: for instance, we take $\Delta t = \num{1.3e-13}$ s for $K=10$ and $\Delta t = \num{6.0e-13}$ s for $K=1000$. At each step, the spatial domain is split into $n_{\rm sl} = 450$ slices along the $z$-direction, over which the thermodynamic variables are averaged in order to estimate instantaneous profiles. Their evolution in time is plotted in time-space diagrams (see Figure \[fig:color-deto\]). Two distinct domains can be distinguished. First, a shock wave is formed by the piston movement. While it propagates in the material, the high temperature leads to the ignition of chemical reactions in the shocked nitromethane. These reactions create compressive waves in the shocked material, leading to the appearance of new ignition points farther forward in the shocked nitromethane. Finally, as the new ignition points get closer to the shock front, they catch up with it and begin to drive the shock at a larger velocity, hence forming a detonation wave which propagates at a constant velocity $v_{\rm D} = 6646$ m.s$^{-1}$. This is in good agreement with the theoretical hydrodynamic prediction, which reads $D_{\rm CJ} = 6620$ m.s$^{-1}$.
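The slice averaging used to build the instantaneous profiles can be sketched as follows (function and argument names are ours):

```python
import numpy as np

def slice_profile(z, values, z_min, z_max, n_sl=450):
    """Average a per-particle quantity over n_sl slices along z,
    giving an instantaneous 1D profile (NaN for empty slices)."""
    edges = np.linspace(z_min, z_max, n_sl + 1)
    idx = np.clip(np.digitize(z, edges) - 1, 0, n_sl - 1)
    sums = np.bincount(idx, weights=values, minlength=n_sl)
    counts = np.bincount(idx, minlength=n_sl)
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

Stacking such profiles for successive snapshots yields the time-space diagrams of Figure \[fig:color-deto\].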
This shock-to-detonation transition mechanism is very similar to previous computations carried out at a more microscopic scale with reactive DPDE [@maillet_2011], where the same discontinuous process, with successive ignition points in the shocked material, was observed. However, this differs from hydrodynamic simulations, where a reactive wave, first ignited close to the wall, catches up to the shock front, forming a so-called super-detonation [@menikoff_2011]. The origin of this discrepancy is still unclear. It may be an effect of the large prefactor used in [@maillet_2011] and in our simulations, which accelerates the chemical kinetics and changes its time scale compared to the hydrodynamic time scale. We compare in Figure \[fig:color-deto-k\] the space-time diagrams for particle sizes ranging from $K=10$ to $K=1000$. ![Space-time diagram of the temperature for several particle sizes: from $K=10$ to $K=1000$. The size of the space-time domain explored for each resolution is displayed in the diagram of the coarser simulations with a white ($K=10$) or black ($K=100$) frame.[]{data-label="fig:color-deto-k"}](deto_temperature_k) We observe the same mechanism, with ignition points catching up with the shock front, at every resolution. Moreover, it seems that the shock-to-detonation transition occurs on the same physical time and length scales. A more quantitative measurement of the invariance of the transition is to track the position of the shock front during its propagation (see Figure \[fig:shockfront-k\]). ![Position of the shock front with respect to time for different particle sizes. The linear interpolation during the detonation phase is plotted in dashed lines.[]{data-label="fig:shockfront-k"}](deto_shockfront) Inside the two domains (the non-reactive shock wave and the detonation wave), the shock front propagates with a constant velocity: $u_{\rm S}$ for the shock wave and $u_{\rm D}$ for the detonation wave.
The time to detonation is evaluated by extrapolating the linear evolution of the shock front in the detonation phase down to $t=0$. The intercept of these linear interpolations with the time axis yields very close values for all particle sizes with a maximum $10\%$ difference between $K=10$ ($t_{\rm D}(z=0)=0.0157$ ns) and $K=10000$ ($t_{\rm D}(z=0)=0.0140$ ns). Steady detonation wave {#sec:stationary} ---------------------- We now slightly modify our setup to study steady detonation waves. The system is still formed of $N=86400$ nitromethane particles on a $12\times12\times594$ grid at $\rho=1104$ kg.m$^{-3}$ and $T=300$ K. A wall made of virtual SDPD particles [@bian_2012; @faure_2016] is placed at one end of the system. At the other end, we insert a $50$ nm layer of nitromethane particles initialized at $\rho=1869$ kg.m$^{-3}$ and $T=2330$ K, which corresponds to the thermodynamic state obtained on the unreacted Hugoniot with a shock at $v_{\rm P} = 2500$ m.s$^{-1}$. We observe a fast transition to a detonation wave that is followed by a rarefaction wave. We check that we have reached the stationary regime with a reactive wave propagating at a constant velocity and a self-similar rarefaction wave. Figure \[fig:deto-insta\] shows instantaneous profiles in the reference frame of the shock front for $K=100$. \ The reactive zone, where the progress variable evolves from $0$ to $1$, is delimited by the pressure peak, just behind the shock front, and the CJ point determined in the pressure volume diagram (see Figure \[fig:deto-pv\]) as the tangent point between the Rayleigh line and the Crussard curve. The rarefaction wave begins after the CJ point (located at position $z_{\rm CJ}$). Upon rescaling the positions $z$ as $\displaystyle \frac{z-z_{\rm CJ}}{t}$ for each time $t$, the profiles coincide for $z<z_{\rm CJ}$, highlighting the self-similarity of the rarefaction wave. 
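The extrapolation used above to estimate the time to detonation can be sketched as a small helper (hypothetical names; the trajectory is assumed to be already split into a shock phase and a detonation phase):

```python
import numpy as np

def time_to_detonation(t, z_front, t_deto_start):
    """Linear fit of the shock-front trajectory in the detonation phase,
    extrapolated back to z = 0; returns (t_D(z=0), u_D)."""
    mask = t >= t_deto_start
    u_d, z0 = np.polyfit(t[mask], z_front[mask], 1)  # z = u_D * t + z0
    return -z0 / u_d, u_d
```

The slope of the fit gives the detonation velocity $u_{\rm D}$, and the intercept with the time axis gives the time to detonation $t_{\rm D}(z=0)$.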
In order to compare our results with theoretical predictions, we plot the instantaneous values of slice-averaged thermodynamic quantities in a pressure-volume diagram (see Figure \[fig:deto-pv\]) at time $t=130$ ps for $K=100$. ![Hugoniot, Crussard and isentrope curves of nitromethane computed from the equations of state (solid lines) and the thermodynamic states observed in the SDPD simulations with $K=100$ (points). The Rayleigh line is also shown.[]{data-label="fig:deto-pv"}](pv-deto) Up to the thermal fluctuations present in SDPD, the thermodynamic states observed in the rarefaction wave agree very well with the isentrope computed from the equation of state (\[eq:eos-jwl\]). We summarize in Table \[tab:deto-velocity\] the detonation velocities for different particle sizes and compare them to the theoretical prediction obtained from the Rayleigh line in a simplified model that, in particular, does not account for viscosity effects.

| Size $K$ | Detonation velocity (m.s$^{-1}$) |
|----------|----------------------------------|
| Rayleigh | 6620                             |
| 10       | 6709                             |
| 100      | 6591                             |
| 1000     | 6549                             |

: Detonation velocity for different particle sizes compared to the theoretical hydrodynamic prediction.[]{data-label="tab:deto-velocity"}

The detonation velocities in SDPD are very close to the theoretical value. Similarly to previous observations for shock waves [@faure_2016], the detonation velocity seems to decrease with the particle size. Influence of the Arrhenius prefactor {#sec:prefactor} ------------------------------------ All these results have been obtained with an arbitrarily chosen prefactor in the Arrhenius law , namely $Z=10^{15}$ s$^{-1}$. We study in the following the influence of the prefactor on the properties of the detonation wave. We first turn our attention to the shock-to-detonation transition and perform the simulations reported in Section \[sec:shock-to-deto\] for $K=100$ and prefactors $Z=\num{5e14}$ s$^{-1}$, $Z=10^{15}$ s$^{-1}$ and $Z=\num{2e15}$ s$^{-1}$.
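The theoretical value comes from the Rayleigh line construction in the pressure-volume diagram. In standard detonation theory, the Rayleigh line through the initial state $(V_0, P_0)$ for a wave of speed $D$ reads $P(V) = P_0 + (\rho_0 D)^2 (V_0 - V)$, with $V$ the specific volume; a sketch, using the initial state of Section \[sec:shock-to-deto\]:

```python
# Initial state of the neat nitromethane and CJ velocity (assumed values
# taken from the simulation setup and the hydrodynamic prediction)
RHO0, P0, D = 1104.0, 1.0e5, 6620.0   # kg/m^3, Pa, m/s

def rayleigh_pressure(v):
    """Rayleigh line P(V) = P0 + (rho0*D)**2 * (V0 - V), V the specific volume."""
    v0 = 1.0 / RHO0
    return P0 + (RHO0 * D)**2 * (v0 - v)
```

The CJ point of Figure \[fig:deto-pv\] is the tangent point between this line and the Crussard curve; along the line, the pressure rises as the specific volume decreases.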
All the dimensions along the $z$-axis are scaled by a factor $\displaystyle \mathfrak{z} = \frac{Z}{10^{15}}$. In all these settings the same mechanism is observed and we check whether a simple scaling law can predict the time to transition. In Figure \[fig:shockfront-z\], we plot the position of shock front with respect to time for several prefactors. The positions and times are rescaled by the factor $\mathfrak{z}$. ![Position of the shock front with respect to time for different prefactors. []{data-label="fig:shockfront-z"}](deto_sf_prefactor) Since we have taken $Z=10^{15}$ s$^{-1}$ as the reference, the scaling factor is $\mathfrak{z}=1$ for the prefactor $Z=10^{15}$ s$^{-1}$ and its profile remains unchanged in Figure \[fig:shockfront-z\] compared to Figure \[fig:shockfront-k\]. It appears that, upon a simple rescaling, the trajectories of the shock front match reasonably well for the prefactors considered here, although the smallest prefactor ($Z=\num{5e14}$ s$^{-1}$) somewhat deviates from the others. We also study the influence of the Arrhenius prefactor on the stationary properties of the detonation wave and perform the simulations described in Section \[sec:stationary\] for $K=100$ and prefactors $Z=\num{5e14}$ s$^{-1}$, $Z=10^{15}$ s$^{-1}$ and $Z=\num{2e15}$ s$^{-1}$. As for the STD transition, all the dimensions along the $z$-axis are scaled by the factor $\mathfrak{z}$. We compare in Figure \[fig:deto-profiles-z\] the pressure profile at time $\mathfrak{z} \times 130$ ps for several prefactors. The distances are also rescaled by the factor $\mathfrak{z}$. ![Pressure profiles for several prefactors at time $t=\frac{Z}{10^{15}}\times 130$ ps. The positions are rescaled with the same factor.[]{data-label="fig:deto-profiles-z"}](deto_stationary_prefactor) The profiles agree very well with each other after the rescaling. We notice however that as the prefactor increases higher pressures are observed at the shock front. 
This can result in oscillations in the relaxation zone, as clearly visible for $Z=\num{4e15}$ s$^{-1}$. In order to study the width of the reactive zone, we average the profiles of the progress variable over time. We determine the reactive zone to be the region where the progress variable $\lambda$ is significantly different from $0$ or $1$, that is $0.02\leq\lambda\leq0.98$. The widths for all tested prefactors, along with the detonation velocities, are gathered in Table \[tab:deto-k\].

| Prefactor (s$^{-1}$) | Width of the reactive zone (nm) | Detonation velocity (m.s$^{-1}$) |
|----------------------|---------------------------------|----------------------------------|
| $\num{5e14}$         | $61.9$                          | $6604$                           |
| $\num{8e14}$         | $39.9$                          | $6599$                           |
| $\num{1e15}$         | $31.5$                          | $6591$                           |
| $\num{2e15}$         | $15.6$                          | $6622$                           |
| $\num{4e15}$         | $14.2$                          | $7314$                           |

: Width of the reaction zone and detonation velocity for several Arrhenius prefactors.[]{data-label="tab:deto-k"}

As expected, the detonation velocity remains very similar and close to the theoretical prediction, which does not depend on the chemical kinetics. As for the reactive region, its width roughly scales as $\mathfrak{z}^{-1}$, further suggesting that the prefactor simply amounts to a rescaling of the domain. In Table \[tab:deto-k\], it is manifest that the previous conclusions do not hold for $Z=\num{4e15}$ s$^{-1}$, for which even the detonation velocity is off by more than $10\%$. Coupled with the observation in Figure \[fig:deto-profiles-z\], this suggests that a finer resolution is required to deal with fast chemical reactions. To confirm this, we run a simulation with the prefactor $Z=\num{4e15}$ s$^{-1}$ at a smaller particle size ($K=10$). The detonation velocity determined with this setting, namely $u_{\rm D} = 6777$ m.s$^{-1}$, is much closer to the theoretical prediction.
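The width measurement on the time-averaged progress-variable profile can be sketched as:

```python
import numpy as np

def reactive_zone_width(z, lam, lo=0.02, hi=0.98):
    """Width of the region where the (time-averaged) progress variable
    is significantly different from 0 or 1, i.e. lo <= lambda <= hi."""
    mask = (lam >= lo) & (lam <= hi)
    return float(z[mask].max() - z[mask].min()) if mask.any() else 0.0
```

Applied to a smooth reaction-front profile, this returns the extent of the zone where the reaction is neither negligible nor complete.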
While the change of prefactor in the regime explored by our simulations mainly amounts to rescaling the time and length scales, it appears that care must be taken to choose a sufficiently fine resolution. As the kinetics are accelerated with a larger prefactor, the reactive mechanism described in Section \[sec:chemistry\], in which the chemical reactions are averaged inside each mesoparticle, becomes unable to properly handle fast reactions for large particles. Except for this limitation, SDPD has proved able to simulate detonation waves with a much coarser resolution than MD or DPDE and still recover not only the stationary properties but also the STD transition mechanism observed in [@maillet_2007; @maillet_2011]. Conclusion {#sec:conclusion} ========== We have extended SDPD to handle reactive mechanisms. This enables the simulation of detonation waves with SDPD. The stationary properties, detonation velocity and thermodynamic states were successfully recovered for nitromethane. A mechanism with successive ignition points appearing in the shocked nitromethane was observed for the STD transition, similar to previous simulations with DPDE [@maillet_2011]. The resolution in SDPD has no major influence on these properties, since the physical time and length scales associated with this process remain unchanged. This allows us to effectively choose the resolution and deal with larger systems without affecting the physical properties of the detonation wave. We have tested the influence of the prefactor on the STD transition and the stationary behavior of the reactive wave. It seems that changing the prefactor only rescales the time and length scales in the simulation. However, this study has highlighted that the resolution should be chosen fine enough with respect to the speed of the chemical reactions. The multiscale consistency of reactive SDPD lets us envision studying more complex geometries by taking advantage of the larger systems SDPD can simulate.
Acknowledgments {#acknowledgments .unnumbered} =============== We thank Nicolas Desbiens for fruitful discussions on the equations of state and the detonation mechanism. We are also grateful to Laurent Soulard and Gabriel Stoltz for their useful comments on the manuscript.

References {#references .unnumbered}
==========

- <https://doi.org/10.1145/1362622.1362700>
- <https://doi.org/10.1098/rsta.2009.0218>
- <http://link.aps.org/doi/10.1103/PhysRevE.67.026705>
- <http://adsabs.harvard.edu/abs/1977AJ.....82.1013L>
- <https://academic.oup.com/mnras/article-lookup/doi/10.1093/mnras/181.3.375>
- <http://stacks.iop.org/0295-5075/40/i=2/a=141>
- <http://stacks.iop.org/0295-5075/40/i=6/a=631>
- <https://doi.org/10.1103/PhysRevE.94.043305>
- <http://scitation.aip.org/content/aip/journal/jcp/130/3/10.1063/1.3050100>
- <http://scitation.aip.org/content/aip/journal/jcp/130/2/10.1063/1.3058437>
- <http://scitation.aip.org/content/aip/journal/pof2/24/1/10.1063/1.3676244>
- <http://link.aps.org/doi/10.1103/PhysRevE.77.066703>
- <https://doi.org/10.1103/PhysRevLett.70.2174>
- <https://doi.org/10.1103/PhysRevE.53.611>
- <https://doi.org/10.1103/PhysRevLett.89.285501>
- <https://doi.org/10.1103/PhysRevE.76.026318>
- <https://doi.org/10.1103/PhysRevB.82.214108>
- <https://doi.org/10.1103/PhysRevE.90.033312>
- <http://stacks.iop.org/0295-5075/76/i=5/a=849>
- <http://stacks.iop.org/0295-5075/78/i=6/a=68001>
- <https://doi.org/10.1021/jz500756s>
- <http://stacks.iop.org/0295-5075/96/i=6/a=68007>
- <https://doi.org/10.1088/0965-0393/22/2/025027>
- <http://www.sciencedirect.com/science/article/pii/S0045793013003381>
- <http://www.sciencedirect.com/science/article/pii/S0377042702008695>
- <http://dx.doi.org/10.1137/S1064827501392879>
- <http://dx.doi.org/10.1016/j.jhazmat.2008.12.083>
- <https://doi.org/10.1080/08927020701589245>
- <https://doi.org/10.2172/4783904>
- <https://doi.org/10.1016/j.combustflame.2011.05.009>
--- abstract: 'We present an analysis of the structures and dynamics of the merging cluster Abell 1201, which has two sloshing cold fronts around a cooling core, and an offset gas core approximately 500kpc northwest of the center. New *Chandra* and *XMM-Newton* data reveal a region of enhanced brightness east of the offset core, with breaks in surface brightness along its boundary to the north and east. This is interpreted as a tail of gas stripped from the offset core. Gas in the offset core and the tail is distinguished from other gas at the same distance from the cluster center chiefly by having higher density, hence lower entropy. In addition, the offset core shows marginally lower temperature and metallicity than the surrounding area. The metallicity in the cool core is high and there is an abrupt drop in metallicity across the southern cold front. We interpret the observed properties of the system, including the placement of the cold fronts, the offset core, and its tail, in terms of a simple merger scenario. The offset core is the remnant of a merging subcluster, which first passed pericenter southeast of the center of the primary cluster and is now close to its second pericenter passage, moving at $\simeq 1000\rm\,km\,s^{-1}$. Sloshing excited by the merger gave rise to the two cold fronts, and the disposition of the cold fronts reveals that we view the merger from close to the plane of the orbit of the offset core.' author: - 'Cheng-Jiun Ma, Matt Owers, Paul E. J. Nulsen, Brian R. McNamara, Stephen S. Murray, Warrick J. Couch' title: 'Abell 1201: a Minor merger at second core passage' --- Introduction {#sec:intro} ============ In hierarchical structure formation models, galaxy clusters are formed by mergers of smaller systems, including other groups and clusters, taking place over the age of the universe [e.g. @springel06]. Mergers between galaxy clusters are among the most energetic events in the universe [@sarazin02].
Although the merger rate decreases with time, evidence of recent and ongoing mergers is still commonly found in clusters. Cluster mergers can induce pronounced observable features, particularly in the X-rays. High resolution X-ray observations with *Chandra* and *XMM-Newton* provide a unique means to study cluster mergers. Combined with galaxy redshifts from optical spectroscopy and mass distributions from lensing, they provide a valuable tool for studying merger dynamics and testing our understanding of cluster physics. A notable discovery from X-ray astrophysics in the past decade is the phenomenon of cold fronts in clusters [@markevitch00; @vikhlinin01a]. These are contact discontinuities where there is an abrupt change in the entropy of the gas [@markevitch07]. Simulations [e.g. @churazov03; @tittley05; @ascasibar06; @poole06; @poole08; @zuhone10; @roediger11] show that cold fronts can be induced in more than one way during cluster mergers. Cold fronts of the “remnant core type” occur at the interface between the core of an infalling subcluster and the warmer ICM of the primary cluster [e.g. 1E0657-56; @markevitch02]. Another type, called “sloshing” cold fronts, results from gas motions in a cluster core induced by the gravitational perturbation of an infalling subcluster [e.g. Abell 1795; @markevitch01]. Simulations [e.g. @ascasibar06; @zuhone10] show that the “sloshing" type of cold front appears $\sim 0.3$Gyr after pericentric passage of a subcluster. This cold front moves outward, and a second cold front can appear on the opposite side of the center $0.6$Gyr or longer after pericentric passage. While the cold fronts continue to move outward, additional cold fronts may appear on alternating sides of the center. Viewed from many directions, the fronts are connected in a spiral pattern. Although the time scales vary from cluster to cluster, the well-defined structure of alternating fronts and/or a spiral is a signature of sloshing induced by mergers.
These features have been shown to be useful for constraining the dynamics and history of mergers [e.g. @johnson10; @johnson11]. Abell 1201 (A1201) is a typical example of a cluster with sloshing cold fronts [@owers09 hereafter; Owers et al. 2011b]. It has a redshift of 0.168 [@struble99] and an X-ray luminosity of $L_{\rm X}(0.1-2.4\rm\,keV)=2.4\times10^{44}$ergs$^{-1}$ [@bohringer00; @ebeling98]. The redshifts of 321 member galaxies have been measured, giving a mean redshift of $z = 0.1673\pm0.0002$ and a velocity dispersion of $778\pm36$kms$^{-1}$. Using 21.5 ksec of ACIS-S data, a global temperature of $5.3\pm0.3$keV and an abundance of $0.34\pm0.10$ were measured for A1201. The spatial and redshift distributions of the galaxies in A1201 were also studied, identifying a remnant, offset core, located at the northwest end of a bright X-ray ridge that runs through the cluster center [see also @owers09b]. @edge03 analyzed strong lensing in A1201, using the image of an arc located over the BCG, and found that the ellipticity of the mass distribution in the cluster core exceeds that of the optical isophotes of the BCG. In this paper, we extend this study, using deeper *Chandra* and *XMM-Newton* data to discuss the substructure and dynamics of merging in A1201. The X-ray data reduction is discussed in Section \[sec:data\]. The substructures of A1201, including its cold fronts, offset core, and tail, are discussed in Section \[sec:sb\]. These results are interpreted in terms of a merger scenario presented in Section \[sec:discussion\]. Section \[sec:summary\] is a short summary. For this analysis, we adopt a $\Lambda$CDM cosmology with $h_0=0.7$, $\Omega_{\Lambda}=0.7$, and $\Omega_m = 0.3$. In this cosmology, $1\arcsec$ corresponds to 2.88kpc at a redshift of $z=0.1673$. Position angles (PA) are measured counterclockwise from the west.
Data Reduction {#sec:data} ============== A1201 was observed using the European Photon Imaging Camera (EPIC) on *XMM-Newton* in December 2007 (ObsID 0500760101), and the Advanced CCD Imaging Spectrometer (ACIS) on *Chandra* in April 2008 (ACIS-I; ObsID 9616) and in November 2003 (ACIS-S; ObsID 4216). The EPIC observations were performed in full-frame mode using the medium filter, for a total exposure time of 50ksec. The ACIS-I and ACIS-S observations were performed in VFAINT mode for total exposure times of 47ksec and 40ksec, respectively. X-ray Imaging {#sec:data\_image} ------------- We reprocessed the EPIC data using SAS v10.0.0 with standard settings. Light curves for the 10 – 12 keV band for each of the three cameras (PN, MOS1 and MOS2) were used to filter out periods when the count rate exceeded the mean by $3\sigma$. Images were produced for each camera, using the method of @carter07. Blank sky and filter-wheel closed (FWC) datasets produced by the EPIC Blank Sky team[^1] were used for this purpose. The particle background is first subtracted using FWC data, scaled by the counts in regions outside each detector field of view. The same procedure is applied to the blank sky datasets, leaving, in principle, only the cosmic X-ray background (CXB). Counts in annuli 10 – 12 arcmin from the center of each detector, assumed to contain no source photons, are then used to scale the blank sky backgrounds to the source exposures (for both, after subtraction of the hard particle background) in order to remove the CXB from the source data. Allowing for some difference in the CXB spectrum between the blank sky and source exposures, this procedure is carried out in the 4 energy bands delimited by 0.5, 2.125, 3.750, 5.375, and 7.0keV. The background subtracted images are then divided by vignetted exposure maps to obtain the image (left panel of Figure \[fig:imagexmm\]).
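The double background subtraction can be summarized in a small numerical sketch. The array names and scalar scalings below are illustrative simplifications of the per-band, per-camera procedure described above:

```python
import numpy as np

def double_subtract(src, fwc, blank, src_oofov, fwc_oofov, blank_oofov,
                    src_annulus, blank_annulus):
    """Two-step background subtraction:
    (1) particle background: FWC data scaled on out-of-FOV counts,
    (2) CXB: the particle-subtracted blank-sky image, scaled so that the
        net counts in a source-free annulus vanish."""
    src_net = src - fwc * (src_oofov / fwc_oofov)
    blank_net = blank - fwc * (blank_oofov / fwc_oofov)
    return src_net - blank_net * (src_annulus / blank_annulus)
```

With a synthetic image containing a flat particle level and a flat CXB level, the two scalings remove both components and leave only the source signal.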
The ACIS-I (ObsID 9616) and ACIS-S (ObsID 4216) data sets were reprocessed using the CIAO software package [version 4.3; @ciao]. Following standard practice, light curves from source-free regions were used to filter out periods when the count rate differed by more than $3\sigma$ from the mean. The ACIS-S observation is badly affected by a flare, which leaves only 21.5ksec of useful exposure. No significant flaring is seen in the ACIS-I observation and the cleaned exposure time is 43ksec. Background subtraction was done using the standard blank sky datasets. The exposure map was generated for emission from an optically thin plasma with a temperature of $kT = 5.3$keV, the global temperature of A1201 determined by . The combined, 0.5 – 7 keV, background subtracted, exposure corrected image from the two data sets, binned by a factor of 4, is displayed in the right panel of Figure \[fig:imagexmm\]. An inset image in Figure \[fig:imagexmm\] shows the regions discussed in later sections of this paper. The two cold fronts discussed by and @owers09b are marked by red arcs. The offset core to the northwest of the cluster center is outlined by a magenta ellipse. The outer edge of the tail to the east of the offset core is marked in green. The ridge connecting the cool core and the offset core is marked by two cyan lines. X-ray Spectral Analysis {#sec:data\_spec} ----------------------- All X-ray spectra were extracted in the energy range 0.5 to 7.0keV and grouped to ensure a minimum of 20 counts per bin. PN data were not used in the spectral analysis, since the soft background cannot be convincingly removed. Spectra and response files for the ACIS-I and ACIS-S data were assembled using the CIAO script “specextract.” Regions containing X-ray point sources were identified using the CIAO “cell\_detect” task with a $3\sigma$ threshold and excluded from all extracted spectra.
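The $3\sigma$ light-curve filtering applied to both the EPIC and ACIS data amounts to iterative sigma clipping of the binned count rate; a sketch (function name is ours):

```python
import numpy as np

def good_time_mask(rates, n_sigma=3.0, max_iter=10):
    """Boolean mask of good time bins: iteratively reject bins whose
    count rate differs from the clipped mean by more than n_sigma."""
    good = np.ones(rates.size, dtype=bool)
    for _ in range(max_iter):
        mu, sd = rates[good].mean(), rates[good].std()
        new_good = np.abs(rates - mu) <= n_sigma * sd
        if np.array_equal(new_good, good):
            break
        good = new_good
    return good
```

Bins dominated by a background flare sit far above the quiescent mean and are rejected in the first iteration, after which the clipped mean and dispersion settle onto the quiescent rate.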
The ACIS-S observation was performed at the standard focal plane temperature of $-120\degr$C, but the focal plane was slightly warmer ($-118.7\degr$C) for the ACIS-I observation. This is expected to degrade the calibration accuracy. However, our spectral fits reveal no significant discrepancy and we assume that any calibration errors are insignificant. Background spectra were created using the standard blank sky data described in @markevitch98. For the MOS data, spectra were extracted using the SAS “evselect” task. The point sources identified in the ACIS-I data were excluded, but using larger apertures of $9\arcsec$ radius. Background spectra were created using a double background subtraction approach similar to that used to make the images. Briefly, the background spectrum is assumed to be the sum of two components: a hard particle component, which is determined by scaling FWC data, and a cosmic background component. The cosmic X-ray background is determined by scaling the residual blank sky spectrum, after subtraction of the particle background, to make the net spectrum in the source exposure zero in the annulus between $10\arcmin$ and $11\arcmin$ centered on the cluster.

| Data     | $kT$ (keV)     | Abundance       | Fit stat./dof | Net counts |
|----------|----------------|-----------------|---------------|------------|
| ACIS-I   | $5.56\pm 0.2$  | $0.31\pm 0.1$   | $205/266$     | $17762$    |
| ACIS-S   | $5.29\pm 0.3$  | $0.30\pm 0.1$   | $194/225$     | $16499$    |
| ACIS     | $5.43\pm 0.2$  | $0.32\pm 0.07$  | $404/493$     |            |
| MOS      | $5.16\pm 0.2$  | $0.28\pm 0.05$  | $532/537$     | $38353$    |
| ACIS+MOS | $5.31\pm 0.15$ | $0.29\pm 0.039$ | $853/958$     |            |
| Owers+08 | $5.3\pm 0.3$   | $0.34\pm 0.10$  |               |            |

: Global temperature and abundance fits.[]{data-label="table:global_temp"}

To determine a global temperature and metallicity, spectra were extracted from the elliptical region defined in Figure 4 of . These were fitted with an absorbed (WABS), optically thin, thermal plasma model [MEKAL; @mewe86], using Sherpa. The absorbing column density of neutral hydrogen was fixed at $N_{\rm H}=1.61\times10^{20}$cm$^{-2}$, allowing for foreground gas [@dickey90].
Spectra from the different instruments were fitted jointly using the same model parameters. The normalizations were not tied together, since their values are sensitive to assumptions about the spatial distribution of the X-ray emission and the relative calibration of the detectors. Table \[table:global\_temp\] lists the best fitting parameters for a few instrument combinations, together with the results of for comparison. Uncertainties are 90% confidence limits. The net photon counts for the four detectors, given in the last column of the table, are similar. Thus, the total net photon count in the spectra used here is roughly four times greater than used by . @nevalainen10 found that temperatures determined from EPIC (PN and MOS) data are typically 10% lower than temperatures for the same region determined from ACIS data when fitting in a broad band (0.75 – 7keV). This is roughly consistent with the temperature differences seen in our data (Table \[table:global\_temp\]). Temperature and Abundance Maps {#sec:data\_tempmap} ------------------------------ The broad band method of @markevitch00 was previously employed to make a temperature map from the ACIS-S data. An improved map, made by applying the methods described in @randall08 and @owers11a to the combined ACIS-I and ACIS-S data, is shown in the left panel of Figure \[fig:matt\_hires\_tmap\]. Briefly, the $0.5-7.0$keV, background subtracted images for the ACIS-I and ACIS-S data were coadded and binned by a factor of 4. At each pixel in the binned image, a spectrum was extracted from a circular region with the radius set such that there are 1000 background-subtracted counts. The radius of the circles ranges from $6.7\arcsec$ at the center of the cluster to $\sim 100\arcsec$ at the boundary of the map. The sizes of some extraction regions are marked with black circles in the left panel of Figure \[fig:matt\_hires\_tmap\]. Weighted response files were extracted from a more coarsely binned image with a binning factor of 32.
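The adaptive extraction radius, grown until the enclosed background-subtracted counts reach the target, can be sketched as (a brute-force illustration, not the production code):

```python
import numpy as np

def adaptive_radius(img, x0, y0, min_counts=1000, r_max=150):
    """Smallest circular radius (in pixels) around (x0, y0) whose
    enclosed background-subtracted counts reach min_counts."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r2 = (xx - x0)**2 + (yy - y0)**2
    for r in range(1, r_max + 1):
        if img[r2 <= r * r].sum() >= min_counts:
            return r
    return r_max
```

Because the required radius grows where the surface brightness falls, this naturally produces small extraction circles in the bright center and large ones near the map boundary.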
The spectra were grouped to obtain at least 20 counts per channel. The data were fitted with an absorbed MEKAL model with the abundance fixed at $\rm Z=0.32$, i.e. the global value for the combined ACIS-S and ACIS-I fit in Table \[table:global\_temp\]. A second temperature map was generated by fitting spectra binned into regions defined by a weighted Voronoi tessellation [WVT; @wvt] of the ACIS-I data. The target signal to noise ratio for the WVT binning was set to 25, so that each spatial bin contains about 625 counts. Using the same regions, the spectra from all four data sets (ACIS-I, ACIS-S, MOS1, and MOS2) were extracted. Since the net 0.5 – 7keV count is comparable for all four data sets, the total net count for each spectral region lies in the range $1800$ to $2800$, sufficient for robust temperature measurements. The signal to noise ratio for the WVT algorithm was chosen so that the smallest regions have a size comparable to the spatial resolution of XMM-Newton. Spectra were fitted using the absorbed MEKAL model as described above. Results of fitting spectra for the ACIS and MOS data separately and jointly were compared. Discrepancies between ACIS and MOS temperatures comparable to those discussed in Section \[sec:data\_spec\] are seen, but the values generally agree within 1$\sigma$ statistical uncertainties. The temperature map in the right panel of Figure \[fig:matt\_hires\_tmap\] shows results for the joint fit. We present both temperature maps in Figure \[fig:matt\_hires\_tmap\], since they have complementary properties. The map in the left panel retains more spatial information, since it does not require the data to be binned on an artificial grid. However, the results in adjacent pixels are correlated over a distance that is not readily discerned from the maps. By contrast, the WVT map gives independent temperatures and abundances for well-defined regions, at the expense of binning together regions that may have disparate properties. 
The two temperature maps are broadly in agreement with one another. The primary cluster core (region 2 of the bottom panels) is cool, with the cool gas extending southward. A stream of hotter gas crosses the bright ridge in the center of both maps. The subcluster core at the northern end of the ridge (region 1) is also marginally cooler than the surrounding regions. The method used to make the high resolution temperature map of Figure \[fig:matt\_hires\_tmap\] was used with coarser binning to obtain the temperature (left) and abundance (right) maps of Figure \[fig:matt\_tmap\]. In detail, the ACIS-I and ACIS-S data were binned by a factor of 8 and the spectrum at each pixel was extracted from a circular region containing at least 3000 background subtracted counts. The spectra were grouped to obtain at least one count per channel. The minimal grouping is adopted to avoid diluting the 6.7keV iron line. Temperatures and abundances were fitted to an absorbed MEKAL model using the Cash statistic. The abundance of the cool core is high. The inner and outer cold fronts are revealed in both the temperature and abundance maps, with the abundance on the cooler side being higher than on the hotter side of both fronts. The ridge (between the cyan lines of Figure \[fig:matt\_tmap\]), as well as the offset core (the magenta ellipse), shows a marginally low abundance. Deprojected Gas Properties {#sec:data\_densitypfl} -------------------------- An “onion peeling” deprojection method was used to obtain radial temperature and density profiles for the gas of A1201 [@david01; @nulsen05]. Assuming that the gas distribution is spherical and that the gas densities and temperatures are constant in spherical shells, the algorithm uses spectral parameters fitted to the outer layers to calculate the spectra of outer shells projected onto inner annuli. 
Spectra were extracted for 7 annuli, centered on the cluster center and excluding the bright ridge (a sector in the range $\rm PA= 50\degr$ – $75\degr$). The size of each annulus was adjusted to obtain about 3000 photons in each ACIS-I spectrum (0.5 – 7.0keV). The outer edge of the outermost annulus is about 700kpc from the cluster center. The fraction of the cluster emission in the outermost annulus arising from gas at larger radii is corrected using a beta model with $\beta=0.75$. The spectral model for each annulus is a sum of MEKAL models, all absorbed by the Galactic foreground column density. The parameters (temperature, abundance and norm) for all but one MEKAL component are determined by fits to the surrounding shells. Parameters for the remaining component are adjusted to fit the spectrum for the current annulus. Deprojection results are presented in Section \[sec:sb\_clump\]. Merger-related structures {#sec:sb} ========================= Cold Fronts {#sec:sb\_CF} ----------- The new data were used to measure the surface brightness profiles and temperature jumps for the two cold fronts to confirm their properties with deeper data, following the procedure of . Surface brightness profiles were constructed from the point-source-free ACIS 0.5 – 7keV event table, corrected for vignetting, quantum efficiency and exposure time. Background was determined from scaled blank sky data. For the northwest front, the profile was extracted in annuli centered on $(\rm R.A., DEC) = (168.2268\degr, 13.4344\degr)$, for PA in the range $7\degr$ – $122\degr$. Annuli for the southeastern front were centered at $(\rm R.A., DEC) = (168.2197\degr, 13.4509\degr)$ with PA from $240\degr$ – $267\degr$. 
The surface brightness profile is modeled, assuming that the electron density follows the broken power law $$n_{\rm e}(r)=\left\{ \begin{array}{ll} n_{\rm e,1}\left({{r}\over{R_{\rm f}}}\right)^{-\alpha_1}, & r<R_{\rm f},\\ n_{\rm e,2}\left({{r}\over{R_{\rm f}}}\right)^{-\alpha_2}, & r>R_{\rm f}, \end{array} \right. \label{eqn:density}$$ where $R_{\rm f}$ is the radius of the discontinuity and $n_{\rm e,1}$ and $n_{\rm e,2}$ are the inner and outer densities at $R_f$, respectively. Here, $r$ is the spherical radius. The small temperature dependence of the surface brightness is ignored. Surface brightness can then be determined by integrating $n_{\rm e}(r)^2$ along lines of sight. Model parameters were obtained by jointly fitting the ACIS-I and ACIS-S surface brightness profiles. The density jumps were found to be $n_{\rm e,1}/n_{\rm e,2} = 1.74^{+0.26}_{-0.30}$ for the northwest discontinuity and $2.2^{+0.27}_{-0.29}$ for the southeast discontinuity[^2]. Temperature jumps were determined by jointly fitting ACIS-I, ACIS-S and MOS spectra, using the method of . Spectra for one region on each side of a front were extracted in the sectors defined above. For the northwest front, the region inside the front covered the range of radius $7\arcsec$ – $17\arcsec$ and the region outside the front, $23.5\arcsec$ – $50\arcsec$. For the southeast front, the radial ranges are $108\arcsec$ – $175\arcsec$ (inside) and $190\arcsec$ – $280\arcsec$ (outside). The norm and temperature outside each front were fitted using an absorbed MEKAL model, with the metallicity fixed at the global value ($0.29$, see Table \[table:global\_temp\]). Using these parameters and the results from fitting the broken power law model for the gas density, the norm for the gas outside the front projected onto the region inside the front was then calculated. 
This gives a two-component model for the region inside the front, one component for the gas outside the front, with all parameters fixed, and a second component with free parameters for the gas inside the front. Fitting the free parameters to the spectrum for the region inside the front then determines the temperature of the gas there. The resulting temperatures are $3.15^{+0.42}_{-0.38}$keV, inside, and $6.21^{+1.5}_{-0.69}$keV, outside the northwest front. For the southeast front, they are $3.29^{+0.34}_{-0.26}$keV, inside, and $5.1^{+1.7}_{-0.9}$keV, outside. Combining the density jumps determined from the surface brightness profile with the temperature jumps determined here, we find that the pressure jumps at the fronts are $0.86^{+0.23}_{-0.22}$ for the northwest front and $1.41^{+0.50}_{-0.48}$ for the southeast front. These do not differ significantly from 1, consistent with the results of . Thus, we confirm their conclusion that both fronts are cold fronts rather than shocks. Offset Core {#sec:sb\_clump} ----------- Using a KMM [Kaye's mixture model; @ashman94] analysis, divided the 3-dimensional projected phase space of cluster members into a sum of Gaussians. Under this partitioning, found a compact substructure at the location of the offset gas core, which is marked by the ellipse in Figure \[fig:imagexmm\]. This substructure has a velocity of 432kms$^{-1}$ relative to the cluster mean and a velocity dispersion of 166kms$^{-1}$. Although the X-ray emission from this region is not clearly distinct from the core of A1201, a pronounced ridge of X-ray emission extends from the core to the position of the substructure. We interpret this ridge as being composed of contributions from the large-scale cluster emission of A1201 and from gas associated with the offset core. 
In order to highlight emission from the offset core, a residual image, made by subtracting an elliptical beta model[^3] fitted to the large-scale emission from the combined ACIS image, is shown in Figure \[fig:densitypfl\_resid\]. The cool core and the sector containing the southeast cold front were both masked out of the fit, since they are also not well-described by the beta model. The offset core stands out clearly at the end of the ridge. The temperature and density of the offset core were estimated using spectra for the region marked by the magenta ellipse in Figures \[fig:imagexmm\] and \[fig:densitypfl\_resid\]. A local background was extracted from an annulus centered at the cluster center and containing the offset core, in the PA range $0\degr$ – $50\degr$. Scaling this to the area of the ellipse to obtain a background spectrum and fitting an absorbed thermal model to the residual spectrum gives the excess emission measure of the offset core as $1.145^{+0.085}_{-0.077}\times 10^{66}~{\rm cm^{-3}}$. For a fixed emission measure in a fixed volume, the mean gas density is maximized when it is uniform. Assuming that the ellipse is the projection of a spheroid, with semi-major axis $a=110$kpc, semi-minor axis $b=88$kpc, and its third semi-axis $c = \sqrt{ab}$, the electron density of the gas is $n_{\rm e} = (3.4\pm0.2)\times 10^{-3}\,{\rm cm}^{-3}$. The uncertainty here is dominated by systematic errors, particularly in the volume of the emitting region. The errors show the range of densities as the third axis, $c$, is varied from the prolate ($c=b$) to the oblate ($c=a$) extremes. The corresponding gas mass of the offset core is $(3.8\pm0.2)\times 10^{11}\,M_{\sun}$. The bright galaxy at $\rm (R.A., DEC) = (168.2088\degr, 13.4750\degr)$ is dynamically associated with the offset core, located close to its center, and has a stellar mass, estimated from SDSS photometry, of ${\rm M_{gal}} = 3\times 10^{11} M_{\sun}$, which is comparable to the gas mass. 
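The density and mass estimates above follow from the emission measure and the spheroid geometry by straightforward arithmetic. A rough numerical check is sketched below; the hydrogen-to-electron ratio ($n_{\rm H}\simeq n_{\rm e}/1.2$) and the mean gas mass per electron ($\simeq 1.17\,m_{\rm p}$) are assumed plasma-composition values, not quantities taken from the fits.

```python
import numpy as np

kpc = 3.0857e21    # cm
Msun = 1.989e33    # g
m_p = 1.6726e-24   # g

# Excess emission measure EM = int n_e n_H dV from the spectral fit
EM = 1.145e66      # cm^-3

# Spheroid semi-axes from the text; the third semi-axis is c = sqrt(a b)
a, b = 110 * kpc, 88 * kpc
c = np.sqrt(a * b)
V = 4.0 / 3.0 * np.pi * a * b * c

# For a fully ionized plasma, n_H ~ n_e / 1.2 (assumed composition)
n_e = np.sqrt(1.2 * EM / V)           # ~3.4e-3 cm^-3

# Gas mass, assuming ~1.17 m_p of gas per electron
M_gas = 1.17 * m_p * n_e * V / Msun   # ~3.9e11 Msun
```

With these assumptions the density reproduces the quoted $3.4\times10^{-3}\,{\rm cm^{-3}}$, and the mass comes out near $3.9\times10^{11}\,M_{\sun}$, consistent with the quoted value given rounding and the uncertainty in the mean mass per electron.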
The second brightest galaxy associated with the offset core is one magnitude fainter. Taking into account only the stellar mass of the brightest galaxy, and assuming that the ratio of total mass to stellar mass is about 10, gives a conservative lower limit on the total mass of the offset core of $3\times10^{12} M_{\sun}$. The temperature maps show that the offset core is only marginally cooler than surrounding regions (Figure \[fig:matt\_hires\_tmap\]). Since the offset core is brighter than adjacent regions at the same radius, it must be denser, as confirmed by the results above. In Figure \[fig:densitypfl\], results for the temperature, density, pressure and entropy of the gas in the offset core are plotted in red together with the results of the deprojections. We note that the bright ridge was excluded from the spectra used for the deprojections, although this makes very little difference to the results. We also obtained a deprojected metallicity profile, which is consistent with the metallicity map in the right panel of Figure \[fig:matt\_tmap\]. The metallicity is highest in the cool core, $Z=0.86\pm0.2$ within 72kpc of the cluster center, and drops to $Z\sim0.3-0.4$ at 140 to 690kpc. Although the gas temperature of the offset core is only marginally lower than that of the surrounding gas (top left panel of Figure \[fig:densitypfl\]), its entropy is significantly lower. This shows that the high density of the gas within the offset core is not simply the result of adiabatic compression of surrounding gas, since that would not alter the entropy. Thus, the gas in the offset core must have originated elsewhere. The simplest interpretation is that it is gas that fell in with the core. The abundance map in the right panel of Figure \[fig:matt\_tmap\] shows marginal evidence of lower abundances near the offset core. While the significance of this feature is low, it is consistent with an origin for this gas outside the present cluster. 
We can use the pressures to make a rough estimate of the direction of motion of the offset core. The thermal pressure of gas in the offset core is balanced by the external thermal pressure plus the ram pressure at the stagnation point. Using our estimates of the density and temperature from above gives an internal pressure for the core of $p_{\rm core} = 3\times10^{-11}\,{\rm dyne\,cm^{-2}}$. The pressure of the external gas at the same radius, $p_{\rm ext}$, is $10^{-11}\,{\rm dyne\,cm^{-2}}$. Therefore, the ram pressure $p_{\rm rp}$ is about $2\times10^{-11}\,{\rm dyne\,cm^{-2}}$. Since the ram pressure $p_{\rm rp} = \rho_{\rm ext} v^{2}$, the velocity, $v$, of the offset core would need to be about $1000$kms$^{-1}$. Pressure arguments also constrain the location of the offset core along our line of sight. Both the thermal pressure and the ram pressure of the external gas depend on its density, which decreases with distance from the cluster center. If the distance of the offset core from the cluster center exceeds the projected separation, then the external gas pressure would be smaller, increasing the ram pressure required to confine the core. Since the gas density is also lower, a larger speed is then demanded to confine the gas in the offset core. However, in the absence of any evidence for a shock, we assume that the motion of the core is subsonic, i.e., its speed cannot exceed $\sim1100\rm\,km\,s^{-1}$, which is not significantly higher than the velocity estimated above. Thus, the real distance of the offset core from the center of A1201 cannot be much greater than the projected separation of $\simeq 420$kpc. This requires the offset core to be located close to the plane of the sky. 
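The velocity estimate above amounts to inverting the stagnation-point pressure balance. A numerical sketch follows; the external temperature ($\sim 5$keV), the mean molecular weight ($\mu \simeq 0.6$), and the order-unity coefficient in the ram-pressure term are assumptions, not values pinned down by the fits, so the result is only good to tens of percent.

```python
import numpy as np

m_p = 1.6726e-24   # g
keV = 1.6022e-9    # erg

# Stagnation-point balance: p_core = p_ext + rho_ext * v^2
p_core = 3e-11     # dyne cm^-2, internal pressure of the core
p_ext = 1e-11      # dyne cm^-2, external thermal pressure
p_ram = p_core - p_ext

# External density from p_ext = n_tot k T, with assumed kT and mu
kT = 5.0 * keV
n_tot = p_ext / kT
rho_ext = 0.6 * m_p * n_tot

v = np.sqrt(p_ram / rho_ext) / 1e5   # km/s, of order 1000 km/s

# The 432 km/s line-of-sight velocity then implies a direction of
# motion about 65 deg from the line of sight (used in the Discussion)
alpha = np.degrees(np.arccos(432.0 / 1000.0))
```

The speed comes out of order $1000$kms$^{-1}$; the exact number shifts with the assumed temperature and with the stagnation-point coefficient.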
Tail of the Offset Core {#sec:sb\_tail} ----------------------- In the two panels of Figure \[fig:imagexmm\], there is a region of brighter emission running to the northeast of the offset core that has a relatively sharp boundary to the north and a less well-defined edge to the east (marked by the green curve). We interpret the bright emission as a stream of gas being stripped from the offset core. Surface brightness profiles, obtained by the procedure of Section \[sec:sb\_CF\], for the northern edge of this feature (from box A in the upper panel of Figure \[fig:sb\_tail\]) and its eastern edge (from box B) are shown in the lower panels of Figure \[fig:sb\_tail\]. The surface brightness profile for box A shows a significant drop across the edge in all three data sets. This feature was not seen by because of its limited coverage in the ACIS-S data, where the tail lies close to a chip gap. An abrupt drop in surface brightness can also be seen in the PN data (blue solid lines in the bottom panels of Figure \[fig:sb\_tail\]) at the eastern edge of this feature, in the surface brightness profile for box B. The eastern edge is less significant in the noisier MOS and ACIS data, although they are reasonably consistent with the PN profile. Without a clear ACIS detection of the edge, the exact location of the eastern edge is not as securely constrained as that of the northern one. Profiles extracted from boxes C and D, on the opposite side of the bright ridge, are shown for comparison. In them, the surface brightness fades smoothly away from the offset core. Excess emission from the tail can also be seen in the residual map of Figure \[fig:densitypfl\_resid\]. In addition to the excess X-ray emission, the distribution of the galaxies associated with the offset core by the KMM algorithm is elongated in the direction of this tail (top panel of Figure 14 in ). This is suggestive of a tidal tail of galaxies stripped from the disrupting core. 
However, the small number of galaxies in this structure, combined with the uncertainties in the KMM decomposition, makes this hard to demonstrate with any certainty. Although the tail does not stand out in the temperature or abundance maps of Figures \[fig:matt\_hires\_tmap\] and \[fig:matt\_tmap\], both the temperatures and abundances are lower on the eastern side of the offset core, over the tail, than they are to the west. The absence of a significantly lower temperature in the tail may be simply because the stripped gas in the tail does not dominate the X-ray emission. To examine the properties of the tail more carefully, a spectrum was extracted from the tail region, defined as the rectangular region marked in Figure \[fig:densitypfl\_resid\]. Apart from scaling, the local background used is the same as that used to measure the spectrum of the offset core in Section \[sec:sb\_clump\], i.e., a sector from PA $0\degr$ to $50\degr$ of the annulus containing the offset core. X-ray emission in this region is dominated by the ICM of A1201, so that, after background subtraction, the spectrum should be dominated by emission from the tail. These spectra for the ACIS and MOS data were fitted jointly using the absorbed thermal model of Section \[sec:data\_spec\]. The temperature of the tail was found to be $5.42^{+1.5}_{-1.0}$keV. To estimate the density of the tail, we assume that the rectangular region from which the spectrum was extracted is the projection of a cylinder, with its height defined by the long side of the rectangle and its diameter by the short side. This gives an electron density for the tail of $(1.31\pm0.1)\times10^{-3}$cm$^{-3}$, where only the statistical uncertainty (90% confidence level) is included. Although the systematic error is large, the density of gas in the tail is significantly lower than in the offset core (consistent with its lower surface brightness), but 1.7 times greater than the ICM density at the same radius (Figure \[fig:densitypfl\]). 
The temperature and density determined here imply that the gas pressure of the tail is appreciably greater than that of its surroundings. In that case, the tail should expand at close to its internal sound speed, until it comes into pressure equilibrium with the surrounding gas. For the sound speed at $\sim 5$keV and a tail width of $\simeq 100$ kpc, this only requires $\simeq 0.1$Gyr. Taking the velocity of the offset core to be about $700$km/s (see next section), the core would travel about 70kpc in this time, which should determine the length of the tail (projection effects can only reduce this). However, the extent of the tail is considerably more than 70kpc (see Figure \[fig:sb\_tail\]). The most plausible cause of this discrepancy is that our estimates of the temperature and density of the tail do not accurately reflect its mean temperature and density. At typical cluster temperatures, X-ray brightness in detectors like ACIS and EPIC is quite insensitive to the gas temperature, being largely determined by its emission measure. Thus, our density estimate best represents the root-mean-square (RMS) density of the tail. Note that the excess brightness of the tail demands that its RMS density is higher than that of the surroundings. Stripped gas in the tail could be clumpy, making the RMS density larger than its mean density. Coupled to this, the temperature in the gas would almost certainly be non-uniform as well. Temperatures obtained by fitting single thermal models to emission from gas mixtures are not simply related to the mean temperature. Thus, our estimates of both the temperature and density in the tail may not provide a good estimate of its mean pressure. It is more physically reasonable that most of the tail is close to local pressure equilibrium. Presumably, the gas is in the process of mixing into the ambient ICM. 
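The expansion-time argument above reduces to simple arithmetic; a sketch is given below, assuming an adiabatic sound speed for $kT \approx 5$keV and a mean molecular weight $\mu \approx 0.6$ (both assumptions, since the tail temperature is only loosely constrained).

```python
import numpy as np

kpc = 3.0857e21    # cm
Gyr = 3.156e16     # s
m_p = 1.6726e-24   # g
keV = 1.6022e-9    # erg

# Adiabatic sound speed at kT ~ 5 keV, mu ~ 0.6 (assumed)
kT = 5.0 * keV
c_s = np.sqrt(5.0 / 3.0 * kT / (0.6 * m_p))   # ~1.2e8 cm/s

# Time for a ~100 kpc wide tail to expand at the sound speed
t_exp = 100 * kpc / c_s / Gyr                 # ~0.1 Gyr

# Distance covered by the core at ~700 km/s in that time
d = 700e5 * (t_exp * Gyr) / kpc               # ~60-70 kpc
```

This recovers the $\simeq 0.1$Gyr equilibration time and a tail length of roughly 60 – 70kpc, both well short of the observed extent, as discussed above.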
We note that to make the RMS density 1.7 times higher than the mean density, the distribution of gas densities in the tail would need to be broad. We also note that the pressure estimate for the offset core may be affected similarly (in which case, the velocity estimate of Section \[sec:sb\_clump\] would be high). However, because it stands out more clearly relative to the local cluster X-ray emission, our pressure estimate for the offset core should be less affected. Discussion {#sec:discussion} ========== We begin by presenting our interpretation of the merger history of A1201 (Figure \[fig:scheme\]). On its initial core passage, the infalling subcluster was approaching us along a path to the southeast of the center of A1201 in projection. Much of the gas belonging to the subcluster would have been stripped during its first pericenter passage, although some gas has been retained by the subcluster core. The subcluster excited sloshing motions in the core of the primary cluster during its first core passage [@markevitch07], giving rise to the cold fronts. The remnant subcluster core has now passed apocenter and is close to its second pericenter passage, moving away from us. The orbit of the subcluster is roughly in a plane containing our line of sight ($-z$ direction in the figure) and the X-ray bright ridge. The evidence supporting this interpretation is outlined below. Orientation of the Orbit ------------------------ Simulations of sloshing cold fronts show that contact discontinuities form along a spiral of enhanced X-ray emission that grows in the plane of the interloper orbit [@ascasibar06; @poole06]. This feature is generally visible in X-ray images, unless the merger is viewed from close to the plane containing the spiral. Viewed from within its plane, the spiral appears as cold fronts on alternating sides of the cluster center. 
Since no spiral feature can be seen connecting the fronts in A1201, in either the X-ray image (Figure \[fig:imagexmm\]) or the temperature maps (Figure \[fig:matt\_hires\_tmap\]), it seems likely that we are viewing the spiral front from close to its plane, i.e. from close to the plane of the merging orbit. In the X-ray images of A1201, a bright ridge lies along the axis joining the two subcluster cores. At least in part, this is composed of emission from the sloshing gas and the offset core. The two cold fronts are projections of the spiral feature, viewed from roughly within the plane of the spiral. In this orientation, enhanced X-ray emission from the whole of the spiral density feature (not just the cold fronts) is projected onto the nearly edge-on plane of the orbit, contributing to the X-ray emission from the bright ridge. The simulations show that the spiral fronts created by sloshing wind from the outside inward in the prograde sense of the interloper orbit [e.g. @ascasibar06; @poole06]. If the direction of the tail reveals the earlier path of the offset core, we expect that the core passed through its pericenter on the southeastern side of the center of the primary cluster, passing around it moving towards the northeast in projection. Thus, the spiral structure should wind from the inside out, counterclockwise, as plotted in Figure \[fig:scheme\] and as projected onto the sky. The X-ray structure between the cold fronts is consistent with this scenario (Figure \[fig:matt\_tmap\]). We note that in simulations the outermost front generally appears on the opposite side of the cluster center to the pericenter passage that excited the sloshing [e.g. @roediger11], whereas the outer cold front lies to the southeast in A1201. This might be an issue for our proposed merger orbit. However, in the simulation shown in Figure 7 of @ascasibar06, the outer cold front changes sides at later times. This may well be a consequence of the second core passage. 
For our purpose, it only matters that these simulations demonstrate that the location of the outer front does not unambiguously determine the side of the first core passage. This issue is further complicated by projection. Age of the Merger ----------------- We can estimate the time elapsed since pericenter passage using the locations of the two cold fronts. First, if the sloshing motions that give rise to cold fronts may be regarded as a superposition of internal gravity waves excited by the merging subcluster [@churazov03], we can estimate the time required for the cold fronts to get out of phase by $\pi$ radians as $\tau = \pi/(\omega_{\rm BV, in} - \omega_{\rm BV, out})$, where $\omega_{\rm BV}$ is the Brunt-Väisälä frequency [@owers11b; @simionescu11]. This is given by $$\label{omega_BV} \omega_{\rm BV}=\sqrt{{g \over r} {3\over 5} {d\ln \Sigma \over d\ln r}} = \omega_{\rm K} \sqrt{{3\over 5} {d\ln \Sigma \over d\ln r}},$$ where $r$ is the radius, $g = GM(r)/r^2$ is the acceleration due to gravity, $\Sigma$ is the entropy index and $\omega_{\rm K} = \sqrt{g/r}$ is the Kepler frequency. From observations [e.g. @owers11b], the factor under the square root on the right is close to unity, so that the approximation $\omega_{\rm BV} \simeq \omega_{\rm K} = V_{\rm circ}/r$ is adequate for our purposes ($V_{\rm circ} = \sqrt{gr}$). To estimate $\omega_{\rm K}$, we measure radii from the cluster center, which is assumed to mark the potential minimum, and we assume that the mass distribution can be approximated as a singular isothermal sphere, so that the circular velocity is given by $V_{\rm circ} = \sqrt{2}\sigma_{v}$, where $\sigma_v$ is the line of sight velocity dispersion. In A1201, the radii of the two cold fronts are 300kpc and 50kpc, and $V_{\rm circ} = 1100$kms$^{-1}$, giving $\tau \simeq 0.17\rm\,Gyr$. The Brunt-Väisälä frequency applies to purely transverse modes. 
For wavevectors with a radial component, the frequency is reduced by a factor of the sine of the inclination to the radial direction. Cold fronts are oriented almost radially, so this correction can be substantial. @owers11b and @roediger11 found that the approximation in Eqn. \[omega\_BV\] underestimates the time since pericenter passage by a factor of 3 – 4. Correcting for this, we find that the first pericenter passage occurred $\sim 0.6$Gyr ago. This age is smaller than expected if the merging core is close to its second pericenter passage. Note, however, that the 6:1 ratio of the radii of the two cold fronts in A1201 is substantially larger than the 4:1 ratio for the Virgo cluster fronts modeled by @roediger11. Since the estimate above is largely determined by the location of the inner front, it may well be low. The location of the outer front provides an alternative estimate of time elapsed since pericenter passage, assuming that the outer front was excited then. @roediger11 found that cold fronts propagate outward with constant speed. For their Virgo cluster model, the outer front takes 1.5Gyr to reach a radius of $\sim90$kpc. Given that A1201 is about twice as hot as the Virgo cluster, its characteristic speeds (sound speed and Kepler speed) should be a factor of $\simeq \sqrt{2}$ greater than for Virgo. The speed of the cold fronts scales with these, so that, scaling from the results of @roediger11, the outer front in A1201 would require about 3.5Gyr to reach its current radius of 300kpc. This estimate leaves comfortably enough time for the merging subcluster to have reached its second pericenter passage. A more accurate estimate for the age of the merger requires a simulation better matched to A1201. These arguments show that sufficient time has elapsed for the merging subcluster to have returned for its second core passage. 
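Both timescale estimates above can be reproduced with back-of-the-envelope arithmetic. A sketch follows; the unit conversions and the correction factor of 3 – 4 are taken from the text, while the choice of 3.5 as a representative correction factor is an assumption.

```python
import numpy as np

kpc = 3.0857e21    # cm
Gyr = 3.156e16     # s

# Phase-lag time between the fronts, with omega_BV ~ omega_K = V_circ / r
V_circ = 1100e5                       # cm/s, sqrt(2) * sigma_v
r_in, r_out = 50 * kpc, 300 * kpc
tau = np.pi / (V_circ / r_in - V_circ / r_out) / Gyr   # ~0.17 Gyr

# Correcting by the factor of 3-4 for nearly radial fronts
t_pericenter = 3.5 * tau              # ~0.6 Gyr

# Scaling the Virgo front speed (90 kpc in 1.5 Gyr) up by sqrt(2)
v_front = np.sqrt(2.0) * 90.0 / 1.5   # kpc/Gyr
t_outer = 300.0 / v_front             # ~3.5 Gyr
```

The first estimate recovers the $\tau \simeq 0.17$Gyr phase-lag time and the $\sim 0.6$Gyr corrected age; the second recovers the $\sim 3.5$Gyr propagation time for the outer front.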
Dynamics of the Offset Core --------------------------- As discussed in Section \[sec:sb\_clump\], we estimate the speed of the offset core to be about $1000$kms$^{-1}$, while the line of sight velocity of the core is 432kms$^{-1}$. To be consistent, these values require the core to be moving at an angle of about $65\degr$ to our line of sight ($\alpha$ in Figure \[fig:scheme\]). We note that this angle is distinct from the angle between the orbital plane of the offset core and the line of sight, as the direction of motion of the core within the orbital plane is not constrained (Figure \[fig:scheme\]). A relatively large value of $\alpha$ thus presents no conflict with the proposed model, which requires our line of sight to be nearly parallel to the orbital plane. The argument of §\[sec:sb\_clump\] assumes that the core is now subsonic. Note that if it were moving much faster than $1000$kms$^{-1}$, its low line of sight velocity would require its motion to be even closer to the plane of the sky, increasing the likelihood that we could see the shock front it would create. This favors our assumption that its speed through the cluster is subsonic or, at least, only mildly supersonic. As also argued in Section \[sec:sb\_clump\] on the basis of pressures, the distance of the offset core from the center of A1201 cannot be much greater than the projected separation of $\simeq 420$kpc, so the offset core must lie reasonably close to the plane of the sky; in other words, the angle $\beta$ in Figure \[fig:scheme\] cannot be large. Thus, we interpret the offset core as the remnant of a merging subcluster, close to its second core passage. Alignment of the Sub-Structures ------------------------------- @edge03 used the strong lensing arc projected onto the BCG (Figure \[fig:hst\_bcg\]) to calculate the mass distribution in the cluster center. 
They found the mass distribution to be elongated along the same axis as the BCG isophotes, but, interestingly, with a significantly greater ellipticity than the isophotes. Unfortunately, they found no lensed arcs further from the cluster center, nor enough galaxies for a weak-lensing analysis. Thus, their mass model is well-constrained only in the cluster core and the extent of the highly elliptical mass distribution is not known. At the time of publication, no high resolution X-ray image of A1201 had been obtained, leaving them unaware of the substructures discussed here. The X-ray data combined with the optical data of make a strong case for the presence of a remnant core at a distance of about $400$kpc from the cluster center, in the direction of elongation of the mass distribution. This core is too far from the cluster center to contribute significantly to the mass distribution there. However, the disturbance in the dark matter and stars created by the merging core might well have elongated the mass distribution in this direction. It is remarkable that the major axis of the mass distribution, the offset between the BCG and the X-ray peak, and the cold fronts all lie along the same direction. Conventionally, alignments between cluster halos and their satellites are controlled by the orientation of large scale structures [e.g. @basilakos06]. However, the large scale structure should not play an important role in A1201, as the location of the interloper changes rapidly at this stage of merging. Thus, the consistency of all alignments shown in A1201 suggests the possibility that the orientation of the BCG and the mass distribution in the cluster core are affected by the satellite directly. This would be surprising, since the mass in the cluster core should not be affected much by the tidal field of the interloper [cf. @faltenbacher08]. We speculate that alignment may be enhanced by sloshing, which would need to be verified by numerical simulations. 
Summary {#sec:summary}
=======

We have analyzed the structure and dynamics of the merging cluster A1201 using the Chandra and XMM-Newton data. Structures associated with an infalling cluster, including cold fronts and an offset remnant core, were identified previously by ONC. In addition to these structures, the new data show enhanced emission east of the remnant gas core, with breaks in surface brightness along its boundary to the north and east. This is interpreted as a tail of gas stripped from the offset core. Temperature and metallicity maps of A1201 made from the new data support the merger interpretation. High metallicity in the cool core, which drops abruptly across the southern cold front, is consistent with a sloshing cold front. There is a hint of reduced metallicity in the offset core and tail, as expected if these arise in an external system. Using the deprojected density, temperature, and entropy profiles of the cluster, the entropy of the gas in the offset core and the tail was found to be lower than in the gas at the same radius elsewhere in the cluster. This evidence is consistent with the offset core being the remnant gas of a merging satellite cluster and the tail being composed of gas stripped from it. The observed properties of this system, including the placement of the cold fronts, the offset core and its tail, together with our estimate for the velocity of the offset core, are consistent with a simple merger model for A1201 that is sketched in Figure \[fig:scheme\]. In this model, the offset core passed pericenter to the southeast of the primary cluster core and it is now on its second pericenter passage, moving with a transonic velocity of $\sim 1000\rm\,km\,s^{-1}$. The compact, marginally low temperature structure of the offset core indicates that this gas belongs to the interloper, having survived the first core passage, and that it is now being stripped to form the tail behind it.
The gas in the primary core was perturbed by the infalling core, causing the sloshing that gave rise to the two visible cold fronts. The disposition of the cold fronts requires that the merger be viewed from close to the plane containing the orbit of the interloper. Moreover, the remarkable alignment between the major axis of the mass distribution in the cluster core, the offset between the BCG and the X-ray peak, the cold fronts and larger scale structure suggests that they have all been affected by the merger disturbing the core of the primary cluster.

This work was partly supported by NASA grants NAS8-03060 and NNX08AD68G. CJM and BRM are supported by Chandra Large Project Grant G09-0140X. BRM acknowledges generous support from the Natural Sciences and Engineering Research Council of Canada. MSO and WJC acknowledge the financial support of the Australian Research Council. We have made use of data obtained under the Chandra HRC GTO program and software provided by the Chandra X-ray Center (CXC) in the application packages CIAO, ChIPS, and Sherpa. STSDAS is a product of the Space Telescope Science Institute, which is operated by AURA for NASA.

Ascasibar, Y., & Markevitch, M. 2006, ApJ, 650, 102

Ashman, K. M., Bird, C. M., & Zepf, S. E. 1994, AJ, 108, 2348

Basilakos, S., Plionis, M., Yepes, G., Gottlöber, S., & Turchaninov, V. 2006, MNRAS, 365, 539

Böhringer, H., [et al.]{} 2000, ApJS, 129, 435

Carter, J. A., & Read, A. M. 2007, A&A, 464, 1155

Churazov, E., Forman, W., Jones, C., & Böhringer, H. 2003, ApJ, 590, 225

David, L. P., Nulsen, P. E. J., McNamara, B. R., Forman, W., Jones, C., Ponman, T., Robertson, B., & Wise, M. 2001, ApJ, 557, 546

Dickey, J. M., & Lockman, F. J. 1990, ARAA, 28, 215

Diehl, S., & Statler, T. S. 2006, MNRAS, 368, 497

Ebeling, H., Edge, A. C., Böhringer, H., Allen, S. W., Crawford, C. S., Fabian, A. C., Voges, W., & Huchra, J. P. 1998, MNRAS, 301, 881

Edge, A. C., Smith, G. P., Sand, D. J., Treu, T., Ebeling, H., Allen, S. W., & van Dokkum, P. G. 2003, ApJ, 599, L69

Faltenbacher, A., Jing, Y. P., Li, C., Mao, S., Mo, H. J., Pasquali, A., & van den Bosch, F. C. 2008, ApJ, 675, 146

Fruscione, A., [et al.]{} 2006, CIAO: Chandra’s data analysis system

Johnson, R. E., Markevitch, M., Wegner, G. A., Jones, C., & Forman, W. R. 2010, ApJ, 710, 1776

Johnson, R. E., ZuHone, J. A., Jones, C., Forman, W., & Markevitch, M. 2011, Sloshing Gas in the Core of the Most Luminous Galaxy Cluster RXJ1347.5-1145

Markevitch, M. 1998, ApJ, 504, 27

Markevitch, M., Gonzalez, A. H., David, L., Vikhlinin, A., Murray, S., Forman, W., Jones, C., & Tucker, W. 2002, ApJL, 567, L27

Markevitch, M., & Vikhlinin, A. 2007, Phys. Rep., 443, 1

Markevitch, M., Vikhlinin, A., & Mazzotta, P. 2001, ApJ, 562, L153

Markevitch, M., [et al.]{} 2000, ApJ, 541, 542

Mewe, R., Lemen, J. R., & van den Oord, G. H. J. 1986, A&AS, 65, 511

Nevalainen, J., David, L., & Guainazzi, M. 2010, A&A, 523, 22

Nulsen, P. E. J., McNamara, B. R., Wise, M. W., & David, L. P. 2005, ApJ, 628, 629

Owers, M. S., Randall, S. W., Nulsen, P. E. J., Couch, W. J., David, L. P., & Kempner, J. C. 2011, ApJ, 728, 27

Owers, M. S., Nulsen, P. E. J., & Couch, W. J. 2011, ApJ, 741, 122

Owers, M. S., Nulsen, P. E. J., Couch, W. J., & Markevitch, M. 2009, ApJ, 704, 1349

Owers, M. S., Nulsen, P. E. J., Couch, W. J., Markevitch, M., & Poole, G. B. 2009, ApJ, 692, 702, (*ONC*)

Poole, G. B., Babul, A., McCarthy, I. G., Sanderson, A. J. R., & Fardal, M. A. 2008, MNRAS, 391, 1163

Poole, G. B., Fardal, M. A., Babul, A., McCarthy, I. G., Quinn, T., & Wadsley, J. 2006, MNRAS, 373, 881

Randall, S., Nulsen, P., Forman, W. R., Jones, C., Machacek, M., Murray, S. S., & Maughan, B. 2008, ApJ, 473, 651

Roediger, E., Brüggen, M., Simionescu, A., Böhringer, H., Churazov, E., & Forman, W. R. 2011, MNRAS, 413, 2057

Sarazin, C. L. 2002, in Merging Processes in Galaxy Clusters, Vol. 272, 1–38

Simionescu, A., [et al.]{} 2011, Sci, 331, 1576

Springel, V., Frenk, C. S., & White, S. D. M. 2006, Nature, 440, 1137

Struble, M. F., & Rood, H. J. 1999, ApJS, 125, 35

Tittley, E. R., & Henriksen, M. 2005, ApJ, 618, 227

Vikhlinin, A., Markevitch, M., & Murray, S. S. 2001, ApJ, 551, 160

ZuHone, J. A., Markevitch, M., & Johnson, R. E. 2010, ApJ, 717, 908

[^1]: http://xmm.vilspa.esa.es/external/xmm\_sw\_cal/background/blank\_sky.shtml. August 2010 version.

[^2]: All uncertainties in this section are 90% confidence ranges.

[^3]: http://cxc.harvard.edu/sherpa4.2.2/ahelp/beta2d.html
---
abstract: 'We work out the phenomenology of a model of supersymmetry breaking in the presence of a tiny (tunable) positive cosmological constant, proposed by the authors in arXiv:1403.1534. It utilises a single chiral multiplet with a gauged shift symmetry, which can be identified with the string dilaton (or an appropriate compactification modulus). The model is coupled to the MSSM, leading to calculable soft supersymmetry breaking masses and a distinct low energy phenomenology that allows one to differentiate it from other models of supersymmetry breaking and mediation mechanisms.'
---

CERN-PH-TH-2015-170\
\
**I. Antoniadis$^{\,a,b}$, R. Knoops$^{\, c,d,e}$**

$^a$ [LPTHE, UMR CNRS 7589 Sorbonne Universités, UPMC Paris 6, 75005 Paris France]{}

$^b$ [Albert Einstein Center, Institute for Theoretical Physics Bern University, Sidlerstrasse 5, CH-3012 Bern, Switzerland ]{}

$^c$ [CERN Theory Division, CH-1211 Geneva 23, Switzerland]{}

$^d$ [Section de Mathématiques, Université de Genève, CH-1211 Genève, Switzerland]{}

$^e$ [Instituut voor Theoretische Fysica, KU Leuven, Celestijnenlaan 200D, B-3001 Leuven, Belgium]{}

Introduction
============

In a recent work [@RK], we studied a simple $N=1$ supergravity model of supersymmetry breaking [@Z1] having a metastable de Sitter vacuum with an infinitesimally small (tunable) cosmological constant, independent of the supersymmetry breaking scale, which can be in the TeV region. Besides the gravity multiplet, the minimal field content consists of a chiral multiplet with a shift symmetry promoted to a gauged R-symmetry using a vector multiplet. In the string theory context, the chiral multiplet can be identified with the string dilaton (or an appropriate compactification modulus) and the shift symmetry associated with the gauge invariance of a two-index antisymmetric tensor that can be dualized to a (pseudo)scalar.
The shift symmetry fixes the form of the superpotential, and the gauging allows for the presence of a Fayet-Iliopoulos (FI) term, leading to a supergravity action with two independent parameters that can be tuned so that the scalar potential possesses a metastable de Sitter minimum with a tiny vacuum energy (essentially the relative strength between the F- and D-term contributions). A third parameter fixes the Vacuum Expectation Value (VEV) of the string dilaton at the desired (phenomenologically) weak coupling regime. An important consistency constraint of our model is anomaly cancellation, which has been studied in [@R2] and implies the existence of additional charged fields under the gauged R-symmetry. In this work, we study a small variation of this model which is manifestly anomaly free without additional charged fields and allows us to couple, in a straightforward way, a visible sector containing the minimal supersymmetric extension of the Standard Model (MSSM) and to study the mediation of supersymmetry breaking and its phenomenological consequences. It turns out that an additional ‘hidden sector’ field $z$ needs to be added for the matter soft scalar masses to be non-tachyonic; although this field participates in the supersymmetry breaking and is similar to the so-called Polonyi field, it does not modify the main properties of the metastable de Sitter vacuum. All soft scalar masses, as well as trilinear A-terms, are generated at the tree level and are universal under the assumption that matter kinetic terms are independent of the ‘Polonyi’ field, since matter fields are neutral under the shift symmetry and supersymmetry breaking is driven by a combination of the $U(1)$ D-term and the dilaton and $z$-field F-terms. Alternatively, a way to avoid the tachyonic scalar masses without adding the extra field $z$ is to modify the matter kinetic terms by a dilaton-dependent factor.
A main difference of the present analysis from the previous work is that we use a field representation in which the gauged shift symmetry corresponds to an ordinary $U(1)$ and not an R-symmetry. The two representations differ by a Kähler transformation that leaves the classical supergravity action invariant. However, at the quantum level, a Green-Schwarz term is generated that amounts to an extra dilaton-dependent contribution to the gauge kinetic terms, needed to cancel the anomalies of the R-symmetry. This creates an apparent puzzle with the gaugino masses, which vanish in the first representation but not in the latter. The resolution of the puzzle rests on the so-called anomaly mediation contributions [@gauginomass; @bagger], which explain precisely the above apparent discrepancy. It turns out that gaugino masses are generated at the quantum level and are thus suppressed compared to the scalar masses (and A-terms).

The outline of the paper is the following. In Section 2, we present the model and our conventions and show that adding MSSM fields inert under the shift symmetry leads to tachyonic scalar masses. In Section 3, we solve this problem by extending the model with an additional chiral field in the ‘hidden’ sector, participating in the supersymmetry breaking without modifying the main features of the model and its metastable de Sitter vacuum. In Section 4, we add a visible sector with the MSSM fields and compute all soft breaking terms. In particular, we discuss how gaugino masses are generated and describe the puzzle mentioned above. In Section 5, we discuss the Kähler transformation and show the equivalence of the two representations at the quantum level. We work out the phenomenology in Section 6. In Section 7 we introduce a non-canonical Kähler potential for the MSSM superfields as a different possible solution to the tachyonic masses. Section 8 contains our conclusions.
Appendix A contains the computation of the fermion mass matrix in the models of Sections 3 and 4, while Appendix B describes the anomaly cancellation.

Conventions {#sec:conventions}
===========

Throughout this paper we use the conventions of [@VP]. A supergravity theory is specified (up to Chern-Simons terms) by a Kähler potential $\mathcal K$, a superpotential $W$, and the gauge kinetic functions $f_{AB}(z)$. The chiral multiplets $z^\alpha, \chi^\alpha$ are enumerated by the index $\alpha$ and the indices $A,B$ indicate the different gauge groups. Classically, a supergravity theory is invariant under Kähler transformations, viz. $$\begin{aligned} \mathcal K(z ,\bar z) &\longrightarrow \mathcal K(z , \bar z) + J(z) + \bar J(\bar z), \label{kahler_tranformation} \notag \\ W(z) &\longrightarrow e^{-\kappa^2 J(z)} W(z), \end{aligned}$$ where $\kappa$ is the inverse of the reduced Planck mass, $m_p = \kappa^{-1} = 2.4 \times 10^{15} $ TeV. The gauge transformations of chiral multiplet scalars are given by holomorphic Killing vectors, i.e. $\delta z^\alpha = \theta^A k_A^\alpha (z)$, where $\theta^A$ is the gauge parameter of the gauge group $A$. The Kähler potential and superpotential need not be invariant under this gauge transformation, but can change by a Kähler transformation $$\begin{aligned} \delta \mathcal K &= \theta^A \left[r_A(z) + \bar r_A(\bar z)\right], \ \end{aligned}$$ provided that the gauge transformation of the superpotential satisfies $\delta W = - \theta^A \kappa^2 r_A(z) W $. One then has from $\delta W = W_\alpha \delta z^\alpha$ $$\begin{aligned} W_\alpha k_A^\alpha = - \kappa^2 r_A W, \label{Wtransf} \end{aligned}$$ where $W_\alpha = \partial_\alpha W$ and $\alpha$ labels the chiral multiplets. The supergravity theory can then be described by a gauge invariant function $$\begin{aligned} \mathcal G = \kappa^2 \mathcal K + \log(\kappa^6 W \bar W).
\end{aligned}$$ The scalar potential is given by $$\begin{aligned} V &= V_F + V_D \notag \\ V_F &= e^{\kappa^2 \mathcal K} \left( - 3 \kappa^2 W \bar W + \nabla_\alpha W g^{\alpha \bar \beta} \bar \nabla_{\bar \beta} \bar W \right) \notag \\ V_D &= \frac{1}{2} \left( {\text{Re}}f \right)^{-1 \ AB} \mathcal P_A \mathcal P_B,\end{aligned}$$ where $W$ appears with its Kähler covariant derivative $$\begin{aligned} \nabla_\alpha W = \partial_\alpha W(z) + \kappa^2 (\partial_\alpha \mathcal K) W(z).\end{aligned}$$ The moment maps $\mathcal P_A$ are given by $$\begin{aligned} \mathcal P_A = i(k_A^\alpha \partial_\alpha \mathcal K - r_A). \label{momentmap} \end{aligned}$$ In this paper we will be concerned with theories having a gauged R-symmetry, for which $r_A(z)$ is given by an imaginary constant, $r_A(z) = i \kappa^{-2} \xi $. In this case, $\kappa^{-2} \xi$ is a Fayet-Iliopoulos [@FI] constant parameter.

Introduction of the model {#sec:motivation}
=========================

Motivation
----------

In [@RK; @Z1] a class of $\mathcal N = 1$ supergravity theories based on a gauged R-symmetry which allow for metastable de Sitter (dS) vacua was presented. These theories have a tunable (infinitesimally small) value of the cosmological constant and a TeV gravitino mass. The spectrum consists, in addition to the supergravity multiplet, of a chiral multiplet $S$ and a vector multiplet associated with a shift symmetry of the scalar component $s$ of the chiral multiplet $S$ $$\begin{aligned} \delta s = -ic \theta. \label{shift} \end{aligned}$$ The goal of this paper is to generalize this model such that it is anomaly-free and can be coupled to the MSSM and make phenomenological predictions, while maintaining its desirable properties described in [@RK; @Z1] such as a tunable cosmological constant and a TeV gravitino mass. The starting point is a chiral multiplet $S$ invariant under a gauged shift symmetry (\[shift\]) and a string-inspired Kähler potential of the form $-p\log(s+ \bar s)$.
The most general superpotential[^1] is either a constant $W=\kappa^{-3}a$ or an exponential superpotential $W=\kappa^{-3}ae^{bs}$ (where $a$ and $b$ are constants). A constant superpotential is (obviously) invariant under the shift symmetry, while an exponential superpotential transforms as $W \rightarrow W e^{-ibc\theta}$, as in eq. (\[Wtransf\]). In this case the shift symmetry becomes a gauged R-symmetry and the scalar potential contains a Fayet-Iliopoulos term. Note however that by performing a Kähler transformation (\[kahler\_tranformation\]) with $J=\kappa^{-2}bs$, the model can be recast into a constant superpotential at the cost of introducing a linear term in the Kähler potential $\delta K=b(s+\bar s)$. Even though in this representation the shift symmetry is not an R-symmetry, we will still refer to it as $U(1)_R$. The most general gauge kinetic function has a constant term and a term linear in $s$, $f(s)=\delta + \beta s$. To summarise,[^2] $$\begin{aligned} \mathcal K(s,\bar s) &= -p \log(s + \bar s) + b(s + \bar s), \notag \\ W(s) &= \kappa^{-3} a,\notag \\ f(s) &= \delta + \beta s\, , \end{aligned}$$ where the constants $a$ and $b$ together with the constant $c$ in eq. (\[shift\]) can be tuned to allow for an infinitesimally small cosmological constant and a TeV gravitino mass. For $b>0$, there always exists a supersymmetric AdS (anti-de Sitter) vacuum at $\langle s + \bar s \rangle = p/b$. We therefore focus on $b<0$. In the context of string theory, $S$ can be identified with a compactification modulus or the universal dilaton and (for negative $b$) the exponential superpotential may be generated by non-perturbative effects. For $p \geq 3$ the scalar potential $V$ is positive and monotonically decreasing [@Z1], while for $p<3$, its F-term part $V_F$ is unbounded from below when $s + \bar s \rightarrow 0$.
On the other hand, the D-term part of the scalar potential $V_D$ is positive and diverges when $s + \bar s \rightarrow 0$, and for various values of the parameters an (infinitesimally small) positive (local) minimum of the potential can be found. If we restrict ourselves to integer $p$, tunability of the vacuum energy restricts $p=2$ or $p=1$ when $f(s)=s$, or $p=1$ when the gauge kinetic function is constant. Let us first consider $\beta \neq 0$: the case when $p=2$ and $f(s)=s$ has been analyzed in full detail in [@RK]. For a field-dependent gauge kinetic function, the Lagrangian contains a Green-Schwarz [@Green-Schwarz] term $$\begin{aligned} \mathcal L_{GS} &= \frac{1}{8} \text{Im}(f(s)) \epsilon^{\mu \nu \rho \sigma} F_{\mu \nu} F_{\rho \sigma}. \label{LGS1} \end{aligned}$$ This term is not invariant under the shift symmetry (\[shift\]), $$\begin{aligned} \delta \mathcal L_{GS} = -\theta \frac{\beta c}{8} \ \epsilon^{\mu \nu \rho \sigma} F_{\mu \nu} F_{\rho \sigma}, \end{aligned}$$ so its variation should be canceled. As explained in Appendix \[Appendix:cubic\], in the ‘frame’ with an exponential superpotential the R-charges of the fermions in the model can give an anomalous contribution to the Lagrangian. In this case the Green-Schwarz term can cancel quantum anomalies. However, as shown in [@R2], with the minimal MSSM spectrum, the presence of the term (\[LGS1\]) requires the existence of additional fields in the theory charged under the shift symmetry. Instead, to avoid the discussion of anomalies at this point, we focus on models with a constant gauge kinetic function. In this case the only (integer) possibility[^3] is $p=1$. However, as we will show below, this model suffers from tachyonic soft masses when it is coupled to the MSSM.

Models with field-independent gauge kinetic functions {#sec:model}
-----------------------------------------------------

As described above, a constant gauge kinetic function dictates $p=1$.
Moreover, by appropriate field redefinitions, this constant can be absorbed in the other constants of the theory. We can therefore take $f(s)=1$. As also described above, the model with an exponential superpotential can be recast by a Kähler transformation into a model with a constant superpotential, but with a linear term in the Kähler potential. To avoid any quantum anomalies coming from the R-charges of the various fermions in the model, we continue with a constant superpotential and a linear term in $s+\bar s$ in the Kähler potential. Although these models are equivalent classically, they might differ at the quantum level. The model is given by $$\begin{aligned} \mathcal K &= - \kappa^{-2} \log (s + \bar s) + \kappa^{-2} b (s + \bar s), \notag \\ W &= \kappa^{-3} a , \notag \\ f(s) &= 1\, . \label{model0} \end{aligned}$$ The scalar potential is given by $$\begin{aligned} V &= V_F + V_D, \notag \\ V_F &= \kappa^{-4} |a|^2 \frac{e^{b(s + \bar s)}}{s + \bar s} \sigma_s, && \sigma_s= -3 + \left( b(s + \bar s) - 1 \right)^2 , \notag \\ V_D &= \kappa^{-4} \frac{c^2}{2} \left( b - \frac{1}{s + \bar s} \right)^2. \label{scalarpot0}\end{aligned}$$ As mentioned in the previous subsection, for $b>0$ this scalar potential always allows for a supersymmetric AdS minimum at $\langle s + \bar s \rangle = 1/b$, while for $b=0$ supersymmetry is broken in AdS space [@RK]. We therefore focus on the case $b<0$. The minimization of the potential $\partial_s V = 0$ gives $$\begin{aligned} \frac{c^2}{a^2} &= \langle s + \bar s \rangle (2- b^2 \langle s + \bar s\rangle ^2) e^{b \langle s + \bar s \rangle}\, . \end{aligned}$$ By plugging this relation into $V_{min} = \Lambda \approx (10^{-3} \text{eV})^4$, one finds $$\begin{aligned} \kappa^4 e^{-b \langle s + \bar s \rangle} \langle s + \bar s \rangle \frac{\Lambda}{a^2} = -3 + (b \langle s + \bar s \rangle -1 )^2 \left[ 2 - \frac{b^2 \langle s + \bar s \rangle^2}{2}\right].
\end{aligned}$$ An infinitesimally small cosmological constant $\Lambda$ can then be obtained by tuning the parameters $a,b,c$ such that $$\begin{aligned} b \langle s + \bar s \rangle &= \alpha \approx -0.233153, \notag \\ \frac{b c^2}{a^2} &= A(\alpha) + \frac{2 \kappa^4 \Lambda \alpha^2}{a^2 b (\alpha - 1)^2}, & A(\alpha) &= 2 e^\alpha \alpha \frac{ 3 - (\alpha -1)^2}{ (\alpha - 1)^2} \approx -0.359291\, , \label{bsalpha} \end{aligned}$$ where $\alpha$ is the negative root of $-3 + (\alpha -1 )^2(2 - \alpha^2/2) = 0$ close to $-0.23$. The other roots are either imaginary or would not allow for a real solution of the second constraint. We conclude that this model allows for a stable de Sitter (dS) vacuum with an infinitesimally small (and tunable) value for the cosmological constant. Unfortunately, if one now adds an MSSM-like field $\varphi$ with a canonical Kähler potential, vanishing superpotential and invariant under the shift symmetry of the model, $$\begin{aligned} \mathcal K &= - \kappa^{-2} \log (s + \bar s) + \kappa^{-2} b (s + \bar s) + \sum \varphi \bar \varphi, \notag \\ W &= \kappa^{-3} a + W_{MSSM} , \label{model0MSSM} \end{aligned}$$ where $W_{MSSM}$ is the MSSM superpotential defined below in eq. (\[MSSMsuperpot\]), the soft scalar mass squared at $\langle \varphi \rangle = \langle \bar \varphi \rangle = 0$ is negative, given by $$\begin{aligned} \left. \partial_\varphi \partial_{\bar \varphi} V \right|_{\langle \varphi \rangle = 0}= \kappa^{-2} |a|^2 b \frac{e^{\alpha}}{\alpha} \left(\langle \sigma_s \rangle +1 \right) < 0\ . \label{tach} \end{aligned}$$ Since $\langle \sigma_s \rangle \approx -1.48$, any nonzero solutions $\langle \varphi \rangle \neq 0$ of $\partial_\varphi V = 0$ would mean that the field $\varphi$ contributes in general to the supersymmetry breaking. We conclude that the model on its own cannot be consistently extended to include the MSSM with canonical kinetic terms.
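The root $\alpha$ and the ratio $A(\alpha)$ quoted in eq. (\[bsalpha\]), as well as the sign of $\langle \sigma_s \rangle + 1$ driving the tachyonic mass in eq. (\[tach\]), are straightforward to cross-check numerically. A minimal sketch (Python is used here purely for illustration; a hand-rolled bisection stands in for any root-finding library):

```python
import math

def f(alpha):
    # Root condition quoted in the text: -3 + (alpha - 1)^2 (2 - alpha^2/2) = 0
    return -3.0 + (alpha - 1.0)**2 * (2.0 - alpha**2 / 2.0)

# Bisect for the negative root close to -0.23; f(-0.5) > 0 > f(0)
lo, hi = -0.5, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
alpha = 0.5 * (lo + hi)

# Tuning ratio A(alpha) = b c^2/a^2 in the Minkowski limit, and sigma_s
A = 2.0 * math.exp(alpha) * alpha * (3.0 - (alpha - 1.0)**2) / (alpha - 1.0)**2
sigma_s = -3.0 + (alpha - 1.0)**2

print(alpha)    # close to -0.233153
print(A)        # close to -0.359291
print(sigma_s)  # close to -1.48, so sigma_s + 1 < 0: tachyonic soft mass
```

The check confirms that $\sigma_s + 1$ is negative at the de Sitter minimum, which is the origin of the tachyonic direction.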
To circumvent this problem, one can add an extra hidden sector field which contributes to (F-term) supersymmetry breaking. This will be worked out in full detail in the following sections. However, we will show in section \[sec:noncan\] that the problem of tachyonic soft masses can also be solved if one allows for a non-canonical Kähler potential in the visible sector, which gives an additional contribution to the masses through the D-term.

Extra field in the hidden sector {#sec:extrahidden}
================================

Tuning of the parameters
------------------------

As described above, the model of the previous section (with $p=1$ and a field-independent gauge kinetic function) would give a tachyonic mass to any MSSM-like fields (that are invariant under the shift symmetry and have a canonical Kähler potential). In this section we add an extra hidden sector field $z$ (similar to the so-called Polonyi field [@polonyi]) to circumvent this problem. Note that this choice is not unique and that the problem can also be circumvented by allowing a non-canonical Kähler potential for the MSSM fields (see section \[sec:noncan\]). The Kähler potential, superpotential and gauge kinetic function are given by $$\begin{aligned} \mathcal K &= -\kappa^{-2} \log (s + \bar s) + \kappa^{-2} b (s + \bar s) + z \bar z, \notag \\ W &= \kappa^{-3} a (1+ \gamma \kappa z) , \notag \\ f(s) &= 1\, , \label{model} \end{aligned}$$ with $\gamma$ an additional constant parameter. The scalar potential is $$\begin{aligned} V &= V_F + V_D, \notag \\ V_F &= \kappa^{-4} |a|^2 \frac{e^{b(s + \bar s) + \kappa^2 z \bar z}}{s + \bar s} \left( \sigma_s A(z,\bar z) + B(z, \bar z) \right), \notag \\ V_D &= \kappa^{-4} \frac{c^2}{2} \left( b - \frac{1}{s + \bar s} \right)^2, \end{aligned}$$ where $$\begin{aligned} A(z, \bar z) &= \left| 1 + \gamma \kappa z \right|^2, \notag \\ B(z, \bar z) &= \left| \gamma + \kappa \bar z + \gamma \kappa^2 z \bar z \right|^2.
\end{aligned}$$ We focus on real $z=\bar z = \kappa^{-1} t$: $$\begin{aligned} A(t) &= (1+\gamma t)^2, \notag \\ B(t) &= (\gamma + t + \gamma t^2)^2\, ; \label{AB} \end{aligned}$$ $\partial_t V = 0$ then gives $$\begin{aligned} 0 &= \gamma(\sigma_s +1) + \left( \sigma_s + 1 + \gamma^2 (\sigma_s + 3) \right) t + \gamma (2 \sigma_s +5) t^2 + \left( 1 + \gamma^2 (\sigma_s + 4) \right) t^3 + 2 \gamma t^4 + \gamma^2 t^5, \notag \\ \sigma_s &= -3 + (\alpha - 1)^2, \ \ \ \ \ \ \ \ \ \alpha = b(s + \bar s). \label{Vt} \end{aligned}$$ As in the previous section, $\partial_s V = 0$ implies $$\begin{aligned} \frac{c^2}{a^2 } =\frac{\alpha}{b} e^{\alpha + t^2} \left[ A( t ) ( 2 - \alpha^2 ) - B(t) \right]. \label{ca} \end{aligned}$$ This can be combined with $V = 0$, $$\begin{aligned} \frac{c^2}{a^2} = -2 \frac{\alpha}{b} e^{\alpha + t^2} \left[\frac{\sigma_s A(t) + B(t)}{(\alpha-1)^2} \right], \label{ca2} \end{aligned}$$ to give $$\begin{aligned} 0 &= A(t) \left( \sigma_s + \frac{1}{2} (\alpha - 1)^2 (2 - \alpha^2) \right) + B(t) \left(1 - \frac{1}{2} (\alpha - 1)^2 \right). \label{Vs} \end{aligned}$$ In principle, for any value of $\gamma$, a Minkowski minimum can be found by solving eqs. (\[Vt\]) and (\[Vs\]) for $\alpha$ and $t$, and then tuning the parameters $a$, $b$ and $c$ by using the relation (\[ca2\]). The role of the extra hidden sector field $z$ is to give a (positive) F-term contribution to the scalar potential, which in turn gives a positive contribution (proportional to $\left| \nabla_z W\right|^2$) to the soft mass squared of any MSSM-like field in eq. (\[tach\]). It turns out that the addition of the extra hidden sector field $z$ indeed results in positive soft masses squared. It is however necessary that $z$ contributes to the supersymmetry breaking.
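For $\gamma = 1$ this tuning can be carried out with elementary numerics. The sketch below (illustrative helper names; the $\partial_t V = 0$ condition is used in the equivalent polynomial form obtained by differentiating $e^{t^2}(\sigma_s A + B)$, and the $V=0$/$\partial_s V=0$ combination is reduced to a quadratic in $t$) reproduces the $\gamma=1$ values quoted in the text, $\alpha \approx -0.134014$, $\langle t \rangle \approx 0.39041$ and $bc^2/a^2 \approx -0.1981$:

```python
import math

gamma = 1.0  # illustrative choice from the text

def sigma_s_of(alpha):
    return -3.0 + (alpha - 1.0)**2

def t_of_alpha(alpha):
    # Combine V = 0 with dV/ds = 0: A(t)*c1 + B(t)*c2 = 0, so sqrt(B/A) = k,
    # i.e. (gamma + t + gamma t^2) = k (1 + gamma t): a quadratic in t (take t > 0).
    c1 = sigma_s_of(alpha) + 0.5 * (alpha - 1.0)**2 * (2.0 - alpha**2)
    c2 = 1.0 - 0.5 * (alpha - 1.0)**2
    k = math.sqrt(-c1 / c2)
    b1 = 1.0 - k * gamma
    return (-b1 + math.sqrt(b1 * b1 - 4.0 * gamma * (gamma - k))) / (2.0 * gamma)

def dV_dt(alpha, t):
    # Proportional to the t-derivative of exp(t^2) (sigma_s A(t) + B(t))
    s = sigma_s_of(alpha)
    A = (1.0 + gamma * t)**2
    B = (gamma + t + gamma * t**2)**2
    return (t * (s * A + B) + gamma * s * (1.0 + gamma * t)
            + (gamma + t + gamma * t**2) * (1.0 + 2.0 * gamma * t))

# Bisect dV/dt = 0 along the curve t(alpha) fixed by the other two conditions
lo, hi = -0.15, -0.12
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if dV_dt(mid, t_of_alpha(mid)) < 0 else (lo, mid)
alpha = 0.5 * (lo + hi)
t = t_of_alpha(alpha)

A, B = (1.0 + gamma * t)**2, (gamma + t + gamma * t**2)**2
r = -2.0 * alpha * math.exp(alpha + t**2) * (sigma_s_of(alpha) * A + B) / (alpha - 1.0)**2

print(alpha, t, r)  # close to -0.134014, 0.39041, -0.1981 (= b c^2/a^2)
```

Repeating this for other values of $\gamma$ traces out how the tuned ratio $bc^2/a^2$ varies over the allowed range.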
The existence of any minimum of the potential with $\left| \nabla_z W \right|^2 =0$ can be troublesome, and we therefore require $$\begin{aligned} \nabla_z W = \partial_z W + \kappa^2 \mathcal K_z W = \kappa^{-2} a\left(\gamma + \kappa \bar z (1+ \gamma \kappa z) \right) \neq 0 . \label{noAdS} \end{aligned}$$ Since $\gamma$ is real, any root of $\nabla_z W =0$ is also real. To ensure the condition (\[noAdS\]) we must ensure that the roots $ \kappa \, \text{Re}(z) =( -1 \pm \sqrt{1-4 \gamma^2})/(2\gamma)$ are complex. This requires $|\gamma| > 1/2$. Also, for any $\gamma$ the solution $(\alpha, t)$ of the set of equations (\[Vt\]) and (\[Vs\]) should give a positive right hand side of eq. (\[ca\]) (or equivalently, eq. (\[ca2\])). This constraint leads to $\gamma < 1.707$. We conclude that $$\begin{aligned} \gamma \in \left( 0.5 ,1.707 \right). \label{gammarange} \end{aligned}$$ For example, for $\gamma = 1$, we have $b \langle s + \bar s \rangle = \alpha \approx -0.134014$, $\langle t \rangle = 0.39041$. The (negative) constant $b$ can be chosen freely to fix the value of the vacuum expectation value (VEV) of Re$(s)$. The parameters $a$ and $c$ should be tuned carefully according to $$\begin{aligned} \frac{b c^2}{a^2} = -2 \alpha e^{\alpha + t^2} \left[\frac{\sigma_s A(t) + B(t)}{(\alpha-1)^2} \right] \approx -0.1981 . \label{ca1} \end{aligned}$$ Note that the number on the right hand side changes when $\gamma$ is varied. The remaining free parameter $a$ can be used to tune the supersymmetry breaking scale, and (as shown below) the soft masses for the MSSM-like fields compared to the gravitino mass depend slightly on $\gamma$ (provided $c$ and $a$ are also tuned according to eq. (\[ca\])). We summarise the VEVs of $\alpha$ and $t$, together with the above constraint on the parameters for the particular choice $\gamma=1$, below for future reference: $$\begin{aligned} \gamma = 1, \ \ \alpha \approx -0.134014, \ \ \langle t \rangle \approx 0.39041, \ \ \frac{bc^2}{a^2} \approx -0.1981 \ .
\label{parameterchoice} \end{aligned}$$ For $\gamma$ in the allowed parameter range (\[gammarange\]), the scalar potential is positive definite for all Re$(s)>0, z, \bar z$, including the imaginary part of $z$, which justifies our assumption to look for a Minkowski minimum with Im$(z) = 0$. In fact, for the allowed values of $\gamma$, the solution of the set of equations (\[Vt\]) and (\[Vs\]) together with $\partial_{\text{Im}(z)}V=0$ gives $\text{Im}(z)=0$ as a solution. Finally, note that this Minkowski minimum can be lifted to a dS vacuum with an infinitesimally small cosmological constant by a small increase in $c$. A cosmological constant $\Lambda$ can be obtained by replacing the condition (\[ca1\]) with $$\begin{aligned} \frac{c^2}{a^2 } =-2 \frac{\alpha}{b} e^{\alpha + t^2} \left[\frac{\sigma_s A(t) + B(t)}{(\alpha-1)^2} \right] + \frac{2 \alpha^2}{(\alpha-1)^2} \frac{\kappa^4 \Lambda}{a^2 b^2} . \end{aligned}$$

Scalar masses, gravitino mass, super-BEH and Stückelberg mechanism
------------------------------------------------------------------

The gravitino mass is given by $$\begin{aligned} m_{3/2} = \kappa^2 e^{\kappa^2 \mathcal K/2} W = \kappa^{-1} a \sqrt { \frac{b}{ \alpha} } e^{\alpha/2 + t^2/2} \left( 1 + \gamma t \right). \label{gravitino} \end{aligned}$$ Note that this can be arranged to be at the TeV scale by suitably tuning $a$. For example, for $\gamma=1$, such that $\alpha$ and $t$ are given by eq. (\[parameterchoice\]) and $m_{3/2} =1 \text{ TeV}$, we have $$\begin{aligned} a \sqrt b \approx 3.53 \times 10^{-17}.
\label{a_numerics} \end{aligned}$$ Since the VEV of Im$(z)$ vanishes, it does not mix with the other hidden sector scalars, and its mass is given by $$\begin{aligned} m^2_{\text{Im}(z)} &= m_{3/2}^2 \ f_{\text{Im}(z)}, \notag \\ f_{\text{Im}(z)} &=\frac{2 \left(1+2 t^3 \gamma +t^4 \gamma ^2+\sigma_s +2 t \gamma (2+\sigma_s )+\gamma ^2 (3+\sigma_s )+t^2 \left(1+\gamma ^2 (4+\sigma_s )\right)\right)}{(1 + \gamma t)^2 }.\end{aligned}$$ However, the masses of the scalars Re$(s)$ and Re$(z)$ mix, so one should diagonalize their mass matrix (with eigenvalues $m_{ts1}$ and $m_{ts2}$) while taking into account the non-canonical kinetic term for $s$. We omit the details and merely state the result for the particular choice of parameters $\gamma = 1$ in eq. (\[parameterchoice\]): $$\begin{aligned} m_{\text{Im}(z)} \approx 1.21\ m_{3/2}, \notag \\ m_{ts1} \approx 4.34\ m_{3/2}, \notag \\ m_{ts2} \approx 1.08\ m_{3/2} . \label{scalarmasses} \end{aligned}$$ The imaginary part of $s$ is eaten by the $U(1)$ gauge boson, which becomes massive. Its mass is given by[^4]: $$\begin{aligned} m_{A_\mu} &= \frac{\kappa^{-1} b c }{\alpha} \notag \\ &\approx 0.87 \ m_{3/2},\label{gaugebosonmass} \end{aligned}$$ where the last line was obtained by the relation between the parameters eq. (\[ca1\]) and by substituting the numerical values for $\gamma=1$ eq. (\[parameterchoice\]). The Goldstino, which is a linear combination of the gaugino, the z-fermion and the s-fermion, is eaten by the gravitino, which in turn becomes massive. The masses of the remaining two hidden sector fermions are calculated in Appendix \[Appendix:Fermions\] and their values for $\gamma=1$ are given by $$\begin{aligned} m_{\chi_1} & \approx 2.27 \ m_{3/2}, \notag \\ m_{\chi_2} & \approx 0.12 \ m_{3/2}.
\end{aligned}$$ Tree level soft masses ---------------------- The goal of this section is to couple the model above, which allows for a TeV gravitino and an infinitesimally small cosmological constant, to the MSSM and to calculate the resulting soft breaking terms. As mentioned above, for simplicity we take the MSSM-like fields $\varphi_\alpha$ to be chargeless under the extra $U(1)$. They can then easily be coupled to the above model in the following way: $$\begin{aligned} \mathcal K &= - \kappa^{-2} \log (s + \bar s) + \kappa^{-2} b (s + \bar s) + z \bar z + \sum_\alpha \varphi \bar \varphi , \notag \\ W &= \kappa^{-3} a (1+\gamma \kappa z) + W_{\text{MSSM}}(\varphi) , \notag \\ f_R(s) &= 1, \ \ \ \ \ \ \ \ \ \ \ \ \ f_A(s) = 1/g_A^2. \label{modelMSSM} \end{aligned}$$ The various MSSM multiplets are labeled by an index $\alpha$, which is omitted for simplicity. The Standard Model gauge groups are labeled by an index $A$, while the extra $U(1)$ will be referred to with an index $R$. Note that all gauge kinetic functions are taken to be constants. The scalar potential is now given by $$\begin{aligned} V &= V_F + V_D, \notag \\ V_F &= \kappa^{-4} \frac{e^{b(s + \bar s) + z\bar z + \varphi \bar \varphi} }{s + \bar s} \left( \sigma_s A(z,\bar z, \varphi, \bar \varphi) + B(z, \bar z, \varphi, \bar \varphi) + \kappa^4 \sum_\alpha \left| \nabla_\alpha W \right|^2 \right), \notag \\ V_D &= \kappa^{-4} \frac{c^2}{2} \left( b - \frac{1}{s + \bar s} \right)^2, \label{scalarpotMSSM} \end{aligned}$$ where $$\begin{aligned} A(z,\bar z, \varphi, \bar \varphi) &= \left| a + a \gamma \kappa z + \kappa^3 W_{\text{MSSM} } \right|^2 \notag \\ B(z, \bar z, \varphi, \bar \varphi) &= \left| a \gamma + \kappa \bar z (a + a \gamma z + \kappa^3 W_{\text{MSSM} } ) \right| ^2 \notag \\ \left| \nabla_\alpha W \right|^2 &= \left| \partial_\alpha W_{\text{MSSM}} + \kappa^2 \bar \varphi W \right|^2\, .
\end{aligned}$$ It can be easily seen that the resulting scalar potential has a minimum at $\langle \varphi \rangle = \langle W_{\text{MSSM}} \rangle = 0$, in which case the potential of the last section is reproduced and its conclusions are still valid. For example, $A(z,\bar z, \varphi,\bar \varphi)|_{\langle z \rangle = t, \langle \varphi \rangle = 0} = a^2 A(t)$ and $B(z,\bar z, \varphi,\bar \varphi)|_{\langle z \rangle = t, \langle \varphi \rangle = 0} = a^2 B(t)$, where $A(t)$ and $B(t)$ are defined in eq. (\[AB\]). The second derivatives of the potential, evaluated on the ground state, are given by $$\begin{aligned} \partial_\varphi \partial_{\bar \varphi} V &= \frac{\kappa^{-2} a^2 b e^{\alpha + t^2}}{\alpha} \left[ (\sigma_s + 1 ) A(t) + B(t) + \kappa^2 W_{\varphi \varphi} \bar W_{\bar \varphi \bar \varphi} \right], \notag \\ \partial_\varphi \partial_\varphi V &= \frac{ \kappa^{-1} ab W_{\varphi \varphi} e^{\alpha + t^2}}{\alpha} \left[ (\sigma_s +2) (1+ \gamma t) + t (\gamma +t + \gamma t^2) \right]. \end{aligned}$$ There is no mass mixing between the different $\varphi^\alpha$ (except of course for the $B_0$ term defined below), nor between the MSSM fields and $z$ or $s$. Let us now specify the MSSM superpotential $$\begin{aligned} W_{MSSM}= y_u^{ij} \bar u_i Q_j \cdot H_u - y_d^{ij} \bar d_i Q_j \cdot H_d - y_e^{ij} \bar e_i L_j \cdot H_d + \mu H_u \cdot H_d. \label{MSSMsuperpot} \end{aligned}$$ Note that in the scalar potential eq. (\[scalarpotMSSM\]) the MSSM F-terms $\sum_\alpha \left| \nabla_\alpha W \right|^2 $ come with a prefactor $\exp (\alpha + t^2) b/\alpha$ (where the fields have been replaced by their VEVs). To bring this into a conventional form, one should rescale the MSSM superpotential $$\begin{aligned} \hat W_{MSSM} = \sqrt{\frac{b} {\alpha}} e^{\alpha/2 + t^2/2 } \ W_{MSSM}.
\label{MSSMhat} \end{aligned}$$ Then the squark and slepton soft masses are given by $$\begin{aligned} m^2_{\tilde Q} &= m^2_{\tilde {\bar u}}= m^2_{\tilde {\bar d}}= m^2_{\tilde L} = m^2_{\tilde {\bar e}} = m_0^2 \ \mathbb{I}, \notag \\ m_0^2 &= \kappa^{-2} b a^2 \frac{e^{\alpha + t^2 } }{\alpha} \left[ A(t) \left( \sigma_s + 1 \right) + B(t)\right]. \end{aligned}$$ Here, $\mathbb{I}$ is the unit matrix in family space. The trilinear couplings are given by $$\begin{aligned} &a_u = A_0 \hat y_u, \ \ \ \ \ \ \ a_d = A_0 \hat y_d, \ \ \ \ \ \ \ a_e = A_0 \hat y_e, \notag \\ &A_0= \kappa^{-1}a \sqrt{\frac{b}{\alpha}} e^{(\alpha + t^2)/2 } \left[ (\sigma_s +3) (1+\gamma t) + t(\gamma + t +\gamma t^2) \right], \end{aligned}$$ where $\hat y_u, \hat y_d $ and $ \hat y_e$ are the Yukawa couplings of the MSSM superpotential after the rescaling of eq. (\[MSSMhat\]). Also, $$\begin{aligned} &m^2_{H_u } = m^2_{H_d} = m_0^2, \end{aligned}$$ and $$\begin{aligned} &B_0 = \kappa^{-1}a \sqrt{\frac{b}{\alpha}} e^{(\alpha + t^2)/2} \left[ (\sigma_s + 2)(1+\gamma t) + t (\gamma + t + \gamma t^2) \right], \end{aligned}$$ where $B_0$ generates a term $- \hat \mu B_0 H_u \cdot H_d + \text{h.c.}$, with $\hat \mu$ the rescaled $\mu$-parameter (in the sense of eq. (\[MSSMhat\])). In summary, in terms of the gravitino mass (eq. (\[gravitino\])), the MSSM soft terms are given by $$\begin{aligned} m_0^2 &= m_{3/2}^2 \left[ \left( \sigma_s + 1 \right) + \frac{(\gamma + t + \gamma t^2)^2}{(1 + \gamma t)^2}\right], \notag \\ A_0&= m_{3/2} \left[ (\sigma_s +3) + t \frac{ (\gamma + t +\gamma t^2) } {1+\gamma t} \right], \notag \\ B_0 &= m_{3/2} \left[ (\sigma_s + 2) + t \frac{ (\gamma + t + \gamma t^2) }{(1+\gamma t)} \right]. \label{softterms} \end{aligned}$$ Note the relation [@A-B; @relation] $$\begin{aligned} A_0 = B_0 + m_{3/2}.
\label{AB-relation} \end{aligned}$$ At tree level, the gaugino masses are given by $$\begin{aligned} m_{AB} = -\frac{1}{2} e^{\kappa^2 \mathcal K/2} f_{AB,\alpha} g^{\alpha \bar \beta} \bar \nabla_{\bar \beta} \bar W\, , \label{mgaugino} \end{aligned}$$ where the indices $A$ and $B$ label the different gauge groups and $f_{AB,\alpha}$ stands for $\partial_\alpha f_{AB}$. Since the gauge kinetic functions are constant, the tree-level gaugino masses vanish, $$\begin{aligned} \left. m_{AB} \right|_{\text{tree}} = 0. \end{aligned}$$ However, as mentioned in section \[sec:motivation\], the Kähler potential and superpotential of any ($\mathcal N = 1, D= 4$) supergravity theory are only determined up to Kähler transformations, at least classically.[^5] By applying a Kähler transformation (\[kahler\_tranformation\]) with $J= -\kappa^{-2} b s$ to the model defined in eq. (\[modelMSSM\]), one ends up with the classically equivalent theory $$\begin{aligned} \mathcal K &=- \kappa^{-2} \log (s + \bar s) + z \bar z + \sum_\alpha \varphi \bar \varphi , \notag \\ W &=\left( \kappa^{-3} a (1+\gamma \kappa z) + W_{\text{MSSM}}(\varphi) \right) e^{bs}. \label{KahlertransformedMSSM} \end{aligned}$$ Note that all classical results of the previous section also hold for this theory: Its scalar potential is given by (\[scalarpotMSSM\]) and can be tuned in exactly the same way as above. In particular, the $A_0, B_0$ and $m_0$ soft terms are again given by eqs. (\[softterms\]). However, since a Kähler transformation is anomalous [@KL], there are in general additional contributions to the effective action at the quantum level. First note that the shift symmetry (\[shift\]) of $s$ renders the superpotential not gauge invariant, $$\begin{aligned} W \longrightarrow W e^{-ibc\theta}. \end{aligned}$$ In other words, the shift symmetry has become a gauged R-symmetry. Therefore, all the fermions (including the gauginos and the gravitino) in the theory transform[^6] as well under this $U(1)_R$.
This leads to cubic $U(1)_R^3$ as well as mixed $U(1)_R \times G_{\text{MSSM}}$ anomalies. Anomalies in supergravity theories involving a gauged R-symmetry were carefully studied in [@R2; @FreedmanAnomalies]; we summarise the main results in Appendix \[Appendix:Anomalies\], where it is shown that these anomalies are cancelled by a Green-Schwarz (GS) counter term. The latter arises from a quantum correction to the gauge kinetic functions given by[^7] $$\begin{aligned} f_A(s) = 1/g_A^2 + \beta_A s. \label{gaugekineticfunction:fielddependent} \end{aligned}$$ These field-dependent gauge kinetic functions give Green-Schwarz contributions $$\begin{aligned} \mathcal L_{GS} &= \frac{1}{8} \text{Im}(f_A(s)) \epsilon^{\mu \nu \rho \sigma} F_{\mu \nu}^A F_{\rho \sigma}^A, \notag \\ \delta \mathcal L_{GS} &= - \frac{\theta \beta_A c}{8} \epsilon^{\mu \nu \rho \sigma} F_{\mu \nu}^A F^A_{\rho \sigma}. \end{aligned}$$ Anomaly cancellation then requires that (see eq. (\[anomalieformules\])) $$\begin{aligned} \beta_1 &= - \frac{ 11 b}{8 \pi^2 }, \notag \\ \beta_2 &= - \frac{ 5 b }{8 \pi^2 }, \notag \\ \beta_3 &= - \frac{3 b}{8 \pi^2 }. \end{aligned}$$ The resulting gaugino masses are given by $$\begin{aligned} \hat M_1 &= \frac{11}{16\pi^2} b g_Y^2 e^{\alpha/2} (\alpha-1), \notag \\ \hat M_2 &= \frac{5}{16\pi^2} b g_2^2 e^{\alpha/2} (\alpha-1),\notag \\ \hat M_3 &= \frac{3}{16\pi^2} b g_3^2 e^{\alpha/2} (\alpha-1).\label{gauginomassdifference} \end{aligned}$$ It is curious that the gaugino masses vanish for the model (\[modelMSSM\]), while the classically equivalent model (\[KahlertransformedMSSM\]) obtained upon a Kähler transformation has nonzero gaugino masses. This creates a puzzle about the quantum equivalence of these models. The answer to this puzzle is based on the fact that gaugino masses are present in both representations and are generated at one-loop level by an effect called Anomaly Mediation [@gauginomass; @bagger].
Indeed, it has been argued that gaugino masses receive a one-loop contribution due to the super-Weyl-Kähler and sigma-model anomalies. These contributions are different for both models, and we will show in section \[sec:modelR\] that the difference accounts exactly for the contributions (\[gauginomassdifference\]). Below, we compute the gaugino masses in the model (\[modelMSSM\]) coming entirely from anomaly mediation. The “anomaly mediated” gaugino mass contribution $M_{1/2}$ is given by [@bagger] $$\begin{aligned} M_{1/2} = - \frac{g^2}{16 \pi^2} \left[ (3 T_G - T_R) m_{3/2} + (T_G - T_R) \mathcal K_\alpha F^\alpha + 2 \frac{T_R}{d_R} (\log \det \mathcal K|_R \ '')_{,\alpha} F^\alpha \right] , \label{gaugino mass} \end{aligned}$$ where $T_G$ is the Dynkin index of the adjoint representation, normalized to $N$ for $SU(N)$, and $T_R$ is the Dynkin index associated with the representation $R$ of dimension $d_R$, equal to $1/2$ for the $SU(N)$ fundamental. An implicit sum over all matter representations is understood. The quantity $3T_G - T_R$ is the one-loop beta function coefficient. The expectation value of the auxiliary field $F^\alpha$, evaluated in the Einstein frame, is given by $$\begin{aligned} F^\alpha = - e^{ \kappa^2 \mathcal K/2} g^{\alpha \bar \beta} \bar \nabla_{\bar \beta} \bar W. \end{aligned}$$ Clearly, for the Kähler potential (\[modelMSSM\]) the last term in eq. (\[gaugino mass\]) vanishes. However, the second term survives due to the presence of Planck scale VEVs for the hidden sector fields $s$ and $z$.
By using the gravitino mass (\[gravitino\]), the above expression can be rewritten as $$\begin{aligned} M_{1/2} = - \frac{g^2}{16 \pi^2} m_{3/2} \left[(3 T_G - T_R) - (T_G - T_R) \left( (\alpha-1)^2 + t \frac{\gamma + t + \gamma t^2}{1 + \gamma t} \right) \right]. \end{aligned}$$ For $U(1)_Y$ we have $T_G = 0$ and $T_R = 11$, for $SU(2)$ we have $T_G = 2$ and $T_R = 7$, and for $SU(3)$ we have $T_G = 3$ and $T_R = 6$, such that for the different gaugino mass parameters this gives (in a self-explanatory notation): $$\begin{aligned} M_1 &= 11 \frac{g_Y^2}{16 \pi^2} m_{3/2} \left[ 1 - (\alpha -1)^2 - \frac{ t(\gamma + t + \gamma t^2)}{1 + \gamma t} \right], \notag \\ M_2 &= \frac{g_2^2}{16 \pi^2} m_{3/2} \left[1 - 5 (\alpha-1)^2 -5 \frac{ t (\gamma + t + \gamma t^2)}{1 + \gamma t} \right], \notag \\ M_3 &= - 3 \frac{g_3^2}{16 \pi^2} m_{3/2} \left[ 1 + (\alpha - 1)^2 + \frac{ t (\gamma + t + \gamma t^2) }{1 + \gamma t} \right]. \label{m1m2m3} \end{aligned}$$ For example, if we choose $\gamma=1$ (as in eq. (\[parameterchoice\])) the above equations give $$\begin{aligned} M_1 & \approx 0.05 \ g_Y^2 \ m_{3/2}, \notag \\ M_2 & \approx 0.048 \ g_2^2 \ m_{3/2}, \notag \\ M_3 & \approx 0.052 \ g_3^2 \ m_{3/2}. \end{aligned}$$ Accidentally, these relations are compatible with gauge coupling unification. Indeed, if we now assume that the gauge couplings unify at some unification scale, $\frac{5}{3} g_Y^2\equiv g_1^2 = g_2^2 = g_3^2 = 0.51$, we get the gaugino masses at this scale $$\begin{aligned} M_1 & \approx 0.015\ m_{3/2}, \notag \\ M_2 & \approx 0.025\ m_{3/2},\notag \\ M_3 & \approx 0.026\ m_{3/2}. \end{aligned}$$ The gaugino masses for other values of $\gamma$ are listed in table \[tanbeta\] below.
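The $\gamma = 1$ numbers above, and the $M_1$, $M_2$, $M_3$ columns of table \[tanbeta\] below, follow directly from eqs. (\[m1m2m3\]) once the corresponding $(t, \alpha)$ values are inserted and the unified couplings are used. A minimal numeric sketch (the $(t, \alpha)$ pairs are read off from table \[tanbeta\]; agreement holds only to the displayed precision):

```python
import math

# (t, alpha) per gamma from table [tanbeta]; the last three entries are the
# quoted |M_1|, |M_2|, |M_3| at the unification scale, in units of m_{3/2}.
rows = {
    0.6: (0.446, -0.175, 0.017, 0.026, 0.027),
    1.0: (0.409, -0.134, 0.015, 0.025, 0.026),
    1.1: (0.386, -0.120, 0.015, 0.024, 0.026),
    1.4: (0.390, -0.068, 0.014, 0.023, 0.026),
    1.7: (0.414, -0.002, 0.013, 0.022, 0.025),
}

g2sq = g3sq = 0.51          # unified couplings: (5/3) g_Y^2 = g_2^2 = g_3^2
gYsq = 0.51 * 3 / 5

def gaugino_masses(gamma, t, alpha):
    """|M_1|, |M_2|, |M_3| from eqs. (m1m2m3), in units of m_{3/2}."""
    r = t * (gamma + t + gamma * t**2) / (1 + gamma * t)
    pre = 1 / (16 * math.pi**2)
    M1 = 11 * pre * gYsq * abs(1 - (alpha - 1)**2 - r)
    M2 = pre * g2sq * abs(1 - 5 * (alpha - 1)**2 - 5 * r)
    M3 = 3 * pre * g3sq * (1 + (alpha - 1)**2 + r)
    return M1, M2, M3

for gamma, (t, alpha, *quoted) in rows.items():
    M = gaugino_masses(gamma, t, alpha)
    # Reproduce the table to ~0.001, and check M_1 < M_2 (Bino-like LSP).
    assert all(abs(m - q) < 1e-3 for m, q in zip(M, quoted))
    assert M[0] < M[1]
```

The same function with $g_A^2$ factors stripped reproduces the coefficients $0.05$, $0.048$, $0.052$ quoted for $\gamma = 1$.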
Note that in a similar way, the trilinear terms $A_0$ also receive corrections proportional to $$\begin{aligned} \delta A_{ijk} = - \frac{1}{2} \left( \gamma_i + \gamma_j + \gamma_k \right) m_{3/2}, \end{aligned}$$ where the $\gamma_i$ are the anomalous dimensions of the fields entering the corresponding cubic term in the superpotential. These contributions, however, are small compared to the tree-level value in eq. (\[softterms\]). Although the gaugino masses are generated at one-loop, our model is very different from a mAMSB (minimal Anomaly Mediated Supersymmetry Breaking) [@gauginomass] scenario: In mAMSB, the second and third terms in eq. (\[gaugino mass\]) are missing due to the absence of hidden sector fields with a Planck scale VEV. In our model, however, the second term in eq. (\[gaugino mass\]) is present because of the non-vanishing F-terms of the $s$ and $z$ fields; it raises the gaugino masses slightly, to the order $M_{1/2} \approx 2 \times 10^{-2} \ m_{3/2}$, compared to $M_{1/2} \approx 10^{-3} - 10^{-2} \ m_{3/2}$ for a mAMSB where only the first term in eq. (\[gaugino mass\]) is non-vanishing. Another important difference is that we have $M_{1} < M_{2}$, which results in a mostly Bino-like LSP (Lightest Supersymmetric Particle), compared with a mostly Wino-like LSP in mAMSB. Note also that we do not have any danger of tachyonic scalar soft masses because of the presence of a tree-level soft mass $m_0$ in eq. (\[softterms\]). We also have tree-level trilinear couplings $A_0$, which are not present in the mAMSB. Our model is also different from the minimal supergravity mediated scenario (mSUGRA) [@mSUGRA]. Indeed, in mSUGRA the gaugino masses are imposed to be universal at tree level at the GUT unification scale, of the order of $m_0$ (to within an order of magnitude), leading at low energies to the ratios $M_3 \ : \ M_2 \ : \ M_1 = g_3^2 \ : \ g_2^2 \ : \ g_1^2$, while our model has vanishing tree-level gaugino masses. They are generated at one-loop and do not satisfy the above relation.
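Returning to the tree-level soft terms: the relation $A_0 = B_0 + m_{3/2}$ of eq. (\[AB-relation\]) is an identity of eq. (\[softterms\]), since $A_0$ and $B_0$ share the same $t$-dependent piece and differ only by the constant shift $(\sigma_s + 3) - (\sigma_s + 2) = 1$ in units of $m_{3/2}$. A minimal numeric sketch (the values of $\sigma_s$, $\gamma$ and $t$ below are arbitrary test points, not the physical ones):

```python
def soft_terms(sigma_s, gamma, t, m32=1.0):
    """Tree-level A_0 and B_0 of eq. (softterms), in units of m32."""
    common = t * (gamma + t + gamma * t**2) / (1 + gamma * t)
    A0 = m32 * ((sigma_s + 3) + common)
    B0 = m32 * ((sigma_s + 2) + common)
    return A0, B0

# A_0 = B_0 + m_{3/2} holds for any (sigma_s, gamma, t):
for sigma_s, gamma, t in [(0.0, 1.0, 0.409), (0.5, 1.4, 0.390), (2.0, 0.6, 0.446)]:
    A0, B0 = soft_terms(sigma_s, gamma, t)
    assert abs(A0 - (B0 + 1.0)) < 1e-12
```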
Since the gaugino masses are generated at one-loop, they are much smaller than the other soft terms. We conclude that although the soft terms $m_0, A_0$ and $B_0 = A_0 - m_{3/2}$ are similar to those of an mSUGRA scenario, the anomaly mediated gaugino masses (which, on top of the usual AMSB contribution proportional to the beta function, receive another contribution from the Planck scale VEVs of $s$ and $z$) are not universal and are much smaller. Therefore, the particle spectrum will more closely resemble that of a mAMSB scenario, with the important difference that the lightest neutralino is Bino-like instead of Wino-like (see section \[sec:pheno\]). Kähler transformation and gaugino masses {#sec:modelR} ======================================== In this section we show that the gaugino masses of the model (\[modelMSSM\]) and of the model obtained after a Kähler transformation (\[KahlertransformedMSSM\]) match. While in the first the gaugino masses are generated at one-loop by eq. (\[gaugino mass\]), the second receives an extra contribution due to a field-dependent part in the gauge kinetic functions which is needed to cancel the mixed $U(1)_R \times G$ anomalies by a Green-Schwarz counter term. The anomalous contributions to the gauge transformations are proportional to $\mathcal C_A$, given by $$\begin{aligned} \label{CA} \mathcal C_A \delta^{ab} &= {\text{Tr}}\left[ R_\psi (\tau^a \tau^b )_A \right] + T_{G_A} \delta^{ab} R_\lambda\, , \end{aligned}$$ where $A=Y,2,3$ labels the Standard Model gauge groups. The R-charge of the matter fermions is $R_\psi = bc/2$, while the gauginos carry a charge $R_\lambda = -bc/2$, such that eq. (\[CA\]) can be rewritten as $$\begin{aligned} \mathcal C_A = \frac{bc}{2} \left(T_{R_A} - T_{G_A} \right). \end{aligned}$$ Anomaly cancellation (as in eq.
(\[anomalieformules\])) then requires that $$\begin{aligned} \beta_A = \frac{ \mathcal C_A} {4 \pi^2 c }\, . \label{anomalymatching} \end{aligned}$$ The effect of these (quantum) corrections to the gauge kinetic functions compared to the classically equivalent theory in eq. (\[modelMSSM\]) is that non-zero gaugino masses $m_R$ for the R-gaugino and for the Standard Model gauginos $m_A$ are now generated because of a field-dependent gauge kinetic function, on top of the “anomaly mediation” contribution (\[gaugino mass\]). The corresponding contribution to the gaugino masses can be calculated using eq. (\[mgaugino\]) together with the anomaly matching condition, eq. (\[anomalymatching\]): $$\begin{aligned} m_{A} &= -\frac{g_A^2}{2} e^{\kappa^2 \mathcal K/2} \beta_A g^{\alpha \bar \beta} \bar \nabla_{\bar \beta} \bar W \notag \\ &= \frac{g_A^2}{16 \pi^2} b (T_G - T_R) e^{\kappa^2 \mathcal K/2} g^{\alpha \bar \beta} \bar \nabla_{\bar \beta} \bar W\, , \label{gauginoRmass1} \end{aligned}$$ where it is taken into account that the masses of the MSSM gauginos calculated by (\[mgaugino\]) need a rescaling proportional to $g_A^2$ due to their non-canonical kinetic terms: $$\begin{aligned} \mathcal L/e &= -\frac{1}{2} \text{Re}(f)_A \bar \lambda^A \cancel D \lambda^A \notag \\ &= -\frac{1}{2} \left(\frac{1}{g_A^2} + \beta_A \frac{\alpha}{b} \right) \bar \lambda^A \cancel D \lambda^A, \label{gauginoRmass} \end{aligned}$$ where $\beta_A \frac{\alpha}{b} \ll g_A^{-2}$ if the gauge coupling is in the perturbative region. On the other hand, since the Kähler potential differs by a linear term $b(s + \bar s)$, the contribution of the second term in eq. (\[gaugino mass\]) differs by an amount $$\begin{aligned} \delta m_{A} = \frac{g_A^2}{16 \pi^2} (T_G - T_R) b e^{\kappa^2 \mathcal K/2} g^{\alpha \bar \beta} \bar \nabla_{\bar \beta} \bar W, \end{aligned}$$ which exactly coincides with eq. (\[gauginoRmass1\]).
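Both ingredients of this matching can be checked independently: the coefficients $T_{R_A} - T_{G_A}$ entering $\mathcal C_A$ give the ratios $11 : 5 : 3$ when recomputed from the three-generation MSSM spectrum plus two Higgs doublets (with $T_R = \sum Y^2$ for $U(1)_Y$ in the normalization used here), and the equality of $m_A$ in eq. (\[gauginoRmass1\]) with $\delta m_A$ is an algebraic identity once $\beta_A = \mathcal C_A/(4\pi^2 c)$ is substituted. A numeric sketch (the values of $b$, $c$, $g_A^2$ and the F-term factor are arbitrary test inputs):

```python
import math
from fractions import Fraction as F

# Dynkin indices of the three-generation MSSM plus two Higgs doublets.
ngen = 3
T_R3 = F(1, 2) * ngen * (2 + 1 + 1)        # SU(3): Q (2 comps), u-bar, d-bar
T_R2 = F(1, 2) * (ngen * (3 + 1) + 2)      # SU(2): Q (3 colors), L, H_u, H_d
Y2_gen = 6*F(1, 6)**2 + 3*F(2, 3)**2 + 3*F(1, 3)**2 + 2*F(1, 2)**2 + 1
T_R1 = ngen * Y2_gen + 4 * F(1, 2)**2      # U(1)_Y: sum of Y^2, incl. Higgs
T_G = (0, 2, 3)
assert tuple(tr - tg for tr, tg in zip((T_R1, T_R2, T_R3), T_G)) == (11, 5, 3)

# GS-induced gaugino mass vs. the shift of the anomaly-mediated term, for
# arbitrary (unphysical) test values; Fterm stands for the common factor
# e^{kappa^2 K/2} g^{s sbar} (nabla_s W)bar.
b, c, gAsq, TG, TR, Fterm = 0.3, 1.7, 0.4, 2.0, 7.0, 1.0
C_A = 0.5 * b * c * (TR - TG)              # eq. (CA)
beta_A = C_A / (4 * math.pi**2 * c)        # eq. (anomalymatching)
m_GS = -0.5 * gAsq * beta_A * Fterm        # eq. (gauginoRmass1), first line
dm_AM = gAsq / (16 * math.pi**2) * (TG - TR) * b * Fterm
assert abs(m_GS - dm_AM) < 1e-15           # the two contributions coincide
```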
We conclude that even though the models (\[modelMSSM\]) and (\[KahlertransformedMSSM\]) differ by a (classical) Kähler transformation, they generate the same one-loop gaugino masses, given in eq. (\[m1m2m3\]). While the one-loop gaugino masses for the model (\[modelMSSM\]) are generated entirely by eq. (\[gaugino mass\]), the gaugino masses for the model (\[KahlertransformedMSSM\]) after a Kähler transformation have a contribution from eq. (\[gaugino mass\]) as well as from a field-dependent gauge kinetic term whose presence is necessary to cancel the mixed $U(1)_R \times G$ anomalies due to the fact that the extra $U(1)$ has become an R-symmetry giving an R-charge to all fermions in the theory. Phenomenology {#sec:pheno} ============= The results for the soft terms calculated in section \[sec:extrahidden\], evaluated for different values of the parameter $\gamma$, are summarised in table \[tanbeta\]. For every $\gamma$, the corresponding $t$ and $\alpha$ are calculated by imposing a vanishing cosmological constant by eqs. (\[ca\]) and (\[ca2\]). The scalar soft masses and trilinear terms are then evaluated by eqs. (\[softterms\]) and the gaugino masses by eqs. (\[m1m2m3\]). Note that the relation (\[AB-relation\]), namely $ A_0 = B_0 + m_{3/2} $, is valid for all $\gamma$. We therefore do not list the parameter $B_0$.

  $\gamma$   t       $\alpha$   $m_0$   $A_0$   $M_1$   $M_2$   $M_3$   $\tan \beta (\mu> 0)$   $\tan \beta (\mu < 0)$
  ---------- ------- ---------- ------- ------- ------- ------- ------- ----------------------- ------------------------
  0.6        0.446   -0.175     0.475   1.791   0.017   0.026   0.027                           
  1          0.409   -0.134     0.719   1.719   0.015   0.025   0.026                           
  1.1        0.386   -0.120     0.772   1.701   0.015   0.024   0.026   46                      29
  1.4        0.390   -0.068     0.905   1.646   0.014   0.023   0.026   40                      23
  1.7        0.414   -0.002     0.998   1.588   0.013   0.022   0.025   36                      19

  : The soft terms (in terms of $m_{3/2}$) for various values of $\gamma$.
If a solution to the RGE exists, the value of $\tan \beta$ is shown in the last columns for $\mu >0$ and $\mu <0$ respectively.[]{data-label="tanbeta"} In most phenomenological studies, $B_0$ is traded for $\tan \beta$, the ratio of the two Higgs VEVs, as an input parameter for the renormalization-group equations (RGE) that determine the low energy spectrum of the theory. Since $B_0$ is not a free parameter in our theory, but is fixed by eq. (\[AB-relation\]), this corresponds to a definite value of $\tan \beta$. For more details see [@Ellis] (and references therein). The corresponding $\tan \beta$ for a few particular choices of $\gamma$ are listed in the last two columns of table \[tanbeta\] for $\mu>0$ and $\mu<0$ respectively. No solutions were found for $\gamma < 1.1$, for either sign of $\mu$. Some characteristic masses [@softsusy] for $\gamma=1.4$ as a function of the gravitino mass are shown in figure \[plotm32\]. A lower experimental bound of 1 TeV for the gluino mass (vertical dashed line) forces $m_{3/2} \gtrsim 15$ TeV. On the other hand, for $\mu > 0$ ($\mu <0$) no viable solution for the RGE was found when $m_{3/2}\gtrsim 30$ TeV ($m_{3/2}\gtrsim 35$ TeV). We conclude that (for $\gamma = 1.4$) $$\begin{aligned} & 15 \text{ TeV} \lesssim m_{3/2} \lesssim 30 \text{ TeV} && \text{for }\mu > 0, \notag \\ & 15 \text{ TeV} \lesssim m_{3/2} \lesssim 35 \text{ TeV} && \text{for }\mu < 0. \end{aligned}$$ As we will see below, these upper bounds can differ for different choices of $\gamma$. ![The masses (in TeV) of the sbottom squark (yellow), the stop squark (black), the gluino (red), the lightest chargino (green) and the lightest neutralino (blue) as a function of the gravitino mass for $\gamma=1.4$ and for $\mu >0$ (left) and $\mu<0$ (right). The mass of the lightest neutralino varies slightly between 42 GeV (46 GeV) for $m_{3/2}=10$ TeV and 138 GeV (149 GeV) for $m_{3/2}=30$ TeV for $\mu>0$ ($\mu<0$).
The vertical dashed line at $m_{3/2} \approx 15$ TeV indicates the exclusion limit (lower bound) on the gluino mass. []{data-label="plotm32"}](plot_posmu_m32){width="\textwidth"} ![The masses (in TeV) of the sbottom squark (yellow), the stop squark (black), the gluino (red), the lightest chargino (green) and the lightest neutralino (blue) as a function of the gravitino mass for $\gamma=1.4$ and for $\mu >0$ (left) and $\mu<0$ (right). The mass of the lightest neutralino varies slightly between 42 GeV (46 GeV) for $m_{3/2}=10$ TeV and 138 GeV (149 GeV) for $m_{3/2}=30$ TeV for $\mu>0$ ($\mu<0$). The vertical dashed line at $m_{3/2} \approx 15$ TeV indicates the exclusion limit (lower bound) on the gluino mass. []{data-label="plotm32"}](plot_negmu_m32){width="\textwidth"} In figure \[plotgamma\], the same spectrum is plotted as a function of $\gamma$ for $m_{3/2}=25$ TeV. As one can see, the stop mass varies strongly with $\gamma$, and can become relatively light when $\gamma \approx 1.1$. For all values of $\gamma$ the LSP is given by the lightest neutralino and, since $M_1 < M_2$ (see table \[tanbeta\]), the lightest neutralino is mostly Bino-like, in contrast with a typical mAMSB scenario, where the lightest neutralino is mostly Wino-like [@winolike]. ![The masses (in TeV) of the sbottom squark (yellow), the stop squark (black), the gluino (red), the lightest chargino (green) and the lightest neutralino (blue) as a function of $\gamma$ for $m_{3/2}=25$ TeV and for $\mu >0$ (left) and $\mu<0$ (right). No solutions for the RGE were found for $\gamma <1.1$. Notice that for $\gamma \rightarrow 1.1$ the stop mass becomes relatively light.[]{data-label="plotgamma"}](plot_posmu_gamma){width="\textwidth"} ![The masses (in TeV) of the sbottom squark (yellow), the stop squark (black), the gluino (red), the lightest chargino (green) and the lightest neutralino (blue) as a function of $\gamma$ for $m_{3/2}=25$ TeV and for $\mu >0$ (left) and $\mu<0$ (right).
No solutions for the RGE were found for $\gamma <1.1$. Notice that for $\gamma \rightarrow 1.1$ the stop mass becomes relatively light.[]{data-label="plotgamma"}](plot_negmu_gamma){width="\textwidth"} To get a lower bound on the stop mass, the sparticle spectrum is plotted in figure \[fig\_11\] (left) as a function of the gravitino mass for $\gamma=1.1$ and $\mu >0$ (for $\mu <0$ the bound is higher). As above, the experimental limit on the gluino mass forces $m_{3/2} \gtrsim 15$ TeV. In this limit the stop mass can be as low as 2 TeV. To obtain an upper bound on the stop mass, on the other hand, the sparticle spectrum is plotted in figure \[fig\_11\] (right) for $\gamma =1.7$ and $\mu>0$. Above a gravitino mass of approximately 30 TeV, no solutions to the RGE were found. In this limit the stop mass is about 15 TeV. ![The masses (in TeV) of the sbottom squark (yellow), the stop squark (black), the gluino (red), the lightest chargino (green) and the lightest neutralino (blue) as a function of $m_{3/2}$ for $\gamma=1.1$ (left) and for $\gamma=1.7$ (right), for $\mu>0$. For $\gamma=1.1$ (left) no solutions to the RGE were found when $m_{3/2} \gtrsim 45$ TeV, while for $\gamma=1.7$ (right) no solutions were found when $m_{3/2} \gtrsim 30$ TeV. The lower bound corresponds in both cases to a gluino mass of 1 TeV. []{data-label="fig_11"}](plot_posmu_gamma1){width="\textwidth"} ![The masses (in TeV) of the sbottom squark (yellow), the stop squark (black), the gluino (red), the lightest chargino (green) and the lightest neutralino (blue) as a function of $m_{3/2}$ for $\gamma=1.1$ (left) and for $\gamma=1.7$ (right), for $\mu>0$. For $\gamma=1.1$ (left) no solutions to the RGE were found when $m_{3/2} \gtrsim 45$ TeV, while for $\gamma=1.7$ (right) no solutions were found when $m_{3/2} \gtrsim 30$ TeV. The lower bound corresponds in both cases to a gluino mass of 1 TeV.
[]{data-label="fig_11"}](plot_posmu_gamma7){width="\textwidth"} To conclude, the lower-end mass spectrum consists of (very) light charginos (with a lightest chargino between 250 and 800 GeV) and neutralinos, with a mostly Bino-like neutralino as LSP ($80-230$ GeV), which would distinguish this model from the mAMSB where the LSP is mostly Wino-like. These upper limits on the LSP and the lightest chargino imply that this model could in principle be excluded in the next LHC run. In order for the gluino to escape experimental bounds, the lower limit on the gravitino mass is about 15 TeV. The gluino mass is then between 1 and 3 TeV. This, however, forces the squark masses to be very high ($10-35$ TeV), with the exception of the stop mass which can be relatively light ($2-15$ TeV). Non-canonical Kähler potential for the visible sector {#sec:noncan} ===================================================== Since the model (\[model0\]) has tachyonic soft scalar masses for the MSSM fields, in section \[sec:extrahidden\] we proposed a solution by adding an extra field to the hidden sector. However, we will show in this section that the problem can also be circumvented by allowing non-canonical kinetic terms for the MSSM fields. We consider the following model $$\begin{aligned} \mathcal K &= - \kappa^{-2} \log (s + \bar s) + \kappa^{-2} b (s + \bar s) + (s + \bar s)^{-\nu} \sum \varphi \bar \varphi , \notag \\ W &= \kappa^{-3} a + W_{MSSM} , \notag \\ f(s) &= 1, \ \ \ \ \ \ \ f_A(s)=1/g_A^2 , \label{model_noncan} \end{aligned}$$ where a sum over all visible sector fields $\varphi$ is understood in the Kähler potential. Here, $\nu$ is considered to be an additional parameter in the theory, where $\nu=1$ corresponds to the leading term in the Taylor expansion of $-\log(s + \bar s - \varphi \bar \varphi)$. The gauge kinetic functions for the Standard Model gauge groups $f_A(s)$ are taken to be constants.
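The statement that $\nu = 1$ reproduces the leading term of $-\log(s + \bar s - \varphi \bar \varphi)$ is just the first-order Taylor expansion $-\log(S - x) = -\log S + x/S + \mathcal O(x^2)$ with $S = s + \bar s$ and $x = \varphi \bar \varphi$, so the coefficient of $\varphi \bar \varphi$ is indeed $(s + \bar s)^{-1}$. A small numeric sketch of this expansion (the test values are arbitrary):

```python
import math

S = 2.3        # stands for s + sbar (arbitrary test value)
x = 1e-5       # stands for phi phibar, taken small

exact = -math.log(S - x)
first_order = -math.log(S) + x / S        # nu = 1: coefficient (s + sbar)^(-1)
# The truncation error is O(x^2):
assert abs(exact - first_order) < 10 * (x / S)**2
```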
The scalar potential is given by $$\begin{aligned} V &= V_F + V_D, \notag \\ V_F &= \kappa^{-4} \frac{e^{b(s + \bar s) + \sum \kappa^2 \left( s + \bar s\right)^{-\nu} \varphi \bar \varphi}}{s + \bar s} \left(-3W\bar W + g^{s \bar s} \left|\nabla_s W \right|^2+ \sum_\varphi (s + \bar s)^{\nu} \left| \nabla_\varphi W \right|^2 \right), \notag \\ V_D &= \frac{c^2}{2} \left(b - \frac{1}{s + \bar s} - \nu (s + \bar s)^{-\nu -1} \sum \varphi \bar \varphi \right)^2, \label{scalarpotnoncan}\end{aligned}$$ where $$\begin{aligned} \nabla_\alpha W = \partial_\alpha W + \kappa^2 (\partial_\alpha \mathcal K) W. \end{aligned}$$ Since the visible sector fields appear only in the combination $\varphi \bar \varphi$, their VEVs vanish provided that the scalar soft masses squared are positive. Moreover, for vanishing visible sector VEVs, the scalar potential reduces to eq. (\[scalarpot0\]) and the non-canonical Kähler potential for the visible sector fields does not change the discussion on the minimization of the potential in section \[sec:model\]. Therefore, the non-canonical Kähler potential does not change the fact that the F-term contribution to the soft scalar masses squared is negative. As in eq. (\[tach\]), one has $$\begin{aligned} \left. \frac{\partial^2 V_F}{\partial \varphi \partial \bar \varphi} \right|_{\langle \varphi \rangle = 0} = \kappa^{-2}a^2 e^{\alpha} \left(\frac{b}{\alpha} \right)^{\nu + 1} (\langle \sigma_s \rangle +1) <0. \end{aligned}$$ However, the visible fields enter the D-term scalar potential through the derivative of the Kähler potential with respect to $s$. Even though this has no effect on the ground state of the potential, the $\varphi$-dependence of the D-term scalar potential does result in an extra contribution to the scalar masses squared $$\begin{aligned} \left. 
\frac{\partial^2 V_D}{\partial \varphi \partial \bar \varphi} \right|_{\langle \varphi \rangle = 0} = \nu \kappa^{-2} c^2 \left( \frac{b}{\alpha} \right)^{\nu + 2} \left(1-\alpha \right).\end{aligned}$$ The total soft mass squared is then the sum of these two contributions $$\begin{aligned} m_0^2 &= \kappa^2 a^2 \left(\frac{b}{\alpha} \right) \left( e^\alpha (\sigma_s +1) + \nu \frac{A(\alpha)}{\alpha} (1-\alpha) \right),\end{aligned}$$ where eq. (\[bsalpha\]) has been used to relate the constants $a$ and $c$, and corrections due to a small cosmological constant have been neglected. A field redefinition due to a non-canonical kinetic term $g_{\varphi \bar \varphi} = (s + \bar s)^{-\nu}$ is taken into account. The soft mass squared is now positive if $$\begin{aligned} \nu > -\frac{e^\alpha (\sigma_s +1) \alpha }{ A(\alpha) (1-\alpha)} \approx 2.6. \end{aligned}$$ The gravitino mass is given by $$\begin{aligned} m_{3/2} = \kappa^{-1}a \sqrt{b/\alpha} e^{\alpha/2}.\end{aligned}$$ In the hidden sector, the imaginary part of $s$ is eaten by the gauge boson corresponding to the shift symmetry, which becomes massive (similar to eq. (\[gaugebosonmass\])) $$\begin{aligned} m_{A_\mu} = \frac{\kappa^{-1} b c }{\alpha} \approx 1.39 \ m_{3/2}. \end{aligned}$$ The squared mass of the real part of $s$ is given by $2 ( \alpha/b)^2 \partial_s \partial_s V$ evaluated at the ground state, where the factor $2(\alpha/b)^2$ comes from the non-canonical kinetic term, $$\begin{aligned} m_s^2 &=2 \left(\alpha ^4-2 \alpha ^2+4 \alpha +\frac{e^{-\alpha } (3-2 \alpha ) A}{\alpha }-4\right) m_{3/2}^2 \notag \\ &\approx 3.48 \ m_{3/2}^2.\end{aligned}$$ Finally, the Goldstino, a linear combination of the fermionic superpartner of $s$ and the gaugino, is eaten by the gravitino via the super-BEH mechanism.
The mass of the remaining fermion is given by (see Appendix \[Appendix:Fermions\]) $$\begin{aligned} m_f^2 &\approx 3.81 \ m_{3/2}^2.\end{aligned}$$ Note that in the scalar potential eq. (\[scalarpotnoncan\]) the MSSM F-terms $\sum_\varphi \left| \nabla_\varphi W \right|^2 $ come with a prefactor $e^{\kappa^2 \mathcal K} g^{\varphi \bar \varphi} $ (where the hidden fields are replaced by their VEVs). To bring this into a more recognizable (globally supersymmetric) form where $\mathcal L \sim - g_{\varphi \bar \varphi} \partial_\mu \varphi \partial^\mu \bar \varphi - g^{\varphi \bar \varphi} W_\varphi \bar W_{\bar \varphi}$, one should rescale the MSSM superpotential (defined in eq. (\[MSSMsuperpot\])) $$\begin{aligned} \hat W_{MSSM} = \exp (\alpha) \left(b/\alpha \right) \ W_{MSSM}. \label{MSSMhatnoncan} \end{aligned}$$ However, another rescaling is needed to take into account the non-canonical Kähler potential for the visible sector[^8]. The trilinear couplings are given by $$\begin{aligned} &A_0= m_{3/2}(s + \bar s)^{\nu/2} \left(\sigma_s + 3 \right), \end{aligned}$$ and $$\begin{aligned} &B_0 = m_{3/2} (s + \bar s)^{\nu/2} \left(\sigma_s + 2 \right). \end{aligned}$$ The main phenomenological properties of this model are not expected to be different from those of the model analyzed in section \[sec:pheno\], with the parameter $\nu$ replacing $\gamma$. Gaugino masses are still generated at the one-loop level, while the soft scalar sector is mSUGRA-like. We therefore do not repeat the phenomenological analysis for this model. Conclusions =========== In this work, we studied a simple supergravity model that allows for an infinitesimally small value of the cosmological constant, while leaving the supersymmetry breaking scale as an independent parameter. The minimal model contains a single chiral multiplet $S$ (a dilaton) which has a gauged shift symmetry, and a vector multiplet. Supersymmetry breaking is then realised by an expectation value of both an F- and a D-term.
A Kähler potential of the form $K=-p\log(s + \bar s)$ is assumed, while the most general superpotential is a single exponential. By performing a Kähler transformation, the exponential superpotential can be absorbed into a linear term in the Kähler potential and one is left with a constant superpotential. Gauge invariance then dictates a constant gauge kinetic term, since otherwise a linear contribution would break the (local) shift symmetry. We showed that when this model is coupled to the MSSM, it leads to tachyonic scalar soft masses. This can be cured by adding an extra Polonyi-like field, or by allowing for non-canonical kinetic terms of the Standard Model fields, while maintaining the desirable features of the model. This, however, introduces an extra parameter $\gamma$ (or $\nu$ in the second case), which turns out to be heavily constrained: $\gamma$ should be in the range $\left[ 1.1, 1.707 \right]$, where the lower bound prevents a tachyonic stop squark mass, and the upper bound follows from the tunability of the scalar potential. Since a Kähler transformation can bring the theory from a constant superpotential to a theory with an exponential superpotential where the shift symmetry is a gauged R-symmetry, but with non-trivial gauge kinetic functions, there is an apparent puzzle with the gaugino masses, which vanish classically in the first representation but not in the second. Indeed, in the latter case all fermions in the theory are charged under $U(1)_R$, leading to anomalies that are cancelled by a Green-Schwarz mechanism due to a gauge kinetic function which is linear in $S$. However, this also results in non-zero gaugino masses, while in the former case the gaugino masses vanish. We have shown that when the ‘anomaly mediated’ contributions to the gaugino masses are included, the gaugino masses on both sides of the Kähler transformation match.
Since the soft SUSY breaking parameter $B_0$ is related to the trilinear coupling by $B_0 = A_0 - m_{3/2}$, the ratio between the two Higgs VEVs $\tan \beta$ is not a free parameter and the model turns out to be very predictive. The low energy spectrum of the theory consists of (very) light neutralinos, charginos and gluinos, where the experimental bounds on the (mostly Bino-like) LSP, the lightest chargino and the gluino mass force the gravitino mass to be above 15 TeV. This in turn implies that the squarks are very heavy, with the exception of the stop squark which can be as light as 2 TeV when the parameter $\gamma$ approaches its lowest limit $\gamma \rightarrow 1.1$. It follows that the resulting spectrum can be distinguished from other models of supersymmetry breaking and mediation such as mSUGRA and mAMSB. Acknowledgements {#acknowledgements .unnumbered} ================ R.K. would like to thank J.P. Derendinger, J. Ellis and F. Zwirner for useful discussions. This research was (partly) supported by the NCCR SwissMAP, funded by the Swiss National Science Foundation. 
Fermion masses {#Appendix:Fermions} ============== The fermion mass Lagrangian for the chiral fermions $\chi^\alpha$, the gauginos $\lambda^A$ and the gravitino $\psi_\mu$ is given by [@VP] $$\begin{aligned} \mathcal L_m = \frac{1}{2} m_{3/2} \bar \psi_\mu P_R \gamma^{\mu \nu} \psi_\nu - \frac{1}{2} m_{\alpha \beta} \bar \chi^\alpha \chi^\beta - m_{\alpha A} \bar \chi^\alpha \lambda^A - \frac{1}{2} m_{AB}\bar \lambda^A P_L \lambda^B + \text{h.c.} \end{aligned}$$ where $$\begin{aligned} m_{\alpha \beta} &= e^{\kappa^2 \mathcal K/2} \left[ \partial_\alpha + (\kappa^2 \partial_\alpha \mathcal K) \right] \nabla_\beta W - e^{\kappa^2 \mathcal K/2} \Gamma^\gamma_{\alpha \beta} \nabla_\gamma W, \notag \\ m_{\alpha A} &= m_{A \alpha} = i \sqrt 2 \left[\partial_\alpha \mathcal P - \frac{1}{4} f_{AB,\alpha} \text{Re}(f)^{-1 \ BC} \mathcal P_C \right] , \notag \\ m_{AB} &= -\frac{1}{2}e^{\kappa^2\mathcal K/2}f_{AB,\alpha} g^{\alpha \bar \beta} \bar \nabla_{\bar \beta} \bar W. \end{aligned}$$ Here, $\Gamma^\alpha_{ \beta \gamma} = g^{\alpha \bar \delta} \partial_\beta g_{\gamma \bar \delta}$ is the Christoffel connection, whose only non-vanishing component is $\Gamma^s_{ss} = -\frac{2}{s + \bar s}$. The moment maps $\mathcal P_\alpha$ are defined in eq. (\[momentmap\]), while $m_{AB} =0$ since the gauge kinetic function is constant. The Goldstino $P_L \nu$ is given by $$\begin{aligned} P_L \nu = \chi^\alpha \delta_s \chi_\alpha + P_L \lambda^A \delta_s P_R \lambda_A, \end{aligned}$$ where $P_{L(R)}$ is the left-handed (right-handed) projection operator. As before, chiral multiplets are labeled by the index $\alpha$, while the different gauge groups are labeled by the index $A$. The ‘fermion shifts’ (the scalar parts of the supersymmetry transformation rules) are given by $$\begin{aligned} \delta_s \chi_\alpha &= - \frac{1}{\sqrt 2} e^{\kappa^2 \mathcal K/2} \nabla_\alpha W, \notag \\ \delta_s P_R \lambda_A &= -\frac{i}{2} \mathcal P_A.
\end{aligned}$$ Due to the super-BEH effect, elimination of the Goldstino will give mass to the gravitino $$\begin{aligned} m_{3/2} = \kappa^2 e^{\kappa^2 \mathcal K/2} W. \end{aligned}$$ As a result, the mass matrix for the fermions becomes $$\begin{aligned} m = \begin{pmatrix} m_{\alpha \beta} + m_{\alpha \beta}^{(\nu)} & m_{\alpha B} + m_{\alpha B}^{(\nu)} \\ m_{A \beta} + m_{A \beta}^{(\nu)} & m_{AB} + m_{AB}^{(\nu)} \end{pmatrix}, \end{aligned}$$ where the corrections to the fermion mass terms due to the elimination of the Goldstino are given by $$\begin{aligned} m_{\alpha \beta}^{(\nu)} &= - \frac{4 \kappa^2}{3 m_{3/2}} (\delta_s \chi_\alpha)(\delta_s \chi_\beta), \notag \\ m_{\alpha A}^{(\nu)} &=- \frac{4 \kappa^2}{3 m_{3/2}} (\delta_s \chi_\alpha)(\delta_s P_R \lambda_A ),\notag \\ m_{AB}^{(\nu)} &= - \frac{4 \kappa^2}{3 m_{3/2}} (\delta_s P_R \lambda_A)(\delta_s P_R \lambda_B ). \end{aligned}$$ Since the elimination of the Goldstino results in a reduction of the rank of $m$, its determinant vanishes and the physical masses correspond to the non-zero eigenvalues of $m$. The fermion mass matrix for the model in section \[sec:noncan\] for the fermionic superpartner of $s$ and the gaugino corresponding to the shift symmetry (\[shift\]) is then given by $$\begin{aligned} m = \kappa^{-1} \left( \begin{array}{cc} \left( \frac{\alpha}{b}\right)^2 \frac{ a e^{\alpha /2} \left(\alpha ^2+4 \alpha -2\right)}{3 \left(\alpha/b \right)^{5/2}} & - \left( \frac{\alpha}{b}\right) \frac{i \sqrt{2} b^2 c \left(\alpha ^2-2 \alpha -2\right)}{3 \alpha ^2} \\ - \left( \frac{\alpha}{b}\right) \frac{i \sqrt{2} b^2 c \left(\alpha ^2-2 \alpha -2\right)}{3 \alpha ^2} & \frac{c^2 e^{-\frac{\alpha }{2}} (\alpha -1)^2}{3 a \left(\alpha/b \right)^{3/2}} \\ \end{array} \right), \end{aligned}$$ where the factors $\left( \frac{\alpha}{b}\right)$ have been taken into account due to non-canonical kinetic terms for the chiral fermions.
The gaugino already has canonical kinetic terms since $f(s)=1$. The hidden sector fermions do not mix with the fermions of the MSSM. Also, the determinant of $m$ is proportional to $(2 + 8 \alpha - 3 \alpha^2 - 2 \alpha^3 + \alpha^4)$, which indeed has a root at $\alpha \approx -0.23315$. The mass squared of the physical fermion is then given by $$\begin{aligned} m_f^2 &= \left(2\alpha /b \right)^2 {\text{Tr}}\left[ m^\dagger m \right] = m_{3/2}^2 f_\chi, \end{aligned}$$ where $$\begin{aligned} f_\chi&= \frac{e^{-2 \alpha } \left(e^{2 \alpha } \alpha ^2 \left(\alpha ^2+4 \alpha -2\right)^2+(\alpha -1)^4 A(\alpha)^2+4 e^{\alpha } \alpha \left(\alpha ^2-2 \alpha -2\right)^2 A(\alpha)\right)}{9 \alpha ^2} \notag \\ &\approx 3.807, \end{aligned}$$ and we have used the relations between the parameters and the numerical values for $\alpha$ and $A(\alpha)$ in eqs. (\[bsalpha\]). We now calculate the fermion masses for the model with the extra hidden sector field $z$ in section \[sec:extrahidden\]. This model contains one extra hidden sector fermion. 
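Before turning to that model, the numbers quoted above can be cross-checked numerically. This is a consistency sketch, not an independent derivation: $\alpha \approx -0.23315$ is the determinant root quoted in the text, while the value $A(\alpha) \approx -0.361$ below is an assumption inferred from the quoted scalar mass $m_s^2 \approx 3.48\, m_{3/2}^2$, since eq. (\[bsalpha\]) is not reproduced in this excerpt.

```python
import math

alpha = -0.23315   # determinant root quoted in the text
A = -0.361         # assumed: inferred from m_s^2 ~ 3.48 m_{3/2}^2

# The determinant of the 2x2 fermion mass matrix is proportional to this polynomial:
det_poly = 2 + 8*alpha - 3*alpha**2 - 2*alpha**3 + alpha**4
assert abs(det_poly) < 1e-3   # alpha is (approximately) a root

# m_s^2 / m_{3/2}^2 from the expression for the real part of s:
ms2 = 2*(alpha**4 - 2*alpha**2 + 4*alpha
         + math.exp(-alpha)*(3 - 2*alpha)*A/alpha - 4)

# f_chi from the expression for the physical fermion mass:
f_chi = math.exp(-2*alpha)*(
    math.exp(2*alpha)*alpha**2*(alpha**2 + 4*alpha - 2)**2
    + (alpha - 1)**4 * A**2
    + 4*math.exp(alpha)*alpha*(alpha**2 - 2*alpha - 2)**2 * A
) / (9*alpha**2)

print(round(ms2, 2), round(f_chi, 2))   # 3.48 3.8
```

With this single pair $(\alpha, A)$ both quoted values, $m_s^2 \approx 3.48\, m_{3/2}^2$ and $f_\chi \approx 3.807$, are recovered, which supports the internal consistency of the spectrum.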
Its mass matrix is given by $\kappa^{-1}$ times $$\begin{aligned} \notag \hspace{-1.5cm} \left( \begin{array}{ccc} \left(\frac{\alpha}{b}\right)^2 \frac{ a e^{\frac{1}{2} \left(t^2+\alpha \right)} \left(-2+4 \alpha +\alpha ^2\right) (1+t \gamma )}{3 \left(\frac{\alpha }{b}\right)^{5/2}} & \left(\frac{\alpha}{b}\right) \frac{a e^{\frac{1}{2} \left(t^2+\alpha \right)} (-1+\alpha ) \left(t+\gamma +t^2 \gamma \right)}{3 \left(\frac{\alpha }{b}\right)^{3/2}} & -\left(\frac{\alpha}{b}\right) \frac{i \sqrt{2} b^2 c \left(-2-2 \alpha +\alpha ^2\right)}{3 \alpha ^2} \\ \left(\frac{\alpha}{b}\right) \frac{a e^{\frac{1}{2} \left(t^2+\alpha \right)} (-1+\alpha ) \left(t+\gamma +t^2 \gamma \right)}{3 \left(\frac{\alpha }{b}\right)^{3/2}} & \frac{a e^{\frac{1}{2} \left(t^2+\alpha \right)} \left(2 t \gamma +2 t^3 \gamma -2 \gamma ^2+t^4 \gamma ^2+t^2 \left(1+2 \gamma ^2\right)\right)}{3 \sqrt{\frac{\alpha }{b}} (1+t \gamma )} & -\frac{i \sqrt{2} b c (-1+\alpha ) \left(t+\gamma +t^2 \gamma \right)}{3\alpha ( 1+t \gamma )} \\ -\left(\frac{\alpha}{b}\right) \frac{i \sqrt{2} b^2 c \left(-2-2 \alpha +\alpha ^2\right)}{3 \alpha ^2} & -\frac{i \sqrt{2} b c (-1+\alpha ) \left(t+\gamma +t^2 \gamma \right)}{3\alpha (1 +t \gamma )} & \frac{b^2 c^2 \sqrt{\frac{\alpha}{b}} e^{\frac{1}{2} \left(-t^2-\alpha \right)} \left(1-\frac{1}{\alpha }\right)^2}{3 a(1+ t \gamma )} \end{array} \right). \end{aligned}$$ It has been checked that the determinant of this matrix vanishes for $\alpha$ and $t$ satisfying eqs. (\[ca\]) and (\[ca2\]). The masses of the physical fermions are the two non-zero eigenvalues of this matrix. The result, however, is quite tedious and we only state the numerical values for $\gamma=1$: $$\begin{aligned} m_{\chi_1} & \approx 2.57 \ m_{3/2}, \notag \\ m_{\chi_2} & \approx 0.12 \ m_{3/2}.
\end{aligned}$$ Anomaly cancellation: {#Appendix:Anomalies} ===================== In this Appendix we calculate the cubic $U(1)_R^3$ and the mixed $U(1)_R \times G_{\text{SM}}$ anomaly cancellation conditions of the model presented in section \[sec:modelR\]. In a theory with a gauged R-symmetry, the superpotential transforms under a gauge transformation as $\delta W = -i\xi \theta W$, where $\theta$ is the gauge parameter of the shift symmetry (\[shift\]), and $\xi = bc$. Then the charges of all chiral fermions are shifted by $+\xi/2$, so that they become $R_\psi = \xi/2$. The gauginos and the gravitino have a charge $R_\lambda = -\xi/2$. The quantum anomalies of such models are studied in full detail in [@FreedmanAnomalies; @R2]. We summarise their results and apply them to our model. For the MSSM (fermion) fields, we use the quantum numbers in table \[table:charges\]. Q u d L e $H_u$ $H_d$ ---------- ----------- ----------- ----------- ----------- ----------- ---------- ---------- $U(1)_R$ $ \xi /2$ $ \xi /2$ $ \xi /2$ $ \xi /2$ $ \xi /2$ $\xi /2$ $\xi /2$ $U(1)_Y$ 1/6 -2/3 1/3 -1/2 1 1/2 -1/2 $SU(2)$ 2 1 1 2 1 2 2 $SU(3)$ 3 $\bar 3$ $\bar 3$ 1 1 1 1 : Charge assignments of the various MSSM fermions.[]{data-label="table:charges"} The cubic anomaly is calculated in subsection \[Appendix:cubic\]. 
The mixed and gravitational anomalies are calculated in subsection \[Appendix:mixed\]. The cubic anomaly {#Appendix:cubic} ----------------- The one-loop anomalous contribution under a gauge transformation with parameter $\theta$ is given by $$\begin{aligned} \delta \mathcal L_{1-loop} &= - \frac{\theta}{32 \pi^2} \ \frac{ \mathcal C_R}{3} \ \epsilon^{\mu \nu \rho \sigma} F_{\mu \nu} F_{\rho \sigma} , \notag \\ \mathcal C_R &= {\text{Tr}}[R_\psi^3] + (n_\lambda + 3) R_\lambda^3 \label{1-loop} \end{aligned}$$ where $n_\lambda = 8 + 3 + 1 + 1 =13$ is the number of gauginos and the factor ‘$+3$’ comes from the gravitino (3 times the contribution of a gaugino). The $U(1)_R$ charges $R_\psi$ of the MSSM fields together with their Standard Model gauge group quantum numbers are summarised in table \[table:charges\]. The trace also includes the hidden sector fields $s$ and $z$, whose R-charge is $R_z = R_s = \xi/2$. We then obtain $$\begin{aligned} \mathcal C_R &= 3 \left[ \left(\frac{\xi}{2} \right)^3 \left(6 + 3+3+2 +1 \right) \right] + \left( \frac{\xi}{2} \right)^3 \left(2+2 \right) - \left(\frac{\xi}{2} \right)^3 (13 + 3) + 2 \left(\frac{\xi}{2} \right)^3 \notag \\ &= 35 \left( \frac{\xi}{2} \right)^3. \end{aligned}$$ Here, the term in square brackets comes from the MSSM chiral fermions (see table \[table:charges\]), with a factor 3 for the three different generations of quarks and leptons. The second term in the first line is the contribution from the Higgsinos. The third term is the contribution from the gauginos and the gravitino, while the last term comes from the two hidden sector fields $z$ and $s$.
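The counting that yields $\mathcal C_R = 35(\xi/2)^3$ can be reproduced with elementary arithmetic (a sketch, working in units of $\xi/2$):

```python
# Each entry is (number of Weyl fermions, R-charge in units of xi/2).
# Chiral fermions carry +1; gauginos and the gravitino carry -1.
contributions = [
    (3 * (6 + 3 + 3 + 2 + 1), +1),  # 3 generations of Q, u, d, L, e
    (2 + 2,                   +1),  # Higgsinos H_u, H_d
    (13 + 3,                  -1),  # 13 gauginos + gravitino (3x a gaugino)
    (2,                       +1),  # hidden-sector fermions chi_s, chi_z
]
C_R = sum(n * q**3 for n, q in contributions)
print(C_R)   # 35
```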
The one-loop contribution (\[1-loop\]) is cancelled by a Green-Schwarz mechanism: the Lagrangian contains a term $$\begin{aligned} \mathcal L_{GS} = \frac{1}{8}\text{Im}\left(f(s) \right) \ \epsilon^{\mu \nu \rho \sigma} F_{\mu \nu} F_{\rho \sigma}, \end{aligned}$$ and a gauge transformation (\[shift\]) of the gauge kinetic function $f(s) = 1 + \beta_R s$ gives a contribution $$\begin{aligned} \delta \mathcal L_{GS} = -\theta \frac{\beta_R c}{8} \ \epsilon^{\mu \nu \rho \sigma} F_{\mu \nu} F_{\rho \sigma}. \end{aligned}$$ The theory can be made gauge invariant by choosing $$\begin{aligned} \beta_R = - \frac{ \mathcal C_R}{12 \pi^2 c} = -\frac{35 b^3 c^2}{96 \pi^2}. \end{aligned}$$ The mixed anomalies {#Appendix:mixed} ------------------- We now calculate the cancellation conditions of the mixed anomalies by a Green-Schwarz mechanism. In a theory with a gauged R-symmetry, the anomalous contributions to the triangle diagrams involving the R-current and two gauge fields or gravitons are given by $$\begin{aligned} (F \tilde F)_A:& & \mathcal C_A \delta^{ab} &= {\text{Tr}}\left[ R_\psi (\tau^a \tau^b )_A \right] + T_{G_A} \delta^{ab} R_\lambda \notag \\ \mathcal R \mathcal {\tilde R}: & &C_{\text{grav}} &= {\text{Tr}}\left[ R_{\psi} \right] + n_\lambda R_\lambda - 21 R_{ \psi_{3/2} }. \end{aligned}$$ Here, $T_{G_A}\delta^{ab} = f^{acd} f^{bcd}$, with $T_{G_A} = N$ for $SU(N)$ and $0$ for $U(1)$; $A$ labels the groups $U(1)_Y$, $SU(2)_L$ and $SU(3)$. The contribution of the gravitino is $-21$ times the contribution of a gaugino.
We can now calculate the $U(1)_R \times U(1)_Y^2$ anomaly $$\begin{aligned} \mathcal C_1 &= 3 \left[ \frac{\xi}{2} \left(\frac{1}{6} + \frac{4}{3} + \frac{1}{3} + \frac{1}{2} +1 \right) \right] + \left(\frac{\xi}{2} \right) \left(\frac{1}{2} + \frac{1}{2} \right) \notag \\ &= 11 \left( \frac{\xi}{2} \right), \label{C1} \end{aligned}$$ the mixed $U(1)_R \times SU(2)$ anomaly $$\begin{aligned} \mathcal C_2 &= \frac{3}{2} \left[ \left( \frac{\xi}{2} \right) \left(3 +1 \right) \right] + \frac{1}{2} \left( \frac{\xi}{2} \right) \left(1 + 1 \right) - 2 \left( \frac{\xi}{2} \right) \notag \\ &= 5 \left( \frac{\xi}{2} \right), \label{C2} \end{aligned}$$ the mixed $U(1)_R \times SU(3)$ anomaly $$\begin{aligned} \mathcal C_3 &= \frac{3}{2} \left[ \left( \frac{\xi}{2} \right) \left(2+2 \right) \right] - 3 \left( \frac{\xi}{2} \right) \notag \\ &= 3 \left( \frac{\xi}{2} \right), \label{C3} \end{aligned}$$ and the gravitational anomaly $$\begin{aligned} \mathcal C_{\text{grav}} &= 3 \left[ \left( \frac{\xi}{2} \right) \left(6 + 3 + 3 +2 + 1 \right) \right] + \left(\frac{\xi}{2} \right) \left(2 +2 +1 +1 +21 - 13 \right) \notag \\ &= 59 \left(\frac{\xi}{2} \right) .\label{Cgrav} \end{aligned}$$ In the equations above, the term in square brackets comes from the contributions of the quarks and leptons $Q, u, d, L$ and $e$. The second term in the first line of eqs. (\[C1\]) and (\[C2\]) comes from the Higgsinos, and the last terms in the first line of eqs. (\[C2\]) and (\[C3\]) are the contributions of the gauginos ($T_G$). The contributions to the second term in the first line of eq. (\[Cgrav\]) come from the Higgsinos, $\chi_s$, $\chi_z$, the gravitino and the gauginos, respectively, where $\chi_s$ and $\chi_z$ are the superpartners of $s$ and $z$ and we have $13 = 8 + 3 + 1 +1$ gauginos. In the above expressions, we used that $T_R = 11$ for $U(1)_Y$, $T_R=7$ for $SU(2)$ and $T_R=6$ for $SU(3)$.
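The four coefficients can likewise be checked by redoing the traces with exact rational arithmetic (in units of $\xi/2$; multiplicities and $T_G$ values as in the text):

```python
from fractions import Fraction as F

# U(1)_R x U(1)_Y^2: Tr[R Y^2] over chiral fermions (T_G = 0 for U(1))
C1 = 3*(F(1, 6) + F(4, 3) + F(1, 3) + F(1, 2) + 1) + (F(1, 2) + F(1, 2))

# U(1)_R x SU(2)^2: quark and lepton doublets, Higgsinos, gaugino term T_G = 2
C2 = F(3, 2)*(3 + 1) + F(1, 2)*(1 + 1) - 2

# U(1)_R x SU(3)^2: colour triplets Q, u, d and the gaugino term T_G = 3
C3 = F(3, 2)*(2 + 2) - 3

# Gravitational: Tr[R] over chiral fermions, 13 gauginos (-), gravitino (+21)
Cgrav = 3*(6 + 3 + 3 + 2 + 1) + (2 + 2 + 1 + 1 + 21 - 13)

print(C1, C2, C3, Cgrav)   # 11 5 3 59
```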
These anomalies are cancelled by a Green-Schwarz mechanism[^9] $$\begin{aligned} \mathcal L_{GS} = \frac{1}{8} \text{Im}( s) \epsilon^{\mu \nu \rho \sigma} \left( \beta_A F_{\mu \nu}^A F_{\rho \sigma}^A + \beta_{\text{grav}} \mathcal R_{\mu \nu} \mathcal {\tilde R}_{\rho \sigma} \right), \end{aligned}$$ provided $$\begin{aligned} \mathcal C_A &= - 4 \pi^2 c \ \beta_A , \ \ \ \ \ \ \ A=1,2,3 \notag \\ \mathcal C_{\text{grav}} &= 32 \pi^2 c \ \beta_{\text{grav}}. \end{aligned}$$ This gives the anomaly cancellation conditions $$\begin{aligned} \beta_1 &= - \frac{ 11 \left(\xi/2 \right)}{4 \pi^2 c}, \notag \\ \beta_2 &= - \frac{ 5 \left(\xi/2 \right)}{4 \pi^2 c}, \notag \\ \beta_3 &= - \frac{3 \left(\xi/2 \right)}{4 \pi^2 c}. \label{anomalieformules} \end{aligned}$$ [99]{} I. Antoniadis and R. Knoops, Nucl. Phys. B [**886**]{} (2014) 43 \[arXiv:1403.1534 \[hep-th\]\]. F. Catino, G. Villadoro and F. Zwirner, JHEP [**1201**]{} (2012) 002 \[arXiv:1110.2174 \[hep-th\]\], G. Villadoro and F. Zwirner, Phys. Rev. Lett.  [**95**]{} (2005) 231602 \[hep-th/0508167\]. I. Antoniadis, D. M. Ghilencea and R. Knoops, JHEP [**1502**]{} (2015) 166 \[arXiv:1412.4807 \[hep-th\]\]. L. Randall and R. Sundrum, Nucl. Phys. B [**557**]{} (1999) 79 \[hep-th/9810155\], G. F. Giudice, M. A. Luty, H. Murayama and R. Rattazzi, JHEP [**9812**]{} (1998) 027 \[hep-ph/9810442\]. J. A. Bagger, T. Moroi and E. Poppitz, JHEP [**0004**]{} (2000) 009 \[hep-th/9911029\]. D. Z. Freedman and A. Van Proeyen, Cambridge, UK: Cambridge Univ. Pr. (2012) 607 p P. Fayet and J. Iliopoulos, Phys. Lett. B [**51**]{} (1974) 461, P. Fayet, Phys. Lett. B [**69**]{} (1977) 489. M. Gomez-Reino and C. A. Scrucca, JHEP [**0708**]{} (2007) 091 \[arXiv:0706.2785 \[hep-th\]\]. M. B. Green and J. H. Schwarz, Phys. Lett. B [**149**]{} (1984) 117. J. Polonyi, Hungary Central Inst Res - KFKI-77-93 (77,REC.JUL 78) 5p. H. P. Nilles, Phys. Rept.  [**110**]{} (1984) 1, A. Brignole, L. E. Ibanez and C. Munoz, Adv. Ser. Direct. 
High Energy Phys.  [**21**]{} (2010) 244 \[hep-ph/9707209\]. S. Ferrara, L. Girardello, T. Kugo and A. Van Proeyen, Nucl. Phys. B [**223**]{} (1983) 191, P. Binetruy, G. Dvali, R. Kallosh and A. Van Proeyen, Class. Quant. Grav.  [**21**]{} (2004) 3137 \[hep-th/0402046\]. V. Kaplunovsky and J. Louis, Nucl. Phys. B [**422**]{} (1994) 57 \[hep-th/9402005\]. D. Z. Freedman and B. Kors, JHEP [**0611**]{} (2006) 067 \[hep-th/0509217\], H. Elvang, D. Z. Freedman and B. Kors, JHEP [**0611**]{} (2006) 068 \[hep-th/0606012\]. A. H. Chamseddine, R. L. Arnowitt and P. Nath, Phys. Rev. Lett.  [**49**]{} (1982) 970. J. R. Ellis, K. A. Olive, Y. Santoso and V. C. Spanos, Phys. Lett. B [**573**]{} (2003) 162 \[hep-ph/0305212\], J. R. Ellis, K. A. Olive, Y. Santoso and V. C. Spanos, Phys. Rev. D [**70**]{} (2004) 055005 \[hep-ph/0405110\]. B. C. Allanach, Comput. Phys. Commun.  [**143**]{} (2002) 305 \[hep-ph/0104145\]. T. Gherghetta, G. F. Giudice and J. D. Wells, Nucl. Phys. B [**559**]{} (1999) 27 \[hep-ph/9904378\]. [^1]: This was already noticed in [@Gomez]. [^2]: In superfields the shift symmetry (\[shift\]) is given by $\delta S = -ic \Lambda$, where $\Lambda$ is the superfield generalization of the gauge parameter. The gauge invariant Kähler potential is then given by $\mathcal K(S , \bar S) = -p \kappa^{-2} \log (S+\bar S+cV_R)+\kappa^{-2}b(S+\bar S+c V_R)$, where $V_R$ is the gauge superfield of the shift symmetry. [^3]: If $f(s)$ is constant, the leading contribution to $V_D$ when $s + \bar s \rightarrow 0$ is proportional to $1/(s + \bar s)^2$, while the leading contribution to $V_F$ is proportional to $1/(s + \bar s)^p$. It follows that $p<2$; if $p>2$, the potential is unbounded from below, while if $p=2$, the potential is either positive and monotonically decreasing or unbounded from below when $s+ \bar s \rightarrow 0$ depending on the values of the parameters. 
[^4]: This is calculated as follows: the relevant part of the Lagrangian is $$\begin{aligned} \mathcal L/e = -\frac{1}{(s + \bar s)^2} \left(\partial_\mu s + ic A_\mu \right) \left( \partial^\mu s - ic A^\mu \right) - \frac{f(s)}{4} F_{\mu \nu} F^{\mu \nu}. \notag \end{aligned}$$ Use the gauged shift symmetry to put $\text{Im}(s) = 0$ and obtain $$\begin{aligned} \mathcal L/e = -\frac{1}{(s + \bar s)^2} \partial_\mu \text{Re}(s) \partial^\mu \text{Re}(s) - \frac{1}{4} F_{\mu \nu} F^{\mu \nu} - \frac{c^2}{(s + \bar s)^2} A_\mu A^\mu. \notag \end{aligned}$$ [^5]: This statement is only true for supergravity theories with a non-vanishing superpotential where everything can be defined in terms of a gauge invariant function $G = \kappa^2 \mathcal K + \log(\kappa^6 W \bar W)$ [@W=0]. [^6]: The chiral fermions, the gauginos and the gravitino carry a charge $bc/2$, $-bc/2$ and $-bc/2$, respectively. [^7]: Similarly, to cancel the cubic anomaly one should modify the R-gauge kinetic term as well to be $f_R(s) = 1 + \beta_R s$. It has been checked in Appendix \[Appendix:Anomalies\] that $\beta_R = - \frac{35b^3 c^2}{96 \pi^2}$ is extremely small by eqs. (\[ca1\]) and (\[a\_numerics\]), so that the classical scalar potential (\[scalarpotMSSM\]) is still valid to a very good approximation. [^8]: After the rescaling (\[MSSMhatnoncan\]), the Lagrangian contains (very schematically) the following terms $$\begin{aligned} \mathcal L = & -(s + \bar s)^{-\nu} \partial_\mu \bar \varphi \partial^\mu \varphi -(s + \bar s)^{-\nu} \partial_\mu \bar h \partial^\mu h + \hat \mu^2 \bar h h + \hat y \hat \mu \bar h \varphi \varphi + \hat y^2 \bar \varphi \bar \varphi \varphi \varphi + \dots \notag \\ & + \frac{1}{6} A_0 \hat y \varphi \varphi \varphi + \frac{1}{2} B_0 \hat \mu h h,\end{aligned}$$ where $h$ stands for the Higgsinos and $\varphi$ labels the other scalar superpartners and all indices are suppressed for clarity.
$y$ stands for the Yukawa couplings and $\mu$ is the usual $\mu$-parameter. The first line contains the kinetic terms and the F-terms coming from $\hat W_{MSSM}$. The last line contains the trilinear supersymmetry breaking terms (A-terms) and the B-term. In order to obtain canonical kinetic terms, one needs a rescaling $\varphi \rightarrow \varphi'=(s+ \bar s)^{-\nu/2} \varphi$ (and similarly for $h$). However, to bring the MSSM superpotential back into its usual form one also needs to redefine $\hat \mu \rightarrow \hat \mu'=(s+\bar s)^{\nu/2} \hat \mu$ and $\hat y \rightarrow \hat y'=(s+\bar s)^{\nu} \hat y $. One then obtains $$\begin{aligned} \mathcal L = & - \partial_\mu \bar \varphi ' \partial^\mu \varphi ' - \partial_\mu \bar h' \partial^\mu h ' + \hat \mu '^2 \bar h' h' + \hat y' \hat \mu ' \bar h' \varphi ' \varphi ' + \hat y'^2 \bar \varphi ' \bar \varphi ' \varphi ' \varphi ' + \dots \notag \\ & + \frac{1}{6} (s + \bar s)^{\nu/2} A_0 \hat y ' \varphi ' \varphi ' \varphi ' + \frac{1}{2} (s + \bar s)^{\nu/2} B_0 \hat \mu ' h' h'.\end{aligned}$$ [^9]: Appropriate counterterms that cancel the $G^2 \times U(1)_R$ mixed non-abelian anomaly and bring the abelian mixed anomaly to a covariant form are included in these results; for more information see [@FreedmanAnomalies].
--- author: - Pankaj Kumar - 'K.S. Cho' bibliography: - 'reference.bib' title: Simultaneous EUV and radio observations of bidirectional plasmoids ejection during magnetic reconnection --- INTRODUCTION ============ It is well known that magnetic reconnection plays a key role in the release of magnetic energy stored in sheared/twisted magnetic fields, leading to the onset of solar flares and coronal mass ejections (CMEs) [@priest2000; @asc2004]. In the magnetic reconnection process, oppositely directed field lines break and rejoin, resulting in the conversion of magnetic energy into thermal energy and particle acceleration (e.g., @sweet1958, @parker1963, @petschek1964). The standard model of solar flares, known as the CSHKP model, explains the energy release process [@carm1964; @stur1966; @hirayama1974; @kopp1976], which is supported by several observational findings, e.g., cusp-shaped loops [@tsuneta1992], inflows and outflows [@yokoyama2001; @savage2010; @savage2012; @takasao2012], downflow signatures [@mckenzie2000; @mckenzie2001; @innes2003; @asai2004; @savage2011], plasmoid ejections [@shibata1995], loop-top hard X-ray sources [@masuda1994; @sui2003], X-ray jets [@shibata1992; @shimojo1996], bidirectional jets [@innes1997], and flux rope/loop interactions [@kumar2010a; @kumar2010b; @kumar2010c; @torok2011]. According to the CSHKP model, magnetic reconnection takes place in a vertical current sheet located above an underlying closed loop system, and the filament/prominence eruption plays a key role in the triggering of fast reconnection [@forbes2000; @lin2000; @kumar2012a]. There are, however, some fundamental problems in the theory of magnetic reconnection. For example, in the Sweet-Parker reconnection model [@sweet1958; @parker1957], the reconnection rate is too slow (10$^{-4}$-10$^{-6}$) to explain the magnetic energy release in solar flares.
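The slowness of the Sweet-Parker rate follows directly from its scaling $M_A = S^{-1/2}$ with the Lundquist number $S$. For illustrative coronal values of $S$ (assumed here, not taken from the text) one recovers the quoted range:

```python
# Sweet-Parker reconnection rate M_A = S^(-1/2), with S the Lundquist number.
# Coronal values S ~ 1e8-1e12 are order-of-magnitude assumptions.
for S in (1e8, 1e10, 1e12):
    print(f"S = {S:.0e}  ->  M_A = {S**-0.5:.0e}")
# S = 1e+08  ->  M_A = 1e-04
# S = 1e+10  ->  M_A = 1e-05
# S = 1e+12  ->  M_A = 1e-06
```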
The @petschek1964 reconnection model predicts a faster reconnection rate (i.e., 0.01-0.1) by considering slow-mode MHD shocks in the outflow region. This model explains the observed reconnection rate in most solar flares. However, the observed value of the reconnection rate varies from $\sim$0.2-0.001 [@isobe2005; @narukage2006; @takasao2012]. MHD simulations have revealed that localized resistivity can help to achieve a fast reconnection rate (i.e., close to unity) [@ugai1977; @yokoyama1994]. When the diffusion region becomes long enough (similar to the Sweet-Parker model), the tearing-mode instability can cause the onset of bursty reconnection, which involves the formation of several magnetic islands (i.e., plasmoids) in the current sheet [@furth1963; @kliem1995; @priest2000; @shibata2011]. @shibata1995 and @shibata2001 extended the CSHKP model by unifying reconnection and plasmoid ejection, and emphasized the importance of plasmoid ejection in the reconnection process; this is known as the “plasmoid-induced-reconnection” model. In this model, magnetic energy builds up in a vertical current sheet while a plasmoid is present, and the bursty reconnection is triggered by the plasmoid ejection. When a plasmoid is ejected, an inflow is induced due to the conservation of mass, which results in an enhancement of the reconnection rate. A plasmoid formed above the current sheet can also be accelerated by the faster reconnection outflow. @nishida2009 performed MHD simulations of solar flares using different values of the resistivity and plasmoid velocity, and found that the reconnection rate correlates positively with the plasmoid velocity. Therefore, plasmoid ejection plays a key role in triggering fast magnetic reconnection and provides observational evidence of magnetic reconnection in a solar flare.
For example, in soft X-ray and EUV images, upward-moving hot plasma blobs are frequently observed [@shibata1995; @ohyama1998; @kim2005; @kumar2013a; @kumar2013], and white-light coronagraph observations often show rising blob-like features in the wake of CMEs [@ko2003; @lin2005; @liu2011]. The typical size of a plasmoid varies from $\sim$10$^{9}$ cm (in compact flares) to $\sim$10$^{11}$ cm (in CMEs) and their speed ranges between $\sim$10-1000 km s$^{-1}$ [@shibata2001]. Moreover, the sizes and velocities of the multiple plasmoids formed during bursty reconnection (due to the tearing instability) vary from $\sim$3-4$\arcsec$ and $\sim$89-460 km s$^{-1}$, respectively [@shen2011; @takasao2012]. On the basis of the numerical simulation by @samtaney2009, the number of plasmoids (and their spatial scales) formed due to the tearing-mode instability depends on the Lundquist number (i.e., S$^\frac{3}{8}$). Drifting-pulsating structures (DPSs) are slowly drifting radio spectrogram features of short duration ($\sim$1-3 min) that consist of several quasi-periodic pulsations. They are generally observed in the decimetric frequency range (e.g., $\sim$0.6-2.0 GHz) [@khan2002]. Their mean bandwidth is about 500 MHz, and the frequency drift ranges from -20 to -5 MHz s$^{-1}$ [@karlicky2012]. Based on a 2D numerical MHD simulation, @kliem2000 proposed that DPSs are generated during bursty magnetic reconnection, when interacting plasmoids are formed. They proposed that electrons are accelerated during these processes and trapped in the plasmoids, generating the individual pulses of the observed DPSs. @karl2004 used the concept of fractal reconnection proposed by @shibata2001 and considered several plasmoids to explain the simultaneous observation of several DPSs.
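The $S^{3/8}$ scaling mentioned above gives a quick order-of-magnitude estimate of how many plasmoids tearing can produce; the Lundquist number used below is an assumed illustrative value, not a measurement from this event:

```python
# Number of plasmoids from the tearing-mode scaling N ~ S^(3/8) of @samtaney2009.
S = 1e12                 # assumed coronal Lundquist number
N = S**(3/8)
print(f"N ~ {N:.1e}")    # N ~ 3.2e+04
```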
The interaction of plasmoids is also revealed in numerical simulations, and negative/positive DPSs correspond to upward/downward moving plasmoids exciting radio emission within progressively lower/higher density regions [@barta2007; @karl2007; @ning2007; @barta2008]. According to @barta2008, the upward or downward motion of a plasmoid is determined by the magnetic reconnection rate above and below the plasmoid. If the reconnection rate above the plasmoid is higher/lower than below, then the plasmoid moves downward/upward. The interaction of downward-moving plasmoids with the flare loop arcade or loop-top kernel is also noticed in simulations and observations [@barta2007; @riley2007; @kolo2007]. @milligan2010 observed a downward-moving plasmoid (12 km s$^{-1}$) interacting with a flare loop-top and showed enhanced nonthermal emission in the corona at the time of the merging, suggesting additional particle acceleration. Therefore, high-resolution observations are extremely important to support the above simulation results. So far, there has been no clear simultaneous observation of both positive and negative DPSs associated with multiple plasmoid ejections in EUV or X-ray images. Thanks to the high-resolution SDO/AIA observations, it has become possible to observe this rare phenomenon for the first time, in an X-class flare that occurred on 3 November 2011. In this paper, we present simultaneous EUV and radio observational evidence of bidirectional plasmoid ejections along a current-sheet structure during magnetic reconnection. In section 2, we present the multiwavelength observational data analysis, and in the last section we discuss our results. Observations and results ======================== The Atmospheric Imaging Assembly (AIA) instrument onboard the Solar Dynamics Observatory (SDO) mission observes full-disk images of the Sun at a spatial resolution of 1.2$^{\prime\prime}$, and its field of view covers $\sim$1.3 R$_\odot$.
In this study, we used AIA images observed in 171 Å  (Fe IX, T$\approx$0.7 MK), 131 Å  (Fe VIII/XXI, T$\approx$0.4 & 11 MK), 94 Å  (Fe XVIII, T$\approx$7 MK), and 304 Å  (He II, T$\approx$0.05 MK). The AIA images observed at 12 sec cadence cover chromospheric to coronal heights [@lemen2012], which are useful to study the evolutionary aspects of the X-class flare. We also used Helioseismic and Magnetic Imager (HMI) magnetograms to study the magnetic field configuration at the flare site [@schou2012; @scherrer2012]. The top panel of Figure \[flux\] shows the GOES soft X-ray flux profile (1-8 Å  channel) and the distance-time profiles of the upward and downward moving plasmoids (shown in Figures \[aia171\] and \[sl\]). The middle panel displays the soft X-ray flux derivative profile. The bottom panel shows 1 sec cadence relative radio flux density (in sfu) profiles at different frequencies (1415, 4995, 8800, and 15400 MHz) observed by the Sagamore Hill radio station [@straka1977]. Two stages of energy release are evident during the flare. The first (impulsive) stage of energy release was observed during 20:20-20:23 UT. Most of the nonthermal particles are accelerated during this first stage, which peaks around 20:21 UT. The 1415 MHz radio flux profile shows quasi-periodic oscillatory behavior. The radio emission at 4995, 8800, and 15400 MHz is usually attributed to gyrosynchrotron emission during the flare; the emission mechanism at high frequencies ($>$ 2 GHz) is likely gyrosynchrotron emission, whereas that below about 2 GHz is plasma emission [@dulk1985]. The second, gradual energy release takes place from 20:23 to 20:28 UT. The timing of the DPSs is indicated by two vertical dotted lines. It is important to note that the DPSs are observed during the first energy release and are well separated from the gyrosynchrotron emission produced by the accelerated electrons during the magnetic reconnection.
Filament activation ------------------- AR NOAA 11339 was located at N18E49 on 3 November 2011, with a $\beta$$\gamma$ magnetic configuration. This active region produced seven C-class, one M-class, and one X-class flare on the same day. According to the Solar Geophysical Data (SGD) report, the X1.9 flare started at 20:16 UT, peaked at 20:27 UT, and ended at 20:32 UT. Figure \[aia304\] displays selected snapshots from the AIA 304 Å channel, which represents the lower solar atmosphere, i.e., the chromosphere and transition region. These images show the pre-flare activities in and around the active region, where the X-class flare was triggered. About 50 minutes prior to the flare, we noticed the activation of a remote filament (at $\sim$19:30:08 UT) associated with small-scale brightening below it. The filament was located $\sim$50$\arcsec$ away (toward the north) from the AR (indicated by the arrow). The activated filament slowly rises and bends toward the AR after reaching a projected height of $\sim$49 Mm (see image at 19:35:08 UT). The AIA 304 Å intensity movie clearly shows the plasma flows along the filament channel from the activation site (direction is shown by arrows). At 19:50:08 UT, we observe plasma flows along the ‘A’ and ‘B’ directions toward the AR site. The filament system starts fading after 20:00 UT, but the plasma flows continue (marked by arrow). The flare was initiated at 20:16 UT, when the plasma flows along the filament channel were dissipated. The flare ribbons are observed at 20:20:08 UT (shown by the arrows). To investigate the photospheric magnetic field configuration at the flare site, we overlaid HMI magnetogram contours of positive (white) and negative (black) polarities on the AIA 304 Å image at 20:10:08 UT. This image suggests that the filament activation and brightening take place at a location of weak magnetic flux, where no sunspot was observed in the HMI. 
To explore the coronal magnetic configuration of the AR, we used AIA 131 Å images, which are shown in the bottom panel of Figure \[aia304\]. The AIA 131 Å channel is sensitive to both cool ($\sim$0.4 MK) and hot ($\sim$11 MK) plasma. The first panel, at 19:30:33 UT, shows the activation and rise of a remote kinked filament (indicated by ‘K’). Apparently the kinked filament shows one turn. The AIA 131 Å movie reveals the presence of a flux-rope structure (above the flare site) prior to the flare initiation. The flux rope was visible only in the AIA hot channels (i.e., 131 and 94 Å) and is marked by ‘hot FR’. An underlying hot loop is evident at the flare site. The footpoints of the underlying hot loop, where the flare ribbons were formed, are indicated by arrows. In the second panel, at 19:50:45 UT, we observe the downward motion of the filament apex (‘F’) near the hot flux-rope (FR) structure, where the flare was triggered. In addition, we also notice the downward motion of the plasma along ‘B’. We observed the underlying hot flare loop at 20:18:33 UT, near where the downward moving plasma along the filament channel disappeared. During the early phase of the X-class flare, we noticed the upward motion of the higher loops (see the AIA 131 Å intensity movie, Figure \[aia304\]) and the downward motion of the underlying hot loop. The downward plasma flows along the activated filament channel are likely to have interacted with (i.e., destabilized) the ambient magnetic field configuration at the flare site. However, the present observations are not sufficient to determine the exact triggering mechanism of the flare. To examine the temporal/spatial evolution of the plasma flows toward the AR, we used a slice cut ‘S1’ (AIA 131 Å image shown in the bottom panel of Figure \[aia304\]) along the bending filament channel. 
The space-time intensity plot, along with the GOES soft X-ray flux profile (in the 1-8 Å wavelength channel), is displayed in Figure \[sl131\]. The stack plot reveals the plasma flows along the filament channel until $\sim$20:14 UT (indicated by the arrows). A vertical red dotted line at $\sim$19:30 UT indicates the timing of the filament activation and associated brightening. We observed two peaks in the flux profile, at $\sim$20:22 UT and $\sim$20:27 UT, which reveal the two stages of energy release during the flare. The second peak is more intense and represents the X-class flare. The timing of the plasmoid ejections (observed in EUV) during the first, impulsive energy release is indicated by two vertical dotted (yellow) lines. Plasmoid ejections ------------------ Figure \[aia171\] displays selected snapshots of AIA 171, 131, and 94 Å during the flare impulsive phase. The first panel (at 20:21:50 UT) shows the ejection of a plasmoid (marked by an arrow) from the flare site, which continues to rise. Later, we observe multiple plasmoid ejections during the flare impulsive phase (shown in the AIA 171 Å panel at 20:23:27 UT). The AIA 171 Å intensity movie shows the interaction and coalescence of multiple plasmoids with each other. The typical thickness of the plasmoids is $\sim$3-4$\arcsec$. In the bottom panels, we observe a bright underlying flare loop and higher loops in AIA 131 and 94 Å at 20:18:33 UT. At 20:22:37 UT, multiple plasmoid ejections are observed above the hot underlying flare loop. A vertical structure above the underlying hot loop may be the current sheet (in AIA 131 and 94 Å), and the plasmoids are formed and ejected along this sheet structure. The plasmoids were observed in both hot and cool AIA channels, which suggests their multi-thermal nature. The plasmoids were impulsively ejected from the current sheet and finally diffused into the corona following a deceleration trend. 
Some of the plasmoids move downward in the corona after reaching a certain height. To estimate the rising speed, we tracked the topmost plasmoid (from the flare center) in the AIA 171 Å images shown in the top panels. The distance-time profile of this plasmoid is shown in Figure \[flux\]. A linear fit to the data points yields a mean plasmoid speed of about 247 km s$^{-1}$. We assume a four-pixel (2.4$\arcsec$) error in the distance measurement for the visually tracked topmost plasmoid, which gives an estimated error in the mean speed of $\sim$74 km s$^{-1}$. It should be noted that the measured speed is subject to projection effects and is a lower limit on the true speed of the plasmoid in 3D. Inflows/Outflows ---------------- We used slice cuts ‘S2’ and ‘S3’ on the AIA 131 Å image to investigate the inflow and outflow structures related to the flare. The directions of the slices were chosen after many trials so as to observe a clear signature of the inflow and outflow patterns. The left panel of Figure \[sl\] shows the space-time intensity distribution plots for slices ‘S2’ and ‘S3’. During the initial phase of the flare, we observed the apparent motion ($\sim$55 km s$^{-1}$) of the eastern loop toward the current sheet structure (above the underlying hot loop). At the same time, we observed the upward motion ($\sim$56 km s$^{-1}$) of the higher loops and the downward motion ($\sim$46 km s$^{-1}$) of the underlying hot loop system (slice ‘S3’). These apparent motions of the loop systems may be indirectly linked to the inflow and outflow associated with the magnetic reconnection. The outflow speed is quite low during the initial phase of the flare and probably reflects the apparent motion of the hot-loop systems. In a well-cited event, @yokoyama2001 found inflow velocities of 1.0–4.7 km s$^{-1}$ by tracking the motion of patterns above the limb in SOHO/EIT images. 
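The speed estimate above is a simple linear fit to tracked positions. The sketch below illustrates the procedure; the distance-time samples are hypothetical stand-ins for the tracked plasmoid positions (the real values are in Figure \[flux\]), the 12 s spacing follows the AIA cadence, and the $\sim$725 km-per-arcsec positional error is the four-pixel assumption stated in the text.

```python
import numpy as np

# Hypothetical distance-time samples for the tracked plasmoid:
# times in s from the first frame (AIA 12 s cadence),
# projected distances in km (illustrative values only).
t = np.array([0.0, 12.0, 24.0, 36.0, 48.0, 60.0])
d = np.array([0.0, 3.1e3, 5.9e3, 9.0e3, 11.8e3, 14.9e3])

# Linear fit: the slope is the mean plane-of-sky speed in km/s.
slope, intercept = np.polyfit(t, d, 1)

# Crude speed uncertainty from an assumed 4-pixel (2.4", ~1740 km)
# positional error spread over the tracking interval.
sigma_d = 1740.0
sigma_v = sigma_d / (t[-1] - t[0])
print(f"speed = {slope:.0f} +/- {sigma_v:.0f} km/s")
```

Such a fit on the actual tracked positions gives the $\sim$247 km s$^{-1}$ quoted above; note that the quoted $\sim$74 km s$^{-1}$ error also folds in the shorter effective tracking baseline of the real data.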
Furthermore, @narukage2006 reported inflow speeds ranging from 2.6–38 km s$^{-1}$ in a statistical study of six flare events. Recently, using SDO/AIA observations, @savage2012 found inflow speeds of the order of 100 km s$^{-1}$ and outflow speeds from $\sim$100 to 1000 km s$^{-1}$. In our observations, the speed of the pattern (i.e., the inflow speed) is higher than that reported by @yokoyama2001, but lower than that of @savage2012. However, the event studied by @savage2012 was located on the eastern limb, which may account for the higher inflow velocities, as the projection effects are minimal there. We observed the shrinkage of the underlying flare loop ($\sim$46 km s$^{-1}$) about two minutes prior to the radio emission peaks. Simultaneously, we also observed the outward motion of the higher loop ($\sim$56 km s$^{-1}$) and the apparent inflow motion of the loop ($\sim$55 km s$^{-1}$). It seems that the current sheet is formed between the underlying flare loop and the higher-loop system. Furthermore, the upward/downward motions of the loops reveal the signature of magnetic reconnection during the early phase of the flare. @sui2003 reported loop shrinkage (speed $\sim$9 km s$^{-1}$, for 2-4 minutes) before the hard X-ray peak, most likely caused by the change of magnetic field configuration as the X-point collapsed into the current sheet. Similarly, @li2005 also found flare loop shrinkage (34 GHz radio images, speed $\sim$13 km s$^{-1}$) during a rising phase of about 9 minutes. Our observational results are consistent with these reported events, although the shrinkage speed is higher in our case. However, we do not have radio and hard X-ray imaging observations to study the source motion in more detail. Some of the AIA images in other wavelengths (except 94 Å) were saturated at the flare center during the impulsive phase of the flare. Therefore, it was not possible to observe the downward moving plasmoids at these wavelengths. 
To observe the simultaneous upward and downward motion of the plasmoids during the flare impulsive phase, we created stack plots along slices S4 and S5 using AIA 94 Å images (marked in the bottom-right panel of Figure \[aia171\]). The start and end points of these slices (in arcsec) are (i) S4=(-803, 315), (-850, 390) and (ii) S5=(-804, 300), (-836, 400). The right panel of Figure \[sl\] displays the stack plots along slices S4 (top) and S5 (bottom) during 20:18-20:30 UT. Interestingly, we observed upward and downward moving plasmoids simultaneously. The positions of the upward and downward moving plasmoids are marked by red and black ‘+’ symbols, respectively. We estimated the speeds of the upward and downward moving plasmoids along slice S5. The initial speed of the upward moving plasmoid is $\sim$362 km s$^{-1}$, whereas the speed of the downward moving plasmoid is $\sim$254 km s$^{-1}$. Note that these are the initial speeds of the plasmoids tracked in the space-time plot. The speed of the upward moving plasmoid is slightly higher than the average speed ($\sim$247 km s$^{-1}$) of the topmost plasmoid reported in the previous section. The calculated speeds of the other plasmoids (along slice S4) are $\sim$152 km s$^{-1}$ (upward) and $\sim$83 km s$^{-1}$ (downward). The characteristics of the flare (i.e., inflow and outflow) are consistent with the standard flare model (i.e., CSHKP). To determine the reconnection rate from our observations, we take the outflow speed equal to the Alfven speed, i.e., V$_{o}$=V$_A$, as predicted by two-dimensional magnetic reconnection theories [@priest2000], and assume the speed of the pattern ($\sim$55 km s$^{-1}$) to be roughly equivalent to the inflow speed. The reconnection rate is then $M_{A}=\frac{V_i}{V_{o}}$$\sim$0.22 for an outflow speed of $\sim$247 km s$^{-1}$, which is consistent with the reconnection rate reported by @takasao2012. This indicates fast reconnection, as predicted by the @petschek1964 reconnection model. 
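The reconnection-rate arithmetic in the paragraph above is simple enough to check directly; a minimal sketch using the measured speeds quoted in the text:

```python
# Reconnection rate M_A = V_i / V_o, taking the outflow speed equal to
# the Alfven speed (V_o = V_A) as in 2D reconnection theory.
v_inflow = 55.0     # km/s, apparent pattern (inflow) speed
v_outflow = 247.0   # km/s, mean topmost-plasmoid (outflow) speed
M_A = v_inflow / v_outflow
print(f"M_A ~ {M_A:.2f}")  # ~0.22, i.e., in the fast (Petschek-like) regime
```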
Several plasma blobs appeared in the current-sheet structure, collided with each other, and were ejected from it. On the basis of these observational findings, we consider that the sheet structure is the current sheet and the plasma blobs are magnetic islands/plasmoids created by the tearing mode instability, as suggested by @takasao2012. Drifting Pulsating Structures (DPSs) ------------------------------------ Figure \[spectrum\] shows the dynamic radio spectrum obtained from the Green Bank Solar Radio Burst Spectrometer (GBSRBS), ranging from 400 to 1200 MHz (decimetric) with 1 s time resolution. The spectrometer is located in a radio-quiet zone at NRAO’s Green Bank site and therefore produces highly sensitive dynamic spectra with low radio-frequency interference. We observed drifting pulsating structures (DPSs) during the flare impulsive phase (20:21:24 UT–20:22:36 UT) for about 1 min. The positive and negative DPSs are marked by ‘1’ and ‘2’, respectively. It is interesting to note that at the same time we observed multiple plasmoid ejections in the AIA images (see Figure \[aia171\]). Therefore, the DPSs are closely related to the dynamics of the multiple plasmoid ejections. Negative DPSs are generally associated with upward moving plasmoids and can be used to derive the source speed. The plasma frequency is related to the local electron density by f$\approx$9$\surd$n$_e$ Hz, where n$_e$ is the electron density in m$^{-3}$. The frequency drift rate is defined as $$\frac{df}{dt}\backsimeq\frac{f}{2H}v$$ where H=$\mid$n$_e$/$\frac{dn_e}{dr}$$\mid$ is the inhomogeneous plasma density scale height in the source and v=$\frac{dr}{dt}$ is the speed of the moving emission source [@poh2007; @wang2012]. The source speed can then be estimated using $$v=2\frac{1}{f}\frac{df}{dt}H$$ where $\frac{1}{f}\frac{df}{dt}$ is the relative frequency drift rate (s$^{-1}$). 
To estimate the drift rates of the DPSs, we selected a few data points along each emission lane (marked by ‘+’ symbols) in the dynamic spectrum and fitted a straight line to obtain the slope, i.e., the frequency drift rate. The estimated relative frequency drift rates are 0.0055 and –0.0061 s$^{-1}$ for ‘1’ and ‘2’, respectively. In a different DPS event, @khan2002 measured a frequency drift rate of –2.8 MHz s$^{-1}$ at a mean frequency of 430 MHz. Our calculated relative negative drift rate is comparable with their value of about –0.0065 s$^{-1}$. The positive drift represents a downward moving emission source. From the AIA observations, the speed of the upward moving plasmoid is $\sim$247 km s$^{-1}$ (Figures \[aia171\] and \[flux\]). Therefore, using equation (2), we obtain the scale height H=2.0$\times$10$^{4}$ km. This value is in agreement with the density scale heights reported by @asc1986 (H$\sim$(2–20)$\times$10$^{3}$ km) from the analysis of several type III radio bursts and pulsations in the 100–1000 MHz frequency range. Using this scale height, we obtain a speed of $\sim$224 km s$^{-1}$ for the downward moving radio source. Note that these are the mean speeds of the upward and downward moving radio sources for the observed DPSs. The speed of the downward moving radio source (from the radio observations) is less than that of the upward moving source, i.e., the plasmoid. The observed drifting-pulsating emissions are likely generated by the plasma emission mechanism. The positive and negative DPSs imply plasma densities of 1.13$\times$10$^{10}$ to 1.72$\times$10$^{10}$ cm$^{-3}$ and 3.2$\times$10$^{9}$ to 6.2$\times$10$^{9}$ cm$^{-3}$, respectively. To determine the speed of the DPS exciter independently, using coronal density models, we tried the Newkirk one-fold and two-fold models [@newkirk1961]. However, these models do not provide reliable heights for the emission source, which they place below a heliocentric distance of 1 R$_\odot$. 
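The chain of estimates in this subsection follows directly from Eqs. (1)-(2); the sketch below reproduces it numerically. The drift rates and the upward plasmoid speed are the values quoted above, and the plasma-frequency relation (f $\approx$ 9$\surd$n$_e$ Hz for n$_e$ in m$^{-3}$, equivalently f[kHz] $\approx$ 9$\surd$n$_e$[cm$^{-3}$]) is applied in cgs units.

```python
# Sketch of the DPS drift-rate analysis (Eqs. 1-2).
v_up = 247.0            # km/s, upward plasmoid speed from AIA
rel_drift_neg = 0.0061  # |(1/f) df/dt| of the negative (upward) DPS, s^-1
rel_drift_pos = 0.0055  # relative drift of the positive (downward) DPS, s^-1

# Eq. (2) inverted: density scale height from the negative DPS lane.
H = v_up / (2.0 * rel_drift_neg)   # km, ~2.0e4
# Eq. (2) applied to the positive DPS with the same scale height.
v_down = 2.0 * rel_drift_pos * H   # km/s, ~220

def density_cm3(f_mhz):
    """Plasma density implied by emission frequency: n_e = (f[kHz]/9)^2 cm^-3."""
    return (f_mhz * 1e3 / 9.0) ** 2

print(f"H ~ {H:.2e} km, v_down ~ {v_down:.0f} km/s")
print(f"n_e(957 MHz) ~ {density_cm3(957):.2e} cm^-3")
```

With unrounded intermediate values this gives the $\sim$2.0$\times$10$^{4}$ km scale height and a downward source speed close to the $\sim$224 km s$^{-1}$ quoted above, and frequencies near the top of the GBSRBS band map onto the $\sim$10$^{10}$ cm$^{-3}$ densities inferred for the positive DPS.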
This would place the emission source below the solar surface, which is unphysical. Alternatively, we use a barometric isothermal density law for the local density scale height, as is generally done for decimetric emission [@poh2007; @poh2008]. In this method, the local electron density (n$_e$) is given by $$n_e=N_e \exp\left(-\frac{696000}{H}\left(1-\frac{1}{R}\right)\right)$$ where N$_e$ is the reference electron density at the base of the corona and R is the estimated heliocentric distance [@demoulin2000]. Assuming a reference electron density of 10$^9$ cm$^{-3}$, a local scale height of 2$\times$10$^4$ km, and the local electron densities inferred from the DPSs, the estimated exciter speeds for the positive and negative DPS are 194 and 228 km s$^{-1}$, respectively. These speeds are close to the estimates above, but the source heights are still below a heliocentric distance of 1 R$_{\odot}$ (0.92-0.93 R$_{\odot}$). Therefore, we note that without proper normalization, density models may not provide the true height of the exciter in the case of high-frequency decimetric bursts (i.e., type II) or drifting pulses [@bain2012; @cho2013]. 
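The unphysical source heights can be seen by inverting Eq. (3) for R. The sketch below does this under the stated assumptions (N$_e$ = 10$^9$ cm$^{-3}$, H = 2$\times$10$^4$ km), using the density range inferred from the positive DPS; the resulting heights fall at 0.92-0.93 R$_\odot$, i.e., below the solar surface.

```python
import math

# Invert the barometric density law (Eq. 3) for the heliocentric
# distance R (in solar radii) of the emission source.
N_e = 1e9            # cm^-3, assumed base density of the corona
H = 2.0e4            # km, local density scale height
R_SUN_KM = 696000.0

def height_from_density(n_e):
    # n_e = N_e * exp(-(R_sun/H) * (1 - 1/R))  ->  solve for R
    return 1.0 / (1.0 + (H / R_SUN_KM) * math.log(n_e / N_e))

# Density range inferred from the positive DPS.
for n_e in (1.13e10, 1.72e10):
    print(f"n_e = {n_e:.2e} cm^-3 -> R ~ {height_from_density(n_e):.2f} R_sun")
```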
Date/Time(UT) boxes log(T$_p$) (MK) log(EM$_p$) (cm$^{-5}$ K$^{-1}$) $\sigma_T$ TEM (cm$^{-5}$) L n$_e$ (cm$^{-3}$) --------------------- ------- ----------------- ---------------------------------- ------------ ----------------------- -------------- ------------------------ 03/11/2011 20:18:38 1 6.73 22.28 0.149 8.63$\times$10$^{28}$ 12$\arcsec$ 9.73$\times$10$^9 $ 2 6.80 22.31 0.149 1.10$\times$10$^{29}$ 8$\arcsec$ 1.35$\times$10$^{10}$ 3 6.92 22.39 0.149 1.71$\times$10$^{29}$ 8.5$\arcsec$ 1.63$\times$10$^{10}$ 03/11/2011 20:22:38 1 6.86 22.01 0.170 7.30$\times$10$^{28}$ 13$\arcsec$ 8.65$\times$10$^9$ 2 6.29 22.41 0.170 4.92$\times$10$^{28}$ 9$\arcsec$ 8.54$\times$10$^9$ 3 6.54 22.10 0.170 4.24$\times$10$^{28}$ 6.4$\arcsec$ 9.39$\times$10$^9$ 4 6.22 22.58 0.170 6.19$\times$10$^{28}$ 5$\arcsec$ 1.28$\times$10$^{10}$ 5 6.24 22.42 0.170 4.45$\times$10$^{28}$ 3$\arcsec$ 1.40$\times$10$^{10}$ 6 6.28 23.09 0.170 2.28$\times$10$^{29}$ 4$\arcsec$ 2.76$\times$10$^{10}$ 18/08/2010 05:10:50 1 6.39 21.90 0.175 2.01$\times$10$^{28}$ 2$\arcsec$ 1.15$\times$10$^{10} $ 2 6.30 22.13 0.175 2.76$\times$10$^{28}$ 6$\arcsec$ 7.83$\times$10$^{9}$ 3 6.18 22.03 0.175 1.62$\times$10$^{28}$ 3$\arcsec$ 8.5$\times$10$^{9}$ Differential Emission Measure (DEM) analysis of the active region ----------------------------------------------------------------- To determine the peak temperature and emission measure of the active region, we utilized AIA images in six coronal filters (i.e., 94, 171, 131, 211, 335, and 193 Å). We used the SSWIDL code developed by @asc2011 for this purpose. The coalignment of AIA images at six coronal EUV wavelengths is carried out by using the limb fitting method, with an accuracy of $<$1 pixel. The code fits a DEM solution in each pixel (or macro-pixel), which can be parametrized by a single Gaussian function that has three free parameters: the peak emission measure (EM$_p$), the peak temperature (T$_p$), and the temperature width sigma ($\sigma_T$). 
The peak emission measure (cm$^{-5}$ K$^{-1}$) and temperature (MK) maps (during the flare and plasmoid eruption) are shown in Figure \[dem1\]. To estimate the density of the hot coronal loop, the current sheet structure, and the multiple plasmoids, we selected the small boxes shown in the top and bottom panels of Figure \[dem1\]. First, we calculated the average temperature, EM, and $\sigma_T$ in the selected regions. Using these values, we estimated the total emission measure (TEM) in the selected regions using $\mathrm{TEM}=\int \! DEM(T) \, \mathrm{d} T$. Using the values of the total EM (cm$^{-5}$), we estimated the densities of the selected structures in the active region, assuming that the depth of each structure along the line of sight is nearly equal to its width [@cheng2012]. If ‘L’ is the width of the structure, then its density is given by $n_e=\sqrt{\frac{EM}{L}}$ (assuming a filling factor $\approx$1). The widths of the structures are marked in the AIA 94 Å images (Figure \[aia171\], bottom panels). All the estimated values are summarized in Table 1. In Figure \[dem1\], we display the peak emission measure and peak temperature maps at 20:18:38 UT (top) and 20:22:38 UT (bottom). The top panels show a dense hot underlying loop, a possible current-sheet structure, and a higher loop system above it. The estimated mean temperatures within the selected regions (indicated by 1, 2, 3) are 5.37, 6.30, and 8.31 MK, respectively. These high temperatures explain why the structures/loops are observed only in the AIA 131 and 94 Å channels, and indicate the presence of a hot flux rope before the triggering of the flare. The computed mean densities within these regions are $\sim$$9.73\times10^9$, $\sim$$1.35\times10^{10}$, and $\sim$$1.63\times10^{10}$ cm$^{-3}$, respectively. The bottom panels illustrate the ejection of multiple plasmoids along the current sheet structure. 
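The density estimate $n_e=\sqrt{EM/L}$ can be sketched as below. The arcsec-to-km conversion ($\sim$725 km per arcsec at disk center) is an assumption of this sketch, so the result only approximately reproduces the tabulated 9.73$\times$10$^{9}$ cm$^{-3}$ for region 1 at 20:18:38 UT.

```python
import math

# Density from total emission measure, n_e = sqrt(TEM / L), assuming
# the line-of-sight depth equals the apparent width L (filling factor ~1).
KM_PER_ARCSEC = 725.0  # approximate plate-scale conversion at disk center

def density(tem_cm5, width_arcsec):
    L_cm = width_arcsec * KM_PER_ARCSEC * 1e5   # arcsec -> km -> cm
    return math.sqrt(tem_cm5 / L_cm)

# Region 1 at 20:18:38 UT from Table 1: TEM = 8.63e28 cm^-5, L = 12".
print(f"n_e ~ {density(8.63e28, 12.0):.2e} cm^-3")
```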
The ejection of hot plasma, possibly from the current sheet structure, is evident and is marked by ‘1’. The peak temperature and density of the ejected hot plasma are $\sim$7.2 MK and $\sim$8.65$\times$10$^{9}$ cm$^{-3}$, respectively. The remaining boxes, ‘2’ to ‘6’, show the multiple plasmoid ejections along the current sheet. It is interesting to note that both cool and hot plasma blobs are detected along the current-sheet structure. The peak temperature of the plasma blobs varies from $\sim$1.6 to 3.4 MK, whereas the density varies from $\sim$8.54$\times$10$^{9}$ to $\sim$2.76$\times$10$^{10}$ cm$^{-3}$. Although we could observe the downward moving plasmoids in the AIA 94 Å images, it was not possible to track them in the temperature maps because of image saturation in the other AIA channels. A similar type of bidirectional multiple plasmoid ejection associated with inflows was recently reported by @takasao2012 in a C4.5 limb flare that occurred in AR NOAA 11099 on 18 August 2010. In that event, as in ours, the plasma blobs were observed in both hot and cool AIA channels during the impulsive phase of the flare. The top panels of Figure \[dem2\] (i.e., AIA 131 and 94 Å images) show the underlying hot loop and the ejection of multiple plasmoids above it, along the possible current sheet structure. To compare the characteristics of the plasma blobs in the two events, we generated the peak temperature and emission measure maps for AR NOAA 11099, shown in the bottom panel of Figure \[dem2\]. The peak temperatures and densities of the plasma blobs in regions 1, 2, and 3 vary from $\sim$1.5 to 2.4 MK and from $\sim$7.83$\times$10$^{9}$ to 1.15$\times$10$^{10}$ cm$^{-3}$, respectively. In the 18 August 2010 flare, a prominence eruption also took place near the flare site, similar to our event, followed by the flare and the ejection of multiple plasmoids. The generation of the tearing instability and the ejection of multiple plasmoids are quite similar in both events. 
Unfortunately, radio observations for the 18 August 2010 event are not available; otherwise it would have been useful to compare the dynamic radio spectra of the two events. Interestingly, we observe multi-temperature plasmoids ($\sim$1.6–3.4 MK) formed by the tearing of the current sheet during the magnetic reconnection. The observed temperature of multiple X-ray plasmoids (in a different event) associated with individual peaks of hard X-ray emission was $\sim$10 MK [@nishizuka2010]. The estimated temperature and density of an X-ray plasmoid observed in the 5 October 1992 flare were $\sim$6–13 MK and $\sim$8–15$\times$10$^{9}$ cm$^{-3}$, respectively [@ohyama1998]. In our case, the observed temperature of the plasmoids is lower than that reported in these previous studies, whereas the density is comparable. In addition, the temperature of the plasmoid was lower than that of the region between the plasmoid and the underlying flare loop, which is consistent with the magnetic reconnection model [@yokoyama1997; @yokoyama1998]. The formation of multi-temperature plasmoids during the tearing mode instability is not well understood (it is observed here for the first time), and further investigations are needed. Discussion and conclusion ========================= We studied multiwavelength observations of the X-class flare that occurred on 3 November 2011. The energy release/brightening below an activated filament induced plasma flows along the filament channel. Furthermore, the downward motion of the filament apex toward and close to the AR possibly destabilized the magnetic field configuration at the flare site, leading to the flare onset. The plasma flows along the filament channel slowed down about 15-20 min prior to the flare occurrence. The downward moving plasma flows are likely to be responsible for the onset of the inflows needed for the flare initiation. 
As shown in the numerical simulation by @shen2011, an initial perturbation applied to the current sheet induces the inflow. The AIA images reveal an underlying hot loop before the flare impulsive phase and the formation of a current sheet, possibly above the underlying hot loop. Generation of the tearing mode instability in the current sheet leads to the formation and coalescence of multiple plasmoids [@furth1963; @shibata2001]. The radio observations confirm both positive and negative DPSs simultaneously during 20:21:24 UT-20:22:36 UT. This is a signature of electron acceleration associated with the multiple bidirectional plasmoid ejections (observed in the AIA images) from the reconnection site. The interaction and coalescence of a series of plasmoids leads to effective particle acceleration and associated plasma heating [@karl2007]. Recently, AIA observations have revealed the formation of hot flux-rope structures prior to and during solar eruptions [@cheng2011; @zhang2012], which were considered drivers of the eruptions. In the solar flare event observed on 3 November 2010, a hot vertical current sheet was formed below an erupting flux-rope structure (observed in the AIA hot channels, i.e., 131 and 94 Å) [@reeves2011; @cheng2011; @kumar2013c; @hannah2013]. @pats2012 also reported the formation of a hot flux rope prior to an eruption, whose subsequent destabilization led to the onset of a flare/CME. Therefore, in our event, the hot structure above the underlying flare loop (observed in 131 and 94 Å) is the flux-rope structure, formed prior to the flare initiation. A current sheet is probably formed between the hot flux rope and the underlying flare loop. As discussed by @asc2004, a resistive instability is generated in the current sheet when the driving forces of the inflow exceed the opposing Lorentz force. 
These driving forces are produced by the sheared magnetic fields, leading to the onset of the tearing mode instability [@furth1963]. @shen2011 performed resistive MHD simulations to study the internal structure of current sheets. In their simulation, the reconnection rate becomes faster when the magnetic islands or plasma blobs are formed, and the plasmoids move in both directions (up and down) along the current sheet, with speeds ranging from 147–242 km s$^{-1}$ for upward and 89–159 km s$^{-1}$ for downward moving plasmoids. In our case, the mean speeds of the upward and downward moving plasmoids are $\sim$247 and $\sim$224 km s$^{-1}$, respectively. The speed of the downward moving plasmoids is less than that of the upward moving plasmoids. This may be because the closed flare loops prevent the blobs from moving faster in the downward/sunward direction [@shen2011]. Following the Shibata and Karlický models, we draw a schematic cartoon (Figure \[cartoon\]) to explain the scenario of the event. An underlying flare loop, inflows, and the bidirectional multiple plasmoid ejections along the current sheet structure are shown. Using AIA observations of a limb flare, @takasao2012 observed the signature of inflows and outflows associated with bidirectional plasmoid ejections. The typical size of the plasmoids in their event was about 2-3$\arcsec$, and the velocities of the upward and downward ejections were 220-460 km s$^{-1}$ and 250-280 km s$^{-1}$, respectively. The shapes, sizes, and speeds of the plasmoids (thickness about 3-4$\arcsec$) in our event are nearly consistent with their results. @takasao2012 could observe the downward moving plasmoids because theirs was a limb event (negligible projection effects). Similarly, we also observed the downward motion of the plasmoids in the AIA 94 Å images, confirmed by the positive DPSs observed in the dynamic radio spectrum. 
Multiple plasmoids/magnetic islands are probably created by the tearing mode instability in the current sheet; they collided with each other and were ejected from it. The plasmoids were visible in the AIA hot (131 and 94 Å) and cool (304 Å) channels, suggesting their multi-thermal nature. Very likely the plasmoids were heated by the coalescence process [@kliem2000]. The plasmoid ejections and DPSs are observed during the first impulsive energy release. These plasmoids can induce strong inflow along with an enhancement of the reconnection rate [@shibata2001]. Therefore, the ejection of multiple plasmoids plays an important role in triggering the second energy release. @ning2007 also observed positive DPSs during a flare on 18 March 2003 and suggested that these were probably caused by a downward moving plasmoid in the corona. Numerical simulation results also show that downward motion of plasmoids is possible in a current sheet during bursty magnetic reconnection [@karl2007; @shen2011]. Our observational finding of simultaneous upward and downward moving multiple plasmoids supports their interpretation. As discussed in the numerical simulation by @karlicky2010, pulses of electromagnetic emission are generated at the location between two interacting plasmoids just before their coalescence into a larger one. They found that Langmuir waves accumulate in the interacting plasmoids at the locations of superthermal electrons [@drake2005; @karlicky2007]. The electromagnetic waves then appear at the boundaries of the plasmoids and move outwards, where they mutually interfere and generate the short-period pulsations. Using a particle-in-cell (PIC) model with periodic boundary conditions, @oka2010 also found efficient electron acceleration by the multi-island coalescence process. 
Therefore, we expect that electron acceleration takes place between the plasmoids during their coalescence/merging, as shown in the model of @karl2004. Moreover, the densities of the plasmoid regions derived from the AIA images are consistent with the densities corresponding to the positive and negative DPSs. Thus, our observations are in agreement with the interpretations given by @karlicky2010 and @karlicky2011. Recent numerical simulations [@barta2008; @shen2011] have suggested that negative and positive DPSs are associated with upward and downward moving plasmoids, respectively, but simultaneous oppositely directed DPSs in radio, associated with a series of plasmoids in EUV, have not been reported before. In conclusion, we have reported simultaneous radio and EUV observations of multiple plasmoid ejections that moved bidirectionally along a current sheet structure during magnetic reconnection. High-resolution observations (from Hinode and SDO) of similar events, combined with radio observations, will be helpful to understand the characteristics of these plasma blobs and the associated particle acceleration in more detail. We would like to thank the anonymous referee for his/her constructive comments/suggestions and help in the interpretation of the data, which improved the manuscript considerably. SDO is a mission of NASA’s Living With a Star (LWS) program. The National Radio Astronomy Observatory (NRAO) is operated for the NSF by Associated Universities, Inc., under a cooperative agreement. PK thanks P.K. Manoharan and D.E. Innes for several helpful discussions/suggestions. This work has been supported by the “Development of Korea Space Weather Center" project of KASI and the KASI basic research fund.
--- abstract: 'A number of scenarios for the formation of multiple populations in globular clusters (GCs) predict that second generation (2G) stars form in a compact and dense subsystem embedded in a more extended first-generation (1G) system. If these scenarios are accurate, a consequence of the denser 2G formation environment is that 2G binaries should be more significantly affected by stellar interactions and disrupted at a larger rate than 1G binaries. The fractions and properties of binary stars can thus provide a dynamical fingerprint of the formation epoch of multiple-population GCs and their subsequent dynamical evolution. We investigate the connection between binaries and multiple populations in five GCs, NGC288, NGC6121 (M4), NGC6352, NGC6362, and NGC6838 (M71). To do this, we introduce a new method based on the comparison of [*Hubble Space Telescope*]{} observations of binaries in the F275W, F336W, F438W, F606W and F814W filters with a large number of simulated binaries. In the inner regions probed by our data we do not find large differences between the local 1G and the 2G binary incidences in four of the studied clusters, the only exception being M4 where the 1G binary incidence is about three times larger than the 2G incidence. The results found are in general agreement with the results of simulations predicting significant differences in the global 1G and 2G incidences and in the local values in the clusters’ outer regions but similar incidences in the inner regions. The significant difference found in M4 is consistent with simulations with a larger fraction of wider binaries. Our analysis also provides the first evidence of mixed (1G-2G) binaries, a population predicted by numerical simulations to form in a cluster’s inner regions as a result of stellar encounters during which one component of a binary is replaced by a star of a different population.' bibliography: - 'ms.bib' date: 'Accepted 2019 December 27. 
Received 2019 December 26; in original form 2019 October 8' title: 'The [*Hubble Space Telescope*]{} UV Legacy Survey of Galactic Globular Clusters. XXI. Binaries among multiple stellar populations.' --- \[firstpage\] globular clusters: general, stars: population II, stars: abundances, techniques: photometry. Introduction {#sec:intro} ============ Binary stars play a key role in many aspects of globular clusters’ (GCs) dynamics, and their evolution and survival are, in turn, significantly affected by stellar interactions in the clusters’ dense environment [see e.g. @heggie2003a]. A variety of scenarios predict that 2G stars formed in a high-density environment in the cluster center [e.g. @dercole2008a; @calura2019a]. Since the rates of binary disruption and evolution of the parameters of surviving binaries strongly depend on the stellar density, the incidence of binaries in first- and second-generation (hereafter 1G and 2G) stars of GCs can provide information and constraints on their formation environment and their long-term evolution [@vesperini2011a; @hong2015a; @hong2016a; @hong2019a]. Indeed, a number of numerical studies have shown that the [*global*]{} present-day incidence of binaries in the 2G population is expected to be lower than that of 1G stars [@vesperini2011a; @hong2015a; @hong2016a]. This is a consequence of the larger effect of the dynamical processes that determine the evolution and disruption of binary stars for the more concentrated 2G population. In the interpretation of observations covering a specific range of radial distances from a cluster’s center, it is necessary to consider that [*local*]{} values of the 1G and 2G binary incidences (i.e. values of the binary incidence measured at a given distance from a cluster’s center) are determined by a combination of dynamical effects on binary evolution and disruption and the extent of spatial mixing reached by a cluster at any given time during its dynamical evolution [@hong2019a]. 
The first attempts to infer the incidence of binaries in multiple populations were based on spectroscopy. On the basis of a study of 21 radial-velocity (RV) binaries in ten GCs, [@lucatello2015a] concluded that the fraction of binaries among 1G stars is 4.1$\pm$1.7 times higher than the fraction of binaries among 2G stars [see also @dorazi2010a]. More recently, @dalessandro2018a found that only one out of twelve RV binaries in the GC NGC6362 belongs to the 2G population. This corresponds to a fraction of binaries in the 1G and 2G populations equal to, respectively, 4.7$\pm$1.4% and 0.7$\pm$0.7%. These studies probed mainly the clusters’ regions around the half-light radius, and the differences found between the 1G and 2G binary incidences revealed a larger 1G binary incidence, in general agreement with the theoretical expectations. In the analysis presented here, we exploit multi-band [*Hubble Space Telescope*]{} ([*HST*]{}) photometry collected as part of the UV Legacy Survey of Galactic GCs [@piotto2015a] to study binaries among multiple populations in five GCs, namely NGC288, NGC6121, NGC6352, NGC6362 and NGC6838. These GCs are all relatively simple objects in the context of multiple populations and share three properties that make them ideal targets to investigate the incidence of binaries among 1G and 2G stars. - Their 1G and 2G stars exhibit moderate variations in their chemical composition, yet even so the two populations are still distinct [e.g. @marino2008a; @marino2011a; @carretta2009a]. This is in contrast with massive GCs, where 1G and 2G stars host sub-populations with large differences in helium and light-element abundance [e.g. @milone2017a; @milone2018a; @marino2019a]. - The two distinct groups of 1G and 2G stars are well separated along the main sequence (MS), sub-giant branch (SGB), and red-giant branch (RGB) either in the chromosome map (ChM) or in appropriate color-color diagrams. 
- 1G and 2G stars are distinguishable in the ChMs of MS stars that are at least two magnitudes fainter than the MS turn off in the F814W band. This paper is organized as follows. In Section \[sec:data\] we describe the data and the data reduction. The multiple populations of each cluster are discussed in Section \[sec:mpops\], where we identify the two groups of single 1G and 2G stars along the CMD. Section \[sec:bin\] is dedicated to the presentation of the results and a discussion of the connection between binaries and multiple populations. Finally, discussion and conclusions are provided in Sections \[sec:discussion\] and \[sec:conclusions\], respectively. Data and data analysis {#sec:data} ====================== The dataset used in this paper consists of images collected through the Wide Field Channel of the Advanced Camera for Surveys (WFC/ACS) and the Ultraviolet and Visual Channel of the Wide Field Camera 3 (UVIS/WFC3) on board [*HST*]{}. The main properties of the images are summarized in @piotto2015a and @milone2018a. To derive the photometry and the astrometry of all the stars we used the FORTRAN software package KS2, developed by Jay Anderson [see @sabbi2016a; @bellini2017a; @nardiello2018b for details]. KS2 is the evolution of $kitchen\_sync$, originally developed by @anderson2008a to reduce two-filter WFC/ACS globular cluster data. KS2 uses different methods to measure stars with different brightnesses. Bright stars were fit for position and flux in each individual exposure independently, using the best point-spread-function (PSF) model for the star’s location on the detector. The various measurements of each star were then averaged to derive the best estimates of stellar magnitude and position. Faint stars often do not have enough flux to measure their magnitudes and positions in individual exposures. 
Hence, the KS2 routine determines for each star an average position from all the exposures, then it fits each exposure’s pixels with the PSF, solving only for the flux. Stellar positions have been corrected for geometrical distortion by using the solutions by @bellini2009a and @bellini2011a. The photometry has been converted from the instrumental system into the Vega system as in @bedin2005a using the updated zero points of the WFC/ACS and UVIS/WFC3 filters available at the STScI web pages. We used the diagnostics of the photometric and astrometric qualities provided by KS2 to select a sample of relatively isolated stars that are well fitted by the PSF. Specifically, we exploited position and magnitude rms, the fraction of flux in the aperture due to neighbours and the quality of the PSF fit. We plotted each parameter as a function of the stellar magnitude and verified that most stars follow a clear trend in close analogy with what is done in previous papers from our group [e.g. @milone2009a; @bedin2009a]. Outliers include variable stars and stars with poor astrometry and photometry and are excluded from our investigation. The fluxes of stars in the field of view of NGC6121, NGC6352, NGC6362 and NGC6838 are significantly affected by spatial variation of the interstellar extinction. To minimize the artificial broadening of the photometric sequences in the CMDs due to spatial variations of the photometric zero points, the photometry has been corrected for differential reddening using the procedure by @milone2012a. Artificial stars ---------------- To derive the fraction of 1G and 2G stars among the binaries, we compared the observed photometric diagrams of each GC with simulations, which are constructed from artificial-star (AS) photometry. AS tests have been run by following the method by @anderson2008a. 
In a nutshell, we generated a catalog of 300,000 stars with instrumental F814W magnitudes ranging from the saturation limit of the images to the instrumental magnitude $-4.0$, which is below the detection threshold of our data. Instrumental magnitudes are defined as $-2.5 \cdot \log_{10}{\rm (flux)}$, where the flux is given in photo-electrons. The F275W, F336W, F438W, and F606W magnitudes were calculated from the colors of the fiducial lines of 1G and 2G stars, which are derived from the observed CMDs. We assigned each AS a position in such a way that the radial distribution of ASs resembles the radial distribution of stars brighter than $m_{\rm F814W}=21.0$. ASs were reduced using the same method adopted for real stars, and we included in our analysis only those ASs that pass the selection criteria used for real stars. The ASs were inserted and recovered one at a time, so that they never interfere with each other. Multiple stellar populations {#sec:mpops} ============================ As a first step to study the binaries among multiple populations, we identified 1G and 2G stars along the MS, the SGB, and the RGB. To do this, we adopted the procedure illustrated in Figure \[fig:sel\] for M4, which is based on photometric diagrams that maximize the separation between stellar populations with different chemical compositions. We used different diagrams to identify 1G and 2G stars at different brightness levels. Specifically, we defined three intervals of F814W magnitude, SI, SII, and SIII, which are indicated by the dotted lines in the $m_{\rm F814W}$ vs.$m_{\rm F606W}-m_{\rm F814W}$ and the $m_{\rm F814W}$ vs.$C_{\rm F275W,F336W,F438W}$ diagrams plotted in panels (a) and (b) of Figure \[fig:sel\]. Due to the large observational errors, we are not able to clearly distinguish 1G and 2G stars below $m_{\rm F814W}=18.5$. 
Panels (c) and (e) of Figure \[fig:sel\] show that the distribution of SI and SIII stars in the $\Delta_{C{\rm F275W,F336W,F438W}}$ vs.$\Delta_{\rm F275W,F814W}$ pseudo two-color diagram, otherwise known as a chromosome map [@milone2015a; @milone2017a], is bimodal. Similarly, the SII stars are distributed along two sequences in the $m_{\rm F336W}-m_{\rm F438W}$ vs.$m_{\rm F275W}-m_{\rm F336W}$ two-color diagram, in close analogy with what we have observed in other GCs [@milone2012c; @milone2013a; @tailo2019a]. The red lines, which are drawn by hand with the aim of separating the two main stellar sequences within each diagram, are used to define the populations of 1G and 2G stars. The 1G and 2G stars selected in panels (c), (d), and (e) are colored red and blue, respectively, in the diagrams plotted in panels (f) and (g). The red and the blue lines superposed on each diagram are the fiducials of 1G and 2G stars. To derive these lines we used a method based on the naive estimator by @silverman1986a. In a nutshell, we first defined a series of magnitude intervals of width $\nu$, from $m_{\rm F814W}=12.0$ to $18.5$. We used $\nu$=0.2, 0.1, and 0.4 for stars in the SI, SII, and SIII regions of the CMD. These intervals are defined over a grid of points separated by steps of fixed magnitude ($s=\nu/3$). For each interval we calculated the median color and magnitude and smoothed these median points by boxcar averaging, where each point is replaced by the average of the three adjacent points. We followed the procedure described above for M4 to identify 1G and 2G stars along the RGB, SGB and MS of the other studied GCs. Results are summarized in Figure \[fig:sel2\], where we use red and blue colors to represent 1G and 2G stars, respectively, in the $m_{\rm F814W}$ vs.$C_{\rm F275W,F336W,F438W}$ diagram (panels a1–a4). 
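The fiducial-line construction described above (median colors in overlapping magnitude bins on a grid of step $s=\nu/3$, then boxcar smoothing) can be sketched as follows. This is a simplified stand-in, not the authors' code; the bin placement and the treatment of the boxcar at the endpoints are our assumptions:

```python
import statistics

def smoothed_fiducial(mags, colors, nu, mag_min=12.0, mag_max=18.5):
    """Fiducial line via the naive estimator: median color in overlapping
    magnitude bins of width nu placed on a grid of step s = nu/3, then a
    boxcar average replacing each median point by the mean of itself and
    its neighbours (fewer points are averaged at the ends)."""
    step = nu / 3.0
    grid, medians = [], []
    m = mag_min
    while m + nu <= mag_max + 1e-9:
        in_bin = [c for mag, c in zip(mags, colors) if m <= mag < m + nu]
        if in_bin:  # skip empty bins
            grid.append(m + nu / 2.0)
            medians.append(statistics.median(in_bin))
        m += step
    smooth = [statistics.mean(medians[max(0, i - 1):i + 2])
              for i in range(len(medians))]
    return grid, smooth
```

Using the median per bin rather than the mean makes the fiducial robust against binaries and blends that scatter redward of the single-star sequence.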
We also show the ChMs of RGB and MS stars and the $m_{\rm F336W}-m_{\rm F438W}$ vs.$m_{\rm F275W}-m_{\rm F336W}$ two-color diagram of SGB stars that we used to select 1G and 2G stars, in close analogy with what we did for M4. The ChMs of MS stars are used to obtain the fractions of 1G stars of each cluster, which are listed in Table \[tab:res\] and derived as in @milone2017a. Binaries and multiple populations {#sec:bin} ================================= The binary systems that survive in the dense environment of a GC are the extremely tight ones. For this reason, their individual components are not resolved in the [*HST*]{} images, and each binary system appears in our images as a single point source. The position in the CMD of a binary system formed by non-interacting stars is related to the luminosity of its two components. Specifically, the magnitude of the binary system is: $$m_{\rm bin}=m_{1}-2.5 \log {\Big{(}1+\frac{F_{2}}{F_{1}}\Big{)}}$$ where $F_{1}$ and $F_{2}$ are the fluxes of the two stars and $m_{1}=-2.5 \log{F_{1}} + {\rm constant}$. In the case of a simple stellar population, the binaries formed by two stars with the same luminosity form a sequence that runs parallel to the cluster fiducial line but is $\sim$0.75 mag brighter. Binaries formed by stars with different luminosities populate the region of the CMD delimited by the fiducial lines of single stars and of the equal-mass binaries. In panels (a1) and (a2) of Figure \[fig:teo\], we plot with continuous red lines the fiducials of 1G stars in the $m_{\rm F814W}$ vs.$m_{\rm F606W}-m_{\rm F814W}$ CMD and in the $m_{\rm F814W}$ vs.$C_{\rm F275W,F336W,F438W}$ pseudo-CMD, respectively. The fiducials of binaries formed by two 1G stars with the same luminosity are represented with red dashed lines. 
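The combined magnitude of an unresolved pair follows directly from this flux sum. A minimal Python sketch (the function name is ours) also recovers the $\sim$0.75 mag offset of the equal-luminosity binary sequence, since $2.5\log 2 \simeq 0.753$:

```python
import math

def binary_magnitude(m1: float, m2: float) -> float:
    """Combined magnitude of an unresolved, non-interacting binary:
    m_bin = m1 - 2.5 log10(1 + F2/F1), with F2/F1 = 10**(-0.4*(m2 - m1))."""
    flux_ratio = 10 ** (-0.4 * (m2 - m1))
    return m1 - 2.5 * math.log10(1.0 + flux_ratio)

# Equal-luminosity pair: the binary sequence sits ~0.75 mag above the
# single-star fiducial.
m1 = 16.7
print(round(m1 - binary_magnitude(m1, m1), 3))  # → 0.753
```

A much fainter companion barely perturbs the primary's magnitude, which is why unequal-luminosity binaries fill the region between the single-star and equal-mass sequences.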
To illustrate the behaviour of a binary system composed of stars with different luminosities, we represent with a large red-starred symbol the binary system formed by two 1G MS stars with $m_{\rm F814W}$=$16.7$ and $m_{\rm F814W}$=$18.2$, whose components are indicated with small red-starred symbols. The fiducials and the binary stars introduced in panels (a1) and (a2) are reproduced in all the panels of Figure \[fig:teo\]. In panels (b1) and (b2) of Figure \[fig:teo\] we represent with blue continuous and dashed lines the fiducials of single 2G MS stars and of 2G-2G equal-luminosity binaries, respectively. 2G-2G binaries have similar $m_{\rm F606W}-m_{\rm F814W}$ colors as 1G-1G binaries with the same luminosity but substantially different values of $C_{\rm F275W,F336W,F438W}$. In the bottom panels of Figure \[fig:teo\] we consider binaries formed by 1G and 2G stars and use gray colors to represent the fiducials of equal-luminosity binaries. In panels (c1) and (c2) the brightest component of all the binary systems belongs to the 1G, while in panels (d1) and (d2) the 2G star is brighter than its 1G companion. For fixed F814W magnitudes of the 1G and 2G stars, the latter case results in smaller values of $C_{\rm F275W,F336W,F438W}$. In general, binaries formed by 1G and 2G pairs have $C_{\rm F275W,F336W,F438W}$ values that are in between those of the 1G-1G and 2G-2G binaries. The sample of binaries ---------------------- The binaries of M4 analyzed in this paper are located in the shaded yellow region of the $m_{\rm F814W}$ vs.$m_{\rm F606W}-m_{\rm F814W}$ CMD plotted in the left panel of Figure \[fig:selBIN\], which is delimited by the two yellow segments: the segment with the reddest color is the fiducial of the equal-mass 1G-1G binaries, shifted redward by twice the $m_{\rm F606W}-m_{\rm F814W}$ color error. The other yellow segment is the fiducial formed by a binary system that includes one 2G star with $m_{\rm F814W}=18.5$. 
We did not include binaries brighter than $m_{\rm F814W}=16.0$ in order to avoid contamination from single MS and SGB stars with large photometric errors. Moreover, we excluded binaries where the 2G star has $m_{\rm F814W}>18.5$ because we do not have any information on the colors of the fiducial lines at faint magnitudes and therefore could not predict the location in the CMD of the corresponding binaries. The sample of selected binaries includes the 27 objects that are marked with orange triangles in Figure \[fig:selBIN\]. The right panel of Figure \[fig:selBIN\] shows the $m_{\rm F814W}$ vs.$C_{\rm F275W,F336W,F438W}$ diagram of M4, where most of the selected binaries are located between the fiducial of single 1G stars and the fiducial of equal-mass 1G-1G binaries. This diagram is used to derive the verticalized $m_{\rm F814W}$ vs.$\Delta$($C^{\rm bin}_{\rm F275W,F336W,F438W}$) diagram of the selected binaries that we plot in the inset together with the corresponding kernel-density and cumulative distributions of $\Delta(C^{\rm bin}_{\rm F275W,F336W,F438W})$. To derive the kernel-density distribution, which is used for illustration purposes only, we adopted a Gaussian kernel with a fixed width that we derived with the rule of thumb by @silverman1986a. The abscissa is calculated as: $$\Delta(C^{\rm bin}_{\rm F275W,F336W,F438W})= [(X-X_{\rm fiducial}^{\rm 1G-1G})/(X_{\rm fiducial}^{\rm 1G}-X_{\rm fiducial}^{\rm 1G-1G})]$$ where $X$ is the $C_{\rm F275W,F336W,F438W}$ pseudo-color of the selected binaries, $X_{\rm fiducial}^{\rm 1G-1G}$ is the corresponding pseudo-color of the fiducial of equal-mass 1G-1G binaries and $X_{\rm fiducial}^{\rm 1G}$ is the pseudo-color of the fiducial of single 1G stars. 
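The verticalization above can be sketched as a one-line helper (hypothetical; in practice the two fiducial pseudo-colors are evaluated at each binary's F814W magnitude):

```python
def delta_c_bin(x: float, x_fid_1g1g: float, x_fid_1g: float) -> float:
    """Verticalized pseudo-color Delta(C^bin_{F275W,F336W,F438W}):
    0 at the equal-mass 1G-1G binary fiducial, 1 at the single-1G fiducial."""
    return (x - x_fid_1g1g) / (x_fid_1g - x_fid_1g1g)

# A binary halfway between the two fiducials lands at 0.5
# (example pseudo-colors chosen for illustration).
print(delta_c_bin(1.25, 1.5, 1.0))  # → 0.5
```

The normalization removes the magnitude dependence of the pseudo-color spread, so binaries of all luminosities can be stacked into a single one-dimensional distribution.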
The incidence of binaries among stellar populations --------------------------------------------------- To infer the fraction of 1G-1G, 2G-2G and 1G-2G binaries with respect to the total number of binaries ($f_{\rm bin}^{\rm 1G-1G}$, $f_{\rm bin}^{\rm 2G-2G}$ and $f_{\rm bin}^{\rm 1G-2G}$), we compared the observations with a grid of simulated diagrams that are derived by using the ASs. To do this, we defined a grid of values for $f_{\rm bin}^{\rm 1G-1G}$, $f_{\rm bin}^{\rm 2G-2G}$ and $f_{\rm bin}^{\rm 1G-2G}$ ranging from 0.00 to 1.00 in steps of 0.01. For each combination of $f_{\rm bin}^{\rm 1G-1G}$, $f_{\rm bin}^{\rm 2G-2G}$ and $f_{\rm bin}^{\rm 1G-2G}$, we compared the $\Delta$($C_{\rm F275W,F336W,F438W}$) kernel-density distribution of the simulated binaries with the observed distributions and calculated the corresponding $\chi^{2}$. We assumed a flat mass-ratio distribution for simulated binaries as inferred by @milone2012a from observations of binaries in Galactic GCs. We also verified that the results remain unchanged when we assume the two extreme mass ratio distributions used by @sollima2007a and @milone2012a. Specifically, we used the distribution obtained from random extractions from a @demarchi2005a initial mass function and the distribution measured by @fisher2005a and verified that the resulting values of $f_{\rm bin}^{\rm 1G-1G}$, $f_{\rm bin}^{\rm 2G-2G}$ and $f_{\rm bin}^{\rm 1G-2G}$ remain the same within 0.03. As an example, we show in the upper panels of Figure \[fig:simu1\] the simulated $m_{\rm F814W}$ vs.$m_{\rm F606W}-m_{\rm F814W}$ and $m_{\rm F814W}$ vs.$C_{\rm F275W,F336W,F438W}$ diagrams that correspond to $f_{\rm bin}^{\rm 1G-1G}=1.00$, $f_{\rm bin}^{\rm 2G-2G}=0.00$ and $f_{\rm bin}^{\rm 1G-2G}=0.00$. The yellow-shaded region defined in Figure \[fig:selBIN\] is used to identify the simulated stars that we compared with the sample of observed binaries. 
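The grid fit over the three population fractions can be sketched as below. This is a simplified stand-in: it scores binned distributions with a plain $\chi^{2}$, whereas the paper compares kernel-density distributions built from AS-based simulated binaries; names and bin handling are our assumptions:

```python
def best_fractions(obs_hist, model_hists, step=0.01):
    """Grid search over (f_1G-1G, f_1G-2G, f_2G-2G), summing to 1 in steps
    of `step`; each trial mixes the three pure-population model histograms
    and is scored by chi^2 against the observed histogram."""
    n = round(1 / step)
    best, best_chi2 = None, float("inf")
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            f = (i * step, j * step, k * step)
            mix = [f[0] * a + f[1] * b + f[2] * c
                   for a, b, c in zip(*model_hists)]
            chi2 = sum((o - m) ** 2 / m
                       for o, m in zip(obs_hist, mix) if m > 0)
            if chi2 < best_chi2:
                best, best_chi2 = f, chi2
    return best, best_chi2

# Toy example: three non-overlapping model histograms recover the mixture.
pure = ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0])
f, _ = best_fractions([0.5, 0.1, 0.4], pure)
print(tuple(round(v, 2) for v in f))  # → (0.5, 0.1, 0.4)
```

With real data the three model histograms overlap strongly, which is why the fitted fractions carry the sizeable bootstrap uncertainties quoted below.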
The selected simulated stars are marked with black circles in Figure \[fig:simu1\]. The $m_{\rm F814W}$ vs.$C_{\rm F275W,F336W,F438W}$ diagram shown in the right panel of Figure \[fig:simu1\] is used to derive the verticalized $m_{\rm F814W}$ vs.$\Delta$($C_{\rm F275W,F336W,F438W}$) diagram plotted in the inset, where we also compare the normalized cumulative distribution and the kernel-density distribution of the stars selected in the simulated diagrams (black lines) with the corresponding distributions derived in Figure \[fig:selBIN\] for the observed binaries (orange lines). The lower panels of Figure \[fig:simu1\] show the simulated $m_{\rm F814W}$ vs.$C_{\rm F275W,F336W,F438W}$ diagrams for different choices of $f_{\rm bin}^{\rm 1G-1G}$, $f_{\rm bin}^{\rm 2G-2G}$ and $f_{\rm bin}^{\rm 1G-2G}$. In the lower-left panel we assumed that all the binary systems are composed of 2G-2G pairs, while all the binaries in the lower-right panel include both 1G and 2G stars. In both cases, we obtain a poor match to the observations, as shown by the verticalized diagrams and by the corresponding cumulative and kernel-density distributions plotted in the insets. Finally, in Figure \[fig:simu2\] we show the simulated diagrams that provide the best match with the observations, which is derived as the minimum difference between the corresponding normalized cumulative distributions, as plotted in the bottom panel of the inset. The best fit corresponds to $f_{\rm bin}^{\rm 1G-1G}=0.51$, $f_{\rm bin}^{\rm 1G-2G}=0.06$ and $f_{\rm bin}^{\rm 2G-2G}=0.43$. For completeness, we compare in the middle panel of the inset the kernel-density distribution of $\Delta C^{\rm bin}_{\rm F275W,F336W,F438W}$ for the observed and the simulated binaries. The uncertainties associated with these values are calculated with a bootstrap analysis based on 30,000 samples created by random sampling with replacement of the observed binary stars. 
For each extraction we derived the fraction of 1G-1G, 1G-2G and 2G-2G binaries by using the procedure described above. The random mean scatters of the 30,000 determinations of $f_{\rm bin}^{\rm 1G-1G}$, $f_{\rm bin}^{\rm 1G-2G}$ and $f_{\rm bin}^{\rm 2G-2G}$ are 0.11, 0.04, and 0.10, respectively, and are taken as the best estimates of the corresponding uncertainties. To investigate whether the inferred results are reliable, we used ASs to generate 30,000 mock CMDs that host the same fractions of 1G-1G, 1G-2G, and 2G-2G binaries that we inferred from the observations. From each simulation we selected 27 stars located in the same region of the $m_{\rm F814W}$ vs.$m_{\rm F606W}-m_{\rm F814W}$ CMD that was defined in Figure \[fig:selBIN\] to select the sample of binaries in the observed CMD. We calculated the values of $f_{\rm bin}^{\rm 1G-1G}$, $f_{\rm bin}^{\rm 1G-2G}$ and $f_{\rm bin}^{\rm 2G-2G}$ in each simulation by using the same procedure described above for real stars. The average values of the 1G-1G, 1G-2G, and 2G-2G binary fractions that we obtained from the 30,000 simulated CMDs are identical to the values that we inferred from the observations, while the uncertainties associated with $f_{\rm bin}^{\rm 1G-1G}$, $f_{\rm bin}^{\rm 1G-2G}$ and $f_{\rm bin}^{\rm 2G-2G}$ are slightly smaller and correspond to 0.09, 0.03, and 0.09, respectively. These results ensure that the adopted procedure does not introduce any significant systematic error. The results suggest that about 6% of the studied binaries of NGC6121 are formed by pairs of 1G and 2G stars, but this result is significant at the $\sim 1.5\sigma$ level only. To better understand how significant the detection of the mixed 1G-2G population is, we used the procedure described above to derive the best-fit simulation containing only 1G-1G and 2G-2G binaries. 
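A bootstrap of the kind used above for the uncertainties can be sketched generically; here `estimator` stands in for the full grid-fitting procedure applied to each resample, and the function name is ours:

```python
import random
import statistics

def bootstrap_scatter(sample, estimator, n_boot=30000, seed=42):
    """Random mean scatter of `estimator` under resampling with
    replacement, used as the uncertainty on the fitted binary fractions."""
    rng = random.Random(seed)
    n = len(sample)
    draws = [estimator([sample[rng.randrange(n)] for _ in range(n)])
             for _ in range(n_boot)]
    return statistics.pstdev(draws)

# Toy check: the bootstrap scatter of a sample mean approaches sigma/sqrt(n).
data = [0.0, 1.0] * 50  # 100 points, sigma = 0.5, so expect ~0.05
print(bootstrap_scatter(data, statistics.mean, n_boot=2000))
```

The mock-CMD test described in the text is the complementary check: it verifies that the same pipeline applied to simulations with known input fractions returns them without bias.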
The resulting cumulative and kernel-density distributions of $\Delta C^{\rm bin}_{\rm F275W,F336W,F438W}$ are represented with gray lines in the inset of Figure \[fig:simu2\] and correspond to a simulation composed of 0.52$\pm$0.12 1G-1G and 0.48$\pm$0.12 2G-2G binaries. The Kolmogorov-Smirnov (KS) test provides a probability p=57% that the binaries from the best-fit simulation and the observed binaries come from the same parent distribution. The corresponding probability inferred from the comparison of the observations with the best-fit model that accounts for mixed binaries is p=92%, which seems to corroborate the conclusion that NGC6121 hosts a small fraction of mixed binaries.

  ID        $N_{\rm 1G}/N_{\rm TOT}$   $N_{\rm bin}$   $f_{\rm bin}^{\rm 1G-1G}$   $f_{\rm bin}^{\rm 1G-2G}$   $f_{\rm bin}^{\rm 2G-2G}$   $f_{\rm b,1G}/f_{\rm b,2G}$   $f_{\rm pri}$
  --------- -------------------------- --------------- --------------------------- --------------------------- --------------------------- ----------------------------- ---------------
  NGC288    0.56$\pm$0.01              95              0.46$\pm$0.08               0.14$\pm$0.07               0.40$\pm$0.08               1.0$\pm$0.3                   0.72$\pm$0.15
  NGC6121   0.29$\pm$0.01              27              0.51$\pm$0.10               0.06$\pm$0.04               0.43$\pm$0.10               3.1$\pm$0.9                   0.85$\pm$0.10
  NGC6352   0.50$\pm$0.01              65              0.24$\pm$0.10               0.48$\pm$0.09               0.28$\pm$0.07               0.9$\pm$0.4                   0.00$\pm$0.18
  NGC6362   0.55$\pm$0.01              74              0.47$\pm$0.07               0.00$\pm$0.03               0.51$\pm$0.07               0.7$\pm$0.2                   1.00$\pm$0.06
  NGC6838   0.63$\pm$0.01              46              0.46$\pm$0.13               0.27$\pm$0.13               0.27$\pm$0.09               1.2$\pm$0.4                   0.42$\pm$0.28

\[tab:res\]

The procedure described above for NGC6121 was extended to the other clusters; the main results are shown in Figures \[fig:resall1\] and \[fig:resall2\] and summarized in Table \[tab:res\]. 
The left panels of these figures are zoom-ins of the $m_{\rm F814W}$ vs.$C_{\rm F275W,F336W,F438W}$ diagrams around the upper MS, while the middle panels show $m_{\rm F814W}$ against $\Delta C^{\rm bin}_{\rm F275W,F336W,F438W}$ for the sample of selected binaries and the corresponding cumulative and kernel-density distributions. The $\Delta C^{\rm bin}_{\rm F275W,F336W,F438W}$ distributions of binaries in NGC288 and NGC6362 are clearly bimodal, with two main groups of stars with $\Delta C^{\rm bin}_{\rm F275W,F336W,F438W} \sim 0.0-0.1$ and $\sim 0.8-1.0$. In contrast, a single peak with intermediate values of $\Delta C^{\rm bin}_{\rm F275W,F336W,F438W} \sim 0.3$ is present in NGC6352, while the binaries of NGC6838 exhibit a broad distribution. The right panels of Figures \[fig:resall1\] and \[fig:resall2\] show the distributions of binaries from the simulated diagrams that provide the best match with the observations, obtained from the comparison of the corresponding normalized cumulative distributions. Although the results are inferred from a large sample of simulated binaries as described above, for clarity, the number of binaries that we plot in each figure as black dots is equal to five times the number of observed binaries. We find that, similarly to NGC6121, both NGC288 and NGC6362 host small fractions of 1G-2G stars and comparable fractions of 1G-1G and 2G-2G binaries. This fact explains the bimodal $\Delta C^{\rm bin}_{\rm F275W,F336W,F438W}$ distributions of the observed binaries. In the case of NGC6352, we find that about half of the studied binary systems are 1G-2G pairs, while the fractions of 1G-1G and 2G-2G binaries are similar. The predominance of mixed binaries is responsible for the single peak of the kernel-density distribution at intermediate values of $\Delta C^{\rm bin}_{\rm F275W,F336W,F438W}$. NGC6838 hosts a large fraction of 1G-2G binaries ($f_{\rm bin}^{\rm 1G-2G}\sim$0.27) and a similar fraction of 2G-2G pairs. 
To estimate the incidence of 1G-1G binaries among 1G stars with respect to the incidence of 2G-2G binaries among 2G stars, we calculate the quantity $f_{\rm b,1G}/f_{\rm b,2G}=(f_{\rm bin}^{\rm 1G-1G}/N_{\rm 1G})/(f_{\rm bin}^{\rm 2G-2G}/N_{\rm 2G})$, where $N_{\rm 1G}$ and $N_{\rm 2G}$ are the numbers of analyzed 1G and 2G MS stars. Results are listed in Table \[tab:res\]. In M4 we find that the fraction of 1G-1G binary pairs among 1G stars is $\sim 3$ times higher than the fraction of 2G-2G binaries among 2G stars, and the difference is significant at the $\sim$3-$\sigma$ level. In the other clusters $f_{\rm b,1G}/f_{\rm b,2G}$ is consistent with one. To further investigate the significance of the detection of mixed binaries in NGC288, NGC6352 and NGC6838, we derived the simulation with $f_{\rm bin}^{\rm 1G-2G}=0$ that best reproduces the observations. The results listed in Table \[tab:res\] indicate that about half of the binaries of NGC6352 are formed by pairs of 1G-2G stars, and the detection of mixed binaries is significant at the $\sim 4\sigma$ level. The KS test indicates that the binaries of the best-fit simulation obtained for NGC6352 have a probability higher than 0.99 of coming from the same parent distribution as the observed binaries. In contrast, the corresponding probability for the best-fit simulation formed by 1G-1G and 2G-2G binaries alone is 0.00. This fact confirms the high significance of the detection of mixed binaries in this GC. In NGC288 and NGC6838, the best-fit simulations with no mixed binaries provide KS probabilities of 0.31 and 0.11, respectively, lower than the corresponding probabilities of 0.90 and 0.98 derived from the best-fit models that account for 1G-2G binaries, although still statistically compatible with the observations. These findings are in line with the results of Table \[tab:res\], where we estimate that the detection of mixed binaries in each of these clusters is significant at the $\sim 2$-$\sigma$ level. 
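The incidence ratio defined above is simple arithmetic on the tabulated quantities; a minimal sketch (function name is ours), using the rounded M4 entries from Table \[tab:res\], reproduces the tabulated value of $\sim 3$ to within rounding:

```python
def incidence_ratio(f_bin_1g1g, f_bin_2g2g, n_1g, n_2g):
    """f_b,1G / f_b,2G = (f_bin^{1G-1G}/N_1G) / (f_bin^{2G-2G}/N_2G).
    N_1G and N_2G may be counts or fractions of the analyzed MS stars."""
    return (f_bin_1g1g / n_1g) / (f_bin_2g2g / n_2g)

# M4 (NGC6121): f_1G-1G = 0.51, f_2G-2G = 0.43, N_1G/N_TOT = 0.29.
# The rounded table entries give ~2.9, close to the tabulated 3.1 +/- 0.9.
print(round(incidence_ratio(0.51, 0.43, 0.29, 0.71), 1))  # → 2.9
```

Because only population *fractions* enter, the ratio can be computed directly from the two columns of Table \[tab:res\] without the underlying star counts.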
Primordial and dynamically-formed binaries ------------------------------------------ Present-day binaries in GCs include primordial binaries, which originate from the same gas cloud and include only 1G-1G or 2G-2G pairs, and binaries formed during the cluster’s dynamical evolution through capture and/or exchange events, which can pair stars of different generations and produce some mixed 1G-2G binaries [@hong2015a; @hong2016a]. Here, we have used the results of a set of N-body simulations following the evolution of binaries in multiple-population clusters [@hong2015a; @hong2016a] to establish a link between the fraction of mixed binaries and the fraction of observed binaries belonging to the primordial binary population. To further illustrate this link we have also built a binary population from a Monte Carlo sampling procedure based on the observed fractions of 1G and 2G stars. In Figure \[fig:primordial\], we show the evolution of the fraction of primordial binaries in the total population of binaries versus the fraction of mixed 1G-2G binaries from our N-body simulations (further details on the simulations are discussed later in Section 5). This figure clearly illustrates the dynamical information encoded in the fraction of mixed binaries: as a cluster evolves, its binary population is affected by stellar encounters, the fraction of mixed binaries increases, and the fraction of primordial binaries in the binary population declines. Some primordial binaries are disrupted, some are ejected, and some undergo exchange encounters resulting in binaries with components different from those in the primordial binary. 
Although the simulations are still idealized and not meant to provide detailed models for the observed clusters, the observed values of the fraction of mixed binaries reported in Table 1 and the data shown in Figure \[fig:primordial\] can be used to calculate an approximate estimate of the fraction of the observed binaries belonging to the primordial binary population. In order to further explore the link between the fraction of mixed binaries and the fraction of primordial binaries in the current binary population, we have also carried out 101 Monte Carlo samplings of 100,000 MS stars. In each simulation, $i$, we included a fraction of primordial binaries $f_{\rm bin, i}^{\rm pri, simu}$=$i$/100, where $i$ ranges from 0 to 100 in steps of 1. The remaining simulated stars, which comprise the observed fractions of 1G and 2G stars, are randomly coupled. Clearly, this process generates pairs of 1G-1G, 1G-2G, and 2G-2G binaries. We indicate the resulting fraction of 1G-2G binaries with respect to the total number of binaries (including both primordial binaries and binaries formed by randomly paired stars) as $f_{\rm bin, i}^{\rm 1G-2G, simu}$. The fraction of observed binaries with a primordial origin in each cluster, $f_{\rm pri}$, is given by the simulation in which $f_{\rm bin, i}^{\rm 1G-2G, simu}$ matches the observed fraction of mixed binaries. Results are listed in Table \[tab:res\]. The estimates of the fraction of primordial binaries obtained from the simulations are in good general agreement with those found with the Monte Carlo sampling procedure; in particular, we find that NGC288, NGC6121 and NGC6362 are dominated by primordial binaries, while NGC6352 is consistent with almost no primordial binaries. About half of the studied binaries of NGC6838 have primordial origins. 
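The random-pairing argument above can be sketched both as a Monte Carlo draw and in closed form: a randomly assembled pair is mixed with probability $2p_{\rm 1G}(1-p_{\rm 1G})$, so $f^{\rm 1G-2G}_{\rm bin} = (1-f_{\rm pri})\,2p_{\rm 1G}(1-p_{\rm 1G})$. This is our simplified reading of the procedure, not the authors' code, but it reproduces the $f_{\rm pri}$ entries of Table \[tab:res\] from the tabulated $N_{\rm 1G}/N_{\rm TOT}$ and $f_{\rm bin}^{\rm 1G-2G}$:

```python
import random

def mixed_fraction(f_pri, p1, n_stars=100_000, seed=0):
    """Fraction of mixed (1G-2G) binaries when a fraction f_pri of binaries
    is primordial (pure 1G-1G or 2G-2G) and the rest pair stars at random,
    with p1 the fraction of 1G stars."""
    rng = random.Random(seed)
    n_bin = n_stars // 2
    n_random = round((1 - f_pri) * n_bin)
    mixed = sum((rng.random() < p1) != (rng.random() < p1)
                for _ in range(n_random))
    return mixed / n_bin

def f_primordial(f_mixed_obs, p1):
    """Closed form of the same argument: random pairs are mixed with
    probability 2*p1*(1-p1), so f_mixed = (1 - f_pri) * 2*p1*(1-p1)."""
    return 1.0 - f_mixed_obs / (2.0 * p1 * (1.0 - p1))

# M4: p1 = 0.29 and f_bin^{1G-2G} = 0.06 give f_pri ≈ 0.85 (Table 1).
print(round(f_primordial(0.06, 0.29), 2))  # → 0.85
```

Inverting the closed form for each cluster matches the tabulated $f_{\rm pri}$ values (e.g. 0.72 for NGC288 and 0.42 for NGC6838), illustrating why a larger observed mixed fraction implies a smaller surviving primordial binary population.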
We emphasize that these estimates are meant to provide a general approximate indication of the fraction of primordial binaries and, more generally, to illustrate the dynamical information contained in the population of mixed binaries. More realistic models would be necessary to use the observed fraction of mixed binaries to obtain accurate estimates of the primordial binary fraction. Discussion {#sec:discussion} ========== The present-day binary fractions of 1G and 2G stars provide a dynamical fingerprint of the formation and dynamical evolution of multiple populations in GCs. According to various scenarios, 2G stars form in a dense environment in the innermost regions of a more extended 1G system [e.g. @dercole2008a; @calura2019a and references therein]. Analytic calculations combined with the results of $N$-body simulations of stellar populations in GCs show that, as a consequence of these initial differences between the spatial distributions of 1G and 2G stars, 2G binaries evolve and are disrupted at a significantly larger rate than 1G binaries, and the present-day 2G population is expected to have a smaller [*global*]{} binary incidence than the 1G population [@vesperini2011a; @hong2015a; @hong2016a]. The evolution of the ratio of the 1G to the 2G binary incidence is driven by the initial differences between the structural properties of the 1G and the 2G populations and depends on the cluster’s dynamical age as well as on the binary properties [see e.g. @hong2015a; @hong2016a; @hong2019a]. The complex interplay between binary evolution, disruption, and the evolution of the spatial distributions of 1G and 2G single and binary stars is expected to result in a radial variation of the 1G and 2G binary incidences that needs to be taken into account in the interpretation of observational data, which probe only a specific range of radial distances from the cluster’s center and thus provide a measure of the [*local*]{} binary incidence and not the [*global*]{} one. 
This issue has been discussed in detail in @hong2016a (see, in particular, their Figures 11 and 12). Hong and collaborators found that the largest differences between the 1G and the 2G binary incidences are, in general, expected in the cluster’s outer regions (see, for example, their Figure 12, showing the time evolution of the ratio $f_{\rm b, 1G}/f_{\rm b, 2G}$ estimated at projected distances between 0.5$R_{\rm h}$ and 2.5 $R_{\rm h}$, where $R_{\rm h}$ is the projected half-mass radius). In the study presented here, however, the [*HST*]{} data are limited to the inner regions between the clusters’ centers and an outer radius ranging from about 0.3$R_{\rm h}$ to about 0.8 $R_{\rm h}$. To further illustrate the expected dynamical effects on the evolution of the 1G and 2G binary incidences in the cluster’s inner regions, we show in Figure \[fig:fractionratio\] the time evolution of the ratio $f^{\rm 1G-1G}_{\rm bin}/f^{\rm 2G-2G}_{\rm bin}$ measured between the cluster’s center and 0.5 $R_{\rm h}$ for some of the simulations discussed in @hong2015a [@hong2016a]. Each simulation corresponds to a different value of X$_{\rm g,0}$, the parameter indicative of the initial hardness of primordial binaries. Specifically, X$_{\rm g,0} = E_{\rm b}/(m \sigma^{2})$, where $E_{\rm b}$ is the absolute value of the binary binding energy and $\sigma$ is the 1D velocity dispersion of all stars. Upper and lower panels correspond to different ratios between the half-light radii of 1G and 2G stars at formation. These figures clearly illustrate that the similar values of the 1G and 2G binary incidences found in our analysis are, in general, consistent with those expected in the cluster’s innermost regions. 
Larger differences between the 1G and the 2G binary incidences are expected at all radial distances (including the inner regions) in systems with softer binaries and in the outer regions of all the systems studied [see e.g. Figure 12 in @hong2016a]. The predicted increase in $f_{\rm b,1G}/f_{\rm b,2G}$ with the distance from a cluster’s center is consistent with what is found in previous studies based on radial velocities, which probed the clusters’ outer regions. Specifically, @lucatello2015a analyzed multi-epoch spectra of 968 RGB stars of ten GCs and identified 21 radial-velocity binaries, corresponding to a binary fraction of 2.2$\pm$0.5%. When they divided the stars into 1G and 2G on the basis of their abundances of sodium and oxygen, they found that the fraction of binaries among 1G stars was 4.9$\pm$1.3%, significantly higher than the fraction of 2G binaries (1.2$\pm$0.4%). In another recent paper, based on 384 stars of the GC NGC6362, @dalessandro2018a identified 12 radial-velocity binaries, corresponding to a binary fraction of 3.1$\pm$0.9%. When separating the stars into 1G and 2G on the basis of their sodium abundance, they found that only [*one*]{} binary belongs to the 2G, implying a binary fraction of 0.7$\pm$0.7%. In contrast, the fraction of 1G binaries is significantly higher and corresponds to 4.7$\pm$1.4%. Although a systematic study of the radial variation of the 1G and 2G binary incidences is necessary, the comparison between the similar values of $f_{\rm bin}^{\rm 1G-1G}$ and $f_{\rm bin}^{\rm 2G-2G}$ we find in the inner regions of this cluster and the larger 1G binary incidence found by @dalessandro2018a provides the first evidence of a radial variation in the ratio of the 1G to the 2G binary incidences. 
In addition to the evolution of the fractions of 1G and 2G binaries, the simulations presented in @hong2015a [@hong2016a] predicted that exchange encounters, during which one of the binary components can be replaced by one of the interacting stars, can produce mixed binaries composed of one 1G star and one 2G star. The fraction of these binaries also depends on the cluster’s dynamical age and the binary binding energy and provides a new and interesting tool to explore the dynamics of binary stars in multiple-population clusters [see @hong2015a; @hong2016a for further discussion]. The photometric study presented in this paper has allowed us to reveal for the first time the presence of mixed binaries in NGC6352 at a statistical significance larger than 3$\sigma$, and suggests their presence in NGC288 and NGC6838 (at a confidence level of $\sim 2 \sigma$). Although more extensive observational and theoretical studies are needed, mixed binaries can provide important insight into the dynamical activity of binaries in a cluster’s inner regions. Figure \[fig:fractionmix\] shows the time evolution of the fraction of mixed binaries in the clusters’ inner regions ($R<0.5R_{\rm h}$) for some of the simulations discussed in @hong2015a [@hong2016a] and illustrates the increase in the fraction of mixed binaries and its dependence on the binary binding energy. In all cases the fraction of mixed binaries increases with time; it is expected to be larger for denser clusters in a more advanced stage of their dynamical evolution and to depend on the binary binding energy [see also Figure 6 in @hong2016a]. 
We emphasize again that these simulations are still idealized and not meant for a detailed comparison with observational data; rather, the results shed light on the dynamics driving the evolution of the 1G and 2G binary populations and the formation of mixed binaries, and illustrate the fundamental dynamical aspects behind the results emerging from our observational study. Additional numerical and observational studies will be needed to explore possible correlations between 1G, 2G and mixed binary properties, the present-day cluster structural properties and the cluster’s dynamical history. Summary and conclusions {#sec:conclusions} ======================= In this analysis, we used [*HST*]{} data collected within the UV survey of Galactic GCs [@piotto2015a] to investigate the incidence of binaries in five GCs by using multi-band photometry. We used the $m_{\rm F336W}-m_{\rm F438W}$ vs. $m_{\rm F275W} - m_{\rm F336W}$ two-color diagrams and the ChM to identify 1G and 2G stars along the RGB, SGB and MS of each cluster. We selected a sample of binaries from the optical $m_{\rm F814W}$ vs. $m_{\rm F606W} - m_{\rm F814W}$ CMD, which are composed of pairs of stars with similar luminosity, and derived their distribution in the $m_{\rm F814W}$ vs. $C_{\rm F275W,F336W,F438W}$ pseudo-CMD. We compared the $C_{\rm F275W,F336W,F438W}$ pseudo-color distribution of the observed binaries with the corresponding distribution of a large sample of simulated stellar populations that include various combinations of 1G-1G, 1G-2G and 2G-2G stars. We find that in NGC288, NGC6352, NGC6362 and NGC6838 the incidence of 1G-1G binaries among 1G stars is similar to the incidence of 2G-2G binaries among 2G stars. M4, where the fraction of 1G-1G binary pairs among 1G stars is 3.1$\pm$0.9 times higher than the fraction of 2G-2G binaries among 2G stars, is a remarkable exception. 
The method presented in this paper makes it possible to identify for the first time mixed 1G-2G binary systems, i.e. binaries composed of one 1G star and one 2G star. N-body simulations predicted mixed binaries to form in binary interactions during which one binary component is replaced by one of the interacting stars of a different population. These binaries provide a new tool to explore the binary activity and dynamical history of multiple stellar populations. While a statistically significant detection has been found only in NGC6352, at face value the best-fit fraction of 1G-2G binaries is smaller than $\sim$0.15 in NGC288, NGC6121 and NGC6362, whereas NGC6838 and NGC6352 host larger fractions of 1G-2G binaries ($\sim$0.27 and $\sim$0.48, respectively). Using the fraction of mixed binaries, we provided an initial estimate of the fraction of the observed binary population consistent with being primordial and not the result of exchange interactions and/or dynamical binary formation. Although additional investigation of this issue is needed, our initial estimates suggest that most binaries in NGC6121 and NGC6362 are consistent with a primordial origin, while in NGC6352 most binaries could be the result of dynamical interactions. In NGC6838 and NGC288 the number of binaries with a primordial origin is similar to that of dynamically formed binaries. Future studies extending the analysis presented here to a larger sample of clusters and probing a broader range of radial distances from a cluster’s center will be necessary to build a complete picture of the dynamical effects on binaries in multiple-population globular clusters and to provide new constraints for theoretical studies of the formation and evolution of multiple populations. acknowledgments {#acknowledgments .unnumbered} =============== We thank Antonio Sollima for several suggestions that improved the manuscript. 
This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant Agreement ERC-StG 2016, No 716082 ‘GALFOR’, PI: Milone, http://progetti.dfa.unipd.it/GALFOR), and from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No 797100 (beneficiary: Marino). APM and MT acknowledge support from MIUR through the FARE project R164RM93XW SEMPLICE (PI: Milone). APM and LRB acknowledge support by MIUR under PRIN program 2017Z2HSMF (PI: Bedin). DN acknowledges partial support by the Università degli Studi di Padova Progetto di Ateneo BIRD178590.
--- abstract: 'As the need to represent context-dependent knowledge in the Semantic Web has been recognized, a number of logic-based solutions have been proposed in this regard. In our recent works, in response to this need, we presented the description logic-based Contextualized Knowledge Repository (CKR) framework. CKR is not only a theoretical framework: it has been effectively implemented over state-of-the-art tools for the management of Semantic Web data, with inference inside and across contexts realized in the form of forward SPARQL-based rules over different RDF named graphs. In this paper we present the first evaluation results for this CKR implementation. In particular, in a first experiment we study its *scalability* with respect to different reasoning regimes. In a second experiment we analyze the effects of *knowledge propagation* on the computation of inferences.' author: - Loris Bozzato - Luciano Serafini bibliography: - 'bibliography.bib' title: 'Knowledge Propagation in Contextualized Knowledge Repositories: an Experimental Evaluation' --- =1 Introduction {#sec:intro} ============ Recently, the representation of context-dependent knowledge in the Semantic Web has been recognized as a relevant issue. This has led to the introduction of a growing number of logic-based proposals, e.g. [@CDF; @klarman:13; @serafini-homola-ckr-jws-2012; @stra-etal-2010; @Tanca:07; @udrea-annotated-rdf-2010]. In this line of research, in our previous works we introduced the Contextualized Knowledge Repository (CKR) framework [@serafini-homola-ckr-jws-2012; @BozzatoSerafini:13; @BozGhiSer:KCAP2013; @BozHomSer:DL2012]. CKR is a description logic-based framework defined as a two-layered structure: intuitively, a lower layer contains a set of contextualized knowledge bases, while the upper layer contains context-independent knowledge and meta-data defining the structure of the contextual knowledge bases. 
The CKR framework has not only been presented as a theoretical framework, but we have also proposed effective implementations based on its definitions [@BozzatoSerafini:13; @BozzatoEiterSerafini:14]. In particular, in [@BozzatoSerafini:13] we presented an implementation of the CKR framework over state-of-the-art tools for storage and inference over RDF data. Intuitively, the CKR architecture can be implemented by representing the global context and the local object contexts as distinct RDF named graphs. Inference inside (and across) named graphs is implemented as SPARQL-based forward rules. We use an extension of the Sesame framework that we developed, called *SPRINGLES*, which provides methods to demand an inference materialization over multiple graphs: rules are encoded as SPARQL queries and it is possible to customize their evaluation strategy. The rule set encodes the rules of the formal materialization calculus we proposed for the CKR framework [@BozzatoSerafini:13], and the evaluation strategy follows the translation process of the calculus. In this paper we present the results of an initial experimental evaluation of this implementation of the CKR framework over RDF. In particular, the experiments we present are aimed at answering two different research questions: - **RQ1 (scalability):** *how does the time required to compute the inference closure scale with the number and size of contexts of a CKR?* - **RQ2 (propagation):** *how does the time required to compute the inference closure depend on the number of connections across contexts?* (considering a fixed number of contexts and a fixed amount of knowledge exchanged). As we will detail in the following sections, by means of our experiments we answered these questions with the following findings: - **F1 (scalability):** the reasoning regime at the global and local level strongly impacts the scalability and behaviour of reasoning. 
Considering only global-level reasoning, the results suggest that the management of contexts does not add overhead to reasoning in the global context; when reasoning inside contexts is also considered, inference time appears to be influenced by the expressivity and the number of contexts. - **F2 (propagation):** the cost of knowledge propagation depends linearly on the number of connections. Moreover, representing references to local interpretations of symbols using context connections is always more compact than replicating symbols for each local interpretation: the first solution in general requires more computational time, but outperforms the second solution in the case of a larger number of connections. The remainder of the paper is organized as follows: in Section \[sec:ckr\] we summarize the basic formal definitions for CKR and its associated calculus; in Section \[sec:springles\] we summarize how the presented definitions have been implemented over RDF named graphs; in Section \[sec:experiments\] we present the test setup and experimental evaluations; finally, in Section \[sec:conclusion\] we suggest some possible extensions to the current evaluation and implementation work. Contextualized Knowledge Repositories {#sec:ckr} ===================================== In the following we provide an informal summary of the definitions of the CKR framework: for a formal and detailed description and for complete examples, we refer to [@BozzatoSerafini:13], where the current formalization of CKR was first introduced. Intuitively, a CKR is a two-layered structure: the upper layer consists of a knowledge base ${\mathfrak{G}}$ containing (1) *meta-knowledge*, i.e. the structure and properties of the contexts of the CKR, and (2) *global (context-independent) knowledge*, i.e., knowledge that applies to every context; the lower layer consists of a set of (local) contexts that contain (locally valid) facts and can refer to what holds in other contexts. 
**Syntax.** In order to separate the elements of the meta-knowledge from the ones of the object knowledge, we build CKRs over two distinct vocabularies and languages. The meta-knowledge of a CKR is expressed in a DL language containing the elements that define the contextual structure. A *meta-vocabulary* is a DL vocabulary $\Gamma$ containing the sets of symbols for *context names* ${\boldsymbol{\mathsf{N}}}$; *module names* ${\boldsymbol{\mathsf{M}}}$; *context classes* ${\boldsymbol{\mathsf{C}}}$, including the class ${\mathsf{Ctx}}$; *contextual relations* ${\boldsymbol{\mathsf{R}}}$; *contextual attributes* ${\boldsymbol{\mathsf{A}}}$; and for every attribute ${\mathsf{A}} \in {\boldsymbol{\mathsf{A}}}$, a set ${\mathsf{D_A}}$ of *attribute values* of ${\mathsf{A}}$. The role ${{\mathsf{mod}}}$ defined on ${\boldsymbol{\mathsf{N}}}\times {\boldsymbol{\mathsf{M}}}$ expresses associations between contexts and modules. Intuitively, modules represent pieces of knowledge specific to a context or context class; attributes describe contextual dimensions (e.g. time, location, topic) identifying a context (class). The *meta-language* $\Lcal_\Gamma$ of a CKR is a DL language over $\Gamma$ (where, formally, the range and domain of attributes and ${{\mathsf{mod}}}$ are restricted as explained above). The knowledge in contexts of a CKR is expressed via a DL language $\Lcal_\Sigma$, called *object-language*, based on an object-vocabulary $\Sigma$. The expressions of the object language are evaluated locally to each context, i.e., contexts can interpret each symbol independently. To access the interpretation of expressions inside a specific context or context class, we extend $\Lcal_\Sigma$ to $\Lcal^e_\Sigma$ with *eval expressions* of the form ${\textsl{eval}}(X,{\mathsf{C}})$, where $X$ is a concept or role expression of $\Lcal_\Sigma$ and ${\mathsf{C}}$ is a concept expression of $\Lcal_\Gamma$ (with ${\mathsf{C}} \subs {{\mathsf{Ctx}}}$). 
Intuitively, ${\textsl{eval}}(X,{\mathsf{C}})$ can be read as “the interpretation of $X$ in all the contexts of type ${\mathsf{C}}$”. On the basis of the previous languages, we define a *Contextualized Knowledge Repository (CKR)* as a structure $\CKB = \stru{{\mathfrak{G}}, \{{\mathrm{K}}_{{\mathsf{m}}}\}_{{{\mathsf{m}}}\in {\boldsymbol{\mathsf{M}}}}}$ where: (i) ${\mathfrak{G}}$ is a DL knowledge base over $\Lcal_\Gamma\cup\Lcal_\Sigma$; (ii) every ${\mathrm{K}}_{{\mathsf{m}}}$ is a DL knowledge base over $\Lcal^e_\Sigma$, for each module name ${\mathsf{m}} \in {\boldsymbol{\mathsf{M}}}$. The knowledge in a CKR can be expressed by means of any DL language: in this paper, we consider ${\mathcal{SROIQ}\text{-RL}}$ (defined in [@BozzatoSerafini:13]) as the language of reference. ${\mathcal{SROIQ}\text{-RL}}$ is a restriction of $\SROIQ$ syntax corresponding to OWL RL [@Motik:09:OWO]. $\CKB$ is a ${\mathcal{SROIQ}\text{-RL}}$ CKR if ${\mathfrak{G}}$ and all ${\mathrm{K}}_{{\mathsf{m}}}$ are knowledge bases over the extended language of ${\mathcal{SROIQ}\text{-RL}}$ in which eval-expressions can only occur in left-concepts and contain left-concepts or roles. **Semantics.** The model-based semantics of CKR basically follows the two-layered structure of the framework. A *CKR interpretation* is a structure $\IC = \stru{\Mcal, \I}$ s.t.: (i) $\Mcal$ is a DL interpretation of $\Gamma \cup \Sigma$ (respecting the intuitive interpretation of ${{\mathsf{Ctx}}}$ as the class of all contexts); (ii) for every $x\in{\mathsf{Ctx}}^\Mcal$, $\I(x)$ is a DL interpretation over $\Sigma$ (with the same domain and interpretation of individual names as $\Mcal$). The interpretation of ordinary DL expressions on $\Mcal$ and $\I(x)$ in $\IC = \stru{\Mcal, \I}$ is as usual; ${\textsl{eval}}$ expressions are interpreted as follows: for every $x \in {{\mathsf{Ctx}}}^\Mcal$, ${\textsl{eval}}(X,{\mathsf{C}})^{\I(x)} = \bigcup_{{\mathsf{e}} \in {\mathsf{C}}^{\Mcal}} X^{\I({\mathsf{e}})}$, i.e. 
the union of the elements of $X^{\I({\mathsf{e}})}$ over all contexts ${\mathsf{e}}$ in ${\mathsf{C}}^{\Mcal}$. A CKR interpretation $\IC$ is a *CKR model* of $\CKB$ iff the following conditions hold: (i) for $\alpha \in \Lcal_\Sigma \cup \Lcal_\Gamma$ in ${\mathfrak{G}}$, $\Mcal \models \alpha$; (ii) for $\Pair{x}{y} \in {\mathsf{mod}}^\Mcal$ with $y= {\mathsf{m}}^\Mcal$, $\I(x)\models {\mathrm{K}}_{{\mathsf{m}}}$; (iii) for $\alpha \in {\mathfrak{G}}\cap \Lcal_\Sigma$ and $x \in {{\mathsf{Ctx}}}^\Mcal$, $\I(x) \models \alpha$. Intuitively, while the first two conditions impose that $\IC$ verifies the contents of the global and local modules associated with contexts, the last condition states that global knowledge has to be propagated to local contexts. **Materialization calculus.** Reasoning inside a CKR has been formalized in the form of a materialization calculus. In particular, the calculus proposed in [@BozzatoSerafini:13] is an adaptation of the calculus presented in [@Krotzsch:10] that defines a reasoning procedure for deciding instance checking in the structure of a ${\mathcal{SROIQ}\text{-RL}}$ CKR. As we discuss in the following sections, this calculus provides the formal basis for the rules of the implementation of CKR based on RDF named graphs and forward SPARQL rules. Intuitively, the calculus is based on a translation to datalog: the axioms of the input CKR are translated to datalog atoms, and datalog rules are added to this translation to encode the global and local inference rules; instance checking is then performed by translating the ABox assertion to be verified into a datalog fact and verifying whether it is entailed by the CKR program. 
The calculus, thus, has three components: (1) the *input translations* $I_{glob}$, $I_{loc}$, $I_{rl}$, where, given an axiom $\alpha$ and ${{\mathsf{c}}}\in {\boldsymbol{\mathsf{N}}}$, each $I(\alpha, {{\mathsf{c}}})$ is a set of datalog facts or rules: intuitively, they encode as datalog facts the contents of the input global and local DL knowledge bases; (2) the *deduction rules* $P_{loc}$, $P_{rl}$, which are sets of datalog rules: they represent the inference rules for instance-level reasoning over the translated axioms; and (3) the *output translation* $O$, where, given an axiom $\alpha$ and ${{\mathsf{c}}}\in {\boldsymbol{\mathsf{N}}}$, $O(\alpha, {{\mathsf{c}}})$ is a single datalog fact encoding the ABox assertion $\alpha$ that we want to prove to be entailed by the input CKR (in the context ${{\mathsf{c}}}$). We briefly present here the form of the different sets of translation and deduction rules: tables with the complete set of rules can be found in [@BozzatoSerafini:13]. \(i) [*${\mathcal{SROIQ}\text{-RL}}$ translation*]{}: Rules in $I_{rl}(S, c)$ translate ${\mathcal{SROIQ}\text{-RL}}$ axioms (in context $c$) into datalog facts. E.g., we translate atomic concept inclusions with the rule $A \subs B \mapsto \{{{\tt subClass}}(A,B,c)\}$. The rules in $P_{rl}$ are the deduction rules corresponding to axioms in ${\mathcal{SROIQ}\text{-RL}}$: e.g., for atomic concept inclusions we have ${{\tt subClass}}(y,z,c), {{\tt inst}}(x,y,c) \to {{\tt inst}}(x,z,c)$ \(ii) [*Global and local translations*]{}: Global input rules of $I_{glob}$ encode the interpretation of ${{\mathsf{Ctx}}}$ in the global context. Similarly, local input rules $I_{loc}$ and local deduction rules $P_{loc}$ provide the translation and rules for elements of the local object language. 
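As a concrete illustration of the deduction step, the atomic-concept-inclusion rule above can be applied to a fixpoint with a naive forward-chaining loop. This is a toy sketch of the rule semantics under our own tuple encoding, not the datalog engine used in the implementation.

```python
def close_subclass(facts):
    """Apply subClass(y,z,c), inst(x,y,c) -> inst(x,z,c) to a fixpoint.
    Facts are tuples ('subClass', A, B, c) or ('inst', x, A, c)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        subs = [f for f in facts if f[0] == 'subClass']
        insts = [f for f in facts if f[0] == 'inst']
        for _, y, z, c in subs:
            for _, x, y2, c2 in insts:
                if y2 == y and c2 == c:
                    new = ('inst', x, z, c)
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

# A ⊑ B, B ⊑ C and A(a) in context c1 yield B(a) and C(a) in c1.
program = {('subClass', 'A', 'B', 'c1'), ('subClass', 'B', 'C', 'c1'),
           ('inst', 'a', 'A', 'c1')}
closure = close_subclass(program)
assert ('inst', 'a', 'C', 'c1') in closure
```

Note that the context argument `c` keeps the inferences local: facts derived in one context never leak into another unless a dedicated propagation rule (such as the ${{\tt subEval}}$ rule below) moves them.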
In particular for ${\textsl{eval}}$ expressions in concept inclusions, we have the input rule ${\textsl{eval}}(A, {\mathsf{C}}) \subs B \mapsto \{{{\tt subEval}}(A, {\mathsf{C}}, B, {{\mathsf{c}}})\}$ and the corresponding deduction rule (where ${\mathsf{g}}$ identifies the global context): ${{\tt subEval}}(a, c_1, b, c), {{\tt inst}}(c', c_1, {\mathsf{g}}), {{\tt inst}}(x, a, c') \to {{\tt inst}}(x, b, c)$ \(iii) [*Output rules*]{}: The rules in $O(\alpha, {{\mathsf{c}}})$ provide the translation of ABox assertions that can be verified to hold in context $c$ by applying the rules of the final program. For example, atomic concept assertions in a context ${{\mathsf{c}}}$ are translated by $A(a) \mapsto \{{{\tt inst}}(a,A,{{\mathsf{c}}})\}$. Given a CKR $\CKB = \stru{{\mathfrak{G}}, \{{\mathrm{K}}_{{\mathsf{m}}}\}_{{{\mathsf{m}}}\in {\boldsymbol{\mathsf{M}}}}}$, the translation to its datalog program $PK(\CKB)$ proceeds in four steps: 1. the *global program* $PG({\mathfrak{G}})$ for ${\mathfrak{G}}$ is translated by applying input rules $I_{glob}$ and $I_{rl}$ to ${\mathfrak{G}}$ and adding deduction rules $P_{rl}$; 2. Let ${\boldsymbol{\mathsf{N}}}_{\mathfrak{G}}= \{{{\mathsf{c}}}\in {\boldsymbol{\mathsf{N}}}\;|\; PG({\mathfrak{G}}) \models {{\tt inst}}({{\mathsf{c}}},{{\mathsf{Ctx}}},{\mathsf{g}}) \}$. For every ${{\mathsf{c}}}\in {\boldsymbol{\mathsf{N}}}_{\mathfrak{G}}$, we define the knowledge base associated to the context as ${\mathrm{K}}_{{\mathsf{c}}}= \bigcup\{{\mathrm{K}}_{{\mathsf{m}}}\in \CKB \;|\; PG({\mathfrak{G}}) \models {{\tt triple}}({{\mathsf{c}}},{{\mathsf{mod}}},{{\mathsf{m}}},{\mathsf{g}}) \}$ 3. We define each *local program* $PC({{\mathsf{c}}})$ for ${{\mathsf{c}}}\in {\boldsymbol{\mathsf{N}}}_{\mathfrak{G}}$ by applying input rules $I_{loc}$ and $I_{rl}$ to ${\mathrm{K}}_{{\mathsf{c}}}$ and adding deduction rules $P_{loc}$ and $P_{rl}$. 4. 
The final *CKR program* $PK(\CKB)$ is then defined as the union of $PG({\mathfrak{G}})$ with all local programs $PC({{\mathsf{c}}})$. We say that $\CKB$ *entails* an axiom $\alpha$ in a context ${{\mathsf{c}}}\in {\boldsymbol{\mathsf{N}}}$ if the elements of $PK(\CKB)$ and $O(\alpha, {\mathsf{c}})$ are defined and $PK(\CKB) \models O(\alpha, {\mathsf{c}})$. We can show (see [@BozzatoSerafini:13]) that the presented rules and translation process provide a sound and complete calculus for instance checking over ${\mathcal{SROIQ}\text{-RL}}$ CKRs. CKR Implementation on RDF {#sec:springles} ========================= We recently presented a prototype [@BozzatoSerafini:13] implementing the forward reasoning procedure over CKR expressed by the materialization calculus. The prototype accepts RDF input data expressing OWL RL axioms and assertions for global and local knowledge modules: these different pieces of knowledge are represented as distinct named graphs, while the contextual primitives are encoded in an RDF vocabulary. The prototype is based on an extension of the Sesame RDF Framework[^1] and structured in a client-server architecture: the main component, called the *CKR core* module and residing in the server-side part, exposes the CKR primitives and a SPARQL 1.1 endpoint for query and update operations on the contextualized knowledge. The module offers the ability to compute and materialize the inference closure of the input CKR, to add and remove knowledge, and to execute queries over the complete CKR structure. The distribution of knowledge in different named graphs calls for a component that computes inference over multiple graphs in an RDF store, since inference mechanisms in current stores usually ignore the graph part. This component has been realized as a general software layer called *SPRINGLES*[^2]. 
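The cross-context propagation that such a multi-graph inference layer has to support is captured by the ${{\tt subEval}}$ deduction rule of the calculus, which can be sketched in the same datalog spirit. Again a simplified illustration under our own tuple encoding; the context and concept names below are made up.

```python
def apply_subeval(facts, glob='g'):
    """One pass of: subEval(a,C1,b,c), inst(c',C1,g), inst(x,a,c')
       -> inst(x,b,c).
    Facts are tuples ('subEval', a, C1, b, c) or ('inst', x, A, c)."""
    derived = set(facts)
    for f in facts:
        if f[0] != 'subEval':
            continue
        _, a, c1, b, c = f
        # Contexts c' that the global context classifies as instances of C1.
        ctxs = {f2[1] for f2 in facts
                if f2[0] == 'inst' and f2[2] == c1 and f2[3] == glob}
        for cprime in ctxs:
            for f3 in facts:
                if f3[0] == 'inst' and f3[2] == a and f3[3] == cprime:
                    # Propagate the instance into the target context c.
                    derived.add(('inst', f3[1], b, c))
    return derived

# eval(Sold, SummerCtx) ⊑ Discounted, asserted in context c2,
# pulls members of Sold from every context classified as SummerCtx.
facts = {('subEval', 'Sold', 'SummerCtx', 'Discounted', 'c2'),
         ('inst', 'c1', 'SummerCtx', 'g'),   # c1 is a summer context
         ('inst', 'item1', 'Sold', 'c1')}    # item1 sold in c1
```

In the prototype this pass is interleaved with the local closure rules until a global fixpoint is reached; here a single pass suffices to show the propagation.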
Intuitively, the layer provides methods to demand a closure materialization on the RDF store data: rules are encoded as named-graph-aware SPARQL queries, and it is possible to customize both the input ruleset and the evaluation strategy. In the general form of SPRINGLES rules, the head `<graph-pattern>` is an RDF (named) graph pattern that can contain a set of variables, which are bound in the SPARQL query of the body. The body of a rule is a SPARQL query: the result of its evaluation is a set of bindings for the variables that occur in the rule head. For every such binding, the corresponding statement in the head of the rule is added to the repository. In our case, the ruleset basically encodes the rules of the presented materialization calculus. As an example, consider the rule dealing with atomic concept inclusions, where the prefix `spr:` corresponds to symbols in the vocabulary of SPRINGLES objects and `sys:` prefixes utility “system” symbols used in the definition of the rule evaluation plan. Intuitively, when the condition in the body part of the rule is verified in graphs [?m1]{} and [?m2]{}, the head part is materialized in the inference graph [?mx]{}. Note that in the formulation of the rule we work at the level of knowledge modules (i.e. named graphs). Note also that the body of the rule contains a “filter” condition, a SPARQL-based method to avoid the duplication of conclusions: the `FILTER` condition allows a rule to fire only if its conclusion is not already present in the context. The rules are evaluated with a strategy that basically follows the same steps as the translation process defined for the calculus. 
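The net effect of such a rule, matching a body over named graphs and materializing the head into an inference graph only when the `FILTER` guard finds it absent, can be sketched over a toy quad store. This is a hedged analogy in plain code, not the actual SPARQL formulation; the graph names and the `inf` target graph are illustrative.

```python
def fire_subclass_rule(quads, inf_graph='inf'):
    """Fire an rdfs:subClassOf rule over named graphs with a dedup guard.
    quads: set of (subject, predicate, object, graph) tuples."""
    fired = 0
    new = set()
    for (s1, p1, o1, g1) in quads:
        if p1 != 'rdfs:subClassOf':
            continue
        for (s2, p2, o2, g2) in quads:
            if p2 == 'rdf:type' and o2 == s1:
                head = (s2, 'rdf:type', o1, inf_graph)
                # Analogue of the FILTER guard: skip existing conclusions.
                if head not in quads and head not in new:
                    new.add(head)
                    fired += 1
    return quads | new, fired

quads = {(':A', 'rdfs:subClassOf', ':B', ':m1'),
         (':x', 'rdf:type', ':A', ':m2')}
quads, n = fire_subclass_rule(quads)
# A second evaluation fires nothing: the conclusion is already materialized.
quads, n2 = fire_subclass_rule(quads)
```

The guard is what makes repeated rule evaluations converge, mirroring the role of the `FILTER` condition in the SPARQL rules.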
The plan goes as follows: (i) we compute the closure of the graph for the global context ${\mathfrak{G}}$, by a fixpoint on the rules corresponding to $P_{rl}$; (ii) we derive the associations between contexts and their modules, by adding dependencies for every assertion of the kind ${{\mathsf{mod}}}({{\mathsf{c}}}, {{\mathsf{m}}})$ in the global closure; (iii) we compute the closure of the contexts, by applying the rules encoded from $P_{rl}$ and $P_{loc}$ and resolving ${\textsl{eval}}$ expressions using the meta-knowledge information in the global closure. Experimental Evaluation {#sec:experiments} ======================= In this section we illustrate the experiments we performed to assess the performance of the CKR prototype and their results. We begin by presenting the method we used to create the synthetic test sets generated for this evaluation. **Generation of synthetic test sets.** In order to create our test sets, we developed a simple generator that outputs randomly generated CKRs with certain features. In particular, for each generated CKR, the generator takes as input: (1) the number $n$ of contexts (i.e. local named graphs) to be generated; (2) the dimensions of the signature to be declared (number $m$ of base classes, $l$ of properties and $k$ of individuals); (3) the axiom size for the global and local modules (number of global TBox, ABox and RBox axioms and number of TBox, ABox and RBox axioms per context); (4) optionally, the number of additional local ${\textsl{eval}}$ axioms and the number of individuals to be propagated across contexts. Intuitively, the generation of a CKR proceeds as follows: 1. The contexts (named ${\tt :\!c0}, \dots, {\tt :\!cn}$) are declared in the global context named graph and are linked to distinct module names (${\tt :\!m0}, \dots, {\tt :\!mn}$), corresponding to the named graphs containing their local knowledge. 2. 
Base classes (named ${\tt :\!A0}, \dots, {\tt :\!Am}$), object properties (${\tt :\!R0}, \dots, {\tt :\!Rl}$) and individuals (${\tt :\!a0}, \dots, {\tt :\!ak}$) are added to the global graph: these symbols are used in the generation of global and local axioms. 3. Then the generation of global axioms takes place. We chose to generate axioms as follows, in order to create realistic instances of knowledge bases: - Class and property names are taken from the base signature using a random selection criterion in the form of (the positive part of) a Gaussian curve centered at $0$: intuitively, classes equal or close to ${\tt :\!A0}$ are more likely to appear in axioms than ${\tt :\!An}$. - Individuals are randomly selected using a uniform distribution. - TBox, ABox and RBox axioms in ${\mathcal{SROIQ}\text{-RL}}$ are added in the requested number to the global context module following the percentages shown in Table \[tab:percentage\] (note that the reported axioms are normal-form ${\mathcal{SROIQ}\text{-RL}}$ axioms, as defined in [@BozzatoSerafini:13]). These percentages were selected in order to simulate the typical distribution of constructs in real knowledge bases. 4. The same generation criteria are then applied to the local graphs representing the local knowledge of contexts. 5. If specified, the requested number of ${\textsl{eval}}$ axioms of the form ${\textsl{eval}}(A, {\mathsf{C}}) \subs B$ and the requested set of individuals in the scope of the ${\textsl{eval}}$ operator (i.e. as local members of $A$) are added to the local context graphs. **Experimental Setup.** Evaluation experiments were carried out on a 4-core Dual Intel Xeon machine with 32Gb 1866MHz DDR3 RAM, a standard S-ATA (7,200RPM) HDD, running a Linux RedHat 6.5 distribution. We allocated 6Gb of memory to the JVM running the SPRINGLES web-app (i.e.
the RDF storage and inference prototype), while 20Gb were allocated to the utility program managing the upload, profiling and cleaning of the test repositories. In order to abstract from the possible overhead of the repository setup, the tests were averaged over multiple runs of the closure operation for each CKR. The tests were carried out with different CKR rulesets in order to study their applicability in practical reasoning. The rulesets are restrictions of the full set of rules and of the evaluation strategy presented in the previous sections, in particular: - *ckr-rdfs-global:* inference is only applied to the global context (no local reasoning inside the named graphs of local contexts). Applies only the inference rules for RDFS and for the definition of the CKR structure (e.g. association of the named graphs of knowledge modules to contexts). - *ckr-rdfs-local:* inference is applied to the graphs of both the global and local contexts. Again, applies only the RDFS inference rules and the CKR structure rules. - *ckr-owl-global:* inference is only applied to the global context, considering all of the inference rules for ${\mathcal{SROIQ}\text{-RL}}$ and the CKR structure rules. - *ckr-owl-local:* the full strategy defined by the materialization calculus. Inference is applied to the global and local parts, using all of the (global and local) ${\mathcal{SROIQ}\text{-RL}}$ and CKR rules. More precisely, the application of the RDFS rules corresponds to restricting the OWL RL closure step to the inference rules for subsumption on classes and object properties. **TS1: scalability evaluation.** The first experiments we carried out on the CKR prototype aimed to determine the (average) inference closure time with respect to the increase in the number of contexts and their contents: with reference to the research questions in the introduction, this first evaluation aimed at answering question **RQ1**.
Using the CKR generator tool, we generated the set of test CKRs shown in Table \[tab:TS1\]: we call this test set *TS1*. In the table, the first TBox/RBox/ABox group refers to the global module and the second to each local module.

| Contexts | Classes | Roles | Indiv. | TBox | RBox | ABox | TBox | RBox | ABox | Total axioms |
|---------:|--------:|------:|-------:|-----:|-----:|-----:|-----:|-----:|-----:|-------------:|
| 1 | 10 | 10 | 20 | 10 | 5 | 20 | 10 | 5 | 20 | 70 |
| 1 | 50 | 50 | 100 | 50 | 25 | 100 | 50 | 25 | 100 | 350 |
| 1 | 100 | 100 | 200 | 100 | 50 | 200 | 100 | 50 | 200 | 700 |
| 1 | 500 | 500 | 1000 | 500 | 250 | 1000 | 500 | 250 | 1000 | 3,500 |
| 1 | 1000 | 1000 | 2000 | 1000 | 500 | 2000 | 1000 | 500 | 2000 | 7,000 |
| 5 | 10 | 10 | 20 | 10 | 5 | 20 | 10 | 5 | 20 | 210 |
| 5 | 50 | 50 | 100 | 50 | 25 | 100 | 50 | 25 | 100 | 1,050 |
| 5 | 100 | 100 | 200 | 100 | 50 | 200 | 100 | 50 | 200 | 2,100 |
| 5 | 500 | 500 | 1000 | 500 | 250 | 1000 | 500 | 250 | 1000 | 10,500 |
| 5 | 1000 | 1000 | 2000 | 1000 | 500 | 2000 | 1000 | 500 | 2000 | 21,000 |
| 10 | 10 | 10 | 20 | 10 | 5 | 20 | 10 | 5 | 20 | 385 |
| 10 | 50 | 50 | 100 | 50 | 25 | 100 | 50 | 25 | 100 | 1,925 |
| 10 | 100 | 100 | 200 | 100 | 50 | 200 | 100 | 50 | 200 | 3,850 |
| 10 | 500 | 500 | 1000 | 500 | 250 | 1000 | 500 | 250 | 1000 | 19,250 |
| 10 | 1000 | 1000 | 2000 | 1000 | 500 | 2000 | 1000 | 500 | 2000 | 38,500 |
| 50 | 10 | 10 | 20 | 10 | 5 | 20 | 10 | 5 | 20 | 1,785 |
| 50 | 50 | 50 | 100 | 50 | 25 | 100 | 50 | 25 | 100 | 8,925 |
| 50 | 100 | 100 | 200 | 100 | 50 | 200 | 100 | 50 | 200 | 17,850 |
| 50 | 500 | 500 | 1000 | 500 | 250 | 1000 | 500 | 250 | 1000 | 89,250 |
| 50 | 1000 | 1000 | 2000 | 1000 | 500 | 2000 | 1000 | 500 | 2000 | 178,500 |
| 100 | 10 | 10 | 20 | 10 | 5 | 20 | 10 | 5 | 20 | 3,535 |
| 100 | 50 | 50 | 100 | 50 | 25 | 100 | 50 | 25 | 100 | 17,675 |
| 100 | 100 | 100 | 200 | 100 | 50 | 200 | 100 | 50 | 200 | 35,350 |
| 100 | 500 | 500 | 1000 | 500 | 250 | 1000 | 500 | 250 | 1000 | 176,750 |
| 100 | 1000 | 1000 | 2000 | 1000 | 500 | 2000 | 1000 | 500 | 2000 | 353,500 |

: Test set TS1.[]{data-label="tab:TS1"}

Intuitively, TS1 contains sets of CKRs with an increasing number of contexts, in which the CKRs have an increasing number of axioms. We note that no ${\textsl{eval}}$ axioms were added to the TS1 knowledge bases.
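Two aspects of the generation procedure can be sketched in Python: the half-Gaussian selection of symbol names and the axiom totals in the last column of Table \[tab:TS1\]. This is an illustration under our own assumptions; in particular, the choice of `sigma` and the rejection loop are not the generator's actual code:

```python
import random

def pick_symbol(symbols, sigma=None):
    """Draw an index from the positive half of a Gaussian centred at 0,
    so that :A0 is more likely to appear than :An.  Out-of-range draws
    are simply rejected and retried (an illustrative choice)."""
    sigma = sigma or len(symbols) / 3.0
    while True:
        i = int(abs(random.gauss(0.0, sigma)))
        if i < len(symbols):
            return symbols[i]

def total_axioms(n_contexts, tbox, rbox, abox):
    """Total axioms of a generated TS1 CKR: one global module plus one
    equally sized local module per context."""
    per_module = tbox + rbox + abox
    return per_module * (1 + n_contexts)
```

For example, `total_axioms(50, 10, 5, 20)` reproduces the 1,785 entry of Table \[tab:TS1\] for 50 contexts and 10 base classes.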
We ran the CKR prototype on 3 generations of TS1, also varying the reasoning regime among the rulesets detailed above: the different generation instances of TS1 are necessary in order to reduce the impact of special cases in the random generation. The results of the experiments on TS1 are reported in Table \[tab:TS1results\]. In the table, for each of the generated CKRs (identified by the number of contexts and the number of base classes in the first two columns), we show the number of total asserted triples in column *Triples* (averaged over the 3 versions of TS1). The following columns list the results of the closure for each ruleset, in the order *ckr-rdfs-global*, *ckr-owl-global*, *ckr-rdfs-local*, *ckr-owl-local*: for each ruleset, we list the (average) total number of triples (asserted + inferred), the inferred triples, and the (average) time in milliseconds for the closure operation. The value *timedout* indicates that the closure operation exceeded 30 minutes (1,800,000 ms).

| Ctx. | Cls. | Triples | Total | Inf. | Time | Total | Inf. | Time | Total | Inf. | Time | Total | Inf. | Time |
|-----:|-----:|--------:|------:|-----:|-----:|------:|-----:|-----:|------:|-----:|-----:|------:|-----:|-----:|
| 1 | 10 | 208 | 228 | 20 | 222 | 234 | 26 | 326 | 249 | 41 | 291 | 298 | 90 | 868 |
| 1 | 50 | 1079 | 1165 | 87 | 221 | 1288 | 209 | 518 | 1351 | 272 | 323 | 1918 | 839 | 4596 |
| 1 | 100 | 2165 | 2398 | 233 | 260 | 2666 | 501 | 943 | 2687 | 521 | 346 | 3803 | 1638 | 15916 |
| 1 | 500 | 10549 | 11870 | 1321 | 846 | 13293 | 2743 | 22930 | 14833 | 4284 | 2461 | 22828 | 12278 | 556272 |
| 1 | 1000 | 20981 | 23600 | 2619 | 1528 | 25957 | 4976 | 95957 | 29993 | 9012 | 4644 | — | — | *timedout* |
| 5 | 10 | 644 | 685 | 41 | 176 | 698 | 54 | 226 | 780 | 136 | 193 | 1470 | 826 | 11721 |
| 5 | 50 | 3124 | 3259 | 135 | 190 | 3330 | 205 | 341 | 4134 | 1010 | 522 | 9874 | 6750 | 328107 |
| 5 | 100 | 6201 | 6450 | 249 | 254 | 6675 | 475 | 962 | 8845 | 2645 | 1258 | 31615 | 25414 | 913617 |
| 5 | 500 | 30928 | 31994 | 1066 | 719 | 33025 | 2097 | 23109 | 44987 | 14059 | 7819 | — | — | *timedout* |
| 5 | 1000 | 61691 | 64363 | 2672 | 1491 | 66661 | 4969 | 106967 | 95636 | 33945 | 16291 | — | — | *timedout* |
| 10 | 10 | 1149 | 1216 | 66 | 165 | 1225 | 76 | 202 | 1427 | 278 | 541 | 6141 | 4992 | 448249 |
| 10 | 50 | 5620 | 5782 | 163 | 210 | 5895 | 275 | 460 | 8008 | 2388 | 1392 | — | — | *timedout* |
| 10 | 100 | 11058 | 11353 | 295 | 281 | 11865 | 807 | 1745 | 16315 | 5257 | 2986 | — | — | *timedout* |
| 10 | 500 | 56578 | 57836 | 1258 | 910 | 59052 | 2474 | 33643 | 86821 | 30243 | 17375 | — | — | *timedout* |
| 10 | 1000 | 112824 | 115273 | 2449 | 2030 | 117666 | 4842 | 114443 | 173938 | 61113 | 36647 | — | — | *timedout* |
| 50 | 10 | 5509 | 5780 | 271 | 208 | 5785 | 276 | 256 | 7003 | 1494 | 2167 | — | — | *timedout* |
| 50 | 50 | 26327 | 26676 | 348 | 323 | 26795 | 467 | 825 | 35640 | 9312 | 14598 | — | — | *timedout* |
| 50 | 100 | 52037 | 52543 | 506 | 603 | 52749 | 713 | 2384 | 78439 | 26402 | 21461 | — | — | *timedout* |
| 50 | 500 | 259810 | 261355 | 1546 | 2025 | 262722 | 2913 | 41973 | 416088 | 156278 | 299504 | — | — | *timedout* |
| 50 | 1000 | 520276 | 523082 | 2807 | 4350 | 525702 | 5426 | 214434 | 827451 | 307176 | 397110 | — | — | *timedout* |
| 100 | 10 | 10658 | 11171 | 513 | 242 | 11181 | 523 | 279 | 12916 | 2258 | 1865 | — | — | *timedout* |
| 100 | 50 | 51709 | 52347 | 638 | 442 | 52461 | 752 | 1241 | 73639 | 21930 | 31003 | — | — | *timedout* |
| 100 | 100 | 103341 | 104035 | 694 | 531 | 104259 | 918 | 2784 | 145788 | 42447 | 47179 | — | — | *timedout* |
| 100 | 500 | 514497 | 516316 | 1819 | 3469 | 517567 | 3070 | 87325 | 844215 | 329718 | 774657 | — | — | *timedout* |
| 100 | 1000 | 1028233 | 1031367 | 3135 | 7835 | 1033725 | 5492 | 394881 | 1674765 | 646532 | 1018616 | — | — | *timedout* |

: Scalability results for test set TS1.[]{data-label="tab:TS1results"}

In order to analyze the results, the behaviour of the prototype for each ruleset has been plotted in the graphs shown in Figure \[fig:graph-Sesame\]. Each series represents a set with a fixed number of contexts (1 to 100) and each point a CKR. The $x$ axis represents the number of asserted triples, while the $y$ axis shows the time in milliseconds; the red horizontal line depicts the 30-minute timeout limit. To better visualize the behaviour of the series, we plotted a trend line for each series: the lines represent an approximation of the data trend calculated by polynomial regression[^3].
![Scalability graphs for TS1.[]{data-label="fig:graph-Sesame"}](CKR1_ALL_GRAPHSs-1.eps "fig:"){width=".65\columnwidth"}
a\) ckr-rdfs-global

![Scalability graphs for TS1.[]{data-label="fig:graph-Sesame"}](CKR1_ALL_GRAPHSs-2.eps "fig:"){width=".65\columnwidth"}
b\) ckr-owl-global

![Scalability graphs for TS1.[]{data-label="fig:graph-Sesame"}](CKR1_ALL_GRAPHSs-3.eps "fig:"){width=".65\columnwidth"}
c\) ckr-rdfs-local

![Scalability graphs for TS1.[]{data-label="fig:graph-Sesame"}](CKR1_ALL_GRAPHSs-4.eps "fig:"){width=".65\columnwidth"}
d\) ckr-owl-local

Some conclusions can be derived from these data and graphs. The most evident fact is that the reasoning regime strongly impacts the scalability of the system. Thus, in practical cases a naive application of the full OWL RL ruleset might not be viable in the presence of large local datasets; on the other hand, if expressive reasoning inside contexts is not required, scalability can be enhanced by relying on the RDFS rulesets (or, in general, by carefully tailoring the ruleset to the required expressivity). By analyzing the graphs and the approximations, it is also possible to observe that the system behaves differently under the different reasoning regimes. In the case of *ckr-rdfs-global* and *ckr-owl-global*, the results suggest that the management of named graphs does not add overhead to the reasoning in the global context.
This can also be seen in Table \[tab:TS1results\]: for a similar number of inferred triples, the separation across different graphs does not influence the reasoning time. For example, this is visible for cases with similar $y$ values in the graph (e.g. the case for 1000 classes in the series for 1 and 5 contexts, in both rulesets). In the case of *ckr-rdfs-local*, the graphs show that local reasoning clearly influences the total inference time. In particular, as the number of contexts grows, the behaviour tends to become linear in the number of asserted triples. While the data we have on *ckr-owl-local* are more limited, this behaviour seems to be confirmed by the trend lines. On the other hand, local OWL reasoning clearly adds to the reasoning time with respect to the RDFS case: informally, this can be seen in the graph from the larger time overhead across points with a similar number of asserted triples (i.e. on the same $x$ values) but a higher number of contexts. **TS2 and TS3: knowledge propagation evaluation.** The second set of experiments we carried out was aimed at answering question **RQ2**: we wanted to establish the cost of knowledge propagation among contexts, with respect to an increasing number of connections (i.e. ${\textsl{eval}}$ expressions) across contexts. To this aim, we generated two test sets, called *TS2* and *TS3*, structured as follows: - TS2 is composed of 100 CKRs, each of them with 100 contexts. Except for the triples needed for the definition of the contextual structure, both the global and local knowledge bases contain no randomly generated axioms. The CKRs inside TS2 are generated with an increasing number of context connections through ${\textsl{eval}}$ axioms (from no connections to the case of “fully connected” contexts).
In particular, for $n = 100$ contexts and $k$ connections, in each context $c_i$ we add axioms of the kind: $${\textsl{eval}}(D_0, \{c_{i+1 (mod\ n)}\}) \subs D_1,\ \dots,\ {\textsl{eval}}(D_0, \{c_{i+k (mod\ n)}\}) \subs D_1$$ Moreover, in each context we add a fixed number of instances (10 in the case of TS2) of the local concept $D_0$, which will be propagated through contexts and added to the local $D_1$ concepts by the inference rules for the above ${\textsl{eval}}$ expressions. - TS3 analogously contains 100 CKRs of 100 contexts and again no randomly generated global or local axioms. Unlike TS2, TS3 contains no ${\textsl{eval}}$ axioms, and the connections across contexts are simulated by having multiple versions of $D_0$ (namely $D_{0{\text{-}}0}, \dots, D_{0{\text{-}}99}$) to represent the local interpretations of the concept. Thus, for $n = 100$ contexts and $k$ connections, in each context $c_i$ we add axioms of the kind $D_{0{\text{-}}j} \subs D_1$ for $j \in \{i+1\ (mod\ n), \dots, i+k\ (mod\ n)\}$. Also, not only do we add to $c_i$ the 10 local instances of $D_{0{\text{-}}i}$, but we also “pre-propagate” the instances of each $D_{0{\text{-}}j}$ by explicitly adding them to the knowledge of $c_i$. We remark that this way of expressing “contextualized symbols”, used in TS3, has been discussed and compared to the CKR representation in [@BozGhiSer:KCAP2013].
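The ring-shaped connection pattern of TS2 can be sketched as follows. This is a minimal illustration of the axiom layout, with $\subs$ rendered as `subClassOf` in schematic strings:

```python
def ts2_eval_axioms(n=100, k=4):
    """For each context c_i, emit k eval axioms connecting it to the
    next k contexts in a ring: eval(D0, {c_(i+j mod n)}) subClassOf D1."""
    axioms = {}
    for i in range(n):
        axioms[f"c{i}"] = [
            f"eval(D0, {{c{(i + j) % n}}}) subClassOf D1"
            for j in range(1, k + 1)
        ]
    return axioms
```

The modular arithmetic wraps the last contexts back to the first, so every context has exactly $k$ outgoing connections; at $k = n - 1$ the contexts are fully connected.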
In the table below, the first group of columns refers to TS2 and the second to TS3.

| Related | Triples | Total | Inf. | Time | Triples | Total | Inf. | Time |
|--------:|--------:|------:|-----:|-----:|--------:|------:|-----:|-----:|
| 0 | 2803 | 3305 | 502 | 276 | 2803 | 3305 | 502 | 299 |
| 4 | 4703 | 9205 | 4502 | 893 | 11703 | 16205 | 4502 | 577 |
| 9 | 6703 | 16205 | 9502 | 1564 | 22703 | 32205 | 9502 | 1017 |
| 14 | 8703 | 23205 | 14502 | 2245 | 33703 | 48205 | 14502 | 1450 |
| 19 | 10703 | 30205 | 19502 | 2932 | 44703 | 64205 | 19502 | 1960 |
| 24 | 12703 | 37205 | 24502 | 3467 | 55703 | 80205 | 24502 | 2580 |
| 29 | 14703 | 44205 | 29502 | 4196 | 66703 | 96205 | 29502 | 3154 |
| 34 | 16703 | 51205 | 34502 | 4847 | 77703 | 112205 | 34502 | 4099 |
| 39 | 18703 | 58205 | 39502 | 5987 | 88703 | 128205 | 39502 | 4645 |
| 44 | 20703 | 65205 | 44502 | 6223 | 99703 | 144205 | 44502 | 5488 |
| 49 | 22703 | 72205 | 49502 | 6878 | 110703 | 160205 | 49502 | 6456 |
| 54 | 24703 | 79205 | 54502 | 7689 | 121703 | 176205 | 54502 | 7545 |
| 59 | 26703 | 86205 | 59502 | 8547 | 132703 | 192205 | 59502 | 8205 |
| 64 | 28703 | 93205 | 64502 | 9076 | 143703 | 208205 | 64502 | 9159 |
| 69 | 30703 | 100205 | 69502 | 9640 | 154703 | 224205 | 69502 | 10335 |
| 74 | 32703 | 107205 | 74502 | 10711 | 165703 | 240205 | 74502 | 10992 |
| 79 | 34703 | 114205 | 79502 | 11223 | 176703 | 256205 | 79502 | 11879 |
| 84 | 36703 | 121205 | 84502 | 14611 | 187703 | 272205 | 84502 | 13088 |
| 89 | 38703 | 128205 | 89502 | 12846 | 198703 | 288205 | 89502 | 13912 |
| 94 | 40703 | 135205 | 94502 | 14999 | 209703 | 304205 | 94502 | 15064 |
| 99 | 42703 | 142205 | 99502 | 14107 | 220703 | 320205 | 99502 | 15799 |

: Knowledge propagation results (extract) for test sets TS2 and TS3.[]{data-label="tab:TS2results"}

We ran the CKR prototype for 5 independent runs on TS2 and TS3, considering only the *ckr-owl-local* ruleset.
An extract of the results of the experiments on the two test sets is reported in Table \[tab:TS2results\]: the CKRs in the two sets are ordered with respect to the number of relations across contexts; for each CKR, the numbers of asserted, total and inferred triples are shown, followed by the (average) closure time in milliseconds. To facilitate the analysis of the results, we plotted these data as histograms in Figure \[fig:graph-TS2\]. The $x$ axis represents the number of local connections, while the $y$ axis shows the time in milliseconds. Again, to better visualize the behaviour of the series, we plotted a trend line for each series, calculated by polynomial regression[^4]. ![Knowledge propagation graphs for TS2 and TS3.[]{data-label="fig:graph-TS2"}](CKR4+5_ALL_GRAPHSs.eps){width=".824\columnwidth"} From the graph of TS2, we can note that the knowledge propagation cost depends linearly on the number of connections: from the data in Table \[tab:TS2results\] we can calculate that the average increase in closure time for $k$ local connections (for each context) w.r.t. the base case of 0 connections amounts to $(51.2 \cdot k)\%$. The comparison with TS3 confirms the compactness of a contextualized representation of symbols (cf. the findings in [@BozGhiSer:KCAP2013]): in fact, note that for an equal number of connections the number of inferences in TS2 and TS3 is equal, but TS3 always requires a larger number of asserted triples. Also, the graph clearly shows that TS3 grows more than linearly: for a small number of connections the knowledge propagation in TS2 requires more inference time ($14.9\%$ more, on average), but with the growth of local connections (at $\sim\!68\%$ of the number of contexts) the cost of TS3 local reasoning surpasses the propagation overhead. Conclusions and Future Works {#sec:conclusion} ============================ In this paper we provided a first evaluation of the performance of the RDF-based implementation of the CKR framework.
In the first experiment we evaluated the scalability of the current version of the prototype under different reasoning regimes. The second experiment was aimed at evaluating the cost of inter-context knowledge propagation and its relation to its simulation by “reification” of contextualized symbols. Further experimental evaluations of our contextual model would be interesting to carry out: one of these concerns the study of the cost and advantages of separating the same amount of knowledge across different contexts. With respect to the current CKR implementation, the scalability experiments clearly showed that the current naive strategy (defined by a direct translation of the formal calculus) might not be suitable for a real application of the full reasoning to large-scale datasets. In this regard, we are going to study different evaluation strategies and optimizations of the current strategy, and evaluate the results with respect to the naive case. One such possible optimization is a “pay-as-you-go” strategy, in which inference rules are activated only for constructs that are recognized in the local language of a context. [^1]: <http://www.openrdf.org/> [^2]: *SParql-based Rule Inference over Named Graphs Layer Extending Sesame*. [^3]: Average $R^2$ value across all approximations is $\geq 0.993$. [^4]: Average $R^2$ value across the two approximations is $\geq 0.989$.
--- abstract: 'We have developed PGPG (Pipeline Generator for Programmable GRAPE), a software package which generates the low-level design of the pipeline processor and the communication software for FPGA-based computing engines (FBCEs). An FBCE typically consists of one or multiple FPGA (Field-Programmable Gate Array) chips and local memory. Here, the term “Field-Programmable” means that one can rewrite the logic implemented in the chip after the hardware is completed, and therefore a single FBCE can be used to calculate various functions, for example pipeline processors for gravity, SPH interaction, or image processing. The main problem with FBCEs is that the user needs to develop the detailed hardware design for the processor to be implemented in the FPGA chips. In addition, she or he has to write the control logic for the processor, the communication and data-conversion library on the host processor, and the application program which uses the developed processor. These require detailed knowledge of hardware design, a hardware description language such as VHDL, the operating system and the application, and the amount of human work is huge. A relatively simple design would require one person-year or more. The PGPG software generates all necessary design descriptions, except for the application software itself, from a high-level design description of the pipeline processor in the PGPG language. The PGPG language is a simple language, specialized to the description of pipeline processors. Thus, the design of a pipeline processor in the PGPG language is much easier than the traditional design. For real applications such as the pipeline for gravitational interaction, the pipeline processor generated by PGPG achieved performance similar to that of hand-written code. In this paper we present a detailed description of PGPG version 1.0.'
author: - 'Tsuyoshi <span style="font-variant:small-caps;">Hamada</span> and Toshiyuki <span style="font-variant:small-caps;">Fukushige</span>' - 'Junichiro <span style="font-variant:small-caps;">Makino</span>' title: 'PGPG: An Automatic Generator of Pipeline Design for Programmable GRAPE Systems' --- Introduction {#secintro} ============ Astronomical many-body simulations have been widely used to investigate the formation and evolution of various astronomical systems, such as planetary systems, globular clusters, galaxies, clusters of galaxies, and large-scale structure. In such simulations, we treat planetesimals, stars, or galaxies as particles interacting with each other. We numerically evaluate the interactions between the particles and advance the particles according to Newton’s equation of motion. In many cases, the size of an astrophysical many-body simulation is limited by the available computational resources. Simulation of a pure gravitational many-body system is a typical example. Since gravity is a long-range interaction, the calculation cost is $O(N^2)$ per timestep for the simplest scheme, where $N$ is the number of particles in the system. We can reduce this $O(N^2)$ calculation cost to $O(N\log N)$ by using approximate algorithms, such as the Barnes-Hut treecode (Barnes, Hut 1986), but the scaling coefficient is rather large. Thus, the calculation of the interactions between particles is usually the most expensive part of the entire calculation, and thus limits the number of particles we can handle. Smoothed Particle Hydrodynamics (SPH; Lucy 1977; Gingold, Monaghan 1977), in which particles are used to represent the fluid, is another example. In SPH calculations, the hydrodynamical equations are expressed through short-range interactions between particles.
The calculation cost of this SPH interaction is rather high, because the average number of particles which interact with one particle is fairly large, typically around 50, and the calculation of a single pairwise interaction is quite a bit more complex than for the gravitational interaction. Astrophysics is not the only field where particle-based calculations are used. Molecular dynamics (MD) simulation and the boundary element method (BEM) are examples of numerical methods where each element of the system in principle interacts with all other elements in the system. In both cases, approaches similar to the Barnes-Hut treecode or FMM (Greengard, Rokhlin 1987) help to reduce the calculation cost, but the interaction calculation dominates the total calculation cost. One extreme approach to accelerating particle-based simulations is to build a special-purpose computer for the interaction calculation. Two characteristics of the interaction calculation make it well suited to such an approach. Firstly, the calculation of the pairwise interaction is relatively simple. In the case of the gravitational interaction, the total number of floating-point operations (counting all operations, including square root and divide operations) is only 20. So it is not inconceivable to design a fully pipelined, hardwired processor dedicated to the calculation of the gravitational interaction. For other applications like SPH or molecular dynamics, the interaction calculation is more complicated, but the hardware approach is still feasible. Secondly, the interaction is in its simplest form all-to-all. In other words, each particle interacts with all other particles in the system. Thus, there is plenty of parallelism available. In particular, it is possible to design the hardware so that it calculates the forces from one particle on many other particles in parallel. In this way we can reduce the required memory bandwidth.
Of course, if the interaction is of short-range nature, one needs to implement some clever way to reduce the calculation cost from $O(N^2)$ to $O(N)$, and the reduction in the memory bandwidth is not as effective as in the case of a true $O(N^2)$ calculation. The approach of developing specialized hardware for the gravitational interaction, materialized in the GRAPE (“GRAvity piPE”) project (Sugimoto et al. 1990; Makino and Taiji 1998), has been fairly successful, achieving speeds comparable to or faster than the fastest general-purpose computers at a price tag one or two orders of magnitude smaller. For example, GRAPE-6, which cost 500M JYE, achieved a peak speed of 64 Tflops. This speed compares favorably with the peak speed of the Earth Simulator (40 Tflops) or ASCI-Q (30 Tflops), both of which cost several tens of billions of JYE. A major limitation of GRAPE is that it cannot handle anything other than the interaction through a $1/r$ potential. It is certainly possible to build hardware that can handle an arbitrary central force, so that molecular dynamics calculations can also be handled (Ito et al. 1993; Fukushige et al. 1996; Narumi et al. 1999; Taiji et al. 2003). However, to design hardware that can calculate both the gravitational interaction and, for example, an SPH interaction is quite difficult. Actually, developing the pipeline processor just for the SPH interaction turned out to be a rather difficult task (Yokono et al. 1999). This is probably because the SPH interaction is much more complex than gravity. Computing devices which use FPGA (Field-Programmable Gate Array) chips can offer the level of flexibility that was impossible to achieve with the conventional GRAPE approach. As its name suggests, an FPGA is a mass-produced LSI chip, consisting of a large number of logic elements and a switching network. By programming these logic elements and the switching network, we can implement an arbitrary logic design, as long as it fits on the chip used.
Thus, a single hardware platform can be used to implement various pipeline processors, such as those for gravity, SPH, and others. Such FPGA-based “reconfigurable” computing devices have been an active area of research since Splash-1 and Splash-2 (Buell et al. 1996), and several groups, including ourselves, have tried to apply the idea of reconfigurable computing to particle simulations (Kim et al. 1995; Hamada et al. 2000; Spurzem et al. 2002). Hamada et al. (2000) called this approach “Programmable GRAPE” or PROGRAPE. (12 cm,8 cm)[./figure1.eps]{} Figure \[fig1\] shows the basic structure of a PROGRAPE system. It consists of a programmable GRAPE hardware board and a host computer. The programmable GRAPE hardware is typically composed of FPGA chips to which the interaction pipelines are implemented, a particle memory, and an interface unit, and calculates the interaction $\mathbf{ f}_i$ between the $i$-th particle and the other particles, expressed as $$\mathbf{ f}_{i} = \sum_{j}\mathbf{ G}(\mathbf{ a}_{i},\mathbf{ a}_{j}), \label{eq:PGPGbasic}$$ where $\mathbf{ a}_{i}$ is the physical data of the $i$-th particle, such as position and velocity, and $\mathbf{ G}()$ is a user-specified function. We specify the function $\mathbf{ G}()$ by programming the FPGA. The particle memory stores the physical data $\mathbf{ a}_{j}$ of all particles and supplies them to the interaction pipeline. The physical data $\mathbf{ a}_i$ are stored in registers of the interaction pipeline. The interface unit controls communications between the programmable GRAPE hardware and the host computer. The host computer performs all other calculations. FPGA-based PROGRAPEs have several important advantages over conventional full-custom GRAPE processors. One is that the development cost of the chip itself is paid by the manufacturer of the chip, not by us. Thus, the initial cost is much lower. This low development cost means that new hardware can be developed in shorter cycles.
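The host-side semantics of the equation above can be modelled in a few lines of Python. The gravity kernel below is one illustrative choice of $\mathbf{G}()$; the softening parameter `eps2` is our addition for numerical robustness, not part of the equation:

```python
import math

def accumulate(G, i_particles, j_particles):
    """Reference model of f_i = sum_j G(a_i, a_j): for each i-particle,
    accumulate the user-specified pairwise function G over all
    j-particles (the job of the particle memory and pipeline)."""
    forces = []
    for ai in i_particles:
        f = (0.0, 0.0, 0.0)
        for aj in j_particles:
            g = G(ai, aj)
            f = (f[0] + g[0], f[1] + g[1], f[2] + g[2])
        forces.append(f)
    return forces

def gravity(ai, aj, eps2=1e-6):
    """Example G: softened gravitational acceleration on particle i
    from particle j.  A particle is a tuple (x, y, z, mass)."""
    dx, dy, dz = aj[0] - ai[0], aj[1] - ai[1], aj[2] - ai[2]
    r2 = dx * dx + dy * dy + dz * dz + eps2
    inv_r3 = 1.0 / (r2 * math.sqrt(r2))
    return (aj[3] * dx * inv_r3, aj[3] * dy * inv_r3, aj[3] * dz * inv_r3)
```

On a real PROGRAPE system, the inner loop over $j$ is what runs in the pipeline on the FPGA, while the outer structure stays on the host; swapping in a different `G` corresponds to reconfiguring the chip.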
Large GRAPE systems took several years to develop, which means that the device technology used in GRAPE hardware, even at the time of its completion, is a few years old. This delay implies quite a large performance hit. Thus, even though the efficiency of transistor usage is much worse than in full-custom GRAPE processors, the actual price-performance of a PROGRAPE system is not so bad, if one condition is satisfied: the design of the pipeline processor to be implemented in the FPGA, and the other necessary software, can be developed sufficiently fast. Previous experience tells us that this is not the case. Implementing a relatively simple pipeline for the gravitational interaction calculation took more than one person-year, and the implementation of even a simple SPH pipeline would take much more. Thus, clearly the difficulty of software development has been the limiting factor for the practical use of PROGRAPE or other FPGA-based computing devices. The difficulty arises partly because we have to design the interaction pipeline itself, for which we need a rather detailed and lengthy description of the hardware logic in a hardware-description language such as VHDL. In addition to the pipeline itself, we also need to develop the control logic for the pipeline and for the communication with the host, the driver software on the host computer, and a software emulator library used to verify the design (see section \[sectrad\]). In theory, most of the design descriptions of software and hardware, including the bit-level design of the interaction pipeline itself, can be generated automatically from a high-level description of the pipeline. The basic idea behind the PGPG (Pipeline Generator for Programmable GRAPE) system, which we describe in this paper, is to realize such automatic generation. PGPG generates all necessary hardware design descriptions and driver software from a high-level description of the pipeline processor itself.
Thus, the user is relieved of the burden of learning the complex VHDL language. Also, the driver software is automatically generated, so that the user can concentrate on writing the application program, not the low-level driver software for a specific hardware platform. Thus, we can dramatically reduce the amount of work for the application programmer. More importantly, when a new hardware platform becomes ready, once the PGPG system is ported, all user applications developed on it work unchanged. The effort spent designing one application on one hardware platform will not be thrown away when new hardware becomes available. In this paper, we describe the PGPG system version 1.0. In section 2, we describe the traditional design flow and its problems. In section 3 we describe the basic concept and structure of PGPG. In section 4, we show the design of a gravitational force pipeline as an example of a pipeline generated by PGPG. Section 5 is for discussion. Table \[tabab\] is a glossary of the abbreviations used in the paper.
---------- ---------------------------------------------------------------------
API        Application Program Interface
BEM        Boundary Element Method
CAE        Computer Aided Engineering
DFT        Discrete Fourier Transform
FBCE       FPGA-Based Computing Engine
FMM        Fast Multipole Method
FPGA       Field Programmable Gate Array
GRAPE      GRAvity PipE
HDL        Hardware Description Language
LPM        Library of Parameterized Modules
LSI        Large Scale Integration
MD         Molecular Dynamics
PGDL       PGPG Description Language \[defined in this paper\]
PGPG       Pipeline Generator for Programmable GRAPE \[defined in this paper\]
PROGRAPE   PROgrammable GRAPE
SLDL       System-Level Design Language
SPH        Smoothed Particle Hydrodynamics
VHDL       VHSIC Hardware Description Language
VHSIC      Very High Speed Integrated Circuit
---------- ---------------------------------------------------------------------

: Abbreviation Glossary \[tabab\]

Traditional Design Flow for FPGA-based Computing Engines {#sectrad}
========================================================

(14 cm,12 cm)[./figure2.eps]{}

In the traditional design flow, we design an FPGA-based computing system in the following five steps.

- (A) Target Function Specification: We specify the target function, namely the function that the pipeline processor calculates. This includes the specification of the input data (number format and word length), the dataflow for the calculation of the function, and the input and output number format and word length of each arithmetic operation.

- (B) Bit-Level Software Emulator: We develop a software emulator that implements the target function defined in step (A). Using this emulator, we verify whether the designed hardware can actually calculate the target function with the required accuracy. In this step, we also define the application program interface (API).

- (C) Hardware Design: In this step we actually write the source code that implements the pipeline processor in a hardware-description language (HDL) such as VHDL.
In addition, we design the control logic and the host interface logic, also in an HDL. The HDL description is compiled into configuration data for the FPGA chips by design software, usually provided by the manufacturer of the FPGA device.

- (D) Interface Software: We develop the software on the host computer that handles communication with the hardware and the data-format conversion between the floating-point data on the host and the specialized data formats used in the pipeline processor. This software should have the same API as the software emulator developed in step (B).

- (E) Application: Finally, we can actually use the pipeline processor with a real application program, by combining the hardware, the hardware configuration data, the interface software, and the application software.

Figure \[fig2\] summarizes these steps. Within them, we have to design, test, and debug a large amount of hardware logic and software. Of course, many of the software and hardware designs can be reused when we develop different applications. For example, the design of a floating-point multiplier is rather generic and can be used in almost any application. It is also possible to buy such a design, or design software that generates floating-point arithmetic units of arbitrary word length, from a CAE company. However, just to understand how to use such a design, one needs a deep understanding of the hardware and of the HDL used for that particular design. Thus, even though reusability significantly reduces the work needed for one person's second and later designs, the initial hurdle remains high for an astrophysicist who has never used such software. In fact, the availability of a library can make the hurdle even higher, since a beginner must understand, in addition to the basics of hardware design and HDL, the use of the library and the particular design software that comes with it.
The development of the interface software is generally even more difficult than the design of the hardware, since it requires knowledge of how device drivers work in the operating system of the host computer, plus countless small details: how to integrate the device driver into the operating system, how to generate the correct compiler flags so that the driver works with the kernel installed on the host computer, and so on. All of this work combined makes it almost impossible for an astrophysicist to even consider implementing a pipeline processor on an FPGA-based computing engine.

The PGPG system {#secpgpg}
===============

Basic Idea of PGPG {#secflow}
------------------

(14 cm,12 cm)[./figure3.eps]{}

If we inspect Figure \[fig2\] again, we can see that [*all*]{} of the software and hardware descriptions are derived from the target function specification of step (A). Thus, it should be possible for a sufficiently smart piece of software to generate all the necessary software and hardware descriptions from a target function description written in some high-level language. The basic idea of PGPG is to develop such a smart piece of software. Figure \[fig3\] shows how the design flow changes with PGPG. After we define the target function, we write it in the high-level specification language, the PGPG description language (PGDL). The PGPG software system takes this PGDL description of the pipeline processor as input, and generates all software and hardware descriptions. Thus, with PGPG, an astrophysicist does not have to write VHDL source code for the pipeline processor or C source code for the interface library. In the rest of this section we illustrate how a pipeline processor is specified in PGDL and how that description is translated into actual code.
Example Target
--------------

(8 cm,6 cm)[./figure4.eps]{}

We consider the following (artificial) example: $$f_i = \sum_{j=1}^n {a_i a_j}\qquad (i=1,\ldots,n)$$ This function is designed purely to show how the PGDL description and the PGPG software work. Figure \[fig4\] shows the pipeline itself. A "particle" here is represented by a single scalar value $a$. The interaction between particles $i$ and $j$ is defined as the product $a_i a_j$, and we calculate the sum over $j$ to obtain the "force" on particle $i$. Here we have the essential ingredients of the system: the particles, their representation, and the functional form of the interaction. For the particle data $a_i$ (and $a_j$), we use a logarithmic format, with 17 bits in total (1 sign bit, 1 bit flagging a nonzero value, 7 bits for the integer part of the logarithm, and 8 bits for the fractional part). The base of the logarithm is 2. This logarithmic format has the advantage that multiplication becomes addition, so we do not need a multiplier circuit, whose size is $O(m^2)$, where $m$ is the length of the mantissa. Of course, addition in the logarithmic format is more complex than in the floating-point format, so the relative advantage of the log format is not very large. The "multiplier" logic itself is generated automatically by PGPG. In our example target function, we convert the output of the multiplier to fixed-point format, so that we can accumulate it with high accuracy. This is done by a circuit provided by PGPG. Finally, the converted result is accumulated by an ordinary fixed-point adder circuit. The particles with index $j$ are stored in memory, and new data are supplied at each clock cycle. The particle with index $i$ is fixed during one calculation, and is stored in a register within the pipeline processor.
#define ascale (pow(2.0,20.0))
#define fscale (1.0/(ascale*ascale))

/NVMP 1;
/NPIPE 2;

/JPSET iaj,aj[],log,17,8,ascale;
/IPSET iai,ai[],log,17,8,ascale;
/FOSET sfij,f[],fix,64,fscale;

pg_log_muldiv(MUL,iaj,iai,aij,17,1);
pg_conv_ltof(aij,fij,17,8,64,1);
pg_fix_accum(fij,sfij,64,64,1);

Figure \[fig5\] shows the PGDL description of this target function. The first two lines define the formulae used for data-format conversion between the internal formats (logarithmic for $a_i$, fixed-point for $f_i$) and the floating-point numbers on the host; they are used in the next block, which defines the interface. That block (the lines starting with "/") defines the register and memory layout, which also determines the API. The final part describes the target function itself. It has a C-like appearance, but actually defines the hardware modules and their interconnections. In the next subsection we describe the PGDL language in more detail.

The PGDL Language
-----------------

In this section, we give a minimal description of the PGDL language. A full description is available at [http://progrape.jp]{}.

### PGDL Target Hardware Model

(14 cm,12 cm)[./figure5.eps]{}

Figure \[fig5a\] gives the structure of the special-purpose processor generated from a PGDL "program". It consists of the control logic, the I/O logic, program-specified registers, a memory unit, and the pipeline unit. Program-specified registers are either input registers, which we call $i$-particle registers, or registers that accumulate the calculated interaction, which we call force-accumulation registers. We call the memory unit the $j$-particle memory. In figure \[fig5\], the pipeline processor is specified by a list of modules (the lines with [pg\_...]{}). Registers and memories are specified by the lines with [/IPSET]{} ($i$-particle registers), [/JPSET]{} ($j$-particle memory), and [/FOSET]{} (force-accumulation registers).
This hardware model is general and flexible enough to express any special-purpose computer that calculates a function of the form of equation (\[eq:PGPGbasic\]). We use the analogy of particles and forces, but the actual data in the $i$-particle registers or the $j$-particle memory need not represent physical particles, and the "force" can be something completely different. For example, this hardware model can be used to describe the Discrete Fourier Transform (DFT) or other convolution-type operations.

### PGDL API Model

Currently, one PGDL program generates one function prototype, [void force(...)]{}, with a list of arguments. The argument list consists of the data to be stored in the $i$-particle registers, the data to be stored in the $j$-particle memories, and the data to be returned (the contents of the force-accumulation registers). For the body of the function, both the emulator function and the actual driver and data-conversion function are generated, and the user can select either one by linking the appropriate object file, without changing the source file. In figure \[fig5\], the second arguments of the [/JPSET]{}, [/IPSET]{} and [/FOSET]{} lines determine the names of the API arguments that correspond to the specified register or memory elements.

### PGDL Program Structure

A PGDL program consists of the following sections:

1. macro declaration
2. generic declaration
3. interface declaration
4. pipeline description

The macro declaration, the first two lines of code in figure \[fig5\], is processed by the C preprocessor (cpp) and exists purely for convenience, allowing expressions that appear in multiple places to be defined only once. The generic declaration in figure \[fig5\] consists of two lines:

/NVMP 1;
/NPIPE 2;

The first line determines the degree of the virtual multiple pipeline (Makino et al. 1997). The second is the number of physical pipelines implemented in the current design.
Thus, we can change the physical number of pipelines by simply changing this parameter, and the application program can make use of the parallel pipelines without any change to the user code. The interface declaration is the following part:

/JPSET iaj,aj[],log,17,8,ascale;
/IPSET iai,ai[],log,17,8,ascale;
/FOSET sfij,f[],fix,64,fscale;

The first argument is the name used for the register or memory in the pipeline description, and the second is the name used in the API. The remaining arguments specify the number format. In this example, both $a_i$ and $a_j$ are in the logarithmic format, with a total word length of 17 bits and an 8-bit mantissa. Finally, the pipeline description is the following part:

pg_log_muldiv(MUL,iaj,iai,aij,17,1);
pg_conv_ltof(aij,fij,17,8,64,1);
pg_fix_accum(fij,sfij,64,64,1);

Here, [pg\_log\_muldiv]{} generates one multiplier in the logarithmic format, which takes two inputs, [iaj]{} and [iai]{}, and calculates one output, [aij]{}. The remaining arguments, [17]{} and [1]{}, indicate the bit length and the number of pipeline stages, respectively. The inputs are taken from the $j$-particle memory and the $i$-particle register with the corresponding names, and the output becomes the input of the next module, [pg\_conv\_ltof]{}, which converts the logarithmic format to the fixed-point format. Finally, the module [pg\_fix\_accum]{} accumulates the result, and the value of this accumulator, [sfij]{}, is accessible from the application program under the name [f]{}, as specified in the [/FOSET]{} declaration.

### PGDL Arithmetic Modules

The present version of PGDL supports the following two number formats: (a) fixed-point format and (b) logarithmic format. For the fixed-point format, PGDL supports addition (and accumulation), subtraction, and conversion to the logarithmic format. For the logarithmic format, multiplication, division, power functions (with rational exponents), and conversion to the fixed-point format are supported.
Appendix 1 gives a more detailed discussion of the PGDL language elements.

A Real Example: Gravitational Force Pipeline {#secgrav}
============================================

In this section, we discuss in detail the implementation of the gravitational force calculation in PGDL. We chose the gravitational force calculation as the example target because we can compare the performance and size of the PGDL-generated design with hand-coded ones such as the pipeline of GRAPE-5. The pipeline to be designed calculates the gravitational force on particle $i$: $$\mathbf{ a}_i = \sum_j {m_j \mathbf{ r}_{ij} \over (r_{ij}^2 + \varepsilon^2)^{3/2}}$$ where $\mathbf{ a}_i$ is the gravitational acceleration of particle $i$, $\mathbf{ r}_i$ and $m_i$ are the position and mass of particle $i$, $\mathbf{ r}_{ij} = \mathbf{ r}_{j} - \mathbf{ r}_{i}$, and $\varepsilon$ is a softening parameter. We design the pipeline to be essentially the same as those of GRAPE-3 and GRAPE-5, so that we can compare performance and size.

The PGDL Pipeline Design Description
------------------------------------

Figure \[fig7\] shows the block diagram of the gravitational force pipeline. Position data for both $i$-particles and $j$-particles are in the fixed-point format, while $m_j$ is in the logarithmic format. After the subtraction $\mathbf{r}_{j}-\mathbf{r}_{i}$, the results are converted to the logarithmic format, and all calculations up to the final accumulation are done in this format. Figure \[fig8\] shows the PGDL program for the gravitational force pipeline of figure \[fig7\]. One can see that each [pg]{} module in figure \[fig8\] corresponds directly to an arithmetic unit in figure \[fig7\].
Actually, the PGDL description is more compact, since it allows implicit array operations, as in the case of the first line:

pg_fix_addsub(SUB,xi,xj,xij,NPOS,1);

Here, [*three*]{} modules are generated automatically, because both [xi]{} and [xj]{} are declared as arrays of size 3 in the interface declaration. Also, there is no need to specify the wait (pipeline delay) modules explicitly, since the PGDL compiler inserts the necessary delay elements automatically.

(18 cm,12 cm)[./figure6.eps]{}

#define xscale (pow(2.0,32.0)/64.0)
#define mscale (pow(2.0,60.0)/(1.0/1024.0))
#define escale (xscale*xscale)
#define fscale (-xscale*xscale/mscale)
#define NPOS 32
#define NLOG 17
#define NMAN 8
#define NFOR 57
#define NACC 64

/NVMP 2;
/NPIPE 2;

/JPSET xj[3],x[][],ufix,NPOS,xscale;
/JPSET mj,m[],log,NLOG,NMAN,mscale;
/IPSET xi[3],x[][],ufix,NPOS,xscale;
/IPSET ieps2,eps2,log,NLOG,NMAN,escale;
/FOSET sx[3],a[][],fix,NACC,fscale;

pg_fix_addsub(SUB,xi,xj,xij,NPOS,1);
pg_conv_ftol(xij,dx,NPOS,NLOG,NMAN,4);
pg_log_shift(1,dx,x2,NLOG);
pg_log_unsigned_add(x2[0],x2[1],x2y2,NLOG,NMAN,3);
pg_log_unsigned_add(x2[2],ieps2,z2e2,NLOG,NMAN,3);
pg_log_unsigned_add(x2y2,z2e2,r2,NLOG,NMAN,3);
pg_log_shift(-1,r2,r1,NLOG);
pg_log_muldiv(MUL,r2,r1,r3,NLOG,1);
pg_log_muldiv(DIV,mj,r3,mf,NLOG,1);
pg_log_muldiv(MUL,mf,dx,fx,NLOG,1);
pg_conv_ltof(fx,ffx,NLOG,NMAN,NFOR,2);
pg_fix_accum(ffx,sx,NFOR,NACC,2);

The top-level interface function generated from this PGDL program has the following form:

void force(double x[][3], double m[], double eps2, double a[][3], int n);

In this example, the positions of the $i$-particles and $j$-particles are passed as a single array [x]{}, since the same name is used in [/IPSET]{} and [/JPSET]{}. Thus, with this interface it is only possible to calculate the self-gravity of an $N$-body system.
Performance of Generated Pipeline
---------------------------------

Here we report the performance of the PGDL-generated gravitational force calculation pipeline, and compare it with those of GRAPE-3 and GRAPE-5. We summarize the internal number formats of the pipeline designs in table \[tab2\].

Model   Position      Internal(mantissa)   Accumulation
------- ------------- -------------------- --------------
G3      20bit fixed   14(5)bit log         56bit fixed
G5      32bit fixed   17(8)bit log         64bit fixed
G5+     32bit fixed   20(11)bit log        64bit fixed

: Internal number formats of the pipeline models \[tab2\]

Table \[tab3\] shows the size and performance of the generated pipeline (model G5) for several implementations with different numbers of pipeline stages, for two kinds of Altera devices. Pipeline designs with different numbers of stages can easily be obtained by a small modification of the design entry file in PGPG. The sizes and maximum operation speeds are those reported by Altera's design software, Quartus II (ver. 3.0). The speed grades of these devices are (-2) for APEX20k and (-5) for Stratix (the fastest available at the time of writing).

------- ---------- -------------------- ------- ---------- --------------------
APEX20k                                 Stratix
stage   size(LE)   $f_{\rm max}$(MHz)   stage   size(LE)   $f_{\rm max}$(MHz)
14      2735       58.92                17      2499       133.30
16      2928       60.97                19      2655       137.51
17      2925       73.78                21      2849       142.29
20      3074       74.65                23      2927       135.78
21      3064       80.33                24      2864       142.88
------- ---------- -------------------- ------- ---------- --------------------

: Performance of the generated pipeline (model G5) \[tab3\]

------- --------------- ------ -------- -------
Model   $f_{\rm max}$   Size   Memory   Stage
        (MHz)           (LE)   (bit)
G5      181.82          3021   41k      30
G3      191.64          2369   21k      26
G5+     142.27          5082   402k     35
------- --------------- ------ -------- -------

: Performance of generated pipelines (Stratix) \[tab4\]

Table \[tab4\] shows the size and performance of pipelines with different accuracies (G3, G5, and G5+).
One can see that both the performance penalty and the size increase due to the higher accuracy of G5+, compared to G3 or G5, are fairly modest. With a currently available FPGA (Stratix EP1S20), we can fit 5 G5 pipelines running at 180 MHz into one chip. The original GRAPE-5 pipeline chip, made 7 years earlier, had two pipelines operating at an 80 MHz clock. Thus, the FPGA implementation of the GRAPE-5 pipeline is about 5 times faster than the original custom-chip implementation. The peak speed of one FPGA chip with GRAPE-5 pipelines is 34.2 Gflops. This large improvement over GRAPE-5 is, of course, due primarily to the advance in semiconductor technology over those 7 years (from $0.5{\rm \mu m}$ to 130 nm), but it clearly indicates that FPGA-based computing engines do offer very good performance, and that PGDL provides a practical tool for implementing special-purpose computers on them.

Discussion
==========

Comparison with Other Design Methodologies
------------------------------------------

In other areas, such as digital signal processing, there are many code generators that produce HDL code from a simple description. For example, a commercial package (MATLAB) can generate HDL code for fixed-point filters designed within it. Recently, a design methodology called system-level design has become popular. In system-level design, the function and architecture of an LSI or FPGA are described using the C/C++ languages or subsets of them. These languages are called System-Level Design Languages (SLDLs). Using an SLDL, programmers can verify functionality and performance at an early stage of development. The design is divided into a software part and a hardware part, and the hardware part is synthesized into a register-transfer design by the SLDL design software. SpecC, SystemC, and Handel-C are well-known commercially available SLDLs. There are also a number of research projects on designing hardware using the C++/Java languages (e.g.
Hutchings et al. 1999, Mencer 2002, Tsoi et al. 2004). The goal of these SLDLs is to describe hardware logic without using a hardware-description language such as VHDL or Verilog HDL. Even so, we still need a detailed description of the hardware, just written in another language such as C or C++. In terms of the traditional flow (figure \[fig2\]), an SLDL saves steps (C) and (D), while step (B) is still required. Using PGPG, we replace all of these steps by writing one short high-level description of the pipeline.

Planned and Ongoing Improvement of PGPG and PGDL
------------------------------------------------

In this paper, we have described the basic concept and functions of PGPG. For those who are interested, we have made a CGI interface to the current version of PGPG available at [http://progrape.jp]{}. The CGI program generates the VHDL code, the user interface code, and the emulator code from a PGDL description. Although the current version of PGPG is successful in designing the pipeline for the gravitational force, its functionality is rather limited. We are currently developing the next version of PGPG, which supports more functionality and multiple hardware platforms. For the next version, we plan to add the modules needed to design pipelines for SPH simulations and the Boundary Element Method (BEM). BEM is a method for numerically solving boundary-value problems of partial differential equations (Brebbia 1978). A floating-point format with a longer mantissa is needed for these applications, and we are now developing floating-point arithmetic modules for PGPG. We also plan to support Xilinx FPGA chips as well as Altera chips. As the target board for Xilinx devices, we will use the Bioler3/HORN-5 board developed by Chiba University and RIKEN (Ito et al. 2004).

This research was partially supported by Grants-in-Aid from the Japan Society for the Promotion of Science (14740127) and from the Ministry of Education, Science, Sports, and Culture of Japan (16684002).
Description of PGDL Declarations and Available Modules
======================================================

------------------ ---------------------------- ---------------------------------------------------------
Module             [pg\_fix\_addsub]{}          fixed point format adder/subtracter
                   [pg\_fix\_accum]{}           fixed point format accumulator
                   [pg\_log\_unsigned\_add]{}   unsigned logarithmic format adder
                   [pg\_log\_muldiv]{}          logarithmic format multiplier/divider
                   [pg\_log\_shift]{}           logarithmic format shifter
                   [pg\_conv\_ftol]{}           converter from fixed point format to logarithmic format
                   [pg\_conv\_ltof]{}           converter from logarithmic format to fixed point format
Definition         [/NPIPE]{}                   number of pipelines
                   [/NVMP]{}                    number of virtual multiple pipelines
                   [/JPSET]{}                   memory unit setting
                   [/IPSET]{}                   input register setting
                   [/FOSET]{}                   output register setting
Device support     Altera's FPGAs
Hardware support   PROGRAPE-2
Other              Options for look-up table
------------------ ---------------------------- ---------------------------------------------------------

: PGPG version 1.0 features \[tab1\]

Table \[tab1\] shows the features of the current version of PGPG. The specification of version 1.0 was determined so that the pipeline for the gravitational force, shown in section \[secgrav\], could be constructed as a first step. PGPG version 1.0 supports nine parametrized modules, as shown in table \[tab1\]. The bit lengths and the number of pipeline stages of each module can be changed through its arguments. For example, the arguments of the fixed-point adder/subtracter [pg\_fix\_addsub(SUB,xi,xj,xij,32,1)]{} indicate, from the first to the sixth argument, an operation flag (adder or subtracter), the first input, the second input, the output, the bit length, and the number of pipeline stages, respectively. Modules [pg\_fix\_addsub]{} and [pg\_fix\_accum]{} are the fixed-point adder/subtracter and the sign-magnitude accumulator, respectively.
Modules [pg\_log\_muldiv]{} and [pg\_log\_unsigned\_add]{} are the logarithmic-format multiplier/divider and unsigned adder, respectively. In the logarithmic format, a positive, non-zero real number $x$ is represented by its base-2 logarithm $y$, as $x=2^{y}$. The logarithmic format was adopted for the gravitational pipeline because it has a larger dynamic range for the same word length, and because operations such as multiplication and square root are easier to implement than in the usual floating-point format. For more details of the logarithmic format, see the GRAPE-5 paper (Kawai et al. 2000). Module [pg\_log\_shift]{} is a logarithmic-format shifter; shift operations in the logarithmic format express squaring (left shift) and square root (right shift). Module [pg\_conv\_ftol]{} is a converter from the fixed-point format to the logarithmic format, and [pg\_conv\_ltof]{} is a converter from the logarithmic format to the fixed-point format. In PGPG version 1.0, these modules are described partly using Altera's LPM. Gaps in delay timing are synchronized automatically by PGPG. In addition to the parametrized modules, five definitions are provided in PGPG version 1.0. Definitions [/NPIPE]{} and [/NVMP]{} define the numbers of (real) pipelines and virtual multiple pipelines (Makino et al. 1997), respectively. Definition [/JPSET]{} defines the settings for the memory unit. Definitions [/IPSET]{} and [/FOSET]{} define the settings for the input and output registers of the interaction pipeline, respectively.

Details of The Generated Gravitational Force Pipeline
=====================================================

In this appendix, we show part of the code generated by PGPG from the PGDL description of the force calculation pipeline. More complete code can be obtained from the CGI interface to the current version of PGPG at [http://progrape.jp]{}.

VHDL Code
---------

PGPG generates description files for the designed hardware logic in VHDL.
The hardware logic includes the pipeline logic itself and its peripheral logic. Figures \[fig11\] and \[fig12\] show a part of the VHDL source files generated by PGPG (the total length is about 2800 lines). The design software provided by the FPGA manufacturer creates the FPGA configuration data from the generated sources \[Step (C') in figure \[fig3\]\]. The configuration data are downloaded into the programmable GRAPE hardware using the interface program also generated by PGPG.

library ieee;
use ieee.std_logic_1164.all;
use ieee.std_logic_unsigned.all;

entity pipe is
  generic(JDATA_WIDTH : integer := 72);
  port(p_jdata  : in  std_logic_vector(JDATA_WIDTH-1 downto 0);
       p_run    : in  std_logic;
       p_we     : in  std_logic;
       p_adri   : in  std_logic_vector(3 downto 0);
       p_adrivp : in  std_logic_vector(3 downto 0);
       p_datai  : in  std_logic_vector(31 downto 0);
       p_adro   : in  std_logic_vector(3 downto 0);
       p_adrovp : in  std_logic_vector(3 downto 0);
       p_datao  : out std_logic_vector(31 downto 0);
       p_runret : out std_logic;
       rst,pclk : in  std_logic);
end pipe;

architecture std of pipe is
begin
  process(pclk) begin
    if(pclk'event and pclk='1') then
      jdata1 <= p_jdata;
    end if;
  end process;

  process(pclk) begin
    if(pclk'event and pclk='1') then
      if(vmp_phase = "0000") then
        xj(31 downto 0) <= jdata1(31 downto 0);
        yj(31 downto 0) <= jdata1(63 downto 32);
        zj(31 downto 0) <= p_jdata(31 downto 0);
        mj(16 downto 0) <= p_jdata(48 downto 32);
      end if;
    end if;
  end process;
  . . .
  u0: pg_fix_sub_32_1 port map (x=>xi,y=>xj,z=>xij,clk=>pclk);
  u1: pg_fix_sub_32_1 port map (x=>yi,y=>yj,z=>yij,clk=>pclk);
  u2: pg_fix_sub_32_1 port map (x=>zi,y=>zj,z=>zij,clk=>pclk);
  u3: pg_conv_ftol_32_17_8_4 port map (fixdata=>xij,logdata=>dx,clk=>pclk);
  u4: pg_conv_ftol_32_17_8_4 port map (fixdata=>yij,logdata=>dy,clk=>pclk);
  u5: pg_conv_ftol_32_17_8_4 port map (fixdata=>zij,logdata=>dz,clk=>pclk);
  . . .
end std;

library ieee;
use ieee.std_logic_1164.all;

entity pg_conv_ftol_32_17_8_4 is
  port(fixdata : in  std_logic_vector(31 downto 0);
       logdata : out std_logic_vector(16 downto 0);
       clk     : in  std_logic);
end pg_conv_ftol_32_17_8_4;

architecture rtl of pg_conv_ftol_32_17_8_4 is
begin
  d1 <= NOT fixdata(30 downto 0);
  one <= "0000000000000000000000000000001";
  u1: lpm_add_sub generic map (LPM_WIDTH=>31,LPM_DIRECTION=>"ADD")
      port map(result=>d2,dataa=>d1,datab=>one);
  d0 <= fixdata(30 downto 0);
  sign0 <= fixdata(31);
  with sign0 select
    d3 <= d0 when '0',
          d2 when others;
  process(clk) begin
    if(clk'event and clk='1') then
      d3r   <= d3;
      sign1 <= sign0;
    end if;
  end process;
  u2: penc_31_5 port map (a=>d3r,c=>c1);
  with d3r select
    nz0 <= '0' when "0000000000000000000000000000000",
           '1' when others;
  . . .
end rtl;

Interface Functions
-------------------

The interface software on the host computer for the programmable GRAPE hardware is built by compiling, with the C compiler, the source files generated by PGPG \[Step (D') in figure \[fig3\]\]. Figure \[fig13\] shows a part of the C source files generated by PGPG (the total length is about 150 lines). We run the application program linked with this interface software \[Step (E)\].
#include <stdio.h>
#include <math.h>

void force(double x[][3], double m[], double eps2, double a[][3], int n)
{
  npipe = 4;
  pgpgi_initial();
  pgpgi_setxj(n,x,m);
  for(i=0;i<n;i+=npipe){
    if((i+npipe)>n){
      nn = n - i;
    }else{
      nn = npipe;
    }
    pgpgi_setxi(i,nn,x,eps2);
    pgpgi_run(n);
    pgpgi_getforce(i,nn,a);
  }
}

void pgpgi_setxj(int n, double x[][3], double m[])
{
  devid = 0;
  for(j=0;j<n;j++){
    xj = ((unsigned int) (x[j][0] * (pow(2.0,32.0)/(64.0)) + 0.5)) & 0xffffffff;
    yj = ((unsigned int) (x[j][1] * (pow(2.0,32.0)/(64.0)) + 0.5)) & 0xffffffff;
    zj = ((unsigned int) (x[j][2] * (pow(2.0,32.0)/(64.0)) + 0.5)) & 0xffffffff;
    if(m[j] == 0.0){
      mj = 0;
    }else if(m[j] > 0.0){
      mj = (((int)(pow(2.0,8.0)*log(m[j]*(pow(2.0,60.0)/(1.0/1024.0)))/log(2.0))) & 0x7fff) | 0x8000;
    }else{
      mj = (((int)(pow(2.0,8.0)*log(-m[j]*(pow(2.0,60.0)/(1.0/1024.0)))/log(2.0))) & 0x7fff) | 0x18000;
    }
    nword = 4;
    jpdata[0] = 0xffc00;
    jpdata[1] = 2*j+1;
    jpdata[2] = 0x0 | ((0xffffffff & xj) << 0);
    jpdata[3] = 0x0 | ((0xffffffff & yj) << 0);
    g6_set_jpdata(devid,nword,jpdata);
    jpdata[1] = 2*j+0;
    jpdata[2] = 0x0 | ((0xffffffff & zj) << 0);
    jpdata[3] = 0x0 | ((0x1ffff & mj) << 0);
    g6_set_jpdata(devid,nword,jpdata);
  }
}

Emulator Code
-------------

Figures \[fig9\] and \[fig10\] show a part of the source files generated by PGPG (the total length is 580 lines).
#include <stdio.h>
#include <math.h>

void force(double x[][3], double m[], double eps2, double a[][3], int n)
{
  for(i=0;i<n;i++){
    xi = ((unsigned int) (x[i][0] * (pow(2.0,32.0)/64.0) + 0.5)) & 0xffffffff;
    yi = ((unsigned int) (x[i][1] * (pow(2.0,32.0)/64.0) + 0.5)) & 0xffffffff;
    zi = ((unsigned int) (x[i][2] * (pow(2.0,32.0)/64.0) + 0.5)) & 0xffffffff;
    if(eps2 == 0.0){
      ieps2 = 0;
    }else if(eps2 > 0.0){
      ieps2 = (((int)(pow(2.0,8.0)*log(eps2*((pow(2.0,32.0)/64.0)*(pow(2.0,32.0)/64.0)))/log(2.0))) & 0x7fff) | 0x8000;
    }else{
      ieps2 = (((int)(pow(2.0,8.0)*log(-eps2*((pow(2.0,32.0)/64.0)*(pow(2.0,32.0)/64.0)))/log(2.0))) & 0x7fff) | 0x18000;
    }
    sx = 0;
    sy = 0;
    sz = 0;
    for(j=0;j<n;j++){
      xj = ((unsigned int) (x[j][0] * (pow(2.0,32.0)/64.0) + 0.5)) & 0xffffffff;
      yj = ((unsigned int) (x[j][1] * (pow(2.0,32.0)/64.0) + 0.5)) & 0xffffffff;
      zj = ((unsigned int) (x[j][2] * (pow(2.0,32.0)/64.0) + 0.5)) & 0xffffffff;
      if(m[j] == 0.0){
        mj = 0;
      }else if(m[j] > 0.0){
        mj = (((int)(pow(2.0,8.0)*log(m[j]*(pow(2.0,60.0)/(1.0/1024.0)))/log(2.0))) & 0x7fff) | 0x8000;
      }else{
        mj = (((int)(pow(2.0,8.0)*log(-m[j]*(pow(2.0,60.0)/(1.0/1024.0)))/log(2.0))) & 0x7fff) | 0x18000;
      }
      pg_fix_sub_32(xi,xj,&xij);
      pg_fix_sub_32(yi,yj,&yij);
      pg_fix_sub_32(zi,zj,&zij);
      pg_conv_ftol_fix32_log17_man8(xij,&dx);
      pg_conv_ftol_fix32_log17_man8(yij,&dy);
      pg_conv_ftol_fix32_log17_man8(zij,&dz);
      . . .
            pg_fix_accum_f57_s64(ffx, &sx);
            pg_fix_accum_f57_s64(ffy, &sy);
            pg_fix_accum_f57_s64(ffz, &sz);
        }
        a[i][0] = ((double)(sx<<0))*(-(pow(2.0,32.0)/64.0)*(pow(2.0,32.0)/64.0)/(pow(2.0,60.0)/(1.0/1024.0)))/pow(2.0,0.0);
        a[i][1] = ((double)(sy<<0))*(-(pow(2.0,32.0)/64.0)*(pow(2.0,32.0)/64.0)/(pow(2.0,60.0)/(1.0/1024.0)))/pow(2.0,0.0);
        a[i][2] = ((double)(sz<<0))*(-(pow(2.0,32.0)/64.0)*(pow(2.0,32.0)/64.0)/(pow(2.0,60.0)/(1.0/1024.0)))/pow(2.0,0.0);
    }
}

#include<stdio.h>
#include<math.h>

void pg_conv_ftol_fix32_log17_man8(int fixdata, int* logdata)
{
    /* SIGN BIT */
    fixdata_msb = 0x1 & (fixdata >> 31);
    logdata_sign = fixdata_msb;
    /* ABSOLUTE */
    fixdata_body = 0x7FFFFFFF & fixdata;
    {
        int inv_fixdata_body = 0;
        inv_fixdata_body = 0x7FFFFFFF ^ fixdata_body;
        if (fixdata_msb == 0x1) {
            abs = 0x7FFFFFFF & (inv_fixdata_body + 1);
        } else {
            abs = fixdata_body;
        }
    }
    abs_decimal = 0x3FFFFFFF & abs;
    /* GENERATE NON-ZERO BIT (ALL BIT OR) */
    if (abs != 0x0) {
        logdata_nonzero = 0x1;
    } else {
        logdata_nonzero = 0x0;
    }
    {
        /* PRIORITY ENCODER */
        int i;
        int count = 0;
        for (i = 31; i >= 0; i--) {
            int buf;
            buf = 0x1 & (abs >> i);
            if (buf == 0x1) { count = i; break; }
            count = i;
        }
        penc_out = count;
    }
    penc_out = 0x1F & penc_out; /* 5-bit */
    . . .
}
--- abstract: 'Consider the set $\uu$ of real numbers $q \ge 1$ for which only one sequence $(c_i)$ of integers $0 \le c_i \le q$ satisfies the equality $\sum_{i=1}^{\infty} c_i q^{-i} = 1$. In this note we show that the set of algebraic numbers in $\uu$ is dense in the closure $\uuu$ of $\uu$.' address: 'Delft University of Technology, Mekelweg 4, 2628 CD Delft, the Netherlands' author: - Martijn de Vries title: A property of algebraic univoque numbers --- [^1] Introduction {#s1} ============ Given a real number $q \ge 1$, a $q-$[*expansion*]{} (or simply [*expansion*]{}) is a sequence $(c_i)=c_1 c_2 \ldots$ of integers satisfying $0 \le c_i \le q$ for all $i \geq 1$ such that $$\frac{c_1}{q} + \frac{c_2}{q^2} + \frac{c_3}{q^3} + \cdots = 1.$$ One such expansion, denoted by $(\gamma_i(q))= (\gamma_i)$, is obtained by performing the [*greedy algorithm*]{} of Rényi ([@R]): if $\gamma_i$ is already defined for $ i < n$, then $\gamma_n$ is the largest integer satisfying $$\sum_{i=1}^{n} \frac{\gamma_i}{q^i} \leq 1.$$ Equivalently, $(\gamma_i)$ is the largest expansion in lexicographical order. If $q >1$, then another such expansion, denoted by $(\alpha_i(q))= (\alpha_i)$, is obtained by performing the [*quasi-greedy algorithm*]{}: if $\alpha_i$ is already defined for $i < n$, then $\alpha_n$ is the largest integer satisfying $$\sum_{i=1}^{n} \frac{\alpha_i}{q^i} < 1.$$ An expansion is called [*infinite*]{} if it contains infinitely many nonzero terms; otherwise it is called [*finite*]{}. Observe that there are no infinite expansions if $q = 1$: the only 1-expansions are given by $10^{\infty}, 010^{\infty}, 0010^{\infty}, \ldots$. On the other hand, if $q >1$, then $(\alpha_i)$ is the largest infinite expansion in lexicographical order. For any given $q >1$, the following relations between the quasi-greedy expansion and the greedy expansion are straightforward. The greedy expansion is finite if and only if $(\alpha_i)$ is periodic. 
If $(\gamma_i)$ is finite and $\gamma_m$ is its last nonzero term, then $m$ is the smallest period of $(\alpha_i)$, and $$\alpha_i = \gamma_i \quad \mbox{for } i = 1, \ldots, m-1, \quad \mbox{and } \alpha_m = \gamma_m -1.$$ Erdős, Horváth and Joó ([@EHJ]) discovered that for some real numbers $q >1 $ there exists only one $q-$expansion. Subsequently, the set $\uu$ of such [*univoque numbers*]{} was characterized in [@EJK1], [@EJK2], [@KL3] (see Theorem \[t21\]). Using this characterization, Komornik and Loreti showed in [@KL1] that $\uu$ has a smallest element $q' \approx 1.787$ and the corresponding expansion $(\tau_i)$ is given by the truncated Thue-Morse sequence, defined by setting $\tau_{2^N}=1$ for $N=0,1,\ldots$ and $$\tau_{2^N + i} = 1 - \tau_i \quad \mbox{for }1 \le i < 2^N, \, N=1,2, \ldots.$$ Allouche and Cosnard ([@AC]) proved that the number $q'$ is transcendental. This raised the question whether there exists a smallest algebraic univoque number. Komornik, Loreti and Pethő ([@KL2]) answered this question in the negative by constructing a decreasing sequence $(q_n)$ of algebraic univoque numbers converging to $q'$. It is the aim of this note to show that for each $q \in \uu$ there exists a sequence of algebraic univoque numbers converging to $q$: \[t11\] The set $\mathcal{A}$ consisting of all algebraic univoque numbers is dense in $\uuu$. Our proof of Theorem \[t11\] relies on a characterization of the closure $\uuu$ of $\uu$, recently obtained by Komornik and Loreti in [@KL3] (see Theorem \[t22\]). Proof of Theorem \[t11\] {#s2} ======================== In the sequel, a sequence always means a sequence of nonnegative integers. We use systematically the lexicographical order between sequences; we write $(a_i) < (b_i)$ if there exists an index $n \geq 1$ such that $a_i=b_i$ for $i < n$ and $a_n < b_n$. This definition extends in the obvious way to sequences of finite length. 
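The greedy and quasi-greedy algorithms recalled in the introduction are easy to put into code. The Python sketch below is ours (the note contains no code); it uses exact rational arithmetic and is therefore restricted to integer $q$. For $q=2$ it reproduces the relation stated above: the greedy expansion $20^{\infty}$ of $1$ is finite with last nonzero term $\gamma_1=2$, and the quasi-greedy expansion is the purely periodic $1^{\infty}$ with smallest period $m=1$ and $\alpha_1=\gamma_1-1$.

```python
from fractions import Fraction
from math import floor, ceil

def greedy(q, n):
    """Rényi's greedy algorithm: c_i is the largest integer 0 <= c_i <= q
    keeping the partial sum of the c_i q**-i at most 1."""
    digits, s = [], Fraction(0)
    for i in range(1, n + 1):
        c = min(floor(q), floor((1 - s) * Fraction(q) ** i))
        digits.append(c)
        s += Fraction(c, q ** i)
    return digits

def quasi_greedy(q, n):
    """Quasi-greedy algorithm: the partial sums stay strictly below 1."""
    digits, s = [], Fraction(0)
    for i in range(1, n + 1):
        c = max(min(floor(q), ceil((1 - s) * Fraction(q) ** i) - 1), 0)
        digits.append(c)
        s += Fraction(c, q ** i)
    return digits

print(greedy(2, 5))        # [2, 0, 0, 0, 0]
print(quasi_greedy(2, 5))  # [1, 1, 1, 1, 1]
```

Working with `Fraction` keeps the inequalities in both algorithms exact, which matters here because the quasi-greedy step turns on a strict inequality that floating point cannot decide reliably.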
The following algebraic characterization of the set $\uu$ can be found in [@EJK1], [@EJK2], [@KL3]: \[t21\] The map $q \mapsto (\gamma_i(q))$ is a strictly increasing bijection between the set $\uu$ and the set of all sequences $(\gamma_i)$ satisfying $$\label{21} \gamma_{j+1} \gamma_{j+2} \ldots < \gamma_1 \gamma_2 \ldots \quad \mbox{for all }\, j \geq 1$$ and $$\label{22} \overline{\gamma_{j+1} \gamma_{j+2} \ldots} < \gamma_1 \gamma_2 \ldots \quad \mbox{for all }\, j \geq 1$$ where we use the notation $\overline{\gamma_n} := \gamma_1 - \gamma_n$. It was essentially shown by Parry (see [@P]) that a sequence $(\gamma_i)$ is the greedy $q$-expansion for some $q \ge 1$ if and only if $(\gamma_i)$ satisfies the condition . Using the above result, Komornik and Loreti ([@KL3]) investigated the topological structure of the set $\uu$. In particular they showed that $\uuu \setminus \uu$ is dense in $\uuu$. Hence the set $\uuu$ is a perfect set. Moreover, they established an analogous characterization of the closure $\uuu$ of $\uu$: \[t22\] The map $q \mapsto (\alpha_i(q))$ is a strictly increasing bijection between the set $\uuu$ and the set of all sequences $(\alpha_i)$ satisfying $$\label{23} \alpha_{j+1} \alpha_{j+2} \ldots \le \alpha_1 \alpha_2 \ldots \quad \mbox{for all }\, j \geq 1$$ and $$\label{24} \overline{\alpha_{j+1} \alpha_{j+2} \ldots} < \alpha_1 \alpha_2 \ldots \quad \mbox{for all }\, j \geq 1$$ where we use the notation $\overline{\alpha_n} := \alpha_1 - \alpha_n$. - It was shown in [@BK] that a sequence $(\alpha_i)$ is the quasi-greedy $q$-expansion for some $q >1$ if and only if $(\alpha_i)$ is infinite and satisfies . Note also that a sequence satisfying and is automatically infinite. - If $q \in \uuu \setminus \uu$, then we must have equality in for some $j \geq 1$, i.e., the greedy $q$-expansion is finite for each $q \in \uuu \setminus \uu$. 
On the other hand, it follows from Theorems \[t21\] and \[t22\] that a sequence of the form $(1^n0)^{\infty}$ $(n \ge 2)$ is the quasi-greedy $q$-expansion for some $q \in \uuu \setminus \uu$. Hence the set $\uuu \setminus \uu$ is countably infinite. The following technical lemma is a direct consequence of Theorem \[t22\] and Lemmas 3.4 and 4.1 in [@KL3]: \[l23\] Let $(\alpha_i)$ be a sequence satisfying and . Then - there exist arbitrarily large integers $m$ such that $$\label{25} \overline{\alpha_{j+1} \ldots \alpha_m} < \alpha_1 \ldots \alpha_{m-j} \quad \mbox{for all }\, 0 \le j < m;$$ - for all positive integers $m$, $$\label{26} \overline{\alpha_{1} \ldots \alpha_{m}} < \alpha_{m+1} \ldots \alpha_{2m}.$$ Since the set $\uuu \setminus \uu$ is dense in $\uuu$, it is sufficient to show that $\overline{\mathcal{A}} \supset \uuu \setminus \uu$. In order to do so, fix $q \in \uuu \setminus \uu$. Then, according to Theorem \[t22\], the quasi-greedy $q$-expansion $(\alpha_i)$ satisfies and . Let $k$ be a positive integer for which equality holds in , i.e., $$(\alpha_i)= (\alpha_1 \ldots \alpha_k)^{\infty}.$$ According to Lemma \[l23\] there exists an integer $m \geq k$ such that is satisfied. Let $N$ be a positive integer such that $kN \geq m$ and consider the sequence $$(\gamma_i)= (\gamma_i^N)=( \alpha_1 \ldots \alpha_k)^N (\alpha_1 \ldots \alpha_m \overline{\alpha_1 \ldots \alpha_m})^{\infty}.$$ For ease of exposition we suppress the dependence of $(\gamma_i)$ on $N$. Note that $\gamma_i=\alpha_i$ for $1 \leq i \leq m+kN$. In particular, we have $$\label{27} \gamma_i=\alpha_i \quad \mbox{for} \quad 1 \leq i \leq 2m.$$ Since $(\gamma_i)$ has a periodic tail, the number $q_N$ determined by $$1 = \sum_{i=1}^{\infty} \frac{\gamma_i}{q_{N}^i}$$ is an algebraic number and $q_N \to q$ as $N \to \infty$. According to Theorem \[t21\] it remains to verify the inequalities and . First we verify and for $j \geq kN$.
For those values of $j$ the inequality for $j+m$ is equivalent to for $j$ and for $j+m$ is equivalent to for $j$. Therefore it suffices to verify the inequalities and for $kN \le j < kN +m$. Fix $kN \le j < kN +m$. From , and we have $$\begin{aligned} \gamma_{j+1} \ldots \gamma_{kN+2m}&=& \alpha_{j-kN +1} \ldots \alpha_m \overline{\alpha_1 \ldots \alpha_m} \\ & < & \alpha_{j-kN +1} \ldots \alpha_m \alpha_{m+1} \ldots \alpha_{2m} \\ &\leq& \alpha_1 \ldots \alpha_{kN + 2m - j}\\ &=& \gamma_1 \ldots \gamma_{kN + 2m - j}\end{aligned}$$ and from inequality we have $$\begin{aligned} \overline{\gamma_{j+1} \ldots \gamma_{kN+m}} &=& \overline{\alpha_{j-kN+1} \ldots \alpha_m} \\ &<& \alpha_1 \ldots \alpha_{kN+m-j} \\ &=& \gamma_1 \ldots \gamma_{kN+m-j}.\end{aligned}$$ Now we verify for $j < kN$. If $m \leq j < kN$, then by and , $$\begin{aligned} \gamma_{j+1} \ldots \gamma_{kN+2m} &<& \alpha_{j+1} \ldots \alpha_{kN+2m} \\ & \leq& \alpha_1 \ldots \alpha_{kN+2m-j} \\ & = & \gamma_1 \ldots \gamma_{kN+2m-j}.\end{aligned}$$ If $1 \leq j < m$, then by and , $$\begin{aligned} \gamma_{j+1} \ldots \gamma_{kN+m+j} &=& \alpha_{j+1} \ldots \alpha_{kN+m} \overline{\alpha_1 \ldots \alpha_j}\\ & \leq & \alpha_1 \ldots \alpha_{kN+m-j}\overline{\alpha_1 \ldots \alpha_j}\\ & < & \alpha_1 \ldots \alpha_{kN+m-j} \alpha_{m-j+1} \ldots \alpha_m \\ &=& \gamma_1 \ldots \gamma_{kN+m}.\\\end{aligned}$$ Finally, we verify for $j < kN$. Write $j=k \ell +i \, , 0 \leq \ell < N$ and $0 \leq i < k$. 
If $i=0$, then follows from the relation $$\overline{\gamma_{j+1}}=\overline{\alpha_1}= 0 < \alpha_1 = \gamma_1.$$ If $1 \le i < k$, then applying Lemma \[l23\](ii) we get $$\overline{\alpha_{i+1} \ldots \alpha_{2i}} < \alpha_1 \ldots \alpha_i.$$ Hence $$\begin{aligned} \overline{\gamma_{j+1} \ldots \gamma_{j+k}} &=& \overline{\alpha_{j+1} \ldots \alpha_{j+k}} \\ &=& \overline{\alpha_{i+1} \ldots \alpha_{i+k}} \\ &<& \alpha_1 \ldots \alpha_k \\ &=& \gamma_1 \ldots \gamma_k.\\\end{aligned}$$ (In order for the first equality to hold in case $\ell = N-1$, we need the condition $m \geq k$.) - Since the set $\uuu$ is a perfect set and $\uuu \setminus \uu$ is countable, each neighborhood of $q \in \uu$ contains uncountably many elements of $\uu$. Hence the set of transcendental univoque numbers is dense in $\uuu$ as well. - Recently, Allouche, Frougny and Hare ([@AHF]) proved that there also exist univoque Pisot numbers. In particular they determined the smallest three univoque Pisot numbers. J.-P. Allouche, M. Cosnard, [*The Komornik–Loreti constant is transcendental*]{}, Amer. Math. Monthly [**107**]{} (2000), no. 5, 448–449. J.-P. Allouche, C. Frougny, K.G. Hare, [*On univoque Pisot numbers*]{}, Mathematics of Computation, to appear. C. Baiocchi, V. Komornik, [*Quasi-greedy expansions and lazy expansions in non-integer bases*]{}, manuscript. P. Erdős, M. Horváth, I. Joó, [*On the uniqueness of the expansions*]{} $1=\sum q^{-n_i}$, Acta Math. Hungar. [**58**]{} (1991), no. 3–4, 333–342. P. Erdős, I. Joó, V. Komornik, [*Characterization of the unique expansions $1=\sum q^{-n_i}$ and related problems*]{}, Bull. Soc. Math. France [**118**]{} (1990), no. 3, 377–390. P. Erdős, I. Joó, V. Komornik, [*On the number of $q$-expansions*]{}, Ann. Univ. Sci. Budapest. Eötvös Sect. Math. [**37**]{} (1994), 109–118. V. Komornik and P. Loreti, [*Unique developments in non-integer bases*]{}, Amer. Math. Monthly [**105**]{} (1998), no. 7, 636–639. V. Komornik, P.
Loreti, A. Pethő, [*The smallest univoque number is not isolated*]{}, Publ. Math. Debrecen [**62**]{} (2003), no. 3–4, 429–435. V. Komornik, P. Loreti, [*On the topological structure of univoque sets*]{}, J. Number Theory [**122**]{} (2007), 157–183. W. Parry, [*On the $\beta$-expansion of real numbers*]{}, Acta Math. Hungar. [**11**]{} (1960), 401–416. A. Rényi, [*Representations for real numbers and their ergodic properties*]{}, Acta Math. Hungar. [**8**]{} (1957), 477–493. [^1]:
--- abstract: 'Radio loud active galactic nuclei (AGN) are on average 1000 times brighter in the radio band than radio quiet AGN. We investigate whether this radio loud/quiet dichotomy can be due to differences in the spin of the central black holes that power the radio-emitting jets. Using general relativistic magnetohydrodynamic simulations, we construct steady state axisymmetric numerical models for a wide range of black hole spins (dimensionless spin parameter $0.1\le {a}\le 0.9999$) and a variety of jet geometries. We assume that the total magnetic flux through the black hole horizon at radius $r_{\rm H}({a})$ is held constant. If the black hole is surrounded by a thin accretion disk, we find that the total black hole power output depends approximately quadratically on the angular frequency of the hole, $P\propto{\Omega_{\rm H}}^2\propto ({a}/r_{\rm H})^2$. We conclude that, in this scenario, differences in the black hole spin can produce power variations of only a factor of a few tens at most. However, if the disk is thick such that the jet subtends a narrow solid angle around the polar axis, then the power dependence becomes much steeper, $P\propto{\Omega_{\rm H}}^4$ or even $\propto {\Omega_{\rm H}}^6$. Power variations of a factor of $1000$ are then possible for realistic black hole spin distributions. We derive an analytic solution that accurately reproduces the steeper scaling of jet power with ${\Omega_{\rm H}}$, and we provide a numerical fitting formula that reproduces all our simulation results. We discuss other physical effects that might contribute to the observed radio loud/quiet dichotomy of AGN.' author: - 'Alexander Tchekhovskoy,$^1$ Ramesh Narayan$^1$, Jonathan C. McKinney$^2$' title: 'Black Hole Spin and the Radio Loud/Quiet Dichotomy of Active Galactic Nuclei' --- \[firstpage\] Introduction {#sec_intro} ============ The first active galactic nuclei (AGN) were discovered through radio emission associated with their relativistic jets.
However, it soon became clear that not all AGN[^1] have powerful radio jets; in fact, only about 10% of quasars do. The evidence for a dichotomy between radio loud and radio quiet AGN has become stronger over the years, culminating in the impressive demonstration by @ssl07 that two very distinct populations of AGN are clearly visible when radio luminosities $L_R$ of AGN are plotted against optical luminosities $L_B$. For a given value of $L_B$, these authors show that $L_R$ of radio loud AGN is $\sim10^3{-}10^4$ times greater than that of radio quiet AGN. Also, the two populations follow two well-separated tracks on the plot. The origin of the radio loud/quiet dichotomy has been much discussed in the literature. At Eddington ratios $\lambda = L_{\rm bol}/L_{\rm Edd} \sim 10 L_B/L_{\rm Edd} > 0.01$, where $L_{\rm bol}$ is the bolometric luminosity of the AGN and $L_{\rm Edd}$ is its Eddington luminosity, a likely explanation for the dichotomy [@ho00; @ssl07] is that these systems accrete via a standard thin accretion disk. Jet production is then expected to be intermittent, as found to be the case in black hole (BH) X-ray binaries (@fend04a). However, the existence of two distinct populations for $\lambda<0.01$ is harder to explain. Even at these low luminosities, the radio loudness parameter $R=L_{\rm 5\ GHz}/L_B$ of the radio loud population is at least a factor of $10^3$ times larger than that of the radio quiet population. However, at low values of $\lambda$, BH X-ray binaries typically are in a hard spectral state associated with an advection-dominated accretion flow (ADAF, @nm08), and in this state, all BH X-ray binaries are radio loud [@fend04a]. Why then do AGN with similar values of $\lambda$ have a radio loud/quiet dichotomy? One possible explanation is that radio loud objects are driven by a central BH with a large spin which produces a jet by the Blandford-Znajek (BZ) mechanism [@bz77 hereafter, ]. 
This is referred to as the spin paradigm [@blandford1990; @wc95; @blandford99], which is in contrast to the accretion paradigm which states that the BH mass and mass accretion rate determine the jet power [@blandfordrees74; @blandfordrees92]. These different paradigms plausibly operate together to some degree [@bbr84; @meier02]. The spin paradigm has been invoked to explain the observed correlation between jet and accretion power in elliptical galaxies [@allen2006] by combining an ADAF accretion model with the BZ effect [@nemmen07; @benson09]. In terms of the dimensionless spin parameter ${a}= J/GM^2$, where $M$ and $J$ are the mass and angular momentum of the BH, it is found that one requires ${a}\gtrsim 0.9$ to explain the correlation. The dichotomy in the power and spatial distribution of emission in Fanaroff-Riley classes 1 and 2 (FR 1 and FR 2) radio galaxies may also be explained by the spin paradigm [@baum95]. The radio loud/quiet dichotomy in AGN [@kellerman89; @msl98; @ivezic04] has been explained in terms of an in situ trigger for relativistic jets [@meier97]. It could also be explained by the differences in the evolutionary stages at which we observe the objects [@blundell08] coupled with the episodic activity of the AGN evidenced by the change or precession of jet orientation [@sb08; @sbd08; @shb10]. However, another possibility is that the two AGN populations have different merger and accretion histories which lead to different BH spins. Recent observations show that, for $\lambda<0.01$, all radio loud AGN reside in elliptical galaxies, whereas radio quiet AGN live mostly in spirals [@ssl07]. @volonteri07 explored a number of scenarios for the formation of ellipticals and spirals and showed that it is plausible for the nuclear BHs in spirals to have lower spins than those in ellipticals. This suggests that the spin paradigm may explain the radio loud/quiet dichotomy. 
The main problem with this explanation is that the difference in radio loudness between the radio loud and radio quiet populations is a factor of $10^3$ [@ssl07]. This is a strikingly large difference. According to the accretion or spin paradigms, relativistic jets are produced by magnetic outflows from either the inner region of the disk or the spinning BH. The power in the disk outflow is expected to be proportional to the disk luminosity, which leaves no room for a radio loud/quiet dichotomy, so we will ignore this possibility[^2]. The power from the BH does depend on the spin parameter, and we will focus on this[^3]. However, the analytical model of BZ indicates that the jet power $P$ varies only as ${a}^2$ for fixed magnetic flux threading the horizon. If such a weak variation is to produce a difference of $10^3$ in radio power[^4], we need ${a}$ in the two populations to differ by a factor $\sim30$, which does not seem plausible given likely merger histories [@hughesblandford03; @gammie_bh_spin_evolution_2004]. More plausible is a factor $\sim3$ difference in the median values of ${a}$ in the two populations (e.g., see @volonteri07), but this will produce only a factor $\sim10$ difference in jet power, not $10^3$. The scaling for power, $P\propto {a}^2$, was derived in the limit ${a}\ll1$, for a razor-thin disk, assuming the magnetic flux threading the BH is independent of ${a}$. Recent analytical and numerical work by @tn08 shows that, for a BH threaded by a split monopole magnetic field, $P$ increases as ${a}^4$ at large values of ${a}$ when higher-order corrections are included. However, the analytical model worked out by these authors only gives a factor of two increase in power above the BZ result even at ${a}=1$. Their numerical simulations achieve a slightly steeper scaling, but still only a factor of four above the BZ result at ${a}=1$. In any case, their work hints that a much steeper dependence of power on ${a}$ occurs as ${a}\to 1$.
Are there other effects that can introduce an even steeper dependence on ${a}$? General relativistic magnetohydrodynamic simulations of accretion disks by @mck05 showed that for ${a}\gtrsim 0.5$ the jet power varies as steeply as the fourth power of the BH angular rotation rate, i.e., $P\propto {\Omega_{\rm H}}^4$, where ${\Omega_{\rm H}}\propto {a}/r_{\rm H}$ and $r_{\rm H}$ is the radius of the horizon. Compared to the scaling $P\propto{a}^4$, the scaling $P\propto {\Omega_{\rm H}}^4$ introduces an additional factor of $16$ due to the division by $r_{\rm H}$, since $r_{\rm H}$ decreases from $2M$ to $M$ as ${a}$ varies from $0$ to $1$. @mck05 also finds that the power output of the entire BH has a shallower dependence on ${\Omega_{\rm H}}$ compared to the power output of the jet, which subtends a small solid angle above the disk and corona[^5]. This suggests that changes in the solid angle subtended by the jet (via changes in the disk thickness) could change the steepness of the power output as a function of ${a}$. Since the scaling of $P$ with BH spin is important for jet studies, we have explored this issue in detail using general relativistic magnetohydrodynamic numerical simulations. We consider a variety of geometries for the shape of the jet to see if we can come up with any scenario in which jet power could change by a large factor for a modest variation in ${a}$. We show that the most favorable scenario is a BH surrounded by a thick accretion flow with an angular thickness $H/R\sim1$. We show that in this case the power output into a polar jet has a steep dependence on the spin, $P\propto {\Omega_{\rm H}}^4$, and that the scaling steepens to $P\propto {\Omega_{\rm H}}^6$ for even thicker disks. Hence, we confirm the basic result found by @mck05 of a steep dependence of jet power on ${a}$ at high latitudes above the disk. We suggest that this strong dependence may explain the radio loud/quiet dichotomy in AGN. 
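The contrast between these scalings is easy to quantify. The sketch below is ours, not from the paper (helper names hypothetical; units $G=c=M=1$, so $\Omega_{\rm H}=a/2r_{\rm H}$ with $r_{\rm H}=1+\sqrt{1-a^2}$): a factor $\sim3$ spin difference gives only $\sim10$ under $P\propto a^2$, a few tens under $P\propto\Omega_{\rm H}^2$, but $\sim10^3$ under the steeper $P\propto\Omega_{\rm H}^4$.

```python
import math

def omega_h(a):
    """Horizon angular frequency Omega_H = a / (2 r_H) in G = c = M = 1
    units, with r_H = 1 + sqrt(1 - a**2); note Omega_H ∝ a / r_H."""
    return a / (2.0 * (1.0 + math.sqrt(1.0 - a * a)))

def power_contrast(a_loud, a_quiet, n):
    """Jet-power ratio between two spins if P scales as Omega_H**n."""
    return (omega_h(a_loud) / omega_h(a_quiet)) ** n

# A factor ~3 difference in spin yields only ~10 under P ∝ a**2 ...
print((0.9 / 0.3) ** 2)                                    # ≈ 9
# ... but ~30 under P ∝ Omega_H**2 and ~10^3 under P ∝ Omega_H**4:
print(power_contrast(0.99, 0.3, 2), power_contrast(0.99, 0.3, 4))
```

The extra steepness relative to $P\propto a^n$ comes from the shrinking horizon: $r_{\rm H}$ drops from $2M$ to nearly $M$ as $a\to1$, boosting $\Omega_{\rm H}$ beyond the linear growth in $a$.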
Our numerical setup is described in §\[sec:num\]. The results for BHs with razor-thin disks are presented in §\[sec:thindisk\], and for jets from BHs with thick disks in §\[sec:thickdisk\]. We discuss the results in the context of the AGN radio loud/quiet dichotomy in §\[sec:discussion\], and conclude in §\[sec:conclusions\]. We work with Heaviside-Lorentzian units and set $c=G=1$. Numerical Setup {#sec:num} =============== ![image](fig1.eps){width="\textwidth"} ![image](fig2.eps){width="\textwidth"} It is known that a highly magnetized relativistic jet does not easily self-collimate. For instance, the equilibrium field configuration around an isolated spinning BH threaded by a magnetic field (sourced by external currents) takes the form of a split monopole. Only extremely close to the polar axis is any evidence of self-collimation present [@tch09]. Therefore, in order to produce a jet which collimates most of the energy output from the BH, it is necessary to introduce an external confining medium. The confining agent may be the gas in an accretion disk, a corona, or a wind emerging from the inner regions of the disk. Ideally, one should numerically simulate both the jet and the confining medium, but this is computationally very challenging. Instead, we follow the more usual approach (e.g., @kom07 [@tch09b]) of introducing a rigid axisymmetric wall with a prescribed shape and requiring the jet to lie inside the wall. The shape of the wall is set by two parameters: 1. An index $\nu$ which sets the asymptotic poloidal field line shape, as described below. This parameter ranges from $\nu=0$, which corresponds to a monopole field geometry, to $\nu=1$, which corresponds to a paraboloidal jet. In a real system, $\nu$ would be set by the radial pressure profile of the confining medium. Plausible values are in the range $\nu\sim0.5{-}1$ [@tch08]. 2.
A transition radius $r_0$ which is defined such that for $r\lesssim r_0$ the jet is monopolar and for $r\gtrsim r_0$ it switches to the shape prescribed by $\nu$. The parameter $r_0$ allows us to consider situations in which confinement operates only beyond a certain distance from the BH. In terms of these two parameters, the wall has the following shape in polar $(r,\theta)$ coordinates in the Boyer-Lindquist frame: $$\label{eq:theta_asym} 1-\cos\theta = \left(\frac{r+r_0}{r_{\rm H}+r_0}\right)^{-\nu}, \qquad r_{\rm H}=M\left[1+(1-{a}^2)^{1/2}\right],$$ where $r_{\rm H}$ is the radius of the BH horizon. For $r\ll{}r_0$, $(1-\cos\theta)\approx1$, so $\theta\approx\pi/2$, i.e., the wall lies along the equatorial plane, as for a split monopole. For $r\gg(r_{\rm H},r_0)$, $\theta\propto{}r^{-\nu/2}$, which corresponds to a generalized paraboloid. Note that, in all these models, the wall meets the horizon at the equator. In effect, this means we assume a razor-thin disk which subtends zero solid angle at the BH. In §\[sec:thickdisk\] we discuss the case of geometrically thick disks. ![image](fig3.eps){width="90.00000%"} Having picked the shape of the wall, we choose the poloidal flux function of the initial magnetic field configuration to be $$\label{eq:psi} \Psi=\left(\frac{r+r_0}{r_{\rm H}+r_0}\right)^\nu(1-\cos\theta).$$ Note that $\Psi$ is conserved along each field line, and $\Phi=2\pi\Psi(r,\theta)$ is the poloidal magnetic flux through a toroidal ring at $(r,\theta)$. By construction, equation \[eq:psi\] corresponds to a total flux of $\Phi_{\rm tot}=2\pi$ in the jet. In this model $\Phi_{\rm tot}$ does not depend upon spin, so the amount of magnetic flux threading the BH is fixed for different spins. The outermost field line, defined by $\Psi=1$, follows the shape of the wall. This particular field line is always located at the wall because of our boundary conditions. Interior field lines, however, are free to move once the simulation begins and do experience minor shifts in the poloidal direction.
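The limiting behavior of the wall shape in equation \[eq:theta_asym\] can be checked numerically. The sketch below is ours (helper name hypothetical; lengths in units of $M$): the wall sits at $\theta=\pi/2$ at the horizon and falls off as $r^{-\nu/2}$ far outside both $r_{\rm H}$ and $r_0$.

```python
import math

def wall_theta(r, a, nu, r0):
    """Polar angle of the confining wall from eq. (theta_asym):
    1 - cos(theta) = ((r + r0) / (r_H + r0))**(-nu), units of M."""
    r_h = 1.0 + math.sqrt(1.0 - a * a)
    return math.acos(1.0 - ((r + r0) / (r_h + r0)) ** (-nu))

r_h = 1.0 + math.sqrt(1.0 - 0.9 ** 2)
# At the horizon the wall lies in the equatorial plane:
print(wall_theta(r_h, 0.9, 1.0, 0.0))                # pi/2 exactly
# Far out, theta ∝ r**(-nu/2): quadrupling r should halve theta for nu = 1.
print(wall_theta(400.0, 0.9, 1.0, 0.0) /
      wall_theta(100.0, 0.9, 1.0, 0.0))              # ≈ 0.5
```

For small angles $1-\cos\theta\approx\theta^2/2$, which is how the $(r+r_0)^{-\nu}$ falloff in the flux surface translates into the $\theta\propto r^{-\nu/2}$ generalized paraboloid quoted in the text.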
The initial magnetic field has no toroidal component. There are no known exact solutions for the magnetosphere of a spinning BH. However, the poloidal field configurations given in equation \[eq:psi\] are sufficiently close to the true solutions that their initial relaxation when the simulation starts is rather mild. Only two linearly independent analytic solutions have been obtained for a non-spinning BH: one corresponds to a monopolar field geometry and is given by equation \[eq:psi\] with $\nu=0$, while the other is the following solution for a paraboloidal field, $$\label{eq:bzpsipara} \Psi=\frac{(r/r_{\rm H}-1)(1-\cos\theta)-(1+\cos\theta)\log(1+\cos\theta)}{2\log2}+1.$$ Note that the split-monopole solution applies to the entire space exterior to the horizon, whereas the paraboloidal solution only applies to the field lines attached to the BH (see @mck07b for a numerical paraboloidal solution that applies to the whole space). Any linear combination of the monopolar and paraboloidal solutions is also a solution [see, e.g., @bes09]. The paraboloidal solution \[eq:bzpsipara\] is very similar to the approximate solution \[eq:psi\] for the case $\nu=1$, and this is our reason for focusing on the simpler model \[eq:psi\], with $\nu$ varying over the range 0 to 1. For completeness, we have also run simulations using the field geometry \[eq:bzpsipara\] to initialize the calculations (along with the appropriate choice of the wall shape, obtained by setting $\Psi=1$ in this equation). We performed the simulations using the general relativistic (GR) MHD code HARM [@gam03; @mck04; @mck06jf; @mck06ffcode; @nob06] using Kerr-Schild coordinates in the Kerr metric; the code includes a number of recent improvements (@mm07 [@tch07; @tch08; @tch09]). Most of the simulations were done in the force-free approximation, which assumes that the plasma is infinitely magnetized and has negligible inertia. Within this approximation, the problem is fully defined by specifying just the BH spin and the shape of the wall.
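The normalisation of the paraboloidal solution \[eq:bzpsipara\] can be verified directly. The fragment below is our own consistency check (helper name hypothetical; non-spinning hole, so $r_{\rm H}=2M$, lengths in units of $M$): $\Psi=0$ on the polar axis and $\Psi=1$ where the equatorial wall meets the horizon, matching the total flux $\Phi_{\rm tot}=2\pi$.

```python
import math

def psi_para(r, theta, r_h=2.0):
    """Paraboloidal flux function of eq. (bzpsipara) for a = 0 (r_h = 2M)."""
    mu = math.cos(theta)
    return ((r / r_h - 1.0) * (1.0 - mu)
            - (1.0 + mu) * math.log(1.0 + mu)) / (2.0 * math.log(2.0)) + 1.0

print(psi_para(5.0, 0.0))              # 0 on the polar axis, at any r
print(psi_para(2.0, math.pi / 2.0))    # 1 at the horizon equator
```

On the axis the first term vanishes and the second reduces to $-2\log2$, cancelling the $+1$; at $(r,\theta)=(r_{\rm H},\pi/2)$ both terms vanish, leaving $\Psi=1$.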
Figure \[fig1\] shows results from several representative simulations. Real relativistic jets are of course not perfectly force-free; in fact, they are expected to deviate substantially from this approximation at large distances from the BH. However, all relativistic MHD jets that have $\gamma\gg1$ are highly magnetized near the BH, and here they are expected to be well represented by force-free solutions. Moreover, the power that a relativistic jet carries is determined entirely by the initial force-free zone. Therefore, we expect numerical results on the jet power from a force-free simulation to agree very well with the power for an MHD jet with inertia. To verify this expectation, we have repeated some of our force-free simulations in the MHD limit, in which the jet is mass-loaded with a finite amount of plasma (details given below). Figure \[fig2\] shows some results. As expected, we find that the asymptotic Lorentz factor $\gamma$ of the jet does depend on mass-loading: a force-free jet accelerates without limit [@tch08], whereas an MHD jet asymptotes to a finite $\gamma$ which is determined by the initial magnetization of the jet. However, we find that the jet [*power*]{}, the primary quantity of interest to us in this paper, is not sensitive to mass-loading so long as the jet is relativistic, i.e., so long as the jet is initially force-free near the BH. MHD jets are more complicated and require more parameters to be specified compared to force-free jets. In particular, the results depend on the details of mass-loading at the base of the jet. Highly magnetized jets accelerate because the magnetic energy flux, which dominates the energy budget at the base of the jet, is converted to kinetic energy flux of the plasma as the jet flows out. The ratio of magnetic to kinetic energy flux at the base of the jet determines the asymptotic Lorentz factor [@begelman_asymptotic_1994; @kom09; @tch09; @tch09b].
Observations suggest a characteristic value for the Lorentz factor of AGN jets $\gamma_{\rm AGN}\sim25$ [@jorstad_agn_jet_2005]. We choose the following simple prescription for the mass-loading of our numerical MHD jets to roughly match this value. We impose a floor on the co-moving rest-mass density of the jet $\rho_{\rm floor}$ such that whenever the density falls below this value, we add mass in the co-moving frame of the plasma. This simple floor model is a convenient way of numerically representing more complicated (and poorly understood) processes that are responsible for the mass-loading of jets in AGN. The value of $\rho_{\rm floor}$ is selected such that the rest mass energy density $\rho_{\rm floor}c^2$ is equal to a fraction $1/\gamma_{\rm AGN}$ of the co-moving magnetic energy density $\epsilon_m$: $\rho_{\rm floor} c^2 = \epsilon_{\rm m} / \gamma_{\rm AGN}$. Thus, our floor model ensures that the ratio of $\epsilon_m$ to $\rho_{\rm floor}c^2$ does not exceed $\gamma_{\rm AGN}$, thereby making sure that the maximum Lorentz factor of our jets is close to the required value. We note that while our procedure is Lorentz invariant, it might not correspond to a physical process that operates in AGN, e.g., photon annihilation from the accretion disk [@phi83]. The code uses internal coordinates $(x_1,x_2)$ that are uniformly sampled with $512\times128$ grid cells. The internal coordinates are mapped to the physical coordinates $(r,\theta)$ via $r/M=R_0+\exp(x_1)$ and $x_2={\rm{}sign}(\Psi){\ensuremath{\left|\Psi\right|}}^{1/2}$. The computational domain extends radially from the inner boundary at $r_{\rm{}in}=0.6M+0.4r_{\rm H}$ to the outer boundary at $r_{\rm{}out}=10^4M$. We apply absorbing (outflow) boundary conditions at each of these boundaries. In the $\theta$-direction, the computational domain extends from the polar axis at $x_2=0$, where we use the usual polar boundary conditions, to the jet boundary $x_2=1$, where we place the wall. 
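The radial part of this mapping is easy to make concrete: a grid uniform in $x_1$ with $r/M=R_0+\exp(x_1)$ is logarithmically spaced in $r$, concentrating cells near the horizon. The sketch below uses illustrative parameter choices of ours ($R_0=0$ and $r_{\rm in}\approx1.17M$, roughly appropriate for $a=0.9$), not the exact values of the production runs:

```python
import math

def radial_grid(R0, r_in, r_out, n_cells):
    """Cell-face radii for a grid uniform in x1, where r = R0 + exp(x1).

    Uniform spacing in x1 gives logarithmic spacing in r.  Parameter
    values used below are illustrative, not the HARM defaults.
    """
    x1_in, x1_out = math.log(r_in - R0), math.log(r_out - R0)
    dx1 = (x1_out - x1_in) / n_cells
    return [R0 + math.exp(x1_in + i * dx1) for i in range(n_cells + 1)]

# e.g., for a = 0.9: r_H ~ 1.436M, so r_in = 0.6M + 0.4 r_H ~ 1.17M
r_faces = radial_grid(R0=0.0, r_in=1.174, r_out=1.0e4, n_cells=512)
```

With $R_0=0$ the spacing is exactly geometric, so the ratio of successive radii is constant across the grid.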
At the wall, we outflow (copy) the components of velocity and magnetic field that are parallel to the wall, and mirror the perpendicular components [@tch09]. For a given BH spin and grid resolution, we choose the value of $R_0$ such that there are $7$ to $10$ grid cells between $r_{\rm{}in}$ and $r_{\rm H}$. This ensures that the inner radial grid boundary $r=r_{\rm{}in}$ is causally disconnected from the region outside the BH horizon. At time $t=0$ we initialize the simulation with a purely poloidal field configuration as described in equation  (or equation \[eq:bzpsipara\] in the case of the paraboloidal model) and we run the simulation until $t_{\rm{}f}={\rm{}max}(100M,20/{\Omega_{\rm H}},10r_0)$. Because of the dragging of frames by the spinning BH, the magnetic field develops a toroidal component which propagates out along field lines at nearly the speed of light. Behind this outgoing wave, the solution settles down to a steady state, and we study the properties of this steady solution. We have verified, by running selected simulations for $10$ times longer than our fiducial $t_{\rm{}f}$, that the near-BH regions of our numerical solutions have truly reached steady state.

Results {#sec:results}
=======

Power Output of Black Holes with Razor-thin Disks {#sec:thindisk}
-------------------------------------------------

As explained below equation , all our jet models correspond to the case of a razor-thin disk. We describe here the results we obtain for these models. Blandford & Znajek showed that the luminosity of a force-free jet from a slowly spinning BH (${a}\ll1$), embedded in a regular magnetic field with a fixed total flux (sourced by toroidal currents in a razor-thin disk), is proportional to the square of the BH spin and the square of the magnetic field strength at the horizon.
If we include the length scale of the horizon $2M$ to obtain the correct dimensions, we may write (the choice of the numerical prefactor will become clear below) $$\label{eq:bzori} P^{\rm{}BZ}({a})=k \Phi_{\rm{}tot}^2\frac{{a}^2}{16M^2},$$ where $\Phi_{\rm tot}\propto{}B{}M^2$ is the total poloidal magnetic flux in the jet, and $k$ is a constant which depends on the field geometry, e.g., $k=k_{\rm mono}=(6\pi)^{-1}\approx0.053$ for a monopolar field ($\nu=0$) and $k=k_{\rm para}\approx0.044$ for the paraboloidal geometry (equation \[eq:bzpsipara\]). We refer to equation  as the original scaling. Because equation (\[eq:bzori\]) was derived in the limit ${a}\ll1$, it can be extended to larger values of ${a}$ in several ways. In particular, we could replace the length scale $2M$ by the horizon scale $r_{\rm H}$, where $r_{\rm H}$ is defined in equation (\[eq:theta\_asym\]). In fact, this is a more natural scaling since the angular frequency of the BH, $$\label{eq:aeff} {\Omega_{\rm H}}({a})=\frac{{a}}{2r_{\rm H}({a})},$$ clearly plays an important role in determining the power in the jet at the horizon (; @mck04). For a fixed field strength at the BH horizon, the power in the outflow was found to obey (; @mck04): $$\label{eq:bhpower} P \propto \Omega({\Omega_{\rm H}}-\Omega)\bigr|_{r=r_{\rm H}} \propto {\Omega_{\rm H}}^2 ,$$ where, based on a dimensional argument, the field line angular frequency, $\Omega$, is proportional to ${\Omega_{\rm H}}$ (see Appendix \[app:bz2\]). Numerical simulations by @kom01 showed that the effect achieves maximum efficiency when $\Omega\approx {\Omega_{\rm H}}/2$. This was found to hold for ${a}=\{0.1,0.5,0.9\}$, demonstrating that the relation remains valid even in the non-linear regime.
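The quantities entering these relations are simple to tabulate; the minimal sketch below (function names are ours) illustrates that ${\Omega_{\rm H}}\to a/4M$ for $a\ll1$ while ${\Omega_{\rm H}}=1/2M$ at $a=1$:

```python
import math

def r_H(a, M=1.0):
    """Horizon radius M * (1 + sqrt(1 - a^2)), in units with G = c = 1."""
    return M * (1.0 + math.sqrt(1.0 - a * a))

def Omega_H(a, M=1.0):
    """BH angular frequency Omega_H = a / (2 r_H)."""
    return a / (2.0 * r_H(a, M))

def P_BZ(a, k=1.0 / (6.0 * math.pi), Phi_tot=1.0, M=1.0):
    """Original low-spin scaling P = k Phi^2 a^2 / (16 M^2); valid a << 1."""
    return k * Phi_tot**2 * a * a / (16.0 * M * M)
```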
Now, replacing the length scale $2M$ with the horizon scale $r_{\rm H}$ in the expression for power , we obtain: $$\label{eq:bzaeff} P^{\rm{}BZ2}({\Omega_{\rm H}}){}=k\Phi_{\rm{}tot}^2{\Omega_{\rm H}}^2.$$ In Appendix \[app:bz2\] we derive this formula analytically from first principles. The scaling of jet power  was confirmed in the numerical simulations by Krasnopolsky (private communication). In general, $k$ is a constant factor whose value depends on the field geometry near the BH. For a slowly spinning BH, ${a}\approx4{\Omega_{\rm H}}M$, and equation  reduces to the standard scaling which was derived in the limit ${a}\ll1$. However, as expected from the above discussion and as we will confirm shortly, equation  is a better approximation for higher spins and is quite accurate up to ${a}\approx0.95$, beyond which it requires a modest correction. We refer to equation  as the modified *second-order* scaling, or simply the BZ2 scaling. We classify the order of a scaling by the maximum power of ${\Omega_{\rm H}}$ up to which the scaling maintains its accuracy. As we will see below, for large spins $a\simeq1$, scalings higher than the $2$nd order are required to obtain good agreement with the numerical results. @tn08 found $4$th order corrections to the BH power output by performing the expansion in powers of BH spin $a$. As we have argued, a more accurate expansion is in powers of the BH rotational frequency ${\Omega_{\rm H}}$ (equation \[eq:aeff\]). Recast in powers of ${\Omega_{\rm H}}$, the @tn08 expansion becomes: $$\label{eq:bz4} P^{\rm BZ4}({\Omega_{\rm H}})\approx k\Phi_{\rm{}tot}^2({\Omega_{\rm H}}^2+\alpha{\Omega_{\rm H}}^4),$$ where the $4$th-order coefficient $\alpha = 8 \left(67-6 \pi^2\right)/45\approx1.38$. We analytically derive this formula from first principles in Appendix \[app:bz4\]. Note that the expansion only contains even powers of ${\Omega_{\rm H}}$ since the power is independent of the sense of BH rotation. 
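A small numerical comparison of the BZ2 and BZ4 scalings makes the size of the $4$th-order term concrete (helper names are ours). With $\alpha=8(67-6\pi^2)/45\approx1.38$, the correction is below $0.1\%$ at ${a}=0.1$ but exceeds $30\%$ as ${a}\to1$:

```python
import math

ALPHA = 8.0 * (67.0 - 6.0 * math.pi**2) / 45.0   # analytic 4th-order term

def Omega_H(a):
    return a / (2.0 * (1.0 + math.sqrt(1.0 - a * a)))

def P_BZ2(a, k_Phi2=1.0):
    """2nd-order (BZ2) scaling, in units of k * Phi_tot^2."""
    return k_Phi2 * Omega_H(a)**2

def P_BZ4(a, k_Phi2=1.0):
    """4th-order (BZ4) scaling: Omega_H^2 + alpha * Omega_H^4."""
    w2 = Omega_H(a)**2
    return k_Phi2 * (w2 + ALPHA * w2 * w2)
```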
As we will see, this $4$th order correction agrees well with the numerical results and is an improvement over the $2$nd order formula . However, at high spins, $a\gtrsim0.99$, even the formula  becomes inaccurate. Below we present a more accurate $6$th order formula (see equation \[eq:bzbf\]). We have performed numerical simulations of force-free jets confined by a rigid wall (as described in §\[sec:num\]) for a wide range of field geometries and BH spins. We numerically explored all possible combinations of $\nu=\{0, 0.25, 0.5, 0.75, 1\}$, $r_0=M\times\{0, 1, 5, 10, 100\}$ and ${a}=\{0.1, 0.2, 0.3, 0.5, 0.7, 0.9, 0.94, 0.97, 0.99, 0.99769, 0.999, 0.999769, 0.9999\}$. We also performed simulations in which we started the field with the paraboloidal geometry .[^6] In addition to force-free simulations, which neglect plasma inertia, we have also performed MHD (mass-loaded) simulations for selected field geometries and spins: $\nu=\{0, 1\}$, $r_0=M\times\{0, 1, 10\}$ and ${a}=\{0.1, 0.9, 0.99, 0.9999\}$. Figures \[fig1\] and \[fig2\] illustrate the effect of the shape parameter $\nu$ and the transition radius $r_0$ on the jet geometry. The larger the value of $\nu$, the more collimated is the jet. The larger the value of $r_0$, the more monopolar-like is the jet geometry near the BH. The shape of the poloidal field lines weakly depends on the BH spin: careful examination of the figures reveals that the field lines tend to come closer to the jet axis for faster spins, and this effect is largest near the BH horizon.
The magnetosphere clearly divides into an outflow region ($u^r>0$) and an inflow region ($u^r<0$) separated by a stagnation surface at which the radial velocity vanishes. This is similar to what is seen in simulations of magnetized turbulent tori around spinning BHs [@mck04; @mck06jf; @mb09]. Figure \[fig3\] shows the numerically measured power output of all our models as a function of the BH horizon frequency ${\Omega_{\rm H}}$ (defined in equation \[eq:aeff\]). The colored stripes correspond to the three scalings: BZ (equation \[eq:bzori\]), BZ2 (equation \[eq:bzaeff\]), and BZ6 (equation \[eq:bzbf\]). For a given BH spin, a monopolar field geometry ($\nu=0$) produces a more powerful jet since it has a larger value of the pre-factor $k_{\rm{}mono}\approx 0.053$ compared to the paraboloidal geometry (equation \[eq:bzpsipara\], $k_{\rm{}para}\approx0.044$, ). This difference in $k$ determines the width of the colored stripes in Fig. \[fig3\]. The model given in equation  with $\nu=1$ is close to the paraboloidal solution and has nearly the same power. Models with intermediate values of $\nu$ ($0<\nu<1$) or with non-zero values of $r_0$ have power output intermediate between the two limiting models and form the vertical clusters of numerical points at each ${a}$ in Figure \[fig3\]. Independent of the value of $\nu$, we find that jets with $r_0$ much larger than the outer “light cylinder”[^7] radius $\simeq1/{\Omega_{\rm H}}$ have luminosities very similar to that of a monopolar jet.[^8] As expected, for small BH spins, ${a}\lesssim0.3$, both the original scaling (equation \[eq:bzori\]) and the second order BZ2 scaling (equation \[eq:bzaeff\]) agree well with the numerical results. As we go to higher spins, the BZ2 scaling  continues to follow the simulation data points accurately, while the original scaling  under-predicts the power (e.g., by a factor $\approx3$ at ${a}=0.99$). Careful examination of Fig. \[fig3\]a and especially of the inset Fig.
\[fig3\]ia, which shows a blowup of the $a\to1$ region of the plot, reveals a flattening in the numerical data points at spin values ${a}\gtrsim0.95$: the BZ2 formula over-predicts the jet luminosity by about $25\%$ as ${a}\to1$ (in agreement with R. Krasnopolsky, private communication). It is not clear that this corner of parameter space is particularly relevant for astrophysics, nor is the effect very large. Nevertheless, for completeness we note that the flattening of the jet power can be well-modeled by including higher-order corrections to the BZ2 formula , e.g., by the BZ6 formula which we derive in Appendix \[app:bz6\]. We give here a simplified version of this formula: $$\label{eq:bzbf} P^{\rm{}BZ6}({\Omega_{\rm H}})\approx k\Phi_{\rm{}tot}^2({\Omega_{\rm H}}^2+\alpha{\Omega_{\rm H}}^4+\beta{\Omega_{\rm H}}^6),$$ where the value of $\alpha$ is determined analytically, $\alpha \approx 1.38$ (same as in the BZ4 expansion, equation \[eq:bz4\]), and $\beta$ is found numerically by least-squares fitting of equation  to the full BZ6 analytic formula derived in Appendix \[app:bz6\]: $\beta \approx -9.2$. The gray stripe in Figure \[fig3\] compares this formula to our numerical results for the full range of models. Higher order corrections have no effect at small spin values (the gray and light red stripes lie on top of each other) but do a good job of reproducing the flattening in the jet luminosity at spin values ${a}\gtrsim{}0.95$ and the slight but systematic increase in the power output of the numerical jets above the light red stripe at ${a}\lesssim0.9$ (this increase is especially apparent in Figure \[fig3\]b).
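The simplified BZ6 formula is easy to compare against BZ2 numerically (a sketch with our own helper names): with $\alpha\approx1.38$ and $\beta\approx-9.2$, the $6$th-order term leaves low spins essentially untouched but reduces the predicted power by roughly $20$–$25\%$ as ${a}\to1$, reproducing the flattening:

```python
import math

ALPHA = 8.0 * (67.0 - 6.0 * math.pi**2) / 45.0   # analytic 4th-order term
BETA = -9.2                                       # fitted 6th-order term

def Omega_H(a):
    return a / (2.0 * (1.0 + math.sqrt(1.0 - a * a)))

def P_BZ2(a):
    """BZ2 power in units of k * Phi_tot^2."""
    return Omega_H(a)**2

def P_BZ6(a):
    """Simplified BZ6 power: Omega^2 (1 + alpha Omega^2 + beta Omega^4)."""
    w2 = Omega_H(a)**2
    return w2 * (1.0 + ALPHA * w2 + BETA * w2 * w2)
```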
In anticipation of future discussion, it is useful to express the power at low spin in terms of the maximum achievable power at ${a}=1$: $$\label{eq:power_low_a} P({a})\simeq 0.32 {a}^2 P({a}=1), \quad {a}\lesssim0.3.$$ We now look into the origin of the differences in the power outputs of the various model jets, as well as of the numerical trends discussed above. We focus on two limiting cases: monopolar jet ($\nu=0$) and paraboloidal jet ($\nu=1$, $r_0=1$). First, let us recast the power output of the jet in a convenient form. In a stationary axisymmetric force-free flow, several quantities are conserved along poloidal field lines (defined by $\Psi={\rm{}const}$). Two of these are the field line angular velocity $\Omega(\Psi)$ and the enclosed poloidal current $I(\Psi)$ [@tch08]. The power output of a force-free jet may be written as the integral of the outward Poynting flux $S^r\equiv -\Omega B^r B_\varphi$ over a spherical jet cross-section[^9]. In this notation, the lower component of the toroidal magnetic field is, up to a numerical factor, the enclosed poloidal current, $-2\pi B_\varphi\equiv I(\Psi)$. Using this notation, which is very similar in appearance and meaning to the usual special relativistic notation, we obtain the total power output of the BH by integrating over the surface of the BH (see also ): $$\label{eq:jetpower} P=\iint{}S^r\,dA =2\int_0^{\pi/2}\Omega B^r \frac{I}{2\pi}\,dA =2\int_0^1\Omega(\Psi)I(\Psi)\,d\Psi,$$ where the field strength $B^r$ times the area element $dA$ gives the magnetic flux through that area, $d\Phi = 2\pi d\Psi= B^r\,dA$, and the numerical factor of $2$ accounts for the two hemispheres of the BH. Figure \[fig4\] shows, for a monopolar jet, the angular profiles of angular velocity $\omega=\Omega/{\Omega_{\rm H}}$, enclosed poloidal current $i=I/{\Omega_{\rm H}}$, and power output $p=P/{\Omega_{\rm H}}^2$.
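The low-spin coefficient quoted above can be recovered from the BZ6 scaling: for ${a}\ll1$ the power scales as ${a}^2/16$, while $P({a}{=}1)$ carries the high-spin suppression. A sketch (helper names are ours):

```python
import math

ALPHA, BETA = 8.0 * (67.0 - 6.0 * math.pi**2) / 45.0, -9.2

def P_BZ6(a):
    """Simplified BZ6 power in units of k * Phi_tot^2."""
    w2 = (a / (2.0 * (1.0 + math.sqrt(1.0 - a * a))))**2
    return w2 * (1.0 + ALPHA * w2 + BETA * w2 * w2)

# Low-spin power relative to the maximum power at a = 1
a = 0.1
coefficient = P_BZ6(a) / (a * a * P_BZ6(1.0))   # ~= 0.33
```

The computed coefficient is $\approx0.33$, consistent with the quoted value of $0.32$ given the approximations in the simplified BZ6 formula.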
The particular scalings by ${\Omega_{\rm H}}$ have been selected based on equation  so as to remove any obvious trends as a function of spin. This allows us to compare models with different spins on the same scale. At low spin, ${a}\lesssim0.1$, we have excellent agreement between the numerical models and the low-spin analytic solution, shown by the dotted lines. For larger spins up to ${a}\lesssim0.9$, both the dimensionless angular field line velocity $\omega$ and the enclosed current $i$ increase with increasing ${a}$ (Figures \[fig4\]a,b). According to equation , this should result in an increase in the normalized jet power $p$, as confirmed in Figure \[fig4\]c. This is the reason for the small but systematic increase in jet power above the estimate . For ${a}\gtrsim0.95$, we find that $\omega$, $i$, and $p$ all decrease relative to , causing a flattening of the jet power at these extreme spins. The reason for the decreased power is related to a change in the poloidal field geometry of the jet near the BH horizon (see §\[sec:thickdisk\] and Appendices \[app:bz4\] and \[app:bz6\]): while at low spins the magnetic field is nearly uniform across the jet for all of our models, at high spins the poloidal field becomes non-uniform, with a maximum field strength at the jet axis and a minimum near the wall. Since it is the field geometry near the BH that sets the power output (see footnote \[ftn:locality\] and Appendix \[app:bz2\]), it is logical that these changes in the field geometry lead to changes in the power output. We demonstrate this in §\[sec:thickdisk\] (see also Appendices \[app:bz4\] and \[app:bz6\]). Figure \[fig5\] shows that paraboloidal jets exhibit very similar trends with increasing spin as their monopolar counterparts. The differences are in details, e.g., the angular velocity profile (Figure \[fig5\]c) is now non-uniform even for low spin values, as predicted by the analytic solution shown in the figure with dotted lines.
The agreement with the analytic solution is not as perfect as for the monopolar model, but this is because the poloidal field line shape near the BH in our $\nu=1,r_0=1$ paraboloidal jet differs slightly from the paraboloidal shape. For the jets initialized with the exact paraboloidal geometry , the agreement with the analytic solution is very good. In all our numerical jets the conserved quantities $I(\Psi)$ and $\Omega(\Psi)$ are preserved along field lines to better than $10$%. We reran a selection of models at twice the fiducial resolution in both the radial and angular directions. We found differences of less than $5$% in the total power output, indicating that our models are well-converged. ![ Angular dependence of various quantities in a monopolar jet ($\nu=0$) as a function of the poloidal flux function $\Psi$. The different curves in each panel correspond to different values of the BH spin (see legend). From top to bottom the panels show the normalized field angular velocity $\omega=\Omega/{\Omega_{\rm H}}$, the normalized enclosed poloidal current $i=I/{\Omega_{\rm H}}$, and the normalized jet luminosity $p=\omega i = \Omega I/{\Omega_{\rm H}}^2 = P/{\Omega_{\rm H}}^2$. The analytic solution, shown with dotted lines, provides an excellent description of the numerical results for all spin values ${a}\lesssim0.99$. Beyond this value of ${a}$, the quantities $\omega$, $i$, and $p$ all become noticeably smaller than the analytic solution. This trend is removed by the BZ6 solution , as shown in Figure \[fig3\].[]{data-label="fig4"}](fig4.eps){width="\figwidthfactor\columnwidth"} ![ Similar to Figure \[fig4\] but for a paraboloidal jet ($\nu=1,r_0=1$). Comparison with Figure \[fig4\] shows that, for the same BH spin, the angular velocity $\omega$ is smaller and the enclosed current $i$ larger in a paraboloidal jet compared to a monopolar jet. These two effects combine to give a smaller power output $p\equiv\omega{}i$ in the paraboloidal solution.
Note that, whereas a monopolar jet rotates more or less like a rigid body, a paraboloidal jet has a significant variation of $\omega$ across its cross-section.[]{data-label="fig5"}](fig5.eps){width="\figwidthfactor\columnwidth"} ![image](fig6.eps){width="80.00000%"}

Power Output of Black Holes with Thick Disks {#sec:thickdisk}
--------------------------------------------

In all the models we described so far, the base of each polar jet covered a full $2\pi$ steradians at the BH horizon. However, observational evidence strongly suggests that low-luminosity BHs ($\lambda<0.01$, §\[sec\_intro\]) are surrounded by thick accretion disks or ADAFs [@nm08] with thicknesses $H/R\sim1$. Here and below by the “disk thickness” $H/R$ we mean the angular extent at the BH horizon of the region exterior to the Poynting-dominated jet, i.e., the total thickness of the gaseous disk plus any magnetized corona or heavily mass-loaded wind above the disk. When a BH is surrounded by a thick disk/corona, equatorial field lines from the BH at lower latitudes pass through the disk/corona, become turbulent and produce a slow baryon-rich wind, whereas polar field lines at higher latitudes lie away from the disk gas and produce a Poynting-dominated relativistic jet [@mck05]. How does this effect modify the dependence of jet power on BH spin? Assuming that both the total magnetic flux $\Phi_{\rm tot}$ threading the BH horizon and the angular thickness $H/R$ of the accretion disk/corona are independent of the BH spin, we can compute the power that is emitted in the Poynting-dominated region of the jet. We assume that the models we have described earlier continue to be valid, except that we integrate the jet power only over field lines that cross the horizon outside the $\pm H/R$ zone of the disk/corona.
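The restriction of the power integral to field lines outside the disk wedge can be illustrated with a toy model. Assuming the low-spin split-monopole angular distribution $dP/d\theta\propto\sin^3\theta$ (an approximation valid only at low spin with a uniform horizon field; at high spin the bunching of field lines toward the axis raises the polar share), the fraction of the total power emitted within a polar cap of half-opening angle $\theta_j$ is:

```python
import math

def cap_power_fraction(theta_j):
    """Fraction of low-spin split-monopole BZ power within theta < theta_j,
    from dP/dtheta ~ sin^3(theta) integrated over one hemisphere.
    Toy model for illustration only, not the full numerical calculation."""
    c = math.cos(theta_j)
    return (2.0 / 3.0 - c + c**3 / 3.0) / (2.0 / 3.0)

# Illustrative mapping (an assumption of ours): a disk of angular thickness
# H/R ~ 1 rad leaves a polar jet of half-opening angle ~ pi/2 - 1, which
# at low spin captures only a few per cent of the total BH output.
polar_share = cap_power_fraction(math.pi / 2.0 - 1.0)
```

Because the polar cap carries so little of the low-spin power, any spin-dependent redistribution of flux toward the axis has a disproportionately large effect on the jet power, which is the qualitative origin of the steeper scalings discussed below.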
This procedure is well-motivated by general relativistic magnetohydrodynamical simulations of thick accretion disks which show that the relativistic jet subtends a well-defined solid angle for a given gas pressure scale height [@mck04]. We consider models with $H/R=0.5$, $1$, and $1.25$, and compare the results with the case of a razor-thin disk ($H/R=0$). The results for the jet power as a function of disk thickness and spin are shown in Figure \[fig6\]. As we have already seen, the jet power for a razor-thin disk scales as $P\propto{\Omega_{\rm H}}^2$ (this scaling is shown with dotted lines) until ${a}\lesssim0.95$ after which it levels off. As the disk becomes thicker, the scaling changes qualitatively. For all thicknesses, $H/R=0.5$, $1$, and $1.25$, the power dependence on the spin follows the same ${\Omega_{\rm H}}^2$ power-law at low spins. However, at higher values of ${a}$, the power increases more steeply. The break occurs roughly at ${a}\sim0.7$, with a moderate dependence on the disk thickness. Above the break for the case $H/R=1$ we have $P\propto{\Omega_{\rm H}}^{4}$ and for the case $H/R=1.25$ we have $P\propto{\Omega_{\rm H}}^{6}$. This steep dependence is similar to what was observed by @mck05 in his numerical simulations of turbulent accreting tori.[^10] We observe the same steep power dependence also in our ideal GRMHD simulations (§\[sec\_intro\]). We now explain the reason for the steep dependence of jet power on ${\Omega_{\rm H}}$ as ${a}\to1$. We saw in Figures \[fig1\] and \[fig2\] that as the BH spin increases, magnetic field lines rearrange laterally and concentrate around the axis of rotation (see @km07 for an explanation of this effect in terms of hoop stresses). Figure \[fig7\] shows that this leads to a non-uniform distribution of radial magnetic field $B^r$ across the jet, with $B^r$ having a maximum near the rotation axis and a minimum near the jet boundary. 
Since the electromagnetic energy flux of a BH is proportional to $(B^r)^2$ at the BH horizon (see equation \[eq:bhpowerapp\]), the concentration of magnetic flux near the rotation axis leads to a progressively larger fraction of the total energy output of the BH being emitted in the polar region, giving a steeper dependence of jet power on the BH spin in the presence of a thick disk ($H/R\sim1$). In a related context, @macdonald1984 and @km07 have studied how magnetic flux is pulled in by a spinning BH by considering magnetic hoop stresses. @macdonald1984 appears to have missed the strength of this effect by mostly investigating models with relatively small ${a}\lesssim 0.7$ and by primarily looking for a change in the total magnetic flux accumulated at the horizon rather than measuring the flux redistribution on the horizon. We now describe an analytic approach for understanding the steep dependence of jet power on BH spin for thick disks with $H/R\sim1$. While the split-monopolar magnetic field is an exact solution for non-spinning BHs, for spinning BHs the dragging of frames induces a spin-dependent perturbation to the split-monopolar magnetic field geometry. It is this perturbation that causes field lines to move preferentially toward the rotational axis. This perturbation of the magnetic field geometry has been determined, via an expansion in powers of $a$, to the lowest (second) order in BH spin (see also @mck04 and @tn08). Accounting for this perturbation in the field geometry, @tn08 determined the BH power output more accurately than the original derivation, to the $4$th order in BH spin $a$. As we noted in §\[sec:thindisk\], expansions in powers of BH frequency ${\Omega_{\rm H}}$ are more accurate at high spins than expansions in powers of the BH spin $a$.
Therefore, in Appendix \[app:bz4\] we perform an equivalent expansion in terms of ${\Omega_{\rm H}}$.[^11] Figure \[fig6\] shows with dashed lines the power output of our ${\Omega_{\rm H}}$-based model, which we refer to as BZ4. Clearly, it provides a more accurate approximation for the power of our numerical jets than the $2$nd order accurate BZ2 solution shown with the dotted lines. However, as we saw in §\[sec:thindisk\] and as is clear also from Figure \[fig6\], the BZ4 solution still does not capture a few important effects: (i) for BHs with razor-thin disks, it does not capture the flattening of the power output at $a\gtrsim0.95$ and thereby over-predicts the numerical results by as much as $25$%, and (ii) for thick disks this solution under-predicts the numerical power by as much as $70$%. To improve our analytic model, we have used the results of our numerical simulations to determine higher-order corrections to both the field geometry and the power of the BH at high spins. Firstly, we have obtained a higher-order accurate numerically-motivated expansion in powers of ${\Omega_{\rm H}}$ for the *magnetic field* $B^r$ at the BH horizon (Appendix \[app:bz6\] provides the details). Shown with dotted lines in Figure \[fig7\] for a wide range of ${a}$, this analytic approximation for $B^r$ accurately reproduces the numerical angular profile of magnetic field for a wide range of polar angles, $\theta_{\rm H}\gtrsim0.3$ or $H/R\lesssim1.3$. Secondly, based on this higher-order magnetic field profile, we have analytically computed the $6$th order accurate approximation for the *BH power output*, which we refer to as model BZ6 (see Appendix \[app:bz6\] for more detail). Figure \[fig6\] shows the results with the solid lines. Not only does the BZ6 expansion correctly capture the flattening of the BH power at high BH spins for razor-thin disks, it also provides a significantly more accurate approximation to the jet power output for thicker disks. 
For instance, for a thick disk with $H/R=1.25$, the BZ6 expansion is about a factor of $20$ more accurate than the BZ4 expansion. ![Radial contravariant component of the magnetic field strength $B^r$ evaluated at the BH horizon in a monopolar model ($\nu=0,r_0=0$) as a function of polar angle $\theta_{\rm H}$, for different values of BH spin (see legend). As the spin of a BH is increased, magnetic field lines progressively bunch up toward the BH rotation axis, resulting in an increased magnetic field strength close to the axis at small $\theta_{\rm H}$ (this effect can also be seen in Figs. \[fig1\]–\[fig2\]). Dotted colored lines show the high-order analytic solution, while the various other lines show the numerical solution (see legend). []{data-label="fig7"}](fig7.eps){width="\columnwidth"}

Discussion {#sec:discussion}
==========

Figure \[fig3\] shows that, regardless of the geometry of the confining wall, the total power output of a magnetized relativistic spinning BH with a razor-thin disk varies as $P = k\Phi_{\rm tot}^2 {\Omega_{\rm H}}^2$ (equation \[eq:bzaeff\]), where $\Phi_{\rm tot}$ is the total magnetic flux threading the BH horizon, ${\Omega_{\rm H}}=a/(2r_{\rm H})$ is the BH horizon frequency, and $r_{\rm H}=M\left[1+(1-{a}^2)^{1/2}\right]$ is the radius of the BH horizon in geometrized units ($G=c=1$). This modified BZ2 scaling is slightly steeper than the original scaling $P = k\Phi_{\rm tot}^2({a}/4M)^2$ (equation \[eq:bzori\]). The higher-order BZ6 scaling (equation \[eq:bzbf\]) accurately reproduces the power-spin dependence for all values of BH spin $a$, including the limit ${a}\to1$. In the context of the radio loud/quiet dichotomy of AGN, following @ssl07 let us assume that supermassive BHs in elliptical galaxies, which manifest themselves as radio loud AGN, have higher spin parameters with a median ${a}\sim0.9$, while the BHs in spirals, the radio quiet AGN, have a lower median ${a}\sim0.3$ (e.g., @volonteri07).
Figure \[fig3\] then suggests that the jet powers in the two classes of objects (assuming similar values of $\Phi_{\rm tot}$) would differ by a factor $\sim20$. However, radio loud AGN and radio quiet AGN differ in their radio luminosities by a factor $\sim10^3$ [@ssl07]. What could be the reason for such a large dichotomy in jet power? One motivation for the present study was to investigate whether there is any strong non-linearity in BH physics that might cause the jet power to increase very rapidly as the BH spin approaches unity. If this were the case, one could pursue a scenario in which radio loud AGN are associated with nearly extremal Kerr BHs. Unfortunately, our numerical results indicate that non-linearity hardly helps. Because the total BH power output scales as ${\Omega_{\rm H}}^2$ (equation \[eq:aeff\]) rather than simply as ${a}^2$, there is a slightly steeper increase of power with ${a}$ as the spin approaches unity. However, the scaling actually becomes shallower once ${a}$ increases above $\sim0.99$. We have carefully checked the convergence of our models and we are confident that the results are not affected by numerical errors. There is thus not much room for increasing the power of radio loud AGN jets by pushing ${a}$ arbitrarily close to unity. Since there is so little wiggle room at the radio loud end, we need to postulate that radio quiet AGN have very low values of ${a}$, say ${a}\sim0.03$. This is uncomfortably low. It is certainly feasible for an occasional BH to be spinning so slowly, but to have an entire population of BHs (radio quiet sources in spirals) with a median ${a}$ of order $0.03$ seems far-fetched. It would require spiral galaxies not to have experienced any significant mergers.
Furthermore, the BHs in their nuclei should have accreted mass entirely through minor mergers with smaller companions, each with a tiny mass and with a random orientation of angular momentum [@hughesblandford03; @gammie_bh_spin_evolution_2004; @bv08]. The second motivation for the present study was to investigate if the geometry of the confining funnel along which the jet propagates, which may be different for rapidly-spinning and slowly-spinning BHs, could lead to a substantial change in the jet power. This too turns out not to be the case, within the context of razor-thin disks. We have tried a wide range of geometries for the jet, as described in §\[sec:num\] (see also Figs. \[fig1\], \[fig2\]), but the jet power varies by no more than $20\%$ for a fixed BH spin and magnetic flux. Our third motivation was to investigate the correctness of the results of @mck05 who obtained a steeper dependence in the jet power compared to the total BH power. For this purpose, we considered in §\[sec:thickdisk\] an additional geometrical effect, viz., varying the solid angle subtended by the base of the jet. Such a variation is expected if the accretion disk is geometrically thick. As an example, let us consider the results corresponding to a BH surrounded by a disk of angular extent $H/R=1$, and let us further assume that radio loud AGN have spin parameters very close to unity and radio quiet AGN have more modest values of $a$. Then Figure \[fig6\] shows that it is possible to explain the radio loud/quiet dichotomy, i.e., a factor of $10^3$ in jet power, if radio loud AGN have ${a}\to1$ and radio quiet systems have ${a}\sim0.15$. This is a lot more comfortable than the requirement ${a}\sim0.03$ that we found earlier for a razor-thin disk. Indeed, if the disk/corona is even thicker than $H/R\sim1$, which is not unreasonable[^12], the effect is even stronger (see Figure \[fig6\]), and the radio loud/quiet dichotomy can be explained with quite modest changes in spin. 
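The razor-thin numbers quoted above are easy to verify against the BZ6 scaling (helper names are ours): at fixed $\Phi_{\rm tot}$, spins of $0.9$ and $0.3$ differ in power by a factor $\approx17$, broadly consistent with the factor $\sim20$ read off Figure \[fig3\], while pushing ${a}\to1$ against ${a}\sim0.03$ is needed to reach a factor of order $10^3$:

```python
import math

ALPHA, BETA = 8.0 * (67.0 - 6.0 * math.pi**2) / 45.0, -9.2

def P_BZ6(a):
    """Simplified BZ6 power in units of k * Phi_tot^2 (razor-thin disk)."""
    w2 = (a / (2.0 * (1.0 + math.sqrt(1.0 - a * a))))**2
    return w2 * (1.0 + ALPHA * w2 + BETA * w2 * w2)

ratio_loud_quiet = P_BZ6(0.9) / P_BZ6(0.3)     # ~17 at fixed Phi_tot
ratio_extreme = P_BZ6(0.9999) / P_BZ6(0.03)    # factor > 10^3
```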
There are several other effects that we did not consider which might either enhance or diminish the effect of rapid spin. For instance, we assumed that the total magnetic flux threading the horizon is a constant, independent of BH spin. Any mechanism that enhances the total magnetic flux threading the horizon would lead to a larger BH power output [@mck05; @hk06; @km07; @rgb06; @gar09]. For example, @mck05 found that the magnetic flux across the entire horizon scales as $\Phi_{\rm{}tot}\propto {\Omega_{\rm H}}^{1/2}$, so the total power will increase by another factor of $\Phi_{\rm{}tot}^2\propto {\Omega_{\rm H}}$, which further diminishes the range of BH spins required to explain the radio loud/quiet dichotomy. At first sight, it would appear that $\Phi_{\rm tot}$ is determined by conditions far from the BH, e.g., the net magnetic flux of the gas supplied to the accretion disk on the outside, and the ability of the disk gas to transport this field in. However, once the field has been transported to the center, two general relativistic effects kick in. First, the spin of the BH determines the size of the plunging region, and larger plunging regions can accumulate more flux [@gar09]. Note that magnetic flux can also be transported to the BH through the corona outside the accretion disk [@rl08; @bhk09]. Second, the frame-dragging of space-time near the BH within the ergospheric region leads to currents that generate hoop stresses pulling magnetic flux toward the horizon [@km07]. These two effects are non-trivially coupled, although for prograde BH spins the hoop stresses appear to dominate [@mck05], while for retrograde BH spins the size of the plunging region may dominate [@gar09]. Another possibility is that a stronger dependence on spin could occur if some field lines attach between the disk and the BH [@yewang05], although such configurations are not seen in general relativistic magnetohydrodynamical simulations of accretion disks [@hirose04; @mck05]. 
Yet another possibility is that mass-loading of AGN jets might have a large effect on the jet power. For example, the BZ mechanism only operates at sufficiently high magnetization for a given BH spin. Thus, the magnetization and BH spin can together introduce a “magnetic switch” mechanism that can trigger powerful jet formation from the BH [@takahashi1990; @meier97; @meier99; @kb09]. Another type of magnetic switch can be due to changes in the field geometry from dipolar to multipolar, which leads to significant mass-loading of the polar regions [@mck04; @bhk07; @mb09] and a factor of $\sim 10$ weaker total BH power output. The physics of jet mass-loading is presently uncertain, and this question needs to be investigated in more detail. In addition, changes in the disk thickness may also result in differences in the amount of magnetic flux accumulated or generated by turbulence near the BH [@meier01]. Finally, we note that we have implicitly assumed in this paper that the radio luminosity of a jet is directly proportional to the total energy flux (Poynting and kinetic power) carried by the jet. Perhaps this is not the case. Any non-linearity in the mapping between radiative luminosity and jet power could have important consequences. In particular, we note that the interstellar medium (ISM) in a typical elliptical galaxy is very different from that in a typical spiral. Since the radio emission in a jet is produced when the jet interacts with the external ISM, this difference may well lead to a strong effect on the radio loudness of the jet. In application to gamma-ray bursts (GRBs) and collapsars, an interesting question is whether they are powered by the BZ mechanism through an outflow from a central BH or by an outflow from an accretion disk. @kb09 suggest that the BZ effect is operating in such a scenario (however, see @nagataki09). 
Conclusions {#sec:conclusions}
===========

We set out in this paper to explain the radio loud/quiet dichotomy of AGN in the context of the BH spin paradigm. For razor-thin disks, we found that BH spin alone is insufficient to explain the observations even if the radio loud and radio quiet populations have very different merger and accretion histories. However, we found that the presence of a thick disk, such as an ADAF, can significantly enhance the spin dependence of the power output, to the extent that it can reasonably account for the observed radio loud/quiet dichotomy. Our only modification to the revised spin paradigm of @ssl07 is that both populations should contain a BH surrounded by a thick disk such that the jet subtends a small solid angle around the polar axis. These results were obtained by performing general relativistic numerical simulations of collimated force-free and MHD jets from spinning magnetized BHs for a wide range of spins (up to $a=0.9999$) and jet confinement geometries. We showed that, regardless of the geometry, a BH threaded with a magnetic flux $\Phi_{\rm tot}$ and surrounded by a razor-thin disk produces a jet with power $P\approx k\Phi_{\rm tot}^2{\Omega_{\rm H}}^2$, where $k$ is a known constant factor which depends only weakly on the field geometry, $r_{\rm H} = M[1+(1-a^2)^{1/2}]$ is the radius of the BH horizon, and ${\Omega_{\rm H}}=a/(2r_{\rm H})$ is the angular frequency of the BH. This result gives a somewhat steeper dependence of jet power on ${a}$ compared to the original scaling $P\propto {a}^2$ obtained by . Nevertheless, we conclude that for a fixed magnetic flux $\Phi_{\rm tot}$, even this revised scaling is much too shallow to explain the radio loud/quiet dichotomy of AGN. Our goal, therefore, was to identify any other effect that may cause the jet power to depend more steeply on BH spin. We found that such an effect naturally exists. 
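The razor-thin scaling summarized above can be sketched as a short helper (an illustrative sketch; we use the split-monopole value $k=1/6\pi$ derived in the Appendix, and $M$ and $\Phi_{\rm tot}$ are free inputs):

```python
import math

def horizon_radius(a, M=1.0):
    # r_H = M*[1 + (1 - a^2)^(1/2)]
    return M * (1.0 + math.sqrt(1.0 - a * a))

def omega_h(a, M=1.0):
    # Omega_H = a / (2 r_H)
    return a / (2.0 * horizon_radius(a, M))

def jet_power(a, phi_tot, k=1.0 / (6.0 * math.pi)):
    """P = k * Phi_tot^2 * Omega_H^2; k = 1/(6*pi) is the split-monopole
    value derived in the Appendix (k depends only weakly on geometry)."""
    return k * phi_tot ** 2 * omega_h(a) ** 2
```

At fixed flux the power is monotonic in spin and saturates at $a\to1$, where ${\Omega_{\rm H}}\to1/2$.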
We showed that the power output of a BH surrounded by a thick accretion disk with $H/R\sim1$ (this is the effective thickness of the disk, corona and mass-loaded disk wind), as expected in systems with advection-dominated accretion flows (ADAFs, @nm08), is $P\propto {\Omega_{\rm H}}^4$, and even $\propto{\Omega_{\rm H}}^6$ for very thick disks (§\[sec:thickdisk\]). In this case we can explain the radio loud/quiet dichotomy by having two different populations of galaxies with modestly different BH spins. For the case $H/R=1$, the radio loud population needs to have large spins $a\simeq1$ while the radio quiet AGN population needs to have $a\simeq0.15$. Such spin values may plausibly result from differences in the merger and accretion histories of supermassive BHs in elliptical and spiral galaxies (§\[sec:discussion\]). We worked out in the Appendices a first principles analytic model which accurately reproduces our numerical results for the jet power over a wide range of BH spin and disk thickness (Figures \[fig3\], \[fig6\]). We thank Vasily Beskin and Serguei Komissarov for useful comments on the manuscript. This work was supported in part by NASA grant NNX08AH32G (AT & RN), NSF grant AST-0805832 (AT & RN), NASA Chandra Fellowship PF7-80048 (JCM), and by NSF through TeraGrid resources [@catlett2007tao] provided by the Louisiana Optical Network Initiative (<http://www.loni.org>).

Second-order–accurate Expansion of Black Hole Power {#app:bz2}
===================================================

In this section we present a compact derivation of the BZ effect. We determine the power output of a spinning BH embedded in an externally-imposed split-monopolar magnetic field. This magnetic field is given by the flux function  with $\nu = 0$ and $r_0=0$. The main difference of this derivation is that we perform it in powers of the “natural” variable – the BH angular frequency ${\Omega_{\rm H}}$ that plays an important role in determining the BH power output. 
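Before the derivation, a quick numerical sanity check on the split-monopole geometry used in this appendix (a sketch, not from the paper): for the flux function $\Psi=\Psi_{\rm tot}(1-\cos\theta)$ with $\nu=0$, $r_0=0$, the relation $B^r = (d\Psi/d\theta)/{\sqrt{-g}}$ with the $a=0$ value ${\sqrt{-g}}=r^2\sin\theta$ indeed reproduces the monopole field $B^r=\Psi_{\rm tot}/r^2$:

```python
import math

PSI_TOT = 1.0

def psi_monopole(theta):
    # Split-monopole flux function (nu = 0, r0 = 0): Psi = Psi_tot*(1 - cos(theta))
    return PSI_TOT * (1.0 - math.cos(theta))

def b_r(theta, r, h=1e-6):
    # B^r = (dPsi/dtheta)/sqrt(-g), with sqrt(-g) = r^2 sin(theta) at a = 0;
    # central finite difference for the theta-derivative
    dpsi = (psi_monopole(theta + h) - psi_monopole(theta - h)) / (2.0 * h)
    return dpsi / (r * r * math.sin(theta))
```

Evaluated at $r=2M$ (with $M=1$), this gives $B^r=\Psi_{\rm tot}/4$ independently of $\theta$, as expected for a monopole.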
The power output density evaluated at the horizon of a spinning BH is (@bz77; @mck04) $$\label{eq:bhpowerapp} F_{\rm E}(\theta) = \left[2 (B^r)^2 \Omega ({\Omega_{\rm H}}-\Omega) r M\sin^2\theta\right]\Bigr|_{r=r_{\rm H}},$$ where the quantity $\Omega$ is the angular frequency of magnetic field lines at the BH horizon and $B^r$ is the radial field strength at the horizon. In order to determine the power output \[eq:bhpowerapp\], we need to know two quantities as functions of polar angle at the BH horizon: the radial magnetic field, $B^r$, and the field line angular frequency, $\Omega$. The element of the magnetic flux $d\Phi$ through the BH horizon is related to the radial magnetic field at the horizon through the following differential, $$\label{eq:bdef} d\Phi = 2\pi d\Psi=2\pi B^r {\sqrt{-g}}\,d\theta,$$ where $g$ is the determinant of the Kerr metric in the Boyer-Lindquist coordinates, ${\sqrt{-g}}=(r^2+a^2\cos^2\theta)\sin\theta$. This formula closely resembles its cousin in the spherical polar coordinates and flat space where one has ${\sqrt{-g}}=r^2\sin\theta$. More generally, in place of $B^r$ we can use any vector field (e.g., energy flux), and the result is the differential of that flux. Assuming that the perturbations to the magnetic field away from a perfect split-monopole are higher order, we neglect them and obtain, by differencing the flux function (with $\nu=0$ and $r_0=0$) according to \[eq:bdef\], an identical result to that in flat space: $$\label{eq:brmono} B^r=\Psi_{\rm tot}/r^2,$$ where we neglected terms of order ${\Omega_{\rm H}}^2$ and higher. While the distribution of $\Omega$ needs to be self-consistently determined by solving the non-linear equations describing the balance of electromagnetic fields (we do so numerically in §\[sec:results\]), here we make a simple yet accurate estimate based on an energy argument. Let us assume that the system chooses the distribution of $\Omega$ that extremizes the BH power output \[eq:bhpowerapp\]. 
Such a value is clearly $$\label{eq:omegamono} \Omega = {\Omega_{\rm H}}/2$$ since it maximizes the BH power output  [see, e.g., @bk00]. Despite the simplicity of this estimate, it is remarkably close to the true solution for the split-monopolar geometry as obtained from the numerical simulations (§\[sec:results\]). Plugging equations and into the power output density , we obtain $$\label{eq:fesimple} F_{\rm E}(\theta) = 2\left(\Psi_{\rm tot}r^{-2}\right)^2 ({\Omega_{\rm H}}/2)^2 r M\sin^2\theta.$$ Integrating up this power output density in angle in the same way as we integrated the magnetic field in equation , we obtain the full power output into jets with an opening angle $\theta_j$: $$\label{eq:powertheta} P = 2\times2\pi\int_0^{\theta_j} F_{\rm E}(\theta) {\sqrt{-g}}\Bigr|_{r=r_{\rm H}}d\theta,$$ where the extra factor of $2$ accounts for the fact that there are two jets, one in the northern and one in the southern hemisphere. Note that we are interested in an expansion of power up to 2nd order in ${\Omega_{\rm H}}$. Since the factor $F_{\rm E}(\theta)\propto {\Omega_{\rm H}}^2$ is already second order in ${\Omega_{\rm H}}$ (equation \[eq:fesimple\]), we can without loss of accuracy evaluate the formula  at $r=r_{\rm H}(a=0)=2M$ and replace ${\sqrt{-g}}$ with $\left[(2M)^2\sin\theta\right]$. After plugging into  for $F_{\rm E}(\theta)$ using  and evaluating the integral out to $\theta_j=\pi/2$, i.e., computing the full power output of the BH, we get: $$\label{eq:powerthetasimple} P = \pi\Psi_{\rm tot}^2 {\Omega_{\rm H}}^2 \int_0^{\pi/2}\sin^3\theta\,d\theta = 2\pi\Psi_{\rm tot}^2{\Omega_{\rm H}}^2/3,$$ which is accurate to second order in ${\Omega_{\rm H}}$. In terms of the magnetic flux $\Phi_{\rm tot}=2\pi\Psi_{\rm tot}$, the formula becomes $$\label{eq:powerthetasimple2} P = k \Phi_{\rm tot}^2{\Omega_{\rm H}}^2,$$ with $k=1/(6\pi)$, which reproduces . 
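Both steps of this derivation are easy to verify numerically. The sketch below (arbitrary test values for $\Psi_{\rm tot}$ and ${\Omega_{\rm H}}$) checks that $\Omega={\Omega_{\rm H}}/2$ maximizes the $\Omega$-dependent factor of the power density, and that midpoint-rule integration of the power density reproduces $P=2\pi\Psi_{\rm tot}^2{\Omega_{\rm H}}^2/3$, i.e. $k=1/(6\pi)$:

```python
import math

PSI, OMH = 1.0, 0.3   # arbitrary test values of Psi_tot and Omega_H
r = 2.0               # r_H(a = 0) = 2M with M = 1, sufficient at this order

# (1) The Omega-dependent factor of F_E is Omega*(Omega_H - Omega);
# its derivative Omega_H - 2*Omega vanishes at Omega = Omega_H/2.
grid = [OMH * i / 1000.0 for i in range(1001)]
best = max(grid, key=lambda w: w * (OMH - w))

# (2) P = 2 * 2*pi * Int_0^{pi/2} F_E(theta) * sqrt(-g) dtheta, with
# sqrt(-g) -> (2M)^2 sin(theta), via the midpoint rule.
def integrand(theta):
    f_e = 2.0 * (PSI / r**2) ** 2 * (OMH / 2.0) ** 2 * r * math.sin(theta) ** 2
    return f_e * r**2 * math.sin(theta)

n = 100000
h = (math.pi / 2.0) / n
power = 4.0 * math.pi * h * sum(integrand((i + 0.5) * h) for i in range(n))
expected = 2.0 * math.pi * PSI**2 * OMH**2 / 3.0
k = expected / ((2.0 * math.pi * PSI) ** 2 * OMH**2)   # should equal 1/(6*pi)
```

The quadrature agrees with the closed form, and rewriting the prefactor in terms of $\Phi_{\rm tot}=2\pi\Psi_{\rm tot}$ recovers $k=1/(6\pi)$.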
Fourth-order–accurate Expansion of Black Hole Power {#app:bz4}
===================================================

In Appendix \[app:bz2\] we have derived a second order accurate expression for the power output of the BH in terms of the hole frequency ${\Omega_{\rm H}}$. The BH was embedded in a split-monopolar magnetic field. Let us now improve the accuracy of the previous derivation, this time retaining the higher order terms, up to ${\Omega_{\rm H}}^4$. A similar derivation was performed by @tn08 but in powers of BH spin $a$. Here we derive the expansion in terms of the natural variable ${\Omega_{\rm H}}$, which allows one to use the expansion for nearly maximally spinning BHs. We also explicitly present the angular dependence of the BH power output. In order to obtain a higher-order approximation to the power, this time we need to keep some of the higher order terms we neglected in the equations for $B^r$ and $\Omega$. Since $\Omega$ is an odd function of the BH frequency, it contains only odd powers of ${\Omega_{\rm H}}$; therefore a higher order approximation for it has the following form: $$\label{eq:omega3rd} \Omega={{^{1}\!/\hspace{-1pt}_{2}}}{\Omega_{\rm H}}+\mathcal O({\Omega_{\rm H}}^3),$$ where $\mathcal O({\Omega_{\rm H}}^3)$ denotes any third order or higher order terms in ${\Omega_{\rm H}}$. Since $P\propto{\Omega_{\rm H}}^2$, these higher-order terms contribute to the power output only at orders higher than $\mathcal O({\Omega_{\rm H}}^4)$; therefore we neglect them. We do need, however, to include the terms that come from the higher order expansion of $B^r$. 
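The relation $a = 4{\Omega_{\rm H}}+ \mathcal O({\Omega_{\rm H}}^3)$ used in this expansion can be checked numerically (a sketch): the remainder of the linearization shrinks by a factor of $\approx 8$ when the spin is halved, as expected for a cubic remainder:

```python
import math

def omega_h(a):
    # Omega_H = a / (2 r_H), with r_H = 1 + sqrt(1 - a^2) in units M = 1
    return a / (2.0 * (1.0 + math.sqrt(1.0 - a * a)))

# Remainder of a ≈ 4*Omega_H at two spins; a cubic remainder
# implies a drop by a factor of ~8 when a is halved.
err_02 = abs(0.2 - 4.0 * omega_h(0.2))
err_01 = abs(0.1 - 4.0 * omega_h(0.1))
```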
showed that the dragging of frames by the spinning BH perturbs the magnetic field away from an exact split-monopole and have derived a second order correction to the flux function, $\Psi_2$, in powers of BH spin so that the full flux function has the form $$\begin{aligned} \Psi(r,\theta) &=& \Psi_0(\theta) + a^2 \Psi_2(r,\theta) +\mathcal O(a^4) \\ &=& \Psi_0(\theta) + 16 {\Omega_{\rm H}}^2 \Psi_2(r,\theta) + \mathcal O({\Omega_{\rm H}}^4), \label{eq:app:bzpsi}\end{aligned}$$ where we have used $a = 4{\Omega_{\rm H}}+ \mathcal O({\Omega_{\rm H}}^3)$. Here the zeroth and second order perturbations to the flux function are $$\label{eq:app:bzpsi1} \Psi_0(\theta) = 1 - \cos\theta, \quad \Psi_2(r,\theta) = f(r) \sin^2\theta\cos\theta,$$ where $f(r)$ is a known function of radius $r$, but for further discussion only its value at the horizon of a non-spinning BH, $f(r=2) = \left(56-3\pi^2\right)/45$, is needed (this is because in the expansion we formally evaluate the coefficients at ${\Omega_{\rm H}}=0$, $r=r_{\rm H}=2$). Combining expressions , and , we obtain the $2$nd-order–accurate radial magnetic field at the BH horizon: $$\label{eq:brbz4} B^r = \frac{\left(1+4 {{\Omega_{\rm H}}}^2\right)^2 \left[9+{{\Omega_{\rm H}}}^2 (-49+6 \pi ^2)(1+3 \cos2\theta)\right]}{9 \left(r_{\rm H}^2\left(1+4 {{\Omega_{\rm H}}}^2\right)^2+16 {{\Omega_{\rm H}}}^2 \cos\theta^2\right)}.$$ Combining this expression with , and plugging into , we numerically obtain the angle-dependent enclosed power $P^{\rm BZ4}(\theta,{\Omega_{\rm H}})$ shown in Figure \[fig6\] as $P^{\rm BZ4}(\theta=\pi/2-H/R,{\Omega_{\rm H}})$ with dashed lines. 
This result, expanded to $4$th order in powers of ${\Omega_{\rm H}}$, is: $$\begin{aligned} P^{\rm BZ4}(\theta,{\Omega_{\rm H}})&\approx& \pi {\Omega_{\rm H}}^2 \left[{{^{4}\!/\hspace{-1pt}_{3}}} \sin^4(\theta/2) (\cos \theta +2)\right]\notag\\ &+&\pi {\Omega_{\rm H}}^4 \bigl[90 \left(3 \pi ^2-32\right) \cos \theta +\left(970-105 \pi^2\right) \cos 3 \theta +9\left(3 \pi^2-26\right) \cos 5 \theta+32 \left(67-6 \pi ^2\right)\bigr]/{270} + \mathcal O({\Omega_{\rm H}}^6) \label{eq:bz4thpwr}\end{aligned}$$ Clearly, at low spins, the second-order piece dominates, which we show in Figure \[fig6\] with dotted lines. However, at high spins, the fourth order piece can become dominant, which is confirmed in Figure \[fig6\]. To see this more clearly, we perform an expansion of  in powers of disk/corona thickness, $H/R\equiv\pi/2-\theta$, and obtain to second order in $H/R$: $$\label{eq:bz4hor} P^{\rm BZ4}(H/R) \approx\Omega _H^2 \left\{2.09-3.14 H/R + \mathcal O[(H/R)^3]\right\}+\Omega_H^4 \left\{2.9+1.7 H/R + \mathcal O[(H/R)^3]\right\} + \mathcal O({\Omega_{\rm H}}^6),$$ where for clarity we have numerically evaluated the coefficients to two decimal places. This expansion makes it clear that as $H/R$ increases, the relative importance of the fourth order term increases. This also explains why in Figure \[fig6\] the $P\propto{\Omega_{\rm H}}^4$ dependence becomes more prominent for larger values of $H/R$ as opposed to smaller values. 
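The numerical coefficients in the $H/R$ expansion follow from the angular formula by finite differences at $\theta=\pi/2$, using $d/d(H/R)=-d/d\theta$. A quick sketch:

```python
import math

def p2(th):  # Omega_H^2 angular factor of the 4th-order power formula
    return math.pi * (4.0 / 3.0) * math.sin(th / 2.0) ** 4 * (math.cos(th) + 2.0)

def p4(th):  # Omega_H^4 angular factor
    pi2 = math.pi ** 2
    return math.pi * (90.0 * (3.0 * pi2 - 32.0) * math.cos(th)
                      + (970.0 - 105.0 * pi2) * math.cos(3.0 * th)
                      + 9.0 * (3.0 * pi2 - 26.0) * math.cos(5.0 * th)
                      + 32.0 * (67.0 - 6.0 * pi2)) / 270.0

h = 1e-5
th0 = math.pi / 2.0
c20, c40 = p2(th0), p4(th0)                          # constant terms
c21 = -(p2(th0 + h) - p2(th0 - h)) / (2.0 * h)       # d/d(H/R) = -d/dtheta
c41 = -(p4(th0 + h) - p4(th0 - h)) / (2.0 * h)
```

The four numbers reproduce $2.09$, $-3.14$, $2.9$ and $1.7$, confirming the quoted expansion (the constant terms are exactly $2\pi/3$ and $\pi$ times the bracketed combination at $\theta=\pi/2$).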
Finally, we note that at the midplane the power output takes the following form (expanded up to $4$th order in ${\Omega_{\rm H}}$): $$\label{eq:bz4pwrequator} P^{\rm BZ4}(\theta=\pi/2)\approx{{^{2\pi}\!/\hspace{-1pt}_{3}}}\Psi_{\rm tot}^2\left[\Omega _H^2+\Omega _H^4 \,8 \left(67-6 \pi^2\right)/45\right]+\mathcal O({\Omega_{\rm H}}^6)= {{^{2\pi}\!/\hspace{-1pt}_{3}}}\Psi_{\rm tot}^2\left[\Omega _H^2+\alpha\Omega_H^4\right],$$ where $\alpha = 8 \left(67-6 \pi^2\right)/45 \approx 1.38$. In this formula we have reintroduced $\Psi_{\rm tot}$, which was set to unity for the rest of this section. This result can also be expressed in terms of the total flux in the jet using $\Phi_{\rm tot}=2\pi\Psi_{\rm tot}$.

Sixth-order–Accurate Expansion of Black Hole Power {#app:bz6}
==================================================

Figure \[fig6\] shows that the fourth-order BZ4 solution for power is more accurate than the second-order solution. However, at high BH spin, $a\gtrsim 0.95$, it requires more than a factor of $3$ correction in order to reproduce the numerical solution. Also, the fourth order BZ4 solution does not reproduce the flattening of the power dependence on the BH spin for razor-thin disks ($H/R=0$) at $a\gtrsim0.95$. Inspired by the success of the previous section, we would like to derive a sixth-order–accurate expression for the power. However, for that we would need to know the expansion of the flux function to the fourth order and of the field angular frequency to the third order. Neither of these is known analytically; therefore, we adopt a numerical approach. Figure \[fig4\]a shows the angular profiles of the field line rotation frequency $\Omega$ for different BH spins. While the deviations from the zeroth order approximation $\Omega={\Omega_{\rm H}}/2$ are present, their relative magnitude is very small, $\lesssim10$%. 
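Two quick consistency checks on the midplane formula (a sketch): the quoted value of $\alpha$, and agreement of its ${\Omega_{\rm H}}^4$ coefficient with the angular expansion of the previous appendix evaluated at $\theta=\pi/2$:

```python
import math

pi2 = math.pi ** 2
alpha = 8.0 * (67.0 - 6.0 * pi2) / 45.0        # midplane quartic coefficient

# (2*pi/3)*alpha must equal the theta = pi/2 value of the Omega_H^4 angular
# factor, pi*32*(67 - 6*pi^2)/270 (both reduce to 16*pi*(67 - 6*pi^2)/135).
coeff_midplane = (2.0 * math.pi / 3.0) * alpha
coeff_angular = math.pi * 32.0 * (67.0 - 6.0 * pi2) / 270.0
```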
These $10$% changes in $\Omega/{\Omega_{\rm H}}$ translate into at most $1$% changes in the power output because the power depends quadratically on the magnitude of the higher order correction $(\Omega-{\Omega_{\rm H}}/2)$ (see equation \[eq:bhpower\]). We are interested in the corrections of order $\sim10{-}70$% (the level of inaccuracy of the BZ4 solution), therefore we neglect the higher-order corrections to ${\Omega_{\rm H}}$ in deriving the sixth order solution. The corrections to the magnetic field shape are, however, dramatic. Figure \[fig7\] shows the angular distribution of the radial magnetic field $B^r$ at the BH horizon. As the BH spin increases, $B^r$ develops a progressively larger non-uniformity in angle and deviates from the $4$th order solution by factors of a few. We therefore attempt to find a numerical fit to the angular magnetic field dependence at the BH horizon for $a=0.9999$ by fitting to it the following trial function: $$\label{eq:bz6psi} \Psi=\Psi_0+16{\Omega_{\rm H}}^2\Psi_2+{\Omega_{\rm H}}^4\Psi_4,$$ where the first two terms are given by equations . We look for the spin-independent part of the third term, $\Psi_4$, in the following form $$\label{eq:bz6psi4guess} \Psi_4(\theta)=\sin^2(\theta) \left[ c_1\cos^{\alpha_1}\theta+ c_2\cos^{\alpha_2}\theta+ c_3\cos^{\alpha_3}\theta+ c_4\cos^{\alpha_4}\theta\right],$$ where we choose $\alpha_1=25$, $\alpha_2=7$, $\alpha_3=3$, $\alpha_4=1$. In order to determine the $4$ coefficients $c_1$–$c_4$, we match the numerical solution for $B^r$ at $a=0.9999$, shown in Figure \[fig7\], at $4$ angles: $\theta=0.1$, $0.5$, $0.7$, $\pi/2$. We find $c_1\approx26.16$, $c_2\approx22.72$, $c_3\approx13.54$, $c_4\approx2.08$. Figure \[fig7\] shows as dotted colored lines the solutions due to \[eq:bz6psi\] and \[eq:bz6psi4guess\], with the above values of expansion coefficients, for various values of BH spin. Clearly, the analytic fit is a very good match to the power output at $\theta\gtrsim0.3$. 
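The trial correction $\Psi_4$ with the fitted coefficients can be evaluated directly (an illustrative sketch). One structural property worth noting: because every term carries an odd power of $\cos\theta$, $\Psi_4$ vanishes at both the pole and the midplane, so the fourth-order correction redistributes flux in angle without changing the total flux through each hemisphere:

```python
import math

C = (26.16, 22.72, 13.54, 2.08)   # fitted coefficients c1..c4
POW = (25, 7, 3, 1)               # exponents alpha_1..alpha_4

def psi4(theta):
    """Spin-independent 4th-order correction to the flux function (trial form)."""
    c_th = math.cos(theta)
    return math.sin(theta) ** 2 * sum(c * c_th ** p for c, p in zip(C, POW))
```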
Very close to the rotation axis, however (at angles smaller than $0.3$), the fourth-order–accurate solution to the flux function is not enough: while we have a nearly perfect match between our fit and the numerical solution for $B^r$ at high ($a=0.9999$) and mid-range ($a=0.5$) spins, our fit to $B^r$ deviates by up to $\sim25$% at the in-between spins ($a\simeq0.9{-}0.99$). However, the total power emitted into the range of polar angles $\theta\lesssim0.3$ is very small, therefore these deviations of our fit from the numerical solution hardly influence the fit to the power output. We also considered a direct fit to the vector spherical harmonic functions that form a complete set for the vector potential as given by equation B8 in @mck07a. However, even an expansion up to $l=10$ did not fit the very steep behavior of $B^r$ near the polar axis. On the other hand, even using only harmonics up to $l=6$ does a reasonable job at fitting the numerical solution for the total power vs. spin and angle. This fact and the fact that a power of $25$ for $\cos\theta$ was required to fit the numerical results demonstrate that the numerical solution at $a\sim 1$ is highly non-linear with respect to $\theta$ and would be quite difficult to derive analytically. This proves the usefulness of the numerical simulations. Now we are in a position to analytically compute the high-order–accurate power of our jets. Using , we compute the radial field on the horizon, $B^r$, from the fourth-order flux function . We then insert this field and the field angular frequency $\Omega$ into formula  and obtain the angular-dependent jet power. This power, which we refer to as the sixth-order analytic BZ6 solution, is shown in Figure \[fig6\] with solid lines for various disk thicknesses and spins. (We compute these lines by numerically integrating the analytically-determined power in our jets. 
We note that while formally this solution is $6$th-order–accurate, its expansion in powers of ${\Omega_{\rm H}}$ contains important terms up to $10$th order. This highlights the non-linearity of the problem.) Clearly, these lines approximate the numerical data points very well, within $20$% for the whole range of disk thicknesses and spins that we have explored. Over most of the parameter space the errors are smaller than this value (they are largest for the thicker disks with $H/R\gtrsim1.25$ that have the BH spin in the range $0.8\lesssim a\lesssim0.99$). \[lastpage\] [^1]: We use the generic term AGN to refer to both luminous quasars and less luminous active nuclei such as Seyferts, LINERs, etc. [^2]: The BH spin also drives power into the disk causing a more powerful disk outflow, but this still requires BH spin to introduce a dichotomy. [^3]: There are arguments to suggest that the luminosity of the disk outflow should be greater than that from the central spinning BH [@ga97; @lop99]. However, these arguments assume low values of turbulent viscosity and weak magnetic fields near the BH, and also do not account for the effects of the general relativistic plunging region (see, e.g., @mck07a for a discussion). @mck05 finds that the luminosities of the jet and the disk are similar (however, see @nagataki09). For the purposes of this paper, we ignore the disk wind. [^4]: We assume that the radio luminosity is proportional to the jet power. [^5]: @mck05’s models have an accretion disk with a disk+corona+wind of angular extent $H/R\sim 0.6$. For ${a}\gtrsim 0.5$ the power per unit mass accretion rate scales as $\propto {\Omega_{\rm H}}^5$ for the polar jet and $\propto {\Omega_{\rm H}}^4$ for the entire horizon. However, in these models, the mass accretion rate through the BH horizon per unit fixed mass accretion rate at large radius scales as $1/{\Omega_{\rm H}}$ for ${a}\gtrsim 0.5$, because of the ejection of a massive wind (as also seen in @hk06). 
Hence, expressed in terms of $\dot{M}$ at large radius, the polar jet power scales $\propto {\Omega_{\rm H}}^4$ and the power output from the entire horizon scales $\propto {\Omega_{\rm H}}^3$. [^6]: We note that this field geometry and the paraboloidal geometry with $\nu=1,r_0=0$ are inherently difficult to study numerically: the wall in these models makes such a small angle with the surface of the BH horizon that there exists no physical solution for the velocity in the immediate vicinity of the wall. We have obtained numerical solutions corresponding to the paraboloidal model only for ${a}\le0.9$ and the paraboloidal $\nu=1$ model only for $r_0\ge1$. [^7]: By the “light cylinder” we mean the Alfvén surface. [^8]: This highlights the fact that jet power output is set by the field line shape [*close*]{} to the BH, i.e., inside the light cylinder. It suggests that communication along the jet is maintained by Alfvén waves (rather than fast waves), so that the outer light cylinder, which acts as a sonic surface for Alfvén waves, prevents signals propagating back to the BH from further out. As a result, the shape of the wall or the properties of the confining medium outside the light cylinder have no influence on the power output in the jet.\[ftn:locality\] [^9]: Here the GR notation is simplified and appears like the non-GR expressions (apart from some sign conventions) by using the notational conventions in appendix B of @mckinney2005 and in @mck06ffcode. In this notation, $B^i\equiv{{^{^*}\!\!F}}^{it}$, $B_i={{^{^*}\!\!F}}_{it}$, $E_i=F_{it}/{{\sqrt{-g}}}$, $E^i=F^{ti}{{\sqrt{-g}}}$, and $\Omega\equiv -E_\theta/B^r$, where $F$ is the Faraday tensor, ${{^{^*}\!\!F}}$ is the Maxwell tensor, and ${{\sqrt{-g}}}$ is the square root of minus the determinant of the metric. Horizon surface area elements are given by $dA = {{\sqrt{-g}}}d\theta d\phi$. 
[^10]: Note that only even powers can enter the expansion of the jet power in terms of ${\Omega_{\rm H}}$ because the power is an even function of ${\Omega_{\rm H}}$. [^11]: This expansion reduces to the expansion  in the limit of $H/R\to0$. [^12]: We note that $H/R$ in this context refers to all the gas-dominated regions of the flow: the accretion disk proper, the corona and the disk wind. The net half-angle subtended by all these components could equal a radian or more in the case of a thick accretion flow.
--- abstract: 'We study families of rational curves on certain irreducible holomorphic symplectic varieties. In particular, we prove that projective holomorphic symplectic fourfolds of $K3^{[2]}$-type contain uniruled divisors and rational Lagrangian surfaces.' address: - 'Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139-4307, USA; and CNRS, Laboratoire de Mathématiques d’Orsay, Université Paris-Sud, 91405 Orsay CEDEX, France' - 'Institut de Recherche Mathématique Avancée, Université de Strasbourg et CNRS, 7, Rue R. Descartes - 67084 Strasbourg CEDEX, France' author: - François Charles - Gianluca Pacienza bibliography: - 'curvesHK.bib' title: Families of rational curves on holomorphic symplectic varieties ---

Introduction
============

Let $S$ be a $K3$ surface, and let $H$ be an ample divisor on $S$. By a theorem of Bogomolov-Mumford [@MoriMukai83], $H$ is linearly equivalent to a sum of rational curves. The goal of this paper is to investigate the extent to which this result can be generalized to the higher-dimensional setting. Let $X$ be a complex manifold. We say that $X$ is irreducible holomorphic symplectic – in the text, we will often simply refer to such manifolds as holomorphic symplectic – if $X$ is simply connected and $H^0(X, \Omega^2_X)$ is spanned by an everywhere non-degenerate form of degree $2$. These objects were introduced by Beauville in [@Beauville83]. The holomorphic symplectic surfaces are the $K3$ surfaces. It is to be noted that holomorphic symplectic varieties are not hyperbolic. This has been recently proved by Verbitsky, cf. [@Verbitsky13], using among other things his global Torelli theorem [@Verbitsky09]. Much less seems to be known about rational curves on (projective) holomorphic symplectic varieties. A striking application of the existence of rational curves on projective $K3$ surfaces has been given by Beauville and Voisin in [@BeauvilleVoisin04]. 
They remark that picking any point on a rational curve gives a canonical zero-cycle of degree $1$, and show that this has remarkable consequences on the study of the Chow group of $K3$ surfaces. In the case of projective holomorphic symplectic varieties of higher dimension, Beauville has stated in [@Beauville07] conjectures that predict a similar behavior of the Chow group of algebraic cycles modulo rational equivalence. These questions have been studied on Hilbert schemes of points of $K3$ surfaces and the Fano variety of lines on cubic fourfolds [@Voisin08; @Fu13a]. These papers rely on the existence of explicit families of rational curves on the relevant varieties. The conjecture has also been proved for generalized Kummer varieties in [@Fu13b]. Motivated in part by these papers, we investigate the existence of families of rational curves on general irreducible holomorphic symplectic manifolds of dimension $2n$. We give criteria for the existence of uniruled divisors, as well as for subvarieties of codimension $2$ whose maximal rationally connected quotient – that we shall henceforth abbreviate by MRC quotient as it is customary – has dimension $2n-4$ – the smallest possible value. In particular, we prove the following result. Recall that a holomorphic symplectic manifold is said to be of $K3^{[n]}$-type if it is a deformation of the Hilbert scheme that parametrizes zero-dimensional subschemes of length $n$ on some $K3$ surface. We say that a subscheme $Y$ of a holomorphic symplectic manifold $X$ is Lagrangian if at any smooth point $y$ of $Y^{red}$, the tangent space $T_{Y^{red},y}$ of $Y^{red}$ at $y$ is a maximal isotropic subspace of $T_{X, y}$ with respect to the holomorphic symplectic form of $X$. \[thm:fourfolds\] Let $X$ be a projective holomorphic symplectic fourfold of $K3^{[2]}$-type. Then the following holds. 1. There exists an ample uniruled divisor on $X$. 2. There exists a rational Lagrangian surface on $X$. 
This result was already known by work of Voisin [@Voisin04] and Amerik-Voisin [@AmerikVoisin08] in the case of the Fano variety of lines on a smooth cubic fourfold. As in [@BeauvilleVoisin04], we are able to show that the existence of such a rational surface makes it possible to define a canonical zero-cycle of degree $1$ on $X$ (cf. Corollary \[cor:zero\]). The theorem above suggests a natural extension to general projective holomorphic symplectic varieties in the following way. \[ques:subvar\] Let $X$ be a projective holomorphic symplectic variety of dimension $2n$, and let $k$ be an integer between $0$ and $n$. Does there exist a subscheme $Y_k$ of $X$ of pure dimension $2n-k$ such that the MRC quotient of $Y_k$ has dimension $2n-2k$? Again, as shown in section 3, $2n-2k$ is the smallest possible dimension for the MRC quotient of such a $Y_k$. Rational curves have appeared in important recent works on the positive cone of holomorphic symplectic varieties that culminated in the papers [@BayerMacri13] and [@BayerHassettTschinkel13]. While some of our arguments are related to – and have certainly been influenced by – this line of work, our results are quite different in spirit, as they are relevant even for varieties which are very general in their moduli space, and have Picard number $1$. **Organisation of the paper.** This paper relies on the global Torelli theorem for holomorphic symplectic manifolds as proved by Verbitsky in [@Verbitsky09], as well as on results of Markman on monodromy groups [@Markman10]. Consequences of these results for moduli spaces of polarized holomorphic symplectic manifolds of $K3^{[n]}$-type have been studied by Apostolov in [@Apostolov11]. We describe these results in section 2, which does not contain any new results. Section 3 contains general results on the deformation of families of stable curves of genus zero on holomorphic symplectic manifolds of dimension $2n$. 
We give criteria for the existence of uniruled divisors, as well as for codimension $2$ subschemes with MRC quotient of dimension $2n-4$. In section 4, we give examples – in the $K3^{[n]}$ case – where these criteria can be applied. Using the results of section 2, we prove Theorem \[thm:fourfolds\] and construct a canonical zero-cycle of degree $1$ in the $K3^{[2]}$ case. Finally, the last section is devoted to some open questions. [**Acknowledgements.**]{} We thank Daniel Huybrechts and Claire Voisin for interesting discussions on the subject of this paper. We are especially grateful to Eyal Markman for enlightening discussions and comments, and for sharing with us some unpublished notes of his. The cohomology computations at the end of section 4 are due to him. While finishing the writing of this paper, independent work of Amerik and Verbitsky [@AmerikVerbitsky14] appeared and seems to have some overlap with some arguments of section 3. Since the goals and the results of the two papers seem to be quite different, we did not try to eliminate similar discussions. We always work over the field $\C$ of complex numbers.

Deformations of irreducible holomorphic symplectic manifolds
============================================================

This section is devoted to recalling important results regarding the deformation theory of irreducible holomorphic symplectic varieties. The foundational results on irreducible holomorphic symplectic varieties are due to Beauville in his seminal paper [@Beauville83]. In this paper, we will make crucial use of connectedness properties of moduli spaces of certain irreducible holomorphic symplectic varieties. These are due to Apostolov [@Apostolov11], and they rely on the global Torelli theorem of Verbitsky [@Verbitsky09] and on results of Markman [@Markman10] on the monodromy group of varieties of $K3^{[n]}$-type. Let $S$ be a complex $K3$ surface, and let $n$ be an integer strictly bigger than $1$. 
Let $S^{[n]}$ be the Hilbert scheme – or the Douady space if $S$ is not projective – of zero-dimensional subschemes of length $n$ on $S$. By [@Fogarty68], the complex variety $S^{[n]}$ is smooth. General properties of Hilbert schemes show that $S^{[n]}$ is projective as soon as $S$ is projective. By [@Beauville83], $S^{[n]}$ is an irreducible holomorphic symplectic variety. As shown by Beauville, the second cohomology group of $S^{[n]}$ is related to that of $S$ as follows. Let $S^{(n)}$ be the $n$-th symmetric product of $S$, and let $\epsilon : S^{[n]}{\rightarrow}S^{(n)}$ be the Hilbert-Chow morphism. The map $$\epsilon^* : H^2(S^{(n)}, \Z){\rightarrow}H^2(S^{[n]}, \Z)$$ is injective. Furthermore, let $\pi : S^n{\rightarrow}S^{(n)}$ be the canonical map, and let $p_1, \ldots, p_n$ be the projections from $S^n$ to $S$. There exists a unique map $$i : H^2(S, \Z){\rightarrow}H^2(S^{[n]}, \Z)$$ such that for any $\alpha\in H^2(S, \Z)$, $i(\alpha)=\epsilon^*(\beta)$, where $\pi^*(\beta)=p_1^*(\alpha)+\ldots+p_n^*(\alpha)$. The map $i$ is an injection. Finally, let $E$ be the exceptional divisor in $S^{[n]}$, that is, the divisor that parametrizes non-reduced points. The cohomology class of $E$ in $H^2(S^{[n]}, \Z)$ is uniquely divisible by $2$. Let $\delta$ be the element of $H^2(S^{[n]}, \Z)$ such that $2\delta=[E]$. Then we have $$\label{H2} H^2(S^{[n]}, \Z)=H^2(S, \Z)\oplus\Z\delta,$$ where the embedding of $H^2(S, \Z)$ into $H^2(S^{[n]}, \Z)$ is the one given by the map $i$ above. Finally, the second cohomology group $H^2(S^{[n]}, \Z)$ is endowed with a canonical quadratic form $q$, the *Beauville-Bogomolov form*. The decomposition (\[H2\]) is orthogonal with respect to $q$, and the restriction of $q$ to $H^2(S, \Z)$ is the canonical quadratic form on the second cohomology group of a surface induced by cup-product. If $\alpha$ and $\beta$ are elements of $H^2(S^{[n]}, \Z)$, we will write $\alpha.\beta$ for $q(\alpha, \beta)$. 
We have $$\delta.\delta=-2(n-1).$$ It follows from this discussion that the lattice $H^2(S^{[n]}, \Z)$ is isomorphic to the lattice $$\Lambda_n=E_8(-1)^{\oplus 2}\oplus U^{\oplus 3}\oplus \langle -2(n-1)\rangle.$$ Let $X$ be an irreducible holomorphic symplectic variety, and assume that $X$ is of $K3^{[n]}$-type. Recall that this means that $X$ is deformation-equivalent, as a complex variety, to a complex variety of the form $S^{[n]}$, where $S$ is a $K3$ surface. Suppose from now on that $X$ is polarized. Let $h$ be the first Chern class in $H^2(X, \Z)$ of the chosen primitive ample line bundle on $X$. We will be interested in the deformations of the pair $(X, h)$. Note that the deformations of the pair $(X, h)$ can be identified with the deformations of the pair $(X, rh)$ for any $r\in \Q^*$. [*While our assumption on $X$ is simply that there is a deformation of $X$ to some $S^{[n]}$ as complex varieties, an argument using period domains and Verbitsky’s global Torelli theorem shows that if $X$ is polarized, there exists a deformation of the pair $(X,h)$ to some $(S^{[n]}, h')$, where $S$ is a projective $K3$ surface and $h'$ is the class of a polarization on $S^{[n]}$ – see Proposition 7.1 of [@Markman11]. The goal of this section is to control the cohomology class $h'$ with respect to the decomposition (\[H2\]). $\square$*]{} We now describe results by Apostolov [@Apostolov11] that give information on the possible deformation types of the pair $(X, h)$. To such a pair we can associate two numerical invariants. The first one is the *degree*, namely, the even number $h^2$. The second one is the *divisibility* of $h$, that is, the positive integer $t$ such that $h.H^2(X, \Z)=t\Z$. If $n=1$, $X$ is a $K3$ surface and $H^2(X, \Z)$ is a unimodular lattice. As a consequence, the divisibility of the primitive polarization $h$ is always $1$. However, $H^2(X, \Z)$ is not unimodular as soon as $n>1$, and the divisibility can be different from $1$ accordingly. 
Let $n$ be an integer at least equal to $2$, let $S$ be a $K3$ surface, and let $h\in H^2(S^{[n]}, \Z)$ be a primitive polarization. Using the canonical decomposition of (\[H2\]), write $$h=\lambda h'+\mu\delta,$$ where $h'$ is a primitive element of $H^2(S, \Z)$. Then the divisibility of $h$ is $$t=\gcd(2(n-1), \lambda).$$ Since $H^2(S, \Z)$ is unimodular, $h'.H^2(S^{[n]}, \Z)=h'.H^2(S, \Z)=\Z$. As a consequence, $\lambda h'.H^2(S^{[n]}, \Z)=\lambda \Z$. Similarly, $\mu\delta.H^2(S^{[n]}, \Z)=2\mu(n-1)\Z$. Since $\delta$ and $h'$ are orthogonal, this implies that $h.H^2(S^{[n]}, \Z)=\gcd(2\mu(n-1), \lambda)\Z$. The polarization $h$ is primitive, which implies that $\lambda$ and $\mu$ are relatively prime, and proves the result. \[cor:congruence\] Let $(X, h)$ be a primitively polarized variety of $K3^{[2]}$-type. Then the divisibility $t$ of $h$ is either $1$ or $2$. Furthermore, in the latter case, the degree $2d$ of $h$ is congruent to $-2$ modulo $8$. Since the divisibility of a polarization does not vary under deformations, we can assume that $X$ is of the form $S^{[2]}$ for some $K3$ surface $S$, for which the result is a consequence of the preceding proposition. The two cases above are referred to as the *split* case if $t=1$ and the *non-split* case if $t=2$, see [@GritskenkoHulekSankaran10]. A polarization type for varieties of $K3^{[n]}$-type is the isomorphism class of pairs $(\Lambda, x)$, where $\Lambda$ is a lattice isomorphic to $\Lambda_n$ and $x$ is a primitive element of $\Lambda$. We say that a holomorphic symplectic variety $X$ with a primitive polarization $h$ has polarization type $(\Lambda, x)$ if the pairs $(H^2(X, \Z), h)$ and $(\Lambda, x)$ are isomorphic. 
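The lattice computations above are elementary and can be checked mechanically. The following sketch (purely illustrative – the helper functions are ours, not part of the paper) verifies the divisibility formula $t=\gcd(2(n-1),\lambda)$ for primitive classes $h=\lambda h'+\mu\delta$, and, for $n=2$, the congruence $h^2\equiv -2 \pmod 8$ in the non-split case of Corollary \[cor:congruence\]:

```python
from math import gcd

def divisibility(n, lam, mu):
    """Divisibility of the primitive class h = lam*h' + mu*delta on S^[n],
    with h' primitive in H^2(S, Z); primitivity of h means gcd(lam, mu) == 1."""
    assert gcd(lam, mu) == 1
    # h.H^2(S, Z) = lam*Z and h.delta = -2*mu*(n-1), so
    # h.H^2(S^[n], Z) = gcd(lam, 2*mu*(n-1))*Z = gcd(lam, 2*(n-1))*Z.
    return gcd(2 * (n - 1), lam)

def square(n, lam, mu, d_prime):
    """Beauville-Bogomolov square of h = lam*h' + mu*delta when h'^2 = 2*d_prime."""
    return lam**2 * 2 * d_prime - mu**2 * 2 * (n - 1)

# For n = 2: the divisibility is 1 or 2, and t = 2 forces h^2 = -2 (mod 8).
for lam in range(1, 20):
    for mu in range(0, 20):
        if gcd(lam, mu) != 1:
            continue
        t = divisibility(2, lam, mu)
        assert t in (1, 2)
        if t == 2:  # lam even, hence mu odd
            for d_prime in range(1, 10):
                assert square(2, lam, mu, d_prime) % 8 == 6  # -2 mod 8
```

The check of the congruence mirrors the argument in the text: for $t=2$ one has $\lambda$ even and $\mu$ odd, so $h^2=2\lambda^2 d'-2\mu^2\equiv -2\pmod 8$.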
The degree and the divisibility of a primitive polarization $h$ of a variety $X$ of $K3^{[2]}$-type determine the isomorphism class of the pair $(H^2(X, \Z), h)$ among pairs $(\Lambda, x)$ where $\Lambda$ is a lattice isomorphic to $\Lambda_2$ and $x$ is a primitive element of $\Lambda$ by Corollary 3.7 of [@GritskenkoHulekSankaran10]. As in [@Verbitsky09; @Huybrechts11], it is possible to construct the moduli space of marked primitively polarized varieties of $K3^{[n]}$-type with a fixed polarization type. This is a smooth complex variety that is a fine moduli space for triples $(X, h, \phi)$, where $X$ is a variety of $K3^{[n]}$-type with a primitive polarization $h$ of the polarization type considered, and $\phi : H^2(X, \Z){\rightarrow}\Lambda_n$ is an isometry. The key result of Apostolov we need is the following. \[thm:apost\] The moduli space of marked primitively polarized varieties of $K3^{[2]}$-type and fixed polarization type is connected. This theorem relies on two important results, namely the global Torelli theorem of Verbitsky [@Verbitsky09] and the computation of monodromy groups for varieties of $K3^{[n]}$-type due to Markman [@Markman10]. We will use it in the following form. \[cor:deftype\] Let $(X, h)$ be a primitively polarized variety of $K3^{[2]}$-type. Then there exists a $K3$ surface $S$, with a primitive polarization $h'\in H^2(S, \Z)$, two coprime integers $\lambda$ and $\mu$ such that $(\lambda, \mu)\in\{(1, 0), (2,1)\}$, a smooth projective morphism $\pi : \mathcal X{\rightarrow}B$ of complex smooth quasi-projective varieties, two points $b$ and $b'$ of $B$, and a global section $\alpha$ of $R^2\pi_*\Z$ such that 1. $(\mathcal X_b, \alpha_b)\simeq (X, h);$ 2. $(\mathcal X_{b'}, \alpha_{b'})\simeq (S^{[2]}, \lambda h'-\mu\delta).$ In other words, the pair $(X, h)$ is deformation-equivalent to the pair $(S^{[2]}, \lambda h'-\mu\delta)$. Let $2d=h^2$ and let $t$ be the divisibility of $h$. 
If $t=1$, let $(S, h')$ be a $K3$ surface with a primitive polarization $h'$ of degree $2d$ and let $(\lambda, \mu)=(1, 0)$. If $t=2$, by Corollary \[cor:congruence\], we can write $2d=8d'-2$. Let $(S, h')$ be a $K3$ surface with a primitive polarization of degree $2d'$ and let $(\lambda, \mu)=(2, 1)$. Let $\widetilde h=\lambda h'+\mu\delta$. Then $\widetilde h$ is primitive and we have $$\widetilde h^2=2d$$ and $$\widetilde h.H^2(S^{[2]}, \Z)=t\Z.$$ The class $\widetilde h\in H^2(S^{[2]}, \Z)$ might not be ample. However, since its square is positive, either $\widetilde h$ or $-\widetilde h$ becomes ample on any small deformation of $(S^{[2]}, \widetilde h)$ with Picard number $1$, by Theorem 3.11 of [@Huybrechts99] (see also [@Huybrechts03]). Intersecting with curves of the form $C+s_2$ on $S^{[2]}$, where $C$ is an ample curve of class $h'$ on $S$ and $s_2$ is a varying point of $S$ not lying on $C$, shows that $-\widetilde h$ is not the class of an effective divisor. This proves that there exists a pair $(Y, \widetilde h_Y)$, where $Y$ is a holomorphic symplectic variety and $\widetilde h_Y$ is a primitive ample class in $H^2(Y, \Z)$, that is deformation-equivalent to $(S^{[2]}, \widetilde h)$. By construction, $(X, h)$ and $(Y, \widetilde h_Y)$ have the same polarization type. By Theorem \[thm:apost\], they are deformation-equivalent. This in turn shows that $(X, h)$ is deformation-equivalent to $(S^{[2]}, \lambda h'+\mu\delta)$, hence to $(S^{[2]}, \lambda h'-\mu\delta)$, as changing the sign of $\mu$ does not change the deformation type of the pair – see the remark at the end of section 4. Deforming rational curves and Lagrangian subvarieties of holomorphic symplectic varieties ========================================================================================= Let $\pi : \mathcal X{\rightarrow}B$ be a smooth projective morphism of complex quasi-projective varieties of relative dimension $d$, and let $\alpha$ be a global section of the local system $R^{2d-2}\pi_*\Z$. 
Fixing such a section $\alpha$, we can consider the relative Kontsevich moduli stack of genus zero stable curves $\overline{{\mathcal M_0}}(\mathcal X/B, \alpha)$. We refer to [@BehrendManin96; @FultonPandharipande95; @AbramovichVistoli02] for details and constructions. The space $\overline{{\mathcal M_0}}(\mathcal X/B, \alpha)$ parametrizes maps $f : C{\rightarrow}X$ from genus zero stable curves to fibers $X=\mathcal X_b$ of $\pi$ such that $f_*[C]=\alpha_b$. The map $\overline{{\mathcal M_0}}(\mathcal X/B, \alpha){\rightarrow}B$ is proper. If $f$ is a stable map, we denote by $[f]$ the corresponding point of the Kontsevich moduli stack. For the remainder of this section, we work in the following situation. We fix a smooth projective holomorphic symplectic variety $X$ of dimension $2n$ and $f : C{\rightarrow}X$ an unramified map from a stable curve $C$ of genus zero to $X$. Let $\mathcal X{\rightarrow}B$ be a smooth projective morphism of smooth connected quasi-projective varieties and let $0$ be a point of $B$ such that $\mathcal X_0=X$. Let $\alpha$ be a global section of $R^{4n-2}\pi_*\Z$ such that $\alpha_0=f_*[C]$ in $H^{4n-2}(X, \Z)$. \[prop:defcurves\] Let $\mathcal M$ be an irreducible component of $\overline{\mathcal M_0}(X, f_*[C])$ containing the point corresponding to $f$. Then the stack $\mathcal M$ has dimension at least $2n-2$. If $\mathcal M$ has dimension $2n-2$, then any irreducible component of the Kontsevich moduli stack $\overline{{\mathcal M_0}}(\mathcal X/B, \alpha)$ that contains $[f]$ dominates $B$. In other words, the stable map $f : C{\rightarrow}X$ deforms over a finite cover of $B$. Related results have been obtained by Ran, see for instance Example 5.2 of [@Ran95]. Let $\widetilde{\mathcal X}{\rightarrow}S$ be a local family of deformations of $X$ such that $B$ is the Noether-Lefschetz locus associated to $f_*[C]$ in $S$. In particular, $B$ is a smooth divisor in $S$. 
Since the canonical bundle of $X$ is trivial, a standard dimension count shows that any component of the deformation space of the stable map $f$ over $S$ has dimension at least $$\dim{S}+2n-3=\dim{B}+2n-2.$$ Since the image in $S$ of any component of the deformation space of the stable map $f$ is contained in $B$, the fibers of such a component all have dimension at least $2n-2$. If any fiber has dimension $2n-2$, it also follows that the corresponding component has to dominate $B$, which shows the result. In order to use the criterion above in concrete situations, we will study the locus spanned by a family of rational curves. The following gives a strong restriction on this locus. \[prop:mumford\] Let $X$ be a projective manifold of dimension $2n$ endowed with a symplectic form, and let $Y$ be a closed subvariety of codimension $k$ of $X$. Then the MRC quotient of $Y$ is at least $(2n-2k)$-dimensional. Before proving the proposition, we prove the following easy fact from linear algebra. \[fact:sympl\] Let $(F,\omega)$ be a symplectic vector space of dimension $2n$ and $V$ a subspace of codimension $k$ of $F$. Then $V$ contains a subspace $V'$ of dimension at least $2n-2k$ such that the $2$-form $\omega_{|V'}$ is symplectic on $V'$. In particular, $(\omega_{|V})^{n-k}\neq 0$. Let $V^{\perp}$ be the orthogonal to $V$ with respect to the symplectic form $\omega$. Since $\omega$ is non-degenerate, we have $$\dim(V^{\perp})=k,$$ which implies that $\dim(V\cap V^{\perp})\leq k$. Let $V'$ be a subspace of $V$ such that $V=V'\oplus (V\cap V^{\perp})$. Then $\dim(V')\geq 2n-2k$. It is readily seen that the restriction of $\omega$ to $V'$ is non-degenerate. We argue by contradiction and suppose the MRC quotient of $Y$ has dimension at most $2n-2k-1$. Let $\mu :\tilde Y\to Y \subset X$ be a resolution of the singularities of $Y$. 
Mumford’s theorem on zero cycles – see for instance Proposition 22.24 of [@Voisin2002] – implies that $$H^0(\tilde Y, \Omega_{\tilde Y}^m)=0,\ \ \forall m\geq 2n-2k.$$ However, if $\omega$ is the symplectic form on $X$, then $$\mu^*(\omega^{n-k})\not=0,$$ by Lemma \[fact:sympl\], which is a contradiction. The results above allow us to give a simple criterion for the existence of uniruled divisors on polarized deformations of a given holomorphic symplectic variety. \[cor:divisor\] Let $\mathcal M$ be an irreducible component of $\overline{\mathcal M_0}(X, f_*[C])$ containing $[f]$, and let $Y$ be the subscheme of $X$ covered by the deformations of $f$ parametrized by $\mathcal M$. If $Y$ is a divisor in $X$, then 1. The stable map $f : C{\rightarrow}X$ deforms over a finite cover of $B$. 2. Let $b$ be a point of $B$. Then $\mathcal X_b$ contains a uniruled divisor. Assume that $\dim(\mathcal M)>2n-2$. By a dimension count, this implies that if $y$ is any point of $Y$, the family of curves parametrized by $\mathcal M$ passing through $y$ is at least $1$-dimensional, which in turn shows that the MRC quotient of $Y$ has dimension at most $\dim(Y)-2=2n-3$ and contradicts Proposition \[prop:mumford\]. As a consequence, $\mathcal M$ has dimension $2n-2$ and (1) holds by Proposition \[prop:defcurves\]. To show statement (2), we can assume that $B$ has dimension $1$. By assumption, the deformations of $f$ in $X$ cover a divisor in $X$. Let $\mathcal M'$ be an irreducible component of $\overline{{\mathcal M_0}}(\mathcal X/B, \alpha)$ containing $\mathcal M$. Then $\mathcal M'$ dominates $B$ by Proposition \[prop:defcurves\]. Let $\mathcal Y'\subset \mathcal X{\rightarrow}B$ be the locus in $\mathcal X$ covered by the deformations of $f$ parametrized by $\mathcal M'$. Since $\mathcal M'$ dominates $B$, any irreducible component of $\mathcal Y'$ dominates $B$. 
Since the fiber of $\mathcal Y'{\rightarrow}B$ over $0$ is a divisor in $\mathcal X_0=X$, the fiber of $\mathcal Y'{\rightarrow}B$ at any point $b$ is a divisor, which is uniruled by construction. \[rmk:nonirr\] [*It might not be true that $Y$ itself deforms over a finite cover of $B$.*]{} The criterion above can be extended to the existence of subschemes of codimension $2$ with MRC quotient of dimension $2n-4$ – by Proposition \[prop:mumford\], this is the minimal possible dimension. \[prop:Lagrangian\] Assume that there exist - a projective scheme $Z$ of dimension $2n-2$ together with a morphism $\phi : Z{\rightarrow}X$ birational onto its image, - a stable map of genus zero $g : C{\rightarrow}Z$ such that each irreducible component of $C$ meets the open subset of $Z$ where $\phi$ is birational onto its image, satisfying the following conditions: - $f=\phi\circ g$, and the deformations of $g$ cover $Z$. - There exists a $(2n-2)$-dimensional irreducible component $\mathcal M_Z$ of the space $\overline{\mathcal M_0}(Z, g_*[C])$ containing $[g]$. Let $\mathcal M$ be the image of $\mathcal M_Z$ in $\overline{\mathcal M_0}(X, f_*[C])$ and let $Y$ be the reduced subscheme of $X$ covered by the deformations of $f$ parametrized by $\mathcal M$. Then the following hold: 1. The space $\mathcal M$ has dimension $2n-2$, and it is an irreducible component of $\overline{\mathcal M_0}(X, f_*[C])$. 2. The stable map $f : C{\rightarrow}X$ deforms over a finite cover of $B$. 3. Let $b$ be a point of $B$. Then $\mathcal X_b$ contains a subscheme $Y_b$ of codimension $2$ such that the MRC quotient of $Y_b$ has dimension $2n-4$. While the statement above is somewhat long, it gives a practical criterion, as it makes it possible to replace deformation-theoretic computations on $X$ by computations on the auxiliary variety $Z$. In particular, one does not need to determine the locus in $X$ spanned by the deformations of the stable curve $C$. 
This is apparent in the following application which will be generalized in Proposition \[prop:codim2\]. \[cor:example\] Assume that $X$ has dimension $4$, and that there exists a rational map $\psi : \P^2\dashrightarrow X$, birational onto its image, such that the stable map $f$ can be written as $\psi\circ i : \P^1{\rightarrow}X$, where $i:\P^1{\rightarrow}\P^2$ is the inclusion of a line in $\P^2$ that does not meet the indeterminacy locus of $\psi$. Then 1. The stable map $f : C{\rightarrow}X$ deforms over a finite cover of $B$. 2. Let $b$ be a point of $B$. Then $\mathcal X_b$ contains a Lagrangian subvariety, i.e., an isotropic subvariety of dimension $2$, that is rational. We can find a suitable blow-up $Z$ of $\P^2$ such that the composition $Z{\rightarrow}\P^2{\rightarrow}X$ is a morphism and such that $Z{\rightarrow}\P^2$ is an isomorphism outside the indeterminacy locus of $\psi : \P^2\dashrightarrow X$. We write $g : \P^1{\rightarrow}Z$ for the unique lift of $i$. It is readily seen that $Z$ and $g$ satisfy the hypotheses of Proposition \[prop:Lagrangian\]. The assumptions on $\phi$ and $g$ ensure that the natural map from $\overline{\mathcal M_0}(Z, g_*[C])$ to $\overline{\mathcal M_0}(X, f_*[C])$ is an immersion at $f$. As a consequence, the dimension of $\mathcal M$ is $2n-2$. Let $\widetilde{\mathcal M}$ be an irreducible component of $\overline{\mathcal M_0}(X, f_*[C])$ containing $\mathcal M$, and assume that $\widetilde{\mathcal M}\neq \mathcal M$. Then the dimension of $\widetilde{\mathcal M}$ is strictly bigger than $2n-2$. From the definition, it is readily seen that $Y$ is the image of $Z$ in $X$. Since by assumption the map from $Z$ to $X$ is birational onto its image, $Y$ has codimension $2$ in $X$. Let $Y'$ be the reduced closed subscheme of $X$ covered by the deformations of $f$ parametrized by $\widetilde{\mathcal M}$. Since $\widetilde{\mathcal M}$ is irreducible, $Y'$ is irreducible. We also have $Y\subset Y'$. 
By Corollary \[cor:divisor\], $Y'$ cannot be a divisor as $\widetilde{\mathcal M}$ has dimension strictly greater than $2n-2$. As a consequence, $Y'=Y$. Since $\phi$ is a birational isomorphism from $Z$ to $Y$, and since every component of $f(C)$ meets the locus above which $\phi$ is an isomorphism, any deformation of $f$ whose image is in $Y$ lifts to a deformation of $g$ in $Z$. This shows that $\widetilde{\mathcal M}$ has the same dimension as $\mathcal M_Z$, i.e., that $\widetilde{\mathcal M}$ has dimension $2n-2$. This shows statement (1). Statement (2) follows from (1) and Proposition \[prop:defcurves\]. We prove the third statement. We can assume that $B$ is a curve. Let $\mathcal M'$ be an irreducible component of $\overline{{\mathcal M_0}}(\mathcal X/B, \alpha)$ containing $[f]$. Let $b$ be a point of $B$. As in Corollary \[cor:divisor\] and by upper semicontinuity of dimensions, the deformations of $f$ parametrized by $\mathcal M'$ in $\mathcal X_b$ cover a codimension $2$ subscheme of $\mathcal X_b$. Since this codimension $2$ subscheme is covered by a $(2n-2)$-dimensional family of stable curves, one of its irreducible components has MRC quotient of dimension at most $2n-4$. By Proposition \[prop:mumford\], this dimension is exactly $2n-4$, which shows (3). \[rmk:nonirr2\] [*As in Corollary \[cor:divisor\], it might not be true that $Y$ itself deforms over a finite cover of $B$: one needs to consider the union of $Y$ and some other components of the locus covered by the deformations of $f$ in $X$, as the fiber over $0$ of the irreducible component $\mathcal M'$ above might not be irreducible.*]{} Examples and proof of Theorem \[thm:fourfolds\] =============================================== In this section, we give examples of applications of Corollary \[cor:divisor\] and Proposition \[prop:Lagrangian\]. In particular, we prove Theorem \[thm:fourfolds\]. For ease of exposition, we only consider varieties of $K3^{[n]}$-type. 
As in section 2, if $S$ is a $K3$ surface and $n$ a positive integer, we consider $H^2(S, \Z)$ as a subspace of $H^2(S^{[n]}, \Z)$. If $n\geq 2$, we denote by $\delta$ half of the cohomology class of the exceptional divisor of $S^{[n]}$. \[prop:divgen\] Let $S$ be a $K3$ surface, and let $h\in H^2(S, \Z)$ be an ample cohomology class. Let $n\geq 2$ be an integer. Let $X$ be a holomorphic symplectic variety together with a class $h_X\in H^2(X, \Q)$ such that the pair $(X, h_X)$ is deformation-equivalent to the pair $(S^{[n]}, h-\frac{\mu}{n-1}\delta)$ where $\mu\in\{0, 1\}$ and $\mu\neq 0$ if $h^2=2$. Then $X$ contains a uniruled divisor. [*If $\mu=0$, the class $h$ cannot be ample on $S^{[n]}$ as $h.\delta=0$. However, Theorem 3.1 of [@Huybrechts99] shows that $X$ is projective as it contains a class with positive square. The cohomology class of the divisor above has to be proportional to $h_X$, as can be seen by deforming $(X, h_X)$ to a variety with Picard number $1$.*]{} We first assume $\mu=0$. Up to deforming the pair $(S, h)$ and by a theorem of Chen [@Chen02], we can find an integral nodal rational curve on $S$ with cohomology class $h$. We can also assume that the Néron-Severi group of $S$ has rank $1$. Let $\phi : \P^1{\rightarrow}S$ be the corresponding map with image $R$. Let $s_2, \ldots, s_n$ be $n-1$ distinct points in $S$ that do not lie on $R$. The stable map $$f : \P^1{\rightarrow}S^{[n]}, t\mapsto \phi(t)+s_2+\ldots+s_n$$ satisfies the assumptions of Corollary \[cor:divisor\], as the deformations of $f$ cover the divisor $D$ in $S^{[n]}$ corresponding to length $n$ zero-dimensional subschemes of $S$ that meet $R$. To conclude the proof, we need to show that the class $f_*[\P^1]$ remains a Hodge class along the deformation space of $(S^{[n]}, h)$. Let $\alpha\in H^2(S^{[n]}, \Q)$ be the dual of $f_*[\P^1]$ with respect to the Beauville-Bogomolov form. 
In other words, $\alpha$ is characterized by the identity $\alpha.v=f_*[\P^1]\cup v$ for any $v\in H^2(S^{[n]}, \Q)$. The class $\alpha$ is a Hodge class. We need to show that the class $\alpha$ remains a Hodge class along the deformation space of $(S^{[n]}, h)$. Since the Néron-Severi group of $S$ has rank $1$, any Hodge class on $H^2(S^{[n]}, \Q)$ is a linear combination of $h$ and $\delta$. Since the image of $f$ does not meet the exceptional divisor, $\alpha.\delta=0$, so $\alpha$ is proportional to $h$. This concludes the proof. Now assume that $\mu=1$. Again, up to deforming $(S, h)$ and using [@MoriMukai83; @Chen02], we can assume that $S$ has Picard number $1$ and contains a pencil of curves of genus $1$ whose general member has only nodal singularities. Let $\mathcal C{\rightarrow}S$ be the corresponding family. Let $C$ be a general member of the pencil of curves above. The normalization $C'$ of $C$ is a smooth curve of genus $1$. Let $s_3, \ldots, s_n$ be $n-2$ distinct points of $S$ that do not lie on $C$. Any $g^1_2$ on $C'$ induces a morphism $\phi : \mathbb{P}^1{\rightarrow}S^{[2]}$. Consider the stable map $$f : \P^1{\rightarrow}S^{[n]}, t\mapsto \phi(t)+s_3+\ldots+s_n$$ associated to such a general $g^1_2$. Then the class $f_*[\P^1]$ is dual to the class $h-\frac{1}{n-1}\delta$ by the computation of Lemma 2.1 in [@CilibertoKnutsen12]. Varying the $g^1_2$, the curve $C$ in the family $\mathcal C{\rightarrow}S$, and the points $s_i$, we see that the deformations of $f$ cover a divisor in $S^{[n]}$. Corollary \[cor:divisor\] allows us to conclude. We now turn to codimension $2$ subvarieties, i.e., to the application of Proposition \[prop:Lagrangian\]. \[prop:codim2\] Let $S$ be a $K3$ surface, and let $h\in H^2(S, \Z)$ be an ample cohomology class. Let $n\geq 2$ be an integer. 
Let $X$ be a holomorphic symplectic variety together with a class $h_X\in H^2(X, \Q)$ such that the pair $(X, h_X)$ is deformation-equivalent to the pair $(S^{[n]}, h-\frac{\mu}{2(n-1)}\delta)$ where $\mu\in\{0, 1\}$. Then $X$ contains a subscheme $Y$ of codimension $2$ such that the MRC quotient of $Y$ has dimension $2n-4$. The proof is a variation on Corollary \[cor:example\]. As above, we can assume that the Néron-Severi group of $S$ has rank $1$, and that we can find a nodal rational curve $R$ on $S$ with cohomology class $h$. Let $\phi : \P^1{\rightarrow}S$ be the corresponding map with image $R$. Let $s_3, \ldots, s_n$ be $n-2$ distinct points in $S$ that do not lie on $R$. The symmetric product of $\P^1$ is isomorphic to $\P^2$, and we denote by $\psi$ the rational map $$\psi : \P^2=(\P^1)^{[2]}\dashrightarrow S^{[n]},\ t_1+t_2\mapsto \phi(t_1)+\phi(t_2)+s_3+\ldots+s_n.$$ The diagonal $\Delta$ in $(\P^1)^{[2]}=\P^2$ is a smooth conic. Let $l$ be a line in $\P^2$ that does not pass through the finitely many points of indeterminacy of $\psi$ and that intersects $\Delta$ in two distinct points $x$ and $y$. The point $x$ corresponds to a zero-dimensional non-reduced subscheme of length $2$ of $S$ lying on $R$. As a consequence, $x+s_3+\ldots+s_n$ is a well-defined point of $S^{[n]}$. The locus of length $n$ subschemes of $S$ having the same support as $x+s_3+\ldots+s_n$ is a smooth rational curve $C_x$ in $S^{[n]}$ contained in the exceptional divisor $E$ of $S^{[n]}$. The curve $C_x$ is a fiber of the Hilbert-Chow morphism $S^{[n]}{\rightarrow}S^{(n)}$. We define a stable map $f_{\mu} : C_{\mu}{\rightarrow}S^{[n]}$ of genus $0$ as follows. - If $\mu=1$, let $f_1 : C_1=\P^1{\rightarrow}S^{[n]}$ be the composition of $\psi$ and the inclusion of the line $l$ in $\P^2$. - If $\mu=0$, let $f_0 : C_0{\rightarrow}S^{[n]}$ be the stable map of genus zero obtained by glueing $f_1$ with the smooth rational curve $C_x$ at the point $x+s_3+\ldots+s_n$. 
We define schemes $Z_{\mu}$ mapping to $S^{[n]}$ as follows. - If $\mu=1$, let $T_1$ be the product $\P^2\times S^{[n-2]}$. The rational map $$\psi_1 : T_1\dashrightarrow S^{[n]},\ (t_1+t_2, P)\mapsto \phi(t_1)+\phi(t_2)+P$$ is defined as long as the support of the subscheme $P\subset S$ of length $n-2$ does not meet $R$. Blowing up along the indeterminacy locus, we get a map $$p_1 : Z_1{\rightarrow}S^{[n]}$$ that is birational onto its image. - If $\mu=0$, let $E_0$ be the product $\P(\phi^*T_S)\times S^{[n-2]}$ – recall that $\phi : \P^1{\rightarrow}S$ is the map with image the rational curve $R$ we are considering. We identify the diagonal $\Delta$ with $\P(T_{\P^1})\subset \P(\phi^*T_S)$. The natural morphism $c : \P(\phi^*T_S){\rightarrow}S^{[2]}$ induces a rational map $$\psi'_0 : E_0=\P(\phi^*T_S)\times S^{[n-2]}\dashrightarrow S^{[n]}, (x, P)\mapsto c(x)+P$$ that is defined as long as the support of the subscheme $P\subset S$ of length $n-2$ does not meet $R$. Let $T_0$ be the projective scheme obtained by gluing $T_1$ and $E_0$ along their common closed subscheme $\Delta\times S^{[n-2]}$. The rational maps $\psi_1$ and $\psi'_0$ glue together to induce a rational map $$\psi_0 : T_0\dashrightarrow S^{[n]}.$$ Again, blowing up along the indeterminacy locus, we get a map $$p_0 : Z_0{\rightarrow}S^{[n]}$$ that is birational onto its image. By definition, the map $f_{\mu}$ factors as $$\xymatrix{C_{\mu}\ar[r]^{h_{\mu}} & T_{\mu}\ar[r]^{\psi_{\mu}} & S^{[n]}}$$ and the image of $C_{\mu}$ in $T_{\mu}$ does not meet the indeterminacy locus of $\psi_{\mu} : T_{\mu}\dashrightarrow S^{[n]}$. As a consequence, $f_{\mu}$ lifts to a map $g_{\mu} : C_{\mu}{\rightarrow}Z_{\mu}$ such that $f_{\mu}=p_{\mu}\circ g_{\mu}$, and the local deformations of $g_{\mu}$ in $Z_{\mu}$ are the same as the local deformations of $h_{\mu}$ in $T_{\mu}$. Certainly, the deformations of $g_{\mu}$ cover $Z_{\mu}$. 
It is readily seen from the definition that the local deformation space of $h_{\mu}$ in $T_{\mu}$ has dimension $2n-2$ – either directly or by computing normal bundles. This shows that $f_{\mu}$, $Z_{\mu}$ and $g_{\mu}$ satisfy the assumptions of Proposition \[prop:Lagrangian\]. To conclude the proof, we need to show as in the proof of Proposition \[prop:divgen\] that the dual of the class $f_{\mu*}[C_{\mu}]$ with respect to the Beauville-Bogomolov form is proportional to $h-\frac{\mu}{2(n-1)}\delta$. Let $H$ be a generic hyperplane section of $S$ with class $h$. Let $H_n$ be the divisor in $S^{[n]}$ corresponding to the subschemes of $S$ of length $n$ whose support intersects $H$. Then the class of $H_n$ in $H^2(S^{[n]}, \Z)$ is $h$. With the notations above, it can be checked that the intersection number of the image of $l$ with $H_n$ is $h^2$ – the image of $l$ can be deformed to a rational curve of the form $t\mapsto \phi(t) + t_2+s_3+\ldots+s_n$, where $t_2$ is a smooth point of $R$ that does not belong to $H$. Similarly, the intersection number of $C_x$ with $H_n$ is zero. We readily check that the intersection number of the image of $l$ with the exceptional divisor $E$ is $2$: the images of $x$ and $y$ are the two transverse intersection points. Finally, the intersection number of $C_x$ with the exceptional divisor $E$ is $-2$, see for instance Example 4.2 of [@HassettTschinkel10]. Since $\delta=\frac{1}{2}[E]$ and $\delta^2=-2(n-1)$, this shows that $$(f_{\mu})_*[C_{\mu}]\cup \delta = \mu$$ and, finally, that $(f_{\mu})_*[C_{\mu}]$ is dual to $h-\frac{\mu}{2(n-1)}\delta$. [*In the construction above, we could have dealt with the case $\mu=-1$ in a similar way by considering the stable map obtained by glueing $l$ with $C_x$ and $C_y$. 
However, it is possible to show that the sign of $\mu$ does not change the deformation type of a pair $(S^{[n]}, h+\mu\delta)$ – indeed, the reflection with respect to the class of the exceptional divisor of $S^{[n]}$ changes the sign of $\mu$ and belongs to the monodromy group of $S^{[n]}$ in the terminology of Markman, as shown in [@Markman13] relying on work of Druel [@Druel11], which implies the statement by Corollary 7.4 of [@Markman11].*]{} The results above can now be applied to prove the theorem stated in the introduction. Let $h$ be a primitive polarization on a projective fourfold $X$ of $K3^{[2]}$-type. By Corollary \[cor:deftype\], we can find a $K3$ surface $S$, an ample cohomology class $h'$ and $\mu\in\{0,1\}$ such that $(X, h)$ is deformation equivalent to $(S^{[2]}, \lambda h'-\mu\delta)$ for $(\lambda, \mu)\in \{(1,0), (2,1)\}$. Up to dividing by $2$ if $\mu=1$, which does not change the deformation type, we can apply Propositions \[prop:divgen\] and \[prop:codim2\] to conclude – note that rationally connected irreducible surfaces are rational. The results above do not directly give the cohomology classes of the subschemes we constructed. However, cohomological arguments give the following refinement of Theorem \[thm:fourfolds\]. The argument that allows us to pass from Theorem \[thm:fourfolds\] to Theorem \[thm:classes\] is due to Eyal Markman, to whom we are very grateful for sharing his unpublished notes [@Markmanunpublished] with us. \[thm:classes\] Let $X$ be a projective holomorphic symplectic fourfold of $K3^{[2]}$-type and let $h$ be an ample class in $H^2(X, \Z)$. In Theorem \[thm:fourfolds\], we can assume that the cohomology class of the uniruled divisor on $X$ is a multiple of $h$, and that the cohomology class of the rational surface is a multiple of $5h\cup h-\frac{1}{6}q(h)c_2(X)$. [*For the sake of clarity, we did not use the notation $h^2$ in the formula above.*]{} We can assume that the pair $(X, h)$ is very general. 
In that case, the Picard number of $X$ is one, so the cohomology class of any divisor on $X$ is proportional to $h$. Results of Markman in [@Markmanunpublished] show that the group of Hodge classes in $H^4(X, \Z)$ is generated by $h\cup h$ and $c_2(X)$, and that any Lagrangian surface in $X$ has cohomology class a multiple of $5h\cup h-\frac{1}{6}q(h)c_2(X)$. The refinement above allows us to construct a canonical zero-cycle of degree $1$ on projective varieties of $K3^{[2]}$-type. \[cor:zero\] Let $X$ be a projective holomorphic symplectic fourfold of $K3^{[2]}$-type. All points of $X$ lying on some rational surface with cohomology class a multiple of $5h\cup h-\frac{1}{6}q(h)c_2(X)$ have the same class $c_X$ in $CH_0(X)$. We only need to show that any two such surfaces have non-empty intersection. For this, it is enough to show that the square of $5h\cup h-\frac{1}{6}q(h)c_2(X)$ is nonzero. By computations of [@Markmanunpublished], we have, in the cohomology ring of $X$, $$h^4=3q(h)^2,\,c_2(X)^2=828\,\mathrm{and}\,h\cup h\cup c_2(X)=30q(h).$$ This shows that $$(5h\cup h-\frac{1}{6}q(h)c_2(X))^2=48q(h)^2\neq 0.$$ Some open questions =================== We briefly discuss some questions raised by our results above. In view of our constructions, it seems natural to hope for a positive answer for Question \[ques:subvar\]. Note that this is the case if $X$ is of the form $S^{[n]}$ for some $K3$ surface $S$ as follows by taking $Y$ to be the closure in $S^{[n]}$ of the locus of points $s_1+\ldots+s_n$, where the $s_i$ are distinct points of $S$, $k$ of which lie on a given rational curve of $S$. It would be interesting to refine Question \[ques:subvar\] to specify the expected cohomology classes of the subschemes $Y_k$. The particular case of middle-dimensional subschemes seems of special interest in view of the study of rational equivalence on holomorphic symplectic varieties. \[ques:Lagrangian\] Let $X$ be a projective holomorphic symplectic variety of dimension $2n$. 
Does there exist a rationally connected subvariety $Y$ of $X$ such that $Y$ has dimension $n$ and nonzero self-intersection? A positive answer to Question \[ques:Lagrangian\] implies, as in Corollary \[cor:zero\], the existence of a canonical zero-cycle of degree $1$ on $X$. This raises the following question. Assume that Question \[ques:Lagrangian\] has a positive answer for $X$ and let $y$ be any point of $Y$. Let $H$ be an ample divisor on $X$, and let $k_0, \ldots, k_{2n}$ be nonnegative integers such that $2k_0+\sum_i 2ik_i = 2n$. Is the zero-cycle $H^{k_0}\prod_i c_{i}(X)^{k_i}\in CH_0(X)$ proportional to the class of $y$ in $CH_0(X)$? Even in the case of a general polarized fourfold of $K3^{[2]}$-type, we do not know the answer to the preceding question. Finally, Question \[ques:subvar\] raises a counting problem as in the case of the Yau-Zaslow conjecture for rational curves on $K3$ surfaces [@YauZaslow95], which was solved in [@KlemmMaulikPandharipandeScheidegger10]. We do not know of a precise formulation for this question.
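Returning briefly to the computation in the proof of Corollary \[cor:zero\]: the expansion $(5h\cup h-\frac{1}{6}q(h)c_2(X))^2 = 75\,q(h)^2 - 50\,q(h)^2 + 23\,q(h)^2$ is pure arithmetic once the three relations from [@Markmanunpublished] are substituted, and it can be mechanized. The sketch below (plain Python with exact rationals, purely as a sanity check; the symbol $q$ stands in for $q(h)$) redoes this term-by-term expansion.

```python
from fractions import Fraction

def square_of_class(q):
    """(5 h∪h - (1/6) q c2)^2 evaluated with the relations
    h^4 = 3 q^2,  c2(X)^2 = 828,  (h∪h)∪c2(X) = 30 q, where q = q(h)."""
    return (25 * (3 * q**2)                          # 25 h^4           -> 75 q^2
            - 2 * 5 * Fraction(1, 6) * q * (30 * q)  # cross term       -> -50 q^2
            + Fraction(1, 36) * q**2 * 828)          # (1/36) q^2 c2^2  ->  23 q^2

q = Fraction(7)                 # any nonzero value for q(h)
assert square_of_class(q) == 48 * q**2   # = 48 q(h)^2, hence nonzero
```

Since the result is $48\,q(h)^2$ for every nonzero $q(h)$, the self-intersection is indeed nonzero, which is all the proof needs.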
--- abstract: 'Inspired by the increasing possibility of experimental control in ultra-cold atomic physics, we introduce a new Lax operator and use it to construct and solve a model with two wells and two on well states, together with its generalization to $n$ on well states. The models are solved by the algebraic Bethe ansatz method and can be viewed as describing two Bose-Einstein condensates allowing for an exchange interaction through Josephson tunneling.' address: - | $^{1}$ Centro Brasileiro de Pesquisas Físicas - CBPF\ Rua Dr. Xavier Sigaud, 150, Urca, Rio de Janeiro - RJ - Brazil - | $^{2}$ Instituto de Física da UFRGS\ Av. Bento Gonçalves, 9500, Agronomia, Porto Alegre - RS - Brazil author: - 'G. Santos$^{1}$, A. Foerster$^{2}$ and I. Roditi$^{1}$' title: 'A bosonic multi-state two-well model' --- Introduction ============ The realization of Bose-Einstein condensates (BEC), achieved by taking dilute alkali gases to ultra-low temperatures [@early; @angly], is certainly among the most exciting recent experimental achievements in physics. Since then, investigations dedicated to the comprehension of new phenomena associated with this state of matter, as well as its properties, have flourished in both the experimental and theoretical domains. One noticeable recent effort proposes the study of a two-well model with two levels in each well as a means to study Einstein-Podolsky-Rosen entanglement [@drummond]. This quest encouraged the search for new solvable models that could be related to the properties of such condensates, including the possibility of interaction among condensates [@jon1; @jon2; @jonjpa; @key-3; @dukelskyy; @Ortiz; @Kundu; @eric5; @GSantosaa; @GSantos; @GSantos11]. The motivation underlying these proposed models is that, by studying exactly solvable models, quantum fluctuations may be fully taken into account, providing tools that allow one to go beyond the results obtained by mean-field approximations. 
We believe that this fruitful approach may furnish new insights in this area, and contribute as well to the increasingly interesting field of integrable systems itself [@hertier; @batchelor]. In the present paper, we use the algebraic Bethe ansatz method to obtain a new multilevel two-well model. Each well hosts a BEC, and tunneling between the levels of the two wells is allowed. The algebraic formulation of the Bethe ansatz, and the associated quantum inverse scattering method (QISM), was primarily developed in [@fst; @ks; @takhtajan; @korepin; @faddeev]. The QISM has been used to unveil properties of a considerable number of solvable systems, such as one-dimensional spin chains, quantum field theory of one-dimensional interacting bosons [@korepin1] and fermions [@yang], two-dimensional lattice models [@korepin2], systems of strongly correlated electrons [@ek; @ek2], conformal field theory [@blz], integrable systems in high energy physics [@lipatov; @korch; @belitsky] and quantum algebras (deformations of universal enveloping algebras of Lie algebras) [@jimbo85; @jimbo86; @drinfeld; @frt]. For a pedagogical and historical review see [@faddeev2]. More recently, solvable models have also appeared in relation to string theories (see for instance [@dorey]). Remarkably, exactly solvable models are now finding their way into the lab, mainly in the context of ultracold atoms [@batchelor2] but also in nuclear magnetic resonance (NMR) experiments [@kino1; @kino2; @kitagawa; @haller; @liao; @coldea; @nmr1], making their study, as well as the derivation of new models, an even more fascinating field. Our point of view here stems from very recent results concerning the construction of Lax operators, by which it is possible to obtain solvable models suitable for the effective description of the interconversion interactions occurring in the BEC. 
Acquiring our motivation from these ideas, we present the construction of a two-well solvable model that contemplates interconversion among the levels in each well. We obtain this model from a multistate Lax operator whose construction is fully explained in the sequel. It is motivated by the construction in [@Kuznet; @key-3], where a Lax operator is defined for a single canonical boson operator; here, instead of a single operator, we choose a linear combination of independent canonical boson operators. It is convenient to underline that, although in the sequel we follow a formal presentation, we do arrive at new integrable physical Hamiltonians that share many aspects of the physical systems studied through interferometric techniques, such as in [@QYHEDrumm; @kuhnert]. In particular, as our models are solvable, one can obtain precise results, for instance, for properties related to the energy gap, entanglement and ground-state fidelity [@rubeni]. Also, as mentioned above, increasingly sophisticated NMR techniques [@oliv; @nmr1] allow the manipulation of qubits, and we believe that our models add an interesting possibility to the usual NMR nuclear quadrupole Hamiltonian. Algebraic Bethe ansatz method ============================= In this section we briefly review the algebraic Bethe ansatz method and present the transfer matrix used to obtain the solution of the models [@jonjpa; @Roditi]. We begin with the $gl(2)$-invariant $R$-matrix, depending on the spectral parameter $u$, $$R(u)= \left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & b(u) & c(u) & 0\\ 0 & c(u) & b(u) & 0\\ 0 & 0 & 0 & 1\end{array}\right),$$ with $b(u)=u/(u+\eta)$ and $c(u)=\eta/(u+\eta)$, so that $b(u) + c(u) = 1$. Above, $\eta$ is an arbitrary parameter, to be chosen later. 
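Before proceeding, one can numerically confirm that this $R$-matrix satisfies the Yang-Baxter equation $R_{12}(u-v)R_{13}(u)R_{23}(v)=R_{23}(v)R_{13}(u)R_{12}(u-v)$ discussed next. The sketch below is a pure-Python sanity check with exact rationals, not part of the construction; the helper names (`kron`, `matmul`, `P23`) are ours. It embeds $R(u)$ into the three factor pairs of $\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^2$ and checks the identity.

```python
from fractions import Fraction as F

def kron(A, B):
    """Kronecker product of two square matrices (lists of lists)."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def R(u, eta=F(1)):
    b, c = u / (u + eta), eta / (u + eta)
    return [[1, 0, 0, 0],
            [0, b, c, 0],
            [0, c, b, 0],
            [0, 0, 0, 1]]

I2 = [[1, 0], [0, 1]]
# permutation matrix swapping the two factors of C^2 x C^2
P = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
P23 = kron(I2, P)  # swaps tensor factors 2 and 3 of C^2 x C^2 x C^2

def R12(u): return kron(R(u), I2)          # R acting on factors 1, 2
def R23(u): return kron(I2, R(u))          # R acting on factors 2, 3
def R13(u): return matmul(P23, matmul(R12(u), P23))  # conjugate to 1, 3

u, v = F(7, 10), F(3, 10)
lhs = matmul(R12(u - v), matmul(R13(u), R23(v)))
rhs = matmul(R23(v), matmul(R13(u), R12(u - v)))
assert lhs == rhs   # Yang-Baxter equation holds exactly
```

Because all entries are exact `Fraction`s, the equality is checked exactly rather than up to floating-point tolerance.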
It is easy to check that $R(u)$ satisfies the Yang-Baxter equation $$R_{12}(u-v)R_{13}(u)R_{23}(v)=R_{23}(v)R_{13}(u)R_{12}(u-v),$$ where $R_{jk}(u)$ denotes the matrix acting non-trivially on the $j$-th and the $k$-th spaces and as the identity on the remaining space. Next we define the monodromy matrix $T(u)$, $$T(u)= \left( \begin{array}{cc} A(u) & B(u)\\ C(u) & D(u)\end{array}\right),\label{monod}$$ such that the Yang-Baxter algebra is satisfied, $$R_{12}(u-v)T_{1}(u)T_{2}(v)=T_{2}(v)T_{1}(u)R_{12}(u-v).\label{RTT}$$ In what follows we will choose a realization for the monodromy matrix, $\pi(T(u))=L(u)$, to obtain solutions of a family of models for multilevel two-well Bose-Einstein condensates. In this construction, the Lax operators $L(u)$ have to satisfy the relation $$R_{12}(u-v)L_{1}(u)L_{2}(v)=L_{2}(v)L_{1}(u)R_{12}(u-v). \label{RLL}$$ Then, defining the transfer matrix, as usual, through $$t(u)= tr \;\pi (T(u)) = \pi(A(u)+D(u)), \label{trTu}$$ it follows from (\[RTT\]) that the transfer matrix commutes for different values of the spectral parameter; i.e., $$[t(u),t(v)]=0, \;\;\;\;\;\;\; \forall \;u,\;v.$$ Consequently, the models derived from this transfer matrix will be integrable. Another consequence is that the coefficients $\mathcal{C}_k$ in the transfer matrix $t(u)$, $$t(u) = \sum_{k} \mathcal{C}_k u^k,$$ are conserved quantities (or simply $c$-numbers), with $$[\mathcal{C}_j,\mathcal{C}_k] = 0, \;\;\;\;\;\;\; \forall \;j,\;k.$$ If the transfer matrix $t(u)$ is a polynomial in $u$, with $k \geq 0$, it is easy to see that $$\mathcal{C}_0 = t(0) \;\;\; \mbox{and} \;\;\; \mathcal{C}_k = \frac{1}{k!}\left.\frac{d^kt(u)}{du^k}\right|_{u=0}. \label{C14b}$$ Multi-State Lax Operators ========================= Now we introduce a new $L$ operator with multi-state bosonic components. 
We have $n$ operators $\hat O^{r}_{j}$, each one acting on a given state; the index $j$ labels the state and the index $r$ labels the site corresponding to the Lax operator. In our case we will consider only two sites, each one supposed to be a well containing a Bose-Einstein condensate. In other words, the operators act, for each site, on the direct sum of the spaces associated to those states, $$\label{eq.1} V = V_1 \oplus V_2 \oplus \ldots \oplus V_n.$$ The operators of different states or different sites commute, $$[\hat{O}^r_{j},\hat{O}^s_{k}] = 0 \quad \mathrm{for} \quad r \neq s \;\,\mathrm{or}\;\, j\neq k,$$ and for the same state and site they obey their respective algebras. More explicitly, for the usual bosonic operators satisfying the canonical commutation relations (from here on we drop the $r$ index, as we denote one site by $a$ and the other by $b$), $$[a_{i}^{\dagger},a_{j}^{\dagger}] = [a_{i},a_{j}] = 0,\;\;\;[a_{i},a_{j}^{\dagger}]=\delta_{ij}I,$$ we have the following solution for a multi-state Lax operator, $$L^{\Sigma_{a}}(u) = \left(\begin{array}{cc} uI + \eta\sum_{j=1}^{n}N_{aj} & \sum_{j=1}^{n}t_{j}a_{j}\\ \sum_{j=1}^{n}s_{j}a_{j}^{\dagger} & \eta^{-1}\zeta I \end{array}\right), \label{L2}$$ provided the condition $\zeta = \sum_{j=1}^{n}s_{j}t_{j}$ is satisfied, where $\zeta$ is a constant. The above Lax operator then satisfies equation (\[RLL\]). Viewed as a monodromy matrix (\[monod\]), the Lax operator (\[L2\]) admits the following identifications, $$A(u) = uI + \eta\sum_{j=1}^{n}N_{aj}, \qquad B(u) = \sum_{j=1}^{n}t_{j}a_{j},$$ $$C(u) = \sum_{j=1}^{n}s_{j}a_{j}^{\dagger}, \qquad D(u) = \eta^{-1}\zeta I,$$ and the commutation relations, $$[A(u),B(v)] = - \eta B(v), \qquad [A(u),C(v)] = \eta C(v) ,$$ $$[B(u),C(v)] = \zeta I, \qquad [\star, D(u)] = 0,$$ where $\star$ stands for $A(v),\;B(v),\;C(v)$ or $D(v)$. Models ====== In this section we present two applications of the Lax operator $L$ in (\[L2\]). 
These are the two-well model with two on well states and its generalization to $n$ on well states. The two-well model with two on well states ------------------------------------------ The Hamiltonian of the system for two wells (sites) $a$ and $b$ is, $$\begin{aligned} H &=& U_{aa11}N_{a1}^2 + U_{aa12}N_{a1}N_{a2} + U_{aa22}N_{a2}^2 \nonumber \\ &+& U_{bb11}N_{b1}^2 + U_{bb12}N_{b1}N_{b2} + U_{bb22}N_{b2}^2 \nonumber \\ &+& U_{ab11}N_{a1}N_{b1} + U_{ab12}N_{a1}N_{b2} \nonumber \\ &+& U_{ab21}N_{a2}N_{b1} + U_{ab22}N_{a2}N_{b2} \nonumber \\ &-& \mu_{1}(N_{a1} - N_{b1}) - \mu_{2}(N_{a2} - N_{b2}) \nonumber \\ &+& \epsilon_{a1} N_{a1} + \epsilon_{a2} N_{a2} + \epsilon_{b1} N_{b1} + \epsilon_{b2} N_{b2}\nonumber \\ &-& \Omega_{11}(a_{1}^{\dagger}b_{1} + b_{1}^{\dagger}a_{1}) - \Omega_{12}(a_{1}^{\dagger}b_{2} + b_{2}^{\dagger}a_{1}) \nonumber \\ &-& \Omega_{21}(a_{2}^{\dagger}b_{1} + b_{1}^{\dagger}a_{2}) - \Omega_{22}(a_{2}^{\dagger}b_{2} + b_{2}^{\dagger}a_{2}). \label{H1}\end{aligned}$$ In the diagonal part of the Hamiltonian (\[H1\]), the $U_{pqjk}$ parameters describe the atom-atom $s$-wave scattering in the wells, the $\mu_{j}$ parameters are the relative external potentials between the wells, and the $\epsilon_{pj}$ are the energies of the on well states. The operators $N_{pj}$ are atom-number operators. The labels $p$ and $q$ stand for the wells $a$ and $b$, and the labels $j$ and $k$ stand for the on well states $1$ and $2$. In the off-diagonal part of the Hamiltonian, the parameters $\Omega_{jk}$ are the tunnelling amplitudes. In Fig. 1 we show a two-well potential with its respective on well states, where $\epsilon_{a1}$, $\epsilon_{a2}$ are the two states in the well $a$ and $\epsilon_{b1}$, $\epsilon_{b2}$ are the two states in the well $b$. The external potentials, $\mu_j$, shift their respective on well states. ![(Color online) Two wells coupled by Josephson tunnelling, with two on well states in each well. 
The atoms can tunnel between the wells, to the same or to a different state; the dashed red arrows show the respective tunnelling amplitudes $\Omega_{jk}$. We have two external potentials, $\mu_1$ and $\mu_2$, that shift the on well states. The dotted blue lines and the green arrows show two possible shifts, with $\mu_1 \neq \mu_2$. ](tunelamento5.eps) \[f1\] It is important to note that if we turn off the tunnelling, $\Omega_{jk}=0$, the Hamiltonian (\[H1\]) describes two decoupled Bose-Einstein condensates with two states in each one, $\{|n_{pj}\rangle\},\;j=1,2,\;p=a,b,$ and only one pure vector state for each condensate, $|\psi_p\rangle = |n_{p1}\rangle\otimes|n_{p2}\rangle,\;p=a,b$, with the total vector state of the system the tensor product of those pure vector states, $$|\Psi_T\rangle = |n_{a1}\rangle\otimes|n_{a2}\rangle\otimes|n_{b1}\rangle\otimes|n_{b2}\rangle.$$ The energies, $E_a$ and $E_b$, are respectively, $$\begin{aligned} E_a &=& U_{aa11}n_{a1}^2 + U_{aa12}n_{a1}n_{a2} + U_{aa22}n_{a2}^2 \nonumber \\ &+& (\epsilon_{a1} - \mu_{1})n_{a1} + (\epsilon_{a2} - \mu_{2})n_{a2}, \label{Ea}\end{aligned}$$ $$\begin{aligned} E_b &=& U_{bb11}n_{b1}^2 + U_{bb12}n_{b1}n_{b2} + U_{bb22}n_{b2}^2 \nonumber \\ &+& (\epsilon_{b1} + \mu_{1})n_{b1} + (\epsilon_{b2} + \mu_{2})n_{b2}. \label{Eb}\end{aligned}$$ The total energy is $E = E_a + E_b$, and the ground state of each condensate depends only on the scattering interactions $U_{aaij}$ and $U_{bbij}$ and the external potentials $\mu_j$. We have four conserved quantities, $[H,I_1]=[H,I_2]=[H,I_3]=[H,I_4] = 0$, with $I_1 = N_{a1},\; I_2 = N_{a2},\;I_3 = N_{b1}$ and $I_4 = N_{b2}$, and so the total number of atoms, $N = I_1 + I_2 + I_3 + I_4$, is a conserved quantity. 
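As a small numerical illustration of this decoupled limit, one can evaluate the diagonal part of (\[H1\]) on a Fock state and compare it with $E_a+E_b$. The sketch below (plain Python) makes the extra assumption $U_{abjk}=0$, so that the cross-well scattering also vanishes and the energy separates exactly into the two well contributions; the parameter values are arbitrary and purely illustrative.

```python
import random

random.seed(1)
# hypothetical parameter values, drawn at random only for this check;
# tunnelling (Omega_jk) and cross-well scattering (U_abjk) are set to zero
p = {k: random.uniform(-1, 1) for k in
     ("Uaa11", "Uaa12", "Uaa22", "Ubb11", "Ubb12", "Ubb22",
      "mu1", "mu2", "ea1", "ea2", "eb1", "eb2")}
na1, na2, nb1, nb2 = 3, 1, 2, 4          # occupation numbers n_{pj}

# diagonal part of the Hamiltonian (H1) on |n_a1, n_a2, n_b1, n_b2>
H_diag = (p["Uaa11"]*na1**2 + p["Uaa12"]*na1*na2 + p["Uaa22"]*na2**2
          + p["Ubb11"]*nb1**2 + p["Ubb12"]*nb1*nb2 + p["Ubb22"]*nb2**2
          - p["mu1"]*(na1 - nb1) - p["mu2"]*(na2 - nb2)
          + p["ea1"]*na1 + p["ea2"]*na2 + p["eb1"]*nb1 + p["eb2"]*nb2)

# energies E_a and E_b of the two decoupled condensates, eqs. (Ea) and (Eb)
E_a = (p["Uaa11"]*na1**2 + p["Uaa12"]*na1*na2 + p["Uaa22"]*na2**2
       + (p["ea1"] - p["mu1"])*na1 + (p["ea2"] - p["mu2"])*na2)
E_b = (p["Ubb11"]*nb1**2 + p["Ubb12"]*nb1*nb2 + p["Ubb22"]*nb2**2
       + (p["eb1"] + p["mu1"])*nb1 + (p["eb2"] + p["mu2"])*nb2)

assert abs(H_diag - (E_a + E_b)) < 1e-12
```

In particular, this makes visible how the relative potentials enter with opposite signs in the two wells: $\epsilon_{aj}-\mu_j$ in well $a$ and $\epsilon_{bj}+\mu_j$ in well $b$.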
When we turn on the tunnelling, $\Omega_{jk} \neq 0$, the BEC are coupled by Josephson tunnelling and the total number of atoms, $$N = N_{a1} + N_{a2} + N_{b1} + N_{b2},$$ remains a conserved quantity, but the $I_j,\; j = 1,2,3,4$, are no longer conserved because of the tunnelling terms, $$[H,N] = 0,\qquad [H,I_j] \neq 0.$$ If $\Omega_{12}=\Omega_{21}=0$, we have additional conserved quantities, $[H,J_1]=[H,J_2]=0$, with $J_1 = N_{a1} + N_{b1}$ and $J_2 = N_{a2} + N_{b2}$, and the total number of atoms can be written as $N = J_1 + J_2$. The state space is spanned by the basis $\{|n_{a1},n_{a2},n_{b1},n_{b2}\rangle\}$ and we can write each basis vector as $$|n_{a1},n_{a2},n_{b1},n_{b2}\rangle = \frac{1}{\sqrt{n_{a1}!n_{a2}!n_{b1}!n_{b2}!}}(a_{1}^{\dagger})^{n_{a1}}(a_{2}^{\dagger})^{n_{a2}}(b_{1}^{\dagger})^{n_{b1}}(b_{2}^{\dagger})^{n_{b2}}|0\rangle, \label{state1}$$ where $|0\rangle = |0_{a1},0_{a2},0_{b1},0_{b2}\rangle$ is the vacuum vector state in the Fock space. We can use the states (\[state1\]) to write the matrix representation of the Hamiltonian (\[H1\]). The dimension of the space increases rapidly with $N$, $$d = \frac{1}{6}(N + 3)(N + 2)(N + 1),$$ with $N$ a constant $c$-number, $N = n_{a1} + n_{a2} + n_{b1} + n_{b2}$. Now we use the co-multiplication property of the Lax operators to write the following monodromy matrix, $$L(u) = L_{1}^{\Sigma a}(u + \sum_{j=1}^{2}\omega_{j})L_{2}^{\Sigma b}(u - \sum_{j=1}^{2}\omega_{j}). 
\label{LH1}$$ Following the monodromy matrix (\[monod\]) we can write the operators, $$\begin{aligned} \pi(A(u)) &=& [(u + \omega_1 + \omega_2)I + \eta\sum_{j=1}^2 N_{aj}][(u - \omega_1 - \omega_2)I + \eta\sum_{j=1}^2 N_{bj}] \nonumber \\ &+& \sum_{j,k=1}^2 s_jt_kb_j^{\dagger}a_k, \label{piA}\\ \pi(B(u)) &=& [(u + \omega_1 + \omega_2)I + \eta\sum_{j=1}^2 N_{aj}]\sum_{j=1}^2 t_jb_j + \eta^{-1}\zeta\sum_{j=1}^2 t_ja_j, \label{piB}\\ \pi(C(u)) &=& (\sum_{j=1}^2 s_ja_j^{\dagger})[(u - \omega_1 - \omega_2)I + \eta\sum_{j=1}^2 N_{bj}] \nonumber \\ &+& \eta^{-1}\zeta\sum_{j=1}^2 s_jb_j^{\dagger}, \label{piC}\\ \pi(D(u)) &=& \sum_{j,k=1}^2 s_jt_ka_j^{\dagger}b_k + \eta^{-2}\zeta^2 I. \label{piD}\end{aligned}$$ Taking the trace of the operator (\[LH1\]) we get the transfer matrix $$\begin{aligned} t(u) & = & u^2I + u\eta N + [\eta^{-2}\zeta^{2} - (\omega_{1} + \omega_{2})^2]I \nonumber\\ & + & \eta^{2}(N_{a1}N_{b1}+N_{a1}N_{b2}+N_{a2}N_{b1}+N_{a2}N_{b2}) \nonumber \\ & + & \eta(\omega_{1} + \omega_{2})(N_{b1} - N_{a1} + N_{b2} - N_{a2}) \nonumber\\ & + & s_{1}t_{1}(a_{1}^{\dagger}b_{1} + b_{1}^{\dagger}a_{1}) + s_{1}t_{2}(a_{1}^{\dagger}b_{2} + b_{2}^{\dagger}a_{1}) \nonumber\\ & + & s_{2}t_{1}(a_{2}^{\dagger}b_{1} + b_{1}^{\dagger}a_{2}) + s_{2}t_{2}(a_{2}^{\dagger}b_{2} + b_{2}^{\dagger}a_{2}). 
\label{tH1} \end{aligned}$$ From (\[C14b\]) we identify the conserved quantities of the transfer matrix, $$\begin{aligned} \mathcal{C}_0 &=& [\eta^{-2}\zeta^{2} - (\omega_{1} + \omega_{2})^2]I \nonumber\\ & + & \eta^{2}(N_{a1}N_{b1}+N_{a1}N_{b2}+N_{a2}N_{b1}+N_{a2}N_{b2}) \nonumber \\ & + & \eta(\omega_{1} + \omega_{2}) (N_{b1} - N_{a1} + N_{b2} - N_{a2}) \nonumber\\ & + & s_{1}t_{1}(a_{1}^{\dagger}b_{1} + b_{1}^{\dagger}a_{1}) + s_{1}t_{2}(a_{1}^{\dagger}b_{2} + b_{2}^{\dagger}a_{1}) \nonumber\\ & + & s_{2}t_{1}(a_{2}^{\dagger}b_{1} + b_{1}^{\dagger}a_{2}) + s_{2}t_{2}(a_{2}^{\dagger}b_{2} + b_{2}^{\dagger}a_{2}),\end{aligned}$$ $$\begin{aligned} \mathcal{C}_1 &=& \eta N,\end{aligned}$$ $$\begin{aligned} \mathcal{C}_2 &=& I. \end{aligned}$$ The Hamiltonian (\[H1\]) is related with the transfer matrix (\[tH1\]) by the equation, $$H = u^2I + u\mathcal{C}_1 + \frac{\alpha}{\eta^2}\mathcal{C}_1^2 + [\eta^{-2}\zeta^{2} - (\omega_{1} + \omega_{2})^2]I - t(u), \label{H2}$$ where we have the following identification between the parameters, $$\alpha = U_{aajj} = U_{bbjj}, \qquad 2\alpha = U_{aajk} = U_{bbjk} \; (j\neq k),$$ $$2\alpha - \eta^2 = U_{abjk},$$ $$\eta(u - \omega_{1} - \omega_{2}) = \epsilon_{aj} - \mu_{j},\qquad \eta(u + \omega_{1} + \omega_{2}) = \epsilon_{bj} + \mu_{j}$$ $$\Omega_{jk} = s_j t_k, \qquad \sum_{j=1}^2 s_j t_j = \zeta, \qquad j,k = 1,2.$$ Using as pseudo-vacuum the tensor product vector state, $|0 \rangle \equiv |0,0\rangle_a \otimes |0,0 \rangle_b$, with $|0,0 \rangle_p$, which belongs to the direct sum space, denoting the Fock vacuum state associated to the well $p$ $(p = a,b)$, we can apply the algebraic Bethe ansatz method in order to find the Bethe ansatz equations (BAE), $$\begin{aligned} \frac{\eta^{2}[v^2_{i} - (\omega_{1} + \omega_{2})^2]}{\zeta^2} & = & \prod_{j\ne i}^{N}\frac{v_{i}-v_{j}-\eta}{v_{i}-v_{j}+\eta}, \;\;\;\;\; i,j=1,\ldots , N. 
\label{BAE1}\end{aligned}$$ The eigenvectors $\{ |v_1,v_2,\ldots,v_N\rangle \}$ of the Hamiltonian (\[H1\]) or (\[H2\]) and of the transfer matrix (\[tH1\]) are $$|\vec{v}\rangle \equiv |v_1,v_2,\ldots,v_N\rangle = \prod_{i=1}^N \pi(C(v_i))|0 \rangle,$$ and the eigenvalues of the Hamiltonian (\[H1\]) or (\[H2\]) are, $$\begin{aligned} E(\{v_i\}) & = & u^2 + u\mathcal{C}_1 + \frac{\alpha}{\eta^2}\mathcal{C}_1^2 + \eta^{-2}\zeta^2 - (\omega_{1} + \omega_{2})^2 \nonumber \\ & - & [u^2 - (\omega_{1} + \omega_{2})^2] \prod_{i=1}^{N}\frac{v_{i} - u -\eta}{v_{i} - u} \nonumber \\ & - & \eta^{-2}\zeta^{2} \prod_{i=1}^{N}\frac{v_{i} - u +\eta}{v_{i} - u},\end{aligned}$$ where the $\{v_i\}$ are solutions of the BAE (\[BAE1\]) and $N$ is the total number of atoms. The spectral parameter $u$ can be chosen arbitrarily. In Fig. 2 we show the dimensionless ground-state energy $E_0/\mu_1$ versus the relative external potential $\mu_2/\mu_1$ for different numbers of atoms $N$ and for a particular choice of the other parameters. ![In the figure we have the dimensionless ground-state energy $E_0/\mu_1$ versus the relative external potential $\mu_2/\mu_1$ for different values of the total number of atoms $N$ and for the following choice of parameters: $U_{aajj} = U_{bbjj}=U_{abjk} = 1$, $\;U_{aa12}=U_{bb12}=2$, $\;\epsilon_{a1} = -\epsilon_{a2} = -2$, $\;\epsilon_{b1} = -\epsilon_{b2} = 1$, $\;\mu_1=1$, $\;\Omega_{jk} = 0.5,\;j,k=1,2$, $u=0$ and $\omega_1=\omega_2=\eta=\zeta = 1$. 
](E0_N.eps) \[f2\] The two-well model with $n$ on well states ------------------------------------------ The Hamiltonian of the system is, $$\begin{aligned} H & = & \sum_{p=a,b}\;\;\sum_{j=1}^{n} U_{ppjj}N_{pj}N_{pj} \nonumber \\ &+& \frac{1}{2}\sum_{p=a,b}\;\;\sum_{ j,k=1 (j\neq k)}^{n} U_{ppjk} N_{pj}N_{pk} + \sum_{j,k=1}^{n} U_{abjk} N_{aj}N_{bk} \nonumber \\ & - & \sum_{j=1}^{n} \mu_{j}(N_{aj} - N_{bj}) + \sum_{j=1}^{n} \epsilon_{aj}N_{aj} + \sum_{j=1}^{n} \epsilon_{bj} N_{bj} \nonumber \\ & - & \sum_{j,k=1}^{n} \Omega_{jk}(a_{j}^{\dagger}b_{k} + b_{k}^{\dagger}a_{j}). \label{H3}\end{aligned}$$ The parameters in this model are analogous to those in the Hamiltonian (\[H1\]); we just remark that $U_{ppjk} = U_{ppkj}$. The Hamiltonian (\[H3\]) describes $s$-wave scattering between the atoms in all $n$ on well states and the tunnelling of the atoms between the wells $a$ and $b$ to the same or to different on well states. It is important to note that if we turn off the tunnelling, $\Omega_{jk}=0$, the Hamiltonian (\[H3\]) describes two decoupled Bose-Einstein condensates with $n$ states in each one, $\{|n_{pj}\rangle\},\;p=a,b,\;j=1,\ldots,n,$ and only one pure vector state for each condensate, $|\psi_p\rangle = \bigotimes_{j=1}^n |n_{pj}\rangle,\;p=a,b$, with the total vector state of the system the tensor product of those pure vector states, $$|\Psi_T\rangle = |\psi_a\rangle\otimes|\psi_b\rangle.$$ The energies, $E_a$ and $E_b$, are respectively, $$\begin{aligned} E_a & = & \sum_{j=1}^{n} U_{aajj}N_{aj}N_{aj} + \frac{1}{2}\sum_{ j,k=1 (j\neq k)}^{n} U_{aajk} N_{aj}N_{ak} + \sum_{j=1}^{n} (\epsilon_{aj} - \mu_{j})N_{aj}, \label{EaH3}\end{aligned}$$ $$\begin{aligned} E_b & = & \sum_{j=1}^{n} U_{bbjj}N_{bj}N_{bj} + \frac{1}{2}\sum_{ j,k=1 (j\neq k)}^{n} U_{bbjk} N_{bj}N_{bk} + \sum_{j=1}^{n} (\epsilon_{bj} + \mu_{j})N_{bj}. 
\label{EbH3}\end{aligned}$$ The total energy is $E = E_a + E_b$, and the ground state of each condensate depends only on the scattering interactions $U_{aaij}$ and $U_{bbij}$ and the external potentials $\mu_j$. We have $2n$ conserved quantities, $[H,I_{pj}]= 0,\;I_{pj} = N_{pj},\;j=1,\ldots,n,\;(p=a,b),$ and so the total number of atoms, $N = \sum_{p=a,b}\sum_{j=1}^n I_{pj}$, is also a conserved quantity. When we turn on the tunnelling, $\Omega_{jk} \neq 0$, the BEC are coupled by Josephson tunnelling and the total number of atoms, $$N = \sum_{j=1}^{n}(N_{aj} + N_{bj}),$$ remains a conserved quantity, but the $I_{pj}$ are no longer conserved because of the tunnelling terms, $$[H,N] = 0,\qquad [H,I_{pj}] \neq 0.$$ If $\Omega_{jk}=\Omega_{kj}=0,\; \forall\; j \neq k$, we have additional conserved quantities, $[H,J_{j}]=0$, with $J_j = N_{aj} + N_{bj},\;j=1,\ldots,n,$ and the total number of atoms can be written as $N = \sum_{j=1}^n J_j$. The state space is spanned by the basis $\{|n_{a1},\ldots,n_{an},n_{b1},\ldots,n_{bn}\rangle\}$ and we can write each basis vector as $$|n_{a1},\ldots,n_{an},n_{b1},\ldots,n_{bn}\rangle = \frac{1}{\sqrt{\prod_{j=1}^n n_{aj}!n_{bj}!}}\prod_{j=1}^n (a_{j}^{\dagger})^{n_{aj}}(b_{j}^{\dagger})^{n_{bj}}|0\rangle, \label{state2}$$ where $|0\rangle = |0_{a1},\ldots,0_{an},0_{b1},\ldots,0_{bn}\rangle$ is the vacuum vector state in the Fock space. We can use the states (\[state2\]) to write the matrix representation of the Hamiltonian (\[H3\]). The dimension of the space increases rapidly with $N$, $$d = \frac{(L -1 +N)!}{(L-1)!N!},$$ where $L = 2n$ is the total number of states in both wells and $N$ is a constant $c$-number, $N = \sum_{j=1}^{n}(n_{aj} + n_{bj})$. In the case where we have only two states [@GSantosaa] (one in each well, $n=1$) the dimension is $d = N + 1$. 
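This counting formula (stars and bars) is easy to verify by brute-force enumeration; the sketch below (plain Python; the helper `hilbert_dim` is ours, not from the paper) counts the Fock states with fixed total atom number $N$ over $L$ modes and compares with the closed form, including the two special cases quoted above.

```python
from math import comb
from itertools import product

def hilbert_dim(L, N):
    """Count occupation tuples (n_1, ..., n_L) with n_1 + ... + n_L = N."""
    return sum(1 for occ in product(range(N + 1), repeat=L)
               if sum(occ) == N)

# closed form d = (L - 1 + N)! / ((L - 1)! N!) for several small cases
for L, N in [(2, 5), (4, 6), (6, 4)]:
    assert hilbert_dim(L, N) == comb(L - 1 + N, N)

# L = 4 (two wells, two states each): d = (N + 3)(N + 2)(N + 1)/6
N = 6
assert comb(3 + N, N) == (N + 3) * (N + 2) * (N + 1) // 6

# L = 2 (one state in each well, n = 1): d = N + 1
assert hilbert_dim(2, 7) == 8
```

Even for modest atom numbers the growth is steep (e.g. $L=4$, $N=20$ already gives $d = 1771$), which is why the Bethe ansatz solution is valuable compared with direct diagonalization.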
Now we use the co-multiplication property of the Lax operators to write, $$L(u) = L_{1}^{\Sigma a}(u + \sum_{j=1}^{n}\omega_{j})L_{2}^{\Sigma b}(u - \sum_{j=1}^{n}\omega_{j}). \label{LH2}$$ Following the monodromy matrix (\[monod\]) we can write the operators, $$\begin{aligned} \pi(A(u)) &=& (uI+I\sum_{j=1}^n \omega_j + \eta\sum_{j=1}^n N_{aj})(uI - I\sum_{j=1}^n \omega_j + \eta\sum_{j=1}^n N_{bj}) \nonumber \\ &+& \sum_{j,k=1}^n s_jt_kb_j^{\dagger}a_k, \label{piAn}\\ \pi(B(u)) &=& (uI + I\sum_{j=1}^n \omega_j + \eta\sum_{j=1}^n N_{aj})\sum_{j=1}^n t_jb_j + \eta^{-1}\zeta\sum_{j=1}^n t_ja_j, \label{piBn}\\ \pi(C(u)) &=& (\sum_{j=1}^n s_ja_j^{\dagger})(uI - I\sum_{j=1}^n \omega_j + \eta\sum_{j=1}^n N_{bj}) \nonumber \\ &+& \eta^{-1}\zeta\sum_{j=1}^n s_jb_j^{\dagger}, \label{piCn}\\ \pi(D(u)) &=& \sum_{j,k=1}^n s_jt_ka_j^{\dagger}b_k + \eta^{-2}\zeta^2 I. \label{piDn}\end{aligned}$$ Taking the trace of the operator (\[LH2\]) we get the transfer matrix $$\begin{aligned} t(u) & = & u^2I + u\eta N + [\eta^{-2}\zeta^{2} - \sum_{j,k=1}^{n}\omega_{j}\omega_{k}]I \nonumber\\ & + & \eta(\sum_{j=1}^{n}\omega_{j})\sum_{j=1}^{n}(N_{bj} - N_{aj}) + \eta^{2}\sum_{j,k=1}^{n}N_{aj}N_{bk} \nonumber \\ & + & \sum_{j,k=1}^{n}s_{j}t_{k}(a_{j}^{\dagger}b_{k} + b_{k}^{\dagger}a_{j}). 
\label{tu2} \end{aligned}$$ From (\[C14b\]) we identify the conserved quantities of the transfer matrix (\[tu2\]), $$\begin{aligned} \mathcal{C}_0 & = & [\eta^{-2}\zeta^{2} - \sum_{j,k=1}^{n}\omega_{j}\omega_{k}]I \nonumber\\ & + & \eta(\sum_{j=1}^{n}\omega_{j})\sum_{j=1}^{n}(N_{bj} - N_{aj}) + \eta^{2}\sum_{j,k=1}^{n}N_{aj}N_{bk} \nonumber \\ & + & \sum_{j,k=1}^{n} s_{j}t_{k}(a_{j}^{\dagger}b_{k} + b_{k}^{\dagger}a_{j}), \end{aligned}$$ $$\begin{aligned} \mathcal{C}_1 &=& \eta N,\end{aligned}$$ $$\begin{aligned} \mathcal{C}_2 &=& I.\end{aligned}$$ The Hamiltonian (\[H3\]) is related with the transfer matrix (\[tu2\]) by the equation, $$H = u^2 I + u\mathcal{C}_1 + \frac{\alpha}{\eta^2}\mathcal{C}_1^2 + [\eta^{-2}\zeta^{2} - \sum_{j,k=1}^{n}\omega_{j}\omega_{k}]I - t(u), \label{H4}$$ where we have the following identification between the parameters, $$\alpha = U_{aajj} = U_{bbjj}, \qquad 2\alpha = U_{aajk} = U_{bbjk} \; (j\neq k),$$ $$2\alpha - \eta^2 = U_{abjk},$$ $$\eta(u - \sum_{j=1}^{n}\omega_{j}) = \epsilon_{aj} - \mu_{j},\qquad \eta(u + \sum_{j=1}^{n}\omega_{j}) = \epsilon_{bj} + \mu_{j}$$ $$\Omega_{jk} = s_j t_k, \qquad \sum_{j=1}^{n} s_j t_j = \zeta, \qquad j,k = 1,\ldots ,\; n.$$ We use as pseudo-vacuum the product state, $$|0\rangle = |\{0\}_{n}\rangle_{a}\otimes|\{0\}_{n}\rangle_{b},$$ with $$|\{0\}_{n}\rangle_{p} = |0_{1},0_{2},\ldots,0_{n}\rangle_{p},$$ belonging to the direct sum space associated to the states and denoting the Fock vacuum state for the well $p$ $(p = a,b)$. For this pseudo-vacuum we can apply the algebraic Bethe ansatz method in order to find the Bethe ansatz equations (BAE), $$\begin{aligned} \frac{\eta^{2}(v^2_{i} - \sum_{j,k=1}^{n}\omega_{j}\omega_{k})}{\zeta^{2}} & = & \prod_{j \ne i}^{N}\frac{v_{i}-v_{j}-\eta}{v_{i}-v_{j}+\eta}, \;\;\;\;\; i,j = 1,\ldots , N. 
\label{BAE2}\end{aligned}$$ The eigenvectors $\{ |v_1,v_2,\ldots,v_N\rangle \}$ of the Hamiltonian (\[H3\]) or (\[H4\]) and of the transfer matrix (\[tu2\]) are $$|\vec{v}\rangle \equiv |v_1,v_2,\ldots,v_N\rangle = \prod_{i=1}^N \pi(C(v_i))|0 \rangle,$$ and the eigenvalues of the Hamiltonian (\[H3\]) or (\[H4\]) are, $$\begin{aligned} E(\{ v_i \}) & = & u^2 + u\mathcal{C}_1 + \frac{\alpha}{\eta^2}\mathcal{C}_1^2 + \eta^{-2}\zeta^{2} - \sum_{j,k=1}^{n}\omega_{j}\omega_{k} \nonumber \\ & - & (u^2 - \sum_{j,k=1}^{n}\omega_{j}\omega_{k}) \prod_{i=1}^{N}\frac{v_{i} - u -\eta}{v_{i} - u} \nonumber \\ & - & \eta^{-2}\zeta^{2}\prod_{i=1}^{N}\frac{v_{i} - u +\eta}{v_{i} - u}, \end{aligned}$$ where the $\{v_i\}$ are solutions of the BAE (\[BAE2\]) and $N$ is the total number of atoms. The spectral parameter $u$ can be chosen arbitrarily. Summary ======= We have introduced a new family of two-well models with an arbitrary number $n$ of on well states in each well, and derived the Bethe ansatz equations and the corresponding eigenvalues. These models were obtained through a combination of Lax operators constructed using the Heisenberg-Weyl Lie algebra. An interesting aspect of these models is that no selection rules for tunnelling between the on well states are present; the atoms can thus tunnel to the same or to different on well states. We believe that the models proposed, as they can furnish precise results on physical quantities such as the energy gap, entanglement and ground-state fidelity, have potential applications in studies such as those involving quantum metrology [@drummond; @oberth; @campbell] in the context of ultracold atoms. Also, the obtained Hamiltonians may be useful in the context of NMR techniques [@nmr1; @oliv; @oliv2] as an alternative to the usual nuclear quadrupole Hamiltonian. 
Acknowledgments {#acknowledgments .unnumbered} =============== The authors acknowledge Capes/FAPERJ (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior/Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro) and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) for financial support.
--- abstract: 'We report and discuss $JHK_S$ photometry for Sgr dIG, a very metal-deficient galaxy in the Local Group, obtained over 3.5 years with the Infrared Survey Facility in South Africa. Three large amplitude asymptotic giant branch variables are identified. One is an oxygen-rich star with a pulsation period of 950 days that was, until recently, undergoing hot bottom burning, with $M_{bol}\sim-6.7$. It is surprising to find a variable of this sort in Sgr dIG, given their rarity in other dwarf irregulars. Despite its long period the star is relatively blue and is fainter, at all wavelengths shorter than $4.5 \mu$m, than anticipated from period-luminosity relations that describe hot bottom burning stars. A comparison with models suggests it had a main sequence mass $M_i \sim 5\, M_{\odot}$ and that it is now near the end of its AGB evolution. The other two periodic variables are carbon stars with periods of 670 and 503 days ($M_{bol}\sim-5.7$ and $-5.3$). They are very similar to other such stars found on the AGB of metal-deficient Local Group galaxies and a comparison with models suggests $M_i \sim 3\, M_{\odot}$. We compare the number of AGB variables in Sgr dIG to those in NGC6822 and IC1613, and suggest that the differences may be due to the high specific star formation rate and low metallicity of Sgr dIG.' author: - | Patricia A. Whitelock$^{1,2}$, John W. Menzies$^1$, Michael W. Feast$^{2,1}$ and Paola Marigo$^3$\ $^1$ South African Astronomical Observatory, P.O.Box 9, 7935 Observatory, South Africa.\ $^2$ Department of Astronomy, University of Cape Town, 7701 Rondebosch, South Africa.\ $^3$ Department of Physics and Astronomy G. Galilei, University of Padova, Vicolo dell'Osservatorio 3, I-35122 Padova, Italy.
bibliography: - 'pawbib.bib' title: 'A Remarkable Oxygen-rich Asymptotic Giant Branch Variable in the Sagittarius Dwarf Irregular Galaxy' --- [galaxies]{} Introduction {#intro} ============ The Sagittarius Dwarf Irregular Galaxy (Sgr dIG) is one of the outermost members of the Local Group. It is a particularly difficult galaxy to observe as the field ($l=21^{\rm o}$, $b=-16^{\rm o}$) is dominated by foreground stars from the Galactic disk and bulge. Nevertheless, it has come under increasing scrutiny in recent years, primarily because of its extremely low metallicity (the most recent measurement being $ \rm [Fe/H]=-1.88^{+0.13}_{-0.09}$ [@Kirby2017], although earlier estimates were lower), which makes it more representative of the early universe than other systems in which individual stars can be studied. The oxygen abundance measured from H[ii]{} regions is also low, with $\rm 12+\log (O/H)$ in the range 7.26 to 7.50 [@Saviane2002].\ The young blue stars are concentrated towards the centre of Sgr dIG while the redder stellar populations are more widely distributed and the H[i]{} gas covers a much bigger volume [@Beccari2014; @Higgs2016 and references therein]. The star formation history [@Weisz2014] suggests an extended period of star formation and is similar to that of other dwarf irregulars, although it has the highest gas fraction of any galaxy in the Local Group and a particularly high specific star formation rate, making it one of the fastest growing galaxies in the Local Group according to @Kirby2017. While it appears to be isolated it shows signs of interaction [@Higgs2016], including dusty AGB candidates in its outer regions [@Cook1988; @McQuinn2017].\ @Demers2002 identified C stars via narrow band optical photometry and @Gullieuszik2007 via $JHK_S$ photometry. Most of these will be AGB stars. 
@Boyer2015a [@Boyer2015b] identified a number of upper AGB candidates on the basis of their positions in a Spitzer (3.6 and 4.5 $\mu$m) colour-magnitude diagram and/or their variability. For the Sgr dIG @Momany2005 give $E(B-V)=0.12 \pm 0.05$ and find $(m-M)_0=25.10\pm0.11$, @Beccari2014 use $E(B-V)=0.107 \pm 0.10$ and get $(m-M)_0=25.16\pm0.11$, while @Higgs2016 and @McQuinn2017 find $(m-M)_0=25.36\pm0.15$ and $25.18\pm0.04$, respectively; all estimates are from the red giant branch tip (RGBT). In the following we assume a distance modulus of 25.2 and an interstellar extinction of $A_V=0.34$, which results in $A_J=0.1$, $A_H=0.06$ and $A_K=0.04$, but our findings are not sensitive to the interstellar extinction. Where we make use of relations defined for the LMC we assume its distance modulus to be 18.5. This work forms part of a broad study of AGB variables in Local Group galaxies which so far has covered the dwarf spheroidals: Leo I [@Menzies2002; @Menzies2010], Phoenix [@Menzies2008], Fornax [@Whitelock2009] and Sculptor [@Menzies2011], as well as two dwarf irregulars, NGC 6822 [@Whitelock2013] and IC1613 [@Menzies2015]. Stars up to about $10\, M_{\odot}$ are thought to undergo AGB evolution, where they reach very high luminosities while undergoing large amplitude pulsations. Dredge-up on the thermally pulsing AGB will increase the abundance of atmospheric carbon, turning normal O-rich stars into C-rich stars once C/O exceeds unity. The lower the initial O-abundance, the more rapid the transition to a C-rich star will be. However, it is now clear that AGB stars above a certain mass will undergo hot bottom burning (HBB), in which the dredged-up carbon is burned to nitrogen, so the stars become O-rich again. The HBB process produces additional luminosity, so these stars will be brighter than the core-mass luminosity relation predicts.
Unfortunately, even the broad details of nucleosynthesis, dredge-up and HBB, as well as their dependence on initial metallicity, remain very uncertain [@Doherty2017; @Karakas2017]. Even the upper limit to the mass range for AGB evolution is uncertain, and is usually quoted at around 8 to 12 $M_{\odot}$ for the most massive super-AGB stars. Understanding these stars is vital for establishing the mass boundary between stars that will produce supernovae and those that end as white dwarfs, and its dependence on abundance. It is therefore of particular interest to isolate luminous large-amplitude variables in a variety of environments with the objective of identifying HBB and/or super-AGB stars that can be studied in more detail. Mira Period Luminosity Relation (PLR) {#Mira_PLR} ===================================== In addition to finding luminous large amplitude variables, one of the main objectives of our studies is calibrating and testing the PLR in different environments, in particular for Miras, because these are easily identified as large amplitude luminous stars that potentially rival Cepheids as distance indicators [@Feast2013; @Whitelock2013b; @Whitelock2014 and other works in our series on Local Group galaxies]. We expect increasing interest in these variables and the PLR following commissioning of the James Webb Space Telescope and with the use of adaptive optics in the next generation of ground-based extremely large telescopes. Early work on the PLR [e.g., @Feast1989; @Hughes1990] was based on observations of LMC Miras and showed a clear relation at near-infrared wavelengths, as well as for the bolometric magnitude. Separate relations were derived for O- and C-rich variables, although the differences between these were small. For distance scale purposes the $K$ PLR appears to be the best; the amplitude is less than at shorter wavelengths, as are the effects of interstellar and circumstellar reddening.
It was clear even at this early stage that longer period ($P>420$ days) O-rich Miras are brighter at $K$ than a linear PLR fitted to shorter period stars would predict. @Wood1999 showed that the Mira sequence represents fundamental pulsation and that semi-regular AGB variables also obey PLRs, but many of them pulsate in harmonics instead of, or as well as, in the fundamental mode. This work has subsequently been consolidated and extended by several other groups. It is well known that the period ($P$) of a radially pulsating star is a function of its density ($\rho$), and therefore of its current mass, through the relation $P\sqrt{\rho} = \mathrm{constant}$. Observations of globular clusters [@Feast2002] and the kinematics of Miras [@Feast2000; @Feast2006] indicate that $P$ is also a function of the star’s initial mass, from which we can deduce that, at least among short period Miras, there can be little evolution of period once a star becomes a Mira (i.e. a fundamental mode pulsator). @Wood2015 discussed the possible extent of evolution within the PLR. The fundamental (Mira) PLR can be understood as the locus of the end points of AGB evolution of stars of different initial mass, with the period a function of both the initial mass and the current mass. The PLR can also be understood as a consequence of the core-mass luminosity relation. As more Miras were discovered via infrared surveys [e.g., @Cioni2001; @Ita2004; @Menzies2010; @Menzies2008; @Menzies2011; @Menzies2015; @Whitelock2009; @Whitelock2013] it became clear that many C-rich Miras fall below the $K$ PLR, even at relatively short period. This is a consequence of high circumstellar reddening, and almost all of these reddened stars do fall on the bolometric PLR, as our cited work on Local Group galaxies shows. @Whitelock2003 suggested that the bright long period O-rich stars are actually HBB and this is why they are more luminous than the core-mass luminosity relation would predict.
They further suggested that very long period Miras, i.e., OH/IR stars with $P>1000$ days, lie on the bolometric PLR, as they are no longer HBB (the bolometric luminosities of these very dusty stars are difficult to establish with very limited mid-infrared data). The first part of this conclusion, that the stars are HBB, seems sound. The second part, that the very long period stars fall on the extrapolated PLR, is not entirely consistent with new work that includes more detailed mid-infrared studies. @Ita2011 discuss the LMC PLRs at various wavelengths from the near- to the mid-infrared and conclude that there is a kink in the PLRs between 400 and 500 days, such that the relation is steeper in the longer wavelength range. More recently @Yuan2017 have discussed essentially the same LMC data (and similar data for M33) and fitted a quadratic PLR as a better representation of the slope change. The difference between the @Yuan2017 quadratic and the two straight lines of @Ita2011 is not significant in most practical terms, given the uncertainty associated with measured magnitudes. The clear interpretation of the multiple linear PLRs, in terms of fundamental and harmonic oscillations [@Wood1999; @Wood2015], would make it surprising if the PLR were non-linear over the short period range. At longer periods it is not clear how HBB affects the structure of the star, and hence the pulsation period as well as the luminosity. Therefore it is not yet possible to predict whether the PLR should be linear or not. @Riebel2015 derived PLRs for multiply observed (so that mean magnitudes can be discussed) Magellanic Cloud Miras at $[3.6]$ and $[4.5]$, but omitted the so-called extreme AGB stars (x-AGB). These x-AGB stars have thick dust shells and are variously defined as those with $J-[3.6]>3.1$, $J-K>2$ or $[3.6]-[4.5]> 0.15$ [@Riebel2015; @Boyer2015a]. @Whitelock2017 used the @Riebel2015 data, but included the x-AGB stars in their PLR analysis.
For C-rich Mira variables they found a more complex relation with a colour dependency. It is beyond the scope of this paper to discuss this in more detail, and further work is needed to understand and characterise the behaviour of the longer period Miras. It seems likely we may have to separate those with periods significantly above 400 days in ways that depend on other parameters. In the following we discuss the AGB variables in the same terms as we have in our earlier work on Local Group galaxies while using the @Ita2011 and @Yuan2017 relationships to deal with HBB stars. ![Colour-magnitude diagram for Sgr dIG. Red symbols show known or suspected C stars, the green symbol is the large amplitude variable, V1. Note that V2 and V3 are not shown as we do not have $J$ for them, but both would be off scale to the right[]{data-label="fig_cm"}](SagDIG_cm.eps){width="8.5cm"} ![Two colour diagram for Sgr dIG. Coloured symbols as in Fig. \[fig\_cm\]. Note that V2 and V3 are not shown as we do not have $J-H$ for them, but both would be off scale; for V3 $H-K_S=1.56$. []{data-label="fig_jhhk"}](SagDIG_jhhk.eps){width="8.5cm"} Photometry ========== Observations were made with the SIRIUS camera on the InfraRed Survey Facility (IRSF) at Sutherland. The camera produces simultaneous $J$, $H$ and $K_S$ images covering a $7.68 \times 7.68$ arcmin square field with a scale of 0.45 arcsec/pixel. The Sgr dIG is sufficiently small to require only one pointing per image centred at $\alpha(2000.0)=$19:29:58.0, $\delta(2000.0)=$-17:40:41.0. The aim of the observational series was to find long-period variables; observations were made at 17 epochs spread over 3.5 years. For this field, 10 dithered images were combined after flat-fielding and dark and sky subtraction. Exposures were of either 40 or 60 seconds’ duration, depending on the seeing and on the brightness of the sky in the $K_S$ band. 
Photometry was performed using DoPHOT in ‘fixed-position’ mode, using the best-seeing $H$-band image as a template. Small differences in pointing amongst the images meant that the area in common to them all for photometry is only $7.2 \times 7.2$ arcmin square. Aladin was used to correct the WCS on each template and RA and Dec were determined for each measured star. This allowed a cross-correlation to be made with the 2MASS catalogue [@Cutri2003], and photometric zero points were determined by comparison of our photometry with that of 2MASS. Fig. \[fig\_cm\] shows the $K_S-(J-K_S)$ diagram and Fig. \[fig\_jhhk\] the $(J-H)-(H-K_S)$ diagram for stars with standard deviations less than 0.2 mag in each band. These diagrams have not been corrected for interstellar reddening, which will amount to only $\Delta (J-H)=0.04$, $\Delta (H-K_S)=0.02$ and $\Delta (J-K_S)=0.06$. As anticipated, most of the stars are foreground. The [trilegal]{} [@Girardi2005] code (version 1.6) indicates that we would expect about 500 to 520 foreground stars from the Galaxy with $11<K_S<17.5$ (typically all with $J-K_S<1.1$), compared with the 730 actually observed in this area. This is fewer foreground stars than we might expect given that @Gullieuszik2007 estimate contamination using a nearby field, away from Sgr dIG, and conclude that the stars brighter than $K_S = 19.5$ and bluer than $J-K_S = 1.1$ are most probably foreground and that all the stars redder than this limit are candidate C stars belonging to Sgr dIG. The RGBT will be at $K_S\sim 19.6$. All stars with $J-K_S > 1.1$ are brighter than this and therefore probably on the AGB; they fall on the red plume in Fig. \[fig\_jhhk\].
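The 2MASS-based zero-point step described above amounts to a robust average of the magnitude offsets of cross-matched stars. A minimal sketch of that step, assuming a sigma-clipped median (the function and the toy data are ours, not the actual pipeline):

```python
import numpy as np

def zero_point(instr_mag, ref_mag, clip=3.0, iters=5):
    """Zero point as the sigma-clipped median offset between instrumental
    magnitudes and reference (e.g. 2MASS) magnitudes of matched stars."""
    diff = np.asarray(ref_mag) - np.asarray(instr_mag)
    for _ in range(iters):              # iterative sigma clipping
        med, sig = np.median(diff), np.std(diff)
        keep = np.abs(diff - med) < clip * sig
        if keep.all():
            break
        diff = diff[keep]
    return np.median(diff)

# Toy check: recover a known 2.5 mag offset in the presence of
# 0.03 mag Gaussian scatter and a few badly matched stars.
rng = np.random.default_rng(0)
ref = rng.uniform(12.0, 15.0, 200)
instr = ref - 2.5 + rng.normal(0.0, 0.03, 200)
instr[:3] += 1.0                        # simulated mismatches
zp = zero_point(instr, ref)             # close to 2.5
```

The clipping keeps the estimate insensitive to the occasional blend or mismatch in the cross-correlation.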
Possible contaminants are brown dwarfs (the [trilegal]{} simulations produced a brown dwarf with colours and magnitudes that overlap with the C stars in one of fourteen simulations) and unresolved background galaxies (although their $J-H$ is usually slightly smaller than that of C stars with the same $H-K_S$ [@Whitelock2009]). The potential contamination of the red stars is very small and neither brown dwarfs nor background galaxies will be large amplitude variables. A comparison with IC1613, a closer dwarf irregular (figs. 3 and 4 of @Menzies2015), shows that the brightest Sgr dIG source is slightly redder and brighter than the four HBB sources in IC1613, while the other Sgr dIG sources fall in the region of small amplitude C-rich variables.

  JD           mag     err    mag     err    mag     err
  ---------- ------- ------ ------- ------ ------- ------
  1037.120    15.60   0.05   17.46   0.06   15.70   0.05
  2851.998    16.06   0.05   17.25   0.14   17.21   0.10
  4002.087    16.37   0.05   18.28   0.10   16.17   0.05
  4341.033    15.48   0.05   17.34   0.09   16.94   0.05
  1036.999    16.07   0.10     –      –       –      –
  1038.052    16.12   0.10     –      –       –      –
  3186.247    15.76   0.05     –      –       –      –
  3554.278    16.24   0.12     –      –       –      –
  5755.278    16.18   0.08     –      –       –      –
  ---------- ------- ------ ------- ------ ------- ------

\[tab\_extraV\]

Archival near-infrared photometry --------------------------------- @Gullieuszik2007 searched for carbon stars in Sgr dIG using $V$, $J$ and $K_S$ photometry and their colour-magnitude diagram looks similar to ours, but goes considerably deeper[^1], as can be seen in Fig. \[fig\_Gull\]. The stars redder than $J-K_S = 2.0$ are too faint for us to measure in all colours, though most are visible on our $K_S$ images. The bright star indicated as a blend in Fig. \[fig\_Gull\] is our M-type variable, V1, discussed in detail below.
@Gullieuszik2007 claim that V1 was blended on their images, presumably on the basis of a large SHARP parameter in their DAOPHOT photometry, though they do not show the distribution of this parameter for Sgr dIG. We therefore obtained the images used by @Momany2005, who did not give any photometry for the star, from the HST archive. To extend the time span for calculation of the periods of the variables, we also obtained the SOFI $K_S$ and $H$ images from the ESO archive. Aperture photometry was performed on the variables discussed below and the magnitudes were put on the same scale as our photometry. Incidentally, we obtained essentially the same result for $K_S$ as @Gullieuszik2007 for V1 (number 21 in their table 4). The extra photometry is shown in Table \[tab\_extraV\]. ![Colour-magnitude diagram of @Gullieuszik2007. The curved line is the colour magnitude relation for carbon stars from @Totten2000 and the box shows the area in which C stars were found according to the selection criteria of @Gullieuszik2007. 
[]{data-label="fig_Gull"}](SagDIG_Gull.eps){width="7cm"}

  ------------ ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- ----------
  JD                                   V1                                    V2                            V3
  $-2450000$    $J$    err($J$)    $H$    err($H$)   $K_S$   err($K$)   $K_S$   err($K$)    $H$    err($H$)   $K_S$   err($K$)
  2352.497     16.87     0.02     15.86     0.01     15.45     0.02       –        –       18.00     0.11     16.55     0.06
  2440.578     16.88     0.02     15.92     0.01     15.45     0.01     17.28     0.15     17.51     0.06     15.97     0.05
  2441.751     16.93     0.02     15.93     0.01     15.48     0.02     17.40     0.11     17.44     0.07     16.10     0.05
  2507.259     17.01     0.02     16.08     0.01     15.63     0.01     16.98     0.11     17.41     0.07     16.03     0.04
  2524.299     17.00     0.02     16.12     0.01     15.72     0.02     16.56     0.07     17.50     0.08     15.96     0.04
  2775.579     17.53     0.03     16.45     0.02     15.95     0.02     16.59     0.10     18.46     0.12     16.66     0.06
  2808.437     18.11     0.14     16.83     0.08     16.18     0.06       –        –         –        –         –        –
  2809.348     17.63     0.04     16.47     0.02     16.07     0.03     17.39     0.11     18.23     0.14     16.77     0.07
  2882.312     17.58     0.01     16.51     0.01     16.18     0.02     17.32     0.09     18.16     0.09     16.70     0.07
  3173.407     16.72     0.01     15.78     0.01     15.34     0.01     16.65     0.06     18.38     0.12     16.82     0.06
  3236.497     16.73     0.01     15.80     0.01     15.33     0.01     16.80     0.07     18.76     0.11     16.95     0.06
  3259.218     16.81     0.01     15.86     0.01     15.39     0.01       –        –       18.58     0.11     16.92     0.06
  3260.220     16.77     0.01     15.86     0.01     15.41     0.01     16.66     0.07     18.77     0.13     16.88     0.06
  3262.261     16.76     0.01     15.88     0.01     15.40     0.01     16.86     0.08     18.46     0.12     16.91     0.07
  3293.239     16.88     0.01     15.94     0.01     15.49     0.01     16.35     0.08     18.33     0.11     16.73     0.07
  3532.437     17.23     0.03     16.22     0.01     15.76     0.02       –        –       17.53     0.07       –        –
  3616.307     17.52     0.01     16.44     0.01     15.98     0.01     17.43     0.13     18.04     0.07     16.38     0.04
  ------------ ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- ---------- ------- ----------

\[tab\_Var\]

Variable Stars {#var_stars} ============== We are limited by sensitivity, and only one large amplitude variable was found amongst our photometric sample. However, we performed aperture photometry on our $K_S$ frames at the positions of the reddest stars from the @Gullieuszik2007 sample, their numbers (GRCHH) 2, 9, 14, 19, 24 and 29.
Both GRCHH14 and GRCHH19 have large amplitudes but are not obviously periodic, GRCHH9 may be a small amplitude irregular variable, while GRCHH29 is not visible on any of our $K_S$ frames. GRCHH2 and 24 appear regular with large amplitudes. The three large amplitude variables are listed in Table \[tab\_var2\], together with mid-infrared photometry from Spitzer [@Boyer2015a; @Boyer2015b]. The spectral type for the M star comes from Boyer et al. (2017 in preparation) and is based on HST near-infrared colours. @Demers2002 classified V3 (their C05) as carbon-rich using narrow-band colours, while @Gullieuszik2007 classified V2 (GRCHH24) and V3 (GRCHH2) as C-rich on the basis of their near-infrared colours. The variables V1, V2 and V3 are, respectively, about 27, 58 and 104 arcsec from the centre of the galaxy (19:29:59.0 –17:40:41), as can be seen in Fig. \[fig\_Varchart\]. ![Finding chart for the 3 variable stars using the template $K_S$ image. The variables are circled, and the centre of the galaxy is marked by a cross. The field shown is 7.68 arcmin square. []{data-label="fig_Varchart"}](Var_chart.eps){width="8.5cm"} In their table 3, @Boyer2015b list eight suspected variables in Sgr dIG, including our V2 and V3, as well as GRCHH 14 and 29, which were discussed above. Of the other four, two are faint ($[3.6]>19$) and one of those two is 5.5 arcmin from the centre of the galaxy, while a third suffered from column pull-down in the Spitzer imaging; the eighth source is quite faint on the $K_S$ images, is not obviously red and has a close companion. None of these is a strong candidate Mira variable. V1 is the brightest of six stars that @Boyer2015b list (their table 4) that are not obviously variable, but which are x-AGB candidates on the basis of $[3.6]-[4.5]$. The first 4 of these correspond to GRCHH 6, 7, 12 and 19. GRCHH 19 was discussed above; GRCHH 6, 7 and 12 are relatively blue ($J-K_S\lesssim 1.3$) and only moderately bright ($K_S \gtrsim 17.3$).
None of these three is a likely Mira candidate. The fifth star (DUSTiNGS 50071) is a Mira candidate, but can be seen to have a close red companion in the HST images, which will confuse the $K_S$ photometry. For the Mira candidates, observations taken within 4 days of each other were averaged to provide a single measurement and periods were determined as in our earlier work [@Whitelock2009]. The resulting $JHK_S$ Fourier means (mag), peak-to-peak amplitudes of the Fourier-fitted light curves ($\Delta$mag), and periods (P) are listed in Table \[tab\_varsP\], where N epochs were used for the quoted data and ‘redC’ is the reddening-corrected magnitude. The statistical errors in the period are very small, and systematic effects, such as long-term variations in the mean light level, suggest a conservative uncertainty of less than 10%. The M star: V1 {#Mstar} -------------- The light curves of the M star appear asymmetric (see Fig. \[fig\_Var1\]), with the rising branch steeper than the falling branch, and for this reason the Fourier fits were made to second order. This should not be over-interpreted: the observations clearly cover two falling branches, but we have no measurements over the time period during which the star brightens. Doing the period analysis from the data in Table \[tab\_Var\] together with the additional $H$ and $K_S$ photometry from Table \[tab\_extraV\] results in periods for V1 of 946 days at $H$ and 1055 days at $K$. These values are strongly influenced by the isolated observations, so we can be confident that this star has a period of the order of 1000 days, but a much longer time series is required to establish the exact value. For the discussion below we use a period of 950 days derived from the IRSF $K_S$ data.
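The Fourier means and peak-to-peak amplitudes quoted in Table \[tab\_varsP\] come from fits of this general kind. The sketch below is our own simplified stand-in, not the actual code: it fits a second-order Fourier series at a fixed trial period by linear least squares; a period search would repeat the fit over a grid of trial periods and keep the best chi-squared.

```python
import numpy as np

def fourier_fit(jd, mag, period, order=2):
    """Fit mag(t) = m0 + sum_k [a_k cos(2*pi*k*t/P) + b_k sin(2*pi*k*t/P)]
    at a fixed trial period. Returns the Fourier mean magnitude and the
    peak-to-peak amplitude of the fitted curve."""
    phase = 2.0 * np.pi * np.asarray(jd, float) / period
    cols = [np.ones_like(phase)]
    for k in range(1, order + 1):
        cols += [np.cos(k * phase), np.sin(k * phase)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols),
                               np.asarray(mag, float), rcond=None)
    # evaluate the fitted curve on a fine grid over one cycle
    t = np.linspace(0.0, period, 2000)
    model = np.full_like(t, coef[0])
    for k in range(1, order + 1):
        model += (coef[2 * k - 1] * np.cos(2.0 * np.pi * k * t / period)
                  + coef[2 * k] * np.sin(2.0 * np.pi * k * t / period))
    return coef[0], model.max() - model.min()

# Toy light curve loosely resembling V1: mean K_S = 15.74,
# peak-to-peak 0.75 mag, P = 950 d, sampled at 25 epochs over ~3 cycles.
jd = np.arange(25) * 117.0
mag = 15.74 + 0.375 * np.cos(2.0 * np.pi * jd / 950.0)
mean_mag, amp = fourier_fit(jd, mag, 950.0)
```

Because the model is linear in the Fourier coefficients at fixed period, each trial period costs only one least-squares solve.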
  ------ ---- ------------ ----------- ------- --------- --------- ----
  name    G   RA           Dec         Dusti   \[3.6\]   \[4.5\]   sp
  V1     21   19:29:57.9   –17:40:17   44334   14.83     14.17     M
  V2     24   19:29:56.1   –17:41:21   47717   14.89     14.13     C
  V3      2   19:30:06.3   –17:40:55   29075   14.34     13.84     C
  ------ ---- ------------ ----------- ------- --------- --------- ----

  : The variable stars. (G is the GRCHH number and Dusti is the DUSTiNGS number from @Boyer2015a.)

\[tab\_var2\]

![$JHK_S$ light curves of V1, arbitrarily phased at a period of 950 days; each point is plotted twice to emphasise the variability. The solid line shows the best-fit second-order curve at that period. This plot shows only the data from Table \[tab\_Var\].[]{data-label="fig_Var1"}](fig5.eps){width="8.5cm"} ![The $K_S$ light curves of V2, arbitrarily phased at a period of 670 days, and the $H$ and $K_S$ light curves of V3, phased at 503 days. Each point is plotted twice to emphasise the variability and the solid lines show the best-fitting sine curve at the given period.[]{data-label="fig_Var2"}](fig6.eps){width="8.5cm"} As noted above, @Gullieuszik2007 find V1 to be blended. It is vital to establish to what extent this is the case if we are to interpret our photometry correctly. Figure \[fig\_stamps\] shows the region around V1 as seen in the 2003 HST archival images in the \[814\], \[606\] and \[475\] bands. ![image](Part_010.eps){width="2.8cm"} ![image](Part_030.eps){width="2.8cm"} ![image](Part_020.eps){width="2.8cm"} In the @Momany2014 catalogue, there are measurements in 3 bands for 11 stars inside a circle of diameter 3 arcsec centred on the variable. Using these data we estimate that the neighbouring stars contribute only about 0.1 mag to the $J$ mean magnitude of the variable, and probably somewhat less to $K_S$, so its position in the $K_S$, $J-K_S$ (Fig. \[fig\_cm\]) diagram is little affected by these stars. Curiously, the catalogue contains no photometry for V1.
We have carried out aperture photometry on the star on the HST frames and find \[475\], \[606\] and \[814\] magnitudes of $\ge28.0$, 24.34 and 20.62, respectively. The facts that the amplitude (Table \[tab\_varsP\]) at $J$ is larger than at $K_S$, and that $J-K_S$ is reddest when $K_S$ is faintest (Fig. \[fig\_Var1\]), both support the conclusion that the photometry is not confused by any nearby bluer source. For stars of this colour the correction from the 2MASS system to the SAAO system [@Carpenter2001][^2] is essentially zero, so correcting only for reddening we have $K_0=15.7$. The PL relation from [@Whitelock2008] (with LMC $(m-M)_0=18.5$), $M_K=-3.51\log P +1.09$, would give $M_K=-9.36$. So at the distance of Sgr dIG we should expect $K_0=15.84$. This agreement is surprising and possibly coincidental, unless this star has recently stopped HBB as suggested below (see sections \[discus\] and \[theory\]).

  ------- ------- ------------- ---- -------- ------- ---------------- ---------------- ----------------
           mag    $\Delta$mag   N    Period   redC    $m_{bol1}$       $m_{bol2}$       $M_{bol}$
  V1                                 950              –                $18.50\pm0.15$   $-6.70\pm0.2$
  $J$     17.14   0.90          13   914      17.04
  $H$     16.15   0.70          13   933      16.09
  $K_S$   15.74   0.75          13   953      15.70
  V2                                                  19.45            $19.53\pm0.20$   $-5.67\pm0.25$
  $K_S$   17.18   1.21          15   670      17.14
  V3                                                  19.86            $19.46\pm0.2$    $-5.74\pm0.25$
                                                                       $19.01\pm0.10$   $-6.19\pm0.15$
  $H$     18.02   1.16          16   504      17.96
  $K_S$   16.46   0.96          19   503      16.47
  ------- ------- ------------- ---- -------- ------- ---------------- ---------------- ----------------

\[tab\_varsP\]

The C stars: V2, V3 {#Cstar} ------------------- For V2 we only have the $K_S$ time series, combining the data from Tables \[tab\_extraV\] and \[tab\_Var\], and the colours from @Gullieuszik2007, as the star was too faint for us to measure at shorter wavelengths. The same applies to V3, although we also have $H$ magnitudes for this bluer star.
It is possible to estimate the bolometric magnitudes for these stars using our Fourier mean $K_S$ magnitudes and the $(J-K_S)$ colours from @Gullieuszik2007, noting that they observed V2 close to minimum and V3 close to maximum light and that their photometry is on the LCO system [see @Carpenter2001]. $K_S$ and $(J-K_S)$ are converted to $K$ and $(J-K)$ on the SAAO system following Carpenter (see section \[Mstar\]), and then used to estimate the bolometric correction and magnitude following @Whitelock2006. These can be compared to the bolometric magnitudes predicted by the PL relation (equation A1, $M_{bol}=3.89-3.31\log P$, from [@Whitelock2009], after correcting the LMC distance modulus from 18.39 to 18.5), noting the caveats in section \[Mira\_PLR\]. V2 (GRCHH 24) is the reddest of the stars in table 4 of @Gullieuszik2007, for which they quote $J-K_S=4.147$. We use $J-K_S \sim 4.0$ as the estimated mean 2MASS colour and derive SAAO-system reddening-corrected values of $K_0=17.13$ and $(J-K)_0=4.3$, leading to $m_{bol}=19.45$ and $M_{bol}=-5.75$, compared to $M_{bol}=-5.46$ predicted by the PL relation. V3 (GRCHH 2) has $J-K_S=2.360$ in table 4 of @Gullieuszik2007. We use $J-K_S \sim 2.4$ as its mean 2MASS value and derive $K_0=16.4$ and $(J-K)_0=2.5$ on the SAAO system, leading to $m_{bol}=19.86$ and $M_{bol}=-5.34$, compared to $M_{bol}=-5.05$ predicted by the PL relation. Thus both of the C stars are about 0.3 mag brighter than the linear PLR would predict. Given the uncertainty in the colour, the bolometric correction and the distance, this difference is not significant. It seems likely that V2 and V3 are the longest-period and most luminous examples of the C-rich Miras in Sgr dIG, and their initial masses are discussed in section \[theory\]. Short-period Miras may also be present.
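The PLR comparisons above are simple arithmetic; for concreteness, a sketch using the two relations quoted in this paper (the $K$-band relation of @Whitelock2008 and the bolometric relation of @Whitelock2009, both on an LMC distance modulus of 18.5) together with the adopted Sgr dIG distance modulus of 25.2:

```python
import math

DM_SGRDIG = 25.2   # adopted distance modulus of Sgr dIG (section 1)

def M_K(period_days):
    """Mira K-band PLR, M_K = -3.51 log P + 1.09 (Whitelock et al. 2008)."""
    return -3.51 * math.log10(period_days) + 1.09

def M_bol(period_days):
    """C-rich Mira bolometric PLR, M_bol = 3.89 - 3.31 log P
    (Whitelock et al. 2009, rescaled to LMC (m-M)0 = 18.5)."""
    return 3.89 - 3.31 * math.log10(period_days)

K_pred_V1 = M_K(950) + DM_SGRDIG   # ~15.84, vs the observed K_0 = 15.7
Mbol_V2 = M_bol(670)               # ~ -5.46
Mbol_V3 = M_bol(503)               # ~ -5.05
```

These reproduce the predicted values quoted in the text: 15.84 for V1 and $-5.46$ and $-5.05$ for V2 and V3.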
Flux Calculation and Bolometric Magnitudes
------------------------------------------

We have made use of all available photometry to construct flux curves for the variables. Apart from the IRSF $JHK_S$ measurements reported above, we have measured V1 and V2 on the HST images referred to above to give \[475\], \[606\] and \[814\] magnitudes for V1 and an \[814\] magnitude for V2. V3 is outside the field of the HST images. The $K_S$ magnitude for V2 was found from our photometry of the ESO images referred to above. Mid-infrared magnitudes were obtained from WISE photometry [@Cutri2012], and Spitzer \[3.6\] and \[4.5\] magnitudes from @Boyer2015a [@Boyer2015b]. The photometry was corrected for reddening using the curve from @Schlafly2016, adjusted to give $A_V/E(B-V)=3.23$. Conversion from magnitude to flux was carried out using zero-point data obtained from the Gemini web page (http://www.gemini.edu/sciops/instruments). Integration under the curves resulted in apparent bolometric magnitudes. Errors are determined on the basis of the catalogued WISE and Spitzer values. Further uncertainty, which is difficult to quantify, is introduced because the photometry longward of $2.2 \mu$m refers to different phases, and light curves are not available in those wavebands to allow the appropriate magnitudes to be used. The amplitudes of these dusty C stars at \[3.5\] and \[4.6\] are typically one magnitude, but can be more [see @Whitelock2017 fig. 1]. The normalised energy distributions of the variables are shown in Fig. \[fig\_flux\] and the bolometric magnitudes are given in Table \[tab\_varsP\]. This simple method of estimating the integrated flux gives a bolometric magnitude ($-5.7$, 19.53) for V2 that is close to the value derived in section \[Cstar\] from the bolometric correction ($-5.7$, 19.45). These values are also close to the expectation from the PL relation for C stars ($-5.46$, 19.74; see section \[Cstar\] and @Whitelock2009).
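The flux-integration step described above can be sketched as follows. Assumptions in this sketch: the band fluxes are already dereddened and converted to Jy with the zero points mentioned in the text, the SED is interpolated linearly between points (as in Fig. \[fig\_flux\]), and the IAU 2015 zero point of $2.518\times10^{-8}\,\rm W\,m^{-2}$ for $m_{bol}=0$ is adopted:

```python
import numpy as np

def apparent_mbol(wav_um, flux_jy):
    """Trapezoidal integration of a broadband SED, returning an apparent
    bolometric magnitude. wav_um: band wavelengths (micron); flux_jy:
    dereddened flux densities (Jy)."""
    c_um = 2.998e14                             # speed of light, micron/s
    nu = c_um / np.asarray(wav_um, dtype=float)   # band frequencies (Hz)
    f_nu = np.asarray(flux_jy, dtype=float) * 1e-26  # Jy -> W m^-2 Hz^-1
    order = np.argsort(nu)
    nu, f_nu = nu[order], f_nu[order]
    # trapezoidal rule = linear interpolation between points
    F = np.sum(np.diff(nu) * (f_nu[1:] + f_nu[:-1]) / 2.0)  # W m^-2
    # IAU 2015 zero point: m_bol = 0 corresponds to 2.518e-8 W m^-2
    return -2.5 * np.log10(F / 2.518e-8)
```

Brighter input fluxes give smaller (brighter) $m_{bol}$, and the result depends only on the integrated flux, mirroring the simple method used here.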
In the case of V3, the catalogued value of W3 is listed as an upper limit, so we estimated W3 from fig. 19 of @Nikutta2014, assuming an error of $\pm0.3$. The flux derived from the near-infrared and WISE data ($-5.7$, 19.46) is 0.4 mag brighter than that derived from the bolometric correction ($-5.3$, 19.86). Inexplicably, the Spitzer \[3.6\] and \[4.5\] magnitudes are about 1 mag and 0.7 mag brighter than the WISE W1 and W2 values; the mean phases of the two data sets are about the same, though the Spitzer data were obtained about one cycle later than the WISE ones. For V1, we find a bolometric magnitude of ($-6.70$, 18.50) from the flux integration. We do not have a calibration of bolometric correction with colour for stars with such unusual flux distributions. Its luminosity is discussed further in the next section.

![Flux curves for V1, V2 and V3 in Sgr dIG, normalised to maximum flux (usually in the W2 band). The continuous lines show a linear interpolation between points. For V3, the continuous lines connect the near-infrared points and the Spitzer points, while the dashed line connects the near-infrared points and the WISE points; see text for more details. The bands to which the data points refer are shown above the top plot.[]{data-label="fig_flux"}](Flux_SagDIG.eps){width="8.5cm"}

------ ----- ------------ ------- ------------- ------------- ------------- ----------------
name   P     $\Delta K$   $M_K$   $M_{[3.6]}$   $(J-K_S)_0$   $J_0-[3.6]$   $[3.6]-[4.5]$
3011   550   0.82         -9.20   -9.61         1.01          1.42          0.18
2035   530   0.75         -9.17   -9.66         1.13          1.62          0.09
1016   464   0.50         -9.33   -9.46         1.14          1.27          0.09
V1     950   0.75         -9.5    -10.37        1.34          2.21          0.66
------ ----- ------------ ------- ------------- ------------- ------------- ----------------

\[HBBstars\]

Discussion of V1 and the PLR {#discus}
============================

Given the unusual characteristics of this star as described by @Boyer2015a and Boyer et al. (2017 in preparation), i.e.
extreme AGB mid-infrared colours and magnitudes and the $\rm H_2O$ absorption characteristic of an M star, we reject any possibility that it is not the same star as that for which we have determined the period. A comparison with the colour-magnitude diagram for IC1613 [@Menzies2015] and isochrones from @Marigo2008 shows V1 to be where HBB variables are expected. Its position in the galaxy, within the volume occupied by young blue stars, is consistent with its being relatively massive, and the comparison with theory in section \[theory\] provides more insight. V1 is slightly brighter and redder than the IC1613 stars in the near-infrared, but considerably redder in $[3.6]-[4.5]$ (see Table \[HBBstars\]).

------------- ------- ------- ------- ------- ------- ------- -------
(1)           (2)     (3)     (4)     (5)     (6)     (7)     (8)
band          redC    Yuan+   I+M     S/R     Yuan+   I+M     S/R
V1
$K/K_S$       15.70   14.43   14.53   15.84   1.27    1.17    –0.14
$[3.6]$       14.83   13.92   14.22   15.03   0.91    0.61    –0.20
$[4.5]$       14.17   13.77   14.14   15.16   0.40    0.03    –0.99
$m_{bol}$     18.50   -       18.00   19.23   -       0.50    –0.73
IC1613 3011
$K/K_S$       15.17   15.22   15.32   15.85   -0.05   -0.15   -0.67
$[3.6]$       14.76   14.77   14.72   15.15   -0.01   0.04    -0.39
$[4.5]$       14.58   14.66   14.56   15.23   -0.08   0.02    -0.65
$m_{bol}$     18.33   -       18.19   19.19   -       0.14    -0.86
------------- ------- ------- ------- ------- ------- ------- -------

Columns (3)-(5) give the magnitudes predicted by the @Yuan2017 (Yuan+), @Ita2011 (I+M) and @Riebel2015 (S/R) PLRs (for $m_{bol}$ the S/R column uses the extrapolated @Whitelock2009 relation); columns (6)-(8) give the corresponding deviations, observed $-$ predicted.

\[PLRcomp\]

![The $K$ PLR showing V1 (red) and the four HBB stars (black) in IC1613 from @Menzies2015. The solid, dashed and dotted lines show, respectively, the PLRs from @Whitelock2008 (an extrapolation), @Ita2011 and @Yuan2017; all relations are for O-rich stars. The error bars show the range of variability. []{data-label="plk"}](plk.eps){width="8.5cm"}

The long period of V1 puts it in a part of the PLR where HBB is important, and the extrapolated SAAO relation [@Whitelock2008] falls significantly below those from @Ita2011 and @Yuan2017 (see section \[Mira\_PLR\]). This is illustrated in Fig.
\[plk\] which shows the various $K_S$ PLRs together with the data for V1 and the four HBB stars from IC1613. The error bars show the amplitudes of the sine curves that best fit the $K_S$ light curves, and are therefore slightly smaller than the observed spread (PLRs for Miras are sometimes shown with single observations rather than mean magnitudes, and it is important to appreciate the spread that arises simply from variability). Three of the four HBB Miras in IC1613 lie close to the bright PLRs, while the fourth is even brighter (and could possibly be an overtone pulsator). V1 is about 1.2 mag fainter than the bright PLRs and on the extrapolated SAAO PLR. The Spitzer photometry of V1 is the mean of two epochs (JD2455886 and JD2456089) [@Boyer2015a] and it was not recognised as variable. If the mid-infrared light curve shows the same behaviour as the near-infrared one, both the DUSTiNGS observations were obtained at about the same phase, corresponding approximately to mean light. Table \[PLRcomp\] lists the deviation of V1 from the various PLRs. It is brighter at \[3.6\] and \[4.5\] than the [@Riebel2015] relation (columns 5 and 8), which is a single straight line over the entire period range. Although it is slightly (0.4 mag) fainter than the @Yuan2017 PLR (columns 3 and 6), it is very close to the @Ita2011 (columns 4 and 7) relation at \[4.5\]. @Whitelock2017 discuss the \[3.6\] and \[4.5\] PLR relations for LMC and IC1613 AGB variables, noting that, for C stars, there is a spread at long periods which is a function of colour. Their sample of long period O-rich stars was rather small. It is informative to note the position of V1 in these PLR plots [@Whitelock2017 figs. 2 and 3] where, at the distance of the LMC, it would have \[3.6\]=8.1, \[4.5\]=7.5 and $K=9.0$; note that V1’s period is longer than that of any variable discussed by @Whitelock2017. At \[3.6\] it is not far from the extrapolated @Riebel2015 relation, while at \[4.5\] it is considerably brighter.
It can also be compared with @Whitelock2017 fig. 4, which shows AGB variables in IC1613, including the HBB stars discussed here, in a similar PL diagram. Clearly the PLR for long period variables is complex and requires further study. We can compare the bolometric magnitude given in Table \[tab\_varsP\] with the predictions of the @Ita2011 PLR (columns 4 and 7) and the linear extrapolation of the @Whitelock2009 PLR (columns 5 and 8). It lies between the two relations and, at $M_{bol}\sim-6.7$, is about half a magnitude brighter than the four HBB stars in IC1613, which have $M_{bol}$ between –6.0 and –6.3. This is to be expected, since the flux distribution (Fig. \[fig\_flux\]) shows that the bolometric magnitude will depend strongly on the mid-infrared flux. Table \[PLRcomp\], for comparison, tabulates the deviation from the PLRs of one of the IC1613 HBB stars [no. 3011 from @Menzies2015]. This star falls very close to both the @Yuan2017 and @Ita2011 relations at all wavelengths (see columns 6 and 7). The appendix (section \[appendix\]) to this paper describes a search for stars similar to V1 in the LMC (i.e. long periods, blue colours and low $K_S$ luminosities), but no convincing candidates are identified. Thus it seems that although V1 has many of the characteristics of a HBB Mira, it is distinctly different from such stars known in other galaxies.

Comparison with theoretical models {#theory}
==================================

Thermally pulsing (TP) AGB tracks are computed with the colibri code [@Marigo2013] for two values of the initial stellar mass ($3.0 \, M_{\odot}$ and $4.8 \, M_{\odot}$, see Fig. \[tracks\]) and initial composition with metallicity Z=0.0002 and helium abundance Y=0.249. This choice is suitable to represent the abundance ratio $\rm [Fe/H] = -1.88$, assuming a metallicity for the Sun of $Z=0.0152$ [@Caffau2011; @Bressan2012]. The evolution is followed up to the complete ejection of the envelope by stellar winds.
During the initial stages on the early AGB, mass loss is described with a modified version of the @Cranmer2011 formalism suitable for cold chromospheres. At larger luminosities, when radiation pressure on dust is expected to drive the wind, we adopt the @Bloecker1995 formula (with the efficiency parameter $\eta=0.05$) as long as the surface $\rm C/O<1$, and the routine provided by @Mattsson2010 for $\rm C/O>1$, based on dynamic atmosphere models for pulsating C stars. More details of the models will be provided in a forthcoming theoretical paper (Pastorelli et al., in preparation). TP-AGB models account for the changes in the envelope composition due to the occurrence of the third dredge-up and HBB. The effect of light reprocessing by circumstellar dust in the extended envelopes of mass-losing AGB stars is included following the approach of @Marigo2017. For C stars, in particular, we use the most recent updates of the dust-growth model and radiative transfer calculations presented by @Nanni2016. For models with $\rm C/O < 1$ we used the tables of bolometric corrections calculated by @Marigo2008, which are based on radiative transfer models presented by @Bressan1998. Pulsation periods for the fundamental mode are computed as a function of stellar parameters with the aid of the analytic fits in @Marigo2017, based on new models for long-period variables (Trabucchi et al., in preparation). The $3\, M_{\odot}$ model fits well with the observed photometric and pulsation properties of the two C stars, providing support for the calibration of the dust properties carried out by @Nanni2016. The current masses of the two C stars should be in the range $1.5-1.9\, M_{\odot}$. The M star is interpreted as an AGB star of high initial mass ($\sim 4.2 - 4.8 \, M_{\odot}$) which has a relatively low current mass ($\sim 1.1 -1.4 \, M_{\odot}$), in a phase of intense mass loss soon after the extinction of HBB.
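For orientation, the O-rich wind prescription mentioned above can be sketched as follows. This is a simplified stand-in, not the colibri implementation; the coefficients are the standard published Reimers (1975) and Bloecker (1995) values, with the paper's $\eta = 0.05$ as the default:

```python
def mdot_reimers(L, R, M, eta=0.05):
    """Reimers (1975) mass-loss rate in Msun/yr (L, R, M in solar units)."""
    return 4.0e-13 * eta * L * R / M

def mdot_bloecker(L, R, M, eta=0.05):
    """Bloecker (1995) AGB wind: a steep luminosity-dependent boost on the
    Reimers rate. At fixed radius the scaling is L^3.7; folding in the
    R(L) dependence of AGB envelopes gives the ~L^4.2-4.25 sensitivity
    discussed in the text."""
    return 4.83e-9 * M**-2.1 * L**2.7 * mdot_reimers(L, R, M, eta)
```

Doubling $L$ at fixed $R$ and $M$ raises the Bloecker rate by a factor $2^{3.7} \approx 13$, which is why bright low-metallicity HBB stars shed their envelopes so quickly.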
The third dredge-up is also quenched, due to the very low residual envelope mass ($0.2-0.4\, M_{\odot}$), so that the formation of a late C star is prevented. We recall that the adopted @Bloecker1995 formula has a strong dependence on the luminosity ($\propto L^{4.25}$). As a consequence, it predicts high mass-loss rates in bright low-metallicity HBB stars despite their relatively high effective temperatures. According to our present set of stellar models, the initial mass of the progenitor cannot be lower than $\sim 4.2\, M_{\odot}$, since below this mass limit TP-AGB stars are predicted to end their lives as C stars and they do not reach the high luminosity measured for V1. The model with $M_{\rm i} =4.8 \, M_{\odot}$ represents the upper limit for a star with $Z=0.0002$ to develop a degenerate C-O core and experience the AGB phase. Just above this mass limit we expect to have stars that enter the super-AGB phase after the end of the C-burning phase. We cannot exclude the possibility that V1 may be a super-AGB star, in which case its luminosity would suggest it is near the lower limit for such stars.

![Evolutionary tracks in colour-magnitude and period-luminosity diagrams (C stars in red and V1 in blue). Error bars show the variability range. AGB evolutionary tracks are shown for two choices of the initial mass as indicated and metallicity Z=0.0002. Stages characterised by surface $\rm C/O<1$ and $\rm C/O>1$ are coloured in blue and red, respectively. []{data-label="tracks"}](plot_agbmod.eps){width="8.5cm"}

The rather blue $JHK_S$ colours of V1 are likely the result of the low abundance of silicon (the key element for the formation of silicate grains), which limits the amount of silicate dust formed, as well as the relatively high effective temperatures of the models (4500-4000 K), which move the dust condensation radius to larger distances from the star. All this makes dust formation and reddening less efficient.
These aspects need a careful investigation through consistent dynamical atmosphere models [e.g. @Bladh2015]. The long period of 950 days is only reached during the final AGB stages, when the mass has been significantly reduced by stellar winds. So the star will soon leave the AGB and enter the post-AGB phase. The time-scales involved are all very short. Our calculations predict that the duration of the TP-AGB phase (from the first thermal pulse up to envelope ejection) of stars in the relevant initial mass interval ($4.2 - 4.8 \, M_{\odot}$) ranges from $\sim 7\times 10^4$ yr to $\sim 2.5\times 10^4$ yr. The last interpulse period, during which the mass-losing stars quickly accelerate towards very long periods (from $\sim 400$ days to $\sim 1000$ days) and then leave the AGB, is of the order of a few $10^3$ yr (from $\sim 9.7\times 10^3$ yr to $\sim 3.6\times 10^3$ yr).

Cepheids in Sgr dIG
===================

As we are suggesting that V1 is massive in AGB terms ($> 4 \, M_{\odot}$, see section \[theory\]), and given that short period Cepheids presumably evolve into long period Miras, it is worth considering whether Cepheids should also be detectable in Sgr dIG. With the IRSF we would have detected any large amplitude variables with mean $K_S<17.2$. A Cepheid with $K_S=17.2$ would have a period of 50 days [@Matsunaga2013] and a mass of about $9 \, M_\odot$ [@Anderson2016], i.e. considerably larger than is likely for V1. Such long period Cepheids are rare; there are only two in NGC6822 with P$>$50 days and none in IC1613, and both these galaxies are significantly more massive than Sgr dIG (as discussed in section \[LG\_comp\]). We could well expect Sgr dIG to contain Cepheids with periods of a few days, but these are too faint to detect with a 1.4-m telescope (a Cepheid with P=3 days would have $K_S\sim 21.5$). This is an illustration of the potential importance of Mira variables in the era of JWST and large ground-based telescopes optimised at infrared wavelengths.
Miras will be easily detected where individual stars of low or intermediate mass can be resolved. They should prove very useful probes of stellar populations as well as for the distance scale [@Feast2014; @Whitelock2014].

--------- ---------------------- ------------ --------------- ---- --------------- -----------------
name      M                      NO$_{HBB}$   P range         NC   P range         $N_{\rm other}$
          ($10^{6} M_{\odot}$)                (day)                (day)
NGC6822   100                    4(5)         545-638 (854)   50   182-998         27
IC1613    100                    4            464-580         9    263:-430(879)   $>9$
Sgr dIG   3.5                    1            950             2    503-670         2
--------- ---------------------- ------------ --------------- ---- --------------- -----------------

\[digs\]

Comparison with other dwarf Irregular galaxies {#LG_comp}
==============================================

AGB variables in NGC6822 and IC1613 were discussed by @Whitelock2013 and @Menzies2015, respectively. The numbers of large amplitude variables from those galaxies are compared with the numbers found in Sgr dIG in Table \[digs\], where M is the total visible mass of stars in the galaxy [from @McConnachie2012], NO$_{HBB}$ is the number of (presumed) O-rich Miras with periods over 450 days (i.e., those we can be reasonably certain are undergoing HBB), P range (col 4) is their range of periods, NC is the number of (presumed) C-rich Miras followed by their period range (col 6), and N$_{other}$ is the number of other large amplitude variables found for which no periods could be determined. The survey of NGC6822 is the most nearly complete, although even there a few C-rich Miras were missed (as discussed in section 6 of that paper) and there may be a very small number of Miras outside the area surveyed. The survey of IC1613 did not cover the full area of the galaxy and will be incomplete. We can estimate that the total number of C-rich Miras in the areas surveyed will be given by the sum of NC+$\rm N_{other}$, i.e. 77 and 18 for NGC6822 and IC1613, respectively.
We would not necessarily expect to find very long period Miras in these galaxies, but none of the surveys would have found very red stars with $P>1000$ days. It is also quite possible that some short period O-rich Miras, of the type found in Galactic globular clusters [e.g., @Feast2002], will have been missed in all three galaxies. Because the long period O-rich (HBB) Miras are from relatively massive progenitors, they will be confined to the central regions of their respective galaxies. This, together with their brightness, colour and large amplitude variations, suggests that the count of these stars should be complete for all three galaxies. The number for NGC6822 is given as either 4 or 5 because of the uncertain status of the longest period star in that galaxy. This may be a supergiant rather than an AGB star, and its low amplitude ($\Delta K_S \sim 0.4$) would support that interpretation. If it is an AGB star then it is very interesting and a candidate super-AGB star, with $M_K= -10.9$ and $M_{bol}\sim-8.0$; it is certainly worth further investigation. NGC6822 also has six lower luminosity O-rich Miras with periods between 158 and 402 days, which may be similar to globular cluster Miras. Thus for Sgr dIG we can be certain that V1 is the only HBB Mira in the galaxy, and a remarkable star it is. If we normalise the HBB variable count by the relative masses of NGC6822, IC1613 and Sgr dIG (Table \[digs\]) we expect, as we do indeed find, the same number in NGC6822 and IC1613. In Sgr dIG we expect 0.04, i.e. none at all. The galaxy masses are not accurate, but the existence of this star is nevertheless quite striking. As noted in section \[intro\], Sgr dIG has a particularly high specific star formation rate. It seems likely that V1 is an evolved AGB star with a relatively massive progenitor (section \[theory\]), but the situation is complex and beyond the scope of this discussion.
Turning now to the carbon stars, we normalise by the relative masses of NGC6822 and Sgr dIG and expect to find 2 or 3 C-rich Miras in Sgr dIG, where we actually find 2 definite plus 2 probable (there is at least one more possible candidate from @Boyer2015b, as discussed in section \[var\_stars\]). This may suggest that our survey of C-rich Miras is very nearly complete, although it is more likely that the star formation history is more complex than our simple comparisons assume. There are in any case many reasons for investigating Sgr dIG in more detail.

Conclusion
==========

Two large amplitude carbon Miras have been identified in Sgr dIG and, within the limits of our investigation, these are comparable to similar stars in other Local Group galaxies; they probably evolved from stars with main-sequence masses $M_i \sim 3\, M_{\odot}$. The single O-rich Mira has a particularly long period (950 days) and must have been undergoing HBB. Its observed characteristics are different from all other well-studied AGB variables. Although its mid-infrared colours suggest a dust excess, its near-infrared colours are rather blue. Its mean magnitudes do not tie in with the expectations of the PLRs which include HBB stars [@Ita2011; @Yuan2017] (perhaps because it has recently stopped HBB); in particular, it is faint at $K_S$ and bolometrically, although close to the @Ita2011 PLR at \[4.5\]. It would be useful to know more about its longer wavelength flux. A comparison with models suggests that V1 started with a main-sequence mass of around $4.8 \, M_{\odot}$ and is now in a very advanced, short-lived evolutionary state. The fact that we have only found a single star with these characteristics in combined surveys of the LMC, NGC6822, IC1613 and Sgr dIG makes a short-lived state plausible, but other alternatives should be considered. It is possible that V1 is in an interacting binary system.
Binary Miras are not so unusual, and symbiotic Miras have distinct differences from apparently solitary ones [@Whitelock1987; @Mohamed2012]. In any case this star is well worth further study, and monitoring for the changes which might be anticipated if it is in a short-lived evolutionary state. We suggest that the existence of this star and its unusual characteristics may be related to the nature of Sgr dIG, perhaps linked to its very high specific star formation rate, which increases the probability of finding a relatively massive AGB variable despite the low total stellar mass. The low metallicity of Sgr dIG is also significant; it is (along with Leo A) the most oxygen-poor galaxy in the Local Group [@Kirby2017]. The fact that the C-rich Miras are similar to those in other Local Group galaxies, but that the O-rich star is different, points to abundance as a factor. The low metallicity and low oxygen abundance will affect the efficiency of dust production and the type of dust grains that can form in the atmospheres of O-rich AGB stars. Abundances make much less difference to mass loss in C-rich stars, as their dust is formed from dredged-up carbon, and dredge-up is efficient at low metallicity [@Karakas2002]. Evidence to date [@Sloan2012 and references therein] indicates that in O-rich stars dust production is a function of metallicity, but in C stars it is not. The relative transparency of O-rich dust compared to C-rich dust in normal Galactic Miras has been discussed by @Bladh2015 and @Hofner2016, but it is not clear what the O-rich dust will be at very low abundances. Mass loss is driven through radiation pressure on dust, and thus mass-loss rates and opacities in Sgr dIG O-rich Miras will be very different from those in better studied AGB stars in the Galaxy and Magellanic Clouds.

Acknowledgments {#acknowledgments .unnumbered}
===============

MWF, JWM and PAW are grateful to the National Research Foundation of South Africa for research grants.
PM acknowledges the support from the ERC Consolidator grant, project STARKEY (G.A. 615604). This paper made extensive use of the CDS services, Aladin and VizieR. It also makes use of the following: data products from the Wide-field Infrared Survey Explorer (WISE), which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration (NASA); observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute (STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555); observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programmes 61.E-0273(A), 71.D-0560(A), 60.A-9205(A), 59.A-9004(D), 077.D-0423(B), 079.D-0482(A) and 60.A-9700(E); data obtained from the ESO Science Archive Facility under request numbers 284193, 284208, 284212, 284379, 284404, 284405, 284415, 286398, 286459, 286588, 286602 and 286607. The IRSF project is a collaboration between Nagoya University and the SAAO supported by the Grants-in-Aid for Scientific Research on Priority Areas (A) (no. 10147207 and no. 10147214) and the Optical & Near-Infrared Astronomy Inter-University Cooperation Program, from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan and the National Research Foundation (NRF) of South Africa. We are grateful for observational assistance provided by Noriyuki Matsunaga, Yoshifusa Ita, Enrico Olivier and Shogo Nishiyama. We also thank Martha Boyer and colleagues for allowing us to quote their HST result for V1 in advance of publication, and Martha for helpful discussion.
----- ------- ------------------ -------- -------- ------- --------- --------- ----- -------
LMC   OGLE    2MASS              $J$      $H$      $K$     $[3.6]$   $[4.5]$   P     $M_K$
1     5878    04534486-6857593   10.401   9.477    8.864   7.50      7.28      912   –9.64
2     38175   05175887-6939231   10.692   10.073   9.105   8.17      7.98      830   –9.39
3     11637   05010330-6854257   11.048   10.073   9.355   7.69      7.60      928   –9.14
----- ------- ------------------ -------- -------- ------- --------- --------- ----- -------

\[LMC\]

Appendix: LMC Sources that are potentially similar to V1 {#appendix}
========================================================

It is important to establish if there are sources elsewhere with similar characteristics to V1 in Sgr dIG. We therefore examined O-rich Miras in the LMC with periods over 600 days, as identified by OGLE [@Soszynski2009], together with $JHK_S$ photometry from 2MASS [@Cutri2003]. There are only nine stars with $K_S$ magnitudes that are 0.8 mag or more below the @Ita2011 PLR. Six of these are red ($(J-K_S)>1.9$) and therefore probably affected by circumstellar extinction. The other three are listed in Table \[LMC\] and are referred to below as LMC1, 2 and 3; they all fall close to V1 in Fig. \[plk\]. All three have good OGLE light curves showing the bumps on the rising branch that are common in luminous long period Miras [@Glass2003]. LMC2 is near the centre of the LMC and has been better studied than the other two sources. Although it was originally classified as a planetary nebula, Jacoby LMC 19 [@Jacoby1980], this was not confirmed [@Boroson1989] and it is clearly an AGB variable. It has additional published $JHK_S$ photometry [@Macri2015] that ranges in $K_S$ from 7.79 to 8.46 mag. This shows that the 2MASS measure was particularly faint, that the $K_S$ amplitude is $>1.3$ mag, and that the mean magnitude lies very close to the PLR. Multi-epoch IRSF photometry from Ita (private communication) confirms the period and indicates that $K_S \sim 9.1$ corresponds to minimum light.
LMC3 has $K_S=8.39$ and 8.65 in the DENIS catalogue, showing that the 2MASS value is probably nearer the minimum than the mean. The DENIS values for LMC1, $K_S=8.98$ and 8.86, are very similar to those from 2MASS. Thus we are left with only a single star in the LMC that might be like V1 in terms of its near-infrared character. The Spitzer magnitudes for LMC1 differ from the @Yuan2017 and @Ita2011 relations by 0.15 and $-0.13$ mag, respectively, at \[3.6\], and by 0.07 and $-0.26$ mag at \[4.5\], which suggests it is probably a normal HBB star. More observations are required to confirm this, but there are no strong candidates in the LMC that look like V1 in Sgr dIG.

[^1]: their $JK_S$ photometry is from the 3.58-m NTT, whereas ours is from the 1.4-m IRSF.

[^2]: update at: www.ipac.caltech.edu/2mass/releases/allsky/doc/sec6_4b.html
---
author:
- Jérôme Claverie
- 'Siham Kamali-Bernard'
- João Manuel Marques Cordeiro
- Fabrice Bernard
bibliography:
- 'new.bib'
title: 'Assessment of mechanical, thermal and surface properties of monoclinic M$_1$ and M$_3$ C$_3$S by molecular dynamics'
---

Introduction
============

The research in cementitious materials is experiencing new challenges, mainly due to the need to preserve the environment and save energy. To reduce the emissions and energy cost of production, alternative binders are under study [@Biernacki2017]. However, Ordinary Portland Cement (OPC) should continue to be employed for a long time, and understanding its principal constituent, alite, is essential for its improvement. As a matter of fact, even at high water/cement ratio ($w/c$), the total hydration of OPC is never quite achieved. Previous quantitative X-ray diffraction analyses estimated the amount of alite hydrated after 65 days for $w/c = 0.45$ [@Ash1993], and the hydration degree of cement paste has been obtained by a deconvolution technique applied to nanoindentation experiments at $w/c = 0.4$ [@Vandamme2010]. Computation of thermal and mechanical properties at the molecular scale can provide important information on the behaviour of OPC clinker and hydrated products. Furthermore, from the computation of surface energies, the shape of grains can be theoretically constructed for different polymorphs [@Azevedo2020]. Determination of such properties by experimental methods is often limited, especially in the case of surface energies [@Tran2016]. In all cases, a proper synthesis procedure of pure C$_3$S is necessary, and the determination of the amount of each polymorph in a sample is neither trivial nor accurate. In this work, the mechanical, thermal and surface properties of M$_1$ (the main phase of alite in industrial OPC [@Noirfontaine2012]) and M$_3$ C$_3$S were characterized by molecular dynamics (MD) simulations.
The atomistic systems investigated herein were built from the pure M$_1$ [@Noirfontaine2012] and M$_3$ [@Mumme1995] crystal structures (\[fig:unitcells\]). While the latter has already been used [@Manzano2015; @Mishra2013; @Manzano2011], this is the first time that an M$_1$ model has been employed with such computational techniques, despite its predominance in alite of Portland clinker with high content [@Maki1982a]. Regarding the atomic structural organization along the (010) direction and the $b$ parameters, the two cells are very close. However, the two models are shifted by 1/4 in cell units in the (010) direction, meaning that the (010), Ca-rich, plane of one model corresponds to the (040) plane of the other. The atomic organization along the (001) axis in the M$_1$ model is close to that of the M$_3$ model along (100), and the $c$ and $a$ parameters in M$_1$ and M$_3$, respectively, are almost equal. Conversely, the $a$ parameter of the M$_1$ unit cell is approximately 3 times larger than the $c$ parameter of M$_3$, and the major structural difference is expected in the (100) direction for M$_1$ and the (001) direction for M$_3$.

![image](unitcells_new){width="80.00000%"}

The present article is thus divided into three sections, namely: elastic properties, thermal properties, and cleavage energies and equilibrium shapes. Each section has an introduction, a description of the method employed, and a presentation and discussion of results. Finally, a conclusion summarizes the different findings.

Elastic properties
==================

Understanding the mechanical properties of C$_3$S is important for two reasons: 1) as mentioned above, the total hydration of a cement paste is never achieved, so clinker components are, to some extent, involved in the final microstructure of hydrated cement; and 2) to optimize the grinding of clinker during cement manufacturing.
The investigation of elastic properties of cementitious materials has mostly concerned hydrated products, and very few data can be found in the literature on the elastic properties of clinker components. Synthetic alite can be made by solid-state sintering of decarbonated calcium oxide and fine silica, with possible addition of impurities (alumina, magnesium, sulfates), depending on the polymorph to be reached [@Nicoleau2013]. The elastic properties are typically determined by nanoindentation experiments and, at the macroscale, by resonance frequency measurements [@Velez2001; @Brunarski1969]. Nanoindentation experiments are most of the time performed on hydrated mortar or cement paste [@Gao2017; @Ulm2007; @Vandamme2010], and more rarely on pure and doped clinker phases [@Velez2001]. Unhydrated clinker phases are known to exhibit stiffnesses a few times larger and hardnesses one order of magnitude larger than hydrated phases [@Velez2001; @Vandamme2010; @Manzano2009]. According to the generalized Hooke’s law, the elastic properties of anisotropic materials can be derived from the relation between the components of the stress and strain tensors $\sigma_{ij}$ and $\varepsilon_{kl}$: $$\sigma_{ij} = \sum_{k=1}^3 \sum_{l=1}^3 C_{ijkl} \varepsilon_{kl}$$ where $C_{ijkl}$ are the 81 components of the stiffness tensor. These components can be expressed as the derivative of the stress components $\sigma_{ij}$ with respect to the strain components $\varepsilon_{kl}$, or as second derivatives of the strain energy $\mathcal{U}$: $$C_{ijkl} = \pdv{\sigma_{ij}}{\varepsilon_{kl}} = \pdv{\mathcal{U}}{\varepsilon_{ij}}{\varepsilon_{kl}}$$ The elastic moduli $E_{i}$ along the different directions can be derived from the compliance tensor, defined as $\vb{S}_{ij} = \vb{C}_{ij}^{-1}$, with $E_{i} = S_{ii}^{-1}$. Considering the minor and major symmetries of the stiffness tensor, it is possible to reduce the number of independent stiffness constants to 21 for a general anisotropic material.
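The relation $E_{i} = S_{ii}^{-1}$ is straightforward to evaluate once a 6$\times$6 Voigt stiffness matrix is in hand; a minimal sketch:

```python
import numpy as np

def young_moduli(C_voigt):
    """Directional Young's moduli E_i = 1/S_ii from a 6x6 Voigt stiffness
    matrix, via the compliance matrix S = C^-1."""
    S = np.linalg.inv(np.asarray(C_voigt, dtype=float))
    return 1.0 / np.diag(S)[:3]
```

As a sanity check, an isotropic matrix built from Lamé constants $\lambda=\mu=1$ (so $C_{11}=3$, $C_{12}=1$, $C_{44}=1$) returns the expected $E = \mu(3\lambda+2\mu)/(\lambda+\mu) = 2.5$ in all three directions.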
The more compact Voigt notation halves the number of indices of the stiffness constants, and the number of independent components is reduced to 13 for a monoclinic crystal: $$\begin{bmatrix} \sigma_{11} \\ \sigma_{22} \\ \sigma_{33} \\ \sigma_{23} \\ \sigma_{13} \\ \sigma_{12} \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} & C_{13} & 0 & C_{15} & 0 \\ C_{12} & C_{22} & C_{23} & 0 & C_{25} & 0 \\ C_{13} & C_{23} & C_{33} & 0 & C_{35} & 0 \\ 0 & 0 & 0 & C_{44} & 0 & C_{46} \\ C_{15} & C_{25} & C_{35} & 0 & C_{55} & 0 \\ 0 & 0 & 0 & C_{46} & 0 & C_{66} \end{bmatrix} \begin{bmatrix} \varepsilon_{11} \\ \varepsilon_{22} \\ \varepsilon_{33} \\ \gamma_{23} \\ \gamma_{13} \\ \gamma_{12} \end{bmatrix}$$ Elastic properties of solids are generally computed by applying a strain or a stress in the desired directions and by determining the stress-strain or strain-energy relations. Two types of methods are used and discussed in this work: static optimization methods and time-integration methods. Static optimization methods are typically applied at , or where anharmonic vibrations can be neglected, although lattice vibration frequencies can be included through quasi-harmonic approximation techniques [@Qomi2017]. In the so-called static method, a small strain $\Delta \varepsilon_j$ is applied positively and negatively in each direction $j$: $$\begin{split} C_{ij}^+ = - \frac{\sigma_i(\Delta\varepsilon_j) - \sigma_i(0)}{\Delta\varepsilon_j} \\ C_{ij}^- = \frac{\sigma_i(-\Delta\varepsilon_j) - \sigma_i(0)}{\Delta\varepsilon_j} \end{split}$$ The stiffness constants are obtained by averaging $C_{ij}^+$ and $C_{ij}^-$ and the symmetric constants: $$C_{ij} = \frac{C_{ij}^+ + C_{ij}^- + C_{ji}^+ + C_{ji}^-}{4} \label{eq:stiffness_constants}$$ This method allows quick calculations, minimizing the energy of the system before and after application of a small strain $\Delta \varepsilon$. However, it provides neither the stress-strain behavior nor a prediction of the failure point.
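The averaging of \[eq:stiffness\_constants\] is straightforward to script; the sketch below (not the authors' code, with hypothetical synthetic stresses) illustrates the scheme, including the sign convention of the $C_{ij}^{\pm}$ definitions above:

```python
# Sketch of the static finite-difference scheme of eq:stiffness_constants.
# sigma_plus[j][i] / sigma_minus[j][i]: i-th stress component measured after
# straining by +/- d_eps in direction j (hypothetical inputs).
def stiffness(sigma_plus, sigma_minus, sigma0, d_eps):
    n = len(sigma0)
    cp = [[-(sigma_plus[j][i] - sigma0[i]) / d_eps for j in range(n)]
          for i in range(n)]
    cm = [[(sigma_minus[j][i] - sigma0[i]) / d_eps for j in range(n)]
          for i in range(n)]
    # average C+, C- and the symmetric constants
    return [[(cp[i][j] + cm[i][j] + cp[j][i] + cm[j][i]) / 4 for j in range(n)]
            for i in range(n)]

# synthetic check: linear response with a known symmetric 2x2 stiffness matrix
C = [[4.0, 1.0], [1.0, 3.0]]
d = 0.01
sp = [[-C[i][j] * d for i in range(2)] for j in range(2)]
sm = [[C[i][j] * d for i in range(2)] for j in range(2)]
C_rec = stiffness(sp, sm, [0.0, 0.0], d)  # recovers C
```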
This computational scheme can be extended by applying the strain in several steps, followed by an energy minimization after each step. The stiffness constants are then obtained by linear regression over the desired strain range. For a system of particles with a volume $V$, the stress components can be computed as the sum of the kinetic and virial terms over the $N$ particles: $$\sigma_{ij} = \frac{\sum^N_k m_k v_{ki} v_{kj}}{V} + \frac{\sum^N_k r_{ki} f_{kj}}{V} \label{eq:stress_atom}$$ where $i$ and $j$ are the directions $x$, $y$ and $z$; $m_k$, $r_{ki}$ and $v_{ki}$ are the mass, position and velocity, respectively, and $f_{kj}$ is the force applied on particle $k$. In the case of a molecular mechanics (MM) optimization, the kinetic term is zero. Time-integration methods use equilibrium MD (EMD) or non-equilibrium MD (NEMD) to compute the deformation of the simulation box while controlling the stress, or vice versa. In EMD, the system is equilibrated before each production run, each run yielding one averaged data point. In NEMD, the strain, or the stress, is changed continuously during the run. This is convenient because only a single run is needed; however, the strain/stress rate may influence the result. The simulation can be either strain or stress controlled. In the first case, a strain rate is applied in the desired direction with a fixed stress (usually ) in the other directions. In the second case, a stress rate is applied in one direction while keeping the others at . The box is thus allowed to relax in the other directions. Methods ------- Force fields ------------ In this study, two force fields (FF) were employed to describe atomic interactions in : INTERFACE FF (IFF) and ClayFF.
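A direct transcription of \[eq:stress\_atom\] (a sketch, not the LAMMPS implementation; all names are illustrative):

```python
def virial_stress(masses, velocities, positions, forces, volume):
    """3x3 stress tensor as kinetic + virial sums over the particles
    (eq:stress_atom); in an MM optimization the velocities are zero and
    only the virial term contributes."""
    s = [[0.0] * 3 for _ in range(3)]
    for m, v, r, f in zip(masses, velocities, positions, forces):
        for i in range(3):
            for j in range(3):
                s[i][j] += (m * v[i] * v[j] + r[i] * f[j]) / volume
    return s
```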
The PCFF implementation of IFF, used here, includes quadratic bonded terms for covalent bonds in silicates, an electrostatic term and a 9-6 Lennard-Jones (LJ) potential for short-range interactions: $$\begin{gathered} {\cal{U}}= \sum_{ij}\sum_{n=2}^{4}K_{r,ij}(r_{ij}-r_{0,ij})^{n}+ \sum_{ijk}\sum_{n=2}^{4}K_{\theta,ijk}(\theta_{ijk}-\theta_{0,ijk})^{n}\\ +\frac{1}{4\pi\varepsilon_{0}\varepsilon_{r}}\sum_{ij}\frac{q_{i}q_{j}}{r_{ij}} +\sum_{ij}\varepsilon_{0,ij}\left[2\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{9}- 3\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right] \label{eq:potential}\end{gathered}$$ Conversely, ClayFF does not account explicitly for covalent bonding in silicates, but larger charges are used for silicon atoms (2.1e), compared with the IFF (1.0e). Its interaction potential is thus the sum of a 12-6 LJ potential and electrostatic interactions. A cutoff, and an Ewald summation with precision of were adopted for short-range and long-range interactions, respectively. ### Static calculations by MM The elastic properties of and were computed with the static MM method on unit cells using the LAMMPS simulation code [@Plimpton1995]. The enthalpy of the cell was first minimized at , allowing the cell parameters and the atoms to move freely. Then a deformation was applied in the desired direction, and the energy of the system was minimized, allowing the atoms to move while fixing the cell parameters. The process was repeated negatively and positively in each direction to calculate the 21 components of the stiffness matrix according to \[eq:stiffness\_constants\]. The unit cells experienced maximal deformations of , with increments of , but the values $C_{ij}^{\pm}$ were obtained by linear fitting on values from zero to deformation. Homogeneous values of the bulk and shear moduli for randomly oriented crystallites were obtained by calculating the Reuss and Voigt bounds (subscripted $R$ and $V$, respectively).
The Voigt-Reuss-Hill (or VRH) estimation for monoclinic crystals is obtained as the arithmetic average of the Voigt and Reuss bounds on the bulk ($K_V$, $K_R$) and shear moduli ($G_V$, $G_R$) [@Wu2007; @Fu2017; @Bernard2018]. ### MD calculations In order to determine elastic properties at finite temperature, supercells of 1296 atoms ( and for and respectively) were created from the unit cells presented in \[fig:unitcells\]. For each polymorph, three replicas were created by using different seeds for the initial velocities of atoms. Equilibration runs were performed during at in the NpT ensemble at hydrostatic pressures $\sigma$ varying between 0 and , followed by a production run of . A Nosé-Hoover thermostat and barostat were employed, and the Newton equations of motion were integrated with the Verlet algorithm. Long-range interactions were computed with an Ewald summation with precision of , and a cutoff of was applied for van der Waals interactions. The bulk modulus was calculated from the EMD simulations in the NpT ensemble with incremental equilibrium pressure. It is, by definition, the ratio between the variation of the hydrostatic pressure and the relative volume variation, at constant temperature: $$K = -V_0 \qty\Big(\pdv{P}{V})_T$$ In the NEMD calculation scheme, the supercells underwent runs of compression and traction at a strain rate of up to , and the stress components were computed from \[eq:stress\_atom\]. No noticeable influence was reported for rates one order of magnitude above and below . This is expected because the resulting dislocation velocity is for the largest dimension. This dislocation velocity is large when compared to macroscale tests, but is negligible compared to the velocity of acoustic waves in . Based on the values of the bulk modulus $K$, Poisson’s ratio $\nu$ and density $\rho$ from previous acoustic measurements, the compressive and shear wave velocities are calculated as and respectively [@Boumiz1997].
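Extracting $K$ from the incremental NpT runs reduces to a least-squares slope of $P$ against $V$; a minimal sketch with synthetic data (not the simulation output):

```python
def bulk_modulus(volumes, pressures):
    """K = -V0 dP/dV from a least-squares fit of P vs V; V0 is taken as
    the zero-pressure volume (the first sample here)."""
    n = len(volumes)
    vbar = sum(volumes) / n
    pbar = sum(pressures) / n
    slope = (sum((v - vbar) * (p - pbar) for v, p in zip(volumes, pressures))
             / sum((v - vbar) ** 2 for v in volumes))
    return -volumes[0] * slope

# synthetic P-V data with dP/dV = -100 (arbitrary units) -> K = 100
vols = [1.0, 0.99, 0.98, 0.97]
press = [0.0, 1.0, 2.0, 3.0]
```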
This ensures that atoms will respond instantaneously to the deformation of the simulation box [@Elder2016; @Teich-McGoldrick2012]. Elastic parameters were calculated by the direct relations, where $i \neq j$ are the $x$, $y$ and $z$ coordinates: $$\begin{aligned} E_{ii} &= \frac{\sigma_{ii}}{\varepsilon_{ii}} \notag \\ G_{ij} &= \frac{\sigma_{ij}}{\varepsilon_{ij}} \\ \nu_{ij} &= - \frac{\varepsilon_{ii}}{\varepsilon_{jj}} \notag\end{aligned}$$ Results ------- ### Static calculations The stiffness constants, homogenized elastic constants and directional elastic moduli of and , computed by the static MM method, are reported in \[tab:elastic\]. [\*5c]{} & &\ & IFF & ClayFF & IFF & ClayFF\ $C_{11}$ & & & &\ $C_{12}$ & & & &\ $C_{13}$ & & & &\ $C_{15}$ & & & &\ $C_{22}$ & & & &\ $C_{23}$ & & & &\ $C_{25}$ & & & &\ $C_{33}$ & & & &\ $C_{35}$ & & & &\ $C_{44}$ & & & &\ $C_{46}$ & & & &\ $C_{55}$ & & & &\ $C_{66}$ & & & &\ $K_{VRH}$ & 111.1 & 53.2 & 104.7 & 53.9\ $G_{VRH}$ & 66.7 & 33.5 & 54.3 & 33.5\ $E_{VRH}$ & 166.8 & 83.0 & 139.0 & 83.2\ $\nu_{VRH}$ & 0.250 & 0.240 & 0.279 & 0.243\ $E_{11}$ & 151.5 & 77.2 & 176.9 & 91.2\ $E_{22}$ & 180.6 & 96.2 & 174.6 & 71.7\ $E_{33}$ & 177.3 & 84.4 & 147.2 & 80.4\ $E_{44}$ & 66.2 & 34.4 & 36.3 & 32.7\ $E_{55}$ & 63.1 & 30.6 & 34.9 & 35.3\ $E_{66}$ & 64.8 & 31.7 & 66.5 & 31.8\ Velez et al. measured the elastic moduli of and by resonance frequency and nanoindentation, respectively [@Velez2001]. Boumiz et al. found the following values by acoustic measurements [@Boumiz1997]: $$\begin{aligned} K = \SI{105.2(5)}{\giga\pascal} \\ G = \SI{44.8(6)}{\giga\pascal} \\ E = \SI{117.6(8)}{\giga\pascal} \\ \nu = \num{0.314(17)} \end{aligned}$$ The values obtained with IFF agree relatively well with these experimental measurements. When compared to experimental results, ClayFF tends to underestimate the stiffness constants, and thus the elastic constants, by a factor of approximately 2.
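The homogenized $E_{VRH}$ and $\nu_{VRH}$ in \[tab:elastic\] follow from $K_{VRH}$ and $G_{VRH}$ through the standard isotropic relations $E = 9KG/(3K+G)$ and $\nu = (3K-2G)/(2(3K+G))$; a quick consistency check against the tabulated IFF values (agreement up to table rounding):

```python
def young_poisson(K, G):
    """Isotropic Young's modulus and Poisson's ratio from the bulk and
    shear moduli."""
    E = 9.0 * K * G / (3.0 * K + G)
    nu = (3.0 * K - 2.0 * G) / (2.0 * (3.0 * K + G))
    return E, nu

# first IFF column of tab:elastic: K = 111.1 GPa, G = 66.7 GPa
E, nu = young_poisson(111.1, 66.7)  # ~166.7 GPa and ~0.250, as tabulated
```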
This very probably results from the non-bonded nature of atomic interactions in ClayFF, which neglects the covalent nature of O-Si bonds in silicates, thus decreasing the stiffness. The values obtained with this force field are close to previous calculations on the same unit cell, obtained with the GULP code via the second derivative of the binding energy [@Shahsavari2016]. Employing the same code and computational method, the Young moduli, shear moduli, bulk moduli and Poisson’s ratios were determined on the model proposed by de la Torre et al. [@Torre2002], using a Buckingham force field [@Manzano2009] and ClayFF [@Shahsavari2016]. The homogenized elastic constants obtained with IFF are very close to those obtained by Manzano et al. with the Buckingham FF [@Manzano2009]. The homogenized elastic constants obtained for with this FF are smaller than for and close to recent results from DFT calculations on [@Laanaiya2019]. The lowest elastic modulus is obtained in the $x$ and $z$ directions for the and polymorphs, respectively. This result is expected because of the correspondence of the $c$ parameter of the unit cell with the $a$ parameter of the unit cell. The distributions of elastic moduli in space were generated with the ELATE open-source Python module [@Gaillac_2016]. The resulting shapes are illustrated in \[fig:elastic\_surface\]. ![image](iff_surface_new){width="80.00000%"} The polymorph reveals a much more anisotropic elasticity than the polymorph. ### EMD calculations EMD calculations were performed with the IFF at 0, 1, 2.5 and 5 . The bulk moduli obtained by linear fitting of the hydrostatic pressure with respect to the volume variation are and for the and polymorphs, respectively. A larger difference with the static MM calculation is observed for . This may be due to the fact that the model, which is not averaged, experienced a structural relaxation during the MD run.
As an important feature of the , the change in coordination between calcium cations (Ca) and oxygen atoms in silicates (Os) as a function of the hydrostatic pressure was analyzed by the radial distribution function (RDF) (see \[fig:rdf\_iso\]). ![Radial distribution function of Ca-Os pairs as a function of the hydrostatic pressure for , with IFF. The RDF obtained for the polymorph is very similar.[]{data-label="fig:rdf_iso"}](rdf_m3){width="\columnwidth"} The increasing hydrostatic pressure seems to influence the structure at short range (). The second coordination shell is flattened and shifted to shorter distances under pressure. The same behavior is observed for both polymorphs. ### Stress-strain behaviour by NEMD and MM calculations Next, the stress-strain curves of the NEMD simulations and of the static MM calculations are plotted in \[fig:strain\_nemd\]. The general behavior of the and polymorphs appears similar. However, the compression strength seems to be larger for the polymorph in the $x$ direction, and for the polymorph in the $z$ direction. The structural correspondence of the (001) direction for the unit cell with the (100) direction for the unit cell explains this result. The elastic moduli and Poisson’s ratios obtained from these NEMD simulations are presented in \[tab:el\_nemd\]. They are relatively close to values from previous stress-controlled NEMD simulations [@Mishra2013]. ![image](strain_md){width="80.00000%"} [\*4c]{} & & & $^a$ [@Mishra2013]\ $E_{11} = E_{xx}$ & & &\ $E_{22} = E_{yy}$ & & &\ $E_{33} = E_{zz}$ & & &\ $E_{44} = G_{yz}$ & & &\ $E_{55} = G_{xz}$ & & &\ $E_{66} = G_{xy}$ & & &\ $\nu_{12} =\nu_{xy} $ & & &\ $\nu_{13} =\nu_{xz} $ & & &\ $\nu_{21} =\nu_{yx} $ & & &\ $\nu_{23} =\nu_{yz} $ & & &\ $\nu_{31} =\nu_{zx} $ & & &\ $\nu_{32} =\nu_{zy} $ & & &\ The observed elastic behaviour is stiffer for MM than for MD, due to the larger structural relaxation induced by thermal motion and the dynamically increasing strain.
Nonetheless, the elastic properties obtained are relatively close to the results from static MM calculations. While the static method provides a good representation of elastic properties, it does not allow a proper assessment of the yield stress, since no relaxation is permitted in the transverse directions. This restriction causes hardening for negative strains. The yield stress can be assessed by enthalpy minimization, but such a calculation is not trivial and can easily get stuck in a local minimum, because the objective function changes while the box dimensions change. On the other hand, static calculations are very fast when compared with MD calculations. MM calculations in all directions on a unit cell were performed in 4 minutes on 4 Intel Xeon Gold 5120 CPUs @2.20 GHz, while an MD calculation in the $x$ direction only, on a supercell, was finalized in 3 hours and 28 minutes on 24 Intel Xeon Gold 5120 CPUs @2.20 GHz. Thermal properties ================== During its lifetime, concrete undergoes temperature changes. The thermal expansion and contraction of concrete as temperature increases and decreases is influenced by the aggregate type, the cement type, and the water-cement ratio. As explained in the introduction of this chapter, the complete hydration of cement is never reached. Thus, although the aggregate type has the larger influence on the expansion and contraction of concrete, the thermal properties of hydrated and dry cement are of great interest. Thermal cracking of concrete generally occurs during the first days after casting. During the exothermic hydration of cement, the temperature rises, and drops faster at the surface than in the bulk. The surface tends to contract with the cooling, and stress arises because the bulk remains hot, resulting in cracks. Naturally, this phenomenon is more likely to occur in larger volumes.
Considering a system of fixed volume being heated, the heat capacity is the partial derivative of its internal energy with respect to its temperature: $$C_V = \qty\Big(\pdv{E}{T})_V \label{eq:cv}$$ Because of their lower compressibility compared to gases, the internal energy of solids experiences a larger variation with volume. For this reason, their heat capacity is most of the time determined experimentally at constant pressure, as the partial derivative of the enthalpy with respect to temperature: $$C_p = \qty\Big(\pdv{H}{T})_p$$ Two methods are commonly used to compute the specific heat from a molecular dynamics simulation. In the canonical ensemble, one can derive the fluctuation relationship between the specific heat and the internal energy as follows: $$C_V = \pdv{\expval{E}}{T} = \pdv{\expval{E}}{\beta}\pdv{\beta}{T} = \frac{\expval{E^2} - \expval{E}^2}{k_B T^2}$$ with $\beta = 1/k_BT$, where $k_B$ is the Boltzmann constant; the last equality follows from $\pdv*{\expval{E}}{\beta} = -(\expval{E^2} - \expval{E}^2)$ in the canonical ensemble and $\pdv*{\beta}{T} = -1/k_B T^2$. From thermodynamical laws, the specific heat capacities $c_v$ and $c_p$ at constant volume and pressure, respectively, are related by the equation: $$c_p - c_v = T \frac{\alpha ^2}{\rho \beta} \label{eq:cp-cv}$$ where $\rho$ is the density, $\alpha = (1/V)(\pdv*{V}{T})_p$ is the thermal expansion coefficient, and $\beta = -(1/V)(\pdv*{V}{P})_T$ is the compressibility, the inverse of the bulk modulus $K$. Methods ------- ![image](vdos){width="80.00000%"} [\*6c]{} & Temperature () & $\alpha$ () & $\beta$ () & $\rho$ () & $c_p-c_v$ ()\ & 200 & & & &\ & 300 & & & &\ & 400 & & & &\ & 200 & & & &\ & 300 & & & &\ & 400 & & & &\ By using fluctuation methods, the specific heat can be computed at any temperature with a single, long enough run. However, these methods rely strongly on the temperature relaxation parameter used to thermostat the system (and, in the case of the NpT ensemble, on the pressure relaxation parameter) [@Hickman2016].
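Once $\alpha$, $\beta$ and $\rho$ are extracted from the NpT runs, \[eq:cp-cv\] is a one-line computation; the sketch below uses illustrative values, not the tabulated ones:

```python
def cp_minus_cv(T, alpha, rho, beta):
    """c_p - c_v = T * alpha^2 / (rho * beta), per eq:cp-cv.
    SI units: K, 1/K, kg/m^3, 1/Pa -> result in J/(kg K)."""
    return T * alpha ** 2 / (rho * beta)

# illustrative (assumed) values: T = 300 K, alpha = 4e-5 1/K,
# rho = 3150 kg/m^3, beta = 1/K with K = 100 GPa
dc = cp_minus_cv(300.0, 4e-5, 3150.0, 1.0 / 100e9)  # ~15.2 J/(kg K)
```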
Moreover, values obtained by the fluctuation method depend on the time interval used for block averages [@Hickman2016], which often leads to large uncertainties and to poor agreement with experimental results [@Wang2010a]. For this reason, the non-fluctuation, or direct, method is preferred. It consists in running several simulations at finite temperatures and calculating the time-averaged energy for each one. The specific heat is calculated by definition, as the slope of the total energy with respect to the temperature, at the desired temperature. The chosen temperature increment must be large enough to compute accurately the variation of energy between simulations, but small enough for the fit to be representative. The specific heat capacity can also be computed from the velocity autocorrelation function (VACF) of the atoms. Considering a solid made of quantum harmonic oscillators, the phonon density of states $g(\omega)$ is proportional to the Fourier transform of the velocity autocorrelation function of the atoms: $$g(\omega) = \frac{1}{3NkT} \int_{-\infty}^{+\infty} \sum^N_{i=1} \expval{\vb{v}_i(t) \vdot \vb{v}_i(0)} e^{i \omega t} \dd t$$ where $k$ is the Boltzmann constant, $N$ is the number of atoms and $T$ is the temperature of the system. The occupational states of phonons follow a Bose-Einstein distribution $f_{BE}$, and, at temperatures well below the Debye temperature, the total energy of the system can be reduced to the vibrational energy $E_v$ [@Qomi2015; @Atkins2009]: $$E_v = \int_0^{+\infty} \hbar \omega \qty(g(\omega) f_{BE}(\omega) + \frac{1}{2}) \dd \omega$$ The specific heat in $3Nk$ units is calculated from \[eq:cv\], as follows: $$c_v = \frac{\int_0^{+\infty} \cfrac{u^2 e^u}{(1-e^u)^2} g(\omega) \dd \omega}{\int_0^{+\infty} g(\omega) \dd \omega}$$ where $u = \hbar \omega / kT$. The specific heat and the expansion coefficient were computed on three replicas of supercells. The simulations were performed in the NpT ensemble with the same MD parameters as previously.
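The quantum weight $u^2 e^u/(1-e^u)^2$ in the last expression tends to 1 as $u \to 0$, so $c_v \to 1$ (in $3Nk$ units) at high temperature for any normalized DOS, and it vanishes at low temperature. This limit is easy to check numerically with a toy Debye-like DOS $g(\omega) \propto \omega^2$ (a sketch in reduced units, not the authors' post-processing):

```python
import math

def cv_from_dos(omegas, g, T, hbar=1.0, k=1.0):
    """c_v in 3Nk units: DOS-weighted average of u^2 e^u / (1 - e^u)^2,
    with u = hbar*omega/(k*T), evaluated as a simple Riemann sum."""
    num = den = 0.0
    for w, gw in zip(omegas, g):
        u = hbar * w / (k * T)
        weight = u * u * math.exp(u) / (1.0 - math.exp(u)) ** 2
        num += weight * gw
        den += gw
    return num / den

# toy Debye-like DOS up to a cutoff omega_D = 1 (reduced units)
ws = [0.001 * (i + 1) for i in range(1000)]
g = [w * w for w in ws]
```

At T well above the cutoff the weighted average returns essentially 1, while well below it the specific heat is strongly suppressed, reproducing the expected quantum behavior.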
The systems were relaxed during , and the data were collected for . Within the direct method, the specific heat $c_p$ was computed by linear fitting of the enthalpy with respect to the temperature at five points around the temperature of interest (e.g., 280, 290, 300, 310 and to compute the specific heat at ). The same method was employed to calculate the expansion coefficient, fitting the volume variation with respect to the temperature. To compute $c_v$ as a function of the VACF with no external influence of thermostating or barostating, the relaxed systems were equilibrated for in the NVE ensemble, before running simulations of , dumping the trajectory at each time step to be able to observe high vibrational frequencies. For the calculations using ClayFF, a geometric mixing rule for the LJ parameters was used in place of the original arithmetic mixing. Indeed, this mixing rule provides more accurate values of the density obtained during NpT simulations. The VACF were computed on ten correlation windows of on three replicas for each polymorph. Results ------- The phonon density of states (DOS) obtained from simulations in the NVE ensemble are presented in \[fig:vdos\]. The phonon DOS obtained for and are almost identical, as are the resulting specific heats. Thus the structural difference between the two polymorphs does not influence their thermal properties. However, the force field does affect the results. The main difference between the phonon DOS obtained with IFF and ClayFF lies principally in their description of bonds in silicates. In the case of IFF, where Si-O bonds are described by harmonic oscillators in addition to the short- and long-range description, the partial DOS (PDOS) of Os and Si atoms forms a sharp peak near , in agreement with infrared spectroscopy measurements [@Hughes1995], while the in-plane bending vibration of the Os-Si-Os angle is measured as a band below [@Hughes1995]. This bending contribution occurs at larger frequencies in our results (near ).
Wave numbers $\omega$ below are associated with stretching between calcium and oxygen atoms [@Yu2004; @Qomi2015], in agreement with the PDOS obtained with both IFF and ClayFF. For ClayFF, the purely non-bonded description of Si-Os bonds allows for more degrees of freedom of the atoms. The PDOS obtained for Si and Os atoms are thus broader. Generally, for the IFF, a shift of the DOS is observed towards higher vibrational frequencies. No significant variation of the DOS was observed between 200, 300 and . The error on the calculation of the isobaric specific heat $c_p$ is mostly related to the $c_p-c_v$ quantity, calculated from simulations in the NpT ensemble. The values of $c_p-c_v$ were calculated at 200, 300 and and extrapolated linearly, because this quantity varies proportionally with temperature (see \[eq:cp-cv\]). The expansion coefficient $\alpha$, the compressibility $\beta$, and the density $\rho$, computed from simulations in the NpT ensemble, are presented in \[tab:npt\_thermo\]. The specific heats $c_p$ obtained for and are plotted with respect to the temperature in \[fig:specific\_heat\]. ![Specific heat of and obtained by the direct and VACF methods, plotted as a function of the temperature. The transparent areas represent the error. A previously computed value from a VACF calculation, as well as a fit of experimental measurements [@Matschei2007] and direct measurements [@Todd1951], are plotted in addition to the results.[]{data-label="fig:specific_heat"}](specific_heat_new) The values obtained at by the direct method are and for and respectively, which is much larger than experimental measurements. As for the phonon DOS, no significant variation of $c_p$ was found between and . The VACF method with ClayFF results in $c_p = \SI{0.751(13)}{\joule\per\gram\per\kelvin}$, which is very close to the experimental values of [@Matschei2007] and [@Todd1951]. The IFF provided a value slightly lower than the experimental measurements: .
Cleavage energies and equilibrium shapes ======================================== [cc]{}\ & Cleavage energy ()\ (100) &\ (010) &\ (040) &\ (003) &\ (008) &\ (110) &\ (101) &\ (011) &\ (111) &\ [cc]{}\ & Cleavage energy ()\ (100) &\ (300) &\ (800) &\ (010) &\ (040) &\ (001) &\ (002) &\ (003) &\ ($00\bar{3}$) &\ (008) &\ ($00\bar{8}$) &\ ![image](shapes){width="80.00000%"} Surface energy is an important property of crystals, since it can be an indicator of reactivity and gives information on equilibrium shapes [@Tran2016]. A practical way to compute the surface energy is to use three-dimensional periodicity. This method consists in creating a box composed of a slab in contact with a vacuum gap. The thickness must be sufficient so that the properties of the center of the slab converge to those of the bulk, and the vacuum region needs to be large enough that the surfaces do not interact through the boundaries. The cleavage energy is defined as the energy necessary to split a crystal along a specific plane. It is therefore the average of the energies of the two surfaces created. The cleavage energy is commonly computed as the difference between the slab and bulk energies, divided by the generated surface area: $$E_{cleav} = \frac{E_{slab} - E_{bulk}}{2A} \label{eq:eq3-1}$$ where $E_{slab}$ and $E_{bulk}$ are the energies of the slab and bulk systems, respectively, with the same number of atoms and stoichiometry, and $A$ is the surface area of one side of the slab. Most of the time, the bulk energy computation by MD is performed by running simulations in the NpT ensemble on a supercell with periodic boundary conditions (PBC). Sun and Ceder documented some issues regarding the calculation of the bulk energy from \[eq:eq3-1\] [@Sun2013]. Non-convergence arises from the difference between the converged bulk energy and the incremental increase of the slab energy per layer. The authors performed surface energy computations calculating the bulk energy in different ways.
The two methods which most improved the convergence were: 1) the calculation on basis-transformed unit cells, eliminating inconsistencies in the Brillouin zone integration, and 2) the linear-fit relation introduced by Fiorentini and Methfessel [@Fiorentini1996], giving an average of the bulk energy computed over several slabs of different thicknesses: $$E_{slab}(N) = 2E_{surf} + NE_{bulk} \label{eq:fiorentini relation-1}$$ where $N$ is the number of layers forming the slab in the perpendicular direction. Another way, mostly used in MD, to compute the cleavage energy, which avoids any issue related to the computation of the bulk energy, is to subtract the computed energies of the unified and cleaved slabs, using the following equation: $$E_{cleav} = \frac{E_{cleaved} - E_{unified}}{2A} \label{eq:cleavage energy-2}$$ When the surfaces computed on both sides of the slab are symmetrically equivalent, the surface energy is equal to the cleavage energy. However, for most minerals, such an assumption does not hold for all Miller indices, and, among various existing methods, Manzano et al. [@Manzano2015] suggested dividing the slab in contact with a vacuum into two atomic groups and computing the surface energy for each of them. Both groups have one side exposed to the vacuum region and the other in contact with the other group. The surface energy of each side of the slab can be computed as follows: $$E_{surf} = \frac{E_{slab} - E_{bulk}}{A}\label{eq:surface energy-1}$$ Obviously, if the subject of the study is a reconstructed, electrically neutral slab, the surface energies computed for both subslabs are considered equal and correspond to the cleavage energy. One should note that the surface energies of asymmetrical, non-reconstructed surfaces can be computed by density functional theory or using a reactive force field.
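The Fiorentini-Methfessel relation amounts to a linear fit of slab energy against layer count; a sketch with synthetic energies (illustrative numbers, arbitrary units):

```python
def surface_energy_fit(layers, slab_energies):
    """Least-squares fit of E_slab(N) = 2*E_surf + N*E_bulk; returns
    (E_surf, E_bulk). E_surf here is per slab side and not yet divided
    by the surface area."""
    n = len(layers)
    xb = sum(layers) / n
    yb = sum(slab_energies) / n
    e_bulk = (sum((x - xb) * (y - yb) for x, y in zip(layers, slab_energies))
              / sum((x - xb) ** 2 for x in layers))
    e_surf = (yb - e_bulk * xb) / 2.0
    return e_surf, e_bulk

# synthetic slabs obeying E_slab = 2*1.5 + N*(-10.0)
layers = [4.0, 6.0, 8.0, 10.0]
energies = [2 * 1.5 + n * (-10.0) for n in layers]
es, eb = surface_energy_fit(layers, energies)  # recovers 1.5 and -10.0
```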
However, this is not in accordance with the classical electrostatic criteria stipulated by Tasker [@Tasker1979; @Noguera2000] and could lead to unrealistic charge distributions. In addition to the redistribution of the atomic charge on the two faces of the slab, other methods such as temperature gradients can be applied to reach more energetically favorable configurations, resulting in lower cleavage energies [@Fu2010; @Mishra2013]. Methods ------- The calculation of cleavage energies for multiple planes of the two monoclinic models under study involved the creation of cleaved and unified systems. For non-symmetric planes, a reorganization of superficial ions was performed to minimize superficial dipole moments. For each plane, five unified and cleaved systems were constructed with a random distribution of surface species. A vacuum was employed in the cleaved systems. Series of steep temperature gradients were applied to ions within the uppermost and lowermost atomic layers of the slabs in the unified and cleaved systems. This method was previously employed and allows the surfaces to relax to the lowest-energy configuration [@Fu2010; @Mishra2013]. The systems with the lowest energies were selected to perform the calculation over runs, after equilibration runs. We refer the reader to ref. [@Claverie2019] for more details on the method. Results and discussion ---------------------- The cleavage energy was computed classically from \[eq:cleavage energy-2\], and the results are given in \[tab:cleav\_m3\]. The computed values are in the range of 1.14 to ( on average) for , and 1.04 to ( on average) for , in good agreement with previous atomistic simulation studies [@Durgun2014; @Manzano2015]. In general, the average values are very close between the two models. The energies obtained for particular planes vary as a function of the structure of each polymorph, and are particularly influenced by the coordination between calcium cations and oxygen atoms in silicates.
The (100) direction of the polymorph corresponds to the (001) direction of the polymorph, and vice versa. For , the present calculation indicates that the lowest-energy plane is in the (100) direction, at from the origin. The (100) direction of the polymorph has many possible cleavage planes, and the plane for which the lowest energy was computed has no equivalent in the polymorph. The Wulff construction can give theoretical insights into the shape of a crystal at equilibrium. It is based on the assumption that a crystal grows in such a way that the Gibbs free energy of its surface is minimized. From the lowest energy obtained in each direction, the equilibrium shapes of and in \[fig:shapes\] were created with the construction algorithm implemented in the pymatgen library [@Tran2016; @Ong2013]. In the polymorph, the crystal grows only along three planes, while in the , seven planes are available. Maki reported that the equilibrium form of alite is usually made up of three special forms: one pedion and two rhombohedra [@Maki1986]. The author proposed a morphology similar to that obtained by the Wulff construction, though flatter and with only one rhombohedral form. Maki explains that the crystal form of alite changes during recrystallization from platelets to massive granules with well-developed pyramidal faces (10$\bar{1}$1) and (1$\bar{1}$02) [@Maki1983]. The ratio between the width and the length of the platelet is a function of the environment during growth. One should note that the obtained equilibrium shapes are not fully definitive, since they were determined on a relatively limited number of planes. More calculations would probably refine these shapes and produce a more accurate prediction. Conclusion ========== The mechanical and thermal properties of and were investigated, as well as cleavage energies and equilibrium shapes. The elastic constants were calculated by static and dynamical methods.
The elastic constants found by Voigt-Reuss-Hill homogenization of the stiffness constants obtained in the static calculations with IFF were found in good agreement with experimental measurements, whereas the results obtained with ClayFF underestimated the experimental values by roughly a factor of two. An isotropic distribution of elastic moduli in space was observed for the polymorph, whereas an anisotropic distribution was found for . The bulk modulus was obtained by EMD, and the elastic and shear moduli, as well as Poisson’s ratios, were calculated by NEMD. The results are in good agreement with static calculations and experimental measurements. Specific heat capacities were calculated by the direct method and from the VACF. The direct method provides results greater than experimental measurements, and with a much larger error than the VACF method. On the other hand, the VACF method allowed analysis of the phonon density of states and provided much more accurate results. The results obtained with ClayFF are very close to previous experimental measurements, and the results from IFF are slightly smaller. The DOS obtained from the VACF are in good agreement with infrared spectroscopy measurements, and the differences between IFF and ClayFF arise mainly from the bond description of silicates. Cleavage energy calculations were performed on both the and polymorphs, using a temperature-gradient method to relax superficial ions to lower-energy configurations. The energies obtained for the two polymorphs are in the same range. The resulting equilibrium crystal shapes, obtained by Wulff construction, are however different, but the low-index planes with the lowest energies are the (100) and (001) for both polymorphs. The construction possesses three facets against seven for the polymorph. The cleavage energies were calculated for a relatively limited number of planes, and further calculations could lead to even more accurate shapes.
Acknowledgments {#acknowledgments .unnumbered} =============== The authors acknowledge Brazilian science agencies CAPES (PDSE process n88881.188619/2018–01) and CNPq for financial support.
--- abstract: 'The Schrödinger wave functional $\psi=\exp-S\{\mathcal{A}^a_i(\vec{x})\}$ for the $d=3+1$ QCD vacuum is a partition function constructed in $d=4$; the exponent $2S$ \[in $|\psi|^2=\exp (-2S)$\] plays the role of a $d=3$ Euclidean action. We start from a simple conjecture for $S$ based on dynamical generation of a gluon mass $M$ in $d=4$, then use earlier techniques of the author to extend (in principle) the conjectured form to full non-Abelian gauge invariance. We argue that the exact leading term, of $\mathcal{O}(M)$, in an expansion of $S$ in inverse powers of $M$ is a $d=3$ gauge-invariant mass term (gauged non-linear sigma model); the next leading term, of $\mathcal{O}(1/M)$, is a conventional Yang-Mills action. The $d=3$ action that is (twice) the sum of these two terms has center vortices as classical solutions. The $d=3$ gluon mass $m_3$, which we constrain to be the same as $M$, and $d=3$ coupling $g_3^2$ are related through the conjecture to the $d=4$ coupling strength, but at the same time the dimensionless ratio $m_3/g_3^2$ can be estimated from $d=3$ dynamics. This allows us to estimate the $d=4$ coupling $\alpha_s(M^2)$ in terms of the strictly $d=3$ ratio $m_3/g_3^2$; we find a value of about 0.4, in good agreement with an earlier theoretical value but somewhat low compared to the QCD phenomenological value of $0.7\pm 0.3$. The wave functional for $d=2+1$ QCD has an exponent that is a $d=2$ infrared-effective action having both the gauge-invariant mass term and the field strength squared term, and so differs from the conventional QCD action in two dimensions, which has no mass term. This conventional $d=2$ QCD would lead in $d=3$ to confinement of all color-group representations. But with the mass term (again leading to center vortices), only $N$-ality $\not\equiv 0$ mod $N$ representations can be confined (for gauge group $SU(N)$), as expected.' author: - 'John M. 
Cornwall[^1]' title: A conjecture on the infrared structure of the vacuum Schrödinger wave functional of QCD --- \[intro\] Introduction ====================== The functional Schrödinger equation (FSE) for gauge theories, while no simpler to solve (and perhaps harder, in some ways) than any other non-perturbative formulation of QCD, has often been used over the years to gain insight into various aspects of QCD or, more generally, $SU(N)$ gauge theory [@loos; @green79; @halpern; @green80; @jackiw80; @feynman; @ja84; @corn87; @corn92; @mansfield94; @kk; @rein97; @zar; @pachos; @mansfield99; @feurein; @greenole]. However, few of these works address the important question of how confinement is expressed in the FSE. In any approach to the FSE for QCD that purports to reveal confinement, there are two important prerequisites: The first is gauge invariance, and it has been addressed in many ways. The second is the need to ensure that there are only short-range field-strength correlations; otherwise (see, [*e. g.*]{}, the qualitative and in some ways incomplete discussion of Feynman [@feynman]) there cannot be confinement. Given these, confinement further requires long-range pure-gauge contributions to the potential. These long-range pure-gauge parts appear in the FSE as massless longitudinally-coupled scalars that mimic Goldstone fields, although of course there is no symmetry breaking in QCD. Just as with conventional Goldstone fields, these massless poles do not appear in the QCD S-matrix; this would be so even if QCD were not a confining theory. As is well-known, center vortices, solitons of an infrared-effective action for QCD that encapsulates dynamical and gauge-invariant generation [@corn82; @papaag] of a gluon mass $M$, show just these properties and so provide a confinement mechanism. This mass has been estimated theoretically [@corn82], from phenomenology [@field; @natale], and on the lattice [@deforc], all yielding values of 600$\pm$200 MeV. 
The center vortices in $d=3$ are characterized by closed strings that (generically) constitute the constant-time cross-sections of $d=4$ center vortices; a confining condensate of center vortices in $d=4$ is therefore mirrored by a similar condensate in $d=3$. (Of course, the classical local minimum describing a single or a few center vortices is not relevant in isolation; it is necessary that there be an entropy-driven condensate of vortices. We do not discuss that issue here.) In $d=3+1$ the FSE describes four-dimensional dynamics in $d=3$ terms, because the exponent $S$ in the vacuum wave functional $\psi = \exp (-S)$ is (half of) a $d=3$ action, which we label $I_{d=3}$, depending on the spatial gauge potentials at zero time. Many authors have discussed center vortices for QCD in strictly $d=4$ terms. Our question is, how are such solitons—and hence confinement—described in the FSE action $2S=I_{d=3}$? Our answer proceeds in four steps. The FSE exponent $S$ is an infinite series of $n$-point functions integrated over the spatial components of $n$ gauge potentials (see the Appendix, which reviews earlier work [@corn87] on the FSE, as well as Sec. \[nonabelian\]). The first step, described in Sec. \[abelian\], considers the lowest-order term $S_2$ of this expansion, which is quadratic and shows only Abelian $U(1)^{N^2-1}$ gauge invariance. Our conjectured form of $S_2$ exactly satisfies the FSE with an Abelian gauge Hamiltonian that phenomenologically describes a gauge-invariant gluon mass $M$; it is essentially $N^2-1$ copies of the Abelian Higgs model with infinite Higgs mass. Since our focus is on confinement, an infrared phenomenon, we will use techniques and approximations that are useful in the infrared regime, even though they may misstate ultraviolet-dominated phenomena. In particular, although we treat the gluon mass $M$ as a constant, it is actually a running mass $M(k^2)$ evaluated on-shell. 
In order that there be dynamical mass generation in QCD, the running mass must vanish for large momentum $k^2$ [@corn82]. This vanishing cures certain short-distance singularities of the center-vortex solitons coming from an infrared-effective action. We will ignore this complication throughout this paper. The Abelian case is not entirely trivial, since the action $S_2$ contains the square root of an operator—the hallmark of the FSE. (Throughout this paper, we take this operator, called $\Omega$, in the simple form $\Omega =\sqrt{M^2-\nabla^2}$.) Nonetheless, $S_2$ has center-vortex solutions. Although these do not completely coincide with conventional $d=3$ center vortices, they show the necessary features: Long-range pure-gauge parts that confine, and field strengths that vanish at large distances as $\exp (-M\rho )$ where $\rho$ is the distance from the closed string on which the vortex lives. In anticipation of what we must do in the non-Abelian case, we study briefly the infrared expansion of $S_2$ in powers of $k^2/M^2$, and show that the first two terms yield a familiar action. The leading term is a gauge-invariant mass term; the next-leading term is the usual Abelian gauge action. However, if the expansion is truncated after two terms, the gauge mass described by them is erroneous. The reason is elementary: The infrared expansion, at least in the Abelian case, is nothing but the first two terms of the expansion $$\label{2termexp} \sqrt{M^2-\nabla^2}\rightarrow \frac{1}{M}(M^2-\frac{1}{2}\nabla^2 +\dots );$$ the two terms saved correspond to a mass $\sqrt{2}M$ instead of $M$. We propose that it may be phenomenologically useful, although not highly accurate, to make the replacement $$\label{meansq} \sqrt{M^2-\nabla^2}\rightarrow \frac{Z}{M}(M^2-\nabla^2)$$ where the renormalization constant $Z\simeq 1$ can be estimated in various ways. This heuristic replacement has the correct gluon mass. 
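In momentum space ($-\nabla^2\to k^2$) the two truncations can be compared directly; this short check makes both masses explicit:

```latex
\sqrt{M^2+k^2}\;\rightarrow\;\frac{1}{M}\Bigl(M^2+\frac{k^2}{2}\Bigr)
  =\frac{1}{2M}\bigl(k^2+2M^2\bigr),
\qquad
\sqrt{M^2+k^2}\;\rightarrow\;\frac{Z}{M}\bigl(M^2+k^2\bigr);
```

the first kernel vanishes at $k^2=-2M^2$, corresponding to mass $\sqrt{2}M$, while the second vanishes at $k^2=-M^2$ for any $Z$, corresponding to mass $M$.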
We discuss the motivation for this renormalization, coming from omitted terms in the infrared expansion. The second step, the subject of Sec. \[nonabelian\], begins with the problems of enforcing non-Abelian gauge invariance. Using earlier work [@corn87], we show that $S_2$ of step one can be gauge-completed to exact non-Abelian gauge invariance with an infinite series of $n$-point functions and powers of gauge potentials, in such a way that all $n$-point functions depend only on the operator $\Omega$, no matter what the specific form of $\Omega$ is. Gauge completion uses the pinch technique [@corn82; @binpap] and the gauge technique [@cornhou], as reviewed in the Appendix. The gauge technique is an approximation that becomes exact only at zero momentum, but is useful generally for momenta not large compared to $M$. In this gauge-completed $S$ we continue to use the simple form in $S_2$ of the two-point function introduced in the Abelian case. Just as for the ordinary Schrödinger equation, either direct substitution in the FSE or a dressed-loop expansion based on [@cjt] ultimately yields a non-linear Schwinger-Dyson equation for $\Omega$ (see the Appendix). We do not attempt to carry out this difficult program to find $\Omega$, but simply use the form already introduced in the Abelian case, showing mass generation. Even the approximate (although showing full non-Abelian gauge invariance) form of $S$ coming from application of the gauge/pinch technique is extremely complex, involving not only square roots of operators but an infinity of terms. This $S$ looks nothing like actions that we are used to dealing with. Ultimately, whatever form $S$ takes must be dealt with on its own terms. However, just as in the Abelian case it can be helpful to look for an approximate but familiar form. 
We make the same sort of mass expansion, saving only the first two terms, and argue that for QCD the leading term, of $\mathcal{O}(M)$, is equivalent to a gauged non-linear sigma (GNLS) model, which is commonly used as a description of gauge-invariant dynamical mass generation in Yang-Mills theory (see, for example, [@corn74; @corn82]). This sigma model contains the massless scalar poles, actually pure-gauge parts of center vortices, that are responsible for confinement. The second term, of $\mathcal{O}(1/M)$, is (after gauge completion) the conventional Yang-Mills action. But as in the Abelian case, the mass is wrong by a factor $\sqrt{2}$, so we suggest using the replacement of Eq. (\[meansq\]). In Sec. \[massexpand\] we give the final conjecture for the non-Abelian exponent $S$ and $d=3$ action $I_{d=3}\equiv 2S$, and the main consequences following from it. The conjectured action is the sum of a GNLS model and a conventional Yang-Mills action, with the correct free-field mass and a poorly-known renormalization constant $Z$. We suggest a method or two for estimating $Z$, probably with no more than 25% accuracy. The fourth step is to examine the consequences of this final two-term action. We have already noted that this action has center vortices as classical local minima (classical maxima of the FSE wave functional), and thus could provide a description of confinement in the FSE, which was one of our principal goals. Moreover, by appealing to known $d=3$ gauge dynamics, we can estimate the $d=4$ coupling strength in terms of the renormalization constant $Z$. In $d=3$ the coupling $g_3^2$ has dimensions of mass, and there is a unique \[for given $SU(N)$\] dynamically-determined ratio $M/g_3^2$, which has been estimated by a number of authors [@chk; @cornyan; @alexn; @buchphil; @corn1998; @eber; @karsch; @hkr; @ckp; @naka; @nair]. Knowing only this ratio we can estimate the $d=4$ QCD coupling $\alpha_s(M^2)$, getting a value around 0.4$Z$. 
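In outline, the estimate is a dimensional-matching sketch (given here for orientation; it anticipates the Yang-Mills coefficient $Z/(4Mg^2)$ in $S$ obtained in Sec. \[abelian\] and the constraint $m_3=M$). Equating the $F^2$ coefficient of $I_{d=3}=2S$ to the conventional $d=3$ normalization $(1/4g_3^2)\int F^2$ gives

```latex
% Matching the F^2 term of I_{d=3}=2S to (1/4g_3^2)\int F^2:
\frac{Z}{2Mg^2}=\frac{1}{4g_3^2}
\quad\Longrightarrow\quad
g^2=\frac{2Zg_3^2}{M}\,,\qquad
\alpha_s(M^2)=\frac{g^2}{4\pi}=\frac{Z}{2\pi\,(m_3/g_3^2)}\,,
```

so that a dynamically-determined ratio $m_3/g_3^2\simeq 0.4$ yields $\alpha_s(M^2)\simeq 0.4Z$.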
For $Z\simeq 1$ this is reasonably close to an early $d=4$ estimate [@corn82] using the gauge and pinch techniques, but somewhat low compared to phenomenological estimates [@natale] of 0.7$\pm$0.3. Another application is to the $d=2+1$ FSE, studied in, among other works, [@green79; @greenole]. Our present techniques suggest that the corresponding $d=2$ FSE exponent $S$ is again a sum of a gauge-invariant mass term and the usual Yang-Mills action. Greensite [@green79] speculated that this $S$ had just the conventional Yang-Mills term. However, as noted there and in [@greenole], this would lead to the wrong conclusion that in $d=2+1$ all representations of $SU(N)$ were confined, when in fact the adjoint and other representations with $N$-ality $\equiv 0$ mod $N$ are screened, not confined. But with the addition of the mass term, confinement can come about through center vortices, and this form of confinement correctly predicts screening for these representations. The paper ends with Sec. \[conclusions\], giving conclusions. An Appendix reviews some background material on the FSE, including applications of the pinch/gauge technique to the gauge FSE. \[abelian\] Describing mass generation in the FSE: The Abelian case =================================================================== Notation: Throughout this paper we will always use the canonical gauge potential $\mathcal{A}_i^a(\vec{x})$ multiplied by the coupling $g$, with the notation: $$\label{pot2} A_i^a(\vec{x})=g\mathcal{A}_i^a(\vec{x}).$$ Here $a$ is a group index for gauge group $SU(N)$, and $i=1,2,3$ labels the spatial components. All vectors are three-dimensional, so we will now drop the vector notation and just use, [*e.g.,*]{} $k$ for a three-momentum. 
We also use the antihermitean matrix form $$\label{pot} A_i(x)=(\frac{g}{2i})\lambda_a\mathcal{A}^a_i(x)$$ where the $\lambda_a$ are the Gell-Mann matrices for $SU(N)$, obeying $$\label{trace} Tr\frac{1}{2}\lambda_a\frac{1}{2}\lambda_b=\frac{1}{2}\delta_{ab}.$$ The $A_i^a$ have engineering mass dimension 1 in any dimension. The time component $A_0^a$ is missing from the FSE. In this paper we will not need to indicate gauge-fixing and ghost terms necessary to define the $d=3$ functional integrals that yield physical expectation values. In the first step we begin with a simple quadratic (in the gauge potentials) form for $S$ that is consistent with gluon mass generation. This quadratic form $S_2$ is Abelian, showing $U(1)^{N^2-1}$ local gauge invariance: $$\label{prelim} S_2=\frac{1}{2g^2}\int A_i^a\Omega_{ij}A_j^a(x)$$ where the integral is over three-space, and $\Omega_{ij}$ is a product of two factors: $$\label{omegadef} \Omega_{ij}=P_{ij}\Omega.$$ The factor $P_{ij}$ is a transverse projector: $$\label{pijdef} P_{ij}=\delta_{ij}-\frac{\partial_i\partial_j}{\nabla^2}$$ that is required for Abelian gauge invariance. The free-field value of $\Omega$, called $\Omega_0$, describes free massless particles: $$\label{freeomega} \Omega_0=\sqrt{-\nabla^2}=\sqrt{k^2}$$ where $k$ is a three-momentum. To describe dynamical mass generation we will use, in this paper, the simple form $$\label{massomega} \Omega = \sqrt{-\nabla^2+M^2}$$ in which the gluon mass $M$ is the on-shell value of a running mass. 
Putting these equations together we have: $$\label{s2form} S_2=\frac{1}{2g^2}\int A_i^a\sqrt{M^2-\nabla^2}P_{ij}A_j^a.$$ One can easily check that $S_2$ is an exact solution to the FSE for an Abelian Hamiltonian with a gauge-invariant mass term: $$\label{abelham} H=\int\{-\frac{1}{2}g^2(\frac{\delta}{\delta A_i^a})^2+ \frac{1}{2g^2}[\frac{1}{2}(F_{ij}^a)^2+M^2A_i^aP_{ij}A_j^a]\}\equiv \int [\frac{1}{2}(\Pi_i^a)^2]+V.$$ where $F_{ij}^a=\partial_iA_j^a-\partial_jA_i^a$ are the Abelian field strengths. Here the mass term is put in by hand; in the non-Abelian version, we imagine that this mass term summarizes the effects of non-Abelian condensates. Equations of motion and solitons for $S_2$ ------------------------------------------ One goal in this Abelian example is to find center vortex-like solitons as extrema of $S_2$. It may not be entirely obvious how to proceed, because this action contains the square root of an operator, which leads to subtleties concerning positivity, locality, and self-adjointness. For example, we will see that the operator $\sqrt{M^2-\nabla^2}$ effectively vanishes on center vortex solitons, although $-\nabla^2$ is formally positive; this would falsely suggest that the action of such a soliton is zero. Consider the following alternative description of $S_2$, found by expanding the square root in powers of $-\nabla^2/M^2$ and assuming that integration by parts with no boundary terms is allowed at all orders: $$\label{adjointsqrt} S_2=\frac{M}{2g^2}\int A_i^aP_{ij}A_j^a+\frac{1}{4g^2}\int \sum_{N=0}C_{N+1}M^{-1-2N} [\partial_1\dots \partial_N F_{ij}^a(x)]^2$$ where $\partial_k\equiv \partial /\partial x_k$ and the $C_N$ are the coefficients of $x^N$ in the power-series expansion of $\sqrt{1+x}$. This re-definition of the square root gives the same generalized Euler-Lagrange equations as the naive equations following from the original form of Eqs. 
(\[prelim\],\[omegadef\],\[massomega\]), because these equations assume that integrating by parts gives no contributions (as would be appropriate for functions that fall off at least exponentially). In order to study these generalized Euler-Lagrange equations, it is very helpful to have $S_2$ in a formally local form. We note that, term by term, all but the first term of this alternative form of $S_2$ are both local and manifestly gauge-invariant, and need no change. As for the first term, we replace (in a familiar way) the non-local part by scalar fields: $$\label{localize} S_2 = \frac{M}{2g^2}\int [A_i^a-\partial_i\phi^a]^2 +\frac{1}{8Mg^2}\int [F_{ij}^a]^2 +\dots$$ Now keeping only a finite number of terms in the mass expansion of $S_2$ yields a local action, although of course the infinite sum may introduce non-localities. Saving only the first two terms in the mass expansion of $S_2$ based on Eq. (\[adjointsqrt\]) should fail to satisfy the Abelian FSE based on the Hamiltonian of Eq. (\[abelham\]). It is instructive to work out this failure and its consequences. The FSE reads: $$\label{fse} \frac{-g^2}{2}\int (\frac{\delta S_2}{\delta A_i^a})^2 +\frac{g^2}{2}\int \frac{\delta^2S_2}{\delta A_i^a \delta A_i^a} +H=E$$ where $E$ is the vacuum energy. Since the second-derivative term on the left-hand side of this equation only contributes to $E$, we drop it and renormalize $E$ to zero. The mass expansion of $S_2$ suggests that the remaining quadratic term in the FSE is in error at $\mathcal{O}(1/M^2)$. A simple calculation confirms this; Eq. (\[fse\]) becomes: $$\label{fseerror} \frac{-g^2}{2}\int (\frac{\delta S_2}{\delta A_i^a})^2+H + \frac{1}{4g^2}\int \frac{1}{2M^2}(\partial_jF_{ij}^a)^2=0.$$ At least qualitatively this error term in the FSE \[last term on the left-hand side, of $\mathcal{O}(1/M^2)$, the same relative order as the $N=1$ term in Eq. 
(\[adjointsqrt\])\] can be thought of as increasing the kinetic field-strength term $(F_{ij}^a)^2$ by a factor involving a mean-square momentum of the type $\langle k^2\rangle/M^2$; such an increase helps restore the balance between kinetic and mass terms in the expanded Hamiltonian which was disrupted by the usual infrared expansion of Eq. (\[localize\]). Such a renormalization is not quantitatively trivial, since momenta relevant for solitons such as center vortices are of $\mathcal{O}(M)$. It is useful to restate the local form of $S_2$ in a compact way, by undoing the power-series expansion and integration by parts: $$\label{resum} S_2 = \frac{M}{2g^2}\int [A_i^a-\partial_i\phi^a]^2 +\frac{1}{2g^2} \int A_i^aP_{ij}[\sqrt{M^2-\nabla^2}-M]A_j^a.$$ The scalar fields $\phi^a$ are to be integrated over, which may be thought of as projection of a simple mass term $(A_i^a)^2$ onto its gauge-invariant part by integrating over all gauge transformations. Because the $\phi^a$ appear quadratically, such an integration is the same as solving the classical field equations. The field equations for the $\phi^a$ are identical with a constraint following from the field equations for the $A_i^a$. 
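This equivalence can be spelled out in one line (a gloss on the statement, using the equations displayed just below): since $\partial_iP_{ij}=0$, the divergence of the gauge-potential equation of motion \[Eq. (\[abelfeqn\]) below\] retains only the mass term,

```latex
\partial_i\Bigl\{M(A_i^a-\partial_i\phi^a)
 +[\sqrt{M^2-\nabla^2}-M]P_{ij}A_j^a\Bigr\}
 = M(\partial_iA_i^a-\nabla^2\phi^a)=0,
```

which is precisely the constraint $\nabla^2\phi^a=\partial_iA_i^a$ of Eq. (\[phieqn\]).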
Varying $S_2$, one finds the gauge potential equations of motion: $$\label{abelfeqn} M(A_i^a-\partial_i\phi^a)+[\sqrt{M^2-\nabla^2}-M]P_{ij}A_j^a=0.$$ The divergence yields the $\phi^a$ equations: $$\label{phieqn} \nabla^2\phi^a=\partial_iA_i^a\rightarrow \phi^a=\frac{1}{\nabla^2} \partial_iA_i^a+\varphi^a\;\;{\rm with}\;\;\nabla^2\varphi^a =0.$$ Using Eq. (\[phieqn\]), re-write Eq. (\[abelfeqn\]) as: $$\label{rewrite} \sqrt{M^2-\nabla^2}P_{ij}A_j^a=M\partial_i(\phi^a-\frac{1}{\nabla^2} \partial_jA_j^a)=M\partial_i\varphi^a.$$ Multiplication by $\sqrt{M^2-\nabla^2}$ leads to: $$\begin{aligned} \label{sqrtmult} (M^2-\nabla^2)P_{ij}A_j^a & = & M\sqrt{M^2-\nabla^2}\partial_i\varphi^a\rightarrow \\ \nonumber \nabla^2A_i^a-\partial_i\partial_jA_j^a & = & M^2(A_i^a-\partial_i\frac{1} {\nabla^2}\partial_jA_j^a)-M\sqrt{M^2-\nabla^2}\partial_i\varphi^a \rightarrow \\ \nonumber \nabla^2A_i^a-\partial_i\partial_jA_j^a - M^2(A_i^a-\partial_i\phi^a) & = & M[M-\sqrt{M^2-\nabla^2}]\partial_i\varphi^a. \\ \nonumber\end{aligned}$$ Term by term, every term on the right-hand side of the third equation in Eq. (\[sqrtmult\]) vanishes, if we use $\nabla^2\varphi^a=0$. Since in $\mathcal{R}^3$ there are no fields $\varphi^a$ solving $\nabla^2\varphi^a=0$ that are regular everywhere and vanish at infinity, one may be tempted to make the stronger statement that $\varphi^a$ must vanish. But the description of center vortices requires a non-zero $\varphi^a$, singular on a closed Dirac hypersurface of co-dimension 2 (a closed string in $d=3$), so it is more accurate to say that term by term the expansion of the right-hand side of the third equation in Eq. (\[sqrtmult\]) vanishes almost everywhere. However, we will soon see that this is not true for the unexpanded form. 
If we nonetheless drop this term with the square-root operator, the final equations of motion are the usual equations [@corn79] for center vortices, the same as would be obtained from the $d=3$ Euclidean action $$\label{d3action} \frac{1}{2g^2}\int \{M^2(A_i^a-\partial_i\phi^a)^2+\frac{1}{2}(F_{ij}^a)^2\}.$$ This action is just the potential $V$ occurring in the Abelian Hamiltonian of Eq. (\[abelham\]), written in local form; it is the Abelian form of the $d=3$ infrared-effective action used [@corn79; @corn82] to describe mass generation, and it has center vortices as classical solitons. If the term $M[M-\sqrt{M^2-\nabla^2}]\partial_i\varphi^a$ is left unexpanded, things are slightly different, although there are still center vortices characterized by long-range pure-gauge parts and field strengths vanishing exponentially as $\exp (-M\rho)$, where $\rho$ is the distance from the Dirac string. A center vortex is always fully determined by $\varphi^a$. We present our results in the gauge $\partial_iA_i^a=0$, in which case $\phi^a =\varphi^a$. The well-known expression [@corn79] for the center-vortex $\partial_i\phi^a$ is: $$\label{phiform} \partial_i\phi^a(x)=2\pi Q^a \epsilon_{ijk}\partial_j\oint_{\Gamma} dz_k\int \frac{d^3k} {(2\pi )^3}\frac{1}{k^2}e^{ik\cdot (x-z)}$$ where the closed contour $\Gamma$ is the Dirac string, and $Q^a$ is one of the $N-1$ generators of the Cartan subalgebra, normalized so that $\exp (2\pi i Q)$ is in the center of $SU(N)$. Now the third equation in Eq. (\[sqrtmult\]) easily gives: $$\label{solve3d} A_i^a(x)=2\pi Q^a\epsilon_{ijk}\partial_j\oint_{\Gamma}dz_k \int \frac{d^3k} {(2\pi )^3}\frac{M}{k^2\sqrt{k^2+M^2}}e^{ik\cdot (x-z)}.$$ In the usual $d=3$ vortex, an extremum of the action in Eq. (\[d3action\]), the factor $M(k^2+M^2)^{-1/2}$ would be replaced by $M^2(k^2+M^2)^{-1}$. 
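In this gauge the kernel of Eq. (\[solve3d\]) follows in one step from the third equation of Eq. (\[sqrtmult\]); in momentum space (with $\partial_iA_i^a=0$ and $\phi^a=\varphi^a$),

```latex
-(k^2+M^2)\tilde{A}_i^a+M^2\,ik_i\tilde{\varphi}^a
 = M\bigl[M-\sqrt{k^2+M^2}\bigr]\,ik_i\tilde{\varphi}^a
\quad\Longrightarrow\quad
\tilde{A}_i^a=\frac{M}{\sqrt{k^2+M^2}}\,ik_i\tilde{\varphi}^a,
```

so the $1/k^2$ kernel of $\partial_i\phi^a$ in Eq. (\[phiform\]) is multiplied by $M(k^2+M^2)^{-1/2}$, as in Eq. (\[solve3d\]).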
This unusual square root does not change the fact that the field strengths show exponential decrease; in fact: $$\label{bfield} B_i^a=\frac{1}{2}\epsilon_{ijk}F_{jk}^a=2\pi Q^a\oint_{\Gamma}dz_i \frac{M^2}{2\pi^2|x-z|}K_1(M|x-z|).$$ There is, of course, still the long-range pure-gauge part associated with $\phi^a$, which we can isolate by the decomposition: $$\label{decompose} \frac{M}{k^2\sqrt{k^2+M^2}}=\frac{1}{k^2}+ \frac{1}{k^2}\{\frac{M}{\sqrt{k^2+M^2}}-1\}.$$ The second term on the right-hand side is short-ranged. The short distance behavior is more singular than that of the conventional vortex, but leads only to a logarithmic singularity in the value of $S_2$, the same as for the conventional vortex. In both cases the singularity is multiplied by a power of $M$, which removes the singularity because the running mass vanishes at short distances. So the vortex extrema of $S_2$ differ in detail from the usual center vortex, but have the hallmark features of a long-range pure-gauge part and field strengths vanishing like $\exp (-M\rho )$. Mass expansion of $S_2$ ----------------------- Another goal of this section is to replace $S_2$, which is given either in Eq. (\[adjointsqrt\]) as an infinite sum involving derivatives of arbitrarily high order or in Eq. (\[resum\]) in terms of square roots of operators, by a tractable and recognizable action. The first two terms of the expansion, written explicitly in Eq. (\[localize\]), fit these criteria, but suffer from a serious defect. The coefficient of the second term, the usual gauge action, is wrong by a factor of 2; as written, it describes gauge bosons of mass $\sqrt{2}M$. This wrong coefficient arises from the expansion $\sqrt{1+x}=1+(x/2)+\dots $. We can see the same thing happening with a mass expansion of the Fourier kernel of Eq. (\[decompose\]). 
Expand the square root in the curly brackets of this equation in powers of $k^2/M^2$ to get: $$\label{fourexp} \frac{M}{k^2\sqrt{k^2+M^2}}=\frac{1}{k^2}-\frac{1}{k^2+2M^2}+\dots$$ This is exactly the kernel of the usual $d=3$ vortex, but with the wrong mass $\sqrt{2}M$. This is not the only way of expanding; for example, re-writing the Fourier kernel in a different form and expanding the square root occurring in it gives: $$\label{reexpand} \frac{M}{k^2\sqrt{k^2+M^2}}=\frac{M\sqrt{k^2+M^2}}{k^2(k^2+M^2)}= \frac{M^2}{k^2(k^2+M^2)}-\frac{1}{2(k^2+M^2)}+\dots$$ The first term on the right-hand side is the standard center vortex with the correct mass $M$, and all other terms have this mass as well. However, these other terms give the wrong coefficient for the exponential falloff of the field strengths at large distance. There are no such results for square-root operators in the non-Abelian case, which is as expected much more complicated. So we will, in the spirit of the Abelian expansion given in Eq. (\[localize\]), look for a way to approximate the complicated non-Abelian result by a two-term form, the first of which is a (gauge-invariant) mass term and the second is the usual Yang-Mills action. In the Abelian case, such a two-term action as an approximation to the infinite sum of Eq. (\[adjointsqrt\]) suggests that the derivatives in this sum, beyond those in $F_{ij}^a$, be approximated by averages so that this equation is effectively $$\label{s2eff} S_2=\frac{M}{2g^2}\int A_i^aP_{ij}A_j^a+\frac{1}{4Mg^2}\int \sum_{N=0}C_{N+1}\langle \frac{k^{2N}}{M^{2N}}\rangle [F_{ij}^a(x)]^2\equiv \frac{M}{2g^2}\int A_i^aP_{ij}A_j^a+\frac{Z}{4Mg^2}\int [F_{ij}^a(x)]^2$$ where $k^{2N}$ stands for the multiple derivatives. If this is justified, the infinitely many terms of Eq. (\[adjointsqrt\]) are indeed replaceable by a mass term plus a renormalized conventional gauge action. 
But because the gluonic mass described by this $S_2$ must be $M$, the same as in the original $S_2$, there will have to be an equal renormalization of the mass term in Eq. (\[s2eff\]) above. In later sections we will explore an approximation to the square root that is motivated by these remarks, involving the replacement $$\label{replace} \sqrt{M^2-\nabla^2}\rightarrow \frac{Z}{M}(M^2-\nabla^2)$$ for some renormalization constant $Z$, supposed to be near unity. We have no reliable techniques for calculating $Z$, so we will resort to a simplistic approach of making a least-squares fit of the operator $\sqrt{M^2-\nabla^2}$ by the operator $(Z/M)(M^2-\nabla^2)$, which leads to $Z\simeq 1.1-1.2$. Before engaging in this mass expansion we must understand the gauge structure of the non-Abelian exponent $S$. \[nonabelian\] The non-Abelian case: Gauge completion and mass expansion ======================================================================== We can be nowhere near as complete in the non-Abelian case as we were above, and ultimately will be forced to resort to a large-mass expansion. In the non-Abelian case, the quadratic term $S_2$ with which we began is supplemented with an infinity of terms, involving spatial integrals over $n\geq 3$ spatial gauge potentials multiplied by an $n$-point function $\Omega_n$ depending on the spatial and discrete coordinates of the gauge potentials (see the Appendix): $$\label{nonabel1} g^2S=\frac{1}{2!}\int \int A_i^a\Omega_{ij}A_j^a+\frac{1}{3!}\int \int \int A_i^aA_j^bA_k^c \Omega_{ijk}^{abc}+\dots$$ The $n$-point function of this expansion is related to the $n+1$-point function through ghost-free Ward identities, as arise in the pinch technique [@corn82; @binpap]. 
These Ward identities can be “solved” using the gauge technique, whose main points of interest we describe in the Appendix, and the result is that it is possible in principle to find an approximate but exactly gauge-invariant expression for the entire series of $n$-point functions describing the wave functional exponent $S$. Each $n$-point function depends only on the two-point function, but in a complicated way that is not understood. Ultimately the two-point function is determined by a non-linear Schwinger-Dyson equation that can (again in principle) be derived either by direct substitution in the FSE or by a dressed-loop expansion [@cjt; @corn98; @ccds]. Using a dressed-loop expansion for $S$ is equivalent to direct solution of the FSE (of course, either the dressed-loop expansion or the FSE must be truncated at a certain number of loops, but this truncation has nothing to do with a truncation in the coupling $g^2$; all-order non-perturbative effects arise even at one-dressed-loop order in QCD). A systematic study of the FSE would go on to determine the mass $M$ from the infinity of equations for the $n$-point functions in $S$ of Eq. (\[nonabel1\]), but that is not our purpose here. Instead, we show how to construct what we will call a [*gauge completion*]{} of the 2-point action for an arbitrary $\Omega$, using earlier work [@corn87], to add higher-point functions, consistent with solving the FSE, that depend on $\Omega$ in specific ways that ensure full non-Abelian gauge invariance. Ultimately, the FSE becomes a non-linear equation for $\Omega$, just as for the ordinary Schrödinger equation. Gauge invariance requires that the lowest-order (quadratic) term has the Abelian form already given in Eq. (\[s2form\]). The Ward identities for the three-point function and their solution are detailed in the Appendix. 
Both the Ward identities and the FSE for the determination of this three-point function involve only the two-point function $\Omega_{ij}$, and it is plausible that there exists a three-point function satisfying these equations that is a functional solely of the two-point function $\Omega_{ij}$. The gauge technique provides such a three-point function, as given in Eqs. (\[revfse\],\[cornhoueq\]). The gauge technique by itself does not furnish a unique solution, which must be found by recourse either to the FSE itself or to the dressed-loop expansion. However, in the infrared limit of momenta small compared to the mass $M$ the solution is unique. Mass expansion: The leading term -------------------------------- In general, the gauge/pinch technique leads to quite complicated expressions, and we will explore only a simplified version of it. The main simplification is to look at the leading terms in an expansion in inverse powers of $M$. In the leading term, of $\mathcal{O}(M)$, all two-point functions $\Omega$ are replaced just by $M$ itself, which gets rid of many momentum-dependent terms. In this way the leading term of the three-point function is: $$\label{3ptfunct} \Omega_{ijk}^{abc}(k_1,k_2,k_3)=f^{abc}\frac{M}{6}\{\frac{k_{1i}k_{2j}(k_1-k_2)_k} {k_1^2k_2^2}+c.p.\}+\mathcal{O}(1/M).$$ One can proceed in principle this way, by looking at the pinch/gauge technique solution for the four-point function (see [@papa]) and taking the large-mass limit, then the five-point function, etc. We will not detail such an investigation here, but will point out some features that strongly suggest the all-order solution. The structure of the Ward identities shows that the leading term of every $n$-point function is $\mathcal{O}(M)$, with all other dimensions taken up by momenta, and that the gauge-technique solution involves longitudinally-coupled massless poles whose number grows with $n$. 
Observe further that the GNLS term of $S$ is the exact solution of an FSE Hamiltonian consisting of just this term itself, as given in Eq. (\[gnlsm\]) below, multiplied by $M$. Of course, there is no such term in the underlying QCD Hamiltonian, but there would be one in the infrared-effective Hamiltonian of QCD, derived by $d=4$ techniques [@corn74; @corn82]. We suggest that the action of the gauged non-linear sigma (GNLS) model, expressed in non-local form \[as in the originally-stated form of $S_2$, in Eq. (\[s2form\])\] is the all-order perturbative solution to the leading mass terms of the gauge/pinch technique approach. To find this non-local form we investigate the classical solutions of the local GNLS action. Because the notation is more compact, we temporarily switch to the antihermitean matrix notation of Eq. (\[pot\]). The local GNLS model, normalized appropriately, has the action [@corn74]: $$\label{gnlsm} I_{GNLS}= \frac{-M}{g^2}\int d^3xTr[U^{-1}D_iU]^2$$ where $U$ is a unitary matrix transforming as $U\rightarrow VU$ under the gauge transformation $$\label{gtrans} A_i\rightarrow VA_iV^{-1}+V\partial_iV^{-1}.$$ The classical equations for $U$ express this quantity in terms of the $A_i$ [@corn74], with the result $$\label{upert} U=e^{\omega};\;\;\omega = \frac{-1}{\nabla^2}\partial \cdot A +\frac{1}{\nabla^2}\left \{[A_i,\partial _i\frac{1}{\nabla^2}\partial \cdot A]+\frac{1}{2}[\partial \cdot A,\frac{1} {\nabla^2}\partial \cdot A]+\cdots \right \}$$ showing the appearance of massless scalars. More generally, since $U^{-1}D_iU$ is a gauge transformation of $A_i$, functional integration over the $U$ is equivalent to projecting out the gauge-invariant part of the mass term [@corn87; @kk]. Note that the term linear in $A_i$ of the GNLS model field $U^{-1}D_iU$ is the transverse part of $A_i$. This linear term is Abelian, and all higher-order terms of $\omega$ in Eq. (\[upert\]) are non-Abelian.
\[Greensite and Olejnik [@greenole] have conjectured that in certain instances operators such as $\nabla^{-2}$ should be replaced by $D^{-2}$, where $D_i=\partial_i+A_i$ is the covariant derivative. Their lattice calculations show that $D^{-2}$ is a finite-range operator, with no massless poles; this is reasonable, because it contains gauge-potential condensate terms, but it is not obvious where the long-range pure-gauge excitations responsible for confinement, such as we have in Eq. (\[upert\]), are. We will not follow this line of reasoning here.\] It is now straightforward, if tedious, to verify that the two- and three-point terms of the non-local GNLS action give rise precisely to (the leading mass terms of) the two-point function $S_2$ and the pinch/gauge technique three-point function of Eq. (\[3ptfunct\]). Moreover, the GNLS action integrated over $U$ automatically satisfies the Ward identities to all orders, just because it is the solution of the FSE for a gauge-invariant Hamiltonian.

The second-leading term
-----------------------

We already know that the next-leading term, of $\mathcal{O}(1/M)$, in the expansion of the two-point function $S_2$ is the conventional Abelian action involving $F_{ij}^2$. It is obvious without any calculation that the Abelian action will, at a minimum, be gauge-completed to the full Yang-Mills action with its three- and four-point vertices. These come from the three- and four-point functions in the expansion of $S$ as given in Eq. (\[nonabel1\]). The desired terms of the Yang-Mills action are straightforwardly found either by direct solution of the FSE or from the dressed-loop expansion, which always contains all the terms of the action of the underlying theory divided by some sum of two-point functions $\Omega$. For example, we show in the Appendix that the three-point function has the term $$\label{3pvertex} \Omega_{ijk}^{abc}(k_1,k_2,k_3)=[\Omega (1)+\Omega (2) +\Omega (3)]^{-1}f^{abc} [\delta_{ij}(k_1-k_2)_k+c.p.]
+\dots$$ where the term in square brackets is the free Yang-Mills three-point vertex and each $\Omega (i)$ is replaced by $M$ to find the leading term in the mass expansion. There is a plethora of other terms, which either cancel among themselves or give total divergences. Of course, higher-order gauge-invariant terms may arise from higher-order coefficient functions in the gauge-potential expansion of $S$, Eq. (\[revgauge\]), but we will not consider them, since they are necessarily accompanied by higher powers of $1/M$. In the Abelian case the $\mathcal{O}(1/M)$ term is of the correct functional form, but with a coefficient twice as small as it should be, and the same problem arises for the non-Abelian case. This results in a gauge mass of $\sqrt{2}M$ instead of $M$, as pointed out in Sec. \[abelian\]. In the next section we consider a modification of the straightforward mass expansion of the type of Eq. (\[replace\]) that forces the correct mass.

\[massexpand\] The final conjecture and its consequences
=========================================================

Heuristic mass expansion
------------------------

What we have so far in the gauge-completed mass expansion to second order is the sum of a GNLS and a Yang-Mills term, but with the wrong mass. What we need is an approximation to this two-term action that has the correct mass, in part because solitons are described in this momentum range and decay at a rate $\sim \exp (-M\rho )$. In any event, it is clear that the first two terms in any sensible infrared expansion consist first of a gauge-invariant mass term and second of a standard Yang-Mills action. Rather than stick to a strict expansion in powers of $\nabla^2/M^2$, we conjecture that, as in the Abelian case, we can replace $\Omega = \sqrt{-\nabla^2+M^2}$ by a leading term $(Z/M)(-\nabla^2+M^2)$, where $Z$ is a coefficient of $\mathcal{O}(1)$. The mathematical motivation for least-squares fits of operators is well-known.
Consider a normal operator $P$, expressed in terms of its eigenvalues and eigenfunctions: $$\label{pop} P=\sum|n\rangle \lambda_n\langle n |.$$ Any function of $P$, call it $f(P)$, is expressed by replacing $\lambda_n$ by $f(\lambda_n)$. With the operator norm $Tr P^{\dagger}P$, we define a relative RMS distance between two operators $f(P)$ and $g(P)$ by: $$\label{rmsop} \left\{ \frac{Tr[f(P)-g(P)][f(P)-g(P)]^{\dagger}}{Trf(P)f(P)^{\dagger}} \right\}^{1/2}=\left\{\frac{\int d\lambda \rho (\lambda ) |f(\lambda )-g(\lambda )|^2} {\int d\lambda \rho (\lambda )|f(\lambda )|^2}\right\}^{1/2},$$ where $$\label{evdensity} \rho (\lambda )=\sum \delta (\lambda -\lambda_n)$$ is the density of eigenvalues. One could also modify this density by multiplying it by a non-negative function $q(\lambda )$ to emphasize a certain range of eigenvalues, so that the weight in the integral is $\rho (\lambda )q(\lambda )$. The eigenvalues of $P=-\nabla^2$ are the squared momenta $k^2$, positive for real $k$. We really want our approximation of $\Omega$ to be fairly good for imaginary $k$, so the above discussion is not very useful. Moreover, the operators involved are not in trace class, so divergences arise. Instead, we take a rather simpleminded point of view, asking what is the best fit, in the least-squares sense, of the function $Z(1-x^2)$ to the function $\sqrt{1-x^2}$ in the interval $0\leq x \leq 1$. Here $x^2$ represents $\nabla^2/M^2$, and positive values for this operator suggest that we are applying it to a special class of functions representable by Laplace transformation, with the Laplace-transformation weight peaked around $M$. This is indeed the property of the functions that enter into FSE center vortices, as exemplified in the Abelian center vortex of Eq. (\[bfield\]). 
For a uniform weight over $0\leq x\leq 1$ we find the normalized least-squares integral $I_{ls}(Z)$: $$\label{rms} I_{ls}(Z)=\left [\frac{\int_0^1dx[Z(1-x^2)-\sqrt{1-x^2}]^2}{\int_0^1dx(1-x^2)} \right ]^{1/2}.$$ Minimizing over $Z$ gives $Z=\frac{45\pi}{128}\simeq 1.10$, and the minimum value of $I_{ls}$ is about 0.22. If we replace $x^2$ by $x$ in the integrand of Eq. (\[rms\]), which corresponds to a different weight, we get $Z=1.2$. Both values are near unity, as expected, and the value $I_{ls}\simeq 0.22$ suggests the relative accuracy of this least-squares fit.

The final conjecture: Relating $d$- and $d-1$-dimensional dynamics
------------------------------------------------------------------

The final form of the conjecture, expressed in terms of the $d=4$ variables $g^2,M$ is then: $$\label{finalconj} -2S= -I_{d=3}\rightarrow \frac{2MZ}{g^2}\int d^3xTr[U^{-1}D_iU]^2+\frac{Z}{Mg^2} \int d^3x TrG_{ij}^2+\mathcal{O}(M^{-3}).$$ We now compare this to the canonical $d=3$ form of the conjectured action, which is: $$\label{cand3} I_{d=3}=-\int d^3x\left \{\frac{1}{2g_3^2}TrG_{ij}^2+\frac{m_3^2}{g_3^2}Tr[U^{-1}D_iU]^2\right \}.$$ Here $G_{ij}$ is the non-Abelian field strength, $D_i=\partial_i+A_i$ is the covariant derivative, and the unitary matrix $U$ is the GNLS field, as before; the gluon mass is $m_3$ and the $d=3$ coupling, with dimensions of mass, is $g_3^2$. Equating $I_{d=3}$ with $2S$ leads to: $$\label{compare} \frac{ZM}{g^2}=\frac{m_3^2}{2g_3^2};\;\;g^2=\frac{2Zg_3^2}{M}.$$ These equations yield $m_3=M$, as expected, plus $$\label{compare2} g^2=\frac{2Zg_3^2}{m_3}.$$ (Note that the $d=3$ quantities scale properly at large $N$ if their $d=4$ counterparts do.) Presumably the $d=4$ coupling $g^2$ that occurs in these formulas is actually the running coupling $g^2(M^2)$ evaluated at the gluon mass scale. We can now make an estimate of a pure $d=4$ quantity in terms of a pure $d=3$ quantity, from Eq. (\[compare2\]) and earlier $d=3$ results.
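The fitted coefficients quoted after Eq. (\[rms\]) are quick to verify numerically. The sketch below (plain trapezoidal integration on a hand-chosen grid, not part of the text) reproduces $Z=45\pi /128\simeq 1.10$ for the uniform weight in $x$, and $Z=1.2$ for the variant with $x^2$ replaced by $x$:

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal rule, kept explicit for portability
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(0.0, 1.0, 200_001)
f = np.sqrt(1.0 - x**2)          # sqrt(1 - x^2), the function being fitted
g = 1.0 - x**2                   # the trial shape multiplying Z

# least-squares minimum of int_0^1 (Z g - f)^2 dx:  Z = <f g> / <g g>
Z = trap(f * g, x) / trap(g * g, x)
print(Z)                         # 45*pi/128 = 1.1045 (analytic minimum)

# variant: x^2 -> x in the integrand, i.e. a different eigenvalue weight
Zu = trap(np.sqrt(1.0 - x) * (1.0 - x), x) / trap((1.0 - x) ** 2, x)
print(Zu)                        # (2/5)/(1/3) = 1.2 exactly
```

Both results land near unity, as the argument above requires.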
In $d=3$ one quantity of particular interest is the dimensionless ratio $m_3/g_3^2$. This ratio has been estimated in a number of continuum and lattice studies [@chk; @cornyan; @alexn; @buchphil; @corn1998; @eber; @karsch; @hkr; @ckp], and we can see whether this $d=3$ dynamical quantity can correctly predict the running coupling $g^2(M^2)$ at the gluon mass scale. Or we can reverse the problem and use estimates of the running coupling to predict $m_3/g_3^2$. There is no particular reason to think that the dynamics of the action defined by the exact vacuum wave functional, before truncation to two terms of a mass expansion, should be precisely that of $d=3$ QCD. Nonetheless, if our conjecture is to be believed there should not be gross discrepancies. In $SU(2)$ gauge theory various authors [@chk; @cornyan; @alexn; @buchphil; @corn1998; @eber; @karsch; @hkr; @ckp] give a value $m_3/g_3^2\simeq 0.32$, and one $SU(3)$ lattice study [@naka] gives a value of 0.48. The quantity $m_3/g_3^2$ should be linear in $N$ of $SU(N)$ for large $N$, and the factor 3/2 nicely converts the $SU(2)$ values to the $SU(3)$ value, so we use 0.48 as the $SU(3)$ value. We then find a value for the strong coupling (with no quarks) $\alpha_s(M^2)\simeq 0.33Z$ that is in fairly good agreement with the one-dressed-loop approximation found in the original paper on dynamical gluon mass generation [@corn82]. This paper gives a one-dressed-loop equation for the running charge with dynamical gluon mass generation. At the momentum scale of the gluon mass $M$: $$\label{gest} \alpha_s(M^2)=\frac{g^2}{4\pi}=\frac{12\pi}{[11N-2N_f]\ln [5M^2/\Lambda^2]}\simeq 0.4,$$ where the numerical value is based on the estimates $M=0.6$ GeV, $\Lambda=0.3$ GeV, and the absence of quarks ($N_f=0$). Of course, these numbers for $M$ and $\Lambda$ are themselves uncertain, if only because Eq. (\[gest\]) is a one-dressed-loop equation.
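Both numerical routes to $\alpha_s(M^2)$ above are one-line evaluations; a sketch using the stated inputs ($M=0.6$ GeV, $\Lambda =0.3$ GeV, $N=3$, $N_f=0$, and $m_3/g_3^2\simeq 0.48$ for $SU(3)$):

```python
import math

# Eq. (gest): one-dressed-loop coupling at the gluon mass scale
N, Nf, M, Lam = 3, 0, 0.6, 0.3
alpha_gest = 12 * math.pi / ((11 * N - 2 * Nf) * math.log(5 * M**2 / Lam**2))
print(round(alpha_gest, 3))      # 0.381, the "about 0.4" quoted in the text

# d=3 route: alpha_s = g^2/(4 pi) with g^2 = 2 Z g_3^2/m_3, Eq. (compare2)
ratio = 0.48                     # m_3/g_3^2, the SU(3) lattice estimate
Z = 1.0                          # O(1) coefficient; the fit gave 1.10-1.2
alpha_d3 = 2 * Z / (4 * math.pi * ratio)
print(round(alpha_d3, 3))        # 0.332, i.e. alpha_s(M^2) ~ 0.33 Z
```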
According to this one-dressed-loop equation, accounting for three light flavors multiplies the no-quark value by $11/9\simeq 1.2$. If we assume that this correction applies to the FSE result of this paper, which as it stands does not account for quarks, our estimate of $\alpha_s(M^2)$ increases to about $0.4Z$. Several papers have extracted values of $\alpha_s(0)\simeq 0.7\pm 0.3$ from various scattering data sensitive to low-momentum effects [@natale] that could diverge if there were no gluon mass. The three-quark value of $0.4Z$ that we give is a little smaller, but in quite reasonable agreement considering the approximation that is inherent in a two-term truncation of the FSE exponent $S$ and our lack of knowledge of $Z$. It has been argued [@nair] that $m_3/g_3^2$ for $SU(N)$ is very closely approximated by the simple analytic function $$\label{nair} \frac{m_3}{g_3^2}=\frac{N}{2\pi};$$ the present author [@corn82] has argued for a ratio that should be fairly close to $15N/(32\pi )$, which differs from the above by only a few percent. One then has a simple analytic formula for $\alpha_s(0)$. Using the value from Eq. (\[nair\]) in Eq. (\[compare2\]) yields the amusing, if not very accurate, formula $$\label{compare3} \alpha_s(M^2)=\frac{Z}{N}\simeq \frac{1}{N}.$$ We can play the same game in one less dimension for the $d=2+1$ FSE, beginning with an exponent $S$ for the wave functional that is the trivial dimensional reduction of what we began with in $d=3+1$. The result is a $d=2$ action with, as in $d=3+1$, a mass term and a kinetic term. This is not the standard $d=2$ QCD action, which is a free field theory. We compare this to a conjecture made long ago by Greensite [@green79], arguing that $S$ for the FSE was just the usual Yang-Mills action in one less dimension.
Unfortunately, as [@greenole] notes, if Greensite’s 1979 conjecture is applied in $d=2+1$, the effective action is the familiar $d=2$ free-field QCD, which would lead to confinement of all representations of $SU(N)$, not just those with $N$-ality nonzero. This is not the right behavior for $d=2+1$. But in our case once again the action is the Yang-Mills term plus a GNLS model mass term; this action has center vortices [@corn98], which are point-like objects in $d=2$. A condensate of these solitons leads to confinement, but only of group representations that have $N$-ality $\not\equiv$ 0 mod $N$; other representations (such as the adjoint) are blind to the long-range parts of center vortices. This is the correct behavior for $d=2+1$ gauge theories. However, if the mass term were not present in $I_{d=2}$ this action, which is supposed to carry all the information about $d=2+1$ gauge theories, would reduce to the standard Yang-Mills action in $d=2$. The conventional treatment of $d=2$ gauge theories, which (in the absence of dynamical matter fields) are free field theories, finds confinement through the long-range free gluon propagator, and all representations are confined. But with the mass term the gluon propagator is short-ranged and confinement comes from the pure-gauge long-range parts of center vortices. It is far from trivial to calculate the properties of the center-vortex condensate in $d=2$, and so we cannot relate the $d=3$ coupling to the string tension that would be found from the $d=2$ effective action.

\[conclusions\] Summary and conclusions
=======================================

We have conjectured that to a reasonable approximation the dominant quasi-infrared part of the vacuum wave functionals for the $d=3+1$ and $d=2+1$ FSE are actions in one less dimension consisting of a Yang-Mills term and a GNLS model term, showing gauge-invariant dynamical mass generation. Two main conclusions follow: 1.
Given the usual entropy-dominance argument, these wave functionals show confinement through center vortices, such that only group representations with $N$-ality $\not\equiv 0$ mod $N$ are confined. 2. In $d=3+1$ we can appeal to earlier works estimating the ratio $m_3/g_3^2$ in the $d=3$ action of the FSE to make the estimate $\alpha_s(M^2)\simeq 0.4Z$, where $Z$ is a renormalization constant that we have very crudely estimated to be in the neighborhood of 1.1-1.2. This can be compared to an earlier estimate, based on the original work on dynamical gluon mass generation, of $\alpha_s(M^2)\simeq 0.4$. Both these estimates have three light flavors of quarks. This is to be compared to phenomenological estimates [@natale], also with three light quarks, of $\alpha_s(0)\simeq 0.7\pm 0.3$. It would be interesting to verify this structure of the FSE vacuum wave functionals through lattice simulations. I am happy to acknowledge valuable conversations with Štefan Olejník about Ref. [@greenole].

\[review\] A brief review of the FSE and solution methods
=========================================================

The purpose of this review of known material [@corn87] is to indicate the plausibility of constructing an infrared-accurate and gauge-invariant form of the wave functional $\psi$, based on a single operator $\Omega$, obeying a non-linear Schwinger-Dyson equation. In ordinary quantum mechanics this is exactly what happens except that the “Schwinger-Dyson equation" is simply algebraic. The FSE for scalar field theories is really nothing but ordinary quantum mechanics for infinitely-many coupled oscillators, so we review it and its connection to quantum mechanics, then go on to gauge theories.

The Schrödinger equation
------------------------

The general principles of solving the FSE in terms of an operator $\Omega$ are most easily understood from the ordinary Schrödinger equation.
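For the quadratic/quartic example treated in this subsection, the purely quartic ($\omega =0$) ground-state energy quoted as the “numerical answer" can be reproduced directly. A minimal finite-difference diagonalization (grid parameters chosen by hand, not part of the text):

```python
import numpy as np

# Ground state of H = -(1/2)(d/dx)^2 + (lam/4!) x^4, taking lam = 24 so V = x^4
lam = 24.0
N, L = 801, 4.0                       # grid points and box half-width
x = np.linspace(-L, L, N)
h = x[1] - x[0]
H = np.diag(1.0 / h**2 + (lam / 24.0) * x**4)   # kinetic diagonal + potential
off = np.full(N - 1, -0.5 / h**2)               # finite-difference Laplacian
H += np.diag(off, 1) + np.diag(off, -1)
E0 = np.linalg.eigvalsh(H)[0]
print(round(E0 / lam ** (1 / 3), 4))  # 0.2316
```

The diagonalization gives $E\simeq 0.2316\lambda^{1/3}$, agreeing with the $\simeq 0.231\lambda^{1/3}$ figure quoted in the text to the accuracy relevant for comparison with the truncated value $0.2226\lambda^{1/3}$.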
Consider the quadratic/quartic Hamiltonian $$\label{phi4} H=\frac{-1}{2}(\frac{d}{dx})^2+\frac{1}{2}\omega^2x^2+\frac{1}{4!}\lambda x^4.$$ The ground-state solution is $\psi =\exp (-S)$, with $$\label{groundstate} S=\frac{1}{2}\Omega x^2+\frac{1}{4!}\Omega_4x^4+\dots$$ Following [@corn87], we substitute $\psi$ in the Schrödinger equation saving only terms through $\Omega_6$ and find: $$\label{phi4se} \Omega_6=\frac{-5\Omega_4^2}{3\Omega};\;\; \Omega_4=\frac{\lambda}{4\Omega}+\frac{\Omega_6}{8\Omega};\;\; \Omega^2=\omega^2+ \frac{1}{2}\Omega_4;\;\; E=\frac{1}{2}\Omega.$$ It is easy to solve the equation for $\Omega_4$ to derive a quartic equation for $\Omega$. One can go on to any order this way, expressing (in principle, at least) every $n$-point coefficient up to a given highest value of $n$ in terms of $\Omega$, and ending up with a non-linear dressed-loop equation for $\Omega$. Consider now the case $\omega =0$, for which the perturbative expansion coefficient $\lambda /\omega^3$ diverges. Then through the six-point term we find the expression $E =(3\lambda /272)^{1/3}$, which has the value 0.2226$\lambda^{1/3}$. This is within a few percent of the numerical answer of $0.2311\lambda^{1/3}$.

Field theories other than gauge theories
----------------------------------------

For simplicity of exposition, we begin with a scalar field theory. Take the FSE Hamiltonian to be $$\label{revh} H=\int d^3x\left [\frac{-1}{2}(\frac{\delta }{\delta \phi})^2+\frac{1}{2}(\nabla \phi )^2+\frac{1}{2}m^2\phi^2+V(\phi )\right ].$$ where the potential $V$ contains cubic and higher terms. Ref. [@corn87] showed that the vacuum wave functional could be expressed as a $d=4$ partition function: $$\label{revfinal} e^{-S}=const.
\times \int (d\Phi ) \exp [-I_0(\Phi )-I_0(\hat{\phi}_0) -\int V(\Phi + \hat{\phi}_0)].$$ In this partition function, space-time integrals are of the form of an integral over a Euclidean time $\tau$ and all of three space: $$\label{revint} \int_0^{\infty}d\tau\int d^3x.$$ The argument of $S$ is the field $\phi (x)$, and the field $\hat{\phi}(x)$ depends on $x=(\tau ,x)$ as: $$\label{phihat} \hat{\phi}_0(x)=e^{-\Omega_0 \tau}\phi(x)$$ with $$\label{defom0} \Omega_0=\sqrt{M_0^2-\nabla^2}.$$ The free action $I_0$ is: $$\label{freeact} I_0(\Phi )= \frac{1}{2}\int [(\partial_{\tau}\Phi)^2+(\nabla \Phi )^2]$$ and the inverse of the free-action operator is the free propagator $$\label{freeprop} \Delta_0 = \langle x|\frac{1}{2\Omega_0}[e^{-\Omega_0|\tau -\tau'|}- e^{-\Omega_0(\tau + \tau')}]|x'\rangle.$$ The first term in the propagator is the usual Euclidean propagator: $$\label{revprop2} \langle x|\frac{1}{2\Omega_0}e^{-\Omega_0|\tau -\tau'|}|x'\rangle=\frac{1}{(2\pi )^4}\int d^4k\frac{e^{ik\cdot (x-x')}} {k^2+m^2}.$$ For purposes of calculating the energy eigenvalue, this is the only term that needs to be saved in $\Delta_0$, but the second term of the propagator in Eq. (\[freeprop\]) is necessary for calculating the wave functional. Either by working out the partition function of Eq. (\[revfinal\]) or by direct substitution in the Schrödinger equation one sees that the vacuum functional $\psi$ has the general form (using a streamlined but transparent notation): $$\label{revvac} \psi = e^{-S};\;\;S=\frac{1}{2}\int \int \phi\Omega\phi+\sum_N\frac{1}{N!}\int \dots \int \Omega_N\phi_1 \dots \phi_N.$$ For purely three-dimensional equations, such as this, the unadorned integral sign simply indicates $\int d^3x$, where $x$ is the argument of a corresponding $\phi$ (and a sum over discrete indices, if any), with $\Omega$ and the $\Omega_N,N\geq 3$, as translationally-invariant form factors in the arguments of the $\phi$. The partition function form in Eq. 
(\[revfinal\]) can be addressed with the well-known resummation techniques [@cjt] of the dressed-loop expansion. The effect of these rules is to remove a large fraction of one-particle-reducible graphs, as required for the dressed-loop expansion. In part, this amounts to a general replacement (but not quite everywhere) of the free operator $\Omega_0$ by a dressed operator $\Omega$ that satisfies a non-linear Schwinger-Dyson equation. This operator is precisely the same as the $\Omega$ that occurs in the quadratic term of the wave functional in Eq. (\[revvac\]). For further details of this formalism for scalar field theories, see [@ccds], which uses it for calculating some terms in the Wigner distribution function.

Gauge theories
--------------

For gauge theories the same general structure holds; the principal problem remaining is to enforce gauge invariance. The canonical momentum and Hamiltonian are represented by $$\label{revcan} \Pi_i^a \rightarrow -ig^2\frac{\delta}{\delta A_i^a}; H=\int d^3x [-\frac{1}{2}g^2(\frac{\delta}{\delta A_i^a})^2+\frac{1}{2g^2} (B_i^a)^2]$$ where $B_i^a$ is the chromomagnetic field strength. The generator of infinitesimal gauge transformations is $D_j^{ab}(-i\delta /\delta A_j^b)$, and this must annihilate $\psi$. The exponent $S$ in $\psi$ has the form given in Eq. (\[nonabel1\]), repeated here for convenience: $$\label{revgauge} g^2S=\frac{1}{2!}\int \int A_i^a\Omega_{ij}A_j^a+\frac{1}{3!}\int \int \int A_i^aA_j^bA_k^c \Omega_{ijk}^{abc}+\dots$$ Invariance of $S$ under infinitesimal gauge transformations is trivial for the two-point function $\Omega_{ij}$; this quantity must be conserved, so that in Fourier space $$\label{revcons} \Omega_{ij}(k)=\Omega (k)P_{ij}(k);\;\; P_{ij}=\delta_{ij}-\frac{k_ik_j}{k^2}.$$ For the free theory $\Omega_0(k)=k$. Gauge invariance is more complicated for higher-point functions.
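The conservation statement of Eq. (\[revcons\]) amounts to saying that $P_{ij}$ is the projector orthogonal to $k_i$; a quick numerical spot-check at random momenta (a sketch, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    k = rng.normal(size=3)
    P = np.eye(3) - np.outer(k, k) / k.dot(k)   # P_ij = delta_ij - k_i k_j/k^2
    assert np.allclose(P @ P, P)       # idempotent: a projector
    assert np.allclose(k @ P, 0.0)     # k_i P_ij = 0: the two-point term is conserved
print("P_ij is the transverse projector")
```

Any $\Omega_{ij}=\Omega (k)P_{ij}$ built from it is thus automatically conserved for an arbitrary scalar $\Omega (k)$, which is why two-point gauge invariance is trivial.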
Annihilating $\psi$ with the generator of gauge transformations yields a set of [*ghost-free*]{} Ward identities (these Ward identities also apply to the pinch technique [@corn82; @binpap] construction of gauge-invariant Green’s functions). For example, the Ward identity for the three-point function is: $$\label{revward} k_{1i}\Omega_{ijk}^{abc}(k_1,k_2,k_3)=f^{abc} [\Omega_{jk}(2)-\Omega_{jk}(3)]$$ where $\Omega_{jk}(2)\equiv \Omega_{jk}(k_2)$, etc. Now turn to the FSE itself. The equation determining the three-point function has the general form $$\label{rev3pt} \Omega_{il}(1)\Omega_{ljk}^{abc}+\Omega_{jl}(2)\Omega_{lik}^{bac} +\Omega_{kl}(3)\Omega_{lij}^{cab}=f^{abc}\Gamma_{ijk}.$$ The right-hand side $\Gamma_{ijk}$ comes from the cubic term in $H$, plus another term from the five-point function. The Ward identity for $\Gamma_{ijk}$ is determined by the above equation plus the Ward identities for the two- and three-point functions as already given, and multiplying both sides of Eq. (\[rev3pt\]) by $k_{1i}$ yields: $$\label{revgamma} k_{1i}\Gamma_{ijk}=\Omega_{jk}^2(3)-\Omega_{jk}^2(2).$$ For free particles, with $\Omega = \Omega_0$, this is satisfied by the usual free three-point vertex $$\label{rev3ptfree} \Gamma^0_{ijk}=i(k_1-k_2)_k\delta_{ij}+ c.p.$$ The reader can verify that the FSE equation (\[rev3pt\]) has a solution of the form: $$\label{revfse} \Omega_{ijk}^{abc}(k_1,k_2,k_3)=[\Omega (1)+\Omega (2) +\Omega (3)]^{-1}f^{abc}\left \{\Gamma_{ijk}+\{\Omega (1)\frac{k_{1i}}{k_1^2} [\Omega_{jk}(2)-\Omega_{jk}(3)]+c.p.\}\right \}$$ which respects the Ward identity of Eq. (\[revward\]), by virtue of the massless pole terms of Eq. (\[revfse\]). It should now be clear that these longitudinally-coupled massless excitations will occur, as a result of enforcing gauge invariance, for every n-point function. We will shortly identify these with couplings of the GNLS field introduced in our conjecture for the infrared-effective action. 
So far the vertex function $\Gamma_{ijk}$ is undetermined. As [@corn87] argues, one can carry out a program of expressing all higher-point functions in terms of the two-point function, and then the FSE (or the equivalent dressed-loop expansion) becomes a non-linear, non-perturbative equation for this two-point function $\Omega$. The idea, known also as the gauge technique, is to find an infrared-effective approximation to $\Gamma_{ijk}$ that exactly satisfies the Ward identity (\[revgamma\]) for any $\Omega$. One can, at least in principle, find such infrared-effective approximations for four- and higher-point functions as functionals of $\Omega$. In fact, a very general form for the “solution" to the Ward identity for the three- and four-point functions is known [@cornhou; @papa] for arbitrary dependence of $\Omega$ on momentum. The word “solution" is enclosed in quotes because it is not unique; any completely-conserved term can be added to the “solution" for $\Gamma_{ijk}$, for example. But the point is that purely-conserved terms are of higher order in momenta than the terms saved in the gauge technique. The general solution of [@cornhou] is: $$\label{cornhoueq} \Gamma_{ijk}=\delta_{ij}(k_1-k_2)_k-\frac{k_{1i}k_{2j}}{2k_1^2k_2^2} (k_1-k_2)_l\Pi_{lk}(k_3)-[P_{il}(k_1)\Pi_{lj}(k_2) -P_{jl}(k_2)\Pi_{li}(k_1)]\frac{k_{3k}}{k_3^2}+c.p.$$ where the first term on the right is the free vertex $\Gamma^0_{ijk}$ and $\Pi_{ij}(k)\equiv P_{ij}(k)\Pi (k)$ is the transverse pinch-technique [@corn82; @binpap] self-energy, related to $\Omega_{ij}$ by: $$\label{omegaeq} \Omega_{ij}^2=P_{ij}[\Omega_0^2+\Pi \{\Omega\}]$$ where $\Omega_0^2=k^2$ is the free gluon contribution.
In the simple case studied by us, $\Pi=M^2$, and the resulting expression for $\Gamma_{ijk}$ is: $$\label{3ptward} \Gamma_{ijk}=\delta_{ij}(k_1-k_2)_k+\frac{M^2}{2}\frac{k_{1i}k_{2j}(k_1-k_2)_k} {k_1^2k_2^2}+c.p.$$ As we saw above, in ordinary quantum mechanics in one spatial dimension $x$ the exponent $S(x)$ of $\psi$ can be determined systematically from a set of non-linear algebraic equations, such that each term of $\mathcal{O}(x^3)$ or higher can be expressed in terms of the quadratic coefficient $\Omega$. Finally, $\Omega$ is determined by a single non-linear equation, equivalent to a dressed-loop expansion. Combining the pinch technique and the gauge technique gives a completely analogous program for gauge theories, based on “solving" the Ward identities insuring gauge invariance. While this program can only be carried out approximately, it is gauge-invariant by construction. Ultimately it yields a dressed-loop equation for a single transverse operator $\Omega_{ij} (k)\equiv P_{ij}(k)\Omega(k)$. The pinch-technique self-energy $\Pi$ is itself a complicated function of $\Omega$, found by using dressed propagators of the general form given in Eq. (\[freeprop\]), with $\Omega_0$ replaced by $\Omega$ and with appropriate vector kinematics. In effect, $\Pi$ is the on-shell self-energy and any $\pm i\Omega (k)$ occurring in $\Pi$ is a fourth component of a Euclidean four-vector $(k_4,k)$ that is on-shell, by which we mean that $k_4^2+k^2+M^2=0$, or $k_4=\pm i\sqrt{k^2+M^2}$. All that we need from this development in the main text is Eq. (\[3ptward\]), which will be used in the large-$M$ expansion of the three-point function $\Omega^{abc}_{ijk}$. [99]{} H. G. Loos, Phys. Rev.  [**188**]{}, 2342 (1969). J. P. Greensite, Nucl. Phys. B [**158**]{}, 469 (1979). M. B. Halpern, Phys. Rev. D [**19**]{}, 517 (1979). J. P. Greensite, Nucl. Phys. B [**166**]{}, 113 (1980). R. Jackiw, Rev. Mod. Phys.  [**52**]{}, 661 (1980). R. P. Feynman, Nucl. Phys. 
B [**188**]{}, 479 (1981). R. Jackiw, in [*Current Algebras and Anomalies*]{}, edited by S. B. Trieman, R. Jackiw, B. Zumino, and E. Witten (Princeton University Press, Princeton, NJ, 1985), p. 258. J. M. Cornwall, Phys. Rev. D [**38**]{}, 656 (1988). J. M. Cornwall and G. Tiktopoulos, Phys. Rev. D [**47**]{}, 1629 (1993). P. Mansfield, Nucl. Phys. B [**418**]{}, 113 (1994). I. I. Kogan and A. Kovner, Phys. Rev. D [**52**]{}, 3719 (1995). H. Reinhardt, Nucl. Phys. B [**503**]{}, 505 (1997). K. Zarembo, Mod. Phys. Lett. A [**13**]{}, 1709 (1998). J. Pachos, Phys. Lett. B [**432**]{}, 187 (1998). P. Mansfield and M. Sampaio, Nucl. Phys. B [**545**]{}, 623 (1999). C. Feuchter and H. Reinhardt, Phys. Rev. D [**70**]{}, 105021 (2004); H. Reinhardt and C. Feuchter,Phys. Rev. D [**71**]{}, 105002 (2005); C. Feuchter and H. Reinhardt, Nucl. Phys. Proc. Suppl.  [**141**]{}, 205 (2005). J. Greensite and S. Oleník, arXiv:hep-lat/0610073. J. M. Cornwall, Phys. Rev. D [**26**]{}, 1453 (1982). A. C. Aguilar and J. Papavassiliou, arXiv:hep-ph/0610040, and references therein. J. H. Field, Phys. Rev. D [**66**]{}, 013013 (2002) summarizes various phenomenological estimates of the QCD gluon mass up to 2002. E. G. S. Luna, arXiv:hep-ph/0609149; A. A. Natale, arXiv:hep-ph/0610256 and references therein; A. C. Aguilar, A. Mihara and A. A. Natale, Phys. Rev. D [**65**]{}, 054011 (2002). C. Alexandrou, P. de Forcrand and E. Follana, Phys. Rev. D [**63**]{}, 094504 (2001). D. Binosi and J. Papavassiliou, J. Phys. G [**30**]{}, 203 (2004), and references therein. J. M. Cornwall and W. S. Hou, Phys. Rev. D [**34**]{}, 585 (1986). J. M. Cornwall, R. Jackiw and E. Tomboulis, Phys. Rev. D [**10**]{}, 2428 (1974). J. M. Cornwall, Phys. Rev. D [**10**]{}, 500 (1974). J. M. Cornwall, W. S. Hou and J. E. King, Phys. Lett. B [**153**]{}, 173 (1985). J. M. Cornwall and B. Yan, Phys. Rev. D [**53**]{}, 4638 (1996). G. Alexanian and V. P. Nair, Phys. Lett. B [**352**]{}, 435 (1995). W. 
Buchmuller and O. Philipsen, Phys. Lett. B [**397**]{}, 112 (1997). J. M. Cornwall, Phys. Rev. D [**57**]{}, 3694 (1998). F. Eberlein, Phys. Lett. B [**439**]{}, 130 (1998); Nucl.Phys. B [**550**]{}, 303 (1999). F. Karsch, T. Neuhaus, A. Patkos and J. Rank, Nucl. Phys.B [**474**]{}, 217 (1996). U. M. Heller, F. Karsch and J. Rank, Phys. Rev. D [**57**]{}, 1438 (1998) and references therein. A. Cucchieri, F. Karsch and P. Petreczky, Phys. Rev. D [**64**]{}, 036001 (2001). A. Nakamura, T. Saito and S. Sakai, Phys. Rev. D [**69**]{}, 014506 (2004). D. Karabali, C. j. Kim and V. P. Nair, Phys. Lett. B [**434**]{}, 103 (1998), and references therein. J. M. Cornwall, Nucl. Phys. B [**157**]{}, 392 (1979). J. M. Cornwall, Phys. Rev. D [**57**]{}, 7589 (1998). C. A. A. de Carvalho, J. M. Cornwall and A. J. da Silva, Phys. Rev. D [**64**]{}, 025021 (2001). J. Papavassiliou, Phys. Rev. D [**47**]{} (1993) 4728. [^1]: Email: Cornwall@physics.ucla.edu
--- abstract: '3D video coding is one of the most popular research areas in multimedia. This paper reviews the recent progress of the coding technologies for multiview video (MVV) and free view-point video (FVV), which is represented by MVV and depth maps. We first discuss the traditional multiview video coding (MVC) framework with different prediction structures. The rate-distortion performance and the view switching delay of the three main coding prediction structures are analyzed. We further introduce the joint coding technologies for MVV and depth maps and evaluate their rate-distortion performance. The scalable 3D video coding technologies are reviewed in terms of quality and view scalability, respectively. Finally, we summarize the bit allocation work in 3D video coding. This paper also points out some future research problems in high efficiency 3D video coding, such as view switching latency optimization in the coding structure and bit allocation.' author: - 'Qifei Wang [^1]' bibliography: - '3DVCoding.bib' title: An Overview of Emerging Technologies for High Efficiency 3D Video Coding ---

multiview video, free view-point video, depth map, multiview video coding, 3D video coding, bit allocation

Introduction
============

Immersive 3D visual content representation has been studied for decades. Nowadays, 3D visual representation is applied in both professional fields (tele-immersive medicine and communications) [@wang2015unsupervised] and personal applications (virtual reality and 3D computer gaming) [@zhang2007multiview]. In different applications, the 3D visual content has various representations, such as point clouds, meshes, multiview video (MVV), and texture video plus depth maps [@yemez2007scene]. For all these representations, the data used to represent the 3D visual content is significantly larger than the data of 2D visual representations such as images and videos.
In recent years, worldwide research effort has been devoted to data compression to improve the accessibility of 3D visual content. For video-based applications, e.g. MVV and free view-point video (FVV) [@smolic20113d], multiview video coding (MVC) [@merkle2007efficient] and several advanced 3D video coding technologies have been proposed based on the traditional video coding framework. For graphics applications, like computer gaming and light fields, where point clouds and meshes are usually applied, 3D mesh coding [@peng2005technologies] has been widely studied. In particular, scalable 3D mesh coding [@cao20123d] has gained interest from both academia and industry. Due to the growing 3D industry, ISO/IEC JTC 1/SC 29/WG 11 (Moving Picture Experts Group—MPEG) and the ITU-T SG 16 Working Party 3, the two main international standardization organizations, have worked both individually and jointly on the standardization of 3D video coding [@vetro2011overview] and 3D mesh coding. This paper reviews the recent progress of 3D video coding technologies. The 3D video discussed in this paper mainly includes binocular video, MVV, and FVV represented as texture video plus depth maps. We review the MVC coding framework and the joint coding technologies for MVV and depth maps. We also review scalable 3D video coding frameworks for quality and view scalability. Finally, the bit allocation in 3D video coding is summarized. This paper also points out some unsolved problems, such as optimizing the coding structure and bit allocation under a view-switching behavior model.
The rest of this paper is organized as follows: Section \[sec:3D\_representation\] briefly introduces background knowledge of 3D video representations, depth map generation, and view synthesis; Section \[sec:mvc\] presents the MVC framework and its extensions to FVV coding; Section \[sec:joint\_coding\] analyzes the joint coding technologies for MVV and depth maps; Section \[sec:scalable\_coding\] reviews both quality and view scalable 3D video coding; Section \[sec:bit\_allocation\] describes bit allocation for FVV coding; the conclusions are summarized in Section \[sec:conclusion\]. 3D Video Representations {#sec:3D_representation} ======================== In this section, we briefly introduce background information about 3D video representations, depth map generation, and virtual view synthesis. Multiview Video and Free View-point Video ----------------------------------------- In the human visual system (HVS), each eye captures the scene through its own channel. The brain merges the two streams of images and generates stereo vision from the disparity between the images of the two channels. Based on this observation, stereoscopic video was proposed: the videos from two parallel cameras are presented to the two eyes respectively, so that a viewer obtains a stereo viewing experience when watching the stereoscopic video [@urey2011state]. Stereoscopic video can be captured by a binocular camera as shown in Fig. \[fig:binocular\_camera\]. Although binocular video can generate stereo vision for the HVS, the view point is still fixed. To enhance the immersive viewing experience, MVV was proposed to add additional view points for the observer. MVV is usually captured by camera array systems as shown in Fig. \[fig:mvv\_camera\]. Compared to binocular video, MVV expands the vision field to the area captured by the camera array.
Therefore, when watching MVV, the viewer can select any view point captured by the camera array. Moreover, an autostereoscopic video system displays the video from multiple view points simultaneously on a polarized screen, so that the viewer can watch stereoscopic video from multiple physical positions. Although MVV can provide a more immersive viewing experience than binocular stereoscopic video, the view point is nevertheless limited to the captured ones. To obtain a fully immersive viewing experience, FVV [@tanimoto2011free] was proposed, which requires arbitrary view-point accessibility. However, it is physically hard to obtain a dense sampling of the whole 3D space. Therefore, FVV is usually achieved by virtual view synthesis from sparsely captured texture video and geometry information. The geometry information can be represented in multiple formats, such as depth, point clouds, meshes, voxels, etc. [@yemez2007scene]. For the purpose of data streaming and real-time processing, depth maps are the most popular format used in FVV. Therefore, FVV is usually represented by MVV captured by a sparse camera array plus depth maps [@muller20113]. Depth Map Generation -------------------- Depth maps, which represent the quantized distance from the object to the camera plane in 3D space, can be generated in multiple ways. The highest-precision depth maps are usually generated by stereo matching between multiple images from different views [@sun2005symmetric]. Some recent research can also generate depth maps from a single image by exploiting the implicit geometry priors within the texture image [@saxena2005learning]. Besides these computational approaches, depth maps can also be captured by depth sensors.
Depth sensors can be categorized into three classes by their sensing principles: structured light [@wang2015computational] (Microsoft Kinect version 1), time-of-flight infrared cameras [@zhang2012microsoft] (Microsoft Kinect version 2), and laser scanning (MC3D) [@matsuda2015mc3d]. Although all these depth sensors suffer from noise and low resolution [@wang2015evaluation], they can capture depth maps in real time. Therefore, for real-time applications, such as interactive computer gaming, depth sensors are widely applied. On the other hand, for applications like 3DTV, which require high-precision depth information, computational depth estimation approaches are usually applied. Virtual View Synthesis ---------------------- In the early stage of 3D video, the virtual view video was interpolated from the texture images of its reference views using coarse geometry models. However, without an accurate epipolar geometry model, the interpolation usually causes artifacts on objects with large disparities. In the latest view synthesis framework, based on the epipolar geometry model and the calibrated camera parameters, each pixel in a reference view can be projected onto the virtual view plane using its depth value [@chan2007image]. In FVV, virtual view synthesis follows this framework as shown in Fig. \[fig:rendering\], where the virtual view is synthesized from its left and right reference views. To reduce the pixel synthesis drift caused by depth noise, the view synthesis module usually first synthesizes the depth maps of the virtual view from the depth maps of the reference views. Afterward, each pixel in the virtual view is backward mapped to the reference views and interpolated from its neighbors if the corresponding position falls outside the sampling grid of the reference views.
The backward projection reduces the ambiguity when merging pixels from multiple reference views. ![Virtual view synthesis.[]{data-label="fig:rendering"}](fig_view_synthesis.png){width="40.00000%"} Simulcast and Multiview Video Coding {#sec:mvc} ==================================== As MVV is formed by the video streams from multiple view points, each video stream can be encoded individually by a single-view video codec. This encoding framework is called simulcast, as shown in Fig. \[fig:mvc\_simulcast\]. Although the single-view video codec can exploit temporal and intra-frame correlation, the redundancy between different views still remains in the simulcast video streams. To compress the redundancy between different views, the multiview video coding (MVC) framework adds inter-view prediction to the simulcast coding framework. Fig. \[fig:mvc\_ks\] shows the MVV coding framework with inter-view prediction applied only to the key frames, called MVC\_KS in what follows. The coding framework in Fig. \[fig:mvc\_as\], called MVC\_AS in what follows, extends inter-view prediction to both key and non-key frames to fully exploit the inter-view redundancy. Fig. \[fig:mvc\_rd\] shows the rate-distortion (R-D) performance of the different coding frameworks. Two test sequences, “Dancer” and “Poznan\_Street”, both at 1080p and 30 fps, are used in this paper for all evaluation experiments. From the curves in Fig. \[fig:mvc\_rd\], we can see that the MVC frameworks improve the coding efficiency compared to the simulcast framework. The structure with full inter-view prediction (Fig. \[fig:mvc\_as\]) achieves a further R-D improvement over the structure with inter-view prediction on key frames only. For the sole purpose of compressing MVV, the MVC\_AS framework would be the optimal choice.
However, in FVV applications, the viewer may randomly switch the view point. Although the simulcast framework achieves the lowest R-D performance of the three, the video stream of each view point can be randomly accessed without any dependency. The MVC\_KS framework achieves both moderate R-D performance and moderate view-switching latency. The MVC\_AS framework achieves the best R-D performance at the cost of the highest view-switching latency. Therefore, the prediction structure should be optimized to balance the R-D performance and the view-switching latency [@ji2014online]. Based on the coding structures shown in Fig. \[fig:mvc\], the average view-switching latency can be modeled as follows. Assume there are $N$ frames in each group of pictures (GOP) and the deepest level of the inter-view dependency is $M$. Suppose the view switching is fully random and uniformly distributed among all the frames and views; the average view-switching delay $T$ can then be represented by $$T(M, N)=c\times M \times N. \label{equ:delay}$$ In equation \[equ:delay\], $c$ denotes a constant coefficient, i.e., the average view-switching delay is proportional to the product of the GOP size and the average depth of the inter-view dependency. Consequently, the overall optimization problem can be represented as $$\begin{split} \min_{M,N,Q}\; J(M,N,Q) &= D(Q)+\lambda R(M,N,Q)+\beta T(M,N) \\ \text{s.t.}\quad R(M,N,Q) &< R_C \end{split} \label{equ:optimization}$$ In equation \[equ:optimization\], $D$ and $R$ denote the coding distortion and bit rate, respectively, $Q$ denotes the quantization parameter, $\lambda$ and $\beta$ denote the Lagrange multipliers, and $R_C$ denotes the given bit-rate constraint. Increasing $M$ and $N$ can reduce the coding bit rate but also introduces additional view-switching latency.
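As a concrete illustration, the trade-off captured by equations \[equ:delay\] and \[equ:optimization\] can be explored with a brute-force search over a small candidate grid. The rate and distortion models and all constants below are illustrative assumptions, not values from the paper or from any codec:

```python
# Toy exploration of the prediction-structure trade-off in Eq. (equ:optimization).
# All model constants below are illustrative assumptions.

def switching_delay(M, N, c=1.0):
    # Eq. (equ:delay): average view-switching delay T = c * M * N.
    return c * M * N

def rate(M, N, Q, r0=1000.0):
    # Assumed rate model: deeper inter-view dependency (M), longer GOPs (N),
    # and coarser quantization (Q) all reduce the bit rate.
    return r0 / (M * N * Q)

def distortion(Q, d0=10.0):
    # Assumed distortion model: distortion grows with the quantization step.
    return d0 * Q

def best_structure(lam=0.5, beta=0.01, R_C=200.0):
    # Exhaustive search over a small candidate grid, standing in for
    # solving the KKT conditions of the constrained problem.
    candidates = [(M, N, Q) for M in (1, 2, 4)
                            for N in (4, 8, 16)
                            for Q in (1, 2, 4, 8)]
    feasible = [c for c in candidates if rate(*c) < R_C]
    return min(feasible,
               key=lambda c: distortion(c[2]) + lam * rate(*c)
                             + beta * switching_delay(c[0], c[1]))

print(best_structure())
```

With these toy models, a larger $\beta$ (heavier penalty on switching delay) pushes the search toward shallower dependency structures, which is the qualitative behavior the optimization is meant to capture.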
On the other hand, when $M$ and $N$ are small, the system obtains good random view accessibility at the cost of an increased bit rate. This constrained optimization problem can be solved via the Karush–Kuhn–Tucker (KKT) conditions. Multiview Video and Depth Maps Coding {#sec:joint_coding} ===================================== After depth maps were introduced into 3D video for virtual view rendering, depth map coding technologies received further attention from the 3D video coding community. Since the depth information is usually represented as gray-level pictures, the depth maps can be treated as monochromatic MVV sequences and encoded by MVC technologies. Therefore, most 3D video coding frameworks encode the MVV and depth maps separately with MVC. Other existing depth map coding technologies encode the depth maps by exploiting their specific signal properties [@oh2011depth]. Although MVC can remove both the intra-view and inter-view redundancy in the MVV and in the depth maps, the correlation between the MVV and the depth maps cannot be exploited by the MVC framework. Therefore, various joint multiview video and depth map coding technologies have been proposed to improve the 3D video coding efficiency on top of the MVC framework. Since depth information can be converted to disparity, using the depth maps to assist inter-view prediction is one direction for improving the coding efficiency. In [@yea2009view], Yea et al. proposed an auxiliary view synthesis prediction (VSP) mode to improve the inter-view prediction efficiency. In the VSP mode, an additional reference frame is generated for each coding frame by warping the pictures of the reference view to the current view with the decoded depth maps. The rendered virtual reference frame is added to the reference frame buffer for prediction.
Compared to the reference frame from the reference view, the rendered virtual reference frame lies in the same camera plane as the current frame. Therefore, the estimated disparity vector between blocks in the current frame and the virtual reference frame is much smaller than that between the current frame and the reference frame from the reference view, so the VSP mode reduces the R-D cost of the disparity vectors compared to the traditional inter-view prediction mode. Moreover, if a non-translational transform exists between the views, the VSP mode can warp the reference frame to the same camera plane as the current frame, which significantly reduces the prediction residue compared to translational disparity compensation prediction (DCP). Although the VSP mode can reduce the bit rate of inter-view prediction, the additional buffer increases the complexity at both the encoder and decoder sides. To exploit the depth information for inter-view prediction without this cost, Wang et al. [@wang2012free] proposed a depth-assisted disparity compensation prediction (DADCP) mode. In this mode, the encoder calculates the disparity vector of each pixel in a block from the calibrated camera parameters and the depth value of that pixel. During prediction, the disparity vectors calculated from the depth maps are quantized to quarter-pixel precision, and the prediction residue of the DADCP mode is generated by per-pixel inter-view prediction. Since the depth maps can be treated as side information for the texture video coding if they are encoded/decoded ahead of the multiview video, the bit rate of the depth-derived disparity vectors is saved. The DADCP scheme thus saves the disparity-vector bit rate without the additional buffer cost of VSP.
Besides, since DADCP does not require any additional motion search in the prediction, its run-time complexity is also lower than that of the VSP mode. Figs. \[fig:dadcp\_dancer\] and \[fig:dadcp\_pstreet\] show the R-D performance of MVC, MVC with VSP, and MVC with DADCP on the two test sequences, respectively. From the curves, we can see that DADCP outperforms both the traditional MVC and MVC with VSP, and the R-D gain increases with rising bit rate. Compared to the traditional MVC, the gain of DADCP comes from the bit-rate reduction of the disparity vectors in inter-view prediction, while the rendering distortion of the synthesized virtual reference frame suppresses the R-D performance of the VSP mode. Besides using depth to assist inter-view prediction, other research exploits the similarity of the motion vector fields of the MVV and the depth maps to reduce the overall bit rate of 3D video coding. Despite lacking texture, the depth maps record the same scene as the texture video, so the depth maps and texture video can share similar motion vector fields. Zhang et al. [@zhang2010joint] proposed a joint coding framework that uses the motion vectors of the texture video as candidate reference motion vectors for the depth map coding. However, since the depth maps are usually noisier than the texture video, the optimal motion vectors of the depth maps usually differ from those of the corresponding blocks in the texture video, so the R-D gain of sharing motion vectors between texture video and depth maps is limited. Guo et al. [@guo2006inter] proposed an inter-view direct mode based on the parallelogram constraint between the motion and disparity vectors.
The parallelogram constraint comes from the conservation of the temporal and spatial optical flow: for any two pictures from one view and the two corresponding frames from another view, the two motion vectors and the two disparity vectors between the corresponding pixels form a parallelogram. This scheme reduces the coding bit rate of the reference motion or disparity vectors and also saves run-time complexity at the encoder side. Due to the smoothness of the depth maps, some efforts have been spent on down-sampling the depth maps to a lower resolution and up-sampling them after decoding [@oh2009depth]. However, since the bit rate of the depth maps is usually only 10% to 20% of that of the texture video, the savings on depth maps are limited, and the majority of the work still focuses on reducing the bit rate of the MVV by joint coding techniques. Given the advantages of geometry block partitioning, Wang et al. [@wang2011reduced] proposed a practical geometry block partitioning scheme that exploits the correlation of object boundaries between texture and depth images. The proposed scheme searches for the partition line with a linear operator applied to the input texture and depth information, which significantly reduces the complexity of the geometry partitioning. By enabling geometry partitioning, the proposed coding framework achieves a 6% R-D gain compared to the traditional MVC framework with only an 18% increase in encoding time [@wang2013complexity]. The proposed partition-line searching scheme can also be extended to traditional 2D video compression [@wang2012complexity]. For binocular stereoscopic video coding, some researchers have proposed frame-compatible 3D video coding frameworks that merge the frames from the two views into a single frame. The interleaving modes include top-bottom, side-by-side, row-interleaved, column-interleaved, and checkerboard [@vetro2010frame].
The sequences of interleaved images can then be compressed by a traditional single-view video codec. Scalable 3D Video Coding {#sec:scalable_coding} ======================== Due to the limited bandwidth and the high bit rate of 3D video, scalable 3D video coding has also gained much research attention recently. In [@kurutepe2007client], Kurutepe et al. proposed a resolution-scalable MVC framework. In this framework, the encoder down-samples all the texture frames to a lower resolution. The base-layer stream is generated by encoding all the low-resolution texture frames with MVC. The enhancement layer is the residue between the original frames and the decoded base-layer frames, and for each view it is encoded independently by a traditional single-view video codec. The encoder delivers the whole base layer plus the enhancement layers of the views selected by the viewer. If the viewer changes the view point, the enhancement layer can be switched with very low latency since there is no inter-view dependency in the enhancement layer. Therefore, this framework can also be applied to scalable coding of FVV. This framework was further extended to a quality-scalable MVC framework in [@ji2014online] by applying traditional scalable coding technology in MVC. Figs. \[fig:svc\_dancer\] and \[fig:svc\_pstreet\] show the R-D performance of these two coding frameworks on the two test sequences, respectively. From Fig. \[fig:svc\_rd\], we can see that the quality-scalable framework outperforms the resolution-scalable framework, since the former applies inter-layer prediction when encoding the enhancement layer. For MVV and FVV, view scalability is a new research direction in 3D video coding. Shimizu et al. [@shimizu2007view] proposed a view-scalable MVC framework based on view synthesis. In their framework, the video sequence of one view is selected as the base view and encoded by a traditional single-view codec.
The remaining views are treated as enhancement views, whose frames are predicted from the corresponding frames in the base view. In this framework, the decoder obtains full view-switching scalability since every view can be decoded from the base view and its own prediction residue. However, the coding efficiency is much lower than that of the MVC framework. To further improve the coding efficiency, temporal prediction can also be applied to the enhancement views; random view accessibility is still maintained since all the views depend only on the base view. In this framework, the number of base views and the prediction between base and enhancement views can be extended; for example, a hierarchical inter-view prediction structure can be applied to improve the view scalability. In a 3D video coding system, the coding framework can thus be optimized by balancing the coding efficiency and the view scalability. Bit Allocation of 3D Video Coding {#sec:bit_allocation} ================================= In FVV, since the video sequences displayed at the client side are synthesized from the texture video sequences and the depth maps of the reference view points, the quality of the synthesized video is determined by the quality of both the decoded texture video and the decoded depth maps. Based on the principle of virtual view synthesis, the distortion of the texture video introduces a linear error into the synthesized video frames, whereas the distortion of the depth maps results in pixel-position drift errors in the synthesized frames. To obtain the optimal quality of the synthesized video, the encoder has to optimize the bit allocation between the texture video and the depth maps. In [@liu2009joint], Liu et al. proposed a frame-level view synthesis distortion estimation algorithm and designed a bit allocation algorithm based on the resulting rate-distortion model of 3D video coding.
However, since the frequency-domain view synthesis distortion model requires regions with uniform disparity error, the frame-level model yields an inaccurate R-D model. To obtain an accurate R-D model of 3D video coding, Wang et al. [@wang2010region] proposed a region-based view synthesis distortion estimation model that partitions the whole frame into regions with uniform depth values. Estimating the distortion separately for each depth-uniform region significantly reduces the error of the view synthesis distortion estimation. Based on the proposed R-D model, Wang et al. [@wang2012free] proposed a bit allocation algorithm with a single-pass search. The results reported in [@wang2012free] demonstrate that the region-based R-D model obtains a more accurate estimate of the view synthesis distortion and improves the R-D performance of the synthesized virtual view video compared to the frame-level R-D model. Besides, Yuan et al. [@yuan2011model] solved the bit allocation problem by Lagrangian optimization, which optimizes the R-D performance with very low complexity. Beyond the bit allocation between the MVV and the depth maps, other works have studied the bit allocation between different views. In [@shao2012asymmetric], Shao et al. proposed an asymmetric stereoscopic video coding scheme and optimized the bit allocation based on the masking effect of the HVS. Yuan et al. [@yuan2015rate] proposed a bit allocation algorithm between different views based on view switching. However, since the view-switching behavior is complicated, incorporating a view-switching behavior model into the bit allocation is still an unsolved problem for free view-point video coding with multiple input view points. Conclusions {#sec:conclusion} =========== In this paper, we reviewed the recent progress of high-efficiency 3D video coding technologies. Most MVV and FVV coding is based on the MVC framework.
For FVV represented by MVV and depth maps, the majority of research focuses on joint MVV and depth map coding techniques to reduce the overall coding bit rate. When view switching is considered, the prediction structure of MVV and FVV needs to be optimized to balance the view-switching latency and the R-D performance. To make 3D video streaming adaptive to the bandwidth and the viewing behavior, scalable MVC and FVV coding remain an important research area to explore. Finally, bit allocation and rate control also need to be further optimized by taking the viewing behavior model into account. With the emerging applications of 3D video, the coding technologies will be further improved to meet the requirements of various applications. [Qifei Wang]{} received the B.S. degree in information and computing science from Beijing University of Posts and Telecommunications, China, in 2007, and the Ph.D. degree in control science and engineering from Tsinghua University, China, in 2013. He joined EECS, University of California, Berkeley, US, in 2014. His current research interests include computer vision, machine learning, video processing, and communications. [^1]: Qifei Wang was with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA, qifei.wang@eecs.berkeley.edu
--- author: - 'K. Dennerl' date: 'Received 16 July 2002 / Accepted 31 July 2002' title: 'Discovery of X–rays from Mars with Chandra' --- Introduction ============ During the last years our knowledge about the X–ray properties of solar system objects was considerably enhanced. While the Sun was long known to be an X–ray source [@51phr008], X–rays from the Earth [@68jgr012], the Moon [@74mon001; @91nat046], and Jupiter [@83jgr013] were detected a few decades later. In 1996 comets were discovered as a new class of X–ray sources [e.g. @96sci001; @97sci002; @97apj353]. Prompted by this discovery, Cravens [@00apj363] found evidence for X–ray emission from the heliosphere. A marginal X–ray detection of Saturn was reported by Ness et al. [@00aap289], and, more recently, X–ray emission was discovered from the Galilean satellites Io and Europa and the Io plasma torus [@02apj367], and from Venus [@02aap288]. Here the first detection of X–ray emission from Mars is reported.

  obsid   date (2001)   time \[UT\]         exp time \[s\]   instrument   $r$ \[AU\]   $\Delta$ \[AU\]   phase \[$^{\circ}$\]   elong \[$^{\circ}$\]   diam \[$''$\]
  ------- ------------- ------------------- ---------------- ------------ ------------ ----------------- ---------------------- ---------------------- -------------
  1861    July 4        11:47:39–21:00:30   33171            ACIS–I       1.446        0.462             18.2                   153.7                  20.3

obsid: Chandra observation identifier, exp time: exposure time, $r$: distance from Sun, $\Delta$: distance from Earth, phase: angle Sun–Mars–Earth, elong: angle Sun–Earth–Mars, diam: apparent diameter

Observation and data analysis ============================= Mars was observed on 4 July 2001, during the first opposition after the launch of Chandra. Details about the observing geometry can be found in Table \[obscxo\]. There were already expectations that Mars would be an X–ray source, though a very faint one (Sect. 4). Thus the prime goal for this observation was to get an unambiguous detection.
Direct imaging onto the CCDs of the ACIS–I array, operated in very faint mode, was chosen as the observing mode, in order to get spectral information, which can also be used for suppressing the background efficiently. Due to the high surface brightness of Mars ($4.1\mbox{ mag arcsec}^{-2}$), direct imaging with the more sensitive back–illuminated ACIS–S3 CCD might have led to problems with contamination by optical light. The observing technique is illustrated in Figure \[marspath\]. Chandra was pointed such that Mars would be close to the nominal aimpoint in I3 during the middle of the observation, to get the sharpest possible image and a minimum of vignetting. As the CCDs were read out every 3.2 s, there was no need for continuous tracking. The photons were individually transformed into the rest frame of Mars, using the geocentric ephemeris of Mars, computed with the JPL ephemeris calculator.[^1] Correction for the parallax of Chandra was done with the orbit ephemeris of the delivered data set. For the whole analysis events with Chandra standard grades were used. The ACIS particle background was reduced by screening out events with significant flux in border pixels of the $5\times5$ event islands. In order to avoid contamination of the X–ray signal by unrelated point sources, photons within the point spread function of such sources were removed. Due to the high spatial resolution of the Chandra X–ray telescope, this can be done very efficiently. The insensitive areas, which are created by this method in the celestial reference frame, are very small (cf.Fig.\[marspath\]). After transformation into the rest frame of Mars, they become diluted along the proper motion direction of Mars, causing an almost negligible reduction of the effective exposure along such streaks. In order to avoid inhomogeneous sensitivity caused by the gaps between the CCDs along the path of Mars (Fig.\[marspath\]), the analysis was restricted to photons within $100''$ from the center of Mars. 
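The per-photon transformation into the rest frame of Mars can be sketched as follows. This is a simplified tangent-plane version (the actual analysis used the full JPL geocentric ephemeris of Mars and Chandra's orbit ephemeris), and the mock linear ephemeris in the example is purely illustrative:

```python
import math

def to_mars_frame(photons, mars_ephemeris):
    """Shift photon sky positions into the rest frame of Mars.

    photons: list of (time, ra_deg, dec_deg) events.
    mars_ephemeris: callable t -> (ra_deg, dec_deg) of Mars' center at time t.
    Returns offsets (dx, dy) in arcsec relative to Mars' center,
    using a small-angle tangent-plane approximation.
    """
    offsets = []
    for t, ra, dec in photons:
        ra_m, dec_m = mars_ephemeris(t)
        # The RA offset is scaled by cos(dec) to give a true angular distance.
        dx = (ra - ra_m) * math.cos(math.radians(dec_m)) * 3600.0
        dy = (dec - dec_m) * 3600.0
        offsets.append((dx, dy))
    return offsets

# Example with a linearly moving mock ephemeris (illustrative numbers only):
eph = lambda t: (180.0 + 1e-4 * t, 0.0)
print(to_mars_frame([(0.0, 180.001, 0.0), (10.0, 180.002, 0.0)], eph))
```

Because each event is shifted by the planet's instantaneous position, a source co-moving with Mars accumulates at a fixed offset while background sources are smeared along the direction of proper motion.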
Observational results ===================== Morphology ---------- Mars is clearly detected in the Chandra image (Fig.\[mars2\]), and, although the photon statistics are low, some general information about the brightness distribution across the disk can be derived. Figure \[ms4spa\]b shows the average surface brightness as a function of the distance from the center of Mars. In the (instrumental) energy range 0.2–1.5 keV, a limb brightening by $\sim25\%$ is indicated on the sunward limb (Fig.\[ms4spa\]b, “dayside”), while a darkening is seen at the opposite limb (“nightside”). At larger distances, the radial brightness profiles show some evidence for a soft X–ray halo around Mars, extending out to $\sim30''$ (3 Mars radii), both on the “dayside” and the “nightside”. No evidence for such a halo is seen above 1.5 keV. The halo is most pronounced in the energy range 0.5–1.0 keV, where the annulus between $r=11''$ and $r=30''$ around Mars contains $34.6\pm8.4$ excess counts relative to the background expected for this area, as determined from the $r=50''$ to $r=100''$ annulus around Mars. Spectrum -------- The X–ray spectra of Mars and its halo are shown in Fig.\[ms4spc\]. For the spectrum of Mars, photons were extracted within $r<11''$ around its center; for the X–ray halo, photons within $11''<r<30''$ were used; and the background was taken in both cases from an annulus around Mars with $50''<r<100''$ (cf. Fig.\[ms4spa\]). The ACIS–I spectrum of Mars at energies below $E\sim0.8\mbox{ keV}$ can be well described ($\chi_{\nu}^2=0.95$ for 10 degrees of freedom) by a single Gaussian emission line at $E=0.65\pm0.01\mbox{ keV}$ with $\sigma=20\pm10\mbox{ eV}$ (i.e., not significantly broadened) and a flux of $\left(6.3\pm0.8\right)\cdot10^{-5}\mbox{ ph cm}^{-2}\mbox{ s}^{-1}$. Above energies of $\sim0.8\mbox{ keV}$, the presence of an additional component is indicated.
In the (instrumental) energy range $E=0.5$–$1.1\mbox{ keV}$ (Fig.\[ms4spc\]), the spectrum can be well modeled ($\chi_{\nu}^2=0.89$ for 12 degrees of freedom) by a single Gaussian emission line at $E=0.65\pm0.01\mbox{ keV}$, only instrumentally broadened ($\sigma\equiv0\mbox{ eV}$), with a flux of $\left(5.4\pm0.9\right)\cdot10^{-5}\mbox{ ph cm}^{-2}\mbox{ s}^{-1}$, superimposed on thermal bremsstrahlung with $kT$ fixed to 0.2 keV (as for the halo; see below), which contributes a flux of $\left(1.5\pm0.4\right)\cdot10^{-5}\mbox{ ph cm}^{-2}\mbox{ s}^{-1}$, or $\left(1.5\pm0.4\right)\cdot10^{-14}\mbox{ erg cm}^{-2}\mbox{ s}^{-1}$ in the energy range 0.5–1.2 keV. The X–ray halo can be well characterized by thermal bremsstrahlung emission with $kT=0.2\pm0.1\mbox{ keV}$ and a flux of $\left(0.9\pm0.4\right)\cdot10^{-5}\mbox{ ph cm}^{-2}\mbox{ s}^{-1}$, or $\left(0.9\pm0.4\right)\cdot10^{-14}\mbox{ erg cm}^{-2}\mbox{ s}^{-1}$ in the energy range 0.5–1.2 keV. Further spectral analysis is limited by the poor photon statistics. Temporal variability -------------------- The X–ray flux from Mars was fairly constant during the whole observation, at $\left(9.3\pm0.6\right)\cdot10^{-3}\mbox{ counts s}^{-1}$ for $E<1\mbox{ keV}$ (Fig.\[mlc\]c). According to the Kolmogorov–Smirnov test, the probability that the observed count rates are statistical fluctuations around a constant value is 30%; the significance for intrinsic variability is only $1.3\,\sigma$. The solar X–ray flux, monitored simultaneously with GOES–8 and GOES–10 (Fig.\[mlc\]a) and SOHO/SEM (Fig.\[mlc\]b), was also quite constant, and unusually low for this phase in the solar cycle (Fig.\[goes02\]). These satellites observed, within $8\degr$, the same solar hemisphere which was irradiating Mars (Fig.\[yohkoh\]).
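A Kolmogorov–Smirnov test of this kind compares the photon arrival times with the uniform distribution expected for a source of constant flux. A minimal, generic sketch of the test statistic (not the exact procedure used in the analysis, which is not specified beyond the quoted probability):

```python
def ks_uniform_statistic(arrival_times, t_start, t_stop):
    """Kolmogorov-Smirnov D statistic of photon arrival times against the
    uniform distribution expected for a source of constant flux."""
    times = sorted(arrival_times)
    n = len(times)
    d = 0.0
    for i, t in enumerate(times):
        cdf = (t - t_start) / (t_stop - t_start)  # model CDF for a constant rate
        # Compare the model CDF with the empirical CDF just before and after t.
        d = max(d, abs((i + 1) / n - cdf), abs(cdf - i / n))
    return d

# Evenly spread arrivals (constant source) give a small D;
# strongly clustered arrivals (variable source) give a large D.
print(ks_uniform_statistic([1, 2, 3, 4, 5, 6, 7, 8, 9], 0.0, 10.0))
print(ks_uniform_statistic([1.0, 1.1, 1.2], 0.0, 10.0))
```

Converting $D$ into the quoted 30% probability requires the KS distribution for the given sample size, e.g. as provided by standard statistics libraries.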
Modeling the X–ray appearance of Mars
=====================================

It was expected that fluorescent scattering of solar X–rays in the atmosphere would be the dominant source of the X–ray radiation from Venus and Mars. In order to get a reliable prediction about the X–ray properties of these planets, a numerical model was developed for computing simulated images in the individual fluorescence lines. This model was already successfully applied to the Chandra observation of Venus [@02aap288]. For Mars, it was used in order to optimize the time of the observation (Sect. 4.5). The ingredients of the model are the composition and density structure of the Mars atmosphere, the photoabsorption cross sections and fluorescence efficiencies of the major atmospheric constituents, and the incident solar spectrum.

Mars atmosphere
---------------

For the Mars atmosphere a simplified model was adopted, which describes the total density $\rho$ in the form of analytical expressions for heights 0–100 km [@90bac002] and 100–1000 km [@90bac001]. In order to get a smooth transition between both regions, the density at 100–135 km heights was computed with both methods, and the $\log\rho$ values were weighted according to their distance from 100 and 135 km. The analytical expressions are given for solar minimum, solar maximum, and the intermediate state. For the simulation, the solar maximum conditions were selected, motivated by the general behaviour of the soft solar X–ray flux (Fig.\[goes02\]). For simplicity it was assumed that the Mars atmosphere is composed of C, N, and O only, neglecting the $\sim1.6\%$ contribution of other elements, mainly Ar,[^2] and the following composition was adopted: 64.9% oxygen, 32.4% carbon and 2.7% nitrogen. As the main constituents, C and O, are contained in CO$_2$ (which accounts for more than 95% of the Mars atmosphere), this composition was assumed to be homogeneous throughout the atmosphere.
Photoabsorption cross sections
------------------------------

The values for the photoabsorption cross sections were taken from Reilman and Manson [@79apj362], supplemented by the following K–edge energies (see [@02aap288] for a discussion of these energies): $ E_{K_{\rm C}} = 296.1\mbox{ eV}, E_{K_{\rm N}} = 409.9\mbox{ eV}, E_{K_{\rm O}} = 544.0\mbox{ eV}. $ From these values and the C, N, and O contributions listed above, the effective photoabsorption cross section of the Mars atmosphere was computed (Fig.\[crscflx\]a). This, together with the atmospheric density structure, yielded the optical depth of the Mars atmosphere, as seen from outside (Fig.\[atmo2\]).

Solar radiation
---------------

The solar spectra for 2001 July 4 were derived from SOLAR2000 [@00ast001].[^3] To improve the coverage towards energies $E>100\mbox{ eV}$, synthetic spectra were computed with the model of Mewe et al. [@85aap287] and aligned with the SOLAR2000 spectra in the range 50–500 eV, by adjusting the temperature and intensity. This comparison yielded a fairly low average coronal temperature of only $\sim80\mbox{ eV}$. The adopted solar spectrum, scaled to the heliocentric distance of Mars, is shown in Fig.\[crscflx\]b (upper curve), with a bin size of 1 eV, which was used in order to preserve the spectral details.

Model grid
----------

The high dynamic range in the optical depth of the Mars atmosphere requires a model with high spatial resolution. Figure \[atmo2\] shows that the atmosphere becomes optically thick for X–rays with $E<1\mbox{ keV}$ already at heights above 100 km during solar maximum (and above 90 km during solar minimum). This implies that most of the scattering takes place at heights where the latitudinal dependence of the atmospheric density is negligible. Thus, the volume elements need to be calculated only on a two-dimensional grid (as in the case of Venus).
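The optical-depth bookkeeping described above (density profile times effective cross section, integrated along the vertical) can be sketched as follows. All constants here are illustrative placeholders, not the Sehnal density model or the Reilman–Manson cross sections used in the paper; they are chosen only to place the $\tau=1$ level at a plausible height.

```python
import math

# Illustrative sketch of the optical-depth computation: column density of
# an exponential atmosphere times an effective photoabsorption cross
# section. The three constants are HYPOTHETICAL placeholders, not the
# model inputs used in the paper.
N0 = 2.1e17      # number density at the surface [cm^-3] (placeholder)
H = 11.0         # density scale height [km] (placeholder)
SIGMA = 4.0e-19  # effective cross section near 0.5 keV [cm^2] (placeholder)

def vertical_tau(height_km, dz_km=1.0, top_km=300.0):
    """Vertical optical depth from height_km up to the top of the model,
    summed over 1 km cells (the paper's grid also uses 1 km cells)."""
    tau, z = 0.0, height_km
    while z < top_km:
        n = N0 * math.exp(-z / H)       # exponential density profile
        tau += n * SIGMA * dz_km * 1e5  # 1 km = 1e5 cm
        z += dz_km
    return tau

# Height at which the atmosphere becomes optically thick (tau ~ 1):
h = 0.0
while vertical_tau(h) > 1.0:
    h += 1.0
print(f"tau=1 reached near {h:.0f} km")  # order of 100 km for these inputs
```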
For the calculation a grid of cubic volume elements with a side length of 1 km was used. The model atmosphere was traced from the surface (at $r=3393.4\mbox{ km}$) to a height of 300 km. The simulated images were synthesized with 20 km resolution perpendicular to the line of sight. Details about the simulation itself can be found in [@02aap288].

Planning the Mars observation
-----------------------------

The simulation program was already used for optimizing the time of the Mars observation. Although the closest approach of Mars to Earth, with a minimum distance of 0.45 AU, occurred on 22 June 2001, the Chandra observation was postponed by a few weeks. This decision was motivated by the fact that the simulation indicated a practically uniform X–ray brightness across the whole planet for this time (cf. Fig.\[mslum1\]), while for phase angles of $\sim15\degr$ and more, a diagnostically more valuable view was predicted, with a characteristic brightening on the sunward limb (Fig.\[simsum\]a–c). The decision to postpone the Chandra observation was supported by the favorable fact that Mars was still approaching the perihelion of its orbit, so that its distance from Earth would increase only slightly to 0.46 AU. Furthermore, the small loss of X–ray photons due to the reduced solid angle would be almost compensated by the fact that Mars would then be closer to the Sun and would intercept more solar radiation.

Discussion
==========

Morphology
----------

The simulation shows (Fig.\[volem\]) that the scattering of solar X–rays takes place at heights above $\sim80\mbox{ km}$ and is most efficient between 110 km (along the subsolar direction) and 136 km (along the terminator). This behaviour is similar to Venus, where the volume emissivity was found to peak between 122 km and 135 km [@02aap288].
The fact that the volume emissivity for C is considerably higher than that of O is a direct consequence of the unusually soft solar spectrum during the Mars observation (cf. Fig.\[crscflx\]b). During the Venus observation, the photon fluxes from C and O were comparable. Figures \[simsum\]a–c show the simulated images of Mars at the K$_{\alpha}$ fluorescence lines of C, N, and O, for a phase angle of $18\fdg2$. Although Mars was almost fully illuminated, some brightening is already evident on the more sunward limb, especially at C and O, accompanied by a fading on the opposite limb. While a direct comparison with the observed Mars image (Fig.\[simsum\]d) suffers from low photon statistics, a similar trend can be seen in the surface brightness profiles (Fig.\[ms4spa\]b). Thus, the expected limb brightening (Sect. 4.5) was actually observed. The reason for the limb brightening and the different appearance of Mars in the three fluorescence lines is very similar to the case of Venus, and a discussion can be found in [@02aap288]. The close match between the simulated and observed morphology is an argument in favor of X–ray fluorescence as the dominant process responsible for the X–ray radiation of Mars. With simulations based on charge exchange interactions (Sect. 5.5), Holmström et al. [@01grl007] obtained a completely different X–ray morphology.

Spectrum, X–ray flux and luminosity
-----------------------------------

The ACIS–I spectrum of Mars is dominated by a single narrow emission line. Although this line appears at 0.65 keV, it is most likely the O–K$_{\alpha}$ fluorescence line at 0.53 keV. This conclusion is motivated by the fact that in the case of Venus a similar line was observed at 0.6 keV with the same detector [Fig. 9 in @02prc002], which could be uniquely identified to be at 0.53 keV by the additional LETG observation.
The apparent energy shift is most likely caused by optical loading, a superposition of the charges released by 0.53 keV photons and optical photons, during the 3.2 s exposure of each CCD frame. The simulated images can be used to estimate the expected photon flux from the whole visible side of Mars. For the three energies the following values are obtained: $f_{\rm C}=2.3\cdot10^{-4}$, $f_{\rm N}=5.0\cdot10^{-6}$, $f_{\rm O}=7.1\cdot10^{-5}\mbox{ ph cm}^{-2}\mbox{ s}^{-1}$. While the C and N emission lines are outside the energy range of ACIS–I, a direct comparison is possible for O–K$_{\alpha}$, where a flux of $\left(6.3\pm0.8\right)\cdot10^{-5}\mbox{ ph cm}^{-2}\mbox{ s}^{-1}$ was observed (Sect. 3.2). This flux is reduced to $\left(5.4\pm0.9\right)\cdot10^{-5}\mbox{ ph cm}^{-2}\mbox{ s}^{-1}$, if an additional bremsstrahlung component is added. In view of all the general uncertainties, these values are in good agreement with each other. Conversion of the observed flux to the luminosity requires knowledge about the angular distribution of the scattered photons. For this purpose, X–ray intensities were determined from simulated Mars images, computed for phase angles from $0\degr$ to $180\degr$ in steps of $1\degr$ (Fig.\[mslum1\]). By spherically integrating these intensities for the three energies over phase angle, the following luminosities are obtained from the simulation: 2.9 MW for C, 0.1 MW for N, and 1.7 MW for O. The total X–ray luminosity of Mars, 4.7 MW (or $\sim3.6\pm0.6\mbox{ MW}$ when adjusted to the observed O–K$_{\alpha}$ flux), agrees well with the prediction of Cravens and Maurellis [@01grl006], who estimated a luminosity of 2.5 MW due to X–ray fluorescence, with an uncertainty factor of about two. This is another argument in favor of X–ray fluorescence. Compared to its optical flux, the X–ray flux of Mars is very low: the visual magnitude $-2.1\mbox{ mag}$ corresponds to an optical flux $f_{\rm opt} = 1.8\times10^{-4}\mbox{ erg cm}^{-2}\mbox{ s}^{-1}$.
Adopting a total X–ray flux $f_{\rm x} = 9\times10^{-14}\mbox{ erg cm}^{-2}\mbox{ s}^{-1}$, a ratio $f_{\rm x}/f_{\rm opt} = 5\times10^{-10}$ follows. This is similar to the value $2\times10^{-10}$ observed for Venus [@02aap288]. In the case of X–ray fluorescence, the $f_{\rm x}/f_{\rm opt}$ ratio of Mars is generally expected to exceed that of Venus, because the optical albedo of Mars is lower than that of Venus, while their X–ray albedos are comparable. Both ratios, however, are expected to vary with time, in response to the temporarily variable solar X–ray flux (cf. Fig.\[goes02\]).

The ROSAT observation of Mars in 1993
-------------------------------------

Mars was observed with the ROSAT Position Sensitive Proportional Counter (PSPC) from 10–13 April 1993 on three occasions, for 1294 s, 2124 s, and 1099 s, respectively. As the pointing direction of the satellite was kept fixed during these observations, Mars was located at different positions in the $2\degr$ PSPC field of view (FOV). During the first and third exposure, Mars was partially obscured by a radial strut of the PSPC support structure, and was furthermore placed so far in the outer parts of the FOV, where the point spread function was severely degraded, that only the second observation is suited for a sensitive search for any X–ray emission. During this observation, Mars was at a heliocentric distance of 1.67 AU. The constraint to observe it at an elongation close to $90\degr$ implied a fairly large geocentric distance, 1.32 AU, so that Mars appeared as a disk with a diameter of only $7\farcs1$, seen at a phase angle of $37\fdg0$. Mars was not detected in this observation. From the second PSPC exposure, a $3\sigma$ upper limit of $4\cdot10^{-3}\mbox{ counts s}^{-1}$ can be derived in the energy range 0.1–0.9 keV, for a circle around the nominal position of Mars with a radius of $1'$.
How does this non–detection with ROSAT compare with the information which is now available on the X–ray properties of Mars? According to the SOLAR2000 data for 13 April 1993, the solar X–ray flux at 1 AU was about 30% fainter than during the Chandra observation (cf. Fig.\[goes02\]), but showed a similar spectral shape. Taking also the larger heliocentric distance into account, Mars received about half of the solar flux. The simulation then yields the following number of expected counts for the second PSPC observation: $5\mbox{ counts}$ at C, $0.003\mbox{ counts}$ at N, and $0.3\mbox{ counts}$ at O. The total number of expected counts, $\sim5$, is somewhat lower than the local background in the detect cell ($\sim8$). This suggests that the X–ray signal of Mars was just below the sensitivity limit of the ROSAT observation.

Temporal variability and the dust storm of 2001
-----------------------------------------------

Scattering of solar X–rays on very small dust particles was one of the early suggestions for explaining the X–ray emission from comets. Wickramasinghe and Hoyle [@96ass009] noted that X–rays can be efficiently scattered by dust particles, if their size is comparable to the X–ray wavelength. Such attogram dust particles ($\sim10^{-18}\mbox{ g}$) would be difficult to detect by other means. It might be possible that such particles are present in the upper Mars atmosphere, in particular during episodes of global dust storms. Incidentally, on June 26 a local dust storm on Mars originated and expanded quickly, developing into a planet–encircling dust storm by July 11 [@02ica015]. Such dust storms have been observed on roughly one–third of the perihelion passages during the last decades, but never so early in the Martian year. On July 4, this very vigorous storm had covered roughly one hemisphere (Fig.\[marsadd\]). This hemisphere happened to be visible at the beginning of the Chandra observation.
By the end of the observation, which covered one third of a Mars rotation, this hemisphere had mainly rotated away from our view (Figs.\[marsadd\]–\[mobs\]). Thus, a comparison of the Chandra data from both regions should reveal any influence of the dust storm on the X–ray flux. There is, however, no change in the mean X–ray flux between the first and second half of the observation, where 150 and 157 photons were detected, respectively. This implies that, if attodust particles are present in the upper Mars atmosphere, the dust storm did not lead to a local increase in their density high enough to modify the observed X–ray flux significantly. No statement, however, can be made about the situation below $\sim80\mbox{ km}$, as the solar X–rays do not reach these atmospheric layers (Fig.\[volem\]). While the general presence of some attodust in the upper atmosphere cannot be ruled out by the Chandra observation, the fact that the ACIS–I spectrum of Mars is dominated by a single emission line (Fig.\[ms4spc\]) shows that any contribution of such particles to the X–ray flux from Mars must be small compared to fluorescence, even in the process of a developing global dust storm.

The X–ray halo
--------------

Although the significance of a soft X–ray halo around Mars is only $\sim4\sigma$, its spectrum is clearly different from that of Mars itself (Fig.\[ms4spc\]), ruling out the possibility that the halo is an instrumental artefact related to the point spread function of the X–ray telescope. It can also be ruled out that the halo is caused by the vignetting of the telescope, because the $11''<r<30''$ halo contains $2.1\pm0.3$ times more photons than the same area in the $50''<r<100''$ background region, while vignetting would affect the number of photons by less than 5% at energies below 1.5 keV.
Furthermore, the halo cannot be an artefact of exposure variations introduced by removing the point sources, because no gradient in the surface brightness is observed at $E=1.5$–$10.0\mbox{ keV}$ (Fig.\[ms4spa\]b). Therefore, the following discussion assumes that the X–ray halo is real. While there is a lot of evidence that the X–rays from Mars are predominantly caused by fluorescent scattering of solar X–rays in its upper atmosphere, there is the possibility of an additional source of X–ray emission. When highly ionized heavy ions in the solar wind encounter atoms in the exosphere of Mars, they become discharged and may emit X–rays. This is the process which was found to be responsible for the X–ray emission of comets [@97grl001; @01sci008]. Its consequences for the X–ray emission of Mars were already investigated by several authors. Cravens [@00asr006] predicted an X–ray luminosity of $\sim0.01\mbox{ MW}$. Krasnopolsky [@00ica016] estimated an X–ray emission of $\sim4\times10^{22}\mbox{ ph s}^{-1}$. Adopting an average photon energy of 200 eV [e.g. @97grl001], this corresponds to an X–ray luminosity of 1.3 MW. Holmström et al. [@01grl007] computed a total X–ray luminosity of Mars due to charge exchange (within 10 Mars radii) of 1.5 MW at solar maximum, and 2.4 MW at solar minimum. For the X–ray halo observed within 3 Mars radii, excluding Mars itself, the Chandra observation yields a flux of $\left(0.9\pm0.4\right)\times10^{-14}\mbox{ erg cm}^{-2}\mbox{ s}^{-1}$ in the energy range $E=0.5$–$1.2\mbox{ keV}$ (Sect. 3.2). Assuming isotropic emission, this flux corresponds to a luminosity of $0.5\pm0.2\mbox{ MW}$. This value agrees well with the predictions of Krasnopolsky [@00ica016] and Holmström et al. [@01grl007], in particular when the spectral shape is extrapolated to lower energies.[^4] In addition to the luminosity, there is another argument in favor of the idea that the X–ray halo may be the signature of charge exchange.
Although this process produces a spectrum consisting of many narrow emission lines, the overall properties can be approximated by 0.2 keV thermal bremsstrahlung emission [@98pss001], and the spectrum of the X–ray halo agrees very well with such a model. Also the spectrum of Mars itself shows evidence for an emission component with this spectral shape (Fig.\[ms4spc\]). The Chandra data, however, indicate that the surface brightness of this component in the spectrum of Mars is one order of magnitude higher than that in the halo, averaged from one to three Mars radii. This is different from the result of computer simulations by Holmström et al. [@01grl007], where the surface brightness in front of Mars is lower than in the halo. In these simulations an empirical model of the proton flow near Mars was used, where the proton flux decreases strongly at the “magnetopause”, at $\sim680\mbox{ km}$ height. The fact that the surface brightness at the center was observed to be higher than expected could be an indication that the dilution of the heavy ion flux near the “magnetopause” might be less pronounced than assumed in the model. It has to be stressed, however, that the observational evidence for any emission component in addition to the X–ray fluorescence is near the sensitivity limit of the observation and that any statement about observational properties may be subject to considerable uncertainties.

Summary and conclusions
=======================

The Chandra observation clearly shows that Mars is an X–ray source. The luminosity, the X–ray spectrum, the morphology and the time variability are all consistent with fluorescent scattering of solar X–rays on oxygen atoms in the Mars atmosphere at heights above $\sim80\mbox{ km}$ as the main process for the observed radiation. No evidence for dust–related X–ray emission was found, despite the onset of a global dust storm, which had covered roughly one hemisphere at the time of the observation.
Differential measurements between the hemisphere affected by the dust storm and the quiet hemisphere showed no significant difference in the X–ray flux. There is, however, some evidence for an additional source of X–ray emission, indicated by a faint X–ray halo which can be traced to about three Mars radii, and by an additional component in the X–ray spectrum of Mars, which has a similar spectral shape as the halo. Within the available limited statistics, the spectrum of this component can be characterized by 0.2 keV thermal bremsstrahlung emission. The spectral shape and the luminosity are indicative of charge exchange interactions between highly charged heavy ions in the solar wind and exospheric hydrogen and oxygen around Mars. The significance of the halo, however, is only $4\sigma$, and additional observations will be needed for further studies. Such observations would also provide additional information about the temporal properties of the exosphere of Mars, in particular with respect to the solar cycle. It is a great pleasure to thank S. Wolk for his support in planning this observation, B. Aschenbach, V. Burwitz, J. Englhauser and C. Lisse for stimulating discussions, and G. Garradd, Y. Morita, T. WeiLeong and B. Flach–Wilken for providing the optical images. SOLAR2000 Research Grade v1.15 historical irradiances are provided courtesy of W. Kent Tobiska and SpaceWx.com. These historical irradiances have been developed with funding from the NASA UARS, TIMED, and SOHO missions. The SOHO CELIAS/SEM data were provided by the USC Space Sciences Center. SOHO is a joint European Space Agency, United States National Aeronautics and Space Administration mission. The Yohkoh image was obtained from the Yohkoh Data Archive Centre (YDAC). Yohkoh is a mission of the Japanese Institute for Space and Astronautical Science. Cravens, T. E. 1997, , 24, 105 —. 2000, , 532, L153 —. 2000, , 26, 1443 Cravens, T. E. & Maurellis, A. N.
2001, , 28, 3043 Dennerl, K., Burwitz, V., Englhauser, J., Lisse, C., & Wolk, S. 2002, , 386, 319 Dennerl, K., Burwitz, V., Englhauser, J., Lisse, C., & Wolk, S. 2002, in New Visions of the Universe in the XMM–Newton and Chandra Era, ed. F. Jansen, Vol. ESA SP–488; also available at astro-ph/0204263 Dennerl, K., Englhauser, J., & Trümper, J. 1997, Science, 277, 1625 Elsner, R. F., Gladstone, G. R., Waite, J. H., Crary, F. J., Howell, R. R., Johnson, R. E., Ford, P. G., Metzger, A. E., Hurley, K. C., Feigelson, E. D., Garmire, G. P., Bhardwaj, A., Grodent, D. C., Majeed, T., Tennant, A. F., & Weisskopf, M. C. 2002, , 572, 1077 Friedman, H., Lichtman, S. W., & Byram, E. T. 1951, , 83, 1025 Gorenstein, P., Golub, L., & Bjorgholm, P. 1974, , 9, 129 Grader, R. J., Hill, R. W., & Seward, F. D. 1968, , 73, 7149 Holmström, M., Barabash, S., & Kallio, E. 2001, , 28, 1287 Krasnopolsky, V. 2000, , 148, 597 Krause, M. O. 1979, , 8, 307 Lisse, C. M., Christian, D. J., Dennerl, K., Meech, K. J., Petre, R., Weaver, H. A., & Wolk, S. J. 2001, Science, 292, 1343 Lisse, C. M., Dennerl, K., Englhauser, J., Harden, M., Marshall, F. E., Mumma, M. J., Petre, R., Pye, J. P., Ricketts, M. J., Schmitt, J., Trümper, J., & West, R. G. 1996, Science, 274, 205 Metzger, A. E., Gilman, D. A., Luthey, J. L., Hurley, K. C., Schnopper, H. W., Seward, F. D., & Sullivan, J. D. 1983, , 88, 7731 Mewe, R., Gronenschild, E. H. B. M., & van den Oord, G. H. J. 1985, , 62, 197 Mumma, M. J., Krasnopolsky, V. A., & Abbott, M. J. 1997, , 491, L125 Ness, J.-U. & Schmitt, J. H. M. M. 2000, , 355, 394 Reilman, R. F. & Manson, S. T. 1979, , 74, 815 Schmitt, J. H. M. M., Snowden, S. L., Aschenbach, B., Hasinger, G., Pfeffermann, E., Predehl, P., & Trümper, J. 1991, , 349, 583 Sehnal, L. 1990, , 41, 115 —. 1990, , 41, 108 Smith, M. D., Conrath, B. J., Pearl, J. C., & Christensen, P. R. 2002, , 157, 259 Tobiska, W.
K., Woods, T., Eparvier, F., Viereck, R., Floyd, L., Bouwer, D., Rottman, G., & White, O. R. 2000, , 62, 1233 Wegmann, R., Schmidt, H. U., Lisse, C. M., Dennerl, K., & Englhauser, J. 1998, , 46.5, 603 Wickramasinghe, N. C. & Hoyle, F. 1996, , 239, 121

[^1]: available at http://ssd.jpl.nasa.gov/cgi-bin/eph

[^2]: Argon would produce a K$_{\alpha}$ fluorescence line at 2.96 keV with a fluorescence yield $y_{\rm Ar}=0.118$ [@79jpc001], which exceeds that of nitrogen by a factor of 20. The photoabsorption cross section of Ar–K$_{\alpha}$ is 15% of that of N–K$_{\alpha}$, and there are about half as many Ar atoms as N atoms, so that the X–ray albedo of Mars in the K$_{\alpha}$ fluorescence lines of N and Ar should be comparable. The incident solar flux at 3.0 keV, however, is many orders of magnitude lower than at 0.4 keV (Fig.\[crscflx\]b), consistent with the non–detection of any signal from Mars at 3.0 keV.

[^3]: available at http://SpaceWx.com/

[^4]: It has to be kept in mind, however, that there may be an additional uncertainty in these values, because the O–K$_{\alpha}$ fluorescence line was found to be displaced by $\sim120\mbox{ eV}$ (Sect. 5.2).
---
address:
- |
    Department of Mathematics\
    Texas A&M University\
    College Station\
    TX 77843\
    USA
- |
    Department of Mathematics\
    Dartmouth College\
    Hanover, NH 03755\
    USA
author:
- Marcelo Aguiar
- 'Rosa C. Orellana'
date: 'November 17, 2004'
title: |
    The Hopf algebra of uniform block permutations.\
    Extended abstract
---

[^1] [^2]

We introduce the Hopf algebra of uniform block permutations and show that it is self-dual, free, and cofree. These results are closely related to the fact that uniform block permutations form a factorizable inverse monoid. This Hopf algebra contains the Hopf algebra of permutations of Malvenuto and Reutenauer and the Hopf algebra of symmetric functions in non-commuting variables of Gebhard, Rosas, and Sagan.

Nous présentons l’algèbre de Hopf des permutations de blocs uniformes et démontrons qu’elle est auto-duale, libre et colibre. Ces résultats sont liés au fait que les permutations de blocs uniformes constituent un monoïde inverse factorisable. Cette algèbre de Hopf contient l’algèbre de Hopf des permutations de Malvenuto et Reutenauer et l’algèbre de Hopf des fonctions symétriques à variables non commutatives de Gebhard, Rosas, et Sagan.

Uniform block permutations
==========================

Set partitions {#S:setpar}
--------------

Let $n$ be a non-negative integer and let $[n]:=\{1,2,\ldots, n\}$. A *set partition* of $[n]$ is a collection of non-empty disjoint subsets of $[n]$, called [*blocks*]{}, whose union is $[n]$. For example, ${\mathcal{A}}=\big\{ \{2,5,7\}\{1,3\}\{6,8\}\{4\}\big\}$ is a set partition of $[8]$ with $4$ blocks. We often specify a set partition by listing the blocks from left to right so that the sequence formed by the minima of the blocks is increasing, and by listing the elements within each block in increasing order. For instance, the set partition above will be denoted ${\mathcal{A}}=\{1,3\}\{2,5,7\}\{4\}\{6,8\}$.
We use ${\mathcal{A}}\vdash [n]$ to indicate that ${\mathcal{A}}$ is a set partition of $[n]$. The [*type*]{} of a set partition ${\mathcal{A}}$ of $[n]$ is the partition of $n$ formed by the sizes of the blocks of ${\mathcal{A}}$. The symmetric group $S_n$ acts on the set of set partitions of $[n]$: given $\sigma\in S_n$ and ${\mathcal{A}}\vdash [n]$, $\sigma({\mathcal{A}})$ is the set partition whose blocks are $\sigma(A)$ for $A\in{\mathcal{A}}$. The orbit of ${\mathcal{A}}$ consists of those set partitions of the same type as ${\mathcal{A}}$. The stabilizer of ${\mathcal{A}}$ consists of those permutations that preserve the blocks, or that permute blocks of the same size. Therefore, the number of set partitions of type $1^{m_1}2^{m_2}\ldots n^{m_n}$ ($m_i$ blocks of size $i$) is $$\label{E:setpar} \frac{n!}{m_1! \cdots m_n! (1!)^{m_1}\cdots (n!)^{m_n}}\,.$$

The monoid of uniform block permutations
----------------------------------------

The monoid (and the monoid algebra) of uniform block permutations has been studied by FitzGerald [@f] and Kosuda [@k00; @k01] in analogy to the partition algebra of Jones and Martin [@j; @m]. A *block permutation* of $[n]$ consists of two set partitions ${\mathcal{A}}$ and ${\mathcal{B}}$ of $[n]$ with the same number of blocks and a bijection $f:{\mathcal{A}}\to{\mathcal{B}}$. For example, if $n=3$, $f(\{1,3\})=\{3\}$ and $f(\{2\})=\{1,2\}$ then $f$ is a block permutation. A block permutation is called *uniform* if it maps each block of ${\mathcal{A}}$ to a block of ${\mathcal{B}}$ of the same cardinality. For example, $f(\{1,3\})=\{1,2\}$, $f(\{2\})=\{3\}$ is uniform. Each permutation may be viewed as a uniform block permutation for which all blocks have cardinality $1$. In this paper we only consider block permutations that are uniform.
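As a sanity check, the count \[E:setpar\] can be verified by brute-force enumeration of set partitions for small $n$. The Python sketch below is an illustration, not part of the paper:

```python
from math import factorial
from collections import Counter

def set_partitions(elements):
    """Recursively generate all set partitions of a list of elements."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # insert `first` into each existing block in turn ...
        for i, block in enumerate(partition):
            yield partition[:i] + [block + [first]] + partition[i + 1:]
        # ... or make it a new singleton block
        yield partition + [[first]]

def count_by_formula(n, mult):
    """n! / (prod_i m_i! * prod_i (i!)^{m_i}) for type 1^{m_1} ... n^{m_n}."""
    denom = 1
    for size, m in mult.items():
        denom *= factorial(m) * factorial(size) ** m
    return factorial(n) // denom

n = 5
observed = Counter()
for partition in set_partitions(list(range(1, n + 1))):
    observed[tuple(sorted(len(block) for block in partition))] += 1

for sizes, count in observed.items():
    assert count == count_by_formula(n, Counter(sizes))
print(sum(observed.values()))  # 52 set partitions of [5] in total
```

For instance, the $15$ partitions of $[5]$ of type $1^1 2^2$ match $5!/(1!\,2!\,(1!)^1(2!)^2)=15$.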
To specify a uniform block permutation $f:{\mathcal{A}}\to{\mathcal{B}}$ we must choose two set partitions ${\mathcal{A}}$ and ${\mathcal{B}}$ of the same type $1^{m_1}\ldots n^{m_n}$ and for each $i$ a bijection between the $m_i$ blocks of size $i$ of ${\mathcal{A}}$ and those of ${\mathcal{B}}$. We deduce from \[E:setpar\] that the total number of uniform block permutations of $[n]$ is $$u_n:=\sum_{1^{m_1}\ldots n^{m_n}\vdash n} \left(\frac{n!}{(1!)^{m_1}\cdots (n!)^{m_n}}\right)^2\frac{1}{m_1!\cdots m_n!}$$ where the sum runs over all partitions of $n$. Starting at $n=0$, the first values are $$1,1,3,16,131,1496,22482,\ldots$$ This is sequence A023998 in [@sl]. These numbers and generalizations are studied in [@sps]; in particular, the following recursion is given in [@sps equation (11)]: $$u_{n+1}=\sum_{k=0}^n\binom{n}{k}\binom{n+1}{k}u_k\,,\quad u_0=1\,.$$

We represent uniform block permutations by means of graphs. For instance, either one of the two graphs in Figure \[F:twographs\] represents the uniform block permutation $f$ given by $$\{1,3,4\}\rightarrow\{3,5,6\},\ \{2\}\rightarrow \{4\}, \ \{5,7\}\rightarrow \{1,2\},\ \{6\}\rightarrow \{8\},\ \mbox{ and } \{8\}\rightarrow \{7\}\,.$$

*(Figure \[F:twographs\]: two equivalent graphs representing the uniform block permutation $f$; the picture code is omitted here.)*

Different graphs may represent the same uniform block permutation. For a graph to represent a uniform block permutation $f:{\mathcal{A}}\to{\mathcal{B}}$ of $[n]$ the vertex set must consist of two copies of $[n]$ (top and bottom) and each connected component must contain the same number of vertices on the top as on the bottom. The set partition ${\mathcal{A}}$ is read off from the adjacencies on the top, ${\mathcal{B}}$ from those on the bottom, and $f$ from those in between. The [*diagram*]{} of $f$ is the unique representing graph in which all connected components are cycles and the elements in each cycle are joined in order, as in the second graph of Figure \[F:twographs\]. The set $P_n$ of block permutations of $[n]$ is a monoid. The product $g\cdot f$ of two uniform block permutations $f$ and $g$ of $[n]$ is obtained by gluing the bottom of a graph representing $f$ to the top of a graph representing $g$. The resulting graph represents a uniform block permutation which does not depend on the graphs chosen. An example is given in Figure \[F:product\]. Note that gluing the diagram of $f$ to the diagram of $g$ may not result in the diagram of $g\cdot f$. The identity is the uniform block permutation that maps $\{i\}$ to $\{i\}$ for all $i$. Viewing permutations as uniform block permutations as above, we get that the symmetric group $S_n$ is a submonoid of $P_n$.
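The closed formula and the recursion for $u_n$ given above agree, as a quick numerical check confirms. The sketch below is an illustration, not part of the paper:

```python
from math import comb, factorial
from collections import Counter

def int_partitions(n, max_part=None):
    """Generate the integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in int_partitions(n - k, k):
            yield (k,) + rest

def u_direct(n):
    """Closed formula: sum over partition types 1^{m_1} ... n^{m_n} of n."""
    total = 0
    for lam in int_partitions(n):
        mult = Counter(lam)  # m_i = multiplicity of the part i in lam
        arrangements = factorial(n)
        for size, m in mult.items():
            arrangements //= factorial(size) ** m  # n! / prod (i!)^{m_i}
        partitions_of_type = arrangements
        for m in mult.values():
            partitions_of_type //= factorial(m)    # ... / prod m_i!
        total += partitions_of_type * arrangements
    return total

def u_recursion(nmax):
    """Recursion u_{n+1} = sum_k C(n,k) C(n+1,k) u_k, with u_0 = 1."""
    u = [1]
    for n in range(nmax):
        u.append(sum(comb(n, k) * comb(n + 1, k) * u[k] for k in range(n + 1)))
    return u

values = [u_direct(n) for n in range(7)]
print(values)  # [1, 1, 3, 16, 131, 1496, 22482]
assert values == u_recursion(6)
```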
*(Figure \[F:product\]: the product $g\cdot f$ of two uniform block permutations, computed by gluing the bottom of a graph for $f$ to the top of a graph for $g$; the picture code is omitted here.)*

We recall a presentation of the monoid $P_n$ given in [@f; @k00; @k01].
Consider the uniform block permutations $b_i$ and $s_i$: $s_i$ is the transposition exchanging $i$ and $i+1$, and $b_i$ is the uniform block permutation whose diagram joins $i$ and $i+1$ into a single block on both rows, all other blocks being singletons. [Diagrams of $b_i$ and $s_i$ omitted.] The monoid $P_n$ is generated by the elements $\{ b_i, s_i\, |\, 1\leq i\leq n-1\}$ subject to the following relations:

1. $s_i^2 =1, \qquad b_i^2 =b_i, \qquad 1\leq i \leq n-1$;

2. $s_is_{i+1}s_i=s_{i+1}s_is_{i+1}, \qquad s_ib_{i+1}s_i=s_{i+1}b_is_{i+1}, \qquad 1\leq i\leq n-2$;

3. $s_is_j=s_js_i, \qquad b_is_j=s_jb_i, \qquad |i-j|>1$;

4. $b_is_i=s_ib_i=b_i, \qquad 1\leq i \leq n-1$;

5. $b_ib_j=b_jb_i, \qquad 1\leq i,j\leq n-1$.

The submonoid generated by the elements $s_i$, $1\leq i\leq n-1$, is the symmetric group $S_n$, viewed as a submonoid of $P_n$ as above. We will see in Sections \[S:selfdual\] and \[S:weak\] that $P_n$ is a factorizable inverse monoid. Therefore, a presentation for $P_n$ may also be derived from the results of [@eef].

An ideal indexed by set partitions {#S:ideal}
----------------------------------

Let ${\Bbbk}P_n$ be the monoid algebra of $P_n$ over a commutative ring ${\Bbbk}$.
Given a set partition ${\mathcal{A}}\vdash [n]$, let $Z_{\mathcal{A}}\in{\Bbbk}P_n$ denote the sum of all uniform block permutations $f:{\mathcal{A}}\to{\mathcal{B}}$, where ${\mathcal{B}}$ varies: $$Z_{\mathcal{A}}:=\sum_{f:{\mathcal{A}}\to{\mathcal{B}}}f\,.$$ For instance, $Z_{\{1,3\}\{2,4\}}$ is the sum of the six uniform block permutations with domain $\{1,3\}\{2,4\}$. [Diagrams of these six summands omitted.] Let ${\mathcal{A}}$ be a set partition of $[n]$ and
$\sigma$ a permutation of $[n]$. Then $$\sigma \cdot Z_{\mathcal{A}}= Z_{\mathcal{A}}\text{ \ and \ }Z_{\mathcal{A}}\cdot \sigma = Z_{\sigma^{-1}({\mathcal{A}})}\,.$$ In addition, $$Z_{\mathcal{A}}\cdot b_i= \left\{ \begin{array}{ll} Z_{\mathcal{A}}& \mbox{if $i$ and $i+1$ belong to the same block of ${\mathcal{A}}$}\\ \rule{0pt}{20pt}{|A|+|A'|\choose |A|} Z_{\mathcal{B}}& \mbox{if $i$ and $i+1$ belong to different blocks $A$ and $A'$ of ${\mathcal{A}}$}\end{array} \right.$$ where the set partition ${\mathcal{B}}$ is obtained by merging the blocks $A$ and $A'$ of ${\mathcal{A}}$ and keeping the others. Let ${\mathcal{Z}}_n$ denote the subspace of ${\Bbbk}P_n$ linearly spanned by the elements $Z_{\mathcal{A}}$ as ${\mathcal{A}}$ runs over all set partitions of $[n]$. \[C:ideal\] ${\mathcal{Z}}_n$ is a right ideal of the monoid algebra ${\Bbbk}P_n$.

The Hopf algebra of uniform block permutations
==============================================

In this section we define the Hopf algebra of uniform block permutations. It contains the Hopf algebra of permutations of Malvenuto and Reutenauer as a Hopf subalgebra.

Schur-Weyl duality for uniform block permutations
-------------------------------------------------

Let $r$ and $m$ be positive integers. Consider the complex reflection group $$G(r,1,m):={\mathbb{Z}}_r\wr S_m\,.$$ Let $t$ denote the generator of the cyclic group ${\mathbb{Z}}_r$, $t^r=1$. Let $V$ be the [*monomial*]{} representation of $G(r,1,m)$.
Thus, $V$ is an $m$-dimensional vector space with a basis $\{ e_1,e_2,\ldots, e_m\}$ on which $G(r,1,m)$ acts as follows: $$t\cdot e_1 = e^{2\pi i/r} e_1\,,\quad t\cdot e_i =e_i \text{ for $i>1$, and \ } \sigma\cdot e_i = e_{\sigma(i)} \text{ for $\sigma \in S_m$.}$$ Consider now the diagonal action of $G(r,1,m)$ on the tensor powers $V^{\otimes n}$, $$g\cdot (e_{i_1}e_{i_2}\cdots e_{i_n}) =(g\cdot e_{i_1})(g\cdot e_{i_2}) \cdots (g\cdot e_{i_n})\,.$$ The centralizer of this representation has been calculated by Tanabe [@t]. \[P:block-duality\] There is a right action of the monoid $P_n$ on $V^{\otimes n}$ determined by $$(e_{i_1}\cdots e_{i_n})\cdot b_j=\delta(i_j,i_{j+1})e_{i_1}\cdots e_{i_n} \text{ \ and \ } (e_{i_1}\cdots e_{i_n})\cdot\sigma= e_{i_{\sigma(1)}}\cdots e_{i_{\sigma(n)}}$$ for $1\leq j\leq n-1$ and $\sigma\in S_n$. This action commutes with the left action of $G(r,1,m)$ on $V^{\otimes n}$. Moreover, if $m\geq 2n$ and $r>n$ then the resulting map $$\label{E:block-duality} {\mathbb{C}}P_n\to {\mathrm{End}}_{G(r,1,m)}(V^{\otimes n})$$ is an isomorphism of algebras. Classical Schur-Weyl duality states that the symmetric group algebra can be similarly recovered from the diagonal action of $GL(V)$ on $V^{\otimes n}$: if $\dim V\geq n$ then $$\label{E:classical-duality} {\mathbb{C}}S_n\cong {\mathrm{End}}_{GL(V)}(V^{\otimes n})\,.$$ Malvenuto and Reutenauer [@mr] deduce from this the existence of a multiplication among permutations as follows. Given $\sigma\in S_p$ and $\tau\in S_q$, view them as linear endomorphisms of the tensor algebra $$T(V):=\bigoplus_{n\geq 0}V^{\otimes n}$$ by means of  ($\sigma$ acts as $0$ on $V^{\otimes n}$ if $n\neq p$, similarly for $\tau$).
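As a quick numerical sanity check on the action in Proposition \[P:block-duality\], one can realize the generators as matrices on $V^{\otimes n}$: $b_j$ becomes a diagonal projector and each transposition a permutation of tensor positions. This is a sketch of ours (the helper names are not from the paper).

```python
import itertools
import numpy as np

m, n = 2, 3
basis = list(itertools.product(range(m), repeat=n))   # basis e_{i_1}...e_{i_n}
index = {t: k for k, t in enumerate(basis)}
N = len(basis)

def act_s(j):
    """Matrix of the right action of the transposition s_j (1-based)."""
    M = np.zeros((N, N))
    for t in basis:
        u = list(t)
        u[j - 1], u[j] = u[j], u[j - 1]
        M[index[t], index[tuple(u)]] = 1.0
    return M

def act_b(j):
    """Matrix of the right action of b_j: scale by delta(i_j, i_{j+1})."""
    return np.diag([1.0 if t[j - 1] == t[j] else 0.0 for t in basis])

def act_left(sigma):
    """Diagonal left action of a permutation sigma of the basis of V."""
    M = np.zeros((N, N))
    for t in basis:
        M[index[tuple(sigma[i] for i in t)], index[t]] = 1.0
    return M
```

The defining relations, and commutation with the diagonal action, can then be checked numerically for small $m$, $n$.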
The tensor algebra is a Hopf algebra, so we can form the [*convolution*]{} product of any two linear endomorphisms: $$T(V){\xrightarrow{\Delta}}T(V)\otimes T(V){\xrightarrow{\sigma\otimes\tau}}T(V)\otimes T(V){\xrightarrow{m}}T(V)\,,$$ where $\Delta$ and $m$ are the coproduct and product of the tensor algebra. Since these two maps commute with the action of $GL(V)$, the convolution of $\sigma$ and $\tau$ belongs to ${\mathrm{End}}_{GL(V)}(V^{\otimes n})$, where $n=p+q$. Therefore, there exists an element $\sigma\ast\tau\in{\mathbb{C}}S_n$ whose right action equals the convolution of $\sigma$ and $\tau$. This is the product of Malvenuto and Reutenauer. The same argument applies to uniform block permutations, in view of Proposition \[P:block-duality\]. We proceed to describe the resulting operation in explicit terms. As for permutations, this structure can be enlarged to that of a graded Hopf algebra.

Product and coproduct of uniform block permutations
---------------------------------------------------

Consider the graded vector space $${\mathcal{P}}:=\bigoplus_{n\geq 0} {\Bbbk}P_n\,.$$ $P_0$ consists of the unique uniform block permutation of $[0]=\emptyset$, represented by the empty diagram, which we denote by $\emptyset$. Let $f$ and $g$ be uniform block permutations of $[n]$ and $[m]$ respectively. Adding $n$ to every entry in the diagram of $g$ and placing it to the right of the diagram of $f$, we obtain the diagram of a uniform block permutation of $[n+m]$, called the concatenation of $f$ and $g$ and denoted $f\times g$. Figure \[F:concatenation\] shows an example.
[Picture for Figure \[F:concatenation\] omitted.]

Let ${\mathrm{Sh}}(n,m)$ denote the set of $(n,m)$-shuffles, that is, those permutations $\xi \in S_{n+m}$ such that $$\xi(1)<\xi(2)< \cdots < \xi(n) \mbox{ \ and \ } \xi(n+1)<\xi(n+2)<\cdots < \xi(n+m)\,.$$ Let $sh_{n,m}\in {\Bbbk}S_{n+m}$ denote the sum of all $(n,m)$-shuffles. The product $\ast$ on ${\mathcal{P}}$ is defined by $$f\ast g := sh_{n,m}\cdot (f\times g)\in {\Bbbk}P_{n+m}$$ for all $f \in P_n$ and $g \in P_m$, and extended by linearity. It is easy to see that this product corresponds to convolution of endomorphisms of the tensor algebra via the map , when ${\Bbbk}={\mathbb{C}}$.
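An $(n,m)$-shuffle is determined by the image of $\{1,\ldots,n\}$, so there are ${n+m\choose n}$ of them, and hence $f\ast g$ has ${n+m\choose n}$ summands. A short enumeration sketch (ours):

```python
from itertools import combinations
from math import comb

def shuffles(n, m):
    """All (n,m)-shuffles, each returned as the tuple (xi(1), ..., xi(n+m))."""
    result = []
    for image in combinations(range(1, n + m + 1), n):
        # xi sends 1..n to the chosen values and n+1..n+m to the rest,
        # both in increasing order
        rest = [v for v in range(1, n + m + 1) if v not in image]
        result.append(image + tuple(rest))
    return result
```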
For example: [diagrams of a product of two uniform block permutations of $[2]$, expanded into its six summands, omitted.]

A *breaking point* of a set partition ${\mathcal{B}}$ is an integer $i\in \{0,1,\ldots, n\}$ for which there exists a subset $S\subseteq{\mathcal{B}}$ such that $$\bigcup_{B\in S} B =\{1,\ldots,i\} \text{ \ (and hence) \ } \bigcup_{B\in {\mathcal{B}}\setminus S} B =\{i+1,\ldots,n\}\,.$$ Given a uniform block permutation $f:{\mathcal{A}}\to{\mathcal{B}}$, let $B(f)$ denote the set of breaking points of ${\mathcal{B}}$.
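Breaking points can be computed directly from the definition: $i$ is a breaking point exactly when every block lies inside $\{1,\ldots,i\}$ or is disjoint from it. A sketch (ours; the sample partition is our own illustration):

```python
def breaking_points(partition, n):
    """Breaking points of a set partition of [n]: those i for which
    {1, ..., i} is a union of blocks (equivalently, every block is
    contained in or disjoint from {1, ..., i})."""
    points = []
    for i in range(n + 1):
        prefix = set(range(1, i + 1))
        if all(B <= prefix or B.isdisjoint(prefix) for B in partition):
            points.append(i)
    return points
```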
Note that $i=0$ and $i=n$ are breaking points of any $f$. If $f$ is a permutation, that is, if all blocks of $f$ are of size $1$, then $B(f)=\{0,1,\ldots, n\}$. In terms of the diagram of a uniform block permutation, if it is possible to put a vertical line between the first $i$ and the last $n-i$ vertices in the bottom row without intersecting an edge between the two sets of vertices, then $i$ is a breaking point.

[Diagram of a uniform block permutation $f$ of $[8]$ omitted;] for it, $B(f)=\{0,1,2,6,8\}$.

\[L:break\] If $i$ is a breaking point of $f$, then there exists a unique $(i,n-i)$-shuffle $\xi\in S_n$ and unique uniform block permutations $f_{(i)} \in P_i$ and $f_{(n-i)}'\in P_{n-i}$ such that $$f=(f_{(i)}\times f'_{(n-i)})\cdot \xi^{-1}\,.$$ Conversely, if such a decomposition exists, $i$ is a breaking point of $f$. We illustrate this statement with an example where $i=4$ and $\xi={\begin{pmatrix}1 & 2 & 3 & 4 & 5 & 6\\2 & 3 & 5 & 6 & 1 & 4\end{pmatrix}}$: [diagram omitted]. We are now ready to define the coproduct on ${\mathcal{P}}$.
Given $f\in P_n$ set $$\Delta(f):=\sum_{i\in B(f)}f_{(i)}\otimes f'_{(n-i)},$$ where $f_{(i)}$ and $f'_{(n-i)}$ are as in Lemma \[L:break\]. [Diagrams of an example omitted.] Recall that an element $x\in{\mathcal{P}}$ is called primitive if $\Delta(x)=x\otimes \emptyset +\emptyset\otimes x$. Every uniform block permutation with breaking set $\{ 0, n\}$ is primitive, but there are other primitive elements in ${\mathcal{P}}$.
For example, the following element of ${\Bbbk}P_3$ is primitive: [diagram of a difference of two uniform block permutations of $[3]$ omitted]. Recall that $\emptyset$ denotes the empty uniform block permutation. Let $\varepsilon:{\mathcal{P}}\rightarrow {\Bbbk}$ be $$\varepsilon(f)=\begin{cases} 1 &\text{ if $f=\emptyset\in P_0$,}\\ 0 & \text{ if $f\in P_n$, $n\geq 1$. } \end{cases}$$ The graded vector space ${\mathcal{P}}$, equipped with the product $\ast$, coproduct $\Delta$, unit $\emptyset$ and counit $\varepsilon$, is a graded connected Hopf algebra. Associativity and coassociativity follow from basic properties of shuffles (for the product one may also appeal to  and associativity of the convolution product). The existence of the antipode is guaranteed in any graded connected bialgebra. Compatibility between $\Delta$ and $\ast$ requires a special argument. We sketch part of it. Let $\beta_{n,m}$ be the $(n,m)$-shuffle such that $$\beta_{n,m}(i)=\begin{cases} m+i & \text{ if $1\leq i\leq n$,}\\ i-n & \text{ if $n+1\leq i\leq n+m$.} \end{cases}$$ [Diagram of $\beta_{3,4}$ omitted.] The inverse of $\beta_{n,m}$ is $\beta_{m,n}$. Let $f\in P_n$, $g\in P_m$.
A summand in $\Delta(f)\ast \Delta(g)$ is of the form $$\xi_1\cdot (f'\times g')\otimes \xi_2\cdot(f''\times g'')$$ where $p\in B(f)$, $f'\in P_p$, $f''\in P_{n-p}$, $q\in B(g)$, $g'\in P_q$, $g''\in P_{m-q}$, $\xi_1\in {\mathrm{Sh}}(p,q)$, $\xi_2\in {\mathrm{Sh}}(n-p,m-q)$, and there exist unique $\eta_1\in {\mathrm{Sh}}(p,n-p)$ and $\eta_2\in {\mathrm{Sh}}(q,m-q)$ such that $f\cdot \eta_1=f'\times f''$ and $g\cdot \eta_2=g'\times g''$. Let $\beta:=1_p\times \beta_{n-p,q}\times 1_{m-q}$. Then $$\begin{aligned} \xi_1\cdot (f'\times g')\times \xi_2\cdot (f''\times g'') &=& (\xi_1\times \xi_2)\cdot((f'\times g')\times (f''\times g''))\\ & =& (\xi_1\times \xi_2)\cdot \beta \cdot((f'\times f'')\times(g'\times g''))\cdot \beta^{-1}\\ & =& (\xi_1\times \xi_2)\cdot\beta\cdot (f\cdot \eta_1\times g\cdot\eta_2)\cdot \beta^{-1}\\ & =& (\xi_1\times \xi_2)\cdot\beta\cdot(f\times g)\cdot (\eta_1\times \eta_2)\cdot \beta^{-1}\,.\end{aligned}$$ Let $\xi:=(\xi_1\times \xi_2)\cdot \beta$ and $\eta:=(\eta_1\times \eta_2)\cdot \beta^{-1}$. One verifies that $\xi\in {\mathrm{Sh}}(n,m)$ and $\eta\in {\mathrm{Sh}}(p+q,n+m-p-q)$. Therefore, $$\xi_1\cdot (f'\times g')\times \xi_2\cdot (f''\times g'')=\xi\cdot (f\times g)\cdot \eta$$ is a summand in $\Delta(f\ast g)$.$\quad\Box$ Consider the following graded subspace of ${\mathcal{P}}$: $${\mathcal{S}}:=\bigoplus_{n\geq 0}{\Bbbk}S_n\,.$$ ${\mathcal{S}}$ is a Hopf subalgebra of ${\mathcal{P}}$. ${\mathcal{S}}$ is the Hopf algebra of permutations of Malvenuto and Reutenauer [@mr]. Let $\sigma$ be a permutation. In the notation of [@as], the element $\sigma\in {\mathcal{S}}$ corresponds to the basis element $F_{\sigma}^*$ of ${{{\mathcal{S}}}{{\mathit{Sym}}}}^*$, or equivalently the element $F_{\sigma^{-1}}$ of ${{{\mathcal{S}}}{{\mathit{Sym}}}}$.

Inverse monoid structure and self-duality {#S:selfdual}
-----------------------------------------

Like ${\mathcal{S}}$, the Hopf algebra ${\mathcal{P}}$ is self-dual.
To see this, recall that a block permutation is a bijection $f:{\mathcal{A}}\to{\mathcal{B}}$ between two set partitions of $[n]$. Let $\Tilde{f}:{\mathcal{B}}\to{\mathcal{A}}$ denote the inverse bijection. If $f$ is uniform then so is $\Tilde{f}$. The diagram of $\Tilde{f}\in P_n$ is obtained by reflecting the diagram of $f$ across a horizontal line. Note that for $\sigma\in S_n\subseteq P_n$ we have $\Tilde{\sigma}=\sigma^{-1}$. Let ${\mathcal{P}}^*$ be the graded dual space of ${\mathcal{P}}$: $${\mathcal{P}}^*=\bigoplus_{n\geq 0}({\Bbbk}P_n)^*\,.$$ Let $\{f^*\mid f\in P_n\}$ be the basis of $({\Bbbk}P_n)^*$ dual to the basis $P_n$ of ${\Bbbk}P_n$. The map ${\mathcal{P}}^*\to{\mathcal{P}}$, $f^*\mapsto \Tilde{f}$, is an isomorphism of graded Hopf algebras. The operation $f\mapsto\Tilde{f}$ is also relevant to the monoid structure of $P_n$. Indeed, the following properties are satisfied: $$f=f\Tilde{f}f \text{ \ and \ }\Tilde{f}=\Tilde{f}f\Tilde{f}\,.$$ Together with  below, these properties imply that $P_n$ is an [*inverse monoid*]{} [@cp Theorem 1.17]. The following properties are consequences of this fact [@cp Lemma 1.18]: $$\widetilde{fg}=\Tilde{g}\Tilde{f},\quad \Tilde{\Tilde{f}}=f$$ (they can also be verified directly).

Factorizable monoid structure and the weak order {#S:weak}
------------------------------------------------

Let $E_n$ denote the poset of set partitions of $[n]$: we say that ${\mathcal{A}}\leq{\mathcal{B}}$ if every block of ${\mathcal{B}}$ is contained in a block of ${\mathcal{A}}$. This poset is a lattice, and this structure is related to the monoid structure of uniform block permutations as follows.
If ${{\mathit{id}}}_{{\mathcal{A}}}:{\mathcal{A}}\to{\mathcal{A}}$ denotes the uniform block permutation which is the identity map on the set of blocks of ${\mathcal{A}}$, then $$\label{E:meet} {{\mathit{id}}}_{{\mathcal{A}}}\cdot{{\mathit{id}}}_{{\mathcal{B}}}={{\mathit{id}}}_{{\mathcal{A}}{\wedge}{\mathcal{B}}}\,.$$ In other words, viewing $E_n$ as a monoid under the meet operation ${\wedge}$, the map $$E_n\to P_n\,,\quad {\mathcal{A}}\mapsto{{\mathit{id}}}_{{\mathcal{A}}}\,,$$ is a morphism of monoids. Any uniform block permutation $f\in P_n$ decomposes (non-uniquely) as $$\label{E:factorization} f=\sigma\cdot{{\mathit{id}}}_{{\mathcal{A}}}$$ for some $\sigma\in S_n$ and ${\mathcal{A}}\in E_n$. Note that $\sigma$ is invertible and ${{\mathit{id}}}_{{\mathcal{A}}}$ is idempotent, by . It follows that $P_n$ is a [*factorizable inverse monoid*]{} [@ch Section 2],  [@l Chapter 2.2]. Moreover, by Lemma 2.1 in [@ch], any invertible element in $P_n$ belongs to $S_n$ and any idempotent element in $P_n$ belongs to (the image of) $E_n$. This lemma also guarantees that in , the idempotent ${{\mathit{id}}}_{{\mathcal{A}}}$ is uniquely determined by $f$ (which is clear since ${\mathcal{A}}$ is the domain of $f$). On the other hand, $\sigma$ is not unique, and we will make a suitable choice of this factor to define a partial order on $P_n$. Consider the action of $S_n$ on $P_n$ by left multiplication. 
Given ${\mathcal{A}}\in E_n$, the orbit of ${{\mathit{id}}}_{{\mathcal{A}}}$ consists of all uniform block permutations $f:{\mathcal{A}}\to{\mathcal{B}}$ with domain ${\mathcal{A}}$, and the stabilizer is the [*parabolic*]{} subgroup $$S_{{\mathcal{A}}}:=\{\sigma\in S_n \mid \sigma(A)=A\ \forall A\in{\mathcal{A}}\}\,.$$ Consider the set of ${\mathcal{A}}$-*shuffles*: $${\mathrm{Sh}}({\mathcal{A}}):=\{ \xi\in S_n \mid \text{ if $i<j$ are in the same block of ${\mathcal{A}}$ then $\xi(i)<\xi(j)$} \}\,.$$ It is well known that these permutations form a set of representatives for the left cosets of the subgroup $S_{{\mathcal{A}}}$. Therefore, given a uniform block permutation $f:{\mathcal{A}}\to{\mathcal{B}}$ there is a unique ${\mathcal{A}}$-shuffle $\xi_f$ such that $$f=\xi_f\cdot{{\mathit{id}}}_{{\mathcal{A}}}\,.$$ We use this decomposition to define a partial order on $P_n$ as follows: $$f\leq g \iff \xi_f\leq \xi_g\,,$$ where the partial order on the right-hand side is the left weak order on $S_n$ (see for instance [@as]). We refer to this partial order as the [*weak order*]{} on $P_n$. Thus, $P_n$ is the disjoint union of certain subposets of the weak order on $S_n$: $$P_n\cong \bigsqcup_{{\mathcal{A}}\vdash[n]}{\mathrm{Sh}}({\mathcal{A}})$$ (in fact, each ${\mathrm{Sh}}({\mathcal{A}})$ is a lower order ideal of $S_n$). Figures \[F:12-3-4\]-\[F:14-23\] show 5 of the 15 components of $P_4$. Note that even when ${\mathcal{A}}$ and ${\mathcal{B}}$ are set partitions of the same type, the posets ${\mathrm{Sh}}({\mathcal{A}})$ and ${\mathrm{Sh}}({\mathcal{B}})$ need not be isomorphic. The partial order we have defined on $P_n$ should not be confused with the [*natural partial order*]{} which is defined on any inverse semigroup [@cp2 Chapter 7.1],  [@l Chapter 1.4].
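Since ${\mathrm{Sh}}({\mathcal{A}})$ is a transversal of the left cosets of $S_{\mathcal{A}}$, it has $n!/\prod_{A\in{\mathcal{A}}}|A|!$ elements, and summing over all set partitions of $[n]$ counts $P_n$. A sketch (ours):

```python
from math import factorial, prod

def set_partitions(n):
    """All set partitions of [n] = {1,...,n}, as lists of blocks."""
    if n == 0:
        yield []
        return
    for p in set_partitions(n - 1):
        for k in range(len(p)):          # put n into an existing block
            yield p[:k] + [p[k] + [n]] + p[k + 1:]
        yield p + [[n]]                  # or into a new block

def card_Pn(n):
    """|P_n| = sum over set partitions A of [n] of n!/prod |A_i|!,
    the number of uniform block permutations with domain A."""
    return sum(factorial(n) // prod(factorial(len(B)) for B in p)
               for p in set_partitions(n))
```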
![The component of $P_4$ corresponding to ${\mathcal{A}}=\{1,2\}\{3\}\{4\}$[]{data-label="F:12-3-4"}](12-3-4.eps){width="10cm"}

![The component of $P_4$ corresponding to ${\mathcal{A}}=\{1\}\{2,3\}\{4\}$[]{data-label="F:1-23-4"}](1-23-4.eps){width="10cm"}

![The component of $P_4$ corresponding to ${\mathcal{A}}=\{1,4\}\{2\}\{3\}$[]{data-label="F:14-2-3"}](14-2-3.eps){width="10cm"}

![The component of $P_4$ corresponding to ${\mathcal{A}}=\{1,3\}\{2\}\{4\}$[]{data-label="F:13-2-4"}](13-2-4.eps){width="7cm"}

![The component of $P_4$ corresponding to ${\mathcal{A}}=\{1,4\}\{2,3\}$[]{data-label="F:14-23"}](14-23.eps){width="6cm"}

As observed by Sloane [@sl], there is a connection between uniform block permutations and the [*patience games*]{} of Aldous and Diaconis [@ad]. Starting from a deck of cards, a patience game produces a number of card piles according to certain simple rules (the output is not unique). If the cards are numbered $1,\ldots,n$, the initial deck is a permutation of $[n]$ and the resulting piles form a set partition of $[n]$. Suppose $\sigma\in S_n$. The set partitions ${\mathcal{A}}$ such that $\sigma\in{\mathrm{Sh}}({\mathcal{A}})$ are precisely the possible outputs of patience games played from a deck of cards with $\sigma^{-1}(1)$ on the bottom, followed by $\sigma^{-1}(2)$, up to $\sigma^{-1}(n)$ on the top. Thus, uniform block permutations are in bijection with the pairs consisting of the input and the output of a patience game via $(\sigma,{\mathcal{A}})\leftrightarrow \sigma\cdot{{\mathit{id}}}_{{\mathcal{A}}}$.

The second basis and the Hopf algebra structure {#S:free}
-----------------------------------------------

Following the ideas of [@as], we use the weak order on $P_n$ to define a new linear basis of the spaces ${\Bbbk}P_n$, on which the algebra structure of ${\mathcal{P}}$ is simple. For each element $g\in P_n$ let $$X_g:= \sum_{f\leq g} f\,.$$ By Möbius inversion, the set $\{X_g \mid g\in P_n\}$ is a linear basis of ${\Bbbk}P_n$.
Given $p,q\geq 0$, let $\xi_{p,q}\in S_{p+q}$ be the permutation $$\xi_{p,q}:=\begin{pmatrix} 1 & 2 & \ldots & p & p+1 & p+2& \ldots &p+q\\ q+1 & q+2 & \ldots & q+p& 1 & 2 & \ldots & q \end{pmatrix}\,.$$ This is the maximum element of ${\mathrm{Sh}}(p,q)$ under the weak order. The product of ${\mathcal{P}}$ takes the following simple form on the $X$-basis. \[P:X-product\] Let $g_1\in P_p$ and $g_2\in P_q$ be uniform block permutations. Then $$X_{g_1}\ast X_{g_2} = X_{\xi_{p,q}\cdot(g_1\times g_2)}\,.$$ \[C:free\] The Hopf algebra ${\mathcal{P}}$ is free as an algebra and cofree as a graded coalgebra. Let $V$ denote the space of primitive elements of ${\mathcal{P}}$. It follows that the generating series of ${\mathcal{P}}$ and $V$ are related by $${\mathcal{P}}(x)=\frac{1}{1-V(x)}\,.$$ Since $${\mathcal{P}}(x)=1+x+3x^2+16x^3+131x^4+1496x^5+22482x^6+\cdots$$ we deduce that $$V(x)=x+2x^2+11x^3+98x^4+1202x^5+19052x^6+\cdots\,.$$ The same conclusion may be derived by introducing another basis $$Z_g:= \sum_{f\geq g} f\,.$$ This has the property that $$Z_{g_1}\ast Z_{g_2} = Z_{g_1\times g_2}\,.$$ Note that $Z_{{{\mathit{id}}}_{{\mathcal{A}}}}$ is the element denoted $Z_{{\mathcal{A}}}$ in Section \[S:ideal\].

The Hopf algebra of symmetric functions in non-commuting variables
==================================================================

Let $X$ be a countable set, the [*alphabet*]{}. A [*word of length $n$*]{} is a function $w:[n]\to X$. Let ${{\Bbbk}\langle\!\langle X\rangle\!\rangle}$ be the algebra of non-commutative power series on the set of variables $X$. Its elements are infinite linear combinations of words, finitely many of each length, and the product is concatenation of words. The [*kernel*]{} of a word $w$ of length $n$ is the set partition ${\mathcal{K}}(w)$ of $[n]$ whose blocks are the non-empty fibers of $w$. Order the set of set partitions of $[n]$ by refinement, as in Section \[S:weak\].
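Returning to the generating series of ${\mathcal{P}}$ and its primitives: the relation ${\mathcal{P}}(x)=1/(1-V(x))$ is equivalent to ${\mathcal{P}}=1+V\,{\mathcal{P}}$, which determines the coefficients of $V(x)$ recursively from those of ${\mathcal{P}}(x)$. A sketch (ours):

```python
def primitive_series(p):
    """Solve P(x) = 1/(1 - V(x)) for V, given coefficients p[0..k] of P
    with p[0] = 1.  Equivalently P = 1 + V*P, so for k >= 1:
    p[k] = sum_{j=1}^{k} v[j] * p[k-j]."""
    v = [0] * len(p)
    for k in range(1, len(p)):
        v[k] = p[k] - sum(v[j] * p[k - j] for j in range(1, k))
    return v
```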
For each set partition ${\mathcal{A}}$ of $[n]$, let $$p_{\mathcal{A}}:=\sum_{{\mathcal{K}}(w)\leq {\mathcal{A}}} w \in {{\Bbbk}\langle\!\langle X\rangle\!\rangle}\,.$$ This is the sum of all words $w$ such that if $i$ and $j$ are in the same block of ${\mathcal{A}}$ then $w(i)=w(j)$. For instance $$p_{\{1,3\}\{2,4\}}= xyxy+xzxz+ yxyx+ \cdots+ x^4+y^4+z^4+\cdots\,.$$ The subspace of ${{\Bbbk}\langle\!\langle X\rangle\!\rangle}$ linearly spanned by the elements $p_{\mathcal{A}}$, as ${\mathcal{A}}$ runs over all set partitions of $[n]$, $n\geq 0$, is a subalgebra $\Pi$ of ${{\Bbbk}\langle\!\langle X\rangle\!\rangle}$, graded by length. The elements of $\Pi$ can be characterized as those power series of finite degree that are invariant under any permutation of the variables. $\Pi$ is the algebra of symmetric functions in non-commuting variables introduced by Wolf [@w] and recently studied by Gebhard, Rosas, and Sagan [@gs00; @gs01; @rs] in connection with Stanley’s chromatic symmetric function. $\Pi$ is in fact a graded Hopf algebra [@brrz; @am]. The coproduct is defined via evaluation of symmetric functions on two copies of the alphabet $X$. In order to describe the product and coproduct of $\Pi$ on the basis elements $p_{\mathcal{A}}$ we introduce some notation. Given set partitions ${\mathcal{A}}\vdash [n]$ and ${\mathcal{B}}\vdash [m]$, let ${\mathcal{A}}\times{\mathcal{B}}$ be the set partition of $[n+m]$ whose blocks are the blocks of ${\mathcal{A}}$ and the sets $\{b+n\, |\, b\in B\}$ where $B$ is a block of ${\mathcal{B}}$. This corresponds to the operation $\times$ on uniform block permutations in the sense that ${{\mathit{id}}}_{{\mathcal{A}}}\times{{\mathit{id}}}_{{\mathcal{B}}}={{\mathit{id}}}_{{\mathcal{A}}\times{\mathcal{B}}}$. For example, if ${\mathcal{A}}=\{1,3,4\}\{2,5\}\{6\}\vdash [6]$ and ${\mathcal{B}}=\{1,4\}\{2\}\{3,5\}\vdash [5]$, then ${\mathcal{A}}\times {\mathcal{B}}=\{1,3,4\}\{2,5\}\{6\}\{7,10\}\{8\}\{9,11\}\vdash [11]$.
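The kernel of a word is straightforward to compute, and a word $w$ appears in $p_{\mathcal{A}}$ exactly when $w$ is constant on the blocks of ${\mathcal{A}}$. A sketch (ours):

```python
def kernel(word):
    """Kernel of a word: the set partition of positions 1..len(word)
    whose blocks are the non-empty fibers of the word."""
    fibers = {}
    for position, letter in enumerate(word, start=1):
        fibers.setdefault(letter, []).append(position)
    return sorted(fibers.values())
```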
To a set partition ${\mathcal{A}}\vdash[n]$ and a subset $S\subseteq{\mathcal{A}}$ we associate a new set partition ${\mathcal{A}}_S$ as follows. Write $$\bigcup_{A\in S} A=\{j_1,\cdots, j_m\} \subseteq [n]$$ with $j_1<j_2<\cdots <j_m$. ${\mathcal{A}}_S$ is the set partition of $[m]$ whose blocks are obtained from the blocks $A\in S$ by replacing each $j_t$ by $t$, for $1\leq t\leq m$. For instance, if $S=\{1,5\}\{2,7\}$ then ${\mathcal{A}}_S=\{1,3\}\{2,4\}\vdash [4]$. The product and coproduct of $\Pi$ are given by $$\begin{aligned} p_{\mathcal{A}}p_{\mathcal{B}}& = p_{{\mathcal{A}}\times{\mathcal{B}}}\,, \label{E:prodPi}\\ \Delta(p_{\mathcal{A}}) & =\sum_{S\sqcup T={\mathcal{A}}}p_{{\mathcal{A}}_S}\otimes p_{{\mathcal{A}}_T}\,, \label{E:coprodPi}\end{aligned}$$ the sum over all decompositions of ${\mathcal{A}}$ into disjoint sets of blocks $S$ and $T$. For example, if ${\mathcal{A}}=\{1,2,6\}\{3,5\}\{4\}$, then $$\begin{aligned} \Delta(p_{\mathcal{A}}) &=& p_{\mathcal{A}}\otimes 1 + p_{\{1,2,5\}\{3,4\}}\otimes p_{\{1\}} + p_{\{1,2,4\}\{3\}}\otimes p_{\{1,2\}} +p_{\{1,3\}\{2\}}\otimes p_{\{1,2,3\}} + \\ && p_{\{1,2,3\}}\otimes p_{\{1,3\}\{2\}} + p_{\{1,2\}} \otimes p_{\{1,2,4\}\{3\}} + p_{\{1\}}\otimes p_{\{1,2,5\}\{3,4\}} + 1 \otimes p_{{\mathcal{A}}}\,.\end{aligned}$$ Consider now the direct sum of the subspaces ${\mathcal{Z}}_n$ of ${\Bbbk}P_n$ introduced in Section \[S:ideal\]: $${\mathcal{Z}}:=\bigoplus_{n\geq 0}{\mathcal{Z}}_n \subset {\mathcal{P}}\,.$$ ${\mathcal{Z}}$ is a Hopf subalgebra of ${\mathcal{P}}$. Moreover, the map $${\mathcal{Z}}\to\Pi, \quad Z_{{\mathcal{A}}}\mapsto p_{{\mathcal{A}}}$$ is an isomorphism of graded Hopf algebras. Thus the Hopf algebra of uniform block permutations ${\mathcal{P}}$ contains the Hopf algebra $\Pi$ of symmetric functions in non-commuting variables. 
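The coproduct formula for $p_{\mathcal{A}}$ can be enumerated mechanically: each of the $2^{\#\text{blocks}}$ decompositions ${\mathcal{A}}=S\sqcup T$ into disjoint sets of blocks contributes one term $p_{{\mathcal{A}}_S}\otimes p_{{\mathcal{A}}_T}$. A sketch (ours):

```python
from itertools import product as cartesian

def standardize(blocks):
    """Relabel the entries of a collection of blocks by their rank,
    producing the set partition A_S of the text."""
    support = sorted(x for B in blocks for x in B)
    rank = {v: t for t, v in enumerate(support, start=1)}
    return sorted(sorted(rank[x] for x in B) for B in blocks)

def coproduct_terms(A):
    """Terms of Delta(p_A): one (A_S, A_T) pair per decomposition of the
    blocks of A into disjoint sets S and T."""
    terms = []
    for choice in cartesian([0, 1], repeat=len(A)):
        S = [B for B, c in zip(A, choice) if c == 0]
        T = [B for B, c in zip(A, choice) if c == 1]
        terms.append((standardize(S), standardize(T)))
    return terms
```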
Note also that this reveals the existence of a second operation on $\Pi$: according to Corollary \[C:ideal\], each homogeneous component $\Pi_n$ carries an associative non-unital product that turns it into a right ideal of the monoid algebra ${\Bbbk}P_n$. Connections between $\Pi$ and other combinatorial Hopf algebras are studied in [@am]. [999]{} Marcelo Aguiar and Swapneel Mahajan, *Species and equivariant Hopf algebras*, in preparation (2004). Marcelo Aguiar and Frank Sottile, *Structure of the Malvenuto-Reutenauer Hopf algebra of Permutations*, Advances in Mathematics [**191**]{} n2 (2005) 225–275. David Aldous and Persi Diaconis, *Longest increasing subsequences: from patience sorting to the Baik-Deift-Johansson theorem*, Bull. Amer. Math. Soc. [**36**]{} (1999) 413–432. Nantel Bergeron, Christophe Reutenauer, Mercedes Rosas, and Mike Zabrocki, *The Hopf algebra of symmetric functions in noncommutative variables*, in preparation (2004). S. Y. Chen and S. C. Hsieh, *Factorizable Inverse Semigroups*, Semigroup Forum [**8**]{} (1974) n4 283–297. A. H. Clifford and G. B. Preston, *The algebraic theory of semigroups. Vol. I*, Mathematical Surveys, No. 7 American Mathematical Society, Providence, R.I., 1961 xv+224 pp. A. H. Clifford and G. B. Preston, *The algebraic theory of semigroups. Vol. II*, Mathematical Surveys, No. 7 American Mathematical Society, Providence, R.I., 1967 xv+350 pp. David Easdown, James East, and D. G. FitzGerald, *Presentations of factorizable inverse monoids*, 2004. Desmond G. FitzGerald, *A presentation for the monoid of uniform block permutations*, Bull. Austral. Math. Soc., [**68**]{} (2003) 317–324. David D. Gebhard, Bruce E. Sagan, *Sinks in Acyclic Orientations of Graphs*, J. Combin. Theory (B) [**80**]{} (2000) 130–146. David D. Gebhard, Bruce E. Sagan, *A chromatic symmetric function in noncommuting variables*, J. Alg. Combin. [**13**]{} (2001) 227–255. Vaughan F. R. 
Jones, *The Potts Model and the symmetric group*, in Subfactors: Proceedings of the Taniguchi Symposium on Operator Algebras, Kyuzeso, 1993, pp. 259-267, World Scientific, River Edge, NJ 1994. Masashi Kosuda, *Characterization for the party algebra*, Ryukyu Math. J. [**13**]{} (2000) 7–22. Masashi Kosuda, *Party algebra and construction of its irreducible representations*, paper presented at Formal Power Series and Algebraic Combinatorics (FPSAC01), Tempe, Arizona (USA), May 20-26, 2001. Mark V. Lawson, *Inverse semigroups. The theory of partial symmetries*, World Scientific, River Edge, NJ, 1998. xiv+411 pp. Claudia Malvenuto and Christophe Reutenauer, *Duality between quasi-symmetric functions and the Solomon descent algebra*, J. Algebra [**177**]{} n3 (1995), 967–982. Paul P. Martin, *Temperley-Lieb algebras for non-planar statistical mechanics - The partition algebra construction*, J. Knot Theory and its Applications, [**3**]{} n1 (1994) 51–82. Mercedes Rosas and Bruce Sagan, *Symmetric functions in non-commuting variables*, to appear in Trans. Amer. Math. Soc. J.-M. Sixdeniers, K. A. Penson, and A. I. Solomon, *Extended [B]{}ell and [S]{}tirling numbers from hypergeometric exponentiation*, J. Integer Seq., [**4**]{} (2001), Article 01.1.4. Neil J. A. Sloane, *An on-line version of the encyclopedia of integer sequences*, Electron. J. Combin. **1** (1994), Feature 1, approx. 5 pp. (electronic), [ http://akpublic.research.att.com/\~njas/sequences/ol.html]{}. Kenichiro Tanabe, *On the centralizer algebra of the unitary reflection group $G(m,p,n)$*, Nagoya Math. J. [**148**]{} (1997) 113–126. M. C. Wolf, *Symmetric functions of non-commuting elements*, Duke Math. J. [**2**]{} (1936) 626–637. [^1]: Aguiar supported in part by NSF grant DMS-0302423 [^2]: Orellana supported in part by the Wilson Foundation
--- author: - | J. D. Díaz-Ramírez [^1]\ J. I. García-García [^2]\ D. Marín-Aragón [^3]\ A. Vigneron-Tenorio [^4] title: 'Characterizing affine $\CaC$-semigroups' --- Introduction {#introduction .unnumbered} ============ An affine semigroup $S\subset \N^p$ is called a $\CaC_S$-semigroup if $\CaC_S\setminus S$ is a finite set, where $\CaC_S\subset \N^p$ is the minimal integer cone containing $S$. These semigroups are a natural generalization of numerical semigroups, and several of their invariants can be generalized. For a given numerical semigroup $G$, it is well known that $\N\setminus G$ is finite; in fact, a submonoid $G\subset \N$ is a numerical semigroup precisely when $\N\setminus G$ is finite (for topics related to numerical semigroups see [@libro_rosales] and the references therein). In general, this does not happen for affine semigroups. $\CaC$-semigroups were introduced in [@Csemigroup], where the authors study several of their properties (for example, an extended Wilf’s conjecture for $\CaC$-semigroups is given). These semigroups appear in different contexts: when the integer points in an infinite family of homothetic convex bodies in $\R^p_{\ge}$ are considered (see, for instance, [@politope_semig], [@convex_body] and the references therein), or when the nonnegative integer solutions of some modular Diophantine inequality are studied (see [@prop_mod]), et cetera. When the cone $\CaC$ is $\N^p$, $\CaC$-semigroups are called generalized numerical semigroups; they were introduced in [@GenSemNp]. Recently, it was proved in [@resolucion_maxima] that the minimal free resolution of the algebra associated to any $\CaC$-semigroup has the maximal projective dimension possible. 
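As a one-dimensional warm-up, the finite-complement condition is easy to examine computationally: the gaps of a numerical semigroup can be listed with a short dynamic program. A toy sketch in Python (the function name and the explicit search bound are ours, not part of the cited software):

```python
def numerical_gaps(gens, bound):
    """Gaps of the numerical semigroup generated by `gens` below `bound`.
    For two coprime generators a, b the largest gap is a*b - a - b, so a
    crude bound suffices for small examples like this one."""
    reachable = [False] * bound
    reachable[0] = True  # 0 is always in the semigroup
    for n in range(1, bound):
        reachable[n] = any(n >= g and reachable[n - g] for g in gens)
    return [n for n in range(bound) if not reachable[n]]
```

For $G=\langle 3,5\rangle$ this returns the four gaps $\{1,2,4,7\}$, confirming that $\N\setminus G$ is finite.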
In this context, $\N^p$-semigroups are characterized in [@CFU], but the general problem remained open: [*given any affine semigroup $S$, how can one detect whether $S$ is a $\CaC_S$-semigroup?*]{} The main goal of this work is to determine the conditions that any affine semigroup, given by its minimal set of generators, has to satisfy to be a $\CaC_S$-semigroup. We solve this problem in Theorem \[main\_theorem\], and in Algorithm \[algoritmo\_check\_Csemig\] we give a computational way to check it. Another open problem is to compute the set of gaps of any $\CaC$-semigroup defined by its minimal generating set. We solve this problem by determining a finite subset of $\CaC$ containing all the gaps of a given $\CaC$-semigroup. Algorithm \[computing\_gaps\_Csemig\] computes the set of gaps of $\CaC$-semigroups. In this paper, we also study in depth the embedding dimension of $\CaC$-semigroups. In [@Csemigroup Theorem 11], a lower bound on the embedding dimension of $\N^p$-semigroups is provided and some families of $\N^p$-semigroups reaching this bound are given. Besides, in [@Csemigroup Conjecture 12], a conjecture about a lower bound for the embedding dimension of any $\CaC$-semigroup is proposed. In Section \[sec\_embedding\_dimension\], we introduce a lower bound on the embedding dimension of any $\CaC$-semigroup and some families of $\CaC$-semigroups whose embedding dimension is equal to this new bound. The results of this work are illustrated with several examples. To this aim, we have used third-party software, such as Normaliz ([@normaliz]), and the library [CharacterizingAffineCSemigroup]{} ([@PROGRAMA]) developed by the authors in Python ([@python]). The content of this work is organized as follows. Section \[sec\_prelimiraries\] introduces the initial definitions and notation used throughout the paper, mainly related to finitely generated cones. 
In Section \[sec\_main\], a characterization of $\CaC$-semigroups is provided, as well as an algorithm to check whether an affine semigroup is a $\CaC$-semigroup. Section \[sec\_gaps\] is dedicated to an algorithm to compute the set of gaps of a $\CaC$-semigroup. Finally, Section \[sec\_embedding\_dimension\] studies the minimal generating sets of $\CaC$-semigroups, formulating explicitly a lower bound for their embedding dimensions. Preliminaries {#sec_prelimiraries} ============= The sets of real numbers, rational numbers, integer numbers and nonnegative integer numbers are denoted by $\R$, $\Q$, $\Z$ and $\N$, respectively. Given a subset $A$ of $\R$, $A_\ge$ is the set of elements in $A$ greater than or equal to zero. For any $n\in \N$, $[n]$ denotes the set $\{1,\ldots, n\}$. Given an element $x$ in $\R^n$, $||x||_1$ denotes the sum of the absolute values of its entries, that is, its 1-norm. In this paper we assume the set $\{\mathbf{e}_1,\ldots , \mathbf{e}_p\}$ is the canonical basis of $\R^p$. For a nonempty subset $B$ of $\R_\ge ^p$, we define the cone $$L(B):= \left\{\sum_{i=1}^n \lambda_i \mathbf{b}_i \mid n\in \N, \{\mathbf{b}_1,\ldots ,\mathbf{b}_n\}\subset B,\text{ and } \lambda_i \in \R_{\geq}, \forall i \in [n] \right\}.$$ Given a real cone $\CaC\subset \R^p_\ge$, it is well known that $\CaC\cap \N^p$ is finitely generated if and only if there exists a rational point in each extremal ray of $\CaC$. Moreover, any subsemigroup of $\CaC$ is finitely generated if and only if there exists an element of the semigroup in each extremal ray of $\CaC$. A good monograph about rational cones and affine monoids is [@Bruns]. From now on, we assume that the integer cones considered in this work are finitely generated. Given an integer cone $\CaC\subset \N^p$, an affine semigroup $S\subset \CaC$ is said to be a $\CaC$-semigroup if $\CaC \setminus S$ is a finite set. If the cone $\CaC$ is $\N^p$, a $\CaC$-semigroup is called an $\N^p$-semigroup. 
Fixed a finitely generated semigroup $S\subset \N^p$, we denote by $\CaC_S$ the integer cone $L(S)\cap\N^p$. Note that, if $S$ is a $\CaC$-semigroup, the cone $\CaC$ is $\CaC_S$. Obviously, a unique cone corresponds to infinitely many different semigroups. The cone $L(S)$ is a polyhedron and we denote by $\{h_1(x)=0,\ldots, h_t(x)=0\}$ the set of its supporting hyperplanes. We suppose $L(S)=\{ x\in \R_\ge^d \mid h_1(x)\ge 0, \ldots , h_t(x)\ge 0\}$. Unless otherwise stated, the coefficients of each $h_i(x)$ are integers and relatively prime. Assume $L(S)$ has $q$ extremal rays, denoted by $\tau_1,\ldots ,\tau_q$. Then, each $\tau_i$ is determined by the set of linear equations $H_i:=\{h_{j_1}(x)=0,\ldots, h_{j_{p-1}}(x)=0\}$ where $J_i:=\{j_1<\cdots <j_{p-1}\}\subset [t]$ is the index set of the supporting hyperplanes containing $\tau_i$. So, for each $i\in [q]$, there exists the minimal nonnegative integer vector $\mathbf{a}_i$ such that $\tau_i=\{\lambda\mathbf{a}_i\mid \lambda\in \R_\ge \}$. The set $\{\mathbf{a}_1,\ldots , \mathbf{a}_q\}$ is a generating set of $L(S)$. Note that a necessary condition for $S$ to be a $\CaC_S$-semigroup is that the set $\tau_i \cap (\CaC_S\setminus S)$ be finite for all $i\in [q]$. From each extremal ray $\tau_i$ of $L(S)$, we define $\upsilon_{i}(\alpha)$ as the line parallel to $\tau_i$ given by the solutions of the linear equations $\bigcup _{j\in J_i} \{h_j(x)=\alpha_j\}$ where $\alpha=(\alpha_{j_1},\ldots ,\alpha_{j_{p-1}})\in \Z^{p-1}$. For every integer point $P\in \Z^p$ and $i\in [q]$, there exists $\alpha \in \Z^{p-1}$ such that $P$ belongs to $\upsilon_{i}(\alpha)$; if $P\in \CaC_S$, then $\alpha \in \N^{p-1}$. We denote by $\Upsilon _i(P)$ the element $(h_{j_1}(P),\ldots, h_{j_{p-1}}(P))\in\N^{p-1}$, with $J_i=\{j_1<\cdots <j_{p-1}\}$, $P\in\CaC_S$ and $i\in [q]$. Note that for any $P\in\CaC_S$, $P\in \upsilon_i(\alpha)$ if and only if $\alpha =\Upsilon_i(P)$. 
Since all the semigroups appearing in this work are finitely generated, from now on we omit the term *affine* when affine semigroups are considered. An algorithm to detect if a semigroup is a $\CaC$-semigroup {#sec_main} =========================================================== In this section we study the conditions that a semigroup has to satisfy to be a $\CaC$-semigroup. This characterization depends on the minimal set of generators of the given semigroup. Let $S\subset \N^p$ be the affine semigroup minimally generated by $\Lambda_S=\{\mathbf{s}_1,\ldots, \mathbf{s}_q,\mathbf{s}_{q+1}, \ldots, \mathbf{s}_n\}$ and $\tau_1,\ldots ,\tau_q$ be the extremal rays of $L(S)$. Assume that for every $i\in [q]$, $\tau_i \cap (\CaC_S\setminus S)$ is finite and $\mathbf{s}_i$ is the minimum (with respect to the natural order) element in $\Lambda_S$ belonging to $\tau_i$. We denote by $\mathbf{f}_i$ the maximal element in $\tau_i \cap (\CaC_S\setminus S)$ with respect to the natural order. Recall that $\mathbf{a}_i$ is the minimal nonnegative integer vector defining $\tau_i$, and let $\mathbf{c}_i\in S$ be the element $\mathbf{f}_i+\mathbf{a}_i$. In case $\tau_i \cap (\CaC_S\setminus S)=\emptyset$, we fix $\mathbf{f}_i=-\mathbf{a}_i$. The elements $\mathbf{f}_i$ and $\mathbf{c}_i$ are a generalization, on the semigroup $\tau_i \cap S$, of the concepts of Frobenius number and conductor of a numerical semigroup; for numerical semigroups, the Frobenius number is the maximal natural number that is not in the semigroup, and the conductor is the Frobenius number plus one (see [@libro_rosales Chapter 1]). Hence, we call $\mathbf{f}_i$ and $\mathbf{c}_i$ the Frobenius element and the conductor of the semigroup $\tau_i\cap S$, respectively. One easy but important property of $S$ is that for every $P\in S$, $P+\mathbf{c}_i+\lambda\mathbf{a}_i\in S$ for any $i\in [q]$ and $\lambda\in\N$. Note that $\tau_i\cap \N^p$ is equal to $\{\lambda\mathbf{a}_i\mid \lambda\in \N \}$. 
So, there exists $S_i\subset \N$ such that $\tau_i\cap S=\{\lambda\mathbf{a}_i\mid \lambda\in S_i \}$. If we assume that $\tau_i \cap (\CaC_S\setminus S)$ is finite, it is easy to prove that $S_i$ is a numerical semigroup. \[isomorfismo\_en\_rayo\] The $\tau_i$-semigroup $\tau_i \cap S$ is isomorphic to the numerical semigroup $S_i$. Consider the isomorphism $\varphi:\tau_i \cap S\to S_i$ with $\varphi(\mathbf{w}):=\lambda$ such that $\mathbf{w}=\lambda\mathbf{a}_i$. Given the semigroup $\tau_i \cap S$, $\mathbf{f}_i$ is equal to $f\mathbf{a}_i$ and $\mathbf{c}_i=c\,\mathbf{a}_i$ where $f$ and $c$ are the Frobenius number and the conductor of the numerical semigroup $S_i$, respectively. In order to test whether $\tau_i \cap (\CaC_S\setminus S)$ is finite, the following result can be used. Let $S\subset \N^p$ be a semigroup and $\tau$ an extremal ray of $L(S)$ satisfying $\tau\cap \N^p=\{\lambda\mathbf{a}\mid \lambda\in \N \}$ with $\mathbf{a}\in \N^p$. Then, $\tau \cap (\CaC_S\setminus S)$ is finite if and only if $\gcd (\{\lambda\mid \lambda\mathbf{a}\in \tau\cap \Lambda_S \})=1$. Assume that $\tau \cap (\CaC_S\setminus S)$ is finite and suppose that $\gcd (\{\lambda\mid \lambda\mathbf{a}\in \tau\cap \Lambda_S \})=n\neq 1$. Hence, every element $\lambda\mathbf{a}$ with $\gcd(n,\lambda)=1$ does not belong to $S$, and then $\tau \cap (\CaC_S\setminus S)$ is not finite. Conversely, by Lemma \[isomorfismo\_en\_rayo\], if $\gcd (\{\lambda\mid \lambda\mathbf{a}\in \tau\cap \Lambda_S \})=1$, $S_i$ is isomorphic to $\tau_i \cap S$. Therefore, $\tau \cap (\CaC_S\setminus S)$ is finite. In order to introduce the announced characterization, we need to define some subsets of $L(S)$ and to prove some of their properties. Associated to the integer cone $\CaC_S$, consider the sets $\CaA:=\{ \sum_{i\in[q]} \lambda_i\mathbf{a}_i \mid 0\le \lambda_i\le 1\}\cap \N^p$ and $\CaD:=\{ \sum_{i\in[q]} \lambda_i\mathbf{s}_i \mid 0\le \lambda_i\le 1\}\cap \N^p$. 
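By Lemma \[isomorfismo\_en\_rayo\], computing $\mathbf{f}_i$ and $\mathbf{c}_i$ reduces to the Frobenius number and conductor of the numerical semigroup $S_i$. A minimal computational sketch (our own helper, assuming the $\lambda$-values generating $S_i$ are known and have gcd $1$):

```python
from math import gcd
from functools import reduce

def frobenius_and_conductor(gens):
    """Frobenius number and conductor of the numerical semigroup <gens>.
    Scans upward until min(gens) consecutive representable values appear;
    from that point on every value is representable."""
    assert reduce(gcd, gens) == 1
    if 1 in gens:
        return -1, 0          # the semigroup is all of N
    m = min(gens)
    reachable = [True]        # 0 is representable
    n = run = 0
    while run < m:
        n += 1
        hit = any(n >= g and reachable[n - g] for g in gens)
        reachable.append(hit)
        run = run + 1 if hit else 0
    conductor = n - m + 1
    return conductor - 1, conductor
```

In the running example of the next section, the ray semigroups on $\tau_1$ and $\tau_2$ are both isomorphic to $\langle 2,3\rangle$, with Frobenius number $1$ and conductor $2$, so $\mathbf{f}_i=1\cdot\mathbf{a}_i$ and $\mathbf{c}_i=2\cdot\mathbf{a}_i$ there.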
\[lemma\_diamante\] Given $P\in \CaC_S$, there exist $Q\in \CaA$ and $\beta \in \N^q$ such that $P=Q+\sum_{i\in[q]} \beta_i\mathbf{a}_i$. Moreover, $\Upsilon_j(P)=\Upsilon_j(Q)+\sum_{i\in [q]}\beta_i \Upsilon_j(\mathbf{a_i})$ for every $j\in[q]$. Since $P\in \CaC_S$, $P=\sum_{i\in[q]}\mu_i\mathbf{a}_i$ with $\mu_i\in \Q_\ge$. For each $\mu_i$ there exists $\lambda_i\in [0,1]$ satisfying $\mu_i=\lfloor \mu_i\rfloor +\lambda_i$. Hence, $P=Q +\sum _{i\in[q]}\lfloor \mu_i\rfloor\mathbf{a}_i$ where $Q=\sum_{i\in[q]}\lambda_i\mathbf{a}_i=P-\sum _{i\in[q]}\lfloor \mu_i\rfloor\mathbf{a}_i\in\CaA$. Trivially, $\Upsilon_j(P)$ is equal to $\Upsilon_j(Q)+\sum_{i\in [q]}\beta_i \Upsilon_j(\mathbf{a}_i)$ for every $j\in[q]$. For every $i\in [q]$, consider $ T_i\subset\N^{p-1}$ the semigroup generated by the finite set $\{\Upsilon_i(Q)\mid Q\in \CaA\}$ and $\Gamma_i$ its minimal generating set. Note that the sets $\CaA$, $ T_i$ and $\Gamma_i$ only depend on the cone $\CaC_S$, and since $\mathbf{a}_i\in \CaA$, $0\in T_i$. The relationships between the elements in $\CaC_S$ and $S$, and the elements belonging to $T_i$ and $\Gamma_i$ are explicitly determined in the following results for each $i\in [q]$. Let $P$ be an element in $\CaC_S$ such that $P\in \upsilon_i(\alpha)$ for some $\alpha\in \N^{p-1}$, then $\alpha\in T_i$. By definition, $P\in \upsilon_i(\alpha)$ means that $\alpha =\Upsilon_i(P)$. Using Lemma \[lemma\_diamante\], $P=Q+\sum_{j\in[q]} \beta_j\mathbf{a}_j$ with $Q,\mathbf{a}_1,\ldots ,\mathbf{a}_q\in \CaA$ and $\beta_1,\ldots ,\beta_q\in \N$. Therefore, $\Upsilon_i (P)=\Upsilon_i(Q)+\sum_{j\in [q]}\beta_j \Upsilon_i(\mathbf{a}_j)\in T_i$. For every $\alpha \in T_i$, $\CaC_S\cap \upsilon_i(\alpha)\neq \emptyset$ if and only if $\CaC_S\cap \upsilon_i(\beta)\neq \emptyset$ for all $\beta \in \Gamma _i$. 
Since $\Gamma _i\subset T_i$, if $\CaC_S\cap \upsilon_i(\alpha)\neq \emptyset$ for all $\alpha \in T_i$, then $\CaC_S\cap \upsilon_i(\beta)\neq \emptyset$ for all $\beta \in \Gamma _i$. Assume that $\CaC_S\cap \upsilon_i(\beta)\neq \emptyset$ for all $\beta \in \Gamma _i$ and let $\alpha$ be an element in $T_i$. Then, there exist $\beta_1,\ldots ,\beta_k\in \Gamma_i$, $\mu_1,\ldots ,\mu_k\in \N$ and $Q_1,\ldots ,Q_k\in \CaD$ such that $\alpha=\sum_{j\in [k]}\mu_j\beta_j$ and $\Upsilon_i(Q_j)=\beta _j$ for $j\in [k]$. Note that $P=\sum_{j\in [k]}\mu_jQ_j\in\CaC_S$ belongs to $\upsilon_i(\alpha)$. \[minimal\_terminos\_independientes\] For every $\alpha \in T_i$, $S\cap \upsilon_i(\alpha)\neq \emptyset$ if and only if $S\cap \upsilon_i(\beta)\neq \emptyset$ for all $\beta \in \Gamma _i$. Note that if $P\in S\cap \upsilon_i(\alpha)$ for some $\alpha \in\N^{p-1}$ and $i\in[q]$, then $P+\mathbf{c}_i+\lambda\mathbf{a}_i\in S$ and $\Upsilon_i(P+\mathbf{c}_i+\lambda\mathbf{a}_i)=\alpha$ for all $\lambda\in \N$. Now, we introduce a characterization of $\CaC$-semigroups. This characterization depends on the minimal generating set of the given semigroup. Besides, from its proof, we provide in Algorithm \[algoritmo\_check\_Csemig\] an algorithm for checking if a semigroup is a $\CaC$-semigroup. Note that most parts of Algorithm \[algoritmo\_check\_Csemig\] can be parallelized into at least $q$ stand-alone processes. \[main\_theorem\] A semigroup $S$ minimally generated by $\Lambda_S=\{\mathbf{s}_1,\ldots ,\mathbf{s}_n\}$ is a $\CaC_S$-semigroup if and only if: 1. $\tau_i \cap (\CaC_S\setminus S)$ is finite for all $i\in [q]$. 2. $\Lambda_S\cap \upsilon_i(\alpha)\neq \emptyset$ for all $\alpha \in \Gamma _i$ and $i\in [q]$. Let $S$ be a $\CaC_S$-semigroup. Trivially, $\tau_i \cap (\CaC_S\setminus S)$ is finite for all $i\in [q]$. Assume that $\Lambda_S\cap \upsilon_i(\alpha)= \emptyset$ for some $\alpha \in \Gamma _i$ and some $i\in [q]$. 
Since $\alpha \in \Gamma _i$, there exists $Q\in \CaA$ such that $\alpha =\Upsilon _i(Q)$. Besides, $Q+\lambda\mathbf{a}_i\in \CaC_S$ and $\Upsilon _i(Q+\lambda\mathbf{a}_i)=\alpha$ for all $\lambda \in \N$. For some $\lambda\in\N$, $Q+\lambda\mathbf{a}_i$ has to be in $S$ (since $S$ is a $\CaC_S$-semigroup), that is to say, $Q+\lambda\mathbf{a}_i=\sum_{j\in [n]} \mu_j \mathbf{s}_j$ with $\mu_1,\ldots ,\mu_n\in \N$. Therefore, $\alpha=\Upsilon_i (Q+\lambda\mathbf{a}_i)= \sum_{j\in [n]} \mu_j \Upsilon_i(\mathbf{s}_j)$. By Lemma \[lemma\_diamante\], for all $j\in [n]$, $\mathbf{s}_j=Q_j+\sum_{k\in [q]} \beta_{jk}\mathbf{a}_k$ for some $Q_j\in \CaA$ and $\beta_{j1},\ldots ,\beta_{jq}\in \N$. So, $\alpha= \sum_{j\in [n]} \mu_j \Upsilon_i(Q_j+\sum_{k\in [q]} \beta_{jk}\mathbf{a}_k)= \sum_{j\in [n]} \mu_j \Upsilon_i(Q_j) +\sum_{j\in [n]} \sum_{k\in [q]} \mu_j \beta_{jk}\Upsilon_i( \mathbf{a}_k)$. Since $\alpha$ is a minimal element in $T_i$, $\sum_{j\in [n]} \mu_j +\sum_{j\in [n]} \sum_{k\in [q]\setminus \{i\}} \mu_j \beta_{jk}=1$. Hence, there exists $\mathbf{s}\in\Lambda_S$ such that $\Upsilon_i(\mathbf{s})=\alpha$ and then $\Lambda_S\cap \upsilon_i(\alpha)\neq \emptyset$. Conversely, we assume that for all $i\in [q]$ and all $\alpha \in \Gamma _i$, $\tau_i \cap (\CaC_S\setminus S)$ is finite and $\Lambda_S\cap \upsilon_i(\alpha)\neq \emptyset$ (recall that $\mathbf{c}_i=\mathbf{f}_i+\mathbf{a}_i$). The second condition implies that for $\beta=\Upsilon_i(Q)$ with $Q\in \CaD$, each line $\upsilon_{i}(\beta)$ contains a unique nonzero minimal (with respect to the 1-norm) point belonging to $S$. Denote by $\{\mathbf{m}_{i1},\ldots , \mathbf{m}_{id_i}\}$ the set obtained as the union of the above points for the different elements in $\CaD$ (some of these elements belong to $\Lambda_S$). Note that $\mathbf{m}_{ij}+\mathbf{c}_i+\lambda \mathbf{a}_i\in S$ for all $j\in [d_i]$ and $\lambda\in \N$. 
Consider $n_i:=\max \{||\mathbf{m}_{i1}+\mathbf{c}_i||_1,\ldots , ||\mathbf{m}_{id_i}+\mathbf{c}_i ||_1\}$, and let $\mathbf{x}_i$ be the minimum (with respect to the 1-norm) element in $\tau_i\cap S$ such that $||\mathbf{x}_i||_1$ is greater than or equal to $n_i$. The set $\CaD_i:=\CaD+\mathbf{x}_i$ satisfies $\CaD_i \cap S=\CaD_i \cap \CaC_S=\CaD_i$, so $\mathbf{x}_i+\CaC_S\subset S$. We define the finite set $\CaX:= \{\sum_{i\in [q]}\lambda _i\mathbf{x}_i \mid 0\le \lambda_i\le 1 \} $. Since $\mathbf{x}_i+\CaC_S\subset S$ for every $i\in [q]$, $\CaC_S\setminus S\subset \CaX$. Therefore, $S$ is a $\CaC_S$-semigroup. \[algoritmo\_check\_Csemig\] The following example illustrates this theorem and the algorithm obtained from it. \[initial\_example\] Let $S\subset \N^3$ be the semigroup minimally generated by $$\begin{multlined} \Lambda_S=\{ (2, 0, 0), (4, 2, 4), (0, 1, 0), (3, 0, 0), (6, 3, 6), (3, 1, 1), (4, 1, 1),\\ (3, 1, 2), (1, 1, 0), (3, 2, 3), (1, 2, 1) \}. \end{multlined}$$ The cone $L(S)$ is $\langle (1,0,0),(2,1,2),(0,1,0) \rangle_{\R_\ge}$ and its supporting hyperplanes are $h_1(x,y,z)\equiv 2y-z =0$, $h_2(x,y,z)\equiv x-z =0$ and $h_3(x,y,z)\equiv z=0$. Recall $\CaC_S=L(S)\cap \N^3$. By $\mathbf{a}_1$, $\mathbf{a}_2$ and $\mathbf{a}_3$ we denote the vectors $(1,0,0)$, $(2,1,2)$ and $(0,1,0)$ respectively, and $\tau_1$, $\tau_2$ and $\tau_3$ are the extremal rays with sets of defining equations $\{h_1(x,y,z)=0,h_3(x,y,z)=0\}$, $\{h_1(x,y,z)=0,h_2(x,y,z)=0\}$ and $\{h_2(x,y,z)=0,h_3(x,y,z)=0\}$, respectively. Hence, $S_1=(\tau_1\setminus \{(1,0,0)\})\cap \N^3$, $S_2=(\tau_2\setminus\{(2,1,2)\})\cap \N^3$ and $S_3=\tau_3\cap \N^3$, and the first condition in Theorem \[main\_theorem\] holds. 
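For this cone, $\Upsilon_1(P)=(h_1(P),h_3(P))=(2y-z,\,z)$, and the minimal generating set $\Gamma_1$ of the monoid generated by $\Upsilon_1(\CaA)$ can be recomputed mechanically. A naive sketch in Python, hard-coding the ten points of $\CaA$ for this cone (the helper names are ours):

```python
def in_monoid(v, gens):
    """Naive membership test for the monoid of N^k generated by `gens`."""
    if not any(v):
        return True
    return any(all(x >= y for x, y in zip(v, g)) and
               in_monoid(tuple(x - y for x, y in zip(v, g)), gens)
               for g in gens)

def minimal_generators(points):
    """Minimal generating set of the monoid generated by `points`:
    drop g whenever g = a + w with a another generator and w in the monoid."""
    gens = [g for g in {tuple(p) for p in points} if any(g)]
    return sorted(g for g in gens
                  if not any(a != g and all(x >= y for x, y in zip(g, a)) and
                             in_monoid(tuple(x - y for x, y in zip(g, a)), gens)
                             for a in gens))

# the ray tau_1 is cut out by h_1 = 2y - z and h_3 = z
A = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1),
     (2, 1, 1), (2, 1, 2), (2, 2, 2), (3, 1, 2), (3, 2, 2)]
upsilon1 = {(2 * y - z, z) for (x, y, z) in A}
gamma1 = minimal_generators(upsilon1)
```

Both outputs agree with the sets $\Upsilon_1(\CaA)$ and $\Gamma_1$ computed in this example.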
The set $\CaA$ is $$\label{set_A} \begin{multlined} \{(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 1, 1), (2, 1, 2),\\ (2, 2, 2), (3, 1, 2), (3, 2, 2)\}, \end{multlined}$$ $\Upsilon_1(\CaA)=\{(0, 0), (0, 2), (1, 1), (2, 0), (2, 2)\}$, and the sets $\Upsilon_2(\CaA)$ and $\Upsilon_3(\CaA)$ are $\{(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)\}$ and $\{(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2) \}$, respectively. Therefore, $\Gamma_1=\{(0,2),(1,1),(2,0)\}$ and $\Gamma_2=\Gamma_3=\{(0,1),(1,0)\}$. Since $\Upsilon_1(\{(3,1,1),(3,1,2),(1,1,0)\})=\Gamma_1$, $\Upsilon_2(\{(3,1,2),(3,2,3)\})=\Gamma_2$ and $\Upsilon_3(\{(1,1,0),(1,2,1)\})=\Gamma_3$, $S$ satisfies the second condition in Theorem \[main\_theorem\]. Hence, $S$ is a $\CaC_S$-semigroup. By using our implementation of Algorithm \[algoritmo\_check\_Csemig\], we can confirm that $S$ is a $\CaC_S$-semigroup: In [1]: IsCsemigroup([[2,0,0],[4,2,4],[0,1,0],[3,0,0],[6,3,6], [3,1,1],[4,1,1],[3,1,2],[1,1,0],[3,2,3],[1,2,1]]) Out[1]: True To finish this section, it should be pointed out that there exist some special cases of semigroups where Theorem \[main\_theorem\] can be simplified: $\N^p$-semigroups and the two-dimensional case. Note that, if the integer cone $\CaC_S$ is $\N^p$, its supporting hyperplanes are $\{x_1=0,\ldots ,x_p=0\}$. Moreover, since its extremal rays are the axes, $\tau_i\equiv \{\lambda \mathbf{e}_i\mid \lambda \in \Q_\ge \}$ is determined by the equations $\cup _{j\in[p]\setminus\{i\}}\{x_{j}=0\}$, and for any canonical generator $\mathbf{e}$ of $\N^{p-1}$, there exists $P$ in $\N^p$ such that $\Upsilon_i(P)=\mathbf{e}$. Furthermore, $\cup _{j\in [p]\setminus \{i\}} \{\Upsilon_i(\mathbf{e}_j)\}$ is the canonical basis of $\N^{p-1}$. Hence, $\Gamma_1=\cdots =\Gamma_p$ is the canonical basis of $\N^{p-1}$. From previous considerations, a characterization of $\N^p$-semigroups equivalent to [@CFU Theorem 2.8] is obtained from Theorem \[main\_theorem\]. 
A semigroup $S$ minimally generated by $\Lambda_S$ is an $\N^p$-semigroup if and only if: 1. for all $i\in [p]$, the nonzero entries of the elements in $\tau_i \cap \Lambda_S$ are coprime, or $\mathbf{s}_i=\mathbf{e}_i$. 2. for all $i,j\in [p]$ with $i\neq j$, $\mathbf{e}_i+\lambda_j\mathbf{e}_j\in \Lambda_S$ for some $\lambda_j\in \N$. Focusing on the two-dimensional case, note that the extremal rays and the supporting hyperplanes of a cone coincide. Since for each extremal ray the coefficients of its defining linear equation are relatively prime, the linear equations $h_1(x,y)=1$ and $h_2(x,y)=1$ always have nonnegative integer solutions. So, any semigroup $S\subset \N^2$ is a $\CaC_S$-semigroup if and only if $\tau_i \cap (\CaC_S\setminus S)$ is finite for $i=1,2$, and both sets $\Lambda_S\cap \{h_1(x,y)=1\}$ and $\Lambda_S\cap \{h_2(x,y)=1\}$ are nonempty. Set of gaps of $\CaC$-semigroups {#sec_gaps} ================================ In this section, we give an algorithm to compute the set of gaps of a $\CaC$-semigroup. This algorithm is obtained from Theorem \[main\_theorem\]. In order to introduce such an algorithm, let us start by redefining some objects used to prove that theorem. Given a $\CaC_S$-semigroup $S$ with $q$ extremal rays, for any $i\in [q]$, let $\mathbf{c}_i$ be the conductor of the semigroup $\tau_i\cap S$. By Corollary \[minimal\_terminos\_independientes\], for any $\alpha\in \Upsilon_i(\CaD)$ the intersection $\upsilon_i(\alpha)\cap S$ is not empty. Hence, let $\mathbf{m}^{(i)}_\alpha$ be the element in $\upsilon_i(\alpha)\cap S$ with minimal 1-norm, for $\alpha\in \Upsilon_i(\CaD)\setminus\{0\}$. Note that $\mathbf{m}^{(i)}_\alpha + \mathbf{c}_i+\lambda \mathbf{a}_i\in S$ for all $\lambda\in \N$. Let $n_i:=||\mathbf{c}_i ||_1+ \max \big(\{||\mathbf{m}^{(i)}_\alpha||_1\mid \alpha\in \Upsilon_i(\CaD) \setminus\{0\} \}\big)$, and $\mathbf{x}_i$ the minimal element in $\tau_i\cap S$ such that $||\mathbf{x}_i||_1$ is greater than or equal to $n_i$. 
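A direct search for $\mathbf{x}_i$ steps through multiples of $\mathbf{a}_i$, keeping only those that lie in the ray semigroup $\tau_i\cap S$. A small sketch (the helper is ours; `ray_gens` are the $\lambda$-values generating $S_i$, and the membership check follows the definition of $\mathbf{x}_i$ as an element of $\tau_i\cap S$):

```python
def x_on_ray(a, ray_gens, n):
    """Minimal lam*a with lam in the numerical semigroup <ray_gens>
    and 1-norm lam*||a||_1 >= n."""
    def representable(lam):
        if lam == 0:
            return True
        return any(lam >= g and representable(lam - g) for g in ray_gens)
    lam = -(-n // sum(a))            # ceil(n / ||a||_1)
    while not representable(lam):
        lam += 1
    return tuple(lam * c for c in a)
```

With $\mathbf{a}_2=(2,1,2)$, $S_2\cong\langle 2,3\rangle$ and $n_2=24$ as in the example below, this yields $\mathbf{x}_2=(10,5,10)$.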
The vector $\mathbf{x}_i$ can be computed as follows: let $Q$ be the nonnegative rational solution of the system of linear equations $\{x_1+\cdots +x_p=n_i, h_{j_1}(x)=0,\ldots, h_{j_{p-1}}(x)=0\}$ (recall that $h_{j_1}(x)=0,\ldots, h_{j_{p-1}}(x)=0$ are the equations defining $\tau_i$); then $\mathbf{x}_i= \lceil \frac{||Q||_1}{||\mathbf{a}_i||_1}\rceil \mathbf{a}_i$. By the proof of Theorem \[main\_theorem\], $\CaC_S\setminus S\subset \CaX$, with $\CaX= \{\sum_{i\in [q]}\lambda _i\mathbf{x}_i \mid 0\le \lambda_i\le 1 \}$. Algorithm \[computing\_gaps\_Csemig\] shows the process to compute the set of gaps of $S$. Note that several of its steps can be computed in parallel. \[computing\_gaps\_Csemig\] We illustrate Algorithm \[computing\_gaps\_Csemig\] in the following example. Besides, we confirm our by-hand computations by using our free software [@PROGRAMA]. Consider the $\CaC_S$-semigroup $S$ defined in Example \[initial\_example\]. So, $\mathbf{s}_1=\mathbf{c}_1= (2,0,0)$, $\mathbf{s}_2=\mathbf{c}_2= (4,2,4)$, $\mathbf{s}_3= (0,1,0)$ and $\mathbf{c}_3= (0,0,0)$. 
The set $\CaD$ is $$\begin{multlined} \{ (0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 0, 0), (2, 1, 0), (2, 1, 1),\\ (2, 1, 2), (2, 2, 2), (3, 1, 1), (3, 1, 2), (3, 2, 2), (3, 2, 3), (4, 1, 2), (4, 2, 2),\\ (4, 2, 3), (4, 2, 4), (4, 3, 4), (5, 2, 3), (5, 2, 4), (5, 3, 4), (6, 2, 4), (6, 3, 4) \} \end{multlined}$$ For example, for the extremal ray $\tau_1$, $\Upsilon_1(\CaD)$ is the set $$\begin{multlined} \{(0, 0), (0, 2), (0, 4), (1, 1), (1, 3), (2, 0), (2, 2), (2, 4)\}, \end{multlined}$$ and $\cup_{\alpha\in \Upsilon_1(\CaD) \setminus\{0\}} \{\mathbf{m}^{(1)}_\alpha\}$ is $$\begin{multlined} \{ (0, 1, 0), (3, 1, 1), (3, 1, 2), (3, 2, 2), (3, 2, 3), (4, 2, 4), (4, 3, 4) \} \end{multlined}$$ For $\tau_2$ and $\tau_3$, $$\begin{multlined} \cup_{\alpha\in \Upsilon_2(\CaD)\setminus\{0\}} \{\mathbf{m}^{(2)}_\alpha\}= \{(0, 1, 0), (3, 1, 2), (1, 1, 0), (3, 2, 3), (2, 0, 0),\\ (2, 1, 0), (6, 3, 5), (3, 1, 1)\} \end{multlined}$$ $$\begin{multlined}\cup_{\alpha\in \Upsilon_3(\CaD)\setminus\{0\}} \{\mathbf{m}^{(3)}_\alpha\}= \{ (1, 1, 0), (1, 2, 1), (2, 0, 0), (2, 3, 1), (2, 4, 2),\\ (3, 1, 1), (3, 1, 2), (3, 2, 3), (4, 2, 2), (4, 3, 3), (4, 2, 4), (5, 3, 4), (6, 2, 4) \} \end{multlined}$$ Then $n_1=13$, $n_2=24$ and $n_3=12$, and $\mathbf{x}_1= (14,0,0)$, $\mathbf{x}_2= (10,5,10)$ and $\mathbf{x}_3= (0,13,0)$. 
Therefore, the set of gaps of $S$ is $$\begin{multlined} \{ (1,0,0), (1,1,1), (2,1,1), (2,1,2), (2,2,1), (2,2,2), (2,3,2),\\ (4,1,2), (4,2,3), (5,2,4), (5,3,5), (8,4,7) \} \end{multlined}$$ By using our implementation of Algorithm \[computing\_gaps\_Csemig\], we obtain those gaps: In [1]: ComputeGaps([[2,0,0],[4,2,4],[0,1,0],[3,0,0],[6,3,6], [3,1,1],[4,1,1],[3,1,2],[1,1,0],[3,2,3],[1,2,1]]) Out[1]: [[1,0,0], [1,1,1], [2,1,1], [2,1,2], [2,2,1], [2,2,2], [2,3,2], [4,1,2], [4,2,3], [5,2,4], [5,3,5], [8,4,7]] Embedding dimension of $\CaC$-semigroups {#sec_embedding_dimension} ======================================== In [@Csemigroup], it is proved that the embedding dimension of an $\N^p$-semigroup is greater than or equal to $2p$, and that this bound is attained. Furthermore, a conjecture about a lower bound on the embedding dimension of any $\CaC$-semigroup is proposed. In this section we determine a lower bound on the embedding dimension of a given $\CaC$-semigroup by studying its elements belonging to $\CaA$. As in previous sections, let $\CaC\subset \N^p$ be a finitely generated cone and $\tau_1,\ldots ,\tau_q$ its extremal rays. For any $i\in [q]$, $\mathbf{a}_i$ is the generator of $\tau_i\cap \N^p$, $\CaA$ is the finite set $\{ \sum_{i\in[q]} \lambda_i\mathbf{a}_i \mid 0\le \lambda_i\le 1\}\cap \N^p$ and $\Gamma_i=\{\alpha^{(i)}_1,\ldots ,\alpha^{(i)}_{m_i} \}$ denotes the minimal generating set of the semigroup $T_i\subset \N^{p-1}$ generated by $\Upsilon_i(\CaA)$. Given a $\CaC$-semigroup $S$, consider the set $\Lambda':=\{\mathbf{s}_{t_1},\ldots ,\mathbf{s}_{t_k}\}$ of minimal elements of $S$ in $\CaA$, and $M_l:=\{i\in [q] \mid \Upsilon_{i}(\mathbf{s}_{t_l})\in\Gamma_i\cup \{0\} \}$ for $l\in [k]$. The following result provides a lower bound for the embedding dimension of any $\CaC$-semigroup such that $\Lambda'$ is the set of its minimal elements in $\CaA$. 
\[proposition\_dim\] Given a $\CaC$-semigroup $S\subset \N^p$, $$\e(S)\ge \sum _{i\in [q]} (\e(S_i)+\e(T_i))+k-\sum_{i\in [k]}\sharp(M_i).$$ From Theorem \[main\_theorem\], for any $i\in [q]$, there exist $\e(S_i)$ minimal generators of $S$ in $\tau_i$. Moreover, for each element $\gamma\in \Gamma_i$, there is at least one element of $\Lambda_S$ in $\upsilon_i(\gamma)$. But, it is possible that one element in $\Lambda_S\cap \CaA$ belongs to two (or more) different lines $\upsilon_i(\gamma)$ and $\upsilon_j(\gamma')$ with $\gamma\in\Gamma_i\cup \{0\} $ and $\gamma'\in\Gamma_j\cup \{0\} $ (in that case, $\upsilon_i(\gamma)\cap \upsilon_j(\gamma')$ is this minimal generator). Since each one of these points in $\Lambda_S\cap \CaA$ can be the only minimal generator of $S$ in those lines, $\sharp(M_l)=n>1$ means that one minimal generator can be the only minimal generator for $n$ different elements in $\cup _{i\in[q]}\Gamma_i\cup \{0\} $. So, counting the minimal number of elements needed to have at least one minimal generator in each line $\upsilon_i(\gamma)$ for each $\gamma\in\Gamma_i\cup \{0\} $ and $i\in [q]$, the embedding dimension of $S$ is greater than or equal to $\sum _{i\in [q]} (\e(S_i)+\e(T_i))+k-\sum_{i\in [k]}\sharp(M_i).$ Consider the $\CaC_S$-semigroup $S$ given in Example \[initial\_example\]. In that case, $\Lambda'=\{(3,1,2),(0,1,0),(1,1,0)\}$, $\sharp(M_1)=2$ (i.e. $\Upsilon_i(3,1,2)\in \Gamma_i$ for $i=1,2$), $\sharp(M_2)=2$ ($\Upsilon_1 (0,1,0) \in \Gamma_1$ and $\Upsilon_3 (0,1,0)=(0,0)$), and $\sharp(M_3)=2$ ($\Upsilon_1 (1,1,0) \in \Gamma_1$ and $\Upsilon_3 (1,1,0) \in \Gamma_3$). So, $\sum _{i\in [q]} (\e(S_i)+\e(T_i))+k-\sum_{i\in [k]}\sharp(M_i)= 5 + 7 + 3 - 2-2-2= 9$, which is smaller than $\e(S)=11$. Given any bound, the first interesting question about it is whether the bound is attained for some $\CaC$-semigroup. The answer is affirmative and this fact is formulated as follows. 
Let $S_1,\ldots ,S_q$ be the non-proper numerical semigroups minimally generated by $\{n_1^{(i)},\ldots ,n_{\e(S_i)}^{(i)}\}$ for each $i\in [q]$, and let $\Lambda''\subset \CaC\setminus\{\mathbf{a}_1,\ldots ,\mathbf{a}_q\}$ be a set satisfying:

- for every $\gamma\in\Gamma_i$ and $i\in [q]$, there exists a unique $\mathbf{d}\in \Lambda''$ such that $\Upsilon_i(\mathbf{d})=\gamma$;

- if there exist $i,j\in [q]$ and $\mathbf{d},\mathbf{d}'\in \Lambda''$ such that $\Upsilon_i(\mathbf{d})=\Upsilon_j(\mathbf{d}')$, then $\mathbf{d}=\mathbf{d}'$.

Then the embedding dimension of the $\CaC$-semigroup generated by $\Lambda'' \cup \bigcup_{i\in [q]} \{n_1^{(i)}\mathbf{a}_i,\ldots ,n_{\e(S_i)}^{(i)}\mathbf{a}_i\}$ is $$\sum _{i\in [q]} (\e(S_i)+\e(T_i))+k-\sum_{i\in [k]}\sharp(M_i).$$ By hypothesis, there are exactly $\sum _{i\in [q]} \e(T_i)+k-\sum_{i\in [k]}\sharp(M_i)$ minimal generators of the $\CaC$-semigroup generated by $\Lambda'' \cup \bigcup_{i\in [q]} \{n_1^{(i)}\mathbf{a}_i,\ldots ,n_{\e(S_i)}^{(i)}\mathbf{a}_i\}$ outside its extremal rays, and $\sum_{i\in [q]} \e(S_i)$ belonging to its extremal rays. Let $S\subset \N^3$ be the semigroup minimally generated by $$\begin{multlined} \Lambda_S=\{ (2, 0, 0), (4, 2, 4), (0, 2, 0), (3, 0, 0), (6, 3, 6), (0, 3, 0), (3, 1, 1),\\ (3, 1, 2), (1, 1, 0), (3, 2, 3), (1, 2, 1) \}. \end{multlined}$$ Note that the cone $\CaC_S$ is the same as the cone considered in Example \[initial\_example\]. So $\CaA$, $\Gamma_1$, $\Gamma_2$ and $\Gamma_3$ are the sets given in that example. For $S$, $\Upsilon_1(\{(3,1,1),(3,1,2),(1,1,0)\})=\Gamma_1$, $\Upsilon_2(\{(3,1,2),(3,2,3)\})=\Gamma_2$ and $\Upsilon_3(\{(1,1,0),(1,2,1)\})=\Gamma_3$. Since $(1,1,0),(3,1,2)\in \CaA$, $\e(S)= 11= 6+7+2-2-2 = \sum _{i\in [3]} (\e(S_i)+\e(T_i))+2-\sum_{i\in[2]}\sharp(M_i)$.
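Examples such as the one above can be double-checked by brute force: an element of a proposed generating set is a minimal generator precisely when it is not a nonnegative integer combination of the remaining ones. The following sketch (an illustrative helper, independent of the implementation referenced earlier) confirms that all eleven listed generators of $S$ are minimal, so $\e(S)=11$.

```python
def in_monoid(target, gens):
    """Is `target` a nonnegative integer combination of `gens`?

    Brute-force reachability over lattice points that are componentwise
    bounded by `target` (valid because all generators lie in N^p)."""
    target = tuple(target)
    zero = (0,) * len(target)
    seen = {zero}
    stack = [zero]
    while stack:
        v = stack.pop()
        for g in gens:
            w = tuple(a + b for a, b in zip(v, g))
            if w == target:
                return True
            if all(wi <= ti for wi, ti in zip(w, target)) and w not in seen:
                seen.add(w)
                stack.append(w)
    return False

gens = [(2, 0, 0), (4, 2, 4), (0, 2, 0), (3, 0, 0), (6, 3, 6), (0, 3, 0),
        (3, 1, 1), (3, 1, 2), (1, 1, 0), (3, 2, 3), (1, 2, 1)]
minimal = [g for g in gens if not in_monoid(g, [h for h in gens if h != g])]
assert len(minimal) == 11  # every listed generator is minimal, so e(S) = 11
```

The componentwise bound makes the search finite, since adding any generator can never decrease a coordinate.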
Fix a cone $\CaC$. Studying the different ways of selecting sets of points $K\subset \CaC$ such that $\cup_{i\in[q]}\Gamma_i$ is the union of the minimal generating sets of the semigroups generated by $\cup_{Q\in K}\Upsilon_i(Q)$ (for $i$ from 1 to $q$), we can state results like the following: Let $S_1,\ldots ,S_q$ be the non-proper numerical semigroups minimally generated by $\{n_1^{(i)},\ldots ,n_{\e(S_i)}^{(i)}\}$ for each $i\in [q]$, and let $\Lambda''\subset \CaC\setminus \CaA$ be a set such that for every $\gamma\in\Gamma_i$ and $i\in [q]$, there exists a unique $\mathbf{d}\in \Lambda'' $ with $\Upsilon_i(\mathbf{d})=\gamma$. Then the embedding dimension of the $\CaC$-semigroup generated by $\Lambda'' \cup \bigcup_{i\in [q]} \{n_1^{(i)}\mathbf{a}_i,\ldots ,n_{\e(S_i)}^{(i)}\mathbf{a}_i\}$ is $\sum_{i\in [q]} (\e(S_i) + \e(T_i))$. Finally, we illustrate the above result with an example. Let $S\subset \N^3$ be the semigroup minimally generated by $$\begin{multlined} \Lambda_S=\{(2, 0, 0), (4, 2, 4), (0, 2, 0), (3, 0, 0), (6, 3, 6), (0, 3, 0), (3, 1, 1),\\ (4, 1, 2), (5, 2, 4), (2, 1, 0), (1, 2, 0), (3, 2, 3), (1, 2, 1) \}. \end{multlined}$$ Again, the cone $\CaC_S$ is the cone appearing in Example \[initial\_example\]. Note that $(2,0,0),(3,0,0)\in S_1$, $(4,2,4), (6,3,6)\in S_2$ and $(0,2,0),(0,3,0)\in S_3$. Moreover, $\Upsilon_1(\{ (3, 1, 1), (4, 1, 2), (2, 1, 0) \})=\Gamma_1$, $\Upsilon_2(\{(5, 2, 4), (3, 2, 3)\})=\Gamma_2$, $\Upsilon_3(\{(1, 2, 0),(1, 2, 1)\})=\Gamma_3$, and $\Lambda_S\subset \CaC_S\setminus \CaA$. As the previous corollary asserts, $\e(S)=13 = 6 + 7 = \sum_{i\in [3]} (\e(S_i) + \e(T_i))$.

### Acknowledgements {#acknowledgements .unnumbered}

The authors were partially supported by Junta de Andalucía research group FQM-366. The first author was supported by the Programa Operativo de Empleo Juvenil 2014-2020, which is financed by the European Social Fund within the Youth Guarantee initiative.
The second, third and fourth authors were partially supported by the project MTM2017-84890-P (MINECO/FEDER, UE), and the fourth author was partially supported by the project MTM2015-65764-C3-1-P (MINECO/FEDER, UE). [20]{} <span style="font-variant:small-caps;">Bruns, W.; Gubeladze, J.</span>, Polytopes, rings, and K-theory, Springer, Dordrecht, 2009. <span style="font-variant:small-caps;">Cisto, C.; Failla G.; Utano, R.</span>, *On the generators of a generalized numerical semigroup*, An. St. Univ. Ovidius Constanta **27**(1) (2019), 49–59. <span style="font-variant:small-caps;">Bruns, W.; Ichim, B.; Römer, T.; Söger, C.</span>, , available at <http://www.home.uni-osnabrueck.de/wbruns/normaliz/> <span style="font-variant:small-caps;">Díaz-Ramírez, J.D.; García-García, J.I.; Marín-Aragón, D.; Vigneron-Tenorio, A.</span>, , available at <https://github.com/D-marina/CommutativeMonoids/tree/master/ClassCSemigroup>. <span style="font-variant:small-caps;">Díaz-Ramírez, J.D.; García-García, J.I.; Sánchez-R.-Navarro, A.; Vigneron-Tenorio, A.</span>, , arXiv:1906.01585 \[math.AC\]. <span style="font-variant:small-caps;">Failla G.; Peterson C.; Utano, R.</span>, *Algorithms and basic asymptotics for generalized numerical semigroups in $\N^p$*, Semigroup Forum (2016) 92, 460–473. <span style="font-variant:small-caps;">García-García J.I.; Marín-Aragón D.; Vigneron-Tenorio A.</span>, *An extension of Wilf’s conjecture to affine semigroups*, Semigroup Forum (2018), vol. 96, Issue 2, 396–408. <span style="font-variant:small-caps;">García-García J.I.; Marín-Aragón D.; Vigneron-Tenorio A.</span>, *A characterization of some families of Cohen–Macaulay, Gorenstein and/or Buchsbaum rings*, Discrete Applied Mathematics, to appear. <https://doi.org/10.1016/j.dam.2018.03.021> <span style="font-variant:small-caps;">García-García, J.I.; Moreno-Frías, M.A.; Sánchez-R.-Navarro, A.; Vigneron-Tenorio A.</span>, *Affine convex body semigroups*, Semigroup Forum (2013), vol. 87, Issue 2, 331–350. 
<span style="font-variant:small-caps;">García-García, J.I.; Ojeda, I.; Rosales, J.C.; Vigneron-Tenorio, A.</span>, *On pseudo-Frobenius elements of submonoids of $\N^q$*, arXiv:1903.11028 \[math.AC\]. Python Software Foundation, Python Language Reference, version 3.5, available at <http://www.python.org>. <span style="font-variant:small-caps;">Rosales, J. C.; García-Sánchez, P. A.,</span> Numerical semigroups, *Developments in Mathematics, 20. Springer, New York, 2009.* [^1]: Departamento de Matemáticas/INDESS (Instituto Universitario para el Desarrollo Social Sostenible), Universidad de Cádiz, E-11406 Jerez de la Frontera (Cádiz, Spain). E-mail: juandios.diaz@uca.es. [^2]: Departamento de Matemáticas/INDESS (Instituto Universitario para el Desarrollo Social Sostenible), Universidad de Cádiz, E-11510 Puerto Real (Cádiz, Spain). E-mail: ignacio.garcia@uca.es. [^3]: Departamento de Matemáticas, Universidad de Cádiz, E-11510 Puerto Real (Cádiz, Spain). E-mail: daniel.marin@uca.es. [^4]: Departamento de Matemáticas/INDESS (Instituto Universitario para el Desarrollo Social Sostenible), Universidad de Cádiz, E-11406 Jerez de la Frontera (Cádiz, Spain). E-mail: alberto.vigneron@uca.es.
---
abstract: 'We present a new technique for proving empirical process invariance principles for stationary processes $(X_n)_{n\geq 0}$. The main novelty of our approach lies in the fact that we only require the central limit theorem and a moment bound for a restricted class of functions $(f(X_n))_{n\geq 0}$, not containing the indicator functions. Our approach can be applied to Markov chains and dynamical systems, using spectral properties of the transfer operator. Our proof consists of a novel application of chaining techniques.'
author:
- 'Herold Dehling[^1]'
- 'Olivier Durieu[^2]'
- 'Dalibor Volny[^3]'
title: |
  New Techniques for Empirical Processes of\
  Dependent Data
---

Introduction
============

Let $(X_n)_{n\geq 0}$ be a stationary ergodic process of ${{\mathbb R}}$-valued random variables with marginal distribution function $F(t)=P(X_0\leq t)$. Define the empirical distribution function $(F_n(t))_{t\in{{\mathbb R}}}$ and the empirical process $(U_n(t))_{t\in{{\mathbb R}}}$ by $$\begin{aligned} F_n(t) &:= & \frac{1}{n} \sum_{i=1}^n 1_{(-\infty,t]} (X_i),\; t\in{{\mathbb R}}, \\ U_n(t) &:=& \sqrt{n} (F_n(t)-F(t)),\; t\in{{\mathbb R}}.\end{aligned}$$ The empirical process plays a prominent role in non-parametric statistical inference about the distribution function $F$. In all statistical applications, information about the distribution of the empirical process is needed. In the case of i.i.d. observations, Donsker [@Don52] proved in 1952 that the empirical process converges in distribution to a Brownian bridge process, thus confirming an earlier conjecture of Doob [@Doo49]. In 1968, Billingsley [@Bil68] extended Donsker’s theorem to some weakly dependent processes, specifically to functionals of $\phi$-mixing processes. One of the applications of Billingsley’s theorem is to the empirical process of data generated by the continued fraction dynamical system $T:[0,1]\rightarrow [0,1]$, $T(x):=\frac{1}{x} \bmod 1$.
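As a concrete numerical illustration (not part of the original argument), the empirical process defined above is straightforward to evaluate on a grid; the following NumPy sketch computes $U_n(t)=\sqrt n\,(F_n(t)-F(t))$ for an assumed i.i.d. uniform sample, where the Donsker limit is a Brownian bridge.

```python
import numpy as np

def empirical_process(sample, F, ts):
    """U_n(t) = sqrt(n) * (F_n(t) - F(t)) evaluated on the grid ts."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    # F_n(t) = (1/n) * #{i : X_i <= t}, via binary search in the sorted sample
    Fn = np.searchsorted(x, ts, side="right") / n
    return np.sqrt(n) * (Fn - F(ts))

# i.i.d. Uniform(0, 1) example: F(t) = t on [0, 1]
rng = np.random.default_rng(0)
ts = np.linspace(0.0, 1.0, 5)
U = empirical_process(rng.uniform(size=1000), lambda t: t, ts)
assert U.shape == (5,) and abs(float(U[-1])) < 1e-9  # U_n(1) = 0 since F_n(1) = F(1) = 1
```

The same function applies verbatim to dependent samples once the marginal $F$ is known.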
Since 1968, many authors have studied the empirical process of weakly dependent data. Invariance principles for the empirical distribution function of strongly mixing random variables were proved in 1977 by Berkes and Philipp [@BerPhi77], and in 1980 for the multivariate case by Philipp and Pinzur [@PhiPin80]. Later, absolutely regular processes were studied by Doukhan et al. [@DouMasRio95] and Borovkova et al. [@BorBurDeh01]. Many other weak dependence conditions have been studied; see Doukhan and Louhichi [@DouLou99], Prieur [@Pri02], Dedecker and Prieur [@DedPri07], Wu and Shao [@WuSha04], Wu [@Wu08]. From the point of view of dynamical systems, an empirical process invariance principle for expanding maps of the interval was proved by Collet et al. [@ColMarSch04]. Another one for ergodic torus automorphisms was proved by Durieu and Jouan [@DurJou08]. Proofs of empirical process invariance principles usually consist of two parts, establishing finite-dimensional convergence and tightness of the empirical process. Finite-dimensional convergence, i.e. convergence in distribution of the sequence of vectors $(U_n(t_1),\ldots,U_n(t_k))_{n\geq 1}$, is an immediate consequence of the multivariate CLT for partial sums of the process $$(1_{(-\infty,t_1]}(X_n),\ldots,1_{(-\infty,t_k]}(X_n))_{n\geq 1}.$$ Tightness is far more difficult to establish. One ingredient is usually a probability bound on the increments of the empirical process $$U_n(t)-U_n(s) = \frac{1}{\sqrt{n}} \sum_{i=1}^n\{ 1_{(s,t]}(X_i)-(F(t)-F(s))\},$$ for a fixed pair $ s <t $. In the simplest approach, such bounds can be obtained from bounds on the 4-th moments of $U_n(t)-U_n(s)$. Other results require higher-order moment bounds or even exponential bounds. The traditional approach to empirical process invariance principles, as outlined above, works well in situations where the sequence of indicator variables $(1_{(s,t]}(X_n))_{n\geq 0}$ inherits good properties from the original process $(X_n)_{n \geq 0}$.
This holds, for example, when $(X_n)_{n\geq 0}$ is strong (uniform, beta) mixing, because then $(1_{(s,t]}(X_n))_{n\geq 0}$ has the same property. There are, however, situations where this is not the case or at least not easy to establish. For some types of Markov processes and dynamical systems (see, e.g., Hennion and Hervé [@HenHer01]), one has good control over the properties of $(f(X_n))_{n\geq 0}$ when $f$ is a Lipschitz function, but not for indicator functions. For example, Gouëzel [@Gou08] gave a uniformly expanding map of the interval which has a spectral gap on the space of Lipschitz functions but not on the space of bounded variation functions. In this paper, we develop an approach that is strictly based on properties of Lipschitz functions $f(X_i)$ of the original data. We make two basic assumptions, namely that the partial sums of Lipschitz functions satisfy the CLT and that a suitable 4-th moment bound holds. For our proof we develop a variant of the classical chaining technique that uses only Lipschitz functions at all stages of the chaining argument. We replace the usual finite-dimensional convergence plus tightness approach by a method of approximation by a sequence of finite-dimensional processes, which are different from the coordinate projections $(U_n(t_1),\ldots,U_n(t_k))$. We show convergence in distribution of the finite-dimensional processes and prove that the finite-dimensional process approximates the empirical process. In the final step, we use an improved version of a theorem of Billingsley [@Bil68] (see our Theorem \[pbil\] below) to establish convergence in distribution of the empirical process. In the present paper, we make two assumptions concerning the process $(X_i)_{i\geq 0}$:

1.  For any Lipschitz function $f$, the CLT holds, i.e.
$$\frac{1}{\sqrt{n}} \sum_{i=1}^{n}\{f(X_i)-Ef(X_i)\} {\stackrel{{\cal L}}{\rightarrow}}N(0,\sigma^2), \label{eq:lip-clt}$$ where $N(0,\sigma^2)$ denotes a normal law with mean zero and variance $$\sigma^2=E(f(X_0)-Ef(X_0))^2+2\sum_{i=1}^\infty{{\rm Cov}}(f(X_0),f(X_i)).$$

2.  A bound on the 4-th central moments of partial sums of $(f(X_i))_{i\geq0}$, $f$ bounded Lipschitz with $E(f(X_0))=0$, of the type $$\begin{aligned} &&E\left\{\sum_{i=1}^{n}f(X_i)\right\}^4 \nonumber\\ &&\quad \leq C m_f^3 \left( n \|f(X_0) \|_1 \log^\alpha\left(1+ \| f \|\right) + n^2 \|f (X_0)\|^2_1 \log^\beta\left(1+ \|f \|\right)\right),\nonumber\\ && \label{eq:4th-moment}\end{aligned}$$ where $C$ is a universal constant, $\alpha$ and $\beta$ are nonnegative integers, $$\| f \| = \sup_x | f(x) | + \sup_{x\neq y} \frac{| f(x)- f(y) |}{| x-y |}$$ and $$m_f=\max\{1,\sup_x | f(x) |\}.$$

These assumptions can be verified for a large class of Markov chains and dynamical systems. Concerning the CLT for Lipschitz functions, many results can be found in the literature; see e.g. Hennion and Hervé [@HenHer01] for the spectral gap method and Bradley [@Bra07] for the mixing approach. Durieu [@Dur08b] has established 4-th moment bounds of the type (\[eq:4th-moment\]) for Markov chains and dynamical systems under spectral assumptions. For more details and concrete examples, see Section \[examp\] of this paper. We shall assume some regularity for the distribution function of $X_0$. We define the modulus of continuity of a function $f:{{\mathbb R}}\longrightarrow {{\mathbb R}}$ by $$\omega_f(\delta)=\sup\left\{|f(s)-f(t)|\,:\,s,t\in{{\mathbb R}},|s-t|<\delta\right\}.$$ We can now state our main result. \[thm1\] Let $(X_i)_{i\ge 0}$ be an ${{\mathbb R}}$-valued stationary ergodic random process such that the conditions (\[eq:lip-clt\]) and (\[eq:4th-moment\]) hold.
Assume that $X_0$ has a distribution function $F$ satisfying the following condition: $$\label{eq-modulus} \omega_F(\delta)\leq D|\log(\delta)|^{-\gamma}\mbox{ for some } D>0 \mbox{ and } \gamma>\max\{\frac{\alpha}{2},\beta\}.$$ Then $$(U_n(t))_{t\in{{\mathbb R}}} \stackrel{\mathcal{D}} {\longrightarrow} (W(t))_{t\in{{\mathbb R}}}, $$ where $W(t)$ is a mean-zero Gaussian process with covariances $$\begin{aligned} E\left( W (s) W (t)\right) &=& {{\rm Cov}}( 1_{(- \infty , s]}(X_0) , 1_{( - \infty, t]} (X_0))\\ &&+ \sum^\infty_{k=1} {{\rm Cov}}(1_{(-\infty, s]}(X_0), 1_{(-\infty, t]}(X_k))\\ &&+ \sum^\infty_{k=1} {{\rm Cov}}(1_{(-\infty, s]}(X_k), 1_{(-\infty, t]}(X_0)).\end{aligned}$$ Further, almost surely, $(W(t))_{t\in{{\mathbb R}}}$ has continuous sample paths. In particular, if $X_0$ has a Hölder-continuous distribution function then (\[eq-modulus\]) holds. If the $X_i$’s are i.i.d., $(W(t))_{t\in{{\mathbb R}}}$ is a Brownian bridge, but in general this is not the case for dependent variables; see Billingsley [@Bil68] or Collet et al. [@ColMarSch04]. In order to prove Theorem \[thm1\], we apply the following theorem, which is a stronger version of Theorem 4.2 of Billingsley [@Bil68] in the case of complete spaces. We do not need to assume a priori that $X^{(m)}$ has a limit in distribution. \[pbil\] Let $ (S,\rho)$ be a complete separable metric space and let $X_n,X_n^{(m)}$ and $X^{(m)}$, $n,m \geq 1$ be $S$-valued random variables satisfying $$\begin{aligned} && X_n^{(m)}\stackrel {\mathcal{D}}{\longrightarrow } X^{(m)} \mbox{ as } n \rightarrow \infty , \forall m \label{eq:p1}\\ && \lim_{m \rightarrow \infty} \limsup_ {n \rightarrow \infty} P ( \rho (X_n, X_n^{(m)}) \geq \varepsilon) = 0, \forall \varepsilon > 0 .\label{eq:p2}\end{aligned}$$ Then there exists an $S$-valued random variable $X$ such that $$X_n \stackrel{\mathcal{D}}{\longrightarrow} X \mbox{ as } n \rightarrow \infty .$$ Moreover $ X^{(m)} \stackrel{\mathcal{D}} {\longrightarrow} X \mbox{ as } m \rightarrow \infty$.
Both theorems are proved in Section 2 and Section 3. Proof of Theorem \[thm1\] ========================= The bounded case {#boundedcase} ---------------- We first prove the result for bounded variables. Let $(X_i)_{i\ge 0}$ be a $[0,1]$-valued stationary ergodic random process such that (\[eq:lip-clt\]), (\[eq:4th-moment\]) and (\[eq-modulus\]) hold. In our approach we work with Lipschitz continuous approximations to the indicator functions $1_{(-\infty,t]}(x)$. Given a partition $$0=t_0^\prime< \ldots <t_m^\prime=1$$ we define $$t_j=F^{-1}(t_j^\prime)$$ where $F^{-1}$ is given by $$F^{-1}(t)=\sup\{s\in [0,1] : F(s)\leq t\}.$$ Thus, by continuity of $F$, we have a partition $$0\le t_0<\dots<t_m=1.$$ We introduce the functions $\varphi_j:[0,1]\rightarrow\mathbb{R}$ by $$\varphi_j(x)=\varphi\left(\frac{x-t_{j-1}}{t_{j-1}-t_{j-2}}\right),\quad \mbox{ for } j=2,\dots,m$$ where $$\varphi(x)=1_{(-\infty,-1]}(x)-x1_{(-1,0]}(x) \label{eq:phi}$$ and $\varphi_1\equiv0$.\ The function $\varphi_j$ will serve as a Lipschitz-continuous approximation to the indicator function $1_{(-\infty,t_{j-1}]}(x).$ Note that $\varphi_j (x) $ depends on the partition, not only on the point $t_{j-1}.$ We now define the process $$\begin{aligned} F_n^{(m)}(t) &=& \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{m} 1_{[t_{j-1}, t_j)}(t) \varphi_j(X_i)\\ &=& \sum_{j=1}^{m}\left(\frac{1}{n} \sum_{i=1}^{n} \varphi_j(X_i)\right) 1_{[t_{j-1}, t_j)}(t).\end{aligned}$$ Note that $F_n^{(m)}(t)$ is a piecewise constant approximation to the empirical distribution function $F_n(t)$. For $t\in [t_{j-1},t_j]$, we have the inequality $$F_n(t_{j-2}) \leq F_n^{(m)} (t) \leq F_n(t_{j-1}).$$ We define further $$F^{(m)}(t) = E \left(F_n^{(m)}(t)\right) = \sum_{j=1}^m E \left(\varphi_j(X_0)\right) 1_{[t_{j-1},t_j)}(t),$$ and finally the centered and normalized process $$U_n^{(m)}(t) = \sqrt{n} \left(F_n^{(m)}(t) - F^{(m)}(t)\right). 
\label{eq:unm}$$ Our proof of Theorem \[thm1\] now consists of two parts, each of which will be formulated separately as a proposition below. The theorem will follow by application of Theorem \[pbil\], where $(S,\rho)$ is the space of cadlag functions $D[0,1]$ provided with the Skorohod topology and the metric $d_0$; see Billingsley [@Bil68], p. 113. Note that $(D[0,1],d_0)$ is a complete separable metric space. For any partition $0=t_0^\prime < \ldots < t_m^\prime=1$, there exists a piecewise constant Gaussian process $\left(W^{(m)}(t)\right)_{0\leq t\leq 1}$ such that $$\left(U_n^{(m)}(t)\right)_{0\leq t \leq 1} \mathop{\longrightarrow} \limits^{\cal{D}} \left(W^{(m)}(t)\right)_{0\leq t \leq 1}.$$ The sample paths of the processes $\left(W^{(m)}(t)\right)_{0 \leq t \leq 1}$ are constant on each of the intervals $[t_{j-1}, t_j)$, $1 \leq j \leq m, $ and $W^{(m)}(0) = 0.$ The vector $(W^{(m)} (t_1), \ldots, W^{(m)}(t_m))$ has a multivariate normal distribution with mean zero and covariances $$\begin{aligned} {{\rm Cov}}(W^{(m)}(t_{i-1}), W^{(m)}(t_{j-1})) &=& {{\rm Cov}}(\varphi_i(X_0), \varphi_j (X_0))\\ &&+ \sum^\infty_{k=1}{{\rm Cov}}(\varphi_i(X_0), \varphi_j (X_k))\\ &&+ \sum^\infty_{k=1}{{\rm Cov}}(\varphi_i(X_k), \varphi_j (X_0))\end{aligned}$$ \[prop:fidi-conv\] [*Proof.*]{} Using (\[eq:lip-clt\]) and the Cramér-Wold device, we can show that for any Lipschitz functions $f_1, \ldots ,f_k,$ the multivariate CLT holds, i.e. 
$$\frac{1}{\sqrt{n}} \sum_{i=1}^{n}\left\{(f_1(X_i), \ldots , f_k(X_i))-E(f_1(X_0), \ldots , f_k(X_0))\right\} {\stackrel{{\cal L}}{\rightarrow}}N(0,\Sigma_{f_1,\ldots,f_k}),$$ where $N(0,\Sigma_{f_1,\ldots,f_k})$ denotes a multivariate normal law with mean zero and covariance matrix $$\Sigma_{f_1,\ldots, f_k} = (\sigma_{f_i,f_j})_{1\leq i, j\leq k},$$ where, for any Lipschitz functions $ f, g $, we define $$\begin{aligned} \sigma_{f,g}= {{\rm Cov}}(f(X_0),g (X_0)) &+& \sum^\infty_{k=1} {{\rm Cov}}(f(X_0),g(X_k))\\ &+& \sum^\infty_{k=1} {{\rm Cov}}(f(X_k),g(X_0)).\end{aligned}$$ This result proves the proposition. $\Box$ For any $\varepsilon,\eta > 0$ there exists a partition $0=t_0^\prime<\ldots<t_m^\prime=1$ such that $$\limsup_{n\rightarrow\infty} P \left(\sup\limits_{0\leq t\leq 1}\left| U_n(t) - U_n^{(m)}(t)\right|> \varepsilon\right) \leq \eta.$$ \[prop:ep-appr\] [*Proof.*]{} By a variant of the well-known chaining technique we will control $$P\left( \sup\limits_{0 \leq t \leq 1} \left| U_n (t) - U_n^{(m)}(t)\right| \geq \varepsilon\right),$$ and then show that this probability can be made arbitrarily small by choosing a partition $0=t_0^\prime < \ldots < t_m^\prime=1$ that is fine enough. From here on we assume that the partition $0=t_0^\prime < \ldots < t_m^\prime=1$ is equally spaced. Let $h=\frac{1}{m}=t_j^\prime-t_{j-1}^\prime$, for $j=1,\dots,m$.
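The ramp $\varphi$ and the approximants $\varphi_j$ defined earlier are easy to probe numerically. The following sketch (illustrative only; it assumes a uniform marginal, so that $t_j=t'_j$ on a uniform partition of $[0,1]$) checks the sandwich $1_{(-\infty,t_{j-2}]}\le \varphi_j\le 1_{(-\infty,t_{j-1}]}$, which is exactly what yields $F_n(t_{j-2}) \leq F_n^{(m)}(t) \leq F_n(t_{j-1})$.

```python
import numpy as np

def phi(x):
    """phi(x) = 1_{(-inf,-1]}(x) - x * 1_{(-1,0]}(x): a Lipschitz ramp from 1 down to 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= -1.0, 1.0, np.where(x <= 0.0, -x, 0.0))

def phi_j(x, t, j):
    """varphi_j(x) = phi((x - t_{j-1}) / (t_{j-1} - t_{j-2})), j >= 2; t is the 0-indexed partition."""
    return phi((x - t[j - 1]) / (t[j - 1] - t[j - 2]))

# Sandwich check on an assumed uniform partition of [0, 1] (illustrative values)
t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
x = np.linspace(-0.1, 1.1, 200)
j = 3
lower = (x <= t[j - 2]).astype(float)   # 1_{(-inf, t_{j-2}]}
upper = (x <= t[j - 1]).astype(float)   # 1_{(-inf, t_{j-1}]}
v = phi_j(x, t, j)
assert np.all(lower <= v + 1e-12) and np.all(v <= upper + 1e-12)
```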
On the interval $[t_{j-1}^\prime, t_j^\prime]$ we introduce a sequence of refining partitions $$t_{j-1}^\prime = s_0^{\prime(k)} < s_1^{\prime(k)} < \ldots < s^{\prime(k)}_{2^k} = t_j^\prime$$ by $$s_l^{\prime(k)} = t^\prime_{j-1} + l \cdot \frac{h}{2^k}\quad, \quad 0 \leq l \leq 2^k.$$ Let us define $$s_l^{(k)}=F^{-1}(s_l^{\prime(k)})\quad,\quad 0 \leq l \leq 2^k.$$ We now have partitions of $[t_{j-1},t_j]$, $$t_{j-1} = s_0^{(k)} < s_1^{(k)} < \ldots < s^{(k)}_{2^k} = t_j.$$ For convenience, we also consider the points $$s_{-1}^{(k)}=F^{-1}\left(t^\prime_{j-1} - \frac{h}{2^k}\right)$$ and the points $$s_{2^k+1}^{(k)}=F^{-1}\left(t^\prime_{j-1} + (2^k+1)\frac{h}{2^k}\right).$$ For any $t\in [t_{j-1}, t_j)$ and $k\geq 0$ we define the index $$l(k,t) = \max \left\{l: s_l^{(k)} \leq t \right\}.$$ In this way we obtain a chain $$t_{j-1} = s_{l(0,t)}^{(0)} \leq s_{l(1,t)}^{(1)} \leq \ldots \leq s_{l(k,t)}^{(k)} \leq t \leq s_{l(k,t)+1}^{(k)},$$ linking the left endpoint $t_{j-1}$ to $t$. Note that for $t\in [t_{j-1}, t_j)$ we have by definition $U_n^{(m)}(t) = U_n^{(m)}(t_{j-1})$. We define the functions $\psi^{(k)}_l$, $k \geq 0$, $0 \leq l \leq 2^k$, by $$\psi^{(k)}_l(x) = \varphi\left(\frac{x}{s_{l}^{(k)}-s_{l-1}^{(k)}}\right),$$ where $\varphi$ is defined as in (\[eq:phi\]). Note that $\psi^{(0)}_{l(0,t)}(x-s^{(0)}_{l(0,t)})=\varphi_j(x)$. To be consistent, in the case $j=1$, we have to fix $\psi_0^{(k)}\equiv 0$, for all $k\ge 0$. We build a chain bridging the gap between $$F_n(t) = \frac{1}{n} \sum\limits^n_{i=1} 1_{(-\infty, t]}(X_i)$$ and $$F_n^{(m)}(t) = \frac{1}{n} \sum\limits^n_{i=1} \varphi_j (X_i)$$ by the functions $$\begin{aligned} \varphi_j(x) &=&\psi^{(0)}_{l(0,t)}( x-s^{(0)}_{l(0,t)}) \\ &\leq & \psi^{(1)}_{l(1,t)} (x-s^{(1)}_{l(1,t)}) \\ &\leq& \ldots \\ &\leq& \psi^{(K)}_{l(K,t)}(x-s^{(K)}_{l(K,t)}) \\ &\leq& 1_{(-\infty, t]}(x) \\ &\leq & \psi^{(K)}_{l(K,t)+2}(x-s^{(K)}_{l(K,t)+2}),\end{aligned}$$ where $K$ is some integer to be chosen later. 
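Under the simplifying assumption of a uniform marginal (so that $s^{(k)}_l = s'^{(k)}_l$), the chain indices $l(k,t)$ and the halving relation $l(k-1,t)=\lfloor l(k,t)/2\rfloor$ used later in the proof can be checked in a few lines; the parameter values below are purely illustrative.

```python
def chain_indices(tp, t_left, h, K):
    """l(k, t): the largest l with s'^{(k)}_l <= t', where s'^{(k)}_l = t'_{j-1} + l * h / 2**k.

    Assumes a uniform marginal, so the chain can be computed on the t' scale."""
    return [min(int((tp - t_left) / (h / 2 ** k)), 2 ** k - 1) for k in range(K + 1)]

# A chain linking t_{j-1} = 0 to t = 0.37 on [0, 1) with K = 6 refinement levels
ls = chain_indices(0.37, 0.0, 1.0, 6)
assert ls == [0, 0, 1, 2, 5, 11, 23]
# Halving consistency of the chain: l(k-1, t) = floor(l(k, t) / 2)
assert all(ls[k - 1] == ls[k] // 2 for k in range(1, len(ls)))
```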
In this way we get $$\begin{aligned} F_n(t) - F_n^{(m)}(t) &=& \sum^K_{k=1} \frac{1}{n} \sum^n_{i=1} \left(\psi^{(k)}_{l(k,t)}(X_i - s^{(k)}_{l(k,t)}) -\psi^{(k-1)}_{l(k-1,t)}(X_i-s^{(k-1)}_{l(k-1,t)})\right)\nonumber \\ && + \frac{1}{n}\sum^n_{i=1}\left( 1_{(-\infty, t]}(X_i) - \psi^{(K)}_{l(K,t)}(X_i-s^{(K)}_{l(K,t)})\right). \label{eq:fn-fnm}\end{aligned}$$ Observe that by definition of $s^{(k)}_{l(k,t)}$ and of $\psi^{(K)}$, $$\begin{aligned} 0 &\leq & 1_{(-\infty,t]} (X_i) - \psi^{(K)}_{l(K,t)} ( X_i - s^{(K)}_{l(K,t)} ) \\ & \leq & \psi^{(K)}_{l(K,t)+2} (X_i - s^{(K)}_{l(K,t)+2}) - \psi^{(K)}_{l(K,t)} (X_i - s^{(K)}_{l(K,t)}).\end{aligned}$$ From (\[eq:fn-fnm\]) we get by centering and normalization $$\begin{aligned} U_n(t) - U_n^{(m)}(t) &=& \sum^K_{k=1} \frac{1}{\sqrt{n}}\sum^n_{i=1} \left\{ \left( \psi^{(k)}_{l(k,t)}( X_i-s^{(k)}_{l(k,t)})-E \psi^{(k)}_{l(k,t)}(X_i-s^{(k)}_{l(k,t)} ) \right) \right. \\ && -\left. \left(\psi^{(k-1)}_{l(k-1,t)}(X_i - s^{(k-1)}_{l(k-1,t)}) -E\psi^{(k-1)}_{l(k-1,t)}(X_i - s^{(k-1)}_{l(k-1,t)})\right)\right\} \\ && + \frac{1}{\sqrt{n}} \sum^{n}_{i=1} \left\{\left(1_{(-\infty,t]}(X_i)-F(t)\right)\right.\\ && - \left. \left(\psi^{(K)}_{l(K,t)}(X_i - s^{(K)}_{l(K,t)}) -E \psi^{(K)}_{l(K,t)}(X_i - s^{(K)}_{l(K,t)})\right)\right\}.\end{aligned}$$ For the last term on the r.h.s. we have the following upper and lower bounds, $$\begin{aligned} && \frac{1}{\sqrt{n}} \sum^n_{i=1} \left\{ \left(1_{(-\infty, t]} (X_i)-F(t)\right) - \left(\psi^{(K)}_{l(K,t)} (X_i - s^{(K)}_{l (K,t)}) - E \psi^{(K)}_{l(K,t)} (X_i - s_{l (K,t)}^{(K)})\right)\right\} \\ && \le \frac{1}{\sqrt{n}} \sum^n_{i=1} \left\{ \left( \psi^{(K)}_{l(K,t)+2} (X_i-s^{(K)}_{l (K,t)+2}) - E \psi^{(K)}_{l(K,t)+2} (X_i- s^{(K)}_{l (K,t)+2} ) \right)\right. \\ && \quad \quad -\left. 
\left( \psi^{(K)}_{l(K,t)} (X_i - s^{(K)}_{l (K,t)} ) - E \psi^{(K)}_{l(K,t)}(X_i - s^{(K)}_{l (K,t)} ) \right) \right\}\\ &&\quad +\sqrt{n} \left( E\psi^{(K)}_{l(K,t)+2} (X_i - s^{(K)}_{l(K,t)+2})-F(t)\right)\end{aligned}$$ and $$\begin{aligned} && \frac{1}{\sqrt{n}}\sum^{n}_{i=1} \left\{ \left( 1_{(-\infty,t]}(X_i) - F(t)\right)- \left(\psi^{(K)}_{l(K,t)}(X_i -s^{(K)}_{l(K,t)}) - E \psi^{(K)}_{l(K,t)} (X_i - s^{(K)}_{l(K,t)})\right)\right\} \\ && \geq - \sqrt{n}\left(F(t) - E \psi^{(K)}_{l(K,t)} (X_i - s^{(K)}_{l(K,t)})\right).\end{aligned}$$ Now choose $K = 4 + \left\lfloor \log\left( \frac{\sqrt{n}h}{\varepsilon}\right)\log^{-1}(2)\right\rfloor$ and note that $$\frac{\varepsilon}{2^4}\leq\sqrt {n} \frac{h}{2^K} \leq \frac{\varepsilon}{2^3}$$ and thus $$\begin{aligned} &&\sqrt{n}\left| E \psi^{(K)}_{l(K,t)+2}(X_i - s^{(K)}_{l(K,t)+2} )- E \psi^{(K)}_{l(K,t)}(X_i - s^{(K)}_{l(K,t)})\right|\\ &&\leq\sqrt{n}\left|F(s^{(K)}_{l(K,t)+2})-F(s^{(K)}_{l(K,t)-1})\right|\\ &&\leq\frac{\varepsilon}{2}.\end{aligned}$$ Thus we get for all $t \in [t_{j-1}, t_j]$, $$\begin{aligned} && \hspace{-15mm}\left| U_n(t)-U_n^{(m)} (t)\right| \\ &\leq& \sum^K_{k=1} \frac{1}{\sqrt{n}} \left| \sum^n_{i=1} \left\{ \left( \psi^{(k)}_{l(k,t)} (X_i-s^{(k)}_{l(k,t)} ) - E \psi^{(k)}_{l(k,t)} (X_i - s^{(k)}_{l (k,t)}) \right) \right.\right. \\ && \left.\left.\quad - \left( \psi^{(k-1)}_{l(k-1,t)} (X_i-s^{(k-1)}_{l (k-1,t)}) - E \psi^{(k-1)}_{l(k-1,t)} (X_i - s^{(k-1)}_{l (k-1,t)})\right) \right\} \right| \\ && + \frac{1}{\sqrt{n}} \left| \sum\limits^n_{i=1} \left\{ \left( \psi^{(K)}_{l(K,t)+2} (X_i - s^{(K)}_{l (K,t)+2}) - E \psi^{(K)}_{l(K,t)+2} (X_i - s^{(K)}_{l (K,t)+2})\right)\right.\right. \\ &&\left.\left.
\quad -\left( \psi^{(K)}_{l(K,t)} (X_i - s^{(K)}_{l (K,t)} ) - E \psi^{(K)}_{l(K,t)} (X_i - s^{(K)}_{l (K,t)}) \right) \right\} \right| \\ && \quad + \frac{\varepsilon}{2}.\end{aligned}$$ Note that by definition of $l(k,t)$ and of $s_{l}^{(k)}$, we have $ s^{(k-1)}_{l (k-1,t)} \in \{ s^{(k)}_{l (k,t)}, s^{(k)}_{l (k,t)-1}\}$ and thus $$l(k-1,t)=\left\lfloor \frac{l(k,t)}{2} \right\rfloor.$$ Therefore $$\begin{aligned} && \hspace{-15mm} \sup_{t_{j-1} \le t \le t_j} \left| U_n(t) - U^{(m)}_n (t) \right| \\ &\le& \sum^K_{k=1} \frac{1}{\sqrt{n}} \max_{0 \le l \le 2^k-1} \left| \sum^n_{i=1} \left( ( \psi^{(k)}_l (X_i - s_{l}^{(k)} ) - E \psi^{(k)}_l (X_i - s_{l}^{(k)}) )\right. \right. \\ && \qquad \quad \left. \left. - ( \psi^{(k-1)}_{\lfloor\frac{l}{2}\rfloor} (X_i - s_{\lfloor\frac{l}{2}\rfloor}^{(k-1)}) - E \psi^{(k-1)}_{\lfloor\frac{l}{2}\rfloor} (X_i - s_{\lfloor\frac{l}{2}\rfloor}^{(k-1)} )) \right) \right|\\ && \quad + \frac{1}{\sqrt{n}} \max_{0\le l \le 2^K-1} \left| \sum^n_{i=1} \left( (\psi^{(K)}_{l+2} (X_i - s_{l + 2}^{(K)}) - E \psi^{(K)}_{l+2} (X_i - s^{(K)}_{l+2})) \right. \right. \\ && \left.\left. \qquad \quad - ( \psi^{(K)}_l (X_i - s_{l}^{(K)}) - E \psi^{(K)}_l (X_i - s_l^{(K)} )) \right) \right| \\ && \quad + \frac{\varepsilon}{2}.\end{aligned}$$ Now take $\varepsilon_k : = \frac{\varepsilon}{4 k (k+1)} $ and note that $\sum^K_{k=1}\varepsilon_k \leq \frac{\varepsilon}{4}.$ Then we obtain $$\begin{aligned} &&\hspace{-15mm} P \left(\sup_{t_{j-1}\le t \le t_j} \left|U_n (t) - U_n^{(m)} (t) \right| \ge\varepsilon\right)\\ & \le &\sum\limits^K_{k=1} \sum\limits^{2^k-1}_{l =0} P \left( \frac{1}{\sqrt{n}}\right. \left| \sum\limits^n_{i=1} \right. \left\{ \left( \psi^{(k)}_l (X_i - s_{l}^{(k)}) - E \psi^{(k)}_l (X_i - s_l^{(k)}) \right) \right. 
\\ && \quad \left.\left.\left.-\left( \psi^{(k-1)}_{\lfloor\frac{l}{2}\rfloor} (X_i-s_{\lfloor\frac{l}{2}\rfloor}^{(k-1)}) - E \psi^{(k-1)}_{\lfloor\frac{l}{2}\rfloor} (X_i-s_{\lfloor\frac{l}{2}\rfloor}^{(k-1)}) \right) \right\} \right| \ge \varepsilon_k \right) \\ && + \sum^{2^K-1}_{l =0} P \left( \frac{1}{\sqrt{n}}\right. \left| \sum^n_{i=1}\left\{ \left( \psi^{(K)}_{l+2} (X_i-s_{l +2}^{(K)}) - E \psi^{(K)}_{l+2} (X_i -s_{l + 2}^{(K)} ) \right)\right.\right. \\ && \quad \left.\left.\left.-\left(\psi^{(K)}_l(X_i-s_{l}^{(K)}) - E \psi^{(K)}_l (X_i-s_{l}^{(K)}) \right) \right\} \right| \ge \frac{\varepsilon}{4}\right).\end{aligned}$$ At this point we use Markov’s inequality together with the 4-th moment bound (\[eq:4th-moment\]). $$\begin{aligned} &&\hspace{-15mm} P \left(\sup_{t_{j-1}\le t \le t_j} \left|U_n (t) - U_n^{(m)} (t) \right| \ge\varepsilon\right)\\ & \le & C\sum\limits^K_{k=1} \sum\limits^{2^k-1}_{l =0} \left\{\frac{1}{n\varepsilon_k^4}\left\| \psi^{(k)}_l(X_0 - s_l^{(k)}) - \psi^{(k-1)}_{\lfloor\frac{l}{2}\rfloor} (X_0 - s_{\lfloor\frac{l}{2}\rfloor}^{(k-1)}) \right\|_1 \right.\\ &&\hspace{60mm}\cdot\log^\alpha\left(1+\left\|\psi^{(k)}_l - \psi^{(k-1)}_{\lfloor\frac{l}{2}\rfloor} \right\|\right)\\ &&+ \frac{1}{\varepsilon_k^4}\left\| \psi^{(k)}_l(X_0 - s_l^{(k)}) - \psi^{(k-1)}_{\lfloor\frac{l}{2}\rfloor} (X_0 - s_{\lfloor\frac{l}{2}\rfloor}^{(k-1)}) \right\|_1^2\\ &&\hspace{60mm}\left. \cdot\log^\beta\left(1+\left\|\psi^{(k)}_l - \psi^{(k-1)}_{\lfloor\frac{l}{2}\rfloor} \right\|\right)\right\}\\ &&+C\sum\limits^{2^K-1}_{l =0} \left\{\frac{4^4}{n\varepsilon^4}\left\| \psi^{(K)}_{l+2}(X_0 - s_{l+2}^{(K)}) - \psi^{(K)}_{l} (X_0 - s_{l}^{(K)}) \right\|_1\right. \\ &&\hspace{60mm}\cdot\log^\alpha\left(1+\left\|\psi^{(K)}_{l+2} - \psi^{(K)}_{l} \right\|\right)\\ &&+ \frac{4^4}{\varepsilon^4}\left\| \psi^{(K)}_{l+2}(X_0 - s_{l+2}^{(K)}) - \psi^{(K)}_{l} (X_0 - s_{l}^{(K)}) \right\|_1^2\\ &&\hspace{60mm}\left.
\cdot\log^\beta\left(1+\left\|\psi^{(K)}_{l+2} - \psi^{(K)}_{l} \right\|\right)\right\}.\end{aligned}$$ Note that $$\begin{aligned} \left\| \psi^{(k)}_l(X_0 - s_l^{(k)}) - \psi^{(k-1)}_{\lfloor\frac{l}{2}\rfloor} (X_0 - s_{\lfloor\frac{l}{2}\rfloor}^{(k-1)}) \right\|_1 &\leq & \left| F(s_l^{(k)}) - F(s_{{\lfloor\frac{l}{2}\rfloor}-1}^{(k-1)}) \right|\\ &\leq & \left| F(s_l^{(k)}) - F(s_{l-3}^{(k)}) \right|\\ & =& \frac{3h}{2^k}\end{aligned}$$ and $$\begin{aligned} \left\| \psi^{(K)}_{l+2}(X_0 - s_{l+2}^{(K)}) - \psi^{(K)}_{l} (X_0 - s_{l}^{(K)}) \right\|_1&\le & \left| F(s_{l+2}^{(K)}) - F(s_{l-1}^{(K)}) \right|\\ & =& \frac{3h}{2^K}.\end{aligned}$$ If (\[eq-modulus\]) is satisfied, $$\begin{aligned} \left\| \psi^{(k)}_l \right\| &\leq& 1+\left[\inf\left\{s>0 : \forall t, F(t+s)-F(t)\geq \frac{h}{2^k}\right\}\right]^{-1}\\ &\leq& 1+\left[\inf\left\{s>0 : D|\log(s)|^{-\gamma}\geq \frac{h}{2^k}\right\}\right]^{-1}\\ &=& 1+ \exp\left(\left(\frac{D2^k}{h}\right)^\frac{1}{\gamma}\right).\end{aligned}$$ Thus we have $$\begin{aligned} && \hspace*{-20mm} P \left(\sup\limits_{ t_{j-1} \leq t \leq t_j} \left| U_n (t) - U_n^{(m)} (t)\right|\geq \varepsilon\right) \\ &\leq& 4^4 C \sum\limits^K_{k=1} 2^k \frac{(k(k+1))^4}{\varepsilon^4} \frac{1}{n} \frac{3h}{2^k}\log^\alpha \left(2+\exp\left(\left(\frac{D2^k}{h}\right)^\frac{1}{\gamma}\right) \right)\\ && + 4^4 C\sum\limits^K_{k=1} 2^k \frac{(k(k+1))^4}{\varepsilon^4} \frac{(3h)^2} {2^{2k}} \log^\beta \left(2+\exp\left(\left(\frac{D2^k}{h}\right)^{\frac{1}{\gamma}}\right)\right)\\ && + 4^4 C2^K \frac{1}{\varepsilon^4}\frac{1}{n}\frac{3h}{2^K} \log^\alpha\left(2+ \exp\left(\left(\frac{D2^K}{h}\right)^{\frac{1}{\gamma}}\right)\right)\\ && + 4^4 C2^K \frac{1}{\varepsilon^4} \frac{(3h)^2}{2^{2K}} \log^\beta\left(2+ \exp\left(\left(\frac{D2^K}{h}\right)^{\frac{1}{\gamma}}\right)\right)\\ &\leq& \frac{1}{n} \frac{C^\prime}{\varepsilon^4} \sum^K_{k=1} k^8 h \left(\frac{D2^k}{h}\right)^{\frac{\alpha}{\gamma}} + \frac{C^\prime}{\varepsilon^4}
\sum^K_{k=1}\frac{k^8}{2^k}h^2 \left(\frac{D2^k}{h}\right)^{\frac{\beta}{\gamma}}\\ &\leq& D^{\frac{\alpha}{\gamma}}\frac{1}{n} \frac{C^\prime}{\varepsilon^4}h \left(\frac{2^K}{h}\right)^{\frac{\alpha}{\gamma}} \sum^K_{k=1} k^8 + D^{\frac{\beta}{\gamma}}\frac{C^\prime}{\varepsilon^4}h^{2-\frac{\beta}{\gamma}} \sum^\infty_{k=1}k^8 2^{k(\frac{\beta}{\gamma}-1)}\\ &\leq&\frac{h}{n}\frac{C''}{\varepsilon^4}\left(\frac{\sqrt{n}}{\varepsilon}\right)^{\frac{\alpha}{\gamma}}K^9 + \frac{C''}{\varepsilon^4}h^{2-\frac{\beta}{\gamma}}\end{aligned}$$ where $C'$ and $C''$ are some constants and we have used convergence of the series $\sum^\infty_{k=1}k^8 2^{k(\frac{\beta}{\gamma}-1)}$. Finally, using $mh=1$, $$\begin{aligned} && \hspace*{-20mm} P \left( \sup_{0\leq t \leq 1} \left| U_n (t) - U_n^{(m)}(t) \right| \geq \varepsilon \right) \\ &\leq& \sum\limits^m_{j=1} P \left( \sup_{t_{j-1} \leq t \leq t_j} \left| U_n(t) - U_n^{(m)} (t) \right| \geq \varepsilon \right)\\ &\leq& m h n^{\frac{\alpha}{2\gamma}-1}\frac{C''}{\varepsilon^{4+\frac{\alpha}{\gamma}}}K^9 + m\frac{C''}{\varepsilon^4}h^{2-\frac{\beta}{\gamma}}\\ &\leq& n^{\frac{\alpha}{2\gamma}-1}\frac{C''}{\varepsilon^{4+\frac{\alpha}{\gamma}}} \left(4+\log\frac{\sqrt{n}h}{\varepsilon}\right)^9 + \frac{C''}{\varepsilon^4}h^{1-\frac{\beta}{\gamma}}\end{aligned}$$ Now, the first of the two final summands converges to zero as $n \rightarrow \infty$. The second can be made arbitrarily small by choosing a partition that is fine enough (i.e. $h$ small). $\Box$ We used a different technique from the usual finite dimensional convergence plus tightness. Of course, since the weak convergence implies the finite dimensional convergence and the tightness, these two properties are satisfied. Nevertheless, we can also deduce a tightness criterion implying that, almost surely, the limit process has continuous sample paths (see Billingsley [@Bil68], Theorem 15.5).
For all $\varepsilon,\, \eta>0$, there exist $\delta>0$ and $N\ge 0$ such that for all $n\ge N$, $$P\left(\sup_{|t-s|<\delta}|U_n(t)-U_n(s)|\ge \varepsilon\right)\le \eta.$$ In particular, $P(W\in C({{\mathbb R}}))=1$. [*Proof.*]{} Let $\varepsilon>0$ and $\eta>0$. Let $m$ be an integer such that $$\label{m} \frac{C}{\varepsilon^4}\frac{D^{\frac{\beta}{\gamma}}}{m^{1+\frac{\beta}{\gamma}}}<\frac{\eta}{4}$$ and consider the regular partition of $[0,1]$ with mesh $\frac{1}{m}$. By Proposition \[prop:ep-appr\], there exists $N\ge 0$ such that for all $n\ge N$, $$P \left(\sup\limits_{0\leq t\leq 1}\left| U_n(t) - U_n^{(m)}(t)\right|\ge \frac{\varepsilon}{3}\right) \leq \frac{\eta}{4}.$$ Let $\delta>0$ such that $\delta<\frac{1}{m}$. Then, for all $n\ge N$, $$\begin{aligned} &&\hspace{-20pt} P\left(\sup_{|t-s|<\delta}|U_n(t)-U_n(s)|\ge \varepsilon\right)\\ &&\le2P\left(\sup_{0\le t\le 1}|U_n(t)-U_n^m(t)|\ge \frac{\varepsilon}{3}\right) +P\left(\sup_{|t-s|<\delta}|U_n^m(t)-U_n^m(s)|\ge \frac{\varepsilon}{3}\right)\\ &&\le \frac{\eta}{2}+P\left(\sup_{|t-s|<\delta}|U_n^m(t)-U_n^m(s)|\ge \frac{\varepsilon}{3}\right).\end{aligned}$$ We recall, as $t_j=F^{-1}(t_j')=F^{-1}(\frac{j}{m})$, that $$\begin{aligned} \|\varphi_j(X_0)-\varphi_{j+1}(X_0)\|_1&\le&P\left(t_{j-2}\le X_0\le t_j\right)\le \frac{2}{m},\\ \|\varphi_j\| & \le & 1+\exp\left(\left(\frac{D}{m}\right)^{\frac{1}{\gamma}}\right).\end{aligned}$$ Thus, by the 4-th moment bound (\[eq:4th-moment\]), $$\begin{aligned} P\left(\sup_{|t-s|<\delta}|U_n^m(t)-U_n^m(s)|\ge \frac{\varepsilon}{3}\right) \le \frac{C}{n\varepsilon^4}\left(\frac{D}{m}\right)^{\frac{\alpha}{\gamma}} +\frac{C}{\varepsilon^4}\frac{D^{\frac{\beta}{\gamma}}}{m^{1+\frac{\beta}{\gamma}}}.\end{aligned}$$ Now there exists $N'\ge N$ such that $$\frac{C}{n\varepsilon^4}\left(\frac{D}{m}\right)^{\frac{\alpha}{\gamma}}\le \frac{\eta}{4}.$$ Finally, by (\[m\]), $$P\left(\sup_{|t-s|<\delta}|U_n(t)-U_n(s)|\ge \varepsilon\right)\le \eta.$$ $\Box$ The unbounded case 
------------------ Let $(X_i)_{i\ge 0}$ be an ${{\mathbb R}}$-valued stationary ergodic random process such that (\[eq:lip-clt\]), (\[eq:4th-moment\]) and (\[eq-modulus\]) hold. We will show that it can be reduced to the case of bounded variables. For all $x<y\in{{\mathbb R}}$, we say that the closed interval $[x,y]$ is a ‘bad’ interval (for $F$) if $$F(y)-F(x)\ge y-x.$$ We say that $[x,y]$ is a maximal ‘bad’ interval (for $F$) if for all ‘bad’ intervals $[a,b]$, we have $[a,b]\subset[x,y]$ or $[a,b]\cap[x,y]=\emptyset$. We denote by $I^{max}$ the set of all maximal ‘bad’ intervals. - The Lebesgue measure of $$I:=\bigcup_{[x,y]\in I^{max}}[x,y]$$ is smaller than $1$. - For all $[x,y]\in I^{max}$, we have $$F(y)-F(x)=y-x.$$ [*Proof.*]{} Because $F$ is non-decreasing and takes values in $[0,1]$, the first assertion is clear. If for $x<y$, $F(y)-F(x)>y-x$, then there exists $\varepsilon>0$ such that $$F(y)-F(x)>y-x+\varepsilon.$$ Thus, for all $z>y$ such that $z-y\le\varepsilon$, by monotonicity of $F$, we have $$\begin{aligned} F(z)-F(x)&\ge& F(y)-F(x)\\ &>&y-x+\varepsilon\\ &\ge&z-x\end{aligned}$$ and then $[x,y]$ is not maximal. $\Box$ We define the function $g$ from ${{\mathbb R}}$ to $]0,1[$ by $$\mbox{ for all }[x,y]\in I^{max}, \mbox{ for all }t\in [x,y],\, g(t):=F(x)+t-x$$ and $$\mbox{ for all }t\notin I,\, g(t):=F(t).$$ Then $g$ is a 1-Lipschitz function. We define the $[0,1]$-valued stationary ergodic random process $(Y_i)_{i\ge 0}$ by $$Y_i=g(X_i),\;i\ge 0.$$ Since $g$ is Lipschitz, $(Y_i)_{i\ge 0}$ satisfies (\[eq:lip-clt\]) and (\[eq:4th-moment\]). We also have $$G(t):=P(Y_0\le t)=F\circ g^{-1}(t)$$ where $$g^{-1}(t)=\sup\{s\in{{\mathbb R}}\,:\,F(s)\le t\}.$$ Clearly, $G$ is the identity on $g({{\mathbb R}}\setminus I)$. Further, for all $[x,y]\in I^{max}$, the graph of $G$ on $g([x,y])$ is the graph of $F$ on $[x,y]$ and the Lebesgue measure of $g([x,y])$ is equal to the Lebesgue measure of $[x,y]$.
Then $$\begin{aligned} \omega_G(\delta)&\le &\max\{\omega_F(\delta),\delta\}\end{aligned}$$ and (\[eq-modulus\]) holds. We define the associated distribution functions and empirical processes $$\begin{aligned} F_n(t) &:= & \frac{1}{n} \sum_{i=1}^n 1_{(-\infty,t]} (X_i),\; t\in{{\mathbb R}}, \\ U_n(t) &:=& \sqrt{n} (F_n(t)-F(t)),\; t \in{{\mathbb R}}, \\ G_n(t) &:= & \frac{1}{n} \sum_{i=1}^n 1_{[0,t]} (Y_i),\; 0\leq t\leq 1, \\ V_n(t) &:=& \sqrt{n} (G_n(t)-G(t)),\; 0\leq t \leq 1.\end{aligned}$$ We have $$U_n(t)=V_n(g(t)),\; t \in{{\mathbb R}}.$$ By the theorem for bounded variables (section \[boundedcase\]), $$(V_n(t))_{0\leq t \leq 1} \stackrel{\mathcal{D}} {\longrightarrow} (V(t))_{0\leq t \leq1},$$ where $V(t)$ is a mean-zero Gaussian process such that $P(V\in C[0,1])=1$. Applying Theorem 5.1 of Billingsley [@Bil68] with $$\begin{aligned} h:D[0,1]&\longrightarrow& D({{\mathbb R}})\\ x&\mapsto& x\circ g,\end{aligned}$$ we get the weak convergence of $(U_n(t))_{ t \in {{\mathbb R}}}$ to a Gaussian process $$(W(t))_{t\in{{\mathbb R}}}=(V\circ g(t))_{t\in{{\mathbb R}}}$$ such that $P(W\in C({{\mathbb R}}))=1$. Proof of Theorem \[pbil\] ========================= Let $(X,d)$ be a complete metric space and let $x_n, x_n^{(m)}, x^{m} \in X$, $n \geq 1, m \geq 1$ be given with the properties $$\begin{aligned} \lim_{n\rightarrow \infty} d (x_n^{(m)}, x^{(m)}) &=& 0 \qquad \forall m \label{eq:l1} \\ \lim_{m\rightarrow \infty} \limsup_{n\rightarrow \infty} d (x_n, x_n^{(m)}) &=& 0 .\label{eq:l2}\end{aligned}$$ Then $x:= \lim_{m\rightarrow \infty} x^{(m)}$ exists and $$\lim_{n\rightarrow \infty} d (x_n, x) = 0.$$ [*Proof.*]{} We will first show that $x^{(m)}$ is a Cauchy sequence. Given $ \epsilon > 0,$ choose $M$ so big that $ \forall m \geq M $ $$\limsup_{n\rightarrow \infty}d(x_n, x_n^{(m)})< \frac{\varepsilon}{4}.$$ Now take $m_1,m_2 \geq M$. 
For all $n$ sufficiently large, we have then $$\begin{aligned} d (x_n^{(m_1)}, x^{(m_1)}) &<& \frac{\varepsilon}{4} \\ d (x_n^{(m_2)}, x^{(m_2)}) &<& \frac{\varepsilon}{4} \\ d (x_n, x_n^{(m_1)}) &<& \frac{\varepsilon}{4} \\ d (x_n, x_n^{(m_2)}) &<& \frac{\varepsilon}{4},\end{aligned}$$ and hence, by the triangle inequality $d (x^{(m_1)}, x^{(m_2)}) < \varepsilon.$ Thus $(x^{(m)})_{m \geq 1}$ is a Cauchy sequence and hence $x: = \lim_{m\rightarrow \infty} x^{(m)} $ exists.\ It remains to show that $ \lim_{n\rightarrow \infty} x_n = x. $ Given $\varepsilon > 0, $ choose $m_0 $ so that $$\limsup_{n\rightarrow\infty} d (x_n, x_n^{(m_0)}) <\frac{\varepsilon}{4}$$ and $d (x^{(m_0)}, x)< \frac{\varepsilon}{4}$. Then choose $N $ such that for all $n \geq N $ $$\begin{aligned} d (x_n, x_n^{(m_0)}) &<& \frac{\varepsilon}{4}\\ d (x_n^{(m_0)}, x^{(m_0)}) &<& \frac{\varepsilon}{4}.\end{aligned}$$ Using the triangle inequality, we get $$d(x_n, x) <\varepsilon$$ for all $n \geq N. $ $\Box$ [*Proof of Theorem \[pbil\].*]{} Let $\mu_n, \mu_n^{(m)}$ and $\mu^{(m)}$ denote the distributions of the random variables $X_n, X_n^{(m)}$ and $X^{(m)}$ respectively. These are elements of $M_1 (S),$ the space of probability measures on $S$. We consider the Prohorov metric $d$ on $ M_1 (S)$, defined by $$d (\mu, \upsilon) = \inf \left\{ \varepsilon > 0 : \mu (A) \leq \upsilon(A^\varepsilon) + \varepsilon\quad \forall A \subset S\; \text{measurable} \right\}.$$ Note that $ (M_1(S),d)$ is a complete metric space. If $ Y, Z$ are two S-valued random variables with distributions $P_Y, P_Z,$ satisfying $$P( \rho ( Y,Z) \geq \varepsilon ) \leq \varepsilon,$$ then $d(P_Y, P_Z) \leq \varepsilon.$ Moreover $d$ metrizes the topology of weak convergence, i. e. $\mu_n \rightarrow \mu$ if and only if $d (\mu_n, \mu) \rightarrow 0.$ We now apply Lemma 3.1 to $ \mu_n, \mu_n^{(m)}, \mu^{(m)}.$ Note that (\[eq:l1\]) is a direct consequence of (\[eq:p1\]). 
Given $\varepsilon > 0,$ by (\[eq:p2\]) we can find $m_0$ such that for all $m \geq m_0,$ $$\limsup_{n\rightarrow \infty} P (\rho (X_n, X_n^{(m)}) \geq \varepsilon ) < \varepsilon.$$ Fix such an $m$; then we can find $n_0$ such that $\forall n \geq n_0$ $$P ( \rho (X_n, X_n^{(m)}) \geq \varepsilon ) \leq \varepsilon$$ and thus $d (\mu_n, \mu_n^{(m)}) \leq \varepsilon.$ Hence $$\limsup_{n \rightarrow \infty} d (\mu_n, \mu_n^{(m)}) \leq \varepsilon$$ for all $m \geq m_0,$ showing that (\[eq:l2\]) holds. Thus by Lemma 3.1, there exists a probability distribution $\mu $ on $S$ such that $$\begin{aligned} \lim_{m\rightarrow \infty} d (\mu^{(m)}, \mu) &=& 0 \\ \lim_{n\rightarrow \infty} d (\mu_n , \mu) &=& 0 .\end{aligned}$$ Finally, let $X$ be an S-valued random variable with distribution $\mu$. Then $X^{(m)}\stackrel {\mathcal{D}}\rightarrow X$ as $m \rightarrow \infty$ and $X_n \stackrel {\mathcal{D}}\rightarrow X$ as $n \rightarrow \infty.$ $\Box$ Examples {#examp} ======== According to Durieu [@Dur08b], the 4-th moment bound (\[eq:4th-moment\]) holds for Markov chains and dynamical systems under some assumptions on the Markov transition operator or the Perron-Frobenius operator. Let $(E,d)$ be a separable metric space and $(X_k)_{k\ge0}$ be an $E$-valued Markov chain with transition operator $Q$ and invariant measure $\nu$. Denote by ${{\mathcal L}}$ the space of all bounded Lipschitz continuous functions from $E$ to ${{\mathbb R}}$ equipped with the norm defined in (\[eq:4th-moment\]). We say that the Markov chain $(X_k)_{k\ge0}$ is ${{\mathcal L}}$-geometrically ergodic if there exist $C>0$ and $0<\theta<1$ such that for all $f\in{{\mathcal L}}$, $$\label{geom} \|Q^kf-\Pi f\|\le C\theta^k\|f\|,$$ where $\Pi f=E_\nu f(X_0)$. This condition corresponds to the fact that the Markov operator is quasi-compact on the space ${{\mathcal L}}$ with $1$ as the only eigenvalue of modulus one, this eigenvalue being simple (see Hennion and Hervé [@HenHer01]).
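To make condition (\[geom\]) concrete, here is a minimal numerical sketch with an assumed two-state Markov chain (the transition matrix and test function are illustrative only, not from the paper). For a two-state chain the contraction is exactly geometric, with $\theta$ equal to the modulus of the second eigenvalue of $Q$:

```python
# Geometric ergodicity on a two-state chain: sup_x |Q^k f(x) - Pi f|
# decays like theta^k, theta = |second eigenvalue of Q|.
p, q = 0.3, 0.5                        # assumed jump probabilities
Q = [[1 - p, p], [q, 1 - q]]           # transition matrix (rows sum to 1)
nu = [q / (p + q), p / (p + q)]        # invariant measure: nu Q = nu
f = [1.0, -2.0]                        # a bounded test function
Pi_f = nu[0] * f[0] + nu[1] * f[1]     # Pi f = E_nu f(X_0)
theta = abs(1 - p - q)                 # second eigenvalue of Q

def apply_Q(g):
    """One step of the Markov operator: (Qg)(x) = E[g(X_1) | X_0 = x]."""
    return [Q[x][0] * g[0] + Q[x][1] * g[1] for x in (0, 1)]

g, sups = f, []
for _ in range(5):
    g = apply_Q(g)
    sups.append(max(abs(g[x] - Pi_f) for x in (0, 1)))

for k in range(1, 5):                  # decay is geometric at rate theta
    assert abs(sups[k] / sups[k - 1] - theta) < 1e-9
```

For a general chain one only gets the upper bound $C\theta^k\|f\|$; the exact geometric decay here is special to the two-state case, where $f-\Pi f\,\mathbf 1$ is automatically an eigenvector of $Q$.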
Since ${{\mathcal L}}\hookrightarrow L^\infty$, the following result is a special case of Corollary 2 of Durieu [@Dur08b]. \[od\] If $\left(X_n\right)_{n\ge 0}$ is an ${{\mathcal L}}$-geometrically ergodic Markov chain then (\[eq:4th-moment\]) holds for all $f\in{{\mathcal L}}$ such that $Ef(X_0)=0$, with $\alpha=3$ and $\beta=2$. The same is true for dynamical systems whose Perron-Frobenius operators satisfy (\[geom\]). This gives a large class of examples where our result applies. Linear processes {#linear-processes .unnumbered} ---------------- Let $(A,\|.\|_A)$ be a separable Banach space and ${\mathcal A}$ its Borel $\sigma$-algebra. Let $(a_i)_{i\ge0}$ be a sequence of linear forms on $A$ for which there exist $C>0$ and $0<\theta<1$ such that $$\label{ai} |a_i|\le C\theta^i,$$ where $|a_i|=\sup_{\|x\|_A\le 1}|a_i(x)|$. Let $(e_i)_{i\in{{\mathbb Z}}}$ be an i.i.d. bounded random sequence with values in a compact subset $B\subset A$ and marginal distribution $\mu$. We define the real-valued linear process $(X_k)_{k\ge0}$ by $$X_k=\sum_{i\ge 0}a_i(e_{k-i}),\;k\ge0.$$ Several results have already been established for empirical processes of linear processes (see Doukhan and Surgailis [@DouSur98], Wu [@Wu08], Dedecker and Prieur [@DedPri07]). Here, the assumption on the $(a_i)_{i\ge 0}$ is stronger than in the papers mentioned above, but there is no assumption on the distribution of the $e_i$’s, and the assumption on the distribution function of $X_0$ is weaker. Note that $(X_k)_{k\ge 0}$ can be viewed as a functional of a Markov chain. Let $Y_k=(e_k,e_{k-1},\dots)$; then $(Y_k)_{k\ge 0}$ is a stationary Markov chain on $B^{{\mathbb N}}$ (with stationary measure $\mu^{\otimes{{\mathbb N}}}$) and $X_k=\Phi(Y_k)$ where $$\Phi:B^{{\mathbb N}}\longrightarrow{{\mathbb R}},\;\Phi(x_0,x_1,\dots)=\sum_{i\ge 0}a_i(x_i).$$ Let $Q$ be the Markov transition operator of the chain.
On $B^{{\mathbb N}}$, we define a metric $d$ by $$d(x,y)=\sum_{i\ge 0}\theta^i\|x_i-y_i\|_A$$ where $x=(x_i)_{i\ge 0}$ and $y=(y_i)_{i\ge0}$. Since $B$ is compact, $(B^{{\mathbb N}},d)$ is also compact. Let us denote by ${{\mathcal L}}$ the space of all Lipschitz functions from $B^{{\mathbb N}}$ to ${{\mathbb R}}$ equipped with the norm $\|.\|$ defined by $$\|f\|=\sup_{x\in B^{{\mathbb N}}}|f(x)|+\sup_{x\ne y}\frac{|f(x)-f(y)|}{d(x,y)}.$$ For all $f\in{{\mathcal L}}$ and for all $x=(x_i)_{i\ge 0}$ and $y=(y_i)_{i\ge0}\in B^{{\mathbb N}}$, we have $$\begin{aligned} |Q^kf(x)-Q^kf(y)|&=& |E(f(Y_k)|Y_0=x)-E(f(Y_k)|Y_0=y)|\\ &=& |E(f(e_k,\dots,e_1,x_0,\dots))-E(f(e_k,\dots,e_1,y_0,\dots))|\\ &\le &\|f\|E\{d((e_k,\dots,e_1,x_0,\dots),(e_k,\dots,e_1,y_0,\dots))\}\\ &=&C\theta^k\|f\|d(x,y),\end{aligned}$$ and $$\begin{aligned} |Q^kf(x)-Ef(Y_0)|&=&|E(f(Y_k)|Y_0=x)-Ef(Y_k)|\\ &\le&E |f(e_k,e_{k-1},\dots,e_1,x_0,\dots)-f(e_k,e_{k-1},\dots)|\\ &\le&C\theta^k\|f\|E\{d(x,Y_0)\}.\end{aligned}$$ Then, for all $f\in{{\mathcal L}}$, $$\begin{aligned} \|Q^kf-E(f(Y_0))\|&\le& C\theta^k\|f\|.\end{aligned}$$ Since $({{\mathcal L}},\|.\|)\subset (L^\infty(\mu^{\otimes{{\mathbb N}}}),\|.\|_\infty)$, by Proposition \[od\], $(Y_k)_{k\ge 0}$ satisfies the 4-th moment bound (\[eq:4th-moment\]) with $\alpha=3$ and $\beta=2$ for all Lipschitz functions. Further, for all $f\in{{\mathcal L}}$ the sequence $\sum_{i=0}^nQ^if(Y_0)$ converges in $L^2(\mu^{\otimes{{\mathbb N}}})$ and so by Gordin’s theorem (see Gordin [@Gor69]), the CLT (\[eq:lip-clt\]) is satisfied. Clearly, the function $\Phi$ is a Lipschitz continuous function on $B^{{\mathbb N}}$, and for every Lipschitz function $g:{{\mathbb R}}\longrightarrow{{\mathbb R}}$, $g\circ\Phi$ is also a Lipschitz continuous function on $B^{{\mathbb N}}$. Thus conditions (\[eq:lip-clt\]) and (\[eq:4th-moment\]) hold for the process $(X_k)_{k\ge0}$, for every Lipschitz function on ${{\mathbb R}}$.
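The linear-process construction above is easy to exercise numerically. Below is a minimal sketch (all parameters are illustrative assumptions, not from the paper): Bernoulli(1/2) innovations with coefficients $a_i = 2/3^{i+1}$, for which the marginal law of $X_k$ is the Cantor distribution, so that the limiting distribution function $F$ is the Cantor function; the empirical process $U_n$ is then evaluated at a few points.

```python
import random

# Sketch (assumed parameters): simulate X_k = sum_{i>=0} a_i e_{k-i}
# with a_i = 2/3^(i+1) and i.i.d. Bernoulli(1/2) innovations e_k,
# so each X_k lies in [0,1] and has the Cantor distribution; its
# distribution function F is the Cantor function, computed from the
# ternary expansion below.

def cantor_F(t, depth=60):
    """Cantor function on [0,1] via the ternary digits of t."""
    if t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    result, w = 0.0, 1.0
    for _ in range(depth):
        t *= 3.0
        d = int(t)
        t -= d
        w /= 2.0
        if d == 1:            # hit a middle third: F is flat there
            return result + w
        result += w * (d // 2)
    return result

random.seed(1)
n, TRUNC = 20000, 40          # sample size; truncation of the series
coeff = [2.0 / 3.0 ** (i + 1) for i in range(TRUNC)]
e = [random.randint(0, 1) for _ in range(n + TRUNC)]
X = [sum(c * e[k + TRUNC - i] for i, c in enumerate(coeff))
     for k in range(n)]

# Empirical process U_n(t) = sqrt(n) (F_n(t) - F(t)) at a few points.
for t in (0.25, 0.5, 0.75):
    F_n = sum(x <= t for x in X) / n
    print(t, round(n ** 0.5 * (F_n - cantor_F(t)), 3))
```

For large $n$ the printed values fluctuate on the $O(1)$ scale predicted by the invariance principle, while $F_n-F$ itself is of order $n^{-1/2}$.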
Then Theorem \[thm1\] applies and we obtain the following result. Let $(X_k)_{k\ge 0}$ be a real linear process defined by a sequence of linear forms $(a_i)_{i\ge0}$ and a sequence of i.i.d. bounded random variables $(e_i)_{i\in{{\mathbb Z}}}$, both on a separable Banach space $A$. Assume $(a_i)$ satisfies (\[ai\]) and the distribution function $F$ of $X_0$ satisfies $$\omega_F(\delta)\leq D|\log(\delta)|^{-\gamma}\mbox{ for some } D>0 \mbox{ and } \gamma>2.$$ Then $(U_n(t))_{t\in{{\mathbb R}}}$ converges in distribution to a mean-zero Gaussian process. In the paper by Dedecker and Prieur [@DedPri07], Corollary 1, $X_0$ has a bounded density. Here, the existence of a density is not needed. Our result is comparable to a result of Wu and Shao [@WuSha04]. For a concrete example, consider $A={{\mathbb R}}$, $B=\{0,1\}$, $a_i=\frac{2}{3^{i+1}}$, $i\ge 0$, and $e_k=0$ or $1$ with probability $\frac{1}{2}$, $k\in{{\mathbb Z}}$. Then $$X_k=2\sum_{i\ge 0}\frac{e_{k-i}}{3^{i+1}},\;k\ge 0$$ is a stationary process with values in $[0,1]$ and the common distribution function of all the $X_k$ is the Cantor function, which is not absolutely continuous but $\frac{\log2}{\log3}$-Hölder continuous (see Dovgoshey et al. [@DovMarRyaVuo06]). Expanding maps {#expanding-maps .unnumbered} -------------- In the setting of expanding maps of the interval, empirical process invariance principles have been established in Collet, Martinez and Schmitt [@ColMarSch04] and Dedecker and Prieur [@DedPri07] for classes of Lasota-Yorke transformations. For these maps, the transfer operator has a spectral gap on the space BV of bounded variation functions. According to Gouëzel [@Gou08], there exist some uniformly expanding maps of the interval for which the transfer operator does not act continuously on the space BV, but admits a spectral gap on the space of Lipschitz functions. The example given by Gouëzel is a transformation of the interval $[0,1)$.
Let $(a_n)_{n\ge 1}$ be a sequence of positive numbers with $\sum a_n<\frac{1}{4}$ and let $N>0$ be an integer. Denote by $I_n$ the subinterval $[4\sum_{i=1}^{n-1}a_i,4\sum_{i=1}^{n}a_i)$. We decompose $I_n$ into two subintervals of length $2a_n$ denoted by $I_n^{(1)}$ and $I_n^{(2)}$. We can find a map $v_n$ (resp. $w_n$) on $[0,1)$ with image $I_n^{(1)}$ (resp. $I_n^{(2)}$) such that the derivative at a point $x$ is equal to $a_n(1+2\cos^2(2\pi n^4x))$ (resp. $a_n(1+2\sin^2(2\pi n^4x))$). The map $T$ is defined on $I_n$ in such a way that $v_n$ and $w_n$ are two inverse branches of it. The remaining interval $[4\sum_{i=1}^{\infty}a_i,1)$ is subdivided into $N$ subintervals of equal length, and $T$ is defined as an affine transformation of each of these subintervals onto $[0,1)$. If $a_n=\frac{1}{100n^3}$ and $N=4$, then $T$ is a Lebesgue measure preserving transformation and its associated transfer operator has a spectral gap on the space of Lipschitz functions with a simple eigenvalue at 1 and no other eigenvalue of modulus 1. Further, the transfer operator does not act continuously on BV. In this situation, the 4-th moment bound (\[eq:4th-moment\]) holds and Theorem \[thm1\] can be used to get an invariance principle for the associated empirical process. Further applications {#further-applications .unnumbered} -------------------- Durieu [@Dur08b] has also given 4-th moment bounds for subshifts of finite type, using the Ruelle-Perron-Frobenius theorem, as in Parry and Pollicott [@ParPol90]. Our result thus also applies here. Another application concerns random iterative Lipschitz models and, as a special case, nonlinear autoregressive models $(X_n)_{n\in{{\mathbb N}}}$ defined as follows. For a real-valued random variable $X_0$ and a given function $f:{{\mathbb R}}\longrightarrow{{\mathbb R}}$, let $$X_n=f(X_{n-1})+Y_n,\quad n\ge 1,$$ where $(Y_n)_{n\ge 1}$ is an i.i.d.
sequence of ${{\mathbb R}}$-valued random variables independent of $X_0$. Such models are studied, e.g., in nonlinear time series analysis. See Hennion and Hervé [@HenHer01], Theorem X.16, for conditions under which $(X_n)_{n\in{{\mathbb N}}}$ is ${{\mathcal L}}$-geometrically ergodic. #### Acknowledgement We are grateful to Loïc Hervé for several lectures introducing us to the spectral gap method, and to Jérôme Dedecker for his critical comments on an earlier version of this paper. [10]{} Istv[á]{}n Berkes and Walter Philipp. An almost sure invariance principle for the empirical distribution function of mixing random variables. , 41(2):115–137, 1977/78. Patrick Billingsley. . John Wiley & Sons Inc., New York, 1968. Svetlana Borovkova, Robert Burton, and Herold Dehling. Limit theorems for functionals of mixing processes with applications to [$U$]{}-statistics and dimension estimation. , 353(11):4261–4318 (electronic), 2001. Richard C. Bradley. . Kendrick Press, Heber City, UT, 2007. P. Collet, S. Martinez, and B. Schmitt. Asymptotic distribution of tests for expanding maps of the interval. , 24(3):707–722, 2004. J[é]{}r[ô]{}me Dedecker and Cl[é]{}mentine Prieur. An empirical central limit theorem for dependent sequences. , 117(1):121–142, 2007. Monroe D. Donsker. Justification and extension of [D]{}oob’s heuristic approach to the [K]{}olmogorov-[S]{}mirnov theorems. , 23:277–281, 1952. J. L. Doob. Heuristic approach to the [K]{}olmogorov-[S]{}mirnov theorems. , 20:393–403, 1949. P. Doukhan, P. Massart, and E. Rio. Invariance principles for absolutely regular empirical processes. , 31(2):393–427, 1995. Paul Doukhan and Sana Louhichi. A new weak dependence condition and applications to moment inequalities. , 84(2):313–342, 1999. Paul Doukhan and Donatas Surgailis. Functional central limit theorem for the empirical process of short memory linear processes. , 326(1):87–92, 1998. O. Dovgoshey, O. Martio, V. Ryazanov, and M. Vuorinen. The [C]{}antor function.
, 24(1):1–37, 2006. Olivier Durieu. A fourth moment inequality for functionals of stationary processes. . Olivier Durieu and Philippe Jouan. Empirical invariance principle for ergodic torus automorphisms; genericity. , 8(2):173–195, 2008. M. I. Gordin. The central limit theorem for stationary processes. , 188:739–741, 1969. Sébastien Gouëzel. An interval map with a spectral gap on lipschitz functions, but not on bounded variation functions. . Hubert Hennion and Lo[ï]{}c Herv[é]{}. , volume 1766 of [*Lecture Notes in Mathematics*]{}. Springer-Verlag, Berlin, 2001. William Parry and Mark Pollicott. Zeta functions and the periodic orbit structure of hyperbolic dynamics. , (187-188):268, 1990. Walter Philipp and Laurence Pinzur. Almost sure approximation theorems for the multivariate empirical process. , 54(1):1–13, 1980. Cl[é]{}mentine Prieur. An empirical functional central limit theorem for weakly dependent sequences. , 22(2, Acta Univ. Wratislav. No. 2470):259–287, 2002. Wei Biao Wu. Empirical processes of stationary sequences. , 18(1):313–333, 2008. Wei Biao Wu and Xiaofeng Shao. Limit theorems for iterated random functions. , 41(2):425–436, 2004. [^1]: Fakultät für Mathematik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany; e-mail: herold.dehling@rub.de [^2]: Laboratoire de Mathematique Raphaël Salem, UMR 6085 CNRS-Université de Rouen, e-mail: olivier.durieu@etu.univ-rouen.fr [^3]: Laboratoire de Mathematique Raphaël Salem, UMR 6085 CNRS-Université de Rouen, e-mail: dalibor.volny@univ-rouen.fr
--- abstract: 'The effect of interactions on the dynamics of coupled motor proteins is investigated theoretically. A simple stochastic discrete model that allows the dynamic properties of the system to be calculated explicitly is developed. It is shown that there are two dynamic regimes, depending on the interaction between the particles. For strong interactions the motor proteins move as one tight cluster, while for weak interactions there is no correlation in the motion of the proteins, and the particle separation increases steadily with time. The boundary between the two dynamic phases is specified by a critical interaction that has a non-zero value only for the coupling of asymmetric motor proteins, and it depends on the temperature and the transition rates. At the critical interaction there is a change in the slope of the mean velocities and a discontinuity in the dispersions of the motor proteins as a function of the interaction energy.' author: - 'Evgeny B. Stukalin and Anatoly B. Kolomeisky' title: Dynamic Phase Transitions in Coupled Motor Proteins --- Motor proteins are active enzyme molecules that are important for molecular transport, force generation and transfer of genetic information in biological systems [@lodish_book; @howard_book; @bray_book]. They move along rigid linear tracks by utilizing the energy of hydrolysis of ATP or related compounds, and the chemical energy is transferred into mechanical work with high efficiency. However, the mechanisms of the mechanochemical coupling in the motor proteins are not fully understood [@howard_book]. Structural and biochemical studies of the motor proteins reveal that they consist of many domains and subunits [@howard_book; @bray_book; @kozielski97; @singleton04], and frequently these subunits also have enzymatic activity.
An example is the helicase motor protein RecBCD [@vonHippel02] that corrects DNA breaks and defects by unwinding the double-stranded DNA molecules into separate chains [@bianco01; @dohoney01; @taylor03; @dillingham03]. It has three protein subunits, of which two domains, RecB and RecD, also exist as independent motor proteins [@taylor03; @dillingham03]. Experiments indicate that the complex motor protein RecBCD moves [*significantly*]{} faster than the individual RecB and RecD subunits [@taylor03]. For many other motor proteins the coordination between internal domains has a strong effect on the dynamic properties [@asbury03; @zhang04]. In addition, many motor proteins work in large groups [@howard_book; @bray_book], although the mechanism of such coordinated motion is largely unknown. In recent [*in vivo*]{} experiments [@kural05] the transport of organelles by kinesin and dynein motor proteins has been investigated. Although the kinesins and dyneins move in opposite directions on the microtubules, it was found that they do not work against each other. Apparently, the motor proteins moving in different directions coordinate the overall transport of the organelles. These experimental findings suggest that the inter-domain coupling in the motor proteins and the interaction between different motor proteins have a strong effect on the functioning of these biological molecules. However, theoretical investigations of these phenomena are still limited [@howard_book; @betterton03; @stukalin05]. Recently, we proposed a theoretical approach to explain the internal interactions in the motor proteins [@stukalin05], and it was successfully applied to understand the dynamics of single RecBCD helicases. The purpose of this work is to investigate the general effect of interactions inside the motor proteins and between the molecules on the dynamic properties of the system.
We assume that there are two interacting particles that move along parallel linear tracks, as shown in Fig. 1. This model describes the motion of RecBCD helicases with two active subunits on different DNA strands [@stukalin05], or it might correspond to the transport of two interacting motor proteins (kinesins, dyneins) on parallel filaments (microtubules). The positions of the particles $A$ and $B$ are defined by integers $l$ and $m$, respectively, on the corresponding lattices. It is assumed that the interaction between particles favors compact vertical configurations, while the potential energy of the non-vertical configurations is larger, $U(l,m) = U_0 + \varepsilon |l-m|$, where the parameter $\varepsilon \ge 0 $ specifies the interactions. This potential of interactions seems realistic for the motion of helicases [@vonHippel02], where at each step of the leading subunit the bond between two strands of DNA must be broken, which leads to a linear dependence of the interaction energy on the distance between the subunits. We introduce $P(l, m; t)$ as the probability of finding the system in the configuration where $A$ is at the position $l$ on the first track and $B$ is at the position $m$ on the other track at time $t$. The dynamics of the system can be described by a set of transition rates that depend not only on the particle type, but also on the position of the particles. For configurations $(l \pm k, l)$ \[$ k \ge 1 $\], the trailing particle can move forward (backward) with a rate $u_{j1}$ ($w_{j1}$), where $j=a$ or $ b$ corresponds to the particle $A$ or $B$, respectively. At the same time, the leading particle can jump forward (backward) with a rate $u_{j2}$ ($w_{j2}$). For the vertical configurations $(l,l)$ each particle can hop forward with the rate $u_{j2}$ or it can move backward with the rate $w_{j1}$: see Fig. 1.
Note that in our model the transition rates do not depend on the particle separation $k = |l-m|$, but only on the “type” of transition: whether it leads to a more compact configuration ($k$ decreases) or a less compact one ($k$ increases). This is because of the linear potential of interaction, $U = U_0 + \varepsilon k$, which makes the energy difference between two consecutive configurations equal to $\varepsilon$, independent of the particle separation $k$. The transition rates are related via the detailed balance relations: $$\label{ratios} \frac{u_{j1}}{w_{j1}} = \frac {u_{j}}{w_{j}}\exp(+ \varepsilon /k_{B}T), \quad \frac{u_{j2}}{w_{j2}} = \frac {u_{j}}{w_{j}}\exp(- \varepsilon /k_{B}T),$$ with $j=a$ or $b$, and where $u_{j}$ and $w_{j}$ are the hopping rates in the case of no interaction between the particles ($\varepsilon = 0$). The dynamics of the system is governed by a set of Master equations for the probability distribution function $P(l,m;t)$, $$\begin{aligned} \label{me1} \frac{d P(l,l;t)}{{dt}} & = & u_{a1} P(l-1,l;t) + w_{a2} P(l+1,l;t) + u_{b1} P(l,l-1;t) \nonumber \\ & & + w_{b2} P(l,l+1;t) - (u_{a2} + w_{a1} + u_{b2} + w_{b1}) P(l,l;t); \end{aligned}$$ $$\begin{aligned} \label{me2} \frac{d P(l,l-k;t)}{{dt}} & = & u_{a2} P(l-1,l-k;t) + w_{a2} P(l+1,l-k;t) + u_{b1} P(l,l-1-k;t) \nonumber \\ & & + w_{b1} P(l,l+1-k;t) - (u_{a2} + w_{a2} + u_{b1} + w_{b1}) P(l,l-k;t);\end{aligned}$$ $$\begin{aligned} \label{me3} \frac{d P(l-k,l;t)}{{dt}} & = & u_{a1} P(l-1-k,l;t) + w_{a1} P(l+1-k,l;t) + u_{b2} P(l-k,l-1;t) \nonumber \\ & & + w_{b2} P(l-k,l+1;t) - (u_{a1} + w_{a1} + u_{b2} + w_{b2}) P(l-k,l;t).\end{aligned}$$ At all times these probabilities satisfy the normalization condition, $\sum\limits_{l = - \infty }^{ + \infty }\sum\limits_{m = - \infty }^{ + \infty } P(l,m;t) = 1$. The solutions of the Master equations can be found by summing over all integers $l$ and $m$ at fixed particle separation $k$.
Defining new functions, $$\label{bdef} P_{0, 0}(t) = \sum\limits_{l = - \infty}^{ + \infty } {P(l,l;t)}, \quad P_{0, k}(t) = \sum\limits_{l = - \infty}^{ + \infty } {P(l,l-k;t)}, \quad P_{1, k}(t) = \sum\limits_{l = - \infty}^{ + \infty } {P(l-k,l;t)},$$ it can be shown then that in the stationary-state limit, $$\label{gen_prob} P_{0,k}=P_{0,0} (\beta_{0})^{k}, \quad P_{1,k}=P_{0,0} (\beta_{1})^{k},$$ where $$\label{beta} \beta_0 = \frac{u_{a2} + w_{b1}}{u_{b1} + w_{a2}}, \quad \beta_1 = \frac{u_{b2} + w_{a1}}{u_{a1} + w_{b2}}.$$ These auxiliary functions play a critical role in our analysis. When $\beta_{0} <1$ and $\beta_{1} <1$, using the conservation of probability, we obtain $$\label{prob} P_{i,k} = \frac{(1 - \beta_0)(1 - \beta_1)}{1 - \beta_0 \beta_1} (\beta_i)^k, \quad i=0,1.$$ This means that the vertical configuration ($k=0$) is the most probable one, and the probabilities of the less compact configurations are exponentially decreasing functions of the particle separation $k$. In this dynamic phase, the particles $A$ and $B$ correlate their overall motion. From the knowledge of the stationary probabilities and the transition rates, the dynamic properties of the system, such as the mean velocity $V$ and dispersion (effective diffusion constant) $D$ of the center of mass, can be calculated as $$\label{vel} V_{CM} = \frac{1}{1 - \beta_0 \beta_1} \left[ (u_{a2} - \beta_0 w_{a2})(1 - \beta_1) + (u_{b2} - \beta_1 w_{b2})(1 - \beta_0) \right],$$ and $$\begin{aligned} \label{disp} D_{CM} = \frac{1}{1 - \beta_0 \beta_1} \left[ \left\{ \frac{1}{2}(u_{a2} + \beta_{0} w_{a2}) - \frac{(A_{0} + w_{a2})(u_{a2} - \beta_{0} A_{0})}{u_{b1} + w_{a2}} \right \}(1 - \beta_{1}) + \right. \nonumber \\ + \left. 
\left \{ \frac{1}{2}(u_{b2} + \beta_{1} w_{b2})- \frac{(A_{1} + w_{b2})(u_{b2} - \beta_{1} A_{1})}{u_{a1} + w_{b2}} \right \} (1 - \beta_{0}) \right] \end{aligned}$$ where the coefficients $A_i$ are given by $$\label{Adef} A_0 = \frac{\beta_1 (u_{a1} - u_{a2}) + \beta_0 \beta_1 w_{a2} - w_{a1}}{1 - \beta_0 \beta_1}, \quad A_1 = \frac{\beta_0 (u_{b1} - u_{b2}) + \beta_0 \beta_1 w_{b2} - w_{b1}}{1 - \beta_0 \beta_1}.$$ The dynamic properties of the individual particles coincide with those of the center of mass of the motor protein cluster. In this case, it can be shown that the average distance $L$ between the particles is always finite (in units of lattice spacings), $$\label{loop} L = \frac{1}{1 - \beta_0 \beta_1} \left[\frac{\beta_0 (1 - \beta_1)}{1 - \beta_0} + \frac{\beta_1 (1 - \beta_0)}{1 - \beta_1} \right].$$ The situation is very different when at least one of the $\beta_{i}$ ($i=0$ or $1$) is larger than unity. Then from Eq. (\[gen\_prob\]) it can be concluded that less compact configurations (large $k$) dominate the steady-state dynamics of the system. In this regime the particles $A$ and $B$ move independently of each other with mean velocities (assuming $A$ is the leading particle) $$\label{rates_weak} V_A = u_{a2} - w_{a2}, \quad V_B = u_{b1} - w_{b1},$$ and dispersions $$\label{disp_weak} D_A = (u_{a2} + w_{a2})/2, \quad D_B = (u_{b1} + w_{b1})/2.$$ The dynamic properties of the center of mass of the motor protein cluster are given by $$\label{weak} V_{CM} = \frac {1}{2}(V_A + V_B), \quad D_{CM} = \frac{1}{4} (D_A + D_B).$$ Furthermore, the average particle-particle separation $L$ steadily increases with time. The boundary between the two dynamic regimes is determined by the condition $\beta_{0}=1$ and $\beta_{1} <1$, or $\beta_{1}=1$ and $\beta_{0} <1$, and it depends on the transition rates and the energy of interaction. 
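The closed-form results above are easy to check numerically. The sketch below (Python; the hopping rates are arbitrary illustrative values chosen so that $\beta_0, \beta_1 < 1$, and we assume the counting convention $k \ge 0$ for the $P_{0,k}$ branch and $k \ge 1$ for $P_{1,k}$, so that the vertical configuration enters once) verifies the normalization of Eq. (\[prob\]) and the mean separation of Eq. (\[loop\]):

```python
import numpy as np

# Illustrative hopping rates (arbitrary values, not fits), chosen so that
# beta_0 < 1 and beta_1 < 1, i.e. the strongly coupled dynamic phase.
u_a1, w_a1, u_a2, w_a2 = 4.0, 0.1, 2.0, 0.2
u_b1, w_b1, u_b2, w_b2 = 3.0, 0.1, 1.0, 0.2

beta0 = (u_a2 + w_b1) / (u_b1 + w_a2)   # Eq. (beta)
beta1 = (u_b2 + w_a1) / (u_a1 + w_b2)

# Stationary probabilities, Eq. (prob); k >= 0 for the i=0 branch and
# k >= 1 for the i=1 branch, so the k=0 configuration is counted once.
C = (1 - beta0) * (1 - beta1) / (1 - beta0 * beta1)
k = np.arange(0, 400)
P0 = C * beta0**k
P1 = C * beta1**k[1:]

total = P0.sum() + P1.sum()               # should equal 1
L = (k * P0).sum() + (k[1:] * P1).sum()   # mean particle separation
L_closed = (beta0 * (1 - beta1) / (1 - beta0)
            + beta1 * (1 - beta0) / (1 - beta1)) / (1 - beta0 * beta1)  # Eq. (loop)
```

The geometric sums reproduce the closed-form normalization and mean separation to machine precision, as they must.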
Using the detailed balance conditions (\[ratios\]), it can be argued that the transition rates can be expressed as $$\label{rates} u_{j1} = u_{j} \gamma^{1 - \theta_{j1}}, \quad w_{j1} = w_{j} \gamma^{- \theta_{j1}}, \quad u_{j2} = u_{j} \gamma^{- \theta_{j2}}, \quad w_{j2} = w_{j} \gamma^{1 - \theta_{j2}},$$ where $\gamma = \exp(\varepsilon/k_{B}T)$, and $j=a$ or $b$. The coefficients $\theta_{ji}$ are energy-distribution factors that determine the effective splitting of the interaction energy between the forward and backward transitions [@howard_book; @betterton03; @stukalin05]. In the simplest approximation, we assume that all energy-distribution factors are approximately equal to each other, $ 0 \le \theta_{ji}\approx \theta \le 1$, because they describe similar transitions in the motion of the individual motor proteins [@stukalin05]. A more general situation with state-dependent energy-distribution factors can also be analyzed. Substituting Eq. (\[rates\]) into the expressions (\[beta\]), we obtain $$\beta_0 \gamma = (\beta_1 \gamma)^{-1} = (u_a + w_b)/(u_b + w_a).$$ The boundary between the two dynamic phases then corresponds to the critical value of the interaction energy, $$\label{ecrit} \varepsilon_c = k_B T \left| \ln \left( \frac{u_a + w_b}{u_b + w_a} \right) \right| \ge 0.$$ It is important to note that the critical interaction depends on temperature, and for the transport of identical particles ($A=B$) the critical interaction is always zero. This indicates that the dynamic phase transition can only be observed for coupled asymmetric motor proteins. The existence of two dynamic phases in the transport of interacting asymmetric motor proteins can be understood using the following arguments. Consider the configuration where the particle $A$ is $k$ sites ahead of the particle $B$ and $\varepsilon=0$. 
The effective rate of the transition to the configurations where the two particles are separated by $k+1$ sites is equal to $u_{a}+w_{b}$, while the effective rate for the $k+1 \rightarrow k$ transition is given by $u_{b}+w_{a}$. The free energy change of making the particle configuration less compact ($k \rightarrow k+1$) can be written as $\Delta G(0)= - k_B T \ln \left( \frac{u_a + w_b}{u_b + w_a} \right) <0$ [@howard_book; @fisher99], assuming that $u_a + w_b > u_b + w_a$. If there is interaction between the particles, then the free energy change increases by the value of $\varepsilon$, $\Delta G(\varepsilon)= \Delta G(0) + \varepsilon$. The boundary between the two regimes corresponds to $\Delta G(\varepsilon_{c})= 0$, which leads to $\varepsilon_{c}= \left| \Delta G(0) \right|$. Thus, for strong interactions ($\varepsilon > \varepsilon_{c}$), it is thermodynamically unfavorable to form less compact configurations. The particles cannot run away from each other, and they move as one tightly coupled cluster. For weak interactions ($\varepsilon < \varepsilon_{c}$), the favorable free energy change of forming a less compact particle configuration cannot be compensated by the energy of interaction. As a result, the distance between the particles grows linearly with time, and they move in an uncorrelated fashion. The dynamic properties of interacting motor proteins are different in the two phases, as shown in Figs. 2 and 3. The mean velocity of the center of mass changes slope at the critical energy of interaction, while the mean velocities of the individual particles converge to a single value; see Fig. 2. The effect of the interaction is much stronger for the dispersions. As illustrated in Fig. 3, there is a jump in the mean dispersion of the center of mass at the phase boundary. In addition, the mean dispersions of the individual particles do not converge to a single value. This discontinuity in the dispersions is a clear sign of the dynamic phase transition in the system. 
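The parametrization Eq. (\[rates\]) and the location of the phase boundary Eq. (\[ecrit\]) can be verified directly. A short Python sketch (the bare rates, $\theta$, and $\varepsilon/k_BT$ are arbitrary illustrative choices, not fits to data):

```python
import math

# Illustrative bare rates (epsilon = 0) and energy-distribution factor theta.
u_a, w_a, u_b, w_b = 4.0, 0.1, 1.0, 0.1
theta = 0.02
g = math.exp(0.5)                  # gamma = exp(eps / k_B T) for eps = 0.5 k_B T

# Parametrized rates, Eq. (rates)
u_a1, w_a1 = u_a * g**(1 - theta), w_a * g**(-theta)
u_a2, w_a2 = u_a * g**(-theta),    w_a * g**(1 - theta)
u_b1, w_b1 = u_b * g**(1 - theta), w_b * g**(-theta)
u_b2, w_b2 = u_b * g**(-theta),    w_b * g**(1 - theta)

# Detailed balance, Eq. (ratios): u_{j1}/w_{j1} = (u_j/w_j) gamma, etc.
assert math.isclose(u_a1 / w_a1, (u_a / w_a) * g)
assert math.isclose(u_a2 / w_a2, (u_a / w_a) / g)

beta0 = (u_a2 + w_b1) / (u_b1 + w_a2)   # Eq. (beta)
beta1 = (u_b2 + w_a1) / (u_a1 + w_b2)

# beta0 * gamma = 1 / (beta1 * gamma) = (u_a + w_b) / (u_b + w_a)
r = (u_a + w_b) / (u_b + w_a)
eps_c = abs(math.log(r))           # critical interaction in units of k_B T
```

Note that $\varepsilon_c$ here depends only on the bare rates, not on $\theta$ or on the trial value of $\varepsilon$, as Eq. (\[ecrit\]) states.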
In order to illustrate our approach, we consider a simplified model of the motion of interacting motor proteins that can only step forward, i.e., $w_{a}=w_{b}=0$. This model seems reasonable for the description of RecBCD helicase transport [@stukalin05], since experiments indicate that the backward transitions are small [@perkins04]. Assuming that the particle $A$ moves faster than the particle $B$ ($u_{a} > u _{b}$), the critical interaction can be written as $\varepsilon_{c}= k_{B}T \ln (u_{a}/u_{b})$. For RecBCD motor proteins, where the transition rates for the subunits can be approximated as $u_{a}=300$ and $u_{b}=73$ nucleotides/s [@stukalin05], the critical interaction is $\varepsilon_{c} \approx 1.4k_{B}T$. Theoretical analysis [@stukalin05] estimates the energy of interaction between the subunits in RecBCD as $\approx 6k_{B}T$, implying that this motor protein moves in the strong-coupling regime, in agreement with experiments [@bianco01; @dohoney01; @taylor03; @dillingham03]. Using Eqs. (\[vel\],\[disp\],\[Adef\]), it can be shown that for strong interactions the dynamic properties of the system are given by $$V_{CM}(\varepsilon \ge \varepsilon_{c})=\frac{(u_{a}+u_{b}) \gamma^{-\theta}}{1+\gamma^{-1}}, \quad D_{CM}(\varepsilon \ge \varepsilon_{c})=V_{CM} \left( 1-\frac{2}{\gamma(1+\gamma^{-1})^{2}} \right).$$ In the weak-coupling regime, from the expressions (\[rates\_weak\],\[disp\_weak\],\[weak\]) it can be derived that $$V_{CM}(\varepsilon \le \varepsilon_{c})=(u_{a}+u_{b}\gamma) \gamma^{-\theta}, \quad D_{CM}(\varepsilon \le \varepsilon_{c})=V_{CM}/8.$$ The jump in the dispersions at the critical interaction is equal to $$\Delta D=(u_{a}/4) (u_{a}/u_{b})^{-\theta} \left[ 2 \frac{u_{a}^{2}+u_{b}^{2}}{(u_{a}+u_{b})^{2}} -1 \right] >0.$$ Although for this simplified model the dispersion jump is always positive, it can be shown that in general the discontinuity may have either sign. 
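For the forward-only model, the quoted RecBCD estimate is easy to reproduce (Python; the energy-distribution factor $\theta$ is an illustrative assumption, and only the expressions printed above are evaluated, not re-derived):

```python
import math

# RecBCD subunit stepping rates from the text (nucleotides/s); backward
# rates are neglected (w_a = w_b = 0) as in the forward-only model.
u_a, u_b = 300.0, 73.0
theta = 0.5                       # energy-distribution factor: an assumed value

# Critical interaction, in units of k_B T: eps_c = ln(u_a / u_b)
eps_c = math.log(u_a / u_b)       # approximately 1.4, as quoted in the text

# Dispersion jump at the phase boundary, using the printed expression
dD = (u_a / 4) * (u_a / u_b)**(-theta) * (
        2 * (u_a**2 + u_b**2) / (u_a + u_b)**2 - 1)
```

Since $2(u_a^2+u_b^2) > (u_a+u_b)^2$ whenever $u_a \ne u_b$, the bracket is positive and the jump $\Delta D$ is positive for any choice of $\theta$.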
The presented theoretical analysis of the dynamics of coupled motor proteins is based on a simplified picture that neglects many important features of biological transport. The intermediate biochemical states, the sequence dependence of the transition rates, and protein flexibility have not been taken into account in this approach. However, it is expected that these phenomena will not change the main prediction of our analysis: the existence of dynamic phase transitions that depend on the interaction between the particles. The most crucial assumption in our approach is that of a linear potential of interactions. An important question is whether the predicted dynamic phase transitions will survive for more realistic potentials of interaction between proteins. In summary, the effect of interaction between motor proteins is investigated by explicitly analyzing a simple stochastic model. Using the explicit formulas for the dynamic properties, it is shown that there are two dynamic phases for asymmetric motor proteins, depending on the interaction energy. Below the critical interaction the particles do not correlate with each other, while above the critical interaction the particles move as a tight cluster. The origin of these phenomena is the balance between the chemical free energy change and the change in the energy of interactions for different transitions. The critical interaction depends on the transition rates, and it can be modified by changing the temperature. Our method is applied to determine the dynamic phase of RecBCD helicases, with results in agreement with the experiments. This theoretical approach suggests a new way of investigating and controlling biological transport processes at the nanoscale level. The authors would like to acknowledge the support from the Welch Foundation (grant C-1559), the Alfred P. Sloan Foundation (grant BR-4418) and the U.S. National Science Foundation (grant CHE-0237105). [99]{} H. Lodish [*et al.*]{}, [*Molecular Cell Biology*]{} (W.H. Freeman and Company, New York, 2000). J. Howard, [*Mechanics of Motor Proteins and Cytoskeleton*]{} (Sinauer Associates, Sunderland, Massachusetts, 2001). D. Bray, [*Cell Movements. From Molecules to Motility*]{} (Garland Publishing, New York, 2001). F. Kozielski [*et al.*]{}, Cell, [**91**]{}, 985 (1997). M.R. Singleton [*et al.*]{}, Nature, [**432**]{}, 187 (2004). E. Delagoutte and P.H. von Hippel, Q. Rev. Biophys., [**35**]{}, 431 (2002). P.R. Bianco [*et al.*]{}, Nature, [**409**]{}, 374 (2001). K.M. Dohoney and J. Gelles, Nature, [**409**]{}, 370 (2001). A.F. Taylor and G.R. Smith, Nature, [**423**]{}, 889 (2003). M.S. Dillingham, M. Spies and S.C. Kowalczykowski, Nature, [**423**]{}, 893 (2003). C.L. Asbury, A.N. Fehr and S.M. Block, Science, [**302**]{}, 2130 (2003). Y. Zhang and W.O. Hancock, Biophys. J., [**87**]{}, 1795 (2004). C. Kural [*et al.*]{}, Science, [**308**]{}, 5727 (2005). M.D. Betterton and F. Julicher, Phys. Rev. Lett., [**91**]{}, 258103 (2003). E.B. Stukalin, H. Phillips III, and A.B. Kolomeisky, Phys. Rev. Lett., [**94**]{}, 238101 (2005). M.E. Fisher and A.B. Kolomeisky, Proc. Natl. Acad. Sci. U.S.A., [**96**]{}, 6597 (1999). T.T. Perkins [*et al.*]{}, Biophys. J., [**86**]{}, 1640 (2004). [**Figure Captions:**]{}\ \ Fig. 1. Schematic view of the motion of two interacting motor proteins. Transition rates $u_{ai}$ and $w_{ai}$ ($ i = 1$ or $2$) describe the motion of the particle $A$ (small circles), while $u_{bi}$ and $w_{bi}$ are the transition rates for the particle $B$ (large circles). Any configuration is specified by the integers $l$ and $m$ for the positions of particles $A$ and $B$, respectively. The energy of interaction in the configuration $(l,m)$ is equal to $|l-m| \varepsilon \ge 0$. Fig. 2. Relative velocities for the coupled motor proteins as a function of the interaction energy. 
The solid line corresponds to the relative velocity of the center of mass of the particles, while the dotted lines are the relative velocities of the individual particles below the critical interaction. The parameters used for the calculations are: $u_a = 4$, $w_a = 0.1$, $u_b = 1$, $w_b = 0.1$ and $\theta = 0.02$. Fig. 3. Relative dispersions for the coupled motor proteins as a function of the interaction energy. The solid line corresponds to the relative dispersion of the center of mass of the particles, while the dotted lines are the relative dispersions of the individual particles below the critical interaction. The parameters used for the calculations are the same as in Fig. 2.
--- abstract: 'We analyze a gauge-Higgs unification model which is based on a gauge theory defined on a six-dimensional spacetime with an $S^2$ extra-space. We impose a symmetry condition for a gauge field and non-trivial boundary conditions of the $S^2$. We provide the scheme for constructing a four-dimensional theory from the six-dimensional gauge theory under these conditions. We then construct a concrete model based on an SO(12) gauge theory with fermions which lie in a 32 representation of SO(12), under the scheme. This model leads to a Standard-Model(-like) gauge theory which has gauge symmetry SU(3) $\times$ SU(2)$_L$ $\times$ U(1)$_Y$($\times$ U(1)$^2$) and one generation of SM fermions, in four-dimensions. The Higgs sector of the model is also analyzed, and it is shown that electroweak symmetry breaking is realized and predictions for the W-boson and Higgs-boson masses are obtained.' address: 'Department of Physics, Saitama University, Shimo-Okubo, Sakura-ku, Saitama 355-8570, Japan' author: - Takaaki Nomura - Joe Sato title: 'Standard(-like) Model from an SO(12) Grand Unified Theory in six-dimensions with $S^2$ extra-space' --- Gauge-Higgs unification, Grand unified theory, Coset space dimensional reduction 11.10.Kk, 12.10.-g, 12.10.Dm, 14.80.Cp Introduction ============ The Higgs sector of the Standard Model (SM) plays an essential role in the mechanism of spontaneous breaking of the gauge symmetry from SU(3)$_C$ $\times$ SU(2)$_L$ $\times$ U(1)$_Y$ down to SU(3)$_C$ $\times$ U(1)$_{EM}$, giving masses to the elementary particles. The SM, however, does not address even the most fundamental properties of the Higgs sector, such as the mass of the Higgs particle and the Higgs self-coupling constant. Thus the Higgs sector is not only the last frontier of the SM, but it will also provide a key clue to the physics beyond the SM. Gauge-Higgs unification is one of the attractive approaches to the physics beyond the SM in this regard (for recent approaches, see Refs. 
).\ In this approach, the Higgs particles originate from the extra-dimensional components of the gauge field of a gauge theory defined on a spacetime with dimensions larger than four. Thus the Higgs sector is embraced into the gauge interactions in the higher-dimensional spacetime, and part of the fundamental properties of the Higgs scalar is determined by the gauge interactions. We consider a gauge-Higgs unification model based on a gauge theory defined on the six-dimensional spacetime whose extra-space has the structure of the two-sphere $S^2$. We can impose on the fields of this gauge theory the symmetry condition which identifies the gauge transformation with the isometry transformation of $S^2$, as in the coset space dimensional reduction (CSDR) scheme [@Manton:1979kb; @Forgacs:1979zs; @Kapetanakis:1992hf; @Chatzistavrakidis:2007by; @Zoupanos08], since $S^2$ has the coset space structure $S^2$=SU(2)/U(1). We then impose the symmetry on the gauge field in order to carry out the dimensional reduction of the gauge sector. The dimensional reduction is explicitly carried out by applying the solution of the symmetry condition, and a background gauge field is introduced as a part of the solution of the symmetry condition [@Manton:1979kb]. We obtain, by the dimensional reduction, the scalar sector with a potential term which leads to spontaneous symmetry breaking. The symmetry also restricts the gauge symmetry and the scalar contents originating from the extra gauge-field components in four-dimensions. We, however, do not impose the symmetry on the fermions of the gauge theory, in contrast to other CSDR models. We then have massive Kaluza-Klein (KK) modes of fermions in four-dimensions, while the gauge and scalar fields have no massive KK modes, and we would obtain a dark-matter candidate. Generally, the KK modes do not have a massless mode because of the positive curvature of $S^2$ [@A.A.Abrikosov]. 
We, however, obtain a massless KK mode because of the existence of the background gauge field; the fermion components which have the massless mode are determined by the background gauge field. Gauge theories with the symmetry condition have been well investigated in order to construct models which provide a Grand Unified Theory (GUT) in four-dimensions [@Kapetanakis:1992hf; @Chapline:1982wy; @10dim-Model:K12; @10dim-Model:D14; @10dim-Model:B; @CSDR14D]. No known model, however, has reproduced the full particle contents of a GUT. We generally cannot obtain the Higgs particles which properly break a GUT gauge symmetry, while one or more generations of fermions and the SM Higgs doublet could be obtained. We then impose on the fields of a six-dimensional theory the non-trivial boundary conditions of $S^2$, together with the symmetry condition, in order to overcome this difficulty. A GUT gauge symmetry can be broken to the SM gauge symmetry by the non-trivial boundary conditions (for cases with orbifold extra-space, see for example ). In this paper, we analyze the gauge theory defined on the six-dimensional spacetime which has $S^2$ as extra-space, with the symmetry condition and non-trivial boundary conditions. The gauge symmetry, scalar contents and massless fermion contents are determined by the symmetry condition and the boundary conditions. First, we provide the scheme for constructing a four-dimensional theory from the six-dimensional gauge theory. We then construct the model based on SO(12) gauge symmetry and show that the SM Higgs doublet and one generation of massless fermions are obtained in four-dimensions. We also find that electroweak symmetry breaking is realized and the Higgs mass is predicted by analyzing the Higgs sector of the model. This paper is organized as follows. In sec. 
\[CSDR\], we give the scheme for constructing a four-dimensional theory from a gauge theory on a six-dimensional spacetime whose extra space is the two-sphere $S^2$, with the symmetry condition and non-trivial boundary conditions. In sec. \[SO(12)model\], we construct the model based on SO(12) gauge symmetry. We summarize our results in sec. \[summary\]. Six-dimensional gauge theory with extra-space $S^2$ under the symmetry condition and non-trivial boundary conditions {#CSDR} ===================================================================================================================== In this section, we develop the scheme for constructing a four-dimensional theory from a gauge theory on a six-dimensional spacetime whose extra-space is the two-sphere $S^2$, with the symmetry condition and non-trivial boundary conditions. A gauge theory on six-dimensional spacetime with $S^2$ extra-space ------------------------------------------------------------------ We begin with a gauge theory with a gauge group $G$ defined on a six-dimensional spacetime $M^6$. The spacetime $M^6$ is assumed to be a direct product of the four-dimensional Minkowski spacetime $M^4$ and the two-sphere $S^2$, such that $M^6=M^4 \times S^2$. The two-sphere $S^2$ is a two-dimensional coset space, and can be written as $S^2 = \mathrm{SU}(2)_I/\mathrm{U}(1)_I$, where U(1)$_I$ is a subgroup of SU(2)$_I$. This coset space structure of $S^2$ requires that $S^2$ has the isometry group SU(2)$_I$, and that the group U(1)$_I$ is embedded into the group SO(2) which is a subgroup of the Lorentz group SO(1,5). We denote the coordinates of $M^6$ by $X^M=(x^{\mu},y^{\theta}=\theta,y^{\phi}=\phi)$, where $x^{\mu}$ and $\{ \theta,\phi \}$ are the $M^4$ coordinates and the $S^2$ spherical coordinates, respectively. The spacetime index $M$ runs over $\mu$ $\in$ $\{ 0,1,2,3 \}$ and $\alpha$ $\in$ $\{ \theta,\phi \}$. 
The metric of $M^6$, denoted by $g_{MN}$, can be written as $$g_{MN} = \begin{pmatrix} \eta_{\mu \nu} & 0 \\ 0 & -g_{\alpha \beta} \end{pmatrix},$$ where $\eta_{\mu \nu}= diag(1,-1,-1,-1)$ and $g_{\alpha \beta}= diag(1, \sin^{2} \theta)$ are the metrics of $M^4$ and $S^2$, respectively. Notice that we omit the radius $R$ of $S^2$ in this discussion. We define the vielbein $e^{M}_{A}$ that connects the metric of $M^6$ and that of the tangent space of $M^6$, denoted by $h_{AB}$, as $g_{MN}=e_M^{A} e_N^B h_{AB}$. Here $A=(\mu,a)$, with $a$ $\in$ $\{ 4,5 \}$, is the index for the coordinates of the tangent space of $M^6$. The explicit forms of the vielbeins are summarized in the Appendix. We introduce a gauge field $A_{M}(x,y)=(A_{\mu}(x,y),A_{\alpha}(x,y))$, which belongs to the adjoint representation of the gauge group $G$, and fermions $\psi(x,y)$, which lie in a representation $F$ of $G$. The action of this theory is given by $$\label{6Daction} S = \int dx^4 \sin \theta d \theta d \phi \bigl( \bar{\psi} i \Gamma^{\mu} D_{\mu} \psi + \bar{\psi} i \Gamma^{a} e^{\alpha}_{a} D_{\alpha} \psi - \frac{1}{4 g^2} g^{MN} g^{KL} Tr[F_{MK} F_{NL}] \bigr) ,$$ where $F_{MN}= \partial_M A_N(X) -\partial_N A_M(X) -[A_M(X),A_N(X)]$ is the field strength, $D_M$ is the covariant derivative including the spin connection, and $\Gamma_A$ represents the 6-dimensional Clifford algebra. Here $D_M$ and $\Gamma_A$ can be written explicitly as, $$\begin{aligned} D_{\mu} &= \partial_{\mu} - A_{\mu}, \\ D_{\theta} &= \partial_{\theta} - A_{\theta}, \\ D_{\phi} &= \partial_{\phi} -i \frac{\Sigma_3}{2} \cos \theta -A_{\phi}, \\ \Gamma_{\mu} &= \gamma_{\mu} \otimes \mathbf{I}_2, \\ \Gamma_4 &= \gamma_{5} \otimes \sigma_1, \\ \Gamma_5 &= \gamma_{5} \otimes \sigma_2, \end{aligned}$$ where $ \{ \gamma_{\mu}, \gamma_{5} \} $ are the 4-dimensional Dirac matrices, $\sigma_i(i=1,2,3)$ are the Pauli matrices, $\mathbf{I}_d$ is the $d \times d$ identity, and $\Sigma_3$ is defined as $\Sigma_3=\mathbf{I}_4 \otimes \sigma_3$. 
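As a consistency check of the representation above, the following numpy sketch (using the standard Dirac basis for the $\gamma_\mu$, which the text does not fix, so this basis is an assumption) verifies that the $\Gamma_\mu$ satisfy the four-dimensional Clifford algebra and that $\Gamma_4$ and $\Gamma_5$ anticommute with each other and with all $\Gamma_\mu$:

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
s = [np.array([[0, 1], [1, 0]], complex),      # sigma_1
     np.array([[0, -1j], [1j, 0]], complex),   # sigma_2
     np.array([[1, 0], [0, -1]], complex)]     # sigma_3

# 4D Dirac matrices in the Dirac basis (an illustrative choice of basis)
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gi = [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]
g5 = 1j * g0 @ gi[0] @ gi[1] @ gi[2]
gamma4 = [g0] + gi

# Six-dimensional matrices as defined in the text
G = [np.kron(gm, I2) for gm in gamma4]   # Gamma_mu = gamma_mu (x) I_2
G.append(np.kron(g5, s[0]))              # Gamma_4  = gamma_5  (x) sigma_1
G.append(np.kron(g5, s[1]))              # Gamma_5  = gamma_5  (x) sigma_2

eta = np.diag([1.0, -1.0, -1.0, -1.0])
anti = lambda A, B: A @ B + B @ A

# {Gamma_mu, Gamma_nu} = 2 eta_{mu nu}; Gamma_4, Gamma_5 anticommute
# with each other and with every Gamma_mu
for m in range(4):
    for n in range(4):
        assert np.allclose(anti(G[m], G[n]), 2 * eta[m, n] * np.eye(8))
for a in (4, 5):
    for m in range(4):
        assert np.allclose(anti(G[a], G[m]), 0)
assert np.allclose(anti(G[4], G[5]), 0)
```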
The symmetry condition and the boundary conditions -------------------------------------------------- We impose on the gauge field $A_M(X)$ the symmetry which connects the SU(2)$_I$ isometry transformation on $S^2$ with a gauge transformation on the fields, in order to carry out the dimensional reduction, and the non-trivial boundary conditions of $S^2$ to restrict the four-dimensional theory. The symmetry requires that an SU(2)$_I$ coordinate transformation should be compensated by a gauge transformation [@Manton:1979kb; @Forgacs:1979zs]. The symmetry further leads to the following set of symmetry conditions on the fields: $$\begin{aligned} \label{symm-con-vec4} \xi_i^{\beta} \partial_{\beta} A_{\mu} &= \partial_{\mu} W_i + [W_i,A_{\mu}], \\ \label{symm-con-vecex} \xi_i^{\beta} \partial_{\beta} A_{\alpha} + \partial_{\alpha} \xi_i^{\beta} A_{\beta} &= \partial_{\alpha} W_i + [W_i,A_{\alpha}], \end{aligned}$$ where the $\xi_i^{\alpha}$ are the Killing vectors generating the SU(2)$_I$ symmetry and the $W_i$ are some fields which generate an infinitesimal gauge transformation of $G$. Here the index $i = 1,2,3$ corresponds to that of the SU(2) generators. The explicit forms of the $\xi_i^{\alpha}$ for $S^2$ are: $$\begin{aligned} \xi_1^{\theta} &= \sin \phi , \qquad \xi_1^{\phi} = \cot \theta \cos \phi, \nonumber \\ \xi_2^{\theta} &= -\cos \phi , \qquad \xi_2^{\phi} = \cot \theta \sin \phi, \nonumber \\ \xi_3^{\theta} &= 0 , \qquad \xi_3^{\phi} = -1. \end{aligned}$$ The LHSs of Eqs. (\[symm-con-vec4\]) and (\[symm-con-vecex\]) are infinitesimal isometry SU(2)$_I$ transformations, and the RHSs are infinitesimal gauge transformations. 
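That the $\xi_i$ above generate SU(2)$_I$ can be confirmed numerically: their Lie brackets close as $[\xi_i,\xi_j]=\epsilon_{ijk}\,\xi_k$. A small finite-difference sketch in Python (the sample point is arbitrary):

```python
import math

def xi_vec(i, th, ph):
    # Killing vectors (xi^theta, xi^phi) of S^2, as given in the text
    if i == 0:
        return (math.sin(ph), math.cos(ph) / math.tan(th))
    if i == 1:
        return (-math.cos(ph), math.sin(ph) / math.tan(th))
    return (0.0, -1.0)

def bracket(i, j, th, ph, h=1e-6):
    """Lie bracket of vector fields: [X, Y]^a = X^b d_b Y^a - Y^b d_b X^a,
    with the derivatives taken by central finite differences."""
    def d(f, a, b):  # derivative of component a of field f w.r.t. coordinate b
        dth, dph = (h, 0.0) if b == 0 else (0.0, h)
        return (f(th + dth, ph + dph)[a] - f(th - dth, ph - dph)[a]) / (2 * h)
    X = lambda t, p: xi_vec(i, t, p)
    Y = lambda t, p: xi_vec(j, t, p)
    return tuple(
        sum(X(th, ph)[b] * d(Y, a, b) - Y(th, ph)[b] * d(X, a, b)
            for b in range(2))
        for a in range(2))

# Check [xi_i, xi_j] = eps_{ijk} xi_k for the cyclic index triples
th0, ph0 = 0.7, 0.3
for (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    b = bracket(i, j, th0, ph0)
    e = xi_vec(k, th0, ph0)
    assert abs(b[0] - e[0]) < 1e-5 and abs(b[1] - e[1]) < 1e-5
```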
The non-trivial boundary conditions are defined so as to leave the action Eq (\[6Daction\]) invariant, and are written as $$\begin{aligned} \label{paripsi} \psi (x,\pi-\theta,-\phi) &= \gamma_5 P \psi (x,\theta,\phi), \\ \label{pariAmu} A_{\mu} (x,\pi-\theta,-\phi) &= P A_{\mu}(x,\theta,\phi) P, \\ \label{pariAthe} A_{\theta} (x,\pi-\theta,-\phi) &= -P A_{\theta}(x,\theta,\phi) P, \\ \label{pariAph} A_{\phi} (x,\pi-\theta,-\phi) &= -P A_{\phi}(x,\theta,\phi) P, \\ \label{boundpsi} \psi (x,\theta,\phi+2 \pi) &= P' \psi (x,\theta,\phi), \\ \label{boundAmu} A_{\mu} (x,\theta,\phi+ 2 \pi) &= P' A_{\mu}(x,\theta,\phi) P', \\ \label{boundAthe} A_{\theta} (x,\theta,\phi+ 2 \pi) &= P' A_{\theta}(x,\theta,\phi) P', \\ \label{boundAph} A_{\phi} (x,\theta,\phi+2 \pi) &= P' A_{\phi}(x,\theta,\phi) P',\end{aligned}$$ where $P(P')$ acts on the representation space of the gauge group $G$ and satisfies $P^2=1$ ($(P')^2=1$); the elements of $P(P')$ can be taken as $\pm 1$. The dimensional reduction and a Lagrangian in four-dimensions ------------------------------------------------------------- The dimensional reduction of the gauge sector is explicitly carried out by applying the solutions of the symmetry conditions Eq (\[symm-con-vec4\],\[symm-con-vecex\]). These solutions are given by Manton [@Manton:1979kb] as $$\begin{aligned} \label{kaiAmu} A_{\mu} &= A_{\mu}(x), \\ \label{kaiAtheta} A_{\theta} &= -\Phi_1(x), \\ \label{kaiAphi} A_{\phi} &= \Phi_2(x) \sin \theta - \Phi_3 \cos \theta, \\ \label{solW1} W_1 &= - \Phi_3 \frac{\cos \phi}{\sin \theta}, \\ \label{solW2} W_2 &= - \Phi_3 \frac{\sin \phi}{\sin \theta}, \\ \label{solW3} W_3 &= 0,\end{aligned}$$ and satisfy the following constraints: $$\begin{aligned} \label{kousoku1} [\Phi_3,A_{\mu}] &= 0, \\ \label{kousoku2} [-i \Phi_3,\Phi_i(x)] &= i \epsilon_{3ij} \Phi_j(x), \end{aligned}$$ where $\Phi_1(x)$ and $\Phi_2(x)$ are scalar fields, and $-i\Phi_3$ is chosen as a generator of U(1)$_I$. Note that the $\Phi_3$ term in Eq. 
(\[kaiAphi\]) corresponds to the background gauge field [@background]. Substituting the solutions Eq (\[kaiAmu\])-(\[kaiAphi\]) into $A_M(X)$ in the action Eq (\[6Daction\]), we can easily integrate over the coordinates $\theta$ and $\phi$ in the gauge sector. We then obtain a four-dimensional action as $$\begin{aligned} \label{4d-action} S_{4D}^{(gauge)} = \int d^4x \biggl( &- \frac{1}{4g^2} Tr[F_{\mu \nu} F^{\mu \nu}(x)] \nonumber \\ &- \frac{1}{2g^2} Tr[D'_{\mu}\Phi_1(x) D'^{\mu} \Phi_1(x)+D'_{\mu}\Phi_2(x) D'^{\mu} \Phi_2(x)] \nonumber \\ &- \frac{1}{2g^2} Tr[(\Phi_3+[\Phi_1(x),\Phi_2(x)])(\Phi_3+[\Phi_1(x),\Phi_2(x)])] \biggr), \end{aligned}$$ where $D'_{\mu} \Phi = \partial_{\mu}\Phi -[A_{\mu},\Phi]$. The fermion sector of the four-dimensional action is obtained by expanding the fermions in normal modes of $S^2$ and then integrating over the $S^2$ coordinates in the six-dimensional action. Thus, the fermions have massive KK modes, which could provide a dark-matter candidate. Generally, the KK modes do not have a massless mode because of the positive curvature of $S^2$ [@A.A.Abrikosov]. We, however, can show that the fermion components satisfying the following condition have a massless mode: $$\label{kousoku3} -i \Phi_3 \psi = \frac{\Sigma_3}{2} \psi.$$ The squared masses of the KK modes are the eigenvalues of the square of the extra-dimensional Dirac operator $-i \hat{D}$. In the $S^2$ case, $-i \hat{D}$ is written as $$\begin{aligned} \label{dirac} -i \hat{D} &= -i e^{\alpha a} \Gamma_a D_{\alpha} \nonumber \\ &=-i \bigl[ \Sigma_1 (\partial_{\theta} + \frac{\cot \theta}{2} ) + \Sigma_2 (\frac{1}{\sin \theta} \partial_{\phi} + \Phi_3 \cot \theta ) \bigr],\end{aligned}$$ where $\Sigma_i=\mathbf{I}_4 \otimes \sigma_i$. 
The square of $-i \hat{D}$ can be calculated explicitly: $$\begin{aligned} \label{dirac-square} (-i \hat{D})^2 = - \bigl[ \frac{1}{\sin \theta} \partial_{\theta} (\sin \theta \partial_{\theta}) + \frac{1}{\sin^2 \theta} \partial_{\phi}^2 +i (2(-i\Phi_3) - \Sigma_3) \frac{\cos \theta}{\sin^2 \theta} \partial_{\phi} \nonumber \\ -\frac{1}{4} -\frac{1}{4 \sin^2 \theta} + \Sigma_3 (-i\Phi_3 ) \frac{1}{\sin^2 \theta} - (-i\Phi_3)^2 \cot^2 \theta \bigr].\end{aligned}$$ We then act with this operator on a fermion $\psi(X)$ which satisfies Eq. (\[kousoku3\]), and obtain the relation $$(-i \hat{D})^2 \psi = -\bigl[\frac{1}{\sin \theta} \partial_{\theta} (\sin \theta \partial_{\theta}) + \frac{1}{\sin^2 \theta} \partial_{\phi}^2 \bigr] \psi.$$ The eigenvalues of the operator on the RHS are those of (minus) the scalar Laplacian on $S^2$, $l(l+1) \ge 0$, and the eigenvalue zero is attained by the constant mode. Thus the fermion components satisfying Eq. (\[kousoku3\]) have a massless mode, while the other components have only massive KK modes. Note that the massless mode $\psi_0$ should be independent of the $S^2$ coordinates $\theta$ and $\phi$: $$\label{masslessmode} \psi_0 = \psi(x).$$ The existence of the massless fermion may indicate the meaning of the symmetry condition; though the energy density of the gauge sector in the presence of the background field is higher than that without it, since we have massless fermions, the configuration may constitute a ground state as a whole in the presence of fermions. We also note that we could impose the symmetry condition on the fermions [@Kapetanakis:1992hf; @Manton:1981es]. In that case, we obtain the massless condition Eq. (\[kousoku3\]) from the symmetry condition for the fermion, and the solution of the symmetry condition is independent of the $S^2$ coordinates, $\psi=\psi(x)$, with no massive KK modes. Therefore, the same discussion applies to that case as to ours if we focus only on the massless modes in our scheme. 
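The cancellation underlying the massless mode can be made explicit: on a component for which $-i\Phi_3$ and $\Sigma_3$ act with eigenvalues $\lambda/2$ and $\lambda$ ($\lambda = \pm 1$, which is the content of Eq. (\[kousoku3\])), every non-Laplacian term of Eq. (\[dirac-square\]) vanishes. A small numerical sketch of this term-by-term check:

```python
import math

def extra_terms(theta, lam):
    """Non-Laplacian terms of Eq. (dirac-square) evaluated on a component
    with -i Phi_3 -> lam/2 and Sigma_3 -> lam (lam = +1 or -1).
    Returns (coefficient of d_phi, remaining scalar terms); both should
    vanish when the massless-mode condition holds."""
    dphi_coeff = (2 * (lam / 2) - lam) * math.cos(theta) / math.sin(theta)**2
    rest = (-0.25 - 1 / (4 * math.sin(theta)**2)
            + lam * (lam / 2) / math.sin(theta)**2
            - (lam / 2)**2 * (math.cos(theta) / math.sin(theta))**2)
    return dphi_coeff, rest

# Both chirality components (lam = +1, -1) reduce to the scalar Laplacian
for lam in (1, -1):
    for theta in (0.3, 1.2, 2.5):
        d, r = extra_terms(theta, lam)
        assert abs(d) < 1e-12 and abs(r) < 1e-12
```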
A gauge symmetry and particle contents in four-dimensions --------------------------------------------------------- The symmetry conditions and the non-trivial boundary conditions substantially constrain the four-dimensional gauge group and its representations for the particle contents. The gauge symmetry and particle contents in four-dimensions must satisfy the constraints Eq (\[kousoku1\]),(\[kousoku2\]),(\[kousoku3\]) and be consistent with the boundary conditions Eq (\[paripsi\])-(\[boundAph\]). We show the prescriptions to identify the four-dimensional gauge symmetry and particle contents below. First, we show the prescriptions to identify the gauge symmetry and field components which satisfy the constraints Eq (\[kousoku1\]),(\[kousoku2\]),(\[kousoku3\]). The gauge group $H$ that satisfies the constraint Eq (\[kousoku1\]) is identified as $$\label{H-condition} H = C_G(U(1)_I)$$ where $C_G(U(1)_I)$ denotes the centralizer of U(1)$_I$ in $G$ [@Forgacs:1979zs]. Note that this implies $G$ $\supset$ $H$ = $H'$ $\times$ U(1)$_I$, where $H'$ is some subgroup of $G$. Second, the scalar field components which satisfy the constraints Eq. (\[kousoku2\]) are specified by the following prescription. Suppose that the adjoint representations of SU(2)$_I$ and $G$ are decomposed according to the embeddings SU(2)$_I$ $\supset$ U(1)$_I$ and $G$ $\supset$ $H'$ $\times$ U(1)$_I$ as $$\begin{aligned} 3 \, (\mathrm{adj} \, \mathrm{SU}(2)_I) & = (0)(\mathrm{adj} \, \mathrm{U}(1)_I) + (2)+(-2), \label{SU(2)-dec} \\ \mathrm{adj} \, G & = (\mathrm{adj} \, H')(0) + 1(0)(\mathrm{adj} \, \mathrm{U}(1)_I) + \sum_{g} h_{g}(r_{g}), \label{G-dec}\end{aligned}$$ where the $h_g$ denote representations of $H'$, and the $r_g$ denote U(1)$_I$ charges. The scalar components satisfying the constraints belong to the $h_g$s whose corresponding $r_g$s in the decomposition Eq. (\[G-dec\]) are $\pm 2$. Third, the fermion components which satisfy the constraints Eq. (\[kousoku3\]) are determined as follows [@Manton:1981es]. 
Let the group U(1)$_I$ be embedded into the Lorentz group SO(2) in such a way that the vector representation 2 of SO(2) is decomposed according to SO(2) $\supset$ U(1)$_I$ as $$\label{dec-vec} 2= (2)+(-2).$$ This embedding specifies a decomposition of the Weyl spinor representation $\sigma_6$=4 of SO(1,5) according to SO(1,5) $\supset$ SU(2) $\times$ SU(2) $\times$ U(1)$_I$ as $$\sigma_6 = (2,1)(1) + (1,2)(-1),$$ where the SU(2) $\times$ SU(2) representations (2,1) and (1,2) correspond to left-handed and right-handed spinors, respectively. We then decompose $F$ according to $G$ $\supset$ $H'$ $\times$ U(1)$_I$ as $$\label{dec-F} F = \sum_f h_f(r_f).$$ Now the fermion components satisfying the constraints are identified as the $h_f$s whose corresponding $r_f$s in the decomposition Eq. (\[dec-F\]) are (1) for left-handed fermions and (-1) for right-handed fermions. Finally, we show which gauge symmetry and field components remain in four-dimensions by surveying the consistency between the boundary conditions Eq. (\[paripsi\])-(\[boundAph\]), the solutions Eq. (\[kaiAmu\])-(\[kaiAphi\]), and the fermion massless mode Eq. (\[masslessmode\]). We then apply Eq (\[kaiAmu\])-(\[kaiAphi\]) and Eq. (\[masslessmode\]) to Eq. 
(\[paripsi\])-(\[boundAph\]), and obtain the parity conditions $$\begin{aligned} \label{pari-con-Amu} A_{\mu}(x) &= P^{(\prime)}A_{\mu}(x) P^{(\prime)}, \\ \label{pari-con-sca1} -\Phi_1(x) &= -P (-\Phi_1(x)) P , \\ \label{pari-con-sca2} -\Phi_1(x) &= P' (-\Phi_1(x)) P', \\ \label{pari-con-sca3} \Phi_2(x)+ \Phi_3 \cos \theta &= -P \Phi_2(x) P+ P \Phi_3 P \cos \theta, \\ \label{pari-con-sca4} \Phi_2(x) - \Phi_3 \cos \theta &= P' \Phi_2(x) P' - P' \Phi_3 P' \cos \theta, \\ \label{pari-con-psi1} \psi (x) &= \gamma^5 P\psi (x), \\ \label{pari-con-psi2} \psi (x) &= P'\psi (x).\end{aligned}$$ We find that the gauge fields, scalar fields and massless fermions in four dimensions should be even under $P A_{\mu} P$ and $P' A_{\mu} P'$; $-P \Phi_{1,2} P $ and $P' \Phi_{1,2} P'$; $\gamma_5 P \psi$ and $P' \psi$, respectively. $\Phi_3$ always remains since it is proportional to a U(1)$_I$ generator and commutes with $P(P')$. Therefore the particle contents are identified as the components which satisfy both the constraints Eqs. (\[kousoku1\]), (\[kousoku2\]), (\[kousoku3\]) and the parity conditions Eqs. (\[pari-con-Amu\])-(\[pari-con-psi2\]). The gauge symmetry remaining in four dimensions can also be identified by observing which components of the gauge fields remain. The SO(12) model {#SO(12)model} ================ In this section, we discuss a model based on the gauge group $G$=SO(12) and the representation $F$=32 of SO(12) for fermions. The choice of $G$=SO(12) and $F$=32 is motivated by the study based on CSDR which leads to an SO(10) $\times$ U(1) gauge theory with one generation of fermions in four dimensions [@Chapline:1982wy] (for SO(12) GUT see also [@Rajpoot:1981it]). A gauge symmetry and particle contents -------------------------------------- First, we show the particle contents in four dimensions without the parities Eq. (\[paripsi\])-(\[boundAph\]). 
We assume that U(1)$_I$ is embedded into SO(12) as $$SO(12) \supset SO(10) \times U(1)_I.$$ Thus, using Eq. (\[H-condition\]), we identify SO(10) $\times$ U(1)$_I$ as the gauge group which satisfies the constraint Eq. (\[kousoku1\]). We identify the scalar components which satisfy Eq. (\[kousoku2\]) by decomposing the adjoint representation of SO(12): $$\label{dec66-1} SO(12) \supset SO(10) \times U(1)_I: 66 = 45(0) +1(0)+ 10(2) + 10(-2).$$ According to the prescription below Eq. (\[H-condition\]) in sec. \[CSDR\], the scalar components 10(2)+10(-2) remain in four dimensions. We also identify the fermion components which satisfy Eq. (\[kousoku3\]) by decomposing the 32 representation of SO(12) as $$\label{dec32-1} SO(12) \supset SO(10) \times U(1)_I: 32 = 16(1)+ \overline{16}(-1).$$ According to the prescription below Eq. (\[G-dec\]) in sec. \[CSDR\], the fermion components in four dimensions are 16(1) for a left-handed fermion and $\overline{16}$(-1) for a right-handed fermion, respectively. Next, we specify the parity assignment of $P(P')$ in order to identify the gauge symmetry and particle contents that actually remain in four dimensions. We choose a parity assignment so as to break the gauge symmetry as SO(12) $\supset$ SO(10) $\times$ U(1)$_I$ $\supset$ SU(5)$\times$ U(1)$_X$ $\times$ U(1)$_I$ $\supset$ SU(3) $\times$ SU(2)$_L$ $\times$ U(1)$_Y$ $\times$ U(1)$_X$ $\times$ U(1)$_I$, and to maintain the Higgs doublet in four dimensions. 
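The decompositions Eq. (\[dec66-1\]) and Eq. (\[dec32-1\]) can be sanity-checked by dimension counting. The sketch below (an illustrative check, with the U(1)$_I$ charges carried along as labels) verifies that the dimensions add up and that only the charge-$\pm 2$ pieces survive the scalar prescription:

```python
# Dimension count for the SO(12) branchings under SO(10) x U(1)_I.
# adj SO(12): 66 = 45(0) + 1(0) + 10(2) + 10(-2)
adj_so12 = [(45, 0), (1, 0), (10, 2), (10, -2)]
assert sum(d for d, _ in adj_so12) == 12 * 11 // 2  # dim adj SO(N) = N(N-1)/2 = 66

# Weyl spinor: 32 = 16(1) + 16bar(-1)
spinor_so12 = [(16, 1), (16, -1)]
assert sum(d for d, _ in spinor_so12) == 2 ** (12 // 2 - 1)  # dim = 2^(N/2-1) = 32

# Scalar components surviving the constraint carry U(1)_I charge +-2:
higgs_candidates = [(d, q) for d, q in adj_so12 if abs(q) == 2]
assert higgs_candidates == [(10, 2), (10, -2)]
print("branching dimension counts consistent")
```

The same counting applies to any of the parity-refined decompositions below; only the charge bookkeeping changes.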
The parity assignment is written in 32 dimensional spinor basis of SO(12) such as $$\begin{aligned} \label{pari32} SO(12) & \supset SU(3) \times SU(2)_L \times U(1)_Y \times U(1)_X \times U(1)_I \nonumber \\ 32 = & (3,2)^{(+-)}(1,-1,1)+(\bar{3},2)^{(+-)}(-1,1,-1) \nonumber \\ & + (3,1)^{(--)}(4,1,-1)+(\bar{3},1)^{(--)}(-4,-1,1) \nonumber \\ & + (3,1)^{(-+)}(-2,-3,-1)+(\bar{3},1)^{(-+)}(2,3,1) \nonumber \\ & +(1,2)^{(++)}(3,-3,-1)+(1,2)^{(++)}(-3,3,1) \nonumber \\ & + (1,1)^{(--)}(6,-1,1)+(1,1)^{(--)}(-6,1,-1) \nonumber \\ & +(1,1)^{(-+)}(0,-5,1)+(1,1)^{(-+)}(0,5,-1), \end{aligned}$$ where e.g. $(+,-)$ means that the parities $(P, P')$ of the associated components are (even, odd). We find the gauge symmetry in four-dimensions by surveying parity assignment for the gauge field. The parity assignments of the gauge field under $A_{\mu}$ $\rightarrow$ $ PA_{\mu}P(P' A_{\mu} P')$ are: $$\begin{aligned} \label{pari66-1} 66 = & (8,1)^{(++)}(0,0,0)+(1,3)^{(++)}(0,0,0)+(1,1)^{(++)}(0,0,0) \nonumber \\ & +(1,1)^{(++)}(0,0,0)+(1,1)^{(++)}(0,0,0) \nonumber \\ & + \bigl[(3,2)^{(-+)}(-5,0,0)+ (\bar{3},2)^{(-+)}(5,0,0) \nonumber \\ & +(3,2)^{(--)}(1,4,0)+ (\bar{3},2)^{(--)}(-1,-4,0) \nonumber \\ & +(3,1)^{(+-)}(4,-4,0)+ (\bar{3},1)^{(+-)}(-4,4,0) \nonumber \\ & +\underline{(3,1)^{(+-)}(-2,2,2)+ (\bar{3},1)^{(+-)}(2,-2,-2)} \nonumber \\ & +\underline{(3,1)^{(++)}(-2,2,-2)+ (\bar{3},1)^{(++)}(2,-2,2)} \nonumber \\ & +\underline{(1,2)^{(--)}(3,2,2)+ (1,2)^{(--)}(-3,-2,-2)} \nonumber \\ & +\underline{(1,2)^{(-+)}(3,2,-2)+ (1,2)^{(-+)}(-3,-2,2)} \nonumber \\ & +(1,1)^{(+-)}(6,4,0)+ (1,1)^{(+-)}(-6,-4,0) \bigr].\end{aligned}$$ The components with an underline are originated from 10(2) and 10(-2) of SO(10) $\times$ U(1)$_I$, which do not satisfy constraints Eq. (\[kousoku1\]), and hence these components do not remain in four-dimensions. 
Thus the gauge fields remaining in four dimensions are the $(+,+)$ parity components without an underline, and the gauge symmetry is SU(3) $\times$ SU(2)$_L$ $\times$ U(1)$_Y$ $\times$ U(1)$_X$ $\times$ U(1)$_I$. The scalar particle contents in four dimensions are determined by the parity assignment under $\Phi_{1,2}$ $\rightarrow$ $-P \Phi_{1,2} P$ and $P' \Phi_{1,2}P'$: $$\begin{aligned} \label{pari66-2} 66 = & (8,1)^{(-+)}(0,0,0)+(1,3)^{(-+)}(0,0,0)+(1,1)^{(-+)}(0,0,0) \nonumber \\ & +(1,1)^{(-+)}(0,0,0)+(1,1)^{(-+)}(0,0,0) \nonumber \\ & + \bigl[(3,2)^{(++)}(-5,0,0)+ (\bar{3},2)^{(++)}(5,0,0) \nonumber \\ & +(3,2)^{(+-)}(1,4,0)+ (\bar{3},2)^{(+-)}(-1,-4,0) \nonumber \\ & +(3,1)^{(--)}(4,-4,0)+ (\bar{3},1)^{(--)}(-4,4,0) \nonumber \\ & +\underline{(3,1)^{(--)}(-2,2,2)+ (\bar{3},1)^{(--)}(2,-2,-2)} \nonumber \\ & +\underline{(3,1)^{(-+)}(-2,2,-2)+ (\bar{3},1)^{(-+)}(2,-2,2)} \nonumber \\ & +\underline{(1,2)^{(+-)}(3,2,2)+ (1,2)^{(+-)}(-3,-2,-2)} \nonumber \\ & +\underline{(1,2)^{(++)}(3,2,-2)+ (1,2)^{(++)}(-3,-2,2)} \nonumber \\ & +(1,1)^{(--)}(6,4,0)+ (1,1)^{(--)}(-6,-4,0) \bigr]. \end{aligned}$$ Note that the relative sign of the parity assignment of $P$ is different from Eq. (\[pari66-1\]), and that only the underlined parts satisfy the constraint Eq. (\[kousoku2\]). Thus the scalar components in four dimensions are (1,2)(3,2,-2) and (1,2)(-3,-2,2). We find the massless fermion contents in four dimensions by surveying the parity assignment for each component of the fermion fields. We introduce two types of left-handed Weyl fermions that belong to the 32 representation of SO(12), which have the parity assignments $\psi^{(P')}$ $\rightarrow$ $ \gamma_5 P \psi^{(P')}(P' \psi^{(P')})$ and $\psi^{(-P')}$ $\rightarrow$ $\gamma_5 P \psi^{(-P')}(-P' \psi^{(-P')})$, respectively. 
They have the parity assignment as $$\begin{aligned} \label{pari32L-1} 32_L^{(P')} = & \underline{(3,2)^{(--)}(1,-1,1)_L}+(\bar{3},2)^{(--)}(-1,1,-1)_L \nonumber \\ & +\underline{(\bar{3},1)^{(+-)}(-4,-1,1)_L} + (3,1)^{(+-)}(4,1,-1)_L \nonumber \\ & +\underline{(\bar{3},1)^{(++)}(2,3,1)_L} + (3,1)^{(++)}(-2,-3,-1)_L \nonumber \\ & +\underline{(1,2)^{(-+)}(-3,3,1)_L} + (1,2)^{(-+)}(3,-3,-1)_L \nonumber \\ & + \underline{(1,1)^{(+-)}(6,-1,1)_L}+(1,1)^{(+-)}(-6,1,-1)_L \nonumber \\ & +\underline{(1,1)^{(++)}(0,-5,1)_L}+(1,1)^{(++)}(0,5,-1)_L, \\ \label{pari32R-1} 32_R^{(P')} = & (3,2)^{(+-)}(1,-1,1)_R+\underline{(\bar{3},2)^{(+-)}(-1,1,-1)_R} \nonumber \\ & +(\bar{3},1)^{(--)}(-4,-1,1)_R + \underline{(3,1)^{(--)}(4,1,-1)_R} \nonumber \\ & +(\bar{3},1)^{(-+)}(2,3,1)_R + \underline{(3,1)^{(-+)}(-2,-3,-1)_R} \nonumber \\ & +(1,2)^{(++)}(-3,3,1)_R + \underline{(1,2)^{(++)}(3,-3,-1)_R} \nonumber \\ & + (1,1)^{(--)}(6,-1,1)_R+ \underline{(1,1)^{(--)}(-6,1,-1)_R} \nonumber \\ & +(1,1)^{(-+)}(0,-5,1)_R+ \underline{(1,1)^{(-+)}(0,5,-1)_R}, \end{aligned}$$ and $$\begin{aligned} \label{pari32L-2} 32_L^{(-P')} = & \underline{(3,2)^{(-+)}(1,-1,1)_L}+(\bar{3},2)^{(-+)}(-1,1,-1)_L \nonumber \\ & +\underline{(\bar{3},1)^{(++)}(-4,-1,1)_L} + (3,1)^{(++)}(4,1,-1)_L \nonumber \\ & +\underline{(\bar{3},1)^{(+-)}(2,3,1)_L} + (3,1)^{(+-)}(-2,-3,-1)_L \nonumber \\ & +\underline{(1,2)^{(--)}(-3,3,1)_L} + (1,2)^{(--)}(3,-3,-1)_L \nonumber \\ & + \underline{(1,1)^{(++)}(6,-1,1)_L}+(1,1)^{(++)}(-6,1,-1)_L \nonumber \\ & +\underline{(1,1)^{(+-)}(0,-5,1)_L}+(1,1)^{(+-)}(0,5,-1)_L, \\ \label{pari32R-2} 32_R^{(-P')} = & (3,2)^{(++)}(1,-1,1)_R+\underline{(\bar{3},2)^{(++)}(-1,1,-1)_R} \nonumber \\ & +(\bar{3},1)^{(-+)}(-4,-1,1)_R + \underline{(3,1)^{(-+)}(4,1,-1)_R} \nonumber \\ & +(\bar{3},1)^{(-+)}(2,3,1)_R + \underline{(3,1)^{(-+)}(-2,-3,-1)_R} \nonumber \\ & +(1,2)^{(+-)}(-3,3,1)_R + \underline{(1,2)^{(+-)}(3,-3,-1)_R} \nonumber \\ & + (1,1)^{(-+)}(6,-1,1)_R+ \underline{(1,1)^{(-+)}(-6,1,-1)_R} 
\nonumber \\ & +(1,1)^{(--)}(0,-5,1)_R+ \underline{(1,1)^{(--)}(0,5,-1)_R}, \end{aligned}$$ where L (R) denotes left-handedness (right-handedness) of the fermions in four dimensions, and the underlined parts correspond to the components which satisfy the constraint Eq. (\[kousoku3\]). Note the relative sign of the parity assignment of $P$ between left-handed and right-handed fermions, and that of $P'$ between 32$^{(P')}$ and 32$^{(-P')}$. The difference between 32$^{(P')}$ and 32$^{(-P')}$ is allowed because of the bilinear form of the fermion sector. We thus find that the massless fermion components in four dimensions are one generation of SM fermions with a right-handed neutrino: $\{$(3,2)(1,-1,1)$_L$,(3,1)(4,1,-1)$_R$,(3,1)(-2,-3,-1)$_R$,(1,2)(-3,3,1)$_L$,(1,1)(-6,1,-1)$_R$,(1,1)(0,5,-1)$_R$ $\}$. The Higgs sector of the model ----------------------------- We analyze the Higgs sector of our model. The Higgs sector $L_{\textrm{Higgs}}$ consists of the last two terms of Eq. (\[4d-action\]): $$\begin{aligned} L_{\textrm{Higgs}} = &- \frac{1}{2g^2} Tr[D'_{\mu}\Phi_1(x) D'^{\mu} \Phi_1(x)+D'_{\mu}\Phi_2(x) D'^{\mu} \Phi_2(x)] \nonumber \\ &- \frac{1}{2g^2} Tr[(\Phi_3+[\Phi_1(x),\Phi_2(x)])(\Phi_3+[\Phi_1(x),\Phi_2(x)])],\end{aligned}$$ where the first term on the right-hand side is the Higgs kinetic term and the second gives the Higgs potential. We then rewrite the Higgs sector in terms of the genuine Higgs field in order to analyze it. We first note that the $\Phi_i$s are written as $$\Phi_i = i \phi_i = i \phi^a_i Q_a,$$ where the $Q_a$s are generators of the gauge group SO(12), since the $\Phi_i$s originate from the gauge fields $A_{\alpha}=iA_{\alpha}^a Q_a$; for the gauge group generators we assume the normalization Tr($Q_aQ_b$)=-2$\delta_{ab}$. Note that we take $-i \Phi_3$ to be the generator of U(1)$_I$ embedded in SO(12), $$-i \Phi_3 = Q_I.$$ We change the notation of the scalar fields according to Eq. 
(\[SU(2)-dec\]) as $$\phi_+ = \frac{1}{2} (\phi_1+i \phi_2), \quad \phi_- = \frac{1}{2} (\phi_1-i \phi_2),$$ in order to express the solutions of the constraints Eq. (\[kousoku2\]) clearly. The constraints Eq. (\[kousoku2\]) are then rewritten as $$\label{commutator} [Q_I,\phi_+ ] = \phi_+, \qquad [Q_I,\phi_- ] = -\phi_-.$$ The kinetic term $L_{KE}$ and the potential term $V(\phi)$ are rewritten in terms of $\phi_+$ and $\phi_-$: $$\begin{aligned} \label{kinetic} L_{KE} &= -\frac{1}{g^2} Tr[D'_{\mu}\phi_+(x) D'^{\mu} \phi_-(x) ], \\ \label{potential} V &= -\frac{1}{2g^2} Tr[Q_I^2-4Q_I[\phi_+,\phi_- ] +4[\phi_+,\phi_- ][\phi_+,\phi_- ] ],\end{aligned}$$ where the covariant derivative $D'_{\mu}$ is $D'_{\mu}\phi_{\pm} = \partial_{\mu}\phi_{\pm} - [A_{\mu},\phi_{\pm}]$. Next, we change the notation of the SO(12) generators $Q_a$ according to the decomposition Eq (\[pari66-1\]) such that $$\begin{aligned} \label{generators} Q_G = \{ & Q_i , Q_{\alpha}, Q_Y, Q, Q_I, Q_{ax(-500)},Q^{ax(500)} \nonumber \\ & Q_{ax(140)},Q^{ax(-1-40)},Q_{a(4-40)},Q^{a(-440)} \nonumber \\ & Q_{a(-22-2)},Q^{a(2-22)},Q_{a(-222)},Q^{a(2-2-2)} \nonumber \\ & Q_{x(322)},Q^{x(-3-2-2)},Q_{x(32-2)},Q^{x(-3-22)} \nonumber \\ & Q(640),Q(-6-40) \}, \end{aligned}$$ where the order of the generators corresponds to Eq (\[pari66-1\]); the index $i=1,\dots,8$ labels the SU(3) adjoint representation, the index $\alpha=1,2,3$ labels the SU(2) adjoint representation, the index $a=1,2,3$ labels the SU(3) triplet, and the index $x=1,2$ labels the SU(2) doublet. We write $\phi_{\pm}$ in terms of the genuine Higgs field $\phi_x$, which belongs to (1,2)(3,2,-2), as $$\begin{aligned} \label{scalar} \phi_+ = \phi_x Q^{x(-3-22)} \\ \phi_- = \phi^x Q_{x(32-2)}, \end{aligned}$$ where $\phi^x=(\phi_x)^{\dagger}$. We also write the gauge field $A_{\mu}(x)$ in terms of the $Q$s in Eq. 
(\[generators\]) as $$\label{gauge} A_{\mu}(x) = i(A_{\mu}^i Q_i+A_{\mu}^{\alpha} Q_{\alpha}+B_{\mu} Q_Y+C_{\mu} Q+E_{\mu} Q_I).$$ We then need commutation relations of $Q^{x(-3-22)}$, $Q_{x(32-2)}$, $Q_{\alpha}$, $Q_Y$, $Q$ and $Q_I$ in order to analyze the Higgs sector; we summarized them in Table \[commutators\]. -- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------- \[$Q^{x(-3-22)}$,$Q_{y(32-2)}$\] = $-\sqrt{\frac{3}{10}}$ $\delta^x_y$ $Q_Y$ + $-\sqrt{\frac{1}{5}}$ $\delta^x_y$ $Q$ +$\delta^x_y$ $Q_I$ +$\frac{1}{\sqrt{2}}$ $(\sigma^*_{\alpha})^x_y$ $Q_{\alpha}$ \[$Q_{\alpha}$,$Q_x$\] = $-\frac{1}{\sqrt{2}}$ $(\sigma_{\alpha})_x^y$ $Q_y$ \[$Q_{\alpha}$,$Q^x$\] = $\frac{1}{\sqrt{2}}$ $(\sigma^*_{\alpha})^x_y$ $Q^y$ \[$Q_x$,$Q_y$\]=0 \[$Q_Y$,$Q^x$\]= $-\sqrt{\frac{3}{10}}$ $Q^x$ \[$Q$,$Q^x$\]= $-\sqrt{\frac{1}{5}}$ $Q^x$ \[$Q_I$,$Q^x$\] = $Q^x$ -- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------- : commutation relations of $Q^{x(-3-22)}$, $Q_{x(32-2)}$, $Q_{\alpha}$, $Q_Y$, $Q$ and $Q_I$[]{data-label="commutators"} Finally, we obtain the Higgs sector with genuine Higgs field by substituting Eq. (\[scalar\])-(\[gauge\]) into Eq. 
(\[kinetic\], \[potential\]) and rescaling the fields $\phi \rightarrow g/\sqrt{2} \phi$ and $A_{\mu} \rightarrow g A_{\mu}$, with the couplings $g_2 = \sqrt{2}g$ and $g_Y = \sqrt{6/5}\, g$: $$L_{Higgs} = |D_{\mu} \phi_x|^2 - V(\phi),$$ where the covariant derivative $D_{\mu} \phi_x$ and the potential $V(\phi)$ are $$\begin{aligned} D_{\mu} \phi_x &= \partial_{\mu} \phi_x + i g_2 \frac{1}{2} (\sigma_{\alpha})_x^y A_{\alpha \mu} \phi_y + i g_Y \frac{1}{2} B_{\mu} \phi_x + i \sqrt{\frac{1}{5}} g C_{\mu} \phi_x - ig E_{\mu} \phi_x , \\ V &= -\frac{2}{R^2} \phi^x \phi_x + \frac{3g^2}{2} (\phi^x \phi_x)^2,\end{aligned}$$ respectively. Notice that we explicitly write the radius $R$ of $S^2$ in the Higgs potential, and that we omitted the constant term in the Higgs potential. We note that the SU(2)$_L$ $\times$ U(1)$_Y$ part of the Higgs sector has the same form as the SM Higgs sector. Therefore we obtain the electroweak symmetry breaking SU(2)$_L$ $\times$ U(1)$_Y$ $\rightarrow$ U(1)$_{EM}$. The Higgs field $\phi^x$ acquires a vacuum expectation value (VEV) $$\begin{aligned} <\phi> &= \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v \end{pmatrix}, \\ v &= \sqrt{\frac{4}{3}} \frac{1}{g R},\end{aligned}$$ and the W boson mass $m_W$ and the Higgs mass $m_H$ are given in terms of the radius $R$ as $$\begin{aligned} & m_{W} = g_2 \frac{v}{2} = \sqrt{\frac{2}{3}} \frac{1}{R}, \\ & m_H = \sqrt{3}g v = \frac{2}{R}.\end{aligned}$$ The ratio of $m_H$ to $m_W$ is predicted to be $$\frac{m_H}{m_W} = \sqrt{6}.$$ Summary and discussions {#summary} ======================= We analyzed a gauge theory defined on six-dimensional spacetime with an $S^2$ extra space, subject to the symmetry condition and non-trivial boundary conditions, and constructed a model based on an SO(12) gauge theory. 
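The electroweak relations derived above can be checked symbolically. In the sketch below (using sympy), the potential is written as $V(s) = -(2/R^2)\,s + (3g^2/2)\,s^2$ with $s=\phi^x\phi_x$; minimizing it reproduces the VEV, the masses, and the ratio $m_H/m_W=\sqrt{6}$:

```python
import sympy as sp

g, R, s = sp.symbols('g R s', positive=True)

# Higgs potential in terms of s = phi^x phi_x (constant term dropped)
V = -(2 / R**2) * s + sp.Rational(3, 2) * g**2 * s**2

# Minimize: s_min = v^2/2 with <phi> = (0, v/sqrt(2))
s_min = sp.solve(sp.diff(V, s), s)[0]
v = sp.sqrt(2 * s_min)
assert sp.simplify(v - sp.sqrt(sp.Rational(4, 3)) / (g * R)) == 0

# Masses, with g_2 = sqrt(2) g as defined in the rescaling
g2 = sp.sqrt(2) * g
m_W = g2 * v / 2
m_H = sp.sqrt(3) * g * v
assert sp.simplify(m_W - sp.sqrt(sp.Rational(2, 3)) / R) == 0
assert sp.simplify(m_H - 2 / R) == 0
assert sp.simplify((m_H / m_W)**2 - 6) == 0  # m_H / m_W = sqrt(6)
```

This is the standard quartic-potential computation; the model-specific input is only the pair of coefficients $2/R^2$ and $3g^2/2$ fixed by the reduction.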
We first provided the scheme for constructing a four-dimensional theory from a gauge theory on six-dimensional spacetime with extra space $S^2$, subject to the symmetry condition on the gauge field and the non-trivial boundary conditions. We gave the prescriptions for identifying the gauge fields and the scalar fields which satisfy the symmetry condition and the boundary conditions. The fermion sector of the four-dimensional theory is also obtained by expanding the fermions in normal modes and integrating over the $S^2$ coordinates, although its explicit form was not shown. Massive KK modes of the fermions then appear, in contrast to the scalar and gauge fields; these would provide a dark-matter candidate. They may give rise to rich phenomenology in near-future collider experiments. To discuss these matters, we have to find the eigenvalues of Eq. (\[dirac\]). We leave this for future work. We also showed that the fermions can have massless modes because of the existence of a background gauge field. The fermion components which have massless modes are then determined by the background gauge field and the boundary conditions. Note that by imposing the symmetry condition, we can obtain massless fermions. This may indicate the meaning of the symmetry condition: although the energy density of the gauge sector in the presence of the background fields is higher than that without background fields, the existence of massless fermions suggests that the configuration may constitute the total ground state once fermions are included. We then constructed the model based on the SO(12) gauge theory with fermions which lie in a 32 representation of SO(12). We showed that the SU(3) $\times$ SU(2)$_L$ $\times$ U(1)$_Y$ $\times$ U(1)$_X$ $\times$ U(1)$_I$ gauge symmetry remains in four dimensions, and that the SM Higgs doublet is obtained without extra scalar contents appearing. One generation of SM fermions is successfully obtained by introducing two types of fermions which have different parity assignments under $\theta \rightarrow \pi - \theta$. 
We also analyzed the Higgs sector that is obtained from the gauge sector of the six-dimensional gauge theory. The electroweak symmetry breaking is then realized, and the Higgs mass value is predicted. To make our model more realistic, there are several challenges, such as eliminating the extra U(1) symmetries and constructing realistic Yukawa couplings, which are shared with other gauge-Higgs unification models. We can, however, obtain not only the appropriate one-generation fermion fields but also Kaluza-Klein modes. This suggests that we obtain a dark matter candidate in our model. Thus it is very important to study this model further. Acknowledgement {#acknowledgement .unnumbered} =============== This work was supported in part by the Grant-in-Aid from the Ministry of Education, Culture, Sports, Science, and Technology, Government of Japan (No. 19010485, No. 20025001, 20039001, and 20540251). Geometrical quantities on $S^2$ ============================= We summarize the geometrical quantities on $S^2$: the vielbeins $e^a_{\alpha}$, the Killing vectors $\xi^{\alpha}_a$ and the spin connection $R_{\alpha}^{ab}$. The vielbeins are expressed as $$\begin{aligned} e^1_{\theta} &= 1, \nonumber \\ e^2_{\phi} &= \sin \theta, \nonumber \\ e^1_{\phi} &= e^2_{\theta} = 0. \end{aligned}$$ The non-zero components of the spin connection are $$R^{12}_{\phi} = - R^{21}_{\phi} = -\cos \theta.$$ [99]{} N.S. Manton, Nucl. Phys. B 158 (1979) 141. D.B. Fairlie, Phys. Lett. B 82 (1979) 97. D.B. Fairlie, J. Phys. G 5 (1979) L55. L.J. Hall, Y. Nomura, and D.R. Smith, Nucl. Phys. B 639 (2002) 307. G. Burdman and Y. Nomura, Nucl. Phys. B 656 (2003) 3. I. Gogoladze, Y. Mimura, and S. Nandi, Phys. Lett. B 562 (2003) 307. C.A. Scrucca, M. Serone, L. Silvestrini, and A. Wulzer, JHEP 02 (2004) 049. N. Haba, Y. Hosotani, Y. Kawamura, and T. Yamashita, Phys. Rev. D 70 (2004) 015010. C. Biggio and M. Quiros, Nucl. Phys. B 703 (2004) 199. K. Hasegawa, C. S. Lim and N. Maru, Phys. Lett. B 604 (2004) 133. N. Haba, S. 
Matsumoto, N. Okada, and T. Yamashita, JHEP 02 (2006) 073. Y. Hosotani, S. Noda, Y. Sakamura, and S. Shimasaki, Phys. Rev. D 73 (2006) 096006. M. Sakamoto and K. Takenaga, Phys. Rev. D 75 (2007) 045015. C. S. Lim, N. Maru and K. Hasegawa, J.Phys.Soc.Jap.77 (2008) 074101. Y. Hosotani and Y. Sakamura, Prog. Theor. Phys. 118 (2007) 935. Y. Sakamura, Phys. Rev. D 76 (2007) 065002. A.D. Medina, N.R. Shah, and C.E.M. Wagner, Phys. Rev. D 76 (2007) 095010. C.S. Lim and N. Maru, Phys.Lett.B653 (2007) 320-324. Y. Adachi, C.S. Lim, and N. Maru. Phys. Rev. D 76 (2007) 075009. I. Gogoladze, N. Okada, and Q. Shafi, Phys. Lett. B 659 (2008) 316. P. Forgacs and N.S. Manton, Commun. Math. Phys. 72 (1980) 15. D. Kapetanakis and G. Zoupanos, Phys. Rept. 219 (1992) 1. A. Chatzistavrakidis, P. Manousselis, N. Prezas, and G. Zoupanos, Phys. Lett. B 656 (2007) 152. George Douzas, Theodoros Grammatikopoulos and George Zoupanos, arXiv:0808.3236\[hep-th\] A.A. Abrikosov, arXiv:hep-th/0212134. G. Chapline and R. Slansky, Nucl. Phys. B 209 (1982) 461. K. Farakos, D. Kapetanakis, G. Koutsoumbas and G. Zoupanos, Phys. Lett. B211 (1988) 322. D. Kapetanakis and G. Zoupanos, Phys. Lett. B249 (1990) 66. B.E. Hanlon and G.C. Joshi, Phys. Rev. D48 (1993) 2204. T. Jittoh, M. Koike, T. Nomura, J. Sato and T. Shimomura, arXiv:0803.0641\[hep-ph\]. Y. Kawamura, Prog. Theor. Phy. 105 (2001) 999. Y. Kawamura, Prog. Theor. Phy. 105 (2001) 691. S. Randjbar-Daemi and E. Percacci, Phys. Lett. B117 (1982) 41. N.S. Manton, Nucl. Phys. B 193 (1981) 502. S. Rajpoot and Sithikong Phys. Rev. D23 (1981) 1649-1656.
--- abstract: 'In a paper [@H-bitan] we obtained explicit examples of Moishezon twistor spaces of some compact self-dual four-manifolds admitting a non-trivial Killing field, and also determined their moduli space. In this note we investigate minitwistor spaces associated to these twistor spaces. We determine their structure, minitwistor lines and their moduli space, by using a double covering structure of the twistor spaces. In particular, we find that these minitwistor spaces have different properties in many respects, compared to known examples of minitwistor spaces. Moreover, we show that the moduli space of the minitwistor spaces is identified with the configuration space of four distinct points on a circle divided by the standard ${\rm{PSL}}\,(2,\mathbf R)$-action.' author: - Nobuhiro Honda title: Examples of compact minitwistor spaces and their moduli space --- Introduction ============ In [@JT85] P.E. Jones and K.P. Tod established a reduction theory for self-dual 4-manifolds with a non-trivial Killing field. We briefly recall their results. Suppose that a self-dual metric $g$ on a 4-manifold $M$ admits a free isometric ${\rm{U}}(1)$-action. Then the quotient 3-manifold $M/{\rm{U}}(1)$ is naturally equipped with a so-called [*Weyl structure*]{}, which is a pair of a conformal structure (associated to the natural Riemannian metric on $M/{\rm{U}}(1)$) and an affine connection compatible with the conformal structure. As a consequence of the self-duality of $g$, the curvature of the affine connection satisfies a kind of Einstein condition, and the pair becomes an [*Einstein-Weyl structure*]{} in the sense of N.J. Hitchin [@Hi82-1]. Moreover, the function on $M/{\rm{U}}(1)$ obtained by associating to each point the length (with respect to $g$) of the corresponding ${\rm{U}}(1)$-orbit satisfies a certain linear equation, which is called a [*monopole equation*]{}. 
Thus, the Einstein-Weyl condition and the monopole equation can be thought of as the non-linear and linear parts of the self-duality equation, respectively. This construction is invertible. Namely, if a 3-manifold $N$ is equipped with an Einstein-Weyl structure, and if $M{\rightarrow}N$ is a principal ${\rm{U}}(1)$-bundle equipped with a (positive) solution of the monopole equation, then a conformal structure on $M$ is naturally constructed and it becomes self-dual. One can also refer to [@LB93] for details. The last inversion of the reduction theory already produces non-trivial self-dual metrics even if one takes the flat Euclidean 3-space (with the natural Einstein-Weyl structure) and allows a certain kind of singularities for the ${\rm{U}}(1)$-bundle. Namely, the Eguchi-Hanson metric [@EH79] on the cotangent bundle of $\mathbf{CP}^1$ and the Gibbons-Hawking metrics [@GH78] on the minimal resolution of $\mathbf C^2/\Gamma$, where $\Gamma$ is a cyclic subgroup of $\rm{SU}(2)$, are obtained in this way. Later, C. LeBrun [@LB91] successfully applied the inversion construction to the hyperbolic 3-space (again with the natural Einstein-Weyl structure) to realize explicit self-dual metrics on $n\mathbf{CP}^2$, the connected sum of $n$ copies of the complex projective plane. This reduction theory for self-dual metrics can be translated into one for twistor spaces [@JT85] (see also [@LB91; @LB93]). In particular, taking the quotient of a self-dual manifold by a ${\rm{U}}(1)$-action corresponds to taking the quotient of the associated twistor space by the natural holomorphic $\mathbf C^*$-action. Because the $\mathbf C^*$-action can become pathological in general, as seen in [@Hon01 §4], the quotient space of the $\mathbf C^*$-action does not necessarily possess the structure of a complex surface. But if the twistor space is Moishezon, for example, such a pathology does not occur and the quotient space can have a natural structure of a complex surface (with singularities in general). 
In this case, the quotient complex surface is called a [*minitwistor space*]{}. For the Gibbons-Hawking metrics and the LeBrun metrics mentioned above, this is indeed the case; the resulting minitwistor spaces are the total space of the holomorphic tangent bundle of $\mathbf{CP}^1$ for the Gibbons-Hawking metrics, and the product $\mathbf{CP}^1\times\mathbf{CP}^1$ for the LeBrun metrics. Here, an important feature in these two basic examples is that, while the self-dual (or hyper-Kähler) structure actually deforms, the corresponding Einstein-Weyl 3-manifolds, and hence their minitwistor spaces, do not deform. In a paper [@H-bitan] the author explicitly constructed a family of twistor spaces on $3\mathbf{CP}^2$ parametrized by a 3-dimensional connected space. The corresponding self-dual metrics have a non-trivial ${\rm{U}}(1)$-action but are not conformal to the LeBrun metrics. Moreover, the constructed family is complete in the sense that, on $3\mathbf{CP}^2$, every non-LeBrun self-dual metric with ${\rm{U}}(1)$-action is a member of this family, at least if the self-dual metric is supposed to have a positive scalar curvature. The purpose of this note is to investigate minitwistor spaces of these explicit twistor spaces of $3\mathbf{CP}^2$. Our first result is a determination of the structure of these minitwistor spaces; namely we show that the minitwistor spaces have a natural structure of a branched double covering of ${\overline}{\Sigma}_2$, the Hirzebruch surface of degree 2 with the $(-2)$-section contracted, and that the branching divisor is a smooth anticanonical curve which is an elliptic curve disjoint from the node of ${\overline}{\Sigma}_2$ (Theorem \[thm-main1\]). Briefly speaking this is a reflection of the property that our twistor spaces of $3\mathbf{CP}^2$ have a structure of generically 2 to 1 covering of $\mathbf{CP}^3$ branched along a singular quartic surface that is bimeromorphic to an elliptic ruled surface. 
We will see that the branch elliptic curve of the minitwistor space is isomorphic to the base elliptic curve of the branch quartic surface. We also show that the isomorphism class of the branch elliptic curve uniquely determines the complex structure of the minitwistor space, and that the moduli space of our minitwistor spaces can be identified with the configuration space of four distinct points on $\mathbf{RP}^1\simeq S^1$ modulo the natural $\rm{PSL}(2,\mathbf R)$-action (Theorem \[thm-main2\]). In particular, our minitwistor spaces constitute a 1-dimensional moduli space, which contrasts with the case of the Gibbons-Hawking metrics and the LeBrun metrics, where the moduli spaces are a single point. When investigating a twistor space, twistor lines are of fundamental significance. The images of twistor lines in the minitwistor space (under the quotient map) are important as well and are called [*minitwistor lines*]{}. In Section 3, we investigate minitwistor lines in our minitwistor spaces. We prove that a general minitwistor line is a nodal anticanonical curve of the minitwistor space, and that the natural morphism from a twistor line to the minitwistor line gives the normalization of the nodal curve. Thus the situation is quite different from the case of the Gibbons-Hawking metrics and the LeBrun metrics, since in these two cases general minitwistor lines are smooth rational curves which are biholomorphic images of twistor lines. Geometrically, the appearance of the singularity of our minitwistor lines corresponds to the fact that for a general twistor line, there exists a unique $\mathbf C^*$-orbit intersecting the twistor line [*twice*]{}. Finally, we give an account of why such a situation occurs in our (mini)twistor spaces (Lemma \[lemma-rs2\]), comparing with that of LeBrun in [@LB91]. The author would like to express his gratitude to Professors Shin Nayatani and Takashi Nitta for useful conversations. Also, he would like to thank Professor Akira Fujiki for his kind advice. 
[**Notations.**]{} If $Z$ is a twistor space of a self-dual 4-manifold, $F$ denotes the canonical square root of the anticanonical line bundle $-K_Z$. Tensor product of line bundles is denoted additively. The structure of minitwistor spaces and their moduli space ========================================================== First we recall the main results of [@H-bitan] which determine global structure of the moduli space of self-dual metrics on $3\mathbf{CP}^2=\mathbf{CP}^2\#\mathbf{CP}^2\#\mathbf{CP}^2$ satisfying particular conditions. \[prop-moduli1\] ([@H-bitan]) Let $g$ be a self-dual metric on $3\mathbf{CP}^2$ satisfying the following three properties: (i) the scalar curvature of $g$ is positive, (ii) $g$ admits a non-zero killing field, (iii) $g$ is not conformal to the self-dual metrics constructed by LeBrun [@LB91]. Let $Z$ be the twistor space of $[g]$. Then there is a commutative diagram of holomorphic maps $$\label{cd2} \xymatrix{ Z \ar@{->}[r]^{\mu}\ar@{->}[d]_{\Phi} & Z_0 \ar@{->}[dl]^{\Phi_0}\\ \mathbf{CP}^3 & \\ }$$ where $\Phi$ is a map associated to a linear system $|F|$ on $Z$, $\Phi_0:Z_0\to\mathbf{CP}^3$ is a double covering whose branch locus is a quartic surface $B$ with ordinary nodes, $\mu$ is a small resolution of the corresponding ordinary nodes of $Z_0$. Moreover, the defining equation of $B$ is given by $$\label{eqn-B} \{y_2y_3+Q(y_0,y_1)\}^2-y_0y_1(y_0+y_1)(y_0-a y_1)=0,$$ where $a$ is a positive real number and $Q(y_0,y_1)$ is a homogeneous quadratic polynomial with real coefficients satisfying the following condition: > [*($\ast$)*]{} as an equation on $\mathbf{CP}^1=\{(y_0,y_1)\}$, the quartic equation $$Q(y_0,y_1)^2-y_0y_1(y_0+y_1)(y_0-a y_1)=0$$ has a unique real double root. 
Conversely, for any quartic surface (\[eqn-B\]) with $Q$ and $a>0$ satisfying the condition [*($\ast$)*]{}, the double covering of $\mathbf{CP}^3$ branched along $B$ admits a small resolution such that the resulting manifold is a twistor space of a self-dual metric on $3\mathbf{CP}^2$ satisfying (i), (ii) and (iii). A priori, the real double root in the condition $(\ast)$ belongs to one of the two intervals $(-1,0)$ and $(a,\infty)$ on which the quartic $y_0y_1(y_0+y_1)(y_0-a y_1)$ is positive. But as shown in [@H-bitan §5.1] we can always suppose that the double root belongs to the latter interval $(a,\infty)$ by applying a real projective transformation with respect to $(y_0,y_1)$, and in the following we always suppose this. We also recall that the quartic surface $B$ defined by (\[eqn-B\]) satisfying the condition ($\ast$) has exactly three singular points $$P_{\infty}:=(0,0,0,1),\,\,{\overline}{P}_{\infty}:=(0,0,1,0),\,\, P_0:=(\lambda_0,1,0,0),$$ where $(\lambda_0,1)\in\mathbf{CP}^1$ is the real double root (so that $\lambda_0>a$ by the above normalization). The Killing field appearing in Proposition \[prop-moduli1\] generates an isometric ${\rm{U}}(1)$-action of the self-dual metric. This ${\rm{U}}(1)$-action naturally lifts and gives a holomorphic ${\rm{U}}(1)$-action on the twistor space $Z$. Taking the complexification of the last ${\rm{U}}(1)$-action, we obtain a holomorphic $\mathbf C^*$-action on $Z$. This $\mathbf C^*$-action then descends to $\mathbf{CP}^3=\mathbf PH^0(F)^{\vee}$, which was shown to be of the form [@H-bitan Prop. 2.1] $$\label{eqn-action1} (y_0,y_1,y_2,y_3)\longmapsto (y_0,y_1,ty_2,t^{-1}y_3),\,\,\,t\in\mathbf C^*.$$ Of course this $\mathbf C^*$-action leaves the quartic surface $B$ invariant. Note that any orbit of this action is contained in a plane belonging to the pencil $\langle y_0,y_1\rangle$, and that the closure of a general orbit is a conic in these planes. 
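The invariance of $B$ under the action (\[eqn-action1\]) can be seen directly: the defining polynomial (\[eqn-B\]) depends on $y_2$ and $y_3$ only through the product $y_2y_3$. A short symbolic check of this (a sketch, with $q_0,q_1,q_2$ standing for the coefficients of an arbitrary real quadratic $Q$):

```python
import sympy as sp

y0, y1, y2, y3, t, a = sp.symbols('y0 y1 y2 y3 t a')
q0, q1, q2 = sp.symbols('q0 q1 q2')  # generic coefficients of the quadratic Q

Q = q0 * y0**2 + q1 * y0 * y1 + q2 * y1**2
B = (y2 * y3 + Q)**2 - y0 * y1 * (y0 + y1) * (y0 - a * y1)

# Apply (y0, y1, y2, y3) -> (y0, y1, t*y2, t^(-1)*y3)
B_t = B.subs([(y2, t * y2), (y3, y3 / t)], simultaneous=True)
assert sp.simplify(B_t - B) == 0  # the quartic surface B is C*-invariant
```

Since $y_2y_3$ is also one of the invariant quadrics used to define the quotient map below, the same observation underlies the descent of $B$ to the quotient.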
On the other hand, the anti-holomorphic involution of $\mathbf{CP}^3$ naturally induced from the real structure on $Z$ is explicitly given by, again as shown in [@H-bitan Prop. 2.1], $$\label{eqn-realstr} (y_0,y_1,y_2,y_3)\longmapsto ({\overline}{y}_0,{\overline}{y}_1,{\overline}{y}_3,{\overline}{y}_2),$$ where we are using the homogeneous coordinates in Proposition \[prop-moduli1\]. Of course, the $\mathbf C^*$-action and this involution commute. With these preliminary results, we begin to investigate quotient spaces of the twistor spaces with respect to the $\mathbf C^*$-action. First we consider a quotient space of the $\mathbf C^*$-action on $\mathbf{CP}^3$. The rational map $\psi:\mathbf{CP}^3\to\mathbf{CP}^3$ defined by $$\label{quotient1} \psi:(y_0,y_1,y_2,y_3)\longmapsto (z_0,z_1,z_2,z_3)=(y_0^2,y_1^2,y_0y_1,y_2y_3),$$ which is the rational map associated to the linear system formed by $\mathbf C^*$-invariant quadratic polynomials, can be regarded as a quotient map of the $\mathbf C^*$-action, since the general fiber of the map is the closure of a general orbit. The indeterminacy locus of $\psi$ consists of the two points $P_{\infty}$ and ${\overline}P_{\infty}$, which constitute a conjugate pair of points. The image of $\psi$ is easily seen to be the quadratic cone ${\overline}{\Sigma}_2:=\{z_0z_1=z_2^2\}$, which has $(0,0,0,1)$ as its vertex. Of course, ${\overline}{\Sigma}_2\backslash\{(0,0,0,1)\}$ is isomorphic to the total space of $\mathscr O(2)$, where the isomorphism is explicitly given by $$\begin{aligned} \label{isom67} {\overline}{\Sigma}_2\backslash\{(0,0,0,1)\}\ni (z_0,z_1,z_2,z_3)\longmapsto (u,\zeta)=\left(\frac{z_2}{z_1},\frac{z_3}{z_1}\right)\in\mathscr O(2),\end{aligned}$$ where $u$ is an affine coordinate on the base space of the bundle $\mathscr O(2)$ and $\zeta$ is a fiber coordinate there. Then from the formulas above, it is immediate to see the following \[lemma-imB\] Let $\psi$ be as in and let $B$ be the quartic surface in Proposition \[prop-moduli1\].
Then under the isomorphism , the image $\mathscr B:=\psi(B)$ is explicitly given by $$\label{cb} \{\zeta+Q(u,1)\}^2-u(u+1)(u-a)=0.$$ It is obvious from that the projection $\mathscr B\to\mathbf{CP}^1$ (given by $(u,\zeta)\mapsto u$) is a double covering which has $u=0,-1, a$ as simple branch points. Further, since is an equation taking values in the line bundle $\mathscr O(4)$, $u=\infty$ is also a simple branch point. It is also clear that these are all of the branch points. Therefore, $\mathscr B$ is a smooth elliptic curve whose complex structure is determined by $a$. We also note that $\mathscr B$ does not go through the node of ${\overline}{\Sigma}_2$, and that it belongs to the anticanonical class on ${\overline}{\Sigma}_2$. The following result describes the structure of quotient spaces of the twistor spaces in Proposition \[prop-moduli1\] with respect to the $\mathbf C^*$-action: \[thm-main1\] Let $g$, $Z$, $\Phi$ and $B$ be as in Proposition \[prop-moduli1\] and $\psi$, $\mathscr B$ as in Lemma \[lemma-imB\]. Then there exists a commutative diagram of meromorphic maps: $$\label{eqn-cd1} \begin{CD} Z@>{\Psi}>>\mathscr T\\ @V{\Phi}VV @VV{\phi}V\\ \mathbf{CP}^3@>{\psi}>>{\overline}{\Sigma}_2 \end{CD}$$ where $\mathscr T$ is a normal rational surface, $\Psi$ is a surjective rational map, and $\phi$ is a finite double covering map whose branch locus is the curve $\mathscr B$. Moreover, all fibers of $\Psi$ are $\mathbf C^*$-invariant and general fibers are the closures of orbits of the $\mathbf C^*$-action on $Z$. By the last property, $\mathscr T$ can be regarded as a quotient space of the $\mathbf C^*$-action on $Z$. Hence we call the normal rational surface $\mathscr T$ the minitwistor space associated to the twistor space in Proposition \[prop-moduli1\]. Proof of Theorem \[thm-main1\].
Since $\psi$ has indeterminacy at $P_{\infty}$ and ${\overline}P_{\infty}$, the composition $\psi\circ\Phi$ also has indeterminacy along $\Phi^{-1}(P_{\infty})$ and $\Phi^{-1}({\overline}P_{\infty})$. As seen in [@H-bitan], both of the last two sets are chains of 3 smooth rational curves in $Z$. Let $Z'\to Z$ be a sequence of blow-ups which resolves the indeterminacy of the rational map $\psi\circ\Phi$. We may suppose that the images of the exceptional divisors are contained in $\Phi^{-1}(P_{\infty})\cup\Phi^{-1}({\overline}P_{\infty})$. Let $\Phi_0:Z_0{\rightarrow}\mathbf{CP}^3$ be the double covering whose branch locus is $B$ as before. If $x\in{\overline}{\Sigma}_2\backslash\mathscr B$, then $\psi^{-1}(x)$ is a $\mathbf C^*$-invariant conic which is not contained in $B$. Hence $\Phi_0^{-1}(\psi^{-1}(x))$ splits into the closures of two orbits of the $\mathbf C^*$-action on $Z_0$. On the other hand, if $x\in \mathscr B$, then $\psi^{-1}(x)$ is a $\mathbf C^*$-invariant conic contained in $B$. Hence $\Phi_0^{-1}(\psi^{-1}(x))$ is biholomorphic to the conic $\psi^{-1}(x)$. Therefore if $Z'\to\mathscr T\to{\overline}{\Sigma}_2$ is a Stein factorization of the morphism $Z'\to{\overline}{\Sigma}_2$, the latter map $\mathscr T\to{\overline}{\Sigma}_2$ is a double covering whose branch locus is exactly $\mathscr B$. Hence we obtain the commutative diagram . The statement about fibers of $\Psi$ is obvious from the above argument. Further, the singular locus of $\mathscr T$ is exactly the pre-image of the node of ${\overline}{\Sigma}_2$, since the branch curve $\mathscr B$ does not go through the node. Hence $\mathscr T$ is normal. Finally, the rationality of $\mathscr T$ is an immediate consequence of the fact that the pre-images of the lines on the cone ${\overline}{\Sigma}_2$ give a pencil of rational curves on $\mathscr T$.
[$\square$]{} Since the $\mathbf C^*$-action on the twistor space is compatible with the real structure, the minitwistor spaces also have real structures. They are explicitly described as follows: \[prop-rs\] Let $\mathscr T$ be the minitwistor space in Theorem \[thm-main1\] and $\mathscr B\subset{\overline}{\Sigma}_2$ the branch elliptic curve of the double covering $\phi:\mathscr T{\rightarrow}{\overline}{\Sigma}_2$. Consider the real structure on ${\overline}{\Sigma}_2$ given by $$\label{eqn-rs55} (u,\zeta)\longmapsto ({\overline}{u},{\overline}{\zeta}),$$ on the complement $\mathscr O(2)$ of the node of $\,{\overline}{\Sigma}_2$. Then $\mathscr B$ is invariant under this real structure, and the real structure on $\mathscr T$ covers this real structure through the double covering $\mathscr T{\rightarrow}{\overline}{\Sigma}_2$. Proposition \[prop-rs\] can be readily deduced by using the formulas above, and we omit the proof. Another realization of the minitwistor space $\mathscr T$ is given by the following \[cor-mt\] Let $\mathscr T$ be the minitwistor space in Theorem \[thm-main1\]. Then as a complex surface, $\mathscr T$ is obtained from $\mathbf{CP}^1\times\mathbf{CP}^1$ in the following way: Let $p:\mathbf{CP}^1\times\mathbf{CP}^1 {\rightarrow}\mathbf{CP}^1$ be (any) one of the projections, and $A_1$ and $A_2$ any two different sections of $p$ whose self-intersection numbers are zero. Locate 4 points on $A_1\cup A_2$ in such a way that 2 points are on $A_1$ and the remaining 2 points are on $A_2$, and that the image of the 4 points under $p$ is equivalent to $\{-1,0,\infty,a\}$ under a projective transformation of $\mathbf{CP}^1$. Next blow up $\mathbf{CP}^1\times\mathbf{CP}^1$ at these 4 points. Then the strict transforms of $A_1$ and $A_2$ become $(-2)$-curves. Finally, blow down these two curves to obtain a surface having two ordinary nodes. This surface is biholomorphic to the minitwistor space $\mathscr T$. Proof.
Let $\nu:\Sigma_2{\rightarrow}{\overline}{\Sigma}_2$ be the minimal resolution and write $\mathscr B'=\nu^{-1}(\mathscr B)$, which is isomorphic to $\mathscr B$. Let $\mathscr T'{\rightarrow}\Sigma_2$ be the double covering branched along $\mathscr B'$. Then $\mathscr T'$ is the minimal resolution of $\mathscr T$. Consider the composition $\mathscr T'{\rightarrow}\Sigma_2{\rightarrow}\mathbf{CP}^1$, where $\Sigma_2{\rightarrow}\mathbf{CP}^1$ is a projection of a ruling. Since $\mathscr B'$ is 2 to 1 over $\mathbf{CP}^1$, the general fiber of the above composition map is $\mathbf{CP}^1$. Further, since $\mathscr B'$ has 4 branch points, the composition map has precisely 4 singular fibers, each of which consists of two $(-1)$-curves intersecting transversally. If we choose four $(-1)$-curves among the eight in such a way that just two of them intersect one of the exceptional curves of the minimal resolution $\mathscr T'{\rightarrow}\mathscr T$, and that the other two of them intersect another exceptional curve of $\mathscr T'{\rightarrow}\mathscr T$, and if we blow them down, then we obtain a (relatively) minimal surface which must be $\mathbf{CP}^1\times\mathbf{CP}^1$. This implies the claim of the proposition. [$\square$]{} We note that although Proposition \[cor-mt\] gives an explicit construction of the minitwistor space as a complex surface, its real structure can never be obtained through this construction. More precisely, the blowing-down $\mathscr T'{\rightarrow}\mathbf{CP}^1\times\mathbf{CP}^1$ in the above proof does not preserve the real structure. This can be seen, by going back to the twistor space, as follows. Consider the singular fibers of $\mathscr T'{\rightarrow}\mathbf{CP}^1$ in the above proof, which are pairs of $(-1)$-curves intersecting transversally at a point.
Then each of these singular fibers is the image of a reducible member of the linear system $|\Phi^*\mathscr O(1)|=|F|$, where $\Phi:Z{\rightarrow}\mathbf{CP}^3$ is the generically 2 to 1 covering as in Proposition \[prop-moduli1\]. Namely, the $(-1)$-curves are the images of the irreducible components under the quotient map. Since the real structure of $Z$ exchanges the irreducible components, the two $(-1)$-curves in $\mathscr T'$ must be a conjugate pair. Since the blowing-down $\mathscr T'{\rightarrow}\mathbf{CP}^1\times\mathbf{CP}^1$ contracts just one of the $(-1)$-curves for each reducible fiber, it cannot preserve the real structure. The complex structure of our twistor spaces in Proposition \[prop-moduli1\] depends not only on $a>0$ but also on the coefficients of $Q(y_0,y_1)$ in the defining equation of the branch quartic surface $B$. We next show that the complex structure of the minitwistor spaces does not depend on $Q(y_0,y_1)$. Namely we show the following \[prop-moduli2\] Let $\mathscr T$ be the minitwistor space in Theorem \[thm-main1\]. Then the complex structure of $\mathscr T$ is uniquely determined by $a>0$ in the equation . In other words, the complex structure of $\mathscr T$ does not depend on the quadratic polynomial $Q(y_0,y_1)$ in . Proof. Fix $a>0$ and let $Q_1=Q_1(y_0,y_1)$ and $Q_2=Q_2(y_0,y_1)$ be two real homogeneous quadratic polynomials satisfying the condition ($\ast$) in Proposition \[prop-moduli1\]. Let $B_1$ and $B_2$ be the quartic surfaces determined by $(Q_1,a)$ and $(Q_2,a)$ by the equation respectively. Then we can write $Q_1(u,1)-Q_2(u,1)=d_0+d_1u+d_2u^2$ for some $d_0,d_1,d_2\in\mathbf R$.
Using these $d_0,d_1,d_2$ we consider the map $$\begin{aligned} \label{isom23} (u,\zeta)\mapsto (u,\zeta+d_0+d_1u+d_2u^2).\end{aligned}$$ Viewing $(u,\zeta)$ as holomorphic coordinates on the total space of $\mathscr O(2)$ as in the proof of Theorem \[thm-main1\], this map is easily seen to be a holomorphic automorphism of the Hirzebruch surface $\Sigma_2$. Moreover, by , the automorphism maps $\mathscr B_1$ to $\mathscr B_2$, where $\mathscr B_1$ and $\mathscr B_2$ are the images of $B_1$ and $B_2$ under the quotient map from $\mathbf{CP}^3$ to ${\overline}{\Sigma}_2$. Thus we have concretely obtained an isomorphism between the pairs $({\overline}{\Sigma}_2,\mathscr B_1)$ and $({\overline}{\Sigma}_2,\mathscr B_2)$. Thus the double covers $\mathscr T_1$ and $\mathscr T_2$, whose branch loci are $\mathscr B_1$ and $\mathscr B_2$ respectively, are mutually biholomorphic, as desired. [$\square$]{} Note that the isomorphism between the pairs $(\Sigma_2,\mathscr B_1)$ and $(\Sigma_2,\mathscr B_2)$ given in the above proof commutes with the real structure $(u,\zeta)\mapsto({\overline}{u},{\overline}{\zeta})$, since $d_0, d_1$ and $d_2$ in are real. Thus the minitwistor space $\mathscr T$ is uniquely determined by $a$ not only as a complex surface but also as a complex surface with real structure. By Proposition \[prop-moduli2\] we can determine the moduli space of our minitwistor spaces. Let $\mathscr M$ be the moduli space of isomorphism classes of twistor spaces in Proposition \[prop-moduli1\]. As shown in [@H-bitan], $\mathscr M$ is naturally identified with $\mathbf R^3/G$, where $G$ is a reflection of $\mathbf R^3$ having a 2-dimensional fixed locus. Let $\mathscr N$ be the moduli space of isomorphism classes of the associated minitwistor spaces, where the isomorphisms are required to commute with the real structures.
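Both the branch points of the projection $\mathscr B\to\mathbf{CP}^1$ from Lemma \[lemma-imB\] and the pullback computation behind the proof of Proposition \[prop-moduli2\] can be verified symbolically; the following sympy sketch (an illustrative aside with a generic quadratic $Q$, not part of the original argument) checks both:

```python
import sympy as sp

u, zeta, a, q0, q1, q2, d0, d1, d2 = sp.symbols('u zeta a q0 q1 q2 d0 d1 d2')

Q = q0*u**2 + q1*u + q2                    # Q(u, 1), generic
f = (zeta + Q)**2 - u*(u + 1)*(u - a)      # affine equation of the curve B

# Branch points of (u, zeta) -> u are the zeros of the discriminant of f
# as a quadratic in zeta; it equals 4*u*(u+1)*(u-a), independent of Q,
# with simple zeros at u = 0, -1, a.
disc = sp.discriminant(f, zeta)
disc_ok = sp.simplify(disc - 4*u*(u + 1)*(u - a)) == 0

# The automorphism (u, zeta) -> (u, zeta + d0 + d1*u + d2*u**2) of Sigma_2
# pulls the equation for Q back to the one for Q + (d0 + d1*u + d2*u**2),
# so the curves for two choices of Q differing by this quadratic coincide.
Q1 = Q + d0 + d1*u + d2*u**2
f1 = (zeta + Q1)**2 - u*(u + 1)*(u - a)
pullback_ok = sp.expand(f.subs(zeta, zeta + d0 + d1*u + d2*u**2) - f1) == 0

print(disc_ok, pullback_ok)
```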
We have a natural surjective map $\mathscr M{\rightarrow}\mathscr N$ sending each isomorphism class of twistor spaces $Z$ to the isomorphism class of the associated minitwistor space $\mathscr T$. Then it is immediate from Proposition \[prop-moduli2\] to obtain the following \[thm-main2\] Let $\mathscr N$ be the moduli space of isomorphism classes of our minitwistor spaces as explained above. Then $\mathscr N$ is naturally identified with the configuration space of 4 distinct points on a circle, divided by the usual [[PSL]{}]{}$(2,\mathbf R)$-action on the circle. In particular our minitwistor spaces have non-trivial moduli, which contrasts with the known examples such as Gibbons-Hawking's [@GH78] and LeBrun's [@LB91; @LB93]: in these two cases the minitwistor spaces are the total space of $\mathscr O(2)$ and the quadric surface $\mathbf{CP}^1\times\mathbf{CP}^1$ respectively, and therefore do not deform, although the corresponding self-dual (or hyperKähler) metrics on 4-manifolds constitute non-trivial moduli spaces. Description of minitwistor lines ================================ As shown in the previous section, our minitwistor space $\mathscr T$ is a rational surface with two ordinary double points. In this section we investigate minitwistor lines in $\mathscr T$; namely, the images of twistor lines under the (rational) quotient map $\Psi:Z{\rightarrow}\mathscr T$. We investigate these minitwistor lines by using the diagram . A basic fact about twistor lines in our twistor space $Z$ is that the image of a general twistor line under the map $\Phi:Z{\rightarrow}\mathbf{CP}^3$ is a very special kind of conic, called a [*touching conic*]{}, meaning that the conic is tangent to the branch quartic surface $B$ at every intersection point; these consist of 4 points in general [@H-bitan Def. 3.1 and Prop. 3.2].
Hence we first study the images of these touching conics under the (rational) quotient map $\psi:\mathbf{CP}^3{\rightarrow}{\overline}{\Sigma}_2$: \[lemma-mtl1\] Let $\psi:\mathbf{CP}^3{\rightarrow}{\overline}{\Sigma}_2$ be as in Lemma \[lemma-imB\]. Then the image of a general conic in $\mathbf{CP}^3$ under $\psi$ is an anticanonical curve on ${\overline}{\Sigma}_2$ with a unique node. Further, this is true even for general touching conics of $B$, and their images are nodal anticanonical curves which touch the smooth anticanonical curve $\mathscr B$ at 4 points. Proof. Since any conic in $\mathbf{CP}^3$ is contained in some plane, we first study the restriction of $\psi$ to a general plane. Since general orbits of our $\mathbf C^*$-action are conics, the restriction $\psi|_H$ is 2 to 1 for a general plane $H$. Further, by elementary calculations, we can readily see that $\psi|_H$ can be identified with the quotient map of $H=\mathbf{CP}^2$ by a reflection with respect to some line in $H$, where the line is exactly the set of tangent points of $\mathbf C^*$-orbits. Further, the unique isolated fixed point of the reflection is mapped to the node of ${\overline}{\Sigma}_2$. We say that a conic in a plane $H$ is symmetric if it is invariant under the reflection. Then among the complete linear system $|\mathscr O(2)|$ on $H$, the symmetric conics in $H$ form a linear subsystem of codimension 2. It is easily seen that if a conic $C$ is not symmetric, its image is an anticanonical curve in ${\overline}{\Sigma}_2$ which has a unique node, corresponding to the pair of intersection points of $C$ and its reflection image in $H$ which are not on the line. (In contrast, the image of a symmetric conic becomes linearly equivalent to the branch curve of the map $H\to{\overline}{\Sigma}_2$.) Thus we have seen that the image of a conic $C$ under $\psi$ is a nodal anticanonical curve in ${\overline}{\Sigma}_2$, as long as $C$ is not symmetric. This shows the first claim of the lemma.
In order to show that the claim is still true for general touching conics of $B$, it suffices to show that for a smooth plane quartic $B_H$ in $H$ and any one of the 63 one-dimensional families of touching conics of $B_H$ (cf. [@H-bitan Prop. 3.10]), there exists no real line in $H$ for which all of the touching conics in the family become symmetric. This can be proved as follows. Let $\mathscr C$ be any one of the families of touching conics of $B_H$ and suppose that there is a line $l$ in $H$ for which all members of $\mathscr C$ are symmetric. Then by [@H-bitan Lemma 3.9 (ii)] there are precisely 6 reducible members of $\mathscr C$, all of which are of course pairs of bitangents. Since these reducible members must be symmetric with respect to $l$, the intersection point of each pair of bitangents must lie on $l$. Set $A=\Phi^{-1}_H(l)$, where $\Phi_H:S_H{\rightarrow}H $ is the double cover of $H$ whose branch locus is $B_H$. Then since $\Phi_H^*\mathscr O(1)\simeq -K_{S_H}$, $A$ is an anticanonical curve of $S_H$. We show that $A$ is irreducible. For any irreducible member $C\in \mathscr C$, the inverse image $\Phi_H^{-1}(C)$ splits into a sum of two smooth rational curves $F_1$ and $F_2$ satisfying $F_1^2=F_2^2=0$ on $S_H$ ([@H-bitan Lemma 3.8]). On $F_1$ and $F_2$, $\Phi_H$ restricts to an isomorphism onto the image. Let $F$ be any one of $F_1$ and $F_2$ and consider the pencil $|F|$ generated by $F$. Then members of $\mathscr C$ are exactly the images of those of $|F|$ under the double covering map $\Phi_H$. Because $c_1^2(S_H)=2$, $|F|$ has precisely 6 reducible members, and each of these members is mapped biholomorphically to a pair of bitangents that is a reducible member of $\mathscr C$. Therefore, since $l$ goes through all the double points of the 6 reducible members of $\mathscr C$, $A=\Phi_H^{-1}(l)$ goes through all 6 points that are the double points of the reducible members of the pencil $|F|$.
On the other hand, by considering the intersection of $A$ with any one of the reducible members of $|F|$, it immediately follows that $F\cdot A=2$ on $S_H$, and that $A$ is not contained in any fiber of the morphism $h:S_H{\rightarrow}\mathbf{CP}^1$ associated to the pencil $|F|$. Therefore the restriction $h|_A$ is 2 to 1, and the 6 points must be branch points of $h|_A$. Hence $A$ must be irreducible and the 6 points on $S_H$ must be smooth points of $A$. This means that the geometric genus of $A$ must be at least two. This is a contradiction since $A$ is an anticanonical curve of the complex surface $S_H$. Thus we have shown that there exists no line on $H$ with respect to which all members of $\mathscr C$ are symmetric. Combined with what we have proved in the first paragraph of this proof, it follows that $\psi(C)$ is an anticanonical curve having a unique node, for a general touching conic $C$. Since $\psi|_H$ is a local isomorphism away from the line of symmetry on $H$, it follows that $\psi(C)$ still touches the image $\mathscr B=\psi(B)$ at four points. Thus we have proved all the claims of the lemma. [$\square$]{} Using Lemma \[lemma-mtl1\] we show the following \[prop-mtl2\] Let $\Psi:Z{\rightarrow}\mathscr T$ be the (rational) quotient map by the $\mathbf C^*$-action on $Z$ as in Theorem \[thm-main1\]. Then the image of a general twistor line in $Z$ under $\Psi$ is a real anticanonical curve of $\mathscr T$ which has a unique node. In particular, general minitwistor lines in our minitwistor space $\mathscr T$ are not smooth. This contrasts with the case of LeBrun's metrics: in LeBrun's case the minitwistor space is $\mathbf{CP}^1\times\mathbf{CP}^1$ and a general minitwistor line is a real (irreducible) curve of bidegree $(1,1)$, and hence always non-singular. Proof of Proposition \[prop-mtl2\].
As already mentioned, $C=\Phi(L)$ is a touching conic of $B$ for a general twistor line $L$. By Lemma \[lemma-mtl1\] the image $\Gamma:=\psi(C)$ is a nodal anticanonical curve of ${\overline}{\Sigma}_2$. Then by the diagram the minitwistor line $\mathscr L:=\Psi(L)$ is an irreducible component of $\phi^{-1}(\Gamma)$. As before let $\mu:\mathscr T'{\rightarrow}\mathscr T$ and $\nu:\Sigma_2{\rightarrow}{\overline}{\Sigma}_2$ be the minimal resolutions of $\mathscr T$ and ${\overline}{\Sigma}_2$ respectively. Let $\phi':\mathscr T'\to\Sigma_2$ be the natural lift of $\phi$. Define a line in $\mathbf{CP}^3$ by $l_{\infty}:=\{y_0=y_1=0\}$ in the coordinates of Proposition \[prop-moduli1\]. Then $l_{\infty}$ is exactly the fiber of $\psi:\mathbf{CP}^3\to{\overline}{\Sigma}_2$ over the node. The branch locus of $\phi'$ is a smooth anticanonical curve $\mathscr B'=\nu^{-1}(\mathscr B)$. If $L$ is chosen so as to satisfy $\Phi(L)\cap l_{\infty}=\emptyset$, then $\Gamma=\psi(\Phi(L))$ does not go through the node. Hence $\Gamma':=\nu^{-1}(\Gamma)$ is a nodal anticanonical curve of $\Sigma_2$ which is tangent to the branch curve $\mathscr B'$ at 4 points. To prove the proposition, we have to look at the irreducible components of the curve $(\phi')^{-1}(\Gamma')$. It is immediate to see that $(\phi')^*(-K_{\Sigma_2})\simeq -2K_{\mathscr T'}$. Moreover, since $\Gamma'$ is tangent to the branch curve $\mathscr B'$ at every intersection point, $(\phi')^{-1}(\Gamma')$ splits into two irreducible curves $\mathscr L_1$ and $\mathscr L_2$. There are two possible situations: (a) both $\mathscr L_1$ and $\mathscr L_2$ are smooth and the morphisms $\mathscr L_1{\rightarrow}\Gamma'$ and $\mathscr L_2{\rightarrow}\Gamma'$ (which are the restrictions of $\phi'$) are the normalizations of the nodal curve $\Gamma'$; or (b) both $\mathscr L_1$ and $\mathscr L_2$ remain nodal curves and the morphisms $\mathscr L_1{\rightarrow}\Gamma'$ and $\mathscr L_2{\rightarrow}\Gamma'$ are isomorphisms.
We now show that (a) cannot occur for general twistor lines, by contradiction. To this end, recall first that $\mathscr T'$ is realized as the blow-up of $\mathbf{CP}^1\times\mathbf{CP}^1$ at 4 points as in Proposition \[cor-mt\]. In particular we have $(-K_{\mathscr T'})^2=4$ on $\mathscr T'$. On the other hand, as seen in the proof of Proposition \[cor-mt\], the restriction of the projection $\Sigma_2{\rightarrow}\mathbf{CP}^1$ to $\mathscr B'$ has 4 branch points, and consequently the composition $\mathscr T'{\rightarrow}\Sigma_2{\rightarrow}\mathbf{CP}^1$ has precisely 4 singular fibers, each of which is the sum of two smooth rational curves intersecting transversally. Let $E_i+E_i'$, $1\leq i\le 4$, be these 4 reducible fibers. Then the blowing-down $\alpha:\mathscr T'{\rightarrow}\mathbf{CP}^1\times\mathbf{CP}^1$ is obtained by appropriately choosing one of $E_i$ and $E_i'$ for each $1\leq i\leq 4$ and then blowing them down. After renaming if necessary, we may suppose that $E_i$, $1\leq i\leq 4$, are blown down by $\alpha$. Then we can write $$\begin{aligned} \label{ls3} \mathscr L_i=\alpha^*\mathscr O(a_i,b_i)-\sum_{j=1}^4n_{ij}E_j, \,\,i=1,2.\end{aligned}$$ Since $\mathscr L_1+\mathscr L_2=-2K_{\mathscr T'}$, we have $a_1+a_2=b_1+b_2=4$ and $n_{1j}+n_{2j}=2$ for $1\leq j\leq 4$. Moreover, we obviously have $(a_i,b_i)=(1,3), (2,2)$ or $(3,1)$ for $i=1,2$. Now we show that all $n_{ij}$ must be 1. To see this, recall that the linear system $|F|$ on the twistor space has precisely 4 reducible members $\{D_i+{\overline}{D}_i\}_{i=1}^4$ and that all of them are $\mathbf C^*$-invariant. Since the inverse images of fibers of $\mathscr T{\rightarrow}\mathbf{CP}^1$ under the quotient map $\Psi$ are $\mathbf C^*$-invariant members of $|F|$, it follows that the 4 reducible fibers $E_i+E_i'$, $1\le i\le 4$, are the images of $D_i+{\overline}{D}_i$.
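The intersection-theoretic bookkeeping in this argument can be sanity-checked numerically on the Picard lattice of the 4-point blow-up of $\mathbf{CP}^1\times\mathbf{CP}^1$. The sketch below (an illustrative aside, not part of the proof) confirms that $(-K_{\mathscr T'})^2=4$ and that the candidate class $\alpha^*\mathscr O(2,2)-E_1-\cdots-E_4$ is anticanonical with arithmetic genus 1, consistent with a nodal rational curve:

```python
import numpy as np

# Intersection form on the Picard lattice of CP^1 x CP^1 blown up at
# 4 points, in the basis (f1, f2, e1, e2, e3, e4): f1.f2 = 1,
# f1^2 = f2^2 = 0, ei^2 = -1, all other products 0.
G = np.diag([0, 0, -1, -1, -1, -1])
G[0, 1] = G[1, 0] = 1

def dot(x, y):
    return int(np.asarray(x) @ G @ np.asarray(y))

K = [-2, -2, 1, 1, 1, 1]                  # canonical class
L = [2, 2, -1, -1, -1, -1]                # alpha^* O(2,2) - E1 - E2 - E3 - E4

K_sq = dot(K, K)                          # K^2 = (-K)^2 = 4
L_sq = dot(L, L)                          # L^2 = 4
pa = 1 + (dot(L, L) + dot(K, L)) // 2     # arithmetic genus of L

print(K_sq, L_sq, pa, L == [-k for k in K])
```

A nodal irreducible curve in this class has geometric genus 0, in agreement with case (b) above.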
On the other hand, because $D_i\cdot L={\overline}{D}_i\cdot L=1$, a general twistor line intersects both $D_i$ and ${\overline}{D}_i$ transversally. Hence $\mathscr L=\Psi(L)$ intersects both $E_i$ and $E_i'$ ($1\le i\le 4$) for general $L$. Thus, combining with $n_{1j}+n_{2j}=2$, we have $n_{ij}=1$ for all $i$ and $j$. Once this is proved, it readily follows that $(a_1,b_1)=(a_2,b_2)=(2,2)$, since the 4 points blown up by $\alpha$ are located in the way described in Proposition \[cor-mt\], and in particular there are two sections of the projection with self-intersection zero on which 2 of the 4 blown-up points lie. Thus we have $$\begin{aligned} \label{52} \mathscr L_i=\alpha^*\mathscr O(2,2)-E_1-E_2-E_3-E_4 \,\,\text{ for }\,\,i=1,2.\end{aligned}$$ Therefore the above item (a) cannot occur and (b) must hold for a general $L$. Moreover, it is now obvious from that $\mathscr L_1$ and $\mathscr L_2$ are anticanonical curves of $\mathscr T'$. The reality of minitwistor lines is clear since the quotient map $\Psi$ preserves the real structure. Thus we obtain all the claims of the proposition. [$\square$]{} Finally we give another proof of the property that a general minitwistor line in $\mathscr T$ has a node (Proposition \[prop-mtl2\]), and compare with the case of LeBrun twistor spaces \[lemma-rs2\] Consider the natural real structure on $\mathscr T$ which is induced from that on $Z$ (cf. Proposition \[prop-rs\]). Then the real locus of $\mathscr T$ consists of two disjoint 2-dimensional spheres. Moreover, exactly one of the spheres parametrizes $\mathbf C^*$-orbits (in $Z$) whose closures are $\mathbf C^*$-invariant twistor lines. Proof. As in Proposition \[prop-rs\], the real structure on the total space of $\mathscr O(2)$ (which is the smooth locus of ${\overline}{\Sigma}_2$) is given by $(u,\zeta)\mapsto ({\overline}{u},{\overline}{\zeta})$.
Therefore the real locus of ${\overline}{\Sigma}_2$ consists of the closure of the set $\{(u,\zeta){\,|\,}u\in\mathbf R, \zeta\in\mathbf R\}$, which forms a pinched torus, where the pinched point is the node of ${\overline}{\Sigma}_2$. As already seen, the branch locus $\mathscr B$ of the double covering $\phi:\mathscr T{\rightarrow}{\overline}{\Sigma}_2$ is defined by $$\label{eqn-cb2} \{\zeta+Q(u,1)\}^2-u(u+1)(u-a)=0.$$ Hence the real locus of $\mathscr B$ is a union of the two sets given by $$\begin{aligned} \label{rl1} \left\{(u,\zeta)\in\mathscr O(2){\,|\,}-1\le u\le 0,\,\,\zeta=-Q(u,1)\pm\sqrt{u(u+1)(u-a)}\right\}\end{aligned}$$ and $$\begin{aligned} \label{rl2} \left\{(u,\zeta)\in\mathscr O(2){\,|\,}a\le u\le \infty,\,\,\zeta=-Q(u,1)\pm\sqrt{u(u+1)(u-a)}\right\}\end{aligned}$$ where in the last condition we regard $\zeta=0$ if $u=\infty$. The sets and are smooth circles in $\mathscr B$. The sign of the left-hand side of changes across these circles. Since the double covering map $\mathscr T\to{\overline}{\Sigma}_2$ preserves the real structure, the real locus of $\mathscr T$ lies over the real locus of ${\overline}{\Sigma}_2$. This means that the real locus of $\mathscr T$ is either the inverse image of the two closed disks bounded by the circles and , or the inverse image of the complement of the last two disks. But since the two points over the node of ${\overline}{\Sigma}_2$ (which is clearly outside the two circles) are a conjugate pair of points, the former must hold. These two double covers of the closed disks are smooth spheres. Next we see that exactly one of the two spheres parametrizes $\mathbf C^*$-orbits in $Z$ whose closures are $\mathbf C^*$-invariant twistor lines. From the above description, the two spheres lie over the two intervals $[-1,0]$ and $[a,\infty]$ respectively, and every corresponding $\mathbf C^*$-orbit lies over a $\mathbf C^*$-invariant plane (determined by $u$).
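The location of these two intervals can be confirmed by a one-line computation; the sketch below (an illustrative aside, using sympy with the sample value $a=2$; any $a>0$ behaves the same way) solves for where $u(u+1)(u-a)\ge 0$, which is the condition for to have real solutions in $\zeta$:

```python
import sympy as sp

u = sp.symbols('u', real=True)
a = sp.Rational(2)                # illustrative choice of the parameter a > 0
p = u*(u + 1)*(u - a)

# The cubic under the square root must be non-negative for real points.
region = sp.solve_univariate_inequality(p >= 0, u, relational=False)
expected = sp.Union(sp.Interval(-1, 0), sp.Interval(2, sp.oo))
print(region == expected)
```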
Then as shown in [@H-bitan Proposition 5.22], if $-1\le u\le 0$, then every real $\mathbf C^*$-orbit lying on the plane $y_0=uy_1$ must be the image of a $\mathbf C^*$-invariant twistor line. Thus the sphere in $\mathscr T$ lying over $[-1,0]$ parametrizes $\mathbf C^*$-invariant twistor lines. On the other hand, real orbits lying on a plane $y_0=uy_1$ with $u\ge a$ are not the images of twistor lines [@H-bitan Proposition 5.22]. This proves all the claims of the lemma. [$\square$]{} By using Lemma \[lemma-rs2\] we now give another explanation as to why general minitwistor lines in the minitwistor space $\mathscr T$ become singular. Let $\mathscr T_2^{\sigma}$ and $\mathscr T_4^{\sigma}$ be the connected components of the real locus $\mathscr T^{\sigma}$ of $\mathscr T$, where the former and latter lie over the intervals $[-1,0]$ and $[a,\infty]$ in $\mathbf{RP}^1\subset\mathbf{CP}^1$ respectively. (The subscripts $2$ and $4$ come from the notation in [@H-bitan], where we wrote $I_2=[-1,0]$ and $I_4=[a,\infty]$.) As above, both $\mathscr T_2^{\sigma}$ and $\mathscr T_4^{\sigma}$ are 2-spheres smoothly embedded in $\mathscr T$. As explained in the final part of the proof of Lemma \[lemma-rs2\], $\mathscr T_2^{\sigma}$ parametrizes $\mathbf C^*$-orbits in $Z$ whose closures are $\mathbf C^*$-invariant twistor lines, while the $\mathbf C^*$-orbits parametrized by $\mathscr T_4^{\sigma}$ are not (contained in) twistor lines. Let $O\in\mathscr T^{\sigma}_4$ be any point and think of $O$ as a real $\mathbf C^*$-orbit in $Z$. Consider a twistor line $L\subset Z$ which intersects $O$. Then by what we have explained above, $O$ is not contained in $L$, and it follows by reality that $L\cap O$ consists of an even number of points. The last number is 2 since $O$ is contained in some $S\in |F|$ and $S\cdot L=2$.
Then because the orbit map $\Psi:Z{\rightarrow}\mathscr T$ identifies this conjugate pair of points, the image $\Psi(L)$ in $\mathscr T$ must have a singular point at $O\in\mathscr T$. This is precisely the node of the minitwistor line stated in Proposition \[prop-mtl2\]. For each $O\in\mathscr T^{\sigma}_4$, there is obviously a 2-dimensional family of twistor lines intersecting $O$. Moreover, $\mathscr T^{\sigma}_4$, to which $O$ belongs, is also real 2-dimensional. Thus there is a real 4-dimensional family of twistor lines intersecting real orbits in $\mathscr T^{\sigma}_4$. This means that the image of a general twistor line must have a node. In contrast to the situation described in Lemma \[lemma-rs2\], the real locus of the minitwistor space of LeBrun twistor spaces on $n\mathbf{CP}^2$ consists of a unique sphere, and it parametrizes $\mathbf C^*$-orbits whose closures are $\mathbf C^*$-invariant twistor lines. This is the reason why the image of a general twistor line under the orbit map is non-singular for LeBrun's twistor spaces. [99]{} T. Eguchi, A.J. Hanson, [*Asymptotically flat solutions to Euclidean gravity*]{}, Phys. Lett. [**74B**]{} (1978), 249–251. G.W. Gibbons, S.W. Hawking, [*Gravitational multi-instantons*]{}, Phys. Lett. [**78B**]{} (1978), 430–432. N. Hitchin, [*Complex manifolds and Einstein's equations*]{}, Lecture Notes in Math. [**970**]{} (1982), 73–99. N. Honda, [*Equivariant deformations of meromorphic actions on compact complex manifolds*]{}, Math. Ann. [**319**]{} (2001), 469–481. N. Honda, [*Self-dual metrics and twenty-eight bitangents*]{}, J. Differential Geom. [**75**]{} (2007), 175–258. P.E. Jones, K.P. Tod, [*Minitwistor spaces and Einstein-Weyl geometry*]{}, Class. Quantum Grav. [**2**]{} (1985), 565–577. C. LeBrun, [*Explicit self-dual metrics on ${\mathbf{CP}}^2\#\cdots\#{\mathbf{CP}}^2$*]{}, J. Differential Geom. [**34**]{} (1991), 223–253. C.
LeBrun, [*Self-dual manifolds and hyperbolic geometry*]{}, Einstein metrics and Yang-Mills connections (Sanda, 1990), Lecture Notes in Pure and Appl. Math. [**145**]{} (1993), 99–131. $\begin{array}{l} \mbox{Department of Mathematics}\\ \mbox{Graduate School of Science and Engineering}\\ \mbox{Tokyo Institute of Technology}\\ \mbox{2-12-1, O-okayama, Meguro, 152-8551, JAPAN}\\ \mbox{{\tt {honda@math.titech.ac.jp}}} \end{array}$
--- abstract: 'In this short note, we give a characterization of Fréchet spaces via properties of their metric. This allows us to prove that the Hausdorff measure of noncompactness (MNC), defined over Fréchet spaces, is indeed an MNC. As first applications, we lift well-known fixed-point theorems for contractive and condensing operators to the setting of Fréchet spaces.' author: - 'Henning Wunderlich[^1]' bibliography: - 'Frechet-MNC.bib' title: Characterization of Fréchet Spaces and Application to Hausdorff MNC --- Introduction ============ In this short note, we give a characterization of Fréchet spaces via properties of their metric. This allows us to prove that the Hausdorff measure of noncompactness (MNC), defined over Fréchet spaces, is indeed an MNC, which was an open problem before. Recall that the defining property of an MNC is invariance under convex hulls. While the definition of the Hausdorff MNC over Fréchet spaces is well-known, see e.g., [@Akhmerov1992MeasuresON; @AppellVaeth:Funktionalanalysis], a formal proof that it is actually an MNC was only known in the setting of Banach spaces, see e.g., [@MR2059617]. Past research on MNCs in the setting of metric spaces [@talman1977; @10.2307/24894850] required the existence of certain convex structures, based on the work of Takahashi [@takahashi1970]. Their existence in the context of Fréchet spaces was not proven. We prove their existence with the help of the mentioned characterization. We think that our main result, the invariance of the Hausdorff MNC under convex hulls in Fréchet spaces, will find many applications in the future. As first applications, we lift well-known fixed-point theorems for contractive and condensing operators of Darbo and Sadovskiĭ type to the setting of Fréchet spaces. Substantial parts of this work have been taken from a submitted thesis of the author [@Thesis:Wunderlich]. Nevertheless, all relevant content is either presented here or can be found in accessible references.
Characterization of Fréchet Spaces ================================== Recall that a *metric vector space* $(E, d)$ is a vector space $E$ that is also a metric space $(E, d)$, equipped with a *translation-invariant* metric $d$, i.e., $d(x, y) = d(x + z, y + z)$ for all $x, y, z \in E$. We need the following folklore result on such metrics. \[Lemma:AdditiveD\] Let $d$ be a translation-invariant metric. Then for all $x_{1}, x_{2}, y_{1}, y_{2} \in E$, we have $$d( x_{1} + x_{2} , y_{1} + y_{2} ) \leq d( x_{1} , y_{1} ) + d( x_{2} , y_{2} ) \quad.$$ First of all, for all $z_{1}, z_{2} \in E$, we have $$d(0, z_{1} + z_{2}) \leq d(0, z_{1}) + d(z_{1}, z_{1} + z_{2}) = d(0, z_{1}) + d(0, z_{2}) \quad.$$ Then $$\begin{aligned} d( x_{1} + x_{2} , y_{1} + y_{2} ) &= d( 0 , (y_{1} + y_{2}) - (x_{1} + x_{2}) ) = d( 0 , (y_{1} - x_{1}) + (y_{2} - x_{2}) ) \\ &\leq d( 0 , y_{1} - x_{1} ) + d( 0 , y_{2} - x_{2} ) = d( x_{1} , y_{1} ) + d( x_{2} , y_{2} ) \quad.\end{aligned}$$ The translation-invariant metric $d$ of a metric vector space $(E, d)$ induces a uniform topology on $E$, which makes $E$ a [t.v.s.]{}. A [t.v.s.]{} $E$ is called *metrizable*, if there exists a translation-invariant metric $d$ on $E$ inducing the topology of $E$. Recall that a *Fréchet space* is a complete and metrizable [l.c.s.]{}. The property that distinguishes Fréchet spaces among complete and metrizable [t.v.s.]{} is exactly the following. \[Theorem:CharFrechet\] Let $E$ be a complete and metrizable [t.v.s.]{}. Then $E$ is a Fréchet space iff there exists a translation-invariant metric $d$ on $E$ such that for all $x, y \in E$ and $\lambda \in [0, 1]$ we have $$\label{Eq:DStrong} d( \lambda \cdot x, \lambda \cdot y) \leq \lambda \cdot d(x, y) \quad.$$ We modify the proof in [@Schaefer:TVS I.6.1]. There, a pseudonorm $|x|$ is constructed from a base of $0$-neighborhoods $V_{n}$. The metric is then obtained via $d(x, y) = |y - x|$ and vice versa.
As $E$ is an [l.c.s.]{}, we can assume that these $V_{n}$ are not only circled but absolutely convex, and that $2 \cdot V_{n + 1} = V_{n}$. We prove $| 2^{-k} \cdot x| \leq 2^{-k} \cdot |x|$ for arbitrary $x \in E$ and $k \geq 1$. Then by dyadic expansion and the triangle inequality, we obtain $| \lambda \cdot x | \leq \lambda \cdot |x|$ for all real $\lambda \in [0, 1]$. Set $V_{H} := \sum_{n \in H}V_{n}$ for finite $H \subseteq \mathbb{N}$. Since $V_{k + n} = 2^{-k} \cdot V_{n}$, we get $V_{k + H} = \sum_{n \in H} V_{k + n} = \sum_{n \in H} 2^{-k} \cdot V_{n} = 2^{-k} \cdot (\sum_{n \in H}V_{n}) = 2^{-k} \cdot V_{H}$. Hence, $2^{-k} \cdot x \in V_{k + H}$ iff $x \in 2^{k} \cdot V_{k + H}$ iff $x \in V_{H}$. For the numbers $p_{H} := \sum_{n \in H}2^{-n}$ we get $p_{k + H} = 2^{-k} \cdot p_{H}$. Given arbitrary $\epsilon > 0$, choose $H$ such that $x \in V_{H}$ and $p_{H} \leq |x| + \epsilon$. Then $2^{-k} \cdot x \in V_{k + H}$, and hence $| 2^{-k} \cdot x | \leq p_{k + H} = 2^{-k} \cdot p_{H} \leq 2^{-k} \cdot ( |x| + \epsilon)$. Letting $\epsilon \to 0$ yields the claim. The Lebesgue spaces $\mathcal{L}^{p}$ give nice examples showing when the stronger inequality (\[Eq:DStrong\]) holds and when it does not. Let $\lambda \in [0, 1]$. For $1 \leq p \leq \infty$, the space $\mathcal{L}^{p}$ is normed and thus a Fréchet space, and we have $d(\lambda \cdot x, \lambda \cdot y) := \| \lambda \cdot (y - x)\|_{p} = \lambda \cdot d(x, y)$. In contrast, for $0 < p < 1$, the space $\mathcal{L}^{p}$ is only a complete and metrizable [t.v.s.]{}, and not an [l.c.s.]{}. Here, we have $d(\lambda \cdot x, \lambda \cdot y) = \int | \lambda \cdot (y - x) |^{p} = \lambda^{p} \cdot d(x, y) > \lambda \cdot d(x, y)$ for $\lambda \in ]0, 1[$. Application to Hausdorff MNC ============================ An important part of Functional Analysis is concerned with measures of noncompactness, see [@Akhmerov1992MeasuresON] for a systematic exposition of this topic. A measure of noncompactness quantifies the deviation of a bounded subset of a space from being compact.
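As a toy numerical illustration of this idea (our own sketch, not from the paper): a finite random sample in $\mathbb{R}^{2}$ is precompact, so a finite $\epsilon$-net for it can be constructed greedily for every $\epsilon > 0$; the Hausdorff MNC defined next measures exactly for which $\epsilon$ such finite nets exist.

```python
import math
import random

def greedy_eps_net(points, eps):
    """Greedily select net points so that every input point lies within eps of the net."""
    net = []
    for p in points:
        if all(math.dist(p, q) > eps for q in net):
            net.append(p)
    return net

random.seed(7)
M = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(1000)]
for eps in (0.5, 0.1, 0.02):
    net = greedy_eps_net(M, eps)
    # the net is finite and covers M, so eps is admissible in the infimum defining alpha(M)
    assert all(any(math.dist(p, q) <= eps for q in net) for p in M)
```

By construction, every skipped point is within `eps` of an already chosen net point, so the net always covers `M`; since this works for every $\epsilon > 0$, the infimum over admissible $\epsilon$ is $0$ for such a set.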
We remark that this notion carries no information in Montel spaces, where closed bounded sets are already compact. The most general definition is as follows, see also [@Akhmerov1992MeasuresON 1.2.1]: Let $E$ be an [l.c.s.]{}, and let $(Q, \leq)$ be a partially-ordered set. A map $\chi \colon 2^{E} \to Q$ is called a *measure of noncompactness (MNC)*, if for all subsets $A \subseteq E$ we have $\chi(A) = \chi(\overline{\mathrm{co}}(A))$. We note that going beyond [l.c.s.]{} to general [t.v.s.]{} does not make sense, because only for an [l.c.s.]{} is it ensured that the convex hull of a compact set stays compact, see [@Schaefer:TVS II.4.3]. The Hausdorff MNC, $\alpha$, is a typical example. Another example, not treated here, is the Kuratowski MNC, which is actually equivalent to the Hausdorff MNC, see [@Akhmerov1992MeasuresON 1.1.1, 1.1.7]. Let $E$ be a Fréchet space, and let $M \subseteq E$. The *(Hausdorff) measure of noncompactness of $M$* is defined by $$\alpha(M) := \inf \left\{ \epsilon > 0 \mid \textnormal{$M$ has a finite $\epsilon$-net in $E$} \right\} \quad.$$ For some function spaces, explicit formulas for computing the Hausdorff MNC are known, see e.g., [@Akhmerov1992MeasuresON 1.1.9–1.1.13] or [@AppellVaeth:Funktionalanalysis 3.6–3.9]. The Hausdorff MNC, defined over Banach spaces, has the following properties, see e.g., [@MR2059617 Chapter 1, Proposition 1.1]. With literally the same proofs, one can easily show that these properties also hold for the Hausdorff MNC $\alpha$, defined over a Fréchet space $E$. Let $E$ be a Fréchet space. For sets $M, N \subseteq E$, $z \in E$, and $\lambda \in \mathbb{K}$, we have 1. $\alpha(M) \leq \alpha(N)$ for $M \subseteq N$. 2. $\alpha(\overline{M}) = \alpha(M)$. 3. $\alpha(z + M) = \alpha(M)$, i.e., $\alpha$ is translation-invariant. 4. $\alpha(\lambda \cdot M) = |\lambda| \cdot \alpha(M)$, i.e., $\alpha$ is homogeneous. 5. $\alpha(M) = 0$ iff $M$ is precompact. 6.
$|\alpha(M) - \alpha(N)| \leq \alpha(M + N) \leq \alpha(M) + \alpha(N)$. The first inequality only holds in case both subsets are nonempty. 7. $\alpha(M \cup N) = \max\{ \alpha(M), \alpha(N) \}$. 8. $\alpha(B(z, 1)) = 1$, if $E$ is infinite-dimensional, and zero otherwise. 9. If $M_{1} \supseteq M_{2} \supseteq \ldots$ is a decreasing sequence of closed sets in $E$ with $\alpha(M_{n}) \rightarrow 0$ for $n \rightarrow \infty$, then the intersection $M_{\infty} := \bigcap_{n}M_{n}$ is nonempty and compact. Interestingly, this is not the case for the defining property of being an MNC. To the best of our knowledge, it seems to have been an open problem for a long time. The importance of this defining property has been stressed explicitly in [@Akhmerov1992MeasuresON], see the remark above Theorem 1.1.5 there. \[Theorem:MNC\] The Hausdorff MNC $\alpha$, defined over a Fréchet space $E$, is indeed an MNC, i.e., we have $\alpha(\mathrm{co}(M)) = \alpha(M)$ for all $M \subseteq E$. We give a direct proof first, and then discuss the existing literature. As $M \subseteq \mathrm{co}(M)$, we have $\alpha(M) \leq \alpha(\mathrm{co}(M))$ by item 1 of the above proposition. For the other direction, let $N$ be a finite $\eta$-net for $M$, $\eta > 0$. Define $C := \overline{\mathrm{co}}(N)$. We claim that for every $x \in \mathrm{co}(M)$ there exists $z \in C$ with $d(x, z) \leq \eta$. This can be seen as follows. Point $x$ is a convex combination $x = \sum_{i} \lambda_{i} \cdot x_{i}$ with $x_{i} \in M$, $\lambda_{i} \in [0, 1]$, and $\sum_{i} \lambda_{i} = 1$. For each $i$, choose $z_{i} \in N$ with $d(x_{i}, z_{i}) \leq \eta$, and set $z := \sum_{i} \lambda_{i} \cdot z_{i} \in C$. Now, the subtle issue comes: Making use of Lemma \[Lemma:AdditiveD\] in the first and Theorem \[Theorem:CharFrechet\] in the second inequality, we have $$\begin{aligned} d(x, z) &= d\left( \sum_{i} \lambda_{i} \cdot x_{i}, \sum_{i} \lambda_{i} \cdot z_{i} \right) \leq \sum_{i} d\left( \lambda_{i} \cdot x_{i}, \lambda_{i} \cdot z_{i} \right) \\ &\leq \sum_{i} \lambda_{i} \cdot d( x_{i}, z_{i} ) \leq \sum_{i} \lambda_{i} \cdot \eta = (\sum_{i} \lambda_{i}) \cdot \eta = 1 \cdot \eta = \eta \quad. \end{aligned}$$ In addition, the set $C$ is compact, because it is a closed and bounded set in the finite-dimensional space $\mathrm{span}(N)$. As $C$ is compact, for every $\epsilon > 0$, there exists a finite $\epsilon$-net $K$ for $C$. Then $K$ is a finite $(\eta + \epsilon)$-net for $\mathrm{co}(M)$. Concerning MNCs in metric spaces, the book [@Akhmerov1992MeasuresON Section 1.8.1] refers to a paper of Talman [@talman1977], where the Hausdorff MNC is shown to be an MNC, if the underlying metric space has some special convex structure. Recall that a *Takahashi convex structure (TCS)* on a metric space $(E, d)$ is a mapping $W \colon E \times E \times [0, 1] \to E$ such that $$d( u, W(x, y, t) ) \leq t \cdot d(u, x) + (1 - t) \cdot d(u, y)$$ for all $u, x, y \in E$ and $t \in [0, 1]$. For its properties, see e.g., [@takahashi1970; @machado1973; @KUNZI20162]. A metric space, together with such a TCS, is then called a *convex metric space*. Talman refined this notion by going up one dimension. Let us call a *Talman convex structure (T${}_{m}$CS)* on a metric space $(E, d)$ a mapping $K \colon E \times E \times E \times I \to E$, where $I := \{ (\lambda_{1}, \lambda_{2}, \lambda_{3}) \in [0, 1]^{3} \mid \lambda_{1} + \lambda_{2} + \lambda_{3} = 1\}$, such that $$d( u, K(x, y, z, t_{1}, t_{2}, t_{3}) ) \leq t_{1} \cdot d(u, x) + t_{2} \cdot d(u, y)+ t_{3} \cdot d(u, z)$$ for all $u, x, y, z \in E$ and $(t_{1}, t_{2}, t_{3}) \in I$.
If the point $K(x, y, z, t_{1}, t_{2}, t_{3})$ satisfying the above relationship is uniquely determined, then $K$ is called a *strong convex structure (SCS)*. A metric space, together with an SCS, is then called *strongly convex*. Define the following mappings $$\begin{aligned} \tilde{W}(x, y, t) &:= t \cdot x + (1 - t) \cdot y \quad, \\ \tilde{K}(x, y, z, t_{1}, t_{2}, t_{3}) &:= t_{1} \cdot x + t_{2} \cdot y + t_{3} \cdot z \quad.\end{aligned}$$ Let $E$ be a Fréchet space with a given metric $d$ that is translation-invariant and has property (\[Eq:DStrong\]). Then the mapping $\tilde{W}$ is a TCS, and the mapping $\tilde{K}$ is a T${}_{m}$CS. We have $$\begin{aligned} d(u, \tilde{K}(x, y, z, t_{1}, t_{2}, t_{3}) ) &= d(u, t_{1} \cdot x + t_{2} \cdot y + t_{3} \cdot z) \\ &= d(t_{1} \cdot u + t_{2} \cdot u + t_{3} \cdot u, t_{1} \cdot x + t_{2} \cdot y + t_{3} \cdot z) \\ &\leq d(t_{1} \cdot u, t_{1} \cdot x) + d(t_{2} \cdot u, t_{2} \cdot y) + d(t_{3} \cdot u, t_{3} \cdot z) \\ &\leq t_{1} \cdot d(u, x) + t_{2} \cdot d(u, y) + t_{3} \cdot d(u, z) \quad.\end{aligned}$$ The proof for $\tilde{W}$ is analogous. We thus call $\tilde{W}$ and $\tilde{K}$ the *obvious* convex structures for Fréchet spaces. Interestingly, we remark that their existence was *not* obvious in the past. Takahashi [@takahashi1970 Section 2, p.142] stated, without giving a reference or an example: “But a Fréchet space is not necessar\[il\]y a convex metric space.” The insight of our work may be that one can always find the “right” metric (translation-invariant and with property (\[Eq:DStrong\])) such that a TCS exists (the obvious one). Given a metric space $(E, d)$ and a TCS $W$, a subset $C \subseteq E$ is called *$W$-convex*, iff $W(x, y, t) \in C$ for all $x, y \in C$ and $t \in [0, 1]$. A $W$-convex subset $C \subseteq E$ is called *stable*, if $C_{r} := \{ x \in E \mid d(x, C) < r \}$ is also $W$-convex for every $r > 0$.
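As a quick numerical sanity check (ours, not from the paper; we use $\mathbb{R}^{2}$ with the Euclidean metric, a Fréchet-space metric that is translation-invariant and satisfies (\[Eq:DStrong\])), the TCS inequality for $\tilde{W}$ and the T${}_{m}$CS inequality for $\tilde{K}$ can be tested on random samples:

```python
import math
import random

def d(x, y):
    # Euclidean metric on R^2: translation-invariant and positively homogeneous
    return math.dist(x, y)

def W(x, y, t):
    # the obvious Takahashi convex structure: W(x, y, t) = t*x + (1 - t)*y
    return tuple(t * a + (1 - t) * b for a, b in zip(x, y))

def K(x, y, z, t1, t2, t3):
    # the obvious Talman convex structure: K = t1*x + t2*y + t3*z
    return tuple(t1 * a + t2 * b + t3 * c for a, b, c in zip(x, y, z))

random.seed(1)
rnd = lambda: tuple(random.uniform(-5, 5) for _ in range(2))
for _ in range(10_000):
    u, x, y, z = rnd(), rnd(), rnd(), rnd()
    t = random.random()
    s1, s2 = sorted((random.random(), random.random()))
    t1, t2, t3 = s1, s2 - s1, 1.0 - s2   # a random point of the simplex I
    assert d(u, W(x, y, t)) <= t * d(u, x) + (1 - t) * d(u, y) + 1e-9
    assert d(u, K(x, y, z, t1, t2, t3)) <= t1 * d(u, x) + t2 * d(u, y) + t3 * d(u, z) + 1e-9
```

Both inequalities hold here because the Euclidean norm is convex; the point of the theorem above is that the same argument goes through for any Fréchet metric with property (\[Eq:DStrong\]).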
An SCS on $E$ is *stable*, if the set $\{ W(x, y, t) \mid t \in [0, 1] \}$ is stable for every pair $x, y \in E$. For a strongly convex metric space $E$, an SCS $W$ is stable iff every $W$-convex subset of $E$ is stable [@talman1977 Theorem 3.3]. We observe that for a Fréchet space $E$ and its obvious TCS $\tilde{W}$, a subset is convex iff it is $\tilde{W}$-convex. Furthermore, every convex subset is stable. Let $E$ be a Fréchet space with a given metric $d$ that is translation-invariant and has property (\[Eq:DStrong\]). Then every $\tilde{W}$-convex subset of $E$ is stable. Let $C \subseteq E$ be convex, and let $r > 0$. For $x, y \in C_{r}$, there exist $\tilde{x}, \tilde{y} \in C$ such that $d(x, \tilde{x}), d(y, \tilde{y}) < r$. For $t \in [0, 1]$, define $z_{t} := t \cdot x + (1 - t) \cdot y$ and $\tilde{z}_{t} := t \cdot \tilde{x} + (1 - t) \cdot \tilde{y}$, respectively. As $C$ is convex, $\tilde{z}_{t} \in C$. We have $$\begin{aligned} d(z_{t}, \tilde{z}_{t}) &= d( t \cdot x + (1 - t) \cdot y, t \cdot \tilde{x} + (1 - t) \cdot \tilde{y} ) \\ &\leq d( t \cdot x, t \cdot \tilde{x} ) + d( (1 - t) \cdot y, (1 - t) \cdot \tilde{y} ) \\ &\leq t \cdot d(x, \tilde{x}) + (1 - t) \cdot d(y, \tilde{y}) < t \cdot r + (1 - t) \cdot r = r \quad.\end{aligned}$$ Hence, $C_{r}$ is convex. Talman [@talman1977 Theorem 3.7] proved the following. Let $E$ be a strongly convex metric space with a stable SCS. Then for all bounded subsets $M \subseteq E$, we have $\alpha(M) = \alpha(\mathrm{co}(M))$. The only missing piece, which prevents us from applying this theorem to a Fréchet space $E$, is the unclear uniqueness of the obvious T${}_{m}$CS $\tilde{K}$, i.e., is $\tilde{K}$ even an SCS? We have doubts that such uniqueness holds in arbitrary Fréchet spaces. Hence, [@Akhmerov1992MeasuresON Section 1.8.1], just referring to Talman’s paper [@talman1977], does not give a conclusive answer for Fréchet spaces. Matters are different with the paper [@10.2307/24894850] of Gajić.
For a convex metric space $(E, d, W)$, she defines two properties (P) and (Q). Recall that $(E, d, W)$ has *property (P)*, if for all $x_{1}, x_{2}, y_{1}, y_{2} \in E$ and $t \in [0, 1]$, we have $$d( W(x_{1}, y_{1}, t), W(x_{2}, y_{2}, t) ) \leq t \cdot d(x_{1}, x_{2}) + (1 - t) \cdot d(y_{1}, y_{2}) \quad.$$ It has *property (Q)*, if for every finite subset $F \subseteq E$, the $W$-convex hull of $F$, $W$-$\mathrm{co}(F)$, is compact. The following is Theorem 1 in [@10.2307/24894850]. \[Theorem:Gajic\] Let $(E, d, W)$ be a convex metric space with TCS $W$, satisfying properties (P) and (Q). Then for all bounded subsets $M \subseteq E$, we have $\alpha(M) = \alpha(W\textnormal{-}\mathrm{co}(M))$. A Fréchet space $E$, together with a translation-invariant metric $d$ having property (\[Eq:DStrong\]) and the obvious TCS $\tilde{W}$, clearly has properties (P) and (Q). The latter holds, because convex and $\tilde{W}$-convex sets coincide, and the convex hull of a compact set stays compact in every [l.c.s.]{}. Hence, Theorem \[Theorem:Gajic\] gives another proof of Theorem \[Theorem:MNC\]. Fixed-Point Theorems ==================== The property of being invariant under convex hulls is a crucial component in many proofs involving the Hausdorff MNC. As first applications of our result, we prove fixed-point theorems for contractive and condensing operators of Darbo and Sadovskiĭ type. For more information on Fixed-Point Theory, we refer the reader to the opus magnum of Granas and Dugundji [@MR1987179]. Let $E_{1}$ and $E_{2}$ be Fréchet spaces, and let $F \colon E_{1} \to E_{2}$ be a continuous operator. Analogously to the Banach-space setting, the *upper characteristic of noncompactness* is defined by $$\begin{aligned} [F]_{A} &:= \inf \left\{ \gamma > 0 \mid \alpha(F(M)) \leq \gamma\cdot \alpha(M) \textnormal{ for all bounded } M \right\} \quad.\end{aligned}$$ Operator $F$ is called *$\alpha$-contractive*, if $[F]_{A} < 1$.
It is called *condensing*, if $\alpha(F(M)) < \alpha(M)$ for every bounded subset $M \subseteq E$ with $\alpha(M) > 0$. Let $E$ be a Fréchet space, and let $F \colon E \to E$ be $\alpha$-contractive. Let $M \subseteq E$ be a nonempty, convex, closed, and bounded set such that $F(M) \subseteq M$. Then $F$ has a fixed point in $M$, i.e., there exists $x \in M$ with $F(x) = x$. Slightly modify the proof in [@MR2059617 Section 2.3, Theorem 2.1], using Theorem \[Theorem:MNC\] in $$\alpha( M_{ n + 1 } ) = \alpha( \overline{\mathrm{co}} F(M_{n}) ) = \alpha( \mathrm{co} F(M_{n}) ) = \alpha( F(M_{n}) ) \leq \cdots \quad,$$ and applying the Theorem of Schauder-Tychonoff instead of the Theorem of Schauder [@MR1987179 II §7.1, Theorem 1.13]. Let $E$ be a Fréchet space, and let $F \colon E \to E$ be condensing. Let $M \subseteq E$ be a nonempty, convex, closed, and bounded set such that $F(M) \subseteq M$. Then $F$ has a fixed point in $M$. As before, slightly modify the proof in [@MR2059617 Section 2.3, Theorem 2.3], using Theorem \[Theorem:MNC\] in $$\alpha(F(\hat{M})) = \alpha( \overline{\mathrm{co}} F(\hat{M}) ) = \alpha( \Phi(\hat{M})) = \alpha(\hat{M}) \quad,$$ and applying the Theorem of Schauder-Tychonoff instead of the Theorem of Schauder [@MR1987179 II §7.1, Theorem 1.13]. Even more generally, Gajić [@10.2307/24894850 Theorem 2] proved a fixed-point theorem for *condensing* set-valued mappings $F \colon E \to 2^{E}$, i.e., $\alpha(M) > 0$ implies $\alpha(F(M)) < \alpha(M)$ for all bounded sets $M \subseteq E$. Let $(E, d, W)$ be a complete convex metric space with continuous TCS $W$, satisfying properties (P) and (Q). Let $F \colon E \to 2^{E}$ be a set-valued, condensing mapping with $W$-convex values, closed graph, and bounded range. Then $F$ has a fixed point, i.e., there exists $x \in E$ such that $x \in F(x)$. From this, using the properties of the obvious TCS $\tilde{W}$ on a Fréchet space $E$, we immediately obtain Let $E$ be a Fréchet space.
Let $F \colon E \to 2^{E}$ be a set-valued, condensing mapping with convex values, closed graph, and bounded range. Then $F$ has a fixed point. We think that this is just the beginning of lifting known results from Banach to Fréchet spaces. As an outlook, we give two examples of possible future generalizations: First of all, establishing a Nussbaum-Sadovskiĭ degree and its properties [@MR2059617 Section 3.5] for Fréchet spaces, secondly, lifting the FMV and Feng spectra [@MR2059617 Chapters 6 and 7] of Nonlinear Spectral Theory. In the latter case, the defining property of an MNC helps in establishing the closed- and boundedness of these spectra, see e.g., the proofs of Lemma 6.2 and Theorem 7.1 in [@MR2059617]. Acknowledgements {#acknowledgements .unnumbered} ================ First of all, we would like to sincerely thank Prof. Dr. Delio Mugnolo (FernUniversität Hagen) for his valuable advice and support. We would also like to encourage the reader to give us feedback. Any help is appreciated very much! [^1]: Dr. Henning Wunderlich, Frankfurt, Germany. E-Mail: *HenningWunderlich@t-online.de*
[**An Algorithm of a Virtual Quantum Computer Model on a Classical Computer** ]{} *Physics Department, Moscow State Opened University* *Department of Theoretical Physics,\ Russian University of People’s Friendship* *E-mail*: ykamalov@rambler.ru; qubit@mail.ru Construction of virtual quantum states became possible due to the hypothesis on the nature of quantum states, quant-ph/0212139. This study considers a stochastic geometrical background (stochastic gravitational background) generating correlation (or coherency) of various non-interacting quantum objects. In the virtual model of quantum states, a simple method of generating two (or more) dichotomic signals with a controlled mutual correlation factor from a single continuous stochastic process is implemented. Based on the system random number generator of the computer, a model of the stationary random phase (reflecting the nature of the random geometrical background) has been built. Keywords: virtual qubit, virtual quantum computer. INTRODUCTION ============ Simulation of virtual qubits on a classical computer should not encounter any insuperable difficulties, as an operating program for modelling quantum states on a classical computer has already been developed \[1\]. In the present study we shall dwell on a classical computer model of quantum entangled states implemented in Pascal (with Delphi). This is an operating program of a virtual model for two qubits. It simulates a controlled correlation or anti-correlation of EPR-Bohm type, including the total one. As the model considered is a classical one, Bell’s inequalities are not violated within it. The correlation factor of this classical model differs from that of its quantum analog, the EPR state, by a factor of $\sqrt{2}$. The simulated operating model of quantum entangled states and the related discussion simplify the understanding of quantum states and EPR-Bohm correlations.
This enables one to start studying quantum algorithms and programs for quantum computer modelling on a classical computer. Until now, no satisfactory way of modelling quantum state patterns and their interference on a classical computer has been found. The idea of quantum calculation put forward by R. Feynman in \[2, 3\] relies upon the impossibility of calculating quantum patterns on a classical computer for various reasons. At the same time, creation of quantum pattern models with the help of quantum elements would be rather straightforward. In turn, this means that quantum elements must be created. This naturally means that these elements must interfere with each other, their correlation factor being nonzero. Then, alteration of a single element would alter the whole quantum pattern. This property of quantum elements is called quantum parallelism. While the value of an information unit (a bit) in a classical computer is defined as either 0 or 1, in a quantum computer each quantum element is described by a wave function $\psi =\alpha \left| 0\right\rangle +\beta \left| 1\right\rangle $, meaning this element is in a superposition of zero and unity, $\alpha$ and $\beta$ being the complex state amplitudes (with $\left| \alpha \right| ^{2}+\left| \beta \right| ^{2}=1$) and probabilities $P(0)=\left| \alpha \right| ^{2}$, $P(1)=\left| \beta \right| ^{2}$. Construction of virtual quantum states became possible due to the hypothesis on the nature of quantum states \[4\]. This study considers a stochastic geometrical background generating correlation (or coherency) of various non-interacting quantum objects. The area of localization of this background is called the coherence zone. In this area the correlation factor of various quantum microobjects is nonzero.
To explain how this could occur, let us consider a physical model in which the effect of the stochastic geometrical background is represented by a background of stochastic gravitational fields and waves, that is, a physical model with a gravitational background (i.e., a background of gravitational fields and waves). This means that we assume the existence of fluctuations of gravitational waves and fields at each point of space, mathematically represented by metric fluctuations. If we discuss a possible quantization mechanism based on the solution concept of Einstein-de Broglie and use the representation of extended particles as localized self-gravitating structures, we can obtain a self-gravitating solution with a non-resonance quantization mechanism as a result \[5\]. Theoretical investigation of the vacuum accounting for gravitational fields was performed in the works of Academician Andrey D. Sakharov \[6,7\]. In his first paper of 1967, “Vacuum Quantum Fluctuations in the Curved Space and Gravitation Theory”, it is stated that “it is assumed in the modern quantum field theory that the energy-momentum tensor of vacuum quantum fluctuations equals zero, and the respective action $S(0)$ is actually zero”. He further showed that, accounting for the gravitational field in the vacuum and taking into consideration the dependence of the space-time action on curvature in the gravitational theory of A. Einstein (with the invariants of the Ricci tensor $R$ and the metric tensor $g$), the action function takes the form $$S(R)=-\frac{1}{16\pi G}\int(dx)\sqrt{-g}R.$$ The resultant action of all these gravitational fields, labelled by the number $j$, forms the functional $$S_{0}(\psi)=\sum_{j=1}^{\infty}S_{j},$$ $\psi (x)$ being the external field given by the metric tensor $g_{ik}$ of the gravitational field. Let us consider two classical particles in a background of random gravitational fields and waves.
The General Theory of Relativity gives the length element in 4-dimensional Riemann space as $$d\ell ^{2}=g_{ik}dx^{i}dx^{k},$$ and the metric in the linear approximation is $$g_{ik}=\eta_{ik}+h_{ik},$$ $\eta _{ik}$ being the Minkowski metric. Hereinafter, the indices $i,k,\mu ,\nu ,\gamma ,m,n$ take the values 0, 1, 2, 3, and indices encountered twice imply summation. Let us select harmonic coordinates (the condition of harmonicity of the coordinates means the selection of a concomitant frame, $\frac{\partial h_{n}^{m}}{\partial x^{m}}=\frac{1}{2}\frac{\partial h_{m}^{m}}{\partial x^{n}}$) and take into consideration that $h_{\mu \nu }$ satisfies the gravitational field equations $$\square h_{mn}=-16\pi GS_{mn},$$ which follow from the General Theory of Relativity; here $S_{mn}$ is the energy-momentum tensor of the gravitational field sources, $\square$ the d’Alembertian, and $G$ the gravitational constant. Then, the solution acquires the form $$h_{\mu\nu}=e_{\mu\nu}\exp(ik_{\gamma}x^{\gamma})+e_{\mu\nu}^{\ast}\exp(-ik_{\gamma}x^{\gamma}),$$ where $h_{\mu\nu}$ is called the metric perturbation, $e_{\mu\nu}$ the polarization, and $k_{\gamma}$ the 4-dimensional wave vector. We shall assume that the metric perturbations $h_{\mu\nu}$ are distributed in space with an unknown distribution function $\rho=\rho (h_{\mu\nu})$. Relative displacements $\ell$ of two particles in classical gravitational fields are described in the General Theory of Relativity by the deviation equations $$\frac{D^{2}}{D\tau ^{2}}\ell ^{i}(j)=R_{kmn}^{i}(j)\ell ^{m}\frac{dx^{k}}{d\tau }\frac{dx^{n}}{d\tau },$$ $R_{kmn}^{i}(j)$ being the Riemann tensor of gravitational field number $j$ of the stochastic gravitational fields.
Specifically, the deviation equations give the equations for the oscillations of the two particles, $$\ddot{\ell }^{1}+c^{2}R_{010}^{1}\ell ^{1}=0,\quad \omega =c\sqrt{R_{010}^{1}}.$$ The solution of this equation has the form $$\ell ^{1}(j)=\ell _{0}\exp (k_{a}x^{a}+i\omega (j)t),$$ with $a=1,2,3$. To each gravitational field or wave with index $j$ and Riemann tensor $R_{kmn}^{i}(j)$ there corresponds the value $\ell ^{i}(j)$ with a randomly modulated phase $\Phi (j)=\omega (j)t$. If we sum over all fields, we can write $\Phi (t)=\omega (t)t$, where $t$ is the time coordinate. This random phase is the same for various quantum microobjects in the localization area of this coherent background \[8\], this area being defined as the one within which the correlation factor for these particles is nonzero. Harmonic oscillations with phases of this type can model entangled states. MODELLING OF A VIRTUAL QUBIT ON A CLASSICAL COMPUTER ==================================================== In the virtual model of quantum states, a simple method of generating two (or more) dichotomic random signals with a controlled mutual correlation factor out of a single continuous stochastic process is implemented \[1\]. Based on the system random number generator of the computer, a model of the stationary random process $\Phi (t)$ with $\left\langle \Phi (t)\right\rangle =0$ has been built, determining the random phase uniformly distributed over the interval $0\div 2\pi $. Further, a random signal was generated on its basis with the help of the algorithm $$a(\alpha ,t)=sign\left\{ \cos \left[ \Phi (t)+\alpha \right] \right\} ,$$ $\alpha$ being an arbitrary parameter. It follows from this definition that $\left\langle a(\alpha )\right\rangle =0$ and $a(\alpha \pm \pi )=-a(\alpha )$, that is, the signals $a(\alpha )$ and $a(\alpha \pm \pi )$ are anticorrelated. At the same time, $a(\alpha )$ and $a(\alpha \pm \frac{\pi }{2})$ are non-correlated signals.
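These properties of the dichotomic signals are easy to reproduce in a short Monte-Carlo simulation (our own sketch of the algorithm described above, in Python rather than the authors' Pascal/Delphi implementation):

```python
import math
import random

def sign(v):
    return 1 if v >= 0 else -1

def a(alpha, phi):
    """Dichotomic signal a(alpha) = sign{cos[Phi(t) + alpha]} for one phase sample phi."""
    return sign(math.cos(phi + alpha))

def correlation(alpha, delta, n=100_000):
    """Empirical correlation of a(alpha) and a(alpha + delta) over the uniform random phase."""
    total = 0
    for _ in range(n):
        phi = random.uniform(0.0, 2.0 * math.pi)   # Phi uniformly distributed on [0, 2*pi)
        total += a(alpha, phi) * a(alpha + delta, phi)
    return total / n

random.seed(42)
assert correlation(0.3, 0.0) == 1.0               # perfect self-correlation
assert correlation(0.3, math.pi) < -0.999         # a(alpha + pi) = -a(alpha): anticorrelation
assert abs(correlation(0.3, math.pi / 2)) < 0.02  # a(alpha + pi/2) is uncorrelated with a(alpha)
mean = sum(a(0.3, random.uniform(0, 2 * math.pi)) for _ in range(100_000)) / 100_000
assert abs(mean) < 0.02                           # <a(alpha)> = 0
```

Intermediate offsets $\Delta\alpha$ give intermediate correlations, as quantified in the general case discussed next.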
In the general case, the correlation of the signals $a(\alpha )$ and $a(\alpha +\Delta \alpha )$ is \[9\] $$M(\Delta \alpha )=\left\langle a(\alpha )a(\alpha +\Delta \alpha )\right\rangle =1-2\left| \Delta \alpha \right| /\pi .$$ Therefore, out of a single stochastic process $\Phi (t)$ it is possible to generate two (or more) stochastic dichotomic signals $a(\alpha )$ and $a(\alpha +\Delta \alpha )$ with an arbitrary correlation $M(\Delta \alpha )$ between them, confined within $-1$ and $+1$. Hence, should the same effect $\Phi (t)$ \[1\] act on several distant observers, each of them, with the help of an “individual” local parameter $\alpha _{n}$, will be capable of “distantly influencing” the pairwise mutual correlation of the observables. This effect is a classical analog of entangled-state correlations. INITIALIZATION OF VIRTUAL QUBITS ================================ Let us consider $N$ virtual qubits constructed according to the above algorithm. Let us call qubit No. 1 the control, or signal, one. If the signal qubit is green, then we assign it the value $\left| 0\right\rangle $; if it is red, we output nothing. Hence, at each output we shall get $N$ initialized integral qubits. In calculations, red color will correspond to the value $\left| 1\right\rangle $. HADAMARD TRANSFORM FOR VIRTUAL QUBITS ===================================== Let us assume we have a qubit $$\left| q\right\rangle =\frac{1}{\sqrt{2}}(\left| 0\right\rangle +\left| 1\right\rangle );$$ here, for the state $\left| 0\right\rangle $ the probability is $P_{\left| 0\right\rangle }=\frac{1}{2}$, and for the state $\left| 1\right\rangle $ the probability is $P_{\left| 1\right\rangle }=\frac{1}{2}$. However, in the basis rotated by $\frac{\pi }{4}$ the state of the virtual qubit is determined. Let the Hadamard transform $H$ give the state of the qubit in the basis rotated by $\frac{\pi }{4}$.
Then, $$H\left| 0\right\rangle \rightarrow \frac{1}{\sqrt{2}}(\left| 0\right\rangle +\left| 1\right\rangle )$$ $$H\left| 1\right\rangle \rightarrow \frac{1}{\sqrt{2}}(\left| 0\right\rangle -\left| 1\right\rangle ).$$ Hence, having applied the Hadamard transform to the virtual qubit $q$, we get the precisely determined value of the qubit $$H\left| q\right\rangle =\left| 0\right\rangle .$$ LOGICAL COMPONENT $CNOT$ IN A VIRTUAL QUANTUM COMPUTER ====================================================== To apply the logical component $CNOT$, use of the above control qubit is required. The qubit to which the logical operation $CNOT$ is applied will be called the target qubit. The logical value of the control qubit is not altered. Let us consider two cases: 1\. If the control qubit has the value $\left| 1\right\rangle $, then the target qubit is switched into the opposite value. 2\. If the control qubit has the value $\left| 0\right\rangle $, then the value of the target qubit is not altered. CONCLUSION ========== When quantum algorithms are run on a virtual quantum computer emulated on a classical computer, a certain time saving should be achieved in the solution of certain problems. However, comparing the computation speed of real and virtual quantum computers, one could infer, based on the conclusions of Bell \[10\], that the computation speed of a real quantum computer shall be higher than that of a virtual quantum computer. This is related to the fact that the correlation factor for real quantum states, according to Bell’s conclusions, should exceed the respective value for classical models of these quantum states, that is, for virtual quantum states, by a factor of $\sqrt{2}$. On the other hand, presently there are a lot of problems related to implementations, measurements, decoherence, etc.
This gives rise to the question of whether, in the near future, a virtual quantum computer could offer a gain in time over a real quantum computer in running quantum algorithms. The answer could be positive, owing to the host of technical and technological problems with real quantum computers that cannot be solved in the near future. However, even in the contrary case, the virtual model will still find application on classical computers, since it does not require any alteration of the latter but rather extends the capabilities of a classical computer through the special program utilities described above.

[10]{} N. V. Evdokimov, T. F. Kamalov, J. of New Technologies, 2002, n. 6 (in Russian, Moscow State Open University Press). R. Feynman, Int. J. of Theor. Phys., 1982, v. 21, n. 6/7, 467. R. Feynman, Found. of Phys., 1986, v. 16, 507. T. F. Kamalov, J. of Russian Laser Research, 2001, 22, 475. Yu. P. Rybakov, Vestnik RUPF, 1995, n. 3, 130 (in Russian). A. D. Saharov, DAN, 1967, v. 177, n. 2, 70. A. D. Saharov, J. of Theor. and Math. Phys., 1975, v. 23, n. 2, 178. T. F. Kamalov, E-print arXiv, quant-ph/0212139, 26 Dec. 2002. N. V. Evdokimov, D. N. Klishko, V. P. Komolov, V. A. Jarochkin, Uspehy F. N., 1996, v. 166, n. 1, 92. J. S. Bell, Physics, 1964, v. 1, n. 3, 195.
--- abstract: | We report $ISO$ SWS infrared spectroscopy of the H II region Hubble V in NGC 6822 and the blue compact dwarf galaxy I Zw 36. Observations of Br$\alpha$, \[S III\] at 18.7 and 33.5$\mu$m, and \[S IV\] at 10.5$\mu$m are used to determine ionic sulfur abundances in these H II regions. There is relatively good agreement between our observations and predictions of S$^{+3}$ abundances based on photoionization calculations, although there is an offset in the sense that the models overpredict the S$^{+3}$ abundances. We emphasize a need for more observations of this type in order to place nebular sulfur abundance determinations on firmer ground. The S/O ratios derived using the $ISO$ observations in combination with optical data are consistent with values of S/O derived from optical measurements of other metal-poor galaxies. We present a new formalism for the simultaneous determination of the temperature, temperature fluctuations, and abundances in a nebula, given a mix of optical and infrared observed line ratios. The uncertainties in our $ISO$ measurements and the lack of observations of \[S III\] $\lambda 9532$ or $\lambda 9069$ do not allow an accurate determination of the amplitude of temperature fluctuations for Hubble V and I Zw 36. Finally, using synthetic data, we illustrate the diagnostic power and limitations of our new method. author: - 'Joshua G. Nollenberg, Evan D. Skillman' - 'Donald R. Garnett' - 'Harriet L. Dinerstein' title: '$ISO$ SWS OBSERVATIONS OF H II REGIONS IN NGC 6822 AND I ZW 36: SULFUR ABUNDANCES AND TEMPERATURE FLUCTUATIONS' --- Introduction {#intro} ============ Because of their low metallicities [@PE81; @SKH89; @IT99], dwarf irregular and blue compact galaxies can provide valuable information for a wide variety of astrophysical problems. By comparing the low metal abundances found in dwarf galaxies to abundances found in luminous spirals, one can infer variations in star formation histories and chemical evolution.
It is possible to use measurements of abundances in H II regions in dwarf irregular galaxies to establish limits on yields from stellar and big bang nucleosynthesis [@Pe92]. It is also possible to characterize the ionizing radiation from OB stars from measurements of emission lines in H II region spectra without resolving individual stellar spectra [@VP88]. Heavy elements such as C, N, O, Ne, S, and Ar are typically observed in H II regions. In order for accurate abundances to be determined, it is necessary to observe all of the ionization stages present in an H II region, or to have a reliable method for inferring the contribution of unobserved ions. Of the aforementioned elements, oxygen is the only one for which all important ionization stages can be easily observed at optical wavelengths. In the case of sulfur, the primary optical lines are \[S II\] $\lambda\lambda$6717,6731 and \[S III\] $\lambda$6312. However, the \[S III\] $\lambda$6312 line is an extremely weak and temperature sensitive line, making it difficult to measure in many H II regions. Because a significant fraction of the sulfur in an H II region is in the ionization state S$^{+2}$ [@G89], accurate determination of S/H = N(S)/N(H) can be difficult based on optical measurements alone. The \[S III\] $\lambda\lambda$9069,9532 lines in the near-infrared, which are intrinsically much stronger, often require a correction for atmospheric water absorption. In high-ionization nebulae S$^{+3}$, which emits only in the 10.5$\mu$m line, can also become an important constituent. Furthermore, the depth of particular ionization zones, temperature fluctuations ($t^2$), and other variations throughout a nebula can cause optical/UV forbidden line diagnostics to yield temperatures that are larger than ion-weighted average values in photoionization models, and they can weight emissivities toward values found in higher temperature regions of the nebulae [@P67; @G92; @MTP98; @Ee99]. 
These potential problems motivate the use of temperature insensitive mid- and far-infrared forbidden fine structure transitions in order to include optically unobserved ions, accurately determine nebular abundances, and to determine the amplitude and scale of temperature fluctuations inside H II regions (e.g., Dinerstein, Lester & Werner 1985). The infrared portion of the spectrum contains strong emission lines such as \[S III\] 18.7$\mu$m and \[S IV\] 10.5$\mu$m. For faint extragalactic H II regions, the high background flux from earth’s atmosphere precludes ground-based observations of many middle- and far-infrared emission lines at the present time. However, the low background and high sensitivity of the ISO observatory allowed observations to be made of many faint extragalactic sources. In this paper, we present mid-infrared $ISO$ spectra of the \[S III\] 18.7 $\mu$m and 33.5 $\mu$m lines, plus the \[S IV\] 10.5 $\mu$m line, from the H II region Hubble V in NGC 6822 (hereafter referred to as Hubble V), and the blue compact galaxy I Zw 36 (MRK 209; UGCA 281). Our goal is to compare our infrared observations with published optical observations in order to test whether the theoretically predicted ionization correction factors are correct and to determine whether temperature fluctuations are large enough to be detectable in this manner. Observations {#obs} ============ $ISO$ Observations {#iso} ------------------ Observations of Hubble V and I Zw 36 were obtained using the Short Wavelength Spectrometer (SWS) [@dG96] on the 60 cm Infrared Space Observatory ($ISO$) [@K96]. Details of these observations are given in Table 1. The Astronomical Observation Template (AOT) AOT02 was used to measure individual lines in a narrow bandpass with a width of $\Delta\lambda$/$\lambda$ $\approx$ 0.01 [@L97]. 
Although line profiles were oversampled with the SWS, the instrumental profile FWHM of about 150 km s$^{-1}$ is much larger than the intrinsic widths of emission lines from H II regions. The Br$\alpha$ and \[S IV\] 10.5$\mu$m lines were observed through a 14$\arcsec$$\times$20$\arcsec$ aperture, while the \[S III\] 18.7$\mu$m line was measured through a 14$\arcsec$$\times$27$\arcsec$ aperture. We also obtained measurements of \[S III\] 33.5$\mu$m through a 20$\arcsec$$\times$33$\arcsec$ aperture, but these observations had poor signal/noise and were not used. Standard $ISO$ techniques were employed to reduce the data, using the latest photometric calibrations and procedures (SWS ia3) available at the time of the data reduction [@L97]. The following paragraphs illustrate some of the difficulties and challenges involved in the reduction of the $ISO$ data. Each $ISO$ observation begins with a photometric check, which is a detector scan of an internal calibration source. This is followed by a series of dark current scans and integrations on the source. Memory effects, fringing, glitches, and floating dark current levels seriously degraded the quality of the data that were obtained with the SWS. Each of these problems was evident upon preliminary review of the data. Therefore, it was necessary to work with staff scientists and programmers at IPAC to correct the data by using the Interactive Analysis packages that have been developed. The memory effects were primarily due to the measurement of the internal calibration sources at the beginning of each observation. The intensity of the calibrators was high enough to cause the detectors to produce artificially high readings while performing subsequent dark scans and observations. This latent signal that persisted after illumination by bright sources was primarily evident in Detector Bands 1 and 2.
During the Interactive Analysis, memory effects were corrected first, using the [*antimem*]{} IDL routine that was designed specifically for this purpose. The second effect that we corrected for was the variation in dark current readings. At regular intervals while observing an object, the detector would perform a dark current scan. Because of time-dependent and nonlinear detector response effects (e.g., memory effects, changing noise levels, and “glitches”), dark current subtractions by the automated processing pipeline were often inaccurate, especially for very faint continuum sources. The dark scans were corrected by hand using the [*dark\_inter*]{} algorithm. First, each individual dark scan was sigma clipped about the median of the scan using a threshold of $3\sigma$. In severe cases, entire dark scans were thrown out (especially dark scans that suffered from memory effects because they immediately followed a photometric check). Individual scans were also deglitched, as will be discussed later. While making the dark current corrections, it was necessary to determine whether the dark current level, which appeared to jump at random time intervals, actually correlated with the subsequent object scan. In cases where there was bimodality, in which the voltage readout fluctuated between two levels from one readout point to the next on very short timescales, only those points which corresponded to the dark level of the object scans were used. Use of these corrected dark levels resulted in a marked improvement of the quality of dark subtraction. Variations in sensitivity were also a problem in the $ISO$ detectors [@L97]. Generally, these variations were time-dependent and nonlinear. Variations from orbit to orbit were often caused by passage through the Van Allen Radiation Belts, cosmic ray strikes and observation of bright objects. However, small variations could be corrected using the [*spd-rl*]{} routine in the Interactive Analysis package. 
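The median-centered $3\sigma$ clipping applied to the individual dark scans can be illustrated with a short, generic sketch. This is our illustration in Python, not the actual [*dark\_inter*]{} IDL code; the data values are made up.

```python
import statistics

def sigma_clip(scan, nsigma=3.0):
    """One pass of clipping about the median; real pipelines iterate
    until no further points are rejected. Illustrative stand-in only."""
    med = statistics.median(scan)
    sd = statistics.pstdev(scan)
    return [v for v in scan if abs(v - med) <= nsigma * sd]

# twelve well-behaved readouts plus one glitch
scan = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 10.1, 9.9, 470.0]
print(sigma_clip(scan))
```

Clipping about the median rather than the mean matters here: a single glitch drags the mean toward itself, while the median stays anchored to the well-behaved readouts.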
Once the above operations using the Interactive Analysis package were completed, the data were processed further using the $ISO$ Spectral Analysis Package (ISAP). First, glitches flagged by the initial pipeline processing were removed using the algorithm provided in ISAP. Generally, about 30% of the total number of data points were rejected, including some entire scans. Another concern is whether any flux was inadvertently lost or gained due to $ISO$’s nominal $\approx$ 1$^{\prime\prime}$ pointing error. The SWS apertures cover an area larger than I Zw 36, which has an angular extent of roughly $10^{\prime\prime} \times 11^{\prime\prime}$, as well as Hubble V, with a FWHM of $5.5^{\prime\prime}$ [@CHK95]. A thorough review of the current readouts from each observation showed no systematic variation with time, only occasional spikes due to cosmic ray hits and noise. Variations in the average current level would have indicated that the objects were falling in and out of the aperture during the scans. This implies that pointing error or jitter did not cause the objects to fall out of the aperture. The observed $ISO$ line fluxes for I Zw 36 and NGC 6822 are given in Table 2, and plots of the spectra can be found in Figure 1. Estimates of the magnitudes of flux calibration uncertainties were given in Leech (1997) for several of these problems. Uncertainties due to the memory effects were reported to be on the order of $6 - 15\% $ in Band 2 (Br$\alpha$) and $8 - 30\% $ in Band 4 (\[S III\] $18.7\mu m$ and \[S IV\] $10.5\mu m$). Therefore, there is a potential uncertainty of up to nearly $50\% $ in the relative flux calibrations between bands. Furthermore, the Spectral Energy Distributions of standard objects used in flux calibrations are known only to within $4 - 10\% $. These systematic uncertainties are not included in the line flux errors cited in Table 2.
Supporting Observations from the Literature {#ukirt} ------------------------------------------- There exist several sources of published optical data for Hubble V [@Le79; @PES80; @ST89; @M96] and I Zw 36 [@VT83; @ITL97] which allow a comparison of the abundances derived from optical and infrared \[S III\] transitions. Observed values of line ratios relevant to this paper are given in Table 3. A comparison of the abundances derived from these sources with the abundances derived from our $ISO$ observations can be found in §3.3. Of the Hubble V observations, only those of Lequeux et al. (1979) include a measurement of the \[S III\] $\lambda 6312$ line through a large aperture. We therefore adopt these optical observations for comparison with our infrared measurements. Note that the reddening corrections for the infrared emission line ratios used in this paper are negligible. Data Analysis ============= Electron Temperatures and Densities ----------------------------------- Prior to determining the ionic abundances from the data, electron temperatures were computed using a combination of existing published optical diagnostic data and updated atomic data from [@PP95]. In the case of Hubble V, four sources had \[O III\] data: Pagel & Edmunds (1981), Lequeux et al. (1979), Skillman, Terlevich, & Melnick (1989), and Miller (1996). From these sources, the diagnostic ratio $$R(O~III) = \frac{I(\lambda4959) + I(\lambda5007)}{I(\lambda4363)}$$ was determined. These line ratios and the derived electron temperatures are given in Table 4. Adopting the Lequeux et al. (1979) data, the electron temperature for Hubble V was determined to be $T_e$ = 11,200$\pm$1,100 K. This value is consistent, within stated errors, with the other observations and was adopted for the calculation of emissivities in the abundance calculations. Viallefond & Thuan (1983) and Izotov, Thuan & Lipovetsky (1997) have reported optical spectroscopy for I Zw 36. 
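As an aside, the inversion of $R$(O III) to an electron temperature can be sketched in the low-density limit with the familiar textbook approximation $R = 7.90\,\exp(3.29\times10^4/T_e)$ (Osterbrock 1989). This is illustrative only; the temperatures quoted in this paper were derived with the updated Pradhan & Peng (1995) atomic data, and the constants below are not taken from this paper.

```python
import math

# Low-density-limit [O III] diagnostic, R = 7.90 * exp(3.29e4 / T_e)
# (Osterbrock 1989 form; illustrative, not this paper's atomic data).
def R_of_T(T_e):
    return 7.90 * math.exp(3.29e4 / T_e)

def T_of_R(R):
    # invert the diagnostic for the electron temperature
    return 3.29e4 / math.log(R / 7.90)

# round trip at a temperature typical of Hubble V
print(T_of_R(R_of_T(11_200.0)))
```

Note that $R$ decreases steeply with $T_e$, which is why a weak, temperature-sensitive auroral line such as $\lambda$4363 controls the precision of the derived temperature.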
Using the emission line fluxes from Viallefond & Thuan (1983), we obtain an electron temperature of 14,600$\pm$500 K, a result which corresponds closely to that given by Viallefond & Thuan (1983): $T_e$ = 14,500$\pm$1,300 K. However, using fluxes from Izotov, Thuan & Lipovetsky (1997) with smaller relative observational errors, we obtained a value of $T_e$ = 16,180$\pm$70 K, which we adopt. Photoionization models indicate that the electron temperature is not uniform across an H II region, but can be different for low-ionization and high-ionization zones, depending on metallicity (Garnett 1992). We use the formulation given in Garnett (1992) to estimate electron temperatures for the O$^+$ and S$^{+2}$ zones, based on the derived \[O III\] temperature. We derive T(O$^+$) = 10,800$\pm$1,200 K for Hubble V and T(O$^+$) = 14,330$\pm$50 K for I Zw 36, while T(S$^{+2}$) = 11,000$\pm$1,200 K and 15,130$\pm$50 K for Hubble V and I Zw 36, respectively. Since the temperatures in the low ionization zones are not observed directly, but rather are derived from photoionization models, we assume a lower limit of 500 K on the uncertainty when deriving abundances. Published spectroscopic results for these two regions give low electron densities, $n_e$ $<$ 200 cm$^{-3}$ based on \[S II\] line ratios. These densities are too small to cause significant collisional de-excitation of the \[S III\] and \[S IV\] lines. Because of the poor signal/noise in the 33.5$\mu$m observations, these infrared \[S III\] line measurements do not provide meaningful constraints on $n_e$. Abundance Calculations ---------------------- Using the values of electron temperature derived in the previous section, we computed ionic abundances using a 5-level atom for O$^+$, O$^{+2}$, S$^+$, and S$^{+2}$ from the published optical emission line data for Hubble V and I Zw 36. Collision strengths for \[O II\], \[O III\], and \[S II\] transitions were taken from the compilation of Pradhan & Peng (1995). 
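The zone-temperature scalings of Garnett (1992) used above are simple linear relations in $t = T_e/10^4$ K; as we read them off that paper, $t(\mathrm{O}^+) = 0.70\,t(\mathrm{O}^{+2}) + 0.30$ and $t(\mathrm{S}^{+2}) = 0.83\,t(\mathrm{O}^{+2}) + 0.17$. The short script below (ours, for illustration) reproduces the zone temperatures quoted in the text to within rounding.

```python
# Garnett (1992) zone-temperature scalings, with temperatures in K.
def T_Oplus(T_O3):
    return 1e4 * (0.70 * T_O3 / 1e4 + 0.30)

def T_S2plus(T_O3):
    return 1e4 * (0.83 * T_O3 / 1e4 + 0.17)

# adopted [O III] temperatures for the two objects (from the text)
for name, T_O3 in (("Hubble V", 11_200.0), ("I Zw 36", 16_180.0)):
    print(name, round(T_Oplus(T_O3)), round(T_S2plus(T_O3)))
```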
Collision strengths for \[S III\] were taken from the 27-state R-matrix calculation of Tayal & Gupta (1999), while for \[S IV\] we took the results of the 24-state calculation of Tayal (2000). Our computed abundances are listed in Table 5. We note that the new values of the effective collision strengths for the \[S III\] infrared fine-structure transitions represent an increase of $\approx$ 30% over those of [@GMZ95]. S$^{+3}$ can represent a substantial fraction of the total sulfur abundance in many H II regions (e.g., Lester, Dinerstein, & Rank 1979; Pipher et al. 1984). [@G89] (see also Garnett et al. 1997) showed that there is a sharp decrease in the observed ratio (S$^{+} +$S$^{+2}$)/O for O$^+$/O $<$ 0.25, indicating an increasing contribution from S$^{+3}$ in high ionization H II regions (where ionization is parameterized by O$^{+}$/O). However, the contribution of S$^{+3}$ often remains uncertain because \[S IV\] has no optical transitions; the only line in the ground configuration is at a wavelength of 10.51$\mu$m. In most cases photoionization models are used to estimate the S$^{+3}$ contribution. [@D80] demonstrated that sulfur ionization correction factors based on coincidences in ionization potentials greatly overpredict the actual amount of S$^{+3}$ in high-ionization planetary nebulae. However, the accuracy of ionization corrections based on photoionization models is largely untested for H II regions, since very few observations exist for \[S III\] and \[S IV\] in comparable beam sizes. Therefore, an important aspect of this study is the inclusion of \[S IV\] in the estimation of the total nebular sulfur abundance. Here, we determine S$^{+2}$ and S$^{+3}$ abundances from our $ISO$ observations of the infrared fine structure lines \[S III\] 18.7$\mu$m and \[S IV\] 10.5$\mu$m. We normalized the IR fine-structure lines to our Br$\alpha$ measurements also made with the SWS. 
Because the extinction coefficients are low in the IR and even the optical extinction to these two targets is low, the reddening corrections are negligible. Given the nebular physical conditions derived in Section 3.1, we computed the S$^{+2}$ and S$^{+3}$ abundances listed in Table 5. We used only the 18.7$\mu$m line to derive the S$^{+2}$ abundance, because of the low signal/noise for our 33.5$\mu$m measurements. Theoretical models and observational studies have indicated that the S/O ratio remains constant with respect to the oxygen abundance, O/H, regardless of metallicity [@Fr88; @TPF89; @G89; @IT99]. In Figure 2 we plot our newly derived S/O values for Hubble V and I Zw 36, along with values for other objects obtained from the literature, vs. O/H. Figure 2 shows that both I Zw 36 and Hubble V have S/O values similar to those of other metal-poor H II regions. This result tends to support the validity of the abundance ratios derived from optical spectroscopy.

Photoionization Models
----------------------

Current photoionization models are limited in accuracy because of uncertainties in input parameters such as stellar ionizing flux distributions. [@VP88] proposed the use of the ratio $$\eta = \frac{O^{+}/O^{+2}}{S^{+}/S^{+2}}$$ as an indicator of the “hardness” of the photoionizing radiation field inside a nebula, which can be used to infer the effective temperature ($T_*$) of the ionizing cluster. Garnett (1989) showed that this is indeed a useful estimator of relative values of $T_*$ in nebulae by constructing photoionization models using different stellar atmosphere flux distributions. However, it may not be possible to derive absolute values of $T_*$ from $\eta$, and it begins to lose its sensitivity above $T_*$ $\sim$ 45,000 K, as shown by Skillman (1989). Values of $\eta$ for each nebula were calculated using the O/H ratios as well as the (S$^{+}+$S$^{+2}$)/S values from Table 5 and are also listed there.
Using Figure 6 of Garnett (1989) and Figure 1 of [@VP88], we find that the $\eta$ parameter for Hubble V is consistent with $T_*$ $\approx$ 45,000 K (using Hummer & Mihalas 1970 LTE model atmospheres), while the $\eta$ parameter for I Zw 36 is consistent with $T_*$ greater than 50,000 K. This is within the range of values of $\eta$ for other giant extragalactic H II regions and consistent with excitation by a mixture of hot O and B type stars (cf. Garnett 1989). In the specific case of Hubble V, Bianchi et al. (2001) reproduce the H-R diagram for the most luminous stars in OB 8, the stellar association powering Hubble V, and it appears that 45,000 K is a conservative upper limit to the effective temperatures of the most massive stars in this association. Early studies of sulfur abundances in H II regions noted that neglecting the contribution of S$^{+3}$ would result in an underestimate of the total sulfur abundance (e.g., Stasińska 1978; French 1981). The photoionization models of Garnett (1989) showed a clear relationship between the S$^{+3}$ ionization correction and O$^+$/O which is relatively insensitive to stellar effective temperature or abundance. In Figure 3, we show O$^+$/O vs. log (S$^{+}$+S$^{+2}$)/S for the two observed nebulae and compare them to the models of Stasińska (1990) for two different sequences in stellar effective temperature. The model sequences represent abundances of 0.1 times solar, similar to the metallicities of our two objects. The values for Hubble V and I Zw 36 plotted in Figure 3 fall at slightly higher values of (S$^{+}$+S$^{+2}$)/S or lower values of O$^+$/O than the models. This may indicate that the S$^{+3}$ fraction is overestimated in the models, perhaps due to line blanketing not accounted for in the stellar atmosphere fluxes.
However, we caution against over-interpretation of this offset given the mismatch between the apertures for the optical and IR observations and the fact that the points in Figure 3 are less than 2 $\sigma$ away from the model curves. We emphasize that more observations of S$^+$, S$^{+2}$, and S$^{+3}$ for the same object (preferably with matched apertures) will provide a valuable consistency check on the ICF for S$^{+3}$. Corrections for the unobserved S$^{+3}$ abundance are now usually carried out based on photoionization modeling (e.g., the Thuan, Izotov, & Lipovetsky 1995 fit to the models of Stasinska 1990). It is still desirable to have observational tests of these fits spanning a large range in excitation. With the advent of more sensitive IR instruments, it may eventually be possible to characterize the effective temperatures of the ionizing stars, and to determine the best nebular models to describe a given H II region. DIAGNOSTICS AND TEMPERATURE FLUCTUATIONS ======================================== Standard Analysis of Temperature Fluctuations --------------------------------------------- Because optical collisionally excited line emission is weighted toward high temperature regions, abundance measurements based on optical data may not provide the true ionic abundance of a species in the H II region [@P67; @G92; @M95; @SVG97]. This effect would be most extreme in a case where there is a very localized zone of high temperature embedded in a more extended, lower temperature nebula. In this case, a calculation of the S$^{+2}$ abundance from \[S III\] $\lambda$6312 would yield a S$^{+2}$ abundance lower than the true value. However, this problem can be rectified by observing lines with much smaller excitation energies, i.e. infrared fine structure transitions. 
Because of their lower excitation energy, the volume emissivities for such transitions are insensitive to electron temperature, and as a result, emission line ratios can be converted to ionic abundances with a smaller dependence on temperature variations within the ionized gas (e.g., Dinerstein 1986). The measurement of larger abundances of an ion from infrared line observations than from optical lines would thus be an indication that there may be temperature fluctuations inside a nebula. [@DLW85] used such an approach to estimate the magnitude of temperature fluctuations in several planetary nebulae using a combination of infrared and optical \[O III\] lines. Comparison of the S$^{+2}$ abundances in Table 5 shows evidence of the effect of temperature fluctuations in I Zw 36 and Hubble V, although this is only significant at the 1-2$\sigma$ level. Peimbert (1967) first determined the effects of temperature fluctuations on the calculation of nebular temperatures themselves through a density-weighted ensemble average of the temperature, $$T_o(N_i,N_e) = \frac{\int T(r) N_i(r) N_e(r) d \Omega dl}{\int N_i(r) N_e(r) d\Omega dl},$$ derived from an emission line. This average temperature, used in conjunction with an emission temperature, defined by: $$\frac{I_{X^{+p},\lambda_{nm}}}{I_{X^{+p}, \lambda_{n_1m_1}}} = \exp\left[ - \frac{\Delta E - \Delta E^*}{kT}\right],$$ where $\Delta E$ and $\Delta E^{*}$ are the excitation energies of the two lines, can be used to define the root-mean-square temperature fluctuation, $$t^2 = \langle \bigg[ \frac{T(r) - T_o}{T_o} \bigg]^{2} \rangle.$$ Then, assuming small fluctuations, one can perform a Taylor expansion, and relate the emission temperature to the average temperature by: $$T \approx T_o \left[ 1 + \left( \frac{\Delta E + \Delta E^*}{kT_o} - 3 \right) \frac{t^2}{2} \right],$$ where $ \Delta E \neq \Delta E^* $ and $t^2 \ll 1 $.
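A discretized version of Equations (3) and (5) makes the definitions concrete. In the sketch below (our illustration, not a model of either object), the weights play the role of $N_i N_e\,dV$ and the toy nebula has two equal-emission-measure zones.

```python
# Discretized Peimbert definitions: density-weighted mean temperature T0
# and rms fractional fluctuation t^2 over a set of cells.
def T0_and_t2(T, w):
    """T: cell temperatures; w: weights proportional to N_i * N_e * dV."""
    W = sum(w)
    T0 = sum(t * x for t, x in zip(T, w)) / W
    t2 = sum(x * ((t - T0) / T0) ** 2 for t, x in zip(T, w)) / W
    return T0, t2

# toy nebula: two equal-weight zones at 10,000 K and 14,000 K
T0, t2 = T0_and_t2([10_000.0, 14_000.0], [1.0, 1.0])
print(T0, t2)
```

For this toy case $T_o$ = 12,000 K and $t^2 = (2000/12000)^2 \approx 0.028$, while an isothermal gas gives $t^2 = 0$.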
An alternative formalism was created by Mathis (1995) in which a fraction, $C$, of the gas is assumed to be at one temperature, $T_1$, while the rest of the gas is assumed to be at a second temperature, $T_2$. From this, one can estimate the degree to which differing amounts of plasma at each temperature can affect the derived abundances through the optical depth: $d\tau /dT = C \delta (T - T_1) + (1 - C) \delta (T - T_2)$, where $d\tau = n_e n(X) ds$. A New Approach to Characterizing Temperature Fluctuations --------------------------------------------------------- We have developed a different approach that can be used to characterize the average temperature and the root-mean-square temperature fluctuations in a region. This method can be used with an arbitrary temperature distribution. Assume the gas in a nebula follows a Gaussian temperature distribution, with mean temperature, $T_o$, and a dispersion $\sigma_T$. Let us also assume that the emission lines that we are observing have emission coefficients, at constant density, that can be characterized over a temperature range of a few times $\sigma_T$ by a quadratic fit, $$\epsilon_{X_i, \lambda_i}(T) = a_{X_i, \lambda_i} T^2 + b_{X_i, \lambda_i} T + c_{X_i, \lambda_i},$$ where $a_{X_i, \lambda_i}$, $b_{X_i, \lambda_i}$, and $c_{X_i, \lambda_i}$ are the coefficients of the quadratic fit of the emission coefficient. This is justified, because emission coefficients can usually be accurately approximated by quadratic polynomials over temperature ranges of several thousand degrees. 
Then, the ratio of two emission line intensities ($R$), which can be generally written as: $$R = \frac{I_{X_1,\lambda_1}}{I_{X_2,\lambda_2}} = \frac{N_{X_1} N_e \epsilon_{X_1, \lambda_1}(T)}{N_{X_2} N_e \epsilon_{X_2, \lambda_2}(T)}$$ can be convolved with the normalized Gaussian temperature distribution so that we have: $$\frac{I_{X_1, \lambda_1}} {I_{X_2,\lambda_2}} = \frac{N_{X_1}}{N_{X_2}} \frac{\int \epsilon_{X_1, \lambda_1}(T) p(T) dT}{\int \epsilon_{X_2,\lambda_2}(T) p(T) dT},$$ where $$p(T) = \frac{1}{\sigma_T \sqrt{2\pi}} \exp\left[ -\frac{(T - T_o)^2}{2\sigma_T^2} \right].$$ Upon integration from $T = -\infty$ to $T = \infty$, the relation becomes $$\frac{I_{X_1,\lambda_1}}{I_{X_2,\lambda_2}} = \frac{N_{X_1}}{N_{X_2}} \frac{a_{\lambda_1} \left(\sigma_T^2 + T_o^2 \right) + b_{\lambda_1} T_o + c_{\lambda_1}}{a_{\lambda_2} \left( \sigma_T^2 + T_o^2 \right) + b_{\lambda_2} T_o + c_{\lambda_2}},$$ which holds to a good degree of accuracy due to the square-exponential behavior of the kernel in the integrand. However, one should be aware that in this approximation, the Gaussian kernel, $p(T)$, has some mean temperature, $T_{o}$, and width, $\sigma_{T}$. If $T_{o} \gg \sigma_{T}$, then there are no problems, but if $T_{o} \sim \sigma_{T}$, then a significant portion of the kernel may correspond to temperatures $T < 0$ K, which is clearly non-physical, and the approximation breaks down. When $T_o$ $\ge$ 3$\sigma_T$, less than $3\%$ of the kernel will correspond to negative temperatures, and the approximation will hold.
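The closed-form Gaussian average underlying Equation (11) can be verified numerically for a single line. In this sketch the quadratic coefficients are arbitrary illustrative numbers, not the fits of Table 6.

```python
import math

# Check: <a T^2 + b T + c> over a Gaussian p(T) of mean T_o and width sigma_T
# equals a*(sigma_T^2 + T_o^2) + b*T_o + c (second moment of a Gaussian).
a, b, c = 3.0e-9, -2.0e-4, 5.0          # illustrative quadratic-fit coefficients
T_o, sigma_T = 12_000.0, 2_000.0

def p(T):
    return math.exp(-(T - T_o) ** 2 / (2.0 * sigma_T ** 2)) / (sigma_T * math.sqrt(2.0 * math.pi))

def f(T):
    return (a * T * T + b * T + c) * p(T)

# midpoint quadrature over T_o +/- 8 sigma_T (tails beyond are negligible)
lo, hi, n = T_o - 8 * sigma_T, T_o + 8 * sigma_T, 20_000
h = (hi - lo) / n
numeric = h * sum(f(lo + (i + 0.5) * h) for i in range(n))
closed = a * (sigma_T ** 2 + T_o ** 2) + b * T_o + c
print(numeric, closed)
```

The two numbers agree to many digits, which is the sense in which the "square-exponential behavior of the kernel" makes the truncated integral effectively exact.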
Let the line ratio divided by the abundance be given by $$\gamma_{12} = \frac{I_{X_1,\lambda_1}}{I_{X_2,\lambda_2}}\cdot\frac{N_{X_2}}{N_{X_1}};$$ then it is possible to solve for the temperature fluctuations, $\sigma_{T}^{2}$, using the equation $$\sigma_{T}^2 = \frac{(a_{\lambda 1} - \gamma_{12} a_{\lambda 2})T_o^2 + (b_{\lambda 1} - \gamma_{12} b_{\lambda 2})T_o + (c_{\lambda 1} - \gamma_{12} c_{\lambda 2})} {- (a_{\lambda 1} - \gamma_{12} a_{\lambda 2})}.$$ We will refer to Equation (13) as the “diagnostic equation.” Equation (13) yields a locus of points in ($T_o$, $\sigma_{T}^{2}$) space: for a single, fixed value of the observed line ratio, $R$, each value of $T_o$ corresponds to a Gaussian temperature distribution of variance $\sigma_{T}^{2}(T_{o})$ centered on $T_o$. By using additional line ratios, one establishes a diagnostic from which it is possible to solve explicitly for the ionic temperature and the temperature fluctuations in the gas (as done for \[O III\] by Dinerstein, Lester, & Werner 1985). Furthermore, provided three or more collisionally excited lines relative to an H recombination line are available for a given ionic species, the abundance of that species can also be solved for simultaneously. The calculated temperature variance, $\sigma_{T}^{2}$, is related to the traditional $t^{2}$ by definition: $$\sigma_{T}^{2} = T_{o}^{2} t^{2},$$ which holds even if the true distribution of the gas temperature is not Gaussian. If a case arises in which a Gaussian distribution is not appropriate, this method is adaptable so that other (normalized) kernels can be applied as well. For example, assuming a normalized rectangle function (a top-hat) temperature distribution, with half-width $\delta_T$ (so that $T_{L} = T_{o} - \delta_{T}$ and $T_{U} = T_{o} + \delta_{T}$), given by: $$p(T) = \left\{ \begin{array} {r@{\quad : \quad}l} \frac{1}{2\delta_{T}} & T_{L} < T < T_{U}\\ 0 & T \leq T_{L}, T\geq T_{U}.
\end{array} \right.$$ one obtains an equation identical in form to (11): $$\frac{I_{X_1,\lambda_1}}{I_{X_2,\lambda_2}} = \frac{N_{X_1}}{N_{X_2}} \frac{a_{\lambda_1} \left(\frac{1}{3} \delta_{T}^{2} + T_o^2 \right) + b_{\lambda_1} T_o + c_{\lambda_1}}{a_{\lambda_2} \left( \frac{1}{3}\delta_{T}^{2} + T_o^2 \right) + b_{\lambda_2} T_o + c_{\lambda_2}},$$ where $\delta_{T}^{2}$ is related to $\sigma_T^2$ by: $$\sigma_{T}^{2} = \frac{1}{3}\, \delta_{T}^{2}.$$ Other distribution functions could also be used, including the Lorentzian distribution or the Triangle distribution with a half-base width of $b_{T}$, given by $$p(T) = \left\{ \begin{array} {r@{\quad : \quad}l} \frac{T - T_o + b_T}{b_{T}^{2}} & T_o - b_T < T < T_o\\ \frac{-T + T_o + b_T}{b_{T}^{2}} & T_o < T < T_o + b_T\\ 0 & otherwise. \end{array} \right.$$ The Triangle distribution and a truncated Lorentzian distribution, which has limits placed on the width of the wings at its base, also yield equations of the form (11). For the Triangle distribution, $\sigma_T^2 = \frac{1}{6}\, b_T^2$, while the truncated Lorentzian possesses a more complicated relationship between its width, base limits, and $\sigma_T^2$.

Behavior of the Diagnostic and Choice of Line Ratios
----------------------------------------------------

Initially, we tested the diagnostic by checking whether it could determine the physical properties of a model of a gas cloud with an average temperature $T_o$ = 12,000 K, temperature fluctuations $\sigma_T$ = 2,000 K, and an abundance approximately equal to that of Hubble V. We used emission coefficients given by the STSDAS [*nebular.ionic*]{} routine [@SD95], each of which was fitted with a second-order polynomial. Coefficients for these fits are listed in Table 6.
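The second moments quoted for the top-hat and Triangle kernels (variance $\delta_T^2/3$ for a top-hat of half-width $\delta_T$, and $b_T^2/6$ for a Triangle of half-base $b_T$) can be checked by direct quadrature; the sketch below uses illustrative parameter values.

```python
# Midpoint-rule check of the kernel variances used in the text.
T_o, delta, b = 12_000.0, 1_500.0, 1_500.0   # illustrative values

def tophat(T):
    # normalized rectangle of half-width delta centered on T_o
    return 1.0 / (2.0 * delta) if abs(T - T_o) < delta else 0.0

def triangle(T):
    # normalized triangle of half-base b centered on T_o
    return max(0.0, (b - abs(T - T_o)) / b ** 2)

def variance(pdf, lo, hi, n=100_000):
    h = (hi - lo) / n
    xs = [lo + (i + 0.5) * h for i in range(n)]
    mean = h * sum(x * pdf(x) for x in xs)
    return h * sum((x - mean) ** 2 * pdf(x) for x in xs)

print(variance(tophat, T_o - delta, T_o + delta), delta ** 2 / 3.0)
print(variance(triangle, T_o - b, T_o + b), b ** 2 / 6.0)
```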
Then the square root of equation (14) was plotted using various line ratios involving \[S III\] and \[O III\] forbidden lines and H recombination lines, assuming the physical conditions given above, as a function of $T_o$, in order to determine $T_o$ and $\sigma_T$ simultaneously. Figure 4 shows these diagnostic lines for the case of the nebula described above. The top panel shows diagnostics from combinations of \[S III\] and H recombination lines while the bottom panel shows diagnostics from combinations of \[O III\] lines. Diagnostics involving different combinations of emission lines have different sensitivities to temperature and temperature fluctuations. In order to determine the temperature and the temperature fluctuations simultaneously, one would ideally use two combinations of emission lines that yield diagnostics that intersect at nearly right angles. For example, this is roughly the case in the top panel of Figure 4 between the \[S III\] $\lambda 6312$/$18.7\mu m$ and the $\lambda 9532$/$18.7\mu m$ diagnostics. The models on which Figure 4 is based show an additional feature of the diagnostic. The diagnostics involving lines originating solely from one ion intersect at a point that is independent of ionic abundance. However, when using diagnostics that involve ratios of ionic to H recombination lines, this is no longer the case. Therefore, it is possible to simultaneously determine the ionic abundance as well as the temperature and temperature fluctuations, provided that the strengths of enough emission lines are known so that the three unknowns can be calculated, and that the ions in question reside in regions of the same $T_o$ and $\sigma_T$. Because the diagnostics are composed of ratios of emission lines with different temperature sensitivities (the $\lambda 9532$/$18.7\mu m$ diagnostic, for example), the sensitivity of the diagnostic will vary with temperature. 
Graphically, the diagnostic lines appear to rotate as either the temperature or the temperature fluctuations are varied. Therefore, it may be necessary to mix and match combinations of emission lines in order to have orthogonally-crossing diagnostic lines that provide the most stringent determination of the nebular parameters. A general rule of thumb for most nebular conditions would be to use combinations involving nebular (transitions between the middle and lowest terms in an ionic configuration, e.g. \[S III\] $\lambda$9532), auroral (transitions between the highest and middle terms, e.g. \[S III\] $\lambda$6312), and fine-structure features. Application of the Diagnostic Technique --------------------------------------- An optimal combination of emission lines consisting of the nebular, auroral, and fine-structure lines of an ionic species as well as hydrogen recombination lines will allow the simultaneous determination of the nebular temperature, temperature fluctuations, and the abundance of the species with respect to hydrogen (see Equation (13)). Unfortunately, for most H II regions, published observations do not exist for all of these types of transitions (or, as in the present case, they have not been observed through matched apertures), so we are unable to use our diagnostic technique fully with the present data. However, in this section we will demonstrate the use of the diagnostic using two approaches. First, we will illustrate the usefulness of the diagnostic using our [*ISO*]{} and ground-based measurements for Hubble V and I Zw 36. This exercise will not lead to accurate estimates of $T$, $\sigma_T$, and abundances, because of the relatively large uncertainties in the line ratios and the fact that we did not have an optimal set of \[S III\] line ratios. However, it will serve as an illustration of the method on real data. Next, we will show the ability of the diagnostic to make rough predictions of the conditions inside a nebula using synthetic data. 
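To make the geometry concrete: for quadratic emissivity fits, each same-ion diagnostic curve has the form $\sigma_{T}^{2} = -T_o^2 - pT_o - q$, so the difference of two such curves is linear in $T_o$ and they cross at exactly one point. The sketch below (illustrative Python using the Table 6 \[S III\] fits, not our plotting code) builds two synthetic ratios for a cloud with $T_o = 12{,}000$ K and $\sigma_T = 2{,}000$ K, and locates the crossing by a scan over trial $T_o$:

```python
# [S III] quadratic-fit coefficients (a, b, c) from Table 6
S3_6312 = (3.884e-30, 5.208e-26, -5.035e-22)
S3_9532 = (-1.448e-28, 5.960e-24, -1.833e-20)
S3_187 = (3.929e-30, -2.613e-25, 1.681e-20)

def mean_eps(coef, To, sig2):
    a, b, c = coef
    return a * (To**2 + sig2) + b * To + c

def diagnostic_curve(gamma, num, den, To):
    # sigma_T^2 along the diagnostic curve of one observed ratio (Eq. 13)
    da = num[0] - gamma * den[0]
    db = num[1] - gamma * den[1]
    dc = num[2] - gamma * den[2]
    return (da * To**2 + db * To + dc) / (-da)

# synthetic "observed" ratios for To = 12,000 K, sigma_T = 2,000 K
To_true, s2_true = 12000.0, 2000.0**2
g1 = mean_eps(S3_6312, To_true, s2_true) / mean_eps(S3_187, To_true, s2_true)
g2 = mean_eps(S3_9532, To_true, s2_true) / mean_eps(S3_187, To_true, s2_true)

# scan trial To and find where the 6312/18.7um and 9532/18.7um curves cross
grid = [8000.0 + 100.0 * i for i in range(81)]   # 8,000 .. 16,000 K
best = min(grid, key=lambda T: abs(diagnostic_curve(g1, S3_6312, S3_187, T)
                                   - diagnostic_curve(g2, S3_9532, S3_187, T)))
```

The scan recovers the input $(T_o, \sigma_{T}^{2})$, the abundance-independent crossing point described above.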
### Illustration with real data For Hubble V and I Zw 36 we have measurements of \[S III\] $\lambda 6312$, $18.7 \mu$m, and H recombination lines. Thus, we have essentially only two diagnostic line ratios, which is not sufficient to determine $T_o$, $\sigma_{T}^{2}$, and the S$^{+2}$ abundance simultaneously. Another \[S III\] line, for example the nebular \[S III\] 9532 Å transition, is needed for a full solution. Nevertheless, we can still provide an instructive example and set constraints on $\sigma_T$ if we assume that the S$^{+2}$ abundances that we calculated from our [*ISO*]{} \[S III\] $18.7 \mu m$ observations are correct. This abundance was based on the measurement of the electron temperature from the \[O III\] lines and an assumption of no temperature fluctuations. In principle, this assumption should be reasonable because the fine-structure lines have weak sensitivity to temperature fluctuations. How bad is this assumption? From equation (4) in Garnett (1992), which is an update of equation (15) of Peimbert (1967), we can obtain the ion-weighted mean electron temperature, T(O$^{+2}$), from the measured T(O III) and an estimate of $t^2$. For T(O III) $=$ 11,200 K and a $t^2$ value of 0.04, T(O$^{+2}$) $=$ 10,000 K. Since S$^{+2}$/H$^+$ is roughly proportional to T$^{-0.5}$ for the IR lines, for commonly claimed values of 0.03 to 0.04 for $t^2$, the uncertainty in the S$^{+2}$/H$^+$ abundance is less than 10% (which is smaller than our quoted errors). Thus, this is probably not a bad assumption to adopt for an illustrative example. The results are shown in Figure 5 for both Hubble V and I Zw 36, where it should be noted that with the assumption of the S$^{+2}$/H abundance, we are left with only two undetermined variables: $T$ and $\sigma_{T}^{2}$. This allows us to work in two dimensions and two diagnostics rather than three. 
We did, however, include a third diagnostic ($\lambda$6312/18.7$\mu m$) in Figure 5 to ensure that all of the diagnostic lines crossed at the same point and produced consistent results. In Figure 5, the diagnostic curves are grouped by line ratio. The center line in each group represents the results of Equation (13) for each observed line ratio; the parallel curves represent the spread caused by the $\pm 1\sigma$ observational errors in the line ratios. The expected value of $\sigma_T$ will lie at the centroid of the $1\sigma$ error box. In section 3.2, we derived \[S III\] temperatures of $11,000\pm1,200$ K for Hubble V and $15,130\pm50$ K for I Zw 36, based on the estimates from \[O III\] temperatures. Meanwhile, from the diagnostics in Figure 5 we obtain T\[S III\] $\approx 12,000\pm1,000$ K for Hubble V, and $15,500\pm1,000$ K for I Zw 36. Both of the values of T\[S III\] derived from Fig. 5 are in good agreement with those derived in Section 3.1. The data presented in Figure 5 are consistent with a very large range in $\sigma_T$ corresponding to values of $t^2$ from 0 up to roughly 0.2, which is much larger than values usually considered. To provide better constraints on $\sigma_T$ we would need to obtain higher-quality observations of the infrared \[S III\] lines as well as include observations of \[S III\] $\lambda\lambda$9069,9532. In the next section we demonstrate the quality of data necessary for this goal. ### Illustration with synthetic data The approach that was followed in Figure 5 is quite useful for the quick determination of the nebular conditions; however, we have used only a rudimentary method for determining the errors associated with the original data. This problem is exacerbated by the fact that under some circumstances, the formal simultaneous solution of multiple diagnostics of the form given in Equation (14) may in fact yield $\sigma_{T}^{2} < 0$. 
This could easily arise from observational uncertainties, but if the error propagation is carried out properly then one would expect the solution to be [*consistent*]{} with positive values for $\sigma_{T}$. In order to develop a better understanding of the error analysis, we developed a Monte Carlo simulation to generate synthetic observations of line ratios whose values are specified along with their corresponding observational errors. The specified line ratios and errors are sent through the diagnostic so that each observation is plotted as a point on the $\sigma_{T}^{2}$ vs. $T$ plane. The intensity of points on the plane is the distribution that arises from a large number of observations, smeared by observational errors. We have plotted two examples of this in Figure 6. The top panel in Figure 6 shows the distribution of points that arises from \[S III\] line ratios corresponding to an isothermal nebula at $T = 12,000$ K, with observational errors of 2% in optical and near-infrared line ratios and 5% in line ratios with infrared lines. The peak of the distribution lies at the point that would be determined in the absence of observational errors. This plot also shows that roughly half of the realizations in the simulation yield $\sigma_{T}^{2} < 0$. However, the contours corresponding to 1 and 2$\sigma$ errors in the determination of the nebular parameters show that the observations could correspond to small fluctuations. The 1$\sigma$ error corresponds to roughly $t^2$ $\approx$ 0.04. Thus, it is possible to place significant constraints on temperature fluctuations using combinations of optical, near-infrared, and infrared \[S III\] lines. Similarly, the bottom panel of Figure 6 shows the distribution of realizations resulting from inputs corresponding to $T = 12,000$ K and $\sigma_{T} = 2,000$ K ($\sigma_{T}^{2} = 4\cdot10^{6} K^{2}$, which corresponds to $t^{2} = 0.028$), smeared by the same errors as above. 
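A minimal stand-in for this Monte Carlo procedure is sketched below (illustrative Python, not our actual simulation; Table 6 \[S III\] fits, an isothermal input at $T = 12{,}000$ K, and, for simplicity, 5% Gaussian errors on both infrared ratios). Because the difference of two same-ion diagnostic curves is linear in $T_o$ for quadratic fits, each noisy realization can be solved in closed form:

```python
import random

random.seed(1)

S3_6312 = (3.884e-30, 5.208e-26, -5.035e-22)   # Table 6 fits (a, b, c)
S3_9532 = (-1.448e-28, 5.960e-24, -1.833e-20)
S3_187 = (3.929e-30, -2.613e-25, 1.681e-20)

def mean_eps(c, To, s2):
    return c[0] * (To**2 + s2) + c[1] * To + c[2]

def solve_pair(g1, g2):
    """Simultaneous (To, sigma_T^2) from the 6312/18.7um and 9532/18.7um
    ratios.  Each curve is sigma^2 = -To^2 - p*To - q, so the difference
    of the two curves is linear in To and the crossing is unique."""
    pq = []
    for g, num in ((g1, S3_6312), (g2, S3_9532)):
        da = num[0] - g * S3_187[0]
        pq.append(((num[1] - g * S3_187[1]) / da,
                   (num[2] - g * S3_187[2]) / da))
    (p1, q1), (p2, q2) = pq
    To = (q2 - q1) / (p1 - p2)
    return To, -To**2 - p1 * To - q1

To_true = 12000.0   # isothermal input, as in the top panel of Figure 6
g1 = mean_eps(S3_6312, To_true, 0.0) / mean_eps(S3_187, To_true, 0.0)
g2 = mean_eps(S3_9532, To_true, 0.0) / mean_eps(S3_187, To_true, 0.0)

# smear the ratios with 5% Gaussian errors and map each draw to a point
# in the (T, sigma_T^2) plane
samples = [solve_pair(g1 * (1 + 0.05 * random.gauss(0, 1)),
                      g2 * (1 + 0.05 * random.gauss(0, 1)))
           for _ in range(2000)]
frac_neg = sum(1 for _, s2 in samples if s2 < 0) / len(samples)
```

Consistent with the top panel of Figure 6, roughly half of the realizations land at $\sigma_{T}^{2} < 0$, while the cloud of points remains centered on the input parameters.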
It is now apparent that the diagnostic lines resulting from observations with $\pm 1\sigma$ errors roughly correspond to the $1\sigma$ bounds on the determinations of the nebular parameters. Two things should be noted here. First, the low observational errors assumed in the calculations are difficult to achieve in practice, and matching apertures is critical. Long-slit spectra of either spatially unresolved sources (e.g., extragalactic) or Galactic H II regions where the variations can be traced along the slit are probably best suited for this type of work. It is probably best to obtain ratios to H recombination lines in all cases (i.e., H$\alpha$ in the optical, P8, P9, and P10 in the near-infrared, and Br$\alpha$ in the infrared) to provide a normalization. Second, it is important to point out that for different temperature ranges, different ionic species will provide better constraints. For instance, in the example presented in Figure 6, a combination of \[O III\] lines (with the same magnitude of relative errors) gives roughly a factor of two stronger constraint ($t^2$ $\le$ 0.02) on the presence of temperature fluctuations. SUMMARY AND CONCLUSIONS ======================= We have reported new [*ISO*]{} observations of the mid-infrared fine-structure \[S III\] and \[S IV\] lines. These lines are of great importance to the accurate determination of nebular total sulfur abundances. This is due to the strong dependence of higher-excitation lines on temperature, along with the fact that there are no strong optical lines for either of these species. With our observations, we have shown that S$^{+3}$ can constitute a large fraction of the total sulfur abundance in extragalactic H II regions. This means that if one is to determine accurately the total sulfur abundance without model dependence as in equation (4), then one must make infrared observations of the fine-structure \[S III\] and \[S IV\] lines. 
Once infrared observations are made, it becomes possible to test not only useful techniques such as the determination of the radiation “hardness” [@VP88] and the determination of the total sulfur abundance as in equation (4) [@G89], but also the accuracy of photoionization models. Our data suggest that ionization corrections for sulfur based on oxygen ion ratios and photoionization models are valid. The presence of temperature fluctuations in nebulae can complicate the determination of nebular abundances. We have developed a new, generalized diagnostic capable of determining the amplitude of these fluctuations by assuming a Gaussian temperature distribution in these nebulae, although this method can be applied to any normalized distribution. The uncertainties in our [*ISO*]{} measurements and the lack of observations of \[S III\] $\lambda 9532$ or $\lambda 9069$ did not allow an accurate determination of the amplitude of temperature fluctuations for Hubble V and I Zw 36 using our method. A significant challenge is presented by combining large-aperture infrared spectra with relatively small-aperture optical spectra. Future long-slit spectrographs available with SOFIA and SIRTF will allow us to overcome this challenge. As these more powerful instruments become available, observational uncertainties should decrease, allowing a more accurate determination of the size of the temperature fluctuations and other nebular parameters. In the future, one can consider extensions to the present analysis. For example, the potential to calculate spatially unresolved density fluctuations in a similar mathematical framework remains relatively unexplored, although [@R89] has considered the biasing of IR density indicators by density fluctuations. 
Like temperature fluctuations, density fluctuations can also affect nebular line ratios, and they may have significant effects on nebular physical parameters which must be constrained in order to develop better nebular models. To this end, application of this diagnostic to published data on a large number of H II regions may be useful in characterizing the variances found in H II regions. We give special thanks to Nancy Silbermann, Sergio Molinari, Sarah Unger, and the rest of the IPAC staff for their assistance during and after our visit to IPAC during January, 1997. We also thank the referee, Manuel Peimbert, for a careful reading of the manuscript and several suggestions which significantly improved the paper. JN, EDS, and DRG acknowledge support from JPL contract 961500; DRG also acknowledges support from NASA LTSA grant NAG5-7714. HLD’s participation was supported by JPL contract 961543. EDS acknowledges support from NASA LTSA grant NAG5-9221 and the University of Minnesota. Bianchi, L., Scuderi, S., Massey, P., & Romaniello, M. 2001, , 121, 2020 Caplan, J., & Deharveng, L., 1986, A&A, 155, 297 Collier, J., Hodge, P., & Kennicutt, R. 1995, PASP, 107, 361 de Graauw, T., et al. 1996, A&A, 315, L49 Dinerstein, H. L. 1980, ApJ, 237, 486 Dinerstein, H. L. 1986, PASP, 98, 979 Dinerstein, H. L., Lester, D. F., & Werner, M. W. 1985, ApJ, 291, 561 Esteban, C., Peimbert, M., Torres-Peimbert, S., García-Rojas, J., & Rodríguez, M., 1999, ApJS, 120, 113 François, P. 1988, A&A, 195, 226 French, H. B. 1981, ApJ, 246, 434 Galavís, M. E., Mendoza, C. & Zeippen, C. J., 1995, A&ASS, 111, 347 Garnett, D. R. 1989, ApJ, 345, 282 Garnett, D. R. 1992, AJ, 103, 1330 Garnett, D. R., Shields, G. A., Skillman, E. D., Sagan, S. P., & Dufour, R. J. 1997, ApJ, 489, 63 Gruenwald, R. B. & Viegas, S. M. 1992, ApJS, 78, 153 Hawley, S.A. 1978, ApJ, 224, 417 Hummer, D.G. & Mihalas, D. 1970a, MNRAS, 147, 339 Hummer, D.G. & Mihalas, D. 1970b, JILA Report No. 101 Hummer, D.G. & Storey, P.J. 
1987, MNRAS, 224, 801 Izotov, Y. I. & Thuan, T. X. 1999, ApJ, 511, 639 Izotov, Y. I., Thuan, T. X., Lipovetsky, V. A. 1997, ApJS, 108, 1 Kessler et al., 1996, A&A, 315, L27 Leech, K., 1997, [*SWS Instrument Data Users Manual Issue 3.1*]{}, SAI/95-221/Dc Lequeux, J., Peimbert, M., Rayo, J. F., Serrano, A., & Torres-Peimbert, S. 1979, A&A, 80, 155 Lester, D. F., Dinerstein, H. L., & Rank, D. M. 1979, ApJ, 232, 139 Mathis, J.S., RMxAASC, 3, 207 Mathis, J. S., Torres-Peimbert, S., & Peimbert, M. 1998, ApJ, 495, 328 Miller, B.W. AJ, 112, 991. Pagel, B. E. J. & Edmunds, M. G. 1981, ARA&A, 19, 77 Pagel, B. E. J., Edmunds, M. G., Smith, G. 1980, MNRAS, 193, 219 Pagel, B. E. J., Simonson, E. A., Terlevich, R. J., & Edmunds, M. G. 1992, MNRAS, 255, 325 Peimbert, M. 1967, ApJ, 150, 825 Pipher, J. L., Helfer, H. L., Herter, T., Briotta, D. A. Jr., Houck, J. R., Willner, S. P., & Jones, B. 1984, ApJ 285, 174 Torres-Peimbert, S., Peimbert, M., & Fierro, J. 1989, ApJ, 345, 186 Pradhan, A. K. & Peng, J. 1995, in The Analysis of Emission Lines, eds. R.E. Williams and M. Livio, Cambridge University Press, p. 8 Rieke, G. H. & Lebofsky, M. J. 1985, ApJ, 288, 618 Rubin, R. H., 1989, ApJS 69, 897 Shaver, P.A., McGee, R.X., Newton, L.M., Danks,A.C., & Pottasch, S.R. 1983, MNRAS, 204, 53 Shaw, R. A. & Dufour, R. J. 1995, , 107, 896 Skillman, E. D., Kennicutt, R. C., & Hodge, P. W. 1989, ApJ, 347, 875 Skillman, E. D., & Klein, U. 1988, A&A, 199,61 Skillman, E. D., Terlevich, R., & Melnick, J. 1989, MNRAS, 240, 563 Stasinska, G. 1978, A&A, 66, 257 Stasinska, G. 1990, A&AS, 83, 501 Steigman, G., Viegas, S., Gruenwald, R. 1997, ApJ, 490, 187 Tayal, S.S., 2000, ApJ, 530, 1091 Tayal, S.S. & Gupta, G.P., 1999, ApJ, 526, 544 Thuan, T.X., Izotov, Y. I., & Lipovetsky, V. A., 1995, ApJ, 445, 108 Viallefond, F. & Thuan, T. X. 1983, ApJ, 269, 444 Vílchez, J. M. & Pagel, B. E. J. 
1988, MNRAS, 231, 257 [lll]{} Observation date & 17 Apr 1996 & 25 Apr 1996\ Observer & DGARNETT & DGARNETT\ TDT & 15202208 & 16001210\ Integration Time & $588 s$ & $8040 s $\ [lccc]{} Hubble V&4.05 & Br$\alpha$ & $(1.9\pm0.2)\times10^{-20}$\ &10.51 & \[S IV\] & $(7.5\pm0.5)\times10^{-20} $\ &18.71 & \[S III\] & $(6.7\pm0.7)\times10^{-20}$\ I Zw 36&4.05 & Br$\alpha$ & $(1.9\pm0.3)\times10^{-21}$\ &10.51 & \[S IV\] & $(1.2\pm0.1)\times10^{-20} $\ &18.71 & \[S III\] & $(4.9\pm1.0)\times10^{-21}$\ [llcccccc]{} $3727$ & \[O II\] & $1.4$ & $1.5$ &$1.20\pm0.06$& $1.46\pm0.10$& $0.7$ & $0.719\pm0.002$\ $4363$ & \[O III\] & $0.052$ & $0.047$ & $0.057\pm0.012$ & $0.06\pm 0.01$ & $0.12$ & $0.127\pm0.001$\ $4861$ & H$\beta$ & $1.00$ & $1.00$ & $1.00$ & $1.00$ & $1.00$ & $1.00$\ $4959$ & \[O III\] & $1.8 $ & $1.6$ & $1.92\pm0.010$ & $1.67\pm0.04 $ & $1.98$ & $1.960\pm0.003$\ $5007$ & \[O III\] & $5.4 $ & $5.0 $ & $5.93\pm0.030 $ & $4.90\pm0.14$&$6.52$ & $5.543\pm0.008$\ $6312$ & \[S III\] & $0.014$ & …& …& …& $0.017$ & $0.017\pm0.001$\ $6717$ & \[S II\] & $0.063$ & $0.09$& $0.129\pm0.006$& …& $0.042$ & $0.061\pm0.001$\ $6731$ & \[S II\] & $0.045$ & $0.06$& & …& $0.031$ & $0.045\pm0.001$\ $c($H$\beta)$ & & $0.8$ & $1.05$ & $0.7$ & $0.3\pm0.1$ & $0.41\pm0.12$ & $0.00$\ [lccc]{} LPRST79 & Hubble V & $ 137 $ & $11,200$\ PES80 & Hubble V & $ 142\pm31 $ & $11,000\pm900$\ STM89 & Hubble V & $ 138\pm47 $ & $11,500\pm1,000$\ M96 & Hubble V & $ 110\pm26 $ & $12,400\pm1,100$\ VT83 & I Zw 36 & $ 70\pm12$ & $14,600\pm1,300$\ ITL97 & I Zw 36 & $ 59.08\pm0.47 $ & $16,180\pm65$\ [lcccc]{} O$^{+}$/H & 3,800$\pm$900 & & $710\pm80$ &\ O$^{+2}$/H & 13,000$\pm$3,000 & & $4,930\pm40$ &\ O/H & 16,800$\pm$3,100 & & $5,640\pm90$ &\ S$^{+}$/H & $ 20.1\pm4.8$ & & $11.4\pm0.6$ &\ S$^{+2}$/H & $ 250\pm90 $ & $280\pm50$ & $ 95\pm9$ & $151\pm30$\ S$^{+3}$/H & & $65\pm11$ & & $71\pm13$\ S/H & & $365\pm51$ & & $233\pm33$\ $\eta$ & 3.6$\pm$2.0 & & 1.2$\pm$0.2 &\ [lcccc]{} H I & H$\beta$ & $6.072\times10^{-34}$ 
& $-2.327\times10^{-29}$ & $2.989\times10^{-25}$\ H I & Br$\alpha$ & $6.318\times10^{-35}$ & $-2.427\times10^{-30}$ & $2.811\times10^{-26}$\ H I & Br$\gamma$ & $2.058\times10^{-35}$ & $-7.889\times10^{-31}$ & $9.365\times10^{-27}$\ $$\[O III\] & $4363$ & $2.609\times10^{-30}$ & $-3.828\times10^{-26}$ & $1.446\times10^{-22}$\ $$\[O III\] & $4959$ & $-4.864\times10^{-31}$ & $3.250\times10^{-25}$ & $-1.958\times10^{-21}$\ $$\[O III\] & $5007$ & $-1.443\times10^{-30}$ & $9.382\times10^{-25}$ & $-5.651\times10^{-21}$\ $$\[O III\] & $51.8\mu m$ & $1.485\times10^{-30}$ & $-5.523\times10^{-26}$ & $1.335\times10^{-21}$\ $$\[O III\] & $87.6\mu m$ & $1.841\times10^{-30}$ & $-7.676\times10^{-26}$ & $1.931\times10^{-21}$\ $$\[S II\] & $4068$ & $7.652\times10^{-31}$ & $9.599\times10^{-25}$ & $-6.249\times10^{-21}$\ $$\[S II\] & $4076$ & $2.136\times10^{-31}$ & $3.269\times10^{-25}$ & $-2.122\times10^{-21}$\ $$\[S II\] & $6717$ & $-2.001\times10^{-28}$ & $8.917\times10^{-24}$ & $-3.734\times10^{-20}$\ $$\[S II\] & $6731$ & $-1.406\times10^{-28}$ & $6.454\times10^{-24}$ & $-2.729\times10^{-20}$\ $$\[S III\] & $6312$ & $3.884\times10^{-30}$ & $5.208\times10^{-26}$ & $-5.035\times10^{-22}$\ $$\[S III\] & $9069$ & $-2.622\times10^{-29}$ & $1.079\times10^{-24}$ & $-3.318\times10^{-21}$\ $$\[S III\] & $9532$ & $-1.448\times10^{-28}$ & $5.960\times10^{-24}$ & $-1.833\times10^{-20}$\ $$\[S III\] & $18.7\mu m$ & $3.929\times10^{-30}$ & $-2.613\times10^{-25}$ & $1.681\times10^{-20}$\ $$\[S III\] & $33.5\mu m$ & $6.407\times10^{-29}$ & $-2.293\times10^{-24}$ & $4.178\times10^{-20}$\ $$\[S IV\] & $10.5\mu m$ & $6.125\times10^{-29}$ & $-3.409\times10^{-24}$ & $8.859\times10^{-20}$\
--- abstract: | Pipelines are used in a huge range of industrial processes involving fluids, and the ability to accurately predict properties of the flow through a pipe is of fundamental engineering importance. Armed with parallel MPI, Arnoldi and Newton–Krylov solvers, the [Openpipeflow ]{}code can be used in a range of settings, from large-scale simulation of highly turbulent flow, to the detailed analysis of nonlinear invariant solutions (equilibria and periodic orbits) and their influence on the dynamics of the flow.\ [**Website:**]{}  `openpipeflow.org`\ [**Reference:**]{}  SoftwareX, 6, 124-127.\ [**DOI:**]{}  10.1016/j.softx.2017.05.003 author: - | Ashley P. Willis\ [*School of Mathematics and Statistics, University of Sheffield,*]{}\ [*South Yorkshire, S3 7RH, U.K.*]{} bibliography: - 'pipes.bib' title: 'The Openpipeflow Navier–Stokes Solver' --- Motivation and significance =========================== The flow of fluid through a straight pipe of circular cross-section is a canonical setting for the study of stability, transition and properties of turbulent flow. At low flow rates, the flow everywhere is in the direction parallel to the axis of the pipe, a simple ‘laminar’ flow. At larger flow rates it typically undergoes a transition to a complex ‘turbulent’ flow, characterised by an abundance of swirling eddies. As early as 1883, Reynolds observed that the transition from laminar to turbulent flow is highly dependent on perturbations of finite amplitude to the initial flow [@R1883].[^1] Nevertheless, he also noticed that the appearance of turbulence is consistent with respect to the value of the non-dimensional combination $D\,U/\nu$, at around 2000, where $U$ is the mean axial speed, $D$ the diameter of the pipe, and $\nu$ the kinematic viscosity. This combination is the now famous Reynolds Number, $Re=D\,U/\nu$, used in a huge range of systems involving fluids, where $D$ and $U$ are typical length and velocity scales for the system. 
It has been known for some time that the Navier–Stokes equations together with the no-slip boundary conditions accurately predict the evolution of the flow pattern, e.g. the landmark prediction of supercritical transition to a roll pattern for the flow of water between rotating cylinders by G. I. Taylor [@Taylor23] (transition due to linear instability beyond a critical rotation rate). Despite this development and the legacy of the work of Reynolds, the nature of subcritical transition (transition in the absence of a linear instability) and the dynamics of pipe flow has largely remained a mystery. But much has changed following the discovery of finite-amplitude solutions to the Navier–Stokes equations, for pipe flow as recently as 2003 [@FE03]. These solutions, often referred to as ‘exact coherent states’ [@W01], are believed to embody the processes that sustain turbulence and to form a ‘skeleton’ for the dynamic paths taken by the evolving flow patterns. Comprehension of the nonlinear dynamics, particularly of transition in pipes, and likewise in Couette and channel flows, has progressed in leaps and bounds over the last decade, based on the study of these solutions. New, more general families of solutions continue to be discovered, and their unstable manifolds are just beginning to be calculated [@Pringle09; @deLMeAvHo12; @AvMeRoHo13; @ChWiKe14]. The code that has evolved into [Openpipeflow ]{} has played a significant role in the realisation of this odyssey. [[Openpipeflow ]{}offers a simpler approach than large computational fluid dynamics (CFD) packages – the aim during development has been to maintain a compact and readable code. Thus [Openpipeflow ]{}is easily adapted for a given analysis and extendible to new numerical methods.]{} The code has recently been upgraded with a substantially improved parallelisation, and continues to be augmented with new extensions, [for example large-eddy simulation (LES)]{}. 
Following the rapid expansion of computational resources that has occurred in recent times, pipe flow is a prime example of a ‘high-dimensional’ system that is receiving examination with methods previously limited to systems with only a few degrees of freedom, such as the Lorenz attractor or the Kuramoto–Sivashinsky equation; see e.g. [@AMdABH11; @WiShCv15]. In the other direction, observations from large-scale simulations of pipe flow have inspired low-order models [@barkley2015rise; @ShHsGo15]. Pipe flow also provides a simple setting for the development of computationally intensive new methods, such as adjoint optimisation techniques, e.g. [@pringle2012minimal]. Software description ==================== [Openpipeflow ]{}implements a second-order predictor-corrector scheme, with automatic time-step control, for simulation of flow on the cylindrical domain $(r,\theta,z)\,\in\,[0,1]\times[0,2\pi/m_p)\times[0,2\pi/\alpha)$, where $m_p$ and $\alpha$ are parameters that determine spatial periodicity. Variables in the Navier–Stokes equations are discretised in the form $$A(r,\theta,z)\,=\,\sum_{|k|<K}\,\sum_{|m|<M} {A_{km}(r_n)} \, \mathrm{e}^{\mathrm{i}(\alpha k z + m_p m \theta)} \, , $$ $n=1,\ldots,N$, [ where the points $r_n$ are distributed on $[0,1]$. By default the $r_n$ are located at the roots of a Chebyshev polynomial, bunched towards the boundaries to resolve large gradients that occur in the boundary layer.[^2] Derivatives in the radial dimension are calculated using finite differences, so that they may be evaluated using banded matrices. The number of points used, and hence the width of the bands, is an integer parameter; by default derivatives are calculated using 9 points, for which 1st/2nd order derivatives are calculated to 8th/7th order.]{} Following the $3/2$ dealiasing rule, the sums are evaluated on $3K\times3M$ grids in $z$ and $\theta$ respectively. 
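The 3/2 rule is easy to demonstrate in one dimension (an illustrative NumPy sketch, independent of the Fortran implementation): a product of two series with modes $m < M$ contains modes up to $2M-2$, and on a $3M$-point grid the aliased copies of those high modes land only on modes that are discarded, so the retained $M$ modes of the product are exact.

```python
import numpy as np

M = 8                               # retained modes m = 0 .. M-1 (illustrative)
rng = np.random.default_rng(0)

def to_phys(c, n):
    # evaluate a truncated Fourier series (real signal) on an n-point grid
    spec = np.zeros(n // 2 + 1, complex)
    spec[:len(c)] = c
    return np.fft.irfft(spec, n=n, norm="forward")

def product_modes(c, d, n):
    # pseudospectral product on an n-point grid, truncated back to M modes
    w = to_phys(c, n) * to_phys(d, n)
    return np.fft.rfft(w, norm="forward")[:M]

c = rng.standard_normal(M) + 1j * rng.standard_normal(M); c[0] = c[0].real
d = rng.standard_normal(M) + 1j * rng.standard_normal(M); d[0] = d[0].real

p_32rule = product_modes(c, d, 3 * M)   # 3/2-rule grid: retained modes exact
p_exact = product_modes(c, d, 8 * M)    # comfortably over-resolved reference
p_alias = product_modes(c, d, 2 * M)    # too-small grid: aliasing errors
```

The $3M$-point result matches the fully resolved reference to machine precision, while the $2M$-point product is contaminated by aliasing.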
Periodicity in $z$ is a commonplace approximation [that has been shown to capture all the relevant physics of turbulent flow [@Eggels94] and the transition to turbulent flow [@AMdABH11]]{}. The dimension $\theta$ is naturally periodic ($m_p=1$). Rotational symmetry ($m_p=2,3,...$) is often applied, [since finite-amplitude solutions typically satisfy rotational symmetry, or applied simply to reduce computational expense when the structures of interest are much smaller than the domain, e.g. near-wall vortices at large flow rates.]{} A pressure-Poisson equation (PPE) formulation is employed and an influence-matrix technique is applied for the enforcement of boundary conditions [@guseva2015transition]. Let ${\mbox{\boldmath $g$}}$ be a vector of boundary conditions, written such that ${\mbox{\boldmath $g$}}={\mbox{\boldmath $0$}}$ when they are satisfied. The influence-matrix technique has several nice features: - Alternative boundary conditions, e.g. slip or oscillations, are easily introduced by changing the single function that evaluates ${\mbox{\boldmath $g$}}$; - The usual no-slip and divergence conditions at the boundary are satisfied such that $\|{\mbox{\boldmath $g$}}\|$ is typically at the level of the machine epsilon for the given floating-point precision; - Computational overhead is negligible compared to evaluation of non-linear terms; - No stability issues have been observed. Utilities and templates for runtime- and post-processing are provided, including a Newton–Raphson solver for the calculation and continuation of invariant solutions. The Newton solver for the pipe flow, which has a multiple-shooting option (orbits may be split into multiple sections), calls a utility that implements a combined Krylov–Trust-region approach [@Visw07b]. This Newton–Krylov–Trust-region utility is designed to be integrable with any simulation code. [Openpipeflow ]{} is written in Fortran90 and uses basic modules and derived types. 
Esoteric extensions to the programming language have been deliberately avoided. The code makes use of FFTW, LAPACK and NetCDF libraries. An MPI library is required only for parallel use. ![Code structure and program interaction. The MPI library is not required if $\texttt{\_Nr}=\texttt{\_Ns}=1$. To post-process data it is sufficient for a utility to inherit the io module. To process at run time, it is possible to inherit the whole main loop. []{data-label="fig:modules"}](modules3){width="\columnwidth"} Software Architecture and Functionality --------------------------------------- See Fig. \[fig:modules\] for a schematic of the code structure and program interaction. Once parameters are set and the code built, most jobs begin with a single initial condition, `state.cdf.in`. Outputs from another job, `statennnn.cdf.dat`, usually make the best initial conditions (`nnnn` is a 4-digit numeric label). A variety of possible initial conditions are provided in the database at openpipeflow.org. Truncation or interpolation of initial conditions with a different resolution is automatic. A selection of utilities, plus templates for post-processing or runtime-processing, are described in the online manual. Implementation details ---------------------- Linear systems that originate from the implicit solution of the viscous terms in the Navier–Stokes equations are solved using banded matrices and LU-decomposition for each Fourier mode. Nonlinear terms are evaluated pseudospectrally. Parallelisation is achieved via a split into `_Nr` radial and `_Ns` axial sections, [and the work is divided over $\texttt{\_Np} = \texttt{\_Nr}\times\texttt{\_Ns}$ cores]{} (\#-defined symbols in `parallel.h`). Due to the form of the data transposes involved in the transforms between ‘collocated’ (Fourier) and physical space (`type (coll)` and `type (phys)`), the number of cores is limited to $N\times M$. In practice this limit has remained far from restrictive to date. 
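The per-Fourier-mode banded solves mentioned above can be illustrated with a toy analogue (a minimal Python sketch, not the code's 9-point Fortran stencils): backward-Euler steps of a 1-D diffusion equation, with the tridiagonal system solved by LU-style forward elimination and back-substitution (the Thomas algorithm).

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a sub-, b main-, c super-diagonal, d RHS.
    Banded LU factorisation without pivoting (safe here because the
    implicit diffusion matrix is diagonally dominant)."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# backward-Euler steps of u_t = u_xx on (0,1), u(0)=u(1)=0, u0 = sin(pi x);
# the exact solution decays as exp(-pi^2 t)
n, dt, steps = 99, 1.0e-4, 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
u = np.sin(np.pi * x)
r = dt / h**2
sub = np.full(n, -r)
main = np.full(n, 1.0 + 2.0 * r)
sup = np.full(n, -r)
for _ in range(steps):
    u = thomas(sub, main, sup, u)
```

The computed amplitude after 100 steps agrees with the analytic decay factor $\mathrm{e}^{-\pi^2 t}$ to within the expected first-order time-stepping error.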
The recent upgrade to the two-dimensional split from the one-dimensional ‘wall-normal’ split (independent 2D-FFTs) not only extends the maximum number of cores from $N$ to $N\times M$, but also reduces the number of messages that must be sent. The transform involves two stages of FFTs and transposes, but each transpose involves only `_Nr` or `_Ns` cores. [For a transpose involving $p$ cores, each core must send $p-1$ messages.]{} Therefore, choosing $\texttt{\_Nr}\approx\texttt{\_Ns} {~\approx\sqrt{\texttt{\_Np}}}$, the number of messages is $O(2\sqrt{\texttt{\_Np}})$ versus $O(\texttt{\_Np})$. This can substantially reduce time lost to latency in setting up communications. [Further details can be found on the Core Implementation page of the online manual.]{} Illustrative Examples ===================== Modelling a Coriolis force -------------------------- Does the Coriolis force, an extra force term due to the rotation of the Earth, affect the flow in experiments? [The file `utils/Coriolis.f90` is an example utility, provided with the distribution, that models this case.]{} The main loop of the core code [ already includes several calls to a null function at key points during the timestepping process; see `var_null(flag)` in `program/main.f90`. The `flag` may be used to detect the stage at which the function has been called. Here, we replace the null function with the function in `Coriolis.f90` and detect the case `flag==2`,]{} which indicates that nonlinear terms have just been evaluated. At this point we add the Coriolis forces to the nonlinear terms. Note that no changes to the core files, including `main.f90`, are necessary. 
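In spirit, the modification is a one-liner (a Cartesian toy in Python, not the cylindrical-component Fortran of `Coriolis.f90`): at the point corresponding to `flag==2`, the freshly evaluated nonlinear terms are augmented with the Coriolis acceleration $-2\,\boldsymbol{\Omega}\times\mathbf{u}$.

```python
import numpy as np

def add_coriolis(nonlin, u, Omega):
    # called where flag==2 would fire: the nonlinear terms have just been
    # evaluated, so append the Coriolis force -2 Omega x u
    return nonlin - 2.0 * np.cross(Omega, u)

# sanity check: Omega along z, flow along x  ->  force along -y
force = add_coriolis(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 0.5]))
```

The same pattern applies to any body force one wishes to hook into the timestepping loop via the null-function calls.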
Figure \[fig:Coriolis\] shows the mean axial flow profile for laminar and turbulent flow at an Ekman number $E = \nu / (2\Omega D^2)=1$ for a pipe with axis oriented east-west, [perpendicular to the rotation axis at any latitude.]{} For a pipe filled with water at 20$\,^\circ$C, this corresponds to a diameter $D$ of approximately 8.3 cm; $Re = 5300$ in all cases, U$_\mathrm{cl}$ is the centreline speed for laminar flow at the same mean flow rate, and $\mathrm{R}=D/2$ is the pipe radius. For this $Re$, laminar flow shows a substantial response, [and the profile is similar to those reported in [@draad1998earth]]{}. Turbulent flow, however, shows no asymmetry. The turbulent mean profile is indistinguishable from the documented test case [@Eggels94]. ![Response of flow at $Re=5300$ to a Coriolis force. Solid: Laminar flow, $E\to\infty$ (no rotation). Short-dash: Laminar flow, $E=1$. Long-dash: Turbulent flow, $E=1$. []{data-label="fig:Coriolis"}](Coriolisbw){width="0.7\columnwidth"} Unstable manifold of a travelling wave solution ----------------------------------------------- A travelling wave solution is an equilibrium when considered in a frame moving at its phase speed. In this case we consider the ‘upper branch’ solution known as N2\_ML, Fig. \[fig:TWvis\], which in its symmetry subclass has a single unstable complex eigenvalue, [$0.00620+0.0183\,\mathrm{i}\,(\mathrm{U}_\mathrm{cl}/\mathrm{R})$ (after one rotation a perturbation expands by a factor of $8.4$)]{}; $Re=2400$, $\alpha=1.25$, $m_p=2$; see [@ACHKW11] for further details.
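Both numbers quoted above can be reproduced with short arithmetic checks (a sketch; the kinematic viscosity of water at 20 °C and the Earth's rotation rate are standard reference values, not taken from this text):

```python
import math

# Pipe diameter for Ekman number E = nu / (2 * Omega * D^2) = 1,
# water at 20 C (nu ~ 1.0e-6 m^2/s), Earth rotation Omega ~ 7.29e-5 rad/s.
nu = 1.0e-6
Omega = 7.29e-5
D = math.sqrt(nu / (2.0 * Omega))
print(round(D * 100, 1))   # diameter in cm -> 8.3

# Growth over one rotation of the unstable eigenvalue
# 0.00620 + 0.0183i (U_cl/R): period T = 2*pi/0.0183,
# amplification factor exp(0.00620 * T).
growth = math.exp(0.00620 * 2.0 * math.pi / 0.0183)
print(round(growth, 1))    # -> 8.4
```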
For a given nearby state, the Newton–Krylov utility (`newton.f90`) can find such solutions and output their leading unstable eigenvectors [(solution `state1000.cdf.dat` and real and imaginary parts of the leading eigenvector, `state1001.cdf.dat`, `state1002.cdf.dat`; available at the online Database).]{} To visualise the unstable manifold, we use a utility (`addstates.f90`) to add small multiples [($\approx10^{-4}\times$)]{} of the real part of the eigenvector to the solution, then use these as initial conditions (`state.cdf.in`) for a set of simulations. Figure \[fig:spiral\] shows a projection of the unstable manifold of N2\_ML, as an outward spiral, with deformation at larger amplitudes due to nonlinearity. The coordinates are the kinetic energy $E$, energy input from the applied pressure gradient $I$, and energy dissipation $D$, each normalised by their respective value for laminar flow [(columns of output `vel_totEID.dat`)]{}. ![N2\_ML solution. (blue) Slow ‘streaks’ – axial flow slower than the mean flow profile by $>0.07$. (yellow and green) ‘vortices’ – axial vorticity $>0.2$ and $<-0.2$.[]{data-label="fig:TWvis"}](7901uzwz.png){width="\columnwidth"} ![Projection of the unstable manifold of the N2\_ML travelling wave solution.[]{data-label="fig:spiral"}](spiral){width="\columnwidth"} Impact and conclusions ====================== The [Openpipeflow ]{}solver aims to provide a fast but flexible code that can be used for state-of-the-art research in the study of turbulent flows and transition. Pipe flow is a classical setting for the development of methods for modelling and analysing dynamical systems, and [Openpipeflow ]{}has been used by several groups around the world to make an important contribution to developments in our understanding of subcritical transition, e.g. [@AvMeRoHo13; @ChWiKe14; @barkley2015rise; @ShHsGo15; @Pringle09; @WillKer08]. From these developments have arisen many new opportunities.
From the theoretical viewpoint, open issues concern the role of newly discovered equilibria and periodic orbits. Such states are believed to provide a skeleton for the dynamics, but describing the topology of the state space for turbulence remains a challenging and active area. Pipe flow, and the study of shear flows in general, draw interest from a range of branches of mathematics and theoretical physics, e.g. pattern formation, control theory, statistical physics and experimental physics. It is an active area of cross-fertilisation for the development of mathematical and numerical methods. From a more practical viewpoint, the dynamical systems approach is being applied in the modelling of other important flows, e.g. flows of fluids of complex rheology (such as stress-dependent viscosity), particulate flows and multiphase flows. The study of ‘high Reynolds number’ flows is also being influenced via application of dynamical systems techniques using LES. [Openpipeflow ]{}stands well placed to make an increasingly valuable contribution to this effort. Alongside the application of methods drawn from chaos theory, extensions to [Openpipeflow ]{}have just been added for shear-thinning fluids and LES, for example. From a research perspective, plenty of exciting new developments are in the pipeline. Acknowledgements {#acknowledgements .unnumbered} ================ The author would like to acknowledge John Gibson (`channelflow.org`), Predrag Cvitanović (`chaosbook.org`), Rich Kerswell and many others for help and inspiration. The author is also very grateful for financial support from EPSRC GR/S76144/01, EP/K03636X/1 and the E.U. FP7PIEF-GA-2008-219-233. This article was completed at the Kavli Institute for Theoretical Physics, supported in part under Grant No. NSF PHY11-25915. [^1]: Reynolds referred to what we now call ‘laminar’ and ‘turbulent’ flows by ‘direct’ and ‘sinuous’ flow, respectively. 
[^2]: [Optionally the $r_n$ may be read in from a file, `mesh.in`. In LES simulations, for example, it may be desirable to specify the distribution of points with respect to the position of the turbulent buffer layer.]{}
--- abstract: 'Given any positive integers $m$ and $d$, we say that a sequence of points $(x_i)_{i\in I}$ in ${{\mathbb R}}^m$ is [*Lipschitz-$d$-controlling*]{} if one can select suitable values $y_i\; (i\in I)$ such that for every Lipschitz function $f:{{\mathbb R}}^m\rightarrow {{\mathbb R}}^d$ there exists $i$ with $|f(x_i)-y_i|<1$. We conjecture that for every $m\le d$, a sequence $(x_i)_{i\in I}\subset{{\mathbb R}}^m$ is $d$-controlling if and only if $$\sup_{n\in{{\mathbb N}}}\frac{|\{i\in I\, :\, |x_i|\le n\}|}{n^d}=\infty.$$ We prove that this condition is necessary and a slightly stronger one is already sufficient for the sequence to be $d$-controlling. We also prove the conjecture for $m=1$.' author: - 'Andrey Kupavskii[^1]' - 'János Pach[^2]' - 'Gábor Tardos[^3]' date: - - title: Controlling Lipschitz functions --- Introduction ============ The following question, in some sense dual to Tarski’s famous plank problem [@Ta32; @McMS14; @Mo32], was raised by László Fejes Tóth [@FT74]: What is the “sparsest” sequence of points in the plane with the property that every straight line $\ell$ comes closer than $1$ to at least one of its points? Erdős and Pach [@EP80] answered this question by showing that for every infinite sequence of positive numbers $(r_i)_{i\in I}$, one can find points $p_i$ with $|p_i|=r_i$ such that every line $\ell$ passes at distance less than $1$ from $p_i$, for at least one $i\in I$, if and only if $\lim_{n\to \infty} r_n = \infty$ and $\sum_{i\in I}\frac{1}{r_i}=\infty.$ Makai and Pach [@MaP83] proposed a closely related, but more general question. 
Given a family $\cal F$ of real functions $f:{{\mathbb R}}\to{{\mathbb R}}$, we say that an infinite sequence $x_i, i\in I,$ is [*$\cal F$-controlling*]{} if one can choose reals $y_i, i\in I,$ such that the graph of any function $f\in{\cal F}$ “comes close” to at least one of the points $p_i=(x_i,y_i), i\in I,$ in the sense that $$|f(x_i)-y_i|<1\;\;\; \mbox{\rm holds for some}\;\;\; i\in I.$$ In particular, they proved that if $\cal F$ is the family of all linear functions $f(x)=a_0+a_1x\; (a_0,a_1\in {{\mathbb R}})$, a sequence of numbers $x_i\ge 1$ is $\cal F$-controlling if and only if $\sum_{i\in I}\frac{1}{x_i}=\infty.$ Kupavskii and Pach [@KP16] managed to generalize this statement to the case where $\cal F$ consists of all polynomials $f(x)=a_0+a_1x+a_2x^2+\ldots+a_kx^k$ of degree at most $k$, for some positive integer $k$. In this case, the corresponding necessary and sufficient condition is $\sum_{i\in I}\frac{1}{x_i^k}=\infty.$ The aim of this note is to investigate the analogous problem for another interesting class of functions. Given two positive integers $m$ and $d$, let ${\cal L}(m,d)$ denote the class of [*Lipschitz functions*]{} from ${{\mathbb R}}^m$ to ${{\mathbb R}}^d$, that is, the class of functions for which there exists a constant $C$ such that $$|f(x)-f(x')|\le C|x-x'|\;\;\; \mbox{\rm for all}\;\;\; x,x'\in{{\mathbb R}}^m.$$ If a function $f$ satisfies the condition above with a fixed $C>0$, then $f$ is called a [*$C$-Lipschitz*]{} function (or a function with [*Lipschitz constant*]{} $C$). Note that in this definition we can use any norm equivalent to the Euclidean norm. Throughout this note, we will work with the maximum norm. For convenience, $|.|$ will stand for the [*maximum norm*]{}, and the word “ball” will refer to a ball in the maximum norm, that is, a cube. 
[**Definition.**]{} *Given a [*function*]{} $f: {{\mathbb R}}^m\to {{\mathbb R}}^d$ and two points $x\in{{\mathbb R}}^m, y\in{{\mathbb R}}^d,$ we say that the pair $(x,y)$ [*controls*]{} $f$ if $|f(x)-y|<1$.* An infinite [*sequence*]{} $(x_i)_{i\in I}$ of points in ${{\mathbb R}}^m$ is said to be [*${\cal L}(m,d)$-controlling*]{} or, in short, [*$d$-controlling*]{} if one can choose points $(y_i)_{i\in I}$ in ${{\mathbb R}}^d$ such that for every $f\in {\cal L}(m,d)$ there exists $i$ such that $(x_i,y_i)$ controls $f$. It follows from the definition that replacing the condition $|f(x)-y|<1$ by the inequality $|f(x)-y|<\varepsilon$ for any fixed $\varepsilon>0$ does not affect whether a sequence is $d$-controlling. To see this, it is enough to notice that $f\in{\cal L}(m,d)$ if and only if $\varepsilon f\in{\cal L}(m,d)$, and that $(x_i,y_i)$ controls $f\in{\cal L}(m,d)$ if and only if $|(\varepsilon f)(x_i)-(\varepsilon y_i)|<\varepsilon.$ Obviously, if a sequence is $d$-controlling, then it is also $d'$-controlling for every $1\le d'\le d$. Indeed, ${{\mathbb R}}^{d'}$ can be regarded as a subspace of ${{\mathbb R}}^d$, so every Lipschitz function from ${{\mathbb R}}^m$ to ${{\mathbb R}}^{d'}$ is a Lipschitz function from ${{\mathbb R}}^m$ to ${{\mathbb R}}^d$. We solve a problem in [@MaP83] by giving, for any $d$, a necessary and sufficient condition for a sequence of points in ${{\mathbb R}}$ to be $d$-controlling ($m=1$). We conjecture that this result generalizes to sequences of points in ${{\mathbb R}}^m$, for any $m\le d$, but we can prove only a slightly weaker statement. The following theorem gives a necessary condition. A somewhat weaker result was established in [@MaP83 Theorem 3.6A] (it is stated in the concluding remarks of this note). \[nec\] Let $m, d$ be positive integers. 
If a sequence of points $(x_i)_{i\in I}$ in ${{\mathbb R}}^m$ is $d$-controlling, then we have $$\sup_{n\in{{\mathbb N}}}\frac{|\{i\in I\, :\, |x_i|\le n\}|}{n^d}=\infty.$$ Our next result shows that for $m=1$, the necessary condition in Theorem \[nec\] is also sufficient for a sequence of points in ${{\mathbb R}}$ to be $d$-controlling. \[spec\] Let $d$ be a positive integer. A sequence of points $(x_i)_{i\in I}$ in ${{\mathbb R}}$ is $d$-controlling if and only if $$\sup_{n\in{{\mathbb N}}}\frac{|\{i\in I\, :\, |x_i|\le n\}|}{n^d}=\infty.$$ For $m>d$, the condition in Theorems \[nec\] and \[spec\] is necessary, but [*not sufficient*]{} for a sequence in ${{\mathbb R}}^m$ to be $d$-controlling. To see this, observe that the sequence $(x_i)_{i\in I}$ consisting of all integer points in ${{\mathbb R}}^m$ satisfies the condition for all $d<m$. Nevertheless, this sequence is not even $1$-controlling. Indeed, for any function $h:I\to\{-1,1\}$, there exists a 2-Lipschitz function $f_h:{{\mathbb R}}^m\to{{\mathbb R}}$ for which $f_h(x_i)=h(i)$ for all $i\in I$. For any sequence of reals $(y_i)$, choose $\bar{h}(i)\in\{-1,1\}$ so that $|\bar{h}(i)-y_i|\ge 1$ for every $i\in I$, and notice that $f_{\bar{h}}$ is not controlled by any pair $(x_i,y_i)$. However, we believe that for $m\le d$, the above condition is not only necessary but also sufficient for a sequence in ${{\mathbb R}}^m$ to be $d$-controlling. \[conj1\] Let $m, d$ be positive integers, $m\le d$. A sequence of points $(x_i)_{i\in I}$ in ${{\mathbb R}}^m$ is $d$-controlling if and only if $$\sup_{n\in{{\mathbb N}}}\frac{|\{i\in I\, :\, |x_i|\le n\}|}{n^d}=\infty.$$ We cannot prove this conjecture for $m>1$, but we can formulate a slightly stronger condition that is already sufficient for a sequence to be $d$-controlling, provided that $m\le d$. \[gen\] Let $m, d$ be positive integers, $m\le d$. 
Suppose that a sequence of points $(x_i)_{i\in I}$ in ${{\mathbb R}}^m$ satisfies the following condition for every positive $\alpha$: The set of all points $x\in{{\mathbb R}}^m$ with $$|\{i\in I\, :\,|x_i-x|<\alpha\}|<|x|^{d-m}$$ is bounded. Then the sequence $(x_i)_{i\in I}$ is $d$-controlling. For any $\beta>\alpha>0$, the region $\{x\in {{\mathbb R}}^m:\beta\le |x|\le 2\beta\}$ contains at least some positive constant $c=c(m)$ times $(\beta/\alpha)^m$ pairwise disjoint balls of radius $\alpha$ (that is, cubes of side length $2\alpha$, in the maximum norm). If the condition of the last theorem is satisfied, then each of these balls contains at least $\beta^{d-m}$ points $x_i$, provided that $\beta>\beta(\alpha)$ is sufficiently large. Thus, in this case, $$|\{i\in I\, :\, |x_i|\le 2\beta\}|\ge c(\beta/\alpha)^m\beta^{d-m}=(c/\alpha^m)\beta^d.$$ Letting $\alpha\rightarrow 0,$ we obtain that the condition in Conjecture \[conj1\] also holds. Roughly speaking, the condition in Theorem \[gen\] is equivalent to the condition in Conjecture \[conj1\] for “uniformly distributed” sequences $x_i$, but the two conditions differ when the density of the point sequence depends “unevenly” on the location. We remark that our Theorem \[nec\] differs from Theorem 3.6A in [@MaP83] in the same sense: for “uniform” sequences the two statements are equivalent, but in general they are not. The exponent of $|x|$ in the right-hand side of the displayed formula in Theorem \[gen\] cannot be replaced by any smaller number, as follows from Theorem \[nec\]. Theorem \[gen\] disproves a conjecture from [@MaP83]; see the Remark at the end of Section 4.\ It is easy to see that the sufficient condition stated in Theorem \[gen\] is [*not necessary*]{} even if $m=1$ and $d$ is arbitrary. The sequence of points consisting of $k2^{kd}$ copies of $2^k\in{{\mathbb R}}$ for every positive integer $k$, satisfies the condition of Theorem \[spec\] and is, therefore, $d$-controlling. 
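For this example the density condition can be verified directly: writing $n=2^K$ for a positive integer $K$, $$\frac{|\{i\in I\, :\, |x_i|\le 2^K\}|}{2^{Kd}}=\frac{1}{2^{Kd}}\sum_{k=1}^{K}k2^{kd}\ge K,$$ since the last term of the sum alone is $K2^{Kd}$; hence the supremum over $n$ is indeed infinite.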
On the other hand, apart from those $x\in{{\mathbb R}}$ that are closer than $\alpha$ to some power of $2$, every $x\not=0$ satisfies the inequality in Theorem \[gen\]. The set of these $x$ is [*unbounded*]{}, thus Theorem \[gen\] is not applicable. Since every $d$-controlling sequence of points in ${{\mathbb R}}$ can be regarded as a $d$-controlling sequence of points in ${{\mathbb R}}^m$ for any $m>1$, we obtain that the sufficient condition stated in Theorem \[gen\] is not necessary for a sequence to be $d$-controlling, for any values of $m$ and $d$. Nevertheless, for some “natural” classes of sequences, the two conditions are equivalent, that is, Conjecture \[conj1\] holds. For instance, let $m\le d$ and $c>0$ be fixed, and consider the sequence of all points $(x_i)_{i\in I}$ in ${{\mathbb R}}^m$ each of whose coordinates is the $c$-th power of some natural number. It is easy to see that this sequence satisfies both the condition in Conjecture \[conj1\] and the one in Theorem \[gen\] if $c<m/d$ and neither of them, otherwise. Concerning the case $m>d$, we have a conjecture that (roughly speaking) states that a sequence in ${{\mathbb R}}^m$ is $d$-controlling if and only if there is a $d$-dimensional Lipschitz surface passing through a subset of its points that already guarantees this property. The precise statement can be formulated for every $m$ and $d$, but for $m\le d$ the conjecture is obviously true. \[conj2\] Let $m, d$ be positive integers. A sequence of points $(x_i)_{i\in I}$ in ${{\mathbb R}}^m$ is $d$-controlling if and only if there exist a Lipschitz map $g:{{\mathbb R}}^d\to{{\mathbb R}}^m$ and a $d$-controlling sequence of points $(x'_i)_{i\in I'}$ in ${{\mathbb R}}^d$ with $I'\subseteq I$ such that $g(x'_i)=x_i$ for all $i\in I'$. The “if” part of the conjecture is trivially true. Indeed, suppose that a sequence $(y_i)_{i\in I'}$ in ${{\mathbb R}}^d$ shows that $(x'_i)_{i\in I'}$ is $d$-controlling. 
Then the same sequence also shows that the sequence of points $(x_i)_{i\in I'}$ in ${{\mathbb R}}^m$ is $d$-controlling. To see this, take any Lipschitz function $f:{{\mathbb R}}^m\to{{\mathbb R}}^d$, and observe that $f\circ g:{{\mathbb R}}^d\to{{\mathbb R}}^d$ is also a Lipschitz function. Thus, we have $|f(x_i)-y_i|=|f(g(x'_i))-y_i|<1$ for some $i\in I'$. The “only if” part of the conjecture evidently holds for $m\le d$. Indeed, choose $g:{{\mathbb R}}^d\to{{\mathbb R}}^m$ to be the projection to the subspace induced by the first $m$ coordinates, set $I'=I$ and $x'_i=x_i\times0^{d-m}\in{{\mathbb R}}^d$ for every $i\in I$. The important part of the conjecture is the “only if” direction where $m>d$. The proofs of Theorems \[nec\], \[spec\], and \[gen\] are presented in Sections 2, 3, and 4, respectively. Proof of Theorem \[nec\] ======================== As mentioned in the introduction, a somewhat weaker statement (Theorem 3.6A) was proved in [@MaP83]. Here we extend the proof to the general case. Consider a sequence $(x_i)_{i\in I}$ that violates the condition in the theorem, that is, for which $$\sup_{n\in{{\mathbb N}}}\frac{|\{i\in I\, :\,|x_i|\le n\}|}{n^d}<\infty.$$ Given any sequence $(y_i)_{i\in I}$ of points in ${{\mathbb R}}^d$, we have to find a Lipschitz function $f\in{\cal L}(m,d)$ from ${{\mathbb R}}^m$ to ${{\mathbb R}}^d$ that is not controlled by any of the pairs $(x_i,y_i), i\in I$. We will find such a function $f$ with the property that $f(x)=g(|x|)$ for some Lipschitz function $g:{{\mathbb R}}\to{{\mathbb R}}^d$. Then it is enough to guarantee that no pair $(|x_i|,y_i)$ controls $g$. In other words, it is enough to prove the statement for $m=1$. For technical reasons, we deal separately with the indices $i$ for which $x_i=0$. Let $k$ denote the number of such indices. It follows from the assumption that $k$ is finite. 
Suppose without loss of generality that the index set $I$ is the set of integers larger than $-k$ and that $|x_i|$ is monotonically increasing in $i$ with $\lim_{i\rightarrow\infty}|x_i|=\infty$. Thus, we have $$\begin{array}{lcr} x_i=0 & \text{if}\ i\le 0,\\ |x_i|>0 & \text{if}\ i>0. \end{array}$$ Let $\alpha=\sup_{i>0}\frac{i}{|x_i|^d}$ and $\beta=2\alpha^{1/d}$. For $t\in {{\mathbb R}}$, denote by $\lfloor t\rfloor$ and $\lceil t\rceil$ the lower and the upper integer part of $t$, respectively. Define a real number $\gamma:=\max_{j>0}\frac{\lceil |x_j|\rceil^d}{|x_j|^d}$. It is easy to see that $\gamma$ is finite and at least $1$. Notice that $\alpha<\infty$ and, hence, $\beta<\infty$, because $$\alpha=\sup_{i>0}\frac{i}{|x_i|^d}\le \sup_{i>0}\frac{|\{j\in I\, :\,|x_j|\le |x_i|\}|}{|x_i|^d}\le \gamma \sup_{n\in{{\mathbb N}}}\frac{|\{j\in I\, :\,|x_j|\le n\}|}{n^d}<\infty.$$ In what follows, we define a nested sequence ${{\cal L}}_0\supseteq{{\cal L}}_1\supseteq{{\cal L}}_2\supseteq\ldots$ of families of $\beta$-Lipschitz functions from ${{\mathbb R}}$ to ${{\mathbb R}}^d$, we show that their intersection is nonempty, and any function $g\in\bigcap_{i\ge 0}{{\cal L}}_i$ meets the requirements. Fix a point $y\in{{\mathbb R}}^d$ such that $|y-y_j|>1$ for every $j\le0$. Let ${{\cal L}}_0\subset{{\cal L}}(1,d)$ denote the family of all $\beta$-Lipschitz functions $g:{{\mathbb R}}\to{{\mathbb R}}^d$ with $g(0)=y$. By the choice of $y$, no function $g\in{{\cal L}}_0$ is controlled by any of the points $(|x_j|,y_j)$ with $j\le0$. ![image](Unblocklip2.pdf){width="120mm"}\[pic1\] Figure 1: The case $d=1$. For every $i>0$, let ${{\cal L}}_i$ be defined as the set of all functions in ${{\cal L}}_0$ that are not controlled by any of the pairs $(|x_j|,y_j)$ with $j\le i$, and let $$D_i=\{g(|x_i|)\, :\, g\in{{\cal L}}_i\}.$$ See Fig. 1 for an illustration of the case $d=1$. (The points $(x_i,y_i)$ are marked red. 
If a $\beta$-Lipschitz function belongs to ${{\cal L}}_i$, its graph cannot intersect the yellow region incident to $(x_i,y_i)$.) We establish a lower bound for the Lebesgue measures $\mu(D_i)$ of the sets $D_i$. [**Claim 2.1.**]{} *For every $i\ge0$, we have $\mu(D_i)\ge 2^{d+1}\alpha|x_i|^d-2^di$.* [*Proof.*]{} By induction on $i$. For $i=0$, we have $D_0=\{y\}$, which is a nonempty set of zero measure. It follows from the definition of $\alpha$ that the bound in Claim 2.1 is strictly positive for every $i>0$. Assume that we have already verified the Claim for some $i\ge0$, and we want to prove it for $i+1$. Let $D'=\{g(|x_{i+1}|)\, :\, g\in{{\cal L}}_i\}$. Clearly, $D'$ can be obtained as the Minkowski sum of $D_i$ and the ball $B_r=B_r(0)$ of radius $r=\beta(|x_{i+1}|-|x_i|)$ around the origin. On the other hand, we have $D_{i+1}=D'\setminus B_1(y_{i+1})$, where $B_1(y_{i+1})$ denotes the ball of radius $1$ around $y_{i+1}$. Therefore, $$\mu(D_{i+1})\ge\mu(D')-\mu(B_1(y_{i+1}))=\mu(D_i+B_r)-\mu(B_1(y_{i+1})).$$ By the Brunn-Minkowski inequality, we have $$\mu(D_i+B_r)\ge(\mu^{1/d}(D_i)+\mu^{1/d}(B_r))^d.$$ Combining the last two inequalities, $$\mu(D_{i+1})\ge(\mu^{1/d}(D_i)+\mu^{1/d}(B_r))^d-\mu(B_1(y_{i+1})).$$ As we use the maximum norm, we have $\mu(B_1(y_{i+1}))=2^d$ and $\mu(B_r)=2^dr^d$. Using the inductive hypothesis, we get the following chain of implications. $$\begin{aligned} {2} \phantom{\geq} \mu(D_{i+1}) & \ge 2^{d+1}\alpha|x_{i+1}|^d-2^d(i+1) &&\Leftarrow \\ \big((2^{d+1}\alpha|x_i|^d-2^di)^{\frac 1d}+2r\big)^d-2^d &\ge 2^{d+1}\alpha|x_{i+1}|^d-2^d(i+1)\ \ \ &&\Leftrightarrow \\ \big((2\alpha|x_i|^d-i)^{\frac 1d}+r\big)^d &\ge 2\alpha|x_{i+1}|^d-i\ \ \ &&\Leftrightarrow \\ \beta(|x_{i+1}|-|x_i|) &\ge (2\alpha|x_{i+1}|^d-i)^{\frac 1d}-(2\alpha|x_i|^d-i)^{\frac 1d}, &&\end{aligned}$$ where $\beta=2\alpha^{1/d},$ as before. By the definition of $\alpha$, we have $2\alpha x^d-i\ge \alpha x^d$ for every $x\ge |x_i|$. Consider the function $f(x):=(2\alpha x^d-i)^{1/d}$. 
Then $$f'(x)= 2\alpha\frac{x^{d-1}}{(2\alpha x^d-i)^{\frac{d-1}d}}\le 2\alpha \frac{x^{d-1}}{(\alpha x^d)^{\frac{d-1}d}}=2\alpha^{1/d},$$ for every $x\ge |x_i|$. Therefore, the last inequality of the chain holds, and so does the first one, as claimed. Q.E.D. In particular, it follows from Claim 2.1 that $D_i\not=\emptyset$ and, hence, ${{\cal L}}_i$ is not empty for every $i\ge0$. To complete the proof of Theorem \[nec\], it is enough to note that the set ${{\cal L}}_0$ is [*compact*]{} in the topology of pointwise convergence and that each ${{\cal L}}_i$ is a closed, nonempty subset of ${{\cal L}}_0$. Since the sets ${{\cal L}}_i$ are nested, compactness yields $\bigcap_{i\ge 0}{{\cal L}}_i\neq\emptyset$. By definition, no function $g\in \bigcap_{i\ge 0}{{\cal L}}_i$ is controlled by any pair $(|x_i|,y_i)$, as required. Proof of Theorem \[spec\] ========================= The “only if” part of the theorem is a special case of Theorem \[nec\]. Thus, we have to prove only the “if” part. Let $(x_i)_{i\in{{\mathbb N}}}$ be a sequence of real numbers satisfying the “density condition” $$\sup_{n\in{{\mathbb N}}}\frac{|\{i\in I\, :\,|x_i|\le n\}|}{n^d}=\infty.$$ Split this sequence into two sequences, one consisting of the nonnegative numbers and the other consisting of the negative ones. At least one of these two sequences must satisfy the above density condition, so we can assume without loss of generality that, say, $x_i\ge0$ for all $i$. If $(x_i)_{i\in{{\mathbb N}}}$ has a convergent subsequence $(x_{i_j})_{j\in {{\mathbb N}}}\rightarrow x$, as $j\rightarrow\infty$, then choose any sequence of points $(y_j)_{j\in{{\mathbb N}}}$, everywhere dense in ${{\mathbb R}}^d$. Obviously, every Lipschitz function $f:{{\mathbb R}}\rightarrow{{\mathbb R}}^d$ is controlled by infinitely many pairs $(x_{i_j},y_j)$. Therefore, we can assume without loss of generality that $(x_i)_{i\in{{\mathbb N}}}$ is an increasing sequence of nonnegative numbers, tending to infinity. We need a simple statement about a finite portion of the sequence $(x_i)$. Fix a positive integer $j$. 
Let ${{\cal L}}_j$ denote the family of $j$-Lipschitz functions $f:{{\mathbb R}}\to{{\mathbb R}}^d$ with $|f(0)|\le j$. (Note that this deviates from the definition of ${{\cal L}}_j$ used in the previous section.) We also fix $n\in{{\mathbb N}}$. Let $k=k(j,n)=(j(n+1)+1)^d$, and assume that $x_i\le n$ for $1\le i\le k$. Since we use the maximum norm in ${{\mathbb R}}^d$, the ball (cube) $B_r$ of radius $r=j(n+1)$ around the origin can be uniquely partitioned into $k$ balls of radius $r'=\frac{r}{r+1}<1$. Let the centers of these balls be denoted by $z_i$ and the balls themselves by $B_{r'}(z_i),\;1\le i\le k.$ Index the centers $z_i$ decreasingly with respect to the lexicographic order. For every $i, 1\le i\le k$, set $$y_i=z_i-j(n-x_i)v,$$ where $v$ is the all-$1$ vector in ${{\mathbb R}}^d$. See Fig. 2, which depicts the case $d=1$. ![image](Blocklip.pdf){width="120mm"}\[pic2\] Figure 2. [**Claim 3.1.**]{} [*Any function $f\in{{\cal L}}_j$ is controlled by one of the pairs $(x_i,y_i), \; 1\le i\le k.$*]{} [*Proof.*]{} Let $f$ be an arbitrary element of ${{\cal L}}_j$. Notice that the function $g(x)=f(x)+j(n-x)v$ is monotonically decreasing in all of its coordinates, and that $g(x)\in B_r$ for every $x\in [0,n].$ Consider the set $S$ of all indices $1\le i\le k$ such that $g(x_i)$ is contained in a ball $B_{r'}(z_{i'})$ for some $1\le i'\le i$. As $g(x_k)$ is in $B_r$, it belongs to a ball $B_{r'}(z_{i'})$ for some $1\le i'\le k$. Therefore, $k\in S$, so that the set $S$ is not empty. Let $i_0$ denote the smallest element of $S$. Then we have $g(x_{i_0})\in B_{r'}(z_{i_0})$. Indeed, otherwise $g(x_{i_0})\in B_{r'}(z_{i'})$ for some index $i'<i_0$. Thus, $i_0>1$. Using the monotonicity of $g$ and the monotonicity of the sequences $(x_i)_{1\le i\le k}$ and $(z_i)_{1\le i\le k}$, we obtain that $g(x_{i_0-1})\in B_{r'}(z_{i''})$ for some $i''\le i'\le i_0-1$, contradicting the minimality of $i_0$. 
Hence, $$1>r'\ge|g(x_{i_0})-z_{i_0}|=|f(x_{i_0})-y_{i_0}|.$$ This means that $(x_{i_0},y_{i_0})$ controls $f$, as claimed. Q.E.D. Now we can easily finish the proof of Theorem \[spec\]. We need to show that the sequence $(x_i)_{i\in{{\mathbb N}}}$ is $d$-controlling. To control all functions in ${{\cal L}}_j$ for a fixed $j$, pick an $n=n(j)$ such that for at least $k=(j(n+1)+1)^d$ distinct indices $i$ we have $x_i\le n$. It follows from the density condition that such an $n$ exists. By Claim 3.1, we can choose $y_i$ for $k$ distinct indices $i$ such that every function in ${{\cal L}}_j$ is controlled by one of the $k$ pairs $(x_i,y_i)$. Repeat this step successively for $j=1,2,\ldots$, making sure that we always use pairwise disjoint sets of indices. This is possible, because removing any finite number of elements from $(x_i)_{i\in {{\mathbb N}}}$, the remaining sequence still satisfies the density condition. Since every Lipschitz function ${{\mathbb R}}\to{{\mathbb R}}^d$ belongs to one of the classes ${{\cal L}}_j$, after completing the above process for all $j\in{{\mathbb N}}$, all Lipschitz functions ${{\mathbb R}}\to{{\mathbb R}}^d$ will be controlled by one of the pairs $(x_i,y_i)$. This proves Theorem \[spec\]. Proof of Theorem \[gen\] ======================== As in the proof of Theorem \[spec\], for every positive integer $j$, ${{\cal L}}_j$ denotes the family of $j$-Lipschitz functions $f:{{\mathbb R}}^m\to{{\mathbb R}}^d$ with $|f(0)|\le j$. As we did in that proof, we fix $j$ and show that one can control ${{\cal L}}_j$ using only finitely many points $x_i$. To complete the proof of Theorem \[gen\], we perform this step for $j=1,2,\ldots$, sequentially, observing that the density condition in the theorem continues to hold even if we delete any finite number of points $x_i$ from our sequence. The proof of Theorem \[gen\] is based on a topological lemma. We consider a continuously moving set $D$ that leaves a ball $B\subset {{\mathbb R}}^d$. 
By continuity, each point of $D$ must cross the boundary of the ball. Using Brouwer’s fixed point theorem, we find a point $z\in D$ that crosses the boundary at a point with a special property. See Figure 3 for an illustration. The color gradation distinguishes different points of $D$, that is, points of the same color indicate the trajectory of a point as it progresses in time $t$. ![image](SquareJanos4.png){width="160mm"}\[pic3\] Figure 3. [**Lemma 4.1.**]{} *Let $d$ be a positive integer. Let $B$ denote a closed ball of positive radius around the origin in ${{\mathbb R}}^d$, and let $S$ stand for the boundary of $B$. Let $J=[t_0,t_1]$ be a closed interval on the real line, let $D$ be an arbitrary topological space, and let $f:D\times J\to{{\mathbb R}}^d$ and $g:B\to D$ be continuous functions.* If $f(z,t_0)\in B\setminus S$ and $f(z,t_1)\notin B$ for all $z\in D$, then there exist $z\in D$ and $t\in (t_0,t_1)$ such that $f(z,t)\in S$ and $g(f(z,t))=z$. [*Proof.*]{} Denote by $l$ the radius of $B$. Let $B'=B\times J\subset{{\mathbb R}}^{d+1}$, and let $h:B'\to{{\mathbb R}}^{d+1}$ be defined as $h(y,t)=(y',t')$, where $y'=f(g(y),t)$ and $t'=t-|y'|+l$. Let $c:{{\mathbb R}}^{d+1}\to B'$ be a coordinate-wise retraction; to be specific, let $c(y,t)=(\min(1,l/|y|)\cdot y,\min(t_1,\max(t_0,t)))$. Finally, let $\bar h:B'\to B'$ be the composition of these functions: $\bar h(y,t)=c(h(y,t))$. Clearly, $\bar h$ is continuous and $B'$ is homeomorphic to the $(d+1)$-dimensional ball. Thus, we can apply Brouwer’s fixed point theorem to conclude that there exists $(y,t)\in B'$ with $\bar h(y,t)=(y,t)$. Let $z=g(y)$ and $(y',t')=h(y,t)$. If $t=t_0$, then the second coordinate of $c(y',t')$ equals $t_0$. Hence, either we have $t'=t_0$ or $t'$ was retracted to $t_0$ from the left. Since $t'\le t$, we have $|y'|\ge l$, and thus $f(z,t_0)\notin B\setminus S$, contradicting our assumption. 
Analogously, if $t=t_1$, then $t'\ge t$, so $|y'|\le l$, implying that $f(z,t_1)\in B$, which is again a contradiction. Consequently, we must have $t_0<t<t_1$. Using the fact that $c$ is a retraction, we obtain that $t'=t$, so $|y'|=l$ and $y'=f(z,t)\in S$. We must also have $y'=y$, which implies that $g(f(z,t))=g(y)=z$. Q.E.D. To apply the lemma, we think of ${{\mathbb R}}^m$ as a product space ${{\mathbb R}}^{m-1}\times {{\mathbb R}}$, with the last coordinate considered as time. Let $D\subset {{\mathbb R}}^{m-1}$ and $B\subset {{\mathbb R}}^d$ be balls, let $J=[t_0,t_1]$ be an interval, and let $g:B\to D$ be a linear map (see Fig. 3). Consider any $j$-Lipschitz function $f:{{\mathbb R}}^m\to{{\mathbb R}}^d\; (m\le d),$ and focus our attention on the restriction of $f$ to $D\times J$. In order to apply Lemma 4.1, we choose $B$ large enough to make sure that $f(z,t_0)$ lies in the interior of $B$ for all $z\in D$. By the lemma, we can either find $z\in D$ such that $x=(z,t_1)$ satisfies $y=f(x)\in B$, or there exists $x=(z,t)\in D\times J$ such that $y=f(x)$ belongs to the boundary of $B$ and $g(y)=z$. Our goal is to find sufficiently many indices $i\in I$ with $x_i\in D\times J$, and to assign appropriate values $y_i$ to them, so that for every conceivable pair $(x,y)$ provided by the lemma we can find a pair $(x_i,y_i)$ that is close to it. Specifically, if we have $|x-x_i|<\frac 1{2j}$ and $|y-y_i|<\frac 12$ for some $i$, then $f(x)=y$ implies that the pair $(x_i,y_i)$ controls $f\in{{\cal L}}_j$. Next we spell out the details of the proof. *Proof of Theorem \[gen\].  * Let $m\le d$ and let $(x_i)_{i\in I}$ be a sequence of points in ${{\mathbb R}}^m$ satisfying the density condition in the theorem. Let us fix $j\in {{\mathbb N}}$. 
As we have pointed out earlier, it is sufficient to show that we can select finitely many indices $i\in I$ and assign to them suitable points $y_i\in{{\mathbb R}}^d$ such that every function in ${{\cal L}}_j$ is controlled by at least one of the pairs $(x_i,y_i)$. Set $\epsilon=1/(8j+8)$ and choose a positive integer $c$ with $c^m>4d/\epsilon^{d-m}$. Using the density condition in the theorem with $\alpha=\epsilon/c$, we obtain that there exists $t_0>j+1$ such that $$|\{i\in I\,:\,|x_i-x|<\alpha\}|\ge|x|^{d-m}$$ holds for every $x\in {{\mathbb R}}^m$ with $|x|\ge t_0-2\epsilon$. Set $l=\lfloor jt_0+j\rfloor+1<(j+1)t_0$. The density condition we really need for our argument is $$|\{i\in I\,:\,|x_i-x|<\epsilon\}|\ge4d(2l)^{d-m},$$ which holds for every $|x|\ge t_0-\epsilon$, since the ball of radius $\epsilon$ around $x$ can be split into $c^m$ internally disjoint balls of radius $\alpha$, each containing at least $(|x|-\epsilon)^{d-m}$ points $x_i$ in their interior. This adds up to a total of $c^m(t_0-2\epsilon)^{d-m}>4d(2l)^{d-m}$, as required. Finally, set $t_1>t_0$ such that $$|\{i\in I\,:\,|x_i-x|<\epsilon\}|\ge4d(2l)^{d-m}+(2l)^d$$ holds for all $|x|\ge t_1-\epsilon$. The existence of such a value $t_1$ follows easily from the density condition on the sequence $(x_i)$, because for every sufficiently large $|x|$, the left-hand side of the above inequality is at least $|x|^{d-m}$, while its right-hand side is a constant. Let $D=\{z\in{{\mathbb R}}^{m-1}\, :\,|z|\le t_0\}$ be the ball of radius $t_0$ around the origin in ${{\mathbb R}}^{m-1}$, let $J=[t_0,t_1]$, let $B=\{y\in{{\mathbb R}}^d\, :\,|y|\le l\}$ be the ball of radius $l$ around the origin in ${{\mathbb R}}^d$, and let $S=\{y\in{{\mathbb R}}^d\, :\,|y|=l\}$ denote the sphere bounding $B$. 
Define a linear map $g:B\to D$ by setting $$g(y_1,\ldots,y_d)=\frac {t_0}{2l}(y_1-y_m,y_2-y_m,\ldots,y_{m-1}-y_m).$$ We identify ${{\mathbb R}}^m$ with ${{\mathbb R}}^{m-1}\times{{\mathbb R}}$ and will use the notation $(z,t)\in{{\mathbb R}}^m$ for $z\in{{\mathbb R}}^{m-1}$ and $t\in{{\mathbb R}}$. Cover $D\times J$ with internally disjoint balls (cubes, in the $l_{\infty}$-norm) of radius $\epsilon$. These balls will be referred to as the [*$\epsilon$-balls*]{}. Let $Z=Z_0\times Z_1$ be a fixed $\epsilon$-ball, where $Z_0\subset{{\mathbb R}}^{m-1}$ is a ball of radius $\epsilon$ and $Z_1$ is an interval of length $2\epsilon$. The sphere $S$ consists of $2d$ facets ($(d-1)$-dimensional cubes). A facet is obtained by fixing one of the $d$ coordinates to $l$ or $-l$, and letting the other coordinates take arbitrary values in the interval $[-l,l]$. Consider all points $y$ on a facet such that $g(y)\in Z_0$. If the fixed coordinate of the facet is one of the first $m$ coordinates, and such points $y$ exist at all, then the first $m$ of their coordinates are determined within an interval of length $8l\epsilon/t_0\le1$, while the remaining coordinates can take arbitrary values in $[-l,l]$. This set can be covered by at most $(2l)^{d-m}$ balls of radius $1/2$. We refer to these balls as the [*$1/2$-balls for $Z$*]{}. Next, consider all points $y$ on a facet of $S$ such that $g(y)\in Z_0$, but assume that the fixed coordinate of this facet is one of the last $d-m$ coordinates. Cover this set with balls of radius $1/2$, as follows. Partition the possible values of the $m$’th coordinate into $4l$ intervals, each of length $1/2$. These intervals determine each of the first $m-1$ coordinates of $y$ within an interval of length $1$, and there are $d-m-1$ further coordinates that can take any value in $[-l,l]$. We have $2(2l)^{d-m}$ balls of radius $1/2$ that cover all points $y$ on this facet with $g(y)\in Z_0$. 
Summing up over all facets of $S$, we have at most $4d(2l)^{d-m}$ $1/2$-balls for $Z$. For each of these $1/2$-balls $W$ for $Z$, select a separate index $i\in I$ such that $x_i$ lies in the interior of $Z$, and set $y_i$ to be the center of the ball $W$. Note that the center $x$ of $Z$ satisfies $|x|\ge t_0-\epsilon$ (otherwise, $Z$ would be disjoint from $D\times J$). Thus, by our choice of $t_0$, we have enough indices to choose from. We repeat the same procedure for every $\epsilon$-ball $Z$. [*Case 1:*]{} Consider now any $f\in{\cal L}_j$ for which there exists $x=(z,t)\in D\times J$ such that $y=f(x)\in S$ and $g(y)=z$. Clearly, $x\in Z$ for some $\epsilon$-ball $Z$, and $y\in W$ for some $1/2$-ball $W$ for $Z$. Thus, there exists $i\in I$ such that $x_i$ lies in the interior of $Z$, and $y_i$ is the center of $W$. This implies that $|x_i-x|<2\epsilon$ and $|y_i-y|\le\frac 12$. Using the Lipschitz property, we obtain $$|f(x_i)-y|=|f(x_i)-f(x)|\le j|x_i-x|<2j\epsilon<\frac12.$$ Hence, $(x_i,y_i)$ controls $f$, as $|f(x_i)-y_i|\le|f(x_i)-y|+|y_i-y|<1$. [*Case 2:*]{} It remains to deal with the case where for some $f\in{\cal L}_j$ we cannot find $x=(z,t)\in D\times J$ such that $y=f(x)\in S$ and $g(y)=z$. Let $f$ be such a function. Notice that $f(z,t_0)\in B\setminus S$ for every $z\in D$. Indeed, we have $|(z,t_0)|=t_0$ and, hence, $|f(z,t_0)|\le jt_0+|f(0)|\le jt_0+j<l$, as required. Then, according to Lemma 4.1, if we cannot find $x=(z,t)\in D\times J$ such that $y=f(x)\in S$ and $g(y)=z$, then $f(z,t_1)\in B$ must hold for some $z\in D$. We show that in this case one can select a few more indices $i\in I$ and set the corresponding values $y_i$ so that for some of the newly selected indices $i$, the pairs $(x_i,y_i)$ control $f$. To achieve this, cover the entire ball $B$ with $(2l)^d$ balls of radius $1/2$, and refer to them as [*new balls*]{}. 
For any $\epsilon$-ball $Z$ that contains a point $(z,t_1)$ and for any new ball $W$, choose a separate (yet unselected) index $i\in I$ such that $x_i$ lies in the interior of $Z$, and set $y_i$ to be the center of $W$. Note that the center $x$ of $Z$ satisfies the inequality $|x|\ge t_1-\epsilon$. Thus, by our choice of $t_1$, we have enough indices to choose from. It can be shown by a simple computation similar to the above one that if for some $z\in D$ we have $y=f(z,t_1)\in B$, then for the index $i\in I$ selected for the $\epsilon$-ball containing $(z,t_1)$ and the new ball containing $y$, the pair $(x_i,y_i)$ controls $f$. This completes the proof of the fact that every $f\in{\cal L}_j$ is controlled by one of the pairs $(x_i,y_i)$ and, hence, the proof of Theorem \[gen\]. Q.E.D. [**Remark.**]{} Makai and Pach [@MaP83] proved the following result (that also follows from our Theorem \[nec\]): Let $m\le d$ and let $A$ be a set of points in ${{\mathbb R}}^m$ satisfying the condition that for any $x\in {{\mathbb R}}^m$, the number of points in the unit ball around $x$ is at most $K(|x|^{d-m}+1)$, where $K$ is a suitable constant. Then $A$ is [*not*]{} $d$-controlling. Makai and Pach made the conjecture that the same statement remains valid if for any $x\in {{\mathbb R}}^m$, the unit ball around $x$ contains at most $K(|x|^{d-1}+1)$ points of $A$. This would be a significant improvement for $m>1$. However, our Theorem \[gen\] shows that no such improvement is possible. Indeed, for any function $f:{{\mathbb R}}^+\to{{\mathbb R}}^+$ tending to infinity, one can construct a set of points in ${{\mathbb R}}^m$ with at most $f(|x|)|x|^{d-m}$ points in the unit ball around any point $x$, but still satisfying the condition of Theorem \[gen\]. By the theorem, such a set is $d$-controlling. We close this paper by constructing an explicit set $A$ with the properties mentioned above. 
We choose an increasing sequence of reals $c_i>4$ such that $f(x)>2^{m(i+2)+d}$ whenever $x\ge c_i-2$. Consider the set $S_{i}:=\{x\in2^{-i} \mathbb Z^m\mid c_i\le|x|<c_{i+1}\}$. Form a set $A_i\subset {{\mathbb R}}^m$ by collecting $\left\lceil|x|^{d-m}\right\rceil$ points from the ball of radius $2^{-i}$ around every point $x\in S_i$. Consider the set $A:=\cup_{i=1}^{\infty}A_i$. For $|x|>c_{i}$, we have at least $|x|^{d-m}$ points of $A$ in the $2^{1-i}$-ball around $x$. This shows that $A$ satisfies the conditions of Theorem \[gen\] and is, therefore, $d$-controlling. On the other hand, if $i$ is the highest index such that the unit ball around $x$ contains a point in $A_i$, then $|x|>c_i-2>2$, and the unit ball around $x$ contains at most $\left\lceil(|x|+2)^{d-m}\right\rceil$ points of $A$ around each of the at most $2^{m(i+2)}$ points of $\cup_{k=1}^\infty S_k$ in the ball of radius $2$ about $x$. By our choice of $c_i$, this shows that the unit ball around $x$ contains at most $f(|x|)|x|^{d-m}$ points of $A$, as claimed.\ <span style="font-variant:small-caps;">Acknowledgements.</span> We thank the referee for carefully reading the text and pointing out an inaccuracy in the proof of Claim 2.1. Th. Bang, [*On covering by parallel-strips*]{}, Mat. Tidsskr. B. [**1950**]{} (1950), 49–53. Th. Bang, [*A solution of the “plank problem,"*]{} Proc. Amer. Math. Soc. [**2**]{} (1951), 990–993. P. Brass, W. Moser, and J. Pach, [*Research Problems in Discrete Geometry*]{}, Springer, Heidelberg, 2005. P. Erdős and J. Pach, [*On a problem of L. Fejes Tóth*]{}, [Discrete Math.]{} [**30**]{} (1980), no. 2, 103–109. P. Erdős and C.A. Rogers, [*Covering space with convex bodies*]{}, Acta Arithmetica [**7**]{} (1962), 281–285. L. Fejes Tóth, [*Remarks on the dual of Tarski’s plank problem*]{} (in Hungarian), [Matematikai Lapok]{} [**25**]{} (1974), 13–20. H. Groemer, [*On coverings of convex sets by translates of slabs,*]{} Proc. Amer. Math. Soc. [**82**]{} (1981), no. 
2, 261–266. H. Groemer, [*Covering and packing properties of bounded sequences of convex sets*]{}, Mathematika [**29**]{} (1982), 18–31. H. Groemer, [*Some remarks on translative coverings of convex domains by strips*]{}, Canad. Math. Bull. [**27**]{} (1984), no. 2, 233–237. A. Kupavskii and J. Pach, [*Simultaneous approximation of polynomials*]{}, in: [Discrete and Computational Geometry and Graphs (J. Akiyama, H. Ito, T. Sakai, Y. Uno, eds.), Lecture Notes in Computer Science]{} [**9943**]{}, Springer-Verlag, Cham, 2016, 193–203. E. Makai Jr. and J. Pach, [*Controlling function classes and covering Euclidean space*]{}, [Stud. Scient. Math. Hungarica]{} [**18**]{} (1983), 435–459. A. McFarland, J. McFarland, and J.T. Smith, eds., [*Alfred Tarski. Early work in Poland–geometry and teaching.*]{} Birkhäuser/Springer, New York, 2014. With a bibliographic supplement, Foreword by Ivor Grattan-Guinness. H. Moese, [*Przyczynek do problemu A. Tarskiego: “O stopniu równoważności wielokątów”*]{} (A contribution to the problem of A. Tarski, “On the degree of equivalence of polygons”). Parametr [**2**]{} (1932), 305–309. C. A. Rogers, [*A note on coverings,*]{} Mathematika [**4**]{} (1957), 1–6. A. Tarski, [*Uwagi o stopniu równoważności wielokątów*]{}, [Parametr]{} [**2**]{} (1932), 310–314. [^1]: EPFL, Lausanne and MIPT, Moscow. Supported in part by the grant N 15-01-03530 of the Russian Foundation for Basic Research. E-mail: [kupavskii@ya.ru]{}. [^2]: EPFL, Lausanne and Rényi Institute, Budapest. Supported by Swiss National Science Foundation Grants 200020-162884 and 200021-165977. E-mail: [pach@cims.nyu.edu]{}. [^3]: Rényi Institute and Central European University, Budapest. Supported by the Cryptography “Lendület” project of the Hungarian Academy of Sciences and by the National Research, Development and Innovation Office, NKFIH, projects K-116769 and SNN-117879.
--- abstract: 'We introduce a new family of symplectic integrators depending on a real parameter $\alpha$. For $\alpha=0$, the corresponding method in the family becomes the classical Gauss collocation formula of order $2s$, where $s$ denotes the number of the internal stages. For any given nonzero $\alpha$, the corresponding method remains symplectic and has order $2s-2$: hence it may be interpreted as an $O(h^{2s-2})$ (symplectic) perturbation of the Gauss method. Under suitable assumptions, we show that the parameter $\alpha$ may be properly tuned, at each step of the integration procedure, so as to guarantee energy conservation in the numerical solution. The resulting method shares the same order $2s$ as the generating Gauss formula.' author: - 'Luigi Brugnano[^1]' - 'Felice Iavernaro[^2]' - 'Donato Trigiante[^3]' title: 'On the existence of energy-preserving symplectic integrators based upon Gauss collocation formulae [^4]' --- Hamiltonian systems, collocation Runge-Kutta methods, symplectic integrators, energy-preserving methods. 65P10, 65L05 Introduction ============ We consider canonical Hamiltonian systems in the form $$\label{hamilode} \left\{ \begin{array}{l} \dot y = J\nabla H(y) \equiv f(y), \\ y(t_0) = y_0 \in\RR^{2m}, \end{array} \right. \qquad J=\pmatrix{rr} 0 & I \\ -I & 0 \endpmatrix \in \RR^{2m \times 2m},$$ ($I$ is the identity matrix of dimension $m$). Regarding its numerical integration, two main lines of investigation may be traced, having as objective the definition and the study of symplectic methods and energy-conserving methods, respectively. In fact, symplecticity and the conservation of the energy function are the most relevant features characterizing a Hamiltonian system. From the very beginning of this research activity, high order symplectic formulae were already available within the class of Runge-Kutta methods, the Gauss collocation formulae being one notable example. 
One important implication of symplecticity of the discrete flow is the conservation of quadratic invariants. This circumstance makes the symplecticity property of a method particularly appealing in the numerical simulation of isolated mechanical systems in the form (\[hamilode\]), since it provides a precise conservation of the total angular momentum during the time evolution of the state vector. As a further positive consequence, a symplectic method also conserves quadratic Hamiltonian functions (see the monographs [@HLW; @LR; @SC] for a detailed analysis of symplectic methods). On the other hand, excluding the quadratic case, energy-conserving methods were initially not known within the class of classical methods and, as a matter of fact, among the first attempts to address this issue, projection and symmetric projection techniques were coupled to classical non-conservative schemes in order to force the numerical solution to lie in a proper manifold representing a first integral of the original system (see [@HW Sect. VII.2], [@AR; @H] and [@HLW Sect. V.4.1]). A completely new approach is represented by [*discrete gradient methods*]{}, which are based upon the definition of a discrete counterpart of the gradient operator so that energy conservation of the numerical solution is guaranteed at each step and for any choice of the integration stepsize (see [@G; @MQR]). More recently, the conservation of energy has been approached by means of the definition of the [*discrete line integral*]{}, in a series of papers (such as [@IP1; @IT3]), leading to the definition of [*Hamiltonian Boundary Value Methods (HBVMs)*]{} (see for example [@BIT; @BIT1]). These are a class of methods able to preserve, in the discrete solution, polynomial Hamiltonians of arbitrarily high degree (and hence, a [*practical*]{} conservation of any sufficiently differentiable Hamiltonian)[^5]. 
Such methods admit a Runge-Kutta formulation which reveals their close relationship with classical collocation formulae [@BIT3]. An infinity extension of HBVMs has also been proposed in [@Ha; @BIT1]. Attempts to incorporate both symplecticity and energy conservation into the numerical method will clash with two non-existence results. The first [@GM] refers to non-integrable systems, that is systems that do not admit other independent first integrals different from the Hamiltonian function itself. According to the authors’ words, it states that > *If \[the method\] is symplectic, and conserved $H$ exactly, then it is the time advance map for the exact Hamiltonian system up to a reparametrization of time.* The second negative result [@CFM] refers to B-series symplectic methods applied to general (not necessarily non-integrable) Hamiltonian systems: > *The only symplectic method (as $B$-series) that conserves the Hamiltonian for arbitrary $H(y)$ is the exact flow of the differential equation.* The aim of the present work is to devise methods of any high order that, in a sense that will be specified below and under suitable conditions, may share both features. More precisely, we will begin with introducing a family of one-step methods $$\label{met_alpha} y_1(\alpha)=\Phi_h(y_0,\alpha)$$ ($h$ is the stepsize of integration), depending on a real parameter $\alpha$, with the following specifics: 1. for any fixed choice of $\alpha \not = 0$, the corresponding method is a symplectic Runge-Kutta method with $s$ stages and of order $2s-2$; 2. for $\alpha=0$ one gets the Gauss collocation method (of order $2s$); 3. for any choice of $y_0$ and in a given range of the stepsize $h$, there exists a value of the parameter, say $\alpha^\ast$, depending on $y_0$ and $h$, such that $H(y_1)=H(y_0)$ (energy conservation). 
As the parameter $\alpha$ ranges in a small interval centered at zero, the value of the numerical Hamiltonian function $H(y_1)$ will match $H(y(t_0+h))$, thus leading to energy conservation. This result, which will be formally proved in Section \[theory\], may be stated as follows: *Under suitable assumptions, there exists a real sequence $\{\alpha_k\}$ such that the numerical solution defined by $y_{k+1}=\Phi_h(y_{k},\alpha_k)$, with $y_0$ defined in (\[hamilode\]), satisfies $H(y_k)=H(y_0)$.* To clarify this statement and how it relates to the above non-existence results, we emphasize that the energy conservation property only applies to the specific numerical orbit $\{y_k\}$ that the method generates, starting from the initial value $y_0$ and with stepsize $h$. For example, let us consider the very first step and assume the existence of a value $\alpha=\alpha_0$ enforcing the energy conservation between the two state vectors $y_0$ and $y_1$, as indicated at item 3 above. If $\alpha_0$ is maintained constant, the map $y \mapsto \Phi_h(y,\alpha_0)$ is symplectic and, by definition, assures the energy conservation condition $H(y_1)=H(y_0)$. However, it would fail to conserve the Hamiltonian function if we changed the initial condition $y_0$ or the stepsize $h$: in general, for any $\hat y_0 \not =y_0$, we would obtain $H(\Phi_h(\hat y_0,\alpha_0)) \not = H(y_0)$. Thus, the energy conservation property we are going to discuss weakens the standard energy conservation condition mentioned in the two non-existence results stated above and hence, by no means are the new methods meant to produce a counterexample to these statements. The paper is organized as follows. In the next section we report the definition of the methods while in Section \[colloc\] we show their geometrical link with Gauss collocation formulae. 
In Section \[theory\] we address the problem from a theoretical viewpoint and give some existence results that aim to explain the energy-preserving property of the new methods. In Section \[numerical\_tests\] we report a few tests that give clear numerical evidence that a change in sign of the function $g(\alpha)=H(y_1(\alpha))-H(y_0)$ does indeed occur along the integration procedure. Definition of the methods {#definition} ========================= Let $c_1<c_2<\dots<c_s$ and $b_1,\dots, b_s$ be the abscissae and the weights of the Gauss-Legendre quadrature formula in the interval $[0,1]$. We consider the Legendre polynomials $P_j(\tau)$, of degree $j-1$ for $j=1,\dots,s$, shifted and normalized in the interval $[0,1]$, that is $$\label{orth} \int_0^1 P_i(\tau)P_j(\tau) \mathrm{d} \tau = \delta_{ij}, \qquad i,j=1,\dots,s,$$ ($\delta_{ij}$ is the Kronecker symbol), and the matrix $$\label{P} \P = \pmatrix{cccc} P_1(c_1) & P_2(c_1) & \cdots & P_s(c_1) \\ P_1(c_2) & P_2(c_2) & \cdots & P_s(c_2) \\ \vdots & \vdots & & \vdots \\ P_1(c_s) & P_2(c_s) & \cdots & P_s(c_s) \endpmatrix_{s \times s}.$$ Our starting point is the following decomposition of the Butcher array $A$ of the Gauss method of order $2s$ (see [@HW Theorem 5.6]): $$\label{A} A= \P X_s \P^{-1},$$ where $X_s$ is defined as $$\label{Xs} X_s = \pmatrix{cccc} \frac{1}2 & -\xi_1 &&\\ \xi_1 &0 &\ddots&\\ &\ddots &\ddots &-\xi_{s-1}\\ & &\xi_{s-1} &0\\ \endpmatrix,$$ with $$\label{xij}\xi_j=\frac{1}{2\sqrt{(2j+1)(2j-1)}}, \qquad j=1,\dots,s-1.$$ We now consider the matrix $X_s(\alpha)$ obtained by perturbing $X_s$ as follows: $$\label{Xs_alpha} X_s(\alpha) = \pmatrix{cccc} \frac{1}2 & -\xi_1 &&\\ \xi_1 &0 &\ddots&\\ &\ddots &\ddots &-(\xi_{s-1}+\alpha)\\ & &\xi_{s-1}+\alpha &0\\ \endpmatrix = X_s + \alpha W_s,$$ where $\alpha$ is a real parameter, and $$\label{Ws} W_s=\pmatrix{cccc} 0 & 0 &&\\ 0 &0 &\ddots&\\ &\ddots &\ddots &-1\\ & &1 &0\\ \endpmatrix,$$ so that $X_s(\alpha)$ is a rank two perturbation of $X_s$. 
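As a quick numerical sanity check (an illustration added here, not taken from the paper), the decomposition $A=\P X_s\P^{-1}$ can be verified for $s=2$: the Gauss abscissae are $c_{1,2}=\frac12\mp\frac{\sqrt3}{6}$, the shifted, normalized Legendre polynomials are $P_1(\tau)=1$ and $P_2(\tau)=\sqrt3\,(2\tau-1)$, and $\xi_1=1/(2\sqrt3)$. The sketch below assumes NumPy:

```python
import numpy as np

s3 = np.sqrt(3.0)
# Two-stage Gauss abscissae and weights on [0, 1]
c = np.array([0.5 - s3/6, 0.5 + s3/6])
b = np.array([0.5, 0.5])

# Matrix P of Legendre values: P1(t) = 1, P2(t) = sqrt(3)(2t - 1)
P = np.column_stack([np.ones(2), s3*(2*c - 1)])

# X_2 with xi_1 = 1/(2 sqrt(3)), i.e. the formula for xi_j with j = 1
xi1 = 1.0/(2*s3)
X2 = np.array([[0.5, -xi1],
               [xi1,  0.0]])

# Recover the Butcher matrix of the order-4 Gauss method
A = P @ X2 @ np.linalg.inv(P)
A_gauss = np.array([[0.25,        0.25 - s3/6],
                    [0.25 + s3/6, 0.25       ]])
print(np.allclose(A, A_gauss))  # True
```

The same setup also confirms the discrete orthogonality relation $\P^T\Omega\,\P=I$ used later in the symplecticity proof.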
The family of methods $y_1=\Phi_h(y_0,\alpha)$ we are interested in is defined by the following tableau: $$\label{qgauss} \begin{array}{c|c}\begin{array}{c} c_1\\ \vdots\\ c_s\end{array} & \A(\alpha) \equiv \P X_s(\alpha)\P^{-1}\\ \hline &b_1\, \ldots \ldots ~ b_s \end{array}$$ Therefore $$\label{BA} \A(\alpha)= A + \alpha \P W_s \P^{-1},$$ and hence $\A(0)=A$. By exploiting Theorems 5.11 and 5.1 in [@HW Chap. IV.5], we readily deduce that the symmetric method has order $2s-2$ for any fixed $\alpha \not = 0$, and order $2s$ when $\alpha=0$. We set $$\label{Omega} \omega =\pmatrix{c} b_1 \\ \vdots \\ b_s \endpmatrix, \qquad \Omega =\pmatrix{ccc} b_1 \\ & \ddots \\&& b_s \endpmatrix, \qquad e= \pmatrix{c} 1 \\ \vdots \\ 1 \endpmatrix.$$ \[symplecticity\] For any value of $\alpha$, the Runge-Kutta method defined in (\[qgauss\]) is symplectic. On the basis of [@HLW Theorem 4.3, page 192], we will prove the following sufficient condition for symplecticity: $$\Omega \A(\alpha) + \A(\alpha)^T \Omega = \omega \, \omega^T.$$ Since the degree of the integrand functions in (\[orth\]) does not exceed $2s-2$, the orthogonality conditions may be equivalently posed in discrete form as $$\sum_{k=1}^s b_k P_i(c_k)P_j(c_k) = \delta_{ij}, \qquad i,j=1,\dots,s,$$ or, in matrix notation, $$\label{orth1} \P^T \Omega \P = I.$$ Considering that from (\[orth1\]) we get $\P^{-1}= \P^T \Omega$, from (\[BA\]) we have that $$\label{hint} \Omega \A(\alpha) + \A(\alpha)^T \Omega = \Omega A + A^T \Omega + \alpha \Omega \P ( W_s + W_s^T) \P^T \Omega = \omega \, \omega^T$$ since the Gauss method is symplectic, and $W_s$ is skew-symmetric so that $W_s+W_s^T=0$. In the event that a value $\alpha^\ast \equiv \alpha^\ast(y_0,h)$ for the parameter $\alpha$ can be found such that the conservation condition $H(y_1(\alpha))=H(y_0)$ is satisfied, we can extrapolate from the parametric method a symplectic scheme $$\label{epgauss} y \mapsto \Phi_h(y,\alpha^\ast),$$ that provides energy conservation if evaluated at $y_0$. 
The existence of such an $\alpha^\ast$ will be proved in Section \[theory\]. One important implication of the use of (\[epgauss\]) is the conservation of all quadratic constants of motion associated with system (\[hamilode\]). The fact that, in general, $H(\Phi_h(y,\alpha^\ast))\not = H(y)$, explains the extent to which the energy conservation property of the new formulae must be interpreted. Summarizing, the new formulae, when applied to the initial value problem (\[hamilode\]), are able to define a numerical approximation of any high order, along which the Hamiltonian function and all quadratic first integrals of the system are precisely conserved. Generalizations --------------- The proof of Theorem \[symplecticity\] suggests how to extend the definition of the new formulae in order to get a family of methods depending on a set of parameters. Indeed, by looking at (\[hint\]), in order to preserve symplecticity, it is sufficient to replace the matrix $\alpha W_s$ with any skew-symmetric matrix $\widetilde W_s$ of low rank, having nonzero elements in the bottom-right corner so that, with respect to the Gauss method, the order is lowered as little as possible. Consider the $s\times s$ matrix $$\widetilde W_s= \pmatrix{ll} 0 \\ & V_r \endpmatrix$$ where $V_r$ is any skew-symmetric matrix of dimension $r+1<s$. The Runge-Kutta method defined by the Butcher tableau $$\label{qgauss1} \begin{array}{c|c}\begin{array}{c} c_1\\ \vdots\\ c_s\end{array} & \A(\alpha) \equiv \P (X_s+\widetilde W_s)\P^{-1}\\ \hline &b_1\, \ldots \ldots ~ b_s \end{array}$$ is symplectic and has order $p=2(s-r)$. For example, a natural choice for the matrix $\widetilde W_s$ is: $$\label{tildeWs} \widetilde W_s=\pmatrix{cccccc} 0 & 0 &&\\ 0 &0 &\ddots&\\ &\ddots &\ddots &-\alpha_1\\ & &\alpha_1 &0 & \ddots \\ & & & \ddots & \ddots & -\alpha_r\\ & & & & \alpha_r & 0 \endpmatrix$$ leading to a multi-parametric method depending on the $r$ parameters $\alpha_1,\dots,\alpha_r$. 
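Both the symplecticity condition used in the proof of Theorem \[symplecticity\] and the exact conservation of quadratic first integrals can be checked numerically. The following sketch (a minimal illustration added in editing, for $s=2$ and assuming NumPy) uses the harmonic oscillator $\dot y=Ky$, whose quadratic Hamiltonian is $H(y)=\frac12\|y\|^2$; since the problem is linear, the stage equations can be solved exactly with one linear solve:

```python
import numpy as np

s3 = np.sqrt(3.0)
c = np.array([0.5 - s3/6, 0.5 + s3/6])          # Gauss abscissae, s = 2
b = np.array([0.5, 0.5])                         # Gauss weights
P = np.column_stack([np.ones(2), s3*(2*c - 1)])  # matrix of Legendre values
xi1 = 1.0/(2*s3)

def A_alpha(alpha):
    # Butcher matrix of the parametric method: A(alpha) = P X_2(alpha) P^{-1}
    X = np.array([[0.5, -(xi1 + alpha)],
                  [xi1 + alpha, 0.0]])
    return P @ X @ np.linalg.inv(P)

K = np.array([[0.0, 1.0], [-1.0, 0.0]])  # harmonic oscillator: dot y = K y
y0 = np.array([1.0, 2.0])
h = 0.4

for alpha in (0.0, 0.25, -3.0):
    A = A_alpha(alpha)
    # sufficient condition for symplecticity, for every alpha
    M = np.diag(b) @ A + A.T @ np.diag(b) - np.outer(b, b)
    assert np.allclose(M, 0.0)
    # linear stage equations v = e (x) y0 + h (A (x) K) v: solve exactly
    v = np.linalg.solve(np.eye(4) - h*np.kron(A, K), np.kron(np.ones(2), y0))
    Y = v.reshape(2, 2)                  # row i holds stage Y_i
    y1 = y0 + h * K @ (b @ Y)
    # the quadratic Hamiltonian ||y||^2 / 2 is conserved exactly
    print(abs(y1 @ y1 - y0 @ y0) < 1e-12)  # True for each alpha
```

Note that the conservation holds even for the large perturbation $\alpha=-3$: it relies only on the symplecticity condition and on solving the stage equations exactly, not on the order of the method.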
Quasi-collocation conditions {#colloc} ============================ Condition (\[BA\]) reveals the relation between the Butcher arrays associated with the new parametric method and the Gauss collocation method. In order not to lose generality, in this section only we consider the generic problem $\dot y = f(y)$. We ask how the collocation conditions defining the Gauss methods are affected by the presence of the parameter $\alpha$. This is easily accomplished by expressing the coefficients of the perturbing matrix $\P W_s \P^{-1}$ in terms of linear combinations of the integrals $\int_0^{c_i} l_j(\tau)\mathrm{d} \tau$, where $l_j(\tau)$ is the $j$th Lagrange polynomial defined on the abscissae $c_1,\dots,c_s$. Let $\Gamma\equiv\left(\gamma_{ij}\right)$ be the solution of the matrix linear system $A \, \Gamma = \P W_s \P^{-1}$, which means that (see (\[A\])) $$\label{Gamma} \Gamma = \P X_s^{-1} W_s \P^{-1}.$$ The nonlinear system defining the block vector of the internal stages $\{Y_i\}$ is $$Y = e \otimes y_0 + h (A\otimes I) F(Y) + \alpha h (A\,\Gamma \otimes I) F(Y),$$ where $e$ is the vector defined in (\[Omega\]), hereafter $I$ is the identity matrix of dimension $2m$, and $$Y=\pmatrix{ccc} Y_1^T & \dots & Y_s^T \endpmatrix^T, \qquad F(Y)=\pmatrix{ccc} f(Y_1)^T & \dots & f(Y_s)^T \endpmatrix^T.$$ Therefore, the polynomial $\sigma(t_0+\tau h)$ of degree $s$ that interpolates the stages $Y_i$ at the abscissae $c_i$, $i=1,\dots,s,$ is $$\label{sigma} \displaystyle \sigma(t_0+\tau h) = y_0 + h \sum_{j=1}^s \int_0^\tau l_j(x)\mathrm{d}x \, f(Y_j) + \alpha h \sum_{j=1}^s \left( \sum_{k=1}^s \gamma_{kj} \int_0^\tau l_k(x)\mathrm{d}x \right)\, f(Y_j).$$ Differentiating with respect to $\tau$ gives $$\label{sigma_dot} \displaystyle \dot \sigma(t_0+\tau h) = \sum_{j=1}^s l_j(\tau)\, f(\sigma(t_0+c_j h)) + \alpha \sum_{j=1}^s \left( \sum_{k=1}^s \gamma_{kj} l_k(\tau) \right)\, f(\sigma(t_0+c_j h)).$$ Finally, evaluating at $\tau=0$ and at $\tau=c_i$ yields $$\label{coll} 
\left\{ \begin{array}{l}\displaystyle \sigma(t_0) = y_0, \\ \displaystyle \dot \sigma(t_0+c_i h) = f(\sigma(t_0+c_i h)) + \alpha \sum_{j=1}^s \gamma_{ij} \, f(\sigma(t_0+c_j h)), \qquad i=1,\dots,s. \end{array} \right.$$ For $\alpha$ small, we can regard (\[coll\]) as [*quasi-collocation conditions*]{}, since for $\alpha=0$ we recover the classical collocation conditions defining the Gauss method. Geometric interpretation ------------------------ Let us assume the existence of a quadratic first integral $M(y)$ independent of $H(y)$: although this assumption is not strictly needed, it will somehow simplify the presentation of our argument. ![A geometric interpretation of the parametric method (\[met\_alpha\]). Two quasi-collocation polynomials $\sigma_{\alpha_1}(t_0+\tau h)$ and $\sigma_{\alpha_2}(t_0+\tau h)$ have as end-points the numerical solutions $y_1(\alpha_1)$ and $y_1(\alpha_2)$, which are $O(h^{2s-1})$ close to the manifold $H(y)=H(y_0)$ since, for $\alpha \not = 0$, the method has order $2s-2$. This means that the length of the arc of curve enclosed by the points $y_1(\alpha_1)$ and $y_1(\alpha_2)$ is $O(h^{2s-1})$. However, the parametric curve $\gamma: \alpha\in [\alpha_1,\alpha_2] \mapsto y_1(\alpha)$ passes through $y_1(0)$ which is at a distance $O(h^{2s+1})$ from the manifold $H(y)=H(y_0)$, and there is a concrete possibility that this arc may intersect the manifold $H(y)=H(y_0)$ at a point $y_1(\alpha^\ast)$.[]{data-label="sliding"}](sliding){width="12cm" height="7cm"} Roughly speaking, for $\alpha$ small, our [*parametric*]{} method may be interpreted as a symplectic perturbation of the Gauss method. Due to symplecticity of $\Phi_h(\cdot,\alpha)$, the parametric curve $$\label{gamma} \gamma \equiv \alpha \in D \mapsto y_1(\alpha)\in \RR^{2m},$$ where $D$ is a given interval containing zero, will entirely lie in the manifold $M(y)=M(y_0)$ and its length will be $O(h^{2s-1})$, since the method has order $2s-2$. 
However, the numerical solution produced by the Gauss method, namely $y_1(0)$, will be $O(h^{2s+1})$ close to the manifold $H(y)=H(y_0)$. Since the two manifolds contain the continuous solution, their intersection is nonempty and it is reasonable to expect that when $\alpha$ ranges in $D$, $y_1(\alpha)$ can slide from a region where $H(y_1(\alpha))>H(y_0)$ to a region where $H(y_1(\alpha))<H(y_0)$, thus producing a sign change in the scalar function $$\label{g} g(\alpha)= H(y_1(\alpha))-H(y_0)$$ which, by continuity, will vanish at a point $\alpha^\ast$. Obviously, similar arguments can be repeated for the multi-parameter version (\[qgauss1\]) of the method, where one has even more freedom in the choice of the parameters in the matrix $\widetilde W_s$ defined in (\[tildeWs\]), in order to obtain the conservation of energy.[^6] Theoretical existence results {#theory} ============================= After defining the error function $g(\alpha)= H(y_1(\alpha))- H(y_0)$, the nonlinear system, in the unknowns $Y_1,\dots,Y_s$ and $\alpha$, that is to be solved at each step to achieve energy conservation, reads $$\label{concon} \left\{ \begin{array}{l} Y = e \otimes y_0 + h (\A(\alpha) \otimes I) F(Y), \\ g(\alpha) =0, \end{array} \right.$$ and its solvability is equivalent to the existence of the energy-preserving method we are looking for. After defining the vector function $$G(h,y_1,\alpha)=\pmatrix{c} y_1-\Phi_h(y_0,\alpha) \\[.2cm] H(y_1)-H(y_0) \endpmatrix,$$ we see that system (\[concon\]) is equivalent to $G(h,y_1,\alpha)=0$. Of course $G(0,y_0,\alpha)=0$ for any value of $\alpha$ and, in particular, $G(0,y_0,0)=0$. The Jacobian of $G$ with respect to the two variables $y_1$ and $\alpha$ reads $$\frac{\partial G}{\partial(y_1,\alpha)} (h,y_1,\alpha) = \pmatrix{cc} I & \frac{\partial \Phi_h}{\partial \alpha}(y_0,\alpha) \\[.2cm] \nabla^TH(y_1) & 0 \endpmatrix,$$ where, as usual, $I$ is the identity matrix of dimension $2m$. 
From (\[met\_alpha\]) we see that $\frac{\partial \Phi_h}{\partial \alpha}(y_0,\alpha)$ coincides with $y'_1(\alpha)$ and, hence, with $\sigma_{\alpha}'(t_0+h)$. Due to the consistency of the method, it follows that, for $\alpha=0$, $\sigma_{\alpha}'(t_0+h) \rightarrow J \nabla H(y_0)$ as $h\rightarrow 0$. Therefore $$\label{Gjac} \frac{\partial G}{\partial(y_1,\alpha)} (0,y_0,0) = \pmatrix{cc} I & J \nabla H (y_0) \\[.2cm] \nabla^TH(y_0) & 0 \endpmatrix.$$ Unfortunately, the Jacobian matrix (\[Gjac\]) is always singular, since $\nabla^TH(y_0)\,J\,\nabla H(y_0)=0$. Consequently, the implicit function theorem (in its classical formulation) does not help in retrieving existence results for the solution of (\[concon\]) when $h$ is small. However, the rank of the matrix (\[Gjac\]) is $2m$ independently of the problem to be solved. This would suggest the use of the Lyapunov-Schmidt decomposition [@SLSF] that considers the restriction of the system to both the complement of the null space and the range of the Jacobian, to produce two systems to which the implicit function theorem applies. In our case this approach is simplified in that the implicit function theorem assures the existence of a solution $Y(\alpha)$ of the first equation in (\[concon\]) for all values of the parameter $\alpha$ ranging in a closed interval containing the origin and $|h| \le h_0$, with $h_0$ small enough. Then $y_1(\alpha)=y_0+h(b^T \otimes I)F(Y(\alpha))$ is substituted into the second equation of (\[concon\]) to produce the so-called [*bifurcation equation*]{} in the unknown $\alpha$. When needed, we will explicitly write $g(\alpha, h)$ or $g(\alpha, h, y_0)$, in place of $g(\alpha)$, to emphasize the dependence of the function $g$ upon the stepsize $h$, that has to be treated as a parameter, and the state vector $y_0$. Let us fix a vector $y_0$ and look for solution curves of $g(\alpha,h)=0$ in the $(h,\alpha)$ plane. 
Obviously $g(\alpha,0)=0$ for any $\alpha$, which means that the axis $h=0$ is a solution curve of the bifurcation equation: of course, we are interested in the existence of a different solution curve $\alpha^\ast=\alpha^\ast(h)$ passing through the origin. Since the gradient of $g$ vanishes at $(0,0)$, one has to compute higher-order partial derivatives of $g$ with respect to $\alpha$ and $h$. However, one verifies that $\frac{\partial^2 g}{\partial h^2}$, $\frac{\partial^2 g}{\partial \alpha^2}$ and $\frac{\partial^2 g}{\partial \alpha \partial h}$, evaluated at $(0,0)$, vanish as well, and this makes the computations even harder. For this reason, to address the question about the existence of a solution of (\[concon\]), we make the following assumptions: - the function $g$ is analytic in a rectangle $[-\bar \alpha, \bar \alpha] \times [-\bar h, \bar h]$ centered at the origin; - let $d$ be the order of the error in the Hamiltonian function associated with the Gauss method applied to the given Hamiltonian system and the given state vector $y_0$, that is: $$\label{gH} g(0,h) = H(y_1(0))- H(y_0) = c_0 h^{d} + O(h^{d+1}),$$ with $c_0 \not = 0$. Then, we assume that for any fixed $\alpha \not = 0$, $$g(\alpha,h) = c(\alpha) h^{d-2} + O(h^{d-1}),$$ with $c(\alpha) \not =0$. A couple of quick comments are in order before continuing. Excluding the case where the Hamiltonian $H(q,p)$ is quadratic (which would imply $g(\alpha,h) = 0$ for all $\alpha$), the error in the numerical Hamiltonian function associated with the Gauss method is expected to behave as $O(h^{2s+1})$. However, we cannot exclude a priori that special classes of problems or particular values for the state vector $y_0$ may occur, for which the order of convergence may be even higher. This is why we have introduced the integer $d$, which will therefore be at least $2s+1$. Moreover, we emphasize that the constant $c_0$ and the function $c(\alpha)$ will depend on $y_0$. 
In conclusion, what we are assuming is that for the method , when $\alpha$ is a given nonzero constant, the order of the error $H(y_1(\alpha))-H(y_0)$ is lowered by two units with respect to the underlying Gauss method of order $2s$, which is quite a natural requirement since the parametric method has order $2s-2$. \[implicit\] Under the assumptions ($\mathcal A_1$) and ($\mathcal A_2$), there exists a function $\alpha^\ast=\alpha^\ast(h)$, defined in a neighborhood of the origin $(-h_0,h_0)$, such that: - $g(\alpha^\ast(h),h)=0$, for all $h\in(-h_0,h_0)$, - $\alpha^\ast(h)=\mathrm{const}\cdot h^2 + O(h^3)$. From ($\mathcal A_1$) and ($\mathcal A_2$) we obtain that the expansion of $g$ around $(0,0)$ is: $$\label{expg} g(\alpha,h)= \sum_{j=d}^\infty \frac{1}{j!}\frac{\partial^{j}g}{\partial h^j} (0,0) h^j + \sum_{i=1}^{\infty} \sum_{j=d-2}^\infty \frac{1}{i!j!}\frac{\partial^{i+j}g}{\partial \alpha^i \partial h^j} (0,0) h^j \alpha^i.$$ We are now in a position to apply the implicit function theorem. We will look for a solution $\alpha^\ast = \alpha^\ast(h)$ in the form $\alpha^\ast(h)=\eta(h) h^2$, where $\eta(h)$ is a real-valued function of $h$. To this end, we consider the change of variable $\alpha=\eta h^2$, and insert it into thus obtaining $$\label{expg1} \begin{array}{rl} \displaystyle g(\alpha,h) = & \displaystyle \frac{1}{d!}\frac{\partial^{d}g}{\partial h^d} (0,0) h^d + \frac{1}{(d-2)!}\frac{\partial^{d-1}g}{\partial \alpha \partial h^{d-2}} (0,0) h^d \eta \\[.3cm] & \displaystyle + \frac{1}{(d-1)!}\frac{\partial^{d}g}{\partial \alpha \partial h^{d-1}} (0,0) h^{d+1} \eta + \mbox{higher order terms}. 
\end{array}$$ Therefore, for $h\not=0$, $g(\alpha,h)=0$ is equivalent to $\tilde g(\eta, h)=0$, where $$\label{expg2} \begin{array}{rl} \displaystyle \tilde g(\eta,h) = & \displaystyle \frac{1}{(d-1)d}\, \frac{\partial^{d}g}{\partial h^d} (0,0) + \frac{\partial^{d-1}g}{\partial \alpha \partial h^{d-2}} (0,0) \eta \\[.3cm] & \displaystyle + \frac{1}{d-1}\frac{\partial^{d}g}{\partial \alpha \partial h^{d-1}} (0,0) h \eta + \mbox{higher order terms}. \end{array}$$ By assumption ($\mathcal A_2$), both $\frac{\partial^{d}g}{\partial h^d} (0,0)$ and $\frac{\partial^{d-1}g}{\partial \alpha \partial h^{d-2}} (0,0)$ are different from zero and hence the implicit function theorem ensures the existence of a function $\eta=\eta(h)$ such that $\tilde g(\eta(h), h)=0$. The solution of $g(\alpha,h)=0$ for the variable $\alpha$ will then be given by $$\label{alphah} \alpha^\ast(h) = \eta(h) h^2 = -\frac{1}{(d-1)d}\, \frac{\frac{\partial^{d}g}{\partial h^d} (0,0)}{\frac{\partial^{d-1}g}{\partial \alpha \partial h^{d-2}} (0,0)}\, h^2 +O(h^3),$$ and this completes the proof. By exploiting [@KP Theorem 6.1.2], we see that the function $\alpha^\ast(h)$ is analytic if the power series is absolutely convergent for $|h|\le h_0$ and $|\alpha| \le \alpha_0$. In any event, the function $\alpha^\ast(h)$ is tangent to the $h$-axis at the origin, which means that only a very small correction of the Gauss method is needed when the stepsize is small enough. As a matter of fact, the needed correction is so small that the resulting method has order $2s$ instead of $2s-2$, just like the Gauss method obtained by setting $\alpha=0$. This is a consequence of the following result. \[fastorder\] Consider the parametric method and suppose that the parameter $\alpha$ is actually a function of the stepsize $h$, in such a way that $\alpha(h)=O(h^2)$. Then, the resulting method has order $2s$. 
Let $y_1(\alpha,h)$ be the solution computed by method at time $t_0+h$, starting at $y_0=y(t_0)$, and consider its expansion with respect to the variable $\alpha$, in a neighborhood of zero: $$y_1(\alpha,h) = y_1(0,h)+ y'(\zeta_\alpha,h) \alpha.$$ We recall that $y_1(0,h)$ is the numerical solution provided after a single step of the Gauss method and hence is $O(h^{2s+1})$ accurate while, for $\alpha \not = 0$, $y_1(\alpha,h)$ yields an $O(h^{2s-1})$ approximation to the true solution. This implies that $y'(\zeta_\alpha,h)$ is $O(h^{2s-1})$. Consequently, $$y_1(\alpha,h) - y(t_0+h) = y_1(0,h)- y(t_0+h)+ y'(\zeta_\alpha,h) \alpha = O(h^{2s+1}) + \alpha O(h^{2s-1}),$$ from which we deduce that the error at the left-hand side is $O(h^{2s+1})$ if and only if $\alpha=O(h^2)$. Figure \[keplerg\] reports the level curves of the function $g(\alpha,h)$ in a neighborhood of the origin, for the Kepler problem described in Subsection \[keplerproblem\] (the vector $y_0$ has been chosen as in ). The thick lines in the plot correspond to the points $(\alpha, h)$ in the plane where $g$ vanishes. This zero level set consists of the vertical axis $h=0$ and of the function $\alpha^\ast(h)$, which splits the region surrounding the origin into two adjacent subregions where the function $g$ clearly has opposite signs. Despite the local character of the above existence result, we see that the branches of the function $\alpha^\ast(h)$ extend away from the origin. Similar bifurcation diagrams may be traced starting at different values of $y_0$ for all the test problems we have considered: this suggests that, in the spirit of the long-time simulation of dynamical systems, a quite large stepsize may be used during the numerical integration performed by method . 
![Level curves in the plane $(h,\alpha)$ of the function $g(\alpha,h,y_0)$ associated with the method of order four, for the Kepler problem (see Subsection \[keplerproblem\]), in a neighborhood of the origin: $h \in [-0.2, 0.2]$, $\alpha \in [-0.5\cdot 10^{-3}, 4\cdot 10^{-3}]$. Besides the $\alpha$-axis, a zero level curve tangent to the $h$-axis at the origin is visible. This curve separates two regions around the origin where the function $g$ has opposite signs. We notice that just a small correction of the Gauss method suffices to recover energy preservation even for relatively large stepsizes.[]{data-label="keplerg"}](kepler_bifurc_s2){width="14cm" height="8cm"} We end this section by providing a straightforward generalization of Theorem \[implicit\] to the case where the parameter $\alpha$ is used to perturb a generic (not necessarily the last) element on the subdiagonal of the matrix $X_s$, together with its symmetric counterpart. \[implicit1\] Consider the method with $\widetilde W_s$ as in with $\alpha_1 \equiv \alpha$ and $\alpha_2=\dots=\alpha_r=0$. We assume that assumption ($\mathcal A_1$) and the following assumption (replacing ($\mathcal A_2$)) hold true: - let $d$ be the order of the error in the Hamiltonian function associated with the Gauss method applied to the given Hamiltonian system and the given state vector $y_0$. That is, (\[gH\]) holds true. Then, we assume that for any fixed $\alpha \not = 0$, $$g(\alpha,h) = c_r(\alpha) h^{d-2r} + O(h^{d-2r+1}),$$ with $c_r(\alpha) \not =0$. Then, there exists a function $\alpha^\ast=\alpha^\ast(h)$ defined in a neighborhood of the origin $(-h_0,h_0)$ and such that: - $g(\alpha^\ast(h),h)=0$, for all $h\in(-h_0,h_0)$, - $\alpha^\ast(h)=\mathrm{const}\cdot h^{2r} + O(h^{2r+1})$. The symplectic energy-conserving method resulting from this choice of the parameter has order $2s$. In the next section, we shall provide numerical evidence for the results presented above. 
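The mechanism behind Theorem \[implicit\] can be checked on a toy bifurcation equation. Take, as a purely illustrative assumption (these coefficients are not the actual derivatives of $g$ for any method), $g(\alpha,h)=c_0h^5+c_1\alpha h^3+c_2\alpha h^4$, which matches the structure of the expansion with $d=5$: the nontrivial zero curve is $\alpha^\ast(h)=-(c_0/c_1)h^2+O(h^3)$, and a root finder recovers it:

```python
import numpy as np

# Toy bifurcation function with d = 5 (illustrative coefficients only):
# g(alpha, h) = c0*h^5 + c1*alpha*h^3 + c2*alpha*h^4
c0, c1, c2 = 2.0, -5.0, 1.0

def g(alpha, h):
    return c0 * h**5 + c1 * alpha * h**3 + c2 * alpha * h**4

def bisect(fun, a, b, tol=1e-14):
    """Plain dichotomic search for a sign change of fun on [a, b]."""
    fa = fun(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * fun(m) <= 0:
            b = m
        else:
            a, fa = m, fun(m)
    return 0.5 * (a + b)

ratios = []
for h in [0.1, 0.05, 0.025]:
    alpha_star = bisect(lambda a: g(a, h), -1.0, 1.0)
    ratios.append(alpha_star / h**2)
# alpha*(h)/h^2 tends to -c0/c1 = 0.4 as h -> 0, i.e. alpha* = const*h^2 + O(h^3)
```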
Numerical tests {#numerical_tests} =============== In this section we present a few numerical tests showing the effectiveness of our approach. Method and its generalization are implemented by solving, at each step, system . The efficient solution of such systems will be the object of future studies; at present we adopt either one of the following techniques: 1. at each step, an interval $[\alpha_1, \alpha_2]$ is detected such that $g(\alpha_1)g(\alpha_2)<0$; after that, a dichotomic (bisection) search is implemented to locate $\alpha^\ast$ within an error close to the machine precision; 2. the first (vector) equation in is solved with $\alpha_0=0$ (Gauss method) and $\alpha_1=c h^r$, where $c$ and $r$ are suitable constants empirically estimated;[^7] after that, a sequence $\alpha_k$ is produced by solving the second (scalar) equation in via the secant method. In both cases, an outer iteration generating the sequence $\alpha_k$ converging to $\alpha^\ast$ is coupled with an inner iteration that determines the solution $y_1(\alpha_k)$ starting from $y_0$. This scheme is repeated at each step of the integration. The methods that we will consider in our experiments are: method with $s=2$ (fourth order); method with $s=3$ (sixth order); the sixth-order method described in Theorem \[implicit1\] with $s=3$, that is, we insert a single perturbation parameter $\alpha$ in the first (rather than in the second) subdiagonal element of the matrix $X_3$. In order to distinguish between these two methods of order six, hereafter the latter will be referred to as “the order six method of the second type”. The Kepler problem {#keplerproblem} ------------------ In this problem, two bodies subject to Newton’s law of gravitation revolve about their center of mass, placed at the origin, in elliptic orbits in the $(q_1,q_2)$-plane. 
Assuming unit masses and a unit gravitational constant, the dynamics is described by the Hamiltonian function $$\label{Hkepler} H(q_1,q_2,p_1,p_2)= \frac{1}{2}\left( p_1^2+p_2^2 \right) - \frac{1}{\sqrt{q_1^2+q_2^2}}.$$ Besides the total energy $H$, a relevant first integral for the system is represented by the angular momentum $$\label{Lkepler} L(q_1,q_2,p_1,p_2) = q_1 p_2 - q_2 p_1.$$ Due to the symplecticity of the method, the quadratic first integral will be automatically conserved by method , for any choice of the parameter $\alpha$. On the other hand, we show that, at each step of integration, the parameter $\alpha$ may be tuned in order to get energy conservation in the numerical solution. As initial condition we choose $$\label{keplerinitial} q_1(0)=1-e, \quad q_2(0)=0, \quad p_1(0)=0, \quad p_2(0)=\sqrt{\frac{1+e}{1-e}},$$ which confers an eccentricity equal to $e$ on the orbit. Consequently, $H(q,p)=-0.5$ and $L(q,p)=\sqrt{1-e^2}$. We set $e=0.6$ since, in this experiment, we are going to use a constant stepsize (see [@HLW Sec. I.2.3]). More precisely, we solve problem in the interval $[t_0, T]=[0, 50]$ by the two-stage method with the following set of stepsizes: $h_i=2^{-i}$, $i=1,\dots,7$. Figure \[kepler\_Ham\] reports the errors in the Hamiltonian function $H$ and in the angular momentum $L$ of the numerical solutions generated by the method implemented with the intermediate stepsize $h=2^{-5}$. These plots, which remain almost the same whatever stepsize is considered in the given range, show that the integration procedure performed by method is indeed feasible and that both energy and angular momentum preservation may be recovered in the discrete approximation of . For comparison purposes, we also report the same quantities for the Gauss method of order $4$ (corresponding to the choice $\alpha=0$ in ). 
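The stated values of the two invariants at the initial point are easy to double-check numerically; a short sketch:

```python
import numpy as np

def H(q1, q2, p1, p2):   # Kepler Hamiltonian (unit masses and constant)
    return 0.5 * (p1**2 + p2**2) - 1.0 / np.sqrt(q1**2 + q2**2)

def L(q1, q2, p1, p2):   # angular momentum
    return q1 * p2 - q2 * p1

e = 0.6
q1, q2, p1, p2 = 1 - e, 0.0, 0.0, np.sqrt((1 + e) / (1 - e))
# For this initial condition: H = -0.5 and L = sqrt(1 - e^2) = 0.8
```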
![Upper picture: errors in the Hamiltonian function of the Kepler problem evaluated along the numerical solution generated by the Gauss method of order four and its conservative variant (method with $s=2$). Bottom plot: error in the numerical angular momentum of the solution computed by the two methods. In both cases the stepsize used is $h=2^{-5}$.[]{data-label="kepler_Ham"}](kepler_Ham_h5 "fig:"){width="12cm" height="7cm"}\ ![](kepler_AM_h5 "fig:"){width="12cm" height="7cm"} The second and third columns of Table \[kepler\_tab\] report the global error $e(h_i)=|y_N(h_i)-y(T)|$, $N=T/h_i$, at the end point of the integration interval and the corresponding numerical order. According to Theorem , we see that the maximum order is preserved by method . In Figure \[kepler\_alpha\_h5\] the sequence $\alpha^\ast_n$, corresponding to the values of the parameter $\alpha$ that at each step restore the conservation of the energy, is plotted for the case $h=2^{-5}$. We consider $\delta(h)=\max_n(\alpha^\ast_n)-\min_n(\alpha^\ast_n)$ as a measure of the total variability of the values of the sequence $\{ \alpha^\ast_n\}$. This quantity is reported in the fourth column of Table \[kepler\_tab\] for the values of the stepsize $h_i$ used in this test. According to the result of Theorem \[implicit\], the last column in the table confirms that the dependence of $\delta(h)$ on the stepsize $h$ is of the form $\delta = c h^2 + \mathrm{h.o.t.}$, with $c \simeq 0.16$. 
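Since the stepsizes are halved at each row, the "order" column of Table \[kepler\_tab\] is obtained from consecutive global errors as $\log_2\bigl(e(h_i)/e(h_{i+1})\bigr)$. A sketch of this computation on the tabulated errors:

```python
import numpy as np

# Global errors e(h_i) for h_i = 2^-i, i = 1..7 (copied from Table [kepler_tab]):
e = [2.62e0, 3.85e-1, 2.50e-2, 1.59e-3, 1.00e-4, 6.28e-6, 3.93e-7]

# Observed order: log2(e(h_i)/e(h_{i+1})), since the stepsizes are halved.
orders = [np.log2(e[i] / e[i + 1]) for i in range(len(e) - 1)]
# The orders approach 4, consistent with Theorem [fastorder] for s = 2.
```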
![Sequence of the values of the parameter $\alpha^\ast$ in the method with $s=2$ and $h=2^{-5}$.[]{data-label="kepler_alpha_h5"}](kepler_alpha_h5){width="12cm" height="7cm"} $$\begin{array}{|c|cccc|} \hline h & e(h) & \mbox{order} & \delta(h) & \delta(h)/h^2 \\ \hline 2^{-1} & 2.62\cdot 10^{0} \quad & \quad \quad & \quad 2.13\cdot 10^{-2} \quad & \quad 8.5374\cdot 10^{-2} \\[.2cm] 2^{-2} & 3.85\cdot 10^{-1} \quad & \quad 2.763 \quad & \quad 1.04\cdot 10^{-2} \quad & \quad 1.6700\cdot 10^{-1} \\[.2cm] 2^{-3} & 2.50\cdot 10^{-2} \quad & \quad 3.945 \quad & \quad 2.52\cdot 10^{-3} \quad & \quad 1.6185\cdot 10^{-1} \\[.2cm] 2^{-4} & 1.59\cdot 10^{-3} \quad & \quad 3.970 \quad & \quad 6.23\cdot 10^{-4} \quad & \quad 1.5951\cdot 10^{-1} \\[.2cm] 2^{-5} & 1.00\cdot 10^{-4} \quad & \quad 3.991 \quad & \quad 1.55\cdot 10^{-4} \quad & \quad 1.5878\cdot 10^{-1} \\[.2cm] 2^{-6} & 6.28\cdot 10^{-6} \quad & \quad 3.997 \quad & \quad 3.87\cdot 10^{-5} \quad & \quad 1.5862\cdot 10^{-1} \\[.2cm] 2^{-7} & 3.93\cdot 10^{-7} \quad & \quad 3.999 \quad & \quad 9.67\cdot 10^{-6} \quad & \quad 1.5856\cdot 10^{-1} \\[.2cm] \hline \end{array}$$ Test problem 2 -------------- We consider the problem defined by the following polynomial Hamiltonian function: $$\label{macie1H} H(q_1,q_2,p_1,p_2)=\frac{1}{2}(p_1^2+p_2^2) + (q_1^2+q_2^2)^2.$$ This problem has been proposed in [@MaPr] as an example of a class of polynomial systems which, under suitable assumptions, admit an additional polynomial first integral $F$ which is functionally independent from $H$. In this case, the additional (irreducible) first integral is $$\label{macie1L} L(q_1,q_2,p_1,p_2)=q_1p_2-q_2p_1.$$ The polynomial $L$ being quadratic, we expect that our methods may preserve both $H$ and $L$.[^8] We have solved problem by means of two methods of order six ($s=3$): method , and the order six method of the second type, described in Theorem \[implicit1\]. 
Figure \[fig\_macie1\] reports the errors in the Hamiltonian function $H$ and in the quadratic first integral $L$ of the numerical solutions generated by the latter method implemented with the intermediate stepsize $h=2^{-3}$. For comparison purposes, we also report the same quantities for the Gauss method of order six. ![Upper picture: errors in the Hamiltonian function of test problem 2 evaluated along the numerical solution generated by the Gauss method of order six and its conservative variant of the second type. Bottom plot: error in the quadratic first integral of the solution computed by the two methods. In both cases the stepsize used is $h=2^{-3}$.[]{data-label="fig_macie1"}](macie1_Ham_h3 "fig:"){width="12cm" height="7cm"}\ ![](macie1_AM_h3 "fig:"){width="12cm" height="7cm"} Tables \[macie1\_tab1\] and \[macie1\_tab2\] are the analogues of Table \[kepler\_tab\] for these two methods: we see that both methods achieve order six but, while in the former $\alpha^\ast(h)=O(h^2)$, in the latter $\alpha^\ast(h)=O(h^4)$, consistently with Theorems \[implicit\], \[fastorder\], and \[implicit1\]. 
$$\begin{array}{|c|cccc|} \hline h & e(h) & \mbox{order} & \delta(h) & \delta(h)/h^2 \\ \hline 2^{-1} & 2.17\cdot 10^{-2} \quad & \quad \quad & \quad 1.59\cdot 10^{-2} \quad & \quad 6.37\cdot 10^{-2} \\[.2cm] 2^{-2} & 4.59\cdot 10^{-4} \quad & \quad 5.562 \quad & \quad 3.99\cdot 10^{-3} \quad & \quad 6.39\cdot 10^{-2} \\[.2cm] 2^{-3} & 7.77\cdot 10^{-6} \quad & \quad 5.884 \quad & \quad 9.99\cdot 10^{-4} \quad & \quad 6.40\cdot 10^{-2} \\[.2cm] 2^{-4} & 1.24\cdot 10^{-7} \quad & \quad 5.970 \quad & \quad 2.53\cdot 10^{-4} \quad & \quad 6.48\cdot 10^{-2} \\[.2cm] 2^{-5} & 1.94\cdot 10^{-9} \quad & \quad 5.992 \quad & \quad 6.33\cdot 10^{-5} \quad & \quad 6.49\cdot 10^{-2} \\[.2cm] 2^{-6} & 3.05\cdot 10^{-11} \quad & \quad 5.994 \quad & \quad 1.59\cdot 10^{-5} \quad & \quad 6.51\cdot 10^{-2} \\[.2cm] \hline \end{array}$$ $$\begin{array}{|c|cccc|} \hline h & e(h) & \mbox{order} & \delta(h) & \delta(h)/h^4 \\ \hline 2^{-1} & 4.91\cdot 10^{-2} \quad & \quad \quad & \quad 5.59\cdot 10^{-2} \quad & \quad 0.895 \\[.2cm] 2^{-2} & 1.46\cdot 10^{-2} \quad & \quad 1.753 \quad & \quad 1.51\cdot 10^{-2} \quad & \quad 3.87 \\[.2cm] 2^{-3} & 1.84\cdot 10^{-4} \quad & \quad 6.304 \quad & \quad 4.92\cdot 10^{-4} \quad & \quad 2.01 \\[.2cm] 2^{-4} & 3.23\cdot 10^{-6} \quad & \quad 5.836 \quad & \quad 4.07\cdot 10^{-5} \quad & \quad 2.66 \\[.2cm] 2^{-5} & 4.73\cdot 10^{-8} \quad & \quad 6.091 \quad & \quad 2.30\cdot 10^{-6} \quad & \quad 2.41 \\[.2cm] 2^{-6} & 7.03\cdot 10^{-10} \quad & \quad 6.074 \quad & \quad 1.50\cdot 10^{-7} \quad & \quad 2.51 \\[.2cm] \hline \end{array}$$ The Hénon-Heiles problem ------------------------ The Hénon-Heiles equation originates from a problem in Celestial Mechanics describing the motion of a star under the action of a gravitational potential of a galaxy which is assumed time-independent and with an axis of symmetry (the $z$-axis) (see [@HH] and references therein). 
The main question related to this model was to establish the existence of a third first integral, besides the total energy and the angular momentum. By exploiting the symmetry of the system and the conservation of the angular momentum, Hénon and Heiles reduced the degrees of freedom from three (cylindrical coordinates) to two (planar coordinates), thus showing that the problem was equivalent to the study of the motion of a particle in a plane subject to an arbitrary potential $U(q_1,q_2)$: $$\label{HH} H(q_1,q_2,p_1,p_2)=\frac{1}{2}(p_{1}^2+p_{2}^2)+U(q_1,q_2).$$ ![Level curves of the potential $U(q_1,q_2)$ of the Hénon-Heiles problem (see ). The origin $O$ is a stable equilibrium point, whose domain of stability contains the equilateral triangle having as vertices the saddle points $P_1$, $P_2$, and $P_3$, provided that the total energy does not exceed the value $\frac{1}{6}$. Inside the triangle, a numerical trajectory (small dots) computed by the sixth-order method of the second type with stepsize $h=0.25$ in the time interval $[0, 500]$ is traced: its total energy is $0.15$.[]{data-label="henon_fig1"}](henon_potential){width="12cm" height="7cm"} In particular, for their experiments they chose $$\label{Henon_potential} U(q_1,q_2)=\frac{1}{2}(q_{1}^2+q_{2}^2)+q_{1}^2q_{2}-\frac{1}{3}q_{2}^3,$$ which makes the Hamiltonian function a polynomial of degree three. When $U(q_1,q_2)$ approaches the value $\frac{1}{6}$, the level curves of $U$ tend to an equilateral triangle, whose vertices are saddle points of $U$ (see Figure \[henon\_fig1\]). These vertices have coordinates $P_1=(0,1)$, $P_2=(-\frac{\sqrt{3}}{2},-\frac{1}{2})$ and $P_3=(\frac{\sqrt{3}}{2},-\frac{1}{2})$. Since $U$ in has no symmetry in general, we can no longer consider the angular momentum as an invariant, so that the only known first integral is the total energy itself, and the question is whether or not a second integral exists. 
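Both claims about the vertices — that they are critical points of $U$ and that the critical energy level is $\frac{1}{6}$ — can be verified directly; a short sketch:

```python
import numpy as np

def U(q1, q2):       # Henon-Heiles potential
    return 0.5 * (q1**2 + q2**2) + q1**2 * q2 - q2**3 / 3.0

def grad_U(q1, q2):  # gradient of U; zero at critical points
    return np.array([q1 + 2.0 * q1 * q2, q2 + q1**2 - q2**2])

saddles = [(0.0, 1.0),
           (-np.sqrt(3.0) / 2.0, -0.5),
           (np.sqrt(3.0) / 2.0, -0.5)]
# Each P_i is a critical point of U with U(P_i) = 1/6.
```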
Hénon and Heiles conducted a series of tests with the aim of giving numerical evidence of the existence of such an integral for moderate values of the energy $H$, and of the appearance of chaotic behavior when $H$ becomes larger than a critical value: it is believed that for values of $H$ in the interval $(\frac{1}{8},\frac{1}{6})$ this second first integral does not exist (see also [@HLW Section I3]). We consider the initial point $P_0=(q_{10},\,q_{20},\,p_{10},\,p_{20})=(0,\, 0,\, \sqrt{\frac{3}{10}},\, 0)$ which confers on the system a total energy $H=0.15 \in(\frac{1}{8},\frac{1}{6})$. Therefore the orbit originating from $P_0$ will never leave the triangle for any value of the time $t$. We have integrated problem in the time interval $[0,\,500]$ with stepsize $h=0.25$ by using the Gauss method of order six and its conservative variant of the second type. Figure \[fig2.henon\] shows the errors in the Hamiltonian function $H$ in both cases. ![Errors in the Hamiltonian function - evaluated along the numerical solution generated by the Gauss method of order six and its conservative variant of the second type. Stepsize: $h=0.25$; time interval: $[0, 500]$; initial condition $(q_{10},\,q_{20},\,p_{10},\,p_{20})=(0,\, 0,\, \sqrt{\frac{3}{10}},\, 0)$.[]{data-label="fig2.henon"}](henon_Ham_h025){width="13cm" height="8cm"} Conclusions =========== We have defined a new class of symmetric and symplectic one-step methods of arbitrarily high order that, under rather weak assumptions, are capable of computing a numerical solution along which the Hamiltonian function is precisely conserved. This feature has been realized by first introducing a symplectic parametric perturbation of the Gauss method, and then by selecting the parameter, at each step of the integration procedure, so as to obtain energy conservation. A relevant implication of the symplectic nature of each formula is the conservation of all quadratic first integrals associated with the system. 
With the help of the implicit function theorem, we have shown that not only do these methods exist, but also that the correction required on the Gauss method is so small that the order of convergence of the latter is preserved by its conservative variant. A few test problems have been reported to confirm the theoretical results presented, and to show the effectiveness of the new formulae. This approach opens a number of interesting routes of investigation. First of all, if preferred, the parameter could be selected in such a way as to impose the conservation of other, non-quadratic, first integrals different from the Hamiltonian function itself. More generally, the multi-parametric generalization introduced suggests the possibility of choosing the free parameters in order to impose the conservation of a number of functionally independent first integrals possessed by the continuous problem. Last but not least, the idea of considering symplectic corrections of the Gauss method could in principle be extended to other classes of symplectic methods known in the literature. The above described lines of investigation, as well as the efficient solution of the nonlinear systems arising from the conservation requirements, will be the subject of future research. [99]{} , [*On some difficulties in integrating highly oscillatory Hamiltonian systems*]{}, in Computational Molecular Dynamics, Lect. Notes Comput. Sci. Eng. 4, Springer, Berlin, 1999, pp. 281–296. , [*The Hamiltonian BVMs (HBVMs) Homepage*]{}, arXiv:1002.2757, also available at url: <http://web.math.unifi.it/users/brugnano/HBVM/>. , [*Analysis of Hamiltonian Boundary Value Methods (HBVMs): a class of energy-preserving Runge-Kutta methods for the numerical solution of polynomial Hamiltonian dynamical systems*]{}, (2009), submitted. (arXiv:0909.5659) , Hamiltonian Boundary Value Methods (Energy Preserving Discrete Line Integral Methods), Jour. of Numer. Anal. Industr. and Appl. Math. 5 (2010), no. 1-2, pp. 17–37. 
(arXiv:0910.3621) , [*Isospectral Property of HBVMs and their connections with Runge-Kutta collocation methods*]{}, Preprint (2010). (arXiv:1002.4394) , [*Solving Differential Problems by Multistep Initial and Boundary Value Methods*]{}, Gordon and Breach Science Publ., Amsterdam, 1998. , [*An algebraic approach to invariant preserving integrators: the case of quadratic and Hamiltonian invariants*]{}, Numer. Math., [103]{} (2006), no. 4, pp. 575–590. , [*Lie-Poisson Hamilton-Jacobi theory and Lie-Poisson integrators*]{}, Phys. Lett. A, 133 (1988), pp. 134–139. , [*Time integration and discrete Hamiltonian systems*]{}, J. Nonlinear Sci., [6]{} (1996), pp. 449–467. , [*Energy-preserving variant of collocation methods*]{}, J. Numer. Anal. Ind. Appl. Math., to appear. , [*Symmetric projection methods for differential equations on manifolds*]{}, BIT [40]{} (2000), pp. 726–734. , [*Geometric Numerical Integration. Structure-Preserving Algorithms for Ordinary Differential Equations*]{}, Second ed., Springer, Berlin, 2006. , [*Solving Ordinary Differential Equations II. Stiff and Differential- Algebraic Problems*]{}, Second ed., Springer Series in Computational Mathematics 14, Springer-Verlag Berlin, 1996. , [*The Applicability of the Third Integral of Motion: Some Numerical Experiments*]{}, Astron. J., [69]{} (1964), no. 1, pp. 73–79. , [*$s$-Stage Trapezoidal Methods for the Conservation of Hamiltonian Functions of Polynomial Type*]{}, AIP Conf. Proc., 936 (2007), pp. 603–606. , [*High-order symmetric schemes for the energy conservation of polynomial Hamiltonian problems*]{}, J. Numer. Anal. Ind. Appl. Math., [4]{} (2009), no. 1-2, pp. 87–111. , [*The implicit function theorem. History, theory, and applications*]{}, Birkhäuser Boston, Inc., Boston, MA, 2002. , [*Simulating Hamiltonian Dynamics*]{}, Cambridge Monographs on Applied and Computational Mathematics 14, Cambridge University Press, Cambridge, 2004. 
, [*Darboux Polynomials and First Integrals of Natural Polynomial Hamiltonian Systems*]{}, Phys. Lett. A 326 (2004), no. 3-4, pp. 219–226. , [*Geometric integration using discrete gradient*]{}, Phil. Trans. R. Soc. Lond. A, 357 (1999), pp. 1021–1045. , [*Numerical Hamiltonian Problems*]{}, Chapman & Hall, London, 1994. , [*Lyapunov-Schmidt methods in nonlinear analysis and applications*]{}, Mathematics and its Applications, 550, Kluwer Academic Publishers, Dordrecht, 2002. [^1]: Dipartimento di Matematica “U.Dini”, Università di Firenze, Italy ([luigi.brugnano@unifi.it]{}). [^2]: Dipartimento di Matematica, Università di Bari, Italy ([felix@dm.uniba.it]{}). [^3]: Dipartimento di Energetica, Università di Firenze, Italy ([trigiant@unifi.it]{}). [^4]: Work developed within the project “Numerical methods and software for differential equations”. [^5]: We refer the reader to [@BIT0] for a complete documentation on HBVMs. [^6]: We do not consider multi-parametric methods in the numerical results we present, since a single parameter suffices in getting the energy conservation property. [^7]: For example see the last column in Table \[kepler\_tab\]. [^8]: Of course $L$ may again be interpreted as the angular momentum of a mechanical system having as Hamiltonian function.
--- abstract: 'We argue that the evaluation of censorship evasion tools should depend upon economic models of censorship. We illustrate our position with a simple model of the costs of censorship. We show how this model makes suggestions for how to evade censorship. In particular, from it, we develop evaluation criteria. We examine how our criteria compare to the traditional methods of evaluation employed in prior works.' author: - | Michael Carl Tschantz\ UC Berkeley - | Sadia Afroz\ UC Berkeley - | Vern Paxson\ International Computer Science Institute\ and UC Berkeley - | J. D. Tygar\ UC Berkeley bibliography: - 'censor.bib' title: 'On Modeling the Costs of Censorship[^1]' --- Introduction ============ #### Motivation In response to government censorship of the Internet, activists and researchers have deployed numerous censorship evasion tools [@callanan11freedomhouse]. Censors, in turn, have developed approaches to counter these evasion tools by, for example, blocking all Internet traffic produced by a given tool [@roberts11berkman; @kelly13freedomhouse]. Researchers have responded by proposing numerous improvements intended to neutralize such blocking (e.g., [@wiley11dust; @mohajeri2012skypemorph; @weinberg2012stegotorus; @winter2013scramblesuit]). Due to limited resources, the developers of evasion tools cannot implement and deploy all of these; they need criteria for selecting the most promising. While the evaluation sections of research papers provide some insight into the promise of each proposal, each paper employs its own evaluation methodology, typically selected with the capabilities of the tool in mind, making cross-tool comparisons difficult. Furthermore, determining how well an evaluation predicts real-world performance often poses difficulties. Prototyped tools meeting evaluation criteria are sometimes easily broken using methods not considered by the evaluation [@houmansadr13sp]. 
In this paper, we provide a preliminary examination of a different framework for evaluation with hopes of generating interest in evaluation issues. We propose augmenting prior tool-specific methods of evaluation with a new methodology for creating evaluation criteria and interpreting results. We start from the premise that censors and evaders engage in an ongoing arms race whose ebb and flow is largely determined by economic concerns. Thus, we argue we should evaluate tools on their promise to drive up the costs of the censor while remaining inexpensive to implement. From this perspective, tool-specific evaluations become important evidence that a tool could drive up a censor’s costs. Our perspective addresses the above-mentioned shortcoming of prior evaluations in two ways. First, by examining total cost, we remind the evaluator that every aspect of the traffic produced by the evasion tool matters, not simply those considered by its designer. We hope this universal view will encourage designers to widen their focus and catch the often simple attacks that foiled past approaches. Second, by potentially providing a numerical score, cost can reflect a more quantitative measure than seeing whether a tool can be broken by any means, a standard more appropriate when a clear winner is possible (e.g., cryptographic protocols) than in an arms race. #### Overview After motivating our arms-race view of censorship and the need for considering costs, we turn our attention to illustrating the use of our methodology. Our illustration is preliminary and focuses on only one side of the equation, the costs to the censor, leaving the evader’s costs to future work. Our illustration starts with a simple model of the censor’s costs. The model emphasizes the ability of the censor to employ any feature of network traffic, not just those on which the tool is evaluated. 
Given the paucity of information regarding the budgets of real censors, our model must leave the actual costs as unknown parameters. Thus, we cannot use our model to predict actual costs. Despite this limitation, we use it to reason about whether a particular design choice increases costs (by some undetermined amount). We find that these qualitative results suggest economically motivated evaluation criteria for evasion tools. Informally, we judge a tool by the number of *inexpensive* features it obfuscates. We estimate the expense of a feature using surrogates for the actual costs faced by the censor. We examine three prior approaches under these criteria and find that each is narrowly focused. We also use our model to explain the effectiveness of *active manipulation*, attacks during which the censor interacts with an evasion tool, rather than just passively watching its traffic. We conclude that, in the case of a blacklisting censor, *looking different from known disallowed traffic is a better choice than mimicking allowed protocols.* We end with a discussion of open questions. Prior Work ========== Some previous research has considered the costs incurred by a censor as a result of different circumvention methods. Houmansadr et al. determined that evading decoy routing would increase a censor’s cost in terms of network latency and path length [@houmansadr2014no]. Elahi et al. proposed the CORDON taxonomy that divides different censorship evasion strategies into six types according to their effects on a censor [@elahi2012cordon]. We model the costs a censor incurs in detecting traffic obfuscation tools. Roberts et al. evaluated tools by testing whether they work in various countries [@roberts11berkman]. Callanan et al. used a combination of in-laboratory tests and user surveys to determine the usability, performance, and security characteristics of a variety of tools [@callanan11freedomhouse]. 
By using common methods on tools in their environment, these researchers were able to compare the current success of deployed tools. However, researchers and developers need criteria applicable to undeployed tools. They must carefully select which proposals to develop and deploy due to the costs associated with such efforts. Also, criteria based on surveys include factors other than the technical merits of an approach. For example, a tool may have high adoption due to having first-mover advantage or popular proponents. Alternatively, a tool might be unblocked despite being easily blocked since the tool is too unpopular to warrant the censor’s attention. We desire evaluation criteria that rate tools upon their technical merits. Others have explored absolute characterizations of success. For example, Pfitzmann and Hansen use *undetectability*, or *unobservability*, to mean that the censor should not be able to determine which Internet users are using the evasion tool and use *unblockability* to mean that the censor should not be able to block the tool’s traffic without also blocking a great deal of unrelated traffic [@pfitzmann2010terminology]. However, perfectly achieving these goals typically leads to unacceptably high performance degradation. Thus, undetectability and unblockability are only approximated by tools attempting to increase the censor’s numbers of false negatives and false positives, respectively. Dingledine enumerates general properties that make for a good evasion tool [@dingledine10tor]. We focus on lower-level criteria specific to a tool not being blocked and on the justification of the criteria in terms of an economic model. None of the particular evaluations used by various tool developers were designed to be applicable to multiple tools. We discuss them in Section \[sec:eval-tools\], where we compare their evaluations to our own criteria. Houmansadr et al.
empirically come to a similar conclusion as we do: tools mimicking allowed protocols are ill-advised [@houmansadr13sp]. They support their position by showing that censors can easily identify such mimicking tools. We explore the issue using a formal model.

The Arms Race {#sec:arms}
=============

When an evader deploys a sufficiently successful evasion tool, an affected censor typically improves its system to catch use of the tool. Thus, the two sides are engaged in an arms race. With this in mind, evasion tools should be designed to slow the censor down and to cost it resources. To illustrate this arms race and motivate the consideration of costs, we consider the two most popular approaches that evaders take, polymorphism and steganography, and the possible responses of the censors. Before doing so, we provide background on the censor and evader. We focus on Tor-based evasion tools [@tor] and presume familiarity with Tor’s use as such.

#### Censors and Evaders

A censor disallows some subset of messages. It employs a classifier that examines network traffic and attempts to identify those packets facilitating a disallowed message. The classifier typically uses hand-crafted signatures that characterize disallowed traffic (a blacklist) or allowed traffic (a whitelist). We focus on blacklisting censors as they are more common [@kelly13freedomhouse]. We consider the classifier missing a disallowed message to be a false negative and accidentally blocking an allowed message to be a false positive. The signatures in the blacklist refer to the value of various *features* of the traffic. These features can depend upon a single packet, such as IP address or the destination’s domain name; upon distributions over more than one packet, such as the distribution of interpacket arrival times and packet lengths within a packet flow; or upon the sequence of packets within a flow, requiring the keeping of state.
Distributional and stateful features tend to be more costly since they require more storage and computation [@khattak2013towards]. A censor can identify disallowed traffic either by passive monitoring or by active manipulation. Whereas monitoring simply watches traffic, active manipulation involves the censor sending traffic to a suspected evader. For example, the censor can send manipulated requests to a suspected evader to study its reaction. Active attacks can allow the censor to drive the evader toward more recognizable traffic, which we discuss more in Section \[sec:active\]. We view an evasion tool as a transformation on the network traffic that the censor attempts to classify. Since censors typically consider any traffic employing an evasion tool as disallowed, we consider only transformations that alter disallowed traffic. (Transformations on allowed traffic could be possible with help from, for example, ISPs, but we leave such considerations to future work.) Thus, the tool can transform the features examined by the censor in a manner that drives up false negatives, but not false positives. If an evader drives the number of false negatives unacceptably high, the censor will respond with a new classifier. The classifier could be altered to recognize the new values produced by disallowed messages under the old features or to employ new features that remain unobfuscated by the evader. The classifier may introduce false positives by attempting to block the tool too aggressively. Thus, tools can indirectly cause false positives. Let us consider two examples of such transformations, with the first motivated by polymorphism and the second by steganography. We also consider the censor’s possible responses to the transformations and how they affect the expenses of the censor. For simplicity, we presume the censor’s blacklist starts with a single signature, which is a threshold on the value that a single feature takes on.
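The cost asymmetry between packet-level and distributional features can be made concrete in a short Python sketch; the packet representation, field names, and flow below are invented for illustration:

```python
# Contrast a cheap packet-level feature with a costlier distributional one.
# Packets are modeled as dicts; all values here are hypothetical.
from collections import Counter
import math

def dest_port(packet):
    """Packet-level feature: one header lookup, no state kept."""
    return packet["dport"]

def flow_length_entropy(flow_packets):
    """Distributional feature: the censor must buffer per-flow state (every
    observed length) before the entropy of the length distribution exists."""
    counts = Counter(p["len"] for p in flow_packets)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flow = [{"dport": 443, "len": length} for length in (1500, 1500, 1500, 60)]
assert dest_port(flow[0]) == 443        # O(1) storage per packet
assert flow_length_entropy(flow) > 0.0  # needs the whole flow in memory
```

The storage for the second feature grows with the flow, which is the kind of expense [@khattak2013towards] highlights for distributional and stateful features.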
#### Polymorphism

Polymorphism is a way of spreading out behavior. To be polymorphic in a feature means that the feature takes on multiple values among different instances, such as messages. Spreading out the values of a feature used in a blacklist’s signature can result in the signature no longer identifying disallowed traffic, increasing false negatives (Figure \[fig:poly\]). In response, the censor can either come up with a new decision boundary using the old feature or employ a new feature. A new boundary is likely to be more complex than the old one, making it more difficult to implement and less likely to generalize to new traffic (e.g., [@mitchell97book]). Thus, it will typically be less accurate, increasing the costs of the censor. Using a new feature may require additional measurements, driving up the operating costs of the censor. Either way, the censor must spend on the development effort. For example, many censors (Iran, China, and Syria) were blocking SSL and, thus, Tor, as it also uses SSL. To circumvent this, Obfsproxy (obfs2) was implemented, adding an extra layer of encryption on top of Tor with no recognizable byte patterns. To identify Tor with Obfsproxy, the censor has to employ either a new, more complex classifier or a new feature. The new feature could be a passive feature like packet arrival time or an active feature like the reaction to active manipulation [@houmansadr13sp].

![Polymorphism. Red stars represent disallowed traffic and blue circles represent allowed traffic. The dotted line shows the decision boundary of the censor’s classifier.[]{data-label="fig:poly"}](figs/poly.pdf){width="0.7\linewidth"}

Note that the role of polymorphism in our description differs from that of Tor’s pluggable transports. The pluggable transports framework allows creating different protocols to transform the Tor traffic flow into different formats with the goal of replacing a blocked protocol quickly.
Here, we are envisioning a single protocol that automatically uses many polymorphic variants simultaneously.

#### Steganography

Steganography is a way of looking like allowed communications. To be steganographic in a feature means having values that are very close to those of the allowed communications. Since steganography transforms a feature, as with polymorphism, it can result in unrecognizable disallowed traffic and more false negatives (Figure \[fig:steg\]). In the case of perfect steganography on the feature, responding by selecting a more complex decision boundary will not help since the traffic is no longer separable by the altered feature. If the censor keeps relying upon the altered feature, it has to choose between raising false negatives or false positives. Alternatively, the censor can add a new feature on which allowed and disallowed traffic continue to differ, incurring the cost of implementing and tracking the new feature. Furthermore, adding such a new feature would require learning about how the disallowed and allowed messages differ, which could involve detailed knowledge of the protocol used by the allowed messages and approximated by disallowed traffic (the *cover protocol*). For example, SkypeMorph transforms Tor traffic to look similar to Skype traffic by changing the packet length distribution of Tor traffic. To distinguish real Skype and mimicked Skype traffic, the censor can check their error behaviors using knowledge of the Skype protocol [@houmansadr13sp].

![Steganography[]{data-label="fig:steg"}](figs/steg.pdf){width="0.7\linewidth"}

The Costs of Censorship
=======================

Motivated by the above examples, we believe that censorship and censorship evasion have been and will continue to be an arms race where censors will track an increasing number of features and evaders will transform an increasing number of them.
Since the pace of this race is set largely by the budgets of each opponent, technologies that increase the costs of the censor while not increasing the costs of the evaders should be welcomed by the evaders. For this reason, we present a simple cost model of the censor. First, we consider the costs that the censor incurs as a consequence of allowing or disallowing various packets. Let $c(t,a)$ be the cost to the censor when taking the action $a$ on a packet of type $t$ where for simplicity the actions are $a_{{\mathsf}{a}}$ for allow and $a_{{\mathsf}{d}}$ for disallow and the types are $t_{{\mathsf}{a}}$ for allowed or $t_{{\mathsf}{d}}$ for disallowed. (Refinements are possible.) The censor will pick the action that minimizes its expected cost given the information it has on the packet $i$. That is, it selects
$$\begin{aligned} d(i {\mathop{{|}}}F) &= \operatorname*{argmin}_{a} \sum_{t \in T} P({\mathsf}{type}(i){=}t {\mathop{{|}}}F(i)) * c(t, a) \label{eqn:d}\end{aligned}$$
where $F(i)$ is the result of computing the features in the set $F$ on $i$, $T$ is the set of types, and ${\mathsf}{type}(i)$ is the type of $i$. The cost that the censor will incur for $i$ is
$$\begin{aligned} C(i {\mathop{{|}}}F) &= c(\mathsf{type}(i), d(i {\mathop{{|}}}F)) \label{eqn:C}\end{aligned}$$
When $d(i {\mathop{{|}}}F)$ is $a_{{\mathsf}{a}}$ but ${\mathsf}{type}(i)$ is $t_{{\mathsf}{d}}$, the censor has a false negative. We assume that the cost $c(t_{{\mathsf}{d}}, a_{{\mathsf}{a}})$ is positive since the censor is attempting to block such messages. When $d(i {\mathop{{|}}}F)$ is $a_{{\mathsf}{d}}$ but ${\mathsf}{type}(i)$ is $t_{{\mathsf}{a}}$, the censor has a false positive. We assume that $c(t_{{\mathsf}{a}}, a_{{\mathsf}{d}})$ is positive since blocking allowed traffic can disrupt the economy of the censoring country [@robinson13oitp]. The other costs could be zero or even negative, indicating a reward. Second, we consider the costs of operating the censorship system.
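The decision rule of Eqs. (\[eqn:d\]) and (\[eqn:C\]) above can be sketched in a few lines of Python; the posterior probabilities and cost values are invented placeholders, not estimates of any real censor's parameters:

```python
# c(t, a): hypothetical costs of taking action a on a packet of type t
COSTS = {
    ("disallowed", "allow"): 1.0,     # false negative
    ("allowed", "disallow"): 5.0,     # false positive: collateral damage
    ("allowed", "allow"): 0.0,
    ("disallowed", "disallow"): 0.0,
}

def decide(posterior):
    """d(i | F): the action minimizing expected cost, given P(type | F(i))."""
    def expected_cost(action):
        return sum(p * COSTS[(t, action)] for t, p in posterior.items())
    return min(("allow", "disallow"), key=expected_cost)

def incurred_cost(true_type, posterior):
    """C(i | F): the cost the censor actually incurs on packet i."""
    return COSTS[(true_type, decide(posterior))]

# A packet that looks strongly disallowed gets blocked...
assert decide({"disallowed": 0.9, "allowed": 0.1}) == "disallow"
# ...while an ambiguous one is allowed, since false positives cost more here.
assert decide({"disallowed": 0.5, "allowed": 0.5}) == "allow"
```

Note how the relative magnitudes of the false-negative and false-positive costs shift the censor's effective blocking threshold, which is the trade-off the model is meant to expose.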
To compute the features in the set $F$, the system makes measurements of traffic, such as recording the packet arrival times or lengths. We model operating each measurement $m$ as incurring a cost ${{\mathsf}{op}}(m)$. We also allow the censorship system to be stateful. For example, state allows us to model the feature of flow entropy, which requires making measurements of each packet in a flow and storing distributional information for each flow. The operating expenses of a feature due to storage can depend upon the accuracy to which it is computed. For example, entropy-based approaches become more accurate as the number of samples increases. However, sampling, in this case, means waiting for more packets, which means more storage costs. (Also, if such packets are sent to their destinations before a decision is reached, this can lead to those packets being false negatives, also increasing cost.) With these considerations in mind, we conclude that flow-level features computed over a distribution of packet-level features are typically more costly than packet-level features. We model each feature $f$ as incurring a cost ${{\mathsf}{store}}(f)$ that depends upon the amount of memory it requires. Lastly, we consider development costs. Each new feature $f$ must be implemented, incurring a cost of ${{\mathsf}{imp}}(f)$. For simplicity, we assume that the censor’s system is updated according to a development cycle of fixed duration. To each cycle, we charge the costs of developing the classifier used during that cycle and for using it during the cycle. The development costs depend upon not just the set $F'$ of features used during the cycle, but also the set $F$ of features used in the past since previously used features need not be reimplemented. We denote the expected total cost as ${{\mathsf}{cost}}(F' | F, P)$ where $F'$ are the features used by the classifier during the cycle, $F$ are the features previously used, and $P$ is the traffic distribution.
We model the expected total cost as
$$\begin{aligned} {{\mathsf}{cost}}(F' | F, P) = \phantom{+}& \sum_{\vec{i}} P(\vec{i}) * \sum_{i \in \vec{i}} C(i | F')\\ +& \sum_{m \in {{\mathsf}{meas}}(F')} {{\mathsf}{op}}(m)\\ +& \sum_{f \in F'} {{\mathsf}{store}}(f)\\ +& \sum_{f \in F' - F} {{\mathsf}{imp}}(f)\end{aligned}$$
where $\vec{i}$ ranges over the traffic possible during that cycle and ${{\mathsf}{meas}}(F')$ denotes the set of measurements needed to compute the features $F'$. The actual cost is computed by fixing $\vec{i}$ to the actual traffic. We remind the reader that the above model is not designed to produce accurate predictions of the censor’s actual costs. Rather, we keep it simple and abstract while treating it as a point of departure to discuss how to increase the censor’s relative costs in general. Thus, we make no effort to estimate any of the costs to which the model refers. Furthermore, we do not claim that our model captures every cost of the censor. With this model in mind, we now turn to the approaches that the evasion tool may use to drive up this cost.

Evaluation Criteria {#sec:crit}
===================

Given that we cannot directly measure the costs affecting censors, we search for surrogates that protocol designers can measure and have reason to believe are correlated with these costs. Such surrogates can serve as criteria for selecting which research proposals to deploy. Since we do not empirically demonstrate that these surrogates actually correlate to a censor’s costs, they must be considered hypotheses. For a surrogate of the accuracy of a set of features, the tool evaluator may test a classifier using those features over simulated traffic that includes traffic from the evaluated tool. The number of lines of code needed to implement a feature can act as a surrogate for its implementation costs. The amount of storage for each feature serves as a surrogate for storage costs.
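The per-cycle total ${{\mathsf}{cost}}(F' | F, P)$ introduced above can be sketched directly from its four sums. Every unit cost, feature name, and the feature-to-measurement map below are invented stand-ins for the model's unknown parameters, and we read the operating sum as ranging over the measurements needed by the features in current use:

```python
# A sketch of cost(F' | F, P) for one development cycle; all numbers invented.

def cycle_cost(incurred, features_now, features_past, op, store, imp, meas):
    """incurred: summed classification cost C(i | F') over the cycle's
    traffic; the remaining terms are operating, storage, and development."""
    total = incurred
    total += sum(op[m] for m in meas(features_now))             # operating
    total += sum(store[f] for f in features_now)                # storage
    total += sum(imp[f] for f in features_now - features_past)  # implementation
    return total

def meas(features):
    """meas(F): raw measurements the features need (hypothetical mapping)."""
    need = {"ip": {"header"}, "port": {"header"}, "len_dist": {"lengths"}}
    return set().union(*(need[f] for f in features))

op = {"header": 1.0, "lengths": 3.0}
store = {"ip": 0.1, "port": 0.1, "len_dist": 5.0}  # distributional is pricier
imp = {"ip": 0.0, "port": 0.0, "len_dist": 10.0}

# Switching from {ip, port} to {ip, len_dist} pays to implement the new
# distributional feature and then to operate and store it every cycle.
total = cycle_cost(7.0, {"ip", "len_dist"}, {"ip", "port"},
                   op, store, imp, meas)
assert abs(total - (7.0 + (1.0 + 3.0) + (0.1 + 5.0) + 10.0)) < 1e-9
```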
The number of measurements needed for all the features is a surrogate for measurement costs. Intuitively, the quality of an evasion tool is proportional to the cost of the least expensive feature set $F$ that achieves the level of accuracy demanded by the censor. Thus, if the evaluator can find an inexpensive set of features $F_1$ that accurately identifies traffic from an evasion tool $e_1$ but cannot find an equally or less expensive set of features $F_2$ that accurately identifies traffic from a tool $e_2$, then the evaluator should suspect that $e_2$ is a better tool than $e_1$. However, the evaluator must keep in mind that these findings are relative to his ability to find and test feature sets $F$ and to the method of estimating the expense of features. In this way, the creators of tools cannot argue that their tool achieves some level of success since some set of features might exist that they failed to consider. However, they can demonstrate the shortcomings of other tools by illustrating feature sets under which other tools perform more poorly than their own. Tool creators can also demonstrate that certain features, those obfuscated by their tool, are unlikely to be useful in crafting attacks against their own tool. By doing so for inexpensive, well-known features, they can argue that either unusual or expensive features would have to be used for an attack against their system. Thus, a heuristic metric of a tool’s prospects for success is the cheapness of the features it obfuscates. For example, a tool that obfuscates a few very inexpensive features should be viewed more favorably than a tool that obfuscates expensive features, even if it obfuscates more of them. While computing these surrogates may appear daunting, those evaluating evasion tools typically implement and run programs using the features they claim their tool obfuscates. Thus, they may use their own implementations to find values for these surrogates.
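Under these criteria, a tool can be scored by the surrogate expense of the cheapest feature set the evaluator found to detect it. A toy sketch, in which the feature expenses and detection results are wholly invented:

```python
# Surrogate expense per feature (e.g., derived from lines of code or storage)
FEATURE_EXPENSE = {"tls_fingerprint": 1, "pkt_lengths": 4, "timing_dist": 6}

def cheapest_attack(detecting_sets):
    """Minimum total expense over the feature sets known to detect the tool."""
    return min(sum(FEATURE_EXPENSE[f] for f in fs) for fs in detecting_sets)

# Feature sets a hypothetical evaluator found to accurately detect each tool
detects_e1 = [{"tls_fingerprint"}, {"pkt_lengths", "timing_dist"}]
detects_e2 = [{"pkt_lengths", "timing_dist"}]

# e2 forces the censor toward pricier features, so it scores better --
# relative, as noted above, to the feature sets the evaluator tried.
assert cheapest_attack(detects_e1) < cheapest_attack(detects_e2)
```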
Reevaluation of Existing Tools {#sec:eval-tools}
==============================

In this section, we demonstrate how our evaluation criteria can be used in practice to guide the development process and evaluation of different circumvention tools. To do this, we reevaluate some existing tools using our evaluation criteria. We examine only the capabilities of these tools as described in each paper’s evaluation; we make no effort to confirm the correctness of these evaluations or to infer additional capabilities of the tools. ScrambleSuit [@winter2013scramblesuit] obfuscates Tor traffic by not employing a telltale TLS handshake used by plain Tor and by polymorphically randomizing packet lengths and interpacket arrival times to look different from both Tor and its own instances. Presuming that the packet lengths and interpacket arrival times over which it randomizes match ones seen in allowed traffic, it obfuscates these features. SkypeMorph [@mohajeri2012skypemorph] is a steganography tool to obfuscate Tor traffic as Skype video calls. In addition to hiding the TLS handshake, it changes the Tor packet sizes and interpacket delays to match those of pre-recorded traffic of a Skype video call. Since Skype traffic is common, it obfuscates these two features. StegoTorus is a polymorphic steganographic tool designed to have a diverse set of steganographic modules, allowing the StegoTorus client to use whichever modules the censor has not yet blocked [@weinberg2012stegotorus]. Currently, there are two proof-of-concept steganography modules: one uses HTTP and another mimics an encrypted peer-to-peer cover protocol, such as Skype. StegoTorus obfuscates the TLS handshake, connection length (seconds), connection payload, and per-packet payloads, which corresponds to two measurements since connection payload and per-packet payload require the same measurement.
While all three obfuscate a small number of inexpensive features, some of their effort to obfuscate distributional features may be misplaced. For example, StegoTorus and SkypeMorph focus on mimicking distribution-based features but fail to mimic inexpensive features like error codes, a weakness we discuss next.

Active Manipulation {#sec:active}
===================

To illustrate the censor’s ability to focus on inexpensive, simple features, we discuss active manipulation. Active manipulation aims to increase $P({\mathsf}{type}(i){=}t_{{\mathsf}{d}} {\mathop{{|}}}F(i))$ when ${\mathsf}{type}(i) = t_{{\mathsf}{d}}$ by making the instance behave in a manner that is characteristic of disallowed traffic while not degrading any of the other probabilities. To do so, active manipulation engages in behavior characteristic of an evasion tool that is not characteristic of any known allowed protocols. For example, presuming that all Tor traffic is disallowed, the censor could initiate the Tor protocol handshake and observe whether the client producing the instance responds in the manner of Tor. Such manipulations work well since allowed traffic is unlikely to just so happen to exhibit Tor’s complex behavior, meaning that active manipulation introduces few false positives. Thus, atypical, complex behaviors are dangerous for evasion tools as they provide a telltale sign of tool use. Systems like ScrambleSuit that reduce the complexity of the handshake for those without a password represent progress [@winter2013scramblesuit]. Houmansadr et al. [@houmansadr13sp] consider a form of active manipulation for defeating steganographic systems. It operates in two steps. First, it establishes that the protocol is very similar to some whitelisted protocol in a manner similar to the initiation attack discussed above. Second, it proves that the protocol is not really that whitelisted protocol by exercising some atypical behavior of the whitelisted protocol.
Using some additional reasoning and replacing “whitelisted traffic” with “allowed traffic”, the above manipulation approach may also be used by a blacklisting censor. Intuitively, the above two steps together show that the protocol is masquerading as another, which is suspicious enough to warrant blocking since it is unlikely that allowed traffic would do so. However, in this case, the evader may respond as in the above attacks by decreasing its complexity to pass as a simple, unknown protocol, which in this case would mean no longer attempting to look like a known protocol. Thus, it appears that *polymorphism is a better choice than steganography in the blacklisting case.*

Discussion and Future Work
==========================

The simple cost model presented in this preliminary proposal has already provided some insights into tool evaluation. We have provided a proof of concept that evaluations can be more universally applied and more closely tied to economics than in the past. These advantages come with difficulties, making our evaluations more complex than tool-specific ones with a narrow focus. Furthermore, this preliminary work leaves some questions open. We consider some below.

\(1) Can our cost model be validated? We developed our cost model primarily through intuition. Ideally, we would have empirical results showing its accuracy. Future work can look at the costs of actual censors in democratic countries with published budgets to gain insights into their costs. We can also compare our model to the costs of those engaged in similar arms races, such as spam detection or network intrusion detection.

\(2) How should we select features to examine? The evaluator might overlook relatively cheap features while examining ones that the censor never intended to use. In essence, the evaluations of StegoTorus and SkypeMorph, by not considering the features considered by Houmansadr et al.
[@houmansadr13sp], suffered from this trap, which our methodology highlights but does not prevent. Future work can examine past data to estimate the relative cost of different features by seeing how long it takes for a censor to adapt to manipulations of them [@khattak2013towards]. However, the open-ended nature of the measurements and features possible makes it difficult to ever conclude that an evaluation examined every cheap feature. We expect features elicited by manipulation, such as those used by Houmansadr et al. to distinguish StegoTorus from its cover protocols [@houmansadr13sp], will be particularly difficult to characterize.

\(3) Do our surrogates reflect the censor’s actual costs? Answering this question depends upon the answers to the previous two: our surrogates must be developed from an accurate cost model and should focus on the features most used by the censor. As a stopgap, our evaluation heuristic is computed using the features manipulated by the evasion tool rather than those used by the censor. Future work can instead subject evasion tools to a battery of tests based on inexpensive features.

\(4) Are all costs equally important for any censorship regime? Different kinds of costs might dominate in different censorship regimes. For example, China seems to increase censorship during and before specific events like the anniversary of the Tiananmen Square protests, which suggests that the relative costs of false positives and negatives vary with the conditions. Other countries (e.g., Saudi Arabia and Qatar) use computationally costly DPI methods rather than using simpler but customized methods like DNS redirection, as China does [@gillcharacterizing]. These countries can easily buy off-the-shelf DPI tools but lack the skills to build customized tools, which implies that implementation costs might vary by country.

\(5) What should we add to our model? Our current model only considers features of packets.
Thus, we cannot fully evaluate tools like Flash Proxy [@fifield2012evading] as it focuses on lowering the cost of creating new proxies rather than on obfuscating features of traffic. We also assume that packets do not interact with each other, which limits our ability to explain attacks leveraging such interactions. For example, Geddes et al. [@geddes2013cover] show that randomly dropping $5\%$ of the traffic would render disallowed traffic using Skype as a cover protocol (e.g., FreeWave [@houmansadr2013want] and SkypeMorph [@mohajeri2012skypemorph]) useless without affecting legitimate Skype. This is caused by a channel mismatch between the cover protocol and the disallowed protocol: the cover protocols are loss-tolerant peer-to-peer systems whereas the disallowed protocol (Tor) is a loss-intolerant client-proxy system. We cannot model loss tolerance since we consider each packet in isolation.

\(6) What are the trade-offs between the censor’s and evaders’ costs? Evaluations of evasion tools must also consider the costs of the evaders. Since evaders separate out into tool developers and tool users, we may require two additional cost models. With these models in hand, we may examine the trade-off between increasing the censor’s costs and keeping the evaders’ costs acceptable. Unfortunately, fully understanding these trade-offs may require models that produce quantitative predictions of costs, allowing us to compare each party’s cost on a common scale. In particular, by focusing on the censor’s costs, we presuppose that the evaders are better off whenever the censor is worse off. However, this is not always the case. For example, an evasion technique that increases the censor’s false positives but does not result in more disallowed traffic flowing makes the censor worse off without helping the evader. Alternatively, the censor changing its opinion on the dangers of disallowed traffic may result in it blocking less disallowed traffic but without its costs increasing.
Thus, we would like a better understanding of when increasing a censor’s costs corresponds to an improvement for the evader. While this proposal does not answer these questions, our overall methodology allows us to systematically consider them. It makes plain that prior evaluations and our own criteria found in Section \[sec:crit\] must be treated as heuristics and encourages the evaluator to consider how they may deviate from the actual quantity of interest, costs. We provide a rigorous methodology in which to discuss the trade-offs among evasion tools and their evaluations. Acknowledgments {#acknowledgments .unnumbered} =============== We thank David Fifield for many helpful conversations on this topic. [^1]: We gratefully acknowledge funding support from the Freedom 2 Connect Foundation, Intel, the National Science Foundation (grants 1237265 and 1223717), the Open Technology Fund, the TRUST Science and Technology Center (NSF grant 0424422), the US Department of State Bureau of Democracy, Human Rights, and Labor. The opinions in this paper are those of the authors and do not necessarily reflect the opinions of any funding sponsor or the United States Government.
---
abstract: 'We explore the consequences of the neutrino mass matrix having a hidden $\mathcal{Z}_2$ symmetry and one zero eigenvalue. When implemented, these two conditions give relations among the mixing angles. In addition, fitting these relations to the existing oscillation data allows limits to be placed on the parameter of the symmetry.'
author:
- 'Duane A. Dicus$^{1,}$[^1], Shao-Feng Ge$^{1,2,}$[^2], and Wayne W. Repko$^{3,}$[^3]'
title: 'Generalized Hidden $\mathcal{Z}_2$ Symmetry of Neutrino Mixing'
---

Introduction {#sec:Intro}
============

Neutrino physics can anticipate an era of higher-precision measurements with the upcoming generation of neutrino experiments. In the past, measurements have shown that the mixing pattern of the lepton sector is quite different from that of the quark sector. In the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) parameterization [@PMNS] for lepton mixing there are two large mixing angles. The atmospheric mixing angle $\theta_a \equiv \theta_{23}$ is almost maximal while the solar mixing angle $\theta_s \equiv \theta_{12}$ is also large and the reactor mixing angle $\theta_x \equiv \theta_{13}$ nearly vanishes. The recent results are summarized in Table \[tab:data\]. We can see that the uncertainties in the mixing angles are not particularly small. In most measurements there is roughly a $3^\circ$ deviation at the $1\sigma$ confidence level.
|                   | $\Delta_s\,(10^{-5}{\rm eV}^2)$ | $\Delta_a\,(10^{-3}{\rm eV}^2)$ | $\sin^2\theta_s\,(\theta_s)$ | $\sin^2\theta_a\,(\theta_a)$ | $\sin^2\theta_x\,(\theta_x)$ |
|-------------------|---------------------------------|---------------------------------|------------------------------|------------------------------|------------------------------|
| Central Value     | $7.67$                          | $2.39$                          | $0.312$ ($34.0^\circ$)       | $0.466$ ($43.0^\circ$)       | $0.016$ ($7.3^\circ$)        |
| $1\,\sigma$ Range | $7.48-7.83$                     | $2.31-2.50$                     | $0.294-0.331$ ($32.8-35.1^\circ$) | $0.408-0.539$ ($39.7-47.2^\circ$) | $0.006-0.026$ ($4.4-9.3^\circ$) |

  : Neutrino oscillation data.[]{data-label="tab:data"}

The mixing matrix which incorporates these angles and diagonalizes the mass matrix $M_{\nu}$ via $U^TM_{\nu}U\,=\,M^{{\rm diag}}_\nu$ is given by
$$U_\nu=\begin{pmatrix} c_sc_x & -s_sc_x & -s_xe^{i\delta_D} \\ s_sc_a-c_ss_as_xe^{-i\delta_D} & c_sc_a+s_ss_as_xe^{-i\delta_D} & -s_ac_x \\ s_ss_a+c_sc_as_xe^{-i\delta_D} & c_ss_a-s_sc_as_xe^{-i\delta_D} & c_ac_x \end{pmatrix} \label{Utot}$$
with $(s_{\alpha},c_{\alpha})\,\equiv\,(\sin\theta_{\alpha},\cos\theta_{\alpha})$ for $\alpha\,=\,s,a,x$. $\delta_D$ is the Dirac phase and we have neglected Majorana phases. From Table \[tab:data\] we see that a good first approximation is to take $\theta_x\,=\,0,\,\, \theta_a\,=\,45^{\circ}$, which gives
$$U_\nu(\theta_s)=\begin{pmatrix} \cos \theta_s & - \sin \theta_s & 0 \\ \sqrt{\frac 1 2} \sin \theta_s & \sqrt{\frac 1 2} \cos \theta_s & - \sqrt{\frac 1 2} \\ \sqrt{\frac 1 2} \sin \theta_s & \sqrt{\frac 1 2} \cos \theta_s & \sqrt{\frac 1 2}\end{pmatrix}\equiv\begin{pmatrix} v_1 & v_2 & v_3\end{pmatrix}\,. \label{eq:Unus}$$
Using this as a starting point, we wish to investigate whether there is an underlying symmetry $G$ of the neutrino mass matrix. Such a symmetry must satisfy $[G,M_\nu]=0$, or $G^TM_\nu G=M_\nu$. Given a $G$, the transformation $GU_\nu$ also diagonalizes $M_\nu$. But $U_\nu$ is unique except for phases. This can be seen by supposing that $d_\nu$ is a unitary matrix such that $U_\nu d_\nu$ also diagonalizes $M_\nu$.
For this to be true, $d_\nu$ must satisfy $d^T_\nu M^{{\rm diag}}_\nu d_\nu=M^{{\rm diag}}_\nu$ and this implies that $d_\nu^2=1$. Since $GU_\nu$ diagonalizes $M_\nu$, it must have the form $GU_\nu=U_\nu d_\nu$, where $d_\nu$ has $1$ or $-1$ diagonal elements. Thus $$U^\dagger_\nu G U_\nu=d_\nu\equiv \begin{pmatrix} d_1 \\ & d_2 \\ & & d_3 \end{pmatrix}\qquad \Leftrightarrow \qquad G=U_\nu d_\nu U^\dagger_\nu \,. \label{eq:UGU}$$ There are only eight possible combinations for the elements of $d_\nu$. Two of these are the unit matrix and its negative, both of which define $G$ as a multiple of the identity. Of the remaining six, three have two entries of $+1$ and one of $-1$, while the other three have two entries of $-1$ and one of $+1$. These diagonal matrices differ by an overall minus sign, so only one of the two types is independent. If we choose the three with one entry of $1$ and two entries of $-1$ ($\det(G)=1$), then it is easy to see that multiplying any pair of these diagonal matrices will result in the remaining matrix. Hence, in reality, only two of these matrices are independent and both represent a ${\cal Z}_2$ symmetry. Since the independent $G$’s commute, the horizontal symmetry of lepton mixing is $\mathcal Z_2 \times \mathcal Z_2$ if neutrinos are Majorana fermions [@Lam:2006wm; @Lam:2008sh; @Grimus:2009pg; @GHY]. A representation of $G$ can be obtained using $$G=d_1 v_1 v^\dagger_1 + d_2 v_2 v^\dagger_2 + d_3 v_3 v^\dagger_3\,. 
\label{eq:G1}$$ Since the eigenvalue $1$ can occur in three places, there are three symmetry matrices $G$ $$G_1=\left(\begin{array}{ccc} c_s^2-s_s^2 & \sqrt{2}s_sc_s & \sqrt{2}s_sc_s \\ \sqrt{2}s_sc_s & -c_s^2 & s_s^2 \\ \sqrt{2}s_sc_s & s_s^2 & -c_s^2 \end{array}\right)\quad G_2=\left(\begin{array}{ccc} -(c_s^2-s_s^2) & -\sqrt{2}s_sc_s & -\sqrt{2}s_sc_s \\ -\sqrt{2}s_sc_s & -s_s^2 & c_s^2 \\ -\sqrt{2}s_sc_s & c_s^2 & -s_s^2 \end{array}\right)\,,$$ and $$G_3=\left(\begin{array}{ccc} -1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{array}\right)\,,$$ where the subscript $i$ on $G_i$ denotes the component of $d^{(i)}_\nu$ that is $+1$. $G_3$ gives $\mu-\tau$ symmetry [@mu-tau], while $G_1$ is symmetric and commutes with $G_3$. For simplicity we can parameterize the solar mixing angle $\theta_s$ as $$\cos \theta_s\equiv\frac {- k} {\sqrt{k^2 + 2}}, \qquad \sin \theta_s\equiv\frac {\sqrt 2}{\sqrt{k^2 + 2}}. \label{eq:solar-k}$$ Then the mixing matrix (\[eq:Unus\]) takes the form [@GHY] $$U_\nu(k)=\begin{pmatrix} \frac {-k}{\sqrt{2 + k^2}} & \frac {- \sqrt 2}{\sqrt{2 + k^2}} & 0 \\ \frac 1 {\sqrt{2 + k^2}} & \frac {- k} {\sqrt{2 (2 + k^2)}} & - \frac 1 {\sqrt 2} \\ \frac 1 {\sqrt{2 + k^2}} & \frac {- k} {\sqrt{2 (2 + k^2)}} & \frac 1 {\sqrt 2} \end{pmatrix} \equiv U_k. \label{eq:Uk}$$ Consequently, the symmetry transformation matrix $G_1(\theta_s)$ can be reexpressed in terms of $k$ $$G_1(k)=\frac{1}{2+k^2} \begin{pmatrix} 2-k^2 & 2k & 2k \\ 2k & k^2 & -2 \\ 2k & -2 & k^2 \end{pmatrix}. \label{G1}$$ Although we can “derive” a generalized form of the $G_1$ symmetry transformation matrix (\[G1\]) given the mixing matrix (\[eq:Uk\]), this relationship cannot be reversed: the mixing matrix $U_\nu$ cannot be uniquely determined from $G_1$ alone, because $G_1$ has degenerate eigenvalues. 
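These algebraic properties are straightforward to check numerically. The following sketch (numpy; the test value of $k$ and the mass values are arbitrary inputs, not fit results) verifies that the explicit $G_1(k)$ of (\[G1\]) squares to the identity, commutes with $G_3$, is diagonalized by $U_k$ with eigenvalues $\pm1$, and leaves a mass matrix of the form $M_\nu = U_k M^{\rm diag}_\nu U_k^T$ invariant:

```python
import numpy as np

def U_k(k):
    # Mixing matrix of Eq. (Uk): theta_x = 0, theta_a = 45 deg,
    # solar angle parameterized by k
    n = np.sqrt(2.0 + k**2)
    r2 = np.sqrt(2.0)
    return np.array([[-k/n, -r2/n,      0.0  ],
                     [ 1/n, -k/(r2*n), -1/r2 ],
                     [ 1/n, -k/(r2*n),  1/r2 ]])

def G_1(k):
    # Explicit symmetry transformation matrix of Eq. (G1)
    return np.array([[2 - k**2, 2*k,   2*k  ],
                     [2*k,      k**2, -2.0  ],
                     [2*k,     -2.0,   k**2]]) / (2.0 + k**2)

G_3 = np.array([[-1, 0, 0], [0, 0, -1], [0, -1, 0]], dtype=float)

k = 1.7                                   # arbitrary test value
U, G = U_k(k), G_1(k)

assert np.allclose(G @ G, np.eye(3))      # Z_2: G_1^2 = 1
assert np.allclose(G @ G_3, G_3 @ G)      # [G_1, G_3] = 0
d = U.T @ G @ U                           # diagonal with entries +-1
assert np.allclose(d, np.diag(np.diag(d))) and np.allclose(np.abs(np.diag(d)), 1)
M = U @ np.diag([0.05, 0.06, 0.0]) @ U.T  # M_nu = U M_diag U^T (real U)
assert np.allclose(G.T @ M @ G, M)        # invariance G_1^T M_nu G_1 = M_nu
```

Because $G_1$ and $G_3$ share the eigenvectors $v_i$, the commutation check succeeds for any value of $k$.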
Invariance under $G_3$ requires $\theta_x=0^{\circ}$, $\theta_a\,=\,45^{\circ}$, but invariance under $G_1$ does not, so below we assume that the neutrino mass matrix is invariant under $G_1$, not only in the approximation $\theta_x\,=\,0^{\circ},\,\,\theta_a\,=\,45^{\circ}$, but for general values of all the mixing angles. In the next section we use this assumption in the form of Eq.(\[G1\]) with general values of $k$ to derive relations among the mixing angles. In Sec.3 we compare our results with the experimental values and in Sec.4 we summarize.

Invariance under the $\mathcal{Z}_2$ symmetry $G_1$ {#sec:concrete}
===================================================

In this section we show explicitly the consequences of the generalized $G_1$ symmetry. Only two mass-squared differences have been measured, and the overall neutrino mass scale has not been determined experimentally. It is possible that one of the mass eigenvalues vanishes; this is also theoretically motivated by the minimal seesaw model [@MinimalSeesaw]. We will explore the joint consequences of one vanishing mass eigenvalue and $G_1$ invariance. For simplicity we will postpone discussion of $CP$ phases to a later article [@CPGDR].

Constraints on Mass Matrix Elements
-----------------------------------

If the neutrinos are Majorana fermions, their mass matrix must be symmetric. We consider the case of three generations of light neutrinos. Then the most general form of the neutrino mass matrix can be parameterized as $$M_{\nu}=\begin{pmatrix} A & B_1 & B_2 \\ B_1 & C_1 & D \\ B_2 & D & C_2 \end{pmatrix}\,,\label{M}$$ which has six independent matrix elements. We assume $M_{\nu}$ is invariant under the $G_1$ symmetry transformation, $$G_1^T M_{\nu} G_1=M_{\nu}\,. 
\label{GMG}$$ With the help of (\[G1\]) and (\[M\]), Eq.(\[GMG\]) gives two conditions on the neutrino mass matrix elements of (\[M\]) [@DGR] , $$\begin{aligned} \frac{B_1+B_2}{C_1+C_2+2D-2A}\,&=&\,\frac{k}{k^2-2}\,, \label{BCDAk1} \\ \frac{B_1-B_2}{C_1-C_2}\,&=&\,\frac{1}{k}\,. \label{BCDAk2}\end{aligned}$$ \[BCDAk\] Eigenvalues and eigenstates {#sec:eigen} --------------------------- If there is a vanishing mass eigenvalue $m_i = 0$ the corresponding mass eigenstate, which can be denoted as $v \equiv (\alpha, \beta, \gamma)^T$, must satisfy $$\begin{pmatrix} A & B_1 & B_2 \\ B_1 & C_1 & D \\ B_2 & D & C_2 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix}=0\,.\label{m00}$$ If we assume $\alpha\,\ne\,0$ then we get three equations $$\begin{aligned} A\,&=&\,-\rho\,B_1-\sigma\,B_2\,, \label{ABB} \\ B_1\,&=&\,-\rho\,C_1-\sigma\,D\,, \label{BCD} \\ B_2\,&=&\,-\rho\,D-\sigma\,C_2\,, \label{BDC}\end{aligned}$$ \[eq:0mass\] where $\rho \equiv \beta/\alpha,\,\sigma \equiv \gamma/\alpha$. Thus we have two sets of conditions, (\[BCDAk\]) from $G_1$ invariance, and (\[eq:0mass\]) from the vanishing mass eigenvalue. From the relations (\[eq:0mass\]) we can express the matrix element $A$ in terms of $C_1$, $C_2$ and $D$ $$A=\rho^2 C_1+ \sigma^2 C_2+ 2 \rho \sigma \,D\,.\label{AA}$$ Now let us use these in the $\mathcal{Z}_2$ relations. Eq.(\[BCDAk1\]) and Eq.(\[BCDAk2\]) give $$\begin{aligned} (\sigma\,k+1)C_2-(\rho\,k+1)C_1+k(\rho-\sigma)D=0\,, \label{C2C1D} \\ (\rho\,k+1)(2\rho-k)C_1+(\sigma\,k+1)(2\sigma-k)C_2+[(2-k^2)(\rho+\sigma)-2k(1-2\rho\sigma)] D=0\,. \label{C1C2D}\end{aligned}$$ The above two relations can be reexpressed in terms of only two matrix elements, $D$ and $C_2$ or $C_1$ respectively $$\begin{aligned} (\rho+\sigma-k)[(\sigma\,k+1)C_2+(\rho\,k+1)D]\,&=&\,0\,,\label{C2D} \\ (\rho+\sigma-k)[(\rho\,k+1)C_1+(\sigma\,k+1)D]\,&=&\,0\,. 
\label{C1D}\end{aligned}$$ The nonzero mass eigenvalues are given by $$\label{Mass} m_{\pm}\,=\,\frac{1}{2}\left[A+C_1+C_2\pm\sqrt{(A+C_1+C_2)^2+4(\rho^2+\sigma^2+1)\, (D^2-C_1C_2)}\right]\,,$$ where we have used (\[BCD\]), (\[BDC\]) and (\[AA\]). From (\[C2D\]) and (\[C1D\]) it is obvious that one possible solution to the equations for $C_1,C_2,D$ is $$\begin{aligned} C_1\,&=&\,-\frac{\sigma\,k+1}{\rho\,k+1}D\,,\label{C1Dx} \\ C_2\,&=&\,-\frac{\rho\,k+1}{\sigma\,k+1}D\,. \label{C2Dx}\end{aligned}$$ This makes $D^2=C_1C_2$, and consequently $m_{-}$ given above would also vanish. Since experiment shows that two mass-squared differences among the three neutrino mass eigenvalues are nonzero, at least two masses must be nonzero in order to produce two oscillation lengths. A second solution of (\[C2D\]) and (\[C1D\]) is $\rho\,=\,\sigma\,=\,-1/k$, but then the three relations in (\[eq:0mass\]) simply reproduce the conditions (\[BCDAk\]). The conclusion is therefore that we must have $$\rho\,=\,k-\sigma\,\label{rsk}$$ and the conditions (\[C2C1D\]) or (\[C1C2D\]) reduce to an equation for $\sigma$ $$\label{SCCD} \sigma\,=\,\frac{(1+k^2)C_1-C_2-k^2D}{k(C_1+C_2-2D)}\,.$$ This relation represents the constraint from $G_1$ invariance, which was originally expressed as the two independent relations (\[BCDAk\]). Using (\[rsk\]), these two relations are satisfied simultaneously and reduce to the single constraint (\[SCCD\]). The condition (\[rsk\]) can also be substituted into (\[ABB\]), (\[BCD\]), and (\[BDC\]) to give $$\begin{aligned} \sigma\,&=&\,\frac{A+kB_1}{B_1-B_2}\,, \label{ABBk} \\ \sigma\,&=&\,\frac{B_1+kC_1}{C_1-D}\,, \label{BCDk} \\ \sigma\,&=&\,\frac{B_2+kD}{D-C_2}\,, \label{BDCk}\end{aligned}$$ respectively. These three relations are a manifestation of the vanishing mass eigenvalue. Setting these expressions for $\sigma$ equal to one another gives relations among the matrix elements $A,\ldots,D$ in terms of the parameter $k$. 
Not all of these equations are independent but two different relations are possible: $$\begin{aligned} (A + k B_1) (C_1 - D)- (B_1 + k C_1)(B_1 - B_2) & = & 0\,,\label{eq:krel-1}\\ (A + k B_1) (D - C_2)- (B_2 + k D) (B_1 - B_2) & = & 0\,.\label{eq:krel-2}\end{aligned}$$ \[eq:krel\] In the next subsection we will write $A,\ldots,D$ in terms of the mixing angles and thereby get two relations among the mixing angles, again involving $k$. Reconstruction of Neutrino Mass Matrix {#sec:restrictions} -------------------------------------- Using $U_{\nu}$ from Eq.(\[Utot\]) in $M_{\nu}\,=\,U^{*}M^{{\rm diag}}_{\nu}U^{\dagger}$ and comparing with (\[M\]) we get [@BDHL] $$\begin{aligned} A & = & c_x^2 c_s^2 m_1 + c_x^2 s_s^2 m_2 + s_x^2 m_3 \label{VA}\\ B_1 & = & c_x [s_s c_s c_a - s_x s_a c_s^2] m_1 - c_x [s_s c_s c_a + s_x s_a s_s^2] m_2 + c_x s_x s_a m_3 \label{VB1}\\ B_2 & = & c_x [s_s c_s s_a + s_x c_a c_s^2] m_1 - c_x [s_s c_s s_a - s_x c_a s_s^2] m_2 - s_x c_x c_a m_3 \label{VB2}\\ C_1 & = & (s_s c_a - s_x c_s s_a)^2 m_1 + (c_s c_a + s_x s_s s_a)^2 m_2 + c_x^2 s_a^2 m_3 \label{VC1} \\ C_2 & = & (s_s s_a + s_x c_s c_a)^2 m_1 + (c_s s_a - s_x s_s c_a)^2 m_2 + c_x^2 c_a^2 m_3 \label{VC2} \\ D & = & (s_s s_a + s_x c_s c_a)(s_s c_a - s_x c_s s_a) m_1 + (c_s s_a - s_x s_s c_a)(c_s c_a + s_x s_s s_a) m_2 - c_x^2 s_a c_a m_3 \label{VD}\end{aligned}$$ \[eq:reconstruction\] where, as mentioned above, we have deferred consideration of $CP$ violation to a later article. The mass eigenvalues can be further parameterized in terms of experimentally measured mass square differences: $m_1 = m_0$, $m_2 = m_0 \sqrt{1+r}$ and $m_3 = 0$ for inverted mass hierarchy and $m_1 = 0$, $m_2 = m_0 \sqrt r$ and $m_3 = m_0$ for normal mass hierarchy where $m_0 \equiv \sqrt{\Delta_a}$ and $r\equiv\Delta_s/\Delta_a$ which is positive. 
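As a consistency check, the reconstruction formulas (\[VA\])–(\[VD\]) can be compared element by element against $M_{\nu}\,=\,U^{*}M^{{\rm diag}}_{\nu}U^{\dagger}$ with $U$ taken from (\[Utot\]) at $\delta_D=0$ (real $U$, with $CP$ phases deferred as above). A sketch, using arbitrary test angles and masses:

```python
import numpy as np

s, a, x = 0.59, 0.75, 0.13     # arbitrary test angles (radians)
m1, m2, m3 = 0.05, 0.06, 0.0   # test masses; m3 = 0 (inverted hierarchy)

ss, cs = np.sin(s), np.cos(s)
sa, ca = np.sin(a), np.cos(a)
sx, cx = np.sin(x), np.cos(x)

# U of Eq. (Utot) with delta_D = 0
U = np.array([[cs*cx,            -ss*cx,            -sx   ],
              [ss*ca - cs*sa*sx,  cs*ca + ss*sa*sx, -sa*cx],
              [ss*sa + cs*ca*sx,  cs*sa - ss*ca*sx,  ca*cx]])

M = U @ np.diag([m1, m2, m3]) @ U.T   # M_nu = U* Mdiag U^dagger for real U

# Reconstruction formulas (VA)-(VD)
A  = cx**2*cs**2*m1 + cx**2*ss**2*m2 + sx**2*m3
B1 = cx*(ss*cs*ca - sx*sa*cs**2)*m1 - cx*(ss*cs*ca + sx*sa*ss**2)*m2 + cx*sx*sa*m3
B2 = cx*(ss*cs*sa + sx*ca*cs**2)*m1 - cx*(ss*cs*sa - sx*ca*ss**2)*m2 - sx*cx*ca*m3
C1 = (ss*ca - sx*cs*sa)**2*m1 + (cs*ca + sx*ss*sa)**2*m2 + cx**2*sa**2*m3
C2 = (ss*sa + sx*cs*ca)**2*m1 + (cs*sa - sx*ss*ca)**2*m2 + cx**2*ca**2*m3
D  = ((ss*sa + sx*cs*ca)*(ss*ca - sx*cs*sa)*m1
      + (cs*sa - sx*ss*ca)*(cs*ca + sx*ss*sa)*m2 - cx**2*sa*ca*m3)

assert np.allclose([A, B1, B2, C1, C2, D],
                   [M[0,0], M[0,1], M[0,2], M[1,1], M[2,2], M[1,2]])
```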
Correlations between Mixing Angles {#sec:inverted} ---------------------------------- To get relations between mixing angles we can substitute (\[eq:reconstruction\]) into (\[eq:krel\]) which gives $$\begin{aligned} - c_a c_x \left[ c_x (c_a - s_a) + k s_x \right] m_1 m_2 & = & 0\,,\\ s_a c_x \left[ c_x (c_a - s_a) + k s_x \right] m_1 m_2 & = & 0\,.\end{aligned}$$ where we have assumed the mass hierarchy is inverted with vanishing $m_3$ and nonzero $m_1$ and $m_2$, while $c_a$, $s_a$ and $c_x$ are also nonzero. The only possible solution is $$\label{r1} k = c_x \frac {s_a - c_a}{s_x} \approx \frac {\sqrt 2 \delta_a}{\delta_x}\,,$$ where the last factor comes from expanding the two mixing angles $\theta_a$ and $\theta_x$ around the approximations $45^{\circ}$ and $0^{\circ}$, $$\theta_a\equiv\frac \pi 4 + \delta_a\,,\qquad\theta_x\equiv\delta_x\,. \label{eq:expansion}$$ With this solution for $k$ and the reconstructed mass matrix elements (\[eq:reconstruction\]) substituted back into (\[ABBk\]) we find $$\label{r2} \frac{1}{\sigma}\,=\,\frac{c_a-s_a}{k\,c_a}=- \frac {s_x}{c_x c_a} \approx - \sqrt 2 \delta_x\,.$$ Since $\delta_x$ is quite small, $\sigma$ should be very large according to (\[r2\]). We still have the condition from $\mathcal{Z}_2$, Eq.(\[SCCD\]). 
Together with (\[r2\]) and (\[eq:reconstruction\]) as well as (\[r1\]) it gives $$\tan 2 \theta_s=\frac {2 (c^2_a - s^2_a)s_x}{c^2_x - (2 + 2s^2_x) c_a s_a} = \frac {2 \left( \frac{\ds c_a + s_a}{\ds c_a - s_a} s_x \right)} {1 - \left( \frac {\ds c_a + s_a}{\ds c_a - s_a} s_x \right)^2} = \frac {2 \left( - \frac {\ds c_a - s_a}{\ds c_a + s_a} \frac{\ds 1} {\ds s_x} \right)}{1 - \left( - \frac {\ds c_a - s_a}{\ds c_a + s_a} \frac{\ds 1} {\ds s_x} \right)^2}\,.$$ There are two possible solutions, $$\label{r3p} \tan\theta_s\,=\,- \frac {c_a - s_a}{c_a + s_a} \frac{1}{s_x} = \frac{k}{c_x(s_a+c_a)}\approx\frac {\delta_a}{\delta_x}\,,$$ or $$\label{r3a} \tan\theta_s\,=\,\frac {c_a + s_a}{c_a - s_a}s_x = - \frac{c_x(c_a+s_a)}{k} \approx - \frac {\delta_x}{\delta_a}\,.$$ These relations between mixing angles can be used to predict the poorly measured $\theta_x$ in terms of the solar and atmospheric mixing angles. For example, (\[r3a\]) gives $$\begin{aligned} s_x=\frac {c_a - s_a}{s_a + c_a}\frac {s_s}{c_s}\qquad \Rightarrow \qquad \delta_x \approx - \tan \theta_s \delta_a\,. \label{eq:sx-b}\end{aligned}$$ Since $\theta_x$ is the focus of the next generation of neutrino experiments, we use (\[eq:sx-b\]) to estimate its value. The scatter plot is shown in Fig. \[fig:thetax\]. A scatter plot based on (\[r3p\]) would look similar, with a steeper slope for the points. ![Prediction of $\theta_x$ in terms of $\theta_a$ and $\theta_s$ at the 90% C.L. The vertical solid line denotes the experimentally measured central value of the atmospheric mixing angle $\theta_a$.[]{data-label="fig:thetax"}](IH_ThetaXA_B.eps){width="3in"} Another way of expressing the results is to write all of the mixing angles in terms of the parameters $\sigma$ and $k$. 
Using $z=1/\sigma$, this gives $$\begin{aligned} \sin^2 \theta_a & = & \frac{(1-kz)^2}{k^2z^2-2kz+2}\label{s2a} \\ \sin^2 \theta_x & = & \frac{z^2}{(k^2+1)z^2-2kz+2}\label{s2x} \\ {\rm and}\,\,\,\,{\rm either}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,&& \nonumber \\ \sin^2 \theta_s & = & \frac{(2-kz)^2}{k^4z^2-2k^3z+2k^2(z^2+1)-4kz+4} \label{s2s2}\\ {\rm or}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,&& \nonumber \\ \sin^2\theta_s\,&=&\,k^2\frac{(k^2+1)z^2-2kz+2}{k^4z^2-2k^3z+2k^2(z^2+1)-4kz+4}\,.\label{s2s}\end{aligned}$$ \[eq:s2asx-IH\] Note that these equations are all unchanged under $k,z\,\longrightarrow\,-k,-z$, so only the absolute value of $k$ can be determined.

Fit to existing data {#sec:fit}
====================

The solutions (\[r3p\]) or (\[r3a\]), which give (\[s2s2\]) or (\[s2s\]), are identical in the following sense: oscillation experiments measure $\sin^22\theta$ and thus cannot distinguish between $\theta$ and $\pi/2-\theta$. Further, $\tan(\pi/2-\theta)\,=\,1/\tan\theta$, so a fit with (\[r3p\]) and $\theta_s$ assumed greater than $\pi/4$ is identical to one with (\[r3a\]) and $\theta_s$ assumed less than $\pi/4$. Having noted this, we will proceed to fit both (\[s2s2\]) and (\[s2s\]) with $\theta_s\,<\,\pi/4$. Using Eqs.(\[s2a\]), (\[s2x\]), and (\[s2s2\]), the fit to the data from Ref.[@Fogli-08] gives $\chi^2_{\rm min}=2.10$, $|k|_{\rm min}=2.09$ and $z_{\rm min}=0.066$. At the minimum values of $|k|$ and $z$, $\sin^2(\theta_a)=0.426\,(\theta_a=40.7^\circ)$, $\sin^2(\theta_x)=0.0025\, (\theta_x=2.87^\circ)$ and $\sin^2(\theta_s)=0.313\,(\theta_s=34.0^\circ)$. The $68.3\%$ and $90\%$ confidence contours are shown in Fig.\[abc\]. ![The 68.3% and 90.0% confidence contours for the fit using Eqs.(\[s2a\]), (\[s2x\]), and (\[s2s2\]) are shown in red and orange, respectively. 
The (black) dot indicates the $\chi^2$ minimum.[]{data-label="abc"}](chisqabccontour.eps){width="2.75in"} The distributions of this set of mixing angles are obtained from the likelihood distribution $$Ae^{-(\chi^2(k,z)-\chi^2_{\rm min})/2}\,,$$ where $A$ is a normalization constant, using $$\frac{dP}{d\sin^2(\theta)}=\int\!dk\!\int\!\!dz\,\delta(\sin^2(\theta)-f(k,z)) Ae^{-(\chi^2(k,z)-\chi^2_{\rm min})/2}\,,$$ where $f(k,z)$ is one of the functions on the righthand side of Eqs.(\[eq:s2asx-IH\]). The results are shown in Figs.(\[distna\]). ![The distributions of the $\sin^2\theta_i$ obtained using Eqs.(\[s2a\]), (\[s2x\]), and (\[s2s2\]) are shown.[]{data-label="distna"}](ssa1.eps "fig:"){width="2.3in"} ![The distributions of the $\sin^2\theta_i$ obtained using Eqs.(\[s2a\]), (\[s2x\]), and (\[s2s2\]) are shown.[]{data-label="distna"}](sss1.eps "fig:"){width="2.3in"} ![The distributions of the $\sin^2\theta_i$ obtained using Eqs.(\[s2a\]), (\[s2x\]), and (\[s2s2\]) are shown.[]{data-label="distna"}](ssx1.eps "fig:"){width="2.3in"} As would be expected, none of the distributions is exactly Gaussian. The largest contribution to the minimum $\chi^2$ is associated with $\sin^2(\theta_a)$ and the influence of terms beyond the quadratic expansion of $\chi^2(k,z)$ can be seen in the shape of this distribution. If we use Eqs.(\[s2a\]), (\[s2x\]) and (\[s2s\]), the fit to the data has two local minima. The lowest of these gives $\chi^2_{\rm min}=0.506$, $|k|_{\rm min}=0.942$ and $z_{\rm min}=0.152$. At the minimum values of $|k|$ and $z$, $\sin^2(\theta_a)=0.423\,(40.5^\circ)$, $\sin^2(\theta_x)=0.013\, (6.55^\circ)$ and $\sin^2(\theta_s)=0.311\,(33.9^\circ)$. At the other minimum, where $\chi^2=2.73$, $\sin^2\theta_s$ and $\sin^2\theta_x$ are slightly different, but $\sin^2\theta_a=0.567$. This is reflected in the individual mixing angle distributions. The confidence contours for this case are shown in Fig.\[abd\]. 
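As a numerical cross-check, evaluating Eqs.(\[s2a\]), (\[s2x\]), and (\[s2s2\]) at the quoted minimum of the first fit, $|k|=2.09$ and $z=0.066$, reproduces the quoted mixing angles, and the invariance under $k,z\rightarrow-k,-z$ noted above is manifest. A sketch (the tolerances simply reflect the rounding of the quoted values):

```python
import numpy as np

def angles(k, z):
    """sin^2 of the mixing angles from Eqs. (s2a), (s2x), (s2s2)."""
    s2a = (1 - k*z)**2 / (k**2*z**2 - 2*k*z + 2)
    s2x = z**2 / ((k**2 + 1)*z**2 - 2*k*z + 2)
    den = k**4*z**2 - 2*k**3*z + 2*k**2*(z**2 + 1) - 4*k*z + 4
    s2s = (2 - k*z)**2 / den
    return s2a, s2x, s2s

# quoted minimum of the first fit (only |k| is determined)
s2a, s2x, s2s = angles(2.09, 0.066)
assert abs(s2a - 0.426)  < 1e-3    # sin^2(theta_a)
assert abs(s2x - 0.0025) < 5e-4    # sin^2(theta_x)
assert abs(s2s - 0.313)  < 1e-3    # sin^2(theta_s)

# all three expressions are invariant under k,z -> -k,-z
assert np.allclose(angles(2.09, 0.066), angles(-2.09, -0.066))
```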
![The 68.3% and 90.0% confidence contours for the fit using Eqs.(\[s2a\]), (\[s2x\]) and (\[s2s\]) are shown in red and orange, respectively. The (black) dot indicates the $\chi^2$ minimum.[]{data-label="abd"}](chisqabdcontour.eps){width="2.75in"} The distributions of this set of mixing angles are shown in Figs.(\[distnb\]). Here, too, the largest contribution to the minimum $\chi^2$ is associated with $\sin^2(\theta_a)$, and the effect of the second local minimum is reflected in the distortion on the high side of the probability distribution. ![The distributions of the $\sin^2\theta_i$ obtained using Eqs.(\[s2a\]), (\[s2x\]) and (\[s2s\]) are shown.[]{data-label="distnb"}](ssa.eps "fig:"){width="2.3in"} ![The distributions of the $\sin^2\theta_i$ obtained using Eqs.(\[s2a\]), (\[s2x\]) and (\[s2s\]) are shown.[]{data-label="distnb"}](sss.eps "fig:"){width="2.3in"} ![The distributions of the $\sin^2\theta_i$ obtained using Eqs.(\[s2a\]), (\[s2x\]) and (\[s2s\]) are shown.[]{data-label="distnb"}](ssx.eps "fig:"){width="2.3in"} Alternatively, we can fit for $k$ using the values of $\sin^2\theta_a$ and $\sin^2\theta_s$ from Ref.[@Fogli-08] but replace $\sin^2\theta_x$ with the value for $\sin^2(2\theta_a)\,\sin^2(2\theta_x)$ published by the MINOS collaboration [@MINOS]. They report $\sin^2(2\theta_a)\,\sin^2(2\theta_x)\,\simeq\,0.18\pm0.13$ for the inverted hierarchy and $\simeq\,0.11\pm0.09$ for the normal hierarchy. From the inverted-hierarchy result we get $|k|=2.10\pm\,0.10$ with a $\chi^2$ of $1.86$ or $|k|=0.94\pm\,0.15$ with a $\chi^2$ of $1.35$. All of the above was for the inverted hierarchy. The normal hierarchy, with $m_1$ equal to zero, gives, after considerable algebra, exactly the inverted-hierarchy results (\[r1\]), (\[r3p\]), and (\[r3a\]). The parameter $\sigma$ is a different function from (\[r2\]), $$\label{sigmaNH} \sigma\,=\,\frac{s_a(1+k^2)-c_a}{k(s_a+c_a)}\,,$$ but this just amounts to a reparameterization of (\[eq:s2asx-IH\]) with no physical consequence. 
Using the MINOS number for the normal hierarchy we find the same values of $|k|$, including the errors, as for the inverted-hierarchy MINOS number. The $\chi^2$ values are smaller, at $1.42$ or $0.80$. With either MINOS value and for either value of $|k|$, the fitted value of $\sin^2\theta_s$ is stable at $0.312$, and the fitted value of $\sin^2\theta_a$ varies only slightly, from $0.46$ for the larger $|k|$ to $0.42$ for the smaller value; however, $\sin^2\theta_x$ is less than $0.001$ for the larger $|k|$ but equal to $0.015$ for the smaller.

Summary
=======

A hidden $\mathcal{Z}_2$ symmetry, as given by Eq.(\[GMG\]), results in only two possible sets of conditions on the neutrino mixing angles. Assuming $\theta_s\,<\,\pi/4$, then either $$\begin{aligned} s_x\,&=&\,\frac{c_x}{k}(s_a-c_a)\,, \label{sol1} \\ \tan\theta_s\,&=&\,-\frac{c_x}{k}(s_a+c_a)\,, \label{sol2}\end{aligned}$$ with confidence contours shown in Fig.\[abc\], or $$\begin{aligned} s_x\,&=&\,\frac{c_x}{k}(s_a-c_a)\,, \label{sol3} \\ \tan\theta_s\,&=&\,\frac{k}{c_x(s_a+c_a)}\,,\label{sol4}\end{aligned}$$ with the confidence contours shown in Fig.\[abd\].

Acknowledgments {#acknowledgments .unnumbered}
===============

SFG was supported by the China Scholarship Council (CSC). DAD and SFG were supported in part by the U. S. Department of Energy under grant No. DE-FG03-93ER40757. WWR was supported in part by the National Science Foundation under Grant PHY-0555544. DAD is a member of the Center for Particles and Fields and the Texas Cosmology Center. It is always a pleasure to thank Sacha Kopp and Karol Lang for helpful discussions regarding the MINOS data. WWR thanks Jim Linnemann for an informative conversation about the procedures used in Sec.(3). [99]{} B. Pontecorvo, Sov. Phys. JETP [**6**]{}, 429 (1957) \[Zh. Eksp. Teor. Fiz.  [**33**]{}, 549 (1957)\]; Z. Maki, M. Nakagawa and S. Sakata, Prog. Theor. Phys.  [**28**]{}, 870 (1962). G. L. Fogli, E. Lisi, A. Marrone, A. Palazzo and A. M. Rotunno, Phys. Rev. Lett.  
[**101**]{}, 141801 (2008) \[arXiv:0806.2649 \[hep-ph\]\] and G. L. Fogli, E. Lisi, A. Marrone, A. Palazzo and A. M. Rotunno, arXiv:0809.2936 \[hep-ph\]; G. L. Fogli [*et al.*]{}, Phys. Rev.  D [**78**]{}, 033010 (2008) \[arXiv:0805.2517 \[hep-ph\]\] and references therein. See, for instance, the experimental reports at XXIII International Conferences on “Neutrino Physics and Astrophysics” (Neutrino 2008), Christchurch, New Zealand, May 25-31, 2008. Web link: <http://www2.phys.canterbury.ac.nz/~jaa53> C. S. Lam, Phys. Rev.  [**D74**]{}, 113004 (2006). C. S. Lam, Phys. Rev.  D [**78**]{}, 073015 (2008) \[arXiv:0809.1185 \[hep-ph\]\]. W. Grimus, L. Lavoura, P. O. Ludl, J. Phys. G [**G36**]{}, 115007 (2009). S.- F. Ge, H. J. He and F. R. Yin, JCAP [**1005**]{}, 017 (2010) \[arXiv:1001.0940 \[hep-ph\]\]. P. F. Harrison and W. G. Scott, Phys. Lett.  B [**547**]{}, 219 (2002) \[arXiv:hep-ph/0210197\], R. N. Mohapatra, JHEP [**0410**]{}, 027 (2004) \[arXiv:hep-ph/0408187\], R. N. Mohapatra, S. Nasri and H. B. Yu, Phys. Lett.  B [**615**]{}, 231 (2005) \[arXiv:hep-ph/0502026\], T. Kitabayashi and M. Yasue, Phys. Lett.  B [**621**]{}, 133 (2005) \[arXiv:hep-ph/0504212\], R. N. Mohapatra and W. Rodejohann, Phys. Rev.  D [**72**]{}, 053001 (2005) \[arXiv:hep-ph/0507312\], Z. Z. Xing, H. Zhang and S. Zhou, Phys. Lett.  B [**641**]{}, 189 (2006) \[arXiv:hep-ph/0607091\]. P. H. Frampton, S. L. Glashow and T. Yanagida, Phys. Lett.  B [**548**]{}, 119 (2002) \[arXiv:hep-ph/0208157\]; M. Raidal and A. Strumia, Phys. Lett.  B [**553**]{}, 72 (2003) \[arXiv:hep-ph/0210021\]. S.- F. Ge, D. A. Dicus, and W. W. Repko, “A $\mathbb{Z}_2$ symmetry prediction for the Dirac $CP$ phase of neutrino mixing,” in preparation. These conditions for one vanishing neutrino mass and $k=-1$ were derived in: D. A. Dicus, S.- F. Ge and W. W. Repko, Phys. Rev. D[**82**]{}, 033005 (2010) arXiv:1004.3266 \[hep-ph\]. V. Barger, D. A. Dicus, H. J. He and T. J. Li, Phys. Lett.  
B [**583**]{}, 173 (2004) \[arXiv:hep-ph/0310278\]. P. Adamson [*et. al.*]{}, Phys. Rev. Lett. [**103**]{}, 261802-1 (2009). [^1]: Electronic address: dicus@physics.utexas.edu [^2]: Electronic address: gesf02@gmail.com [^3]: Electronic address: repko@pa.msu.edu
--- abstract: 'During the final moments of a binary black hole (BH) merger, the gravitational wave (GW) luminosity of the system is greater than the combined electromagnetic output of the entire observable universe. However, the extremely weak coupling between GWs and ordinary matter makes these waves very difficult to detect directly. Fortunately, the inspiraling BH system will interact strongly–on a purely Newtonian level–with any surrounding material in the host galaxy, and this matter can in turn produce unique electromagnetic (EM) signals detectable at Earth. By identifying EM counterparts to GW sources, we will be able to study the host environments of the merging BHs, in turn greatly expanding the scientific yield of a mission like LISA.' address: - '$^1$ Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218' - '$^2$ NASA Goddard Space Flight Center, Greenbelt, MD 20771' author: - 'Jeremy D. Schnittman$^{1,2}$' title: Electromagnetic Counterparts to Black Hole Mergers --- INTRODUCTION {#intro} ============ Prompted by recent advances in numerical relativity (NR), there has been an increased interest in the astrophysical implications of black hole (BH) mergers (see [@decadal] for a sample of related White Papers submitted to the recent Astro2010 Decadal Report). Of particular interest is the possibility of a distinct, luminous electromagnetic (EM) counterpart to a gravitational-wave (GW) signal. If such an EM counterpart could be identified with a LISA$^{\footnote{\tt http://lisa.nasa.gov}}$ detection of a supermassive BH binary in the merging process, then the host galaxy could likely be determined [@kocsis:06; @lang:06; @lang:08; @kocsis:08a]. Like the cosmological beacons of gamma-ray bursts and quasars, merging BHs can teach us about relativity, high-energy astrophysics, radiation hydrodynamics, dark energy, galaxy formation and evolution, and even dark matter. 
A large variety of potential EM signatures have recently been proposed, almost all of which require some significant amount of gas in the near vicinity of the merging BHs. In this paper, we review the recent literature on EM signatures and propose a rough outline of the future work, both observational and theoretical, that will be needed to fully realize the potential of GW astronomy.

DIVERSITY OF SOURCES {#sources}
====================

From a theoretical point of view, EM signatures can be categorized by the physical mechanism responsible for the emission, namely stars, hot diffuse gas, or circumbinary/accretion disks. In Figure \[source\_chart\], we show the diversity of these sources, arranged according to the spatial and time scales on which they occur. It is important to note that, while the black holes themselves are of course extremely relativistic objects, most of the observable effects occur on distance and time scales that are solidly in the Newtonian regime. While one of the most interesting NR results in recent years has been the prediction of large recoil velocities originating from the final merger and ringdown of binary BHs [@bigkicks], the [*astrophysical*]{} implications of these large kicks are for the most part entirely Newtonian. ![\[source\_chart\] Selection of potential EM signatures, sorted by timescale, typical size of emission region, and physical mechanism (blue = stellar; yellow = accretion disk; green = diffuse gas/miscellaneous).](ss.eps){width="\textwidth"}

Stellar Signatures
------------------

On the largest scales, we have strong circumstantial evidence of supermassive BH mergers at the centers of merging galaxies. 
From large optical surveys of interacting galaxies out to redshifts of $z \sim 1$, we can infer that $5-10\%$ of massive galaxies are merging at any given time, and the majority of galaxies with $M_{\rm gal} \gtrsim 10^{10} M_\odot$ have experienced a major merger in the past 3 Gyr [@bell:06; @mcintosh:08; @deravel:09; @bridge:10], with even higher merger rates at redshifts $z\sim 1-3$ [@conselice:03]. At the same time, high-resolution observations of nearby galactic nuclei find that every large galaxy hosts a SMBH in its center [@kormendy:95]. Yet we see a remarkably small number of dual AGN [@komossa:03; @comerford:09], and only one known source with an actual binary system where the BHs are gravitationally bound to each other [@rodriguez:06]. Taken together, these observations strongly suggest that when galaxies merge, the merger of their central SMBHs inevitably follows, and likely occurs on a relatively short time scale, which would explain the apparent scarcity of binary BHs. There is also indirect evidence for SMBH mergers in the stellar distributions of galactic nuclei, with many elliptical galaxies showing light deficits (cores), which correlate strongly with the central BH mass [@kormendy:09]. The cores are evidence of a history of binary BHs that scour out the nuclear stars via three-body scattering [@milosavljevic:01; @milosavljevic:02; @merritt:07], or even post-merger relaxation of recoiling BHs [@merritt:04; @boylan-kolchin:04; @gualandris:08; @guedes:09]. While essentially all massive nearby galaxies appear to host central SMBHs, it is quite possible that this is not the case at larger redshifts and smaller masses, where major mergers could lead to the complete ejection of the final black hole via large gravitational-wave recoils. By measuring the occupation fraction of BHs in distant galaxies, one could infer merger rates and the distribution of kick velocities [@schnittman:07a; @volonteri:07; @schnittman:07b; @volonteri:08a; @volonteri:10]. 
The occupation fraction will of course also affect the LISA event rates, especially at high redshift [@sesana:07]. An indirect signature for kicked BHs could potentially show up in the statistical properties of active galaxies, in particular in the relative distribution of different classes of AGN in the “unified model” paradigm [@komossa:08b; @blecha:10]. On a smaller scale, the presence of intermediate-mass BHs in globular clusters also gives indirect evidence of their merger history [@holley-bockelmann:08]. Another EM signature of BH mergers comes from the population of stars that remain bound to a recoiling black hole that gets ejected from a galactic nucleus [@komossa:08a; @merritt:09; @oleary:09]. These stellar systems will appear similar to globular clusters, yet with smaller spatial extent and much larger velocity dispersions, as the potential is completely dominated by the central SMBH. Gas Signatures: Accretion Disks ------------------------------- Gas in the form of accretion disks around single massive BHs is known to produce some of the most luminous objects in the universe. However, very little is known about the behavior of accretion disks around [ *two*]{} BHs, particularly at late times in their inspiral evolution. In Newtonian systems, it is believed that a circumbinary accretion disk will have a central gap of much lower density, either preventing accretion altogether, or at least decreasing it significantly [@pringle:91; @artymowicz:94; @artymowicz:96]. When including the evolution of the binary due to GW losses, the BHs may also decouple from the disk at the point when the GW inspiral time becomes shorter than the gaseous inflow time at the inner edge of the disk [@milos:05]. This decoupling should effectively stop accretion onto the central object until the gap can be filled on an inflow timescale. 
However, other semi-analytic calculations predict an [*enhancement*]{} of accretion power as the evolving binary squeezes the gas around the primary BH, leading to a rapid increase in luminosity shortly before merger [@armitage:02; @chang:10]. Regardless of [*how*]{} the gas can or cannot reach the central BH region, a number of recent papers have shown that if there [*is*]{} sufficient gas present, then an observable EM signal is likely. Krolik [@krolik:10] used analytic arguments to estimate a peak luminosity comparable to that of the Eddington limit, independent of the detailed mechanisms for shocking and heating the gas. Using relativistic magneto-hydrodynamic simulations in 2D, O’Neill [@oneill:09] showed that the prompt mass loss due to GWs may actually lead to a sudden [*decrease*]{} in luminosity following the merger, as the gas in the inner disk temporarily has too much energy and angular momentum to accrete efficiently. Full NR simulations of the final few orbits of a merging BH binary have now been carried out including the presence of EM fields in a vacuum [@palenzuela:09; @mosta:10; @palenzuela:10] and also gas, treated as test particles in [@vanmeter:10] and as an ideal fluid in [@bode:10] and [@farris:10]. The simulations including matter all suggest that the gas can get shocked and heated to high temperatures, thus leading to bright counterparts in the event that sufficient gas is in fact present in the immediate vicinity of the merging BHs. If the primary energy source for heating the gas is gravitational, then typical efficiencies will be on the order of $\sim 1-10$%, comparable to that expected for standard accretion in AGN. However, if the merging BH binary is able to generate strong magnetic fields [@palenzuela:09; @mosta:10; @palenzuela:10], then highly relativistic jets may be launched along the resulting BH spin axis, converting matter to energy with a Lorentz boost factor of $\Gamma \gg 1$. 
Even with purely hydrodynamic heating, particularly bright and long-lasting afterglows may be produced in the case of very large recoil velocities, which effectively can disrupt the entire disk, leading to strong shocks and dissipation [@lippai:08; @shields:08; @schnittman:08; @megevand:09; @rossi:10; @anderson:10; @corrales:10; @tanaka:10a; @zanotti:10]. For systems that open up a gap in the circumbinary disk, an EM signature may take the form of a quasar suddenly turning on as the gas refills the gap, months to years after the BH merger [@milos:05; @shapiro:10; @tanaka:10b]. For those systems that also received a large kick at the time of merger, we may observe quasar activity for millions of years after, with the source displaced from the galactic center, either spatially [@kapoor:76; @loeb:07; @volonteri:08b; @civano:10; @dottori:10; @jonker:10] or spectroscopically [@bonning:07; @komossa:08c; @boroson:09; @robinson:10]. However, large offsets between the redshifts of quasar emission lines and their host galaxies have also been interpreted as evidence of pre-merger binary BHs [@bogdanovic:09; @dotti:09; @tang:09; @dotti:10b] or due to the large relative velocities in merging galaxies [@heckman:09; @shields:09a; @vivek:09; @decarli:10], or “simply” extreme examples of the class of double-peaked emitters, where the line offsets are generally attributed to the disk [@gaskell:88; @eracleous:97; @shields:09b; @chornock:10; @gaskell:10]. In addition to the many potential prompt and afterglow signals from merging BHs, there has also been a significant amount of theoretical and observational work focusing on the early precursors of mergers. Following the evolutionary trail from the upper-left of Figure 1, we see that shortly after a galaxy merges, dual AGN may form with typical separations of a few kpc [@komossa:03; @comerford:09], sinking to the center of the merged galaxy on a relatively short timescale ($\lesssim$ 1 Gyr) due to dynamical friction [@begelman:80]. 
This merger process is also expected to funnel a great deal of gas to the galactic center, in turn triggering quasar activity [@hernquist:89; @kauffmann:00; @hopkins:08; @green:10]. At separations of $\sim 1$ pc, the BH binary (now “hardened” into a gravitationally bound system) could stall, having depleted its loss cone of stellar scattering and not yet reached the point of gravitational radiation losses [@milosavljevic:03]. Gas dynamical drag from massive disks ($M_{\rm disk} \gg M_{\rm BH}$) leads to a prompt inspiral ($\sim 1-10$ Myr), in most cases able to reach sub-parsec separations, depending on the resolution of the simulation [@escala:04; @kazantzidis:05; @escala:05; @dotti:07; @cuadra:09; @dotti:09b; @dotti:10a]. At this point, a proper binary quasar is formed, with an orbital period of months to decades, which could be identified by periodic accretion [@macfadyen:08; @hayasaki:08; @haiman:09a; @haiman:09b] or red-shifted broad emission lines as mentioned above [@bogdanovic:08; @shen:09; @loeb:10]. Direct GW stresses on the circumbinary disk might also lead to periodic variations in the light curve, although with very small amplitude [@kocsis:08].

Gas Signatures: Diffuse Gas; “Other”
------------------------------------

In addition to the many disk-related signatures, there are also a number of potential EM counterparts that are caused by the accretion of diffuse gas in the galaxy. For BHs that get significant kicks at the time of merger, we expect to see quasi-periodic episodes of Bondi accretion as the BH oscillates through the gravitational potential of the galaxy over millions of years, as well as off-center AGN activity [@blecha:08; @fujita:09; @guedes:10; @sijacki:10]. On larger spatial scales, the recoiling BH could also produce trails of overdensity in the hot interstellar gas of elliptical galaxies [@devecchi:09]. In a similar way, rogue SMBHs in gas-rich galaxies could leave trails of star formation in their wake [@fuente:08]. 
It is even possible that the same density enhancements could be detected via off-nucleus gamma-ray emission from annihilating dark matter particles [@mohayaee:08]. Also on kpc–Mpc scales, X-shaped radio jets have been seen in a number of galaxies, which could possibly be due to the merger and subsequent spin-flip of the central BHs [@merritt:02]. Another potential source of EM counterparts comes not from diffuse gas or accretion disks, but from the occasional capture and tidal disruption of normal stars by the merging BHs. This tidal disruption, which also occurs in “normal” galaxies [@rees:88; @komossa:99; @halpern:04], may be particularly easy to identify in off-center BHs following a large recoil [@komossa:08a]. Tidal disruption rates may be strongly increased by the merger process itself [@chen:09; @stone:10; @seto:10; @schnittman:10], while the actual disruption signal may be truncated by the pre-merger binary [@liu:09]. These events are likely to be seen by the dozen in coming years with PanSTARRS and LSST [@gezari:09]. In addition to the tidal disruption scenario, in [@schnittman:10] we showed how gas or stars trapped at the stable Lagrange points in a BH binary could evolve during inspiral and eventually lead to enhanced star formation, ejected hyper-velocity stars, highly-shifted narrow emission lines, and short bursts of Eddington-level accretion coincident with the BH merger. A completely different type of EM counterpart can be seen in the radio: nanosecond time delays in the arrival of pulses from millisecond radio pulsars would be direct evidence of extremely low-frequency (nano-Hertz) gravitational waves from massive ($\gtrsim 10^8 M_\odot$) BH binaries [@jenet:06; @sesana:08; @sesana:09; @jenet:09; @seto:09; @pshirkov:10; @vanhaasteren:10; @sesana:10]. By cross-correlating the signals from multiple pulsars around the sky, we can effectively make use of a GW detector the size of the entire galaxy. 
GAME PLAN
=========

In the coming years, a number of theoretical and observational advances will be required in order to fully realize the potential of GW/EM multi-messenger astronomy. Some of the central questions that need to be answered include:

- What is the galaxy merger rate as a function of galaxy mass, mass ratio, gas fraction, cluster environment, and redshift?

- What is the mass function and spin distribution of the central BHs in these merging (and non-merging) galaxies?

- What is the central environment around the BHs, prior to merger?

    - What is the quantity and quality (temperature, density, composition) of gas?

    - What is the stellar distribution (age, mass function, metallicity)?

    - What are the properties of the circumbinary disk?

- What is the time delay between galaxy merger and BH merger?

We have rough predictions for some of these questions from cosmological N-body simulations, but the uncertainties and model dependencies are quite large. Similarly, observational constraints are currently quite weak and often open to widely varying interpretations.

Theory
------

With respect to the questions outlined above, improved cosmological simulations will certainly help improve our estimates for galactic and BH merger rates, as well as the gas environments expected in the central regions. Particularly promising are multi-scale simulations that can zoom in on regions of interest, going to higher resolution and more realistic physics closer to the BHs [@springel:05]. To model more accurately the interaction between the circumbinary disk and the BHs, grid-based methods (as opposed to smoothed particle hydrodynamics; SPH) will be necessary, especially at the inner edge where steep density and pressure gradients are likely to be found. The accurate treatment of this region is critical to understand the gas environment immediately around the BHs at time of merger, and thus whether any bright EM signal is likely to be produced. 
The natural product of these (Newtonian) circumbinary MHD simulations would be a set of reasonable initial conditions to be fed into the much more computationally intensive NR codes that compute the final orbits and merger of the BHs, now including matter and magnetic fields. The results of [@palenzuela:09; @mosta:10; @palenzuela:10; @vanmeter:10; @bode:10; @farris:10] are extremely impressive from a computational point of view, but their astrophysical relevance is limited by our complete ignorance of the likely initial conditions. Even with perfect knowledge of the initial conditions, the value of the MHD simulations is also limited by the lack of radiation transport and accurate thermodynamics, which are only now being incorporated into local Newtonian simulations of steady-state accretion disks [@hirose:09]. Significant future work will be required to incorporate the radiation transport into a fully relativistic global framework, required not just for accurate modeling of the dynamics, but also for the prediction of EM signatures that might be compared directly with observations.

Observations
------------

Even with the launch of LISA a decade or more away, many of the EM counterparts discussed above should be observable today, in some cases even giving unambiguous evidence for merging BHs. On the largest distance and time scales, dual AGN candidates can be identified with large spectroscopic surveys like SDSS (http://www.sdss.org), then followed up with high-resolution imaging and spectroscopy. Combined with surveys of galaxy morphology and pairs, the distribution of dual AGN will help us test theories of galactic merger rates as a function of mass and redshift, as well as the connection between gas-rich mergers and AGN activity. 
Spectroscopic surveys should also be able to identify many candidate binary AGN, which may be confirmed or ruled out with subsequent observations over relatively short timescales ($\sim 1-10$ yrs), as the line-of-sight velocities of the BHs change by an observable degree. Long-lived afterglows could be discovered in existing multi-wavelength surveys, but successfully identifying them as merger remnants, as opposed to obscured AGN or other bright unresolved sources, would require improved pipeline analysis of literally millions of point sources, as well as extensive follow-up observations. Particularly promising as unambiguous examples of recoiling BHs would be the measurement of large velocity dispersions in nearby ($d \lesssim 20$ Mpc) globular clusters [@merritt:09]. With multi-object spectrometers on large ground-based telescopes, this is also technically realistic in the immediate future. Perhaps the most exciting direction for the coming decade of astronomy is in the time domain. Optical telescopes like PTF and PanSTARRS are already taking data from huge areas of the sky with daily and even hourly cadence. These time-domain surveys are ideally suited for looking for variability from binary BH systems as precursors to merger. Especially promising would be the detection of long-period variable AGN, ideally suited to extensive multi-wavelength follow-up observations. 
References {#references .unnumbered} ========== [99]{} Bloom J 2009 \[arXiv:0902.1527\]; Demorest P 2009 \[arXiv:0902.2968\]; Jenet F 2009 \[arXiv:0909.1058\]; Madau P 2009 \[arXiv:0903.0097\]; Miller M C 2009 \[arXiv:0903.0285\]; Nandra K 2009 \[arXiv:0903.0547\]; Phinney E S 2009 \[arXiv:0903.0098\]; Prince T 2009 \[arXiv:0903.0103\]; Schutz B F 2009 \[arXiv:0903.100\] Kocsis B, Frei Z, Haiman Z and Menou K 2006 27–37 Lang R N and Hughes S A 2006 122001 Lang R N and Hughes S A 2008 1184–1200 Kocsis B, Haiman Z and Menou K 2008 870–887 Campanelli M, Lousto C, Zlochower Y and Merritt D 2007 5–8; Gonz[á]{}lez J A, Hannam M, Sperhake U, Br[ü]{}gmann B and Husa S 2007 231101; Herrmann F, Hinder I, Shoemaker D, Laguna P and Matzner R A 2007 430–436; Pollney D 2007 124002; Tichy W and Marronetti P 2007 061502; Brugmann B, Gonzalez J A, Hannam M, Husa S and Sperhake U 2008 124047; Baker J G, Boggs W D, Centrella J, Kelly B J, McWilliams S T, Miller M C, van Meter J R 2008 29–32 Bell E F 2006 [*Astroph. J.*]{} [**652**]{} 270–276 McIntosh D H, Guo Y, Hertzberg J, Katz N, Mo H J, van den Bosch F C, and Yang X 2008 [*Mon. Not. Roy. Astron. Soc.*]{} [**388**]{} 1537–1556 de Ravel L 2009 [*Astron. & Astroph.*]{} [**498**]{} 379–397 Bridge C R, Carlberg R G and Sullivan M 2010 [ *Astroph. J.*]{} [**709**]{} 1067–1082 Conselice C J, Bershady M A, Dickinson M and Papovich C 2003 [*Astron. J.*]{} [**126**]{} 1183–1207 Kormendy J and Richstone D 1995 [*Ann. Rev. Astron. & Astroph.*]{} [**33**]{} 581 Komossa S, Burwitz V, Hasinger G, Predehl P, Kaastra J S and Ikebe Y 2003 [*Astroph. J. Lett.*]{} [**582**]{} 15–19 Comerford J M 2009 [*Astroph. J.*]{} [**698**]{} 956–965 Smith K L, Shields G A, Bonning E W, McMullen C C, Rosario D J and Salviander S 2010 866–877 Rodriguez C, Taylor G B, Zavala R T, Peck A B, Pollack L K and Romani R W, [*Astroph. J.*]{} [**646**]{} 49–60 Kormendy J, Fisher D B, Cornell M E and Bender R 2009 [*Astroph. J. 
Suppl.*]{} [**182**]{} 216–309; Kormendy J and Bender R 2009 [*Astroph. J. Lett.*]{} [**691**]{} 142–146 Milosavljevic M and Merritt D 2001 [ *Astroph. J.*]{} [**563**]{} 34–62 Milosavljevic M, Merritt D, Rest A and van den Bosch F C 2002 [*Mon. Not. Roy. Astron. Soc.*]{} [**331**]{} 51–55 Merritt D, Mikkola S and Szell A 2007 [ *Astroph. J.*]{} [**671**]{} 53–72 Merritt D, Milosavljevic M, Favata M, Hughes S A and Holz D E 2004 [*Astroph. J. Lett.*]{} [**607**]{} 9–12 Boylan-Kolchin M, Ma C-P and Quataert E 2004 [*Astroph. J. Lett.*]{} [**613**]{} 37–40 Gualandris A and Merritt D 2008 [*Astroph. J.*]{} [**678**]{} 780–797 Guedes J, Madau P, Kuhlen M, Diemand J and Zemp M 2009 [*Astroph. J.*]{} [**702**]{} 890–900 Schnittman J D and Buonanno A 2007 [ *Astroph. J. Lett.*]{} [**662**]{} 63–66 Volonteri M 2007 5–8 Schnittman J D 2007 [ *Astroph. J. Lett.*]{} [**667**]{} 133–136 Volonteri M, Lodato G and Natarajan P 2008 1079–1088 Volonteri M, Gultekin K and Dotti M 2010 2143–2150 Sesana A 2007 6–10 Komossa S and Merritt D 2008b 89–92 Blecha L, Cox T J, Loeb A and Hernquist L 2010 \[arXiv:1009.4940\] Holley-Bockelmann K, Gultekin K, Shoemaker D and Yunes N 2008 829–837 Komossa S and Merritt D 2008a 21–24 Merritt D, Schnittman J D and Komossa S 2009 1690–1710 O’Leary R M and Loeb A 2009 781–786 Pringle J E 1991 754–259 Artymowicz P and Lubow S H 1994 651–667 Artymowicz P and Lubow S H 1996 77 Milosavljevic M and Phinney E S 2005 93–96 Armitage P J and Natarajan P 2002 9–12 Chang P, Strubbe L E, Menou K and Quataert E 2010 2007–2016 Krolik J H 2010 774–779 O’Neill S M, Miller M C, Bogdanovic T, Reynolds C S and Schnittman J D 2009 859–871 Palenzuela C, Anderson M, Lehner L, Liebling S L and Neilsen D 2009 081101 Mosta P, Palenzuela C, Rezzolla L, Lehner L, Yoshida S and Pollney D 2010 064017 Palenzuela C, Lehner L and Yoshida S 2010 084007 van Meter J R, Wise, J H, Miller M C, Reynolds C S, Centrella J, Baker J G, Boggs W D, Kelly B J and McWilliams S T 2010 89–92 Bode 
T, Haas R, Bogdanovic T, Laguna P and Shoemaker D 2010 1117 Farris B D, Liu Y-K and Shapiro S L 2010 084008 Lippai Z, Frei Z and Haiman Z 2008 5–8 Shields G A and Bonning E W 2008 758–766 Schnittman J D and Krolik J H 835–844 Megevand M, Anderson M, Frank J, Hirschmann E W, Lehner L, Liebling S L, Motl P M and Neilsen D 2009 024012 Rossi E M, Lodato G, Armitage P J, Pringle J E and King A R 2010 2021–2035 Anderson M, Lehner L, Megevand M and Neilsen D 2010 044004 Corrales L R, Haiman Z and MacFadyen A 2010 947–962 Tanaka T and Menou K 2010 404–422 Zanotti O, Rezzolla L, Del Zanna L and Palenzuela C 2010 \[arXiv:1002.4185\] Shapiro S L 2010 024019 Tanaka T, Haiman Z and Menou K 2010 642–651 Kapoor R C 1976 Pramãna [**7**]{} 334–343 Loeb A 2007 041103 Volonteri M and Madau P 2008 57–60 Civano F [*et al.*]{} 2010 209–222 Dottori H, Diaz R J, Albacete-Colombo J F and Mast D 2010 42–46 Jonker P G, Torres M A P, Fabian A C, Heida M, Miniutti G and Pooley D 2010 645–650 Bonning E W, Shields G A and Salviander S 2007 13–16 Komossa S, Zhou H and Lu H 2008 81–84 Boroson T A and Lauer T R 2009 [*Nature*]{} [ **458**]{} 53–55 Robinson A, Young S, Axon D J, Kharb P and Smith J E 2010 123–126 Bogdanovic T, Eracleous M and Sigurdsson S 2009 288–292 Dotti M, Montuori C, Decarli R, Volonteri M, Colpi M and Haardt F 2009 L73–L77 Tang S and Grindlay J 2009 1189–1194 Dotti M and Ruszkowski M 2010 37–40 Heckman T M, Krolik J H, Moran S M, Schnittman J D and Gezari S 2009 363–367 Shields G A, Bonning E W and Salviander S 2009 1367–1373 Vivek M, Srianand R, Noterdaeme P, Mohan V and Kuriakosde V C 2009 L6–L9 Decarli R, Falomo R, Treves A and Barattini M 2010 [*Astron. 
& Astroph.*]{} [**511**]{} 27 Gaskell M C 1988 [*LNP*]{} [**307**]{} 61 Eracleous M, Halpern J P, Gilbert A M, Newman J A and Filippenko A V 1997 216 Shields G A, Rosario D J, Smith K L, Bonning E W, Salviander S, Kalirai J S, Strickler R, Ramirez-Ruiz E, Dutton A A, Treu T and Marshall P J 2009 936–941 Chornock R, Bloom J S, Cenko S B, Filippenko A V, Silverman J M, Hicks M D, Lawrence K J, Mendez A J, Rafelski M and Wolfe A M 2010 39–43 Gaskell M C 2010 [*Nature*]{} [**463**]{} E1 Begelman M C, Blandford R D and Rees M J 1980 [*Nature*]{} [**287**]{} 307–309 Hernquist L 1989 [*Nature*]{} [**340**]{} 687 Kauffmann G and Haehnelt M 2000 576 Hopkins P F, Hernquist L, Cox T J and Keres D 2008 356 Green P J, Myers A D, Barkhouse W A, Mulchaey J S, Bennert V N, Cox T J and Aldcroft T L 2010 [*Astroph. J.*]{} [ **710**]{} 1578–1588 Milosavljevic M and Merritt D 2003 [ *Astroph. J.*]{} [**596**]{} 860–878 Escala A, Larson R B, Coppi P S and Mardones D 2004 765–777 Kazantzidis S, Mayer L, Colpi M, Madau P, Debattista V P, Wadsley J, Stadel J, Quinn T and Moore B 2005 L67–L70 Escala A, Larson R B, Coppi P S and Mardones D 2005 152–166 Dotti M, Colpi M, Haardt F and Mayer L 2007 956–962 Cuadra J, Armitage P J, Alexander R D and Begelman M C 2009 1423–1432 Dotti M, Ruszkowski M, Paredi L, Colpi M, Volonteri M and Haardt F 2009 1640–1646 Dotti M, Volonteri M, Perego A, Colpi M, Ruszkowski M and Haardt F 2010 682–690 MacFadyen A I and Milosavljevic M 2008 83–93 Hayasaki K, Mineshige S and Ho L C 2008 1134–1140 Haiman Z, Kocsis B, Menou K, Lippai Z and Frei Z 2009 [*Class. Quant. 
Grav.*]{} [**26**]{} 094032 Haiman Z, Kocsis B and Menou K 2009 1952–1969 Bogdanovic T, Smith B D, Sigurdsson S and Eracleous M 2008 455–480 Shen Y and Loeb A 2009 \[arXiv:0912.0541\] Loeb A 2010 047503 Kocsis B and Loeb A 041101 Blecha L and Loeb A 2008 1311–1325 Fujita Y 2009 1050–1057 Guedes J, Madau P, Mayer L and Callegari S 2010 \[arXiv:1008.2032\] Sijacki D, Springel V and Haehnelt M 2010 \[arXiv:1008.3313\] Devecchi B, Rasia E, Dotti M, Volonteri M and Colpi M 2009 633–640 de la Fuente M R and de la Fuente M C 2008 47–50 Mohayaee R, Colin J and Silk J 2008 21–24 Merritt D and Ekers R D 2002 [*Science*]{} [**297**]{} 1310–1313 Rees M J 1988 [*Nature*]{} [**333**]{} 523–528 Komossa S and Bode N 1999 [*Astron. & Astroph.*]{} [**343**]{} 775–787 Halpern J P, Gezari S and Komossa S 2004 572 Chen X, Madau P, Sesana A and Liu F K 2009 149–152 Stone N and Loeb A 2010 \[arXiv:1004.4833\] Seto N and Muto T 2010 103004 Schnittman J D 2010 \[arXiv:1006.0182\] Liu F K, Li S and Chen X 2009 133–137 Gezari S 2009 1367–1379 Jenet F A 2006 1571–1576 Sesana A, Vecchio A and Colacino C N 2008 192–209 Sesana A, Vecchio A and Volonteri M 2009 2255–2265 Jenet F A 2009 \[arXiv:0909.1058\] Seto N 2009 L38–L42 Pshirkov M S, Baskaran D and Postnov K A 2010 417–423 van Haasteren R and Levin Y 2010 2372–2378 Sesana A and Vecchio A 2010 104008 Springel V 2005 1105–1134 Hirose S, Blaes O and Krolik J H 2009 781–788
---
author:
- 'N. Draganova, P. Richter,'
- 'C. Fechner'
bibliography:
- 'papers.bib'
date: 'Received xxx, 2011; accepted xxx'
title: 'High-resolution observations of two O[vi]{} absorbers at $z\approx2$ towards PKS1448$-$232'
---

Introduction
============

Highly ionized species like O[vi]{} and C[iv]{}, observed in the spectra of distant quasars, are excellent tracers of metal-enriched ionized gas in the filamentary intergalactic medium (IGM) and in the circumgalactic environment of galaxies. Therefore, the analysis of intervening O[vi]{} and C[iv]{} absorbers towards low- and high-redshift QSOs is crucial for a better understanding of the physical nature, distribution, evolution, and baryon and metal content of the IGM in the context of galaxy evolution. Because of the high cosmic abundance of oxygen, the large oscillator strength of the O[vi]{} doublet (located in the far-ultraviolet at $\lambda\lambda 1031.9,1037.6$ Å), and the high ionization energies of the ionization states O$^{+4}$ (113.9 eV) and O$^{+5}$ (138.1 eV), the O[vi]{} ion is a particularly powerful tracer of the metal-enriched IGM and the gaseous environment of galaxies. With QSO absorption spectroscopy, O[vi]{} absorption is now commonly detected in a variety of galactic and intergalactic environments in the redshift range $z\approx 0-3$. In the local Universe, O[vi]{} absorption in interstellar and intergalactic gas can be observed in the FUV spectra of stars and extragalactic background sources. For instance, O[vi]{} absorption is known to arise in the thick disk of the Milky Way [e.g. @Savage03], in the extended, multi-phase gas halos of the Milky Way and other galaxies [e.g. @Sembach03; @Wakker09; @Prochaska11], and in intervening O[vi]{} absorption-line systems that trace metal-enriched gas in the IGM [e.g., @Tripp; @Savage02; @Richter04; @Sembach04; @Tripp08; @Thom08a; @Thom08b; @Danforth08]; for a review, see @Richter08. 
Over the last decade, intervening O[vi]{} absorbers at low redshift have been considered a major baryon reservoir in the IGM, possibly tracing shock-heated and collisionally ionized intergalactic gas that results from large-scale structure formation [@Cen99; @Dave]. This so-called warm-hot intergalactic medium (WHIM) has gas temperatures in the range $10^{5}<T<10^{7}$ K and is believed to host $30-40\%$ of the baryons at $z=0$ [@Cen99]. Recent observational and theoretical studies indicate, however, that part of the O[vi]{} absorbers at low $z$ may trace low-density, photoionized gas or conductive, turbulent, or shocked boundary layers between cold/warm ($\sim 10^3-10^4$ K) gas clouds and an ambient hot ($\sim 10^6-10^7$ K) plasma rather than the shock-heated WHIM [see discussion in @Fox11]. Thus, a simple estimate of the ionization state of the gas in the absorbers from the observed O[vi]{}/H[i]{} ratios may lead to erroneous results because of the complex multi-phase character of the gas [@Tepper-Garcia11]. For redshifts $z>2$, O[vi]{} absorption is detectable from the ground, where it can be observed in optical QSO absorption spectra at relatively high signal-to-noise (S/N). One very problematic aspect of the analysis of O[vi]{} absorbers at high redshift is the often severe blending of the O[vi]{} absorption with the Ly$\alpha$ forest. As at low redshift, the origin and nature of O[vi]{} absorbers at high $z$ is expected to be manifold. It has been shown by simulations [e.g. @Theuns02; @Oppenheimer08] that shock-heating by collapsing large-scale structures is not efficient enough at high redshift to provide a widespread warm-hot intergalactic phase in the early Universe. Instead, galactic winds probably contribute substantially to the population of photoionized and collisionally ionized O[vi]{} absorbers at high redshifts, enriching the surrounding circumgalactic and intergalactic gas with heavy elements at relatively high gas temperatures [@Fangano07; @Kawata07]. 
In fact, many of the strong O[vi]{} absorbers at high $z$ exhibit complex absorption patterns that would be expected for a circumgalactic multi-phase gas environment [e.g. @Bergeron05]. As for circumgalactic absorbers in the local Universe, a considerable fraction of the O[vi]{} absorbers at high $z$ thus may arise in conductive, turbulent, or shocked boundary layers. In addition to those O[vi]{} absorbers that trace highly-ionized gas in the immediate environment of galaxies, intergalactic O[vi]{} absorbers (i.e., absorbers that are not gravitationally bound to individual galaxies) may arise in regions that are sufficiently enriched with heavy elements. Previous surveys of high-redshift O[vi]{} absorbers [@Bergeron02; @Simcoe02; @Simcoe04; @Simcoe06; @Carswell02; @Bergeron05] have shown that there are many narrow O[vi]{} absorbers with Doppler parameters $b \leq 10$ kms$^{-1}$. Such narrow lines cannot arise from collisionally ionized gas but must be related to photoionized (possibly intergalactic) gas with temperatures $T<10^5$ K. Many of these narrow O[vi]{} absorbers at low and high redshift display velocity-centroid offsets between O[vi]{}, C[iv]{}, and H[i]{}, suggesting that these ions do not arise in the same gas phase. Unfortunately, this crucial aspect has been only partially considered in previous O[vi]{} surveys. To explore the multi-phase character of high-ion absorbers and to improve our understanding of the ionization conditions in O[vi]{} systems, it is important to investigate in detail the absorption characteristics and ionization conditions in [*selected*]{} absorption-line systems. For this purpose, absorbers that can be observed at high S/N and for which the O[vi]{} absorption is not blended by Ly$\alpha$ forest lines are particularly important. Because of the complexity of many high-ion absorbers, which often are composed of several velocity subcomponents, a spectral resolution of $R\approx45,000$ or higher is desired. 
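The temperature limit quoted for the narrow O[vi]{} components follows from the thermal Doppler width, $b = \sqrt{2k_{\rm B}T/m_{\rm ion}}$: since turbulent motions only add to $b$, the purely thermal interpretation yields an upper limit on $T$. A minimal sketch (the helper name is ours, for illustration, and pure thermal broadening is assumed):

```python
K_B = 1.380649e-23    # Boltzmann constant [J/K]
AMU = 1.66053907e-27  # atomic mass unit [kg]

def max_thermal_temperature(b_kms, mass_amu):
    """Upper limit on T implied by a Doppler parameter b = sqrt(2 kB T / m)."""
    b_ms = b_kms * 1.0e3  # km/s -> m/s
    return mass_amu * AMU * b_ms**2 / (2.0 * K_B)

# b = 10 km/s for O VI (oxygen, ~16 amu) implies T just below 1e5 K,
# well under the ~3e5 K where collisionally ionized O VI peaks:
T_max = max_thermal_temperature(10.0, 16.0)
```

For oxygen this gives $T \lesssim 10^5$ K at $b = 10$ kms$^{-1}$, which is why such narrow lines must trace photoionized rather than collisionally ionized gas.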
Note that while the analysis of individual high-ion absorbers is a common strategy to explore the nature of O[vi]{} absorbers at low redshift [e.g. @Tumlinson11; @Savage11], detailed studies of individual O[vi]{} absorption systems at high redshift are rare [e.g., @Fox11a]. In this paper we present VLT/UVES observations at intermediate ($R\approx45,000$) and high ($R\approx75,000$) spectral resolution of two particularly interesting O[vi]{} systems at $z\approx2$ along the line of sight towards the quasar PKS 1448$-$232. We selected this sightline for a detailed study because it contains two unsaturated O[vi]{} systems at $z_{\rm abs}=2.1098$ and $z_{\rm abs}=2.1660$, both displaying a well-defined subcomponent structure with narrow O[vi]{}/C[iv]{} absorption components and without major blending with Ly$\alpha$ forest lines [@Bergeron02; @Fox08]. These two absorption systems therefore represent ideal targets to study in detail the physical conditions in photoionized, multi-phase high-ion absorbers at high redshift.

Observations and absorption-line analysis
=========================================

VLT/UVES observations
---------------------

Our data set consists of intermediate- and high-resolution spectra of the quasar PKS1448$-$232 ($z_{\rm em}=2.208$; $V=16.9$), observed at the VLT with the UVES spectrograph. The intermediate-resolution data have a spectral resolution of $R\approx45,000$, corresponding to a velocity resolution of $\Delta v \approx 6.7$ kms$^{-1}$ FWHM. These data were obtained and reduced as part of the ESO Large Programme “The Cosmic Evolution of the IGM” [@Bergeron02]. The wavelength coverage of the intermediate-resolution data is $3050-10,400~\rm\AA$. The S/N in the data varies between $15$ and $90$ per spectral resolution element. 
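The quoted velocity resolutions follow directly from the resolving power via $\Delta v \approx c/R$; a quick check for the two UVES configurations used here (the function name is ours, for illustration):

```python
C_KMS = 299792.458  # speed of light [km/s]

def velocity_resolution(R):
    """FWHM velocity resolution of a spectrograph with resolving power R = lambda/dlambda."""
    return C_KMS / R

dv_intermediate = velocity_resolution(45_000)  # ~6.7 km/s
dv_high = velocity_resolution(75_000)          # ~4.0 km/s
```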
The high-resolution data have $R\approx75,000$, corresponding to $\Delta v \approx 4.0$ kms$^{-1}$ FWHM velocity resolution, and were obtained with VLT/UVES in 2007 in an independent observing run (program ID 079.A$-$0303(A)). The wavelength coverage of the high-resolution data is $3000-6687~\rm\AA$. The raw data were reduced using the UVES pipeline implemented in the ESO-MIDAS software package. The pipeline reduction includes flat-fielding, bias- and sky-subtraction, and a relative wavelength calibration. The individual spectra were then corrected to vacuum wavelengths and coadded. The S/N in the high-resolution data is $20-70$ per resolution element.

Table 1. Line-fit results for the two absorption components in the O[vi]{} system at $z=2.1098$; log $N$ in cm$^{-2}$, $b$ in kms$^{-1}$. The upper two and lower two rows list the fits for the two data sets.

| Comp | $z$(O[vi]{}) | $z$(C[iv]{}) | $z$(H[i]{}) | log $N$(O[vi]{}) | $b$(O[vi]{}) | log $N$(C[iv]{}) | $b$(C[iv]{}) | log $N$(H[i]{}) | $b$(H[i]{}) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 2.10982 | 2.10982 | 2.10981 | 14.27($\pm$0.01) | 10.7($\pm$0.2) | 13.12($\pm$0.01) | 7.5($\pm$0.1) | 13.38($\pm$0.04) | 19.6($\pm$0.7) |
| 2 | 2.11011 | 2.11008 | 2.11018 | 13.50($\pm$0.02) | 8.4($\pm$0.4) | 12.23($\pm$0.03) | 6.1($\pm$0.6) | 13.37($\pm$0.04) | 28.6($\pm$1.7) |
| 1 | 2.10984 | 2.10983 | 2.10981 | 14.32($\pm$0.02) | 10.1($\pm$0.2) | 13.12($\pm$0.01) | 7.1($\pm$0.1) | 13.39($\pm$0.01) | 20.6($\pm$0.3) |
| 2 | 2.11014 | 2.11008 | 2.11019 | 13.49($\pm$0.20) | 5.4($\pm$0.5) | 12.23($\pm$0.03) | 5.3($\pm$0.6) | 13.35($\pm$0.01) | 26.4($\pm$0.8) |

Line-fitting method
-------------------

The detected absorption features that are associated with the two absorbers at $z_{\rm abs}=2.1098$ and $z_{\rm abs}=2.1660$ were fitted independently in both spectra (at intermediate resolution and high resolution) with Gaussian profiles using the CANDALF fitting routine [^1], which uses a standard Levenberg-Marquardt minimization algorithm. The program simultaneously fits the continuum and the absorption lines, delivering ion column densities, $N$, and Doppler parameters, $b$, for each absorption component. 
The continuum is modeled as a Legendre polynomial with an order of up to 4. The one-sigma fitting errors for $N$ and $b$ (as listed in Tables $1-3$) are estimated using the diagonals of the Hesse matrix.

The O[vi]{} system at $z=2.1098$
--------------------------------

Fig. 1 shows the velocity profiles of O[vi]{} ($\lambda\lambda 1031.9,1037.6$), C[iv]{} ($\lambda\lambda 1548.2,1550.8$), and H[i]{} Ly$\alpha$ and Ly$\beta$ ($\lambda\lambda 1215.7,1025.7$) for the $z=2.1098$ absorber in the high-resolution data (left panel) and the intermediate-resolution data (right panel). From visual inspection of both panels we find no significant differences between the two data sets. The S/N ratios are better for the high-resolution data, except for the C[iv]{} region, where this ratio is slightly better at intermediate resolution. The differences in the values for $N$, $b$, and $z$ derived for the individual absorption components in the intermediate- and high-resolution spectra are therefore a result of the different S/N values in the two data sets. Two absorption components are detected in each of these ions. The O[vi]{} absorption is relatively strong compared to C[iv]{}. H[i]{} absorption is weak compared to other O[vi]{} absorbers at similar redshift [e.g., @Bergeron02], with a central absorption depth in the H[i]{} Ly$\alpha$ line of less than 70 percent. Note that the second, weaker component of H[i]{} Ly$\alpha$ and Ly$\beta$ absorption associated with the high-ion absorption is blended, so that the true component structure of the H[i]{} and the relative H[i]{} column densities and H[i]{} $b$-values remain somewhat uncertain. The blending aspect is not taken into account in the formal error estimate for $N$ and $b$ given in Table 1, which is based on the profile fitting. 
While for the stronger of these components the absorption of O[vi]{}, C[iv]{}, and H[i]{} is well aligned, there appears to be a small ($< 10$ kms$^{-1}$) velocity shift between H[i]{} and the high ions in the weaker component (see Table 1). If real, this shift may indicate that the H[i]{} and the metal ions do not trace the same gas phase in the weaker absorption component. Because of the blending of the H[i]{} absorption, however, the reality of this shift remains unclear. For the column densities listed in Table 1, we derive log $N$(O[vi]{}$)\approx 14.3$, log $N$(C[iv]{}$)\approx 13.1$, and log $N$(H[i]{}$)\approx 13.4$ in the stronger of the two components. The resulting ion-to-hydrogen ratios of $N$(O[vi]{}$)/N$(H[i]{}$)\sim 8$ and $N$(C[iv]{}$)/N$(H[i]{}$)\sim 0.5$ already indicate that the metallicity of this absorber must be fairly high [@Bergeron05]. Note that because of the blending problem in the Ly$\alpha$ and Ly$\beta$ lines the H[i]{} column density may be regarded as an upper limit, so that the ratios given above could be even higher. 
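The quoted ion-to-hydrogen ratios are simple antilogs of the fitted log column densities; a minimal check using the strong-component values from Table 1 (the helper name is ours, for illustration):

```python
def column_ratio(log_N_ion, log_N_HI):
    """Linear column-density ratio from two log10 column densities."""
    return 10.0 ** (log_N_ion - log_N_HI)

# Strong component of the z = 2.1098 absorber (Table 1):
r_ovi = column_ratio(14.3, 13.4)  # N(O VI)/N(H I), roughly 8
r_civ = column_ratio(13.1, 13.4)  # N(C IV)/N(H I), roughly 0.5
```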
Table 2. H[i]{} line-fit results for the O[vi]{} system at $z=2.1660$; log $N$(H[i]{}) in cm$^{-2}$, $b$ in kms$^{-1}$. The upper five and lower five rows list the fits for the two data sets.

| Comp | $z$ | log $N$(H[i]{}) | $b$ |
|---|---|---|---|
| 1 | 2.16466 | 13.27 ($\pm$0.01) | 40.0 ($\pm$1.2) |
| 2 | 2.16552 | 14.49 ($\pm$0.07) | 29.9 ($\pm$1.2) |
| 3 | 2.16605 | 15.18 ($\pm$0.03) | 34.9 ($\pm$0.5) |
| 4 | 2.16746 | 13.16 ($\pm$0.03) | 19.2 ($\pm$0.6) |
| 5 | 2.16765 | 13.77 ($\pm$0.01) | 45.6 ($\pm$0.3) |
| 1 | 2.16467 | 13.21 ($\pm$0.02) | 37.4 ($\pm$1.3) |
| 2 | 2.16559 | 14.57 ($\pm$0.12) | 31.9 ($\pm$1.7) |
| 3 | 2.16607 | 15.13 ($\pm$0.05) | 35.5 ($\pm$0.8) |
| 4 | 2.16749 | 13.19 ($\pm$0.02) | 18.8 ($\pm$0.5) |
| 5 | 2.16768 | 13.76 ($\pm$0.01) | 46.8 ($\pm$0.3) |

Table 3. Metal-ion line-fit results for the O[vi]{} system at $z=2.1660$; log $N$ in cm$^{-2}$, $b$ in kms$^{-1}$. The upper nine and lower nine rows list the fits for the two data sets.

| Comp | $z$(O[vi]{}) | $z$(C[iv]{}) | $z$(C[iii]{}) | log $N$(O[vi]{}) | $b$(O[vi]{}) | log $N$(C[iv]{}) | $b$(C[iv]{}) | log $N$(C[iii]{}) | $b$(C[iii]{}) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 2.16518 | — | — | 13.35($\pm$0.03) | 15.4($\pm$1.3) | — | — | — | — |
| 2 | 2.16542 | — | — | 13.02($\pm$0.08) | 8.0($\pm$1.2) | — | — | — | — |
| 3 | 2.16569 | 2.16569 | 2.16566 | 13.63($\pm$0.02) | 13.5($\pm$0.7) | 12.18($\pm$0.05) | 11.1($\pm$1.7) | 12.68($\pm$0.07) | 11.1$^{\rm a}$ |
| 4 | 2.16600 | 2.16600 | 2.16602 | 13.41($\pm$0.03) | 9.9($\pm$0.6) | 13.17($\pm$0.05) | 9.3($\pm$0.5) | 13.49($\pm$0.05) | 15.1($\pm$1.5) |
| 5 | — | 2.16616 | 2.16618 | — | — | 12.96($\pm$0.10) | 10.0($\pm$1.6) | 12.78($\pm$0.23) | 8.0($\pm$3.7) |
| 6 | 2.16638 | 2.16638 | 2.16633 | 13.21($\pm$0.04) | 13.7($\pm$0.8) | 12.53($\pm$0.09) | 12.6($\pm$2.2) | 12.22($\pm$0.47) | 12.6$^{\rm a}$ |
| 7 | 2.16744 | 2.16741 | 2.16740 | 12.75($\pm$0.15) | 6.8($\pm$0.8) | 11.90($\pm$0.05) | 4.8$^{\rm b}$ | 12.17($\pm$0.14) | 4.8$^{\rm a}$ |
| 8 | 2.16766 | 2.16770 | 2.16780 | 13.01($\pm$0.16) | 10.6($\pm$0.4) | 11.84($\pm$0.08) | 9.2$^{\rm c}$ | 12.32($\pm$0.15) | 9.2$^{\rm a}$ |
| 9 | 2.16796 | 2.16792 | 2.16789 | 13.30($\pm$0.08) | 10.9($\pm$1.3) | 11.50($\pm$0.12) | 4.8$^{\rm b}$ | 11.85($\pm$0.36) | 4.8$^{\rm a}$ |
| 1 | 2.16521 | — | — | 13.12($\pm$0.03) | 8.3($\pm$1.0) | — | — | — | — |
| 2 | 2.16544 | — | — | 13.13($\pm$0.04) | 6.4($\pm$0.9) | — | — | — | — |
| 3 | 2.16600 | 2.16567 | 2.16557 | 13.53($\pm$0.05) | 10.6($\pm$0.9) | 12.16($\pm$0.05) | 11.1($\pm$1.5) | 12.87($\pm$0.11) | 14.4($\pm$3.7) |
| 4 | 2.16606 | 2.16602 | 2.16603 | 13.73($\pm$0.04) | 24.8($\pm$3.1) | 13.23($\pm$0.02) | 10.4($\pm$0.4) | 13.39($\pm$0.08) | 10.4$^{\rm a}$ |
| 5 | — | 2.16619 | 2.16626 | — | — | 12.73($\pm$0.09) | 7.5($\pm$0.9) | 13.12($\pm$0.15) | 7.5$^{\rm a}$ |
| 6 | 2.16646 | 2.16637 | 2.16635 | 12.97($\pm$0.11) | 10.4($\pm$2.0) | 12.67($\pm$0.08) | 15.3($\pm$2.6) | — | — |
| 7 | 2.16748 | 2.16742 | — | 12.98($\pm$0.03) | 7.3($\pm$0.9) | 11.67($\pm$0.17) | 4.8($\pm$2.3) | — | — |
| 8 | 2.16767 | 2.16758 | — | 12.93($\pm$0.05) | 2.8($\pm$0.9) | 12.24($\pm$0.07) | 22.2($\pm$3.1) | — | — |
| 9 | 2.16796 | 2.16797 | — | 13.50($\pm$0.01) | 12.7($\pm$0.6) | 11.85($\pm$0.06) | 4.8($\pm$1.2) | — | — |

$^{\rm a}$ Fixed to $b_{\rm C\,{III}} = b_{\rm C\,{IV}}$.
$^{\rm b}$ Fixed to the $b$-value derived from the intermediate-resolution data.
$^{\rm c}$ Lower limit, fixed to the minimal value.

The O[vi]{} system at $z=2.1660$
--------------------------------

The O[vi]{} system at $z=2.1660$ exhibits a significantly more complex absorption pattern than the absorber at $z=2.1098$, as can be seen in the velocity profiles presented in Fig. 2. O[vi]{} absorption is observed in eight individual absorption components, spanning a velocity range as large as $\sim 300$ kms$^{-1}$. From visual inspection it is further evident that the absorption pattern of O[vi]{} differs from those of the other detected intermediate and high ions (C[iii]{}, C[iv]{}) and H[i]{}, although some of the components appear to be aligned in velocity space. As for the system at $z=2.1098$, there are no significant differences in the absorption characteristics between the high-resolution data and the intermediate-resolution data. 
However, the S/N ratio is somewhat lower in the latter for the lines that are located in the blue part of the spectrum, so that the resulting fit values for $N$, $b$, and $z$ for the individual absorption components differ slightly (Tables 2 and 3). We have modeled the H[i]{} absorption by simultaneously fitting Ly$\alpha$ and Ly$\beta$ in four absorption components (components $2-5$; see Table 2), obtaining column densities between $13.2 <$ log $N$(H[i]{})$< 15.2$. One additional component (component 1) is present in the Ly$\alpha$ absorption, but is blended in Ly$\beta$ (see Fig.2), so that $N$(H[i]{}) was derived solely from Ly$\alpha$. Note that for the H[i]{} fit we have not tried to tie the H[i]{} component structure to the structure seen in the metal ions, as this requires knowledge of the physical conditions in the absorber. This aspect will be discussed in detail in Sect.4.2, where we try to reconstruct the H[i]{} absorption pattern based on a photoionization model. Instead, we have fitted the H[i]{} absorption with the minimum number of absorption components required to match the observations (Fig.2, lowest panel) and to obtain an estimate of the total H[i]{} column in the absorber. By summing over the column densities in the individual absorption components we derive total column densities of log $N$(O[vi]{}$)\approx 14.2$, log $N$(C[iii]{}$)\approx 13.7$, log $N$(C[iv]{}$)\approx 13.5$, and log $N$(H[i]{}$)\approx 15.3$. The resulting ion-to-hydrogen ratios of $N$(O[vi]{}$)/N$(H[i]{}$)\sim 0.1$ and $N$(C[iv]{}$)/N$(H[i]{}$)\sim 0.02$ (representing the average over all components) are substantially smaller than in the $z=2.1098$ system, pointing toward a lower (mean) metallicity of the absorber. The complexity of the absorption pattern of the various species in this system and the large velocity spread suggest that this absorber arises in an extended multi-phase gas structure.
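The total column densities quoted above follow from summing the component columns in linear rather than logarithmic space. A minimal Python sketch (the helper name `total_log_column` is ours, not from the paper), using the H[i]{} component values from Table 2:

```python
import math

def total_log_column(log_columns):
    """Sum component column densities given as log10[N(cm^-2)]
    and return the total column density, again as log10[N]."""
    return math.log10(sum(10.0**log_n for log_n in log_columns))

# H I components of the z = 2.1660 absorber (Table 2, high-resolution fit)
log_n_hi = [13.27, 14.49, 15.18, 13.16, 13.77]
print(round(total_log_column(log_n_hi), 1))  # -> 15.3, the quoted total
```

The same sum over the O[vi]{} entries of Table 3 reproduces the quoted log $N$(O[vi]{}$)\approx 14.2$.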
--- ------------------- --------- --------- ----------------- ---------------------------------- --------- ---------------- ------------- --------------
    $v$ [kms$^{-1}$]    C[iv]{}   O[vi]{}   H[i]{}            log [$n_{\rm H}\,$(cm$^{-3}$)]     log $Z$   log [$T$(K)]     $L$ [kpc]     $f_{\rm HI}$
1   0                   13.12     14.27     13.38             $-$4.20                            $-$0.24   4.54             19.9          $-$5.21
2   $+$25               12.23     13.50     13.37             $-$4.25                            $-$1.02   4.64             30.5          $-$5.35
2   $+$25               12.23     13.50     12.57$^{\rm a}$   $-$4.28                            $-$0.24   4.57             4.7           $-$5.32
--- ------------------- --------- --------- ----------------- ---------------------------------- --------- ---------------- ------------- --------------
\
$^{\rm a}$ Our best H[i]{} guess in the model for the second component with fixed metallicity\

Ionization modeling and physical conditions in the gas
======================================================

To infer information on the physical properties of the two O[vi]{} absorbers towards PKS1448$-$232 we have modeled the ionization conditions in these systems in detail. Since the two absorbers at $z=2.1098$ and $z=2.1660$ have redshifts close to the quasar redshift ($z_{\rm QSO}=2.208$), it is necessary to check whether the two systems lie in the proximity zone of the background quasar and are influenced by its ionizing radiation. With the redshifts given above, the two absorbers have velocity separations from the QSO of $\delta v_{2.1098}\approx 9000$ kms$^{-1}$ and $\delta v_{2.1660}\approx 4000$ kms$^{-1}$, and thus the absorber at $z=2.1660$ can be regarded (depending on the definition) as an associated system. With a (monochromatic) luminosity at the Lyman limit of $L_{912}=3.39 \times 10^{31}$ erg s$^{-1}$ Hz$^{-1}$, the size of the sphere of influence of the ionizing radiation from PKS1448$-$232 is known to be $6.7$ Mpc, corresponding to a velocity separation of $\sim 1400$ kms$^{-1}$ [@Fox08].
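The quoted velocity separations can be checked with the redshift-difference relation $\delta v \approx c\,(z_{\rm QSO}-z_{\rm abs})/(1+z_{\rm QSO})$. A short sketch, assuming this standard non-relativistic approximation (adequate for these small redshift differences; the function name is ours):

```python
C_KMS = 299792.458  # speed of light in km/s

def velocity_separation(z_abs, z_qso):
    """Velocity separation between absorber and QSO in km/s,
    using dv = c * (z_qso - z_abs) / (1 + z_qso)."""
    return C_KMS * (z_qso - z_abs) / (1.0 + z_qso)

Z_QSO = 2.208
print(velocity_separation(2.1098, Z_QSO))  # roughly 9200 km/s
print(velocity_separation(2.1660, Z_QSO))  # roughly 3900 km/s
```

Both values are consistent with the $\approx 9000$ and $\approx 4000$ kms$^{-1}$ quoted above, and both exceed the $\sim 1400$ kms$^{-1}$ extent of the sphere of influence.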
Therefore, it is safe to assume that the ionizing radiation coming from PKS1448$-$232 itself has no measurable influence on the ionization conditions in the two O[vi]{} systems. The small $b$-values measured for O[vi]{}, C[iv]{}, and H[i]{} indicate that collisional ionization is not responsible for the presence of O[vi]{} in the gas. It is common to assume that the observed Doppler parameters ($b_{\rm obs}$) are composed of a thermal and a turbulent component ($b_{\rm th}$ and $b_{\rm turb}$, respectively), so that $b^{2}_{\rm obs} = b^{2}_{\rm th} + b^{2}_{\rm turb}$. The thermal component can be expressed as $b^{2}_{\rm th} = 2kT/m$, where $T$ is the gas temperature and $m$ is the mass of the considered ion. The Doppler parameters measured for the O[vi]{} components in the two absorbers are all $b<16$ kms$^{-1}$, and many of them are $b<10$ kms$^{-1}$ (see Tables 1 and 3), indicating that $T<10^5$ K. This value is below the peak temperature of O[vi]{} in collisional ionization equilibrium [$T\sim 3\times 10^5$ K; @Sutherland93]; it is also lower than the temperature range expected for O[vi]{} arising in turbulent mixing layers in the interface regions between cold and hot gas [$T=10^5-10^6$ K; @Kwak10]. Consequently, photoionization by the hard UV background remains the only plausible origin for the presence of O[vi]{} in the two high-ion absorbers towards PKS1448$-$232. Based on these considerations, we have modeled the ion column densities in the two O[vi]{} systems using the photoionization code CLOUDY [version C08; @Ferland]. For this, we have assumed a solar relative abundance pattern of O and C and an optically thin, plane-parallel geometry in photoionization equilibrium, exposed to a @HM01 UV background spectrum at $z = 2.16$, which is normalized to $\log~J_{912} = -21.15$ [@Scott] at the Lyman limit. We further assume that each of the observed velocity components is produced by a “cloud”, which we model as an individual entity.
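The temperature limit inferred above from the line widths follows directly from $b^{2}_{\rm th} = 2kT/m$: assuming purely thermal broadening ($b_{\rm turb}=0$) turns an observed $b$ into an upper limit on $T$. A minimal sketch (function name ours):

```python
K_B = 1.380649e-23    # Boltzmann constant [J/K]
AMU = 1.66053907e-27  # atomic mass unit [kg]

def max_temperature(b_kms, mass_amu):
    """Upper limit on T from an observed Doppler parameter b,
    assuming purely thermal broadening: T = m b^2 / (2k)."""
    b_ms = b_kms * 1.0e3  # km/s -> m/s
    return mass_amu * AMU * b_ms**2 / (2.0 * K_B)

# O VI (oxygen, m ~ 16 amu): b = 10 km/s implies T just below 1e5 K
print(f"{max_temperature(10.0, 16.0):.2e}")  # ~9.6e4 K
```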
As input parameters we consider the measured column densities of C[iii]{} (only for the $z=2.1660$ absorber), C[iv]{}, and O[vi]{}, the metallicity $Z$ (in solar units), and the hydrogen particle density $n_{\rm H}$. The metallicity of each cloud and the hydrogen density were varied in a range appropriate for intergalactic clouds (i.e., $-3\leq \,$log$~Z\,\leq 0$ and $-5\leq \,$log$ ~n_{\rm H} \leq 0$). We then applied the following iterative modeling procedure. In a first step, CLOUDY was run with a set of values of $Z$, $n_{\rm H}$, and $N$(H[i]{}), where $N$(H[i]{}) is constrained by the observations. In a second step, the corresponding values of $N$(C[iii]{}), $N$(C[iv]{}), and $N$(O[vi]{}) were calculated. The output was compared with the observed column densities and, in case of a mismatch, the input parameters $Z$ and $n_{\rm H}$ were adjusted for the next iteration step. This process was repeated until the differences between the output column densities and the observed values became negligible and a unique solution was obtained. In addition to the ion column densities, our CLOUDY model provides information on the neutral hydrogen fraction, $f_{\rm HI}$, the gas temperature, $T$, and the absorption path length, $L=N$(H[i]{}$)/(f_{\rm HI}\,n_{\rm H})$.

The system at $z=2.1098$
------------------------

As mentioned earlier, absorption by O[vi]{} and C[iv]{} is well aligned in both components in this system, while the true component structure of the H[i]{} is uncertain because of blending effects in the Ly$\alpha$ and Ly$\beta$ lines. Because of the alignment of O[vi]{} and C[iv]{} we assume a single-phase model, in which each of the two components (clouds) at $v=0$ and $+25$ kms$^{-1}$ in the $z=2.1098$ rest frame hosts O[vi]{}, C[iv]{}, and H[i]{} at column densities similar to the ones derived from the profile fitting. Consequently, we have chosen log $N$(H[i]{}$)=13.37$ and $13.38$ as input for the CLOUDY modeling and followed the procedure outlined above.
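The absorption path length $L=N$(H[i]{}$)/(f_{\rm HI}\,n_{\rm H})$ can be evaluated directly from the tabulated logarithmic quantities. A minimal sketch (conversion constant and function name ours), using the first component of the $z=2.1098$ absorber from Table 4:

```python
CM_PER_KPC = 3.0857e21  # centimetres per kiloparsec

def path_length_kpc(log_n_hi, log_f_hi, log_n_h):
    """Absorption path length L = N(HI) / (f_HI * n_H) in kpc.
    Arguments are log10 of N(HI) [cm^-2], f_HI, and n_H [cm^-3]."""
    length_cm = 10.0**(log_n_hi - log_f_hi - log_n_h)
    return length_cm / CM_PER_KPC

# Component 1 of the z = 2.1098 absorber (Table 4)
print(round(path_length_kpc(13.38, -5.21, -4.20), 1))  # ~20 kpc (19.9 in Table 4)
```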
The results from the CLOUDY modeling of the $z=2.1098$ absorber are summarized in Table 4. Our model reproduces the observed O[vi]{} and C[iv]{} column densities well in both components if the clouds have a density of log $n_{\rm H} \approx -4.2$, a temperature of log $T\approx 4.6$, and a neutral hydrogen fraction of log $f_{\rm HI}\approx -5.3$. However, to match the observations, the second component (at $+25$ kms$^{-1}$) in our initial model (Table 4, first two rows) needs to have a metallicity of log $Z=-1.02$, which is $\sim 0.8$ dex lower than for the other component (log $Z=-0.24$). The absorption path lengths are $\sim 20$ kpc for the component at $0$ kms$^{-1}$ and $\sim 30$ kpc for the component at $+25$ kms$^{-1}$. Because of the blending problem in the H[i]{} Ly$\alpha$ and Ly$\beta$ absorption, which particularly affects the estimate of $N$(H[i]{}) in the cloud at $+25$ kms$^{-1}$ (Fig.1), we have set up a second CLOUDY model in which we have tied the metallicity of the $+25$ kms$^{-1}$ component to that of the other component (log $Z=-0.24$), leaving $N$(H[i]{}) for this component as a free parameter. From this we derive a value of log $N$(H[i]{}$)=12.57$ for the cloud at $+25$ kms$^{-1}$, and the absorption path length reduces to $L=4.7$ kpc. In view of the blending, we regard this model as more plausible than the model with two different metallicities and a larger absorption path length. Summarizing, our CLOUDY modeling suggests that the $z=2.1098$ absorber towards PKS1448$-$232 represents a relatively simple, metal-rich O[vi]{} absorber in which the high ions O[vi]{} and C[iv]{} coexist in a single gas phase.
--- ------------------- ---------- --------- --------- ------------------ ---------------------------------- --------- ---------------- ------------- --------------
    $v$ [kms$^{-1}$]    C[iii]{}   C[iv]{}   O[vi]{}   H[i]{}$^{\rm a}$   log [$n_{\rm H}\,$(cm$^{-3}$)]     log $Z$   log [$T$(K)]     $L$ [kpc]     $f_{\rm HI}$
3   $-$28               12.68      12.18     10.97     14.51              $-$2.74                            $-$1.7    4.42             0.3           $-$3.68
4   $+$0                13.49      13.17     12.25     14.18              $-$2.97                            $-$1.7    4.46             4.1           $-$3.95
5   $+$16               12.78      12.96     —         14.51              $-$3.56                            $-$1.7    4.58             16.3          $-$4.63
6   $+$37               12.22      12.53     12.86     14.08              $-$3.71                            $-$1.7    4.61             12.3          $-$4.79
7   $+$134              12.17      11.90     10.92     13.26              $-$2.93                            $-$1.0    4.38             0.04          $-$3.84
8   $+$162              12.32      11.84     10.49     13.57              $-$2.67                            $-$1.0    4.34             0.02          $-$3.55
9   $+$182              11.85      11.50     10.38     12.99              $-$2.84                            $-$1.0    4.37             0.01          $-$3.73
--- ------------------- ---------- --------- --------- ------------------ ---------------------------------- --------- ---------------- ------------- --------------
\
$^{\rm a}$ Our best H[i]{} guess in the models.\

The system at $z=2.1660$
------------------------

We started to model this system with CLOUDY, again under the assumption of a single gas phase hosting the observed intermediate and high ions C[iii]{}, C[iv]{}, and O[vi]{} in the various subcomponents. However, during the modeling process it quickly turned out that it is impossible to match the observed column densities of C[iii]{} and O[vi]{} with a single gas phase in the components where these two ions are aligned in velocity space. Our modeling instead indicates that the C[iii]{} absorption must arise in an environment that has a relatively high gas density and that is spatially distinct from the O[vi]{} phase. In a second step, we have tried to tie together the high ions C[iv]{} and O[vi]{} in a single gas phase (as for the $z=2.1098$ system) in the relevant absorption components, ignoring the C[iii]{} phase.
Again, this approach does not deliver satisfying results: for some of the components in which C[iv]{}/O[vi]{} is constrained by observations, we obtain very low gas densities and very large absorption path lengths on Mpc scales, which are highly unrealistic. Given that the overall component structures of O[vi]{} and C[iv]{} are substantially different in this system (Fig.2), this result is not really surprising. The only modeling approach for which we obtain realistic results on gas densities, temperatures, and absorption path lengths in this system and its subcomponents is a two-phase model, in which C[iii]{} coexists with C[iv]{} and part of the H[i]{} in one (spatially relatively confined) phase, and O[vi]{} with part of the H[i]{} in a second (spatially relatively extended) phase. The coexistence of C[iii]{} and C[iv]{} in one phase is further suggested by the fact that the C[iii]{} and C[iv]{} absorption is well aligned in velocity space (see Fig.2). The results from this two-phase model are presented in Tables 5 and 6. A critical issue for the modeling of this complex multi-phase absorber with its many absorption components is the assumption made for the neutral gas column density in each subcomponent (and phase). Since most subcomponents are smeared together into one large absorption trough in the H[i]{} Ly$\alpha$ and Ly$\beta$ absorption, the observational data provide little information on the distribution of the H[i]{} column densities among the individual components. Yet, the data give a solid estimate of the [*total*]{} H[i]{} column density in the absorber (log $N\approx 15.3$; see Sect.2.4), which must match the sum of $N$(H[i]{}) over all subcomponents considered in our model. Consequently, we included in our iteration procedure the constraints on $N$(H[i]{}$)_{\rm tot}$ and the [*shape*]{} of the (total) H[i]{} absorption profile.
The latter aspect also concerns the choice of the gas temperature in the model, as $T$ regulates the thermal Doppler broadening and thus the width of the modeled H[i]{} lines. We have modeled the H[i]{} width following the approach of @Ding. With these various constraints we first modeled the C[iii]{}/C[iv]{} phase in the absorber. However, because of the extremely complex parameter space, we did not find a unique solution for $(T,n_{\rm H},Z)$ among the individual components, but had to impose further constraints. Since the individual components observed in C[iii]{}/C[iv]{} are very close together in velocity space, we assumed that they all have the same metallicity and, based on the $Z$ range allowed in the model, we set log $Z=-1.5$ for all subcomponents. This model was able to match the observed column densities of these two ions in the individual subcomponents, but did not reproduce the gross shape of the overall H[i]{} absorption well, suggesting that the metallicity in this absorber is non-uniform among the individual absorption components. Therefore, we refined our model, now using two different metallicities: log $Z=-1.7$ for the saturated H[i]{} components and log $Z=-1.0$ for the weaker H[i]{} components (see Tables 5 and 6 for details). Although not perfect, this model delivers a satisfying match between the modeled spectrum and the UVES data. Adopting this model, we find that the C[iii]{}/C[iv]{} absorbing components have temperatures between log $T=4.3$ and $4.6$, densities between log $n_{\rm H}=-3.7$ and $-2.7$, and neutral gas fractions between log $f_{\rm HI}=-4.8$ and $-3.6$ (see Table 5). The absorption path lengths vary between $0.3$ and $16.3$ kpc for the components with log $Z=-1.7$, and between $0.01$ and $0.04$ kpc for the components with log $Z=-1.0$. These numbers suggest that the C[iii]{}/C[iv]{} absorbing phase resides in relatively small and confined gas clumps.
This scenario is supported by the small turbulent $b$-values of $<6$ kms$^{-1}$ for the subcomponents that we derive in our model. Note that in Table 5 we also list the predicted column densities for O[vi]{}, which are typically $1-2$ orders of magnitude below the observed ones in this absorber. This, again, underlines that C[iii]{}/C[iv]{} and O[vi]{} must reside in different gas phases with different physical conditions to explain the observed column densities. Finally, we have modeled the O[vi]{} absorbing phase in the $z=2.1660$ absorber, based on the observed O[vi]{} column densities. Since there are no ions other than H[i]{} and O[vi]{} that could provide information about the physical conditions in this phase, we fixed the metallicity of the gas to log $Z=-1.7$ and log $Z=-1.0$ (equal to the C[iii]{}/C[iv]{} phase) and constrained the temperature range \[$T_{\rm min},T_{\rm max}$\] in the CLOUDY models based on the observed line widths of O[vi]{} (giving $T_{\rm max}$) and the modeling results of the C[iii]{}/C[iv]{} phase (giving $T_{\rm min}$ for all components except the first two). The results of this model are shown in Table 6. We derive gas densities in the range log $n_{\rm H}=-4.6$ to $-3.2$ and neutral gas fractions in the range log $f_{\rm HI}=-5.8$ to $-4.6$. The absorption path length varies between $19.8$ and $83.3$ kpc for the components with log $Z=-1.7$, and between $1.3$ and $38.3$ kpc for the ones with log $Z=-1.0$. The mismatch in $N$(O[vi]{}) between the model and the data for components one and nine (see Table 6) points towards a metallicity distribution among the individual absorption components that is even more complex than the one assumed in our model. Despite this (minor) concern, our CLOUDY modeling for O[vi]{} provides clear evidence that the O[vi]{} absorbing phase has substantially lower gas densities than the C[iii]{}/C[iv]{} absorbing phase and is spatially more extended. 
In summary, our CLOUDY modeling of the $z=2.1660$ absorber suggests that this system represents a complex multi-phase gas structure, in which a number of cooler, C[iii]{}/C[iv]{} absorbing cloudlets are embedded in a spatially more extended, O[vi]{} absorbing gas phase spanning a total velocity range of $\sim 300$ kms$^{-1}$. Although the metallicity is not well constrained in our model, it suggests that log $Z\leq -1$ in the absorber, which is $\sim 0.8$ dex below the value obtained for the system at $z=2.1098$.

--- ------------------- ----------------- ------------------ ---------------------------------- --------- ----------------- --------------- ---------------------
    $v$ [kms$^{-1}$]    O[vi]{}           H[i]{}$^{\rm a}$   log [$n_{\rm H}\,$(cm$^{-3}$)]     log $Z$   log [$T$(K)]      $L$ [kpc]       $f_{\rm HI}$
1   $-$78               12.96$^{\rm b}$   13.17              $< -$3.76                          $-$1.7    $<$5.36           $<$19.8         $>-$5.86
2   $-$55               13.02             14.10              $< -$3.77                          $-$1.7    $<$4.79           $<$25.8         $>-$5.03
3   $-$30               13.63             14.51              $-$3.91 ... $-$3.31                $-$1.7    4.42 ... 5.24     58.4 ... 67.7   $-$5.50 ... $-$4.84
4   $+$0                13.41             14.18              $-$3.99 ... $-$3.88                $-$1.7    4.46 ... 4.98     42.6 ... 83.3   $-$5.36 ... $-$4.95
6   $+$36               13.21             14.08              $-$3.92 ... $-$3.23                $-$1.7    4.61 ... 5.26     32.1 ... 21.5   $-$5.52 ... $-$5.00
7   $+$136              12.75             13.26              $-$3.72 ... $-$3.72                $-$1.0    4.38 ... 4.64     1.3 ... 2.0     $-$4.82 ... $-$4.61
8   $+$157              13.01             13.57              $-$3.70 ... $-$3.52                $-$1.0    4.34 ... 5.03     2.2 ... 6.1     $-$5.19 ... $-$4.56
9   $+$185              13.25$^{\rm c}$   12.99              $-$4.60 ... $-$4.25                $-$1.0    4.37 ... 5.06     32.3 ... 38.3   $-$5.76 ... $-$5.48
--- ------------------- ----------------- ------------------ ---------------------------------- --------- ----------------- --------------- ---------------------
\
$^{\rm a}$ Our best H[i]{} guess in the models\
$^{\rm b}$ Observed [log]{} $N$(O[vi]{}) $= 13.35$\
$^{\rm c}$ Observed [log]{} $N$(O[vi]{}) $= 13.30$\

Discussion
==========

Our detailed analysis of the two O[vi]{} absorbers at $z=2.1098$ and $z=2.1660$ towards the quasar PKS1448$-$232 demonstrates the large diversity and complexity of high-ion absorbers at high redshift. During the past years, a number of studies using both optical observations [e.g., @Bergeron02; @Carswell02; @Simcoe02; @Simcoe04; @Simcoe06; @Bergeron05; @Aguirre08] and numerical simulations [e.g., @Fangano07; @Kawata07] have been dedicated to investigating the properties of high-redshift O[vi]{} systems and their relation to galaxies. Based on their survey of O[vi]{} absorbers in the redshift range $z=2.0-2.6$, @Bergeron05 suggested that O[vi]{} systems may be classified into two different populations: metal-rich absorbers (“type 1”) that have large $N$(O[vi]{})/$N$(H[i]{}) ratios and that appear to be linked to galaxies and galactic winds, and metal-poor absorbers (“type 0”) with small $N$(O[vi]{})/$N$(H[i]{}) ratios that trace the intergalactic medium. The two absorbers towards PKS1448$-$232 discussed in this paper do not match the classification scheme of @Bergeron05. The absorber at $z=2.1098$ has a very large $N$(O[vi]{})/$N$(H[i]{}) ratio of $\sim 8$ (i.e., it is of type 1); it is a simple, single-phase, metal-rich system with a metallicity slightly below the solar value. Yet, this system is completely isolated, with no strong H[i]{} Ly$\alpha$ absorption within $1000$ kms$^{-1}$. In contrast, the absorber at $z=2.1660$ has a $N$(O[vi]{})/$N$(H[i]{}) ratio of only $\sim 0.1$ and a metallicity of $0.1$ solar or lower [i.e., it is of type 0 according to @Bergeron05].
However, this absorber is a complex multi-phase system with a non-uniform metallicity, suggesting that it originates in a circumgalactic environment. While this mismatch with the @Bergeron05 classification scheme certainly has no statistical relevance for the general interpretation of O[vi]{} absorbers at high redshift, the results suggest that, for a thorough understanding of highly ionized gas at high redshift, the absorption characteristics of O[vi]{} systems may be too diverse for a simple classification scheme based solely on observed (and partly averaged) column density ratios of O[vi]{}, H[i]{}, and other ions. One critical drawback of many previous O[vi]{} surveys at high $z$ is that they often consider only simplified models for the ionization conditions in their sample of high-ion absorbers, so that the multi-phase character of the gas and possible ionization conditions far from photoionization equilibrium are only insufficiently taken into account. As pointed out by @Fox11, single-phase, single-component ionization models, if applied, will deliver physically irrelevant results for most of the O[vi]{} systems at high $z$. This implies that previous estimates of the baryon and metal content of O[vi]{} absorbers at low and high $z$ are possibly afflicted with large systematic uncertainties. One firm conclusion from many previous observational and theoretical studies of high-ion absorbers is that a considerable fraction of the O[vi]{} systems at low and high $z$ must arise in the metal-enriched circumgalactic environment of (star-forming) galaxies [e.g., @Wakker09; @Prochaska11; @Fox11a; @Tepper-Garcia11; @Fangano07]. Thus, the complex absorption pattern observed in the $z=2.1660$ system towards PKS1448$-$232 and in many other O[vi]{} absorbers at high $z$ may reflect the complex gas distribution of enriched gaseous material that was ejected from galaxies into the IGM during their wind-blowing phase [e.g., @Kawata07].
In this context, @Schaye07 suggested that the intergalactic metals have been transported from galaxies through galactic winds and reside in the form of dense, small, high-metallicity patches within large hydrogen clouds. These authors point out that much of the scatter in the metallicities derived for high-redshift absorbers could be explained by the spatially varying number of metal-rich patches and the different absorption path lengths through the surrounding metal-poor intergalactic filament, rather than by an overall (large-scale) metallicity scatter in the IGM. In this scenario, the substantial differences in the metallicities of the two O[vi]{} systems towards PKS1448$-$232, and even the intrinsic metallicity variations within the $z=2.1660$ system, could be explained by the different geometries of the absorbing structures, suggesting that much of the H[i]{} that is associated with the metal absorption in velocity space arises in a spatially distinct region. A similar conclusion was drawn by @Tepper-Garcia11, who studied the nature of O[vi]{} absorbers at low $z$ using a set of cosmological simulations. Note that absorbers with larger H[i]{} column densities, such as Lyman-limit systems (LLS) and damped Ly$\alpha$ systems (DLAs), also sometimes exhibit abundance variations among the different velocity subcomponents [e.g., @Richter05; @Prochter10]. This indicates that the metals in the gas surrounding high-$z$ galaxies are not well mixed. The observed velocity differences between O[vi]{} and other ions and the multi-phase nature of the gas provide further evidence for an inhomogeneous metallicity and density distribution in intervening high-ion absorbers.
Interestingly, the velocity misalignment appears to concern only the O[vi]{} absorbing phase in high-ion absorbers at high redshift, while other high ions such as N[v]{} and C[iv]{} generally appear to be well aligned with H[i]{}, even in systems that exhibit a complex velocity-component structure [@FR09]. This puzzling aspect underlines that additional detailed studies of individual O[vi]{} absorption systems could be very important for our understanding of intergalactic and circumgalactic gas at high redshift, as this ion traces a metal-enriched gas phase that cannot be observed by other means.

Summary and outlook
===================

In this paper, we have investigated two O[vi]{} absorbers at $z=2.1098$ and $z=2.1660$ towards the quasar PKS1448$-$232. For this, we have used high- ($R\approx 75,000$) and intermediate-resolution ($R\approx 45,000$) optical spectra obtained with the VLT/UVES instrument, together with CLOUDY photoionization models. The O[vi]{} system at $z=2.1098$ is characterized by strong O[vi]{} absorption and weak H[i]{} absorption in a relatively simple, two-component absorption pattern. The absorption by O[vi]{}, C[iv]{}, and H[i]{} is well aligned in velocity space, indicating that these ions trace the same gas phase. From a detailed photoionization modeling of this system we derive a metallicity of $\sim 0.6$ solar, a characteristic density of log $n_{\rm H} \approx -4.2$, a temperature of log $T\approx 4.6$, and a total absorption path length of $\sim 30$ kpc. The absorber is isolated, with no strong H[i]{} Ly$\alpha$ absorption within $1000$ kms$^{-1}$. The O[vi]{} absorber at $z=2.1660$ represents a complicated, multi-component absorption system with eight relatively weak and narrow O[vi]{} absorption components spanning almost $300$ kms$^{-1}$ in radial velocity. The O[vi]{} components are accompanied by strong H[i]{} absorption as well as C[iii]{} and C[iv]{} absorption.
The O[vi]{} component structure differs from that of H[i]{} and C[iv]{}, indicating a multi-phase nature of the absorber. Our photoionization modeling with CLOUDY suggests the presence of (at least) two distinct gas phases in this system. C[iii]{}, C[iv]{}, and most of the H[i]{} appear to coexist in several relatively compact cloudlets at gas densities of log $n_{\rm H}\approx -3.7$ to $-2.7$, temperatures of log $T\approx 4.3-4.6$, and absorption path lengths of $<16$ kpc. O[vi]{} appears to reside in a highly ionized, more extended gas phase at densities in the range log $n_{\rm H}\approx -4.6$ to $-3.2$, temperatures between log $T\approx 4.3$ and $5.3$, and absorption path lengths of up to $83$ kpc. While the exact metallicity of the absorber is not well constrained, our modeling favours a non-uniform metal abundance among the individual absorption components with (at least) two different metallicities of log $Z=-1.7$ and log $Z=-1.0$. Our study demonstrates the large diversity and complexity of O[vi]{} systems at high redshift. We speculate that some of the observed differences between the two high-ion absorbers towards PKS1448$-$232 could be a result of an inhomogeneous metallicity and density distribution in the photoionized IGM. Our study indicates that multi-phase, multi-component high-ion absorbers like the one at $z=2.1660$ demand detailed ionization modeling of the various subcomponents to obtain reliable information on the physical conditions and metal abundances in the gas. We conclude that a rather large effort is required to achieve a more complete view of the nature of O[vi]{} absorbers at high redshift. In the future, we plan to continue our investigation of these systems by using a larger sample of O[vi]{} absorbers in high-quality UVES archival data and comparing their absorption characteristics with artificial spectra generated from numerical simulations of star-forming galaxies and their intergalactic environment. N.D. and P.R.
acknowledge financial support from the German *Deutsche Forschungsgemeinschaft* (DFG) through grant Ri 1124/5-1.

[^1]: written by Robert Baade, Hamburger Sternwarte