--- abstract: 'In this work we explore a correspondence between quantum circuits and low-degree polynomials over the finite field ${\mathbb{F}}_2$. Any quantum circuit made up of Hadamard, Z, controlled-Z and controlled-controlled-Z gates gives rise to a degree-3 polynomial over ${\mathbb{F}}_2$ such that calculating quantum circuit amplitudes is equivalent to counting zeroes of the corresponding polynomial. We exploit this connection, which is especially clean and simple for this particular gate set, in two directions. First, we give proofs of classical hardness results based on quantum circuit concepts. Second, we find efficient classical simulation algorithms for certain classes of quantum circuits based on efficient algorithms for classes of polynomials.' author: - 'Ashley Montanaro[^1]' bibliography: - '../../thesis.bib' title: 'Quantum circuits and low-degree polynomials over ${\mathbb{F}}_2$' --- Introduction ============ Quantum computers are believed to outperform classical computers for important tasks as varied as simulation of quantum mechanics and factorisation of large integers. Although no large-scale general-purpose quantum computer has been built as yet, quantum computation can nevertheless already be used as a theoretical tool to study other areas of science and mathematics, without the need for an actual quantum computer. This work explores a simple correspondence between quantum circuits and low-degree polynomials over the finite field ${\mathbb{F}}_2$, i.e. the integers modulo 2. By picking the right gate set, it turns out that quantum circuit amplitudes have a close connection to counting zeroes of such polynomials. This correspondence can be exploited in two directions. On the one hand, ideas about quantum circuits can be used to prove purely classical results regarding the computational complexity of counting zeroes of polynomials over finite fields. 
On the other, known classical results about polynomials can be used to give new algorithms for simulating classes of quantum circuits. A similar perspective has been taken by a number of previous works. Particularly relevant is prior work of Dawson et al. [@dawson05], who showed that quantum circuit amplitudes for circuits of Toffoli and Hadamard gates can be understood in terms of solutions to systems of polynomial equations involving low-degree polynomials over ${\mathbb{F}}_2$. Here we use a slightly different universal gate set: Hadamard ($=\frac{1}{\sqrt{2}}{\left( \begin{smallmatrix} 1 & 1\\1& -1 \end{smallmatrix} \right)}$), Z ($={\left( \begin{smallmatrix} 1&0\\0&-1 \end{smallmatrix} \right)}$), controlled-Z (“CZ”) and controlled-controlled-Z (“CCZ”). This is essentially equivalent to the gate set of [@dawson05], as Toffoli gates are identical to CCZ gates conjugated by a Hadamard gate on the target qubit. However, this small shift in perspective seems to simplify and clarify some of the arguments involved. For example, the connection we use associates a single polynomial with each circuit. Related ideas to [@dawson05] were used by Rudolph [@rudolph09] to give a simple encoding of quantum circuit amplitudes as matrix permanents. The set of circuits we consider is a very special case of the class of “algebraic quantum circuits” studied by Bacon, van Dam and Russell [@bacon08] in some generality. The idea of proving classical results using quantum methods has also been explored previously; see [@drucker11] for a survey of many results in this area. 
Within computational complexity alone, three relevant examples are Aaronson’s proof of the computational hardness of computing the matrix permanent using the close connection between the permanent and linear-optical quantum circuits [@aaronson11a]; Kuperberg’s proof of the computational hardness of approximately computing Jones polynomials by expressing these in terms of quantum circuits [@kuperberg15]; and Fujii and Morimae’s proof of hardness of computing Ising model partition functions, again based on quantum circuits over a suitable gate set [@fujii13]. More recently, together with Bremner and Shepherd [@bremner15], the present author used a correspondence between low-degree polynomials and a certain class of simple quantum computations, known as IQP circuits [@shepherd09], to argue that random IQP circuits are unlikely to be efficiently simulable classically. This holds even if the classical simulator is allowed to be approximate, with a fairly generous notion of approximation. The correspondence between low-degree polynomials and quantum circuits which we investigate here seems particularly simple and direct. We have therefore tried to use it to highlight some of the beautiful ideas present in previous works, and to produce an accessible introduction to computational complexity issues suitable for physicists; and also an introduction suitable for computer scientists to how one can prove classical results using the quantum circuit model. We begin by introducing the circuit-polynomial correspondence and proving its correctness, and go on to make some simple observations about this connection. Then, in Section \[sec:compcomp\], we introduce the ideas from computational complexity that we will need, and in Section \[sec:phard\] show that the correspondence can be used to prove classical hardness of exactly computing the number of zeroes of low-degree polynomials. 
Similarly, in Section \[sec:approxcomplexity\] we show that approximate computation of this quantity is closely related to quantum computation. We study a new complexity measure for polynomials motivated by this correspondence – the quantum circuit width – in Section \[sec:width\]. Then, in Section \[sec:polysim\], we use the circuit-polynomial correspondence to give two simple classical simulation algorithms for classes of quantum circuits: circuits with few CCZ gates (or where the degree-3 part of the polynomial corresponding to the circuit has a small “hitting set”, qv), and circuits whose corresponding polynomial can be simplified by a linear transformation. We conclude in Section \[sec:conclusions\] with some open problems. Circuits and polynomials {#sec:circpoly} ======================== In this work, we consider quantum circuit amplitudes of the form ${\langle 0|C|0 \rangle}$, where $C$ is a unitary operator expressed as a circuit on $\ell$ qubits with $\operatorname{poly}(\ell)$ gates, and we write $\ket{0} = \ket{0}^{\otimes \ell}$ for conciseness throughout. The gates in $C$ are picked from the set $\mathcal{F} = \{$Hadamard, Z, CZ, CCZ$\}$[^2]. Using the gate set $\mathcal{F}$ will allow us to write ${\langle 0|C|0 \rangle}$ in a particularly concise form. Assume that $C$ begins and ends with a column of Hadamards, i.e. is of the form $$\Qcircuit @C=1em @R=.7em { & \gate{H} & \multigate{2}{C'} & \gate{H} & \qw \\ & \gate{H} & \ghost{C'} & \gate{H} & \qw \\ & \gate{H} & \ghost{C'} & \gate{H} & \qw }$$ for some circuit $C'$. This is without loss of generality, as we can always add pairs of Hadamards to the beginning or end of each line without changing the unitary operator corresponding to the circuit. Further assume that $C'$ contains at least one gate acting on each qubit. Let $h$ be the number of internal Hadamard gates that $C$ contains, i.e. the number of Hadamards in $C'$. 
Set $n = h + \ell$ and define a polynomial $f_C:\{0,1\}^n \rightarrow \{0,1\}$ over ${\mathbb{F}}_2$ as follows. Divide each horizontal wire of the internal part $C'$ into segments, with each segment corresponding to a portion of the wire which is either between two Hadamard gates or to the left/right of all the Hadamard gates. Associate a distinct variable $x_i$ with each segment of each wire. Observe that there are exactly $h+\ell$ variables in total. Each Hadamard gate now joins two segments and associates their corresponding variables, and each Z, CZ, CCZ gate is associated with one, two or three (respectively) variables, corresponding to the segments on which it acts. For each set of variables $x_{i_1},\dots,x_{i_k}$ associated with each gate, add the corresponding term $x_{i_1} \dots x_{i_k}$ to $f_C$. As we are working over ${\mathbb{F}}_2$, all addition and multiplication in $f_C$ is taken modulo 2. Note that this procedure never produces polynomials of degree higher than 3. As a simple example of this construction, consider the labelled circuit $C'$ in Figure \[fig:labcirc\], where we use the notation $$\Qcircuit @C=1em @R=.7em { & \control \qw & \qw },\;\;\;\; \Qcircuit @C=1em @R=.7em { & \ctrl{1} \qw & \qw\\ & \control \qw & \qw },\;\;\;\; \Qcircuit @C=1em @R=.7em { & \ctrl{2} \qw & \qw\\ & \control \qw & \qw \\ & \control \qw & \qw }$$ for Z, CZ, CCZ gates respectively. $$\Qcircuit @C=1em @R=.7em { & \ustick{x_1} \qw & \gate{H} & \ustick{x_2} \qw & \ctrl{1} & \qw & \qw & \ctrl{2} & \gate{H} & \ustick{x_3} \qw & \qw \\ & \ustick{x_4} \qw & \qw & \qw & \control \qw & \gate{H} & \ustick{x_5} \qw & \control \qw & \qw & \qw & \qw \\ & \ustick{x_6} \qw & \gate{H} & \ustick{x_7} \qw & \qw & \control \qw & \qw & \control \qw & \qw & \qw & \qw }$$ We now show that the number of zeroes of the polynomial corresponding to $C$ has a close connection to ${\langle 0|C|0 \rangle}$. 
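As an aside, the term-extraction procedure can be checked mechanically. The following sketch (ours; the term list is our reading of the labelled circuit in Figure \[fig:labcirc\], so the exact terms are an assumption) builds the polynomial and counts its zeroes by brute force:

```python
from itertools import product

# Terms of f_C as read off the labelled circuit (our assumption): the four
# Hadamards contribute x1*x2, x2*x3, x4*x5, x6*x7; the CZ contributes
# x2*x4, the Z gate x7, and the CCZ x2*x5*x7. Variables are 1-indexed
# to match the figure.
TERMS = [(1, 2), (2, 3), (4, 5), (6, 7), (2, 4), (7,), (2, 5, 7)]
N = 7  # n = h + ell = 4 + 3 variables in total

def f(x):
    """Evaluate the degree-3 polynomial over F_2 at the bit-tuple x."""
    return sum(all(x[i - 1] == 1 for i in term) for term in TERMS) % 2

n_zero = sum(1 for x in product((0, 1), repeat=N) if f(x) == 0)
n_one = 2**N - n_zero
```

Under this reading, 72 of the 128 inputs evaluate to 0 and 56 to 1, consistent with the value $\operatorname{gap}(f_C)=16$ quoted below for this example.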
To be more precise, ${\langle 0|C|0 \rangle}$ is proportional to $\operatorname{gap}(f_C)$, where the gap of a polynomial is the difference between the number of zeroes and ones of that polynomial: $$\operatorname{gap}(f_C) := \sum_{x \in \{0,1\}^n} (-1)^{f_C(x)} = |\{x:f_C(x)=0\}|-|\{x:f_C(x)=1\}|.$$ A similar result was shown in [@dawson05] for circuits containing Hadamard and Toffoli gates. However, the argument here seems somewhat simpler. Although there are several ways that the following result can be proven, we choose to highlight a connection to the beautiful results of [@bremner11]. \[prop:gap\] Let $C$ be a quantum circuit on $\ell$ qubits consisting of Hadamard, Z, CZ and CCZ gates, starting and ending with a column of Hadamard gates, and containing $h$ internal Hadamard gates. Then $${\langle 0|C|0 \rangle} = \frac{\operatorname{gap}(f_C)}{2^{h/2+\ell}}.$$ First consider the case where the internal part $C'$ of $C$ does not contain any Hadamard gates (as treated in [@bremner15 Appendix B]). Let $Z_i$ denote a Z gate acting on the $i$’th qubit (and similarly $CZ_{ij}$, $CCZ_{ijk}$). Then, for any $x \in \{0,1\}^\ell$, ${\langle x|Z_i|x \rangle} = (-1)^{x_i}$, ${\langle x|CZ_{ij}|x \rangle} = (-1)^{x_i x_j}$, ${\langle x|CCZ_{ijk}|x \rangle} = (-1)^{x_i x_j x_k}$. As these gates are diagonal, we can obtain ${\langle x|C'|x \rangle}$ simply by multiplying the expressions ${\langle x|G|x \rangle}$ for different gates $G$ in $C'$. Each gate corresponds to a term in $f_C$ as defined above. So, for all $x \in \{0,1\}^\ell$, ${\langle x|C'|x \rangle} = (-1)^{f_C(x)}$, and hence $${\langle 0|H^{\otimes \ell}C'H^{\otimes \ell}|0 \rangle} = \frac{1}{2^\ell} \sum_{x \in \{0,1\}^\ell} {\langle x|C'|x \rangle} = \frac{1}{2^\ell} \sum_{x \in \{0,1\}^\ell} (-1)^{f_C(x)} = \frac{\operatorname{gap}(f_C)}{2^\ell}.$$ We can remove any Hadamard gates in $C'$ using a trick from [@bremner11]. Imagine we have a Hadamard gate on the $i$’th qubit. 
We form a new overall circuit $C''$ from $C$ by introducing a new ancilla qubit $a$ initialised in the state $\ket{0}$, replacing the Hadamard gate with the gadget $G = H_i CZ_{ai} H_a$, and changing all subsequent gates involving the $i$’th qubit to use qubit $a$ (see Figure \[fig:posts\] for an illustration). Then, by direct calculation, $\bra{0}_i G \ket{0}_a = H / \sqrt{2}$, so ${\langle 0|C''|0 \rangle} = {\langle 0|C|0 \rangle}/\sqrt{2}$. Following this procedure for each of the $h$ Hadamard gates in $C'$, we obtain a circuit on $n = \ell+h$ qubits, where each Hadamard gate corresponds to a product of two variables and relabelling of a qubit as specified in the definition of $f_C$. Taking into account the normalisation factor of $2^{h/2}$, we obtain $${\langle 0|C|0 \rangle} = \frac{1}{2^{h/2+\ell}} \sum_{x \in \{0,1\}^n} (-1)^{f_C(x)} = \frac{\operatorname{gap}(f_C)}{2^{h/2+\ell}}$$ as claimed. $$\Qcircuit @C=1em @R=.7em { \lstick{\dots} & \gate{U} & \gate{H} & \gate{V} & \rstick{\dots} \qw } \;\;\;\;\;\;\;\;\;\; \mapsto \;\;\;\;\;\;\;\;\;\; \raisebox{0.5cm}{ \Qcircuit @C=1em @R=.7em { \lstick{\dots} & \gate{U} & \ctrl{1} & \gate{H} & \rstick{\bra{0}} \qw \\ \lstick{\ket{0}} & \gate{H} & \control \qw & \gate{V} & \rstick{\dots} \qw } }$$ It is easy to check that the formula of Proposition \[prop:gap\] is accurate for the example in Figure \[fig:labcirc\] (where $\operatorname{gap}(f_C)=16$ and ${\langle 0|C|0 \rangle} = 1/2$). The correspondence between circuits and polynomials given in Proposition \[prop:gap\] will be the main tool used throughout this paper. We remark that all the other amplitudes ${\langle x|C|y \rangle}$, $x,y \in \{0,1\}^\ell$, are also related to polynomials. This is because X gates inserted at the start or end of $C$ can be used to map $\ket{0} \mapsto \ket{y}$ or $\ket{x} \mapsto \ket{0}$, X gates can be commuted through Hadamard gates to produce Z gates, and Z gates give linear terms in the corresponding polynomial. 
Thus ${\langle x|C|y \rangle} = \operatorname{gap}(f_C + L_{x,y})/2^{h/2+\ell}$ for some linear function $L_{x,y}$ depending on $x$, $y$. We next make some other simple observations that follow from the circuit-polynomial correspondence. Basic observations ------------------ There can be more than one quantum circuit $C$ corresponding to a given polynomial $f_C$. There are two easy ways to see this. First, as Z, CZ and CCZ gates commute, a consecutive sequence of such gates in $C$ can be reordered arbitrarily while still corresponding to the same polynomial $f_C$. Second, it is sometimes the case that CZ gates and Hadamards are interchangeable. For example, Figure \[fig:samecircuits\] shows two circuits which both correspond to the polynomial $x_1 x_2$. $$\begin{array}{m{1in}m{1in}} \Qcircuit @C=1em @!R { & \ustick{x_1} \qw & \ctrl{1} \qw & \qw & \qw \\ & \ustick{x_2} \qw & \control \qw & \qw & \qw\\ } & \;\;\;\; \Qcircuit @C=1em @R=.7em { & \ustick{x_1} \qw & \gate{H} \qw & \ustick{x_2} \qw & \qw } \end{array}$$ \[obs:iqp\] For every degree-3 polynomial $f:\{0,1\}^n \rightarrow \{0,1\}$ with no constant term, there exists a quantum circuit $C$ on $n$ qubits such that $f = f_C$. Produce the internal part of a circuit $C$ on $n$ qubits by associating a qubit with each variable in $f$, and include a Z, CZ or CCZ gate acting on the qubit(s) corresponding to each degree 1, 2, 3 term (respectively) in $f$. We remark that the class of quantum circuits produced from the procedure in Observation \[obs:iqp\] is precisely the class of IQP circuits [@shepherd09]. An IQP circuit (“Instantaneous Quantum Polynomial-time”) on $n$ qubits is a circuit of the form $H^{\otimes n}DH^{\otimes n}$, where $D$ is a circuit of $\operatorname{poly}(n)$ diagonal gates. It was argued in [@bremner15] that it should be hard to sample classically from the output probability distributions of quantum circuits of the form of Observation \[obs:iqp\], even up to small total variation distance. 
The argument was based on a plausible complexity-theoretic conjecture regarding the complexity of approximately computing $\operatorname{gap}(f)$ for random degree-3 polynomials $f$. \[obs:largew\] There exists a degree-3 polynomial $f:\{0,1\}^n \rightarrow \{0,1\}$ such that every quantum circuit $C$ corresponding to $f$ requires $n$ qubits. Consider the polynomial containing the term $x_i x_j x_k$ for all $1 \le i<j<k \le n$, and no other terms. As there are no degree-2 terms, any corresponding circuit $C$ cannot contain any internal Hadamard gates. Thus $C$ must act on at least $n$ qubits, with one qubit corresponding to each variable. If $f_C:\{0,1\}^n \rightarrow \{0,1\}$ corresponds to a quantum circuit $C$ on $\ell$ qubits, then $|\operatorname{gap}(f_C)| \le 2^{n/2 + \ell/2}$. From Proposition \[prop:gap\], ${\langle 0|C|0 \rangle} = \operatorname{gap}(f_C) / 2^{h/2+\ell}$. As ${\langle 0|C|0 \rangle}$ is a quantum circuit amplitude and hence bounded by 1 in absolute value by unitarity, $|\operatorname{gap}(f_C)| \le 2^{h/2+\ell} = 2^{n/2+\ell/2}$. These observations motivate us to define the [*quantum circuit width*]{} $w(f)$ of a degree-3 polynomial $f$ over ${\mathbb{F}}_2$ as the minimal number of qubits required for any quantum circuit which corresponds to $f$. For example, the family of polynomials $f$ in Observation \[obs:largew\] has $w(f) = n$, whereas the polynomial $f' = x_1 x_2 + x_2 x_3 + \dots + x_{n-1} x_n$ has $w(f') = 1$, corresponding to a circuit whose internal part consists of $n-1$ Hadamard gates applied to one qubit. Computational complexity {#sec:compcomp} ======================== The theory of computational complexity studies the inherent difficulty of computational problems. One of the main goals of this field is to classify problems into complexity classes: sets of problems of comparable difficulty. We now give a brief, informal introduction to this area; see [@papadimitriou94; @arora09] for a full, formal treatment. 
The complexity classes used in this work can all be presented in terms of determining properties of classical or quantum circuits. A classical circuit is a collection of AND, OR and NOT gates connected with wires, which map an input to an output by evaluating the gates in the natural manner. We assume that classical circuits only have one output bit, but potentially many input bits. For each classical circuit $C$, we let $C(x)$ be the output of $C$ given the bit-string $x$ as input. Then we can define the following natural problems: - Circuit SAT: given a classical circuit $C$, determine whether there exists $x$ such that $C(x) = 1$. - Circuit Counting: given a classical circuit $C$, output $|\{x:C(x)=1\}|$. Each of these problems corresponds to a complexity class. NP (“nondeterministic polynomial-time”) is the class of decision problems which reduce to Circuit SAT in polynomial time, while \#P (“sharp-P” or “number-P”) is the class of functional problems which can be expressed as an instance of Circuit Counting. The closely related class P$^{\#\text{P}}$ is the class of functional problems which can be solved in polynomial time, given the ability to solve any problem in the class \#P. For example, the problem of computing $|\{x:C_1(x)=1\}|-|\{x:C_2(x)=1\}|$ for circuits $C_1$, $C_2$ is in P$^{\#\text{P}}$. Here “polynomial time” is short for “in time polynomial in the input size”, which is the key notion of efficiency used in computational complexity. For any complexity class $\mathcal{C}$, a problem $\mathcal{P}$ is said to be $\mathcal{C}$-hard if it is at least as hard as every problem in $\mathcal{C}$: in other words, for every problem in $\mathcal{C}$, there is a polynomial-time reduction from that problem to $\mathcal{P}$. A problem is said to be NP-complete if it is equivalent in difficulty to Circuit SAT, up to polynomial-time reductions. 
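To make the two circuit problems concrete, here is a toy brute-force solver (ours; exponential time in the number of input bits, as one would expect), with a hypothetical three-bit circuit standing in for $C$:

```python
from itertools import product

def circuit(x):
    # A stand-in classical circuit on three input bits (our own toy
    # example), built from AND/OR/NOT: C(x) = (x0 AND x1) OR (NOT x2).
    return int((x[0] and x[1]) or (not x[2]))

inputs = list(product((0, 1), repeat=3))
sat = any(circuit(x) == 1 for x in inputs)   # Circuit SAT: does some x give C(x) = 1?
count = sum(circuit(x) for x in inputs)      # Circuit Counting: |{x : C(x) = 1}|
```

Here `sat` is `True` and `count` is 5 (the four inputs with $x_2=0$, plus $x=110$); the point is only that both problems reduce to a sweep over all $2^n$ inputs, which is exactly the exponential cost that NP and \#P formalise.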
Many important practical problems (such as optimal packing and scheduling, integer programming, and computing ground-state energies of classical physical systems) are known to be NP-complete [@garey79]. The famous P$\stackrel{?}{=}$NP problem effectively asks whether Circuit SAT can be solved in time polynomial in the size of the given circuit. Although it is widely believed that the answer is “no”, a positive answer would have momentous consequences, implying that any NP-complete problem could be solved in polynomial time. Observe that Circuit Counting is at least as hard as Circuit SAT. In fact, it is conjectured that this problem is much harder. Indeed, if there existed an efficient reduction from Circuit Counting to Circuit SAT, then the infinite tower of complexity classes known as the polynomial hierarchy would collapse [@toda91], a consequence similar to P$=$NP and considered almost as unlikely. Many interesting problems in physics and elsewhere are known to be \#P-hard: at least as hard as any problem in \#P. These include computing Ising model partition functions [@jerrum93], evaluating Jones and Tutte polynomials [@jaeger90], and exactly computing the permanent of a 0-1 matrix [@valiant79]. The intuitive reason behind the hardness of these problems is that they involve computing a sum of exponentially many terms. However, surprisingly, in some cases such sums can be computed efficiently (exactly or approximately). Examples include exact computation of Ising model partition functions on planar graphs [@fisher61; @kasteleyn63; @temperley61], approximate computation of the permanent of a non-negative matrix [@jerrum04], and Valiant’s quantum-inspired “holographic algorithms” for combinatorial problems [@valiant08a]. Proving \#P-hardness of a problem provides strong evidence that a clever efficient algorithm like these should not exist for that problem. 
Computational complexity of low-degree polynomials {#sec:phard} -------------------------------------------------- We can use the connection between quantum circuits and polynomials to prove \#P-hardness results. It was shown by Ehrenfeucht and Karpinski [@ehrenfeucht90] that computing the number of zeroes (equivalently, the gap) of a degree-3 polynomial $f$ over ${\mathbb{F}}_2$ is \#P-hard. This implies that using the circuit-polynomial correspondence is unlikely to give an efficient algorithm for simulating all quantum circuits classically by computing quantum circuit amplitudes. However, we can go in the other direction, and use the correspondence to obtain a quantum proof of \#P-hardness of computing the number of zeroes of $f$ (equivalently, computing $\operatorname{gap}(f)$). \[prop:phard\] It is \#P-hard to compute $\operatorname{gap}(f)$ for degree-3 polynomials $f$. We will show that the problem of exactly computing ${\langle 0|C|0 \rangle}$ for an arbitrary quantum circuit $C$ containing Hadamard, Z, CZ, and CCZ gates is \#P-hard. As computing $\operatorname{gap}(f)$ for arbitrary degree-3 polynomials $f$ would allow us to compute ${\langle 0|C|0 \rangle}$ for arbitrary circuits of this form, this will imply the claim. To achieve this, we first show that computing ${\langle 0|C|0 \rangle}$ for an arbitrary quantum circuit $C$ containing Hadamard, X and Toffoli gates is \#P-hard. This can easily be obtained from a similar result of Van den Nest [@vandennest08]; we include a simple direct proof here for completeness. It is a fundamental result in the theory of reversible computation that X and Toffoli gates together with ancillas are universal for classical computation, i.e. 
that given a boolean function $g:\{0,1\}^n \rightarrow \{0,1\}$ computed by a classical circuit $C$ of $\operatorname{poly}(n)$ gates, there is a quantum circuit $C'$ of $\operatorname{poly}(n)$ X and Toffoli gates such that $C'\ket{x}_I\ket{0}_O\ket{0}^{\otimes a}_A = \ket{x}_I\ket{g(x)}_O\ket{0}^{\otimes a}_A$, where the circuit acts on a Hilbert space divided into an $n$-qubit input register I, a 1-qubit output register O, and an $a$-qubit ancilla register A. Then let the circuit $C''$ be defined as follows: 1. Apply an X gate to the O register. 2. Apply Hadamard gates to each qubit in the I and O registers. 3. Apply $C'$. 4. Apply Hadamard gates to each qubit in the I and O registers. 5. Apply an X gate to the O register. If $C''$ is applied to the initial state $\ket{0}$, the state prepared after the second step is $\ket{+}^{\otimes n}_I \ket{-}_O \ket{0}^{\otimes a}_A$. When $C'$ is applied in the third step the second and third registers are left unchanged, and the state of the first register becomes $$\ket{\psi_g} = \frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^n} (-1)^{g(x)} \ket{x}.$$ Thus ${\langle 0|C''|0 \rangle} = {\langle +|^{\otimes n} | \psi_g \rangle} = \frac{1}{2^n} \sum_{x \in \{0,1\}^n} (-1)^{g(x)} = \operatorname{gap}(g)/2^n$. So computing ${\langle 0|C''|0 \rangle}$ allows us to determine $\operatorname{gap}(g)$, and hence the number of zeroes of $g$, for functions $g$ computed by arbitrary polynomial-size classical circuits. This problem is \#P-hard by definition. It remains to show that this same conclusion holds for circuits containing Hadamard, Z, CZ, and CCZ gates. But this is immediate, as Toffoli gates can be produced from CCZ gates by conjugating the target qubit by a Hadamard, and similarly $X = HZH$. The \#P-hardness proof of Ehrenfeucht and Karpinski [@ehrenfeucht90] is not difficult. However, the quantum proof gives a different perspective, and also lends itself to simple generalisations. 
For example: \[prop:3terms\] $\operatorname{gap}(f)$ remains \#P-hard to compute for degree-3 polynomials where each variable appears in at most 3 terms. We show that computing $\operatorname{gap}(f)$ for an arbitrary degree-3 polynomial $f$ reduces to computing $\operatorname{gap}(f')$ for a degree-3 polynomial $f'$ where each variable appears in at most 3 terms. Given $f$, we produce a corresponding quantum circuit $C$. Then, between each pair of gates, we insert two Hadamard gates on each qubit to produce a new circuit $C'$. As $H^2 = I$, ${\langle 0|C'|0 \rangle} = {\langle 0|C|0 \rangle}$, so the corresponding polynomial $f_{C'}$ satisfies $\operatorname{gap}(f_{C'}) = \operatorname{gap}(f_C)$, up to an easily computed scaling factor. But each variable in $f_{C'}$ is only contained within at most 3 terms, because the inserted Hadamard gates effectively relabel all the variables between each pair of terms in the polynomial. A similar circuit simplification to that of Proposition \[prop:3terms\] was previously observed in [@rudolph09]. Proposition \[prop:phard\] shows that we should not hope to find an efficient algorithm for simulating arbitrary quantum circuits by computing the number of zeroes of low-degree polynomials. However, for some classes of polynomials we can indeed obtain efficient algorithms (see below for some examples of this). A natural question is whether we can improve Proposition \[prop:phard\] to show that even computing the number of zeroes of degree-2 polynomials is \#P-hard. It was already shown by Ehrenfeucht and Karpinski [@ehrenfeucht90] that this is unlikely to be the case, as there is a polynomial-time algorithm for this problem. There is an alternative “quantum” way of seeing this result, as relating to ideas around the well-known Gottesman-Knill theorem [@nielsen00], which states that any quantum circuit whose gates are all picked from the Clifford group can be efficiently simulated classically. 
Indeed, for any degree-2 polynomial $f:\{0,1\}^n \rightarrow \{0,1\}$, by Observation \[obs:iqp\] we can write down a quantum circuit $C$ on $n$ qubits containing only Hadamard, Z and CZ gates such that ${\langle 0|C|0 \rangle} = \operatorname{gap}(f)/2^n$. As the gates in $C$ are all members of the Clifford group, the state $C\ket{0}$ is a stabilizer state, as is the state $\ket{0}$. It is known that the inner product between two arbitrary stabilizer states can be computed in time $O(n^3)$ [@aaronson04a; @garcia14; @bravyi16], implying an $O(n^3)$ algorithm for computing $\operatorname{gap}(f)$ for degree-2 polynomials $f:\{0,1\}^n \rightarrow \{0,1\}$. Approximate computation {#sec:approxcomplexity} ----------------------- Given that we have shown exactly computing $\operatorname{gap}(f)$ to be hard, the next natural question is whether we can approximately compute it. We now show that this question is closely connected to [*quantum*]{} computational complexity. The class of decision problems which can be solved efficiently by a quantum computer (i.e. in time polynomial in the size of the input), with success probability $2/3$, is known as BQP [@watrous09]. As with the classical complexity classes discussed previously, BQP can be expressed in terms of circuits; however, the circuits are now quantum. Any polynomial-time quantum computation solving a decision problem can be expressed as applying some quantum circuit $U$, generated from the input in polynomial time, to the initial state $\ket{0}$, then measuring the first qubit, and returning the measurement result. \[prop:gapbqp\] Determining $\operatorname{gap}(f)$ for arbitrary degree-3 polynomials $f:\{0,1\}^n \rightarrow \{0,1\}$ up to absolute error $\frac{1}{3} \cdot 2^{(n+w(f))/2}$ is BQP-hard. We first recall that solving decision problems reduces to computing quantum circuit amplitudes (this is an observation of Knill and Laflamme [@knill98]). 
Assume that we are given some quantum circuit $C$ containing only Hadamard, Z, CZ and CCZ gates, which is applied to the initial state $\ket{0}$, followed by a measurement of the first qubit. We would like to approximately determine the probability that this measurement outputs 1. As the set of gates $\{$Hadamard, CCZ$\}$ is universal for quantum computation [@shi03; @aharonov03], this is sufficient to solve any problem in BQP. So consider the following circuit $C'$: $$\Qcircuit @C=1em @R=.7em { & \multigate{2}{C} & \gate{Z} \qw & \multigate{2}{C^\dag} & \qw \\ & \ghost{C} & \qw & \ghost{C^\dag} & \qw \\ & \ghost{C} & \qw & \ghost{C^\dag} & \qw }$$ Then ${\langle 0|C'|0 \rangle} = {\langle 0|C^\dag Z_1 C|0 \rangle} = \operatorname{tr}Z_1 (C\ket{0}\bra{0}C^\dag)$, which is precisely the difference between the probability that the measurement outputs 0, and the probability that it outputs 1. By the definition of the error bounds in BQP, we have $|{\langle 0|C'|0 \rangle}| \ge 1/3$, so it is sufficient to estimate ${\langle 0|C'|0 \rangle}$ up to absolute error less than $1/3$ to determine whether the answer should be 0 or 1. As discussed in Section \[sec:circpoly\], we can assume that $C'$ begins and ends with Hadamards on every qubit (equivalently, that $C$ begins with Hadamards on every qubit). From Proposition \[prop:gap\], there is a degree-3 polynomial $f_{C'}:\{0,1\}^n \rightarrow \{0,1\}$, where $n=h+\ell$, $h$ is the number of Hadamard gates in the internal part of $C'$ and $\ell$ is the number of qubits on which $C'$ acts, such that ${\langle 0|C'|0 \rangle} = \operatorname{gap}(f_{C'}) / 2^{h/2+\ell}$. So it is sufficient to determine $\operatorname{gap}(f_{C'})$ up to absolute accuracy $\frac{1}{3} \cdot 2^{h/2+\ell} = \frac{1}{3} \cdot 2^{n/2+\ell/2}$ to solve the original decision problem. Observing that $\ell \ge w(f)$ by definition completes the proof. 
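The identity used in this proof, that ${\langle 0|C^\dag Z_1 C|0 \rangle}$ equals the difference between the probabilities of the two measurement outcomes, can be verified numerically. In the sketch below (ours) the particular two-qubit circuit is an arbitrary illustrative choice from our gate set:

```python
import numpy as np

# Gates from our set as matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CZ = np.diag([1.0, 1, 1, -1])

# An arbitrary small example circuit C on 2 qubits.
C = np.kron(H, H) @ CZ @ np.kron(H, I2)

psi = C @ np.array([1.0, 0, 0, 0])      # C|0>
Z1 = np.kron(np.diag([1.0, -1]), I2)    # Z on the first qubit
amp = psi @ Z1 @ psi                    # <0|C'|0> = <0|C^dag Z_1 C|0>

# The same quantity as a difference of measurement probabilities
# for the first qubit (basis ordering: first qubit is most significant).
p0 = np.sum(psi[:2] ** 2)
p1 = np.sum(psi[2:] ** 2)
```

For any unitary $C$ built this way, `amp` agrees with `p0 - p1` up to floating-point error, which is the quantity the decision procedure needs to estimate to accuracy better than $1/3$.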
We have seen that approximately computing $\operatorname{gap}(f)$ up to accuracy $O(2^{(n+w(f))/2})$ is sufficient to simulate arbitrary quantum computations. This is already sufficient to imply the known complexity class inclusion[^3] BQP$\subseteq$P$^{\#\text{P}}$ [@bernstein97; @dawson05], as it is easy to see that $\operatorname{gap}(f)$ can be computed exactly by counting the number of inputs of a circuit which evaluate to 1, and hence is in P$^{\#\text{P}}$. Does the implication go the other way? That is, can we use quantum computation to approximate $\operatorname{gap}(f)$ up to accuracy $O(2^{(n+w(f))/2})$? If so, this would imply that approximating $\operatorname{gap}(f)$ up to this level of accuracy is effectively equivalent[^4] to the complexity class BQP. This would give a new example of a combinatorial problem which characterises the power of quantum computation. Several such examples are known (e.g. [@knill01; @aharonov06; @janzing07; @vandennest08a]), but approximately computing the number of zeroes of degree-3 polynomials would arguably be the simplest yet. For any quantum circuit $C$, the Hadamard test [@aharonov06] can be used to estimate ${\langle 0|C|0 \rangle}$ up to inverse-polynomially small absolute error. So, if we are given a circuit on $\ell$ qubits corresponding to a polynomial $f$, we can estimate $\operatorname{gap}(f)$ up to accuracy $O(2^{n/2+\ell/2})$. If $\ell = w(f)$, we have achieved an approximation which matches the bound of Proposition \[prop:gapbqp\]. However, it is not clear how to efficiently determine a quantum circuit corresponding to $f$ which acts on $w(f)$ qubits. Indeed, even determining $w(f)$ itself could be NP-complete. What is the complexity of computing $w(f)$ for an arbitrary degree-3 polynomial $f:\{0,1\}^n \rightarrow \{0,1\}$? To achieve a good enough level of accuracy in estimating $\operatorname{gap}(f)$, it would be sufficient to find a circuit on $\ell$ qubits such that $\ell = w(f) + O(\log n)$. 
But it is non-obvious how to obtain even this level of accuracy. We can also relate the quantum circuit width to the complexity of [*classical*]{} simulation. \[prop:classical\] Given a degree-3 polynomial $f:\{0,1\}^n \rightarrow \{0,1\}$ and a description of a quantum circuit on $\ell$ qubits corresponding to $f$, $\operatorname{gap}(f)$ can be calculated exactly classically in time $O(2^{2\ell} \operatorname{poly}(n))$. Further, $\operatorname{gap}(f)$ can be approximated up to additive error $\epsilon\,2^n$ with success probability $2/3$ in time $O(\operatorname{poly}(n)/\epsilon^2)$. For any quantum circuit $C$ on $\ell$ qubits containing $m$ gates, ${\langle 0|C|0 \rangle}$ can be calculated in time $O(2^{2\ell} m)$ simply by multiplying out the matrices. If $C$ represents $f$, it can be assumed to contain at most $\operatorname{poly}(n)$ gates, so $m = \operatorname{poly}(n)$. For the second part, we can estimate $|\{x:f(x)=0\}|/2^n$ as the fraction of $s$ uniformly random inputs $x$ on which $f(x)=0$. Each evaluation of $f$ can be computed in time $\operatorname{poly}(n)$. By a standard Chernoff bound argument [@dubhashi09], in order for this estimate to be correct up to absolute error $\epsilon$ with probability $2/3$, it is sufficient to take $s = O(1/\epsilon^2)$. Using the second approach in Proposition \[prop:classical\], we can match the approximation accuracy achieved by an optimal quantum circuit by taking $\epsilon = O(2^{(w(f)-n)/2})$, giving a classical algorithm which runs in time $O(2^{n-w(f)} \operatorname{poly}(n))$. We thus observe that, if either $w(f) \ge n - O(\log n)$ or $w(f) \le O(\log n)$, the speedup we could obtain by using a quantum algorithm to compute $\operatorname{gap}(f)$ cannot be super-polynomial (but apparently for different reasons). In the former case, the approximate classical algorithm from Proposition \[prop:classical\] runs in polynomial time; in the latter case, the exact classical algorithm runs in polynomial time.
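The sampling estimate in the second part of the proposition is easy to sketch. The code below is illustrative only (the constant in the sample count is an arbitrary choice of ours, and the polynomial representation is the term-list convention used in our examples); it estimates $\operatorname{gap}(f)$ to additive error roughly $\epsilon\,2^n$ with high probability:

```python
import random
from itertools import product

def eval_poly(terms, x):
    # f(x) = sum over the terms of the product of the listed variables, mod 2
    return sum(all(x[i] for i in t) for t in terms) % 2

def estimate_gap(terms, n, eps, rng):
    """Estimate gap(f) to additive error ~ eps * 2^n w.h.p., using
    O(1/eps^2) random evaluations of f (each costing poly(n) time)."""
    s = int(10 / eps**2)   # Chernoff: s = O(1/eps^2) samples suffice
    hits = sum(
        +1 if eval_poly(terms, [rng.randint(0, 1) for _ in range(n)]) == 0
        else -1
        for _ in range(s))
    return hits / s * 2**n

terms = [(0, 1, 2), (1, 3), (2,)]   # an arbitrary degree-3 example on n = 4 bits
exact = sum(+1 if eval_poly(terms, x) == 0 else -1
            for x in product((0, 1), repeat=4))
approx = estimate_gap(terms, 4, 0.1, random.Random(0))
print(exact, approx)
```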
These results motivate us to further explore the concept of quantum circuit width. Quantum circuit width {#sec:width} --------------------- We first show that most degree-3 polynomials $f$ have high quantum circuit width, and hence that $\operatorname{gap}(f)$ cannot be approximated significantly more efficiently using this quantum circuit approach than is possible classically. The probability that a random degree-3 polynomial $f:\{0,1\}^n \rightarrow \{0,1\}$ with no constant term has $w(f) \le n-3$ is at most $2^{(-3n+1)/2}$. We count the number of different functions which can correspond to a circuit on $k$ qubits of the form discussed in this work whose internal part contains $n-k$ Hadamards (giving a polynomial on $n$ variables). Break the internal part of the circuit into $n-k+1$ horizontal blocks such that each Hadamard $H_1,\dots,H_h$ begins a block. Then slide (commute) all the Z, CZ, CCZ gates in the circuit to the left until they cannot go any further (i.e. come up against a Hadamard). Then, except for the furthest left-hand block, each such gate acts on the qubit corresponding to the Hadamard which begins its block. Therefore, there are at most $2^{\binom{k}{2} + k + 1} = 2^{k(k+1)/2+1}$ different possibilities for the combination of gates in each block, except the left-hand block, where there are $2^{\binom{k}{3} + \binom{k}{2} + k} = 2^{k(k^2+5)/6}$ possibilities. There are $k^{n-k}$ possibilities for the vertical position of the Hadamards. Overall, we get an upper bound on the number of functions that can be produced which is equal to $$2^{(n-k)(k(k+1)/2+1) + k(k^2+5)/6 + (n-k) \log_2 k}.$$ Take the rough upper bound $\log_2 k \le k/2$, valid for large enough $k$. Then the above quantity is increasing with $k$ and for $k=n-3$ is equal to $2^{(n^3-4n+3)/6}$. On the other hand, there are $2^{\binom{n}{3}+\binom{n}{2}+n} = 2^{(n^3+5n)/6}$ degree-3 polynomials on $n$ variables with no constant term. 
Thus the fraction of polynomials $f$ such that $w(f) \le n-3$ is at most exponentially small in $n$. We next relate the quantum circuit width of a polynomial to a combinatorial parameter of a hypergraph associated with the polynomial. A hypergraph $G = (V,E)$ is defined by a set of vertices $V$ and a set of hyperedges $E$, where a hyperedge is a subset of at least 2 of the vertices. We can associate a degree-3 polynomial $f$ with a hypergraph $G(f)$ by associating each variable with a vertex, and thinking of each term involving at most 3 variables as a hyperedge between at most 3 vertices. A proper $k$-colouring of a hypergraph $G$ is an assignment of colours to vertices, picked from a set of colours of size $k$, such that at least two vertices within each hyperedge are assigned different colours. The chromatic number of a hypergraph, $\chi(G)$, is defined to be the minimal $k$ such that there exists a proper $k$-colouring of $G$. For any degree-3 polynomial $f:\{0,1\}^n \rightarrow \{0,1\}$, $\chi(G(f)) \le 2 w(f)$, and this inequality can be tight. However, there exists a family of polynomials $f:\{0,1\}^n \rightarrow \{0,1\}$, for $n$ even, such that $\chi(G(f)) = 2$ but $w(f)=n/2$. Given a circuit for $f$ using $\ell$ qubits, each pair of variables which are associated with the same qubit but are not adjacent cannot be included in the same term of $f$. We can thus properly colour the vertices of $G(f)$ using at most $2\ell$ colours by associating a pair of colours $(c_i,d_i)$ with each qubit, and allocating colour $c_i$ (resp. $d_i$) to those vertices which occur on line $i$ at odd (resp. even) times. Tightness follows from the function $f(x) = x_1 x_2 + x_2 x_3 + \dots + x_{n-1} x_n$, which has $w(f)=1$. As $G(f)$ is a path on $n$ vertices, $\chi(G(f))=2$. For the second part, consider the polynomial $f(x) = x_1 x_2 + x_3 x_4 + \dots + x_{n-1} x_n$. 
The corresponding graph $G(f)$ consists of $n/2$ disjoint edges and hence can be properly coloured with 2 colours. Polynomials and simulation of quantum circuits {#sec:polysim} ============================================== We have seen that, using the construction of Proposition \[prop:gapbqp\], in order to simulate a quantum circuit – i.e. to determine the probability that, at the end of the circuit, the result of measuring the first qubit would be 1 – it is sufficient to compute $\operatorname{gap}(f)$ for a related function $f$. One can use this idea to easily obtain various simulation results for classes of quantum circuits. First, as discussed in Section \[sec:approxcomplexity\], any circuit containing only Hadamard, Z and CZ gates can be simulated efficiently classically using the Gottesman-Knill theorem [@nielsen00]. This result can be generalised to circuits containing a small number of CCZ gates as follows. Let $S$ be a hitting set for the collection of degree-3 terms of $f:\{0,1\}^n \rightarrow \{0,1\}$ (in other words, a set of variables such that each degree-3 term contains at least one element of $S$). Then, given $f$ and $S$, $\operatorname{gap}(f)$ can be computed in time $O(2^{|S|} \operatorname{poly}(n))$. For any variable $x_i$, let $f_{x_i\leftarrow z}$ denote the function obtained from $f$ by fixing the value of $x_i$ to $z$. Then it is easy to see that $\operatorname{gap}(f) = \operatorname{gap}(f_{x_i\leftarrow 0}) + \operatorname{gap}(f_{x_i\leftarrow 1})$. Applying this recursively, for any set $S$ of variables, $\operatorname{gap}(f)$ can be computed by summing the gaps of the $2^{|S|}$ functions obtained by fixing each of the variables in $S$ to either 0 or 1. If we choose $S$ to include at least one variable from each of the degree-3 terms in $f$, each new polynomial produced has degree at most 2, and hence has gap computable in time $O(n^3)$ [@ehrenfeucht90; @aaronson04a]. 
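The recursion used in this proof can be sketched as follows. For brevity the sketch brute-forces the residual degree-2 polynomials rather than implementing the $O(n^3)$ algorithm cited above, so it illustrates the structure of the argument, not its running time; the representation and names are ours:

```python
from itertools import product

def restrict(terms, assignment):
    """Fix some variables of f (a list of index-tuples over F_2) to 0/1.
    Returns the terms of the restricted polynomial; () denotes the constant 1."""
    out = []
    for term in terms:
        if any(assignment.get(i) == 0 for i in term):
            continue                                   # term vanishes
        out.append(tuple(i for i in term if i not in assignment))
    return out

def gap_brute(terms, free_vars):
    # Brute force; in the Proposition this step is replaced by the O(n^3)
    # algorithm for degree-2 polynomials, which we omit here.
    g = 0
    for bits in product((0, 1), repeat=len(free_vars)):
        x = dict(zip(free_vars, bits))
        val = sum(all(x[i] for i in t) for t in terms) % 2
        g += 1 if val == 0 else -1
    return g

def gap_via_hitting_set(terms, n, hitting_set):
    """gap(f) = sum over the 2^|S| assignments to S of the gap of the
    restricted (degree <= 2) polynomial."""
    others = [i for i in range(n) if i not in hitting_set]
    total = 0
    for bits in product((0, 1), repeat=len(hitting_set)):
        sub = restrict(terms, dict(zip(hitting_set, bits)))
        assert max((len(t) for t in sub), default=0) <= 2
        total += gap_brute(sub, others)
    return total

# f = x0*x1*x2 + x0*x3*x4 + x1*x2 ; S = {0} hits every degree-3 term.
terms = [(0, 1, 2), (0, 3, 4), (1, 2)]
print(gap_via_hitting_set(terms, 5, [0]))  # prints 16
```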
Observe that, if $f$ contains $k$ degree-3 terms, there is always a hitting set containing $k$ elements (just by taking one variable from each term). More generally, we would like to find a hitting set of minimal size $h(f)$. This is an NP-complete problem [@garey79], but luckily an approximation $h'(f) \le 3h(f)$ can be found in polynomial time (approximating $h(f)$ any better than this is NP-hard [@khot08], assuming the Unique Games Conjecture from complexity theory). We therefore have that $\operatorname{gap}(f)$ can be computed in time $2^{O(h(f))} \operatorname{poly}(n)$. The construction of Proposition \[prop:gapbqp\] produces a circuit $C'$ from any circuit $C$ on $\ell$ qubits whose corresponding polynomial $f_{C'}$ satisfies $h(f_{C'}) \le 2 h(f_C)$. Therefore, any polynomial-size circuit $C$ can be simulated in time $2^{O(h(f_C))} \operatorname{poly}(\ell)$. It was already shown by Aaronson and Gottesman that circuits on $\ell$ qubits containing $k$ non-Clifford gates can be simulated in time $2^{O(k)} \operatorname{poly}(\ell)$ [@aaronson04a], and more recent work has improved the constant hidden in the $O(k)$ term for circuits where the only non-Clifford gate is the T gate [@bravyi16; @bravyi16a]. However, the result here is somewhat more general in that there exist circuits with many CCZ gates whose corresponding polynomial has a small hitting set. For example, Figure \[fig:treew\] illustrates a circuit on $\ell$ qubits containing CCZ gates from the first qubit to every other pair of qubits; this circuit has $\binom{\ell-1}{2}$ gates but a hitting set of size 1. Also observe that this simulation does not seem to follow immediately from the results of Markov and Shi [@markov08] on simulating quantum circuits by tensor contraction in time exponential in the tree-width of the circuit. Indeed, there exist circuits that contain only Clifford gates but have arbitrarily high tree-width. 
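One simple way to realize a factor-3 approximation of the kind quoted above is the standard greedy rule for covering a 3-uniform hypergraph: repeatedly pick an uncovered degree-3 term and add all three of its variables. The chosen terms are pairwise disjoint, so any hitting set contains at least one variable of each, giving $|S| \le 3h(f)$. A sketch of ours (not necessarily the algorithm of the cited works), applied to the circuit of Figure \[fig:treew\]:

```python
def greedy_hitting_set(terms):
    """Greedy 3-approximation for a minimum hitting set of the degree-3
    terms: take every uncovered degree-3 term in turn and add all three
    of its variables."""
    s = set()
    for term in terms:
        if len(term) == 3 and not s.intersection(term):
            s.update(term)
    return s

# CCZ gates from qubit 0 to every other pair of qubits give the terms
# x0*xi*xj; one variable (x0) hits them all (h(f) = 1), and the greedy
# set found below has size 3, within the factor-3 guarantee.
ell = 6
terms = [(0, i, j) for i in range(1, ell) for j in range(i + 1, ell)]
s = greedy_hitting_set(terms)
print(sorted(s))  # prints [0, 1, 2]
```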
$$\Qcircuit @C=1.2em @R=1.2em { & \ctrl{2} & \ctrl{3} & \ctrl{4} & \qw & \dots & & \ctrl{7} \qw & \qw \\ & \control \qw & \control \qw & \control \qw & \qw & \dots & & \qw & \qw\\ & \control \qw & \qw & \qw & \qw & \dots & & \qw & \qw \\ & \qw & \control \qw & \qw & \qw & \dots & & \qw & \qw \\ & \qw & \qw & \control \qw & \qw & \dots & & \qw & \qw \\ & & \vdots & & & & \vdots \\ & \qw & \qw & \qw & \qw & \dots & & \control \qw & \qw \\ & \qw & \qw & \qw & \qw & \dots & & \control \qw & \qw }$$ Simulation by linear transformations ------------------------------------ In order to calculate $\operatorname{gap}(f)$ more efficiently, we can attempt to transform $f$ into a polynomial which is simpler in some sense. One way of doing this is to apply a linear transformation to $f$. The following result is well-known in the theory of error-correcting codes [@macwilliams83]; we include the simple proof for completeness. For any degree-3 polynomial $f:\{0,1\}^n \rightarrow \{0,1\}$, and any nonsingular linear transformation $L \in GL_n({\mathbb{F}}_2)$, let $f^L$ be the polynomial $f^L(x) = f(Lx)$. Then $\deg(f^L) = 3$ and $\operatorname{gap}(f^L) = \operatorname{gap}(f)$. To produce $f^L$ from $f$, we can replace each term $x_i x_j x_k$ with a term $(Lx)_i (Lx)_j (Lx)_k$ (and similarly for the terms dependent on 1 or 2 variables). As $(Lx)_i$ is a linear function of $x$ over ${\mathbb{F}}_2$, and similarly for $j$, $k$, the product of these functions is a polynomial of degree at most 3. For the second part, as $L$ is nonsingular, there is a one-to-one mapping between the set $\{x:f(x) = 0\}$ and the set $\{x:f^L(x) = 0\}$, so $\operatorname{gap}(f^L) = \operatorname{gap}(f)$. In fact, the group $GL_n({\mathbb{F}}_2)$ is known to be the [*largest*]{} group of transformations which preserves polynomial degree [@macwilliams83]. 
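The gap invariance in this result is easily verified numerically. The sketch below (our illustration) draws a random $L \in GL_n({\mathbb{F}}_2)$ by rejection sampling, using Gaussian elimination mod 2 to test invertibility, and checks $\operatorname{gap}(f^L) = \operatorname{gap}(f)$ by brute force:

```python
import random
from itertools import product

def eval_poly(terms, x):
    # polynomial over F_2 given as a list of index-tuples (our convention)
    return sum(all(x[i] for i in t) for t in terms) % 2

def gap(terms, n):
    return sum(+1 if eval_poly(terms, x) == 0 else -1
               for x in product((0, 1), repeat=n))

def random_invertible(n, rng):
    """Random L in GL_n(F_2), by rejection sampling: resample until
    Gaussian elimination mod 2 succeeds."""
    while True:
        m = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
        a = [row[:] for row in m]
        ok = True
        for c in range(n):
            piv = next((r for r in range(c, n) if a[r][c]), None)
            if piv is None:
                ok = False
                break
            a[c], a[piv] = a[piv], a[c]
            for r in range(n):
                if r != c and a[r][c]:
                    a[r] = [u ^ v for u, v in zip(a[r], a[c])]
        if ok:
            return m

rng = random.Random(1)
n = 5
terms = [(0, 1, 2), (2, 3, 4), (1, 4), (3,)]   # an arbitrary degree-3 example
L = random_invertible(n, rng)

def f_L(x):                      # f^L(x) = f(Lx), with Lx computed mod 2
    Lx = [sum(L[i][j] & x[j] for j in range(n)) % 2 for i in range(n)]
    return eval_poly(terms, Lx)

gap_fL = sum(+1 if f_L(x) == 0 else -1 for x in product((0, 1), repeat=n))
print(gap(terms, n), gap_fL)     # the two gaps coincide
```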
In some cases, a linear transformation can completely change a function’s quantum circuit width and hence the efficiency with which its gap can be computed using the exact algorithm of Proposition \[prop:classical\]. As a very simple example, it is easy to show that the polynomial $x_1 + \dots + x_n$ has quantum circuit width $n$, but following a linear transformation that maps $x_1 + \dots + x_n \mapsto x_1$, the resulting polynomial $x_1$ has quantum circuit width 1. Although we do not know a general way of minimising the quantum circuit width of a function by applying a linear transformation, a simpler approach is to minimise the number of variables on which the function depends. Given a polynomial $f$ which depends on $v$ variables, $\operatorname{gap}(f)$ can be computed exactly in time $O(2^v \operatorname{poly}(v))$ simply by evaluating $f$ on each of the $2^v$ possible assignments to the variables. It has been shown by Carlini [@carlini06] (see also Appendix B of [@kayal11]) that the linear transformation $L$ which minimises the number of variables in $f^L$ can be computed in polynomial time. We therefore obtain the following corollary: Let $C$ be a polynomial-size quantum circuit on $\ell$ qubits such that there exists a linear transformation $L$ such that $f_C^L$ depends on $v$ variables. Then there is a classical algorithm which computes ${\langle 0|C|0 \rangle}$ exactly in time $O(2^v \operatorname{poly}(\ell))$. In particular, if there exists $L$ such that $f_C^L$ depends on $O(\log \ell)$ variables, we obtain a polynomial-time classical simulation of $C$. Conclusions {#sec:conclusions} =========== In this work we have investigated a correspondence between quantum circuits and low-degree polynomials over finite fields, and have shown that by exploiting this correspondence we can obtain classical hardness results, as well as ideas for classical algorithms that simulate quantum circuits. 
There seem to be many interesting directions in which to further explore this area. For example, as discussed in Section \[sec:approxcomplexity\], what is the complexity of computing or approximating the quantum circuit width $w(f)$? Is it related to other measures of complexity of boolean functions? Low-degree polynomials over ${\mathbb{F}}_2$ are equivalent to Reed-Muller codes [@macwilliams83] – can ideas from classical coding theory be applied to understand quantum circuits? And finally, can any other useful simulation techniques be developed by taking this perspective – perhaps for other specific classes of quantum circuits? Acknowledgements {#acknowledgements .unnumbered} ---------------- This work was supported by an EPSRC Early Career Fellowship (EP/L021005/1). Some of this work was carried out while the author was at the University of Cambridge. I would like to thank Mick Bremner and Dan Shepherd for discussions on this topic over the last few years, and Scott Aaronson, Miriam Backens and Richard Jozsa for helpful comments on a previous version. Special thanks to Sophie for arriving safely, and providing many helpful distractions from completing this work. [^1]: School of Mathematics, University of Bristol, UK; [ashley.montanaro@bristol.ac.uk]{}. [^2]: In fact, the Z and CZ gates are not necessary, as they can be produced from CCZ gates together with the use of ancillas, but it will be convenient to include them. [^3]: In fact, this argument also gives an alternative proof of the tighter complexity class inclusion BQP$\subseteq$AWPP, due to Fortnow and Rogers [@fortnow98]. [^4]: Technically, equivalent to the complexity class PromiseBQP [@janzing07]: the class of problems which reduce to determining whether the acceptance probability of a quantum computation is greater than $2/3$ or less than $1/3$, given the promise that exactly one of these is the case.
--- abstract: 'We present the essential experimental steps of our all-optical approach to prepare a double-degenerate Fermi-Fermi mixture of [$^6$Li]{} and [$^{40}$K]{} atoms, which then serves as a starting point for molecule formation. We first describe the optimized trap loading procedures, the internal-state preparation of the sample, and the combined evaporative and sympathetic cooling process. We then discuss the preparation of the sample near an interspecies [Feshbach resonance]{}, and we demonstrate the formation of heteronuclear molecules by a magnetic field ramp across the resonance.' author: - 'F.M. Spiegelhalder' - 'A. Trenkwalder' - 'D. Naik' - 'G. Kerner' - 'E. Wille' - 'G. Hendl' - 'F. Schreck' - 'R. Grimm' title: | All-optical production of a degenerate mixture of [$^6$Li]{} and [$^{40}$K]{}\ and creation of heteronuclear molecules --- Introduction ============ The groundbreaking achievements in experiments with ultracold Fermi gases [@Inguscio2006ufg; @Giorgini2008tou] have opened up unprecedented possibilities to study new regimes of strongly interacting quantum matter. Ultracold gases represent well-controllable model systems for the exploration of many-body regimes in a way not possible in conventional condensed-matter systems [@Bloch2008mbp]. A new frontier in the field is currently being explored in experiments on ultracold Fermi-Fermi mixtures of [$^6$Li]{} and [$^{40}$K]{} atoms [@Taglieber2008qdt; @Wille2008eau; @Voigt2009uhf; @Spiegelhalder2009cso; @Tiecke2009sfb]. Because of the mass imbalance and the possibility to apply species-specific optical potentials, such systems promise manifold intriguing applications both in many-body physics [@Paananen2006pia; @Iskin2006tsf; @Iskin2007sai; @Petrov2007cpo; @Iskin2008tif; @Baranov2008spb; @Bausmerth2009ccl; @Nishida2009ipw; @Wang2009qpd; @Mora2009gso] and few-body physics [@Petrov2005dmi; @Nishida2009cie; @Levinsen2009ads]. 
To prepare degenerate Fermi gases, all-optical approaches have proven to be simple and robust and they facilitate highly efficient evaporative cooling. Therefore they are routinely applied in many present experiments; see Ref. [@Inguscio2006ufg] for a review of earlier work and Refs. [@Fuchs2007mbe; @Inada2008cta; @Ottenstein2008cso; @Huckans2009tbr] for more recent examples. Spin mixtures of $^6$Li atoms near a broad Feshbach resonance are particularly well suited for this cooling approach because of their exceptional collision properties, which offer extremely large cross sections for elastic collisions in combination with very weak inelastic decay. This favorable situation motivates the general idea of using the strongly interacting [$^6$Li]{} gas as a cooling agent for sympathetic cooling of another species. Following this idea in Ref. [@Spiegelhalder2009cso], we recently demonstrated the sympathetic cooling of [$^{40}$K]{} atoms by an evaporatively cooled, optically trapped spin mixture of [$^6$Li]{}, reaching the double-degenerate regime. In this Article, we first present more details on our all-optical approach of preparing a double-degenerate Fermi-Fermi mixture of [$^6$Li]{} and [$^{40}$K]{}. We then show new results related to interactions and molecule formation near interspecies [Feshbach resonance]{}s. In Sec. \[sec:trapping\], we discuss our dual-species cooling and trapping setup and the special loading procedures used for the optical traps. In Sec. \[sec:SpinRelax\], we present an important preparation step where spin relaxation in the mixture brings the K atoms into their lowest internal state. In Sec. \[sec:Evaporation\], we describe the combined evaporative and sympathetic cooling process. In Sec. \[sec:StatePrep\], we show how the mixture can be prepared near interspecies [Feshbach resonance]{}s. In Sec. \[sec:Molecules\], we finally demonstrate the creation of ultracold heteronuclear Fermi-Fermi molecules by Feshbach association methods. 
Dual-species cooling and trapping setup and procedures {#sec:trapping} ====================================================== ![Electronic ground state energies of [$^6$Li]{} and [$^{40}$K]{} versus magnetic field.[]{data-label="fig:LevelScheme"}](Fig1LevelScheme.png){width="\columnwidth"} Here, we outline the basic concept of our dual-species setup (Sec. \[sec:setup\]), and we present the special procedures applied to prepare the optically trapped mixture. In Sec. \[sec:MOT\] we describe how we operate a dual-species magneto-optical trap (MOT). In Sec. \[sec:ODT\] we present the optical dipole traps (ODT) used in the experiments. In Sec. \[sec:transfer\] we discuss the special loading procedure for the ODT. The whole scheme is optimized for a large number of [$^6$Li]{} atoms, as this species is used as the cooling agent for sympathetic cooling of [$^{40}$K]{} into degeneracy [@Spiegelhalder2009cso]. Figure \[fig:LevelScheme\] shows the atomic ground state energy structure of [$^6$Li]{} and [$^{40}$K]{}. We label the energy levels Li$|i\rangle$ and K$|j\rangle$, counting the states with rising energy. The hyperfine splitting of [$^6$Li]{} is 228.2MHz. For [$^{40}$K]{}, the hyperfine structure is inverted and the splitting amounts to 1285.8MHz [@Arimondo1977edo]. For the low-lying states with $i\leq3$ and $j\leq10$, the projection quantum numbers are given by $m_\mathrm{Li}=-i+3/2$ and $m_\mathrm{K}=j-11/2$. Experimental setup {#sec:setup} ------------------ For the cooling and trapping of Li and K we apply standard laser cooling and trapping techniques [@Metcalf1999lca] combining a Zeeman-slowed atomic beam and a dual-species MOT for initial collection of atoms in the vacuum chamber. A detailed description of the experimental setup and the laser systems can be found in Ref. [@Wille2009poa]. A dual-species oven, which is connected to the main chamber via a differential pumping section, delivers a well-collimated atomic beam. 
We operate the oven with isotopically enriched samples containing 80% of [$^6$Li]{} and 7% of [$^{40}$K]{}. The Zeeman slower can cool both species individually with the respective settings of the magnetic field gradients. The central element of our vacuum chamber is a glass cell that allows for very good optical access. We achieve excellent vacuum conditions with a pressure on the order of 10$^{-11}$mbar. For both species we use diode laser systems with a grating-stabilized master oscillator in combination with tapered amplifiers. The Li (K) laser system provides 11mW (12mW) per MOT beam and 80mW (100mW) for the Zeeman slower beam. Figure \[fig:OptPump\] shows a schematic drawing of the atomic energy levels and optical transitions used for cooling and trapping of Li and K. The Li MOT laser beams contain two frequency components of equal power, tuned to the cooling ($F=3/2\rightarrow F'=5/2$) and repumping ($F=1/2\rightarrow F'=3/2$) transitions. For K the cooling ($F=9/2\rightarrow F'=11/2$) to repumper ($F=7/2\rightarrow F'=9/2$) ratio is three to two. ![Scheme of the atomic energy levels and transitions used to cool and trap Li and K. Also shown are the transitions used for hyperfine (hf) and Zeeman pumping of K.[]{data-label="fig:OptPump"}](Fig2OptPump.png){width="\columnwidth"} MOT loading {#sec:MOT} ----------- The initial collection and cooling of [$^6$Li]{} and [$^{40}$K]{} is achieved by conventional laser-cooling and trapping techniques. As loading of both species requires different settings of the Zeeman slower magnetic field, we use a sequential MOT-loading scheme. The basic idea is to first load Li and then add K in the presence of the Li MOT. The Li MOT is operated with a field gradient of 26G/cm along the symmetry axis of the field coils and a laser detuning of $-27$MHz for both the cooling and repumping transitions. After a loading time of about 15s, a few $10^9$ Li atoms are accumulated in the MOT.
At this point we increase the magnetic field gradient to 38G/cm, where the K MOT works optimally. In 5s, about $10^7$ K atoms are added to the trap. The K MOT is operated with a laser detuning of $-34$MHz. During the K loading phase we operate the Li MOT with a relatively large detuning of $-31$MHz in order to compensate for the effect of the higher magnetic field gradient on volume and density. This avoids increased losses by inelastic interspecies collisions, enabling the efficient sequential loading of both species. In order to reduce shot-to-shot fluctuations of the number of atoms in the trap, we control the Li and K MOT loading by monitoring their fluorescence independently. When the fluorescence of the Li MOT reaches its threshold value, Li loading is stopped and loading of the K MOT is initiated. Once the K MOT fluorescence reaches its threshold, the K loading is turned off. At this point the ODT is ramped on in 100ms, and the magnetic fields of the Zeeman slower are ramped to zero in 10ms. Optical dipole trapping schemes {#sec:ODT} ------------------------------- For further storage and cooling of the atomic cloud, we use a trapping scheme that employs the optical dipole force of an intense infrared laser beam [@Grimm2000odt] and the magnetic force in the curvature of the magnetic field [@Jochim2003bec]. The latter becomes important when the optical trap is operated with low laser power. A schematic drawing of this hybrid trapping scheme is shown in Fig. \[fig:Traps\]. ![Schematic view of the optical trapping beam (beam 1) and the coils for the bias field and magnetic field curvature (shaded areas). Optionally, a second beam (beam 2) can be used for additional axial confinement.[]{data-label="fig:Traps"}](Fig3Traps.png){width="0.7\columnwidth"} The principal ODT is formed by a single beam (beam 1) delivered by a 200-W fiber laser that emits light at a central wavelength of 1070nm (IPG YLR-200SM-LP). 
The beam is focused down to a waist of 38$\mu$m at the center of the MOT. During loading, the trap is operated with an optical power of 96W, which results in a depth of 2.6mK for Li. For K the trap depth is larger by a factor of 2.1 and thus amounts to about 5.5mK. Two sets of magnetic field coils are used in our setup to control the bias and curvature field independently; the coil setup is described in more detail in Appendix \[apx:magnets\]. For small bias fields the magnetic confinement is very small in the axial direction; here additional axial confinement can be obtained from another infrared beam (beam 2) delivered by a 5-W fiber laser with a central wavelength of 1065nm (IPG YLP-5-LP). This single beam is focused to a waist of 97$\mu$m and intersects beam 1 at an angle of $53^{\circ}$. Dipole trap loading {#sec:transfer} ------------------- Loading cold atoms of a single species from a MOT into a dipole trap is a standard procedure in many experiments. Sub-Doppler cooling and MOT compression are common methods to enhance transfer into the optical trap. The optimized loading of two species, however, needs special procedures. Here we describe the sequential loading scheme that gives us excellent starting conditions for the evaporation of Li and K in a common optical trap. A schematic illustration of the loading and transfer sequence is shown in Fig. \[fig:Sequence\]. We found that an optimum is achieved by first transferring K into the optical trap while keeping Li stored in the large-volume, low-density MOT and then performing the Li transfer. After switching on the ODT, in a first step the K MOT is compressed by ramping up the magnetic field gradient within 50ms to 96G/cm and bringing the frequencies of the K lasers closer to resonance, to a detuning of a few MHz, while the intensity is lowered to 70%. At the same time the detuning of the Li MOT is increased to $-47$MHz to avoid compression of the Li MOT.
At this point the K MOT light is extinguished and the K atoms are confined in the dipole trap while Li is stored practically undisturbed in a MOT at a reduced magnetic field gradient of 64G/cm. With the K MOT beams off, untrapped atoms are allowed to escape for 50ms. Potassium has ten Zeeman sublevels in the lowest hyperfine ground state; see Fig. \[fig:LevelScheme\]. In order to produce a spin-polarized sample of K in its lowest internal state, we apply an optical pumping scheme that not only transfers the atoms to the lower hyperfine state, but also pumps them to the lowest $m_F$ state. For optical pumping the quadrupole field is switched off for 2ms and only a small guiding field is kept on. Parallel to the field we shine in a $\sigma^-$-polarized laser beam for 10$\mu$s, which optically pumps the K atoms into state $|1\rangle$. The optical pumping beam contains two frequency components, one for Zeeman pumping tuned to the ($F=9/2\rightarrow F'=9/2$) transition and another one for hyperfine pumping tuned to the ($F=7/2\rightarrow F'=9/2$) transition as shown in Fig. \[fig:OptPump\]. Each frequency component has about 50 times the saturation intensity. During the optical pumping stage, the cloud of Li atoms remains trapped in an optical molasses and can be recaptured without significant losses. ![Illustration of the loading, transfer and measurement timing sequence. CMOT stands for compressed MOT.[]{data-label="fig:Sequence"}](Fig4Sequence.png){width="0.8\columnwidth"} The high-power trapping laser induces a large light-shift on the optical cooling and pumping transitions. Potassium has two optical transitions between the excited 4$^2$P$_{3/2}$ state and the 5$^2$S$_{1/2}$ and 3$^2$D$_{5/2}$ states with wavelengths of 1252nm and 1177nm, respectively. At high intensities of the ODT these two transitions shift the 4$^2$P$_{3/2}$ level by several hundred MHz. Therefore optical pumping cannot be performed in the trap and we have to switch it off for a short time.
After the 10$\mu$s off-time of the ODT needed for the optical pumping, essentially all K atoms are recaptured into the ODT. At this point we have a sample of a few 10$^4$ polarized [$^{40}$K]{} atoms in the ODT, surrounded by a magneto-optically trapped cloud of [$^6$Li]{} atoms. We finally apply a compressed MOT stage for Li in order to efficiently load this species into the dipole trap. For this, the quadrupole field is ramped back up to 64G/cm and the MOT lasers are operated at a very small detuning of $-3$MHz from resonance while the power is lowered to 180$\mu$W per beam for a duration of 15ms. Hyperfine pumping of Li to the lower state is performed by switching the repumping laser off during the last 50$\mu$s of the pulse. With this sequence we obtain a few 10$^5$ Li atoms in the lowest two spin states in the ODT at a temperature of about 300$\mu$K. Spin relaxation {#sec:SpinRelax} =============== A Li$|i\rangle$K$|j\rangle$ mixture can undergo rapid decay via spin relaxation if exoergic two-body collisions can take place that preserve the total projection quantum number $m_{\rm tot}=m_{\rm Li}+ m_{\rm K}=-i+j-4$. In such a process, $$\mathrm{Li}|i\rangle+\mathrm{K}|j\rangle\rightarrow\mathrm{Li}|i-1\rangle+\mathrm{K}|j-1\rangle+E_r,$$ the energy $E_r$ is released. Whenever one of the species is in the absolute ground state, and the other one is in a low-lying state ($i=1$ and $j\leq10$ or $j=1$ and $i\leq3$), spin relaxation is strongly suppressed [@Simoni2003mco]. Since optical Zeeman pumping does not lead to a perfect transfer of all K atoms into the lowest spin state, we exploit spin relaxation to fully spin polarize K into state [$|1\rangle$]{}. We investigated the conditions under which spin relaxation can be used for this purpose. In these measurements we apply only hyperfine pumping, but no Zeeman pumping. 
We start with an almost equal mixture of Li in its lowest two hyperfine states, [$|1\rangle$]{} and [$|2\rangle$]{}, trapped together with a population of K in all Zeeman substates $j\leqslant10$. ![Magnetic field dependence of spin relaxation. The numbers of atoms in different spin states are measured by state-selective absorption imaging after a storage time of 500ms in the dipole trap. The filled (open) squares give the Li[$|2\rangle$]{} ([$|1\rangle$]{}) atom number, the filled circles are K[$|1\rangle$]{}. The two pronounced features that are visible at 40G and 207G are fitted by Lorentzians to determine their positions and widths.[]{data-label="fig:SpinRelaxBfield"}](Fig5SpinRelaxBfield.png){width="\columnwidth"} We investigate the magnetic field dependence of the spin relaxation by holding the sample for 500ms at a variable magnetic field. The trap is operated under the same conditions as during trap loading, i.e. with a trap depth of 2.6mK for Li and 5.5mK for K. The atom numbers are measured using spin-selective absorption images, which are always taken at a bias field of 1190G [@endnote:HighFieldImaging]. Figure \[fig:SpinRelaxBfield\] shows the resulting atom numbers of Li in states [$|1\rangle$]{} (open squares) and [$|2\rangle$]{} (filled squares) as well as K in state [$|1\rangle$]{} (filled circles). Two distinct peaks in the K[$|1\rangle$]{} atom number are visible at 40G and 207G and coincide with dips in the Li[$|2\rangle$]{} atom number. These features are fitted with Lorentzians to determine their positions and widths. The release energy $E_r$ at 40G (207G) corresponds to 2.1mK (5.8mK). For an inelastic collision between two atoms with different masses, the resulting kinetic energy contributions are inversely proportional to the mass. For the [$^6$Li]{}-[$^{40}$K]{} combination, 87% of the released energy is transferred to the lighter Li atoms (mass $M_\mathrm{Li}$) and only 13% to the heavier K (mass $M_\mathrm{K}$). 
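The selection rule and the energy partition quoted above can be checked with a few lines (a numerical illustration of ours, not part of the experimental procedure; the masses are approximated by the mass numbers 6 and 40, and energies are quoted in the temperature units of mK used in the text):

```python
from fractions import Fraction

# Projection quantum numbers from Sec. [sec:trapping]:
m_Li = lambda i: Fraction(3, 2) - i          # m_Li = -i + 3/2   (i <= 3)
m_K = lambda j: j - Fraction(11, 2)          # m_K  = j - 11/2   (j <= 10)

# The relaxation Li|i> + K|j> -> Li|i-1> + K|j-1> conserves m_tot:
for i, j in [(2, 3), (2, 10), (3, 5)]:
    assert m_Li(i) + m_K(j) == m_Li(i - 1) + m_K(j - 1)

# Energy partition: the kinetic energies are inversely proportional to
# the masses (approximated here by the mass numbers 6 and 40).
M_Li, M_K = 6.0, 40.0
frac_Li = M_K / (M_Li + M_K)                 # ~0.87 of E_r goes to Li
frac_K = M_Li / (M_Li + M_K)                 # ~0.13 of E_r goes to K
for E_r in (2.1, 5.8):                       # release energies at 40G / 207G, in mK
    print(f"E_r = {E_r} mK: Li share {frac_Li * E_r:.2f} mK, "
          f"K share {frac_K * E_r:.2f} mK")
```

Even for the larger release energy of 5.8mK, the K share stays well below the quoted K trap depth of about 5.5mK, while the Li share exceeds the 2.6mK Li trap depth.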
A necessary condition for a K atom to remain confined is a trap depth $U_\mathrm{K}$ satisfying $$U_\mathrm{K} > \frac{M_\mathrm{Li}}{M_\mathrm{K} + M_\mathrm{Li}} E_r,$$ and analogously for $U_\mathrm{Li}$. This mass factor, along with the roughly two times larger trap depth for K, explains why we observe loss of Li atoms during the spin relaxation while K stays trapped. Furthermore, a K atom in a Zeeman level higher than [$|2\rangle$]{} needs multiple collisions with Li[$|2\rangle$]{} in order to fully polarize. This explains why many more Li[$|2\rangle$]{} atoms are lost than K[$|1\rangle$]{} atoms are gained during this process. ![Interpretation of the 207G spin relaxation feature in terms of Feshbach resonances. The dots show the calculated positions of [Feshbach resonance]{}s between Li[$|2\rangle$]{} and K$|2\leq j\leq10\rangle$ [@Tiecke2009fri]. Also plotted is the Lorentzian fit to the Li[$|2\rangle$]{} loss feature from Fig. \[fig:SpinRelaxBfield\] for comparison.[]{data-label="fig:Li2KsWave"}](Fig6LiToKsWave.png){width="0.975\columnwidth"} We interpret our data by comparing the positions of the two spin relaxation features with the locations of known interspecies [Feshbach resonance]{}s, since we expect enhanced inelastic loss close to a [Feshbach resonance]{} [@Chin2009FBR]. As Fig. \[fig:Li2KsWave\] shows, there is a series of $s$-wave [Feshbach resonance]{}s between Li[$|2\rangle$]{} and K$|2\leq j\leq10\rangle$ near the 207G feature. The spread of these [Feshbach resonance]{}s matches the width of the observed spin relaxation feature. Note that the experiment is performed at relatively high temperature, which causes considerable broadening of the [Feshbach resonance]{}s; therefore individual resonances cannot be resolved. For the feature of enhanced spin relaxation at 40G, there are no interspecies $s$- or $p$-wave [Feshbach resonance]{}s, and thus we cannot explain it by means of scattering resonances.
However, at low magnetic fields $B$ the release energy $E_r$ increases $\propto B$, which leads to a corresponding increase in the density of continuum states in the decay channel. We speculate that this threshold behavior may explain the enhanced loss at lower fields. Moreover, already at a few tens of Gauss the nuclear spin of Li decouples from the electron spin, which may lead to a reduction of the loss. In a second set of experiments we investigate the time scale on which spin relaxation occurs at the two relevant magnetic fields of 40G and 207G. For both fields we find that the time scale for the process is 150ms and that a steady state is essentially reached after 500ms. Spin relaxation is a very efficient process that allows us to fully polarize our K sample without loss of K atoms. Since initially many more Li atoms are present in the trap, the Li loss is a minor problem. The resulting imbalance of the two Li spin states can be removed by driving radio-frequency (rf) transitions between the two states. Evaporation and sympathetic cooling {#sec:Evaporation} =================================== A spin-mixture of Li[$|1\rangle$]{} and Li[$|2\rangle$]{} near the broad 834-G [Feshbach resonance]{} facilitates highly efficient evaporative cooling, as is well known in the field of strongly interacting Fermi gases [@OHara2002ooa; @Inguscio2006ufg]. The efficiency of the cooling process is due to the very favorable combination of a large elastic scattering cross section with very low losses. In Ref. [@Spiegelhalder2009cso] we have already demonstrated the possibility of using the [$^6$Li]{} spin-mixture as an efficient cooling agent to sympathetically cool another species. In this way we have demonstrated the attainment of a double-degenerate Fermi-Fermi mixture of [$^6$Li]{} and [$^{40}$K]{}. Here we present additional information on the experimental procedures and the combined evaporative and sympathetic cooling process. Let us first summarize our main findings of Ref.
[@Spiegelhalder2009cso] on the collisional stability of the three-component Fermi gas of Li in the lowest two spin states together with K in the lowest spin state. Interspecies collisional loss is negligible on the BCS side of the broad Li resonance ($B>900$G) and quite weak even exactly on resonance (834G). Substantial loss, however, occurs on the BEC side of the resonance in a range between about 650 and 800G. The latter is a result of inelastic collisions between K atoms and weakly bound Li dimers. Consequently, the combined evaporation and sympathetic cooling process needs to be performed on the BCS side of the Li resonance. For our experiments we choose a field of 1190G. Here the Li scattering length is $-2900~a_0$ and the interspecies scattering length is about $+60~a_0$. ![Li and K atom numbers after evaporation performed at variable magnetic field. Open squares show the number of Li atoms per state, filled circles show the K atom number.[]{data-label="fig:EvapBField"}](Fig7EvapBField.png){width="\columnwidth"} Before starting the evaporation process, we carefully balance the population of the two spin states Li[$|1\rangle$]{} and Li[$|2\rangle$]{}. This is particularly important in cases where the spin relaxation stage has caused considerable losses in [$|2\rangle$]{}. The spin balance is accomplished by driving the rf transition $|1\rangle\leftrightarrow|2\rangle$ using a series of 20 ramps over 10kHz with a duration of 50ms each [@Strecker2003coa]. This procedure is performed at 1190G, where the Li spin mixture is outside of the strongly interacting regime and interaction-induced rf line shifts are relatively small. Note that the evaporation process is much more sensitive to a spin imbalance on the BCS side of the resonance than on the BEC side. The reason is that in the latter case the molecule formation can lead to a self-balancing of the spin population during evaporation [@Grimm2008Varenna].
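For a rough sense of why the Li spin mixture is such an efficient coolant at this field, one can compare zero-energy $s$-wave cross sections $\sigma = 4\pi a^2$ for the two scattering lengths quoted above; a crude sketch only, since the near-resonant Li-Li cross section is in practice unitarity-limited:

```python
# Rough comparison of zero-energy s-wave cross sections sigma = 4*pi*a^2
# for the scattering lengths quoted above at 1190 G. This is only an
# illustration: the near-resonant Li-Li cross section is in practice
# limited by unitarity, so the true ratio is smaller.
from math import pi

A0 = 5.29e-11                        # Bohr radius in m
a_li = -2900 * A0                    # Li|1>-Li|2> scattering length
a_lik = 60 * A0                      # Li-K interspecies scattering length

sigma_li = 4 * pi * a_li**2
sigma_lik = 4 * pi * a_lik**2
print(f"sigma_Li / sigma_LiK ~ {sigma_li / sigma_lik:.0f}")  # roughly 2300
```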
Evaporation of the mixture is performed in the principal ODT, beam 1. The evaporation ramp consists of two stages, technically implemented in different ways. In the first stage, we use a digital input of the laser control unit to reduce the ODT power to about 15W. This ramp is linear and takes 1.5s. In a second stage, an acousto-optical modulator (AOM) is used to decrease the power in a nearly exponential ramp. The evaporation ramp is typically stopped after 6s when the laser power is 60mW. At this point the trap frequencies in the radial directions are 394Hz for Li and 219Hz for K. In the axial direction the trap frequency is dominated by the magnetic confinement and is 27Hz for Li and 11Hz for K. The experimental data in Fig. \[fig:EvapBField\] show the number of atoms remaining after the complete evaporation ramp for a magnetic field varied across the full tuning range offered by the Li [Feshbach resonance]{}. The data correspond to the observations of Ref. [@Spiegelhalder2009cso], showing pronounced loss on the BEC side of the Li [Feshbach resonance]{} and large stability on its BCS side. In the high-field region between 950G and 1250G, where the Li scattering length varies between $-5300~a_0$ and $-2800~a_0$, the magnetic field has no significant influence. ![Evolution of the atom numbers during the second stage of evaporation. The Li atom number per state is plotted using squares while circles represent the K atom number. Filled (open) symbols represent data from measurements using fluorescence (absorption) imaging.[]{data-label="fig:EvaporationPower"}](Fig8EvaporationPower.png){width="0.975\columnwidth"} In order to analyze the cooling process, we stop the evaporation ramp at a variable endpoint and measure the number of Li and K atoms. The measurements are performed by recapture into the MOT and subsequent detection of the fluorescence intensity or, at lower power, by absorption imaging after release from the ODT into free space. 
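The trap frequencies quoted at different powers are consistent with the usual $\omega \propto \sqrt{P}$ scaling of a single-beam optical dipole trap; a minimal sketch, assuming purely optical radial confinement at constant beam waist:

```python
# Consistency sketch: for a single-beam optical dipole trap, the radial
# trap frequency scales as omega ~ sqrt(P) with beam power P (assuming
# purely optical radial confinement and a constant beam waist).
from math import sqrt

f_li_60mw = 394.0                    # quoted radial Li frequency at 60 mW (Hz)
f_li_40mw = f_li_60mw * sqrt(40.0 / 60.0)
print(f"predicted at 40 mW: {f_li_40mw:.0f} Hz")  # ~322 Hz, vs 320 Hz quoted
```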
Figure \[fig:EvaporationPower\] shows that the Li atom number steadily decreases while the K cooling proceeds essentially without losses. Note that the trap depth for K is a factor of 2.1 larger than for Li. This changes at about 100mW, as the gravitational sag of K reduces the trap depth and we begin to see significant loss of K when further lowering the power of the ODT. Figure \[fig:Temp\] shows the temperature evolution of Li and K in the last part of the evaporation ramp. We extract the Li temperature by fitting a Thomas-Fermi profile to absorption images of the atomic cloud after [time-of-flight]{}. The K temperature is determined using a simple Gaussian fit, as the sample here remains in the non-degenerate regime. Throughout the whole evaporation the temperature of K lags behind that of Li. At the end of an extended evaporation ramp, at a trap power of 40mW, the radial trap frequencies for Li (K) amount to 320Hz (160Hz). In the axial direction the trap frequency is dominated by the magnetic confinement and is 27Hz for Li and 11Hz for K. The Fermi temperatures are calculated to be $T^\mathrm{Li}_F=390$nK for Li and $T^\mathrm{K}_F=135$nK for K. Following the scheme we have presented in Ref. [@Spiegelhalder2009cso], we continue to cool the mixture by holding it in this shallow trap for 5s. This way we achieve a final K temperature of 50nK, corresponding to a degeneracy of $T^\mathrm{K}/T_F^\mathrm{K}\approx 0.6$, while Li is deeply degenerate with $T^\mathrm{Li}/T_F^\mathrm{Li}<0.2$. ![Temperature evolution during the last part of the evaporation. Open squares (filled circles) indicate the Li (K) temperature.
Also plotted are curves that represent the evolution of the Fermi temperature for Li (dotted line) and K (dashed line).[]{data-label="fig:Temp"}](Fig9Temperature.png){width="\columnwidth"} Since Li is the coolant of our evaporation scheme, we adjust the amount of K with which evaporation starts such that at the end of evaporation we have about ten times more Li atoms in each spin state than K[$|1\rangle$]{} atoms. In this situation K can be used as a probe for the Li mixture. One example of this idea has already been presented in Ref. [@Spiegelhalder2009cso], where the measurement of the K temperature was used to obtain a firm upper bound on the temperature of the Li bath even in the strongly interacting regime. This method was recently adapted to the use of $^7$Li as a probe in Ref. [@Nascimbene2009ett]. Preparation near interspecies [Feshbach resonance]{}s {#sec:StatePrep} ===================================================== The [$^6$Li]{}-[$^{40}$K]{} mixture offers several $s$-wave [Feshbach resonance]{}s in the range between 150G and 200G [@Wille2008eau; @Tiecke2009sfb]. All of them tend to be quite narrow, which is typical for moderate values of the background scattering length [@Chin2009FBR]. The broadest resonances were found for the channels Li[$|1\rangle$]{}K$|7...10\rangle$ with widths between 1G and 2G [@Tiecke2009fri]. The energetically lowest channel Li[$|1\rangle$]{}K[$|1\rangle$]{}, which is of particular interest because of the energetic suppression of any two-body decay, features two resonances with calculated widths around 100mG. In this work, we focus on the resonance near 168G. We show how a degenerate two-component Li-K mixture can be prepared near this resonance after sympathetic cooling at high magnetic field and present measurements on inelastic and elastic properties of the mixture. ![Preparation scheme of a two-component mixture near interspecies [Feshbach resonance]{}s.
The horizontal lines indicate four different Li-K spin channels that are of particular relevance for our experiments. The numbers give the magnetic field values in Gauss. Filled (open) circles represent $s$-wave ($p$-wave) interspecies resonances. The Li[$|1\rangle$]{} and Li[$|2\rangle$]{} intraspecies $p$-wave resonances (not shown) are located at 160G and 215G, respectively [@Zhang2004pwf; @Schunck2005fri], and nearly coincide with interspecies resonances. For the $s$-wave resonances, the relative widths are indicated by the size of the symbols. State transfer, indicated by vertical dashed lines, is achieved by rf transitions. After the evaporation at 1190G, the Li[$|2\rangle$]{} population is removed by a resonant laser pulse, as indicated by the **x**.[]{data-label="fig:Ramp2FBR"}](Fig10RampToFBR.png){width="\columnwidth"} When ramping down the magnetic field from its evaporation setting (1190G) to the interspecies resonances, one has to cross the region of the broad 834-G Li[$|1\rangle$]{}[$|2\rangle$]{} [Feshbach resonance]{}. If this spin channel is populated, the formation of $^6$Li dimers inevitably leads to strong losses from the atomic sample. To avoid this problem, we remove one of the Li spin components by the light pressure of a resonant laser pulse [@Du2008ooa] before starting the ramp. Note that already the momentum kick from one photon is sufficient to push a Li atom out of the shallow trap after evaporation. The pulse is applied for 10$\mu$s at a few times the saturation intensity. We find that at 1190G the interaction between the two spin components is weak enough to avoid any significant effect on the population in the remaining spin state. In this way, we reduce the three-component Fermi-Fermi gas to a two-component mixture. To approach a specific interspecies resonance, it is also important to avoid the effect of other inter- and intraspecies resonances.
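The claim that a single photon momentum kick suffices to remove a Li atom can be made quantitative: the recoil energy of $^6$Li at 671nm corresponds to a few $\mu$K, comparable to the $\mu$K-scale trap depths reached at the end of evaporation. A rough sketch with rounded constants:

```python
# Single-photon recoil of 6Li at the 671 nm D line, to be compared with
# the microkelvin-scale trap depths at the end of evaporation (constants
# rounded; a sketch, not a trap-loss calculation).
h = 6.626e-34                        # Planck constant (J s)
kB = 1.381e-23                       # Boltzmann constant (J/K)
m_li = 6.0 * 1.6605e-27              # 6Li mass (kg)
lam = 671e-9                         # D-line wavelength (m)

v_rec = h / (m_li * lam)                      # recoil velocity (m/s)
T_rec = m_li * v_rec**2 / (2 * kB) * 1e6      # recoil energy in uK
print(f"v_rec = {v_rec * 100:.1f} cm/s, E_rec/kB = {T_rec:.1f} uK")
```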
We find that our ramps are fast enough (ramp speed up to 20G/ms) to cross all the $p$-wave resonances without any problem. However, we find substantial losses on the interspecies $s$-wave resonances, even on the weaker ones. This already points to efficient molecule association [@Chin2009FBR] as we will discuss in Sec. \[sec:Molecules\]. Figure \[fig:Ramp2FBR\] illustrates the procedures applied to reach specific interspecies Feshbach resonances. While it is straightforward to reach the 168-G resonance in the Li[$|1\rangle$]{}K[$|1\rangle$]{} channel by a fast ramp after removal of the state Li[$|2\rangle$]{}, other resonances require more elaborate methods. As an example, we discuss the 155-G resonance in the Li[$|1\rangle$]{}K[$|3\rangle$]{} channel, which is of interest as one of the broadest resonances (width between 0.5 and 1G) in the low-lying spin channels. Here a possible way is to transfer the Li atoms from [$|1\rangle$]{} to [$|2\rangle$]{} after ramping down the field to $\sim$200G, thus converting the sample into a Li[$|2\rangle$]{}K[$|1\rangle$]{} mixture. This can be done with very high efficiency using rf transfer methods. Then the ramp is continued down to a value close to 155G and three subsequent rf transfers are applied to convert the population from Li[$|2\rangle$]{}K[$|1\rangle$]{} to Li[$|1\rangle$]{}K[$|3\rangle$]{}. This procedure avoids all detrimental resonances. Analogous schemes can be applied to reach any other desired resonance. ![Loss measurement and evaporative cooling near an interspecies [Feshbach resonance]{}. Plotted is the K atom number normalized to the background value of the loss measurement away from resonance. The open squares show a set of loss measurements holding the sample at variable magnetic field for 5s. The solid line is a Gaussian fit to the data. 
The filled circles show a corresponding set of measurements where evaporation was performed by lowering the optical power to one third of its initial value within 3s.[]{data-label="fig:LiKFBRevap"}](Fig11LiKFBRevap.png){width="0.85\columnwidth"} In a set of experiments performed at the 168-G interspecies [Feshbach resonance]{} in the Li[$|1\rangle$]{}K[$|1\rangle$]{} channel, we investigate aspects of inelastic and elastic collisions. Initially, we prepare about $2\times10^5$ Li atoms together with about $1.4\times10^4$ K atoms at a temperature of about 300nK. The power of the trapping beam (beam 1) is 170mW, corresponding to a radial (axial) trap frequency of 660Hz (14Hz) for Li. For K the trap frequencies are 375Hz and 6Hz respectively. The peak densities of the clouds are $n^{\rm Li}_0\approx 2\times 10^{12}\,$cm$^{-3}$ and $n^{\rm K}_0\approx 4\times 10^{11}\,$cm$^{-3}$ and the degeneracies are $T^{\rm Li}/T_F^{\rm Li}\approx0.3$ and $T^{\rm K}/T_F^{\rm K}\approx1.5$. Note that these conditions are deliberately prepared with an incomplete evaporation ramp, stopped at 170mW instead of the usual final power of 60mW (Sec. \[sec:Evaporation\]). In the first series of measurements, we ramp the magnetic field to a variable value and study the loss of atoms after a hold time of 5s. For detection, the remaining atoms are recaptured into the two-species MOT and their fluorescence is recorded. The K atom number, normalized to the background value away from resonance, is plotted in Fig. \[fig:LiKFBRevap\] (open squares). We observe a loss feature centered at 168.22G. Ramping across the [Feshbach resonance]{} does not lead to loss, indicating that the phase-space density used in these experiments is insufficient for adiabatic molecule creation during the magnetic field ramp. In the second series of measurements, we investigate whether an effect of enhanced elastic collisions can be observed in evaporative cooling near the interspecies resonance. 
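The degeneracy parameters quoted above for the prepared sample can be cross-checked with the ideal-gas Fermi temperature of a harmonic trap, $k_B T_F = \hbar\bar{\omega}(6N)^{1/3}$; a minimal sketch using the quoted atom numbers and trap frequencies (interactions and anharmonicity are ignored, and the $2\times10^5$ Li atoms are treated as a single spin state):

```python
# Cross-check of the quoted degeneracy parameters using the ideal Fermi
# gas in a harmonic trap, kB*T_F = hbar*omega_bar*(6N)^(1/3).
from math import pi

hbar, kB = 1.055e-34, 1.381e-23

def fermi_temperature_nk(N, f_radial, f_axial):
    """T_F in nK for N fermions in an (fr, fr, fa) harmonic trap (Hz)."""
    omega_bar = 2 * pi * (f_radial**2 * f_axial) ** (1.0 / 3.0)
    return hbar * omega_bar * (6 * N) ** (1.0 / 3.0) / kB * 1e9

T_F_li = fermi_temperature_nk(2e5, 660, 14)   # ~930 nK
T_F_k = fermi_temperature_nk(1.4e4, 375, 6)   # ~200 nK
# At T ~ 300 nK this gives T/T_F ~ 0.3 for Li and ~1.5 for K, as quoted.
print(f"T_F(Li) ~ {T_F_li:.0f} nK, T_F(K) ~ {T_F_k:.0f} nK")
```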
Here we lower the power of beam 1 to 55mW within 3s, which results in a radial (axial) trap frequency of 375Hz (14Hz) for Li and 210Hz (5Hz) for K. As before, the number of remaining atoms is determined by recapture into a MOT and fluorescence detection. The corresponding data, plotted in Fig. \[fig:LiKFBRevap\] (filled circles), show a pronounced asymmetry and thus a different behavior on the two sides of the Feshbach resonance. On its high-field side, corresponding to large negative scattering length, we observe a maximum in the recaptured atom number at 168.26G. This signifies evaporative cooling with an enhanced elastic scattering cross section as compared to the background value. At lower fields, however, loss enhanced by the resonance dominates and leads to a minimum in atom number at 168.18G. The loss properties on the two sides of the Feshbach resonance are thus found to be strikingly different with more favorable conditions on the side of negative scattering length, where no weakly bound molecular state exists. ![Scan of the Li-K [Feshbach resonance]{} at 168G with 10ms hold time. The $1/e$-width of the loss feature (indicated by the dotted lines) is determined by fitting a Gaussian to the experimental data and amounts to 33mG.[]{data-label="fig:FastLoss"}](Fig12FastLoss.png){width="0.85\columnwidth"} To determine the resonance position more precisely, we prepare a sample with higher phase-space density than used for the previous sets of experiments. Now both beams of the ODT are used. Beam 2 is held at a constant power of 250mW, corresponding to a trap depth of 1$\mu$K for Li and 2.1$\mu$K for K. Its purpose is to add confinement along the weak direction of beam 1 at the very end of evaporation. Evaporation at 1190G and the ramp to low magnetic field proceed as described above. For further cooling, we create a balanced mixture of Li[$|1\rangle$]{} and [$|2\rangle$]{} at 170.5G, using a sequence of rf sweeps. 
Afterwards, beam 1 is ramped from a Li trap depth of 1.9$\mu$K to 1.5$\mu$K within one second and the sample is left to thermalize for another second. Then Li[$|2\rangle$]{} is removed by a short pulse of resonant light. At this point the sample has a temperature of $\sim$200nK and contains about $1.3\times10^4$ K atoms and $8\times10^4$ Li atoms. With trap oscillation frequencies of 90Hz (50Hz) axially and 390Hz (210Hz) radially for Li (K), we calculate Fermi temperatures of about 900nK for Li and 270nK for K, corresponding to $T^{\rm Li}/T_F^{\rm Li}\approx0.2$ and $T^{\rm K}/T_F^{\rm K}\approx0.7$. The K cloud has less than half the size of the Li cloud. For both components the density in the center of the trap is about $2\times 10^{12}\,$cm$^{-3}$. Under these deep cooling conditions, we detect the fast atom loss as a function of the magnetic field. In order to approach the magnetic field value of interest without forming molecules, K is transferred into state [$|2\rangle$]{} by an rf $\pi$-pulse prior to the magnetic field ramp. At the final field, K is transferred back to state [$|1\rangle$]{} by another $\pi$-pulse. After a hold time of 10ms, the remaining K atom number is measured. Figure \[fig:FastLoss\] shows the corresponding data. We observe maximum loss of atoms centered at 168.217G, with an estimated calibration uncertainty of 10mG. Creation of ultracold Fermi-Fermi molecules {#sec:Molecules} =========================================== Here, we describe our basic methodology for molecule creation and detection (Sec. \[sec:CreationAndDetectionSchemes\]), present experimental results (Sec. \[sec:ExperimentalResults\]) and discuss our findings (Sec. \[sec:Discussion\]). Creation and detection schemes {#sec:CreationAndDetectionSchemes} ------------------------------ ![Molecules are associated by a magnetic field ramp, indicated by the arrow labeled $\Delta B$, across a [Feshbach resonance]{}.
Transitions to higher atomic spin states are driven by rf pulses. Atoms bound in a molecule are not affected because of the binding energy $E_b$.[]{data-label="fig:ImageRF"}](Fig13ImageRF.png){width="\columnwidth"} The creation of the molecules starts with a Li[$|1\rangle$]{}K[$|1\rangle$]{} mixture under the same conditions as prepared for Fig. \[fig:FastLoss\]. The molecules are associated by a magnetic field ramp from 170.5G to 168.19G within 10ms, crossing the Li[$|1\rangle$]{}K[$|1\rangle$]{} 168-G [Feshbach resonance]{} (ramp speed 0.23G/ms). Instantly after the ramp, the sample is released from the ODT. Selective imaging of molecules and remaining unpaired atoms is possible after transfer of the unpaired atoms to the states Li[$|2\rangle$]{} and K[$|2\rangle$]{}. An rf $\pi$-pulse, tuned to the atomic [$|1\rangle$]{}$\rightarrow$[$|2\rangle$]{} transition, is used for this purpose [@endnote:RFTransitions]; see Fig. \[fig:ImageRF\]. Atoms bound in LiK molecules are not transferred to state [$|2\rangle$]{} if the molecular binding energy detunes the transition far enough from the free atom transition to be outside of the Fourier spectrum of the rf pulse. This condition requires a detuning of 23kHz for K, which is reached 9mG below resonance according to the relative magnetic moment of the molecular state [@Tiecke2009fri]. The rf pulses are applied one after the other during the 0.4ms free expansion of the sample. ![Absorption images of LiK Feshbach molecules and unpaired atoms taken at a magnetic field of 168.19G. The upper row shows images of molecules and atoms taken with light resonant to the K transition whereas the lower row shows images taken with light resonant to the Li transition. 
The left column shows molecules imaged after 0.4ms [time-of-flight]{} (TOF) expansion and the right column unpaired atoms imaged 1ms later.[]{data-label="fig:LiKMolecules"}](Fig14LiKMolecules.jpg){width="\columnwidth"} State-selective absorption images are taken simultaneously on cycling transitions starting from the Li[$|1\rangle$]{} and K[$|1\rangle$]{} states. This way, molecules are imaged directly. The resulting pictures are shown in the left-hand column of Fig. \[fig:LiKMolecules\] [@endnote:MoleculeImaging]. A second pair of images, this time of the unpaired atoms, which have been transferred to the [$|2\rangle$]{} states, is taken 1ms later and shown in the right-hand column of Fig. \[fig:LiKMolecules\] [@endnote:unpairedAtomImaging]. Absorption imaging of the molecules gives lower bounds $\mathcal{N}^{\rm K}_{\rm mol}$ and $\mathcal{N}^{\rm Li}_{\rm mol}$ on the actual molecule numbers $N^{\rm K}_{\rm mol}$ and $N^{\rm Li}_{\rm mol}$, since the absorption cross section of atoms bound in LiK molecules is somewhat smaller than that of unpaired atoms. Close to the [Feshbach resonance]{} the cross section is similar to that of free atoms and decreases with increasing binding energy. The number of remaining unpaired atoms $N^{\rm K}_{\rm free}$ and $N^{\rm Li}_{\rm free}$ can be obtained from the second pair of absorption images. From the K images (top row of Fig. \[fig:LiKMolecules\]) we obtain $\mathcal{N}^{\rm K}_{\rm mol}=3\times10^3$ and $N^{\rm K}_{\rm free}=9\times10^3$, and from the Li images (bottom row of Fig. \[fig:LiKMolecules\]) $\mathcal{N}^{\rm Li}_{\rm mol}=4\times10^3$ and $N^{\rm Li}_{\rm free}=8\times10^5$. The small cloud of K is immersed in a much larger degenerate Li bath. The molecule conversion efficiency is therefore best characterized by the K conversion efficiency.
A lower bound for the molecule fraction can be determined from K absorption images as $\mathcal{F}=\mathcal{N}^{\rm K}_{\rm mol}/(\mathcal{N}^{\rm K}_{\rm mol}+N^{\rm K}_{\rm free})$. From the images shown in Fig. \[fig:LiKMolecules\] we obtain $\mathcal{F}=0.25$. Experimental results {#sec:ExperimentalResults} -------------------- ![Lower bound of the molecule fraction $\mathcal{F}$ as a function of the final value of the molecule-association magnetic field ramp. Molecules are detected for fields below 168.218G. This field corresponds to the center of the loss feature shown in Fig. \[fig:FastLoss\], which is marked by the vertical solid line here. The dashed vertical lines mark the $1/e$-width of the loss feature and the horizontal dashed line marks a systematic offset.[]{data-label="fig:ConvEff"}](Fig15ConvEff.png){width="0.85\columnwidth"} We now examine the molecule creation process and the properties of the molecules in more detail. First, we determine the magnetic field value of the onset of molecule creation. For this, we perform experiments like the one just described, but we vary the endpoint of the magnetic field ramp, keeping the ramp duration fixed. The frequency of the rf pulse for the separation of free K atoms and LiK molecules and the probe beam frequencies are adapted accordingly. The Li rf pulse was not used in these experiments. Figure \[fig:ConvEff\] shows the lower bound for the molecule fraction $\mathcal{F}$. Imperfect rf pulses lead to a 3% systematic offset in the data, indicated by the horizontal dashed line [@endnote:RFOffset]. It is found that the detected molecule fraction depends strongly on the endpoint of the magnetic field ramp. No molecules are detected down to a final field of 168.217G. Only 13mG lower, the maximum molecule fraction is observed. This magnetic field range corresponds well to the required detuning from resonance for our atom-molecule separation method to work, as discussed above.
For lower fields the molecular signal drops again, first steeply down to about 168.19G and then much more slowly. At about 167.5G (outside the range of the plotted data) it becomes indiscernible from the background noise. The dependence of the detected molecule fraction on the field may have several causes. One is the change in the absorption cross section of the molecules with the magnetic field. The slow decrease away from resonance comes from loss of molecules, as more time is spent between molecule association and detection. ![Decay of LiK molecules at 168.204G. Plotted is the number of molecules $\mathcal{N}^{\rm K}_{\rm mol}$ as a function of the hold time after the fast magnetic field ramp. The solid line is an exponential fit to the data, yielding a lifetime of 1.7ms.[]{data-label="fig:LiKDecay"}](Fig16LiKDecay.png){width="0.85\columnwidth"} Within the measurement precision of a few mG, the onset of molecule detection coincides with the center of the loss feature from Fig. \[fig:FastLoss\], marked by the solid vertical line in Fig. \[fig:ConvEff\]. This observation is in accordance with the standard picture of molecule formation close to a [Feshbach resonance]{} [@Kohler2006poc]. The maximum K molecule conversion efficiency extracted from these data is reached at 168.204G and amounts to about 40%. A different method to determine the K molecule conversion efficiency is to compare the number of free K atoms at a magnetic field just above the onset of molecule production and just below 168.19G. Assuming all missing atoms have formed molecules, the molecule conversion efficiency is again determined to be 40%. The assumption that no molecules are lost is well justified, since the time spent in the [Feshbach resonance]{} region during the magnetic field ramp to 168.19G (120$\mu$s) is short compared to the lifetime of the molecules.
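This timing argument can be quantified with a simple estimate, assuming the fitted 1.7ms lifetime applies as exponential decay throughout the ramp:

```python
# Sanity check of the timing argument: with exponential decay at the
# fitted lifetime, only a small fraction of the molecules is lost during
# the time spent in the resonance region of the field ramp.
from math import exp

tau = 1.7e-3                          # measured molecule lifetime (s)
t_res = 120e-6                        # time spent in the resonance region (s)

lost = 1.0 - exp(-t_res / tau)
print(f"fraction of molecules lost during the ramp: {lost:.1%}")  # ~7%
```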
The lifetime of the LiK molecules is determined by holding the sample after molecule creation for a varying time in the ODT at a constant magnetic field of 168.204G and measuring the molecule number afterwards. A fit to the decay of the molecule number gives a lifetime of 1.7ms; see Fig. \[fig:LiKDecay\]. This lifetime does not change if the remaining free Li atoms are removed just after molecule creation by a resonant flash of light, indicating that the dominant loss mechanism does not involve free Li atoms. We did not investigate the effect of unpaired K atoms on the molecule lifetime. ![Comparison of the free expansion of LiK Feshbach molecules (circles: detection of the bound K atoms, triangles: detection of the bound Li atoms) and unpaired Li atoms (open squares) at 168.196G. Shown is the radial $1/\sqrt{e}$-width of Gaussian fits to integrated density profiles.[]{data-label="fig:LiKMolTOF"}](Fig17LiKMolTOF.png){width="0.89\columnwidth"} A striking manifestation of molecule formation can be observed by comparing the expansion behavior of clouds of LiK molecules with that of clouds of unpaired Li atoms, both imaged with Li light. For this comparison, we record the expansion of the molecules and the remaining unpaired Li atoms after a molecule association magnetic field ramp to 168.196G; see Fig. \[fig:LiKMolTOF\]. We find the average expansion velocity of the molecules to be slower by a factor of 3.3, as determined by fits to the expansion. We interpret this difference mainly as a result of the higher mass of the molecules compared to unpaired Li atoms. It corresponds well to the expected velocity ratio of $v_\mathrm{Li}/v_\mathrm{LiK}=\sqrt{M_\mathrm{LiK}/M_\mathrm{Li}}=\sqrt{46/6}=2.8$ in the approximation of thermal clouds of equal temperature. This observation tells us that Li atoms that remain in state [$|1\rangle$]{} after the $\pi$-pulse are bound to K atoms.
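The expected velocity ratio quoted above follows directly from the mass ratio; as a sketch, for thermal clouds of equal temperature (mean speed scaling as $1/\sqrt{\rm mass}$):

```python
# Expected expansion-velocity ratio for thermal clouds of equal
# temperature: v ~ sqrt(kB*T/m), so v_Li / v_LiK = sqrt(M_LiK / M_Li).
from math import sqrt

M_LI, M_LIK = 6.0, 46.0               # masses of 6Li and 6Li40K in u
ratio = sqrt(M_LIK / M_LI)
print(f"v_Li / v_LiK = {ratio:.1f}")  # 2.8
```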
Discussion {#sec:Discussion} ---------- In our experiment, molecule association is achieved in basically the same way as demonstrated previously in many other cold-atom experiments [@Chin2009FBR; @Kohler2006poc], and our results agree well with the standard picture of molecule formation close to a [Feshbach resonance]{} [@Kohler2006poc]. We observe that molecule association is most efficient in samples of high phase-space density and obtain a maximum molecular conversion efficiency for K of 40%. This conversion efficiency is typical for experiments employing the Feshbach ramp technique. A Monte Carlo simulation based on the method presented in Ref. [@Hodby2005peo] agrees with our results, giving a conversion efficiency of about 50% for K. The lifetime of our molecules is quite short, only 1.7ms. Because of this, it would be technically challenging to observe the standard signature of molecule association, which is the reduction of the absorption imaging signal when ramping to the molecular side of the [Feshbach resonance]{} and the recovery of the signal after ramping back. Our rf state separation detection technique, which allows us to obtain images of molecules less than 0.1ms after molecule association, overcomes this detection problem. The Li$|1\rangle$K$|1\rangle$ molecule lifetime that we measure is much shorter than typical lifetimes of Li$|1\rangle$K$|3\rangle$ molecules measured by the Munich group [@Voigt2009uhf]. Presently, we do not know whether the different spin channels, and therefore the different Feshbach resonances used for molecule association, can explain the different lifetimes. There are also other possible, more technical reasons, which we presently cannot rule out. One possibility, which needs further investigation, is the loss of molecules because of the absorption of photons from the broad spectrum of the multi-mode fiber laser used for the ODT [@endnote:MultiFrequencyLaser].
Conclusion & Outlook ==================== We have presented an all-optical evaporative and sympathetic cooling scheme for the preparation of a double degenerate $^6$Li-$^{40}$K Fermi-Fermi mixture. We have also shown the general methodology to prepare the sample close to specific interspecies Feshbach resonances. As a first application, we have demonstrated the formation of Fermi-Fermi heteronuclear molecules and we have examined the molecule association process and some properties of the molecules. With the basic tools at hand, we are now in a position to take the next steps towards our main goal of realizing strongly interacting regimes in the Fermi-Fermi mixture. Since the available Feshbach resonances are quite narrow, this requires precise knowledge of the exact resonance position and the magnetic-field dependent elastic and inelastic interaction properties. We are currently investigating the relatively broad 155-G resonance in the Li$|1\rangle$-K$|3\rangle$ channel as a promising candidate, for which we experimentally find a width of about 800mG [@NaikInPreparation]. Strongly interacting conditions generally require a scattering length exceeding the interparticle spacing. Under our typical experimental conditions this would be realized with a magnetic detuning below 10mG, which is experimentally feasible. We want to thank Paul Julienne for fruitful discussions and support on the theoretical understanding of the interspecies scattering properties, Tobias Tiecke for the calculated Feshbach resonance positions displayed in Fig. \[fig:Li2KsWave\], Christoph Kohstall for useful comments on the manuscript, and Clarice Aiello for contributions to the experimental set-up. This work is supported by the Austrian Science Fund (FWF) and the European Science Foundation (ESF) within the EuroQUAM/FerMix project and by the FWF through the SFB FoQuS.
Magnetic field coils {#apx:magnets} ==================== Three pairs of magnetic field coils are present in the setup: a pair of high-current, large-diameter coils, which we call Feshbach coils, a smaller-diameter pair of high-current coils, which we call curvature coils, and a third, low-current, low-inductance pair of coils, which we call fast coils. Normally, the currents in all coils circulate in the same direction. To achieve a quadrupole field configuration for MOT operation, the direction of the current in one coil of the Feshbach coil pair and one coil of the curvature coil pair can be reversed using mechanical relays. In the normal configuration, the Feshbach coils are in Helmholtz configuration and give a very homogeneous bias field of up to 3000G near the trap center. The curvature coils exhibit a magnetic field curvature, which gives rise to an additional contribution to the trapping potential [@Jochim2003bec]. With the current used during evaporation, the curvature coils give a homogeneous bias field of 600G and a magnetic field curvature of 27G/cm$^2$ along the axial direction of the dipole trap beams (perpendicular to the symmetry axis of the coils). This curvature gives rise to a magnetic confinement corresponding to trap frequencies of 27Hz for Li and 10Hz for K. When working at bias fields between 150G and 170G, where the interspecies Feshbach resonances are, the curvature coils provide a magnetic confinement corresponding to 13Hz for Li and 5Hz for K. When high magnetic-field stability is needed, we make use of a battery-powered current supply. Since the interspecies [Feshbach resonance]{}s are very narrow, it is necessary to control the magnetic field with very high precision. Passive stabilization methods, not employing any shielding, lead to a stability of about 10mG peak-to-peak over a 50Hz cycle.
By synchronizing the experimental sequence to the power line, we achieve a magnetic field stability of a few mG for times on the order of one ms, which is much longer than the typical duration of the rf $\pi$-pulses we use for internal state transfer. Magnetic field values are calibrated using rf transitions. For probing the interspecies resonances we make use of the fast coils, which are in Helmholtz configuration. Using these coils we make precise magnetic field ramps of up to 3G in about 0.1ms. This response time of the magnetic field was characterized by measuring the change in frequency of an atomic rf transition with time after a step change of the current. The response time is not limited by the speed of change of the current through the coil, but by eddy currents.
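As a cross-check of the trap frequencies quoted in this appendix, the confinement produced by a field curvature $B''$ follows $\omega=\sqrt{\mu B''/m}$. A small numeric sketch; the assumption that both species carry a magnetic moment of roughly one Bohr magneton at these high fields is ours, not stated in the text:

```python
import math

mu_B = 9.274e-24   # J/T, Bohr magneton (assumed moment of both species)
amu  = 1.6605e-27  # kg, atomic mass unit
Bpp  = 27.0        # field curvature: 27 G/cm^2 = 27 T/m^2

def trap_freq_hz(mass_amu):
    """f = (1/2pi) * sqrt(mu * B'' / m) for a moment of ~1 mu_B."""
    omega = math.sqrt(mu_B * Bpp / (mass_amu * amu))
    return omega / (2 * math.pi)

print(f"Li: {trap_freq_hz(6):.0f} Hz")   # ~25 Hz (text quotes 27 Hz)
print(f"K : {trap_freq_hz(40):.0f} Hz")  # ~10 Hz (text quotes 10 Hz)
```

The $\sqrt{m_\mathrm{K}/m_\mathrm{Li}}\approx 2.6$ mass scaling matches the quoted 27 Hz / 10 Hz ratio well.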
--- author: - Chunming Yin - Milos Rancic - 'Gabriele G. de Boo' - Nikolas Stavrias - 'Jeffrey C. McCallum' - 'Matthew J. Sellars' - Sven Rogge bibliography: - 'Er1-1new.bib' title: Optical addressing of an individual erbium ion in silicon --- The detection of electron spins associated with single defects in solids is a critical operation for a range of quantum information and measurement applications currently under development[@morello_single-shot_2010; @fuechsle_single-atom_2012; @pla_single-atom_2012; @gaebel_room-temperature_2006; @jiang_repetitive_2009; @togan_quantum_2010; @maze_nanoscale_2008; @morton_embracing_2011; @zwanenburg_silicon_2012]. To date, it has only been accomplished for two centres in crystalline solids: phosphorus in silicon using electrical readout based on a single electron transistor (SET)[@morello_single-shot_2010] and nitrogen-vacancy centres in diamond using optical readout[@gaebel_room-temperature_2006; @togan_quantum_2010]. A spin readout fidelity of about 90% has been demonstrated with both electrical readout[@morello_single-shot_2010] and optical readout[@robledo_high-fidelity_2011; @neumann_single-shot_2010]; however, the thermal limitations of the electrical readout and the poor photon collection efficiency of the optical readout hinder achieving the high fidelity required for quantum information applications. Here we demonstrate a hybrid approach using optical excitation to change the charge state of the defect centre in a silicon-based SET, conditional on its spin state, and then detecting this change electrically. Optical frequency addressing at high spectral resolution overcomes the thermal-broadening limitation of the earlier electrical readout, and charge sensing avoids the difficulties of efficient photon collection.
This is done with erbium in silicon and has the potential to enable new architectures for quantum information processing devices and to dramatically increase the range of defect centres that can be exploited. Further, the efficient electrical detection of the optical excitation of single sites in silicon is a major step in developing an interconnect between silicon-based and optics-based quantum computing technologies. The potential for a hybrid optical/electrical single-spin readout was recently established by Steger et al., revealing a long nuclear spin coherence time in an ensemble of P ions in highly purified $^{28}$Si[@steger_quantum_2012]. The readout of the spin ensemble was demonstrated by detecting the photocurrent generated when the excitonic transition at 1,078 nm associated with the P ions was excited. In the current work we demonstrate single-site detection by electrically detecting the optical excitation of the $^{4}$I$_{15/2}$ - $^{4}$I$_{13/2}$ transition of single Er ions implanted into silicon, resolving both electronic Zeeman and hyperfine structure. The efficient readout required for single-site detection is achieved by measuring the photo-induced change in the site’s charge state using a SET, rather than detecting the associated photocurrent. The large electronic magnetic moment of the erbium ground state, the $I$=7/2 nuclear spin of its 167 isotope, and the coincidence of the $^{4}$I$_{15/2}$ - $^{4}$I$_{13/2}$ transition with the 1.5 $\muup$m transmission window of silica optical fibers make Er centres appealing for quantum information applications[@kenyon_erbium_2005; @vinh_photonic_2009; @bertaina_rare-earth_2007]. The few existing studies in samples with a high concentration of Er have shown a 0.1 s nuclear spin relaxation time[@baldit_identification_2010] and a 100 $\muup$s electron spin dephasing time[@bertaina_rare-earth_2007].
Single-site detection grants access to low Er densities, where one expects drastically enhanced coherence times in analogy to the impressive recent development for P in Si[@steger_quantum_2012]. The low emission rate from optically excited rare-earth ions such as Er$^{3+}$ makes purely optical detection of single sites challenging[@kenyon_erbium_2005; @vinh_photonic_2009]. Recently though, the optical detection of a single rare-earth ion was demonstrated in a Pr-doped YAG nano-crystal[@kolesov_optical_2012]. The technique employed involved two-step excitation of the ion to a high-lying 5d-electron state and detection of the resultant emission[@kolesov_optical_2012]. It was conducted at room temperature and exhibited low detection efficiency and low frequency resolution, making state readout infeasible. The $^{4}$I$_{15/2}$ - $^{4}$I$_{13/2}$ transition is between states within the inner 4f-electron shell of the Er$^{3+}$ ion, which is well shielded from the surrounding lattice by filled outer shells, resulting in narrow spectral linewidths and the potential for high-resolution frequency addressing. At liquid helium temperatures homogeneous linewidths as narrow as 50 Hz have been observed for the transition in Er$^{3+}$:Y$_2$SiO$_5$[@sun_recent_2002]. Prior to the present work, no sub-inhomogeneous studies had been conducted on optical transitions of Er centres in silicon. The observed emission lifetime of 2 ms from the $^{4}$I$_{13/2}$ state for Er$^{3+}$ ions in silicon implies a minimum linewidth of 150 Hz[@priolo_excitation_1998]. The resonant photoionization of individual Er$^{3+}$ ions is studied in an Er-implanted SET (Fig. \[Fig:principles\]a), which works as a charge sensor. When a laser is tuned to its resonant wavelength, the $^{4}$I$_{15/2}$ - $^{4}$I$_{13/2}$ transition of an Er$^{3+}$ ion is driven with relatively high probability, and the ion can be further ionized by a second-photon process or an Auger process.
The charge displacement induced by an ionization event simultaneously leads to a change in the tunnelling current of the SET. To achieve high sensitivity, the SET is biased close to the degeneracy point between two charge states, i.e. at the edge of one Coulomb peak (Fig. \[Fig:principles\]b). Accordingly, the transconductance is large, and a small charge displacement in the sensitive region will lead to a significant change in the tunnelling current[@pioda_single-shot_2011; @hanson_spins_2007]. The photoionization of individual Er$^{3+}$ ions leads to a significant change in tunnelling current (Fig. \[Fig:principles\]b). The $^{4}$I$_{15/2}$ - $^{4}$I$_{13/2}$ transition of each Er$^{3+}$ ion has a specific resonant photon energy, so individual Er$^{3+}$ ions can be distinguished by their resonant photon energies. When the laser is tuned to a non-resonant wavelength, the tunnelling current mainly stays at the background level as shown in Fig. \[Fig:principles\]c. In contrast, when the laser is tuned to a resonant wavelength of an Er$^{3+}$ ion, the photoionization of the Er$^{3+}$ ion leads to a rise in the tunnelling current, and the current then drops back upon neutralization, yielding a two-level current-time trace (Fig. \[Fig:principles\]d), which suggests that only a single Er$^{3+}$ ion is ionized (details in Methods). Figure \[Fig:principles\]e shows a photoionization spectrum of a single Er$^{3+}$ ion. Current-time traces are recorded at a series of photon energies, and then the histogram showing the distribution of current in time is plotted as a function of the photon energy detuning. The colour in Fig. \[Fig:principles\]e represents the time ($\Sigma t_{bin}$) during which the current stays within one bin, and a 0.02-nA bin size is used for all the analysis. ![image](Figure1V5.pdf){width="\textwidth"} As shown in Fig. \[Fig:principles\]a, the SET has a Si channel wrapped by the gate.
The SET is biased below the threshold voltage, so that the current tunnels through the corner regions of the Si channel[@sellier_subthreshold_2007]. Consequently, the charge sensor is more sensitive to the Er$^{3+}$ ions that are closer to the corner regions in the channel, and different Er$^{3+}$ ions have different capacitive coupling, leading to different detection sensitivity. The change in current (Fig. \[Fig:principles\]b-d) accords with the loss of an electron, indicating that it is due to the ionization of the Er centre, whereas the gain of an electron would lead to a shift opposite to that in Fig. \[Fig:principles\]b. The small fluctuations in current, which we attribute to trap states with weak capacitive coupling in the insulating layer or the oxide layer, can be suppressed by a proper anneal before the device fabrication. The readout efficiency is mainly limited by the efficiency of the excitation from the $^{4}$I$_{13/2}$ excited state into the conduction band, which can be increased to close to 100% by increasing the intensity of the light used to drive this final ionization step. We observe resonances via the photoionization spectroscopy mostly between 1,535 nm and 1,539 nm, which is consistent with the $^{4}$I$_{15/2}$ - $^{4}$I$_{13/2}$ transition of Er$^{3+}$ ions in silicon[@kenyon_erbium_2005; @vinh_photonic_2009]. In the next experiment, we study the Zeeman effect of individual Er$^{3+}$ ions, as the Zeeman effect is an essential tool to determine the site symmetry of Er centres. Er ions in the Si lattice tend to take the 3+ valence state, so the 4f electrons of Er$^{3+}$ ions have the ground state $^4$I$_{15/2}$ and the first excited state $^4$I$_{13/2}$[@kenyon_erbium_2005]. The degeneracy is lifted by the crystal field, so that each state splits into several levels depending on the symmetry of the Er centre[@kenyon_erbium_2005].
The transition between the lowest level of $^4$I$_{15/2}$ and the lowest level of $^4$I$_{13/2}$ is responsible for the strong emission band around 1.54 $\muup$m, and the Zeeman splitting of those two levels in the case of double degeneracy is shown in Fig. \[Fig:Zeeman\]a. The doublet states can be described by an effective spin $S$=1/2, and the Zeeman interaction has the form $H$=$\beta_e\mathbf{B}\cdot\mathbf{g}\cdot\mathbf{S}$, where $\beta_e$ is the electronic Bohr magneton, $\mathbf{B}$ is the magnetic field, and $\mathbf{g}$ is the $g$-factor matrix[@guillot-noel_hyperfine_2006]. The Zeeman splitting energy of the higher (lower) energy doublet is proportional to $g_H$ ($g_L$). As shown in Fig. \[Fig:Zeeman\]a, the energy difference between the two $\varDelta M_S$=$\pm$1 ($\varDelta M_S$=0) transitions can be described by $\varDelta E = \beta_e\varDelta g B$, where $\varDelta g$ is the $g$-factor difference. In this study, we measured the Zeeman splitting of four spectrally isolated Er resonances and observed $g$-factor differences ranging from 1.6 to 10.8. ![\[Fig:Zeeman\]The Zeeman effect of individual Er$^{3+}$ ions. **a,** Schematic diagrams showing the Zeeman splitting and optical transitions of Er$^{3+}$ ions in silicon. The splitting of the $^{4}$I$_{13/2}$ and $^{4}$I$_{15/2}$ states depends on the site symmetry of the Er centre. **b,** The Zeeman splitting scan of the Er resonance with the centred wavelength of 1,535.8 nm. Each pixel stands for a current-time trace recorded for 50 s. **c,** The Zeeman splitting scan of the Er resonance with the centred wavelength of 1,538.0 nm. ](Figure2K.pdf){width="\columnwidth"} Figure \[Fig:Zeeman\]b,c shows the Zeeman splitting of Er$^{3+}$ ions. Current-time traces are taken at a series of photon energies and magnetic fields, and each pixel in Fig. \[Fig:Zeeman\]b,c represents one current-time trace.
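The relation $\varDelta E = \beta_e\varDelta g B$ can be evaluated directly for the quoted $g$-factor differences. A small numeric sketch (the 0.1 T field value is an illustrative assumption, not a value from the text):

```python
# Zeeman splitting Delta_E = beta_e * Delta_g * B.
BETA_E_UEV_PER_T = 57.88  # Bohr magneton expressed in micro-eV per tesla

def splitting_ueV(delta_g, B_tesla):
    """Energy difference between the two transitions, in micro-eV."""
    return BETA_E_UEV_PER_T * delta_g * B_tesla

# Illustrative field of 0.1 T for the measured range of Delta_g:
for dg in (1.6, 4.8, 10.8):
    print(f"Delta_g = {dg:4.1f}: {splitting_ueV(dg, 0.1):5.1f} micro-eV")
```

This shows how strongly the splitting rate distinguishes the four resonances, since it scales linearly in both $\varDelta g$ and $B$.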
When an Er$^{3+}$ ion is ionized, the current will exceed a certain threshold, which is determined by the background current fluctuation under non-resonant illumination. For each current-time trace, the time ($t_{up}$) during which the current exceeds the threshold is integrated and gives the values ($\Sigma t_{up}$) plotted in Fig. \[Fig:Zeeman\]b,c. As shown in Fig. \[Fig:Zeeman\]b, the resonance shows up at a photon energy detuning of 1 $\muup$eV at zero magnetic field, and starts to split into two diagonal arms with increasing magnetic field. This is due to the Zeeman effect of one individual Er$^{3+}$ ion, with $\Delta g\approx 4.8$. Similarly, the Zeeman splitting of the resonance around 1,538.0 nm is studied as shown in Fig. \[Fig:Zeeman\]c; the rectangular regions denoted by the darkest blue colour are not scanned. There appear to be two resonances with similar resonant wavelengths and the same $g$-factor difference ($\Delta g\approx 3.3$) but with different signal intensity. This could be due to two individual Er$^{3+}$ ions with the same site symmetry but with different capacitive coupling. Furthermore, the Zeeman splitting of the resonance around 1,538.0 nm shows a polarization dependence. As shown in Fig. \[Fig:Zeeman\]c, the diagonal arm is weaker than the anti-diagonal one. By modifying the polarization of the light entering the cryostat, the diagonal arm could be tuned to be stronger than the anti-diagonal one. The site symmetry of individual Er centres can be determined from the polarization dependence together with a rotating-magnetic-field measurement. Spin-selective excitation, even for degenerate spin states, can be achieved at the maximum contrast of the polarization dependence, which allows spin readout without a magnetic field. The hyperfine structure is of great interest, as the nuclear spin has long coherence times for quantum information storage[@steger_quantum_2012; @hedges_efficient_2010; @simmons_entanglement_2011].
In addition, it provides strong evidence for distinguishing different ions from one another and from other defects. Erbium has six stable isotopes, among which only $^{167}$Er has a nonzero nuclear spin of $I$=7/2, leading to eight nuclear spin states. At high magnetic field, the hyperfine interaction can be treated as a perturbation of the Zeeman effect[@smith_hyperfine_1965], so each electron spin state will split into eight sublevels due to the hyperfine interaction (Fig. \[Fig:Hyperfine\]a). At low magnetic field, the hyperfine interaction is comparable to the Zeeman effect, so the sublevels will mix[@mcauslan_reducing_2012]. ![image](Figure3V12.pdf){width="\textwidth"} In order to investigate the hyperfine structure of $^{167}$Er$^{3+}$ ions, we implanted $^{167}$Er together with $^{168}$Er, which has zero nuclear spin, as a control. We first study the photoionization spectrum of an Er$^{3+}$ control ion with zero nuclear spin. The integrated time ($\Sigma t_{up}$) is plotted as a function of the photon energy detuning, as indicated by the blue dashed line in Fig. \[Fig:Hyperfine\]b. The same spectral asymmetry as that in Fig. \[Fig:principles\]e is observed. We attribute the asymmetry to the correlation between the Stark shift of the $^{4}$I$_{15/2}$ - $^{4}$I$_{13/2}$ resonance and a broadening of the Coulomb peak, both of which are sensitive to fluctuating electric fields in the channel. The fluctuating field is attributed to the laser excitation of trap states in or near the channel. Since we directly observe the effect on the Coulomb peak, it is possible to remove part of this broadening of the peak. After applying this correction (details in Methods), a minimum FWHM spectral width of 50 neV is observed, as indicated by the red solid line in Fig. \[Fig:Hyperfine\]c. To significantly reduce the linewidth further it will be necessary to reduce the density of trap states.
As well as the electric-field-induced shifts, the line is expected to be broadened through magnetic interactions with $^{29}$Si (by up to tens of neV) and with paramagnetic centres in the device. It is expected from analogy with observations in Er$^{3+}$:Y$_2$SiO$_5$[@sun_recent_2002] that applying a large magnetic field will suppress this broadening mechanism. In the following, we show the hyperfine structure of one $^{167}$Er$^{3+}$ ion. As shown in Fig. \[Fig:Hyperfine\]c, the photoionization spectrum taken at high magnetic field ($B=0.14$ T) reveals eight resonant peaks separated from each other by about 0.2 $\muup$eV. The high spectral resolution allows nuclear spin readout, with the potential for single-shot readout and manipulation of the nuclear spin states. As the addressability does not rely on a specific magnetic field, the photoionization spectra are measured at a series of magnetic fields, as shown in Fig. \[Fig:Hyperfine\]d. The Zeeman shift is subtracted to show the evolution of the hyperfine interaction. At high magnetic field, eight significant peaks are observed throughout the range 0.08 T $\leqslant B \leqslant$ 0.14 T, as the hyperfine interaction can be treated as a perturbation of the Zeeman effect. At low magnetic field, multiple resonances show up for -0.04 T $\leqslant B \leqslant$ 0.06 T, revealing the mixing of the hyperfine sublevels, since the hyperfine interaction is comparable to the Zeeman effect. The eight significant peaks, representing the eight different nuclear spin states of $^{167}$Er, demonstrate that the resonances are due to the $^{167}$Er$^{3+}$ ion rather than other ions or defects. These eight hyperfine peaks (Fig. \[Fig:Hyperfine\]c) correspond to the allowed transitions ($\varDelta M_I$=0) preserving the nuclear spin states, but it remains an open question whether they are due to the $\varDelta M_S$=0 or $\varDelta M_S$=$\pm$1 transitions. As shown in Fig.
\[Fig:Hyperfine\]a, we attribute them to the $\varDelta M_S$=0 transitions for two reasons. First, the energy difference between the two most distant hyperfine peaks is only about 1.7 $\muup$eV (Fig. \[Fig:Hyperfine\]c), which is much smaller than the typical splitting energy of the $\varDelta M_S$=$\pm$1 transitions of Er$^{3+}$ ions. Electron paramagnetic resonance measurements of $^{167}$Er$^{3+}$ ions in crystals show a splitting energy ($2\varDelta E_L$) of about 30 $\muup$eV[@yang_electron_2009; @bertaina_rare-earth_2007], which corresponds to the $\varDelta M_S$=$\pm$1 transitions. Second, a ninth peak shows up beyond the region between the two most distant peaks (at a photon energy detuning of -1.1 $\muup$eV in Fig. \[Fig:Hyperfine\]c). The ninth peak is much weaker than the other eight peaks but still recognizable, and we attribute it to a forbidden transition. The energy of the forbidden transitions ($\varDelta M_I$=$\pm$1) of Er$^{3+}$ ions can exceed the region between the two most distant peaks of the allowed transitions only in the case of the $\varDelta M_S$=0 transitions. Consequently, the eight significant peaks are attributed to the $\varDelta M_S$=0 transitions, and the splitting energy is expressed as $|\varDelta E_H-\varDelta E_L|$=1.7 $\muup$eV. Hybrid optical/electrical access to single spins of individual ions in a nano-transistor has been demonstrated; the approach is applicable to other defects in solids. Specifically, with an Er-implanted SET the photoionization spectroscopy allows real-time observation of single optical excitation events, avoiding the bottleneck of photon collection. Furthermore, high-resolution optical frequency addressing circumvents the limitations due to thermal broadening in earlier electrical detection of impurity spins[@morello_single-shot_2010].
Our findings open the way to optically address and manipulate the electron and nuclear spin states of an individual defect in a solid beyond the nitrogen-vacancy centre in diamond. In addition, this hybrid optical/electrical technique extends the microstructural study of ions in a semiconductor to the single-site level, including their microscopic configuration and their electrical and optical activity. An approach that combines dopant ions (e.g. Er, P) with quantum optical control and semiconductor fabrication technologies represents an attractive platform to realize a scalable quantum computation and communication architecture. Such a system could consist of individual ions inside a ring cavity coupled with each other via photons, and nearby charge-sensing devices used to read out the spin states of individual ions and to control the coupling between ions by Stark tuning. The ring cavities can be connected by optical waveguides, which enable quantum information transfer between individual ions in different ring cavities. Here we demonstrated the first step towards such a system, i.e. optical addressing of individual ions, and further improvement can be made by reducing the observed linewidth as discussed previously. However, there are essential questions to be addressed in the future, such as the electron and nuclear spin coherence times of Er (P) ions, the influence of photoionization on nuclear spin coherence, and spin-photon entanglement. **METHOD SUMMARY** The devices are fabricated with the same technique as in the previous study[@lansbergen_gate-induced_2008]. After complete device fabrication, an Er:O co-implantation (dose ratio 1:6) is performed with implantation energies of 400 keV and 55 keV, respectively. There should be approximately 30-40 Er ions in the sensitive region of one Coulomb peak. Under the erbium implantation conditions we used, the beam is estimated to have been composed of 70-80% $^{168}$Er and 20-30% $^{167}$Er.
The devices are then annealed at 700$^\circ$C in N$_{2}$ for 10 minutes to remove the implantation damage and to initiate the formation of Er centres. All the measurements are carried out in a liquid helium cryostat at 4.2 K. The laser beam, with 4-5 mW optical power, goes through a single-mode fiber and illuminates a spot on the sample about 1 mm in diameter. In the initial phase of the experiments (Figs. \[Fig:principles\] and \[Fig:Zeeman\]), a commercial tunable laser with an external cavity is used. To maintain high precision, we set one centred wavelength with the motor-actuator, and sweep the wavelength around the centred wavelength with the piezo-actuator. In the high-resolution experiments (Fig. \[Fig:Hyperfine\]), the wavelength of another laser is stabilized to about 0.01 pm, and a wavelength meter is used to compensate for the thermal drift. **Acknowledgements** We thank R. Ahlefeldt, J. Bartholomew, R. Elliman, N. Manson and A. Morello for discussions. We also thank M. Hedges and T. Lucas for their help in the initial phase of the experiments. The devices were fabricated by N. Collaert and S. Biesemans (IMEC). The work was financially supported by the ARC Centre of Excellence for Quantum Computation and Communication Technology (CE110001027), and the Future Fellowships (FT100100589 and FT110100919). **Author Contribution** N.S. and J.C.M. designed and performed the implantation. C.M.Y., M.J.S. and S.R. designed and conducted the experiments. C.M.Y., M.R. and G.G.d.B. carried out the experiments. All the authors contributed to analysing the results and writing the paper. Correspondence and requests for materials should be addressed to S.R. (s.rogge@unsw.edu.au). **METHODS** **Details of the devices.** The devices used in this study are n-p-n field-effect transistors with a polycrystalline silicon gate wrapped around the p-type silicon channel, separated by the gate dielectric. The p-type channel has a boron doping of 10$^{18}$ cm$^{-3}$.
After complete device fabrication, an Er:O co-implantation is performed with implantation energies of 400 keV and 55 keV and ion fluences of $4\times10^{12}$ cm$^{-2}$ and $3\times10^{13}$ cm$^{-2}$, respectively. This leads to an Er:O dose ratio of about 1:6 in the channel region. Under the erbium implantation conditions we used, the beam is estimated to have been composed of 70-80% $^{168}$Er and 20-30% $^{167}$Er. The presence of both oxygen impurities[@kenyon_erbium_2005] and boron impurities$^{31}$ is known to enhance the luminescence of the Er$^{3+}$ ions in silicon. The 700$^\circ$C post-implantation anneal is within the thermal processing window for Er centre activation in silicon$^{31}$. In the experiments, the device is biased below the threshold voltage, and only the corner regions of the silicon channel go into inversion[@sellier_subthreshold_2007]. A peak of the $I$-$V_G$ curve is due to the Coulomb blockade in one of the two corner regions, where the current flows. The sensitive region is defined as the region in which a change of one elementary charge can be detected, taking the current noise and the transconductance of the Coulomb peak into account. The sensitive region of one Coulomb peak is estimated to be the corresponding channel-corner region with a dimension of $100\times 50\times 20$ nm (length $\times$ width $\times$ height) for the device shown in Fig. 1a. Simulations of the ion implantation based on SRIM$^{32}$ show that there should be approximately 30-40 Er ions in the sensitive region of one Coulomb peak. **Experimental details and data analysis.** All the measurements are carried out in a liquid helium cryostat at 4.2 K. The laser beam, with 4-5 mW optical power, goes through a single-mode fiber and illuminates a spot on the sample about 1 mm in diameter. In the initial phase of the experiments (Figs. \[Fig:principles\] and \[Fig:Zeeman\]), a commercial tunable laser with an external cavity is used.
To maintain high precision, we set one centred wavelength with the motor-actuator, and sweep the wavelength around the centred wavelength with the piezo-actuator. The current-time traces in Fig. \[Fig:principles\]c and Fig. \[Fig:principles\]d, taken at two different photon energies, are consistent with the photoionization spectrum, as indicated by the green and red diamonds in Fig. \[Fig:principles\]e, respectively. For instance, the current mainly stays at the background level (0.4 nA) at a photon energy detuning of -5 $\muup$eV, while the current jumps between two levels (1.8 nA and 0.4 nA) at a photon energy detuning of 4 $\muup$eV. It is worth noting that the two-level trace (Fig. \[Fig:principles\]d) suggests that only a single Er$^{3+}$ ion is ionized. Multiple ions with different capacitive coupling will lead to a current-time trace with more than two levels, while two ions with the same capacitive coupling will lead to a current-time trace with three levels once they are simultaneously ionized. We attribute the charge displacement to the ionization of an Er$^{3+}$ ion rather than to charge fluctuations of the trap states, based on the observation that all the Er$^{3+}$ ions that we observed contribute a shift of the Coulomb peak towards lower gate voltage. In the high-resolution experiments (Fig. \[Fig:Hyperfine\]), the wavelength of another laser is stabilized to about 0.01 pm, and a wavelength meter is used to compensate for the thermal drift. The asymmetry, as well as part of the broadening of the resonant peak, is removed by adding a photon energy offset to the data; the time during which the current exceeds the threshold is then integrated and gives the values plotted as the red solid line in Fig. \[Fig:Hyperfine\]b,c.
In comparison, the red solid line (with the broadening removed) shows smaller widths and less noise than the blue dashed line (without the broadening removed); nevertheless, the resonances in the latter are still clearly visible, as shown in Fig. \[Fig:Hyperfine\]b,c. **References** [$^{31}$ J. Michel, J. L. Benton, R. F. Ferrante, D. C. Jacobson, D. J. Eaglesham, E. A. Fitzgerald, Y.-H. Xie, J. M. Poate, and L. C. Kimerling, Journal of Applied Physics **70**, 2672 (1991).]{} [$^{32}$ J. F. Ziegler, M. Ziegler, and J. Biersack, Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms **268**, 1818 (2010).]{}
--- abstract: 'Given a polygon $P$ in the plane, a [*pop*]{} operation is the reflection of a vertex with respect to the line through its adjacent vertices. We define a family of alternating polygons, and show that any polygon from this family cannot be convexified by pop operations. This family contains simple, as well as non-simple (i.e., self-intersecting) polygons, as desired. We thereby answer in the negative an open problem posed by Demaine and O’Rourke [@DO07 Open Problem 5.3].' author: - 'Adrian Dumitrescu[^1]' - 'Evan Hilscher [^2]' title: On convexification of polygons by pops --- **Keywords**: Polygon convexification, edge-length preserving transformation, pop operation. Introduction ============ Consider a polygon $P=\{p_1,\ldots,p_n\}$ in the plane, which may be simple or self-intersecting. A [*pop*]{} operation is the reflection of a vertex, say $p_i$, with respect to the line through its adjacent vertices $p_{i-1}$ and $p_{i+1}$ (as usual, indexes are taken modulo $n$, i.e., $p_{n+1}=p_1$) [@Ba03]. Observe that for the operation to be well-defined we need $p_{i-1}$ and $p_{i+1}$ to be distinct. This operation belongs to the larger class of edge-length preserving transformations, when applied to polygons [@Ba03; @Ro91; @RW94; @Sa73; @SB92]. It seems to have been used for the first time by Millett [@Mi94]. If instead of reflecting $p_i$ with respect to the line through its adjacent vertices $p_{i-1}$ and $p_{i+1}$, the reflection is executed with respect to the midpoint of $p_{i-1}$ and $p_{i+1}$, the operation is called a [*popturn*]{}; see [@ABB+07; @Ba03]. Observe that both the pop and the popturn are single-vertex operations. Each is an instance of a “flip”, an informally defined notion that has been studied at length. The most common variant of flip is the [*pocket flip*]{} (or just [*flip*]{}), first considered by Erd[ö]{}s [@Er35].
Another variant is the [*flipturn*]{}, first considered by Kazarinoff, and later by Joss and Shannon; see [@DGO+06b; @Gr95] for an account of their results. In contrast with pops and popturns, both the flip and the flipturn may involve multiple vertices. The inverse of a pocket flip, called [*deflation*]{}, has been also considered [@DDF+08; @FHM+00]. We briefly describe pocket flips and pocket flipturns next. Assume that we deal with simple polygons in this paragraph. A *pocket* is a region exterior to the polygon but interior to its convex hull, bounded by a subchain of the polygon edges and the pocket *lid*, the edge of the convex hull connecting the endpoints of that subchain; see e.g., [@DO07 p. 74]. Observe that any non-convex polygon has at least one pocket. A [*flip*]{} of a pocket consists of reflecting the pocket about the line through the pocket lid. Instead, a [*flipturn*]{} of a pocket consists of reflecting the pocket about the midpoint of the pocket lid. Observe that if $P$ is simple and non-convex, the polygons resulting after a pocket flip, or a pocket flipturn are again simple. It is known that within both of these variants, convexification can be achieved. More precisely: given a simple polygon, it can be convexified by a finite sequence of pocket flips [@DGO+06b; @Gr95; @GZ01; @KB61; @Na39; @Re57; @To99; @W93; @Yu57]. Similarly, it can be convexified by a finite sequence of pocket flipturns [@Gr95]. Moreover, the first result continues to hold for self-intersecting polygons, under broad assumptions, see [@DGO+06b]. While the convexifying sequence can be arbitrarily long for pocket flips (i.e., irrespective of $n$, the number of vertices), a quadratic number of operations always suffices in the case of flipturns [@ABC+00; @ACD+02; @Bi06]. There is an extensive bibliography pertaining to these subjects [@ABC+00; @ADE+01; @ACD+02; @ABB+07; @Ba03; @Bi06; @DGO+06b; @DO07; @Er35; @Gr95; @GZ01; @Ka81; @KB61; @Na39; @Re57; @To99; @W93; @Yu57]. 
See also [@Ba03; @Ro91; @RW94; @Sa73; @SB92] for more results on edge-length preserving transformations and chord stretching. In this paper we focus on pop operations. Thurston gave an example of a simple polygon that becomes self-intersecting with any pop; see [@DO07 p. 81]. Ballinger and Thurston showed (according to [@DO07 p. 81]) that almost any simple polygon can be convexified by pops if self-intersection is permitted; however, no proof has been published. As Ballinger writes in his thesis [@Ba03], “pops are very natural transformations to consider, but the analysis of polygon convexification by pops seems very tricky”. It has remained an open problem whether there exist polygons that cannot be convexified by pops [@DO07 Open Problem 5.3]. We show here that such polygons do indeed exist, from both classes, simple or self-intersecting, thereby answering the above open problem in its full generality. In Section \[sec:alt\], for every even $n \geq 6$, we define a family $\A_n$ of [*alternating polygons*]{}, and show that any polygon from this family cannot be convexified by pop operations. This family contains simple, as well as non-simple (i.e., self-intersecting) polygons, as desired. It is interesting that this family is closed under pop operations: any pop operation applied to a polygon in $\A_n$, at any vertex, yields a polygon in $\A_n$. Alternating polygons {#sec:alt} ==================== Recall that in order for the pop operation on a vertex $p_i$ to be well defined, its neighbors $p_{i-1}$ and $p_{i+1}$ need to be distinct, so that the reflection line through them is unique; hence the reflection of $p_i$ is also unique. A condition on the edge lengths of the polygon that guarantees this is that no two edges have the same length; such a polygon is called [*scalene*]{} [@Ba03 p. 24]. A weaker condition that suffices is that no two consecutive edges have the same length; we call such polygons [*weakly scalene*]{}.
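For concreteness, the pop operation can be sketched in a few lines of code. The following is illustrative and not part of the paper; points are plain coordinate tuples, and the hairpin case (coinciding neighbors) is rejected, matching the discussion above.

```python
# Illustrative sketch of the pop operation: reflect vertex p across the
# line through its two (distinct) neighbours prev and nxt.

def pop(prev, p, nxt):
    """Reflect p across the line through prev and nxt (prev != nxt)."""
    ax, ay = prev
    dx, dy = nxt[0] - ax, nxt[1] - ay
    if dx == 0 and dy == 0:
        raise ValueError("hairpin vertex: the reflection line is not unique")
    # foot of the perpendicular from p onto the line, then mirror through it
    t = ((p[0] - ax) * dx + (p[1] - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy
    return (2 * fx - p[0], 2 * fy - p[1])

print(pop((0, 0), (1, 1), (2, 0)))   # (1.0, -1.0); popping again restores p
# a vertex on the x-axis whose neighbours lie on the y-axis pops to its
# mirror image through the origin (the reflection line is the y-axis)
print(pop((0, 2), (3, 0), (0, -1)))  # approximately (-3, 0)
```

The second example anticipates the behavior of the alternating polygons studied next: for a vertex on one axis whose neighbors both lie on the other axis, a pop simply negates the vertex.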
The family of polygons $\A_n$ that we define below consists of weakly scalene polygons. If $p_{i-1}$ and $p_{i+1}$ coincide, $p_i$ is called a [*hairpin vertex*]{} [@ABB+07]. Popping a hairpin vertex is undefined because infinitely many reflection lines pass through $p_{i-1}$ and $p_{i+1}$. Our family of polygons is specifically designed to avoid any occurrence of hairpin vertices. See [@ABB+07] for a possible adaptation of pops to hairpin vertices. Let $n$ be even. Fix a coordinate system in the plane. We say that a polygon $P=\{p_1,p_2,\ldots,p_n\}$ with $n$ distinct vertices is [*alternating*]{} if its vertices lie alternately on the two axes: say, the vertices with [*odd*]{} indexes on the $x$-axis, and the vertices with [*even*]{} indexes on the $y$-axis. See Fig. \[f1\] for an illustration. Let $n=2k$. Let $\x=(x_1,x_2,\cdots,x_k)$, and $\y=(y_1,y_2,\cdots,y_k)$ be two vectors in the positive orthant of ${\mathbb{R}}^k$, each having distinct nonzero coordinates, that is: $$\begin{aligned} \label{E1} i \in \{1,2,\ldots,k\} &\quad\Rightarrow\quad x_i>0 {\rm \ and \ } y_i>0, \\ i,j \in \{1,2,\ldots,k\} {\rm \ and \ } i \neq j &\quad\Rightarrow\quad x_i \neq x_j {\rm \ and \ } y_i \neq y_j. \end{aligned}$$ Let $\sigma=(\sigma_1,\sigma_2,\cdots,\sigma_{2k}) \in \{-1,+1\}^{2k}$ be a binary sign vector. Consider the alternating polygon $A(\x,\y,\sigma)=\{p_1,p_2,\ldots,p_{2k}\}$, where - $p_{2i+1}=(\sigma_{2i+1} \cdot x_{i+1},0)$, for $i=0,\ldots,k-1$. - $p_{2i}=(0,\sigma_{2i} \cdot y_i)$, for $i=1,\ldots,k$. Let $\A_n$ ($\equiv \A_{2k}$) be the family of all alternating polygons $A(\x,\y,\sigma)$ defined as above. First note that $\A_n$ contains both simple, as well as non-simple (i.e., self-intersecting) polygons. Indeed, consider the polygon $P_1$ described next. Let $x_1=y_1=k$, and $x_i=y_i=k-i+1$, for $i=2,\ldots,k$. Let $\sigma=(+1,+1,-1,\ldots,-1)$. It is easy to see that $P_1 \in \A_n$ is a simple polygon. An example is shown in Fig.
\[f1\] (right). Consider now the polygon $P_2$ described as follows. Let $x_i=y_i=k-i+1$, for $i=1,\ldots,k$. Let $\sigma=(+1,\ldots,+1)$. It is easy to see that $P_2 \in \A_n$ is a self-intersecting polygon. An example is shown in Fig. \[f1\] (middle). A sequence of pops executed on an alternating simple polygon with $6$ vertices appears in Fig. \[f2\]. A key fact regarding alternating polygons is the following: \[L1\] If $P \in A_{2k}$ is convex, then $k \leq 2$. Since $P$ is convex, it intersects each of the coordinate axes in at most two points, unless it is tangent to one of the coordinate axes, and there are three consecutive collinear vertices on that axis. However this latter possibility would contradict the alternating property of $P$. So the only alternative is the former, in which case we have $k \leq 2$. Observe that the given inequality on $k$ cannot be improved. The following properties are easy to verify: 1. $A(\x,\y,\sigma)$ has $2k$ distinct vertices. 2. $A(\x,\y,\sigma)$ is weakly scalene. 3. The pop operation applied to the vertex $p_i$ of $A(\x,\y,\sigma)$, ($1 \leq i \leq 2k$), yields $A(\x,\y,\sigma')$, where $\sigma'$ differs from $\sigma$ only in the $i$th bit. That is, the absolute value of the non-zero coordinate of $p_i$ remains the same, with the point switching to its mirror image with respect to the origin of the axes. In particular, this implies that the family $\A_{2k}$ is closed with respect to pop operations. 4. Let $\x,\y$ be fixed, with the above properties, and $\sigma, \sigma'$ be two sign vectors. Consider $P=A(\x,\y,\sigma)$, and $P'=A(\x,\y,\sigma')$. Then $P'$ can be obtained from $P$ by executing at most $n$ pops, via: For $i=1$ to $n$ do: if $\sigma_i \neq \sigma'_i$, then pop $p_i$ to $p'_i$. We are now ready to prove our main result: \[T1\] Let $n=2k$, where $k \geq 3$. Any polygon in the family $\A_n$ is non-convexifiable by pop operations. Consider a polygon $P \in A_{2k}$. 
(We can choose $P$ simple, or self-intersecting, as desired.) By Lemma \[L1\], $P$ is not convex. Apply any finite sequence of pop operations. By the closure property (3.) above, the resulting polygon also belongs to $\A_{2k}$, and is therefore not convex. Conclusion {#sec:conclusion} ========== We have shown that there exists a family of polygons that cannot be convexified by a finite sequence of pops. However, there exist many polygons that can be convexified in this way. We conclude with two questions: 1. What is the computational complexity of deciding whether a given (simple or self-crossing) polygon can be convexified by a finite sequence of pops? 2. How hard is it to find a shortest sequence of pops that convexifies a given polygon (assuming it is convexifiable in this way)? Do good approximation algorithms exist for this problem? [9]{} H.-K. Ahn, P. Bose, J. Czyzowicz, N. Hanusse, E. Kranakis, and P. Morin: Flipping your lid, [*Geombinatorics*]{}, [**10(2)**]{} (2000), 57–63. O. Aichholzer, E. D. Demaine, J. Erickson, F. Hurtado, M. Overmars, M. Soss, G. Toussaint: Reconfiguring convex polygons, [*Computational Geometry: Theory and Applications*]{}, [**20(1-2)**]{} (2001), 85–95. O. Aichholzer, C. Cortés, E. D. Demaine, V. Dujmović, J. Erickson, H. Meijer, M. Overmars, B. Palop, S. Ramaswami, and G. Toussaint: Flipturning polygons, [*Discrete & Computational Geometry*]{}, [**28**]{} (2002), 231–253. G. Aloupis, B. Ballinger, P. Bose, M. Damian, E. D. Demaine, M. L. Demaine, R. Flatland, F. Hurtado, S. Langerman, J. O’Rourke, P. Taslakian, and G. Toussaint: Vertex pops and popturns, [*Proceedings of the 19th Canadian Conference on Computational Geometry*]{}, (CCCG 2007), Ottawa, pp. 137–140. B. Ballinger: [*Length-Preserving Transformations on Polygons*]{}, PhD Thesis, University of California, Davis, 2003. T. Biedl: Polygons needing many flipturns, [*Discrete & Computational Geometry*]{}, [**35**]{} (2006), 131–141. E. D. Demaine, M. L. Demaine, T. Fevens, A.
Mesa, M. Soss, D. Souvaine, P. Taslakian, and G. Toussaint: Deflating the pentagon, in [*Computational Geometry and Graph Theory*]{}, Volume 4535/2008 of LNCS, pp. 56–67. E. D. Demaine, B. Gassend, J. O’Rourke, and G. Toussaint: Polygons Flip Finitely[…]{} Right?, [*Contemporary Mathematics*]{}, [**453**]{} (2006), 231–255. E. D. Demaine and J. O’Rourke: [*Geometric Folding Algorithms: Linkages, Origami, Polyhedra*]{}, Cambridge Univ. Press, Cambridge, 2007. P. Erd[ö]{}s, Problem number 3763, [*American Mathematical Monthly*]{}, [**42(10)**]{} (1935), 627. T. Fevens, A. Hernandez, A. Mesa, P. Morin, M. Soss, and G. Toussaint: Simple polygons with an infinite sequence of deflations, [*Beitr[ä]{}ge zur Algebra und Geometrie*]{}, [**42**]{} (2001), 307–311. B. Gr[ü]{}nbaum: How to convexify a polygon, [*Geombinatorics*]{}, [**5**]{} (1995), 24–30. B. Gr[ü]{}nbaum and J. Zaks: Convexification of polygons by flips and by flipturns, [*Discrete Mathematics*]{}, [**241**]{} (2001), 333–342. T. Kaluza: Problem 2: Konvexieren von Polygonen, [*Mathematische Semesterberichte*]{}, [**28**]{} (1981), 153–154. N.D. Kazarinoff and R.H. Bing: On the finiteness of the number of reflections that change a non-convex plane polygon into a convex one \[In Russian\], [*Matematicheskoe Prosveshchenie*]{}, [**36**]{} (1961), 205–207. K. Millett: Knotting of regular polygons in 3-space, [*Journal of Knot Theory and its Ramifications*]{}, [**3(3)**]{} (1994), 263–278. B. de Sz. Nagy: Solution to problem number 3763, [*American Mathematical Monthly*]{}, [**46(3)**]{} (1939), 176–177. Yu.G. Reshetnyak: On a method of transforming a non-convex polygonal line into a convex one \[in Russian\], [*Uspehi Mat. Nauk (N.S.)*]{}, [**12(3)**]{} (1957), 189–191. S.A. Robertson: Inflation of plane curves, [*Geometry and Topology of Submanifolds*]{}, III, World Scientific, 1991, pp. 264–275. S.A. Robertson and B. Wegner: Full and partial inflation of plane curves, [*Intuitive Geometry, Colloquia Math.
Soc. János Bolyai*]{}, vol. 63, North-Holland 1994, pp. 389–401. G. Sallee: Stretching chords of space curves, [*Geometriae Dedicata*]{}, [**2**]{} (1973), 311–315. J. Stratzen and J. Brooks: A chord stretching map of a convex loop is an isometry, [*Geometriae Dedicata*]{}, [**41**]{} (1992), 51–62. G. Toussaint: The Erd[ö]{}s-Nagy Theorem and its Ramifications, [*Computational Geometry: Theory and Applications*]{}, [**31(3)**]{} (1999), 219–236. B. Wegner: Partial inflation of closed polygons in the plane, [*Beitr[ä]{}ge zur Algebra und Geometrie*]{}, [**34**]{} (1993), 77–85. A.Ya Yusupov: A property of simply-connected non-convex polygons \[in Russian\], [*Uchen. Zapiski Buharsk. Gos. Pedagog. Instituta*]{}, Tashkent, 1957, pp. 101-103. [^1]: Department of Computer Science, University of Wisconsin–Milwaukee, WI 53201-0784, USA. Email: `ad@cs.uwm.edu`. Supported in part by NSF CAREER grant CCF-0444188. [^2]: Department of Computer Science, University of Wisconsin–Milwaukee, WI 53201-0784, USA. Email: `hilscher@uwm.edu`.
--- abstract: 'A *biased graph* is a graph $G$, together with a distinguished subset ${\mathcal{B}}$ of its cycles so that no theta-subgraph of $G$ contains precisely two cycles in ${\mathcal{B}}$. A large number of biased graphs can be constructed by choosing $G$ to be a complete graph, and ${\mathcal{B}}$ to be an arbitrary subset of its Hamilton cycles. We show that, on the logarithmic scale, the total number of simple biased graphs on $n$ vertices does not asymptotically exceed the number that can be constructed in this elementary way.' address: 'University of Waterloo, Waterloo, Canada' author: - Peter Nelson - Jorn van der Pol bibliography: - 'biasedcliques.bib' title: On the number of biased graphs --- Introduction ============ A *theta-graph* is a graph that is the union of three internally disjoint paths between distinct vertices $u$ and $v$. Such a graph contains exactly three cycles (i.e. connected $2$-regular subgraphs). A *biased graph* is a pair $(G,{\mathcal{B}})$, where $G$ is a finite graph, and ${\mathcal{B}}$ is a subset of the collection of cycles of $G$ for which no theta-subgraph of $G$ contains precisely two cycles in ${\mathcal{B}}$. We call the cycles in ${\mathcal{B}}$ the *balanced* cycles. If $G$ contains no loops or multiple edges, we call $(G,{\mathcal{B}})$ a *simple* biased graph. This paper considers the number of simple biased graphs on a fixed $n$-element vertex set. Let ${\mathbf{B}}_n$ denote the collection of all simple biased graphs $(G,{\mathcal{B}})$ for which $V(G) = [n] = \{1, \dotsc, n\}$. One can easily see that ${\mathbf{B}}_n$ is rather large, even when we restrict to just its members for which $G = K_n$ is a complete graph. For $n \ge 3$, the clique $K_n$ has exactly $\frac{1}{2}(n-1)!$ Hamilton cycles, and no theta-subgraph of $K_n$ contains more than one Hamilton cycle. It follows that $(K_n,{\mathcal{B}})$ is a biased graph whenever ${\mathcal{B}}$ contains only Hamilton cycles.
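Both facts just used (the count of Hamilton cycles, and that no theta-subgraph contains two of them) can be checked by brute force for small $n$. The illustrative script below does so for $n = 5$, using the characterisation made precise in the next section: two cycles of $K_n$ lie in a common theta-subgraph exactly when their intersection (vertices and edges) is a nonempty path.

```python
# Brute-force check for n = 5: K_5 has (1/2)(n-1)! = 12 Hamilton cycles,
# and no two of them lie in a common theta-subgraph.

from itertools import combinations, permutations
from math import factorial

def cycles(n):
    """All cycles of K_n on vertex set {0, ..., n-1}, as edge sets."""
    out = set()
    for k in range(3, n + 1):
        for verts in combinations(range(n), k):
            for perm in permutations(verts[1:]):
                seq = (verts[0],) + perm
                out.add(frozenset(frozenset((seq[i], seq[(i + 1) % k]))
                                  for i in range(k)))
    return out

def verts_of(edges):
    return frozenset(v for e in edges for v in e)

def is_path(edges):
    """Do these edges form a single nonempty path?"""
    if not edges:
        return False
    deg = {}
    for e in edges:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
    # maximum degree 2 together with |E| = |V| - 1 forces a single path
    return all(d <= 2 for d in deg.values()) and len(edges) == len(deg) - 1

def overlap_adjacent(c, d):
    """Is the intersection of c and d (vertices and edges) a nonempty path?"""
    shared = c & d
    return is_path(shared) and verts_of(shared) == verts_of(c) & verts_of(d)

n = 5
cs = cycles(n)
ham = [c for c in cs if len(c) == n]
print(len(cs), len(ham))  # 37 cycles of K_5 in total, 12 of them Hamilton
```

The total of 37 cycles also matches the vertex count of the overlap graph computed later, $\sum_{k=0}^{n-3} n!/(2\,k!(n-k)) = 12 + 15 + 10$ for $n = 5$.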
There are $2^{\frac{1}{2}(n-1)!}$ such ${\mathcal{B}}$, and therefore $|{\mathbf{B}}_n| \ge 2^{\frac{1}{2}(n-1)!}$. Our main theorem shows that this lower bound is in fact correct up to lower-order terms in the exponent.[^1] \[main\] $2^{\frac{1}{2}(n-1)!} \le |{\mathbf{B}}_n| \le 2^{\frac{1}{2}(n-1)!\left(1 + 12\sqrt{\frac{\log n}{n}}\right)}$ for all $n \ge 3$. We have made no particular effort to optimize the value of the constant $12$, which can likely be reduced at the expense of increasing the lower bound for $n$. Biased graphs were defined as an abstraction of the notion of a graph with group labellings on the edges. These are usually defined in terms of a graph with oriented edges; we give an equivalent definition where the vertex set has a canonical ordering. A *group-labelling* of a graph $G = (V,E)$ with $V = [n]$ is a function $\gamma {\colon}E \to \Gamma$ for some group $\Gamma$. Given such a function, for each cycle $C = v_1v_2\dotsc v_kv_1$ of $G$, let $\sigma(C) = \prod_{i \in {\mathbb{Z}}_k} \gamma(v_iv_{i+1})^{a_i}$, where $a_i = 1$ if $v_{i+1} > v_i$ and $a_i = -1$ otherwise. We say that $C$ is *balanced* with respect to $\gamma$ if $\sigma(C)$ is the identity in $\Gamma$; this definition ostensibly depends on the choice of ordering of the vertices in $C$, but turns out to be independent of this choice. It is shown in [@Zaslavsky1989] that the collection ${\mathcal{B}}$ of balanced cycles in this sense obeys the theta-property; i.e. $(G,{\mathcal{B}})$ is a biased graph. A biased graph $(G,{\mathcal{B}})$ with $V(G) = [n]$ is *group-labellable* if it arises from a group-labelling of $G$ in this way. Using quite different techniques from those in Theorem \[main\], we bound the number of biased graphs arising from labellings over an abelian group. \[abelian\] For each $n \ge 1$, the number of simple biased graphs with vertex set $[n]$ that arise from an abelian-group-labelling is at most $2^{\frac{1}{4}n^5 \log n}$. 
In the light of Theorem \[main\], this shows that such graphs are rare indeed. We have not obtained such a theorem for general groups, but we conjecture that group-labellable biased graphs are still rare. \[grouplabel\] If ${\mathbf{H}}_n$ is the set of simple group-labellable biased graphs with vertex set $[n]$, then $\lim\limits_{n \to \infty}|{\mathbf{H}}_n|/|{\mathbf{B}}_n| = 0$. The overlap graph ================= For each integer $n$, let $K_n$ denote the complete graph with vertex set $[n] = \{1,\dotsc, n\}$, and let ${\mathcal{C}}$ be the set of cycles of $K_n$. Let $\Omega_n$ be the graph on vertex set ${\mathcal{C}}$, in which two vertices $C,C'$ are adjacent if and only if some theta-subgraph of $K_n$ contains $C$ and $C'$ (or, equivalently, the cycles $C$ and $C'$ intersect in a nonempty path). Call this the *overlap graph* of order $n$. Summing the number of $(n-k)$-cycles for each $k$, we see that the number of vertices in $\Omega_n$ is $\sum_{k=0}^{n-3} \frac{n!}{2k!(n-k)}$. We will need some estimates on this quantity; to this end, for each $n \ge 3$ define $S_n = \sum_{k=0}^{n-3}\frac{1}{k!(n-k)}$ \[bound\_sn\] $\frac{{\mathrm{e}}}{n} < S_n < \frac{{\mathrm{e}}}{n} + \frac{5}{n^2}$ for all $n \ge 5$. The statement is easy to check when $n = 5$; let $n > 5$. We have $$nS_n = \sum_{k=0}^{n-3} \frac{1}{k!} + \sum_{k=0}^{n-3} \frac{k}{k!(n-k)} = \sum_{k=0}^{n-3}\frac{1}{k!} + \sum_{k=0}^{n-4} \frac{1}{k!(n-1-k)} = \sum_{k=0}^{n-3} \frac{1}{k!} + S_{n-1},$$ which implies ${\mathrm{e}}- \frac{2}{(n-3)!} + S_{n-1} < nS_n < {\mathrm{e}}+ S_{n-1}$. Using $\frac{2}{(n-3)!} < \frac{{\mathrm{e}}}{n-1}$ and induction we have $nS_n > {\mathrm{e}}- \frac{2}{(n-3)!} + S_{n-1} > {\mathrm{e}}- \frac{{\mathrm{e}}}{n-1} + \frac{{\mathrm{e}}}{n-1},$ which gives the lower bound. 
For the upper bound, first note that $n \ge 6$ implies that $ 2 - \frac{5}{n} > \frac{5}{n-1}$, which gives $\frac{5}{n} - \frac{{\mathrm{e}}}{n-1} > \frac{5}{n} - \frac{3}{n-1} = \frac{1}{n-1}\left(2-\frac{5}{n}\right) > \frac{5}{(n-1)^2}$. Using this and induction, we get $nS_n < {\mathrm{e}}+ S_{n-1} < {\mathrm{e}}+ \frac{{\mathrm{e}}}{n-1} + \frac{5}{(n-1)^2} < {\mathrm{e}}+ \frac{5}{n}$. Therefore $S_n < \frac{{\mathrm{e}}}{n} + \frac{5}{n^2}$ as required. We use the upper bound in the following estimate freely. For all $n \ge 5$ we have $\frac{e}{2}(n-1)! < |V(\Omega_n)| < 2(n-1)!$. As observed, we have $|V(\Omega_n)| = \frac{n!}{2}S_n$. The claimed bounds follow from Lemma \[bound\_sn\] and the fact that $\frac{e}{n} + \frac{5}{n^2} < \frac{3}{n} + \frac{5}{n^2} \le \frac{4}{n}$ for $n \ge 5$. Call a subset ${\mathcal{B}}\subseteq V(\Omega_n)$ *scarce* if it does not span any edge (i.e. it is a stable set of $\Omega_n$); equivalently, every theta-subgraph of $K_n$ contains at most one cycle from ${\mathcal{B}}$. Clearly, if ${\mathcal{B}}$ is scarce, then $(K_n, {\mathcal{B}})$ is a biased clique; we call a biased clique $(K_n, {\mathcal{B}})$ *scarce* if and only if ${\mathcal{B}}$ is scarce. The set of Hamilton cycles in $K_n$ is scarce. We show that in fact it is the unique scarce biased clique with the maximum number of balanced cycles. Let ${\mathcal{S}_{{n}}}$ denote the set of permutations of $[n]$. \[lemma:lym\] If $n \ge 3$ and $(K_n,{\mathcal{B}})$ is a scarce biased clique, then $|{\mathcal{B}}| \le \tfrac{1}{2}(n-1)!$. If equality holds, then ${\mathcal{B}}$ is the set of Hamilton cycles of $K_n$. 
For each $3 \le k \le n$, write ${\mathcal{B}}_k$ for the set of $k$-cycles in ${\mathcal{B}}$, and define functions $\Psi, \Psi_1, \Psi_2\colon {\mathcal{B}}\to 2^{{\mathcal{S}_{{n}}}}$ as $$\begin{aligned} \Psi_1(C) &= \{\sigma \in {\mathcal{S}_{{n}}} {\colon}\text{$(\sigma(1), \ldots, \sigma(|C|))$ is a cyclic ordering of $C$}\}, \\ \Psi_2(C) &= \{\sigma \in {\mathcal{S}_{{n}}} {\colon}\text{$(\sigma(2), \ldots, \sigma(|C|+1))$ is a cyclic ordering of $C$}\}, \end{aligned}$$ and $\Psi(C) = \Psi_1(C) \cup \Psi_2(C)$. We claim that the $\Psi$-images of distinct cycles in ${\mathcal{B}}$ are disjoint. If this is not the case, then there are distinct cycles $C_1$ and $C_2$, integers $1 \le i \le j \le 2$, and a permutation $\sigma$ such that $\sigma \in \Psi_i(C_1) \cap \Psi_j(C_2)$. We consider three different cases, depending on $(i,j)$. If $(i,j) = (1,1)$, then $|C_1| \neq |C_2|$ (otherwise $C_1 = C_2$). Let $\ell = \min\{|C_1|, |C_2|\}$. The cycles $C_1$ and $C_2$ intersect in the path $\sigma(1)\sigma(2)\ldots\sigma(\ell)$, and hence are adjacent in $\Omega_n$: a contradiction. If $(i,j) = (2,2)$, a contradiction follows by a similar argument. If $(i,j) = (1,2)$, let $\ell = \min\{|C_1|, |C_2| + 1\}$. The cycles $C_1$ and $C_2$ intersect in the path $\sigma(2)\sigma(3)\ldots\sigma(\ell)$, hence are adjacent in $\Omega_n$: again a contradiction. Since $\Psi$ thus encodes each cycle as a collection of permutations, and these collections are pairwise disjoint, it follows that $\sum_{C \in {\mathcal{B}}} |\Psi(C)| \le n!$. As $|\Psi(C)| = 4|C|(n-|C|)!$ if $|C| < n$ and $|\Psi(C)| = 2n$ if $|C| = n$, this yields $$n! \ge \sum_{C \in {\mathcal{B}}}|\Psi(C)| = \sum_{k = 3}^{n-1} 4k(n-k)!|{\mathcal{B}}_k| + 2n|{\mathcal{B}}_n|.$$ Now $4k(n-k)! \ge 4k(n-k) \ge 4(n-1) > 2n$ for all $3 \le k < n$; it follows that $n! \ge 2n\sum_{k=3}^n|{\mathcal{B}}_k| = 2n|{\mathcal{B}}|$, so $|{\mathcal{B}}| \le \tfrac{1}{2}(n-1)!$ as required.
If equality holds, then clearly ${\mathcal{B}}= {\mathcal{B}}_n$. We now prove that every subset of $V(\Omega_n)$ that is significantly larger than the largest stable set necessarily spans a substantial number of edges. \[supersat\] If $n \ge 12$ is an integer, then for all $\alpha > \left(\frac{10}{n}\right)^2$, if ${\mathcal{B}}\subseteq V(\Omega_n)$ satisfies $|{\mathcal{B}}| \ge (1+\alpha)\frac{1}{2}(n-1)!$, then ${\mathcal{B}}$ spans at least $\tfrac{\alpha}{4} n!$ edges in $\Omega_n$. Let $\Omega = \Omega_n$, and suppose for a contradiction that some set ${\mathcal{B}}\subseteq V(\Omega)$ satisfies $|{\mathcal{B}}| \ge (1+\alpha)\frac{1}{2}(n-1)!$, but ${\mathcal{B}}$ spans fewer than $ \tfrac{1}{4}\alpha n!$ edges in $\Omega$. Note that, since $|V(\Omega)| < 2(n-1)!$, we have $\alpha < 3$. Let ${\mathcal{C}}= V(\Omega)$. Define functions $\Phi, \Phi_1, \Phi_2\colon {\mathcal{C}}\to 2^{{\mathcal{S}_{{n}}}}$ as $$\begin{aligned} \Phi_1(C) &= \{\sigma \in {\mathcal{S}_{{n}}} {\colon}\text{$(\sigma(1), \ldots, \sigma(|C|))$ is a cyclic ordering of $C$}\}, \\ \Phi_2(C) &= \{\sigma \in {\mathcal{S}_{{n}}} {\colon}\text{$(\sigma(n-|C|+1), \ldots, \sigma(n))$ is a cyclic ordering of $C$}\}, \end{aligned}$$ and $\Phi(C) = \Phi_1(C) \cup \Phi_2(C)$. Note that the sets $\Phi_1(C)$ and $\Phi_2(C)$ are disjoint if $|C| < n$ and equal if $|C| = n$; it follows that $|\Phi(C)| = 4|C|(n-|C|)!$ if $3 \le |C| < n$, and $|\Phi(C)| = 2n$ if $|C| = n$. Note also that for each $k < n$ and $\sigma \in {\mathcal{S}_{{n}}}$, there are exactly two $k$-cycles $C$ for which $\sigma \in \Phi(C)$, and there is exactly one $n$-cycle $C$ with $\sigma \in \Phi(C)$. Thus $|\Phi({\mathcal{C}}')| = 2n|{\mathcal{C}}'|$ for each set ${\mathcal{C}}'$ of $n$-cycles. Finally, observe that $|\Phi(C) \cap \Phi(C')| \le 2$ for all distinct $C,C' \in {\mathcal{C}}$. 
For each $3 \le k \le n$, let ${\mathcal{B}}_k$ be the set of $k$-cycles in ${\mathcal{B}}$, and for each $i \in \{0,1,2\}$, let $P_{k,i}$ be the set of all $\sigma \in {\mathcal{S}_{{n}}}$ for which $|\{C \in {\mathcal{B}}_k \colon \sigma \in \Phi(C)\}| = i$. Since each $\sigma \in {\mathcal{S}_{{n}}}$ is in $\Phi_1(C)$ for at most one $C \in {\mathcal{B}}_k$ and is in $\Phi_2(C)$ for at most one $C \in {\mathcal{B}}_k$, the sets $P_{k,0}, P_{k,1},P_{k,2}$ partition ${\mathcal{S}_{{n}}}$. \[claim:P2-small\] $|P_{k,2}| \le \frac{\alpha}{2} n!$ for each $k \ge \tfrac{n+2}{2}$. Let $\sigma \in P_{k,2}$, and let $C,C'$ be the distinct cycles for which $\sigma \in \Phi(C) \cap \Phi(C')$. As $C$ and $C'$ intersect in at least $2k-n \ge 2$ consecutive elements, the cycles $C$ and $C'$ intersect in a path of at least two vertices, so are adjacent in $\Omega$. Moreover, we have $|\Phi(C) \cap \Phi(C')| \le 2$. It follows that ${\mathcal{B}}_k$ spans at least $|P_{k,2}|/2$ edges. Thus $|P_{k,2}|/2 \le \frac{\alpha}{4}n!$, giving the claim. \[claim:Phi-intersection-small\] $|\Phi({\mathcal{B}}_k) \cap \Phi({\mathcal{B}}_n)| \le \tfrac{\alpha}{2} n!$ for all $3 \le k < n$. Let $C \in {\mathcal{B}}_k$, $C' \in {\mathcal{B}}_n$, and let $\sigma \in \Phi(C) \cap \Phi(C')$. Since $\sigma(1), \dotsc, \sigma(n)$ is a cyclic ordering of $C'$ and either $\sigma(1), \dotsc, \sigma(|C|)$ or $\sigma(n-|C|+1), \dotsc, \sigma(n)$ is a cyclic ordering of $C$, the cycles $C,C'$ are adjacent in $\Omega$. Since $|\Phi(C) \cap \Phi(C')| \le 2$, it follows that $\Omega$ contains at least $|\Phi({\mathcal{B}}_k) \cap \Phi({\mathcal{B}}_n)|/2$ edges of this form. Thus $|\Phi({\mathcal{B}}_k) \cap \Phi({\mathcal{B}}_n)| \le \frac{\alpha}{2}n!$, as required. Recall that the number of $k$-cycles in $K_n$ is $\tfrac{n!}{2k(n-k)!}$. For $3 \le k \le n$, let $\beta_k = |{\mathcal{B}}_k|\frac{2k(n-k)!}{n!}$, so $0 \le \beta_k \le 1$. 
Note that $|\Phi({\mathcal{B}}_n)| = 2n|{\mathcal{B}}_n| = \beta_n n!$. \[claim:beta-k-bound\] $\beta_k \le \frac{1}{2}(1-\beta_n + \alpha)$ for all $\tfrac{n+2}{2} \le k < n$. Using Claim \[claim:Phi-intersection-small\], we have $$|\Phi({\mathcal{B}}_k)| = |\Phi({\mathcal{B}}_k) \cup \Phi({\mathcal{B}}_n)| + |\Phi({\mathcal{B}}_k) \cap \Phi({\mathcal{B}}_n)| - |\Phi({\mathcal{B}}_n)| \le n! + \tfrac{\alpha}{2} n! - \beta_n n!,$$ so $|\Phi({\mathcal{B}}_k)| \le (1+\frac{\alpha}{2}-\beta_n)n!$. Note that $$|P_{k,1}| + 2|P_{k,2}| = \sum_{i=0}^2 i |P_{k,i}| = \sum_{C \in {\mathcal{B}}_k} |\Phi(C)| = 4k(n-k)!|{\mathcal{B}}_k|$$ and that $|P_{k,1}| + |P_{k,2}| = |\Phi({\mathcal{B}}_k)|.$ We have $$\begin{split} 2\beta_k n! &= 4k(n-k)!|{\mathcal{B}}_k| \\ &= |P_{k,1}| + 2|P_{k,2}| \\ &= |\Phi({\mathcal{B}}_k)| + |P_{k,2}| \\ &\le (1+\tfrac{\alpha}{2}-\beta_n)n! + \tfrac{\alpha}{2}n! \\ &= (1+\alpha-\beta_n)n!. \end{split}$$ The claim follows immediately upon dividing both sides by $2n!$. It follows from \[claim:beta-k-bound\] and the hypothesis that $$\begin{aligned} \frac{1+\alpha}{2}(n-1)! 
&\le |{\mathcal{B}}| = \sum_{0 \le k \le n-3}|{\mathcal{B}}_{n-k}| \\ &= \frac{n!}{2}\sum_{0 \le k \le n-3}\frac{\beta_{n-k}}{k!(n-k)}\\ &\le \frac{n!}{2}\left(\sum_{k = {\left\lfloor n/2 \right\rfloor}}^{n-3} \frac{1}{k!(n-k)} + \sum_{k = 1}^{{\left\lfloor n/2 \right\rfloor}-1}\frac{\beta_{n-k}}{k!(n-k)} + \frac{\beta_n}{n}\right)\\ &\le \frac{n!}{2}\left(\frac{1}{3}\sum_{k \ge {\left\lfloor n/2 \right\rfloor}}\frac{1}{k!} + \frac{1-\beta_n + \alpha}{2}\sum_{k=1}^{{\left\lfloor n/2 \right\rfloor}-1}\frac{1}{k!(n-k)} + \frac{\beta_n}{n}\right)\\ &\le \frac{n!}{2}\left(\frac{2}{3{\left\lfloor n/2 \right\rfloor}!} + \frac{1-\beta_n + \alpha}{2}\left(S_n-\frac{1}{n}\right) + \frac{\beta_n}{n}\right)\\ &\le \frac{n!}{2}\left(\frac{1}{n^2}+ \frac{1-\beta_n+\alpha}{2} \left(\frac{{\mathrm{e}}-1}{n} + \frac{5}{n^2}\right) + \frac{\beta_n}{n}\right), \end{aligned}$$ where the last line uses ${\left\lfloor n/2 \right\rfloor}! > n^2$ (for $n \ge 12$) and Lemma \[bound\_sn\]. Since $0 \le \beta_n \le 1$ and $\alpha < 3$, this gives $$1+ \alpha \le \tfrac{1}{2}(1+\alpha)({\mathrm{e}}-1) + \beta_n\left(1 - \tfrac{e-1}{2}\right) + \tfrac{11}{n^2},$$ and so $\frac{11}{n^2} \ge \frac{3-{\mathrm{e}}}{2}(1 + \alpha - \beta_n) \ge \tfrac{3-{\mathrm{e}}}{2}\alpha$. Using $3-{\mathrm{e}}> \frac{1}{4}$, we obtain a contradiction to the hypothesis that $\alpha > \frac{100}{n^2}$. Scarce biased cliques ===================== Let $\mathbf{S}_n$ denote the collection of scarce biased cliques on $n$ vertices. In this section we prove the following. \[thm:enum-scarce\] $|\mathbf{S}_n| \le 2^{\frac{1}{2}(n-1)!\left(1 + 10 \sqrt{\frac{\log n}{n}}\right)}$ for all $n \ge 12$. Our proof is a standard application of the container method; see [@Samotij2015] for an introduction to and background of this technique. 
The main tool is the following lemma, which essentially allows us to find a concise description of every scarce biased clique ${\mathcal{B}}$, in terms of a set $\psi({\mathcal{B}})$ of size $o((n-1)!)$, and a subset of a set $\phi(\psi({\mathcal{B}}))$ of size $(\frac{1}{2}+o(1))(n-1)!$. \[lemma:container-tech\] Let $n \ge 12$ be an integer. Let ${\mathcal{C}}= V(\Omega_n)$ and $$s = \frac{4(n-1)!}{\sqrt{n \log n}} \text{\ \ \ and\ \ \ } a = \left(1 + 2\sqrt{\frac{\log n}{n}}\right)\tfrac{1}{2}(n-1)!.$$ There exist functions $\psi\colon 2^{{\mathcal{C}}}\to\binom{{\mathcal{C}}}{\le s}$ and $\phi\colon2^{{\mathcal{C}}}\to\binom{{\mathcal{C}}}{\le a}$ such that each scarce biased clique $(K_n, {\mathcal{B}})$ satisfies $ {\mathcal{B}}\subseteq \phi(\psi({\mathcal{B}}))$. Let $n \ge 12$ and $\alpha = 2\sqrt{\frac{\log n}{n}}$, noting that $\alpha > \left(\frac{10}{n}\right)^2$ . Let $\Omega = \Omega_n$, and fix a linear order $\sqsubseteq$ on ${\mathcal{C}}= V(\Omega)$. For each set $A \subseteq {\mathcal{C}}$, write $C_A^*$ for the vertex of maximum degree in the induced subgraph $\Omega[A]$, where ties are broken using $\sqsubseteq$. Define a function $f\colon 2^{{\mathcal{C}}} \times 2^{{\mathcal{C}}} \times 2^{{\mathcal{C}}} \to 2^{{\mathcal{C}}} \times 2^{{\mathcal{C}}}$ by $$f(S,A,K) = \begin{cases} (S,A) & \text{if $|A| \le a$} \\ (S\cup\{C_A^*\}, A\setminus N_\Omega(C_A^*)) & \text{if $|A| > a$ and $C_A^* \in K$} \\ (S, A\setminus\{C_A^*\}) & \text{otherwise}. \end{cases}$$ For each ${\mathcal{B}}\subseteq {\mathcal{C}}$, recursively define sequences $S_i = S_i({\mathcal{B}})$ and $A_i = A_i({\mathcal{B}})$ by $S_0 = \varnothing$, $A_0 = {\mathcal{C}}$, and $$(S_{i+1}, A_{i+1}) = f(S_i, A_i, {\mathcal{B}}).$$ Since $A_{i+1} \subseteq A_i$, there exists $i_0$ such that $(S_i, A_i) = (S_{i_0}, A_{i_0})$ for all $i \ge i_0$; define the functions $\psi$ and $\phi$ by $\psi({\mathcal{B}}) = S_{i_0}$ and $\phi({\mathcal{B}}) = A_{i_0}$. 
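The iteration defining $\psi$ and $\phi$ can be exercised on a toy instance. In the sketch below, a 6-cycle stands in for the overlap graph and the threshold $a = 2$ is arbitrary (both are illustrative choices, not the values in the lemma); for every stable set $B$ we verify the key property ${\mathcal{B}}\subseteq \phi(\psi({\mathcal{B}}))$.

```python
# Toy run of the container/fingerprint iteration: f keeps shrinking A,
# recording in S those maximum-degree vertices that lie in K, until A is
# small or a fixed point is reached.  Graph and threshold are illustrative.

from itertools import combinations

N = {v: {(v - 1) % 6, (v + 1) % 6} for v in range(6)}  # 6-cycle neighbourhoods
V = set(N)
a = 2  # container-size threshold

def max_deg_vertex(A):
    # vertex of maximum degree in the induced subgraph, ties to smaller label
    return max(A, key=lambda v: (len(N[v] & A), -v))

def run(K):
    """Iterate f from (S, A) = (empty set, V); return (psi, phi) = (S, A)."""
    S, A = set(), set(V)
    while len(A) > a:
        c = max_deg_vertex(A)
        if c in K:
            if c in S and not (A & N[c]):
                break              # fixed point: nothing changes any more
            S.add(c)
            A -= N[c]              # drop all neighbours of the kept vertex
        else:
            A.discard(c)
    return S, A

stable_sets = [set(B) for r in range(4) for B in combinations(sorted(V), r)
               if all(u not in N[v] for u in B for v in B)]
for B in stable_sets:
    psi, phi = run(B)
    assert B <= run(psi)[1]        # B is contained in phi(psi(B))
print(len(stable_sets), "stable sets checked")  # 18 stable sets checked
```

The point of the rerun `run(psi)` is exactly the lemma's final step: starting the iteration from the small fingerprint $\psi({\mathcal{B}})$ reproduces the same container $\phi({\mathcal{B}})$.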
$|\psi({\mathcal{B}})| \le s$ and $|\phi({\mathcal{B}})| \le a$ for all ${\mathcal{B}}\subseteq {\mathcal{C}}$. That $|\phi({\mathcal{B}})| \le a$ follows immediately from the construction. For the second part, note that $|S_{i_0}|$ is equal to the number of $i$ for which $S_{i+1} \ne S_i$. For each such $i$, we have $|A_i| > a$ and $C^*_{A_i} \in {\mathcal{B}}$, so Lemma \[supersat\] implies that $A_i$ spans at least $\frac{\alpha }{4}n!$ edges of $\Omega$. Therefore $C^*_{A_i}$ has at least $\frac{\alpha n!}{2|A_i|} \ge \frac{\alpha n!}{2|{\mathcal{C}}|} \ge \frac{1}{4}n\alpha$ neighbours in $A_i$, and thus $|A_{i+1}| \le |A_i| - \frac{1}{4}n\alpha$. This occurs for each of the $|S_{i_0}|$ distinct values of $i$ for which $S_{i+1} \ne S_i$. Since the sequence $\left(|A_i| {\colon}i \ge 0\right)$ is weakly decreasing, it follows that $$0 \le |A_{i_0}| \le |A_0| - |S_{i_0}|\cdot \tfrac{1}{4}n\alpha \le 2(n-1)! - |S_{i_0}| \tfrac{1}{4}n\alpha,$$ and so $|\psi({\mathcal{B}})| = |S_{i_0}| \le \frac{8(n-1)!}{n\alpha} = s$. If ${\mathcal{B}}$ is a scarce biased clique, then, since ${\mathcal{B}}$ is a stable set of $\Omega_n$, for each $i$ the elements of $A_i {{\backslash}}A_{i+1}$ are all nonelements of ${\mathcal{B}}$; it follows by an inductive argument that ${\mathcal{B}}\subseteq A_{i_0} = \phi({\mathcal{B}})$. Now consider the sequences $S_i(\psi({\mathcal{B}}))$ and $A_i(\psi({\mathcal{B}}))$. Another inductive argument shows that they coincide with the sequences $S_i({\mathcal{B}})$ and $A_i({\mathcal{B}})$ respectively, and thus $\phi(\psi({\mathcal{B}})) = \phi({\mathcal{B}})$, and so ${\mathcal{B}}\subseteq \phi(\psi({\mathcal{B}}))$ as required. We are now ready to prove Theorem \[thm:enum-scarce\]. Obtain $s$ and $a$, as well as functions $\psi\colon 2^{{\mathcal{C}}} \to \binom{{\mathcal{C}}}{\le s}$ and $\phi\colon 2^{{\mathcal{C}}} \to \binom{{\mathcal{C}}}{\le a}$ as in Lemma \[lemma:container-tech\].
The number of scarce biased cliques ${\mathcal{B}}\subseteq V(\Omega_n)$ is at most $$\left|\left\{(S,A) : S \in \binom{{\mathcal{C}}}{\le s}, A \subseteq \phi(S)\right\}\right| \le \binom{|{\mathcal{C}}|}{\le s} 2^a.$$ Using $|{\mathcal{C}}| \le 2(n-1)!$ and $n \ge 8$, we have $$\frac{{\mathrm{e}}|{\mathcal{C}}|}{s} \le \frac{6(n-1)!}{s} = \frac{3\sqrt{n \log n}}{2} \le 2^{\log n}.$$ Now, using a standard bound on sums of binomial coefficients, we have $$\binom{|{\mathcal{C}}|}{\le s} \le \left(\frac{{\mathrm{e}}|{\mathcal{C}}|}{s}\right)^{s} \le \left(2^{\log n}\right)^{\frac{4(n-1)!}{\sqrt{n \log n}}} = 2^{8\sqrt{\frac{\log n}{n}} \frac{1}{2}(n-1)!}.$$ The theorem follows as $a = \left(1+2\sqrt{\frac{\log n}{n}}\right)\frac{1}{2}(n-1)!$. Biased graphs ============= Let $\mathbf{K}_n$ denote the collection of biased cliques on $n$ vertices. $|\mathbf{K}_n| \le |\mathbf{S}_n| \cdot 2^{(n-1)! \frac{n^2}{6{\left\lfloor n/3 \right\rfloor}!}}$ for all $n$. Let ${\mathcal{C}}$ be the set of cycles of $K_n$ and let $\prec$ be a linear ordering of ${\mathcal{C}}$ that refines the partial ordering by length (i.e. $C \prec C'$ whenever $|C| < |C'|$). Note that if $C_1,C_2,C_3$ are the cycles in a $\Theta$-subgraph $H$ of $K_n$ whose degree-$3$ vertices are $u$ and $v$, then each vertex in $H - \{u,v\}$ is in exactly two of the $C_i$, so $\sum_{i=1}^3 |C_i| \le 2|V(H)| + 2 \le 2n+2$. It follows that one of the $C_i$ has length at most $\frac{2}{3}(n+1)$. Let ${\mathcal{C}}'$ be the set of cycles in ${\mathcal{C}}$ of length at most $\frac{2}{3}(n+1)$ and let $r = |{\mathcal{C}}'|$; note that $$r = \sum_{k=3}^{{\left\lfloor 2(n+1)/3 \right\rfloor}} \frac{n!}{2k(n-k)!} \le (n-1)!\sum_{k=3}^{{\left\lfloor 2(n+1)/3 \right\rfloor}} \frac{n}{6(n-k)!} \le (n-1)!
\frac{n^2}{6{\left\lfloor n/3 \right\rfloor}!}.$$ Let ${\mathcal{H}}$ be the collection of all triples $(C_1,C_2,C_3)$ for which $C_1 \prec C_2 \prec C_3$, and $C_1,C_2,C_3$ are the cycles of a $\Theta$-subgraph of $K_n$. By the above, we have $C_1 \in {\mathcal{C}}'$. For each biased clique ${\mathcal{B}}\subseteq {\mathcal{C}}$, let $\psi({\mathcal{B}})$ be obtained from ${\mathcal{B}}$ by removing $C_1$ and $C_3$ for each triple $(C_1,C_2,C_3) \in {\mathcal{H}}$ for which $\{C_1,C_2,C_3\} \subseteq {\mathcal{B}}$. Since $|\{C_1,C_2,C_3\} \cap {\mathcal{B}}| \in \{0,1,3\}$ for all $(C_1,C_2,C_3) \in {\mathcal{H}}$, it follows that $|\{C_1,C_2,C_3\} \cap \psi({\mathcal{B}})| \le 1$ for each $(C_1,C_2,C_3)$, so $\psi({\mathcal{B}})$ is a scarce biased clique. We now show that for each scarce biased clique ${\mathcal{B}}'$ and each set ${\mathcal{X}}\subseteq {\mathcal{C}}'$, there is at most one biased clique ${\mathcal{B}}$ for which $\psi({\mathcal{B}}) = {\mathcal{B}}'$ and ${\mathcal{B}}\cap {\mathcal{C}}' = {\mathcal{X}}$. It will follow that $|\mathbf{K}_n| \le |\mathbf{S}_n| 2^{|{\mathcal{C}}'|} = |\mathbf{S}_n|2^r$, which gives the theorem by our estimate on $r$. Suppose that the claimed statement fails, so there is a scarce biased clique ${\mathcal{B}}'$ and a pair of distinct biased cliques ${\mathcal{B}}_1,{\mathcal{B}}_2$ for which $\psi({\mathcal{B}}_1) = \psi({\mathcal{B}}_2) = {\mathcal{B}}'$ while ${\mathcal{B}}_1 \cap {\mathcal{C}}' = {\mathcal{B}}_2 \cap {\mathcal{C}}'$. Let ${\mathcal{C}}''$ be a maximal initial segment of ${\mathcal{C}}$ with respect to $\prec$ for which ${\mathcal{B}}_1 \cap {\mathcal{C}}'' = {\mathcal{B}}_2 \cap {\mathcal{C}}''$; we have ${\mathcal{C}}' \subseteq {\mathcal{C}}'' \ne {\mathcal{C}}$ by assumption. Let $C \in {\mathcal{C}}- {\mathcal{C}}''$ be minimal with respect to $\prec$.
The maximality in the choice of ${\mathcal{C}}''$ implies that $C$ belongs to exactly one of ${\mathcal{B}}_1$ and ${\mathcal{B}}_2$; say $C \in {\mathcal{B}}_1 - {\mathcal{B}}_2$. Since ${\mathcal{B}}' = \psi({\mathcal{B}}_2) \subseteq {\mathcal{B}}_2$ and $C \notin {\mathcal{B}}_2$, we have $C \notin {\mathcal{B}}' = \psi({\mathcal{B}}_1)$. Since $C \in {\mathcal{B}}_1$, there is some $(C_1,C_2,C_3) \in {\mathcal{H}}$ for which $\{C_1,C_2,C_3\} \subseteq {\mathcal{B}}_1$ while $C \in \{C_1,C_3\}$. Since $C_1 \in {\mathcal{C}}'$ and $C \notin {\mathcal{C}}'$, this gives $C = C_3$. Now $C_1 \prec C_2 \prec C$, giving $\{C_1,C_2\} \subseteq {\mathcal{C}}''$, and so $\{C_1,C_2\} \cap {\mathcal{B}}_2 = \{C_1,C_2\} \cap {\mathcal{B}}_1 = \{C_1,C_2\}$. Therefore $C_1,C_2 \in {\mathcal{B}}_2$ and thus $C = C_3 \in {\mathcal{B}}_2$ by the theta-property, a contradiction. Since $\frac{n^2}{6{\left\lfloor n/3 \right\rfloor}!} \le \frac{1}{2}\sqrt{\frac{\log n}{n}}$ for $n \ge 18$, combining the above result with Theorem \[thm:enum-scarce\] gives the following. \[countcliques\] $|\mathbf{K}_n| \le 2^{\frac{1}{2}(n-1)!\left(1 + 11\sqrt{\frac{\log n}{n}}\right)}$ for all $n \ge 18$. Finally, we prove Theorem \[main\]. There are exactly $9 < 2^{1 + 12\sqrt{\frac{\log 3}{3}}}$ simple biased graphs on three vertices, so we may assume that $n > 3$. If $(G,{\mathcal{B}})$ is a simple biased graph with $V(G) = [n]$, then $(K_n,{\mathcal{B}})$ is a biased clique on $n$ vertices, since for any theta-subgraph $H$ of $K_n$ containing precisely two cycles in ${\mathcal{B}}$, we have $E(H) \subseteq \cup_{C \in {\mathcal{B}}} E(C) \subseteq E(G)$, so no such $H$ exists. Since $E(K_n)$ has $2^{\binom{n}{2}}$ subsets, it follows that $|{\mathbf{B}}_n| \le 2^{\binom{n}{2}}|{\mathbf{K}}_n|$. If $n \ge 18$, then using $\binom{n}{2} < \tfrac{1}{2} (n-1)!\sqrt{\frac{\log n}{n}}$, the required bound follows from Theorem \[countcliques\].
If $4 \le n \le 17$, then $\frac{\log(n)}{n} \ge \frac{9}{49}$, so $\tfrac{1}{2}\left(1 + 12 \sqrt{\frac{\log n}{n}}\right) \ge \frac{43}{14} > 3$. Using $(n-1)! \ge \binom{n}{2}$, we now have $$2^{\frac{1}{2}(n-1)!\left(1 + 12\sqrt{\frac{\log n}{n}}\right)} \ge 2^{3(n-1)!} \ge 2^{2(n-1)! + \binom{n}{2}} \ge 2^{|V(\Omega_n)|} \cdot 2^{\binom{n}{2}} \ge 2^{\binom{n}{2}}|{\mathbf{K}}_n|,$$ giving the bound. Group-labellable biased graphs ============================== In this section we prove the following stronger version of Theorem \[abelian\]: \[abelian:precise\] For each $n \ge 3$, the number of simple biased graphs with vertex set $[n]$ that arise from an abelian-group-labelling is at most $(2n! + 1)^{\binom{n}{2}^2}$. Theorem \[abelian:precise\] implies Theorem \[abelian\], since $2n!+1 \le n^n$ for all $n \ge 3$. Let $I$ be a finite set, and let $\mathbf{f} = (f_i : i \in I)$ be a tuple of polynomials in the variables $X_1, X_2, \ldots, X_N$ with coefficients in a field ${\mathbb{F}}$. A set $S \subseteq I$ is a *zero-pattern* arising from $\mathbf{f}$ if there exists $w \in {\mathbb{F}}^N$ such that $S = \{i \in I : f_i(w) \neq 0\}$. In this case, we call $w$ a *witness* for $S$. Write $Z_{{\mathbb{F}}}(\mathbf{f})$ for the set of zero-patterns arising from $\mathbf{f}$. The following result, due to Rónyai, Babai, and Ganapathy [@RonyaiBabaiGanapathy2001], bounds the cardinality of $Z_{{\mathbb{F}}}(\mathbf{f})$. \[zeropatterns\] Let $\mathbf{f}$ be an $M$-tuple of polynomials in the variables $X_1, X_2, \ldots, X_N$ over a field ${\mathbb{F}}$. If each of the polynomials has degree at most $D$, then $\left|Z_{{\mathbb{F}}}(\mathbf{f})\right| \le \binom{MD}{N}$. Define an $\binom{n}{2}$-tuple of variables $X = (X_e : e \in E(K_n))$. 
For a cycle $C = v_1v_2\ldots v_kv_1$ of $K_n$, let $f_C \in {\mathbb{C}}[X]$ be the polynomial $$f_C(X) = \prod_{i \in {\mathbb{Z}}_k, v_i > v_{i+1}} X_{v_iv_{i+1}} - \prod_{i \in {\mathbb{Z}}_k, v_i < v_{i+1}} X_{v_iv_{i+1}}.$$ For each simple graph $G$ with vertex set $[n]$, let $\mathbf{f}_G$ denote the tuple comprising $f_C$ for each cycle $C$ of $G$. Note that $|\mathbf{f}_G| < 2(n-1)!$, as $G$ has at most $|V(\Omega_n)|$ cycles. The following observation links group-labellable biased graphs with zero-patterns. \[abelian-zeropattern\] Let $G = (V,E)$ be a simple graph with $V = [n]$ and $|E| = m$, and let ${\mathcal{C}}$ be the set of cycles of $G$. If $(G,{\mathcal{B}})$ is a biased graph that arises from an abelian-group-labelling, then there exist $P_1, \dotsc, P_m \in Z_{{\mathbb{C}}}(\mathbf{f}_G)$ such that ${\mathcal{C}}- {\mathcal{B}}= \bigcup P_i$. Let $\gamma\colon E \to \Gamma$ be an abelian-group-labelling of $G$ that gives rise to ${\mathcal{B}}$. Since $|E| = m$, we can restrict $\Gamma$ to the subgroup $\Gamma'$ generated by the image $\gamma(E)$, which, by the fundamental theorem of finitely generated abelian groups, has the form $\Gamma' \cong {\mathbb{Z}}^{s} \oplus {\mathbb{Z}}_{q_1} \oplus \dotsc \oplus {\mathbb{Z}}_{q_t}$, where $s + t \le m$, while $q_1, \dotsc, q_t \ge 1$ are integers. By including some trivial groups, we may assume that $s + t = m$. Since each cyclic group is a subgroup of ${\mathbb{C}}^{\times}$, the group $\Gamma'$ is a subgroup of $({\mathbb{C}}^{\times})^{m}$. For each $i \in [m]$, let $P_i = \{C : f_C(\pi_i \circ \gamma) \ne 0\}$, where $\pi_i$ is the projection map onto the $i$-th co-ordinate. Note that each $P_i$ is a zero-pattern with respect to $\mathbf{f}_G$, witnessed by $\pi_i \circ \gamma$. By construction, we have ${\mathcal{C}}- {\mathcal{B}}= \bigcup P_i$, as required. 
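The polynomial $f_C$ is constructed precisely so that it vanishes at an edge labelling exactly when the product of the labels around $C$ (inverting labels traversed against the fixed orientation) is the identity, i.e. when $C$ is balanced in that coordinate. A small numeric check of this, with an invented labelling of a triangle, might look like:

```python
def f_C(x, cycle):
    """Evaluate f_C at the edge labelling x, where x maps each edge
    (u, v) with u < v to a nonzero complex number, and cycle lists the
    vertices v_1, ..., v_k of C (the closing edge v_k v_1 is implicit)."""
    fwd, bwd = 1, 1
    k = len(cycle)
    for i in range(k):
        u, v = cycle[i], cycle[(i + 1) % k]
        label = x[(min(u, v), max(u, v))]
        if u > v:
            bwd *= label   # edge traversed against its orientation
        else:
            fwd *= label
    return bwd - fwd

# For the triangle 1-2-3, the labelling is balanced (f_C = 0) exactly
# when x_{12} * x_{23} / x_{13} = 1:
balanced = {(1, 2): 2.0, (2, 3): 3.0, (1, 3): 6.0}
unbalanced = {(1, 2): 2.0, (2, 3): 3.0, (1, 3): 5.0}
```

This is exactly why the zero-patterns $P_i = \{C : f_C(\pi_i \circ \gamma) \ne 0\}$ in the lemma pick out the cycles that are unbalanced in coordinate $i$.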
\[abelian-countbyedges\] For all integers $m,n$ with $n \ge 3$ and $0 \le m \le \binom{n}{2}$, the number of abelian-group-labellable simple biased graphs with vertex set $[n]$ and $m$ edges is at most $\binom{{\scriptsize\binom{n}{2}}}{m} \binom{2n!}{m}^m$. Since there are exactly $\binom{{\scriptsize\binom{n}{2}}}{m}$ simple graphs on $[n]$ with $m$ edges, it suffices to show that for each such graph $G$, there are at most $\binom{2n!}{m}^m$ abelian-group-labellable biased graphs on $G$. By Lemma \[abelian-zeropattern\], for every such biased graph $(G, {\mathcal{B}})$, there are zero-patterns $P_1, \ldots, P_m \in Z_{{\mathbb{C}}}(\mathbf{f}_G)$ such that ${\mathcal{C}}-{\mathcal{B}}= \bigcup P_i$, so the number of abelian-group-labellable biased graphs on $G$ is at most $\left|Z_{{\mathbb{C}}}(\mathbf{f}_G)\right|^m$. By Theorem \[zeropatterns\] (applied here with ${\mathbb{F}}={\mathbb{C}}$, $M=|\mathbf{f}_G|$, $N=m$, and $D = n$) and using that $|\mathbf{f}_G| < 2(n-1)!$, the number of zero-patterns arising from $\mathbf{f}_G$ is at most $\binom{2n!}{m}$, which concludes the proof. We are now ready to prove Theorem \[abelian:precise\]. Using Lemma \[abelian-countbyedges\] to bound the number of abelian-group-labellable simple biased graphs with a given number $m$ of edges, we obtain the required bound by summing over all possible values of $m$. Using that $\binom{2n!}{m} \le (2n!)^{\binom{n}{2}}$ for all $n \ge 3$ and $0 \le m \le \binom{n}{2}$, we find that the number of abelian-group-labellable simple biased graphs with vertex set $[n]$ is at most $$\sum_{m=0}^{\binom{n}{2}} \binom{\binom{n}{2}}{m} \binom{2n!}{m}^m \le \sum_{m=0}^{\binom{n}{2}} \binom{\binom{n}{2}}{m} (2n!)^{\binom{n}{2}m} \le (1+2n!)^{\binom{n}{2}^2}. 
\qedhere $$ While it seems difficult to construct a good model for a uniformly random biased clique on $n$ vertices, Theorem \[main\] suggests that simply taking a uniformly random subset of the Hamilton cycles as the set of balanced cycles may be a reasonable ‘approximation’. The remainder of this section shows that, under this significantly simplified model, almost all biased cliques are not labellable over any group, abelian or non-abelian. This can be interpreted as weak evidence for Conjecture \[grouplabel\]. Call a graph $R$ a *diamond ring* if it consists of two edge-disjoint copies of the diamond graph ($K_4$ minus an edge) and two internally disjoint paths, possibly of length 0, each of which connects a degree-2 vertex in one of the diamonds to a degree-2 vertex in the other. A diamond ring $R$ contains exactly four Hamilton cycles; write $H(R)$ for the set of Hamilton cycles in $R$. If $R$ is a subgraph of a group-labellable biased graph, and three of the cycles in $H(R)$ are balanced, then so is the fourth. \[lemma:bad-diamonds\] Let $(G,{\mathcal{B}})$ be a biased graph on $n$ vertices, and let $R$ be an $n$-vertex diamond ring in $G$. If $(G,{\mathcal{B}})$ is group-labellable, then $|{\mathcal{B}}\cap H(R)| \neq 3$. Consider a random biased clique $(K_n, {\mathbb{B}})$ obtained by including each Hamilton cycle of $K_n$ in ${\mathbb{B}}$ with probability $1/2$, independently of all other Hamilton cycles. With high probability (i.e. with probability tending to 1 as $n$ tends to infinity) $(K_n, {\mathbb{B}})$ is not group-labellable. Call an $n$-vertex diamond ring subgraph $R$ of $K_n$ bad if $|{\mathbb{B}}\cap H(R)| = 3$, and write $X$ for the (random) number of bad diamond ring subgraphs in $(K_n, {\mathbb{B}})$. We show, using a straightforward application of the second moment method, that ${X > 0}$ with high probability; the theorem then follows immediately from Lemma \[lemma:bad-diamonds\].
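The second moment method invoked here is the usual Chebyshev estimate: for a non-negative random variable $X$ with $\mathbb{E}[X] > 0$, $$\Pr(X = 0) \;\le\; \Pr\bigl(|X - \mathbb{E}[X]| \ge \mathbb{E}[X]\bigr) \;\le\; \frac{\mathrm{Var}(X)}{\mathbb{E}[X]^2},$$ so establishing $\mathrm{Var}(X) = o\left(\mathbb{E}[X]^2\right)$ is enough to conclude that $X > 0$ with high probability.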
Let $X_R = 1$ if $R$ is bad and $X_R = 0$ otherwise; clearly $X = \sum X_R$, where the sum is over all $n$-vertex diamond ring subgraphs of $K_n$. The clique $K_n$ contains $\frac{1}{16}n!(n-5)$ diamond rings. Each of these is bad with probability $\binom{4}{3}2^{-4}$, so ${\mathbb{E}_{}\left[X\right]} = \frac{1}{64}n!(n-5) \sim \frac{1}{64}(n+1)!$. Let $R$ and $R'$ be two diamond ring subgraphs of $K_n$. $X_R$ and $X_{R'}$ are dependent if and only if $R$ and $R'$ share at least one Hamilton cycle. The number of dependent pairs is therefore at most $\frac{1}{2}(n-1)!n^4$. It follows that $${\mathrm{Var}_{}\left(X\right)} \le {\mathbb{E}_{}\left[X\right]} + \frac{1}{2}(n-1)! n^4 = o\left(\left({\mathbb{E}_{}\left[X\right]}\right)^2\right),$$ and hence that $X > 0$ with high probability. [^1]: Throughout this paper, we use $\log$ to denote the binary logarithm.
--- address: | $^a$Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996 USA\ $^b$Joint Institute for Computational Sciences, ORNL, Oak Ridge, TN 37831 USA\ $^c$Physics Division, National Science Foundation, Arlington, VA 22230 USA\ $^d$National Center for Computational Sciences, ORNL, Oak Ridge, TN 37831 USA\ $^e$Physics Division, ORNL, Oak Ridge, TN 37831, USA\ $^f$Department of Physics, Florida Atlantic University, Boca Raton, FL 33431, USA author: - 'K. N. YAKUNIN$^{ab}$, P. MARRONETTI$^c$, A. MEZZACAPPA$^{ab}$, O. E. B. MESSER$^{ade}$, E. LENTZ$^{abe}$, S. BRUENN$^f$, W. RAPHAEL HIX$^{ac}$, J. A. HARRIS$^{a}$' title: 'MULTIMESSENGERS FROM 3D CORE-COLLAPSE SUPERNOVAE' --- The era of multimessenger astronomy is about to begin as an advanced generation of gravitational wave detectors will come on-line this year. Core-Collapse Supernovae (CCSN) are among the most promising sources for multi-messenger astronomy due to strong electromagnetic and neutrino signals, as well as powerful gravitational wave (GW) bursts. Multimessenger observations could help resolve a number of open questions concerning the physics of CCSN such as: 1) What collapse mechanisms can we confirm or reject? 2) Can GW detectors provide an early warning to EM observers? 3) What happens in CCSN before light and neutrinos break free? ![Evolution of the shock trajectory from our 1D model and the angle-averaged shock trajectories from our 2D and 3D models, all for the same 15M$_\odot$ progenitor (left). Gravitational wave polarizations $rh_+$ and $rh_\times$ as a function of post-bounce time seen by an observer on the equator (right).[]{data-label="fig:signals"}](shock.png){width="1.0\linewidth"} ![Evolution of the shock trajectory from our 1D model and the angle-averaged shock trajectories from our 2D and 3D models, all for the same 15M$_\odot$ progenitor (left). 
Gravitational wave polarizations $rh_+$ and $rh_\times$ as a function of post-bounce time seen by an observer on the equator (right).[]{data-label="fig:signals"}](gw.png){width="1.0\linewidth"} In order to address these questions, we study the GW emission in a 3D model performed with the neutrino-hydrodynamics code [Chimera]{} [@Bruenn14], which is composed of five major modules: hydrodynamics, neutrino transport, self-gravity, a nuclear equation of state, and a nuclear reaction network. We evolve a non-rotating model corresponding to a zero-age main sequence progenitor of 15M$_\odot$ [@Woosley07], on an adaptive spherical-polar mesh with resolution $512(r)\times180(\theta)\times180(\phi)$. This model was simulated using the Lattimer–Swesty equation of state (EoS) with $K = 220$ MeV for $\rho > 10^{11} \textrm{g}\,\textrm{cm}^{-3}$, and an enhanced version of the Cooperstein EoS for $\rho < 10^{11} \textrm{g}\,\textrm{cm}^{-3}$. The simulation exhibits shock revival and the development of neutrino-driven explosions, a unique feature for first-principles simulations from progenitors with canonical CCSN masses (Fig. \[fig:signals\] left). All the main phases of supernova dynamics can be seen in the gravitational waveforms (Fig. \[fig:signals\] right): prompt convection, standing accretion shock instability (SASI), neutrino-driven convection, and formation of accretion downflows impinging on the surface of the proto-neutron star. The frequency of the gravitational wave signals tends to increase during the first 500 ms of post-bounce evolution. Low-energy neutrinos (LENs) will be an important multi-messenger partner to GWs from CCSN. A CCSN produces 10–160 MeV neutrinos (all flavors) over a few tens of seconds. The estimate of the antineutrino detection rate in IceCube [@IceCube] presented in Fig. \[fig:icecube\] was obtained using Eq. (1) of Lund *et al.* [@Lund12].
The SASI, with characteristic frequencies of 50–100 Hz, strongly imprints the neutrino signals observable by large Cherenkov detectors for Galactic CCSN. If neutrino-driven convection dominates, the pre-explosion neutrino flux is instead expected to exhibit smaller-amplitude, higher-frequency variations. Hence, the neutrino signal of the next Galactic CCSN may observationally constrain the contributions of neutrino-driven convection and the SASI [@Ott13].\ [**[Acknowledgments:]{}**]{} This research was supported by the U.S. Department of Energy Offices of Nuclear Physics and Advanced Scientific Computing Research and the NASA Astrophysics Theory and Fundamental Physics Program (NNH11AQ72I). PM is supported by the National Science Foundation through its employee IR/D program. The opinions and conclusions expressed herein are those of the authors and do not represent the views of the National Science Foundation. ![Detection rate of $\bar\nu_e$ in IceCube for Galactic CCSN.[]{data-label="fig:icecube"}](nu.png){width="50.00000%"} References {#references .unnumbered} ========== [99]{}

S. W. Bruenn *et al.*, arXiv:1409.5779 (2014).

S. E. Woosley and A. Heger (2007).

IceCube Collaboration, https://icecube.wisc.edu/

T. Lund *et al.* (2012).

C. D. Ott *et al.*, *Proceedings of the Neutrino 2012 Conference* (2012).
--- abstract: | **Purpose:** Subject motion and static field ([$\mathrm{B}_0$]{}) drift are known to reduce the quality of single voxel MR spectroscopy data due to incoherent averaging. Retrospective correction has previously been shown to improve data quality by adjusting the phase and frequency offset of each average to match a reference spectrum. In this work, a new method (RATS) is developed to be tolerant to large frequency shifts (greater than 7Hz) and baseline instability resulting from inconsistent water suppression.\ **Methods:** In contrast to previous approaches, the variable-projection method and baseline fitting are incorporated into the correction procedure to improve robustness to fluctuating baseline signals and optimization instability. RATS is compared to an alternative method, based on time-domain spectral registration (TDSR), using simulated data to model frequency, phase and baseline instability. In addition, a J-difference edited glutathione in-vivo dataset is processed using both approaches and compared.\ **Results:** RATS offers improved accuracy and stability for large frequency shifts and unstable baselines. Reduced subtraction artifacts are demonstrated for glutathione edited MRS when using RATS, compared with uncorrected or TDSR corrected spectra.\ **Conclusion:** The RATS algorithm has been shown to provide accurate retrospective correction of SVS MRS data in the presence of large frequency shifts and baseline instability. The method is rapid and generic, and therefore readily incorporated into MRS processing pipelines to improve lineshape and SNR and to aid quality assessment. author: - Martin Wilson bibliography: - 'main.bib' title: 'Robust Retrospective Frequency and Phase Correction for Single-voxel MR Spectroscopy' --- Introduction ============ Single voxel acquisition is currently the most widely used in-vivo [$^1\mathrm{H}$]{} Magnetic Resonance Spectroscopy (MRS) technique for clinical brain investigation [@Oz2014].
Repeated acquisitions, known as averages or shots, are usually combined to attain a sufficient signal-to-noise ratio (SNR) to measure key metabolite signals, where doubling the number of averages theoretically improves the SNR by a factor of $\sqrt{2}$. A typical acquisition protocol of 128 averages, acquired from a 2cm-sided cubic volume, may be used to discern the major [$^1\mathrm{H}$]{} metabolite resonances, such as total-N-acetylaspartate (tNAA), total-creatine (tCr), total-choline (tCho) and myo-inositol. Ideally, metabolite signals are completely stable throughout the entire acquisition period - achieving the highest possible SNR when averaged. However, two primary mechanisms have been shown to result in dynamic perturbations in spectral phase and frequency. Firstly, slowly varying changes in the static field strength ([$\mathrm{B}_0$]{} drift) commonly follow gradient-intensive sequences, such as echo-planar imaging, due to heating of the static shim elements [@Foerster2005]. [$\mathrm{B}_0$]{} drift during MRS acquisition causes a slowly varying frequency offset, where subsequent averages become increasingly misaligned relative to the first. The second primary cause of temporal instability originates from subject movement, typically resulting in a transient change in the frequency offset and spectral phase. Both [$\mathrm{B}_0$]{} drift and subject movement degrade the SNR and lineshape of MRS data due to incoherent averaging. Unstable acquisitions are particularly detrimental to J-difference edited experiments [@Mescher1998], since spectral misalignment results in an incomplete subtraction of non-edited resonances - resulting in a distortion of the edited metabolite signals [@Evans2013]. One of the earliest approaches for retrospective MRS instability correction is based on a frequency and phase measurement made from the residual water signal [@Zhu1992].
The change in these parameters is estimated throughout the acquisition period, and each average corrected to obtain consistent spectra. However, the use of a residual water resonance has the disadvantage of potentially biasing the metabolite estimates through baseline distortion from the water resonance peak “tails”, or sideband artifacts that increase with the residual water amplitude [@Clayton2001]. More recently, a method has been developed to align spectra without the requirement for a residual water signal [@Near2015]. Correction is performed using a least-squares optimization to a reference spectrum in the time-domain, a process known as “spectral registration”. The time-domain spectral registration (TDSR) approach has been compared with two other correction methods, based on the residual water signal [@Helms2001] and a metabolite peak fitting method [@Waddell2007], and found to perform favorably. In this paper, a new method for spectral registration is presented. In contrast to previous approaches, the registration problem is formulated as variable-projection (VARPRO) [@Golub1973; @VanderVeen1988] in the frequency domain. The use of VARPRO allows the incorporation of baseline modeling, whilst also reducing the iterative optimization complexity from two parameters (phase and frequency) to one (frequency). The approach is compared with TDSR, and found to be more robust to large frequency shifts ($>$7Hz), baseline distortions and edited-MRS frequency misalignment. Methods ======= Time-domain spectral registration --------------------------------- The TDSR method applies a frequency and phase adjustment to each target average, $\mathbf{S}(t)$, to match a reference signal, $\mathbf{R}(t)$ using nonlinear least-squares optimization. 
The optimization problem may be stated as: $$\min_{F \in \mathbb{R}, \thinspace \phi \in \mathbb{R}} \sum^{t_{N-1}}_{t=t_0} \bigg\lvert \mathrm{Re}(\mathbf{R}(t) - \mathbf{G}(t,F,\phi)) + \mathrm{Im}(\mathbf{R}(t) - \mathbf{G}(t,F,\phi)) \bigg\rvert ^2, \label{tdsr}$$ where $F$ is the frequency correction parameter in Hz, $\phi$ is the phase correction parameter in degrees and $\mathbf{G}(t,F,\phi)$ is defined as: $$\mathbf{G}(t,F,\phi)=\mathbf{S}(t) \thinspace e^{2 \pi j \left (Ft+\frac{\phi}{360} \right )}. \label{corr_eqn}$$ Whilst the parametric correction (\[corr\_eqn\]) is performed as a complex operation ($j=\sqrt{-1}$), the optimization problem (\[tdsr\]) is real valued, achieved by the concatenation of real (Re) and imaginary (Im) parts of $\mathbf{R}$ and $\mathbf{G}$. The optimum $F$ and $\phi$ parameters are found using a non-linear least-squares regression algorithm and applied to the target average, $\mathbf{S}(t)$, to generate a corrected spectrum. Frequency-domain spectral registration with variable-projection baseline modeling --------------------------------------------------------------------------------- One potential limitation of the TDSR method is the assumption that each average may be accurately matched to the reference signal by adjusting only the frequency and phase. Acquisitions with moderate residual water commonly exhibit baseline artifacts, which have a smooth appearance in the frequency domain, and often change throughout the acquisition period due to scanner instability or subject movement. 
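As a concrete illustration, the two-parameter TDSR fit of (\[tdsr\]) can be sketched with a generic least-squares routine (a minimal, self-contained example on a synthetic decaying exponential; this is not the published implementation, and all variable names are invented):

```python
import numpy as np
from scipy.optimize import least_squares

def tdsr_align(ref_td, target_td, fs):
    """Estimate the frequency (Hz) and phase (degrees) offsets mapping
    target_td onto ref_td by nonlinear least squares on the concatenated
    real and imaginary parts of the time-domain difference."""
    t = np.arange(ref_td.size) / fs

    def resid(p):
        F, phi = p
        g = target_td * np.exp(2j * np.pi * (F * t + phi / 360.0))
        d = ref_td - g
        return np.concatenate([d.real, d.imag])

    return least_squares(resid, x0=[0.0, 0.0]).x  # [F, phi]
```

Applied to a noiseless synthetic FID offset by a few hertz and a few tens of degrees, the routine recovers the applied offsets; consistent with the results reported below, convergence from a zero starting point becomes less reliable once the frequency offset is large relative to the linewidth.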
In this paper, a modification of the TDSR optimization problem (\[tdsr\]) is presented, incorporating baseline differences between the target and reference spectrum: $$\begin{aligned} \min_{\substack{F \in \mathbb{R}, \thinspace \mathbf{a}_\mathrm{G} \in \mathbb{C} \\ \mathbf{a}_\mathrm{B} \in \mathbb{C}^{P+1}}} \sum^{f_{N-1}}_{f=f_0} \bigg\lvert & \mathrm{Re}(\hat{\mathbf{R}}(f) - \hat{\mathbf{G}}(f,F) \thinspace \mathbf{a}_\mathrm{G} + \mathbf{B}(f) \thinspace \mathbf{a}_\mathrm{B} ) \\ & + \mathrm{Im}(\hat{\mathbf{R}}(f) - \hat{\mathbf{G}}(f,F) \thinspace \mathbf{a}_\mathrm{G} + \mathbf{B}(f) \thinspace \mathbf{a}_\mathrm{B}) \bigg\rvert ^2. \end{aligned}$$ In contrast to TDSR, the objective function is in the frequency domain, where $\hat{\mathbf{R}}$ and $\hat{\mathbf{G}}$ represent $\mathbf{R}$ and $\mathbf{G}$ following Fourier transformation. A second modification is the addition of the complex amplitude parameter applied to the target spectrum, $\mathbf{a}_\mathrm{G}$. Finally, a polynomial basis $\mathbf{B}$, scaled by $\mathbf{a}_\mathrm{B}$, is added to account for baseline differences between the target and reference spectrum. $\mathbf{B}$ is structured as a basis matrix with $N$ rows and $P+1$ columns, where $N$ is the number of points considered in the frequency domain and $P$ represents the highest order basis polynomial: $\{1,x,x^2,...,x^{P}\}$. Concatenating the adjusted target spectrum, polynomial basis and corresponding complex amplitude parameters leads to: $$\hat{\mathbf{G}}_\mathrm{B} = \begin{bmatrix} \hat{\mathbf{G}} & \mathbf{B} \end{bmatrix},$$ $$\mathbf{a} = \begin{bmatrix} \mathbf{a}_\mathrm{G} \\ \mathbf{a}_\mathrm{B} \end{bmatrix},$$ $$\min_{F \in \mathbb{R}, \thinspace \mathbf{a} \in \mathbb{C}^{P+2}} \lVert \hat{\mathbf{R}} - \hat{\mathbf{G}}_\mathrm{B}(F) \thinspace \mathbf{a} \rVert_2^2, \label{varpro}$$ where the linear parameters $\mathbf{a}$ are separated from the non-linear frequency adjustment parameter $F$.
The purpose of this reformulation is to allow the solution of (\[varpro\]) using the VARPRO approach, which has been shown to be particularly effective for solving similar problems [@Golub1973; @VanderVeen1988]. VARPRO exploits the fact that the linearly appearing parameters, $\mathbf{a}$, may be optimally found using stable and efficient linear methods: $$\min_{F \in \mathbb{R}} \lVert \hat{\mathbf{R}} - \hat{\mathbf{G}}_\mathrm{B}(F) \thinspace \hat{\mathbf{G}}_\mathrm{B}(F)^{\dagger} \thinspace \hat{\mathbf{R}} \rVert_2^2,$$ where $\dagger$ denotes the Moore-Penrose pseudo-inverse of a matrix. Unlike TDSR, this approach has only one non-linear parameter to be optimized, reducing the problem to a one-dimensional search which can be robustly solved using the FMIN method by Brent [@Brent1973]. This new approach of combining the VARPRO method with baseline fitting to align spectra will be referred to as RATS - Robust Alignment to a Target Spectrum. Correction method performance evaluation ---------------------------------------- Simulated and acquired MRS data were both used to compare the performance of RATS and TDSR over a range of conditions. All simulations were generated from a linear combination of metabolite, lipid and macromolecule signals in proportions chosen to match the appearance of normal-appearing brain. Metabolite signals were simulated for a PRESS sequence (TE=30ms at 3T) using density matrix operators [@Levitt2001] and published chemical shift and J-coupling values [@Govind2015]. 5Hz linebroadening was applied to all simulated spectra prior to the addition of Gaussian distributed complex noise. The noise standard deviation was adjusted to produce a desired spectral SNR - defined here as the maximum metabolite spectral intensity divided by the standard deviation of the spectral noise. ### Simulations The first simulation test evaluated the frequency correction accuracy as a function of SNR and frequency shift magnitude.
512 spectra were generated, each with the same spectral signals and SNR but differing random noise samples. A linearly increasing frequency shift was applied to each spectrum, where the first and last spectra had shifts of 0Hz and 10Hz respectively. Seven sets of 512 spectra were generated, with each set having one of the following SNR values: 2.5, 5.0, 7.5, 10.0, 15.0, 20.0 and 25.0. The second simulation test was identical to the first, with the exception that the phase of each spectrum was randomly altered to be between -180 and 180 degrees with a uniform probability distribution. The third simulation test evaluated the robustness of each method to baseline instability originating from the tails of a residual water resonance. Similar to the second simulation test, the metabolite, lipid and macromolecule signals were increasingly shifted to a maximum value of 10Hz, over 512 randomly phased spectra with an SNR of 15. A randomly phased artificial residual water resonance, at 4.65 PPM with a linewidth of 10 Hz, was added to introduce a moderate unstable baseline artifact combined with phase and frequency perturbations. For all simulations, frequency or phase adjustments were not applied to the first spectrum of each simulated set since it was used as the reference spectrum. The frequency and phase errors for each approach were measured by subtracting the estimated values from the true values and calculating the standard deviation for each simulation set. ### Edited MRS A glutathione (GSH) J-difference edited example dataset was used to compare the correction methods, due to its high sensitivity to spectral misalignment. GSH edited in-vivo MRS was acquired from a single healthy volunteer using a MEGA-PRESS sequence on a 3T Philips Achieva scanner (Philips Healthcare, Eindhoven, Netherlands).
A 3x3x2cm voxel was placed in the anterior cingulate cortex (ACC) and 480 averages acquired with the following acquisition parameters: TR=2s; TE=131ms; 55Hz bandwidth editing pulse at 4.56 PPM; 1024 complex data points acquired at a sampling frequency of 2000Hz. The correction procedure started with the calculation of the median spectrum separately for the edited and non-edited scans. The individual average closest to the median (calculated using a least-squares spectral difference) was automatically selected as the correction target, a strategy designed to reduce the chances of a motion-corrupted average being used as a reference. The correction of individual averages was performed separately for edited and non-edited scans before calculation of the corrected mean edited and non-edited scan. A second correction step was performed to minimize subtraction artifacts by correcting the mean non-edited scan to match the edited (reference) scan over the tNAA spectral region (1.8 to 2.2 PPM). The same correction method (RATS or TDSR) was used for the initial and subtraction correction steps for comparison. Finally, 3Hz linebroadening and zero-filling to 4096 points was applied to aid a visual comparison between the RATS, TDSR and uncorrected processing variants.

### Implementation details

In the original description of the TDSR method [@Near2015], a preprocessing step to restrict the spectral region for registration (using the discrete Fourier transform) was optionally performed to reduce the influence of unstable signals, such as residual water. In addition, the latter points of the time-domain signals were removed prior to optimization to reduce the noise contribution. In this comparison, the same “preprocessing” steps were taken, with only the spectral region between 4 and 0.5 PPM and the first 200ms of the free induction decay (FID) being considered for both methods - unless stated otherwise.
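The reference-selection step described above (point-wise median spectrum, then the individual average nearest to it in the least-squares sense) can be sketched as follows; this is an illustrative Python fragment, not the spant API:

```python
import numpy as np

def pick_reference(spectra):
    """Index of the individual average closest (least-squares) to the
    point-wise median spectrum, reducing the chance of selecting a
    motion-corrupted average as the correction target."""
    spectra = np.asarray(spectra)
    # median of real and imaginary parts taken independently
    median_spec = (np.median(spectra.real, axis=0)
                   + 1j * np.median(spectra.imag, axis=0))
    dist = np.sum(np.abs(spectra - median_spec) ** 2, axis=1)
    return int(np.argmin(dist))
```

Using the median rather than the mean keeps a single badly corrupted average from dragging the reference towards itself.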
For the RATS method, zero, first and second degree polynomial functions were used to construct the baseline modelling basis and a maximum frequency shift limit of $\pm 20$Hz was used for the Brent optimization algorithm. The RATS and TDSR methods were implemented in the R programming language (v3.5.0) [@R2018] and integrated into the spant (SPectroscopy ANalysis Tools) package (v0.11.0) for MRS analysis (https://github.com/martin3141/spant). All code and data used to generate the results from this paper are freely available from: https://github.com/martin3141/rats.

Results
=======

Figure \[freq\] compares the accuracy of RATS and TDSR for frequency shifts up to 10Hz. At the lowest SNR value of 2.5, the TDSR method shows improved frequency correction accuracy (part a) over RATS; however, for the SNR range between 5 and 15, RATS is the more accurate method. Scatter plot c) illustrates how the TDSR method becomes increasingly unstable for frequency shifts greater than 5Hz for SNR=5, and this is the cause of the reduced performance compared to RATS in this SNR regime. Part b) shows TDSR has improved phase correction accuracy over RATS, with both methods becoming comparable in both frequency and phase correction for SNR=15 and above.

![Frequency a) and phase b) correction accuracy of RATS and TDSR for simulated spectra over a range of SNR values. Frequency was linearly increased up to 10Hz for each SNR set of 512 spectra. c) estimated vs true frequency shifts for SNR=5.[]{data-label="freq"}](figure1.eps){width="90.00000%"}

Figure \[phase\] shows how RATS and TDSR perform with combined frequency and phase perturbations. Part a) shows a similar trend to Figure \[freq\] with the RATS method proving more accurate in the SNR range between 5 and 15, however some further instability was seen for the TDSR method at SNR=25. Part b) shows how the phase correction accuracy between RATS and TDSR was much closer than in the simpler test of Figure \[freq\]b.
To summarize Figures \[freq\] and \[phase\], the RATS method produces more accurate frequency correction in the SNR range between 5 and 15, whereas TDSR performs better for low SNR values of less than 5. Above a SNR of 15 both methods are comparable; however, even at high SNR, some instability was found for the TDSR method when large frequency shifts (>5Hz) were combined with phase variation.

![Frequency a) and phase b) correction accuracy of RATS and TDSR for simulated spectra over a range of SNR values. Spectra were randomly phased and frequency was linearly increased up to 10Hz for each SNR set of 512 spectra.[]{data-label="phase"}](figure2.eps){width="60.00000%"}

The performance of each method in the presence of simulated baseline instability, combined with frequency and phase perturbations, is shown in Figure \[bl\_perf\]. For stable baseline simulations (Figures \[freq\] and \[phase\]), a SNR of 15 was shown to produce good results for both methods; however, in the unstable case the RATS method has reduced bias and improved accuracy for both frequency (Figure \[bl\_perf\]a) and phase (Figure \[bl\_perf\]b) correction. This improvement in accuracy is illustrated in Figure \[bl\_spectra\], where RATS (part c) shows closely aligned and phased spectra compared to TDSR (part b).

![Frequency a) and phase b) correction accuracy of RATS and TDSR for simulated spectra (SNR=15) with unstable baselines. Residual water signals were generated to simulate baseline distortion and spectra were also randomly phased with linearly increasing frequency shifts up to 10Hz over 512 spectra.[]{data-label="bl_perf"}](figure3.eps){width="70.00000%"}

![a) 512 overlaid simulated spectra (SNR=15) with unstable baseline, random phase and linearly increasing frequency shifts up to 10Hz.
b) TDSR corrected spectra and c) RATS corrected spectra.[]{data-label="bl_spectra"}](figure4.png){width="90.00000%"}

Edited GSH spectra are shown in Figure \[ed\_gsh\_spec\], with a comparison between a) uncorrected, b) TDSR corrected and c) RATS corrected data. Frequency or phase errors between the edited and non-edited averages result in imperfect subtraction and residual signal, most clearly seen in the tNAA spectral region between 1.8 and 2.2 ppm. Moderate distortions in the tNAA region are present in the uncorrected and TDSR corrected data, whereas RATS correction either eliminates these distortions or reduces them to be indistinguishable from noise. The impact of these artifacts on the edited GSH resonance at 2.95 ppm can also be seen, with uncorrected and TDSR corrected data showing erroneously elevated GSH due to an incomplete subtraction of the tCr peak.

![In-vivo edited GSH spectra from a voxel placed in the ACC of a healthy participant. a) uncorrected data, b) TDSR corrected data and c) RATS corrected data. The tNAA subtraction artifact region is highlighted with a red circle.[]{data-label="ed_gsh_spec"}](figure5.eps){width="90.00000%"}

Discussion and Conclusion
=========================

In this work, the incorporation of baseline modeling and VARPRO into retrospective spectral correction has been shown to offer: 1) improved robustness to frequency shifts greater than 5Hz; 2) improved robustness to unstable baseline distortions; and 3) reduced subtraction artifacts for GSH J-difference edited MRS. The improved robustness to larger frequency shifts results from the use of a VARPRO formulation to reduce the optimization complexity from two to one dimension. This allows the use of optimization methods based on a 1D search, which are less prone to converging on local minima.
In the low SNR regime (less than 5) the new method was generally less accurate than TDSR, likely resulting from the increased modeling freedom due to the addition of a baseline basis set. However, in this regime both methods performed poorly, with frequency correction errors greater than 3Hz, and therefore the use of either method may not be advisable for low SNR spectra. Whilst correction accuracy was the main focus of this work, it should also be noted that both RATS and TDSR correction may be performed quickly on modern hardware. For instance, the correction of 128 averages takes approximately 0.6 and 0.4 seconds for TDSR and RATS respectively using an Intel(R) Core(TM) i5-8250U CPU. Therefore, in cases where SNR is low and the best method may not be obvious, it is feasible to compare the SNR from averaging uncorrected, TDSR corrected and RATS corrected data and proceed with the highest quality reconstruction. One alternative to spectral-registration based methods is known as the metabolite-cycling technique [@Dreher2005], where metabolite selective inversion pulses are alternately applied prior to the localization scheme as an alternative to conventional water suppression. Using this scheme, a full intensity water signal is acquired for each average and water-suppressed metabolite data may be obtained by subtracting average pairs. Therefore, accurate phase and frequency correction may be performed using the high SNR water signal in protocols where the metabolite SNR may be too low for spectral registration with TDSR or RATS [@Hock2013; @Doering2018]. Whilst effective for low metabolite SNR applications, at the time of writing the metabolite-cycling method is not widely available or suitable for non-proton MRS. The first reported use of retrospective correction for conventional clinical MRS was in 2005 [@Oz2005]. Yet, despite being compatible with all widely available sequences and rapid to perform, current use remains largely restricted to edited-MRS [@Evans2013].
One potential reason may be the smaller typical voxel dimensions used for conventional clinical MRS (2cm sided cube) compared to edited MRS (3cm sided cube), resulting in a metabolite SNR lower than required for accurate correction. However, previous work [@Oz2005] has shown that combining averages over blocks is effective for using spectral registration with lower SNR data. Further potential barriers to use in the clinical setting include the extra time required to export individual averages for offline analysis and limited availability of spectral registration methods integrated into the scanner software. Recently available open-source implementations of spectral registration methods in the MATLAB (MathWorks, Natick, Massachusetts, USA) based FID-A package [@Simpson2017] or R [@R2018] based spant package (https://github.com/martin3141/spant) may aid clinical validation and uptake in the future. Whilst the focus of this paper is on the correction of distorted scans, the RATS method produces an amplitude, frequency offset and phase offset for each average, which may also be used as criteria for excluding individual averages from the final result. For instance, a frequency offset greater than 5Hz could act as an exclusion criterion for a particular average, and a dataset with more than 10% of averages excluded may indicate significant movement which should be incorporated into the clinical decision making process. Plots of the amplitude, frequency and phase throughout the scan could also accompany the fitting results to aid quality assessment and clinical interpretation. In conclusion, the RATS algorithm has been shown to provide accurate retrospective correction of SVS MRS data in the presence of large frequency shifts and baseline instability. The method may be easily incorporated into the processing pipeline of both conventional and J-difference edited MRS to improve lineshape, SNR and aid quality assessment.
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============

The authors acknowledge the support of Philips Healthcare Clinical Science for the provision of the MEGA-PRESS implementation.
---
abstract: 'We analyze a large collection of RXTE archive data from April 1997 to August 2003 of the bright X-ray source Scorpius X-1 in order to study the broadband spectral evolution of the source for different values of the inferred mass accretion rate by studying energy spectra from selected regions in the Z-track of its Color-Color Diagram. A two-component model, consisting of a soft thermal component interpreted as thermal emission from an accretion disk and a thermal Comptonization component, is unable to fit the whole 3–200 keV energy spectrum at low accretion rates. Strong residuals in the highest energy band of the spectrum require the addition of a third component that can be fitted with a power-law component, which could represent a second thermal Comptonization from a much hotter plasma, or a hybrid thermal/non-thermal Comptonization, where the electrons in the Comptonizing cloud have a Maxwellian distribution of velocities with a power-law hard tail. The presence of this hard emission in Sco X-1 has been previously reported, however, without a clear relation with the accretion rate. We show, for the first time, that there exists a common trend in the spectral evolution of the source, where the spectral parameters change in correlation with the position of the source in the CD. In particular, using a hybrid thermal/non-thermal Comptonization model (EQPAIR code), we show that the power supplied to the non-thermal distribution can be as high as half of the total hard power injected in heating the electron distribution. We also found that a small sample of spectra, taken when the source resides at the top of the FB, can show intense hard X-ray emission. We discuss the physical implications derived from the results of our analysis, with a particular emphasis on the hardest part of the X-ray emission and its possible origins.'
author:
- 'A. D’Aí, P. $\dot{Z}$ycki, T. Di Salvo, R. Iaria, G. Lavagetto, N.R.
Robba'
bibliography:
- 'references.bib'
title: 'Broad-band Spectral Evolution of Scorpius X-1 along its Color-Color Diagram'
---

Introduction
============

Scorpius X-1
------------

Sco X-1 is the brightest persistent X-ray source in the sky and the first identified X-ray extra-solar source [@giacconi62]. The X-ray source is an old, low magnetized neutron star (NS), accreting matter transferred through Roche-lobe overflow from a low-mass companion [recently identified as an M class star of $\sim$ 0.4 $M_{\odot}$, @steeghs02]. Sco X-1 is a prototype of the class of the Low-Mass X-ray Binaries (LMXBs), and assuming a distance of $2.8 \pm 0.3$ kpc [@bradshaw99], the source emits close to the theoretical Eddington limit for a 1.4 $M_{\odot}$ NS ($L_{Edd} \sim 2~\times~10^{38}~$ erg s$^{-1}$).\
Based on the timing behavior of LMXBs in correlation with the position of a given source in the X-ray color-color diagram (CD), @hasinger89 grouped these sources into two categories: the Z sources and the Atoll sources. The former are brighter, radiating close to $L_{Edd}$; the latter are less bright, emitting at 0.01–0.1 $L_{Edd}$. Z sources exhibit the classical three branches describing a Z pattern in the CD: the Horizontal Branch (HB) at the top of the Z track, followed by the Normal Branch (NB) and the Flaring Branch (FB) at the bottom of the pattern. There are strong indications [e.g. @vrtilek91] suggesting that what drives the changes in the spectral and temporal properties of LMXBs is the instantaneous accretion rate ($\dot{M}$), which, for Z sources, is believed to increase monotonically from the HB to the FB. Similarly, Atoll sources display two different spectral/timing states: the soft and luminous banana state, associated to higher accretion rates, and the hard and less luminous island state, associated to lower $\dot{M}$.
This early and straightforward classification has now given way to a more complex picture, as the patterns that Atoll and Z sources describe in their CDs, when displayed on long timescales, appear to be more and more similar to each other [@muno02; @gierlinski02], although some observational facts still hold: Z sources move on the CDs on shorter timescales, have higher luminosities and present a timing phenomenology different from that of the Atoll sources.\
Sco X-1, as all the sources of its class, shows several quasi-periodic oscillations (QPOs) along all the branches of its CD: the horizontal branch oscillations (HBOs), the normal branch oscillations (NBOs) and the flaring branch oscillations (FBOs). In Sco X-1, NBOs, with peak frequencies in the range 4.5–7 Hz, and FBOs, in the frequency range 6–25 Hz, seem to be physically related to each other since the NBO peak frequency smoothly joins the FBO peak frequency as the source moves from the NB to the FB [@casella05]. Van der Klis et al. (1996) reported the first observation of HBOs in Sco X-1 at 45 Hz, with an inferred harmonic near 90 Hz; the power spectrum can also show a pair of QPOs, whose frequencies are in the range 800-1100 Hz, denoted kHz QPOs, that shift simultaneously in frequency, with an almost constant or weakly frequency-dependent peak separation [see e.g. @zhang06]. The physical interpretation of these timing features is not unique and the related scientific debate is still open [see @klis04 for a review].\
Spectral studies of Sco X-1 have so far not been as extensive and detailed as timing studies. This is in part due to the extreme brightness of the source, which prevents its observation with the most sophisticated and high-resolution X-ray satellites like BeppoSAX and ASCA in the recent past and Chandra and XMM-Newton at present. Three articles have so far investigated the broad-band spectral behavior of the source, using data from the Rossi X-ray Timing Explorer (RXTE).
@barnard03 used data from both the Proportional Counter Array (PCA) and the High Energy X-ray Timing Experiment (HEXTE), showing that the spectrum, in the energy range 2.5–50 keV, can be fitted by a simple two-component model, a black-body soft component and a cutoff power-law, plus a broad Gaussian line, and interpreted these results in the framework of the so called Birmingham model [@church01]. In this interpretation the spectral emission consists of black-body emission from the NS plus Comptonized emission from an extended accretion disk corona. @bradshaw03 used only PCA data (in the 2.5–18.2 keV range), adopting a model consisting of black-body emission plus a bulk motion Comptonization component and a reflection component in the form of a broad Gaussian line. @damico01 studied HEXTE data above 20 keV, in order to test the presence of high energy X-ray emission in the spectra of the source. Data were fitted using a bremsstrahlung component, to mimic the effects of the thermal component, plus a power-law component whenever the statistics and clear residuals in the fit required it. In this way no apparent correlation was found between the presence of the hard tail and the position of the source in the CD, contrary to what had been reported for other Z sources.\
More recently, @disalvo06, using INTEGRAL data in the 20–200 keV energy band, detected a hard X-ray tail, with photon index values between 2 and 3.4, whose intensity decreased as the source moved towards higher accretion rates. Data did not show evidence of a high-energy cutoff up to $\sim$ 200 keV, suggesting a non-thermal origin for the hard tail.\
Sco X-1 is also an interesting source of radio emission.
@formalont01, through an extensive VLBA monitoring campaign, have shown that the radio emission is composed of a point-like radio core emission at the position of the X-ray source and of two opposite radio lobes, moving through the ISM with relativistic speeds $v/c =0.45 \pm 0.03$, and with an angle of 46$^{\circ} \pm$6$^{\circ}$ with respect to our line of sight. A connection is also evident between radio flares at the core of the source and the subsequent flaring of the lobes, so that it is argued that the energy production of the radio emission is confined near the NS and only afterward transported to the lobes via the working surface of a jet [see e.g. the magnetohydrodynamical simulations of @kato04].\
These results stimulated the search for a possible connection between inflow and outflow mechanisms working in the violent regimes of accretion onto a NS or a black hole (BH). Energetic mass outflows, sometimes collimated in a typical jet pattern, can be produced not only in extragalactic X-ray sources, such as quasars and active galactic nuclei (AGN), but also in galactic X-ray binary systems [see @fender02 for a review]. It has been observed that galactic BH binary systems are able to produce strong jet emission (sources for which a jet has already been spatially resolved are usually denoted *microquasars*), with radio-loud states associated to hard/low X-ray states, alternated with periods of radio-quenching during soft/high X-ray states. For NS systems radio-loud episodes are generally interpreted as a jet signature. In this context, all the Z sources seem to show, under certain conditions, a radio-jet nature. Detections of radio-loud states are usually associated with spectral states of low $\dot{M}$, i.e. on the HB of the Z-track [@fender00], while the few radio detections in atoll systems found the sources residing in the island state of their CDs [@migliari03].
Spectral properties of the Z source class {#introduction}
-----------------------------------------

In this section we briefly review the spectral properties of the Z sources. The class of the Z sources displays a homogeneous pattern of spectral properties; their energy spectra can usually be decomposed as follows:\

- a soft thermal component, with characteristic temperatures in the 0.5–1 keV range, interpreted as thermal emission from the NS surface or an optically thick, geometrically thin accretion disk;

- a thermal Comptonized component, where the electron cloud distribution has a temperature in the 2.5–3 keV range, the soft seed photon temperature is around 1 keV and the optical depth can have rather large values ($\tau \geq$ 3); this relatively cold thermal Comptonization is thought to take place close to the NS surface; the soft seed photon temperature generally exceeds the highest temperature reached in the accretion disk, so that the source of the soft emission is probably confined in the boundary layer between the inner edge of the accretion disk and the NS surface. The high optical depth of the cloud saturates the seed photon spectrum close to the electron temperature, so that a quasi-Wien spectrum results. A theoretical interpretation of this spectral decomposition has been given by @inogamov99, where the boundary layer emission is expected to be radiation pressure supported and its emission is locally at the Eddington rate. Change in luminosity is attributed to a change in the emitting area rather than in its emissivity. Phase resolved spectroscopy of bright LMXBs [@revnivtsev05] further supports this interpretation, establishing in all the analyzed Z sources a common cut-off, due to the boundary layer thermal emission, at $\simeq$ 2.4 keV.

- a reflection component, often simply modelled with a broad Gaussian line, in the 6.4-6.9 keV range.
In all the Z sources this line has always been observed; spectra from BeppoSAX or RXTE observations were not able to constrain the shape of the line, while its width was in the range 0.1-1 keV. High resolution spectra in this energy range have been obtained so far only with the BBXRT [@smale93] and the XMM-Newton satellites [@costantini02] for Cygnus X-2; in both observations the line was associated to highly ionized iron (centroid energy $\simeq$ 6.7 keV), while the line appears to be intrinsically broad (FWHM $\sim$ 1 keV). @brandt94 pointed out that the determination of the line profile can be a primary diagnostic tool, but, at the same time, kinematic and relativistic effects can greatly distort the line from the simple Gaussian profile. Moreover, if reflection is caused by hard X-rays reflected by a cold accretion disk, the contribution of the Compton reflected continuum should be taken into account when the iron line emission is a considerable contribution to the total energy flux;

- a power-law hard tail, which can contribute up to a few percent of the total energy flux, whose strength usually varies in correlation with the position of the source in the CD, namely being strongest on the HB, gradually decreasing as the source moves to higher accretion rates and totally fading in the FB [e.g. GX 17+2, @disalvo00]. The photon index of the power-law was generally found in the range 1.5–3 with no evident high energy cutoff up to energies of about 200 keV.

In this article we report a complete investigation of the spectral properties of Sco X-1 through an extensive analysis of RXTE archive data, indicating a clear connection between the position of the source on the CD and its spectral behavior.

Data reduction and analysis
===========================

The scientific payload of RXTE consists of three instruments, the PCA, the HEXTE, and the All Sky Monitor (ASM).
The PCA consists of five co-aligned Xenon proportional counter units (PCUs) with a total effective area of about 6500 cm$^2$. The instrument is sensitive in the energy range from 2 keV up to 60 keV [@jahoda], although the response matrix is best calibrated in the energy range 3-22 keV. Data can be processed using several different configuration modes; for our analysis we exclusively use the *Standard2* mode, with 16 s time resolution and 128 energy channels in the 2–60 keV energy range. The HEXTE consists of two clusters of four NaI/CsI-phoswich scintillation counters that are sensitive from 15 keV up to 220 keV [@rothschild]. We use the *Standard Mode*, with 64 energy channels, for the reduction and analysis of the HEXTE data. Background subtraction is done by using the source-background rocking of the collimators. We use HEXTE response matrices of 1999 August.\
We collected a large amount of RXTE archive data, discarding only minor, shorter observations, from April 1997 up to August 2003. We present in Table \[tab1\] the datasets we used for our analysis, indicating the associated proposal number, the starting and ending times of each observation in Terrestrial Time, and the corresponding exposure times.\
Data have been processed using the standard selection criteria, discarding data taken during Earth occultations and passages through the South Atlantic Anomaly. We only used data from PCU2 for the PCA and data from Cluster A for the HEXTE instrument. We constructed color-color diagrams (CDs) of the source by extracting energy-dependent lightcurves using PCA energy channels 5–10, 11–16, 17–22 and 23–44, with a 64 s bintime. These channel ranges correspond to the energy ranges 1.94–4.05 keV, 4.05–6.18 keV, 6.18–8.32 keV and 8.32–16.26 keV respectively.
We define the Soft Color (hereafter SC) as the ratio of count rates in the 4.05–6.18 keV and 1.94–4.05 keV energy bands, and the Hard Color (hereafter HC) as the ratio of count rates in the 8.32–16.26 keV and 6.18–8.32 keV energy ranges. However, the channel-to-energy conversion depends on the period of activity of the satellite (there are 5 different instrumental Epochs); most of our data belong to Epoch 3, for which the given energy bands are appropriate, while for the Epoch 5 datasets the energy ranges are slightly shifted, resulting in a consequent shift on the HC and on the SC axes. Because we are not interested in the secular shifts of the track in the CD, nor do we want to accumulate spectra taken at different Epochs, but rather to perform a statistical study of the broad-band spectral behavior of the source for different CD positions in different datasets, we did not perform the energy-dependent corrections to the colors.\
We selected regions in the CDs for each observation dataset, in order to cover a homogeneous part of the Z-track and to have at the same time suitable statistics. From these selections we derived the good time intervals (GTI) which we used for extracting the corresponding spectra for the PCA and HEXTE data.\
Given the high luminosity of the source all the PCA spectra have been dead-time corrected. Some observations were performed with the use of an offset pointing; in these cases we extracted the response matrices of both the HEXTE and the PCA instruments, following the indications given in the on-line Data Analysis documentation pages. We processed and analyzed data using version 6.0 of the FTOOLS package suite and version 11.3.1 of Xspec.

The Data
========

Hereafter we shall refer to a collection of close-in-time pointed observations used to extract one CD by using its proposal number.
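The color definitions above amount to simple band-rate ratios evaluated per 64 s time bin; a short sketch (array names are illustrative, not FTOOLS output):

```python
import numpy as np

def soft_hard_colors(rate_1p94_4p05, rate_4p05_6p18, rate_6p18_8p32, rate_8p32_16p26):
    """SC = (4.05-6.18 keV) / (1.94-4.05 keV) and
    HC = (8.32-16.26 keV) / (6.18-8.32 keV) count-rate ratios per time bin."""
    sc = np.asarray(rate_4p05_6p18, dtype=float) / np.asarray(rate_1p94_4p05, dtype=float)
    hc = np.asarray(rate_8p32_16p26, dtype=float) / np.asarray(rate_6p18_8p32, dtype=float)
    return sc, hc
```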
From datasets 20035 (and 30035) we have obtained two different CDs; we will label them, respectively, 20035A and 20035B (30035A and 30035B, see the [*S*pectrum]{} column in Table \[tab2\]). Datasets 20035A, 20035B, 30036, 30035A, 30035B and 40020 belong to Epoch 3, while datasets 70014, 70015 and 80021 belong to Epoch 5. Datasets 70014 and 70015 have been merged into one CD, because these observations were close in time and the source pointing was the same.\
The eight CDs that we have extracted present the source mostly on the NB/FB, as these are the states where the source spends most of the time. In Sco X-1 the HB is not *horizontal* but rather a vertical continuation of the NB (a characteristic shared also by GX 17+2 and GX 349+2, the so-called Sco-like sources). For visual clarity we will refer to the Sco X-1 CD pattern mainly by making the distinction between the left (or FB) track and the right (NB and HB) track, and between the top (harder) parts and the bottom (softer) parts of the tracks.\
The count rates associated with the extracted spectra show a significant trend, generally correlated with the HC; HEXTE and PCA count rates are highest at the top of the FB and monotonically decrease towards the FB/NB apex; following the direction bottom NB $\rightarrow$ HB, the trend is inverted.
This behavior is most pronounced in the HEXTE data (compare the count rate in spectrum 07 at the top of the FB with spectrum 10 in the transition region FB/NB, for which the rate is reduced by two thirds in the HEXTE, but only by one third in the PCA).\

------------------------------------------------ ------------------------------------------------ ------------------------------------------------ ------------------------------------------------
![image](f1a.ps){height="3.5cm" width="2.5cm"} ![image](f1b.ps){height="3.5cm" width="2.5cm"} ![image](f1c.ps){height="3.5cm" width="2.5cm"} ![image](f1d.ps){height="3.5cm" width="2.5cm"}
![image](f1e.ps){height="3.5cm" width="2.5cm"} ![image](f1f.ps){height="3.5cm" width="2.5cm"} ![image](f1g.ps){height="3.5cm" width="2.5cm"} ![image](f1h.ps){height="3.5cm" width="2.5cm"}
------------------------------------------------ ------------------------------------------------ ------------------------------------------------ ------------------------------------------------

In four datasets (20035A, 20035B, 40020 and 30036) the satellite pointed at the source on-axis; in three datasets (30035A, 30035B, 80021) an offset pointing was used. This influenced the high energy statistics, whereby, at the same position on the CD, an offset pointing produced a HEXTE count rate lowered by almost one order of magnitude. Spectra extracted from datasets 70014 and 70015 also have low statistics in the HEXTE band, as the integration time is shorter compared with the other spectra.\
We generally used a 3.0–22 keV energy band for the PCA spectra and a 20–200 keV energy band for the HEXTE spectra. HEXTE channels from 28 to 63 have been rebinned, grouping four channels into one.
In some of the spectra that we extracted we found, however, model-independent mismatches in the overlapping energy region between the HEXTE and the PCA; because these mismatches are of instrumental origin, we occasionally restricted these energy bands until these features no longer impacted on our fit results. In Table \[tab2\] we report the exact energy bands used for the PCA and the HEXTE datasets for the selected spectra.\
The choice of an appropriate systematic error for the PCA data is an essential step in the analysis of the spectra. This choice is not straightforward, as the commonly used standard candle, the Crab, provides in this case only a partial calibration tool. The Crab count rate for every energy channel below $\sim$ 20 keV is about one order of magnitude less than the Sco X-1 count rate, thus making the extrapolation of the Crab-based systematic error to the Sco X-1 spectra risky. The commonly adopted practice of assigning a 1% systematic error to all the energy channels has a strong impact on the softer part of the spectrum, where the most heavily weighted channels lie; we found that in this case the fit is strongly driven by residuals in the 3-6 keV range, leading to a model of the spectrum that does not seem physically plausible.
The overall effect is a shift of any soft component outside the RXTE energy band, and unrealistically high values for the extrapolated flux.\
The only way to avoid the strong driving effect of the first channels is to raise the systematics in this range; we found that assigning a 3% systematic error to the channels between 3 and 3.9 keV (namely the first two PCA channels available in our spectra), while leaving a 1% systematic error for all the other channels, allows us to derive a physically plausible scenario for the spectral evolution of the source, does not produce extreme values of the reduced $\chi^2$ for any of the fits performed, and allows us to exploit the whole available energy band and to generally constrain all the spectral components of the adopted models. In some fits we still found systematic residuals in the soft part of the spectrum, more pronounced in the softer spectra, whose origin is presumably instrumental and related to the xenon L-edge near 5 keV. We chose to ignore these features, in order not to overparameterize the fit, as they influence neither the determination of the spectral parameters nor the $\chi^2$ values of the fits.\
No systematic error has been associated with the HEXTE data. A normalization constant is left free to vary between the PCU2 and HEXTE spectra to take into account residual flux calibration uncertainties.

Spectral models {#spectralmodels}
===============

As pointed out in Section 1, spectra of LMXBs are usually described as the sum of at least two spectral components. Because the PCA energy band starts at 3 keV, it is not possible to constrain the effect of photoelectric interstellar absorption on the source flux. Following @christian97, we fixed its value to 3 $\times$ 10$^{21}$ cm$^{-2}$ for each fit performed.
To fit the CD-selected spectra, we first tried a series of models given by the combination of a soft thermal component, such as a blackbody or multicolor disk blackbody [[`DISKBB`]{}  in Xspec,  @mitsuda84], and a thermal Comptonized component. For the latter we alternatively tried the `COMPPS` [@poutanen96], `COMPTT` [@titarchuck94] and [`THCOMP`]{}  [not included in the standard XSPEC package, see  @zdziarski96] Comptonization models. For every model adopted, large residuals in the 6–10 keV energy range indicated the presence of iron features, which we simply modelled with a broad Gaussian line in the 6–7 keV energy range. We noted a general tendency of every adopted model to produce a very broad Gaussian line (with $\sigma \geq 0.8$ keV), generally with centroids at energies $\leq 6.4$ keV. In order to avoid unrealistically broad Gaussian lines absorbing the underlying continuum, we constrained the line to have centroid energies only in the 6.4–7.0 keV range and a line width below 0.8 keV, while the normalization of the line was left completely free to vary.\
We found an adequate description of the data, for energies below 30 keV, using a two-component model given by the combination of the [`DISKBB`]{}  and the [`COMPTT`]{}  components, a description first introduced by @mitsuda84 and known as the [*Eastern model*]{}.
Other Comptonization models can represent the spectral sample equally well statistically, but we preferred to illustrate our results using this Compton model, as it is one of the most widely used in LMXB spectral analysis and allows easy comparison of our results with those obtained for other sources.\
Using the other commonly adopted description, the Western model [@white86], which predicts a $\sim$ 1–2 keV blackbody thermal emission and a Comptonization component with soft disk seed photons in the 0.4–0.8 keV energy range [see e.g. @barnard03], we found higher $\chi^2$ values for each spectrum and an extrapolated super-Eddington luminosity along each part of the Z-track; this would be dramatic for some FB spectra, which would exceed the Eddington limit by more than an order of magnitude, making this assumption unrealistic. Moreover, the power dissipated in the Comptonizing disk corona would be constantly much greater than the power dissipated as thermal emission at the NS surface; this contradicts our expectation that, if the disk is very close to the NS, the power dissipated near the compact object should be at least of the same order as the power dissipated along the disk [see also @done02  for a more extensive discussion].\
The [`DISKBB`]{}  component has two free parameters: the temperature at the inner disk radius, expressed in keV (kT$_{DB}$), and a normalization factor that depends on the inner radius of the accretion disk, the distance to the source and the inclination angle between the line of sight and the normal to the disk. The [`COMPTT`]{}  component has four free parameters: the soft seed-photon temperature, kT$_{0}$, the electron temperature of the Comptonizing cloud, kT$_e$, the optical depth, $\tau$, and a normalization constant. We assumed a spherical Comptonizing geometry.
We will refer to this two-component model as the [`DBBTT`]{}  model.\
As is evident from Table \[tab2\], the [`DBBTT`]{}  model is, however, progressively unable to satisfactorily fit the whole energy spectrum as the source moves into zones of low inferred accretion rate. The residuals with respect to this model clearly indicated an excess of flux at energies greater than 30 keV. There is also a second group of spectra (namely spectra 7, 8, 12, 13 and 26) that gave an unsatisfactorily high $\chi^2$ value for the same reason. These spectra lie at the top of the FB in the datasets with the highest HEXTE statistics. We will refer to this particular set as the topFB spectra.\
Because the mechanisms producing this hard excess in the Z sources are not yet clear, in the following we adopt two working hypotheses to fit the broadband continuum of the source: a) as in the case of the quasi-saturated boundary-layer Comptonization, this component is also the result of a thermal Comptonization process, but from a much hotter plasma; b) the broadband spectrum is the result of a hybrid electron velocity distribution, i.e. a thermal electron distribution, which produces the optically thick Comptonization, with a non-thermal power-law tail responsible for the high-energy excess. Because, in case a), the hot plasma can only reside in regions close to the central compact object, where magnetic and gravitational effects are strongest, we assume that the boundary-layer radiation field is the primary source of soft seed photons also for this second thermal Comptonization. We note that this assumption is also needed in order to avoid overestimating the contribution of the steep hard tail at low energies: an unbroken power-law with a steep photon index can give a major contribution in the lowest energy band of the spectrum, so that the fit is driven by low-energy residuals, possibly resulting in an erroneous model of the continuum.
Therefore, a low-energy cutoff at the seed-photon temperature must be taken into account, as the expected photon-field spectrum is within the instrumental range.\
We therefore added to our basic two-component model a power-law, multiplied by a low-energy cutoff, in order to mimic this second Comptonization. We anchored the value of the low-energy cutoff to the kT$_0$ value of the [`COMPTT`]{}  component, and chose the power-law to have a pegged normalization expressed as the flux of this component in the 20.0–200.0 keV energy range ([`PEGPWRLW`]{}  in XSPEC). For each spectrum we checked the $\chi^2$ improvement obtained by adding a high-energy cutoff, noting that we could obtain only lower limits on the cutoff energy, generally in the 30–100 keV range, without any significant improvement in the $\chi^2$. We therefore did not use a high-energy cutoff. We will refer to this model simply as the   or *two Comptonizations* model.\
To model the possibility given in case b), we also fitted the spectra showing a strong hard excess (namely the spectra for which the [`DBBTT`]{}  model gave a null-hypothesis probability below 0.05, generally corresponding to a reduced $\chi^2$ value of 1.4) with a model given by the sum of a thermal disk component plus the recently developed thermal/non-thermal hybrid Comptonization model named [`EQPAIR`]{}  [see @coppi99; @coppi00  for a full description of the model]. It embodies Compton scattering, $e^{\pm}$ pair production and annihilation, $pe^{\pm}$ and $e^{\pm}e^{\pm}$ thermal and non-thermal bremsstrahlung, and energy exchange between the thermal and non-thermal parts of the $e^{\pm}$ distribution via Coulomb scattering. A fraction of the electrons is accelerated to suprathermal energies, and the thermal part of the $e^{\pm}$ distribution can be additionally heated.
This model assumes a spherical plasma cloud with an isotropic and homogeneous distribution of photons and $e^{\pm}$, and soft seed photons produced uniformly within the plasma, with a thermal temperature $kT_{BB}$. The properties of the plasma depend on its dimensionless compactness, ${\it l} \equiv \mathcal{L} \sigma_T / (\mathcal{R} m_e c^3)$, where $\mathcal{L}$ is the total power in the source, $\mathcal{R}$ is the radius of the sphere and $\sigma_T$ is the Thomson cross section. The compactness is divided into a hard compactness, ${\it l_h}$, which corresponds to the power supplied to the electrons, and a soft compactness, ${\it l_s}$, corresponding to the power supplied in the form of soft photons. The compactnesses corresponding to the electron acceleration and to the additional heating of the thermal part of the $e^{\pm}$ distribution are denoted ${\it l_{nth}}$ and ${\it l_{th}}$, respectively, with ${\it l_h} \equiv {\it l_{nth}} + {\it l_{th}}$. The non-thermal energy distribution of the electrons in the plasma is assumed to be a power law, replacing the Maxwellian exponential tail from a Lorentz factor $\gamma_{min} = 1.3$ up to $\gamma_{max} = 1000$, with an energy index $\Gamma_{inj}$. The total Thomson optical depth and the electron temperature of the plasma are computed self-consistently from the assumed optical depth $\tau_p$ of the background electron-proton plasma.\
We chose to model the thermal disk emission using, as in the hot Comptonization model, the [`DISKBB`]{}  component for spectra in the HB/NB, while we adopted the post-Newtonian model [`DISKPN`]{}  [@gierlinski99] for the topFB spectra since, in this accretion state, we expect the disk to reach its minimum inner radius, close to the NS; the [`DISKPN`]{}  model takes into account the effective gravitational potential in the neighborhood of the compact object by computing the emergent disk spectrum using a more appropriate post-Newtonian potential.
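As a concrete illustration of the compactness parameter introduced above, the following sketch evaluates ${\it l}$ in its standard dimensionless form, $l = \mathcal{L}\sigma_T/(\mathcal{R} m_e c^3)$, for a neutron-star-sized scattering region. The luminosity and radius below are purely illustrative numbers, not fit results from this work:

```python
SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
M_E = 9.109e-28       # electron rest mass [g]
C = 2.998e10          # speed of light [cm/s]

def compactness(L_erg_s, R_cm):
    """Dimensionless compactness l = L * sigma_T / (R * m_e * c^3)."""
    return L_erg_s * SIGMA_T / (R_cm * M_E * C**3)

# e.g. L ~ 1e38 erg/s dissipated within R ~ 10 km (1e6 cm)
print(compactness(1e38, 1e6))
```

Such near-Eddington luminosities confined to a region of a few stellar radii give very large compactnesses, which is why pair processes are included self-consistently in the model.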
We assumed that the disk reaches the last stable orbit at 6 R$_g$ and kept this parameter fixed in all the fits performed, while we left free to vary the maximum disk temperature (T$_{max}$) and a normalization factor (K = $M^2 \cos(i)/ D^2 \beta^4$, where $M$ is the central mass in solar units, $i$ is the inclination of the disk, $D$ is the distance to the source in kpc and $\beta$ the color/effective temperature ratio). For the [`EQPAIR`]{}  component we left the following parameters free to vary: $kT_{BB}$, $l_h/l_s$, $l_{nth}/l_{h}$, $\tau_p$ and the normalization factor. The fit was not very sensitive to variations of $\Gamma_{inj}$ in the range between 1.5 and 3 for spectra in the HB/NB, while for the topFB spectra it had to be left free to vary, as this gave a significant decrease in the $\chi^2$. The value of the soft compactness ${\it l_s}$ was unconstrained by the fit (it is mainly driven by the pair production rate, which manifests itself through the annihilation line at energies $\simeq$ 500 keV, obviously far beyond our energy limits), and we kept it frozen at the value of 10 [see also @gierlinski99]. All the other parameters of the model were frozen at their default values. We will refer to this description as the   model or, simply, the *hybrid Comptonization* model.

Results and discussion
======================

We have examined 43 energy spectra of   in the 3.0–200 keV energy band using RXTE pointed observations from April 1997 to August 2003; the spectra have been extracted from selected regions of the X-ray CD.
We produced a CD for each group of datasets close in time, in order to avoid shifts of instrumental origin, and repeated our analysis for each CD obtained in this way, thus obtaining a robust representation of the spectral evolution of the source in all its accretion states.\
To fit the spectra we first adopted a two-component model given by the sum of a soft thermal disk emission and a hard Comptonized emission. We checked for the presence of harder X-ray emission by adding a high-energy power-law to our basic model, while for a subset of spectra we adopted a hybrid thermal/non-thermal Comptonization model, as described in the previous section.\
We find that the basic two-component model [`DBBTT`]{}  can fit the 3–200 keV energy band (fits for which we obtained $\chi^2_{red} \leq 1.4$) in 25 out of a total of 43 spectra; these spectra mostly belong to the FB and to the NB. In this group, hard tails are not significantly detected in 14 spectra (spectra 1, 9, 14, 15, 17, 18, 22, 23, 24, 29, 34, 35, 39, 40; see Table \[tab2\]), taken when the source lies at the FB/NB apex of its CD. For the other 11 spectra, although the addition of a new component is not strictly required by the fit, we found a general improvement in the $\chi^2$ value.\
By contrast, we find that this model fails to describe 18 of the 43 selected spectra (fits for which we obtained $\chi^2_{red} \geq 1.4$), the fits getting progressively worse from the bottom of the FB to the upper NB. The residuals with respect to this model clearly show an excess of hard emission at energies above 30 keV.
We report in Figure \[fig9\] four representative deconvolved spectra for the case in which the [`DBBTT`]{}  model does not give an adequate description of the broadband spectral behavior, choosing three spectra from the HB/NB zone and one from the topFB zone: spectrum 06 (at the top of the left track of dataset 20035A), spectrum 16 (at the top of the left track of dataset 30036), spectrum 31 (at the top of the left track of dataset 40020) and spectrum 07 (at the top of the right track of dataset 20035B).\
In the following we group the spectra according to their relative position in the CDs into: topFB spectra (spectra 7, 8, 12, 13, 17, 22, 26, 32, 37), FB spectra (1, 9, 14, 18, 27, 28, 33, 38), NB spectra (2, 3, 10, 15, 19, 20, 23, 24, 29, 30, 34, 35, 39, 40) and HB spectra (4, 5, 6, 11, 16, 21, 25, 31, 36, 41, 42, 43). We caution the reader that this distinction is based only on the results of our spectral analysis for spectra taken in particular zones of the Z-track, without any reference to the temporal behavior of the source. Although it is easy to distinguish the FB from the NB within one dataset just by looking at the light curves, the distinction between the NB and the HB is rather subtle, as the two tracks mostly overlap. For this reason an HB spectrum is, in this analysis, to be considered only as a label for a spectrum extracted from the top part of the left track of each CD.\

The hot Comptonization model
============================

Energetics
----------

In the following we discuss the fit results based on the adopted   model. Although in some spectra this model over-parameterizes the fit, the spectral behavior of the source and our conclusions would not change if we used the best-fitting values of the [`DBBTT`]{}  model (we show in Tables \[tab3a\] and \[tab3b\] the best-fit values of the spectral parameters, together with the associated errors quoted at the 90% confidence level).
In some cases we had to freeze some parameters in order to avoid physically meaningless solutions. In spectra 7, 8, 12, 13, 17, 37 and 38 the electron temperature had to be fixed at 10 keV (a value consistent with what we found in other CDs for the same CD zone; if left free, the fit would converge to kT$_0 >$ kT$_e$, which is clearly unphysical); the photon index of the [`PEGPWRLW`]{}  component was frozen at a reference value of 2 in spectra where this component is not significantly detected or where its value is unconstrained by the fit; the [`DISKBB`]{}  parameters of spectra 42 and 43 had to be fixed at the best-fit values obtained with the [`DBBTT`]{}  model (if left free, this component vanishes, while the [`COMPTT`]{}  and [`PEGPWRLW`]{}  components have comparable flux values).\
In this spectral decomposition we calculated the total luminosity as follows: $$L_x = 4 \pi D^2 (F_{TT} + F_{PEG} + a \xi N_{DB} T_{DB}^4)$$ where $F_{TT}$ is the [`COMPTT`]{}  flux and $F_{PEG}$ is the [`PEGPWRLW`]{}  flux, both extrapolated to the 0.1–200 keV energy range; $\xi = 1/(f^4 \kappa^2)$, where $f$ is the spectral hardening factor [@shimura95], assumed equal to 1.7, and $\kappa \sim 0.4$ corrects for the fact that the radius at which the maximum temperature is reached is greater than the inner disk radius [@vierdayanti06]; $a$ is a constant depending on the distance of the source, which we assumed to be 2.8 kpc, and on the inclination of the disk, which we assumed to be 45$^{\circ}$. In Figure \[fig1\] we show the contribution of the disk and of the Comptonized component to the total flux. The luminosities on both axes are in Eddington units for a 1.4  .\
The source spans a luminosity range between 0.9 L$_{Edd}$ and 2.4 L$_{Edd}$, thus being in a super-Eddington state almost independently of the position of the source on the CD.
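The flux-to-Eddington-units conversion implied by the luminosity equation above can be sketched as follows. The total flux used in the example is a purely illustrative placeholder; only the distance (2.8 kpc) and the 1.4 solar-mass NS are values assumed in the text:

```python
import math

KPC_CM = 3.086e21            # 1 kpc in cm
L_EDD_1P4 = 1.26e38 * 1.4    # Eddington luminosity of a 1.4 M_sun NS [erg/s]

def l_edd(f_tot, d_kpc=2.8):
    """Luminosity in Eddington units, L_x / L_Edd = 4*pi*D^2*F_tot / L_Edd,
    from a total unabsorbed 0.1-200 keV flux F_tot in erg cm^-2 s^-1."""
    return 4.0 * math.pi * (d_kpc * KPC_CM)**2 * f_tot / L_EDD_1P4

# e.g. an illustrative total flux of 2e-7 erg cm^-2 s^-1 gives ~1 L_Edd
print(l_edd(2e-7))
```

This makes explicit how fluxes of order $10^{-7}$ erg cm$^{-2}$ s$^{-1}$ at 2.8 kpc translate into the near- and super-Eddington luminosities quoted above.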
We note that the only way to drastically reduce this range is to suppose a much more massive compact object (the possibility of a massive NS hosted in the   system is also discussed by @stella99 in the framework of the relativistic precession model used to explain the frequencies of quasi-periodic oscillations in NS and BH systems), given that both the distance and the inclination angle are known with good accuracy (10% and 13% relative error, respectively).\
The disk flux and the hard flux correlate, as expected, with the total flux, although the disk emission presents a more pronounced linear relation, while the hard flux seems to saturate as the total flux reaches the highest values. If we assume a linear dependence of the two contributions on the total emission, we derive a constant of proportionality equal to 0.56 for the disk flux and 0.44 for the Comptonized flux. When we look at the ratio of the two contributions along the Z-track (see Figure \[fig2\]; a 5% error on this ratio is assumed, after having tested the corresponding hard and soft flux values for different parameter values of the spectral components within the uncertainties derived from the fits), there is a general trend towards a higher hard/soft ratio as the source moves from higher to lower accretion states; this agrees with our expectation that, at lower accretion rates, the disk could be truncated and the fraction of the power dissipated in the boundary layer should correspondingly increase. Spectra 7, 12, 13 and 32 do not follow this trend; these spectra belong to the topFB spectra, and in these cases we are possibly underestimating the disk contribution, or overestimating the hard flux.
![image](f3a.eps) ![image](f3b.eps) ![image](f3c.eps) ![image](f3d.eps)
![image](f3e.eps) ![image](f3f.eps) ![image](f3g.eps) ![image](f3h.eps)

Disk emission and thermal Comptonization
----------------------------------------

We present in Figure \[fig3\] the dependence of the inner disk radius and of the inner disk temperature on the hard/soft luminosity ratio. Both Tables \[tab3a\] and \[tab3b\] and Figure \[fig4\] show the apparent radius, derived without considering important correction factors due to the color/effective temperature ratio, relativistic effects and the non-zero-torque boundary conditions; moreover, it should be noted that these corrective factors can also vary, depending on the accretion state. The sum of all these effects could be a drastic rescaling of the measured inner radii, up to almost an order of magnitude higher [@merloni00], but the general trend correlating this parameter with the accretion state of the source should be qualitatively preserved.\
The apparent inner disk radius has the lowest values, $\leq$ 10 km, for the spectra in the topFB, while the average value for the NB and HB spectra is considerably higher (average values: $\sim$ 18 km for the HB and NB, $\sim$ 13.5 km for the FB and $\sim$ 10.2 km for the topFB).
The inner disk temperature, on the other hand, presents the highest values, as one could expect, on the FB, while the temperature decreases with the accretion rate. A small subset of FB spectra falls outside the trend but, as pointed out earlier, in this case the exact determination of the relative contributions of the two components could suffer from systematic uncertainties linked to the particularly luminous state of the source. On the FB the disk temperature has an average value of 2.12 keV, while in the lower part of the FB it sinks to 1.73 keV. There is no appreciable difference in temperature between spectra taken on the NB and on the HB, with an average value of 1.43 keV.\
For the hard, thermally Comptonized component, the accurate determination of all the spectral parameters was not always possible. The soft seed-photon temperature presents, for every spectrum analyzed, a substantially higher value than the disk temperature at the inner radius. We hence propose that the soft photons originate from the boundary layer/NS surface, partially thermalized with the softer photons of the inner part of the accretion disk around the NS. Moreover, calculating the radius of the soft-seed-photon emitting region $R_w$ from: $$R_w = 3 \times 10^4 d \sqrt{\frac{f_{bol}}{1+y}}/(kT_0)^2,$$ where $d$ is the distance to the source in kpc, $f_{bol}$ is the [`COMPTT`]{}  bolometric flux and $y$ (see Eq. \[ypar\]) the Compton parameter, we derived for our sample values in the 3–6 km range, which clearly indicate a rather small emitting region, thus supporting our identification.\
The CD-correlated change of the thermal temperatures is compatible with a scenario in which the position of the source on the CD is determined by the instantaneous  , and higher accretion rates correspond to higher seed-photon temperatures, resulting in a hotter radiation field.
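The seed-photon region radius defined above is straightforward to evaluate. In the sketch below, the bolometric flux and Compton parameter are illustrative placeholders rather than fit results; only the 2.8 kpc distance is the value assumed in the text:

```python
import math

def seed_photon_radius_km(d_kpc, f_bol, y, kT0_keV):
    """R_w = 3e4 * d * sqrt(f_bol / (1 + y)) / (kT_0)^2  [km],
    with d in kpc, f_bol in erg cm^-2 s^-1 and kT_0 in keV."""
    return 3.0e4 * d_kpc * math.sqrt(f_bol / (1.0 + y)) / kT0_keV**2

# e.g. d = 2.8 kpc, f_bol = 1e-7 erg cm^-2 s^-1, y = 1, kT_0 = 2.4 keV
print(seed_photon_radius_km(2.8, 1e-7, 1.0, 2.4))
```

With these assumed inputs the radius comes out at a few kilometres, consistent with the small boundary-layer-sized emitting region inferred for the sample.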
The soft seed-photon temperature has average values of 2.2 keV, 2.4 keV and 2.9 keV for the spectra on the HB/NB, FB and topFB, respectively. When plotted against each other, the disk temperature and the soft seed-photon temperature (see Figure \[fig5\], left panel) follow each other quite closely, as these two parameters are the main driving physical quantities related to the accretion state of the source. The correlation between the CD-resolved spectra and the two temperatures shows, although only qualitatively, that the link between the spectral evolution of the source and its accretion state is well motivated.\
The other two spectral parameters, which describe the high-energy curvature of the Comptonized component, the electron temperature of the Comptonizing plasma (kT$_e$) and the optical depth of the cloud, $\tau$, are not always well constrained by the fit. As is evident from Figure \[fig6\], for spectra at high accretion rate we obtained high values of the electron temperature (kT$_e \geq$ 10 keV) and low values of the optical depth ($\tau \leq$ 1). As the source moves to zones of lower  , the Comptonizing cloud substantially thickens, while the electron temperature correspondingly decreases. Plotting the subset of spectra in the NB and HB helps to better visualize this tendency (Figure \[fig6\], right panel). It should be noted that the determination of these two parameters is partially tied to the hard power-law component; although there are spectra where the [`PEGPWRLW`]{}  component is not strictly necessary for a good fit, its introduction slightly shifts the thermal Comptonized component to lower energies, thus giving a noticeable reduction in the uncertainties of the Compton curvature.\
All the spectra require a Gaussian line at energies $\sim$ 6.4 keV. It mostly appears rather broad, with a line width that often reaches our constraint of 0.8 keV.
The equivalent widths associated with the line are in the 80–200 eV range, and should be taken as indicative only, given the low energy resolution of the PCA and the presence of systematic residuals in this energy range.

The hard tail behavior
----------------------

The presence of a hard X-ray excess in the extracted spectra of   is most evident in 18 out of the 43 CD-selected spectra. There is little evidence of a hard X-ray excess for spectra lying in the bottom part of the FB and of the NB, namely for spectra taken near the apex that connects the left and right tracks composing the V pattern.\
In all the examined CD patterns we always detect a power-law high-energy excess, dominant above $\sim$ 40 keV, when the source resides at the top of the left track, i.e. a portion of the diagram that we tentatively identified with the HB, and which should correspond to the lowest mass accretion rate. This group of spectra is characterized by the lowest values of inner disk temperature and soft seed-photon temperature; the derived inner disk radii correspond to a disk truncated at about 10–15 R$_g$, while the Comptonizing cloud is substantially optically thicker than for any other spectrum in other zones of the CD; the fraction of the total power dissipated in the Comptonizing corona has the highest values, with a hard/soft luminosity ratio well above unity.\
Apart from this consistent group of HB/NB spectra, we also found a minor group of spectra (namely 5 spectra: 7, 8, 12, 13 and 26) for which the fit required a third, high-energy component. These spectra belong to 3 CDs (20035B, 30036 and 40020) and were taken at the top of the FB. Although these spectra are located in the same CD region, not all the CDs that display a topFB clearly show this excess.
This could be due to a statistical reason: in all the other CDs the HEXTE count rate (see Table \[tab2\]) drops by about one order of magnitude with respect to these 3 CDs. Because the count rate is essentially concentrated in the 20–35 keV energy band, i.e. the range that constrains the curvature of the thermal Comptonized component, the energy channels in this band with high statistics will drive the fit, and any small excess above the exponential tail of the Comptonized component will be statistically more accentuated. However, we cannot exclude that the opposite reasoning is true, i.e. that we are underestimating instrumental effects that appear only at higher count rates, such as dead-time corrections or poor background estimates, so that these particular hard tails are artifacts of an incorrect description of the thermal Comptonization curvature. As we have no apparent way to better calibrate our spectra, we discuss the appearance of this component for this group of spectra as well. Future observations with other high-energy satellites, such as INTEGRAL or SUZAKU, will test whether this particular topFB state of the source is also accompanied by a high-energy excess or whether it is a spurious HEXTE detection.\
The hard excess has no evident relation with the total luminosity of the source, but this could be expected, since the different zones of the CD track are not related to the total X-ray luminosity. Using the pegged power-law, we directly derived from the normalization value of this component the flux in the 20.0–200.0 keV energy band. We show in Figure \[fig6\], left panel, the hard-tail luminosity in the 20–200 keV range vs. the hard/soft ratio.
From the plot it can be clearly seen that the hardening of the spectrum is reflected in the increasing luminosity of the hard-tail component; the CD-resolved spectra are correspondingly arranged on this plot: the FB spectra reside in the bottom left of the diagram, with hard fluxes $\leq 2 \times 10^{-10}$ erg s$^{-1}$; the NB spectra are harder, and half of them show significant fluxes ($\sim 4 \times 10^{-10}$ erg s$^{-1}$); while the HB spectra, in the right part of the plot, show the strongest hard X-ray emission (in the range 6–12 $\times 10^{-10}$ erg s$^{-1}$).\
As pointed out in the previous section, within the topFB sample we found both spectra with no detectable hard X-ray emission, which occupy the bottom left part of this figure and smoothly join the correlation between hard/soft ratio and hard flux, and spectra that do not follow this trend, for which both the [`COMPTT`]{}  and the [`PEGPWRLW`]{}  components have a hard flux higher than expected. From the figure it is also evident that one of the most luminous hard tails is found in a particular spectrum on the topFB (namely spectrum 26 of CD 40020); the luminosity of this component is related to the overall highly luminous broadband state of the source, which reaches in this case the highest value of total luminosity (2.6 L$_{Edd}$) observed in our sample.\
On the other hand, the values of the photon index present a well-defined bimodal distribution according to the position of the source on the CD. We find quite flat power-laws, with index values less than unity, for all the FB and topFB spectra, while for the other group of spectra values generally range between 1.5 and 2 (Figure \[fig6\], right panel).
The hybrid Comptonization model
-------------------------------

Models of hybrid Comptonization have so far been adopted mostly to explain state transitions both in black-hole candidate systems (as in Cyg X-1, @gierlinski99; GRS 1915+105, @zdziarski01; or GX 339+4, @wardzinski02) and in NS systems [GX 17+2,  @farinelli05].\
We tried to model the spectra for which we detected a hard tail under the hypothesis that the hard tail and the thermal Comptonized component are related to each other. In this way, the whole broadband evolution of the source is covered in a self-consistent way by only two spectral components: the thermal soft disk emission and the hard hybrid Comptonized component. In Table \[tab4\] we present the results of our fits for the group of 18 spectra, distinguishing between the HB/NB group (top part of the table), for which we used the [`DISKBB`]{}  component as in the   model, and the topFB spectra, for which we used the [`DISKPN`]{}  component, under the assumption that in this state the disk is not truncated and reaches the surface of the NS (we assumed that this happens at a distance of 6 R$_g$).

As in the case of the previously adopted description, the thermal disk temperature and the soft seed-photon temperature are the main driving physical parameters that determine the change of the spectral state of the source. We show in Figure \[fig8\] (left panel) the correlation between the two temperatures: the topFB spectra occupy the top right part of the plot, with an average $kT_{BB}$ value of 2.73 keV and an inner disk temperature $kT_{DB}$ of 1.97 keV, while the set of HB/NB spectra follows a linear trend in the 1.2–1.8 keV range for the disk temperature and the 1.9–2.5 keV range for the soft seed-photon temperature.
The plot closely follows that of Figure \[fig5\], right panel.\ The hard compactness ${\it l_h}$ is, for all the examined spectra, only a small fraction of the soft compactness ${\it l_s}$, with hard/soft ratios ${\it l_h/l_s}$ below 0.2. The HB/NB spectra span almost an order of magnitude in ${\it l_h}$, while the topFB spectra are all grouped in a narrower range of values, around ${\it l_h}$ = 0.6–0.8 (see Figure \[fig8\], right panel).\ The two groups of spectra differ significantly both in the $\tau$ and in the ${\it l_{nth}/l_h}$ values (note, however, that the optical depth reported from the fits does not have the same physical meaning as the classical Thomson optical depth of the Comptonizing cloud, as it more properly refers to the optical depth associated with the background photon radiation field): HB spectra generally present higher values of the optical depth (with an average value of $\tau_p \sim$ 2.2) and a hybrid electron distribution (with 0.3 $< {\it l_{nth}/l_{h}} < $ 0.8, and an average value of 0.55), while the topFB spectra have considerably lower values, $\tau_p \leq 1$, and essentially non-thermal spectra, with ${\it l_{nth}} \simeq 1$. 
The slope of the high-energy power law of the hybrid electron distribution, $\Gamma_{inj}$, is consistent with a value of 2 for the HB-NB spectra, while for the topFB spectra its value is considerably lower (between 0 and 1).\ Conclusions =========== We outline our main conclusions from an extensive analysis of   RXTE observations in the 3–200 keV energy range: we observe a spectral evolution of the source that is clearly dependent on the position of the source along its CD track; the spectral decomposition consists of four different spectral components that we interpret, from the softest to the hardest X-rays respectively, as follows: a soft thermal component from an optically thick accretion disk, a thermal Comptonization component from the boundary layer, a reflection component which mainly manifests itself in a broad Gaussian line at 6.4 keV, and a variable hard excess above 30 keV, mostly present at low .\ We have shown that the CD-correlated variations of the two thermal temperatures (disk temperature and soft seed-photon temperature), the trend in the hard/soft ratio, the Compton thickening of the Comptonizing cloud and the appearance of significant hard X-ray emission on the HB/NB are the main spectral characteristics of the source.\ We paid particular attention to the modelization of the hardest X-ray component, which is of primary importance for the understanding of the whole continuum emission, as it has a major impact on the determination of all the other spectral parameters. To fit this hard excess we modelled our spectra using a phenomenological component, i.e. a power law with a low-energy exponential cut-off at the seed-photon temperature, and a self-consistent physical model, a hybrid thermal/non-thermal Comptonization code. 
In both cases we found an adequate description of the spectra for each source state, and the two modelizations are statistically equivalent.\ The use of simple power laws to fit the hard tails in the Z-class sources, but also in some atolls [@tarana07; @fiocchi06], has been widespread in the past, given the lack of broad-band coverage up to MeV energies and the poor understanding of the physical origin of this component. Contrary to what was previously reported [@damico01], the presence of the hard tail $is$ related to the broad-band spectral evolution of the source as inferred from the source position on the Z track of its CD: the presence and the values of the photon indexes in   are consistent with the results from fits to spectra in the same zone of the CD for the other Z sources [@asai94; @disalvo00; @disalvo01; @disalvo06]; at the same time, the flux contribution to the total energy output of the sources is of the same order and is anticorrelated with the inferred mass accretion rate. This is clearly shown in the case of the CD 20035A, where the hard-tail flux monotonically increases as the source moves from the bottom to the top of its NB/HB.\ The only difference between   and the other similar sources is constituted by the rather flat hard tails that we found when the source is at the top of the FB, which have never been observed in other NS sources; however, while for every CD that we obtained we systematically found a hard tail when the source was at the top of the left V track, the detection on the FB presumably depends on the HEXTE statistics. 
Past surveys using scintillation counters on board balloons reported an unusual flattening of the spectrum of   above 40 keV [@agrawal71], as did more recent surveys [@manchanda06]; it would be interesting to definitively assess the existence of such a component. From our analysis, we can only conjecture that this component could be a signature, on the FB, of the crossing of the Eddington limit at the interface between the accretion disk and the boundary layer, followed by a violent expulsion of part of the accreting matter.\ Hard tails on the HB could have a thermal origin, as in the case of the thermal Comptonized component that dominates the spectrum at lower energies (less than 20 keV); in this case the power law, as long as the optical depth is low and the electron temperature is not too high, is a good approximation of the Comptonized spectrum, whose $y$ parameter $$\label{ypar} y = \frac{4 k T_e}{m_e c^2} \times (\tau + \tau^2)$$ is related to the photon index of the power law by the relation: $$\alpha = - \frac{1}{2} + \sqrt{\frac{9}{4}+\frac{\pi^2}{3y}}.$$ In the case of the   spectra on the HB/NB, we derived photon indexes in the 1.8–2.2 range, which imply a Compton $y$ parameter in the 0.6–1.1 range; a $\tau = 1$ plasma would then require a $\sim$ 50 keV thermal plasma, while a $\tau = 2$ one, on the contrary, a $\sim$ 15 keV electron temperature. We did not find any evidence of a high-energy cut-off, obtaining only lower limits in the 50–100 keV energy range, and this constrains the optical depth to be less than one, while the electron temperature would be greater than 50 keV. 
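As a quick consistency check of the numbers above, the two relations can be inverted numerically. The sketch below (plain Python; following the text, the photon index is inserted for $\alpha$, and the helper names are ours) recovers $y\approx 0.65$–$1.1$ for indexes 2.2–1.8 and electron temperatures of the order quoted for $\tau=1$ and $\tau=2$.

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV

def y_from_index(alpha):
    """Invert alpha = -1/2 + sqrt(9/4 + pi^2/(3y)) for the Compton y parameter."""
    return math.pi ** 2 / (3.0 * ((alpha + 0.5) ** 2 - 2.25))

def kTe_from_y(y, tau):
    """Invert y = (4 kT_e / m_e c^2)(tau + tau^2) for the electron temperature (keV)."""
    return y * M_E_C2 / (4.0 * (tau + tau ** 2))
```

For a representative $y \simeq 0.85$ this gives $kT_e \approx 54$ keV at $\tau = 1$ and $\approx 18$ keV at $\tau = 2$, consistent, to within the rounding of the text, with the quoted $\sim 50$ and $\sim 15$ keV estimates.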
The formation of hot zones, or blobs, of very hot plasma, possibly powered by episodes of magnetic reconnection above the disk, constitutes a plausible scenario [@haardt94]; this model has also been proposed for the hard/low states of BH candidates [@malzac04], which share with the bright Z-source systems, from a spectral point of view, similar values of the hard photon indexes and a low accretion state, while they naturally differ in the strong boundary-layer emission present in the NS sources and absent in the BH case.\ A thermal origin for the topFB hard tails, on the other hand, seems to be excluded, as it would require unrealistically large values of the electron temperature.\ Our analysis has shown that this scenario is statistically equivalent to a hybrid Comptonization model, where the non-thermal fraction of the power injected in the electron heating amounts to up to half of the total injected power. This non-thermal fraction is significantly higher than the value found by @farinelli05 in the case of GX 17+2. The non-thermal fraction for the topFB spectra must be distinguished from the non-thermal spectra of the BH sources, as in the case of , the hard luminosity is a small fraction of the soft luminosity, which dominates the 10–30 keV energy spectrum. Most of the soft photons do not Compton-interact with the electron cloud, because of the low optical depth values.\ Another possible physical mechanism to explain the hard X-ray variability is bulk-motion Comptonization [e.g. @psaltis01], present in systems with high-velocity radial accretion flows. This radiation mechanism, used to explain the hard power-law component in the spectra of BH systems in their soft/high states, is, however, not able to account for the absence of any cut-off up to energies of 0.5 MeV and for the hard photon indexes observed ($\Gamma \leq$ 2.0) [@niedzwiecki06]. 
In the case of bright NS systems, the high radiation pressure of the inner boundary layer constitutes a strong barrier against any incoming convergent bulk-motion flow. On the other hand, since jets have been observed in the radio from , bulk-motion Comptonization inside the jet can also be an important production mechanism of hard X-ray radiation. Theoretical spectra and energetic contributions, computed from the relevant physical parameters in the case of a strict coupling between inflow (i.e. accretion onto the compact object through the formation of an accretion disk) and outflow (i.e. jets), are a promising way to cover all the phases of accretion self-consistently. Details about the geometry of the scattering media, the amount of reflection on the cold disk and the dependence on the accretion state of the source are yet to be fully explored. Moreover, in this case synchrotron emission of soft photons by a beamed population of relativistic electrons becomes a competitive source of photons with respect to the thermal soft-photon emission of the boundary layer and accretion disk. An attempt to explicitly compute a jet spectrum, from radio to hard X-rays, has recently been proposed by @markoff05, where the jet base subsumes the role of the static Comptonizing corona; spectral fits in the case of BH systems (namely Cyg X-1 and GX 339-4) in hard states are consistent with this scenario, but BH soft states and NS system spectra have yet to be tested in order to understand the limits of validity of the jet model.\ Questions to be further addressed in future observations are: the exact shape and contribution of the soft component below 3 keV, the investigation with good spectral resolution of the reflection features in the 6–10 keV range, and the extension of the spectral coverage up to 0.5 MeV in order to constrain the physical parameters characterizing the non-thermal electron distribution of the Comptonizing plasma. 
The latter point is well within the capabilities of the recently launched SUZAKU satellite, so that future observations in this direction can provide a stringent test of our conclusions.
--- abstract: 'This work presents short-time Monte Carlo simulations for the two-dimensional Majority-vote model starting from ordered and disordered states. It has been found that there are two pseudo-critical points, each one within the error-bar range of previously reported values obtained using the fourth-order cumulant crossing method. The results show that the short-time dynamics of this model depend on the initial conditions. Based on this dependence, a method is proposed for the evaluation of the pseudo-critical points and the extraction of the dynamical critical exponent $z$ and the static critical exponent $\beta/\nu$ for this model.' author: - Francisco Sastre title: 'Short-time dynamic in the Majority vote model: The ordered and disordered initial cases' --- Introduction ============ Critical phenomena in equilibrium statistical systems are one of the most important topics in physics. Much of the attention has been focused on the universality of the critical exponents, with several universality classes already characterized in equilibrium systems. On the other hand, the critical behavior of non-equilibrium statistical systems has been receiving a lot of attention in recent years, but the characterization of the different universality classes is far from complete. One of the simplest non-equilibrium models is the two-dimensional majority-vote model, an Ising-like system (up-down symmetry and spin-flip dynamics) with a continuous order-disorder transition and the same critical exponents as the two-dimensional Ising model [@Oliveira91; @Oliveira93; @Kwak2007], as expected from the prediction of Grinstein [*et al*]{} [@Grinstein85]: every spin system with spin-flip dynamics and up-down symmetry falls into the Ising model universality class. However, there is some controversy about the universality class in higher dimensions. 
A recent work claims [@Yang2008] that the upper critical dimension for the majority-vote model is 6 instead of 4, based on numerical calculations. Other discrepancies in the critical exponents have been found in simulations on non-regular lattices [@Lima2005; @Lima2006]. It must be mentioned that all of the results mentioned above were obtained using standard Monte Carlo simulations and Finite Size Scaling approaches for the evaluation of the static critical exponents. On the other hand, the time evolution can give important information about the universality of a given system. It has been shown by Janssen [*et al.*]{} [@janssen89] that when systems with relaxational dynamics are quenched from high temperatures to the critical temperature there is a universal short-time critical behavior. Numerical simulations have confirmed this behavior in the Ising and Potts models (see reference [@zheng98] for a review of these results). Concerning the critical dynamics in systems without detailed balance, there are some works that evaluate the critical dynamic exponent $z$ [@mendes98; @tome98] and the fluctuation-dissipation ratio $X_\infty$ [@sastre03] for the majority-vote model, but always starting from a disordered state. As expected, the results were compatible with the Ising ones. Given all these results, one should expect that the basic assumption for the dynamic relaxation of the $k$-th moment of the order parameter starting from a completely ordered state will be the same as in the Ising model, but this has not been proved yet. The aims of this work are: a) to evaluate the critical point using short-time dynamics starting from ordered and disordered initial conditions and b) to evaluate the dynamic critical exponent $z$ and the static critical exponent $\beta/\nu$. This will test whether the Grinstein prediction holds for the short-time dynamics of the majority-vote model. 
Models and definitions ====================== In the Majority-vote model each lattice site carries a spin with values $\sigma=\pm 1$, and its dynamics is governed by the spin-flip rule $$\label{dynamic} W_i=\frac{1}{2}[1-\sigma_if(H_i)],$$ where $H_i$ is the local field $\sum_\mathrm{nn}\sigma_j=0,\pm 2,\pm 4$ produced by its nearest neighbors and $f$ is a function with up-down symmetry that depends on two control parameters: $f(0)=0$, $f(2)=-f(-2)=x$ and $f(4)=-f(-4)=y$. The parameters $x$ and $y$ can be associated with interface and bulk temperatures respectively [@drouffe99] using the relations $$x=\tanh 2\beta_2,~~~~~~ y=\tanh 4\beta_4 .$$ The Majority-vote model is obtained by setting $x=y$ ($\beta_4 < \beta_2$) and the critical point is at $x_c=0.850(2)$ [@Oliveira93; @Kwak2007]. The equilibrium case is obtained along the line $y=2x/(1+x^2)$ (Glauber dynamics), where the temperatures are equal ($\beta_2 =\beta_4 = \beta$) and the critical point is $\beta_c = \frac{1}{2} \log(1+\sqrt{2})$. The order parameter is the standard one for Ising-like systems, defined by $$m=\frac{1}{N}\langle\sum_i\sigma_i\rangle$$ where $N=L^2$ is the total number of lattice sites ($L$ is the lateral size). For a start from a disordered state, the dynamics of the $k$-th moment of the order parameter was derived by Janssen [*et al*]{} [@janssen89]; the mathematical expression is $$m^{(k)}(t,\tau,L,m_0)=b^{-k\beta/\nu}m^{(k)}(b^{-z}t,b^{1/\nu}\tau,b^{-1}L,b^{x_0}m_0),$$ where $\tau=(T-T_c)/T_c$ is the reduced temperature, or the reduced control parameter for non-equilibrium systems, $t$ is the dynamic time variable, $b$ is the re-scaling factor, $\beta$ and $\nu$ are static critical exponents, $z$ is the dynamic critical exponent, and $x_0$ is the scaling dimension of the initial (small) order parameter $m_0$ (see [@zheng98] and references therein). 
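The equilibrium (Glauber) line above can be checked numerically: by the double-angle identity $\tanh 4\beta = 2\tanh 2\beta/(1+\tanh^2 2\beta)$, setting $x=\tanh 2\beta$ on the line $y=2x/(1+x^2)$ automatically gives $y=\tanh 4\beta$, i.e. $\beta_2=\beta_4$. A minimal Python verification at the Onsager point:

```python
import math

# Onsager critical inverse temperature of the 2D Ising model
beta_c = 0.5 * math.log(1.0 + math.sqrt(2.0))

x = math.tanh(2.0 * beta_c)   # interface parameter x = tanh(2*beta)
y = 2.0 * x / (1.0 + x * x)   # Glauber (equilibrium) line

# y coincides with tanh(4*beta), i.e. equal interface and bulk temperatures
assert abs(y - math.tanh(4.0 * beta_c)) < 1e-12
# known closed form at the critical point: tanh(2*beta_c) = 1/sqrt(2)
assert abs(x - 1.0 / math.sqrt(2.0)) < 1e-12
```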
At the critical point, for sufficiently small $m_0$ and large systems ($L\to\infty$), the order parameter follows a power-law dynamics $$m(t)\sim m_0 t^{\theta},$$ where $\theta$ is defined as $(x_0 - \beta/\nu)/z$. One must remark that there is a strong dependence on the initial value of the order parameter. This dependence does not affect the power-law behavior at the critical point; what is affected is the exponent $\theta$, which tends to the real value in the limit $m_0\to 0$. For the dynamics starting from the ordered state we have the assumption that the scaling form is given by $$m^{(k)}(t,\tau,L)=b^{-k\beta/\nu}m^{(k)}(b^{-z}t,b^{1/\nu}\tau,b^{-1}L),$$ and again the scaling form at the critical point is obtained by taking the limit $L\to \infty$: $$m(t)\sim t^{-\beta/\nu z}.$$ This has been proved in equilibrium systems, like the Ising or the Potts model. For the evaluation of the critical point we use the fact that, theoretically, the order parameter evolves as a power law at the critical point. If the difference between a power law and the time evolution is evaluated for different values of the control parameter $x$, we expect a minimum of that difference at the critical point. The simulations were performed choosing the lattice sites randomly, starting both from a completely ordered state ($m=1.0$) and from carefully prepared disordered states with small magnetization values $m_0 < 0.1$. The order parameter is evaluated as a function of the time $t$, with $\Delta t=1$ corresponding to a Monte Carlo time step (MCTS). In order to avoid finite-size effects we used lattices with lateral size $L=2^{9}$ ($N=L^2$). The averages were taken over at least $10^5$ independent simulations, with 250 and 1000 MCTS for the evaluation of the critical point and of the dynamical exponents respectively. 
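The spin-flip rule and the update protocol described above can be sketched in a few lines of plain Python (illustrative only: the lattice below is tiny compared with the $L=2^9$ used in the actual runs, and the helper names are ours):

```python
import random

def mc_step(spins, L, x, rng):
    """One Monte Carlo time step: L*L random single-site attempts of the
    majority-vote rule W_i = (1/2)[1 - sigma_i f(H_i)], with f(H) = x*sign(H)."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # local field H_i from the four nearest neighbours (periodic boundaries)
        H = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
             + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        f = x * ((H > 0) - (H < 0))            # f(0)=0, f(+-2)=f(+-4)=+-x (x=y)
        if rng.random() < 0.5 * (1.0 - spins[i][j] * f):
            spins[i][j] *= -1                  # flip with probability W_i

def magnetisation(spins, L):
    return abs(sum(map(sum, spins))) / (L * L)
```

Well below the critical noise ($x$ close to 1) an ordered start stays ordered, while at $x=0$ every flip attempt is accepted with probability $1/2$ and the magnetisation decays.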
Results ======= Given the power-law behavior of the order parameter as a function of $t$ (decay from an ordered state, increase from a disordered one), one can perform simulations for different values of $x$ around the expected critical point. In Fig. \[examples\_dynamic\] we can observe that above and below a certain value of $x$ the dynamics differ from a power law (illustrated by the dashed line). ![\[examples\_dynamic\] (color online) a) Dynamics starting from an ordered state. b) Dynamics starting from a disordered state ($m_0 = 0.0875$). In both cases the top continuous line corresponds to an $x$ above the critical point and the lower one to an $x$ below the critical point. The dashed line shows the expected power-law behavior.](sastre_fig01.eps){width="8cm"} From here one must define the criterion that measures which is the best power-law curve; in this work the $\chi^2$ of each curve with respect to a power-law behavior was used. I must remark that previous results for equilibrium systems give the same critical point for both initial states. However, two different values were found for the majority-vote model, see Fig. \[chisqr\]. ![\[chisqr\] (color online) Deviation from the power-law behavior for the decay ($m_0=1$, left curve) and the increase ($m_0=0.0875$, right curve) cases.](sastre_fig02.eps){width="8cm"} The critical-point values obtained were $x_c=0.85007(6)$ and $x_c=0.85147(2)$ for the ordered and disordered states respectively; both results are in agreement with those obtained in the static case (references [@Oliveira91; @Oliveira93; @Kwak2007]) and are quite close to each other. We thus have a clear dependence on the initial condition in the short-time dynamics of this model, which is not the case for equilibrium systems. There is no doubt about the critical point obtained from the ordered phase, since $m_0=1$ is one of the fixed points under renormalization-group transformations. 
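The $\chi^2$ criterion above amounts to fitting $\log m$ versus $\log t$ to a straight line and summing the squared residuals; a minimal sketch (our own helper, plain Python) is:

```python
import math

def power_law_chi2(ts, ms):
    """Fit log m = a + theta*log t by least squares and return (theta, chi2),
    where chi2 is the summed squared residual: the deviation from a power law."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(m) for m in ms]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((u - xbar) ** 2 for u in xs)
    sxy = sum((u - xbar) * (v - ybar) for u, v in zip(xs, ys))
    theta = sxy / sxx
    a = ybar - theta * xbar
    chi2 = sum((v - (a + theta * u)) ** 2 for u, v in zip(xs, ys))
    return theta, chi2
```

An exact power law gives a chi2 at machine precision, while an off-critical curve acquires curvature in the log-log plane and a larger chi2; the minima of chi2 as a function of $x$ then locate the pseudo-critical points.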
However, one must remember that the power law is valid only in the limit $m_0 \to 0$ (the other fixed point) for the disordered case; assuming that the “real” critical point is located at this limit, one can evaluate the critical point for other values of $m_0$ and from these values extrapolate the critical point for the disordered phase. The results are shown in Table \[critical\_1\]. $m_0$ 0.0375 0.0500 0.0625 0.0750 0.0875 ------- ------------- ------------ ------------ ------------ ------------ $x_c$ 0.84929(10) 0.84972(9) 0.85019(6) 0.85076(5) 0.85147(2) : \[critical\_1\] Critical point for initial disordered states. Once each value $x_c(m_0)$ has been evaluated, the dynamical exponent $\theta$ can be obtained. For the evaluation of this exponent the simulations were performed with 1000 MCTS, discarding the first time steps, since there is an initial time scale $t_{mic}\sim 20$ after which the power law stabilizes (see Fig. \[growing\]). The results for the exponent $\theta$ are shown in Table \[critical\_values\]. ![\[growing\] (color online) Growth of the order parameter at $x_c(m_0)$; the continuous curves are for $m_0=0.0875,~0.075,~0.0625,~0.05,~0.0375$ from top to bottom. The dashed line represents the power-law growth with $\theta=0.1751$ (the result in this work for the majority-vote model). ](sastre_fig03.eps){width="8cm"} $m_0$ 0.0375 0.0500 0.0625 0.0750 0.0875 ---------- ----------- ----------- ----------- ----------- ----------- $\theta$ 0.1769(8) 0.1774(4) 0.1782(3) 0.1788(4) 0.1792(3) : \[critical\_values\] $\theta$ exponents for the growing process. With an extrapolation of these values to $m_0=0$, the critical point and the $\theta$ exponent were evaluated (see Fig. \[xc\_theta\]). The result for the critical point was $x_c=0.84860(10)$, which is clearly different from the ordered-state one; however, both values are within the error bar of the static result, $0.848\le x_c\le 0.852$. 
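As an illustration of the extrapolation step, an unweighted least-squares line through the $x_c(m_0)$ values of Table \[critical\_1\] can be computed as below. This is a sketch only: the value quoted in the text, $x_c=0.84860(10)$, comes from the error-weighted extrapolation shown in Fig. \[xc\_theta\], so the naive intercept here lands slightly lower.

```python
def linear_fit(xs, ys):
    """Unweighted least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((u - xbar) * (v - ybar) for u, v in zip(xs, ys)) \
        / sum((u - xbar) ** 2 for u in xs)
    return ybar - b * xbar, b

# central values from Table [critical_1]
m0 = [0.0375, 0.0500, 0.0625, 0.0750, 0.0875]
xc = [0.84929, 0.84972, 0.85019, 0.85076, 0.85147]

a, b = linear_fit(m0, xc)  # a is the naive m0 -> 0 extrapolation
```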
From now on, the pseudo-critical point evaluated with the decay process will be denoted by $x_c^o$ and that evaluated with the growing process by $x_c^d$. This surprising result seems similar to that obtained for weak first-order phase transitions [@schulke], where two pseudo-critical points exist due to the metastable states above and below the critical point. However, there is an important difference in this case: for weak first-order phase transitions the smaller critical point corresponds to the decay process and the bigger one to the growing process, contrary to the majority-vote model case. For the exponent $\theta$, a linear extrapolation gives the result $0.175(3)$, which is lower than the value for the two-dimensional Ising model, $\theta=0.191(1)$, and than those previously evaluated in references [@mendes98; @tome98] for the majority-vote model, $\theta=0.192(2)$. The difference with respect to the Ising model could be understood considering that the results obtained here seem to indicate a wholly new dynamics. The differences with previous results for the majority-vote model can be explained by looking at the simulation details used previously: first, the critical point used was $x_c=0.850$, which is above the result for $x_c^d$. Second, the system sizes used previously were really small ($L=32$); at this size the growing process is not very long and it is really hard to see the power-law behavior. ![\[xc\_theta\] a) Evaluation of the critical point $x_c$ and b) of the $\theta$ exponent. ](sastre_fig04.eps){width="8cm"} One can obtain the dynamical exponent $z$ by evaluating the second moment of the magnetization at the critical point $x_c^d$, $$m^{(2)}\sim t^y,~~~~~y=(d-2\beta/\nu)/z,$$ and the autocorrelation $$\begin{aligned} \begin{array}{c} A(t)=\sum_i \sigma_i(t=0)\sigma_i(t), \\ \\ A(t) \sim t^{-\lambda},~~~~~\lambda=\frac{d}{z} -\theta, \end{array}\end{aligned}$$ both starting from $m_0=0$ and using 1000 MCTS. 
Again there is a $t_{mic}$ in each case (around 20 for the autocorrelation and 75 for the second moment, see Fig. \[auto\]). The results obtained are $y=0.799(17)$ and $\lambda=0.758(2)$. Combining both results, the values $z=2.143(9)$ and $\beta/\nu = 0.143(18)$ can be obtained. A summary of the results is shown in Table \[summary\], where discrepancies between most of the values for the majority-vote model and the Ising ones can be observed. It must be remarked that all these results were obtained using only the growing process. ![\[auto\] (color online) a) Evaluation of $\lambda$; the continuous line shows the autocorrelation time evolution and the dashed line the power-law behavior with $\lambda=0.758$. b) Evaluation of $y$; the continuous line shows the second moment of the order parameter and the dashed line the power-law behavior with $y=0.799$. ](sastre_fig05.eps){width="8cm"}   Majority vote model Ising ------------- --------------------- ---------- $\theta$ 0.175(3) 0.191(1) $\lambda$ 0.758(2) 0.737(1) $z$ 2.143(9) 2.155(3) $y$ 0.799(17) 0.817(7) $\beta/\nu$ 0.143(18) 1/8 : \[summary\] Summary of the results of this work and of the Ising model. Finally, the decay exponent $\beta/\nu z$ was evaluated starting from an ordered phase (at $x_c^o$), using 1000 MCTS (Fig. \[mag\_decay\]); the result was $\beta/\nu z=0.0526(5)$, which is lower than the Ising one, $0.0580(5)$. Again the first time steps were discarded in the evaluation of the exponent (Fig. \[mag\_decay\]). In principle it is possible to obtain the $z$ exponent using the known value of $\beta/\nu$, or, knowing the $z$ value, one can obtain the $\beta/\nu$ value, but in both cases the results depend on values obtained with the growing process ($z$) or with static simulations ($\beta/\nu$). 
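The arithmetic behind the quoted exponents follows directly from the two scaling relations (plain Python, $d=2$, using the central values measured above; for the decay process we assume the static 2D Ising value $\beta/\nu = 1/8$, which reproduces the $z=2.37(2)$ reported in the text):

```python
d = 2          # spatial dimension
theta = 0.175  # initial-slip exponent (this work)
lam = 0.758    # autocorrelation exponent lambda
y = 0.799      # second-moment exponent

z = d / (lam + theta)        # from lambda = d/z - theta
beta_nu = (d - y * z) / 2.0  # from y = (d - 2*beta/nu)/z

# decay process: beta/(nu*z) = 0.0526 together with beta/nu = 1/8
z_decay = (1.0 / 8.0) / 0.0526

print(round(z, 3), round(beta_nu, 3), round(z_decay, 2))  # 2.144 0.144 2.38
```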
The approach taken in this work is that the growing and the decay processes are different, and it is possible that the dynamic exponent $z$ differs in each case; for the decay case the reported value is $z=2.37(2)$. ![\[mag\_decay\] (color online) Order-parameter relaxation ($m_0=1$); the dashed line shows the power-law behavior with $\beta/\nu z=0.0526$. ](sastre_fig06.eps){width="8cm"} The fact that we have two pseudo-critical points (which do not correspond to a weak first-order transition) and that the decay and growing processes are slower than in the Ising model must be related to the absence of the detailed-balance condition. One of the consequences of this absence is that we do not have a unique thermodynamic temperature; in this case we have two, so by looking at snapshots for different initial conditions at the pseudo-critical points we can speculate about the competition between the two “temperatures” that govern the dynamics in non-equilibrium Ising systems. Figure \[snapshot\_dis\] shows the time evolution with $m_0=0$ at the two pseudo-critical points, a) for $x_c^d$ and b) for $x_c^o$. Initially the number of sites with spin-flip probability depending on $\beta_2$ is very similar to the number of those depending on $\beta_4$; time increases from left to right, and it can be observed that at $x_c^o$ the coarsening seems to appear faster than at $x_c^d$. ![\[snapshot\_dis\] Snapshots at the pseudo-critical points starting with $m_0=0$, a) $x_c^d$ and b) $x_c^o$. Times are (from left to right) 1, 100, 1000, 10000 and 20000. ](sastre_fig07.eps){width="8cm"} Figure \[snapshot\_ord\] shows the same time evolution for $m_0=1$. In this case, at the beginning of the evolution all sites have a spin-flip probability that depends only on $\beta_4$, and the decay is slightly slower at $x_c^o$. ![\[snapshot\_ord\] Snapshots at the pseudo-critical points starting with $m_0=1$, a) $x_c^d$ and b) $x_c^o$. Times are (from left to right) 1, 100, 1000, 10000 and 20000. 
](sastre_fig08.eps){width="8cm"} It seems that the coarsening differs at the two pseudo-critical points, so as a final test the time evolution for a special initial condition was performed, setting $m_0$ very close to zero by putting almost half of the spins in one state and the other half in the other state, with a circular border; in this way we have a large number of sites with $\beta_4$ while the number of possible sites with $\beta_2$ increases (Fig. \[snapshot\_cir\]). We can observe that the circular shape lasts longer at $x_c^o$. In order to corroborate the effect of the difference between temperatures, simulations should be performed in spin-like systems for different ratios $\beta_4/\beta_2$, other than the equilibrium case $\beta_4/\beta_2=1$. ![\[snapshot\_cir\] Snapshots at the pseudo-critical points starting with $m_0$ almost zero and a circular border, a) $x_c^d$ and b) $x_c^o$. Times are (from left to right) 1, 100, 1000, 10000 and 20000. ](sastre_fig09.eps){width="8cm"} Conclusion ========== In this work it has been shown that the short-time dynamics of the majority-vote model presents power-law behavior at different control parameters for the growing, $x_c^d=0.84860(10)$, and the decay, $x_c^o=0.85007(6)$, processes. These pseudo-critical points are compatible with the critical-point results reported previously, $x_c=0.850(2)$. It has also been shown that the dynamics in both cases is slower than in the Ising model for all the quantities calculated ($m,~m^{(2)}$ and $A$). These results seem to be related to the competing dynamics between the interface ($\beta_2$) and bulk ($\beta_4$) temperatures associated with the dynamics, and as a consequence to the absence of detailed balance in the system. In order to corroborate these results, additional simulations must be carried out in systems without detailed balance. 
The dynamical critical exponent ($z$) and the static critical exponent ($\beta/\nu$) have been evaluated independently using a growing process; in both cases the results were close to the Ising ones. For the decay process the $z$ exponent was evaluated using results from static simulations, finding that its value differs from that obtained in the growing process. Acknowledgments =============== I wish to thank G. Pérez for his useful comments. This work was supported by Conacyt México through Grant No. 61418/2007. References ========== [10]{} M. J. de Oliveira, [*J. of Stat. Phys.*]{} [**66**]{}, 273 (1992). M. J. de Oliveira, J. F. F. Mendes and M. A. Santos, [*J. Phys. A: Math. and Gen.*]{} [**26**]{}, 2317 (1993). Kwak Wooseop, Jae-Suk Yang, Jang-il Sohn and In-mook Kim, [*Phys. Rev. E*]{} [**75**]{}, 061110 (2007). G. Grinstein, C. Jayaprakash and Y. He, [*Phys. Rev. Lett.*]{} [**55**]{}, 2527 (1985). Jae-Suk Yang and In-mook Kim, [*Phys. Rev. E*]{} [**77**]{}, 051122 (2008). F. W. S. Lima, U. L. Fulco and R. N. Costa Filho, [*Phys. Rev. E*]{} [**71**]{}, 036105 (2005). F. W. S. Lima and K. Malarz, [*Int. J. of Modern Phys. C*]{} [**17**]{}, 1273 (2006). H. K. Janssen, B. Schaub and B. Schmittmann, [*Z. Phys. B*]{} [**73**]{}, 539 (1989). B. Zheng, [*Int. J. of Mod. Phys. B*]{} [**12**]{}, 1419 (1998). L. Schülke and B. Zheng, [*Phys. Rev. E*]{} [**62**]{}, 7482 (2000). J. F. F. Mendes and M. A. Santos, [*Phys. Rev. E*]{} [**57**]{}, 108 (1998). T. Tomé and M. J. de Oliveira, [*Phys. Rev. E*]{} [**58**]{}, 4242 (1998). F. Sastre, I. Dornic and H. Chaté, [*Phys. Rev. Lett.*]{} [**91**]{}, 267205 (2003). M. P. Nightingale and H. W. J. Blöte, [*Phys. Rev. B*]{} [**62**]{}, 1089 (2000). J. M. Drouffe and C. Godrèche, [*J. Phys. A: Math. and Gen.*]{} [**32**]{}, 249 (1999). P. Calabrese, A. Gambassi and F. Krzakala, [*J. of Stat. Mech.: Th. Exp.*]{} 06016 (2006).
--- abstract: 'We analyse the constraints of an Abelian 2-form gauge theory using the Faddeev-Jackiw symplectic formalism. Further, this theory is treated as a constrained system in the context of the Batalin-Fradkin-Vilkovisky formalism to retrieve the BRST symmetry. Using field decompositions, the effective action for the Abelian 2-form gauge theory is written in terms of a diagonalized uncanonical part and a BRST-exact one. The nilpotent BRST and contracting homotopy $\sigma$-closed transformations with field redefinitions are shown to be the Darboux transformations used in the Faddeev-Jackiw formalism.' author: - Sudhaker Upadhyay title: ' BRST symmetry and Darboux transformations in Abelian 2-form gauge theory ' --- introduction ============ The quantization of non-singular systems is in principle straightforward. On the other hand, the quantization of singular systems (i.e. systems with constraints) is non-trivial. The generalized Hamiltonian dynamics of singular systems was initiated by Dirac [@di1; @di2]. The dynamics of such systems are widely used in investigating theoretical models in contemporary elementary particle physics [@ht]. Dirac proposed a kind of bracket to quantize these (singular) systems. In the Dirac approach to singular systems, the dynamical equations involve the variables of the entire phase space, including unphysical gauge degrees of freedom. However, a symplectic approach to quantizing singular systems has been introduced by Faddeev and Jackiw (FJ) [@b], the so-called FJ approach, in which the systematic algorithm involves only the physical (unconstrained) degrees of freedom in arriving at a set of Hamilton equations of motion [@rothe]. In Ref. [@pons], this algorithm has been shown to be equivalent to the Dirac approach. In the FJ approach, the Lagrangian is treated in (symplectic) first-order form. Abelian antisymmetric rank-2 tensor field theory is an example of a singular system where some of the constraints are not independent and are said to be reducible. 
This theory is of interest in various respects [@kara; @kaul; @sud; @sud2; @suga; @rl; @sase; @grsc]. For example, Kalb and Ramond have shown that Abelian rank-2 antisymmetric fields interact with classical strings [@kara], which was further applied to the dual description of the Abelian Higgs model [@suga; @rl]. The antisymmetric tensor field couples to gravity or supergravity fields with higher curvature terms in four and ten dimensions [@sase], and a complete understanding of these couplings in superstring theories is crucial in order to have anomaly cancellation [@grsc]. Abelian 2-form gauge fields are also relevant in M-theory. In particular, the action for multiple M2-branes was studied via BLG theory [@bag; @bag1; @gus; @fai; @1fai]. The gauge symmetry of this theory was generated by a Lie $3$-algebra rather than a Lie algebra. However, this limited the scope of the theory to two M2-branes. So, this theory was generalized to the ABJM theory [@ori; @bu; @nas; @fai1; @fai2]. The gauge symmetry of this theory is generated by the gauge group $U(N) \times U(N)$. The BRST symmetry of the ABJM theory has also been studied [@fai3; @1fai3; @2fai3]. The ABJM theory has been generalized to the theory of fractional M2-branes [@aha; @klu]. The gauge group of the ABJ theory is $U(N) \times U(M)$. Recently, the BRST symmetry of the ABJ theory has also been studied [@fai4; @1fai4; @2fai4]. It is shown in [@pm], at quadratic order of the Lagrangian, that the M5-brane theory contains a self-dual two-form gauge field, in addition to the scalars corresponding to fluctuations of the M5-brane in the transverse directions, as well as their fermionic super-partners. The symplectic quantization of Abelian rank-2 antisymmetric tensor field theory has been carried out in Ref. [@neto; @neto1]. However, the Darboux transformations have not been studied for the Abelian rank-2 antisymmetric field in the FJ context. This provides the motivation for the present work.
The Batalin-Fradkin-Vilkovisky (BFV) formulation is a Hamiltonian path integral approach to quantizing constrained systems [@frvi; @bv]. In this approach one extends the phase space of the theory by introducing a conjugate momentum for every Lagrange multiplier and a ghost field for every constraint. The induced effective action in the extended phase space exhibits a so-called BRST symmetry [@brst]. In the FJ approach, by contrast, the phase space is reduced by iteratively solving the constraints and performing Darboux transformations, until we end up with an unconstrained and canonical Lagrangian. The relation between the BFV quantization scheme and the FJ approach for the gauged SU(2) WZW model has been established in [@pa]. We explore it for the reducible gauge theory of the Abelian rank-2 tensor field. In this work we start with the FJ constraint analysis for the 2-form gauge theory. The constraints which we find (primary and zero-iterated) are exactly the same as those obtained from the Dirac analysis, but derived in a more elegant manner. Then, we use the BFV approach, extending the phase space, to analyse the BRST symmetry of the effective action. Further, the BFV action is written as two terms: the first is the uncanonical term that we would obtain with the FJ method after solving the constraints, and the second is a BRST exact term. The BRST transformation and the contracting homotopy $\sigma $ closed transformations are calculated for the reducible 2-form gauge theory. Under the field decompositions these transformations are shown to be the Darboux transformations used in the FJ formalism. The paper is organized as follows. In Sec. II, we discuss the preliminaries of the FJ symplectic approach to singular systems. In Sec. III, we make an analysis to investigate the constraint structure of Abelian rank-2 tensor field theory using the symplectic matrix. Then, we present the BRST-BFV formulation to quantize such a reducible gauge theory in Sec. IV. Further, in Sec.
V, we show that the BRST and contracting homotopy $\sigma$ transformations of the 2-form gauge theory are basically the Darboux transformations used in the FJ symplectic approach. The last section is devoted to concluding remarks.

Faddeev-Jackiw approach: general formulation
=============================================

In this section we discuss the methodology of the FJ approach to quantizing singular systems. In this formalism, we first write the Lagrangian of a singular system in first-order form as follows: $$L(\xi )=a_i(\xi )\dot{\xi}^i -V(\xi ), \ \ (i=1, 2, 3, \dots ,n), \label{lag}$$ where the $\xi^i$ are called the symplectic variables and $V (\xi )$ is called the symplectic potential. The first-order form can be implemented by introducing some auxiliary variables ($a_i$), such as the canonical momenta [@mon]. The Euler-Lagrange equations of motion for the Lagrangian (\[lag\]) can be written as $$f_{ij}(\xi)\dot{\xi}^j=\frac {\partial V (\xi )}{\partial \xi^i}\ \ \ (i=1, 2, 3,\dots ,n), \label{eom1}$$ where $f_{ij}$ is the so-called symplectic matrix, with the following explicit form: $$f_{ij}(\xi )=\frac{\partial a_j}{\partial \xi^i}-\frac{\partial a_i}{\partial \xi^j}.$$ If the matrix $f_{ij}$ is regular (invertible), all symplectic variables can be solved for from (\[eom1\]): $$\dot{\xi}^i=(f^{-1})^{ij}\frac {\partial V (\xi )}{\partial \xi^j}\ \ \ (i=1, 2, 3,\dots ,n).$$ If the matrix $f_{ij}$ is singular, there are constraints in the system. In order to quantize a system with constraints in the FJ method, Barcelos-Neto and Wotzasek [@bnw; @bnw1] proposed the symplectic algorithm extending the original FJ method [@b]. We give a brief description of the symplectic algorithm here. The constraints arising from Eq.
(\[eom1\]) are $$\Omega_\alpha^{(0)}= (U_\alpha )_i\frac{\partial V}{\partial \xi^i}=0 \ \ \ (\alpha =1, 2, 3,\dots ,m),$$ where $U_\alpha$ is a zero mode of the symplectic matrix $f$ and $m = n - r $ ($r$ is the rank of $f$): $$(U_\alpha)^T f=0, \ \ (\alpha =1, 2, 3,\dots ,m).$$ Now, we modify the original Lagrangian by introducing the constraint terms multiplied by some Lagrange multipliers ($v^\alpha$) as $$L_{mod} =a_i(\xi )\dot{\xi}^i -V(\xi ) -v^\alpha \Omega_\alpha^{(0)}, \ \ (\alpha =1, 2, 3,\dots ,m)$$ and calculate the symplectic matrix with the modified Lagrangian density; if there are further constraints in the system, the matrix becomes singular, otherwise it is nonsingular. Iterating this procedure, we eventually obtain a nonsingular matrix, which means there are no further constraints in the system. Then, according to the Darboux theorem [@db], there exists a coordinate transformation $$\begin{aligned} Q_1(\xi^{(0)}),\dots ,Q_{m/2}(\xi^{(0)});\\ P_1(\xi^{(0)}),\dots ,P_{m/2}(\xi^{(0)}),\end{aligned}$$ which transforms the first-order Lagrangian given in Eq. (\[lag\]) into $$L^{(0)}=P_k\dot{Q}_k-V^{(0)}(P,Q), \ \ (k=1, 2, \dots ,m/2).$$ From the mathematical point of view, the key to the FJ method is to construct a Lagrangian that satisfies the Darboux theorem; the FJ canonical quantization is established on such a form of the Lagrangian. In the next section we will treat the Abelian 2-form gauge theory as a singular system and will analyse its constraints using the FJ symplectic approach.
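As a minimal illustration of the algorithm (an example added here for concreteness, not taken from the original text), consider a free non-relativistic particle already written in first-order form:

```latex
% Free particle: symplectic variables \xi = (q, p), with a_q = p, a_p = 0.
L = p\,\dot{q} - \frac{p^{2}}{2m}, \qquad
f_{ij} = \frac{\partial a_j}{\partial \xi^i} - \frac{\partial a_i}{\partial \xi^j}
       = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.
```

Here $f$ is nonsingular, so the algorithm terminates at the first step: inverting $f$ in Eq. (\[eom1\]) gives $\dot q = p/m$ and $\dot p = 0$ directly, and no constraints arise.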
Constraints analysis of Abelian 2-form gauge theory: using FJ approach
======================================================================

We start with the Lagrangian density for the Abelian free Kalb-Ramond theory in (1+3) dimensions (4D) [@kara], given by $${\cal L}=\frac{1}{12} F_{\mu \nu \rho}F^{\mu \nu \rho},\label{kin}$$ where the antisymmetric field strength tensor in terms of the Kalb-Ramond field ($B_{\mu\nu}$) is defined as $F_{\mu\nu\lambda}=\partial_\mu B_{\nu\lambda}+ \partial_\nu B_{\lambda\mu}+ \partial_\lambda B_{\mu\nu}$. This Lagrangian density is invariant under the following gauge transformation: $$\delta B_{\mu\nu} = \partial_\mu\Lambda_\nu -\partial_\nu\Lambda_\mu,$$ where $\Lambda_\mu$ is an arbitrary vector parameter. This gauge transformation is reducible, since a particular choice of the vector parameter, i.e. $$\Lambda_\mu = \partial_\mu \varepsilon,$$ leads to $\delta B_{\mu\nu} =0$. Now, the canonical momenta corresponding to the fields $B_{0i}$ and $B_{ij}$, respectively, are calculated as $$\Pi^{0i}=\frac{\partial{\cal L}}{\partial\dot B_{0i}}=0,$$ and $$\Pi^{ij}=\frac{\partial{\cal L}}{\partial\dot B_{ij}}=\frac{1}{2}F^{0ij}.$$ The primary constraint of the theory thus obtained is $$\Pi^{0i}\approx 0.$$ Now, in order to put the Lagrangian density for the Abelian 2-form gauge theory into first-order symplectic form, we calculate the following: $$\Pi^{\mu\nu}\dot B_{\mu\nu} -{\cal L}=\Pi^{ij}\Pi_{ij}+\frac{1}{12} F_{ijk}F^{ijk} +2\Pi^{ij}\nabla_i B_{0j}.$$ So, the first-order symplectic version of the Lagrangian density given in Eq.
(\[kin\]) is given by $$\begin{aligned} {\cal L}^{(0)}&=&\Pi^{\mu\nu}\dot B_{\mu\nu}-\Pi^{ij}\Pi_{ij}-\frac{1}{12} F_{ijk}F^{ijk} -2\Pi^{ij}\nabla_i B_{0j},\nonumber\\ & =&\Pi^{ij}\dot B_{ij}-V^{(0)},\end{aligned}$$ where $V^{(0)}$ is the symplectic potential with the following expression: $$V^{(0)}=\Pi^{ij}\Pi_{ij}+\frac{1}{12} F_{ijk}F^{ijk} +2\Pi^{ij}\nabla_i B_{0j}.$$ The corresponding symplectic equations of motion can be calculated easily from the following relations: $$f_{ijk\lambda}^{(0)}\dot\xi^{k\lambda}=\frac{\partial V^{(0)}(\xi )}{\partial \xi^{ij}},$$ where $$f_{ijk\lambda}^{(0)}=\frac{\partial a_{k\lambda}({\bf y})}{\partial \xi^{ij}({\bf x})}- \frac{\partial a_{ij}({\bf x})}{\partial \xi^{k\lambda}({\bf y})}.$$ The set of symplectic variables is $$\xi^{(0)}(x)=\{B_{ij}, \Pi_{ij}, B_{0i}\}.$$ The components of the symplectic 1-form are calculated as follows: $$\begin{aligned} a_{B_{ij}}^{(0)}&=&\frac{\partial{\cal L}}{\partial\dot B_{ij}}=\Pi^{ij},\nonumber\\ a_{\Pi_{ij}}^{(0)}&=&\frac{\partial{\cal L}}{\partial\dot \Pi_{ij}}=0,\nonumber\\ a_{B_{0i}}^{(0)}&=&\frac{\partial{\cal L}}{\partial\dot B_{0i}}=0.\end{aligned}$$ Thus the matrix $f^{(0)}$, whose general form reads $$f^{(0)}=\left (\begin{array}{clcr} f_{ik}^{(0){B_{0i}{B_{0k}}}}&f_{ikl}^{(0){B_{0i}{B_{kl}}}}&f_{ikl}^{(0){B_{0i}{\Pi_ {kl}}}}\\ f_{ijk}^{(0){B_{ij}{B_{0k}}}} &f_{ijkl}^{(0){B_{ij}{B_{kl}}}}&f_{ijkl}^{(0){B_{ij}{ \Pi_{kl}}}}\\ f_{ijk}^{(0){\Pi_{ij}{B_{0k}}}}&f_{ijkl}^{(0){\Pi_{ij}{B_{kl}}}}&f_{ijkl}^{(0){\Pi_ {ij}{\Pi_{kl}}}} \end{array}\right),$$ is calculated in this case as $$f^{(0)}=\left (\begin{array}{clcr} 0& \ \ \ \ \ \ \ \ 0& 0\\ 0& \ \ \ \ \ \ \ \ 0& \frac{1}{2}(-\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})\\ 0& \frac{1}{2}(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk})& 0 \end{array}\right)\delta^3 ({ \bf x}-{\bf y}).$$ It is a singular matrix. Its zero mode is $(0, 0, \nu^{B_{0i}})$, where $\nu^{B_{0i}}$ is some arbitrary function.
Following the FJ method [@wot], using the zero mode, we can obtain the constraint $$\begin{aligned} \Omega^{i(0)}&=&(\nu^0)^T_{0i}\frac{\partial V^{(0)}}{\partial \xi^{0i}}\approx0,\nonumber\\ &=& \nu^{B_{0i}}\frac{\partial V^{(0)}}{\partial B^{0i}}\approx 0,\nonumber\\ &=& \nabla_j\Pi^{ij}\approx 0,\end{aligned}$$ which is the zero-iterated constraint. This is not an independent constraint, since it satisfies the reducibility condition $\nabla_i\Omega^{i(0)} =0 $. However, it is easy to see that, even after calculating the symplectic matrix for the Lagrangian density modified with the above constraint, the zero modes do not lead to any new constraint. Hence, there are no further constraints in the theory. We end this section by concluding that both the primary and the zero-iterated constraints are exactly the same as those obtained in the Dirac procedure.

The extended action and BRST symmetry
=====================================

In the above section, we obtained two constraints (primary and zero-iterated) in the 2-form gauge theory, i.e. $\Pi^{0i} =0$ and $\nabla_j\Pi^{ij}=0$. In this section we discuss the nilpotent BRST symmetry for Abelian rank-2 tensor field theory. To do so, we introduce two pairs of canonically conjugate anticommuting ghosts $({\cal{C}}_i,{\cal{P}}_i)$ and $(\bar{{\cal{C}}}_i,\bar{{\cal{P}}}_i)$ corresponding to the above constraints. Further, we need the following pairs of canonically conjugate commuting ghosts $(\beta, \Pi_\beta)$ and $({\bar{\beta}},\Pi_{\bar{\beta}})$, which are ghosts of ghosts, according to the reducibility of the theory.
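The reducibility condition quoted above can be verified in one line (a step spelled out here for completeness): since $\Pi^{ij}$ is antisymmetric while the derivatives commute,

```latex
\nabla_i \Omega^{i(0)}
  = \nabla_i \nabla_j \Pi^{ij}
  = \tfrac{1}{2}\left(\nabla_i \nabla_j - \nabla_j \nabla_i\right)\Pi^{ij}
  = 0 .
```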
The ghost numbers of these ghost fields are as follows: $$\begin{aligned} gh ({\cal{C}}_i)&=&-gh({\cal{P}}_i)=1,\nonumber\\ gh (\bar{\cal{C}}_i)&=&-gh (\bar{\cal {P}}_i)=-1,\nonumber\\ gh (\beta )&=&-gh (\Pi_\beta ) =2,\nonumber\\ gh(\bar{\beta})&=&-gh (\Pi_{\bar{\beta}})=-2,\end{aligned}$$ and they satisfy the following (anti-)commutation relations: $$\begin{aligned} \left\{{\cal{C}}_i({\bf x}),{\cal{P}}_j({\bf y}) \right\}&=&\; -i\; \delta_{ij}\;\delta^3 ({ \bf x}-{\bf y}),\nonumber\\ \left\{\bar{{\cal{C}}}_i({\bf x}),{\bar{\cal{P}}}_j({\bf y}) \right\}&=& \; -i\; \delta_{ij}\delta^3 ({\bf x}-{\bf y}), \label{acom1}\\ \left[ \beta({\bf x}),\; \Pi_\beta ({\bf y})\right ]&=&\; i\delta^3 ({\bf x}-{\bf y}),\nonumber\\ \left[ \bar{\beta}({\bf x}),\;\Pi_{\bar{\beta}} ({\bf y})\right] &=&\; i\;\delta^3 ({\bf x}-{\bf y}).\label{com3}\end{aligned}$$ The phase space is further extended by introducing the canonical conjugate pairs $({\cal{C}}_0,{\cal{P}}_0)$ and $({\bar{\cal{C}}}_0,\bar{\cal{P}}_0)$ as Lagrange multipliers to the pairs $({\cal{C}}_i,{\cal{P}}_i)$, $(\bar{{\cal{C}}}_i,\bar{{\cal{P}}}_i)$, and a canonical pair $(\varphi,\Pi_{\varphi})$ as a Lagrange multiplier to the gauge condition. Hence, the extended action is given by $$\begin{aligned} S_{eff}&=&\int d^4x\left[\Pi^{0i}\dot B_{0i}+ \Pi^{ij}\dot B_{ij}+{\cal P}^i\dot {\cal C}_i + \bar {\cal P}^i \dot {\bar{\cal C}}_i \right.\nonumber\\ &+&{\cal P}^0\dot {\cal C}_0 +\left. \bar {\cal P}^0\dot {\bar{\cal C}}_0+\Pi_{\beta}\dot\beta +\Pi_{\bar\beta}\dot{ \bar\beta} +\Pi_{\varphi}\dot\varphi -{ \cal H}_c -\{Q,\Psi\} \right],\label{effs}\end{aligned}$$ where $\Psi $ is the gauge-fixing fermion and $Q$ is the generator of the BRST symmetry.
The canonical Hamiltonian density, ${\cal H}_c$, is calculated as $${\cal H}_c=\Pi_{ij}\Pi^{ij} +\frac{1}{12}F_{ijk}F^{ijk}.$$ The expression for the BRST charge of the Abelian 2-form gauge theory is given by $$\begin{aligned} Q= -2\nabla_i\Pi^{ij}{\cal C}_j+\Pi_\varphi\bar{\cal P}_0-{\cal P}_0\Pi_{\bar\beta} -\bar{\cal P}^i\Pi_{0i},\label{brsc}\end{aligned}$$ which satisfies the following algebra: $$\begin{aligned} \{Q,Q\}&=&0, \ \ \{ {\cal H}_c, Q\} =0.\end{aligned}$$ The ghost numbers of $Q$ and $\Psi $ are as follows: $$\begin{aligned} gh(Q)&=&1,\ \ \ gh(\Psi )=-1.\end{aligned}$$ The BRST symmetry transformation can be calculated with the following relation: $$s_b \phi=-i{\left[\phi,Q\right]}_\pm,$$ where $+$ is used for the fermionic and $-$ for the bosonic nature of the generic field $\phi$. Using the above relation and the expression for the BRST charge given in Eq. (\[brsc\]), we calculate the BRST symmetry transformations for the fields as follows: $$\begin{aligned} &&s_b B_{ij}= \left(\nabla_i {\cal {C}}_j-\nabla_j{\cal{C}}_i\right),\ \ s_b B_{0i}=-{\bar{\cal{P}}}_i, \nonumber\\ &&s_b \Pi_{\varphi}=0,\ s_b {\cal{C}}_i =0, \ s_b {\bar{\cal {C}}}_i=\Pi_{0i},\ s_b {\cal{C}}_0 =\Pi_{\bar{\beta}}, \nonumber\\ && s_b {\bar{\cal{C}}}_0 =\Pi_\varphi,\ \ s_b \varphi =-{\bar{\cal{P}}}_0, \ s_b \beta =0,\nonumber\\ &&s_b \bar{\beta} =-{\cal{P}}_0,\ \ s_b \Pi_{0i}=0, \ \ s_b \Pi_{ij}=0,\nonumber\\ &&s_b {\cal{P}}^i =2\nabla_j\Pi^{ji},\ \ s_b {\bar{\cal{P}}}_i =0, \ \ s_b { \cal{P}}_0=0,\nonumber\\ &&s_b {\bar{\cal{P}}}_0=0, \ \ s_b \Pi_\beta =0, \ \ s_b \Pi_{\bar{\beta}}=0.\label{qbrst}\end{aligned}$$ These transformations are nilpotent (i.e. $s_b^2 =0$) and are a symmetry of the effective action given in Eq. (\[effs\]).

BRST symmetry transformation as a Darboux transformation
========================================================

In this section, we study the BRST transformation and the contracting homotopy $\sigma$ transformation for the Abelian 2-form gauge theory under the Darboux transformation.
For this purpose we first decompose the field $B_{ij}$ into transverse and longitudinal parts as follows: $$\begin{aligned} { B}_{ij}&=&{B}_{ij}^T+ {B}_{ij}^L,\nonumber\\ &=&\epsilon_{ijk} \nabla_k B^T+ \nabla_i B_j^L -\nabla_j B^L_i, \label{b}\end{aligned}$$ where ${B}_{ij}^T =\epsilon_{ijk} \nabla_k B^T$ and ${B}_{ij}^L =\nabla_i B_j^L -\nabla_j B^L_i$. Then we decompose the corresponding momenta $\Pi_{ij}$ into transverse and longitudinal parts as follows: $$\begin{aligned} {\Pi}_{ij}&=&{ \Pi}_{ij}^T+{\Pi}_{ij}^L,\nonumber\\ &=&\epsilon_{ijk} \frac{\nabla_k}{\nabla^2 }{\Pi}^T+ \frac{1}{\nabla^2} \left[ \nabla_i\Pi_j^L -\nabla_j \Pi_i^L\right], \label{pi}\end{aligned}$$ where ${ \Pi}_{ij}^T = \epsilon_{ijk} \frac{\nabla_k}{\nabla^2 }{\Pi}^T$ and ${\Pi}_{ij}^L = \frac{1}{\nabla^2} \left[ \nabla_i\Pi_j^L -\nabla_j \Pi_i^L \right] $. Further, we exploit the relations (\[qbrst\]) to solve for the field variables ${\cal C}_i, \bar {\cal P}_i, \Pi^L_{ij}$ and $ \Pi_{0i}$ in terms of the BRST transformation as follows: $$\begin{aligned} {\cal C}_i&=& s_b B_i^L,\nonumber\\ \bar{\cal P}_i&=&s_b B_{0i},\nonumber\\ \Pi^L_{ij}&=& \frac{\nabla_j}{2\nabla^2}s_b {\cal P}_i,\nonumber \\ \Pi_{0i}&=&s_b \bar{\cal C}_i.\end{aligned}$$ Using the field decompositions, the effective action given in Eq. (\[effs\]) is written as $$\begin{aligned} S_{eff}&=&\int d^4x \left[\Pi_{0i}\dot B^{0i}+\Pi_{ij}^T\dot B^{ijT}+ 2\frac{\nabla_i}{ \nabla^2 }\Pi_j^L \nabla^i \dot B^{jL}\right.\nonumber\\ &- &\left. 2\frac{\nabla_i}{ \nabla^2 }\Pi_j^L \nabla^j \dot B^{iL} +\dot {\cal C}_i {\cal P}^i+ \dot {\bar{\cal C}}_i\bar {\cal P}^i \right. \nonumber\\ &+&\left.\dot {\cal C}_0 {\cal P}^0 +\dot {\bar{\cal C}}_0\bar {\cal P}^0+\Pi_{\beta}\dot\beta +\Pi_{\bar\beta}\dot{ \bar\beta} \right.
\nonumber\\ &+&\left.\Pi_{\varphi}\dot\varphi -{\cal H}_c-\{ Q,\Psi\}\right ],\end{aligned}$$ where the decomposed canonical Hamiltonian density is given by $$\begin{aligned} {\cal H}_c&=& \Pi^T_{ij}\Pi^{ijT}+ 2\frac{\nabla_i}{ \nabla^2 }\Pi_j^L \frac{\nabla^i}{ \nabla^2 } \Pi^{jL} \nonumber\\ &- & 2\frac{\nabla_i}{ \nabla^2 }\Pi_j^L \frac{\nabla^j}{ \nabla^2 } \Pi^{iL} +\frac{1}{12}F_{ijk}F^{ijk}.\end{aligned}$$ We can easily see that, using the symmetry transformations, the effective action for the Abelian 2-form gauge theory can be recast as $$\begin{aligned} S_{eff}&= &\int d^4x\left[\Pi_{ij}^T\dot B^{ijT}+\Pi_\beta\dot\beta -{\cal H}+s_b \left(\bar{\cal C}^i\dot B_{0i}-{\cal P}^i \dot B_i^L \right.\right.\nonumber\\ &+& \left. \left. {\cal C}_0\dot{\bar\beta}+ \bar{\cal C}_0\dot\varphi +\frac{1}{4}s_b {\cal P}_i\frac{1}{\nabla^2}{\cal P}^i\right) - \{ Q,\Psi\}\right],\end{aligned}$$ where $${\cal H} = \Pi^T_{ij}\Pi^{ijT} +\frac{1}{12}F_{ijk}F^{ijk}.$$ Hence, we can make the following choice for the gauge-fixing fermion: $$\Psi = i\left(\bar{\cal C}^i\dot B_{0i}-{\cal P}^i \dot B_i^L +{\cal C}_0\dot{\bar\beta}+ \bar{\cal C}_0\dot\varphi +\frac{1}{4}s_b {\cal P}_i\frac{1}{\nabla^2}{\cal P}^i\right).$$ Exploiting the canonical field decompositions given in Eqs. (\[b\]) and (\[pi\]), the nilpotent BRST symmetry transformations of Eq.
(\[qbrst\]) have the following form: $$\begin{aligned} &&s_b B^L_i= {\cal {C}}_i,\ \ s_b B_{0i}={\bar{\cal{P}}}_i,\ \ s_b \Pi_{\varphi}=0, \nonumber\\ &&s_b{\cal{C}}_i =0, \ \ s_b{\bar{\cal{C}}}_i =\Pi_ {0i}, \ \ \ s_b{\cal{C}}_0 =\Pi_{\bar{\beta}},\nonumber\\ &&s_b{\bar{\cal{C}}}_0 =\Pi_\varphi,\ \ \ s_b\varphi =-{\bar{\cal{P}}}_0, \ \ \ s_b\beta =0,\nonumber\\ &&s_b\bar{\beta} =-{\cal{P}}_0, \ \ s_b\Pi_{0i}=0, \ s_b\Pi^L_i =0, \nonumber\\ && s_b{ \cal{P}}_0 =0,\ \ s_b{\cal{P}}_i =2 \nabla^j \Pi_{ji}^L,\ \ s_b{\bar{\cal{P}}}_i =0, \nonumber\\ &&s_b\Pi_\beta = 0,\ \ \ s_b{\bar{\cal{P}}}_0=0, \ \ \ s_b\Pi_{\bar{\beta}} =0,\nonumber\\ &&s_bB^T_{ij}=0,\ \ \ s_b \Pi^T_{ij}=0.\end{aligned}$$ Here we notice that only the transverse fields are BRST closed without being BRST exact. Therefore only functionals of these transverse fields enter the classical BRST cohomology. The contracting homotopy $\sigma$ with respect to the above BRST operator $s_b$ is defined as $$\begin{aligned} &&\sigma( {\cal {C}}_i)= B^L_i,\ \ \sigma( B^L_i)=0, \ \ \sigma({\bar{\cal{P}}}_i)=B_{0i}, \nonumber\\ && \sigma (B_{0i})=0,\ \ \sigma (\Pi_{0i})=\bar{\cal C}_i, \ \ \sigma (\bar{\cal C}_i)=0, \nonumber\\ &&\sigma (\Pi_{\bar\beta})={\cal C}_0,\ \sigma ({\cal C}_0) =0, \ \ \sigma (\Pi_\varphi ) =\bar{\cal C}_0, \nonumber\\ && \sigma (\bar{\cal C}_0)=0,\ \ \sigma (-\bar{\cal P}_0)=\varphi,\ \ \sigma (\varphi )=0,\nonumber\\ && \sigma (-\ {\cal P}_0)=\bar{\beta},\ \ \sigma (\bar \beta)=0,\ \ \sigma (\beta)=0,\nonumber\\ &&\sigma\left(2 \nabla^j \Pi_{ji}^L\right)={\cal P}_i, \ \ \sigma ({\cal P}_i)=0,\nonumber\\ && \sigma (\Pi_\beta )=0,\ \ \sigma (B^T_{ij})=0,\ \ \ \sigma (\Pi^T_{ij})=0,\end{aligned}$$ which is also nilpotent.
Further, the $\sigma$ operator satisfies the relation $\sigma s_b +s_b\sigma =N$, where $N$ counts the degree in the unphysical variables $B^L_i, {\cal {C}}_i, {\bar{\cal{P}}}_i, B_{0i}, {\Pi_{0i}, \bar{\cal C}}_i, \Pi_{\bar\beta}, {\cal C}_0, \Pi_{\varphi}, \bar{\cal C}_0, \bar{\cal P}_0, \varphi, {\cal P}_0, \bar{\beta}, \Pi_{ji}^L, {\cal P}_i$, i.e. $$\begin{aligned} N&=&B^L_i \frac{\partial}{\partial B^L_i}+{\bar{\cal{P}}}_i\frac{\partial}{\partial {\bar{\cal{P}}}_i}+{\cal {C}}_i \frac{\partial}{\partial {\cal {C}}_i}+B_{0i} \frac{\partial}{\partial B_{0i}}+ \Pi_{0i} \frac{\partial}{\partial {\Pi_{0i}}} +\bar{\cal C} _i\frac{\partial}{\partial \bar{\cal C} _i}+\Pi_{\bar\beta} \frac{\partial}{\partial \Pi_{\bar\beta}}+{\cal C}_0 \frac{\partial}{\partial {\cal C}_0}\nonumber\\ &+& \Pi_{\varphi}\frac{\partial}{\partial \Pi_{\varphi}}+\bar{\cal C}_0 \frac{\partial}{\partial \bar{\cal C}_0}+\varphi\frac{\partial}{\partial \varphi}+{\cal P}_0 \frac{\partial}{\partial {\cal P}_0}+ \bar{\beta} \frac{\partial}{\partial \bar{\beta}}+ \Pi_{ji}^L \frac{\partial}{\partial \Pi_{ji}^L}+ {\cal P}_i \frac{\partial}{\partial {\cal P}_i}.\end{aligned}$$ It follows that if a functional ${\cal G}$ of degree $n\neq 0$ in the unphysical variables is BRST closed, $$\begin{aligned} s_b{\cal G}=0,\ \ N{\cal G} =n {\cal G},\end{aligned}$$ then it is also BRST exact, i.e. ${\cal G}=s_b[(1/n)\sigma {\cal G}]$. Only those BRST closed functionals which are of degree $n=0$ in the unphysical variables are not BRST exact, i.e. the functionals of the $B^T_{ij}, \Pi^T_{ij}, \beta, \Pi_\beta$ fields. Therefore, the above BRST and $\sigma$ closed transformations under which the fields transform are basically the Darboux transformations used in the FJ quantization.
Conclusion
==========

We have considered the Abelian rank-2 antisymmetric tensor field theory (which is a reducible gauge theory) as a singular system and have investigated the constraints involved in the theory using the FJ symplectic approach. Further, we have implemented the BFV formalism, in which $B_{0i}$ (the analogue of the scalar potential) is treated as a full dynamical variable with vanishing conjugate momentum $\Pi_{0i}$. According to the BFV formulation, the phase space has been extended by introducing a canonical pair of ghost fields for each constraint in the theory. The conserved BRST charge as well as the BRST symmetry have been constructed for the Abelian 2-form gauge theory within the Hamiltonian framework. We have shown that, using the field decompositions, the effective action for Abelian rank-2 tensor field theory can be written as a sum of an uncanonical term and a BRST exact one. Further, it has been shown that the field redefinitions under which the fields transform into nilpotent BRST and $\sigma$ closed transformations are basically the Darboux transformations used in the FJ approach. Further applications of a similar analysis in the quantum theory of gravity [@fd; @mir1; @1mir1; @1mir; @kon; @es] and in higher derivative field theory [@moz] will be interesting. It is also important to mention that, within the FJ framework, attempts to derive a non-abelian version of this theory [@su] will be non-trivial. The path integral corresponding to the FJ quantization method has also been extensively studied in various aspects [@lh]. So far we have studied the Darboux transformations which appear in the FJ quantization as a symmetry of the path integral. However, it will be interesting to explore Darboux transformations under which the path integral corresponding to the FJ quantization method is not invariant [@sud1].

[99]{} P. A. M. Dirac, [*Lectures on Quantum Mechanics,*]{} (Yeshiva Univ. Press, New York, 1964). P. A. M. Dirac, [*Can. J. Math.*]{} [**2**]{}, 129 (1950). M. Henneaux and C.
Teitelboim, [*[ Quantization of gauge systems]{}*]{} ( University Press, Princeton) 1992. L. D. Faddeev and R. Jackiw, [*[Phys. Rev. Lett.]{}*]{} [**[ 60]{}**]{}, 1692 (1988). H. J. Rothe and K. D. Rothe, [*Classical and Quantum Dynamics of Constrained Hamiltonian Systems*]{}, (World Scientific, v. 81, 2010). J. A. García and J. M. Pons, Int. J. Mod. Phys. **A 12**, 451 (1997). M. Kalb and P. Ramond, [*[Phys. Rev.]{}*]{} [**[D 9]{}**]{}, 2273 (1974). R. K. Kaul, Phys. Rev. **D 18**, 1127 (1978). S. Upadhyay and B. P. Mandal, [ *[Mod. Phys. Lett.]{}*]{} [**[A 40]{}**]{}, 3347 (2010). S. Upadhyay and B. P. Mandal, [*Eur. Phys. J.*]{} [**C 72**]{}, 2059 (2012). A. Sugamoto, [*Phys. Rev.*]{} [**D 19**]{}, 1820 (1979). R. L. Davis and E. P. S. Shellard, [*Phys. Lett.*]{} [**B 214**]{}, 219 (1988). A. Salam and E. Sezgin, [*Supergravities in diverse Dimensions*]{} (North-Holland and World Scientific, 1989). M. B. Green, J. H. Schwarz and E. Witten, [*Superstring Theory*]{} (Cambridge Univ. Press, 1987). J. Bagger and N. Lambert, *JHEP* **0802**, 105 (2008). J. Bagger and N. Lambert, *Phys. Rev.* **D 77**, 065008 (2008). A. Gustavsson, *JHEP* **0804**, 083 (2008). M. Faizal, *JHEP* **1204**, 017 (2012). M. Faizal, arXiv:1303.5477 O. Aharony, O. Bergman, D. L. Jafferis and J. Maldacena, *JHEP* **0810**, 091 (2008). I. L. Buchbinder, E.A. Ivanov, O. Lechtenfeld, N.G. Pletnev, I.B. Samsonov and B. M. Zupnik, *JHEP* **0903**, 096 (2009). H. Nastase and C. Papageorgakis, *JHEP* **1103**, 094 (2011). M. Faizal and D. J. Smith, *Phys. Rev.* **D 85**, 105007 (2012). M. Faizal, *Europhys. Lett.* **98**, 31003 (2012). M. Faizal, *Mod. Phys. Lett.* **A 27**, 1250147 (2012). M. Faizal *Comm. Theor. Phys.* **57**, 637 (2012). M. Faizal *Phys. Rev.* **D 84**, 106011 (2011). O. Aharony, O. Bergman and D. L. Jafferis, *JHEP* 0811, 043 (2008). J. Kluson, *JHEP* **0904**, 112 (2009). M. Faizal, *JHEP* **1301**, 156 (2013). M. Faizal, *Nucl. Phys.* **B 869** 598 (2013). M. Faizal, *Int. J. 
Mod. Phys.* **A28** 1350012 (2013). P. M. Ho and Y. Matsuo, *JHEP* **0806**, 105 (2008). J. B. Neto and M. B. D. Silva [*Int. J. Mod. Phys.*]{} [**A 10**]{}, 3759 (1995). R. Banerjee and J. Barcelos-Neto, Annals Phys. **265**, 134 (1998). E. S. Fradkin and G. Vilkovisky, [*Phys. Lett.*]{} [**B 55**]{}, 224 (1975). I. A. Batalin and G. Vilkovisky, [*Phys. Lett.*]{} [**B 69**]{}, 309 (1977). C. Becchi, A. Rouet and R. Stora, [*Annals Phys.*]{} [**[98]{}**]{}, 287 (1974). J. E. Paschalis and P.I. Porfyriadis, *Z. Phys.* **C 73**, 557 (1997). H. Montani and C.Wotzasek, [*[Mod. Phys. Lett.]{}*]{} [**A 8**]{}, 3387 (1993). J. Barcelos-Neto and C. Wotzasek, [*[Mod. Phys. Lett.]{}*]{} [**A 7**]{}, 1737 (1992). J. Barcelos-Neto and C. Wotzasek, [*Int. J. Mod. Phys.*]{} [**A 7**]{}, 4981 (1992). C. Von Westenholz, [*Differential Forms in Mathematical Physics*]{} (North-Holland, Amsterdam, 1981). C. Wotzasek, [*[ Mod. Phys. Lett.]{}*]{}, [**[A 8]{}**]{}, 2509 (1993). F. D. Jonghe, J. Paris and W. Troost, *Nucl.Phys.* **B 476**, 559 (1996). M. Faizal, *Phys. Lett.* **B 705**, 120 (2011). M. Faizal *J. Phys.* **A 44**, 402001 (2011). M. Faizal, *Found. Phys.* **41**, 270 (2011). E. Konishi, *Prog. Theor. Phys.* **121**, 1125 (2009). G. Esposito, A. Yu. Kamenshchik, I. V. Mishakov and G. Pollifrone, *Phys. Rev.* **D 52**, 3457 (1995). M. Faizal and M. Khan, *Eur. Phys. J.* **C 71**, 1603 (2011). C. S. Chu, S. L. Ko, *JHEP* **05**, 028 (2012). L. Liao and Y. C. Huang, [*Phys. Rev.*]{} [**D 75**]{}, 025025 (2007). S. Upadhyay, [*in progress*]{}.
---
abstract: 'I will present my implementation ''n-units'' of physical units in C++ programs. It allows the compiler to check for dimensional consistency.'
author:
- Ingo Josopait
bibliography:
- 'units.bib'
title: Checking C++ Programs for Dimensional Consistency
---

Introduction
============

Computer simulations and other scientific programs often deal with physical quantities that have dimensional meanings, like length scales or time scales. The internal representation of such quantities is done by floating point numbers. The actual numbers have no direct meaning by themselves. Their meanings rely on the definition of the measuring units (for example, the length '5 meters' could equally well be written as '500 centimeters' or '16.4 feet'). The addition, subtraction or comparison of two numbers of different dimensions, like time scales and length scales, is physically not meaningful and can be regarded as an error. This follows from the principle of dimensional invariance, i.e. from the demand that the meaning of a formula should not depend on the choice of the system of measuring units. Dimensional inconsistencies are a frequent source of errors in programs, and much debugging time is usually spent checking a program for dimensional consistency. However, checking for dimensional consistency can be done automatically [@kennedy; @allen]. Implementations of units in programming languages like Python [@denis] and C++ [@brown; @dimnum] exist. I will present another implementation of units for the programming language C++. The source code is available at `http://starburst.sourceforge.net/n-units/`. The emphasis lies on computational speed. As in [@brown] and [@dimnum], the check for dimensional consistency is done at compile time. The main differences to these existing implementations are:

- Checking for dimensional consistency is designed to be deactivated for production runs, which results in better runtime performance (and reduced compile time).
- Template definitions are simplified and the set of base units can easily be extended.

- A function is provided that takes quantities to a fractional power.

Dimensions, Units and Quantities
================================

Let me first give three definitions:

#### quantity

A quantity is a property of some kind that can be quantified (e.g. the height of an object or its velocity).

#### dimension

A dimension specifies the type of a quantity (e.g. a length scale or a time scale). Only quantities of the same dimension can be compared.

#### unit

A unit is a quantity that has been defined in order to measure other quantities and to be able to express them in terms of numbers.

Quantities can then be expressed as multiples of units. More than one unit can be defined per dimension. For example, cm, m and feet are all units that represent length scales. There are typically at least three independent dimensions used in a computer simulation:

- length scale

- mass scale

- time scale

This set can be extended. The Système International d'unités (SI) is based on seven base units. But other dimensions, like currencies, the amount of information (in bits or bytes), or the cosmological scale factor, can also be used. More complex units (like energies or velocities) are derived from the base units.

Checking for Dimensional Consistency
====================================

A dimension is uniquely defined by the exponents of the base units. For example, velocities (${\rm cm}^{1} {\rm s}^{-1}$) are composed of a length scale of exponent 1 and a time scale of exponent -1. This can formally be expressed by vectors: if we represent length scales by the vector (1,0,0), mass scales by (0,1,0) and time scales by (0,0,1), velocities are represented by the vector (1,0,-1). For practical purposes it is sufficient to use a single integer number to represent the dimensional class. This has two advantages:

- Template definitions are simplified.

- Additional base units can be easily defined.
Quantities are represented by the following template class: template <int n, class T=double> struct units { T data; static units<n, T> construct(const T& a) { // somewhat hidden constructor for explicit use units<n, T> r; r.data = a; return r; } ... }; The template parameter `n` specifies the dimension of the quantity, and `T` specifies the underlying floating point type. Base Units ---------- The base units are defined in the following way (as a convention, all units end with an underscore):\ ----------------------------- ---------------------------------------------------- \[baseunits\] Length scale: `const units<1> m_ = units<1>::construct(1);` Mass scale: `const units<10> g_ = units<10>::construct(1E-3);` Time scale: `const units<100> s_ = units<100>::construct(1);` ... ----------------------------- ---------------------------------------------------- This would define that the actual floating point representation follows the SI system (numbers are given in meters, kilograms and seconds) and that the dimensions of length scale, mass scale and time scale are represented by the template parameters 1, 10 and 100, respectively. Note that with the above definition the compiler cannot distinguish between, for instance, $\rm{cm}^{10}$ and $\rm{g}$. Since such large exponents of units are rare, however, it is unlikely that this will be of practical importance. Basic Operations ---------------- The following operations between quantities are allowed: - Addition and subtraction are allowed between quantities of the same dimension. - Multiplication of two quantities of types `units<`$m$`>` and `units<`$n$`>` returns a quantity of type `units<`$m+n$`>`. - Dividing a quantity of type `units<`$m$`>` by a quantity of type `units<`$n$`>` returns a quantity of type `units<`$m-n$`>`. - Relational operators ($==$, $<$, $>$) are allowed between quantities of the same dimension. 
In the framework of C++ templates, this can be written as: template <int n, class T=double> struct units { ... template <class B> units<n, typename addtype<T,B>::type> operator + (const units<n, B>& b) const; template <class B> units<n, typename subtype<T,B>::type> operator - (const units<n, B>& b) const; template <int nb, class B> units<n+nb, typename multype<T,B>::type> operator * (const units<nb, B>& b) const; template <int nb, class B> units<n-nb, typename divtype<T,B>::type> operator / (const units<nb,B>& b) const; ... }; The classes `addtype`, `subtype`, `multype` and `divtype` are used to correctly determine the return type of the underlying floating point number (so that, for instance, operations involving a `float` and a `double` always return a `double`). The dimensionless type `units<0>` is never used. Specializations of the above operators ensure that a bare floating point number (such as `double` or `float`) is returned instead. Fractional Powers ----------------- Taking fractional powers of quantities can be defined in the following way: - Taking the square root of a quantity of type `units<`$n$`>` returns a quantity of type `units<`$n/2$`>`. - More generally, taking a quantity of type `units<`$n$`>` to the fractional power $p\over q$ returns a quantity of type `units<`$n{p\over q}$`>`. For this purpose, the following template functions are provided: template <int n, class T> units<n/2, T> sqrt(const units<n, T>& a); template <int pa, int pb, int n, class T> units<(pa*n)/pb, T> pow(const units<n, T>& a); The expression $a^{p/q}$ can then be written as `pow<p,q>(a)`, where `p` and `q` are constant integers. Apart from the possibility to check for dimensional correctness, another advantage of using the above `pow<>` template function is that the exponent $p\over q$ is known to the compiler. 
The template function `pow<>` can therefore attempt to use the standard functions `sqrt(double)` (which takes the square root) and `cbrt(double)` (which takes the cubic root) in order to avoid the considerably slower function `double pow(double, double)`. Data Types ---------- The dimension has to be specified for every quantity in the program. The `typeof` extension of the gcc compiler is very useful for this. `typeof(x)` returns the type of the object `x`. A velocity variable, for instance, can be defined by typeof(cm_/s_) v = 5*m_/s_; Because the units are defined to be constant, `typeof` expressions that involve only one unit should be written as a product to remove the constness. A length `h` should therefore be defined as: typeof(1*cm_) h = 2*km_; Unfortunately, the `typeof` keyword is not part of the ISO C++ standard. However, it is possible to emulate the `typeof` extension [@gibbons] with the help of the `sizeof` keyword (which is part of the ISO C++ specification), at the cost of having to register every type (and dimension) to which `typeof` is applied. To emulate `typeof`, disable the `HAVE_TYPEOF` option in the header file. The following small sample program illustrates the use of units in a program. It calculates the time a slice of bread needs to fall from a table (of height 1 meter) to the floor. #include <iostream> #include "units.h" using namespace std; int main() { typeof(1*cm_) height = 1*m_; typeof(cm_/s_/s_) g = 9.81*m_/s_/s_; typeof(1*s_) t = sqrt(2*height/g); cout << "free fall time=" << t/s_ << " seconds" << endl; } Any violation of dimensional consistency would trigger a compiler error. Disabled Checking ================= Dimensional checking can be disabled by the preprocessor option `UNITCHECK` in the header file. For performance reasons it is advisable to disable it for production runs and to enable it only to check newly written code. 
If `UNITCHECK` is disabled, the definitions of the base units (see section \[baseunits\]) are replaced by constant floating point numbers: const double m_ = 1; const double g_ = 1E-3; const double s_ = 1; ... The use of units will then have no negative influence on the runtime performance. I would like to stress that even though the compile-time check of units in this implementation relies on the use of template classes, units that are not checked for dimensional consistency can still be used in virtually any programming language, simply by defining the corresponding units as constant floating point numbers. Deriving Additional Units ========================= Once the set of base units is defined, other units can be derived from it, like for example: const typeof(1*m_) cm_ = m_/100; // centimeter const typeof(1*g_) kg_ = 1000*g_; // kilogram const typeof(kg_*m_*m_/s_/s_) J_ = kg_*m_*m_/s_/s_; // Joule const typeof(m_/s_) c_ = 2.99792458e8 * m_ / s_; // speed of light Given these definitions, units can be used directly in a program. One does not need to know the set of underlying measuring units that is used to represent quantities. Even the combination of different units is possible. For example, the following expression is perfectly valid: typeof(1*cm_) height = 1*m_ + 75*cm_; The compiler will correctly add these two length scales. Since the system of base units is known at compile time, the compiler can optimize this expression and perform the calculation during the compilation. Choosing the Base Type ====================== Sometimes the programmer wants to use a specific base type other than `double`, like `float` or `complex<>`. This can be accomplished either by explicitly using `cm_f` (`float`), `cm_d` (`double`) or `cm_ld` (`long double`) instead of `cm_` (which defaults to `double`) or by using the `tof(..)`, `tod(..)`, `told(..)` or `to<>` functions. 
For instance, typeof(cm_f/s_f) v; and typeof(tof(cm_/s_)) v; both define the velocity `v` to be of type `float`. Output ====== Since `UNITCHECK` should be deactivated for production runs, the compiler has no information about the dimensions of quantities. Therefore, the programmer has to take care of meaningful output of quantities. This can be done by dividing the quantity by the desired unit, for example: void foo(typeof(m_/s_) v) { cout << "v = " << v/(mile_/hour_) << " mph" << endl; } Conclusions =========== I have presented an implementation of physical units in C++ programs. The code is checked for dimensional consistency at compile time. The main advantages of this are: - The programmer can use units directly in the code, without the need to know the system of base units. - Implementing complex formulae is simplified, because the programmer can be more relaxed about dimensional correctness. - Dimensional correctness is guaranteed by the provided header files.
--- abstract: 'We give a full analysis of the auto- and cross-correlations between the Stokes parameters of the cosmic microwave background. In particular, we derive the window function for an antenna with a Gaussian response in a polarization experiment, and construct correlation function estimators corrected for instrumental noise. They are applied to calculate the signal to noise ratios for future anisotropy and polarization measurements. While the small-angular-scale anisotropy-polarization correlation is likely to be detected by the MAP satellite, the detection of electric and magnetic polarization would require higher experimental sensitivity. For large-angular-scale measurements such as the planned SPOrt/ISS, the expected signal to noise ratio for polarization is greater than one only for reionized models with high reionization redshifts, and the ratio is lower for the anisotropy-polarization correlation. Correlation and covariance matrices for likelihood analyses of ground-based and satellite data are also given.' address: 'Institute of Physics, Academia Sinica, Taipei, Taiwan 11529, R.O.C.' author: - 'Kin-Wang Ng[^1] and Guo-Chin Liu[^2]' title: Correlation Functions of CMB Anisotropy and Polarization --- Introduction ============ The detection of the large-angle anisotropy of the cosmic microwave background (CMB) by the [*COBE*]{} DMR experiment [@smo] provided important evidence of large-scale spacetime inhomogeneities. Since then, a dozen small-scale anisotropy measurements have hinted that the Doppler peak resulting from acoustic oscillations of the baryon-photon plasma on the last scattering surface is present [@pag]. CMB measurements have an advantage over other traditional observations because the small CMB fluctuations can be well treated as linear, while the low-redshift universe is in a non-linear regime.
It is now well established that CMB temperature anisotropies are a genuine imprint of the early universe, which could potentially be used to determine to a high precision virtually all cosmological parameters of interest. It has been estimated that a number of cosmological parameters can be determined with standard errors of $10\%$ or better by the upcoming NASA MAP satellite [@jun]. Furthermore, the future Planck Surveyor CMB mission would have the capability of observing the early universe about 100 times better than MAP. At this point, we should explore as much of the information contained in the relic photons as possible, beyond the temperature anisotropy. Anisotropic radiation possessing a non-zero quadrupole moment acquires a net linear polarization when it is scattered by electrons via Thomson scattering [@ree] (also see Eq. (6) of Ref. [@ng3]). When the photons begin to decouple from the matter on the last scattering surface and develop a quadrupole anisotropy via the Sachs-Wolfe effect [@sac], linear polarization is created from scatterings with free electrons near the last scattering surface. Studies have shown that on small angular scales the rms polarization, in a standard universe, is a few percent of the rms anisotropy, while the large-scale polarization is insignificant [@bon]. In models with early reionization, the large-scale polarization is greatly enhanced, to the level of a few percent, but the small-scale anisotropy is suppressed significantly [@ng3; @zal1]. Therefore, CMB polarization would provide valuable complementary information to the anisotropy measurements. In addition, the anisotropy-polarization cross correlation offers a test of physics on the last scattering surface, as well as a possibility of distinguishing the scalar and tensor perturbations [@cri1; @cri2]. However, all of these polarization calculations have relied on a small-angle approximation, which may not be valid when a large sky-coverage is considered.
Accordingly, full-sky analyses of the polarization have been performed [@ng4; @sel1; @kam1; @sel2; @kam2]. It was found that there are modifications to low multipole moments ($l<30$) of the polarization power spectra, where the tensor contribution dominates over the scalar contribution [@ng4; @sel2]. More importantly, rotationally invariant power spectra of the Stokes parameters have been constructed [@sel1; @kam1; @sel2; @kam2]. In particular, one of them is a parity-odd magnetic polarization spectrum, which vanishes for scalar-induced polarization, thereby allowing one to make a model-independent identification of non-scalar (i.e. vector or tensor) perturbations (also see Ref. [@sel]). Recently it was shown that magnetic polarization would be a strong discriminator between defect and inflation models [@hu1; @sel3]. Also, a new physically transparent formalism based on the total angular momentum representation [@tol] was proposed [@hu1; @hu2], which simplifies the radiative transport problem and can be easily generalized to open universes [@hu3]. Since polarization fluctuations are typically at a part in a million, an order of magnitude below the temperature fluctuations, measuring this signal requires high detector sensitivity, long integration time, and/or a large number of pixels. So far, only experimental upper limits have been obtained [@lub; @par; @net], with the current limit on the linear polarization being $16\mu K$ [@net]. Ground-based experiments being planned or built will probably achieve detection sensitivity using low-noise HEMT amplifiers as well as long hours of integration time per pixel. The MAP satellite will launch in 2000 and make polarization measurements of the whole sky in about $10^5$ pixels. If the polarization foreground can be successfully removed, MAP should marginally reach the detection level.
For a detection of the magnetic polarization one would require either several years of MAP observations or the Planck mission [@sel1; @sel3; @kam3]. We expect polarization measurements to be as important as anisotropy measurements in future missions. Previous full-sky studies of the polarization are mainly based on angular power spectrum estimators in Fourier space. Although electric- and magnetic-type scalar fields $E$ and $B$ in real space can be constructed, they must involve nonlocal derivatives of the Stokes components. In this paper, we will study in detail the auto- and cross-correlation functions of the Stokes parameters themselves in real space. Although the two approaches should be equivalent to each other, each has its own advantages in different situations. We will follow the formalism of Ref. [@sel1], expanding the Stokes parameters in terms of spin-weighted spherical harmonics. From the expansion coefficients one constructs rotationally invariant power spectra, which will be evaluated using the CMBFAST Boltzmann code developed by Seljak and Zaldarriaga [@zal2]. In Sec. \[stokes\] we briefly introduce the CMB Stokes parameters and their relation to spin-weighted spherical harmonics. Sec. \[spin\] is devoted to discussions of the properties of the harmonics, the harmonic representation of the rotation group, and the generalized addition theorem and recursion relation. In Sec. \[power\] we expand the Stokes parameters in spin-weighted harmonics, and briefly explain how to compute the power spectra induced by scalar and tensor perturbations. In Sec. \[window\] we derive window functions appropriate to detectors with Gaussian angular response in anisotropy and polarization experiments. The instrumental noise of detectors in CMB measurements is treated in Sec. \[noise\] as white noise superposed upon the microwave sky. In Sec. \[estimator\] we construct the auto- and cross-correlation function estimators, corrected for noise bias, in terms of the power spectra. As examples, in Sec.
\[result\] we compute the means and variances of the estimators for different configurations of future space missions in standard cold dark matter models. Further, we outline the likelihood analysis of the experimental data in Sec. \[like\]. Sec. \[conclusion\] contains our conclusions. Stokes Parameters {#stokes} ================= Polarized light is conventionally described in terms of the four Stokes parameters $(I,Q,U,V)$, where $I$ is the intensity, $Q$ and $U$ represent the linear polarization, and $V$ describes the circular polarization. Each parameter is a function of the photon propagation direction $\hat n$. Let us define $$T=I-{\bar I}$$ as the temperature fluctuation about the mean. Since circular polarization cannot be generated by Thomson scattering alone, $V$ decouples from the other components. So, it suffices to consider only the Stokes components $(T,Q,U)$ as far as CMB anisotropy and polarization are concerned. Traditionally, for radiation propagating radially along $\hat e_r$ in the spherical coordinate system (see Fig. \[fig1\]), $Q$ and $U$ are defined with respect to an orthonormal basis $(\hat a, \hat b)$ on the sphere, which are related to $(\hat e_\theta, \hat e_\phi)$ by $$\hat a = \hat e_\phi,\quad{\rm and}\quad \hat b = - \hat e_\theta.$$ Then, $Q$ is the difference in intensity polarized in the $\hat b$ and $\hat a$ directions, while $U$ is the difference in the $(\hat a + \hat b)/\sqrt 2$ and $(\hat a - \hat b)/\sqrt 2$ directions [@cha]. Under a left-handed rotation of the basis about $\hat e_r$ through an angle $\psi$, $$\left ( \begin{array}{c} \hat a'\\ \hat b' \end{array} \right)= \left( \begin{array}{cc} \cos\psi&-\sin\psi\\ \sin\psi&\cos\psi \end{array} \right) \left ( \begin{array}{c} \hat a\\ \hat b \end{array} \right),$$ or equivalently, $$\frac{1}{\sqrt 2} \left(\hat a'+i\hat b'\right) = e^{i\psi} \frac{1}{\sqrt 2} \left(\hat a+i\hat b\right).
\label{rotation}$$ Under this transformation $T$ and $V$ are invariant, while $Q$ and $U$ transform as [@cha] $$\left ( \begin{array}{c} Q'\\U' \end{array} \right)= \left( \begin{array}{cc} \cos2\psi&\sin2\psi\\-\sin2\psi&\cos2\psi \end{array} \right) \left ( \begin{array}{c} Q\\U \end{array} \right),$$ which in complex form is $$Q'(\hat e_r)\pm i U'(\hat e_r) = e^{\mp 2i\psi} \left[ Q(\hat e_r)\pm i U(\hat e_r) \right].$$ Hence, $Q(\hat n)\pm i U(\hat n)$ has spin-weight $\mp 2$. Therefore, we may expand each Stokes parameter in its appropriate spin-weighted spherical harmonics [@tol; @sel1]. Unfortunately, the convention used in theory differs slightly from experimental practice. In CMB polarization measurements, usually the north celestial pole is chosen as the reference axis $\hat e_3$, and linear polarization at a point $\hat x$ on the celestial sphere is defined by $${\cal Q}(\hat x)=T_{N,S}-T_{E,W},\quad{\rm and}\quad {\cal U}(\hat x)=T_{NE,SW}-T_{NW,SE},$$ where $T_{N,S}$ is the antenna temperature of radiation polarized along the north-south direction, and so on [@lub]. In small-scale experiments covering only small patches of the sky, the geometry is essentially flat, so one can simply choose any local rectangular coordinates to define ${\cal Q}$ and ${\cal U}$.
Since an observation in direction $\hat x$ receives radiation with propagating direction $\hat n = -\hat x$, we have $${\cal Q}(\hat x) = Q(\hat n),\quad{\rm and}\quad {\cal U}(\hat x) = -U(\hat n).$$ Spin-weighted Spherical Harmonics {#spin} ================================= An explicit expression of spin-$s$ spherical harmonics is [^3] [@new; @pen] $$\begin{aligned} \:_{s}Y_{lm}(\theta,\phi)&&=(-1)^{m}e^{im\phi}\left[\frac{2l+1}{4\pi} \frac{(l+m)!}{(l+s)!}\frac{(l-m)!}{(l-s)!}\right]^{\frac{1}{2}} \sin^{2l} \left(\frac{\theta}{2}\right) \nonumber \\ &&\times\sum_{r}\left(\begin{array}{c} l-s\\ r\\ \end{array}\right)\left(\begin{array}{c} l+s\\ r+s-m\\ \end{array}\right)(-1)^{l-s-r}\cot^{2r+s-m}\left(\frac{\theta}{2}\right), \label{s-har}\end{aligned}$$ where $$\max(0,m-s) \le r\le \min(l-s,l+m).$$ Note that the common spherical harmonics $Y_{lm}=\:_{0}Y_{lm}$. They have the conjugation relation and parity relation: $$\:_{s}Y^*_{lm}(\theta,\phi)=(-1)^{m+s}\:_{-s}Y_{l-m}(\theta,\phi), \label{conjugation}$$ $$\:_{s}Y_{lm}(\pi-\theta,\phi+\pi)=(-1)^{l}\:_{-s}Y_{lm}(\theta,\phi). \label{parity}$$ They satisfy the orthonormality condition and completeness relation: $$\int d\Omega \:_{s}Y^*_{l'm'}(\theta,\phi)\:_{s}Y_{lm}(\theta,\phi) =\delta_{l'l}\delta_{m'm}, \label{ortho}$$ $$\sum_{lm}\:_{s}Y^*_{lm}(\theta',\phi')\:_{s}Y_{lm}(\theta,\phi) =\delta(\phi'-\phi) \delta(\cos\theta'-\cos\theta).$$ Therefore, a quantity $\eta$ of spin-weight $s$ defined on the sphere can be expanded in spin-$s$ basis, $$\eta(\theta,\phi)=\sum_{lm} \eta_{lm} \:_{s}Y_{lm} (\theta,\phi),$$ where the expansion coefficients $\eta_{lm}$ are scalars. 
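As a quick sanity check of Eq. (\[s-har\]) (added here for illustration): for $l=m=s=0$ only the $r=0$ term survives, all binomial and cotangent factors equal one, and the formula reduces to the familiar monopole,

```latex
\:_{0}Y_{00}(\theta,\phi)
  = \left[\frac{1}{4\pi}\right]^{\frac{1}{2}}
    \binom{0}{0}\binom{0}{0}(-1)^{0}\cot^{0}\left(\frac{\theta}{2}\right)
  = \frac{1}{\sqrt{4\pi}} = Y_{00},
```

consistent with the identification $Y_{lm}=\:_{0}Y_{lm}$ above.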
The raising and lowering operators, $\partial\!\!'$ and $\bar{\partial\!\!'}$, acting on $\eta$ of spin-weight $s$, are defined by [@new] $$\begin{aligned} {\partial\!\!'}\eta&=&-(\sin\theta)^s \left[\frac{\partial}{\partial\theta} +i\csc\theta\frac{\partial}{\partial\phi}\right](\sin\theta)^{-s}\eta,\\ \bar{\partial\!\!'}\eta&=&-(\sin\theta)^{-s}\left[\frac{\partial}{\partial \theta} -i\csc\theta\frac{\partial}{\partial\phi}\right](\sin\theta)^s\eta.\end{aligned}$$ When they act on the spin-$s$ spherical harmonics, we have [@new] $$\begin{aligned} {\partial\!\!'}\:_{s}Y_{lm}&=&\left[(l-s)(l+s+1)\right]^{1\over 2} \:_{s+1}Y_{lm},\\ \bar{\partial\!\!'}\:_{s}Y_{lm}&=&-\left[(l+s)(l-s+1)\right]^{1\over 2} \:_{s-1}Y_{lm},\\ \bar{\partial\!\!'}{\partial\!\!'}\:_{s}Y_{lm}&=&-(l-s)(l+s+1)\:_{s}Y_{lm}.\end{aligned}$$ Using these raising and lowering operations, we obtain the generalized recursion relation for $l-2\ge \max(|s|,|m|)$, $$\begin{aligned} \left(\frac{l+s}{l-s}\right)^{1\over 2} &&\:_{s}Y_{lm} =\left[\frac{(2l+1)(2l-1)}{(l+m)(l-m)} \right]^{1\over 2} \cos\theta \:_{s}Y_{l-1,m} \nonumber \\ &&-\left[\frac{(2l+1)(l+m-1)(l-m-1)(l-s-1)}{(2l-3)(l+m)(l-m) (l+s-1)}\right]^{1\over 2}\:_{s}Y_{l-2,m} \nonumber \\ &&+s\left[\frac{(2l+1)(2l-1)}{(l+m)(l-m)(l-s)(l+s-1)}\right]^{1\over 2} \sin\theta\:_{s-1}Y_{l-1,m}. \label{recursion}\end{aligned}$$ This will be used for evaluating the correlation functions in Sec. \[result\]. Table 1 lists explicit expressions for some low-$l$ spin-weighted harmonics, from which higher-$l$ ones can be constructed. The harmonics are related to the representation matrices of the 3-dimensional rotation group. 
If we define a rotation $R(\alpha,\beta,\gamma)$ as being composed of a rotation $\alpha$ around $\hat e_3$, followed by $\beta$ around the new $\hat e_2'$ and finally $\gamma$ around $\hat e_3''$, the rotation matrix of $R$ will be given by [@new] $$D_{-sm}^{l}(\alpha,\beta,\gamma)=\sqrt{\frac{4\pi}{2l+1}} \:_{s}Y_{lm}(\beta,\alpha)e^{-is\gamma}.$$ Let us consider a rotation group multiplication, $$R(\alpha,\beta,-\gamma)= R(\phi',\theta',0) R^{-1}(\phi,\theta,0),$$ where the angles are defined in Fig. \[fig1\]. In terms of rotation matrices, it becomes $$D^l_{s_1 s_2}(\alpha,\beta,-\gamma)=\sum_{m} D^l_{s_1 m} (\phi',\theta',0) D^{l*}_{s_{2} m}(\phi,\theta,0),$$ which leads to the generalized addition theorem,[^4] $$\sum_{m}\:_{s_1}Y^*_{lm}(\theta',\phi') \:_{s_2}Y_{lm}(\theta,\phi) =\sqrt{\frac{2l+1}{4\pi}}(-1)^{s_1-s_2} \:_{-s_1}Y_{ls_2}(\beta,\alpha)e^{-is_1\gamma}. \label{addition}$$ Power Spectra {#power} ============= Following the notations in Ref. [@sel1], we expand the Stokes parameters as $$\begin{aligned} T(\hat n)&=&\sum_{lm}a_{T,lm}Y_{lm}(\hat n), \nonumber \\ Q(\hat n)-iU(\hat n)&=&\sum_{lm}a_{2,lm}\:_{2}Y_{lm}(\hat n), \nonumber \\ Q(\hat n)+iU(\hat n)&=&\sum_{lm}a_{-2,lm}\:_{-2}Y_{lm}(\hat n). 
\label{expand}\end{aligned}$$ The conjugation relation (\[conjugation\]) requires that $$a_{T,lm}^*=(-1)^m a_{T,l-m},\quad a_{-2,lm}^*=(-1)^m a_{2,l-m}.$$ For Stokes parameters in CMB measurements, using the parity relation (\[parity\]), we have $$\begin{aligned} {\cal T}(\hat x)&=&\sum_{lm}(-1)^l a_{T,lm}Y_{lm}(\hat x), \nonumber \\ {\cal Q}(\hat x)+i{\cal U}(\hat x)&=&\sum_{lm}(-1)^l a_{2,lm} \:_{-2}Y_{lm}(\hat x), \nonumber \\ {\cal Q}(\hat x)-i{\cal U}(\hat x)&=&\sum_{lm}(-1)^l a_{-2,lm} \:_{2}Y_{lm}(\hat x).\end{aligned}$$ Isotropy in the mean guarantees the following ensemble averages: $$\begin{aligned} \left<a^{*}_{T,l'm'}a_{T,lm}\right>&=&C_{Tl}\delta_{l'l}\delta_{m'm}, \nonumber \\ \left<a^{*}_{2,l'm'}a_{2,lm}\right>&=&(C_{El}+C_{Bl})\delta_{l'l} \delta_{m'm}, \nonumber \\ \left<a^{*}_{2,l'm'}a_{-2,lm}\right>&=&(C_{El}-C_{Bl}) \delta_{l'l}\delta_{m'm}, \nonumber \\ \left<a^{*}_{T,l'm'}a_{2,lm}\right>&=&-C_{Cl}\delta_{l'l}\delta_{m'm}. \label{CMBaa}\end{aligned}$$ Consider two points $\hat n'(\theta',\phi')$ and $\hat n(\theta,\phi)$ on the sphere. Using the addition theorem (\[addition\]) and Eq. (\[CMBaa\]), we obtain the correlation functions, $$\begin{aligned} &&\left<T^{*}(\hat n')T(\hat n)\right> =\sum_l\frac{2l+1}{4\pi}C_{Tl} P_l(\cos\beta), \label{ct} \\ &&\left<T^{*}(\hat n')[Q(\hat n)+iU(\hat n)]\right> =-\sum_l\frac{2l+1}{4\pi}\sqrt{\frac{(l-2)!}{(l+2)!}}C_{Cl} P^2_l(\cos\beta) e^{2i\alpha}, \label{cc}\\ &&\left<[Q(\hat n')+iU(\hat n')]^*[Q(\hat n)+iU(\hat n)]\right> =\sum_{l}\sqrt{\frac{2l+1}{4\pi}}(C_{El}+C_{Bl}) \:_{2}Y_{l-2}(\beta,0)e^{2i(\alpha-\gamma)}, \label{c+} \\ &&\left<[Q(\hat n')-iU(\hat n')]^*[Q(\hat n)+iU(\hat n)]\right> =\sum_{l}\sqrt{\frac{2l+1}{4\pi}}(C_{El}-C_{Bl}) \:_{2}Y_{l2}(\beta,0)e^{2i(\alpha+\gamma)}, \label{c-}\end{aligned}$$ where $\alpha$, $\beta$, and $\gamma$ are the angles defined in Fig. \[fig1\]. Eq. (\[cc\]) is the most general form of those found in Refs. [@cri1; @ng4; @sel; @mel]. 
In the small-angle approximation, i.e. $\beta<<1$, $\alpha\simeq\gamma$, so Eq. (\[c+\]) depends only on the separation angle $\beta$. When $\hat n'$ and $\hat n$ lie on the same longitude, $\alpha=\gamma=0$ and hence Eqs. (\[cc\],\[c+\],\[c-\]) depend only on $\beta$. When $\hat n'$ and $\hat n$ lie on the same latitude, $\alpha+\gamma=\pi$. Hence the phase angle in Eq. (\[c-\]) vanishes, and that in Eq. (\[c+\]) becomes equal to $e^{4i\alpha}$. A coordinate-independent set of correlation functions has been obtained by defining correlation functions of Stokes parameters $(Q_r,U_r)$ with respect to axes which are parallel and perpendicular to the great arc connecting the two points being correlated [@kam2]. This prescription is indeed equivalent to the transformations: $$\begin{aligned} Q_{r}(\hat n')+iU_{r}(\hat n')&=& e^{-2i\gamma}[Q(\hat n')+iU(\hat n')],\nonumber\\ Q_{r}(\hat n)+iU_{r}(\hat n)&=&e^{-2i\alpha}[Q(\hat n)+iU(\hat n)].\end{aligned}$$ The authors in Ref. [@kam2] expanded $Q_r$ and $U_r$ in terms of tensor spherical harmonics. To calculate the two-point correlation functions between $T$, $Q_r$, and $U_r$, they chose one point to be at the north pole and the other on the $\phi=0$ longitude, and argued that the correlation functions depend only on the angular separation of the two points. Then, they had to evaluate the asymptotic forms for the tensor spherical harmonics at the north pole. Their results are simply equal to the above correlation functions without the phase angles. Here, using the compact generalized addition theorem (\[addition\]), we have given a general and straightforward way of obtaining the correlation functions. In addition, the phase information is retained. We will see in Sec. \[estimator\] and Sec. \[like\] that these phase angles can be easily removed or evaluated when taking experimental data.
Therefore, the statistics of the CMB anisotropy and polarization is fully described by four independent power spectra $(C_{Tl}, C_{El}, C_{Bl}, C_{Cl})$ or their corresponding correlation functions. Here, we outline how to evaluate the spectra. The details can be found in Refs. [@sel2; @kam2]. Since the four spectra are rotationally invariant, it suffices to consider the contribution from a single $\hat k$-mode of the perturbation, and then integrate over all the modes. In particular, the calculation will be greatly simplified if we choose $\hat k=\hat e_3$. For scalar perturbations, the contribution of the $\hat k$-mode to $(T(\hat n),Q(\hat n),U(\hat n))$ is $(\Delta^{(S)}_T,\Delta^{(S)}_P,0)$. For tensor perturbations, the contribution is $$\left(\begin{array}{c} (1-\cos^2\theta)\cos2\phi\,\Delta^{(T)}_T\\ (1+\cos^2\theta)\cos2\phi\,\Delta^{(T)}_P\\ 2\cos\theta\sin2\phi\,\Delta^{(T)}_P \end{array} \right),$$ for the $+$-mode. The $\times$-mode contribution is obtained by making the replacements, $\cos2\phi\to \sin2\phi$ and $\sin2\phi\to -\cos2\phi$. The quantities $\Delta$’s are then computed by solving the Boltzmann hierarchy equations or by the line-of-sight integration method [@zal2]. In the following sections, we will use the CMBFAST Boltzmann code [@zal2] to evaluate all $C_{Xl}$’s. Window Function {#window} =============== Due to the finite beam size of the antenna, any information on angular scales less than about the beam width is smeared out. This effect can be approximated by a Gaussian response function, $$dR(\beta,\alpha)=\frac{\beta\,d\beta\,d\alpha}{2\pi\sigma_b^2}\; e^{-\frac{\beta^2}{2\sigma_b^2}},$$ where $\sigma_b$, much less than $1$, is the Gaussian beam width of the antenna, and $\beta$ and $\alpha$ are spherical polar angles with respect to a polar axis along the direction $\hat n(\theta,\phi)$.
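For reference, the Gaussian width $\sigma_b$ used here is related to the more commonly quoted full width at half maximum of the beam by the standard identity (not stated explicitly in the text):

```latex
\theta_{\rm FWHM} = \sqrt{8\ln 2}\;\sigma_b \simeq 2.355\,\sigma_b .
```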
Therefore, a measurement can be represented as a convolution of the response function and the expected Stokes parameters, $$\int dR(\beta,\alpha) X(\theta',\phi'),$$ where $X$ denotes $T$, or $Q\pm iU$. This can be accounted for by a mapping of the harmonics in Eq. (\[expand\]), $$\:_{s}Y_{lm}(\theta,\phi) \to \int dR(\beta,\alpha) \:_{s}Y_{lm}(\theta',\phi'). \label{mapping}$$ From Eq. (\[addition\]), we have $$\:_{s}Y_{lm}(\theta',\phi')=\sqrt{\frac{4\pi}{2l+1}} \sum_{m'}\:_{s}Y_{lm'}(\beta,\alpha)\,e^{is\gamma}\:_{-m'}Y_{lm}(\theta,\phi).$$ Therefore, the convolution involves the integral, $$\sqrt{\frac{4\pi}{2l+1}} \int dR(\beta,\alpha) \:_{s}Y_{lm'}(\beta,\alpha)\,e^{is\gamma}. \label{integral}$$ Making the approximation that $\alpha\simeq \gamma$ for $\sigma_b<<1$ and using the explicit expression (\[s-har\]), the integral (\[integral\]) has a series solution as $$\begin{aligned} &&(-1)^s \left\{1-\left[(l-s)(l+s)+l\right]\left(\frac{\sigma_b^2}{2}\right) +\left[\frac{1}{2}(l-s)(l-s-1)(l+s)(l+s-1)\right.\right. \nonumber \\ &&\left.\left.-(l-s)(l+s)\left(-2l+\frac{4}{3}\right) +2\left(-\frac{l}{6}+\frac{l^{2}}{2}\right)\right] \left(\frac{\sigma_b^2}{2}\right)^{2}+\;...\right\} \delta_{-m',s}\nonumber\\ &\simeq&\; (-1)^s \exp\left[-\left(l(l+1)-s^2\right)\frac{\sigma_b^2}{2}\right]\;\delta_{-m',s}.\end{aligned}$$ Hence, the mapping (\[mapping\]) is approximated by $$\:_{s}Y_{lm}(\theta,\phi)\to (-1)^s \:_{s}W_l^{1\over 2} \:_{s}Y_{lm}(\theta,\phi),$$ where $\:_{s}W_l$ is the window function, $$\:_{s}W_l= \exp\left[-\left(l(l+1)-s^2\right)\sigma_b^2\right]. \label{swl}$$ When $s=0$, it reduces to the usual window function in the anisotropy case, $$\:_{0}W_l\equiv W_l=\exp[-l(l+1)\sigma_b^2].$$ The approximation $\:_{s}W_l\simeq \exp[-l^2\sigma_b^2]$ works very well for high $l$’s. Instrumental Noise {#noise} ================== In the CMB experiment, a pixelized map of the CMB smoothed with a Gaussian beam is created.
In each pixel, the signal has a contribution from the CMB and from the instrumental noise. A convenient way of describing the amount of instrumental noise is to specify the rms noise per pixel $\sigma_{\rm pix}$, which depends on the detector sensitivity $s$ and the time spent observing each pixel $t_{\rm pix}$: $\sigma_{\rm pix}=s/\sqrt{t_{\rm pix}}$. The noise in each pixel is uncorrelated with that in any other pixel, and is uncorrelated with the CMB component. Let $\Omega_{\rm pix}$ be the solid angle subtended by a pixel. Usually, given a total observing time, $t_{\rm pix}$ is directly proportional to $\Omega_{\rm pix}$. Thus, we can define a quantity $w^{-1}$, the inverse statistical weights per unit solid angle, to measure the experimental sensitivity independent of pixel size [@kno]: $$w^{-1}=\Omega_{\rm pix} \sigma_{\rm pix}^2.$$ Let us simulate the instrumental noise with a background of white noise superposed upon the microwave sky. The statistics of the white noise is completely determined by $$\begin{aligned} \left<a^{N\;*}_{T,l'm'}a^N_{T,lm}\right> &=&w_T^{-1}\delta_{l'l}\delta_{m'm}, \nonumber \\ \left<a^{N\;*}_{2,l'm'}a^N_{2,lm}\right> &=&2w_P^{-1}\delta_{l'l}\delta_{m'm}, \nonumber \\ \left<a^{N\;*}_{-2,l'm'}a^N_{-2,lm}\right> &=&2w_P^{-1}\delta_{l'l}\delta_{m'm}, \nonumber \\ \left<a^{N\;*}_{T,l'm'}a^N_{\pm2,lm}\right> &=&\left<a^{N\;*}_{2,l'm'}a^N_{-2,lm}\right>=0, \label{Naa}\end{aligned}$$ where the label $N$ stands for noise, and $w_T^{-1}$ and $w_P^{-1}$ are constants to be determined. Then, the two-point correlation functions are $$\begin{aligned} &&\left<T^N(\hat n')T^N(\hat n)\right> =\sum_l\frac{2l+1}{4\pi} w_T^{-1} W_l P_l(\cos\beta), \nonumber \\ &&\left<[Q^N(\hat n')+iU^N(\hat n')]^*[Q^N(\hat n)+iU^N(\hat n)]\right> =\sum_{l}\sqrt{\frac{2l+1}{4\pi}} 2w_P^{-1} \:_{2}W_l \:_{2}Y_{l-2}(\beta,0)e^{2i(\alpha-\gamma)}, \nonumber \\ &&\left<[Q^N(\hat n')-iU^N(\hat n')]^*[Q^N(\hat n)+iU^N(\hat n)]\right> =0.
\label{Ncf}\end{aligned}$$ Defining $\sigma^T$ and $\sigma^P$ to be the rms anisotropy and polarization noise levels respectively, for small beam width we obtain from Eqs. (\[swl\],\[Ncf\]) that $$\begin{aligned} &&(\sigma^T)^2\equiv \left<{T^N}^2\right>=\frac{w_T^{-1}}{4\pi\sigma_b^2}, \nonumber \\ &&(\sigma^P)^2\equiv \left<{Q^N}^2\right>=\left<{U^N}^2\right>= \frac{w_P^{-1}}{4\pi\sigma_b^2}.\end{aligned}$$ Therefore, if we assume $\Omega_{\rm pix}=4\pi\sigma_b^2$, then the variances equal the pixel noise variances, and $w_{T,P}^{-1}$ are the inverse statistical weights per unit solid angle: $$\sigma^{T,P}=\sigma^{T,P}_{\rm pix},\quad w_{T,P}^{-1}=\Omega_{\rm pix}\left(\sigma^{T,P}_{\rm pix}\right)^2.$$ If both anisotropy and polarization are obtained from the same experiment by adding and subtracting the two orthogonal linear polarization states with equal integration times, then $$\left(\sigma^T_{\rm pix}\right)^2={1\over 2}\left(\sigma^P_{\rm pix}\right)^2.$$ If they are from different maps, the noise is uncorrelated. Full-sky Correlation Function Estimators {#estimator} ======================================== The CMB map is inevitably contaminated by instrumental noise and other known or unresolved foreground sources. However, the foreground contamination can be removed by observing the CMB at multiple frequencies and detecting its unique spectral dependence. After the removal of foreground contamination, the microwave map (denoted by label $M$) is made of the genuine CMB and instrumental noise: $$a^M_{T,lm}=a_{T,lm}+a^N_{T,lm},\quad a^M_{\pm 2,lm}=a_{\pm 2,lm}+a^N_{\pm 2,lm}.$$ Thus the statistics of the noisy CMB map are induced from those of the CMB in Eq. (\[CMBaa\]) and of the noise in Eq. (\[Naa\]). Again note that the noise is uncorrelated with the CMB signal, i.e. $\left<a^N a\right>=0$. Now we are going to construct the full-sky averaged correlation function estimators.
Let us begin by taking an average of a product of two spherical harmonics over the whole sky, $$\begin{aligned} \left\{Y^*_{l'm'}(\hat n')Y_{lm}(\hat n)\right\}_S&\equiv& \int d\Omega' d\Omega \,Y^*_{l'm'}(\hat n')Y_{lm}(\hat n) \nonumber \\ &=& \frac{1}{4\pi} P_l(\cos\beta) \delta_{l'l} \delta_{m'm}, \label{YY}\end{aligned}$$ where the curly brackets $\{\}_S$ denote a full-sky averaging at a fixed separation angle $\beta$. The sky averaging can be done easily using Eqs. (\[addition\],\[ortho\]). We first transform $Y_{l'm'}(\hat n')$ defined by a spherical coordinate system $\hat e_3$ to a new coordinate system $\hat n$, and then perform the azimuthal integration by rotating the transformed $\hat n'$ about $\hat n$ with a fixed separation angle $\beta$. Finally, the remaining product of spherical harmonics with angle variables $\hat n$ is integrated over the whole sky. To generalize the averaging procedure to spin-$s$ spherical harmonics, some complications have to be taken into account. As we have seen in Eqs. (\[cc\],\[c+\],\[c-\]), products of higher-spin harmonics depend explicitly on local angles. Therefore, we define the full-sky averaging as $$\begin{aligned} \left\{\:_{s_1}Y^*_{l'm'}(\hat n')\:_{s_2}Y_{lm}(\hat n)\right\}_S&\equiv& \int d\Omega' d\Omega \:_{s_1}Y^*_{l'm'}(\hat n')\:_{s_2}Y_{lm}(\hat n) e^{i(s_1\gamma - s_2\alpha)} \nonumber \\ &=&\sqrt{\frac{1}{4\pi(2l+1)}}\:_{s_1}Y_{l-s_2}(\beta,0) \delta_{l'l}\delta_{m'm}. \label{sky}\end{aligned}$$ Obviously, when $s_1=s_2=0$, it reduces to Eq. (\[YY\]).
We define four full-sky averaged correlation function estimators, $$\begin{aligned} {\cal C}_T(\beta)&\equiv& \left\{T^{M*}(\hat n') T^M(\hat n)\right\}_S -\left<\left\{T^{N*}(\hat n') T^N(\hat n)\right\}_S\right> \nonumber \\ &=&\sum_l\frac{2l+1}{4\pi}\left({\cal C}^M_{Tl} -w_T^{-1}\right) W_l P_l(\cos\beta), \nonumber \\ {\cal C}_C(\beta)&\equiv& {1\over 2}\left\{T^{M*}(\hat n') [Q^M(\hat n)+iU^M(\hat n)]+{\rm h.c.} \right\}_S \nonumber \\ &=&-\sum_l\frac{2l+1}{4\pi}\sqrt{\frac{(l-2)!}{(l+2)!}} {1\over 2}\left({\cal C}^M_{Cl}+{\cal C}^{M*}_{Cl}\right) W_l^{1\over 2} \:_{2}W_l^{1\over 2} P^2_l(\cos\beta), \nonumber \\ {\cal C}_+(\beta)&\equiv& \left\{[Q^M(\hat n')+iU^M(\hat n')]^* [Q^M(\hat n)+iU^M(\hat n)]\right\}_S - \left<\left\{[Q^N(\hat n')+iU^N(\hat n')]^* [Q^N(\hat n)+iU^N(\hat n)]\right\}_S\right> \nonumber \\ &=&\sum_{l}\sqrt{\frac{2l+1}{4\pi}}\left({\cal C}^M_{+l} -2w_P^{-1}\right) \:_{2}W_l \:_{2}Y_{l-2}(\beta,0), \nonumber \\ {\cal C}_-(\beta)&\equiv& {1\over 2}\left\{[Q^M(\hat n')- iU^M(\hat n')]^* [Q^M(\hat n)+iU^M(\hat n)] + {\rm h.c.} \right\}_S \nonumber \\ &=&\sum_{l}\sqrt{\frac{2l+1}{4\pi}}{1\over 2}\left({\cal C}^M_{-l} +{\cal C}^{M*}_{-l}\right) \:_{2}W_l \:_{2}Y_{l2}(\beta,0), \label{calC}\end{aligned}$$ where $$\begin{aligned} {\cal C}^M_{Tl}&\equiv&\frac{1}{2l+1}\sum_{m}a^{M*}_{T,lm}a^M_{T,lm}, \nonumber \\ {\cal C}^M_{Cl}&\equiv&-\frac{1}{2l+1}\sum_{m}a^{M*}_{T,lm}a^M_{2,lm}, \nonumber \\ {\cal C}^M_{\pm l}&\equiv&\frac{1}{2l+1}\sum_{m}a^{M*}_{\pm 2,lm}a^M_{2,lm}.\end{aligned}$$ The ensemble mean of each estimator is $$\begin{aligned} \left<{\cal C}_T(\beta)\right>&=& \sum_l\frac{2l+1}{4\pi}C_{Tl} W_l P_l(\cos\beta), \nonumber\\ \left<{\cal C}_C(\beta)\right>&=& -\sum_l\frac{2l+1}{4\pi}\sqrt{\frac{(l-2)!}{(l+2)!}}C_{Cl} W_l^{1\over 2} \:_{2}W_l^{1\over 2} P^2_l(\cos\beta),\nonumber \\ \left<{\cal C}_{\pm}(\beta)\right>&=& \sum_{l}\sqrt{\frac{2l+1}{4\pi}}(C_{El}\pm C_{Bl}) \:_{2}W_l \:_{2}Y_{l\mp2}(\beta,0). 
\label{mean}\end{aligned}$$ And the covariance matrix can be constructed as $${\bf M}_{X'Y}\equiv \left<[{\cal C}_X(\beta')-\left<{\cal C}_X(\beta')\right>] [{\cal C}_Y(\beta) - \left<{\cal C}_Y(\beta)\right>]\right>, \label{cm}$$ where $X,Y=T,C,+,-$. Here the prime denotes a different separation angle. The diagonal entries are given by $$\begin{aligned} {\bf M}_{T'T}&=& \frac{1}{8\pi^2}\sum_l (2l+1)\left(C_{Tl}+w_T^{-1}\right)^2 W_l^2 P_l(\cos\beta') P_l(\cos\beta), \nonumber \\ {\bf M}_{C'C}&=& \frac{1}{4\pi}\sum_l \frac{2l+1}{4\pi} \frac{(l-2)!}{(l+2)!} \left[C_{Cl}^2+\left(C_{Tl}+w_T^{-1}\right) \left(C_{El}+w_P^{-1}\right)\right] W_l \:_{2}W_l P^2_l(\cos\beta') P^2_l(\cos\beta),\nonumber \\ {\bf M}_{+'+}&=& \frac{1}{2\pi}\sum_l \left[C_{El}^2+C_{Bl}^2+2 \left(w_P^{-1}\right)^2 + 2\left(C_{El}+C_{Bl}\right) w_P^{-1} \right] \:_{2}W_l^2 \:_{2}Y_{l-2}(\beta',0) \:_{2}Y_{l-2}(\beta,0),\nonumber\\ {\bf M}_{-'-}&=& \frac{1}{2\pi}\sum_l \left[C_{El}^2+C_{Bl}^2+2 \left(w_P^{-1}\right)^2 + 2\left(C_{El}+C_{Bl}\right) w_P^{-1} \right] \:_{2}W_l^2 \:_{2}Y_{l2}(\beta',0) \:_{2}Y_{l2}(\beta,0). \label{variance}\end{aligned}$$ The off-diagonal entries are similarly calculated. In particular, the off-diagonal term of the submatrix ${\bf M}_{X'Y}$ ($X,Y= +,-$) is $${\bf M}_{+'-}= \frac{1}{2\pi}\sum_l \left[C_{El}^2-C_{Bl}^2+ 2\left(C_{El}-C_{Bl}\right) w_P^{-1} \right] \:_{2}W_l^2 \:_{2}Y_{l-2}(\beta',0) \:_{2}Y_{l2}(\beta,0).$$ In practical situations, a galaxy-cut on the CMB map is necessary due to radiation pollution along the galactic plane, and due to limited observation time, usually only a fraction of the sky would be sampled. For instance, the effective CMB coverage of the [*COBE*]{} DMR is $4\pi f$, where $f\simeq 2/3$. This incomplete sky coverage would generally induce a sample variance, whose size depends both on the experimental sampling strategy and the underlying power spectra of the fluctuations. 
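Sums of this Legendre type, such as the ensemble mean $\left<{\cal C}_T(\beta)\right>$ in Eq. (\[mean\]), can be evaluated with a standard Legendre routine. A minimal sketch (the function name is ours; `C_Tl` stands for a hypothetical array of $C_{Tl}$ values indexed from $l=0$):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def mean_corr_T(beta, C_Tl, sigma_b):
    """<C_T(beta)> = sum_l (2l+1)/(4 pi) C_Tl W_l P_l(cos beta),
    with C_Tl indexed from l = 0 and W_l = exp[-l(l+1) sigma_b^2]."""
    ls = np.arange(len(C_Tl), dtype=float)
    W = np.exp(-ls * (ls + 1.0) * sigma_b**2)
    coeffs = (2.0 * ls + 1.0) / (4.0 * np.pi) * np.asarray(C_Tl) * W
    return legval(np.cos(beta), coeffs)   # evaluates sum_l coeffs[l] P_l(x)
```

The covariance sums in Eq. (\[variance\]) have the same structure, with the coefficients replaced by the noise-biased power spectrum combinations.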
It was found that the covariances calculated above scale roughly with sky coverage as $f^{-1}$ for small-scale experiments [@sco]. For large-scale experiments such as the [*COBE*]{} DMR, they scale roughly as $$0.446+0.542f^{-1}-0.0079f^{-2},$$ valid for $f^{-1}<15$ [@ng]. The difference from the $f^{-1}$ scaling is mainly due to the large correlation angle in large-scale experiments. Correlation Measurements in Future Missions {#result} =========================================== The MAP and Planck missions plan to measure all-sky CMB anisotropy and polarization. It has been discussed how to construct the optimal estimators for the power spectra corrected for noise bias, and their corresponding variances from the all-sky map [@sel2; @kam2; @hu2]. To estimate the level of signal and noise, we hereby give an alternative real-space analysis, evaluating the ensemble means and variances of the full-sky averaged correlation function estimators for the MAP and Planck configurations, i.e. $$C_X(\theta)\equiv\left<{\cal C}_X\right>,\quad \Delta C_X (\theta)\equiv {\bf M}_{XX}^{1\over 2},$$ which are respectively given by Eq. (\[mean\]), and Eq. (\[variance\]) with $\theta'=\theta$. We assume the standard cold dark matter (sCDM) model: $\Omega_0=1$, $h=0.5$, $\Omega_B h^2=0.0125$, and no reionization after the hydrogen recombination. Two extreme cases are evaluated: $T/S=0$ and $T/S=1$, where $T$ and $S$ are the anisotropy quadrupole moments induced respectively by tensor and scalar perturbations. All power spectra are computed by the CMBFAST code. The recursion relation (\[recursion\]) has been used for evaluating the spherical harmonics in the correlation functions. The results are plotted in Figs. \[fig2\]-\[fig5\], which respectively show $C_X(\theta)$ together with its variance $\Delta C_X (\theta)$, where $X=T,+,-,C$. In making the plots, we have used beam width $\theta_{FWHM}=0.5^o$, where $\sigma_b=0.425\times \theta_{FWHM}$.
Typical values of the experimental sensitivity for MAP are $w_{T}^{-1}=(0.1{\mu K})^2$ and $w_{P}^{-1}=(0.15{\mu K})^2$, while for Planck they are about a factor of 100 smaller. In Figs. \[fig2\]-\[fig5\], the thick and thin solid lines represent the cases with $T/S=0$ and $T/S=1$ respectively. In each case, the ensemble average is denoted by a middle line sandwiched between two pairs of $1 \sigma$ lines. The outer pair of lines is for the MAP satellite while the inner pair is for the Planck Surveyor. Note that in Fig. \[fig2\] the two pairs of $1 \sigma$ lines merge into a single pair, which means that the error is dominated by cosmic variance rather than instrumental noise. The theoretical expectation of the rms polarization signal in sCDM models, $[C_+(0)]^{1/2}$, is at a level of $1 \mu K$. For the MAP experiment, the polarization signal to noise ratio S/N is about $1-2$. The S/N ratio of the anisotropy-polarization correlation $C_C(\theta)$ is about $3-4$ at $\theta\simeq 1.3^o$, and the absence of a tensor mode makes the cross-correlation significantly negative on few-degree scales. For Planck the corresponding S/N ratios are much higher. MAP would likely detect the anisotropy-polarization correlation, which however is not sensitive to $C_{El}$ or $C_{Bl}$. The detection of the electric and magnetic components would require the Planck satellite. Another space mission being planned is the Sky Polarization Observatory (SPOrt) on board the International Space Station during the early space station utilization period (2001-2004) [@cor]. Its scope is to measure the polarization of the diffuse sky background radiation at an angular scale of $7^o$ over a large sky coverage with four frequency channels between 20 GHz and 70 GHz. The experimental sensitivity is expected to be comparable to that of MAP.
Again, we evaluated the ensemble means and variances of the full-sky averaged correlation functions, but in reionized sCDM models with reionization redshifts $z_{\rm ri}=20$ and $50$. The results are plotted in Figs. \[fig6\]-\[fig9\]. The expected rms polarization S/N $\sim 1-3$ for $20<z_{\rm ri}<50$, while the anisotropy-polarization S/N $\sim 1-2$ at $\theta\sim 20^o$. A near-term, ground-based polarization experiment, called POLAR, is to measure CMB polarization at $7^o$ scales for 36 pixels [@kea]. Reaching a signal level of $1 \mu K$ for a single pixel requires an integration time of about 120 hours and low-noise HEMT amplifiers with a noise temperature of about $10 K$. The expected S/N ratio of the rms polarization is $1-2$ for reionized sCDM models with reionization redshifts $45<z_{\rm ri}<105$ [@kea], whereas the anisotropy-polarization correlation would be dominated by noise [@cri1; @cri2; @ng4]. Likelihood Functions {#like} ==================== The most straightforward way to obtain the power spectra from the measured Stokes parameters is to perform a maximum likelihood analysis of the data. All of the information in the measurement is encoded in the likelihood function, which can properly take into account non-uniform detector noise and sample variance. This is a particular advantage for ground-based experiments, which track tens or hundreds of spots in the sky to measure $Q$ and $U$. The method offers a simple test of the consistency of the power spectra from map to map, and of the correlation between maps. In fact, this method has been employed by the [*COBE*]{}/DMR team to determine the anisotropy quadrupole normalization from the two-point functions of the 4-year anisotropy maps containing about 4000 pixels [@hin]. However, for all-sky coverage in satellite experiments, especially small-scale measurements, the large amount of data involved in the computation makes the analysis inefficient.
This problem may be reduced by using filtering and compression as in the case of anisotropy data. For a small number of measurements, such as the ongoing polarization experiment POLAR, which measures $Q$ and $U$ by observing an annulus of $36$ regularly spaced spots at constant declination [@kea], the data set can be arranged as $${\bf D}=(Q_i+iU_i,Q_j-iU_j),$$ where $i,j=1,..,36$. Since all data points lie on the same latitude, the expected theoretical correlation functions ${C_{\pm}}_{ij}$ in this case are given respectively by Eqs. (\[c+\],\[c-\]) with $\alpha+\gamma=\pi$, i.e. $$\begin{aligned} {C_+}_{ij}&=&\sum_{l}\sqrt{\frac{2l+1}{4\pi}}(C_{El}+C_{Bl}) \:_{2}W_l \:_{2}Y_{l-2}(\theta_{ij},0) e^{4i\alpha_{ij}}, \nonumber \\ {C_-}_{ij}&=&\sum_{l}\sqrt{\frac{2l+1}{4\pi}}(C_{El}-C_{Bl}) \:_{2}W_l \:_{2}Y_{l2}(\theta_{ij},0),\end{aligned}$$ where $\:_{2}W_l$ is the window function (\[swl\]) with a beam width appropriate to the experiment, $\theta_{ij}$ is the separation angle between the $i$th and $j$th spots, and $\alpha_{ij}$ (which is a geometric function of $\theta_{ij}$) is the angle between the longitude at the $i$th spot and the great arc connecting the $i$th and $j$th spots (refer to Fig. \[fig1\]). Thus we construct the likelihood function as $${\cal L}(C_{E2},C_{B2})=\frac{1}{\sqrt {{\rm det}{\bf C}}} \exp\left[-{1\over 2}{\bf D}{\bf C}^{-1}{\bf D}^\dagger\right],$$ where the correlation matrix is $${\bf C}= \left( \begin{array}{cc} {C_+}_{ij} + N_{ij}& {C_-}_{ij}\\ {C_-}_{ij}& {C_+}_{ij}^* + N_{ij} \end{array} \right),$$ and $N_{ij}=2(\sigma^P_i)^2\delta_{ij}$ is the noise correlation matrix. The most likely electric and magnetic quadrupoles are then determined by maximizing the likelihood function over the theories.
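Numerically, evaluating a likelihood of this Gaussian form reduces to a complex quadratic form plus a log-determinant. A minimal sketch (working with the log-likelihood for numerical stability; the function name is ours):

```python
import numpy as np

def log_likelihood(D, C):
    """log L = -(1/2) [ D C^{-1} D^dagger + log det C ], up to an additive constant.
    D: complex data vector; C: Hermitian, positive-definite correlation matrix
    (signal plus noise)."""
    sign, logdet = np.linalg.slogdet(C)   # det C is real and positive here
    chi2 = np.real(D @ np.linalg.solve(C, np.conj(D)))
    return -0.5 * (chi2 + np.real(logdet))

# Toy check: pure unit noise and a unit data vector.
C = np.eye(2, dtype=complex)
D = np.array([1.0 + 0.0j, 1.0 + 0.0j])
```

Using `solve` instead of an explicit matrix inverse is the standard choice for quadratic forms of this kind.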
For all-sky measurements, the full likelihood function can be constructed as $${\cal L}(C_{T2},C_{C2},C_{E2},C_{B2})=\frac{1}{\sqrt{{\rm det}{\bf M}}} \exp\left[-{1\over 2}{\bf\Delta C}{\bf M}^{-1}{\bf\Delta C}^T\right],$$ where ${\bf M}$ is the covariance matrix of the full-sky averaged correlation functions whose entries are given by Eq. (\[cm\]), and ${\bf\Delta C}$ is a row vector with entries $$\Delta C_X(\theta)\equiv C_X(\theta)_{\rm measured} - \left<{{\cal C}_X(\theta)}_{w_T^{-1}=w_P^{-1}=0}\right>,$$ where the first term is the two-point correlation function in the sky map obtained by performing the full-sky-averaging (\[sky\]) of products of all map-measured Stokes parameters with a fixed angular separation $\theta$, and the second term is calculated from the ensemble mean of the corresponding operator listed in Eq. (\[calC\]) without subtracting off the noise, i.e. setting $w_T^{-1}=w_P^{-1}=0$. The tensor contribution can be analyzed by maximizing the likelihood function with the covariance submatrix ${\bf M}_{X'Y}$, where $X,Y=+,-$. A central value of $C_{B2}$ in a confidence-level plot that deviates significantly from zero would indicate the presence of a tensor mode. Conclusions {#conclusion} =========== It is known that the two-point correlation functions of the Stokes parameters are explicitly dependent on coordinates. A way of removing this dependence is to expand the Stokes parameters in terms of spin-weighted spherical harmonics, and to construct optimal angular power spectrum estimators. Although useful for all-sky satellite experiments, this approach is not suitable for near-term, ground-based polarization experiments. For a small number of observation points, the simplest way to compare data with theory is to perform a likelihood analysis with a correlation function matrix. Further, a likelihood analysis of a full-sky map using correlation functions is a challenge for the development of computational algorithms.
Several authors suggested obtaining coordinate-independent correlation functions by measuring $Q$ and $U$ with respect to axes which are parallel and perpendicular to the great arc connecting the two points being correlated [@kam2]. Here we gave the most general calculation of the two-point correlation functions of the Stokes parameters in terms of spin-weighted spherical harmonics, including the window function and instrumental noise. We obtained simple forms, though they still depend explicitly on coordinates. However, the coordinate dependence can be eliminated by averaging over the whole sky, and the averaged correlation functions can be used to construct the covariance matrix in likelihood analysis of future CMB satellite data. Moreover, in ground-based polarization experiments, if a correct scanning topology is selected, the coordinate-dependence can be eliminated or simplified in some way and the correlation functions can be directly put in the correlation matrix of the likelihood function. Furthermore, we have calculated the signal to noise ratios from the two-point correlation functions for future anisotropy and polarization experiments. It is likely that MAP will detect the first anisotropy-polarization correlation signal. In fact, to complement the MAP measurement, a small-angle ground-based polarization experiment, targeting a higher signal to noise for the rms polarization, should be performed so as to cross-correlate with the MAP high-precision anisotropy map. Surely, measurements of the microwave sky by Planck will push cosmology into a new epoch, as both CMB anisotropy and polarization can be precisely measured. On the other hand, the limit on the Compton-$y$ parameter of Sunyaev-Zel’dovich distortion, $|y|<15\times 10^{-6}$, from FIRAS data [@mat] constrains the reionization redshift $z_{\rm ri}< 50$ [@bal].
This constraint is consistent with theoretical CDM model calculations [@teg], which predict an occurrence of reionization at a redshift $30<z_{\rm ri}<70$, and most likely at $z_{\rm ri}\sim 50$. So, it is probable that SPOrt/ISS would observe a polarization signal. However, if large-scale polarization is not detected, then this would have an impact on cosmological theories. This work was supported in part by the R.O.C. NSC Grant No. NSC87-2112-M-001-039. G. F. Smoot [*et al.*]{}, Astrophys. J. [**369**]{}, L1 (1992). L. A. Page, Proc. of the 3rd Int. School of Particle Astrophysics, Erice, Sicily (1996), astro-ph/9703054. G. Jungman, M. Kamionkowski, A. Kosowsky, and D. N. Spergel, Phys. Rev. D [**54**]{}, 1332 (1996). M. J. Rees, Astrophys. J. [**153**]{}, L1 (1968). K. L. Ng and K.-W. Ng, Astrophys. J. [**456**]{}, 413 (1996). R. K. Sachs and A. M. Wolfe, Astrophys. J. [**147**]{}, 73 (1967). J. R. Bond and G. Efstathiou, Astrophys. J. [**285**]{}, L45 (1984); Mon. Not. R. Astr. Soc. [**226**]{}, 655 (1987). M. Zaldarriaga, Phys. Rev. D [**55**]{}, 1822 (1997). D. Coulson, R. G. Crittenden, and N. G. Turok, Phys. Rev. Lett. [**73**]{}, 2390 (1994). R. G. Crittenden, D. Coulson, and N. G. Turok, Phys. Rev. D [**52**]{}, R5402 (1995). K. L. Ng and K.-W. Ng, Astrophys. J. [**473**]{}, 573 (1996). U. Seljak and M. Zaldarriaga, Phys. Rev. Lett. [**78**]{}, 2054 (1997). M. Kamionkowski, A. Kosowsky, and A. Stebbins, Phys. Rev. Lett. [**78**]{}, 2058 (1997). M. Zaldarriaga and U. Seljak, Phys. Rev. D [**55**]{}, 1830 (1997). M. Kamionkowski, A. Kosowsky, and A. Stebbins, Phys. Rev. D [**55**]{}, 7368 (1997). U. Seljak, Astrophys. J. (to be published), astro-ph/9608131. W. Hu and M. White, Phys. Rev. D [**56**]{}, 596 (1997). U. Seljak, U.-L. Pen, and N. Turok, Phys. Rev. Lett. [**79**]{}, 1615 (1997). B. W. Tolman and R. A. Matzner, Proc. R. Soc. Lond. A [**392**]{}, 391 (1984). W. Hu and M. White, New Astronomy (to be published), astro-ph/9706147. W. Hu, U. Seljak, M.
White, and M. Zaldarriaga, astro-ph/9709066. P. Lubin, P. Melese, and G. Smoot, Astrophys. J. [**273**]{}, L51 (1983). R. B. Partridge, J. Nowakowksi, and H. M. Martin, Nature (London) [**331**]{}, 146 (1988); E. J. Wollack [*et al.*]{}, Astrophys. J. [**419**]{}, L49 (1993). C. B. Netterfield [*et al.*]{}, Astrophys. J. [**474**]{}, L69 (1995). M. Kamionkowski and A. Kosowsky, astro-ph/9705219. U. Seljak and M. Zaldarriaga, Astrophys. J. [**469**]{}, 437 (1996). S. Chandrasekhar, [*Radiative Transfer*]{} (Dover, New York, 1960). E. Newman and R. Penrose, J. Math. Phys. [**7**]{}, 863 (1966); J. N. Goldberg [*et al.*]{}, [*ibid.*]{} [**8**]{}, 2155 (1967). R. Penrose and W. Rindler, [*Spinors and Space-time*]{}, Chapter 4 (Cambridge Univ. Press, 1984). A. Melchiorri and N. Vittorio, Proc. of NATO Advanced Study Institute 1996, astro-ph/9610029. L. Knox, Phys. Rev. D [**52**]{}, 4307 (1995). D. Scott, M. Srednicki, and M. White, Astrophys. J. [**421**]{}, L5 (1994). K.-W. Ng, Int. J. Mod. Phys. D (to be published). S. Cortiglioni, private communication. B. Keating, P. Timbie, A. Polnarev, and J. Steinberger, Astrophys. J. (to be published), astro-ph/9710087. G. Hinshaw [*et al.*]{}, astro-ph/9601061. J. C. Mather [*et al.*]{}, Astrophys. J. [**420**]{}, 440 (1994). E. A. Baltz, N. Y. Gnedin, and J. Silk, Astrophys. J. [**493**]{}, L1 (1998). M. Tegmark, J. Silk, and A. Blanchard, Astrophys. J. [**420**]{}, 484 (1994); M. Fukugita and M. Kawasaki, Mon. Not. R. Astr. Soc. [**269**]{}, 563 (1994); A. Liddle and D. Lyth, Mon. Not. R. Astr. Soc. [**273**]{}, 1177 (1995). 
[l]{}\ $ \:_{1}Y_{2\pm2}=\pm{1\over4} \sqrt{\frac{5}{\pi}}(1\mp\cos{\theta}) \sin{\theta}\,e^{\pm2i\phi} $\ $ \:_{1}Y_{3\pm2}={1\over8}\sqrt{\frac{35}{2\pi}}\left[2\sin^3{\theta}- \sin{\theta}(1\pm\cos{\theta})^2\right]e^{\pm2i\phi} $\ $ \:_{1}Y_{4\pm2}=\pm{3\over16}\sqrt{\frac{1}{2\pi}}\left[3\sin{\theta} (1\pm\cos{\theta})^3-5(1\mp 5\cos{\theta})\sin^3{\theta} \right]e^{\pm2i\phi} $\ $ \:_{2}Y_{2\pm2}=\frac{1}{8}\sqrt{\frac{5}{\pi}}(1\mp\cos{\theta})^2 e^{\pm2i\phi} $\ $ \:_{2}Y_{3\pm2}=\pm{1\over16}\sqrt{\frac{7}{\pi}}\left[ -(1\mp\cos{\theta})^3 +5(1\mp\cos{\theta})\sin^2{\theta}\right]e^{\pm2i\phi} $\ $ \:_{2}Y_{4\pm2}={3\over32}\sqrt{\frac{1}{\pi}}\left[ (1\mp\cos{\theta})^4-12(1\mp\cos{\theta})^2\sin^2{\theta} +30\sin^4{\theta}\right]e^{\pm2i\phi} $\ \ TAB. 1. Some spin-weighted spherical harmonics with $l=2,3,4$. [^1]: nkw@phys.sinica.edu.tw [^2]: liugc@phys.sinica.edu.tw [^3]: In Ref. [@new], the sign $(-1)^m$ is absent. We have added the sign in order to match the conventional definition for $Y_{lm}$. [^4]: This theorem was first derived in Eq. (7) of Ref. [@hu1], which however does not give correct signs for the geometric phase angles, $\alpha$ and $\gamma$. Eq. (\[addition\]) will be useful in the following sections.
--- abstract: 'Firms implementing digital advertising campaigns face a complex problem in determining the right match between their advertising creatives and target audiences. Typical solutions to the problem have leveraged non-experimental methods, or used “split-testing" strategies that have not explicitly addressed the complexities induced by targeted audiences that can potentially overlap with one another. This paper presents an adaptive algorithm that addresses the problem via online experimentation. The algorithm is set up as a contextual bandit and addresses the overlap issue by partitioning the target audiences into disjoint, non-overlapping sub-populations. It learns an optimal creative display policy in the disjoint space, while assessing in parallel which creative has the best match in the space of possibly overlapping target audiences. Experiments show that the proposed method is more efficient compared to naive “split-testing” or non-adaptive “A/B/n” testing based methods. We also describe a testing product we built that uses the algorithm. The product is currently deployed on the advertising platform of `JD.com`, an eCommerce company and a publisher of digital ads in China.' author: - | Tong Geng, `JD.com`\ tong.geng@jd.com\ \ Xiliang Lin, `JD.com`\ xiliang.lin@jd.com\ \ Harikesh S. Nair, [`JD.com` and Stanford University]{}\ harikesh.nair@stanford.edu bibliography: - 'MAB.bib' title: 'Online Evaluation of Audiences for Targeted Advertising via Bandit Experiments[^1]' --- Introduction ============ A critical determinant of the success of advertising campaigns is picking the right audience to target. As digital ad-markets have matured and the ability to target advertising has improved, the range of targeting options has expanded, and the profile of possible audiences have become complex. Both advertisers and publishers now rely on data-driven methods to evaluate audiences and to find effective options with which to advertise to them. 
This paper presents a new bandit algorithm along with a product built to facilitate such evaluations via online experimentation. The problem addressed is as follows. An advertiser designing a campaign wants to pick, from a set of $\mathbb{K} = \{1,..,K\}$ possible target audiences and $\mathbb{R} = \{1,..,R\}$ creatives, a combination $k,r$ ($k\in\mathbb{K}$, $r\in\mathbb{R}$) that provides her the highest expected payoff. The target audiences can be complex, potentially overlapping with each other, and the creatives can be any type of media (picture, video, text, etc.). We would like to design an experiment to find the best creative-target audience combination while minimizing the costs of experimentation to the advertiser. Consider an archetypal experimental design in which each creative-target audience combination forms a test arm, so that the goal of the test is to discover the arm with the highest expected payoff. To implement such a design, we need to address two challenges associated with this problem. The first difficulty is the possibility of overlap in target audiences that are being compared (e.g., “San Francisco users” and “Male users”). This generates a complication in user assignment in the test because it is not obvious to which constituent arm a user belonging to an overlapping region should be assigned (e.g., should a Male user from San Francisco be assigned to the “San Francisco-creative” arm or the “Male-creative” arm?). Assigning the overlapping user to one of the constituent arms violates the representativeness of the arms (e.g., if we use a rule that Male users from San Francisco will always be assigned to the “San Francisco-creative” arm, the “Male-creative” arm will have no San Franciscans, and will not represent the distribution of Male users in the platform population).
Such assignment also under-utilizes data: though the feedback from the user is informative of all constituent arms, it is being used to learn the best creative for only one picked arm (e.g., if we assign a Male user from San Francisco to the “San Francisco-creative” arm, we do not learn from him the value of the “Male-creative” arm, even though his behavior is informative of that arm). The second difficulty is that typical “A/B/n” test designs keep the sample/traffic splits constant as the test progresses. Therefore, both good and bad creatives will be allocated the same amount of traffic during the test. Instead, as we learn during the test that an arm is not performing well, reducing its traffic allocation can reduce the cost to the advertiser of experimentation. The goal of this paper is to develop an algorithm that addresses both issues. It has two broad steps. In step one, we split the compared target audiences (henceforth “*TA*”s) into disjoint audience sub-populations (henceforth “*DA*"s), so that the set of *DA*s fully span the set of *TA*s. In step two, we train a bandit with the creatives as arms, the payoffs to the advertiser as rewards, and the *DA*s, rather than the *TA*s as the contexts. As the test progresses, we aggregate over all *DA*s that correspond to each *TA* to adaptively learn the best creative-*TA* match. In essence, we learn an optimal creative allocation policy at the disjoint sub-population level, while making progress towards the test goal at the *TA* level. Because the *DA*s have no overlap, each user can be mapped to a distinct *DA*, addressing the assignment problem. Because all *DA*s that map to a *TA* help inform the value of that *TA*, learning is also efficient. 
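The first step, mapping each arriving user to the unique *DA* defined by the exact subset of *TA*s he matches, might be sketched as follows (a toy illustration; the data structures and names are ours):

```python
def partition_into_das(users, ta_defs):
    """Assign each user to the disjoint sub-population (DA) given by the exact
    subset of target audiences (TAs) whose definitions he matches."""
    das = {}
    for user in users:
        key = tuple(name for name, matches in ta_defs.items() if matches(user))
        if key:  # users matching no TA fall outside the test
            das.setdefault(key, []).append(user)
    return das

# Two overlapping TAs span three DAs: SF-only, Male-only, and SF-and-Male.
ta_defs = {
    "SF": lambda u: u["city"] == "San Francisco",
    "Male": lambda u: u["gender"] == "male",
}
```

Aggregating the feedback from all *DA* keys containing a given *TA* name then recovers that *TA*'s value, which is the cross-audience learning described above.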
Further, tailoring the bandit’s policy to a more finely specified context $-$ i.e., the *DA* $-$ allows it to match the creative to the user’s tastes more finely, thereby improving payoffs and reducing expected regret, while delivering on the goal of assessing the best combination at the level of a more aggregated audience. The adaptive nature of the test ensures the traffic is allocated in a way that reduces the cost to the advertiser from running the test, because creatives that are learned to have low value early are allocated less traffic within each *DA* as the test progresses. The overall algorithm is implemented as a contextual Thompson Sampler (henceforth “TS”; see [@russo2018] for an overview). Increasing the overlap in the tested *TA*s increases the payoff similarity between the *TA*s, making it harder to detect separation. One worry is that the TS in such situations requires enormous amounts of data before stopping, and performance is degraded to the extent that it is practically unviable. An attractive feature of the proposed algorithm is that feedback on the performance of *DA*s helps inform the performance of all *TA*s to which they belong. This *cross-audience learning* serves as a counterbalancing force that keeps performance stable as overlap increases, preventing the sample sizes required to stop the test from growing so large as to make the algorithm impractical. In several simulations, we show the proposed TS performs well in realistic situations, including with high levels of overlap; and is competitive against benchmark methods including non-adaptive designs and “split-testing" designs currently used in industry. To illustrate real-world performance, we also discuss a case-study from a testing product on the advertising platform of `JD.com`, where the algorithm is currently deployed.
Related Work and Other Approaches ================================= [\[sec:lit-review\]]{} There is a mature literature on successful applications of bandits in web content optimization (e.g., [@AgarwalChenElango2009], [@LiChuLangfordSchapire2010], [@ChapelleLi2011], [@HauseretalBannerMorphing2015], [@AgarwaletalBanditTechDebt16]). This paper belongs to a sub-stream of this work that has focused on using bandits for controlled experiments on the web. The closest papers to our work are the complementary papers by [@scott2015multi], [@schwartzetal2017] and [@Juetal2019] who propose using bandit experiments to evaluate creatives for targeted advertising, without focusing explicitly on the problem addressed here of comparing target audiences. In industry, the popular experimental design to compare *TA*s for advertising campaigns is sometimes called “audience split-testing" (e.g., [@facebooksplit2019], [@tencentsplit2019]). Suppose there is only one creative, and $K$ *TA*s are to be compared. The audience split-testing design randomizes test users into $K$ arms, each of which is associated with the same creative, but which correspond respectively to the *K* *TA*s. Conditional on being randomized into an arm, a user is shown the creative only if his features match the arms’ *TA* definition. This ensures that the mix of overlapping and non-overlapping audiences is representative; however, the design under-utilizes the informational content of experimental traffic as there is no learning from users who are randomized into a test-arm but do not match its *TA* definition. Also, in contrast to the design proposed here, there is no cross-audience learning from overlapping users. In addition, the typical implementation of split-testing is non-adaptive, and is not cost minimizing unlike the adaptive design presented here. 
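For contrast, the audience split-testing assignment described above can be sketched in a few lines; note that a user randomized into an arm whose *TA* definition he does not match generates no impression and hence no learning (a toy sketch; function and variable names are ours):

```python
import random

def split_test_assign(user, arms, ta_defs):
    """Audience split-testing: randomize the user uniformly into one TA-arm and
    show the (common) creative only if he matches that arm's TA definition."""
    arm = random.choice(arms)      # fixed, non-adaptive traffic split
    shown = ta_defs[arm](user)     # non-matching users yield no feedback
    return arm, shown

ta_defs = {
    "SF": lambda u: u["city"] == "San Francisco",
    "Male": lambda u: u["gender"] == "male",
}
```

A user matching both *TA*s is always shown the creative, but his feedback informs only the single arm he was randomized into, which is precisely the under-utilization the disjoint-*DA* design avoids.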
A possible strategy for maintaining the representativeness of *TA*s in the test is to randomly allocate some proportion $p$ of users in each overlapping region to the *TA*s the region overlaps with. Unfortunately, no value of $p$ exists that maintains representativeness after such allocation while retaining all the data. To illustrate, suppose we have two *TA*s ($TA1$ and $TA2$) that overlap with each other, so we have three *DA*s, $DA1$, $DA2$ and $DA3$, with $DA2$ belonging to both $TA1$ and $TA2$. Suppose in the test, a representative sample of $N_{DA1}$, $N_{DA2}$, and $N_{DA3}$ users belonging to each of the three *DA*s arrive, and have to be assigned in this manner to $TA1$ and $TA2$. If we allocate proportion $p$ of users in $DA2$ to $TA1$, the proportion of $DA2$ users in $TA1$ is $P(DA2|TA1)=\frac{p \times N_{DA2}}{p \times N_{DA2} + N_{DA1}}$. However, to be representative of the population, we need this proportion to be $\frac{N_{DA2}}{N_{DA2} + N_{DA1}}$. The only value of $p$ that makes $TA1$ representative under this scheme is 1. However, when $p=1$, the proportion of $DA2$ in $TA2$ is 0, making $TA2$ under this scheme not representative of $TA2$ in the population. One can restore representativeness by dropping a randomly picked proportion $1-p$ of the $N_{DA1}$ users and $p$ of the $N_{DA3}$ users. But this involves throwing away data, and induces the same under-utilization of the informational content of experimental traffic as the “audience split-testing” design above. Method ====== Step 1: Setup ------------- We take as input into the test the $\mathbb{K} = \{1,..,K\}$ possible *TA*s and $\mathbb{R} = \{1,..,R\}$ creatives the advertiser wants to compare. In step 1, we partition the users in the $K$ *TA*s into a set $\mathbb{J} = \{1,..,J\}$ of $J$ *DA*s. 
For example, if the *TA*s are “San Francisco users” and “Male users,” we create three *DA*s: “San Francisco users, Male,” “San Francisco users, Not Male,” and “Non-San Francisco users, Male.” Step 2: Contextual Bandit Formulation ------------------------------------- In step 2, we treat each *DA* as a context, and each creative as an arm that is pulled adaptively based on the context. When a user $i$ arrives at the platform, we assign the user to a context based on his features, i.e., $$i\in DA(j) \text{ if } i\text{'s features match the definition of } j,$$ where $DA(j)$ denotes the set of users in DA $j$. A creative $r\in\mathbb{R}$ is then displayed to the user based on the context. The cost of displaying creative $r$ to user $i$ in context $j$ is denoted by $b_{irj}$. After the creative is displayed, the user’s action, $y_{irj}$, is observed. The empirical implementation of the product uses clicks as the user feedback for updating the bandit, so $y$ is treated as binary, i.e., $y_{irj}\in\{0,1\}$. The payoff to the advertiser from the ad-impression, $\pi_{irj}$, is defined as: $$\pi_{irj}=\gamma\cdot y_{irj}-b_{irj},$$ where $\gamma$ is a factor that converts the user’s action to monetary units. The goal of the bandit is to find an optimal policy $g(j):\mathbb{J}\rightarrow\mathbb{R}$ which allocates the creative with the maximum expected payoff to a user with context $j$. ### Thompson Sampler To develop the TS, we model the outcome $y_{irj}$ in a Bayesian framework, and let $$\begin{aligned} y_{irj} \sim p(y_{irj}|\theta_{rj}),\\ \theta_{rj} \sim p(\theta_{rj}|\Omega_{rj}),\end{aligned}$$ where $\theta_{rj}$ are the parameters that describe the distribution of action $y_{irj}$, and $\Omega_{rj}$ are the hyper-parameters governing the distribution of $\theta_{rj}$. Since $y$ is Bernoulli distributed, we make the typical assumption that the prior on $\theta$ is Beta, which is conjugate to the Bernoulli distribution. 
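The Step 1 partition and the context-assignment rule $i\in DA(j)$ above can be sketched as follows. The feature names and *TA* definitions are hypothetical placeholders, chosen to mirror the “San Francisco users” / “Male users” example:

```python
# Hypothetical TA definitions as predicates over user features.
TA_DEFS = {
    "TA1": lambda u: u["city"] == "San Francisco",
    "TA2": lambda u: u["gender"] == "male",
}

def da_signature(user):
    """Assign a user to a disjoint audience (DA): the tuple of TAs the
    user belongs to. Users matching no TA fall outside the test."""
    sig = tuple(sorted(ta for ta, rule in TA_DEFS.items() if rule(user)))
    return sig or None

# The three DAs from the example: SF & male, SF & not male, non-SF male.
print(da_signature({"city": "San Francisco", "gender": "male"}))    # ('TA1', 'TA2')
print(da_signature({"city": "San Francisco", "gender": "female"}))  # ('TA1',)
print(da_signature({"city": "Boston", "gender": "female"}))         # None
```

Because the signatures are disjoint by construction, every user maps to exactly one context $j$, which is what lets each *DA* be treated as a separate context in Step 2.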
With $\Omega_{rj}\equiv(\alpha_{rj},\beta_{rj})$, we model, $$\begin{aligned} y_{irj}\sim \texttt{Ber}(\theta_{rj}),\\ \theta_{rj}\sim\texttt{Beta}(\alpha_{rj},\beta_{rj}).\end{aligned}$$ Given $y_{irj}\sim \texttt{Ber}(\theta_{rj})$, the expected payoff of each creative-disjoint sub-population combination (henceforth “*C-DA*”) is: $$\mu_{rj}^{\pi}(\theta_{rj})=\mathbb{E}[\pi_{irj}]=\gamma\mathbb{E}[y_{irj}]-\mathbb{E}[b_{irj}]\\=\gamma\theta_{rj}-\bar{b}_{rj},\forall r\in\mathbb{R},j\in\mathbb{J},$$ where $\bar{b}_{rj}$ is the average cost of showing creative $r$ to the users in $DA(j)$.[^2] To make clear how the bandit updates parameters, we add the index $t$ for the batch. Before the test starts, at $t=1$, we set diffuse priors and let $\alpha_{rj,t=1}=1,\beta_{rj,t=1}=1,\forall r \in \mathbb{R}, j \in \mathbb{J}$. This prior implies that the probability of taking action $y$, $\theta_{rj,t=1}$, is uniformly distributed between 0 and 1 for all $r \in \mathbb{R}, j \in \mathbb{J}$. In batch $t$, $N_{t}$ users arrive. The TS displays creatives to these users dynamically, allocating each creative according to the posterior probability that it offers the highest expected payoff given the user’s context. Given the posterior at the beginning of batch $t$, the probability that creative $r$ provides the highest expected payoff is $$w_{rjt}= Pr[\mu_{rj}^{\pi}(\theta_{rjt})=\max\limits_{r \in \mathbb{R}} (\mu_{rj}^{\pi}(\theta_{rjt}))|\vec{\alpha}_{jt},\vec{\beta}_{jt}],$$ where $\vec{\alpha}_{jt} = [\alpha_{1jt}, \dots, \alpha_{Rjt}]'$ and $\vec{\beta}_{jt} = [\beta_{1jt}, \dots, \beta_{Rjt}]'$ are the parameters of the posterior distribution of $\vec{\theta}_{jt} = [\theta_{1jt},\dots,\theta_{Rjt}]'$. To implement this allocation, for each user $i=1,..,N_{t}$ who arrives in batch $t$, we determine his context $j$, and make a draw of the $R\times1$ vector of parameters, $\tilde{\boldsymbol{\theta}}_{jt}^{\left(i\right)}$. 
Element $\tilde{\theta}_{rjt}^{\left(i\right)}$ of the vector is drawn from $\texttt{Beta}(\alpha_{rjt}, \beta_{rjt})$ for $r\in\mathbb{R}$. Then, we compute the payoff for each creative $r$ as $\mu_{rj}^{\pi}(\tilde{\theta}_{rjt}^{\left(i\right)})=\gamma\tilde{\theta}_{rjt}^{\left(i\right)}-\bar{b}_{rj}$, and display to $i$ the creative with the highest $\mu_{rj}^{\pi}(\tilde{\theta}_{rjt}^{\left(i\right)})$. We update all parameters at the end of processing the batch, after the outcomes for all users in the batch are observed. We compute the sum of binary outcomes for each *C-DA* combination as $$s_{rjt}=\sum_{i=1}^{n_{rjt}}y_{irjt}, \forall r \in \mathbb{R},j \in \mathbb{J},$$ where $n_{rjt}$ is the number of users with context $j$ allocated to creative $r$ in batch $t$. Then, we update the parameters as: $$\vec{\alpha}_{j(t+1)}=\vec{\alpha}_{jt}+\vec{s}_{jt}, \vec{\beta}_{j(t+1)}=\vec{\beta}_{jt}+\vec{n}_{jt}-\vec{s}_{jt}, \forall j \in \mathbb{J},$$ where $\vec{s}_{jt} = [s_{1jt},\dots,s_{Rjt}]'$, and $\vec{n}_{jt} = [n_{1jt},\dots,n_{Rjt}]'$. Then, we enter batch $t+1$, and use $\vec{\alpha}_{j(t+1)}$ and $\vec{\beta}_{j(t+1)}$ as the posterior parameters to allocate creatives at $t+1$. We repeat this process until a pre-specified stopping condition (outlined below) is met. ### Probabilistic Aggregation and Stopping Rule While the contextual bandit is set up to learn the best *C-DA* combination, the goal of the test is to learn the best creative-target audience combination (henceforth “*C-TA*”). As such, we compute the expected payoff of each *C-TA* combination by aggregating the payoffs of the corresponding *C-DA* combinations, and stop on the basis of the regret associated with learning the best *C-TA* combination. Using the law of total probability, we can aggregate across all *C-DA*s associated with *C-TA* combination $(r,k)$ to obtain $\lambda_{rkt}$, $$\lambda_{rkt}=\sum_{j\in \mathcal{O}(k)}\theta_{rjt}\cdot\hat{p}(j|k). 
\label{eq:lambda}$$ In equation (\[eq:lambda\]), $\lambda_{rkt}$ is the probability that a user picked at random *from within TA($k$)* in batch $t$ takes the action $y=1$ upon being displayed creative $r$; $\hat{p}(j|k)$ is the probability (in the platform population) that a user belonging to $TA(k)$ is also of context $j$; and $\mathcal{O}(k)$ is the set of disjoint sub-populations ($j$s) whose associated *DA($j$)*s are subsets of $TA(k)$. Given equation (\[eq:lambda\]), the posterior distribution of the $\theta_{rjt}$s from the TS induces a distribution of the $\lambda_{rkt}$s. We can obtain draws from this distribution using Monte Carlo sampling. For each draw $\ensuremath{\theta_{rjt}^{\left(h\right)}},h=1,..,H$ from $\texttt{Beta}(\alpha_{rjt}, \beta_{rjt})$, we can use equation (\[eq:lambda\]) to construct a corresponding $\ensuremath{\lambda_{rkt}^{\left(h\right)}},h=1,..,H$. For each such $\lambda_{rkt}^{\left(h\right)}$, we can similarly compute the implied expected payoff to the advertiser from displaying creative $r$ to a user picked at random from within TA($k$) in batch $t$, $$\omega_{rkt}^{\pi}(\lambda_{rkt}^{\left(h\right)})=\gamma\lambda_{rkt}^{\left(h\right)}-\bar{b}_{rk},\forall r\in\mathbb{R},k\in\mathbb{K}, \label{eq:omega}$$ where $\bar{b}_{rk}$ is the average cost of showing creative $r$ to target audience $k$, which can be obtained by aggregating the $\bar{b}_{rj}$ through analogously applying equation (\[eq:lambda\]). Taking the $H$ values of $\omega_{rkt}^{\pi}(\lambda_{rkt}^{\left(h\right)})$ for each $(r,k)$, we let $r_{kt}^{*}$ denote the creative that has the highest expected payoff within each *TA* $k$ across all $H$ draws, i.e., $$r_{kt}^{*}=\underset{r\in\mathbb{R}}{arg\max}\underset{h=1,..,H}{\max}\omega_{rkt}^{\pi}(\lambda_{rkt}^{\left(h\right)}). \label{eq:bestr}$$ Hence, $\omega_{r_{kt}^{*},kt}^{\pi}(\lambda_{rkt}^{(h)})$ denotes the expected payoff for creative $r_{kt}^{*}$ evaluated at draw $h$. 
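A minimal sketch of the per-user Thompson draw and the conjugate end-of-batch update described above. The array sizes, costs, and batch data are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

R, J = 2, 3                     # creatives, disjoint audiences (DAs)
gamma = 1.0                     # converts a click into monetary units
b_bar = np.zeros((R, J))        # average display cost per C-DA
alpha = np.ones((R, J))         # Beta posterior parameters; alpha = beta = 1
beta = np.ones((R, J))          # is the diffuse Uniform(0, 1) prior

def choose_creative(j):
    """Thompson draw for a user in context j: sample one theta per
    creative and show the creative with the highest sampled payoff."""
    theta = rng.beta(alpha[:, j], beta[:, j])
    return int(np.argmax(gamma * theta - b_bar[:, j]))

def update_batch(n, s):
    """Conjugate end-of-batch update: n[r, j] impressions, s[r, j] clicks."""
    alpha[:] = alpha + s
    beta[:] = beta + n - s

# One illustrative batch: creative 0 in DA 0 got 50 impressions, 5 clicks.
n = np.zeros((R, J)); s = np.zeros((R, J))
n[0, 0], s[0, 0] = 50, 5
update_batch(n, s)              # posterior for that C-DA is now Beta(6, 46)
```

Because the Beta prior is conjugate to the Bernoulli likelihood, the update is just the count arithmetic above, which is what makes batch-level refreshes (every 10 minutes in the deployed product) cheap.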
Also, define $\omega_{^{*}kt}^{\pi}(\lambda_{rkt}^{(h)})$ as the expected payoff of the creative assessed as the best for *TA* $k$ in draw $h$ itself, i.e., $$\omega_{^{*}kt}^{\pi}(\lambda_{rkt}^{(h)})=\underset{r\in\mathbb{R}}{\max}\quad\omega_{rkt}^{\pi}(\lambda_{rkt}^{\left(h\right)}). \label{eq:bestomega}$$ Following [@scott2015multi], the value $\omega_{^{*}kt}^{\pi}(\lambda_{rkt}^{(h)})- \omega_{r_{kt}^{*},kt}^{\pi}(\lambda_{rkt}^{(h)})$ represents an estimate of the regret in batch $t$ for *TA* $k$ at draw $h$. Normalizing it by the expected payoff of the best creative across draws gives a unit-free metric of regret for each draw $h$ for each *TA* $k$, $$\rho_{kt}^{(h)}=\frac{\omega_{^{*}kt}^{\pi}(\lambda_{rkt}^{(h)})-\omega_{r_{kt}^{*},kt}^{\pi}(\lambda_{rkt}^{(h)})}{\omega_{r_{kt}^{*},kt}^{\pi}(\lambda_{rkt}^{(h)})}. \label{eq:unitfreeReg}$$ Let $pPVR(k,t)$ be the $95^{\textrm{th}}$ percentile of $\rho_{kt}^{(h)}$ across the $H$ draws. We stop the test when $$\max\limits_{k\in\mathbb{K}}pPVR(k,t)<0.01. \label{eq:pPVR}$$ In other words, we stop the test when the normalized regret for all *TA*s we are interested in falls below 0.01.[^3] Therefore, while we learn an optimal creative-display policy for each *DA*, we stop the algorithm when we find the best creative for each *TA* in terms of minimal regret. Algorithm \[algo:TS\] shows the full procedure. 
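The probabilistic aggregation of equation (\[eq:lambda\]) and the $pPVR$ stopping check can be sketched as follows. The posterior parameters, audience shares $\hat{p}(j|k)$, and costs below are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
H = 1000                                  # Monte Carlo draws from the posterior

# Hypothetical posterior state for R = 2 creatives and J = 3 DAs.
alpha = np.array([[30., 10., 20.], [15., 25., 12.]])
beta = np.array([[970., 490., 480.], [985., 475., 488.]])
gamma, b_bar_k = 1.0, 0.0                 # click value; per-TA average cost

# TA k -> member DAs O(k) and their population shares p(j | k).
O = {0: [0, 1], 1: [1, 2]}
p_j_given_k = {0: np.array([.5, .5]), 1: np.array([.5, .5])}

def pPVR(k):
    """95th percentile across draws of the normalized regret for TA k."""
    a, b = alpha[:, O[k]], beta[:, O[k]]
    theta = rng.beta(a, b, size=(H,) + a.shape)      # (H, R, |O(k)|)
    lam = theta @ p_j_given_k[k]                     # eq. (lambda), per draw
    omega = gamma * lam - b_bar_k                    # eq. (omega), shape (H, R)
    r_star = int(omega.max(axis=0).argmax())         # eq. (bestr)
    rho = (omega.max(axis=1) - omega[:, r_star]) / omega[:, r_star]
    return np.percentile(rho, 95)

stop = max(pPVR(k) for k in O) < 0.01                # stopping rule (eq. pPVR)
```

Note that each $\theta_{rjt}$ draw feeds every *TA* whose $\mathcal{O}(k)$ contains DA $j$ (here DA 1 appears in both), which is the cross-audience learning channel discussed earlier.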
**Initialization:**

1. Re-partition the $K$ *TA*s into $J$ *DA*s.
2. Set $t \gets 1$ and $\alpha_{rjt} \gets 1,\beta_{rjt} \gets 1,\forall r \in \mathbb{R}, j \in \mathbb{J}$.
3. Obtain from historical data $\hat{p}(j|k),\gamma, \bar{b}_{rj},\forall r \in \mathbb{R}, j \in \mathbb{J}, k \in \mathbb{K}$.
4. Set $pPVR(k,t) \gets 1,\forall k \in \mathbb{K}$.

**While** $\max_{k\in\mathbb{K}} pPVR(k,t) \geq 0.01$:

1. A batch of $N_{t}$ users arrives.
2. For each user $i$ with context $j$, sample $\tilde{\theta}_{rjt}^{(i)}$ using $\texttt{Beta}(\alpha_{rjt},\beta_{rjt})$ for each $r\in\mathbb{R}$, and feed creative $I_{it} = \underset{r\in \mathbb{R}}{arg\max}\;\gamma\tilde{\theta}_{rjt}^{\left(i\right)}-\bar{b}_{rj}$.
3. Collect data $\{y_{irjt}\}_{i=1}^{N_{t}},\{n_{rjt}\}_{r \in \mathbb{R}, j \in \mathbb{J}}$.
4. Compute $s_{rjt}=\sum_{i=1}^{n_{rjt}}y_{irjt}, \forall r \in \mathbb{R},j \in \mathbb{J}$.
5. Update $\alpha_{rj(t+1)}=\alpha_{rjt}+s_{rjt}$ and $\beta_{rj(t+1)}=\beta_{rjt}+n_{rjt}-s_{rjt}, \forall r \in \mathbb{R},j \in \mathbb{J}$.
6. Make $h=1,..,H$ draws of the $\theta_{rj(t+1)}$s, i.e. $$\resizebox{0.8\hsize}{!}{ $\begin{bmatrix}\begin{array}{c} \theta_{11(t+1)}\\ ...\\ \theta_{rj(t+1)}\\ ...\\ \theta_{RJ(t+1)} \end{array}\end{bmatrix}^{(h)}\sim\begin{bmatrix} \texttt{Beta}(\alpha_{11(t+1)},\beta_{11(t+1)})\\ ...\\ \texttt{Beta}(\alpha_{rj(t+1)},\beta_{rj(t+1)})\\ ...\\ \texttt{Beta}(\alpha_{RJ(t+1)},\beta_{RJ(t+1)}) \end{bmatrix}^{(h)}, \forall h=1,...,H$ }$$
7. Compute $\vec{\lambda}_{t+1}^{(h)}=$ $$\resizebox{0.8\hsize}{!}{ $\begin{bmatrix}\begin{array}{c} \lambda_{11(t+1)}\\ ...\\ \lambda_{rk(t+1)}\\ ...\\ \lambda_{RK(t+1)} \end{array}\end{bmatrix}^{(h)}=\begin{bmatrix}\sum\limits_{j\in O(k=1)}\hat{p}(j|k=1)\cdot\theta_{rj(t+1)}\\ ...\\ \sum\limits_{j\in O(k)}\hat{p}(j|k)\cdot\theta_{rj(t+1)}\\ ...\\ \sum\limits_{j\in O(k=K)}\hat{p}(j|k=K)\cdot\theta_{rj(t+1)} \end{bmatrix}^{(h)}, \forall h=1,...,H$ }$$
8. Compute $\vec{\omega^{\pi}}_{t+1}^{(h)}(\vec{\lambda}_{t+1}^{(h)})=$ $$\resizebox{0.8\hsize}{!}{ $\begin{bmatrix}\begin{array}{c} \omega_{11(t+1)}^{\pi}\\ ...\\ \omega_{rk(t+1)}^{\pi}\\ ...\\ \omega_{RK(t+1)}^{\pi} \end{array}\end{bmatrix}^{(h)}=\begin{bmatrix}\gamma\cdot\lambda_{11(t+1)}-\bar{b}_{11(t+1)}\\ ...\\ \gamma\cdot\lambda_{rk(t+1)}-\bar{b}_{rk(t+1)}\\ ...\\ \gamma\cdot\lambda_{RK(t+1)}-\bar{b}_{RK(t+1)} \end{bmatrix}^{(h)}, \forall h=1,...,H$ }$$
9. Compute $\rho_{k(t+1)}^{(h)}=\frac{\omega_{^{*}k(t+1)}^{\pi}(\lambda_{rk(t+1)}^{(h)})-\omega_{r_{k(t+1)}^{*},k(t+1)}^{\pi}(\lambda_{rk(t+1)}^{(h)})}{\omega_{r_{k(t+1)}^{*},k(t+1)}^{\pi}(\lambda_{rk(t+1)}^{(h)})}, \quad \forall h=1,...,H, k \in \mathbb{K}$.
10. $\forall k \in \mathbb{K}$, calculate $pPVR(k,t+1)$ as the $95^{\textrm{th}}$ percentile across the $H$ draws of $\rho^{(h)}_{k(t+1)}$.
11. Set $t \gets t+1$.

Experiments =========== This section reports on experiments that establish the face validity of the TS; compare it to audience split-testing and to a random allocation scheme where each creative is allocated to each context with equal probability; and explore its performance as the degree of overlap in the *TA*s increases. Setup ----- For the experiments, we consider a setup with 2 creatives and 2 overlapping *TA*s, implying 3 *DA*s, 4 *C-TA* combinations and 6 *C-DA* combinations, as shown in Figure (\[fig:simul-setup\]). The *TA*s are assumed to be of equal size, with an overlap of 50%.[^4] We set the display cost $b_{irj}$ to zero and $\gamma=1$, so we can work with the *CTR*s directly as the payoffs (therefore, we interpret the cost of experimentation as the opportunity cost to the advertiser of not showing the best combination). We simulate 1,000 values for the expected *CTR*s of the 6 *C-DA* combinations from uniform distributions (with supports shown in Figure (\[fig:simul-setup\])). Under these values, $C_{1}$-$DA_{1}$ has the highest expected *CTR* amongst the *C-DA* combinations, and $C_{1}$-$TA_{1}$ the highest amongst the *C-TA* combinations. We run the TS for each simulated value to obtain 1,000 bandit replications. 
For each replication, we update probabilities over batches of 100 observations, and stop the sampling when we have 1,000 batches of data. Then, in Figure (\[fig:valid\]), we report box-plots across replications of the performance of the TS as batches of data are collected, plotting these at every $10^{\textrm{th}}$ batch. ![Simulation Setup: 2 Cs, 2 TAs and 3 DAs[]{data-label="fig:simul-setup"}](Simulation_setup.png){width="75.00000%"} Algorithm Performance --------------------- Figures (\[valid:a\] and \[valid:b\]) plot the evolution over batches of the unit-free regret (*pPVR*) and of the expected regret per impression, where the latter is defined as the expected clicks lost per impression in a batch from displaying a creative other than the true-best for each *DA*, evaluated at the true parameters.[^5] If the TS progressively allocates more traffic to creatives with a higher probability of being the best arm in each context (*DA*), the regret should fall as more data are accumulated. Consistent with this, both metrics fall as the number of batches increases in our simulation. The cutoff of 0.01 *pPVR* is met within 1,000 batches in all replications. Figure (\[valid:c\]) shows the posterior probability implied by the TS in each batch that the true-best *C-TA* is currently the best.[^6] The posterior puts more mass on the true-best combination as more batches are sampled. These results establish the face validity of the algorithm as a viable way of finding the best *C-TA* combination in this setting, while minimizing regret. Figure (\[fig:ts\]) now compares the proposed TS algorithm to an Equal Allocation algorithm (henceforth “EA”) and a Split-Testing algorithm (henceforth “ST”). EA is analogous to “A/B/n” testing in that it is non-adaptive: the allocation of traffic to creatives for each *DA* is held fixed, and not changed across batches. Rather, in each batch, we allocate traffic equally to each of the $r\in\mathbb{R}$ creatives for each *DA*. 
ST follows the design described in $\mathsection$\[sec:lit-review\], and traffic is allocated at the level of *C-TA* (rather than *C-DA*) combinations. Each user is assigned randomly, with fixed and equal probability, to one of the $R\times K$ *C-TA* arms (4 in this simulation), and a creative is displayed only if the user’s features match the arm’s *TA* definition. To do the comparison, we repeat the same 1,000 replications as above with the same configurations, but this time stop each replication when the criterion in equation (\[eq:pPVR\]) is reached. In other words, for each of the TS, EA and ST algorithms, we maintain a posterior belief about the best *C-TA* combination, which we update after every batch.[^7] In TS, the traffic allocation reflects this posterior adaptively, while in EA and ST, the traffic splits are held fixed; the same stopping criterion is imposed on all three, and all other parameters are held the same. Figure (\[Fig:tsea1\]) shows that TS generates the smallest amount of expected regret, and the sample sizes required to exit the experiments under TS are between those under EA and those under ST (Figure (\[Fig:tsea2\])). This is because the expected regret per impression under EA and ST remains constant over batches, while, as Figure (\[valid:b\]) demonstrated, the expected regret per impression under TS steadily decreases as more batches arrive. ST generates the most regret and requires the largest sample sizes, since it is not only non-adaptive, but also discards a portion of the traffic and the information that could have been gained from that portion. Figure \[Fig:tsea3\] shows that the TS puts more mass at stopping on the true-best *C-TA* combination compared to EA and ST. Across replications, this allows TS to correctly identify the true-best combination 85.8% of the time at stopping, compared to 77.8% for EA and 70.8% for ST. Overall, the superior performance of the TS relative to EA is consistent with the experiments reported in [@scott2010]. 
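For concreteness, the expected regret per impression used in these comparisons (the expected clicks lost per impression from displaying a creative other than the true-best in each *DA*, evaluated at the true parameters) can be computed as follows. The true *CTR*s, allocation weights, and audience shares are illustrative placeholders:

```python
import numpy as np

# Illustrative true CTRs, R = 2 creatives x J = 3 DAs.
theta_true = np.array([[.04, .03, .025],
                       [.02, .05, .035]])

# TA k -> member DAs O(k) and population shares p(j | k).
O = {0: [0, 1], 1: [1, 2]}
p_j_given_k = {0: [.5, .5], 1: [.5, .5]}

def expected_regret_per_impression(w):
    """w[r, j] is the probability the design shows creative r in DA j.
    Returns the magnitude of the expected clicks lost per impression."""
    loss = theta_true.max(axis=0) - theta_true       # >= 0 per C-DA
    per_da = (w * loss).sum(axis=0)                  # expected loss in each DA
    return sum(pj * per_da[j]
               for k in O for j, pj in zip(O[k], p_j_given_k[k]))

# A non-adaptive equal-allocation (EA) design: each creative gets half the
# traffic in every DA, so its regret per impression never shrinks over batches.
w_EA = np.full_like(theta_true, 0.5)
print(expected_regret_per_impression(w_EA))          # ~0.0175 clicks/impression
```

Under TS, the weights $w$ concentrate on the best creative per *DA* as batches accumulate, driving this quantity toward zero, which is why the cumulative regret curves for TS flatten while those for EA and ST grow linearly.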
Degree of Overlap among Target Audiences {#subsec:overlap-exps} ---------------------------------------- The next set of experiments assesses the extent to which the degree of audience overlap affects the performance of the proposed TS algorithm. We use simulations to demonstrate the cross-audience learning effect in the algorithm, and to explore how it counterbalances the effect on performance of the increased payoff similarity between the *TA*s. From a practical perspective, this simulation helps assess the circumstances under which the sampler can reliably learn the best *C-TA* combination (thereby representing an attractive scenario for the platform to run the test), versus not (an unattractive scenario). We first fix the *CTR*s of the six *C-DA* combinations $C_{1}$-$DA_{1}$, $C_{2}$-$DA_{1}$, $C_{1}$-$DA_{2}$, $C_{2}$-$DA_{2}$, $C_{1}$-$DA_{3}$, $C_{2}$-$DA_{3}$ to be \[.01,.03,.03,.05,.025,.035\]. We vary the size of the overlapped audience, i.e. $\Pr\left(DA_{2}|TA_{1}\right)=\Pr\left(DA_{2}|TA_{2}\right)$, on a grid from $0$-$.9$. For each value on the grid, we run the TS for 1,000 replications, taking the 6 *C-DA* *CTR*s as the truth, and stopping each replication according to equation (\[eq:pPVR\]). We then present in Figure \[fig:ol\_1\] box-plots across these replications as a function of the degree of overlap. As the degree of overlap increases along the $x$-axis, the two target audiences become increasingly similar, increasing cross-audience learning, but decreasing their payoff differences. Figures (\[ol\_1:c\] and \[ol\_1:d\]) show that the sample sizes required for stopping and the total expected regret per impression remain roughly the same as overlap increases, suggesting the two effects largely cancel each other.[^8] Figure (\[ol\_1:a\]) shows the proportion of 1,000 replications that correctly identify the true-best *C-TA* combination as the best at stopping. 
The annotations label the payoff difference between the top-2 combinations, showing that the payoffs also become tighter as the overlap increases. We see that the TS works well for reasonably high values of overlap, but as the payoff differences become extremely small, it becomes increasingly difficult to correctly identify the true-best *C-TA* combination. Figure (\[ol\_1:b\]) explains this pattern by showing that the posterior probability of the best combination identified at stopping also decreases as the payoff differences grow very small. Finally, the appendix presents additional experiments showing that the observed degradation in the performance of the TS at very high values of overlap disappears in a pure cross-audience learning setting. Overall, these simulations suggest that the proposed TS is viable for identifying the best *C-TA* combinations at reasonably high levels of *TA* overlap. The TS does this by leveraging cross-audience learning. If the sampler is to be used in situations with extreme overlap, it may be necessary to augment the stopping rule with conditions based on posterior probabilities, in addition to the ones based on $pPVR$ across contexts in equation (\[eq:pPVR\]). This is left for future research. Deployment ========== We designed an experimentation product based on the proposed TS algorithm. The goal of the product is to help advertisers in `JD.com`’s marketplace improve their digital ad campaigns by discovering good target audience and creative combinations. To use the product, an advertiser starts by setting up a test ad-campaign on the product system. The test campaign is similar to a typical ad-campaign, involving advertiser-specified rules for bidding, budget, duration, etc. The difference is that the advertiser defines $K$ *TA*s and binds $R$ creatives to the test-campaign, rather than one of each as is typical; and the allocation of creatives to a user impression is managed by the TS algorithm. 
Both $K$ and $R$ are limited to a maximum of 5 so as to restrict the number of parameters to learn in the test. Because the algorithm partitions the *TA*s into disjoint *DA*s, the number of contexts grows combinatorially as $K$ increases, and this restriction keeps the total number of *C-TA* test combinations manageable. When a user arrives at `JD.com`, the ad-serving system retrieves the user’s characteristics. If the characteristics activate the tag(s) of any of the $K$ *TA*s, and satisfy the campaign’s other requirements, the TS chooses a test creative according to the adaptively determined probability, and places a bid for it into the platform’s auction system. The bids are chosen by the advertiser, but are required to be the same for all creatives in order to keep the comparison fair. The auction includes other advertisers who compete to display their creatives to this user. The system collects data on the outcomes of the winning auctions and on whether the user clicks on the creative when served; updates parameters every 10 minutes; and repeats this process until the stopping criterion is met and the test is stopped. The data are then aggregated, and relevant statistical results regarding all the *C-TA* combinations are delivered to the advertiser. See `https://jzt.jd.com/gw/dissert/jzt-split/1897.html` for a product overview. The next sub-section presents a case-study based on one test run on the product. Though many of the other tests run on the product platform exhibit similar patterns, we make no claim that this case-study is representative: we picked it because it best illustrates some features of the test environment and the performance of the TS. Case-Study ---------- The case-study involves a large cellphone manufacturer. The advertiser defined 2 *TA*s and 3 creatives. The 2 *TA*s overlap, resulting in 3 *DA*s. Figure (\[fig:exp\]) shows the probability that each *C-TA* combination is estimated to be the best as the test progresses. 
The 6 possible combinations are shown in different colors and markers. During the initial 12 batches (2 hours), the algorithm identifies the “\*” and “+” combinations to be inferior and focuses on exploring the other 4 combinations. Then, the yellow “.” combination starts to dominate the other combinations until the test stops. When the test ends, this combination is chosen as the best. The advantage of the adaptive aspect is that most of the traffic during the test is allocated to this combination (see the $y$-axis), so that the advertiser does not unnecessarily waste resources on assessing combinations that were learned to be inferior early on. The experiment lasted a bit more than 6 hours, with a total of 18,499 users and 631 clicks. The estimated *CTR*s of the six *C-TA* combinations $C_{1}$-$TA_{1}$, $C_{2}$-$TA_{1}$, $C_{3}$-$TA_{1}$ (the yellow “.” combination), $C_{1}$-$TA_{2}$, $C_{2}$-$TA_{2}$, $C_{3}$-$TA_{2}$ at stopping are \[.028,.034,.048,.028,.017,.036\]. Despite the short time span, the posterior probability that the yellow “.” combination is the best is quite high (98.4%). We use a back-of-the-envelope calculation to assess the economic efficiency of TS relative to EA in this test. We use the data to simulate a scenario where the same amount of traffic as this test used is allocated equally across the creatives. We find TS generates 52 more clicks (8.2% of total clicks) than EA. The quick identification of the best arm in this test may be due to the relatively large differences in the *CTR*s across the combinations. The difference between the top-2 combinations is around 1 percentage point within each of the *TA*s and across all combinations. As we suggested in $\mathsection$\[subsec:overlap-exps\], larger differences in the payoffs may result in a shorter test span and higher posterior probabilities on the best combinations. 
In other tests, we found the product performs well even in situations where the creatives are quite similar and $K,R$ are close to $5$, without requiring amounts of data or test time so unreasonable as to make it unviable. Scaling the product to allow for larger sets of test combinations is a task for future research and development. ![Results from Practical Implementation[]{data-label="fig:exp"}](exp1_best.png){width="75.00000%"} Conclusion ========== An adaptive algorithm to identify the best combination among a set of advertising creatives and *TA*s is presented. The novel aspect of the algorithm is that it accommodates the possibility of overlap in the *TA*s, which is a pervasive feature of digital advertising settings. Overlap in the *TA*s makes it difficult to sort between the relative attractiveness of the various audiences. The proposed method addresses this issue, while adapting the allocation of traffic during the test to what is learned, so as to minimize advertiser regret. Experiments show that the proposed method is more efficient than naive “split-testing” or non-adaptive “A/B/n” testing based methods. The approach assumes that creatives do not induce long-term dependencies, for instance, that they do not affect future user arrival rates; and that auctions are not linked to each other, for instance through a binding budget constraint. These assumptions justify framing the problem as a multi-armed bandit, and could be relaxed in future work by using a more general reinforcement learning framework. Appendix {#appendix .unnumbered} ======== Simulation: Increasing Overlap with Pure Cross-Audience Learning ---------------------------------------------------------------- This simulation is set up to demonstrate the *cross-audience learning* effect induced by increasing overlap in the *TA*s. The idea of the simulation is to explore variation in performance with overlap, while holding fixed the payoff difference between the compared *TA*s. 
Consider a similar setup with 2 creatives and 2 overlapping *TA*s as before. We fix the *CTR*s of the overlapped audience, $C_{1}$-$DA_{2}$, $C_{2}$-$DA_{2}$, to be \[.015, .025\] and the *CTR*s of the four *C-TA* combinations $C_{1}$-$TA_{1}$, $C_{2}$-$TA_{1}$, $C_{1}$-$TA_{2}$, $C_{2}$-$TA_{2}$ to be \[.035, .05, .015, .03\]. We vary the size of the overlapped audience, i.e. $\Pr\left(DA_{2}|TA_{1}\right)=\Pr\left(DA_{2}|TA_{2}\right)$, on a grid from $0$-$.9$. For each value on the grid, we run the TS for 1,000 replications, stopping each according to equation (\[eq:pPVR\]).[^9] Since the payoffs of the *C-TA* combinations remain the same as the overlap changes, this helps isolate the effects of cross-audience learning. Figure \[fig:ol\_2\] shows box-plots across replications as a function of the degree of overlap. Reflecting the cross-audience learning, the sample sizes decrease steadily as the overlap increases (Figure (\[ol\_2:c\])). Expected regret per impression may increase as overlap increases, because the payoff difference between the non-overlapping *DA*s increases with overlap; or it may fall, because of faster learning. Figure (\[ol\_2:d\]) shows the net effect is somewhat negative, and expected regret declines as overlap increases. Figure (\[ol\_2:a\]) shows the proportion of replications that correctly identify the true-best *C-TA* combination as the best at the end of each replication. We see that the proportion of correctly identified combinations remains high as the degree of overlap increases. Finally, Figure (\[ol\_2:b\]) shows there is no degradation in performance in terms of the posterior probability accumulated on the best combination at stopping. Overall, these results show that under a pure *cross-audience learning* scenario, increased overlap between the *TA*s does not degrade the performance of the sampler in a meaningful way. [^1]: The authors are part of `JD Intelligent Ads Lab`. The views represent those of the authors, and not of `JD.com`. 
We thank Jun Hao, Lei Wu and Paul Yan for their helpful collaboration, and Caio Waisman for extensive comments. Previous version: July 5, 2019. [^2]: $\gamma$ may be determined from prior estimation or advertisers’ judgment of the value attached to users’ actions. $\gamma$ is pre-computed and held fixed during the test. $\bar{b}_{rj}$ and $\hat{p}(j|k)$ (defined later) can be pre-computed outside of the test from historical data and held fixed during the test, or inferred during the test using a simple bin estimator that computes these as averages over the observed cost and user contexts data. [^3]: Other stopping rules may also be used, for example, based on posterior probabilities, or based on practical criteria that the test runs till the budget is exhausted (which protects the advertiser’s interests since the budget is allocated to the best creative). The formal question of how to stop a TS when doing Bayesian inference is still an open issue. While data-based stopping rules are known to affect frequentist inference, Bayesian inference has traditionally been viewed as unaffected by optional stopping (e.g., [@edwards1963bayesian]), though the debate is still unresolved in the statistics and machine learning community (e.g., [@rouder] vs. [@dhg2018]). This paper adopts a stopping rule reflecting practical product-related considerations, and does not address this debate. [^4]: Specifically, $\Pr\left(TA_{1}\right)=\Pr\left(TA_{2}\right)=.5$; $\Pr\left(DA_{1}|TA_{1}\right)=\Pr\left(DA_{2}|TA_{1}\right)=0.5$; $\Pr\left(DA_{2}|TA_{2}\right)=\Pr\left(DA_{3}|TA_{2}\right)=0.5$; and $\Pr\left(DA_{1}|TA_{2}\right)=\Pr\left(DA_{3}|TA_{1}\right)=0$. [^5]: Specifically, the expected regret per impression in each batch $t$ is $\sum_{k\in\mathbb{K}}\sum_{j\in O(k)}\hat{p}(j|k)\sum_{r\in\mathbb{R}}w_{rjt}(\theta_{rj}^{\textrm{true}}-\underset{r\in\mathbb{R}}{\max}\theta_{rj}^{\textrm{true}})$. 
[^6]: Note that these probabilities are not the same as the distribution of traffic allocated by the TS, since traffic is allocated based on *DA* and not *TA*. [^7]: Note that we do not need to partition the *TA*s under ST, and instead directly set up the model at the *C-TA* level under ST. [^8]: Figure \[ol\_1:d\] suggests a possible decrease in total expected regret with increased overlap. This may be caused by a feature of the simulation setup: the overlapping *DA* (i.e., *$DA_{2}$*) has a smaller payoff difference between the 2 *CA*s than the non-overlapping *DA*s. As the degree of overlap increases, the overlapping part dominates the non-overlapping part, making the regret smaller. If we impose the same payoff difference across all *DA*s, we find this decline disappears. [^9]: That is, for each value of overlap on the grid, we compute the *CTR*s for $C_{1}$-$DA_{1}$, $C_{2}$-$DA_{1}$, $C_{1}$-$DA_{3}$, $C_{2}$-$DA_{3}$ that generate the 2 fixed *C-DA* and 4 *C-TA* *CTR* values above, and run 1,000 replications of the TS taking the 6 *C-DA* *CTR*s as the true values.
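The batched Thompson-sampling replication described above can be sketched with a Beta-Bernoulli sampler. This is a minimal single-audience illustration, not the paper's implementation: the creative CTRs, batch size, Monte Carlo posterior check, and stopping threshold below are illustrative stand-ins, and the *DA* partitioning and cross-audience pooling are omitted.

```python
import random

# Minimal Beta-Bernoulli Thompson sampling for K creatives in one audience.
# Batch size, stopping threshold, and CTR values are illustrative only.

def ts_replication(ctrs, batch=100, max_batches=200, threshold=0.95, seed=0):
    rng = random.Random(seed)
    k = len(ctrs)
    a = [1.0] * k            # Beta(a, b) posterior per creative
    b = [1.0] * k
    regret = 0.0
    best_ctr = max(ctrs)
    for t in range(max_batches):
        for _ in range(batch):
            draws = [rng.betavariate(a[r], b[r]) for r in range(k)]
            r = draws.index(max(draws))     # allocate impression by posterior draw
            regret += best_ctr - ctrs[r]    # expected per-impression regret
            if rng.random() < ctrs[r]:
                a[r] += 1.0
            else:
                b[r] += 1.0
        # Monte Carlo estimate of the posterior probability each creative is best
        wins = [0] * k
        for _ in range(300):
            s = [rng.betavariate(a[r], b[r]) for r in range(k)]
            wins[s.index(max(s))] += 1
        if max(wins) / 300.0 >= threshold:  # stop once one creative dominates
            break
    n = (t + 1) * batch
    return wins.index(max(wins)), n, regret / n
```

Running many seeded replications of this sketch and recording the identified creative, the sample size at stopping, and the per-impression regret mimics the quantities summarized in the box-plots above.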
--- abstract: 'In the main part of the paper we project forward to having $B$ factory determinations of $\sin(2\beta)$ and $\sin(2\alpha)$, for which we take several values. First, we use a joint $\chi^2$ analysis of CKM experiments to constrain CKM matrix elements in the standard model, and experiments on the angles $\alpha$, $\beta$, and $\gamma$, and on $x_s$ and null $CP$ asymmetries. Then we invoke mixing to a new isosinglet down quark (as in E$_6$) which induces FCNC that allow a $Z^0$ mediated contribution to $B-\bar B$ mixing and which brings in new phases. We then repeat the $\chi^2$ analysis, now including experimental constraints from FCNC as well, finding much larger ranges of prediction for the $B$ factory. We then add projected $B$ factory results on $\sin(2\beta)$ and $\sin(2\alpha)$ and repeat both analyses. In $(\rho,\eta)$ and $(x_s,\sin{(\gamma)})$ plots for the extra isosinglet down quark model, we find multiple regions that will require experiments on $\sin{(\gamma)}$ and/or $x_s$ to decide between and possibly to effectively bound out the extra down quark contribution.' address: | Department of Physics and Astronomy,\ University of California, Irvine\ Irvine, CA 92717-4575 author: - Dennis Silverman title: | [$B$]{} Factory Constraints on Isosinglet Down Quark Mixing,\ and Predictions For Other [$CP$]{} Violating Experiments --- Introduction ============ In this paper we are interested in finding the “reach” of the $B$ factories in terms of determining the angles of the standard model (SM) CKM matrix, and of limiting “new physics” contributions. Within these future limits on both models, we then make predictions for other experiments, including $\sin(\gamma)$, $x_s$, and the asymmetry in $B_s-\bar B_s$ mixing, which is almost null in the SM. In setting limits we use the method of a joint $\chi^2$ fit to all constraining experiments. 
The “new physics” class of models we use are those with extra iso-singlet down quarks, where we take only one new down quark as mixing significantly. An example is E$_6$, where there are two down quarks for each generation with only one up quark, and of which we assume only one new iso-singlet down quark mixes strongly. This model has shown large possible effects in $B-\bar B$ mixing phases. The “reach” of the $B$ factory in this model also sets limits on the phases of the mixing angles to the new iso-singlet down quark. For different $\sin({2\alpha})$ we find multiple regions that will require experiments on $\sin{(\gamma)}$ or $x_s$ to decide between, and experiments on both could be required to effectively bound out or to verify the model. We also find a relatively small $B_s - \bar{B}_s$ mixing asymmetry, even outside the standard model. Iso-singlet Down Quark Mixing Model =================================== Groups such as $E_6$ with extra SU$(2)_L$ singlet down quarks give rise to flavor changing neutral currents (FCNC) through the mixing of four or more down quarks [@shin; @nirsilnp; @nirsilpr; @sil92]. We use the $4 \times 4$ down quark mixing matrix $V$ which diagonalizes the initial down quarks ($d_{iL}^0$) to the mass eigenstates ($d_{jL}$) by $d_{iL}^0 = V_{ij} d_{jL}$. The flavor changing neutral currents we have are [@nirsilpr; @sil92] $-U_{ds} = V^*_{4d} V_{4s}$ , $-U_{sb} = V^*_{4s} V_{4b}$, and $-U_{bd} = V^*_{4b} V_{4d}$. These FCNC with tree level $Z^0$ mediated exchange may contribute part of $B_d^0 - \bar{B_d^0}$ mixing and of $B_s^0 - \bar{B_s^0}$ mixing, giving a range of non-zero values for the fourth quark’s mixing parameters. $B_d^0 - \bar{B_d^0}$ mixing may occur by the $b - \bar{d}$ quarks in a $\bar{B_d}$ annihilating to a virtual $Z$ through a FCNC with amplitude $U_{db}\ $, and the virtual $Z$ then creating $\bar{b} - d$ quarks through another FCNC, again with amplitude $U_{db}$, which then becomes a $B_d$ meson. 
If these FCNC are a large contributor to $B_d -\bar{B}_d$ mixing, they introduce three new mixing angles and two new phases into the $CP$ violating $B$ decay asymmetries. The size of the contribution of the FCNC amplitude $U_{db}$ as one side of the unitarity quadrangle is less than 0.1 of the unit base $|V_{cd} V_{cb}|$ at the 1-$\sigma$ level, but we have found [@shin; @nirsilpr; @sil92] that it can contribute, at present, as large an amount to $B_d -\bar{B}_d$ mixing as does the standard model. The new phases can appear in this mixing and give total phases different from those of the standard model in $CP$ violating $B$ decay asymmetries[@nirsilpr; @sil92; @chosilfcnc; @branco; @lavoura]. For $B_d - \bar{B}_d$ mixing with the four down quark induced $b-d$ coupling, $U_{db}$, we have [@chosilfcnc] $$x_d = (2 G_F/3 \sqrt{2}) B_B f_B^2 m_B \eta_B \tau_B \left| U_{std-db}^2 + U_{db}^2 \right|$$ where, with $y_t = m_t^2/m_W^2$, $$U^2_{std-db} \equiv (\alpha/(4 \pi \sin^2{\theta_W})) y_t f_2(y_t) (V_{td}^* V_{tb})^2,$$ and $x_d = \Delta m_{B_d}/\Gamma_{B_d} = \tau_{B_d} \Delta m_{B_d}$. The $CP$ violating decay asymmetries depend on the combined phases of the $B^0_d-\bar{B}^0_d\;$ mixing and the $b$ quark decay amplitudes into final states of definite $CP$. Since we have found that $Z$ mediated FCNC processes may contribute significantly to $B^0_d-\bar{B}^0_d\;$ mixing, the phases of $U_{db}\ $ would be important. Calling the singlet down quark $D$, to leading order the mixing matrix elements to $D$ are $V_{tD} \approx s_{34}$, $V_{cD} \approx s_{24} e^{-i\delta_{24} }$, and $V_{uD} \approx s_{14} e^{-i \delta_{14} }$. The FCNC amplitude $U_{db}$ to leading order in the new angles is $$U_{db} = -s_{34} (s_{34} V^*_{td} + s_{14} e^{-i \delta_{14} } -s_{24} e^{-i \delta_{24} } s_{12}),$$ where $V_{td} \approx (s_{12} s_{23} - s_{13} e^{i\delta_{13}})$, and $V_{ub} = s_{13} e^{-i\delta_{13}}$. 
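The leading-order expression for $U_{db}$ above can be evaluated numerically with complex arithmetic; a sketch is below. The angle and phase values used in checking it are illustrative inputs, not fitted values from the paper.

```python
import cmath

# Evaluate the leading-order FCNC amplitude U_db from the new mixing angles
# (s14, s24, s34), the new phases (d14, d24), and the relevant CKM-like
# parameters. All numerical inputs are illustrative only.

def u_db(s14, s24, s34, d14, d24, s12, v_td):
    """U_db = -s34 (s34 V_td^* + s14 e^{-i d14} - s24 e^{-i d24} s12)."""
    return -s34 * (s34 * v_td.conjugate()
                   + s14 * cmath.exp(-1j * d14)
                   - s24 * cmath.exp(-1j * d24) * s12)

def mixing_amplitude_magnitude(u2_std, u2_fcnc):
    """|U_std^2 + U_db^2|, the combination entering x_d above
    (arguments are the already-squared complex amplitudes)."""
    return abs(u2_std + u2_fcnc)
```

Setting $s_{34}=0$ switches the FCNC contribution off entirely, which is a quick sanity check on the sign structure of the expression.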
Joint Chi-squared Analysis for CKM and FCNC Experiments ======================================================= FCNC experiments put limits on the new mixing angles and constrain the possibility of new physics contributing to the $B_d^0 - \bar{B_d^0}$ and $B_s^0 - \bar{B_s^0}$ mixing. Here we analyze jointly all constraints on the $4 \times 4$ mixing matrix obtained by assuming only one of the SU$(2)_L$ singlet down quarks mixes appreciably[@nirsilpr]. We use the eight experiments for the $3 \times 3$ CKM sub-matrix elements [@chosilckm], which include those on the five matrix elements $V_{ud}, V_{cd}, V_{us}, V_{ub}, V_{cb}$ of the $u$ and $c$ quark rows, and, in the neutral $K$ system[@detail], include $|\epsilon|$ and $K_L \to \mu \mu$, and also $B_d - \bar{B}_d$ mixing. For studying FCNC, we add [@chosilfcnc] the $B \to \mu \mu X$ bound (which constrains $b \to d$ and $b \to s$), $K^+ \to \pi^+ \nu \bar{\nu}$ [@lavoura; @kpiexpt; @chiskpi] and $Z^0 \to b \bar{b}$ [@lavoura] (which directly constrains the $V_{4b}$ mixing element). FCNC experiments will bound the three amplitudes $U_{ds}$, $U_{sb}$, and $U_{bd}$, which contain three new mixing angles and three phases. We use the newly indicated top quark mass, $m_t = 174$ GeV. In maximum likelihood correlation plots, we use for axes two output quantities which are dependent on the angles, such as $\rho$ and $\eta$, and for each possible bin with given values for these, we search through the nine-dimensional space of the $4 \times 4$ down quark mixing angles, finding all parameter sets which give results in the bin, and then assigning to that bin the minimum $\chi^2$ among them. To present the results, we then draw contours at several $\chi^2$ in this plane corresponding to given confidence levels. 
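The binning-and-minimization step of the correlation plots can be sketched as a profile over a scanned parameter set. Here `to_outputs` and `chi2_fn` are placeholders standing in for the paper's observable map and full joint $\chi^2$ over the thirteen experiments; they are not the paper's code.

```python
import math

# Sketch of the maximum-likelihood correlation-plot construction: scan a set
# of parameter points, map each to two output quantities (e.g. rho and eta),
# and record the minimum chi^2 attained in each 2-D bin.

def profile_chi2(points, to_outputs, chi2_fn, xedges, yedges):
    nx, ny = len(xedges) - 1, len(yedges) - 1
    grid = [[math.inf] * ny for _ in range(nx)]
    for p in points:
        x, y = to_outputs(p)
        ix = next((i for i in range(nx) if xedges[i] <= x < xedges[i + 1]), None)
        iy = next((j for j in range(ny) if yedges[j] <= y < yedges[j + 1]), None)
        if ix is None or iy is None:
            continue  # point falls outside the plotted plane
        grid[ix][iy] = min(grid[ix][iy], chi2_fn(p))
    return grid  # contours at fixed chi^2 in this grid give the CL regions
```

Drawing contours at the $\chi^2$ values corresponding to the desired confidence levels in the returned grid reproduces the kind of plot described in the text.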
Constraints on the Standard Model CKM Matrix at Present, and After the B Factory ================================================================================ We first analyze the standard model using the present constraints on the eight CKM related experiments, and then repeat the analysis using the projected constraints from the $B$ factory[@Buras] which will give values for $\sin(2\beta)$ and $\sin(2\alpha)$. In the following, we will find and take $\sin(2\beta) = 0.62$ as the center of the current range with its projected $B$ factory errors of $\pm 0.06$ [@porter], and vary $\sin(2\alpha)$ from $-1.0$ to 1.0, using the projected $B$ factory errors of $\pm 0.08$. In Fig. \[rho-etaSM\] is shown the $(\rho,\eta)$ plot for the standard model with contours at $\chi^2$ which correspond to confidence levels (CL) that are the same as the CL for 1, 2, and 3-$\sigma$ limits. Fig. \[rho-etaSM\] shows large regions for the present CKM constraints, and small regions for the projected $B$ factory results, where we have taken the cases $\sin(2\alpha)= 1, 0,$ and $-1$, which appear from left to right, respectively. In Fig. \[sa-sbSM\] is shown the $(\sin(2\alpha),\sin(2\beta))$ plot for the standard model, for the same cases as in Fig. 1. The nearly horizontal contours are the present constraints, and the small circular contours are those for the $B$ factory cases $\sin{(2\alpha)} = -1, 0$, and 1, centered about their appropriate $\sin{(2\alpha)}$ values. In Fig. \[xs-sgSM\] is shown the $(x_s,\sin{(\gamma)})$ plot for the standard model with (a) present data, and (b) for the $B$ factory cases $\sin{(2\alpha)} = 1, 0, -1$ from left to right. $x_s$ is determined here from $x_s = 1.2 x_d (|V_{ts}|/|V_{td}|)^2$. The largest errors arise from the uncertainty in $|V_{td}|$, since we have not assumed any improvement in the present $20\%$ uncertainty in $\sqrt{B_B} f_B$ (which relates $V_{td}$ to $x_d$) from lattice calculations[@latticefB]. 
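The relation $x_s = 1.2\,x_d(|V_{ts}|/|V_{td}|)^2$ and its sensitivity to $\sqrt{B_B}f_B$ can be sketched numerically. Note that the scaling used here — $|V_{td}|\propto 1/f_B$ at fixed $x_d$, hence $x_s\propto f_B^2$ — is our reading of the text's statement about the $20\%$ lattice uncertainty, and all numerical inputs are illustrative.

```python
# The standard-model relation x_s = 1.2 x_d (|V_ts|/|V_td|)^2 quoted above,
# with a crude propagation of the fractional uncertainty in sqrt(B_B) f_B.
# At fixed x_d, |V_td| scales as 1/f_B, so x_s scales as f_B^2 (an assumption
# of this sketch; inputs are illustrative, not the paper's fitted values).

def x_s_estimate(x_d, v_ts, v_td):
    return 1.2 * x_d * (v_ts / v_td) ** 2

def x_s_range(x_d, v_ts, v_td, frac_err=0.20):
    central = x_s_estimate(x_d, v_ts, v_td)
    return central * (1.0 - frac_err) ** 2, central * (1.0 + frac_err) ** 2
```

A $\pm 20\%$ uncertainty in $\sqrt{B_B}f_B$ thus translates into roughly a $\pm 40\%$ spread in $x_s$, consistent with it being the dominant error quoted in the text.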
In the SM, the $B$ factory results construct a rigid triangle from the knowledge of $\alpha$ and $\beta$, and will remove this uncertainty in $\gamma$ and $x_s$ in the future. A cautionary note for experiments emerges from this plot, namely that $\sin(\gamma)$ is close to one (0.8 to 1.0) for the 1-$\sigma$ contour, and high accuracy on $\sin(\gamma)$ will be needed to add new information to the standard model. At 1-$\sigma$ the range of $x_s$ in the standard model is from 11 to 24. It is clear that the different $\sin(2\alpha)$ cases give distinct ranges for $x_s$. Checking that the measured $x_s$ agrees with the range implied by a $\sin{(2\alpha)}$ measurement will be a good test of the standard model. Constraints on the Four Down Quark Model at Present, and After the [$B$]{} Factory Results ========================================================================================== Here we also project forward to having results on $\sin{(2\alpha)}$ and $\sin{(2\beta)}$ from the $B$ factories, and show how there will be stronger limits on the new phases of FCNC couplings than from present data. In the four down quark model we use “$\sin{(2\alpha)}$” and “$\sin{(2\beta)}$” to denote the results of the appropriate $B_d$ decay $CP$ violating asymmetries, but since the mixing amplitude is a superposition, the experimental results are not directly related to angles in a triangle in this model. The asymmetries with FCNC contributions included are $$\sin{(2\beta)} \equiv A_{B^0_d \to \Psi K^0_s} = {\rm Im} \left[ \frac{(U^2_{std-db} + U^2_{db})}{|U^2_{std-db} + U^2_{db}|} \frac{(V^*_{cb} V_{cs})}{(V^*_{cb} V_{cs})^*} \right]$$ $$\sin{(2\alpha)} \equiv -A_{B^0_d \to \pi^+ \pi^-} = -{\rm Im} \left[ \frac{(U^2_{std-db} + U^2_{db})}{|U^2_{std-db} + U^2_{db}|} \frac{(V^*_{ub} V_{ud})}{(V^*_{ub} V_{ud})^*} \right]$$ with $U_{std-db}$ defined in Eqn. (2.2). 
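The asymmetry formulas above are direct to evaluate numerically once the complex amplitudes are specified; a sketch follows. The inputs in the check are illustrative (a pure-SM limit with a chosen mixing phase), not fitted values.

```python
import cmath

# Numerical sketch of the asymmetries above: the measured "sin(2beta)" and
# "sin(2alpha)" when the B_d mixing amplitude is the superposition
# U2_std + U2_db. Arguments are complex; example values are illustrative.

def mixing_phase(u2_std, u2_db):
    m = u2_std + u2_db
    return m / abs(m)

def sin_2beta(u2_std, u2_db, vcbs):
    """vcbs = V_cb^* V_cs; the decay factor is vcbs / vcbs^*."""
    return (mixing_phase(u2_std, u2_db) * vcbs / vcbs.conjugate()).imag

def sin_2alpha(u2_std, u2_db, vubd):
    """vubd = V_ub^* V_ud."""
    return -(mixing_phase(u2_std, u2_db) * vubd / vubd.conjugate()).imag
```

In the SM limit ($U^2_{db}=0$, real decay factor) the mixing phase alone sets the asymmetry, recovering $\sin(2\beta)$ from the phase of $U^2_{std\text{-}db}$.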
We analyze all of these constraints together using a joint $\chi^2$ for fitting all of the thirteen experiments in the nine parameter angle space of the $4 \times 4$ mixing matrix. We include both the standard model and FCNC contributions through effective Hamiltonians[@chosilfcnc]. We then make maximum likelihood plots which include ($\sin{(2\alpha)}$, $\sin{(2\beta)}$), ($\rho$, $\eta$), ($x_s$, $\sin{\gamma}$), and those involving the FCNC amplitudes $U_{db}$ and $U_{sb}$ (not shown). The corresponding plots for the four down quark model are shown for present data and for projected $B$ factory data in the following figures. In the figures, we show $\chi^2$ contour plots with confidence levels (CL) at values equivalent to 1-$\sigma$ and at 90% CL (1.64$\sigma$) for present data, and for projected $B$ factory results. Again, for results with the $B$ factories, we use the example of the most likely $\sin{(2\beta)} = 0.62$ with $B$ factory errors of $\pm 0.06$, and errors of $\pm 0.08$ on $\sin{(2\alpha)}$. In Fig. \[rho-eta4q\] we have plotted the $\chi^2$ contours for the location of the vertex of $V_{ub}^*V_{ud}/|V_{cb}V_{cs}| \equiv \rho + i\eta$ (even for the four down quark quadrangle case). We note that in contrast to the standard model, in Fig. \[rho-eta4q\]a the presently allowed contours in the four down quark model go down to $\eta = 0$ at the 90% CL, which can result from the FCNC with its phases in $U_{db}$ causing the known $CP$ violation. In Fig. \[rho-eta4q\]b,c and d we show the $B$ factory cases of $\sin{(2\alpha)} = -1, 0$ and 1, respectively, with contours at 1-$\sigma$ and at 90% CL. The existence of several regions requires that extra experiments in $\sin{(\gamma)}$ or $x_s$ will also be needed to verify or to bound out the extra down quark mixing model. The larger contours at 1-$\sigma$ roughly agree with those of the standard model in Fig. 1. 
In our $\chi^2$ we have used[@chosilckm; @chosilfcnc] $|V_{ub}/V_{cb}| = 0.071 \pm 0.013$, consistent with adjusting $\kappa$ in the Isgur-Wise model to fit the spectra, and using the spread of model results to determine a $\sigma$. In any case, we can consider this accuracy as obtainable in the future, following ref. [@Buras]. Use of the conservative bound of $0.08 \pm 0.03$ used by others still results in multiple regions. The $(\sin{(2\alpha)},\sin{(2\beta)})$ $\chi^2$ contour plot for the four down quark model (not shown) shows that all values of $\sin{(2\beta)}$ and $\sin{(2\alpha)}$ are individually allowed at 1-$\sigma$, and most pairs of values are allowed at 1-$\sigma$. This is a much broader allowed region in $\sin{(2\beta)}$ than the standard model result from present data in Fig. 2. The allowed 1, 2, and 3-$\sigma$ contours in the ($\sin{(2\alpha)}$, $\sin{(2\beta)}$) plot for the cases of the $B$ factory results with the four down quark model are very similar to the SM results shown in Fig. 2. In terms of other experiments, the $(x_s,\sin{(\gamma)})$ plot for the four down quark model is shown in Fig. \[xs-sg4q\]a with the allowed region from present data, with 1-$\sigma$ and 90% CL contours. This allows all values of $\sin{(\gamma)}$ at the 1-$\sigma$ CL at present, and at 1-$\sigma$ constrains $x_s$ to lie between 8 and 25. In the four down quark model, what we mean by “$\sin{(\gamma)}$” is the result of the experiments which would give this variable in the SM [@Kayser]. Here, the four down quark model involves more complicated amplitudes, and “$\sin{(\gamma)}$” is not simply $\sin{(\delta_{13})}$: $$\sin(\gamma) = {\rm Im} \left[ \frac{(U^2_{std-bs} + U^2_{bs})} {|U^2_{std-bs} + U^2_{bs}|} \frac{(V^*_{ub} V_{cs})}{|V^*_{ub} V_{cs}|} \right],$$ $$x_s = 1.2 x_d \frac{|U^2_{std-bs}+U^2_{bs}|}{|U^2_{std-db}+U^2_{db}|},$$ where $$U^2_{std-bs} = (\alpha/(4 \pi \sin^2{\theta_W}))y_t f_2(y_t) (V^*_{tb}V_{ts})^2.$$ In Figs. 
\[xs-sg4q\]b, c and d are shown the cases $\sin{(2\alpha)} = -1, 0$, and 1, respectively, at 1-$\sigma$ and at 90% CL. They reflect the same regions that appeared in the $(\rho,\eta)$ plots, Figs. \[rho-eta4q\]b, c, and d. The resemblance is increased if we recall that roughly $\sin{(\gamma)} \approx \eta$, and also that $x_s \propto 1/|V_{td}|^2$ where $|V_{td}|$ is the distance from the $\rho=1, \eta=0$ point. We see that experiments on $\sin{(\gamma)}$ and $x_s$ are necessary to resolve the possible regions allowed by the four down quark model. For the case of $\sin{(2\alpha)}=-1$, the allowed values of $\sin{(\gamma)}$ in Fig. \[xs-sg4q\]b are smaller than those for the standard model in Fig. 3a. The asymmetry $A_{B_s}$ in $B_s$ mixing in the standard model with the leading decay process of $b \to c \bar c s$ has no significant phase from the decay or from the mixing which is proportional to $V_{ts}^2$. The vanishing of this asymmetry is a test of the standard model[@nirsilnp], and a non-zero value can result from a “new physics” model. With the FCNC, the new result is $$A_{B_s} = {\rm Im} \left[ \frac{(U^2_{std-bs} + U^2_{bs})}{|U^2_{std-bs} + U^2_{bs}|} \frac{(V^*_{cb} V_{cs})}{(V^*_{cb} V_{cs})^*} \right]$$ The extent of the non-zero value of $A_{B_s}$ in the four down quark model is shown in Fig. \[xs-Bs4q\] from present data with contours at 1, 2 and 3-$\sigma$. Plots for the $B$ factory cases (not shown) are similar. We note that it is bounded to be rather small from present data at 1-$\sigma$, i.e. less than 0.06, and less than 0.32 at 2-$\sigma$. We compared the limits on the four down quark FCNC amplitude $|U_{db}|$ versus the standard model amplitude $|U_{std-db}|$ for $B_d^0 - \bar{B}_d^0$ mixing , at present and after the $B$ factory results. At present the constraints are such that $|U_{db}|$ can go from zero up to as large as the magnitude of $|U_{std-db}|$ at 2-$\sigma$ [@chosilfcnc]. 
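The four-down-quark expressions for “$\sin{(\gamma)}$”, $x_s$, and the $B_s$ asymmetry given above can be sketched in the same way as the $B_d$ asymmetries. The example values in the check are illustrative limits (FCNC switched off), not fitted amplitudes.

```python
import cmath

# Sketch of the four-down-quark expressions above, with the B_s mixing
# amplitude as the superposition U2_std_bs + U2_bs. Inputs are complex;
# the example values used in checking are illustrative only.

def sin_gamma(u2_std_bs, u2_bs, vubcs):
    """vubcs = V_ub^* V_cs; note the modulus (not conjugate) denominator."""
    m = u2_std_bs + u2_bs
    return (m / abs(m) * vubcs / abs(vubcs)).imag

def x_s_four_quark(x_d, u2_std_bs, u2_bs, u2_std_db, u2_db):
    return 1.2 * x_d * abs(u2_std_bs + u2_bs) / abs(u2_std_db + u2_db)

def a_bs(u2_std_bs, u2_bs, vcbs):
    """B_s mixing asymmetry; vcbs = V_cb^* V_cs. Near-null in the SM limit."""
    m = u2_std_bs + u2_bs
    return (m / abs(m) * vcbs / vcbs.conjugate()).imag
```

With $U^2_{bs}=0$ and real CKM factors the asymmetry $A_{B_s}$ vanishes, matching the statement that a non-zero value signals physics beyond the standard model.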
For the sample $B$ factory results, the $|U_{db}|$ range is only somewhat more restricted. The total phase of $B_d^0 - \bar{B}_d^0$ mixing is closely restricted, however, to the same range as the standard model amplitude. The 90% CL limits on the three new quark mixing elements $|V_{4d}|$, $|V_{4s}|$, and $|V_{4b}|$ are roughly equal to the mixing angles to the fourth down quark $\theta_{14}$, $\theta_{24}$ and $\theta_{34}$, respectively. They are bounded by 0.06, 0.05, and 0.14, respectively. Conclusions =========== The main conclusion with the four down quark model for the $B$ factory cases is that there are multiple allowed regions, as shown in the $(\rho,\eta)$ plot and the $(x_s,\sin{(\gamma)})$ plot. This will require additional experiments on $x_s$ and $\sin{(\gamma)}$ to pin down the four down quark model results, and eventually to verify or bound out the relevance of the model here. We note that in the four down quark model the $\eta \propto $ Im$(V_{ub}^*)$ range can reach zero, which is quite different from the standard model. This is because the other phases can account for $CP$ violation. We have also found that the present range of $x_s$ at 1-$\sigma$ is from 11 to 24 in the standard model, and from 8 to 25 in the four down quark model. The $\sin{(\gamma)}$ range is from 0.8 to 1.0 in the SM at 1-$\sigma$, and completely undetermined in the four down quark model at 1-$\sigma$. Finally, the $B_s$ asymmetry, which almost vanishes in the standard model, is found to range from zero up to 0.06 at 1-$\sigma$ in the four down quark model. Although its presence is a signal against the standard model, it may be small in new physics models, as it is in this one, and thus hard to detect. This research was supported in part by the U.S. Department of Energy under Contract No. DE-FG0391ER40679. We acknowledge the hospitality of the Aspen Center for Physics. M. Shin, M. Bander, and D. Silverman, Phys. Lett. [**B219**]{}, 381 (1989). 
Y. Nir and D. Silverman, Nucl. Phys. [**B345**]{}, 301 (1990). Y. Nir and D. Silverman, Phys. Rev. D [**42**]{}, 1477 (1990). D. Silverman, Phys. Rev. D [**45**]{}, 1800 (1992), and references to earlier literature therein. W.-S. Choong and D. Silverman, Phys. Rev. D [**49**]{}, 1649 (1994). G. C. Branco, T. Morozumi, P. A. Parada, and M. N. Rebelo, Phys. Rev. D [**48**]{}, 1167 (1993). L. Lavoura and J. P. Silva, Phys. Rev. D [**47**]{}, 1117 (1993). W.-S. Choong and D. Silverman, Phys. Rev. D [**49**]{}, 2322 (1994). Typographical corrections are changing the signs of $U_{sd}$ and $U_{ds}$ in Eqns. (13) and (36). Most of the values and errors of experiments are the same as in Ref. [@chosilfcnc]. The exceptions here are first in $K_L \to \mu \mu$ where the joint error from BNL and KEK is scaled up by a factor of 1.3 using the Particle Data Group method. After subtracting the $2\gamma$ unitarity contribution we have $|A_R|^2 = (0.23 \pm 0.54) \times 10^{-9}$. The latest experimental report from BNL is A. P. Heinson [*et al.*]{}, Phys. Rev. D [**51**]{}, 985 (1995). At 1-$\sigma$, the bound on $|A_R|$ is about the upper limit on the long distance contribution estimates. M. S. Atiya [*et al.*]{}, Phys. Rev. Lett. [**70**]{}, 2521 (1993). With ${\rm BR}(K^+ \to \pi^+ \nu {\bar \nu}) < 5.2 \times 10^{-9}$ at 90% CL, and ignoring the much smaller standard model contribution, we have the $Z^0$ exchange FCNC bound in $\Delta \chi^2 = (|U_{ds}|^2/ 6.5 \times 10^{-9})^2$ for this experiment. See A. J. Buras, M. E. Lautenbacher, and G. Ostermaier, Phys. Rev. D [**50**]{}, 3433 (1994) for a more thorough analysis of the projected measurements on the standard model. F. Porter and A. Snyder, Babar note No. 140 and $B$ Factory Letter of Intent, SLAC Report 443 (1994). P. B. Mackenzie, Lepton-Photon Symposium, Cornell, Aug. 10-15, (1993) (hep-ph 9311242). We now use in the $\chi^2$ fits $\hat{B}_K = 0.82 \pm 0.16$, although the lattice errors on this ratio might be much smaller. 
A. Ali and D. London, CERN-TH 7398/94 from which we now use next to leading order $\hat{\eta}_{cc}=1.10$, $\hat{\eta}_{tt}=0.57$, leading order $\hat{\eta}_{ct}=0.36$, and $\hat{\eta}_B = 0.55$. R. Aleksan, B. Kayser, and D. London, Proc. of the Workshop on $B$ Physics at Hadron Accelerators, p. 299-308, Snowmass, Colo. 1993, Ed. P. McBride and C. S. Mishra; R. Aleksan, I. Dunietz, B. Kayser, and F. LeDiberder, Nucl. Phys. [**B361**]{}, 1991; R. Aleksan, I. Dunietz, and B. Kayser, Z. Phys. [**C54**]{}, 653 (1992).
--- abstract: 'We investigate the evolution of dust content in galaxies from redshifts $z=0$ to $z=9.5$. Using empirically motivated prescriptions, we model galactic-scale properties—including halo mass, stellar mass, star formation rate, gas mass, and metallicity—to make predictions for the galactic evolution of dust mass and dust temperature in main sequence galaxies. Our simple analytic model, which predicts that galaxies in the early Universe had greater quantities of dust than their low-redshift counterparts, does a good job at reproducing observed trends between galaxy dust and stellar mass out to $z\approx 6$. We find that for fixed galaxy stellar mass, the dust temperature increases from $z=0$ to $z=6$. Our model forecasts a population of low-mass, high-redshift galaxies with interstellar dust as hot as, or hotter than, their more massive counterparts; but this prediction needs to be constrained by observations. Finally, we make predictions for observing 1.1-mm flux density arising from interstellar dust emission with the Atacama Large Millimeter Array.' author: - 'Nia Imara, Abraham Loeb, Benjamin D. Johnson, Charlie Conroy, and Peter Behroozi' bibliography: - 'dustygal.bib' title: | A Model Connecting Galaxy Masses, Star Formation Rates,\ and Dust Temperatures Across Cosmic Time --- Introduction ============ Interstellar dust has a number of important implications for the formation and evolution of galaxies. Since high-mass stars produce metals, the building blocks of dust grains in the interstellar medium (ISM), dust abundance is an indicator of the level of star formation activity. As a byproduct of stellar nucleosynthesis, metals are expelled into the ISM via supernovae and stellar winds, and about 30–50% of the metals [@Draine_2007] condense into dust grains. Thus, dust traces the metal abundance of galaxies [@Lisenfeld_1998; @Dwek_1998]. 
In addition to being a product of previous star formation, dust also influences the formation of new stars, since it catalyzes the formation of molecular hydrogen (e.g., Gould & Salpeter 1963), thus enabling the formation of molecular clouds, where stars form. Moreover, dust contributes to gas cooling [e.g., @Ostriker_1973; @Peeples_2014; @Peek_2015], and by stimulating cloud fragmentation, dust may affect the form of the initial mass function [@Omukai_2005]. Besides influencing interstellar chemistry and galaxy physics, dust affects the detectability and observed properties of galaxies. Dust grains absorb ultraviolet (UV) light and re-emit the radiation at infrared (IR) wavelengths [@Spitzer_1978; @Draine_1984; @Mathis_1990; @Tielens_2005]. Since light emitted from galaxies is attenuated by dust, with shorter wavelengths suffering the most attenuation, corrections to the observed spectra are needed to faithfully determine galaxy properties, including the stellar mass and luminosity function. Particularly at high redshifts, where many surveys are executed in the UV rest frame, the measured properties of galaxies critically depend on dust extinction. Because dust strongly attenuates optical and UV light, it has especially important consequences for galaxies in the early Universe. First, it affects the escape fraction of UV photons capable of reionizing the Universe at $z\gtrsim 6$. Second, conversely, interstellar dust may provide protection from the UV background of the intergalactic medium (IGM) which, by photoionization heating, could have evaporated dwarf galaxies during the epoch of reionization [@Barkana_1999]. Furthermore, dust obscures as much as half of the light in star-forming galaxies [@Lagache_2005], and at high redshifts ($z\gtrsim 3$), our understanding of dust-obscured star formation activity in typical star-forming galaxies is very incomplete [e.g., @Pope_2017]. 
![image](galaxy_geometry.png){width="5in"} The dust content of galaxies in the local and high-redshift Universe has been the focus of a number of observational studies aiming to understand the physics in the ISM that regulates star formation and to constrain galaxy formation models. The launch of the *Herschel* Space Observatory [@Pilbratt_2010] has made possible observational constraints including the relationship between dust mass and stellar mass [@Corbelli_2012; @Santini_2014], dust mass and gas fraction [@Cortese_2012], and the evolution of dust temperature [@Magdis_2012; @Hwang_2010; @Magnelli_2014]. In recent years, continuum observations with the Atacama Large Millimeter Array (ALMA) have opened a new window on dust formation and evolution in the early Universe. A number of ALMA programs have detected dust in normal, UV-selected galaxies ($L_{\rm IR} < 10^{12} L_\odot$) from $z=4$–8.4 [e.g., @Capak_2015; @Watson_2015; @Willott_2015; @Laporte_2017]. @Dunlop_2017 presented results on the first deep ALMA image at 1.3 mm of the Hubble Ultra Deep Field. Particularly exciting are the detections of large amounts of dust in galaxies during the epoch of reionization [e.g., @Watson_2015; @Laporte_2017]. Such observations raise interesting questions about dust production and the rate of supernovae in the early Universe, since significant star formation began at $z\sim 10$–20 [@Robertson_2015; @Mesinger_2016; @Planck_2016], when the Universe was no more than a few hundred million years old. The *James Webb Space Telescope* (JWST) is also expected to transform our understanding of dust in the early Universe. Given this context, the goal of this paper is to investigate the cosmic evolution of galaxy dust mass ($M_{\rm dust}$) and dust temperature ($T_{\rm dust}$) in “normal” star-forming galaxies, with a simple theoretical model. 
Such a study is important because measurements of $M_{\rm dust}$ and $T_{\rm dust}$ shed light on the physical conditions of star-forming environments and are key ingredients in cosmological models of galaxy formation. A number of groups have used hydrodynamical simulations, analytical models, or semi-analytical models (SAMs), including self-consistent tracking of dust, to make predictions for the evolution of galactic dust [e.g., @Dwek_2007; @Dwek_2011; @Bekki_2015; @McKinnon_2016; @Mancini_2016; @Popping_2017]. Traditional simulations and SAMs typically employ recipes for interstellar chemistry and dust production, and they include a great deal of galaxy physics that affect the ISM. @Dwek_2007 developed analytical models describing the evolution of high-redshift $(z\gtrsim 6)$ dust, assuming that the evolution of dust depends solely on its production and destruction by core-collapse supernovae. @Dwek_2011 extended this work by examining the relative roles of supernovae (SNe) and asymptotic giant branch (AGB) stars in the production of dust by $z\approx 6$. Both studies were designed to account for the observed dust content in a hyperluminous quasar at $z=6.4$, when the Universe was $\sim 900$ Myr old. A key advantage of our model that distinguishes it from previous simulations and SAMs is the relative simplicity of its ingredients and, consequently, the comparative ease of physically interpreting the results. Rather than modeling the micro-physics of galaxies to make predictions about their dust content, we model their large-scale, global properties, including stellar mass, star formation rate (SFR), gas mass, and optical depth due to dust, to determine $M_{\rm dust}$ and $T_{\rm dust}$. Our model also has the advantage of producing results useful for observational efforts. In particular, we make predictions for the evolution of $T_{\rm dust}$, a critical quantity for observers interested in estimating the dust mass, especially in high-redshift galaxies. 
The total dust mass in a galaxy can only be reliably measured using multi-wavelength observations of dust emission from IR to sub-millimeter wavelengths, and then modeling the spectral energy distribution (SED). In the absence of multi-wavelength observations (for instance, in the recent ALMA observations of high-redshift galaxies), galaxy dust mass is typically estimated by assuming the SED takes the form of a single-temperature modified blackbody [e.g., @Watson_2015; @Laporte_2017]. Thus, uncertainties in the total galaxy dust mass and dust-to-gas ratio are usually dominated by the unknown dust temperature. In this paper, we use empirically-motivated prescriptions for the relationships between galaxy dark matter halo mass, stellar mass, size, SFR, optical depth due to dust, gas mass, and metallicity to make predictions for the cosmic evolution of $M_{\rm dust}$ and $T_{\rm dust}$ in normal, main sequence galaxies. We begin by describing our model for determining the optical depth, mass, and temperature of dust in galaxies in Section \[sec:model\]. In Section \[sec:results\], we present the results of our model, test them against observations, and make predictions for future infrared observations of high-redshift galaxies. We discuss caveats and limitations of our model in Section \[sec:caveats\] and summarize our results in Section \[sec:summary\]. Throughout, we assume a flat, $\Lambda$CDM cosmology with the following parameters: $(\Omega_m, \Omega_\Lambda, \sigma_8)=(0.27, 0.73, 0.82)$ and $h=0.7$, where $h$ is the Hubble constant in units of $100~\mathrm{km\,s^{-1}\,Mpc^{-1}}$. The Model {#sec:model} ========= Our model assumes a simplified, spherical morphology for all main sequence galaxies. We consider two geometries for the distribution of stars with respect to the ISM, illustrated in Figure \[fig:geometry\]. In the “point source” geometry, the stars are located in the galactic center and are surrounded by a foreground screen of dust and gas. 
The second geometry consists of a homogeneous mixture of stars, dust, and gas. In reality, the distribution of dust, gas, and stars is anisotropic and clumpy, with large spiral galaxies having dust, star-forming gas, and high-mass stars concentrated in molecular clouds in the disk. A strength of our model, however, is its flexibility: it encompasses a variety of galaxies with different morphologies, and it folds in our poor knowledge of the exact inclination of galaxies, especially at high redshifts.

[lcc]{}
Symbol & Definition & Equation(s)\
${\mbox{$T_{\rm dust}$}}$ & Dust temperature & \[eq:tdust\]\
$L_\nu$ & Specific luminosity of stars & \[eq:tdust\]\
$f_\star$ & Covering fraction of interstellar dust & \[eq:tdust\]\
$f_{\rm geom}$ & Geometric factor & \[eq:geom\]\
$\tau_\nu$ & Optical depth due to dust & \[eq:tau0\], \[eq:tau1\]\
$r$ & Galactic radius & \[eq:rgal\]\
$r_{\rm vir}$ & Virial radius & \[eq:rvir\]\
$\kappa_\nu$ & Dust opacity & \[eq:kappa\]\
${\mbox{$M_{\rm dust}$}}$ & Dust mass & \[eq:mdust\]\
${\mbox{$M_{\rm gas}$}}$ & Gas mass & \[eq:mgas\]\
$\mathcal{Z}$ & Metallicity & \[eq:mz\]\
DGR & Dust-to-gas ratio & \[eq:dgr\]\
${\mbox{$M_\star$}}$ & Stellar mass &\
${\mbox{$M_{\rm halo}$}}$ & Halo mass &\

To describe the redshift evolution of dust temperature in an individual galaxy, ${\mbox{$T_{\rm dust}$}}(z)\equiv{\mbox{$T_{\rm dust}$}}$, we assume the dust is in thermal equilibrium with the total radiation field of the galaxy. That is, dust grains emit and absorb energy at the same rate, $$\frac{dE_{\rm emit}}{dt} = \frac{dE_{\rm absorb}}{dt}.$$ We assume that the stellar radiation field and the cosmic microwave background (CMB) contribute to dust heating [e.g., @Rowan-Robinson_1979; @DaCunha_2013] and that dust grains cool via blackbody radiation.
Thus, the power per unit mass emitted by dust is equal to the total power absorbed per unit mass of dust, according to $$\label{eq:tdust} \begin{split} \int_0^\infty \frac{8\pi h}{c^2}\frac{\nu^3}{{\rm exp}(h\nu/k{\mbox{$T_{\rm dust}$}})-1}\kappa_\nu d\nu = \\ \int_0^\infty \frac{ L_\nu}{r^2} f_{\rm geom} f_\star \kappa_\nu d\nu \\ + \int_0^\infty \frac{8\pi h}{c^2}\frac{\nu^3}{{\rm exp}(h\nu/k{\mbox{$T_{\rm cmb}$}})-1}\kappa_\nu d\nu, \end{split}$$ where ${\mbox{$T_{\rm cmb}$}}=2.725(1+z)$ K is the CMB temperature at a given redshift $z$, $L_\nu$ is the specific luminosity of all the stars in a galaxy, $r$ is the galactic radius, and $\kappa_\nu$ is the dust opacity at frequency $\nu$. To account for the porosity of the ISM and the fact that some stellar radiation will escape a galaxy without being absorbed by dust grains, we include the factor $f_\star$, a number between 0 and 1 that parameterizes the fraction of a galaxy’s surface area covered by dust. The factor $f_{\rm geom}$ accounts for the geometry of stars and dust, as $$\label{eq:geom} f_{\rm geom} = \begin{cases} e^{-\tau_\nu} & \text{Case 1: point source} \\ (1-e^{-\tau_\nu})/\tau_\nu & \text{Case 2: homogeneous}, \end{cases}$$ where $\tau_\nu$ is the optical depth due to dust. In the first case, the stars at the galactic center act, in effect, as a single point source of radiation behind a foreground screen of dust. The second case corresponds to the solution of the radiative transfer equation for a homogeneous mixture of stars and dust [@Mathis_1972; @Natta_1984]. In order to solve equation (\[eq:tdust\]) for ${\mbox{$T_{\rm dust}$}}$, we model a number of parameters for each galaxy at a given redshift, including $r$, $\kappa_\nu$, $\tau_\nu$, and $L_\nu$. These parameters and others used in this paper are summarized in Table \[table1\].
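As a concrete illustration, the thermal balance of equation (\[eq:tdust\]) has a closed-form solution when the opacity is a pure power law, $\kappa_\nu = \kappa_0(\nu/\nu_0)^\beta$, since both blackbody integrals then scale as $T^{4+\beta}$. The sketch below is a minimal implementation under that simplifying assumption (it does not reproduce the interpolated opacity used later in the paper); the input `p_star` stands for the absorbed stellar power per unit dust mass, i.e. the first term on the right-hand side of equation (\[eq:tdust\]), and is an assumed quantity of the example.

```python
import math

H = 6.626e-27   # Planck constant, erg s (cgs)
C = 2.998e10    # speed of light, cm/s
K = 1.381e-16   # Boltzmann constant, erg/K

def f_geom(tau, geometry):
    """Geometric factor of equation (eq:geom)."""
    if geometry == "point":
        return math.exp(-tau)                    # Case 1: point source
    if geometry == "homogeneous":
        return (1.0 - math.exp(-tau)) / tau      # Case 2: homogeneous mixture
    raise ValueError("unknown geometry: %s" % geometry)

def _zeta(s, terms=400):
    # Riemann zeta function by direct summation; ample accuracy for s >= 5.
    return sum(n ** -s for n in range(1, terms))

def emitted_power(t_dust, kappa0=0.1, nu0=1.0e12, beta=2.0):
    """Left-hand side of equation (eq:tdust) for kappa_nu = kappa0 (nu/nu0)^beta:
    (8 pi h / c^2) kappa0 nu0^-beta Gamma(4+beta) zeta(4+beta) (k T / h)^(4+beta)."""
    s = 4.0 + beta
    return (8.0 * math.pi * H / C ** 2) * kappa0 * nu0 ** -beta \
        * math.gamma(s) * _zeta(s) * (K * t_dust / H) ** s

def t_dust(p_star, z, beta=2.0, kappa0=0.1, nu0=1.0e12):
    """Solve emitted(T_dust) = p_star + emitted(T_cmb).  Because emitted power
    scales as T^(4+beta), T_dust = (p_star/coeff + T_cmb^(4+beta))^(1/(4+beta))."""
    s = 4.0 + beta
    coeff = emitted_power(1.0, kappa0, nu0, beta)   # emission coefficient at T = 1 K
    t_cmb = 2.725 * (1.0 + z)
    return (p_star / coeff + t_cmb ** s) ** (1.0 / s)
```

With no stellar heating the dust simply relaxes to the CMB temperature, and additional heating raises ${\mbox{$T_{\rm dust}$}}$ only with the weak $1/(4+\beta)$ power familiar from modified-blackbody fits.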
Dust Optical Depth and Mass {#sec:tau}
---------------------------

For each individual galaxy, we assume a spherically symmetric system in which the dust mass density follows a power law, $\rho_{\rm dust}(r)=\rho_0 r^{-\gamma}$. The frequency-dependent optical depth due to dust is defined as $$\label{eq:tau0} \begin{split} \tau_\nu &= \int_0^{{\mbox{$R_{\rm dust}$}}} \rho_{\rm dust}(r) \kappa_\nu dr \\ &= \int_0^{{\mbox{$R_{\rm dust}$}}} \rho_0 r^{-\gamma} \kappa_\nu dr, \end{split}$$ where $\rho_0$ is the density normalization of a given galaxy, and ${\mbox{$R_{\rm dust}$}}$ is the radial extent of dust. The total dust mass in a galaxy also depends on $\rho_0$ as $$\label{eq:dust_mass} \begin{split} M_{\rm dust} &= \int_0^{{\mbox{$R_{\rm dust}$}}} \rho_{0} r^{-\gamma} 4\pi r^2 dr \\ &= 4\pi\rho_0 \frac{{\mbox{$R_{\rm dust}$}}^{3-\gamma}}{3-\gamma}. \end{split}$$ By combining equations (\[eq:tau0\]) and (\[eq:dust\_mass\]), we may express $\tau_\nu$ in terms of dust mass: $$\label{eq:tau1} \tau_{\nu} = \frac{M_{\rm dust}(M_\star, z)}{4\pi {\mbox{$R_{\rm dust}$}}^2}\frac{3-\gamma}{1-\gamma}\kappa_\nu,$$ where we have drawn attention to the dependence of ${\mbox{$M_{\rm dust}$}}$ on the stellar mass of the galaxy, ${\mbox{$M_\star$}}$, and on the redshift, $z$. We let $\gamma =0$; we justify this choice in §\[sec:caveats\]. To evaluate equation (\[eq:tau1\]), then, we need to model $r$, $\kappa_\nu$, and ${\mbox{$M_{\rm dust}$}}$, all of which depend on $z$.

### Galactic radius {#sec:radius}

To acquire a rough approximation of galaxy disk sizes, we adopt the basic picture of @Fall_1980 and others [e.g., @Mo_1998; @Somerville_1999], in which the collapsing gas of a forming galaxy acquires the same specific angular momentum as the dark matter halo, and this angular momentum is conserved as the gas cools.
The specific angular momentum is often expressed using the dimensionless spin parameter, $\lambda=J|E|^{1/2}G^{-1}M^{-5/2}$, where $J$ is the angular momentum, $E$ is the total energy of the halo, $G$ is Newton’s gravitational constant, and $M$ is the mass [e.g., @Peebles_1969; @Mo_1998; @Somerville_1999]. For a halo with a singular isothermal density profile $\rho\propto r^{-2}$, the disk exponential scale radius is given by $r_d = \lambda R_{\rm halo}/\sqrt{2}$, where $R_{\rm halo}$ is the virial radius of the dark matter halo. @Somerville_2018 explored $\lambda$ using empirical constraints from $z\sim 0.1$–3 galaxies in the GAMA survey [@Driver_2011; @Liske_2015] and CANDELS survey [@Grogin_2011; @Koekemoer_2011]. Using relationships from their halo abundance matching model, they mapped galaxy stellar mass to halo mass and inferred from these results a median value of the spin parameter, $\lambda=0.036$, corresponding to a ratio between galaxy half-mass radius and halo size of ${\mbox{$R_{\rm gal}$}}/R_{\rm halo} = 0.018$. They found that $\lambda$ is roughly independent of stellar mass and exhibits weak dependence on redshift. We adopt this value for $\lambda$ and make the simplifying assumption that the radial extent of the dust, ${\mbox{$R_{\rm dust}$}}$, is equal to ${\mbox{$R_{\rm gal}$}}$, $$\label{eq:rgal} {\mbox{$R_{\rm dust}$}}={\mbox{$R_{\rm gal}$}}=0.018 R_{\rm halo}.$$ In §\[sec:caveats\], we discuss some of the limitations of assuming a single value for the spin parameter. To determine $R_{\rm halo}$, we adopt the $R_{\rm halo}$-$M_{\rm halo}$ relation of @Loeb_2013: $$\label{eq:rvir} \begin{split} R_{\rm halo} &= 0.784 \left[ \frac{\Omega_m}{\Omega_m(z)} \frac{\Delta_c}{18\pi^2} \right]^{-1/3} \\ &\times \left( \frac{M_{\rm halo}}{10^8~M_\odot}\right)^{1/3} \left(\frac{10}{1+z}\right) h^{-2/3}~\rm{kpc}, \end{split}$$ where $$\begin{split} \Delta_c &= 18\pi^2 + 82d - 39d^2 \\ d &= \Omega_m(z)-1 \\ \Omega_m(z) & = \frac{\Omega_m (1+z)^3}{\Omega_m(1+z)^3 + \Omega_\Lambda}.
\end{split}$$ For ${\mbox{$M_{\rm halo}$}}$ in equation (\[eq:rvir\]), we employ the stellar-mass-halo-mass (SMHM) relations of @Behroozi_2013a, discussed in detail in §\[sec:smhm\]. In Figure \[fig:model\], we show how the ${\mbox{$R_{\rm gal}$}}$-${\mbox{$M_\star$}}$ relation evolves with redshift.

### Opacity {#sec:opacity}

For most extragalactic environments, especially for distant galaxies, we have poor knowledge of the composition and size distribution of dust grains, encapsulated in $\kappa_\nu$ in equation (\[eq:tau1\]). For values of $\nu>10^{12}$ Hz, we adopt the Galactic extinction laws of @Mathis_1990 and @Li_2001. We calculate an interpolated function for $\kappa_\nu$ based on a combination of these two models, since the @Mathis_1990 model extends to lower frequencies, down to $\sim 10^{12}$ Hz, while the @Li_2001 model extends up to $\sim 10^{18}$ Hz. For frequencies $\nu \le 10^{12}$ Hz, we adopt the Beckwith et al. (1990) power-law treatment for opacity, $$\label{eq:kappa} \kappa_\nu=0.1 \left( \frac{\nu}{1000~\rm{GHz}} \right)^\beta~\rm{cm}^2 \rm{g}^{-1},$$ where $\beta=2$. The Beckwith et al. model, which originally assumed $\beta=1$, was calibrated to match the emissivity properties of dust in Galactic protoplanetary disks. The frequency dependence of $\kappa_\nu$ is uncertain and depends on the size and composition of dust grains. Using a different slope or normalization for $\kappa_\nu$ [e.g., @James_2002; @Dunne_2011; @daCunha_2008; @Clark_2016] would affect the calculation of the optical depth (equation \[eq:tau1\]) in our model and thus the dust temperature. We describe how changes in the normalization or slope for $\kappa_\nu$ affect our results in Section \[sec:omd\].
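The low-frequency branch of the opacity, equation (\[eq:kappa\]), is straightforward to write down; a minimal sketch follows (the $\nu>10^{12}$ Hz extinction-curve interpolation is not reproduced here):

```python
def kappa_powerlaw(nu_hz, beta=2.0):
    """Equation (eq:kappa): kappa_nu = 0.1 (nu / 1000 GHz)^beta cm^2 g^-1,
    used for nu <= 1e12 Hz (1000 GHz)."""
    return 0.1 * (nu_hz / 1.0e12) ** beta
```

Changing `beta` from 2 to the original Beckwith et al. value of 1 doubles the opacity at half the fiducial frequency, which is one way to gauge the sensitivity discussed above.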
![image](mdust_vs_mstar.png){width="6in"}

### Dust Mass

Next, we determine ${\mbox{$M_{\rm dust}$}}$, which relates to the gas mass of a galaxy, ${\mbox{$M_{\rm gas}$}}$, via a dust-to-gas ratio, DGR: $$\label{eq:mdust} {\mbox{$M_{\rm dust}$}}\equiv {\mbox{$M_{\rm gas}$}}\times\rm{DGR}.$$ Several studies have explored the correlation between galactic gas mass and stellar mass or SFR [e.g., @Sargent_2014; @Zahid_2014]. Yet empirical relations for the total gas mass as a function of stellar mass or SFR rarely extend to redshifts higher than $z\approx 2$ [e.g., @Zahid_2014], beyond which measurements of the [$\mbox{\rm \ion{H}{1}}$]{} gas mass are unreliable, and the gas fraction of galaxies is expected to be increasingly dominated by molecular gas. After investigating different prescriptions for the gas mass, we decided to follow @Zahid_2014, who fit a stellar-mass-metallicity (MZ) relation for star-forming galaxies at $z\lesssim 1.6$. They assume that ${\mbox{$M_\star$}}/{\mbox{$M_{\rm gas}$}}\approx ({\mbox{$M_\star$}}/M_0)^\gamma$, where $M_0$ is a metallicity-dependent characteristic mass, above which $\mathcal{Z}$ approaches a saturation limit, and $\gamma$ is a power-law index. By combining this expression with their fit for the MZ relation, @Zahid_2014 determine $$\label{eq:mgas} {\mbox{$M_{\rm gas}$}}({\mbox{$M_\star$}},z) = 3.87\times 10^9 (1+z)^{1.35}\left( \frac{{\mbox{$M_\star$}}}{10^{10}~{\ensuremath{M_\odot}}} \right)^{0.49}.$$ To determine the stellar mass in the above equation, we use the SMHM models of @Behroozi_2013a, who constrain average galaxy stellar masses and SFRs as a function of halo mass (see §\[sec:smhm\]). We extrapolate the ${\mbox{$M_{\rm gas}$}}$-${\mbox{$M_\star$}}$ relation of Zahid et al. (2014) to redshifts $z> 1.6$, as shown in Figure \[fig:model\]. We discuss potential consequences of this choice and investigate an alternative prescription for ${\mbox{$M_{\rm gas}$}}$ in §\[sec:caveats\]. Equation (\[eq:mdust\]) also depends on the dust-to-gas ratio, DGR.
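The gas-mass prescription of equation (\[eq:mgas\]) in code form; the extrapolation beyond $z=1.6$ is the same one adopted in the text:

```python
def m_gas(m_star_msun, z):
    """Equation (eq:mgas), from Zahid et al. (2014):
    M_gas = 3.87e9 (1+z)^1.35 (M_star / 1e10 M_sun)^0.49, in solar masses."""
    return 3.87e9 * (1.0 + z) ** 1.35 * (m_star_msun / 1.0e10) ** 0.49
```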
Since dust is composed of heavy elements and traces the metal abundance of galaxies, the problem of determining the DGR can be reduced to one of determining the metallicity, $\mathcal{Z}$. Several authors have conducted observational investigations of the MZ relation in nearby galaxies [e.g., @Lequeux_1979; @Lee_2006; @Zahid_2012; @Berg_2012; @Zahid_2014] and in distant galaxies out to $z\lesssim 3$ [@Savaglio_2005; @Erb_2006; @Maiolino_2008; @Yabe_2012; @Zahid_2013; @Hunt_2016]. We adopt the prescription of @Hunt_2016, who compiled observations of $\sim 1000$ galaxies up to $z\sim 3.7$, with metallicities spanning two orders of magnitude, SFRs spanning 6 orders of magnitude, and stellar masses spanning 5 orders of magnitude. Using a principal component analysis, Hunt et al. (2016) find $$\label{eq:mz} \mathcal{Z} = -0.14 \log(\rm{SFR})+0.37 \log({\mbox{$M_\star$}}) +4.82,$$ where, by convention, $\mathcal{Z}\equiv 12 + \log(\rm{O/H})$ is defined in terms of the gas-phase oxygen abundance (O/H). We extrapolate equation (\[eq:mz\]) for galaxies at $z>3.7$, as shown in Figure \[fig:model\]. We now relate $\mathcal{Z}$ to the DGR, using an empirical formula determined for local galaxies. Rémy-Ruyer et al. (2014) evaluated the gas-to-dust ratio as a function of metallicity for nearby galaxies with metallicities spanning the range between $\sim{\ensuremath{Z_\odot}}/50$ and $\sim 2\,{\ensuremath{Z_\odot}}$. The authors provide alternative functional forms for the gas-to-dust ratio versus metallicity relationship, depending on the CO-to-[$\mbox{H$_2$}$]{} conversion factor they employed to estimate the total amount of molecular mass in a galaxy. We adopt the function they derive assuming a metallicity-dependent conversion factor (as opposed to the standard Milky Way CO-to-[$\mbox{H$_2$}$]{} conversion factor).
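A minimal sketch chaining the MZ relation of equation (\[eq:mz\]) to the adopted Rémy-Ruyer et al. (2014) mapping, written out as equation (\[eq:dgr\]) in the next paragraph. The solar oxygen abundance $12+\log(\rm O/H)=8.69$ is an assumed value of the example, not one quoted in the text:

```python
import math

Z_SUN = 8.69            # assumed solar 12 + log(O/H) (not quoted in the text)
LOG_DGR_SUN = -2.21     # Zubko et al. (2004)

def metallicity(m_star_msun, sfr_msun_per_yr):
    """Equation (eq:mz), Hunt et al. (2016): Z = 12 + log(O/H)."""
    return -0.14 * math.log10(sfr_msun_per_yr) + 0.37 * math.log10(m_star_msun) + 4.82

def dust_to_gas(Z):
    """Equation (eq:dgr), Remy-Ruyer et al. (2014).  The linear abundance ratio
    Z/Z_sun corresponds to 10**(Z - Z_SUN) in the 12 + log(O/H) convention."""
    x = Z - Z_SUN                      # = log10(Z / Z_sun)
    if 10.0 ** x > 0.26:
        log_dgr = LOG_DGR_SUN + x
    else:
        log_dgr = LOG_DGR_SUN + 3.15 * x + 1.25
    return 10.0 ** log_dgr
```

For the $z=0$ entry of Table \[table2\] (${\mbox{$M_\star$}}=2.7\times10^{10}~{\ensuremath{M_\odot}}$, SFR $=0.76~{\ensuremath{M_\odot}}$ yr$^{-1}$), this chain returns $\mathcal{Z}\approx 8.7$ and DGR $\approx 1/160$, matching the tabulated values.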
In terms of the DGR, $$\label{eq:dgr} \log\left(\frac{\mathrm{DGR}}{\mathrm{DGR}_\odot}\right) = \begin{cases} \log \left(\frac{\mathcal{Z}}{{\ensuremath{Z_\odot}}}\right) & \text{if } \mathcal{Z} > 0.26{\ensuremath{Z_\odot}}\\ 3.15\log \left(\frac{\mathcal{Z}}{{\ensuremath{Z_\odot}}}\right) + 1.25 & \text{if } \mathcal{Z} \le 0.26{\ensuremath{Z_\odot}}, \end{cases}$$ where $\log(\mathrm{DGR}_\odot)=-2.21$ (Zubko et al. 2004). In the previous subsections, we have modeled $r$, $\kappa_\nu$, ${\mbox{$M_{\rm gas}$}}$, and the DGR. Equation (\[eq:rvir\]) depends on ${\mbox{$M_{\rm halo}$}}$, equation (\[eq:mgas\]) on ${\mbox{$M_\star$}}$, and equation (\[eq:mz\]) on ${\mbox{$M_\star$}}$ and the SFR. In the next section, we describe the self-consistent models of Behroozi et al. (2013a) that we use to determine the relation between ${\mbox{$M_{\rm halo}$}}$, ${\mbox{$M_\star$}}$, and the SFR at arbitrary redshifts.

[lccccccc]{}
& $z=0$ & $z=1$ & $z=2$ & $z=3$ & $z=4$ & $z=6$ & $z=9.5$\
${\mbox{$M_\star$}}$ ($10^{10}$ [$M_\odot$]{}) & 2.7 & 2.8 & 2.2 & 2.1 & 2.3 & 1.7 & 0.79\
SFR ([$M_\odot$]{} yr$^{-1}$) & 0.76 & 15 & 32 & 52 & 82 & 88 & 59\
${\mbox{$R_{\rm gal}$}}$ (kpc) & 4.7 & 2.8 & 2.0 & 1.5 & 1.2 & 0.85 & 0.57\
$\mathcal{Z}$ & 8.7 & 8.5 & 8.4 & 8.4 & 8.4 & 8.3 & 8.2\
DGR & 1/160 & 1/241 & 1/292 & 1/315 & 1/326 & 1/367 & 1/463\
[$\mbox{$A_{\rm V}$}$]{} (mag) & 0.79 & 1.6 & 2.6 & 4.5 & 7.2 & 11 & 14\
${\mbox{$M_{\rm gas}$}}$ ($10^{10}$ [$M_\odot$]{}) & 0.6 & 1.6 & 2.5 & 3.7 & 5.1 & 7.0 & 8.3\
${\mbox{$M_{\rm dust}$}}$ ($10^7$ [$M_\odot$]{}) & 3.9 & 6.7 & 8.5 & 12 & 16 & 19 & 18\
${\mbox{$T_{\rm dust}$}}$ (K) & 34 & 51 & 57 & 58 & 60 & 58 & 46\

**Note.** From top to bottom: stellar mass, SFR, half-mass radius, metallicity, dust-to-gas ratio, visual extinction due to dust, gas mass, dust mass, and dust temperature.

Stellar-Mass-Halo-Mass Relation {#sec:smhm}
-------------------------------

@Behroozi_2013a use empirical forward modeling to constrain the evolution of the stellar-mass-halo-mass relationship (SMHM; $SM(M_h, z)$).
At fixed redshift, the adopted model for $SM(M_h,z)$ has six parameters, which control the characteristic stellar mass, halo mass, faint-end slope, massive-end cutoff, transition region shape, and scatter of the SMHM relationship. For each parameter, there are three variables that control its redshift scaling at low ($z=0$), mid ($z=1$–2), and high ($z>3$) redshift, with constant (i.e., no) scaling beyond $z=8.5$ to prevent unphysical early galaxy formation. Additional nuisance parameters include systematic uncertainties in observed galaxy stellar masses and SFRs. Any choice of model in this parameter space gives a mapping from simulated dark matter halo catalogs [@Behroozi_2013b] to mock galaxy catalogs. Comparing these mock catalogs with observed galaxy number counts and SFRs from $z=0$ to $z=8$ results in a likelihood for a given model choice, and so these constraints combined with an MCMC algorithm result in a posterior distribution for the allowed SMHM relationships. Average star formation rates and histories for galaxies are inferred from averaged halo assembly histories (including mergers) combined with the best-fitting model for $SM(M_h,z)$.

Stellar Population Synthesis
----------------------------

To determine the specific luminosity of all the stars in a galaxy, $L_\nu$, we use version 3.0 of the Flexible Stellar Population Synthesis code [FSPS; @Conroy_2009; @Conroy_2010] to model the spectral energy distributions from 91 Å to 1000 $\mu$m. For each halo and each redshift we supply the star formation history (SFH) to FSPS in tabular form, including the time-dependent metallicity of newly born stars. Within FSPS, the SFH is linearly interpolated between the supplied time points, and the appropriate single-stellar-population (SSP) weights are calculated. The spectra of the SSPs are then summed with these weights applied to produce a model galaxy spectrum corresponding to the last time point of the supplied SFH.
This model spectrum is also projected onto filter transmission curves to produce broadband rest-frame photometry in several standard filter sets. The total surviving stellar mass (including remnants) and bolometric luminosity are also calculated. No dust attenuation or IGM attenuation is applied, and we do not include nebular emission from [$\mbox{\rm \ion{H}{2}}$]{} regions or dust emission. The base SSP spectra are generated assuming a fully sampled Salpeter IMF from 0.08 to 120 [$M_\odot$]{}. We use the “Padova2007” isochrones [@Bertelli_1994; @Girardi_2000; @Marigo_2008] for stars less massive than 70 [$M_\odot$]{}. These are combined with Geneva isochrones for higher-mass stars based on the high mass-loss rate evolutionary tracks [@Schaller_1992; @Meynet_2000] and the post-AGB evolutionary tracks of @Vassiliadis_1994. For the stellar spectra we use the BaSeL3.1 theoretical stellar library of @Westera_2002, augmented with the higher-resolution empirical MILES library [@Sanchez-Blazquez_2006] in the optical. The spectra of OB stars are from Smith et al. (2002), and the spectra of post-AGB stars are from @Rauch_2003. The treatment of TP-AGB spectra and isochrones is described in @Villaume_2015. All FSPS variables that affect the isochrones and stellar spectra are set at their default values.

Results and Discussion {#sec:results}
======================

We now present predictions for the evolution of dust in galaxies from redshifts $z=0$ to $z=9.5$. The galaxies have dark matter halo masses ranging from $10^9$ to $10^{15}~{\ensuremath{M_\odot}}$, stellar masses ranging from $4.3\times 10^4$ to $1.6\times 10^{12}~{\ensuremath{M_\odot}}$, and SFRs from 0 to 155 [$M_\odot$]{} yr$^{-1}$, with average star formation rates taken as a function of halo mass and redshift from Behroozi et al. (2013a). Here we consider only average population results, leaving starbursts [e.g., @Riechers_2013; @Strandet_2017] and the distribution of dust properties for follow-up studies.
In addition, the @Behroozi_2013a [@Behroozi_2013b] analysis does not separate star-forming from quiescent galaxies, constraining only the average SFR of the entire galaxy population as a function of halo mass.

Evolution of Dust Mass
----------------------

Figure \[fig:mdust\] presents galactic dust mass as a function of stellar mass from $z=0$ to $z=9.5$. We find that for a fixed galactic stellar mass, ${\mbox{$M_{\rm dust}$}}$ increases with increasing redshift, with higher mass galaxies displaying slightly larger increases in ${\mbox{$M_{\rm dust}$}}$ than their lower mass counterparts. In Table \[table2\], we summarize the predicted values of ${\mbox{$M_{\rm dust}$}}$, as well as other parameters derived in this paper, for a $10^{12}$ [$M_\odot$]{} halo mass galaxy, to show that our predictions compare favorably with quantities observed in the Milky Way.

![image](mdust_multi.png){width="6in"}

As the second panel of Figure \[fig:mdust\] shows, the dust-to-stellar mass ratio ${\mbox{$M_{\rm dust}$}}/{\mbox{$M_\star$}}$ first rises, and then, at a characteristic value of ${\mbox{$M_\star$}}$, the ratio decreases with increasing stellar mass. This turnover results from the dependence of ${\mbox{$M_{\rm dust}$}}$ on metallicity (via the DGR), which changes its functional form at $\mathcal{Z}=0.26{\ensuremath{Z_\odot}}$ (equation \[eq:dgr\]). The redshift evolution of ${\mbox{$M_{\rm dust}$}}/{\mbox{$M_\star$}}$ arises from a competition between the gas metallicity decreasing with redshift and the SFR increasing. For a fixed stellar mass, ${\mbox{$M_{\rm dust}$}}/{\mbox{$M_\star$}}$ steadily increases with redshift. In Figure \[fig:mdustz\], we compare our results with the observations of @Bethermin_2015, who measure the gas and dust content of massive ($\sim 6\times 10^{10}{\ensuremath{M_\odot}}$) main-sequence galaxies and find little systematic variation of the dust-to-stellar mass ratio as a function of redshift up to $z=4$.
Although our results slightly underpredict the observations and suggest a small increase in ${\mbox{$M_{\rm dust}$}}/{\mbox{$M_\star$}}$ over the same redshift range, they are compatible with the Béthermin et al. observations at the $1\sigma$ level. There are various ways to explain the lower dust-to-stellar mass ratios: low metallicities and dust-to-gas ratios, or differences in the SFRs and gas masses. It happens that the gas masses we derive for $\sim 6\times 10^{10}{\ensuremath{M_\odot}}$ galaxies are in good agreement with the gas masses measured by @Bethermin_2015. However, if we use a different prescription for the metallicity, this would alter our results for ${\mbox{$M_{\rm dust}$}}$. For instance, the fundamental metallicity relation of @Mannucci_2010 yields higher metallicities than the Hunt et al. (2016) prescription we use here. Moreover, the SFRs derived by Béthermin et al. for their galaxy sample are slightly higher than the SFRs we use from the @Behroozi_2013a [@Behroozi_2013b] model, since the latter considers *average* SFRs including contributions from both star-forming and quiescent galaxies. For a fixed stellar mass, a higher SFR corresponds to a lower metallicity. In Figure \[fig:mdustz\] we plot the redshift evolution of ${\mbox{$M_{\rm dust}$}}/{\mbox{$M_\star$}}$, where ${\mbox{$M_{\rm dust}$}}$ is derived using the metallicity prescription of @Mannucci_2010 and where the SFRs are corrected to include star-forming galaxies only. This correction is accomplished by dividing the SFRs by $(1-f_q)$, where $f_q$, the fraction of quiescent galaxies at a given redshift, is given by $f_q = [({\mbox{$M_\star$}}/10^{10.2+0.5z}{\ensuremath{M_\odot}})^{-1.3}+1]^{-1}$ [@Behroozi_2013a]. The resulting curve for the evolution of ${\mbox{$M_{\rm dust}$}}/{\mbox{$M_\star$}}$ now slightly overpredicts the Béthermin et al. observations at $z<3$ and does not vary monotonically with redshift.
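The quiescent-fraction correction described above can be sketched directly; both expressions are the ones quoted in the text:

```python
def quiescent_fraction(m_star_msun, z):
    """f_q = [ (M_star / 10^(10.2 + 0.5 z) M_sun)^-1.3 + 1 ]^-1 (Behroozi et al. 2013a)."""
    return 1.0 / ((m_star_msun / 10.0 ** (10.2 + 0.5 * z)) ** -1.3 + 1.0)

def star_forming_sfr(mean_sfr, m_star_msun, z):
    """Correct a population-mean SFR to star-forming galaxies only: SFR / (1 - f_q)."""
    return mean_sfr / (1.0 - quiescent_fraction(m_star_msun, z))
```

At the characteristic mass $10^{10.2+0.5z}~{\ensuremath{M_\odot}}$ exactly half the population is quiescent, so the correction there doubles the mean SFR; the threshold mass grows with redshift, so the correction weakens at earlier times for a fixed stellar mass.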
![image](mass_z_Av.png){width="6.5in"}

In Figure \[fig:mdust\_multi\], we again show ${\mbox{$M_{\rm dust}$}}$ as a function of ${\mbox{$M_\star$}}$, this time overplotting observations from the literature. In the first panel, we overplot observations from @Remy-Ruyer_2015 and the *Herschel* Reference Survey [HRS; @Ciesla_2014; @Boselli_2015], and we find good agreement between our model and observed dust masses at $z=0$, for stellar masses ranging from about $10^7$ to $10^{11}~{\ensuremath{M_\odot}}$. At $z=1$ and $z=2$, the dust masses predicted by our model are in good agreement with the observations by @Santini_2014. We underpredict the observations by @daCunha_2015 at $z=1$ and 2 by roughly $0.5$ to $1$ dex. This is not too surprising, however, since the @daCunha_2015 observations are of sub-millimeter galaxies (SMGs), which have higher-than-typical infrared luminosities ($>10^{12}L_\odot$) driven by high SFRs in excess of 100 ${\ensuremath{M_\odot}}\rm{yr}^{-1}$. Sub-millimeter selection essentially selects for SFR, and the @daCunha_2015 SMGs have SFRs about a factor of 3 higher than main sequence galaxies of the same stellar mass. Galaxies such as those in the da Cunha et al. sample, which includes some starbursts, may have quickly enriched the ISM with metals and dust on timescales shorter than those for main sequence galaxies and are not accounted for by the models adopted in this study. On the low-mass end (${\mbox{$M_\star$}}<10^{10}~{\ensuremath{M_\odot}}$), it will be interesting to see how well our model reproduces the ${\mbox{$M_{\rm dust}$}}$-${\mbox{$M_\star$}}$ trend at redshifts $z\ge 1$. However, as we show in §\[sec:flux\], detections of the dust emission in large samples of low-mass, high-redshift galaxies would be a challenging prospect for observing programs with present-day telescopes.
We find that the relation between galaxy dust mass and stellar mass can be parameterized as a broken power law, $$\label{eq:power} \log\left( \frac{{\mbox{$M_{\rm dust}$}}}{{\mbox{$M_{\rm dust}$}}_{,0}} \right) = \begin{cases} \alpha_1\log \left( \frac{{\mbox{$M_\star$}}}{{\mbox{$M_\star$}}_{,0}} \right) & \text{if } {\mbox{$M_\star$}}\le {\mbox{$M_\star$}}_{,0} \\ \alpha_2\log \left( \frac{{\mbox{$M_\star$}}}{{\mbox{$M_\star$}}_{,0}} \right) & \text{if } {\mbox{$M_\star$}}> {\mbox{$M_\star$}}_{,0} , \end{cases}$$ where ${\mbox{$M_{\rm dust}$}}_{,0}$ is the zero point of the dust mass, and $\alpha_{1,2}$ are the slopes below and above ${\mbox{$M_\star$}}_{,0}$, the stellar mass at a given redshift where the break in the power law occurs. The break point in the ${\mbox{$M_{\rm dust}$}}$-${\mbox{$M_\star$}}$ relation results from the prescription we use for the dust-to-gas ratio (equation \[eq:dgr\]; @Remy-Ruyer_2014), itself a broken power law, which depends on ${\mbox{$M_\star$}}$ and the SFR via the metallicity. Both ${\mbox{$M_{\rm dust}$}}_{,0}$ and ${\mbox{$M_\star$}}_{,0}$ are functions of redshift. The amount of dust in a galaxy is determined by the amount of available metals and gas, both of which are linked to star formation activity. Recent observational studies have shown that the dust-to-gas ratio of nearby galaxies may be characterized as a function of the gas-phase metallicity [@Remy-Ruyer_2014]. Theoretical work by @Popping_2017, who use semi-analytical models to follow the production of interstellar dust, reproduces this observed trend and suggests that it is driven by the accretion of metals onto dust grains and the density of cold gas. Like @Popping_2017, we find that the normalization of the dust-mass-stellar-mass relation, ${\mbox{$M_{\rm dust}$}}_{,0}$, increases from $z=0$ to $z=9.5$. We perform least-squares fits to the predicted curves in Figures \[fig:mdust\] and \[fig:mdust\_multi\] to determine the parameters in equation (\[eq:power\]).
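For reference, the broken power law of equation (\[eq:power\]) is easy to evaluate once the fit parameters are in hand; the sketch below uses the central values of the fit reported in equation (\[eq:dusfit\]), with the uncertainties dropped:

```python
import math

def log_m_dust_fit(log_m_star, z):
    """Equations (eq:power) and (eq:dusfit): broken power-law fit to the
    M_dust-M_star relation, with redshift-dependent zero points."""
    alpha1, alpha2 = 1.20, 0.75
    log_ms0 = 8.4 + 1.0 * math.log10(1.0 + z)   # break stellar mass, log10(M_sun)
    log_md0 = 6.0 + 1.8 * math.log10(1.0 + z)   # dust-mass zero point, log10(M_sun)
    slope = alpha1 if log_m_star <= log_ms0 else alpha2
    return log_md0 + slope * (log_m_star - log_ms0)
```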
The results for $z=0$ to $z=9.5$ are summarized as follows: $$\label{eq:dusfit} \begin{split} \alpha_1 &= 1.20\pm 0.02 \\ \alpha_2 &= 0.75\pm 0.02 \\ \log {\mbox{$M_{\rm dust}$}}_{,0} &= (6.0 \pm 0.1) + (1.8\pm 0.1)\log(1+z) \\ \log {\mbox{$M_\star$}}_{,0} &= (8.4 \pm 0.1) + (1.0\pm 0.2)\log(1+z). \\ \end{split}$$

Evolution of Dust Optical Depth {#sec:tau0}
-------------------------------

In Figure \[fig:mass-z\] we present contour maps of the optical depth due to dust in terms of visual extinction, ${\ensuremath{\mbox{$A_{\rm V}$}}}= 1.086\tau_V$ [e.g., @Draine_2011], calculated in the rest frames of the galaxies. The left panel of Figure \[fig:mass-z\] displays the redshift evolution of [$\mbox{$A_{\rm V}$}$]{} in terms of ${\mbox{$M_{\rm halo}$}}$, and the right panel displays the evolution of [$\mbox{$A_{\rm V}$}$]{} in terms of ${\mbox{$M_\star$}}$. The maps demonstrate that for constant values of ${\mbox{$M_{\rm halo}$}}$ or ${\mbox{$M_\star$}}$, [$\mbox{$A_{\rm V}$}$]{} increases with increasing $z$. For instance, while a $10^{12}~{\ensuremath{M_\odot}}$ halo mass galaxy has an extinction due to dust of ${\ensuremath{\mbox{$A_{\rm V}$}}}\approx 0.8$ mag at $z=0$, a similar galaxy at $z=4$ has an extinction of ${\ensuremath{\mbox{$A_{\rm V}$}}}\approx 7$ mag. This basic trend holds for the optical depth at other wavelengths, with the galaxies having overall higher optical depths at shorter wavelengths and lower optical depths at longer wavelengths. It is possible that we slightly overpredict [$\mbox{$A_{\rm V}$}$]{} at high redshifts. As we discuss in further detail in §\[sec:caveats\], our approximation of spherical symmetry could lead to overestimates of the optical depth, particularly for massive galaxies, in which the bulk of interstellar dust is typically observed to reside in the disk.
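As a concrete check of the numbers in this subsection, equation (\[eq:tau1\]) with our choice $\gamma=0$ plus ${\ensuremath{\mbox{$A_{\rm V}$}}}=1.086\tau_V$ gives the extinction directly from the dust mass and radius. A minimal sketch follows; the cgs conversion factors and the V-band opacity `kappa_v_cm2_per_g` are inputs of the example, not values quoted in the text:

```python
import math

M_SUN_G = 1.989e33   # grams per solar mass (assumed conversion)
KPC_CM = 3.086e21    # centimetres per kiloparsec (assumed conversion)

def tau_nu(m_dust_msun, r_dust_kpc, kappa_cm2_per_g, gamma=0.0):
    """Equation (eq:tau1): tau_nu = M_dust kappa_nu (3-gamma) / ((1-gamma) 4 pi R_dust^2).
    With gamma = 0 this reduces to 3 M_dust kappa_nu / (4 pi R_dust^2)."""
    m = m_dust_msun * M_SUN_G
    r = r_dust_kpc * KPC_CM
    return m * kappa_cm2_per_g * (3.0 - gamma) / ((1.0 - gamma) * 4.0 * math.pi * r * r)

def a_v(m_dust_msun, r_dust_kpc, kappa_v_cm2_per_g):
    """Visual extinction A_V = 1.086 tau_V (e.g., Draine 2011)."""
    return 1.086 * tau_nu(m_dust_msun, r_dust_kpc, kappa_v_cm2_per_g)
```

The $\tau_\nu\propto{\mbox{$R_{\rm dust}$}}^{-2}$ scaling visible here is what drives the steep rise of [$\mbox{$A_{\rm V}$}$]{} with redshift discussed next.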
The rise in [$\mbox{$A_{\rm V}$}$]{} at high redshift is mostly driven by ${\mbox{$R_{\rm dust}$}}$, since $\tau_\nu\propto 1/{\mbox{$R_{\rm dust}$}}^2$ (equation \[eq:tau1\]), and since ${\mbox{$R_{\rm dust}$}}\propto (1+z)^{-1}$ (equations \[eq:rgal\] and \[eq:rvir\]; Figure \[fig:model\]). Thus, with our assumption of spherical geometry, and given that we have defined the optical depth as the integrated value through to the galactic center (equation \[eq:tau0\]), the values we have derived for [$\mbox{$A_{\rm V}$}$]{} are most likely upper limits.

![image](tdust_point.png){width="7in"}

![image](tdust_homog.png){width="7in"}

Evolution of Dust Temperature
-----------------------------

We use our results for ${\mbox{$M_{\rm dust}$}}$ and $\tau_\nu$ in the previous sections to determine ${\mbox{$T_{\rm dust}$}}$ from equation (\[eq:tdust\]). In Figures \[fig:tdust1\] and \[fig:tdust2\], we present predictions for the evolution of galaxy dust temperature as a function of stellar mass. We show results for the two galaxy geometries we consider here, the “point source” and “homogeneous” models illustrated in Figure \[fig:geometry\]. We also show results for two different surface area covering fractions of dust, $f_\star=1$ and $f_\star=0.25$. Both models show that the ${\mbox{$T_{\rm dust}$}}$-${\mbox{$M_\star$}}$ relation is not monotonic, but rather peaks at characteristic values of ${\mbox{$M_\star$}}$. In both figures, we overplot data points of measured galaxy dust temperatures from the literature. The point source model (Figure \[fig:tdust1\]) shows that for most galaxy stellar masses (${\mbox{$M_\star$}}\gtrsim 10^{6.5}$ [$M_\odot$]{}), ${\mbox{$T_{\rm dust}$}}$ tends to increase over time, and is a poor representation of the observations. Whereas observations suggest that dust temperatures in higher mass galaxies tend to get cooler with time, the point source model predicts the opposite. By contrast, the model in which stars and the ISM are homogeneously distributed (Figure \[fig:tdust2\]) suggests that ${\mbox{$T_{\rm dust}$}}$ tends to decrease with time.
At all redshifts, our homogeneous, $f_\star=0.25$ model is in better agreement with the observations by @Remy-Ruyer_2015 and @daCunha_2015 than the $f_\star=1$ model. We take the former to be our fiducial model; Table \[table2\] lists the values of ${\mbox{$T_{\rm dust}$}}$ for a $10^{12}$ [$M_\odot$]{} halo mass galaxy in this model. That the $f_\star=0.25$ model is in better agreement with the observations than the $f_\star=1$ model is not surprising. It implies that galactic dust does not have a covering fraction of 100% and that there are regions in any given galaxy where starlight escapes without being absorbed by dust. The actual covering fraction is almost certain to vary widely from galaxy to galaxy. Focusing now on Figure \[fig:tdust2\]: starting at about $z=6$, the dust temperature at fixed stellar mass tends to cool as the Universe evolves. At each redshift, there are two peaks in the ${\mbox{$T_{\rm dust}$}}$-${\mbox{$M_\star$}}$ relation. As the redshift decreases, the first peak shifts towards higher and higher values of ${\mbox{$M_\star$}}$. For instance, between $z=6$ and $z=3$, the stellar mass at which ${\mbox{$T_{\rm dust}$}}$ peaks shifts from ${\mbox{$M_\star$}}=10^{6.6}$ to $10^{7.7}$ [$M_\odot$]{}. The peak near the high-mass end is more stable and does not display a similar systematic shift over time. Our model predicts a population of high-redshift ($z\gtrsim 2$), low-mass galaxies (${\mbox{$M_\star$}}\approx 10^6$ to $10^{8.5}$ [$M_\odot$]{}) with fairly hot dust. From $z=6$ to $z=4$, these galaxies have dust that is around the same temperature as, if not hotter than, the dust in their more massive counterparts at the same redshifts. Unfortunately, there are no observational constraints on the star formation history of such galaxies, and so with the current state of knowledge, indirect methods would have to be used to infer the robustness of the peak temperatures our model predicts. In Figure \[fig:tdustz\] we plot dust temperatures as a function of redshift for fixed stellar masses.
We find that the dust temperatures of galaxies of all stellar masses evolve markedly with redshift. From the present era to $z=6$, galaxies having stellar masses from $10^8$ to $10^{10}$ [$M_\odot$]{} have dust temperatures which increase monotonically from about 25–35 K to $\sim 55$–60 K. Higher mass galaxies first display increases in ${\mbox{$T_{\rm dust}$}}$, followed by decreases in the dust temperature as they evolve to higher redshifts. @Viero_2013 stacked *Herschel* images of a stellar-mass-selected sample of galaxies, and @Bethermin_2015 performed a similar stacking analysis of *Spitzer*, *Herschel*, LABOCA, and AzTEC data for galaxies taken from the COSMOS field. These authors find that the dust temperature tends to increase with redshift, up to $z=6$, for galaxies of all stellar masses. While we find a similar trend for galaxies having stellar masses ${\mbox{$M_\star$}}<10^{10}{\ensuremath{M_\odot}}$, our results are at odds with the observations for massive galaxies. That the massive galaxies in our model do not show a steady increase in ${\mbox{$T_{\rm dust}$}}$ with redshift is primarily a reflection of our geometric model for the ISM. As discussed in Section \[sec:tau0\], our approximation of spherical symmetry may result in overestimates of $\tau_\nu$, especially for high-redshift, massive galaxies, where most interstellar dust is observed in the disk. Over-predicting $\tau_\nu$ for high-mass galaxies naturally leads to the non-monotonic evolution of ${\mbox{$T_{\rm dust}$}}$ observed in Figure \[fig:tdustz\]. Nevertheless, the values we derive for ${\mbox{$T_{\rm dust}$}}$ at any fixed redshift are in general agreement with the trends seen in the observational results of @Viero_2013 and @Bethermin_2015, and in the theoretical models of @Cowley_2017, although at any given redshift we predict temperatures a factor of roughly 1.5 higher than these authors. This systematic offset may partly be due to the choice of modified blackbody models fitted by these authors and partly a result of how they selected sources.
Indeed, other authors who performed stacking analyses but fitted different models or had different selection criteria, including @Pascale_2009, @Amblard_2010, and @Elbaz_2010, report higher dust temperatures in alignment with our results. We note that the latter two of these studies, which are based on galaxy samples selected by *Herschel*, may be biased in temperature: due to the varying sensitivity of the PACS instrument with wavelength, hotter galaxies are easier to detect.

Predictions for the Observable Flux Density {#sec:flux}
-------------------------------------------

In light of recent ALMA observations that have detected large amounts of dust in galaxies during the epoch of reionization [e.g., @Watson_2015; @Laporte_2017], we are motivated to make predictions of the flux density due to dust in galaxies. We assume that the dust in a galaxy, of total mass ${\mbox{$M_{\rm dust}$}}$, will rise to an average temperature ${\mbox{$T_{\rm dust}$}}$, due to heating by starlight, and the dust will re-emit most of the light in the infrared. The galaxy will have a flux density of $$S_\nu = \frac{\kappa_\nu B_\nu({\mbox{$T_{\rm dust}$}}) {\mbox{$M_{\rm dust}$}}(1+z)}{d_L^2},$$ where $B_\nu({\mbox{$T_{\rm dust}$}})$ is the Planck function, and $d_L$ is the luminosity distance to the galaxy. The emission is assumed to be optically thin. 
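As a rough illustration, the flux-density formula above can be evaluated numerically. The sketch below is ours, not the paper's pipeline: the opacity normalization `kappa0`, reference frequency `nu0`, emissivity index `beta`, and the flat $\Lambda$CDM parameters are placeholder assumptions, and the opacity and Planck function are evaluated at the rest-frame frequency.

```python
import math

# Hedged assumptions (not from the paper): flat LambdaCDM cosmology and a
# power-law opacity kappa_nu = kappa0 * (nu / nu0)**beta.
H0 = 70.0e3 / 3.086e22      # Hubble constant in s^-1 (70 km/s/Mpc)
OM, OL = 0.3, 0.7           # matter and dark-energy density parameters
C = 2.998e8                 # speed of light, m/s
H_PLANCK = 6.626e-34        # Planck constant, J s
K_B = 1.381e-23             # Boltzmann constant, J/K

def luminosity_distance(z, steps=2000):
    """d_L = (1+z) * (c/H0) * integral of dz'/E(z') for a flat universe (m)."""
    dz = z / steps
    integral = sum(dz / math.sqrt(OM * (1 + (i + 0.5) * dz) ** 3 + OL)
                   for i in range(steps))
    return (1 + z) * (C / H0) * integral

def planck_nu(nu, temperature):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    x = H_PLANCK * nu / (K_B * temperature)
    return 2 * H_PLANCK * nu**3 / C**2 / math.expm1(x)

def flux_density(nu_obs, z, t_dust, m_dust_kg, kappa0=0.01, nu0=1.0e12, beta=1.5):
    """S_nu = kappa_nu B_nu(T_dust) M_dust (1+z) / d_L^2, with kappa and B_nu
    taken at the rest-frame frequency; kappa0 (m^2/kg at nu0) and beta are
    placeholder values, not numbers from the text."""
    nu_rest = nu_obs * (1 + z)
    kappa = kappa0 * (nu_rest / nu0) ** beta
    return (kappa * planck_nu(nu_rest, t_dust) * m_dust_kg * (1 + z)
            / luminosity_distance(z) ** 2)

# e.g. a 4e7 M_sun dust reservoir at z = 7.5 with T_dust = 40 K, observed at 1.1 mm
s = flux_density(C / 1.1e-3, 7.5, 40.0, 4e7 * 1.989e30)
print(s * 1e29, "mJy")  # 1 mJy = 1e-29 W m^-2 Hz^-1
```

With different (equally hypothetical) opacity normalizations the absolute scale shifts linearly, but the linear dependence on dust mass and the $(1+z)/d_L^2$ behavior follow directly from the formula in the text.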
[lccccc]{}
Galaxy name & Redshift & ${\mbox{$S_\nu$}}$ & ${\mbox{$M_{\rm dust}$}}$ & SFR$_{\rm IR}$ & Reference\
& & (mJy) & ($10^9$ [$M_\odot$]{}) & ([$M_\odot$]{} yr$^{-1}$) &\
A1689-zD1 & $7.5\pm 0.2$ & $0.61\pm 0.12$ & $1.7^{+0.7}_{-0.5}$ & $9^{+4}_{-2}$ & 1\
A2744\_YD4 & $8.38^{+0.13}_{-0.11}$ & $0.099\pm 0.023$ & $1.97^{+1.45}_{-0.66}$ & $20.4^{+17.6}_{-9.5}$ & 2\
UDF1 & 3.00 & $0.924 \pm 0.076$ & $50^{+13}_{-10} $ & $326 \pm 83 $ & 3\
UDF2 & 2.79 & $0.996 \pm 0.087$ & $126^{+52}_{-37} $ & $247 \pm 76$ & 3\
UDF3 & 2.54 & $0.863 \pm 0.084$ & $20 ^{+8}_{-6} $ & $195 \pm 69$ & 3\
UDF4 & 2.43 & $0.303 \pm 0.046$ & $32^{+13}_{-9} $ & $94 \pm 4$ & 3\
UDF5 & 1.76 & $0.311 \pm 0.049$ & $25^{+10}_{-7} $ & $102 \pm 7$ & 3\
UDF6 & 1.41 & $0.239 \pm 0.049$ & $32^{+8}_{-7} $ & $87 \pm 11$ & 3\
UDF7 & 2.59 & $0.231 \pm 0.048$ & $40^{+10}_{-8} $ & $56 \pm 22$ & 3\
UDF8 & 1.55 & $0.208 \pm 0.046$ & $159^{+65}_{-46} $ & $149 \pm 90$ & 3\
UDF9 & 0.67 & $0.198 \pm 0.039$ & $10^{+3}_{-2} $ & $23 \pm 25$ & 3\
UDF10 & 2.09 & $0.184 \pm 0.046$ & $16^{+7}_{-5} $ & $45 \pm 22$ & 3\
UDF11 & 2.00 & $0.186 \pm 0.046$ & $6^{+16}_{-13} $ & $162 \pm 94$ & 3\
UDF12 & 5.00 & $0.154 \pm 0.040$ & $4^{+2}_{-1} $ & $37 \pm 14$ & 3\
UDF13 & 2.50 & $0.174 \pm 0.045$ & $63^{+16}_{-13} $ & $68 \pm 18$ & 3\
UDF14 & 0.77 & $0.160 \pm 0.044$ & $5^{+1}_{-1} $ & $44 \pm 17$ & 3\
UDF15 & 1.72 & $0.166 \pm 0.046$ & $8^{+3}_{-2} $ & $38 \pm 27$ & 3\
UDF16 & 1.31 & $0.155 \pm 0.044$ & $79^{+21}_{-16} $ & $40 \pm 18$ & 3\
**References.** (1) Watson et al. (2015); (2) Laporte et al. (2017); (3) Dunlop et al. (2017).

Figure \[fig:snu1\] presents the galaxy flux density at an observed wavelength of $1.1$ mm, as a function of ${\mbox{$M_\star$}}$ for $z=0$ to $z=9.5$. For fixed redshifts, $S_\nu$ increases with galaxy stellar mass. For fixed stellar masses, $S_\nu$ decreases with time for sources at redshifts $z\gtrsim 1$. 
This trend is due to the combination of two effects: (1) the negative $K$-correction [e.g., @Hughes_1998; @Barger_1999; @Blain_2002; @Lagache_2005]; and (2) our fiducial model predicts that ${\mbox{$T_{\rm dust}$}}$ increases with $z$. At redshifts $z>1$, the far-infrared radiation emitted in distant galaxies is redshifted to sub-mm wavelengths, and the resulting negative $K$-correction counteracts the dimming of galaxies caused by their cosmological distances. If more distant galaxies have hotter dust, then the observed flux from these galaxies originated from radiation emitted closer to the peak of the black body radiation curve than in nearby galaxies. For instance, photons emitted from dust in a galaxy at $z=9.5$ had rest wavelengths of $\lambda_0=116$ $\mu$m, while photons emitted from a source at $z=2$ had $\lambda_0=550$ $\mu$m. From Figure \[fig:tdust2\], one can see that the dust temperature of a $10^8$ [$M_\odot$]{} stellar mass galaxy at $z=9.5$ is $\sim 74$ K, corresponding to a peak in the black body radiation curve at 39 $\mu$m. A $10^8$ [$M_\odot$]{} galaxy at $z=2$ has ${\mbox{$T_{\rm dust}$}}\approx 45$ K, corresponding to 64 $\mu$m. Thus, the observed flux at 1.1 mm from the more distant galaxy at $z=9.5$ originated from dust whose emission was closer to the peak of the black body radiation curve, compared to the galaxy at $z=2$. In Figure \[fig:snu2\], we show results in terms of $S_\nu$ as a function of $z$, for fixed stellar masses: ${\mbox{$M_\star$}}=10^8$, $10^9$, and $10^{10}$ [$M_\odot$]{}. For a given stellar mass, $S_\nu$ first decreases with time, and then it rises sharply from $z=1$ to $z=0$. We overplot the observed $\sim 1$ mm continuum fluxes of galaxies recently observed with ALMA. @Watson_2015 observed the lensed galaxy A1689-zD1 between $1.2$ and $1.4$ mm and detected a flux of ${\mbox{$S_\nu$}}=0.61\pm 0.12$ mJy. 
Located at $z\approx 7.5$ and magnified by a factor of 9.3, A1689-zD1 has a stellar mass, dust mass, and SFR of ${\mbox{$M_\star$}}\approx 2\times 10^9$ [$M_\odot$]{}, ${\mbox{$M_{\rm dust}$}}\approx 4\times 10^7$ [$M_\odot$]{}, and $\rm{SFR}\approx 9$ ${\ensuremath{M_\odot}}\rm{yr}^{-1}$. @Laporte_2017 observed the lensed galaxy A2744\_YD4 at 0.84 mm. At a redshift of about 8.4 and magnified by a factor of $\sim 1.8$, A2744\_YD4 has a stellar mass, dust mass, and SFR of ${\mbox{$M_\star$}}\approx 2\times 10^9$ [$M_\odot$]{}, ${\mbox{$M_{\rm dust}$}}\approx 6\times 10^6$ [$M_\odot$]{}, and $\rm{SFR}\approx 20$ ${\ensuremath{M_\odot}}\rm{yr}^{-1}$. Dunlop et al. (2017) obtained the first deep ALMA image of the *Hubble* Ultra Deep Field, detecting 16 sources at 1.3 mm. The sources have high stellar masses, with 13 out of 16 having ${\mbox{$M_\star$}}>10^{10}$ [$M_\odot$]{}. Fifteen of the sources are located at redshifts $0.7\le z\le 3$, and one source is located at $z=5$. The observed fluxes and other properties of all the sources plotted in Figures \[fig:snu1\] and \[fig:snu2\] are summarized in Table \[table3\]. To date, observations of the dust emission in sources at $z\gtrsim 1$ have been restricted to galaxies having stellar masses $\gtrsim 10^9$ [$M_\odot$]{}. Achieving the sensitivities necessary to detect the dust emission of single, high-redshift $L^\star$ galaxies at these masses requires total observing times of $\sim 2$–3 hours with ALMA [e.g., @Laporte_2017]. Since typical lower mass galaxies (${\mbox{$M_\star$}}<10^9$ [$M_\odot$]{}) are intrinsically fainter, the much longer observing times needed to detect their dust emission at $z>1$ may be prohibitive. Yet serendipitous circumstances, such as gravitational lensing, could aid in the detection and characterization of the low-mass, star-forming population of galaxies in the early Universe. 
If and when low-mass galaxies begin to be detected in large numbers, it may be easier (i.e., less time-consuming) to first detect galaxies at higher redshifts, since according to our model, their observed millimeter flux is expected to exceed that of lower redshift galaxies by $\sim 1$ to 2 orders of magnitude.

![image](mdust_vs_mstar_sargent.png){width="6in"}

Caveats & Limitations {#sec:caveats}
=====================

We now discuss in further detail some of the key assumptions of our model and assess their impact on the results.

Galactic geometry and radial distribution of dust
-------------------------------------------------

In §\[sec:model\] we model galaxies as spherically symmetric with either most of the dust concentrated around the nucleus or else homogeneously mixed with gas and stars. In reality, dust is often consolidated in the disk of large galaxies, and so the assumption of spherical symmetry may result in overestimates of the optical depth for these systems. In equation (\[eq:tau0\]), we assume that galactic dust has a simple power-law density distribution, with $\gamma=0$. The assumption that the dust profile is constant with radius may seem unfounded, given observations that dust content varies with galactic radius [e.g., @Boissier_2005; @Munoz-Mateos_2009]. Yet appropriate values for $\gamma$ and ${\mbox{$R_{\rm dust}$}}$, about which we are equally ignorant, are certain to vary significantly from galaxy to galaxy. Since there are infinitely many combinations of $\gamma$ and ${\mbox{$R_{\rm dust}$}}$ that produce identical values of $\tau_\nu$—that is, since $\gamma$ and ${\mbox{$R_{\rm dust}$}}$ are essentially degenerate—we decide to absorb our ignorance about both quantities into our definition of $\tau_\nu$ in equation (\[eq:tau0\]), where we let ${\mbox{$R_{\rm dust}$}}={\mbox{$R_{\rm gal}$}}$ (recalling that ${\mbox{$R_{\rm gal}$}}$ is the half-mass galactic radius). 
For example, for two galaxies with identical values of $\kappa_\nu$ and ${\mbox{$M_{\rm dust}$}}$, $\tau_\nu$ for one galaxy with ${\mbox{$R_{\rm dust}$}}={\mbox{$R_{\rm gal}$}}$ and $\gamma=0$ is equivalent to $\tau_\nu$ for the second galaxy with $\gamma=1/3$ and ${\mbox{$R_{\rm dust}$}}=2{\mbox{$R_{\rm gal}$}}$.

![image](mdust_multi_sargent.png){width="6in"}

Another source of uncertainty in our model is the choice of spin parameter, $\lambda$, which is expected to link galaxy disk size and halo size (see equation \[eq:rgal\]), under the assumption that the collapsing baryonic matter of a galaxy inherits the same specific angular momentum as the halo [@Fall_1980]. In §\[sec:radius\], we assume that the half-mass radius and dark matter halo radius, for all galaxy masses at all redshifts, are linked by a single value, ${\mbox{$R_{\rm gal}$}}/R_{\rm halo}=0.018$, corresponding to $\lambda = 0.036$ [@Somerville_2018]. Some simulations suggest that there is significant scatter—about two orders of magnitude—about the mean value of $\lambda$ [@Teklu_2015; @Zavala_2016], and that this scatter depends in part on galaxy morphology [e.g., @Teklu_2015]. The results of @Somerville_2018, who demonstrate that $\lambda$ is roughly independent of mass and evolves weakly with redshift, are in general agreement with other recent studies of the relationship between galaxy and halo size [e.g., @Shibuya_2015; @Kawamata_2015; @Huang_2017]. Yet if our adopted value of $\lambda$ leads to under- or overestimates of ${\mbox{$R_{\rm gal}$}}$ for galaxies at certain masses and epochs, these inaccuracies will naturally propagate into our estimates of ${\mbox{$T_{\rm dust}$}}$ and $\tau_\nu$.

Evolution of gas mass
---------------------

We use a prescription for ${\mbox{$M_{\rm gas}$}}({\mbox{$M_\star$}},z)$ determined by @Zahid_2014, who derived a relation between metallicity and stellar-to-gas mass ratio for galaxies at $z\lesssim 1.6$ and with ${\mbox{$M_\star$}}\gtrsim 10^9~{\ensuremath{M_\odot}}$. 
We extrapolate this relation for higher redshifts and lower stellar masses. If the interstellar environments of low-mass galaxies manifest in significantly different relationships between $\mathcal{Z}$, ${\mbox{$M_{\rm gas}$}}$, and ${\mbox{$M_\star$}}$, and if ISM conditions evolve with redshift, then the Zahid et al. relation may break down in unexpected ways in the low-${\mbox{$M_\star$}}$, high-$z$ regime. Further observational tests are required to confirm or rule out the universal metallicity relation upon which equation (\[eq:mgas\]) is based. Nevertheless, it is promising that the Zahid et al. relation is consistent with that of @Andrews_2013, who measure the MZ relation down to ${\mbox{$M_\star$}}\approx 10^{7.5}~{\ensuremath{M_\odot}}$. The predicted dust masses are sensitive to the functional form of ${\mbox{$M_{\rm gas}$}}$ (equation \[eq:mdust\]). To give a sense of how the evolution of ${\mbox{$M_{\rm gas}$}}$ affects ${\mbox{$M_{\rm dust}$}}$, we present additional calculations for ${\mbox{$M_{\rm dust}$}}$, using an alternative prescription for ${\mbox{$M_{\rm gas}$}}$. @Sargent_2014 compiled a sample of 131 massive (${\mbox{$M_\star$}}>10^{10}$ [$M_\odot$]{}), star-forming galaxies at redshifts $z\lesssim 3$. They derived a Schmidt-Kennicutt relation: $$\label{eq:sargent} \begin{split} \log \left(\frac{M_{\rm mol}}{{\ensuremath{M_\odot}}}\right) &= (9.22\pm 0.02) \\ &+ (0.81\pm 0.03)\log\left(\frac{\rm{SFR}}{{\ensuremath{M_\odot}}~\rm{yr}^{-1}}\right), \end{split}$$ where $M_{\rm mol}$ is the galactic molecular mass. Equation (\[eq:sargent\]) is appealing as a comparison to the @Zahid_2014 formulation for ${\mbox{$M_{\rm gas}$}}$, because it is directly applicable for galaxies at higher redshifts. We do not attempt to estimate the total gas mass from equation (\[eq:sargent\]) but calculate the dust mass using ${\mbox{$M_{\rm dust}$}}= M_{\rm mol}\times\rm{DGR}$. Using the @Sargent_2014 formulation for the molecular mass of galaxies, in Figures \[fig:mdust2\] and \[fig:mdust\_multi2\] we display plots of ${\mbox{$M_{\rm dust}$}}$ as a function of ${\mbox{$M_\star$}}$, analogous to Figures \[fig:mdust\] and \[fig:mdust\_multi\]. 
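The two steps just described, equation (\[eq:sargent\]) followed by ${\mbox{$M_{\rm dust}$}}= M_{\rm mol}\times\rm{DGR}$, can be sketched as below. The solar dust-to-gas ratio `dgr_sun` is a placeholder assumption for illustration, not a value taken from the text.

```python
import math

def m_mol_sargent(sfr):
    """Molecular gas mass (M_sun) from the Sargent et al. (2014) relation:
    log10(M_mol) = 9.22 + 0.81 * log10(SFR), with SFR in M_sun/yr."""
    return 10 ** (9.22 + 0.81 * math.log10(sfr))

def m_dust(sfr, z_over_zsun, dgr_sun=0.01):
    """Dust mass via M_dust = M_mol * DGR, scaling the DGR linearly with
    metallicity; dgr_sun is an assumed placeholder normalization."""
    return m_mol_sargent(sfr) * dgr_sun * z_over_zsun

print(m_mol_sargent(1.0))   # 10**9.22 M_sun at SFR = 1 M_sun/yr
print(m_dust(100.0, 1.0))   # dust mass for a solar-metallicity starburst
```

Because the slope of the relation is below unity, the molecular (and hence dust) mass grows more slowly than the SFR, which is one reason this prescription underpredicts the dust content of the most actively star-forming, high-mass galaxies discussed next.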
Figure \[fig:mdust2\] shows that while the @Zahid_2014 formulation for ${\mbox{$M_{\rm gas}$}}$ results in higher predictions for ${\mbox{$M_{\rm dust}$}}$ for most stellar masses, for galaxies with ${\mbox{$M_\star$}}\gtrsim 10^{9.5}~{\ensuremath{M_\odot}}$, the Zahid et al. and Sargent et al. formulations come into better agreement. Not too surprisingly, using equation (\[eq:sargent\]) leads to a model for ${\mbox{$M_{\rm dust}$}}$ that underpredicts observed values in high-mass galaxies (Figure \[fig:mdust\_multi2\]). This is because we did not account for the total galactic gas mass here, only $M_{\rm mol}$. However, ${\mbox{$M_{\rm dust}$}}$ calculated using equation (\[eq:sargent\]) is in good agreement with observations of low-mass galaxies with ${\mbox{$M_\star$}}\lesssim 10^8~{\ensuremath{M_\odot}}$. Moreover, for redshifts $z\gtrsim 1$, ${\mbox{$M_{\rm dust}$}}$ calculated using equation (\[eq:sargent\]) comes into better agreement with the observations of high-mass galaxies, since the gas fraction in galaxies is expected to be increasingly dominated by molecular gas as redshift increases. The total gas mass fraction, defined as $f_{g,\rm{tot}}={\mbox{$M_{\rm gas}$}}/({\mbox{$M_{\rm gas}$}}+ {\mbox{$M_\star$}})$, is the subject of a number of studies. For stellar masses in the range $10^{10}$–$4\times 10^{11}{\ensuremath{M_\odot}}$, the @Zahid_2014 relation predicts $f_{g,\rm{tot}}\approx 0.28$, 0.40, 0.48, and 0.55 for redshifts $z=1$, 2, 3, and 4, respectively, though the authors caution against extrapolating their relation beyond $z=1.6$ (private communication). These values are quite consistent with the *molecular* gas fractions, $f_{g,\rm{mol}}=M_{\rm mol}/(M_{\rm mol} + {\mbox{$M_\star$}})$, derived in many studies from CO and dust observations of galaxies having a similar range of stellar masses. @Daddi_2010 measured $f_{g,\rm{mol}}\approx 0.6$ for 6 galaxies with ${\mbox{$M_\star$}}=0.33$–$1.1\times 10^{11}{\ensuremath{M_\odot}}$ at $z=1.5$. @Tacconi_2010 studied 19 galaxies with ${\mbox{$M_\star$}}=0.3$–$3.4\times 10^{11}{\ensuremath{M_\odot}}$. 
They measured $f_{g,\rm{mol}}=0.2$–$0.5$ at $z\approx 1.1$ and $f_{g,\rm{mol}}=0.3$–$0.8$ at $z\approx 2.3$. Later on, @Tacconi_2013 found similar results with a larger sample of 52 galaxies, measuring average molecular gas fractions of 0.33 and 0.47 at $z\sim 1.2$ and 2.2. @Magdis_2012 measured $f_{g,\rm{mol}}\approx 0.36$ for a ${\mbox{$M_\star$}}=2\times 10^{11}{\ensuremath{M_\odot}}$ galaxy at $z=3.21$. @Saintonge_2013 measured $f_{g,\rm{mol}}\approx 0.45$ for ${\mbox{$M_\star$}}\sim 10^{10}{\ensuremath{M_\odot}}$ galaxies at $z=2.8$. More recently, @Bethermin_2015 used observations of dust emission in massive ($\sim 6\times 10^{10}{\ensuremath{M_\odot}}$) galaxies to measure $f_{g,\rm{mol}}=0.16$–0.35 at $z<1$, 0.27–0.41 at $1<z<2$, $\sim 0.5$ at $2<z<3$, and $\sim 0.6$ at $3<z<4$. In an extensive study of 145 galaxies from the COSMOS survey, @Scoville_2016 measured molecular gas fractions of $0.16$–$0.67$ at $z\approx 1.5$, $0.24$–$0.75$ at $z\approx 2.2$ and $0.23$–$0.85$ at $z\approx 4.4$. While it is generally agreed that high-redshift galaxies are gas-dominated, the details of the redshift evolution of $f_{g,\rm{mol}}$ are uncertain. By re-expressing the gas fraction (total or molecular) as $1/[1+(t_{\rm dep}\,\rm{sSFR})^{-1}]$, one can see its dependence on the gas depletion time $t_{\rm dep}$ and the specific star formation rate (sSFR), both of which are expected to be redshift-dependent quantities. For instance, some studies suggest that the typical sSFR of main sequence galaxies reaches a plateau by $z\sim 2$ [e.g., @Gonzalez_2010; @Rodighiero_2010; @Weinmann_2011], which would result in lower gas fractions than if the sSFR steadily increases beyond $z=2$, as suggested by other studies [e.g., @Bouwens_2012; @Stark_2013]. Thus, for example, if in this study we underestimated the amount of gas in galaxies, this would lead to underestimating ${\mbox{$M_{\rm dust}$}}$. 
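The re-expression of the gas fraction used above follows from ${\mbox{$M_{\rm gas}$}}=\rm{SFR}\times t_{\rm dep}$ and $\rm{sSFR}=\rm{SFR}/{\mbox{$M_\star$}}$. The check below confirms the identity numerically; the particular masses, SFR, and depletion time are arbitrary illustrative values, not observed ones.

```python
def gas_fraction_direct(m_gas, m_star):
    """f_g = M_gas / (M_gas + M_star)."""
    return m_gas / (m_gas + m_star)

def gas_fraction_rewritten(t_dep_yr, ssfr_per_yr):
    """f_g = 1 / (1 + (t_dep * sSFR)**-1), the form used in the text."""
    return 1.0 / (1.0 + 1.0 / (t_dep_yr * ssfr_per_yr))

# Illustrative values: M_star = 1e10 M_sun, SFR = 20 M_sun/yr, t_dep = 7e8 yr,
# so M_gas = SFR * t_dep = 1.4e10 M_sun.
m_star, sfr, t_dep = 1e10, 20.0, 7e8
f1 = gas_fraction_direct(sfr * t_dep, m_star)
f2 = gas_fraction_rewritten(t_dep, sfr / m_star)
print(f1, f2)  # identical by construction
```

The rewritten form makes explicit that any redshift evolution in either $t_{\rm dep}$ or the sSFR propagates directly into the gas fraction.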
For a fixed stellar mass, more dust means that the quantity of UV photons per unit mass of dust would be lower, resulting in a decreased dust temperature. It turns out that ${\mbox{$T_{\rm dust}$}}$ in our model is fairly robust to changes in the gas and dust mass. For instance, if ${\mbox{$M_{\rm dust}$}}$ were higher by a factor of two for all galaxy stellar masses at all redshifts, this would reduce the values of ${\mbox{$T_{\rm dust}$}}$ reported here to only $\sim 0.9$ of their original values.

Opacity, metallicity, and dust-to-gas-ratio {#sec:omd}
-------------------------------------------

In Section \[sec:opacity\], we described how we use Galactic laws to model the opacity $\kappa_\nu$. In particular, we used the @Beckwith_1990 relation for long wavelengths and discussed how changes in the normalization or slope of $\kappa_\nu$ could affect the resulting dust temperatures. We performed a series of tests to quantify these changes and found that changing $\beta$ by $\pm 1$ affects the average ${\mbox{$T_{\rm dust}$}}$ by a factor of only about 1.3, while varying the normalization by a factor of 2 affects $\langle{\mbox{$T_{\rm dust}$}}\rangle$ by a factor of only 1.1, on average. While our results are robust to the opacity model, ${\mbox{$T_{\rm dust}$}}$ is more sensitive to the prescription for galactic metallicity and the dust-to-gas ratio. For the relationship between metallicity and DGR, we adopt the prescription of @Remy-Ruyer_2014 determined from observations of galaxies at $z=0$. For galaxies with $\mathcal{Z}>0.26\mathcal{Z}_\odot$, equation (\[eq:dgr\]) states $\rm{DGR}/\rm{DGR}_\odot = \mathcal{Z}/\mathcal{Z}_\odot$. This assumption is likely to break down for high-redshift, young galaxies, where the dust production sites have not yet reached equilibrium, especially if dust production by AGB stars is important [e.g., @Dwek_2011]. Thus, such high-redshift galaxies would have higher DGRs than predicted by equation (\[eq:dgr\]), which would translate into higher dust masses. 
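The high-metallicity branch of the DGR prescription above can be sketched directly. Only the linear branch for $\mathcal{Z}>0.26\,\mathcal{Z}_\odot$ comes from the text; the solar normalization `dgr_sun` and the steeper low-metallicity branch below the break are placeholder assumptions added for illustration.

```python
def dgr(z_over_zsun, dgr_sun=0.01, low_z_slope=3.0):
    """Dust-to-gas ratio vs. metallicity. Above 0.26 Z_sun this follows
    equation (eq:dgr): DGR/DGR_sun = Z/Z_sun. Below the break, a power law
    continuous at 0.26 Z_sun is ASSUMED here; the text does not specify it."""
    if z_over_zsun > 0.26:
        return dgr_sun * z_over_zsun
    return dgr_sun * 0.26 * (z_over_zsun / 0.26) ** low_z_slope

print(dgr(1.0))   # solar metallicity recovers dgr_sun
print(dgr(0.1))   # low-metallicity galaxies fall below the linear scaling
```

Because ${\mbox{$M_{\rm dust}$}}$ scales linearly with the DGR, any upward revision of the DGR in young, high-redshift galaxies (e.g., from early AGB dust production) would raise the predicted dust masses by the same factor.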
Summary {#sec:summary}
=======

In this paper, we have modeled the evolution of dust in galaxies, from $z=0$ to $z=9.5$, and made predictions for the dust mass and temperature as a function of galaxy stellar mass and time. Our simple model employs empirically motivated prescriptions to determine relationships between galaxy halo mass, stellar mass, SFR, gas mass, metallicity, and dust-to-gas-ratio.

- Our model faithfully represents observed trends between galaxy dust mass and stellar mass out to $z\approx 6$.

- Our model predicts that the normalization of the galaxy ${\mbox{$M_{\rm dust}$}}$-${\mbox{$M_\star$}}$ relation gradually decreases over time from $z=9.5$ to $z=0$, suggesting that for fixed stellar masses, galaxies in the early Universe had greater quantities of dust than modern galaxies. We parameterize the ${\mbox{$M_{\rm dust}$}}$-${\mbox{$M_\star$}}$ relation as a broken power law and as a function of time. This relationship may be useful to observers who have measurements of a galaxy’s total stellar mass but are lacking observations that would provide an estimate of the dust mass.

- In our fiducial model, in which dust, gas, and stars are homogeneously mixed together in a spherically symmetric system, the dust temperature at fixed stellar mass increases from $z=0$ to $z=6$, indicating that earlier galaxies have hotter dust. The ${\mbox{$T_{\rm dust}$}}$-${\mbox{$M_\star$}}$ relation is not a monotonic function, but rather peaks at characteristic values of ${\mbox{$M_\star$}}$ that evolve with redshift. The height of the peaks is sensitive to the fraction of galactic surface area covered by dust; and the exact shape of the ${\mbox{$T_{\rm dust}$}}$-${\mbox{$M_\star$}}$ relation depends on the geometry of stars and the ISM.

- We make predictions for the observed 1.1-mm flux density, ${\mbox{$S_\nu$}}$, arising from dust emission in galaxies. Our model anticipates that for a fixed galaxy stellar mass, ${\mbox{$S_\nu$}}$ gradually decreases with cosmic time, until $z\approx 1$, at which point it sharply rises. 
There may be a population of low-mass (${\mbox{$M_\star$}}\lesssim 10^9$ [$M_\odot$]{}), high-redshift ($z\gtrsim 3$) galaxies that have dust as hot as, or hotter than, their more massive counterparts. Given our calculations of $S_\nu$, detecting the dust emission from such low-mass, high-$z$ galaxies to determine their dust temperatures would require long integration times, possibly making such observing programs challenging with current technology. However, deep ALMA observations of strong lensing clusters may provide the magnification needed to measure the reemission of stellar radiation by dust in this galaxy population in a more timely fashion. There may also be other promising ways to constrain the dust temperatures of early low-mass galaxies, for instance, by carefully modeling the contribution of their IR luminosities to the cosmic infrared background. In massive, high-redshift galaxies, JWST has the potential to observe dust extinction of UV photons—and thus constrain dust creation and destruction—in the observed optical and infrared range of wavelengths. Moreover, JWST observations have the potential to provide new constraints at high redshifts on the relations we use here between the SFR, metallicity, and dust-to-gas ratio. Thus, it can be hoped that combining future JWST and ALMA observations will illuminate new aspects of the content and evolution of dust in the earliest galaxies.

Nia Imara thanks the John Harvard Distinguished Science Fellowship Program for supporting this research. The authors are grateful to H. Jabran Zahid for valuable discussions during our work on this study. We also thank the referee, whose thorough reading and insightful comments helped to improve this paper.
---
abstract: 'This paper presents a logical approach to nonmonotonic reasoning based on the notion of a nonmonotonic consequence relation. A conditional knowledge base, consisting of a set of conditional assertions of the type [**if**]{} …[**then**]{} …, represents the explicit defeasible knowledge an agent has about the way the world generally behaves. We look for a plausible definition of the set of all conditional assertions entailed by a conditional knowledge base. In a previous paper [@KLMAI:89], S. Kraus and the authors defined and studied [*preferential*]{} consequence relations. They noticed that not all preferential relations could be considered as reasonable inference procedures. This paper studies a more restricted class of consequence relations, [*rational*]{} relations. It is argued that any reasonable nonmonotonic inference procedure should define a rational relation. It is shown that the rational relations are exactly those that may be represented by a [*ranked*]{} preferential model, or by a (non-standard) probabilistic model. The rational closure of a conditional knowledge base is defined and shown to provide an attractive answer to the question of the title. Global properties of this closure operation are proved: it is a cumulative operation. It is also computationally tractable. This paper assumes the underlying language is propositional.'
author:
- 'Daniel Lehmann[^1]'
- 'Menachem Magidor[^2]'
title: 'What does a conditional knowledge base entail? [^3]'
---

Introduction {#sec:intro}
============

Background {#subsec:back}
----------

Inference is the process of achieving explicit information that was only implicit in the agent’s knowledge. Human beings are astoundingly good at inferring useful and very often reliable information from knowledge that seems mostly irrelevant, sometimes erroneous and even self-contradictory. They are even better at correcting inferences they learn to be in contradiction with reality. 
It has been a decade now since Artificial Intelligence realized that the analysis of models of such inference is a major task. Many nonmonotonic systems have been proposed as formal models of this kind of inference. The best known are probably: circumscription [@McCarthy:80], the modal systems of [@McDer:80] and [@Moo:85], default logic [@Reiter:80] and negation as failure [@Clark:78]. An up-to-date survey of the field of nonmonotonic reasoning may be found in [@Reiter:87]. Though each of these systems is interesting per se, it is not clear that any one of them really captures the whole generality of nonmonotonic reasoning. Recently (see in particular the panel discussion of [@TARAK:88]) a number of researchers expressed their disappointment at existing systems and suggested that no purely logical analysis could be satisfactory. This work tries to contradict this pessimistic outlook. It takes a purely logical approach, grounded in A. Tarski’s framework of consequence relations [@Tar:56], and studies the very general notion of a [*sensible*]{} conclusion. It seems that this is a common ground that can be widely accepted: all reasonable inference systems draw only sensible conclusions. On the other hand, as will be shown, the notion of a sensible conclusion has a non-trivial mathematical theory, and many interesting properties are shared by all ways of drawing sensible conclusions. The reader is referred to [@KLMAI:89] for a full description of background, motivation and the relationship of the present approach with previous work in Conditional Logic. We only wish to add here that, even though the present work will be compared explicitly only with previous work of E. Adams, some of the intuitions developed here are related to intuitions already expressed in the first works on Conditional Logic, such as [@Ramsey:25] or [@Chisholm:46]. The interested reader may find many relevant articles in [@IFS:81] and should in particular look at [@Harper:81]. 
The main difference between our approach and Conditional Logic is that we take the view that the truth of a conditional assertion is [*necessary*]{}, i.e., does not depend on the state of the world. For us, worlds give truth values to propositions but not to assertions; preferential models give truth values to assertions, but not to propositions. The models we propose are therefore much simpler than those previously proposed in Conditional Logic, and it is doubtful whether they can shed light on the very complex questions of interest to the Conditional Logic community. Notations and terminology conform with those of [@KLMAI:89], but the present paper is essentially self-contained. Preliminary versions of part of the material contained in this paper appeared in [@LMTR:88; @Leh:89]. In [@KLMAI:89] it was suggested that items of [*default*]{}, i.e., [*defeasible*]{}, information should be represented as [*conditional assertions*]{}, i.e., pairs of formulas. For example, the information that [*birds normally fly*]{} will be represented by the conditional assertion $b {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}f$, where $b$ and $f$ are propositional variables representing [*being a bird*]{} and [*flying*]{} respectively. A set (finite or infinite) of conditional assertions is called a conditional knowledge base (knowledge base, in short) and represents the defeasible knowledge an agent may have. The fundamental question studied in this paper is the following: given a knowledge base [[**K**]{}]{}, what are the conditional assertions that should be considered as entailed, i.e., logically implied, by [[**K**]{}]{}? We consider that an assertion $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$ should be entailed by [[**K**]{}]{} if, on the basis of the defeasible information contained in [[**K**]{}]{} and knowing that the proposition $\alpha$ is true, it would be sensible to conclude (defeasibly) that $\beta$ is true. The question asked in the title and detailed just above has no simple answer and probably has no unique answer good for everyone in every situation. 
It may well be the case that, in different situations or for different domains of knowledge, the pragmatically [*right*]{} answers to the question of the title differ. This feeling has been recently expressed in [@DoyleWell:91]. The first part of this paper defines the notion of a [*rational*]{} set of assertions and defends the thesis that any reasonable answer to the question of the title must consist of such a set of assertions. \[rational\] The set of assertions entailed by any set of assertions is rational. The second part of the paper describes a specific construction, [*rational closure*]{}, and shows that the rational closure of a set of assertions is rational. This construction is then studied and its value as an answer to the question of the title assessed. We think that, in many situations, this is an acceptable answer, but do not claim that it provides an answer suitable to any situation. We have just argued that such an answer probably does not exist. One of the main interests of the rational closure construction is that it provides a proof of the existence of some uniform, well-behaved and elegant way of answering the question. In doing so, we develop criteria by which to judge possible answers. We shall in particular consider properties of the mapping from [[**K**]{}]{} to the set of all the assertions it entails and prove that our construction of the rational closure satisfies them. This effort and these results have to be compared with the essential absence, for the moment, of similar results about the systems of nonmonotonic reasoning mentioned above.

Plan of this paper {#subsec:plan}
------------------

We survey here the main parts of this paper. The introductions to the different sections contain a detailed description. Section \[sec:pref\] is devoted to preferential consequence relations. This family of relations was defined and studied in [@KLMAI:89]. 
The first part of this section mainly recalls definitions and results of [@KLMAI:89]; its last part presents deep new technical results on preferential entailment that will be used in the sequel, but it may be skipped on a first reading. Section \[sec:rat\] presents the restricted family of relations that is of interest to us: rational relations. This family was first defined, but not studied, in [@KLMAI:89 Section 5.4]. The main result of this section is a representation theorem characterizing rational relations in terms of ranked models. Section \[sec:entrank\] shows that entailment with respect to ranked models is exactly entailment with respect to preferential models and provides an alternative proof of E. Adams’ [@Adams:75] characterization of preferential entailment in terms of his probabilistic semantics. Appendix \[appen:nonstandard\] describes a family of models based on non-standard (in the sense of A. Robinson) probability models and shows that these models provide another exact representation for rational consequence relations. This provides us with a strong justification for considering rational relations. Section \[sec:ratclos\] draws on all previous sections and is the heart of this paper. It proposes an answer to the question of the title. The notion of rational closure is first defined abstractly and global properties proved. It is then shown that finite knowledge bases have a rational closure and a model-theoretic construction is provided. An efficient algorithm is proposed for computing the rational closure of a finite knowledge base. We then discuss some examples, remark that rational closure does not provide for inheritance of generic properties to exceptional classes, and finally propose a second thesis. 
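To make the rational closure construction concrete, the sketch below implements one standard way of computing it for a finite knowledge base over a small propositional language: antecedents are ranked by exceptionality, where a formula is treated as exceptional for a set of conditionals when the material counterparts of those conditionals classically entail its negation (this reduction to classical entailment is one reading of the efficient algorithm mentioned above; the encoding, helper names, and the penguin example are ours, not the paper's).

```python
import itertools

# Worlds: truth assignments to b (bird), f (flies), p (penguin).
ATOMS = ("b", "f", "p")
WORLDS = [dict(zip(ATOMS, v)) for v in itertools.product((False, True), repeat=3)]

def entails(materials, phi):
    """Classical entailment: every world satisfying all material
    implications also satisfies phi."""
    return all(phi(w) for w in WORLDS if all(m(w) for m in materials))

def exceptional(phi, conditionals):
    """phi is exceptional for a set of conditionals when the material
    counterparts alpha -> beta classically entail not-phi."""
    mats = [lambda w, a=a, c=c: (not a(w)) or c(w) for a, c in conditionals]
    return entails(mats, lambda w: not phi(w))

def ranking(kb):
    """C_0 = kb; C_{i+1} keeps the conditionals whose antecedent is
    exceptional for C_i; stops at the fixpoint."""
    levels = [list(kb)]
    while True:
        cur = levels[-1]
        nxt = [c for c in cur if exceptional(c[0], cur)]
        if len(nxt) == len(cur):
            return levels
        levels.append(nxt)

def rank(phi, levels):
    for i, level in enumerate(levels):
        if not exceptional(phi, level):
            return i
    return float("inf")

def rc_entails(alpha, beta, levels):
    """alpha |~ beta is in the rational closure iff rank(alpha) is infinite
    or rank(alpha and not beta) exceeds rank(alpha)."""
    return rank(alpha, levels) == float("inf") or \
        rank(lambda w: alpha(w) and not beta(w), levels) > rank(alpha, levels)

b = lambda w: w["b"]; f = lambda w: w["f"]; p = lambda w: w["p"]
KB = [(b, f), (p, b), (p, lambda w: not f(w))]  # birds fly, penguins are birds, penguins don't fly
LV = ranking(KB)
print(rc_entails(b, f, LV))                   # True: birds still fly
print(rc_entails(p, lambda w: not f(w), LV))  # True: penguins don't fly
print(rc_entails(p, f, LV))                   # False
```

On this knowledge base the penguin antecedent is exceptional once (rank 1) while the bird antecedent is not (rank 0), so the closure retains both defaults without concluding that penguins fly.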
Preferential relations and models {#sec:pref}
=================================

Introduction {#subsec:intropref}
------------

The first part of this section, i.e., Sections \[subsec:prefrel\]–\[subsec:prefmod\], recalls definitions and results of [@KLMAI:89] and provides a new example of a preferential relation that cannot be defined by a well-founded model. Then, in Section \[subsec:prefent\], the definition and some properties of preferential entailment are recalled from [@KLMAI:89] and some new remarks included. Preferential entailment is a fundamental notion that is used throughout the paper. The last three sections are essentially independent of each other. They present an in-depth study of preferential entailment. On a first reading, they should probably be read only cursorily. The results of Section \[subsec:some\] expand on part of [@Leh:89] and are used in Section \[subsec:rankpref\]. Section \[subsec:ranking\] presents a new technique (ranking) for studying preferential entailment. It is fundamental from Section \[subsec:proof\] onwards. Section \[subsec:comppref\] shows that preferential entailment is in the class co-NP, and hence is a co-NP-complete problem. A preliminary version of this last result appeared in [@Leh:89].

Preferential relations {#subsec:prefrel}
----------------------

Our first step must be to define a language in which to express the basic propositions. In this paper Propositional Calculus is chosen. Let ${\cal L}$ be the set of well-formed propositional formulas (hereafter, formulas) over a set of propositional variables. If the set of propositional variables chosen is finite, we shall say that ${\cal L}$ is logically finite. The classical propositional connectives will be denoted by $\neg , \vee , \wedge , \rightarrow$ and $\leftrightarrow$. The connective $\rightarrow$ therefore denotes material implication. Small Greek letters will be used to denote formulas. 
A world is an assignment of truth values to the propositional variables. The set ${\cal U}$ is the set of all worlds. The satisfaction of a formula by a world is defined as usual. The notions of satisfaction of a set of formulas, validity of a formula and satisfiability of a set of formulas are defined as usual. We shall write $\models \alpha$ iff $\alpha$ is valid, i.e., iff $\forall u \in {\cal U}$, $u \models \alpha$. If $\alpha$ and $\beta$ are formulas then the pair $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$ (read “from $\alpha$ sensibly conclude $\beta$”) is called a conditional assertion. A conditional assertion is a syntactic object to which the reader may attach any meaning he wants, but the meaning we attach to such an assertion, and against which the reader should check the logical systems to be presented in the upcoming sections, is the following: if $\alpha$ represents the information I have about the true state of the world, I will jump to the conclusion that $\beta$ is true. A conditional knowledge base is any set of conditional assertions. Typically it is a finite set, but it need not be. Conditional knowledge bases seem to provide a terse and versatile way of specifying defeasible information. They correspond to the explicit information an agent may have. Certain well-behaved sets of conditional assertions will be deemed worthy of being called [*consequence relations*]{}. We shall use the notation usual for binary relations to describe consequence relations. So, if ${\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}$ is a consequence relation, $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$ indicates that the pair $( \alpha , \beta )$ is in the consequence relation ${\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}$ and $\alpha \not{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$ indicates it is not in the relation. Consequence relations correspond to the implicit information an intelligent agent may have. Consequence relations are typically infinite sets. 
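Over a logically finite language all of these notions are directly mechanizable, which will be convenient in later examples. A minimal Python sketch (the tuple encoding of formulas and all names are ours and merely illustrative):

```python
from itertools import product

# Formulas as nested tuples: ('var', name), ('not', f), ('and', f, g), ('or', f, g).
def sat(world, f):
    # satisfaction of a formula by a world (an assignment of truth values)
    op = f[0]
    if op == 'var':
        return world[f[1]]
    if op == 'not':
        return not sat(world, f[1])
    if op == 'and':
        return sat(world, f[1]) and sat(world, f[2])
    if op == 'or':
        return sat(world, f[1]) or sat(world, f[2])
    raise ValueError(op)

def worlds(variables):
    # the set U of all worlds over a logically finite language
    for bits in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, bits))

def valid(f, variables):
    # |= f  iff  f is satisfied by every world
    return all(sat(w, f) for w in worlds(variables))

p, q = ('var', 'p'), ('var', 'q')
assert valid(('or', p, ('not', p)), ['p'])       # p or not-p is valid
assert not valid(('and', p, q), ['p', 'q'])      # p and q is not
```

Enumeration over all $2^n$ worlds is of course exponential in the number of variables; it is used here only for illustration.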
Certain especially interesting properties of sets of conditional assertions (i.e., binary relations on ${\cal L}$) will be described and discussed now. They are presented in the form of inference rules. Consequence relations are expected to satisfy those properties. $$\label{eq:LLE} {{\models \alpha \leftrightarrow \beta \ \ , \ \ \alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma} \over {\beta {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma}} \hspace {0.55cm} {\rm ({\bf Left \ Logical \ Equivalence}) }$$ $${{\models \alpha {\rightarrow}\beta\ \ , \ \ \gamma {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\alpha } \over {\gamma {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta}} \hspace {0.6cm} {\rm ({\bf Right \ Weakening})}$$ $$\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\alpha \hspace {2.9cm} {\rm ({\bf Reflexivity})}$$ $${{\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta \ \ , \ \ \alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma} \ \ \ \over {\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta \wedge \gamma}} \hspace{0.8cm}{\rm ({\bf And})}$$ $${{\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma \ \ , \ \ \beta {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma} \over {\alpha \vee \beta {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma}} \hspace {1.15cm} {\rm ({\bf Or})}$$ $$\label{eq:CM} {{\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta \ \ , \ \ \alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma} \over {\alpha \wedge \beta {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma}}\hspace {1.15cm} {\rm ({\bf Cautious \ Monotonicity})}$$ \[def:prefcons\] A set of conditional assertions that satisfies all six properties above is called a [*preferential*]{} 
consequence relation. A more leisurely introduction with motivation may be found in [@KLMAI:89], where a larger family of consequence relations, that of cumulative relations, was also studied. This family is closely related to the cumulative inference operations studied by D. Makinson in [@Mak:89]. The attentive reader of [@KLMAI:89] may have noticed that, there, we reserved ourselves an additional degree of freedom, which we have denied ourselves here. There, we allowed ${\cal U}$ to be a subset of the set of all worlds and considered the $\models$ symbol appearing in [**Left Logical Equivalence**]{} and in [**Right Weakening**]{} to be interpreted relative to this subset. This was felt necessary to deal with [*hard constraints*]{}. In this work, we shall suppose that a hard constraint $\alpha$ is interpreted as the [*soft*]{} constraint, i.e., the assertion $\neg \alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\bf false}$, which was recognized as equivalent to considering ${\cal U}$ to be the set of all worlds satisfying $\alpha$ in [@KLMAI:89 page 174]. The second proposal there, i.e., to consider $\alpha$ to be part of the facts, would not be consistent with our treatment of rational closure. For the reader’s ease of mind we shall mention two important derived rules. Both [**S**]{} and [**Cut**]{} are satisfied by any preferential relation. $${{\alpha \wedge \beta {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma} \over {\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\: \beta \rightarrow \gamma}}\hspace {2.2cm} {\rm ({\bf S})}$$ $${{\alpha \wedge \beta {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma \ \ , \ \ \alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta} \over {\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma \;}} \hspace {0.8cm} {\rm ({\bf Cut})}$$ The rule of Cut is presented here in a form that is not the most usual one. 
Notice, in particular, that we require the left-hand side of the second assumption to be part of the left-hand side of the first assumption. This version of Cut is close to the original form proposed by G. Gentzen. The following form, more usually used now, is [*not*]{} acceptable since it implies monotonicity. $${{\alpha \wedge \beta {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma \ \ , \ \ \alpha' {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta} \over {\alpha \wedge \alpha' {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma \;}}$$ Preferential models and representation theorem {#subsec:prefmod} ---------------------------------------------- The following definitions are also taken from [@KLMAI:89] and justified there. We shall define a class of models that we call [*preferential*]{} since they represent a slight variation on those proposed in [@Shoham:87]. The differences are nevertheless technically important. Preferential models give a model-theoretic account of the way one performs nonmonotonic inferences. The main idea is that the agent has, in his mind, a partial ordering on possible states of the world. State $s$ is less than state $t$ if, in the agent’s mind, $s$ is [*preferred*]{} to or more [*natural*]{} than $t$. The agent is willing to conclude $\beta$ from $\alpha$ if all [*most natural*]{} states that satisfy $\alpha$ also satisfy $\beta$. Some technical definitions are needed. Let $U$ be a set and $\prec$ a strict partial order on $U$, i.e., a binary relation that is irreflexive and transitive. \[def:min\] Let $V \subseteq U$. We shall say that $t \in V$ is minimal in $V$ iff there is no $s \in V$ such that $s \prec t$. We shall say that $t \in V$ is a minimum of $V$ iff for every $s \in V$, $s \neq t$, we have $t \prec s$. \[def:smooth\] Let $V \subseteq U$. We shall say that $V$ is smooth iff $\forall t \in V$, either there is some $s$, minimal in $V$, such that $s \prec t$, or $t$ is itself minimal in $V$. We may now define the family of models we are interested in. 
\[def:prefmod\] A [*preferential*]{} model $W$ is a triple $\langle S , l , \prec \rangle$ where $S$ is a set, the elements of which will be called states, $l : S \rightarrow {\cal U}$ assigns a world to each state and $\prec$ is a strict partial order on $S$ satisfying the following [*smoothness condition*]{}: $\forall \alpha \in {\cal L}$, the set of states $\widehat{\alpha} {\stackrel{\rm def}{=}}\{ s \mid s \in S , s \models \alpha \}$ is smooth, where $s \models \alpha$ (read $s$ satisfies $\alpha$) is defined as: $s \models \alpha$ iff $l(s) \models \alpha$. The model $W$ will be said to be finite iff $S$ is finite. It will be said to be well-founded iff $\prec$ is well-founded, i.e., iff there is no infinite descending chain of states. The smoothness condition is only a technical condition. It is satisfied in any well-founded preferential model, and, in particular, in any finite model. When the language ${\cal L}$ is logically finite, we could have limited ourselves to finite models and forgotten the smoothness condition. Nevertheless, Lemma \[le:well\] will show that, in the general case, for the representation result of Theorem \[compth:pref\] to hold we could not have required preferential models to be well-founded. The requirement that the relation $\prec$ be a strict partial order has been introduced only because such models are nicer and the smoothness condition is easier to check on those models, but the soundness result is true for the larger family of models where $\prec$ is just any binary relation (Definitions \[def:min\] and \[def:smooth\] also make sense for any binary relation $\prec$). In such a case, obviously, the smoothness condition cannot be dropped even for finite models. The completeness result holds, obviously, also for the larger family, but is less interesting. We shall now describe the consequence relation defined by a model. \[def:prefent\] Suppose a model $W = \langle S , l , \prec \rangle$ and formulas $\alpha , \beta \in {\cal L}$ are given. The consequence relation defined by $W$ will be denoted by ${\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}_{W}$ and is defined by: $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}_{W} \beta$ iff for any $s$ minimal in $\widehat{\alpha}$, $s \models \beta$. If $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}_{W} \beta$ we shall say that the model $W$ satisfies the conditional assertion $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$, or that $W$ is a model of $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$. The following theorem characterizes preferential consequence relations. 
\[compth:pref\] A binary relation on ${\cal L}$ is a preferential consequence relation iff it is the consequence relation defined by some preferential model. If the language ${\cal L}$ is logically finite, then every preferential consequence relation is defined by some finite preferential model. The next result shows we could not have restricted ourselves to well-founded models. \[le:well\] There is a preferential relation that is defined by no well-founded preferential model. -1000.5 pt[**Proof:** ]{} Let ${\cal L}$ be the propositional calculus on the variables ($\omega$ is the set of natural numbers). We shall consider the model where $V$ is the set , iff (i.e., there is an infinite descending chain of states with a bottom element) and is [**true**]{} iff , for and . The smoothness property is satisfied since the only subsets of $V$ that do not have a minimum are infinite sets $A$ that do not contain $s_\infty$ and any $\alpha \in L$ that is satisfied in all states of such a set $A$ is also satisfied in $s_\infty$. The model $W$ defines a preferential relation such that , and , but . But clearly, any preferential model defining such a relation must contain an infinite descending chain of states. 8.5pt We do not know of any direct characterization of those relations that may be defined by well-founded preferential models. But Lemma \[le:co2\] will show that many relations may be defined by well-founded preferential models. It is clear, though, that the canonical preferential model provided by the proof of Theorem \[compth:pref\] is rarely well-founded. Consider, for example, the preferential closure of the empty knowledge base on a logically infinite language ${\cal L}$. It may be defined by some well-founded preferential model (the order $\prec$ is empty). But its canonical model is not well-founded (consider states whose second components are larger and larger disjunctions). 
We may only make the following obvious remark: if the underlying language ${\cal L}$ is logically finite, then all canonical models are well-founded. Preferential entailment {#subsec:prefent} ----------------------- Now that we have a proof-theoretic definition of a class of relations, a class of models and a representation theorem relating them, it is natural to put down the following definition. It will serve us as a first approximate answer to the question of the title. \[p-implic\] The assertion ${\cal A}$ is [*preferentially entailed*]{} by [**K**]{} iff it is satisfied by all preferential models of [**K**]{}. The set of all conditional assertions that are preferentially entailed by [**K**]{} will be denoted by ${\bf K}^p$. The preferential consequence relation ${\bf K}^p$ is called the preferential closure of [**K**]{}. In [@KLMAI:89] it was noted that the characterization of preferential consequence relations obtained in Theorem \[compth:pref\] enables us to prove the following. \[log:imp2\] Let [**K**]{} be a set of conditional assertions, and ${\cal A}$ a conditional assertion. The following conditions are equivalent: 1. ${\cal A}$ is preferentially entailed by [**K**]{}, i.e., ${\cal A} \in {\bf K}^p$ 2. ${\cal A}$ has a proof from [**K**]{} in the system [**P**]{} consisting of the Rules \[eq:LLE\] to \[eq:CM\]. The following compactness result follows. \[comp:pref\] [**K**]{} preferentially entails ${\cal A}$ iff a finite subset of [**K**]{} does. The following also follows from Theorem \[log:imp2\]. \[co1\] The set ${\bf K}^p$, considered as a consequence relation, is a preferential consequence relation, therefore there is a preferential model that satisfies exactly the assertions of ${\bf K}^p$. If [**K**]{} is itself a preferential consequence relation then ${\bf K} = {\bf K}^p$. The set ${\bf K}^p$ grows monotonically with [**K**]{}. We see that the operation is a compact monotonic consequence operation in the sense of Tarski [@Tar:56]. 
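Definition \[def:prefent\] is effective when the model is finite: whether a given finite preferential model satisfies a conditional assertion can be checked directly, and this check is the basic subroutine behind reasoning about preferential models of a knowledge base. A small illustrative Python sketch on a two-state “penguin” model (the encoding — worlds as dictionaries, formulas as predicates, the order $\prec$ as a set of pairs — is ours and merely illustrative):

```python
def minimal_states(states, prec, holds):
    # states of alpha-hat with no strictly preferred state also in alpha-hat
    sats = [s for s in states if holds(s)]
    return [s for s in sats if not any((t, s) in prec for t in sats)]

def satisfies(model, alpha, beta):
    # the model satisfies alpha |~ beta iff every state minimal in
    # alpha-hat is labelled by a world satisfying beta
    states, label, prec = model
    mins = minimal_states(states, prec, lambda s: alpha(label[s]))
    return all(beta(label[s]) for s in mins)

# state 0 (a flying non-penguin bird) is preferred to state 1 (a penguin bird)
label = {0: {'b': True, 'p': False, 'f': True},
         1: {'b': True, 'p': True,  'f': False}}
model = ([0, 1], label, {(0, 1)})
b, p, f = (lambda w: w['b']), (lambda w: w['p']), (lambda w: w['f'])
assert satisfies(model, b, f)                                    # birds fly
assert satisfies(model, p, lambda w: not w['f'])                 # penguins do not
assert satisfies(model, lambda w: w['b'] and w['p'],
                 lambda w: not w['f'])                           # penguin birds do not
assert not satisfies(model, b, p)
```

Note the nonmonotonicity: the same model satisfies both “birds fly” and “penguin birds do not fly”.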
We have a particular interest in finite knowledge bases. It is therefore useful to put down the following definition. \[def:fingen\] A preferential consequence relation is [*finitely generated*]{} iff it is the preferential closure of a finite knowledge base. Lemma \[le:co2\] will show that finitely generated relations have interesting properties. In [@KLMAI:89], it was shown that any preferential relation defines a strict ordering on formulas by: iff and . \[def:wellf\] A preferential relation is [*well-founded*]{} iff the strict ordering relation $<$ it defines is well-founded. The following is easy to show. \[le:wellfcan\] A preferential relation is well-founded iff the canonical model built in the proof of Theorem \[compth:pref\] is well-founded. We noticed, at the end of Section \[subsec:prefmod\], that not all preferential relations that may be defined by well-founded preferential models are well-founded. \[le:co2\] Any finitely generated preferential relation is defined by some well-founded preferential model. -1000.5 pt[**Proof:** ]{} Let [[**K**]{}]{} be any finite set of assertions. Let $L_i , i \in \omega$ be an infinite sequence of larger and larger [*logically finite*]{} sublanguages of $L$ such that every $L_i$ contains all the formulas appearing in the assertions of [[**K**]{}]{} and such that $L$ is the union of the $L_i$’s. By Theorem \[compth:pref\], for each $i$ there is a [*finite*]{} preferential model $W'_i$ that defines the preferential closure of [[**K**]{}]{}over $L_i$. Let $W_i$ be the finite preferential model (over $L$) obtained by extending the labeling function of $W'_i$ to the variables of $L - L_i$ in some arbitrary way. Clearly $W_i$ is a preferential model of [[**K**]{}]{}. Let $W$ be the structure obtained by putting all the $W_i$’s one alongside the other (the partial ordering $\prec$ on $W$ never relates states belonging to $W_i$’s with different $i$’s). 
The structure $W$ is well-founded, therefore satisfies the smoothness condition and is a preferential model. Any assertion that is preferentially entailed by [[**K**]{}]{} (over $L$) is satisfied by every $W_i$, and is therefore satisfied by $W$. For any assertion  that is not preferentially entailed by [[**K**]{}]{} one may find a language $L_i$ large enough to include the formulas of . Over $L_i$, the assertion  is not preferentially entailed by [[**K**]{}]{}, by Theorem \[log:imp2\], since a proof in the small language is a proof in the larger one. Therefore $W'_i$ does not satisfy . We conclude that $W_i$ does not satisfy  and that $W$ does not satisfy . 8.5pt Some properties of preferential entailment {#subsec:some} ------------------------------------------ The following result, Theorem \[the:new\], is new. It is important for several reasons. It uses the semantic representation of Theorem \[compth:pref\] and a direct proof using only proof-theoretic arguments seems difficult. It will be used in Section \[subsec:rankpref\]. Its Corollary \[co:reso\] should provide a starting point for the application to preferential entailment of methods based on or related to resolution. First a definition. \[def:cons\] If a formula $\alpha$ is such that , we shall say that $\alpha$ is consistent (for the consequence relation ). A formula is consistent for a model iff it is consistent for the consequence relation defined by the model, or equivalently iff there is a state in the model that satisfies . We shall now define a basic operation on preferential models. Suppose $M$ is a preferential model . For we shall write iff or . Let $\alpha$ be a formula and $u \in V$ be a minimal element of $\widehat\alpha$. Let $\prec_{\alpha}^{u}$ be the strict partial order obtained from $\prec$ by making $u$ a minimum of $\widehat\alpha$, i.e., iff or and there exists a state such that . The following lemma describes the properties of the construction described above. 
\[prelim\] The structure is a preferential model. The consequence relation defined by $M_{\alpha}^{u}$ extends the consequence relation defined by $M$. In this model $u$ is a minimum of $\widehat\alpha$. Both models have the same set of consistent formulas. -1000.5 pt[**Proof:** ]{} It is easy to see that $\prec_{\alpha}^{u}$ is irreflexive and transitive. It is also easy to see that, under $\prec_{\alpha}^{u}$, $u$ is a minimum of $\widehat\alpha$. We want to show now that, for any $\beta \in L$, the set $\widehat\beta$ is smooth, under $\prec_{\alpha}^{u}$. Let $s \in \widehat\beta$. Since $\widehat\beta$ is smooth under $\prec$, there is a state $t$, minimal under $\prec$ in $\widehat\beta$ such that . If $t$ is still minimal in $\widehat\beta$ under $\prec_{\alpha}^{u}$, then we are done. If not, there is some state $v \in \widehat\beta$ such that and . Since $\widehat\beta$ is smooth under $\prec$, there is a state $w$, minimal in $\widehat\beta$ under $\prec$ such that . Since , $w$ must be minimal in $\widehat\beta$ also under $\prec_{\alpha}^{u}$. But . We have shown that $\widehat\beta$ is smooth under $\prec_{\alpha}^{u}$. To see that the consequence relation defined by $M_{\alpha}^{u}$ extends the one defined by $M$, just notice that, since $\prec_{\alpha}^{u}$ extends $\prec$, all minimal elements under the former are also minimal under the latter. Lastly, since $M$ and $M_{\alpha}^{u}$ have exactly the same set of worlds and the same labeling function, they define exactly the same set of consistent formulas. 8.5pt \[the:new\] Let [[**K**]{}]{} be a knowledge base and  an assertion that is [*not*]{} preferentially entailed by [[**K**]{}]{}. The formulas that are inconsistent for the preferential closure of are those that are inconsistent for the preferential closure of [[**K**]{}]{}. -1000.5 pt[**Proof:** ]{} Suppose that  is not preferentially entailed by [[**K**]{}]{}. 
Then, let be the preferential model the existence of which is guaranteed by Theorem \[compth:pref\] and that defines ${\bf K}^p$. The model $W$ does not satisfy . There is therefore a minimal element $s \in S$ of $\widehat{\alpha}$ that does not satisfy $\beta$. Consider now the model $W' {\stackrel{\rm def}{=}}W_{\alpha}^{s}$. By Lemma \[prelim\] this is a preferential model that satisfies all the assertions satisfied by $W$, therefore it satisfies all the assertions of [[**K**]{}]{}. Since $s$ is the only minimal element of $\widehat{\alpha}$, it satisfies . Suppose  is inconsistent for . Then it must be inconsistent for $W'$. By Lemma \[prelim\] it is inconsistent for $W$, therefore inconsistent for ${\bf K}^p$. 8.5pt \[co:reso\] Let [[**K**]{}]{} be a conditional knowledge base and  a conditional assertion. The assertion  is preferentially entailed by [[**K**]{}]{}iff the assertion is preferentially entailed by . -1000.5 pt[**Proof:** ]{} The [*only if*]{} part follows immediately from the soundness of the [**And**]{} rule. The [*if*]{} part, follows immediately from Theorem \[the:new\]. 8.5pt The rank of a formula {#subsec:ranking} --------------------- In this section, we introduce a powerful tool for studying preferential entailment. Given a knowledge base, we shall attach an ordinal, its rank, to every formula. We shall prove an important result concerning those ranks, and, in particular, show that a knowledge base [[**K**]{}]{} and its preferential closure [${\bf K}^p$]{} define the same ranks. \[def:exc\] Let [[**K**]{}]{} be a conditional knowledge base (i.e., a set of conditional assertions) and  a formula. The formula  is said to be [*exceptional*]{} for [[**K**]{}]{} iff [[**K**]{}]{} preferentially entails the assertion . The conditional assertion is said to be exceptional for [[**K**]{}]{} iff its antecedent  is exceptional for [[**K**]{}]{}. 
The set of all assertions of [[**K**]{}]{} that are exceptional for [[**K**]{}]{} will be denoted by $E ( {{\bf K}})$. Notice that $E ( {{\bf K}}) \subseteq {{\bf K}}$. If all assertions of [[**K**]{}]{} are exceptional for [[**K**]{}]{}, i.e., if [[**K**]{}]{} is equal to $E ( {{\bf K}})$, we shall say that [[**K**]{}]{} is completely exceptional. The empty knowledge base is completely exceptional. Notice that, in the definition above, [[**K**]{}]{} may be replaced by its preferential closure [${\bf K}^p$]{}. Given a conditional knowledge base [[**K**]{}]{} (not necessarily finite), we shall now define by ordinal induction an infinite non-increasing sequence of subsets of [[**K**]{}]{}. Let $C_0$ be equal to [[**K**]{}]{}. For any successor ordinal $\tau + 1$, $C_{\tau + 1}$ will be $E ( C_{\tau} )$ and for any limit ordinal $\tau$, $C_{\tau}$ is the intersection of all $C_\rho$ for $\rho < \tau$. It is clear that, from some point on, all $C$’s are equal and completely exceptional (they may be empty, but need not be so). We shall say that a formula $\alpha$ has rank $\tau$ (for [[**K**]{}]{}) iff $\tau$ is the least ordinal for which $\alpha$ is not exceptional for $C_\tau$. A formula that is exceptional for all $C_\tau$’s is said to have no rank. Notice that such a formula is exceptional for a completely exceptional knowledge base. The following is a fundamental lemma on preferential entailment. It says that, as far as preferential entailment is concerned, non-exceptional assertions cannot help in deriving exceptional assertions. The notion of rank defined above proves to be a powerful tool for studying preferential entailment. \[le:funda\] Let $\tau$ be an ordinal. Let [[**K**]{}]{} be a conditional knowledge base and ${\cal A}$ a conditional assertion whose antecedent has rank larger than or equal to $\tau$ (or has no rank). Then ${\cal A}$ is preferentially entailed by $C_{0}$ iff it is preferentially entailed by $C_{\tau}$. [**Proof:** ]{} The [*if*]{} part follows from the fact that $C_{\tau}$ is a subset of $C_{0}$. 
The [*only if*]{} part is proved by induction on the length of the proof of  from $C_{0}$. If the proof has length one, i.e.,  is obtained by [**Reflexivity**]{} or is an assertion of $C_{0}$, then the result is obvious. If the last step of the proof is obtained by [**Right Weakening**]{} or [**And**]{}, the result follows from a trivial use of the induction hypothesis. If the last step of the proof is obtained by [**Left Logical Equivalence**]{}, the result follows from the induction hypothesis and the fact that, if  and $\alpha'$ are logically equivalent then   and $\alpha'$ have the same rank. If the last step is a use of [**Or**]{}, and  is of the form then just remark that the rank of the disjunction is the smaller of the ranks of  and . Both  and  have therefore a rank larger or equal to $\tau$ and one concludes by the induction hypothesis. If the last step is a use of [**Cautious Monotonicity**]{}, and  is of the form , where  and are preferentially entailed (with short proofs) by $C_{0}$, let $\sigma$ be the rank of . By the induction hypothesis $C_{\sigma}$ preferentially entails . Since  is not exceptional for $C_\sigma$, we conclude that is not exceptional for $C_{\sigma}$, and therefore has rank $\sigma$. But has rank larger or equal to $\tau$. Therefore . The formula  has rank larger or equal to $\tau$ and we may apply the induction hypothesis to conclude that both  and are preferentially entailed by $C_{\rho}$. 8.5pt \[le:dontcare\] Let [[**K**]{}]{} and ${{\bf K}}'$ be knowledge bases such that . For any formula, the rank it is given by ${{\bf K}}'$ is equal to the rank it is given by [[**K**]{}]{}. -1000.5 pt[**Proof:** ]{} Using Lemma \[le:funda\], one shows by ordinal induction that . 8.5pt The following definition will be useful in Section \[subsec:proof\]. \[def:acc\] A knowledge base [[**K**]{}]{} is said to be [*admissible*]{} iff all formulas that have no rank for [[**K**]{}]{} are inconsistent for [[**K**]{}]{}. 
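For a finite knowledge base over a logically finite language, the sequence of $C_\tau$’s, and hence the rank of every antecedent, can be computed. The Python sketch below assumes the reduction of exceptionality to classical entailment (a formula is exceptional for a base iff the material counterparts $\gamma \rightarrow \delta$ of its assertions jointly entail the formula’s negation), in the spirit of the algorithm of Section \[subsec:comprat\]; this reduction, the encoding and all names are ours:

```python
from itertools import product

def worlds(variables):
    # all truth assignments over a logically finite language
    for bits in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, bits))

def exceptional(alpha, K, variables):
    # assumed test: alpha is exceptional for K iff no world satisfying
    # every material counterpart g -> d of K satisfies alpha
    return not any(alpha(w) and all((not g(w)) or d(w) for g, d in K)
                   for w in worlds(variables))

def ranks(K, variables):
    # C_0 = K, C_{tau+1} = E(C_tau); the rank of an assertion's antecedent
    # is the first tau at which it stops being exceptional (None = no rank)
    C = list(range(len(K)))
    rank = {i: None for i in range(len(K))}
    tau = 0
    while True:
        sub = [K[i] for i in C]
        E = [i for i in C if exceptional(K[i][0], sub, variables)]
        for i in C:
            if i not in E:
                rank[i] = tau
        if len(E) == len(C):          # completely exceptional: fixpoint reached
            return rank
        C, tau = E, tau + 1

# birds fly, penguins are birds, penguins do not fly
b, p = (lambda w: w['b']), (lambda w: w['p'])
K = [(b, lambda w: w['f']), (p, b), (p, lambda w: not w['f'])]
assert ranks(K, ['b', 'p', 'f']) == {0: 0, 1: 1, 2: 1}
```

As expected, “bird” gets rank 0 while the exceptional antecedent “penguin” gets rank 1.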
We shall immediately show that many knowledge bases are admissible. \[co:acc\] If the preferential closure of [[**K**]{}]{} is defined by some well-founded preferential model, then [[**K**]{}]{} is admissible. In particular, any finite knowledge base is admissible. [**Proof:** ]{} We have noticed, in Lemma \[le:dontcare\], that ranks are stable under the replacement of a knowledge base by its preferential closure. Let $P$ be the preferential closure of [[**K**]{}]{}. Suppose $P$ is defined by some well-founded preferential model $W$. Suppose $\alpha$ has no rank. We must show that no state of $W$ satisfies $\alpha$. As noticed above, there is an ordinal $\tau$ such that $C_{\tau}$ is completely exceptional and $\alpha$ is exceptional for $C_{\tau}$. We shall show that no state of $W$ satisfies a formula that is exceptional for $C_{\tau}$. Indeed, if there were such a state, there would be such a minimal state, $s$, since $W$ is well-founded. But $W$ is a model of $C_{\tau}$ and no state below $s$ satisfies any antecedent of $C_{\tau}$, since $C_{\tau}$ is completely exceptional. Therefore the preferential model consisting of $s$ alone is a model of $C_{\tau}$. But, in a model of $C_{\tau}$, no minimal state satisfies a formula that is exceptional for $C_{\tau}$. A contradiction. It follows now from Lemma \[le:co2\] that any finite knowledge base is admissible. Computing preferential entailment {#subsec:comppref} --------------------------------- This section is devoted to the study of the computational complexity of preferential entailment. It is not needed in the sequel. We shall explain in Section \[subsec:discpref\] why preferential entailment is not the [*right*]{} notion of entailment to answer the question of the title; nevertheless, preferential entailment is a central concept and it is therefore worthwhile studying its computational complexity. 
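The complexity analysis below rests on the observation that a claim of [*non*]{}-entailment admits short certificates: a linearly ordered counter-model can be written down succinctly and checked in time polynomial in its size. A Python sketch of such a check (the certificate format — a chain of worlds, most preferred first — is a simplification of ours, not the paper’s):

```python
def model_sat(chain, g, d):
    # in a linearly ordered model, the unique minimal state of g-hat is the
    # first (most preferred) world in the chain satisfying g, if any
    for w in chain:
        if g(w):
            return d(w)
    return True                        # g holds nowhere: g |~ d vacuously

def refutes(chain, K, alpha, beta):
    # the chain certifies non-entailment of alpha |~ beta from K iff the
    # induced model satisfies every assertion of K but not alpha |~ beta
    return (all(model_sat(chain, g, d) for g, d in K)
            and not model_sat(chain, alpha, beta))

b, f = (lambda w: w['b']), (lambda w: w['f'])
K = [(b, f)]                           # birds fly
# a single world certifies that K does not entail "birds |~ non-penguins"
chain = [{'b': True, 'f': True, 'p': True}]
assert refutes(chain, K, b, lambda w: not w['p'])
```

Guessing a suitable chain is the non-deterministic part of the algorithm; the verification above runs in time polynomial in the sizes of the chain and of the knowledge base.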
The results here are quite encouraging: the problem is in co-NP, i.e., in the same complexity class as the problem of deciding whether a propositional formula is valid. \[le:aux1\] Let [[**K**]{}]{} be a finite conditional knowledge base and $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$ a conditional assertion that is not preferentially entailed by [[**K**]{}]{}. There is a finite totally (i.e., linearly) ordered preferential model of [[**K**]{}]{} no state of which satisfies $\alpha$ except the top state. This top state satisfies $\alpha$ and does not satisfy $\beta$. [**Proof:** ]{} Let ${\cal L'} \subseteq {\cal L}$ be a logically finite language, large enough to contain $\alpha$, $\beta$ and all the assertions of [[**K**]{}]{}. Let us now consider ${\cal L'}$ to be our language of reference. Clearly, $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$ is not preferentially entailed by [[**K**]{}]{}, since a proof over the smaller language is a proof over the larger language. By Theorem \[compth:pref\], there is a finite preferential model $W$ (over ${\cal L'}$) of [[**K**]{}]{} that does not satisfy $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$. In $W$, there is therefore a state $s$, minimal in $\widehat{\alpha}$, that satisfies $\alpha$ but does not satisfy $\beta$. Consider the submodel $W'$ obtained by deleting all states of $W$ that are not below or equal to $s$. It is clearly a finite preferential model of [[**K**]{}]{}, with a top state that satisfies $\alpha$ but not $\beta$. Let $V$ be obtained by imposing on the states of $W'$ any total ordering that respects the partial ordering of $W'$. Since there are only finitely many states in $V$, the smoothness condition is verified and $V$ is a preferential model (on ${\cal L'}$). It is a model of [[**K**]{}]{} but not of $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$. Now we may extend the labeling function of $V$ to the propositional variables of ${\cal L}$ that are not in ${\cal L'}$ in any way we want, to get the model requested. Notice that the model obtained satisfies the smoothness condition because it is finite. 
8.5pt \[comp\] There is a non-deterministic algorithm that, given a finite set [**K**]{} of conditional assertions and a conditional assertion ${\cal A}$, checks that ${\cal A}$ is not preferentially entailed by [**K**]{}. The running time of this algorithm is polynomial in the size of [**K**]{} (sum of the sizes of its elements) and ${\cal A}$. -1000.5 pt[**Proof:** ]{} Let [**K**]{} be . Let be a set of indices. We shall define: and . A sequence is a sequence of pairs for , where $I_i \subseteq I$ and $f_i$ is a world. Let $\alpha$ and $\beta$ be in ${\cal L}$. A sequence , , is a witness for (we mean a witness that is not preferentially entailed by [**K**]{} ) iff 1. \[it1\] ,   2. \[itp\] ,   3. \[it2\] ,   4. \[it3\] 5. \[it4\] ,   6. \[it5\] . We must check that: witnesses are short and a conditional assertion has a witness iff it is not preferentially entailed by [**K**]{}. For the first point, just remark that, for the inclusion is strict because of items \[it2\] and \[itp\]. The length of the sequence is therefore bounded by the number of assertions in [**K**]{}. But, each pair has a short description. For the second point, suppose first there is a witness for . Then the ranked model $W$ consisting of worlds $f_0 , \ldots , f_n$ where for satisfies [**K**]{} but not . That it does not satisfy is clear from items \[it4\] and \[it5\]. Let us check that $W$ satisfies . If none of the $f_k$’s, satisfies $\gamma_i$ then $W$ satisfies for any $\eta$ in ${\cal L}$. Suppose therefore that $j$ is the smallest $k$ for which . We must show that . But, by items \[it3\] and \[it2\] and by item \[it1\], . Suppose now that is not preferentially entailed by some given finite [**K**]{}. By Lemma \[le:aux1\], there is a finite linearly ordered model $W$ of [**K**]{}, no state of which satisfies $\alpha$, except the top state that is labeled by a world $m$ that satisfies . Let . 
It is easy to see that ([*remark*]{} 1): if $V$ is any preferential model of [**K**]{}, for any set , $V$ satisfies . Let us now consider first the set . It cannot be empty, therefore it has a unique minimal state. Let $f_0$ be the label of this state. We must consider two cases. First suppose that . Then $f_0$ is minimal in $\widehat\alpha$ and therefore must be $m$. In such a case is a witness. The only thing to check is that item \[it1\] is satisfied. Indeed either and we conclude by remark 1 or and $m$ satisfies none of the $\gamma_i$’s. Let us deal now with the case . We shall build a sequence beginning by . Since $m$ does not satisfy $\alpha$, it must satisfy $\varphi_{I_{0}}$, which takes care of item \[itp\]. Remark 1 takes care of item \[it1\]. Let us now define . $I_1$ is strictly smaller than $I_0$. We may now consider the set . It is not empty and therefore has a unique minimal element and we may, in this way, go on and build a proof for . 8.5pt Since it is clear that preferential non-entailment is at least as hard as satisfiability (consider assertions with antecedent [**true**]{}), we conclude that it is an NP-complete problem, i.e., that preferential entailment is co-NP-complete. A remark of J. Dix that will be explained at the end of Section \[subsec:comprat\] shows that preferential entailment is reducible to the computation of rational closure and that this reduction, when applied to Horn formulas, requires only the consideration of Horn formulas. It follows that, if we restrict ourselves to [*Horn*]{} assertions, computing preferential entailment has only polynomial complexity. Rationality {#sec:rat} =========== Introduction {#subsec:introrat} ------------ In this section we explain why not all preferential relations represent reasonable nonmonotonic inference procedures. We present some additional principles of nonmonotonic reasoning and discuss them. 
Those principles are structurally different from the rules of preferential reasoning, since they are not of the type: deduce some assertion from some other assertions. Sections \[subsec:negrat\] and \[subsec:disrat\] present weak principles. Some results are proven concerning those principles. Deeper results on those principles, found after a first version of this paper had been circulated, appear in [@FLMo:91]. Our central principle is presented in Section \[subsec:ratmon\]. Those principles were first described in [@KLMAI:89] but the technical results presented here are new. In \[subsec:discpref\], the value of preferential entailment as an answer to the question of the title is discussed. Our conclusion is that it is not a satisfactory answer, since it does not provide us with a rational relation. Then, in Section \[subsec:rank\], a restricted family of preferential models, the family of ranked models, is presented and a representation theorem is proved. The result is central to this paper but the proof of the representation theorem may be skipped on a first reading. The representation theorem appeared in [@LMTR:88]. The family of ranked models is closely related to, but different from, a family studied in [@Del:87] and Section \[subsec:del\] explains the differences. Negation Rationality {#subsec:negrat} -------------------- In [@KLMAI:89 Section 5.4], it was argued that not all preferential consequence relations represented reasonable inference operations. Three [*rationality*]{} properties were discussed there, and it was argued that all three were desirable. Those properties do not lend themselves to be presented as [*Horn*]{} rules (deduce the presence of an assertion in a relation from the presence of other assertions) but have the form: deduce the absence of an assertion from the absence of other assertions. All of them are implied by [**Monotonicity**]{}. The reader may find the discussion of [@KLMAI:89] useful. Here technical results will be described. 
The first property considered is the following. $$\label{neg:rat} {{\alpha \wedge \gamma {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\beta \ , \ \alpha \wedge \neg \gamma {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\beta} \over {\alpha {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\beta}} \hspace {1.6cm} {\rm ({\bf Negation \ Rationality}) }$$ \[le:neg:rat\] There is a preferential relation that does not satisfy [**Negation Rationality**]{}. -1000.5 pt[**Proof:** ]{} Take a preferential model containing four states: , with $s_0 \prec s_1$ and $s_2 \prec s_3$. Let the even states be the only states satisfying $q$ and $s_0$ and $s_3$ be the only states satisfying $p$. One easily verifies that the consequence relation defined by this model is such that , but and . 8.5pt No semantic characterization of relations satisfying [**Negation Rationality**]{} is known. It has been shown in [@KLMAI:89] that the consequence relation defined by [*Circumscription*]{} does not always satisfy [**Negation Rationality**]{}. Disjunctive Rationality {#subsec:disrat} ----------------------- The next property is the following. $$\label{disj:rat} {{\alpha {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\gamma \ \ , \ \ \beta {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\gamma} \over {\alpha \vee \beta {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\gamma}} \hspace {3cm} {\rm ({\bf Disjunctive \ Rationality}) }$$ We may prove the following. \[le:disneg\] Any preferential relation that satisfies [**Disjunctive Rationality**]{} satisfies [**Negation Rationality**]{}. -1000.5 pt[**Proof:** ]{} Suppose and . By [**Disjunctive Rationality**]{}, we conclude that . We conclude by [**Left Logical Equivalence**]{}. 
8.5pt \[le:disneg2\] There is a preferential relation that satisfies [**Negation Rationality**]{} but does not satisfy [**Disjunctive Rationality**]{}. -1000.5 pt[**Proof:** ]{} Let us consider the following preferential model $W$. The model $W$ has four states: $a_0 , a_1 , b_0 , b_1$. The ordering is: and . The language has three propositional variables: $p$, $q$ and $r$. The two states $a_1$ and $b_1$ (the top states) are labeled with the same world that satisfies only $p$ and $q$. State $a_0$ is labeled with the world that satisfies only $p$ and $r$ and the state $b_0$ with the world that satisfies only $q$ and $r$. The preferential relation defined by $W$ does not satisfy [**Disjunctive Rationality**]{} but satisfies [**Negation Rationality**]{}. For the first claim, notice that: but and . For the second claim, suppose , but . Then it must be the case that there is a minimal state of $\widehat{\alpha}$ that does not satisfy  and, above it, a state that is minimal in $\widehat{\alpha \wedge \beta}$. This last state must be labeled by a world that is the label of no minimal state of $\widehat{\alpha}$. Therefore, $\widehat{\alpha}$ must contain all four states of $W$, and $\widehat{\alpha \wedge \beta}$ must contain either the two top states alone or the two top states and one of the bottom states. In each case it is easy to see that since the minimal states of $\widehat{\alpha \wedge \neg \beta}$ are all also minimal in $\widehat{\alpha}$. 8.5pt No semantic characterization of relations satisfying [**Disjunctive Rationality**]{} was known at the time this paper was written. M. Freund [@Freund:91] has now provided a very elegant such characterization, together with an alternative proof of our Theorem \[comthe:rat\]; the canonical model he builds is essentially the same as ours. Rational Monotonicity {#subsec:ratmon} --------------------- The last property is the following. 
$$\label{Rat:mon} {{\alpha \wedge \beta {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\gamma \ \ , \ \ \alpha {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\neg\beta} \over {\alpha {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\gamma}} \hspace {2cm} {\rm ({\bf Rational \ Monotonicity}) }$$ This rule is similar to the thesis [**CV**]{} of conditional logic (see [@Nute:84]). The reader is referred to [@KLMAI:89 Section 5.4] for a discussion of our claim that reasonable consequence relations should satisfy [**Rational Monotonicity**]{}. Some researchers in Conditional Logic (J. Pollock in particular) have objected to [**CV**]{} as a valid thesis for (mainly subjunctive) conditionals. Echoes of this debate may be found in [@Gin:86 end of Section 4.4]. The objections to [**CV**]{} that hold in the conditional logic framework do not hold for us, though their consideration is recommended to the reader. The most attractive feature of [**Rational Monotonicity**]{} is probably that it says that an agent should not have to retract any previous defeasible conclusion when learning about a new fact the negation of which was not previously derivable. In [@Satoh:89], K. Satoh aptly decided to call nonmonotonic reasoning that validates [**Rational Monotonicity**]{} [*lazy*]{}. The rule of [**Rational Monotonicity**]{} should be distinguished from the following rule, which is satisfied by any preferential relation. $$\label{eq:prat} {{\alpha \wedge \beta {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim}\neg \gamma \ \ , \ \ \alpha {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\neg\beta} \over {\alpha {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\gamma}}$$ \[def:ratrel\] A rational consequence relation is a preferential relation that satisfies [**Rational Monotonicity**]{}. 
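Each of the rationality properties above can be checked mechanically on a finite preferential model, by computing the minimal states of each extension. The following is a minimal Python sketch of the four-state model from the proof of Lemma \[le:neg:rat\]; since the formulas witnessing the failure are elided in the text, the instantiation $\alpha = {\bf true}$, $\gamma = p$, $\beta = q$ is our own choice, consistent with that model.

```python
# Four-state model from the proof of Lemma [le:neg:rat]:
# s0 < s1 and s2 < s3; the even states satisfy q; s0 and s3 satisfy p.
labels = {0: {"p": True,  "q": True},  1: {"p": False, "q": False},
          2: {"p": False, "q": True},  3: {"p": True,  "q": False}}
order = {(0, 1), (2, 3)}   # strict partial order: (x, y) means x below y

def minimal(ext):
    """States of ext with no strictly smaller state inside ext."""
    return {s for s in ext if not any((t, s) in order for t in ext)}

def entails(ext_alpha, ext_beta):
    """alpha |~ beta: every minimal state of [[alpha]] satisfies beta."""
    return minimal(ext_alpha) <= ext_beta

def ext(formula):
    """Extension of a truth-functional formula over the state labels."""
    return {s for s, w in labels.items() if formula(w)}

true_ = ext(lambda w: True)
p, q = ext(lambda w: w["p"]), ext(lambda w: w["q"])

# Negation Rationality fails for alpha = true, gamma = p, beta = q:
# both refining premises hold as non-entailments, yet alpha |~ beta.
assert entails(true_, q)              # true |~ q
assert not entails(true_ & p, q)      # p does not |~ q
assert not entails(true_ - p, q)      # not-p does not |~ q
```

The same `entails` routine, quantified over all extensions, serves equally well to test **Disjunctive Rationality** or **Rational Monotonicity** on any finite model.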
Two different representation theorems will be proved about rational relations, in Section \[subsec:rank\] and in Appendix \[appen:nonstandard\]. The latter seems to provide evidence that reasonable inference procedures validate [**Rational Monotonicity**]{} and that all rational relations represent reasonable inference procedures. \[le:ratmondisj\] A rational relation satisfies [**Disjunctive Rationality**]{}. -1000.5 pt[**Proof:** ]{} Suppose and . By [**Left Logical Equivalence**]{} we have . If we had , we could conclude by [**Rational Monotonicity**]{} that . Suppose then that . If we had , we would conclude by preferential reasoning that . 8.5pt \[DR<RM\] There is a preferential relation satisfying [**Disjunctive Rationality**]{} that is not rational. -1000.5 pt[**Proof:** ]{} We shall build a preferential model that defines a consequence relation satisfying [**Disjunctive Rationality**]{} but not [**Rational Monotonicity**]{}. Let ${\cal L}$ be the propositional calculus on the three variables: $p_{0}$, $p_{1}$, $p_{2}$. Let ${\cal U}$ contain all propositional worlds on those variables. Let $S$ contain three elements: $s_i$ for $i = 0 , 1 , 2$, with $l(s_i)$ satisfying only $p_i$. The partial order $\prec$ is such that $s_1 \prec s_2$ and no other pair satisfies the relation. This defines a preferential model $W$. First we shall show that the consequence relation defined by $W$ does not satisfy [**Rational Monotonicity**]{}. Indeed, we have both and . Nevertheless, we also have . Let us now show that any preferential model defining a relation that does not satisfy [**Disjunctive Rationality**]{} must have at least $4$ states. Suppose , but and . The last two assumptions imply the existence of states $a$ and $b$, minimal in $\widehat\alpha$ and $\widehat\beta$ respectively, that do not satisfy $\gamma$ and are therefore not minimal in . Those states are different, since any state minimal in both $\widehat\alpha$ and $\widehat\beta$ would be minimal in . 
By the smoothness condition there must be a state $a'$ minimal in $\widehat{\alpha \vee \beta}$ and such that $a' \prec a$. Clearly $a'$ satisfies $\gamma$ and does not satisfy $\alpha$ (since $a$ is minimal in $\widehat\alpha$) but satisfies $\beta$. Similarly there must be a state $b'$ minimal in $\widehat{\alpha \vee \beta}$ and such that $b' \prec b$ and $b'$ satisfies $\gamma$ and does not satisfy $\beta$, but satisfies $\alpha$. It is left to show that all four states are different. We have already noticed that $a \not = b$. The states $a'$ and $b'$ satisfy $\gamma$ and are therefore different from $a$ and $b$. But $b'$ satisfies $\alpha$ and $a'$ does not and therefore $a' \not = b'$. 8.5pt Discussion of Preferential Entailment {#subsec:discpref} ------------------------------------- We may now assess preferential entailment as a possible answer to the question of the title. Corollary \[co1\] explains why the notion of preferential entailment cannot be the one we are looking for: the relation ${\bf K}^p$ can be any preferential relation and is not in general rational. For typical [**K**]{}’s, ${\bf K}^p$ fails to satisfy a large number of instances of [**Rational Monotonicity**]{} and is therefore highly unsuitable. One particularly annoying instance of this is the following. Suppose a conditional knowledge base [**K**]{} contains one single assertion where $p$ and $q$ are different propositional variables. Let $r$ be a propositional variable, different from $p$ and $q$. We intuitively expect the assertion to follow from [**K**]{}. The rationale for that has been discussed extensively in the literature and boils down to this: since we have no information whatsoever about the influence of $r$ on objects satisfying $p$ it is sensible to assume that it has no influence and that there are normal $p$-objects that satisfy $r$. The normal $p \wedge r$-objects are therefore normal $p$-objects and have all the properties enjoyed by normal $p$-objects. 
Nevertheless it is easy to check that is not in ${\bf K}^p$. The problem lies, at least in part, with the fact that ${\bf K}^p$ is not rational, since any rational relation containing , must contain unless it contains . In conclusion, it seems that the set of conditional assertions entailed by [**K**]{} should be larger and more [*monotonic*]{} than the set ${\bf K}^p$. It should also be rational. This question will be brought up again in Section \[sec:ratclos\] and a solution will be proposed. Ranked models and a representation theorem for rational relations {#subsec:rank} ----------------------------------------------------------------- In this section a family of preferential models will be defined and it will be shown that the relations defined by models of this family are exactly the rational relations. \[le:modular\] If $\prec$ is a partial order on a set $V$, the following conditions are equivalent. 1. for any if , and , then 2. for any if , then, either or 3. for any if and , then 4. \[it30\] there is a totally ordered set $\Omega$ (the strict order on $\Omega$ will be denoted by $<$) and a function (the ranking function) such that iff . The proof is simple and will not be given. A partial order satisfying any of the conditions of Lemma \[le:modular\] will be called [*modular*]{} (this terminology is proposed in [@Gin:86] as an extension of the notion of modular lattice of [@Gra:71]). \[rankmod\] A ranked model $W$ is a preferential model for which the strict partial order $\prec$ is modular. Those models are called ranked since the effect of function $r$ of property \[it30\] of Lemma \[le:modular\] is to rank the states: a state of smaller rank being more normal than a state of higher rank. We shall always suppose that a ranked model $W$ comes equipped with a totally ordered set $\Omega$ and a ranking function $r$. Notice that we still require $W$ to satisfy the smoothness condition. 
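On a finite set, condition \[it30\] of Lemma \[le:modular\] can be decided directly. The sketch below is ours and relies on the standard observation that a strict partial order admits such a ranking function exactly when incomparability is transitive (i.e., the order is a strict weak order); conditions 1–3 of the lemma are elided in the text above, so only condition \[it30\] is coded.

```python
def is_modular(V, lt):
    """Condition [it30]: a strict partial order lt (a transitively
    closed set of pairs (x, y) meaning x strictly below y) admits a
    ranking r with x < y iff r(x) < r(y) exactly when incomparability
    is transitive."""
    def incomp(x, y):
        return (x, y) not in lt and (y, x) not in lt
    return all(incomp(x, z)
               for x in V for y in V for z in V
               if x != y and y != z and x != z
               and incomp(x, y) and incomp(y, z))

def rank(V, lt):
    """For a modular order, counting strict predecessors yields a
    ranking function into the natural numbers."""
    return {x: sum(1 for z in V if (z, x) in lt) for x in V}

# The four-state order of Lemma [le:neg:rat] (s0 < s1, s2 < s3) is
# not modular, so that preferential model is not ranked:
assert not is_modular({0, 1, 2, 3}, {(0, 1), (2, 3)})
# A totally ordered set is modular, and rank recovers the positions:
total = {(0, 1), (0, 2), (1, 2)}
assert is_modular({0, 1, 2}, total)
assert rank({0, 1, 2}, total) == {0: 0, 1: 1, 2: 2}
```

Note also that the three-state order of Lemma \[DR&lt;RM\] ($s_1 \prec s_2$, $s_0$ incomparable to both) fails this test, consistent with the fact that the relation it defines is not rational.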
It is easy to see that for any subset $T$ of $V$ and any $t \in T$, $t$ is minimal in $T$ iff $r(t)$ is the minimum of the set $r(T)$. It follows that all minimal elements of $T$ have the same image by $r$. The smoothness condition is then equivalent to the following: for any formula , if $\widehat\alpha$ is not empty, the set $r(\widehat\alpha)$ has a minimum. The smoothness condition is always verified if $\Omega$ is a well-ordered set. The reader may check that the preferential model $W$ defined in the proof of Lemma \[le:well\] is ranked (it is even totally ordered). It follows that there are rational relations that are defined by no well-founded ranked model. The following is a soundness result. \[le:soundrat\] If $W$ is a ranked model, the consequence relation it defines is rational. -1000.5 pt[**Proof:** ]{} It is enough to show that satisfies [**Rational Monotonicity**]{}. For this, the smoothness condition is not needed; it is needed, though, for the soundness of [**Cautious Monotonicity**]{}. Suppose $W$ is a ranked model. We shall use the notations of Definition \[rankmod\]. Suppose also that and . From this last assumption we conclude that there is a minimal element of $\widehat\alpha$ that satisfies $\beta$. Let be such a state. Let be a minimal element of . Since , and . But this implies that $s$ is minimal in $\widehat\alpha$: any state $u$ such that $u \prec s$ satisfies $r(u) < r(s)$ and therefore $r(u) < r(t)$ and $u \prec t$. Since , . 8.5pt We shall show now that the converse of Lemma \[le:soundrat\] holds. We shall first mention four derived rules of preferential logic. In fact the first three of these rules are even valid in cumulative logic (see [@KLMAI:89 Section 3]). Their proof (either proof-theoretic or model-theoretic) is straightforward and is omitted. 
The following rules are derived rules of preferential logic: $$\label{eq-1} {{\alpha {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim}{\bf false}} \over {\alpha \wedge \beta {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim}{\bf false}}}$$ $$\label{eqzero} {{\alpha \vee \beta {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim}\neg\beta} \over {\alpha {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim}\neg\beta}}$$ $$\label{eqtwo} {{\alpha \vee \beta \vee \gamma {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim}\neg\alpha \wedge \neg\beta} \over {\beta \vee \gamma {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim}\neg\beta}}$$ $$\label{eqone} {{\alpha \vee \beta {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim}\neg\alpha} \over {\alpha \vee \beta \vee \gamma {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim}\neg\alpha}}$$ We shall now derive a property of rational relations. \[rankder\] If is a rational relation, then the following rule is valid: $$\label{eqthree} {{\alpha \vee \gamma {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim}\neg\alpha \ \ , \ \ \beta \vee \gamma {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\not\sim}\neg\beta} \over {\alpha \vee \beta {{\hspace{0.28em}}{{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim}\neg\alpha}}$$ -1000.5 pt[**Proof:** ]{} From the first hypothesis, by Rule \[eqone\] one deduces . From the second hypothesis, by Rule \[eqtwo\] one deduces . If one applies now [**Rational Monotonicity**]{}, one gets the desired conclusion. 8.5pt For the completeness result, we proceed in the style of L. Henkin. Completeness proofs in this style have been used in conditional logics since [@StalThom:70]. Since a number of technical lemmas are needed, we have relegated them to Appendix \[appen:rep\] and state here the characterization theorem. 
\[comthe:rat\] A binary relation on ${\cal L}$ is a rational consequence relation iff it is the consequence relation defined by some ranked model. If the language ${\cal L}$ is logically finite, then every rational consequence relation is defined by some finite ranked model. -1000.5 pt[**Proof:** ]{} The [*if*]{} part is Lemma \[le:soundrat\]. For the [*only if*]{} part, let be a consequence relation satisfying the rules of [**R**]{}. The relation defines a structure $W$ as described in Appendix \[appen:rep\]. By Lemmas \[smoothness\] and \[total\], $W$ is a ranked model. We claim that, for any , iff . Suppose first . By Lemma \[nottoomany\], if is minimal in $\widehat\alpha$, then $m$ is normal for $\alpha$. We conclude that and . Therefore . Suppose now that . Let $m$ be a normal world for $\alpha$. By Corollary \[enough\], the pair is minimal in $\widehat\alpha$ and therefore . All normal worlds for $\alpha$ therefore satisfy $\beta$ and Lemma 8 of [@KLMAI:89] implies that . For the last sentence of the theorem, notice that, if the language ${\cal L}$ is logically finite, the model $W$ is finite. 8.5pt As we remarked just prior to Lemma \[le:soundrat\], the theorem would not hold had we required models to be well-founded. Comparison with Delgrande’s work {#subsec:del} -------------------------------- The system of proof-rules and the models presented above may be compared with the results of J. Delgrande in [@Del:87; @Del:88]. The general thrust is very similar but differences are worth noticing. A first difference is in the language used. Delgrande’s language differs from this paper’s in three respects: his work is specifically tailored to first-order predicate calculus, whereas this work deals with propositional calculus; he allows negation and disjunction of conditional assertions, which are not allowed in this paper; he allows nesting of conditional operators in the language, though his completeness result is formulated only for unnested formulas. 
Therefore Delgrande’s central completeness result in [@Del:87] only shows that any valid proposition in which there is no nesting of conditional operators (let us call those propositions [*flat*]{}) has a proof from the axioms and rules of his system. But this proof may use propositions that are not flat. The completeness results reported here show that valid assertions have proofs that contain only flat assertions. A second difference is that Delgrande’s logical system is different from ours: Delgrande’s logic [**N**]{} does not contain [**Cautious Monotonicity**]{}. Our class of ranked models is more restricted than his class of models: our models are required to obey the smoothness condition and Delgrande’s are not. One may also notice that our logic enjoys the finite model property, but Delgrande’s does not. This difference between our two logical systems may sound insignificant when one remarks that many instances of the rule of [**Cautious Monotonicity**]{} may be derived from [**Rational Monotonicity**]{}, and are therefore valid in Delgrande’s system [**N**]{}. What we mean is that if and then, if one may conclude by [**Rational Monotonicity**]{} rather than by [**Cautious Monotonicity**]{}. But if , and therefore one cannot conclude. Rule (\[eq-1\]) is sound in preferential logic but not in Delgrande’s logic. A proof will soon be given. We want to remark here that Rule (\[eq-1\]) is very natural, since the meaning of is that if $\alpha$ is true then anything may be true. It therefore means that it is absolutely unthinkable that $\alpha$ be true. In such a case we would expect $\alpha \wedge \beta$ also to be absolutely unthinkable. Let us show now that Rule (\[eq-1\]) is not valid for Delgrande’s structures. Consider the following structure. Let the set $V$ consist of one infinite descending chain: $\prec$ is a total ordering. Suppose now that the top element of $V$ is the only state that satisfies the propositional variable $p$. 
In this structure $\widehat{\bf true}$ is $V$ and has no minimal element, therefore . But $\widehat p$ consists only of the top element and has a minimal point and therefore . We have shown that Rule (\[eq-1\]) is not valid for Delgrande’s structures. This example also shows that Delgrande’s logic does not possess the finite model property. A third difference is that his definition of the set of conditional assertions entailed by a conditional knowledge base is different from the one presented here, at least at first sight. Ranked entailment {#sec:entrank} ================= Introduction {#subsec:intrank} ------------ After having defined the family of relations and the family of models we are interested in, we proceed to study the notion of entailment provided by those models. Our main result is presented in \[subsec:rankpref\]. It is negative, in the sense that this entailment is equivalent to preferential entailment. A preliminary version of this result may be found in [@LMTR:88]. The collapsing of the two notions of entailment, as opposed to the two different classes of relations represented, sheds new light on the results of [@Adams:75]. Section \[subsec:adams\] describes the probabilistic semantics given to preferential entailment by E. Adams in [@Adams:75] and shows how the result of Section \[subsec:rankpref\] provides an alternative proof for Adams’ results. The results of this section were contained in [@LMTR:88]. Ranked entailment is preferential entailment {#subsec:rankpref} -------------------------------------------- In the discussion of Section \[subsec:discpref\], we expressed the wish that the set of assertions entailed by a conditional knowledge base [[**K**]{}]{} be rational and larger than ${\bf K}^p$. A natural candidate would be the set of all assertions that are satisfied in all ranked models that satisfy the assertions of [[**K**]{}]{}. This is an intersection of rational relations. This proposal fails in a spectacular way. 
Problems with this proposal have been noted in [@Del:88 Section 4]. It is also easy to see that the intersection of rational relations may fail to be rational. Theorem \[rat:ent\] shows this failure to be total. \[le:auxrat\] Let  be any preferential relation. There exists a rational extension of  for which a formula is inconsistent only if it is inconsistent for . -1000.5 pt[**Proof:** ]{} Let us choose some enumeration of triples of formulas ,  and  in which every triple appears an unbounded number of times. Let $K_0$ be equal to . At every step $i$ we define $K_{i+1}$ in the following way. Let ,  and  be the triple enumerated at step $i$. Unless $K_i$ contains the pair but contains neither nor , we shall take $K_{i+1}$ to be equal to $K_i$. If $K_i$ satisfies the condition above, we shall take $K_{i+1}$ to be the preferential closure of . Notice that, by [**Cautious Monotonicity**]{}, will enter $K_{i+1}$. It is clear that the $K_i$’s provide an increasing sequence of preferential extensions of . Let $K_\infty$ be the union of all the $K_i$’s. Clearly $K_\infty$ is a preferential extension of . By construction, and since we took care of removing all counter-examples to the rule of [**Rational Monotonicity**]{}, $K_\infty$ is a rational consequence relation. We claim that a formula  is inconsistent for $K_\infty$ (i.e., is in $K_\infty$) only if it is inconsistent already for . Indeed, if  is inconsistent for $K_\infty$ it must be inconsistent for some $K_i$, but Theorem \[the:new\] shows that, by construction, all $K_i$’s have the same inconsistent formulas. 8.5pt \[rat:ent\] If the assertion  is satisfied by all [*ranked*]{} models that satisfy all the assertions of [**K**]{}, then it is satisfied by all [*preferential*]{} such models. -1000.5 pt[**Proof:** ]{} Let be as in the hypotheses. Let  be the rational extension of the preferential closure of , whose existence is asserted by Lemma \[le:auxrat\]. 
The assertion is in   since it is in any rational relation that extends [[**K**]{}]{}, by Theorem \[comthe:rat\]. Since $\delta {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\neg \varepsilon$ is obviously in , we conclude that $\delta$ is inconsistent for   and therefore inconsistent for the preferential closure of . By Corollary \[co:reso\],  is preferentially entailed by [[**K**]{}]{}. 8.5pt Comparison with Adams’ probabilistic entailment {#subsec:adams} ----------------------------------------------- In this section we shall show that ranked models are closely related to Adams’ probabilistic entailment described in [@Adams:75]. Theorem \[rat:ent\], then, provides an alternative proof of Adams’ axiomatic characterization of probabilistic entailment. There are some technical differences between Adams’ framework and ours since Adams insists on allowing formulas as conditional assertions: for him the formula $\alpha$ is a synonym for . We also insist on studying infinite knowledge bases whenever possible, where Adams restricts himself to finite knowledge bases. A probability assignment for the language ${\cal L}$ is a probability measure on ${\cal L}$ yielded by some probability measure given on ${\cal U}$. E. Adams proposed the following definitions. \[def:proper\] A probability assignment $p$ for the language ${\cal L}$ is said to be [*proper*]{} for a conditional assertion iff $p(\alpha) > 0$. It is proper for a set of conditional assertions iff it is proper for each element. If $p$ is proper for , we shall use $p({\cal A})$ to denote the conditional probability $p(\beta \mid \alpha)$. \[def:pcons\] Let [**K**]{} be a set of conditional assertions. We shall say that [**K**]{} is [*probabilistically consistent*]{} if and only if for any real number there exists a probability assignment $p$ for ${\cal L}$ that is proper for [**K**]{} and such that, for all ${\cal A}$ in [**K**]{}, one has . 
\[def:pentail\] Let [**K**]{} be a set of conditional assertions and ${\cal A}$ a conditional assertion. We shall say that [**K**]{} [*probabilistically entails*]{} ${\cal A}$ iff for all $\epsilon > 0$ there exists $\delta >0$ such that for all probability assignments $p$ for ${\cal L}$ which are proper for [**K**]{} and ${\cal A}$, if $p({\cal B}) \geq 1 - \delta$ for all ${\cal B}$ in [**K**]{}, then $p({\cal A}) \geq 1 - \epsilon$. In [@Adams:75], Adams studies extensively the relations between the two notions of probabilistic consistency and probabilistic entailment, at least for finite sets of conditional assertions. Here we shall only show the fundamental relation that exists between Adams’ notions and ours. First, we shall make three easy remarks. The first one concerns only probabilistic notions and was claimed by Adams for finite knowledge bases but is true in general. \[pcon:pent\] A set [**K**]{} of conditional assertions is probabilistically inconsistent iff it probabilistically entails any conditional assertion. Our second remark provides a first link between probabilistic notions and the notions introduced in this paper. It is essentially the soundness part of Adams’ soundness and completeness result (see beginning of proof of 4.2 at page 62 of [@Adams:75]). This is the easy direction. \[soundA\] Any conditional assertion preferentially entailed by [**K**]{} is probabilistically entailed by [**K**]{}. Our third remark is the following. \[133\] If the conditional assertion is in [**K**]{} and is preferentially entailed by [**K**]{} then [**K**]{} is probabilistically inconsistent. -1000.5 pt[**Proof:** ]{} Under the assumptions of the lemma, Lemma \[soundA\] shows that is probabilistically entailed by [**K**]{}. But for any probability assignment $p$ that is proper for , is defined and equal to $0$. 
Since [**K**]{} probabilistically entails we conclude that there is an $\epsilon > 0$ such that no probability assignment that is proper for [**K**]{} gives probabilities larger than $1 - \epsilon$ to all assertions of [**K**]{}. Since any probability assignment that is proper for [**K**]{} is also proper for , the conclusion is proved. 8.5pt We shall now prove the converse of Lemma \[soundA\] in the case where [**K**]{} is finite and probabilistically consistent. The basic remark is the following. Suppose $W$ is a finite (i.e., the set $S$ of states is finite) ranked model. Let $\epsilon > 0$ be some real number. We shall describe a probability measure $p_\epsilon$ on $S$. The first principle that will be used in defining $p_\epsilon$ is that all states of the same rank will have equal probabilities. The second principle is that the weight $w_n$ of the set of all states of rank $n$ will be such that . The intuitive meaning of this choice (since $\epsilon$ will approach zero) is that normal states are more probable than exceptional states. There is clearly exactly one probability measure satisfying both principles above, for any given finite ranked model. The probability measure $p_\epsilon$, defined on states, yields a probability measure on formulas. It is clear that a formula $\alpha$ has probability zero under $p_\epsilon$ iff $\alpha$ is inconsistent in $W$, i.e., . Suppose $\alpha$ is consistent. Let us consider the conditional probability of $\beta$ given $\alpha$, which is well defined. If then this conditional probability is larger than and therefore approaches one when $\epsilon$ approaches zero. On the other hand, if , then this conditional probability cannot exceed $1 - {1 \over m}$ where $m$ is the number of states at the rank which is minimal for $\alpha$. It is therefore bounded away from $1$ when $\epsilon$ approaches $0$. 
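The construction of $p_\epsilon$ can be made concrete on a small ranked model. The sketch below uses an illustrative knowledge base (birds and penguins, not taken from the text) and reads the elided weight condition as $w_{n+1} = \epsilon \cdot w_n$; that reading is an assumption, though any choice making higher ranks vanish relative to lower ones as $\epsilon \to 0$ behaves the same way.

```python
# A three-state ranked model satisfying the illustrative base
# K = {bird |~ fly, penguin |~ not-fly, penguin |~ bird},
# one state per rank 0, 1, 2.
states = [
    (0, {"bird": True, "fly": True,  "penguin": False}),
    (1, {"bird": True, "fly": False, "penguin": True}),
    (2, {"bird": True, "fly": True,  "penguin": True}),
]

def p_eps(eps):
    """Probability of each state: states of rank n get weight w_n,
    with w_{n+1} = eps * w_n (our reading of the elided condition)."""
    w = [eps ** rank for rank, _ in states]
    z = sum(w)
    return [x / z for x in w]

def cond(eps, consequent, antecedent):
    """p_eps(consequent | antecedent), assuming the antecedent has
    positive probability."""
    pr = p_eps(eps)
    pa = sum(p for p, (_, w) in zip(pr, states) if antecedent(w))
    pab = sum(p for p, (_, w) in zip(pr, states)
              if antecedent(w) and consequent(w))
    return pab / pa

# Assertions satisfied by the model approach probability 1 ...
for eps in (0.1, 0.01, 0.001):
    assert cond(eps, lambda w: not w["fly"],
                lambda w: w["penguin"]) >= 1 - 2 * eps
# ... while penguin |~ fly, which the model falsifies, stays
# bounded away from 1.
assert cond(0.001, lambda w: w["fly"], lambda w: w["penguin"]) < 0.5
```

Here $p_\epsilon(\neg{\rm fly} \mid {\rm penguin}) = 1/(1+\epsilon)$, which tends to $1$, exactly the behaviour the proof of the converse lemma exploits.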
\[le:unnamed\] Let a finite probabilistically consistent knowledge base [**K**]{} be given and suppose is not preferentially entailed by [**K**]{}. Then, [**K**]{} does not probabilistically entail ${\cal A}$. -1000.5 pt[**Proof:** ]{} Let [**K**]{} and ${\cal A}$ be as described in the lemma. Let ${\cal L'}$ be some logically finite sublanguage of ${\cal L}$ that contains $\alpha$, $\beta$ and all propositions appearing in [**K**]{}. Relative to ${\cal L'}$, the hypotheses of the lemma are still true. By Lemma \[le:auxrat\] and Theorem \[the:new\], there is a rational relation that contains [[**K**]{}]{} and and for which a formula is inconsistent only if it is inconsistent for ${\bf K}^p$. By Theorem \[comthe:rat\], there is a finite ranked model $W$ that satisfies and all assertions of [**K**]{} but whose inconsistent formulas are exactly those of ${\bf K}^p$. Since  is not inconsistent for ${\bf K}^p$, the model $W$ does not satisfy . Let $W'$ be the model obtained by extending the labeling function of $W$ to the full language ${\cal L}$ in an arbitrary way. We shall now apply the construction of $p_\epsilon$ described above to the model $W'$. Using the model $W'$ and a sequence of $\epsilon$’s approaching zero, we define a sequence of probability measures $p_\epsilon$. Let us show that all assignments $p_\epsilon$ are proper for [**K**]{} and ${\cal A}$. If $\gamma \in {\cal L'}$, the assignment $p_\epsilon$ gives zero probability to $\gamma$ iff $\gamma$ is inconsistent in $W$, i.e., inconsistent for ${\bf K}^p$. But [**K**]{} is probabilistically consistent and, by Lemma \[133\], $p_\epsilon$ is proper for [**K**]{}. Since $W'$ does not satisfy ${\cal A}$, its antecedent cannot be inconsistent in $W'$ and $p_\epsilon$ is proper for ${\cal A}$ too. When $\epsilon$ approaches zero, the conditional probabilities corresponding to each assertion of [**K**]{} approach $1$ and the conditional probability corresponding to ${\cal A}$ is bounded away from $1$. 
$\Box$ That the result cannot be extended to infinite sets of conditional assertions follows from Adams' remark that his notion of probabilistic consistency does not enjoy the compactness property and from Corollary \[comp:pref\]. Adams' example [@Adams:75 pages 51–52] is closely related to the construction of Lemma \[le:well\]. The results of Adams presented in this section have been interpreted, in particular by [@Pearl:88], to mean that probabilistic semantics validate preferential reasoning. We certainly agree. But the results that will be presented now show, in our opinion, that probabilistic semantics support the claim that inference procedures should not only be preferential but also rational. Indeed we show, in Appendix \[appen:nonstandard\], that some very natural probabilistic models always define rational relations and that, when the language ${\cal L}$ is countable, all rational relations may be defined by such models. Those models are non-standard probability spaces, in the sense of A. Robinson. Since no use of those models will be made in the paper, their treatment has been relegated to an appendix.

The rational closure of a conditional knowledge base {#sec:ratclos}
====================================================

Introduction {#subsec:intro:ratclos}
------------

So far, we have argued for Thesis \[rational\] and gathered much knowledge about rational relations, showing in particular that there is no obvious way to define a notion of closure satisfying Thesis \[rational\]. In this section we shall show that there is a natural notion of closure (called rational closure) that satisfies Thesis \[rational\]. We shall study it and prove that it possesses many very elegant mathematical properties. We shall then evaluate rational closure as an answer to the question of the title. In conclusion, we shall propose Thesis \[super\], which claims that any satisfactory answer is a superset of rational closure.
In other terms, we think that any reasonable system should endorse any assertion contained in the rational closure, but it may also endorse some additional assertions. At present, we do not know of any natural construction satisfying Thesis \[rational\] other than rational closure. A first possible answer is rejected in \[subsec:perf\]. This result appeared in [@Leh:89]. The remainder of the paper describes rational closure. In \[subsec:order\] a partial ordering between rational relations is defined, which captures the notion of a relation being preferable to (i.e., smaller, less adventurous, more reasonable than) another one. The rational closure of a knowledge base is then defined in \[subsec:defratclos\] as the rational extension of the knowledge base that is preferable, in the ordering defined in \[subsec:order\], to all other rational extensions. Not every knowledge base has a rational closure, but in Section \[subsec:proof\] it will be shown that any admissible (see Definition \[def:acc\]) knowledge base has a rational closure. By Lemma \[co:acc\], then, any finite knowledge base has a rational closure. We claim that the rational closure of a knowledge base, when it exists, provides a reasonable answer to the question of the title. Global properties of the operation of rational closure are described in \[subsec:ratcum\]. These results, concerning the global behavior of a nonmonotonic inference operation, are the first of their kind. In \[subsec:proof\] an algorithmic construction of the rational closure of an admissible knowledge base is described. This algorithmic description essentially replaces and improves upon the proof-theoretic description of [@Leh:89]. A corrected and generalized model-theoretic construction, first described in [@Leh:89], is proposed in \[subsec:modrat\]. Section \[subsec:comprat\] presents an algorithm to compute the rational closure of a finite knowledge base and discusses complexity issues.
Section \[subsec:disc\] discusses the appeal of rational closure and provides some examples. Section \[sec:con\] concludes by considering topics for further research. In [@Pearl:90] J. Pearl proposes his own version of the rational closure construction that had been described in [@Leh:89].

Perfect extensions {#subsec:perf}
------------------

All that has been done so far does not allow us to give a satisfactory answer to the question of the title. Let [[**K**]{}]{} be a set of conditional assertions. We would like to define a consequence relation [$\overline {\bf K}$]{}, the rational closure of [[**K**]{}]{}, that contains all the conditional assertions that we intuitively expect to follow from [[**K**]{}]{}. At this point the reader should be convinced that [$\overline {\bf K}$]{} should be a rational consequence relation that extends [[**K**]{}]{}. Any such relation obviously also extends [${\bf K}^p$]{}. It seems that we would also like this rational extension of [[**K**]{}]{} to be as small as possible. Unfortunately, Theorem \[rat:ent\] shows that the intersection of all rational extensions of [[**K**]{}]{} is exactly [${\bf K}^p$]{}, and is therefore not in general rational and, as shown in Section \[subsec:discpref\], highly unsuitable. There is obviously a maximal such extension: the full consequence relation (i.e.,  for all $\alpha$, $\beta$ in ${\cal L}$), but this is certainly not the one we are looking for. Can we find a number of properties that we would like $\overline {\bf K}$ to possess, in order to, at least, narrow the field of possibilities? We shall look both for [*local*]{} properties of [$\overline {\bf K}$]{} with respect to [[**K**]{}]{} and for [*global*]{} properties of the mapping . The sequel will present a proposal for the definition of [$\overline {\bf K}$]{} and proofs that it enjoys both local and global properties (in particular a strong form of cumulativity).
If [${\bf K}^p$]{} happens to be rational, then we probably have no reason to look further and should take [$\overline {\bf K}$]{} to be equal to [${\bf K}^p$]{}. If [${\bf K}^p$]{} is not rational, then there is an assertion  in [${\bf K}^p$]{} and a formula  such that neither nor is in [${\bf K}^p$]{}. It seems that the right thing to do, in most such cases, is to introduce into [$\overline {\bf K}$]{}. One may try to require that any assertion in be of the form where  is in [${\bf K}^p$]{}, i.e., that any assertion in have [*support*]{} in ${\bf K}^p$. It will be shown that this may well be impossible. Let us encapsulate this idea in definitions. \[def:supp\] An assertion is said to be supported by (or in) ${\bf K}^p$ iff there is a formula  such that and is in ${\bf K}^p$. \[def:perf\] A rational extension ${\bf K}'$ of [**K**]{} is called perfect iff every assertion of ${\bf K}'$ is supported by ${\bf K}^p$. We may now present the following disappointing result. \[disa\] There is a finite conditional knowledge base that has no rational perfect extension. [**Proof:** ]{} Let ${\cal L}$ be the set of all propositional formulas built out of the set of four propositional variables: . Let $W$ be the preferential model with three states: , in which (and this is the only pair in the relation $\prec$) and $s$ satisfies only $a$, $t$ satisfies only $b$, and $u$ satisfies only $c$ and $d$. Let [**K**]{} be the set of all conditional assertions satisfied in $W$. We claim that [**K**]{} has no rational perfect extension. Notice, first, that $W$ satisfies . This assertion is therefore in [**K**]{}. Any ranked model satisfying it must satisfy at least one of the following two assertions: or . Any rational extension of [**K**]{} must therefore contain one of the two assertions above. But  clearly has no support in ${\bf K}^p$ and therefore any perfect rational extension of [**K**]{} must contain .
But $W$ satisfies and any ranked model satisfying both and must also satisfy . Any perfect rational extension of [**K**]{} must therefore contain this last formula, but it clearly lacks support in ${\bf K}^p$. We conclude that [**K**]{} has no perfect rational extension. $\Box$ It is therefore reasonable to look for less-than-perfect extensions. Let us first examine perfection concerning two special kinds of formulas. The following is easily proved. \[le:perf\] An assertion of the form is supported by ${\bf K}^p$ iff it is in ${\bf K}^p$. An assertion of the form is supported by ${\bf K}^p$ iff it is in ${\bf K}^p$. We shall propose a construction of [$\overline {\bf K}$]{} such that [$\overline {\bf K}$]{} does not contain any formula of the form or of the form that is not in [${\bf K}^p$]{}.

Ordering rational relations {#subsec:order}
---------------------------

In this section we shall define a strict partial ordering between rational relations. This ordering captures the notion of a relation being preferable to (i.e., less adventurous than) another one. An intuitive explanation will be given immediately after the definition. For the rest of this section we shall write [*for ( or in) $K$*]{} to mean [*the assertion is in $K$*]{}. We shall write for ( or in) $K$ when it is [*not*]{} the case that in $K$. \[def:order\] Let $K_0$ and $K_1$ be two rational consequence relations. We shall say that $K_0$ is preferable to $K_1$ and write $K_0 \prec K_1$ iff:

1. \[<1\] there exists an assertion  in such that for all  such that for $K_0$, and for all  such that is in $K_0$, we also have in $K_1$, and

2. \[<2\] for any , if is in there is an assertion in such that for $K_1$.

The intuitive explanation behind Definition \[def:order\] is the following. Suppose two agents, who agree on a common knowledge base, are discussing the respective merits of two rational relations $K_0$ and $K_1$.
A typical attack would be: [*your relation contains an assertion, , that mine does not contain*]{} (and therefore contains unsupported assertions). A possible defense against such an attack could be: [*yes, but your relation contains an assertion that mine does not, and you yourself think that  refers to a situation that is more usual than the one referred to by* ]{}. Such a defense must be accepted as valid. Definition \[def:order\] exactly says that the proponent of $K_0$ has an attack that the proponent of $K_1$ cannot defend against (this is part \[<1\]) but that he (i.e., the proponent of $K_0$) may find a defense against any attack from the proponent of $K_1$ (this is part \[<2\] of the definition). \[le:order\] The relation $\prec$ between rational consequence relations is irreflexive and transitive. [**Proof:** ]{} Irreflexivity follows immediately from Condition \[<1\]. For transitivity, let us suppose that $K_0 \prec K_1$, with  the witness promised by Condition \[<1\], and that , with  as a witness. Our first step will be to show that there exists an assertion  in $K_2 - K_0$ such that in $K_0$ and in $K_1$. We shall have to consider many different cases. 1. Suppose in $K_2$. 1. If is not in $K_0$, then is a suitable . 2. If is in $K_0$, then  is a suitable , since if it were in $K_0$ it would be in $K_1$. 2. Suppose therefore that is not in $K_2$, i.e., for $K_2$, . 1. If is in $K_1$, then it is in $K_1 - K_2$ and there is an assertion  in $K_2 - K_1$ such that in $K_2$. 1. If is not in $K_0$, then it is a suitable . 2. If is in $K_0$, then in $K_0$ and we have both that in $K_1$ and that  cannot be in $K_0$, otherwise it would be in $K_1$. We conclude that  is a suitable . 2. Suppose therefore that is not in $K_1$, i.e., for $K_1$, as for $K_2$, . 1. If  is in $K_2$, then it is a suitable . 2. If  is not in $K_2$, then it is in $K_1 - K_2$ and there is a  in $K_2 - K_1$ such that in $K_2$. 1.
If is not in $K_0$, then it is a suitable , since, in $K_1$, . 2. If is in $K_0$, then  is a suitable , since  cannot be in $K_0$, otherwise it would be in $K_1$. We have now proved the existence of an assertion  with the desired properties. Let us proceed to the proof that $K_0 \prec K_2$. For Condition \[<1\], we claim that  provides a suitable witness. It is indeed in $K_2 - K_0$ by construction. Suppose now that in $K_0$. Then in $K_0$ and therefore in $K_1$. Therefore in $K_1$. If  is in $K_0$, then it must be in $K_1$, since in $K_0$, and also in $K_2$, since in $K_1$. This concludes the verification of Condition \[<1\]. For Condition \[<2\], suppose that  is in $K_0 - K_2$. We have to find a  in $K_2 - K_0$ such that in $K_2$. We shall consider a number of different cases. 1. If in $K_2$, then  is a suitable . 2. Suppose then that is not in $K_2$, i.e., for $K_2$. 1. Suppose, first, that is in $K_1$, therefore in $K_1 - K_2$. There is then an assertion  in $K_2 - K_1$ such that in $K_2$. 1. If is in $K_0$, then in $K_0$ and we conclude that  is not in $K_0$, otherwise it would be in $K_1$. We conclude that  is a suitable . 2. If is not in $K_0$, then it is a suitable . 2. Suppose, then, that is not in $K_1$, i.e., for $K_1$, . 1. If is in $K_0$, then it is in $K_0 - K_1$. Therefore there is an assertion  in $K_1 - K_0$ such that in $K_1$. But then in $K_1$ and we conclude that  is in $K_2$ and that in $K_2$. The assertion  is a suitable . 2. Suppose, then, that, on the contrary, is not in $K_0$, i.e., in $K_0$, as in $K_1$ and $K_2$. 1. Suppose first that  is in $K_1$, therefore in $K_1 - K_2$. Then, there is an assertion  in $K_2 - K_1$, such that in $K_2$. There are two cases. If is in $K_0$, then in $K_0$, and  is not in $K_0$, otherwise it would be in $K_1$, since in $K_0$. The assertion  is a suitable . If, on the other hand, is not in $K_0$, then it is in $K_2 - K_0$, and it is a suitable . 2. Suppose now that  is not in $K_1$, therefore in .
There is an assertion  in , such that in $K_1$. But in $K_1$ and, since  is in $K_1$, it must be in $K_2$. Also, since is in $K_1$, it must be in $K_2$. We see that  is a suitable . $\Box$

Definition of rational closure {#subsec:defratclos}
------------------------------

We may now define the rational closure of a knowledge base. \[def:ratclos\] Let [[**K**]{}]{} be an arbitrary knowledge base. If there is a rational extension [$\overline {\bf K}$]{} of [[**K**]{}]{} that is preferable to all other rational extensions of [[**K**]{}]{}, then [$\overline {\bf K}$]{} will be called the rational closure of [[**K**]{}]{}. Notice first that the rational closure of a knowledge base is unique, if it exists, since [*preference*]{} is a partial ordering. Notice then that there are knowledge bases that do not have a rational closure. Example \[ex:noclos\] will show this. In Section \[subsec:proof\] we shall show that admissible knowledge bases, including all finite knowledge bases, have a rational closure. \[ex:noclos\] [Let ${\cal L}$ be the propositional calculus built upon the variables $p_n$ where $n$ is an arbitrary [*integer*]{} (i.e., positive or negative). Let $N$ be the knowledge base that contains all assertions of the form and of the form for all integers $n$. We shall show that $N$ has no rational closure. ]{} We shall first prove a lemma about invariance of the operation of rational closure under renaming of the propositional variables. This lemma is of independent interest. \[def:rename\] 1. A renaming of the propositional calculus ${\cal L}$ is a bijection of the propositional variables. 2. Let $f$ be a renaming of ${\cal L}$. The formula obtained from  by substituting $f(p)$ for the propositional variable $p$ will be denoted by $f(\alpha)$. 3. Let $f$ be as above and  a conditional assertion.
The assertion $f(\alpha) \mathrel{|\!\sim} f(\beta)$ will be denoted by $f(\alpha \mathrel{|\!\sim} \beta)$. 4. Let $f$ be as above and $K$ a consequence relation. The relation $f(K)$ will be defined by . \[lemma:inv-renaming\] Let $f$ be a renaming of ${\cal L}$. 1. Let $K_0$ and $K_1$ be rational consequence relations. Then $K_0 \prec K_1$ iff $f(K_0) \prec f(K_1)$. 2. Let $K$ be a consequence relation and $\overline{K}$ its rational closure; then $f(\overline{K})$ is the rational closure of $f(K)$. 3. Let $K$ be a consequence relation which is invariant under $f$, namely $f(K)=K$; then its rational closure (if it exists) is invariant under $f$. [**Proof:** ]{} The proof is immediate from the definitions, noting that $f$ is also a bijection of the set of all consequence relations. $\Box$ The knowledge base $N$ defined above has no rational closure. [**Proof:** ]{} We shall reason by contradiction. Suppose $R$ is the rational closure of $N$. From Lemma \[le:cperf\] in the sequel (the proof of which does not depend on the present lemma), we know that there is no assertion of the form $\alpha \mathrel{|\!\sim} {\bf false}$ in $R$ that is not in $N^p$. Using a construction very similar to the one used in the proof of Lemma \[le:well\], one may build, for any integer $n$, a preferential model of $N$ containing a top state that satisfies $p_{n}$. Therefore, for any $n$, $p_{n}$ is consistent for $R$. Remember that is the assertion . We shall write to mean that is in $R$. It follows from results of Section \[subsec:rank\] that, on formulas that are consistent for $R$, the relation $<$ is a strict modular ordering. Notice, also, that, for any $n$, the assertion belongs to $N^p$, since both and are in $N^p$. Therefore .
There are, in $R$, two infinite (in both directions) chains (for $<$), one containing the variables of odd index, the other one containing those of even index. Since $<$ is modular, we may consider only four cases:

1. \[exo1\] For every even $n$ and odd $k$, . Let $f$ be the renaming of ${\cal L}$ given by . Clearly . Hence, by Lemma \[lemma:inv-renaming\], we must have . But this last statement implies for even $n$ and odd $k$, and therefore implies that all $p_n$'s are inconsistent for $R$. A contradiction.

2. \[exo2\] For every even $n$ and odd $k$, . The argument is exactly as in case \[exo1\], systematically interchanging 'odd' and 'even'.

3. \[exo3\] There is an odd $k$, and there are even $m$ and $n$ such that . In this case, define a renaming $f$ by $f(l)=l$ for odd $l$ and $f(l)=l+m-n$ for even $l$. The contradiction is as above by noting that $f$ transforms into .

4. \[exo4\] None of the above is true. In such a case one may see that there must exist an even $m$, and odd $i$ and $j$, such that . The argument is exactly as in case \[exo3\], systematically interchanging 'odd' and 'even'.

$\Box$

Global properties of the operation of rational closure {#subsec:ratcum}
------------------------------------------------------

First, we show that rational closure possesses a [*loop*]{} property analogous to the property discussed in [@KLMAI:89 Section 4]. This is a powerful property that one is happy to have. \[le:loop\] Let $K_i$ for $i = 0 , \ldots , n-1$ be knowledge bases such that, for any $i$, , where addition is understood modulo $n$. Then for any $i , j$, one has . [**Proof:** ]{} Let $K \preceq K'$ mean that either $K \prec K'$ or $K = K'$. Since $\overline{K_{i}}$ is a rational extension of $K_{i+1}$, we have , for all $i$ (modulo $n$). We conclude that the rational closures of all the $K_i$'s are equal. $\Box$ The following property of [*reciprocity*]{} is the special case $n = 2$. \[co:rec\] If and , then .
The following property of [*cumulativity*]{} is equivalent to reciprocity in the presence of inclusion (i.e., ). \[co:cum\] If then . The meaning of Corollary \[co:cum\] is that one may add to a knowledge base anything that is in its rational closure without changing this closure. We may now show that, in two different respects, rational closure is close to being perfect. \[le:tperf\] The consequence relation [$\overline {\bf K}$]{}, if it exists, contains an assertion of the form only if this assertion is in [${\bf K}^p$]{}. [**Proof:** ]{} Suppose an assertion of the form above is in [$\overline {\bf K}$]{}. We shall show that it must be in any rational extension of [[**K**]{}]{} and will conclude by Theorem \[rat:ent\]. Suppose $K'$ is a rational extension of [[**K**]{}]{} and is in . Since $\overline{\bf K} \prec K'$, we know there is an assertion in $K' - \overline{\bf K}$ such that in $K'$. But this means is in $K'$, which contradicts the fact that is not in $K'$. $\Box$ \[le:cperf\] The consequence relation [$\overline {\bf K}$]{}, if it exists, contains an assertion of the form only if this assertion is in [${\bf K}^p$]{}. [**Proof:** ]{} Let $V$ and $T$ be preferential models defining the relations [$\overline {\bf K}$]{} and [${\bf K}^p$]{} respectively. Such models exist by Theorem \[compth:pref\]. Let $U$ be the model obtained by putting $T$ on top of $V$, i.e., every state of $V$ is less than every state of $T$. One easily sees that $U$ satisfies the smoothness property and is therefore a preferential model. It defines a preferential relation $S$. An assertion of the form is in $S$ only if it is in [${\bf K}^p$]{}, since $T$ is a submodel of $U$. If  is not inconsistent in [$\overline {\bf K}$]{}, then for any ,  is in $S$ iff it is in [$\overline {\bf K}$]{}. By Lemma \[le:auxrat\], there is a rational extension $R$ of $S$ with the same set of inconsistent formulas.
If one looks at the construction described in the proof of this lemma, one sees that it will add to $S$ only assertions whose antecedent is inconsistent in [$\overline {\bf K}$]{}. Therefore, if  is not inconsistent in [$\overline {\bf K}$]{}, then for any ,  is in $R$ iff it is in [$\overline {\bf K}$]{}. Now, $R$ is a rational extension of [[**K**]{}]{}. If $R$ is equal to [$\overline {\bf K}$]{}, we are through. Suppose not. Then we have . Suppose now that is in . There must be an assertion  in such that in $R$. But  in implies that is in [$\overline {\bf K}$]{} and  is in [$\overline {\bf K}$]{}. A contradiction. $\Box$

Admissible knowledge bases and their rational closure {#subsec:proof}
-----------------------------------------------------

In this section, we show that an admissible (see Definition \[def:acc\]) knowledge base has a rational closure and that this rational closure may be defined in terms of the ranks of the formulas, as defined in Section \[subsec:ranking\]. This provides a useful and elegant characterization of the rational closure of an admissible knowledge base. \[the:ratclos\] Let [[**K**]{}]{} be an admissible conditional knowledge base. The rational closure [$\overline {\bf K}$]{} of [[**K**]{}]{} exists and is the set $S$ of all assertions  such that either

1. the rank of  is strictly less than the rank of (this includes the case where  has a rank and has none), or

2.  has no rank (in this case has no rank either).

[**Proof:** ]{} Suppose indeed that every formula consistent with [${\bf K}^p$]{} has a rank. We have many things to check. First let us prove that $S$ contains [[**K**]{}]{}. If  is in [[**K**]{}]{} and  has rank $\tau$, then $C_{\tau}$ contains  and entails . Therefore is exceptional for $C_{\tau}$, and has rank strictly larger than $\tau$. We should now check that $S$ is rational. For [**Left Logical Equivalence**]{}, [**Right Weakening**]{} and [**Reflexivity**]{} the proof is easy.
For [**Cautious Monotonicity**]{}, notice that if  is in $S$, then  and have the same rank. For [**And**]{} and [**Or**]{}, notice that the rank of a disjunction is the smaller of the ranks of its components. For [**Rational Monotonicity**]{}, notice that if is not in $S$, then  and have the same rank. We must now check that if $R$ is a rational extension of [[**K**]{}]{} that is different from $S$, then . Let $R$ be such an extension. We shall first show that $S$ and $R$ must agree on all assertions whose antecedents have no rank (the notion of rank is always defined by reference to [[**K**]{}]{}). Indeed, by construction, any such assertion is in $S$, and it is preferentially entailed by [[**K**]{}]{} since [[**K**]{}]{} is admissible. It is therefore in $R$. We conclude that $S$ and $R$ must differ on some assertion whose antecedent has a rank. Let $\tau$ be the smallest rank at which $S$ and $R$ differ, i.e., the smallest rank of an  such that there is a  such that . We have two cases to consider: either there is a formula  of rank greater or equal to $\tau$ such that, for all formulas  of rank greater or equal to $\tau$, in $R$, or there is no such formula. Suppose there is such an . Our first claim is that, for any  of rank greater than $\tau$, the assertion is in $R$. Consider indeed a ranked model $W$ that defines $R$. Let $W''$ be the supermodel obtained from $W$ by adding to $W$, at each level $l$, a state labeled with world $w$ for every world $w$ that labels a state of rank less than $l$ in $W$. It is clear that $W''$ is ranked and defines the same relation as $W$, and, in $W''$, every label that appears at some level $l$ also appears at all greater levels. Let $W'$ be the submodel of $W''$ that contains all those states of level (rank in $W''$) greater or equal to the minimal level $l$ at which some state satisfies . It clearly satisfies the smoothness property (for this we needed to go through the construction of $W''$).
Since a formula is satisfied in $W''$ at some level less than $l$ iff it is of rank less than $\tau$, no antecedent of an assertion of $C_{\tau}$ is satisfied at any level less than $l$. But $W''$ is a model of [[**K**]{}]{} and therefore $W'$ is a model of $C_{\tau}$. But $C_{\tau}$ preferentially entails . The model $W'$ therefore satisfies but not . It therefore also satisfies . But the antecedent of this last assertion has rank greater or equal to $\tau$, and therefore no state of $W''$ that is not in $W'$ satisfies it. Therefore is satisfied by $W''$ and is an element of $R$. Our second claim is that there is an assertion  in , such that  is of rank $\tau$ and in $R$. We consider two cases. 1. There is an assertion  in with $\xi$ of rank $\tau$. Then has rank greater than $\tau$, and by our first claim, is in $R$. But  is not in $R$, and therefore we must have for $R$. This last assertion is not in $S$ since both  and $\xi$ have rank $\tau$. The assertion is a suitable . 2. There is an assertion  in with $\xi$ of rank $\tau$. If in $R$, then  is a suitable . Suppose, then, that in $R$. Since $\xi$ has the same rank as , is in and a suitable . We may now conclude that . The assertion  fulfills the requirements of Condition \[<1\] of Definition \[def:order\], since  has rank $\tau$. For Condition \[<2\], suppose  is in , then $\xi$ must be of rank greater or equal to $\tau$ and is of rank greater than $\tau$. By our first claim we conclude that for $R$. It is a matter of elementary properties of rational relations to check that if is in $R$, but  is not, then for $R$. Since in $R$, we conclude that for $R$. Suppose now that there is no such . Take any formula  of rank $\tau$. There is a formula  of rank greater or equal to $\tau$ such that for $R$. But this assertion is then in . It satisfies Condition \[<1\] of Definition \[def:order\], since its antecedent has rank $\tau$. Suppose now  is in . Then $\varphi$ is of rank at least $\tau$.
If it is of rank $\tau$, there is a formula $\pi$ of rank at least $\tau$ such that is in $R$, but not in $S$, and this provides the witness requested by Condition \[<2\]. If it is of rank greater than $\tau$, then the assertion defined just above will do. $\Box$

A model-theoretic description of rational closure {#subsec:modrat}
-------------------------------------------------

We shall describe here a model-theoretic construction that transforms a preferential model $W$ into a ranked model $W'$ by letting all states of $W$ sink as low as they can while respecting the order of $W$, i.e., it ranks the states of $W$ by their height in $W$. We shall show that, under certain conditions, the model $W'$ defines the rational closure of the relation defined by $W$. This construction is clearly interesting only when the model $W$ is well-founded. We know that, in this case, the relation defined by $W$ indeed possesses a rational closure (Theorem \[the:ratclos\] and Lemma \[co:acc\]). It would have been pleasant to be able to prove the validity of such a construction for an arbitrary well-founded preferential model. Unfortunately we are not able to show this in general, but need to suppose, in addition, that the preferential relation defined by $W$ is well-founded (see Definition \[def:wellf\]). This is quite a severe restriction, since we have seen at the end of Section \[subsec:prefmod\] that finitely-generated relations on arbitrary languages ${\cal L}$ are not always well-founded. When the language ${\cal L}$ is logically finite, we know all preferential relations are well-founded. Given a well-founded preferential relation, the construction may be applied to any of its well-founded models. Let $P$ be a well-founded preferential relation and $W$ any well-founded preferential model that defines $P$. We shall define, for any ordinal $\tau$, two sets of states: $U_{\tau}$ and $V_{\tau}$. Those sets satisfy, for any $\tau$, .
The set $U_{\tau}$ contains, in addition to the elements of previous $V$'s, the states that are minimal among those states not previously added. The set $V_{\tau}$ contains, in addition to the states of $U_{\tau}$, all states that satisfy only formulas already satisfied by states previously considered. $$\label{eq:Utau} U_{\tau} {\stackrel{\rm def}{=}}\bigcup_{\rho < \tau}{V_{\rho}} \cup \{ s \in S \mid \forall t \in S {\rm \ such \ that\ } t \prec s, {\rm there \ is \ a \ } \rho < \tau {\rm \ such \ that \ } t \in V_{\rho} \}$$ $$\label{eq:Vtau} V_{\tau} {\stackrel{\rm def}{=}}\{ s \in S \mid \forall \alpha \in {\cal L} {\rm \ such \ that \ } s \models \alpha, \exists t \in U_{\tau} {\rm \ such \ that \ } t \models \alpha\}$$ Since the model $W$ is well-founded, every state is in some $V_{\tau}$. Let the height of a state $s$ (in $W$) be the least ordinal $\tau$ for which $s \in V_{\tau}$. We shall now show that there is a close relationship between the rank of a formula  in $P$ (see the definition following Definition \[def:exc\]) and the height in $W$ of the states that satisfy . For any ordinal $\tau$, we shall denote by $W_\tau$ the substructure of $W$ consisting of all states of height larger or equal to $\tau$. Notice that, since $W$ is well-founded, $W_\tau$ is a preferential model. Notice also that all elements of are minimal elements of $W_{\tau}$. \[le:Sch\] Let $\tau$ be an ordinal. Let  be a formula of rank at least $\tau$ and  be any formula.

1. No state of height less than $\tau$ satisfies .

2. The model $W_\tau$ satisfies iff  is preferentially entailed by $C_{\tau}$.

In particular, if  has no rank, no state in $S$ satisfies . [**Proof:** ]{} The proof proceeds by simultaneous ordinal induction on $\tau$. Suppose both claims have been proved for all ordinals $\rho < \tau$. Let us prove our first claim.
Since  has rank at least $\tau$, for any $\rho < \tau$, $C_{\rho}$ preferentially entails . By the induction hypothesis (item 2), $W_{\rho}$ satisfies . Therefore no state of satisfies . If there were a state $s$ of height satisfying , there would be a state $t$ of satisfying . We conclude that no state of height less than $\tau$ satisfies . For the second claim, by Lemma \[le:funda\],  is preferentially entailed by $C_{0}$ (i.e., in $P$, i.e., satisfied by $W$) iff it is preferentially entailed by $C_{\tau}$. By the first claim,  is satisfied by $W$ iff it is satisfied by $W_{\tau}$. $\Box$ \[le:ra\] A formula  has rank $\tau$ in $P$ iff there is a state of height $\tau$ that satisfies  and there is no such state of height less than $\tau$. [**Proof:** ]{} We shall prove the [*only if*]{} part. The [*if*]{} part is then obvious. First, remark that if is a preferential relation that contains the assertion , then it contains the assertion . This is easily shown by preferential reasoning. Suppose now that  has rank $\tau$. Lemma \[le:Sch\] shows that no state of height less than $\tau$ satisfies . We must show that there is a state of height $\tau$ satisfying . Let  be any formula of rank larger or equal to $\tau$ that is minimal with respect to $<$ among those formulas. There is such a formula, since the set is not empty ( is there) and $<$ is well-founded. Since  is not exceptional for $C_{\tau}$, the assertion is not preferentially entailed by $C_{\tau}$, and therefore the assertion is not preferentially entailed by $C_{\tau}$. But has rank $\tau$ and, by Lemma \[le:Sch\], $W_{\tau}$ does not satisfy . There is, therefore, in $W_{\tau}$, a state $s$ satisfying  such that no state $t$ in $W_{\tau}$ with $t \prec s$ satisfies . We shall show that $s$ is minimal in $W_{\tau}$ and has therefore height $\tau$. Suppose $s$ is not minimal in $W_{\tau}$. There would then be a state $t$ minimal in $W_{\tau}$ such that $t \prec s$.
This state $t$ has height $\tau$ and, by construction, it satisfies some formula $\beta'$ that is not satisfied at any smaller height. By Lemma \[le:Sch\], $\beta'$ has rank greater than or equal to $\tau$, and the formula has rank greater than or equal to $\tau$. Since , the minimality of   implies that . In other terms, . But the state $t$, in $W$, satisfies $\beta'$ and is minimal among states satisfying . Therefore $t$ satisfies . A contradiction. Lemma \[le:ra\] shows that, given a well-founded preferential relation (resp. a finite knowledge base), and a well-founded preferential model $W$ for it (resp. for its preferential closure), one may build a ranked model for its rational closure by ranking the states of $W$ by their height. Computing rational closure {#subsec:comprat} -------------------------- We shall now provide an algorithm for deciding whether an assertion is in the rational closure of a finite knowledge base. The notation $E(C)$ has been defined following Definition \[def:exc\]. Lemma \[co:acc\] and Theorem \[the:ratclos\] show that, given a finite knowledge base [[**K**]{}]{} and an assertion $\alpha{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$, the following algorithm is adequate. > $C = {{\bf K}}$; > > while $\alpha$ is exceptional for $C$ and $E(C) \neq C$, $C := E( C )$; > > if $\alpha \wedge \neg \beta$ is exceptional for $C$ then answer yes else answer no. The only thing left for us to implement is checking whether a formula is exceptional for a given finite knowledge base. The next lemma shows this is easily done. \[material\] Let  be the conditional assertion . The material counterpart of , denoted by $\tilde{\cal A}$, is the formula , where ${\rightarrow}$, as usual, denotes material implication. If [[**K**]{}]{} is a set of assertions, its material counterpart $\tilde{\bf K}$ is the set of material counterparts of the assertions of [[**K**]{}]{}. \[le:ex1\] Let [[**K**]{}]{} be a conditional knowledge base and  a formula. Then $\tilde{\bf K} \models \alpha$ iff [[**K**]{}]{} preferentially entails .
[**Proof:** ]{} The [*if*]{} part follows from the fact that any world satisfying $\tilde{\bf K}$ and not  provides a one-state preferential model satisfying [[**K**]{}]{} and not satisfying . For the [*only if*]{} part, suppose $\tilde{\bf K} \models \alpha$. By compactness, there is a finite subset of $\tilde{\bf K}$ that entails $\alpha$. By rules [**S**]{}, [**And**]{} and [**Right Weakening**]{} we conclude that [[**K**]{}]{} preferentially entails . \[co:exc\] Let [[**K**]{}]{} be a conditional knowledge base and  a formula. The formula  is exceptional for [[**K**]{}]{} iff $\tilde{\bf K} \models \neg \alpha$. We see that, if [[**K**]{}]{} contains $n$ assertions, in the previous algorithm, we may go over the while loop at most $O(n)$ times. Each time we shall have to consider at most $n+1$ formulas and decide whether they are exceptional or not. The whole algorithm needs at most $O(n^2)$ such decisions. In the most general case, all such decisions are instances of the satisfiability problem for propositional calculus, therefore solvable in non-deterministic polynomial time (in the size of the knowledge base [[**K**]{}]{} times the size of the formulas involved). Therefore, even in the most general case, the problem is not much more complex than the satisfiability problem for propositional calculus. These results may be improved if we restrict ourselves to assertions of a restricted type. For example, if the assertions of [[**K**]{}]{} are of the Horn type (we mean their material counterparts are Horn formulas), and the assertion   is of the same type, then, since each decision may be taken in polynomial deterministic time, the whole algorithm runs in deterministic polynomial time. The complexity discussion above is mainly of theoretical interest.
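The algorithm and the exceptionality test above can be made concrete. The following is a minimal brute-force Python sketch of our own (all names are ours, not the paper's): formulas are Boolean functions over valuations, classical entailment is checked by enumerating valuations, and exceptionality is decided through the material counterpart, as licensed by Corollary \[co:exc\].

```python
from itertools import product

ATOMS = ["a", "b"]  # a tiny illustrative vocabulary

def atom(name):     return lambda v: v[name]
def neg(f):         return lambda v: not f(v)
def conj(f, g):     return lambda v: f(v) and g(v)
def material(f, g): return lambda v: (not f(v)) or g(v)  # material counterpart of f |~ g

def valuations():
    for bits in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def entails(premises, goal):
    """Classical entailment, decided by brute force over all valuations."""
    return all(goal(v) for v in valuations() if all(p(v) for p in premises))

def exceptional(f, base):
    """f is exceptional for base iff the material counterpart of base entails not-f."""
    return entails([material(x, y) for (x, y) in base], neg(f))

def E(base):
    """The sub-base of assertions whose antecedent is exceptional for base."""
    return [(x, y) for (x, y) in base if exceptional(x, base)]

def in_rational_closure(a, b, K):
    """Decide whether a |~ b is in the rational closure of K."""
    C = K
    while exceptional(a, C) and E(C) != C:
        C = E(C)
    return exceptional(conj(a, neg(b)), C)

A, B = atom("a"), atom("b")
K = [(A, B)]                              # the single assertion a |~ b
print(in_rational_closure(A, B, K))       # True
print(in_rational_closure(A, neg(B), K))  # False
```

Each call to `exceptional` enumerates all $2^n$ valuations, which matches the remark that every decision in the algorithm is an instance of a propositional satisfiability question.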
The important practical question is: given a fixed large knowledge base, what information, of reasonable size, should be precomputed to allow efficient answers to queries of the type: is an assertion in the rational closure? The pre-computation of the different $C_n$ sub-bases would already reduce the exponent of $n$ in the complexity of the algorithm by one. J. Dix noticed that the algorithm just presented for computing the rational closure of a finite knowledge base may be used to compute the preferential closure of such a knowledge base, since, by Corollary \[co:reso\] and Lemmas \[co:acc\] and \[le:cperf\], the assertion  is in [${\bf K}^p$]{}  iff the assertion is in the rational closure of the knowledge base . A discussion of rational closure {#subsec:disc} -------------------------------- We have so far shown that rational closure provides a mathematically elegant and effective answer to the question of the title that satisfies Thesis \[rational\]. It is now time to evaluate whether it provides an answer that matches our intuitions. We shall first present two now classical knowledge bases, describe their rational closure and examine whether they fit our intuitions. Then, we shall discuss the way rational closure treats inheritance of generic properties to abnormal individuals. Finally, we shall try to address the question of whether our formalism is suitable to describe domain knowledge. Let our knowledge base consist of the following two assertions. 1. [*republican*]{} ${\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}$ $\neg$[*pacifist*]{} 2. [*quaker*]{} ${\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}$ [*pacifist*]{} It is easy to see that none of the assertions of the base is exceptional, but that the formula [*republican*]{} $\wedge$ [*quaker*]{} is exceptional. From this we deduce that neither the assertion nor the assertion is in the rational closure. This seems the intuitively correct decision in the presence of contradictory information.
In fact, if we know somebody to be both a Quaker and a Republican, we (i.e., rational closure) shall draw about him only conclusions that are logically implied by our information. Rational closure endorses , meaning that, since we have no information on the pacifism of workers, we shall assume that Republican workers behave as Republicans in this respect. We (i.e., rational closure) also endorse , meaning we are ready to use contraposition in many circumstances. We do not have , though, and quite rightly, since Republicans may well be a small minority among non-pacifists. We have , meaning we think being both a Republican and a Quaker is exceptional. We endorse and , which are also intuitively correct conclusions. If we add to our knowledge base the fact that rich people are typically Republicans, we shall deduce that rich people are typically not pacifists, meaning we endorse a restricted form of transitivity. We shall also deduce that Quakers are typically not rich, which is perhaps more debatable. We shall not conclude anything about the pacifism of rich Quakers though, since rich Quakers are exceptional. We shall not conclude anything either concerning rich Quakers that are not Republicans, which is more debatable. If we want to conclude that rich non-Republican Quakers are pacifists, we should add this assertion explicitly to the knowledge base. The addition will not interfere with previously discussed assertions. Let our knowledge base consist of the following three assertions. 1. [*penguin*]{} ${\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}$ [*bird*]{} 2. [*penguin*]{} ${\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}$ $\neg$[*fly*]{} 3. [*bird*]{} ${\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}$ [*fly*]{} The first two assertions are exceptional, the last one is not.
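The exceptionality claim for the penguin base can be checked mechanically. Below is a small brute-force sketch of our own (predicate names are ours): by Corollary \[co:exc\], a formula is exceptional for the base iff no propositional model of the base's material counterpart satisfies it.

```python
from itertools import product

ATOMS = ["penguin", "bird", "fly"]

# Material counterparts of: penguin |~ bird, penguin |~ not-fly, bird |~ fly.
BASE = [lambda v: (not v["penguin"]) or v["bird"],
        lambda v: (not v["penguin"]) or (not v["fly"]),
        lambda v: (not v["bird"]) or v["fly"]]

def models_of_base():
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(f(v) for f in BASE):
            yield v

def exceptional(formula):
    # exceptional iff the material counterpart entails the formula's negation,
    # i.e. no model of the base satisfies the formula
    return all(not formula(v) for v in models_of_base())

print(exceptional(lambda v: v["penguin"]))  # True: assertions 1 and 2 are exceptional
print(exceptional(lambda v: v["bird"]))     # False: assertion 3 is not
```

Any model of the material counterpart containing a penguin would have to both fly and not fly, so no such model exists; a bird that flies and is not a penguin witnesses the non-exceptionality of the third assertion.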
It follows that we (i.e., rational closure) endorse the following assertions: (a case of contraposition), (another case of contraposition), (penguins are exceptional, even among non-flying objects), (penguins are exceptional birds), (penguins are exceptional also among non-birds), (this is an intuitively correct preemption: we prefer specific information to non-specific information), (black penguins don’t fly either, since they are normal penguins), (green birds are normal birds). The following assertions are not endorsed: (there could be non-flying birds other than penguins), (seems intuitively clear), (obviously). A more general reflexion suggests the following. Theorem \[the:ratclos\] shows that, in the rational closure, no information about normal cases may be relevant to abnormal cases. It is a very intriguing question whether human beings obey this rule of reasoning or not. A specific example has been discussed by J. Pearl in a personal communication. It probably goes back to A. Baker. Suppose we know that most Swedes are blond and tall. If we are going to meet Jon, whom we know to be short and to come from Sweden, should we necessarily expect him to be fair? The answer endorsed by rational closure is [*not necessarily*]{}, since short Swedes are exceptional and we have no specific information about such cases. We do not know how people generally handle this and, even if we knew, it is not clear that AI systems should react in exactly the same way: people are, after all, notoriously bad with statistical information. The answer to the question how should people behave in this case, if they were smart and had all the relevant information, depends on the sociobiology of the Swedish population and is not relevant either. 
There is very solid ground, though, to claim that, in the framework described here, in which a knowledge base contains only positive conditional assertions, the only sensible way to handle this problem is not to expect anything about the color of Jon’s hair. The reason is that, if we ever find out that most short Swedes are blond (or dark, for that matter) it will be easy enough to add this information to our knowledge base. On the contrary, had we chosen to infer that Jon is expected to be blond, and had we found out that half of the short Swedes only are fair, we would not have been able to correct our knowledge base to remove the unwanted inference: adding the fact that most short Swedes are not blond being obviously incorrect. Since, by looking at a number of examples, we have gathered some experience on the behavior of rational closure, we would like to propose the following strengthening of Thesis \[rational\]. \[super\] The set of assertions entailed by any set of assertions [[**K**]{}]{} is a rational superset of  [$\overline {\bf K}$]{}. Thesis \[super\] means that a reasonable system should endorse any assertion contained in the rational closure, but it may also endorse some additional assertions, as long as it defines a rational relation. The search for natural constructions satisfying Thesis \[super\], but providing more [*inheritance*]{} than rational closure is open. The main question that has not been addressed yet is whether conditional knowledge bases are suitable to describe domain knowledge. Undoubtedly much work still has to be done before we may answer this question satisfactorily. We shall only try to express here why we think the answer may well be positive. Representing common sense knowledge is far from trivial in any one of the existing formalisms, such as Circumscription or Default Logic. 
Indeed, to represent any substantive piece of common sense knowledge in one of those formalisms, one needs to be an expert at the mechanics of the formalism used, and they differ greatly from one formalism to the next. Deciding on the different abnormality predicates in Circumscription and the relations between them, or working out the default rules in Default Logic so as to ensure the correct precedence of defaults, needs the hand of an expert. In the formalism proposed here, conditional knowledge bases, the treatment is much simpler since abnormality predicates do not appear explicitly and the default information is described in a much poorer language than that of Default Logic. We rely on the general algorithm for computing rational closure (or some other algorithm that will be found suitable) to deal in a mechanical, uniform and tractable manner with the interactions between different pieces of default information. The fact that our language of assertions is much poorer than those of other formalisms seems to us to be a great asset. Nevertheless, it is probable that the size of useful conditional knowledge bases will be very large. Indeed, in our approach, adding new assertions to the knowledge base may solve almost any problem. Two main topics for further research may then be delineated. The first one is to find practical ways to avoid having to look at the whole knowledge base before answering any query. The set of assertions constituting a knowledge base will have to be structured (off-line, once and for all) in such a way that irrelevant assertions do not have to be looked at. The second one is to find lucid and compact descriptions of large conditional knowledge bases. This will involve looking seriously into the question: where does the conditional knowledge come from?
Different answers may be appropriate in different domains: it may well be that conditional knowledge is derived from causal knowledge in ways that are different from those in which it is derived from conventions of speech or statistical information. Conclusion {#sec:con} ========== We have presented a [*mathematically tractable*]{} framework for nonmonotonic reasoning that can be proved to possess many pragmatically attractive features. Its computational complexity compares favorably with that of most well-established systems. In many cases the intuitively correct answer is obtained. In others, the answer given and the way it was obtained provide an interesting point of view on the knowledge base. Much more practical experience is needed before one may assess the pragmatic value of the approach. The task of extending the results presented here to first-order languages is not an easy one. First steps towards this goal are described in [@LMTARK:90]. Acknowledgements {#sec:Ack} ================ David Makinson suggested importing the thesis [**CV**]{} of conditional logic into the study of nonmonotonic consequence relations, i.e., suggested considering what is called here rational relations. He conjectured that the corresponding family of models was that of ranked models. He was also instrumental in stressing the importance of studying global properties of nonmonotonic inference operations. Discussions with the following people helped us disprove hasty conjectures, put this work in perspective, and improve the presentation of this paper: Johan van Benthem, Michael Freund, Haim Gaifman, Hector Geffner, Matthew Ginsberg, David Israel, Sarit Kraus, John McCarthy and Judea Pearl. Karl Schlechta’s suggestions and Jürgen Dix’s remarks on a previous draft have been very useful. Finally, this paper has been fortunate to receive attention and care of a rare quality from two anonymous referees. We want to thank them. Ernest W. Adams. . D. Reidel, Dordrecht, 1975.
Peter Cheeseman. In defense of [*an inquiry into computer understanding*]{}. , 4(1):129–142, February 1988. Peter Cheeseman. An inquiry into computer understanding. , 4(1):58–66, February 1988. R. Chisholm. The contrary-to-fact conditional. , 55:289–307, 1946. reprinted in Readings in Philosophical Analysis, edited by H. Feigl and W. Sellars, Appleton-Century-Crofts, New York, 1949, pp. 482–497. Keith L. Clark. Negation as failure. In H. Gallaire and J. Minker, editors, [*Logics and Data Bases*]{}, pages 293–322. Plenum Press, 1978. N. J. Cutland. Non standard measure theory and its applications. , pages 529–589, 1983. James P. Delgrande. A first-order logic for prototypical properties. , 33:105–130, 1987. James P. Delgrande. An approach to default reasoning based on a first-order conditional logic: Revised report. , 36:63–90, August 1988. Jon Doyle and Michael P. Wellman. Impediments to universal preference-based default theories. , 49(1–3):97–128, May 1991. Michael Freund. A semantic characterization of disjunctive relations. In Philippe Jorrand and J. Kelemen, editors, [*Proceedings of FAIR’91, Lecture Notes in Artificial Intelligence Vol. 535*]{}, pages 72–83, Smolenice, Czechoslovakia, September 1991. Springer Verlag. Michael Freund, Daniel Lehmann, and Paul Morris. Rationality, transitivity and contraposition. , 1991? in print. Matthew L. Ginsberg. Counterfactuals. , 30:35–79, 1986. George Grätzer. . W. H. Freeman, San Francisco, 1971. William L. Harper. A sketch of some recent developments in the theory of conditionals. In William L. Harper, Robert Stalnaker, and Glenn Pearce, editors, [*Ifs: Conditionals, Belief, Decision, Chance and Time*]{}, volume 15 of [ *The University of Western Ontario Series in Philosophy of Science*]{}, chapter Introduction, pages 3–38. D. Reidel, Dordrecht, Boston, London, 1981. William L. Harper, Robert Stalnaker, and Glenn Pearce, editors. , volume 15 of [*The University of Western Ontario Series in Philosophy of Science*]{}. D. 
Reidel, Dordrecht, Boston, London, 1981. H. J. Keisler. . Prindle, Weber&Schmidt Inc., Boston, 1976. Sarit Kraus, Daniel Lehmann, and Menachem Magidor. Nonmonotonic reasoning, preferential models and cumulative logics. , 44(1–2):167–207, July 1990. Daniel Lehmann. What does a conditional knowledge base entail? In Ron Brachman and Hector Levesque, editors, [*Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning*]{}, Toronto, Canada, May 1989. Morgan Kaufmann. Daniel Lehmann and Menachem Magidor. Rational logics and their models: a study in cumulative logic. Technical Report TR 88-16, Leibniz Center for Computer Science, Dept. of Computer Science, Hebrew University, Jerusalem, November 1988. Daniel Lehmann and Menachem Magidor. Preferential logics: the predicate calculus case. In [*Proceedings of the Third Conference on Theoretical Aspects of Reasoning About Knowledge*]{}, pages 57–72, Monterey, California, March 1990. Morgan Kaufmann. David Makinson. General theory of cumulative inference. In M. Reinfrank, J. de Kleer, M. L. Ginsberg, and E. Sandewall, editors, [*Proceedings of the Second International Workshop on Non-Monotonic Reasoning*]{}, pages 1–18, Grassau, Germany, June 1988. Springer Verlag. Volume 346, Lecture Notes in Artificial Intelligence. John McCarthy. Circumscription, a form of non monotonic reasoning. , 13:27–39, 1980. Drew McDermott and Jon Doyle. Non-monotonic logic [I]{}. , 25:41–72, 1980. Robert C. Moore. Semantical considerations on nonmonotonic logic. , 25:75–94, 1985. Donald Nute. Conditional logic. In Dov M. Gabbay and Franz Guenthner, editors, [*Handbook of Philosophical Logic*]{}, chapter Chapter II.8, pages 387–439. D. Reidel, Dordrecht, 1984. Judea Pearl. . Morgan Kaufmann, P.O. Box 50490, Palo Alto, CA 94303, 1988. Judea Pearl. System [Z]{}: a natural ordering of defaults with tractable applications to nonmonotonic reasoning. 
In [*Proceedings of the Third Conference on Theoretical Aspects of Reasoning About Knowledge*]{}, pages 121–135, Monterey, California, March 1990. Morgan Kaufmann. Frank Plumpton Ramsey. General propositions and causality (1925). In D. H. Mellor, editor, [*Foundations: Essays in Philosophy, Logic, Mathematics and Economics*]{}, pages 237–257. Routledge and K. Paul, London, 1978. Raymond Reiter. A logic for default reasoning. , 13:81–132, 1980. Raymond Reiter. , volume 2 of [*Annual Reviews in Computer Science*]{}, pages 147–186. Annual Reviews Inc., 1987. Abraham Robinson. . North-Holland, Amsterdam, 1966. Ken Satoh. A probabilistic interpretation for lazy nonmonotonic reasoning. Technical report, Institute for New Generation Computer Technology, 1-4-28 Mita, Minato-ku, Tokyo 108, Japan, December 1989. Yoav Shoham. A semantical approach to nonmonotonic logics. In [*Proc. Logics in Computer Science*]{}, pages 275–279, Ithaca, N.Y., 1987. Robert C. Stalnaker and Richmond H. Thomason. A semantic analysis of conditional logic. , 36:23–42, 1970. Alfred Tarski. . Clarendon Press, Oxford, 1956. Moshe Y. Vardi, editor. , Monterey, California, March 1988. Morgan Kaufmann. Lemmas needed to prove Theorem \[comthe:rat\] {#appen:rep} ============================================= Let us suppose that some rational consequence relation is given. The notion of a consistent formula has been presented in Definition \[def:cons\]. Let $S$ denote the set of all consistent formulas. Let us now recall Definition 10 of [@KLMAI:89]. \[def:normworld\] The world is a normal world for $\alpha$ iff such that , . The following is an easy corollary of Lemma 8 of [@KLMAI:89]. \[le:cons\] A formula is consistent iff there is a normal world for it. We shall now define a pre-order relation on the set $S$. \[leq\] Where , we shall say that $\alpha$ is not more exceptional than $\beta$ and write iff . \[trans\] The relation ${\cal R}$ is transitive. 
[**Proof:** ]{} Straightforward from Lemma \[rankder\]. The fact that the relation ${\cal R}$ was restricted to the set $S$ is not used here and ${\cal R}$ would have been transitive also on the whole language ${\cal L}$. \[total\] Let $\alpha, \beta \in S$. Either or (or both). In particular ${\cal R}$ is reflexive. [**Proof:** ]{} The proof proceeds by contradiction. Suppose we have and . Then we have and . By [**And**]{} and [**Reflexivity**]{} we have , and therefore and, by Rule \[eq-1\], . Therefore , contradicting . The fact that ${\cal R}$ was restricted to $S$ is crucial here. \[nm\] If , any normal world for $\alpha$ that satisfies $\beta$ is normal for $\beta$. [**Proof:** ]{} Suppose , $m$ is normal for $\alpha$ and satisfies $\beta$. Let $\gamma$ be such that . We must show that . Since $m$ is normal for $\alpha$ and satisfies $\beta$, it is enough to show that . But, implies, by [**Left Logical Equivalence**]{}, . By the rule [**S**]{} of [@KLMAI:89], one then obtains . But, by definition of ${\cal R}$, and, by [**Rational Monotonicity**]{}, one deduces . \[equiv\] Let $\alpha, \beta \in S$. We shall say that $\alpha$ is as exceptional as $\beta$ and write iff and . Since ${\cal R}$ is reflexive and transitive, the relation $\sim$ is an equivalence relation. The equivalence class of a formula $\alpha$ will be denoted by $\overline{\alpha}$ and $E$ will denote the set of equivalence classes of formulas of $S$ under $\sim$. We shall write iff and we shall write iff and . This notation should cause no confusion with a similar notation used with a different meaning, in the context of preferential relations, in [@KLMAI:89] and in Section \[subsec:prefent\]. By Lemmas \[trans\] and \[total\], the relation $<$ is a strict total order on the set $E$. \[&lt;\] Let $\alpha, \beta$ be consistent formulas. If then . [**Proof:** ]{} The assumption implies that , i.e., . Rule (\[eqzero\]) implies the conclusion.
\[betabar\] Let $\alpha, \beta$ be consistent formulas. If there is a normal world for $\alpha$ that satisfies $\beta$, then . [**Proof:** ]{} If there is a normal world for $\alpha$ that satisfies $\beta$, then we conclude by Lemma \[&lt;\] that . Let $W$ be the ranked model , where $V \subseteq {\cal U} \times S$ is the set of all pairs such that $m$ is a normal world for $\alpha$, $l(<m,\alpha>)$ is defined to be $m$ and $\prec$ is defined as iff . To show that $W$ is a ranked model, we must prove that it satisfies the smoothness condition. \[minchar\] In $W$, the state is minimal in $\widehat\beta$ iff and. [**Proof:** ]{} First notice that iff . For the [*only if*]{} part, suppose that is minimal in $\widehat\beta$. The world $m$ is normal for $\alpha$ and satisfies $\beta$. By Lemma \[betabar\] we conclude that . But, since $\beta$ is consistent, by Lemma \[le:cons\] there is a normal world $n$ for $\beta$. The pair is an element of $V$ that satisfies $\beta$ and, by the minimality of in $\widehat\beta$, , i.e., , i.e., . We conclude . For the [*if*]{} part, suppose that $m$ is a normal world for $\alpha$ that satisfies $\beta$ and that . If $n$ is normal for $\gamma$ and then and therefore . By Lemma \[&lt;\] and $n$, which is normal for $\gamma$, cannot satisfy $\beta$. The state is then minimal in $\widehat\beta$. The following is an immediate corollary of Lemma \[minchar\]. \[enough\] If $m$ is a normal world for $\alpha$, the pair is a state of $V$ and is minimal in $\widehat\alpha$. [**Proof:** ]{} Suppose $m$ is normal for $\alpha$. First, since there is a normal world for $\alpha$, $\alpha \in S$ and the pair is in $V$. Since $m$ is normal for $\alpha$ it satisfies $\alpha$. We may now prove that the model $W$ satisfies the smoothness property and defines the consequence relation . \[smoothness\] Let $\alpha$ be a consistent formula. The set is smooth. [**Proof:** ]{} Suppose .
Then, $m$ is a normal world for $\beta$ that satisfies $\alpha$ and, by Lemma \[betabar\], . If , then, by Lemma \[minchar\], is minimal in $\widehat\alpha$. Otherwise, . In this case, let $n$ be any world normal for $\alpha$ (there is such a world since $\alpha$ is consistent). The pair is minimal in $\widehat\alpha$ by Lemma \[minchar\] and . \[nottoomany\] If is minimal in $\widehat\beta$, then $m$ is normal for $\beta$. [**Proof:** ]{} Suppose is minimal in $\widehat\beta$. By Lemma \[minchar\] . Therefore . But $m$ is normal for $\alpha$ and satisfies $\beta$, and Lemma \[nm\] implies that $m$ is normal for $\beta$. Non-standard probabilistic semantics {#appen:nonstandard} ==================================== Introduction {#introduction} ------------ We shall now describe, in Definition \[def:ourmodels\], another family of probabilistic models; they provide much more direct semantics for nonmonotonic reasoning than Adams’, at the price of using the language of non-standard (in the sense of A. Robinson) probability theory. The purpose of this section is to provide additional evidence in support of Thesis \[rational\]. We shall show that rational relations are exactly those that may be defined by non-standard [*probabilistic*]{} models. In other terms, if, given a probability distribution, we decide to accept the assertion  iff the conditional probability of  given is very close to one, then the consequence relation we define is rational. On the other hand, any rational relation may be defined, in such a way, by some probability distribution. The results presented in this appendix are not used in the body of the paper. A different representation theorem for rational relations, also based on Theorem \[comthe:rat\], in terms of one-parameter families of standard probabilistic models has been proved recently by K. Satoh [@Satoh:89]. Results relating the semantics of conditionals and non-Archimedean probabilities seem to have been obtained by R.
Giles around 1980. There is a school of thought in Artificial Intelligence, represented in particular by [@Che:88; @Che:88b], that denies the validity of the logical approach to modeling common-sense reasoning. The alternative suggested is the Bayesian probabilistic approach. Namely, the only way in which we should make sensible inferences from our knowledge  is by estimating the conditional probability of the required conclusion  given our knowledge , and then adopting  if we are satisfied that this conditional probability is close enough to $1$. We believe that this approach may run into considerable practical difficulties, the choice being between keeping an explicit data base of these many conditional probabilities or estimating them from a small sample. The chief source of difficulty here is that knowing the probability of  and  tells you very little about the probability of their intersection. But we shall not argue the matter in detail here. The main purpose of this section is to show that rational knowledge bases may be considered to come from such a probabilistic model, if we let the cut-off point of how close the conditional probability of  given  has to be before we are ready to adopt  as a sensible consequence of , approach $1$ as a limit. Namely,  is a sensible consequence of , iff the conditional probability is [*infinitesimally*]{} close to $1$. In order to have an interesting theory, there must be probabilities that are not standard real numbers, but belong to a richer system of numbers, containing some infinitesimally small numbers. We shall show that this approach allows one to keep a probabilistic intuition while thinking about common-sense reasoning, namely think about $\alpha{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$ as meaning that the conditional probability of  given  is large, and still defines a well-behaved consequence relation that is not necessarily monotonic. 
Note that if one considers a standard probabilistic model and accepts $\alpha{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$ as satisfied by the model iff the conditional probability $Pr(\beta/\alpha)>1-\epsilon$, for some choice of a positive $\epsilon$, one obtains a consequence relation that is not well-behaved. For instance, one may have $\alpha{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\beta$ and $\alpha{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\gamma$ satisfied by the model, while $\alpha {\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\ \beta \wedge \gamma$ is not satisfied. If, on the other hand, one chooses $\epsilon$ to be $0$, one obtains a well-behaved consequence relation, but this relation is always monotonic, and the entailment defined is classical entailment (read  as material implication). J. McCarthy told us he suggested considering non-standard probabilistic models long ago, but, as far as we know, this suggestion has not been systematically pursued. The structure of this section is as follows: first we shall briefly survey the basic notions of non-standard analysis. We shall also introduce non-standard probability spaces. Then we shall introduce non-standard probabilistic models for non-monotonic reasoning, define the consequence relation given by such a model and prove that any consequence relation given by a non-standard probability model is rational. Lastly we shall show that the axioms are complete for this interpretation, i.e., any rational consequence relation can be represented as the consequence relation given by some non-standard probability model, at least in the case where the language ${\cal L}$ is countable. If ${\cal L}$ is not countable, an easy counter-example shows the result does not hold, but we shall not elaborate in this paper.
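The failure of the [**And**]{} rule for a fixed positive $\epsilon$, noted above, can be seen on a four-world probability space; the numbers below are our own choice of example, not taken from the paper.

```python
from fractions import Fraction

# Worlds are truth assignments to (beta, gamma); alpha holds in every world,
# so Pr(. / alpha) is just Pr(.).  The weights are chosen so that beta and
# gamma each pass the acceptance threshold while their conjunction fails.
F = Fraction
worlds = {(True, True):  F(8, 10),
          (True, False): F(1, 10),
          (False, True): F(1, 10),
          (False, False): F(0)}

def prob(pred):
    return sum(p for w, p in worlds.items() if pred(w))

eps = F(15, 100)

def accepted(pred):
    # accept alpha |~ pred iff Pr(pred / alpha) > 1 - eps
    return prob(pred) > 1 - eps

beta  = lambda w: w[0]
gamma = lambda w: w[1]

print(accepted(beta))                     # True:  Pr = 9/10 > 17/20
print(accepted(gamma))                    # True:  Pr = 9/10 > 17/20
print(accepted(lambda w: w[0] and w[1]))  # False: Pr = 8/10
```

Exact rational arithmetic (`Fraction`) is used so the threshold comparison is not muddied by floating-point rounding.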
Non-Standard Analysis --------------------- Non-standard analysis was invented by Abraham Robinson in order to give a rigorous development of analysis in which limiting processes are replaced by behaviour at the infinitesimally small, e.g., the derivative becomes a quotient of the change in the function divided by the change in the argument, when the argument is infinitesimally increased. In this section we shall give a very brief introduction to the basic ideas. The reader interested in a full treatment can consult A. Robinson’s [@Robinson:66] or Keisler’s [@Keisler:76] books on the topic. More advanced topics related to non-standard probability theory are surveyed in [@Cutland:83]. The basic idea of non-standard analysis is to extend the real numbers to a larger ordered field while preserving many of the basic properties of the reals. Therefore, we consider a structure of the form $${\cal R}{^{*}}=\langle R{^{*}},+{^{*}},\times{^{*}},<{^{*}},0,1\rangle$$ such that ${\cal R}{^{*}}$ is an elementary extension of the standard real numbers, namely ${\bf R} \subset R{^{*}}$, the operations and the order relation of extend those of [**R**]{} and for every first order formula $\Phi$ $${\mbox{${\cal R}{^{*}}$}}\models\Phi(x_{1},\ldots,x_{n})\ \mbox{iff}\ {\bf R} \models\Phi(x_{1},\ldots,x_{n})$$ for $x_{1},\ldots,x_{n}\in {\bf R}$. Since we would like to consider not only properties of the real numbers, but real valued functions, functions from real valued functions into reals, and so on, we shall consider a richer structure: the superstructure of the real numbers. The superstructure of the set $X$ is $V_{\infty}(X)=\bigcup_{n=0}^{\infty}V_{n}$ where $V_{n}$ are defined by induction: - $V_{0}=X$ - $V_{n+1}={\cal P}(V_{n})\cup V_{n}$ where ${\cal P}(Y) $ is the power set of Y. Note that the superstructure of $X$ contains all the relations on $X$, all $n$-valued functions from $X$ into $X$, etc. 
In a non-standard model of the real numbers we would like to have a non-standard counterpart to any standard member of the superstructure of the real numbers. Note that the set theoretical relation $\in$ makes sense in the superstructure of $X$. Recall that a formula of the first order language having only $\in$ as a non logical constant is called bounded if it is constructed by the usual connectives and [*bounded* ]{}quantifiers, namely $(\forall x \in y)$ and $(\exists x \in y)$ meaning respectively: $\forall x \ \mbox{\em if} \ x \in y \ \mbox{\em then} \ldots$ and $\exists x \ x \in y \wedge\ldots$. A non-standard model of analysis is an ordered field ${\mbox{${\cal R}{^{*}}$}}$ that is a [*proper*]{} extension of the ordered field of the reals, together with a map ${^{*}}$ from the superstructure of [**R**]{} into the superstructure of ${\mbox{${\cal R}{^{*}}$}}$, such that for every [*bounded*]{} formula $\Phi(x_{1} \ldots x_{n})$: $$V_{\infty}({\bf R})\models\Phi(a_{1}\ldots a_{n})\ \mbox{iff}\ V_{\infty}({\mbox{${\cal R}{^{*}}$}})\models\Phi(a_{1}{^{*}}\ldots a_{n}{^{*}}) \ {\bf (Leibniz \ Principle)}$$ and such that for $x \in {\bf R}$, $x{^{*}}= x$ (we assume that ${^{*}}$ transforms the standard operations of [**R**]{} into those of ${\mbox{${\cal R}{^{*}}$}}$). The Leibniz principle guarantees that the non-standard counterpart of any standard notion (namely its ${^{*}}$) preserves many of the properties of the standard object. In particular it is an object of the same kind: for example if $A$ is a set of functions from [**R**]{} to [**R**]{}, then $A{^{*}}$ is a set of functions from ${\mbox{${\cal R}{^{*}}$}}$ into ${\mbox{${\cal R}{^{*}}$}}$. As another example consider the absolute value as a function from [**R**]{} to [**R**]{}.
In [**R**]{} it has the property $$(\forall x \in {\bf R}) \left ( {\mid x \mid} \geq 0 \wedge \left ( {\mid x \mid} = 0 \leftrightarrow x = 0 \right ) \right )$$ Then by the Leibniz principle $$(\forall x \in {\mbox{${\cal R}{^{*}}$}}) \left ( {\mid x \mid}{^{*}}\geq{^{*}}0 \wedge \left ( {\mid x \mid}{^{*}}= 0 \leftrightarrow x = 0 \right ) \right )$$ In fact since the ${^{*}}$ versions of the standard arithmetic operations and relations (like $\leq$, $\geq, >, <$) are so similar to the standard ones (they extend them) we shall simplify the notation by dropping the ${^{*}}$, letting the context determine whether we mean the standard operation on [**R**]{}, or its extension to ${\mbox{${\cal R}{^{*}}$}}$. The next theorem shows that this is not a formal game: There exists a non-standard model for analysis. The proof is an application of the compactness theorem. The extension of [**R**]{}, ${\mbox{${\cal R}{^{*}}$}}$, is not unique but nothing in the following arguments depends on the particular choice of the non-standard extension of [**R**]{}. So fix one such extension ${\mbox{${\cal R}{^{*}}$}}$. 1. $x \in {\mbox{${\cal R}{^{*}}$}}$ is called [*finite*]{} if ${\mid x \mid} < y$ for some $y \in {\bf R}$, or, equivalently, if ${\mid x \mid} < n$ for some natural number $n$. 2. $x \in {\mbox{${\cal R}{^{*}}$}}$ is called [*infinitesimal*]{} if for all $\epsilon$ in [**R**]{}, $\epsilon>0$, ${\mid x \mid}<\epsilon$. Following our definition $0$ is infinitesimal. 3. $x\in V_{\infty}({\mbox{${\cal R}{^{*}}$}})$ is called [*internal*]{} if $x\in y{^{*}}$ for some $y\in V_{\infty}({\bf R})$. The set of internal objects is denoted by $V_{\infty}{^{*}}$. 4. $x\in V_{\infty}({\mbox{${\cal R}{^{*}}$}})$ is standard if $x=y{^{*}}$ for some $y\in V_{\infty}({\bf R})$. It follows easily, from the fact that ${\mbox{${\cal R}{^{*}}$}}$ is a proper extension of [**R**]{}, that there are infinitesimal, as well as infinite, members of ${\mbox{${\cal R}{^{*}}$}}$. In fact a non-zero $x$ is infinitesimal iff $1/x$ is infinite.
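Though ${\mbox{${\cal R}{^{*}}$}}$ itself must be produced by the compactness theorem, the arithmetic of infinitesimal, finite and infinite elements can be made concrete in a toy non-archimedean ordered ring: Laurent polynomials in a single positive infinitesimal $\epsilon$ (so that $\epsilon^{-1}$ is infinite), ordered by the sign of the lowest-degree term. The representation below is our own illustration, not part of the construction of a non-standard model:

```python
def norm(p):
    """p maps integer powers of eps to real coefficients; drop zero terms."""
    return {e: c for e, c in p.items() if c != 0}

def add(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return norm(r)

def mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return norm(r)

def is_positive(p):
    p = norm(p)
    # eps is smaller than every positive real, so the lowest power dominates
    return bool(p) and p[min(p)] > 0

def is_infinitesimal(p):
    # |p| smaller than every positive real; 0 counts, as in the text
    return all(e >= 1 for e in norm(p))

def is_finite(p):
    return all(e >= 0 for e in norm(p))

eps = {1: 1.0}
x = {0: 3.0, 1: 2.0}          # 3 + 2*eps: finite, not infinitesimal
inf_elt = {-1: 1.0}           # 1/eps: infinite

print(is_infinitesimal(mul(eps, x)))       # product with a finite element
print(is_positive(add(x, {1: -1000.0})))   # 3 - 998*eps is still positive
print(is_finite(inf_elt))
```

Note that `is_positive` makes $3 - 998\epsilon$ positive however large the coefficient of $\epsilon$, which is exactly the non-archimedean order.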
If [**N**]{} is the set of natural numbers, one can show that [**N**]{} is a proper subset of ${\bf N}{^{*}}$ and every member of ${\bf N}{^{*}}-{\bf N}$ is called a non-standard natural number. \[lemma:sum\] 1. The sum, product and difference of two infinitesimals are infinitesimal. 2. The product of an infinitesimal and a finite member of ${\mbox{${\cal R}{^{*}}$}}$ is infinitesimal. \[Robinson-overspill\] Let $\langle A_{n}\mid n\in {\bf N} \rangle$ be a sequence of members of $V_{k}({\bf R})$ for some $k\in {\bf N}$. Assume also that, for all $n \in {\bf N}$, $A_{n} \neq \emptyset$ and $A_{n+1}\subseteq A_{n}$. Then $\bigcap_{n\in {\bf N}}A_{n}{^{*}}$ is not empty. [**Sketch of proof:**]{} Note that a sequence of elements of $V_{k}({\bf R})$ can be considered to be a function from [**N**]{} into $V_{k}({\bf R})$, and therefore it is a member of $V_{\infty}({\bf R})$. Hence $\langle A_{n}\mid n\in {\bf N} \rangle{^{*}}$ makes sense and it is a function from ${\bf N}{^{*}}$ into $V_{k}({\bf R}){^{*}}$. Its value at $h\in {\bf N}{^{*}}$ will be denoted by $(A){^{*}}_{h}$. Note that $(A){^{*}}_{n}=A_{n}{^{*}}$ for $n\in {\bf N}$. Let $h\in {\bf N}{^{*}}-{\bf N}$. One can easily check, using the Leibniz principle, that $(A){^{*}}_{h}$ is not empty and that for $n\in {\bf N}$ $(A){^{*}}_{h}\subseteq A{^{*}}_{n}$, hence $\bigcap_{n\in {\bf N}}A_{n}{^{*}}$ is not empty. We can now define the notion of non-standard probability space, which is like a standard (finitely additive) probability space, except that the values of the probability function are in ${\mbox{${\cal R}{^{*}}$}}$. An ${\mbox{${\cal R}{^{*}}$}}$-[*probability space*]{} is a triple $\langle X, {\cal F}, Pr\rangle$ where $X$ is a non-empty set, $\cal F$ is a Boolean subalgebra of ${\cal P}(X)$ (namely $X\in\cal F$, $\emptyset\in\cal F$, and $\cal F$ is closed under finite unions, intersections and differences) and $Pr$ is a function from ${\cal F}$ into ${\mbox{${\cal R}{^{*}}$}}$ such that 1. $Pr(A)\geq0$ for $A\in\cal F$. 2. $Pr(X)=1$ 3.
$Pr(A\cup B)=Pr(A)+Pr(B)$ for $A, B\in \cal F$, $A$ and $B$ disjoint. Note that many of the notions that are usually associated with probability spaces are immediately generalized to ${\mbox{${\cal R}{^{*}}$}}$-probability spaces, like independence of ‘events’ (namely sets in $\cal F$) and conditional probability: if $Pr(A)\not=0$ then the conditional probability of $B$ given $A$ is $$Pr(B \mid A)=\frac{Pr(A \cap B)}{Pr(A)}.$$ See [@Cutland:83] for sophisticated applications of non-standard probability spaces. A useful way of getting ${\mbox{${\cal R}{^{*}}$}}$-probability spaces is by using hyperfinite sets, sets which the non-standard model considers to be finite. An internal object $A\in V{^{*}}_{\infty}$ is called [*hyperfinite*]{} iff there exists a function $f\in{\mbox{$V{^{*}}_{\infty}$}}$ and $h\in {\bf N}{^{*}}$ such that $f$ is a 1-1 mapping of $h$ onto $A$. Note that we follow the usual set theoretical convention by which a natural number is identified with all smaller natural numbers. Of course here we apply this convention also to non-standard natural numbers. By applying the Leibniz principle we can show that if $A$ is hyperfinite and $B$ is an internal subset of $A$, then $B$ is hyperfinite. Given an ${\mbox{${\cal R}{^{*}}$}}$-valued function $f$ which is internal, and $A$ a hyperfinite subset of the domain of $f$, we can naturally define the ‘sum’ of the values of $f$ on $A$, $\sum{^{*}}_{x\in A}f(x)$. ${\mbox{$\sum{^{*}}$}}$ is defined by taking the ${^{*}}$ of the standard operation of taking the sum of a finite set of real numbers. ${\mbox{$\sum{^{*}}$}}$ shares many of the properties of its standard counterpart, for example $${\mbox{$\sum{^{*}}$}}_{x\in A\cup B}f(x)={\mbox{$\sum{^{*}}$}}_{x\in A}f(x)+{\mbox{$\sum{^{*}}$}}_{x\in B}f(x)$$ for $A$, $B$ hyperfinite and disjoint. The next definition generalizes the notion of a finite probability space. \[def:HFPS\] Let $A\in{\mbox{$V{^{*}}_{\infty}$}}$ be a hyperfinite set, let $f$ be an internal ${\mbox{${\cal R}{^{*}}$}}$-valued function on $A$, which is not constantly zero and such that for $x\in A$, $f(x)\geq0$.
Then the [*${\mbox{${\cal R}{^{*}}$}}$-probability space generated by A and f*]{} (denoted by $PR{^{*}}(A, f)$) is $\langle A, {\cal F}, Pr\rangle$ where $\cal F$ is the collection of all internal subsets of $A$, and ${\em Pr}$ is given by $$Pr(B)=\frac{{\mbox{$\sum{^{*}}$}}_{x\in B}f(x)}{{\mbox{$\sum{^{*}}$}}_{x\in A}f(x)}$$ One can verify that under the conditions of Definition \[def:HFPS\], $PR{^{*}}(A, f)$ is an ${\mbox{${\cal R}{^{*}}$}}$-probability space. Non-standard Probabilistic Models and Their Consequence Relations {#sec:NSPM} ----------------------------------------------------------------- An ${\mbox{${\cal R}{^{*}}$}}$-probabilistic model ${\mbox{${\cal M}$}}$ is an ${\mbox{${\cal R}{^{*}}$}}$-probability measure on some set ${\mbox{${\cal U}$}}$ of worlds. Of course, we assume that for every formula ${\mbox{$\alpha$}}$ of our language, the set $\hat{{\mbox{$\alpha$}}}$ of worlds satisfying ${\mbox{$\alpha$}}$ is measurable, namely it is in $\cal F$. The probability measure induces a non-standard probability assignment to the formulas of the language by $Pr({\mbox{$\alpha$}})=Pr(\hat{{\mbox{$\alpha$}}})$. The probabilistic model is said to be [*neat*]{} if for every formula ${\mbox{$\alpha$}}$, if $Pr({\mbox{$\alpha$}})=0$ then ${\mbox{$\alpha$}}$ is satisfied in no world of ${\mbox{${\cal U}$}}$. \[def:ourmodels\] 1. Let ${\mbox{${\cal M}$}}$ be an ${\mbox{${\cal R}{^{*}}$}}$-probabilistic model. The conditional assertion ${\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\beta$}}$ is valid in ${\mbox{${\cal M}$}}$, ${\mbox{${\cal M}$}}\models{\mbox{{\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\beta$}}}}$, if either $Pr({\mbox{$\alpha$}})=0$ or the conditional probability of ${\mbox{$\beta$}}$ given ${\mbox{$\alpha$}}$ is infinitesimally close to 1, i.e., $1-Pr({\mbox{$\beta$}}\mid {\mbox{$\alpha$}})$ is infinitesimal. Note that this is equivalent to saying that $Pr(\alpha) = 0$ or $Pr(\neg{\mbox{$\beta$}}\mid{\mbox{$\alpha$}})$ is infinitesimal. 2.
The consequence relation defined by ${\mbox{${\cal M}$}}$ is: $$K({\mbox{${\cal M}$}})=\{{\mbox{{\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\beta$}}}}\mid{\mbox{${\cal M}$}}\models{\mbox{{\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\beta$}}}}\}$$ \[SoundnessProb\] For every ${\mbox{${\cal R}{^{*}}$}}$-probabilistic model ${\mbox{${\cal M}$}}$, $K({\mbox{${\cal M}$}})$ is a rational consequence relation. [**Proof:** ]{} [**Left Logical Equivalence**]{}, [**Right Weakening**]{}, and [**Reflexivity**]{} are immediate. [**And**]{} follows from: $$Pr(\neg ({\mbox{$\beta$}}\wedge{\mbox{$\gamma$}}) \mid {\mbox{$\alpha$}}) = Pr((\neg{\mbox{$\beta$}}\vee\neg{\mbox{$\gamma$}})\mid{\mbox{$\alpha$}}) \leq Pr(\neg{\mbox{$\beta$}}\mid{\mbox{$\alpha$}}) + Pr(\neg{\mbox{$\gamma$}}\mid{\mbox{$\alpha$}})$$ and from the fact that the sum of two infinitesimals is infinitesimal. [**Or**]{} is proved by the following manipulation: $$\begin{aligned} Pr(\neg{\mbox{$\gamma$}}\mid{\mbox{$\alpha$}}\vee{\mbox{$\beta$}})&=&\frac{Pr(\neg{\mbox{$\gamma$}}\wedge({\mbox{$\alpha$}}\vee{\mbox{$\beta$}}))}{Pr({\mbox{$\alpha$}}\vee{\mbox{$\beta$}})} \leq \nonumber \\ \frac{Pr(\neg{\mbox{$\gamma$}}\wedge{\mbox{$\alpha$}})}{Pr({\mbox{$\alpha$}}\vee{\mbox{$\beta$}})} &+& \frac{Pr(\neg{\mbox{$\gamma$}}\wedge{\mbox{$\beta$}})} {Pr({\mbox{$\alpha$}}\vee{\mbox{$\beta$}})}\leq \nonumber \\ \frac{Pr(\neg{\mbox{$\gamma$}}\wedge{\mbox{$\alpha$}})}{Pr({\mbox{$\alpha$}})} &+& \frac{Pr(\neg{\mbox{$\gamma$}}\wedge{\mbox{$\beta$}})} {Pr({\mbox{$\beta$}})} = Pr(\neg{\mbox{$\gamma$}}\mid{\mbox{$\alpha$}}) + Pr(\neg{\mbox{$\gamma$}}\mid{\mbox{$\beta$}})\nonumber\end{aligned}$$ and again using the fact that the sum of two infinitesimals is infinitesimal. We assumed above that $Pr({\mbox{$\alpha$}})>0$ and $Pr({\mbox{$\beta$}})>0$. If this fails then the argument is easier.
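The key inequality behind [**Or**]{}, $Pr(\neg\gamma\mid\alpha\vee\beta) \leq Pr(\neg\gamma\mid\alpha) + Pr(\neg\gamma\mid\beta)$, already holds in any standard probability space with $Pr(\alpha), Pr(\beta)>0$, and can be sanity-checked on random finite spaces. The check below is our own, purely illustrative:

```python
import random

def check_or_inequality(trials=2000, seed=0):
    """Pr(~g | a or b) <= Pr(~g | a) + Pr(~g | b) on random weighted spaces."""
    rng = random.Random(seed)
    for _ in range(trials):
        # eight worlds = truth values of (alpha, beta, gamma); positive weights
        w = {(a, b, g): rng.random() + 1e-9
             for a in (0, 1) for b in (0, 1) for g in (0, 1)}
        pr = lambda pred: sum(v for k, v in w.items() if pred(*k))
        lhs = pr(lambda a, b, g: (a or b) and not g) / pr(lambda a, b, g: a or b)
        rhs = (pr(lambda a, b, g: a and not g) / pr(lambda a, b, g: a)
               + pr(lambda a, b, g: b and not g) / pr(lambda a, b, g: b))
        if lhs > rhs + 1e-12:
            return False
    return True

print(check_or_inequality())  # True
```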
We shall prove [**Rational Monotonicity**]{} in contrapositive form: we assume that ${\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\neg{\mbox{$\beta$}}$ is not in K(${\mbox{${\cal M}$}}$), and that ${\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\gamma$}}$ is in K(${\mbox{${\cal M}$}}$). We shall prove that ${\mbox{$\alpha$}}\wedge{\mbox{$\beta$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\gamma$}}$ is in K(${\mbox{${\cal M}$}}$). We can assume that $Pr({\mbox{$\alpha$}}\wedge{\mbox{$\beta$}})>0$ (hence $Pr({\mbox{$\alpha$}})>0$) otherwise the argument is trivial. $$\begin{aligned} \label{eq:RM} Pr(\neg{\mbox{$\gamma$}}\mid{\mbox{$\alpha$}}\wedge{\mbox{$\beta$}})&=& \frac{Pr(\neg{\mbox{$\gamma$}}\wedge{\mbox{$\alpha$}}\wedge{\mbox{$\beta$}})}{Pr({\mbox{$\alpha$}}\wedge{\mbox{$\beta$}})}= \nonumber \\ \frac{Pr(\neg{\mbox{$\gamma$}}\wedge{\mbox{$\alpha$}}\wedge{\mbox{$\beta$}})}{Pr({\mbox{$\alpha$}})}&/& \frac{Pr({\mbox{$\alpha$}}\wedge{\mbox{$\beta$}})}{Pr({\mbox{$\alpha$}})}\leq\nonumber \\ \frac{Pr(\neg{\mbox{$\gamma$}}\wedge{\mbox{$\alpha$}})}{Pr({\mbox{$\alpha$}})} &/&\frac{Pr({\mbox{$\alpha$}}\wedge{\mbox{$\beta$}})}{Pr({\mbox{$\alpha$}})}=\nonumber \\ Pr(\neg{\mbox{$\gamma$}}\mid{\mbox{$\alpha$}})&\times&\frac{1}{Pr({\mbox{$\beta$}}\mid{\mbox{$\alpha$}})}\end{aligned}$$ Since ${\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\neg{\mbox{$\beta$}}$ is not in $K({\mbox{${\cal M}$}})$, we get that $Pr({\mbox{$\beta$}}\mid{\mbox{$\alpha$}})$ is not infinitesimal, hence $\frac{1}{Pr({\mbox{$\beta$}}\mid{\mbox{$\alpha$}})}$ is finite. By Lemma \[lemma:sum\] $Pr(\neg{\mbox{$\gamma$}}\mid{\mbox{$\alpha$}})\times\frac{1}{Pr({\mbox{$\beta$}}\mid{\mbox{$\alpha$}})}$ is infinitesimal. Hence by Equation \[eq:RM\], ${\mbox{$\alpha$}}\wedge{\mbox{$\beta$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\gamma$}}$ is in $K({\mbox{${\cal M}$}})$.
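Equation \[eq:RM\] amounts to the standard-probability inequality $Pr(\neg\gamma\mid\alpha\wedge\beta) \leq Pr(\neg\gamma\mid\alpha)/Pr(\beta\mid\alpha)$ whenever $Pr(\alpha\wedge\beta)>0$, and it too can be sanity-checked on random finite spaces (again, the check is our own illustration):

```python
import random

def check_rm_inequality(trials=2000, seed=1):
    """Pr(~g | a&b) <= Pr(~g | a) / Pr(b | a) on random weighted spaces."""
    rng = random.Random(seed)
    for _ in range(trials):
        w = {(a, b, g): rng.random() + 1e-9
             for a in (0, 1) for b in (0, 1) for g in (0, 1)}
        pr = lambda pred: sum(v for k, v in w.items() if pred(*k))
        lhs = pr(lambda a, b, g: a and b and not g) / pr(lambda a, b, g: a and b)
        rhs = ((pr(lambda a, b, g: a and not g) / pr(lambda a, b, g: a))
               / (pr(lambda a, b, g: a and b) / pr(lambda a, b, g: a)))
        if lhs > rhs + 1e-12:
            return False
    return True

print(check_rm_inequality())  # True
```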
[**Cautious Monotonicity**]{} now follows easily. Suppose ${\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\beta$}}$ and ${\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\gamma$}}$ are both in $K({\mbox{${\cal M}$}})$; we must show that ${\mbox{$\alpha$}}\wedge{\mbox{$\beta$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\gamma$}}$ is in $K({\mbox{${\cal M}$}})$. If ${\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\neg{\mbox{$\beta$}}$ is not in $K({\mbox{${\cal M}$}})$, we conclude by [**Rational Monotonicity**]{}. If ${\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}\neg{\mbox{$\beta$}}$ is in $K({\mbox{${\cal M}$}})$, we must have $Pr({\mbox{$\alpha$}}) = 0$, since $Pr({\mbox{$\beta$}}\mid{\mbox{$\alpha$}})$ and $Pr(\neg{\mbox{$\beta$}}\mid{\mbox{$\alpha$}})$ cannot be both infinitesimally close to 1. Therefore $Pr({\mbox{$\alpha$}}\wedge{\mbox{$\beta$}})=0$ and we conclude that ${\mbox{$\alpha$}}\wedge{\mbox{$\beta$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\gamma$}}$ is in $K({\mbox{${\cal M}$}})$. Completeness for the Non-Standard Probabilistic Interpretation {#subsec:ComNSI} -------------------------------------------------------------- \[completeness-nonstd\] Suppose the language ${\cal L}$ is countable (this assumption cannot be dispensed with) and $P$ is a rational consequence relation on ${\cal L}$. Let ${\mbox{${\cal R}{^{*}}$}}$ be any non-standard model of analysis; then there exists an ${\mbox{${\cal R}{^{*}}$}}$-probabilistic neat model ${\mbox{${\cal M}$}}$ such that $K({\mbox{${\cal M}$}})=P$. [**Proof:** ]{} Let $W=\langle S, l, \prec \rangle$, with ranking function $r$, be a countable (i.e., $S$ is countable) ranked model that defines the consequence relation $P$. The model built in the proof of Theorem \[comthe:rat\] shows that such models exist. If $S$ is finite, or even if each level in $W$ is finite and $W$ is well-founded, one may simply use the construction described just before Lemma \[le:unnamed\], with some arbitrary infinitesimal $\epsilon$. In case the model $W$ is infinitely [*broad*]{}, i.e., has some level containing an infinite number of states, then the construction has to be slightly more sophisticated, but the real difficulty appears when $W$ is not well-founded, and we have already remarked that there are rational relations that have no well-founded ranked model. Following the proof of Lemma \[le:unnamed\] we would like to assign a (non-standard) probability distribution to the states of the model in such a way that the relative probability of a level to that of a lower level is infinitesimal, but, for every formula which is satisfied at a given level, we would like to keep its relative weight within the level non-infinitesimal.
To each formula we shall assign a positive real number $r$ such that, if the formula is satisfied at level $l$, its relative probability within this level should be at least $r$. In order that these requirements not be contradictory, the sum of the $r$’s so assigned should be at most $1$. Quite arbitrarily, we pick for the $i$-th formula $r=1/2^{i+1}$. Now we have to show that we can find a probability assignment satisfying these requirements. We shall define a set $B_{n}$ of all probability assignments that are [*good up to rank $n$*]{}. An assignment which is [*good for every $n$*]{} will satisfy our requirements. So, we would like to intersect the $B_{n}$’s. The overspill principle will tell us that this intersection is not empty. Since $S$ is countable we may assume that $S={\bf N}$. Since every countable linear ordering may be order embedded into the real numbers, we may assume without loss of generality that the ranking function, $r$, is into [**R**]{}. Since $\prec$ is a partial ordering of [**N**]{}, $\prec{^{*}}$ is a partial ordering of ${\bf N}{^{*}}$ which is ranked by the ranking function $r{^{*}}$ mapping ${\bf N}{^{*}}$ into ${\mbox{${\cal R}{^{*}}$}}$. For each formula ${\mbox{$\alpha$}}$, let $A_{{\mbox{$\alpha$}}}={\mbox{$\hat{{\mbox{$\alpha$}}}$}}{^{*}}$, where ${\mbox{$\hat{{\mbox{$\alpha$}}}$}}$ denotes the set of states of $W$ satisfying ${\mbox{$\alpha$}}$. Note that $A_{{\mbox{$\alpha$}}}$ is a subset of ${\bf N}{^{*}}$ (but need not be a subset of [**N**]{}). We can now associate a world, ${\mbox{${\cal U}$}}_{h}$, with each $h\in{\bf N}{^{*}}$, defined by ${\mbox{${\cal U}$}}_{h}\models p$ iff $h\in A_{p}$, for every propositional variable $p$. It is easily checked that, for standard $h$ (i.e., $h\in {\bf N}$), one has ${\mbox{${\cal U}$}}_{h}=l(h)$ and that, for arbitrary $h$, ${\mbox{${\cal U}$}}_{h}\models{\mbox{$\alpha$}}$ iff $h\in A_{{\mbox{$\alpha$}}}$. Our idea now is to find an $h$ in ${\bf N}{^{*}}$ and an internal function $f$, from $h$ into ${\mbox{${\cal R}{^{*}}$}}$ such that, if we consider the probability distribution given by the hyperfinite probability space $PR{^{*}}(h, f)$ on the set of worlds $\{{\mbox{${\cal U}$}}_{m} \mid m<h\}$, we shall get a probabilistic model whose consequence relation is exactly $P$ (recall that we are identifying a member of ${\bf N}{^{*}}$ with the set of smaller members of ${\bf N}{^{*}}$). Fix an enumeration $\{{\mbox{$\alpha$}}_{i}\}_{i\in{\bf N}}$ of all the formulas of our language.
For each $i\in{\bf N}$, let $x_{i}$ be the real number such that the ranking function $r$ maps all the states minimal in $\hat{{\mbox{$\alpha$}}_{i}}$ to it. We are now going to define a sequence of sets of possible approximations to the object we are looking for, namely the appropriate $h\in {\bf N}{^{*}}$ and the appropriate $f$. For $n\in{\bf N}$, $n\geq 1$, let $B_{n}$ be the set of all triples of the form $(k, \epsilon, f)$ that have the following properties: 1. $k\geq n$, 2. $\epsilon \in {\bf R}$, $\epsilon>0$, $\epsilon \leq1 / n$, 3. $f$ is a function from [**N**]{} into [**R**]{} such that for any $s\in {\bf N}$, $f(s)>0$, 4. \[itemsmall\] for any $x, y \in {\bf R}$ such that $x < y$, if $x$ and $y$ are in the range of the ranking function $r$ on $k$, then $$\frac{\sum_{m<k, r(m)=y}f(m)}{\sum_{m<k, r(m)=x}f(m)} \leq \epsilon ,$$ 5. \[itemstd\] for ${\mbox{$\alpha$}}_{i}$, $i<n$, if the set $C=\{m \mid r(m)=x_{i},\ {\mbox{${\cal U}$}}_{m}\models{\mbox{$\alpha$}}_{i}\}$ is non-empty, then $$\frac{\sum_{m \in C , m < k} f(m)}{\sum_{m < k , r(m) = x_{i}} f(m)} \geq \frac{1}{2^{i+1}}$$ It easily follows from the definition of the sequence of sets that $B_{n+1}\subseteq B_{n}$ for $n\in{\bf N}$. One may also verify from item \[itemsmall\] that, if $(k, \epsilon, f)\in B_{n}$ and if $x$ is in the range of $r$ on $k$, then: $$\label{eq:ratio-inf} \frac{\sum_{m < k , r(m) > x} f(m)} {\sum_{m < k , r(m) = x} f(m)} \leq \sum_{i=1}^{\infty} \epsilon^{i} = \frac{\epsilon}{1-\epsilon}$$ \[nonempty\] For any $n\in{\bf N}$, $B_{n}\neq\emptyset$. [**Proof:** ]{} The proof is essentially similar to the remarks preceding the proof of Lemma \[le:unnamed\] in Section \[subsec:adams\]. Let, indeed, $W_{n}$ be the finite ranked model defined by the states $m<n$ of $W$. We can easily arrange a probability assignment for it such that the ratio of the probability of each rank to that of each smaller rank will be at most $1/n$. Within the rank we have to satisfy item \[itemstd\] in the definition of $B_{n}$ but we can easily arrange for $i<n$, that if ${\mbox{$\alpha$}}_{i}$ has a non-empty intersection with this rank, then its relative probability within this rank is at least $\frac{1}{2^{i+1}}$.
This may be arranged because $\sum_{i\in {\bf N}}\frac{1}{2^{i+1}}=1$. If we extend this probability assignment to any positive function from [**N**]{} into [**R**]{}, we see that the resulting triple $(n, 1/n, f)$ is in $B_{n}$. Once we have Lemma \[nonempty\] we can use Robinson’s overspill principle (Theorem  \[Robinson-overspill\]) to show that $\cap_{n\in {\bf N}}B_{n}{^{*}}$ is not empty. So let $({\mbox{$\tilde{h}$}}, {\mbox{$\tilde{\varepsilon}$}}, {\mbox{$\tilde{f}$}})$ be a member of $B_{n}{^{*}}$ for every $n\in {\bf N}$. One can easily verify that ${\mbox{$\tilde{h}$}}$ is in ${\bf N}{^{*}}$ and that it is a non-standard natural number: indeed ${\mbox{$\tilde{h}$}}\geq n$ for every $n\in{\bf N}$, since $({\mbox{$\tilde{h}$}}, {\mbox{$\tilde{\varepsilon}$}}, {\mbox{$\tilde{f}$}})\in B_{n}{^{*}}$. Similarly ${\mbox{$\tilde{\varepsilon}$}}$ is a positive member of ${\mbox{${\cal R}{^{*}}$}}$ such that for every standard natural number $n$ we have ${\mbox{$\tilde{\varepsilon}$}}\leq\frac{1}{n}$, hence ${\mbox{$\tilde{\varepsilon}$}}$ is a positive infinitesimal. Also ${\mbox{$\tilde{f}$}}$ is a function from ${\bf N}{^{*}}$ into the positive members of ${\mbox{${\cal R}{^{*}}$}}$, satisfying the appropriate transfer of items \[itemstd\] and \[itemsmall\] into the context of ${\mbox{${\cal R}{^{*}}$}}$. In particular, Equation \[eq:ratio-inf\] carries over and we have, for every $x$ in the range of $r{^{*}}$ on ${\mbox{$\tilde{h}$}}$: $$\label{eq:ratio2} \frac{{\mbox{$\sum{^{*}}$}}_{m < {\mbox{$\tilde{h}$}}, r{^{*}}(m) > x} {\mbox{$\tilde{f}$}}(m)} {{\mbox{$\sum{^{*}}$}}_{m < {\mbox{$\tilde{h}$}}, r{^{*}}(m) = x} {\mbox{$\tilde{f}$}}(m)} \leq\frac{{\mbox{$\tilde{\varepsilon}$}}}{1-{\mbox{$\tilde{\varepsilon}$}}}$$ We conclude therefore that the left-hand side of Equation \[eq:ratio2\] is infinitesimal. We claim that the probabilistic model ${\mbox{${\cal M}$}}$ whose collection of states is ${\mbox{$\tilde{h}$}}$, i.e., $\{m\in{\bf N}{^{*}} \mid m<{\mbox{$\tilde{h}$}}\}$, the world associated with $m$ is ${\mbox{${\cal U}$}}_m$, and the probability measure is given by the hyperfinite probability space $PR{^{*}}({\mbox{$\tilde{h}$}}, {\mbox{$\tilde{f}$}})$ is the model we are looking for. Since any $f$ satisfying the requirements can be multiplied by any positive member of ${\mbox{${\cal R}{^{*}}$}}$ and still satisfy the requirements, we may assume without loss of generality that $${\mbox{$\sum{^{*}}$}}_{m\in {\bf N}{^{*}}, m < {\mbox{$\tilde{h}$}}} {\mbox{$\tilde{f}$}}(m)=1$$ Note that ${\mbox{${\cal M}$}}$ is a neat model since, if ${\mbox{${\cal U}$}}_{m}\models{\mbox{$\alpha$}}$ for some $m<{\mbox{$\tilde{h}$}}$, we must also have $Pr({\mbox{$\alpha$}})\geq{\mbox{$\tilde{f}$}}(m)>0$.
\[main-claim\] $K({\mbox{${\cal M}$}})=P$ [**Proof:** ]{} First note that $Pr({\mbox{$\alpha$}})=0$ iff ${\mbox{$\hat{{\mbox{$\alpha$}}}$}}=\emptyset$. If ${\mbox{$\hat{{\mbox{$\alpha$}}}$}}=\emptyset$, then $A_{{\mbox{$\alpha$}}}=\emptyset$, hence no world of ${\mbox{${\cal M}$}}$ satisfies ${\mbox{$\alpha$}}$. Therefore $Pr({\mbox{$\alpha$}})=0$. For the other direction, if ${\mbox{$\hat{{\mbox{$\alpha$}}}$}}\neq\emptyset$, then, for some $m\in{\bf N}$, ${\mbox{${\cal U}$}}_{m}\models{\mbox{$\alpha$}}$. But $m$, being a standard natural number, is less than ${\mbox{$\tilde{h}$}}$, hence some state in ${\mbox{${\cal M}$}}$ satisfies ${\mbox{$\alpha$}}$. By the neatness of ${\mbox{${\cal M}$}}$, $Pr({\mbox{$\alpha$}})\neq 0$. By the previous remark, we can now assume that ${\mbox{$\hat{{\mbox{$\alpha$}}}$}}\neq\emptyset$, hence $Pr({\mbox{$\alpha$}})\neq 0$. Let $s$ be minimal in ${\mbox{$\hat{{\mbox{$\alpha$}}}$}}$ and let $x=r(s)$. Let $i$ be such that ${\mbox{$\alpha$}}={\mbox{$\alpha$}}_{i}$. In particular we have: $$(\forall y \in {\bf R}) (y < x \Rightarrow r^{-1}(y) \cap {\mbox{$\hat{{\mbox{$\alpha$}}}$}} = \emptyset) .$$ Using the Leibniz principle we get: $$\{ m \mid m \in {\bf N}{^{*}}, m < {\mbox{$\tilde{h}$}}, {\mbox{${\cal U}$}}_{m} \models {\mbox{$\alpha$}}\} \subseteq \{ m \mid m \in {\bf N}{^{*}}, r{^{*}}(m) \geq x \} .$$ Let us define now $$\eta = {\mbox{$\sum{^{*}}$}}_{m \in {\mbox{$\tilde{h}$}}, r{^{*}}(m) > x} {\mbox{$\tilde{f}$}}(m)$$ and $$\rho = {\mbox{$\sum{^{*}}$}}_{m \in {\mbox{$\tilde{h}$}}, r{^{*}}(m) = x} {\mbox{$\tilde{f}$}}(m) .$$ For every formula ${\mbox{$\gamma$}}$ define: $$\lambda({\mbox{$\gamma$}}) = {\mbox{$\sum{^{*}}$}}_{m\in{\mbox{$\tilde{h}$}}, r{^{*}}(m)=x , \, {\mbox{${\cal U}$}}_{m} \models {\mbox{$\gamma$}}} {\mbox{$\tilde{f}$}}(m)$$ Of course one always has $\lambda({\mbox{$\gamma$}})\leq\rho$. Note that by Equation \[eq:ratio2\], $\eta/\rho$ is infinitesimal. Also by item \[itemstd\] of the definition of the sequence $B_{n}$, if ${\mbox{$\gamma$}}={\mbox{$\alpha$}}_{j}$ and if some state of rank $x$ satisfies ${\mbox{$\gamma$}}$, then $$\lambda({\mbox{$\gamma$}}) \geq \rho \times \frac{1}{2^{j}}.$$ Now, assume ${\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\beta$}}\in P$. Hence for every $m$ with $r(m)=x$, if ${\mbox{${\cal U}$}}_{m}\models{\mbox{$\alpha$}}$, we must have ${\mbox{${\cal U}$}}_{m}\models{\mbox{$\beta$}}$. Therefore $Pr(\neg{\mbox{$\beta$}}\wedge{\mbox{$\alpha$}})\leq\eta$. Therefore: $$Pr(\neg{\mbox{$\beta$}}\mid {\mbox{$\alpha$}}) = \frac{Pr(\neg{\mbox{$\beta$}}\wedge{\mbox{$\alpha$}})}{Pr({\mbox{$\alpha$}})} \leq \frac{\eta}{\rho \times \frac{1}{2^{i}}} = 2^{i} \times \frac{\eta}{\rho} .$$ Therefore $Pr(\neg{\mbox{$\beta$}}\mid{\mbox{$\alpha$}})$ is infinitesimal and by definition ${\mbox{${\cal M}$}}\models{\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\beta$}}$. If ${\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\beta$}}\notin P$, then some $m$ minimal in ${\mbox{$\hat{{\mbox{$\alpha$}}}$}}$, i.e., with $r(m)=x$, satisfies $\neg{\mbox{$\beta$}}$. But this $m$ satisfies ${\mbox{$\alpha$}}$, so it is in our model. Let $j$ be such that $\neg{\mbox{$\beta$}}\wedge{\mbox{$\alpha$}}={\mbox{$\alpha$}}_{j}$.
Since we clearly have $Pr({\mbox{$\alpha$}})\leq\rho + \eta$, we also have: $$Pr(\neg{\mbox{$\beta$}}\mid{\mbox{$\alpha$}}) = \frac{Pr(\neg{\mbox{$\beta$}}\wedge{\mbox{$\alpha$}})} {Pr({\mbox{$\alpha$}})}\geq\frac{\frac{1}{2^{j}} \times \rho} {\rho+\eta} \geq \frac{1}{2^{j+1}}$$ since obviously $\eta\leq\rho$. So $Pr(\neg{\mbox{$\beta$}}\mid{\mbox{$\alpha$}})$ is [*not*]{} infinitesimal and ${\mbox{$\alpha$}}{\mbox{$\: {{\rule[-0.4mm]{.1mm}{3mm}}\hspace{-3.5pt}}\sim$ }}{\mbox{$\beta$}}\notin K({\mbox{${\cal M}$}})$. (end of proof of Claim \[main-claim\]) We have already noticed that ${\mbox{${\cal M}$}}$ is a neat model. Claim \[main-claim\] shows that it has the desired property. (end of proof of Theorem \[completeness-nonstd\]) [^1]: Department of Computer Science, Hebrew University, Jerusalem 91904 (Israel) [^2]: Department of Mathematics, Hebrew University, Jerusalem 91904 (Israel) [^3]: This work was partially supported by grant 351/89 from the Basic Research Foundation, Israel Academy of Sciences and Humanities and by the Jean and Helene Alfassa fund for research in Artificial Intelligence. Its final version was prepared while the first author was visiting the Laboratoire d’Informatique Théorique et de Programmation, Univ. Paris 6.
--- abstract: | Let $\epsilon >0$. A continuous linear operator $T:C(X) {\longrightarrow}C(Y)$ is said to be [*$\epsilon$-disjointness preserving*]{} if ${\left\| }(Tf)(Tg){\right\| }_{\infty} \le \epsilon$, whenever $f,g\in C(X)$ satisfy ${\left\| }f{\right\| }_{\infty} ={\left\| }g{\right\| }_{\infty} =1$ and $fg\equiv 0$. In this paper we address basically two main questions: 1.- How close must there be a weighted composition operator to a given $\epsilon$-disjointness preserving operator? 2.- How far can the set of weighted composition operators be from a given $\epsilon$-disjointness preserving operator? We address these two questions distinguishing among three cases: $X$ infinite, $X$ finite, and $Y$ a singleton ($\epsilon$-disjointness preserving functionals). We provide sharp stability and instability bounds for the three cases. address: - | Departamento de Matemáticas, Estadística y Computación\ Universidad de Cantabria\ Facultad de Ciencias\ Avda. de los Castros, s. n.\ E-39071 Santander, Spain - | Departamento de Matemáticas\ Universitat Jaume I\ Campus Riu Sec\ 8029 AP, Castellón, Spain author: - Jesús Araujo - 'Juan J. Font' title: Stability and instability of weighted composition operators --- Introduction ============ Suppose that a mathematical object satisfies a certain property approximately. Is it then possible to approximate this object by objects that satisfy the property exactly? This stability problem appears in almost all branches of mathematical analysis and is of particular interest in probability theory and in the realm of functional equations. Within this context, considerable attention has been mainly given to approximately multiplicative maps (see [@Jo1], [@Jo], [@Jar], and [@Se]) and to approximate isometries (see [@HU], [@HU2], [@Bour], and [@HIR]). Recently, G. Dolinar ([@Dol]) treated a more general problem of stability concerning a kind of operator which “almost” preserves the disjointness of cozero sets (see Definition \[djp\]).
We need some notation. Let $\mathbb{K}$ denote the field of real or complex numbers. Topological spaces $X$ and $Y$ are assumed to be compact and Hausdorff. Also $C(X)$ stands for the Banach space of all $\mathbb{K}$-valued continuous functions defined on $X$, equipped with its usual supremum norm. An operator $S: C(X) {\longrightarrow}C(Y)$ is said to be a [*weighted composition map*]{} if there exist $a \in C(Y)$ and a map $h: Y {\longrightarrow}X$, continuous on $c(a) := \{ y \in Y : a(y) \neq 0 \}$, such that $$(Sf) (y) = a(y) f(h(y))$$ for every $f \in C(X)$ and $y \in Y$. Obviously every weighted composition map is linear and continuous. We also include the case $S \equiv 0$ as a weighted composition map (in which case $c(a) = \emptyset$). Recall that a linear operator $T:C(X)\longrightarrow C(Y)$ is said to be [*disjointness preserving*]{} (or [*separating*]{}) if, given $f,g\in C(X)$, $fg\equiv 0$ yields $(Tf)(Tg)\equiv 0$. Clearly every weighted composition map is disjointness preserving. Conversely, it is well known that if a disjointness preserving operator is [*continuous*]{}, then it is a weighted composition. On the other hand, automatic continuity of disjointness preserving operators can be obtained sometimes (see for instance [@Jz], [@ABN], [@FH], [@JW]). \[djp\] Let $\epsilon >0$. A continuous linear operator $T:C(X)\longrightarrow C(Y)$ is said to be [*$\epsilon$-disjointness preserving*]{} if ${\left\| }(Tf)(Tg){\right\| }_{\infty} \le \epsilon$, whenever $f,g\in C(X)$ satisfy ${\left\| }f{\right\| }_{\infty} ={\left\| }g{\right\| }_{\infty} =1$ and $fg\equiv 0$ (or, equivalently, if ${\left\| }(Tf)(Tg){\right\| }_{\infty} \le \epsilon {\left\| }f {\right\| }_{\infty} {\left\| }g {\right\| }_{\infty}$ whenever $fg\equiv 0$).
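When $X$ and $Y$ are finite, $C(X)$ and $C(Y)$ are just $\mathbb{K}^{|X|}$ and $\mathbb{K}^{|Y|}$ with the sup norm, and the definitions can be exercised directly. The following sketch (the particular $a$ and $h$ are arbitrary choices of ours, made only for illustration) confirms pointwise that a weighted composition map is disjointness preserving:

```python
import random

# Finite model: X = {0,1,2,3}, Y = {0,1,2}, so C(X) is R^4 with the sup norm.
# A weighted composition map acts by (S f)(y) = a(y) * f(h(y)).
a = [0.5, -1.0, 0.0]      # weight a in C(Y)
h = [2, 0, 1]             # h : Y -> X (only matters where a(y) != 0)

def S(f):
    return [a[y] * f[h[y]] for y in range(len(a))]

rng = random.Random(0)
for _ in range(200):
    mask = [rng.random() < 0.5 for _ in range(4)]
    f = [rng.uniform(-1, 1) if m else 0.0 for m in mask]        # supp f
    g = [rng.uniform(-1, 1) if not m else 0.0 for m in mask]    # disjoint supp
    Sf, Sg = S(f), S(g)
    # (Sf)(y)(Sg)(y) = a(y)^2 f(h(y)) g(h(y)) = 0 since fg == 0 pointwise
    assert all(Sf[y] * Sg[y] == 0.0 for y in range(3))
print("S preserves disjointness")
```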
Obviously the study of $\epsilon$-disjointness preserving operators can be restricted to those of norm $1$, because if $T \neq 0$ is $\epsilon$-disjointness preserving, then $T/{\left\| }T {\right\| }$ is $\epsilon/{\left\| }T {\right\| }^2$-disjointness preserving. On the other hand, every such $T$ has the trivial weighted composition map $S \equiv 0$ at distance $1$. That is, giving any bound equal to or bigger than $1$ does not provide any information on the problem. Apart from this, it can be easily checked that every continuous linear functional on $C(X)$ of norm $1$ is $1/4$-disjointness preserving and, consequently, every continuous linear map $T:C(X)\longrightarrow C(Y)$ with ${\left\| }T{\right\| }=1$ is $1/4$-disjointness preserving. Thus, if we consider again the trivial weighted composition map $S\equiv 0$, then ${\left\| }T-S{\right\| }=1$. We conclude that our study can be restricted to $\epsilon$ belonging to the interval $(0, 1/4)$. In [@Dol] the author, following the above stability questions, studies when an $\epsilon$-disjointness preserving operator is close to a weighted composition map. The main result in [@Dol] reads as follows: Let $\epsilon > 0$ and let $T:C(X)\longrightarrow C(Y)$ be an $\epsilon$-disjointness preserving operator with ${\left\| }T{\right\| }=1$. Then there exists a weighted composition map $S:C(X)\longrightarrow C(Y)$ such that $${\left\| }T-S{\right\| }\le 20\sqrt{\epsilon}.$$ In view of the above comments we conclude that Dolinar’s result is meaningful only for $\epsilon \in {\left(}0, 1/400 {\right)}$. Apart from the general case, Dolinar also concentrates on the study of linear and continuous functionals, where the bound given is $3 \sqrt{\epsilon}$ (see [@Dol Theorem 1]). On the other hand, notice that when $X$ has just one point, we are in a situation of “extreme stability”, because every continuous linear operator is a weighted composition map. 
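The claim that every norm-one functional is $1/4$-disjointness preserving comes down to $s(1-s)\le 1/4$: writing $\varphi(f)=\sum_x c_x f(x)$ on a finite $X$ with $\sum_x |c_x| = 1$, disjointly supported $f,g$ with $\|f\|_\infty, \|g\|_\infty \le 1$ give $|\varphi(f)|\le s$ and $|\varphi(g)|\le 1-s$, where $s$ is the mass of $|c|$ on the support of $f$. A random finite check (our own illustration, not part of the paper's argument):

```python
import random

def quarter_dp_check(trials=2000, n=6, seed=2):
    """|phi(f) phi(g)| <= 1/4 for random norm-1 functionals, disjoint f, g."""
    rng = random.Random(seed)
    for _ in range(trials):
        c = [rng.uniform(-1, 1) for _ in range(n)]
        total = sum(abs(v) for v in c)
        c = [v / total for v in c]                 # ||phi|| = sum |c_x| = 1
        mask = [rng.random() < 0.5 for _ in range(n)]
        f = [rng.uniform(-1, 1) if m else 0.0 for m in mask]
        g = [rng.uniform(-1, 1) if not m else 0.0 for m in mask]
        phi = lambda v: sum(cx * vx for cx, vx in zip(c, v))
        if abs(phi(f) * phi(g)) > 0.25 + 1e-12:
            return False
    return True

print(quarter_dp_check())  # True
```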
But in general, given an $\epsilon$-disjointness preserving operator, we do not necessarily have a weighted composition map arbitrarily close. Instability questions deal with bounds of how far apart an $\epsilon$-disjointness preserving operator can be from all weighted composition maps. In the present paper we improve Dolinar’s result by showing, under necessary restrictions on $\epsilon$, that a weighted composition map is indeed much closer. In fact we address the following two questions. Given any $\epsilon$-disjointness preserving operator, 1. \[domench\] [Stability.]{} [*How close*]{} must there be a weighted composition map? That is, find the shortest distance at which we can be certain that there exists a weighted composition map. 2. \[lironcaretu\] [Instability.]{} [*How far*]{} can the set of all weighted composition maps be? That is, find the longest distance at which we cannot be certain that there exists a weighted composition map. [How close.]{} We prove that, for every $\epsilon < 2/17$, the number $ \sqrt{17\epsilon /2}$ is a bound valid for every $X$ and $Y$ (Theorem \[rz\]). It is indeed the smallest in every case, as we give an example such that, for every $\epsilon <2/17$, no number strictly less than $\sqrt{17 \epsilon/2}$ satisfies it (Example \[gustavo\]). The question appears to be closely related to the following: Find the biggest set $\mathbb{I} \subset (0, 1/4)$ such that every $\epsilon \in \mathbb{I}$ has the following property: [*Given an $\epsilon$-disjointness preserving operator $T: C(X) \longrightarrow C(Y)$ with ${\left\| }T {\right\| }=1$, there exists a weighted composition map $S: C(X) \longrightarrow C(Y)$ such that ${\left\| }T - S {\right\| }<1$.*]{} We prove that $\mathbb{I}= (0, 2/17)$ (Theorem \[rz\] and Example \[andalu\]). We will also study the particular case when $X$ is finite.
Here the bound, which can be given for every $\epsilon <1/4$ and every $Y$, is the number $2 \sqrt{\epsilon}$, and is sharp (Theorem \[opodel\] and Example \[llegomalenayamigos2\]). [How far.]{} Of course, an answer valid for every case would be trivial, because if we take $X$ with just one point, then every continuous linear operator is a weighted composition map, so the best bound is just $0$. If we avoid this trivial case and require $X$ to have at least two points, then we can see that again the problem turns out to be trivial since the best bound is now attained for sets with two points. The same happens if we require the set $X$ to have at least $k$ points. In general, it can be seen that the answer does not depend on the topological features of the spaces but on their cardinalities. If we assume that $Y$ has at least two points, then the number $2 \sqrt{ \epsilon}$ is a valid bound if $X$ is infinite (Theorem \[ex3\]), and a different value plays the same rôle for each finite set $X$ (Theorem \[recero\]). We also prove that these estimates are sharp in every case (Theorems \[vr\] and \[cero\]). But here, instead of providing a concrete counterexample, we can show that the bounds are best for a general family of spaces $Y$, namely, whenever $Y$ consists of the Stone-Čech compactification of any discrete space. On the other hand, unlike the previous question, the answer can be given for every $\epsilon < 1/4$. [The case of continuous linear functionals.]{} The context when $Y$ has just one point, that is, the case of continuous linear functionals, deserves to be studied separately. We do this in Sections \[bound\] and \[ninguno\]. In fact some results given in this case will be tools for a more general study. Various situations appear in this context, depending on $\epsilon$. 
Namely, if $\epsilon<1/4$, then the results depend on a suitable splitting of the interval $( 0, 1/4 )$ (based on the sequence $(\omega_n)$ defined below), as well as on the cardinality of $X$ (Theorem \[martakno\]). Also, as we mentioned above, when $X$ has just one point, every element of $C(X)'$ is a weighted composition map, that is, a scalar multiple of the evaluation functional $\delta_x$. We will see that a related phenomenon sometimes arises when $X$ is finite (see Remark \[nadenas\]). In every case our results are sharp. [**Notation.**]{} Throughout $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$. $X$ and $Y$ will be (nonempty) compact Hausdorff spaces. To avoid the trivial case, we will always assume that $X$ has at least two points. In a Banach space $E$, for $e \in E$ and $r>0$, $B (e, r)$ and $\overline{B} (e, r)$ denote the open and the closed balls of center $e$ and radius $r$, respectively. [*Spaces and functions.*]{} Given any compact Hausdorff space $Z$, we denote by ${\mathrm{card} \hspace{.02in}}Z$ its cardinal. $C(Z)$ will be the Banach space of all $\mathbb{K}$-valued continuous functions on $Z$, endowed with the sup norm ${\left\| }\cdot {\right\| }_{\infty}$. $C(Z)'$ will denote the space of continuous linear functionals defined on $C(Z)$. If $a \in \mathbb{K}$, we denote by $\widehat{a}$ the constant function equal to $a$ on $Z$. In the special case of the constant function equal to $1$, we denote it by ${\bf 1}$. For $f \in C(Z)$, $0 \le f \le 1$ means that $f(x) \in [0,1]$ for every $x \in Z$. Given $f\in C(Z)$, we denote by $c(f) := \left\{ x \in Z : f(x) \neq 0 \right\}$ its cozero set, and by ${\rm supp}(f)$ its support. Finally, if $A \subset Z$, we denote by ${\mathrm{cl} {\hspace{.02in}}}A$ the closure of $A$ in $Z$, and by $\xi_A$ the characteristic function of $A$.
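The splitting of $(0, 1/4)$ invoked above is governed by the sequence $\omega_n := (n^2-1)/(4n^2)$ and the intervals $\mathbb{A}_n := [\omega_{2n-1}, \omega_{2n+1})$ defined later in the paper. As a quick numeric sanity check, not part of the paper itself, the following Python sketch verifies with exact rational arithmetic that $(\omega_n)$ increases strictly from $0$ to $1/4$ and that the intervals $\mathbb{A}_n$ tile $[0, 1/4)$:

```python
from fractions import Fraction

def omega(n):
    # omega_n = (n^2 - 1) / (4 n^2), the sequence used to split (0, 1/4)
    return Fraction(n * n - 1, 4 * n * n)

# (omega_n) starts at 0, is strictly increasing, and converges to 1/4:
# indeed 1/4 - omega_n = 1/(4 n^2) exactly.
ws = [omega(n) for n in range(1, 200)]
assert ws[0] == 0
assert all(a < b for a, b in zip(ws, ws[1:]))
assert all(Fraction(1, 4) - omega(n) == Fraction(1, 4 * n * n)
           for n in range(1, 200))

# A_n = [omega_{2n-1}, omega_{2n+1}) is nonempty, and the right endpoint
# of A_n is by construction the left endpoint of A_{n+1}.
for n in range(1, 100):
    assert omega(2 * n - 1) < omega(2 * n + 1)
```

Since $\omega_1 = 0$ and consecutive intervals abut exactly, the union of the $\mathbb{A}_n$ is precisely $[0, 1/4)$, as claimed.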
[*Continuous linear functionals and measures: $\lambda_{\varphi}$, ${\left|}\lambda {\right|}$, $\delta_x$.*]{} For $\varphi \in C(X)'$, we will write $\lambda_{\varphi}$ to denote the measure which represents it. For a regular measure $\lambda$, we will denote by ${\left|}\lambda {\right|}$ its total variation. Finally, for $x \in X$, $\delta_x$ will be the evaluation functional at $x$, that is, $\delta_x (f) := f(x)$ for every $f \in C(X)$. [*The linear functionals $T_y$ and the sets $Y_r$.*]{} Suppose that $T: C(X) {\longrightarrow}C(Y)$ is linear and continuous. Then, for each $y\in Y$, we define a continuous linear functional $T_y$ as $T_{y}(f):=(Tf)(y)$ for every $f \in C(X)$. Also, for each $r \in \mathbb{R}$ we define $Y_r :=\left\{y\in Y:{\left\| }T_{y}{\right\| }> r \right\}$, which is an open set. It is clear that, if ${\left\| }T{\right\| }=1$, then $Y_r$ is nonempty for each $r<1$. [*The sets of operators.*]{} We denote by ${\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ the set of all $\epsilon$-disjointness preserving operators from $C(X)$ to $C(Y)$, and by ${\mathbf{WCM} {\left(}X,Y {\right)}}$ the set of all weighted composition maps from $C(X)$ to $C(Y)$. When $Y$ has just one point, then ${\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ and ${\mathbf{WCM} {\left(}X,Y {\right)}}$ may be viewed as subspaces of $C(X)'$. In this case, we will use the notation ${\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$ and ${\mathbf{WCM} {\left(}X, \mathbb{K} {\right)}}$ instead of ${\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ and ${\mathbf{WCM} {\left(}X,Y {\right)}}$, respectively. 
That is, ${\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$ is the space of all $\varphi \in C(X)'$ which satisfy ${\left|}\varphi (f) {\right|}{\left|}\varphi (g) {\right|}\le \epsilon$ whenever $f , g \in C(X)$ satisfy ${\left\| }f {\right\| }_{\infty} = 1 = {\left\| }g {\right\| }_{\infty}$ and $f g \equiv 0$, and ${\mathbf{WCM} {\left(}X, \mathbb{K} {\right)}}$ is the subset of $C(X)'$ of elements of the form $\alpha \delta_x$, where $\alpha \in \mathbb{K}$ and $x \in X$. [*The sequences $(\omega_n)$ and ${\left(}\mathbb{A}_n {\right)}$.*]{} We define, for each $n \in \mathbb{N}$, $$\omega_n := \frac{n^2 -1}{4 n^2}$$ and $$\mathbb{A}_n := {\left[}\omega_{2n-1}, \omega_{2n+1} {\right)}.$$ It is clear that ${\left(}\mathbb{A}_n {\right)}$ forms a partition of the interval $[0, 1/4)$. The sequences $(\omega_n)$ and ${\left(}\mathbb{A}_n {\right)}$ will determine bounds in Sections \[leffe\] and \[puntodencuentro\]. Main results I: How close. The general case =========================================== In this section we give the best stability bound in the general case. This result is valid for every $X$, with no restrictions on cardinality (see Section \[paulus-15nov07\] for the proof). \[rz\] Let $0 < \epsilon < 2 /17$, and let $T \in {\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ with ${\left\| }T{\right\| }=1$. Then $$\overline{B} {\left(}T, \sqrt{\frac{17 \epsilon}{2}} {\right)}\cap {\mathbf{WCM} {\left(}X,Y {\right)}}\neq \emptyset.$$ \[ensinkafetefever\] Theorem \[rz\] is accurate in two ways. On the one hand, for every $\epsilon \in (0, 2/17)$, the above bound is sharp, as can be seen in Example \[gustavo\]. On the other hand, $(0, 2/17)$ is the maximal interval on which a meaningful answer can be given. Namely, if $\epsilon \ge 2/17$, then it may be the case that ${\left\| }T -S {\right\| }\ge 1$ for every weighted composition map $S$ (Example \[andalu\]).
But, as explained in the comments after Definition \[djp\], this is not a proper answer for the stability question. Main results II: How far. The case when $X$ is infinite {#004} ======================================================= We study instability first when $X$ is infinite. Our results depend on whether or not the space $X$ admits an appropriate measure. \[ex3\] Let $0 < \epsilon < 1/4$. Suppose that $Y$ has at least two points, and that $X$ is infinite. Then for each $t<1$, there exists $T \in {\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ with ${\left\| }T{\right\| }=1$ such that $$B {\left(}T, 2 t \sqrt{\epsilon} {\right)}\cap {\mathbf{WCM} {\left(}X,Y {\right)}}= \emptyset.$$ Furthermore, if $X$ admits an atomless regular Borel probability measure, then $T$ can be taken such that $$B {\left(}T, 2\sqrt{\epsilon} {\right)}\cap {\mathbf{WCM} {\left(}X,Y {\right)}}= \emptyset.$$ We also see that the above bounds are sharp when considering some families of spaces $Y$. \[vr\] Let $0 < \epsilon < 1/4$. Suppose that $Y$ is the Stone-Čech compactification of a discrete space with at least two points, and that $X$ is infinite. Let $T \in {\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ with ${\left\| }T{\right\| }=1$. Then $$\overline{B} {\left(}T, 2\sqrt{\epsilon} {\right)}\cap {\mathbf{WCM} {\left(}X,Y {\right)}}\neq \emptyset.$$ Furthermore, if $X$ does not admit an atomless regular Borel probability measure and $Y$ is finite (with ${\mathrm{card} \hspace{.02in}}Y \ge 2$), then $$B {\left(}T, 2\sqrt{\epsilon} {\right)}\cap {\mathbf{WCM} {\left(}X,Y {\right)}}\neq \emptyset.$$ The proofs of both results are given in Section \[reallyfar\]. The property of admitting an atomless regular Borel probability (or, equivalently, complex and nontrivial) measure can be characterized in purely topological terms. A compact Hausdorff space admits such a measure if and only if it is not [*scattered*]{} (see [@S Theorem 19.7.6]). Main results III: How far and how close.
The case when $X$ is finite {#leffe} ==================================================================== Next we study the case when $X$ is finite. Here, the best instability bounds depend on the sequence $(\omega_n)$, and the cardinality of $Y$ does not play any rôle as long as it is at least $2$. We define $o'_X : (0, 1/4) {\longrightarrow}\mathbb{R}$, for every finite set $X$ (recall that we are assuming ${\mathrm{card} \hspace{.02in}}X \ge 2$). We put $$o'_X (\epsilon) := \left\{ \begin{array}{rl} 2 \sqrt{\frac{(n-1)\epsilon}{n+1}} & \mbox{if } n := {\mathrm{card} \hspace{.02in}}X \mbox{ is odd and } \epsilon \le \omega_n \\ \frac{n-1}{n} & \mbox{if } n := {\mathrm{card} \hspace{.02in}}X \mbox{ is odd and } \epsilon > \omega_n \\ \frac{2(n-1) \sqrt{\epsilon}}{n} & \mbox{if } n := {\mathrm{card} \hspace{.02in}}X \mbox{ is even} \end{array} \right.$$ \[recero\] Let $0 < \epsilon < 1/4$. Assume that $Y$ has at least two points, and that $X$ is finite. Then there exists $T \in {\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ with ${\left\| }T{\right\| }=1$ such that $$B {\left(}T, o'_X (\epsilon ) {\right)}\cap {\mathbf{WCM} {\left(}X,Y {\right)}}= \emptyset.$$ The next result says that Theorem \[recero\] provides a sharp bound, and gives a whole family of spaces $Y$ for which the same number is a stability bound as well. As we can see in Example \[llegomalenayamigos2\], our requirement on these $Y$ is not superfluous. \[cero\] Let $0 < \epsilon < 1/4$. Suppose that $Y$ is the Stone-Čech compactification of a discrete space with at least two points, and that $X$ is finite. Let $T \in {\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ with ${\left\| }T{\right\| }=1$. Then $$\overline{B} {\left(}T, o'_X ( \epsilon ) {\right)}\cap {\mathbf{WCM} {\left(}X,Y {\right)}}\neq \emptyset.$$ The instability bounds are special when the space $X$ is finite. In the following theorem we study the stability bounds in this particular case.
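The piecewise formula for $o'_X$ can also be checked numerically for consistency: the instability bound should never exceed the stability bound $2\sqrt{\epsilon}$ of Theorem \[opodel\], and the two branches of the odd case should agree at the break point $\epsilon = \omega_n$. A small Python sketch, not part of the paper (the helper names are ours):

```python
import math

def omega(n):
    # omega_n = (n^2 - 1) / (4 n^2)
    return (n * n - 1) / (4.0 * n * n)

def o_prime(card_X, eps):
    # The paper's o'_X(eps) for a finite X with card_X >= 2, 0 < eps < 1/4.
    n = card_X
    if n % 2 == 1:                                   # card X odd
        if eps <= omega(n):
            return 2 * math.sqrt((n - 1) * eps / (n + 1))
        return (n - 1) / n
    return 2 * (n - 1) * math.sqrt(eps) / n          # card X even

# Instability never beats stability: o'_X(eps) <= 2*sqrt(eps).
for k in range(2, 40):
    for i in range(1, 100):
        eps = i / 400.0                              # samples in (0, 1/4)
        assert o_prime(k, eps) <= 2 * math.sqrt(eps) + 1e-12

# The two odd-case branches agree at the break point eps = omega_n,
# since (n-1)*omega_n/(n+1) = ((n-1)/(2n))^2.
for n in range(3, 40, 2):
    e = omega(n)
    assert abs(2 * math.sqrt((n - 1) * e / (n + 1)) - (n - 1) / n) < 1e-12
```

In particular, for ${\mathrm{card} \hspace{.02in}}X = 2$ the even branch reduces to $\sqrt{\epsilon}$, strictly below the general bound $2\sqrt{\epsilon}$.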
Example \[llegomalenayamigos2\] shows that the result is sharp. \[opodel\] Let $0 < \epsilon < 1/4$. Suppose that $X$ is finite, and let $T \in {\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ with ${\left\| }T{\right\| }=1$. Then $$\overline{B} {\left(}T, 2 \sqrt{\epsilon} {\right)}\cap {\mathbf{WCM} {\left(}X,Y {\right)}}\neq \emptyset.$$ Theorems \[recero\] and \[cero\] are proved in Section \[cerilla\], and Theorem \[opodel\] in Section \[myrna\]. Main results IV: The case of continuous linear functionals {#puntodencuentro} ========================================================== In some of the previous results, we assume that the space $Y$ has at least two points. Of course the case when $Y$ has just one point can be viewed as the study of continuous linear functionals. In this section we give the best stability and instability bounds in this case, and see that both bounds coincide. Here we do not require $X$ to be finite, and Theorem \[martakno\] is valid both for $X$ finite and infinite. In any case, the result depends on the sequence $(\omega_n)$ and its relation to the cardinal of $X$. We first introduce the map $o_X : (0, 1/4) {\longrightarrow}\mathbb{R}$ as follows: for $n \in \mathbb{N}$ and $\epsilon \in \mathbb{A}_n $, $$o_X (\epsilon) := \left\{ \begin{array}{rl} \frac{2n-1 - \sqrt{1 - 4 \epsilon}}{2n} & \mbox{if } 2 n \le {\mathrm{card} \hspace{.02in}}X \\ \frac{k-1 - \sqrt{1 -4 \epsilon}}{k} & \mbox{if } k := {\mathrm{card} \hspace{.02in}}X < 2 n \mbox{ and } k \mbox{ is even} \\ \frac{k-1}{k} & \mbox{if } k := {\mathrm{card} \hspace{.02in}}X < 2 n \mbox{ and } k \mbox{ is odd} \end{array} \right.$$ We use this map to give a bound both for stability and instability (see Section \[ninguno\] for the proof). \[martakno\] Let $0 < \epsilon < 1/4$.
If $\varphi \in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$ and ${\left\| }\varphi {\right\| }=1$, then $$\overline{B} {\left(}\varphi, o_X (\epsilon) {\right)}\cap {\mathbf{WCM} {\left(}X, \mathbb{K} {\right)}}\neq \emptyset.$$ On the other hand, there exists $\varphi \in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$ with ${\left\| }\varphi {\right\| }=1$ such that $$B {\left(}\varphi, o_X (\epsilon) {\right)}\cap {\mathbf{WCM} {\left(}X, \mathbb{K} {\right)}}= \emptyset.$$ \[nadenas\] Sometimes the information given by the number $\epsilon$ is redundant, in that $\epsilon $ is too “big” with respect to the cardinal of $X$. This happens for instance when $X$ is a set of $k$ points, where $k \in \mathbb{N}$ is odd. This is the reason why the definition of $o_X$ (and that of $o'_X$) does not necessarily depend on $\epsilon$. The bounds $1/4$ and $2/9$ for continuous linear functionals {#bound} ============================================================ We start with a lemma that will be broadly used. \[pol-immemoriam\] Let $0 < \epsilon <1/4$. Let $\varphi \in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$ be positive with ${\left\| }\varphi{\right\| }=1$. If $C$ is a Borel subset of $X$, then $$\lambda_{\varphi} (C)\notin \left(\frac{1-\sqrt{1-4\epsilon}}{2},\frac{1+\sqrt{1-4\epsilon}}{2}\right).$$ Suppose, contrary to what we claim, that there is a Borel subset $C$ such that $ \left(1-\sqrt{1-4\epsilon} \right)/2 < \lambda_{\varphi} (C) < \left( 1+\sqrt{1-4\epsilon}\right)/2$. This implies that $\lambda_{\varphi} (C) ( 1 - \lambda_{\varphi} (C)) > \epsilon$ and, consequently, we can find $\delta>0$ with $(\lambda_{\varphi} (C) -\delta)(1- \lambda_{\varphi} (C) -\delta)>\epsilon$. 
By the regularity of the measure, there exist two compact subsets, $K_1$ and $K_2$, such that $K_1\subset C$ and $K_2\subset X\setminus C$ and, furthermore, $\lambda_{\varphi} (K_1) > \lambda_{\varphi} (C)-\delta$ and $\lambda_{\varphi} (K_2) > 1-\lambda_{\varphi} (C)-\delta$. On the other hand, let us choose two disjoint open subsets $U$ and $V$ of $X$ such that $K_1\subset U$ and $K_2\subset V$. By Urysohn’s lemma, we can find two functions $f_1$ and $f_2$ in $C(X)$ such that $0\le f_1 \le 1$, $0\le f_2 \le 1$, $f_1\equiv 1$ on $K_1$, $f_2\equiv 1$ on $K_2$, ${\rm supp} (f_1)\subset U$ and ${\rm supp}(f_2)\subset V$. Clearly, $f_1 f_2 \equiv 0$ and $$\varphi(f_i)=\int_X f_i d\lambda_{\varphi} \ge \lambda_{\varphi}(K_i)$$ for $i = 1, 2$. Besides, ${\left\| }f_1{\right\| }_{\infty}={\left\| }f_2{\right\| }_{\infty}=1$. However, $${\left|}\varphi(f_1) {\right|}{\left|}\varphi(f_2){\right|}\ge (\lambda_{\varphi} (C)-\delta)(1-\lambda_{\varphi} (C)-\delta)>\epsilon,$$ which contradicts the $\epsilon$-disjointness preserving property of $\varphi$, and we are done. If $\varphi \in C(X)'$, then let us define $${\left|}\varphi {\right|}(f):=\int_{X}fd {\left|}\lambda_{\varphi} {\right|}= \int_{X} f \overline{ \frac{ d \lambda_{\varphi}}{d {\left|}\lambda_{\varphi} {\right|}} } d \lambda_{\varphi}$$ for every $f \in C(X)$. \[sintelefarri\] Given $\varphi \in C(X)'$, ${\left|}\varphi {\right|}$ is a positive linear functional on $ C(X)$ with ${\left\| }{\left|}\varphi {\right|}{\right\| }= {\left\| }\varphi {\right\| }$. Moreover, if $\epsilon >0$ and $\varphi \in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$, then ${\left|}\varphi {\right|}\in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$ and $\lambda_{{\left|}\varphi {\right|}} = {\left|}\lambda_{\varphi} {\right|}$. The first part is apparent. As for the second part, using Lusin’s Theorem (see [@Ru p.
55]), we can find a sequence ${\left(}k_n {\right)}$ in $C(X)$ such that $$\lim_{n {\longrightarrow}\infty} \int_X {\left|}k_n - \overline{ \frac{ d \lambda_{\varphi}}{d {\left|}\lambda_{\varphi} {\right|}} } {\right|}d {\left|}\lambda_{\varphi} {\right|}= 0,$$ and ${\left\| }k_n {\right\| }_{\infty} \le 1$ for every $n \in \mathbb{N}$. This implies that, for all $f \in C(X)$, ${\left|}\varphi {\right|}(f) = \lim_{n {\longrightarrow}\infty} \varphi {\left(}f k_n {\right)}$, and we can easily deduce that ${\left|}\varphi {\right|}$ is $\epsilon$-disjointness preserving. It is also clear that $\lambda_{{\left|}\varphi {\right|}} = {\left|}\lambda_{\varphi} {\right|}$. \[L1\] Let $0 < \epsilon <1/4$. Let $\varphi \in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$, ${\left\| }\varphi{\right\| }=1$. Then there exists $x \in X$ with $${\left|}\lambda_{\varphi} (\{x\}) {\right|}\ge \sqrt{1-4\epsilon}.$$ Furthermore, if $0 < \epsilon <2/9$, then there exists a unique $x\in X$ with $${\left|}\lambda_{\varphi} (\{x\}) {\right|}\ge \frac{1+\sqrt{1-4\epsilon}}{2}.$$ Let $0 < \epsilon <1/4$. We prove the result first for positive functionals. Suppose that for every $x \in X$, $\lambda_{\varphi} (\{x\})< \sqrt{1-4\epsilon}$. For each $x \in X$, take an open neighborhood $U(x)$ of $x$ with $\lambda_{\varphi} {\left(}U(x) {\right)}< \sqrt{1-4\epsilon}$. Since $X$ is compact, we can find $x_1, x_2, \ldots, x_n$ in $X$ such that $ X = U (x_1) \cup U(x_2) \cup \cdots \cup U(x_n)$. Let $r_1:= \lambda_{\varphi} (U(x_1))$, $r_2:= \lambda_{\varphi} (U(x_1) \cup U(x_2))$, …, $r_n:= \lambda_{\varphi} (U(x_1) \cup U(x_2) \cup \cdots \cup U(x_n)) $, and suppose without loss of generality that $r_1 <r_2 < \cdots <r_n =1$. By Lemma \[pol-immemoriam\], $r_1 \le {\left(}1-\sqrt{1-4\epsilon} {\right)}/2$, and we can take $i_0 := \max {\left\{} \newcommand{\tr}{\right\}}i : r_i \le {\left(}1-\sqrt{1-4\epsilon}{\right)}/2 \tr$. 
We then see that $r_{i_0+1}$ belongs to ${\left(}{\left(}1-\sqrt{1-4\epsilon} {\right)}/2, {\left(}1+\sqrt{1-4\epsilon} {\right)}/ 2{\right)}$, against Lemma \[pol-immemoriam\]. This proves the first part of the lemma for positive functionals. If $\varphi$ is not positive, then we use Lemma \[sintelefarri\], and from the above paragraph we have that there exists $x \in X$ such that $${\left|}\lambda_{\varphi} (\{x\}) {\right|}= {\left|}\lambda_{\varphi} {\right|}(\{x\})\ge \sqrt{1-4\epsilon}.$$ As for the second part, it follows immediately from Lemma \[pol-immemoriam\] and the fact that ${\left(}1-\sqrt{1-4\epsilon}{\right)}/2 < \sqrt{1-4\epsilon}$ for $0< \epsilon < 2/9$. Finally, if there exist two different points $x_1 , x_2$ such that ${\left|}\lambda_{\varphi} (\{x_i\}) {\right|}\ge {\left(}1+\sqrt{1-4\epsilon}{\right)}/ 2$ ($i =1,2$), then ${\left|}\lambda_{\varphi} {\right|}(\{x_1,x_2\})\ge 1+\sqrt{1-4\epsilon}>1 $, against our assumptions. This completes the proof. \[llata\] Let $0 < \epsilon <1/4$. Let $\varphi \in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$, ${\left\| }\varphi{\right\| }=1$. Then there exists $x \in X$ with $${\left\| }\varphi- \lambda_{\varphi} (\{x\})\delta_{x}{\right\| }\le 1 - \sqrt{1-4\epsilon}.$$ Furthermore, if $0 < \epsilon <2/9$, then there exists a unique $x\in X$ with $${\left\| }\varphi- \lambda_{\varphi} (\{x\})\delta_{x}{\right\| }\le \frac{1-\sqrt{1-4\epsilon}}{2}.$$ It is easy to see that $$\begin{aligned} 1 &=& {\left\| }\varphi {\right\| }\\ &=& {\left\| }\lambda_{\varphi} (\{x\})\delta_{x} {\right\| }+ {\left\| }\varphi- \lambda_{\varphi} (\{x\})\delta_{x}{\right\| }\\ &=& {\left|}\lambda_{\varphi} (\{x\}) {\right|}+ {\left\| }\varphi- \lambda_{\varphi} (\{x\})\delta_{x}{\right\| }, \end{aligned}$$ and the conclusion follows from Lemma \[L1\]. \[L3\] Let $0 < \epsilon <1/4$.
Suppose that $\varphi \in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$ and $$2 \sqrt{\epsilon}<{\left\| }\varphi {\right\| }\le 1.$$ Then there exists $x \in X$ such that $${\left\| }\varphi - \lambda_{\varphi} (\{x\})\delta_{x}{\right\| }\le {\left\| }\varphi {\right\| }-\sqrt{{\left\| }\varphi {\right\| }^2-4\epsilon}.$$ Furthermore, if $0 < \epsilon <2/9$ and $\sqrt{9\epsilon / 2}<{\left\| }\varphi {\right\| }\le 1$, then there exists a unique $x \in X$ such that $${\left\| }\varphi - \lambda_{\varphi} (\{x\})\delta_{x}{\right\| }\le \frac{{\left\| }\varphi {\right\| }-\sqrt{{\left\| }\varphi {\right\| }^2-4\epsilon}}{2}.$$ Let $0 < \epsilon < 1/4$. It is apparent that $\varphi/ {\left\| }\varphi{\right\| }$ has norm $1$ and is $\epsilon/ {\left\| }\varphi {\right\| }^2$-disjointness preserving. Besides $\epsilon/{\left\| }\varphi {\right\| }^2 < \epsilon / \left( 2 \sqrt{\epsilon}\right)^2 = 1/4.$ Hence, by Lemma \[llata\], there exists $x \in X$ with $${\left\| }\frac{\varphi}{{\left\| }\varphi{\right\| }} - \lambda_{\frac{\varphi}{{\left\| }\varphi {\right\| }}} (\{x\})\delta_{x}{\right\| }\le 1-\sqrt{1-\frac{4\epsilon}{{\left\| }\varphi {\right\| }^2}}$$ and we are done. The proof of the second part is similar. \[zapatillas1966\] Let $0 < \epsilon <2/9$. Suppose that $\varphi \in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$ and $ \sqrt{9\epsilon/2}<{\left\| }\varphi {\right\| }\le 1$, and that $x \in X$ is the point given in Corollary \[L3\]. Then $$\sqrt{{\left\| }\varphi {\right\| }^2-4\epsilon} \le \left| \varphi (f) \right|$$ whenever $f \in C(X)$ satisfies ${\left|}f(x){\right|}=1 = {\left\| }f {\right\| }_{\infty}$. Let $f \in C(X)$ be such that ${\left|}f(x){\right|}=1 = {\left\| }f {\right\| }_{\infty}$. 
By Corollary \[L3\], we have that $$\left| \left| \varphi (f) \right| - \left| \lambda_{\varphi} (\{x\}) \right| \right| \le \left|(\varphi- \lambda_{\varphi} (\{x\})\delta_{x}) (f)\right| \le \frac{{\left\| }\varphi{\right\| }-\sqrt{{\left\| }\varphi{\right\| }^2-4\epsilon}}{2}.$$ Hence, by applying Lemma \[L1\] $$\begin{aligned} \left| \varphi (f) \right| &\ge& \left| \lambda_{\varphi} (\{x\}) \right| - \frac{{\left\| }\varphi{\right\| }-\sqrt{{\left\| }\varphi{\right\| }^2-4\epsilon}}{2}\\ &\ge& \frac{{\left\| }\varphi{\right\| }+ \sqrt{{\left\| }\varphi{\right\| }^2-4\epsilon}}{2}- \frac{{\left\| }\varphi{\right\| }-\sqrt{{\left\| }\varphi{\right\| }^2-4\epsilon}}{2}\\ &=& \sqrt{{\left\| }\varphi{\right\| }^2-4\epsilon}.\end{aligned}$$ The sequence $(\omega_n)$ for continuous linear functionals {#ninguno} =========================================================== Recall that we have defined, for each $n \in \mathbb{N}$, $$\omega_n := \frac{n^2 -1}{4 n^2}$$ and $$\mathbb{A}_n: = {\left[}\omega_{2n-1}, \omega_{2n+1} {\right)}.$$ The precise statement of the results in this section depends heavily on the number $n$ such that $\epsilon \in \mathbb{A}_n$, and on the cardinality of $X$. Suppose that $X$ is a finite set of $k$ elements, and that $\varphi \in C(X)'$ has norm $1$. Then it is immediate that there exists a point $x \in X$ with ${\left|}\lambda_{\varphi} (\{x\}) {\right|}\ge 1/k$. We next see that this result can be sharpened when $k$ is even and $\varphi \in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$, and also when $X$ has “many” elements (being finite or infinite). \[ksis\] Let $0 < \epsilon < 1/4$. Suppose that $X$ is a finite set of cardinal $k \in 2 \mathbb{N}$. 
If $\varphi \in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$ and ${\left\| }\varphi {\right\| }=1$, then there exists $x \in X$ such that $${\left|}\lambda_{\varphi} (\{x\}) {\right|}\ge \frac{1 + \sqrt{1-4 \epsilon}}{k}.$$ By Lemma \[sintelefarri\], we can assume without loss of generality that $\varphi$ is positive. Suppose that $k=2m$, $m \in \mathbb{N}$. Notice that there cannot be $m$ different points $x_1, \ldots, x_m \in X$ with $$\lambda_{\varphi} (\{x_i\}) \in {\left(}\frac{1 - \sqrt{1 -4 \epsilon}}{k} , \frac{1 + \sqrt{1 -4 \epsilon}}{k}{\right)}$$ for every $i \in \{1, \ldots, m\}$, because otherwise $$\lambda_{\varphi}(\{x_1, \ldots, x_m \}) \in {\left(}\frac{1 - \sqrt{1 -4 \epsilon}}{2} , \frac{1 + \sqrt{1 -4 \epsilon}}{2}{\right)},$$ against Lemma \[pol-immemoriam\]. This implies that there exist at least $m+1$ points whose measure belongs to $${\left[}0, \frac{1 - \sqrt{1 -4 \epsilon}}{k} {\right]}\cup {\left[}\frac{1 + \sqrt{1 -4 \epsilon}}{k}, 1 {\right]}.$$ Suppose that at least $m$ different points $x_1, \ldots, x_m \in X$ satisfy $\lambda_{\varphi}(\{x_i\}) \le {\left(}1 - \sqrt{1 -4 \epsilon} {\right)}/k$. Then $\lambda_{\varphi}(\{x_1, \ldots, x_m\}) \le {\left(}1 - \sqrt{1 -4 \epsilon} {\right)}/ 2$, and consequently we have that $\lambda_{\varphi}( X \setminus \{x_1, \ldots, x_m\}) \ge {\left(}1 + \sqrt{1 -4 \epsilon} {\right)}/ 2$. Since $X \setminus \{x_1, \ldots, x_m\}$ has $m$ points, this obviously implies that there exists $x \in X \setminus \{x_1, \ldots, x_m\}$ with $\lambda_{\varphi}(\{x\}) \ge {\left(}1 + \sqrt{1 -4 \epsilon} {\right)}/ k$, and we are done. \[martopaz\] Let $0< \epsilon < 1/4$, and let $n \in \mathbb{N}$ be such that $\epsilon \in \mathbb{A}_n $. Suppose that ${\mathrm{card} \hspace{.02in}}X \ge 2n$. 
If $\varphi \in {\epsilon-\mathbf{DP} {\left(}X, \mathbb{K} {\right)}}$ and ${\left\| }\varphi {\right\| }=1$, then there exists $x \in X$ such that $${\left|}\lambda_{\varphi} (\{x\}) {\right|}\ge \frac{1 + \sqrt{1-4 \epsilon}}{2n}.$$ Let $D:= \{x \in X : {\left|}\lambda_{\varphi} (\{x\}) {\right|}>0\}$. It is clear that $D$ is a countable set, and by Lemma \[L1\] it is nonempty. Let $\mathbb{M} := \{1, \ldots, m\}$ if the cardinal of $D$ is $m \in \mathbb{N}$, and let $\mathbb{M} := \mathbb{N}$ otherwise. It is obvious that we may assume that $D= \{x_i : i \in \mathbb{M}\}$ and that ${\left|}\lambda_{\varphi} (\{x_{i+1} \}) {\right|}\le {\left|}\lambda_{\varphi} (\{x_{i} \}) {\right|}$ for every $i$. Next let $$\mathbb{J} := \left\{ j \in \mathbb{M}: \sum_{i=1}^j {\left|}\lambda_{\varphi} (\{x_{i} \}) {\right|}< \frac{1}{2} \right\}$$ and $$R:= \sum_{i \in \mathbb{J}} {\left|}\lambda_{\varphi} (\{x_{i} \}) {\right|}.$$ We have that $R \le 1/2$, and by Lemma \[pol-immemoriam\] applied to the functional associated to ${\left|}\lambda_{\varphi} {\right|}$, we get $R <1 /2$. Take any open subset $U$ of $X$ containing all $x_i$, $i \in \mathbb{J}$, such that ${\left|}\lambda_{\varphi} {\right|}(U ) < 1/2$; by Lemma \[pol-immemoriam\], this forces ${\left|}\lambda_{\varphi} {\right|}(U ) \le {\left(}1 - \sqrt{1-4 \epsilon} {\right)}/2$. Suppose that ${\left|}\lambda_{\varphi} (\{x\}) {\right|}< \sqrt{1-4 \epsilon} $ for every $x \notin U$. Then there exist open sets $U_1, \ldots, U_l$ in $X$, $l \in \mathbb{N}$, such that $X = U \cup U_1 \cup \cdots \cup U_l$ and ${\left|}\lambda_{\varphi} {\right|}(U_i) < \sqrt{1-4 \epsilon} $ for every $i$. If we consider, for $i \in \{1, \ldots, l\}$, $b_i := {\left|}\lambda_{\varphi} {\right|}{\left(}U \cup \bigcup_{j=1}^i U_j {\right)}$, then we see that there must be an index $i_0$ with $$b_{i_0} \in {\left(}\frac{1 - \sqrt{1-4 \epsilon}}{2} , \frac{1 + \sqrt{1-4 \epsilon}}{2} {\right)},$$ which goes against Lemma \[pol-immemoriam\].
We deduce that there exists $j \in \mathbb{M}$, $j \notin \mathbb{J}$, such that ${\left|}\lambda_{\varphi} (\{x_{j} \}) {\right|}\ge \sqrt{1 - 4 \epsilon}$. By the way we have ordered $D$, this implies that ${\left|}\lambda_{\varphi} (\{x_{i} \}) {\right|}\ge \sqrt{1 - 4 \epsilon}$ for every $i \in \mathbb{J}$, and obviously $\mathbb{J}$ must be finite, say $\mathbb{J} = \{1 , \ldots, m_0\}$. Let us see now that $m_0 \le n-1$. Since $\epsilon < \omega_{2n+1}$, we have $\sqrt{1 - 4 \epsilon} > 1 / {\left(}2n+1 {\right)}$, which implies that $$n \sqrt{1- 4 \epsilon} > \frac{1 - \sqrt{1- 4 \epsilon}}{2}.$$ Consequently, if $m_0 \ge n $, then we get $$\begin{aligned} R &=& \sum_{i=1}^{m_0} {\left|}\lambda_{\varphi} (\{x_{i} \}) {\right|}\\ &\ge& n \sqrt{1-4 \epsilon} \\ &>& \frac{1 - \sqrt{1- 4 \epsilon}}{2},\end{aligned}$$ which is impossible, as we said above. We conclude that $m_0 \le n-1 $. On the other hand, taking into account that, by Lemma \[pol-immemoriam\], $$\sum_{i =1}^{m_0 +1} {\left|}\lambda_{\varphi} (\{x_{i} \}) {\right|}\ge \frac{1 + \sqrt{1 - 4 \epsilon}}{2},$$ we have that $$(m_0 +1) {\left|}\lambda_{\varphi} (\{x_{1} \}) {\right|}\ge \frac{1 + \sqrt{1 - 4 \epsilon}}{2},$$ which implies that $$n {\left|}\lambda_{\varphi} (\{x_{1} \}) {\right|}\ge \frac{1 + \sqrt{1 - 4 \epsilon}}{2}.$$ As a consequence we get $${\left|}\lambda_{\varphi } ( \{x_1\}) {\right|}\ge \frac{1 + \sqrt{1-4 \epsilon}}{2n},$$ and we are done. We are now ready to prove Theorem \[martakno\]. Let us show the first part. By Propositions \[ksis\] (see also the comment before it) and \[martopaz\], there exists $x \in X$ with ${\left|}\lambda_{\varphi} (\{x\}) {\right|}\ge 1- o_X (\epsilon)$. If we define $\psi := \lambda_{\varphi} (\{x\}) \delta_x$, then we are done. Let us now prove the second part. Suppose that $\epsilon$ belongs to $\mathbb{A}_n $, $n \in \mathbb{N}$. It is clear that this fact implies that $(2 n -1) \sqrt{1- 4 \epsilon} \le 1$.
If ${\mathrm{card} \hspace{.02in}}X \ge 2 n$, then we can pick $2n$ distinct points $x_1, x_2, \ldots, x_{2n}$ in $X$, and define the map $\varphi \in C(X)'$ as $$\varphi := \frac{1 + \sqrt{1-4 \epsilon}}{2n} {\left(}\sum_{i=1}^{2n-1} \delta_{x_i} {\right)}+ \frac{1 - (2n -1) \sqrt{1-4 \epsilon}}{2n} \hspace{.03in} \delta_{x_{2n}} .$$ It is easy to see that $\varphi$ satisfies all the requirements. To study the cases when ${\mathrm{card} \hspace{.02in}}X < 2n$, put $X := \{x_1, \ldots, x_k\}$. Suppose first that $k$ is even. Since $(2n -1) \sqrt{1- 4 \epsilon} \le 1$, we have $(k-1) \sqrt{1- 4 \epsilon} < 1$. We can easily see that if we define the map $\varphi$ as $$\varphi := \frac{1 + \sqrt{1- 4 \epsilon}}{k} {\left(}\sum_{i=1}^{k-1} \delta_{x_i} {\right)}+ \frac{1 -(k-1) \sqrt{1- 4 \epsilon}}{k} \hspace{.03in} \delta_{x_k} ,$$ then we are done. Suppose finally that $k$ is odd. It is clear that if we define $$\varphi := \frac{1}{k} {\left(}\sum_{i=1}^{k} \delta_{x_i} {\right)},$$ then $\varphi$ is a norm one element of $C(X)'$, and is $\omega_k$-disjointness preserving, which implies that it is $\epsilon$-disjointness preserving. It is also easy to see that ${\left\| }\varphi - \psi {\right\| }\ge 1- 1/k$ for every weighted evaluation functional $\psi$ on $C(X)$. How close. The general case: Proofs {#paulus-15nov07} =================================== Let $0 < \epsilon < 2/9$, and let $T: C(X) \longrightarrow C(Y)$ be a norm one $\epsilon$-disjointness preserving operator. If we take any $y\in Y_{\sqrt{9\epsilon/2}}$, then $T_{y} / {\left\| }T_{y}{\right\| }$ is a norm one $\epsilon/{\left\| }T_y{\right\| }^2$-disjointness preserving operator with $$\frac{\epsilon}{{\left\| }T_y{\right\| }^2}<\frac{\epsilon}{\frac{9\epsilon}{2}} =\frac{2}{9}.$$ By Lemma \[L1\], there exists a unique $x_y\in X$ such that ${\left|}\lambda_{T_y}(\{x_y\}) {\right|}> {\left\| }T_y{\right\| }/2$. 
Thus, we can define a map $h_T: Y_{\sqrt{9\epsilon/2}} \longrightarrow X$, in such a way that ${\left|}\lambda_{T_y} (\{h_T (y) \}) {\right|}> {\left\| }T_y {\right\| }/2$ for each $y \in Y_{\sqrt{9\epsilon/2}}$. These facts can be summarized in the following lemma. \[nak\] Let $0 < \epsilon < 2/9$, and let $T \in {\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ with ${\left\| }T{\right\| }=1$. If $y \in Y_{\sqrt{9\epsilon/2}}$, then $${\left|}\lambda_{T_{y}} {\right|}(\{h_T (y)\}) \ge \frac{{\left\| }T_{y}{\right\| }+ \sqrt{{\left\| }T_{y}{\right\| }^2-4\epsilon}}{2}.$$ \[lg\] Let $0 < \epsilon < 2/9$, and let $T \in {\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ with ${\left\| }T{\right\| }=1$. Then the map $h_T$ is continuous. We will check the continuity of this map at every point. To this end, fix $y_0\in Y_{\sqrt{9\epsilon/2}} $ and let $U(h_T (y_0))$ be an open neighborhood of $h_T (y_0)$. We have to find an open neighborhood $V(y_0)$ of $y_0$ such that $h_T (V(y_0))\subset U(h_T (y_0))$. By regularity, there exists an open neighborhood $U'(h_T (y_0))\subset U(h_T (y_0))$ of $h_T (y_0)$ such that $${\left|}\lambda_{T_{y_0}} {\right|}(U'(h_T (y_0)))- {\left|}\lambda_{T_{y_0}} {\right|}(\{h_T (y_0)\})< \frac{\sqrt{{\left\| }T_{y_0}{\right\| }^2-4\epsilon}}{2}.$$ Let $f_0\in C(X)$ with $0\le f_0\le 1$, $f_0(h_T (y_0))=1$, and ${\rm supp}(f_0)\subset U'(h_T (y_0))$. We will now check that ${\left|}(Tf_0)(y_0) {\right|}> \sqrt{\epsilon}$.
To this end, we proceed as follows: $$\begin{aligned} {\left|}(Tf_0)(y_0) {\right|}&=& {\left|}\int_X f_0 d \lambda_{T_{y_0}} {\right|}\\ &=&{\left|}\int_{\{h_T(y_0)\}}f_0d\lambda_{T_{y_0}}+\int_{U'(h_T(y_0))\setminus \{h_T(y_0)\}}f_0d\lambda_{T_{y_0}} {\right|}\\ &\ge& {\left|}\int_{\{h_T(y_0)\}}f_0d\lambda_{T_{y_0}} {\right|}- \int_{U'(h_T(y_0))\setminus \{h_T(y_0)\}}f_0 d {\left|}\lambda_{T_{y_0}} {\right|}\\ &\ge& {\left|}f_0(h_T(y_0)) {\right|}{\left|}\lambda_{T_{y_0}}(\{h_T(y_0)\}) {\right|}- {\left|}\lambda_{T_{y_0}} {\right|}(U' (h_T(y_0))\setminus \{h_T(y_0)\}) \\ &>& \frac{{\left\| }T_{y_0}{\right\| }+\sqrt{{\left\| }T_{y_0}{\right\| }^2-4\epsilon}}{2}- \frac{\sqrt{{\left\| }T_{y_0}{\right\| }^2-4\epsilon}}{2},\end{aligned}$$ and as a consequence, we see that $${\left|}(Tf_0)(y_0) {\right|}> \frac{{\left\| }T_{y_0}{\right\| }}{2}> \sqrt{9\epsilon/8} >\sqrt{\epsilon},$$ as was to be checked. Let us now define $$V(y_0):=\left\{y\in Y: {\left|}(Tf_0)(y) {\right|}>\sqrt{\epsilon}\right\}\cap Y_{\sqrt{9\epsilon/2}}.$$ We will check that, if $y_1\in V(y_0)$, then $h_T(y_1)\in {\rm supp}(f_0)$. Assume, contrary to what we claim, that $h_T(y_1)\notin {\rm supp}(f_0)$. Then there exist an open set $U'(h_T(y_1))$ and a function $f_1\in C(X)$ such that ${\rm supp}(f_1)\cap {\rm supp}(f_0)=\emptyset$, $0\le f_1\le 1$, $f_1(h_T(y_1))=1$ and ${\rm supp}(f_1)\subset U'(h_T(y_1))$ with $${\left|}\lambda_{T_{y_1}} {\right|}(U'(h_T(y_1)))- {\left|}\lambda_{T_{y_1}} {\right|}(\{h_T(y_1)\}) < \frac{\sqrt{{\left\| }T_{y_1}{\right\| }^2-4\epsilon}}{2}.$$ As above, $${\left|}(Tf_1)(y_1) {\right|}> \sqrt{\epsilon}.$$ Hence, $${\left\| }(Tf_1) ( Tf_0) {\right\| }_{\infty} \ge {\left|}(Tf_1)(y_1) {\right|}{\left|}(Tf_0)(y_1) {\right|}> \epsilon,$$ which contradicts the $\epsilon$-disjointness preserving property of $T$. Summing up, $h_T$ is continuous. \[cv\] Let $0 < \epsilon < 2/9$, and let $T \in {\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ with ${\left\| }T{\right\| }=1$.
If $t \in [0,1]$ and $y \in Y_{\sqrt{9\epsilon/2}}$, then $${\left|}(Tf) (y) - t (T{\bf 1}) (y) f(h_T(y)) {\right|}\le {\left\| }T_y {\right\| }- t \sqrt{{\left\| }T_y{\right\| }^2-4\epsilon}$$ for every $f \in C(X)$ with ${\left\| }f {\right\| }_{\infty} \le 1$. Let $A_y:= \lambda_{T_y} (\{h_T(y)\})\delta_{h_T(y)}$. It is easy to check that, since $(\lambda_{T_y} -\lambda_{A_y}) (\{h_T(y) \}) =0$, we have ${\left\| }T_y{\right\| }= {\left\| }T_y -A_y {\right\| }+ {\left\| }A_y {\right\| }$. Furthermore, since by Lemma \[nak\] $${\left\| }A_y {\right\| }\ge \frac{{\left\| }T_y{\right\| }+\sqrt{{\left\| }T_y{\right\| }^2-4\epsilon}}{2},$$ we deduce $$- \sqrt{{\left\| }T_y{\right\| }^2-4\epsilon} \ge {\left\| }T_y{\right\| }- 2 {\left\| }A_y {\right\| }.$$ As a consequence, for $f\in C(X)$ with ${\left\| }f{\right\| }_{\infty} \le 1$, we have $$\begin{aligned} {\left|}(Tf)(y)- t (T{\bf 1}) (y) f(h_T(y)) {\right|}&\le& {\left|}T_y f- A_y f {\right|}+ {\left|}A_y f - t A_y f {\right|}+ \\& & \hspace{0.1in} + t {\left|}A_y \widehat{f(h_T(y))} - T_y \widehat{f(h_T(y))} {\right|}\\&\le& {\left\| }T_y -A_y{\right\| }+ (1 - t) {\left\| }A_y {\right\| }+ t {\left\| }T_y -A_y{\right\| }\\ &=& (1 + t) ({\left\| }T_y{\right\| }- {\left\| }A_y{\right\| }) + (1- t ) {\left\| }A_y{\right\| }\\ &=& {\left\| }T_y{\right\| }+ t ({\left\| }T_y {\right\| }- 2 {\left\| }A_y {\right\| }) \\&\le& {\left\| }T_y {\right\| }- t \sqrt{{\left\| }T_y{\right\| }^2-4\epsilon},\end{aligned}$$ and we are done. \[pollo\] Let $0 < \epsilon < 1/4$. The function $\gamma : [2 \sqrt{\epsilon}, 1] \longrightarrow \mathbb{R}$, defined as $\gamma (t) := t - \sqrt{t^2 -4 \epsilon}$, is strictly decreasing and bounded above by $2 \sqrt{\epsilon}$. We fix $\delta_0 \in {\left(}0, \epsilon {\left(}1- \sqrt{17 \epsilon /2}{\right)}{\right)}$ and, for each $n \in \mathbb{N} $, set $\delta_n := \delta_0 2^{-n}$.
Also we define $D_n := Y_{\sqrt{17 \epsilon /2}+\delta_n}$ for $n \in \mathbb{N} \cup \{0\}$, which is nonempty for every $n$. We easily deduce from Corollary \[zapatillas1966\] that ${\left|}(T{\bf 1}) (y) {\right|}\ge \sqrt{9 \epsilon/2} + \delta_n$ for every $n \in \mathbb{N}$ and $y \in {\mathrm{cl} {\hspace{.02in}}}D_n $. Consequently each ${\mathrm{cl} {\hspace{.02in}}}D_n $ is contained in $Y_{\sqrt{9\epsilon/2}}$, so there exists a function $\alpha_n \in C(Y)$ such that $0\le \alpha_n \le 1$, $$\alpha_n \left({\mathrm{cl} {\hspace{.02in}}}D_n \right)\equiv 1$$ and $${\rm supp}(\alpha_n)\subset Y_{\sqrt{\frac{9\epsilon}{2}}}.$$ Let us define $\alpha : Y {\longrightarrow}\mathbb{K}$ as $$\alpha := \sum_{n=0}^{\infty} \frac{\alpha_n}{2^{n+1}}.$$It is clear that $\alpha$ is continuous, ${\left\| }\alpha {\right\| }_{\infty} =1$, $c(\alpha)\subset Y_{\sqrt{9\epsilon/2}}$, $\alpha \left( D_0 \right) \equiv 1$, and $\alpha \ge 1/2^n$ on $D_n$ for each $n \in \mathbb{N}$. Finally define a weighted composition map $S$ as $$(S f)(y):=\alpha(y) (T {\bf 1})(y)f(h_T(y))$$ for all $f\in C(X)$ and $y\in Y$. We will now check that $${\left\| }T- S{\right\| }\le \sqrt{\frac{17 \epsilon}{2}}.$$ Fix any $f\in C(X)$ with ${\left\| }f{\right\| }_{\infty}=1$. Let us first study the case of $y \in Y$ satisfying ${\left\| }T_y{\right\| }\le \sqrt{9\epsilon/2}$. Since $c (\alpha)\subset Y_{\sqrt{9\epsilon/2}}$, in this case we have $${\left|}(Tf)(y)-( S f)(y) {\right|}= {\left|}(Tf)(y) {\right|}\le \sqrt{\frac{9\epsilon}{2}} .$$ Next, consider the remaining case $\sqrt{9\epsilon/2}< {\left\| }T_y{\right\| }\le 1$. By Lemma \[cv\] we know that $${\left|}(Tf)(y)-(Sf)(y) {\right|}\le{\left\| }T_y {\right\| }- \alpha(y) \sqrt{{\left\| }T_y{\right\| }^2-4\epsilon}$$ for every $y \in Y_{\sqrt{9\epsilon/2}}$. We immediately deduce that ${\left|}(Tf)(y)-(Sf)(y) {\right|}\le \sqrt{17 \epsilon /2}$ for every $y$ with $\sqrt{9\epsilon/2}< {\left\| }T_y{\right\| }\le \sqrt{17 \epsilon /2}$. 
On the other hand, for $y\in D_0$, we have $\alpha (y) =1$, so $${\left|}(Tf)(y)-(Sf)(y) {\right|}\le {\left\| }T_y{\right\| }-\sqrt{{\left\| }T_y{\right\| }^2-4\epsilon} \le 2\sqrt{\epsilon}$$ by Lemma \[pollo\]. Finally, if $\sqrt{17 \epsilon /2} < {\left\| }T_y{\right\| }\le \sqrt{17 \epsilon /2} + \delta_0$, then there exists $n \in \mathbb{N}$ such that $y \in D_n \setminus D_{n-1}$, that is, $\sqrt{17 \epsilon /2} + \delta_{n} < {\left\| }T_y{\right\| }\le \sqrt{17 \epsilon /2} + \delta_{n-1}$. Let us see that $$\label{eqf} \delta_{n-1} \le \alpha(y) \sqrt{{\left\| }T_y{\right\| }^2-4\epsilon}.$$ Clearly, since we have chosen $\delta_0 < \sqrt{\epsilon}$, we know that $$2 \delta_0 < \sqrt{{\left\| }T_y{\right\| }^2-4\epsilon}.$$ Also, by the definition of $\alpha$, we have $\alpha (y) \ge 1/2^n$, and the inequality \[eqf\] follows. In this way we get $$\begin{aligned} {\left|}(Tf)(y)-(Sf)(y) {\right|}&\le& {\left\| }T_y {\right\| }- \alpha(y) \sqrt{{\left\| }T_y{\right\| }^2-4\epsilon} \\ &\le& \sqrt{\frac{17 \epsilon}{2}} + \delta_{n-1} - \alpha(y) \sqrt{{\left\| }T_y{\right\| }^2-4\epsilon} \\ &\le& \sqrt{\frac{17 \epsilon}{2}}.\end{aligned}$$ We conclude that ${\left\| }T- S {\right\| }\le \sqrt{17 \epsilon /2}$, as was to be proved.

How close. The general case: Examples {#pexample}
=====================================

In this section we first provide a sequence of examples of $2/9$-disjointness preserving operators of norm $1$, and then give a related family of $2/17$-disjointness preserving operators. This will lead to an example of a norm one $2/17$-disjointness preserving operator whose distance to every weighted composition map is at least $1$. We use this to get an example, for each $\epsilon \in (0, 2/17)$, of an element of ${\epsilon-\mathbf{DP} {\left(}X, Y {\right)}}$ whose distance to ${\mathbf{WCM} {\left(}X,Y {\right)}}$ is at least $\sqrt{17 \epsilon/2}$. This shows that the bound given in Theorem \[rz\] is sharp.
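Although no numerical computation enters the proofs, the behaviour of the function $\gamma$ of Lemma \[pollo\] and the comparison between the constants $2\sqrt{\epsilon}$ and $\sqrt{17\epsilon/2}$ used above are easy to confirm with a short sketch. The following fragment is an illustration only, not part of any argument; the value $\epsilon = 0.1$ is an arbitrary choice in $(0, 2/9)$.

```python
import math

def gamma(t, eps):
    """gamma(t) = t - sqrt(t^2 - 4*eps), defined for t >= 2*sqrt(eps)."""
    return t - math.sqrt(t * t - 4 * eps)

eps = 0.1  # arbitrary value in (0, 2/9)
left = 2 * math.sqrt(eps)
ts = [left + k * (1 - left) / 1000 for k in range(1001)]
values = [gamma(t, eps) for t in ts]

# gamma is strictly decreasing on [2*sqrt(eps), 1] ...
assert all(values[k] > values[k + 1] for k in range(1000))
# ... with maximum 2*sqrt(eps), attained at the left endpoint,
assert abs(values[0] - left) < 1e-12
# and 2*sqrt(eps) <= sqrt(17*eps/2), the constant appearing in Theorem [rz]
assert left < math.sqrt(17 * eps / 2)
```

In particular, the case distinction in the proof above reflects the fact that $\gamma({\left\| }T_y {\right\| })$ is largest when ${\left\| }T_y {\right\| }$ is close to $2\sqrt{\epsilon}$.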
All the sets involved in these examples are contained in $\mathbb{R}^3$. \[deruhi-abiri\] [*A special sequence ${\left(}R_n {\right)}$ of $2/9$-disjointness preserving operators of norm $1$.*]{} For any two points $A, B$ in $\mathbb{R}^3$, let $\overline{AB}$ denote the segment joining them. Also, given four points $A,B,C,D \in \mathbb{R}^3$, let $\vee{ABCD}$ denote the union of the segments $\overline{AD}$, $\overline{BD}$ and $\overline{CD}$. We will need some (closed) semilines, all contained in the plane $z=0$ and starting at the point $(0,0,0)$. First, $l_A$ will be the semiline $x \ge 0$, $y=0$. We now take two other semilines, namely $$\begin{aligned} l_B &:=& \mathbf{rot} {\left(}l_A, \frac{2 \pi}{3} {\right)}\\ l_C &:=& \mathbf{rot} {\left(}l_B, \frac{2 \pi}{3} {\right)},\end{aligned}$$ where for $G \subset \mathbb{R}^3$ and $\theta \in [0, 2 \pi)$, $\mathbf{rot} (G, \theta)$ denotes the set obtained by rotating $G$ counterclockwise by an angle $\theta$ around the $z$ axis. Next consider the circles $S_1, S_1'$ and $S_1''$ centered at $(0,0,0)$ with radius $1, 2$, and $3$, respectively, and contained in the plane $z=0$. For $E \in \{A, B, C\}$, we denote by $E_0, E_0'$ and $E_0''$ the points in the intersection of $l_E$ with $S_1, S_1'$ and $S_1''$, respectively. Fix $\theta_0 := \pi/6$. For each $n \in \mathbb{N}$ and $E=A, B, C$, we now define a new semiline as $$m^E_n := \mathbf{rot} {\left(}l_E ,2 \pi - \frac{\theta_0}{n} {\right)}.$$ We will use each of these semilines to obtain two new points, $E_n$ and $E_n'$, as the intersection of $m^E_n$ with $S_1$ and $S_1'$, respectively. That is, if for $\mathbf{x} \in \mathbb{R}^3$ and $\theta \in [0,2 \pi)$, we write $\mathbf{rot} ( \mathbf{x}, \theta)$ to mean the point in $\mathbf{rot} ( {\left\{} \newcommand{\tr}{\right\}}\mathbf{x} \tr, \theta)$, then $E_n := \mathbf{rot} ( E_0, 2 \pi - \theta_0 /n)$ and $E_n' := \mathbf{rot} ( E_0', 2 \pi - \theta_0 /n),$ for $E=A, B, C$ and $n \in \mathbb{N}$.
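The points just introduced are entirely concrete, so the construction can be reproduced numerically. The sketch below is an illustration only; the coordinates $A_0 = (1,0,0)$ and the rotation matrix follow from the definitions above. It checks that each $A_n$ lies on $S_1$ and that $A_n$ approaches $A_0$ as $n$ grows.

```python
import math

theta0 = math.pi / 6

def rot_z(p, theta):
    """Rotate p = (x, y, z) counterclockwise by theta around the z axis."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

# A_0 is the intersection of l_A with S_1, i.e. (1, 0, 0);
# B_0 and C_0 are its rotations by 2*pi/3
A0 = (1.0, 0.0, 0.0)
B0 = rot_z(A0, 2 * math.pi / 3)
C0 = rot_z(B0, 2 * math.pi / 3)

def A_n(n):
    """The point A_n = rot(A_0, 2*pi - theta0/n)."""
    return rot_z(A0, 2 * math.pi - theta0 / n)

# A_0, B_0, C_0 form an equilateral triangle on S_1
assert abs(math.dist(A0, B0) - math.dist(B0, C0)) < 1e-12
# every A_n stays on the unit circle S_1 in the plane z = 0 ...
for n in (1, 2, 10, 100):
    x, y, z = A_n(n)
    assert abs(math.hypot(x, y) - 1.0) < 1e-12 and z == 0.0
# ... and approaches A_0 as n grows
assert math.dist(A_n(100), A0) < math.dist(A_n(10), A0) < math.dist(A_n(1), A0)
```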
We put $D_0 := (0,0,0)$, and introduce two special points $D_1^n := (0,0,1/n^3)$, $D_2^n := (0,0,2/n^3)$ for each $n \in \mathbb{N}$. We also denote $D_0^n := D_0$ for all $n \in \mathbb{N}$. Given $ n \in \mathbb{N}$, we start by considering the sets $W_n^0 := \vee{A_n B_0 C_0 D_0}$, $W_n^1 := \vee{A_0 B_n C_0 D^n_1}$, and $W_n^2 := \vee{A_0 B_0 C_n D^n_2}$ (for the case $n=1$, see Figures \[w0\], \[w1\], and \[w2\], respectively). ![The set $W_1^0$.[]{data-label="w0"}](w01c) ![The set $W_1^1$[]{data-label="w1"}](w11d) ![The set $W_1^2$[]{data-label="w2"}](w21d) By the choice of these points, we see that for every $n$, the intersection of two different $W_n^i$ consists of one of the points $A_0, B_0, C_0$. In the same way, for $E=A, B, C$, each $E_0$ belongs to exactly two of the sets $W_n^i$, and each $E_n$ belongs to just one of them. Call $Z_n := W_n^0 \cup W_n^1 \cup W_n^2$ (see Figure \[z1\] for the case $n=1$). ![The set $Z_1$[]{data-label="z1"}](z1d) Consider next a new set $W_0 := \vee{A_0 B_0 C_0 D_0}$, and (see Figure \[x0\])$$X_0 := W_0 \cup {\left(}\bigcup_{E=A,B,C} \overline{E_0 E''_0} {\right)}.$$ ![The set $X_0$[]{data-label="x0"}](ejemploy0d) Also, for each $n \in \mathbb{N}$, define $$X_n:= Z_n \cup {\left(}\bigcup_{E=A,B,C} \overline{E_0 E''_0} {\right)}\cup {\left(}\bigcup_{E=A,B,C} \overline{E_n E'_n} {\right)}$$(see Figures \[x1\] and \[x1ab\] for the case $n=1$). ![The set $X_1$[]{data-label="x1"}](total2i) ![Projection of $X_1$ on the plane $z=0$[]{data-label="x1ab"}](total2gabove) Our next step consists of introducing a linear and continuous operator $R_n: C(X_n) {\longrightarrow}C(X_0)$ for every $n \in \mathbb{N}$. Given any point $\mathbf{y} \in X_0$ and $f \in C(X_n)$, the definition of $R_n f$ at $\mathbf{y}$ will depend on whether or not $\mathbf{y} \in W_0$. To this end, for each $i=0, 1,2$ and $n \in \mathbb{N}$, we will define a map $h_n^i :W_0 {\longrightarrow}W_n^i$.
Notice that, given $\mathbf{y} \in W_0$, there exist $E \in \{A, B, C\}$ and $t \in [0,1]$ such that $\mathbf{y} = t E_0$. For such $\mathbf{y}$, we put $h_n^i ( \mathbf{y}) := D_i^n + t {\left(}E_{\mathbf{y}} - D_i^n {\right)}$, where $E_{\mathbf{y}} = E_0$ or $E_n$ (depending on whether $E_0 \in W_n^i$ or $E_n \in W_n^i$). It is easy to see that $h_n^i$ is indeed a surjective homeomorphism. Suppose that $f \in C(X_n)$, and that $\mathbf{y} \in W_0$. Then we define $$(R_n f) (\mathbf{y}) := \frac{ f {\left(}h_n^0 (\mathbf{y}) {\right)}+ f {\left(}h_n^1 (\mathbf{y}) {\right)}+ f {\left(}h_n^2 (\mathbf{y}) {\right)}}{3}.$$ The definition of $R_n f$ at points of $X_0 \setminus W_0$ will be given in a different way. To do so, we fix a continuous map $\zeta : [0,1] {\longrightarrow}[2/3,1]$ such that $\zeta (0) =2/3$ and $\zeta (1) = 1$. For $E=A,B,C$, given $\mathbf{y} \in \overline{E_0 E'_0}$, there exists $t \in [0,1]$ such that $\mathbf{y} = E_0 + t {\left(}E_0' - E_0 {\right)}$. For such a point, we set $$\begin{aligned} (R_n f) (\mathbf{y}) &:=& \zeta(t) f (\mathbf{y}) + ( 1 - \zeta (t) ) f {\left(}E_n + t {\left(}E_n' - E_n {\right)}{\right)}. \end{aligned}$$ Finally, if $\mathbf{y}\in \overline{E'_0 E''_0}$, then define $$(R_n f ) (\mathbf{y}) := f(\mathbf{y}).$$ In particular we see that for every $f \in C (X_n)$ and $E=A, B, C$, $(R_n f) (E_0) = 2 f{\left(}E_0 {\right)}/3 + f{\left(}E_n {\right)}/3$, $(R_n f) (E'_0) = f{\left(}E'_0 {\right)}$, and $(R_n f) (E''_0) = f{\left(}E''_0 {\right)}$. On the other hand, it is easy to check that $R_n$ is linear and continuous, with ${\left\| }R_n {\right\| }= 1$, and that it is $2/9$-disjointness preserving. \[bebel\] [*A special sequence ${\left(}T_n {\right)}$ of $2/17$-disjointness preserving operators of norm $1$.*]{} We follow the same notation as in Example \[deruhi-abiri\].
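Before continuing, it is worth seeing where the constant $2/9$ of Example \[deruhi-abiri\] comes from. On $W_0$, the value $(R_n f)(\mathbf{y})$ is an average of $f$ over three points; if $f$ and $g$ have disjoint cozero sets and norm at most $1$, each of those three points can contribute to at most one of the two averages, so $|(R_n f)(\mathbf{y})(R_n g)(\mathbf{y})| \le \max_{0 \le l \le 3} l(3-l)/9 = 2/9$ (the remaining pieces of the definition of $R_n$ are handled similarly). The sketch below is a numerical illustration of this combinatorial maximum; the helper `worst_product` is ours, not from the text.

```python
from fractions import Fraction

def worst_product(n):
    """Max over l of (l/n) * ((n-l)/n): the worst product of two averages
    over n points when each point feeds at most one of the two averages."""
    return max(Fraction(l * (n - l), n * n) for l in range(n + 1))

# three points (the case of R_n on W_0) give the constant 2/9
assert worst_product(3) == Fraction(2, 9)
# for odd n the maximum is (n-1)(n+1)/(4 n^2), the quantity that
# reappears when X is a finite set with n points
for n in (3, 5, 7, 9):
    assert worst_product(n) == Fraction((n - 1) * (n + 1), 4 * n * n)
```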
To construct the new examples, take a continuous map $$\rho : X_0 {\longrightarrow}{\left[}\frac{3}{\sqrt{17}} , 1 {\right]}$$ such that $\rho {\left(}W_0 \cup \bigcup_{E=A, B, C} \overline{E_0 E'_0} {\right)}\equiv 3/\sqrt{17}$ and $\rho {\left(}{\left\{} \newcommand{\tr}{\right\}}A''_0 , B''_0, C''_0 \tr {\right)}\equiv 1$. For each $n \in \mathbb{N}$ define $T_n : C(X_n ) {\longrightarrow}C(X_0)$ as $T_n f := \rho \cdot R_n f$ for every $f \in C(X_n)$. Taking into account that $R_n$ is $2/9$-disjointness preserving, it is straightforward to see that each $T_n$ is $2/17$-disjointness preserving. On the other hand, it is immediate that it has norm $1$. Unfortunately, it is possible to construct a weighted composition map $S_n: C(X_n ) {\longrightarrow}C(X_0)$ whose distance to $T_n$ is strictly less than $1$. This can be done as follows. Pick any $\epsilon >0$, and consider $a_n \in C(X_0)$, $0 \le a_n \le 1$, such that $a_n \equiv 1$ on $\rho^{-1} {\left(}{\left[}3/ \sqrt{17} + \epsilon, 1 {\right]}{\right)}$ and $a_n \equiv 0$ on $\rho^{-1} {\left(}{\left\{} \newcommand{\tr}{\right\}}3/ \sqrt{17} \tr {\right)}$. If we define $S_n : C(X_n) {\longrightarrow}C(X_0)$ as $(S_n f) (\mathbf{y}) := a_n (\mathbf{y}) \rho (\mathbf{y}) f (\mathbf{y})$ for every $f \in C(X_n)$ and $\mathbf{y} \in X_0$, then $ (S_n f) (\mathbf{y}) = (T_n f) (\mathbf{y})$ when $\mathbf{y} \in \rho^{-1} {\left(}{\left[}3/ \sqrt{17} + \epsilon, 1 {\right]}{\right)}$. Thus ${\left\| }T_n - S_n {\right\| }\le 3/ \sqrt{17} + \epsilon$. Consequently, constructing an operator having the desired properties is more complicated. It will be done in the next example. \[andalu\] [*A $2/17$-disjointness preserving operator of norm $1$ whose distance to any weighted composition map is at least $1$.*]{} We follow the notation given in Examples \[deruhi-abiri\] and \[bebel\].
Also, for $n \in \mathbb{N}$, we put $\mathbf{w}_n := (0,0, 1/n)$, and define $$\begin{aligned} X &:=& X_0 \cup {\left(}\bigcup_{n=1}^{\infty} \mathbf{w}_n + X_n {\right)}\cup {\left(}\bigcup_{n=1}^{\infty} - \mathbf{w}_n + X_0 {\right)},\\ Y &:=& X_0 \cup {\left(}\bigcup_{n=1}^{\infty} \mathbf{w}_n + X_0 {\right)}\cup {\left(}\bigcup_{n=1}^{\infty} - \mathbf{w}_n + X_0 {\right)}.\end{aligned}$$ Related to the $T_n$, we can introduce in a natural way new norm one $2/17$-disjointness preserving operators $T'_n : C {\left(}\mathbf{w}_n + X_n {\right)}{\longrightarrow}C {\left(}\mathbf{w}_n + X_0 {\right)}$ as follows. First, given $\mathbf{z} \in \mathbb{R}^3$, denote by $\tau_{\mathbf{z}}$ the translation operator sending each $\mathbf{x} \in \mathbb{R}^3$ to $\mathbf{z} + \mathbf{x}$. Then define $P_n : C {\left(}\mathbf{w}_n + X_n {\right)}{\longrightarrow}C {\left(}X_n {\right)}$ as $P_n f := f \circ \tau_{\mathbf{w}_n}$ for every $f \in C {\left(}\mathbf{w}_n + X_n {\right)}$, and $Q_n : C {\left(}X_0 {\right)}{\longrightarrow}C {\left(}\mathbf{w}_n + X_0 {\right)}$ as $Q_n g := g \circ \tau_{- \mathbf{w}_n}$ for every $ g \in C {\left(}X_0 {\right)}$. Finally put $T'_n := Q_n \circ T_n \circ P_n$. Next we give a $2/17$-disjointness preserving linear and continuous operator $T: C(X) {\longrightarrow}C(Y)$ of norm $1$. In the process of definition, as well as in the rest of this example, the restrictions of functions $f \in C(X)$ to subspaces of $X$ will also be denoted by $f$. Take any $f \in C(X)$.
If $\mathbf{y} \in X_0$, then we define $$(Tf) (\mathbf{y}) := \rho (\mathbf{y}) f(\mathbf{y}).$$ Also, for $n \in \mathbb{N}$ and $\mathbf{y} \in X_0$, we put $$(Tf) (-\mathbf{w}_n + \mathbf{y}) := {\left(}\frac{1}{2} + \frac{\rho (\mathbf{y})}{2} {\right)}f(-\mathbf{w}_{2n} + \mathbf{y}) - {\left(}\frac{1}{2} - \frac{\rho (\mathbf{y})}{2} {\right)}f(-\mathbf{w}_{2n -1} + \mathbf{y}),$$ and $$(Tf) ( \mathbf{w}_n + \mathbf{y} ) := {\left(}T'_n f {\right)}{\left(}\mathbf{w}_n + \mathbf{y} {\right)}.$$ It is easy to check that ${\left\| }T {\right\| }=1$, and we can use the fact that $0 \le {\left(}1 - \rho(\mathbf{z})^2 {\right)}/4 \le 2/17$ for all $\mathbf{z}$, and that each $T'_n$ is $2/17$-disjointness preserving to show that $T$ is $2/17$-disjointness preserving. We are going to prove that if $S: C(X) {\longrightarrow}C(Y)$ is a weighted composition map, then ${\left\| }T-S {\right\| }\ge 1$. We suppose that this is not true, so there exist $a \in C(Y)$ and a continuous map $h: c(a) {\longrightarrow}X$ such that $Sf = a \cdot f \circ h$ and ${\left\| }T -S {\right\| }<1$. We will see that this is not possible. \[tregueta\] The following hold:

1. $X_0 \cup {\left(}\bigcup_{n=1}^{\infty} - \mathbf{w}_n + X_0 {\right)}\subset c(a) $,

2. \[nasda\] Given $n \in \mathbb{N}$ and $\mathbf{y} \in X_0$, $h {\left(}- \mathbf{w}_n + \mathbf{y} {\right)}= - \mathbf{w}_{2n} + \mathbf{y} $.

3. \[entier-bebel\] Given $n \in \mathbb{N}$ and $E= A, B, C$, $h {\left(}\mathbf{w}_n + E''_0 {\right)}= \mathbf{w}_n + E''_0$.

4. $h(\mathbf{y}) = \mathbf{y}$ for every $\mathbf{y} \in X_0$.

Suppose first that there exist $n \in \mathbb{N}$ and $\mathbf{y} \in X_0$ with $- \mathbf{w}_n + \mathbf{y} \notin c(a)$. Consider $$f_0:= \xi_{- \mathbf{w}_{2n} + X_0} - \xi_{- \mathbf{w}_{2n-1} + X_0} \in C(X).$$ By definition, it is clear that $(Tf_0) (- \mathbf{w}_n + \mathbf{y}) =1$. Also ${\left\| }f_0 {\right\| }_{\infty} =1$ and $(Sf_0) (- \mathbf{w}_n + \mathbf{y}) =0$.
This gives ${\left\| }Tf_0 - Sf_0 {\right\| }_{\infty} =1$, against our assumptions. On the other hand, exactly the same contradiction is reached if we assume that $h {\left(}- \mathbf{w}_n + \mathbf{y} {\right)}\notin {\left\{} \newcommand{\tr}{\right\}}- \mathbf{w}_{2n-1} + \mathbf{y} , - \mathbf{w}_{2n} + \mathbf{y} \tr$ for some $n \in \mathbb{N}$ and $\mathbf{y} \in X_0$. Namely, we take $g_0 \in C(X)$ with ${\left\| }g_0 {\right\| }_{\infty} =1$, and such that $g_0 {\left(}{\left\{} \newcommand{\tr}{\right\}}- \mathbf{w}_{2n-1} + \mathbf{y} , - \mathbf{w}_{2n} + \mathbf{y} \tr {\right)}\equiv 1$ and $g_0 {\left(}h {\left(}- \mathbf{w}_n + \mathbf{y} {\right)}{\right)}=0$. It is clear that if we take $f_0$ as above, and define $f_1 := f_0 g_0$, then ${\left\| }f_1 {\right\| }_{\infty} =1$ and ${\left\| }Tf_1 - Sf_1 {\right\| }_{\infty} =1$, which is impossible. Of course, working now with $\xi_{- \mathbf{w}_{2n} + X_0}$, we deduce that $h {\left(}- \mathbf{w}_n + E''_0 {\right)}= - \mathbf{w}_{2n} + E''_0 $ for $E=A,B,C$, which implies, since $h {\left(}- \mathbf{w}_n + X_0 {\right)}$ is connected, that $h {\left(}- \mathbf{w}_n + \mathbf{y} {\right)}= - \mathbf{w}_{2n} + \mathbf{y} $ for every $\mathbf{y} \in X_0$ and $n \in \mathbb{N}$. This proves (\[nasda\]). The proof of (\[entier-bebel\]) is similar. Suppose next that there is a point $\mathbf{y} \in X_0$ with $\mathbf{y} \notin c(a)$. Fix any $\delta >0$. Then there exists a neighborhood $U$ of $\mathbf{y}$ such that ${\left|}a (\mathbf{z}) {\right|}< \delta$ for every $\mathbf{z} \in U$. In particular we can select $\mathbf{z} \in U \cap {\left(}- \mathbf{w}_n + X_0 {\right)}$ for some $n \in \mathbb{N}$. Take $f_0$ as above, which satisfies $(Tf_0) (\mathbf{z}) =1$. Consequently, $\left| (Sf_0) (\mathbf{z}) \right| \le {\left|}a(\mathbf{z}) {\right|}<\delta$ and ${\left|}(Tf_0) (\mathbf{z}) - (Sf_0) (\mathbf{z}) {\right|}\ge 1- \delta$.
We conclude again that ${\left\| }T - S {\right\| }\ge 1$, against our assumptions. This shows that $X_0 \subset c(a)$. Finally, since $h$ is continuous, we deduce that $h(\mathbf{y}) = \mathbf{y}$ for every $\mathbf{y} \in X_0$. By Claim \[tregueta\], we have that there exists $n_0 \in \mathbb{N}$ such that $\mathbf{w}_n + X_0 \subset c(a)$ for every $n \ge n_0$. Now, since $X_0$ is connected, we deduce in particular that for each $n \ge n_0$, $h(\mathbf{w}_n + X_0) $ is contained in $ \mathbf{w}_{n} + X_{n}$. If we now set $\mathbf{F}:= {\left\{} \newcommand{\tr}{\right\}}(x, y, z) \in \mathbb{R}^3 : x^2 + y^2 <1/2 \tr$, we see that there is an open ball $B(D_0, r)$ of center $D_0$ and radius $r \in (0, 1)$ such that $B(D_0, r) \subset c(a)$ and $h(B(D_0, r)) \subset \mathbf{F}$. Let $n_1 \in \mathbb{N}$, $n_1 \ge n_0$, be such that $B(D_0, r) \cap {\left(}\mathbf{w}_n + X_0 {\right)}\neq \emptyset$ for every $n \ge n_1$. We clearly have that if we fix any $n \ge n_1$, then $$h {\left(}B(D_0, r) \cap {\left(}\mathbf{w}_{n} + X_0 {\right)}{\right)}\subset \mathbf{F} \cap {\left(}\mathbf{w}_{n} + X_{n} {\right)}.$$ On the other hand, $B(D_0, r) \cap {\left(}\mathbf{w}_{n} + X_0 {\right)}$ is connected, and so must be its image by $h$. Since $\mathbf{F} \cap {\left(}\mathbf{w}_{n} + X_{n} {\right)}$ has three connected components, each containing a different point $\mathbf{w}_{n} + D_i^{n}$, $i=0,1,2$, then we have that $h {\left(}B(D_0, r) \cap {\left(}\mathbf{w}_{n} + X_0 {\right)}{\right)}$ contains at most one point $\mathbf{w}_{n} + D_i^{n}$, $i=0,1,2$. \[there-tarde\] The set of integers $n \ge n_1$ satisfying $${\mathrm{card} \hspace{.02in}}{\left\{} \newcommand{\tr}{\right\}}i : \mathbf{w}_{n} + D_i^{n} \in h {\left(}\mathbf{w}_n + X_0 {\right)}, i=0,1,2 \tr \ge 2$$ is finite. Suppose on the contrary that this set is infinite.
By the comments above we deduce that there is an infinite subset $\mathbb{M}$ of $\mathbb{N}$ such that, if $n \in \mathbb{M}$, then there exists $\mathbf{s}_n \in \mathbf{w}_n + X_0 $, $\mathbf{s}_n \notin B(D_0 , r)$, and $h {\left(}\mathbf{s}_n {\right)}\in {\left\{} \newcommand{\tr}{\right\}}\mathbf{w}_{n} + D_i^{n} : i=0,1,2 \tr$. Since $Y$ is compact, there is an accumulation point $\mathbf{s}$ of ${\left\{} \newcommand{\tr}{\right\}}\mathbf{s}_n : n \in \mathbb{M} \tr$ in $X_0$, which necessarily satisfies ${\left\| }\mathbf {s} {\right\| }\ge r$. By continuity we must have $h(\mathbf{s}) =D_0$, and since $\mathbf{s} \in X_0$, then we also have $h (\mathbf{s}) = \mathbf{s}$, which is impossible. To finish, we use Claim \[there-tarde\], and take an integer $n \ge n_1$ such that there is at most one $i \in \{0,1,2\}$ with $\mathbf{w}_{n} + D_i^{n} \in h {\left(}\mathbf{w}_n + X_0 {\right)}$. Suppose for instance that $i \neq 1, 2$ (the other cases are similar). By Claim \[tregueta\](\[entier-bebel\]), $h {\left(}\mathbf{w}_n + E''_0 {\right)}= \mathbf{w}_{n} + E''_0 $ for $E=A,B,C$, so the image by $h$ of the subset $\mathbf{w}_n + {\left(}\overline{D_0 A''_0} \cup \overline{D_0 C''_0} {\right)}$ is a connected subset of $ \mathbf{w}_{n} + {\left(}X_n \setminus {\left\{} \newcommand{\tr}{\right\}}D_1^{n}, D_2^{n} \tr {\right)}$ joining $\mathbf{w}_{n} + A''_0 $ and $\mathbf{w}_{n} + C''_0 $. We easily see that this is impossible. \[gustavo\] [*An example showing that the bound given in Theorem \[rz\] is sharp.*]{} Let $0 < \epsilon < 2/17$. We claim that there exists a norm one $\epsilon$-disjointness preserving operator $T'$ such that ${\left\| }T' - S' {\right\| }\ge \sqrt{17 \epsilon/2}$ for every weighted composition map $S'$. Let $$\gamma := \sqrt{\frac{17 \epsilon}{2}},$$ and let $X$, $Y$, and $T$ be as in the previous example. We need a point not belonging to $X \cup Y$, for instance $(4, 0, 0)$.
Consider the sets $X' := X \cup \{(4, 0, 0)\}$ and $Y' := Y \cup \{(4, 0, 0)\}$, and define a linear map $T' : C(X') \longrightarrow C(Y')$ such that, for every $f\in C(X')$, $(T' f) (4, 0, 0) := f(4, 0, 0)$ and, for all $\mathbf{y} \in Y$, $(T'f) (\mathbf{y}) := \gamma (Tf_r) (\mathbf{y})$, where $f_r$ is the restriction of $f$ to $X$. Since $T$ is $ 2 /17$-disjointness preserving, $T'$ is $ 2\gamma^2 /17 $-disjointness preserving, which is to say $\epsilon$-disjointness preserving. Let $S' : C(X') {\longrightarrow}C(Y')$ be a weighted composition map with associated maps $a' \in C(Y')$ and $h' : Y' {\longrightarrow}X'$ (continuous on $c(a')$). It is easy to see that the set $A:= c(a') \cap h'^{-1} {\left(}X {\right)}$ is closed and open in $c(a')$, and that the restriction to $Y$ of $ a' \xi_A $ (denoted by $a$) belongs to $C(Y)$. Also, if we fix $x_0 \in X$ and define $h :Y {\longrightarrow}X$ as $h(y) := h'(y)$ for every $y \in c(a)$, and $h(y) := x_0$ for $y \in Y \setminus c(a)$, then $h$ is continuous on $c(a)$. Next we consider the weighted composition map $S: C(X) {\longrightarrow}C(Y)$ given as $Sf := a \cdot f \circ h$ for all $f \in C(X)$. By Example \[andalu\] we have that, for every $\delta >0$, there exists $f_{\delta} \in C(X)$ with ${\left\| }f_{\delta} {\right\| }_{\infty} =1$ and ${\left\| }{\left(}\gamma T - S {\right)}{\left(}f_{\delta} {\right)}{\right\| }_{\infty} \ge \gamma- \delta$. It is now apparent that, if $g_{\delta} \in C(X')$ is an extension of $f_{\delta}$ such that $g_{\delta} (4, 0, 0) =0$, then $ {\left\| }{\left(}S' - T' {\right)}{\left(}g_{\delta} {\right)}{\right\| }_{\infty} \ge \gamma- \delta$. Therefore $${\left\| }S' - T' {\right\| }\ge \gamma=\sqrt{\frac{17 \epsilon}{2}}.$$

How far. The case when $X$ is infinite {#reallyfar}
======================================

In this section we consider the case when $X$ is infinite, and prove Theorems \[ex3\] and \[vr\].
The fact that Theorem \[vr\] is not valid for general $Y$ can be seen in Example \[gustavo\], but also in Example \[destranyo-kajero\]. The finite case is special, and we leave it for the next section. For $\delta >0$, let us choose a regular Borel probability measure $\mu$ on $X$ such that $\mu(\{x\})\le \delta / 2$ for every $x\in X$. Next, fix $y_0,y_1$ in $Y$ and $x_0\in X$. After choosing two disjoint neighborhoods, $U(y_0)$ and $U(y_1)$, of $y_0$ and $y_1$, respectively, we define two continuous functions, $\alpha:Y\longrightarrow [0,2 \sqrt{\epsilon}]$ and $\beta:Y\longrightarrow [0,1]$, with the following properties:

- $\alpha(y_0)=2\sqrt{\epsilon}$

- ${\rm supp}(\alpha)\subset U(y_0)$

- $\beta(y_1)=1$

- ${\rm supp}(\beta)\subset U(y_1)$

Next, for each $y\in Y$, we define two continuous linear functionals on $C(X)$ as follows: $$F_y(f)=\beta(y)\delta_{x_0}(f)$$ $$G_y(f)=\alpha(y)\int_{X}fd\mu$$ By using these functionals we can now introduce a linear map $T:C(X)\longrightarrow C(Y)$ such that $(Tf)(y)=F_y(f)+G_y(f)$ for every $f \in C(X)$. Let us first check that ${\left\| }T{\right\| }=1$. To this end, it is apparent that $(T{\bf 1})(y_1)=F_{y_1}({\bf 1})+G_{y_1}({\bf 1})=1+0=1$. Consequently, ${\left\| }T{\right\| }\ge 1$. On the other hand, it is easy to see that if $f\in C(X)$ satisfies ${\left\| }f{\right\| }_{\infty} =1$, then ${\left|}(Tf)(y) {\right|}\le 1$ for every $y \in Y$. Hence, ${\left\| }T{\right\| }=1$. The next step consists of checking that $T$ is $\epsilon$-disjointness preserving. Let $f,g\in C(X)$ with ${\left\| }f{\right\| }_{\infty} ={\left\| }g{\right\| }_{\infty} =1$ and such that $c(f)\cap c(g)=\emptyset$. It is easy to see that $(Tf)(y)(Tg)(y)=0$ whenever $y \notin U(y_0)$. On the other hand, if $y\in U(y_0)$, then ${\left|}(Tf)(y)(Tg)(y) {\right|}={\left|}G_y(f){\right|}{\left|}G_y(g){\right|}$.
It is clear that there exist two unimodular scalars $a_1,a_2\in \mathbb{K}$ such that $a_1G_y(f)={\left|}G_y(f){\right|}$ and $a_2G_y(g)={\left|}G_y(g){\right|}$. Since ${\left\| }a_1f+a_2g{\right\| }_{\infty} =1$, then $$\begin{aligned} {\left|}G_y(f) {\right|}+ {\left|}G_y(g) {\right|}&=& G_y(a_1f+a_2g) \\ &=& \alpha(y)\int_X(a_1f+a_2g)d\mu \\ &\le& \alpha(y)\end{aligned}$$ Consequently, ${\left|}G_y(f){\right|}{\left|}G_y(g) {\right|}\le \alpha(y)^2/4$. Hence, $${\left|}(Tf)(y)(Tg)(y) {\right|}={\left|}G_y (f) {\right|}{\left|}G_y(g) {\right|}\le \frac{\alpha(y)^2}{4}\le \frac{(2\sqrt{\epsilon})^2}{4}=\epsilon.$$ Finally, we will see that ${\left\| }T-S{\right\| }\ge 2\sqrt{\epsilon}(1-\delta)$ for every weighted composition map $S:C(X)\longrightarrow C(Y)$. Let $S \in {\mathbf{WCM} {\left(}X,Y {\right)}}$, and let $h:c(S {\bf 1}) {\longrightarrow}X$ be its associated map. It is clear that, if $(S{\bf 1}) (y_0) =0$, then ${\left\| }T-S {\right\| }\ge {\left|}(T-S) ({\bf 1}) (y_0) {\right|}= 2 \sqrt{\epsilon}$, so we may assume that $y_0 $ belongs to $c(S {\bf 1})$. By the regularity of the measure $\mu$, there exists an open neighborhood $U$ of $h(y_0)$ such that $\mu(U)<\delta$. Let us select $f\in C(X)$ satisfying $0\le f\le 1$, $f (h(y_0)) = 0$, and $f\equiv 1$ on $X\setminus U$. Obviously $(Sf)(y_0)=0$ and ${\left|}(Tf)(y_0) {\right|}= {\left|}G_{y_0}(f) {\right|}$. Hence $$\begin{aligned} {\left\| }T-S{\right\| }&\ge& {\left|}(Tf)(y_0) {\right|}\\ &\ge& \alpha(y_0)\int_{X\setminus U}fd\mu \\ &\ge& 2\sqrt{\epsilon}(1-\delta).\end{aligned}$$ This proves the first part. The second part is immediate because the measure can be taken to be atomless, so $\delta$ can be taken as small as we wish. We are assuming that there exists a discrete space $Z$ such that $Y= \beta Z$. Of course $Y$ may be finite (that is, $Y=Z$), and this is necessarily the case when we consider the second part of the theorem.
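The elementary step in the $\epsilon$-disjointness check of the proof of Theorem \[ex3\] above, namely that ${\left|}G_y(f) {\right|}+ {\left|}G_y(g) {\right|}\le \alpha(y)$ forces ${\left|}G_y(f) {\right|}{\left|}G_y(g) {\right|}\le \alpha(y)^2/4 \le \epsilon$, is just the inequality $ab \le (a+b)^2/4$. The following sketch is a numerical illustration only; the sampled values are arbitrary and play no role in the argument.

```python
import math
import random

eps = 0.05                      # arbitrary
alpha = 2 * math.sqrt(eps)      # the maximum value of the function alpha
random.seed(0)
for _ in range(1000):
    a = random.uniform(0, alpha)
    b = random.uniform(0, alpha - a)   # enforce a + b <= alpha
    # ab <= ((a + b)/2)^2 <= (alpha/2)^2 = eps
    assert a * b <= alpha * alpha / 4 + 1e-12
    assert a * b <= eps + 1e-12
# equality case: a = b = alpha/2 gives exactly eps
assert abs((alpha / 2) ** 2 - eps) < 1e-12
```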
Let $Z_0 := Z \cap Y_{2 \sqrt{\epsilon}} $, which is a nonempty closed and open subset of $Z$, and $$Z_1 := \{z \in Z \setminus Z_0 : \exists x_z \in X \mbox{ with } {\left|}\lambda_{T_z} (\{x_z\}) {\right|}>0 \}.$$ Fix any $x_0 \in X$. By Lemma \[L1\], we can define a map $h:Z \longrightarrow X$ such that ${\left|}\lambda_{T_z} (\{h(z)\}) {\right|}\ge \sqrt{{\left\| }T_z {\right\| }^2 - 4 \epsilon}$ for every $z \in Z_0 $, and such that $h(z) := x_z$ for $z \in Z_1$, and $h(z) := x_0$ for $z \notin Z_0 \cup Z_1$. Also, since $Z$ is discrete, $h$ is continuous, and consequently it can be extended to a continuous map from $Y$ to $X$ (when $Y \neq Z$). We will denote this extension also by $h$. Define $\alpha : Z \longrightarrow \mathbb{K}$ as $\alpha (z) := \lambda_{T_z} (\{h(z)\})$ if $z \in Z_0 \cup Z_1$, and $\alpha (z) := 0$ otherwise, and extend it to a continuous function, also called $\alpha$, defined on $Y$. Then consider $S: C(X) \longrightarrow C(Y)$ defined as $(Sf) (y) := \alpha (y) f(h(y))$ for every $f \in C(X)$ and $y \in Y$. Let us check that ${\left\| }T- S{\right\| }\le 2\sqrt{ \epsilon}$. Take $f\in C(X)$ with ${\left\| }f{\right\| }_{\infty}\le 1$. First, suppose that $z \in Z \setminus {\left(}Z_0 \cup Z_1 {\right)}$. Then $(Sf)(z) =0$, so $${\left|}(Tf)(z)-( S f)(z) {\right|}= {\left|}(Tf)(z) {\right|}\le 2 \sqrt{\epsilon}.$$ Now, if $z \in Z_1$, then ${\left\| }T_z {\right\| }\le 2 \sqrt{\epsilon}$ and, as in the proof of Lemma \[llata\], $${\left|}(Tf)(z)-(Sf)(z) {\right|}\le{\left\| }T_z {\right\| }- {\left|}\lambda_{T_z} (\{h(z)\}) {\right|}< 2 \sqrt{\epsilon}.$$ On the other hand, if $z \in Z_0$, we know by Corollary \[L3\] that $${\left|}(Tf)(z)-(Sf)(z) {\right|}\le{\left\| }T_z {\right\| }- \sqrt{{\left\| }T_z{\right\| }^2-4\epsilon}.$$ By Lemma \[pollo\], we have ${\left|}(Tf)(z)-(Sf)(z) {\right|}< 2 \sqrt{\epsilon}$ for every $z \in Z_0$.
By continuity, we see that the same bound applies to every point in $Y$, and the first part is proved. Finally, in the second case, that is, when $X$ does not admit an atomless regular Borel probability measure and $Y$ is finite, we have that $Y=Z$, and that $Z \setminus {\left(}Z_0 \cup Z_1 {\right)}$ consists of those points satisfying ${\left\| }T_z {\right\| }=0$. The conclusion is then easy.

The case when $X$ is finite. How far {#cerilla}
====================================

In this section we prove Theorems \[recero\] and \[cero\]. The fact that Theorem \[cero\] does not hold for arbitrary $Y$ (with more than one point) can be seen in the next section (see Example \[llegomalenayamigos2\]). We first prove the result when $n$ is odd. We follow the same ideas and notation as in the proof of Theorem \[ex3\], with some differences. Namely, we directly take $\mu(\{x\}) = 1/n$ for every $x \in X$, and use a new function $$\alpha : Y \longrightarrow {\left[}0, \min {\left\{} \newcommand{\tr}{\right\}}\frac{2n \sqrt{\epsilon}}{\sqrt{n^2 -1}}, 1 \tr {\right]}$$ such that $$\alpha (y_0) = \min {\left\{} \newcommand{\tr}{\right\}}\frac{2n \sqrt{\epsilon}}{\sqrt{n^2 -1}}, 1 \tr$$and ${\rm supp}(\alpha)\subset U(y_0)$. Notice that $\alpha (y_0 )= 2n \sqrt{\epsilon} / \sqrt{n^2 -1} $ if $\epsilon \le \omega_n$, and $\alpha (y_0 ) =1$ otherwise. Clearly ${\left\| }T {\right\| }= 1$, and using the fact that $$\frac{(n-1)(n+1)}{4 n^2} = \max {\left\{} \newcommand{\tr}{\right\}}\frac{l (n-l)}{ n^2} : 0 \le l \le n \tr,$$we easily see that $T$ is $\epsilon$-disjointness preserving both if $\epsilon \le \omega_n$ and if $\epsilon > \omega_n$. On the other hand, by the definition of the measure, reasoning as in the proof of Theorem \[ex3\], we easily check that ${\left\| }T-S {\right\| }\ge {\left(}1 - 1 /n {\right)}\alpha (y_0)$ for every weighted composition map $S$. Finally, we follow the above pattern to prove the result when $n$ is even.
In particular we also take $\mu(\{x\}) = 1/n$ for every $x \in X$, and use a function $\alpha : Y \longrightarrow [ 0, 2 \sqrt{\epsilon} ]$ with $\alpha (y_0) = 2 \sqrt{\epsilon}$ and ${\rm supp}(\alpha)\subset U(y_0)$. The rest of the proof follows as above. Let $Z$ be a discrete space with $Y = \beta Z$. Since $X$ has $n$ points, say $X:= \{x_1, \ldots, x_n\}$, we have that, for each $z \in Z$, $T_z$ is of the form $T_z := \sum_{i=1}^n a_i^z \delta_{x_i}$, for some $a_i^z \in \mathbb{K}$, $i = 1, \ldots, n$. Consequently, for each $z \in Z$, we can choose a point $x_z \in X$ such that ${\left|}\lambda_{T_z} (\{x_z\}) {\right|}\ge {\left|}\lambda_{T_z} (\{x\}) {\right|}$ for every $x \in X$, which yields ${\left|}\lambda_{T_z} (\{x_z\}) {\right|}\ge {\left\| }T_z {\right\| }/n$. This allows us to define a map $h: Z \longrightarrow X$ as $h(z) := x_z$ for every $z \in Z$. Since $h$ is continuous we can extend it to a continuous function defined on the whole $Y$, which we also call $h$. Following a similar process as in the proof of Theorem \[vr\], define $\alpha : Z \longrightarrow \mathbb{K}$ as $\alpha (z) := \lambda_{T_z} (\{h(z)\})$, and extend it to a continuous function defined on $Y$, also denoted by $\alpha$. Now, define $S: C(X) \longrightarrow C(Y)$ as $(Sf) (y) := \alpha (y) f(h(y))$ for every $f \in C(X)$ and $y \in Y$. Fix any $f \in C(X)$, ${\left\| }f {\right\| }_{\infty} \le 1$, and $z \in Z$. It is then easy to check that ${\left|}(Tf) (z) - (Sf) (z ){\right|}\le (n-1) {\left\| }T_z {\right\| }/n $ . Consequently, if ${\left\| }T_z {\right\| }\le 2 \sqrt{\epsilon}$, we have $${\left|}(Tf) (z) - (Sf) (z ){\right|}\le \frac{2(n-1)}{n}\sqrt{\epsilon} \le o'_X (\epsilon).$$ Let us now study the case when ${\left\| }T_z {\right\| }> 2 \sqrt{\epsilon}$. First, we know from Corollary \[L3\] that ${\left|}(Tf)(z)-(Sf)(z) {\right|}\le {\left\| }T_z {\right\| }- \sqrt{{\left\| }T_z{\right\| }^2-4\epsilon}$. Next, we split the proof into two cases. 
- [*Case 1. Suppose that $n$ is odd.*]{} We see that to finish the proof it is enough to show that $$\min {\left(}{\left\| }T_z {\right\| }- \sqrt{{\left\| }T_z{\right\| }^2-4\epsilon}, \frac{n-1}{n} {\left\| }T_z {\right\| }{\right)}\le o'_X ( \epsilon)$$ whenever ${\left\| }T_z {\right\| }> 2 \sqrt{\epsilon}$. To do this, we consider the functions $\gamma, \delta :[2 \sqrt{\epsilon}, 1] \longrightarrow \mathbb{R}$ defined respectively as $\gamma (t) := t - \sqrt{t^2 -4 \epsilon}$, and $\delta (t) := (n-1) t/n $ for every $t \in [2 \sqrt{\epsilon}, 1]$. We have that $\gamma$ is decreasing (see Lemma \[pollo\]) and $\delta$ is increasing on the whole interval of definition. Now, if $\epsilon \le \omega_n$, then for $t_0 := \sqrt{\epsilon /\omega_n} \in {\left[}2 \sqrt{\epsilon}, 1 {\right]}$, we have $\gamma (t_0)= \delta (t_0)$. This common value turns out to be $\delta (t_0) = 2\sqrt{ (n-1) \epsilon/(n+1)}$, that is, it is equal to $o'_X ( \epsilon)$, and we get that ${\left|}(Tf) (z) - (Sf) (z ){\right|}\le o'_X (\epsilon)$ for every $z \in Z$. On the other hand, if $\epsilon > \omega_n$, then $\delta (1) \le \gamma (1)$, so $\delta (t) \le \gamma (t)$ for every $t \in [2 \sqrt{\epsilon}, 1]$, and ${\left|}(Tf) (z) - (Sf) (z ){\right|}\le \delta (1) $ for every $z \in Z$. Since $\delta(1) = (n-1)/n = o'_X ( \epsilon)$, we obtain the desired inequality also in this case. - [*Case 2. 
Suppose that $n$ is even.*]{} By Proposition \[ksis\], we get that ${\left|}\lambda_{T_z} (\{h(z)\}) {\right|}\ge {\left(}{\left\| }T_z {\right\| }+ \sqrt{{\left\| }T_z {\right\| }^2 - 4 \epsilon} {\right)}\Big\slash n$, so $${\left|}(Tf) (z) - (Sf) (z ){\right|}\le {\left\| }T_z {\right\| }- \frac{{\left\| }T_z {\right\| }+ \sqrt{{\left\| }T_z {\right\| }^2 - 4 \epsilon}}{n}.$$ Consequently, to finish the proof in this case we just need to show that $$\min {\left(}{\left\| }T_z {\right\| }- \sqrt{{\left\| }T_z{\right\| }^2-4\epsilon}, {\left\| }T_z {\right\| }- \frac{{\left\| }T_z {\right\| }+ \sqrt{{\left\| }T_z {\right\| }^2 - 4 \epsilon}}{n} {\right)}\le \frac{2 (n-1) \sqrt{\epsilon}}{n}.$$ Let $\eta : [2 \sqrt{\epsilon}, 1] \longrightarrow \mathbb{R}$ be defined as $$\eta (t) := t - \frac{t + \sqrt{t^2 -4 \epsilon}}{n}$$ for every $t \in [2 \sqrt{\epsilon}, 1]$, and consider also the function $\gamma$ defined above. Clearly, when $n=2$ we have $ \eta = \gamma /2$, and the above inequality follows from Lemma \[pollo\]. So we assume that $n \neq 2$. We easily see that $\eta (t) \le \gamma (t) $ whenever $t \in {\left[}2 \sqrt{\epsilon}, \sqrt{\epsilon /\omega_{n-1}} {\right]}$, and that $\eta$ is decreasing in ${\left[}2 \sqrt{\epsilon}, \sqrt{\epsilon /\omega_{n-1}} {\right]}$ ($t \le 1$). We deduce that $$\min {\left(}\gamma (t) , \eta (t) {\right)}\le \eta {\left(}2 \sqrt{\epsilon} {\right)}= \frac{2 {\left(}n-1 {\right)}\sqrt{\epsilon}}{n}$$ whenever $2 \sqrt{\epsilon} \le t \le 1 $, as required. By denseness of $Z$ in $Y$, we conclude that ${\left\| }T-S {\right\| }\le o'_X( \epsilon)$. The case when $X$ is finite. How close {#myrna} ====================================== In this section we start proving Theorem \[opodel\], and then we give an example showing that the bound given in it is in fact sharp.
Of course this implies in particular that Theorem \[cero\] does not hold for $Y$ arbitrary, and consequently that the bounds for instability given in Theorem \[recero\] are not bounds for stability. At the end of the section we provide an example which shows that Theorem \[opodel\] is not valid in general for $X$ infinite, even in the simplest case, that is, when $X$ is a countable set with just one accumulation point. We see not only that $2 \sqrt{\epsilon}$ is not a bound for stability, but that every bound for stability must be bigger than $\sqrt{8 \epsilon}$. This shows a dramatic passage from finite to infinite. We assume that $X = \{x_1 , \ldots, x_n \}$. It is easy to see that ${\left\| }T_y {\right\| }= \sum_{i=1}^n {\left|}{\left(}T \xi_{\{x_i\}} {\right)}{\left(}y {\right)}{\right|}$ for every $y \in Y$, and consequently the map from $Y$ to $\mathbb{K}$ given by $y \mapsto {\left\| }T_y {\right\| }$ is continuous. For each set $C \subset X$, we consider $A_C := E_C \cap {\left(}\bigcap_{u \in C} E_C^u {\right)}$, where $$\begin{aligned} E_C &:=& {\left\{} \newcommand{\tr}{\right\}}y \in Y_{2 \sqrt{\epsilon}} : {\left|}\lambda_{T_{y}} {\right|}{\left(}C {\right)}\ge \frac{{\left\| }T_y {\right\| }}{2} \tr \\ &=& {\left\{} \newcommand{\tr}{\right\}}y \in Y_{2 \sqrt{\epsilon}} : \sum_{x \in C} {\left|}{\left(}T \xi_{\{x\}} {\right)}(y) {\right|}\ge \frac{\sum_{i =1}^n {\left|}{\left(}T \xi_{{\left\{} \newcommand{\tr}{\right\}}x_i \tr } {\right)}(y) {\right|}}{2} \tr , \end{aligned}$$ and $$\begin{aligned} E_C^u &:=& {\left\{} \newcommand{\tr}{\right\}}y \in Y_{2 \sqrt{\epsilon}} : {\left|}\lambda_{T_{y}} {\right|}{\left(}C\setminus \{ u\} {\right)}< \frac{{\left\| }T_y {\right\| }}{2} \tr \\ &=& {\left\{} \newcommand{\tr}{\right\}}y \in Y_{2 \sqrt{\epsilon}} : \sum_{x \in C \setminus \{u\}} {\left|}{\left(}T \xi_{\{x\}} {\right)}(y) {\right|}< \frac{\sum_{i =1}^n {\left|}{\left(}T \xi_{{\left\{} \newcommand{\tr}{\right\}}x_i \tr } {\right)}(y)
{\right|}}{2} \tr ,\end{aligned}$$ By Lemma \[pol-immemoriam\], we know that $E_C$ coincides with the set of all $y \in Y_{2 \sqrt{\epsilon}}$ satisfying $ {\left|}\lambda_{T_{y}} {\right|}{\left(}C {\right)}> {\left\| }T_y {\right\| }/ 2$, that is, $$\sum_{x \in C} {\left|}{\left(}T \xi_{\{x\}} {\right)}(y) {\right|}> \sum_{i =1}^n {\left|}{\left(}T \xi_{{\left\{} \newcommand{\tr}{\right\}}x_i \tr } {\right)}(y) {\right|}/2 ,$$ and consequently is both open and closed as a subset of $Y_{2 \sqrt{\epsilon}}$. In the same way, each $E_C^u$ is also open and closed in $Y_{2 \sqrt{\epsilon}}$, and so is $A_C$. Notice that again by Lemma \[pol-immemoriam\], if $y \in A_C$, then ${\left|}\lambda_{T_y} {\right|}{\left(}C {\right)}\ge {\left(}{\left\| }T_y {\right\| }+ \sqrt{{\left\| }T_y {\right\| }^2 - 4 \epsilon} {\right)}/2 $, and ${\left|}\lambda_{T_y} {\right|}{\left(}C \setminus \{u\} {\right)}\le {\left(}{\left\| }T_y {\right\| }- \sqrt{{\left\| }T_y {\right\| }^2 - 4 \epsilon} {\right)}/2 $ for every $u \in C$. We conclude that ${\left|}\lambda_{T_y} {\right|}{\left(}{\left\{} \newcommand{\tr}{\right\}}u \tr {\right)}\ge \sqrt{{\left\| }T_y {\right\| }^2 - 4 \epsilon}$ for every $u \in C$. On the other hand, it is clear that each element $y \in Y_{2 \sqrt{\epsilon}}$ belongs to some $A_C$, so we can make a finite partition of $Y_{2 \sqrt{\epsilon}}$ by open and closed sets $B_1, \ldots, B_m$, where each $B_i \subset A_C$ for some set $C$. This implies that, for each $i =1, \ldots, m$, there exists a point $u_i \in X$ such that ${\left|}\lambda_{T_y} (\{u_i\}) {\right|}\ge \sqrt{{\left\| }T_y {\right\| }^2 - 4 \epsilon}$ for every $y \in B_i$. This allows us to define a continuous map $h: Y_{2 \sqrt{\epsilon}} {\longrightarrow}X$ as $h (y) := u_i$ for every $y \in B_i$. 
Also take any map $\mathbf{b} : Y {\longrightarrow}\mathbb{K}$ such that $\mathbf{b} (y) = \lambda_{T_y} ({\left\{} \newcommand{\tr}{\right\}}h(y) \tr )$ whenever $y \in Y_{2 \sqrt{\epsilon}}$, which is continuous on $Y_{2 \sqrt{\epsilon}}$. We next follow a process similar to that seen in the proof of Theorem \[rz\], with some necessary modifications. In particular we use the map $\alpha \in C(Y)$ given as $$\alpha (y) := \sqrt{\frac{{\left\| }T_y {\right\| }- 2 \sqrt{\epsilon}}{{\left\| }T_y {\right\| }+ 2 \sqrt{\epsilon}}}$$ for $y \in Y_{2 \sqrt{\epsilon}}$, and as $\alpha (y) := 0$ on $Y \setminus Y_{2 \sqrt{\epsilon}}$, and define a weighted composition map $S$ as $$(S f)(y):= \alpha(y) \mathbf{b} (y) f(h (y))$$ for all $f\in C(X)$ and $y\in Y$. Now, for $y \in Y_{2 \sqrt{\epsilon}}$, put $A_y:= \mathbf{b} (y) \delta_{h(y)}$. It is easy to check that ${\left\| }T_y{\right\| }= {\left\| }T_y -A_y {\right\| }+ {\left\| }A_y {\right\| }$, and that, for $t \in [0,1]$ and $f \in C(X)$ with ${\left\| }f {\right\| }_{\infty} \le 1$, $$\begin{aligned} {\left|}(Tf)(y)- t \mathbf{b} (y) f(h(y)) {\right|}&\le& {\left|}T_y f- A_y f {\right|}+ {\left|}A_y f - t A_y f {\right|}\\&\le& {\left\| }T_y -A_y{\right\| }+ (1 - t) {\left\| }A_y {\right\| }\\ &=& {\left\| }T_y{\right\| }- t {\left\| }A_y{\right\| }\\ &\le& {\left\| }T_y {\right\| }- t \sqrt{{\left\| }T_y{\right\| }^2-4\epsilon}.\end{aligned}$$ This allows us to use the same arguments as in the proof of Theorem \[rz\], and show that ${\left\| }T- S{\right\| }\le 2\sqrt{ \epsilon}$. \[llegomalenayamigos2\] [*An example showing that the bound given in Theorem \[opodel\] is sharp.*]{} Let $Y:= [-1,1]$ and $\epsilon \in (0, 1/4)$.
Take two continuous and even functions $\alpha : [-1,1] {\longrightarrow}{\left[}2 \sqrt{\epsilon}, 1 {\right]}$ and $\beta : [-1,1] {\longrightarrow}{\left[}1 , 1/ \sqrt{1-4 \epsilon} {\right]}$, both increasing in $[0,1]$, such that $\alpha (0) = 2 \sqrt{\epsilon}$, $\alpha {\left(}1 {\right)}= 1$, $\beta (0) =1$, and $\beta (1) = 1/ \sqrt{1 -4 \epsilon}$. Taking into account that $x \mapsto x / \sqrt{x^2 - 4 \epsilon}$ is decreasing for $x >2 \sqrt{\epsilon}$, we see that $\beta(t) \sqrt{\alpha^2 (t) - 4 \epsilon} \le \alpha (t)$ for every $t \in [-1, 1]$. Now pick two points $ A, B \in X$ (recall that we are assuming that $X$ has at least two points), and consider $T :C(X) {\longrightarrow}C(Y)$ such that, for every $f \in C(X)$, $$\begin{aligned} (Tf) (t) &=& \frac{\alpha (t) + \operatorname{sgn}(t) \beta (t) \sqrt{ \alpha (t)^2 -4 \epsilon}}{2} \ f(A) \\ &+& \frac{\alpha (t) - \operatorname{sgn}(t) \beta (t) \sqrt{ \alpha (t)^2 -4 \epsilon}}{2} \ f(B) \end{aligned}$$ for every $t \in [-1,1]$, where $\operatorname{sgn}$ denotes the usual sign function. It is clear that $T$ is $\epsilon$-disjointness preserving and has norm $1$. Also, since ${\left(}T \mathbf{1} {\right)}(\pm 1) = 1 $, it is easily seen that if a weighted composition map $S = a \cdot f \circ h$ is at distance less than $2 \sqrt{\epsilon}$ from $T$, then $1, -1 \in c(a)$. On the other hand, if we suppose that $h(1) \neq A$, then we take $f_0 \in C(X)$ with $f_0 (A) =1 = {\left\| }f_0 {\right\| }_{\infty}$, and $f_0 (h(1)) =0 = f_0 (B)$, and we see that $${\left|}(T - S) (f_0) (1) {\right|}= 1 > 2 \sqrt{\epsilon}.$$ We deduce that, if ${\left\| }T - S {\right\| }< 2 \sqrt{\epsilon}$, then $h(1) =A$, and in a similar way $h(-1) =B$. Since $Y$ is connected and $h: c(a) {\longrightarrow}X$ is continuous, we conclude that there is a point $t_0 \in Y$ such that $t_0 \notin c(a)$, that is, $(Sf) (t_0) =0$ for every $f \in C(X)$.
Then it is easy to see that ${\left\| }T - S {\right\| }\ge \alpha (t_0) \ge 2 \sqrt{\epsilon} $. Notice that the above process is also valid if $X$ is infinite. \[destranyo-kajero\] [*For $X= \mathbb{N} \cup {\left\{} \newcommand{\tr}{\right\}}\infty \tr$ and any $\epsilon \in (0, 1/8)$, an $\epsilon$-disjointness preserving operator of norm $1$ whose distance to any weighted composition map is at least $\sqrt{8 \epsilon}$.*]{} Given $r >0$, we denote by $C(r)$ the circle with center $0$ and radius $r$ in the complex plane. We take a strictly decreasing sequence $(r_n )$ in $\mathbb{R}$ converging to $0$ and the interval $[-r_1, 0]$, and define $Y \subset \mathbb{C}$ as $$Y := [-r_1, 0] \cup \bigcup_{n=1}^{\infty} C(r_n).$$ We also take $X := \mathbb{N} \cup {\left\{} \newcommand{\tr}{\right\}}\infty \tr$. Next let $$\pi_0 :=\frac{1}{2} - \frac{ \sqrt{2}}{4},$$ and consider a continuous map $\alpha : \bigcup_{n=1}^{\infty} C(r_n) {\longrightarrow}{\left[}0, \pi_0 {\right]}$ such that $\alpha {\left(}-r_n {\right)}=0$ and $\alpha {\left(}r_n {\right)}=\pi_0$ for every $n \in \mathbb{N}$. Next, for each $f \in C(X)$ and $n \in \mathbb{N}$, we define, for $z \in C(r_n)$, $$(Tf) (z) := {\left(}\alpha (z) + \sqrt{2}/2 {\right)}f (2n) - \alpha (z) f (2n -1) .$$ On the other hand, if $n \in \mathbb{N}$ and $z \in (- r_n , - r_{n+1})$, then it is of the form $$z = - {\left(}t r_n + (1-t) r_{n +1} {\right)},$$ where $t$ belongs to the open interval $ (0,1)$. In this case, we define $$\begin{aligned} (Tf) (z) &:=& t {\left(}Tf {\right)}{\left(}- r_n {\right)}+ {\left(}1-t {\right)}{\left(}Tf {\right)}{\left(}- r_{n+1} {\right)}\\&=& \frac{\sqrt{2}}{2} {\left[}t f (2n) + (1- t) f (2n + 2 ) {\right]}. \end{aligned}$$ Finally we put $$(Tf) (0) := \frac{\sqrt{2}}{2} f(\infty).$$ It is apparent that $T :C(X) \longrightarrow C(Y)$ is linear and continuous, with ${\left\| }T {\right\| }=1$. 
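With $\pi_0 = 1/2 - \sqrt{2}/4$ as above, both the norm computation and the product bound behind the $(1/8)$-disjointness preservation of $T$ reduce to maximising elementary quadratics in the weight $a \in [0, \pi_0]$ (on the circles $C(r_n)$) and in $t \in [0,1]$ (on the segments). The following numerical check is only an illustration of those maxima, not part of the argument:

```python
import math

# pi_0 = 1/2 - sqrt(2)/4, as defined in the construction above
pi0 = 0.5 - math.sqrt(2) / 4
grid = [i * pi0 / 10000 for i in range(10001)]  # a ranges over [0, pi0]

# On the circle C(r_n): (Tf)(z) = (a + sqrt(2)/2) f(2n) - a f(2n-1),
# so the norm of the functional T_z is (a + sqrt(2)/2) + a, maximised at a = pi0.
norm_T = max((a + math.sqrt(2) / 2) + a for a in grid)

# For disjoint norm-one f, g (one supported at 2n, the other at 2n-1),
# the product |(Tf)(z)(Tg)(z)| is at most a * (a + sqrt(2)/2).
circle_defect = max(a * (a + math.sqrt(2) / 2) for a in grid)

# On the segment (-r_n, -r_{n+1}) the coefficients are (sqrt(2)/2) t and
# (sqrt(2)/2)(1 - t), so the product is bounded by (1/2) t (1 - t).
segment_defect = max(0.5 * t * (1 - t) for t in [i / 10000 for i in range(10001)])

print(round(norm_T, 10))         # -> 1.0, i.e. ||T|| = 1
print(round(circle_defect, 10))  # -> 0.125, i.e. exactly 1/8
print(round(segment_defect, 10)) # -> 0.125 as well
```

Both maxima equal exactly $1/8 = (1/2 - \sqrt{2}/4)(1/2 + \sqrt{2}/4)$, attained at $a = \pi_0$ on the circles and at $t = 1/2$ on the segments.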
Furthermore it is easy to see that if $f, g \in C(X)$ satisfy ${\left\| }f {\right\| }_{\infty} =1 = {\left\| }g {\right\| }_{\infty}$ and $fg =0$, then ${\left|}(Tf) (z) (Tg) (z) {\right|}\le 1/8$ for every $z \in Y$, that is, $T$ is $1/8$-disjointness preserving. We will now check that we cannot find a weighted composition map “near” $T$. Namely, if $S: C(X) \longrightarrow C(Y)$ denotes a weighted composition map, then we claim that ${\left\| }S- T {\right\| }\ge 1$. Let $D := c(S {\bf 1})$, and consider the continuous map $h :D \longrightarrow X$ given by $S$. If $r_n \notin D$ for some $n \in \mathbb{N}$, then we take $f_n := \xi_{\{2n\}} - \xi_{\{2n-1\}}$. It is clear that ${\left\| }f_n {\right\| }_{\infty} =1$, $(Tf_n) (r_n)=1$ and, as $r_n \notin D$, then $(Sf_n) (r_n) =0$. As a consequence ${\left\| }S-T {\right\| }\ge 1$. It is also easy to see that we obtain the same conclusion if $h(r_n) \notin \{2n, 2n-1\}$. On the other hand, if we suppose that $0 \notin D$, then $(S {\bf 1}) (0) =0$. Therefore, given any $\delta >0$, there exists a neighborhood $U$ of $0$ in $Y$ such that $\left| (S{\bf 1}) (z) \right| < \delta$ for all $z \in U$. Choose now $r_n \in U$, and let $f_n$ be as above. It is apparent that either $({\bf 1} - f_n) (h(r_n)) = 0$ or $({\bf 1} + f_n) (h(r_n)) = 0$, which implies that $(S {\bf 1}) (r_n) - (Sf_n) (r_n) =0$ or $(S {\bf 1}) (r_n) + (Sf_n) (r_n) =0$. Consequently, $\left| (Sf_n) (r_n) \right| = \left| (S{\bf 1}) (r_n) \right| < \delta$ and, as in the previous cases, we easily deduce that ${\left\| }S - T {\right\| }\ge 1- \delta$. Therefore ${\left\| }S - T {\right\| }\ge 1$. Finally, we will see that we cannot have $0 \in D$ and $h (r_n) \in \{2n , 2n-1\}$ for every $n \in \mathbb{N}$. Otherwise, as $D$ is open, there exists $s >0$ such that $B(0, s) \cap Y \subset D$. Also $h$ is continuous and $B(0, s) \cap Y $ is connected, so $h {\left(}B(0, s) \cap Y {\right)}$ is constant. 
This is obviously impossible by our assumptions on $h (r_n)$. This contradiction shows that this case does not hold. Hence, we have ${\left\| }S - T {\right\| }\ge 1$. Let $0 < \epsilon < 1/8$. We are going to construct a norm one $\epsilon$-disjointness preserving map $T'$ such that, for every weighted composition map $S'$, ${\left\| }T' - S' {\right\| }\ge \sqrt{8 \epsilon}$. Let $$\gamma := \sqrt{8 \epsilon},$$ and let $X' := X \cup \{0\}$ and $Y' := Y \cup \{2r_1\} \subset \mathbb{C}$. Define a linear map $T' : C(X') \longrightarrow C(Y')$ as $(T' f) (2 r_1) := f(0)$ and, for all $z \in Y$, $(T'f) (z) := \gamma (Tf_r) (z)$, where $f_r$ is the restriction of $f$ to $X$. Since $T$ is $1/8$-disjointness preserving and $\epsilon = \gamma^2 /8$, $T'$ is $\epsilon$-disjointness preserving. The conclusion follows as in Example \[gustavo\]. Acknowledgements ================ The authors would like to thank Prof. Luis Alberto Fernández for his help with the drawings.
--- abstract: '[*Reductionism*]{} is a prevalent viewpoint in science according to which all physical phenomena can be understood from fundamental laws of physics. Anderson \[Science, 177, 393 (1972)\], Laughlin and Pines \[PNAS, 97, 28 (2000)\], and others have countered this viewpoint and argued in favour of a hierarchical structure of the universe and its laws. In this paper we advance the latter perspective by showing that some of the complex flow properties derived using hydrodynamic equations (macroscopic laws) are very difficult, if not impossible, to describe in a microscopic framework—kinetic theory. These properties include Kolmogorov’s theory of turbulence, turbulence dissipation and diffusion, and dynamic pressure. We also provide several other examples of hierarchical description.' author: - 'Mahendra K. Verma' date: 'Received: date / Accepted: date' title: 'Microscopic laws vs. Macroscopic laws: Perspectives from kinetic theory and hydrodynamics' --- Introduction {#intro} ============ A prevalent view in science is that all phenomena in the universe can “in principle” be explained using fundamental laws of physics. This paradigm, called the [*reductionist hypothesis*]{}, encouraged the search for microscopic laws that led to fascinating discoveries in quantum mechanics and particle physics [@Kane:book:Particle]. Buoyed by the success of these discoveries, some physicists are looking for a reductionist framework that can explain all the physical phenomena of the universe. This holy grail is referred to as the [*theory of everything (TOE), final theory, ultimate theory*]{}, and [*master theory*]{} [@Weinberg:book:dream; @Hawking:book:TOE]. The aforementioned viewpoint has many champions and supporters, but it has also invited criticisms, as described below. The degree of criticism and support for the reductionist paradigm varies.
For example, Weinberg [@Weinberg:book:dream] argues strongly in favour of reductionism, and claims that all scientists, including economists, practice reductionism. According to Weinberg, “it saves scientists from wasting their ideas that are not worth pursuing”, and/or provides a stronger theoretical basis for their hypotheses. Refer to [@Weinberg:book:dream; @Hawking:book:TOE] for more references in support of reductionism. In a somewhat sharp criticism, Anderson [@Anderson:Science1972] argued that “the reductionism hypothesis does not by any means imply a ‘constructionist’ one: the ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe”. Further, he argues that if the starting point of a field Y is a field X, it does not mean that all the laws of Y are “just applied X”. He goes on to illustrate this viewpoint by showing how the ideas of broken symmetries (apart from fundamental laws) help explain diverse phenomena of condensed matter physics. In another article critical of reductionism, Laughlin and Pines [@Laughlin:PNAS2000] write “The emergent physical phenomena regulated by higher organizing principles have a property, namely their insensitivity to microscopics, that is directly relevant to the broad question of what is knowable in the deepest sense of the term.” They further argue, “Rather than a Theory of Everything we appear to face a hierarchy of Theories of Things, each emerging from its parent and evolving into its children as the energy scale is lowered.” Also refer to Laughlin [@Anderson:book:More; @Laughlin:book]. Another set of illustrations of the limitations of reductionism is as follows. The letters of a book do not convey its story. A combination of words, paragraphs, and chapters that describe subplots and plots makes the story.
Similarly, music and paintings cannot be appreciated by just focussing on musical notes and photon packets; rather, they are complex hierarchical structures with notes and colours appearing at the bottom-most layer. The aesthetics and ecology of a building are impossible to derive from the properties of bricks and mortar. A complex computer program is a hierarchical structure with program statements, functions, data structures, and their combinations (called [*classes*]{}); it is very difficult to decipher the functionality of a program if we focus only on the program statements. Carrying the analogy to physics, though every macroscopic physical system is made of electrons and protons, its macroscopic properties follow from the complex organization of its constituents. For the Earth, we need to focus on macroscopic objects like the atmosphere, oceans, lakes, land, life, etc., rather than the electrons and protons that make them. After so much discussion by eminent scientists, it may appear futile to write more on this topic. However, in the present article, I provide several interesting examples of hydrodynamic laws (a macroscopic description) that cannot be conveniently derived using the microscopic counterpart, for example, kinetic theory. These examples provide a much simpler comparison between microscopic and macroscopic laws than more complex ones involving stars, planets, biology, society, etc. The present article essentially advances the viewpoint that not all macroscopic phenomena can be explained from microscopic perspectives [@Anderson:Science1972; @Laughlin:PNAS2000]. Kinetic theory and hydrodynamics {#sec:KT} ================================ In kinetic theory, we deal with a large number of particles (say $N$) that are specified by their position (${\bf r}$) and velocity (${\bf u}$).
These particles are represented as a point in $6N$-dimensional phase space whose coordinates are $(x_a, y_a, z_a, p_{x,a}, p_{y,a}, p_{z,a})$, where $a$ is the particle label; or as $N$ points in a six-dimensional $\mu$-space whose coordinates are $(x, y, z, p_x, p_y, p_z)$. The density of these points in $\mu$-space is called the [*distribution function*]{}, and it is denoted by $f({\bf r,u}, t)$ [@Choudhuri:book:Fluids]. The Boltzmann equation of kinetic theory describes the evolution of the distribution function, and it is the starting point for many works of statistical physics [@Lifshitz:book:Physical_Kinetics; @Choudhuri:book:Fluids; @Liboff:book]. Kinetic theory successfully describes many phenomena—thermodynamics; phase transitions; observed properties of gases, liquids, polymers; etc. On the other hand, the hydrodynamic description involves real-space density $\rho({\bf r})$, velocity ${\bf u(r)}$, and internal energy $e({\bf r})$ [@Landau:book:Fluid]. The equations for these variables were derived in the continuum framework by Euler, Navier, Stokes, and others. These equations are essentially Newton’s laws of motion for fluid elements in the flow. Here, the field variables are quantities averaged over many microscopic particles. This is called the [*continuum approximation*]{}. Note however that the hydrodynamic equations can also be derived using kinetic theory. An averaging of the Boltzmann equation (with collision terms) and its various moments yields equations for $\rho({\bf r})$, ${\bf u(r)}$, and $e({\bf r})$ [@Lifshitz:book:Physical_Kinetics; @Choudhuri:book:Fluids; @Liboff:book]. Such derivations are popular among astro- and plasma physicists.
In the following discussion we will describe several important hydrodynamic laws—Kolmogorov’s theory of turbulence, irreversibility in turbulence, accelerated diffusion in turbulence, dynamic pressure, etc.—which could be treated as macroscopic laws since they are derived using a multiscale description of the hydrodynamic equations. We show in the next several sections that the above laws cannot be derived conveniently starting solely from kinetic theory. As far as we know, no one has provided such derivations from first principles. Note that even the derivation of incompressible hydrodynamics from kinetic theory is itself quite difficult [@Bisi:JPA2014]. Multiscale energy transfers and flux in hydrodynamic turbulence =============================================================== Many natural (astrophysical and geophysical) and engineering flows are turbulent. Generic features among them are energy feed at the large scales, and energy flow to smaller and smaller scales, where it is finally converted to heat. See Fig. 1 for an illustration. This multiscale feature has been propounded by Richardson, Taylor, Prandtl, Kolmogorov, and others [@Kolmogorov:DANS1941Dissipation; @Kolmogorov:DANS1941Structure; @Frisch:book; @Pope:book; @Lesieur:book:Turbulence; @McComb:book:Turbulence]. According to Kolmogorov, in incompressible hydrodynamic turbulence forced at large scales, the energy flux at intermediate scales is constant ($\epsilon_u$), while the velocity fluctuations obey $u_l \sim (\epsilon_u l)^{1/3}$. The corresponding energy spectrum is $E_u(k) = K_\mathrm{Ko} \epsilon_u^{2/3} k^{-5/3}$, where $K_\mathrm{Ko}$ is Kolmogorov’s constant, and $k$ is the wavenumber. The multiscale energy transfer of Fig. 1 has been derived in both the real-space and Fourier-space formulations of hydrodynamic turbulence [@Kolmogorov:DANS1941Dissipation; @Kolmogorov:DANS1941Structure; @Frisch:book; @Pope:book; @Lesieur:book:Turbulence; @McComb:book:Turbulence].
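As a rough illustration of these scaling relations, one can check numerically that the energy contained in one octave of wavenumbers around $k = 1/l$ of the Kolmogorov spectrum is of the same order as $u_l^2 = (\epsilon_u l)^{2/3}$. The flux $\epsilon_u$, the scale $l$, and $K_\mathrm{Ko} = 1.6$ below are assumed illustrative values, not taken from the text:

```python
# Kolmogorov scaling: E_u(k) = K_Ko * eps_u^(2/3) * k^(-5/3), u_l ~ (eps_u * l)^(1/3).
K_Ko = 1.6     # Kolmogorov constant (typical experimental value; illustrative)
eps_u = 1e-2   # energy flux [m^2/s^3] (assumed value)

def E(k):
    """Kolmogorov energy spectrum E_u(k)."""
    return K_Ko * eps_u ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

l = 0.1                            # eddy scale [m] (assumed value)
u_l = (eps_u * l) ** (1.0 / 3.0)   # velocity fluctuation at scale l

# Integrate E(k) over the octave [k0, 2*k0] with k0 = 1/l (midpoint Riemann sum);
# this octave energy should be comparable to u_l^2.
k0 = 1.0 / l
n = 100000
dk = k0 / n
octave = sum(E(k0 + (i + 0.5) * dk) * dk for i in range(n))

# Analytically the ratio is (3/2) * K_Ko * (1 - 2**(-2/3)) ~ 0.89,
# independent of eps_u and l, confirming the dimensional argument.
ratio = octave / u_l ** 2
print(ratio)
```

The ratio is of order one for any choice of $\epsilon_u$ and $l$, which is exactly the content of the dimensional (multiscale) reasoning; no particle-level input enters anywhere.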
![Schematic diagrams illustrating energy transfers in three-dimensional hydrodynamic turbulence. The energy supplied at large scales cascades to the inertial range and then to the dissipative range. []{data-label="fig:NS"}](turb.pdf){width="0.8\linewidth"} Can we derive Kolmogorov’s law in the framework of kinetic theory without going to the hydrodynamic formulation? This remains a challenge. The multiscale flow structures (e.g., vortices within vortices) are natural in the hydrodynamic description of turbulence, but not very apparent in kinetic theory, whose basic constituents are particles. Note however that we can obtain multiscale fluid structures by averaging or coarse-graining many times, as is often done in lattice hydrodynamics [@Succi:book:Lattice]. [*Yet, the derived structures follow the laws of hydrodynamics, and these laws are not transparent at the particle level.* ]{} Thus, the macroscopic description provided by hydrodynamics is much more convenient for the description of turbulence. Many natural flows involve more complex forces than those assumed in Kolmogorov’s theory of turbulence (see Fig. \[fig:NS\]). For example, Ekman friction, which is of the form $-\alpha {\bf u}$ ($\alpha$ is a positive constant), induces dissipation of kinetic energy at all scales [@Verma:EPL2012]. Consequently, the energy flux $\Pi_u(k)$ decreases with $k$. Hence, the kinetic energy in the flow at a given scale is lower than that for $\alpha =0$. This feature leads to a steeper spectrum for Ekman friction than that predicted by Kolmogorov’s theory ($k^{-5/3}$). Similar steepening of the kinetic energy spectrum is observed in buoyancy-driven turbulence [@Obukhov:DANS1959] and in magnetohydrodynamic turbulence [@Verma:ROPP2017]. A derivation of the above variable energy flux is very easy in the spectral description of hydrodynamics [@Frisch:book; @Lesieur:book:Turbulence; @Verma:book:BDF], but not in kinetic theory. A cautionary remark is in order.
In gas dynamics, kinetic theory is extensively employed to describe rarefied gas, for which the hydrodynamic description breaks down [@Succi:book:Lattice; @Singh:PRE2016]. These ideas find applications in supernova explosions, supersonic rockets and jets, rarefied plasma, etc. Dissipation, diffusion, and pressure in hydrodynamics ===================================================== In the microscopic description of physical processes, the collisions or interactions among particles conserve energy. These processes also respect time reversal symmetry [@Feynman:book:Character; @Carroll:book:Time; @Pathria:book]. Given this, it is very difficult to incorporate dissipation for an isolated system of particles. The hydrodynamic description bypasses this difficulty by postulating a viscosity that sets up the energy cascade from large scales to small scales. The origin of such friction has been debated by researchers. In a multiscale hydrodynamic description, the viscosity converts coherent kinetic energy (related to the flow velocity) to incoherent heat energy of microscopic particles at the dissipation scale [@Verma:arxiv:time]; note however that the total kinetic energy at the particle level is conserved. Turbulence typically enhances diffusion. We illustrate this phenomenon using an often-quoted example—heat diffusion from a heater. Since the thermal diffusion coefficient of air is $\kappa \approx 10^{-5}~\mathrm{m}^2/\mathrm{s}$, from kinetic theory or statistical mechanics, the time estimate for heat to diffuse over $L= 1$ m would be $L^2/\kappa \approx 10^5$ seconds. This estimate is clearly incorrect. In reality, heat is advected by the nonlinear term, hence the time scale is $L/U \approx 1/0.1 = 10$ seconds, where $U$ is the velocity of the large-scale structures [@Verma:book:BDF]. A derivation of the aforementioned hydrodynamic diffusion from kinetic theory is not practical. A related phenomenon is the [*Taylor dispersion*]{} [@Taylor:PRSA1954] of particles in a turbulent flow.
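The two time-scale estimates just quoted amount to one-line arithmetic; a minimal check with the same numbers:

```python
# Heat spreading across L = 1 m of air: molecular diffusion vs turbulent advection.
kappa = 1e-5   # thermal diffusivity of air [m^2/s]
L = 1.0        # distance [m]
U = 0.1        # large-scale convective velocity [m/s]

t_diffusive = L ** 2 / kappa   # kinetic-theory (diffusive) estimate, ~ 1e5 s
t_advective = L / U            # hydrodynamic (advective) estimate, ~ 10 s

print(t_diffusive, t_advective, t_diffusive / t_advective)
```

Turbulent advection shortens the transport time by four orders of magnitude, which is why the purely diffusive estimate fails so badly for a heated room.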
The distance between two particles in a turbulent flow increases as $t^{3/2}$, where $t$ is the elapsed time. Note that Taylor dispersion is faster than ballistic dispersion ($\sim t$), which is the fastest dispersion for any particle in kinetic theory. The enhancement in Taylor dispersion is due to the advection of the particles by multiscale structures; particles separated by a distance $r$ hop from vortices of size $r$ to larger vortices that move with even larger speeds. Again, Taylor dispersion would be hard to derive in kinetic theory.

As described in Section \[sec:KT\], the hydrodynamic equations can be derived from kinetic theory. Such derivations yield equations for compressible flows, for which the pressure is the [*thermodynamic pressure*]{} (which has its origin in kinetic theory). However, there is another important pressure, called [*dynamic pressure*]{}, that appears in incompressible hydrodynamics. In Bernoulli’s equation, $p+ \rho u^2/2 = \mathrm{constant}$, $p$ is the dynamic pressure, which is distinct from the thermodynamic pressure. Note that the dynamic pressure can be derived easily in the hydrodynamic framework [@Frisch:book], but it would be very hard to derive in kinetic theory (without going to the coarse-grained picture of hydrodynamics). We remark that a compressible flow contains both dynamic and thermodynamic pressures [@Zank:PF1991], but their derivation in kinetic theory would be far too complex. We conclude in the next section.

Conclusions and Discussions
===========================

In this article, we describe certain hydrodynamic (macroscopic) laws that are difficult to derive [*directly*]{} from a microscopic framework such as kinetic theory. These laws include Kolmogorov’s theory of turbulence, viscous dissipation and Taylor dispersion in turbulent flows, and the dynamic pressure. For these laws, the hydrodynamic description is more adequate than kinetic theory.
These observations are in the spirit of discussions by Anderson [@Anderson:Science1972] and Laughlin [@Laughlin:PNAS2000], who argue in favour of a hierarchical description of systems and laws.

We can go a step (or hierarchy) further in flow complexity. Planetary and stellar flows are quite complex; some of the leading problems in these fields are global warming, ice ages, magnetic field generation, corona heating, mantle and core dynamics of the Earth, land-ocean coupling, monsoons, etc. [@Fowler:book]. To address these problems, the particle description is never employed. Further, it is impractical (in fact, impossible) to solve the relevant primitive equations (flow velocity, chemical constituents, moisture, ice) at all scales. For the Earth, the corresponding length scales range from $10^{-6}$ m to $4\times 10^6$ m. Hence, scientists often model these systems using relevant large-scale variables. For example, ice ages are modelled using the total solar radiation, carbon dioxide, and the mean temperature of the Earth. Similarly, the solar magnetic field is modelled using several magnetic modes in the spherical harmonic basis [@Jones:book_chapter]. There are other equally important tools, like probability, filtering, and machine learning, for describing the aforementioned complex systems.

The next levels of hierarchical structures are the solar system, the galaxy, and the universe. As we move up the hierarchy, the planetary and stellar atmospheres are ignored, and newer sets of variables and equations are used. For example, Newton assumed the Sun and the Earth to be point particles for describing planetary motion; the Millennium simulation of the universe treats the galaxies as point particles embedded in dark matter. Thus, nature has hierarchical structures that have their own laws and relevant tools [@Laughlin:PNAS2000]. However, the system descriptions and associated laws at different levels are connected to each other, most strongly among the neighbouring levels.
For example, kinetic theory and hydrodynamics are intimately connected. Yet, the laws of a system at a given level are best derived using the equations and tools at that level. A possible hierarchical categorisation could be: nuclear and particle physics, atomic and molecular physics, condensed-matter physics, chemistry, biology, ecology, and so on. Another multiscale characterisation is: kinetic description of particles, hydrodynamic description of flows, planetary and stellar atmospheres and interiors, the solar system, galaxies, galaxy clusters, and the universe. These structures help us identify the laws at each level and derive relationships among them. It is important to keep in mind that the connections between the theories at different levels may involve many complications. Berry [@Berry:PT2002] and Batterman [@Batterman:book:Details] describe such issues, in particular the [*singular limits*]{} encountered in such attempts.

Note that [*holism*]{}, considered to be the opposite philosophy of [*reductionism*]{}, advocates that the properties of a system are best understood as a [*whole*]{}, not as a sum of its parts [@Auyang:book]. The hierarchical description, however, differs somewhat from holism: it propounds that the universe is hierarchically structured and is best described by a hierarchy of laws at different scales. These laws, however, may be interlinked, similar to the laws of kinetic theory and hydrodynamics.

The hierarchical framework is often invoked for describing emergent phenomena [@Laughlin:PNAS2000; @Laughlin:book; @Anderson:Science1972; @Anderson:book:More]. For example, chemists, biologists, and material scientists work tirelessly to discover new molecules and materials with specific properties using ab-initio or first-principle calculations. However, centuries ago researchers used to rely on macroscopic properties of materials (such as affinity to water, air, fire, etc.).
Although no one doubts the power of first-principle calculations, the former approaches too could be useful. A major component of climate research involves large-scale simulations of primitive variables on massive grids (say, with one billion grid points). In comparison, at present, much less attention goes into making low-dimensional models based on large-scale or macroscopic variables, such as the mean temperature, solar radiation, land-sea interactions, overall carbon dioxide content, etc. Many believe that a combination of both approaches, microscopic and macroscopic, would yield richer dividends. These illustrations indicate that applications of the hierarchical description may help address some of the complex problems we face today.

Acknowledgments {#acknowledgments .unnumbered}
===============

I thank Anurag Gupta and Michael Berry for useful discussions.

G. Kane, *[Modern Elementary Particle Physics]{}*, 2nd edn. (Cambridge University Press, Cambridge, 2017)
S. Weinberg, *Dreams of a Final Theory* (Vintage, New York, 1992)
S. Hawking, *The Theory Of Everything* (Jaico Publishing House, 2006)
P.W. Anderson, Science **177**(4), 393 (1972)
R.B. Laughlin, D. Pines, PNAS **97**(1), 28 (2000)
P.W. Anderson, *[More and Different: Notes from a Thoughtful Curmudgeon]{}* (World Scientific, 2011)
R.B. Laughlin, *A Different Universe: Reinventing Physics from the Bottom Down* (Basic Books, New York, 2006)
A.R. Choudhuri, *[The Physics of Fluids and Plasmas: An Introduction for Astrophysicists]{}* (Cambridge University Press, Cambridge, 1998)
E.M. Lifshitz, L.P. Pitaevskii, *[Physical Kinetics]{}*, Course of Theoretical Physics (Pergamon Press, Oxford, 2012)
R.L. Liboff, *Kinetic Theory* (Wiley, 1998)
L.D. Landau, E.M. Lifshitz, *[Fluid Mechanics]{}*, 2nd edn., Course of Theoretical Physics (Elsevier, Oxford, 1987)
M. Bisi, J. Phys. A: Math. Theor. **47**(45), 455203 (2014)
A.N.
Kolmogorov, Dokl. Akad. Nauk SSSR **32**, 16 (1941)
A.N. Kolmogorov, Dokl. Akad. Nauk SSSR **30**, 301 (1941)
U. Frisch, *[Turbulence: The Legacy of A. N. Kolmogorov]{}* (Cambridge University Press, Cambridge, 1995)
S.B. Pope, *[Turbulent Flows]{}* (Cambridge University Press, Cambridge, 2000)
M. Lesieur, *[Turbulence in Fluids]{}* (Springer-Verlag, Dordrecht, 2008)
W.D. McComb, *[The Physics of Fluid Turbulence]{}* (Clarendon Press, Oxford, 1990)
S. Succi, *[The Lattice Boltzmann Equation for Fluid Dynamics and Beyond]{}* (Clarendon Press, Oxford, 2001)
M.K. Verma, EPL **98**, 14003 (2012)
A.M. Obukhov, Dokl. Akad. Nauk SSSR **125**, 1246 (1959)
M.K. Verma, Rep. Prog. Phys. **80**(8), 087001 (2017)
M.K. Verma, *Physics of Buoyant Flows: From Instabilities to Turbulence* (World Scientific, Singapore, 2018)
S.K. Singh, C. Thantanapally, S. Ansumali, Phys. Rev. E **94**(6), 063307 (2016)
R.P. Feynman, *[The Character of Physical Law]{}* (Modern Library, New York, 1994)
S. Carroll, *[From Eternity to Here]{}* (Oneworld Publications, 2011)
R.K. Pathria, P.D. Beale, *[Statistical Mechanics]{}*, 3rd edn. (Elsevier, Oxford, 2011)
M.K. Verma, arXiv:1902.03567 (2019)
G.I. Taylor, Proc. R. Soc. A **223**(1), 446 (1954)
G.P. Zank, W.H. Matthaeus, Phys. Fluids A **3**(1), 69 (1991)
C.M.R. Fowler, *[The Solid Earth: An Introduction to Global Geophysics]{}*, 2nd edn. (Cambridge University Press, Cambridge, 2004)
C.A. Jones, in *Dynamo*, ed. by P. Cardin, L.F. Cugliandolo (Elsevier, 2008), pp. 45–135
M. Berry, Phys. Today **55**(5), 10 (2002)
R.W. Batterman, *The Devil in the Details*, Oxford Studies in Philosophy of Science (Oxford University Press, Oxford, 2002)
S.Y. Auyang, *Foundations of Complex-system Theories: In Economics, Evolutionary Biology, and Statistical Physics* (Cambridge University Press, 1999)
---
abstract: 'We establish that solutions to the most simple NLKG equations in $2$ space dimensions with mass resonance exhibit long range scattering phenomena. Modified wave operators and solutions are constructed for these equations. We also show that the modified wave operators can be chosen such that they linearize the non-linear representation of the Poincaré group defined by the NLKG.'
author:
- 'Erik Taflin[^1] [^2]'
date: 'December 2005, Version 2006.12.24'
title: 'Simple Non Linear Klein-Gordon Equations in $2$ space dimensions, with long range scattering'
---

[**Mathematics Subject Classification (2000):**]{} 35L70, 35Q75, 35P25, 74J20\
[**Keywords:**]{} Non-Linear representations, Non-Linear Klein-Gordon equations, long range scattering, normal forms

Introduction
============

The purpose of this article is to study Non-Linear Klein-Gordon Equations in $2$ space dimensions with a finite number of masses $m_i>0,$ having a mass resonance of the following kind, introduced in [@S-T; @85]: For some $j, \ j_1, \ j_2,$ there exist numbers $\epsilon_{j_1}, \ \epsilon_{j_2}=\pm 1$ such that $$\label{Eq1.1} m_j= \epsilon_{j_{1}} m_{j_1} + \epsilon_{j_2} m_{j_2} \ .$$ The equations for the real-valued functions $\varphi_{i}$ are: $$\label{Eq1.2} (\square + m_i^{2}) \varphi_{i}= F_i (\varphi, \partial \varphi),$$ where $\varphi$ is the vector with components $\varphi_{i},$ $t \in \mathbb{R},$ $x \in \mathbb{R}^{2},$ $ \varphi_{i}(t,x) \in \mathbb{R},$ $\partial = (\partial_{0},\partial_{1},\partial_{2}),$ $\partial_{0} =\frac{\partial}{\partial t},$ $\partial_{j} = \frac{\partial}{\partial x_{j}}$ for $j=1,2,$ $\Delta = {\sum_{i=1}^{2} \partial_{i}^{2}},$ $\square = (\partial_{0})^{2} - \Delta .$ The $F_i$ are real $C^{\infty}$ functions, vanishing together with their first derivatives at the origin. In this paper we shall study the simplest cases of eq. (\[Eq1.2\]), when condition (\[Eq1.1\]) is satisfied.
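A useful heuristic (ours, not from the original text; it uses only the dispersion relation $\omega_M(p)=(M^2+|p|^2)^{1/2}$ of the linear Klein-Gordon equation) shows why (\[Eq1.1\]) is exactly the critical condition. Since $\omega_M(p)$ is jointly homogeneous of degree one in $(M,p)$, for the case $\epsilon_{j_1}=\epsilon_{j_2}=1$ and collinear momenta $p_i=(m_{j_i}/m_j)\,k$ one has $$\omega_{m_{j_i}}\Big(\frac{m_{j_i}}{m_j}\,k\Big)=\frac{m_{j_i}}{m_j}\,\omega_{m_j}(k), \quad \text{hence} \quad \omega_{m_{j_1}}(p_1)+\omega_{m_{j_2}}(p_2)=\omega_{m_j}(k).$$ Thus the phase of the quadratic interaction vanishes identically on a whole family of momenta, so the interaction does not average out under time oscillations; combined with the $t^{-1}$ decay of the $L^\infty(\mathbb{R}^2)$-norm of free solutions, the resonant contribution is of size $t^{-1}$, which is not integrable in time and produces logarithmic long-range corrections. For the systems studied below, with masses $m$ and $2m$, the identity reads $\omega_{2m}(k)=2\,\omega_{m}(k/2)$ for all $k\in\mathbb{R}^2$.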
For a given mass $ m>0 ,$ we consider the following two systems of non-linear Klein-Gordon (NLKG) equations, each containing one of the basic critical terms of (\[Eq1.2\]): $$\label{Eq1.3} (\square + m^{2}) \varphi_{1}= 0, \quad (\square + (2m)^{2}) \varphi_{2}=(\varphi_{1})^{2}$$ and $$\label{Eq1.4} (\square + m^{2}) \varphi_{1}=\varphi_{1} \varphi_{2}, \quad (\square + (2m)^{2}) \varphi_{2}=0.$$ It easily follows that the Cauchy problem for each of the systems of equations (\[Eq1.3\]) and (\[Eq1.4\]) has global solutions even for large initial data (see Theorem \[th1\] for a precise formulation). The scattering problem is more interesting, since it is only the quadratic terms in (\[Eq1.2\]) which can give rise to long-range phenomena:

1\) We establish (Theorem \[th2\]) that the systems (\[Eq1.3\]) and (\[Eq1.4\]) have “long range” modified wave operators and that they fail to have “short range” wave operators. This is due to the second-degree “mass resonance”, defined by (\[Eq1.1\]), which is present in these systems, together with the $t^{-1}$ time decrease of the $L^{\infty}(\mathbb{R}^{2})$-norm of solutions of the linear K-G equation. This should be compared with the small-data Cauchy and scattering problem for the NLKG $$\label{Eq1.5} (\square + m^{2}) \varphi= F(\varphi, \partial \varphi),$$ with only one mass $m>0.$ For $n \geq 2$ space dimensions, the scattering theory of (\[Eq1.5\]) is short range [@S-T; @92] (see also [@Horm97] and references therein for further developments), which reflects the fact that there is no second-degree “mass resonance”. However, there is a third-degree “mass resonance” which for $n=1$, together with the $t^{-1}$ time decrease of the $L^{\infty}(\mathbb{R})$-norm of $\varphi^{2}$, gives rise to the “long range” behavior treated in [@D01], [@L-S1] and [@L-S2] for the cubic NLKG. We note that the asymptotic completeness of the modified wave operators for (\[Eq1.4\]) is not studied in this paper.
The methods in [@L-S2], adapted to spaces of initial conditions like Schwartz spaces, seem to give a promising departure for such future studies. For (\[Eq1.3\]) the asymptotic completeness is a trivial consequence of Theorem \[th1\] and Theorem \[th2\].

2\) For $n \geq 1$ space dimensions, all formal nonlinear representations of the Poincaré group only involving massive fields are (at least formally) linearizable (see [@T; @84], where the corresponding cohomology was proved to be trivial). Then a natural question is: can modified wave operators be chosen such that they intertwine the non-linear representation of the Poincaré group (and its Lie algebra), naturally defined on initial conditions for (\[Eq1.3\]) and (\[Eq1.4\]), and the linear representation defined by their linear part, i.e. $$\label{Eq1.5.1} (\square + m^{2}) \varphi_{1}=0, \quad (\square + (2m)^{2}) \varphi_{2}=0.$$ We prove that the answer is yes (Theorem \[th2\]). This is not at all automatic. For example, it is not possible for the Maxwell-Dirac equations in three space dimensions. In fact, as was proved in [@FST97], MD is non-linearizable on natural spaces of initial conditions.

We next write equations (\[Eq1.3\]) and (\[Eq1.4\]) as evolution equations in a Hilbert space $E.$ The variable $a(t)=(a_{1,+}(t), a_{1,-}(t),a_{2,+}(t), a_{2,-}(t))$ is defined by: $$\label{a1.4} a_{j,\epsilon} (t) = \dot{\varphi}_{j} (t)+ \epsilon i \omega_{jm} (-i \nabla) \varphi_{j}(t),\;\; \epsilon = \pm 1,$$ where $\omega_M (p)=(M^2+|p|^2)^{1/2}$ and $\dot{\varphi}_{j}(t,x) = \frac{\partial}{\partial t}\varphi_{j} (t,x)$.
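The transformation (\[a1.4\]) diagonalizes the linear part of the equations. Indeed (a one-line computation, spelled out here for convenience), using $\ddot{\varphi}_{j} = -\omega_{jm}^{2}(-i\nabla)\varphi_{j} + F_{j}$, $$\frac{d}{dt}a_{j,\epsilon}(t) = \ddot{\varphi}_{j}(t) + \epsilon i \omega_{jm}(-i\nabla)\dot{\varphi}_{j}(t) = \epsilon i \omega_{jm}(-i\nabla)\big(\dot{\varphi}_{j}(t) + \epsilon i \omega_{jm}(-i\nabla)\varphi_{j}(t)\big) + F_{j} = \epsilon i \omega_{jm}(-i\nabla)\,a_{j,\epsilon}(t) + F_{j},$$ since $(\epsilon i)^{2}=-1$; this is the evolution equation (\[a1.6\]), with the same nonlinearity $F_{j}$ entering both components $\epsilon = \pm$.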
The inverse of the transformation (\[a1.4\]) is $$\label{a1.5} \varphi_{j}(t) = (2i\omega_{jm}(-i \nabla))^{-1} (a_{j,+}(t)-a_{j,-}(t)), \quad \dot{\varphi}_{j}(t) = 2^{-1}(a_{j,+}(t)+a_{j,-}(t)).$$ Equations (\[Eq1.3\]) and (\[Eq1.4\]) then read $$\begin{cases} \label{a1.6} &\frac{d}{dt} a_{1}(t) = i\omega_{m}(-i \nabla) (a_{1,+}(t),-a_{1,-}(t)) + (F_{1}(a(t)),F_{1}(a(t))) \\ &\frac{d}{dt} a_{2}(t) = i\omega_{2m}(-i \nabla) (a_{2,+}(t),-a_{2,-}(t)) + (F_{2}(a(t)),F_{2}(a(t))), \end{cases}$$ where in the case of equation (\[Eq1.3\]) $$\label{a1.7.1} \begin{split} F_{1}=0, \; \; F_{2}(a(t)) =&\left((2i\omega_{m}(-i\nabla))^{-1}(a_{1,+}(t)-a_{1,-}(t))\right) \\ & \quad \left((2i\omega_{m}(-i\nabla))^{-1}(a_{1,+}(t)-a_{1,-}(t))\right) \end{split}$$ and in the case of equation (\[Eq1.4\]) $$\label{a1.7.2} \begin{split} F_{2}=0, \;\; F_{1}(a(t)) =&\left((2i\omega_{m}(-i\nabla))^{-1}(a_{1,+}(t)-a_{1,-}(t))\right) \\ & \quad \left((2i\omega_{2m}(-i\nabla))^{-1}(a_{2,+}(t)-a_{2,-}(t))\right). \end{split}$$ The real Hilbert space $E$ is defined by $E=E_{(1)} \oplus E_{(2)}$ with norm $\|f\|_E =(\sum_{j=1,2} \|f_j\|_{E_{(j)}}^{2})^{1/2},$ where $E_{(j)}$ is the real subspace of $E_{(j)}^{C}=E_{(j,+)} \oplus E_{(j,-)}$ such that the image of the transformation (\[a1.5\]) only contains real functions. The norms in the complex Hilbert spaces $E_{(j)}^{C}$ and $E_{(j,\epsilon)}$ are given by $$\label{Eq1.5.1.-1} \|f_j\|_{E_{(j)}}=(\sum_{\epsilon = \pm} \|f_{j,\epsilon}\|_{E_{(j,\epsilon)}}^{2})^{1/2} \; \mathrm{and} \; \|f_{j,\epsilon}\|_{E_{(j,\epsilon)}} =\|(\omega_{jm} (-i\nabla))^{-1/2}f_{j,\epsilon}\|_{L^{2}}.
$$ We shall define modified out and in wave operators $\Omega_{+}: \mathcal{O}^{+} \rightarrow \mathcal{O}^{0}$ and $\Omega_{-}: \mathcal{O}^{-} \rightarrow \mathcal{O}^{0}$ respectively, by introducing, for given scattering data $f \in \mathcal{O}^{\delta},$ $\delta =\pm,$ an approximate solution $a^{(\delta)}(f)$ satisfying, for some initial condition $a(0)$ of equation (\[a1.6\]) and for $\alpha =0$: $$\label{Eq1.5.3} \lim_{t \rightarrow \delta \infty} (1+|t|)^\alpha \| a(t)-(a^{(\delta)}(f))(t) \|_{E} =0.$$ By the uniqueness of the solution $a$ we can now define $$\label{Eq1.5.4} a(0)=\Omega_{\delta}(f).$$ Since the cases $\delta = \pm$ are so similar, we limit ourselves to $\delta =+.$ A study of the large-time behavior of solutions of (\[a1.6\]) by stationary phase methods, and the use of [@T; @84] to construct linearization maps of nonlinear representations of the Poincaré group, leads to a choice of approximate solutions $a^{(+)}(f).$ With the notation $V(t)_{(j,\epsilon)}=\exp( i\omega_{jm}(-i \nabla) t )$ we define $ (a^{(+)}(f))(t) = V(t)(b^{(+)}(f))(t),$ where in the case of (\[a1.7.1\])[^3] $$\begin{cases} \label{a1.6+} & b^{(+)}_{1}(t) = f_{1} \\ &(b^{(+)}_{2,\epsilon}(t))^{\hat{}}(k) = \hat{f}_{2,\epsilon}(k) - i\epsilon \ln\left(1+\frac{t(2m)^{2}}{\omega_{2m}(k)}\right) \, \frac{1}{8m} (\hat{f}_{1,\epsilon}(k/2))^{2} \end{cases}$$ ($f$ in $(b^{(+)}(f))(t)$ has here been omitted) and in the case of (\[a1.7.2\]) $$\begin{cases} \label{a1.7+} & b^{(+)}_{1}(t) = \exp{\left(\frac{1}{4m} L(f_{2}) \ln\left(1+t m^{2} (\omega_{m}(-i \nabla))^{-1}\right) \right)}f_{1} \\ & b^{(+)}_{2}(t) = f_{2}, \end{cases}$$ where for $g \in E_{(1),\infty}^{C}$ and $h \in E_{(2),\infty}^{C},$ $E_{(j),\infty}^{C} =S(\mathbb{R}^{2},\mathbb{C}) \oplus S(\mathbb{R}^{2},\mathbb{C}),$ $$\label{Eq1.8} ((L(h)g)_\epsilon)^{\hat{}}(k) \equiv (L_\epsilon(h)g_{-\epsilon})^{\hat{}}(k) =i \epsilon \hat{h}_\epsilon (2k) \hat{g}_{-\epsilon}(-k).$$ Then, for a given $f \in E_\infty =E_{(1),\infty} \oplus
E_{(2),\infty},$ where $E_{(j),\infty}=E_{(j)} \cap E_{(j),\infty}^{C},$ $a$ is formally a solution of $$\label{Eq1.9} a(t)= (a^{(+)}(f))(t) -\int_t^\infty V(t-s) \left(T^{2}_{P_0} (a(s)) -V(s) (\dot{b}^{(+)}(f))(s) \right) \, ds,$$ where $(\dot{b}^{(+)}(f))(t)=\frac{d}{dt}(b^{(+)}(f))(t)$ and see (\[a1.11\]) for $T^{2}_{P_0}.$ A rigorous study of this equation in the next section will lead to the construction and covariance properties of modified wave operators (Theorem \[th2\]).

The construction of modified wave operators and solutions of more general evolution equations also leads to an equation analogous to (\[Eq1.9\]), where the recipe (usually based on an iteration starting with a free solution) for finding an approximate solution $a^{(+)}(f)$ for given scattering data $f$ has to be specified in each particular case. In the case of relativistic covariant equations, this was accomplished for the MD eq. in three space dimensions [@FST87] (see also [@FST97] for asymptotic completeness) and for NLKG in one space dimension [@D01], [@L-S1] and [@L-S2]. For NLS it was accomplished in [@0-91] and for several other non-relativistic equations in [@GB03] and references therein to related papers by the same authors.

The Poincaré group $\mathcal{P}=\mathbb{R}^{3}\rtimes SO(2, 1)$ acts on elements $y=(y^{0},y^{1},y^{2})$ in the 3-dimensional Minkowski space by $g y = \Lambda y - a,$ where $g = (a,\Lambda),$ $\Lambda \in SO(2, 1)$ and $a \in \mathbb{R}^{3}.$ $\mathcal{P}$ acts on real functions $f$ on the Minkowski space by a linear representation $R$: $$\label{a1.7.3} (R_g f)( y ) = f (g^{-1} y), \quad y \in \mathbb{R}^{3}.$$ Covariance of the NLKG under the representation $R$ leads to nonlinear representations of $\mathcal{P}.$ $\Pi = \{P_0,P_1,P_2,R,N_{1},N_{2}\}$ denotes an ordered standard basis of the Poincaré Lie algebra $\mathfrak{p} = \mathbb{R}^{3}\oplus_{s} so(2,1)$ in $3$ dimensions.
Here $P_0,$ $P_1,$ $P_2,$ $R,$ $N_{1}$ and $N_{2}$ are, respectively, the time translation, the two space translations, the space rotation and the two boost generators. We define a linear representation $T^{1}$ of $\mathfrak{p}$ in the Schwartz space $E_{\infty}$ of elements $f=(f_{1,+},f_{1,-},f_{2,+},f_{2,-})$ by: $$\begin{aligned} \label{a1.8} &(T^{1}_{{P}_{0}}f)_j =i\omega_{jm}(-i\nabla) (f_{j,+} , - f_{j,-}) , \quad j=1,2, \\ &T^{1}_{{P}_{n}} f = {\partial}_{n} f ,\quad n=1,2 , \\ &T^{1}_{R}f = m_{12}f ,\quad m_{12} = x_{1} {\partial}_{2} -x_{2} {\partial}_{1} , \\ &(T^{1}_{N_{n}}f)_j (x) = (i\omega_{jm}(-i \nabla) x_{n} f_{j,+} , -i\omega_{jm}(-i \nabla) x_{n} f_{j,-}) , \; j,n=1,2.\end{aligned}$$ The non-linear representation $T$ of $\mathfrak{p}$ on $E_{\infty}$ (see [@FSP77]) is obtained from the fact that equations (\[Eq1.3\]) and (\[Eq1.4\]) are manifestly covariant: $$\label{a1.10} T_{X} = T^{1}_{X} + T^{2}_{X} ,\quad X \in \mathfrak{p} ,$$ where for $f \in E_{\infty}$ the quadratic term $T^{2}$ is given by $$\begin{aligned} \label{a1.11} &T^{2}_{P_0} (f) = (F_1(f) , F_1(f),F_2(f) , F_2(f)), \\ &T^{2}_{P_1} =T^{2}_{P_2}=T^{2}_{R} = 0, \\ &(T^{2}_{N_{n}} (f))(x) = x_{n}(T^{2}_{P_0} (f))(x) ,\quad n=1,2.\end{aligned}$$ In particular, equation (\[a1.6\]) reads $$\label{a1.6.1} \frac{d}{dt} a(t) = T_{P_0} (a(t)).$$ The representation $T^{1}$ is the differential of a unitary representation $U^{1}$ of the Poincaré group $\mathcal{P}$ in the Hilbert space $E.$ Let $\Pi'$ be the standard basis of the universal enveloping algebra $\mathcal{U}(\mathfrak{p})$ of $\mathfrak{p}$ corresponding to $\Pi.$ We give $\Pi'$ its lexicographic order with respect to the ordered basis $\Pi.$ Let $\vert Y \vert$ be the degree of $Y \in \Pi'.$ The space $E_n$ of $n$-differentiable vectors for the representation $U^{1}$ in $E$ coincides with the Hilbert space obtained by the completion of $E_\infty$ with respect to the norm (summing over $Y \in \Pi'$ and $\vert Y\vert \leq n$) $$\Vert
u\Vert _{E_n} = (\sum \Vert T^1_Y u \Vert ^2_E)^{1/2},$$ where $T^1_Y$ is defined by the canonical extension of $T^1$ from $\mathfrak{p}$ to $\mathcal{U}(\mathfrak{p}).$ We have $E_\infty \subset E_j \subset E_i \subset E_0 = E$ for $i\leq j$. $U^1_j$ and $T^1_j$ denote the representations obtained by restricting $U^1$ and $T^1$ to $E_{(j)}$ and $E_{(j), \infty}$ respectively. Here $E_{(j), n}$ and $E_{(j), \infty}$ denote the images of the canonical projections of $E_{n}$ and $E_{\infty}$ on $E_{(j)}.$ We note the well-known fact that the norms $\Vert \;\; \Vert _{E_n}$ and $q_n$ are equivalent, where (summing over multi-indices with $x^\mu=x_1^{\mu_1}x_2^{\mu_2}$ and $\nabla^\mu=\partial_1^{\mu_1}\partial_2^{\mu_2}$) $$\label{Eq1.5.1.0} q_n (f)=\bar{q}_n ((I-\Delta)^{-1/4}f) \; \; \text{and} \;\; \bar{q}_n (f) = (\sum_{\vert \mu \vert, \, \vert \nu \vert \leq n} \Vert x^\mu \nabla^\nu f \Vert^2_{L^2})^{{1 / 2}}.$$ The linear map $ X \mapsto T_X,$ from $\mathfrak{p}$ to the vector space of all $C^{\infty}$ maps from $E_\infty$ to $E_\infty,$ extends to $\mathcal{U}(\mathfrak{p})$ by defining inductively (see [@S-T; @92]): $T_{\mathbb{I}} = I$, where $\mathbb{I}$ is the identity element in the enveloping algebra, and $$\label{Eq1.5.1.1} T_{YX} = D T_Y.T_X, \quad X \in \mathfrak{p},$$ where $(DA.B)(f)=(DA)(f;B(f))$ is the Fréchet derivative of the map $A$ at the point $f$ in the direction $B(f).$ Suppose for the moment that the nonlinear Lie algebra representation $ X \mapsto T_X$ is (locally) integrable, i.e.
in this case, $\forall X \in \mathfrak{p}$ and $\forall f \in E_\infty$ there exists $c>0$ such that for $|t| < c$ $$\label{Eq1.5.1.2} \frac{d}{dt} U_{g(t)}(f)=T_{X}(U_{g(t)}(f)), \quad g(t)=\exp{(tX)}.$$ Then, for an element $Y \in \mathcal{U}(\mathfrak{p})$ (see [@S-T; @92], [@S-T; @95]), $$\label{Eq1.5.2} \frac{d}{dt} T_{Ad_{g(t)}(Y)} (U_{g(t)} (f)) = T_{X Ad_{g(t)}(Y)} (U_{g(t)} (f)),$$ where the adjoint representation is given by $$\frac{d}{dt} Ad_{g(t)} Y = [X, Ad_{g(t)} Y] , \quad Ad_{g(0)}Y=Y.$$

Main Results
============

Since the equation given by (\[a1.6\]) for $a_2$ (resp. $a_1$) in the case of (\[a1.7.1\]) (resp. (\[a1.7.2\])) is simply a linear K-G equation with an inhomogeneous (resp. a linear potential) term, we easily prove the following theorem:

\[th1\] i) There exists $N_0$ such that for $N \geq N_0,$ $T$ is integrable to a unique global nonlinear analytic group representation $U$ of $\mathcal{P}$ on $E_N$ and $U: \mathcal{P} \times E_\infty \rightarrow E_\infty$ is $C^\infty.$\
ii) For all initial conditions $f \in E_\infty,$ equation (\[a1.6\]) has a unique $C^\infty$ solution $a: \mathbb{R} \rightarrow E_\infty.$\
iii) For all initial conditions $(\varphi_{1}(0),\dot{\varphi}_{1}(0),\varphi_{2}(0),\dot{\varphi}_{2}(0)) \in S(\mathbb{R}^{2},\mathbb{R}^{4}),$ there exists a unique solution $(\varphi_{1},\varphi_{2}) \in C^{\infty}(\mathbb{R}^{3}, \mathbb{R}^{2})$ of eq. (\[Eq1.3\]) (resp. (\[Eq1.4\])).
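To make the triangular structure behind Theorem \[th1\] explicit (a standard observation, sketched here for the reader; it is not spelled out in the text): in the case of (\[Eq1.3\]), $\varphi_1$ solves the free K-G equation, and $\varphi_2$ is then given by Duhamel's formula $$\varphi_{2}(t) = \cos\big(t\,\omega_{2m}(-i\nabla)\big)\varphi_{2}(0) + \frac{\sin\big(t\,\omega_{2m}(-i\nabla)\big)}{\omega_{2m}(-i\nabla)}\,\dot{\varphi}_{2}(0) + \int_{0}^{t} \frac{\sin\big((t-s)\,\omega_{2m}(-i\nabla)\big)}{\omega_{2m}(-i\nabla)}\,(\varphi_{1}(s))^{2}\,ds,$$ with the source $(\varphi_{1})^{2}$ known in advance, so no fixed-point argument is needed for global existence. For (\[Eq1.4\]) the roles are reversed: $\varphi_2$ is free and $\varphi_1$ solves a linear equation with the time-dependent potential $\varphi_2$.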
**Outline of proof:** Proceeding as in [@S-T; @92] and [@S-T; @95], for $Y \in \mathcal{U}(\mathfrak{p})$ and $X \in \mathfrak{p}$ introduce $$u_Y (t)= T_{Ad_{\exp{(t X)}}(Y)} (u(t)).$$ Let $u(0)=f \in E_\infty.$ According to equation (\[Eq1.5.2\]), $$\label{Eq2.1} \frac{d}{dt}u_Y (t) = u_{X Y}(t), \quad u_Y (0)=T_{Y} (f).$$ Let $\mathbb{I} < Y_1 < \ldots < Y_{c(k)}$ be the lexicographic ordering of the set of $Y\in \Pi'$ such that $\vert Y\vert \leq k,$ and let $$\label{Eq2.2} v_N (t) = (u_{\mathbb{I}} (t), u_{Y_1} (t) , \ldots, u_{Y_{c(N)}} (t)), \quad N \geq 0.$$ According to formula (2.23a) of [@S-T; @95], (\[Eq2.1\]) leads to an equation for $v_N$: $$\label{Eq2.3} v_N (t) = U^1_{\exp{(tX)}} v_N (0) + \int^t_0 U^1_{\exp{((t-s)X)}} G_N (v_N (s)) ds,$$ for some, in this case quadratic, forms $G_N$ depending on $X.$ We define, for a function $d: \Pi' \rightarrow E$ and for $n \in \mathbb{N}$: $$\label{Eq2.4} \mathcal{P}_n (d) = (\sum_{\substack{Y\in \Pi' \\ \vert Y\vert \leq n}} \Vert d_Y \Vert_E^2)^{1/2}.$$ Choosing $N_0$ sufficiently large, one obtains from (\[a1.7.1\]), (\[a1.7.2\]) and (\[Eq2.3\]), using the unitarity of $U^1,$ that $$\notag \mathcal{P}_N (u(t)) \leq \mathcal{P}_N (T(f)) + \int_{0}^{t}C_{N} \mathcal{P}_N (T(f)) (1+s)^{-1} \mathcal{P}_N (u(s))\,ds, \; N \geq N_0.$$ Then by Grönwall’s lemma, $\mathcal{P}_N (u(t)) \leq \mathcal{P}_N (T(f)) (1+t)^{C_{N} \mathcal{P}_N (T(f))} < \infty$ for $t \geq 0.$ Statement (i) now follows by using Theorem 6 of [@S-T; @95]. Statements (ii) and (iii) are direct consequences of (i). [**QED**]{}

The following two lemmas give the time decrease of $b^{(+)}(t)$ and its derivatives.
\[lm1\] Let $f \in E_\infty.$ Then $t \mapsto b^{(+)}(t)$ is a $C^\infty$ mapping from $[0, \infty [ \; $ to $E_\infty$ and there exist constants $C$ independent of $f$ and $C_{N,n}$ such that for all $f \in E_\infty,$ $t \geq 0,$ $n \geq 0$ and $N \geq 2$\
i) if $F$ is given by (\[a1.7.1\]) then $C_{N,n}$ is independent of $f,$ $$\label{Eq2.5} \Vert b^{(+)}_2 (t) \Vert_{E_{(2), N}} \leq \Vert f_2 \Vert_{E_{(2), N}} + C_{N,0} \ln{(2+tm)} \|f_1 \|_{E_{(1), N}} \|f_1 \|_{E_{(1),2}}$$ and for $n \geq 1$ $$\label{Eq2.6} \Vert \frac{d^n}{dt^n}b^{(+)}_2 (t) \Vert_{E_{(2), N}} \leq C_{N,n} (1+t)^{-n} \|f_1 \|_{E_{(1), N}} \|f_1 \|_{E_{(1),2}};$$ ii) if $F$ is given by (\[a1.7.2\]) then $C_{N,n}$ only depends on $\|f_2 \|_{E_{(2), 3}}$ and $$\label{Eq2.7} \begin{split} \Vert \frac{d^n}{dt^n}b^{(+)}_1 (t)& \Vert_{{E_{(1), N}}} \leq C_{N,n} (1+t)^{C \|f_2 \|_{E_{(2),1}} -n} (\ln{(2+tm)})^{2N+1} \\ & (\|f_1 \|_{E_{(1), N}}+ \|f_1 \|_{E_{(1), 3}} \|f_2 \|_{E_{(2), N}} +\|f_1 \|_{E_{(1), N}} \|f_2 \|_{E_{(2), 3}}). \end{split}$$

**Proof:** We only consider the more difficult case (ii).
Expression (\[a1.7+\]) gives $$\label{Eq1 prooof lm1} (b^{(+)}_{1}(t))_{\epsilon}^{\hat{}}(k) = \cosh{(S(t,k))}f_{1,\epsilon}^{\hat{}}(k) +\frac{\sinh{(S(t,k))}}{S(t,k)}T(t,k),$$ where, for given $\epsilon,$ $S(t,k)=(1/4m) |f_{2,\epsilon}^{\hat{}}(2k)| \ln{(1+\frac{t m^{2}}{\omega_{m}(k)})}$ and $T(t,k)=\frac{i \epsilon}{4m}f_{2,\epsilon}^{\hat{}}(2k) \ln{(1+\frac{t m^{2}}{\omega_{m}(k)})} f_{1,-\epsilon}^{\hat{}}(-k).$ Let $F_r(z)=\sum_{n \geq 0} z^n / ((2n+r)!),$ $r=0,1.$ Then $F_0(z^2)=\cosh{(z)}$ and $F_1(z^2)=\sinh{(z)}/z.$ The $n$-th derivative satisfies $|F_r^{(n)}(z)| \leq F_r^{(n)}(|z|)$ and $F_r^{(n+1)}(x) < F_r^{(n)}(x),$ $x\geq 0.$ We define the norms $Q_N$ and $Q'_N,$ $N \geq 1,$ by $$\label{Eq2 proof lm1} Q_N (a) =\|a\|_{L^\infty}+Q'_N (a), \;\; Q'_N (a) = (\sum_{\substack{0 \leq \vert \mu \vert \leq N \\ 1 \leq \vert \nu \vert \leq N}} \Vert \frac{x^\mu}{(1+|x|^2)^{1/4}} \nabla^\nu a \Vert_{L^2}^2)^{{1 / 2}}.$$ Let $h_r(t,k)=F_r(g(t,k)),$ where $g(t,k)=(S(t,k))^2.$ Applying $k^\alpha (\partial/\partial k)^\beta$ to $h_r(t,k),$ for multi-indices $\alpha$ and $\beta,$ and using the above properties of $F_r,$ the expression (\[Eq1.5.1.0\]), Plancherel’s theorem and interpolation, gives for $N \geq 3$: $$\notag Q_N ( h_r(t,\cdot)) \leq C_N F_r(\| g(t,\cdot)\|_{L^\infty}) (1+Q'_{3}(g(t,\cdot)))^{N-1} (1+Q'_{N}(g(t,\cdot))).
$$ We note that $\| g(t,\cdot)\|_{L^\infty} \leq C^2 \|f_2\|_{E_{(2),1}}^2 (\ln{(2+t m)})^2,$ and that by interpolation $Q'_{N}(g(t,\cdot)) \leq C'_N Q'_{N}((\hat{f}_{2})^2) (\ln{(2+t m)})^2 \leq C''_N \|f_2\|_{E_{(2),3}} \|f_2\|_{E_{(2),N}} (\ln{(2+t m)})^2.$ Since $F_r(x^2) \leq e^x,$ $x \geq 0,$ we obtain for $ N \geq 3:$ $$\label{Eq4 proof lm1} Q_N (h_r(t,\cdot)) \leq C_N (1+tm)^{C \|f_2\|_{E_{(2),1}}} (\ln{(2+t m)})^{2 N} (1+\|f_2\|_{E_{(2),3}})^{N-1}(1+\|f_2\|_{E_{(2),N}}).$$ Interpolation then gives, with $(H_0(t))^{\hat{}}(k)=h_0(t,k) f_{1,\epsilon}^{\hat{}}(k)$ and $(H_1(t))^{\hat{}}(k)=h_1(t,k) T(t,k):$ $$\label{Eq5 proof lm1} \begin{split} &\|H_0(t)\|_{E_{(1),N}}+\|H_1(t)\|_{E_{(1),N}} \leq C_N (1+tm)^{C \|f_2\|_{E_{(2),1}}} (\ln{(2+t m)})^{2 N +1} \\ &(1+\|f_2\|_{E_{(2),3}})^{N} (\|f_1\|_{E_{(1),N}}(1+\|f_2\|_{E_{(2),3}})+\|f_1\|_{E_{(1),3}}\|f_2\|_{E_{(2),N}}), \end{split}$$ which proves (\[Eq2.7\]) in the case of $n=0.$ Repeated use of $$\label{Eq6 proof lm1} \frac{d}{dt}b^{(+)}_{1}(t) =\frac{1}{4} L(f_{2}) (\omega_{m}(-i \nabla)/m+t m)^{-1}b^{(+)}_{1}(t)$$ and interpolation leads to, for $N\geq 3$ and $n \geq 1:$ $$\label{Eq7 proof lm1} \begin{split} &\Vert \frac{d^n}{dt^n}b^{(+)}_1 (t) \Vert_{{E_{(1), N}}} \leq C_{N,n} (1+tm)^{-n} \\ &(1+\|f_2\|_{E_{(2),3}})^{n-1} (\|b^{(+)}_1 (t)\|_{E_{(1),N}}\|f_2\|_{E_{(2),3}} +\|b^{(+)}_1 (t)\|_{E_{(1),3}}\|f_2\|_{E_{(2),N}}). \end{split}$$ The case $n=0$ of (\[Eq2.7\]) and inequality (\[Eq7 proof lm1\]) prove statement (ii) of the lemma. 
[**QED**]{}

\[lm2\] For all $f \in E_\infty,$ $t \geq 0$ and $n,N \geq 0$ there exist a constant $C$ independent of $f,$ and constants $C_{N,n}$ and $N',$ such that\
i) if $F$ is given by (\[a1.7.1\]) then $C_{N,n}$ is independent of $f$ and $$\label{Eq2.8} \begin{split} q_N&\left(\frac{d^n}{dt^n} \left(e^{-i \epsilon \omega_{2m}(-i\nabla)t} ((2i\omega_{m}(-i\nabla))^{-1}a^{(+)}_{1,\epsilon}(t) )^{2} - \dot{b}^{(+)}_{2,\epsilon} (t)\right)\right) \\ & \quad \quad \leq C_{N,n} (1+t)^{-n-2} \|f_1\|_{E_{(1), N'}}^{2}, \end{split}$$ ii) if $F$ is given by (\[a1.7.2\]) then $C_{N,n}$ only depends on $\|f \|_{E_{ 3}}$ and $$\label{Eq2.9} \begin{split} &q_N\left(\frac{d^n}{dt^n} \left(-e^{-i \epsilon \omega_{m}(-i\nabla)t} ((2i\omega_{m}(-i\nabla))^{-1}a^{(+)}_{1,-\epsilon}(t)) ((2i\omega_{2m}(-i\nabla))^{-1}a^{(+)}_{2,\epsilon}(t)) - \dot{b}^{(+)}_{1,\epsilon} (t)\right)\right) \\ & \quad \quad \leq C_{N,n} (1+tm)^{C \|f_2 \|_{E_{(2),1}} -n-2} (\ln{(2+tm)})^{2N'+1}\|f_1\|_{E_{(1), N'}} \|f_2\|_{E_{(2), N'}}. \end{split}$$

**Proof:** We only consider the case (ii). Define $h,$ $I$ and $J$ by $$\notag \begin{split} &(h(s))^{\hat{}}(k)=\frac{i\epsilon}{4m} (b^{(+)}_{1,-\epsilon}(s))^{\hat{}} (-k) f_{2,\epsilon}^{\hat{}}(2k), \;\; I(t)=\dot{b}^{(+)}_{1,\epsilon} (t)-h(t)/t, \;\; J(t,s)=- h (s)/t \\ &-e^{-i \epsilon \omega_{m}(-i\nabla)t} ((2i\omega_{m}(-i\nabla))^{-1}e^{-i \epsilon \omega_{m}(-i\nabla)t}b^{(+)}_{1,-\epsilon}(s)) ((2i\omega_{2m}(-i\nabla))^{-1}e^{-i \epsilon \omega_{2m}(-i\nabla)t}f_{2,\epsilon}). \end{split}$$ We have to prove that $q_N (\frac{d^n}{dt^n} (J(t,t)-I(t)))$ is majorized by the right-hand side of inequality (\[Eq2.9\]).
Let $J_{n_1,n_2}(t,s)=(d/(dt))^{n_1}(d/(ds))^{n_2} J(t,s).$ Theorem \[th 3.3\] (with $n=0$ and $(d/(ds))^{n_2}h(s)$ instead of $f_0$) gives $$\notag q_N (J_{n_1,n_2}(t,s)) \leq C_{N,n} (1+tm)^{-n_1-2} q_{N'}((\frac{d}{ds})^{n_2}b^{(+)}_{1,\epsilon} (s)) \, \|f_2\|_{E_{(2), N'}}.$$ Inequality (\[Eq2.7\]) then gives, with $n=n_1+n_2$ and new $C_{N,n}$ and $N':$ $$\notag q_N (J_{n_1,n_2}(t,t)) \leq C_{N,n} (1+tm)^{C \|f_2 \|_{E_{(2),1}} -n-2} (\ln{(2+tm)})^{2N'+1} \|f_1\|_{E_{(1), N'}} \|f_2 \|_{E_{(2), N'}}.$$ Summing over $n_1+n_2=n,$ it follows that $q_N (\frac{d^n}{dt^n} J(t,t))$ is majorized by the right hand side of inequality (\[Eq2.9\]). This is also the case for $q_N (\frac{d^n}{dt^n} I(t)).$ In fact, according to (\[Eq6 proof lm1\]), $(I(t))^{\hat{}}(k)=(i \epsilon / (mt))((1+\omega_m(k)/(tm^2))^{-1}-1) (b^{(+)}_{1,-\epsilon}(t))^{\hat{}} (-k) f_{2,\epsilon}^{\hat{}}(2k).$ Differentiation in $t$ and application of inequality (\[Eq2.7\]) now give the result. [**QED**]{} To state the main result on the existence of covariant modified wave operators for equation (\[a1.6\]), with the nonlinearities (\[a1.7.1\]) and (\[a1.7.2\]), we define $\mathcal{O}^{+} =E_\infty$ in the case of (\[a1.7.1\]) and $\mathcal{O}^{+} =\{f \in E_\infty \; | \; C \|f_2 \|_{E_{(2),1}} <1\}$ in the case of (\[a1.7.2\]), where $C>0$ is as in Lemma \[lm2\]. \[th2\] If $f \in \mathcal{O}^{+} $ then there exists a unique solution $a \in C(\mathbb{R}, (I-\Delta)^{-1}E)$ of equation (\[Eq1.9\]), such that the asymptotic condition (\[Eq1.5.3\]) is satisfied with $\alpha =0$. This solution satisfies (\[Eq1.5.3\]) for an $\alpha >0$ and $a \in C^\infty (\mathbb{R}, E_\infty)$ and defines by (\[Eq1.5.4\]) a $C^{\infty}$ modified wave operator $\Omega_+ : \mathcal{O}^{+} \rightarrow E_\infty.$ $\Omega_+$ intertwines the linear and nonlinear representations of $\mathcal{P},$ i.e.
for all $f \in \mathcal{O}^{+}$ there exists a neighborhood of the identity in $\mathcal{P}$ such that $U_g (\Omega_+(f))=\Omega_+(U^1_gf)$ for all elements $g$ of this neighborhood. **Proof:** We only consider the case of the nonlinearity (\[a1.7.2\]), since the case (\[a1.7.1\]) is easier. Let $f \in \mathcal{O}^{+}.$ For $j=1,2,$ let $T^{2}_{(j)}$ and $T^{2}_{(j,\epsilon)}$ be the orthogonal projections of $T^{2}$ on $E_{(j)}$ and $E_{(j,\epsilon)}$ respectively. We shall use the following notations, where $g, h(t) \in E_{(1),N},$ for some $N:$ $$\label{Eq1 proof th2} \begin{split} &(H(h))(t)= -\int_t^\infty V_1(-s)T^{2}_{(1)P_0} (V(s)(h_1(s),f_2)) \, ds, \\ &I_{\epsilon}(t)= -\int_t^\infty \sum_{\epsilon_1+2\epsilon_2 \neq \epsilon} V_{1,\epsilon}(-s)T^{2}_{(1,\epsilon)P_0} (V_{1,\epsilon_1}(s) (b^{(+)}_{1,\epsilon_1}(f))(s), V_{2,\epsilon_2}(s)f_{2,\epsilon_2}) \, ds, \\ &J_\epsilon (t)= -\int_t^\infty \big( V_{1,\epsilon}(-s)T^{2}_{(1,\epsilon)P_0} (V_{1,-\epsilon}(s)(b^{(+)}_{1,-\epsilon}(f))(s), V_{2,\epsilon}(s)f_{2,\epsilon}) -(\dot{b}_{1,\epsilon}^{(+)}(f))(s) \big) \, ds, \\ &(K_\epsilon (g,f_2))^{\hat{}}(k)= \frac{i}{2 \pi}\int_{\mathbb{R}^2} \sum_{\epsilon_1+2\epsilon_2 \neq \epsilon} d_{\epsilon,\epsilon_1,\epsilon_2}(p,k-p) \frac{\hat{g}_{\epsilon_1}(p)}{2i\omega_{m}(p)}\frac{\hat{f}_{2,\epsilon_2}(k-p)}{2i\omega_{2m}(k-p)} \, dp, \\ &d_{\epsilon,\epsilon_1,\epsilon_2}(p_1,p_2) =(\epsilon \omega_{m}(p_1+p_2)-\epsilon_1 \omega_{m}(p_1)-\epsilon_2 \omega_{2m}(p_2))^{-1}.
\end{split}$$ Given $c >0$ let $M_{\tau},$ where $\tau>0,$ be the Banach space of functions $h \in C([\tau, \infty[, (I-\Delta)^{-1}E_{(1)})$ with norm $|||h|||=\sup_{t \geq \tau} (1+t)^c \|(I-\Delta)h(t)\|_{E_{(1)}} <\infty.$ Using that $\|V_2(t)(\omega_{2m}(-i\nabla))^{-1/2}f_2\|_{L^{\infty}} \leq C' (1+t)^{-1} \|f_2\|_{E_{(2)N_0}}$ for some $N_0,$ it follows that $|||H(h)||| \leq C_\tau \, |||h ||| \; \|f_2\|_{E_{(2)N'}}$ for some $N'$ and $C_\tau.$ To estimate $J,$ for the given $f \in \mathcal{O}^{+}$ we choose $c$ such that $0 <c <1-C \|f_2 \|_{E_{(2),1}}.$ Inequality (\[Eq2.9\]) of Lemma \[lm2\], with $N=2$ and $n=0,$ then gives that $||| J ||| \leq C' \|f_1\|_{E_{(1), N'}} \|f_2\|_{E_{(2), N'}}$ for some new $N'$ and $C'.$ To estimate the non-resonant terms $I(t)$ we proceed, with minor changes, as in §3 of [@S-T; @92]. We obtain (see Corollary 3.8 of [@S-T; @92]) $\|(I-\Delta)K(V_1(t)g,V_2(t)f_2)\|_{E_{(1)}} \leq C' (1+t)^{-1} \|g\|_{E_{(1)N_0}} \|f_2\|_{E_{(2)N_0}}$ for some $C',$ $N_0.$ Partial integration gives $$\label{Eq2 proof th2} I(t)=K(V_1(t)(b^{(+)}_{1}(f))(t),V_2(t)f_2)+\int_t^\infty K(V_1(s)(\dot{b}^{(+)}_{1}(f))(s),V_2(s)f_2) \, ds.$$ By Lemma \[lm1\] we now obtain (with new constants) that $||| I ||| \leq C' \|f_1\|_{E_{(1), N'}} \|f_2\|_{E_{(2), N'}}.$ These estimates give, with $G(h,f)=H(h)+I+J,$ that $|||G(h,f)||| \leq C' ( |||h||| \; \|f_2\|_{E_{(2), N'}} + \|f_1\|_{E_{(1), N'}} \|f_2\|_{E_{(2), N'}})$ and $|||G(h,f)-G(h',f)||| \leq C' |||h-h'||| \; \|f_2\|_{E_{(2), N'}}.$ Let $f_2$ be such that $C' \; \|f_2\|_{E_{(2), N'}} <1,$ so that $G$ is a contraction. The equation (\[Eq1.9\]) for $t \geq \tau$ is, since $(b_2(f))(t)=f_2,$ equivalent to $$\label{Eq3 proof th2} b_1- b^{(+)}_1(f)=G(b_1- b^{(+)}_1(f),f).
$$ This equation has a unique solution $b- b^{(+)}(f) \in M_\tau.$ It follows using Grönwall’s lemma that there is a unique continuation of $V(\cdot)b(\cdot)$ to a solution $a \in C(\mathbb{R},(I-\Delta)^{-1}E_{})$ of the integrated version of (\[a1.6.1\]) and that $b_1- b^{(+)}_1(f) \in M_0.$ Similarly, one establishes that the mappings $\mathcal{O}^{+} \ni f \mapsto b(0)=a(0)=\Omega_+(f) \in (I-\Delta)^{-1}E_{}$ and $f \mapsto b,a \in M_\tau,$ $\tau \in \mathbb{R}$ are $C^\infty.$ We next turn to the covariance properties of $\Omega_+.$ For given $f \in \mathcal{O}^{+},$ we consider an open neighborhood of the identity consisting of elements $g \in \mathcal{P}$ such that $U^1_g f \in \mathcal{O}^+.$ $a^g$ denotes the solution in $C(\mathbb{R},(I-\Delta)^{-1}E_{}),$ given by the above construction, of (\[Eq1.9\]) with scattering data $U^1_g f$ and $b^g (t) =V(-t)a^g(t).$ Equation (\[Eq1.9\]) gives $$\label{Eq4 proof th2} b^g (t)- (b^{(+)}(U^1_g f))(t)= \int_\infty^t (V(-s) T^{2}_{P_0} (V(s)b^g(s)) -(\dot{b}^{(+)}(U^1_gf))(s) ) \, ds. $$ Let $R'$ be the representation of $\mathcal{P}$ on tempered distributions $F \in S'(\mathbb{R}^3,\mathbb{C}^4),$ defined by the representation $R$ in (\[a1.7.3\]) on $\phi_1,\phi_2 \in S'(\mathbb{R}^3,\mathbb{C})$ and the transformations (\[a1.4\]) and $F(t)=V(-t)a(t).$ For translations, i.e. $g=(I,(s_0,s_1,s_2)),$ formula (\[Eq1 prooof lm1\]) shows that $b^{(+)}(U^1_g f)=U^1_g b^{(+)}(f).$ Let $\delta_g(f)=R'_g b^{(+)}(f)-b^{(+)}(U^1_g f).$ If $g$ is a space translation, i.e. $s_0=0,$ then $(R'_g F)(t,x)=F(t,x_1+s_1,x_2+s_2),$ so $\delta_g(f)=0.$ If $g$ is a time translation, i.e.
$s_1=s_2=0,$ then $(R'_g F)(t,x)=V(s_0)F(t+s_0,x),$ so $(R'_g b^{(+)}(f))(t)=(b^{(+)}(U^1_g f))(t+s_0)$ and $(\delta_g(f))(t)=V(s_0)( (b^{(+)}(f))(t+s_0)-(b^{(+)}(f))(t)).$ With $c$ as above, let $0 <c'<c.$ Then Lemma \[lm2\] gives that $\|\frac{d}{dt} (\delta_g(f))(t) \|_{E_N} \leq |s_0| C'_N (1+t)^{-c'-1},$ for some $C'_N.$ We can now integrate $\frac{d}{dt} (\delta_g(f))(t)$ in formula (\[Eq4 proof th2\]), which shows that for sufficiently small translations $$\label{Eq5 proof th2} b^g (t)- (R'_g b^{(+)}(f))(t)= \int_\infty^t (V(-s) T^{2}_{P_0} (V(s)b^g(s)) -\frac{d}{ds}(R'_g b^{(+)}(f))(s) ) \, ds. $$ It follows that (\[Eq5 proof th2\]) holds true with $b^g=R'_g b$ and this solution is unique. If $g$ is a space rotation, then similarly one finds that $b^g=R'_g b.$ This shows the intertwining property, $U_g(\Omega_+(f))=\Omega_+(U^1_gf),$ for $g$ in a neighborhood of the identity in the $\mathbb{R}^3 \rtimes SO(2)$ subgroup of $\mathcal{P}.$ For the case of a Lorentz transformation, let $g(s)=\exp{(s N_{j})},$ $j=1,2$ and $X(t)=N_{j}+tP_j.$ The already proved intertwining property shows that $a^{g(s)}(t)=\Omega_+(V(t)U^1_{g(s)}f).$ This function is $C^\infty$ in $(t,s,f),$ since $\Omega_+$ is $C^\infty.$ Suppose for the moment that, for $t=s=0,$ $$\label{Eq6 proof th2} \frac{d}{ds}a^{g(s)}(t)=T_{X(t)}(a^{g(s)}(t)).$$ Then one obtains $D\Omega_+(f;T^1_{N_{j}}f)= T_{N_{j}}(a^{g(0)}(0))=T_{N_{j}}(\Omega_+ (f)),$ which shows that the intertwining property holds true for a neighborhood of the identity in $\mathcal{P}.$ Successive differentiation of $s \mapsto \Omega_+(U^1_{g(s)}f) \in (I-\Delta)^{-1}E_{}$ gives that $T_Y(\Omega_+(f)) \in E_{}$ for all $Y \in \Pi'.$ Now, according to Theorem 2 of [@S-T; @95], $\Omega_+(f) \in E_{\infty}$ and the mapping $\Omega_+ : \mathcal{O}^{+} \rightarrow E_\infty$ is $C^\infty.$ To complete the proof we shall prove formula (\[Eq6 proof th2\]) for $t\geq 0.$ The differentiability of $b$ in $f$
justifies differentiating in $s$ both sides of formula (\[Eq4 proof th2\]), with $g=g(s).$ Then, with $b^{g(0)}=b$ and $b'=db^{g(s)}/(ds)|_{s=0}$ and $b^{'(+)}=d b^{(+)}(U^1_g f)/(ds)|_{s=0}:$ $$\label{Eq7 proof th2} b' (t)- b^{'(+)}(t)= \int_\infty^t (V(-s) 2(DT^{2}_{P_0})(V(s)b(s);V(s)b'(s)) -\frac{d}{ds}b^{'(+)}(s) ) \, ds.$$ The generator $\Xi_{N_{j}}$ of $s \mapsto R'_{g(s)}$ is given by its component in $E_{(1,\epsilon)}:$ $$\label{Eq8 proof th2} ((\Xi_{(1,\epsilon)N_{j}}h)(t))^{\hat{}}(k)= \big(-\epsilon (\omega_m(k)\frac{\partial}{\partial k_j} + \frac{k_j t}{\omega_m(k)} \frac{\partial}{\partial t}) +i(\frac{\partial}{\partial k_j}-\frac{k_j }{(\omega_m(k))^2})\frac{\partial}{\partial t}\big) (h(t))^{\hat{}}(k).$$ Using Lemmas \[lm1\] and \[lm2\] one establishes, with $c'$ as above, that $\|\frac{d}{dt} \big( (\Xi_{N_{j}}b^{(+)})(t) -b^{'(+)}(t)\big)\|_{E_N} \leq C'_N (1+t)^{-c'-1},$ for some $C'_N.$ This shows that we can replace $b^{'(+)}$ by $\Xi_{N_{j}}b^{(+)}$ on both sides of (\[Eq7 proof th2\]). Observing that $V(t)(\Xi_{N_{j}}b)(t)=T_{X(t)}(a(t))$ and that $T^{1}_{P_0}V(t)(\Xi_{N_{j}}b)(t)+2(DT^{2}_{P_0})(V(t)b(t);V(t)(\Xi_{N_{j}}b)(t))=T_{P_0 X(t)}(a(t))$ we can now identify $b'(t)$ with $V(-t)T_{X(t)}(a(t))$ which satisfies the equality $$\notag V(-t)T_{X(t)}(a(t))- (\Xi_{N_{j}}b^{(+)})(t)= \int_\infty^t (V(-s) 2(DT^{2}_{P_0})(a(s);T_{X(s)}(a(s))) -\frac{d}{ds} (\Xi_{N_{j}}b^{(+)})(s) ) \, ds.$$ [**QED**]{} The linear K-G equation {#lin k-g} ======================= We shall here give certain results on phase and decrease properties of solutions of linear Klein-Gordon equations, which we have used to study resonant terms. They are adapted from Appendix [@FST97] to our situation and are based on the symbolic calculus developed in [@Horm87].
Let $M >0.$ For given $\epsilon =\pm$ and $f \in S(\mathbb{R}^{2},\mathbb{C}),$ $$\label{Eq3.1} \varphi (t)= e^{i \epsilon \omega_{M}(-i\nabla)t}f,$$ defines a solution $\varphi$ of $$\label{Eq3.2} (\square + M^{2}) \varphi=0.$$ The forward light-cone is denoted $\Gamma_+=\{(t,x) \in \mathbb{R}^3 \, \big\vert \, t^2-|x|^2 \geq 0, t\geq 0 \}$ and let $\rho = (t^2-|x|^2)^{1/2}$ for $(t,x) \in \Gamma_+.$ The sequence of functions $g^{}_l \in C^\infty ((\mathbb{R}^+ \times \mathbb{R}^2)- \{ 0 \}),$ with support in the forward light-cone, is defined by $$\label{Eq3.3} g^{}_0 (t,x) = i \epsilon (Mt/\rho^2) \hat{f}(-\epsilon M x / \rho), \; \; g^{}_l = \frac{\rho}{ 2i \epsilon l M} \square g^{}_{l-1}, \quad l \geq 1,$$ for $(t,x) \in \Gamma_+.$ $g^{}_l$ is homogeneous of degree $- 1 - l$. The solution $\varphi=\varphi_0$ has an asymptotic expansion with rest-term $\varphi_n:$ $$\label{Eq3.4} \varphi_n =\varphi_0-e^{i\epsilon M \rho}\sum_{0\leq l\leq n-1}g^{}_l, \quad n\geq 1.$$ Define $\lambda(t)$ and $\delta(t)$ for $t\geq 0$ by $$\begin{aligned} \label{Eq3.5} (\lambda(t))(x)&= t/(1+t-\vert x\vert)\quad \hbox{for}\ 0\leq \vert x\vert \leq t, \\ (\lambda(t))(x)&=\vert x\vert \quad \hbox{for}\ 0\leq t \leq \vert x\vert, \\ (\delta(t))(x)&=1+t+\vert x\vert.\end{aligned}$$ We introduce the representation $X\mapsto \xi^{}_X$ of the Poincaré Lie algebra $\mathfrak{p}$ by: $$\begin{aligned} \label{Eq3.6} \xi^{}_{P_0}&=\frac{\partial}{\partial t}, \quad \xi^{}_{P_i}=\frac{\partial}{\partial x^{}_i},\quad 1\leq i\leq 2, \\ \xi^{}_{N_{i}}&=x^{}_i \frac{\partial}{\partial t} +t \frac{\partial}{\partial x^{}_i},\quad 1\leq i\leq 2, \\ \xi^{}_{R}&=-x^{}_1 \frac{\partial}{\partial x^{}_2} +x^{}_2 \frac{\partial}{\partial x^{}_1}.\end{aligned}$$ We define, for a function $d: \Pi' \rightarrow S(\mathbb{R}^{2},\mathbb{C})$ and for $n \in \mathbb{N}:$ $$\label{Eq3.7} p^{(s)}_n (d) = \sum_{\substack{Y\in \Pi' \\ \vert Y\vert \leq n}} (M\Vert d_Y \Vert_{L^s} +\sum_{0 \leq \mu \leq 2}\Vert 
d_{P_{\mu}Y} \Vert_{L^s}).$$ The following theorem gives decrease properties of the solution $\varphi$ and the rest terms $\varphi_n.$ We omit its proof, since it is very similar to that of Theorem A.1 in [@FST97], which considers the case of the Dirac equation in $3$ space dimensions. Given an ordering on the basis $Q=\{N_{1}, N_{2},R\}$ of ${\mathfrak{so}}(2,1),$ let $Q'$ be the corresponding standard basis of the enveloping algebra $U({\mathfrak {so}}(2,1))$ of ${\mathfrak {so}}(2,1)$. \[th 3.1\] There exists $C_i \in \mathbb{R},$ $i\geq0,$ and $\kappa_i \in \mathbb{N},$ $1 \leq i\leq4,$ such that for all $j,k,n\in\mathbb{N}$, $t >0$, $f \in S(\mathbb{R}^{2},\mathbb{C}),$ $X\in\Pi'\cap U(\mathbb{R}^3)$ and $Y\in Q':$ $$\begin{aligned} \label{Eq3.8} &p^{(2)}_j\big((1+\lambda(t))^{k/2}(\xi \varphi_0)(t)\big) \leq C_{j+k}\big( \bar{q}_{j+k}(f)+\sum_{1\leq i\leq 2} \bar{q}_{j+k}(\partial_i f)\big), \\ \label{Eq3.9} &p^{(\infty)}_j\big(\delta(t)(1+\lambda(t))^{k/2}(\xi \varphi_0)(t)\big) \leq C_{j+k} \bar{q}_{j+k+\kappa_2}(f) , \\ \label{Eq3.10} &p^{(2)}_j\big((1+\lambda(t))^{k/2}(\xi \varphi_{n+1})(t)\big) \leq C_{j+k+n}\ t^{-n-1} \bar{q}_{3(j+k+n)+\kappa_1} (f), \\ \label{Eq3.11} &p^{(\infty)}_j\big(\delta(t)(1+\lambda(t))^{k/2}(\xi \varphi_{n+1})(t)\big) \leq C_{j+k+n}\ t^{-n-1}\bar{q}_{3(j+k+n)+\kappa_3} (f), \\ \label{Eq3.12} &\Vert (\rho^{-j}\xi^{}_{XY}g^{}_n)(t)\Vert^{}_{L^2} \leq C_{j+\vert X\vert +n}\ t^{-j-n-\vert X\vert} \bar{q}_{3\vert X\vert+3n+\vert Y\vert+j} (f), \\ \label{Eq3.13} &\Vert (\rho^{-j}\xi^{}_{XY}g^{}_n)(t)\Vert^{}_{L^\infty} \leq C_{j+\vert X\vert+ n}\ t^{-1-j-n-\vert X\vert} \bar{q}_{3\vert X\vert+3n+\vert Y\vert+j+\kappa_4} (f).\end{aligned}$$ The expansion defined by (\[Eq3.3\]) and (\[Eq3.4\]) can be inverted.
Given a homogeneous function $g \in C^\infty ((\mathbb{R}^+ \times \mathbb{R}^2) - \{0\})$ of degree $- 1$ with support in $\Gamma_+,$ we construct by iteration $f_0,\ldots,f_n \in D_\infty:$ $$\begin{aligned} \label{Eq3.14} &\hat{f}_l (k)= -i \epsilon (M/ (\omega(k))^2) g^{}_{l,0} (1,- \epsilon k /\omega_M(k)), \quad 0 \leq l \leq n, \\ \label{Eq3.15} & g^{}_{0,0} = g,\quad g^{}_{l,0} (t,x) = - \sum_{1 \leq j \leq l} t^j g^{}_{l-j,j} (t,x),\quad 1 \leq l \leq n, \\ \label{Eq3.16} &g^{}_{l,j}= \frac{\rho}{ 2i \epsilon j M} \square g^{}_{l,j-1},\quad 1 \leq j \leq n-l.\end{aligned}$$ By this construction $g^{}_{l,j} \in C^\infty ((\mathbb{R}^+ \times \mathbb{R}^2) - \{0\})$ is homogeneous of degree $- 1-j$ with support in $\Gamma_+.$ Reformulation in two space dimensions of Theorem A.2 of [@FST97] (there proved in the case of three space dimensions) gives: \[th 3.2\] Let $g \in C^\infty ((\mathbb{R}^+ \times \mathbb{R}^2) - \{0\})$ be a homogeneous function of degree $-1$ with ${\rm supp}\ g \subset \Gamma_+.$ If $f_0,\ldots,f_n$ are given by the construction (\[Eq3.14\])–(\[Eq3.16\]) and $$\label{Eq3.16.1} u^{}_n(t)=e^{i \epsilon M \rho }g(t) -\sum_{0\leq l\leq n} t^{-l}e^{i\epsilon\omega_M(-i\nabla)t}f^{}_l,$$ then there exists $C_i \in \mathbb{R},$ $i\geq0,$ independent of $g,$ such that for all $j,k,n\in\mathbb{N}$ and $t >0$ and with $\kappa_1$ and $\kappa_3$ as in Theorem \[th 3.1\]: $$\begin{aligned} \label{Eq3.17} & \bar{q}_j(f^{}_n) \leq C_{n+j} \sum_{\substack{ Y\in Q' \\ q+\vert Y \vert \leq j+2n}} \Vert ({m / \rho(1,\cdot)})^{q+n} (\xi^{}_Y g) (1,\cdot) \Vert^{}_{L^2}, \\ \label{Eq3.18} &p^{(2)}_j\big((1+\lambda(t))^{k/2}(\xi u_n) (t)\big) \leq C_{j+k+n}\sum_{\substack{ Y\in Q' \\ q+\vert Y \vert \leq 3(j+k+n)+\kappa_1}} \Vert ({m / \rho(1,\cdot)})^q (\xi^{}_Y g) (1,\cdot) \Vert^{}_{L^2}t^{-n-1}, \\ \label{Eq3.19} &p^{(\infty)}_j\big(\delta(t)(1+\lambda(t))^{k/2} (\xi u_n) (t)\big) \leq C_{j+k+n}\sum_{\substack{ Y\in Q' \\ q+\vert Y \vert \leq
3(j+k+n)+\kappa_3}} \Vert ({m / \rho(1,\cdot)})^q (\xi^{}_Y g) (1,\cdot) \Vert^{}_{L^2}t^{-n-1}.\end{aligned}$$ Theorem \[th 3.1\] and Theorem \[th 3.2\] make it possible to find the asymptotic behavior of, and estimates for, the resonant terms. \[th 3.3\] Let $M, M_1, M_2 >0$ and $\epsilon, \epsilon_1, \epsilon_2 \in \{- 1,1\}$ be such that $\epsilon M=\epsilon_1 M_1 + \epsilon_2 M_2$ and let $f^{(1)},f^{(2)} \in S(\mathbb{R}^{2},\mathbb{C}).$ There exists a unique sequence of functions $f_{l} \in S(\mathbb{R}^{2},\mathbb{C})$ such that, if $$\label{Eq3.20} \delta_n (t)=e^{-i\epsilon\omega_M(-i\nabla)t} \left((e^{i\epsilon_1\omega_{M_1}(-i\nabla)t}f^{(1)}) (e^{i\epsilon_2\omega_{M_2}(-i\nabla)t}f^{(2)})\right) -\sum_{0\leq l\leq n} t^{-1-l}f^{}_l,$$ then for all $N,j,n \in \mathbb{N}$ there are $C$ and $N'$ such that $$\label{Eq3.21} \bar{q}_N((\frac{d}{dt})^j \delta_n (t)) \leq C (1+t)^{-2-j-n} \bar{q}_{N'}(f^{(1)}) \bar{q}_{N'}(f^{(2)}).$$ Moreover, $$\label{Eq3.22} \hat{f}_0(k)= i \frac{\epsilon_1 M_1 \epsilon_2 M_2}{\epsilon M} (\frac{\omega_{M}(k)}{M})^2 (f^{(1)})^{\hat{}}(\frac{\epsilon_1 M_1}{\epsilon M}k) (f^{(2)})^{\hat{}}(\frac{\epsilon_2 M_2}{\epsilon M}k) $$ and $$\label{Eq3.23} \bar{q}_N (f_j) \leq C \bar{q}_{N'}(f^{(1)}) \bar{q}_{N'}(f^{(2)}).$$ **Proof:** With $f^{(i)}$ instead of $f,$ we define $\varphi^{(i)},$ $g_n^{(i)}$ and $\varphi_n^{(i)}$ by formulas (\[Eq3.1\])–(\[Eq3.4\]). Given $J \in \mathbb{N},$ formula (\[Eq3.20\]) can be written $$\label{Eq3.24} \begin{split} \delta_n (t)=&e^{-i\epsilon\omega_M(-i\nabla)t} ( (\varphi_{J+1}^{(1)}(t)+e^{i\epsilon_1 M_1 \rho}\sum_{0 \leq l \leq J}g_l^{(1)}(t)) (\varphi_{J+1}^{(2)}(t)+e^{i\epsilon_2 M_2 \rho}\sum_{0 \leq l \leq J}g_l^{(2)}(t)) ) \\ &-\sum_{0\leq l\leq n} t^{-1-l}f^{}_l, \end{split}$$ where the functions $f^{}_l$ will be defined later in this proof.
We define $$\label{Eq3.25} g_l(t, \cdot)=t^{1+l} \sum_{\substack{l_1+l_2=l \\ 0 \leq l_i \leq J}} g^{(1)}_{l_1}(t, \cdot) g^{(2)}_{l_2}(t, \cdot),$$ $$\label{Eq3.26} I_J(t)=\varphi_{J+1}^{(1)}(t) \varphi_{J+1}^{(2)}(t) +\sum_{0 \leq l \leq J} (e^{i\epsilon_1 M_1 \rho}g_l^{(1)}(t) \varphi_{J+1}^{(2)}(t) +e^{i\epsilon_2 M_2 \rho}g_l^{(2)}(t) \varphi_{J+1}^{(1)}(t))$$ and $$\label{Eq3.27} v_n^J(t)= e^{-i\epsilon\omega_M(-i\nabla)t}\sum_{0\leq l\leq 2J} t^{-1-l} e^{i\epsilon M \rho}g_l(t, \cdot) -\sum_{0\leq l\leq n} t^{-1-l}f^{}_l.$$ Then $$\label{Eq3.27.1} \delta_n(t)= e^{-i\epsilon\omega_M(-i\nabla)t}I_J(t) +v_n^J(t).$$ The function $g_l$ is homogeneous of degree $-1.$ We note that, according to (\[Eq3.12\]) and (\[Eq3.13\]) of Theorem \[th 3.1\], if $Z \in \Pi'$ then $$\label{Eq3.28} \Vert (\rho^{-j}\xi^{}_{Z}g^{}_l)(1, \cdot)\Vert^{}_{L^2} \leq C \bar{q}_{N'} (f^{(1)})\bar{q}_{N'} (f^{(2)}),$$ where $C$ and $N'$ depend on $|Z|$ and $j.$ Also, a straightforward application of (\[Eq3.10\])–(\[Eq3.13\]) gives, with new $C$ and $N'$ depending on $N,$ $k$ and $J$: $$\label{Eq3.29} p^{(2)}_N \big((1+\lambda(t))^{k/2}(\xi I_J)(t)\big) \leq C t^{-2-J} \bar{q}_{N'}(f^{(1)})\bar{q}_{N'} (f^{(2)}).$$ With $g_l$ instead of $g$ and $J$ instead of $n,$ we define $f_{l,k}$ and $u_{l,J}$ by formulas (\[Eq3.14\])–(\[Eq3.16.1\]). Then, according to Theorem \[th 3.2\] and (\[Eq3.28\]), $$\label{Eq3.30} u_{l,J}(t)=e^{i \epsilon M \rho }g_l(t) -\sum_{0\leq k\leq J} t^{-k}e^{i\epsilon\omega_M(-i\nabla)t}f_{l,k}$$ satisfies, with $C$ and $N'$ depending on $J,$ $N$ and $k,$ $$\label{Eq3.31} \begin{split} t^{J+1}& \left( p^{(2)}_N\big((1+\lambda(t))^{k/2}(\xi u_{l,J}) (t)\big) + p^{(\infty)}_N\big(\delta(t)(1+\lambda(t))^{k/2}(\xi u_{l,J}) (t)\big) \right) \\ &+\bar{q}_N (f_{l,k}) \leq C \bar{q}_{N'}(f^{(1)})\bar{q}_{N'} (f^{(2)}).
\end{split}$$ Formulas (\[Eq3.27\]) and (\[Eq3.30\]) give $$\label{Eq3.32} v_n^J(t)=\sum_{0\leq l\leq 2J} t^{-1-l} \left( e^{-i\epsilon\omega_M(-i\nabla)t} u_{l,J}(t) + \sum_{0\leq k\leq J} t^{-k} f_{l,k} \right) -\sum_{0\leq l\leq n} t^{-1-l}f^{}_l.$$ In the remainder of this proof we suppose that $J \geq n$ and define $$\label{Eq3.33} f_l=\sum_{k_1+k_2=l} f_{k_1,k_2}, \; A_J(t)=\sum_{0\leq l\leq 2J} t^{-1-l}u_{l,J}(t), \; B_{n,J}(t)=\sum_{(l, k_1, k_2) \in D(n,J)} t^{-1-l}f_{k_1,k_2}, $$ where $ D(n,J)=\{(l, k_1, k_2) \, | \, l>n, 0\leq l\leq 2J, 0\leq k_2 \leq J, k_1+k_2=l \}.$ Then $v_n^J(t)=e^{-i\epsilon\omega_M(-i\nabla)t} A_J(t) + B_{n,J}(t),$ so by (\[Eq3.27.1\]) $$\label{Eq3.34} \delta_n(t)= e^{-i\epsilon\omega_M(-i\nabla)t}h_J(t) + B_{n,J}(t), \; \; \text{where} \:\; h_J(t)=I_J(t)+ A_J(t).$$ Inequalities (\[Eq3.28\]) and (\[Eq3.31\]) and the fact that $\xi_{N_i}t^{-1-l}=-(1+l)t^{-2-l}x_i$ give $$\label{Eq3.35} t^{J+2} p^{(2)}_N\big((1+\lambda(t))^{k/2}(\xi A_J) (t)\big) +t^{n+j+2}\bar{q}_N ((\frac{d}{dt})^j B_{n,J}(t)) \leq C \bar{q}_{N'}(f^{(1)})\bar{q}_{N'} (f^{(2)}). $$ Since $(x_j i \epsilon \omega_M(-i \nabla)-t\partial_j) e^{-i \epsilon \omega_M(-i \nabla)t} = e^{-i \epsilon \omega_M(-i \nabla)t} x_j i \epsilon \omega_M(-i \nabla),$ it follows from Leibniz’s rule and (\[Eq1.5.1.0\]) that $$\label{Eq3.36} \begin{split} &\bar{q}_N ((\frac{d}{dt})^j (e^{-i \epsilon \omega_M(-i \nabla)t} h_{J}(t))) \leq C_{j,N}\sum_{\substack{j_1+j_2=j \\ |\alpha|, |\beta| \leq N}} \|x^{\alpha} \nabla^{\beta} e^{-i \epsilon \omega_M(-i \nabla)t} (\omega_M(-i \nabla))^{j_1} (\frac{d}{dt})^{j_2} h_{J}(t) \|_{L^2} \\ & \leq C'_{j,N}\sum_{\substack{|\alpha|+k \leq N \\ |\beta|+j_2 \leq N +j}} t^k \|x^{\alpha} \nabla^{\beta} (\frac{d}{dt})^{j_2} h_{J}(t) \|_{L^2}.
\end{split}$$ Let $U(\mathbb{R}^3)$ be the enveloping algebra of the translation subalgebra of $\mathfrak{p}.$ Inequalities (\[Eq3.29\]), (\[Eq3.31\]) and (\[Eq3.36\]) give that for $t \geq 1:$ $$\label{Eq3.37} \begin{split} &\bar{q}_N ((\frac{d}{dt})^j (e^{-i \epsilon \omega_M(-i \nabla)t} h_{J}(t))) \leq C''_{j,N}\sum_{\substack{X \in U(\mathbb{R}^3) \cap \Pi' \\ |X| \leq N+j}} t^N \|(1+\lambda (t))^N (\xi h_{J})(t) \|_{L^2} \\ & \leq C'''_{j,N} t^N p_{N+j}((1+\lambda (t))^N (\xi h_{J})(t)) \leq C'_{j+N} t^{N-2-J} \bar{q}_{N'}(f^{(1)})\bar{q}_{N'} (f^{(2)}). \end{split}$$ Inequality (\[Eq3.21\]), for $t \geq 1,$ now follows by choosing $J \geq N+n+j.$ The definition of $f_l$ in formula (\[Eq3.33\]) and inequality (\[Eq3.31\]) give inequality (\[Eq3.23\]). To prove formula (\[Eq3.22\]), we observe that $g^{(j)}_0 (t,x) = i \epsilon_j (M_j t/\rho^2) (f^{(j)})^{\hat{}}(-\epsilon_j M_j x / \rho),$ $j=1,2,$ according to (\[Eq3.3\]). By (\[Eq3.25\]), $g_0(t,x)=tg^{(1)}_0 (t,x)g^{(2)}_0 (t,x).$ By (\[Eq3.33\]) $f_0=f_{0,0},$ so by formulas (\[Eq3.30\]) and (\[Eq3.14\]) $\hat{f}_0 (k)= -i \epsilon (M/ (\omega(k))^2) g^{}_{0} (1,- \epsilon k /\omega_M(k)).$ The result now follows using that $\rho (1,- \epsilon k /\omega_M(k))=M/\omega_M(k).$ [**QED**]{}\ **Note added in the proofs:** I learned later from H. Sunagawa, after the acceptance of the paper, that he has related results in Hokkaido Math. Journ. **33**, 457–472 (2004), which cover some of those for the easier case (\[Eq1.3\]) but not for the case (\[Eq1.4\]), and his method is different. [9]{} Delort, J.M.: [*Existence globale et comportement asymptotique pour l’équation de Klein-Gordon quasi linéaire à données petites en dimension $1$*]{}, Ann. Sci. Ec. Norm. Sup. **34**(4), 1–61 (2001). Flato, M., Pinczon, G., Simon, J.C.H.: [*Non-linear representations of Lie groups*]{}, Ann. Sci. Ec. Norm. Sup. [**10**]{}, 405–418 (1977). Flato, M., Simon, J. C.
H., Taflin, E.: [*On global solutions of the [M]{}axwell-[D]{}irac equations*]{}, Comm. Math. Phys. **112**, 21–49 (1987). Flato, M., Simon, J. C. H., Taflin, E.: [*Asymptotic completeness, global existence and the infrared problem for the [M]{}axwell-[D]{}irac equations*]{}, Mem. Amer. Math. Soc. **127**, number 606 (1997). Ginibre, J., Velo, G.: [*Long Range Scattering and Modified Wave Operators for the Maxwell-Schrödinger System I. The Case of Vanishing Asymptotic Magnetic Field*]{}, Commun. Math. Phys. [**236**]{}, 395–448 (2003). Hörmander, L.: [*The lifespan of classical solutions of nonlinear hyperbolic equations*]{}, in Lecture Notes in Math., Vol. **1256**, pp. 214–280, Springer Verlag 1987. Hörmander, L.: [*Lectures on Nonlinear hyperbolic differential equations*]{}, Springer Verlag 1997. Lindblad, H., Soffer, A.: [*A remark on long range scattering for the nonlinear Klein-Gordon equation*]{}, J. Hyperbolic Differ. Eq. [**2**]{}, 77–89 (2005). Lindblad, H., Soffer, A.: [*A remark on asymptotic completeness for the critical nonlinear Klein-Gordon equation*]{}, Lett. Math. Phys. [**73**]{}, 249–258 (2005). Ozawa, T.: [*Long Range Scattering for Nonlinear Schrödinger Equations in One Space Dimension*]{}, Commun. Math. Phys. [**139**]{}, 479–493 (1991). Simon, J.C.H., Taflin, E.: [*Wave operators and analytic solutions for systems of non-linear Klein-Gordon equations and non-linear Schrödinger equations*]{}, Commun. Math. Phys. [**99**]{}, 541–562 (1985). Simon, J.C.H., Taflin, E.: [*The [C]{}auchy problem for non-linear [K]{}lein-[G]{}ordon equations*]{}, Commun. Math. Phys. [**152**]{}, 433–478 (1992). Simon, J.C.H. and Taflin, E.: [*Initial data for non-linear evolution equations and differentiable vectors of group representations*]{}, Mathematical Physics Studies, vol. 18, Kluwer Academic Publisher, Dordrecht, 1995, pp. 243–253. Taflin, E.: [*Formal linearization of nonlinear massive representations of the connected Poincaré group*]{}, J. Math. Phys.
[**25**]{}, 765–771 (1984). [^1]: taflin@eisti.fr; EISTI, Ecole International des Sciences du Traitement de l’Information, Avenue du Parc, 95011 Cergy, France [^2]: Acknowledgement: The author thanks Avy Soffer for pointing out the interest of the particular cases (\[Eq1.3\]) and (\[Eq1.4\]) of the NLKG (\[Eq1.2\]). [^3]: The Fourier transformation $f \mapsto \hat{f}$ is here defined by $\hat{f} (k) = (2 \pi)^{- 1} \int_{\mathbb{R}^2} e^{-ikx} f(x) dx.$
--- abstract: 'In this paper, the existence and uniqueness of the solution of a specific differential equation are studied. This equation originates from the description of a coupled process combining the totally asymmetric simple exclusion process (TASEP) and Langmuir kinetics (LK). In the fields of physics and biology, the properties of the TASEP-LK coupled process have been extensively studied by Monte Carlo simulations and numerical calculations, as well as by detailed experiments. However, so far, no rigorous mathematical analysis has been given for the corresponding differential equations, especially concerning the existence and uniqueness of their solutions. In this paper, using the upper and lower solution method, the existence of a solution of the steady state equation is obtained. Then, using a generalized maximum principle, we show that the solution constructed by the upper and lower solution method is actually the unique solution in the $C^{\infty}$ space. Moreover, the existence and uniqueness of the solution of the time-dependent differential equation are also obtained in a specific space $X^\beta$. Our results imply that the previous results obtained by numerical calculations and Monte Carlo simulations are theoretically correct, especially the most important phase diagram of the particle density along the travel track under different model parameters. The main difficulty encountered in the analysis is that, as the length of the travel track tends to infinity (corresponding to the parameter $\epsilon\to0$ in the differential equation), boundary layers may appear at one or both of the boundaries. Meanwhile, in some domains of the parameter space, a domain wall, defined as the boundary separating high and low values of the particle density, may also appear. The study in this paper provides theoretical foundations for the analysis of the TASEP-LK coupled process.
At the same time, the methods used in this paper may be instructive for studies of more general cases of the TASEP-LK process, such as the cases with multiple travel tracks or with multiple particle species.' author: - Jingwei Li and Yunxin Zhang title: ' Existence and uniqueness of solution of the differential equation describing the TASEP-LK coupled transport process ' --- Introduction ============ Inspired by the unidirectional motions of many motor proteins along cytoskeletal filaments [@Howard2001; @Schliwa2003; @Sperry2007], a process describing a stochastically driven system along a one-dimensional lattice was proposed in [@ParmeggianiPRL2003] and then discussed in depth in [@Parmeggiani2004; @Zhang20101; @Zhang2012]. This process couples the one-dimensional totally asymmetric simple exclusion process (TASEP) [@Spohn1991; @Derrida1997book; @Mukamel2000; @Schutz2001] with Langmuir kinetics (LK), see Fig. \[TASEP\_diagram\_figure\]. A rich steady state phase diagram, with high and low density phases, two and three phase coexistence regions, and a boundary-independent “Meissner" phase, is found by considering a continuum limit [@ParmeggianiPRL2003; @Zhang20101; @Zhang2012]. Such profiles of the particle density are very different from those of the classical TASEP model [@DerridaMatrix1993; @DerridaRecursion1992; @SchutzRecursion1993]. The pure TASEP model is in fact the limiting case of the coupled model when the attachment and detachment rates of LK tend to zero [@Parmeggiani2004]. Steady state solutions of the differential equation describing the classical TASEP can be obtained by various methods [@DerridaMatrix1993; @DerridaRecursion1992; @SchutzRecursion1993]. The recursion method presented in [@DerridaRecursion1992; @SchutzRecursion1993], however, is very technical and is hard to generalize to the TASEP-LK coupled process.
Meanwhile, although the matrix formulation used in [@DerridaMatrix1993] is tidy, for the TASEP-LK coupled process the network structure indicates that this standard matrix product ansatz could be rather difficult to implement [@Parmeggiani2004]. The TASEP process can be analyzed by mean field approximation [@KrugPRL1991]. Using the mean field approximation, a continuum limit of the TASEP-LK coupled process is presented in [@Parmeggiani2004]. Actually, this continuum limit of the TASEP-LK coupled process is a semi-linear parabolic initial value problem with Dirichlet boundary conditions. $$\begin{aligned} \label{continuumlimitintroduction} \left\{ \begin{array}{ll} \rho_t=\epsilon[\frac{\epsilon}{2}\rho_{xx}+(2\rho-1)\rho_x+\Omega_A(1-\rho)-\Omega_D\rho], \quad & 0< x <1,\ t>0, \\ \rho(0,t)=\alpha,\quad \rho(1,t)=1-\beta, & t>0, \\ \rho(x,0)=\rho_0(x), & 0\le x \le1. \\ \end{array} \right.\end{aligned}$$ The physical meanings of $\alpha, \beta$, and $\rho(x)$ will be given in Section \[model\]. The phase diagram of the steady state solution can then be obtained by solving the corresponding semi-linear elliptic problem with Dirichlet boundary conditions. $$\begin{aligned} \label{ellipticequationintroduction} \left\{ \begin{array}{ll} \frac{\epsilon}{2}\rho_{xx}+(2\rho-1)\rho_x+\Omega_A(1-\rho)-\Omega_D\rho=0, & \quad 0< x <1, \\ \rho(0)=\alpha,\quad \rho(1)=1-\beta. & \\ \end{array} \right.\end{aligned}$$ In previous studies, Eq. (\[ellipticequationintroduction\]) has been solved numerically to obtain the phase diagram of $\rho(x)$. In this paper, we will analyze Eqs. (\[continuumlimitintroduction\],\[ellipticequationintroduction\]) rigorously, especially the existence and uniqueness of their solutions, and the relations between the solution of Eq. (\[continuumlimitintroduction\]) and the solution of Eq. (\[ellipticequationintroduction\]). Our main results are as follows. There exists a unique $W^{1,2}(0,1)$ weak solution for Eq. (\[ellipticequationintroduction\]).
Such a solution has $C^\infty[0,1]$ regularity. The phase diagram of the steady state solution of the TASEP-LK coupled process, i.e. the solution of Eq. (\[ellipticequationintroduction\]), coincides with that obtained numerically in [@ParmeggianiPRL2003; @Parmeggiani2004]. For $\alpha>3/4$, Eq. (\[continuumlimitintroduction\]) has a unique global $X^\alpha$ solution, and there exists a global attractor in $X^\alpha$. Here $X^{\alpha}$ is a function space which will be defined in Section \[global\] (see also [@Cholewa2000]). For $\alpha>3/4$, we have $X^{\alpha}=W^{2\alpha,2}_0(0,1)$, see Eq. (\[Eqspaceequivalent\]). Inspired by the idea of Lam [*et al.*]{} in [@Lam2016], which was used to study a diffusive logistic equation originating from population models in disrupted environments [@SkellamBiometrika1951; @CantrellProceedings1989; @CantrellSIAM1991; @CantrellJoMB1991; @LustscherTheoretical2007], we will analyze Eq. (\[ellipticequationintroduction\]) by the method of upper and lower solution [@Du2006]. Compared with related methods in perturbation theory, such as boundary layer theory and WKB theory [@ColeBook1968; @FromanBook1965; @BenderBook1999], the method of upper and lower solution can handle quasi-linear problems and has a rigorous theoretical basis. The uniqueness of the weak solution of Eq. (\[ellipticequationintroduction\]) in $W^{1,2}(0,1)$ can be obtained from the comparison principle for divergence form operators obtained by Trudinger in [@TrudingerArchive1974] (or see Theorem 10.7 in [@Gilbarg2001]). Eq. (\[continuumlimitintroduction\]) is an evolutionary partial differential equation, which describes the evolution of the spatial solution with time. In this paper, the existence and uniqueness of a local solution of Eq. (\[continuumlimitintroduction\]) will first be verified. Then its unique extendibility to the whole half line $t\in[0,+\infty)$ is proved. For Eq.
(\[continuumlimitintroduction\]), besides the existence and uniqueness of the solution, the global attractivity of the steady state solution is also an important issue. In our study, the existence of a global attractor is proved by the theory of sectorial operators [@Cholewa2000]. We emphasize that only the existence of the global attractor is proved in this paper; whether the steady state solution obtained from Eq. (\[ellipticequationintroduction\]) is itself the global attractor remains open. This paper is organized as follows. In Section \[model\], we briefly introduce the TASEP-LK coupled process and then give its continuum limit, i.e. the parabolic problem listed in Eq. (\[continuumlimitintroduction\]). In Section \[steady\], we use the method of upper and lower solution to prove the existence of a weak solution of Eq. (\[ellipticequationintroduction\]) in $W^{1,2}(0,1)$, which has the same phase diagram as numerically obtained in [@Parmeggiani2004]. We then show that this weak solution in $W^{1,2}(0,1)$ is in fact a classical $C^{\infty}$ solution. In Section \[uniqueness\], we prove the uniqueness of the solution of Eq. (\[ellipticequationintroduction\]) in $C^1[0,1]$ \[which is actually in $W^{1,2}(0,1)$\] by the methods presented in [@Gilbarg2001] for quasi-linear elliptic equations. In Section \[global\], the existence and uniqueness of the global $X^\alpha$ solution, as well as the existence of the global attractor in $X^\alpha$, are discussed by the sectorial operator theory. Finally, conclusions and remarks are presented in Section \[conclusion\]. TASEP-LK coupled process and its continuum limit {#model} ================================================ In this section, we briefly introduce the TASEP-LK coupled process and then derive its continuum limit. A diagram illustrating the TASEP-LK coupled process is given in Fig. \[TASEP\_diagram\_figure\]; the reader may refer to it while reading this section.
In TASEP, particles of a single species hop unidirectionally along a one-dimensional lattice with a constant rate (usually normalized to unity), subject to spatial exclusion: a particle at site $i$ hops to site $i+1$ only if site $i+1$ is not occupied. If the initial (left) site $i=1$ is vacant, a particle from the environment binds to it with rate $\alpha$. Meanwhile, a particle at the terminal (right) site $i=N$ leaves the lattice with rate $\beta$. The Langmuir kinetics (LK) means that particles can also attach to or detach from the main body of the lattice (sites $2\le i\le N-1$) with constant rates, denoted by $\omega_A$ and $\omega_D$ respectively [@Parmeggiani2004; @Zhang2012]. Let $\rho_i(t)$ be the probability that site $i$ is occupied by a particle at time $t$. One can easily show that, under the mean-field independence assumption, $\rho_i(t)$ is governed by the following equations [@Parmeggiani2004; @Zhang2012]. $$\begin{aligned} \label{TASEPmeanfield} \left\{ \begin{array}{ll} \partial_t \rho_i=\rho_{i-1}(1-\rho_i)-\rho_i(1-\rho_{i+1})+\omega_A(1-\rho_i)-\omega_D\rho_i, & 2\le i\le N-1, \\ \partial_t \rho_1=\alpha(1-\rho_1)-\rho_1(1-\rho_2), & \\ \partial_t \rho_N=\rho_{N-1}(1-\rho_N)-\beta\rho_N. & \\ \end{array}\right.\end{aligned}$$ If we extend the definition of $\rho_i$ by $\rho_0\equiv \alpha$ and $\rho_{N+1}\equiv 1-\beta$, then Eq. (\[TASEPmeanfield\]) can be rewritten as $$\begin{aligned} \label{TASEPmeanfield2} \left\{ \begin{array}{ll} \partial_t \rho_i=\rho_{i-1}(1-\rho_i)-\rho_i(1-\rho_{i+1})+\omega_A(1-\rho_i)-\omega_D\rho_i, \quad & 2\le i\le N-1, \\ \partial_t \rho_1=\rho_0(1-\rho_1)-\rho_1(1-\rho_2), & \\ \partial_t \rho_N=\rho_{N-1}(1-\rho_N)-\rho_N(1-\rho_{N+1}), & \\ \rho_0=\alpha, & \\ \rho_{N+1}=1-\beta. & \\ \end{array} \right.\end{aligned}$$ Let $x:=i/(N+1)$, $\rho(x,t):=\rho_i$, $\epsilon:=1/(N+1)$. In this paper, we always assume that $\Omega_A:=\omega_A/\epsilon$ and $\Omega_D:=\omega_D/\epsilon$ are nonzero constants.
This means that the rates $\omega_A, \omega_D$ are of order $\epsilon^\gamma$ with $\gamma=1$. For $\gamma\ne1$, the TASEP-LK coupled process reduces to a pure TASEP or LK process [@Popkov2003; @Zhang20131]. Therefore, the most interesting case, with rich physical properties, is when $\omega_A, \omega_D$ are of order $\epsilon$. For simplicity, denote $\rho_t:=\partial_t\rho$, $\rho_x:=\partial_x\rho$, $\rho_{xx}:=\partial^2_x\rho$. Expanding $\rho(x\pm\epsilon,t)$ in powers of $\epsilon$ gives $$\begin{aligned} \label{epsilonexpansion} \rho(x\pm\epsilon,t)=\rho(x,t)\pm\epsilon\rho_x(x,t)+\frac{1}{2}\epsilon^2\rho_{xx}(x,t)+O(\epsilon^3).\end{aligned}$$ Substituting Eq. (\[epsilonexpansion\]) into Eq. (\[TASEPmeanfield2\]) and keeping terms up to second order in $\epsilon$, we obtain $$\begin{aligned} \label{continuumlimittape} \left\{ \begin{array}{ll} \rho_t=\epsilon[\frac{\epsilon}{2}\rho_{xx}+(2\rho-1)\rho_x+\Omega_A(1-\rho)-\Omega_D\rho], \quad & \frac{2}{N+1}\le x \le \frac{N-1}{N+1}, \\ \rho_t=\epsilon[\frac{\epsilon}{2}\rho_{xx}+(2\rho-1)\rho_x], & x=\frac{1}{N+1},\frac{N}{N+1}, \\ \rho(0,t)=\alpha, & \\ \rho(1,t)=1-\beta. & \\ \end{array} \right.\end{aligned}$$ When $N$ is large, we may neglect the influence of the sites at $x=1/(N+1),N/(N+1)$, so that Eq. (\[continuumlimittape\]) reduces to $$\begin{aligned} \label{continuumlimit0} \left\{ \begin{array}{ll} \rho_t=\epsilon[\frac{\epsilon}{2}\rho_{xx}+(2\rho-1)\rho_x+\Omega_A(1-\rho)-\Omega_D\rho], \quad & 0< x <1, \\ \rho(0,t)=\alpha, \quad \rho(1,t)=1-\beta. & \\ \end{array} \right.\end{aligned}$$ Adding the initial condition $\rho(x,0)=\rho_0(x)$, we obtain Eq. (\[continuumlimitintroduction\]). In the following, we first discuss the existence and uniqueness of the solution of the steady state equation (\[ellipticequationintroduction\]) in Sections \[steady\] and \[uniqueness\], and then discuss the properties of the time dependent equation (\[continuumlimitintroduction\]) in Section \[global\].
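Before turning to the analysis, the continuum limit can be made concrete by relaxing the discrete mean-field system (\[TASEPmeanfield2\]) to its steady state directly. The following is a minimal sketch in Python with NumPy (a forward-Euler relaxation with illustrative parameter values, not the paper's method or figures):

```python
import numpy as np

def tasep_lk_steady_state(N=200, alpha=0.3, beta=0.3,
                          Omega_A=0.3, Omega_D=0.3,
                          dt=0.05, t_max=2e4):
    """Relax the mean-field equations (TASEPmeanfield2) to a steady
    state by forward Euler; all parameter values are illustrative."""
    eps = 1.0 / (N + 1)
    # LK rates of order eps, as assumed in the text:
    omega_A, omega_D = Omega_A * eps, Omega_D * eps
    rho = np.full(N + 2, 0.5)
    rho[0], rho[-1] = alpha, 1.0 - beta   # rho_0 = alpha, rho_{N+1} = 1 - beta
    for _ in range(int(t_max / dt)):
        bulk = rho[1:N + 1]
        drho = (rho[0:N] * (1 - bulk) - bulk * (1 - rho[2:N + 2])
                + omega_A * (1 - bulk) - omega_D * bulk)
        # attachment/detachment acts only on sites 2 <= i <= N-1:
        drho[0] -= omega_A * (1 - rho[1]) - omega_D * rho[1]
        drho[-1] -= omega_A * (1 - rho[N]) - omega_D * rho[N]
        rho[1:N + 1] = bulk + dt * drho
    return np.arange(N + 2) * eps, rho
```

The returned profile $\rho_i$ plotted against $x=i/(N+1)$ can then be compared with solutions of Eq. (\[continuumlimit0\]).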
Existence of the steady state solution in $W^{1,2}(0,1)$ with specific phase diagram {#steady} ==================================================================================== The steady state solution of Eq. (\[continuumlimitintroduction\]) satisfies the one-dimensional boundary value elliptic equation (\[ellipticequationintroduction\]). In previous studies [@Parmeggiani2004; @Zhang20101; @Zhang2012], Monte Carlo simulations and numerical computations have shown that, with different choices of the initial rate $\alpha$ and terminal rate $\beta$, the $\epsilon\to 0$ limit of the solution of Eq. (\[ellipticequationintroduction\]) changes qualitatively: there are phase transitions driven by the boundary values. See also references [@Nishinari2005; @Leduc2012] for experimental observations. The main aim of this section is to show the existence of a $W^{1,2}(0,1)$ weak solution of Eq. (\[ellipticequationintroduction\]) which tends to $f$ as $\epsilon\to 0$. Here, for convenience, $f$ denotes the limit solution corresponding to $\epsilon=0$, obtained previously by numerical computations [@Parmeggiani2004; @Zhang20101; @Zhang2012]. Our proof relies on the method of upper and lower solution. Concretely, it consists of two main steps. - We first construct two functions $\rho_u$ and $\rho_l$ which are arbitrarily close to $f$ and satisfy $\rho_u\ge\rho_l$. - We then prove that there exists an $\epsilon_0$ such that, for any $\epsilon<\epsilon_0$, $\rho_u$ and $\rho_l$ are upper and lower solutions of Eq. (\[ellipticequationintroduction\]) respectively. For convenience, the method of upper and lower solution is briefly introduced in the following subsection. The method of upper and lower solution -------------------------------------- Eq.
(\[ellipticequationintroduction\]) has the following quasi-linear form, $$\begin{aligned} \label{upperlowerform} -[A(x,\rho,\rho_x)]_x+p(x,\rho,\rho_x)=0,\quad {\rm for}\ x\in(0,1),\ \rho(0)=\alpha,\ \rho(1)=1-\beta,\end{aligned}$$ with $A(x,\rho,\rho_x)=\frac{\epsilon}{2}\rho_{x}$, $p(x,\rho,\rho_x)=-(2\rho-1)\rho_x-\Omega_A(1-\rho)+\Omega_D\rho$. We claim that $A(x,t,\xi)=\frac{\epsilon}{2}\xi$ satisfies the following four conditions needed in the method of upper and lower solution [@Du2006], - $A:R\times R\times R\to R$ satisfies the Carathéodory conditions, [*i.e.*]{}, $A(x,t,\xi)$ is measurable in $x\in (0,1)$ for any fixed $(t,\xi)\in R\times R$, and continuous in $(t,\xi)$ for a.e. fixed $x\in (0,1)$. - There exist constants $q\in(1,\infty)$, $c_0\ge 0$, and a function $k_0\in L^{q'}(0,1)$ with $q'=q/(q-1)$ such that, for a.e. $x\in(0,1)$ and $(t,\xi)\in R\times R$, $$\begin{aligned} |A(x,t,\xi)|\le k_0(x)+c_0(|t|^{q-1}+|\xi|^{q-1}). \end{aligned}$$ - For a.e. $x\in(0,1)$, for all $t\in R$ and $\xi,\xi'\in R$ with $\xi\neq \xi'$, $$\begin{aligned} [A(x,t,\xi)-A(x,t,\xi')](\xi-\xi')>0. \end{aligned}$$ - For some $c_1>0$ and $k_1\in L^{q'}(0,1)$, for a.e. $x\in (0,1)$ and all $(t,\xi)\in R\times R$, $$\begin{aligned} A(x,t,\xi)\xi\ge c_1|\xi|^q-k_1(x). \end{aligned}$$ It is easy to show that $A$ satisfies these four conditions with $q=2$, $k_0=0$, $c_0=c_1=\epsilon/2$, and $k_1=0$. According to Definition 4.7 of a weak upper (lower) solution in [@Du2006], and following Lemma 5.2 in [@Lam2016], we have the following lemma. \[upperlowersufficientcondition\] $w$ is a weak upper (lower) solution of Eq. (\[ellipticequationintroduction\]) if 1. $w\in C[0,1]$; 2. there exists a partition $0=x_0<x_1<x_2<\cdots<x_{k-1}<x_k=1$ such that for all $i=0,\cdots,k-1$, $w\in C^2[x_i,x_{i+1}]$ and satisfies $$\begin{aligned} \frac{\epsilon}{2}w_{xx}+(2w-1)w_x+\Omega_A(1-w)-\Omega_Dw\le 0 (\ge 0)\ {\rm in}\ [x_i,x_{i+1}]; \end{aligned}$$ 3.
for all $i=1,\cdots,k-1$, $w_x(x_i^-)\ge w_x(x_i^+)(\le)$; 4. $w(0)\ge\alpha(\le)$, $w(1)\ge 1-\beta(\le)$. The lemma can be easily verified via integration by parts. Based on Lemma \[upperlowersufficientcondition\] and Theorem 4.9 in [@Du2006], we have \[upperlower2W12\] Suppose that $v$ and $w$ satisfy the sufficient conditions in Lemma \[upperlowersufficientcondition\] for weak lower and upper solutions of Eq. (\[ellipticequationintroduction\]) respectively, and $m\le v\le w\le M$ in $(0,1)$ for some constants $m,M$. Then Eq. (\[ellipticequationintroduction\]), whose quasi-linear part $A$ (see Eq. (\[upperlowerform\])) satisfies the four conditions listed above, has a weak solution $\rho\in W^{1,2}(0,1)$ satisfying $v\le \rho\le w$ a.e. in $(0,1)$. For all $x\in (0,1)$, $\xi \in R$, and $t\in[v(x),w(x)]\subset[m,M]$, $$\begin{aligned} |p(x,t,\xi)|=|-(2t-1)\xi-\Omega_A(1-t)+\Omega_Dt|\le \max(|2M-1|,|2m-1|)|\xi|+(\Omega_A+\Omega_D)\max(|M|,|m|)+\Omega_A.\end{aligned}$$ The proof follows directly from Theorem 4.9 in [@Du2006]. Preliminaries: properties of an ordinary differential equation {#SubSecB} -------------------------------------------------------------- Before constructing upper and lower weak solutions of Eq. (\[ellipticequationintroduction\]), we take a look at the following first-order ordinary differential equation (ODE), which will be very useful in the construction of upper and lower solutions of Eq. (\[ellipticequationintroduction\]) with a specific phase diagram ([*i.e.*]{} when a domain wall or boundary layer appears). $$\label{equationwresearch} \frac{\epsilon}{2}\partial_xw=-(w-A)(w-(1-A)),\ A\le 1/2,\ w(x_0)=w_0.$$ Let $\tilde{w}(x):=w(\epsilon (x-x_0)+x_0)=w(y)$, $y:=\epsilon (x-x_0)+x_0$.
Then $$\begin{aligned} &&\partial_x\tilde{w}(x)=\partial_xw(y)=\epsilon\partial_{y}w(y)\\ &=&\epsilon\frac{2}{\epsilon}[-(w(y)-A)(w(y)-(1-A))]\\ &=&-2(\tilde{w}(x)-A)(\tilde{w}(x)-(1-A)).\end{aligned}$$ Note that $\tilde{w}(x)$ is independent of $\epsilon$, while $w$ is a contraction of $\tilde{w}$ in the $x$ direction, centred at $x_0$. One can easily verify that $w(x_0)=\tilde{w}(x_0)=w_0$, and that $\tilde{w}(x)$ has the following properties. - If $w_0\ge(1-A)$, then $\lim_{x\to +\infty}\tilde{w}(x)=1-A$ from above. - If $w_0\le A$, then $\lim_{x\to -\infty}\tilde{w}(x)=A$ from below. - If $A<w_0<1-A$, then $\lim_{x\to +\infty}\tilde{w}(x)=1-A$ from below, and $\lim_{x\to -\infty}\tilde{w}(x)=A$ from above. As a consequence, one can easily show that $w(x)$ has the following properties. - If $w_0\ge(1-A)$, then for any $x>x_0$, $\lim_{\epsilon\to 0^+}w(x)=\lim_{\epsilon\to 0^+}\tilde{w}(x_0+(x-x_0)/\epsilon)=1-A$. - If $w_0\le A$, then for any $x<x_0$, $\lim_{\epsilon\to 0^+}w(x)=\lim_{\epsilon\to 0^+}\tilde{w}(x_0+(x-x_0)/\epsilon)=A$. - If $A<w_0<(1-A)$, then for any $x>x_0$, $\lim_{\epsilon\to 0^+}w(x)=\lim_{\epsilon\to 0^+}\tilde{w}(x_0+(x-x_0)/\epsilon)=1-A$, and for any $x<x_0$, $\lim_{\epsilon\to 0^+}w(x)=\lim_{\epsilon\to 0^+}\tilde{w}(x_0+(x-x_0)/\epsilon)=A$. In fact, $w$ satisfies $$\begin{aligned} 0=\frac{\epsilon}{2}w_{xx}+(2w-1)w_x.\end{aligned}$$ This is the continuum limit of a pure TASEP without LK, which can only generate a domain wall from down to up [@Krug1991; @DerridaRecursion1992; @DerridaMatrix1993; @SchutzRecursion1993]; that is, at the location $x_w$ of the domain wall, $w(x_w^-)<w(x_w^+)$. The domain wall in the TASEP-LK coupled process may be similar to that in the pure TASEP. Perhaps this explains why no domain wall from up to down is found in numerical computations and stochastic simulations [@Parmeggiani2004; @Zhang20101; @Zhang2012]. To illustrate the basic properties of the solution $w$ of Eq.
(\[equationwresearch\]), three typical examples of $w$, obtained by choosing the parameter $A=0.25$ and with the conditions $w(0.5)=0.5$, $w(0)=1$, and $w(1)=0$ respectively, are plotted in Fig. \[integrate\_figures\_Kneq1\_7to11\][**f**]{}. For each example, $w$ is plotted for three values of the parameter $\epsilon$, namely $\epsilon=0.1,0.05,0.01$. These plots show that the smaller the value of $\epsilon$, the steeper the domain wall (or boundary layer). From the above discussion, we conclude that the limit $\hat{w}$ of $w$ as $\epsilon\to 0$ is as follows. - If $w(0)=w_0\ge1-A$, $$\begin{aligned} \hat{w}=\left\{ \begin{array}{ll} w_0, & x=0, \\ 1-A, & 0<x\le 1. \\ \end{array} \right. \end{aligned}$$ - If $w(1)=w_0\le A$, $$\begin{aligned} \hat{w}=\left\{ \begin{array}{ll} A, & 0\le x<1, \\ w_0, & x=1. \\ \end{array} \right. \end{aligned}$$ - If $A< w(x_0)=w_0< 1-A$ for some $x_0\in(0,1)$, $$\begin{aligned} \hat{w}=\left\{ \begin{array}{ll} A, & 0\le x<x_0, \\ w_0, & x=x_0, \\ 1-A, & x_0<x\le 1. \\ \end{array} \right. \end{aligned}$$ An obvious corollary of the above property is that $\lim_{\epsilon\to 0^+}\partial_xw(x)=0$ for $x\neq x_0$. Construction of upper and lower weak solutions of Eq. (\[ellipticequationintroduction\]): special cases $\Omega_A=\Omega_D=\Omega$ {#steadySpecialCases} ---------------------------------------------------------------------------------------------------------------------------------- According to Theorem \[upperlower2W12\], to show that there exists a $W^{1,2}(0,1)$ weak solution of Eq. (\[ellipticequationintroduction\]) which tends to $f$ as $\epsilon\to 0$, it is sufficient to construct two functions $\rho_u$ and $\rho_l$ satisfying $\rho_u\ge\rho_l$ such that there exists $\epsilon_0>0$ for which, for any $0<\epsilon<\epsilon_0$, $\rho_u$ and $\rho_l$ are upper and lower weak solutions of Eq. (\[ellipticequationintroduction\]) respectively, [*i.e.*]{}, they satisfy the conditions in Lemma \[upperlowersufficientcondition\].
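The boundary-layer behaviour of the auxiliary ODE (\[equationwresearch\]) described in the preceding subsection can also be reproduced numerically. A sketch (assuming SciPy is available) for the $A=0.25$, $w(0.5)=0.5$ example discussed above:

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_w(eps, A=0.25, x0=0.5, w0=0.5):
    """Integrate eps/2 * w' = -(w - A)(w - (1 - A)), w(x0) = w0,
    over [0, 1]; parameter values match the illustrative example."""
    rhs = lambda x, w: -(2.0 / eps) * (w - A) * (w - (1.0 - A))
    fwd = solve_ivp(rhs, (x0, 1.0), [w0], dense_output=True,
                    rtol=1e-8, atol=1e-10)
    bwd = solve_ivp(rhs, (x0, 0.0), [w0], dense_output=True,
                    rtol=1e-8, atol=1e-10)
    def w(x):
        x = np.asarray(x, dtype=float)
        # glue the forward and backward integrations at x0:
        return np.where(x >= x0, fwd.sol(np.clip(x, x0, 1.0))[0],
                                 bwd.sol(np.clip(x, 0.0, x0))[0])
    return w
```

Evaluating `solve_w(eps)` for decreasing `eps` reproduces the steepening of the transition from $A$ to $1-A$ around $x_0$ seen in the figure.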
Note that $\rho_u$ and $\rho_l$ may depend on $\epsilon$. For convenience, we denote $$\label{Ldefinition} L\rho:=\frac{\epsilon}{2}\partial_x^2\rho+(2\rho-1)\partial_x\rho+\Omega_A(1-\rho)-\Omega_D\rho.$$ Our main aim in this subsection and the next is to construct functions $\rho_u$ and $\rho_l$ with the following properties, for the different boundary parameters $\alpha$ and $\beta$ of Eq. (\[ellipticequationintroduction\]). A detailed description of how this method works will be given in Lemma \[hatwlemma\], after defining the notions of ‘$\Delta$ neighbourhood’ and ‘arbitrarily close’. - By choosing corresponding parameter values, $\rho_u$ and $\rho_l$ can be arbitrarily close to $f$. Meanwhile, $\rho_u\ge \rho_l$ for $x\in[0,1]$. - $\rho_u$ and $\rho_l$ are continuous, and they are piecewise functions in $C^2(0,1)$. At splitting points $0<x_i<1$, $\partial^-_x\rho_u(x_i)\ge\partial^+_x\rho_u(x_i)$ and $\partial^-_x\rho_l(x_i)\le\partial^+_x\rho_l(x_i)$. - For any $\epsilon$ small enough, $\rho_u(0)\ge\alpha\ge \rho_l(0)$, and $\rho_u(1)\ge1-\beta\ge \rho_l(1)$. - For any $\epsilon$ small enough, $L\rho_u\le 0\le L\rho_l$. In this paper, the meaning of the functions $\rho_u$ and $\rho_l$ being arbitrarily close to the numerical solution $f$ is defined as follows. \[difinitionofneighoff\] Suppose $f$ is a piecewise continuous function with discontinuity points $x_1,x_2,\cdots,x_k\in[0,1]$. We say that a function $\rho$ belongs to the $\Delta$ neighbourhood of $f$ if $$\begin{aligned} \left\{ \begin{array}{ll} f(x)-\Delta < \rho(x)< f(x)+\Delta, & x\in[0,1]\setminus\cup_{i=1}^k[x_i-\Delta,x_i+\Delta], \\ \min_{y\in[0,1]\cap[x_i-\Delta,x_i+\Delta]}f(y)-\Delta< \rho(x)< \max_{y\in[0,1]\cap[x_i-\Delta,x_i+\Delta]}f(y)+\Delta, & x\in [0,1]\cap[x_i-\Delta,x_i+\Delta].
\end{array} \right.\end{aligned}$$ A sufficient condition for $\rho$ to belong to the $\Delta$ neighbourhood of $f$ is that there exist $\rho_u,\rho_l$ belonging to the $\Delta$ neighbourhood of $f$ with $\rho_l\le\rho\le\rho_u$ for $x\in[0,1]$. For a family of functions $\rho^{\epsilon}$ indexed by $\epsilon>0$, we say that $\rho^{\epsilon}\to f$ as $\epsilon\to 0^+$ if, for any $\Delta>0$, there exists $\epsilon_0>0$ such that, for any $0<\epsilon<\epsilon_0$, $\rho^{\epsilon}$ belongs to the $\Delta$ neighbourhood of $f$. For the $\Delta$ neighbourhood of $f$, we have the following result. \[neighbourofneighbour\] Suppose $f$ is a piecewise continuous function with discontinuity points $x_1,x_2,\cdots,x_k\in[0,1]$, and $g$ is a piecewise continuous function with discontinuity points $y_1,y_2,\cdots,y_l\in[0,1]$. For $\Delta$ small enough, if $\rho$ belongs to the $\Delta/2$ neighbourhood of $g$, and $g$ belongs to the $\Delta/2$ neighbourhood of $f$, then $\rho$ belongs to the $\Delta$ neighbourhood of $f$. Choose $\Delta$ small enough that $\Delta<\min_{1\le i\le l}|g(y_i^+)-g(y_i^-)|$ and the intervals $[x_j-\Delta,x_j+\Delta]$, $1\le j\le k$, are pairwise disjoint. Suppose there exists $y_i\notin \cup_{j=1}^k[x_j-\Delta/2,x_j+\Delta/2]$; then $g(y_i^+)$ and $g(y_i^-)$ cannot both lie in the interval $(f(y_i)-\Delta/2,f(y_i)+\Delta/2)$, contradicting the assumption of the lemma. Thus, $\forall\ 1\le i\le l$, $\exists\ 1\le j\le k$, such that $[y_i-\Delta/2,y_i+\Delta/2]\subset[x_j-\Delta,x_j+\Delta]$.
Therefore, for $x\in[x_i-\Delta,x_i+\Delta]$, we always have $$\begin{aligned} \rho(x) \le \max_{x\in[x_i-\Delta,x_i+\Delta]}g(x)+\Delta/2 \le \max_{x\in[x_i-\Delta,x_i+\Delta]}\{\max_{y\in[x_i-\Delta,x_i+\Delta]}f(y)+\Delta/2\}+\Delta/2=\max_{y\in[x_i-\Delta,x_i+\Delta]}f(y)+\Delta.\end{aligned}$$ For $x\notin\cup_{i=1}^k[x_i-\Delta,x_i+\Delta]$, $$\begin{aligned} \rho(x) \le g(x)+\Delta/2 \le f(x)+\Delta/2+\Delta/2=f(x)+\Delta.\end{aligned}$$ The lower bound can be discussed similarly. In this subsection, we discuss the special cases in which $\Omega_A=\Omega_D=\Omega$. For these cases, the elliptic boundary value problem (\[ellipticequationintroduction\]) reduces to the following form, $$\label{OriginalPro} \left\{ \begin{array}{ll} L\rho=\frac{\epsilon}{2}\partial_x^2\rho+(2\rho-1)(\partial_x\rho-\Omega)=0,\quad &x\in(0,1),\\ \rho(0)=\alpha,\ \rho(1)=1-\beta. \end{array} \right.$$ In the following discussion, the same symbols are reused in different sub-subsections, such as $\rho_u$ for the upper solution, $\rho_l$ for the lower solution, and $f$ for the limit solution of Eq. (\[ellipticequationintroduction\]) as $\epsilon\to 0$. In the remainder of this subsection, we construct the upper solution $\rho_u$ and lower solution $\rho_l$ of Eq. (\[ellipticequationintroduction\]) by splitting the parameter space into several different domains, [**(1)**]{} $\alpha+\Omega>\beta$, $\beta+\Omega>\alpha$, and $\alpha+\beta+\Omega<1$, [**(2)**]{} $\alpha<0.5$, $\beta<0.5$, and $\alpha+\beta+\Omega>1$, [**(3)**]{} $\alpha>0.5$, and $0.5-\Omega<\beta<0.5$, [**(4)**]{} $\alpha>\beta+\Omega$, $\beta<0.5-\Omega$, and $\alpha+\beta+\Omega<1$, [**(5)**]{} $\alpha>\beta+\Omega$, $\beta<0.5-\Omega$, and $\alpha+\beta+\Omega>1$, [**(6)**]{} $\alpha>0.5$, and $\beta>0.5$. Other cases can be obtained by the particle-hole symmetry [@Parmeggiani2003], see Fig. \[TASEP\_diagram\_figure\][**Right**]{}.
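The six parameter domains just listed are easy to mechanise. A small helper (a sketch; the numeric case labels are our own, matching the order in the text) that reports which of the listed domains a given $(\alpha,\beta,\Omega)$ falls in:

```python
def phase_cases(alpha, beta, Omega):
    """Return the labels of the parameter domains (1)-(6), as listed
    in the text for the special case Omega_A = Omega_D = Omega.
    Domains not covered here follow by particle-hole symmetry."""
    cases = {
        1: alpha + Omega > beta and beta + Omega > alpha
           and alpha + beta + Omega < 1,
        2: alpha < 0.5 and beta < 0.5 and alpha + beta + Omega > 1,
        3: alpha > 0.5 and 0.5 - Omega < beta < 0.5,
        4: alpha > beta + Omega and beta < 0.5 - Omega
           and alpha + beta + Omega < 1,
        5: alpha > beta + Omega and beta < 0.5 - Omega
           and alpha + beta + Omega > 1,
        6: alpha > 0.5 and beta > 0.5,
    }
    return [k for k, holds in cases.items() if holds]
```

For instance, `phase_cases(0.2, 0.2, 0.1)` returns `[1]`, the domain treated in the first sub-subsection below.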
### For cases $\alpha+\Omega>\beta$, $\beta+\Omega>\alpha$, and $\alpha+\beta+\Omega<1$ For these cases, the expression of $f$ is as follows, $$\label{numericalsolution} f(x)=\left\{ \begin{array}{ll} \alpha+\Omega x, & 0\le x\le \frac{\beta-\alpha}{2\Omega}+\frac{1}{2}, \\ 1-\beta-\Omega+\Omega x, & \frac{\beta-\alpha}{2\Omega}+\frac{1}{2}<x\le1. \end{array} \right.$$ See the solid line in Fig. \[integrate\_figures\_Keq1\][**a**]{}. The upper solution $\rho_u$ of Eq. (\[ellipticequationintroduction\]) is constructed by the following process. Let $x_u=\frac{\beta-\alpha}{2\Omega}+\frac{1}{2}-\frac{\delta}{2}$, $A=\frac{\alpha+\beta+\Omega}{2}$, and let $w_u$ be the solution of the ODE $$\label{equationwu1} \frac{\epsilon}{2}\partial_xw_u=-(w_u-A)(w_u-(1-A)),\quad w_u(x_u)=\frac{1}{2}.$$ Then $\rho_u$ is given by $$\label{uppersolution} \rho_u=\left\{ \begin{array}{ll} w_u+\Omega[x-(x_u+\frac{\delta}{4})], & x\le x_u+\frac{\delta}{4},\\ w_u+\Omega'[x-(x_u+\frac{\delta}{4})], & x>x_u+\frac{\delta}{4}, \end{array} \right.$$ where $\Omega'$ satisfies $\Omega>\Omega'$ and $\rho_u(1)=w_u(1)+\Omega'[1-(x_u+\frac{\delta}{4})]>1-\beta$. Such an $\Omega'$ exists if $w_u(1)+\Omega[1-(x_u+\frac{\delta}{4})]>1-\beta$. In fact, one can easily show that $$1-\beta=(1-A)+\Omega(1-x_u-\frac{\delta}{2}).$$ Thus, $w_u(1)+\Omega[1-(x_u+\frac{\delta}{4})]>1-\beta$ is equivalent to $$w_u(1)>(1-A)-\frac{\delta}{4}\Omega,$$ which holds for $\epsilon$ small enough. Similarly, the lower solution $\rho_l$ is given as follows, $$\label{lowersolution} \rho_l=\left\{ \begin{array}{ll} w_l+\Omega''[x-(x_l-\frac{\delta}{4})], & x\le x_l-\frac{\delta}{4},\\ w_l+\Omega[x-(x_l-\frac{\delta}{4})], & x>x_l-\frac{\delta}{4}.
\\ \end{array} \right.$$ Here, $x_l=\frac{\beta-\alpha}{2\Omega}+\frac{1}{2}+\frac{\delta}{2}$, $w_l$ is the solution of the ODE $$\label{equationwu10} \frac{\epsilon}{2}\partial_xw_l=-(w_l-A)(w_l-(1-A)),\quad w_l(x_l)=\frac{1}{2},$$ $\Omega''$ satisfies $\Omega>\Omega''$ and $\rho_l(0)=w_l(0)+\Omega''[0-(x_l-\frac{\delta}{4})]<\alpha$. See dashed lines in Fig. \[integrate\_figures\_Keq1\][**a**]{} for examples of $\rho_u$, $\rho_l$. For convenience, we denote the $\epsilon\to0$ limit of $w_u, \rho_u, w_l, \rho_l$ by $\hat{w}_u, \hat{\rho}_u, \hat{w}_l, \hat{\rho}_l$ respectively. One can easily verify that, $$\begin{aligned} \label{upperhatsolution} \hat{w}_u=\left\{ \begin{array}{ll} A, & 0\le x<x_u, \\ 1/2, & x=x_u, \\ 1-A, & x_u<x\le 1, \end{array} \right. \qquad \hat{\rho}_u=\left\{ \begin{array}{ll} \hat{w}_u+\Omega[x-(x_u+\frac{\delta}{4})], & x\le x_u+\frac{\delta}{4},\\ \hat{w}_u+\Omega'[x-(x_u+\frac{\delta}{4})], & x>x_u+\frac{\delta}{4}, \end{array} \right.\end{aligned}$$ and $$\begin{aligned} \label{lowerhatsolution} \hat{w}_l=\left\{ \begin{array}{ll} A, & 0\le x<x_l, \\ 1/2, & x=x_l, \\ 1-A, & x_l<x\le 1, \end{array} \right. \qquad \hat{\rho}_l=\left\{ \begin{array}{ll} \hat{w}_l+\Omega''[x-(x_l-\frac{\delta}{4})], & x\le x_l-\frac{\delta}{4},\\ \hat{w}_l+\Omega[x-(x_l-\frac{\delta}{4})], & x>x_l-\frac{\delta}{4}. \end{array} \right.\end{aligned}$$ From the functions $\rho_u, \rho_l$ and $\hat{\rho}_u, \hat{\rho}_l$, the existence of $W^{1,2}(0,1)$ weak solution of Eq. (\[OriginalPro\]) can be obtained by the following Lemma. \[hatwlemma\] If the following four conditions are satisfied, then $\forall\, \Delta>0$, there exists $\epsilon_0>0$, such that for any $\epsilon<\epsilon_0$, there exists a $W^{1,2}(0,1)$ weak solution $\rho_s$ of Eq. (\[OriginalPro\]), which belongs to the $\Delta$ neighbourhood of $f$. Or equivalently, there exists a $W^{1,2}(0,1)$ weak solution $\rho_s$ of Eq. (\[OriginalPro\]) which tends to $f$ as $\epsilon\to 0$. 
- $\forall\, \Delta>0$, there exist $\hat{\rho}_u,\hat{\rho}_l$ belonging to the $\Delta/2$ neighbourhood of $f$. - $\forall\, \Delta>0$, there exists $\epsilon_1>0$, such that $\forall\, \epsilon<\epsilon_1$, $\rho_u$, $\rho_l$ belong to the $\Delta/2$ neighbourhood of $\hat{\rho}_u$ and $\hat{\rho}_l$ respectively. - There exists $\epsilon_2>0$, $\forall\, \epsilon<\epsilon_2$, $\rho_u\ge \rho_l$. - There exists $\epsilon_3>0$, $\forall\, \epsilon<\epsilon_3$, $\rho_u$ and $\rho_l$ are the upper and lower solutions of Eq. (\[OriginalPro\]) respectively ([*i.e.*]{}, they satisfy the sufficient conditions in Lemma \[upperlowersufficientcondition\]). For any $\Delta>0$, let $\epsilon_0=\min\{\epsilon_1,\epsilon_2,\epsilon_3\}$. Then for any $\epsilon<\epsilon_0$, based on the first two conditions and Lemma \[neighbourofneighbour\], we know that $\rho_u$ and $\rho_l$ belong to the $\Delta$ neighbourhood of $f$. Therefore, according to the third and fourth conditions as well as Theorem \[upperlower2W12\], there exists a $W^{1,2}(0,1)$ weak solution $\rho_s$ of Eq. (\[OriginalPro\]), satisfying $\rho_l\le \rho_s\le \rho_u$. Finally, using the sufficient condition in Definition \[difinitionofneighoff\], $\rho_s$ belongs to the $\Delta$ neighbourhood of $f$. We claim that functions $\rho_u, \rho_l$ and $\hat{\rho}_u, \hat{\rho}_l$, as given in Eqs. (\[uppersolution\],\[lowersolution\],\[upperhatsolution\],\[lowerhatsolution\]), satisfy the four conditions in Lemma \[hatwlemma\]. Since $\lim_{\epsilon\to0}\rho_u=\hat{\rho}_u$, $\lim_{\epsilon\to0}\rho_l=\hat{\rho}_l$, the second condition is naturally satisfied. From Eqs. (\[numericalsolution\],\[upperhatsolution\],\[lowerhatsolution\]) and the definitions of $A, x_u, x_l$, one can easily show that by choosing $\delta$ small enough, $\hat{\rho}_u$ and $\hat{\rho}_l$ can be arbitrarily close to $f$. 
Meanwhile, one can also verify that if $\Omega-\Omega'$ and $\Omega-\Omega''$ are small enough, then $\rho_u>\rho_l$ is valid $\forall\, \epsilon>0$. Therefore, the first and the third conditions in Lemma \[hatwlemma\] are satisfied. In the following discussion, we show that $\rho_u$ is an upper solution of Eq. (\[OriginalPro\]), [*i.e.*]{}, that it satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\]. Taking the derivative of Eq. (\[equationwu1\]), we obtain $$\label{equationwu2} \frac{\epsilon}{2}\partial_x^2w_u+(2w_u-1)\partial_xw_u=0.$$ Substituting Eq. (\[uppersolution\]) into Eq. (\[OriginalPro\]), and using Eq. (\[equationwu2\]), we have $$L\rho_u=\frac{\epsilon}{2}\partial_x^2w_u+(2\rho_u-1)\partial_xw_u=2\Omega[x-(x_u+\frac{\delta}{4})]\partial_xw_u\le0,\quad {\rm for}\ x\le x_u+\frac{\delta}{4},$$ since $\partial_xw_u>0$ on the interval $[0, 1]$, see Eq. (\[equationwu1\]). Meanwhile, $$L\rho_u=\frac{\epsilon}{2}\partial_x^2w_u+(2\rho_u-1)(\partial_xw_u+\Omega'-\Omega)=2\Omega'[x-(x_u+\frac{\delta}{4})]\partial_xw_u+(2\rho_u-1)(\Omega'-\Omega)<0,\quad {\rm for}\ x> x_u+\frac{\delta}{4},$$ which holds if $\epsilon$ is small enough, since $\partial_xw_u(x)\to0$ as $\epsilon\to0$, uniformly for $1\ge x>x_u+\delta/4$ \[see Eq. (\[upperhatsolution\])\], $\Omega'<\Omega$, and $2\rho_u-1>0$ for $ x> x_u+\frac{\delta}{4}$ \[see Eqs. (\[equationwu1\],\[uppersolution\])\]. It is obvious that $\rho_u$ is continuous at $x=x_u+\frac{\delta}{4}$, and $\partial_x^-\rho_u(x_u+\frac{\delta}{4})=\partial_xw_u(x_u+\frac{\delta}{4})+\Omega>\partial_xw_u(x_u+\frac{\delta}{4})+\Omega'=\partial_x^+\rho_u(x_u+\frac{\delta}{4})$. Meanwhile, since $w_u(0)>A$, $$\rho_u(0)=w_u(0)+\Omega[-(x_u+\frac{\delta}{4})]=w_u(0)-A+\Omega\frac{\delta}{4}+A+\Omega[-(x_u+\frac{\delta}{2})]=w_u(0)-A+\Omega\frac{\delta}{4}+\alpha>\alpha,$$ and the discussion below Eq.
(\[uppersolution\]) gives $$\rho_u(1)=w_u(1)+\Omega'[1-(x_u+\frac{\delta}{4})]>1-\beta.$$ From Lemma \[upperlowersufficientcondition\], we conclude that $\rho_u$ is an upper solution of Eq. (\[OriginalPro\]). By similar methods one can show that $\rho_l$, given by Eq. (\[lowersolution\]), is a lower solution of Eq. (\[OriginalPro\]). Therefore, the fourth condition in Lemma \[hatwlemma\] is satisfied. So the functions $\rho_u, \rho_l$ and $\hat{\rho}_u, \hat{\rho}_l$, as given in Eqs. (\[uppersolution\],\[lowersolution\],\[upperhatsolution\],\[lowerhatsolution\]), satisfy all four conditions of Lemma \[hatwlemma\]. Hence, for these special cases, [*i.e.*]{}, for $\alpha+\Omega>\beta$, $\beta+\Omega>\alpha$, and $\alpha+\beta+\Omega<1$, there exists a $W^{1,2}(0,1)$ weak solution $\rho_s$ of Eq. (\[OriginalPro\]) which tends to $f$ as $\epsilon\to 0$. ### For cases $\alpha<0.5$, $\beta<0.5$, and $\alpha+\beta+\Omega>1$ For these cases, the expression of $f$ is as follows, $$f(x)=\left\{ \begin{array}{ll} \alpha+\Omega x, & 0\le x\le \frac{0.5-\alpha}{\Omega}, \\ 0.5, & \frac{0.5-\alpha}{\Omega}< x\le 1-\frac{0.5-\beta}{\Omega}, \\ 1-\beta-\Omega+\Omega x, \quad & 1-\frac{0.5-\beta}{\Omega}<x\le1. \end{array} \right.$$ See the solid line in Fig. \[integrate\_figures\_Keq1\][**b**]{} for an example. The upper solution $\rho_u$ is constructed by the following steps. Define $x_q=1-\frac{0.5-\beta}{\Omega}$, $x_q'=\frac{0.5-\alpha}{\Omega}$, and $$q_u=\frac{\delta}{4}+0.5+C(x-x_q)^2.$$ Here the constant $C$ depends only on $\delta$ and $\Omega$, and is chosen large enough that $q_u(x)>f(x)$ for $x>x_q$. The inequality $\alpha+\beta+\Omega>1$ implies $x_q>x_q'$. Choose another constant $\Omega'$ smaller than $\Omega$, [*i.e.*]{}, $\Omega'<\Omega$. Let $x_1=-\sqrt{\frac{\delta}{4C}}+x_q$, [*i.e.*]{}, $x_1$ is the smaller root of the equation $q_u(x)=\frac{\delta}{2}+0.5$.
Meanwhile, let $x_2=\frac{\Omega'}{2C}+x_q$, [*i.e.*]{}, $x_2$ satisfies the equation $$\partial_xq_u(x)=\Omega'.$$ Now we define $\rho_u$ as follows, $$\rho_u=\left\{ \begin{array}{ll} \frac{\delta}{2}+0.5+\Omega (x-x_q'), \quad & 0\le x\le x_q', \\ \frac{\delta}{2}+0.5, & x_q'< x\le x_1, \\ q_u, & x_1<x\le x_2,\\ q_u(x_2)+\Omega' (x-x_2), & x_2<x\le1, \end{array} \right.$$ where $\Omega-\Omega'$ is required to be small enough that $g(\Omega'):=\rho_u(1)=\frac{\delta}{4}+0.5+(1-x_q)\Omega'-\frac{\Omega'^2}{4C}>1-\beta$. One can easily verify that $g(\Omega)=\frac{\delta}{4}+1-\beta-\frac{\Omega^2}{4C}$, so $g(\Omega)>1-\beta$ for $C$ large enough (independent of $\epsilon$). Thus, such a constant $\Omega'$ exists. The lower solution $\rho_l$ can be obtained by the same method. Let $$\begin{aligned} q_l=-\frac{\delta}{4}+0.5-C'(x-x_q')^2,\end{aligned}$$ with the constant $C'>0$ large enough that $q_l<f$ for $x<x_q'$. Define $x_3$ as the larger root of $q_l=-\frac{\delta}{2}+0.5$, [*i.e.*]{}, $x_3=\sqrt{\frac{\delta}{4C'}}+x_q'$. Define $x_4$ as the root of $\partial_xq_l(x)=\Omega''$, with the constant $\Omega''$ satisfying $\Omega''<\Omega$. One easily gets $x_4=-\frac{\Omega''}{2C'}+x_q'$. Then the lower solution $\rho_l$ is given as follows, $$\begin{aligned} \rho_l=\left\{ \begin{array}{ll} q_l(x_4)+\Omega''(x-x_4), & 0\le x\le x_4, \\ q_l, & x_4< x\le x_3, \\ -\frac{\delta}{2}+0.5, & x_3<x\le x_q, \\ -\frac{\delta}{2}+0.5+\Omega(x-x_q), \quad & x_q<x\le1. \end{array} \right.\end{aligned}$$ One can verify that for $\Omega-\Omega''$ small enough, $\rho_l(0)<\alpha$. Examples of $\rho_u$ and $\rho_l$ are plotted in Fig. \[integrate\_figures\_Keq1\][**b**]{} (dashed lines). We claim that $\rho_u$ and $\rho_l$ given here satisfy all four conditions listed in Lemma \[hatwlemma\]. As stated before, the second condition is satisfied naturally.
If $\Omega-\Omega'$ and $\Omega-\Omega''$ are small enough, then $\rho_u>\rho_l$ holds, so the third condition is satisfied. For these special cases, $\rho_u$ and $\rho_l$ are independent of $\epsilon$, and they can be arbitrarily close to $f$ if $\delta$ is small enough; this means that the first condition is satisfied. To show that the final condition of Lemma \[hatwlemma\] is satisfied, we only need to verify that $\rho_u$ and $\rho_l$ satisfy the sufficient conditions in Lemma \[upperlowersufficientcondition\]. In the following, we only show that $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and therefore is an upper solution; the lower solution $\rho_l$ can be verified similarly. One can easily show that $\rho_u(0)=\frac{\delta}{2}+\alpha>\alpha$, $\rho_u(1)=g(\Omega')>1-\beta$, and $\partial_x^- \rho_u(x)\ge \partial_x^+ \rho_u(x)$ for any $x\in(0,1)$. Therefore, we only need to verify that $\rho_u$ satisfies the second condition in Lemma \[upperlowersufficientcondition\]. Substituting $\rho_u$ into Eq. (\[OriginalPro\]), we obtain: [**(1)**]{} for $0\le x\le \frac{0.5-\alpha}{\Omega}$, $L\rho_u=0$; [**(2)**]{} for $\frac{0.5-\alpha}{\Omega}< x\le x_1$, $L\rho_u=-\Omega\delta<0$; [**(3)**]{} for $x_1<x\le x_2$, $L\rho_u=\epsilon C+(2q_u-1)(\partial_xq_u-\Omega)<\epsilon C+\frac{\delta}{2}(\Omega'-\Omega)<0$; and [**(4)**]{} for $x_2<x\le1$ and $\epsilon$ small enough, $L\rho_u=(2\rho_u-1)(\Omega'-\Omega)<0$. Therefore, $\rho_u$ satisfies all the sufficient conditions in Lemma \[upperlowersufficientcondition\], and hence is an upper solution of Eq. (\[OriginalPro\]). Finally, from Lemma \[hatwlemma\], we conclude that, for these special cases, [*i.e.*]{}, for $\alpha<0.5$, $\beta<0.5$, and $\alpha+\beta+\Omega>1$, there exists a $W^{1,2}(0,1)$ weak solution $\rho_s$ of Eq. (\[OriginalPro\]) which tends to $f$ as $\epsilon\to 0$.
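For this case the functions $f$, $\rho_u$ and $\rho_l$ are all explicit and $\epsilon$-independent, so the ordering $\rho_l\le f\le\rho_u$ can be checked numerically. A sketch with illustrative parameter values of our own choosing ($\alpha=\beta=0.45$, $\Omega=0.2$, $\delta=0.05$, $C=C'=5$, $\Omega'=\Omega''=0.18$):

```python
import numpy as np

# Illustrative case-(2) parameters: alpha, beta < 0.5, alpha+beta+Omega > 1.
alpha, beta, Omega = 0.45, 0.45, 0.2
delta, C, Cp = 0.05, 5.0, 5.0       # delta: closeness; C, C': parabola curvatures
Om1 = Om2 = 0.18                    # Omega', Omega'' slightly below Omega

xq, xqp = 1 - (0.5 - beta) / Omega, (0.5 - alpha) / Omega
x1, x2 = xq - np.sqrt(delta / (4 * C)), xq + Om1 / (2 * C)
x3, x4 = xqp + np.sqrt(delta / (4 * Cp)), xqp - Om2 / (2 * Cp)

q_u = lambda x: delta / 4 + 0.5 + C * (x - xq) ** 2
q_l = lambda x: -delta / 4 + 0.5 - Cp * (x - xqp) ** 2

def f(x):       # limit profile for case (2)
    return np.select([x <= xqp, x <= xq], [alpha + Omega * x, 0.5],
                     1 - beta - Omega + Omega * x)

def rho_u(x):   # upper solution, four pieces
    return np.select([x <= xqp, x <= x1, x <= x2],
                     [delta / 2 + 0.5 + Omega * (x - xqp),
                      delta / 2 + 0.5, q_u(x)],
                     q_u(x2) + Om1 * (x - x2))

def rho_l(x):   # lower solution, four pieces
    return np.select([x <= x4, x <= x3, x <= xq],
                     [q_l(x4) + Om2 * (x - x4), q_l(x), -delta / 2 + 0.5],
                     -delta / 2 + 0.5 + Omega * (x - xq))

x = np.linspace(0.0, 1.0, 2001)
# rho_l <= f <= rho_u on [0, 1], as required for the sandwich argument:
sandwich = bool(np.all(rho_l(x) <= f(x)) and np.all(f(x) <= rho_u(x)))
```

For these parameter values the check confirms the ordering, together with $\rho_u(0)\ge\alpha$, $\rho_l(0)\le\alpha$, $\rho_u(1)\ge1-\beta$, $\rho_l(1)\le1-\beta$.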
### For cases $\alpha>0.5$, and $0.5-\Omega<\beta<0.5$

For these cases, the expression of $f$ is as follows, $$f(x)=\left\{ \begin{array}{ll} \alpha, & x=0, \\ 0.5, & 0< x\le 1-\frac{0.5-\beta}{\Omega}, \\ 1-\beta-\Omega+\Omega x, & 1-\frac{0.5-\beta}{\Omega}<x\le1. \end{array} \right.$$ See Fig. \[integrate\_figures\_Keq1\][**c**]{} (solid line) for an example of $f$. The upper solution $\rho_u$ of Eq. (\[OriginalPro\]) is constructed as follows. Let $w_u$ be the solution of the following ODE $$\label{equationwu} \frac{\epsilon}{2}\partial_xw_u=-[w_u-(0.5+\frac{\delta}{2})][w_u-(0.5-\frac{\delta}{2})],\ w_u(0)=\alpha,$$ where $\delta$ is chosen to be small enough such that $0.5+\frac{\delta}{2}<\alpha$. Denote $x_q=1-\frac{0.5-\beta}{\Omega}$, and $$q_u=\frac{\delta}{4}+0.5+C(x-x_q)^2,$$ where the constant $C$, which depends only on $\delta$ and $\Omega$, is chosen to be large enough such that $q_u(x)>f(x)$ for $x>x_q$. Let $x_1$ be the smaller root of $q_u(x)=w_u(x)$. One can easily show that $\lim_{\epsilon\to 0}x_1=-\sqrt{\frac{\delta}{4C}}+x_q$. For $\Omega'<\Omega$, denote $x_2=\frac{\Omega'}{2C}+x_q$, [*i.e.*]{}, $x_2$ satisfies the following equation $$\partial_xq_u(x)=\Omega'.$$ Then $\rho_u$ is given as follows, $$\rho_u=\left\{ \begin{array}{ll} w_u, & 0\le x\le x_1, \\ q_u, & x_1<x\le x_2, \\ q_u(x_2)+\Omega' (x-x_2), \quad & x_2<x\le1. \end{array} \right.$$ Here $\Omega-\Omega'$ is required to be small enough such that $\rho_u(1)=g(\Omega')=\frac{\delta}{4}+0.5+(1-x_q)\Omega'-\frac{\Omega'^2}{4C}>1-\beta$. Note that $g(\Omega)>1-\beta$ for $C$ large enough (independent of $\epsilon$). Thus, such an $\Omega'$ exists. See Fig. \[integrate\_figures\_Keq1\][**c**]{} for an example of $\rho_u$ (dashed line). We claim that $\rho_u$ satisfies all the conditions in Lemma \[hatwlemma\].
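The ODE defining $w_u$ is of logistic type and can be solved in closed form, which makes the boundary-layer behaviour of $w_u$ easy to check numerically. The following Python sketch (illustrative parameter values, not part of the proof) implements the closed-form solution and confirms that $w_u(0)=\alpha$ and that $w_u$ relaxes to the stable root $0.5+\delta/2$ outside an $O(\epsilon/\delta)$ layer at $x=0$:

```python
import math

# Closed-form solution of the layer ODE (eps/2) w' = -(w - p)(w - q), w(0) = alpha,
# with p = 0.5 + delta/2 and q = 0.5 - delta/2.  Writing v = (w - p)/(w - q), the
# ODE gives v' = -(2/eps)(p - q) v, so v(x) = v0 * exp(-2*delta*x/eps) and
# w(x) = (p - q*v(x)) / (1 - v(x)).  Parameter values below are illustrative only.
def w_u(x, eps, alpha, delta):
    p, q = 0.5 + delta / 2, 0.5 - delta / 2
    v0 = (alpha - p) / (alpha - q)
    v = v0 * math.exp(-2 * delta * x / eps)
    return (p - q * v) / (1 - v)

eps, alpha, delta = 1e-3, 0.8, 0.1   # satisfies 0.5 + delta/2 < alpha
assert abs(w_u(0.0, eps, alpha, delta) - alpha) < 1e-12
# outside an O(eps/delta) layer at x = 0, w_u sits at the stable root 0.5 + delta/2
assert abs(w_u(1.0, eps, alpha, delta) - (0.5 + delta / 2)) < 1e-9
```

This is the same explicit formula one would use to plot $w_u$ in Fig. \[integrate\_figures\_Keq1\][**c**]{}.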
Similar to the discussion in the previous subsection, we only need to verify that $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\]. One can easily show that $\rho_u(0)=w_u(0)=\alpha$, $\rho_u(1)=g(\Omega')>1-\beta$, and $\partial_x^- \rho_u(x)\ge \partial_x^+ \rho_u(x)$ is valid for any $x\in(0,1)$. Therefore, we only need to verify that $\rho_u$ satisfies the second condition in Lemma \[upperlowersufficientcondition\]. By substituting $\rho_u$ into Eq. (\[OriginalPro\]), we obtain that, [**(1)**]{} for $0\le x\le x_1$, $L\rho_u=-\Omega(2\rho_u-1)<0$, [**(2)**]{} for $x_1<x\le x_2$, $L\rho_u=\epsilon C+(2q_u-1)(\partial_xq_u-\Omega)<\epsilon C+\frac{\delta}{2}(\Omega'-\Omega)<0$, [**(3)**]{} for $x_2<x\le1$, $L\rho_u=(2\rho_u-1)(\Omega'-\Omega)<0$ if $\epsilon$ is small enough. Therefore, $\rho_u$ satisfies all the sufficient conditions in Lemma \[upperlowersufficientcondition\], and hence is an upper solution of Eq. (\[OriginalPro\]). The lower solution $\rho_l$ of Eq. (\[OriginalPro\]) is given by $$\rho_l=\left\{ \begin{array}{ll} -\frac{\delta}{2}+0.5, & 0\le x\le x_q, \\ -\frac{\delta}{2}+0.5+\Omega(x-x_q), \quad & x_q<x\le1. \end{array} \right.$$ See also Fig. \[integrate\_figures\_Keq1\][**c**]{} (dashed line) for an example of $\rho_l$. Similar to the discussion for the upper solution $\rho_u$, one can show that $\rho_l$ satisfies the conditions in Lemma \[upperlowersufficientcondition\], and therefore is a lower solution of Eq. (\[OriginalPro\]). In fact, one can verify that $\rho_l(0)=-\frac{\delta}{2}+0.5<\alpha$, $\rho_l(1)=-\frac{\delta}{2}+1-\beta<1-\beta$, and $\partial_x^- \rho_l(x)\le \partial_x^+ \rho_l(x)$ is valid for any $x\in(0,1)$. By substituting $\rho_l$ into Eq. (\[OriginalPro\]), we obtain that, [**(1)**]{} for $0\le x\le x_q$, $L\rho_l=\Omega\delta>0$, and [**(2)**]{} for $x_q<x\le 1$, $L\rho_l=0$. Therefore, $\rho_l$ is a lower solution of Eq. (\[OriginalPro\]).
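The piecewise values of $L\rho_l$ computed above can be reproduced numerically. The following Python sketch (illustrative parameters, not part of the proof) assumes Eq. (\[OriginalPro\]) has the form $L\rho=\frac{\epsilon}{2}\partial_x^2\rho+(2\rho-1)\partial_x\rho-\Omega(2\rho-1)$, consistent with the piecewise values stated in the text, and evaluates $L$ by central finite differences in the interior of each linear piece:

```python
# Finite-difference spot-check of L rho_l on the interior of each linear piece of
# the lower solution, assuming (illustratively) the K = 1 operator
#   L rho = (eps/2) rho'' + (2 rho - 1) rho' - Omega (2 rho - 1),
# consistent with the piecewise values in the text.  Parameters are illustrative.
eps, delta, Omega = 1e-4, 0.05, 0.8
beta = 0.4                       # satisfies 0.5 - Omega < beta < 0.5
x_q = 1 - (0.5 - beta) / Omega

def rho_l(x):
    if x <= x_q:
        return -delta / 2 + 0.5
    return -delta / 2 + 0.5 + Omega * (x - x_q)

def L(rho, x, h=1e-5):
    d1 = (rho(x + h) - rho(x - h)) / (2 * h)
    d2 = (rho(x + h) - 2 * rho(x) + rho(x - h)) / h**2
    return eps / 2 * d2 + (2 * rho(x) - 1) * d1 - Omega * (2 * rho(x) - 1)

# (1) on (0, x_q):  L rho_l = Omega * delta > 0
assert abs(L(rho_l, x_q / 2) - Omega * delta) < 1e-6
# (2) on (x_q, 1):  L rho_l = 0
assert abs(L(rho_l, (x_q + 1) / 2)) < 1e-6
```

The finite differences are exact on each linear piece, so the two asserted values are recovered up to rounding error.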
Meanwhile, let $\hat{\rho}_u=\lim_{\epsilon\to 0}\rho_u$, then for $\delta$ small enough, $\hat{\rho}_u$ and $\rho_l$ (note $\rho_l$ is independent of $\epsilon$) can be arbitrarily close to $f$. At the same time, $\rho_u>\hat{\rho}_u>\rho_l$ is valid $\forall\, \epsilon>0$ if $\Omega-\Omega'$ is small enough. The above analysis shows that $\rho_u$ and $\rho_l$ satisfy all the conditions in Lemma \[hatwlemma\], therefore, for these special cases, [*i.e.*]{}, for $\alpha>0.5$, and $0.5-\Omega<\beta<0.5$, there exists a $W^{1,2}(0,1)$ weak solution $\rho_s$ of Eq. (\[OriginalPro\]) which tends to $f$ as $\epsilon\to 0$. ### For cases $\alpha>\beta+\Omega$, $\beta<0.5-\Omega$, $\alpha+\beta+\Omega<1$ For these cases, the expression of $f$ is as follows, $$f(x)=\left\{ \begin{array}{ll} \alpha, & x=0, \\ 1-\beta-\Omega+\Omega x, & 0<x\le1. \end{array} \right.$$ See Fig. \[integrate\_figures\_Keq1\][**d**]{} (solid line) for an example of $f$. The upper solution $\rho_u$ is given by $$\rho_u=\frac{\delta}{2}+1-\beta+\Omega(x-1).$$ One can easily show that, $\rho_u(0)=\frac{\delta}{2}+1-\beta-\Omega>\alpha$, and $\rho_u(1)=\frac{\delta}{2}+1-\beta>1-\beta$. By substituting $\rho_u$ into Eq. (\[OriginalPro\]), one can verify that $L\rho_u=0$. Therefore, from Lemma \[upperlowersufficientcondition\], we know that $\rho_u$ is an upper solution of Eq. (\[OriginalPro\]). The lower solution $\rho_l$ is given by $$\rho_l=w_u+\Omega x,$$ where $w_u$ is the solution of the following ODE $$\label{equationwu} \frac{\epsilon}{2}\partial_xw_u=-[w_u-(1-\beta-\Omega)][w_u-(\beta+\Omega)],\quad w_u(0)=\alpha.$$ See Fig. \[integrate\_figures\_Keq1\][**d**]{} (dashed lines) for examples of $\rho_u$ and $\rho_l$. One can easily verify that $\rho_l(0)=w_u(0)=\alpha$, and $\rho_l(1)=w_u(1)+\Omega<1-\beta-\Omega+\Omega=1-\beta$. By substituting $\rho_l$ into Eq. (\[OriginalPro\]), we have $L\rho_l=2\Omega x\partial_xw_u\ge0$. 
Therefore, $\rho_l$ satisfies the conditions listed in Lemma \[upperlowersufficientcondition\], and is a lower solution of Eq. (\[OriginalPro\]). For these special cases, $\rho_u$ is independent of $\epsilon$, $\hat{\rho}_l=\lim_{\epsilon\to 0}\rho_l=f$. For $\delta$ small enough, $\rho_u$ can be arbitrarily close to $f$, and $\rho_u>\hat{\rho}_l>\rho_l$ $\forall \epsilon>0$. All the above analyses show that $\rho_u$ and $\rho_l$ satisfy the conditions in Lemma \[hatwlemma\], therefore, for these special cases, [*i.e.*]{}, for $\alpha>\beta+\Omega$, $\beta<0.5-\Omega$, and $\alpha+\beta+\Omega<1$, there exists a $W^{1,2}(0,1)$ weak solution $\rho_s$ of Eq. (\[OriginalPro\]) which tends to $f$ as $\epsilon\to 0$. ### For cases $\alpha>\beta+\Omega$, $\beta<0.5-\Omega$, and $\alpha+\beta+\Omega>1$ For these cases, the expression of $f$ is as follows, see Fig. \[integrate\_figures\_Keq1\][**e**]{}, $$f(x)=\left\{ \begin{array}{ll} \alpha, & x=0, \\ 1-\beta-\Omega+\Omega x, \quad & 0<x\le1. \end{array} \right.$$ The lower solution $\rho_l$ is given as $$\rho_l=-\frac{\delta}{2}+1-\beta+\Omega(x-1).$$ One can easily show that $\rho_l(0)=-\frac{\delta}{2}+1-\beta-\Omega<\alpha$, $\rho_l(1)=-\frac{\delta}{2}+1-\beta<1-\beta$. By substituting $\rho_l$ into Eq. (\[OriginalPro\]), we have $L\rho_l=0$. Therefore, $\rho_l$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is a lower solution of Eq. (\[OriginalPro\]). The upper solution $\rho_u$ is given by $$\rho_u=w_u+\Omega x.$$ Where $w_u$ is the solution of the following ODE, $$\label{equationwu} \frac{\epsilon}{2}\partial_xw_u=-[w_u-(1-\beta-\Omega)][w_u-(\beta+\Omega)],\quad w_u(0)=\alpha.$$ One can easily verify that $\rho_u(0)=w_u(0)=\alpha$, and $\rho_u(1)=w_u(1)+\Omega>1-\beta-\Omega+\Omega=1-\beta$. By substituting $\rho_u$ into Eq. (\[OriginalPro\]), we obtain $L\rho_u=2\Omega x\partial_xw_u\le0$. 
Therefore, $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is hence an upper solution. For these cases, $\rho_l$ is independent of $\epsilon$, and $\hat{\rho}_u=\lim_{\epsilon\to 0}\rho_u=f$. For $\delta$ small enough, $\rho_l$ can be arbitrarily close to $f$, and $\rho_u>\hat{\rho}_u>\rho_l$ $\forall \epsilon>0$. The above analyses show that $\rho_u$ and $\rho_l$ satisfy the conditions in Lemma \[hatwlemma\], therefore, for these special cases, [*i.e.*]{}, for $\alpha>\beta+\Omega$, $\beta<0.5-\Omega$, and $\alpha+\beta+\Omega>1$, there exists a $W^{1,2}(0,1)$ weak solution $\rho_s$ of Eq. (\[OriginalPro\]) which tends to $f$ as $\epsilon\to 0$.

### For cases $\alpha>0.5$, and $\beta>0.5$

For these cases, the expression of $f$ is as follows, $$f(x)=\left\{ \begin{array}{ll} \alpha, & x=0, \\ 0.5, & 0< x< 1, \\ 1-\beta, & x=1. \end{array} \right.$$ The upper solution $\rho_u$ can be given by the following ODE, $$\label{equationwu} \frac{\epsilon}{2}\partial_x\rho_u=-[\rho_u-(0.5+\frac{\delta}{2})][\rho_u-(0.5-\frac{\delta}{2})],\quad \rho_u(0)=\alpha,$$ where $\delta$ satisfies $0.5+\frac{\delta}{2}<\alpha$. Similarly, the lower solution $\rho_l$ can be obtained from the following ODE, $$\label{equationwl} \frac{\epsilon}{2}\partial_x\rho_l=-[\rho_l-(0.5+\frac{\delta}{2})][\rho_l-(0.5-\frac{\delta}{2})],\quad \rho_l(1)=1-\beta,$$ where $\delta$ satisfies $0.5-\frac{\delta}{2}>1-\beta$. See Fig. \[integrate\_figures\_Keq1\][**f**]{} for examples of $f$, $\rho_u$, and $\rho_l$ for these cases. One can easily show that $\rho_u(0)=\alpha$ and $\rho_u(1)>0.5+\frac{\delta}{2}>1-\beta$. By substituting $\rho_u$ into Eq. (\[OriginalPro\]), we obtain $L\rho_u=-\Omega (2\rho_u-1)<-\Omega\delta<0$. Therefore, $\rho_u$ is an upper solution of Eq. (\[OriginalPro\]) (see Lemma \[upperlowersufficientcondition\]). By similar methods, one can verify that $\rho_l$ is a lower solution of Eq. (\[OriginalPro\]).
One can also verify that, for $\delta$ small enough, $\hat{\rho}_u=\lim_{\epsilon\to 0}\rho_u$ and $\hat{\rho}_l=\lim_{\epsilon\to 0}\rho_l$ can be arbitrarily close to $f$, and $\rho_u>\hat{\rho}_u>\hat{\rho}_l>\rho_l$ $\forall\, \epsilon>0$. Therefore, the conditions in Lemma \[hatwlemma\] are all satisfied. So, for these special cases, [*i.e.*]{}, for $\alpha>0.5$, and $\beta>0.5$, there exists a $W^{1,2}(0,1)$ weak solution $\rho_s$ of Eq. (\[OriginalPro\]) which tends to $f$ as $\epsilon\to 0$.

Construction of upper and lower weak solutions of Eq. (\[ellipticequationintroduction\]): for general cases $\Omega_A=K\Omega_D$ with $K>1$ {#SecConstructionGeneral}
-------------------------------------------------------------------------------------------------------------------------------------------

For these general cases, we have $$\label{OriginalProKneq1} \left\{ \begin{array}{ll} L\rho=\frac{\epsilon}{2}\partial_x^2\rho+(2\rho-1)\partial_x\rho-(K+1)\Omega_D\rho+K\Omega_D=0,\quad & x\in(0,1),\\ \rho(0)=\alpha, &\\ \rho(1)=1-\beta. & \end{array} \right.$$ For convenience in the construction of $f$, $\rho_u$, and $\rho_l$, we introduce a function $u$ which satisfies the following equation, $$\label{definitionofudiscription} (2u-1)\partial_xu-(K+1)\Omega_Du+K\Omega_D=0.$$ It is easy to find that $1/2$ and ${K}/{(K+1)}$ are two critical points of $u$, and $u$ has the following properties,

- With boundary condition $u(0)<1/2$, $u$ increases with $x$ and exists in the interval $[0,x_1)$, where $u^-(x_1)=1/2$.

- With boundary condition $1/2<u(0)<K/(K+1)$, $u$ decreases with $x$ and exists in the interval $[0,x_2)$, where $u^-(x_2)=1/2$.

- With boundary condition $u(1)\ge K/(K+1)$, $u$ increases with $x$, and tends to $K/(K+1)$ as $x\to -\infty$.

The solution of Eq.
(\[definitionofudiscription\]) can be expressed implicitly as follows, $$\label{definitionofudiscription3} x+C=\frac{2u}{(K+1)\Omega_D}+(K-1)\frac{\log|(K+1)\Omega_Du-K\Omega_D|}{(K+1)^2\Omega_D},$$ where $C$ is a constant determined by the boundary condition. In the remainder of this subsection, we construct the upper solution $\rho_u$ and the lower solution $\rho_l$ of Eq. (\[OriginalProKneq1\]) for eleven different cases, which cover all the possible cases of Eq. (\[OriginalProKneq1\]); see Table 1 in [@Zhang2012]. To describe the properties of the solution $\rho$ of Eq. (\[OriginalProKneq1\]) conveniently, we introduce the following acronyms:

- BL$_l^+$ — $\rho$ has a [**B**]{}oundary [**L**]{}ayer at the [**l**]{}eft boundary $x=0$, and in the boundary layer $\rho$ increases ($+$) with $x$; similarly, we have BL$_l^-$, BL$_r^+$, and BL$_r^-$;

- LD — the solution satisfies $\rho<1/2$, or physically the particle density is in the [**L**]{}ow [**D**]{}ensity phase;

- HD — $\rho>K/(K+1)$, or the particle density is in the [**H**]{}igh [**D**]{}ensity phase;

- MD — $1/2<\rho<K/(K+1)$, or the particle density is in the [**M**]{}edium [**D**]{}ensity phase;

- DW — a [**D**]{}omain [**W**]{}all appears in the interval $(0, 1)$; the domain wall is the boundary between low density and high density (or medium density), i.e., at the left side of the domain wall $\rho<1/2$, while at the right side $\rho>1/2$.

### For cases [LD+BL$_r^+$]{}

Let $u_0$ be the solution of the following ODE, $$(2u_0-1)\partial_xu_0-(K+1)\Omega_Du_0+K\Omega_D=0,\quad u_0(0)=\alpha.$$ If $\alpha<1/2$ and $1-u_0(1)>1-\beta>u_0(1)$, then the limit solution $f$ is (see Fig. \[integrate\_figures\_Kneq1\_1to6\][**a**]{}) $$f(x)=\left\{ \begin{array}{ll} u_0(x), & x<1, \\ 1-\beta, & x=1. \end{array} \right.$$ For these cases, the small $\epsilon$ limit of the solution $\rho$ of Eq.
(\[OriginalProKneq1\]) is in low density (LD) phase and has right boundary layer (BL$_r^+$). The upper solution $\rho_u$ for these cases can be given as follows, $$\rho_u=w+u_\delta-A,$$ where $u_\delta$ satisfies $$\label{EQu} (2u_\delta-1)\partial_xu_\delta-(K+1)\Omega'_Du_\delta+K\Omega'_D=0,\quad u_\delta(0)=\alpha+\delta,$$ with constant $\Omega'_D>\Omega_D$, and $w$ satisfies $$\label{EQw} \frac{\epsilon}{2}\partial_xw=-(w-A)(w-(1-A)),\quad w(1)=1-\beta+\delta_2,$$ with constant $A=\delta_1+u_\delta(1)$. Where we assume $\delta_1, \delta_2$ satisfy $\delta_1<\delta_2$, and $u_\delta(1)+\delta_1+\delta_2<\beta$ (or equivalently $1-A>w(1)$). We claim that $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\]. Substituting $\rho_u$ into (\[OriginalProKneq1\]), and using Eqs. (\[EQu\],\[EQw\]), we have $$\label{EQrhou} L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta-\frac{(K-1)\Omega'_D}{2u_\delta-1}(w-A)+2(u_\delta-A)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_\delta+w-A)-K).$$ It can be verified that $u_\delta$ and $w$ are both increasing functions, so $u_\delta(x)-A\le u_\delta(1)-A=-\delta_1<0$, and $\partial_xw=-\frac{2}{\epsilon}(w-A)(w-(1-A))\ge \frac{2}{\epsilon}(1-A-w(1))(w-A)$. From $u_\delta(x)-A\le -\delta_1$, we obtain that for $\epsilon$ small enough, $$-\frac{(K-1)\Omega'_D}{2u_\delta-1}(w-A)+2(u_\delta-A)\partial_xw\le -\frac{(K-1)\Omega'_D}{2u_\delta-1}(w-A)-2\delta_1\frac{2}{\epsilon}(1-A-w(1))(w-A)<0.$$ For the other two terms in Eq. (\[EQrhou\]), we have the following two cases, - If $w\le \frac{1}{2}$, then $u_\delta+w-A=u_\delta+w-(\delta_1+u_\delta(1))=w-\delta_1+(u_\delta-u_\delta(1))<1/2-\delta_1$. So $(\Omega'_D-\Omega_D)((K+1)(u_\delta+w-A)-K)\le (\Omega'_D-\Omega_D)((K+1)(-\delta_1+\frac{1}{2})-K)<0$. Since $\partial^2_xu_\delta$ is a bounded function, we have $$\frac{\epsilon}{2}\partial^2_xu_\delta+(\Omega'_D-\Omega_D)((K+1)(u_\delta+w-A)-K)<0,$$ for $\epsilon$ small enough. 
- If $w> \frac{1}{2}$, then $\lim_{\epsilon\to 0}\partial_xw=\lim_{\epsilon\to 0}-\frac{2}{\epsilon}(w-A)(w-(1-A))\ge \lim_{\epsilon\to 0}\frac{2}{\epsilon}(1-A-w(1))(\frac{1}{2}-A)\to\infty$. Since $u_\delta(x)-A\le -\delta_1$, and the other terms are bounded, we have $$\frac{\epsilon}{2}\partial^2_xu_\delta-\frac{(K-1)\Omega'_D}{2u_\delta-1}(w-A)+2(u_\delta-A)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_\delta+w-A)-K)<0,$$ for $\epsilon$ small enough. Therefore, $L\rho_u<0$ holds in either case. Meanwhile, one can verify that $\rho_u(0)=w(0)+u_\delta(0)-A>\alpha+\delta>\alpha$ and $\rho_u(1)=w(1)+u_\delta(1)-A=\delta_2-\delta_1+(1-\beta)>(1-\beta)$. In summary, $\rho_u$ satisfies all the conditions in Lemma \[upperlowersufficientcondition\], and thus is an upper solution of Eq. (\[OriginalProKneq1\]). The lower solution $\rho_l$ can be given by the following ODE, $$(2\rho_l-1)\partial_x\rho_l-(K+1)\Omega_D\rho_l+K\Omega_D=0,\ \rho_l(0)=\alpha-\delta.$$ Substituting $\rho_l$ into Eq. (\[OriginalProKneq1\]), one can easily show that $$L\rho_l=\frac{\epsilon}{2}\partial^2_x\rho_l>0.$$ Meanwhile, $\rho_l(0)=\alpha-\delta<\alpha$ and $\rho_l(1)<u_0(1)<1-\beta$. Thus, $\rho_l$ satisfies all the conditions in Lemma \[upperlowersufficientcondition\], and is a lower solution of Eq. (\[OriginalProKneq1\]). Finally, for these cases, $\rho_l$ is independent of the parameter $\epsilon$. For $\delta$, $\delta_2$, $\Omega'_D-\Omega_D$ small enough, both $\hat{\rho}_u=\lim_{\epsilon\to 0}\rho_u$ and $\rho_l$ can be arbitrarily close to $f$, and $\rho_u>\hat{\rho}_u>\rho_l$ holds $\forall\, \epsilon>0$. So the first and third conditions in Lemma \[hatwlemma\] are also satisfied. Then the results of Lemma \[hatwlemma\] give that there exists a $W^{1,2}(0,1)$ weak solution $\rho$ of Eq. (\[OriginalProKneq1\]) which tends to $f$ as $\epsilon\to 0$.
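The outer profile $u_0$ used throughout this subsection never needs to be written explicitly: it can be evaluated pointwise from the implicit relation (\[definitionofudiscription3\]) by bisection, since the left-hand side of that relation is strictly monotone in $u$ on the LD branch $u<1/2$. The following Python sketch (illustrative parameters, chosen so that $u_0$ exists on $[0,1]$; not part of the proof) does this and checks the reduced ODE by finite differences:

```python
import math

# Evaluating the reduced (eps -> 0) ODE solution u_0 of
#   (2u - 1) u' - (K+1) Omega_D u + K Omega_D = 0,  u(0) = alpha,
# from its implicit form
#   F(u) = 2u/((K+1) Omega_D)
#          + (K-1) log|(K+1) Omega_D u - K Omega_D| / ((K+1)^2 Omega_D) = x + C,
# by bisection on the LD branch u < 1/2, where F is strictly increasing.
# Parameter values are illustrative only.
K, Omega_D, alpha = 2.0, 0.08, 0.2
a, b = (K + 1) * Omega_D, K * Omega_D

def F(u):
    return 2 * u / a + (K - 1) * math.log(abs(a * u - b)) / ((K + 1)**2 * Omega_D)

C = F(alpha)                     # fixed by the boundary condition u(0) = alpha

def u0(x, lo=1e-9, hi=0.5 - 1e-9):
    # bisection: find u in (0, 1/2) with F(u) = x + C
    for _ in range(200):
        mid = (lo + hi) / 2
        if F(mid) < x + C:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

assert abs(u0(0.0) - alpha) < 1e-8
assert u0(1.0) < 0.5             # LD phase: u_0 stays below 1/2 on [0, 1]
# finite-difference check that u0 satisfies the reduced ODE at an interior point
h, x = 1e-6, 0.5
du = (u0(x + h) - u0(x - h)) / (2 * h)
u = u0(x)
assert abs((2 * u - 1) * du - a * u + b) < 1e-4
```

The same bisection idea works on the other monotone branches of $u$, with the bracketing interval adjusted accordingly.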
### For cases [LD+BL$_r^-$]{}

As in the previous subsection, let $u_0$ be the solution of the following ODE, $$(2u_0-1)\partial_xu_0-(K+1)\Omega_Du_0+K\Omega_D=0,\quad u_0(0)=\alpha.$$ If $\alpha<1/2$ and $1-\beta<u_0(1)$, then the limit solution $f$ is (see Fig. \[integrate\_figures\_Kneq1\_1to6\][**b**]{}) $$f(x)=\left\{ \begin{array}{ll} u_0, & x<1, \\ 1-\beta, & x=1. \end{array} \right.$$ For these cases, the small $\epsilon$ limit of the solution $\rho$ of Eq. (\[OriginalProKneq1\]) is in the low density (LD) phase and also has a right boundary layer, but $\rho$ decreases in the boundary layer (BL$_r^-$). The upper solution $\rho_u$ for these cases can be given as follows, $$(2\rho_u-1)\partial_x\rho_u-(K+1)\Omega'_D\rho_u+K\Omega'_D=0,\quad \rho_u(0)=\alpha+\delta,$$ with $\Omega'_D>\Omega_D$. By substituting $\rho_u$ into Eq. (\[OriginalProKneq1\]), we find that, for $\epsilon$ small enough, $$\begin{aligned} L\rho_u=\frac{\epsilon}{2}\partial^2_x\rho_u+(\Omega'_D-\Omega_D)((K+1)\rho_u-K)<0.\end{aligned}$$ Meanwhile, one can easily show that $\rho_u(0)=\alpha+\delta>\alpha$, and $\rho_u(1)>u_0(1)>1-\beta$. Thus, $\rho_u$ is an upper solution of Eq. (\[OriginalProKneq1\]). The lower solution $\rho_l$ can be given by $$\rho_l=w+u_{-\delta}-A,$$ where $u_{-\delta}$ is the solution of the following ODE $$\label{EQu1} (2u_{-\delta}-1)\partial_xu_{-\delta}-(K+1)\Omega_Du_{-\delta}+K\Omega_D=0,\quad u_{-\delta}(0)=\alpha-\delta,$$ the constant $A=\delta_1+u_{-\delta}(1)$, and $w$ satisfies the following ODE, $$\label{EQw1} \frac{\epsilon}{2}\partial_xw=-(w-A)(w-(1-A)),\quad w(1)=1-\beta.$$ Here, we assume $A<u_0(1)$ and $u_{-\delta}(1)>1-\beta$, which hold when $\delta,\delta_1$ are both small enough. In the following discussion, we show that $\rho_l$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\]. Substituting $\rho_l$ into Eq. (\[OriginalProKneq1\]), and using Eqs.
(\[EQu1\],\[EQw1\]), we have $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}-\frac{(K-1)\Omega_D}{2u_{-\delta}-1}(w-A)+2(u_{-\delta}-A)\partial_xw.$$ Since $u_{-\delta}$ is an increasing function in the interval \[0, 1\], we have $u_{-\delta}(x)-A\le u_{-\delta}(1)-A=-\delta_1<0$. At the same time, $\partial_xw=-\frac{2}{\epsilon}(w-A)(w-(1-A))\le \frac{2}{\epsilon}(1-2A)(w-A)$. Therefore, for $\epsilon$ small enough, we have $$-\frac{(K-1)\Omega_D}{2u_{-\delta}-1}(w-A)+2(u_{-\delta}-A)\partial_xw\ge -\frac{(K-1)\Omega_D}{2u_{-\delta}-1}(w-A)-2\delta_1\frac{2}{\epsilon}(1-2A)(w-A)>0.$$ Since $\frac{\epsilon}{2}\partial^2_xu_{-\delta}>0$, we have $L\rho_l>0$. One can also easily show that $\rho_l(0)=w(0)+u_{-\delta}(0)-A<\alpha-\delta<\alpha$ and $\rho_l(1)=w(1)+u_{-\delta}(1)-A<1-\beta$. Thus, $\rho_l$ is a lower solution of Eq. (\[OriginalProKneq1\]). For these cases, $\rho_u$ is independent of $\epsilon$. For $\delta$, $\Omega'_D-\Omega_D$ small enough, $\hat{\rho}_l=\lim_{\epsilon\to 0}\rho_l$ and $\rho_u$ can be arbitrarily close to $f$, and $\rho_u>\hat{\rho}_l>\rho_l$ $\forall \epsilon>0$. So the first and third conditions in Lemma \[hatwlemma\] are also satisfied. Hence the existence of the solution $\rho$ of Eq. (\[OriginalProKneq1\]) is obtained from Lemma \[hatwlemma\].

### For cases [BL$_l^+$+HD]{}

Let $u_0$ be the solution of the following ODE, $$(2u_0-1)\partial_xu_0-(K+1)\Omega_Du_0+K\Omega_D=0,\quad u_0(1)=1-\beta.$$ If $1-\beta>{K}/{(K+1)}$ and $1-u_0(0)<\alpha<u_0(0)$, then the limit solution $f$ is (see Fig. \[integrate\_figures\_Kneq1\_1to6\][**c**]{}) $$f(x)=\left\{ \begin{array}{ll} \alpha, & x=0, \\ u_0, & 0<x\le1. \end{array} \right.$$ For these cases, the small $\epsilon$ limit solution $\rho$ of Eq. (\[OriginalProKneq1\]) is in the high density (HD) phase and has a left boundary layer, and $\rho$ increases in the boundary layer (BL$_l^+$).
The upper solution $\rho_u$ can be given by the following ODE, $$(2\rho_u-1)\partial_x\rho_u-(K+1)\Omega'_D\rho_u+K\Omega'_D=0,\quad \rho_u(1)=1-\beta+\delta,$$ where $\Omega'_D<\Omega_D$ is a constant. By substituting $\rho_u$ into Eq. (\[OriginalProKneq1\]), we have that, for $\epsilon$ small enough, $$\begin{aligned} L\rho_u=\frac{\epsilon}{2}\partial^2_x\rho_u+(\Omega'_D-\Omega_D)((K+1)\rho_u-K)<0.\end{aligned}$$ Meanwhile, one can verify that $\rho_u(0)>u_0(0)>\alpha$. $\rho_u(1)=1-\beta+\delta>1-\beta$. Thus, $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and therefore is an upper solution of Eq. (\[OriginalProKneq1\]). The lower solution $\rho_l$ can be given by $$\rho_l=w+u_{-\delta}-A.$$ Where $u_{-\delta}$ is the solution of $$\label{EQu2} (2u_{-\delta}-1)\partial_xu_{-\delta}-(K+1)\Omega_Du_{-\delta}+K\Omega_D=0,\ u_{-\delta}(1)=1-\beta-\delta,$$ constant $A=-\delta_1+u_{-\delta}(0)$, and $w$ is the solution of $$\label{EQw2} \frac{\epsilon}{2}\partial_xw=-(w-A)(w-(1-A)),\quad w(0)=\alpha-\delta_2.$$ Here we assume $\delta_1<\delta_2$ and $w(0)>1-A$, which will be satisfied if $\delta,\delta_1,\delta_2$ are small enough. Substituting $\rho_l$ into (\[OriginalProKneq1\]), and using Eqs. (\[EQu2\],\[EQw2\]), we have $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}-\frac{(K-1)\Omega_D}{2u_{-\delta}-1}(w-A)+2(u_{-\delta}-A)\partial_xw.$$ Since $u_{-\delta}$ is an increasing function, we have $u_{-\delta}(x)-A\ge u_{-\delta}(0)-A=\delta_1>0$. Meanwhile, one can show that $-\frac{(K-1)\Omega_D}{2u_{-\delta}-1}(w-A)>0$, $\partial_xw>0$, and $\frac{\epsilon}{2}\partial^2_xu_{-\delta}>0$. So $$\frac{\epsilon}{2}\partial^2_xu_{-\delta}-\frac{(K-1)\Omega_D}{2u_{-\delta}-1}(w-A)+2(u_{-\delta}-A)\partial_xw>0.$$ At the same time, $\rho_l(0)=w(0)+u_{-\delta}(0)-A=\alpha+\delta_1-\delta_2<\alpha$, and $\rho_l(1)=w(1)+u_{-\delta}(1)-A<1-\beta-\delta<1-\beta$. 
Thus, $\rho_l$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and consequently, is a lower solution of Eq. (\[OriginalProKneq1\]). For these cases, $\rho_u$ is independent of $\epsilon$. For $\delta$, $\delta_2$, $\Omega_D-\Omega'_D$ small enough, $\hat{\rho}_l=\lim_{\epsilon\to 0}\rho_l$ and $\rho_u$ can be arbitrarily close to $f$, and $\rho_u>\hat{\rho}_l>\rho_l$ $\forall \epsilon>0$. So the first and third conditions in Lemma \[hatwlemma\] are also satisfied. Hence the existence of the solution $\rho$ of Eq. (\[OriginalProKneq1\]) is obtained.

### For cases [BL$_l^-$+HD]{}

Let $u_0$ be the solution of the following ODE, $$(2u_0-1)\partial_xu_0-(K+1)\Omega_Du_0+K\Omega_D=0,\ u_0(1)=1-\beta.$$ If $1-\beta>{K}/{(K+1)}$ and $\alpha>u_0(0)$, the limit solution $f$ is (see Fig. \[integrate\_figures\_Kneq1\_1to6\][**d**]{}) $$f(x)=\left\{ \begin{array}{ll} \alpha, & x=0, \\ u_0, & 0<x\le1. \end{array} \right.$$ For these cases, the small $\epsilon$ limit solution of Eq. (\[OriginalProKneq1\]) is in the high density (HD) phase and has a left boundary layer, and $\rho$ decreases in the boundary layer (BL$_l^-$). The upper solution $\rho_u$ can be given as follows, $$\rho_u=w+u_\delta-A,$$ where $u_\delta$ is the solution of the following ODE, $$\label{EQu3} (2u_\delta-1)\partial_xu_\delta-(K+1)\Omega'_Du_\delta+K\Omega'_D=0,\quad u_\delta(1)=1-\beta+\delta,$$ the constant $A=-\delta_1+u_\delta(0)$, and $w$ satisfies $$\label{EQw3} \frac{\epsilon}{2}\partial_xw=-(w-A)(w-(1-A)),\quad w(0)=\alpha.$$ Here we assume $\Omega'_D<\Omega_D$, and $A<\alpha$; the latter can be satisfied if $\delta$ is small enough. Substituting $\rho_u$ into (\[OriginalProKneq1\]), and using Eqs. (\[EQu3\],\[EQw3\]), we have $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta-\frac{(K-1)\Omega'_D}{2u_\delta-1}(w-A)+2(u_\delta-A)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_\delta+w-A)-K).$$ Since $u_\delta$ is an increasing function, $u_\delta(x)-A\ge u_\delta(0)-A=\delta_1>0$.
Meanwhile, one can easily show that $-\frac{(K-1)\Omega'_D}{2u_\delta-1}(w-A)<0$, $\partial_xw<0$, and $(\Omega'_D-\Omega_D)((K+1)(u_\delta+w-A)-K)<0$. Therefore, for $\epsilon$ small enough, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta-\frac{(K-1)\Omega'_D}{2u_\delta-1}(w-A)+2(u_\delta-A)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_\delta+w-A)-K)<0.$$ At the same time, $\rho_u(0)=w(0)+u_\delta(0)-A>\alpha+\delta_1>\alpha$. $\rho_u(1)=w(1)+u_\delta(1)-A>u_\delta(1)=\delta+(1-\beta)>(1-\beta)$. Thus, $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and hence is an upper solution of Eq. (\[OriginalProKneq1\]). The lower solution $\rho_l$ can be obtained by the following ODE, $$(2\rho_l-1)\partial_x\rho_l-(K+1)\Omega_D\rho_l+K\Omega_D=0,\ \rho_l(1)=1-\beta-\delta.$$ Substituting $\rho_l$ into Eq. (\[OriginalProKneq1\]), we have $L\rho_l=\frac{\epsilon}{2}\partial^2_x\rho_l>0$. Meanwhile, $\rho_l(0)<u_0(0)<\alpha$, and $\rho_l(1)=1-\beta-\delta<1-\beta$. Thus, $\rho_l$ is a lower solution of Eq. (\[OriginalProKneq1\]). For these cases, $\rho_l$ is independent of $\epsilon$. For $\delta$, $\Omega-\Omega'$ small enough, $\hat{\rho}_u=\lim_{\epsilon\to 0}\rho_u$ and $\rho_l$ can be close to $f$ arbitrarily, and $\rho_u>\hat{\rho}_u>\rho_l$ $\forall \epsilon>0$. So the first and third conditions in Lemma \[hatwlemma\] are also satisfied. Hence the existence of solution $\rho$ of Eq. (\[OriginalProKneq1\]) is obtained. ### For cases [BL$_l^+$+MD]{} Let $u_0$ be the solution of the following ODE, $$(2u_0-1)\partial_xu_0-(K+1)\Omega_Du_0+K\Omega_D=0,\quad u_0(1)=1-\beta.$$ If $\frac{1}{2}<1-\beta<\frac{K}{K+1}$ and $1-u_0(0)<\alpha<u_0(0)$, the limit solution $f$ is (see Fig. \[integrate\_figures\_Kneq1\_1to6\][**e**]{}) $$f(x)=\left\{ \begin{array}{ll} \alpha, & x=0, \\ u_0, & 0<x\le1. \end{array} \right.$$ For these cases, the small $\epsilon$ limit solution $\rho$ of Eq. 
(\[OriginalProKneq1\]) is in medium density (MD) phase, and has left boundary layer (BL$_l^+$). The upper solution $\rho_u$ can be given by the following equation, $$(2\rho_u-1)\partial_x\rho_u-(K+1)\Omega_D\rho_u+K\Omega_D=0,\ \rho_u(1)=1-\beta+\delta.$$ Substituting $\rho_u$ into Eq. (\[OriginalProKneq1\]), we have $L\rho_u=\frac{\epsilon}{2}\partial^2_x\rho_u<0$. Meanwhile, $\rho_u(0)>u_0(0)>\alpha$, and $\rho_u(1)=1-\beta+\delta>1-\beta$. Thus, $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is an upper solution. The lower solution $\rho_l$ can be given by $$\rho_l=w+u_{-\delta}-A.$$ Where $u_{-\delta}$ is the solution of the following ODE $$\label{EQu4} (2u_{-\delta}-1)\partial_xu_{-\delta}-(K+1)\Omega'_Du_{-\delta}+K\Omega'_D=0,\quad u_{-\delta}(1)=1-\beta-\delta,$$ with constant $\Omega'_D<\Omega_D$, $A=-\delta_1+u_{-\delta}(0)$, and $w$ satisfies $$\label{EQw4} \frac{\epsilon}{2}\partial_xw=-(w-A)(w-(1-A)),\quad w(0)=\alpha-\delta_2.$$ Here we assume $\delta_1<\delta_2$ and $w(0)>1-A$, which can be satisfied if $\delta,\delta_1,\delta_2,\Omega_D-\Omega'_D$ are small enough. Substituting $\rho_l$ into (\[OriginalProKneq1\]), and using Eqs. (\[EQu4\],\[EQw4\]), we have $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}-\frac{(K-1)\Omega'_D}{2u_{-\delta}-1}(w-A)+2(u_{-\delta}-A)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_{-\delta}+w-A)-K).$$ Since $u_{-\delta}(0)-A=\delta_1>0$, there exists a constant $\delta_3>0$, which is independent of $\epsilon$, such that $u_{-\delta}(x)-A>0$ for $x<\delta_3$. Meanwhile, for $x\ge \delta_3$, $\partial_xw\to 0$ uniformly (see subsection \[SubSecB\], together with the monotonicity of $\partial_x w$), and one can verify that $\left[-\frac{(K-1)\Omega_D}{2u_{-\delta}-1}(w-A)\right]>0$, $\partial_xw>0$, $(\Omega'_D-\Omega_D)((K+1)(u_{-\delta}+w-A)-K)>0$. 
Therefore, for $\epsilon$ small enough, $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}-\frac{(K-1)\Omega'_D}{2u_{-\delta}-1}(w-A)+2(u_{-\delta}-A)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_{-\delta}+w-A)-K)>0.$$ At the same time, $\rho_l(0)=w(0)+u_{-\delta}(0)-A=\alpha+\delta_1-\delta_2<\alpha$, $\rho_l(1)=w(1)+u_{-\delta}(1)-A<1-\beta-\delta<1-\beta$. Thus, $\rho_l$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is a lower solution of Eq. (\[OriginalProKneq1\]). For these cases, $\rho_u$ is independent of $\epsilon$. For $\delta$, $\delta_2$, $\Omega-\Omega'$ small enough, $\hat{\rho}_l=\lim_{\epsilon\to 0}\rho_l$ and $\rho_u$ can be close to $f$ arbitrarily, and $\rho_u>\hat{\rho}_l>\rho_l$ $\forall \epsilon>0$. So the first and third conditions in Lemma \[hatwlemma\] are also satisfied. Hence the existence of solution $\rho$ of Eq. (\[OriginalProKneq1\]) is obtained. ### For cases [BL$_l^-$+MD]{} Let $u_0$ be the solution of the following ODE, $$(2u_0-1)\partial_xu_0-(K+1)\Omega_Du_0+K\Omega_D=0,\quad u_0(1)=1-\beta.$$ If $\frac{1}{2}<1-\beta<\frac{K}{K+1}$ and $\alpha>u_0(0)$, the limit solution $f$ is (see Fig. \[integrate\_figures\_Kneq1\_1to6\][**f**]{}) $$f(x)=\left\{ \begin{array}{ll} \alpha, & x=0, \\ u_0, & 0<x\le1. \end{array} \right.$$ For these cases, the small $\epsilon$ limit solution $\rho$ of Eq. (\[OriginalProKneq1\]) is in medium density (MD) phase, and has left boundary layer (BL$_l^-$). The upper solution $\rho_u$ can be given by the following equation, $$\rho_u=w+u_\delta-A.$$ Where $u_\delta$ is the solution of the following ODE, $$\label{EQu5} (2u_\delta-1)\partial_xu_\delta-(K+1)\Omega_Du_\delta+K\Omega_D=0,\quad u_\delta(1)=1-\beta+\delta,$$ constant $A=u_0(1)$, and $w$ is the solution of the following ODE, $$\label{EQw5} \frac{\epsilon}{2}\partial_xw=-(w-A)(w-(1-A)),\quad w(0)=A+\alpha-u_\delta(0).$$ Substituting $\rho_u$ into (\[OriginalProKneq1\]), and using Eqs. 
(\[EQu5\],\[EQw5\]), we have $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta-\frac{(K-1)\Omega_D}{2u_\delta-1}(w-A)+2(u_\delta-A)\partial_xw.$$ Since $u_\delta$ is a decreasing function, $u_\delta(x)-A\ge u_\delta(1)-A=u_\delta(1)-u_0(1)=\delta>0$. One can verify that $-\frac{(K-1)\Omega_D}{2u_\delta-1}(w-A)<0$, $\partial_xw<0$, and $\frac{\epsilon}{2}\partial^2_xu_\delta<0$. Therefore, $$\frac{\epsilon}{2}\partial^2_xu_\delta-\frac{(K-1)\Omega_D}{2u_\delta-1}(w-A)+2(u_\delta-A)\partial_xw<0.$$ Meanwhile, $\rho_u(0)=w(0)+u_\delta(0)-A=A+\alpha-u_\delta(0)+u_\delta(0)-A=\alpha$, $\rho_u(1)=w(1)+u_\delta(1)-A>u_\delta(1)=\delta+(1-\beta)>(1-\beta)$. Thus, $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is an upper solution of Eq. (\[OriginalProKneq1\]). The lower solution $\rho_l$ can be given by the following ODE, $$(2\rho_l-1)\partial_x\rho_l-(K+1)\Omega'_D\rho_l+K\Omega'_D=0,\ \rho_l(1)=1-\beta-\delta,$$ where $\Omega'_D<\Omega_D$. By substituting $\rho_l$ into Eq. (\[OriginalProKneq1\]), we obtain that for $\epsilon$ small enough, $$\begin{aligned} L\rho_l=\frac{\epsilon}{2}\partial^2_x\rho_l+(\Omega'_D-\Omega_D)((K+1)\rho_l-K)>0.\end{aligned}$$ Meanwhile, $\rho_l(0)<u_0(0)<\alpha$, and $\rho_l(1)=1-\beta-\delta<1-\beta$. Thus, $\rho_l$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is a lower solution of Eq. (\[OriginalProKneq1\]). For these cases, $\rho_l$ is independent of $\epsilon$. For $\delta$, $\Omega_D-\Omega'_D$ small enough, $\hat{\rho}_u=\lim_{\epsilon\to 0}\rho_u$ and $\rho_l$ can be arbitrarily close to $f$, and $\rho_u>\hat{\rho}_u>\rho_l$ $\forall \epsilon>0$. So the first and third conditions in Lemma \[hatwlemma\] are also satisfied. Hence the existence of the solution $\rho$ of Eq. (\[OriginalProKneq1\]) is obtained.
### For cases [BL$_l^+$+MD+BL$_r^-$]{}

Let $u_0$ be the solution of the following ODE, $$(2u_0-1)\partial_xu_0-(K+1)\Omega_Du_0+K\Omega_D=0,\quad u_0(1)=\frac{1}{2}.$$ If $\beta>\frac{1}{2}$ and $1-u_0(0)<\alpha<u_0(0)$, the limit solution $f$ is (see Fig. \[integrate\_figures\_Kneq1\_7to11\][**a**]{}) $$f(x)=\left\{ \begin{array}{cc} \alpha, & x=0, \\ u_0, & 0<x<1, \\ 1-\beta, & x=1. \end{array} \right.$$ For these cases, the small $\epsilon$ limit solution $\rho$ of Eq. (\[OriginalProKneq1\]) is in the medium density (MD) phase, and has a left boundary layer (BL$_l^+$) and a right boundary layer (BL$_r^-$). The upper solution $\rho_u$ can be given by the following ODE, $$(2\rho_u-1)\partial_x\rho_u-(K+1)\Omega_D\rho_u+K\Omega_D=0,\quad \rho_u(1)=\frac{1}{2}+\delta.$$ Substituting $\rho_u$ into Eq. (\[OriginalProKneq1\]), we have $$\begin{aligned} L\rho_u=\frac{\epsilon}{2}\partial^2_x\rho_u+(2\rho_u-1)\partial_x\rho_u-(K+1)\Omega_D\rho_u+K\Omega_D=\frac{\epsilon}{2}\partial^2_x\rho_u<0.\end{aligned}$$ Meanwhile, $\rho_u(0)>u_0(0)>\alpha$ and $\rho_u(1)=\frac{1}{2}+\delta>1-\beta$. Thus, $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is an upper solution of Eq. (\[OriginalProKneq1\]). The lower solution $\rho_l$ can be given as follows, $$\rho_l=w+w_1+u^\epsilon_{-\delta}-A-\frac{1}{2},$$ where $u^\epsilon_{-\delta}$ is the solution of $$\label{EQu6} (2u^\epsilon_{-\delta}-1)\partial_xu^\epsilon_{-\delta}-(K+1)\Omega'_Du^\epsilon_{-\delta}+K\Omega'_D=0,\quad u^\epsilon_{-\delta}(1-\epsilon^{1/2})=u_0(1-\epsilon^{1/2}),$$ with $\Omega'_D<\Omega_D$, $w_1$ is the solution of $$\label{EQw61} \frac{\epsilon}{2}\partial_xw_1=-e(w_1-(\frac{1}{2}+\delta_1))(w_1-(\frac{1}{2}-\delta_1)),\quad w_1(1)=1-\beta-\delta_2,$$ with $0<e<1$, and the constant $A=-\delta_3-\delta_1+u^\epsilon_{-\delta}(0)$.
$w$ is the solution of $$\label{EQw6} \frac{\epsilon}{2}\partial_xw=-(w-A)(w-(1-A)),\quad w(0)=\alpha-\delta_4.$$ We assume that $\delta_3<\delta_4$ and $w(0)>1-A$, which can be satisfied if $\delta_1,\delta_3,\delta_4,\Omega_D-\Omega'_D$ are small enough. Let $u^\epsilon_{-\delta}(1)=0.5+\theta$. We discuss the relationship between $\theta$ and $\epsilon$ below. The function $u_0$ satisfies the following equation, $$\label{u0solve1} \frac{2}{(K+1)\Omega_D}u_0+\frac{K-1}{(K+1)^2\Omega_D}\log|(K+1)\Omega_Du_0-K\Omega_D|=x+D,$$ with $D$ a constant. Assuming $u_0(1-\epsilon^{1/2})=0.5+\delta'$, from Eq. (\[u0solve1\]) and the boundary condition $u_0(1)=0.5$ we obtain (keeping only the leading-order terms) $$\begin{aligned} \frac{2\delta'^2}{(K-1)\Omega_D}\sim \epsilon^{1/2}.\end{aligned}$$ The function $u^\epsilon_{-\delta}$ satisfies the following equation, $$\label{udesolve1} \frac{2}{(K+1)\Omega'_D}u^\epsilon_{-\delta}+\frac{K-1}{(K+1)^2\Omega'_D}\log|(K+1)\Omega'_Du^\epsilon_{-\delta}-K\Omega'_D|=x+E,$$ with $E$ a constant. By substituting $u^\epsilon_{-\delta}(1)=0.5+\theta$ into Eq. (\[udesolve1\]), and using $u^\epsilon_{-\delta}(1-\epsilon^{1/2})=0.5+\delta'$, we obtain (again keeping only the leading-order terms) $$\begin{aligned} \theta^2\sim \frac{(K-1)(\Omega_D-\Omega'_D)}{2}\epsilon^{1/2}.\end{aligned}$$ In the following discussion, we will show that $\rho_l$ is a lower solution of Eq. (\[OriginalProKneq1\]). By substituting $\rho_l$ into Eq. (\[OriginalProKneq1\]), and using Eqs.
(\[EQu6\],\[EQw61\],\[EQw6\]), we have $$\begin{aligned} L\rho_l&=&\frac{\epsilon}{2}\partial^2_xu^\epsilon_{-\delta}-\frac{(K-1)\Omega'_D}{2u^\epsilon_{-\delta}-1}(w+w_1-A-\frac{1}{2})+2(w+u^\epsilon_{-\delta}-A-\frac{1}{2}+(1-e)(w_1-\frac{1}{2}))\partial_xw_1 \nonumber \\ &&+2(w_1+u^\epsilon_{-\delta}-A-\frac{1}{2})\partial_xw+(\Omega'_D-\Omega_D)((K+1)(w+w_1+u^\epsilon_{-\delta}-A-\frac{1}{2})-K).\end{aligned}$$ One can verify that, for the last term, $$\begin{aligned} (\Omega'_D-\Omega_D)((K+1)(w+w_1+u^\epsilon_{-\delta}-A-\frac{1}{2})-K)\ge (\Omega'_D-\Omega_D)((K+1)(u^\epsilon_{-\delta}(0)-\delta_1)-K)>0.\end{aligned}$$ Since $w_1(0)+u^\epsilon_{-\delta}(0)-A-\frac{1}{2}\approx-\delta_1+u^\epsilon_{-\delta}(0)-A=\delta_3>0$, one can show that, for $\epsilon$ small enough, there is a $\delta_5$, which is independent of $\epsilon$, such that $w_1(x)+u^\epsilon_{-\delta}(x)-A-\frac{1}{2}>0$ for $x<\delta_5$, thereby $2(w_1+u^\epsilon_{-\delta}-A-\frac{1}{2})\partial_xw>0$ since $\partial_xw>0$. For $x\ge\delta_5$, $\lim_{\epsilon\to0}\partial_xw=0$ uniformly. 
Moreover, $$\begin{aligned} \frac{\epsilon}{2}\partial^2_xu^\epsilon_{-\delta}(1)=\frac{\epsilon\Omega'_D(K-1)[(K+1)\Omega'_Du^\epsilon_{-\delta}(1)-K\Omega'_D]}{16(u^\epsilon_{-\delta}(1)-\frac{1}{2})^3}\sim\frac{(\Omega'_D)^2[(K+1)u^\epsilon_{-\delta}(1)-K]\theta}{4(K-1)(\Omega_D-\Omega'_D)^2}.\end{aligned}$$ $$\begin{aligned} -\frac{(K-1)\Omega'_D}{2u^\epsilon_{-\delta}(1)-1}(w(1)+w_1(1)-A-\frac{1}{2})>\frac{(K-1)\Omega'_D}{2\theta}\delta_1>0.\end{aligned}$$ Thus, we can choose $\epsilon$ small enough such that $$\begin{aligned} \frac{\epsilon(\Omega'_D)^2(K-1)[(K+1)\frac{1}{2}-K]}{16\theta^3}+\frac{(K-1)\Omega'_D}{2\theta}\delta_1>0.\end{aligned}$$ Thereby, for $0<x<1$, if we assume $u^\epsilon_{-\delta}(x)=\frac{1}{2}+E(x)+\theta$ with $E(x)>0$, then $$\begin{aligned} &&\frac{\epsilon}{2}\partial^2_xu^\epsilon_{-\delta}(x)-\frac{(K-1)\Omega'_D}{2u^\epsilon_{-\delta}(x)-1}(w(x)+w_1(x)-A-\frac{1}{2})\\ &>&\frac{\epsilon(\Omega'_D)^2(K-1)[(K+1)\frac{1}{2}-K]}{16(\theta+E(x))^3}+\frac{(K-1)\Omega'_D}{2(\theta+E(x))}\delta_1\\ &=&\left(\frac{\epsilon(\Omega'_D)^2(K-1)[(K+1)\frac{1}{2}-K]}{16\theta^3}+\frac{(K-1)\Omega'_D}{2\theta}\delta_1\right)\frac{\theta^3}{(\theta+E(x))^3}\\ &&+\frac{(K-1)\Omega'_D}{2\theta}\delta_1\frac{\theta}{\theta+E(x)}\left(1-\frac{\theta^2}{(\theta+E(x))^2}\right)>0.\end{aligned}$$ Note that $w(1)+u^\epsilon_{-\delta}(1)-A-0.5+(1-e)(w_1(1)-0.5)\le\theta-(1-e)\delta_1<0$ for $\epsilon$ small enough. Thus, for $\epsilon$ small enough, there is a $\delta_6$ independent of $\epsilon$ such that $w(x)+u^\epsilon_{-\delta}(x)-A-0.5+(1-e)(w_1(x)-0.5)<0$ for $x>1-\delta_6$. Since $\partial_xw_1<0$, $2(w+u^\epsilon_{-\delta}-A-0.5+(1-e)(w_1-0.5))\partial_xw_1>0$. For $x\le1-\delta_6$, $\lim_{\epsilon\to0}\partial_xw_1=0$ uniformly. 
From the above analysis, we have $$\begin{aligned} L\rho_l&=&\frac{\epsilon}{2}\partial^2_xu^\epsilon_{-\delta}-\frac{(K-1)\Omega'_D}{2u^\epsilon_{-\delta}-1}(w+w_1-A-\frac{1}{2})+2(w+u^\epsilon_{-\delta}-A-\frac{1}{2}+(1-e)(w_1-\frac{1}{2}))\partial_xw_1\\ &&+2(w_1+u^\epsilon_{-\delta}-A-\frac{1}{2})\partial_xw+(\Omega'_D-\Omega_D)((K+1)(w+w_1+u^\epsilon_{-\delta}-A-\frac{1}{2})-K)>0.\end{aligned}$$ At the same time, $\rho_l(0)=w(0)+w_1(0)+u^\epsilon_{-\delta}(0)-A-0.5< \alpha-\delta_4+\delta_3<\alpha$, and $\rho_l(1)=w(1)+w_1(1)+u^\epsilon_{-\delta}(1)-A-0.5<\theta+1-\beta-\delta_2<1-\beta$ for $\epsilon$ small enough. Thus, $\rho_l$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is a lower solution of Eq. (\[OriginalProKneq1\]). For these cases, $\rho_u$ is independent of $\epsilon$. For $\delta$, $\delta_1$, $\delta_2$, $\delta_4$, and $\Omega_D-\Omega'_D$ small enough, $\hat{\rho}_l=\lim_{\epsilon\to 0}\rho_l$ and $\rho_u$ can be arbitrarily close to $f$, and $\rho_u>\hat{\rho}_l$ if $\delta_3<\delta_4$. Thus, for $\epsilon$ small enough, $\rho_u>\rho_l$. So the first and third conditions in Lemma \[hatwlemma\] are also satisfied. Hence the existence of a solution $\rho$ of Eq. (\[OriginalProKneq1\]) is obtained.

### For cases [BL$_l^-$+MD+BL$_r^-$]{}

Let $u_0$ be the solution of the following ODE, $$(2u_0-1)\partial_xu_0-(K+1)\Omega_Du_0+K\Omega_D=0,\quad u_0(1)=\frac{1}{2}.$$ If $\beta>\frac{1}{2}$ and $\alpha>u_0(0)$, the limit solution $f$ is (see Fig. \[integrate\_figures\_Kneq1\_7to11\][**b**]{}) $$f(x)=\left\{ \begin{array}{ll} \alpha, & x=0, \\ u_0, & 0<x<1, \\ 1-\beta, & x=1. \end{array} \right.$$ For these cases, the small $\epsilon$ limit solution $\rho$ of Eq. (\[OriginalProKneq1\]) is in the medium density (MD) phase, and has a left boundary layer (BL$_l^-$) and a right boundary layer (BL$_r^-$).
The upper solution $\rho_u$ can be given by the following equation, $$\rho_u=w+u_\delta-\frac{1}{2}.$$ Where $u_\delta$ is the solution of the following ODE, $$\label{EQu7} (2u_\delta-1)\partial_xu_\delta-(K+1)\Omega_Du_\delta+K\Omega_D=0,\quad u_\delta(1)=\frac{1}{2}+\delta,$$ and $w$ is the solution of $$\label{EQw7} \frac{\epsilon}{2}\partial_xw=-(w-\frac{1}{2})^2,\quad w(0)=\frac{1}{2}+\alpha-u_\delta(0).$$ By substituting $\rho_u$ into Eq. (\[OriginalProKneq1\]), and using Eqs. (\[EQu7\],\[EQw7\]), we obtain $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta-\frac{(K-1)\Omega_D}{2u_\delta-1}(w-\frac{1}{2})+2(u_\delta-\frac{1}{2})\partial_xw.$$ Since $u_\delta$ is a decreasing function, $u_\delta(x)-\frac{1}{2}\ge u_\delta(1)-\frac{1}{2}=\delta>0$. Meanwhile, $-\frac{(K-1)\Omega_D}{2u_\delta-1}(w-\frac{1}{2})<0$, $\partial_xw<0$, and $\frac{\epsilon}{2}\partial^2_xu_\delta<0$. Therefore, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta-\frac{(K-1)\Omega_D}{2u_\delta-1}(w-\frac{1}{2})+2(u_\delta-\frac{1}{2})\partial_xw<0.$$ At the same time, $\rho_u(0)=w(0)+u_\delta(0)-\frac{1}{2}=\frac{1}{2}+\alpha-u_\delta(0)+u_\delta(0)-\frac{1}{2}=\alpha$, and $\rho_u(1)=w(1)+u_\delta(1)-\frac{1}{2}>u_\delta(1)=\delta+\frac{1}{2}>(1-\beta)$. Thus, $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is an upper solution of Eq. (\[OriginalProKneq1\]). The lower solution $\rho_l$ can be given as follows, $$\rho_l=w_1+u^\epsilon_{-\delta}-\frac{1}{2}.$$ Where $u^\epsilon_{-\delta}$ is the solution of $$\label{EQu8} (2u^\epsilon_{-\delta}-1)\partial_xu^\epsilon_{-\delta}-(K+1)\Omega'_Du^\epsilon_{-\delta}+K\Omega'_D=0,\quad u^\epsilon_{-\delta}(1-\epsilon^{1/2})=u_0(1-\epsilon^{1/2}),$$ with $\Omega'_D<\Omega_D$. $w_1$ is the solution of $$\label{EQw8} \frac{\epsilon}{2}\partial_xw_1=-e(w_1-(\frac{1}{2}+\delta_1))(w_1-(\frac{1}{2}-\delta_1)),\quad w_1(1)=1-\beta-\delta_2,$$ with $0<e<1$. 
Assuming $u^\epsilon_{-\delta}(1)=0.5+\theta$, we first give the relation between $\theta$ and $\epsilon$. The solution $u_0$ satisfies the following equation, $$\label{u0solve} \frac{2}{(K+1)\Omega_D}u_0+\frac{K-1}{(K+1)^2\Omega_D}\log|(K+1)\Omega_Du_0-K\Omega_D|=x+D.$$ Let $u_0(1-\epsilon^{1/2})=0.5+\delta'$, and substitute it into Eq. (\[u0solve\]). From the boundary condition $u_0(1)=0.5$, and only keeping the leading order terms, we have $$\begin{aligned} \frac{2\delta'^2}{(K-1)\Omega_D}\sim \epsilon^{1/2}.\end{aligned}$$ Meanwhile, the solution $u^\epsilon_{-\delta}$ satisfies the following equation, $$\label{udesolve} \frac{2}{(K+1)\Omega'_D}u^\epsilon_{-\delta}+\frac{K-1}{(K+1)^2\Omega'_D}\log|(K+1)\Omega'_Du^\epsilon_{-\delta}-K\Omega'_D|=x+E.$$ Substituting $u^\epsilon_{-\delta}(1)=\frac{1}{2}+\theta$ into Eq. (\[udesolve\]), using $u^\epsilon_{-\delta}(1-\epsilon^{1/2})=\frac{1}{2}+\delta'$, and only keeping the leading order terms, we have $$\begin{aligned} \theta^2\sim \frac{(K-1)(\Omega_D-\Omega'_D)}{2}\epsilon^{1/2}.\end{aligned}$$ In the following discussion, we will show that $\rho_l$ is a lower solution of Eq. (\[OriginalProKneq1\]). Substituting $\rho_l$ into Eq. (\[OriginalProKneq1\]), and using Eq. 
(\[EQu8\],\[EQw8\]), we have $$\begin{aligned} L\rho_l&=&\frac{\epsilon}{2}\partial^2_xu^\epsilon_{-\delta}-\frac{(K-1)\Omega'_D}{2u^\epsilon_{-\delta}-1}(w_1-\frac{1}{2})+2(u^\epsilon_{-\delta}-\frac{1}{2}+(1-e)(w_1-\frac{1}{2}))\partial_xw_1 \nonumber\\ &&+(\Omega'_D-\Omega_D)((K+1)(w_1+u^\epsilon_{-\delta}-\frac{1}{2})-K).\end{aligned}$$ One can verify that, $$\begin{aligned} (\Omega'_D-\Omega_D)((K+1)(w_1+u^\epsilon_{-\delta}-\frac{1}{2})-K)\ge (\Omega'_D-\Omega_D)((K+1)(u^\epsilon_{-\delta}(0)-\delta_1)-K)>0.\end{aligned}$$ Meanwhile, $$\begin{aligned} \frac{\epsilon}{2}\partial^2_xu^\epsilon_{-\delta}(1)=\frac{\epsilon\Omega'_D(K-1)[(K+1)\Omega'_Du^\epsilon_{-\delta}(1)-K\Omega'_D]}{16\theta^3}\sim\frac{(\Omega'_D)^2[(K+1)u^\epsilon_{-\delta}(1)-K]\theta}{4(\Omega_D-\Omega'_D)^2(K-1)},\end{aligned}$$ and $$\begin{aligned} -\frac{(K-1)\Omega'_D}{2u^\epsilon_{-\delta}-1}(w_1-\frac{1}{2})>\frac{(K-1)\Omega'_D}{2\theta}\delta_1>0.\end{aligned}$$ Thus, $\epsilon$ can be chosen to be small enough such that $$\begin{aligned} \frac{\epsilon(\Omega'_D)^2(K-1)[(K+1)\frac{1}{2}-K]}{16\theta^3}+\frac{(K-1)\Omega'_D}{2\theta}\delta_1>0.\end{aligned}$$ For $0<x<1$, let $u^\epsilon_{-\delta}(x)=\frac{1}{2}+E(x)+\theta$ with $E(x)>0$. Then $$\begin{aligned} &&\frac{\epsilon}{2}\partial^2_xu^\epsilon_{-\delta}(x)-\frac{(K-1)\Omega'_D}{2u^\epsilon_{-\delta}(x)-1}(w_1(x)-\frac{1}{2})\\ &>&\frac{\epsilon(\Omega'_D)^2(K-1)[(K+1)\frac{1}{2}-K]}{16(\theta+E(x))^3}+\frac{(K-1)\Omega'_D}{2(\theta+E(x))}\delta_1\\ &=&(\frac{\epsilon(\Omega'_D)^2(K-1)[(K+1)\frac{1}{2}-K]}{16\theta^3}+\frac{(K-1)\Omega'_D}{2\theta}\delta_1)\frac{\theta^3}{(\theta+E(x))^3}\\ &&+\frac{(K-1)\Omega'_D}{2\theta}\delta_1\frac{\theta}{\theta+E(x)}(1-\frac{\theta^2}{(\theta+E(x))^2})>0.\end{aligned}$$ Note that $u^\epsilon_{-\delta}(1)-\frac{1}{2}+(1-e)(w_1(1)-\frac{1}{2})\le\theta-(1-e)\delta_1<0$. 
Thus, for $\epsilon$ small enough, there is a $\delta_6$, independent of $\epsilon$, such that $u^\epsilon_{-\delta}(x)-\frac{1}{2}+(1-e)(w_1(x)-\frac{1}{2})<0$ for $x>1-\delta_6$. Since $\partial_xw_1<0$, $2(u^\epsilon_{-\delta}-\frac{1}{2}+(1-e)(w_1-\frac{1}{2}))\partial_xw_1>0$. For $x\le1-\delta_6$, $\lim_{\epsilon\to0}\partial_xw_1=0$ uniformly. From the above analysis, we have $$\begin{aligned} L\rho_l&=&\frac{\epsilon}{2}\partial^2_xu^\epsilon_{-\delta}-\frac{(K-1)\Omega'_D}{2u^\epsilon_{-\delta}-1}(w_1-\frac{1}{2})+2(u^\epsilon_{-\delta}-\frac{1}{2}+(1-e)(w_1-\frac{1}{2}))\partial_xw_1\\ &&+(\Omega'_D-\Omega_D)((K+1)(w_1+u^\epsilon_{-\delta}-\frac{1}{2})-K)>0.\end{aligned}$$ At the same time, one can verify that for $\epsilon$ small enough, $\rho_l(0)=w_1(0)+u^\epsilon_{-\delta}(0)-\frac{1}{2}< u_0(0)-\delta_1<\alpha$, and $\rho_l(1)=w_1(1)+u^\epsilon_{-\delta}(1)-\frac{1}{2}=\theta+1-\beta-\delta_2<1-\beta$. Thus, $\rho_l$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is a lower solution of Eq. (\[OriginalProKneq1\]). Define $\hat{\rho}_l=\lim_{\epsilon\to 0}\rho_l$, $\hat{\rho}_u=\lim_{\epsilon\to 0}\rho_u$. For $\delta$, $\delta_1$, $\delta_2$, $\Omega_D-\Omega'_D$ small enough, $\hat{\rho}_l$ and $\hat{\rho}_u$ can be arbitrarily close to $f$, and $\rho_u>\hat{\rho}_u>\hat{\rho}_l$. Thus, for $\epsilon$ small enough, $\rho_u>\rho_l$. This implies that the first and third conditions in Lemma \[hatwlemma\] are also satisfied. Hence the existence of a solution $\rho$ of Eq. (\[OriginalProKneq1\]) is obtained.
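The scaling relation $\theta^2\sim\frac{(K-1)(\Omega_D-\Omega'_D)}{2}\epsilon^{1/2}$ used in this case rests on the leading-order expansion of the implicit integral $F(u)=x+\mathrm{const}$ near $u=\frac{1}{2}$. The first step of that expansion, $2\delta'^2/((K-1)\Omega_D)\sim\epsilon^{1/2}$, can be checked numerically by solving $F(\frac{1}{2}+\delta')=F(\frac{1}{2})-\epsilon^{1/2}$ directly; $K=2$ and $\Omega_D=1$ below are illustrative values, not parameters from the text.

```python
import numpy as np
from scipy.optimize import brentq

# Implicit integral of the outer ODE:
#   F(u) = 2u/((K+1) Om) + (K-1)/((K+1)^2 Om) * log|(K+1) Om u - K Om|.
# With u_0(1) = 1/2, the condition u_0(1 - eps^{1/2}) = 1/2 + d' reads
#   F(1/2 + d') - F(1/2) = -eps^{1/2},
# whose leading-order solution is d' ~ sqrt((K-1) Om / 2) * eps^{1/4}.
K, Om = 2.0, 1.0

def F(u):
    return 2*u/((K + 1)*Om) + (K - 1)/((K + 1)**2*Om)*np.log(abs((K + 1)*Om*u - K*Om))

eps = 1e-12
# F(1/2 + d) is strictly decreasing in d on (0, (K-1)/(2(K+1))), so the root is unique.
dprime = brentq(lambda d: F(0.5 + d) - F(0.5) + np.sqrt(eps),
                1e-15, (K - 1)/(2*(K + 1)) - 1e-9)
asym = np.sqrt((K - 1)*Om/2) * eps**0.25   # predicted leading-order scaling
assert abs(dprime/asym - 1) < 1e-2         # agreement up to higher-order terms
```

The relative discrepancy is of order $\delta'$ itself, consistent with the neglected higher-order terms in the logarithm expansion.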
### For cases [LD+DW+HD]{}

Let $u_0^1$ be the solution of the following ODE, $$(2u_0^1-1)\partial_xu_0^1-(K+1)\Omega_Du_0^1+K\Omega_D=0,\quad u_0^1(0)=\alpha,$$ and $u_0^2$ be the solution of $$(2u_0^2-1)\partial_xu_0^2-(K+1)\Omega_Du_0^2+K\Omega_D=0,\quad u_0^2(1)=1-\beta.$$ If $1-\beta>\frac{K}{K+1}$, $0<\alpha<0.5$, and there exists $0<x_0<1$ such that $u_0^1(x_0)+u_0^2(x_0)=1$, then the limit solution $f$ is as follows (see Fig. \[integrate\_figures\_Kneq1\_7to11\][**c**]{}), $$f(x)=\left\{ \begin{array}{ll} u_0^1, & 0\le x\le x_0, \\ u_0^2, & x_0<x\le1. \end{array} \right.$$ For these cases, the solution $\rho$ of Eq. (\[OriginalProKneq1\]) for small $\epsilon$ is in the low density (LD) phase on the interval $[0, x_0)$, and in the high density (HD) phase on the interval $(x_0,1]$. The boundary between the LD phase and the HD phase is the so-called domain wall (DW), which is located at $x_0$. Let $u_\delta^1$ be the solution of $$(2u_\delta^1-1)\partial_xu_\delta^1-(K+1)\Omega_Du_\delta^1+K\Omega_D=0,\quad u_\delta^1(0)=\alpha+\delta,$$ and $u_\delta^2$ be the solution of $$(2u_\delta^2-1)\partial_xu_\delta^2-(K+1)\Omega_Du_\delta^2+K\Omega_D=0,\quad u_\delta^2(1)=1-\beta+\delta.$$ Then there exists $0<x_\delta<x_0<1$ such that $u_\delta^1(x_\delta)+u_\delta^2(x_\delta)=1$ for $\delta$ small enough. In fact, according to the boundary conditions of the above two equations for $u_\delta^1$ and $u_\delta^2$, one can easily find that $u_\delta^1(x_0)+u_\delta^2(x_0)>1$. Meanwhile, since both $u_\delta^1(x)$ and $u_\delta^2(x)$ are increasing functions, for $\delta$ small enough, there exists $0<x_\delta<x_0$ which satisfies $u_\delta^1(x_\delta)+u_\delta^2(x_\delta)=1$. Define $w$ as the solution of $$\frac{\epsilon}{2}\partial_xw=-(w-A_1)(w-(1-A_1)),\quad w(x_\delta)=\frac{1}{2},$$ where $A_1=u_\delta^1(x_\delta)$.
Define $u_\delta^3$ as the solution of $$(2u_\delta^3-1)\partial_xu_\delta^3-(K+1)\Omega'_Du_\delta^3+K\Omega'_D=0,\quad u_\delta^3(x_\delta)=A_1-\delta_1,$$ with $\Omega'_D>\Omega_D$. Define $u_\delta^4$ as the solution of $$(2u_\delta^4-1)\partial_xu_\delta^4-(K+1)\Omega''_Du_\delta^4+K\Omega''_D=0,\quad u_\delta^4(x_\delta)=1-A_1-\delta_1,$$ with $\Omega''_D<\Omega_D$. Here, we choose $\Omega'_D-\Omega_D,\Omega_D-\Omega''_D,\delta_1$ small enough such that $u_\delta^3>u_0^1$, $u_\delta^4>u_0^2$. The upper solution of Eq. (\[OriginalProKneq1\]) can be given as follows, $$\rho_u=\left\{ \begin{array}{ll} w+u_\delta^3-A_1, & x\le x_\delta, \\ w+u_\delta^4-(1-A_1),\quad & x>x_\delta. \end{array} \right.$$ See dashed line in Fig. \[integrate\_figures\_Kneq1\_7to11\][**c**]{}. Substituting $\rho_u$ into Eq. (\[OriginalProKneq1\]), we obtain that, for $x\le x_\delta$, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^3-\frac{(K-1)\Omega'_D}{2u_\delta^3-1}(w-A_1)+2(u_\delta^3-A_1)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_\delta^3+w-A_1)-K),$$ and for $x>x_\delta$, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^4-\frac{(K-1)\Omega''_D}{2u_\delta^4-1}(w-(1-A_1))+2(u_\delta^4-(1-A_1))\partial_xw+(\Omega''_D-\Omega_D)((K+1)(u_\delta^4+w-(1-A_1))-K).$$ For $x\le x_\delta$, one can show that $2(u_\delta^3-A_1)\partial_xw\le \frac{4\delta_1}{\epsilon}(\frac{1}{2}-(1-A_1))(w-A_1)<0$. Thus, for $\epsilon$ small enough, $-\frac{(K-1)\Omega'_D}{2u_\delta^3-1}(w-A_1)+2(u_\delta^3-A_1)\partial_xw<0$. Meanwhile, $\partial^2_xu_\delta^3$ is bounded, and $(\Omega'_D-\Omega_D)((K+1)(u_\delta^3+w-A_1)-K)<0$ with a negative upper bound. So we have $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^3-\frac{(K-1)\Omega'_D}{2u_\delta^3-1}(w-A_1)+2(u_\delta^3-A_1)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_\delta^3+w-A_1)-K)<0.$$ For $x> x_\delta$, $u_\delta^4(x_\delta)-(1-A_1)=-\delta_1<0$. 
Thus, there exists a $\delta_2$, which is independent of $\epsilon$, such that for $x_\delta<x\le x_\delta+\delta_2$, $u_\delta^4(x)-(1-A_1)<0$ with a negative upper bound. [**(1)**]{} For $x\le x_\delta+\delta_2$, we have two different cases. [**(a)**]{} If $(1-A_1)-w\ge(u_\delta^4(x_\delta)-\frac{K}{K+1})/2=\gamma$, we have, since $u_\delta^4-(1-A_1)\le u_\delta^4(x_\delta+\delta_2)-(1-A_1)$, $$\begin{aligned} 2(u_\delta^4-(1-A_1))\partial_xw\le \frac{4(u_\delta^4(x_\delta+\delta_2)-(1-A_1))}{\epsilon}\gamma(\frac{1}{2}-A_1)\to -\infty.\end{aligned}$$ Since other terms are bounded, we have $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^4-\frac{(K-1)\Omega''_D}{2u_\delta^4-1}(w-(1-A_1))+2(u_\delta^4-(1-A_1))\partial_xw+(\Omega''_D-\Omega_D)((K+1)(u_\delta^4+w-(1-A_1))-K)<0.$$ [**(b)**]{} If $(1-A_1)-w<(u_\delta^4(x_\delta)-\frac{K}{K+1})/2=\gamma$, then $(\Omega''_D-\Omega_D)((K+1)(u_\delta^4+w-(1-A_1))-K)<0$ with a negative upper bound. $\frac{\epsilon}{2}\partial^2_xu_\delta^4\to 0$ uniformly. $$\begin{aligned} 2(u_\delta^4-(1-A_1))\partial_xw\le -\frac{4(u_\delta^4(x_\delta+\delta_2)-(1-A_1))}{\epsilon}(w-(1-A_1))(\frac{1}{2}-A_1)<0.\end{aligned}$$ Thus, for $\epsilon$ small enough, $$-\frac{(K-1)\Omega''_D}{2u_\delta^4-1}(w-(1-A_1))+2(u_\delta^4-(1-A_1))\partial_xw<0.$$ Therefore, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^4-\frac{(K-1)\Omega''_D}{2u_\delta^4-1}(w-(1-A_1))+2(u_\delta^4-(1-A_1))\partial_xw+(\Omega''_D-\Omega_D)((K+1)(u_\delta^4+w-(1-A_1))-K)<0.$$ [**(2)**]{} For $x> x_\delta+\delta_2$, $(\Omega''_D-\Omega_D)((K+1)(u_\delta^4+w-(1-A_1))-K)<0$ with a negative upper bound. $\partial_xw$ and $w-(1-A_1)$ tend to zero uniformly. Thus, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^4-\frac{(K-1)\Omega''_D}{2u_\delta^4-1}(w-(1-A_1))+2(u_\delta^4-(1-A_1))\partial_xw+(\Omega''_D-\Omega_D)((K+1)(u_\delta^4+w-(1-A_1))-K)<0.$$ The above analyses show that $L\rho_u<0$ for any $0<x<1$. 
Meanwhile, one can easily show that $\rho_u(0)=w(0)-A_1+u_\delta^3(0)>u_0^1(0)=\alpha$, and $\rho_u(1)=w(1)-(1-A_1)+u_\delta^4(1)>u_0^2(1)=1-\beta$ for $\epsilon$ small enough. Thus, $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and therefore is an upper solution of Eq. (\[OriginalProKneq1\]). In the following discussion, we will give the lower solution $\rho_l$ of Eq. (\[OriginalProKneq1\]) by similar methods. Define $u_{-\delta}^1$ as the solution of $$(2u_{-\delta}^1-1)\partial_xu_{-\delta}^1-(K+1)\Omega_Du_{-\delta}^1+K\Omega_D=0,\quad u_{-\delta}^1(0)=\alpha-\delta.$$ Define $u_{-\delta}^2$ as the solution of $$(2u_{-\delta}^2-1)\partial_xu_{-\delta}^2-(K+1)\Omega_Du_{-\delta}^2+K\Omega_D=0,\quad u_{-\delta}^2(1)=1-\beta-\delta.$$ Then there exists $x_0<x_{-\delta}<1$ such that $u_{-\delta}^1(x_{-\delta})+u_{-\delta}^2(x_{-\delta})=1$. In fact, according to the boundary conditions of the above two equations for $u_{-\delta}^1$ and $u_{-\delta}^2$, we can find that $u_{-\delta}^1(x_0)+u_{-\delta}^2(x_0)<1$. Since both $u_{-\delta}^1(x)$ and $u_{-\delta}^2(x)$ are increasing functions, for $\delta$ small enough, there exists $x_0<x_{-\delta}<1$ which satisfies $u_{-\delta}^1(x_{-\delta})+u_{-\delta}^2(x_{-\delta})=1$. Define $w_1$ as the solution of $$\frac{\epsilon}{2}\partial_xw_1=-e(w_1-A_2)(w_1-(1-A_2)),\quad w_1(x_{-\delta})=\frac{1}{2},$$ where $A_2=u_{-\delta}^1(x_{-\delta})$ and $0<e<1$ are two constants.
Define $w_2$ as the solution of $$\frac{\epsilon}{2}\partial_xw_2=-(w_2-A_2)(w_2-(1-A_2)),\quad w_2(x_{-\delta})=\frac{1}{2}.$$ Let $u_{-\delta}^3$ be the solution of $$(2u_{-\delta}^3-1)\partial_xu_{-\delta}^3-(K+1)\Omega'''_Du_{-\delta}^3+K\Omega'''_D=0,\quad u_{-\delta}^3(x_{-\delta})=A_2+\delta_1,$$ with $\Omega'''_D<\Omega_D$, and $u_{-\delta}^4$ be the solution of $$(2u_{-\delta}^4-1)\partial_xu_{-\delta}^4-(K+1)\Omega_Du_{-\delta}^4+K\Omega_D=0,\quad u_{-\delta}^4(x_{-\delta})=1-A_2+\delta_1.$$ Here, we choose $\Omega_D-\Omega'''_D$ and $\delta_1$ small enough such that $u_{-\delta}^3<u_0^1$ and $u_{-\delta}^4<u_0^2$. The lower solution of Eq. (\[OriginalProKneq1\]) can be given as follows, $$\rho_l=\left\{ \begin{array}{ll} w_1+u_{-\delta}^3-A_2, & x\le x_{-\delta}, \\ w_2+u_{-\delta}^4-(1-A_2),\quad & x>x_{-\delta}. \end{array} \right.$$ See dashed line in Fig. \[integrate\_figures\_Kneq1\_7to11\][**c**]{}. Substituting $\rho_l$ into Eq. (\[OriginalProKneq1\]), we obtain that, for $x\le x_{-\delta}$, $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}^3-\frac{(K-1)\Omega'''_D}{2u_{-\delta}^3-1}(w_1-A_2)+2(u_{-\delta}^3-A_2+(1-e)(w_1-\frac{1}{2}))\partial_xw_1+(\Omega'''_D-\Omega_D)((K+1)(u_{-\delta}^3+w_1-A_2)-K),$$ and for $x>x_{-\delta}$, $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}^4-\frac{(K-1)\Omega_D}{2u_{-\delta}^4-1}(w_2-(1-A_2))+2(u_{-\delta}^4-(1-A_2))\partial_xw_2.$$ For $x\le x_{-\delta}$, $2(u_{-\delta}^3(x_{-\delta})-A_2+(1-e)(w_1(x_{-\delta})-\frac{1}{2}))>2\delta_1>0$. Thus, there exists a $\delta_2$, which is independent of $\epsilon$, such that for $x_{-\delta}-\delta_2\le x\le x_{-\delta}$, $2(u_{-\delta}^3(x)-A_2+(1-e)(w_1(x)-\frac{1}{2}))>0$ with a positive lower bound. Since $\partial_xw_1>0$, $2(u_{-\delta}^3-A_2+(1-e)(w_1-\frac{1}{2}))\partial_xw_1>0$. For $x<x_{-\delta}-\delta_2$, $\partial_xw_1$ tends to zero uniformly. 
Meanwhile, $(\Omega'''_D-\Omega_D)((K+1)(u_{-\delta}^3+w_1-A_2)-K)>0$ with a positive lower bound, $\frac{\epsilon}{2}\partial^2_xu_{-\delta}^3>0$, and $-\frac{(K-1)\Omega'''_D}{2u_{-\delta}^3-1}(w_1-A_2)>0$. Thus, for $\epsilon$ small enough, $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}^3-\frac{(K-1)\Omega'''_D}{2u_{-\delta}^3-1}(w_1-A_2)+2(u_{-\delta}^3-A_2+(1-e)(w_1-\frac{1}{2}))\partial_xw_1+(\Omega'''_D-\Omega_D)((K+1)(u_{-\delta}^3+w_1-A_2)-K)>0.$$ For $x>x_{-\delta}$, it is easy to verify that all three terms in $L\rho_l$ are positive. Thus, $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}^4-\frac{(K-1)\Omega_D}{2u_{-\delta}^4-1}(w_2-(1-A_2))+2(u_{-\delta}^4-(1-A_2))\partial_xw_2>0.$$ At the same time, one can find that for $\epsilon$ small enough, $\rho_l(0)=w_1(0)-A_2+u_{-\delta}^3(0)<u_0^1(0)=\alpha$, and $\rho_l(1)=w_2(1)-(1-A_2)+u_{-\delta}^4(1)<u_{-\delta}^4(1)<u_0^2(1)=1-\beta$. Meanwhile, $$\partial_xw_1(x_{-\delta})-\partial_xw_2(x_{-\delta})=\frac{2}{\epsilon}(1-e)(\frac{1}{2}-A_2)(\frac{1}{2}-(1-A_2))\to -\infty.$$ Thus, $\partial^-_x\rho_l(x_{-\delta})<\partial^+_x\rho_l(x_{-\delta})$ for $\epsilon$ small enough. Therefore, $\rho_l$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is a lower solution of Eq. (\[OriginalProKneq1\]). Finally, let $\hat{\rho}_l=\lim_{\epsilon\to 0}\rho_l$ and $\hat{\rho}_u=\lim_{\epsilon\to 0}\rho_u$. For $\delta$ small enough, $\hat{\rho}_l$ and $\hat{\rho}_u$ can be arbitrarily close to $f$, and $\rho_u>\rho_l$ for $\epsilon$ small enough if $\delta_1$, $\Omega'_D-\Omega_D$, $\Omega_D-\Omega''_D$, $\Omega_D-\Omega'''_D$ are small enough. This implies that the first and third conditions in Lemma \[hatwlemma\] are also satisfied. Hence the existence of a solution $\rho$ of Eq. (\[OriginalProKneq1\]) is obtained.
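The domain-wall position $x_0$ (and likewise $x_\delta$, $x_{-\delta}$) can be located numerically from the same implicit integral $F(u)=x+\mathrm{const}$ of the outer ODE, by root-finding on $u_0^1(x)+u_0^2(x)-1$. The sketch below does this for the LD+DW+HD configuration; the parameters $K=2$, $\Omega_D=0.3$, $\alpha=0.3$, $1-\beta=0.8$ are illustrative choices, not values from the text.

```python
import numpy as np
from scipy.optimize import brentq

# Implicit integral of the outer ODE:
#   F(u) = 2u/((K+1) Om) + (K-1)/((K+1)^2 Om) * log|(K+1) Om u - K Om|,
# valid separately on the LD branch (u < 1/2) and the HD branch (u > K/(K+1)).
K, Om, alpha, omb = 2.0, 0.3, 0.3, 0.8   # omb = 1 - beta > K/(K+1)

def F(u):
    return 2*u/((K + 1)*Om) + (K - 1)/((K + 1)**2*Om)*np.log(abs((K + 1)*Om*u - K*Om))

def u1(x):   # LD branch: u1(0) = alpha, increasing, stays below 1/2
    return brentq(lambda u: F(u) - F(alpha) - x, alpha - 1e-9, 0.5 - 1e-12)

def u2(x):   # HD branch: u2(1) = 1 - beta, increasing, stays above K/(K+1)
    return brentq(lambda u: F(u) - F(omb) - (x - 1), K/(K + 1) + 1e-12, omb)

xstar = F(0.5) - F(alpha)            # x at which the LD branch would reach 1/2
h = lambda x: u1(x) + u2(x) - 1.0    # the sum crosses 1 at the domain wall
assert h(0.0) < 0 < h(xstar - 1e-9)
x0 = brentq(h, 0.0, xstar - 1e-9)    # domain-wall position x_0
assert 0 < x0 < 1
```

Repeating this with the shifted boundary data $\alpha\pm\delta$, $1-\beta\pm\delta$ gives $x_\delta<x_0<x_{-\delta}$, as used in the construction of $\rho_u$ and $\rho_l$.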
### For cases [LD+DW+MD]{} Let $u_0^1$ be the solution of $$(2u_0^1-1)\partial_xu_0^1-(K+1)\Omega_Du_0^1+K\Omega_D=0,\quad u_0^1(0)=\alpha,$$ and $u_0^2$ be the solution of $$(2u_0^2-1)\partial_xu_0^2-(K+1)\Omega_Du_0^2+K\Omega_D=0,\quad u_0^2(1)=1-\beta.$$ For $\frac{1}{2}<1-\beta<\frac{K}{K+1}$, if there exists $0<x_0<1$ such that $u_0^1(x_0)+u_0^2(x_0)=1$, then the limit solution $f$ is given as follows (see Fig. \[integrate\_figures\_Kneq1\_7to11\][**d**]{}), $$f(x)=\left\{ \begin{array}{ll} u_0^1, & 0\le x\le x_0, \\ u_0^2,\ & x_0<x\le1. \end{array} \right.$$ Define $u_\delta^1$ as the solution of $$(2u_\delta^1-1)\partial_xu_\delta^1-(K+1)\Omega_Du_\delta^1+K\Omega_D=0,\quad u_\delta^1(0)=\alpha+\delta.$$ Define $u_\delta^2$ as the solution of $$(2u_\delta^2-1)\partial_xu_\delta^2-(K+1)\Omega_Du_\delta^2+K\Omega_D=0,\quad u_\delta^2(1)=1-\beta+\delta.$$ Then for $\delta$ small enough, there exists $0<x_\delta<x_0$ such that $u_\delta^1(x_\delta)+u_\delta^2(x_\delta)=1$. In fact, according to boundary conditions of the above two equations for $u_\delta^1$ and $u_\delta^2$, one can easily show that $u_\delta^1(x_0)+u_\delta^2(x_0)>1$. Since $0<u_\delta^1(x)<1/2$ and $1/2<u_\delta^2(x)<K/(K+1)$, $u_\delta^1(x)+u_\delta^2(x)>1$ is equivalent to $2u_\delta^2(x)-1>1-2u_\delta^1(x)>0$. Therefore, $$\begin{aligned} \partial_x[u_\delta^1+u_\delta^2]=\frac{(K+1)\Omega_Du_\delta^1-K\Omega_D}{2u_\delta^1-1}+\frac{(K+1)\Omega_Du_\delta^2-K\Omega_D}{2u_\delta^2-1}>0.\end{aligned}$$ So for $x>x_0$, there must always be $u_\delta^1(x)+u_\delta^2(x)>1$, which implies $x_0>x_\delta$. Define $w$ as the solution of $$\frac{\epsilon}{2}\partial_xw=-(w-A_1)(w-(1-A_1)),\quad w(x_\delta)=\frac{1}{2},$$ where $A_1=u_\delta^1(x_\delta)$. 
Let $u_\delta^3$ be the solution of $$(2u_\delta^3-1)\partial_xu_\delta^3-(K+1)\Omega'_Du_\delta^3+K\Omega'_D=0,\quad u_\delta^3(x_\delta)=A_1-\delta_1,$$ with constant $\Omega'_D>\Omega_D$, and $u_\delta^4$ be the solution of $$(2u_\delta^4-1)\partial_xu_\delta^4-(K+1)\Omega_Du_\delta^4+K\Omega_D=0,\quad u_\delta^4(x_\delta)=1-A_1-\delta_1.$$ Here, we choose $\Omega'_D-\Omega_D$ and $\delta_1$ small enough such that $u_\delta^3>u_0^1$ and $u_\delta^4>u_0^2$. The upper solution can be given as follows, $$\rho_u=\left\{ \begin{array}{ll} w+u_\delta^3-A_1, & x\le x_\delta, \\ w+u_\delta^4-(1-A_1),\ & x>x_\delta. \end{array} \right.$$ See dashed line in Fig. \[integrate\_figures\_Kneq1\_7to11\][**d**]{}. By substituting $\rho_u$ into Eq. (\[OriginalProKneq1\]), we obtain that, for $x\le x_\delta$, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^3-\frac{(K-1)\Omega'_D}{2u_\delta^3-1}(w-A_1)+2(u_\delta^3-A_1)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_\delta^3+w-A_1)-K),$$ and for $x>x_\delta$, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^4-\frac{(K-1)\Omega_D}{2u_\delta^4-1}(w-(1-A_1))+2(u_\delta^4-(1-A_1))\partial_xw.$$ For $x\le x_\delta$, $$\begin{aligned} 2(u_\delta^3-A_1)\partial_xw\le \frac{4\delta_1}{\epsilon}(\frac{1}{2}-(1-A_1))(w-A_1)<0.\end{aligned}$$ Thus, for $\epsilon$ small enough, $$\begin{aligned} -\frac{(K-1)\Omega'_D}{2u_\delta^3-1}(w-A_1)+2(u_\delta^3-A_1)\partial_xw<0.\end{aligned}$$ Since $(\Omega'_D-\Omega_D)((K+1)(u_\delta^3+w-A_1)-K)<0$ with a negative upper bound, and $\partial^2_xu_\delta^3$ is bounded, one can show that, for $x\le x_\delta$, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^3-\frac{(K-1)\Omega'_D}{2u_\delta^3-1}(w-A_1)+2(u_\delta^3-A_1)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_\delta^3+w-A_1)-K)<0.$$ For $x> x_\delta$, $u_\delta^4-(1-A_1)\le-\delta_1<0$.
Therefore, $$\begin{aligned} 2(u_\delta^4-(1-A_1))\partial_xw\le \frac{4\delta_1}{\epsilon}(\frac{1}{2}-A_1)(w-(1-A_1))<0.\end{aligned}$$ So for $\epsilon$ small enough, $$\begin{aligned} -\frac{(K-1)\Omega_D}{2u_\delta^4-1}(w-(1-A_1))+2(u_\delta^4-(1-A_1))\partial_xw<0.\end{aligned}$$ Since $\frac{\epsilon}{2}\partial^2_xu_\delta^4<0$, we have $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^4-\frac{(K-1)\Omega_D}{2u_\delta^4-1}(w-(1-A_1))+2(u_\delta^4-(1-A_1))\partial_xw<0.$$ Meanwhile, one can verify that for $\epsilon$ small enough, $\rho_u(0)=w(0)-A_1+u_\delta^3(0)>u_0^1(0)=\alpha$, $\rho_u(1)=w(1)-(1-A_1)+u_\delta^4(1)>u_0^2(1)=1-\beta$, and $\partial^-_x\rho_u(x_\delta)>\partial^+_x\rho_u(x_\delta)$. Therefore, $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is an upper solution of Eq. (\[OriginalProKneq1\]). In the following discussion, we will give the lower solution $\rho_l$ by similar methods. Define $u_{-\delta}^1$ as the solution of $$(2u_{-\delta}^1-1)\partial_xu_{-\delta}^1-(K+1)\Omega_Du_{-\delta}^1+K\Omega_D=0,\quad u_{-\delta}^1(0)=\alpha-\delta.$$ Define $u_{-\delta}^2$ as the solution of $$(2u_{-\delta}^2-1)\partial_xu_{-\delta}^2-(K+1)\Omega_Du_{-\delta}^2+K\Omega_D=0,\quad u_{-\delta}^2(1)=1-\beta-\delta.$$ Then there exists $x_0<x_{-\delta}<1$ such that $u_{-\delta}^1(x_{-\delta})+u_{-\delta}^2(x_{-\delta})=1$. In fact, the relationship between $u_{-\delta}^i$ and $u_0^i$ is qualitatively the same as that between $u_0^i$ and $u_\delta^i$ ($i=1,2$). So, according to the inequality $x_\delta<x_0$, we have $x_0<x_{-\delta}$. Define $w_1$ as the solution of $$\frac{\epsilon}{2}\partial_xw_1=-e(w_1-A_2)(w_1-(1-A_2)),\quad w_1(x_{-\delta})=\frac{1}{2},$$ where $A_2=u_{-\delta}^1(x_{-\delta})$ and $0<e<1$ are constants.
Define $w_2$ as the solution of $$\frac{\epsilon}{2}\partial_xw_2=-(w_2-A_2)(w_2-(1-A_2)),\quad w_2(x_{-\delta})=\frac{1}{2}.$$ Let $u_{-\delta}^3$ be the solution of $$(2u_{-\delta}^3-1)\partial_xu_{-\delta}^3-(K+1)\Omega''_Du_{-\delta}^3+K\Omega''_D=0,\quad u_{-\delta}^3(x_{-\delta})=A_2+\delta_1,$$ with $\Omega''_D<\Omega_D$, and $u_{-\delta}^4$ be the solution of $$(2u_{-\delta}^4-1)\partial_xu_{-\delta}^4-(K+1)\Omega'''_Du_{-\delta}^4+K\Omega'''_D=0,\quad u_{-\delta}^4(x_{-\delta})=1-A_2+\delta_1,$$ with $\Omega'''_D<\Omega_D$. Here, $\Omega_D-\Omega''_D,\Omega_D-\Omega'''_D,\delta_1$ are chosen to be small enough such that $u_{-\delta}^3<u_0^1$, $u_{-\delta}^4<u_0^2$. The lower solution can be given as follows, $$\rho_l=\left\{ \begin{array}{ll} w_1+u_{-\delta}^3-A_2, & x\le x_{-\delta}, \\ w_2+u_{-\delta}^4-(1-A_2),\quad & x>x_{-\delta}. \end{array} \right.$$ See dashed line in Fig. \[integrate\_figures\_Kneq1\_7to11\][**d**]{}. By substituting $\rho_l$ into Eq. (\[OriginalProKneq1\]), we obtain that, for $x\le x_{-\delta}$, $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}^3-\frac{(K-1)\Omega''_D}{2u_{-\delta}^3-1}(w_1-A_2)+2(u_{-\delta}^3-A_2+(1-e)(w_1-\frac{1}{2}))\partial_xw_1+(\Omega''_D-\Omega_D)((K+1)(u_{-\delta}^3+w_1-A_2)-K),$$ and for $x>x_{-\delta}$, $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}^4-\frac{(K-1)\Omega'''_D}{2u_{-\delta}^4-1}(w_2-(1-A_2))+2(u_{-\delta}^4-(1-A_2))\partial_xw_2+(\Omega'''_D-\Omega_D)((K+1)(u_{-\delta}^4+w_2-(1-A_2))-K).$$ For $x\le x_{-\delta}$, $2(u_{-\delta}^3(x_{-\delta})-A_2+(1-e)(w_1(x_{-\delta})-\frac{1}{2}))=2\delta_1>0$. Thus, there exists a $\delta_2>0$, which is independent of $\epsilon$, such that for $x_{-\delta}-\delta_2\le x\le x_{-\delta}$, $2(u_{-\delta}^3(x)-A_2+(1-e)(w_1(x)-\frac{1}{2}))>0$ with a positive lower bound. Since $\partial_xw_1>0$, $2(u_{-\delta}^3-A_2+(1-e)(w_1-\frac{1}{2}))\partial_xw_1>0$. On the other hand, for $x<x_{-\delta}-\delta_2$, $\partial_xw_1$ tends to zero uniformly.
Note that $(\Omega''_D-\Omega_D)((K+1)(u_{-\delta}^3+w_1-A_2)-K)>0$ with a positive lower bound, $\frac{\epsilon}{2}\partial^2_xu_{-\delta}^3>0$, and $-\frac{(K-1)\Omega''_D}{2u_{-\delta}^3-1}(w_1-A_2)>0$. Thus, for $\epsilon$ small enough, $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}^3-\frac{(K-1)\Omega''_D}{2u_{-\delta}^3-1}(w_1-A_2)+2(u_{-\delta}^3-A_2+(1-e)(w_1-\frac{1}{2}))\partial_xw_1+(\Omega''_D-\Omega_D)((K+1)(u_{-\delta}^3+w_1-A_2)-K)>0.$$ For $x> x_{-\delta}$, $2(u_{-\delta}^4(x_{-\delta})-(1-A_2))=2\delta_1>0$. Thus, there exists a $\delta_3>0$, which is independent of $\epsilon$, such that for $x_{-\delta}\le x\le x_{-\delta}+\delta_3$, $2(u_{-\delta}^4(x)-(1-A_2))>0$ with a positive lower bound. Since $\partial_xw_2>0$, $2(u_{-\delta}^4-(1-A_2))\partial_xw_2>0$. On the other hand, for $x>x_{-\delta}+\delta_3$, $\partial_xw_2$ tends to zero uniformly. Note that $(\Omega'''_D-\Omega_D)((K+1)(u_{-\delta}^4+w_2-(1-A_2))-K)>0$ with a positive lower bound, $\frac{\epsilon}{2}\partial^2_xu_{-\delta}^4$ tends to zero uniformly, and $-\frac{(K-1)\Omega'''_D}{2u_{-\delta}^4-1}(w_2-(1-A_2))>0$. Thus, $$L\rho_l=\frac{\epsilon}{2}\partial^2_xu_{-\delta}^4-\frac{(K-1)\Omega'''_D}{2u_{-\delta}^4-1}(w_2-(1-A_2))+2(u_{-\delta}^4-(1-A_2))\partial_xw_2+(\Omega'''_D-\Omega_D)((K+1)(u_{-\delta}^4+w_2-(1-A_2))-K)>0.$$ One can verify that for $\epsilon$ small enough, $\rho_l(0)=w_1(0)-A_2+u_{-\delta}^3(0)<u_0^1(0)=\alpha$, and $\rho_l(1)=w_2(1)-(1-A_2)+u_{-\delta}^4(1)<u_{-\delta}^4(1)<u_0^2(1)=1-\beta$. Since $$\partial_xw_1(x_{-\delta})-\partial_xw_2(x_{-\delta})=\frac{2}{\epsilon}(1-e)(\frac{1}{2}-A_2)(\frac{1}{2}-(1-A_2))\to -\infty,$$ $\partial^-_x\rho_l(x_{-\delta})<\partial^+_x\rho_l(x_{-\delta})$ for $\epsilon$ small enough. Thus, $\rho_l$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is a lower solution of Eq. (\[OriginalProKneq1\]).
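Each transition-layer profile above ($w$ with $e=1$, and $w_1$ with damping factor $e$) solves a logistic-type ODE that integrates in closed form to a $\tanh$ profile. The following sketch is our own addition, not part of the proof: writing $w=\frac{1}{2}+v$ turns $\frac{\epsilon}{2}\partial_xw=-c(w-A)(w-(1-A))$ into $\frac{\epsilon}{2}v'=c((\frac{1}{2}-A)^2-v^2)$, which is solved by a $\tanh$; the code checks this numerically, assuming the generic case $A<\frac{1}{2}$.

```python
import math

# (eps/2) w' = -c (w - A)(w - (1 - A)),  w(x*) = 1/2,  with 0 < c <= 1.
# Writing w = 1/2 + v gives (eps/2) v' = c ((1/2 - A)^2 - v^2), so
#   w(x) = 1/2 + (1/2 - A) * tanh(c (1 - 2A) (x - x*) / eps).

def w_layer(x, A, xstar, eps, c=1.0):
    return 0.5 + (0.5 - A) * math.tanh(c * (1.0 - 2.0 * A) * (x - xstar) / eps)

def layer_residual(A=0.2, xstar=0.5, eps=0.01, c=0.7, n=2001):
    """Max residual of (eps/2) w' + c (w - A)(w - (1 - A)) over a grid on [0, 1]."""
    m = 0.5 - A
    worst = 0.0
    for i in range(n):
        x = i / (n - 1)
        t = math.tanh(2.0 * c * m * (x - xstar) / eps)
        w = 0.5 + m * t
        dw = (2.0 * c * m * m / eps) * (1.0 - t * t)   # exact derivative of the profile
        worst = max(worst, abs(0.5 * eps * dw + c * (w - A) * (w - (1.0 - A))))
    return worst
```

The profile rises monotonically through $\frac{1}{2}$ at $x^*$ over a layer of width $O(\epsilon)$, matching the behaviour required of $w$, $w_1$ and $w_2$ above.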
Finally, let $\hat{\rho}_l=\lim_{\epsilon\to 0}\rho_l$, $\hat{\rho}_u=\lim_{\epsilon\to 0}\rho_u$. Then for $\delta$ small enough, $\hat{\rho}_l$ and $\hat{\rho}_u$ can be arbitrarily close to $f$. Meanwhile, $\rho_u>\rho_l$ for $\epsilon$ small enough if $\delta_1$, $\Omega'-\Omega$, $\Omega-\Omega''$, $\Omega-\Omega'''$ are all small enough, which implies that the first and third conditions in Lemma \[hatwlemma\] are also satisfied. Hence the existence of a solution $\rho$ of Eq. (\[OriginalProKneq1\]) is obtained. ### For cases [LD+DW+MD+BL$_r^-$]{} Let $u_0^1$ be the solution of $$(2u_0^1-1)\partial_xu_0^1-(K+1)\Omega_Du_0^1+K\Omega_D=0,\quad u_0^1(0)=\alpha,$$ and $u_0^2$ be the solution of $$(2u_0^2-1)\partial_xu_0^2-(K+1)\Omega_Du_0^2+K\Omega_D=0,\quad u_0^2(1)=\frac{1}{2}.$$ For the cases $\beta>\frac{1}{2}$, if there exists $0<x_0<1$ such that $u_0^1(x_0)+u_0^2(x_0)=1$, then the limit solution $f$ is given as follows (see Fig. \[integrate\_figures\_Kneq1\_7to11\][**e**]{}), $$f(x)=\left\{ \begin{array}{ll} u_0^1, & 0\le x\le x_0, \\ u_0^2, & x_0<x\le1. \end{array} \right.$$ The upper solution $\rho_u$ can be constructed as follows. Define $u_\delta^1$ as the solution of $$(2u_\delta^1-1)\partial_xu_\delta^1-(K+1)\Omega_Du_\delta^1+K\Omega_D=0,\quad u_\delta^1(0)=\alpha+\delta,$$ and $u_\delta^2$ as the solution of $$(2u_\delta^2-1)\partial_xu_\delta^2-(K+1)\Omega_Du_\delta^2+K\Omega_D=0,\quad u_\delta^2(1)=\frac{1}{2}+\delta.$$ Then for $\delta$ small enough, there exists $0<x_\delta<x_0<1$ such that $u_\delta^1(x_\delta)+u_\delta^2(x_\delta)=1$. In fact, according to the above two equations for $u_\delta^1$ and $u_\delta^2$, one can find that $u_\delta^1(x_0)+u_\delta^2(x_0)>1$, $0<u_\delta^1(x)<1/2$, and $1/2<u_\delta^2(x)<K/(K+1)$.
Since $u_\delta^1(x)+u_\delta^2(x)>1$ is equivalent to $2u_\delta^2(x)-1>1-2u_\delta^1(x)$, we obtain $$\begin{aligned} \partial_x[u_\delta^1+u_\delta^2]=\frac{(K+1)\Omega_Du_\delta^1-K\Omega_D}{2u_\delta^1-1}+\frac{(K+1)\Omega_Du_\delta^2-K\Omega_D}{2u_\delta^2-1}>0.\end{aligned}$$ This means that for $x>x_0$, we always have $u_\delta^1(x)+u_\delta^2(x)>1$, and so $x_\delta<x_0$. Define $w$ as the solution of $$\frac{\epsilon}{2}\partial_xw=-(w-A_1)(w-(1-A_1)),\quad w(x_\delta)=\frac{1}{2},$$ where $A_1=u_\delta^1(x_\delta)$. Let $u_\delta^3$ be the solution of $$(2u_\delta^3-1)\partial_xu_\delta^3-(K+1)\Omega'_Du_\delta^3+K\Omega'_D=0,\quad u_\delta^3(x_\delta)=A_1-\delta_1,$$ with $\Omega'_D>\Omega_D$. Let $u_\delta^4$ be the solution of $$(2u_\delta^4-1)\partial_xu_\delta^4-(K+1)\Omega_Du_\delta^4+K\Omega_D=0,\quad u_\delta^4(x_\delta)=1-A_1-\delta_1.$$ Here, $\Omega'_D-\Omega_D, \delta_1$ are assumed to be small enough such that $u_\delta^3>u_0^1$, $u_\delta^4>u_0^2$. Then the upper solution can be given as follows, $$\rho_u=\left\{ \begin{array}{ll} w+u_\delta^3-A_1, & x\le x_\delta, \\ w+u_\delta^4-(1-A_1),\ & x>x_\delta. \end{array} \right.$$ See dashed line in Fig. \[integrate\_figures\_Kneq1\_7to11\][**e**]{}. By substituting $\rho_u$ into Eq. (\[OriginalProKneq1\]), we obtain that, for $x\le x_\delta$, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^3-\frac{(K-1)\Omega'_D}{2u_\delta^3-1}(w-A_1)+2(u_\delta^3-A_1)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_\delta^3+w-A_1)-K),$$ and for $x>x_\delta$, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^4-\frac{(K-1)\Omega_D}{2u_\delta^4-1}(w-(1-A_1))+2(u_\delta^4-(1-A_1))\partial_xw.$$ For $x\le x_\delta$, one can verify that $2(u_\delta^3-A_1)\partial_xw\le \frac{4\delta_1}{\epsilon}(\frac{1}{2}-(1-A_1))(w-A_1)<0$. Thus, for $\epsilon$ small enough, $-\frac{(K-1)\Omega'_D}{2u_\delta^3-1}(w-A_1)+2(u_\delta^3-A_1)\partial_xw<0$.
Since $(\Omega'_D-\Omega_D)((K+1)(u_\delta^3+w-A_1)-K)<0$ with a negative upper bound, and $\partial^2_xu_\delta^3$ is bounded, we conclude that, for $\epsilon$ small enough, $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^3-\frac{(K-1)\Omega'_D}{2u_\delta^3-1}(w-A_1)+2(u_\delta^3-A_1)\partial_xw+(\Omega'_D-\Omega_D)((K+1)(u_\delta^3+w-A_1)-K)<0.$$ For $x> x_\delta$, one can show that $u_\delta^4-(1-A_1)\le-\delta_1<0$. Therefore, $2(u_\delta^4-(1-A_1))\partial_xw\le \frac{4\delta_1}{\epsilon}(\frac{1}{2}-A_1)(w-(1-A_1))<0$. So for $\epsilon$ small enough, $-\frac{(K-1)\Omega_D}{2u_\delta^4-1}(w-(1-A_1))+2(u_\delta^4-(1-A_1))\partial_xw<0$. Since $\frac{\epsilon}{2}\partial^2_xu_\delta^4<0$, we have $$L\rho_u=\frac{\epsilon}{2}\partial^2_xu_\delta^4-\frac{(K-1)\Omega_D}{2u_\delta^4-1}(w-(1-A_1))+2(u_\delta^4-(1-A_1))\partial_xw<0.$$ Finally, one can easily show that for $\epsilon$ small enough, $\rho_u(0)=w(0)-A_1+u_\delta^3(0)>u_0^1(0)=\alpha$, $\rho_u(1)=w(1)-(1-A_1)+u_\delta^4(1)>u_0^2(1)=\frac{1}{2}>1-\beta$, and $\partial^-_x\rho_u(x_\delta)>\partial^+_x\rho_u(x_\delta)$. Hence $\rho_u$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is an upper solution of Eq. (\[OriginalProKneq1\]). In the following discussion, we will give the lower solution $\rho_l$ by similar methods. Define $u_{-\delta}^1$ as the solution of $$(2u_{-\delta}^1-1)\partial_xu_{-\delta}^1-(K+1)\Omega_Du_{-\delta}^1+K\Omega_D=0,\quad u_{-\delta}^1(0)=\alpha-\delta,$$ and $u_{-\delta}^2$ as the solution of $$(2u_{-\delta}^2-1)\partial_xu_{-\delta}^2-(K+1)\Omega''''_Du_{-\delta}^2+K\Omega''''_D=0,\quad u_{-\delta}^2(1)=\frac{1}{2},$$ with $\Omega''''_D<\Omega_D$. Then for $\delta$ and $\Omega-\Omega''''$ small enough, there exists $x_0<x_{-\delta}<1$ such that $u_{-\delta}^1(x_{-\delta})+u_{-\delta}^2(x_{-\delta})=1$.
In fact, according to the above two equations for $u_{-\delta}^1$ and $u_{-\delta}^2$, one can find that $u_0^1(x_{-\delta})+u_0^2(x_{-\delta})>1$, $0<u_0^1(x)<1/2$, and $1/2<u_0^2(x)<K/(K+1)$. Since $u_0^1(x)+u_0^2(x)>1$ is equivalent to $2u_0^2(x)-1>1-2u_0^1(x)$, one can show that $$\begin{aligned} \partial_x[u_0^1+u_0^2]=\frac{(K+1)\Omega_Du_0^1-K\Omega_D}{2u_0^1-1}+\frac{(K+1)\Omega_Du_0^2-K\Omega_D}{2u_0^2-1}>0.\end{aligned}$$ So for $x>x_{-\delta}$, there must always be $u_0^1(x)+u_0^2(x)>1$, which implies $x_0<x_{-\delta}$. Define $w_1$ as the solution of $$\frac{\epsilon}{2}\partial_xw_1=-e(w_1-A_2)(w_1-(1-A_2)),\quad w_1(x_{-\delta})=\frac{1}{2},$$ where constants $A_2=u_{-\delta}^1(x_{-\delta})$ and $0<e<1$. Define $w_2$ as the solution of $$\frac{\epsilon}{2}\partial_xw_2=-(w_2-A_2)(w_2-(1-A_2)),\quad w_2(x_{-\delta})=\frac{1}{2}.$$ Let $u_{-\delta}^5$ be the solution of $$(2u_{-\delta}^5-1)\partial_xu_{-\delta}^5-(K+1)\Omega'''_Du_{-\delta}^5+K\Omega'''_D=0,\quad u_{-\delta}^5(1)=\frac{1}{2},$$ with $\Omega''''_D<\Omega'''_D<\Omega_D$. Let $u_{-\delta}^{4\epsilon}$ be the solution of $$(2u_{-\delta}^{4\epsilon}-1)\partial_xu_{-\delta}^{4\epsilon}-(K+1)\Omega'''_Du_{-\delta}^{4\epsilon}+K\Omega'''_D=0,\quad u_{-\delta}^{4\epsilon}(1-\epsilon^{1/2})=u_0^2(1-\epsilon^{1/2}).$$ Define $\delta_5<u_{-\delta}^5(x_{-\delta})-(1-A_2)$, and $w_3$ as the solution of $$\frac{\epsilon}{2}\partial_xw_3=-e_3(w_3-(\frac{1}{2}-\delta_5))(w_3-(\frac{1}{2}+\delta_5)),\quad w_3(1)=1-\beta-\delta_2.$$ Let $u_{-\delta}^{3\epsilon}$ be the solution of $$(2u_{-\delta}^{3\epsilon}-1)\partial_xu_{-\delta}^{3\epsilon}-(K+1)\Omega''_Du_{-\delta}^{3\epsilon}+K\Omega''_D=0,\quad u_{-\delta}^{3\epsilon}(x_{-\delta})=A_2+\delta_3^\epsilon,$$ with $\Omega''_D<\Omega_D$ and $\delta_3^\epsilon=w_3(x_{-\delta})-\frac{1}{2}+u_{-\delta}^{4\epsilon}(x_{-\delta})-(1-A_2)$. Note that $ \delta_3=\lim_{\epsilon\to0}\delta_3^\epsilon=-\delta_5+u_{-\delta}^5(x_{-\delta})-(1-A_2)>0$. 
Here $\Omega_D-\Omega''_D$ and $\delta_5$ are chosen to be small enough such that $\delta_3$ is small enough to satisfy $\lim_{\epsilon\to0}u_{-\delta}^{3\epsilon}<u_0^1$. Then the lower solution can be given as follows, $$\rho_l=\left\{ \begin{array}{ll} w_1+u_{-\delta}^{3\epsilon}-A_2, & x\le x_{-\delta}, \\ w_2+u_{-\delta}^{4\epsilon}-(1-A_2)+w_3-\frac{1}{2},\ & x>x_{-\delta}. \end{array} \right.$$ See dashed line in Fig. \[integrate\_figures\_Kneq1\_7to11\][**e**]{}. Substituting $\rho_l$ into Eq. (\[OriginalProKneq1\]), we have that, for $x\le x_{-\delta}$, $$\begin{aligned} L\rho_l&=&\frac{\epsilon}{2}\partial^2_xu_{-\delta}^{3\epsilon}-\frac{(K-1)\Omega''_D}{2u_{-\delta}^{3\epsilon}-1}(w_1-A_2)+2(u_{-\delta}^{3\epsilon}-A_2+(1-e)(w_1-\frac{1}{2}))\partial_xw_1\\ &&+(\Omega''_D-\Omega_D)((K+1)(u_{-\delta}^{3\epsilon}+w_1-A_2)-K),\end{aligned}$$ and for $x>x_{-\delta}$, $$\begin{aligned} L\rho_l&=&\frac{\epsilon}{2}\partial^2_xu_{-\delta}^{4\epsilon}-\frac{(K-1)\Omega'''_D}{2u_{-\delta}^{4\epsilon}-1}(w_2+w_3-(1-A_2)-\frac{1}{2})+2(u_{-\delta}^{4\epsilon}-\frac{1}{2}+w_3-(1-A_2))\partial_xw_2\\ &&+2(w_2-(1-A_2)+u_{-\delta}^{4\epsilon}-\frac{1}{2}+(1-e_3)(w_3-\frac{1}{2}))\partial_xw_3\\ &&+(\Omega'''_D-\Omega_D)((K+1)(w_2+u_{-\delta}^{4\epsilon}-\frac{1}{2}+w_3-(1-A_2))-K).\end{aligned}$$ For $x\le x_{-\delta}$, one can show that $\lim_{\epsilon\to0}2(u_{-\delta}^{3\epsilon}(x_{-\delta})-A_2+(1-e)(w_1(x_{-\delta})-\frac{1}{2}))=2\delta_3>0$. Thus, for $\epsilon$ small enough, there exists a $\delta_4$, which is independent of $\epsilon$, such that for $x_{-\delta}-\delta_4\le x\le x_{-\delta}$, $2(u_{-\delta}^{3\epsilon}(x)-A_2+(1-e)(w_1(x)-\frac{1}{2}))>0$ with a positive lower bound. Since $\partial_xw_1>0$, $2(u_{-\delta}^{3\epsilon}-A_2+(1-e)(w_1-\frac{1}{2}))\partial_xw_1>0$. For $x<x_{-\delta}-\delta_4$, $\partial_xw_1$ tends to zero uniformly. 
Meanwhile, $(\Omega''_D-\Omega_D)((K+1)(u_{-\delta}^{3\epsilon}+w_1-A_2)-K)>0$ with a positive lower bound, $\frac{\epsilon}{2}\partial^2_xu_{-\delta}^{3\epsilon}>0$, and $-\frac{(K-1)\Omega''_D}{2u_{-\delta}^{3\epsilon}-1}(w_1-A_2)>0$. Thus, for $\epsilon$ small enough, $$\begin{aligned} L\rho_l&=&\frac{\epsilon}{2}\partial^2_xu_{-\delta}^{3\epsilon}-\frac{(K-1)\Omega''_D}{2u_{-\delta}^{3\epsilon}-1}(w_1-A_2)+2(u_{-\delta}^{3\epsilon}-A_2+(1-e)(w_1-\frac{1}{2}))\partial_xw_1\\ &&+(\Omega''_D-\Omega_D)((K+1)(u_{-\delta}^{3\epsilon}+w_1-A_2)-K)>0.\end{aligned}$$ For $x> x_{-\delta}$, we first consider the value of $u_{-\delta}^{4\epsilon}(1)=\frac{1}{2}+\theta$. First, the solution $u_0^2$ satisfies the following equation $$\begin{aligned} \frac{2u_0^2}{(K+1)\Omega_D}+\frac{K-1}{(K+1)^2\Omega_D}\log|(K+1)\Omega_Du_0^2-K\Omega_D|=x+C_1.\end{aligned}$$ If we assume $u_0^2(1-\epsilon^{1/2})=\frac{1}{2}+\delta'$, then from $u_0^2(1)=\frac{1}{2}$, and keeping only the leading order terms, we obtain $$\begin{aligned} \frac{2\delta'^2}{(K-1)\Omega_D}\sim \epsilon^{1/2}.\end{aligned}$$ The solution $u_{-\delta}^{4\epsilon}$ satisfies the following equation $$\begin{aligned} \frac{2u_{-\delta}^{4\epsilon}}{(K+1)\Omega'''_D}+\frac{K-1}{(K+1)^2\Omega'''_D}\log|(K+1)\Omega'''_Du_{-\delta}^{4\epsilon}-K\Omega'''_D|=x+C_2.\end{aligned}$$ If we assume $u_{-\delta}^{4\epsilon}(1)=\frac{1}{2}+\theta$, then from $u_{-\delta}^{4\epsilon}(1-\epsilon^{1/2})=u_0^2(1-\epsilon^{1/2})=\frac{1}{2}+\delta'$, and keeping only the leading order terms, we have $$\begin{aligned} \theta^2\sim \frac{(K-1)(\Omega_D-\Omega'''_D)}{2}\epsilon^{1/2},\end{aligned}$$ $$\begin{aligned} \frac{\epsilon}{2}\partial^2_xu_{-\delta}^{4\epsilon}(1)=\frac{\epsilon(\Omega'''_D)^2(K-1)((K+1)u_{-\delta}^{4\epsilon}-K)}{16\theta^3} \sim\frac{(\Omega'''_D)^2((K+1)u_{-\delta}^{4\epsilon}-K)\theta}{4(K-1)(\Omega_D-\Omega'''_D)^2},\end{aligned}$$ $$\begin{aligned} 
-\frac{(K-1)\Omega'''_D}{2u_{-\delta}^{4\epsilon}(1)-1}(w_2+w_3-(1-A_2)-\frac{1}{2})\ge\delta_5\frac{(K-1)\Omega'''_D}{2\theta}>0.\end{aligned}$$ So, for $\epsilon$ small enough, we have $$\begin{aligned} \frac{\epsilon(\Omega'''_D)^2(K-1)((K+1)\frac{1}{2}-K)}{16\theta^3}+\delta_5\frac{(K-1)\Omega'''_D}{2\theta}>0.\end{aligned}$$ For any $x_{-\delta}<x<1$, let $u_{-\delta}^{4\epsilon}(x)=E(x)+\theta+\frac{1}{2}$ with $E(x)>0$. Then we have $$\begin{aligned} &&\frac{\epsilon}{2}\partial^2_xu_{-\delta}^{4\epsilon}(x)-\frac{(K-1)\Omega'''_D}{2u_{-\delta}^{4\epsilon}(1)-1}(w_2+w_3-(1-A_2)-\frac{1}{2})\\ &\ge&\frac{\epsilon(\Omega'''_D)^2(K-1)((K+1)\frac{1}{2}-K)}{16(\theta+E(x))^3}+\delta_5\frac{(K-1)\Omega'''_D}{2(\theta+E(x))}\\ &=&(\frac{\epsilon(\Omega'''_D)^2(K-1)((K+1)\frac{1}{2}-K)}{16\theta^3}+\delta_5\frac{(K-1)\Omega'''_D}{2\theta})\frac{\theta^3}{(\theta+E(x))^3}\\ &&+\delta_5\frac{(K-1)\Omega'''_D}{2(\theta+E(x))}\frac{\theta}{\theta+E(x)}(1-\frac{\theta^2}{(\theta+E(x))^2})>0.\end{aligned}$$ Note that $\lim_{\epsilon\to0}u_{-\delta}^{4\epsilon}(x_{-\delta})-\frac{1}{2}+w_3(x_{-\delta})-(1-A_2)=\delta_3>0$. Thus, for $\epsilon$ small enough, there exists a $\delta_6$, which is independent of $\epsilon$, such that for all $x_{-\delta}<x<x_{-\delta}+\delta_6$, $u_{-\delta}^{4\epsilon}(x)-\frac{1}{2}+w_3(x)-(1-A_2)>0$. Since $\partial_xw_2>0$, $2(u_{-\delta}^{4\epsilon}-\frac{1}{2}+w_3-(1-A_2))\partial_xw_2>0$. For $x\ge x_{-\delta}+\delta_6$, $\partial_xw_2$ tends to zero uniformly. One can verify that for $\epsilon$ small enough, $w_2(1)-(1-A_2)+u_{-\delta}^{4\epsilon}(1)-\frac{1}{2}+(1-e_3)(w_3(1)-\frac{1}{2})<0$. Thus, for $\epsilon$ small enough, there exists a $\delta_7$, which is independent of $\epsilon$, such that for all $1-\delta_7<x\le 1$, $w_2(x)-(1-A_2)+u_{-\delta}^{4\epsilon}(x)-\frac{1}{2}+(1-e_3)(w_3(x)-\frac{1}{2})<0$. 
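The $\theta$-estimates above rely on the implicit first integral of the outer equation $(2u-1)\partial_xu-(K+1)\Omega_Du+K\Omega_D=0$. Integrating by partial fractions gives $F(u)=\frac{2u}{(K+1)\Omega_D}+\frac{K-1}{(K+1)^2\Omega_D}\log|(K+1)\Omega_Du-K\Omega_D|=x+C$. A pure-Python RK4 sketch (our own addition; parameter values are arbitrary test choices, with the solution kept away from $u=\frac{1}{2}$) checking that $F(u(x))-x$ is indeed conserved:

```python
import math

K, Om = 2.0, 1.0   # arbitrary test values with K > 1

def f(u):
    """Right-hand side of u' = ((K+1) Om u - K Om) / (2u - 1)."""
    return ((K + 1) * Om * u - K * Om) / (2 * u - 1)

def F(u, x):
    """Candidate first integral; should be constant along solutions."""
    return (2 * u / ((K + 1) * Om)
            + (K - 1) / ((K + 1) ** 2 * Om)
            * math.log(abs((K + 1) * Om * u - K * Om)) - x)

def invariant_drift(u0=0.2, x_end=0.05, n=500):
    """Max drift of F along an RK4 solution started at u(0) = u0."""
    h = x_end / n
    u, x = u0, 0.0
    c0, worst = F(u, x), 0.0
    for _ in range(n):
        k1 = f(u)
        k2 = f(u + 0.5 * h * k1)
        k3 = f(u + 0.5 * h * k2)
        k4 = f(u + h * k3)
        u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
        worst = max(worst, abs(F(u, x) - c0))
    return worst
```

The chain rule $\frac{dF}{du}\cdot u'=1$ can also be checked directly by a finite difference, which is what makes $F$ a first integral.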
Since $\partial_xw_3(x)<0$, $2(w_2(x)-(1-A_2)+u_{-\delta}^{4\epsilon}(x)-\frac{1}{2}+(1-e_3)(w_3(x)-\frac{1}{2}))\partial_xw_3(x)>0$. For $x\le 1-\delta_7$, $\partial_xw_3(x)$ tends to zero uniformly. Finally, we have $(\Omega'''_D-\Omega_D)((K+1)(w_2+u_{-\delta}^{4\epsilon}-\frac{1}{2}+w_3-(1-A_2))-K)>0$ with a positive lower bound. Thus, when $x> x_{-\delta}$, $$\begin{aligned} L\rho_l&=&\frac{\epsilon}{2}\partial^2_xu_{-\delta}^{4\epsilon}-\frac{(K-1)\Omega'''_D}{2u_{-\delta}^{4\epsilon}-1}(w_2+w_3-(1-A_2)-\frac{1}{2})+2(u_{-\delta}^{4\epsilon}-\frac{1}{2}+w_3-(1-A_2))\partial_xw_2\\ &&+2(w_2-(1-A_2)+u_{-\delta}^{4\epsilon}-\frac{1}{2}+(1-e_3)(w_3-\frac{1}{2}))\partial_xw_3\\ &&+(\Omega'''_D-\Omega_D)((K+1)(w_2+u_{-\delta}^{4\epsilon}-\frac{1}{2}+w_3-(1-A_2))-K)>0,\end{aligned}$$ for $\epsilon$ small enough. One can also verify that $$\partial_xw_1(x_{-\delta})-\partial_xw_2(x_{-\delta})=\frac{2}{\epsilon}(1-e)(\frac{1}{2}-A_2)(\frac{1}{2}-(1-A_2))\to -\infty.$$ Thus, for $\epsilon$ small enough, $\partial^-_x\rho_l(x_{-\delta})<\partial^+_x\rho_l(x_{-\delta})$. Meanwhile, it can also be shown that, for $\epsilon$ small enough, $\rho_l(0)=w_1(0)-A_2+u_{-\delta}^{3\epsilon}(0)<u_0^1(0)=\alpha$, and $\rho_l(1)=w_2(1)-(1-A_2)+u_{-\delta}^{4\epsilon}(1)+w_3(1)-\frac{1}{2}\to 1-\beta-\delta_2<1-\beta$. Thus, $\rho_l$ satisfies the sufficient conditions in Lemma \[upperlowersufficientcondition\], and is a lower solution of Eq. (\[OriginalProKneq1\]). Since too many parameters are used in this subsection, for convenience, we summarize their logical relationships below. - $\delta$, $\Omega''''<\Omega$, $\delta_2$ are chosen independently at first. - $\delta_1$, $\Omega'>\Omega$ are chosen based on $\delta$ such that $u_\delta^3>u_0^1$, $u_\delta^4>u_0^2$. 
- $\Omega''''<\Omega'''<\Omega$, $\delta_5$ are chosen based on $\delta$ and $\Omega''''<\Omega$ such that $$\delta_3=\lim_{\epsilon\to0}\delta_3^\epsilon=-\delta_5+u_{-\delta}^5(x_{-\delta})-(1-A_2)>0.$$ At the same time, $\delta_3$ is required to be small enough. Based on this small $\delta_3$, as well as $\delta$ and $\Omega''''<\Omega$, we further choose $\Omega''<\Omega$ such that $u_{-\delta}^{3\epsilon}<u_0^1$. Finally, for $\delta$, $\Omega-\Omega''''$, $\delta_2$ small enough (note that $\delta_5$, $\Omega-\Omega'''$ are controlled by $\Omega-\Omega''''$), $\hat{\rho}_l=\lim_{\epsilon\to 0}\rho_l$ and $\hat{\rho}_u=\lim_{\epsilon\to 0}\rho_u$ can be arbitrarily close to $f$. Meanwhile, for $\epsilon$ small enough, $\rho_u>\rho_l$ if $\delta_1$, $\Omega'-\Omega$, $\Omega-\Omega''$, and $\delta_3$ are also small enough. We can make $\delta_3$ small by choosing suitable $\Omega'''-\Omega''''$ and $\delta_5$. Therefore, the first and third conditions in Lemma \[hatwlemma\] are also satisfied, and hence the existence of a solution $\rho$ of Eq. (\[OriginalProKneq1\]) is obtained. Regularity of the $W^{1,2}(0,1)$ weak solution ---------------------------------------------- The discussions in Section \[steadySpecialCases\] and Section \[SecConstructionGeneral\] show that there exists at least one [*weak*]{} solution in space $W^{1,2}(0,1)$ for Eq. (\[ellipticequationintroduction\]), or equivalently Eqs. (\[OriginalPro\],\[OriginalProKneq1\]), see Lemma \[hatwlemma\]. Here, we want to show that the weak solution in space $W^{1,2}(0,1)$ is actually in space $C^\infty[0,1]$. \[W12Cinftyregularitytheorem\] Any $W^{1,2}(0,1)$ weak solution $\rho$ of Eq. (\[ellipticequationintroduction\]) is actually in space $C^\infty[0,1]$. This can be proved inductively. If $\rho$ is a solution of Eq. 
(\[ellipticequationintroduction\]) in $W^{n,2}(0,1)$ space ($n\ge 1$), we have $$\frac{\epsilon}{2}\rho_{xx}=-(2\rho-1)\rho_x-\Omega_A(1-\rho)+\Omega_D\rho.$$ Note that $\rho\rho_x$ is in $W^{n-1,2}(0,1)$ space (see Theorem 7.4 in [@Gilbarg2001]). Thus, the right-hand side of the above equation belongs to $W^{n-1,2}(0,1)$ space, and therefore $\frac{\epsilon}{2}\rho_{xx}\in W^{n-1,2}(0,1)$, which implies $\rho \in W^{n+1,2}(0,1)$. So $\rho \in W^{k,2}(0,1)$ for all $k>0$, and consequently $\rho \in C^\infty[0,1]$. See Section 7.7 in [@Gilbarg2001] for more details. Uniqueness of the steady state solution in space $C^1[0,1]$ (or in space $W^{1,2}(0,1)$) {#uniqueness} ======================================================================================== The uniqueness of the steady state solution, i.e., the solution of Eq. (\[ellipticequationintroduction\]), in space $C^1[0,1]$ can be proved using Theorem 10.7 in [@Gilbarg2001], which is a generalization of the classical linear maximum principle to quasi-linear equations. \[C1uniqueness\] The $C^1[0,1]$ solution of Eq. (\[ellipticequationintroduction\]), if it exists, is unique. Let $K=\Omega_A/\Omega_D$, $A(\rho,\rho_x)=\epsilon\rho_x/2+\rho^2-\rho$, and $B(\rho)=-(K+1)\Omega_D\rho+K\Omega_D$. Then Eq. (\[ellipticequationintroduction\]) can be written as follows, $$[A(\rho,\rho_x)]_x+B(\rho)=0.$$ Suppose both $\rho^0$ and $\rho^1$ are $C^1[0,1]$ solutions of Eq. (\[ellipticequationintroduction\]). Define $g=\rho^1-\rho^0$, and let $$\rho^t=t\rho^1+(1-t)\rho^0.$$ Then for any function $\varphi$ in $W^{1,2}_0(0,1)$ space, we have $$\label{compareelliptic} 0=\int_0^1\{[A(\rho^1,\rho^1_x)-A(\rho^0,\rho^0_x)]\varphi_x-[B(\rho^1)-B(\rho^0)]\varphi\}dx.$$ Note that $$\begin{aligned} \label{Aminuselliptic} A(\rho^1,\rho^1_x)-A(\rho^0,\rho^0_x)=\int_0^1[A(\rho^t,\rho^t_x)]_tdt=\int_0^1\{(2\rho^t-1)g+\frac{\epsilon}{2}g_x\}dt=b(x)g+\frac{\epsilon}{2}g_x,\end{aligned}$$ where $b(x)\triangleq\int_0^1(2\rho^t-1)dt$. 
Since $\rho^1$ and $\rho^0$ are in $C^1[0,1]$ space, $|b|\le \Lambda$ for some constant $\Lambda>0$. Similarly, $$\begin{aligned} \label{Bminuselliptic} B(\rho^1)-B(\rho^0)=\int_0^1B(\rho^t)_tdt=-\int_0^1(K+1)\Omega_Dgdt=-(K+1)\Omega_Dg.\end{aligned}$$ By substituting $\varphi=\frac{g^+}{g^++\delta}\in W^{1,2}_0(0,1)$, together with Eqs. (\[Aminuselliptic\], \[Bminuselliptic\]), into Eq. (\[compareelliptic\]), we obtain $$\begin{aligned} 0&=&\int_0^1\{(bg^++\frac{\epsilon}{2}g^+_x)(\frac{g^+_x}{g^++\delta}-\frac{g^+_xg^+}{(g^++\delta)^2})+(K+1)\Omega_Dg^+\frac{g^+}{g^++\delta}\}dx\\ &=&\int_0^1\{b[\log(1+\frac{g^+}{\delta})]_x\frac{g^+}{g^++\delta}\delta+\frac{\epsilon}{2}[\log(1+\frac{g^+}{\delta})]_x^2\delta+(K+1)\Omega_D\frac{(g^+)^2}{g^++\delta}\}dx.\end{aligned}$$ Here $\delta>0$ is a small constant, and the function $g^+$ is defined as follows, $$g^+(x)=\left\{ \begin{aligned} &g(x),\quad \textrm{if}\ g(x)\ge0,\\ &0,\qquad\ \, \textrm{if}\ g(x)<0. \end{aligned} \right.$$ Since $(K+1)\Omega_D\frac{(g^+)^2}{g^++\delta}\ge0$, $|b|\le\Lambda$, $\frac{g^+}{g^++\delta}\le1$, we have $$\begin{aligned} \frac{\epsilon}{2}\int_0^1[\log(1+\frac{g^+}{\delta})]_x^2dx\le\Lambda\int_0^1|[\log(1+\frac{g^+}{\delta})]_x|dx.\end{aligned}$$ From Hölder’s inequality, we obtain $$\begin{aligned} \int_0^1|[\log(1+\frac{g^+}{\delta})]_x|dx\le \left\{\int_0^1[\log(1+\frac{g^+}{\delta})]_x^2dx\right\}^{1/2}.\end{aligned}$$ Thus, $$\begin{aligned} \int_0^1[\log(1+\frac{g^+}{\delta})]_x^2dx\le\left(\frac{2\Lambda}{\epsilon}\right)^2.\end{aligned}$$ By Poincaré’s inequality, $$\begin{aligned} \int_0^1[\log(1+\frac{g^+}{\delta})]^2dx\le C(\epsilon,\Lambda).\end{aligned}$$ The above estimates hold for all $\delta>0$, and note that $C(\epsilon,\Lambda)$ is independent of $\delta$. Since $g^+$ is continuous on the interval $[0,1]$, we must have $g^+=0$, i.e. $\rho^1-\rho^0=g\le 0$ on $[0,1]$; otherwise we would have $\lim_{\delta\to0^+}\int_0^1[\log(1+\frac{g^+}{\delta})]^2dx=+\infty$. 
By the same argument with the roles of $\rho^0$ and $\rho^1$ exchanged, it can also be proved that $\rho^0-\rho^1\le 0$ in $[0,1]$. Therefore, $\rho^0=\rho^1$, and the uniqueness of the solution of Eq. (\[ellipticequationintroduction\]) in $C^1[0,1]$ is obtained. Theorem \[C1uniqueness\] gives the uniqueness of the solution of Eq. (\[ellipticequationintroduction\]) in $C^1[0,1]$ space. In fact, this can be generalized to uniqueness in $W^{1,2}(0,1)$ space. From Theorem \[W12Cinftyregularitytheorem\], we know that any two solutions $\rho_a$ and $\rho_b$ of Eq. (\[ellipticequationintroduction\]) in $W^{1,2}(0,1)$ space satisfy $\rho_a,\rho_b\in C^\infty[0,1] \subset C^1[0,1]$. Then from Theorem \[C1uniqueness\], we have $\rho_a=\rho_b$. In the first part of this section, the existence of a solution of Eq. (\[ellipticequationintroduction\]) in $W^{1,2}(0,1)$ space has already been established by the method of upper and lower solutions. Together with the uniqueness discussion in this subsection, we have thus shown that Eq. (\[ellipticequationintroduction\]), which describes the steady state density of particles along the underlying track in the TASEP process, has a unique $W^{1,2}(0,1)$ weak solution, which actually belongs to $C^\infty[0,1]$. The existence and uniqueness of the global $X^\alpha$ solution, as well as the existence of a global attractor in $X^\alpha$ {#global} ========================================================================================================================== In this section, we will show the global existence and uniqueness of the solution of the time-dependent Eq. (\[continuumlimitintroduction\]), as well as the existence of a global attractor in a suitable function space. Let $V$ be a metric space, and let the one-parameter family $\{T(t)\}: V\to V$, $t>0$, be a $C^0$ semigroup (see Definition 1.1.1 of [@Cholewa2000]). 
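As a crude numerical illustration of this attraction property (our own addition, not part of the argument), one can discretize the time-dependent equation in the form $\frac{1}{\epsilon}\rho_t=\frac{\epsilon}{2}\rho_{xx}+(\rho^2-\rho)_x+\Omega_A(1-\rho)-\Omega_D\rho$, consistent with the steady equation above, and watch two trajectories with different initial profiles approach each other. All parameter values below are arbitrary, and $\epsilon$ is kept moderate so that the explicit scheme stays stable.

```python
# Explicit finite-difference sketch: two trajectories with different initial
# data, same boundary values, should approach each other (and the unique
# steady state).  Parameters are arbitrary test choices.
eps, Om_A, Om_D = 0.3, 2.0, 1.0
alpha, beta = 0.2, 0.3
N = 51
dx = 1.0 / (N - 1)
dt, T = 0.002, 10.0

def flux(v):
    return v * v - v

def step(r):
    """One explicit Euler step of (1/eps) rho_t = (eps/2) rho_xx + (rho^2-rho)_x + Om_A (1-rho) - Om_D rho."""
    new = r[:]
    for i in range(1, N - 1):
        rxx = (r[i + 1] - 2.0 * r[i] + r[i - 1]) / dx ** 2
        conv = (flux(r[i + 1]) - flux(r[i - 1])) / (2.0 * dx)
        new[i] = r[i] + dt * eps * (0.5 * eps * rxx + conv
                                    + Om_A * (1.0 - r[i]) - Om_D * r[i])
    return new

def run():
    xs = [i * dx for i in range(N)]
    ra = [alpha + (1.0 - beta - alpha) * x for x in xs]       # straight line between the BCs
    rb = [alpha + (1.0 - beta - alpha) * x * x for x in xs]   # curved profile, same BCs
    d0 = max(abs(a - b) for a, b in zip(ra, rb))
    for _ in range(int(T / dt)):
        ra, rb = step(ra), step(rb)
    return d0, max(abs(a - b) for a, b in zip(ra, rb))
```

With these choices the gap between the two profiles shrinks substantially by $t=10$, in line with the contraction behind the attractor.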
By a global attractor for $\{T(t)\}$, we mean a nonempty, compact, $\{T(t)\}$-invariant set $\mathcal{A}\subset V$ which attracts every bounded subset of $V$ (see Definition 1.1.4 of [@Cholewa2000]). Here, $V$ is the function space $X^\alpha$ defined below, and for $\rho(x,s)\in X^\alpha$, Eq. (\[continuumlimitintroduction\]) defines a $C^0$ semigroup by $T(t)\rho(x,s)=\rho(x,t+s)$, where $\rho(x,s)$ will evolve to $\rho(x,t+s)$ according to Eq. (\[continuumlimitintroduction\]). Let $\rho^s$ be the unique solution of Eq. (\[ellipticequationintroduction\]) in $C^\infty[0,1]$ space, and let $\rho$ be a solution of Eq. (\[continuumlimitintroduction\]) with initial value $\rho_0(x)$. Then $g=\rho-\rho^s$ satisfies the following equation, $$\begin{aligned} \label{differenceparabolicform} \left\{ \begin{array}{ll} \frac{1}{\epsilon}g_t=\frac{\epsilon}{2}g_{xx}+[g^2+g(2\rho^s-1)]_x-(K+1)\Omega_Dg, \quad &\textrm{for}\ t>0\ \textrm{and}\ 0<x<1,\\ g(0,t)=0,\ g(1,t)=0, &\textrm{for}\ t>0, \\ g(x,0)=g_0, &\textrm{for}\ 0\le x\le 1, \end{array} \right.\end{aligned}$$ where $K=\Omega_A/\Omega_D$ and $g_0=\rho_0-\rho^s$. Because of the homogeneous Dirichlet boundary conditions, it is convenient to study the solution $g$ of Eq. (\[differenceparabolicform\]); then $\rho=g+\rho^s$ is the corresponding solution of Eq. (\[continuumlimitintroduction\]). Define $A(g)=-\frac{\epsilon}{2}g_{xx}+\lambda_0g$, $F(g,g_x)=[g^2+g(2\rho^s-1)]_x-(K+1)\Omega_Dg+\lambda_0g$ with $\lambda_0>0$ a positive constant. Then Eq. (\[differenceparabolicform\]) can be reformulated into the following form, $$\begin{aligned} \label{differenceparabolicformAF} \left\{ \begin{array}{ll} \frac{1}{\epsilon}g_t=-A(g)+F(g,g_x),\quad &\textrm{for}\ t>0\ \textrm{and}\ 0<x<1,\\ g(0,t)=0,\ g(1,t)=0, &\textrm{for}\ t>0, \\ g(x,0)=g_0, &\textrm{for}\ 0\le x\le 1. 
\end{array} \right.\end{aligned}$$ Following the idea used in Chapter 5 of [@Cholewa2000], we have - The operator $A(g)$, the homogeneous Dirichlet boundary conditions, and the domain $(0,1)$ form a regular elliptic boundary value problem in the sense of Definition 1.2.1 in [@Cholewa2000]; - The condition $$\begin{aligned} \int_0^1A(g)hdx=\int_0^1\{\frac{\epsilon}{2}g_xh_x+\lambda_0gh\}dx \end{aligned}$$ holds for any $g\in W^{2,2}_0(0,1)$ and $h\in W^{1,2}_0(0,1)$, whereas the form $$\begin{aligned} a(g,h)=\frac{\epsilon}{2}g_xh_x+\lambda_0gh \end{aligned}$$ is symmetric and coercive. The latter means that $$\begin{aligned} \int_0^1a(h,h)dx=\int_0^1\{\frac{\epsilon}{2}h_x^2+\lambda_0h^2\}dx\ge\min(\frac{\epsilon}{2},\lambda_0)\|h\|^2_{W^{1,2}},\ \forall h\in W^{1,2}_0(0,1). \end{aligned}$$ Regard $A$ as an operator from $W^{2,2}_0(0,1)$ to $L^2(0,1)$, and denote its spectrum by $\sigma(A)$. According to Example 1.3.8 in Section 1.3 of [@Cholewa2000], $A$ is sectorial (see Definition 1.3.1 of [@Cholewa2000]) and $\mathrm{Re}\,\sigma(A)>0$ if $\lambda_0>0$ is chosen large enough. Now we define $A^{-\alpha}:L^2\to L^2$ as $$\begin{aligned} A^{-\alpha}v=\frac{1}{\Gamma(\alpha)}\int_0^\infty t^{\alpha-1}e^{-At}vdt. \end{aligned}$$ Proposition 1.3.4 in Section 1.3 of [@Cholewa2000] shows that $A^{-\alpha}$, $\alpha\in(0,+\infty)$, are well-defined bounded linear operators on $X=L^2(0,1)$, giving a one-to-one correspondence between $L^2$ and the range $R(A^{-\alpha})$. Define $A^\alpha$ as the inverse of $A^{-\alpha}$, and $X^\alpha:=R(A^{-\alpha})$ as the domain of definition of $A^\alpha$. In particular, $A^{-1}$ coincides with the inverse of $A$, and $X^1=W^{2,2}_0(0,1)$. For convenience, we define $A^0=I$, and $X^0=X=L^2(0,1)$. 
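For this one-dimensional operator the fractional powers can be made concrete (our own remark, stated under the standard spectral picture): $A$ has Dirichlet eigenfunctions $\sin(k\pi x)$ with eigenvalues $\lambda_k=\frac{\epsilon}{2}k^2\pi^2+\lambda_0$, and $A^{-\alpha}$ multiplies the $k$-th Fourier sine coefficient by $\lambda_k^{-\alpha}$. A sketch checking the defining integral on a single eigenvalue, where the substitution $y=t^\alpha$ removes the $t^{\alpha-1}$ singularity at $t=0$:

```python
import math

eps, lam0 = 0.1, 1.0                       # arbitrary test values
lam1 = 0.5 * eps * math.pi ** 2 + lam0     # first Dirichlet eigenvalue of A

def frac_power_on_mode(lam, alpha, Y=40.0, n=400_000):
    """(1/Gamma(alpha)) * integral_0^inf t^(alpha-1) e^(-lam t) dt,
    which should equal lam ** (-alpha).

    Substituting y = t**alpha turns the integrand into e^(-lam y^(1/alpha)) / alpha,
    which is smooth at y = 0; the integral is then done by the trapezoid rule.
    """
    h = Y / n
    s = 0.5 * (1.0 + math.exp(-lam * Y ** (1.0 / alpha)))
    for i in range(1, n):
        s += math.exp(-lam * (i * h) ** (1.0 / alpha))
    return s * h / (alpha * math.gamma(alpha))
```

On each eigenmode this reproduces $\lambda_k^{-\alpha}$, which is exactly how $A^{-\alpha}$ acts mode by mode.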
According to the discussion from page 47 to page 50, especially Remark 1.3.7, in Section 1.3 of [@Cholewa2000], we have $$\begin{aligned} \label{Eqspaceequivalent} \left\{ \begin{array}{ll} X^\alpha=W^{2\alpha,2}(0,1), \quad &\textrm{for } 0\le\alpha<1/4, \\ X^\alpha \subset W^{1/2,2}(0,1), &\textrm{for } \alpha=1/4, \\ X^\alpha=W^{2\alpha,2}_0(0,1), &\textrm{for } 1/4<\alpha\le 1. \end{array} \right.\end{aligned}$$ Similar to Definition 2.1.1 of [@Cholewa2000], the local $X^\alpha$ solution of Eq. (\[differenceparabolicformAF\]) for $\alpha\in[0,1)$ is defined as follows. \[definitionofXalphalocalsolution\] Let $\alpha\in[0,1)$ and $g_0\in X^\alpha$. If, for some real $\tau>0$, a function $g\in C([0,\tau),X^\alpha)$ satisfies the following conditions, - $g(x,0)=g_0(x)$, - $g\in C^1((0,\tau),X)$, - $g(x,t)$ belongs to $X^1$ for each $t\in(0,\tau)$, - $\frac{1}{\epsilon}g_t=-A(g)+F(g,g_x)$ holds in $X$ $\forall t\in(0,\tau)$, then $g$ is called a local $X^\alpha$ solution of Eq. (\[differenceparabolicformAF\]). Note that the boundary condition is automatically satisfied since $g(x,t)\in X^1=W^{2,2}_0$ for each $t\in(0,\tau)$. If $\tau=+\infty$, such a solution is called a global $X^\alpha$ solution. According to the discussion in Section 9.4 of [@Cholewa2000], the $X^\alpha$ solution $g$ has the following regularity. $$\begin{aligned} \label{regularityofgstrongest} g\in C([0,\tau),X^\alpha)\cap C^1((0,\tau),X^\gamma)\cap C((0,\tau),X^1),\ \forall\gamma\in[0,1).\end{aligned}$$ In the following, based on the theory presented in [@Cholewa2000], we will prove the global existence and uniqueness of the solution of Eq. (\[differenceparabolicformAF\]), as well as the existence of its global attractor, in $X^\alpha$ space with $\alpha>\frac{3}{4}$. Firstly, we show that $F(g,g_x)$, as an operator from $X^\alpha$ to $X$, is Lipschitz continuous on bounded sets of $X^\alpha$ for $\alpha>\frac{3}{4}$. 
Since $2\alpha-1>1/2$, we have $X^\alpha= W^{2\alpha,2}(0,1) \subset C^1[0,1]$ by the Sobolev embedding theorem (see Section 7.7 in [@Gilbarg2001]). So for $g$ in a bounded subset of $X^\alpha$, there exists a constant $C_0>0$ such that $|g|,|g_x|\le C_0$. For two elements $g_1$ and $g_2$, which belong to a bounded subset of $X^\alpha$, we have $$\begin{aligned} &&\|F(g_1,(g_1)_x)-F(g_2,(g_2)_x)\|_X=\|F(g_1,(g_1)_x)-F(g_2,(g_2)_x)\|_2\\ &=&\|(g_1-g_2)(2\rho^s-1)_x+(g_1-g_2)_x(2\rho^s-1)+2g_1(g_1)_x-2g_2(g_2)_x-(K+1)\Omega_D(g_1-g_2)+\lambda_0(g_1-g_2)\|_2\\ &\le&[(K+1)\Omega_D+\lambda_0+\|(2\rho^s-1)_x\|_\infty]\|g_1-g_2\|_2+2\|(g_1-g_2)(g_1)_x\|_2\\ &&+2\|g_2(g_1-g_2)_x\|_2+\|(g_1-g_2)_x\|_2\|(2\rho^s-1)\|_\infty\\ &\le& [(K+1)\Omega_D+\lambda_0+\|(2\rho^s-1)_x\|_\infty+2C_0]\|g_1-g_2\|_2+(\|(2\rho^s-1)\|_\infty+2C_0)\|(g_1-g_2)_x\|_2\\ &\le& C_1\|g_1-g_2\|_{W^{1,2}(0,1)}\le C_2\|g_1-g_2\|_{X^\alpha}.\end{aligned}$$ Secondly, we discuss a growth condition of $F(g,g_x)$ (see Eq. (\[growthconditionwithrestriction\]) below). Choose $1\le\gamma_0<5$, $1\le\gamma_1<\frac{5}{3}$ satisfying $\frac{1}{\gamma_0}+\frac{1}{\gamma_1}=1$. Since $\rho^s\in C^\infty[0,1]$, we have $\|\rho^s\|_\infty< +\infty$. So from Young’s inequality (see Lemma 1.2.2 in [@Cholewa2000]), we have $$\begin{aligned} \label{growthconditionwithrestriction} |F(g,g_x)|&=&|[g^2+g(2\rho^s-1)]_x-(K+1)\Omega_Dg+\lambda_0g|\le M_1|gg_x|+M_2|g|+M_3|g_x|\nonumber \\ &\le& M_1(\frac{|g|^{\gamma_0}}{\gamma_0}+\frac{|g_x|^{\gamma_1}}{\gamma_1})+M_2(1+|g|^{\gamma_0})+M_3(1+|g_x|^{\gamma_1})\nonumber \\ &\le& C_3(1+|g|^{\gamma_0}+|g_x|^{\gamma_1}).\end{aligned}$$ Thirdly, we give an $L^2(0,1)$ a priori estimate of $g(\cdot,t)$, which is asymptotically independent of the initial condition $g_0$. Multiplying both sides of Eq. 
(\[differenceparabolicform\]) by $f_1g^+$ with $f_1=\frac{1}{2}x+\frac{1}{2}$, defining $b=2\rho^s-1$, and integrating with respect to $x$ over $[0,1]$, we obtain, $$\begin{aligned} &&\frac{1}{2\epsilon}\frac{d}{dt}\int_0^1f_1(g^+)^2dx =\frac{1}{\epsilon}\int_0^1f_1g_tg^+dx \nonumber \\ &=&\frac{\epsilon}{2}\int_0^1f_1g^+g_{xx}dx+\int_0^1(g^2)_xf_1g^+dx +\int_0^1(gb)_xf_1g^+dx-(K+1)\Omega_D\int_0^1gf_1g^+dx \label{Eq305}\\ &=&-\frac{\epsilon}{2}\int_0^1f_1(g^+_x)^2dx-\frac{2}{3}\int_0^1(f_1)_x(g^+)^3dx \label{Eq306}\\ &&-\frac{1}{2}\int_0^1b(f_1)_x(g^+)^2dx+\frac{1}{2}\int_0^1b_xf_1(g^+)^2dx-(K+1)\Omega_D\int_0^1f_1(g^+)^2dx \label{Eq307}\\ &\le& -\frac{1}{3}\int_0^1(g^+)^3dx+\frac{\|b\|_\infty}{4}\int_0^1(g^+)^2dx+\frac{\|b_x\|_\infty}{2}\int_0^1f_1(g^+)^2dx\\ &\le& -\frac{1}{3}\int_0^1(g^+)^3dx+\frac{\|b\|_\infty}{2}\int_0^1f_1(g^+)^2dx+\frac{\|b_x\|_\infty}{2}\int_0^1f_1(g^+)^2dx.\end{aligned}$$ The second equality above is derived in detail as follows. By integration by parts, the integral in the first term of Eq. (\[Eq305\]) is $$\begin{aligned} &&\int_0^1f_1g^+g_{xx}dx \nonumber \\ &=&-\int_0^1(f_1)_xg^+g_xdx-\int_0^1f_1g^+_xg_xdx\\ &=&\int_0^1(f_1)_{xx}g^+gdx+\int_0^1(f_1)_xg^+_xgdx-\int_0^1f_1g^+_xg_xdx.\end{aligned}$$ Summing $1/2$ $\times$ line 2 and $1/2$ $\times$ line 3, and noting that $(f_1)_{xx}=0$, we have $$\begin{aligned} \int_0^1f_1g^+g_{xx}dx=-\int_0^1f_1g^+_xg_xdx=-\int_0^1f_1(g^+_x)^2dx.\end{aligned}$$ Then the first term in Eq. (\[Eq306\]) is obtained. The second term in Eq. (\[Eq305\]) can be reformulated as follows, $$\begin{aligned} &&\int_0^1f_1g^+(g^2)_xdx\\ &=&2\int_0^1f_1g^+gg_xdx\\ &=&-\int_0^1(f_1)_xg^+g^2dx-\int_0^1f_1g^+_xg^2dx.\end{aligned}$$ Summing $1/3$ $\times$ line 2 and $2/3$ $\times$ line 3, we have $$\begin{aligned} \int_0^1f_1g^+(g^2)_xdx=-\frac{2}{3}\int_0^1(f_1)_xg^+g^2dx=-\frac{2}{3}\int_0^1(f_1)_x(g^+)^3dx.\end{aligned}$$ Then the second term in Eq. (\[Eq306\]) is obtained. The third term in Eq. 
(\[Eq305\]) can be reformulated as $$\begin{aligned} &&\int_0^1[gb]_xf_1g^+dx\\ &=&\int_0^1g_xbf_1g^+dx+\int_0^1gb_xf_1g^+dx\\ &=&-\int_0^1gb(f_1)_xg^+dx-\int_0^1gbf_1g^+_xdx.\end{aligned}$$ Summing $1/2$ $\times$ line 2 and $1/2$ $\times$ line 3, we have $$\begin{aligned} \int_0^1[gb]_xf_1g^+dx=-\frac{1}{2}\int_0^1gb(f_1)_xg^+dx+\frac{1}{2}\int_0^1gb_xf_1g^+dx=-\frac{1}{2}\int_0^1b(f_1)_x(g^+)^2dx+\frac{1}{2}\int_0^1b_xf_1(g^+)^2dx.\end{aligned}$$ These are the first two terms in Eq. (\[Eq307\]). Before continuing our estimation of $\|g\|_2$, we show that $\frac{1}{2}\frac{d}{dt}\int_0^1f_1(g^+)^2dx=\int_0^1f_1g_tg^+dx$ for $t>0$. According to Eq. (\[regularityofgstrongest\]), $g\in C^1((0,\infty),X^\gamma)$ for any $\gamma\in [0,1)$. Recall that $X^\gamma= W^{2\gamma,2}(0,1)\subset C^0[0,1]$ for $\gamma\in (1/4,1)$. Thus, the following two conditions are satisfied. - $\forall\, t>0$, $\lim_{h\to 0}g(x,t+h)=g(x,t)$ uniformly for $x\in [0,1]$. Note that $\frac{3}{4}<\alpha<1$. This is also true for $t=0$ since $g\in C([0,\infty),X^\alpha)\subset C([0,\infty),C^1[0,1])$. - $\forall\, t>0$, $\lim_{h\to 0}\frac{g(x,t+h)-g(x,t)}{h}=g_t(x,t)$ uniformly for $x\in [0,1]$. \[canchangetheorem\] Under the above two conditions, $\frac{1}{2}\frac{d}{dt}\int_0^1f_1(g^+)^2dx=\int_0^1f_1g_tg^+dx$ for $t>0$. One can easily show that $|g^+(x,t+h)-g^+(x,t)|\le|g(x,t+h)-g(x,t)|$. So the first condition implies that $\lim_{h\to 0}g^+(x,t+h)=g^+(x,t)$ uniformly for $x\in [0,1]$. We now show that $\frac{[g^+(x,t+h)]^2-[g^+(x,t)]^2}{h}$ converges to $2g_t(x,t)g^+(x,t)$ pointwise. Obviously, we have $$\begin{aligned} \label{decompositionofg2minusdivdeh} &&\lim_{h\to 0}\frac{[g^+(x,t+h)]^2-[g^+(x,t)]^2}{h}\nonumber \\ &=&\lim_{h\to 0}\left\{g^+(x,t+h)\frac{g^+(x,t+h)-g^+(x,t)}{h}+g^+(x,t)\frac{g^+(x,t+h)-g^+(x,t)}{h}\right\}.\end{aligned}$$ We illustrate the convergence through the following two cases.
[**(1)**]{} If $g(x,t)\le0$, then $\lim_{h\to 0}g^+(x,t+h)=g^+(x,t)=0$. Since $|g^+(x,t+h)-g^+(x,t)|\le|g(x,t+h)-g(x,t)|$ and $\lim_{h\to 0}\frac{g(x,t+h)-g(x,t)}{h}=g_t(x,t)$ exists, the quotient $\frac{g^+(x,t+h)-g^+(x,t)}{h}$ is bounded as $h\to 0$. So from Eq. (\[decompositionofg2minusdivdeh\]), we obtain $$\begin{aligned} \lim_{h\to 0}\frac{[g^+(x,t+h)]^2-[g^+(x,t)]^2}{h}=0=2g_t(x,t)g^+(x,t).\end{aligned}$$ This holds for any $x$ such that $g(x,t)\le 0$. [**(2)**]{} If $g(x,t)>0$, then there exists $h_0(x,t)>0$ such that $\forall\, |h|<h_0(x,t)$, $|g(x,t+h)-g(x,t)|<g(x,t)$. So $g(x,t+h)>0$, and $g^+(x,t+h)=g(x,t+h)$. Thus, we have $$\begin{aligned} \lim_{h\to 0}\frac{[g^+(x,t+h)]^2-[g^+(x,t)]^2}{h}=\lim_{h\to 0}\frac{[g(x,t+h)]^2-[g(x,t)]^2}{h}=2g_t(x,t)g(x,t)=2g_t(x,t)g^+(x,t).\end{aligned}$$ This holds for any $x$ such that $g(x,t)>0$. In order to use the dominated convergence theorem, we need to show that $\max_{x\in[0,1]}|\frac{[g^+(x,t+h)]^2-[g^+(x,t)]^2}{h}|$ is uniformly bounded with respect to $h$. From the two conditions of this theorem, we know that $\lim_{h\to 0}\frac{g(x,t+h)-g(x,t)}{h}=g_t(x,t)$ uniformly for $x\in [0,1]$, and that $g_t(\cdot,t)\in X^\gamma\subset C^0[0,1]$ is bounded with respect to $x$. So $\max_{x\in[0,1]}|\frac{g(x,t+h)-g(x,t)}{h}|$ is uniformly bounded with respect to $h$. Since $|g^+(x,t+h)-g^+(x,t)|\le|g(x,t+h)-g(x,t)|$, $\max_{x\in[0,1]}|\frac{g^+(x,t+h)-g^+(x,t)}{h}|$ is also uniformly bounded with respect to $h$. Similarly, $\max_{x\in[0,1]}|g^+(x,t+h)|$ is uniformly bounded with respect to $h$, since $\lim_{h\to 0}g(x,t+h)=g(x,t)$ uniformly for $x\in [0,1]$, $g(\cdot,t)\in X^\alpha\subset C^1[0,1]$ is bounded with respect to $x$, and $|g^+(x,t+h)|\le|g(x,t+h)|$. Therefore, from Eq. (\[decompositionofg2minusdivdeh\]) we obtain that $\max_{x\in[0,1]}|\frac{[g^+(x,t+h)]^2-[g^+(x,t)]^2}{h}|$ is uniformly bounded with respect to $h$.
Then, from the dominated convergence theorem, we have $$\begin{aligned} &&\int_0^12f_1g_t(x,t)g^+(x,t)dx\\ &=&\int_0^1f_1\lim_{h\to 0}\frac{[g^+(x,t+h)]^2-[g^+(x,t)]^2}{h}dx\\ &=&\lim_{h\to 0}\frac{\int_0^1f_1[g^+(x,t+h)]^2dx-\int_0^1f_1[g^+(x,t)]^2dx}{h}\\ &=&\frac{d}{dt}\int_0^1f_1(g^+(x,t))^2dx.\end{aligned}$$ The proof is then complete. Now we continue the estimation of the $L^2$ norm of $g$. Since $y^+:=\int_0^1f_1(g^+)^2dx\le \int_0^1(g^+)^2dx\le \left[\int_0^1(g^+)^3dx\right]^{2/3}$, we have $$\begin{aligned} \frac{1}{2\epsilon}\frac{dy^+}{dt}\le -\frac{1}{3}(y^+)^{3/2}+\frac{\|b\|_\infty+\|b_x\|_\infty}{2}y^+.\end{aligned}$$ Then, from the Bernoulli inequality (see Lemma 1.2.4 in [@Cholewa2000]), we have $$\begin{aligned} \sup_{t\in[0,\tau_0)}y^+\le \max\left(y^+(0),\left(\frac{3(\|b\|_\infty+\|b_x\|_\infty)}{2}\right)^2\right),\end{aligned}$$ where $\tau_0$ is the maximal existence time of the $X^\alpha$ solution $g$. If $\tau_0=\infty$, we have $$\begin{aligned} \limsup_{t\to\infty}y^+\le \left(\frac{3(\|b\|_\infty+\|b_x\|_\infty)}{2}\right)^2.\end{aligned}$$ Similarly, let $y^-:=\int_0^1f_2(g^-)^2dx$ with $f_2=-\frac{1}{2}x+1$. We have $$\begin{aligned} \sup_{t\in[0,\tau_0)}y^-\le \max\left(y^-(0),\left(\frac{3(\|b\|_\infty+\|b_x\|_\infty)}{2}\right)^2\right).\end{aligned}$$ If $\tau_0=\infty$, we have $$\begin{aligned} \limsup_{t\to\infty}y^-\le \left(\frac{3(\|b\|_\infty+\|b_x\|_\infty)}{2}\right)^2.\end{aligned}$$ Finally, let $C_L:=\left(\frac{3(\|b\|_\infty+\|b_x\|_\infty)}{2}\right)^2$.
We have $$\begin{aligned} \label{gboundnotasymptotic} &&\|g\|_2^2=\|g^+\|_2^2+\|g^-\|_2^2\le 2y^++2y^-\le 2\max(y^+(0),C_L)+2\max(y^-(0),C_L)\\ &\le& 4\max(\|g_0\|_2^2,C_L)\le 4\max(C\|g_0\|_{X^\alpha}^2,C_L),\end{aligned}$$ and $$\begin{aligned} \label{gboundasymptotic} &&\limsup_{t\to\infty}\|g\|_2^2\le\limsup_{t\to\infty}\|g^+\|_2^2+\limsup_{t\to\infty}\|g^-\|_2^2\le 2\limsup_{t\to\infty}y^++2\limsup_{t\to\infty}y^-\le 4C_L.\end{aligned}$$ With the above preparation, we finally have the following result. For $\alpha\in(3/4,1)$, Eq. (\[differenceparabolicformAF\]) has a unique global $X^\beta$ solution $g$ for any initial value $g_0\in X^\beta$, as well as a global attractor in $X^\beta$, where $\beta\in[\alpha,1)$. Therefore, for any initial value $\rho_0\in X^\beta$, Eq. (\[continuumlimitintroduction\]) has a unique global $X^\beta$ solution $\rho$, as well as a global attractor in $X^\beta$. It suffices to prove the results for Eq. (\[differenceparabolicformAF\]). Firstly, note that $A$ is sectorial with $\mathrm{Re}\,\sigma(A)>0$, and that $F(g,g_x)$, as an operator from $X^\alpha$ to $X$, is Lipschitz continuous on bounded sets of $X^\alpha$ for $\alpha>\frac{3}{4}$. Then, according to Theorem 2.1.1 of [@Cholewa2000], Eq. (\[differenceparabolicformAF\]) has a unique [*local*]{} $X^\beta$ solution $g$ for any $g_0\in X^\beta$. Secondly, $F(g,g_x)$ satisfies the growth condition presented in Eq. (\[growthconditionwithrestriction\]), and $g$ satisfies the $L^2$ estimate in Eq. (\[gboundnotasymptotic\]). Then, according to Lemma 5.2.1, Proposition 5.2.1, and Remark 5.2.3 of [@Cholewa2000], Eq. (\[differenceparabolicformAF\]) has a unique [*global*]{} $X^\beta$ solution $g$ for any $g_0\in X^\beta$. Finally, $g$ satisfies the $L^2$ estimate in Eq. (\[gboundasymptotic\]). Then, according to Theorem 5.3.1 and Remark 5.3.1 of [@Cholewa2000], Eq. (\[differenceparabolicformAF\]) has a global attractor in $X^\beta$.
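The Bernoulli-type differential inequality driving the a priori $L^2$ estimate above can be illustrated numerically. The sketch below (plain Python with a forward-Euler integrator; the values of $\epsilon$ and $B=\|b\|_\infty+\|b_x\|_\infty$ are arbitrary illustrative choices, not taken from the model) integrates the equality case $y'=2\epsilon\left(-\frac{1}{3}y^{3/2}+\frac{B}{2}y\right)$ and checks that trajectories started above or below the equilibrium $C_L=\left(\frac{3B}{2}\right)^2$ stay below $\max(y(0),C_L)$ and converge to $C_L$:

```python
# Numerical illustration of the Bernoulli-type bound
#   (1/(2*eps)) y' <= -(1/3) y^{3/2} + (B/2) y,
# whose equilibrium is C_L = (3B/2)^2 and which gives
#   sup_t y(t) <= max(y(0), C_L).
def integrate(y0, eps=0.5, B=1.0, dt=1e-3, T=200.0):
    """Forward-Euler integration of the equality case; returns the
    final value y(T) and the running supremum of the trajectory."""
    y, sup_y = y0, y0
    for _ in range(int(T / dt)):
        y += dt * 2.0 * eps * (-(y ** 1.5) / 3.0 + 0.5 * B * y)
        sup_y = max(sup_y, y)
    return y, sup_y

B = 1.0
C_L = (1.5 * B) ** 2              # equilibrium value, = 2.25 for B = 1

# Started above C_L: monotone decay toward C_L, supremum stays at y(0).
y_hi, sup_hi = integrate(y0=10.0, B=B)
# Started below C_L: monotone growth toward C_L, supremum stays at C_L.
y_lo, sup_lo = integrate(y0=0.5, B=B)
```

Both runs end (numerically) at the equilibrium, which is exactly the absorbing bound $C_L$ appearing in the asymptotic estimate.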
According to Proposition 9.4.2 of [@Cholewa2000], the unique solution of Eq. (\[continuumlimitintroduction\]) is in fact a classical solution; that is, all derivatives in Eq. (\[continuumlimitintroduction\]) are classical derivatives. Conclusions and Remarks {#conclusion} ======================= This paper is devoted to the analysis of an initial value parabolic problem with Dirichlet boundary conditions, as given in Eq. (\[continuumlimitintroduction\]), which originates from the description of the continuum limit of the TASEP-LK coupled process. The phase diagram of its steady state, which can be obtained from Eq. (\[ellipticequationintroduction\]), is biophysically important for understanding the corresponding macroscopic and microscopic biological processes, and has previously been studied extensively by Monte Carlo simulations and numerical computations. The main task of this paper is to study the properties of Eqs. (\[continuumlimitintroduction\],\[ellipticequationintroduction\]) mathematically, including the existence of solutions and the stability of the steady state solution. Using the method of upper and lower solutions, we obtained the following conclusions. [**(1)**]{} There exists a weak solution of Eq. (\[ellipticequationintroduction\]) in the space $W^{1,2}(0,1)$, which has the same phase diagram as the one obtained by Monte Carlo simulations and numerical computations. Furthermore, this weak solution is actually a classical one, and lies in $C^\infty[0,1]$. [**(2)**]{} The weak solution of Eq. (\[ellipticequationintroduction\]) is unique in $W^{1,2}(0,1)$. [**(3)**]{} For the time-dependent equation (\[continuumlimitintroduction\]), we have also obtained global existence and uniqueness of the solution in the space $X^\beta$, with $\beta\in[\alpha,1)$ and $\alpha\in(3/4,1)$. Finally, we point out that Eqs.
(\[continuumlimitintroduction\],\[ellipticequationintroduction\]) studied in this paper describe the simplest case of the TASEP-LK coupled process, in which particles travel along a single one-dimensional track and have only one internal biochemical or biophysical state during each forward stepping process. It is also assumed that all particles belong to the same species, and therefore share the same properties, including speed, attachment and detachment rates, initiation rate, termination rate, [*etc.*]{} In biology and physics there are many more general cases: particles may travel along a closed orbit, have different speeds in different domains of the track, possess multiple internal states, or be allowed to switch between different tracks. Moreover, recent experiments showed that particles from different species may travel along the same track. Although rich biophysical properties have been obtained for many of these general TASEP-LK coupled processes by Monte Carlo simulations and numerical computations, almost no mathematical analysis has been carried out to uncover further properties of the corresponding differential equations, or to prove rigorously that the numerically obtained results are correct. In the future, we hope that the methods used in this paper can be generalized to analyze these general cases, or that more sophisticated mathematical methods can be developed to make the analysis more efficient and more powerful. Acknowledgements {#acknowledgements .unnumbered} ================ This study was supported by the Natural Science Foundation of China (Grant No. 11271083). The authors thank Professor Yuan Lou at Ohio State University and Yongqian Zhang at Fudan University for useful discussions. [10]{} J. Howard. . Sinauer Associates, Sunderland, MA, 2001. M. Schliwa. .
Wiley-Vch, Weinheim, 2003. A. O. Sperry. . Humana Press Inc., Totowa, New Jersey, 2007. A. Parmeggiani, T. Franosch, and E. Frey. Phase coexistence in driven one-dimensional transport. , 90(8):086601, February 2003. A. Parmeggiani, T. Franosch, and E. Frey. Totally asymmetric simple exclusion process with Langmuir kinetics. , 70:046101, October 2004. Yunxin Zhang. Domain wall of the totally asymmetric exclusion process without particle number conservation. , 48:607–618, 2010. Yunxin Zhang. Microtubule length dependence of motor traffic in cells. , 35:101, 2012. Herbert Spohn. . Theoretical and Mathematical Physics. Springer-Verlag, New York, 1991. B. Derrida and M. R. Evans. , chapter 14, pages 277–304. Cambridge University Press, Cambridge, England, 1997. D. Mukamel. , pages 237–258. Institute of Physics, Bristol, 2000. G. Schütz. , volume 19, pages 3–251. Academic Press, San Diego, 2001. B. Derrida, M. R. Evans, V. Hakim, and V. Pasquier. Exact solution of a [1D]{} asymmetric exclusion model using a matrix formulation. , 26:1493–1517, 1993. B. Derrida, E. Domany, and D. Mukamel. An exact solution of a one-dimensional asymmetric exclusion model with open boundaries. , 69:667–687, 1992. G. Schütz and E. Domany. Phase transitions in an exactly soluble one-dimensional exclusion process. , 72:277–296, 1993. Joachim Krug. Boundary-induced phase transitions in driven diffusive systems. , 67:1882, September 1991. Jan W. Cholewa and Tomasz Dlotko. . London Mathematical Society Lecture Note Series. Cambridge University Press, 2000. King-Yeung Lam, Yuan Lou, and Frithjof Lutscher. The emergence of range limits in advective environments. , 2016. J. G. Skellam. Random dispersal in theoretical populations. , 38(1-2):196–218, June 1951. Robert Stephen Cantrell and Chris Cosner. Diffusive logistic equations with indefinite weights: population models in disrupted environments. , 112(3-4):293–318, 1989. Robert Stephen Cantrell and Chris Cosner.
Diffusive logistic equations with indefinite weights: population models in disrupted environments [II]{}\*. , 22(4):1043–1064, July 1991. Robert Stephen Cantrell and Chris Cosner. The effects of spatial heterogeneity in population dynamics. , 29(4):315–338, February 1991. Frithjof Lutscher, Edward McCauley, and Mark A. Lewis. Spatial patterns and coexistence mechanisms in systems with unidirectional flow. , 71(3):267–277, May 2007. Yihong Du. , volume 2 of [*Partial Differential Equations and Applications*]{}. Mainland Press, Singapore, 2006. Julian D. Cole. . Ginn and Company, Boston, 1968. N. Fröman and Per Olof Fröman. . North-Holland Publishing Company, Amsterdam, 1965. Carl M. Bender and Steven A. Orszag. . Springer-Verlag, Berlin, Heidelberg, New York, 1999. Neil S. Trudinger. On the comparison principle for quasilinear divergence structure equations. , 57(2):128–133, June 1974. David Gilbarg and Neil S. Trudinger. . Springer-Verlag, Berlin, Heidelberg, New York, 2001. V. Popkov, A. Rakos, R. D. Willmann, A. B. Kolomeisky, and G. M. Schütz. Localization of shocks in driven diffusive systems without particle number conservation. , 67:066117, 2003. Yunxin Zhang. Theoretical analysis of kinesin [KIF1A]{} transport along microtubule. , 152:1207–1221, 2013. K. Nishinari, Y. Okada, A. Schadschneider, and D. Chowdhury. Intracellular transport of single-headed molecular motors [KIF1A]{}. , 95:118101, 2005. Cécile Leduc, Kathrin Padberg-Gehle, Vladimír Varga, Dirk Helbing, Stefan Diez, and Jonathon Howard. Molecular crowding creates traffic jams of kinesin motors on microtubules. , 109:6100–6105, 2012. ![[**Left.**]{} A diagram to illustrate the TASEP-LK coupled process with $N$ binding sites.
Particles move from left to right along a one-dimensional lattice and exclude each other. Particles can bind to the leftmost site $1$ with rate $\alpha$ provided it is unoccupied, and particles at the rightmost site $N$ will leave the lattice with rate $\beta$. Particles at site $i$ will hop forward to site $i+1$ if site $i+1$ is vacant. The Langmuir kinetics (LK) means that particles can detach from the main body of the lattice with rate $\omega_D$, and can attach to any of the internal sites $2\le i\le N-1$ with rate $\omega_A$ provided it is vacant, see [@ParmeggianiPRL2003; @Zhang2012] for more details. [**Right.**]{} Splitting of parameter space as discussed in Subsection \[steadySpecialCases\]. The parameter domains labeled by ([*i*]{})’ are the cases discussed in Subsection \[steadySpecialCases\](i), while the parameter domains labeled by ([*i’*]{})’ are the symmetric cases of those labeled by ([*i*]{})’ (through particle-hole symmetry). []{data-label="TASEP_diagram_figure"}](TASEP_diagram_figure.eps "fig:"){width="8cm"} ![[**Left.**]{} A diagram to illustrate the TASEP-LK coupled process with $N$ binding sites. Particles move from left to right along a one-dimensional lattice and exclude each other. Particles can bind to the leftmost site $1$ with rate $\alpha$ provided it is unoccupied, and particles at the rightmost site $N$ will leave the lattice with rate $\beta$. Particles at site $i$ will hop forward to site $i+1$ if site $i+1$ is vacant. The Langmuir kinetics (LK) means that particles can detach from the main body of the lattice with rate $\omega_D$, and can attach to any of the internal sites $2\le i\le N-1$ with rate $\omega_A$ provided it is vacant, see [@ParmeggianiPRL2003; @Zhang2012] for more details. [**Right.**]{} Splitting of parameter space as discussed in Subsection \[steadySpecialCases\].
The parameter domains labeled by ([*i*]{})’ are the cases discussed in Subsection \[steadySpecialCases\](i), while the parameter domains labeled by ([*i’*]{})’ are the symmetric cases of those labeled by ([*i*]{})’ (through particle-hole symmetry). []{data-label="TASEP_diagram_figure"}](parameter1.eps "fig:"){width="6cm"}\ ![Typical examples of the $\epsilon\to 0$ limit solution $f$ (solid lines), and the corresponding upper and lower solutions $\rho_u$ and $\rho_l$ (dashed lines) of Eq. (\[OriginalPro\]), which is the special case of Eq. (\[ellipticequationintroduction\]) with $\Omega_A=\Omega_D=\Omega$. The parameter values used in the calculations of [**a**]{}-[**f**]{} correspond to the cases discussed in Subsection \[steadySpecialCases\](1-6) respectively. See also domains with labels (1)-(6) in the parameter space of $(\alpha, \beta)$ as given in Fig. \[TASEP\_diagram\_figure\] [**right**]{}. For convenience, we list them as follows. [**a.**]{} $\alpha+\Omega>\beta$, $\beta+\Omega>\alpha$, $\alpha+\beta+\Omega<1$. [**b.**]{} $\alpha<0.5$, $\beta<0.5$, $\alpha+\beta+\Omega>1$. [**c.**]{} $\alpha>0.5$, $0.5-\Omega<\beta<0.5$. [**d.**]{} $\alpha>\beta+\Omega$, $\beta<0.5-\Omega$, $\alpha+\beta+\Omega<1$. [**e.**]{} $\alpha>\beta+\Omega$, $\beta<0.5-\Omega$, $\alpha+\beta+\Omega>1$. [**f.**]{} $\alpha>0.5$, $\beta>0.5$. []{data-label="integrate_figures_Keq1"}](integrate_figures_Keq1.eps "fig:"){width="15cm"}\ ![Typical examples of the $\epsilon\to 0$ limit solution $f$ (solid lines), and the corresponding upper and lower solutions $\rho_u$ and $\rho_l$ (dashed lines) of Eq. (\[OriginalProKneq1\]), or Eq. (\[ellipticequationintroduction\]) with $\Omega_A/\Omega_D=K>1$. The subfigures plotted in [**a**]{}-[**f**]{} correspond to the cases discussed in Subsection \[SecConstructionGeneral\](1-6) respectively. [**a.**]{} [LD+BL$_r^+$]{}. [**b.**]{} [LD+BL$_r^-$]{}. [**c.**]{} [BL$_l^+$+HD]{}. [**d.**]{} [BL$_l^-$+HD]{}. [**e.**]{} [BL$_l^+$+MD]{}.
[**f.**]{} [BL$_l^-$+MD]{}. Here LD means the solution $\rho<1/2$, HD means $\rho>K/(K+1)$, MD means $1/2<\rho<K/(K+1)$, DW means a [**D**]{}omain [**W**]{}all appears in the interval (0, 1), and BL means there exists a [**B**]{}oundary [**L**]{}ayer. Subscript $r$ or $l$ of BL means the boundary layer appears at the right or left boundary. Superscript $+$ or $-$ indicates the monotonicity of $\rho$ in the boundary layer. []{data-label="integrate_figures_Kneq1_1to6"}](integrate_figures_Kneq1_1to6.eps "fig:"){width="15cm"}\ ![The same as in Fig. \[integrate\_figures\_Kneq1\_1to6\], but subfigures plotted in [**a**]{}-[**e**]{} correspond to the cases discussed in Subsection \[SecConstructionGeneral\](7-11) respectively. [**a.**]{} [BL$_l^+$+MD+BL$_r^-$]{}. [**b.**]{} [BL$_l^-$+MD+BL$_r^-$]{}. [**c.**]{} [LD+DW+HD]{}. [**d.**]{} [LD+DW+MD]{}. [**e.**]{} [LD+DW+MD+BL$_r^-$]{}. For better understanding of the constructions of upper and lower solutions $\rho_u$ and $\rho_l$, typical examples of the solution $w$ of Eq. (\[equationwresearch\]), with conditions $w(0.5)=0.5$, $w(0)=1$, and $w(1)=0$ respectively, are plotted in subfigure [**f**]{}. The parameter values used in the calculations are $A=0.25$, and $\epsilon=0.1,0.05,0.01$. []{data-label="integrate_figures_Kneq1_7to11"}](integrate_figures_Kneq1_7to11.eps "fig:"){width="15cm"}\
--- abstract: 'Asymmetric, broad iron lines are a common feature in the X-ray spectra of both X-ray binaries (XRBs) and type-1 Active Galactic Nuclei (AGN). It was suggested that the distortion of the Fe K$\alpha$ emission results from Doppler and relativistic effects affecting the radiative transfer close to the strong gravitational well of the central compact object: a stellar-mass black hole (BH) or neutron star (NS) in the case of XRBs, or a supermassive black hole (SMBH) in the case of AGN. However, alternative approaches based on reprocessing and transmission of radiation through surrounding media also attempt to explain the line broadening. So far, spectroscopic and timing analyses have not yet convinced the whole community to discriminate between the two scenarios. Here we study to what extent X-ray polarimetric measurements of black hole X-ray binaries (BHXRBs) and type-1 AGN could help to identify the possible origin of the line distortion. To do so, we report on recent simulations obtained for the two BH flavors and show that the proposed scenarios behave differently in polarization degree and polarization angle. A relativistic origin for the distortion is found to be more probable in the context of BHXRBs, supporting the idea that the same mechanism should lead the way also for AGN. We show that the discriminating polarization signal could have been detectable by several X-ray polarimetry missions proposed in the past.' address: 'Observatoire Astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l’Université, 67000 Strasbourg, France' author: - 'F. Marin' - 'F. Tamborra' title: 'Probing the origin of the iron K$\alpha$ line around stellar and supermassive black holes using X-ray polarimetry' --- Polarization; radiative transfer; line: profiles; scattering; X-rays: binaries; X-rays: galaxies; galaxies: active.
=0.5 cm Introduction {#Intro} ============ The X-ray spectrum observed in XRBs and AGN is complex and constituted by several components, due to the circumnuclear material which scatters, absorbs and reprocesses photons produced by the disk [@Risaliti2004]. In AGN, the accretion disk produces thermal photons (the so-called “multi-temperature black body emission”), whose spectral energy distribution peaks in the UV band for a $\sim 10^8$ M$_\odot$ SMBH [@Pringle1972; @Shakura1973]. In XRBs, the thermal emission produced by a BH with $\sim 10$ M$_\odot$ peaks in the soft X-ray band (i.e. a few keV, $ibid.$). To produce the hard X-rays observed in both cases, an optically thin, hot corona of thermally distributed electrons is believed to Comptonize (i.e. via the inverse Compton effect) soft photons to higher energies [@Haardt1991; @Haardt1993]. The more energetic component of the X-ray radiation that falls back onto the disk is reflected, while the less energetic part ($\lesssim 10$ keV) is absorbed and partly reprocessed into emission lines. The combination of the relatively high abundance of iron in the disk matter and the associated high Fe fluorescence yield makes the Fe K-shell fluorescence lines the strongest emission features detected in the X-ray waveband. They naturally became one of the primary targets of X-ray observations, as iron lines can be used as probes of matter under extreme conditions [@Nandra1997; @Reynolds1997]. Exploring the accretion disk physics of AGN and X-ray binaries, @Fabian1989 predicted that the iron emission lines should be distorted by Doppler and general relativistic effects acting close to the central BH. The *ASCA*/*SIS* observation of the Seyfert 1 galaxy MCG-6-30-15 confirmed this hypothesis, further strengthened by subsequent systematic surveys of radio-quiet, type-1 AGN, in which broadened and distorted iron lines were found in a few tens of targets [@Reeves2006; @Nandra2007; @delaCalle2010; @Patrick2011; @Patrick2012].
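The mass scaling of the thermal disk peak quoted above (UV for a $\sim 10^8$ M$_\odot$ SMBH, soft X-rays for a $\sim 10$ M$_\odot$ BH) can be checked with a standard order-of-magnitude estimate. The sketch below is our own illustration, not a calculation from the cited works: it evaluates the characteristic disk temperature $T\sim[3GM\dot{M}/(8\pi\sigma R_{\rm in}^3)]^{1/4}$ at the ISCO of a non-rotating BH, assuming Eddington-limited accretion with a radiative efficiency $\eta=0.1$ (both assumptions are illustrative choices).

```python
import math

# CGS constants
G     = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
c     = 2.998e10       # speed of light [cm s^-1]
sigma = 5.670e-5       # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
Msun  = 1.989e33       # solar mass [g]
k_keV = 8.617e-8       # Boltzmann constant [keV K^-1]

def disk_kT_keV(m, eddington_ratio=1.0, eta=0.1):
    """Characteristic disk temperature (keV) for a BH of m solar masses,
    accreting at a fraction of the Eddington rate (illustrative estimate)."""
    M     = m * Msun
    L_edd = 1.26e38 * m                        # Eddington luminosity [erg/s]
    Mdot  = eddington_ratio * L_edd / (eta * c ** 2)
    R_in  = 6.0 * G * M / c ** 2               # ISCO of a non-rotating BH
    T     = (3.0 * G * M * Mdot / (8.0 * math.pi * sigma * R_in ** 3)) ** 0.25
    return k_keV * T

kT_xrb = disk_kT_keV(10.0)   # stellar-mass BH: about a keV (soft X-rays)
kT_agn = disk_kT_keV(1e8)    # SMBH: tens of eV (UV band)
```

Since $M\dot{M}/R_{\rm in}^3 \propto M^{-1}$ under these assumptions, the temperature scales as $T\propto M^{-1/4}$, which is the scaling separating the two BH flavors.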
But what about stellar-mass compact objects? Back in the 1990’s, *ASCA*’s sensitivity and resolution limits prevented a clear detection of the broad red wing of iron lines in black hole X-ray binaries. In the case of neutron stars, the weaker iron line intensities made the detection even less probable. With the launch of *XMM-Newton* and later *Suzaku*, the detection of distorted Fe K$\alpha$ lines was then extended to stellar-mass black holes in binary systems [@Miller2004] and neutron star XRBs [@Bhattacharyya2007], although @Done2010 and @Ng2010 challenged this view, pointing out potential problems due to instrumental effects and/or uncertainties in the spectroscopic analysis. The distorted Fe K$\alpha$ emission line at 6.4 keV is thus a feature shared by stellar and supermassive black hole powered objects [@Nandra2007; @Miller2007; @Cackett2008]. It seems reasonable to assume that the red-wing extension is due to the same broadening mechanisms: Doppler and general relativity effects. However, taking into account the interaction between radiation and the hot coronal plasma above the accretion disk, or cold, distant obscuring material along the observer’s line-of-sight, some authors proposed a different explanation, in which the apparent red wing is actually ascribed to the continuum emission and the line is not really broad [e.g. @Inoue2003; @Miller2008]. Relativistic scenarios usually point toward rapidly spinning BHs, while complex absorption in AGN systematically lowers the estimated BH spin, as the line is much less related to the innermost stable circular orbit (ISCO) radius. In the case of low-mass XRBs (LMXRBs), Comptonization by the corona can cause overestimates of the equivalent width of the line and thus also lower the BH spin estimate [@Makishima1996; @Ng2010].
While current spectral and timing analyses have not yet convinced the whole community regarding the predominant mechanism responsible for the broad Fe K$\alpha$ line, it is the aim of this paper to present a different and independent path: X-ray polarimetry. By adding two more independent observables to the spectroscopic information, i.e. the polarization degree and the polarization position angle, we show how X-ray polarimetry could help to independently settle the issue. Overview of different broadening mechanisms {#Overview} =========================================== The mechanism responsible for the broadening of the iron K$\alpha$ line is still a matter of debate. In the following subsections, we summarize the basic concepts behind the common interpretations of line broadening. The various mechanisms do not actually exclude each other, and possibly all of them contribute together in distorting the line profile. Nonetheless, the scientific community still argues about which mechanism is predominant and which can be neglected. For a detailed and complete discussion of iron line studies we refer the reader to the comprehensive review by @Reynolds2003 and references therein. Relativistic reflection {#Relat} ----------------------- A characteristic double-peaked profile is common for broad iron lines. The spectroscopic split between the two peaks depends on the observer’s viewing angle and is attributed to Doppler shifting due to the orbital motion of the reprocessing matter in the accretion disk. Due to special relativistic aberration, emission from the matter on the approaching portion of the orbit (producing what is commonly referred to as the blue peak) is boosted, enhancing its intensity with respect to the red one. In addition to the Doppler and boosting effects, the complete line profile is redshifted and broadened due to the gravitational potential and the transverse Doppler effect (i.e. special relativistic time dilation).
For a disk seen almost face-on, the latter redshift effects dominate, while for larger inclinations, Doppler effects are dominant. As a direct consequence, from the broadening of the line, in particular from the extension of the red tail to low energies, the spin of the black hole or the radius of a NS can be inferred [@Fabian1989; @Laor1991; @Dovciak2004; @Brenneman2006; @Dauser2013]. Compton scattering broadening in XRBs {#Compton} ------------------------------------- To produce high-energy X-ray photons, Comptonization (i.e. inverse Compton scattering) by a hot corona of thermally distributed electrons is invoked. In AGN, the thermal energy of the coronal electrons can be as large as hundreds of keV. In XRBs, a layer (a “corona”) of electrons with a thermal energy of a few keV is expected to arise. In this case, the 6.4 keV iron line photons can lose energy by Compton scattering with the less energetic electrons. In the multiple scattering regime (i.e. for an optically thick corona) this can result in a redshifting of the line centroid and an asymmetric broadening of the profile toward lower energies (though much weaker than for relativistic reflection), mimicking the distortion produced by relativistic effects. Compton broadening is believed to be dominant in particular systems such as LMXRBs with a NS as the compact object [@Ng2010]. Nonetheless, for any accreting source, a certain amount of broadening should be ascribed to simple Compton scattering of photons traveling through the corona. Complex absorption in AGN {#Abso} ------------------------- Exploring the inner core of AGN is rather challenging, as several opaque media lie in the vicinity of the nucleus. While many XRB spectra can be considered relatively absorption-free, the presence of obscuring circumnuclear matter around the SMBH, as well as outflows, complicates the picture by adding possible sources of absorption along the observer’s line-of-sight.
Taking into account the action of cold matter on AGN spectra, another scenario explaining the asymmetric broadening of the Fe K$\alpha$ line emerged a decade ago [@Inoue2003]. In this prescription, the fluorescent emission is neither intrinsically broad nor predominantly blurred by gravitational effects; rather, the spectral line shape is “carved out” by absorption in more distant, absorbing cloudlets. A distribution of optically thick media, located along the observer’s line-of-sight and partially covering the primary X-ray source, is ultimately responsible for both the flux variability and the apparent broadening of the iron line. The overall extended red wing reduces to the sum of the uncovered continuum radiation, the transmitted and scattered radiation escaping “through the holes between the clouds”, and the absorbed flux [@Miller2008; @Miller2009; @Miller2013]. Spectropolarimetric modeling {#Modeling} ============================ The accuracy of spectroscopic fitting, and thus the constraint on the BH spin, depends on a precise distinction between the shape of the underlying continuum and the iron line. So far, various published models claim to reproduce the spectral shape of the intensity flux and sometimes also the observed time dependence. In this context, we discuss the recent results obtained by our group that follow the independent path of X-ray polarimetry. We illustrate the constraining and discriminating power of X-ray polarimetry by exploring the polarization signatures of the different proposed scenarios. It is not the aim of this paper to produce accurate spectral fits. We rather rely on the prescriptions given by certain models assuming reflection [e.g. @Miniutti2004], Comptonization [e.g. @Pozdnyakov1983; @Sunyaev1984; @Hirano1987; @Haardt1994] and absorption [e.g. @Miller2008; @Miller2009].
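The polarization observables used throughout this work derive from the Stokes parameters $I$, $Q$ and $U$. As a reminder of the standard conventions, a minimal helper converting them into a polarization degree and a position angle could read as follows (an illustrative sketch using the usual definitions, not code extracted from any of the cited models):

```python
import math

def polarization(I, Q, U):
    """Linear polarization degree P (0..1) and position angle psi (degrees)
    from summed Stokes parameters. psi follows the standard convention
    psi = 0.5 * atan2(U, Q), defined modulo 180 degrees."""
    P   = math.hypot(Q, U) / I
    psi = 0.5 * math.degrees(math.atan2(U, Q))
    return P, psi

# Fully polarized beam along the Q > 0 direction:
P0, psi0 = polarization(I=1.0, Q=1.0, U=0.0)   # P = 1, psi = 0 deg
# Half-polarized beam at 45 degrees (pure positive U):
P1, psi1 = polarization(I=1.0, Q=0.0, U=0.5)   # P = 0.5, psi = 45 deg
```

Summing $Q$ and $U$ over detected photons before applying this conversion is what allows Monte Carlo codes to accumulate a net polarization signal even though individual photons are fully polarized.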
LMXRBs: impact of Compton scattering on the Fe K$\alpha$ line {#LMXRBs} ------------------------------------------------------------- To test the impact of broadening due to Compton scattering, we used [MoCA]{}, a Monte Carlo code devoted to the study of spectra and polarization in accreting sources (Tamborra, Matt & Bianchi, private communication). The code is written in IDL, a vectorized and interactive language. The two main features of the code are its modularity (with minor modifications, [MoCA]{} can be applied to different accreting sources) and its full special relativistic treatment, using the Klein-Nishina cross-section for Compton scattering [@Klein1929]. [MoCA]{} belongs to the class of source-to-observer codes; it samples and follows each photon during its journey from the emitting source to the detector (the observer), saving the radiation’s energy, direction, number of scattering events and the two Stokes parameters, Q and U, which describe the linear polarization degree and polarization angle. In this very first application, we do not use the full potential of the code yet, but simply evaluate the effect of pure Compton scattering on line broadening. We want to offer an independent way to understand whether the broad lines observed in NS systems are distorted by relativistic effects [@Cackett2008; @Disalvo2009], or whether the large broadening is overestimated because of instrumental effects not properly taken into account (pile-up), thus reducing the impact of Compton broadening [@Ng2010].   According to a simple emissivity law $F~\propto~R^{-2.5}$ ($F$ being the flux and $R$ the disk radius), we generated monochromatic photons with energy 6.4 keV, arising isotropically from a non-rotating accretion disk with inner radius $R_{\rm i}$ = 6 $R_{\rm G}$ (gravitational radii, $R_{\rm G} = GM/c^2$ for black hole mass $M$) and outer radius $R_{\rm o}$ = 48 $R_{\rm G}$.
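A radial emission law of this kind can be sampled by inversion. Interpreting $F\propto R^{-2.5}$ as a surface emissivity, so that the radial photon density scales as $2\pi R\times R^{-2.5}\propto R^{-1.5}$, the cumulative distribution can be inverted analytically. The snippet below is our own sketch of such a sampler under that interpretation; the actual implementation in [MoCA]{} may differ:

```python
import random

R_IN, R_OUT = 6.0, 48.0          # disk radii in gravitational radii

def sample_radius(rng=random):
    """Draw an emission radius for a surface emissivity F ~ R^-2.5,
    i.e. a radial photon density ~ R * R^-2.5 = R^-1.5, by analytic
    inversion of the cumulative distribution."""
    # CDF(R) ~ R_IN^-0.5 - R^-0.5, normalized on [R_IN, R_OUT]
    a, b = R_IN ** -0.5, R_OUT ** -0.5
    u = rng.random()
    return (a - u * (a - b)) ** -2.0

random.seed(1)
radii = [sample_radius() for _ in range(200000)]
mean_R = sum(radii) / len(radii)   # analytic mean is ~17 R_G for these radii
```

The sampler concentrates emission toward the inner edge, as expected for a steep emissivity profile.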
Radiation travels through a coronal region filled with free electrons of thermal energy kT$_{\rm e}$ = 2 keV. We considered two geometries for the corona: a spherical plasma cloud covering the inner half of the accretion disk (inner and outer radii of the corona being $R_{\rm c,i}$ = 6 $R_{\rm G}$ and $R_{\rm c,o}$ = 24 $R_{\rm G}$, respectively) and a thin gas slab covering the whole disk (height and outer radius of the slab being H = 6 $R_{\rm G}$ and $R_{\rm c,o}$ = 48 $R_{\rm G}$, respectively). For both geometries, we computed an optically thin (Compton optical depth $\tau_{\rm c}$ = 0.1) and an optically thick corona ($\tau_{\rm c}$ = 1), covering a total of four scenarios. For each case, we generated $\sim 10^8$ photons. ![Intensity spectra (counts $\times$ energy) for the slab geometry, for an optically thick (top) and an optically thin (bottom) corona. The percentage of radiation that reached the observer (out of the initial $10^8$ photons emitted by the disk) is shown. We also indicate the sub-percentage of photons that experienced at least one scattering. The line centroid is indicated by the red-dashed line.[]{data-label="Slab_geom_spectra"}](specslab1.pdf "fig:"){width="12cm"} ![Intensity spectra (counts $\times$ energy) for the slab geometry, for an optically thick (top) and an optically thin (bottom) corona. The percentage of radiation that reached the observer (out of the initial $10^8$ photons emitted by the disk) is shown. We also indicate the sub-percentage of photons that experienced at least one scattering. The line centroid is indicated by the red-dashed line.[]{data-label="Slab_geom_spectra"}](specslab01.pdf "fig:"){width="12cm"} ![Intensity spectra (counts $\times$ energy) for the spherical geometry, for an optically thick (top) and an optically thin (bottom) corona. The percentage of radiation that reached the observer (out of the initial $10^8$ photons emitted by the disk) is shown.
We also indicate the sub-percentage of photons that experienced at least one scattering. The line centroid is indicated by the red-dashed line.[]{data-label="Sphere_geom_spectra"}](specsphere1.pdf "fig:"){width="12cm"} ![Intensity spectra (counts $\times$ energy) for the spherical geometry, for an optically thick (top) and an optically thin (bottom) corona. The percentage of radiation that reached the observer (out of the initial $10^8$ photons emitted by the disk) is shown. We also indicate the sub-percentage of photons that experienced at least one scattering. The line centroid is indicated by the red-dashed line.[]{data-label="Sphere_geom_spectra"}](specsphere01.pdf "fig:"){width="12cm"}

                         SLAB                SPHERE
  ---------------------- ------------------- -------------------
  $\tau_{\rm c}$ = 1     E$_c$ = 6.353 keV   E$_c$ = 6.389 keV
                         Width = 559 eV      Width = 271 eV
  $\tau_{\rm c}$ = 0.1   E$_c$ = 6.396 keV   E$_c$ = 6.399 keV
                         Width = 157 eV      Width = 73 eV

  : Line centroid and width for the slab (left column) and the spherical geometry (right column), for optically thick (first row) and optically thin (second row) coronas. The width of the line represents an upper limit to the equivalent width.[]{data-label="Tab_line"}

### Spectral broadening {#LMXRB:spectra}

In Fig. \[Slab\_geom\_spectra\] and Fig. \[Sphere\_geom\_spectra\], we present the four intensity spectra produced by the two geometries in both of the optical depth regimes. According to the energy of the line centroid, summarized in Tab. \[Tab\_line\], the slab corona is found to be more efficient in terms of down-scattering, producing broader lines than the spherical plasma cloud. The related line widths (Tab. \[Tab\_line\]) represent an upper limit to the real equivalent width, as the monochromatic radiation has been produced by the disk without any continuum emission.
In order to compare spectroscopic simulation results with equivalent widths derived from observational analyses, a more complex model in which the line is produced together with the continuum is required; it will be provided in future work. Nonetheless, these qualitative results suggest that the $\sim$100 eV broadening produced by an optically thin corona is not large enough to solely associate the broadening mechanism with Compton scattering. Moreover, for the optically thick slab corona, the broadened line is found to be slightly asymmetric, and its centroid is red-shifted by more than $1\%$ with respect to the initial 6.4 keV emission energy. The spherical corona, on the other hand, seems unable to efficiently shift the emission line at either of the two optical depths.

### Coronal polarization {#LMXRB:polarization}

To push our investigation of the impact of Comptonization on the iron line broadening somewhat further, we present in Fig. \[Slab\_geom\_pol\] and Fig. \[Sphere\_geom\_pol\] the linear polarization degree $P$ (in percentage, averaged over all disk inclinations) and the corresponding photon polarization angle $\Psi$ (in degrees) as a function of energy. By convention, a photon polarization angle of $0^{\circ}$ indicates that the photon’s $\vec E$-vector is perpendicular to the projected disk axis, while $\Psi$ = $90^{\circ}$ means that the $\vec E$-vector is aligned with the disk axis. When the polarization degree is zero, the polarization angle has no physical meaning, as it results from a random superposition of all the incident photons’ polarization planes. Monochromatic seed photons, thermally produced by the disk, are unpolarized, hence the resulting polarization signal is uniquely provided by photons which have experienced at least one scattering event before reaching the observer. There is a proportionality between the radiation energy shift, the number of scatterings and the final polarization degree.
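The magnitude of the per-scattering energy shift underlying this proportionality can be checked with a back-of-the-envelope estimate (ours, independent of [MoCA]{}): the single-scattering Compton formula gives the recoil redshift off a cold electron, while a nonrelativistic thermal term sets the Doppler broadening scale for kT$_{\rm e}$ = 2 keV.

```python
ME_C2 = 511.0        # electron rest energy, keV
E0, KTE = 6.4, 2.0   # line energy and coronal electron temperature, keV

def scattered_energy(e_kev, cos_theta):
    """Compton formula: photon energy after one scattering off an electron at rest."""
    return e_kev / (1.0 + (e_kev / ME_C2) * (1.0 - cos_theta))

# Angle-averaged recoil (isotropic scattering, <cos theta> = 0):
recoil_kev = E0 - scattered_energy(E0, 0.0)    # ~0.08 keV redshift per scattering
# 1-sigma thermal Doppler broadening from the electron motion:
doppler_kev = E0 * (2.0 * KTE / ME_C2) ** 0.5  # ~0.57 keV
```

Both numbers ($\sim$80 eV recoil, $\sim$0.57 keV Doppler broadening per scattering) are of the same order as the centroid shifts and widths listed in Tab. \[Tab\_line\].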
The noise seen in the polarization spectra is due to the Poissonian statistics of the Monte Carlo method and should not be taken into account in the physical interpretation. Finally, we decided to set both $P$ and $\Psi$ to zero if the number of photons recorded per individual energy bin was less than 100 and thus negligible. ![Polarization degree $P$ (black line) and corresponding polarization position angle $\Psi$ (red line) for the same slab coronas as in Fig. \[Slab\_geom\_spectra\].[]{data-label="Slab_geom_pol"}](polslab1.pdf "fig:"){width="12cm"} ![Polarization degree $P$ (black line) and corresponding polarization position angle $\Psi$ (red line) for the same slab coronas as in Fig. \[Slab\_geom\_spectra\].[]{data-label="Slab_geom_pol"}](polslab01.pdf "fig:"){width="12cm"} ![Polarization degree $P$ (black line) and corresponding photon polarization angle $\Psi$ (red line) for the same spherical coronas as in Fig. \[Sphere\_geom\_spectra\].[]{data-label="Sphere_geom_pol"}](polsphere1.pdf "fig:"){width="12cm"} ![Polarization degree $P$ (black line) and corresponding photon polarization angle $\Psi$ (red line) for the same spherical coronas as in Fig. \[Sphere\_geom\_spectra\].[]{data-label="Sphere_geom_pol"}](polsphere01.pdf "fig:"){width="12cm"}   The morphology plays a significant role for the polarization signal, as the anisotropy of the scattering geometry enhances the resulting polarization. Thus, a spherical corona produces more modest linear polarization percentages than a slab geometry. For both optical depths, a spherical morphology shows $P \le$ 10 %, while the polarization degree induced by slab coronas is found to be larger than 10 %. On each side of the emission line centroid, a bump in polarization degree appears. These features are due to the contribution of photons that experienced only one scattering event: the energy shift induced by single scattering is small, but the polarization of the radiation increases.
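The binned quantities plotted in these figures are derived from the recorded per-photon Stokes parameters; the following minimal sketch (in Python with hypothetical array names, whereas [MoCA]{} itself is IDL) illustrates that reduction, including the 100-photon cut described above:

```python
import numpy as np

def bin_polarization(energy, q, u, bin_edges, min_counts=100):
    """Reduce per-photon normalized Stokes parameters to P (fraction) and
    Psi (degrees) per energy bin, zeroing bins with too few photons."""
    counts, _ = np.histogram(energy, bin_edges)
    q_sum, _ = np.histogram(energy, bin_edges, weights=q)
    u_sum, _ = np.histogram(energy, bin_edges, weights=u)
    ok = counts >= min_counts
    p = np.zeros(len(counts))
    psi = np.zeros(len(counts))
    p[ok] = np.hypot(q_sum[ok], u_sum[ok]) / counts[ok]
    psi[ok] = 0.5 * np.degrees(np.arctan2(u_sum[ok], q_sum[ok]))
    return p, psi

# Toy check: 500 photons fully polarized along +Q in one bin, plus a second
# bin with only 50 photons (below the count threshold, hence zeroed).
e = np.concatenate([np.full(500, 6.4), np.full(50, 7.5)])
q = np.concatenate([np.ones(500), np.tile([1.0, -1.0], 25)])
p, psi = bin_polarization(e, q, np.zeros(e.size), [6.0, 7.0, 8.0])
```

The fully polarized bin returns $P$ = 1 and $\Psi$ = 0$^\circ$, while the under-populated bin is set to zero, mirroring the noise cut.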
A second scattering event will shift the photons farther in energy and randomize the polarization vectors, thus decreasing $P$ and resulting in the polarization bumps just left and right of the almost unpolarized line centroid, as shown in Fig. \[Slab\_geom\_pol\] and Fig. \[Sphere\_geom\_pol\].   Additional information can be extracted from the polarization angle $\Psi$ (plotted in red in Fig. \[Slab\_geom\_pol\] and Fig. \[Sphere\_geom\_pol\]). The slab corona, Fig. \[Slab\_geom\_pol\], produces a net polarization position angle aligned with the projected slab symmetry axis, independently of the optical thickness. Since the last scattering occurs mainly in a plane oriented roughly perpendicularly to the disk axis, the resulting polarization angle remains predominantly parallel. Even for the case of multiple reprocessing in an optically thick disk, an energy-independent, parallel $\Psi$ is obtained, which confirms previous simulations in @Goosmann2007 and @Marin2012a. The geometry of a spherical corona (Fig. \[Sphere\_geom\_pol\]) results in a different behavior of $\Psi$ as a function of the optical thickness. In the optically thick scenario, the first scattering event occurs close to the disk surface and the photon’s energy is only slightly shifted. So, between 5 and 7 keV, the corona behaves similarly to a slab geometry, and the net polarization position angle is 90$^\circ$. Secondary and tertiary scatterings modify the radiation’s energy, shifting it below 5 keV or above 7 keV, and randomize $\Psi$, as photons have traveled farther through the symmetric geometry of the sphere; the resulting photon position angle becomes random. The optically thin prescription (Fig. \[Sphere\_geom\_pol\], bottom) presents a switch of the polarization position angle, from 90$^\circ$ to 0$^\circ$, at energies corresponding to the $P$ bumps. It indicates that the main contribution to polarization is given by singly scattered photons polarized parallel to the disk plane.
The photon position angle becomes equal to 0$^\circ$ beyond the polarization bumps as there is no contribution from multiple scattering (the corona being optically thin), and $\Psi$ is given by the few recorded photons with a substantially random polarization position angle.   Using a simple, qualitative, yet accurate numerical simulation, we demonstrated how X-ray polarimetry can be used as a probe independent of spectral analyses. As polarization signatures are strongly correlated with the morphology of the coronal plasma, strong constraints can be derived from future observations.

AGN: the test case of MCG-6-30-15 {#AGN}
---------------------------------

To illustrate the predictive power of X-ray polarimetry in AGN, we focus on the famous candidate MCG-6-30-15 rather than going through a generic model, as precise fitting parameters for different spectroscopic models of the object are given in the literature. This Seyfert 1 galaxy is one of the best examples showing an asymmetric, broad, 6.4 keV iron line. MCG-6-30-15’s red wing has been well established since its *ASCA* discovery by @Tanaka1995 and subsequent long observations with [*XMM-Newton*]{} [@Wilms2001; @Fabian2002] and [*Suzaku*]{} [@Miniutti2007]. Its 4 – 7 keV spectrum is well fitted by applying the light bending scenario with a rotating SMBH [@Wilms2001; @Fabian2002]. However, @Inoue2003 and @Miller2008 [@Miller2009] also claimed to be able to reproduce the excess emission above 10 keV, the spectral shape, and the time-invariant red wing of the iron line in MCG-6-30-15 using absorption models. In the following, we recall the spectropolarimetric predictions computed by @Marin2012b according to the prescriptions given by the authors of the two competing models.
### Spectropolarimetric predictions {#AGN:models}

To evaluate the polarization signature resulting from a reflection-dominated [@Miniutti2004] versus an absorption-dominated model [@Miller2008; @Miller2009], we constructed numerical models for MCG-6-30-15 with the characteristics summarized in Tab. \[Tab\_AGN\].

  MCG-6-30-15 (reflection)                   MCG-6-30-15 (absorption)
  ------------------------------------------ ----------------------------------------------------------
  Photon index $\alpha$: 1                   Photon index $\alpha$: 1
  Inclination: 30$^\circ$                    Inclination: 30$^\circ$
  Lamp-post height: 2.5 $R_{\rm G}$          Number of covering media: 2 (zones 4 & 5)
  Spin $a$: 1                                Zone 4: Covering factor: 62 % ($\tau_{\rm c} \sim 1.5$)
  SMBH mass: 1.5 $\times 10^6 \rm M_\odot$   Zone 5: Covering factor: 17 % ($\tau_{\rm c} \sim 0.02$)

  : Parametrization of the MCG-6-30-15 model according to @Miniutti2004 and @Miller2008 [@Miller2009].[]{data-label="Tab_AGN"}

The reflection scenario consists of a maximally rotating BH with a neutral accretion disk illuminated by an elevated lamp-post, emitting an isotropic, unpolarized primary continuum. The input spectrum ranges from 1 to 100 keV and has a power-law shape $F_{\rm *}~\propto~\nu^{-\alpha}$ with $\alpha~=~1.0$. The re-emitted intensity as a function of incident and re-emission angle is computed by the Monte Carlo radiative transfer code [*NOAR*]{} [@Dumont2000], the local polarization being estimated according to the transfer equations of @Chandrasekhar1960. The local, polarized reflection spectra are then combined with the [*KY*]{}-code [@Dovciak2004], which conducts relativistic ray-tracing between the elevated source, the disk, and the distant observer. Our choice of parameters is in good agreement with the assumptions of @Miniutti2004. The absorption scenario considers a clumpy distribution of Compton-thick, spherical clouds with equal radius and constant density, localized between 1.0 and 1.8 parsecs from the irradiating source.
The source itself is defined as a geometrically thin, emitting slab that represents the so-called hot inner flow. It emits the same unpolarized primary spectrum as in the light bending scenario, with the same power-law parameters. In order to avoid confusion with relativistic disk reflection, the emitting region is insensitive to scattering and does not reach the ISCO. The model of @Miller2007 [@Miller2008] uses five covering zones to reproduce the [*Chandra*]{} and [*XMM-Newton*]{} grating data, but only zones 4 and 5 are responsible for the spectral curvature below 10 keV. We therefore focused on these two zones and used the latest version of the Monte Carlo code [stokes]{} [@Goosmann2007; @Marin2012a] to compute the resulting polarization. Please refer to @Marin2012b for further details about the numerical simulations. ![MCG-6-30-15’s percentage of polarization $P$ and variation of the polarization angle $\Delta\Psi$ with respect to its mean as a function of the energy. MCG-6-30-15 is viewed at an inclination $i$ = $60^\circ$ to the axis of symmetry. $Legend$: a fragmented absorption region (solid line) and a relativistic reflection model with an extreme Kerr SMBH with $a~=~1$ (red dashed line). The figure is taken from @Marin2012b.[]{data-label="SMBH"}](MCG-6-30-15.pdf){width="12cm"}   Fig. \[SMBH\] shows the results obtained for MCG-6-30-15, in terms of percentage of polarization $P$ and variation of the polarization position angle $\Delta\Psi$ with respect to a convenient average of $\Psi$ over the depicted energy band. The relativistic $P$ is found to be at least ten times higher than in the absorption scenario, with a maximum obtained in the Compton hump, where multiple scattering dominates. The spectral shape of $P$ is determined either 1) by the net integration of the polarization over the accretion disk in the relativistic case or 2) by the polarization phase function of electron scattering in the absorption scenario.
In addition to that, the polarization spectra are influenced by dilution from the continuum source, whose power-law index $\Gamma$ ($\Gamma$ = $\alpha$ + 1) is set to favor the emission of soft, unpolarized X-ray photons, explaining the diminution of $P$ below 5 keV. Additional constraints on the origin of the iron line come from the variation of the polarization angle. In the relativistic case, $\Delta\Psi$ varies continuously over the whole energy band, showing a particularly strong feature across the iron line. The energy-dependent albedo and scattering phase function of the disk material explain the smooth behavior of $\Delta\Psi$, which varies by $5^\circ$ around the iron line. The absorption model responds very differently and shows no variation at all. Thus, if the $\Delta\Psi$ spectrum of MCG-6-30-15 is found to vary with photon energy, it would be a strong indication that the light bending model is to be favored.

### Observational prospects {#AGN:MDP}

We saw from Fig. \[SMBH\] that, in principle, X-ray polarization can distinguish between the two scenarios for the origin of the broad iron line in MCG-6-30-15. The polarization degree found in the relativistic case always exceeds that of the complex absorption model by a factor of ten, with the strongest polarization signal to be measured in the 10–100 keV band. Additional constraints, coming from the variation of the polarization angle, could independently give an insight into the more probable scenario. The question so far is: could any X-ray polarimeter have detected such levels of polarization? Knowing that the last measurement of X-ray polarization dates back several decades and that no polarization measurement of MCG-6-30-15 has ever been taken, we now investigate the observational prospects for a variety of proposed X-ray missions that included a polarimeter. ![Minimum detectable polarization for a 1 Ms observation of MCG-6-30-15, for the two scenarios of the broad iron line.
The MDP for [*XIPE*]{} is represented by the maroon line, [*NHXM*]{} in blue and [*IXO*]{} in green. $Legend$: a fragmented absorption region (solid line) and a relativistic reflection model with an extreme Kerr SMBH with $a~=~1$ (red dashed line).[]{data-label="MDP"}](X-Pred-MCG-6-30-15.pdf){width="12cm"}   To explore the potential detection of X-ray polarization in MCG-6-30-15, we focused on three mission concepts, namely [*XIPE*]{} (S-class mission, Soffitta et al. accepted), [*NHXM*]{} (M-class, @Tagliaferri2012), and [*IXO*]{} (L-class, @NRC2010). The three polarimeters of these missions were based upon the Gas Pixel Detector [@Bellazzini2006; @Bellazzini2010], with the [*XIPE*]{} and [*IXO*]{} instruments ranging from 2 to 10 keV, in comparison with the 2 – 35 keV observational band of [*NHXM*]{}. The instrumental minimum detectable polarization (MDP) at the 99% confidence level, assuming that MCG-6-30-15’s background flux is negligible with respect to the source flux, is given by: $$\label{MDP_eq} { MDP~\approx 14\% \left( \frac{S}{1 \rm mCrab} \right)^{-1/2} \left( \frac{\rm exposure \, time}{100,000~\rm s} \right)^{-1/2} }$$ and was computed for a 1 Ms observation with an estimated flux of 2.70 $\pm$ 0.15 mCrab in the 2 – 10 keV band [@Krivonos2007]. We took into account the energy dependence of the detector’s modulation factor and calculated the MDP over the full bandpasses of the instruments. The estimated MDPs for the two scenarios are presented in Fig. \[MDP\], using the following color code: maroon for [*XIPE*]{}, blue for [*NHXM*]{} and green for [*IXO*]{}. We find that the reflection scenario of MCG-6-30-15 is within the polarization detectability of [*XIPE*]{}, so any detection would strongly support the relativistic model. Further indications could be deduced from the variation of the polarization position angle if the observed polarization signal is significant enough.
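As a quick numerical check of this scaling (neglecting the energy dependence of the modulation factor, which the full calculation does include), the 2 – 10 keV numbers for MCG-6-30-15 give an MDP of about 2.7 % for a 1 Ms exposure:

```python
def mdp_percent(flux_mcrab, exposure_s):
    """MDP scaling from the equation above (99% confidence, negligible background)."""
    return 14.0 * flux_mcrab ** -0.5 * (exposure_s / 1e5) ** -0.5

# 2.70 mCrab source flux, 1 Ms exposure:
mdp = mdp_percent(2.70, 1e6)   # ~2.7% minimum detectable polarization
```

This is well below the polarization degrees predicted for the relativistic reflection scenario, consistent with the detectability statements made here for a small polarimetry mission.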
The [*NHXM*]{}’s blue line indicates that the polarization signal originating from the relativistic reflection case can be detected almost across the whole 2 – 35 keV band. It covers, in particular, the iron line domain where the $\Delta\Psi$ signature of strong gravity would be detected. Finally, we show that [*IXO*]{} would have been the only mission to potentially detect the low polarization originating from the absorption model. The distinction between the two competing cases would have come from the detection of either a smooth variation of $\Psi$ (relativistic signature) or no variation at all (transmission through absorbing gas).

Summary and discussion {#Conclusions}
======================

Conducting detailed radiative transfer simulations that include polarization, we explored an independent and complementary approach to timing and spectral analysis in order to probe the origin of the broadening of the iron K$\alpha$ line around stellar and supermassive black holes. For the LMXRB line case, we qualitatively showed that, from a spectroscopic point of view, a broad line can be produced by pure Compton scattering, mostly with an optically thick corona. However, the redshift, even with the most broadening-efficient geometry of a thick corona, is not as large as expected, indicating that the mechanism responsible for the distortion of the line probably has to be ascribed to relativistic effects. Nonetheless, in the case of Compton broadening, a significant amount of polarized line flux is expected and represents a viable and independent way to discriminate and quantify the different ongoing processes. Moreover, the polarization signal strongly depends on the geometry of the scattering material and represents a unique tool to infer it. In our simulations, we demonstrated that the expected polarization signal produced by a spherical corona is almost zero, while the axisymmetric slab geometry yields a more significant polarization degree ($P \ge$ 10 %).
Looking at broad iron line signatures in AGN, we recalled the polarization differences emerging from the two main competing scenarios in MCG-6-30-15. The resulting polarization signal is found to be rather different between the light bending model and the absorption scenario, with the relativistic reflection showing a polarization degree at least ten times higher over the whole energy band. An independent measurement of the variation of the photon polarization angle can help to discriminate between the two interpretations, as distant absorption should cause no variation of $\Psi$, in contrast to the smooth and continuous variation of $\Psi$ across the iron line in the relativistic case. In this note, we demonstrated that the polarization signal of the light bending model is within the detection range of past X-ray mission projects, even for a small, path-finder polarimetry mission. So, any detection of X-ray polarization would strongly support the relativistic scenario.   The lack of broadband spectral observations, covering the full X-ray domain from 0.1 keV to the high-energy cut-off, is one of the reasons preventing a final determination of the main asymmetrical broadening mechanism. Not all of the models may be able to reproduce both the Fe K$\alpha$ line and the Compton hump, suggesting a potential way to disentangle the problem using the future observational campaigns of [*Astro-H*]{} [@Takahashi2010] and [*NuStar*]{} [@Harrison2010]. However, one can also reverse the question by looking at the other end of the X-ray domain: @Fabian2009’s and @Zoghbi2010’s [*XMM-Newton*]{} observations of the narrow-line Seyfert 1 galaxy 1H0707-495 revealed the presence of a broad feature below 1 keV, assumed to be skewed, fluorescent iron L emission.
In the case of a high iron abundance in the accretion disk, Fe L emission features should be detectable [@Fabian2009] and can thus provide another test for the disambiguation between light bending, Compton broadening or complex absorption. Similarly to the case of the Fe K$\alpha$ line, polarimetric measurements around the Fe L emission band could help to favor one of the possible scenarios.   Encouraging prospects can be foreseen from the recently launched [*NuStar*]{} satellite. The first [*NuStar*]{} paper focuses on NGC 1365, a nearby galaxy ($z$ = 0.005) hosting a type-1.8 AGN. Simultaneously observed with [*XMM-Newton*]{}, the broadband spectra (3 – 79 keV) of NGC 1365 gave priority to reflection-dominated models based on temporal and spectral arguments [@Risaliti2013]. However, according to @Miller2013, the absorption model explored by @Risaliti2013 suffers from inaccurate computation. @Miller2013 provided a set of physical parameters associated with a complex model of an anisotropic cloudlet distribution that should correctly reproduce NGC 1365’s spectral shape, excess emission above 10 keV and time-invariant red wing. NGC 1365 is rather different from MCG-6-30-15, as it seems to oscillate between type-1 and type-2 classification due to its extreme inclination and rapid X-ray spectral changes, indicating the presence of cold gas along the observer’s line-of-sight [@Risaliti2005]. In this context, a clear spectral or timing disambiguation might not be simple. We are currently exploring the resulting polarization signal induced by the two opposite models (light bending, @Risaliti2013, versus complex absorption, @Miller2013) and preliminary results (Marin et al., submitted to MNRAS) indicate that the two scenarios show even stronger polarization differences than in the case of MCG-6-30-15 [@Marin2012b]. Bhattacharyya, S., & Strohmayer, T.
E., Evidence of a Broad Relativistic Iron Line from the Neutron Star Low-Mass X-Ray Binary Serpens X-1, ApJL, 664, L103-L106, 2007 Bellazzini, R., & Muleri, F., X-ray polarimetry: A new window on the high energy sky, Nuclear Instruments and Methods in Physics Research A, 623, 766-770, 2010 Bellazzini, R., Baldini, L., Brez, A., et al., Gas pixel detectors for high-sensitivity x-ray polarimetry, Proceedings of the SPIE, 6266, pp. 62663Z 2006 Brenneman, L. W., & Reynolds, C. S., Constraining Black Hole Spin via X-Ray Spectroscopy, ApJ, 652, 1028-1043, 2006 Cackett, E. M., Miller, J. M., Bhattacharyya, S., et al., Relativistic Iron Emission Lines in Neutron Star Low-Mass X-Ray Binaries as Probes of Neutron Star Radii, ApJ, 674, 415-420, 2008 Chandrasekhar, S., Radiative transfer, New York: Dover, 1960 Dauser, T., Garcia, J., Wilms, J., et al., Irradiation of an accretion disc by a jet: general properties and implications for spin measurements of black holes, MNRAS, 430, 1694-1708, 2013 di Salvo, T., D’A[í]{}, A., Iaria, R., et al., A relativistically smeared spectrum in the neutron star X-ray binary 4U 1705-44: looking at the inner accretion disc with X-ray spectroscopy, MNRAS, 398, 2022-2027, 2009 de La Calle P[é]{}rez, I., Longinotti, A. L., Guainazzi, M., et al., FERO: Finding extreme relativistic objects. I. Statistics of relativistic Fe Kα lines in radio-quiet Type 1 AGN, A&A, 524, A50, 22 pp, 2010 Done, C., & Diaz Trigo, M., A re-analysis of the iron line in the XMM-Newton data from the low/hard state in GX339-4, MNRAS, 407, 2287-2296, 2010 Dov[č]{}iak, M., Bianchi, S., Guainazzi, M., Karas, V., & Matt, G., Relativistic spectral features from X-ray-illuminated spots and the measure of the black hole mass in active galactic nuclei, MNRAS, 350, 745-755, 2004 Dumont, A.-M., Abrassart, A., & Collin, S., A code for optically thick and hot photoionized media, A&A, 357, 823-838, 2000 Fabian, A. C., Rees, M. J., Stella, L., & White, N. 
E., X-ray fluorescence from the inner disc in Cygnus X-1, MNRAS, 238, 729-736, 1989 Fabian, A. C., Vaughan, S., Nandra, K., et al., A long hard look at MCG-6-30-15 with XMM-Newton, MNRAS, 335, L1-L5, 2002 Fabian, A. C., Zoghbi, A., Ross, R. R., et al., Broad line emission from iron K- and L-shell transitions in the active galaxy 1H0707-495, Nature, 459, 540-542, 2009 Goosmann, R. W., & Gaskell, C. M., Modeling optical and UV polarization of AGNs. I. Imprints of individual scattering regions, A&A, 465, 129-145, 2007 Haardt, F., & Maraschi, L., A two-phase model for the X-ray emission from Seyfert galaxies, ApJL, 380, L51-L54, 1991 Haardt, F., & Maraschi, L., X-ray spectra from two-phase accretion disks, ApJ, 413, 507-517, 1993 Haardt, F., Maraschi, L., & Ghisellini, G., A model for the X-ray and ultraviolet emission from Seyfert galaxies and galactic black holes, ApJL, 432, L95-L99, 1994 Harrison, F. A., Boggs, S., Christensen, F., et al., The Nuclear Spectroscopic Telescope Array (NuSTAR), SPIE, 7732, 8 pp, 2010 Hirano, T., Hayakawa, S., Nagase, F., Masai, K., & Mitsuda, K., Iron emission line from low-mass x-ray binaries, PASJ, 39, 619-644, 1987 Inoue, H., & Matsumoto, C., Another Interpretation of the Disk-Line Profile for the Seyfert 1 Galaxy MCG-6-30-15, PASJ, 55, 625-629, 2003 Klein, O., & Nishina, T., Uber die Streuung von Strahlung durch freie Elektronen nach der neuen relativistischen Quantendynamik von Dirac, Zeitschrift fur Physik, 52, 853-868, 1929 Krivonos, R., Revnivtsev, M., Lutovinov, A., et al., INTEGRAL/IBIS all-sky survey in hard X-rays, A&A, 475, 775-784, 2007 Laor, A., Line profiles from a disk around a rotating black hole, ApJ, 376, 90-94, 1991 Makishima, K., Tashiro, M., Ebisawa, K., et al., In-Orbit Performance of the Gas Imaging Spectrometer onboard ASCA, PASJ, 48, 171-189, 1996 Marin, F., Goosmann, R. W., Gaskell, C. M., Porquet, D., & Dov[č]{}iak, M., Modeling optical and UV polarization of AGNs. II. 
--- abstract: 'We propose a microscopic theory of the interaction of long-wavelength molecular phonons with electrons in fullerides in the presence of disorder. The phonon relaxation rate and frequency renormalization are discussed. A finite electronic bandwidth reduces the phonon relaxation rate at $q=0$. Electron-phonon coupling constants with molecular modes in fullerides are estimated. The results are in good agreement with photoemission experiments.' address: 'Frank Laboratory of Neutron Physics, JINR, 141980 Dubna, Russia' author: - 'V.L. Aksenov, V.V. Kabanov' title: 'Electron-Phonon Interaction and Raman Linewidth in Superconducting Fullerides.' --- PACS numbers 79.60.Bm, 63.20.-e, 71.38.+i Introduction. ============= Superconducting fullerides are a new type of material, in which the electronic bandwidth is of the same order as the frequencies of the intramolecular modes [@pic; @gelf]. The nonadiabaticity of the electrons, measured by the ratio of the characteristic phonon frequency $\omega$ to the Fermi energy $E_{F}$, is not small: the phonon frequencies are high, $\omega \leq 0.2$ eV, and the bare Fermi energy is low, $E_{F} \leq 0.2$ eV [@pic]. In recent years several different calculations of the electron-phonon coupling constants have been reported for fullerides [@pic1; @antr; @varma; @shlu; @faul]. Some of them yield the strongest coupling with the high-frequency $H_{g}$ modes with a moderate electron-phonon coupling, $\lambda \leq 0.5$. On the other hand, Pickett $et$ $al.$ [@pic1] predicted the strongest coupling with the high-frequency $A_g(2)$ mode and $\lambda \sim 3$. A similar conclusion was reached in Ref. [@stolh]. The difference in the calculated coupling constants is quite remarkable, and may result in a qualitatively different understanding of the nature of superconductivity in fullerides. Therefore, an experimental determination of $\lambda$ is required [@kuz1; @kuz2; @gun; @alekab].
Recently, Raman spectra for metallic fullerides at low temperature have been reported in Refs. [@kuz1; @kuz2]. The linewidths have been analyzed using Allen’s formula for the decay rate $\gamma$ of a phonon into electron-hole pairs, averaged over all phonon momenta [@allen]: $$\bar{\gamma}=\frac {\pi N(0)\lambda \omega^{2}}{2\kappa},$$ where $N(0)$ is the density of states at the Fermi level, $\lambda$ is the electron-phonon coupling constant, and $\kappa$ is the degeneracy of the phonon mode. It is well known that the phonon lifetime is determined by the parameter $qv_{F}/\omega$, where $v_{F}$ is the Fermi velocity and $\omega$ is the frequency of the optical phonon. This means that Allen’s formula for the phonon linewidth does not work for optical phonons in the $q\approx 0$ limit. In a clean single-band system it is not possible for any optical $q\approx 0$ phonon to decay into electronic excitations, because of the conservation of momentum and energy: $\Im \Pi (q,\omega )=0$ for $qv_{F}\ll \omega$ [@schr; @ippat]. This result is based on the Ward identity and is independent of vertex corrections (Eqs. (4.20) and (4.21) of Ref. [@schr]). Moreover, a high-frequency phonon cannot decay into electron-hole pairs if $\omega \geq w$, where $w$ is the electronic half bandwidth. Some special comments should be made concerning the $pentagonal$ $pinch$ $A_{g}(2)$ mode. It shows only a little broadening with doping. The authors of Refs. [@kuz1; @kuz2; @gun] conclude that this is a manifestation of weak coupling with this mode. As mentioned by Gelfand [@gelf], a $q=0$ $A_{g}$ mode shifts the energy levels on $all$ molecules in the solid by the same amount, and therefore leads to only diagonal elements between band states for the deformation potential. The $q=0$ $A_{g}$ modes are thus incapable of decaying into an electron-hole pair, no matter how strong the electron-phonon coupling is.
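As a quick numerical illustration of the scale set by Allen’s momentum-averaged formula, the sketch below evaluates $\bar{\gamma}$; all parameter values are placeholders for illustration, not fitted fulleride numbers:

```python
import math

def allen_linewidth(N0, lam, omega, kappa):
    """Allen's average phonon linewidth:
    gamma_bar = pi * N(0) * lambda * omega^2 / (2 * kappa)."""
    return math.pi * N0 * lam * omega**2 / (2.0 * kappa)

# Placeholder values: N(0) = 10 states/eV, lambda = 0.1, omega = 0.03 eV,
# kappa = 5 for a fivefold degenerate H_g mode.
gamma_bar = allen_linewidth(10.0, 0.1, 0.03, 5)
```

As the text stresses, this momentum-averaged estimate does not apply to the $q\approx 0$ phonons actually probed by Raman scattering.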
Thus, because the Fermi velocity $v_{F}$ is small and the frequencies of the intramolecular modes are high, the ratio $v_{F}/\omega$ is small and the formula for the phonon lifetime in the adiabatic limit $\omega \rightarrow 0$ does not work. The finite contribution to the phonon lifetime for $\omega \rightarrow 0$ appears due to impurity scattering and orientational disorder [@schluter1] and the resulting violation of momentum conservation. In this paper we analyze the phonon relaxation rate and the renormalization of the phonon frequency at $q=0$ due to the electron-phonon interaction in the presence of disorder, taking into account the finite electronic bandwidth. We treat the effect of disorder in terms of a relaxation time $\tau$ and adopt the Fermi-liquid description $E_{F}\tau \gg 1$. Effect of disorder. =================== We describe the electron-phonon interaction in fullerides by the standard Hamiltonian [@lanoo]. It describes the interaction of $t_{1u}$ electrons with the $A_{g}$ and $H_{g}$ intramolecular modes: $$H=\sum_{k,\sigma,i} \epsilon_{k}c^{\dagger}_{k,\sigma,i} c_{k,\sigma,i} +\sum_{k,q,\sigma,i}g_{k} c^{\dagger}_{k,\sigma,i}c_{k+q,\sigma,i} (b^{\dagger}_{q}+b_{-q})+\sum_{q}\omega b^{\dagger}_{q}b_{q},$$ where the first term is the kinetic energy of the electrons in the threefold degenerate $t_{1u}$ band, $c^{\dagger}_{k,\sigma,i}$ is the creation operator of an electron with momentum $k$, spin $\sigma$ and orbital index $i$ ($i = 1,2,3$), and $b^{\dagger}_{q}$ is the creation operator of a phonon with momentum $q$. Here we take into account the momentum dependence of the coupling constant explicitly. For intramolecular modes this dependence is weak, but as we discuss later, it is responsible for the finite contribution of the electron-phonon coupling to the phonon relaxation rate at $q=0$. Note that the fine structure of the $H_{g}$ phonons is usually neglected in analyses of the relaxation rate with Allen’s formula [@kuz2].
We also neglect the degeneracy of the $H_{g}$ modes. This assumption is quite reasonable if the electronic relaxation time is large, $\omega \tau \gg 1$, and if crystal field effects are strong enough that the splitting of the fivefold degenerate modes is large [@kuz2]. This is because the nondiagonal elements of the electronic Green’s function appear only due to impurity scattering and are small if $\omega \tau \gg 1$. It is important that the interaction constant with a single $H_{g}$ submode is strongly momentum dependent. The phonon relaxation rate and frequency renormalization are determined by the real and imaginary parts of the polarization: $$\Pi (q=0,\omega )=i\int \Gamma (k,\omega ^{^{\prime }}+\omega /2,\omega ^{^{\prime }}-\omega /2)g_{k}G(k,\omega ^{^{\prime }}+\omega /2)G(k,\omega ^{^{\prime }}-\omega /2)\frac{d^{3}k\,d\omega ^{^{\prime }}}{(2\pi )^{4}}$$ The equation for the vertex has the form [@agd] (Fig.1): $$\begin{aligned} \Gamma (k,\omega ^{^{\prime }}+\omega /2,\omega ^{^{\prime }}-\omega /2) &=&g_{k}+n_{im}/(2\pi )^{3}\int |u(p-k)|^{2}G(p,\omega ^{^{\prime }}+\omega /2) \\ &&G(p,\omega ^{^{\prime }}-\omega /2)\Gamma (p,\omega ^{^{\prime }}+\omega /2,\omega ^{^{\prime }}-\omega /2)d^{3}p \nonumber\end{aligned}$$ where $u(p-k)$ is the potential of a single impurity, $G(k,\omega )=1/(\omega -\xi -\Sigma (\omega ))$ is the electronic Green’s function averaged over impurities [@agd], $\Sigma (\omega )\simeq -i\frac{\omega }{2|\omega |\tau }$, $\tau$ is the electronic relaxation time, and $n_{im}$ is the concentration of impurities.
We define the function: $$P(k,\omega^{^{\prime}}+\omega/2,\omega^{^{\prime}}-\omega/2)= \Gamma(k,\omega^{^{\prime}}+\omega/2,\omega^{^{\prime}}-\omega/2) G(k,\omega^{^{\prime}}+\omega/2) G(k,\omega^{^{\prime}}-\omega/2).$$ This function satisfies the equation: $$\begin{aligned} P(k,\omega^{^{\prime}}+\omega/2,\omega^{^{\prime}}-\omega/2)= G(k,\omega^{^{\prime}}+\omega/2) G(k,\omega^{^{\prime}}-\omega/2)( g_{k}+ \\ n_{im}/(2\pi)^{3}\int |u(p-k)|^{2} P(p,\omega^{^{\prime}}+\omega/2,\omega^{^{\prime}}-\omega/2) d^{3}p) \nonumber\end{aligned}$$ The main contribution to the integrals comes from momenta near the Fermi surface, $k \sim k_{F}$, and we can expand $g_{k}$ and $|u(k-p)|^{2}$ in spherical harmonics $\phi_{L}(k)$ on the Fermi surface [@zawa]: $$g_{k}=\sum_{L}g_{L}\phi_{L}(k)$$ $$|u(p-k)|^2= \sum_{L,L^{^{\prime}}}\phi_{L}(k) \Gamma_{L,L^{^{\prime}}}\phi_{L^{^{\prime}}}(p)^{*}$$ For the sake of simplicity we suppose that $\Gamma_{L,L^{^{\prime}}} = \delta_{L,L^{^{\prime}}}\Gamma_{L}$. The equations for the relaxation times have the form $1/\tau = 2\pi N(0) n_{im}\Gamma_{0}$, $1/\tau_{L} = 2\pi N(0) n_{im}\Gamma_{L}$, where $N(0)$ is the density of states at the Fermi level. Note that $g_{L=0} \gg g_{L \neq 0}$ for $A_{g}$ modes. On the other hand, for the fivefold degenerate $H_{g}$ modes we expect a strong $k$ dependence of the coupling constant. We define the set of functions $\Lambda_{L}(\omega^{^{\prime}},\omega)$: $$\sum_{L}g_{L}\phi_{L}(k)\Lambda_{L}(\omega^{^{\prime}},\omega)= n_{im}/(2\pi)^{3} \int|u(k-p^{^{\prime}})|^{2}P(p^{^{\prime}},\omega^{^{\prime}}+\omega/2,\omega^{^{\prime}}-\omega/2) d^{3}p,$$ and derive the equation for $\Lambda_{L}(\omega^{^{\prime}},\omega)$: $$\begin{aligned} \sum_{L}g_{L}\phi_{L}(k)\Lambda_{L}(\omega^{^{\prime}}, \omega)=n_{im}/(2\pi)^{3} \sum_{M}(1+\Lambda_{M}(\omega^{^{\prime}},\omega))\int|u(k-p)|^{2}g_{M} \phi_{M}(p) \\ G(p,\omega^{^{\prime}}+\omega/2)G(p,\omega^{^{\prime}}-\omega/2)d^{3}p.
\nonumber\end{aligned}$$ Integrating out the angles in Eq.(10) and taking into account Eq.(8) we obtain: $$\begin{aligned} \Lambda_{L}(\omega^{^{\prime}},\omega)=\hspace{1cm} i/\tau_{L} \frac{1}{\omega+i/\tau_{L}^{*}} \hspace{1cm} |\omega^{^{\prime}}| < |\omega| \\ 0 \hspace{2.5cm} |\omega^{^{\prime}}| > |\omega| \nonumber\end{aligned}$$ where $1/\tau_{L}^{*} = 1/\tau - 1/\tau_{L}$. Note that for $L=0$, $\Lambda_{0}(\omega^{^{\prime}},\omega)=i/(\tau\omega)$. The largest term in the expansion of the coupling constant, $g_{0}$, does not contribute to the $q=0$ phonon relaxation rate. Substituting Eq.(5) into Eq.(3) and taking into account Eqs.(9), (6) and (11) we obtain: $$\begin{aligned} \Pi(0,\omega) = -2 i \sum_{L \neq 0} \frac {g_{L}^{2} N(0)/\tau_{L}^{*}}{\omega+i/\tau_{L}^{*}}.\end{aligned}$$ Here we take into account that $\int d\omega^{^{\prime}}(\Sigma(\omega+\omega^{^{\prime}}/2)-\Sigma(\omega-\omega^{^{\prime}}/2)) = 0$. As a result we obtain the formula for the phonon relaxation rate $\gamma(\omega)$: $$\gamma(\omega) = - \Im\Pi(0,\omega)=2\sum_{L \neq 0}\frac{g_{L}^{2}N(0)\omega \tau_{L}^{*}}{\omega^{2}\tau_{L}^{*2}+1}$$ It follows from Eq.(13) that the phonon relaxation rate at $q \rightarrow 0$ is determined by the parameter $<g_{k}^{2}>-<g_{k}>^{2}$, where $<..>$ denotes the average over the Fermi surface. This formula differs strongly from Allen’s formula [@allen]. ($i$) The phonon relaxation rate is proportional to the $k$-dependent component of the electron-phonon coupling constant averaged over the Fermi surface; the relaxation rate due to electron-phonon coupling is equal to zero if the coupling constant is independent of the electronic momentum $k$. ($ii$) The phonon relaxation rate is proportional to the impurity scattering relaxation rate of the electrons at low temperatures, $1/\tau^{*}$. Therefore, the momentum dependence of the electron-phonon interaction is responsible for the finite Raman linewidth.
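A minimal sketch of Eq.(13), with the harmonic content of the coupling and the effective times $\tau_L^*$ supplied as hypothetical inputs (the numbers below are illustrations, not fitted values):

```python
def relaxation_rate_q0(omega, g2, tau_star, N0):
    """q=0 phonon relaxation rate, Eq.(13):
    gamma(omega) = 2 * sum_{L != 0} g_L^2 N(0) omega tau_L^* / ((omega tau_L^*)^2 + 1).

    g2, tau_star: dicts keyed by harmonic index L != 0, holding g_L^2 and tau_L^*.
    """
    return 2.0 * sum(g2[L] * N0 * omega * tau_star[L] / ((omega * tau_star[L])**2 + 1.0)
                     for L in g2)

# A coupling constant independent of k (only the L = 0 harmonic) gives no linewidth:
gamma_iso = relaxation_rate_q0(0.1, {}, {}, 10.0)
# A k-dependent (L = 1) component of the coupling gives a finite linewidth:
gamma_aniso = relaxation_rate_q0(0.1, {1: 0.01}, {1: 50.0}, 10.0)
```

Note that the rate also vanishes in the clean limit $\tau_L^* \rightarrow \infty$, consistent with the impossibility of a $q=0$ decay without disorder.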
It should be pointed out that a similar formula for the relaxation rate of optical phonons in metals was derived from the kinetic equation in Ref. [@misc] and by the Green’s function technique [@kost]. Note that formula (13) is different from that derived in Ref. [@misc]. A new term proportional to $\Lambda (\omega ^{^{\prime }},\omega )$ appears in the equation for $\Pi (\omega )$ due to the correct averaging of the vertex over impurities. Neglecting this term one can derive the same formula for the relaxation rate as Eq.(18) of Ref. [@misc]. Extensive numerical calculations of the phonon lifetime, using a spherically symmetrical coupling, have been performed in Ref. [@des]. It has been shown that the diagonal component of the polarization is site dependent in the disordered phase. This fact is in agreement with formula (13). Because the $H_{g}$ modes are not spherically symmetrical, the interaction with the five split submodes will have large $L \ne 0$ harmonics on the Fermi surface even in the case of a spherically symmetrical bare interaction. Bandwidth effect. ================= In superconducting fullerides there are a number of molecular modes with frequencies of the order of the bare bandwidth. These are the $pentagonal$ $pinch$ mode $A_{g}(2)$, $\omega \simeq 1500$ cm$^{-1}$, and four $H_{g}$ modes with $\omega \simeq 1200-1600$ cm$^{-1}$. Because of energy conservation these modes cannot decay into an electron-hole pair in the clean system. Note that in the limit $w \ll \omega$ the phonon relaxation rate is equal to 0 in the lowest order in the coupling constant. We use Eqs. (3) and (4) for the polarization and a Lorentzian form of the density of states to take into account the finite bandwidth: $$N(\xi) = \frac {2\nu}{\pi} \frac {w}{\xi^{2}+w^{2}}$$ where $w$ is the effective half bandwidth and $\nu$ is the orbital degeneracy. For the $t_{1u}$ band $\nu = 3$.
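As a sanity check on Eq.(14), the Lorentzian density of states integrates to $2\nu$ (for $\nu = 3$, six states, consistent with a full $t_{1u}$ band). A minimal numerical sketch, with an illustrative value for $w$:

```python
import math

def lorentzian_dos(xi, w, nu=3):
    """Lorentzian density of states, Eq.(14): N(xi) = (2*nu/pi) * w / (xi^2 + w^2)."""
    return (2.0 * nu / math.pi) * w / (xi**2 + w**2)

def midpoint_integral(f, a, b, n=100000):
    """Simple midpoint-rule quadrature."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

w = 0.1  # effective half bandwidth (illustrative, in eV)
total = midpoint_integral(lambda xi: lorentzian_dos(xi, w), -200 * w, 200 * w)
# The analytic integral over all xi is 2*nu = 6, and N(0) = 2*nu/(pi*w).
```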
Using Eq.(14) we can derive the equation for the electronic self-energy averaged over impurities in the ladder approximation [@aek]: $$\Sigma(\omega) = xw^{2} \frac {1}{\omega+iw \omega/|\omega| - \Sigma(\omega)}$$ where $x = \nu\Gamma_{0}n_{im}/w^{2} = 1/2\tau w$ is the dimensionless concentration of impurities. Integrating out the angle in Eq.(10) and taking into account Eq.(8) we obtain the formula for $\Lambda_{L}(\omega^{^{\prime}},\omega)$: $$\Lambda_{L}(\omega^{^{\prime}},\omega) = \frac {x_{L}}{x} \frac {\Sigma(\omega^{^{\prime}}-\omega/2)-\Sigma(\omega^{^{\prime}}+\omega/2)} {\omega+\frac {x-x_{L}}{x} (\Sigma(\omega^{^{\prime}}-\omega/2)-\Sigma(\omega^{^{\prime}}+\omega/2))}$$ where $x_{L} = \nu\Gamma_{L}n_{im}/w^{2} = 1/2\tau_{L} w$. We have used here the integral equation for the electronic self-energy in the ladder approximation [@agd; @aek]. Note that Eq.(16) is equivalent to Eq.(11) if $\Sigma(\omega) =-\frac {i \omega}{2|\omega|\tau}$. The equation for the $L$ component of the polarization has the form: $$\Pi_{L}(\omega) = \frac {-2 i g_{L}^{2}\nu}{\pi x w} \int dy \frac {\Sigma(y-\omega/2)-\Sigma(y+\omega/2)} {\omega+\frac {x-x_{L}}{x} (\Sigma(y-\omega/2)-\Sigma(y+\omega/2))}$$ Taking into account that $E_{F}\tau \simeq w\tau \gg 1$ we obtain: $$\Sigma(\omega) = x w^{2} \frac {1}{\omega+iw \omega/|\omega|}$$ Substituting Eq.(18) into Eq.(17) and integrating out $y$ we derive the formulae for the real and imaginary parts of the polarization: $$\Re\Pi(\omega) = \sum_{L}\frac {2g_{L}^{2}N(0)}{w(\omega/w)^{2} \tau_{L}^{*}} (\frac {\omega\ln{(1+(\omega/w)^{2})}-4w\arctan{(\omega/w)}} {\omega((\omega/w)^{2}+4)}+1)$$ $$\gamma(\omega) = -\Im\Pi(\omega) = \sum_{L}\frac {4g_{L}^{2}N(0)}{w(\omega/w)^{3}((\omega/w)^{2}+4)\tau_{L}^{*}} (\ln{(1+(\omega/w)^{2})}+\omega\arctan{(\omega/w)/w})$$ Eq.(20) reduces to Eq.(13) in the large bandwidth limit $\omega/w \ll 1$.
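The reduction of Eq.(20) to Eq.(13) for $\omega/w \ll 1$ can be checked numerically. The sketch below compares the two expressions for a single $L$ harmonic; all parameter values are arbitrary illustrations, in units where $g_L^2 N(0) = 1$:

```python
import math

def gamma_eq20(omega, w, tau_star, g2N0=1.0):
    """Single-harmonic phonon relaxation rate with finite bandwidth, Eq.(20)."""
    x = omega / w
    bracket = math.log(1.0 + x**2) + x * math.atan(x)
    return 4.0 * g2N0 * bracket / (w * x**3 * (x**2 + 4.0) * tau_star)

def gamma_eq13(omega, tau_star, g2N0=1.0):
    """Single-harmonic large-bandwidth result, Eq.(13)."""
    return 2.0 * g2N0 * omega * tau_star / ((omega * tau_star)**2 + 1.0)

# Large-bandwidth regime: omega/w = 1e-3 with omega*tau^* = 100.
a = gamma_eq20(0.01, 10.0, 1.0e4)
b = gamma_eq13(0.01, 1.0e4)
# a and b agree to better than 0.1% in this regime.
```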
In the opposite limit $\omega/w \gg 1$ the relaxation rate is strongly reduced: $$\gamma(\omega) = -\Im\Pi(\omega) = \sum_{L}\frac {2\pi g_{L}^{2}N(0)w^{3}}{\omega^{4}\tau_{L}^{*}}$$ Conclusion. =========== In conclusion, we analyze the experimental data on Raman scattering in fullerides [@kuz1; @kuz2] using the correct formula for the $q=0$ phonon relaxation rate. Unfortunately, a direct estimate of the coupling constant is practically impossible: it requires the exact form of the angular dependence of the electron-phonon coupling constant on the Fermi surface and the electronic relaxation rate $1/\tau$. However, if we assume that $g_{L}^{2} \sim <g^{2}> \sim \lambda\omega/N(0)$ and use the value for the $H_{g}(1)$ mode from the photoemission experiments [@gun; @alekab], we can calculate the coupling constants for the other 7 $H_{g}$ modes using the formula: $$\gamma_{i}/\gamma_{j} \sim \lambda_{i}/\lambda_{j}$$ If we suppose that $\lambda_{1}/N(0) \simeq 0.02eV$ [@gun; @alekab] for the $H_{g}(1)$ mode, we obtain the coupling constants $\lambda_{i}/N(0)$ for the other 7 $H_{g}$ modes (Table 1). Note that Eq.(22) is valid only for $H_{g}$ modes, because the angular dependence of the coupling constant on the Fermi surface for $A_{g}$ modes is strongly different from that for $H_{g}$ modes, and we do not expect the cancellation of the angular factor in Eq.(13). From Table 1 we can conclude: - Using Eq.(22) and the experimental Raman linewidths we obtain the coupling constants for the $H_{g}$ modes. They are in good agreement with the photoemission data. Note that Allen’s formula underestimates the coupling constants by an order of magnitude for most of the $H_{g}$ modes. - The difference in the coupling constants for the $H_{g}(2)$ and $H_{g}(3)$ modes is probably connected with the fact that in the analysis of the photoemission spectra of $C_{60}^{-}$ the interaction with the $A_{g}(1)$ mode was neglected [@gun; @alekab].
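The scaling in Eq.(22) is simple enough to apply directly. Calibrating to the photoemission value $\lambda_1/N(0) = 0.020$ eV for $H_g(1)$ and taking the Raman linewidths from Table 1, the sketch below reproduces the Eq.(13) column of the table:

```python
# Raman linewidths gamma_i in cm^-1 for the H_g(1)..H_g(8) modes (Table 1).
linewidths = {1: 20, 2: 21, 3: 8, 4: 10, 5: 11, 6: 10, 7: 46, 8: 42}

def couplings_from_linewidths(linewidths, lam_ref=0.020, ref_mode=1):
    """Apply Eq.(22), gamma_i/gamma_j ~ lambda_i/lambda_j, calibrated to the
    photoemission value lambda/N(0) = 0.020 eV for the H_g(1) mode."""
    g_ref = linewidths[ref_mode]
    return {i: lam_ref * g / g_ref for i, g in linewidths.items()}

lam = couplings_from_linewidths(linewidths)
# e.g. lam[7] is 0.046 eV and lam[8] is 0.042 eV, matching the Eq.(13)
# column of Table 1.
```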
- The difference in the estimated constants for the $H_{g}(7,8)$ modes is due to the frequency dependence of the electronic relaxation time $\tau$. Because the interaction with the low-frequency modes is quite strong, we expect a strong frequency dependence of $\tau$, and Eq.(22) is not valid. - Due to the high symmetry of the $A_{g}(1,2)$ modes, the angular dependence of the coupling constants is weak and Eq.(22) does not work. It should be pointed out that the frequency renormalization of these modes is not due to the effects considered in this paper. Indeed, the downshift of the $A_{g}(2)$ mode is about $6$ cm$^{-1}$ per elementary charge on the $C_{60}$. If this downshift were due to the interaction of phonons with band electrons, one would expect the maximum downshift near the half-filled band ($x=3$) and the absence of a downshift for $x=6$. In an isolated molecule there is also a frequency renormalization when the molecule becomes charged. Theoretical estimates of the frequency shift due to the charging of the $C_{60}$ molecule are in reasonable agreement with experiments [@fried]. We have estimated the coupling constants of the conduction electrons with the molecular phonons in superconducting fullerides from Raman experiments. The results are in good agreement with those obtained from photoemission measurements. Note that these constants, with proper account of the polaron effect, lead to the correct values of $T_{c}$, the isotope effect and the pressure dependence $T_{c}(P)$ [@alekab]. We highly appreciate enlightening discussions with A.S. Alexandrov, N.M. Plakida, J. Annett, E.G. Maksimov, O.V. Dolgov and D. Mihailovic. One of us (V.V.K.) thanks the Russian Foundation for Basic Research (Grant 97-02-16705) and the Slovenian Ministry of Research and Technology for financial support and Dragan Mihailovic for hospitality. W.E. Pickett, in Solid State Physics, eds. H. Ehrenreich and F. Spaepen, Academic Press, ${\bf 48}$, 225 (1994). M.P. Gelfand in Superconductivity Review, Edited by P.
Kumar (Gordon and Breach, New York, 1994) Vol. 1, p.103. S.E. Erwin and W.E. Pickett, Phys.Rev. B [**46**]{}, 14257 (1992); ibid. Science [**254**]{}, 842 (1992); W.E. Pickett $et$ $al$, J. Superconductivity(US) ${\bf 7}$, 651 (1994). V.P. Antropov, O.Gunnarsson, A.I. Liechtenstein, Phys.Rev. B, [**48**]{},7551, (1993) C.M.Varma, J.Zaanen, K.Raghavachari, Science, [**254**]{}, 989, (1991) M.Schluter et al, Phys. Rev. Lett. [**68**]{}, 526, (1992); J. Phys. Chem. Solids, [**53**]{},1473, (1992). J.C.R.Faulhaber, D.Y.K.Ko, P.R.Briddon Phys. Rev. B [**48**]{}, 661,(1993). G. Stollhoff, Phys. Rev. B[**44**]{}, 10998, (1991). H. Kuzmany $et$ $al$, Adv.Mater. ${\bf 6}$, 731 (1994). J. Winter, H. Kuzmany, Phys. Rev. B [**53**]{}, 655, (1996). O. Gunnarsson $et$ $al$, Phys.Rev.Lett. ${\bf 74}$, 1875(1995). A.S.Alexandrov, V.V.Kabanov, Pis’ma ZhETF [**62**]{}, 920, (1995). A.S.Alexandrov, V.V.Kabanov, Phys. Rev. B [**54**]{}, 3655, (1996). P.B. Allen, Solid State Commun. [**14**]{}, 937, (1974). P.B. Allen, Phys. Rev. B[**6**]{}, 2577 (1972). S. Engelsberg and J.R. Schrieffer, Phys.Rev. [**131**]{}, 993 (1963). I.P. Ippatova, A.V. Subashiev, ZhETF[**66**]{}, 722, (1974). M.A. Schluter, M. Lannoo, M.F. Needels, G.A. Baraff and D. Tomanek Phys. Rev. Lett, [**69**]{}, 213, (1992) M. Lannoo $et$ $al$, Phys.Rev.B ${\bf 44}$, 12106 (1991). A.A. Abrikosov, L.P. Gorkov, I.E. Dzyaloshinskii, Quantum Field Theoretical Methods in Statistical Physics (Pergamon, New York and Oxford, 1965). A.Zawadowski, M. Cardona, Phys. Rev. B[**42**]{}, 10732 (1990). E.G. Mishchenko, L.A. Falkovsky, ZhETF, [**107**]{}, 936, (1995). V.N. Kostur, Z. Phys. B, [**89**]{}, 149, (1992); E.G. Maksimov, S.V. Shulga, Solid State Commun. v.97, 553, (1996). M.S. Despande, E.J. Mele, M.J. Rice, H.Y. Choi, Phys. Rev. B[**50**]{}, 6993, (1994). A.S. Alexandrov, V.F. Elesin, M.P. Kazeko, Fiz. Tverd. Tela, [**21**]{}, 2062, (1979). B. Friedman, Phys. Rev. 
B [**48**]{}, 17551, (1993)

  -------- ------------- ----------------- -------------------- -------------------- -------------------- --------------------
           $\omega$      $\gamma$[@kuz2]   $\lambda/N(0)$(eV)   $\lambda/N(0)$(eV)   $\lambda/N(0)$(eV)   $\lambda/N(0)$(eV)
           (cm$^{-1}$)   (cm$^{-1}$)       AF[@kuz2]            Eq.(13)              PES[@gun]            PES[@alekab]
  $H(1)$   270           20                0.048                0.020                0.019                0.020
  $H(2)$   432           21                0.020                0.021                0.040                0.038
  $H(3)$   709           8                 0.002                0.008                0.013                0.019
  $H(4)$   773           10                0.003                0.010                0.018                0.018
  $H(5)$   1100          11                0.001                0.011                0.012                0.009
  $H(6)$   1248          10                0.001                0.010                0.005                0.001
  $H(7)$   1425          46                0.004                0.046                0.017                0.000
  $H(8)$   1572          42                0.003                0.042                0.023                0.000
  -------- ------------- ----------------- -------------------- -------------------- -------------------- --------------------

  : Coupling constants obtained from Raman measurements using Allen’s formula (AF), Eq.(13), and from photoemission experiments (PES).
--- author: - | Zihang Dai[^1]\ Carnegie Mellon University\ `dzihang@andrew.cmu.edu` Lei Li$^*$\ Toutiao.com\ `lileicc@gmail.com` Wei Xu\ Baidu Research\ `xuwei06@baidu.com` bibliography: - 'BIB/ref.bib' title: | [<span style="font-variant:small-caps;">CFO</span>]{}: Conditional Focused Neural Question Answering\ with Large-scale Knowledge Bases --- Introduction {#sec:intro} ============ Related Work {#sec:related} ============ Overview {#sec:overview} ======== Proposed [<span style="font-variant:small-caps;">CFO</span>]{} {#sec:model} ============================================================== Review: Gated Recurrent Units {#sec:review_gru} ----------------------------- Model Parameterization {#sec:model_parameterization} ---------------------- Focused Pruning {#sec:subject_labeling} --------------- Parameter Estimation {#sec:training} ==================== Decomposable Log-Likelihood {#sec:decompose_likelihood} --------------------------- Approximation with Negative Samples {#sec:hinge_loss} ----------------------------------- Experiments {#sec:experiments} =========== Dataset and Knowledge Base {#sec:dataset} -------------------------- Evaluation and Baselines {#sec:evaluation} ------------------------ Experiment Setting {#sec:experiment_setting} ------------------ Results {#sec:results} ------- Effectiveness of Pruning {#sec:eval_subject_labeling} ------------------------ Additional Analysis {#sec:additional_analysis} ------------------- Conclusion {#sec:conclusion} ========== [^1]:   Part of the work was done while at Baidu.
--- abstract: 'We study the self-similar structure of electromagnetic showers and introduce the notion of the fractal dimension of a shower. Studies underway of showers in various materials and at various energies are presented, and the range over which the fractal scaling behaviour is observed is discussed. Applications to fast shower simulations and identification, particularly in the context of extensive air showers, are also discussed.' address: 'Department of Physics, Northeastern University, Boston, MA 02115, USA' author: - 'L. A. Anchordoqui[^1], M. Kirasirova$^a$[^2], T. P. McCauley$^a$[^3], T. Paul$^a$[^4], S. Reucroft$^a$[^5], $\,$ and J. D. Swain$^a$[^6]' title: Fractal Electromagnetic Showers --- Introduction ============ One of the most serious problems in the analysis of cosmic ray data is the complex and time-consuming nature of the codes used for shower simulation. In order to try to capture the detailed physics of the processes involved, it is customary to directly simulate [@AIRES; @CORSIKA] the multiplicative branching process whereby an initial particle gives rise to two or more secondary particles, each of which, in turn, initiates what is essentially its own shower, albeit now at lower energy. Such a process can give rise to large fluctuations, and the final distributions of ground particles and their energies (as well as the longitudinal distribution of the shower as a whole) are difficult to model with simple parametrizations unless one is happy to settle for a description of the [*mean*]{} behaviour of the shower and forego knowledge of the fluctuations. Indeed, this is the leading reason that so much Monte Carlo time must be used for shower simulations: there are no simple analytical forms for the relevant distributions which can describe the fluctuations. The issue is a pressing one for experiments collecting large amounts of data which may be difficult to compare against theory in any form other than a large number of simulated events. 
Here we report on the observation that electromagnetic showers display self-similar behaviour which can be described by a multifractal geometry, and describe first steps towards formalizing this concept. Our eventual goal is to describe showers in terms of what we argue here is the relevant geometry: not one of smooth functions, but one which allows for irregular geometries which are better described in terms of fractals. We consider here only electromagnetic showers, but plan to study hadronic showers in future work. Self-Similarity in Electromagnetic Showers ========================================== The idea that an electromagnetic shower should, in some sense, be a fractal is almost obvious. It is generated recursively from the two processes: 1. pair creation: $\gamma\rightarrow {\mathrm{e^+e^-}}$ in the electric field of a nucleus and; 2. Bremsstrahlung: ${\mathrm{e}}^\pm\rightarrow {\mathrm{e}}^\pm \gamma$ as an electron or positron is deflected by the electric field of a nucleus. This is illustrated in figure 1, which shows the particles making up a shower produced by a 100 GeV electron entering a block of aluminum 150 cm long (radiation length 8.9 cm), as simulated using the [geant4]{} program [@GEANT4]. Each final state particle from an interaction effectively initiates its own electromagnetic shower, and each process has a similar cross section to occur in matter. As long as the energies involved are large compared with the energy required to create an electron-positron pair (and thus also large compared to atomic processes such as ionization), each step is much the same as the one before it, but at a reduced energy. Figure 2 shows a slice through the block right at the far end, with the point of intersection of each particle with the slice shown as a black dot whose radius is independent of energy. Here one clearly sees the shower core, with a diminishing density of particles with distance from the centre.
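This recursive branching picture can be caricatured by a Heitler-style toy model: every splitting length, each particle splits into two carrying half its energy, until the energy falls below a critical value. This is only a sketch of the multiplicative structure, not a substitute for a full [geant4]{} simulation:

```python
def toy_shower(E0, E_crit):
    """Heitler-style toy cascade: each generation, particles with E >= E_crit
    split into two particles of energy E/2; the rest stop. Returns the list of
    final particle energies. Energy is conserved exactly by construction."""
    particles = [E0]
    while any(E >= E_crit for E in particles):
        particles = [half for E in particles
                     for half in ((E / 2.0, E / 2.0) if E >= E_crit else (E,))]
    return particles

final = toy_shower(100.0, 1.0)   # 100 GeV primary, 1 GeV critical energy
# Seven generations of doubling: 128 particles of 0.78125 GeV each.
```

The deterministic doubling is what makes the cascade self-similar generation by generation; the fluctuations discussed in the introduction enter once the branchings become stochastic.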
\[fig:shower\] \[fig:slice\] Fractals and Multifractals ========================== There are many ways to characterize self-similar objects, but the most common and well-known way is in terms of fractal dimensions. There are many different concepts of fractal dimension which are useful, and perhaps the most obvious is that of mass dimension, $D_M$. The idea here is to see how the total energy $E_{TOT}(R)$ (considered now as a sort of weight) within a disk of radius $R$ varies as $R$ is changed. If the distribution were one-dimensional (a line of uniform energy deposited in the plane), one would find $$E_{TOT}(R) \propto R^{1}$$ and one would take the exponent in the foregoing equation to be the dimension of the distribution. If the energy were uniformly distributed over the whole plane, one would find $$E_{TOT}(R) \propto R^{2}$$ and conclude again that the exponent in the scaling law for the energy should be interpreted as the dimension of the distribution. In the event that a scaling law of the form $E_{TOT}(R)\propto R^{D_M}$ holds for a non-integer $D_M$, we call $D_M$ the “fractal mass dimension”. A plot of $\log(E)$ as a function of $\log(R)$ will then have a slope in the limit of small $R$ which is $D_M$. Two points are important to keep in mind here: first that there are no true fractals in nature as there are always some smallest and largest value for variables in the problem beyond which scaling behaviour does not hold, and second that one must be careful to watch for systematic effects which can bias estimates of the dimension. Systematic effects which we have had to be wary of include the fact that early in the shower development the central core can contain particles which carry a large fraction of the initial energy and give the radial energy distribution a spike at small $R$ which does not correspond to scaling behaviour. 
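The mass-dimension estimate described above amounts to measuring the log-log slope of $E_{TOT}(R)$ in the scaling region. A minimal sketch, applied to synthetic unit-energy distributions with known dimension rather than to shower data:

```python
import math, random

def mass_dimension(radii, r_lo, r_hi):
    """Two-point estimate of D_M = d log E_TOT(R) / d log R for unit-energy
    particles at the given radial distances, over the window [r_lo, r_hi]."""
    count = lambda R: sum(1 for r in radii if r < R)
    return (math.log(count(r_hi)) - math.log(count(r_lo))) / \
           (math.log(r_hi) - math.log(r_lo))

random.seed(0)
# Uniform points on a unit line segment: E_TOT(R) ~ R, so D_M ~ 1.
line = [random.random() for _ in range(20000)]
# Uniform points in the unit disk: radial CDF ~ R^2, so D_M ~ 2.
disk = [math.sqrt(random.random()) for _ in range(20000)]

d_line = mass_dimension(line, 0.1, 0.9)   # close to 1
d_disk = mass_dimension(disk, 0.1, 0.9)   # close to 2
```

For real shower data the energies are not uniform, and the two-point slope should be replaced by a fit over the scaling region, avoiding the small-$R$ core spike noted above.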
In the case of the electromagnetic shower with the slice taken at the end of the shower at 150 cm, we look at the summed energy (scaled so that the total energy is 1) as a function of the fractional radius (scaled so that the maximum radius is 1). This quantity we denote as $I(R|1)$ for reasons which will become clear later in the text. Plotting logarithms against logarithms (base 10), we find the distribution shown in figure \[fig:P1\]. \[fig:P1\] The first thing to notice is that the curve is reasonably approximated by a straight line at small radii. The second thing to notice is that the whole curve is [*not*]{} a straight line. At large radii we start to reach the physical boundaries of the shower and cannot expect scaling to hold. In fact, even at very small radii, there is some anomalous structure which can be traced to the effects of very energetic particles very close to the core, which give an additional spike of energy to the distribution which cannot be expected to be a part of any overall scaling behaviour. This effect is more pronounced earlier in the shower. The scaling properties of the shower are thus different in different parts of the plane, and in order to quantify this further, we study the scaling behaviour of cumulative moments of the energy distribution, defined for $q>0$ by $$I(R|q) = \frac{\sum_{r_i<R} E_i^q}{\sum_{\mathrm{all\ }i}E_i^q}$$ where $E_i$ is the energy of particle $i$ at radial distance $r_i$, and the sum in the numerator is taken over all particles within a distance $r_i<R$ of the core. The units used are not important, as we are only interested in the average scaling behaviour of the curves as $R\rightarrow 0$. (As discussed earlier in the text, the region of very small $R$ should be avoided for physical reasons, and we will avoid the subtleties of precise numerical analyses in this short communication.)
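The cumulative moments just defined can be computed directly from a list of particle radii and energies; a minimal sketch on a toy three-particle input (synthetic numbers, not shower data):

```python
def cumulative_moment(radii, energies, R, q):
    """I(R|q) = sum_{r_i < R} E_i^q / sum_i E_i^q, for q > 0."""
    total = sum(E**q for E in energies)
    inside = sum(E**q for r, E in zip(radii, energies) if r < R)
    return inside / total

# Toy input: three particles; higher q weights the hard central particle more.
radii = [0.1, 0.5, 0.9]
energies = [4.0, 1.0, 1.0]
i1 = cumulative_moment(radii, energies, 0.6, 1)   # (4 + 1) / 6
i2 = cumulative_moment(radii, energies, 0.6, 2)   # (16 + 1) / 18
```

By construction $I(R|q) = 1$ once $R$ exceeds the largest radius, matching the normalization used in the plots.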
For graphical purposes here, $R$ is normalized so that the particle with the largest radial distance out is at $R=1$ and the moments are defined so that their value at maximum radius is unity. We can then introduce an infinite family[@fractalrefs] of fractal dimensions $D_q$ defined for $q>0$ by $$D_q = \lim_{R\rightarrow 0} \left< \frac{1}{q}\frac{\partial \log I(R|q) }{\partial \log R} \right>$$ with the understanding that the limit must still lie in the scaling region in physical examples. Figure 4 shows the scaling behaviour of moments of the electromagnetic shower corresponding to how the sums of the squares and cubes of the energy grow with distance. \ \[fig:Ptwo\] For a homogeneous and uniform fractal structure we expect the $D_q$ to be equal. If not, then we describe the distribution as multifractal in that it requires more than one fractal dimension in order to characterize it. The associated $D_q$ for small $q$ estimated from finite differences in the scaling region are all approximately equal within the errors in the data here and approximately unity, suggesting a good degree of homogeneity. It is important to keep in mind that the results in this paper are presented for a full, realistic GEANT simulation, and include ionization, delta-ray, and other soft processes, so some care is needed in interpreting the results as if they corresponded to a pure electromagnetic shower generated only by pair creation and Bremsstrahlung (which is, of course, not realizable in nature). The definition of fractal dimensions can also be continued to $q\leq 0$, but this has some subtleties involved with the fact that as $q\rightarrow\infty$ the highest energy particles contribute most, while as $q\rightarrow -\infty$ the lower energy ones dominate. In particular, some care must be used with the $D_q$ for $q<0$ as they give high weights to softer particles which are not part of the hard shower process. 
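A finite-difference version of the $D_q$ estimator above can be checked on a synthetic distribution (the moment definition is repeated so the sketch is self-contained). For unit-energy particles uniform in a disk, $I(R|1) \propto R^2$, so the estimator should return $D_1 \approx 2$; the $q$-dependent weighting only becomes interesting for the strongly varying energies of a real shower:

```python
import math, random

def cumulative_moment(radii, energies, R, q):
    """I(R|q) = sum_{r_i < R} E_i^q / sum_i E_i^q."""
    total = sum(E**q for E in energies)
    return sum(E**q for r, E in zip(radii, energies) if r < R) / total

def D_q(radii, energies, q, r_lo, r_hi):
    """Finite-difference estimate of D_q = (1/q) d log I(R|q) / d log R
    over the scaling window [r_lo, r_hi]."""
    lo = cumulative_moment(radii, energies, r_lo, q)
    hi = cumulative_moment(radii, energies, r_hi, q)
    return (math.log(hi) - math.log(lo)) / (q * (math.log(r_hi) - math.log(r_lo)))

random.seed(1)
# Unit-energy particles uniform in the unit disk: I(R|1) ~ R^2, so D_1 ~ 2.
radii = [math.sqrt(random.random()) for _ in range(20000)]
energies = [1.0] * len(radii)
d1 = D_q(radii, energies, 1, 0.1, 0.9)   # close to 2
```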
These matters, as well as more precise results on dimensions including energy and material dependence will be presented elsewhere[@inprep]. Further Work ============ Clearly space limitations make it impossible to cover the material as completely as one would like, but several points concerning work not discussed here are worth making. First of all, we expect fractal behaviour in all three dimensions, and in this discussion we have neglected the longitudinal scaling behaviour, where the full shower is made of many scaled and translated showers superimposed along the shower axis. In addition, there are clearly angular correlations and fluctuations, and studies can be made at a given fixed radius of the scaling behaviour of the shower as a function of the angular coordinate which we have integrated out in this discussion. The relation of these ideas to the concept of intermittency, especially as studied in hadronic jets has not escaped our notice and is currently under investigation. One of the main goals of this work is to better understand the geometry of electromagnetic (and other) showers in order to try to parametrize them by the appropriate non-smooth basis functions, such as wavelets. Such a parametrization should allow the fast generation of showers without the attendant loss of information concerning large fluctuations[@inprep] which has so far been handled only by the use of enormous computational resources. Acknowledgements ================ We would like to thank the US National Science Foundation and CONICET, Argentina for support. We would also like to thank our collaborators on the Pierre Auger Project, as well as on L3 and CMS for useful discussions on electromagnetic calorimetry. [99]{} S. Sciutto, [*Air Shower Simulations with the*]{} [aires]{} [*system*]{}, in [*Proc. XXVI International Cosmic Ray Conference*]{}, (Eds. D. Kieda, M. Salamon, and B. Dingus, Salt Lake City, Utah, 1999) vol.1, p.411, at . D. 
Heck [*et al.*]{}, [corsika]{} [*(COsmic Ray Simulation*]{} [*for KASCADE)*]{}, FZKA6019 (Forschungszentrum Karlsruhe) 1998; updated by D. Heck and J. Knapp, FZKA6097 (Forschungszentrum Karlsruhe) 1998. See, for example, “The Science of Fractal Images”, eds. H. Peitgen and D. Saupe, Springer-Verlag, 1988. The authors, papers in preparation. [^1]: doqui@hepmail.physics.neu.edu [^2]: kirasirova@hepmail.physics.neu.edu [^3]: mccauley@hepmail.physics.neu.edu [^4]: tom.paul@hepmail.physics.neu.edu [^5]: stephen.reucroft@cern.ch [^6]: john.swain@cern.ch
--- author: - The ATLAS Collaboration bibliography: - 'HtautauPaperJHEP.bib' title: '**Search for the Standard Model Higgs boson in the $\boldsymbol{H\to \tau^+\tau^-}$ decay mode in $\boldsymbol{\sqrt{s}=7 }$ $pp$ collisions with ATLAS**' ---
--- abstract: 'Although various linear log-distance path loss models have been developed, advanced models are required to represent the path loss more accurately and flexibly for complex environments such as urban areas. This letter proposes an artificial neural network (ANN) based multi-dimensional regression framework for path loss modeling in urban environments in the 3 to 6 GHz frequency band. An ANN is used to learn the path loss structure from the measured path loss data, which is a function of distance and frequency. The effects of the network architecture parameters (activation function, number of hidden layers and nodes) on the prediction accuracy are analyzed. We observe that the proposed model is more accurate and flexible compared to the conventional linear model.' author: - 'Chanshin Park,  Daniel K. Tettey, and Han-Shin Jo [^1]' title: Artificial Neural Network Modeling for Path Loss Prediction in Urban Environments --- Path loss, Multi-dimensional Regression, Artificial Neural Network (ANN), Mean square error (MSE), Machine Learning Introduction ============ Path loss is the decrease in the strength of a radio signal as it propagates through space. Since radio receivers require a certain minimum power (sensitivity) to be able to successfully decode information, path loss prediction is essential in mobile communications network design and planning. Empirical path loss prediction models [@hata]-[@HJO] have been developed for this purpose. Many existing path loss models are empirically derived by assuming a linear log-distance model and determining the model parameters through linear regression analysis of the measured data. However, linear regression models are not the best fit in all regions. For example, the measured data are well represented by linear regression in Fig. 1(a), but not for distances less than 200 m in Fig. 1(b).
A machine learning approach to path loss modelling is expected to provide a better model which can generalize well to the propagation environment, since the model is learned by training on data collected from the environment. The literature [@SUB-URBAN]-[@RURAL] provides path loss prediction using artificial neural network (ANN) models. The ANN models provide more precise estimation than the empirical models. The studies in [@SUB-URBAN],[@URBAN] developed ANN prediction models for urban and suburban environments, but did not present a multi-dimensional model of distance and frequency. The authors in [@RURAL] showed that a simple ANN architecture (a feed-forward network with one hidden layer and few neurons) has better path loss prediction accuracy compared to a complex architecture in rural environments. Motivated by this, we develop an ANN model for multi-dimensional regression of the path loss that has a joint relation with distance and frequency in urban environments. Considering the complex propagation due to the various types and distribution of buildings in urban areas, we design the ANNs with three different activation functions (rectifier, hyperbolic tangent, and logistic sigmoid). The ANNs learn features of the multi-dimensional path loss using the measured data for areas A and B presented in [@HJO], and their accuracies are compared with each other and with the linear model (revised from the COST-231 Hata model) proposed in [@HJO]. Artificial Neural Network Approach {#sec:Pathloss} ================================== An ANN is a non-linear regression system motivated by the mechanism of learning and generalizing the relationship between input and output through a weighted network of neurons. An ANN model can be more effective in estimation performance than a polynomial regression model [@Biemacki] and can handle more dimensions than a look-up table method [@Meijer].
Network Architecture -------------------- The most common type of ANN is the multilayer perceptron neural network (MLP-NN), in which multiple neurons are arranged in layers, starting from an input layer, followed by hidden layers, and ending with an output layer. The output of each node in a layer is a weighted sum of its inputs passed through an activation function. $$\begin{aligned} \label{eq:linmodel} \mathbf{A}_{n,m}^l&=&\begin{cases} \sum_{k=1}^D\mathbf{X}_{n,k} \cdot \mathbf{W}_{k,m}^l& \text{for $l=1$} \\ \sum_{k=1}^M\mathbf{Z}_{n,k}^{l-1} \cdot \mathbf{W}_{k,m}^l & \text{for $l=2\cdots L-1$} \\ \sum_{k=1}^M\mathbf{Z}_{n,k}^{l-1} \cdot \mathbf{W}_{k,1}^l & \text{for $l=L$} \end{cases} \\ \mathbf{Z}_{n,m}^l &=& H^l(\mathbf{A}_{n,m}^l),\end{aligned}$$ where $\mathbf{W}_{k,m}^l$ is the entry in the $k$th row and $m$th column of the weight matrix of the $l$th layer, given inputs $\mathbf{X}_{n,k}$ with the number of features $D$=2 (distance, frequency), and $H^l$ is the activation function for the $l$th layer, applied to the linear output $\mathbf{A}_{n,m}^l$. Fig. \[fig:MLP-NN\] shows the abstract structure of the MLP-NN. ![Block diagram of the multilayer perceptron neural network (MLP-NN).[]{data-label="fig:MLP-NN"}](fig17.png){width="3.5in"} We evaluate three commonly used activation functions: the rectifier, the logistic sigmoid, and the hyperbolic tangent function.
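A minimal numerical sketch of this layered computation (biases are omitted, matching the equations above; the array names are illustrative, not from the paper's implementation):

```python
import numpy as np

def relu(a):
    # Example activation function; any of the three discussed works here.
    return np.maximum(0.0, a)

def mlp_forward(X, weights, activation=relu):
    # Each hidden layer l computes Z^l = H^l(Z^{l-1} @ W^l); the final
    # layer is a plain weighted sum (linear output), as in the text.
    Z = X
    for W in weights[:-1]:
        Z = activation(Z @ W)
    return Z @ weights[-1]

# Tiny demo: 1 sample with D = 2 features, one hidden layer of 2 nodes.
X_demo = np.array([[1.0, 2.0]])
W_hidden = np.eye(2)                 # passes positive inputs through ReLU
W_out = np.array([[1.0], [1.0]])     # linear output layer
y_demo = mlp_forward(X_demo, [W_hidden, W_out])
```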
The rectified linear unit (ReLU) [@Rectifier] is a ramp function that allows the model to easily obtain sparse representations, given by $$\label{eq:relu} H(a) = \max(0, a).$$ The logistic sigmoid function is a non-linear activation function that yields a smooth thresholding curve, given by $$\label{eq:sigmoid} H(a) = \frac{1}{1 + e^{-a}}.$$ The hyperbolic tangent function is a differentiable non-linear activation function that maps negative inputs to negative values and inputs near zero to values near zero, given by $$\label{eq:tanh} H(a) = \frac{e^a - e^{-a}}{e^a + e^{-a}}.$$ The logistic sigmoid and hyperbolic tangent functions are bounded, non-linear, monotonic, and continuously differentiable; the ReLU is likewise non-linear and monotonic, but unbounded and not differentiable at zero. The universal approximation theorem [@APROX] shows that a feedforward neural network with three layers (i.e., a single hidden layer) and a finite number of nodes can approximate any continuous function to any desired accuracy, under mild assumptions on the activation function. However, some highly nonlinear problems need more hidden layers and nodes, since the achievable degree of nonlinearity depends on the number of layers and nodes. Based on these considerations, the ANN learning was executed on a single-hidden-layer architecture except for the ReLU model, since the ReLU model shows more stable results in deeper and larger network configurations; more details can be found in Section III. The objective of the training is to minimize the loss function given by $$\label{eq:lossfunc} J(\mathbf{W}) = \frac{1}{N}\sum_{n=1}^N|\hat{\mathbf{y}}_{n} - \mathbf{y}_{n}|^2 + \frac{\alpha}{2}||\mathbf{W}||_2^2,$$ where $J(\mathbf{W})$ is the loss function for weights $\mathbf{W}$, $\hat{\mathbf{y}}_{n}$ is the predicted value for given weights $\mathbf{W}$, and $\mathbf{y}_{n}$ is the measured path loss value.
$\frac{1}{N}\sum_{n=1}^N|\hat{\mathbf{y}}_{n} - \mathbf{y}_{n}|^2$ is the mean square error (MSE), $\frac{\alpha}{2}||\mathbf{W}||_2^2$ is an L2-regularization term that penalizes the ANN model for overfitting, and $\alpha$ is the magnitude of the invoked penalty. Artificial Neural Network Learning ---------------------------------- The fully connected MLP-NN is a basic type of neural network in the multilayer perceptron (MLP) class. The MLP-NN consists of several hidden layers of nodes; a single-hidden-layer network structure is depicted in Fig. \[fig:MLP-NN\]. The ANN learning is obtained by updating the weights along the MLP-NN in consecutive iterations of feedforward and backpropagation procedures. The feedforward computation is performed via the following equation: $$\begin{aligned} \label{eq:feedforward} \mathbf{Z}^{L-1} = H^{L-1}(H^{L-2}(\cdots H^{1}(\mathbf{X}\mathbf{W}^1))),\end{aligned}$$ where $\mathbf{W}^l$ are the weights for the connections between layers $l-1$ and $l$, $H^l$ is the activation function, $\mathbf{A}^l_{n,m}$ is the linear output, and $\mathbf{Z}^l_{n,m}$ is the activation output at the $l$th layer. The prediction $\hat{\mathbf{y}}_{n}$ is the final output of the feedforward procedure, $\mathbf{A}_{n,1}^L$, which is the linear output $\mathbf{Z}_{n,m}^{L-1}\cdot \mathbf{W}^L_{m,1}$ of the last layer without an activation function, as given by $$\label{eq:output} \hat{\mathbf{y}}_{n} = \mathbf{A}_{n,1}^L = \mathbf{Z}_{n,m}^{L-1}\cdot \mathbf{W}^L_{m,1}$$ After the feedforward phase, adaptive updates of the weights on each connection are conducted by backpropagation. Starting from initial random weights, backpropagation repeatedly updates these weights by gradient descent on the loss function with respect to the weights.
$$\begin{aligned} \label{eq:backprop} \frac{\partial J}{\partial \mathbf{W}_{m,n}^l} &=& \frac{\partial J}{\partial \mathbf{A}^l_{m,n}}(\mathbf{Z}^{l-1}_{m,n})\cr\cr \frac{\partial J}{\partial \mathbf{A}^l_{m,n}} &=& \begin{cases} (\mathbf{W}^{l+1}_{k,m}\frac{\partial J}{\partial \mathbf{A}_{m,n}^{l+1}})\circ H^{\prime}(\mathbf{A}^l_{m,n})\quad (l < L)\\ \nabla J \circ H^{\prime}(\mathbf{A}^L_{n,1}) \quad (l = L) \end{cases}\end{aligned}$$ where $x \circ y = (x_1y_1,\dots,x_n y_n)$ is the Hadamard product, $H^{\prime}(\mathbf{A}^L_{n,1}) = \frac{\partial H}{\partial \mathbf{A}^L_{n,1}}$ is the derivative of the corresponding activation function, and $\nabla J = \frac{\partial J}{\partial H}$ is the derivative of the loss function. Finally, the weights are updated as follows. $$\label{eq:update} \mathbf{W}^l_{m,n} \leftarrow \mathbf{W}^l_{m,n} - \lambda \frac{\partial J}{\partial \mathbf{W}^l_{m,n}} = \mathbf{W}^l_{m,n} - \lambda \frac{\partial J}{\partial \mathbf{A}^l_{n,m}}(\mathbf{Z}^{l-1}_{n,m}),$$ where $\lambda$ is the learning rate, the hyperparameter controlling the step size in the parameter updates. This backward pass propagates from the output layer to the previous layers, updating the weights to minimize the loss as in (\[eq:update\]). After backpropagation reaches the first layer's weights, another iteration of the feedforward and backpropagation process begins, and the procedure continues until the weight values converge to within a certain tolerance level, another hyperparameter of the model. For the backpropagation optimization, a quasi-Newton method, which iteratively approximates the inverse Hessian with $O(N^2)$ time complexity, is applied. The Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method [@Lbfgs.Nocedal] [@Lbfgs.BNS] [@Lbfgsb.MN] is the most practical batch variant of the quasi-Newton algorithm, and we use the SciPy implementation of it.
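As a toy illustration of these gradient formulas (synthetic data, a single tanh hidden layer, no biases; this is a sketch of the chain rule above, not the actual training setup), the backpropagated gradients can be checked against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))   # toy inputs: 8 samples, D = 2 features
y = rng.normal(size=(8, 1))   # toy targets (synthetic, not measured data)
W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> linear output weights

def forward(W1, W2):
    A1 = X @ W1               # linear output of the hidden layer
    Z1 = np.tanh(A1)          # activation output
    return A1, Z1, Z1 @ W2    # final layer is linear, as in the text

def mse(W1, W2):
    return np.mean((forward(W1, W2)[2] - y) ** 2)

# Backpropagation: Hadamard product of the upstream gradient with H'(A).
A1, Z1, y_hat = forward(W1, W2)
dA2 = 2.0 * (y_hat - y) / len(y)       # dJ/dA at the output layer
gW2 = Z1.T @ dA2                       # dJ/dW2
dA1 = (dA2 @ W2.T) * (1.0 - Z1 ** 2)   # tanh'(a) = 1 - tanh(a)^2
gW1 = X.T @ dA1                        # dJ/dW1

# Finite-difference check of one gradient entry
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
fd = (mse(W1p, W2) - mse(W1, W2)) / eps
```

A plain gradient-descent step would then be `W1 -= lam * gW1`; L-BFGS replaces this step with a quasi-Newton update built from the same gradients.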
Data Preprocessing ------------------ In theory, an ANN is a learning model whose accuracy depends on the training data fed to it. Aside from algorithmic and tuning options, a well-distributed, sufficient, and accurately measured data set is a prerequisite for acquiring an accurate model. As Figs. \[fig:relu\], \[fig:sigmoid\], and \[fig:tanh\] show, within each of the ANN models the distribution and shape of the scattered learning data points can produce significantly different models, even with the same activation function. In this perspective, data preprocessing is an essential step toward obtaining an ANN learning model. To prepare the learning data, all the measured data was divided into three sets, learning (80%), validation (10%) and testing (10%), by uniform random sampling. The validation set is for adjusting the hyperparameters for model optimization. The objective of learning is to find the optimal weights on the given learning data which enable precise prediction. A key factor in obtaining the right weights is normalizing the magnitudes of the input values, which minimizes side effects from their different scales. For instance, the same increment of 0.0005 applied to inputs of magnitude 0.001 and 0.1 produces quite different relative changes, 0.5 and 0.005, which translate into very different contributions to the gradient. If the input features are not properly normalized, backpropagation, with its iterative partial derivatives throughout the MLP-NN, risks deriving biased weights. Based on the propagation characteristics of the input features, and to balance their different scales, we applied a logarithmic transformation to the frequency (MHz) as well as the distance (m) values.
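A sketch of this preprocessing (the logarithmic transform of distance and frequency, and the 80/10/10 split by uniform random sampling; the array names and demo values are illustrative, not the measured data):

```python
import numpy as np

def preprocess_and_split(distance_m, freq_mhz, path_loss_db, seed=0):
    # Log-transform both input features to balance their scales,
    # then split 80/10/10 into learning/validation/test sets.
    X = np.column_stack([np.log10(distance_m), np.log10(freq_mhz)])
    y = np.asarray(path_loss_db, dtype=float)
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    n_tr, n_va = int(0.8 * n), int(0.1 * n)
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])

# Demo on synthetic arrays (illustrative values only).
d_m = np.linspace(10, 1000, 100)
f_mhz = np.full(100, 3400.0)
pl_db = np.linspace(60, 140, 100)
train, val, test_set = preprocess_and_split(d_m, f_mhz, pl_db)
```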
[![ANN ReLU model for area A (top) and B (bottom).[]{data-label="fig:relu"}](fig21c.png "fig:"){width="2.5in"}\[fig:reluA\]]{} [![ANN ReLU model for area A (top) and B (bottom).[]{data-label="fig:relu"}](fig21f.png "fig:"){width="2.5in"}\[fig:reluB\]]{} [![ANN sigmoid model for area A (top) and B (bottom).[]{data-label="fig:sigmoid"}](fig21d.png "fig:"){width="2.5in"}\[fig:sigmoidA\]]{} [![ANN sigmoid model for area A (top) and B (bottom).[]{data-label="fig:sigmoid"}](fig21g.png "fig:"){width="2.5in"}\[fig:sigmoidB\]]{} [![ANN tanh model for area A (top) and B (bottom).[]{data-label="fig:tanh"}](fig21e.png "fig:"){width="2.5in"}\[fig:tanhA\]]{} [![ANN tanh model for area A (top) and B (bottom).[]{data-label="fig:tanh"}](fig21h.png "fig:"){width="2.5in"}\[fig:tanhB\]]{} ![Linear model \[4\] vs the ANN models.[]{data-label="fig:rmsehist"}](fig18a.png){width="2.5in"} ![Linear model \[4\] vs the ANN models.[]{data-label="fig:rmsehist"}](fig18b.png){width="2.5in"}

  area   frequency   \[4\]     ReLU      Sigmoid   Tanh
  ------ ----------- --------- --------- --------- ---------
  A      3.4 GHz     7.81199   6.74917   6.73545   6.68894
  A      5.3 GHz     7.18454   6.93408   6.67481   6.69689
  A      6.4 GHz     8.03397   7.59049   7.47268   7.48575
  A      Overall     7.69133   7.0961    6.96451   6.96154
  B      3.4 GHz     8.10528   6.62416   6.29166   6.33517
  B      5.3 GHz     7.37937   5.93431   5.72666   5.68217
  B      6.4 GHz     7.92057   5.76464   5.79612   5.73346
  B      Overall     7.79879   6.1065    5.93387   5.91315

  : PATH LOSS PREDICTION PERFORMANCE (RMSE)[]{data-label="tab:table1"}

Experimental Results {#sec:rmseresult} ==================== This section describes experimental results for the network configuration variation and the path loss prediction on the two real-world data sets measured in [@HJO] from two regions in Korea, named area A and area B. The performance measure for both experiments is the root mean square error (RMSE) between the actual measured value and the prediction made by the ANN learning models. In total, 17,728 out of 22,160 samples are used for training ($N_{A} = 11,100, N_{B} = 8,864$).
From the network architecture perspective, three key factors are considered: the type of activation function, the number of hidden layers, and the number of hidden nodes in each layer. A key element in the ANN configuration is the activation function, which determines the nonlinear transformation of the given learning data. Figs. \[fig:relu\], \[fig:sigmoid\], and \[fig:tanh\] show that the shape of the model varies with different activation functions. In order to find the optimal number of layers for a given activation function, we examined the RMSE trend while changing the number of hidden layers. The RMSE values are computed on the validation set, which was initially sampled separately from the learning data. As a result, we can see from Fig. \[fig:annlearningLayer\] that, compared with the logistic sigmoid and hyperbolic tangent ANN models, the performance of the ReLU ANN model remains stable as the layers get deeper. In other words, the logistic sigmoid and hyperbolic tangent ANN models can easily build up nonlinearity with a few layers and become underfitted (higher RMSE) as more layers are added ($L_{Sigmoid} = 3, L_{Tanh} = 3$). Furthermore, based on Fig. \[fig:annlearningLayer\], the RMSE trend over the number of layers shows better prediction (lower RMSE) with more layers, so an extra 6 hidden layers (8 layers in total) are applied only to the ReLU model ($L_{ReLU} = 8$). As for increasing the number of hidden nodes in the single hidden layer, Fig. \[fig:annlearningNode\] shows that more than 20 nodes in the single hidden layer ensures stabilized performance ($M_{(ReLU, Sigmoid, Tanh)}=40$). In order to minimize the variance from hyperparameters in learning the ANN models, the L-BFGS algorithm was mainly used, which is a batch-type optimization method, in contrast to stochastic mini-batch approaches. For reference, the fixed hyperparameters of learning rate, number of epochs, and tolerance are set to 0.001, 1000, and 0.00001, respectively, throughout the course of the experiments.
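The RMSE performance measure, and the relative improvement over the reference model \[4\] derived from Table \[tab:table1\], can be computed as in the following short sketch (the function names are ours):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error between measurements and predictions.
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def improvement_pct(rmse_ref, rmse_ann):
    # Percentage RMSE improvement of an ANN model over the reference model.
    return 100.0 * (rmse_ref - rmse_ann) / rmse_ref
```

Applied to the overall area A values in the table, this reproduces the improvement percentages quoted in the text.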
The other experiment evaluates the path loss prediction over the test set using the ANN learning models, with RMSE as the performance metric. In area A, the ANN models show slightly better performance compared with [@HJO], by 7.74%, 9.45%, and 9.49% for the ReLU, logistic sigmoid, and hyperbolic tangent ANN models, respectively. The improvement in area B was 21.70%, 23.91%, and 24.18%. In the learning data distribution for area B, the path loss drops more steeply at short distances than at longer ones, so the prediction performance of the ANN models is much improved compared to the more linear distribution of area A. The learning graphs of the ANN models (Figs. \[fig:relu\], \[fig:sigmoid\], and \[fig:tanh\]), especially those whose slopes more closely follow the distribution of the data, show higher accuracy in prediction. In addition, looking at the ANN model performance for area B in Fig. \[fig:rmsehist\], the prediction improvement in the high frequency band is slightly higher than in the low frequency band. Finally, among the ANN models, the hyperbolic tangent activation function based ANN model shows the lowest RMSE in both areas compared with the other models. Conclusions =========== In this paper, we developed an ANN learning based path loss model for two different urban areas in the frequency range of 3-6 GHz. The learning was performed by the L-BFGS algorithm, and an identical MLP-NN hyperparameter set was applied with the three kinds of activation functions, except for the extra layers in the MLP-NN structure for the ReLU model. The ANN learning model outperformed the existing model [@HJO] in the two areas by 8.89% and 23.26% on average, respectively. Especially for environments with high-rise apartment buildings (area B), the ANN learning model can provide more accurate estimation.
In the future, a multidimensional space with more environmental features and larger data sets based on different scenarios could be analyzed with a more sophisticated ANN architecture. [99]{} M. Hata, “Empirical formula for propagation loss in land mobile radio services,” *IEEE Transactions on Vehicular Technology*, vol. 29, no. 3, pp. 317-325, Aug. 1980. Y. Okumura, E. Ohmori, T. Kawano and K. Fukuda, “Field strength and its variability in VHF and UHF land mobile radio service,” 1968. COST Action 231, “Digital mobile radio towards future generation systems, final report,” [*Tech. Rep., European Communities, EUR 18957*]{}, 1999. H.-S. Jo and J. Yook, “Path Loss Characteristics for IMT-Advanced Systems in Residential and Street Environments,” *IEEE Antennas and Wireless Propagation Letters*, vol. 9, pp. 867-871, Sep. 2010. I. Popescu, D. Nikitopoulos, P. Constantinou and I. Nafornita, “ANN Prediction Models for Outdoor Environment,” *2006 IEEE 17th International Symposium on Personal, Indoor and Mobile Radio Communications*, Helsinki, 2006, pp. 1-5. J. M. Mom, C. O. Mgbe, G. A. Igwue, “Application of artificial neural network for path loss prediction in urban macro cellular environment,” *Am J Eng Res*, vol. 3, issue 2, pp. 270-275, Feb. 2014. E. Ostlin, H. Zepernick and H. Suzuki, “Macrocell Path-Loss Prediction Using Artificial Neural Networks,” *IEEE Transactions on Vehicular Technology*, vol. 59, no. 6, pp. 2735-2747, July 2010. R. M. Biernacki, J. W. Bandler, J. Song and Q.-J. Zhang, “Efficient quadratic approximation for statistical design,” *IEEE Transactions on Circuits and Systems*, vol. 36, no. 11, pp. 1449-1454, Nov. 1989. P. B. L. Meijer, “Fast and smooth highly nonlinear multidimensional table models for device modeling,” *IEEE Transactions on Circuits and Systems*, vol. 37, no. 3, pp. 335-346, March 1990. X. Glorot, A. Bordes and Y.
Bengio, “Deep sparse rectifier neural networks,” *Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics*, vol. 15, pp. 315-323, 2011. A. R. Barron, “Approximation and estimation bounds for artificial neural networks,” *Machine Learning*, 14(1):115-133, Jan. 1994. J. Nocedal, “Updating quasi-Newton matrices with limited storage,” *Mathematics of Computation*, vol. 35, (151), pp. 773-782, 1980. R. Byrd, J. Nocedal and R. Schnabel, “Representations of quasi-Newton matrices and their use in limited memory methods,” *Mathematical Programming*, vol. 63, (1), pp. 129-156, 1994. J. Morales and J. Nocedal, “Remark on algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization,” *ACM Transactions on Mathematical Software (TOMS)*, vol. 38, (1), pp. 1-4, Nov. 2011. [^1]: Han-Shin Jo and Daniel K. Tettey are with the Department of Electronics and Control Engineering, Hanbat National University, Korea (e-mail: hsjo@hanbat.ac.kr). Chanshin Park is with the Department of Computer Science, University of Southern California, USA (e-mail: chanship@usc.edu).
--- abstract: | The discrete tori are graph analogues of the real tori, which are defined by the Cayley graphs of a finite product of finite cyclic groups. In this paper, using the theory of the heat kernel on the discrete tori established by Chinta, Jorgenson and Karlsson, we derive an explicit prime geodesic theorem for the discrete tori, which is not an asymptotic formula. To describe the formula, we need generalizations of the classical Jacobi polynomials, which are defined by the Lauricella multivariable hypergeometric function of type $C$. [*Primary*]{} 11M36 [*Secondary*]{} 05C30, 33C65 discrete tori, prime geodesic theorem, heat kernels on graphs, Lauricella hypergeometric functions, Jacobi polynomials. author: - 'Yoshinori YAMASAKI[^1]' title: '**An explicit prime geodesic theorem for discrete tori and the hypergeometric functions**' --- Introduction and the main results ================================= Let $X=(V,E)$ be a graph with $V$ and $E$ being respectively the sets of all vertices and edges of $X$. Throughout the present paper, we always assume that graphs satisfy the standard conditions, that is, they are finite, undirected, simple, connected and regular. Assume that $X$ is a $(q+1)$-regular graph. It is well known that, as an analogue of the prime number theorem, we have the so-called prime geodesic theorem for $X$. We here briefly review it (more precisely, see e.g., [@Terras2011]). For a positive integer $n$, let $N_{X}(n)$ be the number of all reduced cycles $C$ in $X$ with $l(C)=n$, where $l(C)$ is the length of $C$, and $\pi_{X}(n)$ the number of all equivalence classes $[P]$ of prime reduced cycles $P$ in $X$ with $l(P)=n$. Here, a cycle $C$ is called reduced if $C^2$ has no backtrack, and is called prime if it cannot be expressed as $C=D^f$ for any cycle $D$ and $f\ge 2$. It is easy to see that $N_X(n)=\sum_{b\,|\,n}b\pi_{X}(b)$ and hence $\pi_X(n)=\frac{1}{n}\sum_{b\,|\,n}\mu(\frac{n}{b})N_{X}(b)$ by the Möbius inversion formula.
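The Möbius inversion step can be made concrete with a short routine (an illustrative sketch; the table of counts $N_X(b)$ used in the demo is hypothetical, not computed from an actual graph):

```python
def mobius(n):
    # Moebius function mu(n) by trial factorization.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def divisors(n):
    return [b for b in range(1, n + 1) if n % b == 0]

def pi_from_N(N, n):
    # pi_X(n) = (1/n) * sum_{b | n} mu(n/b) * N_X(b), with N_X given
    # as a dict of cycle counts.
    return sum(mobius(n // b) * N[b] for b in divisors(n)) // n

# Hypothetical table consistent with pi_X(3) = 2 and pi_X(6) = 5,
# via N_X(n) = sum_{b | n} b * pi_X(b).
N = {1: 0, 2: 0, 3: 6, 4: 0, 5: 0, 6: 36}
```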
Let $W$ be the edge adjacency matrix of $X$, which is a square matrix of size $2|E|$, and ${\mathrm{Spec}\,}(W)$ the set of all eigenvalues of $W$ with multiplicities. Then, it holds that $$\label{for:PGTforN} N_X(n) =\sum_{\lambda\in{\mathrm{Spec}\,}(W)}\lambda^n$$ and hence $$\label{for:PGTforPi} \pi_X(n) \sim \delta_X \frac{q^{n}}{n} \quad (n\to\infty),$$ if $\delta_X\mid n$ (and $\pi_X(n)=0$ otherwise). Here, $\delta_X$ is the greatest common divisor of all lengths of prime reduced cycles in $X$. Actually, we can obtain \[for:PGTforPi\] from \[for:PGTforN\] by using the result obtained by Kotani and Sunada [@KotaniSunada2000], which asserts that the eigenvalues of $W$ with the largest absolute value are given by $\lambda=qe^{\frac{2\pi ia}{\delta_X}}$ for $a=1,2,\ldots,\delta_X$. As the prime number theorem is obtained via the Riemann zeta function, these formulas are also related to the zeta function $Z_X(u)$ of $X$, called the Ihara zeta function [@Ihara1966], defined by the following Euler product: $$Z_{X}(u) =\prod_{[P]}\bigl(1-u^{l(P)}\bigr)^{-1} \quad (|u|<q^{-1}).$$ Here, in the product, $[P]$ runs over all equivalence classes of the prime reduced cycles in $X$. In fact, since we have from the definition $$\label{for:generating_funcition_of_N} u\frac{d}{du}\log Z_{X}(u)=\sum^{\infty}_{n=1}N_X(n)u^n,$$ one obtains \[for:PGTforN\] by the following determinant expression of $Z_X(u)$ with respect to $W$: $$Z_X(u)^{-1}=\det\bigl(I_{|E|}-uW\bigr).$$ Here, for a positive integer $m$, $I_{m}$ is the identity matrix of size $m$. Remark that we will encounter another type of determinant expression of $Z_X(u)$ (see § \[subsec:Ihara\_zeta\]). The aim of this paper is to establish an explicit prime geodesic theorem, which is [*not*]{} an asymptotic formula, for the discrete tori.
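As a quick numerical sanity check of the identity $N_X(n)=\sum_\lambda\lambda^n=\mathrm{tr}(W^n)$ (illustrative code, not part of the paper's formal development), one can build $W$ for a small graph such as the complete graph $K_4$, which is 3-regular, so $q=2$:

```python
import numpy as np
from itertools import combinations

def edge_adjacency(edges):
    # Edge adjacency (non-backtracking) matrix on directed edges ("darts"):
    # W[(u,v),(v,w)] = 1 iff w != u.
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    idx = {d: i for i, d in enumerate(darts)}
    W = np.zeros((len(darts), len(darts)), dtype=int)
    for (u, v) in darts:
        for (a, b) in darts:
            if a == v and b != u:
                W[idx[(u, v)], idx[(a, b)]] = 1
    return W

edges = list(combinations(range(4), 2))   # K_4: 6 edges, 12 darts
W = edge_adjacency(edges)
N3 = int(np.trace(np.linalg.matrix_power(W, 3)))
```

Here $N_X(3)=24$: each of the four triangles of $K_4$ contributes 3 starting edges times 2 orientations.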
Here, for $M=(m_1,\ldots,m_d)\in(\mathbb{Z}_{\ge 3})^d$, the discrete torus ${\mathrm{DT}}^{(d)}_{M}$ of dimension $d$ is defined by the Cayley graph of the group $\prod^{d}_{j=1}\mathbb{Z}/m_j\mathbb{Z}$ associated with the generating set $\{\pm\delta_1,\ldots,\pm \delta_d\}$ with $\delta_j=(0,\ldots,0,1,0,\ldots,0)\in \prod^{d}_{j=1}\mathbb{Z}/m_j\mathbb{Z}$. This is a $2d$-regular graph having $\Vert M\Vert=m_1\cdots m_d$ vertices and $d\Vert M\Vert$ edges. Because of the simplicity of the structure of these graphs, their harmonic analysis is well studied. In particular, very recently, various results have been obtained on the [*complexities*]{} of the discrete tori or their degenerate versions by establishing the theory of the heat kernel on the graphs ([@ChintaJorgensonKarlsson2010; @ChintaJorgensonKarlsson2012; @ChintaJorgensonKarlsson2015; @Louis2015a; @Louis2015b]). To state the result, we need a generalization of the Jacobi polynomial: For $\alpha=(\alpha_1,\ldots,\alpha_d)\in\mathbb{R}^d$, $\beta\in\mathbb{R}$ and $k\in\mathbb{Z}_{\ge 0}$, define $$\label{def:geneJacobi} P^{(\alpha,\beta)}_{d,k}(x) =\frac{(|\alpha|+1)_k}{k!} F^{(d)}_C\left( \begin{array}{c} -k,k+|\alpha|+\beta+1\\[3pt] \alpha_1+1,\ldots,\alpha_d+1 \end{array} ;\,\frac{1-x}{2},\ldots,\frac{1-x}{2} \right),$$ where $(a)_k=\frac{\Gamma(a+k)}{\Gamma(a)}$ is the Pochhammer symbol with $\Gamma(x)$ being the gamma function, $|\alpha|=\alpha_1+\cdots+\alpha_d$ and, for $a,b,c_1,\ldots,c_d\in\mathbb{R}$, $$\begin{aligned} F^{(d)}_C\left(\begin{array}{c}a,b\\c_1,\ldots,c_d\end{array};x_1,\ldots,x_d\right) &= \displaystyle{\sum_{n=(n_1,\ldots,n_d)\in(\mathbb{Z}_{\ge 0})^d}\frac{(a)_{|n|}(b)_{|n|}}{(c_1)_{n_1}\cdots (c_d)_{n_d}} \frac{x_1^{n_1}}{n_1!}\cdots \frac{x_d^{n_d}}{n_d!}}\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad (|x_1|^{\frac{1}{2}}+\cdots+|x_d|^{\frac{1}{2}}<1)\end{aligned}$$ is the Lauricella multivariable hypergeometric function of type $C$.
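Because the factor $(-k)_{|n|}$ vanishes for $|n|>k$, the $F^{(d)}_C$ series defining $P^{(\alpha,\beta)}_{d,k}(x)$ terminates, so the polynomial can be evaluated directly (an illustrative numerical sketch, not part of the formal development):

```python
from math import factorial
from itertools import product

def poch(a, k):
    # Pochhammer symbol (a)_k, valid for any real a (including negatives).
    r = 1.0
    for i in range(k):
        r *= a + i
    return r

def gen_jacobi(alpha, beta, k, x):
    # P^{(alpha,beta)}_{d,k}(x) via the terminating Lauricella F_C series:
    # (-k)_{|n|} = 0 for |n| > k, so the multi-index sum is finite.
    d = len(alpha)
    t = (1.0 - x) / 2.0
    a, b = -k, k + sum(alpha) + beta + 1
    total = 0.0
    for n in product(range(k + 1), repeat=d):
        s = sum(n)
        if s > k:
            continue
        term = poch(a, s) * poch(b, s)
        for aj, nj in zip(alpha, n):
            term *= t ** nj / (poch(aj + 1, nj) * factorial(nj))
        total += term
    return poch(sum(alpha) + 1, k) / factorial(k) * total
```

For $d=1$ this reproduces the classical Jacobi polynomial, e.g. $P^{(\alpha,\beta)}_{1}(x)=(\alpha+1)+(\alpha+\beta+2)(x-1)/2$ for $k=1$.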
Remark that $F^{(1)}_C$ is equal to the Gauss hypergeometric function ${}_2F_{1}$ and hence $P^{(\alpha,\beta)}_{1,k}(x)$ coincides with the classical Jacobi polynomial. When $d=2$, $F^{(2)}_C$ equals the Appell hypergeometric function $F_4$. It is worth commenting that, if $\alpha\in(\mathbb{Z}_{\ge 0})^2$, then the above generalized Jacobi polynomial $P^{(\alpha,\beta)}_{2,k}(x)$ can be expressed by the generalized hypergeometric function ${}_4F_3$ (more precisely, see Example \[ex:d=2\]). However, for general $d\ge 3$, we cannot confirm such degeneracies. Write $N^{(d)}_M(n)=N_{{\mathrm{DT}}^{(d)}_M}(n)$ for short. The following is our main result. \[thm:main\] For $n\ge 3$, it holds that $$\label{for:PGTforDT} N^{(d)}_{M}(n) =\Vert M\Vert\sum_{0\le h\le n \atop h\equiv n \!\!\!\!\! \pmod{2}}\sum_{z\in P^{(d)}_M(h)}m^{(d)}_{M}(z)X^{(d)}_{M,h}(n;z),$$ where, for $h\in\mathbb{Z}_{\ge 0}$, $$\begin{aligned} P^{(d)}_{M}(h) =\left\{z=(z_1,\ldots,z_d)\in(\mathbb{Z}_{\ge 0})^d\,\left| \begin{array}{l} z_1\ge \cdots \ge z_d, \ |z|=z_1+\cdots+z_d=h,\\ \text{there exists $(y_1,\ldots,y_d)\in\mathbb{Z}^d$ such that, as a multiset,}\\ \{z_1,\ldots,z_d\}=\{m_1|y_1|,\ldots,m_d|y_d|\} \end{array} \right.\right\}\end{aligned}$$ and, for $z=(z_1,\ldots,z_d)\in P^{(d)}_{M}(h)$, $$\begin{aligned} m^{(d)}_{M}(z) &=\#\left\{(y_1,\ldots,y_d)\in\mathbb{Z}^d\,\left|\, \text{as a multiset}, \ \{z_1,\ldots,z_d\}=\{m_1|y_1|,\ldots,m_d|y_d|\} \right.\right\},\\ X^{(d)}_{M,h}(n;z) &=2(d-1)\delta_{h,0}+\frac{2n\bigl(-(2d-1)\bigr)^{\frac{n-h}{2}}}{n+h}\binom{h}{z_1,\ldots,z_d} P^{(z,-1)}_{d,\frac{n-h}{2}}\Bigl(\frac{2d-3}{2d-1}\Bigr).\end{aligned}$$ Here, $\delta_{h,0}$ is the Kronecker delta and $\binom{m}{m_1,\ldots,m_d}=\frac{m!}{m_1!\cdots m_d!}$ with $m_1+\cdots+m_d=m$ denotes the multinomial coefficient.
We remark that, though we cannot in general calculate the right hand side of \[for:PGTforN\] explicitly (as opposed to numerically), we certainly can calculate that of \[for:PGTforDT\], because it is a finite sum of polynomials whose coefficients are given concretely. We also remark that the above result is easily extended to general discrete tori corresponding to the groups $\mathbb{Z}^d/\Lambda\mathbb{Z}^d$ for $\Lambda\in GL(d,\mathbb{Z})$. In this paper, for simplicity, we only consider such diagonal cases. We give a proof of Theorem \[thm:main\] in Section \[sec:Spectral\_zeta\]. It is achieved by calculating the spectral zeta function of ${\mathrm{DT}}^{(d)}_M$ in two ways: one is done by using the Ihara zeta function and the other is by the theory of the heat kernel obtained in [@ChintaJorgensonKarlsson2010]. In Section \[sec:normalized\_DT\], we investigate a special case, that is, the normalized discrete torus ${\mathrm{DT}}^{(d)}_{m}={\mathrm{DT}}^{(d)}_{(m,\ldots,m)}$. We give some numerical examples and observations obtained from the examples. We see that, in this case, our formula seems to give a graph analogue of a refinement of the prime geodesic theorem for compact Riemannian manifolds of negative curvature, which counts closed geodesics lying in a fixed homology class. Spectral zeta functions {#sec:Spectral_zeta} ======================= Let $X=(V,E)$ be a $(q+1)$-regular graph. Define a spectral zeta function $\zeta_{X}(s)$ of $X$ by the following finite sum. $$\zeta_X(s) =\sum_{\lambda\in{\mathrm{Spec}\,}(\Delta_X)}\frac{1}{\lambda+s}.$$ Here, $\Delta_X$ is the combinatoric Laplacian on $X$, that is, $\Delta_X=(q+1)I_{|V|}-A$ with $A$ being the (vertex) adjacency matrix of $X$.
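The finite sum defining $\zeta_X(s)$ can be checked numerically against the logarithmic derivative $\frac{d}{ds}\log\det(\Delta_X+sI_{|V|})$ on a small example such as the 2-regular cycle graph $C_5$ (an illustrative sketch, with the derivative taken by central differences):

```python
import numpy as np

def spectral_zeta(L, s):
    # zeta_X(s) = sum over Laplacian eigenvalues of 1/(lambda + s).
    return float(sum(1.0 / (lam + s) for lam in np.linalg.eigvalsh(L)))

# Cycle graph C_5: q + 1 = 2, so Delta = 2I - A.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1
L = 2 * np.eye(n) - A

s, eps = 1.0, 1e-6
logdet = lambda t: np.linalg.slogdet(L + t * np.eye(n))[1]
deriv = (logdet(s + eps) - logdet(s - eps)) / (2 * eps)
z = spectral_zeta(L, s)
```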
We sometimes understand that $\Delta_X$ is a linear operator on the $\mathbb{C}$-vector space $L^2(X)=\{f:X\to\mathbb{C}\}$, endowed with the inner product $(f,g)=\sum_{x\in V}f(x)\overline{g(x)}$, $f,g\in L^2(X)$, acting by $$(\Delta_X f)(x)=(q+1)f(x)-\sum_{\{x,y\}\in E}f(y).$$ Here, $\{x,y\}\in E$ denotes the edge which connects the vertices $x\in V$ and $y\in V$. The idea for obtaining the main result is to calculate the spectral zeta function $\zeta^{(d)}_{M}(s)=\zeta_{{\mathrm{DT}}^{(d)}_M}(s)$ of ${\mathrm{DT}}^{(d)}_M$ in two ways: one is via the Ihara zeta function and the other is via the heat kernel on ${\mathrm{DT}}^{(d)}_M$. First, it is useful to give here the explicit descriptions of the eigenvalues and the corresponding eigenfunctions of the combinatoric Laplacian $\Delta^{(d)}_M$ of ${\mathrm{DT}}^{(d)}_M$. Let $$\begin{aligned} V^{(d)}_M &=\prod^{d}_{j=1}\mathbb{Z}/m_j\mathbb{Z} =\left\{(x_1,\ldots,x_d)\,\left|\,x_j\in\{0,1,\ldots,m_j-1\},\ j=1,2,\ldots,d\right.\right\}, \\ (V^{(d)}_M)^{*} &=\prod^{d}_{j=1}\frac{1}{m_j}\mathbb{Z}\Big/\mathbb{Z} =\left\{(v_1,\ldots,v_d)\,\left|\,v_j\in\left\{0,\frac{1}{m_j},\ldots,\frac{m_j-1}{m_j}\right\},\ j=1,2,\ldots,d\right.\right\}.\end{aligned}$$ Notice that $V^{(d)}_M$, $(V^{(d)}_M)^{*}$ are the sets of all vertices of ${\mathrm{DT}}^{(d)}_M$ and its dual $({\mathrm{DT}}^{(d)}_M)^{*}$ (see [@ChintaJorgensonKarlsson2015]), respectively. Moreover, for $x=(x_1,\ldots,x_d)\in V^{(d)}_M$ and $v=(v_1,\ldots,v_d)\in (V^{(d)}_M)^{*}$, put $(x,v)=x_1v_1+\cdots+x_dv_d$.
\[lem:specData\] For $v=(v_1,\ldots,v_d)\in(V^{(d)}_M)^{*}$, let $$\lambda_{v}=2d-2\sum^{d}_{j=1}\cos\bigl(2\pi v_j\bigr), \quad \phi_v(x)=\Vert M\Vert^{-\frac{1}{2}}e^{2\pi i(x,v)}.$$ Then, we have ${\mathrm{Spec}\,}(\Delta^{(d)}_M)=\bigl\{\lambda_v\,\bigl|\,v\in(V^{(d)}_M)^{*}\bigr\}$ and see that $\phi_v(x)$ is an orthonormal eigenfunction corresponding to $\lambda_{v}$, that is, $\bigl\{\phi_v\bigr\}_{v\in (V^{(d)}_M)^{*}}$ forms an orthonormal basis of $L^{2}({\mathrm{DT}}^{(d)}_M)$. See [@ChintaJorgensonKarlsson2015]. Ihara zeta functions {#subsec:Ihara_zeta} -------------------- Let $X=(V,E)$ be a $(q+1)$-regular graph. It is easy to see that $$\label{for:spec1} \zeta_X(s) =\frac{d}{ds}\log\det\bigl(\Delta_X+sI_{|V|}\bigr).$$ We now show that, using the determinant expression $$\label{for:detexpression_for_Ihara_A} Z_{X}(u)^{-1} =(1-u^2)^{|E|-|V|}\det\bigl(I_{|V|}-uA+qu^2 I_{|V|}\bigr)$$ of the Ihara zeta function $Z_{X}(u)$ with respect to $A$, the spectral zeta function $\zeta_X(s)$ can be written in terms of the logarithmic derivative of $Z_{X}(u)$. Let $X=(V,E)$ be a $(q+1)$-regular graph. Then, it holds that $$\label{for:spectral_Ihara} \zeta_X(s)=|V|\frac{u_s}{1-u^2_s}+\frac{u_s}{1-qu^2_s}u_s\frac{d}{du}\log Z_{X}(u_s),$$ where $$\label{for:quadratic_trans} u_s=\frac{s+q+1\pm\sqrt{s^2+2(q+1)s+(q-1)^2}}{2q}.$$ Substituting $A=(q+1)I_{|V|}-\Delta_X$ into with the relation $2|E|=(q+1)|V|$, we have $$\label{for:detexpression_for_Ihara_Delta} \det\Bigl(\Delta_X+\frac{1-(q+1)u+qu^2}{u}I_{|V|}\Bigr) =\left\{\Bigl(u(1-u^2)^{\frac{q-1}{2}}\Bigr)^{|V|}Z_{X}(u)\right\}^{-1}.$$ Now the desired formula is derived from and by making the change of variable $$\label{for:su1} s=\frac{1-(q+1)u_s+qu^2_s}{u_s},$$ that is, , together with the relation $\frac{d}{ds}u_s=-\frac{u^2_s}{1-qu^2_s}$. From this lemma, because ${\mathrm{DT}}^{(d)}_{M}$ is a $2d$-regular graph (i.e., $q=2d-1$), one immediately obtains the following proposition.
We have $$\label{for:DT_spectral_Ihara} \zeta^{(d)}_M(s) =\Vert M\Vert\frac{u_s}{1-u^2_s}+\frac{u_s}{1-(2d-1)u^2_s}u_s\frac{d}{du}\log Z^{(d)}_{M}(u_s),$$ where $$u_s=\frac{s+2d\pm\sqrt{s^2+4ds+4(d-1)^2}}{2(2d-1)}.$$ We notice from that $$\label{for:su2} s+2d=\frac{1+(2d-1)u^2_s}{u_s}.$$ We will encounter this quadratic transformation in § \[subsec:HG\]. Because $u_s\frac{d}{du}\log Z^{(d)}_{M}(u_s)=\sum^{\infty}_{n=1}N^{(d)}_M(n)u_s^n$, what we have to do next is to expand $\zeta^{(d)}_M(s)$ in a series in the variable $u_s$. Heat kernels {#subsec:Heat_kernel} ------------ We next start from the fact that the spectral zeta function $\zeta_X(s)$ can be expressed as the Laplace transform of the theta function $\theta_X(t)$ of $X$ defined by $$\theta_X(t) =\sum_{\lambda\in{\mathrm{Spec}\,}(\Delta_X)}e^{-\lambda t}.$$ Actually, noticing that ${\mathrm{Spec}\,}(\Delta_X)\subset [0,2(q+1)]$, we have $$\label{for:spec2} \zeta_X(s) =\int^{\infty}_{0}e^{-st}\theta_X(t)dt \quad ({\mathrm{Re}\,}(s)>0).$$ From the general theory, we know that $\theta_X(t)$ is essentially given by the heat kernel $K_{X}(x,t):V\times \mathbb{R}_{>0}\to \mathbb{C}$ on the graph $X$, which is the unique solution of the heat equation $$\left\{ \begin{array}{l} \left(\Delta_X+\frac{\partial}{\partial t}\right)f(x,t)=0, \\[5pt] \displaystyle{\lim_{t\to 0}}f(x,t)=\delta_{o}(x). \end{array} \right.$$ Here, $o\in V$ is a fixed base point of $X$ and $\delta_{o}(x)$ is the Kronecker delta, that is, $\delta_{o}(x)=1$ if $x=o$ and $0$ otherwise. The following is well known. Let $X=(V,E)$ be a $(q+1)$-regular graph and $o\in V$. Let $\phi_{\lambda}(x)$ be an orthonormal eigenfunction of $\Delta_X$ with respect to $\lambda\in {\mathrm{Spec}\,}(\Delta_X)$. Then, we have $$\label{for:HK_general} K_X(x,t) =\sum_{\lambda\in{\mathrm{Spec}\,}(\Delta_X)}e^{-\lambda t}\overline{\phi_{\lambda}(o)}\phi_{\lambda}(x).$$ See, e.g., [@Chung1997].
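Before the expansion of $\zeta^{(d)}_M(s)$ is carried out, the spectral data of Lemma \[lem:specData\] admit a direct numerical sanity check. The sketch below is ours (the function name is ad hoc): it applies the combinatorial Laplacian to each $\phi_v$ vertex by vertex and measures the worst residual against $\lambda_v\phi_v$.

```python
import cmath
import math
from itertools import product

def eigen_residual(m_list):
    """Max |(Delta phi_v)(x) - lambda_v phi_v(x)| over all dual v and all
    vertices x of the discrete torus DT_M; Lemma [lem:specData] predicts ~0."""
    d = len(m_list)
    vertices = list(product(*(range(m) for m in m_list)))
    norm = len(vertices) ** -0.5

    def phi(x, v):
        # phi_v(x) = ||M||^{-1/2} e^{2 pi i (x, v)}
        return norm * cmath.exp(2j * math.pi * sum(a * b for a, b in zip(x, v)))

    def neighbours(x):
        for j in range(d):
            for sgn in (1, -1):
                y = list(x)
                y[j] = (y[j] + sgn) % m_list[j]
                yield tuple(y)

    worst = 0.0
    for k in vertices:  # dual vertices are (k_1/m_1, ..., k_d/m_d)
        v = tuple(kj / mj for kj, mj in zip(k, m_list))
        lam = 2 * d - 2 * sum(math.cos(2 * math.pi * vj) for vj in v)
        for x in vertices:
            delta = 2 * d * phi(x, v) - sum(phi(y, v) for y in neighbours(x))
            worst = max(worst, abs(delta - lam * phi(x, v)))
    return worst
```

The residual is at the level of floating-point round-off for any choice of $M$ with all $m_j\ge 3$.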
Now, let us calculate $K^{(d)}_M(x,t)=K_{{\mathrm{DT}}^{(d)}_M}(x,t)$ and $\theta^{(d)}_{M}(t)=\theta_{{\mathrm{DT}}^{(d)}_M}(t)$. We notice that, in the case of studying ${\mathrm{DT}}^{(d)}_{M}$, we always take a base point $o=(0,\ldots,0)\in V^{(d)}_{M}$. From Lemma \[lem:specData\] and , we have $$K^{(d)}_M(x,t) =\frac{1}{\Vert M\Vert}\sum_{v\in ({\mathrm{DT}}^{(d)}_M)^{*}}e^{-\lambda_v t}e^{2\pi i(x,v)}$$ and hence $$\label{for:theta} \theta^{(d)}_M(t) =\sum_{v\in ({\mathrm{DT}}^{(d)}_M)^{*}}e^{-\lambda_v t}=\Vert M\Vert K^{(d)}_M(o,t).$$ It is shown in [@KarlssonNeuhauser2006] (see also [@Karlsson2012]) that the heat kernel $K_{\mathbb{Z}}(x,t)$ on $\mathbb{Z}$ with $o=0$ is given by $$K_{\mathbb{Z}}(x,t)=e^{-2t}I_x(2t) \quad (x\in\mathbb{Z}),$$ where $I_x(t)$ is the $I$-Bessel function (or the modified Bessel function of the first kind) having the expansion $$\label{def:I_Bessel} I_x(t) =\sum^{\infty}_{n=0}\frac{1}{n!\Gamma(n+x+1)}\Bigl(\frac{t}{2}\Bigr)^{2n+x}.$$ Hence, by the uniqueness of the heat kernel, we see that the heat kernel $K_{\mathbb{Z}^d}(x,t)$ on $\mathbb{Z}^d$ can be written as $K_{\mathbb{Z}^d}(x,t)=\prod^{d}_{j=1}K_{\mathbb{Z}}(x_j,t)$ for $x=(x_1,\ldots,x_d)\in\mathbb{Z}^d$. 
Moreover, periodizing this as a function on $V^{(d)}_{M}$, we have $$\begin{aligned} K^{(d)}_M(x,t) =\sum_{z\in\prod^{d}_{j=1}m_j\mathbb{Z}}K_{\mathbb{Z}^d}(x+z,t) =\sum_{(z_1,\ldots,z_d)\in\prod^{d}_{j=1}m_j\mathbb{Z}}\prod^{d}_{j=1}K_{\mathbb{Z}}(x_j+z_j,t).\end{aligned}$$ This shows from that $$\begin{aligned} \theta^{(d)}_M(t) =\Vert M\Vert K^{(d)}_M(o,t) &=\Vert M\Vert\sum_{(z_1,\ldots,z_d)\in\prod^{d}_{j=1}m_j\mathbb{Z}}e^{-2dt}\prod^{d}_{j=1}I_{z_j}(2t)\\ &=\Vert M\Vert\sum^{\infty}_{h=0}\sum_{z=(z_1,\ldots,z_d)\in P^{(d)}_{M}(h)}m^{(d)}_{M}(z)e^{-2dt}\prod^{d}_{j=1}I_{z_j}(2t).\end{aligned}$$ Here, one obtains the last equation above as follows: Let $W=\mathfrak{S}_d \ltimes (\mathbb{Z}/2\mathbb{Z})^d$ with $\mathfrak{S}_d$ being the symmetric group of degree $d$. We see that $W$ naturally acts on $\mathbb{Z}^d$ and can take the set of all partitions of length less than or equal to $d$ as a set of all representatives of the $W$-orbits on $\mathbb{Z}^d$. For a partition $z$ of the length $l(z)\le d$, we have $m_M^{(d)}(z) = \# (\prod^d_{j=1} m_j \mathbb{Z}) \cap Wz$ where $Wz$ is the $W$-orbit of $z$. Then, for $h\in\mathbb{Z}_{\ge 0}$, we have $P_M^{(d)}(h)=\{z\vdash h\,|\,\text{$l(z)\le d$,\ $m_M^{(d)}(z) >0$}\}$ and, because $\prod^d_{j=1} I_{z_j}(2t)$ is $W$-invariant (notice that $I_x(t)=I_{-x}(t)$), obtain the desired equation. Now, from , it holds that $$\begin{aligned} \zeta^{(d)}_{M}(s) &=\Vert M\Vert\sum^{\infty}_{h=0}\sum_{z\in P^{(d)}_{M}(h)}m^{(d)}_{M}(z)F^{(d)}_{z}(s+2d) \nonumber\\ \label{for:spectral_HK} &=\Vert M\Vert\sum^{\infty}_{h=0}\sum_{z\in P^{(d)}_{M}(h)}m^{(d)}_{M}(z)G^{(d)}_{z}(u_s),\end{aligned}$$ where, for $z=(z_1,\ldots,z_d)\in P^{(d)}_{M}(h)$, we put $$\begin{aligned} F^{(d)}_z(x) &=\int^{\infty}_{0}e^{-xt}\prod^{d}_{j=1}I_{z_j}(2t)dt \quad ({\mathrm{Re}\,}(x)>2d),\\ G^{(d)}_z(u) &=F_z\Bigl(\frac{1+(2d-1)u^2}{u}\Bigr).\end{aligned}$$ Here, we have used the relation . 
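The heat-kernel ingredients above can also be tested numerically. The sketch below is ours (truncation parameters are ad hoc): it checks that $K_{\mathbb{Z}}(x,t)=e^{-2t}I_x(2t)$ solves the heat equation on $\mathbb{Z}$ and has total mass one, and that the spectral and periodized heat-kernel expressions for $\theta^{(d)}_M(t)$ agree.

```python
import math
from itertools import product

def bessel_i(x, t, terms=60):
    """I-Bessel function I_x(t) for integer order x via the series
    (def:I_Bessel); I_{-x} = I_x for integer x."""
    x = abs(x)
    return sum((t / 2) ** (2 * n + x) / (math.factorial(n) * math.factorial(n + x))
               for n in range(terms))

def heat_kernel_Z(x, t):
    """K_Z(x,t) = e^{-2t} I_x(2t), the heat kernel on Z with base point 0."""
    return math.exp(-2 * t) * bessel_i(x, 2 * t)

def theta_spectral(m_list, t):
    """theta_M(t) as a sum of e^{-lambda_v t} over the dual vertices."""
    d = len(m_list)
    total = 0.0
    for k in product(*(range(m) for m in m_list)):
        lam = 2 * d - 2 * sum(math.cos(2 * math.pi * kj / mj)
                              for kj, mj in zip(k, m_list))
        total += math.exp(-lam * t)
    return total

def theta_heat(m_list, t, radius=8):
    """||M|| K_M(o,t), with K_M the periodisation of prod_j K_Z(., t)."""
    vol = math.prod(m_list)
    total = 0.0
    for p in product(range(-radius, radius + 1), repeat=len(m_list)):
        term = 1.0
        for pj, mj in zip(p, m_list):
            term *= heat_kernel_Z(pj * mj, t)
        total += term
    return vol * total
```

Because $I_x(2t)$ decays super-exponentially in $|x|$, a small truncation radius already gives agreement to machine precision.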
We will study some properties of these functions in § \[subsec:HG\]. Proof of Theorem \[thm:main\] ----------------------------- Let us give a proof of the main result. \[Proof of Theorem \[thm:main\]\] From , we have $$\label{for:logderiIhara} u_s\frac{d}{du}\log Z^{(d)}_{M}(u_s) =-\Vert M\Vert\frac{1-(2d-1)u^2_s}{1-u^2_s}+\frac{1-(2d-1)u^2_s}{u_s}\zeta^{(d)}_{M}(s).$$ Therefore, based on the equation , one can obtain a formula for $N^{(d)}_M(n)$ by expanding the right hand side of in a series in the variable $u_s$. The first term on the right hand side of can be easily expanded as follows: $$\label{for:FirstTerm} -\Vert M\Vert+2\Vert M\Vert(d-1)u^2_s+2\Vert M\Vert(d-1)\sum_{n\ge 4 \atop n\,:\,\text{even}}u^n_s.$$ Moreover, since we have from together with , which we will prove in § \[subsec:HG\], $$\begin{aligned} \zeta^{(d)}_{M}(s) &= \Vert M\Vert\sum^{\infty}_{h=0}\sum^{\infty}_{k=0} \bigl(-(2d-1)\bigr)^k \left\{\sum_{z\in P^{(d)}_{M}(h)}m^{(d)}_{M}(z)C_z P^{(z,0)}_{d,k}\Bigl(\frac{2d-3}{2d-1}\Bigr)\right\}u^{h+2k+1}_s\end{aligned}$$ with $C_z=\binom{|z|}{z_1,\ldots,z_d}$, the second term of can be written as $$\begin{aligned} & \Vert M\Vert\sum^{\infty}_{h=0}\sum^{\infty}_{k=0} \bigl(-(2d-1)\bigr)^k \left\{\sum_{z\in P^{(d)}_{M}(h)}m^{(d)}_{M}(z)C_z P^{(z,0)}_{d,k}\Bigl(\frac{2d-3}{2d-1}\Bigr)\right\}u^{h+2k}_s \\ &\ \ \ +\Vert M\Vert\sum^{\infty}_{h=0}\sum^{\infty}_{k=1} \bigl(-(2d-1)\bigr)^k \left\{\sum_{z\in P^{(d)}_{M}(h)}m^{(d)}_{M}(z)C_z P^{(z,0)}_{d,k-1}\Bigl(\frac{2d-3}{2d-1}\Bigr)\right\}u^{h+2k}_s \\ =&\Vert M\Vert\sum^{\infty}_{h=0} \left\{\sum_{z\in P^{(d)}_{M}(h)}m^{(d)}_{M}(z)C_z \right\}u^{h}_s \\ &\ \ \ +\Vert M\Vert\sum^{\infty}_{h=0}\sum^{\infty}_{k=1} \bigl(-(2d-1)\bigr)^k\frac{h+2k}{h+k} \left\{\sum_{z\in P^{(d)}_{M}(h)}m^{(d)}_{M}(z)C_z P^{(z,-1)}_{d,k}\Bigl(\frac{2d-3}{2d-1}\Bigr)\right\}u^{h+2k}_s \\ =&\Vert M\Vert\sum^{\infty}_{n=0} \left\{\sum_{z\in P^{(d)}_{M}(n)}m^{(d)}_{M}(z)C_z \right\}u^{n}_s \\ &\ \ \ +\Vert 
M\Vert\sum^{\infty}_{n=1} \left\{\sum_{0\le h\le n-2 \atop h\equiv n \!\!\!\!\! \pmod{2}}\frac{2n\bigl(-(2d-1)\bigr)^{\frac{n-h}{2}}}{n+h} \sum_{z\in P^{(d)}_{M}(h)}m^{(d)}_{M}(z)C_z P^{(z,-1)}_{d,\frac{n-h}{2}}\Bigl(\frac{2d-3}{2d-1}\Bigr)\right\}u^{n}_s.\end{aligned}$$ Here, in the first equality, we have used the equation $P^{(z,0)}_{d,0}\bigl(\frac{2d-3}{2d-1}\bigr)=1$ and, for $k\ge 1$, $$P^{(z,0)}_{d,k}(x)+P^{(z,0)}_{d,k-1}(x) =\frac{|z|+2k}{|z|+k}P^{(z,-1)}_{d,k}(x),$$ which are easily seen from the definition. Let us write the coefficient of $u^n_s$ of the rightmost hand side of the above formula as, say, $C(n)$. Noticing that $P^{(d)}_{M}(0)=\{(0,\ldots,0)\}$ and $P^{(d)}_{M}(1)=P^{(d)}_{M}(2)=\emptyset$ because $m_j\ge 3$ for all $j=1,2,\ldots,d$, we have $$C(0) =\Vert M\Vert,\quad C(1) =0,\quad C(2) =\Vert M\Vert\cdot \bigl(-2(2d-1)\bigr)P^{(0,-1)}_{d,1}\Bigl(\frac{2d-3}{2d-1}\Bigr) =-2\Vert M\Vert(d-1).$$ This shows that the second term on the right hand side of is expanded as follows: $$\begin{aligned} \label{for:SecondTerm} &\ \ \ \Vert M\Vert-2\Vert M\Vert(d-1)u^2_s\\ \nonumber &+\Vert M\Vert\sum^{\infty}_{n=3} \left\{\sum_{0\le h\le n \atop h\equiv n \!\!\!\!\! \pmod{2}}\frac{2n\bigl(-(2d-1)\bigr)^{\frac{n-h}{2}}}{n+h} \sum_{z\in P^{(d)}_{M}(h)}m^{(d)}_{M}(z)C_z P^{(z,-1)}_{d,\frac{n-h}{2}}\Bigl(\frac{2d-3}{2d-1}\Bigr)\right\}u^{n}_s\end{aligned}$$ Combining and , and noticing that $m^{(d)}_{M}(0,\ldots,0)=1$, we obtain the desired formula . This ends the proof. A quadratic transformation $F^{(d)}_C$ {#subsec:HG} -------------------------------------- We here prove the following proposition, which contains a key formula for our results. Let $z=(z_1,\ldots,z_d)\in (\mathbb{Z}_{\ge 0})^d$. 
$(1)$ We have $$\begin{aligned} \label{for:F_d1} F^{(d)}_{z}(x) &=\frac{C_z}{x^{|z|+1}} F^{(d)}_C\left( \begin{array}{c} \frac{|z|}{2}+\frac{1}{2},\frac{|z|}{2}+1\\[3pt] z_1+1,\ldots,z_d+1 \end{array} ;\,\frac{4}{x^2},\ldots,\frac{4}{x^2} \right)\\ \label{for:F_d2} &=\frac{1}{x^{|z|+1}}\sum^{\infty}_{n=0}A^{(d)}_z(n)\frac{(\frac{|z|}{2}+\frac{1}{2})_n(\frac{|z|}{2}+1)_n}{(h+1)_n}\frac{(\frac{4}{x^2})^n}{n!},\end{aligned}$$ where $C_z=\binom{|z|}{z_1,\ldots,z_d}$ and $$A^{(d)}_{z}(n)=\sum_{n_1,\ldots,n_d\ge 0 \atop n_1+\cdots +n_d=n}\binom{n}{n_1,\ldots,n_d}\binom{n+|z|}{n_1+z_1,\ldots,n_d+z_d}.$$ $(2)$ We have $$\label{for:G} G^{(d)}_{z}(u) =C_z u^{|z|+1} \sum^{\infty}_{k=0}P^{(z,0)}_{d,k}\Bigl(\frac{2d-3}{2d-1}\Bigr)\bigl(-(2d-1)u^2\bigr)^k,$$ where $P^{(\alpha,\beta)}_{d,k}(x)$ is the generalization of the Jacobi polynomial defined by . Moreover, if $\alpha\in(\mathbb{Z}_{\ge 0})^d$, then it can be written as $$\label{for:geneJacobi1} P^{(\alpha,\beta)}_{d,k}(x) =\frac{1}{C_{\alpha}}\frac{(|\alpha|+1)_k}{k!} \sum^{k}_{n=0}A^{(d)}_{\alpha}(n)\frac{(-k)_n(k+|\alpha|+\beta+1)_n}{(|\alpha|+1)_n}\frac{(\frac{1-x}{2})^n}{n!}.$$ Put $h=|z|$. Using , we have $$\begin{aligned} F^{(d)}_{z}(x) &=\sum_{n=(n_1,\ldots,n_d)\in(\mathbb{Z}_{\ge 0})^d}\frac{1}{n_1!\cdots n_d!(n_1+z_1)!\cdots (n_d+z_d)!} \int^{\infty}_{0}e^{-xt}t^{h+2|n|+1}\frac{dt}{t}\\ &=\frac{1}{x^{h+1}}\sum_{n=(n_1,\ldots,n_d)\in(\mathbb{Z}_{\ge 0})^d}\frac{(h+2|n|)!}{n_1!\cdots n_d!(n_1+z_1)!\cdots (n_d+z_d)!}\frac{1}{x^{2|n|}}\\ &=\frac{1}{x^{h+1}}\frac{h!}{z_1!\cdots z_d!}\sum_{n=(n_1,\ldots,n_d)\in(\mathbb{Z}_{\ge 0})^d}\frac{\bigl(\frac{h}{2}+\frac{1}{2}\bigr)_{|n|}\bigl(\frac{h}{2}+1\bigr)_{|n|}}{(z_1+1)_{n_1}\cdots (z_d+1)_{n_d}} \frac{(\frac{4}{x^2})^{|n|}}{n_1!\cdots n_d!}.\end{aligned}$$ Here, we have employed the identities $(a+n)!=a!(a+1)_n$ and $(a+2n)!=a!2^{2n}\bigl(\frac{a}{2}+\frac{1}{2}\bigr)_n\bigl(\frac{a}{2}+1\bigr)_n$ for $a,n\in\mathbb{Z}_{\ge 0}$. Hence we obtain . 
The formula is easily obtained from . We next concentrate on $G^{(d)}_{z}(u)$. From , it holds that $$\begin{aligned} G^{(d)}_z(u) =C_zu^{h+1}\sum_{n=(n_1,\ldots,n_d)\in(\mathbb{Z}_{\ge 0})^d}\frac{\bigl(\frac{h}{2}+\frac{1}{2}\bigr)_{|n|}\bigl(\frac{h}{2}+1\bigr)_{|n|}}{(z_1+1)_{n_1}\cdots (z_d+1)_{n_d}} \frac{2^{2|n|}u^{2|n|}}{n_1!\cdots n_d!}\bigl(1+(2d-1)u^2\bigr)^{-(h+1+2|n|)}.\end{aligned}$$ The generalized binomial theorem yields $$\begin{aligned} \bigl(1+(2d-1)u^2\bigr)^{-(h+1+2|n|)} &=\sum^{\infty}_{l=0}\binom{h+l+2|n|}{l}\bigl(-(2d-1)u^2\bigr)^{l}\\ &=\sum^{\infty}_{l=0}\frac{(h+1)_{l+|n|}(l+|n|+h+1)_{|n|}}{l!2^{2|n|}\bigl(\frac{h}{2}+\frac{1}{2}\bigr)_{|n|}\bigl(\frac{h}{2}+1\bigr)_{|n|}}\bigl(-(2d-1)u^2\bigr)^{l}\end{aligned}$$ and hence $$\begin{aligned} & \ \ \ G^{(d)}_z(u)\\ &=C_zu^{h+1}\sum^{\infty}_{l=0}\left\{\sum_{n=(n_1,\ldots,n_d)\in(\mathbb{Z}_{\ge 0})^d}\frac{(h+1)_{l+|n|}(l+|n|+h+1)_{|n|}}{(z_1+1)_{n_1}\cdots (z_d+1)_{n_d}} \frac{u^{2(l+|n|)}}{l!n_1!\cdots n_d!}\right\} \bigl(-(2d-1)\bigr)^{l}\\ &=C_zu^{h+1}\sum^{\infty}_{k=0}\left\{\sum_{n=(n_1,\ldots,n_d)\in(\mathbb{Z}_{\ge 0})^d \atop |n|\le k}\frac{(h+1)_{k}(k+h+1)_{|n|}}{(z_1+1)_{n_1}\cdots (z_d+1)_{n_d}} \frac{\bigl(-(2d-1)\bigr)^{k-|n|}}{(k-|n|)!n_1!\cdots n_d!}\right\} u^{2k}\\ &=C_zu^{h+1}\sum^{\infty}_{k=0}\frac{(h+1)_{k}}{k!}\left\{\sum_{n=(n_1,\ldots,n_d)\in(\mathbb{Z}_{\ge 0})^d \atop |n|\le k}\frac{(-k)_{|n|}(k+h+1)_{|n|}}{(z_1+1)_{n_1}\cdots (z_d+1)_{n_d}} \frac{\bigl(\frac{1}{2d-1}\bigr)^{|n|}}{n_1!\cdots n_d!}\right\} \bigl(-(2d-1)u^2\bigr)^{k}.\end{aligned}$$ Therefore we obtain . The equation follows in the same manner as .
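The proposition can be tested numerically. The sketch below is ours (exact rational arithmetic, with ad hoc truncations): it checks the closed binomial form of $A^{(2)}_z(n)$ given by the Vandermonde identity, the classical Jacobi evaluation at $d=1$, and the transformation $G^{(d)}_{z}(u)=F^{(d)}_{z}\bigl(\frac{1+(2d-1)u^2}{u}\bigr)$ to high precision.

```python
from fractions import Fraction
from math import comb, factorial

def poch(a, n):
    """Rising factorial (a)_n over the rationals."""
    out = Fraction(1)
    for i in range(n):
        out *= Fraction(a) + i
    return out

def A(z, n):
    """A^(d)_z(n) summed directly from its definition (d = 1 or 2 suffices)."""
    if len(z) == 1:
        return 1  # binom(n;n) * binom(n+z1;n+z1)
    z1, z2 = z
    return sum(comb(n, n1) * comb(n + z1 + z2, n1 + z1) for n1 in range(n + 1))

def F_series(z, x, terms=40):
    """F^(d)_z(x) via the series expansion in 4/x^2 (rational x, |x| large)."""
    h = sum(z)
    total = Fraction(0)
    for n in range(terms):
        total += (A(z, n) * poch(Fraction(h + 1, 2), n) * poch(Fraction(h + 2, 2), n)
                  / poch(h + 1, n) * Fraction(4) ** n / x ** (2 * n) / factorial(n))
    return total / x ** (h + 1)

def gen_jacobi(z, beta, k, d):
    """P^(z,beta)_{d,k} evaluated at x = (2d-3)/(2d-1), where (1-x)/2 = 1/(2d-1)."""
    h = sum(z)
    c = comb(h, z[0]) if len(z) == 2 else 1  # multinomial coefficient C_z
    total = Fraction(0)
    for n in range(k + 1):
        total += (A(z, n) * poch(-k, n) * poch(k + h + beta + 1, n)
                  / poch(h + 1, n) * Fraction(1, 2 * d - 1) ** n / factorial(n))
    return poch(h + 1, k) / factorial(k) * total / c

def G_series(z, u, d=2, terms=40):
    """Right hand side of the expansion of G^(d)_z(u) (rational u, |u| small)."""
    h = sum(z)
    c = comb(h, z[0]) if len(z) == 2 else 1
    return c * u ** (h + 1) * sum(
        gen_jacobi(z, 0, k, d) * (-(2 * d - 1) * u ** 2) ** k for k in range(terms))
```

At $d=1$ the identity collapses to the geometric series $G^{(1)}_h(u)=u^{h+1}/(1-u^2)$, which gives an independent check of the transcription.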
\[ex:d=1\] When $d=1$, we have respectively from and $$\begin{aligned} F^{(1)}_h(x) &=\frac{1}{x^{h+1}} {}_2F_{1}\left( \begin{array}{c} \frac{h}{2}+\frac{1}{2},\frac{h}{2}+1\\[3pt] h+1 \end{array} ;\,\frac{4}{x^2} \right), \nonumber\\ \label{for:G1} G^{(1)}_h(u) &=u^{h+1}\sum^{\infty}_{k=0}P^{(h,0)}_{1,k}(-1)(-u^2)^{k}=\frac{u^{h+1}}{1-u^2}.\end{aligned}$$ Here, in the last equality in , we have used the well-known formula $$\label{for:Jacobi-1} P^{(\alpha,\beta)}_{1,k}(-1)=(-1)^k\binom{k+\beta}{k}.$$ We remark that is also obtained from the Pfaff transformation $${}_2F_{1}\left( \begin{array}{c} \frac{a}{2},\frac{a+1}{2}\\[3pt] a-b+1 \end{array} ;\,\frac{4x}{(1+x)^2} \right) = (1+x)^a {}_2F_{1}\left( \begin{array}{c} a,b\\[3pt] a-b+1 \end{array} ;\,x \right)$$ for the Gauss hypergeometric functions ${}_2F_1$. From this, we can say that is a kind of generalization of the Pfaff transformation for $F^{(d)}_C$ with special parameters. \[ex:d=2\] We next consider the case $d=2$. Let $z=(z_1,z_2)\in (\mathbb{Z}_{\ge 0})^2$.
It is easy to see that $$\label{for:A2} A^{(2)}_{z}(n)=\binom{2n+|z|}{n+z_1,n+z_2}.$$ Hence, we have respectively from and $$\begin{aligned} F^{(2)}_z(x) &=\frac{C_z}{x^{|z|+1}} {}_4F_{3}\left( \begin{array}{c} \frac{|z|}{2}+\frac{1}{2},\frac{|z|}{2}+\frac{1}{2},\frac{|z|}{2}+1,\frac{|z|}{2}+1\\[3pt] |z|+1,z_1+1,z_2+1 \end{array} ;\,\frac{16}{x^2} \right),\\ G^{(2)}_z(u) &=C_zu^{|z|+1}\sum^{\infty}_{k=0} P^{(z,0)}_{2,k}\Bigl(\frac{1}{3}\Bigr)(-3u^2)^{k}.\end{aligned}$$ Here, ${}_pF_{q}$ is the generalized hypergeometric function defined by $${}_pF_{q}\left(\begin{array}{c}a_1,\ldots,a_p\\b_1,\ldots,b_q\end{array};x\right) = \sum^{\infty}_{n=0}\frac{(a_1)_{n}\cdots (a_p)_{n}}{(b_1)_{n}\cdots (b_q)_{n}}\frac{x^{n}}{n!}.$$ Moreover, from and , we have for $\alpha=(\alpha_1,\alpha_2)\in(\mathbb{Z}_{\ge 0})^2$ $$\label{for:P2} P^{(\alpha,\beta)}_{2,k}(x) =\frac{(|\alpha|+1)_k}{k!} {}_4F_{3}\left( \begin{array}{c} -k,k+|\alpha|+\beta+1,\frac{|\alpha|}{2}+\frac{1}{2},\frac{|\alpha|}{2}+1\\[3pt] |\alpha|+1,\alpha_1+1,\alpha_2+1 \end{array} ;\,2(1-x) \right).$$ Normalized discrete tori {#sec:normalized_DT} ======================== Prime geodesic theorem for ${\mathrm{DT}}^{(d)}_m$ -------------------------------------------------- In this section, we consider a special case, that is, a normalized discrete torus ${\mathrm{DT}}^{(d)}_m={\mathrm{DT}}^{(d)}_{(m,\ldots,m)}$ for $m\ge 3$. In this case, the result obtained in the previous section takes a simpler form, as below. Here, we put $N^{(d)}_{m}(n)=N^{(d)}_{(m,\ldots,m)}(n)$. \[thm:main2\] For $n\ge 3$, it holds that $$\label{for:PGTforNDT} N^{(d)}_{m}(n) =m^d\sum_{0\le h\le \frac{n}{m} \atop mh\equiv n \!\!\!\!\!
\pmod{2}}\sum_{\mu\vdash h \atop l(\mu)\le d}m(\mu)X^{(d)}_{m,h}(n;\mu),$$ where, for a partition $\mu=(\mu_1,\ldots,\mu_l)=(1^{m_1(\mu)}\cdots h^{m_{h}(\mu)})\vdash h$ of length $l(\mu)=l\le d$ with $m_j(\mu)$ being the multiplicity of $j$ in $\mu$, $$\begin{aligned} \label{def:multiplicity} m(\mu) &=2^{l}\binom{d}{l}u(\mu) \ \ \text{with} \ \ u(\mu)=\binom{l}{m_1(\mu),\ldots,m_h(\mu)}, \\ \nonumber X^{(d)}_{m,h}(n;\mu) &=2(d-1)\delta_{h,0}+\frac{2n\bigl(-(2d-1)\bigr)^{\frac{n-mh}{2}}}{n+mh}\binom{mh}{m\mu_1,\ldots,m\mu_l} P^{(m\mu,-1)}_{d,\frac{n-mh}{2}}\Bigl(\frac{2d-3}{2d-1}\Bigr).\end{aligned}$$ Consider the case $d=1$. Let us check the trivial result $$N^{(1)}_{m}(n) = \begin{cases} 0 & m\nmid n,\\ 2m & m\,|\,n \end{cases}$$ from our formula. First, we have $m(\mu)=1$ if $\mu=0$ (the empty partition) and $2$ otherwise. Moreover, from , it holds that $$\begin{aligned} X^{(1)}_{m,h}(n;\mu) &= \begin{cases} 1 & m\,|\,n \ \ \text{and} \ \ h=\frac{n}{m}, \\ 0 & \text{otherwise}. \end{cases}\end{aligned}$$ Hence, from , one can actually obtain the desired formula. We next consider the case $d=2$. It holds that $$\begin{aligned} N^{(2)}_{m}(n) &=m^2\sum_{0\le h\le \frac{n}{m} \atop mh\equiv n \!\!\!\!\! \pmod{2}}\sum_{\mu\vdash h \atop l(\mu)\le 2}m(\mu)X^{(2)}_{m,h}(n;\mu)\\ &=m^2\left(\delta^{\mathrm{e}}_n X^{(2)}_{m,0}(n;0)+4\sum_{1\le h\le \frac{n}{m} \atop mh\equiv n \!\!\!\!\! \pmod{2}} \left\{X^{(2)}_{m,h}\bigl(n;(h)\bigr)+\sum_{\mu\vdash h \atop l(\mu)=2}u(\mu)X^{(2)}_{m,h}(n;\mu)\right\}\right).\end{aligned}$$ Here, $\delta^{\mathrm{e}}_n=1$ if $n$ is even and $0$ otherwise. Notice that, for $\mu=(\mu_1,\mu_2)\vdash h$ with $l(\mu)=2$, $u(\mu)=1$ if $\mu_1=\mu_2$ and $2$ otherwise. 
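The multiplicity $m(\mu)$ appearing in Theorem \[thm:main2\] can be cross-checked against the defining lattice-point count $m^{(d)}_{M}(z)$. The sketch below is ours (function names are ad hoc) and confirms $m(\mu)=2^{l}\binom{d}{l}u(\mu)$ in several small cases.

```python
from itertools import product
from math import comb, factorial

def multiplicity_bruteforce(mu, d, m):
    """m^(d)_{(m,...,m)}(z) for z = m*mu (mu padded with zeros to length d):
    the number of y in Z^d with {z_1,...,z_d} = {m|y_1|,...,m|y_d|}
    as multisets, counted by brute force."""
    z = sorted([m * part for part in mu] + [0] * (d - len(mu)))
    radius = max(z) // m
    return sum(1 for y in product(range(-radius, radius + 1), repeat=d)
               if sorted(m * abs(c) for c in y) == z)

def multiplicity_formula(mu, d):
    """m(mu) = 2^l * binom(d, l) * u(mu) from Theorem [thm:main2]."""
    l = len(mu)
    counts = {}
    for part in mu:
        counts[part] = counts.get(part, 0) + 1
    u = factorial(l)
    for c in counts.values():
        u //= factorial(c)  # u(mu) is the multinomial over the multiplicities
    return 2 ** l * comb(d, l) * u
```

The factor $2^l$ accounts for the signs of the nonzero $y_j$, $\binom{d}{l}$ for the choice of their positions, and $u(\mu)$ for the orderings of distinct parts.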
Using , we have $$X^{(2)}_{m,0}(n;0) =2+2(-3)^{\frac{n}{2}} {}_3F_{2}\left( \begin{array}{c} -\frac{n}{2},\frac{n}{2},\frac{1}{2}\\[3pt] 1,1 \end{array} ;\,\frac{4}{3} \right).$$ Moreover, for $h\ge 1$ and $\mu=(\mu_1,\mu_2)\vdash h$ with $l(\mu)=2$, letting $k=\frac{n-mh}{2}$, we have $$\begin{aligned} X^{(2)}_{m,h}(n;(h)) &=\frac{2n(-3)^{k}}{n+mh}\frac{(mh+1)_k}{k!} {}_4F_{3}\left( \begin{array}{c} -k,k+mh,\frac{mh}{2}+\frac{1}{2},\frac{mh}{2}+1\\[3pt] mh+1,mh+1,1 \end{array} ;\,\frac{4}{3} \right),\\ X^{(2)}_{m,h}\bigl(n;(\mu_1,\mu_2)\bigr) &=\frac{2n(-3)^{k}}{n+mh}\binom{mh}{m\mu_1,m\mu_2}\frac{(mh+1)_k}{k!} {}_4F_{3}\left( \begin{array}{c} -k,k+mh,\frac{mh}{2}+\frac{1}{2},\frac{mh}{2}+1\\[3pt] m\mu_1+1,m\mu_2+1,mh+1 \end{array} ;\,\frac{4}{3} \right).\end{aligned}$$ For example, let us consider the case $m=3$ and $n=6$. Since $$N^{(2)}_{3}(6) =9\left( X^{(2)}_{3,0}(6;0) +4X^{(2)}_{3,2}\bigl(6;(2)\bigr) +4X^{(2)}_{3,2}\bigl(6;(1,1)\bigr) \right)$$ with $$\begin{aligned} X^{(2)}_{3,0}(6;0) &=2+2(-27) {}_3F_{2}\left( \begin{array}{c} -3,3,\frac{1}{2}\\[3pt] 1,1 \end{array} ;\,\frac{4}{3} \right) =2+2(-27)\Bigl(-\frac{11}{27}\Bigr) =24, \\ X^{(2)}_{3,2}\bigl(6;(2)\bigr) &={}_4F_{3}\left( \begin{array}{c} 0,6,\frac{7}{2},4\\[3pt] 7,7,1 \end{array} ;\,\frac{4}{3} \right) =1,\\ X^{(2)}_{3,2}\bigl(6;(1,1)\bigr) &=\binom{6}{3} {}_4F_{3}\left( \begin{array}{c} 0,6,\frac{7}{2},4\\[3pt] 4,4,7 \end{array} ;\,\frac{4}{3} \right) =20,\end{aligned}$$ we have $$N^{(2)}_{3}(6) =9\bigl(1\cdot 24+4\cdot 1+4\cdot 20\bigr) =9\cdot 108 =972.$$ For the other values of $X^{(2)}_{3,h}(n;\mu)$ with $n\le 10$, see the table below.
   $n$    $h=0$         $h=1$        $h=2$                      $h=3$                  $\frac{N^{(2)}_3(n)}{3^2}$
  ------ ------------- ------------ -------------------------- ---------------------- ----------------------------
   $3$                  $X(1)=1$                                                       4
   $4$    $X(0)=8$                                                                     8
   $5$                  $X(1)=10$                                                      40
   $6$    $X(0)=24$                 $X(2)=1$, $X(1^2)=20$                              108
   $7$                  $X(1)=42$                                                      168
   $8$    $X(0)=216$                $X(2)=40$, $X(1^2)=80$                             696
   $9$                  $X(1)=414$                             $X(3)=1$, $X(21)=84$    2332
   $10$   $X(0)=1520$               $X(2)=420$, $X(1^2)=840$                           6560

  : The values of $X(\mu)=X^{(2)}_{3,h}(n;\mu)$

Some observations ----------------- From the table above, we first expect the following. \[conj:Xinteger\] It holds that $X^{(d)}_{m,h}(n;\mu)\in\mathbb{Z}_{\ge 0}$. If Conjecture \[conj:Xinteger\] is true, then it seems that $X^{(d)}_{m,h}(n;\mu)$ counts some special type of cycles in ${\mathrm{DT}}^{(d)}_{m}$. To see this, let us subdivide $N^{(d)}_{m}(n)$ into small pieces in the following manner. Fix $o\in V^{(d)}_m$. We notice that there is a one-to-one correspondence between a cycle $C$ in ${\mathrm{DT}}^{(d)}_{m}$ starting from and ending at $o$ of length $n$ and a path $\overline{C}$ in $\mathbb{Z}^d$ starting from the origin $(0,\ldots,0)$ and ending at $(mp_1,\ldots,mp_d)$ for some $(p_1,\ldots,p_d)\in\mathbb{Z}^d$ of length $n$. Let us call a path $\overline{C}$ in $\mathbb{Z}^d$ [*reduced modulo $m$*]{} if the corresponding cycle $C$ in ${\mathrm{DT}}^{(d)}_{m}$ is reduced. Let $\mathrm{RC}^{(d)}_m(n)$ be the set of all reduced cycles in ${\mathrm{DT}}^{(d)}_{m}$ starting from and ending at $o$ of length $n$ and, for $p=(p_1,\ldots,p_d)\in\mathbb{Z}^d$, $\overline{\mathrm{RP}}^{(d)}_{m}(n;p)$ the set of all reduced paths modulo $m$ in $\mathbb{Z}^d$ starting from the origin $(0,\ldots,0)$ and ending at $(mp_1,\ldots,mp_d)$ of length $n$. It is clear that $$\begin{aligned} N^{(d)}_{m}(n) &=m^d\#\mathrm{RC}^{(d)}_m(n)\\ &=m^d\sum_{p\in\mathbb{Z}^d}\#\overline{\mathrm{RP}}^{(d)}_{m}\bigl(n;p\bigr)\\ &=m^d\sum_{0\le h\le \frac{n}{m} \atop mh\equiv n \!\!\!\!\!
\pmod{2}}\sum_{\mu\vdash h \atop l(\mu)\le d}m(\mu)N^{(d)}_{m,h}(n;\mu),\end{aligned}$$ where, for $\mu=(\mu_1,\ldots,\mu_d)\vdash h$ of length $l(\mu)=l\le d$, $m(\mu)$ is defined in and $N^{(d)}_{m,h}(n;\mu)$ is the number of all reduced paths modulo $m$ in $\mathbb{Z}^d$ starting from the origin $(0,\ldots,0)$ and ending at $(m\mu_1,\ldots,m\mu_d)$ of length $n$. Notice that $m^d$ represents the number of choices of the starting points. Now, it is natural from to expect the following. \[conj:XN\] It holds that $X^{(d)}_{m,h}(n;\mu)=N^{(d)}_{m,h}(n;\mu)$. It is clear that Conjecture \[conj:Xinteger\] follows from Conjecture \[conj:XN\] because $N^{(d)}_{m,h}(n;\mu)\in\mathbb{Z}_{\ge 0}$. The following figures support that Conjecture \[conj:XN\] is true for the case $d=2$, $m=3$ and $n=6$, that is, $N^{(2)}_{3,0}(6;(0))=X^{(2)}_{3,0}(6;(0))=24$, $N^{(2)}_{3,2}(6;(2))=X^{(2)}_{3,2}(6;(2))=1$ and $N^{(2)}_{3,2}(6;(1,1))=X^{(2)}_{3,2}(6;(1,1))=20$. Here, the black dots in the figures represent the lattice points. In particular, the big black dots denote points of the form $(mp_1,mp_2)$ for some $(p_1,p_2)\in\mathbb{Z}^2$. In the same manner, we have already checked that the equation $X^{(d)}_{m,h}(n;\mu)=N^{(d)}_{m,h}(n;\mu)$ holds for $n\le 10$. ![$N^{(2)}_{3,0}(6;(0))=24$; it is the number of all reduced paths $\overline{C}$ modulo $3$ in $\mathbb{Z}^2$ starting from $(0,0)$ and ending at $(0,0)$ of length $6$.](N00.eps){width="140mm"} ![$N^{(2)}_{3,2}(6;(2))=1$; it is the number of all reduced paths $\overline{C}$ modulo $3$ in $\mathbb{Z}^2$ starting from $(0,0)$ and ending at $(6,0)$ of length $6$.](N2.eps){width="50mm"} ![$N^{(2)}_{3,2}(6;(1,1))=20$; it is the number of all reduced paths $\overline{C}$ modulo $3$ in $\mathbb{Z}^2$ starting from $(0,0)$ and ending at $(3,3)$ of length $6$.](N11.eps){width="150mm"} Let $M$ be a compact Riemannian manifold of negative curvature.
It is known that there exist countably infinitely many closed geodesics in $M$ and that a closed geodesic in $M$ corresponds to a unique non-trivial conjugacy class $\mathrm{Conj}(\gamma)$ of $\gamma\in\pi_1(M)$, where $\pi_1(M)$ is the fundamental group of $M$. Let us write the closed geodesic corresponding to $\mathrm{Conj}(\gamma)$ as $C_{\gamma}$. Let $N_M(x)$ be the number of all closed geodesics in $M$ of length $\le x$. Then, we have the following prime geodesic theorem for $M$ ([@Margulis1969]): $$N_M(x)\sim \frac{e^{hx}}{hx} \quad (x\to\infty),$$ where $h>0$ is the topological entropy of the geodesic flow over $M$. This is an analogue of the classical prime number theorem. Moreover, we also have an analogue of the Dirichlet theorem on arithmetic progressions, which counts the closed geodesics lying in a fixed homology class: Let $H_1(M,\mathbb{Z})$ be the first homology group of $M$ and $\phi:\pi_1(M)\to H_1(M,\mathbb{Z})=\pi_1(M)^{\mathrm{ab}}=\pi_1(M)/[\pi_1(M),\pi_1(M)]$ be the natural projection. Here, $[\pi_1(M),\pi_1(M)]$ is the commutator subgroup of $\pi_1(M)$. For a fixed $\alpha\in H_1(M,\mathbb{Z})$, let $N_M(x;\alpha)$ be the number of all closed geodesics $C=C_{\gamma}$ in $M$ of length $\le x$ satisfying $\phi(\gamma)=\alpha$. Then, it is shown in [@AdachiSunada1987; @PhillipsSarnak1987; @Lalley1989] that there exists a constant $C>0$, not depending on $\alpha$, such that $$\label{for:PGThomology} N_M(x;\alpha)\sim C\frac{e^{hx}}{x^{\frac{b}{2}+1}} \quad (x\to\infty).$$ Here, $b\in\mathbb{Z}_{\ge 0}$ is the first Betti number of $M$, that is, the rank of $H_1(M,\mathbb{Z})$. Now, we may regard the claim in Conjecture \[conj:XN\] as a graph analogue of . Note that $\pi_1({\mathrm{RT}}^{(d)})=H_1({\mathrm{RT}}^{(d)},\mathbb{Z})=\mathbb{Z}^d$ where ${\mathrm{RT}}^{(d)}$ is the real torus of dimension $d$.
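The numerical evidence for Conjecture \[conj:XN\] can be reproduced mechanically: the hypergeometric side is a terminating series, hence exact in rational arithmetic, while the path side is a finite enumeration. The sketch below is ours and reconfirms the case $d=2$, $m=3$, $n=6$ discussed above.

```python
from fractions import Fraction
from itertools import product
from math import comb, factorial

def poch(a, n):
    """Rising factorial (a)_n over the rationals."""
    out = Fraction(1)
    for i in range(n):
        out *= Fraction(a) + i
    return out

def hyper(tops, bots, x, terms=12):
    """Partial sum of pFq; exact here since one top parameter is a
    nonpositive integer, so the series terminates within `terms`."""
    total = Fraction(0)
    for n in range(terms):
        term = Fraction(x) ** n / factorial(n)
        for a in tops:
            term *= poch(a, n)
        for b in bots:
            term /= poch(b, n)
        total += term
    return total

STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def reduced_paths(n, endpoint):
    """Brute-force N^(2)_{3,h}(n;mu): reduced-mod-3 paths in Z^2 from the
    origin to `endpoint` (no backtracking, no tail at the base point)."""
    count = 0
    for seq in product(STEPS, repeat=n):
        if any(b == (-a[0], -a[1]) for a, b in zip(seq, seq[1:])):
            continue
        if seq[0] == (-seq[-1][0], -seq[-1][1]):
            continue
        if (sum(s[0] for s in seq), sum(s[1] for s in seq)) == endpoint:
            count += 1
    return count

# Hypergeometric side for d = 2, m = 3, n = 6 (the worked example):
X0 = 2 + 2 * Fraction(-3) ** 3 * hyper([-3, 3, Fraction(1, 2)], [1, 1], Fraction(4, 3))
X2 = hyper([0, 6, Fraction(7, 2), 4], [7, 7, 1], Fraction(4, 3))
X11 = comb(6, 3) * hyper([0, 6, Fraction(7, 2), 4], [4, 4, 7], Fraction(4, 3))
```

Both sides give $24$, $1$ and $20$ for $\mu=(0)$, $(2)$ and $(1,1)$ respectively, and the weighted total recovers $N^{(2)}_{3}(6)=972$.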
Acknowledgment {#acknowledgment .unnumbered} ============== The author would like to thank Professor Hiroyuki Ochiai for carefully reading the manuscript and giving many helpful comments. [999999]{} T. Adachi and T. Sunada, Homology of closed geodesics in a negatively curved manifold, [*J. Diff. Geom.*]{}, [**26**]{} (1987), 81–99. G. Chinta, J. Jorgenson and A. Karlsson, Zeta functions, heat kernels, and spectral asymptotics on degenerating families of discrete tori, [*Nagoya Math. J.*]{}, [**198**]{} (2010), 121–172. G. Chinta, J. Jorgenson and A. Karlsson, Complexity and heights of tori, Dynamical systems and group actions, 89–98, [*Contemp. Math.*]{}, [**567**]{}, Amer. Math. Soc., Providence, RI, 2012. G. Chinta, J. Jorgenson and A. Karlsson, Heat kernels on regular graphs and generalized Ihara zeta function formulas, [*Monatsh. Math.*]{}, [**178**]{} (2015), 171–190. F. Chung, Spectral graph theory. CBMS Regional Conference Series in Mathematics, 92. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1997. Y. Ihara, On discrete subgroups of the two by two projective linear group over $p$-adic fields, [*J. Math. Soc. Japan*]{}, [**18**]{} (1966), 219–235. A. Karlsson, Applications of heat kernels on abelian groups: $\zeta(2n)$, quadratic reciprocity, Bessel integrals. Number theory, analysis and geometry. In memory of Serge Lang, Springer Verlag, 307–320 2012. A. Karlsson and M. Neuhauser, Heat kernels, theta identities, and zeta functions on cyclic groups, Topological and asymptotic aspects of group theory, 177–189, Contemp. Math., 394, Amer. Math. Soc., Providence, RI, 2006. M. Kotani and T. Sunada, Zeta functions of finite graphs, [*J. Math. Sci. Univ. Tokyo*]{}, [**7**]{} (2000), 7–25. S. P. Lalley, Closed geodesics in homology classes on surfaces of variable negative curvature, [*Duke Math. J.*]{}, [**58**]{} (1989), 795–821. J. 
Louis, Asymptotics for the number of spanning trees in circulant graphs and degenerating $d$-dimensional discrete tori, [*Ann. Comb.*]{}, [**19**]{} (2015), 513–543. J. Louis, A formula for the number of spanning trees in circulant graphs with nonfixed generators and discrete tori, [*Bull. Aust. Math. Soc.*]{}, [**92**]{} (2015), 365–373. G. Margulis, Applications of ergodic theory to the investigation of manifolds of negative curvatures, [*Funkt. Anal. i Ego Pril.*]{}, [**3**]{} (1969), 89–90. R. Phillips and P. Sarnak, Geodesics in homology classes, [*Duke Math. J.*]{}, [**55**]{} (1987), 287–297. A. Terras, Zeta functions of graphs. A stroll through the garden, Cambridge Studies in Advanced Mathematics, 128. Cambridge University Press, Cambridge, 2011. <span style="font-variant:small-caps;">Yoshinori YAMASAKI</span>\ Graduate School of Science and Engineering, Ehime University,\ Bunkyo-cho, Matsuyama, 790-8577 JAPAN.\ `yamasaki@math.sci.ehime-u.ac.jp` [^1]: Partially supported by Grant-in-Aid for Scientific Research (C) No. 15K04785.
--- abstract: | It is known that every $R$-module has a flat precover. We show in this paper that every $R$-module has a Gorenstein flat precover.\ address: - 'School of Mathematics, Physics and Software Engineering, Lanzhou Jiaotong University, Lanzhou [730070]{}, P.R. China' - 'School of Mathematics, Physics and Software Engineering, Lanzhou Jiaotong University, Lanzhou [730070]{}, P.R. China' author: - Gang Yang - Li Liang title: All modules have Gorenstein flat precovers --- **Introduction** ================ A class $\mathcal{L}$ of objects of an abelian category $\mathcal{C}$ is called a precovering class [@Enoc81] if every object of $\mathcal{C}$ has an $\mathcal{L}$-precover (see Definition 2.2). In the language of [@AR] this means that $\mathcal{L}$ is a contravariantly finite subcategory. Precovering classes (or contravariantly finite subcategories) play an important role in homological algebra. One of the reasons is that one can construct proper $\mathcal{L}$-resolutions using a precovering class $\mathcal{L}$ to compute homology and cohomology (see [@EJ00] for details). For any ring $R$, recall from [@EJT93] that an $R$-module $G$ is Gorenstein flat if there exists an exact sequence $\cdots\rightarrow F^{-2}\rightarrow F^{-1}\rightarrow F^0\rightarrow F^1\rightarrow F^2\rightarrow\cdots$ of flat $R$-modules with $G=\text{Ker}(F^0\rightarrow F^1)$ such that $I\otimes_R-$ leaves the sequence exact whenever $I$ is an injective right $R$-module. Obviously, flat $R$-modules are Gorenstein flat. Further studies on Gorenstein flat $R$-modules can be found in [@Ben09; @EJ00; @EJLR04; @EJT93; @Holm04a]. Bican, El Bashir and Enochs [@BBE01] proved that the class of flat $R$-modules is a precovering class. On the other hand, Enochs, Jenda and López-Ramos [@EJLR04] proved that the class of Gorenstein flat $R$-modules is a precovering class over a right coherent ring.
Furthermore, it was shown in [@YL] that the result holds over a left GF-closed ring (that is, a ring over which the class of Gorenstein flat $R$-modules is closed under extensions). In this paper, we prove that the class of Gorenstein flat $R$-modules is a precovering class over any ring as follows. **Theorem A.** *Let $R$ be any ring. Then every $R$-module has a Gorenstein flat precover.* We prove the above result by constructing a perfect cotorsion pair in the category of complexes of $R$-modules. **Preliminaries** ================= Throughout the paper, we assume all rings have an identity and all modules are unitary. Unless stated otherwise, an $R$-module will be understood to be a left $R$-module. For a complex $\xymatrix@C=0.6cm{C= \cdots \ar[r]^{} & C^{m-1} \ar[r]^{d^{m-1}} & C^m \ar[r]^{d^{m}} & C^{m+1} \ar[r]^{d^{m+1}} & \cdots },$ the $m$th cycle is defined as ${\mbox{\rm Ker}}(d^m)$ and is denoted $\text{Z}^m(C)$. The $m$th boundary is $\text{Im}(d^{m-1})$ and is denoted $\text{B}^m(C)$. The $m$th homology of $C$ is the module $$\text{H}^m(C)=\text{Z}^m(C)/\text{B}^m(C).$$ A complex $C$ is exact if $\text{H}^m(C)=0$ for all $m\in\mathbb{Z}$. For an integer $n$, $C[n]$ denotes the complex such that $C[n]^m=C^{m+n}$ and whose boundary operators are $(-1)^nd^{m+n}$. Given an $R$-module $M$, we denote by $\overline{M}$ the complex $$\xymatrix@C=0.6cm{ \cdots \ar[r]^{ } & 0 \ar[r]^{ } & M \ar[r]^{id} & M\ar[r]^{ } & 0 \ar[r]^{ } & \cdots }$$ with $M$ in degrees $-1$ and $0$, and $\underline{M}$ the complex $$\xymatrix@C=0.6cm{ \cdots \ar[r]^{ } & 0 \ar[r]^{ } & M \ar[r]^{ } & 0 \ar[r]^{ } & \cdots }$$ with $M$ in degree $0$. A complex $C$ is finitely presented (generated) if only finitely many components are nonzero and each $C^m$ is finitely presented (generated). Clearly, both $\overline{R}$ and $\underline{R}$ are finitely presented.
Recall that a complex $P$ is projective if it is exact and $\text{Z}^m(P)$ is a projective $R$-module for each $m\in \mathbb{Z}$, so it is easy to see that $P$ is a direct sum of complexes of the form $\overline{Q}[m]$ with $Q$ a projective $R$-module. Given two complexes $X$ and $Y$, we let ${\mbox{\rm Hom}}^\bullet(X, Y)$ denote a complex of $\mathbb{Z}$-modules with $m$th component $${\mbox{\rm Hom}}^\bullet(X, Y)^m=\prod_{t\in \mathbb{Z}}{\mbox{\rm Hom}}(X^t, Y^{m+t})$$ and such that if $f\in{\mbox{\rm Hom}}^\bullet(X, Y)^m$ then $$(d^m(f))^n=d_Y^{n+m}\circ f^n-(-1)^{m}f^{n+1}\circ d_X^n.$$ We say that $f:X\rightarrow Y$ is a morphism of complexes if $d_Y^{n}\circ f^n=f^{n+1}\circ d_X^n$ for all $n\in \mathbb{Z}$. ${\mbox{\rm Hom}}(X, Y)$ denotes the set of morphisms of complexes from $X$ to $Y$ and ${\mbox{\rm Ext}}^i(X, Y)$ $(i\geq1)$ are the right derived functors of ${\mbox{\rm Hom}}$. Obviously, ${\mbox{\rm Hom}}(X, Y)=\text{Z}^0({\mbox{\rm Hom}}^\bullet(X, Y))$. We let $\underline{{\mbox{\rm Hom}}}(X, Y)$ denote the complex with $\underline{{\mbox{\rm Hom}}}(X, Y)^m$ the abelian group of morphisms from $X$ to $Y[m]$ and with boundary operator given as follows: for $f\in\underline{{\mbox{\rm Hom}}}(X, Y)^m$, $d^m(f): X\rightarrow Y[m+1]$ with $d^m(f)^n=(-1)^md_Y\circ f^n$, $\forall n\in \mathbb{Z}$. We note that the new functor $\underline{{\mbox{\rm Hom}}}(X, Y)$ has right derived functors whose values will be complexes. These values should certainly be denoted $\underline{{\mbox{\rm Ext}}}^i(X, Y)$. It is not hard to see that $\underline{{\mbox{\rm Ext}}}^i(X, Y)$ is the complex $$\cdots\rightarrow{{\mbox{\rm Ext}}}^i(X, Y[n-1])\rightarrow {{\mbox{\rm Ext}}}^i(X, Y[n])\rightarrow{{\mbox{\rm Ext}}}^i(X, Y[n+1])\rightarrow\cdots$$ with boundary operator induced by the boundary operator of $Y$. If $X$ is a complex of right $R$-modules and $Y$ is a complex of left $R$-modules, let $X\otimes^\bullet Y$ be the usual tensor product of complexes.
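As a quick consistency check (ours, not part of the original text), the boundary operator on ${\mbox{\rm Hom}}^\bullet(X, Y)$ defined above does square to zero, using $d_X\circ d_X=0$ and $d_Y\circ d_Y=0$:

```latex
% Expand (d^{m+1}(d^m(f)))^n componentwise for f in Hom^\bullet(X,Y)^m:
\begin{aligned}
\bigl(d^{m+1}(d^m(f))\bigr)^n
  &= d_Y^{n+m+1}\circ\bigl(d^m(f)\bigr)^n-(-1)^{m+1}\bigl(d^m(f)\bigr)^{n+1}\circ d_X^n\\
  &= d_Y^{n+m+1}\circ\Bigl(d_Y^{n+m}\circ f^n-(-1)^{m}f^{n+1}\circ d_X^n\Bigr)
     +(-1)^{m}\Bigl(d_Y^{n+m+1}\circ f^{n+1}-(-1)^{m}f^{n+2}\circ d_X^{n+1}\Bigr)\circ d_X^n\\
  &= -(-1)^{m}\,d_Y^{n+m+1}\circ f^{n+1}\circ d_X^n
     +(-1)^{m}\,d_Y^{n+m+1}\circ f^{n+1}\circ d_X^n \;=\; 0.
\end{aligned}
```

The first and last terms of the expansion vanish because $d_Y^2=0$ and $d_X^2=0$, and the two cross terms cancel against each other.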
I.e., $X\otimes^\bullet Y$ is the complex of abelian groups with $$(X\otimes^\bullet Y)^m=\bigoplus_{t\in \mathbb{Z}}X^t\otimes_R Y^{m-t}$$ and $$d(x\otimes y)=d_X^t(x)\otimes y+(-1)^{t}x\otimes d_Y^{m-t}(y)$$ for $x\in X^t$ and $y\in Y^{m-t}$. Obviously, $\underline{M}\otimes^\bullet Y=M\otimes_R Y= \cdots\rightarrow M\otimes_R Y^{-1}\rightarrow M\otimes_R Y^0\rightarrow M\otimes_R Y^1\rightarrow\cdots$ for a right $R$-module $M$. We define $X\otimes Y$ to be $\frac{(X\otimes^\bullet Y)}{\text{B}(X\otimes^\bullet Y)}$. Then with the maps $$\frac{(X\otimes^\bullet Y)^n}{\text{B}^n(X\otimes^\bullet Y)}\rightarrow \frac{(X\otimes^\bullet Y)^{n+1}}{\text{B}^{n+1}(X\otimes^\bullet Y)}, \quad x\otimes y\mapsto d_X(x)\otimes y,$$ where $x\otimes y$ is used to denote the coset in $\frac{(X\otimes^\bullet Y)^n}{\text{B}^n(X\otimes^\bullet Y)}$, we get a complex of abelian groups. One can find the next result in [@Garc99 Proposition 4.2.1]. \[l2.1\] Let $X$, $Y$, $Z$ be complexes. Then we have the following natural isomorphisms: 1. $X\otimes(Y\otimes Z)\cong (X\otimes Y)\otimes Z$; 2. For a right $R$-module $M$, $\overline{M}[n]\otimes Y\cong M\otimes_R Y[n]$; 3. $X\otimes (\varinjlim Y_i)\cong \varinjlim (X\otimes Y_i)$ for a directed family $(Y_i)_{i\in I}$ of complexes. Let $\mathcal{L}$ be a class of objects of an abelian category $\mathcal{C}$ and $X$ an object. A homomorphism $f: L\rightarrow X$ is called an $\mathcal{L}$-precover if $L\in\mathcal{L}$ and the abelian group homomorphism $\text{Hom}(L', f): \text{Hom}(L', L)\rightarrow \text{Hom}(L', X)$ is surjective for each $L'\in\mathcal{L}$. An $\mathcal{L}$-precover $f: L\rightarrow X$ is called an $\mathcal{L}$-cover if every endomorphism $g: L\rightarrow L$ such that $fg=f$ is an isomorphism. Dually we have the definitions of an $\mathcal{L}$-preenvelope and an $\mathcal{L}$-envelope.
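A short verification (ours, not in the source) that $X\otimes Y$ really is a complex: the displayed maps are well defined on the quotient and square to zero.

```latex
% Well-definedness: the map u (x) v -> d_X(u) (x) v sends boundaries to boundaries.
% For x' in X^{t'}, applying the map to d(x' (x) y') = d_X(x') (x) y' + (-1)^{t'} x' (x) d_Y(y')
% gives (-1)^{t'} d_X(x') (x) d_Y(y'), which is itself a boundary, since
d\bigl(x'\otimes d_Y(y')\bigr)
  = d_X(x')\otimes d_Y(y')+(-1)^{t'}\,x'\otimes d_Y^{2}(y')
  = d_X(x')\otimes d_Y(y').
% Squaring to zero: applying the induced map twice,
x\otimes y\ \longmapsto\ d_X(x)\otimes y\ \longmapsto\ d_X^{2}(x)\otimes y=0.
```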
A pair $(\mathcal{A}, \mathcal{B})$ in an abelian category $\mathcal{C}$ is called a cotorsion pair if the following conditions hold: 1. ${\mbox{\rm Ext}}^1_\mathcal{C}(A, B)=0$ for all $A\in\mathcal{A}$ and $B\in\mathcal{B}$; 2. If ${\mbox{\rm Ext}}^1_\mathcal{C}(A, X)=0$ for all $A\in\mathcal{A}$ then $X\in\mathcal{B}$; 3. If ${\mbox{\rm Ext}}^1_\mathcal{C}(X, B)=0$ for all $B\in\mathcal{B}$ then $X\in\mathcal{A}$. We think of a cotorsion pair $(\mathcal{A}, \mathcal{B})$ as being “orthogonal with respect to ${\mbox{\rm Ext}}^1_\mathcal{C}$”. This is often expressed with the notation $\mathcal{A}={^\perp\mathcal{B}}$ and $\mathcal{B}=\mathcal{A}^\perp$. The notion of a cotorsion pair was first introduced by Salce in [@S79] and rediscovered by Enochs and coauthors in the 1990s. Its importance in homological algebra has been shown by its use in the proof of the existence of flat covers of modules over any ring [@BBE01]. A cotorsion pair $(\mathcal{A}, \mathcal{B})$ is said to be complete if for any object $X$ there are exact sequences $0\rightarrow X\rightarrow B\rightarrow A\rightarrow 0$ and $0\rightarrow B'\rightarrow A'\rightarrow X\rightarrow 0$ with $A, A'\in \mathcal{A}$ and $B, B'\in \mathcal{B}$. A cotorsion pair $(\mathcal{A}, \mathcal{B})$ is said to be cogenerated by a set if there is a set $\mathcal{S}\subset \mathcal{A}$ such that $\mathcal{S}^\bot=\mathcal{B}$. By a well-known theorem of Eklof and Trlifaj [@ET01], a cotorsion pair $(\mathcal{A}, \mathcal{B})$ is complete if it is cogenerated by a set (see [@BBE01]). A cotorsion pair $(\mathcal{A}, \mathcal{B})$ is said to be perfect if every object has an $\mathcal{A}$-cover and a $\mathcal{B}$-envelope.
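For orientation (standard examples, not taken from this paper), two extreme cotorsion pairs in the category of $R$-modules, together with the pair behind the flat cover theorem, may be kept in mind:

```latex
% (Proj, R-Mod): Ext^1(P,-) vanishes on all modules exactly when P is projective.
(\mathcal{P}roj,\ R\text{-}\mathrm{Mod})
% (R-Mod, Inj): Ext^1(-,I) vanishes on all modules exactly when I is injective.
(R\text{-}\mathrm{Mod},\ \mathcal{I}nj)
% (Flat, Cotorsion): the cotorsion pair whose completeness underlies the
% existence of flat covers over any ring [@BBE01].
(\mathcal{F}lat,\ \mathcal{C}otorsion)
```

Theorem A of this paper plays the analogous role for the Gorenstein flat class.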
**All modules have Gorenstein flat precovers** {#ns} ============================================== Recall from [@Garc99] that an exact sequence $0\rightarrow P\rightarrow X\rightarrow X/P\rightarrow 0$ of complexes is *pure* if for any complex $Y$ of right $R$-modules, the sequence $0\rightarrow Y\otimes P\rightarrow Y\otimes X\rightarrow Y\otimes X/P\rightarrow 0$ is exact. We state here the characterizations of purity that can be found in [@Garc99 Theorem 5.1.3]. \[p2.1\] Let $0\rightarrow P\rightarrow X\rightarrow X/P\rightarrow 0$ be an exact sequence of complexes. Then the following statements are equivalent. 1. $0\rightarrow P\rightarrow X\rightarrow X/P\rightarrow 0$ is pure; 2. $0 \rightarrow\underline{{\mbox{\rm Hom}}}(U, P)\rightarrow\underline{{\mbox{\rm Hom}}}(U, X) \rightarrow\underline{{\mbox{\rm Hom}}}(U, X/P)\rightarrow0$ is exact for any finitely presented complex $U$. Recall from [@AF91] that a complex $Q$ is DG-projective if each $R$-module $Q^m$ is projective and ${\mbox{\rm Hom}}^\bullet(Q, E)$ is exact for any exact complex $E$. By [@Garc99 Proposition 2.3.5], a complex $Q$ is DG-projective if and only if ${\mbox{\rm Ext}}^1(Q, E)=0$ for every exact complex $E$. \[l2.2\] Let $0 \rightarrow P \rightarrow X\rightarrow X/P\rightarrow 0$ be a pure exact sequence of complexes. If $X$ is exact then both $P$ and $X/P$ are also exact. By Lemma \[p2.1\], the sequence $\underline{{\mbox{\rm Hom}}}(D,X)\rightarrow\underline{{\mbox{\rm Hom}}}(D,X/P)\rightarrow0$ is exact for every finitely presented complex $D$, and so the sequence $$\underline{{\mbox{\rm Hom}}} (\underline{R},X)\rightarrow\underline{{\mbox{\rm Hom}}}(\underline{R},X/P)\rightarrow0$$ is exact since $\underline{R}$ is finitely presented.
On the other hand, the sequence $$\underline{{\mbox{\rm Hom}}}(\underline{R},X)\rightarrow\underline{{\mbox{\rm Hom}}}(\underline{R},X/P)\rightarrow \underline{{\mbox{\rm Ext}}}^1(\underline{R},P)\rightarrow \underline{{\mbox{\rm Ext}}}^1(\underline{R},X)$$ is exact, where $\underline{{\mbox{\rm Ext}}}^1(\underline{R},X)=0$ since $\underline{R}$ is DG-projective and $X$ is exact. Thus we get that $\underline{{\mbox{\rm Ext}}}^1(\underline{R},P)=0$, and so $\text{H}^{-n+1}(P)\cong {\mbox{\rm Ext}}^1(\underline{R},P[-n])=0$ for all $n\in \mathbb{Z}$. This means that $P$ is an exact complex, and now it is easily seen that $X/P$ is also exact. Let $R$ be a ring. We denote by $\mathbf{E}(R)$ the class of exact complexes of flat $R$-modules that remain exact after applying $I\otimes_R-$ for any injective right $R$-module $I$. Recall that a complex $F$ is flat if $F$ is exact and $\text{Z}^n(F)$ is a flat $R$-module for each $n\in \mathbb{Z}$. Clearly, $\mathbf{E}(R)$ contains all flat complexes. As characterized in [@Gill04] and [@Garc99], there are intimate connections between purity and flatness of complexes. Inspired by this fact we give the following result. \[l3.3\] Let $R$ be any ring and $E\in \mathbf{E}(R)$. If $S\subseteq E$ is pure, then $S$ and $E/S$ are both in $\mathbf{E}(R)$. Let $M$ be any right $R$-module. Then $$0\rightarrow\overline{M}[n]\otimes S\rightarrow \overline{M}[n]\otimes E\rightarrow \overline{M}[n]\otimes E/S\rightarrow0$$ is exact. By Lemma \[l2.1\](2), the sequence $$0\rightarrow M\otimes_R S[n]\rightarrow M\otimes_R E[n]\rightarrow M\otimes_R (E/S)[n]\rightarrow0$$ is exact. Therefore $S^n\subseteq E^n$ is pure for each $n\in\mathbb{Z}$. Since each $E^n$ is flat, we get that $S^n$ and $E^n/S^n$ are flat for each $n\in\mathbb{Z}$. By Lemma \[l2.2\], we get that $S$ and $E/S$ are exact. It remains to show that for any injective right $R$-module $I$, $I\otimes_RS$ and $I\otimes_RE/S$ are exact.
Since the exact sequence $0\rightarrow S\rightarrow E\rightarrow E/S\rightarrow0$ is pure, we get that the sequence $$0\rightarrow\overline{I}\otimes S\rightarrow \overline{I}\otimes E\rightarrow \overline{I}\otimes E/S\rightarrow0$$ is exact and pure by Lemma \[l2.1\](1). Note that $\overline{I}\otimes E\cong I\otimes_R E$ is exact by Lemma \[l2.1\](2), then $\overline{I}\otimes S$ and $\overline{I}\otimes E/S$ are exact by Lemma \[l2.2\], and so $I\otimes_RS$ and $I\otimes_RE/S$ are exact by Lemma \[l2.1\](2). \[l2.4\] Let ${\rm Card}(R)\leq \kappa$, where $\kappa$ is some infinite cardinal. Then for any $F\in\mathbf{E}(R)$ and any element $x\in F$ (by this we mean $x\in F^n$ for some $n$), there exists a subcomplex $L\subseteq F$ with $x\in L$, $L, F/L\in \mathbf{E}(R)$ and ${\rm Card}(L)\leq \kappa$. By [@Gill04 Lemma 4.6], there exists a pure subcomplex $L\subseteq F$ with $x\in L$ and ${\rm Card}(L)\leq \kappa$; then, by Lemma \[l3.3\], we get that $L$ and $F/L$ are contained in $\mathbf{E}(R)$. \[l3.5\] For any ring $R$ the pair $(\mathbf{E}(R), \mathbf{E}(R)^\bot)$ is a perfect cotorsion pair. By Lemma \[l2.1\](3) the class $\mathbf{E}(R)$ is closed under direct limits. Clearly, $\mathbf{E}(R)$ is closed under direct sums, direct summands and extensions. Using Lemma \[l2.4\] and a method similar to that of [@AE01 Remark 3.2], we get that the pair $(\mathbf{E}(R), \mathbf{E}(R)^\bot)$ is cogenerated by a set. On the other hand, the class $\mathbf{E}(R)$ contains all projective complexes. Thus, by [@AE01 Corollaries 2.11, 2.12 and 2.13], the pair $(\mathbf{E}(R), \mathbf{E}(R)^\bot)$ is a perfect cotorsion pair. *Proof of Theorem A.* Let $M$ be any $R$-module and $g: E\rightarrow \underline{M}[1]$ be an $\mathbf{E}(R)$-precover which exists by Lemma \[l3.5\].
This gives the following commutative diagram: $$\xymatrix@C=15pt@R=30pt{ E=: \ \cdots \ar[r]^{} &E^{-2} \ar[rr]^{}\ar[dd]^{} & & E^{-1} \ar[dr]_{\pi}\ar[rr]^{} \ar[dd]^{g^{-1}} & & \ E^0 \ar[rr]^{}\ar[dd]^{} && E^1 \ar[rr]^{}\ar[dd]^{} & & \cdots \\ & \ \ & & \ & G \ar[dd]^{\widetilde{g}} \ar@{.>}[ur]^{} \\ \underline{M}[1]=: \ \cdots \ar[r]^{}&0 \ar[rr]^{} & &M \ar[dr]_{=}\ar[rr]_{} & & 0 \ar[rr]^{} && 0 \ar[rr]^{} & & \cdots \\ & \ \ & &\ & M \ar[ur]^{} }$$ where $G=\text{Z}^0(E)$ is Gorenstein flat. In the following we show that $\widetilde{g}: G\rightarrow M$ is a Gorenstein flat precover of $M$. Let $\widetilde{f}: H\rightarrow M$ be a homomorphism with $H$ Gorenstein flat. Then there exists a complex $F$ in $\mathbf{E}(R)$ such that $H=\text{Z}^0(F)$. Now one can extend $\widetilde{f}$ to a morphism $f: F\rightarrow \underline{M}[1]$ of complexes as follows: $$\xymatrix@C=15pt@R=30pt{ F=: \ \cdots \ar[r]^{} &F^{-2} \ar[rr]^{}\ar[dd]^{} & & F^{-1} \ar[dr]_{\sigma}\ar[rr]^{} \ar[dd]^{f^{-1}} & & \ F^0 \ar[rr]^{}\ar[dd]^{} && F^1 \ar[rr]^{}\ar[dd]^{} & & \cdots \\ & \ \ & & \ & H \ar[dd]^{\widetilde{f}} \ar@{.>}[ur]^{} \\ \underline{M}[1]=: \ \cdots \ar[r]^{}&0 \ar[rr]^{} & &M \ar[dr]_{=}\ar[rr]_{} & & 0 \ar[rr]^{} && 0 \ar[rr]^{} & & \cdots \\ & \ \ & &\ & M \ar[ur]^{} }$$ Since $g: E\rightarrow \underline{M}[1]$ is an $\mathbf{E}(R)$-precover, there exists a morphism $h: F\rightarrow E$ of complexes such that the diagram $$\xymatrix{ & E \ar[d]^{g} \\ F \ar[ur]^{h} \ar[r]_{f} & \underline{M}[1] }$$ is commutative. 
The morphism $h$ induces a homomorphism $\widetilde{h}: H\rightarrow G$ such that the following diagram $$\xymatrix@C=15pt@R=30pt{ F=: \ \cdots \ar[r]^{} &F^{-2} \ar[rr]^{}\ar[dd]^{h^{-2}} & & F^{-1} \ar[dr]_{\sigma}\ar[rr]^{} \ar[dd]^{h^{-1}} & & \ F^0 \ar[rr]^{}\ar[dd]^{h^0} && F^1 \ar[rr]^{}\ar[dd]^{h^1} & & \cdots \\ & \ \ & & \ & H \ar[dd]^{\widetilde{h}} \ar@{.>}[ur]^{} \\ E=: \ \cdots \ar[r]^{}&E^{-2} \ar[rr]^{} & &E^{-1} \ar[dr]_{\pi}\ar[rr]_{} & & E^0 \ar[rr]^{} && E^1 \ar[rr]^{} && \cdots \\ & \ \ & &\ & G \ar[ur]^{} }$$ is commutative. Note that $\widetilde{f}\sigma=f^{-1}=g^{-1}h^{-1}=\widetilde{g}\pi h^{-1}=\widetilde{g}\widetilde{h}\sigma$; then $\widetilde{f}=\widetilde{g}\widetilde{h}$ since $\sigma$ is an epimorphism. This implies that $\widetilde{g}: G\rightarrow M$ is a Gorenstein flat precover of $M$. [10]{} S. T. Aldrich, E. E. Enochs, J. R. García Rozas and L. Oyonarte, Covers and envelopes in Grothendieck categories: flat covers of complexes with applications. J. Algebra **243** (2001), 615-630. M. Auslander and I. Reiten, Applications of contravariantly finite subcategories. Adv. Math. **86** (1991), 111-152. L. L. Avramov and H.-B. Foxby, Homological dimensions of unbounded complexes. J. Pure Appl. Algebra **71** (1991), 129-155. D. Bennis, Rings over which the class of Gorenstein flat modules is closed under extensions. Comm. Algebra **37** (2009), 855-868. L. Bican, R. El Bashir and E. E. Enochs, All modules have flat covers. Bull. Lond. Math. Soc. **33** (2001), 385-390. P. C. Eklof and J. Trlifaj, How to make Ext vanish. Bull. London Math. Soc. **33** (2001), 41-51. E. E. Enochs, Injective and flat covers, envelopes and resolvents. Israel J. Math. (3) **39** (1981), 189-209. E. E. Enochs and O. M. G. Jenda, *Relative Homological Algebra*. De Gruyter Expositions in Mathematics no. 30, Walter De Gruyter, Berlin-New York, 2000. E. E. Enochs, O. M. G. Jenda and J. A. López-Ramos, The existence of Gorenstein flat covers. Math. Scand.
**94** (2004), 46-62. E. E. Enochs, O. M. G. Jenda and B. Torrecillas, Gorenstein flat modules. Nanjing Daxue Xuebao Shuxue Bannian Kan **10** (1993), 1-9. J. Gillespie, The flat model structure on Ch($R$). Trans. Amer. Math. Soc. **356** (2004), 3369-3390. J. R. García Rozas, *Covers and Envelopes in the Category of Complexes of Modules*. CRC Press, Boca Raton-London-New York-Washington, D.C., 1999. H. Holm, Gorenstein homological dimensions. J. Pure Appl. Algebra **189** (2004), 167-193. L. Salce, Cotorsion theories for abelian groups. Symposia Math. 23, 1979. G. Yang and Z. K. Liu, Gorenstein Flat Covers over GF-Closed Rings, Comm. Algebra **40** (2012), 1632-1640.
--- abstract: 'We show that if the existence of a supercompact cardinal is consistent with $ZFC$, then it is consistent with $ZFC$ that the $p$-rank of $\Ext_{\Z}(G,\Z)$ is as large as possible for every prime $p$ and any torsion-free abelian group $G$. Moreover, given an uncountable strong limit cardinal $\mu$ of countable cofinality and a partition of $\Pi$ (the set of primes) into two disjoint subsets $\Pi_0$ and $\Pi_1$, we show that in some model which is very close to $ZFC$ there is an almost-free abelian group $G$ of size $2^{\mu}=\mu^+$ such that the $p$-rank of $\Ext_{\Z}(G,\Z)$ equals $2^{\mu}=\mu^+$ for every $p \in \Pi_0$ and $0$ otherwise, i.e. for $p \in \Pi_1$.' address: - 'Department of Mathematics, The Hebrew University of Jerusalem, Israel, and Rutgers University, New Brunswick, NJ U.S.A.' - 'Department of Mathematics, University of Duisburg-Essen, 45117 Essen, Germany' - '[*Current address*]{}: Department of Mathematics, University of Hawaii, 2565 McCarthy Mall, Honolulu, HI 96822-2273, USA' author: - Saharon Shelah - Lutz Strüngmann title: 'On the $p$-rank of $\Ext_{\Z}(G,\Z)$ in certain models of $ZFC$' --- [^1] [^2] Introduction ============ In 1977 the first named author solved the well-known Whitehead problem by showing that it is undecidable in ordinary set theory $ZFC$ whether or not every abelian group $G$ satisfying $\Ext_{\Z}(G,\Z)=\{0\}$ has to be free (see [@Sh1], [@Sh2]). However, this did not clarify the structure of $\Ext_{\Z}(G,\Z)$ for torsion-free abelian groups - a problem which has received much attention since then. Easy arguments show that $\Ext_{\Z}(G,\Z)$ is always a divisible group for every torsion-free group $G$. Hence it is of the form $$\Ext_{\Z}(G,\Z)= \bigoplus\limits_{p \in \Pi}\Z(p^{\infty})^{(\nu_p)} \oplus \Q^{(\nu_0)}$$ for some cardinals $\nu_p,\nu_0$ ($p \in \Pi)$ which are uniquely determined.
The obvious question that arises is which sequences $(\nu_0, \nu_p : p \in \Pi)$ of cardinals can appear as the cardinal invariants of $\Ext_{\Z}(G,\Z)$ for some (and for which) torsion-free abelian group? Obviously, the trivial sequence consisting of zero entries only can be realized by any free abelian group. However, the solution of the Whitehead problem shows that it is undecidable in $ZFC$ whether these are the only ones. There are a few results about possible sequences $(\nu_0, \nu_p : p \in \Pi)$ provable in $ZFC$. On the other hand, assuming Gödel’s constructible universe $(V=L)$ plus the non-existence of a weakly compact cardinal, a complete characterization of the cardinal invariants of $\Ext_{\Z}(G,\Z)$ for torsion-free abelian groups $G$ has recently been completed by the authors (see [@EkHu], [@EkSh], [@GS], [@GS2], [@HHS], [@MRS], [@SS1], [@SS2], [@ShSt] and [@Sh3] for references). In fact, it turned out that almost all divisible groups $D$ may be realized as $\Ext_{\Z}(G,\Z)$ for some torsion-free abelian group $G$ of almost any given size.\ In this paper we shall take the opposite point of view. It is a theorem of $ZFC$ that every sequence $(\nu_0,\nu_p : p \in \Pi)$ of cardinals such that $\nu_0=2^{\lambda_0}$ for some infinite $\lambda_0$ and each $\nu_p\leq \nu_0$ is either finite or of the form $2^{\lambda_p}$ for some infinite $\lambda_p$ can arise as the cardinal invariants of $\Ext_{\Z}(G,\Z)$ for some torsion-free $G$. The first purpose of this paper is to show that this result is best possible by constructing a model of $ZFC$ in which the only realizable sequences are of this kind. We shall therefore assume the consistency of the existence of a supercompact cardinal (see [@MeSh]).
This is a strong additional set-theoretic assumption which makes the model we work in far from $ZFC$.\ On the other hand, we will also work in models very close to $ZFC$ assuming only the existence of certain ladder systems on successors of strong limit cardinals of cofinality $\aleph_0$. Although this model is close to $ZFC$ it allows us to construct almost-free torsion-free abelian groups $G$ such that for instance $\Ext_{\Z}(G,\Z)$ is torsion-free, i.e. $G$ is coseparable. This, too, can be considered a result at the borderline of what is provable in models close to $ZFC$ since the existence of non-free coseparable groups is independent of $ZFC$ (see [@EkMe Chapter XII] and [@MeSh]).\ Our notation is standard and we write maps from the right. All groups under consideration are abelian and written additively. We shall abbreviate $\Ext_{\Z}(-,-)$ by $\Ext(-,-)$ and $\Pi$ will denote the set of all primes. A Whitehead group is a torsion-free group $G$ such that $\Ext_{\Z}(G,\Z)=0$. If $H$ is a pure subgroup of the abelian group $G$, then we shall write $H \subseteq_* G$. We shall assume sufficient knowledge about forcing, large cardinals and prediction principles like weak diamond etc. as for example in [@EkMe], [@Ku] or [@Sh4]. Also reasonable knowledge about abelian groups as for instance in [@Fu] is assumed. However, the authors have tried to make the paper as accessible as possible to both algebraists and set theorists. The structure of $\Ext(G,\Z)$ ============================= In this section we recall the basic results on the structure of $\Ext(G,\Z)$ for torsion-free groups $G$. Therefore let $G$ be a torsion-free abelian group. It is easy to see that $\Ext(G,\Z)$ is divisible, hence it is of the form $$\Ext(G,\Z)= \bigoplus\limits_{p \in \Pi}\Z(p^{\infty})^{(\nu_p)} \oplus \Q^{(\nu_0)}$$ for certain cardinals $\nu_p,\nu_0$ ($p \in \Pi)$.
Since the cardinals $\nu_p$ ($p \in \Pi)$ and $\nu_0$ completely determine the structure of $\Ext(G,\Z)$ we introduce the following terminology. Let $\Ext_p(G,\Z)$ be the $p$-torsion part of $\Ext(G,\Z)$ for $p \in \Pi$. We denote by $r^e_0(G)$ the [*torsion-free rank*]{} $\nu_0$ of $\Ext(G,\Z)$ which is the dimension of $\Q \otimes \Ext(G,\Z)$ and by $r_p^e(G)$ the $p$-rank $\nu_p$ of $\Ext(G,\Z)$ which is the dimension of $\Ext(G,\Z)[p]$ as a vector space over $\Z/p\Z$ for any prime number $p \in \Pi$. There are only a few results provable in $ZFC$ when $G$ is uncountable, but under additional set-theoretic assumptions a better understanding of the structure of $\Ext(G,\Z)$ is obtained. For instance, in Gödel’s universe and assuming that there is no weakly compact cardinal a complete characterization is known. The aim of this paper is to go to the borderline of the characterization. On the one hand we shall show that one can make the $p$-ranks of $\Ext(G,\Z)$ as large as possible for every torsion-free abelian group $G$ by working in a model of $ZFC$ which assumes strong additional axioms (the existence of large cardinals). On the other hand we shall work in a model which is very close to $ZFC$ but still allows us to construct uncountable torsion-free groups $G$ such that $\Ext(G,\Z)$ is torsion-free.\ We first justify our restriction to torsion-free $G$. Let $A$ be any abelian group and $t(A)$ its torsion subgroup. Then $\Hom(t(A),\Z)=0$ and hence we obtain the short exact sequence $$0 \rightarrow \Ext(A/t(A),\Z) \rightarrow \Ext(A,\Z) \rightarrow \Ext(t(A),\Z) \rightarrow 0$$ which must split since $\Ext(A/t(A),\Z)$ is divisible. Thus $$\Ext(A,\Z) \cong \Ext(A/t(A),\Z) \oplus \Ext(t(A),\Z).$$ Since the structure of $\Ext(t(A),\Z) \cong \prod_{p \in \Pi} \Hom(A,\Z(p^{\infty}))$ is well-known in $ZFC$ (see [@Fu]) it is reasonable to assume that $A$ is torsion-free and, of course, non-free.
Using Pontryagin’s theorem one proves \[countabletf\] Suppose $G$ is a countable torsion-free group which is not free. Then $r_0^e(G)=2^{\aleph_0}$. See [@EkMe Theorem XII 4.1].\ Similarly, we have for the $p$-ranks of $G$ the following \[countablet\] If $G$ is a countable torsion-free group, then for any prime $p$, $r_p^e(G)$ is either finite or equals $2^{\aleph_0}$. See [@EkMe Theorem XII 4.7].\ This clarifies the structure of $\Ext(G,\Z)$ for countable torsion-free groups $G$ in $ZFC$. We now turn our attention to uncountable groups. There is a useful characterization of $r_p^e(G)$ using the exact sequence $$0 \rightarrow \Z \overset{p}\rightarrow \Z \rightarrow \Z/p\Z \rightarrow 0.$$ The induced sequence $$\Hom(G,\Z) \overset{\varphi^p}{\rightarrow} \Hom(G,\Z/p\Z) \rightarrow \Ext(G,\Z) \overset{p_*}{\rightarrow} \Ext(G,\Z)$$ shows that the dimension of $$\Hom(G,\Z/p\Z)/\Hom(G,\Z)\varphi^p$$ as a vector space over $\Z/p\Z$ is exactly $r_p^e(G)$.\ The following result due to Hiller, Huber and Shelah deals with the case when $\Hom(G,\Z)=0$. \[existence\] For any cardinal $\nu_0$ of the form $\nu_0=2^{\mu_0}$ for some infinite $\mu_0$ and any sequence of cardinals $(\nu_p : p \in \Pi)$ less than or equal to $\nu_0$ such that each $\nu_p$ is either finite or of the form $2^{\mu_p}$ for some infinite $\mu_p$ there is a torsion-free group $G$ of cardinality $\mu_0$ such that $\Hom(G,\Z)=0$ and $r_0^e(G)=\nu_0$, $r_p^e(G)=\nu_p$ for all primes $p \in \Pi$. See [@HHS Theorem 3(b)].\ Together with the following lemma we have reached the borderline of what is provable in $ZFC$. \[dualt\] If $G$ is torsion-free such that $\Hom(G,\Z)=0$, then for all primes $p$, $r_p^e(G)$ is either finite or of the form $2^{\mu_p}$ for some infinite $\mu_p \leq |G|$. See [@EkMe Lemma XII 5.2].\ Assuming Gödel’s axiom of constructibility one even knows a complete characterization in the case when $\Hom(G,\Z)=0$.
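A concrete instance of the characterization above (a standard example, not taken from this paper): take $G=\Q$.

```latex
% Q is divisible, hence every homomorphism Q -> Z/pZ is zero, and the
% displayed quotient computes the p-rank:
r_p^e(\Q)=\dim_{\Z/p\Z}\bigl(\Hom(\Q,\Z/p\Z)/\Hom(\Q,\Z)\varphi^p\bigr)=0
\quad\text{for every prime } p\in\Pi,
% while Ext(Q,Z) is known to be a nontrivial Q-vector space of dimension
% 2^{\aleph_0}, so
r_0^e(\Q)=2^{\aleph_0},
% as Theorem [countabletf] predicts: Q is countable, torsion-free and non-free.
```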
\[uncountabletf\] Suppose $G$ is a torsion-free non-free group and let $B$ be a subgroup of $G$ of minimum cardinality $\nu$ such that $G/B$ is free. Then $r_0^e(G)=2^{\nu}$. In particular, $r_0^e(G)$ is uncountable and $r_0^e(G)=2^{|G|}$ if $\Hom(G,\Z)=0$. See [@EkMe Theorem XII 4.4, Corollary XII 4.5].\ Note that the above lemma is not true in $ZFC$ since for any countable divisible group $D$ it is consistent that there exists an uncountable torsion-free group $G$ with $\Ext(G,\Z) \cong D$, hence $r_0^e(G)=1$ is possible taking $D=\Q$ (see [@Sh3]).\ The following result is a collection of theorems due to Grossberg, Mekler, Roslanowski, Sageev and the authors. It shows that under the assumption of $(V=L)$ almost all possibilities for $r_p^e(G)$ can appear if the group is not of weakly compact cardinality or singular cardinality of cofinality $\aleph_0$. Let $\nu$ be an uncountable cardinal and suppose that $(\nu_p : p\in \Pi)$ is a sequence of cardinals such that for each $p$, $0 \leq \nu_p \leq 2^{\nu}$. Moreover, let $H$ be a torsion-free group of cardinality $\nu$. Then the following hold. 1. If $\nu$ is regular and less than the first weakly compact cardinal, then there is an almost-free group $G$ of cardinality $\nu$ such that $r_0^e(G)=2^{\nu}$ and for all primes $p$, $r_p^e(G)=\nu_p$; 2. If $\nu$ is a singular strong limit cardinal of cofinality $\omega$, then there is no torsion-free group $G$ of cardinality $\nu$ such that $r_p^e(G)=\nu$ for any prime $p$; 3. If $\nu$ is weakly compact and $r_p^e(H) \geq \nu$ for some prime $p$, then $r_p^e(H)=2^{\nu}$; 4. If $\nu$ is singular less than the first weakly compact cardinal and of cofinality $\cf(\nu) > \aleph_0$, then there is a torsion-free group $G$ of cardinality $\nu$ such that $r_0^e(G)=2^{\nu}$ and for all primes $p$, $r_p^e(G)=\nu_p$.
For (i) see [@MRS Theorem 3.7], for (ii) we refer to [@GS Theorem 1.0], for (iii) see [@SS1 Main Theorem] and (iv) is contained in [@ShSt].\ The above results show that under the assumption of $(V=L)$ and the non-existence of weakly compact cardinals, the structure of $\Ext(G,\Z)$ for torsion-free groups $G$ of cardinality $\nu$ is clarified for all cardinals $\nu$ and almost all sequences $(\nu_0,\nu_p : p \in \Pi)$ can be realized as the cardinal invariants of some torsion-free abelian group in almost every cardinality. However, if we weaken the set-theoretic assumptions to $GCH$ (the generalized continuum hypothesis), then even more is possible, including phenomena excluded under $(V=L)$ (see Lemma \[uncountabletf\]). The following hold. 1. Assume $GCH$. For any torsion-free group $A$ of uncountable cardinality $\nu$, if $\Hom(A,\Z)=0$ and $r_0^e(A) < 2^{\nu}$, then for each prime $p$, $r_p^e(A)=2^{\nu}$; 2. It is consistent with $ZFC$ and $GCH$ that for any cardinal $\rho \leq \aleph_1$, there is a torsion-free group $G_{\rho}$ such that $\Hom(G_{\rho},\Z)=0$, $r_0^e(G_{\rho})=\rho$ and for all primes $p$, $r_p^e(G_{\rho})=2^{\aleph_1}$. See [@EkMe Theorem XII 5.3] and [@EkMe Theorem XII 5.49].\ It is our aim in the next section to show that this rich structure of $\Ext(G,\Z)$ ($G$ torsion-free) which exists in $(V=L)$ does not appear in other models of $ZFC$. As a motivation we state two results from [@MeSh] which show that using Cohen forcing we may enlarge the $p$-rank of $\Ext(G,\Z)$ for torsion-free groups $G$. \[meklerold\] Suppose $G$ is contained in the $p$-adic completion of a free group $F$ and $|G| > |F|$. Then, if $\lambda \geq |F|$ and $\lambda$ Cohen reals are added to the universe, $|\Ext_p(G,\Z)| \geq \lambda$. In particular, adding $2^{\aleph_0}$ Cohen reals to the universe implies that for every torsion-free reduced non-free abelian group $G$ of cardinality less than the continuum, there is a prime $p$ such that $r_p^e(G) > 0$.
See [@MeSh Theorem 8].\ Assuming the consistency of large cardinals we even get more. Recall that a cardinal $\kappa$ is [*compact*]{} if it is uncountable regular and satisfies the condition that for every set $S$, every $\kappa$-complete filter on $S$ can be extended to a $\kappa$-complete ultrafilter on $S$. This is equivalent to saying that for any set $A$ such that $|A| \geq \kappa$, there exists a fine measure on $P_{\kappa}(A)$ (the set of all subsets of $A$ of size less than $\kappa$). If we require the measure to satisfy a normality condition, then we get a stronger notion. A fine measure $U$ on $P_{\kappa}(A)$ is called [*normal*]{} if whenever $f:P_{\kappa}(A) \rightarrow A$ is such that $f(P) \in P$ for almost all $P \in P_{\kappa}(A)$, then $f$ is constant on a set in $U$. A cardinal $\kappa$ is called [*supercompact*]{} if for every set $A$ such that $|A| \geq \kappa$, there exists a normal measure on $P_{\kappa}(A)$ (see [@EkMe Chapter II.2] or [@Je Chapter 6, 33. Compact cardinals] for further details on supercompact cardinals). Suppose that it is consistent that a supercompact cardinal exists. Then it is consistent with either $2^{\aleph_0}=2^{\aleph_1}$ or $2^{\aleph_0} < 2^{\aleph_1}$ that for any group $G$ either $\Ext(G,\Z)$ is finite or $r_0^e(G) \geq 2^{\aleph_0}$. See [@MeSh Theorem 11].\ The free (p-)rank ================= In this section we introduce the [*free (p-)rank*]{} of a torsion-free group $G$ ($p$ a prime) which will induce upper bounds for the cardinal invariants of $\Ext(G,\Z)$. For a prime $p \in \Pi$ let $K_p$ be the class of all torsion-free groups $G$ such that $G/p^{\omega}G$ is free. Moreover, let $K_0$ be the class of all free groups. Note that for $G \in K_p$ ($p$ a prime) we have $G=p^{\omega}G \oplus F$ for some free group $F$ and hence $\Ext(G,\Z)[p]=0$ since $p^{\omega}G$ is $p$-divisible. Note that $p^{\omega}G$ is a pure subgroup of $G$. Thus $r_p^e(G)=0$ for $G \in K_p$ and any prime $p$.
Clearly, also $r_0^e(G)=0$ for all $G \in K_0$. Let $G$ be a torsion-free group. We call $$\fk_0(G)=\min\{\rk(H) : H \subseteq_* G \textit{ such that } G/H \in K_0 \}$$ the [*free rank*]{} of $G$ and similarly we call $$\fk_p(G)=\min\{\fk_0(H) : H \subseteq_* G/p^{\omega}G \textit{ such that } (G/p^{\omega}G)/H \in K_p \}$$ the [*free $p$-rank*]{} of $G$ for any prime $p \in \Pi$. We have a first easy lemma. \[freerank\] Let $G$ be a torsion-free group and $p \in \Pi$ a prime. Then the following hold. 1. $r_p^e(G)=r_p^e(G/p^{\omega}G)$; 2. If $H$ is a pure subgroup of $G$, then $r_p^e(H) \leq r_p^e(G)$ and $r_0^e(H) \leq r_0^e(G)$; 3. $\fk_0(G)\geq \fk_0(G/p^{\omega}G)$; 4. $\fk_p(G)=\fk_p(G/p^{\omega}G)$; 5. $\fk_p(G) \leq \fk_0(G)$. We first show (i); let $p$ be a prime. Since $p^{\omega}G$ is pure in $G$ we have that $p^{\omega}G$ is $p$-divisible, hence $$0 \rightarrow p^{\omega}G \rightarrow G \rightarrow G/p^{\omega}G \rightarrow 0$$ induces the exact sequence $$0 \rightarrow \Hom(G/p^{\omega}G,\Z/p\Z) \rightarrow \Hom(G,\Z/p\Z) \rightarrow \Hom(p^{\omega}G,\Z/p\Z)=0,$$ the latter being trivial because $p^{\omega}G$ is $p$-divisible. Thus we have $$\Hom(G,\Z/p\Z) \cong \Hom(G/p^{\omega}G, \Z/p\Z)$$ and it follows easily that $$\Hom(G,\Z/p\Z)/\Hom(G,\Z)\varphi_{p} \cong \Hom(G/p^{\omega}G,\Z/p\Z)/\Hom(G/p^{\omega}G,\Z)\varphi_{p}.$$ Therefore, $r_p^e(G)=r_p^e(G/p^{\omega}G)$.\ In order to show (ii) we consider the exact sequence $$0 \rightarrow H \rightarrow G \rightarrow G/H \rightarrow 0$$ which implies the exact sequence $$\cdots \rightarrow \Ext(G/H,\Z) \overset{\alpha}{\rightarrow} \Ext(G,\Z) \rightarrow \Ext(H,\Z) \rightarrow 0.$$ Since $G/H$ is torsion-free we conclude that $\Ext(G/H,\Z)$ is divisible and hence $\im(\alpha)$ is divisible.
Thus $\Ext(G,\Z)= \Ext(H,\Z) \oplus \im(\alpha)$ and therefore $r_0^e(H) \leq r_0^e(G)$ and $r_p^e(H) \leq r_p^e(G)$ for every prime $p$.\ Claim (iii) is easily proved noting that, whenever $G=H \oplus F$ for some free group $F$, then $p^{\omega}G \subseteq H$ for every prime $p$, hence $G/p^{\omega}G=H/p^{\omega}G \oplus F$.\ To show (iv) note that $p^{\omega}(G/p^{\omega}G)=\{0\}$ and hence $\fk_p(G)=\fk_p(G/p^{\omega}G)$ easily follows.\ Finally, (v) follows from (iii), (iv) and the definition of $\fk_p(G)$ and $\fk_0(G)$ since the class $K_0$ is contained in the class $K_p$ for every prime $p$.\ If $G$ is a torsion-free group and $p$ a prime, then Lemma \[freerank\] (i) and (iv) imply that, regarding the free $p$-rank of $G$, we may assume without loss of generality that $G$ is $p$-reduced. This is also justified by the fact that $$\fk_p(G)=\min\{\fk_0(H) : H \subseteq_* G \textit{ such that } G/H \in K_p \}$$ is easily proven. To simplify notation we let $\Pi_0=\Pi \cup \{0\}$ in the sequel. \[lemmaabschaetz\] Let $G$ be a torsion-free group. Then the following hold. 1. $r_0^e(G) \leq 2^{\lambda}$ where $\lambda=\max\{\aleph_0, \fk_0(G)\}$; in particular, $r_0^e(G) \leq 2^{\fk_0(G)}$ if $\fk_0(G)$ is infinite; 2. $r_p^e(G) \leq p^{\fk_p(G)}$ for all $p \in \Pi$. In order to prove (i) choose a subgroup $H \subseteq G$ such that $\rk(H)=\fk_0(G)$ and $G/H \in K_0$. Hence $G=H \oplus F$ for some free group $F$ and so $\Ext(G,\Z)=\Ext(H,\Z)$ which implies that $r_0^e(G)=r_0^e(H) \leq 2^{\lambda}$ where $\lambda=\max\{\aleph_0,\rk(H)\}=\max\{\aleph_0,\fk_0(G)\}$.\ We now prove (ii). Let $p$ be a prime, then $r_p^e(G)=r_p^e(G/p^{\omega}G)$ and $\fk_p(G)=\fk_p(G/p^{\omega}G)$ by Lemma \[freerank\] (i) and (iv). Hence we may assume that $p^{\omega}G=\{0\}$ without loss of generality. Let $H \subseteq G$ be such that $\fk_0(H)=\fk_p(G)$ and $G/H \in K_p$. Then $G/H=D \oplus F$ for some free group $F$ and some $p$-divisible group $D$.
As in the proof of Lemma \[freerank\] (i) it follows that $r_p^e(G)=r_p^e(H)$. Now, we let $H=H' \oplus F'$ for some free group $F'$ such that $\rk(H')=\fk_0(H)=\fk_p(G)$. Hence $\Ext(H,\Z)=\Ext(H',\Z)$ and therefore $r_p^e(G)=r_p^e(H)=r_p^e(H')$. Consequently, $r_p^e(G)=r_p^e(H') \leq p^{\rk(H')}=p^{\fk_p(G)}$.\ Note that, for instance, in $(V=L)$ for any torsion-free group $G$, $2^{\fk_0(G)}$ is the actual value of $r_0^e(G)$ by Lemma \[uncountabletf\]. The following lemma justifies that, as far as the free $p$-rank of a torsion-free group is concerned, one may also assume without loss of generality that $\fk_p(G)=\rk(G)$ if $p \in \Pi_0$. \[reduction\] Let $G$ be a torsion-free group, $p \in \Pi_0$ and $H \subseteq_* G$ such that 1. $G/\left( H \oplus F \right) \in K_p$ for some free group $F$; 2. $\rk(H)=\fk_p(G)$. Then $\fk_p(H)=\rk(H)$ and $r_p^e(G)=r_p^e(H)$. Let $G$, $H$ and $p$ be given. If $p=0$, then the claim is trivially true. Hence assume that $p \in \Pi$ and that $\rk(H)=\fk_p(G)$. Then there is a free group $F$ such that $H'=H \oplus F$ is a pure subgroup of $G$ satisfying $G/H' \in K_p$. Thus $\fk_p(G)=\rk(H)=\fk_0(H')$. Without loss of generality we may assume that $G/H'$ is $p$-divisible by splitting off the free part. By way of contradiction assume that $\fk_p(H) < \rk(H)$. Let $H_1 \subseteq_* H$ such that $H/H_1 \in K_p$ and $\fk_0(H_1)=\fk_p(H) < \rk(H)$. Then there are a free group $F_1$ and a $p$-divisible group $D$ such that $$H/H_1 = D \oplus F_1.$$ Choose a pure subgroup $H_1 \subseteq_* H_2 \subseteq_* H$ such that $H_2/H_1 \cong D$. Thus $H/H_2 \cong F_1$ and so $H \cong H_2 \oplus F_1$ and without loss of generality $H=H_2 \oplus F_1$. Consequently, $\rk(H_2)=\rk(H)$ since $\rk(H)=\fk_p(G)$. Let $H_3=H_1 \oplus F_1 \oplus F$. Then $$\fk_0(H_3)=\fk_0(H_1) < \rk(H)=\fk_p(G).$$ Moreover, $$G/H' \cong \left( G/H_3 \right) / \left( H'/H_3 \right)$$ is $p$-divisible.
Since also $H'/H_3 \cong H_2/H_1$ is $p$-divisible and all groups under consideration are torsion-free we conclude that $G/H_3$ is $p$-divisible. Hence $\fk_p(G) \leq \fk_0(H_3) < \fk_p(G)$, a contradiction. Finally, $r_p^e(G)=r_p^e(H)$ follows as in the proof of Lemma \[lemmaabschaetz\].\ We now show how to calculate explicitly $\fk_p(G)$ for torsion-free groups $G$ of finite rank and $p \in \Pi$ (note that $\fk_0(G)$ can be easily calculated). Recall that a torsion-free group $G$ of finite rank is [*almost-free*]{} if every subgroup $H$ of $G$ of smaller rank than the rank of $G$ is free. Let $G$ be a non-free torsion-free group of finite rank $n$ and $p \in \Pi$. Then we can calculate $\fk_p(G)$ as follows. 1. If $G$ is almost-free, then let $H \subseteq \Q$ be the outer type of $G$. Then 1. $\fk_p(G)=n$ if $H$ is not $p$-divisible; 2. $\fk_p(G)=0$ if $H$ is $p$-divisible. 2. If $G$ is not almost-free, then choose a filtration $\{0\}=G_0 \subseteq_* G_1 \subseteq_* \cdots \subseteq_*G_m \subseteq_* G$ with $G_{k+1}/G_k$ almost-free. Then $$\fk_p(G)=\sum\limits_{k < m} \fk_p(G_{k+1}/G_k).$$ Left to the reader.\ In order to prove our main Theorem \[main1\] of Section $4$ we need a further result on the class $K_p$ for $p \in \Pi$. \[forcingone\] Let $p$ be a prime and $G$ a torsion-free group of infinite rank. Then the following hold. 1. If $G$ is of singular cardinality, then $G \in K_p$ if and only if every pure subgroup $H$ of $G$ of smaller cardinality than $G$ satisfies $H \in K_p$; 2. $G \not\in K_p$ if and only if $r_p^e(G) > 0$ whenever we add $|G|$ Cohen reals to the universe; 3. If $\rk(G) \geq \aleph_0$, then adding $|G|$ Cohen reals to the universe adds a new member to $\Ext_p(G,\Z)$ preserving the old ones. Let $G$ and $p$ be as stated. Part (i) is an easy application of the first author’s Singular Compactness Theorem from [@Sh0].\ One implication of (ii) is trivial, hence assume that $G \not\in K_p$.
By Lemma \[freerank\] (ii) we may assume that $G \not\in K_p$ has minimal rank, i.e. every pure subgroup of $G$ of smaller rank than $G$ belongs to $K_p$. It is easily seen that the rank $\delta=\rk(G)$ of $G$ must be uncountable. Thus $\delta$ must be regular by (i). Let $G=\bigcup\limits_{\alpha < \delta}G_{\alpha}$ be a filtration of $G$ by pure subgroups $G_{\alpha}$ of $G$ ($\alpha < \delta$). The claim now follows as in [@MeSh] repeating [@MeSh Theorem 9 and Theorem 10] (compare also Lemma \[meklerold\]). The only difference is that in our situation the group $G$ is not almost free, hence we require in [@MeSh Theorem 10] that for every $\alpha$ in the stationary set $E$ there exists an element $a \in G_{\alpha+1}\backslash(G_{\alpha} + p^{\omega}G_{\alpha+1})$ which belongs to the $p$-adic closure of $G_{\alpha}+p^{\omega}G_{\alpha+1}$. This makes only a minor change in the proof of [@MeSh Theorem 10].\ Finally, (iii) follows similarly to (ii) from the proof of [@MeSh Theorem 11]. The proof is therefore left to the reader.\ Next, we consider the [*$p$-closure*]{} of a pure subgroup $H$ of some torsion-free abelian group $G$ which shall be needed in the proof of Theorem \[main1\]. Let $G$ be torsion-free and $H$ a pure subgroup of $G$. For every prime $p \in \Pi$ the set $${\cl}_{p}(G,H)=\{ x \in G: \textit{ for all } n \in \N \textit{ there is } y_n \in H \textit{ such that } x -y_n \in p^nG \}$$ is called the [*$p$-closure*]{} of $H$. We have a first easy lemma. \[pclosure\] Let $G$ be torsion-free and $H$ a pure subgroup of $G$. Then the following hold for all primes $p \in \Pi$. 1. $H \subseteq \cl_p(G,H)$; 2. $\cl_p(G,H)$ is a pure subgroup of $G$; 3. $\cl_p(G,H)/H$ is $p$-divisible. We fix a prime $p \in \Pi$. The first statement is trivial. In order to prove (ii) assume that $mx \in \cl_p(G,H)$ for some $m \in \N$ and $x \in G$. Then, for every $n \in \N$, there is $y_n \in H$ such that $mx - y_n \in p^nG$, say $mx - y_n=p^ng_n$ for some $g_n \in G$.
Without loss of generality we may assume that $(m,p)=1$. Hence $1=km + lp^n$ for some $k,l \in \Z$. Thus $$x= kmx+lp^nx=kp^ng_n + ky_n + lp^nx$$ and hence $x-ky_n \in p^nG$ with $ky_n \in H$. Therefore $x \in \cl_p(G,H)$ and (ii) holds. Finally, (iii) follows easily from (ii).\ Supercompact cardinals and large $p$-ranks ========================================== In this section we shall assume that the existence of a supercompact cardinal is consistent with $ZFC$. We shall then determine the cardinal invariants $(r_0^e(G), r_p^e(G) : p \in \Pi)$ of $\Ext(G,\Z)$ for every torsion-free abelian group in this model and show that they are as large as possible. We start with a theorem from [@MeSh] (see also [@Da]). Recall that for cardinals $\mu, \gamma$ and $\delta$ we can define a partially ordered set $Fn(\mu,\gamma,\delta)$ by putting $$Fn(\mu,\gamma,\delta)=\{ f : dom(f) \rightarrow \gamma : dom(f) \subseteq \mu, |dom(f)| < \delta \}.$$ The partial order is given by $f \leq g$ if and only if $g \subseteq f$ as functions. Suppose $\kappa$ is a supercompact cardinal, $V$ is a model of $ZFC$ which satisfies $2^{\aleph_0}=\aleph_1$ and $\P=Fn(\mu,2,\aleph_1) \times Fn(\rho,2,\aleph_0)$, where $\mu,\rho > \aleph_1$. Then $\P$ forces that every $\kappa$-free group is free.
See [@MeSh Corollary 20].\ We are now ready to prove our main theorem of this section working in the model from Lemma \[model\]. Assume that the existence of a supercompact cardinal is consistent with $ZFC$. Let $V$ be any model in which there exists a supercompact cardinal $\kappa$ such that the weak diamond principle $\Diamond_{\lambda^+}^*$ holds for all regular cardinals $\lambda \geq \kappa$. Now, we use Cohen forcing to add $\kappa$ Cohen reals to $V$ to obtain a new model $\v$. Thus, in $\v$ we still have $\Diamond_{\lambda^+}^*$ for all regular $\lambda \geq \kappa$ and also $\Diamond_{\kappa}$ holds. Moreover, we have $2^{\aleph_0}=\kappa$ and every $\kappa$-free group (of arbitrary cardinality) is free by [@MeSh]. \[main1\] In any model $\v$ as described above, the following is true for every non-free torsion-free abelian group $G$ and prime $p \in \Pi$. 1. $r_0^e(G)=2^{\max\{\aleph_0,\fk_0(G)\}}$; 2. If $\fk_p(G)$ is finite, then $r_p^e(G)=\fk_p(G)$; 3. If $\fk_p(G)$ is infinite, then $r_p^e(G)=2^{\fk_p(G)}$. We would like to remark first that the above theorem shows that $r^e_p(G)$ ($p \in \Pi_0$) is as large as possible for every torsion-free abelian group $G$ in the model described above. Moreover, Lemma \[existence\] shows that every sequence of cardinals $(\nu_p : p \in \Pi_0)$ not excluded by Theorem \[main1\] may be realized as the cardinal invariants of $\Ext(H,\Z)$ for some torsion-free group $H$. Let $p \in \Pi_0$ be fixed. By Lemma \[freerank\] we may assume that $p^{\omega}G=0$ if $p \in \Pi$. Moreover, Lemma \[reduction\] shows that also $\fk_p(G)=\rk(G)$ holds without loss of generality. We now prove the claim by induction on the rank $\lambda=\rk(G)=\fk_p(G)$.\ $\lambda$ is finite.\ In this case we may assume without loss of generality that $\Hom(G,\Z)=\{0\}$ since $G$ is of finite rank. If $p=0$, then $r_0^e(G)=2^{\aleph_0}=\kappa$ follows from Lemma \[countabletf\] since $G$ is not free. Thus assume that $p > 0$. 
Then $r_p^e(G)$ is the dimension of $\Hom(G,\Z/p\Z)$ as a vector space over $\Z/p\Z$. Since $\Hom(G,\Z/p\Z)$ is the vector space dual of $G/pG$ it follows that $r_p^e(G)=\rk(G)=\fk_p(G)$. Note that $G$ is $p$-reduced by assumption.\ $\lambda=\aleph_0$.\ Since $G$ is of countable rank it is well-known that there exists a decomposition $G=G' \oplus F$ of $G$ where $F$ is a free group and $G'$ satisfies $\Hom(G',\Z)=\{0\}$. Moreover, by assumption $\fk_p(G)=\lambda=\aleph_0$, hence we obtain that $\rk(G')=\fk_p(G')=\aleph_0$. Therefore, $r_0^e(G)=r_0^e(G')=2^{\aleph_0}=\kappa$ follows from Lemma \[countabletf\]. If $p>0$ we conclude that $\Hom(G',\Z/p\Z)$ has cardinality $2^{\aleph_0}$. Since $\Hom(G',\Z)=\{0\}$ it follows by Lemma \[lemmaabschaetz\] (ii) that $$2^{\aleph_0} \leq r_p^e(G')=r_p^e(G) \leq 2^{\aleph_0}$$ and thus $r_p^e(G)=2^{\aleph_0}=\kappa$ for $p \in \Pi_0$.\ $\aleph_0 < \lambda < \kappa$.\ Note that $\kappa=2^{\aleph_0}=2^{\lambda}$, hence $r_0^e(G)=2^{\aleph_0}=\kappa$ follows from Lemma \[uncountabletf\]. In fact, by induction hypothesis every Whitehead group $H$ of size less than $\lambda$ has to be free (because $r_0^e(H)>0$ whenever $H$ is not free), which suffices for Lemma \[uncountabletf\]. Now, assume that $p \in \Pi$. By Lemma \[lemmaabschaetz\] (ii) we deduce $$r_p^e(G) \leq 2^{\fk_p(G)} \leq 2^{\lambda} = 2^{\aleph_0}$$ and hence it remains to prove that $r_p^e(G) \geq 2^{\aleph_0}$. The proof is very similar to the proof of [@MeSh Theorem 8] (see also Lemma \[meklerold\]), hence we shall recall it only briefly. Let $V$ be the ground model and $P$ be the Cohen forcing, i.e. $P=Fn(\kappa \times \omega,2,\aleph_0)=\{ h: \dom(h) \rightarrow \{0,1\} : \dom(h) \textit{ is a finite subset of } \kappa \times \omega \}$. If $\G$ is a $P$-generic filter over $V$ let $\tilde{h}=\bigcup\limits_{g \in \G}g$. For notational reasons we may also write $\v=V[\G]=V[\tilde{h}]$ for the extension model determined by the generic filter $\G$.
Let $A \in [\kappa]^{\leq \lambda}$ be such that $G$ belongs to $V[\tilde{h}\restriction_{A \times \omega}]$. Without loss of generality we may assume that $\alpha \in A$ if and only if $\beta \in A$ whenever $\alpha + \lambda=\beta + \lambda$. We shall prove the claim by splitting the forcing. For each $\alpha$ such that $\lambda\alpha \in \kappa\backslash A$ let $$\tilde{f}_{\alpha} \in V[\tilde{h}\restriction_{(A \times \omega) \cup ([\lambda\alpha, \lambda\alpha + \lambda) \times \omega)}]$$ be a member of $\Hom(G,\Z/p\Z)$ computed by $\tilde{h}\restriction_{[\lambda\alpha,\lambda\alpha+\lambda)\times \omega}$. Note that $\tilde{f}_{\alpha}$ exists by Proposition \[forcingone\] (iii). Then $\tilde{f}_{\alpha}$ is also computed from $\tilde{h}\restriction_{[\lambda\alpha,\lambda\alpha+\lambda)\times \omega}$ over $V[\tilde{h}\restriction_{(\kappa\backslash[\lambda\alpha,\lambda\alpha+\lambda)) \times \omega}]$ and hence $\tilde{f}_{\alpha}$ is not equivalent to any $f' \in V[\tilde{h}\restriction_{(\kappa\backslash[\lambda\alpha,\lambda\alpha+\lambda))\times \omega}]$ modulo $\Hom(G,\Z)\varphi_p$. Thus the set of homomorphisms $\{\tilde{f}_{\alpha} : \lambda\alpha < \kappa, \lambda\alpha \not\in A \}$ exemplifies that $r_p^e(G) \geq \kappa=2^{\aleph_0}$ and hence $$2^{\lambda}=2^{\aleph_0}=\kappa \leq r_p^e(G) \leq 2^{\lambda}$$ which shows $r_p^e(G)=2^{\lambda}=2^{\fk_p(G)}$.\ $\lambda \geq \kappa$.\ Let $p \in \Pi_0$. We distinguish two subcases. Note that for $p \in \Pi$ our assumption $\fk_p(G)=\rk(G)$ also implies that $\fk_0(G)=\rk(G)$ by Lemma \[freerank\] (v).\ $\lambda \geq \kappa$ is regular.\ The case $p=0$ follows as in [@EkMe Theorem XII 4.4]. Let $\left< G_{\alpha} : \alpha < \lambda \right>$ be a filtration of $G$ into pure subgroups $G_{\alpha}$ $(\alpha < \lambda)$ so that if $G/G_{\alpha}$ is not $\lambda$-free, then $G_{\alpha+1}/G_{\alpha}$ is not free. Choose by [@EkHu Lemma 2.4] an associated free resolution of $G$, i.e.
a free resolution $$0 \rightarrow K \overset{\Phi}{\rightarrow} F \rightarrow G \rightarrow 0$$ of $G$ such that $F=\bigoplus\limits_{\alpha<\lambda}F_{\alpha}$ and $K=\bigoplus\limits_{\alpha < \lambda}K_{\alpha}$ are free groups such that $|F_{\alpha}| < \lambda$ and $|K_{\alpha}|<\lambda$ for all $\alpha < \lambda$ and the induced sequences $$0 \rightarrow \bigoplus\limits_{\beta < \alpha}K_{\beta} \rightarrow \bigoplus\limits_{\beta < \alpha}F_{\beta} \rightarrow G_{\alpha} \rightarrow 0$$ are exact for every $\alpha < \lambda$. Since $\fk_0(G)=\lambda$, the set $E=\{\alpha < \lambda : G_{\alpha+1}/G_{\alpha} \text{ is not free } \}$ is stationary. For any subset $E' \subseteq E$ let $K(E')=\bigoplus\limits_{\alpha \in E'}K_{\alpha}$ and $G(E')=F/\Phi(K(E'))$. Then $\Gamma(G(E')) \geq \tilde{E'}$ where $\Gamma(G(E'))$ is the $\Gamma$-invariant of $G(E')$ (see [@EkMe] for details on the $\Gamma$-invariant). Now, by assumption we have $\Diamond^*_{\lambda}$, hence we may decompose $E$ into $\lambda$ disjoint stationary sets $E'_{\alpha}$, each of which is non-small, i.e. $\Diamond_{\lambda}^*(E_{\alpha}')$ holds. Hence $G(E'_{\alpha})$ is not free since $\tilde{E'_{\alpha}} \leq \Gamma(G(E'_{\alpha}))$ for every $\alpha < \lambda$. By [@MeSh] we conclude that $G(E'_{\alpha})$ is not $\kappa$-free and therefore has a non-free pure subgroup $H_{\alpha}$ of rank less than $\kappa$. By induction hypothesis it follows that $\Ext(H_{\alpha},\Z)\not= 0$ and hence also $\Ext(G(E'_{\alpha}),\Z)\not= 0$. As in [@EkHu Lemma 1.1] (see also [@EkMe Lemma XII 4.2]) there is an epimorphism $$\Ext(G,\Z) \rightarrow \prod\limits_{\alpha < \lambda}\Ext(G(E_{\alpha}'),\Z) \rightarrow 0$$ and it easily follows that $r_0^e(G) \geq 2^{\lambda}$ and hence $r_0^e(G)=2^{\lambda}$ (compare [@EkMe Lemma 4.3]).\ Now assume that $p>0$. 
Again, let $\left< G_{\alpha} : \alpha < \lambda \right>$ be a filtration of $G$ into pure subgroups $G_{\alpha}$ $(\alpha < \lambda)$ such that $\Ext_p(G_{\alpha+1}/G_{\alpha},\Z)\not= 0$ if and only if $\Ext_p(G_{\beta}/G_{\alpha},\Z)\not=0$ for some $\beta > \alpha$. Fix $\alpha < \lambda$. We claim that $G/\cl_p(G,G_{\alpha})$ is not free. By way of contradiction assume that $G/\cl_p(G,G_{\alpha})$ is free. Hence $G=\cl_p(G,G_{\alpha}) \oplus F$ for some free group $F$. Therefore, $G/G_{\alpha}=\left( \cl_p(G,G_{\alpha}) \oplus F \right)/G_{\alpha}=\cl_p(G,G_{\alpha})/G_{\alpha} \oplus F$ is a direct sum of a $p$-divisible group and a free group by Lemma \[pclosure\] (iii). It follows that $\fk_p(G) \leq \rk(G_{\alpha}) < \lambda$ contradicting the fact that $\fk_p(G)=\lambda$. By [@MeSh] we conclude that $G/\cl_p(G,G_{\alpha})$ is not $\kappa$-free since we are working in the model $\v$. Let $G'/\cl_p(G,G_{\alpha}) \subseteq_* G/\cl_p(G,G_{\alpha})$ be a non-free pure subgroup of $G/\cl_p(G,G_{\alpha})$ of size less than $\kappa$. Then there exists $\alpha \leq \beta < \lambda$ such that $G' \subseteq_* G_{\beta}$. By purity it follows that $\cl_p(G,G_{\alpha}) \cap G_{\beta}=\cl_p(G_{\beta},G_{\alpha})$. Hence $$G_{\beta}/{\cl}_{p}(G_{\beta},G_{\alpha}) = \left( G_{\beta} + {\cl}_{p}(G,G_{\alpha}) \right) /{\cl}_{p}(G,G_{\alpha})$$ is torsion-free but not free. Without loss of generality we may assume that $\beta=\alpha+1$. Hence we may assume that for all $\alpha < \lambda$ the quotient $G_{\alpha + 1}/\cl_p(G_{\alpha +1},G_{\alpha})$ is a torsion-free non-free group. Note that $G_{\alpha + 1}/\cl_p(G_{\alpha +1},G_{\alpha})$ is also $p$-reduced since $\cl_p(G_{\alpha +1},G_{\alpha})$ is the $p$-closure of $G_{\alpha}$ inside $G_{\alpha+1}$.\ Since the cardinality of $G_{\alpha + 1}/\cl_p(G_{\alpha +1},G_{\alpha})$ is less than $\lambda$ the induction hypothesis applies. 
Hence $r_p^e(G_{\alpha + 1}/\cl_p(G_{\alpha +1},G_{\alpha}))=2^{\fk_p(G_{\alpha + 1}/\cl_p(G_{\alpha +1},G_{\alpha}))}$. We claim that $\Ext_p(G_{\alpha + 1}/\cl_p(G_{\alpha +1},G_{\alpha}),\Z) \not = \{0\}$ stationarily often. If not, then there is a cub $C \subseteq \lambda$ such that for all $\alpha < \beta \in C$ we have $\Ext_p(G_{\alpha + 1}/\cl_p(G_{\alpha +1},G_{\alpha}),\Z) = \{0\}$ and equivalently $\Ext_p(G_{\beta}/G_{\alpha},\Z)=0$, hence $\fk_p(G_{\beta})=\fk_p(G_{\alpha})$ by induction hypothesis. As in [@EkMe Proposition XII 1.5] it follows that $\Ext_p(G/G_{\alpha},\Z)=0$ for all $\alpha \in C$. This easily contradicts the fact that $\fk_p(G)=\lambda$. It follows that without loss of generality for every $\alpha < \lambda$ there exist homomorphisms $h_{\alpha}^0, h_{\alpha}^1 \in \Hom(G_{\alpha+1},\Z/p\Z)$ such that 1. $h_{\alpha}^0\restriction_{\cl_p(G_{\alpha +1},G_{\alpha})}=h_{\alpha}^1\restriction_{\cl_p(G_{\alpha +1},G_{\alpha})}$; 2. There are no homomorphisms $g_{\alpha}^0,g_{\alpha}^1 \in \Hom(G_{\alpha +1},\Z)$ such that 1. $g_{\alpha}^0\restriction_{\cl_p(G_{\alpha +1},G_{\alpha})}=g_{\alpha}^1\restriction_{\cl_p(G_{\alpha +1},G_{\alpha})}$ (or equivalently $g_{\alpha}^0\restriction_{G_{\alpha}}=g_{\alpha}^1\restriction_{G_{\alpha}}$); 2. $h_{\alpha}^0=g_{\alpha}^0\varphi_p$ and $h_{\alpha}^1=g_{\alpha}^1\varphi_p$. To see this note that there is a homomorphism $\varphi: G_{\alpha+1}/\cl_p(G_{\alpha+1},G_{\alpha}) \rightarrow \Z/p\Z$ which is not of the form $g\varphi_p$ for any $g \in \Hom(G_{\alpha+1}/\cl_p(G_{\alpha+1},G_{\alpha}),\Z)$ since $\Ext_p(G_{\alpha + 1}/{\cl}_p(G_{\alpha +1},G_{\alpha}),\Z) \not = \{0\}$. Let $h_{\alpha}^0=0$ and $h_{\alpha}^1$ be given by $$h_{\alpha}^1: G_{\alpha+1} \rightarrow G_{\alpha+1}/{\cl}_p(G_{\alpha+1},G_{\alpha}) \overset{\varphi}\rightarrow \Z/p\Z.$$ Then it is easy to check that $h_{\alpha}^0$ and $h_{\alpha}^1$ are as required. In particular we may assume that $h_{\alpha}^0=0$ for every $\alpha < \lambda$.
An immediate consequence is the following property (U).\ (U) Let $f: G_{\alpha} \rightarrow \Z/p\Z$ and $g: G_{\alpha} \rightarrow \Z$ be such that $g\varphi_p=f$. Then there exists $\tilde{f}: G_{\alpha+1} \rightarrow \Z/p\Z$ such that $\tilde{f}\restriction_{G_{\alpha}}=f$ and there is no homomorphism $\tilde{g}:G_{\alpha+1} \rightarrow \Z$ satisfying both $\tilde{g}\restriction_{G_{\alpha}}=g$ and $\tilde{g}\varphi_p=\tilde{f}$.\ To see this, let $\hat{f}:G_{\alpha+1} \rightarrow \Z/p\Z$ be any extension of $f$ which exists by the pure injectivity of $\Z/p\Z$. If $\hat{f}$ is as required let $\tilde{f}=\hat{f}$. Otherwise let $\hat{g}:G_{\alpha+1} \rightarrow \Z$ be such that $\hat{g}\restriction_{G_{\alpha}}=g$ and $\hat{g}\varphi_p=\hat{f}$. Put $\tilde{f}=\hat{f}-h_{\alpha}^1$. Then $\tilde{f}\restriction_{G_{\alpha}}=f$. Assume that there exists $\tilde{g}:G_{\alpha+1} \rightarrow \Z$ such that $\tilde{g}\restriction_{G_{\alpha}}=g$ and $\tilde{g}\varphi_p=\tilde{f}$. Choosing $g_{\alpha}^1=\tilde{g}-\hat{g}$ we conclude $-g_{\alpha}^1\varphi_p=h_{\alpha}^1$, contradicting (2). Note that $h_{\alpha}^0=0$.\ We now proceed exactly as in [@HHS Proposition 1] to show that $r_p^e(G)=2^{\fk_p(G)}=2^{\lambda}$. We therefore recall the proof only briefly and for simplicity we shall even assume that $\Diamond_{\lambda}$ holds. It is an easy exercise (and therefore left to the reader) to prove the result assuming the weak diamond principle only. Assume that $r_p^e(G)=\sigma < 2^{\lambda}$ and let $L=\{ f^{\alpha} : \alpha < \sigma \}$ be a complete list of representatives of elements in $\Hom(G,\Z/p\Z)/\Hom(G,\Z)\varphi_p$. Without loss of generality let $\{g_{\alpha}: G_{\alpha} \rightarrow \Z: \alpha < \lambda \}$ be the Jensen functions given by $\Diamond_{\lambda}$, hence for every homomorphism $g: G \rightarrow \Z$ there exists $\alpha$ such that $g\restriction_{G_{\alpha}}=g_{\alpha}$.
We now define a sequence of homomorphisms $\{f_{\alpha}^* : G_{\alpha} \rightarrow \Z/p\Z : \alpha < \lambda\}$ such that the following hold. 1. $f_{0}^*=f^0$; 2. $f^*_{\alpha}\restriction_{G_{\beta}}=f^*_{\beta}$ for all $\beta < \alpha$; 3. If $f^*=\bigcup_{\alpha < \lambda}f^*_{\alpha}$, then $f^*-f^{\alpha}$ is an element of $\Hom(G,\Z/p\Z)$ but not of $\Hom(G,\Z)\varphi_p$. Suppose that $f^*_{\beta}$ has been defined for all $\beta < \alpha$. If $\alpha$ is a limit ordinal, then we let $f^*_{\alpha}=\bigcup_{\beta < \alpha}f^*_{\beta}$ which is a well-defined homomorphism by (2). If $\alpha=\beta +1$ is a successor ordinal, then we distinguish two cases. If $f^*_{\beta}-f^{\beta}\restriction_{G_{\beta}} \not=g_{\beta}\varphi_p$, let $f^*_{\alpha}:G_{\alpha} \rightarrow \Z/p\Z$ be any extension of $f^*_{\beta}$ which exists since $\Z/p\Z$ is pure injective and $G_{\beta} \subseteq_* G_{\alpha}$. If $f^*_{\beta}-f^{\beta}\restriction_{G_{\beta}} =g_{\beta}\varphi_p$, then (U) shows that there is a homomorphism $\tilde{f}:G_{\alpha} \rightarrow \Z/p\Z$ extending $f^*_{\beta}-f^{\beta}\restriction_{G_{\beta}}$ such that there is no $\tilde{g}:G_{\beta+1} \rightarrow \Z$ with $\tilde{g}$ extending $g_{\beta}$ and $\tilde{g}\varphi_p=\tilde{f}$. Finally, put $f_{\alpha}^*=\tilde{f} + f^{\beta}\restriction_{G_{\alpha}}$ and $f^*=\bigcup\limits_{\alpha < \lambda}f_{\alpha}^*$. It is now straightforward to see that $f^*$ satisfies (3) and hence $f^*$ contradicts the maximality of the list $L$.\ $\lambda \geq \kappa$ is singular.\ First note that $\fk_p(G) > \kappa$ since $\kappa=2^{\aleph_0}$ is regular. By induction on $\alpha < \lambda$ we choose subgroups $K_{\alpha}$ of $G$ such that the following hold. 1. $K_{\alpha}$ is a pure non-free subgroup of $G$; 2. $|K_{\alpha}| < \kappa$; 3. $K_{\alpha} \cap \sum\limits_{\beta < \alpha}K_{\beta}=\{ 0 \}$; 4. $\sum\limits_{\beta < \alpha}K_{\beta}$ is a pure subgroup of $G$.
Assume that we have succeeded in constructing the groups $K_{\alpha}$ ($\alpha < \lambda$). Then $$K=\sum\limits_{\beta < \lambda}K_{\beta}=\bigoplus\limits_{\beta < \lambda}K_{\beta}$$ is a pure subgroup of $G$ and hence $r_p^e(G) \geq r_p^e(K)$ by Lemma \[freerank\] (ii). If $\Ext_p(K_{\alpha},\Z)=0$, then $K_{\alpha}\in K_p$ follows by induction. Since $G$ is $p$-reduced we obtain $K_{\alpha}\in K_0$ contradicting (1). Thus, $\Ext_p(K_{\alpha},\Z) \not= \{0\}$ for every $\alpha < \lambda$ which implies that $r_p^e(K) \geq 2^{\lambda}$ since $\Ext(K,\Z) \cong \prod\limits_{\alpha < \lambda}\Ext(K_{\alpha},\Z)$. It therefore suffices to complete the construction of the groups $K_{\alpha}$ $(\alpha < \lambda)$. Assume that $K_{\beta}$ for $\beta < \alpha$ has been constructed. Let $\mu=(\kappa + |\alpha|)^{< \kappa}$ which is a cardinal less than $\lambda$. Let $H_{\alpha}$ be such that 1. $H_{\alpha} \subseteq_* G$; 2. $\sum\limits_{\beta < \alpha}K_{\beta} \subseteq H_{\alpha}$; 3. $|H_{\alpha}|=\mu$; 4. If $K \subseteq G$ is of cardinality less than $\kappa$, then there is a subgroup $K' \subseteq_* H_{\alpha}$ such that $H_{\alpha} \cap K \subseteq K'$ and $K$ and $K'$ are isomorphic over $K \cap H_{\alpha}$, i.e. there exists an isomorphism $\psi: K \rightarrow K'$ which is the identity when restricted to $K \cap H_{\alpha}$. It is easy to see that $H_{\alpha}$ exists. Now, $G/H_{\alpha}$ is a non-free group since $\fk_0(G)=\lambda$ and $p^{\omega}\left( G/H_{\alpha} \right)=\{ 0 \}$. Hence [@MeSh] implies that there is $K'_{\alpha} \subseteq G$ such that $\left( K_{\alpha}' + H_{\alpha} \right) / H_{\alpha}$ is not free and $|K'_{\alpha}| < \kappa$. Let $K_{\alpha}^0 \subseteq_* H_{\alpha}$ be as in (4), i.e. $K_{\alpha}' \cap H_{\alpha} \subseteq K_{\alpha}^0$ and there is an isomorphism $\psi_{\alpha}: K_{\alpha}' \rightarrow K_{\alpha}^0$ which is the identity on $K_{\alpha}' \cap H_{\alpha}$. Let $K_{\alpha}=\{ x - x\psi_{\alpha} : x \in K_{\alpha}' \}$.
Then $K_{\alpha}$ is as required. For instance $$K_{\alpha} \cong K_{\alpha}^0/\left( K_{\alpha}' \cap H_{\alpha} \right) \cong K_{\alpha}'/\left( K_{\alpha}' \cap H_{\alpha} \right)$$ shows that $K_{\alpha}$ is not free.\ In the model $\v$ let $\left< \mu_p : p \in \Pi_0 \right>$ be a sequence of cardinals. Then there exists a torsion-free non-free abelian group $G$ such that $r_p^e(G)=\mu_p$ for all $p \in \Pi_0$ if and only if 1. $\mu_0=2^{\lambda_0}$ for some infinite cardinal $\lambda_0$; 2. $\mu_p \leq \mu_0$ for all $p \in \Pi$; 3. $\mu_p$ is either finite or of the form $2^{\lambda_p}$ for some infinite cardinal $\lambda_p$. The proof follows easily from Lemma \[existence\] and Theorem \[main1\].\ \[maincor\] In the model $\v$ let $\left< \mu_p : p \in \Pi_0 \right>$ be a sequence of cardinals. Then there exists a non-free $\aleph_1$-free abelian group $G$ such that $r_p^e(G)=\mu_p$ for all $p \in \Pi_0$ if and only if $\mu_p \leq \mu_0$ and $\mu_p=2^{\lambda_p}$ for some infinite cardinal $\lambda_p$ for every $p \in \Pi_0$. By Theorem \[main1\] we only have to prove the existence claim of the corollary. It suffices to construct $\aleph_1$-free groups $G_p$ for $p \in \Pi_0$ such that $r_p^e(G_p)=r_0^e(G_p)=2^{\aleph_0}=\kappa$ and $r_q^e(G_p)=0$ for all $p \not=q \in \Pi$. Then $B=G_0^{(\lambda_0)} \oplus \bigoplus\limits_{p \in \Pi} G_p^{(\lambda_p)}$ will be as required (see for instance the proof of [@HHS Theorem 3(b)]). Fix $p \in \Pi_0$. From [@EkMe Theorem XII 4.10] or [@SS2] it follows that there exists an $\aleph_1$-free non-free group $G_p$ of size $2^{\aleph_0}$ such that $r_p^e(G_p)=2^{\aleph_0}=\kappa$ if $p \in \Pi$. In [@EkMe Theorem XII 4.10] it is then assumed that $2^{\aleph_0}=\aleph_1$ to show that also $r_0^e(G_p)=\kappa$.
However, since we work in the model $\v$ and $G_p$ is not free it follows from Theorem \[main1\] that $\fk_0(G_p) \geq \aleph_1$ and hence $r_0^e(G_p)=2^{\fk_0(G_p)}=\kappa$.\ Recall that a reduced torsion-free group $G$ is called [*coseparable*]{} if $\Ext(G,\Z)$ is torsion-free. By [@MeSh] it is consistent that all coseparable groups are free. However, by [@EkMe Theorem XII 4.10] there exist non-free coseparable groups assuming $2^{\aleph_0}=\aleph_1$. Note that the groups constructed in Lemma \[existence\] are not reduced, hence do not provide examples of coseparable groups. In the model $\v$ there exist non-free coseparable groups. Follows from Corollary \[maincor\] letting $\mu_0=2^{\aleph_0}$ and $\mu_p=0$ for all $p \in \Pi$. A model close to $ZFC$ ====================== In this section we shall construct a coseparable group which is not free, in a model of set theory very close to $ZFC$. As mentioned in Section $4$, it is undecidable in $ZFC$ whether all coseparable groups are free.\ Let $\aleph_0 < \lambda$ be a regular cardinal and $S$ a stationary subset of $\lambda$ consisting of limit ordinals of cofinality $\omega$. We recall the definition of a ladder system on $S$ (see for instance [@EkMe page 405]). A [*ladder system*]{} $\bar{\eta}$ on $S$ is a family of functions $\bar{\eta}=\left< \eta_{\delta} : \delta \in S \right>$ such that $\eta_{\delta}: \omega \rightarrow \delta$ is strictly increasing with $\sup(\rg(\eta_{\delta}))=\delta$, where $\rg(\eta_{\delta})$ denotes the range of $\eta_{\delta}$. We call the ladder system [*tree-like*]{} if for all $\delta, \nu \in S$ and every $n, m \in \omega$, $\eta_{\delta}(n)=\eta_{\nu}(m)$ implies $n=m$ and $\eta_{\delta}(k)=\eta_{\nu}(k)$ for all $k \leq n$. In order to construct almost-free groups one method is to use $\kappa$-free ladder systems. Let $\kappa$ be an uncountable regular cardinal.
The ladder system $\bar{\eta}$ is called [$\kappa$-free]{} if for every subset $X \subseteq S$ of cardinality less than $\kappa$ there is a sequence of natural numbers $\left< n_{\delta} : \delta \in X \right>$ such that $$\left< \{\eta_{\delta}\restriction_l : n_{\delta} < l < \omega\} : \delta \in X \right>$$ is a sequence of pairwise disjoint sets. Finally, recall that a stationary set $S \subseteq \lambda$ with $\lambda$ uncountable regular is called [*non-reflecting*]{} if $S \cap \kappa$ is not stationary in $\kappa$ for every $\kappa < \lambda$ with $\cf(\kappa) > \aleph_0$. \[main2\] Let $\mu$ be an uncountable strong limit cardinal such that $\cf(\mu)=\omega$ and $2^{\mu}=\mu^+$. Put $\lambda=\mu^+$ and assume that there exists a $\lambda$-free tree-like ladder system on a non-reflecting stationary subset $S \subseteq \lambda$. If $\Pi=\Pi_0 \cup \Pi_1$ is a partition of $\Pi$ into disjoint subsets $\Pi_0$ and $\Pi_1$, then there exists an almost-free group $G$ of size $\lambda$ such that 1. $r_0^e(G)=2^{\lambda}$; 2. $r_p^e(G)=2^{\lambda}$ if $p \in \Pi_0$; 3. $r_p^e(G)=0$ if $p \in \Pi_1$. Let $\bar{\eta}=\left< \eta_{\delta} : \delta \in S \right>$ be the $\lambda$-free ladder system where $S$ is a stationary non-reflecting subset of $\lambda$ consisting of ordinals less than $\lambda$ of cofinality $\omega$. Without loss of generality we may assume that $S=\lambda$. Let $\pr : \mu^2 \rightarrow \mu$ be a pairing function, hence $\pr$ is bijective and if $\alpha \in \mu$ then we shall denote by $(\pr_1(\alpha),\pr_2(\alpha))$ the unique pair $(\beta,\gamma) \in \mu^2$ such that $\pr(\pr_1(\alpha),\pr_2(\alpha))=\alpha$. Let $L$ be the free abelian group $$L=\bigoplus\limits_{\alpha < \mu} \Z x_{\alpha}$$ generated by the independent elements $x_{\alpha}$ ($\alpha < \mu$). 
For notational simplicity we may assume that $\Pi_1 \not= \emptyset$ and let $\left< (p_{\beta},f_{\beta}) : \beta < \lambda \right>$ be a listing of all pairs $(p,f)$ with $p \in \Pi_1$ and $f \in \Hom(L,\Z/p\Z)$. Recall that $\lambda=2^{\mu}$. By induction on $\beta < \lambda$ we shall choose triples $(g_{\beta},\nu_{\beta},\rho_{\beta})$ such that the following conditions hold. 1. $g_{\beta} \in \Hom(L,\Z)$; 2. $f_{\beta}=g_{\beta}\varphi_{p_{\beta}}$ where, for a prime $p$, $\varphi_p: \Hom(L,\Z) \rightarrow \Hom(L,\Z/p\Z)$ is the canonical map; 3. $\nu_{\beta},\rho_{\beta} : \omega \rightarrow \mu$ such that $\eta_{\beta}(n)=\pr_1(\nu_{\beta}(n))=\pr_1(\rho_{\beta}(n))$; 4. For all $\delta \leq \beta$ there exists $n=n(\delta, \beta) \in \omega$ such that for all $m \geq n$ we have $g_{\delta}(x_{\nu_{\beta}(m)})=g_{\delta}(x_{\rho_{\beta}(m)})$; 5. For all $\delta < \beta$ there exists $n=n(\delta, \beta) \in \omega$ such that for some sequence $\left< b^{\delta,\beta}_m : m \in [n,\omega) \right>$ of natural numbers we have $\left(\prod\limits_{p \in \Pi_1 \cap m}p \right)b^{\delta,\beta}_{m+1}=b^{\delta,\beta}_m + g_{\beta}(x_{\nu_{\delta}(m)}) - g_{\beta}(x_{\rho_{\delta}(m)})$ for all $m \geq n$; 6. $\nu_{\beta}(m) \not= \rho_{\beta}(m)$ for all $m \in \omega$. Fix $\beta < \lambda$ and assume that we have constructed $(g_{\delta},\nu_{\delta},\rho_{\delta})$ for all $\delta < \beta$. Choose a function $h_{\beta}: \beta \rightarrow \omega$ such that $h_{\beta}(\delta) > p_{\delta}$ for all $\delta < \beta$ and $$\label{disjoint} \left< \{ \eta_{\delta}\restriction_l : l \in [h_{\beta}(\delta),\omega)\} : \delta < \beta \right>$$ is a sequence of pairwise disjoint sets. Note that such a choice is possible since the ladder system $\bar{\eta}$ is $\lambda$-free by assumption.
Moreover, by (3) and the injectivity of the pairing function $\pr$ it follows that also $$\label{almostfree} \left< \{ \nu_{\delta}\restriction_l, \rho_{\delta}\restriction_l : l \in [h_{\beta}(\delta),\omega)\} : \delta < \beta \right>$$ is a sequence of pairwise disjoint sets. Now, we choose the function $g_{\beta}$ such that (2) and (5) hold. For $\delta < \beta$ let $n=n(\delta,\beta)=h_{\beta}(\delta)$. Since $L$ is free we may first choose $g_{\beta}(x_{\alpha})$ satisfying $g_{\beta}(x_{\alpha}) + p_{\beta}\Z = f_{\beta}(x_{\alpha})$ for every $\alpha$ such that $\pr_1(\alpha) \not= \eta_{\delta}(l)$ for all $\delta < \beta$ and $l \geq n(\delta,\beta)$, that is to say for those $\alpha$ such that $x_{\alpha}$ does not appear in (5). Secondly, for $\delta < \beta$, we choose by induction on $m \geq n(\delta,\beta)$ integers $b^{\delta,\beta}_{m+1}$ such that $$0 + p_{\beta}\Z = b^{\delta,\beta}_{m+1} + f_{\beta}(x_{\nu_{\delta}(m)}) - f_{\beta}(x_{\rho_{\delta}(m)}) + p_{\beta}\Z$$ and then choose $g_{\beta}(x_{\nu_{\delta}(m)})$ and $g_{\beta}(x_{\rho_{\delta}(m)})$ such that (5) holds for $\delta$. Note that this inductive process is possible by the choice of $h_{\beta}$ and condition (\[disjoint\]).\ Finally, let $\beta=\bigcup\limits_{n \in \omega}A_n$ be the union of an increasing chain of sets $A_n$ such that $|A_n|< \mu$ (recall that we have assumed without loss of generality that $S=\lambda$, so $\beta$ is of cofinality $\omega$). By induction on $n < \omega$ we may now choose $\rho_{\beta}(n)$ and $\nu_{\beta}(n)$ as distinct ordinals such that - $\rho_{\beta}(n),\nu_{\beta}(n) \in \mu$ - $\rho_{\beta}(n),\nu_{\beta}(n) \not\in \{ \nu_{\beta}(m), \rho_{\beta}(m) : m < n \}$ - $\pr_1(\rho_{\beta}(n))=\pr_1(\nu_{\beta}(n))=\eta_{\beta}(n)$; - $\left< g_{\delta}(x_{\nu_{\beta}(n)}) : \delta \in A_n \right> = \left< g_{\delta}(x_{\rho_{\beta}(n)}) : \delta \in A_n \right>$. Hence (3), (4) and (6) hold and we have carried out the induction step.
Now, let $G$ be the abelian group generated by $L$ and $\{ y_{\beta,n} : \beta < \lambda, n \in \omega \}$ subject to the following relations for $\beta < \lambda$ and $n \in \omega$. $$\left( \prod\limits_{p \in \Pi_1 \cap n} p \right) y_{\beta,n+1} = y_{\beta,n}+x_{\nu_{\beta}(n)} - x_{\rho_{\beta}(n)} .$$ Then $G$ is a torsion-free abelian group of size $\lambda$. Moreover, since the ladder system $\bar{\eta}$ is $\lambda$-free and $S$ is stationary and non-reflecting it follows by standard calculations using (\[almostfree\]) that $G$ is almost-free but not free (see for instance [@EkSh2]). It remains to prove that (i), (ii) and (iii) of the Theorem hold. For $\beta < \lambda$ let $$G_{\beta}= \left< L, y_{\delta,n} : \delta < \beta, n \in \omega \right>_* \subseteq_* G$$ so that $G=\bigcup\limits_{\beta < \lambda}G_{\beta}$ is the union of the continuous increasing sequence of pure subgroups $G_{\beta}$ ($\beta < \lambda$). We start by proving (iii). Thus let $p \in \Pi_1$ and choose $f \in \Hom(G,\Z/p\Z)$. By assumption there is $\beta < \lambda$ such that $(p,f\restriction_L)=(p_{\beta},f_{\beta})$. Inductively we shall define an increasing sequence of homomorphisms $g_{\beta,\gamma}: G_{\gamma} \rightarrow \Z$ for $\gamma \geq \beta$ such that $g_{\beta,\gamma}\varphi_p=f\restriction_{G_{\gamma}}$. For $\gamma=\beta$ we choose $n(\delta,\beta)$ and $\left< b^{\delta,\beta}_m : m \in [n(\delta,\beta),\omega) \right>$ as in (5) for $\delta < \beta$. We let $g_{\beta,\beta}\restriction_L=g_{\beta}$ where $g_{\beta}$ is chosen as in (1). Moreover, put $g_{\beta,\beta}(y_{\delta,m})=b_m^{\delta,\beta}$ for $m \in [n(\delta,\beta),\omega)$ and $\delta < \beta$. By downward induction we choose $g_{\beta,\beta}(y_{\delta,m})$ for $m < n(\delta,\beta)$, $\delta < \beta$. It is easily seen that $g_{\beta,\beta}$ is as required, i.e. satisfies $g_{\beta,\beta}\varphi_p=f\restriction_{G_{\beta}}$. Now, assume that $\gamma > \beta$.
If $\gamma$ is a limit ordinal, then let $g_{\beta,\gamma}=\bigcup\limits_{\beta \leq \epsilon < \gamma}g_{\beta,\epsilon}$. If $\gamma=\epsilon+1$, then (4) implies that there is $n(\beta,\epsilon) < \omega$ such that $g_{\beta}(x_{\nu_{\epsilon}(m)})=g_{\beta}(x_{\rho_{\epsilon}(m)})$ for all $m \in [n(\beta,\epsilon),\omega)$. Therefore, putting $g_{\beta,\gamma}\restriction_{G_{\epsilon}}=g_{\beta,\epsilon}$ and $g_{\beta,\gamma}(y_{\epsilon,m})=0$ for $m \in [n(\beta,\epsilon),\omega)$ and determining $g_{\beta,\gamma}(y_{\epsilon,m})$ by downward induction for $m < n(\beta,\epsilon)$ we obtain $g_{\beta,\gamma}$ as required. Finally, let $g=\bigcup\limits_{\gamma \geq \beta}g_{\beta,\gamma}$ which satisfies $g\varphi_p=f$. Since $f$ was chosen arbitrarily it follows that $\Hom(G,\Z/p\Z)=\Hom(G,\Z)\varphi_p$ for all $p \in \Pi_1$ and hence $r_p^e(G)=0$ for $p \in \Pi_1$.\ We now turn to $p \in \Pi_0$. By definition of $G$ it follows that every homomorphism $\psi:L \rightarrow \Z$ has at most one extension to a homomorphism $\psi':G \rightarrow \Z$. Thus $|\Hom(G,\Z)|\leq 2^{\mu}$. However, for every $\beta < \lambda$, any homomorphism $\psi: G_{\beta} \rightarrow \Z/p\Z$ has more than one extension to a homomorphism $\psi':G_{\beta+1} \rightarrow \Z/p\Z$ and hence $|\Hom(G,\Z/p\Z)|=2^{\lambda} > 2^{\mu}$. Consequently, $r_p^e(G)=2^{\lambda}$. Similarly, it follows that $r_0^e(G)=2^{\lambda}$ which finishes the proof.\ Let $\mu$ be an uncountable strong limit cardinal such that $\cf(\mu)=\omega$ and $2^{\mu}=\mu^+$. Put $\lambda=\mu^+$ and assume that there exists a $\lambda$-free ladder system on a stationary subset $S \subseteq \lambda$. Then there exists an almost-free non-free coseparable group of size $\lambda$. Follows from Theorem \[main2\] letting $\Pi_0=\emptyset$ and $\Pi_1=\Pi$. [99]{} , [*On Shelah’s compactness of cardinals*]{}, Israel J. Math. [**31**]{} (1978), 34–56. , [*On the rank of $\Ext$*]{}, Math. Zeit. [**174**]{} (1980), 159–185.
, [*Almost Free Modules, Set-Theoretic Methods (revised edition)*]{}, Amsterdam, New York, North-Holland, Math. Library. , [*The structure of $\Ext(A,\Z)$ and $GCH$: possible co-Moore spaces*]{}, Math. Zeit. [**239**]{} (2002), 143–157. , [*On Whitehead modules*]{}, J. Algebra [**142**]{} (1991), 492–510. , [*Infinite Abelian Groups, Vol. I and II*]{}, Academic Press (1970 and 1973). , [*On the structure of $\Ext_p(G,\Z)$*]{}, J. Algebra [**121**]{} (1989), 117–128. , [*On cardinalities in quotients of inverse limits of groups*]{}, Math Japonica [**47**]{} (1998), 189-197. , [*The structure of $\Ext(A,\Z)$ and $V=L$*]{}, Math. Zeit. [**162**]{} (1978), 39–50. , [*Set Theory*]{}, Academic Press, New York (1973). , [*Set Theory - An Introduction to Independent Proofs*]{}, Studies in Logic and the Foundations of Mathematics, North Holland, [**102**]{} (1980). , [*Every coseparable group may be free*]{}, Israel J. Math. [**81**]{} (1993), 161–178. , [*On the $p$-rank of $\Ext$*]{}, Israel J. Math. [**112**]{} (1999), 137–156. , [*Weak compactness and the structure of $\Ext(G,\Z)$*]{}, Abelian group theory (Oberwolfach, 1981), ed. R. Göbel and A.E. Walker, Lecture Notes in Mathematics [**874**]{} Springer Verlag (1981), 87–92. , [*On the structure of $\Ext(A,\Z)$ in $ZFC^+$*]{}, J. of Symbolic Logic [**50**]{} (1985), 302–315. , [*A compactness theorem for singular cardinals, free algebras, Whitehead problem and transversals*]{}, Israel J. Math. [**21**]{} (1975), 319–349. , [*Whitehead groups may not be free even assuming CH, I*]{}, Israel J. Math. [**28**]{} (1977), 193–203. , [*Whitehead groups may not be free even assuming CH, II*]{}, Israel J. Math. [**35**]{} (1980), 257–285. , [*The consistency of $\Ext(G,\Z)=\Q$*]{}, Israel J. Math. [**39**]{} (1981), 74–82. , [*Proper and improper forcing*]{}, Perspectives in Mathematical Logic, Springer Verlag (1998). , [*A characterization of $\Ext(G,\Z)$ assuming $(V=L)$*]{}, submitted. , [*Whitehead test modules*]{}, Trans. 
Amer. Math. Soc. [**348**]{} (1996), 1521–1554. [^1]: 2000 Mathematics Subject Classification. Primary 20K15, 20K20, 20K35, 20K40; Secondary 18E99, 20J05 [^2]: Number 874 in Shelah’s list of publications. The first author was supported by project No. I-706-54.6/2001 of the [ *German-Israeli Foundation for Scientific Research & Development*]{}.\ The second author was supported by a grant from the German Research Foundation DFG
--- abstract: 'In 1980, Gizatullin classified rational surfaces endowed with an automorphism whose action on the Neron-Severi group is parabolic: these surfaces are endowed with an elliptic fibration invariant by the automorphism. The aim of this expository paper is to present for non-experts the details of Gizatullin’s original proof, and to provide an introduction to a recent paper by Cantat and Dolgachev.' address: 'CNRS & Institut de Mathématiques de Marseille, Université d’Aix-Marseille, $39$ rue Frédéric Joliot-Curie, $13453$ Marseille Cedex $13$, France.' author: - Julien Grivaux bibliography: - 'bib.bib' title: | Parabolic automorphisms of projective surfaces\ (after M. H. Gizatullin) --- Introduction ============ Let $X$ be a projective complex surface. The Neron-Severi group $\mathrm{NS}\,(X)$ is a free abelian group endowed with an intersection form whose extension to $\mathrm{NS}_{\R}(X)$ has signature $(1, \mathrm{h}^{1,1}(X)-1)$. Any automorphism $f$ of $X$ acts by pullback on $\mathrm{NS}\,(X)$, and this action is isometric. The corresponding isometry $f^*$ can be of three different types: elliptic, parabolic or hyperbolic. These situations can be read off from the growth of the iterates of $f^*$. If $|| \, . \, ||$ is any norm on $\mathrm{NS}_{\R}(X)$, they correspond respectively to the following situations: $||(f^*)^n||$ is bounded, $||(f^*)^n|| \sim C n^2$ and $||(f^*)^n|| \sim \lambda^n$ for $\lambda >1$. This paper is concerned with the study of parabolic automorphisms of projective complex surfaces. The initial motivation for their study was that parabolic automorphisms don’t come from $\mathrm{PGL}(N, \C)$ via some projective embedding $X \hookrightarrow \P^N$. Indeed, if $f$ is an automorphism coming from $\mathrm{PGL}(N, \C)$, then $f^*$ must preserve an ample class in $\mathrm{NS}\,(X)$, so $f^*$ is elliptic.
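To make these growth rates concrete in the parabolic case, recall that a parabolic isometry has a single non-trivial Jordan block, which is unipotent of size $3$ (this normal form is established in §\[3.1\] below). The powers of such a block are easily computed: $$\begin{pmatrix} 1&1&0\\ 0&1&1\\ 0&0&1 \end{pmatrix}^{n}=\begin{pmatrix} 1&n&\frac{n(n-1)}{2}\\ 0&1&n\\ 0&0&1 \end{pmatrix},$$ which is where the quadratic growth $||(f^*)^n|| \sim C n^2$ comes from.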
The first known example of such a pair $(X, f)$, due initially to Coble [@Coble] and popularised by Shafarevich, goes as follows: consider a generic pencil of cubic curves in $\mathbb{P}^2$; it has $9$ base points. Besides, all the curves in the pencil are smooth elliptic curves except $12$ nodal curves. After blowing up the nine base points, we get an elliptic surface $X$ with $12$ singular fibers and $9$ sections $s_1, \ldots, s_9$ corresponding to the exceptional divisors, called a Halphen surface (of index $1$). The section $s_1$ specifies an origin on each smooth fiber of $X$. For $2 \leq i \leq 9$, we have a natural automorphism $\sigma_i$ of the generic fiber of $X$ given by the formula $\sigma_i(x)=x+s_i-s_1$. It is possible to prove that the $\sigma_i$’s extend to automorphisms of $X$ and generate a free abelian group of rank $8$ in $\mathrm{Aut}\,(X)$. In particular, any nonzero element in this group is parabolic since the group of automorphisms of $X$ fixing the class of an ample divisor is finite. In many aspects, this example is a faithful illustration of parabolic automorphisms on projective surfaces. A complete classification of pairs $(X, f)$ where $f$ is a parabolic automorphism of $X$ is given in [@GIZ]. In his paper, Gizatullin considers not only parabolic automorphisms, but more generally groups of automorphisms containing only parabolic or elliptic[^1] elements. We say that such groups are of moderate growth, since the image of any element of the group in $\mathrm{GL}(\mathrm{NS}(X))$ has polynomial growth. Gizatullin’s main result runs as follows: \[Main\] Let $X$ be a smooth projective complex surface and $G$ be an infinite subgroup of $\mathrm{Aut}\, (X)$ of moderate growth. Then there exists a unique elliptic $G$-invariant fibration on $X$. Of course, if $X$ admits one parabolic automorphism $f$, we can apply this theorem with the group $G={\ensuremath{\mathbb Z}}$, and we get a unique $f$-invariant elliptic fibration on $X$.
It turns out that it is possible to reduce Theorem \[Main\] to the case $G={\ensuremath{\mathbb Z}}$ by abstract arguments of linear algebra. In all cases except rational surfaces, parabolic automorphisms come from minimal models, and are therefore quite easy to understand. The main difficulty occurs in the case of rational surfaces. As a corollary of the classification of relatively minimal elliptic surfaces, the relatively minimal model of a rational elliptic surface is a Halphen surface of some index $m$. Such surfaces are obtained by blowing up the base points of a pencil of curves of degree $3m$ in $\mathbb{P}^2$. By definition, $X$ is a Halphen surface of index $m$ if the linear system $|-mK_X|$ has no fixed part and is a base-point free pencil giving the elliptic fibration. \[Second\] Let $X$ be a Halphen surface of index $m$, $S_1, \ldots, S_{\lambda}$ the reducible fibers, $\mu_i$ the number of irreducible components of $S_i$, and $s=\sum_{i=1}^{\lambda} (\mu_i-1)$. Then $s \leq 8$, and there exists a free abelian group $G_X$ of rank $8-s$ in $\mathrm{Aut}\,(X)$ such that every nonzero element of this group is parabolic and acts by translation along the fibers. If $\lambda \geq 3$, $G_X$ has finite index in $\mathrm{Aut}\,(X)$. The number $\lambda$ of reducible fibers is at least two, and the case $\lambda=2$ is very special since all smooth fibers of $X$ are isomorphic to a fixed elliptic curve. Such elliptic surfaces $X$ are now called Gizatullin surfaces: their automorphism group is an extension of $\C^{\times}$ by a finite group, $s=8$, and the image of the representation $\rho \colon \mathrm{Aut}\,(X) \rightarrow \mathrm{GL}\, (\mathrm{NS}\,(X))$ is finite. Let us now present applications of Gizatullin’s construction. The first application lies in the theory of classification of birational maps of surfaces, which is an important subject both in complex dynamics and in algebraic geometry.
One foundational result in the subject is Diller-Favre’s classification theorem [@DF], which we recall now. If $X$ is a projective complex surface and $f$ is a birational map of $X$, then $f$ acts on the Neron-Severi group $\mathrm{NS}\,(X)$. Up to conjugacy, birational maps fall into four different types, which can be detected by looking at the growth of the endomorphisms $(f^*)^n$. The first type corresponds to birational maps $f$ such that $|| (f^*)^n || \sim \alpha n$. These maps are never conjugate to automorphisms of birational models of $X$ and they preserve a rational fibration. The three other remaining cases are $|| (f^*)^n ||$ bounded, $|| (f^*)^n || \sim Cn^2$ and $|| (f^*)^n || \sim C \lambda^n$. In the first two cases, Diller and Favre prove that $f$ is conjugate to an automorphism of a birational model of $X$. The reader can keep in mind the similarity between the last three cases and Nielsen-Thurston’s classification of elements in the mapping class group into three types: periodic, reducible and pseudo-Anosov. The first class is now well understood (see [@BD2]), and constructing automorphisms in the last class is a difficult problem (see [@BKv], [@MM] for a systematic construction of examples in this category, as well as [@BK], [@BD] and [@DG] for more recent results). The second class fits exactly into the setting of Gizatullin’s result: using it, we get that $f$ preserves an elliptic fibration. One other feature of Gizatullin’s theorem is to give a method to construct hyperbolic automorphisms on surfaces. This seems paradoxical since Gizatullin’s result only deals with parabolic automorphisms. However, the key idea is the following: if $f$ and $g$ are two parabolic (or even elliptic) automorphisms of a surface generating a group $G$ of moderate growth, then $f^*$ and $g^*$ share a common nef class in $\mathrm{NS}\,(X)$, which is the class of any fiber of the $G$-invariant elliptic fibration.
Therefore, if $f$ and $g$ don’t share a fixed nef class in $\mathrm{NS}\, (X)$, some element in the group $G$ must be hyperbolic. Let us describe the organization of the paper. §\[3\] is devoted to the theory of abstract isometries of quadratic forms of signature $(1, n-1)$ on $\R^n$. In §\[3.1\], we recall their standard classification into three types (elliptic, parabolic and hyperbolic). The next section (§\[3.2\]) is devoted to the study of special parabolic isometries, called parabolic translations. They depend on an isotropic vector $\theta$, the direction of the translation, and form an abelian group $\mathcal{T}_{\theta}$. We prove in Proposition \[ray\] and Corollary \[wazomba\] one of Gizatullin’s main technical lemmas: if $u$ and $v$ are two parabolic translations in different directions, then $uv$ or $u^{-1}v$ must be hyperbolic. Building on this result, we prove in §\[3.3\] a general structure theorem (Theorem \[ptfixe\]) for groups of isometries fixing a lattice and containing no hyperbolic elements. In §\[4\], we recall classical material in birational geometry of surfaces which can be found at different places of [@DF]. In particular, we translate the problem of the existence of an $f$-invariant elliptic fibration in terms of the invariant nef class $\theta$ (Proposition \[nefnef\]), and we also prove using the fixed point theorem of §\[3.3\] that it is enough to deal with the case $G={\ensuremath{\mathbb Z}}f$ in Theorem \[Main\]. Then we settle this theorem for all surfaces except rational ones. In §\[5\] and §\[6\], we prove Gizatullin’s theorem. Roughly speaking, the strategy goes as follows: the invariant nef class $\theta$ is always effective, we represent it by a divisor $C$. This divisor behaves exactly as a fiber of a minimal elliptic surface, we prove this in Lemmas \[base\] and \[genre\]. The conormal bundle $N^*_{C/X}$ has degree zero on each component of $C$, but is not always a torsion point in $\mathrm{Pic}\, (C)$. 
If it is a torsion point, it is easy to produce the elliptic fibration by a Riemann-Roch type argument. If not, we consider the trace morphism $\mathfrak{tr} \colon \mathrm{Pic}\, (X) \rightarrow \mathrm{Pic}\, (C)$ and prove in Proposition \[torsion\] that $f$ acts finitely on $\mathrm{ker}\, (\mathfrak{tr})$. In Proposition \[elliptic\], we prove that $f$ also acts finitely on a large part of $\mathrm{im}\, (\mathfrak{tr})$. By a succession of clever tricks, it is possible from there to prove that $f$ acts finitely on $\mathrm{Pic}\, (X)$; this is done in Proposition \[chic\]. In §6 we recall the classification theory of relatively minimal rational elliptic surfaces; we prove in Proposition \[primitive\] that they are Halphen surfaces. In Proposition \[sept\] and Corollary \[sympa\], we prove a part of Theorem \[Second\]: the existence of parabolic automorphisms imposes a constraint on the number of reducible components of the fibration, namely $s \leq 7$. We give different characterisations of Gizatullin surfaces (that is minimal elliptic rational surfaces with two singular fibers) in Proposition \[waza\]. Then we prove the converse implication of Theorem \[Second\]: the numerical constraint $s \leq 7$ is sufficient to guarantee the existence of parabolic automorphisms. Lastly, we characterize minimal elliptic surfaces carrying no parabolic automorphisms in Proposition \[hapff\]: the generic fiber must have a finite group of automorphisms over the function field $\mathbb{C}(t)$. At the end of the paper, we carry out the explicit calculation of the representation of $\mathrm{Aut}\, (X)$ on $\mathrm{NS}\, (X)$ for unnodal Halphen surfaces (that is Halphen surfaces with irreducible fibers) in Theorem \[classieux\]. These surfaces are of crucial interest since their automorphism group is of maximal size in some sense, see [@CD] for a precise statement. Throughout the paper, we work over the field of complex numbers. 
However, Gizatullin’s arguments can be extended to any field of any characteristic with minor changes. We refer to the paper [@CD] for more details. **Acknowledgements** I would like to thank Charles Favre for pointing out Gizatullin’s paper to me and encouraging me to write this survey, as well as Jeremy Blanc, Julie Déserti and Igor Dolgachev for very useful comments. Notations and conventions {#2} ========================= Throughout the paper, $X$ denotes a smooth complex projective surface, which will always be assumed to be rational except in §\[4\]. By divisor, we will always mean $\Z$-divisor. A divisor $D=\sum_i a_i\, D_i$ on $X$ is called primitive if $\mathrm{gcd}(a_i)=1$. If $D$ and $D'$ are two divisors on $X$, we write $D \sim D'$ (resp. $D \equiv D'$) if $D$ and $D'$ are linearly (resp. numerically) equivalent. For any divisor $D$, we denote by $|D|$ the complete linear system of $D$, that is the set of effective divisors linearly equivalent to $D$; it is isomorphic to $\mathbb{P}\, \bigl( \mathrm{H}^0(X, \oo_X(D)) \bigr)$. The group of divisors modulo numerical equivalence is the Neron-Severi group of $X$; we denote it by $\mathrm{NS} (X)$. By Lefschetz’s theorem on $(1, 1)$-classes, $\mathrm{NS}\,(X)$ is the set of Hodge classes of weight $2$ modulo torsion; this is a $\Z$-module of finite rank. We also put $\mathrm{NS}\,(X)_{\R}=\mathrm{NS}\,(X) \otimes_{\Z} \R$. If $f$ is a biregular automorphism of $X$, we denote by $f^*$ the induced action on $\mathrm{NS}\,(X)$. We will always assume that $f$ is *parabolic*, which means that the induced action $f^*$ of $f$ on $\mathrm{NS}_{\R}(X)$ is parabolic. The first Chern class map is a surjective group morphism $\mathrm{Pic}\,(X) \xrightarrow{\mathrm{c}_1} \mathrm{NS}\,(X)$, where $\mathrm{Pic}\,(X)$ is the Picard group of $X$. This morphism is an isomorphism if $X$ is a rational surface, and $\mathrm{NS}\,(X)$ is isomorphic to $\Z^r$ with $r=\chi(X)-2$.
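As a concrete example (the computation is classical), let $X$ be the blow-up of $\mathbb{P}^2$ at nine points, as for the Halphen surfaces of index $1$ of the introduction. Then $r=12-2=10$, and $\mathrm{NS}\,(X)$ is the free $\Z$-module generated by the class $h$ of a line and the exceptional classes $e_1, \ldots, e_9$, the intersection form being given by $$h^2=1, \qquad e_i^2=-1, \qquad h \, . \, e_i=e_i \, . \, e_j=0 \quad (i \neq j).$$ The anticanonical class $-K_X=3h-e_1- \cdots -e_9$ satisfies $K_X^2=9-9=0$; it is the nef and isotropic class of a fiber of the elliptic fibration.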
If $r$ is the rank of $\mathrm{NS}\,(X)$, the intersection pairing induces a non-degenerate bilinear form of signature $(1, r-1)$ on $\mathrm{NS}\,(X)_{\R}$ by the Hodge index theorem. Thus, any vector subspace contained in the isotropic cone of the intersection form is of dimension at most one. If $D$ is a divisor on $X$, $D$ is called a nef divisor if for any algebraic curve $C$ on $X$, $D.C \geq 0$. The same definition holds for classes in $\mathrm{NS}\,(X)_{\R}$. By Nakai-Moishezon’s criterion, a nef divisor has nonnegative self-intersection. Isometries of a Lorentzian form {#3} =============================== Classification {#3.1} -------------- Let $V$ be a real vector space of dimension $n$ endowed with a symmetric bilinear form of signature $(1, n-1)$. The set of nonzero elements $x$ such that $x^2 \geq 0$ has two connected components. We fix one of these connected components and denote it by $\mathfrak{N}$. In general, an isometry maps $\mathfrak{N}$ either to $\mathfrak{N}$ or to $- \mathfrak{N}$. The index-two subgroup $\mathrm{O}_+ (V)$ of $\mathrm{O}(V)$ is the subgroup of isometries leaving $\mathfrak{N}$ invariant. There is a complete classification of elements in $\mathrm{O}_+ (V)$. For nice pictures corresponding to these three situations, we refer the reader to Cantat’s article in [@Milnor]. \[classification\] Let $u$ be in $\mathrm{O}_+ (V)$. Then three distinct situations can appear: 1. **u is hyperbolic** There exists $\lambda>1$ and two distinct vectors $\theta_{+}$ and $\theta_-$ in $\mathfrak{N}$ such that $u(\theta_+)=\lambda \, \theta_+$ and $u(\theta_-)=\lambda^{-1} \theta_-$. All other eigenvalues of $u$ are of modulus $1$, and $u$ is semi-simple. 2. **u is elliptic** All eigenvalues of $u$ are of modulus $1$ and $u$ is semi-simple. Then $u$ has a fixed vector in the interior of $\mathfrak{N}$. 3. **u is parabolic** All eigenvalues of $u$ are of modulus $1$ and $u$ fixes pointwise a unique ray in $\mathfrak{N}$, which lies in the isotropic cone.
Then $u$ is not semi-simple and has a unique non-trivial Jordan block which is of the form $\begin{pmatrix} 1&1&0\\ 0&1&1\\ 0&0&1 \end{pmatrix}$ where the first vector of the block directs the unique invariant isotropic ray in $\mathfrak{N}$. The existence of an eigenvector in $\mathfrak{N}$ follows from Brouwer’s fixed point theorem applied to the set of positive half-lines in $\mathfrak{N}$, which is homeomorphic to a closed Euclidean ball in $\mathbb{R}^{n-1}$. Let $\theta$ be such a vector and $\lambda$ be the corresponding eigenvalue. $*$ If $\theta$ lies in the interior of $\mathfrak{N}$, then $V=\R\, \theta \oplus {\theta}^{\perp}$. Since the bilinear form is negative definite on ${\theta}^{\perp}$, $u$ is elliptic. $*$ If $\theta$ is isotropic and $\lambda \neq 1$, then $\mathrm{im}\, (u-\lambda^{-1} \mathrm{id}) \subset \theta^{\perp}$ so that $\lambda^{-1}$ is also an eigenvalue of $u$. Hence we get two isotropic eigenvectors $\theta_+$ and $\theta_-$ corresponding to the eigenvalues $\lambda$ and $\lambda^{-1}$. Then $u$ induces an isometry of $\theta_+^{\perp} \cap \theta_-^{\perp}$, and $u$ is hyperbolic. $*$ If $\theta$ is isotropic and $\lambda=1$, and if no eigenvector of $u$ lies in the interior of $\mathfrak{N}$, we put $v=u-\textrm{id}$. If $\theta'$ is a vector in $\mathrm{ker} \, (v)$ outside $\theta^{\perp}$, then $\theta' + t \theta$ lies in the interior of $\mathfrak{N}$ for large values of $t$ and is fixed by $u$, which is impossible. Therefore $\mathrm{ker}\, (v) \subset \theta^{\perp}$. In particular, we see that ${\ensuremath{\mathbb R}}\theta$ is the unique $u$-invariant isotropic ray. Since $\theta$ is isotropic, the bilinear form is well-defined and negative definite on $\theta^{\perp}/{{\ensuremath{\mathbb R}}\theta}$, so that $u$ induces a semi-simple endomorphism $\overline{u}$ on $\theta^{\perp}/{{\ensuremath{\mathbb R}}\theta}$. Let $P$ be the minimal polynomial of $\overline{u}$; $P$ has simple complex roots.
Then there exists a linear form $\ell$ on $\theta^{\perp}$ such that for any $x$ orthogonal to $\theta$, $P(u)(x)=\ell(x) \, \theta$. Let $E$ be the kernel of $\ell$. Remark that $$\ell(x)\, \theta= u\{\ell(x)\, \theta \}=u \,\{P(u)(x)\}=P(u)(u(x))=\ell(u(x))\, \theta$$ so that $\ell \circ u=\ell$, which implies that $E$ is stable by $u$. Since $P(u_{|E})=0$, $u_{|E}$ is semi-simple. Assume that $\theta$ doesn’t belong to $E$. Then the quadratic form is negative definite on $E$, and $V=E \oplus E^{\perp}$. On $E^{\perp}$, the quadratic form has signature $(1,1)$. Then the situation becomes easy, because the isotropic cone consists of two lines, which are either preserved or swapped. If they are preserved, we get the identity map. If they are swapped, we get a reflection along a line in the interior of the isotropic cone, hence an elliptic element. In all cases we get a contradiction. Assume that $u_{| \theta^{\perp}}$ is semi-simple. Since $\mathrm{ker}\, (v) \subset \theta^{\perp}$, we can write $\theta^{\perp}=\mathrm{ker}\, (v)\, \oplus W$ where $W$ is stable by $v$ and $v_{|W}$ is an isomorphism. Now $\mathrm{im}\,(v) =\mathrm{ker}\,(v)^{\perp}$, and it follows that $\mathrm{im}\, (v)=\mathbb{R} \theta \oplus W$. Let $\zeta$ be such that $v(\zeta)=\theta$. Then $u(\zeta)=\zeta+\theta$, so that $u(\zeta)^2=\zeta^2+2 (\zeta. \theta)$. It follows that $\zeta. \theta=0$, and we get a contradiction. In particular $\ell$ is nonzero. Let $F$ be the orthogonal of the subspace $E$; it is a plane in $V$ stable by $u$, containing $\theta$ and contained in $\theta^{\perp}$. Let $\theta'$ be a vector in $F$ such that $\{\theta, \theta'\}$ is a basis of $F$ and write $u(\theta')=\alpha \theta + \beta \theta'$. Since $\theta$ and $\theta'$ are linearly independent, $\theta'^2 <0$. Besides, $u(\theta')^2=\theta'^2$ so that $\beta^2=1$. Assume that $\beta=-1$. If $x=\theta'-\frac{\alpha}{2} \theta$, then $u(x)=-x$, so that $u_{|\theta^{\perp}}$ would be semi-simple, a contradiction. Thus $\beta=1$.
Since $\alpha \neq 0$ we can also assume that $\alpha=1$. Let $v=u-\textrm{id}$. We claim that $\mathrm{ker}\,(v) \subset E$. Indeed, if $u(x)=x$, we know that $x\in \theta^{\perp}$. If $x \notin E$, then $P(u)(x) \neq 0$. But $P(u)(x)=P(1) \, x$ and since $\theta \in E$, $P(1)=0$ and we get a contradiction. This proves the claim. Since $\mathrm{im}\,(v) = \mathrm{ker}\,(v)^{\perp}$ and $\mathrm{ker}\,(v) \subset E$, $\mathrm{im}\,(v)$ contains $F$. Let $\theta''$ be such that $v(\theta'')=\theta'$. Since $v(\theta^{\perp}) \subset E$, $\theta'' \notin \theta^{\perp}$. The subspace generated by $\theta$, $\theta'$ and $\theta''$ is a $3 \times 3$ Jordan block for $u$. \[bof\] Elements of the group $\mathrm{O}_{+}(V)$ can be distinguished by the growth of the norm of their iterates. More precisely: 1. If $u$ is hyperbolic, $||u^n|| \sim C\lambda^{n}$. 2. If $u$ is elliptic, $||u^n||$ is bounded. 3. If $u$ is parabolic, $||u^n|| \sim C {n}^2$. We can sum up the two main properties of parabolic isometries which will be used in the sequel: \[lemmenef\] Let $u$ be a parabolic element of $\mathrm{O}_{+}(V)$ and $\theta$ be an isotropic fixed vector of $u$. 1. If $\alpha$ is an eigenvector of $u$, $\alpha^2 \leq 0$. 2. If $\alpha$ is fixed by $u$, then $\alpha \, . \,\theta=0$. Besides, if $\alpha^2=0$, $\alpha$ and $\theta$ are proportional. Parabolic isometries {#3.2} -------------------- The most difficult elements of $\mathrm{O}_{+}(V)$ to understand are the parabolic ones. In this section, we consider a distinguished subset of parabolic elements associated with any isotropic vector. Let $\theta$ be an isotropic vector in $\mathfrak{N}$ and $Q_{\theta}=\theta^{\perp}/ {\ensuremath{\mathbb R}}\theta$. The quadratic form is negative definite on $Q_{\theta}$. Indeed, if $x\, . \, \theta=0$, $x^2 \leq 0$ with equality if and only if $x$ and $\theta$ are proportional, so that $x=0$ in $Q_{\theta}$.
If $$\mathrm{O}_{+}(V)_{\theta}=\{ u \in \mathrm{O}_+(V) \, \, \textrm{such that} \, \, u(\theta)=\theta\}$$ we have a natural group morphism $$\chi_{\theta} \colon \mathrm{O}_{+}(V)_{\theta} \rightarrow \mathrm{O}(Q_{\theta}),$$ and we denote by $\mathcal{T}_{\theta}$ its kernel. Let us fix another isotropic vector $\eta$ in $\mathfrak{N}$ which is not collinear to $\theta$, and let $\pi \colon V \rightarrow \theta^{\perp} \cap \eta^{\perp}$ be the orthogonal projection along the plane generated by $\theta$ and $\eta$. \[commutatif\]$ $ 1. The map $\varphi \colon \mathcal{T}_{\theta} \rightarrow \theta^{\perp} \cap \eta^{\perp}$ given by $\varphi(u)=\pi \{ u(\eta) \}$ is a group isomorphism. 2. Any element in $\mathcal{T}_{\theta} \setminus \{ \textrm{id} \}$ is parabolic. We have $V=\{\theta^{\perp} \cap \eta^{\perp} \oplus {\ensuremath{\mathbb R}}\theta\} \oplus {\ensuremath{\mathbb R}}\eta=\theta^{\perp} \oplus {\ensuremath{\mathbb R}}\eta$. Let $u$ be in $\mathcal{T}_{\theta}$, and denote by $\zeta$ the element $\varphi(u)$. Let us decompose $u(\eta)$ as $a \theta + b \eta + \zeta$. Then $0=u(\eta)^2=2ab\, (\theta\, . \, \eta)+ \zeta^2$ and we get $$ab=-\dfrac{\zeta^2}{2\, (\theta. \eta)} \cdot$$ Since $u(\theta)=\theta$, $\theta\, .\, \eta=u(\theta) \, . \, u(\eta)=\theta \, . \, u(\eta)=b\, (\theta \, . \, \eta)$ so that $b=1$. This gives $$a=-\dfrac{\zeta^2}{2\, (\theta. \eta)} \cdot$$ Since $u$ lies in $\mathcal{T}_{\theta}$, it acts trivially on $Q_{\theta}$, so there exists a linear form $\lambda \colon \theta^{\perp}\cap \eta^{\perp} \rightarrow \R$ such that for any $x$ in $\theta^{\perp} \cap \eta^{\perp}$, $u(x)=x+\lambda(x) \, \theta$. Then we have $$0=x\, . \, \eta = u(x)\,.\, u(\eta)=x \, . \, \zeta+ \lambda(x) \, \theta\, . \, \eta$$ so that $$\lambda(x)=-\dfrac{(x\, . \, \zeta)}{(\theta\, . \, \eta)} \cdot$$ This proves that $u$ can be reconstructed from $\zeta$. For any $\zeta$ in $\theta^{\perp} \cap \eta^{\perp}$, we can define a map $u_{\zeta}$ fixing $\theta$ by the above formulæ, and it is an isometry. This proves that $\varphi$ is a bijection.
To prove that $\varphi$ is a morphism, let $u$ and $u'$ be in $\mathcal{T}_{\theta}$, and put $u''=u' \circ u$. If $\lambda'$ denotes the linear form associated with $u'$, then $$\qquad \zeta''=\pi \{u' (u(\eta))\}=\pi \{u' (\zeta +a \theta + \eta) \}=\pi \{ \zeta + \lambda'(\zeta) \theta + a \theta + \zeta' + a' \theta + \eta\}=\zeta + \zeta'.$$ It remains to prove that $u$ is parabolic if $\zeta \neq 0$. This is easy: if $x=\alpha \theta + \beta \eta + y$ where $y$ is in $\theta^{\perp} \cap \eta^{\perp}$, then $u(x)=\{\alpha+ a \beta + \lambda(y) \}\, \theta + \beta \, \eta + \{\beta \zeta + y\}$. Thus, if $u(x)=x$, we have $\beta=0$ and $\lambda(y)=0$. But in this case, $x^2=y^2 \leq 0$ with equality if and only if $y=0$. It follows that $\R_{+}\theta$ is the only fixed ray in $\mathfrak{N}$, so that $u$ is parabolic. Nonzero elements in $\mathcal{T}_{\theta}$ are called parabolic translations along $\theta$. This definition is justified by the fact that elements in the group $\mathcal{T}_{\theta}$ act by translation in the direction $\theta$ on $\theta^{\perp}$. \[ray\] Let $\theta$, $\eta$ be two isotropic and non-collinear vectors in $\mathfrak{N}$, and $\varphi \colon \mathcal{T}_{\theta} \rightarrow \theta^{\perp} \cap \eta^{\perp}$ and $\psi \colon \mathcal{T}_{\eta} \rightarrow \theta^{\perp} \cap \eta^{\perp}$ the corresponding isomorphisms. Let $u$ and $v$ be respective nonzero elements of $\mathcal{T}_{\theta}$ and $\mathcal{T}_{\eta}$, and assume that there exists an element $x$ in $\mathfrak{N}$ such that $u(x)=v(x)$. Then there exists $t > 0$ such that $\psi(v)=t\, \varphi(u)$. Let us write $x$ as $\alpha \theta + \beta \eta + y$ where $y$ is in $\theta^{\perp} \cap \eta^{\perp}$, and put $\zeta=\varphi(u)$, $\zeta'=\psi(v)$. If $\lambda$ and $\mu$ are the associated linear forms, and $u(\eta)=a\, \theta + \eta + \zeta$, $v(\theta)=a'\, \eta + \theta + \zeta'$, then $$\qquad u(x)=\{\alpha + a \beta + \lambda(y)\} \, \theta + \beta \, \eta + \beta \zeta + y \quad \textrm{and} \quad v(x)= \alpha \, \theta + \{\beta + a' \alpha + \mu(y)\} \, \eta + \alpha \zeta' + y.$$ Therefore, if $u(x)=v(x)$, $$\qquad \{a \beta + \lambda(y)\} \, \theta - \{a' \alpha + \mu(y) \}\, \eta + \{\beta \zeta - \alpha \zeta' \} =0.$$ Hence $\beta \zeta - \alpha \zeta' =0$.
We claim that $x$ doesn’t belong to the two rays ${\ensuremath{\mathbb R}}\theta$ and ${\ensuremath{\mathbb R}}\eta$. Indeed, if $x$ lies in ${\ensuremath{\mathbb R}}\theta$, then $\beta=0$ and $y=0$, so that $\alpha \zeta'=0$ and hence $\alpha=0$; the case of ${\ensuremath{\mathbb R}}\eta$ is similar. In both cases $x=0$, which is impossible. Thus, since $x$ lies in $\mathfrak{N}$, $x \, . \, \theta >0$ and $x \, . \, \eta >0$ so that $\alpha >0$ and $\beta >0$. Hence $\zeta'=\dfrac{\beta}{\alpha}\, \zeta$ and $\dfrac{\beta}{\alpha}>0$. \[wazomba\] Let $\theta$, $\eta$ be two isotropic and non-collinear vectors in $\mathfrak{N}$ and let $u$ and $v$ be respective nonzero elements of $\mathcal{T}_{\theta}$ and $\mathcal{T}_{\eta}$. Then $u^{-1}v$ or $uv$ is hyperbolic. If $u^{-1}v$ is not hyperbolic, then there exists a nonzero vector $x$ in $\mathfrak{N}$ fixed by $u^{-1} v$. Thus, thanks to Proposition \[ray\], there exists $t>0$ such that $\psi(v)=t\, \varphi(u)$. By the same argument, if $uv$ is not hyperbolic, there exists $s>0$ such that $\psi(v)=s\, \varphi(u^{-1})=-s\, \varphi(u)$. This gives a contradiction. A fixed point theorem {#3.3} --------------------- In this section, we fix a lattice $\Lambda$ of rank $n$ in $V$ and assume that the bilinear form on $V$ takes integral values on the lattice $\Lambda$. We denote by $\mathrm{O}_{+}(\Lambda)$ the subgroup of $\mathrm{O}_{+}(V)$ preserving the lattice. We start with a simple characterisation of elliptic isometries preserving $\Lambda$: \[fini\] $ $ 1. An element of $\mathrm{O}_{+}(\Lambda)$ is elliptic if and only if it is of finite order. 2. An element $u$ of $\mathrm{O}_{+}(\Lambda)$ is parabolic if and only if it is quasi-unipotent (which means that there exists an integer $k$ such that $u^k-1$ is a nonzero nilpotent element) and of infinite order. $ $ 1. An element of finite order is obviously elliptic. Conversely, if $u$ is an elliptic element of $\mathrm{O}_{+}(\Lambda)$, there exists a fixed vector $\alpha$ in the interior of $\mathfrak{N}$. Since $\ker\, (u-\mathrm{id})$ is defined over $\Q$, we can find such an $\alpha$ defined over $\Q$.
In that case, the quadratic form is negative definite on $\alpha^{\perp}$, so that $u$ acts with finite order on $\alpha^{\perp} \cap \Lambda$, and we are done. 2. A quasi-unipotent element which is of infinite order is parabolic (since it is not semi-simple). Conversely, if $g$ is a parabolic element in $\mathrm{O}_{+}(\Lambda)$, the characteristic polynomial of $g$ has integral coefficients and all its roots are of modulus one. Therefore all eigenvalues of $g$ are roots of unity thanks to Kronecker’s theorem. One of the most important properties of parabolic isometries preserving $\Lambda$ is the following: \[gauss\] Let $u$ be a parabolic element in $\mathrm{O}_{+}(\Lambda)$. Then: 1. There exists a vector $\theta$ in $\mathfrak{N} \cap \Lambda$ such that $u(\theta)=\theta$. 2. There exists $k>0$ such that $u^k$ belongs to $\mathcal{T}_{\theta}$. $ $ 1. Since $u$ is parabolic, it fixes an isotropic vector $\theta$ in $\mathfrak{N}$. Let $W= \mathrm{ker}\,(u-\mathrm{id})$, and assume that the line ${\ensuremath{\mathbb R}}\theta$ doesn’t meet ${\Lambda}_{\Q}$. Then the quadratic form $q$ is negative definite on $\theta^{\perp} \cap W_{\Q}$. We can decompose $q_{W_{\Q}}$ as $-\sum_i \ell_i^2$ where the $\ell_i$’s are linear forms on $W_{\Q}$. Then $q$ is also negative definite on $W$; but $\theta$ lies in $W$ and $q(\theta)=0$, so we get a contradiction. 2. By the first point, we know that we can choose an isotropic invariant vector $\theta$ in $\Lambda$. Let us consider the free abelian group $\Sigma:=(\theta^{\perp} \cap \Lambda)/ {\ensuremath{\mathbb Z}}\theta$; the induced quadratic form on $\Sigma$ is negative definite. Therefore, since $u$ is an isometry, the action of $u$ is finite on $\Sigma$, so that an iterate of $u$ belongs to $\mathcal{T}_{\theta}$. The definition below is motivated by Remark \[bof\]. A subgroup $G$ of $\mathrm{O}_+ (V)$ is of *moderate growth* if it contains no hyperbolic element. Among groups of moderate growth, the simplest ones are finite subgroups of $\mathrm{O}_+ (V)$. Recall the following well-known fact: \[burnside\] Any torsion subgroup of $\mathrm{GL}(n, \Q)$ is finite.
Let $g$ be an element in $G$, and $\zeta$ be an eigenvalue of $g$. Since $g$ is of finite order, $\zeta$ is a root of unity; if $m$ is the smallest positive integer such that $\zeta^m=1$, then $\mathrm{deg}_{\Q} (\zeta)=\varphi(m) \leq n$, where $\varphi$ is Euler’s totient function. Since $\varphi(k)\underset{k \rightarrow + \infty}{\longrightarrow} + \infty$, there are finitely many possibilities for $m$. Therefore, there exists a constant $c(n)$ such that the order of any $g$ in $G$ divides $c(n)$. This means that $G$ has finite exponent in $\mathrm{GL}(n, \C)$, and the Lemma follows from Burnside’s theorem. As a consequence of Lemmas \[fini\] and \[burnside\], we get: \[burnside\] A subgroup of $\mathrm{O}_+ (\Lambda)$ is finite if and only if all its elements are elliptic. We now concentrate on infinite groups of moderate growth. The main theorem we want to prove is Gizatullin’s fixed point theorem: \[ptfixe\] Let $G$ be an infinite subgroup of moderate growth in $\mathrm{O}_+ (\Lambda)$. Then: 1. There exists an isotropic element $\theta$ in $\mathfrak{N} \cap \Lambda$ such that for any element $g$ in $G$, $g(\theta)=\theta$. 2. The group $G$ can be written as $G=\Z^{r} \rtimes H$ where $H$ is a finite group and $r>0$. $ $ 1. Thanks to Corollary \[burnside\], $G$ contains parabolic elements. Let $g$ be a parabolic element in $G$; by Lemma \[gauss\], there exists an isotropic vector $\theta$ in $\mathfrak{N} \cap \Lambda$ fixed by $g$. Let $\Lambda^*=(\theta^{\perp} \cap \Lambda)/ {\ensuremath{\mathbb Z}}\theta$. Since the induced quadratic form on $\Lambda^*$ is negative definite, $g$ acts with finite order on $\Lambda^*$; hence $g^k$ is in $\mathcal{T}_{\theta}$ for some integer $k$. Let $\tilde{g}$ be another element of $G$, and assume that $\tilde{g}$ doesn’t fix $\theta$. We put $\eta=\tilde{g}(\theta)$. If $u={g}^k$ and $v=\tilde{g} g^k \tilde{g}^{-1}$, then $u$ and $v$ are nonzero elements of $\mathcal{T}_{\theta}$ and $\mathcal{T}_{\eta}$ respectively. Thanks to Corollary \[wazomba\], $G$ contains hyperbolic elements, which is impossible since it is of moderate growth. 2.
Let us consider the natural group morphism $$\varepsilon \colon G \hookrightarrow \mathrm{O}_{+}(\Lambda)_{\theta} \rightarrow \mathrm{O}(\Lambda^*),$$ where $\mathrm{O}_{+}(\Lambda)_{\theta}$ denotes the stabilizer of $\theta$, which contains $G$ by the first point. The lattice $\Lambda^*$ being negative definite, $\mathrm{O}(\Lambda^*)$ is finite; thus the image of $\varepsilon$ is finite and $\ker \, (\varepsilon)$ is a normal subgroup of finite index in $G$. This subgroup is included in $\mathcal{T}_{\theta}$, so it is commutative. Besides, it has no torsion thanks to Proposition \[commutatif\] (1), and it is finitely generated, being an abelian subgroup of $\mathrm{GL}_n(\Z)$. Thus it must be isomorphic to $\Z^r$ for some $r$, and $r>0$ since $G$ is infinite. Background material on surfaces {#4} =============================== The invariant nef class {#4.1} ----------------------- Let us consider a pair $(X, f)$ where $X$ is a smooth complex projective surface and $f$ is an automorphism of $X$ whose action on $\mathrm{NS}(X)_{\R}$ is a parabolic isometry. \[tropmalin\] There exists a unique non-divisible nef vector $\theta$ in $\mathrm{NS}\,(X) \cap \ker \left( f^* - \mathrm{id} \right)$. Besides, $\theta$ satisfies $\theta^2=0$ and ${K}_X. \theta=0$. Let $\mathcal{S}$ be the space of half-lines $\R_{+} \mu$ where $\mu$ runs through nef classes in $\mathrm{NS}\,(X)$. Taking a suitable affine section of the nef cone so that each half-line in $\mathcal{S}$ is given by the intersection with an affine hyperplane, we see that $\mathcal{S}$ is bounded and convex, hence homeomorphic to a closed Euclidean ball in $\R^{n-1}$. By Brouwer’s fixed point theorem, $f^*$ must fix a point in $\mathcal{S}$. This implies that $f^* \theta=\lambda \,\theta$ for some nef vector $\theta$ and some positive real number $\lambda$, which must be equal to one since $f^*$ is parabolic. Since $\theta$ is nef, $\theta^2 \geq 0$. By Lemma \[lemmenef\] (1), $\theta^2=0$ and by Lemma \[lemmenef\] (2), $K_X . \theta=0$. It remains to prove that $\theta$ can be chosen in $\mathrm{NS}\,(X)$. This follows from Lemma \[gauss\] (1). Since ${\ensuremath{\mathbb R}}\theta$ is the unique fixed isotropic ray, $\theta$ is unique up to scaling.
It is completely normalised by requiring it to be non-divisible. \[mg\] Let $G$ be an infinite group of automorphisms of $X$ having moderate growth. Then there exists a $G$-invariant nef class $\theta$ in $\mathrm{NS}\,(X)$. This follows directly from Theorem \[ptfixe\] and Proposition \[tropmalin\]. Constructing elliptic fibrations {#4.2} -------------------------------- In this section, our aim is to translate the question of the existence of $f$-invariant elliptic fibrations in terms of the invariant nef class $\theta$. \[nefnef\] If $(X, f)$ is given, then $X$ admits an invariant elliptic fibration if and only if a multiple $N \theta$ of the $f$-invariant nef class can be lifted to a divisor $D$ in the Picard group $\mathrm{Pic}\,(X)$ such that $\mathrm{dim} \, |D| =1$. Besides, such a fibration is unique. Let us consider a pair $(X, f)$ and assume that $X$ admits a fibration $X \xrightarrow{\pi} {C}$ invariant by $f$ whose general fiber is a smooth elliptic curve, where $C$ is a smooth algebraic curve of genus $g$. Let us denote by $\beta$ the class of a general fiber $X_z=\pi^{-1}(z)$ in $\mathrm{NS}\,(X)$. Then $f^* \beta=\beta$. The class $\beta$ is obviously nef, so that it is a multiple of $\theta$. This implies that the fibration $(\pi, C)$ is unique: if $\pi$ and $\pi'$ are two distinct $f$-invariant elliptic fibrations with fiber classes $\beta$ and $\beta'$, then $\beta . \,\beta' >0$; but $\beta$ and $\beta'$ are both multiples of $\theta$ and $\theta^2=0$, a contradiction. Let $C \xrightarrow{\varphi} \mathbb{P}^1$ be any branched covering (we call $N$ its degree), and let us consider the composition $X\xrightarrow{\varphi \,\circ\, \pi} \mathbb{P}^1$. Let $D$ be a generic fiber of this map. It is a finite union of the fibers of $\pi$, so that the class of $D$ in $\mathrm{NS}\,(X)$ is $N \beta$. Besides, $\mathrm{dim} \, |D| \geq 1$. In fact $\mathrm{dim} \, |D|=1$, otherwise $D^2$ would be positive. This yields the first implication in the proposition.
To prove the converse implication, let $N$ be a positive integer such that $N \theta$ can be lifted to a divisor $D$ with $\mathrm{dim} \, |D|=1$. Let us decompose $D$ as $F+M$, where $F$ is the fixed part (so that $|D|=|M|$). Then $0=D^2=D.F+D.M$ and since $D$ is nef, $D.M=0$. Since $|M|$ has no fixed component, $M^2 \geq 0$ so that the intersection pairing is semi-positive on the vector space generated by $D$ and $M$. It follows that $D$ and $M$ are proportional, so that $M$ is still a lift of a multiple of $\theta$ in $\mathrm{Pic}\,(X)$. Since $M$ has no fixed component and $M^2=0$, $|M|$ is basepoint free. By the Stein factorisation theorem, the generic fiber of the associated Kodaira map $X \rightarrow |M|^*$ is a disjoint union of smooth curves of some genus $g$. The class of each of these curves in the Neron-Severi group is a multiple of $\theta$. Since $\theta^2=\theta . K_X=0$, the genus formula implies $g=1$. To conclude, we take the Stein factorisation of the Kodaira map to get a true elliptic fibration. It remains to prove that this fibration is $f$-invariant. If $\mathcal{C}$ is a fiber of the fibration, then $f(\mathcal{C})$ is numerically equivalent to $\mathcal{C}$ (since $f^* \theta=\theta$), so that $\mathcal{C}. f(\mathcal{C})=0$. Therefore, $f(\mathcal{C})$ is another fiber of the fibration. \[malin\] The unicity of the fibration implies that any $f^N$-invariant elliptic fibration (for a positive integer $N$) is $f$-invariant. In view of the preceding proposition, it is natural to try to produce sections of $D$ by applying the Riemann-Roch theorem. Using Serre duality, we have $$\label{RR} \mathrm{h}^0(D)+\mathrm{h}^0(K_X-D) \geq \chi(\mathcal{O}_X)+\frac{1}{2} D.(D-K_X)=\chi({\mathcal{O}_X}).$$ In the next section, we will use this inequality to solve the case where the minimal model of $X$ is a $K3$-surface. \[reduction\] If Theorem \[Main\] holds for $G=\mathbb{Z}$, then it holds in the general case.
Let $G$ be an infinite subgroup of $\mathrm{Aut}\,(X)$ of moderate growth, let $f$ be a parabolic element of $G$, and assume that there exists an $f$-invariant elliptic fibration $\mathcal{C}$ on $X$. If $\theta$ is the invariant nef class of $X$, then $G$ fixes $\theta$ by Proposition \[mg\]. By the unicity statement in Proposition \[nefnef\], this proves that $\mathcal{C}$ is $G$-invariant. Kodaira’s classification {#4.3} ------------------------ Let us take $(X, f)$ as before. The first natural step to classify $(X, f)$ would be to determine the minimal model of $X$. It turns out that we can rule out some cases without difficulties. Let $\kappa(X)$ be the Kodaira dimension of $X$. – If $\kappa(X)=2$, then $X$ is of general type so its automorphism group is finite. Therefore this case doesn’t occur in our study. – If $\kappa(X)=1$, we can completely understand the situation by looking at the Iitaka fibration $X \dasharrow |mK_X|^*$ for $m \gg 0$, which is $\mathrm{Aut}\,(X)$-invariant. Let $F$ be the fixed part of $|mK_X|$ and $D=mK_X-F$. The linear system $|D|$ is a base point free pencil, whose generic fiber is a finite union of elliptic curves. If $X$ is minimal, we refer the reader to [@GH pp. 574-575]. If $X$ is not minimal, let $Z$ be its minimal model and $X \xrightarrow{\pi} Z$ the projection. Then $K_X=\pi^*K_Z+E$, where $E$ is a divisor contracted by $\pi$, so that $|mK_X|=|mK_Z|=|D|$. We can now consider the Stein factorisation $X \rightarrow Y \rightarrow |D|^*$ of the map defined by the pencil $|D|$. In this way, we get an $\mathrm{Aut} (X)$-invariant elliptic fibration $X \rightarrow Y$. – If $\kappa(X)=0$, the minimal model of $X$ is either a $K3$ surface, an Enriques surface, an abelian surface, or a bielliptic surface. We start by noticing that we can argue directly in this case on the minimal model: If $\kappa(X)\!=\!0$, every automorphism of $X$ is induced by an automorphism of its minimal model. Let $Z$ be the minimal model of $X$ and $\pi$ be the associated projection.
By classification of minimal surfaces of Kodaira dimension zero, there exists a positive integer $m$ such that $mK_Z$ is trivial. Therefore, $mK_X$ is an effective divisor $\mathcal{E}$ whose support is exactly the exceptional locus of $\pi$, and $|mK_X|=\{\mathcal{E}\}$. It follows that $\mathcal{E}$ is invariant by $f$, so that $f$ descends to $Z$. $*$ Let us deal with the K3 surface case. We pick any lift $D$ of $\theta$ in $\mathrm{Pic}\,(X)$. Since $\chi(\oo_X)=2$, we get by (\[RR\]) $$\mathrm{h}^0(D)+\mathrm{h}^0(-D) \geq 2.$$ Since $D$ is nef, $-D$ cannot be effective, so that $\mathrm{h}^0(-D)=0$. We conclude using Proposition \[nefnef\]. $*$ This argument doesn’t work directly for Enriques surfaces, but we can reduce to the K3 case by arguing as follows: if $X$ is an Enriques surface, its universal cover $\widetilde{X}$ is a K3 surface, and $f$ lifts to an automorphism $\tilde{f}$ of $\widetilde{X}$. Besides, $\tilde{f}$ is still parabolic. Therefore, we get an $\tilde{f}$-invariant elliptic fibration $\pi$ on $\widetilde{X}$. If $\sigma$ is the involution on $\widetilde{X}$ such that $X=\widetilde{X}/\sigma$, then $\tilde{f}=\sigma \circ \tilde{f} \circ \sigma^{-1}$, so that $\pi \circ \sigma$ is again an $\tilde{f}$-invariant elliptic fibration; by the unicity of the invariant fibration, $\pi \circ \sigma=\pi$. Thus, $\pi$ descends to $X$. $*$ The case of abelian surfaces is straightforward: an automorphism of the abelian surface $\C^2/ \Lambda$ is given, up to translation, by a matrix $M$ in $\mathrm{GL}(2, \C)$ preserving $\Lambda$. Up to replacing $M$ by an iterate, we can assume that this matrix is unipotent. If $M= \mathrm{id} + N$, then the image of $N \colon \Lambda \rightarrow \Lambda$ is a sub-lattice $\Lambda^*$ of $\Lambda$ spanning a complex line $L$ in $\mathbb{C}^2$. Then the elliptic fibration $\mathbb{C}^2/ \Lambda \xrightarrow{N} L / \Lambda^*$ is invariant by $M$. $*$ It remains to deal with the case of bielliptic surfaces. But this is easy because they are already endowed with an elliptic fibration invariant by the whole automorphism group.
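The abelian-surface step above can be made concrete. The toy sketch below uses entirely hypothetical choices (the lattice $\Z[i]^2 \subset \C^2$ and a specific unipotent matrix $M$, none of which come from the text): it checks that $N=M-\mathrm{id}$ squares to zero, maps the lattice into the line $L=\C \times \{0\}$, and that the fibration $z \mapsto Nz$ is $M$-invariant because $N \circ M = N + N^2 = N$.

```python
import numpy as np

# Toy model: Lambda = Z[i]^2 inside C^2, and a unipotent matrix M preserving it.
M = np.array([[1, 1],
              [0, 1]], dtype=complex)
N = M - np.eye(2)                 # M = id + N
assert np.allclose(N @ N, 0)      # N^2 = 0, so M is unipotent

# The image of N is the complex line L = C x {0}; N maps Z[i]^2 into Z[i] x {0}.
basis = [np.array([1, 0]), np.array([1j, 0]),
         np.array([0, 1]), np.array([0, 1j])]   # Z-basis of Z[i]^2
for v in basis:
    assert abs((N @ v)[1]) == 0   # images lie in L

# Fibers of z -> N z are preserved by M, since N(Mz) = (N + N^2) z = N z.
rng = np.random.default_rng(0)
z = rng.normal(size=2) + 1j * rng.normal(size=2)
assert np.allclose(N @ (M @ z), N @ z)
print("the fibration z -> N z is M-invariant")
```

Here $N(\Lambda)$ is a rank-two sublattice of $L$, so the quotient $L/N(\Lambda)$ is an elliptic curve, exactly as in the argument above.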
– If $\kappa(X)=-\infty$, then either $X$ is a rational surface, or the minimal model of $X$ is a ruled surface over a curve of genus $g \geq 1$. The rational surface case is rather difficult, and corresponds to Gizatullin’s result; we leave it aside for the moment. For blowups of ruled surfaces, we remark that the automorphism group must preserve the ruling. Indeed, for any fiber $\mathcal{C}$, the projection of $f(\mathcal{C})$ on the base of the ruling must be constant since $\mathcal{C}$ has genus zero. Therefore, an iterate of $f$ descends to an automorphism of the minimal model $Z$. We know that $Z$ can be written as $\mathbb{P}(E)$ where $E$ is a holomorphic rank $2$ bundle on the base of the ruling. By the Leray-Hirsch theorem, $\mathrm{H}^{1,1}(Z)$ is the plane generated by the first Chern class $\mathrm{c}_1(\mathcal{O}_E(1))$ of the relative tautological bundle and the pull-back of the fundamental class of the base. Thus, an iterate of $f^*$ acts by the identity on $\mathrm{H}^{1,1}(Z)$, hence on $\mathrm{H}^{1,1}(X)$; this contradicts the parabolicity of $f$, so this case doesn’t occur. The rational surface case {#5} ========================= Statement of the result {#5.1} ----------------------- From now on, $X$ will always be a rational surface, so that $\mathrm{h}^1(X, \mathcal{O}_X)=\mathrm{h}^2(X, \mathcal{O}_X)=0$. It follows that $\mathrm{Pic}\,(X) \simeq \mathrm{NS}\,(X) \simeq \mathrm{H}^2(X, \Z)$, which implies that numerical and linear equivalence agree. In this section, we prove the following result: \[Gizzz\] Let $X$ be a rational surface and $f$ be a parabolic automorphism of $X$. If $\theta$ is the nef $f$-invariant class in $\mathrm{NS}\,(X)$, then there exists an integer $N$ such that $\mathrm{dim}\, |N \theta|=1$. Thanks to Proposition \[mg\] and Corollary \[reduction\], this theorem is equivalent to Theorem \[Main\] for rational surfaces and is the most difficult result in Gizatullin’s paper.
Properties of the invariant curve {#5.2} --------------------------------- The divisor ${K}_X-\theta$ is never effective. Indeed, if $H$ is an ample divisor, $K_X. H <0$ so that $(K_X-\theta).H <0$. Therefore, we obtain by (\[RR\]) that $|\theta| \neq \varnothing$, so that $\theta$ can be represented by a possibly non-reduced and reducible curve $C$. We will write the curve $C$ as the divisor $\sum_{i=1}^d a_i \,C_i$ where the $C_i$ are irreducible. Since $\theta$ is non-divisible in $\mathrm{NS}\,(X)$, $C$ is primitive. In the sequel, we will make the following assumptions, and we will be looking for a contradiction: **Assumptions** - We have $|N \theta|=\{NC\}$ for all positive integers $N$. - For any positive integer $k$, the pair $(X, f^k)$ is minimal. Let us say a few words about $(2)$. If for some integer $k$ the map $f^k$ descends to an automorphism $g$ of a blow-down $Y$ of $X$, then we can still argue with $(Y, g)$. The corresponding invariant nef class will satisfy $(1)$. Thanks to Remark \[malin\], we don’t lose anything concerning the fibration when replacing $f$ by an iterate. We now study thoroughly the geometry of $C$. Let us start with a simple lemma. \[ttsimple\] If $D_1$ and $D_2$ are two effective divisors whose classes are proportional to $\theta$, then $D_1$ and $D_2$ are proportional (as divisors). There exist integers $N$, $N_1$, and $N_2$ such that $N_1 D_1 \equiv N_2 D_2 \equiv N \theta$. Therefore, $N_1 D_1$ and $N_2 D_2$ belong to $|N \theta|$ so they are equal. The following lemma proves that $C$ looks like a fiber of a minimal elliptic surface. \[base\] $ $ 1. For $1 \leq i \leq d$, $K_X. C_i=0$ and $C.C_i=0$. If $d \geq 2$, $C_i^2<0$. 2. The classes of the components $C_i$ in $\mathrm{NS}\,(X)$ are linearly independent. 3. The intersection form is nonpositive on the $\Z$-module spanned by the $C_i$’s. 4. If $D$ is a divisor supported in $C$ such that $D^2=0$, then $D$ is a multiple of $C$. $ $ 1.
Up to replacing $f$ by an iterate, we can assume that all the components $C_i$ of the curve $C$ are fixed by $f$. By Lemma \[lemmenef\], $C_i^2\leq 0$ and $C.K_X=C.C_i=0$ for all $i$. Assume that $d \geq 2$. If $C_i^2=0$, then $C$ and $C_i$ are proportional, which would imply that $C$ is divisible in $\mathrm{NS}\,(X)$. Therefore $C_i^2<0$. If $K_X.C_i<0$, then $C_i$ is a smooth and $f$-invariant exceptional rational curve. This contradicts Assumption (2). Thus $K_X.C_i \geq 0$. Since $K_X.C=0$, it follows that $K_X.C_i=0$ for all $i$. 2. If there is a linear relation among the curves $C_i$, we can write it as $D_1 \equiv D_2$, where $D_1$ and $D_2$ are linear combinations of the $C_i$ with positive coefficients (hence effective divisors) having no component in common. We have $D_1^2=D_1.D_2 \geq 0$. On the other hand $C. D_1=0$ and $C^2=0$, so by the Hodge index theorem $C$ and $D_1$ are proportional. This contradicts Lemma \[ttsimple\]. 3. Any divisor $D$ in the span of the $C_i$’s is $f$-invariant, so that Lemma \[lemmenef\] (1) yields $D^2 \leq 0$. 4. If $D^2=D.C=0$, then $D$ and $C$ are numerically proportional. Therefore, there exist two integers $a$ and $b$ such that $aD-bC \equiv 0$. By Lemma \[ttsimple\], $aD=bC$ and since $C$ is primitive, $D$ is a multiple of $C$. \[genre\] $ $ 1. The curve $C$ is $1$-connected (see [@BPVDV p. 69]). 2. We have $\mathrm{h}^0(C, \oo_C)=\mathrm{h}^1(C, \oo_C)=1$. 3. If $d=1$, then $C_1$ has arithmetic genus one. If $d \geq2$, all the curves $C_i$ are rational curves of self-intersection $-2$. $ $ 1. Let us write $C=D_1+D_2$ where $D_1$ and $D_2$ are effective divisors supported in $C$, with possible components in common. By Lemma \[base\] (3), $D_1^2\leq 0$ and $D_2^2\leq 0$. Since $C^2=0$, we must have $D_1.D_2 \geq 0$. If $D_1.D_2=0$, then $D_1^2=D_2^2=0$, so that by Lemma \[base\] (4), $D_1$ and $D_2$ are multiples of $C$, which is impossible. 2. By $(1)$, $\mathrm{h}^0(C, \oo_C)=1$.
The dualizing sheaf $\omega_C$ of $C$ is the restriction of the line bundle $K_X+C$ to the divisor $C$. Therefore, for any integer $i$ between $1$ and $d$, $\mathrm{deg}\, ({\omega_C})_{| C_i}=(K_X+C).C_i=0$ by Lemma \[base\] (1). Therefore, $\mathrm{h}^0(C, \omega_C)\leq 1$ with equality if and only if $\omega_C$ is trivial. We can now apply the Riemann-Roch theorem for singular embedded curves [@BPVDV Theorem 3.1]: since $\omega_C$ has total degree zero, we have $\chi(\omega_C)=\chi(\oo_C)$. But using Serre duality [@BPVDV Theorem 6.1], $\chi(\omega_C)=-\chi(\oo_C)$ so that $\chi(\oo_C)=\chi(\omega_C)=0$. It follows that $\mathrm{h}^1(C, \oo_C)=1$. 3. This follows from the genus formula: $2p_a(C_i)-2=C_i^2+K_X.C_i=C_i^2<0$ so that $p_a(C_i)=0$ and $C_i^2=-2$. Now the geometric genus never exceeds the arithmetic genus, so that the geometric genus of $C_i$ is $0$, which means that $C_i$ is rational. We can now prove a result which will be crucial in the sequel: \[crucial\] Let $D$ be a divisor on $X$ such that $D. C=0$. Then there exist a positive integer $N$ and a divisor $S$ supported in $C$ such that for all $i$, $(ND-S).\, C_i=0$. Let $V$ be the $\Q$-vector space spanned by the $C_i$’s in $\mathrm{NS}_{\Q}(X)$; by Lemma \[base\] (2), it has dimension $d$. We have a natural morphism $\lambda \colon V \rightarrow \Q^d$ defined by $\lambda(x)=(x.C_1, \ldots, x.C_d)$. The kernel of this morphism consists of the vectors in $V$ orthogonal to all the $C_i$’s. Such a vector is obviously isotropic, and by Lemma \[base\] (4), it is a rational multiple of $C$. Therefore the image of $\lambda$ is a hyperplane in $\Q^d$, which is the hyperplane $\sum_i a_i x_i=0$. Indeed, for any element $x$ in $V$, we have $\sum_i a_i \, (x.C_i)=x.C=0$. Let us consider the element $w=(D.C_1, \ldots, D.C_d)$ in $\Q^d$. Since $\sum_i a_i \, (D.C_i)=D.C=0$, we have $w=\lambda(T)$ for a certain $T$ in $V$. Writing $T=\frac{1}{N}\, S$ where $N$ is a positive integer and $S$ is a divisor with integral coefficients supported in $C$, we get $(ND-S).\,C_i=0$ for all $i$. This gives the result.
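Proposition \[crucial\] amounts to solving a linear system in the intersection matrix of the components. The sketch below works out a hypothetical configuration chosen only for illustration (two $(-2)$-curves with $C_1.C_2=2$ and multiplicities $a_1=a_2=1$, so that $C=C_1+C_2$ satisfies $C^2=0$): given the intersection numbers $w_i=D.C_i$ of a divisor $D$ with $D.C=0$, it produces $N$ and $S$ with $(ND-S).C_i=0$.

```python
import numpy as np

# Hypothetical fiber: C = C_1 + C_2, two (-2)-curves meeting in two points,
# with multiplicities a = (1, 1); then C.C_i = 0 for all i and C^2 = 0.
inter = np.array([[-2.0, 2.0],
                  [2.0, -2.0]])   # intersection matrix (C_i . C_j)
a = np.array([1.0, 1.0])
assert np.allclose(inter @ a, 0)  # C.C_i = 0 for all i

# A divisor D with D.C = 0 is encoded by w = (D.C_1, D.C_2).
w = np.array([1.0, -1.0])
assert np.isclose(a @ w, 0.0)     # D.C = sum_i a_i (D.C_i) = 0

# Solve inter @ s = N * w: then S = s_1 C_1 + s_2 C_2 satisfies (ND - S).C_i = 0.
N = 4
s, *_ = np.linalg.lstsq(inter, N * w, rcond=None)
assert np.allclose(inter @ s, N * w)
print(s)  # one solution; others differ by multiples of C (the kernel of inter)
```

The constraint $a \cdot w = 0$ is exactly the hyperplane condition in the proof: $w$ lies in the image of $\lambda$, so the system is solvable, and the solution is unique up to adding multiples of $C$.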
The trace morphism {#5.3} ------------------ In this section, we introduce the main object in Gizatullin’s proof: the *trace morphism*. For this, we must use the Picard group of the embedded curve $C$. It is the moduli space of line bundles on the complex analytic space $(C, \oo_C)$, namely $\mathrm{H}^1(C, \oo_C^{\,\times})$. Recall [@BPVDV Proposition 2.1] that $\mathrm{H}^1(C, \Z_C)$ embeds as a discrete subgroup of $\mathrm{H}^1(C, \oo_C)$. The connected component of the line bundle $\oo_C$ is denoted by $\mathrm{Pic}^0 (C)$; it is the abelian complex Lie group $\mathrm{H}^1(C, \oo_C)/ \mathrm{H}^1(C, \Z_C)$. We have an exact sequence $$0 \rightarrow \mathrm {Pic}^0(C) \rightarrow \mathrm{Pic}\,(C) \xrightarrow{\mathrm{c}_1} \mathrm{H}^2(C, \Z)$$ and $\mathrm{H}^2 (C, \mathbb{Z})\simeq \Z^d$. Therefore, connected components of $\mathrm{Pic}\,(C)$ are indexed by sequences $(n_1, \ldots, n_d)$ corresponding to the degree of the line bundle on each irreducible component of $C$. By Lemma \[genre\] (2), $\mathrm{Pic}^0(C)$ can be either $\C$, $\C^{\times}$, or an elliptic curve. The trace morphism is a group morphism $\mathfrak{tr} \colon \mathrm{Pic}\,(X) \rightarrow \mathrm{Pic}\, (C)$ defined by $\mathfrak{tr}\, (\mathcal{L})=\mathcal{L}_{| C}$. Remark that $C.C_i=0$ for any $i$, so that the line bundle $\oo_X\left(C\right)$ restricts to a line bundle of degree zero on each component $C_i$. $ $ \[torsion\] 1. The line bundle $\mathfrak{tr}\left(\oo_X (C ) \right)$ is not a torsion point in $\mathrm{Pic}^0(C)$. 2. The intersection form is negative definite on $\mathrm{ker}\, (\mathfrak{tr})$. $ $ 1. Let $N$ be an integer such that $N\mathfrak{tr}\left(\oo_X (C ) \right)=0$ in $\mathrm{Pic}\,(C)$.
Then we have a short exact sequence $$0 \rightarrow \oo_X((N-1)C) \rightarrow \oo_X (NC) \rightarrow \oo_C \rightarrow 0.$$ Now $\mathrm{h}^2(X, \oo_X((N-1)C))=\mathrm{h}^0(X, \oo_X(K_X-(N-1)C))=0$, so that the map $$\mathrm{H}^1(X, \oo_X(NC)) \rightarrow \mathrm{H}^1(C, \oo_C)$$ is onto. It follows from Lemma \[genre\] (2) that $\mathrm{h}^1(X, \oo_X(NC)) \geq 1$ so that by Riemann-Roch $$\mathrm{h}^0(X, \oo_X(NC)) \geq \mathrm{h}^1(X, \oo_X(NC)) + \chi(\oo_X) \geq 2.$$ This yields a contradiction since we have assumed that $|N\theta|=\{NC\}$. 2. Let $D$ be a divisor in the kernel of $\mathfrak{tr}$. By the Hodge index theorem $D^2 \leq 0$. Besides, if $D^2=0$, then $D$ and $C$ are proportional. In that case, a multiple of $C$ would be in $\ker \,(\mathfrak{tr})$, hence $\mathfrak{tr}\, (\oo_X(C))$ would be a torsion point in $\mathrm{Pic}\,(C)$. Proof of Gizatullin’s theorem {#6} ============================= The general strategy {#6.1} -------------------- The strategy of the proof is simple in spirit. Let $\mathfrak{P}$ be the image of $\mathfrak{tr}$ in $\mathrm{Pic}\, (C)$, so that we have an exact sequence of abelian groups $$0 \rightarrow \ker \, (\mathfrak{tr}) \rightarrow \mathrm{Pic}\, (X) \rightarrow \mathfrak{P} \rightarrow 0.$$ By Proposition \[torsion\], the intersection form is negative definite on $\ker \,(\mathfrak{tr})$, so that $f^*$ is of finite order on $\ker \,(\mathfrak{tr})$. In the first step of the proof, we will prove that for any divisor $D$ on $X$ orthogonal to $C$, some iterate of $f^*$ acts trivially on the connected component of $\mathfrak{tr} (D)$ in $\mathrm{Pic}\,(C)$. In the second step, we will prove that the action of $f^*$ on $\mathrm{Pic} (X)$ is finite. This will give the desired contradiction.
Action on the connected components of $\mathfrak{P}$ {#6.2} ---------------------------------------------------- In this section, we prove that $f^*$ acts finitely on “many” connected components of $\mathfrak{P}.$ More precisely: \[elliptic\] Let $D$ be in $\mathrm{Pic}\, (X)$ such that $D.C=0$, and let $\mathfrak{X}_D$ be the connected component of $\mathfrak{tr} (D)$ in $\mathrm{Pic}\, (C)$. Then the restriction of $f^*$ to $\mathfrak{X}_D$ is of finite order. We start with the case $D=0$, so that $\mathfrak{X}_D=\mathrm{Pic}^0(C)$. Then three situations can happen: – If $\mathrm{Pic}^0 (C)$ is an elliptic curve, then its automorphism group is finite (by automorphisms, we mean group automorphisms). – If $\mathrm{Pic}^0 (C)$ is isomorphic to $\C^{\times}$, its automorphism group is $\{\mathrm{id}, z \rightarrow z^{-1}\}$, hence of order two, and we are also done in this case. – Lastly, if $\mathrm{Pic}^0 (C)$ is isomorphic to $\C$, its automorphism group is $\C^{\times}$. We know that $\mathfrak{tr}\,(\oo_X(C))$ is a non-zero element of $\mathrm{Pic}^0(C)$ preserved by the action of $f^*$. This forces $f^*$ to act trivially on $\mathrm{Pic}^0 (C)$. Let $D$ be a divisor on $X$ such that $D.C=0$. By Proposition \[crucial\], there exists a positive integer $N$ and a divisor $S$ supported in $C$ such that $N \mathfrak{tr}\,(D)- \mathfrak{tr}\,(S) \in \mathrm{Pic}^0(C)$. Let $m$ be an integer such that $f^m$ fixes the components of $C$ and acts trivially on $\mathrm{Pic}^0(C)$. We define a map $\lambda \colon {\ensuremath{\mathbb Z}}\rightarrow \mathrm{Pic}^0(C)$ by the formula $$\lambda(k)=(f^{km})^* \{\mathfrak{tr} (D)\}-\mathfrak{tr} (D)$$ 1. **Claim 1**: $\lambda$ does not depend on $D$. Indeed, if $D'$ is another divisor with $\mathfrak{tr}\,(D')$ in $\mathfrak{X}_D$, then $\mathfrak{tr}\,(D'-D) \in \mathrm{Pic}^0(C)$ so that $$(f^{km})^*\, \mathfrak{tr}\,(D'-D)=\mathfrak{tr}\,(D'-D).$$ This gives $(f^{km})^* \{\mathfrak{tr} (D')\}-\mathfrak{tr} (D')=(f^{km})^* \{\mathfrak{tr} (D)\}-\mathfrak{tr} (D)$. 2. **Claim 2**: $\lambda$ is a group morphism.
$$\begin{aligned} {3} \lambda(k+l) & = (f^{km})^*(f^{lm})^* \{\mathfrak{tr} (D)\}-\mathfrak{tr} (D) \\ & = \begin{aligned}[t] (f^{km})^*\left\{(f^{lm})^* \{\mathfrak{tr} (D)\} \right\} & -\left\{(f^{lm})^* \{\mathfrak{tr} (D)\} \right\} \\ & + (f^{lm})^* \{\mathfrak{tr} (D)\}-\mathfrak{tr} (D) \end{aligned} \\ &= \lambda(k) + \lambda(l) \quad \textrm{by Claim 1}.\end{aligned}$$ 3. **Claim 3**: $\lambda$ has finite image. For any integer $k$, since $N \,\mathfrak{tr}\,(D)- \mathfrak{tr}\,(S) \in \mathrm{Pic}^0(C)$ and $(f^{km})^*$ fixes both $\mathrm{Pic}^0(C)$ and $\mathfrak{tr}\,(S)$, we get $(f^{km})^* \{N\, \mathfrak{tr} (D) \}= N \,\mathfrak{tr} (D)$. Therefore, we see that $(f^{km})^* \{ \mathfrak{tr} (D) \} - \mathfrak{tr} (D)=\lambda(k)$ is an $N$-torsion point in $\mathrm{Pic}^0(C)$. Since there are finitely many $N$-torsion points, we get the claim. We can now conclude. By Claims 2 and 3, there exists an integer $s$ such that the restriction of $\lambda$ to $s \Z$ is trivial. This implies that $\mathfrak{tr}\,(D)$ is fixed by $(f^{ms})^*$. By Claim 1, all elements in $\mathfrak{X}_D$ are also fixed by $(f^{ms})^*$. Lift of the action from $\mathfrak{P}$ to the Picard group of $X$ {#6.3} ----------------------------------------------------------------- By Proposition \[torsion\] (2) and Proposition \[elliptic\], up to replacing $f$ with an iterate, we can assume that $f$ acts trivially on all components $\mathfrak{X}_D$, on $\mathrm{ker}\, (\mathfrak{tr})$, and fixes the components of $C$. Let $r$ be the rank of $\mathrm{Pic}\, (X)$, and fix a basis $E_1, \ldots, E_r$ of $\mathrm{Pic}\, (X)$ composed of irreducible reduced curves. Let $n_i=E_i.C$. If $n_i=0$, then either $E_i$ is a component of $C$, or $E_i$ is disjoint from $C$. In the first case $E_i$ is fixed by $f$. In the second case, $E_i$ lies in the kernel of $\mathfrak{tr}$, so that it is also fixed by $f$. Up to re-ordering the $E_i$’s, we can assume that $n_i>0$ for $1 \leq i \leq s$ and $n_i=0$ for $i>s$. We put $m=n_1 \cdots n_s$, $m_i=\frac{m}{n_i}$ and $L_i=m_iE_i$.
\[chic\] For $1 \leq i \leq s$, $L_i$ is fixed by an iterate of $f$. For $1 \leq i \leq s$, we have $L_i.C=m$, so that for $1\leq i, j \leq s$, $(L_i-L_j).C=0$. Therefore, by Proposition \[elliptic\], an iterate of $f$ acts trivially on $\mathfrak{X}_{L_i-L_j}$. Since there are finitely many pairs $(i,j)$, we can assume (after replacing $f$ by an iterate) that $f$ acts trivially on all $\mathfrak{X}_{L_i-L_j}$. Let us now prove that $f^* L_i$ and $L_i$ are equal in $\mathrm{Pic}\,(X)$. Since $f^*$ acts trivially on the component $\mathfrak{X}_{L_i-L_j}$, we have $\mathfrak{tr}\,(f^*L_i-L_i)=\mathfrak{tr}\,(f^*L_j-L_j)$. Let $D=f^*L_1-L_1$. Then for any $i$, we can write $f^*L_i-L_i=D+D_i$ where $\mathfrak{tr}\,(D_i)=0$. Let us prove that the class $D_i$ in $\mathrm{Pic}\, (X)$ is independent of $i$. For any element $A$ in $\mathrm{ker}\, (\mathfrak{tr})$, we have $$D_i. \,A=(f^*L_i-L_i-D). \,A=f^*L_i . f^*A-L_i . \,A-D. \,A=-D. \,A$$ since $f^*A=A$. Now since the intersection form is non-degenerate on $\mathrm{ker}\,(\mathfrak{tr})$, if $(A_k)_k$ is an orthonormal basis of $\mathrm{ker}\,(\mathfrak{tr})$, $$D_i=-\sum_k (D_i . \, A_k) \,A_k=\sum_k (D . \, A_k) \, A_k.$$ Therefore, all divisors $D_i$ are linearly equivalent. Since $D_1=0$, we are done. We can end the proof of Gizatullin’s theorem. Since $L_1, \ldots, L_s, E_{s+1}, \ldots, E_{r}$ span $\mathrm{Pic}\, (X)$ over $\Q$, we see that the action of $f$ on $\mathrm{Pic}\,(X)$ is finite. This gives the required contradiction. Minimal rational elliptic surfaces {#7} ================================== Throughout this section, we will assume that $X$ is a rational elliptic surface whose fibers contain no exceptional curves; such a surface will be called, by a slight abuse of terminology, a minimal elliptic rational surface. Classification theory {#7.1} --------------------- The material recalled in this section is more or less standard; we refer to [@DS Chap. II §10.4] for more details.
\[hehehe\] Let $X$ be a rational surface with $K_X^2=0$. Then $|-K_X| \neq \varnothing$. Besides, for any divisor $\mathfrak{D}$ in $|-K_X|$ *:* 1. $\mathrm{h}^1(\mathfrak{D}, \oo_{\mathfrak{D}})=1$. 2. For any divisor $D$ such that $0< D < \mathfrak{D}$, $\mathrm{h}^1({D}, \oo_{D})=0$. 3. $\mathfrak{D}$ is connected and its class is non-divisible in $\mathrm{NS}\,(X)$. $ $ The fact that $|-K_X| \neq \varnothing$ follows directly from the Riemann-Roch theorem. 1. We write the exact sequence of sheaves $$0 \rightarrow \oo_X(-\mathfrak{D}) \rightarrow \oo_X \rightarrow \oo_{\mathfrak{D}} \rightarrow 0.$$ Since $X$ is rational, $\mathrm{h}^1(X, \oo_X)=\mathrm{h}^2(X, \oo_X)=0$; and since $\mathfrak{D}$ is an anticanonical divisor, we have by Serre duality $$\mathrm{h}^2(X, -\mathfrak{D})=\mathrm{h}^0(X, K_X+\mathfrak{D})=1.$$ 2. We use the same proof as in (1) with $D$ instead of $\mathfrak{D}$. We have $$\mathrm{h}^2(X, -{D})=\mathrm{h}^0(X, K_X+D)=\mathrm{h}^0(X, D-\mathfrak{D})=0.$$ 3. The connectedness follows directly from $(1)$ and $(2)$: if $\mathfrak{D}$ is the disjoint union of two divisors $\mathfrak{D}_1$ and $\mathfrak{D}_2$, then $\mathrm{h}^0(\mathfrak{D}, \oo_\mathfrak{D})=\mathrm{h}^0(\mathfrak{D}_1, \oo_{\mathfrak{D}_1})+\mathrm{h}^0(\mathfrak{D}_2, \oo_{\mathfrak{D}_2})=0$, a contradiction. Assume now that $\mathfrak{D}=m \mathfrak{D}'$ in $\mathrm{NS}\,(X)$, where $\mathfrak{D}'$ is not necessarily effective and $m \geq 2$. Then, using Riemann-Roch, $$\mathrm{h}^0(X, {\mathfrak{D}'})+\mathrm{h}^0(X, -(m+1) \mathfrak{D}')\geq 1.$$ If $-(m+1) \mathfrak{D}'$ is effective, then $|NK_X|\neq \varnothing$ for some positive integer $N$, which is impossible. Therefore the divisor $\mathfrak{D}'$ is effective; and $\mathfrak{D}-\mathfrak{D'}=(m-1) \mathfrak{D'}$ is also effective.
Using Riemann-Roch one more time, $$\begin{aligned} \mathrm{h}^0(\mathfrak{D}', \oo_{\mathfrak{D}'})-\mathrm{h}^1(\mathfrak{D}', \oo_{\mathfrak{D}'})=\chi(\oo_{\mathfrak{D}'})&=\chi(\oo_X)-\chi(\oo_X(-\mathfrak{D}'))\\ &=-\frac{1}{2} \mathfrak{D}' .(\mathfrak{D}'+K_X)=0. \end{aligned}$$ Thanks to $(2)$, since $0 < \mathfrak{D'} < \mathfrak{D}$, $\mathrm{h}^1(\mathfrak{D}', \oo_{\mathfrak{D}'})=0$, so that $\mathrm{h}^0(\mathfrak{D}', \oo_{\mathfrak{D}'})=0$. This gives again a contradiction. \[dix\] Let $X$ be a rational minimal elliptic surface and $C$ be a smooth fiber. 1. $K_X^2=0$ and $\mathrm{rk} \, \{\mathrm{Pic}\,(X)\}=10.$ 2. For any irreducible component $E$ of a reducible fiber, $E^2 <0$ and $E.K_X=0.$ 3. There exists a positive integer $m$ such that $-mK_X=C$ in $\mathrm{Pic}\,(X)$. Let $C$ be any fiber of the elliptic fibration. Then for any reducible fiber $D=\sum_{i=1}^s a_i D_i$, $D_i . C=C^2=0$. By the Hodge index theorem, $D_i^2 \leq 0$. If $D_i^2=0$, then $D_i$ is proportional to $C$. Let us write $D=a_i D_i+(D-a_i D_i)$. On the one hand, $a_i D_i .(D-a_i D_i)=0$ since $D_i$ and $D-a_i D_i$ are proportional to $C$. On the other hand, $a_i D_i .(D-a_i D_i)>0$ since $D$ is connected. This proves the first part of (2). We have $K_X.C=C.C=0$. By the Hodge index theorem, $K_X^2 \leq 0$. We have an exact sequence $$0 \rightarrow K_X \rightarrow K_X+C \rightarrow \omega_C \rightarrow 0.$$ Since $\mathrm{h}^0(C, \omega_C)=1$ and $\mathrm{h}^0(X, K_X)=\mathrm{h}^1(X, K_X)=\mathrm{h}^1(X, \oo_X)=0$, $\mathrm{h}^0(X, K_X+C)=1$. Thus, the divisor $D=K_X+C$ is effective. Since $D.C=0$, all components of $D$ are irreducible components of the fibers of the fibration. The smooth components cannot appear, otherwise $K_X$ would be effective. Therefore, if $D=\sum_{i=1}^{s} a_i D_i$, we have $D_i^2<0$. Since $X$ is minimal, $K_X.D_i \geq 0$ (otherwise $D_i$ would be exceptional). Thus, $K_X.D \geq 0$.
Since $C$ is nef, we have $D^2=(K_X+C).D \geq K_X.D\geq0$. On the other hand, $D.C=0$ so that $D^2=0$ by the Hodge index theorem. Thus $K_X^2=0$. Since $X$ is rational, it follows that $\mathrm{Pic}\, (X)$ has rank $10$. This gives (1). Now $K_X^2=C^2=C.K_X=0$ so that $C$ and $K_X$ are proportional. By Lemma \[hehehe\], $K_X$ is not divisible in $\mathrm{NS}\,(X)$, so that $C$ is a multiple of $K_X$. Since $|dK_X|=\varnothing$ for all positive $d$, $C$ is a negative multiple of $K_X$. This gives (3). The last point of (2) is now easy: $E.K_X=-\frac{1}{m} E.C=0$. We can be more precise and describe explicitly the elliptic fibration in terms of the canonical bundle. \[primitive\] Let $X$ be a minimal rational elliptic surface. Then for $m$ large enough, we have $\mathrm{dim}\, |-mK_X| \geq 1$. For $m$ minimal with this property, $|-mK_X|$ is a pencil without base point whose generic fiber is a smooth and reduced elliptic curve. The first point follows from Proposition \[dix\]. Let us prove that $|-mK_X|$ has no fixed part. As usual we write $-mK_X=F+D$ where $F$ is the fixed part. Then since $C$ is nef and proportional to $K_X$, $C.F=C.D=0$. Since $D^2\geq 0$, by the Hodge index theorem $D^2=0$ and $D$ is proportional to $C$. Thus $D$ and $F$ are proportional to $K_X$. By Lemma \[hehehe\], the class of $K_X$ is non-divisible in $\mathrm{NS}\,(X)$. Thus $F=m' \mathfrak{D}$ for some integer $m'$ with $0 \leq m' <m$. Hence $D=(m-m') \,\mathfrak{D}=-(m-m') \,K_X$ and $\mathrm{dim}\, |D| \geq 1$. By the minimality of $m$, we get $m'=0$. Since $K_X^2=0$, $-mK_X$ is basepoint free and $\mathrm{dim}\,|-mK_X|=1$. Let us now prove that the divisors in $|-mK_X|$ are connected. If this is not the case, we use the Stein decomposition and write the Kodaira map of $-mK_X$ as $$X \rightarrow S \xrightarrow{\psi} |-mK_X|^*$$ where $S$ is a smooth compact curve, and $\psi$ is finite.
Since $X$ is rational, $S=\mathbb{P}^1$ and therefore we see that each connected component $D$ of a divisor in $|-mK_X|$ satisfies $\mathrm{dim}\, |D| \geq 1$. Thus $\mathrm{dim}\, |-mK_X| \geq 2$ and we get a contradiction. We can now conclude: a generic divisor in $|-mK_X|$ is smooth and reduced by Bertini’s theorem. The genus formula shows that it is an elliptic curve. $ $ 1. Proposition \[primitive\] means that the relative minimal model of $X$ is a *Halphen surface* of index $m$, that is, a rational surface such that $|-mK_X|$ is a pencil without fixed part and without base points. Such a surface is automatically minimal. 2. The elliptic fibration $X \rightarrow |-mK_X|^*$ does not have a rational section if $m \geq 2$. Indeed, the existence of multiple fibers (in our situation, the fiber $m \mathfrak{D}$) is an obstruction to the existence of such a section. Reducible fibers of the elliptic fibration {#7.2} ------------------------------------------ We keep the notation of the preceding section: $X$ is a Halphen surface of index $m$ and $\mathfrak{D}$ is an anticanonical divisor. \[woodloot\] All the elements of the system $|-mK_X|$ are primitive, except the element $m \mathfrak{D}$. Since $K_X$ is non-divisible in $\mathrm{NS}\,(X)$, a non-primitive element in $|-mK_X|$ is an element of the form $k D$ where $D \in |m' \mathfrak{D}|$ and $m=k m'$. But $\mathrm{dim} \,|m' \mathfrak{D}|=0$ so that $|D|=|m' \mathfrak{D}|=\{m' \mathfrak{D}\}$. In the sequel, we denote by $S_1, \ldots, S_\lambda$ the reducible fibers of $|-mK_X|$. We prove an analog of Lemma \[base\], but the proofs will be slightly different. \[libre\] $ $ 1. Let $S=\alpha_1 E_1 + \ldots + \alpha_{\nu} E_{\nu}$ be a reducible fiber of $|-mK_X|$. Then the classes of the components $E_i$ in $\mathrm{NS}\,(X)$ are linearly independent. 2. If $D$ is a divisor supported in $S_1 \cup \ldots \cup S_{\lambda}$ such that $D^2=0$, then there exist integers $n_i$ such that $D=n_1S_1 + \ldots + n_{\lambda} S_{\lambda}$.
If there is a linear relation among the curves $E_i$, we can write it as $D_1 \equiv D_2$, where $D_1$ and $D_2$ are linear combinations of the $E_i$ with positive coefficients (hence effective divisors) having no component in common. We have $D_1^2=D_1. \, D_2 \geq 0$. On the other hand $S.\, D_1=0$ and $S^2=0$, so by the Hodge index theorem $S$ and $D_1$ are proportional. Let $E$ be a component of $S$ intersecting $D_1$ but not included in $D_1$. If $a \,D_1 \sim b \,S$, then $0=b \,S.\,E=a \,D_1.\, E>0$, and we are done. For the second point, let us write $D=D_1+ \ldots +D_{\lambda}$ where each $D_i$ is supported in $S_i$. Then the $D_i$’s are mutually orthogonal. Besides, $D_i. C=0$, so that by the Hodge index theorem $D_i^2 \leq 0$. Since $D^2=0$, it follows that $D_i^2=0$ for all $i$. We pick an $i$ and write $D_i=D$ and $S_i=S$. Then there exist integers $a$ and $b$ such that $a D \sim b S$. Therefore, if $D=\sum \beta_q \,E_q$, $\sum_q (a \alpha_q-b \beta_q) \,E_q=0 $ in $\mathrm{NS}\,(X)$. By the first point, $a \alpha_q-b \beta_q=0$ for all $q$, so that $b$ divides $a \alpha_q$ for all $q$. By Lemma \[woodloot\], $b$ divides $a$. Writing $a=bc$, we get $\beta_q=c \alpha_q$ for all $q$, so that $D=cS$. Let $\rho \colon X \rightarrow \mathbb{P}^1$ be the Kodaira map of $|-mK_X|$, and $\xi$ be the generic point of $\mathbb{P}^1$. We denote by $\mathfrak{X}$ the algebraic variety $\rho^{-1}(\xi)$, which is a smooth elliptic curve over the field $\mathbb{C}(t)$. Let $\mathcal{N}$ be the kernel of the natural restriction map ${\mathfrak{t}} \colon \mathrm{Pic}\,(X) \rightarrow\mathrm{Pic}\,(\mathfrak{X})$. The image of $\mathfrak{t}$ is the set of divisors on $\mathfrak{X}$ defined over the field $\mathbb{C}(t)$, denoted by $\mathrm{Pic}\, (\mathfrak{X}/\C(t))$. The algebraic group $\mathrm{Pic}_0(\mathfrak{X})$ acts naturally on $\mathfrak{X}$, and this action is simply transitive over any algebraic closure of $\C(t)$.
\[sept\] If $S_1, \ldots, S_{\lambda}$ are the reducible fibers of the pencil $|-mK_X|$ and $\mu_j$ denotes the number of components of each curve $S_j$, then $ \mathrm{rk}\, \mathcal{N}=1+\sum_{i=1}^{\lambda} \,\{\mu_i- 1\}. $ The group $\mathcal{N}$ is generated by $\mathfrak{D}$ and the classes of the components of the reducible fibers of $|-mK_X|$. We claim that the module of relations between these generators is generated by the relations $\alpha_1 [E_1] + \ldots + \alpha_{\nu} [E_{\nu}]=m[\mathfrak{D}]$ where $\alpha_1 E_1 + \cdots + \alpha_{\nu} E_{\nu}$ is a reducible member of $|-mK_X|$. Let $D$ be of the form $a \mathfrak{D}+D_1+ \cdots + D_{\lambda}$ where each $D_i$ is supported in $S_i$, and assume that $D \sim 0$. Then $(D_1+ \cdots + D_{\lambda})^2=0$. Thanks to Lemma \[libre\] (2), each $D_i$ is equal to $n_i S_i$ for some $n_i$ in $\Z$. Then $a+m \,\{\sum_{i=1}^{\lambda} n_i \}=0$, and $$a \mathfrak{D}+D_1+ \cdots + D_{\lambda}=\sum_{i=1}^{\lambda} n_i \,(S_i-m \mathfrak{D}).$$ We also see easily that these relations are linearly independent over $\Z$. Thus, since the number of generators is $1+\sum_{i=1}^{\lambda} \mu_i$, we get the result. \[sympa\] We have the inequality $\sum_{i=1}^{\lambda} \,\{\mu_i-1\} \leq 8$. Besides, if $\sum_{i=1}^{\lambda} \,\{\mu_i-1\}=8$, every automorphism of $X$ acts finitely on $\mathrm{NS}\,(X)$. We remark that $\mathcal{N}$ lies in ${K}_X^{\perp}$, which is a lattice of rank $9$ in $\mathrm{Pic}\,(X)$. This yields the inequality $\sum_{i=1}^{\lambda} \,(\mu_i-1) \leq 8$. Assume $\mathcal{N}=K_X^{\perp}$, and let $f$ be an automorphism of $X$. Up to replacing $f$ by an iterate, we can assume that $\mathcal{N}$ is fixed by $f$. Thus $f^*$ is a parabolic translation leaving the orthogonal of the isotropic invariant ray $\mathbb{R} K_X$ pointwise fixed. It follows that $f$ acts trivially on $\mathrm{Pic}\,(X)$. Lastly, we prove that there is a major dichotomy among Halphen surfaces.
Since there is no proof of this result in Gizatullin’s paper, we provide one for the reader’s convenience. Let us introduce some notation: let $\mathrm{Aut}_0(X)$ be the connected component of $\mathrm{id}$ in $\mathrm{Aut}\, (X)$ and $\widetilde{\mathrm{Aut}}\, (X)$ be the group of automorphisms of $X$ preserving the elliptic fibration fiberwise. \[waza\] Let $X$ be a Halphen surface. Then $X$ has at least two degenerate fibers. The following are equivalent: 1. $X$ has *exactly* two degenerate fibers. 2. $\mathrm{Aut}_0(X)$ is an algebraic group of positive dimension. 3. $\widetilde{\mathrm{Aut}}\, (X)$ has infinite index in $\mathrm{Aut}\, (X)$. Under any of these conditions, $\mathrm{Aut}_0(X) \simeq \C^{\times}$, $\widetilde{\mathrm{Aut}}\, (X)$ is finite, and $\mathrm{Aut}_0(X)$ has finite index in $\mathrm{Aut}\, (X)$. Let $\mathcal{Z}$ be the finite subset of $\mathbb{P}^1$ consisting of points $z$ such that $\pi$ is not smooth at some point of the fiber $X_z$, and $U$ be the complement of $\mathcal{Z}$ in $\mathbb{P}^1$. The points of $\mathcal{Z}$ correspond to the degenerate fibers of $X$. Let $\mathcal{M}_1$ be the moduli space of elliptic curves, considered as a complex orbifold. It is the quotient orbifold $\mathfrak{h} / \mathrm{SL}(2; \mathbb{Z})$ and its coarse moduli space $|\mathcal{M}_1|$ is $\C$. The elliptic surface over $U$ yields a morphism of orbifolds $\phi \colon U \rightarrow \mathcal{M}_{1}$, hence a morphism $| \phi | \colon U \rightarrow \mathbb{C}$. The orbifold universal cover of $\mathcal{M}_{1}$ is $\mathfrak{h}$, so that $\phi$ induces a holomorphic map $\widetilde{U} \rightarrow \mathfrak{h}$. If $\# \mathcal{Z} \in \{0, 1, 2\}$, then $\widetilde{U}=\mathbb{P}^1$ or $\widetilde{U}=\C$, and $|\phi|$ is constant. This means that all fibers of $X$ over $U$ are isomorphic to a fixed elliptic curve $E$. Let $H$ be the isotropy group of $\mathcal{M}_1$ at $E$; it is a finite group of order $2$, $4$ or $6$.
Then $\phi$ factorizes as the composition $U \rightarrow {B}H \rightarrow \mathcal{M}_1$ where ${B} H$ is the orbifold $\bullet_{H}$. The stack morphisms from $U$ to ${B}H$ are simply $H$-torsors on $U$, and are in bijection with $\mathrm{H}^1(U, H)$. In the case $\# \mathcal{Z} \in \{0, 1\}$, that is $U=\mathbb{P}^1$ or $U=\mathbb{C}$, we have $\mathrm{h}^1(U, H)=0$. Thus $X$ is birational to $E \times \mathbb{P}^1$, which is impossible since $X$ is rational. This proves the first part of the theorem. \(iii) $\Rightarrow$ (i) We have an exact sequence $$0 \rightarrow \widetilde{\mathrm{Aut}}\,(X) \rightarrow \mathrm{Aut}\,(X) \xrightarrow{\kappa} \mathrm{Aut}\, (\mathbb{P}^1).$$ The image of $\kappa$ must leave the set $\mathcal{Z}$ globally fixed. If $\# \mathcal{Z} \geq 3$, then the image of $\kappa$ is finite, so that $\widetilde{\mathrm{Aut}}\,(X)$ has finite index in $\mathrm{Aut}\,(X)$. \(i) $\Rightarrow$ (ii) In this situation, we deal with the case $U=\C^{\times}$. The group $\mathrm{H}^1(\mathbb{C}^{\times}, H)$ is isomorphic to $H$. For any element $h$ in $H$, let $n$ be the order of $h$ and $\zeta$ be an $n$-th root of unity. The cyclic group ${\ensuremath{\mathbb Z}}/ n \Z$ acts on $\mathbb{C}^{\times} \times E$ by the formula $p.(z, e)=(\zeta^{p} z, h^p . e)$. The open elliptic surface over $\mathbb{C}^{\times}$ associated with the pair $(E, h)$ is the quotient of $\mathbb{C}^{\times} \times E$ by $\Z/n\Z$. We can compactify everything: the elliptic surface associated with the pair $(E, h)$ is obtained by desingularizing the quotient of $\mathbb{P}^1 \times E$ by the natural extension of the ${\ensuremath{\mathbb Z}}/ n\Z$-action defined above. By this construction, we see that the $\mathbb{C}^{\times}$ action on $\pi^{-1}(U)$ extends to $X$. Thus $\mathrm{Aut}_0(X)$ contains $\C^{\times}$. \(i) $\Rightarrow$ (iii) We have just proven in the previous implication that if $X$ has two degenerate fibers, then the image of $\kappa$ contains $\C^{\times}$.
Therefore $\widetilde{\mathrm{Aut}}\,(X)$ has infinite index in $\mathrm{Aut}\,(X)$. \(ii) $\Rightarrow$ (i) We claim that $\widetilde{\mathrm{Aut}}\, (X)$ is countable. Indeed, $\widetilde{\mathrm{Aut}}\, (X)$ is a subgroup of $\mathrm{Aut}\, (\mathfrak{X}/ \C(t))$ which contains $\mathrm{Pic}\, (\mathfrak{X}/ \C(t))$ as a finite index subgroup; and $\mathrm{Pic}\, (\mathfrak{X}/ \C(t))$ is a quotient of $\mathrm{Pic}\,(X)$, which is countable since $X$ is rational. Therefore, if $\mathrm{Aut}_0(X)$ has positive dimension, then the image of $\kappa$ is infinite. The morphism $|\phi| \colon U \rightarrow \mathbb{C}$ is invariant by the action of $\mathrm{im}\, (\kappa)$, so it must be constant. As we have already seen, this implies that $X$ has two degenerate fibers. It remains to prove the last statement of the proposition. Since $\widetilde{\mathrm{Aut}}\,(X)$ is a countable group, $\widetilde{\mathrm{Aut}}\,(X) \cap \mathrm{Aut}_0 (X)=\{ \mathrm{id} \}$. Thus, $\mathrm{Aut}_0(X) \simeq \kappa \left(\mathrm{Aut}_0(X) \right) \simeq \mathbb{C}^{\times}$. Let $\varepsilon$ denote the natural representation of $\mathrm{Aut}\,(X)$ in $\mathrm{NS}\,(X)$. Since $\mathrm{Aut}_0(X) \subset \mathrm{ker}\,(\varepsilon)$, $\mathrm{ker}\,(\varepsilon)$ is infinite. Thanks to [@HH], $\mathrm{im}(\varepsilon)$ is finite. To conclude, it suffices to prove that $\mathrm{Aut}_0(X)$ has finite index in $\mathrm{ker}\,(\varepsilon)$. Any smooth curve of negative self-intersection must be fixed by $\mathrm{ker}\,(\varepsilon)$. Let $\mathbb{P}^2$ be the minimal model of $X$ (which is either $\mathbb{P}^2$ or $\mathbb{F}_n$) and write $X$ as the blowup of $\mathbb{P}^2$ along a finite set $Z$ of (possibly infinitely near) points. Since $\mathrm{Aut}_0(\mathbb{P}^2)$ is connected, $\mathrm{ker}\,(\varepsilon)$ is the subgroup of elements of $\mathrm{Aut}\,(\mathbb{P}^2)$ fixing $Z$.
This is a closed algebraic subgroup of $\mathrm{Aut}\,(\mathbb{P}^2)$, so $\mathrm{ker}\,(\varepsilon)_0$ has finite index in $\mathrm{ker}\,(\varepsilon)$. Since $\mathrm{ker}\,(\varepsilon)_0=\mathrm{Aut}_0(X)$, we get the result. Minimal elliptic surfaces with two degenerate fibers are called Gizatullin surfaces; they are exactly the rational surfaces possessing a nonzero regular vector field. They are Halphen surfaces of index $1$, and their detailed construction is given in [@GIZ §2]. They have two reducible fibers $S_1$ and $S_2$ which satisfy $\mu_1+ \mu_2=10$, and $\mathrm{Aut}_0(X)$ always has finite index in $\mathrm{Aut}\,(X)$. The main construction {#7.3} --------------------- In this section, we construct explicit parabolic automorphisms of Halphen surfaces. \[penible\] Let $X$ be a Halphen surface such that $\sum_{i=1}^{\lambda} \,\{\mu_i-1\} \leq 7$. Then there exists a free abelian group $G$ of rank $8-\sum_{i=1}^{\lambda} \,\{\mu_i-1\}$ and of finite index in ${\mathrm{Aut}}\,(X)$ such that any non-zero element in $G$ is a parabolic automorphism acting by translation on each fiber of the fibration. Let $\widetilde{\mathrm{Aut}}(X)$ be the subgroup of $\mathrm{Aut}\,(X)$ corresponding to automorphisms of $X$ preserving the elliptic fibration fiberwise. By [@DS Chap. II §10 Thm.1], any automorphism of $\mathfrak{X}$ defined over $\mathbb{C}(t)$ extends to an automorphism of $X$. Thus $\widetilde{\mathrm{Aut}}\,(X)=\mathrm{Aut} (\mathfrak{X}/ \mathbb{C}(t))$. Since $\mathfrak{X}$ is a smooth elliptic curve, $\mathrm{Pic}_0 \{\mathfrak{X} / \mathbb{C}(t)\}$ has finite index in $\mathrm{Aut} (\mathfrak{X}/ \mathbb{C}(t))$, so that $\mathrm{Pic}_0 \{\mathfrak{X} / \mathbb{C}(t)\}$ has finite index in $\widetilde{\mathrm{Aut}}\,(X)$. The trace morphism $\mathfrak{t} \colon \mathrm{Pic}\,(X) \rightarrow \mathrm{Pic} \{\mathfrak{X} / \mathbb{C}(t)\}$ is surjective and for any divisor $D$ in $\mathrm{Pic}\,(X)$ we have $\mathrm{deg}\, \mathfrak{t}(D)=D.C$.
Therefore $$K_X^{\perp}/ \mathcal{N} \simeq \mathrm{Pic}_0 \{\mathfrak{X} / \mathbb{C}(t)\} \hookrightarrow \widetilde{\mathrm{Aut}}\,(X)$$ where the image of the last morphism has finite index. By Proposition \[sept\], the rank of $\mathcal{N}$ is $\sum_{i=1}^{\lambda} (\mu_i-1) +1$, which is at most $8$. Let $G$ be the torsion-free part of ${K_X^{\perp}}/{\mathcal{N}}$; the rank of $G$ is at least one. Any non-zero $g$ in $G$ acts by translation on the generic fiber $\mathfrak{X}$ and this translation is of infinite order in $\mathrm{Aut}\, (\mathfrak{X})$. Besides, via the morphism $\mathrm{Pic}\,(X) \rightarrow \mathrm{Pic}\,(\mathfrak{X})$, $g$ acts by translation by $\mathfrak{t}\,(g)$ on $\mathrm{Pic}\,(\mathfrak{X})$, so that the action of $g$ on $\mathrm{Pic}\,(X)$ has infinite order. Let $g$ be in $G$, let $\tau$ be an eigenvalue of the action of $g$ on $\mathrm{Pic}\, (X)$, and assume that $|\tau| > 1$. If $g^*v=\tau v$, then $v$ is orthogonal to $K_X$ and $v^2=0$. It follows that $v$ is collinear to $K_X$ and we get a contradiction. Therefore, $g$ is parabolic. To conclude the proof, it suffices to prove that $ \widetilde{\mathrm{Aut}}\,(X)$ has finite index in ${\mathrm{Aut}}\,(X)$. Assume the contrary. Then Proposition \[waza\] implies that $X$ has two degenerate fibers, that is, $X$ is a Gizatullin surface. In that case $\mu_1+\mu_2=10$ (by the explicit description of Gizatullin surfaces) and we get a contradiction. \[hapff\] Let $X$ be a Halphen surface. The following are equivalent: 1. $\sum_{i=1}^{\lambda} \{\mu_i-1\}=8$. 2. The group $\widetilde{\mathrm{Aut}}(X)$ is finite. 3. The image of $\mathrm{Aut}\, (X)$ in $\mathrm{GL}\!\left(\mathrm{NS}\,(X)\right)$ is finite. \(i) $\Leftrightarrow$ (ii) Recall (see the proof of Proposition \[penible\]) that $K_X^{\perp}/\mathcal{N}$ has finite index in $\widetilde{\mathrm{Aut}}\, (X)$.
This gives the equivalence between (i) and (ii) since $K_X^{\perp}/\mathcal{N}$ is a free group of rank $8-\sum_{i=1}^{\lambda} \{\mu_i-1\}$. \(i) $\Rightarrow$ (iii) This is exactly Corollary \[sympa\]. \(iii) $\Rightarrow$ (i) Assume that $\sum_{i=1}^{\lambda} \{\mu_i-1\} \leq 7$. Then $X$ carries parabolic automorphisms thanks to Theorem \[penible\]. This gives the required implication. Let us end this section with a particular but illuminating example: *unnodal Halphen surfaces*. By definition, an unnodal Halphen surface is a Halphen surface without reducible fibers. In this case, $\mathcal{N}$ is simply the rank one module ${\ensuremath{\mathbb Z}}K_X$, so that we have an exact sequence $$0 \rightarrow {\ensuremath{\mathbb Z}}K_X \rightarrow K_X^{\perp} \underset{\lambda}{\hookrightarrow} {\mathrm{Aut}}\, (X)$$ where the image of the last morphism has finite index. Then: \[classieux\] For any $\alpha$ in $K_X^{\perp}$ and any $D$ in $\mathrm{NS}\, (X)$, $$\lambda_{\alpha}^*(D)=D-m\,(D.K_X)\, \alpha+\left\{m\,(D. \alpha)-\frac{m^2}{2} (D.K_X)\, \alpha^2 \right\} K_X.$$ We consider the restriction map $\mathfrak{t} \colon \mathrm{Pic}\, (X) \rightarrow \mathrm{Pic}(\mathfrak{X}/\C(t))$ sending $K_X^{\perp}$ to $\mathrm{Pic}_0(\mathfrak{X}/\C(t))$. Then $\mathfrak{t}({\alpha})$ acts on the curve $\mathfrak{X}$ by translation, and also on the Picard group of $\mathfrak{X}$ by the standard formula $$\mathfrak{t}({\alpha})^* (\mathfrak{Z})=\mathfrak{Z}+ \mathrm{deg}\,(\mathfrak{Z})\, \mathfrak{t}({\alpha}).$$ Applying this to $\mathfrak{Z}=\mathfrak{t}(D)$ and using the formula $\mathrm{deg}\, \mathfrak{t}(D)=-m\,(D.K_X)$, we get $$\mathfrak{t}\left(\lambda_{\alpha}^* (D)\right)=\mathfrak{t}(D)-m\, (D.K_X) \, \mathfrak{t}(\alpha).$$ Hence there exists an integer $n$ such that $$\lambda_{\alpha}^* (D)=D-m\, (D.K_X)\, \alpha + n\, K_X.$$ Then $$\lambda_{\alpha}^* (D)^2=D^2-2m \,(D.K_X)\,(D.\alpha)+m^2\, (D.K_X)^2\,\alpha^2+2n\, (D.K_X).$$ We can assume without loss of generality that we have $(D.K_X)\neq 0$, since $\mathrm{Pic}\,(X)$ is spanned by such divisors $D$. Since $\lambda_{\alpha}^* (D)^2=D^2$, we get $$n=m\, (D. \alpha) -\frac{m^2}{2} \,(D.K_X) \,\alpha^2.$$ [^1]: Gizatullin considers only parabolic elements, but most of his arguments apply to groups containing elliptic elements as well, as soon as they contain at least *one* parabolic element.
--- abstract: 'We propose a method for calculating Wannier functions of periodic solids directly from a modified variational principle for the energy, subject to the requirement that the Wannier functions are orthogonal to all their translations (“shift-orthogonality”). Localization is achieved by adding an $L_1$ regularization term to the energy functional. This approach results in “compressed” Wannier modes with compact support, where one parameter $\mu$ controls the trade-off between the accuracy of the total energy and the size of the support of the Wannier modes. Efficient algorithms for shift-orthogonalization and solution of the variational minimization problem are demonstrated.' author: - Farzin Barekat - Ke Yin - 'Russel E. Caflisch' - 'Stanley J. Osher' - Rongjie Lai - Vidvuds Ozolinš title: 'Compressed Wannier modes found from an $L_1$ regularized energy functional' --- Electronic states in periodic crystals are usually discussed in terms of Bloch waves of definite crystal momentum and energy. An alternative description in terms of spatially localized functions was introduced by Gregory Wannier [@Wannier1937] and further developed in [@Kohn1959; @Wannier1960; @Blount1962; @DesCloizeaux1963]. These so-called Wannier functions are associated with lattice sites, are translational images of each other, and can be chosen to be real and exponentially localized in conventional (i.e., topologically trivial) insulators [@Brouder2007]. Even though the Wannier functions are not the eigenstates of the crystal Hamiltonian, they represent a convenient description of the electronic states for understanding such phenomena as electric polarization [@Resta1994], orbital magnetization [@Thonhauser2005], nontrivial insulating states [@Resta2011] and range of electronic interactions in condensed matter [@Prodan2005]. Wannier functions can also be used to increase speed and accuracy of computations. 
For instance, they can be used to interpolate the electronic wave functions and band structure throughout the Brillouin zone whenever a very large number of $\k$ points is needed, such as when calculating electron-phonon scattering rates [@Giustino2007]. Wannier functions are unitary transformations of Bloch waves with different crystal momenta and are usually obtained by optimizing a suitably chosen localization functional. A particularly successful choice was introduced by Marzari and Vanderbilt [@Marzari1997], in which one minimizes the spread (second moment) of the Wannier functions, resulting in maximally localized Wannier functions (MLWF). Due to the non-convexity of the target functional and constraints, a reasonable initial guess is usually needed to avoid local minima corresponding to poorly localized, complex Wannier functions [@Marzari2012]. It is well understood that for insulators Wannier functions satisfy the minimum principle for the total energy subject to the constraint of orthogonality to all their translations by lattice vectors; we refer to this as shift-orthogonality. The corresponding variational principle was formulated by Koster [@Koster1953] and used by Kohn [@Kohn1973] in his variational Wannier function approach, but it has seldom been used in practice with general bases [@Pederson1987]. In this paper, we show that localized Wannier modes can be obtained directly from an $L_1$ regularized variational principle without ever calculating crystal eigenstates in the Bloch representation. These ideas generalize earlier work [@Ozolins2013; @Ozolins2014] to systems with translational symmetry. Our approach is well-defined for both insulating and metallic systems, with one parameter providing a systematically controllable trade-off between the accuracy of the total energy and the degree of localization of the regularized (“compressed”) Wannier modes.
We also introduce efficient numerical methods to solve the associated constrained variational problem. For simplicity, we assume that the problem permits real-valued Wannier functions. Following general practice, we label the Wannier functions $\psi^n_{\R} (\r) \equiv \psi^n (\r - \R)$ by a band index $n$ and lattice site $\R$. The $L_1$ regularized energy functional introduced in [@Ozolins2013] is written as $$\label{eq:modified_functional} \mathcal{J}(\psi):=\frac{1}{\mu} \|\psi\|_1+\langle \psi | \hat{H} | \psi \rangle,$$ where the $L_1$ norm of a function is defined as $\|\psi\|_1=\int |\psi(\r)|\,\mathrm{d}\r$. The effect of the $L_1$ term is to localize the solutions, and the parameter $\mu$ controls the trade-off between sparsity and accuracy: larger values of $\mu$ give solutions that better minimize the total energy at the expense of more extended Wannier functions, while a smaller $\mu$ gives highly localized wave functions at the expense of larger errors in the calculated energies. Furthermore, due to the properties of the $L_1$ term, the functions that minimize $\mathcal{J}$ have compact support, i.e., they are nonzero only in a finite spatial region. Since the functional is convex, efficient numerical minimization methods can be devised. Our proposed scheme defines compactly supported Wannier modes recursively by minimizing $\mathcal{J}(\psi)$ subject to shift-orthogonality and normalization constraints: $$\label{eq:Wannier} \begin{cases} \psi^1=\underset{\psi}{\arg \min}\;\mathcal{J}(\psi) & \hbox{s.t.} \;\; \langle\psi_{\R}| \psi_{\R'} \rangle=\delta_{\R\R'} \\ \psi^k=\underset{\psi}{\arg \min}\; \mathcal{J}(\psi) & \hbox{s.t.} \;\; \langle\psi_{\R}| \psi_{\R'} \rangle=\delta_{\R\R'} \\ &\textrm{and} \;\; \langle \psi_\R | \psi^i_{\R'} \rangle=0 \; \textrm{for}\; i<k. \end{cases}$$ This generalizes to nonzero crystal potentials the approach used in [@Ozolins2014] to construct the compressed plane wave (CPW) bases for the Laplacian.
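To make the effect of the $L_1$ term concrete, the following toy sketch (ours, not an algorithm from this paper) applies proximal-gradient (ISTA) steps to a discretized one-dimensional Hamiltonian: each step is a gradient step on $\langle\psi|\hat{H}|\psi\rangle$ followed by soft-thresholding, which is what produces exact zeros in $\psi$. The grid size, potential, step size and the crude renormalization used here are all illustrative assumptions; the constrained algorithms described in what follows are more careful.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding, the proximal operator of t * ||x||_1: |x| < t goes to 0."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_step(psi, H, mu, tau):
    """One proximal-gradient step for J(psi) = (1/mu) ||psi||_1 + <psi|H|psi>,
    with a crude rescaling to keep ||psi||_2 = 1 (illustrative only)."""
    psi = shrink(psi - tau * 2.0 * (H @ psi), tau / mu)
    n = np.linalg.norm(psi)
    return psi / n if n > 0 else psi

# Toy 1D Hamiltonian H = -(1/2) d^2/dx^2 + V on N grid points (finite differences)
N = 64
lap = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
V = np.zeros(N)
V[N // 2 - 4 : N // 2 + 4] = -5.0          # attractive well in the middle
H = -0.5 * lap + np.diag(V)

psi = np.random.default_rng(0).standard_normal(N)
psi /= np.linalg.norm(psi)
for _ in range(500):
    psi = ista_step(psi, H, mu=10.0, tau=0.05)
# psi is now concentrated around the well, with exact zeros away from it
```

Increasing $\mu$ lowers the threshold $\tau/\mu$ and widens the support, mirroring the trade-off between accuracy and localization described above.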
A key advantage of our scheme is that a single parameter $\mu$ controls both the physical accuracy and the spatial extent, without requiring any physical intuition about the properties of the solution. In other words, the Wannier functions are nonzero only in those regions that are required to achieve a given accuracy for the total energy and are zero everywhere else. The Cauchy–Schwarz inequality guarantees that the difference between \eqref{eq:modified_functional} and the true energy functional is bounded from above by a constant multiple of $\|\psi\|_2$. Hence, the solutions to the variational problem involving \eqref{eq:modified_functional} provide an accurate, systematically controllable approximation to the true total energy of the system [@Ozolins2013]. Fully self-consistent calculations can be performed using Wannier functions, without any reference to the Bloch waves and Brillouin zones. In what follows, we describe efficient algorithms for solving \eqref{eq:Wannier}. We choose a supercell $\Omega$ defined by three lattice vectors $${\R}^\text{SC}_\alpha = \sum_{\beta=1}^3 L_{\alpha\beta} {\R}_\beta,$$ where $L_{\alpha\beta}$ is a $3 \times 3$ nonsingular matrix with integer elements, and ${\R}_\beta$ are the unit cell vectors of the primitive lattice.
The Hamiltonian $\hat{H} = - \frac{1}{2} \Delta + V(\r)$ has the periodicity of the primitive lattice: $$\begin{aligned} \label{eq:Hperiodic} V(\mathbf{r}) = V ( \r + \R ), \\ \R = \sum_{\alpha=1}^3 n_\alpha \mathbf{R}_\alpha, \quad n_\alpha \in \mathbb{Z}.\end{aligned}$$ For the Wannier modes in \eqref{eq:Wannier}, we impose periodic boundary conditions with respect to the supercell: $$\begin{aligned} \label{eq:PBC} \psi ({\r}) = \psi ( \r + \R_\text{S} ) \\ {\R}_\text{S} = \sum_\alpha n_\alpha \mathbf{R}^\text{SC}_\alpha, \quad n_\alpha \in \mathbb{Z}.\end{aligned}$$ For physical accuracy, the supercell $\R^\text{SC}_\alpha$ should be chosen large enough to allow the compressed Wannier modes to decay to zero within the range of the supercell, although this is not necessary for the numerical algorithm to work. We also introduce the primitive reciprocal lattice ${\bf Q}_\alpha$ such that ${\bf Q}_\alpha \cdot {\bf R}_\beta = 2 \pi \delta_{\alpha\beta}$. The Fourier expansion of a supercell-periodic Wannier mode $\psi$ will contain only plane waves with wave vectors $\k+\G$, where $\k$ belongs to the first Brillouin zone of the primitive lattice and $\G = \sum_\alpha m_\alpha \mathbf{Q}_\alpha \;\; (m_\alpha \in \mathbb{Z})$ is a reciprocal lattice vector. The Fourier expansion of the Wannier mode $\psi$ includes all plane waves below a certain kinetic energy cutoff $E_\mathrm{max}$: $$\label{eq:Emax} \frac{1}{2} |\k + \G |^2 \leq E_\mathrm{max},$$ and can be written as $$\label{eq:Fourier} \psi( \mathbf{r} ) = \sum_\k \sum_\G \tilde{\psi} (\k+\G) e^{i(\k+\G)\r} \equiv \sum_\k u_{\k} (\r) e^{i \k \r},$$ where we have defined a cell-periodic Bloch function $$\label{eq:Bloch} u_{\k} (\r) = \sum_\G \tilde{\psi} (\k+\G) e^{i \G \r}.$$ The inverse Fourier transform is given by $$\tilde{\psi}(\k+\G)=\frac{1}{| \Omega |}\int_\Omega \psi(\r)e^{-i(\k+\G)\r}\dr, \label{equation:inverseFourier}$$ where the integral extends over the supercell and $|\Omega|$ is the supercell volume.
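The decomposition \eqref{eq:Fourier} into cell-periodic Bloch components can be illustrated on a discrete 1-D grid. The sketch below is purely illustrative (the function name and grid sizes are our own choices, not part of the method above); it groups DFT frequencies that are congruent modulo the number of cells $N$, since these share the same reduced wave vector $\k$:

```python
import numpy as np

def bloch_components(psi, N):
    """Split a function on a supercell of N cells (M grid points each) into
    Bloch components: psi(x) = sum_k u_k(x) e^{i k x}, with each u_k periodic
    over one primitive cell.  Frequencies congruent mod N share the same k."""
    n = psi.size
    x = np.arange(n)
    c = np.fft.fft(psi) / n                        # coefficients psi~(k+G)
    comps = []
    for r in range(N):                             # r labels the reduced k point
        freqs = np.flatnonzero(np.arange(n) % N == r)   # all G's for this k
        u_k = sum(c[f] * np.exp(2j * np.pi * (f - r) * x / n) for f in freqs)
        comps.append((r, u_k))
    return comps

# The components recombine to the original function, as in the text:
rng = np.random.default_rng(1)
N, M = 4, 16
psi = rng.standard_normal(N * M)
x = np.arange(N * M)
recon = sum(u * np.exp(2j * np.pi * r * x / (N * M))
            for r, u in bloch_components(psi, N))
```

Each `u_k` returned above is periodic with the primitive cell (period `M` grid points), mirroring the cell-periodicity of $u_{\k}(\r)$ in \eqref{eq:Bloch}.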
[*Shift-orthogonality:*]{} One of the key steps for \eqref{eq:Wannier} is ensuring that the solution $\psi^n$ is orthogonal to its own translations by all primitive lattice vectors $\R$, as well as orthogonal to all translations of the lower Wannier modes $\psi^1 \ldots \psi^{n-1}$. We say that a function $\psi(\r)$ is shift-orthogonal if and only if $$\langle \psi_{\R'} | \psi_\R \rangle \equiv \int \psi(\r - \R') \psi(\r - \R) \dr = \delta_{\R'\R} \label{equation:shiftOrthogonalDefinition}$$ holds for all lattice vectors $\R$ and $\R'$. The nonlinear Lagrangian method used to enforce \eqref{equation:shiftOrthogonalDefinition} in [@Ozolins2014] is too slow in this context, and here we detail a faster approach adapted from computational harmonic analysis [@Mallat1999]. Given a supercell-periodic function $f$, the objective is to find the projection of $f$ onto the set of shift-orthogonal functions (see also [@BarekatThesis]). In other words, we need to solve the following minimization problem: $$\hat{\Pi} f := {\operatornamewithlimits{argmin}}_{\psi} \| f - \psi \|_2 \quad \hbox{s.t.} \quad \langle \psi_{\R'} | \psi_{\R} \rangle = \delta_{\R'\R}. \label{equation:SOproblem}$$ Note that $\hat{\Pi} f$ is not necessarily unique and the set of shift-orthogonal functions is not a vector space, because a sum of two shift-orthogonal functions may not be shift-orthogonal. However, if two shift-orthogonal functions $f$ and $g$ are orthogonal to all shifts of each other, any normalized linear combination of them will also be shift-orthogonal. This property allows us to design efficient iterative update algorithms for \eqref{equation:SOproblem}. The following theorem is well known in the wavelet community (e.g., see Eq. 7.19 in [@Mallat1999]). For completeness, we provide the proof of the theorem in the appendix. A supercell-periodic function $\psi(\r)$ is shift-orthogonal if and only if $$\sum_{\G}|\tilde{\psi}(\k+\G)|^2=\frac{1}{N|\Omega|} \quad \forall~ \k\in \BZ,$$ where $N=\det(L)$ is the number of primitive cells inside the supercell.
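This criterion translates directly into a projection algorithm: rescale the Fourier coefficients of $f$ so that each $\k$ class carries the same weight. The following 1-D sketch is our own illustration (function name and the unitary-FFT normalization, which fixes $\|\psi\|_2=1$ rather than the continuum constant $1/(N|\Omega|)$, are choices made for the discrete grid):

```python
import numpy as np

def shift_orthogonal_projection(f, N):
    """Project a real function sampled on N primitive cells (M points each)
    onto the shift-orthogonal set: rescale the Fourier coefficients so that,
    for every reduced k, sum_G |psi~(k+G)|^2 equals the same constant
    (here 1/N in the unitary-FFT convention, so that ||psi||_2 = 1)."""
    n = f.size
    c = np.fft.fft(f, norm="ortho")      # unitary FFT: Parseval holds exactly
    k = np.arange(n) % N                 # frequencies congruent mod N share one k
    for r in range(N):
        mask = (k == r)
        s = np.sum(np.abs(c[mask]) ** 2)
        if s > 0:                        # vanishing classes handled as in the text
            c[mask] *= 1.0 / np.sqrt(N * s)
    return np.fft.ifft(c, norm="ortho").real

# usage: after projection, whole-cell shifts are orthonormal
rng = np.random.default_rng(0)
N, M = 4, 16
psi = shift_orthogonal_projection(rng.standard_normal(N * M), N)
overlaps = [np.dot(np.roll(psi, s * M), psi) for s in range(N)]   # ~ delta_{s,0}
```

For real input the scaling factors of the classes $r$ and $N-r$ coincide, so conjugate symmetry of the coefficients, and hence realness of the output, is preserved.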
\[theorem:equivalence\] A derivation similar to that used for Theorem \[theorem:equivalence\] yields the following theorem: For two supercell-periodic functions $\psi(\r)$ and $\phi(\r)$, $$\langle \psi_\R | \phi_{\R'} \rangle = 0 \quad \forall~\R,\R' \in \Omega$$ if and only if $$\sum_{\G}\tilde{\psi}^*(\k+\G) \tilde{\phi}(\k+\G)=0 \quad \forall~ \k \in \BZ.$$ \[theorem:equivalence2\] It is seen that these shift-orthogonality conditions amount to orthonormality conditions imposed on the Bloch functions $u_{\k} (\r)$ in \eqref{eq:Bloch}. Theorem \[theorem:equivalence\] and Parseval’s identity $\|f-\psi\|_2=\sqrt{|\Omega|}\, \|\tilde{f}-\tilde{\psi}\|_2$ yield a straightforward algorithm for obtaining the solution to problem \eqref{equation:SOproblem}: for each $\k$, rescale the coefficients $\tilde{f}(\k+\G)$ by a common factor so that the criterion of Theorem \[theorem:equivalence\] is satisfied. This algorithm has several important features. First, it has computational complexity $O(M\log M)$, where $M$ is the number of Fourier coefficients used to represent the function $f$. Second, it is parallelizable over both $\k$ and $\G$. Finally, for a real-valued input function $f$, the algorithm outputs a real-valued projection $\hat{\Pi} f$. As mentioned earlier, the solution to \eqref{equation:SOproblem} is not unique if $\sum_{\G}|\tilde{f}(\k+\G)|^2=0$ for some $\k$. In these situations, we choose the solution corresponding to the lowest frequency, i.e. $\tilde{\psi}(\k)=\frac{1}{\sqrt{N |\Omega|}}$. Next suppose that supercell-periodic functions $f$ and $g^1,\ldots,g^n$ are given. Theorem \[theorem:equivalence2\] yields an algorithm similar to the one discussed above that finds a shift-orthogonal projection that is also perpendicular to all translations of $g^1,\ldots,g^n$: $$\begin{aligned} \label{equation:SOproblemV2} &\hat{\Pi}_{\{g^1,\ldots,g^n\}^\perp} f := {\operatornamewithlimits{argmin}}_{\psi} \|f-\psi\|_2 \quad \hbox{s.t. } \\ & \hspace{1cm}\begin{cases} \hbox{$\psi$ is shift-orthogonal, and} \\ \langle \psi_{\R} | g^i_{\R'} \rangle=0 \quad \hbox{for } \forall \, \R,\R' \in \Omega \;\; \text{and} \;\; i=1,\ldots,n.
\end{cases} \notag\end{aligned}$$ [*Computing Wannier modes:*]{} The Wannier functions in \eqref{eq:Wannier} are minimizers of $\mathcal{J}(\psi)$ subject to shift-orthogonality constraints. However, minimization of $\mathcal{J}(\psi)$ cannot be carried out efficiently using conventional quadratic optimization techniques due to the discontinuous behavior of the derivative of the $L_1$ term at $\psi = 0$ and the non-convex constraints. Efficient numerical methods for such problems are based on the Bregman iteration [@Osher2005; @Yin2008]. Here, we use the split Bregman approach of [@Goldstein2009], which treats the $L^1$ term by introducing an additional variable $v$ coupled to $\psi$ through a quadratic penalty, as shown in Algorithm \[algorithm:bregman\_L1\]. The main advantage of this approach is that the minimization of the quadratic functional $\langle \psi | \hat{H} | \psi \rangle$ is separated from the minimization of the $L^1$ term, allowing the use of highly efficient quadratic optimization algorithms for the former. **Input:** $\hat{{H}}$, $\left\{ \psi^{j}:j=1,\ldots,k-1\right\} $ (empty if $k=1$), $\lambda, \gamma, \mu$ **Output:** [$\psi^k(\r)$]{} **Initialize:** $u(\r),v(\r)$ as norm-1 random functions defined on $\Omega$; $b(\r)=c(\r)=0$. $\displaystyle \psi=\underset{\psi}{{\operatornamewithlimits{argmin}}}\langle\psi|\hat{{H}}|\psi\rangle+\frac{\lambda}{2}\|\psi-u+b\|_{2}^{2}+\frac{{\gamma}}{2}\|\psi-v+c\|_{2}^{2}$; $\displaystyle u = \hat{\Pi}_{\{\psi^1,\ldots,\psi^{k-1}\}^\perp} (\psi+b) $; $\displaystyle v=\underset{v}{\text{{argmin}}}\frac{{1}}{\mu}\|v\|_{1}+\frac{{\gamma}}{2}\|\psi-v+c\|_{2}^{2}$; $b=\psi-u+b$; $c=\psi-v+c$; $\psi^k(\r)= \psi$. In Algorithm \[algorithm:bregman\_L1\], $\lambda,\gamma$ are chosen such that $\hat{H}+\lambda+\gamma$ is positive definite, and $\mu$ is chosen such that $\gamma \mu\gg \frac{1}{\sqrt{| \Omega |}}$.
Line 3 is equivalent to solving the elliptic equation $$\label{eq:non_linear_elliptic} (\hat{H}+\lambda+\gamma) \psi(\r) = \lambda [ u(\r)-b(\r)] + \gamma [v(\r)-c(\r)],$$ which can be solved by the preconditioned conjugate gradient method, with the preconditioner given by the inverse of a linear elliptic operator $$\left( -\frac{1}{2}\nabla^2 + \lambda + \gamma \right )^{-1}.$$ In this work, we implement the inverse by a fast Poisson solver. In practice, problem \eqref{eq:non_linear_elliptic} does not need to be solved exactly and a few iterations per cycle are sufficient. Line 4 is solved by the algorithm described for \eqref{equation:SOproblem} when $k=1$ and for \eqref{equation:SOproblemV2} when $k>1$. Line 5 is solved by a component-wise soft thresholding operation: $$v(\r)=\sgn \left( \psi(\r)+c(\r) \right) \max \left( 0,|\psi(\r)+c(\r)|-\frac{1}{\gamma\mu} \right).$$ As an illustration, we find compressed Wannier modes for a one-dimensional system with lattice parameter $a$ and Hamiltonian $\hat{{H}}=-\frac{{1}}{2}\nabla^2+V(x)$, where $V$ is a superposition of inverted Gaussians of two different depths: $$V(x)=-\sum_{j=-\infty}^{\infty} \sum_{m=1}^2 V_{m} \exp \left[ -\frac{(x-x_m-ja)^{2}}{2\sigma^{2}} \right].$$ We choose $a=1$, $x_1=0$, $V_1=60$ and $x_2=a/2$, $V_2=100$; the resulting potential $V(x)$ is shown in Figure \[fig:Wannier-Functions\]. The lowest 8 Wannier modes are constructed following Algorithm \[algorithm:bregman\_L1\] using a supercell of length $L=8$ and parameters $\lambda=\gamma=10^3, \mu=10/\sqrt{L}$. The results are shown in Figure \[fig:Wannier-Functions\]. Observe that the Wannier modes of levels 1 and 3 are located within the deep wells and level 2 is located in the shallow well, corresponding to “semi-core states” with flat bands. Higher levels spread over the two types of wells, which suggests that they belong to the continuous spectrum. Choosing $\mu$ in front of the $L^1$ term adaptively, inversely proportional to the total energy, can limit their support.
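The closed-form update for line 5 is the standard soft-thresholding (shrinkage) operator with threshold $1/(\gamma\mu)$. A minimal sketch (the function name and the toy values are ours):

```python
import numpy as np

def soft_threshold(x, t):
    """Component-wise shrinkage: sgn(x) * max(0, |x| - t)."""
    return np.sign(x) * np.maximum(0.0, np.abs(x) - t)

# line 5 of the algorithm: v = soft_threshold(psi + c, 1/(gamma*mu));
# here with 1/(gamma*mu) = 1 for illustration
psi_plus_c = np.array([3.0, -0.5, 1.0])
v = soft_threshold(psi_plus_c, 1.0)   # components below the threshold are zeroed
```

This zeroing of small components is precisely what produces the compact support of the converged Wannier modes.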
We note that the Wannier modes in all cases are either symmetric or antisymmetric, i.e. they transform according to irreducible representations of the symmetry group of the underlying potential. It remains to be seen whether a similar property is preserved in higher dimensions. ![From top to bottom: Model potential function (top) and its compressed Wannier modes at levels 1-8 (bottom). []{data-label="fig:Wannier-Functions"}](kronig_penny.pdf "fig:"){width="1\linewidth"} ![From top to bottom: Model potential function (top) and its compressed Wannier modes at levels 1-8 (bottom). []{data-label="fig:Wannier-Functions"}](KP_WF_1-8.pdf "fig:"){width="1\linewidth"} The calculated eigenvalue dispersion for bands 1-8 is shown in Figure \[fig:spaghetti\] for exact diagonalization (continuous line) and for subspace diagonalization using the lowest 8 Wannier modes (filled circles). The latter are calculated as the eigenvalues of the subspace Hamiltonian, $$h^{nm}(\k) = \sum_{\R} e^{-i\k\R} \langle \psi^n_\mathbf{0} | \hat{H} | \psi^m_{\R} \rangle.$$ We see that the agreement is essentially perfect, except for a small deviation in the highest band, which is due to the limited number of Wannier modes in use. ![Eigenvalue dispersion for bands 1-8 calculated by exact diagonalization (continuous line) and by using the lowest 8 Wannier modes (filled circles).[]{data-label="fig:spaghetti"}](k_dispersion.pdf){width="1\linewidth"} In conclusion, we have introduced an approach to obtaining compactly supported Wannier modes directly from an $L_1$ regularized variational principle for the total energy. Our approach does not require calculation of the Bloch states with a subsequent minimization of a nonconvex localization functional and is therefore expected to be more robust. The proposed numerical algorithms are logically straightforward and simple to implement in existing density-functional theory (DFT) codes with Brillouin zone sampling.
Indeed, the key step of Algorithm \[algorithm:bregman\_L1\] involves the iterative solution of Eq. \eqref{eq:non_linear_elliptic}, which in turn requires evaluations of $\hat{H} \psi$. Using the decomposition of $\psi$ into Bloch functions according to \eqref{eq:Fourier}, we can write $$\label{eq:Hxpsi} \hat{H} \psi = \sum_{\k} e^{i\k\r} \hat{H}_{\k} u_{\k} (\r),$$ where $\hat{H}_{\k} \equiv e^{-i\k\r} \hat{H} e^{i\k\r}$. Routines for calculating $\hat{H}_{\k} u_{\k} (\r)$ are already implemented in codes based on the Bloch theorem, and the sum in \eqref{eq:Hxpsi} can be evaluated by a simple summation or Fourier transform over $\k$ in the Brillouin zone. Hence, the computational complexity of the proposed approach is similar to that of conventional Bloch function methods, and it can be used directly in self-consistent DFT calculations. We also hypothesize that $L_1$ regularized Wannier modes will be useful for beyond-DFT approaches that can benefit from the finite range of electronic states, such as screened exchange and quantum Monte Carlo methods. V.O. was supported by the National Science Foundation under Award No. DMR-1106024 and used computing resources at the National Energy Research Scientific Computing Center, which is supported by the US DOE under Contract No. DE-AC02-05CH11231. The research of R.C. is partially supported by the US DOE under Contract No. DE-FG02-05ER25710. The research of S.O. was supported by the Office of Naval Research (Grant N00014-11-1-719). We acknowledge Dr. J. C. Budich, who pointed out the equivalence of the shift-orthogonality conditions for the Wannier functions and the orthonormality of their Bloch functions. Proof of shift orthogonality theorems {#section:SOcriteria} ===================================== Here we present a proof of Theorem \[theorem:equivalence\]. For a given function $h$ on $\Omega$, define its sampling function at the lattice points of $\Omega$ by $$h_d(\r)=\sum_{\R\in \Omega}h(\r)\delta_{\R}(\r), \label{equation:definitionHd}$$ where $\delta_{\R}(\r)$ is the Dirac delta function $\delta(\r-\R)$.
Let $N$ denote the number of primitive cells inside the supercell (i.e. $N=\det(L)$). Note that $$\tilde{h}_d(\k+\G)= N\sum_{\G'} \tilde{h}(\k+\G'). \label{equation:hHat}$$ To see this, observe that $$\begin{aligned} ~~&\tilde{h}_d(\k+\G) \\ =& \sum_{\R\in \Omega}|\Omega| (\tilde{h}*\tilde{\delta}_{\R})(\k+\G) \\ =& \sum_{\R\in \Omega}\sum_{\G'} \sum_{\k'\in \BZ}|\Omega| \tilde{h}(\k'+\G')\frac{e^{-i (\k+\G-(\k'+\G'))\R}}{|\Omega|} \\ =& \sum_{\G'} \sum_{\k'\in \BZ} \tilde{h}(\k'+\G') \sum_{\R\in \Omega} e^{-i (\k-\k')\R} \\ =& \sum_{\G'} \sum_{\k'\in \BZ} \tilde{h}(\k'+\G') N {\mathbf{1}}_{\{\k'=\k\}} \\ =& N\sum_{\G'} \tilde{h}(\k+\G'). \end{aligned}$$ In view of \eqref{equation:shiftOrthogonalDefinition}, a supercell-periodic function $\psi(\r)$ is shift-orthogonal if and only if for all $\R'\in \Omega$: $$\begin{aligned} \delta_{\R'\0} &= \langle \psi(\r-\R'), \psi(\r) \rangle = \int_\Omega \psi^*(\r-\R')\psi(\r)\,\dr \notag \\ &=\int_\Omega \mathring{\psi}(\R'-\r) \psi(\r)\,\dr =(\mathring{\psi}*\psi)(\R'), \label{equation:delR0}\end{aligned}$$ where $\mathring{\psi}$ is defined by $\mathring{\psi}(\r)=\psi^*(-\r)$. Now, let $h(\r)=(\mathring{\psi}*\psi)(\r)$. In view of \eqref{equation:definitionHd} and \eqref{equation:delR0}, $\psi(\r)$ is shift-orthogonal if and only if $$h_d(\r)=\delta(\r).$$ Taking the Fourier transform of both sides, using \eqref{equation:hHat} and the fact that the Fourier transform of $(\mathring{\psi}*\psi)(\r)$ is $|\tilde{\psi}(\k+\G)|^2$, we find that $\psi(\r)$ is shift-orthogonal if and only if for all $\k\in \BZ$, $$N \sum_{\G'} |\tilde{\psi}(\k+\G')|^2=\frac{1}{|\Omega|}.$$
--- abstract: 'In our previous works [@MT1; @MT2], a relationship between Hermite’s two approximation problems and Schlesinger transformations of linear differential equations has been clarified. In this paper, we study $\tau$-functions associated with holonomic deformations of linear differential equations by using Hermite’s two approximation problems. As a result, we present a determinant formula for the ratio of $\tau$-functions ($\tau$-quotient).' author: - Masao Ishikawa - Toshiyuki Mano - Teruhisa Tsuda date: 'October 17, 2017; Revised June 20, 2018' title: 'Determinant structure for $\tau$-function of holonomic deformation of linear differential equations' --- Introduction {#sec:intro} ============ There are many results concerning determinant formulas for solutions to the Painlevé equations; see [@JKM1; @JKM2; @KMO; @Mas; @Tsu1; @Tsu2] and references therein. After pioneering works by D. Chudnovsky and G. Chudnovsky [@CC1; @CC2], an underlying relationship between the theory of rational approximation for functions and the Painlevé equations has been clarified by several authors [@Mag; @Man; @MT1; @MT2; @Yam]. This relationship provides a natural explanation for the determinant structure of solutions to the Painlevé equations. Among them, the second and third authors of this paper studied the relationship between two approximation problems by Hermite (i.e. the Hermite–Padé approximation and the simultaneous Padé approximation) and isomonodromic deformations of Fuchsian linear differential equations. They constructed a class of Schlesinger transformations for Fuchsian linear differential equations using Hermite’s two approximation problems and a duality between them. As an application, they obtained particular solutions written in terms of iterated hypergeometric integrals to the higher-dimensional Hamiltonian systems of Painlevé type (that were introduced in [@Tsu3]). For details refer to [@MT1; @MT2]. 
In the present paper, we use Hermite’s two approximation problems to study the determinant structure for $\tau$-functions of holonomic deformations of linear differential equations which have regular or irregular singularities of arbitrary Poincaré rank. The main theorem (Theorem \[thm:mainthm\]) is stated as follows: fix an integer $L \geq 2$ and consider a system of linear differential equations of rank $L$ $$\label{eq:intro1} \frac{dY}{dx}=\left( \sum_{\mu=1}^N\sum_{j=0}^{r_{\mu}}A_{\mu,-j}(x-a_{\mu})^{-j-1}-\sum_{j=1}^{r_{\infty}}A_{\infty,-j}x^{j-1} \right)Y,$$ where $A_{\mu,-j}$ and $A_{\infty,-j}$ are $L\times L$ matrices independent of $x$. Let $\tau _0$ be [*Jimbo–Miwa–Ueno’s $\tau$-function*]{} (see (\[eq:defofomega\]) and (\[eq:defoftau\])) associated with a holonomic deformation of (\[eq:intro1\]). We apply the Schlesinger transformation to (\[eq:intro1\]) that shifts the characteristic exponents at $x=\infty$ by $${\boldsymbol n}=((L-1)n,-n,\dots,-n) \in {\mathbb Z}^L$$ for a positive integer $n$. Let $\tau_n$ denote the $\tau$-function associated with the resulting equation. Then the ratio $\tau_n/\tau_0$ ($\tau$-quotient) admits a representation in terms of an $(L-1)n\times (L-1)n$ block Toeplitz determinant: $$\label{eq:mainresult} \frac{ \tau_n}{\tau_0}=\mbox{const.} D_n, \quad D_n=\begin{vmatrix} B^{1}_n((L-1)n,n) & \cdots & B^{L-1}_n((L-1)n,n) \end{vmatrix}$$ with $B_m^{i}(k,l)$ being a $k\times l$ rectangular Toeplitz matrix (see (\[eq:Toeplitz\])) whose entries are specified by the asymptotic expansion of a fundamental system of solutions to (\[eq:intro1\]) around $x=\infty$. It should be noted that our result is valid for general solutions, not only for particular solutions such as rational solutions or Riccati solutions. This paper is organized as follows. In Section \[sec:HPandSP\], we review Hermite’s two approximation problems and a certain duality between them.
This duality due to Mahler [@Mah] will be a key point for the construction of Schlesinger transformations in a later section. We remark that the normalization in this paper is slightly different from that in the previous ones [@MT1; @MT2]. Therefore, we formulate the two approximation problems in a form suitable to the present case. In Section \[sec:detPade\], we give determinant representations for the approximation polynomials and the remainder of the approximation problems. In our method, these representations turn out to be the origin of the determinant structure of the $\tau$-quotient. In Section \[sec:holonomic\], we briefly review the theory of holonomic deformation of a linear differential equation following [@Jimbo-Miwa; @Jimbo-Miwa-Ueno]. In Section \[sec:Stransformation\], we construct the Schlesinger transformations of linear differential equations by applying the approximation problems. Section \[sec:dettau\] is the main part of this paper. We present the determinant formula for the $\tau$-quotient (see (\[eq:mainresult\]) or Theorem \[thm:mainthm\]) based on the coincidence between the Schlesinger transformations and the approximation problems. A certain determinant identity (see (\[eq:key\])) plays a crucial role in the proof. Section \[sec:particularsol\] is devoted to an application of our result. We demonstrate how to construct particular solutions to the holonomic deformation equations such as the Painlevé equations. We then find some inclusion relations among solutions to holonomic deformations and, typically, obtain a natural understanding of the determinant formulas for hypergeometric solutions to holonomic deformations. In Appendix \[secA:proofofdet\], we give a proof of the determinant identity applied in the proof of Theorem \[thm:mainthm\].
Though this determinant identity can be proved directly, we will prove its Pfaffian analogue in a general setting and then reduce it to the determinant case, in order to simplify the proof and to gain a better perspective. #### *Acknowledgement.* This work was supported by a grant-in-aid from the Japan Society for the Promotion of Science (Grant Numbers 16K05068, 17K05270, 17K05335, 25800082 and 25870234). Hermite–Padé approximation and simultaneous Padé approximation {#sec:HPandSP} ============================================================== In this section, we review Hermite’s two approximation problems in a suitable form, which will be utilized to construct Schlesinger transformations for linear differential equations in a later section. Let $L$ be an integer larger than one. Given a set of $L$ formal power series $$f_0(w), f_1(w), \dots , f_{L-1}(w)\in {\mathbb C}[\![w]\!]$$ with the conditions $$\label{eq:fcond} f_0(0)=1,\quad f_i(0)=0 \quad (i \neq 0)$$ the [*Hermite–Padé approximation*]{} is formulated as follows: find polynomials $$Q^{(i)}_j(w)\in {\mathbb C}[w]\quad (0\leq i,j \leq L-1)$$ such that $$\begin{aligned} &\deg Q^{(i)}_j(w)\leq n-1+\delta_{i,j}, \label{eq:degree} \\ &Q^{(i)}_i(w)f_i(w)+\sum_{j\neq i}wQ^{(i)}_j(w)f_j(w)=w^{Ln}(\delta_{i,0}+O(w)), \label{eq:HPkinji} \\ &Q^{(i)}_i(0)=1 \quad (i \neq 0). \label{eq:seikika} \end{aligned}$$ There exists a unique set of polynomials $\{Q^{(i)}_j(w)\}$ under a certain generic condition on the coefficients of $f_i(w)$. The precise condition will be stated later in terms of the non-vanishing of some block Toeplitz determinants; see (\[eq:unique\]) in Section \[sec:detPade\]. In turn, the [*simultaneous Padé approximation*]{} is formulated as follows: find polynomials $$P^{(i)}_j(w)\in {\mathbb C}[w] \quad (0\leq i,j \leq L-1)$$ such that $$\begin{aligned} &\deg P^{(i)}_j(w)\leq n(L-1)-1+\delta_{i,j}, \label{eq:sP1} \\ &f_0(w)P^{(i)}_j(w)-f_j(w)w^{1-\delta_{i,j}}P^{(i)}_0(w)=O(w^{nL}).
\label{eq:sP2}\end{aligned}$$ Under the same generic condition as above, for each $i$ the polynomials $P^{(i)}_j(w)$ $(0 \leq j \leq L-1)$ are uniquely determined up to simultaneous multiplication by constants. Interestingly enough, these two approximations are in a dual relation; cf. [@Mah]. \[th:mahler\] Let $\{Q^{(i)}_j(w)\}$ and $\{P^{(i)}_j(w)\}$ be the Hermite–Padé approximant and the simultaneous Padé approximant, respectively. Define $L\times L$ matrices $Q(w)$ and $P(w)$ by $$\begin{gathered} Q(w)=\left(w^{1-\delta_{i,j}}Q^{(i)}_j(w)\right)_{0 \leq i,j \leq L-1} \in {{\mathbb C}[w]}^{L\times L}, \\ P(w)=\left(w^{1-\delta_{i,j}}P^{(i)}_j(w)\right)_{0 \leq i,j \leq L-1} \in {{\mathbb C}[w]}^{L\times L}. \end{gathered}$$ Then it holds that $$Q(w){}^{\rm T} P(w)=w^{nL}\cdot D,$$ where $D$ is a diagonal matrix independent of $w$. This can be proved in a procedure similar to Theorem 1.3 in [@MT2]. We can choose the normalization of $P^{(i)}_j(w)$ such that $D=I$ (the identity matrix). We will henceforth adopt this normalization. Determinant representation of Hermite–Padé approximants {#sec:detPade} ======================================================= In this section, we give a concrete description of the solution to the Hermite–Padé approximation problem (\[eq:degree\])–(\[eq:seikika\]) in Section \[sec:HPandSP\]. Without loss of generality, we may assume $f_0(w)=1$ since the approximation conditions remain unchanged if we replace $\{f_0,f_1,\dots,f_{L-1}\}$ by $\{1,f_1/f_0,\dots,f_{L-1}/f_0\}.$ Therefore, we assume $f_0(w)=1$ in the sequel. Let us write the power series as $$f_i(w)=\displaystyle\sum_{k=0}^{\infty}b^i_kw^k \quad (0 \leq i \leq L-1).$$ Then we see that $b^0_0=1$ and $b^0_k=0$ $(k\neq 0)$ from $f_0(w)=1$ and that $b^i_0=0$ $(i \neq 0)$ from (\[eq:fcond\]). Besides we set $b^i_k=0$ $(k<0)$ for notational convenience. 
Let us write the polynomials $Q^{(i)}_j(w)$ as $$Q^{(i)}_j(w)=c^i_{j,0}+c^i_{j,1}w+\cdots +c^i_{j,n-1+\delta_{i,j}}w^{n-1+\delta_{i,j}} \quad (0 \leq i,j \leq L-1)$$ with $c^i_{j,k}$ being the coefficient of $w^k$. The left-hand side of (\[eq:HPkinji\]) reads as $$\begin{aligned} Q^{(i)}_if_i+\sum_{j\neq i} w Q^{(i)}_j f_j = \sum_{k=0}^{\infty} \left( \sum_{l=0}^nb^i_{k-l}c^i_{i,l}+\sum_{j\neq i}\sum_{l=0}^{n-1}b^j_{k-1-l}c^i_{j,l} \right)w^k.\end{aligned}$$ Hence the approximation condition (\[eq:HPkinji\]) can be interpreted as a system of linear equations for the unknowns $c^i_{j,k}$: $$\begin{aligned} &\sum_{l=0}^nb^0_{k-l}c^0_{0,l}+\sum_{j\neq 0}\sum_{l=0}^{n-1}b^j_{k-1-l}c^0_{j,l}=0 \quad (0 \leq k \leq Ln-1), \label{eq:senkeikinji01} \\ &\sum_{l=0}^nb^0_{Ln-l}c^0_{0,l}+\sum_{j\neq 0}\sum_{l=0}^{n-1}b^j_{Ln-1-l}c^0_{j,l}=1 \label{eq:senkeikinji02}\end{aligned}$$ for $i=0$; and $$\sum_{l=0}^nb^i_{k-l}c^i_{i,l}+\sum_{j\neq i}\sum_{l=0}^{n-1}b^j_{k-1-l}c^i_{j,l}=0 \quad (1 \leq k \leq Ln) \label{eq:senkeikinjii}$$ for $i \neq 0$. Let us introduce the column vectors $${\boldsymbol c}^i={}^{\rm T} \left({\boldsymbol c}^i_0,{\boldsymbol c}^i_1,\dots ,{\boldsymbol c}^i_{L-1}\right) \in {\mathbb C}^{Ln+1} \quad (0 \leq i \leq L-1),$$ where $${\boldsymbol c}^i_j= \left(c^i_{j,0},\dots,c^i_{j,n-1+\delta_{i,j}} \right),$$ and introduce the $k\times l$ rectangular Toeplitz matrix $$\label{eq:Toeplitz} B_m^{i}(k,l)=\left(b_{m+\alpha-\beta}^{i}\right)_{ \begin{subarray}{l} 1\leq \alpha\leq k \\ 1\leq \beta\leq l \end{subarray}} =\begin{pmatrix} b_m^{i} & b_{m-1}^{i} & \cdots & b_{m-l+1}^{i} \\ b_{m+1}^{i} & b_{m}^{i} & \cdots & b_{m-l+2}^{i} \\ \vdots & \vdots & \ddots & \vdots \\ b_{m+k-1}^{i} & b_{m+k-2}^{i} & \cdots & b_{m+k-l}^{i} \end{pmatrix}$$ for the sequence $\{b^i_{j}\}$. 
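The rectangular Toeplitz blocks (\[eq:Toeplitz\]) are easy to generate programmatically. The following sketch (the helper name is ours) builds $B^{i}_m(k,l)$ from a coefficient sequence, with the convention $b_j=0$ for $j<0$ as stated above:

```python
import numpy as np

def toeplitz_block(b, m, k, l):
    """Rectangular Toeplitz matrix B_m(k, l): entry (alpha, beta) is
    b[m + alpha - beta] for 1 <= alpha <= k, 1 <= beta <= l, with b[j] = 0
    for j < 0 (and beyond the supplied coefficients)."""
    def coef(j):
        return b[j] if 0 <= j < len(b) else 0
    return np.array([[coef(m + a - beta) for beta in range(1, l + 1)]
                     for a in range(1, k + 1)])

# e.g. for (b_0, b_1, b_2, b_3) = (1, 2, 3, 4):
B = toeplitz_block([1, 2, 3, 4], 1, 2, 3)
# first row (alpha = 1): b_1, b_0, b_{-1}  ->  2, 1, 0
```

Assembling such blocks side by side, as in the matrices $\mathcal{B}^i$ below, reduces the approximation conditions to a standard linear solve.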
Then the linear equations (\[eq:senkeikinji01\]) and (\[eq:senkeikinji02\]) are summarized as a matrix form $$\label{eq:renritsu0} \mathcal{B}^0{\boldsymbol c}^0={}^{\rm T}(0,\ldots,0,1),$$ where $\mathcal{B}^0$ is a square matrix of order $Ln+1$ defined by $$\mathcal{B}^0=\begin{pmatrix} B^0_0(Ln+1,n+1) & B^1_{-1}(Ln+1,n) & \cdots & B^{L-1}_{-1}(Ln+1,n) \end{pmatrix}.$$ Similarly, (\[eq:senkeikinjii\]) can be rewritten into $$\label{eq:renritsui} \mathcal{B}^i{\boldsymbol c}^i= {\boldsymbol 0}= {}^{\rm T}(0,\ldots,0) \quad (i \neq 0)$$ where $\mathcal{B}^i$ ($i \neq 0$) are $Ln \times (Ln+1)$ matrices defined by $$\mathcal{B}^i=\begin{pmatrix} B^0_0(Ln,n) & \cdots & B^{i-1}_0(Ln,n) & B^i_1(Ln,n+1) & B^{i+1}_0(Ln,n) & \cdots & B^{L-1}_0(Ln,n) \end{pmatrix}.$$ Solving (\[eq:renritsu0\]) and (\[eq:renritsui\]) by Cramer’s rule, we have the determinant expressions of the approximants $Q^{(i)}_j(w)$: $$\begin{aligned} Q^{(0)}_0(w) &= \frac{1}{ \left| \mathcal{B}^0\right| } \begin{vmatrix} B^0_0(Ln,n+1) & B^1_{-1}(Ln,n) & \cdots & B^{L-1}_{-1}(Ln,n) \\ 1,w,\dots,w^n & {\boldsymbol 0} & \cdots & {\boldsymbol 0} \end{vmatrix}, \\ Q^{(0)}_j(w) &= \frac{1}{ \left| \mathcal{B}^0\right| } \begin{vmatrix} B^0_0(Ln,n+1) & \cdots & B^j_{-1}(Ln,n) & \cdots & B^{L-1}_{-1}(Ln,n) \\ {\boldsymbol 0} & \cdots & 1,w,\dots ,w^{n-1} & \cdots & {\boldsymbol 0} \end{vmatrix} \quad (j \neq 0) $$ for $i=0$; and $$\label{eq:Qij} Q^{(i)}_j(w) =\frac{(-1)^{(L+i)n}}{ \left| \mathcal{B} \right| } \left| \begin{array}{ccc} &{\mathcal B}^{(i)}& \\ {\boldsymbol 0} & \underbrace{1, w, \ldots, w^{n-1+\delta_{i,j}}}_{\text{\rm $j$th block}} &{\boldsymbol 0} \end{array} \right|$$ for $i \neq 0$, where $\mathcal{B}$ is a square matrix of order $Ln$ defined by $$\mathcal{B}=\begin{pmatrix} B^0_0(Ln,n) & B^{1}_0(Ln,n) & \cdots & B^{L-1}_0(Ln,n) \end{pmatrix}.$$ In the latter case we have used the normalization (\[eq:seikika\]). 
Note that $$\label{eq:unique} \left|\mathcal{B}^0 \right|\neq 0 \quad \text{and} \quad \left| \mathcal{B} \right| \neq 0$$ are the conditions for the unique existence of $\{Q^{(i)}_j\}$, which we will impose throughout this paper. Next, we consider $$\label{eq:remind} \rho^i(w)=Q^{(i)}_if_i+\sum_{j\neq i}w Q_j^{(i)} f_j \quad (0 \leq i \leq L-1)$$ which are the remainders of the Hermite–Padé approximation problem (\[eq:degree\])–(\[eq:seikika\]). For $i=0$, we have $$\rho^0(w)=w^{Ln}(1+O(w)).$$ For $i \neq 0$, substituting (\[eq:Qij\]) shows that $$\begin{aligned} \rho^i(w) &= \frac{(-1)^{(L+i)n}}{|{\mathcal B} |} \begin{vmatrix} B^0_0(Ln,n) & \cdots & B^i_1(Ln,n+1) & \cdots & B^{L-1}_0(Ln,n) \\ wf_0,\dots,w^nf_0 & \cdots & f_i,wf_i,\dots,w^nf_i & \cdots & wf_{L-1},\dots,w^nf_{L-1} \end{vmatrix} \\ &=O(w^{nL+1}).\end{aligned}$$ Introduce the [*block Toeplitz determinants*]{} $$\begin{aligned} D_n&=|{\mathcal B}| =\begin{vmatrix} B^{0}_0(Ln,n) & \cdots & B^{L-1}_0(Ln,n) \end{vmatrix} \nonumber \\ &= \begin{vmatrix} B^{1}_n((L-1)n,n) & \cdots & B^{L-1}_n((L-1)n,n) \end{vmatrix} \label{eq:defDn}\end{aligned}$$ and $$\begin{aligned} E_n^{i,j} &= \begin{vmatrix} B^{0}_0(Ln,n) & \cdots & B^{i}_1(Ln,n+1) & \cdots & B^{L-1}_0(Ln,n) \\ B^{0}_{Ln+j-1}(1,n) & \cdots & B^{i}_{Ln+j}(1,n+1) & \cdots & B^{L-1}_{Ln+j-1}(1,n) \end{vmatrix} \nonumber \\ &= \begin{vmatrix} B^{1}_n((L-1)n,n) & \cdots & B^{i}_{n+1}((L-1)n,n+1) & \cdots & B^{L-1}_n((L-1)n,n) \\ B^{1}_{Ln+j-1}(1,n) & \cdots & B^{i}_{Ln+j}(1,n+1) & \cdots & B^{L-1}_{Ln+j-1}(1,n) \end{vmatrix}, \label{eq:defE}\end{aligned}$$ where we have used $b^0_0=1$ and $b^0_k=0$ $(k\neq 0)$.
Thus, the coefficients of $\rho^i(w)=w^{Ln}\sum_{j=1}^{\infty}\rho^i_jw^{j}$ are written as $$\label{eq:reprhoij} \rho^i_j= (-1)^{(L+i)n} \frac{E_n^{i,j}}{D_n}.$$ Holonomic deformation of a system of linear differential equations {#sec:holonomic} ================================================================== In this section, we briefly review the theory of holonomic deformations of linear differential equations following [@Jimbo-Miwa; @Jimbo-Miwa-Ueno]. We consider an $L\times L$ system of linear differential equations which has regular or irregular singularities at $x=a_1,\dots,a_N,a_{\infty}=\infty$ on $\mathbb{P}^1$ with Poincaré rank $r_{\mu}$ $(\mu=1,\dots,N,\infty)$, respectively: $$\label{lineardiffeq} \frac{dY}{dx}=A(x)Y,$$ where $$A(x)=\sum_{\mu=1}^N\sum_{j=0}^{r_{\mu}}A_{\mu,-j}(x-a_{\mu})^{-j-1}-\sum_{j=1}^{r_{\infty}}A_{\infty,-j}x^{j-1} \in {\mathbb C}(x)^{L\times L}$$ and $A_{\mu,-j}$ and $A_{\infty,-j}$ are constant matrices independent of $x$. We assume that $A_{\mu,-r_{\mu}}$ $(\mu=1,\dots,N,\infty)$ is diagonalizable as $$A_{\mu,-r_{\mu}}=G^{(\mu)}T^{(\mu)}_{-r_{\mu}}G^{(\mu)-1},$$ where the diagonal matrix $T^{(\mu)}_{-r_{\mu}}=(t^{(\mu)}_{-r_{\mu}, \alpha} \delta_{\alpha,\beta})_{0 \leq \alpha,\beta \leq L-1}$ satisfies $$\begin{aligned} t^{(\mu)}_{-r_{\mu},\alpha} \neq t^{(\mu)}_{-r_{\mu},\beta} \quad &\text{if} \quad\alpha\neq\beta, \quad r_{\mu}\geq 1, \\ t^{(\mu)}_{0,\alpha} \not\equiv t^{(\mu)}_{0,\beta} \mod \mathbb{Z} \quad &\text{if} \quad \alpha\neq\beta, \quad r_{\mu}=0.\end{aligned}$$ Let us introduce the diagonal matrices $$T^{(\mu)}(x)= \left(e^{(\mu)}_{\alpha}(x)\,\delta_{\alpha,\beta} \right)_{0 \leq \alpha ,\beta \leq L-1}$$ for $ \mu=1,\ldots,N,\infty$ with $$e^{(\mu)}_{\alpha}(x)=\sum_{j=1}^{r_{\mu}}t^{(\mu)}_{-j, \alpha} \frac{ {z_\mu }^{-j}}{-j}+t^{(\mu)}_{0,\alpha}\log z_{\mu}, \quad z_{\mu}= \left\{\begin{array}{ll} x-a_{\mu} &(1 \leq \mu \leq N) \\ x^{-1} & (\mu=\infty). 
\end{array} \right.$$ Then, we can take sectors $\mathscr{S}^{(\mu)}_k$ $(1 \leq k \leq 2r_{\mu})$ centered at $a_{\mu}$ and there exists a unique fundamental system of solutions to (\[lineardiffeq\]) having the asymptotic expansion of the form $$Y(x) \simeq G^{(\mu)}\hat{Y}^{(\mu)}(x)e^{T^{(\mu)}(x)}, \quad \hat{Y}^{(\mu)}(x) =I+Y^{(\mu)}_1z_{\mu}+Y^{(\mu)}_2{z_{\mu}}^2+\cdots$$ in each $\mathscr{S}^{(\mu)}_k$. Note that $\hat{Y}^{(\mu)}(x)$ are in general divergent and that even around the same point $x=a_\mu$ these power series in two different sectors may differ by a left multiplication of some constant matrix ([*Stokes phenomena*]{}). Without loss of generality, we henceforth assume $G^{(\infty)}=I$. If we start with the fundamental system of solutions normalized by the asymptotic expansion $$\label{eq:fundsol} Y(x) \simeq \hat{Y}^{(\infty)}(x)e^{T^{(\infty)}(x)}, \quad \hat{Y}^{(\infty)}(x) =I+Y^{(\infty)}_1z_{\infty}+Y^{(\infty)}_2{z_{\infty}}^2+\cdots$$ in the sector $\mathscr{S}^{(\infty)}_1$ around $x=\infty$, then the same solution behaves as $$Y(x) \simeq G^{(\mu)}\hat{Y}^{(\mu)}(x)e^{T^{(\mu)}(x)} {S^{(\mu)}_{k-1}}^{-1} \cdots {S^{(\mu)}_1}^{-1} C^{(\mu)}$$ in a different sector $\mathscr{S}^{(\mu)}_k$, where $C^{(\mu)}$ and $S^{(\mu)}_j$ are invertible constant matrices called the [*connection matrix*]{} and [*Stokes multiplier*]{}, respectively. We consider a deformation of (\[lineardiffeq\]) by choosing $a_1,\ldots,a_N$ and $t^{(\mu)}_{-j, \alpha}$ $(\mu=1,\dots,N,\infty$; $1 \leq j \leq r_{\mu}$; $0 \leq \alpha \leq L-1)$ as its independent variables such that $T^{(\mu)}_0$, $C^{(\mu)}$ and $S^{(\mu)}_j$ are kept invariant. Such a deformation is called a [*holonomic deformation*]{}. Let $d$ denote the exterior differentiation with respect to the deformation parameters $\{a_{\mu},t^{(\mu)}_{-j,\alpha}\}$.
The fundamental system of solutions $Y(x)$ specified by (\[eq:fundsol\]) is subject to the holonomic deformation if and only if it satisfies $$\label{defeq} dY(x)=\Omega(x)Y(x),$$ where $\Omega(x)$ is a matrix-valued $1$-form given as $$\Omega(x)=\sum_{\mu=1}^{N}B^{(\mu)}(x)da_{\mu} +\sum_{\mu=1,\dots,N,\infty}\sum_{j=1}^{r_{\mu}}\sum_{\alpha=0}^{L-1}B^{(\mu)}_{-j,\alpha}(x)dt^{(\mu)}_{-j,\alpha},$$ whose coefficients $B^{(\mu)}(x)$ and $B^{(\mu)}_{-j,\alpha}(x)$ are rational functions in $x$. From the integrability condition of (\[lineardiffeq\]) and (\[defeq\]), we obtain a system of nonlinear differential equations for $A(x)$ and $G^{(\mu)}$: $$\label{eq:deform} dA(x)=\frac{\partial \Omega}{\partial x} (x)+[\Omega(x), A(x)], \quad dG^{(\mu)}=\Theta^{(\mu)}G^{(\mu)} \quad (1 \leq \mu \leq N).$$ We remark that $\Omega(x)$ and $\Theta^{(\mu)}$ are computable from $A(x)$ and $ G^{(\mu)}$ by a rational procedure; see [@Jimbo-Miwa-Ueno] for details. The $1$-form $$\label{eq:defofomega} \omega=-\sum_{\mu=1,\dots,N,\infty} {\rm tr}\, \underset{x=a_\mu}{\rm Res} \hat{Y}^{(\mu)}(x)^{-1}\frac{\partial \hat{Y}^{(\mu)}}{\partial x}(x)\, dT^{(\mu)}(x)$$ is closed, i.e. $d \omega =0$, for any solution to (\[eq:deform\]). Hence we can define the [*$\tau$-function*]{} $\tau=\tau(\{a_{\mu},t^{(\mu)}_{-j,\alpha}\})$ by $$\label{eq:defoftau} d\log \tau=\omega.$$ Construction of Schlesinger transformations {#sec:Stransformation} =========================================== In this section, we construct the Schlesinger transformation that shifts the characteristic exponents at $x=\infty$ of the system of linear differential equations (\[lineardiffeq\]) as $${\boldsymbol t}^{(\infty)}_0= \left( t^{(\infty)}_{0,0},\dots,t^{(\infty)}_{0,L-1} \right )\mapsto {\boldsymbol t}^{(\infty)}_0+{\boldsymbol n},$$ where ${\boldsymbol n}=((L-1)n,-n,\dots,-n) \in {\mathbb Z}^L$ and $n$ is a positive integer. 
Write the power series part of $Y(x) \simeq \hat{Y}^{(\infty)}(x)e^{T^{(\infty)}(x)}$ (see (\[eq:fundsol\])) as $$\label{eq:Y} \hat{Y}^{(\infty)}(x)=\Phi (w) =\left( \phi_{i,j}(w)\right)_{0 \leq i,j \leq L-1}, \quad \phi_{i,j}(w)=\sum_{k=0}^{\infty}a^{i,j}_k w^k, $$ where $$w=z_{\infty}=\frac{1}{x}.$$ Namely, $\Phi(w)$ is an $L \times L$ matrix whose entries are formal power series in $w$, and its constant term is the identity matrix, i.e. $\phi_{i,j}(0)=\delta_{i,j}$. Factorizing $\Phi(w)$ into two matrices as $$\Phi(w) = \left( \frac{ \phi_{i,j}(w) }{\phi_{j,j}(w) }\right)_{0 \leq i,j \leq L-1} \cdot \mbox{diag} \left(\phi_{j,j}(w) \right)_{0 \leq j \leq L-1},$$ we then define new power series $f_i(w)$ from the first column of the former by $$\label{eq:deffi} f_i(w) =\sum_{k=0}^{\infty}b^i_k w^k = \frac{\phi_{i, 0}(w)}{\phi_{0, 0}(w)} \quad (0\leq i\leq L-1).$$ Note that the coefficients of the [*diagonal free*]{} part $$\left( \frac{ \phi_{i,j}(w) }{\phi_{j,j}(w) }\right)_{0 \leq i,j \leq L-1}-I$$ can be determined recursively by (\[lineardiffeq\]); see [@Jimbo-Miwa-Ueno Proposition 2.2]. Since it holds that $f_0(w)=1$ and $f_i(0)=0$ $(i \neq 0)$, we can apply the Hermite–Padé approximation problem (\[eq:degree\])–(\[eq:seikika\]) and the simultaneous Padé approximation problem (\[eq:sP1\])–(\[eq:sP2\]) considered in Section \[sec:HPandSP\] to the set of $L$ formal power series $\{f_0(w),\dots, f_{L-1}(w)\}$. Define the matrices $$\begin{aligned} Q(w) &=\left(w^{1-\delta_{i,j}}Q^{(i)}_j(w)\right)_{0 \leq i,j \leq L-1} \in {{\mathbb C}[w]}^{L \times L}, \\ R(x)&=x^nQ(x^{-1})\in {{\mathbb C}[x]}^{L \times L}.\end{aligned}$$ Recall here that $\deg Q^{(i)}_j(w) \leq n-1+ \delta_{i,j}$. The result is stated as follows. The polynomial matrix $R(x)$ provides the representation matrix of the Schlesinger transformation for [(\[lineardiffeq\])]{} which shifts the characteristic exponents at $x=\infty$ by ${\boldsymbol n}=((L-1)n,-n,\dots,-n) \in {\mathbb Z}^L$. 
From Theorem \[th:mahler\], we have $|Q(w)|\cdot | P(w)|=w^{L^2n}$. The conditions for the degrees (\[eq:degree\]) and (\[eq:sP1\]) show that $|Q(w)|$ is of degree at most $Ln$ and $|P(w)|$ at most $L(L-1)n$, respectively. Consequently, it holds that $|Q(w)|=cw^{Ln}$ and $|P(w)|=c^{-1}w^{L(L-1)n}$ for some constant $c \neq 0$; and thus $|R(x)|=c$. This implies that $R(x)$ is an invertible matrix at any $x\in{\mathbb C}$. Therefore, the transformation $Y(x)\mapsto R(x)Y(x)$ does not affect the regularity or singularity of $Y(x)$ at any $x\in{\mathbb C}$. Let us now examine the effect of this transformation at $x=\infty$. It follows from the approximation conditions (\[eq:HPkinji\]) and (\[eq:seikika\]) that $$\begin{aligned} \nonumber R(x)\Phi(w)&=w^{-n}Q(w)\Phi(w) \\ \nonumber &=w^{-n} \left(Q^{(i)}_i\phi_{i, j}+\sum_{k\neq i}wQ^{(i)}_k\phi_{k, j} \right)_{0 \leq i,j \leq L-1} \\ &= \left(I+O(w)\right) \mbox{diag} \left(w^{(L-1)n}, w^{-n}, \dots, w^{-n} \right). \label{eq:henkantenkai} \end{aligned}$$ Noticing the expression $$e^{T^{(\infty)}(x)}= \mbox{diag} \left(w^{t^{(\infty)}_{0,j}} \right)_{0 \leq j \leq L-1} e^{\sum_{j=1}^{r_{\infty}}T^{(\infty)}_{-j}\frac{w^{-j}}{-j}}$$ of the exponential part of $Y(x)$, we can conclude that $Y(x)\mapsto R(x)Y(x)$ induces the Schlesinger transformation that shifts the characteristic exponents at $x=\infty$ as ${\boldsymbol t}^{(\infty)}_0\mapsto {\boldsymbol t}^{(\infty)}_0+{\boldsymbol n}$. Taking the determinants of both sides of (\[eq:henkantenkai\]), we have $|R(x)| \cdot |\Phi (w)|=1+O(w)$. Combining this with $|\Phi (w)|=1+O(w)$ yields $c=| R(x)|=1$. Determinant structure of $\tau$-quotients {#sec:dettau} ========================================= In this section, we investigate the effect of the Schlesinger transformation on the $\tau$-function.
We consider the Schlesinger transformation of a linear differential equation (\[lineardiffeq\]), which shifts the characteristic exponents at $x=\infty$ by $${\boldsymbol n}=\left((L-1)n,-n,\dots ,-n\right)\in {\mathbb Z}^L$$ for a positive integer $n$; see Section \[sec:Stransformation\]. Let $\tau_n$ denote the $\tau$-function associated with the holonomic deformation of the resulting linear differential equation after the Schlesinger transformation, while $\tau_0$ denotes that of the original (\[lineardiffeq\]). First, we shall look at a relation between $\tau_0$ and $\tau_1$. According to [@Jimbo-Miwa Theorem 4.1] it holds that $$\frac{ \tau_1}{\tau_0} =\mbox{const.} \begin{vmatrix} G^{(\infty, \infty)(1,1)}_{1,0} & G^{(\infty,\infty)(1,1)}_{2,0} & \cdots & G^{(\infty, \infty)(1,1)}_{L-1,0} \\ G^{(\infty, \infty)(1,2)}_{1,0} & G^{(\infty,\infty)(1,2)}_{2,0} & \cdots & G^{(\infty,\infty)(1,2)}_{L-1,0} \\ \vdots & \vdots & \ddots & \vdots \\ G^{(\infty,\infty)(1,L-1)}_{1,0} & G^{(\infty,\infty)(1,L-1)}_{2,0} & \cdots & G^{(\infty,\infty)(1,L-1)}_{L-1,0} \end{vmatrix},$$ where $G^{(\infty,\infty)(1,l)}=(G^{(\infty,\infty)(1,l)}_{i, j})$ $(l\in {\mathbb Z})$ is a special case of the [*characteristic matrices*]{} and is defined by the following generating function: $$\sum_{l\in{\mathbb Z}}G^{(\infty,\infty)(1,l)}w^l=\Phi(w)$$ or equivalently $$G^{(\infty,\infty)(1,l)}_{i,j}=a^{i,j}_l$$ with $a^{i,j}_l=0$ for $l<0$. Thus we find that $$\begin{aligned} \label{eq:tau1/tau0} \frac{\tau_1}{\tau_0} =\mbox{const.}\begin{vmatrix} a^{1,0}_1 & a^{2,0}_1 & \cdots & a^{L-1,0}_1 \\ a^{1,0}_2 & a^{2,0}_2 & \cdots & a^{L-1,0}_2 \\ \vdots & \vdots & \ddots & \vdots \\ a^{1,0}_{L-1} & a^{2,0}_{L-1} & \cdots & a^{L-1,0}_{L-1} \end{vmatrix}.\end{aligned}$$ Here we note the following elementary fact. 
\[lem:simple\] Let $$\begin{aligned} \sum_{k=1}^{\infty}\alpha^{i}_k w^k, \quad \sum_{k=1}^{\infty}\beta^{i}_k w^k \quad (1 \leq i \leq L-1) \quad \text{and} \quad \sum_{k=0}^{\infty}\gamma_kw^k \end{aligned}$$ be formal power series, where $\gamma_0=1$. If the relation $$\sum_{k=1}^{\infty}\alpha^{i}_k w^k = \left(\sum_{k=1}^{\infty}\beta^{i}_kw^k\right) \left(\sum_{k=0}^{\infty}\gamma_kw^k\right)$$ among the formal power series holds for each $i$, then the equality $$\begin{vmatrix} \alpha^{1}_1 & \alpha^{2}_1 & \cdots & \alpha^{L-1}_1 \\ \alpha^{1}_2 & \alpha^{2}_2 & \cdots & \alpha^{L-1}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \alpha^{1}_{L-1} & \alpha^{2}_{L-1} & \cdots & \alpha^{L-1}_{L-1} \end{vmatrix}=\begin{vmatrix} \beta^{1}_1 & \beta^{2}_1 & \cdots & \beta^{L-1}_1 \\ \beta^{1}_2 & \beta^{2}_2 & \cdots & \beta^{L-1}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \beta^{1}_{L-1} & \beta^{2}_{L-1} & \cdots & \beta^{L-1}_{L-1} \end{vmatrix}$$ regarding their coefficients holds. It can be verified straightforwardly by using $\alpha_k^i=\sum_{l+m=k} \beta_l^i \gamma_m$. 
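Lemma \[lem:simple\] is easy to check numerically. The sketch below (illustrative only; all coefficients are made up) multiplies arbitrary series $\sum_{k\geq 1}\beta^i_k w^k$ by a unit series $\sum_{k\geq 0}\gamma_k w^k$ with $\gamma_0=1$ via the Cauchy product and compares the two coefficient determinants for $L=3$:

```python
def cauchy(beta, gamma, N):
    """Coefficients of w^1..w^N in (sum_{k>=1} beta_k w^k) * (sum_{k>=0} gamma_k w^k);
    beta is indexed from 1 (index 0 unused), gamma from 0."""
    return [sum(beta[l] * gamma[k - l] for l in range(1, k + 1))
            for k in range(1, N + 1)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

L = 3
# made-up coefficients: beta[i][k] = beta^i_k for k >= 1, and a unit series gamma
beta = {1: [None, 2, -1, 5], 2: [None, 3, 4, -2]}
gamma = [1, 7, -3, 2]
alpha = {i: cauchy(beta[i], gamma, L - 1) for i in (1, 2)}

det_alpha = det2([[alpha[1][0], alpha[2][0]], [alpha[1][1], alpha[2][1]]])
det_beta = det2([[beta[1][1], beta[2][1]], [beta[1][2], beta[2][2]]])
assert det_alpha == det_beta  # both equal 11 here, as the lemma predicts
```

The cancellation is visible already at this size: $\alpha^i_1=\beta^i_1$ and $\alpha^i_2=\beta^i_2+\beta^i_1\gamma_1$, so the $\gamma_1$ contributions drop out of the $2\times 2$ determinant.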
Returning to our situation, we have $$\begin{aligned} \label{eq:a=b} \begin{vmatrix} a^{1,0}_1 & a^{2,0}_1 & \cdots & a^{L-1,0}_1 \\ a^{1,0}_2 & a^{2,0}_2 & \cdots & a^{L-1,0}_2 \\ \vdots & \vdots & \ddots & \vdots \\ a^{1,0}_{L-1} & a^{2,0}_{L-1} & \cdots & a^{L-1,0}_{L-1} \end{vmatrix}=\begin{vmatrix} b^{1}_1 & b^{2}_1 & \cdots & b^{L-1}_1 \\ b^{1}_2 & b^{2}_2 & \cdots & b^{L-1}_2 \\ \vdots & \vdots & \ddots & \vdots \\ b^{1}_{L-1} & b^{2}_{L-1} & \cdots & b^{L-1}_{L-1} \end{vmatrix} \end{aligned}$$ from Lemma \[lem:simple\] since $b^i_k$ and $a^{i,0}_k$ are mutually related by (see (\[eq:Y\]) and (\[eq:deffi\])) $$f_i(w)=\sum_{k=0}^{\infty}b^i_kw^k=\frac{\phi_{i,0}(w)}{\phi_{0,0}(w)}= \frac{\sum_{k=0}^{\infty}a^{i,0}_kw^k}{1+O(w)}.$$ It thus follows from (\[eq:tau1/tau0\]) that $$\begin{aligned} \label{eq:tau1/tau0=b} \frac{ \tau_1}{\tau_0}=\mbox{const.}\begin{vmatrix} b^{1}_1 & b^{2}_1 & \cdots & b^{L-1}_1 \\ b^{1}_2 & b^{2}_2 & \cdots & b^{L-1}_2 \\ \vdots & \vdots & \ddots & \vdots \\ b^{1}_{L-1} & b^{2}_{L-1} & \cdots & b^{L-1}_{L-1} \end{vmatrix}.\end{aligned}$$ Next, we shall track how the entries of $\Phi(w)$ are changed after the Schlesinger transformation. 
Define $$\overline{\Phi} (w) =\left( \overline{\phi}_{i,j}(w)\right)_{0 \leq i,j \leq L-1}, \quad \overline{\phi}_{i,j}(w)=\sum_{k=0}^{\infty} \overline{a}^{i,j}_k w^k$$ by $$R(w)\Phi(w)=\overline{\Phi}(w) \ \mbox{diag} \left( w^{(L-1)n}, w^{-n}, \dots, w^{-n}\right).$$ In particular, the entry $\overline{\phi}_{i,0}(w)$ is obtained from the remainder of the Hermite–Padé approximation as (see (\[eq:remind\]) and (\[eq:deffi\])) $$\begin{aligned} \overline{\phi}_{i,0}(w) &=w^{-Ln} \left(Q^{(i)}_i(w)\phi _{i,0}(w)+\sum_{j\neq i}wQ^{(i)}_j(w)\phi_{j,0}(w)\right) \\ &=w^{-Ln}\phi_{0,0}(w)\rho^i(w).\end{aligned}$$ Let $$\overline{f}_{i}(w)=\sum_{k=0}^{\infty}\overline{b}^{i}_k w^k =\frac{\overline{\phi}_{i,0}(w)}{\overline{\phi}_{0,0}(w)}$$ and $\rho^i(w)=w^{Ln}\sum_{k=1}^{\infty}\rho^i_k w^{k}$ for $1 \leq i \leq L-1$ as in the previous sections. Namely, overlined symbols denote the quantities after the Schlesinger transformation that shifts the characteristic exponents at $x=\infty$ as ${\boldsymbol t}^{(\infty)}_0\mapsto {\boldsymbol t}^{(\infty)}_0+{\boldsymbol n}$. 
Then, by applying Lemma \[lem:simple\] twice, we have $$\label{b=a=rho} \begin{vmatrix} \overline{a}^{1,0}_1 & \overline{a}^{2,0}_1 & \cdots & \overline{a}^{L-1,0}_1 \\ \overline{a}^{1,0}_2 & \overline{a}^{2,0}_2 & \cdots & \overline{a}^{L-1,0}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \overline{a}^{1,0}_{L-1} & \overline{a}^{2,0}_{L-1} & \cdots & \overline{a}^{L-1,0}_{L-1} \end{vmatrix} = \begin{vmatrix} \overline{b}^{1}_1 & \overline{b}^{2}_1 & \cdots & \overline{b}^{L-1}_1 \\ \overline{b}^{1}_2 & \overline{b}^{2}_2 & \cdots & \overline{b}^{L-1}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \overline{b}^{1}_{L-1} & \overline{b}^{2}_{L-1} & \cdots & \overline{b}^{L-1}_{L-1} \end{vmatrix} = \begin{vmatrix} \rho^{1}_1 & \rho^{2}_1 & \cdots & \rho^{L-1}_1 \\ \rho^{1}_2 & \rho^{2}_2 & \cdots & \rho^{L-1}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \rho^{1}_{L-1} & \rho^{2}_{L-1} & \cdots & \rho^{L-1}_{L-1} \end{vmatrix}.$$ Finally, combining (\[eq:tau1/tau0=b\]) and (\[b=a=rho\]) yields that $$\frac{\tau_{n+1}}{\tau_n} = \frac{\overline{\tau}_{1}}{\overline{\tau}_0} =\mbox{const.}\begin{vmatrix} \overline{b}^{1}_1 & \overline{b}^{2}_1 & \cdots & \overline{b}^{L-1}_1 \\ \overline{b}^{1}_2 & \overline{b}^{2}_2 & \cdots & \overline{b}^{L-1}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \overline{b}^{1}_{L-1} & \overline{b}^{2}_{L-1} & \cdots & \overline{b}^{L-1}_{L-1} \end{vmatrix}=\mbox{const.} \begin{vmatrix} \rho^{1}_1 & \rho^{2}_1 & \cdots & \rho^{L-1}_1 \\ \rho^{1}_2 & \rho^{2}_2 & \cdots & \rho^{L-1}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \rho^{1}_{L-1} & \rho^{2}_{L-1} & \cdots & \rho^{L-1}_{L-1} \end{vmatrix}. $$ Substituting (\[eq:reprhoij\]) in the above, we obtain $$\frac{\tau_{n+1}}{\tau_{n}} =\mbox{const.}\, {D_n}^{-L+1}\det (E^{i,j}_n)_{1 \leq i,j\leq L-1}. \label{eq:t(n+1)/t(n)}$$ Now we state the main theorem. \[thm:mainthm\] Consider a holonomic deformation of [(\[lineardiffeq\])]{}. 
Let $\tau_0$ be the $\tau$-function associated with [(\[lineardiffeq\])]{} and let $\tau_n$ be the $\tau$-function associated with the transformed equation from [(\[lineardiffeq\])]{} by the Schlesinger transformation that shifts the characteristic exponents at $x=\infty$ by $${\boldsymbol n}=\left((L-1)n,-n,\dots ,-n\right)\in {\mathbb Z}^L$$ for a positive integer $n$. Then the following determinant formula for the $\tau$-quotient holds[:]{} $$\label{eq:taun/tau0} \frac{\tau_n}{\tau_0}=\mbox{\rm const.}\,D_n,$$ where $D_n$ is the block Toeplitz determinant defined by [(\[eq:Toeplitz\])]{} and [(\[eq:defDn\])]{} and its entries $b^i_k$ are specified by [(\[eq:Y\])]{} and [(\[eq:deffi\])]{}, i.e. the asymptotic solution to [(\[lineardiffeq\])]{} at $x=\infty$. We have the equality $$\label{eq:key} D_{n+1}{D_n}^{L-2}=\det (E^{i,j}_n)_{1 \leq i,j \leq L-1},$$ which will be shown in Appendix \[secA:proofofdet\]. Therefore, (\[eq:t(n+1)/t(n)\]) implies $$\frac{\tau_{n+1}}{\tau_n}=\mbox{const.}\, \frac{D_{n+1}}{D_n}.$$ It is clear from (\[eq:defDn\]) and (\[eq:tau1/tau0=b\]) that $ \tau_1/\tau_0=\mbox{const.}D_1. $ Hence the theorem is proved. In the case of second-order Fuchsian linear differential equations, the isomonodromic deformations are governed by the Garnier systems, and the formula (\[eq:taun/tau0\]) has been established in [@Man]. Jimbo and Miwa [@Jimbo-Miwa] treat determinant representations of $\tau$-quotients for arbitrary Schlesinger transformations, whose matrix entries are written in terms of the characteristic matrices. However, the characteristic matrices themselves are, in general, too complicated to compute explicitly. On the other hand, Theorem \[thm:mainthm\] above gives a much simpler representation of $\tau$-quotients in terms of block Toeplitz determinants, though the Schlesinger transformations are restricted to a specific direction shifting the characteristic exponents at one point by ${\boldsymbol n}=((L-1)n,-n,\dots,-n)$.
Note also that our formula involves only the first column of $\Phi(w)$, where $\Phi(w)$ is the power series part of the asymptotic solution to (\[lineardiffeq\]). It is expected that more general Schlesinger transformations are related to other types of approximation problems beyond Hermite–Padé type. It would be an interesting problem to explore such relationships. Consider a $2 \times 2$ system of linear differential equations $$\label{eq:p2_lin} \frac{dY}{dx}= \left( \begin{pmatrix} 1& \\ &-1 \end{pmatrix}x^2 + \begin{pmatrix} &u\\ -2 \mu/u & \end{pmatrix}x +\begin{pmatrix} \mu + t/2& -u \lambda \\ -2 (\lambda \mu+ \theta)/u &-\mu-t/2 \end{pmatrix} \right) Y$$ with an irregular singularity of Poincaré rank $3$ at $x=\infty$. There exists a unique fundamental system of solutions having the asymptotic behavior of the form $$Y \simeq \Phi e^{T^{(\infty)}}, \quad \Phi =\left(\phi_{i,j}(w)\right)_{i,j=0,1} =I+O(w)$$ at $x=\infty$, where $w=1/x$ and $$T^{(\infty)}= \begin{pmatrix}1 & \\ & -1 \end{pmatrix}\frac{w^{-3}}{3} +\begin{pmatrix}t & \\ & -t \end{pmatrix} \frac{w^{-1}}{2} +\begin{pmatrix}\theta & \\ & -\theta \end{pmatrix}\log w.$$ It thus follows that $$f(w)=\sum_{k=1}^\infty b_k w^k = \frac{\phi_{1,0}(w)}{\phi_{0,0}(w)} = -\frac{\mu}{u} w -\frac{\theta+\lambda \mu}{u} w^2 + \frac{\mu(\mu+t)}{2u}w^3+\cdots.$$ The holonomic deformation of (\[eq:p2\_lin\]) amounts to its compatibility condition with $$\frac{\partial Y}{\partial t}= \left( \begin{pmatrix} 1 & \\ &-1 \end{pmatrix}\frac{x}{2}+ \begin{pmatrix} & u/2 \\ -\mu /u& \end{pmatrix} \right)Y,$$ which reads $$\frac{d \lambda}{dt}= \lambda^2+\mu+\frac{t}{2}, \quad \frac{d \mu}{dt}=-2 \lambda \mu -\theta, \quad \frac{d }{dt} \log u = -\lambda;$$ the first two equations are equivalent to the Painlevé II equation (see [@Jimbo-Miwa Appendix C]): $$\frac{d^2\lambda}{dt^2}= 2 \lambda^3+t \lambda + \alpha, \quad \alpha = \frac{1}{2} -\theta.$$ In this case, since $L=2$ the block Toeplitz determinant $D_n$ (see
(\[eq:defDn\])) reduces to an ordinary Toeplitz determinant and Theorem \[thm:mainthm\] shows that the $\tau$-quotient $\tau_n/\tau_0$ is equal to $$D_n=\begin{vmatrix} b_n&b_{n-1}& \cdots & b_1 \\ b_{n+1}& b_n & \cdots & b_2 \\ \vdots& \vdots & \ddots & \vdots \\ b_{2n-1}&b_{2n-2} &\cdots&b_n \end{vmatrix}$$ up to multiplication by constants. It is interesting to note that if we substitute the rational solution $\theta=1/2$, $\lambda=0$ and $\mu=-t/2$ then $f(w)=\sum_{k=1}^\infty b_k w^k$ can be expressed as a logarithmic derivative of a shifted Airy function ${\rm Ai} (t+w^{-2})$; this phenomenon has been studied closely in connection with integrable systems [@IKN; @JKM1; @KMO] (see also [@CC2]). Particular solutions to holonomic deformation {#sec:particularsol} ============================================= In this section, as an application of the results in the previous section, we present a method for constructing particular solutions to holonomic deformation equations such as the Painlevé equations. Consider the $L \times L$ system of linear differential equations (\[lineardiffeq\]). Take a new point $a_{N+1}\in{\mathbb C}\setminus \{a_1,\dots,a_N\}$ where (\[lineardiffeq\]) is non-singular. The solution (\[eq:fundsol\]) normalized at $x=\infty$ can be expanded around $x=a_{N+1}$ as follows: $$Y(x)=Y(a_{N+1}) \Psi(w), \quad \Psi(w)=Y(a_{N+1})^{-1} \sum_{n=0}^{\infty}Y^{(n)}(a_{N+1})\frac{{w}^n}{n!},$$ where $w=x-a_{N+1}$ and $Y^{(n)}(x)$ denotes the $n$th derivative of $Y(x)$ with respect to $x$.
Write the power series part $\Psi(w)$ as $$\Psi(w)= \left( \psi_{i,j}(w)\right)_{0 \leq i,j \leq L-1}$$ and put $$f_i(w)=\frac{\psi_{i,0}(w)}{\psi_{0,0}(w)} \quad (0 \leq i \leq L-1).$$ We apply the Hermite–Padé approximation problem (\[eq:degree\])–(\[eq:seikika\]) to the set of formal power series $\{f_0=1,f_1,\dots,f_{L-1}\}$, and introduce the matrices $$\begin{aligned} Q(w)&=\left(w^{1-\delta_{i,j}}Q^{(i)}_j(w)\right)_{0 \leq i,j \leq L-1} \in {\mathbb C}[w]^{L\times L}, \\ R(x)&=(x-a_{N+1})^{-n}Q(x-a_{N+1})\in {\mathbb C}[(x-a_{N+1})^{-1}]^{L\times L}\end{aligned}$$ made from its approximants $Q^{(i)}_j(w)$. Using $R(x)$, we define the rational function matrix $$S(x)=Y(a_{N+1})R(\infty)^{-1}R(x)Y(a_{N+1})^{-1}.$$ Then $\widetilde{Y}(x)=S(x)Y(x)$ satisfies a system of differential equations of the form $$\label{eq:lineardiffeq2} \frac{d\widetilde{Y}}{dx}= \left(\sum_{\mu=1}^N\sum_{j=0}^{r_{\mu}} \widetilde{A}_{\mu,-j}(x-a_{\mu})^{-j-1} -\sum_{j=1}^{r_{\infty}}\widetilde{A}_{\infty,-j}x^{j-1}+\widetilde{A}_{N+1}(x-a_{N+1})^{-1}\right)\widetilde{Y}.$$ This means that the transformation $Y(x)\mapsto \widetilde{Y}(x)=S(x)Y(x)$ introduces one additional regular singularity $x=a_{N+1}$ into (\[lineardiffeq\]). It is clear by definition of $S(x)$ that the characteristic exponents of (\[eq:lineardiffeq2\]) at the additional regular singularity $x=a_{N+1}$ read ${\boldsymbol n}=((L-1)n,-n,\dots,-n)$. Furthermore, we see that if $Y(x)$ is subject to a holonomic deformation of (\[lineardiffeq\]), then $\widetilde{Y}(x)$ is also subject to that of (\[eq:lineardiffeq2\]) since $Y(x)$ and $\widetilde{Y}(x)$ have the same monodromy. Consequently, at the level of holonomic deformations, we have a certain [*inclusion relation*]{} between solutions as described below. Suppose for simplicity that (\[lineardiffeq\]) is Fuchsian, i.e. $r_\mu=0$ for any $\mu=1,\ldots,N,\infty$.
One can associate with (\[lineardiffeq\]) an $(N+1)$-tuple $$M=\{(m_{1,1},m_{1,2},\dots,m_{1,k_1}), \ldots,(m_{N,1},m_{N,2},\dots,m_{N,k_N}), (m_{\infty,1},m_{\infty,2},\dots,m_{\infty,k_\infty}) \}$$ of partitions of $L$, called the [*spectral type*]{}, which indicates how the characteristic exponents overlap at each of the $N+1$ singularities $x=a_\mu$ ($\mu=1,\ldots,N,\infty$). Note that by means of the spectral type the number of accessory parameters in (\[lineardiffeq\]) is estimated at $$2+(N-1)L^2- \sum_{i=1, \ldots, N, \infty} \sum_{j=1}^{k_i} {m_{i,j}}^2;$$ see e.g. [@oshima]. The argument above provides a procedure to obtain a new system (\[eq:lineardiffeq2\]) of spectral type $\widetilde{M}=M \cup (L-1,1)$ from the original system (\[lineardiffeq\]) of spectral type $M$ while keeping the monodromy. Therefore, the general solution to the deformation equation of (\[lineardiffeq\]) gives rise to a particular solution to the deformation equation of (\[eq:lineardiffeq2\]). This phenomenon is exemplified by the fact that the Garnier system in $N+1$ variables includes the Garnier system in $N$ variables as its particular solution; cf. [@Tsu0 Theorem 6.1]. It is also interesting to mention that if the original (\[lineardiffeq\]) is [*rigid*]{}, i.e. has no accessory parameters, as is the case for Gauß's hypergeometric equation, then the deformation equation of (\[eq:lineardiffeq2\]) possesses a solution written in terms of that of the rigid system (\[lineardiffeq\]) itself. In this case, our procedure gives a natural interpretation to Suzuki’s recent work [@Su1], in which a list of rigid systems or hypergeometric equations appearing in particular solutions to the higher-order Painlevé equations is presented. Let us consider a $2 \times 2$ Fuchsian system of differential equations $$\label{eq:hypergeo} \frac{dY}{dx}=\left(\frac{A_0}{x}+\frac{A_1}{x-1}\right)Y$$ with three regular singularities $x=0,1, \infty$, whose spectral type is $\{(1,1),(1,1),(1,1)\}$.
We can assume without loss of generality that $ |A_i|=0$ and $A_{\infty}=-A_0-A_1$ is diagonal, i.e. $A_{\infty}=\mbox{diag}(\kappa_1, \kappa_2)$. It is well known that the entries of a fundamental system of solutions to (\[eq:hypergeo\]) can be written in terms of Gauß's hypergeometric function. If we take an arbitrary point $t\in\mathbb{C}\setminus\{0,1\}$ and apply the procedure above, then we obtain a system of differential equations of the form $$\label{eq:hypP6} \frac{d\widetilde{Y}}{dx}=\left(\frac{\widetilde{A}_0}{x}+\frac{\widetilde{A}_1}{x-1}+\frac{\widetilde{A}_t}{x-t}\right)\widetilde{Y};$$ it is a $2 \times 2$ Fuchsian system with four regular singularities $x=0,1, \infty, t$, whose spectral type is $\{(1,1),(1,1),(1,1),(1,1)\}$. We know from the construction that the monodromy of (\[eq:hypP6\]) is independent of $t$, i.e. (\[eq:hypP6\]) is subject to an isomonodromic deformation with a deformation parameter $t$. Thus we can derive a particular solution written in terms of Gauß's hypergeometric functions to the Painlevé VI equation with constant parameters $$\alpha= \frac{(\theta_{\infty}-1)^2}{2}, \quad \beta=\frac{-{\theta_0}^2}{2}, \quad \gamma=\frac{{\theta_1}^2}{2}, \quad \delta=\frac{1-4n^2}{2},$$ where $\theta_{\infty}=\kappa_1-\kappa_2$, $\theta_i=\mbox{tr}A_i= \mbox{tr} \widetilde{A}_i$ $(i=0,1)$ and $n\in\mathbb{Z}_{\geq 0}$. Refer to [@gausspainleve; @Jimbo-Miwa] for the Painlevé VI equation. Proof of an identity for determinants {#secA:proofofdet} ===================================== In this appendix we derive the determinant identity (\[eq:key\]), which is used to verify the main theorem of this paper. We first prove its Pfaffian analogue in a general setting to gain a better perspective, and then reduce it to the determinant case. The reader can refer to [@IO1] for various Pfaffian identities and their applications. Let $A$ be a totally ordered set, whose elements we call letters. Let $A^*$ denote the set of words over $A$.
For a word $I\in A^*$ and its permutation $J$, $\sgn(I,J)$ denotes the sign of the permutation that converts $I$ into $J$ if $I$ has no duplicate letter, and $0$ otherwise. Given a word $I=i_1i_2\cdots i_{2n}\in A^*$ of length $ \sharp I= 2n$, its permutation $J=j_{1}j_{2}\cdots j_{2n}$ is called a [*perfect matching*]{} on $I$ if $\sigma(2k-1)<\sigma(2k)$ for $1\leq k\leq n$ and $\sigma(2k-1)<\sigma(2k+1)$ for $1\leq k\leq n-1$, where $\sigma \in \Sym_{2n}$ and $j_{1}j_{2}\cdots j_{2n}=i_{\sigma(1)}i_{\sigma(2)}\cdots i_{\sigma(2n)}$. This perfect matching is designated by the configuration in the $xy$ plane which contains $2n$ vertices $v_{k}=(k,0)$ ($1\leq k\leq2n$) labeled with $i_k$ and $n$ arcs above the $x$ axis connecting the vertices $v_{\sigma(2k-1)}$ and $v_{\sigma(2k)}$ ($1\leq k\leq n$). Let $\Fam(I)$ denote the set of all perfect matchings on $I$. For a perfect matching $J=j_1j_2\cdots j_{2n}\in\Fam(I)$, we call $\Mat(J)=\{(j_{2k-1},j_{2k})\,|\,1\leq k\leq n\}$ the set of arcs in $J$. It is easy to see that the sign $\sgn(I,J)$ equals $(-1)^c$, where $c$ is the number of crossings of the arcs in the configuration of $J$. For example, the set of perfect matchings on a word $I=1234$ reads $$\Fam(I)=\{1234,1324,1423\}.$$ If we take a perfect matching $J=1423\in\Fam(I)$ then we have the set of arcs $\Mat(J)=\{(1,4),(2,3)\}$ and $J$ is designated by the following configuration: four vertices labeled $1,2,3,4$ on the $x$ axis, with the nested arcs $(1,4)$ and $(2,3)$ drawn above them. There is no crossing of the arcs and certainly $\sgn(I,J)=1$ holds. Let $f$ be a map which assigns an element of a commutative ring to each pair $(i,j)\in A\times A$ such that $f(j,i)=-f(i,j)$. Such a map is called a [*skew symmetric*]{} map.
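The matching combinatorics above is straightforward to implement. The following sketch (ours, for illustration only) enumerates the perfect matchings on a word and computes $\sgn(I,J)$ as $(-1)^c$ by counting arc crossings, reproducing the example $\Fam(1234)=\{1234,1324,1423\}$:

```python
def matchings(positions):
    """Yield all perfect matchings on a list of vertices, as lists of arcs (p, q);
    here the letters of the word double as the positions along the x axis."""
    if not positions:
        yield []
        return
    first, rest = positions[0], positions[1:]
    for k in range(len(rest)):
        for sub in matchings(rest[:k] + rest[k + 1:]):
            yield [(first, rest[k])] + sub

def sign(arcs):
    """sgn(I, J) = (-1)^c, where c counts crossing pairs of arcs."""
    c = sum(1 for (p1, q1) in arcs for (p2, q2) in arcs if p1 < p2 < q1 < q2)
    return (-1) ** c

for m in matchings([1, 2, 3, 4]):
    word = "".join(str(v) for arc in m for v in arc)
    print(word, sign(m))
# prints: 1234 1 / 1324 -1 / 1423 1
```

Only $1324$, whose arcs $(1,3)$ and $(2,4)$ cross once, carries a negative sign.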
For each perfect matching $J=j_{1}j_{2}\cdots j_{2n}\in\Fam(I)$, we define the weight $\wt_{f}(J)$ as $$\wt_{f}(J)=\sgn(I,J)\prod_{(i,j)\in\Mat(J)}f(i,j).$$ The [*Pfaffian*]{} $\Pf_{f}(I)$ of $f$ corresponding to the word $I=i_{1}i_{2}\cdots i_{2n}$ is the sum of the weights $\wt_{f}(J)$, where $J$ runs over all perfect matchings on $I$, i.e., $$\Pf_f(I)= \sum_{J \in \Fam(I)} \wt_f(J).$$ We use the convention that $\Pf_f(I)=1$ if $I=\emptyset$. It is known that $$\label{eq:indexchange} \Pf_{f}(K)=\sgn(I,K)\Pf_{f}(I),$$ where $K$ is a permutation of $I$. In particular, $\Pf_{f}(I)=0$ if $I$ has a duplicate letter. For example, the Pfaffian of $f$ corresponding to $I=1234$ is given as $$\Pf_f(I)= f(1,2)f(3,4)-f(1,3)f(2,4)+f(1,4)f(2,3).$$ The following identity is the Plücker relation for Pfaffians, which is originally due to Ohta [@O1] and Wenzel [@W1]. Ohta’s proof is by algebraic arguments, and Wenzel employs the Pfaffian form. The proof we present here is a more combinatorial one based on the same idea as in [@IW1]. \[thm:Ohta-Wenzel\] Let $I,J,K\in A^{\ast}$ be words such that $\sharp I$ and $\sharp J$ are odd and $\sharp K$ is even. Then it holds that $$\begin{aligned} &\sum_{i\in I}\sgn(IJ,(I\setminus\{i\})iJ) \Pf_f((I\setminus\{i\})K)\Pf_f(iJK) \nonumber\\ &\qquad =\sum_{j\in J}\sgn(IJ,Ij(J\setminus\{j\}))\Pf_f(IjK)\Pf_f((J\setminus\{j\})K). \label{eq:Ohta-Wenzel}\end{aligned}$$ We put $W_1=KI$, $W_2=JK$ and $W=W_1W_2$. Let ${\mathfrak G}$ denote the set of perfect matchings on $W$ in which there is exactly one arc connecting a vertex in $W_1$ and a vertex in $W_2$ and all the other arcs are between vertices in $W_1$ or between vertices in $W_2$. For example, if $I=123$, $J=456$ and $K=78$ then $W_1=KI=78123$, $W_2=JK=45678$ and $W=W_1W_2=7812345678$.
The following configuration designates such a perfect matching on $W$, $P=7283154867 \in {\mathfrak G}$, in which the arc $(1,5)$ is the only arc connecting a letter in $W_1$ and a letter in $W_2$: ten vertices labeled $7,8,1,2,3,4,5,6,7,8$ on the $x$ axis, with the arcs $(7,2)$, $(8,3)$, $(1,5)$, $(4,8)$ and $(6,7)$ drawn above them. For $i\in W_1$ and $j\in W_2$, let ${\mathfrak G}_{i,j}$ denote the subset of ${\mathfrak G}$ having the arc $(i,j)$; thereby, ${\mathfrak G}=\biguplus_{i \in W_1, j \in W_2} {\mathfrak G}_{i,j}$. Let us consider the sums $ \Omega =\sum_{P\in {\mathfrak G} }\wt_f(P)$ and $\Omega_{i,j}=\sum_{P \in {\mathfrak G}_{i,j}}\wt_f(P)$; thereby, $$\label{eq:Omega} \Omega=\sum_{i \in W_1,j \in W_2}\Omega_{i,j}.$$ It holds that $$\label{eq:claim} \sum_{j\in W_2}\Omega_{i,j}=\sgn(W,(W_1\setminus\{i\})iW_2)\Pf_f(W_1\setminus\{i\})\Pf_f(iW_2)$$ for $i\in W_1$. To check the claim, we first associate with each perfect matching $P\in{\mathfrak G}_{i,j}$ a pair $(P_1,P_2)$ of perfect matchings such that $P_1\in\Fam(W_1\setminus\{i\})$ and $P_2\in\Fam(iW_2)$ by shifting $i$ from the original position to the head of $W_2$ in the configuration.
For the above example $P \in {\mathfrak G}_{1,5}$, the vertex $1$ is shifted and the associated pair $(P_1,P_2)$ is thus illustrated as follows: the vertices now read $7,8,2,3$ followed by $1,4,5,6,7,8$, with $P_1$ consisting of the arcs $(7,2)$ and $(8,3)$, and $P_2$ of the arcs $(1,5)$, $(4,8)$ and $(6,7)$. Since $\sgn(W,P)=\sgn(W,(W_1\setminus\{i\})iW_2)\sgn(W_1\setminus\{i\},P_1)\sgn(iW_2,P_2)$, it is then clear that $$\wt_f(P)=\sgn(W,(W_1\setminus\{i\})iW_2)\wt_f(P_1)\wt_f(P_2),$$ which proves (\[eq:claim\]). By the same argument we obtain $$\label{eq:claim'} \sum_{i\in W_1}\Omega_{i,j}=\sgn(W,W_1j(W_2\setminus\{j\}))\Pf_f(W_1j)\Pf_f(W_2\setminus\{j\})$$ for $j \in W_2$. Hence (\[eq:Omega\]) leads to the desired identity (\[eq:Ohta-Wenzel\]) via (\[eq:claim\]) and (\[eq:claim’\]). Note that if $i\in K$ or $j\in K$ then there appears a repeated letter in the word, so we can remove these cases. \[cor:Knuth\] Let $I,K\in A^{\ast}$ be words such that $\sharp I$ and $\sharp K$ are even. Then it holds that $$\sum_{{i\in I}\atop{i\neq j}}\sgn(I,(I\setminus\{ i, j \})ij)\Pf_f((I\setminus\{i, j\})K)\Pf_f(ijK) =\Pf_f(IK)\Pf_f(K) \label{eq:Knuth}$$ for $j \in I$. Putting $\sharp J=1$, i.e. $J=j$, in Theorem \[thm:Ohta-Wenzel\] shows that $$\sum_{i\in I}\sgn(Ij,(I\setminus\{i\})ij)\Pf_f((I\setminus\{i\})K)\Pf_f(ijK) =\Pf_f(IjK)\Pf_f(K).$$ Write $I=i_1i_2\cdots i_n$ and $I'=i_1\cdots i_{k-1}ji_{k}\cdots i_{n}$. Then we have $\sgn(Ij,(I\setminus\{i\})ij)=\sgn(Ij,I')\sgn(I',(I'\setminus\{ i, j \})ij)$ and $\Pf_f(IjK)=\sgn(Ij,I')\Pf_f(I'K)$ by (\[eq:indexchange\]). Hence we obtain $$\sum_{ \begin{subarray}{l} i \in I' \\ i\neq j \end{subarray}} \sgn(I',(I'\setminus\{ i, j \})ij)\Pf_f((I'\setminus\{ i, j \})K)\Pf_f(ijK) =\Pf_f(I'K)\Pf_f(K),$$ which coincides with (\[eq:Knuth\]) if we replace $I'$ with $I$.
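As a numerical sanity check of Corollary \[cor:Knuth\] (not part of the proof), one can evaluate Pfaffians of words by expansion along the first letter and test the $\sharp I=4$, $\sharp K=2$ instance, which after working out the signs is the classical three-term relation $\Pf_f(01K)\Pf_f(23K)-\Pf_f(02K)\Pf_f(13K)+\Pf_f(03K)\Pf_f(12K)=\Pf_f(0123K)\Pf_f(K)$. The skew-symmetric map below is a made-up random integer matrix:

```python
import random

random.seed(1)
m = 6  # letters 0..5
M = [[0] * m for _ in range(m)]
for i in range(m):
    for j in range(i + 1, m):
        M[i][j] = random.randint(-9, 9)
        M[j][i] = -M[i][j]

def pf(word):
    """Pfaffian of the skew map M restricted to `word`, by expansion along word[0]."""
    if not word:
        return 1
    head, rest = word[0], word[1:]
    return sum((-1) ** k * M[head][rest[k]] * pf(rest[:k] + rest[k + 1:])
               for k in range(len(rest)))

I, K = [0, 1, 2, 3], [4, 5]
lhs = (pf([0, 1] + K) * pf([2, 3] + K)
       - pf([0, 2] + K) * pf([1, 3] + K)
       + pf([0, 3] + K) * pf([1, 2] + K))
rhs = pf(I + K) * pf(K)
assert lhs == rhs  # the three-term Plücker relation holds
```

Since the identity holds for every skew-symmetric map, any choice of seed gives the same agreement.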
\[cor:Knuth2\] Let $I,K\in A^{\ast}$ be words such that $\sharp I$ and $\sharp K$ are even with $\sharp I=2n$. Let $F_{f,K}$ be a skew symmetric map on $A \times A$ defined by $F_{f,K}(i,j)=\Pf_{f}(ijK)$. Then it holds that $$\Pf_{F_{f,K}}(I) =\sum_{J\in\Fam(I)}\wt_{F_{f,K}}(J) =\Pf_{f}(IK)\Pf_f(K)^{n-1}. \label{eq:Knuth2}$$ Let $I=i_1i_2\cdots i_{2n}$. We proceed by induction on $n$. If $n=1$, it is trivial. (If $n=2$, (\[eq:Knuth2\]) is implied by Corollary \[cor:Knuth\].) Assume the $n-1$ case holds for some $n>1$. In view of $ \Fam(I)=\bigcup_{k=2}^{2n} \bigcup_{ J\in\Fam(I\setminus\{i_{1},i_{k}\}) } \{i_{1}i_{k}J \}$, we observe by definition that $$\Pf_{F_{f,K}}(I) =\sum_{k=2}^{2n}\sum_{J\in\Fam(I\setminus\{i_{1},i_{k}\})} \sgn(I,i_{1}i_{k}J)\, F_{f,K}(i_{1},i_{k}) \prod_{(i,j)\in\Mat(J)}F_{f,K}(i,j).$$ Using $\sgn(I,i_{1}i_{k}J)=\sgn(I,i_{1}i_{k}(I\setminus\{i_{1},i_{k}\}))\sgn(I\setminus\{i_{1},i_{k}\},J)$, we have $$\Pf_{F_{f,K}}(I) =\sum_{k=2}^{2n} \sgn(I,i_{1}i_{k}(I\setminus\{i_{1},i_{k}\})) \Pf_f(i_{1}i_{k}K) \Pf_{F_{f,K}}(I\setminus\{i_{1},i_{k}\}).$$ Using the induction hypothesis, we have $\Pf_{F_{f,K}}(I\setminus\{i_{1},i_{k}\})=\Pf_{f}((I\setminus\{i_{1},i_{k}\})K)\Pf_{f}(K)^{n-2}$. By virtue of Corollary \[cor:Knuth\], it is immediate to verify (\[eq:Knuth2\]) for any $n$. From here we consider identities for determinants. Assume the alphabet $A$ is a disjoint union of $\overline{A}$ and $\underline{A}$, i.e. $A=\overline{A}\uplus\underline{A}$. Let $R$ and $C$ be any sets of letters which possess injections $R \to\overline A$ and $C \to \underline A$, denoted by $i\mapsto\overline i$ and $j\mapsto\underline j$, respectively. For instance, we let $A=R=C$ be the set of positive integers, and $\overline{A}$ and $\underline{A}$ the sets of odd and even integers, respectively. Then we may put $\odd{i}=2i-1$ and $\even{j}=2j$, which define the injections $R\to\overline A$ and $C\to\underline A$.
For a pair $I=i_1i_2\cdots i_{n}\in R^*$ and $J=j_1j_2\cdots j_{n}\in C^*$ of words of length $n$, we introduce the word $$m(I,J)=\odd{i_1}\even{j_1}\odd{i_2}\even{j_2}\cdots\odd{i_n}\even{j_n}\in A^*$$ of length $2n$. Let $g$ be a map which assigns an element of a commutative ring to each pair $(i,j)\in R\times C$. We then define a skew symmetric map $f_g$ on $A \times A$ as follows: $$\label{eq:fg} f_{g}(i,j)=\begin{cases} g(k,l)&\text{ if $i=\odd{k}\in\overline{A}$ and $j=\even{l}\in\underline{A}$,}\\ -g(l,k)&\text{ if $i=\even{k}\in\underline{A}$ and $j=\odd{l}\in\overline{A}$,}\\ 0&\text{ otherwise.} \end{cases}$$ We also use the notation $\Det_g(I,J)$ for the determinant $$\Det_g(I,J) =\det(g(i,j))_{i\in I,j\in J} =\det(g(i_{k},j_{l}))_{1\leq k,l\leq n},$$ where $I=i_1i_2\cdots i_{n} \in R^*$ and $J=j_1j_2\cdots j_{n} \in C^*$. A determinant can be expressed as a Pfaffian. \[prop:Pfaffian-determinant\] Let $I\in R^{\ast}$ and $J\in C^{\ast}$ be words such that $\sharp I=\sharp J$. Then it holds that $$\Pf_{f_g}(m(I,J)) =\Det_{g}(I,J).$$ Let $I=i_{1}i_{2}\cdots i_{n}$ and $J=j_{1}j_{2}\cdots j_{n}$. To compute $\Pf_{f_g}(m(I,J))$, we need to consider only perfect matchings on $m(I,J)=\odd{i_1}\even{j_1}\odd{i_2}\even{j_2}\cdots\odd{i_n}\even{j_n}\in A^*$ whose arcs are all between $\overline{A}$ and $\underline{A}$; recall (\[eq:fg\]). The set of such perfect matchings is in one-to-one correspondence with $\Sym_n$. To simplify the description, we first rearrange the word $m(I,J)$ to be $$m(I,J)'=\odd{i_1}\odd{i_2} \cdots\odd{i_n}\even{j_n}\cdots\even{j_2}\even{j_1}$$ and then consider its perfect matching $P_\sigma=\odd{i_1}\even{j_{\sigma(1)}}\odd{i_2}\even{j_{\sigma(2)}}\cdots\odd{i_n}\even{j_{\sigma(n)}}$ for each $\sigma\in\Sym_{n}$.
Because $\sgn(m(I,J),m(I,J)')=1$, we have $$\Pf_{f_g}(m(I,J))=\Pf_{f_g}(m(I,J)') =\sum_{\sigma \in \Sym_n} \sgn(m(I,J)',P_\sigma) \prod_{k=1}^n g(i_k,j_{\sigma(k)})$$ (see (\[eq:indexchange\])) and $$\sgn(m(I,J)',P_\sigma) =\sgn(m(I,J),P_\sigma)=\sgn \sigma,$$ which completes the proof. Combining Corollary \[cor:Knuth2\] and Proposition \[prop:Pfaffian-determinant\] leads to the following determinant identity, which we may call [*Sylvester’s identity*]{}. \[cor:Knuth2-det\] Let $I,K\in R^{\ast}$ and $J,M\in C^{\ast}$ be words such that $\sharp I=\sharp J=n$ and $\sharp K=\sharp M$. Let $G_{g,K,M}$ be a map on $R \times C$ defined by $G_{g,K,M}(i,j)=\Det_{g}(iK,jM)$. Then it holds that $$\Det_{G_{g,K,M}}(I,J) =\det\left(\Det_{g}(iK,jM)\right)_{i\in I, j\in J} =\Det_{g}(IK,JM)\Det_{g}(K,M)^{n-1}. \label{eq:Knuth2-det}$$ Finally, let us derive the determinant identity (\[eq:key\]) from Corollary \[cor:Knuth2-det\]. For notation, recall (\[eq:Toeplitz\]), (\[eq:defDn\]) and (\[eq:defE\]). Let $R=C=\{1,2,\dots,(L-1)(n+1)\}$ and put $$g(i,j)= b^s_{i-j+s(n+1)} \quad \text{with} \quad s= \left\lfloor \frac{j}{n+1} \right\rfloor+1$$ for $(i,j) \in R \times C$, where $\lfloor x\rfloor$ denotes the largest integer which does not exceed $x$. We take the words $I=i_1i_2\cdots i_{L-1} \in R^*$ and $J=j_1j_2 \cdots j_{L-1} \in C^*$ of length $L-1$ given by $$i_k=(L-1)n+k \quad \text{and} \quad j_k=(k-1)(n+1)+1 \quad \text{for} \quad 1 \leq k \leq L-1.$$ Let $[i,j]$ denote the word $i(i+1) \cdots j$ for $i < j$; e.g. $I=[(L-1)n+1,(L-1)(n+1)]$. We take the words $K=[1,(L-1)n]\in R^*$ and $M=[1,(L-1)(n+1)] \setminus J \in C^*$ of length $(L-1)n$.
Then it holds that $\Det_g(K,M)=D_n$ and $$\begin{aligned} \Det_g(IK,JM)&=(-1)^{\frac{L(L-1)n}{2}}\Det_g([1, (L-1)(n+1)],[1,(L-1)(n+1)] ) \\ &=(-1)^{\frac{L(L-1)n}{2}}D_{n+1}\end{aligned}$$ since both $IK$ and $JM$ can be rearranged to be $[1,(L-1)(n+1)]$ and $$\sgn(IK,[1,(L-1)(n+1)])\sgn(JM,[1,(L-1)(n+1)])=(-1)^\frac{L(L-1)n}{2}.$$ In a similar manner, it holds that $$\Det_g(i_k K, j_l M)=(-1)^{(L-l)n} E_n^{k,l}$$ for $1 \leq k, l \leq L-1$. Hence we obtain (\[eq:key\]) from (\[eq:Knuth2-det\]) with $n$ replaced by $L-1$.
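The determinant identity (\[eq:Knuth2-det\]) above can be verified numerically on a small instance. A sketch in exact integer arithmetic (the naive cofactor `det` and the worked $3\times3$ example are ours, not from the text); here $I=J=01$, $K=M=2$, so $n=2$ and the right-hand side carries $\Det_g(K,M)^{1}$:

```python
def det(m):
    """Naive cofactor expansion along the first row (exact on integers)."""
    if not m:
        return 1
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def minor(g, rows, cols):
    """Det_g(rows, cols): determinant of g restricted to the given words."""
    return det([[g[r][c] for c in cols] for r in rows])

# g on the letters {0, 1, 2}; bordered minors Det_g(iK, jM) for i in I, j in J
g = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
I, J, K, M = [0, 1], [0, 1], [2], [2]

lhs = det([[minor(g, [i] + K, [j] + M) for j in J] for i in I])
rhs = minor(g, I + K, J + M) * minor(g, K, M) ** (len(I) - 1)
assert lhs == rhs == -30
```

With the words chosen in increasing order, no rearrangement signs enter, and the check reduces to the classical Sylvester/Desnanot–Jacobi bordered-minor identity.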
--- author: - Federico Boschi - Ulisse Munari date: 'Received date..............; accepted date................' title: 'M31-RV evolution and its alleged multi-outburst pattern[^1] ' --- Introduction ============ Rich et al. (1989) discovered in 1988 a highly unusual stellar outburst in the Bulge of the Andromeda galaxy (M31), known since then as M31-RV (for “red variable”). The event peaked at M$_{bol} \approx -10$ mag and its spectrum closely resembled that of M supergiants, evolving from M0 I at discovery (Sept 5, 1988) to $>$M7 I about 58 days later, when the brightness in the $V$ band had dropped by at least 4 mag (Rich 1990). Two similar events have been later identified in our Galaxy, V4332 Sgr that exploded in 1994 (Martini et al. 1999) and V838 Mon that erupted in 2002 (Munari et al. 2002a, Bond et al. 2003, and references therein). The M31-RV event was characterized by a radiative luminosity intermediate between those of classical novae and supernovae. The mass of the ejected envelope (optically thick during the whole observed evolution) is uncertain, but it is certainly larger than in typical novae and much smaller than in supernovae. The radiative and kinetic energetics place therefore M31-RV, and by analogy also V4332 Sgr and V838 Mon, in the gap between classical novae and supernovae, making them stars of special interest. So far, few theoretical attempts to explain their highly peculiar nature have been pursued. Soker and Tylenda (2003), to explain the energetics and multi-maxima behaviour of V838 Mon, have suggested the merging of two main sequence stars of masses 0.1$-$0.5 M$_\odot$ and 1.5 M$_\odot$, with the second one expanding to large radii, low temperature and high luminosity in response to the frictional energy dissipation of the cannibalized less massive companion. A similar scenario has been proposed by Retter and Marom (2003).
They postulated the multi-maximum eruption of V838 Mon as the result of the swallowing of massive planets in close orbit around a parent star expanding while on the RGB (red giant branch) or AGB (asymptotic giant branch). A thermonuclear runaway (TNR) model was instead developed by Iben and Tutukov (1992) to explain M31-RV. The model envisages a binary system, composed of a white dwarf (WD) and a low mass companion, that evolves to orbital periods shorter than 2 hours by loss of angular momentum via gravitational waves, without experiencing classical nova eruptions on the way. The accretion at very low rates ($\sim 10^{-11}$ M$_\odot$ yr$^{-1}$) occurring onto a cold WD can lead to the accumulation of a massive H-rich envelope of the order of $\sim$0.05 M$_\odot$ before this is expelled in a gigantic hydrogen shell flash (some 10$^3$ times the mass expelled in a typical nova eruption). Frictional energy dissipation of the binary revolving within such a massive and dense common envelope can raise the drag luminosity to 10$^7$ L$_\odot$, with as much as 10$^6$ L$_\odot$ ($\gg$ L$_{\rm Eddington}$) coming out in the form of radiation.

  ---------- ------------ -------- ------------- ------------- ----- ----------------- ---- ---
  1.22 m     Newton       6.0 m    29 Oct 1942   26 Aug 1992   831   795 in $B$ band   25   2
  1.22 m     Cassegrain   19.1 m   31 Oct 1961   27 Nov 1972   94    93 in $B$ band    4
  1.82 m     Cassegrain   16.4 m   05 Aug 1973   09 Dec 1988   194   177 in $B$ band   2
  67/92 cm   Schmidt      2.2 m    02 Oct 1965   17 Dec 1993   291   264 in $B$ band   2    8
  40/50 cm   Schmidt      1.0 m    14 Oct 1958   05 Mar 1986   37    28 in $B$ band
  ---------- ------------ -------- ------------- ------------- ----- ----------------- ---- ---

Common to all models above is the uniqueness of the event: the progenitor can experience a single such outburst in its life. In the Soker and Tylenda (2003) and Retter and Marom (2003) approaches, it is the result of a merger event that obviously cannot be repeated.
In the Iben and Tutukov (1992) model an extremely long time (a sizable fraction of a Hubble time) is required to accrete $\sim 10^{-2}$ M$_\odot$ at very low rates onto a WD that has first cooled to low temperatures. Therefore, the report by Sharov (1990) about a second outburst of M31-RV in 1967, 20 years before the main one, is something that deserves careful scrutiny and independent verification. If confirmed, it would have profound consequences for the theoretical modeling of M31-RV, V4332 Sgr and V838 Mon, perhaps even more than the discovery of a massive and young B3 V companion to the latter (Munari et al. 2002b). The recent eruption of V838 Mon has considerably revitalized the interest in this class of objects. In anticipation of a growing modeling effort by the community, we decided to take advantage of the Asiago plate archive to evaluate the reality of a second outburst of M31-RV and to investigate its long term photometric evolution. Plate archive data ================== Four instruments have contributed to the large collection of photographic plates of M31 that we have located in the Asiago archive: the 1.22 m and 1.82 m reflectors, and the 40/50 cm and 67/92 cm Schmidt telescopes. Details about the number of plates, time span, focal length, limiting magnitude, etc. are provided in Table 1. In total, we have selected and retrieved 1447 plates of M31 from the Asiago archive. They all have been inspected visually with a high quality Zeiss binocular microscope. All plates have been inspected by the same author, and about 10% of them, randomly selected, checked by the other one. All key plates have been inspected by both authors more than once (over a one month time span and each time with a different orientation), taking care to make them unrecognizable so as to avoid biasing from memory of previous inspections.
The agreement between the estimates of the two authors and their repeatability at different times turned out to be excellent, typically at the 0.05 mag level, and very rarely differing by more than 0.1 mag. Munari et al. (2003) have determined an accurate astrometric position for M31-RV and have provided a finding chart from one of the Asiago plates taken during the 1988 outburst. A proper comparison sequence had to be established. We looked for literature data of stars close to M31-RV and projected on similar background brightness (thus roughly aligned parallel to local isophotes of the unresolved bulge of M31), to minimize biasing by the galaxy background when going from plates of one instrument to those of another, taken with different focal lengths, photometric bands, exposure times, seeing and sky conditions. We selected magnitudes obtained by Magnier et al. (1992) with CCD observations that allowed proper handling of the bulge background. The sequence we have adopted is presented in Figure 1 and listed in Table 2. Comparison with other datasets (A. Henden 2003, priv. comm.) indicates that there may be errors in the 0.1 mag range for some of the stars, but using the ensemble results in photometry close to the standard system.

  --- -------- -------- -------- --------
  a   14.306   13.691   13.328   13.058
  b   14.780   14.227   13.902   13.638
  c   15.039   14.695   14.463   14.269
  d   15.272   14.803   14.440   14.213
  e   16.086   15.217   14.657   14.275
  f   16.644   15.881   15.487   15.233
  g   16.892   16.099   15.582   15.239
  h   17.401   16.665   16.184   15.868
  i   17.775   17.154   16.820   16.662
  l   18.369   17.524   17.051   16.840
  m   18.556   18.116   17.792   17.472
  n   19.043   18.176   17.649   17.275
  o   19.591   19.589   19.368   ——
  p   20.117   19.419   18.562   17.872
  q   20.363   19.729   19.308   18.981
  --- -------- -------- -------- --------

  : The comparison sequence plotted in Figure 1. Magnitudes from CCD observations of Magnier et al. (1992).
The date, UT, telescope, filter, emulsion, exposure time and limiting magnitude in the appropriate band for each one of the 1447 inspected plates is given in Table 3 (available only in electronic form). No outburst in 1967 =================== Sharov (1990) announced that M31-RV had twenty years earlier experienced an outburst similar to that of 1988. He reported that while inspecting a long series of plates of M31 taken with telescopes of the Crimean Astrophysical Observatory he noted M31-RV around $B \sim$18.5 on three plates taken on Aug 4, Sep 3 and Sep 4, 1967 (Sharov reported that the outburst occurred in 1968, but the JDs he tabulated leave no doubt it was 1967. The listed JDs firmly establish the 50 cm Maksutof telescope as the source instrument. According to A. Tatarnikova (private communication) the focal length of this instrument is 2.0 m). The Andromeda galaxy has been frequently observed by Asiago telescopes for half a century, mainly to search for novae within a long term program led by the late Leonida Rosino (cf. Rosino 1973). More than 30 plates of M31 were collected in 1967, and similarly in adjacent years. Particularly useful are the plates taken at the Newton focus of the 1.22 m telescope. It has a much larger aperture and longer focal length (6 m) than Sharov’s 50 cm Maksutof camera, and its plates routinely show stars fainter than $B$=20 mag close to the position of M31-RV (the limit away from the bright background of the bulge of the Andromeda galaxy is generally one magnitude fainter). The 1.22 m plates rule out the outburst of M31-RV in 1967 announced by Sharov (1990). Particularly useful is a plate taken on Aug 11 (cf. Table 4), when M31-RV should have been at $B\sim$18.7 according to Sharov. Nothing is present at the M31-RV position down to the local plate limit of $B$=20.5. Over the 50 years covered by the Asiago plates, the only recorded event is that of 1988. It is worth noting that Goranskii et al.
(2002) have inspected archive plates spanning the time interval 1949–1994 in search of previous outbursts of V838 Mon, and found none. The region of the bulge where M31-RV appeared is characterized by subtle dust lanes and a knotty surface brightness distribution. It is possible that the Crimean 50 cm Maksutof camera had trouble resolving the local inhomogeneities of the bulge brightness distribution, which could have been mistaken for M31-RV on the 1967 plates. Alternative possibilities, such as a gravitational lensing event or the appearance of a nova close to the position of M31-RV, do not apply because they should have been easily visible on the deeper Asiago plates. It is worth noting that the Asiago 1.22 m telescope discovered a sizable fraction of all novae cataloged in M31 during the 1960’s. The 1988 outburst ================= Evolution --------- Table 5 collects all direct photometric observations that we have been able to locate in the literature concerning the 1988 outburst of M31-RV. Two $R_{\rm C}$ entries (cf. Table 3) come from the present inspection of plates from the Asiago archive. By far, the best covered photometric bands are $B$, $R_{\rm C}$ and $N_{\rm H_\alpha}$. The latter has been obtained with a narrow filter centered on H$\alpha$ and characterized by a full width at half maximum (FWHM) of 75 Å. Their light-curves are presented in Figure 2. The $R_{\rm C}$ and $N_{\rm H_\alpha}$ match well because H$\alpha$ displayed only modest emission, with negligible effect on the total flux through both $R_{\rm C}$ and $N_{\rm H_\alpha}$ filters. More relevant is instead the fact that the position of H$\alpha$, and therefore of the $N_{\rm H_\alpha}$ filter, is centered on the continuum that in M stars emerges between the 6200 and 6700 Å TiO bands. $N_{\rm H_\alpha}$ therefore tends to appear brighter than $R_{\rm C}$ as the spectral type progresses from M0 to M5.
At later spectral types the two bands converge back to similar values because the rapidly increasing steepness of the spectrum raises the flux in the red wing of the $R_{\rm C}$ band. The small differences in Figure 2 between the $R_{\rm C}$ and $N_{\rm H_\alpha}$ branches therefore seem to just reflect the monotonic evolution of the M31-RV continuum toward later M spectral types. The $R_{\rm C}$ lightcurve of M31-RV is less scattered than the $B$ lightcurve for a number of reasons. The literature $R_{\rm C}$ data come mainly from CCD observations, and the two Asiago $R_{\rm C}$ data-points are tied to a comparison sequence calibrated via CCD observations. The comparison sequences used in the literature to derive $B$ data are of unknown origin. Furthermore, the $R_{\rm C}$ data are obtained with accurate detector + filter pairs well matching the standard system, while several of the $B$ band data-points are not color corrected or come from scattered emulsion+filter combinations (a relevant issue given the very red colors displayed by M31-RV). Finally, the contrast between M31-RV and the background bulge brightness was more favorable in $R_{\rm C}$ than in $B$. The striking similarity of the $R_{\rm C}$ light-curves of M31-RV and V838 Mon is evident in Figure 2 (V838 Mon $R_{\rm C}$ data are taken from Munari et al. 2002c and Bond et al. 2003). The comparison is obviously limited to the portion of the lightcurve of M31-RV covered by the observations (while the V838 Mon one extends well beyond the small displayed section).
Both objects, after a plateau phase characterized by a slowly evolving K-type spectrum, experienced a sudden drop of several magnitudes (reaching $\Delta R_{\rm C}$=0.2 mag day$^{-1}$ for M31-RV and $\Delta R_{\rm C}$=0.3 mag day$^{-1}$ for V838 Mon) accompanied by a corresponding temperature drop, as indicated by the spectral type sweeping quickly through the M-type sequence toward classifications so far seen only in brown [*dwarfs*]{} (cf. Evans et al. 2003). As evident from the evolution of reddening and spectral energy distribution discussed in the following sections, the drop in magnitude of M31-RV is not due to dust condensation in the ejecta, but mainly to a drop in temperature during the expansion (shifting progressively the emission peak toward the IR) and to an overall decrease in luminosity. Reddening --------- From the available data it is possible to estimate the reddening affecting M31-RV at different epochs. At the time of the [*JHK*]{} observation of Sep 21, 1988 reported in Table 5, the spectral type of M31-RV was close to M2 (cf. Figure 2). According to Frogel and Whitford (1987), the intrinsic color of M2 giants in the Bulge of our Galaxy (taken to resemble their counterparts in the Bulge of M31, with comparable ages and metallicities, cf. Davidge 2001) is ($J-K$)$_\circ$=0.81. Compared with the observed $J-K$=0.86 for M31-RV, it implies $E_{J-K}$=0.05. The relation between $E_{J-K}$ and $E_{B-V}$ for M2 giants in the KPNO infrared system is (Fiorucci and Munari 2003): $$\frac{E_{J-K}}{E_{B-V}}=0.596 + 0.005\times E_{B-V}$$ and the corresponding reddening toward M31-RV is therefore $E_{B-V}$=0.08. At the time of the infrared observations of Oct 25, 1988, the spectral type was $\sim$M7, for which ($J-K$)$_\circ$=1.23 (again from Frogel and Whitford 1987). Compared with the observed $J-K$=1.30 it gives $E_{J-K}$=0.07 and correspondingly $E_{B-V}$=0.12.
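Both infrared values follow from inverting the quoted ratio, i.e. solving the quadratic $(0.596+0.005\,E_{B-V})\,E_{B-V}=E_{J-K}$ for $E_{B-V}$; since the ratio is nearly constant, the quadratic term barely matters. A minimal sketch (the function name is ours):

```python
from math import sqrt

def ebv_from_ejk(ejk):
    """Solve (0.596 + 0.005*x) * x = ejk for x = E(B-V) (positive root)."""
    a, b = 0.005, 0.596
    return (-b + sqrt(b * b + 4 * a * ejk)) / (2 * a)

print(round(ebv_from_ejk(0.05), 2))  # 0.08  (M2 epoch, Sep 21)
print(round(ebv_from_ejk(0.07), 2))  # 0.12  (M7 epoch, Oct 25)
```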
The latest IR observation in Table 5 cannot be used because the spectral classification at that time is unknown (by analogy with V838 Mon it was probably later than M10). By the time M31-RV was passing through the shoulder of the $R_{\rm C}$ lightcurve in Figure 2 (JD$\sim$2447418), the optical color was $B-R_{\rm C}\approx$+3.1 and the spectral type $\sim$M1. From Kurucz models computed on purpose for the M31 bulge metallicity (\[Fe/H\]=$-$0.2), the intrinsic color of an M1 supergiant is $B-R_{\rm C}$=2.79. The excess is therefore $E_{B-R}$=0.31. The transformation relation between $E_{B-R}$ and $E_{B-V}$ (both in the Landolt realization of the Johnson and Cousins systems) for early M giants and a normal extinction law ($R_V = A_V / E_{B-V} = 3.1$) is (from Fiorucci and Munari 2003): $$\frac{E_{B-R}}{E_{B-V}}=2.044 + 0.099\times E_{B-V}$$ This gives $E_{B-V}$=0.15 for M31-RV. The three independent determinations consistently converge toward: $$E_{B-V}=0.12 ~\pm0.02$$ as the reddening affecting M31-RV, with no indication of any significant increase during the abrupt photometric descent from optical maximum brightness, which therefore cannot be ascribed to dust condensation in the ejecta. A sizable fraction of the total reddening affecting M31-RV arises in our own Galaxy. In fact, the Burstein and Heiles (1982) extinction maps report $E_{B-V}\sim$0.1 as the total Galactic extinction along the line of sight to M31. Absolute magnitude ------------------ The distance modulus to M31 has been recently determined as 24.49$\pm$0.11 mag by Joshi et al. (2003) and as 24.47$\pm$0.08 mag by Stanek and Garnavich (1998). Taking the average of 24.48 mag and the $E_{B-V}=0.12$ reddening from the previous section, the absolute magnitude of M31-RV at peak $R_{\rm C}$ brightness around Aug 15, 1988 ($R_{\rm C}\sim$14.94) is $M_{R_C}\sim -$9.88. The true maximum could have been even brighter because the lightcurve is not completely mapped.
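The optical reddening solution and the peak absolute magnitude quoted above reduce to a few lines of arithmetic. A sketch; note that the $R_{\rm C}$-band extinction coefficient $A_{R_{\rm C}}\approx 2.6\,E_{B-V}$ is our assumption (a standard value for a normal extinction law), not a number taken from the text:

```python
from math import sqrt

# Optical determination: solve (2.044 + 0.099*x) * x = 0.31 for x = E(B-V)
a, b, e_br = 0.099, 2.044, 0.31
ebv = (-b + sqrt(b * b + 4 * a * e_br)) / (2 * a)
assert round(ebv, 2) == 0.15

# Peak absolute magnitude: M = m - mu - A_Rc, with mu = 24.48 and the
# assumed A_Rc ~ 2.6 * E(B-V)
m_rc, mu, ebv_adopted = 14.94, 24.48, 0.12
M_rc = m_rc - mu - 2.6 * ebv_adopted
print(round(M_rc, 2))  # about -9.85, consistent with the quoted -9.88
```

The small offset from the quoted $-9.88$ simply reflects a slightly different extinction coefficient.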
Repeating the exercise for the $B$ band, the maximum can be estimated to have occurred around JD 2447362 at $B$=17.4, corresponding to $M_B$=$-$7.7. The bolometric correction and $V-R_{\rm C}$ for M0 supergiants are $-$1.29 and +0.97, respectively (Drilling and Landolt 2000). According to Figure 2, M31-RV was at $R_{\rm C}\sim$15.2 by the time it was classified M0 by Rich et al. (1989), which implies $M_{bol}\sim -$9.95 and $L\sim 7.5\times10^5$ L$_\odot$, i.e. one of the brightest stars in M31 and the whole Local Group. A universal eruption mechanism? ------------------------------- The striking photometric and spectroscopic similarities between M31-RV and V838 Mon suggest a similar outburst mechanism. The absolute magnitude reached by the two events also seems quite similar. Figure 2 indicates that both M31-RV and V838 Mon, when transitioning from the plateau to the rapid fading phase, were displaying a $\sim$M1 supergiant spectrum. At that time, the absolute magnitude of M31-RV was $M_{R_{\rm C}} \approx -$9.6. At the corresponding time the magnitude of V838 Mon was $R_{\rm C}$=6.2. The reddening affecting V838 Mon is uncertain, but a fair estimate is $E_{B-V}$=0.5 (cf. Munari et al. 2002a). Assuming the same absolute magnitude as M31-RV, this corresponds to a distance $d_{\rm V838~Mon}=8$ kpc. This value is in good agreement with the average of the distance determinations by Bond et al. (2003), based on the HST imaging of the V838 Mon light echo, and with the Munari et al. (2002b) spectrophotometric distance to the B3 V component in the V838 Mon binary. Therefore the photometric and spectroscopic evolution of V838 Mon and M31-RV were similar, as well as the absolute magnitude at the time the drop in temperature occurred. Such similarities are remarkable in view of the different ages and evolution histories of the two objects. M31-RV appeared in the Bulge of M31, which is characterized by a turn-off mass around 1 M$_\odot$ and a high metallicity \[Fe/H\]=$-$0.2.
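The luminosity and the V838 Mon distance quoted above follow from standard photometric relations; as a sketch, assuming $M_{\rm bol,\odot}=4.74$ and again $A_{R_{\rm C}}\approx 2.6\,E_{B-V}$ (both our assumptions, not values from the text):

```python
# Bolometric luminosity from M_bol ~ -9.95, with M_bol(Sun) = 4.74
L = 10 ** ((4.74 - (-9.95)) / 2.5)
print(f"{L:.1e}")  # ~7.5e5 L_sun, as quoted

# V838 Mon distance assuming M_Rc = -9.6 at the plateau end, m_Rc = 6.2,
# E(B-V) = 0.5 and the assumed A_Rc ~ 2.6 * E(B-V)
mu0 = 6.2 - (-9.6) - 2.6 * 0.5        # dereddened distance modulus
d_kpc = 10 ** ((mu0 + 5) / 5) / 1000
print(round(d_kpc, 1))  # ~7.9 kpc, i.e. the ~8 kpc quoted above
```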
V838 Mon appears instead to be young and massive (the companion to the erupted component is a B3 V star) and it is located in the outskirts of the galactic disk, at galacto-centric distances of 15-17 kpc, where the metallicity is lower, of the order of \[Fe/H\]=$-$0.6 (cf. Davidge 2001). Yet, the two events show the same evolution and absolute luminosity in $R_{\rm C}$. This seems to suggest that a [*universal*]{} explosion mechanism could have powered both events, a mechanism independent of the way in which a stellar system reaches it. The independence of the outcome from the initial conditions is a characteristic, for example, of models of SN Ia, well known for their homogeneity in absolute magnitude and lightcurve shapes. In SN Ia, a WD reaches the Chandrasekhar mass and ignites carbon burning, irrespective of whether a merger of two WDs or accretion onto a single WD from a non-degenerate companion occurred. Here we postulate that a common eruption mechanism must have powered both M31-RV and V838 Mon, the outcome of which was not affected by the large differences in metallicity, age and mass of the two objects. The theoretical models so far published do not seem able to explain both M31-RV and V838 Mon, as well as the similarity of the two events. The Iben and Tutukov (1992) TNR mechanism cannot work in V838 Mon, because the young age implied by the presence of a B3 V star in the system is too short for a WD to cool and accrete enough material at a very low accretion rate. Both the Soker and Tylenda (2003) and Retter and Marom (2003) suggestions of stellar or planetary companions swallowed by an expanding giant seem unable to account for the strong similarities exhibited by M31-RV and V838 Mon. The results presented in this paper therefore support the need for a radically new model if M31-RV, V838 Mon and V4332 Sgr are to be explained as a [*homogeneous class*]{} of astronomical objects. Rich, R.M.
1990, in “Confrontation between stellar pulsation and evolution”, C. Cacciari and G. Clementini ed.s, ASP Conf. ser. 11, pag. 472 Bond, H.E., Henden, A., Levay, Z.G., Panagia, N., Sparks, W.B., Starrfield, S., Wagner, R.M., Corradi, R.L.M., & Munari, U. 2003, Nature, 422, 405 Burstein, D. & Heiles, C. 1982, AJ 87, 1165 Bryan, J., & Royer, R.E. 1992, PASP, 104, 179 Ciardullo, R., Tamblyn, P., & Phillips, A.C. 1990, PASP, 102, 1113 Davidge, T.J. 2001, AJ, 122, 1386 Drilling, J.S. & Landolt, A.U. 2000, in “Allen’s Astrophysical Quantities IV”, A.N.Cox ed., AIP Press, Springer, pag. 381 Fiorucci, M., & Munari, U. 2003, A&A, 401, 781 Frogel, J.A., & Whitford, A.E. 1987, ApJ, 320, 199 Goranskii, V.P., Kusakin, A.V., Metlova, N.V., Shugarov S.Yu., Barsukova, E.A. & Borisov, N.V. 2002, AstL 28, 691 Iben, I., & Tutukov, A.V. 1992, ApJ, 389, 369 Joshi, Y.C., Pandey, A.K., Narasimha, D., Sagar, R., & Giraud-Heraud, Y. 2003, A&A, 402, 113 Magnier, E.A., Lewin, W.H.G., van Paradijs, J., Hasinger, G., Jain, A., Pietsch, W., & Truemper, J. 1992, A&AS, 96, 379 Martini, P., Wagner, R.M., Tomaney, A., Rich, R.M., della Valle, M., & Hauschildt, P.H. 1999, AJ, 118, 1034 Mould, J., Cohen, J., Graham, J.R., Hamilton, D., Matthews, K., Picard, A., Reid, N., Schmidt, M., Soifer, T., Wilson, C., Rich, R.M., & Gunn J. 1990, ApJ, 353, 35 Munari, U., Henden, A., Kiyota, S., Laney, D., Marang, F.,Zwitter, T., Corradi, R.L.M., Desidera, S., Marrese, P.M., Giro, E., Boschi, F., & Schwartz, M.B. 2002a, A&A , 389, L51 Munari, U., Desidera, S., & Henden, A. 2002b, IAUC, 8005 Munari, U., Henden, A., Corradi, R.L.M. & Zwitter, T. 2002c, in “Classical Nova Explosions”, M.Hernanz and J.Josè ed.s, Am.Inst.Phys. Conf. Proc. 637, pag. 52 Munari, U., Henden, A., & Boschi, F. 2003, IBVS, 5410 Retter, A. & Marom, A. 2003, MNRAS, 345, L25 Rich, R.M., Mould, J., Picard, A., Frogel, J.A., & Davies, R. 1989, 341, 51 Rosino, L. 1973, A&AS, 9, 347 Sharov, A.S. 1990, SvAL, 16, 199 Soker, N., & Tylenda, R. 
2003, ApJ, 582, L105
Stanek, K.Z., & Garnavich, P.M. 1998, ApJ, 503, 131
Tomaney, A.B., & Shafter, A.W. 1992, ApJS, 81, 683

[^1]: Table 3 is available only in electronic form (ASCII format) at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/ and from the web page http://ulisse.pd.astro.it/M31-RV/, where further information is provided
--- abstract: 'We investigate theoretically the electronic structure and transport for a two-level quantum wire with Rashba spin-orbit coupling (SOC) under the irradiation of an external laser field at low temperatures. Photon-induced transitions are expected both between SOC-split subbands with the same lateral confinement quantum number and between subbands with different confinement quantum numbers. Using the method of the equation of motion (EOM) for Keldysh nonequilibrium Green’s functions (NGF), we examine the time-averaged density of states (DOS) and the spin-polarized conductance for the system with photon polarization perpendicular to the wire direction. Through analytical analysis and some numerical examples, the interplay effects of the external laser field and the Rashba SOC on both the DOS and the conductance of the system are demonstrated and discussed. It is found that the external laser field can adjust the spin polarization rate and the transport of the quantum wire system for suitable Rashba SOC strengths.' author: - 'Guanghui Zhou$^{1,2,3}$' - Wenhu Liao$^2$ title: 'Electronic structure and transport for a laser-field-irradiated quantum wire with Rashba spin-orbit coupling'
---

Introduction
============

In recent years, the effects of SOC in semiconductor mesoscopic systems have attracted increasing attention, since SOC plays an important role in the emerging field of spintronics (see the recent review article$^1$ and references therein), particularly since the proposal of constructing an electronic analogue of an optical modulator using ferromagnetic contacts as the spin injector and detector.$^2$ Many fundamental and interesting phenomena, such as spin precession,$^{3,4}$ spin accumulation,$^{5,6}$ spin (polarized) transport$^{7,8}$ and the spin Hall effect$^{9,10}$ in systems with SOC have been investigated and remain under active study.
Though SOC has its origin in relativistic effects, it plays a vital role in some low-dimensional mesoscopic semiconductor systems.$^{11,12}$ Usually, two types of SOC are taken into account in investigations of systems based on a two-dimensional electron gas (2DEG) confined in a semiconductor heterostructure. They are the Rashba$^{11}$ and Dresselhaus$^{12}$ SOC, described by the Hamiltonians $$\label{myeq1} H_R=\frac{\hbar k_R}{m^*}(\sigma_xp_y-\sigma_yp_x)$$ and $$\label{myeq2} H_D=\frac{\hbar k_D}{m^*}(\sigma_yp_y-\sigma_xp_x),$$ respectively, where $m^*$ is the effective electron mass and ${\bf\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ is the vector of Pauli matrices. The strengths of the two types of SOC are measured in terms of the characteristic wavevectors $k_R$ and $k_D$, respectively. For some semiconductor-based systems (e.g., InAs quantum wells), the Rashba term, arising from the structure inversion asymmetry in heterostructures,$^{13,14}$ is roughly one order of magnitude larger than the Dresselhaus term, which is due to the bulk inversion asymmetry.$^{15}$ Moreover, the strength of the Rashba SOC can be tuned by an external gate voltage,$^{16}$ so its effect has received particular attention, especially in quasi-one-dimensional quantum wire systems. Mesoscopic systems with or without an external magnetic field in the presence of SOC have been studied extensively.$^{3-10,17}$ Two years ago, two independent experiments on (001)-grown n-type GaAs multiple quantum well structures were performed using circularly polarized infrared radiation$^{18}$ and two orthogonally polarized optical harmonic pulses,$^{19}$ respectively. The spin photocurrent$^{18}$ and the pure spin current$^{19}$ due to resonant intersubband transitions were observed in the absence of any external magnetic field.
Subsequently, for a single quantum well (2DEG) with SOC under in-plane linearly polarized infrared irradiation, the spin-dependent density of states (DOS) and the density of spin polarization have been calculated, and a pure spin current has been theoretically predicted for the system.$^{20}$ Further, a mechanism for spin-polarized photocurrent generation in a multimode quantum wire, due to the combined effect of the Rashba SOC and a linearly polarized in-plane microwave irradiation, has been proposed in the presence of a static in-plane magnetic field.$^{21}$ On the other hand, electron transport in a quantum wire under a time-varying electromagnetic (EM) field irradiation in the absence of SOC has been analyzed previously by means of the NGF$^{22}$ and the scattering matrix approach,$^{23}$ respectively. However, further confined low-dimensional systems, such as a two-level quasi-one-dimensional quantum wire or a quasi-zero-dimensional quantum dot with SOC under irradiation by a time-dependent field, have rarely been studied.$^{21}$ A mesoscopic two-level system (such as a two-level quantum wire or quantum dot) is physically important, since it has proved very useful in describing many aspects of the interaction between an EM field and the electrons confined in a heterostructure, and in applications to solid-state electronic devices. Therefore, it is meaningful to investigate the interplay effect between the SOC and the applied laser field for a two-level mesoscopic system. In order to investigate the electronic structure and transport of a two-level quantum wire with SOC under intense laser field irradiation, in this paper we theoretically calculate the time-averaged DOS and the conductance at low temperatures for the system. The interplay effects of different laser frequencies and Rashba SOC strengths on the electronic structure and transport are investigated by using the nonequilibrium Keldysh formalism (NKF).
Through the analysis of a few numerical examples, we find some characteristics different from those of similar systems in previous works.$^{20-23}$ The remainder of the paper is organized as follows. In Sec. II, we introduce the model Hamiltonian for our system and present the NKF, with which the time-averaged DOS and the conductance are calculated analytically. The numerical results and discussions are given in Sec. III. Finally, Sec. IV concludes the paper.

Model and Formalism
===================

The NGF approach has been employed over the last decades to study a variety of problems beyond the linear response regime.$^{22}$ Meir et al.$^{24}$ derived a formula for the current through a region of interacting electrons using the NKF. By changing the one-directional time axis into a loop with two branches, four Green’s functions can be defined, depending on the relative positions of $t_a$ and $t_b$ on the loop. They are the time-ordered, the anti-time-ordered and the two distribution Green’s functions, respectively; however, only two of them are independent. We will use the standard nonequilibrium Keldysh EOM approach in the present work. For a quasi-one-dimensional system of electrons (a quantum wire) in the presence of SOC and an external time-dependent laser field, the model Hamiltonian reads $$\label{myeq3} H=\frac{{\bf p}^2}{2m^*}+V({\bf r})+H_{so}+V(t),$$ where ${\bf r}=(x,y)$ and ${\bf p}=(p_x,p_y)$ are the two-dimensional position and momentum vectors, respectively. The SOC Hamiltonian $H_{so}$ generally consists of $H_R$ and $H_D$, while $V(t)$ is the potential from the interaction of the external time-dependent laser field with the electrons in the system.
The electrons are confined in the $y$ direction by an infinite square-well potential of width $a$, i.e., $$\begin{aligned} \label{myeq4} V({\bf r})=\left\{ \begin{array}{l l} 0 & (|y|<a/2)\\ \infty & {(|y|>a/2)}, \end{array} \right.\end{aligned}$$ which eliminates the possibility of SOC due to the effective electric field arising from the nonuniformity of the confining potential.$^{25}$ To investigate the effects of the SOC and the external field on the electron transport properties by means of the NKF, we rewrite Hamiltonian (3) in second-quantized form. For this purpose, we define ${a^+_{ks\alpha}}$ ($a_{ks\alpha}$) to create (annihilate) an electron with wavevector $k$ and spin branch $s$ \[$s=\uparrow$ and $\downarrow$, or $+$ and $-$, the spin branch index corresponding to spin-up and spin-down, respectively; see Eq. (11) for a detailed explanation\] in mode $\alpha$ in either the left (L) or the right (R) lead, and $c_{k_xns}^+$ ($c_{k_xns}$) to create (annihilate) an electron in the $n$th transverse mode $|k_x,n,s\rangle$ with wavevector $k_x$ and spin branch index $s$ in the absence of SOC in the quantum wire, modeled as a two-level ($n=1,2$) system. For convenience, we choose$^{25}$ the spin polarization axis $\hat{\bf{n}}=(\cos\varphi,\sin\varphi)$ to be along the effective magnetic field due to the SOC for a wave propagating in the $x$ direction, such that $$\label{myeq5} |s\rangle=\frac{1}{\sqrt{2}}\left({\begin{array}{c} se^{-i\varphi/2}\\ e^{i\varphi/2} \end{array}}\right)$$ with $\varphi\equiv \arg[k_D+ik_R]$.
With these operators and spin states, the Hamiltonian for a laser-field-irradiated two-level quantum wire (connected to two electrode leads) in the presence of SOC reads $$\begin{aligned} \label{myeq6} H&=&\sum_{k,s,\alpha\in{L/R}}\varepsilon_{ks\alpha}a_{ks\alpha}^+a_{ks\alpha}+\sum_{k_x,n,s}\varepsilon_{ns}(k_x)c_{k_xns}^+c_{k_xns} \nonumber\\&&+\sum_{k,k_x,n,s,\alpha\in{L/R}}(T_{kk_xns}^{\alpha} a_{ks\alpha}^+c_{k_xns}+h.c.)\nonumber\\&&+\sum_{k_x,n,n',s,s'} [\gamma_{nn'}\beta_{ss'}+V_{nsn's'}\cos(\Omega t)]c_{k_xns}^+c_{k_xn's'},\end{aligned}$$ where $\varepsilon_{ks\alpha}$ is the energy level with spin $s$ and wavevector $k$ in lead $\alpha$, and $$\label{myeq7} \varepsilon_{ns}(k_x)=\frac{\hbar^2}{2m^*}[(k_x-s k_{so})^2 +(\frac{n\pi}{a})^2]-\Delta_{so}$$ is the $n$th sublevel in the wire, with $k_{so}= \sqrt{k^2_R+k^2_D}$ and $\Delta_{so}=\hbar^2 k^2_{so}/(2m^*)$. In Hamiltonian (6), the coupling between the electrode leads and the wire, with strength $T_{kk_xns}^{\alpha}$, is represented by the third term, while the last term describes the adiabatic electron-photon interaction in the wire$^{22,26}$ and the mixing of transverse modes due to SOC, where $V_{nsn's'}$ are the dipole electron-photon interaction matrix elements (MEs) and $\Omega$ is the incident laser frequency. Since the frequencies of interest are in the range corresponding to wavelengths of the order of hundreds of nanometers, the spatial variation of the field potential can be neglected.
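In the dimensionless units adopted later in Sec. III (energies in $E^*=\pi^2\hbar^2/(2m^*a^2)$ and, on our reading of the unit convention, wavevectors in units of $\pi/a$), Eq. (7) reduces to $\varepsilon_{ns}(k_x)=(k_x-s\,k_{so})^2+n^2-k_{so}^2$. A minimal numerical sketch of this dispersion, illustrating the time-reversal relation $\varepsilon_{n,s}(k_x)=\varepsilon_{n,-s}(-k_x)$ quoted in Sec. III:

```python
import numpy as np

def subband_energy(kx, n, s, k_so):
    """Rashba-split subband dispersion, Eq. (7), in units of
    E* = pi^2 hbar^2 / (2 m* a^2), with wavevectors in pi/a.
    s = +1 (spin-up branch) or s = -1 (spin-down branch)."""
    return (kx - s * k_so) ** 2 + n ** 2 - k_so ** 2

kx, k_so = 0.7, 0.2

# Time-reversal symmetry: eps_{n,s}(kx) = eps_{n,-s}(-kx)
assert np.isclose(subband_energy(kx, 1, +1, k_so),
                  subband_energy(-kx, 1, -1, k_so))

# Each branch minimum sits at kx = s*k_so, at energy n^2 - Delta_so
assert np.isclose(subband_energy(k_so, 2, +1, k_so), 4 - k_so ** 2)
```

The two branches are horizontally shifted parabolas; for $k_{so}=0$ the spectrum collapses to the paper's $k_x=(\omega-n^2)^{1/2}$ relation.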
The SOC mixes the transverse modes through the matrix element $\gamma_{nn'}\beta_{ss'}$, where $$\begin{aligned} \label{myeq8} \gamma_{nn'}=\frac{4nn'}{a(n^2-n'^2)} \left\{ \begin{array}{l l} (-1)^{\frac{n+n'-1}{2}}&(n\neq n')\\ 0 & {(n=n')} \end{array}, \right.\end{aligned}$$ and, according to the lateral confinement potential,$^{25}$ $\beta_{ss'}$ is an element of the matrix $$\begin{aligned} \label{myeq9} \beta= \frac{\hbar^2}{m^*k_{so}}\left[ {\begin{array}{cc} 2ik_Rk_D & k^2_D-k^2_R \\ k^2_R-k^2_D & -2ik_Rk_D \end{array}} \right].\end{aligned}$$ In the above Hamiltonian we have neglected electron-electron interactions, since their effect on the SOC can plausibly be absorbed into a renormalized SOC constant.$^{27}$ For simplicity, we focus on the Rashba SOC effect, i.e., we let $k_D=0$. Furthermore, according to the Dyson equation, the coupling between the electrode leads and the wire only adds a self-energy term to the NGF, so we first calculate the Green’s function (GF) of the quantum wire without considering the electrode leads. In this case the Hamiltonian of the quantum wire part in the absence of the EM field reads $$\begin{aligned} \label{myeq10} H_{wire}&=&\sum_{k_x}[\varepsilon_{1\uparrow}(k_x)c^+_{k_x1\uparrow}c_{k_x1\uparrow} +\varepsilon_{1\downarrow}(k_x)c^+_{k_x1\downarrow}c_{k_x1\downarrow}\nonumber\\ &+&\varepsilon_{2\uparrow}(k_x)c^+_{k_x2\uparrow}c_{k_x2\uparrow} +\varepsilon_{2\downarrow}(k_x)c^+_{k_x2\downarrow}c_{k_x2\downarrow}\nonumber\\ &+&\varepsilon_R(c^+_{k_x2\uparrow}c_{k_x1\downarrow} +c^+_{k_x1\downarrow}c_{k_x2\uparrow}\nonumber\\ &-&c^+_{k_x1\uparrow}c_{k_x2\downarrow}-c^+_{k_x2\downarrow}c_{k_x1\uparrow})],\end{aligned}$$ where $\varepsilon_R=8\hbar^2k_R/(3m^*a)$.
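The structure of Eq. (10) can be made concrete by diagonalising, at fixed $k_x$, the $4\times4$ matrix in the basis $(1\uparrow,1\downarrow,2\uparrow,2\downarrow)$. This is an illustrative sketch with placeholder level values, not part of the paper's derivation; it shows that the Rashba coupling $\varepsilon_R$ opens an anticrossing of width $2\varepsilon_R$ between sublevels of opposite spin branch:

```python
import numpy as np

def wire_hamiltonian(e1u, e1d, e2u, e2d, eps_R):
    """H_wire of Eq. (10) at fixed kx, basis (1up, 1down, 2up, 2down).
    eps_R couples (2up, 1down) with +eps_R and (1up, 2down) with -eps_R."""
    H = np.diag([e1u, e1d, e2u, e2d]).astype(float)
    H[2, 1] = H[1, 2] = eps_R    # +eps_R between 2up and 1down
    H[0, 3] = H[3, 0] = -eps_R   # -eps_R between 1up and 2down
    return H

# Placeholder energies with eps_{1,down} = eps_{2,up}: at this degeneracy
# the coupling opens a gap of exactly 2*eps_R between the mixed levels.
eps_R = 0.3
H = wire_hamiltonian(1.0, 2.0, 2.0, 3.0, eps_R)
ev = np.linalg.eigvalsh(H)
pair = sorted(ev, key=lambda e: abs(e - 2.0))[:2]  # the two levels split off 2.0
assert np.isclose(max(pair) - min(pair), 2 * eps_R)
```

The Hamiltonian decouples into the two $2\times2$ blocks $\{1\uparrow,2\downarrow\}$ and $\{1\downarrow,2\uparrow\}$, consistent with the spin-flip mixing pattern of Eq. (10).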
According to Eq. (5), here the spin-up state $|\uparrow\rangle$ and the spin-down state $|\downarrow\rangle$ are linear combinations of the eigenstates of $\sigma_z$, $$\begin{aligned} \label{myeq11} |\uparrow\rangle=\frac{1-i}2\left({\begin{array}{c} 1\\0 \end{array}}\right)+\frac{1+i}2\left({\begin{array}{c} 0\\1 \end{array}}\right),\nonumber\\ |\downarrow\rangle=-\frac{1-i}2\left({\begin{array}{c} 1\\0 \end{array}}\right)+\frac{1+i}2\left({\begin{array}{c} 0\\1 \end{array}}\right),\end{aligned}$$ occupying the real spin-up and spin-down states in the original spin space with equal probability. For definiteness, we consider the case in which the applied incident laser is polarized along the $y$ direction (perpendicular to the wire direction); hence the diagonal electron-photon interaction MEs are simply zero in the dipole approximation. Also, for simplicity of calculation, we assume phenomenologically the off-diagonal electron-photon interaction MEs $V_{1s2s'}=V_{2s1s'}=1.0$ as free input parameters (dependent on the incident laser intensity), and thus the Hamiltonian (10) becomes $$\begin{aligned} \label{myeq12} H'_{wire}&=&\sum_{k_x}\{\varepsilon_{1\uparrow}(k_x)c^+_{k_x1\uparrow}c_{k_x1\uparrow} +\varepsilon_{1\downarrow}(k_x)c^+_{k_x1\downarrow}c_{k_x1\downarrow}\nonumber\\ &+&\varepsilon_{2\uparrow}(k_x)c^+_{k_x2\uparrow}c_{k_x2\uparrow} +\varepsilon_{2\downarrow}(k_x)c^+_{k_x2\downarrow}c_{k_x2\downarrow}\nonumber\\ &+&[\frac{1}{2}(e^{i\Omega t}+e^{-i\Omega t})+\varepsilon_R] (c^+_{k_x1\downarrow}c_{k_x2\uparrow}+c^+_{k_x2\uparrow}c_{k_x1\downarrow})\nonumber\\ &+&[\frac{1}{2}(e^{i\Omega t}+e^{-i\Omega t})-\varepsilon_R] (c^+_{k_x1\uparrow}c_{k_x2\downarrow}+c^+_{k_x2\downarrow}c_{k_x1\uparrow})\nonumber\\ &+&\frac{1}{2}(e^{i\Omega t}+e^{-i\Omega t})(c^+_{k_x1\uparrow}c_{k_x2\uparrow} +c^+_{k_x2\uparrow}c_{k_x1\uparrow}\nonumber\\
&+&c^+_{k_x1\downarrow}c_{k_x2\downarrow}+c^+_{k_x2\downarrow}c_{k_x1\downarrow})\}.\end{aligned}$$ It is seen from Eqs. (10) and (12) that the pure Rashba SOC induces spin-flip transitions with equal probabilities according to Eq. (6), while the applied laser field may induce spin-flip and spin-conserving transitions with unequal probabilities, due to the interplay between the Rashba SOC and the field. Our interest is to find numerically which kind of transition is favored in this system. Next we employ the usual retarded GF$^{22,24}$ $$\begin{aligned} \label{myeq13} G_{nsn's'}^r(t_2,t_1)=\ll c_{k_xns}(t_2), c^+_{k_xn's'}(t_1)\gg^r\nonumber\\ =-i\theta(t_2-t_1)\langle\{c_{k_xns}(t_2), c^+_{k_xn's'}(t_1)\}\rangle,\end{aligned}$$ whose corresponding Keldysh EOM is $$\begin{aligned} \label{myeq14} i\frac{\partial}{\partial t_2}\ll c_{k_xns}(t_2),c^+_{k_xn's'}(t_1)\gg^r=\nonumber\\ \delta(t_2-t_1)\langle\{c_{k_xns}(t_2),c^+_{k_xn's'}(t_1)\} \rangle\nonumber\\ +\ll[c_{k_xns}(t_2),H],c^+_{k_xn's'}(t_1)\gg^r.\end{aligned}$$ Inserting the system Hamiltonian (12) into (14), transforming the variables to $t_2-t_1$ and $t_1$, and then performing a Fourier transform to change the variable $t_2-t_1$ into $\omega$, we finally obtain the diagonal MEs of the two retarded GFs without the coupling between the electrode leads and the wire, $$\begin{aligned} \label{myeq15} \{[\omega-\varepsilon_{1/2\uparrow}(k_x)][\omega-\varepsilon_{2/1\downarrow}(k_x)] -\varepsilon^2_R\}\nonumber\\ \cdot\ll c_{k_x1/2\uparrow},c^+_{k_x1/2\uparrow}\gg_{\omega}^r =\omega-\varepsilon_{2/1\downarrow}(k_x),\end{aligned}$$ $$\begin{aligned} \label{myeq16} \{[\omega-\varepsilon_{1/2\downarrow}(k_x)][\omega-\varepsilon_{2/1\uparrow}(k_x)] -\varepsilon^2_R\}\nonumber\\ \cdot\ll c_{k_x1/2\downarrow},c^+_{k_x1/2\downarrow} \gg_{\omega}^r =\omega-\varepsilon_{2/1\uparrow}(k_x),\end{aligned}$$ $$\begin{aligned} \label{myeq17} &&[\omega-\varepsilon_{1/2\uparrow}(k_x)]\ll c_{k_x1/2\uparrow},
c_{k_x1/2\uparrow}^+(t_1) \gg_{\omega}^r\nonumber\\ &&=1\mp \varepsilon_R \ll c_{k_x2/1\downarrow},c_{k_x1/2\uparrow}^+(t_1)\gg_{\omega}^r\nonumber\\ &&+\frac{1}{2}e^{i\Omega t_1}[\ll c_{k_x2/1\downarrow}, c_{k_x1/2\uparrow}^+(t_1)\gg_{\omega+\Omega}^r\nonumber\\ &&+\ll c_{k_x2/1\uparrow},c_{k_x1/2\uparrow}^+(t_1)\gg_{\omega+\Omega}^r]\nonumber\\ &&+\frac{1}{2}e^{-i\Omega t_1}[\ll c_{k_x2/1\downarrow}, c_{k_x1/2\uparrow}^+(t_1)\gg_{\omega-\Omega}^r\nonumber\\ &&+\ll c_{k_x2/1\uparrow},c_{k_x1/2\uparrow}^+(t_1)\gg_{\omega-\Omega}^r],\end{aligned}$$ $$\begin{aligned} \label{myeq18} &&[\omega-\varepsilon_{1/2\downarrow}(k_x)]\ll c_{k_x1/2\downarrow}, c_{k_x1/2\downarrow}^+(t_1)\gg_{\omega}^r\nonumber\\ &&=1\pm\varepsilon_R\ll c_{k_x2/1\uparrow},c_{k_x1/2\downarrow}^+(t_1)\gg_{\omega}^r\nonumber\\ &&+\frac{1}{2}e^{i\Omega t_1}[\ll c_{k_x2/1\uparrow}, c_{k_x1/2\downarrow}^+(t_1)\gg_{\omega+\Omega}^r\nonumber\\ &&+\ll c_{k_x2/1\downarrow},c_{k_x1/2\downarrow}^+(t_1)\gg_{\omega+\Omega}^r]\nonumber\\ &&+\frac{1}{2}e^{-i\Omega t_1}[\ll c_{k_x2/1\uparrow}, c_{k_x1/2\downarrow}^+(t_1)\gg_{\omega-\Omega}^r\nonumber\\ &&+\ll c_{k_x2/1\downarrow},c_{k_x1/2\downarrow}^+(t_1)\gg_{\omega-\Omega}^r],\end{aligned}$$ for spin-up and spin-down, respectively. It is seen from Eqs. (17) and (18) that the retarded NGF $G^r_0$ at frequency $\omega$ is coupled to components at the photon sideband frequencies $\omega+\Omega$ and $\omega-\Omega$, in connection with $k_{so}$ (the characteristic wavevector of the Rashba SOC). On the other hand, the self-energy describing the influence of the leads on the system can be simply written as $$\label{myeq19} \Sigma_{nn'}\equiv\Sigma^{L/R}_{nn'}(\omega)=2\pi\sum_{k,k_x,s} (T^{\alpha}_{kk_xns})^*T_{k,k_xn's}^{\alpha}\delta(\omega-\varepsilon_{ks\alpha}),$$ with which one can construct the GF $G^r=[(G^r_0)^{-1}-i\Sigma]^{-1}$ for the whole system.
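As an illustrative sketch (with placeholder levels in the paper's units, and a diagonal approximation to the wire Hamiltonian), the Dyson construction of $G^r$ together with Eqs. (20) and (21) below can be evaluated numerically. The paper writes $G^r=[(G^r_0)^{-1}-i\Sigma]^{-1}$; conventions for $\Sigma$ differ by a sign, and here the $i\Sigma$ term is taken with the sign for which the retarded DOS comes out positive:

```python
import numpy as np

def transport_at(omega, levels, Sigma_L, Sigma_R):
    """Time-averaged DOS and Landauer-type conductance, Eqs. (20)-(21),
    with `levels` a diagonal approximation to the wire Hamiltonian and
    lead self-energies entering through the Dyson construction of G^r."""
    n = len(levels)
    Gr0_inv = omega * np.eye(n) - np.diag(levels)            # (G^r_0)^{-1}
    Gr = np.linalg.inv(Gr0_inv + 1j * (Sigma_L + Sigma_R))   # full retarded GF
    Ga = Gr.conj().T                                         # advanced GF
    dos = -np.trace(Gr).imag / np.pi                         # Eq. (20)
    g = np.trace(Sigma_L @ Ga @ Sigma_R @ Gr).real           # Eq. (21), in e^2/h
    return dos, g

# Self-energies of the form assumed in Sec. III: diagonal 0.1, off-diagonal 0.05
S = np.array([[0.1, 0.05], [0.05, 0.1]])
dos, g = transport_at(omega=1.0, levels=[1.0, 4.0], Sigma_L=S, Sigma_R=S)
assert dos > 0 and g > 0
```

For a single level with scalar $\Sigma_L=\Sigma_R=\sigma$, the on-resonance transmission of this formula is $\sigma^2|G^r|^2=1/4$, which fixes how the broadening convention enters Eq. (21).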
If we calculate the time-averaged NGF up to second order, then at low temperatures the time-averaged DOS is $$\label{myeq20} DOS=-\frac1{\pi}Im[Tr(G^r(\omega,\omega))],$$ and the conductance has the Landauer-type form$^{22,26}$ $$\label{myeq21} G=\frac{e^2}{h}Tr[\Sigma^L(\omega)G^a(\omega,\omega) \Sigma^R(\omega) G^r(\omega,\omega)].$$ Here $G^r(\omega,\omega)$ and $G^a(\omega,\omega)$ represent the time-averaged retarded and advanced GFs, respectively.

Numerical Results and Discussions
=================================

In the following, we present some numerical examples of the DOS and conductance calculated according to Eqs. (15)-(21) for the system. We choose the energy unit $E^*=\epsilon_1=\pi^2\hbar^2/(2m^*a^2)$ (i.e., the first lateral level of the quantum wire without SOC), the time unit $t^*=\hbar/E^*$, and the frequency unit $\Omega^*=1/t^*$. With these units, the propagating longitudinal wavevector corresponding to the $n$th transverse mode is $k_x=(\omega-n^2)^{1/2}$. In the wide-band approximation the real part of the self-energy is negligible,$^{22,24-26}$ and we simply assume that $\Sigma_{11}=\Sigma_{22}=0.1$ and $\Sigma_{12}=\Sigma_{21}=0.05$. The choice of these typical parameters is based on the following considerations.$^{22}$ Usually the strength of the electron-photon interaction depends on the photon intensity, the polarization and the size of the quantum wire. Under the irradiation of a strong laser with an electric field of the order of $(10^5-10^6)$ $V/m$, the MEs are comparable to or several times larger than the level spacing in a quantum wire of width of order $(10-100)$ $nm$ (corresponding to an external laser frequency $\sim$ THz), and these quantities are physically realizable in recent experiments.$^{18,19}$ We first consider the electronic structure of the system. It is well known that the electronic energy spectrum is degenerate for the two spin orientations in the absence of SOC.
In the presence of SOC the energy spectrum (7) satisfies the condition $\varepsilon_{n,s}(k_x)=\varepsilon_{n,-s}(-k_x)$, in accordance with time inversion symmetry. However, our interest is the interplay effect of the external laser field and the Rashba SOC on the electronic structure and transport of the system. Here we consider an incident field linearly polarized perpendicular to the current direction (the wire direction), i.e., the off-diagonal MEs dominate the electron-photon interaction. With the assumption of off-diagonal MEs $V_{12}=V_{21}$=1.0 \[see Eq. (11)\] and an incident laser frequency $\Omega=0.5$, in Fig. 1 we illustrate the time-averaged DOS as a function of energy for the two different Rashba SOC strengths $k_R=1/(2\pi)$ and $k_R=1/\pi$, respectively. We can see that the main peak around $\omega\sim1$ is always obvious in the presence of both Rashba SOC and the laser field. This is because the electrons are populated at the energy level $\varepsilon_{1\uparrow}\sim$ 1.01 rather than $\varepsilon_{1\downarrow}\sim$ 1.25 with single photon absorption. In the case of weak Rashba SOC strength, as shown in Fig. 1(a), there are two additional photon resonance peaks at $\omega$=1.6 and 4.6 for spin-up (solid line), while for spin-down (dashed line) there are three additional resonance peaks at $\omega$=4.58, 0.75 and 0.65, with a pattern of oscillation in the range $0.76<\omega<1$.
![The time-averaged DOS (in arbitrary units) as a function of energy with electron-photon interaction off-diagonal matrix elements $V_{1s2s'}=V_{2s1s'}=1.0$ for the two different Rashba SOC strengths (a) $k_R=1/(2\pi)$ and (b) $k_R =1/\pi$, where the incident laser frequency is $\Omega=0.5$ and the solid (dashed) line, representing spin-up (spin-down), is shifted 0.1 upward for clarity.](1a.eps "fig:"){width="1.5in"} ![The time-averaged DOS (in arbitrary units) as a function of energy with electron-photon interaction off-diagonal matrix elements $V_{1s2s'}=V_{2s1s'}=1.0$ for the two different Rashba SOC strengths (a) $k_R=1/(2\pi)$ and (b) $k_R =1/\pi$, where the incident laser frequency is $\Omega=0.5$ and the solid (dashed) line, representing spin-up (spin-down), is shifted 0.1 upward for clarity.](1b.eps "fig:"){width="1.5in"} ![The time-averaged DOS (in arbitrary units) as a function of energy with the same system parameters and line presentation as in Fig. 1, except that the incident laser frequency is $\Omega=3.0$.](2a.eps "fig:"){width="1.5in"} ![The time-averaged DOS (in arbitrary units) as a function of energy with the same system parameters and line presentation as in Fig. 1, except that the incident laser frequency is $\Omega=3.0$.](2b.eps "fig:"){width="1.5in"} Nevertheless, with the increased Rashba SOC strength shown in Fig. 1(b), the two photon resonance peaks for spin-up are shifted from $\omega$=1 and 4 to $\omega$=0.63 and 3.4, respectively, while for spin-down only two resonance peaks occur, at $\omega=0.63$ (superposed with that for spin-up) and 3.5, without an oscillatory pattern. However, the other main peak, around $\omega\sim4$, becomes appreciable in this strong Rashba SOC case.
Because the single photon energy $\Omega$ is much smaller than the quantum wire sublevel spacing $\Delta\epsilon$, the resonance peaks here belong to transitions between Rashba SOC-split subbands with the same lateral confinement quantum number.$^{21}$ In order to examine the transitions between subbands with different confinement quantum numbers, in Fig. 2 we increase the incident frequency to $\Omega=3$, with the same two Rashba SOC strengths as in Fig. 1. As shown in Fig. 2, the time-averaged DOS for spin-up (solid lines) has no transition resonance peaks in either the weak or the strong Rashba SOC case, while for spin-down there are several sharp resonance transition peaks, at $\omega$=0.75, 4.1, 4.5 and 4.6, in the weak Rashba SOC case \[see the dashed line in Fig. 2(a)\], and an oscillatory pattern with no resonance peak \[dashed line in Fig. 2(b)\] in the strong Rashba SOC case. This result suggests that the transition probabilities are much larger under this condition. We believe that some of the resonance peaks in Fig. 2(a) can be identified with photon-induced transitions between subbands with different quantum numbers.$^{21-23}$ Because both spin-flip and spin-conserving transitions are modulated by the strengths of the Rashba SOC and the laser field, it seems that a strong Rashba SOC in the higher laser frequency case is not favorable for transitions between subbands with different quantum numbers.
![The conductance $G$ (in units of $e^2/h$) as a function of energy ($\sim\omega$, in units of $\epsilon_1$) without laser field for the two different Rashba SOC strengths (a) $k_R=1/(2\pi)$ and (b) $k_R =1/\pi$, where the solid (dashed) line, representing spin-up (spin-down), is shifted 0.1 upward for clarity.](3a.eps "fig:"){width="1.65in"} ![The conductance $G$ (in units of $e^2/h$) as a function of energy ($\sim\omega$, in units of $\epsilon_1$) without laser field for the two different Rashba SOC strengths (a) $k_R=1/(2\pi)$ and (b) $k_R =1/\pi$, where the solid (dashed) line, representing spin-up (spin-down), is shifted 0.1 upward for clarity.](3b.eps "fig:"){width="1.65in"} Next we turn our attention to the conductance of the system. The conductance (in units of $e^2/h$) as a function of energy ($\sim\omega$, in units of $\epsilon_1$) of the system without external laser field, in the presence of weak and strong Rashba SOC, is illustrated in Fig. 3. There are two major peaks in the conductance curves, as a consequence of the two-subband level structure of the wire. In particular, the conductance difference between the two spin orientations in Fig. 3 is very small, consistent with the analytical prediction from the energy spectrum. One also notes that the conductance peaks are asymmetric near the two subband levels due to the spin-orbit interaction.$^{26}$ ![The time-averaged conductance $G$ (in units of $e^2/h$) as a function of energy with the same system parameters and line presentation as in Fig. 1.](4a.eps "fig:"){width="1.5in"} ![The time-averaged conductance $G$ (in units of $e^2/h$) as a function of energy with the same system parameters and line presentation as in Fig. 1.](4b.eps "fig:"){width="1.5in"} The time-averaged conductance of the system irradiated by a transversally polarized laser field in the presence of Rashba SOC is shown in Fig. 4, with $\Omega=0.5$.
Corresponding to the resonance states in Fig. 1(a), the time-averaged conductance in Fig. 4(a) shows some peaks with heights of $\sim e^2/h$. When the incident electron energy is about $\omega=$0.65 or 0.75, we note that the conductance is nearly $e^2/h$ for spin-down while that for spin-up is nearly 0; when the incident electron energy is increased to $\omega=1.6$, there is a sharp conductance peak for spin-up while the spin-down conductance is about 0. Therefore, with the large spin polarization in Fig. 1(a), a spin filter may be devised for appropriate values of the incident electron energy and the Rashba SOC strength. Fig. 4(b) shows the time-averaged conductance corresponding to Fig. 1(b) in the strong Rashba SOC case, from which one can see more photon resonance peaks (especially in the lower energy range) than in the weak Rashba SOC case. Furthermore, the time-averaged conductance of the system for the two different Rashba SOC strengths when the external laser frequency is increased to 3.0 is illustrated in Fig. 5. Due to the intersubband resonance states in Fig. 2(a), there are more sharp resonance transition peaks in the higher energy range \[see Fig. 5(a)\] for the spin-down electrons \[see the explanation for Fig. 2(a)\]. In the strong Rashba SOC case, however, the conductance curves for both spin-up and spin-down show only the two main peaks \[see Fig. 5(b)\], as in Fig. 2(b). Perhaps in this case the Rashba SOC is too strong to allow quantum transitions in the system.
![The time-averaged conductance $G$ (in units of $e^2/h$) as a function of energy with the same system parameters and line presentation as in Fig. 2.](5a.eps "fig:"){width="1.5in"} ![The time-averaged conductance $G$ (in units of $e^2/h$) as a function of energy with the same system parameters and line presentation as in Fig. 2.](5b.eps "fig:"){width="1.5in"} Finally, the time-averaged DOS (solid line for spin-up and dash-dotted line for spin-down) and the spin polarization rate$^{20}$ (dashed line) at a fixed incident electron energy ($\omega=2.5$), as functions of the characteristic wavevector $k_R$ (proportional to the strength of the Rashba SOC), without and with a transversally polarized external laser field ($\Omega=0.5$), are shown in Fig. 6(a) and Fig. 6(b), respectively. The electronic energy spectrum is degenerate for spin-up and spin-down when $k_R=0$ in both cases, as expected (see the solid and dash-dotted lines in Fig. 6). In the case without laser field, as shown in Fig. 6(a), the spin polarization rate (dashed line) is about 17% when $k_R=0.02$, and it can reach 95% when $k_R=0.04$. Under the irradiation of the laser field, as shown in Fig. 6(b), the spin polarization rate increases to 60% and 100% around $k_R=0.02$ and $k_R=0.04$, respectively. Moreover, there are several additional peaks in the spin polarization rate in the range $0.05<k_R<0.25$ with the laser field, while in the case without laser field, as shown in Fig. 6(a), the spin polarization rate remains smoothly low in this range of $k_R$.
Therefore, it seems that the external laser field can enhance the spin polarization rate for a quantum wire system with an appropriate Rashba SOC strength, which can be adjusted through controllable lateral electrodes.$^{16}$ ![The time-averaged DOS and spin polarization rate as functions of $k_R$ (proportional to the strength of the Rashba SOC) for a fixed incident electron energy $\omega=2.5$, (a) without and (b) with a transversally polarized laser field ($\Omega=0.5$), where the solid line (shifted 0.1 upward for clarity) shows the spin-up DOS and the dash-dotted line the spin-down DOS. The dashed line represents the spin polarization rate.](6a.eps "fig:"){width="1.5in"} ![The time-averaged DOS and spin polarization rate as functions of $k_R$ (proportional to the strength of the Rashba SOC) for a fixed incident electron energy $\omega=2.5$, (a) without and (b) with a transversally polarized laser field ($\Omega=0.5$), where the solid line (shifted 0.1 upward for clarity) shows the spin-up DOS and the dash-dotted line the spin-down DOS. The dashed line represents the spin polarization rate.](6b.eps "fig:"){width="1.5in"}

Conclusion
==========

In summary, using the method of the EOM for Keldysh NGF, we have investigated theoretically the electronic structure and transport properties of a two-sublevel quantum wire irradiated by a transversally polarized external laser field in the presence of Rashba SOC. The time-averaged DOS and conductance for spin-up and spin-down electrons, in the case where the off-diagonal electron-photon interaction dominates the process, are calculated analytically and demonstrated numerically for two different Rashba SOC strengths and laser frequencies, respectively. It is found that the external laser field can enhance the spin polarization rate for the system at particular Rashba SOC strengths. An all-electrical nonmagnetic spintronic device may be realizable with an appropriate choice of external control parameters.
However, experimental verification of this proposal, as well as further theoretical investigation taking impurity, phonon or electron-electron interactions into account, would be worthwhile. This work was supported by the National Natural Science Foundation of China (Grant No. 10574042), and by the Scientific Research Fund of Hunan Provincial Education Department (Grant No. 04A031). Zutic I, Fabian J and Sarma S D 2004 Rev. Mod. Phys. [**76**]{} 323 Datta S and Das B 1990 Appl. Phys. Lett. [**56**]{} 665 Mireles F and Kirczenow G 2001 Phys. Rev. B [**64**]{} 024426\ Winkler R 2004 Phys. Rev. B [**69**]{} 045317 Liu Ming-Hao, Chang Ching-Ray and Chen Son-Hsien 2005 Phys. Rev. B [**71**]{} 153305 Governale M and Zülicke U 2002 Phys. Rev. B [**66**]{} 073311 Onoda M and Nagaosa N 2005 Phys. Rev. Lett. [**95**]{} 106601\ Onoda M and Nagaosa N 2005 Phys. Rev. B [**72**]{} 081301(R) Balents L and Egger R 2000 Phys. Rev. Lett. [**85**]{} 3464\ Pramanik S, Bandyopadhyay S and Cahay M 2003 Phys. Rev. B [**68**]{} 075313 Rodrigues V, Bettini J, Silva P C and Ugarte D 2003 Phys. Rev. Lett. [**91**]{} 096801\ Wang X F, Vasilopoulos P and Peeters F M 2005 Phys. Rev. B [**71**]{} 125301 Murakami S, Nagaosa N and Zhang S C 2003 Science [**301**]{} 1348\ Murakami S, Nagaosa N and Zhang S C 2004 Phys. Rev. B [**69**]{} 235206 Hirsch J E 1999 Phys. Rev. Lett. [**83**]{} 1834\ Sinova J, Culcer D, Niu Q, Sinitsyn N A, Jungwirth T and MacDonald A H 2004 Phys. Rev. Lett. [**92**]{} 126603\ Shen Shun-Qing, Ma M, Xie X C and Zhang Fu-Chun 2004 Phys. Rev. Lett. [**92**]{} 256603 Bychkov Y A and Rashba E I 1984 J. Phys. C [**17**]{} 6039 Dresselhaus G 1955 Phys. Rev. [**100**]{} 580 Lommer G, Malcher F and Rössler U 1988 Phys. Rev. Lett. [**60**]{} 728 de Andrada e Silva E A, La Rocca G C and Bassani F 1994 Phys. Rev. B [**50**]{} 8523 Moroz A V and Barnes C H W 1999 Phys. Rev. B [**60**]{} 14272 Nitta J, Akazaki T, Takayanagi H and Enoki T 1997 Phys. Rev. Lett. 
[**78**]{} 1335\ Grundler D 2000 Phys. Rev. Lett. [**84**]{} 6074\ Koga T, Nitta J, Akazaki T and Takayanagi H 2002 Phys. Rev. Lett. [**89**]{} 046801 Sun Qing-feng and Xie X C 2005 Phys. Rev. B [**71**]{} 155321\ Sun Qing-feng, Wang Jian and Guo Hong 2005 Phys. Rev. B [**71**]{} 165310 Stevens M J, Smirl A L, Bhat R D R, Najmaie A, Sipe J E and van Driel H M 2003 Phys. Rev. Lett. [**90**]{} 136603 Ganichev S D, Schneider P, Bel’kov V V, Ivchenko E L, Tarasenko S A, Wegscheider W, Weiss D, Schuh D, Murdin B N, Phillips P J, Pidgeon C R, Clarke D G, Merrick M, Murzyn P, Beregulin E V and Prettl W 2003 Phys. Rev. B [**68**]{} 081302(R) Najmaie A, Smirl A L and Sipe J E 2005 Phys. Rev. B [**71**]{} 075306\ Sherman E Y, Najmaie A and Sipe J E 2005 Appl. Phys. Lett. [**86**]{} 122103\ Cheng J L and Wu M W 2005 Appl. Phys. Lett. [**86**]{} 032107 Fedorov A, Pershin Y V and Piermarocchi C 2005 Phys. Rev. B [**72**]{} 245327\ Pershin Y V and Piermarocchi C 2005 Appl. Phys. Lett. [**86**]{} 212107 Niu C and Lin D L 1997 Phys. Rev. B [**56**]{} R12752\ Niu C and Lin D L 2000 Phys. Rev. B [**62**]{} 4578 Zhou Guanghui, Yang Mou, Xiao Xianbo and Li Yuan 2003 Phys. Rev. B [**68**]{} 155309 Meir Y and Wingreen N S 1992 Phys. Rev. Lett. [**68**]{} 2512\ Jauho A P, Wingreen N S and Meir Y 1994 Phys. Rev. B [**50**]{} 5528 Lee M and Bruder C 2005 Phys. Rev. B [**72**]{} 045353\ Lee M and Choi M S 2005 Phys. Rev. B [**71**]{} 153306 Yang M and Li S S 2004 Phys. Rev. B [**70**]{} 045318 Chen G H and Raikh M E 1999 Phys. Rev. B [**60**]{} 4826
--- author: - 'Indubala I. Satija[@www]' date: 'Received: date / Revised version: date' title: ' Localization, Dirac Fermions and Onsager Universality' --- The two-dimensional Ising model is one of the few examples of an exactly solvable many-body system.[@book] The model exhibits a phase transition at finite temperature characterized by universal exponents defining a universality class, the Onsager universality, which describes the phase transitions of anisotropic XY models. Interestingly, the Onsager universality also describes a quantum phase transition at zero temperature, driven by quantum fluctuations, of one-dimensional quantum anisotropic XY spin chains in a transverse field.[@Lieb] These quantum models belong to a small family of integrable Hamiltonians that have attracted both theoreticians and experimentalists. At the heart of the integrability of these many-body quantum spin problems is a mapping between spins and fermions, which relates the interacting XY spin chain with $O(1)$ symmetry to Hamiltonians that are quadratic in fermions. The spin-fermion correspondence has also proven to be extremely important in the case of a disordered magnetic field, as the quadratic fermion Hamiltonian can be numerically diagonalized with extreme precision.[@IIS] Recent studies have shown that a large variety of disordered quantum chains with $O(1)$ spin symmetry are still described by the Onsager universality.[@Luck; @IIS] In this paper, we show a new type of spin-fermion mapping in which disordered fermions exhibiting exponential localization are related to an anisotropic spin chain in a disordered magnetic field at the onset to long-range order (LRO). This correspondence, valid exclusively for exponentially localized systems, provides new insight into various issues relevant to disordered fermion as well as spin models. 
In particular, we exploit the well-established universality hypothesis of spin systems to make important statements about fermion problems. Firstly, the root of the recently observed universality in localized fermions[@KSprl; @andy] is traced to the Onsager universality of the spin systems. Another interesting result is a correspondence between relativistic and nonrelativistic fermions in the presence of disorder. It is shown that the relativistic fermions can be viewed as the fluctuations in the exponentially localized solutions of the nonrelativistic fermions. This provides a new approach for understanding the absence of localization in disordered Dirac fermions, which has been the subject of various recent studies.[@Dirac] We argue that long-range magnetic correlations provide a mechanism for the delocalization of relativistic fermions, thus obtaining a deeper understanding of the absence of localization. In addition to obtaining an intuitively appealing picture of some surprising results on disordered fermions, we also obtain a generalization of the universality statement of the critical exponents for disordered spin models. Finally, we examine how correlations affect the localization characteristics and show the possibility of delocalization of relativistic fermions analogous to the corresponding nonrelativistic case. Although the setting we describe is quite general in the context of disordered systems, for concreteness we will consider quasiperiodic disorder, where the lattice problem for nonrelativistic fermions is described by the Harper equation,[@Harper] $$\psi_{n-1}+\psi_{n+1}+2 \lambda V_n \psi_n = E \psi_n.$$ Here $V_n = \cos(\theta_n)$, where $\theta_n= 2 \pi \sigma_g n +\phi$. The $\sigma_g$ is an irrational number describing competing length scales in the problem and $\phi$ is a constant phase factor. The Harper equation, in the one-band approximation, describes Bloch electrons in a magnetic field. 
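Because the Harper Hamiltonian is tridiagonal, its spectrum and eigenstates are easy to obtain numerically. The following minimal sketch (our own illustration, not code accompanying this paper; system size and parameter values are arbitrary) contrasts the inverse participation ratio (IPR) of the lowest eigenstate on the two sides of the self-dual point $\lambda=1$, where the localization transition occurs.

```python
# Numerical check of localization in the Harper equation: diagonalize the
# tridiagonal Hamiltonian with on-site potential 2*lambda*cos(2*pi*sigma_g*n + phi)
# and compare inverse participation ratios below and above lambda = 1.
import numpy as np

def harper_hamiltonian(N, lam, sigma_g=(np.sqrt(5) - 1) / 2, phi=0.0):
    n = np.arange(N)
    H = np.diag(2.0 * lam * np.cos(2 * np.pi * sigma_g * n + phi))
    off = np.ones(N - 1)
    return H + np.diag(off, 1) + np.diag(off, -1)

def ground_state_ipr(N, lam):
    H = harper_hamiltonian(N, lam)
    _, vecs = np.linalg.eigh(H)
    psi = vecs[:, 0]                       # lowest-energy eigenstate (normalized)
    return np.sum(np.abs(psi) ** 4)        # IPR ~ 1/N extended, O(1) localized

N = 610                                    # a Fibonacci number suits the golden mean
ipr_extended = ground_state_ipr(N, 0.5)    # lambda < 1: extended phase
ipr_localized = ground_state_ipr(N, 2.0)   # lambda > 1: localized phase
print(ipr_extended, ipr_localized)
```

The IPR of the extended state shrinks with system size, while the localized state's IPR stays of order one, reflecting the transition at $\lambda=1$.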
This problem frequently arises in many different physical contexts, each time emerging with a new face to describe another physical application. The problem is solvable by Bethe ansatz[@BA]: there is an algebraic Bethe-ansatz equation for the spectrum. It was shown that at some special points in the spectrum, e.g. at the mid-band points, the Hamiltonian in a certain gauge can be written as a linear combination of generators of the quantum group $U_q(sl_2)$. The model also describes some properties of the integer quantum Hall effect: the Kubo-Greenwood formula for the conductance of any filled isolated band is an expression for a topological invariant, and is an integer multiple of $e^2/h$. It was further shown by Avron et al[@Chern] that this topological invariant defines the first Chern class of the mapping of the Brillouin zone (a two-dimensional torus) onto the complex projective space of the wave functions. In contrast to the usual Anderson problem, describing a localized particle in a random potential, the Harper equation exhibits a localization-delocalization transition provided $\sigma_g$ is an irrational number with good Diophantine properties (i.e., badly approximated by rational numbers). This transition at $\lambda=1$ is characterized by singular continuous states and spectra. The richness and complexity of the critical point describing the localization transition have been studied in great detail by various renormalization group approaches.[@Ostlund; @KSrg] Recently, it was shown that multifractal characteristics continue to exist beyond the critical point,[@KSprl] throughout the localized phase. This hidden complexity of the localized phase is brought to light after one factors out the exponentially decaying envelope. The localized wave function with inverse localization length $\xi^{-1}=\log(\lambda)$ is rewritten as[@KSprl] $$\psi_n = e^{ -\gamma |n|} \eta_n$$ where $\gamma=\xi^{-1}$. 
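This factorization can be checked numerically: diagonalize the Harper Hamiltonian in the localized phase, divide out the envelope $e^{-\gamma|n-n_0|}$ around the localization center $n_0$, and observe that the remaining fluctuations do not decay. A minimal sketch (our own illustration, with an arbitrary Fibonacci system size and $\lambda=2$):

```python
# Factor the exponential envelope out of a localized Harper eigenstate and
# verify that psi decays while the fluctuation eta stays O(1).
import numpy as np

N, lam = 377, 2.0                          # 377 is a Fibonacci number
sigma_g = (np.sqrt(5) - 1) / 2
gamma = np.log(lam)                        # inverse localization length xi^{-1} = log(lambda)
n = np.arange(N)
H = (np.diag(2 * lam * np.cos(2 * np.pi * sigma_g * n))
     + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
_, vecs = np.linalg.eigh(H)

# pick a state localized near the middle of the chain to avoid edge effects
centers = np.abs(vecs).argmax(axis=0)
k = int(np.argmin(np.abs(centers - N // 2)))
psi = np.abs(vecs[:, k])
n0 = int(centers[k])

# factor out the envelope: psi_n = exp(-gamma*|n - n0|) * eta_n
eta = psi * np.exp(gamma * np.abs(n - n0))

win = slice(n0 + 10, n0 + 31)
ratio_psi = psi[win].max() / psi[n0]       # decays roughly like exp(-gamma*d)
ratio_eta = eta[win].max() / eta[n0]       # fluctuations remain of order one
print(ratio_psi, ratio_eta)
```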
The tight-binding model (tbm) describing the fluctuations $\eta_n$ in the exponentially decaying envelope is given by the following pair of equations, $$\begin{aligned} e^{-\gamma} \eta^r_{n+1} + e^{\gamma} \eta^r_{n-1} + 2 \lambda V_n \eta^r_n &=& E \eta^r_n \nonumber\\ e^{\gamma} \eta^l_{n+1} + e^{-\gamma} \eta^l_{n-1} + 2 \lambda V_n \eta^l_n &=& E \eta^l_n\end{aligned}$$ Here $\eta^r$ and $\eta^l$ respectively describe the fluctuations to the right and to the left of the localization center. An exact renormalization scheme[@KSprl] showed that these fluctuations exhibit universal features (see Fig. 1) described by the strong coupling fixed point. Hence, the localized phase is characterized by universal fractal characteristics described by the $\lambda \to \infty$ limit of the equation. These results were further confirmed by rigorous mathematical analysis.[@andy] Hence the strength of the disorder, which determines the localization length, can be factored out, making the universal aspects transparent. A new revelation that provides an intuitive understanding of the strong coupling fixed point of the Harper equation is obtained by relating the fluctuations described by (3) to the anisotropic spin chain at the onset to LRO. It turns out that equations (3) for $E=0$ describe quasiparticle excitations of a $critical$ anisotropic XY spin-$1/2$ chain in a transverse magnetic field, given by the following spin Hamiltonian: $$\label{spin} H=-\sum [e^{-\gamma} \sigma^x_n \sigma^x_{n+1}+ e^{\gamma}\sigma^y_n \sigma^y_{n+1} + 2\lambda V_n \sigma^z_n].$$ The $\sigma^k_n$, $k=x,y,z$, are the Pauli matrices. The $e^{-\gamma}$ and $e^{\gamma}$ respectively describe the exchange interactions along the $x$ and the $y$ directions in spin space. 
Using the Jordan-Wigner transformation, the interacting spin problem can be mapped to a non-interacting spinless-fermion problem,[@Lieb] where the fermions are the quasiparticle excitations of the spin chain, obeying the following coupled equations, $$\begin{aligned} \label{fermion} e^{-\gamma} \eta^1_{l+1}+e^{\gamma} \eta^1_{l-1}+2\lambda V_l \eta^1_l &=& E \eta^2_l \nonumber\\ e^{\gamma} \eta^2_{l+1}+e^{-\gamma} \eta^2_{l-1}+2\lambda V_l \eta^2_l &=& E \eta^1_l\end{aligned}$$ At the onset to LRO, the excitation spectrum becomes gapless, and hence for $E=0$, the massless mode, the above two equations are degenerate, coinciding with equation (3). Therefore, the massless excitations of the critical anisotropic spin chain describe the fluctuations in the exponentially localized excitations of the isotropic chain, where the anisotropy parameter is related to the localization length. The localization length, like the anisotropy parameter, is thus an irrelevant variable, and hence the statement of strong coupling universality of the Harper equation is synonymous with the statement that the anisotropic spin chain is in the Onsager universality class. In short, we establish an equivalence between the Ising fixed point of the anisotropic spin chain and the strong coupling fixed point of the Harper equation, and thus provide a simple interpretation of the strong coupling universality of the Harper equation. Furthermore, by relating the anisotropy to the localization length, we obtain a new picture of the localized phase: the role of the strength of the disorder is similar to that of the strength of the anisotropy, which is an irrelevant parameter for the renormalization flow. It should be noted that the spin-fermion mapping provides a new method to determine the localization length of tbms in the presence of disorder. 
The expression for the localization length, $\gamma=\log(\lambda)$, can be viewed as the relation describing the critical point of the spin chain in a disordered field. We next show that the fluctuations $\eta_n$ obey the Dirac equation for zero-energy states. In the long-wavelength limit, equations (3) reduce to the Dirac equation. We replace $n$ by $x$ and write $\eta_{n\pm 1} = e^{\pm ip} \eta(x)$, where $p$ is the momentum canonically conjugate to $x$. The equation for the fluctuations of the $E=0$ state can be described by the following non-Hermitian Hamiltonian $H_{fluc}$ and its adjoint, $H_{fluc} = e^{-\gamma} e^{i p} + e^{\gamma} e^{-i p} + 2\lambda V(x)$. In the limit $p \rightarrow 0$, expanding $e^{\pm i p} \approx 1 \pm ip$ gives $H_{fluc} \approx 2\lambda V(x) + 2\cosh(\gamma) - 2i\sinh(\gamma)\, p$, and the system for $E=0$ reduces to the Dirac equation, $$[g \sigma_x p -i(2\lambda V(x)+2 \cosh(\gamma) )\sigma_y]\eta(x) = 0$$ where $\eta(x)$ is a two-dimensional spinor $\eta(x)=(\eta^l(x),\eta^r(x))$. It is interesting that the two-component structure of the Dirac spinor arises naturally when we consider fluctuations about exponentially localized wave functions. Here $g \equiv 2\sinh(\gamma)$ is the velocity of the Dirac fermions, while the mass of the Dirac fermions is $m(x) = 2(\lambda V(x)+ \cosh(\gamma))$. Therefore, on a lattice, Dirac fermions with a disordered mass are the fluctuations of the nonrelativistic localized fermions. This would imply the absence of exponential localization for relativistic fermions. The defiance of localization by relativistic fermions has been the subject of various studies, and our analysis provides a simple way to understand this intriguing result. Next, we address the question of whether the strong coupling fixed point, which describes the localized phase of the Harper equation, the critical Ising model and Dirac fermions with quasiperiodic disorder, implies universal multifractal exponents. 
We compute the $f(\alpha)$ curve (Fig. 2) describing the multifractal spectrum associated with the self-similar wave function, via the inverse participation ratios ${\bf P}$, ${\bf P}(q,N) = \frac{\sum_n |\eta_n|^{2q}}{(\sum_n |\eta_n|^2)^q} \sim N^{-\tau(q)}$, with $\alpha = \frac{d\tau}{dq}$ and $f(\alpha) = \alpha q-\tau(q)$. The free energy function $\tau(q)$ and its Legendre transform $f(\alpha)$ were found to be $\lambda$-independent only [*for positive values of $q$*]{}, and hence only the left half of the $f(\alpha)$ curve is universal. Therefore, for quasiperiodic spin chains at the onset to LRO, the scaling exponents for [*only the positive moments*]{} of the participation ratio are universal. This can be viewed as a generalization of the universality statements for periodic spin chains to disordered spins. Finally, we investigate the role of correlations in the localization characteristics of the massless spin excitations, which obey the Dirac equation. The fact that correlations can result in delocalization, as originally shown for a random-dimer model,[@pp] is an important result in localization theory.[@pp] For quasiperiodic disorder, dimer-type correlations can be introduced by replacing $\theta_n=2\pi\sigma_g n$ in $V(\theta_n)$ by the iterates of the $supercritical$ standard map, describing Hamiltonian systems with two degrees of freedom,[@dimerlong] $$\theta_{n+1}+\theta_{n-1}-2\theta_n = - {K\over 2\pi } \sin( 2 \pi \theta_n) .\label{SM}$$ We use iterates that describe the golden-mean cantorus (the remnant of the KAM torus beyond the onset of global stochasticity), which has been shown to exhibit dimer-type correlations and leads to Bloch-type states for the nonrelativistic fermions.[@dimerlong] Here, we will confine ourselves to the Ising limit (obtained from (5) using $\gamma \to \infty$ and rescaling the parameters), described by, $$\begin{aligned} \eta^1_{n+1}+2\lambda \cos(2\pi\theta_n)\eta^1_n&=& E \eta^2_n\nonumber\\ \eta^2_{n-1}+2\lambda \cos(2\pi\theta_n)\eta^2_n&=& E 
\eta^1_n\end{aligned}$$ We determine the critical $\lambda$ (the threshold for the onset of the magnetic transition) as a function of $K$, the nonlinearity parameter of the two-dimensional map.[@Lieb] The localization characteristics of the massless mode of the Ising model are studied using an exact RG methodology.[@KSrg] In this approach, quasiperiodic models such as eqn (7) with golden-mean incommensurability are decimated to a renormalized model defined only at the Fibonacci sites. The renormalization flow describing the renormalized couplings at the Fibonacci sites provides an extremely accurate tool to distinguish extended, localized and critical states. Trivial fixed points of the RG describe extended states, while critical states correspond to nontrivial fixed points. As shown in Fig. 3, the nontrivial $6$-cycle (which also corresponds to a six-cycle of the wave function $\eta_n$, as shown in Fig. 1) degenerates to trivial fixed points at certain special parameter values. The origin of these trivial fixed points has been traced to a [*hidden dimer*]{} in the quasiperiodic iterates describing the golden cantorus.[@dimerlong] At these points, the relativistic massless mode of the Ising model is ballistic. Therefore, relativistic fermions may become delocalized, analogous to the nonrelativistic case, due to dimer-type correlations. Fig. 3 shows an interesting interplay between the magnetic transition and the ballistic transition due to dimer-type correlations: the ballistic transitions, where the relativistic mode is propagating, seem to be sandwiched between two peaks corresponding to a strong enhancement of the (possibly divergent) strength of the inhomogeneous field needed for the onset to LRO. This phenomenon again confirms the view that the spin-fermion relationship may be an extremely useful means to understand the richness and complexity underlying a variety of new phenomena in disordered systems. 
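The $\tau(q)\to f(\alpha)$ machinery used above (participation ratios, free-energy function, Legendre transform) can be illustrated on a synthetic measure whose multifractal spectrum is known exactly. The sketch below (our own illustration, not the paper's data) applies it to a binomial multiplicative cascade, for which $\tau(q) = -\log_2(a^q + (1-a)^q)$, and recovers the exact values $\tau(1)=0$ and $\max f = D_0 = 1$.

```python
# Compute tau(q), alpha(q) and f(alpha) for a measure with a known spectrum.
import numpy as np

def cascade_measure(a, k):
    # binomial multiplicative cascade: N = 2**k weights summing to one
    p = np.array([1.0])
    for _ in range(k):
        p = np.concatenate([a * p, (1 - a) * p])
    return p

a, k = 0.3, 14
p = cascade_measure(a, k)
N = p.size

q = np.linspace(-3, 3, 121)
# P(q, N) = sum_n p_n^q ~ N^{-tau(q)}   (p is already normalized)
P = np.array([np.sum(p ** qi) for qi in q])
tau = -np.log(P) / np.log(N)

alpha = np.gradient(tau, q)                  # alpha = d tau / d q
f = q * alpha - tau                          # Legendre transform

# sanity checks against exact results: tau(1) = 0 and max f = D_0 = 1
print(tau[np.argmin(np.abs(q - 1.0))], f.max())
```

The maximum of $f(\alpha)$ equals the support dimension $D_0=1$, and $\alpha(q)$ is decreasing in $q$ since $\tau(q)$ is concave, mirroring the general structure of the spectra computed in the text.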
A one-dimensional quantum spin chain in a transverse field at the onset to LRO describes a two-dimensional layered Ising model. Therefore, our study relates the universal aspects, described by the Onsager universality, of the two-dimensional Ising model to the universal aspects of the two-dimensional Bloch electron problem described by the Harper equation. This paper establishes a relationship between two important systems where geometry and integrability are of central importance. We believe that our results are valid for a variety of disorders, including a large class of pseudorandom as well as random cases. Finally, it should be noted that the spin-fermion correspondence is also valid for time-dependent models. The kicked Harper model, which has been extensively studied in the quantum chaos literature,[@Dima; @PSS] also describes an XY spin chain in a periodically kicked inhomogeneous magnetic field.[@PS] By exploiting the spin-fermion correspondence, various surprising results in the kicked Harper model can be understood in a new way.[@SP] One of the interesting results is the independence of the critical exponents with respect to the discommensuration parameter $\sigma_g$ in the limit $\sigma_g \to 0$. This defines a new aspect of the usual universality statements for spin systems at the onset of a magnetic transition, and hence broadens the concept of universality to include disorder as well as time dependence. This research is supported by a grant from the National Science Foundation, DMR-0072813. I would like to thank Alain Comtet for bringing to my attention the problem of disordered Dirac fermions. Web address: physics.gmu.edu isatija L. Onsager, Phys. Rev. 65, 117 (1944). Also see H. E. Stanley, “Introduction to Phase Transitions and Critical Phenomena”, McGraw-Hill, 1978. E. Lieb, T. Schultz, and D. Mattis, Ann. Phys. (N.Y.) 16, 407 (1961). J. M. Luck, J. Stat. Phys. 72 (1993) 417; Europhys. Lett. 24 (1993) 359. I. I. Satija, Phys. Rev. B 49, 3391 (1994). P. B. 
Wiegmann and A. V. Zabrodin, Phys. Rev. Lett. 72, 1890 (1994). J. E. Avron, R. Seiler and B. Simon, Phys. Rev. Lett. [**51**]{} (1983). S. Ostlund and R. Pandit, Phys. Rev. B 29, 1394 (1984); S. Ostlund, R. Pandit, D. Rand, H. J. Schellnhuber, and E. D. Siggia, Phys. Rev. Lett. 50, 1873 (1983). J. A. Ketoja and I. I. Satija, Phys. Lett. A 194, 64 (1994); Physica A 219, 212 (1995). J. A. Ketoja and I. I. Satija, Phys. Rev. Lett. 75, 2762 (1995); B. Mestel, A. Osbaldestin and B. Winn, “Golden mean renormalization for the Harper equation: the strong coupling fixed point”, preprint. L. Balents and M. Fisher, Phys. Rev. B 56, 12970 (1997); Castillo, C. Chamon, E. Fradkin, P. Goldbart and C. Mudry, Phys. Rev. B 56, 10668 (1997). P. G. Harper, Proc. Phys. Soc. London A 68, 874 (1955); B. Simon, Adv. Appl. Math. 3, 463 (1982). H-L. Wu, W. Goff and P. W. Phillips, Phys. Rev. B 45, 1623 (1992). J. Ketoja and I. Satija, Phys. Rev. B 59, 9174 (1999); P. Leboeuf, J. Kurchan, M. Feingold and D. P. Arovas, Phys. Rev. Lett. [**65**]{}, 3076 (1990). T. Prosen, I. I. Satija and N. Shah, Phys. Rev. Lett. [**87**]{}, 066601 (2001). T. Prosen and I. Satija (unpublished). “$\hbar \to 0$ in kicked Harper: Reassurances and Surprises”, I. Satija and T. Prosen (preprint).
--- abstract: | Wireless sensor networks (WSNs) are often designed to perform two tasks: sensing a physical field and transmitting the data to end-users. A crucial aspect of the design of a WSN is the minimization of the overall energy consumption. Previous research aims at optimizing the energy spent for communication, while mostly ignoring the energy cost of sensing. Recently, it has been shown that considering the sensing energy cost can be beneficial for further improving the overall energy efficiency. More precisely, sparse sensing techniques were proposed to reduce the amount of collected samples and recover the missing data by using data statistics. While the majority of these techniques use fixed or random sampling patterns, we propose to adaptively learn the signal model from the measurements and use the model to schedule when and where to sample the physical field. The proposed method requires minimal on-board computation, no inter-node communications and still achieves appealing reconstruction performance. With experiments on real-world datasets, we demonstrate significant improvements over both traditional sensing schemes and the state-of-the-art sparse sensing schemes, particularly when the measured data is characterized by a strong intra-sensor (temporal) or inter-sensor (spatial) correlation. author: - '[^1] [^2]' bibliography: - 'IEEEabrv.bib' - 'biblio2.bib' - 'papers.bib' title: 'DASS: Distributed Adaptive Sparse Sensing' --- Wireless sensor networks, sparse sensing, adaptive sampling scheduling, compressive sensing, energy efficiency Introduction {#sec1} ============ In a wireless sensor network (WSN), sensor nodes are deployed to take periodic measurements of a certain physical field at different locations. Consider a continuous-time spatio-temporal field $x(\vp,t)$ that we would like to monitor with the WSN, and a vector $\vx\in\R^{N}$ containing a discretization of such a field with a sufficiently high resolution for our purposes. 
The target of the WSN is to recover $\vx$ with the maximum precision. ![Graphical representation of the mathematical model of the proposed sensing scheme. The signal is modeled by an unknown time-varying linear $K$-dimensional model $\mPsi^t$ that is learnt from the collected measurements. The sampling pattern $\mPhi^t$ is optimized at run-time according to the signal model and measures only $M$ values out of the $N$ available ones.[]{data-label="fig:intro_ss"}](./pic/intro_ss_juri.pdf){height="0.2\newhcol"} One of the primary goals in designing a WSN is the reduction of the energy consumption, to extend its lifetime without replacing or recharging the batteries of the sensor nodes. The energy consumption of a sensor node mainly comes from three activities: sensing, data processing and communication. Traditionally, the costs of processing and communication are assumed to dominate the overall energy consumption, while the cost of sensing is considered negligible. Therefore, a traditional WSN collects as much data as possible, which is subsequently compressed and transmitted at the lowest possible rate. In other words, it collects a vector of samples $\vy_0$ that is equal to the discretized physical field $\vx$ plus some additive noise, $$\begin{aligned} \vy_0=\mI\vx + \boldsymbol{\omega}, \label{eq:trad_sensing}\end{aligned}$$ where $\mI$ is the identity matrix of size $N$ and $\boldsymbol{\omega}$ represents the noise; see Figure \[fig:eye\] for an example. If the energy consumed for sensing is comparable to that for communication and data processing, ignoring the energy cost of the former is sub-optimal. In fact, new sampling paradigms optimizing the overall energy consumption have emerged, showing that further reductions of the energy consumption are possible. The basic idea involves a reduction of the number of collected samples and a reconstruction of the missing data using algorithms that exploit the structure available in the measured data. 
The reduction of the collected samples is achieved by designing a sampling operator $\mPhi\in\R^{M\times N}$ with $M\ll N$, which is used instead of the identity matrix: $$\begin{aligned} \vy=\mPhi\vx + \boldsymbol{\omega}. \nonumber\end{aligned}$$ Note that $\vy$ is significantly shorter than $\vx$, and the reconstruction algorithm must estimate a significant amount of information from a limited amount of data. Therefore, regularization and constraints are added to the problem so that a stable solution can be obtained. Moreover, the reconstruction algorithm must be jointly designed with the sampling matrix $\mPhi$ to obtain a precise estimate of $\vx$. Pioneering work on sparse sampling considered compressive sensing (CS) as the reconstruction scheme. CS attempts to recover $\vx$ by solving a convex optimization problem, under the assumption that $\vx$ is sparse in a known dictionary $\mat{\Pi}$. However, the solution is in general only approximate; it is exact only if $\mat{\Pi}$ and $\mPhi$ satisfy certain requirements that are generally hard to check [@Candes2006]. Initially, [@Duarte2006; @Wang2007a; @Luo2009] proposed the use of a sampling matrix $\boldsymbol\Phi$ composed of random i.i.d. Gaussian entries. Note from Figure \[fig:dense\] that such a $\mPhi$ has very few zero elements. Therefore, the number of sensing operations is not actually reduced, because we need to know all the values of $\vx$ to compute $\vy$. Moreover, if we adopt a distributed algorithm, a dense $\mPhi$ requires the sensor nodes to transmit their local samples to the other nodes, causing excessive energy consumption for communication. To overcome these limitations, [@Wu2012; @Quer2012] proposed to use a sparse matrix $\mPhi$ which contains very few non-zero elements. More precisely, $\mPhi$ generally has only one non-zero element per row, and the locations of these elements determine the spatio-temporal sampling pattern, see Figure \[fig:sparse\]. 
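The difference in sensing cost between the two choices of $\mPhi$ can be made concrete in a few lines (an illustrative sketch with made-up sizes, not code from this paper): a dense Gaussian matrix needs every entry of $\vx$ to compute $\vy$, while a row-sparse selection matrix needs only $M$ of them.

```python
# Compare the sensing cost of a dense Gaussian Phi with a row-sparse selection Phi.
import numpy as np

rng = np.random.default_rng(0)
N, M = 100, 10
x = rng.standard_normal(N)                    # stand-in for the discretized field

Phi_dense = rng.standard_normal((M, N))       # i.i.d. Gaussian, as in early CS schemes
tau = rng.choice(N, size=M, replace=False)    # one sampling location per row
Phi_sparse = np.zeros((M, N))
Phi_sparse[np.arange(M), tau] = 1.0

# number of distinct entries of x that must be sensed to form y = Phi @ x
cost_dense = int(np.count_nonzero(np.any(Phi_dense != 0, axis=0)))
cost_sparse = int(np.count_nonzero(np.any(Phi_sparse != 0, axis=0)))
print(cost_dense, cost_sparse)                # N vs. M sensing operations
```

With the sparse $\mPhi$, the product $\mPhi\vx$ simply selects the entries $x_{\tau_i}$, so only $M$ sensing operations (and no inter-node exchange of samples) are required.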
However, the sampling patterns in these schemes are either fixed or randomly generated, and thus not well adapted to the measured signal. Moreover, it is generally hard to guarantee the recovery of a faithful representation of $\vx$, because the sparsity of $\vx$ in the dictionary $\mat{\Pi}$ usually changes over time and may not satisfy the theoretical requirements of CS [@Candes:2008pi]. Since the statistics of $\vx$ are often unknown and varying over time, it may be advantageous to consider the decomposition $$\begin{aligned} \vx=\mPsi^t \valpha,\end{aligned}$$ where $\mPsi^t$ is the time-varying model and $\valpha\in\R^K$ is a low-dimensional representation of $\vx$ with $K\ll N$. While the fact that the model $\mPsi^t$ is unknown and non-stationary forces us to learn it from the samples collected in the past, it also gives us the opportunity to optimize the sampling pattern $\mPhi^t$ according to $\mPsi^t$. Note that $\mPhi^t$ is also time-varying, as compared to the fixed pattern $\mPhi$ in Figure \[fig:intro\_all\]. This new problem statement raises new challenges. While the model $\mPsi^t$ can be learnt from the incomplete measurements $\vy$ with some effort, using an online version of principal component analysis (PCA), the sampling scheduling problem is generally combinatorial and hard to optimize. In this paper, we propose to generalize FrameSense, an algorithm that generates a near-optimal sensor placement for inverse problems [@Ranieri:2013wp]. Instead of optimizing the sensor placement, we optimize the spatio-temporal sampling pattern of the WSN. The obtained sampling pattern is generally irregular, time-varying and optimized to gather the maximum amount of information. In particular, it simultaneously exploits the intra-node (temporal) and inter-node (spatial) correlation potentially present in the data. See Figure \[fig:intro\_ss\] for a graphical illustration of the low-dimensional model and of the irregular sampling patterns. 
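As a rough illustration of how a model $\mPsi^t$ might be tracked from incomplete measurements, the sketch below uses a simple impute-then-PCA loop over a sliding window of past blocks. This is our own toy scheme, not necessarily the online PCA variant used by DASS, and all sizes, names and the synthetic data model are invented for the example.

```python
# Toy subspace tracking from incomplete blocks: impute missing entries with the
# current subspace estimate, then refresh the PCA over a sliding window.
import numpy as np

rng = np.random.default_rng(3)
N, K, M, T = 40, 3, 15, 120
t = np.arange(N) / N
true_basis = np.stack([np.cos(2 * np.pi * (k + 1) * t) for k in range(K)])  # hidden truth

Psi = np.linalg.qr(rng.standard_normal((N, K)))[0]   # initial subspace guess
window = []
for _ in range(T):
    x = rng.standard_normal(K) @ true_basis          # new (unobserved) block
    tau = rng.choice(N, size=M, replace=False)       # entries actually measured
    y = x[tau]
    alpha, *_ = np.linalg.lstsq(Psi[tau], y, rcond=None)
    x_hat = Psi @ alpha                              # impute the missing entries
    x_hat[tau] = y                                   # keep the measured ones
    window.append(x_hat)
    window = window[-50:]                            # sliding window of past blocks
    if len(window) >= 2 * K:                         # refresh the PCA model
        _, _, Vt = np.linalg.svd(np.array(window), full_matrices=False)
        Psi = Vt[:K].T

# how well does the learned Psi span the hidden basis?
B = true_basis.T                                     # N x K
resid = np.linalg.norm(B - Psi @ (Psi.T @ B)) / np.linalg.norm(B)
print(resid)
```

After enough blocks, the learned subspace approximately spans the hidden basis even though each block is observed at only $M$ of $N$ locations.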
Note that the proposed method deviates from the recent sparse sensing schemes [@Quer2012; @Wu2012] because the sampling pattern is neither fixed nor random, but dynamically adapted to the signal’s low-dimensional model. It is worth mentioning that the proposed method requires essentially no on-sensor computation and no inter-node communication. Each sensor node simply collects measurements according to a designated sampling pattern and transmits the data to a common server. The server receives all the data from one or multiple sensor nodes and performs the signal reconstruction. This is in accordance with the setup of distributed source coding [@Viswanatha2012], where no inter-node communication is used. Hence, the proposed algorithm provides an alternative solution to the distributed coding problem: the communication rate is reduced and the reconstruction error is bounded without using any inter-node communication. The proposed algorithm is tested on different sets of real-world data, outperforming both the traditional sensing schemes and the state-of-the-art sparse sensing schemes in terms of the reconstruction quality of $\vx$ for a fixed amount of measurements. Given the aforementioned characteristics, we call the proposed method “*Distributed Adaptive Sparse Sensing*”, or *DASS*. Problem Formulation {#sec2} =================== ![Upper plot: optimized temporal sampling pattern of DASS. Lower plot: traditional sensing scheme, where samples are collected regularly in time. The subsampling factor is $\gamma=1/3$, since we collect 4 samples instead of 12 in each block.[]{data-label="sampling"}](./pic/sampling.pdf){height="0.19\newhcol"} In this section, we first state the sampling scheduling problem for a WSN having just one sensor. At the end of the section, we generalize the problem statement to a WSN with multiple nodes. 
We consider a block-based sensing strategy, meaning that the WSN samples the field for a certain time $T$, at the end of which we reconstruct the vector $\vx$ from the collected samples. Note that the block length is known and defined a priori. For each temporal block, the discrete physical field $\vx$ is composed of $N$ samples of $x(\vp,t)$, $$\begin{aligned} \vx=\left[x(\vp,0),x(\vp,\Delta_T),\cdots,x(\vp,(N-1)\Delta_T)\right]^\top,\end{aligned}$$ where $\vp$ indicates the sensor node location and $\Delta_T$ is the sampling period. Note that $\Delta_T$ determines the desired temporal resolution, and its inverse is the sampling frequency, $f=1/\Delta_T$. The temporal duration of a block is $T=N\Delta_T$, which is also the maximum delay this sensing scheme incurs: the larger $T$, the longer the delay. See Figure \[sampling\] for a graphical representation of the physical field and its discrete version $\vx$. We denote by $\widetilde \vx$ the reconstructed physical field obtained from the WSN samples. In a sparse sampling scenario, we aim at reconstructing $\widetilde{\vx}$ from just a subset of the elements of $\vx$. More precisely, we measure $M$ elements out of $N$, where $M<N$. The set of indices $\boldsymbol\tau^t=\{\tau^t_i\}_{i=1}^M$ denotes the indices of these $M$ samples, and it is chosen adaptively according to the previous measurements. Note that the sampling pattern $\boldsymbol\tau^t$ uniquely determines the sampling matrix $\mPhi^t\in\R^{M\times N}$: $$\begin{aligned} \mPhi^t_{i,j}= \begin{cases} 1\quad\text{ if } j=\tau^t_i \\ 0 \quad \text{ otherwise} \end{cases}.\nonumber\end{aligned}$$ It is important to underline that $\mtau^t$ is time-varying and potentially changes at every block to adapt to the signal model $\mPsi^t$. Figure \[sampling\] shows an example of sampling patterns where $\boldsymbol\tau^t$ changes for each block. We define $f_s=\frac{M}{N}\cdot f=\gamma f$ to be the average sampling frequency of the sensor node[^3]. 
The subsampling rate $\gamma = f_s/f<1$ is an important figure of merit for a sparse sampling algorithm—the lower the $\gamma$, the lower the energy consumed for sensing. The measured signal $\vy\in\R^{M}$ is defined as $$\begin{aligned} \vy=\mPhi^t\vx +\vomega,\end{aligned}$$ where $\vomega$ represents the measurement noise, which is modeled as additive white Gaussian noise (AWGN) with variance $\sigma^2$. Note that it is reasonable to model the noise as AWGN since thermal effects [@Johnson1928] and/or quantization [@Widrow2008] are often the dominating terms[^4].

The target of DASS is to optimize the sampling pattern $\mPhi^t$ at the $t$-th block according to $\mPsi^t$ such that we collect the minimum number of samples $M$ while still being able to recover the original signal precisely. Since we modeled the noise as AWGN, we assess the quality of the recovered signal using the root-mean-square error (RMSE): $$\begin{aligned} \epsilon=\frac{1}{\sqrt{N}}\|{\vx}-\widetilde{\vx}\|_2. \nonumber\end{aligned}$$

![Signals of multiple distributed sensor nodes can be concatenated into a single signal stream at the server for recovery.[]{data-label="app3"}](./pic/app3.pdf){height="0.19\newhcol"}

*Multiple-node scenario*: while the above problem statement focuses on a single-sensor scenario for simplicity of notation, it is simple to generalize it to a WSN with more than one sensor node. More precisely, we assume that the nodes are synchronized, so that we can concatenate all the measured blocks at the different locations $\vp_i$ into a single signal block $\vx$; see Figure \[app3\] for an example. The sparse sampling problem is then generalized to the spatio-temporal domain, meaning that we have to choose *when and where* to sample in order to collect the maximum amount of information.
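A minimal simulation of the measurement model and of the RMSE metric may help fix ideas; the signal, pattern and noise level below are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative block: N samples of a smooth field, M noisy measurements.
N, M = 144, 16
x = np.sin(2 * np.pi * np.arange(N) / N)      # stand-in physical field
tau = rng.choice(N, size=M, replace=False)    # sampling pattern tau^t
sigma = 0.05                                  # AWGN standard deviation

# Measurement model y = Phi x + omega; applying Phi just selects x[tau].
y = x[tau] + sigma * rng.standard_normal(M)

def rmse(x, x_rec):
    """epsilon = (1/sqrt(N)) * ||x - x_rec||_2."""
    return np.linalg.norm(x - x_rec) / np.sqrt(len(x))
```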
Building Blocks {#sec3}
===============

The proposed method is graphically represented in Figure \[framework\] and is based on the three building blocks described in this section:

- The desired signal $\widetilde{\vx}$ is reconstructed using the collected measurements $\vy$, the signal model $\mPsi^t$ and the estimated mean $\overline{\vx}$ (Section \[sec3.1\]).
- The measurements $\vy$ are used to update the approximation model $\mPsi^t,\overline{\vx}$ (Section \[sec3.2\]).
- The sampling pattern for the next temporal block $\mtau^{t+1}$ is optimized according to $\mPsi^t$ and is transmitted back to the sensor node(s) (Section \[sec3.3\]).

![Representation of the operations of DASS in a WSN. The sensor node sends the measured data to the processing server and receives the sampling pattern for the next temporal block. The server uses the data to update the signal model $\mPsi^t$, reconstructs the discrete physical field $\widetilde{\vx}$ and optimizes the sampling pattern $\mtau^{t+1}$ for the sensor nodes. Note that $\mtau^{t+1}$ uniquely determines $\mPhi^{t+1}$.[]{data-label="framework"}](./pic/framework.pdf){height="0.13\newhcol"}

The overhead of DASS on the sensor node is minimal in practice. First, the sampling pattern $\mtau^{t}$ has a sparse structure and hence can be encoded efficiently with a few bytes per block; the extra communication cost for receiving $\mtau^{t}$ is therefore minimal. Second, all the algorithmic complexity of DASS is on the server side, while the sensor nodes only need to sample and transmit the signal according to the sampling pattern received from the server. Therefore, the CPU and memory requirements of the sensor node are minimal. In what follows, we analyze each block, explaining the challenges and the proposed solution.

Signal Approximation and Reconstruction {#sec3.1}
---------------------------------------

Due to the nature of most physical fields, a signal block is partially predictable by analyzing past data.
In many cases, this predictability can be expressed by assuming that the signal belongs to a $K$-dimensional linear subspace $\mPsi^t\in\R^{N\times K}$. Such a subspace approximates $\vx$ as $$\begin{aligned} \widehat \vx=\mPsi^t\valpha +\overline{\vx}, \label{eq5.1.1}\end{aligned}$$ where $\widehat{\vx}$ is the approximated field, $\valpha\in\R^{K}$ is the vector of projection coefficients and $\overline{\vx}$ is the mean of $\vx$. If the modeling subspace $\mPsi^t$ is well designed and $K$ is sufficiently large compared to the complexity of $\vx$, the signal realization $\vx$ can be accurately expressed with just the $K\ll N$ coefficients contained in $\valpha$.

To find such a subspace, we analyze all the past signal realizations and estimate at the $t$-th block the $K$-dimensional subspace $\mPsi^t$ that minimizes the expected approximation error $$\begin{aligned} \epsilon_a=\frac{1}{\sqrt{N}}\mathbb{E}\left(\|\vx-\widehat\vx\|_2\right). \nonumber\end{aligned}$$ This is a dimensionality reduction problem that can be solved by the well-known technique of *principal component analysis (PCA)*. It has an analytic solution, but it requires the covariance matrix $\mC_{\vx}$. Unfortunately, in our scenario it is hard to estimate $\mC_{\vx}$, since we have access to only $M$ out of the $N$ elements of $\vx$. However, if the $M$ sampled elements vary at each temporal block $t$, we may collect enough information to obtain a sufficiently precise estimate of $\mC_{\vx}$. We present a set of methods to estimate $\mC_{\vx}$ in Section \[sec3.2\].

Note that the approximation through $\mPsi^t$ exploits the correlation among the elements of $\vx$: the higher the correlation, the lower the subspace dimensionality, and hence the lower the number of parameters $K$ and of necessary measurements $M$. Hence, one of the key aspects is the choice of the signal block length $T$.
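The PCA step can be sketched as follows, assuming for illustration that a set of past blocks is available as the rows of an array `X` (the helper name and the toy data are ours):

```python
import numpy as np

def pca_model(X, K):
    """Estimate the model (Psi, x_bar) from past blocks.

    X is an L-by-N array whose rows are past signal blocks; Psi holds
    the K leading eigenvectors of the sample covariance C_x as columns.
    """
    x_bar = X.mean(axis=0)
    C = np.cov(X, rowvar=False)         # estimate of C_x
    w, V = np.linalg.eigh(C)            # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:K]     # indices of the K largest
    return V[:, order], x_bar

# Blocks lying (noiselessly) in a 2-D subspace are captured with K = 2.
rng = np.random.default_rng(1)
t = np.arange(64) / 64
basis = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
X = rng.standard_normal((50, 2)) @ basis
Psi, x_bar = pca_model(X, K=2)
alpha = Psi.T @ (X[0] - x_bar)          # projection coefficients
```

With exactly low-dimensional data as above, `Psi @ alpha + x_bar` reproduces the block `X[0]` up to numerical precision.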
In fact, it should be chosen such that the delay of the WSN respects the design specification while maximizing the correlation among the blocks. For example, consider a sensor measuring the outdoor light intensity: the signal naturally has diurnal patterns. If we choose a block length of one hour, the correlation between signal blocks is usually weak. On the other hand, if we choose a block length of one day, the correlation is stronger due to the aforementioned patterns.

Once the approximation model $\mPsi^t$ is estimated, the task of recovering the signal $\widetilde{\vx}$ amounts to estimating $\valpha$ from the measurements $\vy$ under the approximated signal model $$\begin{aligned} \vy\approx\mPhi^t\widehat{\vx}+\vomega=\mPhi^t(\mPsi^t\valpha+\overline{\vx})+\vomega. \label{eq:meas}\end{aligned}$$ In general, we can recover $\valpha$ by solving an ordinary least squares (OLS) problem: $$\widetilde{\valpha} = {\arg\min_{\valpha}}\|\vy-\mPhi^t\overline{\vx}-\mPhi^t\mPsi^t\valpha\|_2^2, \label{eq5.2.4}$$ which has the following analytic solution: $$\begin{aligned} \widetilde{\valpha}=(\mPhi^t\mPsi^t)^\dagger(\vy-\mPhi^t\overline{\vx}).\end{aligned}$$ Here $(\mPhi^t\mPsi^t)^\dagger$ is the Moore–Penrose pseudoinverse of $\mPhi^t\mPsi^t$, defined for a generic matrix $\mA$ with full column rank as $\mA^\dagger=(\mA^*\mA)^{-1}\mA^*$. The reconstruction algorithm is straightforward and is described in Algorithm \[algoreconstruct\]. The following theorem states the conditions necessary to find a unique solution and provides an upper bound on the reconstruction error, which will be fundamental when optimizing the sampling pattern.

Consider a sensor network measuring a physical field as in (\[eq:meas\]), where the measurements are corrupted by i.i.d. Gaussian noise with variance $\sigma^2$.
If $M\geq K$, $\mPsi^t$ is formed by orthonormal columns and $\operatorname{rank}(\mPhi^t\mPsi^t)=K$, then $\widetilde{\vx}$ can be uniquely determined using Algorithm \[algoreconstruct\]. The reconstruction error is bounded by $$\epsilon^2=\frac{1}{N}\|\vx-\widetilde{\vx}\|_2^2\leq\frac{1}{\lambda_K}\epsilon_a^2+\sigma^2\sum^K_{k=1}\frac{1}{\lambda_k}, \label{eq5.2.6}$$ where $\epsilon_a$ is the approximation error due to the signal model $\mPsi^t$ and $\lambda_i$ are the eigenvalues of ${\mPsi^t}^*{\mPhi^t}^*\mPhi^t\mPsi^t$ sorted in decreasing order. \[theorem1\]

Since the Gaussian noise is independent of the approximation error, we can treat the two terms separately. Moreover, it is sufficient to compute the error on the estimate of $\valpha$, given the orthonormality of the columns of $\mPsi^t$. For the approximation error $\epsilon_a$, we look at the worst-case scenario with the following optimization problem: $$\begin{aligned} &\max \quad \|(\mPhi^t\mPsi^t)^\dagger\mPhi^t(\vx-\widehat{\vx})\|^2_2 \nonumber\\ &\text{subject to}\quad \frac{1}{N}\|\vx-\widehat{\vx}\|^2_2=\epsilon_a^2 \nonumber,\end{aligned}$$ whose solution is proportional to the largest eigenvalue of $\left({\mPsi^t}^*{\mPhi^t}^*\mPhi^t\mPsi^t\right)^{-1}$. More precisely, it is possible to show that the worst-case approximation noise equals $\frac{1}{\lambda_K}\epsilon_a^2$. For the white noise, we use a previous result given in [@Fickus:2011vq] to conclude the proof.

Input: $\mPsi^t$, $\overline{\vx}$, $\mtau^t$ and $\mPhi^t$.
Output: $\widetilde{\vx}$.
Measure the signal $\vy$ according to $\mtau^t$.
$\widetilde{\vx}=\mPsi^t(\mPhi^t\mPsi^t)^\dagger(\vy-\mPhi^t\overline{\vx})+\overline{\vx}$.

The upper bound on the total error $\epsilon$ is a function of both the approximation error $\epsilon_a$ and the measurement noise. The former term depends on the number of parameters $K$: when $K=N$ we have $\epsilon_a=0$, and the error grows as we decrease $K$.
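A minimal sketch of Algorithm \[algoreconstruct\], assuming $\mPsi^t$ has orthonormal columns and $\operatorname{rank}(\mPhi^t\mPsi^t)=K$ as required by Theorem \[theorem1\] (variable names and the toy dimensions are ours):

```python
import numpy as np

def reconstruct(y, Psi, x_bar, tau):
    """OLS recovery of x from subsampled measurements.

    x_tilde = Psi (Phi Psi)^+ (y - Phi x_bar) + x_bar, where Phi
    selects the rows of Psi indexed by the sampling pattern tau.
    """
    A = Psi[tau, :]                                # Phi^t Psi^t
    alpha = np.linalg.pinv(A) @ (y - x_bar[tau])   # OLS coefficients
    return Psi @ alpha + x_bar

# Noiseless sanity check: a signal lying exactly in the model subspace
# is recovered exactly whenever rank(Phi Psi) = K.
rng = np.random.default_rng(2)
N, K, M = 64, 3, 8
Psi = np.linalg.qr(rng.standard_normal((N, K)))[0]   # orthonormal columns
x_bar = rng.standard_normal(N)
x = Psi @ rng.standard_normal(K) + x_bar
tau = rng.choice(N, size=M, replace=False)
x_tilde = reconstruct(x[tau], Psi, x_bar, tau)
```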
However, the rate at which the error increases depends on the spectrum of $\mC_{\vx}$. In fact, if $\vx$ has elements that are highly correlated, a small $K$ may be sufficient to model $\vx$ with a small approximation error. The latter term can be controlled directly by optimizing the sampling pattern. More precisely, we cannot reduce $\sigma$, but we can reduce the amplification due to the spectrum $\lambda_k$ through an optimization of the sampling matrix $\mPhi^t$. Note that the part involving $\epsilon_a$ depends only on the smallest eigenvalue because we have no guarantee that the approximation error *spreads* over all the eigenvectors of $\mPhi^t\mPsi^t$. In fact, the worst-case scenario is the approximation error lying in the same direction as the eigenvector with the smallest eigenvalue, so that $\epsilon_a$ is maximally amplified.

Compared to the methods based on CS, our approach based on a low-dimensional model and OLS has the following advantages: i) the solution is easy to compute and requires a single matrix inversion; ii) it enables an analysis of the reconstruction error and a consequent optimization of the sampling pattern $\mtau^t$ such that $\epsilon$ is minimized.

Learning from Incomplete Data Over Time {#sec3.2}
---------------------------------------

In Section \[sec3.1\], we highlighted some challenges regarding the estimation of the covariance matrix $\mC_{\vx}$ — a fundamental step in determining the approximation model $\mPsi^t$. Most of the challenges derive from the lack of a sufficiently large set of realizations of $\vx$, which are needed to estimate $\mC_{\vx}$. First, there is virtually no past data for a newly installed WSN. Second, $\mC_{\vx}$ is likely to vary over time. Third, a large fraction of data points ($1-\gamma$) is not available for the estimation, since we collect sparse measurements.
Therefore, we need an on-line algorithm that estimates and adaptively updates the covariance matrix $\mC_{\vx}$ from incomplete data.

Input: $\vy$, $L$.
Output: $\mPsi^t,\overline{\vx}$.
Interpolate $\vy\to \vx_{\textrm{interp}}$.
Insert $\vx_{\textrm{interp}}$ into a buffer storing the most recent $L$ blocks.
Estimate $\mathbf{C}_{\vx}$ and $\overline{\vx}$ from the buffer.
$\mPsi^t$ is formed by the first $K$ eigenvectors of $\mathbf{C}_{\vx}$, ordered by decreasing eigenvalues.

Input: $\vy$, $L$, $\mPsi^{t-1}$, $\boldsymbol\lambda^{t-1}$, $\overline{\vx}^{t-1}$.
Output: $\mPsi^t$, $\boldsymbol\lambda^t$, $\overline{\vx}^t$.
Interpolate $\vy\to \vx_{\textrm{interp}}$.
$\mathbf{a}={\mPsi^{t-1}}^*(\vx_{\textrm{interp}}-\overline{\vx}^{t-1})$.
$\mathbf{b}=\left(\mPsi^{t-1}\mathbf{a}+\overline{\vx}^{t-1}\right)-\vx_{\textrm{interp}}$, and then normalize $\mathbf{b}$.
$c=\mathbf{b}^*(\vx_{\textrm{interp}}-\overline{\vx}^{t-1})$.
$\mathbf{D}=\frac{1}{L+1}\left[ \begin{array}{cc} \textrm{diag}(\boldsymbol\lambda^{t-1}) & \boldsymbol 0 \\ \boldsymbol 0^* & 0 \\ \end{array} \right]+\frac{L}{(L+1)^2}\left[ \begin{array}{cc} \mathbf{a}\mathbf{a}^* & c\mathbf{a} \\ c\mathbf{a}^* & c^2 \\ \end{array} \right]$.
Solve the eigenproblem $\mathbf{D}=\mathbf{R}\cdot\textrm{diag}(\boldsymbol\lambda')\cdot\mathbf{R}^{-1}$, with $\boldsymbol\lambda'$ sorted in decreasing order.
$\boldsymbol\Psi'=\left[\mPsi^{t-1}\ \mathbf{b}\right]\cdot \mathbf{R}$.
Update $\mPsi^t$ as the first $K$ columns of $\boldsymbol\Psi'$.
Update $\boldsymbol\lambda^t$ as the first $K$ values of $\boldsymbol\lambda'$.
Update $\overline{\vx}^t$ as $\left(L\overline{\vx}^{t-1}+\vx_{\textrm{interp}}\right)/(L+1)$.

The main difficulty is the lack of complete realizations of $\vx$. Two strategies are generally considered to overcome this problem. The first one estimates from $\vy$ an interpolation $\vx_{\text{interp}}$ using classic interpolation methods, such as linear, polynomial or spline interpolation.
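The incremental update steps listed above can be sketched directly in code (a minimal numpy sketch; we assume the interpolated block is already available and that the current model has orthonormal columns, so variable names and the toy data are ours):

```python
import numpy as np

def incremental_pca(x, Psi, lam, x_bar, L, K):
    """One incremental-PCA model update with a single interpolated
    block x, without storing past blocks (after Hall et al.)."""
    a = Psi.T @ (x - x_bar)
    b = (Psi @ a + x_bar) - x               # residual off the subspace
    nb = np.linalg.norm(b)
    if nb > 1e-12:
        b = b / nb
    c = b @ (x - x_bar)
    # Rank-one update of the (K+1)-dimensional eigenproblem.
    D = np.zeros((K + 1, K + 1))
    D[:K, :K] = np.diag(lam) / (L + 1)
    v = np.concatenate([a, [c]])
    D += (L / (L + 1) ** 2) * np.outer(v, v)
    w, R = np.linalg.eigh(D)
    order = np.argsort(w)[::-1]             # descending eigenvalues
    Psi_new = np.column_stack([Psi, b]) @ R[:, order]
    return Psi_new[:, :K], w[order][:K], (L * x_bar + x) / (L + 1)

# Since b is unit-norm and orthogonal to the columns of Psi, and R is
# orthogonal, the updated basis keeps orthonormal columns.
rng = np.random.default_rng(3)
N, K, L = 32, 2, 30
Psi = np.linalg.qr(rng.standard_normal((N, K)))[0]
lam = np.array([1.0, 0.5])
Psi2, lam2, xb2 = incremental_pca(rng.standard_normal(N), Psi, lam,
                                  np.zeros(N), L, K)
```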
The second strategy skips the estimation of $\mC_{\vx}$ and attempts to perform the principal component analysis directly on the data with missing entries; see [@Raiko2008]. In our experience, the second class of algorithms performs worse for our purposes. Therefore, we focus our attention on the interpolation methods. More precisely, we analyze two different methods that adaptively learn and update the approximation model $\mPsi^t$ from the interpolated signal $\vx_{\textrm{interp}}$: Algorithm \[updater1\] and Algorithm \[updater2\].

Algorithm \[updater1\] uses a FIFO buffer to store the most recent $L$ blocks: whenever a new block is added to the buffer, the oldest block is discarded. As the approximation model is estimated according to the signal realizations in the buffer, this scheme is able to capture the variation of the signal statistics over time. Algorithm \[updater2\] adaptively updates the approximation model via a technique called incremental PCA [@Hall1998]. It does not keep signal realizations in memory; instead, it stores the largest $K$ eigenvalues of $\mC_{\vx}$, $\boldsymbol\lambda=\{\lambda_i\}$ for $i=1,\cdots,K$. This method requires significantly less memory ($K$ versus $N\times L$) and shows better performance when compared to Algorithm \[updater1\]. Note that in both algorithms the choice of $L$ depends on the variability of the signal statistics for each specific application. In practice, we can cross-validate this parameter to find a suitable value (e.g., $L=30$). We discuss and compare the performance of these two algorithms in the experimental results.

Sampling Scheduling Algorithm {#sec3.3}
-----------------------------

According to Theorem \[theorem1\], minimizing the overall error $\epsilon$ is equivalent to finding the optimal sampling pattern $\boldsymbol\tau$ that minimizes (\[eq5.2.6\]).
In this paper, we assume that the model $\mPsi^t$ is sufficiently precise and the dimension $K$ is large enough so that the term due to the white noise $\sigma$ is dominant. Therefore, we would like to find the sampling pattern that minimizes the following cost function, $$\begin{aligned} \Theta(\widetilde{\mPsi}^t)=\sum_{k=1}^K\frac{1}{\lambda_k},\end{aligned}$$ where $\lambda_k$ are the eigenvalues of $(\widetilde{\mPsi}^t)^*\widetilde{\mPsi}^t$ and $\widetilde{\mPsi}^t=\mPhi^t\mPsi^t$. Note that this optimization is equivalent to finding the $M$ rows of $\mPsi^t$ that form the submatrix $\widetilde{\mPsi}^t$ with the smallest $\Theta(\widetilde{\mPsi}^t)$. However, it has already been shown that such an optimization is NP-hard [@Das:2008uc]; the exhaustive search has complexity $\mathcal{O}\left(\binom{N}{M}\right)$, which is prohibitively high in practice.

In this section, we investigate approximate solutions to the scheduling problem that can be implemented efficiently. Such solutions are usually hard to find because the cost function $\Theta(\widetilde{\mPsi}^t)$ has many local minima that can be arbitrarily far away from the global minimum. Therefore, proxies of $\Theta(\widetilde{\mPsi}^t)$ are usually chosen as the cost function of the approximate algorithm, with a twofold aim: (i) inducing an indirect minimization of $\Theta(\widetilde{\mPsi}^t)$ and (ii) being efficiently optimizable by standard techniques, such as convex optimization or greedy algorithms. In this paper, we extend our recent work [@Ranieri:2013wp] on optimal sensor placement to solve the sampling scheduling problem. In fact, if we define the linear inverse problem to be the estimation of $\vx$ from $\vy$, then the sensor scheduling problem is equivalent to sensor placement.
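The frame-potential-based worst-out greedy described next (after [@Ranieri:2013wp]) and the cost $\Theta$ can be sketched as follows; this is an illustrative implementation in which each step removes the row whose removal leaves the smallest frame potential (function names and the toy model are ours):

```python
import numpy as np

def frame_potential(Psi, S):
    """FP(Psi, S): sum over i, j in S of |<psi_i, psi_j>|^2."""
    G = Psi[list(S)] @ Psi[list(S)].T
    return float(np.sum(G ** 2))

def greedy_schedule(Psi, M):
    """Worst-out greedy: repeatedly drop the row whose removal leaves
    the smallest frame potential, until only M rows remain."""
    S = set(range(Psi.shape[0]))
    while len(S) > M:
        i_star = min(S, key=lambda i: frame_potential(Psi, S - {i}))
        S.remove(i_star)
    return sorted(S)

def theta(Psi, tau):
    """Theta = sum_k 1/lambda_k, with lambda_k the eigenvalues of
    (Phi Psi)^* (Phi Psi); it bounds the noise amplification."""
    lam = np.linalg.eigvalsh(Psi[tau].T @ Psi[tau])
    return float(np.sum(1.0 / lam))

# Toy model: random orthonormal Psi, schedule M out of N sampling slots.
rng = np.random.default_rng(4)
N, K, M = 24, 3, 6
Psi = np.linalg.qr(rng.standard_normal((N, K)))[0]
tau = greedy_schedule(Psi, M)
```

In practice one would also compute `theta` for the uniform schedule and keep whichever pattern gives the smaller predicted error, as the text describes.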
The algorithm in [@Ranieri:2013wp] optimizes the sensor placement by a greedy minimization of the frame potential [@Casazza:2006wl], which is defined as $$\begin{aligned} \operatorname{FP}(\mPsi^t,\calS)=\sum_{i,j\in\calS}|\langle\mpsi_i,\mpsi_j\rangle|^2, \label{eq:cost_function}\end{aligned}$$ where $\mpsi_i$ is the $i$-th row of $\mPsi^t$ and $\mathcal{S}$ contains the set of candidate locations for sensing. Under some mild conditions, we proved that such an algorithm is near-optimal w.r.t. the RMSE of the solution.

In this work, we propose a sampling scheduling algorithm based on an equivalent greedy “worst-out” procedure: as input we have the signal model $\mPsi^t$, and we initially consider the identity matrix of size $N$ as the sampling matrix $\mPhi^{t+1}$. At each iteration, we remove the row of $\mPhi^{t+1}$ that maximizes (\[eq:cost_function\]). After $N-M+1$ iterations, we are left with an optimized $\mPhi^{t+1}$ that has only $M$ nonzero elements and near-optimal performance when reconstructing $\vx$ from the measurements $\vy$. Note that if $\mPsi^t$ satisfies the conditions given in [@Ranieri:2013wp], the obtained sampling matrix $\mPhi^{t+1}$ stably recovers $\vx$ from the measurements $\vy$. Furthermore, since a uniform sampling schedule is a commonly used strategy that yields good performance in real applications [@Wu2012], we compare it with the result returned by the greedy algorithm and opt for the one with the smaller reconstruction error. Note that this error is approximated by the bound provided by Theorem \[theorem1\]. A detailed description of the overall algorithm is given in Algorithm \[alg:greedy\].

Input: $\boldsymbol{\Psi}^t$, $M$.
Output: $\boldsymbol\tau^{t+1}$ for the next temporal block.
Initialize the set of removed sampling indices: $\mathcal{L}=\emptyset$.
Initialize the set of selected sampling indices: $\mathcal{S}=\{1,\cdots,N\}$.
Find the first two rows to eliminate, $\mathcal{L}=\arg \max_{i,j\in \mathcal{S}} |\left<\boldsymbol{\psi}_i, \boldsymbol{\psi}_j\right>|^2$.
Update $\mathcal{S}=\mathcal{S}\backslash\mathcal{L}$.
Find the optimal row to remove, $i^*=\arg\min_{i\in \mathcal{S}} \operatorname{FP}(\mPsi^t,\mathcal{S}\backslash i)$.
Update the set of removed indices, $\mathcal{L}=\mathcal{L}\cup i^*$.
Update the set of selected indices, $\mathcal{S}=\mathcal{S}\backslash i^*$.

Comparisons with Baseline Methods {#sec4}
=================================

In this section, we briefly summarize the state-of-the-art methods for the sparse sensing problem. They will serve as the baseline for comparisons in Section \[sec5\]. The first category of methods [@Quer2012; @Wu2012] is based on compressive sensing (CS). With the notation introduced in Section \[sec2\], $\vx$ is the unknown signal, $\vy$ contains the incomplete measurements, and $\boldsymbol\Phi$ is a sparse sampling matrix with only $M$ nonzero elements. We assume $\vx$ to be sparse w.r.t. a dictionary $\boldsymbol\Pi$. More precisely, we have $\vx= \boldsymbol\Pi \vs$, where $\vs$ has just a few nonzero coefficients, that is, $\|\vs\|_0\ll N$ (see [@Candes2006] for more details). By approximating the $\ell_0$ norm with the $\ell_1$ norm [@Candes2006], the reconstruction method for the noiseless case is $$\min_{\vs\in \mathbb{R}^N}\|\vs\|_1,\ \textrm{s.t.}\ \ \vy=\boldsymbol\Phi \boldsymbol\Pi \vs, \label{eq4.1}$$ while the one for the noisy case is $$\min_{\vs\in \mathbb{R}^N}\|\vs\|_1,\ \textrm{s.t.}\ \ \|\vy-\boldsymbol\Phi \boldsymbol\Pi \vs\|_2\leq \xi, \label{eq4.2}$$ where $\xi$ measures the energy of the noise. Problems (\[eq4.1\]) and (\[eq4.2\]) are both convex and can be solved in polynomial time [@Candes2006] using various solvers, generally iterative or based on convex optimization. In both methods, we use uniform sampling as the sampling scheduler — $\tau_j^t=\lfloor j N/M\rfloor$.
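The noiseless $\ell_1$ problem (\[eq4.1\]) can be recast as a linear program and handed to an off-the-shelf solver; the sketch below uses the standard splitting $\vs=\mathbf{u}-\mathbf{v}$ with $\mathbf{u},\mathbf{v}\geq 0$ (the dictionary, sparsity and dimensions are illustrative placeholders):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||s||_1 s.t. A s = y, via the LP reformulation s = u - v:
    minimize 1^T (u, v) subject to [A, -A](u, v) = y and u, v >= 0."""
    M, N = A.shape
    res = linprog(c=np.ones(2 * N),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]

# Toy setup: random orthonormal dictionary Pi, 2-sparse coefficients.
rng = np.random.default_rng(5)
N, M = 40, 16
Pi = np.linalg.qr(rng.standard_normal((N, N)))[0]
s_true = np.zeros(N)
s_true[[3, 17]] = [1.5, -2.0]
rows = rng.choice(N, size=M, replace=False)   # uniform-at-random pattern
A = Pi[rows]                                  # Phi @ Pi
y = A @ s_true                                # noiseless measurements
s_hat = basis_pursuit(A, y)
```

The LP solution always satisfies the measurement constraint and has an $\ell_1$ norm no larger than that of the true sparse coefficient vector, since the latter is feasible.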
The second category of baseline methods [@Quer2012] is based on learning the $K$-dimensional time-varying model $\mPsi^t$ and on a reconstruction via OLS as in Algorithm \[algoreconstruct\]. We use two sampling schedulers: a uniform sampling, and a random sampling where $\tau_j^t$ is drawn uniformly at random. Table \[baselines\] lists all the methods (including DASS) that are evaluated in the experiments. To have a fair comparison, $\boldsymbol\Pi$ in the CS-based methods and ${\boldsymbol\Psi}^t$ in the OLS-based methods are both learnt[^5] by the incremental PCA described in Algorithm \[updater2\].

Evaluations of DASS and Sparse Sensing Methods {#sec5}
==============================================

In this section we evaluate the performance of DASS and compare it with the state-of-the-art sparse sensing methods. Besides the experiments on the single-node case, we also verify DASS in the multi-node case, where nearby sensor nodes measure spatially correlated signals. We use two real-world meteorological datasets as the ground truth, namely [*Payerne*]{} and [*Valais*]{}:

- [*Payerne*]{} is provided by [[MeteoSwiss]{}]{}[ [@meteoswiss]]{}. This dataset contains 1500 days of continuous measurements for two physical quantities (temperature and solar radiation)[^6], which are suitable for studying the long-term performance of DASS. As [[MeteoSwiss]{}]{} only deployed a few observation stations across the whole nation, we use [*Payerne*]{} for evaluating the single-node case.
- [*Valais*]{} is provided by a microclimate monitoring service provider [[@Ingelrest2010]]{}. A total of six stations are deployed in a mountain valley (Figure \[map\]), covering an area of around $18\ \textrm{km}^2$. The deployment started in March 2012 and collected 125 days of continuous temperature measurements. We use [*Valais*]{} for evaluating the multi-node case.

The two datasets are summarized in Table \[dataspec\].
For both datasets, there are 144 uniformly sampled data points for each day. We choose one day as the length of each block, that is, $N= 144$. One of the targets of this section is to evaluate DASS and compare it with other algorithms for different SNR regimes of the measurement. Since we cannot measure the true value of the physical field directly, we assume that [*Payerne*]{} and [*Valais*]{} represent the true field $\vx$. Then, we add white Gaussian noise to simulate the effect of noisy measurements. Note that the main figure of merit considered in this section is the final reconstruction error under a fixed subsampling rate $\gamma$. Since all sparse sensing schemes directly transmit the sensed samples without further data compression, two schemes with the same $\gamma$ consume the same amount of energy for sensing and communication[^7], regardless of which sensing platform is used.

![Locations of the sensor nodes that collected the dataset [*Valais*]{}.[]{data-label="map"}](./pic/map.pdf){height="0.16\newhcol"}

Components of DASS {#sec5.1}
------------------

In this section, we evaluate the key components of DASS, including the optimal choice of $K$, the cost function $\Theta(\mPhi^t\mPsi^t)$ in the sampling scheduling algorithm, and the performance of the adaptive learning algorithms. As stated in Theorem \[theorem1\], the overall reconstruction error $\epsilon$ is a function of both the approximation error $\epsilon_a$ and the cost function $\Theta(\mPhi^t\mPsi^t)$. Generally, $\epsilon_a$ decreases with $K$ while $\Theta(\mPhi^t\mPsi^t)$ increases with $K$; hence there is an optimal choice of $K$ for minimizing the overall error. The optimal $K$ depends on the data statistics, the subsampling rate, and the SNR of the measurement. Figure \[plot1\] shows the optimal ratio $K/M$ for [*Payerne*]{}-temperature, obtained by cross-validation. We can see that DASS generally opts for a larger $K$ when the SNR of the measurement increases.
![Optimal ratio $K/M$ of DASS w.r.t. the SNR of the measurement, for [*Payerne*]{}-temperature. Note that $K/M$ must be smaller than 1 according to Theorem \[theorem1\].[]{data-label="plot1"}](./pic/plot1){height="0.135\newhcol"}

The greedy algorithm proposed in Section \[sec3.3\] (Algorithm \[alg:greedy\]) finds an approximate solution of the sampling scheduling problem. By Theorem \[theorem1\], $\Theta(\mPhi^t\mPsi^t)$ determines the reconstruction error. Table \[condtab\] shows the value of $\Theta(\mPhi^t\mPsi^t)$ achieved by different sampling scheduling methods on different datasets. Note that a higher value indicates worse stability w.r.t. noise. We can see that the greedy algorithm achieves the best result for the two datasets. In particular, it is substantially better than uniform sampling for the solar radiation data. For the temperature data, since $\Theta(\mPhi^t\mPsi^t)$ of the uniform sampling strategy is already near the lower bound[^8], the greedy algorithm provides little improvement. In the next section, we demonstrate how these improvements translate into better reconstruction performance for DASS.

DASS is designed to learn the signal statistics from past data. In practical scenarios, a long backlog of data is usually unavailable, and thus DASS should be designed to learn the model from scratch. We proposed Algorithm \[updater1\] and Algorithm \[updater2\] for this task. Figure \[plot3\] shows the learning curves of these two algorithms over three years of data. As a benchmark, we consider an offline method that learns the model from 600 days of past data, represented by the red-dotted curve. Note how Algorithm \[updater1\] and Algorithm \[updater2\] capture the signal statistics precisely. In particular, it is interesting to note that even if they use less data—the last 30 days—they are generally better than the offline method that considers 600 days of data.
This phenomenon is due to the non-stationarity of the signal model $\mPsi^t$, which is captured only by adaptive on-line algorithms. Moreover, it is also clear that Algorithm \[updater2\], based on incremental PCA, performs better than the buffer-based Algorithm \[updater1\]. In the following experiments, we consider only Algorithm \[updater2\], due to its better performance and lower memory requirements.

![Learning curves of DASS ([*Payerne*]{}-temperature, $\gamma=10\%$, SNR of the measurement=30dB): comparison of the two online learning algorithms and a one-time learning algorithm with a long backlog of past data. Note that Algorithm \[updater2\] always achieves the lowest error. []{data-label="plot3"}](./pic/plot3.pdf){height="0.15\newhcol"}

DASS versus Baseline Methods {#sec5.2}
----------------------------

Here, we compare DASS with the baseline methods introduced in Table \[baselines\], namely CS, CSN, OLS-random, and OLS-uniform. For DASS, we need to choose the optimal $K$ according to the cross-validation studied in Figure \[plot1\]; hence, we need to know the SNR of the measurement. A similar parameter tuning is necessary for CSN, where $\xi$ in Problem (\[eq4.2\]) represents the noise level. Therefore, whenever we consider the case of noisy measurements, an estimate of the SNR of the measurement is necessary to avoid degradation of the reconstruction quality. In the first experiment, we assume that the estimate of the SNR is exact.

Figure \[plot4\] shows the comparison of DASS, OLS-uniform, OLS-random, CS and CSN, for both temperature and solar radiation data. First, note that OLS-uniform generally performs better than the two CS-based schemes, especially in the low-SNR regime. In the high-SNR regime ($>35$ dB), OLS-uniform, CS and CSN tend to perform the same. Second, the poor performance of OLS-random indicates that random sampling is not a valid sampling strategy for either temperature or solar radiation signals.
Third, while DASS and OLS-uniform perform almost equivalently for temperature data, DASS is substantially better for solar radiation data. This fact is in accordance with the analysis of $\Theta(\mPhi^t\mPsi^t)$ given in Table \[condtab\]: if $\Theta(\mPhi^t\mPsi^t)$ under uniform sampling is large, then the sampling scheduling algorithm of DASS (Algorithm \[alg:greedy\]) significantly improves the effectiveness of sensing while preserving the average sampling rate.

In practice, the estimate of the noise level might not be exact. Here, we study the performance deviation of the considered algorithms when there is an error in this estimate. More precisely, we fix all the parameters, vary the estimation error of the SNR, and measure the performance of the algorithms in terms of RMSE. Figure \[plot5\] shows the reconstruction error with respect to the estimation error of the SNR, where the true SNR is 30 dB. We can see that DASS performs the best, and that DASS and OLS-uniform are both stable w.r.t. errors in the SNR estimate. However, the performance of CSN degrades severely when the SNR is underestimated. According to the results given in Figure \[plot4\] and Figure \[plot5\], DASS is both more *accurate* and more *robust* when compared to the state-of-the-art sparse sensing methods.

![Reconstruction error (RMSE) w.r.t. the estimation error of the SNR of the measurement, for OLS-uniform, DASS and CSN ([*Payerne*]{}-temperature, $\gamma=10\%$). The true SNR is 30dB. Note that the proposed method is more robust to errors in the estimation of the noise power, when compared to the other methods. []{data-label="plot5"}](./pic/plot5.pdf){height="0.15\newhcol"}

DASS on Multiple Sensor Nodes {#sec5.3}
-----------------------------

As discussed in Section \[sec2\], the concept of DASS can be extended to multiple sensor nodes by concatenating the collected samples into a single vector $\vy$ and using the same strategy as in the single-node case.
Merging the data of all the spatial nodes may increase the correlation, and DASS can exploit such correlation to reduce the sampling rate. In fact, if all the measurements collected by the sensors are linearly independent, then DASS generates the same sampling schedule that would have been obtained by optimizing each sensor individually. However, if there exists some correlation between the different sensor nodes, then DASS jointly optimizes the sensor scheduling so that the total average sampling rate is reduced. We denote by *Joint DASS* the scheme that jointly reconstructs the signals of the WSN (Figure \[app3\]), and by *Independent DASS* the scheme that independently reconstructs the signals of each node. Note that in both schemes the sensor nodes operate in a purely distributed manner; the difference is that *Joint DASS* aggregates the sensed data of all nodes and processes them jointly.

Figure \[plot6\] shows the ratio between the subsampling rates of *Joint DASS* and *Independent DASS*, using the dataset [*Valais*]{}. We can see that as the number of sensor nodes increases, the required sampling rate of *Joint DASS* gradually decreases. In particular, with 4 nodes we can reduce the number of samples by 70% with *Joint DASS*. Therefore, exploiting the spatial correlation further enhances the energy reduction of DASS. On the other hand, the benefits flatten out when we consider 5 or more sensor nodes. The intuition behind this phenomenon is that the last two sensor nodes are far apart from the others and there is no more correlation to exploit; see the rightmost two nodes in Figure \[map\].

![Ratio of sampling rates between *Joint DASS* and *Independent DASS*, such that both schemes have the same reconstruction error ([*Valais*]{}, SNR of the measurement=20dB).
Note that the joint scheme always reduces the number of samples required; this is due to the spatial correlation available in the sampled data.[]{data-label="plot6"}](./pic/plot6.pdf){height="0.16\newhcol"}

![Reconstruction error (RMSE) of DASS and CSN, when the block length is $N=72$ or $N=144$ ([*Payerne*]{}-temperature, $\gamma=10\%$). Note that one day has 144 data points, so $N=72$ corresponds to half a day. The performance of DASS is only slightly affected by a change of $N$, while CSN is considerably affected in the low SNR regime. []{data-label="plot7"}](./pic/plot7.pdf){height="0.16\newhcol"}

Blocks with Weaker Correlation {#sec5.4}
------------------------------

In real applications, the block length $N$ must be chosen such that the delay of the WSN respects the design specification while the correlation between blocks is maximized. In all experiments above, $N$ is chosen so that one block represents one day, which intuitively fits signals with strong diurnal cycles, such as temperature signals. In practice, it is essential to evaluate how DASS performs with a sub-optimal $N$. In this section, we use the same dataset, [*Payerne*]{}-temperature, but we split one day into two blocks. This means that we transmit and reconstruct signals two times per day and hence the correlation between the different temporal blocks is smaller. Figure \[plot7\] compares DASS and CSN with two possible block lengths: a full day ($N=144$) and half a day ($N=72$). We can note that the performance of DASS is only slightly affected by the smaller block length, while CSN is considerably affected in the low SNR regime.

Energy Saving over Traditional Data Collection Schemes {#sec6}
======================================================

In Section \[sec5\], we have shown that DASS achieves better performance w.r.t. the state-of-the-art *sparse sensing schemes*. In this section, we study the *overall energy saving* of DASS w.r.t. the *traditional data collection schemes* [@Sadler2006; @Zordan2012].
The energy saving is particularly significant on platforms where the energy consumed for sensing is large. This is intuitive, since DASS can substantially reduce the number of sensing samples. Nevertheless, our analysis shows that this saving is also noticeable on platforms with a small sensing cost, e.g. a *Tmote-sky* node [@Werner-Allen2006]. The traditional data collection schemes typically sample the physical field at a high frequency $f$ and then compress the samples to reduce the communication rate, see Figure \[sensing\]a. In contrast, DASS collects measurements using an optimized sampling pattern and a reduced average sensing frequency $\gamma\cdot f$, where $\gamma<1$. Then, each sensor node transmits the raw data points without any compression, see Figure \[sensing\]b. In both the traditional schemes and DASS, we aim at precisely reconstructing the signal $\vx$.

![Two approaches to sensing in a WSN node. (a) Traditional scheme: collect periodic samples at a frequency $f$, compress and transmit the compressed data. (b) DASS: collect samples with an optimized temporal pattern at an average frequency $\gamma\cdot f$ and transmit the raw data.[]{data-label="sensing"}](./pic/sensing.pdf){height="0.15\newhcol"}

![Relative energy saving of DASS ($\gamma=10\%$) w.r.t. traditional data collection schemes. The saving depends on the sensing platform (value of $\mathbf{r}_s$) and the compression ratio $\mathbf{r}_c$ in traditional sensing. The “star” and “circle” markers represent the energy saving on *Tmote-sky*, when DASS achieves the same reconstruction error as traditional sensing using the LTC and DCT-LPF compression methods [@Zordan2012] (on the dataset [*Payerne*]{}-temperature).
The dashed lines indicate further savings when $\mathbf{r}_s$ increases, that is, for sensors with higher energy costs.[]{data-label="saving"}](./pic/saving.pdf){height="0.25\newhcol"}

It is clear that DASS reduces the energy consumption of the sensing operations over the traditional scheme. However, DASS may not necessarily consume less communication energy, since the compression ratio $\mathbf{r}_c$[^9] used in traditional sensing is generally better than $1/\gamma$. In fact, existing data compression schemes can achieve a compression ratio $\mathbf{r}_c$ of $1.5\sim 5$ for lossless coding [@Sadler2006], and $5\sim 50$ for lossy coding [@Zordan2012], while a typical value of $\gamma$ used in DASS is $0.1$. Hence, there is a tradeoff between the energy saved on sensing and on communication. This tradeoff depends on platform-specific parameters. In particular, we denote the energy consumption for collecting and transmitting one sample as $E_{sensor}$ and $E_{radio}$, respectively. A key figure of merit is the ratio between the two energy values, which we denote as $\mathbf{r}_s=E_{sensor}/E_{radio}$. Intuitively, the larger $\mathbf{r}_s$, the larger the energy savings obtained by DASS. For the traditional data collection schemes, we assume that the compression step has a negligible energy cost. For DASS we use a subsampling rate of $\gamma=0.1$, which means that 10% of the original signal is sampled and transmitted. Under these assumptions, we can quantitatively analyze the relative energy saving of DASS w.r.t. traditional sensing as a 2-D function of the platform parameter $\mathbf{r}_s$ and the compression ratio $\mathbf{r}_c$ achieved by the compression stage of the traditional scheme. This energy-saving function is plotted in Figure \[saving\]. We see that there is a line, indicated by the zero value, that defines where DASS is more energy-efficient than the traditional schemes.
Above the line, a WSN consumes less energy if it uses DASS, and vice versa. Note that DASS is less efficient only in scenarios where the compression ratio $\mathbf{r}_c$ is very high and the platform parameter $\mathbf{r}_s$ is very low. We also looked at the energy savings for a plausible real-world scenario. More precisely, we consider *Tmote-sky*, a low-power sensing platform widely used in WSNs [@Werner-Allen2006]; it has a photodiode sensor that measures the light intensity of the surroundings and can communicate with other nodes through a short-range radio. We measured the two energy consumptions $E_{sensor}$ and $E_{radio}$ of *Tmote-sky* in a set of experiments, and an example of the results is given in Figure \[measuretmote\]. In particular, the experiments indicate that $\mathbf{r}_s=0.26$. To evaluate the energy consumption of a traditional scheme, we need to choose a specific compression algorithm and measure the achieved $\mathbf{r}_c$. Zordan et al. [@Zordan2012] have recently compared various lossy compression algorithms and showed that DCT-LPF [@Zordan2012] achieves the best performance in terms of compression ratio. However, it is also a complex algorithm and may have a significant energy consumption on a resource-limited platform such as *Tmote-sky*. Therefore, we also consider a lightweight algorithm, LTC [@Schoellhammer2004], that achieves the lowest energy consumption on WSN nodes when the energy cost of compression is considered. Here, we ignore the energy cost of compression and we compare both algorithms with DASS. Note that, if we considered the computational energy cost, the benefit of DASS would be even larger, since it requires minimal on-board computation. We implement and evaluate the two algorithms on the dataset [*Payerne*]{}-temperature, and record the corresponding compression ratio $\mathbf{r}_c$ when their reconstruction errors are the same as those achieved by DASS.
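The tradeoff just described can be made concrete with a back-of-the-envelope energy model. The formula below is our own simplification (linear per-sample costs, compression cost ignored as stated above), not the exact function plotted in the figure, but it reproduces the qualitative behaviour: DASS loses only when compression is very strong and sensing is very cheap.

```python
# Back-of-the-envelope model (our assumption): per block of N samples, with
# E_radio normalised to 1 and r_s = E_sensor / E_radio,
#   traditional scheme: sense N samples, transmit N / r_c compressed samples,
#   DASS:               sense and transmit gamma * N raw samples.
def relative_saving(r_s: float, r_c: float, gamma: float = 0.1) -> float:
    e_traditional = r_s + 1.0 / r_c        # energy per original sample
    e_dass = gamma * (r_s + 1.0)
    return 1.0 - e_dass / e_traditional

saving_tmote = relative_saving(r_s=0.26, r_c=5.0)    # Tmote-sky-like ratio
saving_cheap = relative_saving(r_s=0.01, r_c=50.0)   # cheap sensing, strong compression
```

In this toy model the first configuration yields a positive saving while the second is negative, mirroring the regions above and below the zero line of the plotted saving function.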
The “star” and “circle” markers in Figure \[saving\] show the energy savings of DASS over a *Tmote-sky* that compresses the data with LTC and DCT-LPF, respectively. The energy savings for the two cases are equal to 50% and 35%, and go up to 60% if $\mathbf{r}_s$ increases due to a higher energy cost for sensing, as denoted by the dashed lines in Figure \[saving\]. This scenario could be realistic for many WSNs, in particular those using sensors belonging to the following two classes:

- Sensors with high energy consumption: for example, air pollution sensors consume $30\sim 50$ mW instead of the 3 mW of a *Tmote-sky*’s light sensor.

- Sensors with long sampling time: for example, an anemometer, a sensor that measures the wind’s direction and strength, requires $1\sim 3$ seconds of continuous measurement per sample instead of the 4 ms of the *Tmote-sky*’s light sensor.

Conclusions {#sec8}
===========

In this paper, we proposed DASS, a novel approach for sparse sampling that optimizes sparse sampling patterns for precisely recovering spatio-temporal physical fields. DASS is based on three main blocks. First, it adaptively learns the signal statistics from past data. Second, it dynamically adjusts the sampling pattern according to the time-varying signal statistics. Third, it recovers the signal from the limited amount of collected samples according to the learnt signal statistics. We demonstrated the effectiveness of DASS through extensive experiments using two real-world meteorological datasets. The results show significant improvements over the state-of-the-art methods. These improvements are more pronounced in the presence of significant spatial and/or temporal correlation in the data sampled by the WSN. We evaluated DASS on static WSNs; however, DASS is flexible and can be applied to other sensing scenarios, such as mobile WSNs. For instance, sensors can be installed on top of buses to collect various environmental data along their trajectories [@Aberer2010].
The collected samples show strong correlation due to the fixed route periodically taken by the buses. In future work, we will analyze the advantages of an optimized sensing schedule in such cases, where the constraint is not the energy consumption but the relatively slow sampling speed of certain pollution sensors.

[^1]: The results of this research are reproducible: the datasets and Matlab codes used to generate the figures can be found in our reproducible repository at <http://rr.epfl.ch/>. This research is supported by the Swiss National Centre of Competence in Research and an ERC Advanced Investigators Grant of the European Union.

[^2]: Z. Chen, J. Ranieri, R. Zhang and M. Vetterli are with the LCAV, I&C, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland (e-mail: chenzc04@gmail.com, juri.ranieri@epfl.ch, runwei.zhang@epfl.ch, martin.vetterli@epfl.ch).

[^3]: Note that it is an average frequency, given the irregular and time-varying sampling pattern.

[^4]: Other noise models may be of interest for specific sensors; for example, the noise term of a Geiger counter is usually modeled as a Poisson process.

[^5]: The experimental results show that $K=M$ is the best choice for CS-based methods, while $K<M$ is a parameter which needs to be optimized for OLS-based methods, see Section \[sec5.1\].

[^6]: We denote by [*Payerne*]{}-temperature the dataset of temperature measurements. The notation is similar for solar radiation.

[^7]: The processing costs of the considered sparse sensing methods are negligible.

[^8]: The lower bound of $\Theta(\mPhi^t\mPsi^t)$ is $\gamma=M/N$ if and only if $\mPhi^t\mPsi^t$ is a basis.

[^9]: $\mathbf{r}_c$ equals uncompressed size / compressed size.
--- abstract: 'We report 60 and 90  observations of 7 millisecond pulsars with . The pulsar is orbited by three planets, and other millisecond pulsars may be orbited by dust disks that represent planets that failed to form or their residue. We do not detect any infrared emission from the 7 pulsars in our sample, and typical upper limits are 100 mJy. Using a simple model, we constrain the typical dust disk mass to be less than of order 100 M${}_\oplus$, assuming that the heating of any putative dust disk would be coupled only weakly to the pulsar’s emission. If the planets around are composed largely of metals, our limits are probably an order of magnitude above plausible values for the disk mass in metals. Future observations with the Spitzer Space Telescope should be capable of probing into the range of disk masses that could plausibly give rise to planets.' author: - 'T. Joseph W. Lazio & J. Fischer' title: 'Mid- and Far-Infrared ISO Limits on Dust Disks around Millisecond Pulsars' --- Introduction {#sec:intro} ============ The first extrasolar planets discovered were found around the millisecond pulsar [@wf92]. The system consists of (at least) three planets, planet A with approximately a lunar mass, planet B with $M = 4.3 \pm 0.2\,\mathrm{M}{}_\oplus$, and planet C with $M = 3.9 \pm 0.2\,\mathrm{M}{}_\oplus$ [@kw03]. Although planetary systems around main-sequence stars had been long anticipated and numerous such systems have been found since, pulsar planetary systems were unexpected. It was assumed that any planets orbiting the pulsar progenitor would have become gravitationally unbound in the supernova that produced the pulsar. Various mechanisms have been proposed for the formation of these planets [@ph92; @mh01; @h02], but all generally rely on an accretion disk around the pulsar within which the planets form. Millisecond pulsars are a class of pulsars that, subsequent to their formation, undergo an episode of mass accretion from a companion [@wv98]. 
This process is thought to occur via an accretion disk, which transfers angular momentum to the pulsar as well, thereby spinning it up. Various mechanisms exist to shut down the accretion (e.g., evolution of the companion), but, if the accretion is not 100% efficient, the millisecond pulsar will be left with an orbiting disk of material. Such a residual accretion disk is a natural location for the formation of planets. Even if planets form, the formation process may leave a debris disk. [@gkw03] investigated the long-term stability of a debris disk in the system, finding a stable zone outside 1 . Since the discovery of planets around , a planet has also been found around the pulsar [@bfs93; @tat93; @r94; @jr97; @tacl99; @fjrz00; @srhst03; @rifh03], in the globular cluster M4. In contrast to the planets orbiting , which are thought to have formed *in situ*, the planet orbiting is thought to have been acquired during a dynamical exchange within the globular cluster. Pulsar planetary systems offer valuable insights, even if their total number is unlikely ever to approach the number of planetary systems around main-sequence stars. Taken together the two pulsar planetary systems already indicate that planets can form and exist in a wide variety of environments. The presence of terrestrial mass planets around suggests that terrestrial planets may be widespread, a hypothesis to be tested by future space missions such as Kepler and the Terrestrial Planet Finder (TPF). Planets orbiting main-sequence stars near the Sun are found almost exclusively around stars with solar- or super-solar metallicities [@g97; @sim01], which has led to the belief that only stars with high metallicities can host planets. In contrast, the planet around , if it was acquired during a dynamical exchange, probably has existed for a substantial fraction of the age of the globular cluster M4. This is a low-metallicity globular cluster, suggesting that planets can form in low-metallicity environments. 
Although the notion that planets can form in a residual accretion disk is plausible, no such examples of residual accretion disks are known. The presence of a planet (or stellar companion) can be inferred using traditional pulsar timing techniques from the periodic advance and delay of the arrival time of the pulsar’s pulse, due to the pulsar’s reflex motion. A relatively uniform disk of material would produce little reflex motion and therefore would remain undetected by these traditional techniques. Detecting dust disks around millisecond pulsars not only would elucidate the late stages of millisecond pulsar “spin up” and planet formation, it would be a new probe of the local environments around millisecond pulsars. A modest number of unsuccessful searches for infrared emission from dust disks around millisecond pulsars have been conducted. Figure \[fig:b1257+12\] summarizes the current situation using as an example. The limits for other pulsars are similar. This paper reports 60 and 90  observations of 7 pulsars with the ISOPHOT instrument onboard the ISO satellite. In §\[sec:observe\] we describe the observations and present our results and in §\[sec:discuss\] we describe how our results constrain the presence of dust disks around millisecond pulsars and present our conclusions. Observations and Data Analysis {#sec:observe} ============================== We compiled a list of millisecond pulsars known prior to 1994 August and with distances less than 1 kpc. Distances are estimated from the [@tc93] model and should be accurate to approximately 25%. Most of these millisecond pulsars lie at high Galactic latitudes. Of these, seven were observed with the ISOPHOT instrument [@lemkeetal96] onboard the ISO satellite [@kessleretal96] between 1996 August and 1997 May. 
Table \[tab:log\] summarizes the observing details; we also report the distance to each pulsar and, anticipating later discussion, whether or not it is a binary and its spin-down luminosity.[^1] All of the observations used the P32 observing mode with the C100 detector. In this mode the spacecraft was commanded to cover a series of raster pointings around the nominal pulsar position. At each raster pointing an internal chopper pointed the beam toward 13 adjacent sky positions. The throw of the chopper was larger than the offset between raster pointings. The result was that, in general, an individual sky position within the raster was observed multiple times, or oversampled. Before and after each observation of a pulsar, an internal calibration source was observed.

[lcccccc]{}
& Y & 0.98 & 10 & 90 & 3 $\times$ 8 & 1402\
& Y & 1.19 & 0.88 & 60 & 3 $\times$ 6 & 1012\
& & & & 90 & 3 $\times$ 8 & 848\
& & 0.51 & $< 0.35$ & 60 & 3 $\times$ 6 & 1590\
& & & & 90 & 3 $\times$ 8 & 1232\
& Y & 0.91 & 1.1 & 60 & 3 $\times$ 6 & 1012\
& & 0.25 & 1.7 & 60 & 3 $\times$ 6 & 1012\
& & & & 90 & 3 $\times$ 8 & 848\
\
& Y & 0.50 & $< 0.048$ & 60 & 3 $\times$ 6 & 1590\
& & & & 90 & 3 $\times$ 8 & 848\
& & 0.78 & 0.62 & 60 & 3 $\times$ 6 & 1804\
& & & & 90 & 3 $\times$ 8 & 1402\

The analysis of the pulsar observations largely followed the standard ISOPHOT analysis pipeline. The key difference was the amount of “deglitching” performed. Glitches result from cosmic rays striking the detector or from secondary electrons produced by spacecraft materials struck by primary cosmic rays. Failure to remove glitches can corrupt the later calibration of *all* data, not just of the portion containing the glitches. The standard ISOPHOT analysis pipeline removes glitches but does so without making use of the redundancy implicit in the oversampled P32 observations. Deglitching proceeded in the following fashion.
Within each spacecraft pointing the chopper would sweep past a particular sky position multiple times (typically 3–5 times). For each sky position, the median signal level was determined, then subtracted from all observations at that sky position. The observations from all sky positions were then combined to form a signal strength histogram. A signal strength threshold was specified, and signals above this level were eliminated. Typically 3%–10% of the signals were eliminated in this stage. Depending upon the number of chopper sweeps per spacecraft pointing and deglitching prior to this stage, the median signal strength per sky position could not always be determined accurately. Thus, additional manual deglitching was done to remove any remaining outlier signals. Our use of the observations of the internal calibration sources followed the standard ISOPHOT analysis pipeline. After deglitching and calibration using the internal calibration sources, mapping was done within the ISOPHOT Interactive Analysis package. Measurements from the individual detector pixels were co-added to form a sky image, with the contributions from the individual detector pixels weighted by their distances from the image pixels. Doing so takes into account the beam profile falling on each detector pixel. We also employed a median flat field, which has the effect of reducing substantially our sensitivity to any extended emission in the field. As we are attempting to detect point sources, we regard this reduced sensitivity to extended emission as unimportant. In no case have we identified a source at the location of a pulsar. Utilizing the inner quarter of the image, we determined the rms noise level. We take our upper limits to be 3 times this rms noise level. Table \[tab:limits\] summarizes the upper limits. 
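The two data-reduction steps described above (the redundancy-based deglitching and the 3-times-rms upper limits) can be sketched as follows. This is our own minimal reimplementation: the array shapes, the injected glitch, and the 5-sigma clipping threshold are illustrative choices, not the actual ISOPHOT parameters, and the manual deglitching stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Deglitching sketch: each sky position is swept several times (P32
# oversampling), so a per-position median separates glitches from signal.
n_pos, n_sweeps = 40, 5
sweeps = rng.normal(10.0, 0.5, (n_pos, n_sweeps))
sweeps[3, 1] += 25.0                                 # inject a cosmic-ray hit

resid = sweeps - np.median(sweeps, axis=1, keepdims=True)
threshold = 5.0 * np.std(resid)                      # illustrative cut
good = resid < threshold                             # drop high outliers only

# --- Upper-limit sketch: rms of the inner quarter of the final map, with the
# upper limit taken as 3 x rms (no source detected at the pulsar position).
img = rng.normal(0.0, 20.0, (32, 32))                # noise-only map
q = img.shape[0] // 4
inner = img[q:-q, q:-q]                              # central quarter by area
upper_limit = 3.0 * np.sqrt(np.mean(inner**2))
```

Note that the clipping is one-sided, matching the text: only signals above the threshold are eliminated, since glitches deposit charge and therefore bias the signal high.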
[lc]{}
& 80\
& 35\
& 190\
& 100\
& 100\
& 59\
& 48\
& 59\
& 140\
& 55\
& 73\
& 39\

Discussion and Conclusions {#sec:discuss}
==========================

We have not detected infrared emission associated with any of the pulsars observed with . Other infrared and sub-millimeter observations of millisecond pulsars have been conducted, and all of these have yielded only upper limits as well. Those observations most relevant to our sample of millisecond pulsars are those by [@ff96] at 10  and [@gh00] at 850 . [@ff96] also utilized IRAS observations to obtain upper limits on the infrared emission from their sample of pulsars. As Figure \[fig:b1257+12\] shows, the upper limits set by IRAS are typically well above the limits set by our ISO observations. Moreover, there is unfortunately little overlap between these three samples of pulsars (those whose observations are reported here, @ff96, and @gh00). Most of the pulsars that have been observed between 10 and 850  have been observed at only one or two wavelengths. [@ff96] developed a model for the infrared emission from a dust disk around a millisecond pulsar. Their model assumes that the disk consists of particles of a uniform radius $a$ heated by a fraction $f_{\mathrm{sd}}$ of the pulsar’s spin-down luminosity L${}_{\mathrm{sd}}$. The total mass of the disk is $m_d$. While the model is simplistic—an actual dust disk presumably consists of particles with a range of sizes, the heating mechanism is left unspecified, non-equilibrium effects such as stochastic heating are ignored, and the impact of any stellar companions (see Table \[tab:log\]) on the disk is ignored—we believe that this simplicity is justified given the uncertainties of the heating mechanism and of the environs of a millisecond pulsar.
In this model, for $f_{\mathrm{sd}} \sim 1$%, typical dust temperatures are predicted to be $T \approx 10$–50 K for disks having $m_d \sim 100$ M${}_\oplus$ and $a \sim 1$  and heated by a pulsar with L${}_{\mathrm{sd}} \sim 1$ L${}_\odot$. These temperatures are similar to the lower temperature range used by [@k-mhppns02] and considerably lower than those assumed ($\approx 150$ K) by [@pc94], who estimated disk temperatures by scaling from observations of T Tauri stars. The lower temperatures result from our assumption of a weaker coupling between the pulsar’s spin-down luminosity and the disk. [@pc94] considered disk temperature to be a major uncertainty in converting from measured flux densities to inferred disk masses. Accordingly, our assumption of a weaker coupling means that larger disk masses can be tolerated without violating the observational constraints. Given the paucity of data, it is not possible, in general, to constrain all three parameters of this model with the existing observations. We therefore adopt an approach in which we infer limits on two parameters of the Foster & Fischer (1996) model for fiducial values of the third parameter. Here, as an example, we consider the millisecond pulsar [@bailesetal94] which has a probable white dwarf companion, is at a distance of 1 kpc, and has a spin-down luminosity of 10 L${}_\odot$. Greaves & Holland (2000) placed a $2\sigma$ limit of 3.7 mJy at 850 , and we place a $2\sigma$ limit of 50 mJy at 90 . Figure \[fig:like2\] shows the allowed region of the disk mass-grain size plane given these observational limits and an assumed heating efficiency of $f_{\mathrm{sd}} = 1$%. Allowed regions in the $m_d$-$a$ plane occur for one of two possible reasons. First, the peak of the dust disk emission may appear shortward of 90 , where no constraints exist for this pulsar, with the Rayleigh-Jeans tail of the emission falling below the two measured values. This region is to the lower left in Figure \[fig:like2\]. 
Second, the peak of the emission may appear between 90  and 850 , but with a magnitude comparable to that measured at 90  so that the Rayleigh-Jeans tail again does not violate the 850  limit while the Wien tail of the emission does not violate the 90  limit. This region is to the lower right in Figure \[fig:like2\]. Obviously, a lower value of $f_{\mathrm{sd}}$ would produce larger allowed regions in the $m_d$-$a$ plane. Larger allowed regions would also exist for other pulsars (Table \[tab:log\]) with smaller spin-down luminosities. We conclude that, with the current observational constraints and the assumption of a fairly weak energy coupling between pulsars and disks, dust disks of order 100 M${}_\oplus$ easily could exist around millisecond pulsars. [@k-mhppns02] have reached similar conclusions based on 15 and 90  observations. To the extent that such disks would be uniform, they would also escape detection from traditional pulsar timing techniques. Pulsar timing techniques utilize the advance or delay of the pulse arrival time resulting from the pulsar’s reflex motion to detect planetary or stellar companions. A relatively uniform dust disk would produce little reflex motion.[^2] The current limits on dust disk masses are far larger than the mass of the disk thought to have produced the planets around . The minimum combined mass of the two larger planets in that system is 8.2 M${}_\oplus$ [@kw03]. Assuming that these planets are composed largely of metals, we expect that any dust mass prior to the planets’ formation would be comparable in magnitude. Indeed, [@h02] has shown how an initial disk of mass 0.1–$10^{-3}$ M${}_\odot$ containing of order 10 M${}_\oplus$ in metals could form a system similar to that orbiting . (See also @mh01 and Figure \[fig:b1257+12\].) For those pulsars orbited by stellar companions, the companions will introduce regions of limited orbital stability within the disks, potentially implying even smaller expected disk masses. 
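The flux-density estimate underlying these constraints can be sketched numerically. The code below is our own single-radius, blackbody-grain simplification of the uniform-grain model discussed above (all grains placed at one orbital radius, equilibrium heating by a fraction $f_{\mathrm{sd}}$ of the spin-down luminosity); the parameter values for grain size, orbital radius, and distance are purely illustrative.

```python
import numpy as np

# Single-radius, blackbody-grain sketch (our simplification; parameter
# values below are illustrative, not fits to any pulsar in the sample).
sigma_SB, h, c, k_B = 5.670e-8, 6.626e-34, 2.998e8, 1.381e-23   # SI constants
L_sun, M_earth, AU, pc = 3.828e26, 5.972e24, 1.496e11, 3.086e16

f_sd = 0.01                  # fraction of spin-down luminosity heating the disk
L_sd = 10 * L_sun            # spin-down luminosity
a, rho = 1e-6, 3000.0        # grain radius [m] and bulk density [kg m^-3]
m_d = 100 * M_earth          # total disk mass
r, d = 10 * AU, 1000 * pc    # grain orbital radius and pulsar distance

# Equilibrium grain temperature: absorbed pi a^2 * f_sd L_sd / (4 pi r^2)
# balances emitted 4 pi a^2 sigma T^4.
T = (f_sd * L_sd / (16 * np.pi * sigma_SB * r**2)) ** 0.25

def planck(nu, T):
    """Blackbody specific intensity B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

n_grains = m_d / (4.0 / 3.0 * np.pi * a**3 * rho)
flux = {}                    # flux density [W m^-2 Hz^-1] in the two bands
for lam in (90e-6, 850e-6):
    nu = c / lam
    flux[lam] = n_grains * np.pi * a**2 * planck(nu, T) / d**2
```

For these illustrative numbers the grain temperature lands in the few-tens-of-Kelvin range quoted earlier, and the 90 micron flux density exceeds the 850 micron one, which is why limits at the two wavelengths together bracket the allowed region of the $m_d$-$a$ plane from both sides.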
We conclude that current observational limits on dust disk masses are at least an order of magnitude above plausible values. The mid- to far-infrared detectors (24, 70, and 160 ) on the Spitzer Space Telescope should have sensitivities some 1–2 orders of magnitude better than the limits we report here. At a minimum, we expect that future observations with the Spitzer Space Telescope may require more sophisticated modelling of disks, including the possible effects of stellar companions. If future observations with the Spitzer Space Telescope do not detect infrared emission from dust disks around millisecond pulsars, either the resulting mass limits should be in the range of 10 M${}_\oplus$, sufficient to begin placing stringent constraints on their existence, or the temperatures of any pulsar dust disks must be no more than a few Kelvin.

We thank the organizers of the ISOPHOT Workshop on PHT32 Oversampled Mapping, particularly R. Tuffs, C. Gabriel, N. Lu, and B. Schulz, for their many helpful discussions, and R. Tuffs for his deglitching software. Without their assistance, no results would be reported here. We thank the referee for comments that helped us clarify certain points and C. Chandler for helpful discussions. The results reported here are based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA. The ISOPHOT data presented in this paper were reduced using PIA, which is a joint development by the ESA Astrophysics Division and the ISOPHOT consortium, with the collaboration of the Infrared Analysis and Processing Center (IPAC) and the Instituto de Astrof[í]{}sica de Canarias (IAC). Basic research in astronomy at the NRL is supported by the Office of Naval Research.

Backer, D. C., Foster, R. S., & Sallmen, S. 1993, Nature, 365, 817

Bailes, M., et al. 1994, ApJ, 425, L41

Ford, E. B., Joshi, K. J., Rasio, F. A., & Zbarsky, B. 2000, , 528, 336

Foster, R. S. & Fischer, J. 1996, , 460, 902

Gozdziewski, K., Konacki, M., & Wolszczan, A. 2003, ApJ, submitted; astro-ph/0310750

Gonzalez, G. 1997, MNRAS, 285, 403

Greaves, J. S. & Holland, W. S. 2000, , 316, L21

Hansen, B. M. S. 2002, in Stellar Collisions, Mergers, and Their Consequences, ed. M. M. Shara (San Francisco: ASP) p. 221

Joshi, K. J. & Rasio, F. A. 1997, , 479, 948

Kessler, M. F., et al. 1996, , 315, L27

Koch-Miramond, L., Haas, M., Pantin, E., Podsiadlowski, Ph., Naylor, T., & Sauvage, M. 2002, , 387, 233

Konacki, M. & Wolszczan, A. 2003, ApJ, 591, L147

Lemke, D., et al. 1996, , 315, L64

Miller, M. C. & Hamilton, D. P. 2001, ApJ, 550, 863

Phillips, J. A. & Chandler, C. J. 1994, , 420, L83

Phinney, E. S. & Hansen, B. M. S. 1992, in Planets around Pulsars, eds. J. A. Phillips, S. E. Thorsett, & S. R. Kulkarni (San Francisco: ASP) p. 371

Rasio, F. A. 1994, , 427, L107

Richer, H. B., Ibata, R., Fahlman, G. G., & Huber, M. 2003, ApJ, 597, L45

Santos, N. C., Israelian, G., & Mayor, M. 2001, A&A, 373, 1019

Sigurdsson, S., Richer, H. B., Hansen, B. M., Stairs, I. H., & Thorsett, S. E. 2003, Science, 301, 193

Taylor, J. H. & Cordes, J. M. 1993, , 411, 674

Thorsett, S. E., Arzoumanian, Z., Camilo, F., & Lyne, A. G. 1999, , 523, 763

Thorsett, S. E., Arzoumanian, Z., & Taylor, J. H. 1993, , 412, L33

van den Heuvel, E. P. J. 1995, J. Astrophys. Astron., 16, 255

Wijnands, R. & van der Klis, M. 1998, Nature, 394, 344

Wolszczan, A. & Frail, D. A. 1992, , 355, 145

[^1]: The spin-down luminosity of a pulsar is a measure of its energy loss due to magnetic dipole radiation and is given by $L = I\Omega\dot\Omega$, where $I$ is its moment of inertia and $\Omega$ is its rotation frequency.

[^2]: One possible exception to this conclusion would be a dust disk illuminated directly by the pulsar’s beam.
In this case the relativistic particle flow from the pulsar beam potentially could ionize portions of the disk and induce plasma propagation delays which might be detectable.
--- abstract: 'For nonlinear Klein-Gordon type models, we describe a general method of discretization in which the static kink can be placed anywhere with respect to the lattice. These discrete models are therefore free of the [*static*]{} Peierls-Nabarro potential. Previously reported models of this type are shown to belong to a wider class of models derived by means of the proposed method. A relevant physical consequence of our findings is the existence of a wide class of discrete Klein-Gordon models where slow kinks [*practically*]{} do not experience the action of the Peierls-Nabarro potential. Such kinks are not trapped by the lattice and they can be accelerated by even weak external fields.' author: - 'S. V. Dmitriev$^1$, P. G. Kevrekidis$^2$ and N. Yoshikawa$^1$' title: 'Discrete Klein-Gordon models with static kinks free of the Peierls-Nabarro potential' ---

Introduction
============

Discrete solitons, and more specifically kink-like topological excitations, are ubiquitous structures that arise in numerous physical applications ranging from dislocations and ferroelectric domain walls in solids to bubbles in DNA, magnetic chains, and Josephson junctions, among others (see, e.g., [@Kivshar] for a recent exposition of relevant applications). The mobility of such lattice kinks is one of the key issues in many of these applications, especially since the pioneering works of [@peyrard; @yip], which illustrated that the kinematics on the lattice is dramatically different from that of the continuum analog of such equations, where constant-speed propagation is typical. Instead, on the discrete substrate, kinks need to overcome the so-called Peierls-Nabarro potential (PNp), constantly radiating their energy and being eventually trapped by the lattice. The [*static*]{} PNp refers to the energy difference between a stable inter-site-centered discrete kink and an unstable onsite-centered discrete kink.
Clearly, as a kink is travelling from one site to the next, it “wobbles” over this potential energy landscape [@boesch]. However, even though travelling is clearly intimately connected with overcoming the static PNp without “radiating” energy [@KW], this connection is relatively subtle and the inter-dependence of these two features (static PNp and travelling) still remains elusive [@flach]. Typically, discrete kinks travelling with finite velocity have only been obtained for a discrete set of velocities [@yzolo], which makes the motion unstable with respect to perturbations. There exists a class of more exotic exact solutions (the so-called “nanoptera”) where the kink propagates together with a plane wave having the same velocity [@yzolo]. While the travelling problem is extremely interesting in its own right, in the present work, we will start by examining the construction of discrete models with PNp-free kinks, using a simplified (quasi)static approach. Two classes of discrete models where a static kink can be placed anywhere with respect to the lattice have been previously derived: one conserving energy [@SpeightKleinGordon] and another one conserving momentum [@PhysicaD]. In both cases the static kink solution can be obtained from a two-body nonlinear map. In the present paper we demonstrate that, in general, a discrete version of the first integral of the static continuum Klein-Gordon field plays the role of this nonlinear map. Thus we derive a wide class of such models including the two above-mentioned classes as special cases. The advantage of this approach is that the kinks are no longer (typically) trapped by the lattice. Instead they can be accelerated by even weak external fields. However, a note of caution should be added here. While one might naively expect that such solutions would be intimately connected with slow travelling, it has been demonstrated numerically that travelling solutions (when they can be found as e.g. 
in [@yzolo; @karpan] for Klein-Gordon lattices, using the methods of [@flesh]) have a sharp lower bound in their wave speed [@aigner]. The existence of such a threshold illustrates the fact that one should be particularly careful in trying to infer features of the travelling problem from such “static” considerations. On the other hand, as the recent work of Barashenkov, Oxtoby and Pelinovsky demonstrates [@dima], discretizations without PNp are much more natural candidates for possessing travelling solutions for isolated wave speeds (not close to zero). The presentation of our results will be structured as follows. Section II will contain the setup and notations used for the Klein-Gordon models. Section III will present the general methodology for obtaining static PNp-free discretizations. Section IV will illustrate the connection to previously reported models. Section V will focus on the special case example of the $\phi^4$ model, for which our numerical observations will be presented in section VI. Finally in section VII, we will summarize our findings and present our conclusions. Setup ===== We consider the Lagrangian of the Klein-Gordon field, $$L =\int_{-\infty}^{\infty} \left[ \frac{1}{2}\phi_t^2-\frac{1}{2}\phi_x^2-V(\phi)\right]dx\,, \label{KleinGordonHam}$$ and the corresponding equation of motion, $$\phi _{tt} = \phi _{xx} - V'(\phi)\equiv D(x)\,. \label{KleinGordon}$$ Topological solitons (kinks) are possible only if $V(\phi)$ has at least two minima $\phi_{01}$ and $\phi_{02}$, where $V^{\prime}(\phi_{0i})=0$ and $V^{\prime\prime}(\phi_{0i})>0$. Obviously, $\phi=\phi_{01}$ and $\phi=\phi_{02}$ are the stationary solutions to Eq. (\[KleinGordon\]). We will study the properties of kinks that interpolate between these two stationary solutions. Our considerations allow one to treat the cases when other minima appear in between the two minima, $\phi_{01}$ and $\phi_{02}$, connected by the kink. 
Equation (\[KleinGordon\]) will be discretized on the lattice $x=nh$, where $n=0,\pm 1, \pm 2 ...$, and $h$ is the lattice spacing. For brevity, when possible, we will use the notations $$\phi _{n - 1} \equiv l,\,\,\,\,\,\phi _n \equiv m,\,\,\,\,\,\phi _{n + 1} \equiv r\,. \label{Notation}$$ We would like to construct a nearest-neighbor discrete analog to Eq. (\[KleinGordon\]) of the form $$\ddot{m} = D(C,l,m,r), \label{KleinGordonDiscrete}$$ where $C>0$ is a parameter related to the lattice spacing $h$ as $C=1/h^2$, such that in the continuum limit $(C\rightarrow \infty)$, $D(C,l,m,r) \rightarrow D(x)=\phi_{xx}- V'(\phi)$. Note that in this context, the “standard” discretization emerges in the form: $D(C,l,m,r)=C(l-2m+r)-V'(m)$. Generalizations of this model will be discussed in the form $$\ddot{m}= C(l-2m+r) - B(l,m,r), \label{KleinGordonDiscrS}$$ where $B(l,m,r)$ has $V'(\phi)$ as the continuum limit. We will characterize a model as PNp-free if a [*static*]{} kink can be placed anywhere with respect to the lattice (continuum, rather than discrete, set of equilibrium solutions). This is equivalent to demanding that the kink have a neutral direction, or (from Noether’s theory [@arnold]) a Goldstone translational mode. It is natural to categorize this definition of a PNp-free model as “static” or “quasi-static”, in the sense that it does not involve the kinematic or dynamical properties of the model. On the other hand, one can demand the absence of PNp at finite kink velocities. This can be recast as the demand that the discrete model support exact travelling wave solutions, and this demand can be called the “dynamic” definition; see e.g. [@flach] for such travelling wave examples, where the “static” definition of the PNp clearly fails. In this paper we aim to construct models that are PNp-free in the static sense as a first (yet nontrivial) step towards understanding the nature of the discrete travelling problem (see also the comments above). 
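As a point of reference, the “standard” discretization is trivial to code up. The following Python sketch implements $D(C,l,m,r)=C(l-2m+r)-V'(m)$ and checks only that uniform vacuum states are equilibria; the quartic $\phi^4$ potential (treated later in the paper) is chosen here purely for illustration.

```python
import numpy as np

def d_standard(C, l, m, r, Vprime):
    """'Standard' discretization D(C,l,m,r) = C*(l - 2m + r) - V'(m)."""
    return C * (l - 2.0 * m + r) - Vprime(m)

# Illustrative choice: the phi^4 potential V(phi) = (1 - phi^2)^2 / 4,
# so V'(phi) = phi^3 - phi.
Vprime = lambda phi: phi**3 - phi

# Both vacua phi = +-1 are equilibria: D vanishes on a uniform state.
assert abs(d_standard(4.0, 1.0, 1.0, 1.0, Vprime)) < 1e-12
assert abs(d_standard(4.0, -1.0, -1.0, -1.0, Vprime)) < 1e-12
```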
We will also focus on the existence of physically motivated conserved quantities for the derived models. Hamiltonian models are energy-conserving models and the models with $dM/dt=0$, where $$\begin{aligned} M= \sum_{n=-\infty}^{\infty} \dot{\phi}_n \left(\phi_{n+1}-\phi_{n-1} \right), \label{mom1}\end{aligned}$$ will be called momentum-conserving models. As was shown in [@PhysicaD], the discrete model of Eq. (\[KleinGordonDiscrete\]) conserves the momentum of Eq. (\[mom1\]), if it can be presented in the form $$\begin{aligned} \ddot{m}=\frac{{\cal H}(m,r)-{\cal H}(l,m)}{r-l}. \label{mom2}\end{aligned}$$ This can be verified by calculating $$\begin{aligned} \frac{dM}{dt}=\sum_n \ddot{\phi}_n (\phi_{n+1}-\phi_{n-1})\nonumber \\ = \sum_n [{\cal H}(\phi_{n},\phi_{n+1}) - {\cal H}(\phi_{n-1},\phi_{n})]=0, \label{mom3}\end{aligned}$$ where we have used the fact that the terms $\dot{\phi}_n(\dot{\phi}_{n+1}-\dot{\phi}_{n-1})$ cancel out due to telescopic summation. Static PNp-free discretization ============================== Our aim here will be to discretize Eq. (\[KleinGordon\]) in a symmetric way, so that the static kink solution can be found from a reduced first-order difference equation. According to [@SpeightKleinGordon], if we achieve that, then we are going to have a one-parameter family of solutions with the possibility to place equilibrium kinks anywhere with respect to the lattice (and hence, PNp-free in the static sense). The first integral of the steady state problem in Eq. (\[KleinGordon\]), $\phi_x - \sqrt{2V(\phi)} = 0$ (with zero integration constant), can be written in the form $$w(x) \equiv g(\phi_x) - g\left(\sqrt{2V(\phi)}\right) = 0 \,, \label{StaticFirstIntegral2}$$ where $g$ is a continuous function. Our plan will then be the following: - discretize the first-order differential equation of Eq. (\[StaticFirstIntegral2\]) using a first order difference scheme $w(l,m)=0$. - Then express the right-hand side of Eq. 
(\[KleinGordon\]) as a sum of terms containing derivatives, e.g., $dw/dx$, $dw/d\phi$, etc. - As a result, discretizations of such terms, e.g., $dw/dx \sim \sqrt{C}[w(m,r)-w(l,m)]$, vanish for $w(l,m)=0$ (or otherwise stated: the construction of the equilibrium solution is converted to a first-order difference problem). Then, the static kink (PNp-free, by construction) solutions for the obtained discrete model can be found from this two-site problem. In the following, we will consider a particular case of Eq. (\[StaticFirstIntegral2\]) with $g(\xi)=\xi^2$, for which we introduce the notation $$\begin{aligned} u(x) \equiv \phi_x^2 - 2V(\phi) = 0 \,, \label{ucont1}\end{aligned}$$ and the following two-site discrete analog $$\begin{aligned} u(l,m) \equiv C(m-l)^2 - 2V(l,m) = 0 \,. \label{udisc1}\end{aligned}$$ We will also use the shorthand notations, $$u_l=u(l,m)\,,\,\,\,\,\,\,\,\,\,\,\,\,u_m=u(m,r). \label{Shortnot}$$ We have assumed that the Klein-Gordon field supports kink solutions. Then, at least for the case of weak discreteness, Eq. (\[udisc1\]) also supports static kinks because it is nothing but a discretization of the first integral of the static version of Eq. (\[KleinGordon\]) (see also [@SpeightKleinGordon]). The next step is then to find a discretization of the right-hand side of Eq. (\[KleinGordon\]), $D(x)$, which vanishes when Eq. (\[udisc1\]) is fulfilled. One simple possibility comes from the following finite difference $$\begin{aligned} D_1(l,m,r) \equiv \frac{u_m-u_l}{r-l} \rightarrow \frac{1}{2}\frac{du}{d\phi}= D(x). \label{r1}\end{aligned}$$ One can also consider, more generally, continuous functions $q(u_l,h)$ such that $q(0,h)=0$ and, in the continuum limit, $q(u,0)=u$ and $\frac{dq}{du}(u,0)=1$. For example, one can take $q=(e^{hu}-1)/h$ or $q=u+\sum_{n>1}A_nh^{n-1}u^n$ with constant $A_n$, etc. Then, $$\frac{1}{2}\frac{dq}{d\phi}\left(\frac{dq}{du}\right)^{-1} = D(x). \label{Dr2}$$ Discretizing the left-hand side of Eq. 
(\[Dr2\]) we obtain $$D_2 = \frac{1}{2}\frac{q(u_m,h) - q(u_l,h)}{r-l}\left[ \frac{1}{q'(u_l)} + \frac{1}{q'(u_m)} \right]. \label{r2}$$ Inspired by [@SpeightKleinGordon], we note that, in the continuum limit, $$\begin{aligned} \frac{v(m,r)}{r-m}-\frac{v(l,m)}{m-l} \rightarrow \frac{dv}{d\phi}-v\frac{\phi_{xx}}{\phi_x^2}\,, \label{ContLim}\end{aligned}$$ and find $$\begin{aligned} D_3 \equiv \frac{u_m}{r-m}-\frac{u_l}{m-l} + \sqrt{2V(l,m,r)}\times \nonumber \\ \left(\frac{ \sqrt{C(r - m)^2-u_m}}{r-m}-\frac{ \sqrt{C(m - l)^2-u_l}}{m-l}\right) \nonumber \\ \rightarrow \frac{du}{d\phi}-u\frac{\phi_{xx}}{\phi_x^2} + \sqrt{2V}\left(\frac{d\sqrt{2V}}{d\phi}-\sqrt{2V}\frac{\phi_{xx}}{\phi_x^2}\right) \nonumber \\ = D(x). \label{r3}\end{aligned}$$ Since the expressions for $D_i(l,m,r)$ given by Eqs. (\[r1\]),(\[r2\]) and (\[r3\]) tend to $D(x)$ in the continuum limit, one can write the following discrete analog to the Klein-Gordon equation Eq. (\[KleinGordon\]) $$\ddot{m} = \sum_i b_iD_i(l,m,r), \,\,\,\,\,\,\,\,\,{\rm where}\,\,\,\,\,\,\,\,\,\sum_ib_i=1. \label{KleinGordonDiscr}$$ Then, by construction, any structure derived from the two-site problem of Eq. (\[udisc1\]) is a static solution of Eq. (\[KleinGordonDiscr\]) and hence, the latter is the static PNp-free discrete model. The model of Eq. (\[KleinGordonDiscr\]) can be generalized in a number of ways. For example, function $D_3$, Eq. (\[r3\]), can be modified by choosing different functions $V(l,m,r)$ to discretize $V(\phi)$. Then, the modified $\tilde{D}_3$ can be added to the linear combination in the right-hand side of Eq. (\[KleinGordonDiscr\]). The model of Eq. (\[KleinGordonDiscr\]) can also be generalized by appending terms which disappear in the continuum limit and ones that vanish upon substituting $u_l=0$ and $u_m=0$. For example, the derivative $df(u)/d\phi$ can be discretized as $2[f(u_m)-f(u_l)]/(r-l)$ or as $2f'(u_l/2+u_m/2)(u_m-u_l)/(r-l)$. 
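Because $D_1$ of Eq. (\[r1\]) is a two-site difference divided by $r-l$, the momentum sums of Eq. (\[mom3\]) telescope exactly, which is easy to confirm in a few lines. In the Python sketch below the periodic chain, the smooth test state and the symmetric two-site potential (a plain average of the $\phi^4$ on-site potential) are all illustrative choices, not ones singled out by the paper.

```python
import numpy as np

C, N = 1.0, 10
k = np.arange(N)
phi    = 0.9 * np.cos(2 * np.pi * k / N + 0.4)   # arbitrary smooth test state
phidot = 0.5 * np.sin(2 * np.pi * k / N + 0.1)

V0 = lambda p: 0.25 * (1.0 - p**2)**2                # phi^4 on-site potential
u  = lambda a, b: C * (b - a)**2 - (V0(a) + V0(b))   # u = C(b-a)^2 - 2V(a,b)

l, m, r = np.roll(phi, 1), phi, np.roll(phi, -1)     # periodic neighbours
acc = (u(m, r) - u(l, m)) / (r - l)                  # D_1 of Eq. (r1)

# dM/dt for M = sum_n phidot_n (phi_{n+1} - phi_{n-1}); both sums telescope
# to zero on a periodic chain.
dMdt = np.sum(acc * (r - l)) \
     + np.sum(phidot * (np.roll(phidot, -1) - np.roll(phidot, 1)))
assert abs(dMdt) < 1e-10
```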
If the equation of motion then contains the difference of two such terms, they will cancel out in the continuum limit. Any term in the right-hand side of Eq. (\[KleinGordonDiscr\]) can be further modified by multiplying by a continuous function $e(C,l,m,r)$, whose continuum limit is unity (see e.g. [@Saxena] for such an example, also discussed in more detail below). Generally speaking, the discrete PNp-free Klein-Gordon models derived here conserve neither an energy-like nor a momentum-like quantity. However, as will be demonstrated below, they contain energy-conserving and momentum-conserving subclasses. Connection with Previously Reported Models ========================================== One energy-conserving PNp-free Klein-Gordon model has been derived by Speight and co-workers [@SpeightKleinGordon] with the use of the Bogomol’nyi argument [@Bogom]. Their model can be written in the form of Eq. (\[KleinGordonDiscrS\]), with the Lagrangian $$\begin{aligned} L = \frac{1}{2}\sum\limits_n \dot \phi_n^2 -\frac{C}{2}\sum\limits_n\left( {\phi_n - \phi_{n-1} } \right)^2 \nonumber \\ - \sum\limits_n\left({\frac{{G(\phi_n) - G(\phi_{n-1})}}{{\phi_n - \phi_{n-1}}}} \right)^2, \nonumber \\ {\rm where}\,\,\,\,\,\,\,\,\,G^{\prime}(\phi) = \sqrt{V(\phi)}. \label{SpeightHam}\end{aligned}$$ The static kink solution can then be derived from the lattice Bogomol’nyi equation [@SpeightKleinGordon], which can be taken in the form $$\begin{aligned} U(l,m) = C(m - l)^2 - 2\left(\frac{G(m) - G(l)}{m - l}\right)^2 = 0, \label{Speight1}\end{aligned}$$ which is a particular case of Eq. (\[udisc1\]). The equation of motion derived from Eq. (\[SpeightHam\]), written in terms of Eq. (\[Speight1\]), is $$\begin{aligned} \ddot{m}= \frac{U_m}{r-m}-\frac{U_l}{m-l} +\sqrt{2V(m)} \times \nonumber \\ \left(\frac{ \sqrt{C(r - m)^2-U_m}}{r-m}-\frac{ \sqrt{C(m - l)^2-U_l}}{m-l}\right). \label{Speight3}\end{aligned}$$ The right-hand side of Eq. 
(\[Speight3\]) is a particular case of $D_3(l,m,r)$ given by Eq. (\[r3\]) with $V(l,m,r)=V(m)$. Momentum-conserving PNp-free models were proposed in [@PhysicaD] and further studied in [@Submitted]. They are non-Hamiltonian models of the form $$\begin{aligned} \ddot{m}= D_1 (l,m,r), \label{PhysD3}\end{aligned}$$ where $D_1$ is given by Eq. (\[r1\]). Notice that Eq. (\[PhysD3\]) can be mapped into the formulation of Eq. (\[mom2\]). Static kink solutions in this model can be found from Eq. (\[udisc1\]). If Eq. (\[udisc1\]) is taken in the particular form of Eq. (\[Speight1\]), then the momentum-conserving PNp-free model Eq. (\[PhysD3\]) and the energy-conserving PNp-free model Eq. (\[Speight3\]) have exactly the same static kink solutions. It has been proved that a standard nearest-neighbor discrete Klein-Gordon model conserving both energy and momentum does not exist [@Submitted]. Application to the $\phi^4$ model ================================= As an example, we will discretize the well-known $\phi^4$ field theory with the potential $$V(\phi) = \frac{1}{4}\left(1-\phi^2\right)^2\,. \label{Phi4potential}$$ By construction, the PNp-free models derived above are written in singular form. In this form the equations are inconvenient in practical simulations and one may wish to find particular cases in which the singularities disappear. For example, for the energy-conserving PNp-free model expressed by Eqs. (\[SpeightHam\])-(\[Speight3\]), the singularity always disappears when $G(\phi)$ is polynomial [@SpeightKleinGordon]. In particular, for the $\phi^4$ model with the potential Eq. (\[Phi4potential\]), one obtains from Eq. (\[Speight3\]) the following energy-conserving PNp-free discretization derived in [@SpeightKleinGordon] $$\begin{aligned} \ddot m = \left(C + \frac{1} {6} \right)(l + r - 2m) + m \nonumber \\ - \frac{1} {{18}}\left[ {2m^3 + (m + l)^3 + (m + r)^3 } \right], \label{SpeightPhi4}\end{aligned}$$ whose static kink solution can be found from Eq. 
(\[Speight1\]), which, for the $\phi^4$ potential, takes the form $$\begin{aligned} 3\sqrt{2C}(m - l) + m^2 + lm + l^2-3 = 0. \label{SpeightPhi4kink}\end{aligned}$$ Now let us turn to the momentum-conserving model. Substituting Eq. (\[udisc1\]) into Eq. (\[PhysD3\]) we obtain $$\begin{aligned} \ddot m = C(r-2m+l) -2\frac{V(m,r)-V(l,m)}{r-l}. \label{Rphi4momcons}\end{aligned}$$ To remove the singularity, $V(l,m)$ should be taken in the symmetric form $V(l,m)=V(m,l)$, e.g., as $$\begin{aligned} V(l,m)=(1/4) - (\alpha/2)(m^2+l^2)+(\alpha -1/2 ) ml \nonumber \\ + (\beta/2) \left( {m^3 + l^3 } \right) - (\beta/2) ml\left( {m + l} \right) \nonumber \\ + (\gamma/2) \left( {m^4 + l^4 } \right) + (\delta/2) ml\left( {m^2 + l^2 } \right) \nonumber \\ - \left( { \gamma + \delta -1/4} \right)m^2 l^2, \label{HbasicPhi4Transformed}\end{aligned}$$ with free parameters $\alpha$, $\beta$, $\gamma$, and $\delta$. In the continuum limit, when $l\rightarrow m$ and $r\rightarrow m$, Eq. (\[HbasicPhi4Transformed\]) reduces to $V(\phi)$. Substituting Eq. (\[HbasicPhi4Transformed\]) into Eq. (\[Rphi4momcons\]) we obtain the following momentum-conserving PNp-free $\phi^4$ model derived in [@Submitted] $$\begin{aligned} \ddot m = \left( {C + \alpha } \right)(l - 2m+ r ) + m \nonumber \\ -\beta (l^2 + lr + r^2 ) + \beta m(l + r + m) \nonumber \\ -\gamma (l^3 + r^3 + l^2 r + lr^2 ) - \delta m(l^2 + m^2 + r^2 + lr)\nonumber \\ + ( 2\gamma + 2\delta -1/2)m^2 (l + r). \label{PhysicaDPhi4}\end{aligned}$$ The momentum-conserving model Eq. (\[PhysicaDPhi4\]) with $\alpha=\beta=\gamma=\delta=0$ can be written in the form $$\begin{aligned} \ddot m = \left(1-\frac{m^2}{2C}\right)C(l - 2m+ r )+ m -m^3. \label{Saxena1}\end{aligned}$$ The following energy-conserving model, studied in [@Saxena], $$\ddot m = C(l - 2m+ r )+ \frac{m-m^3}{1-m^2/(2C)}, \label{Saxena2}$$ has the same continuum limit as model Eq. (\[Saxena1\]). Furthermore, it can be derived from Eq. 
(\[Saxena1\]) by multiplication with a factor $e(C,l,m,r)=1/(1-m^2/(2C))$, which possesses a unit continuum limit. The model Eq. (\[Saxena1\]) is PNp-free and thus, model Eq. (\[Saxena2\]) is also PNp-free since they have the same static solutions derivable from $C(m-l)^2-(1-ml)^2/2=0$. Thus, we have another example when energy-conserving and momentum-conserving PNp-free models have exactly the same static kink solutions. It is interesting to note that the energy-conserving model of Eq. (\[Saxena2\]) cannot be constructed by the method reported in [@SpeightKleinGordon], where the discretization of the anharmonic term always involves $\phi_{n-1}$ and $\phi_{n+1}$. More generally than in [@SpeightKleinGordon], the problem of finding energy-conserving PNp-free models can be formulated as follows. We need to discretize the potential energy of the Lagrangian Eq. (\[KleinGordonHam\]) in a way that the corresponding equation of static equilibrium is satisfied when Eq. (\[StaticFirstIntegral2\]) is satisfied. Both energy-conserving models discussed above are solutions of this problem. As an example of a model conserving neither energy nor momentum, we take Eq. (\[r2\]) for the case of $q(u,h)=u+Ahu^2$ with constant $A$ and obtain $$\ddot{m}=\frac{u_m-u_l}{r-l}\frac{(1+Ahu_l+Ahu_m)^2} {(1+2Ahu_l) (1+2Ahu_m)}. \label{Xr2}$$ This model can be obtained from the momentum-conserving model defined by Eq. (\[r1\]) by multiplying by another function that reduces to unity in the continuum limit ($h \rightarrow 0$). Obviously, the original momentum-conserving model and model Eq. (\[Xr2\]) have the same static kink solutions. It can be demonstrated that these two models also have the same spectra of small amplitude vibrations and the same frequencies of kink internal modes. 
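That the momentum-conserving model Eq. (\[Saxena1\]) and the energy-conserving model Eq. (\[Saxena2\]) share their static kinks can be confirmed directly: iterating the two-site map $C(m-l)^2-(1-ml)^2/2=0$, solved for $m$ on the kink branch $\sqrt{2C}(m-l)=1-lm$, from an arbitrary seed produces a configuration on which both right-hand sides vanish. A minimal Python sketch (the value of $C$ and the seed are illustrative):

```python
import numpy as np

C = 2.0                      # illustrative discreteness parameter
s = np.sqrt(2.0 * C)

# Kink branch of C(m-l)^2 - (1-l*m)^2/2 = 0: sqrt(2C)(m-l) = 1 - l*m,
# giving m = (1 + s*l)/(s + l) and, inverting, l = (s*m - 1)/(s - m).
fwd = lambda l: (1.0 + s * l) / (s + l)
bwd = lambda m: (s * m - 1.0) / (s - m)

# Seed the kink at an arbitrary value in (-1, 1): the kink can be
# centred anywhere with respect to the lattice.
phi = [0.23]
for _ in range(25):
    phi.insert(0, bwd(phi[0]))   # backward iteration toward -1
    phi.append(fwd(phi[-1]))     # forward iteration toward +1
phi = np.array(phi)
l, m, r = phi[:-2], phi[1:-1], phi[2:]

# Static force of the momentum-conserving model Eq. (Saxena1) ...
f1 = (1.0 - m**2 / (2 * C)) * C * (l - 2 * m + r) + m - m**3
# ... and of the energy-conserving model Eq. (Saxena2):
f2 = C * (l - 2 * m + r) + (m - m**3) / (1.0 - m**2 / (2 * C))

# Both vanish on the same configuration: the two models share their
# static kinks, and the kink is not pinned by the lattice.
assert np.max(np.abs(f1)) < 1e-9 and np.max(np.abs(f2)) < 1e-9
```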
Numerics ======== In our recent work [@Submitted], some properties of kinks were compared for the “standard” energy-conserving $\phi^4$ discretization having PNp, $$\begin{aligned} \ddot{m}=C(l + r - 2m)+m -m^3, \label{PHI4Classic}\end{aligned}$$ with the PNp-free models conserving energy Eq. (\[SpeightPhi4\]) and momentum Eq. (\[Saxena1\]). It was found that the mobility of kinks in the PNp-free models is higher and also that in the momentum-conserving, PNp-free models a kink self-acceleration effect may be observed. The origin of the effect is the non-conservative (non self-adjoint) nature of the model which, however, can be noticed only for asymmetric trajectories of particles when the kink passes by [@Submitted]. If the trajectories are symmetric, there is no energy exchange with the surroundings and the kink dynamics is the same as in energy-conserving models, e.g., the kink self-acceleration effect disappears. Kinks in some of the momentum-conserving models were found to have internal modes with frequencies above the phonon spectrum. Such modes do not radiate and they can have large amplitudes storing a considerable amount of energy. Here we present and compare results for the energy-conserving PNp-free model Eq. (\[Saxena2\]) and the PNp-free model of Eq. (\[Xr2\]), generally speaking, conserving neither energy nor momentum. For the latter model we take $u_l$ in the form of Eq. (\[udisc1\]) where the $\phi^4$ potential is discretized according to Eq. (\[HbasicPhi4Transformed\]) and, for the sake of simplicity, we set $\alpha=\beta=\gamma=\delta=0$. We obtain $$\begin{aligned} \ddot m = \left[\left(1-\frac{m^2}{2C}\right)C(l - 2m+ r )+ m -m^3\right]\times\nonumber \\ \frac{(1+Ahu_l+Ahu_m)^2} {(1+2Ahu_l) (1+2Ahu_m)}\,,\nonumber \\ {\rm where}\,\,\,\,\,\,\,u_l=C(m-l)^2-(1-ml)^2/2\,. \label{Noconservation}\end{aligned}$$ For $A=0$, Eq. (\[Noconservation\]) coincides with the momentum-conserving model Eq. (\[Saxena1\]). In the model Eq. 
(\[Noconservation\]), the static kink solutions, phonon spectra, and frequencies of kink internal modes are $A$-independent. The energy-conserving model Eq. (\[Saxena2\]) has the same static kink solutions as model Eq. (\[Noconservation\]) but their spectra are different. The linear vibration spectrum of the vacuum for Eq. (\[Noconservation\]) is $\omega^2=2+(4C-2)\sin^2(\kappa/2)$ and that for Eq. (\[Saxena2\]) is $\omega^2=4C/(2C-1)+4C\sin^2(\kappa/2)$, while the one for the classical model Eq. (\[PHI4Classic\]) is $\omega^2=2+4C\sin^2(\kappa/2)$. ![Upper panels: boundaries of the linear spectrum of the vacuum (solid lines) and kink internal mode frequencies (dots) as functions of the lattice spacing $h=1/\sqrt{C}$. Lower panels: time evolution of kink velocity for different initial velocities and $h=0.7$. The results are shown for (a) classical $\phi^4$ model, Eq. (\[PHI4Classic\]), (b) PNp-free model conserving energy, Eq. (\[Saxena2\]), and (c) PNp-free model conserving momentum, Eq. (\[Noconservation\]) at $A=0$.[]{data-label="Figure1"}](fig1.ps) ![The kink velocity in the regime of steady motion \[see bottom panel in Fig. \[Figure1\] (c)\] for the PNp-free $\phi^4$ model Eq. (\[Noconservation\]) is shown as a function of parameter $A$. For $|A| > 0.2$, the kink self-acceleration effect disappears.[]{data-label="Figure2"}](fig2.ps) The top panels of Fig. \[Figure1\] present the boundaries of the linear vibration spectrum of the vacuum (solid lines) and the kink internal modes (dots) as functions of the lattice spacing $h$ for (a) the classical $\phi^4$ model of Eq. (\[PHI4Classic\]), (b) the PNp-free model of Eq. (\[Saxena2\]) conserving energy, and (c) the PNp-free model of Eq. (\[Noconservation\]) at $A=0$ conserving momentum. In PNp-free models kinks possess a zero-frequency Goldstone translational mode. Since all models presented in Fig. \[Figure1\] share the same continuum $\phi^4$ limit, their spectra are very close for small $h$ ($h<0.5$). 
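The quoted dispersion relation for Eq. (\[Noconservation\]) at $A=0$ (equivalently, for Eq. (\[Saxena1\])) can be checked by linearizing the lattice equation about the vacuum $\phi=1$ on a periodic chain. The Python sketch below (the values of $C$ and the chain length are illustrative) builds the linearization by central finite differences and compares its eigenvalues with $\omega^2=2+(4C-2)\sin^2(\kappa/2)$:

```python
import numpy as np

C, N = 2.0, 16   # illustrative values

def rhs(phi):
    # Momentum-conserving PNp-free phi^4 model, Eq. (Saxena1), on a
    # periodic chain.
    l, m, r = np.roll(phi, 1), phi, np.roll(phi, -1)
    return (1.0 - m**2 / (2 * C)) * C * (l - 2 * m + r) + m - m**3

# Linearize numerically about the vacuum phi = 1 via central differences:
vac, eps = np.ones(N), 1e-6
K = np.empty((N, N))
for j in range(N):
    e = np.zeros(N)
    e[j] = eps
    K[:, j] = (rhs(vac + e) - rhs(vac - e)) / (2 * eps)

# Squared phonon frequencies are the eigenvalues of -K; compare with the
# dispersion relation omega^2 = 2 + (4C - 2) sin^2(kappa/2) quoted above.
omega2_num = np.sort(np.linalg.eigvalsh(-(K + K.T) / 2))
kappa = 2 * np.pi * np.arange(N) / N
omega2_th = np.sort(2.0 + (4 * C - 2.0) * np.sin(kappa / 2)**2)
assert np.max(np.abs(omega2_num - omega2_th)) < 1e-6
```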
The bottom panels of Fig. \[Figure1\] show the time evolution of kink velocity for the corresponding models at $h=0.7$ for kinks launched with different initial velocities. To boost the kink we used the semi-analytical solution for the normalized Goldstone mode, whose amplitude serves as a measure of the initial kink velocity. One can see that the mobility of kinks in the PNp-free models shown in (b) and (c) is higher than in the classical model having PNp and shown in (a). In the energy-conserving models shown in (a) and (b), the kink velocity decreases monotonically due to the energy radiation. The non-Hamiltonian momentum-conserving model in (c) shows the effect of kink self-acceleration discussed in [@Submitted]. It is interesting to study what happens when the parameter $A$ in Eq. (\[Noconservation\]) deviates from zero and the conservation law of the model (momentum conservation) disappears. We found that the effect of kink self-acceleration, which can be seen in the bottom panel of Fig. \[Figure1\] (c) for $A=0$, remains for $|A|<0.2$, but the value of the kink velocity in the steady motion regime decreases with increasing $|A|$, as presented in Fig. \[Figure2\]. For $|A|>0.2$ the kink self-acceleration effect disappears and the kink velocity gradually decreases with time. From the above, we infer that properties such as the self-acceleration (for momentum-conserving models) or the Bogomol’nyi bounds (for energy-conserving discretizations) render such models rather special within the broader class of PNp-free models. However, the critical ingredient for the more general feature of ([*static*]{}) PNp absence exists in the form of a reduction of the second-order problem to a first-order one. Conclusions =========== A general procedure for deriving discrete Klein-Gordon models whose static kinks can be placed anywhere with respect to the underlying lattice was described. Such models are called [*static*]{} PNp-free models. 
It was demonstrated that the models of this kind derived earlier [@SpeightKleinGordon; @PhysicaD; @Submitted; @Saxena] are special cases of the wider family of models derived here. Static kink solutions for the PNp-free models can be found from the nonlinear algebraic equation of the form $u(l,m)=0$, which is a discrete analog of the first integral of the static continuum Klein-Gordon equation of motion. This ensures the existence of static kink solutions at least for the regime of sufficiently weak discreteness and smooth background potential. The range of the discreteness parameter supporting stable static kinks varies according to the specific properties of the model. In this paper we have discussed only nearest-neighbor discretizations. However, one can easily write down a PNp-free model involving more distant neighbors by replacing Eq. (\[KleinGordonDiscr\]) with higher-order finite difference operators approximating Eq. (\[KleinGordon\]), keeping the two-point approximation, Eq. (\[udisc1\]), for the first integral of Eq. (\[ucont1\]). Discrete kinks in the static PNp-free models possess the zero-frequency translational Goldstone mode and they can (almost) freely move with at least infinitesimally small velocity. Such kinks are not trapped by the lattice and they can be accelerated by even weak external fields. As a topic for future studies, it would be interesting to find any possible relation between models constructed here and models that support traveling kink solutions for finite kink velocity. Such connections are apparently under intense investigation [@dima] and should provide a framework for understanding travelling in dispersive lattice systems. 
[99]{} Braun O M and Kivshar Y S 2004 [*The Frenkel-Kontorova Model: Concepts, Methods, and Applications*]{} (Berlin: Springer) Peyrard M and Kruskal M D 1984 Kink dynamics in the highly discrete sine-Gordon system [*Physica*]{} D [**14**]{}, 88-15 Combs J A and Yip S 1983 Single-kink dynamics in a one-dimensional atomic chain: A nonlinear atomistic theory and numerical simulation [*Phys. Rev.*]{} B [**28**]{}, 6873-13 Boesch R and Willis C R 1989 Exact determination of the Peierls-Nabarro frequency [*Phys. Rev.*]{} B [**39**]{}, 361-8 Kevrekidis P G and Weinstein M I 2000 Dynamics of lattice kinks [*Physica*]{} D [**142**]{}, 113-40 Flach S Zolotaryuk Y and Kladko K 1999 Moving lattice kinks and pulses: An inverse method [*Phys. Rev.*]{} E [**59**]{}, 6105-11 Savin A V, Zolotaryuk Y and Eilbeck J C 2000 Moving kinks and nanopterons in the nonlinear Klein-Gordon lattice [*Physica*]{} D [**138**]{}, 267-15 Speight J M and Ward R S 1994 Kink dynamics in a novel discrete sine-Gordon system [*Nonlinearity*]{} [**7**]{}, 475-11; Speight J M 1997 A discrete $\phi^4$ system without a Peierls-Nabarro barrier [*Nonlinearity*]{} [**10**]{}, 1615-13; Speight J M 1999 Topological discrete kinks [*Nonlinearity*]{} [**12**]{}, 1373-17 Kevrekidis P G 2003 On a class of discretizations of Hamiltonian nonlinear partial differential equations [*Physica*]{} D [**183**]{}, 68-19 Karpan V M, Zolotaryuk Y, Christiansen P L and Zolotaryuk A V 2002 Discrete kink dynamics in hydrogen-bonded chains: The one-component model [*Phys. Rev.*]{} E [**66**]{}, 066603-13 Eilbeck J C and Flesch R 1990 Calculation of families of solitary waves on discrete lattices [*Phys. 
Lett.*]{} A [**149**]{}, 200-3; Duncan D B, Eilbeck J C, Feddersen H and Wattis J A D 1993 Solitons on lattices [*Physica*]{} D [**68**]{}, 1-11 Aigner A A, Champneys A R and Rothos V M 2003 A new barrier to the existence of moving kinks in Frenkel-Kontorova lattices [*Physica*]{} D [**186**]{}, 148-23 Pelinovsky D E (personal communication) Arnold V I 1989 [*Mathematical Methods of Classical Mechanics*]{} (New York: Springer-Verlag) Cooper F, Khare A, Mihaila B and Saxena A 2005 arXiv:nlin.SI/0502054 v1 24 Feb Bogomol’nyi E B 1976 The stability of classical solutions [*J. Nucl. Phys.*]{} [**24**]{}, 449-7 Dmitriev S V, Kevrekidis P G and Yoshikawa N Standard Nearest Neighbor Discretizations of Klein-Gordon Models Cannot Preserve Both Energy and Linear Momentum (submitted)
--- author: - | Peter W. Shor\ MIT\ Cambridge, MA 02482 title: Scrambling Time and Causal Structure of the Photon Sphere of a Schwarzschild Black Hole --- [**Abstract:**]{} Recently, physicists have started applying quantum information theory to black holes. This led to the conjecture that black holes are the fastest scramblers of information, and that they scramble it in time order $M \log M$, where $M$ is the mass of the black hole in natural units. As stated above, the conjecture is not completely defined, as there are several possible definitions of scrambling times. It appears that not all papers that refer to this conjecture interpret it the same way. We consider a definition of scrambling time stronger than the one given in the paper that first proposed this conjecture \[Sekino and Susskind, [*J. High Energy Phys.*]{} [**0810**]{}:065 (2008)\], and show that this stronger version of the conjecture appears to be incompatible with a number of other widely-believed and reasonable-sounding properties of black holes. We argue that for the scrambling time of a black hole to be this fast, either relativity is violated or non-standard physics must be occurring outside the stretched event horizon of a black hole. More specifically, either information is being transferred faster than relativity would permit, the information is not carried by the Hawking radiation and thus must be carried by unknown physics, or the Hawking radiation carries much more information than standard thermodynamics would permit. We analyze the situation from the viewpoint of an outside observer who never falls into the black hole. We assume that an outside observer never sees anything actually fall into the black hole. We also assume that from the viewpoint of this observer, the physics near a black hole is very much like the known laws of physics, except possibly in the stretched horizon, where physics at Planck-scale energies starts being relevant. 
For the physics near the horizon, all we assume is that the structure of space-time is roughly that predicted by general relativity. Unlike Susskind’s complementarity principle, we do not require that an outside observer’s point of view be in any way compatible with that of an observer falling into the black hole. In fact, we do not consider the viewpoint of observers falling into the black hole at all. Thus, some of the solutions that have been proposed to evade the contradiction discovered in the AMPS paper \[Almheiri et al., [*J. High Energy Phys.*]{} 2013:62 (2013)\], in particular the assumption that black holes have firewalls just below the horizon that destroy any infalling information, do not appear to address the problems our paper raises. Naturally, in order to show this, we need to make some assumptions. Our first assumption is that, outside the stretched horizon, the laws of physics are well approximated by some quantum field theory. The second is that the causality structure of spacetime outside the horizon is dictated by the laws of general relativity. Third, we assume that the Hawking radiation carries the information that exits the black hole, as well as the information involved in scrambling the black hole. While we allow information to be stored by a different means in the stretched horizon, general relativity does not appear to permit fast scrambling unless this information leaves the stretched horizon. Finally, we assume that the usual laws of thermodynamics govern how much information can be carried by Hawking radiation. Our argument considers the structure of the photon sphere from the point of view of an outsider who stays outside the black hole. The basic idea of our argument is to divide the photon sphere into cells, and use computer-science style arguments to show that it will take at least order $M^2$ time to transmit enough information from one side of the black hole to the other so as to maximally entangle the two sides. 
Introduction ============ The black hole information paradox is the question of whether the dynamics of black hole evaporation is unitary, or whether information is lost when you throw it into a black hole. Classical general relativity says that anything that is thrown into a black hole can never come out, while quantum mechanics says that physical processes are reversible, so that information is never destroyed. In [@Bekenstein], Bekenstein proposed that black holes obey their own laws of thermodynamics, and that the entropy of a black hole was of order $M^2$ bits, where $M$ is the mass of the black hole. In [@Hawking], Hawking continued the investigation of the thermodynamics of black holes and argued that radiation emerges from a black hole, and thus that the black hole will eventually evaporate after time order $M^3$. While Hawking’s argument does not appear to let information escape from a black hole, his argument is semi-classical, and thus can only be an approximation to the true physics. It is thus not clear at present whether information can escape from a black hole in a full theory of quantum gravity. Hawking radiation is very similar to Unruh radiation, the radiation that an accelerating observer in a vacuum state sees. While some treatments of black holes distinguish between Hawking and Unruh radiation, classifying the photons that escape the black hole as “Hawking radiation” and the virtual photons that remain inside the black hole as “Unruh radiation”, we will not. We use the term “Hawking radiation” for both of these phenomena when they are in the vicinity of black holes. Maldacena [@Maldacena] discovered a correspondence between Anti de Sitter theories with gravity and conformal field theories without gravity (called AdS-CFT). Since conformal field theories are unitary, this implied that theories of quantum gravity should also be unitary. 
In order to reconcile general relativity and quantum mechanics, Susskind [@complementarity] proposed a complementarity principle, where both an observer remaining outside the black hole and an observer falling in see things happening in accordance with the laws of physics, but where these two observers do not necessarily agree on exactly what happens. This complementarity principle has been seriously challenged by an argument put forth in 2013 in a paper generally known as AMPS (from its authors’ initials) [@AMPS]. More specifically, they use the monogamy of entanglement [@monogamy] to argue that the information inside a black hole cannot be entangled both with the information near the horizon and with the earlier Hawking radiation. They then argue that this means that information cannot escape from a black hole without producing a “firewall” across the horizon, where any infalling matter is destroyed. Reasoning about black hole dynamics and complementarity, in 2007 Hayden and Preskill [@HP] gave arguments for why the scrambling time of a black hole had to be at least order $M \log M$, where $M$ is the mass of the black hole in natural units.[^1] An informal definition of the scrambling time is how fast the information in a black hole gets “mixed up”. This paper led to a more serious study of scrambling time. Several definitions of scrambling time have been proposed. These do not necessarily all yield the same quantity for the scrambling time [@LLZZ]. These definitions will be discussed later in the paper. Hayden and Preskill were reasoning about whether it was possible to use black holes to observe a violation of the no-cloning theorem. They assume that an observer, Bob, knows the exact state of a black hole. He then throws a quantum state into the black hole. Hayden and Preskill showed that if he waits for the scrambling time, Bob can use the Hawking radiation the black hole emits to recover the information he threw in. 
Bob finally jumps into the black hole in an attempt to catch up with the information he earlier threw in. If he can successfully do this, then he would have two copies of the quantum state, and thus would have effectively cloned a quantum state, a violation of the no-cloning theorem of quantum mechanics (although nobody outside the black hole could verify that he has been successful at this). What Hayden and Preskill showed was that as long as Bob has to wait a time of at least order $M \log M$, he can never catch up with the information thrown into the black hole. The fact about the scrambling time that Hayden and Preskill needed was thus that it is at least order $M \log M$, as this corresponded to the worst case of their analysis, where Bob comes the closest to catching up with the information he threw in. A paper of Sekino and Susskind [@SS] followed up on the Hayden and Preskill paper. They gave a definition of scrambling time that seemed to be the minimum needed for the Hayden-Preskill argument to work, and made the conjecture that black holes were indeed fast scramblers, and mixed information up in order $M \log M$ time. Some support for this idea was presented in [@butterfly]. One argument given in [@HP; @SS] for why the scrambling time should be order $M \log M$ was that if you add mass or electric charge to the black hole, it equalizes in a time of order $M \log M$. Thus, the time for information scrambling should also be $M \log M$. We believe this is a misleading argument. To see why, consider the analogy of putting a drop of dye into a pitcher of water. The water level will equalize in a matter of seconds, while it takes much longer for the dye to diffuse evenly throughout the water—certainly at least several minutes. This is because the process of equalizing the water level is driven by an energy difference, while the diffusion of the dye does not change the energy of the system. 
Similarly, distributing mass or electric charge uniformly around the black hole decreases its energy, while there is no energy decrease associated with spreading information around the black hole. Another argument for why the scrambling time should be order $M \log M$ arises from the AdS-CFT correspondence. Shenker and Stanford [@butterfly] compute out-of-time-order correlation functions in the CFT side of the correspondence, and show that the time scale of the decay of these correlations is order $M \log M$. They conclude that the scrambling time on the CFT side of the correspondence is order $M \log M$, and thus, that the scrambling time in the AdS side of the correspondence should also be order $M \log M$. While we agree that it seems that the out-of-time-order correlations probably do decay with time scale order $M \log M$, we do not see why the timescale for the decay of out-of-time-order correlation functions should be the same as the scrambling time, especially for the stronger definitions of scrambling time that have been proposed. In this paper, we try to assume that physics near a black hole behaves as much as possible like established physics. We assume that the black hole dynamics are those seen by an outside observer, and that nothing actually ever reaches the event horizon. We assume that the structure of the space-time outside the black hole is well-approximated by the laws of general relativity. We assume further that the black hole dynamics is unitary; that outside the stretched horizon, all information in the black hole is carried by the Hawking radiation; and that the thermodynamics of Hawking radiation is given by the standard formulas for the thermodynamics of thermal radiation. Other than the laws of general relativity, we try to make no assumption on the physics in the stretched horizon, where the energy scales are the Planck energy. 
Using a definition of scrambling time stronger than Sekino and Susskind’s, but comparable to definitions considered in several other papers that discuss the scrambling conjecture, we show that under these assumptions, for black holes to scramble information faster than order $M^2$, they must be able to transmit information from one part of the black hole to another faster than the speed of light, a violation of causality in relativity theory. Let us note that, under the assumptions of relativity, information cannot be transmitted quickly in the stretched event horizon of the black hole. Suppose we constrain the information to move around the black hole at a height $h$ or less above the horizon, i.e., at radius at most $2M+h$. The line element in Schwarzschild coordinates is $$ds^2 = -\left(1-\frac{2M}{r}\right)dt^2 + \left(1-\frac{2M}{r}\right)^{-1} dr^2 +r^2 (d\theta^2 + \sin^2 \theta d\phi^2).$$ If we set $ds^2 \leq 0$, to specify a lightlike or timelike trajectory, then we see that $$r^2( d\theta^2 + \sin^2 \theta d\phi^2)\leq \left(\frac{h}{2M+h}\right) dt^2,$$ showing that in time $t$, we can move a distance of at most $t\sqrt{ h/(2 M)}$ tangentially, i.e., around the black hole. The stretched horizon is of order the Planck distance above the event horizon, which corresponds to $h = 1/M$. Thus, to transmit information that stays within the stretched horizon from one side of a black hole to another, relativity says that we need time order $M^2$. One of the assumptions we make in this paper is that this bound holds. Scrambling Time =============== Let us consider two hemispheres of the black hole, which we will call the north and south hemispheres. In a Haar-random pure state, these two hemispheres will have nearly maximal entanglement; that is, since a black hole has order $M^2$ bits of entropy, they will have order $M^2$ bits of entanglement. 
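The order-$M^2$ causality bound derived above can be sanity-checked numerically. The following Python sketch is illustrative only: the constant factors, the approximation $r \approx 2M$, and the choice $h = 1/M$ for the stretched-horizon height are the assumptions stated in the text.

```python
import math

def traversal_time(M, h):
    """Coordinate time for a light signal confined to radius r = 2M + h
    (Schwarzschild coordinates) to travel halfway around the black hole.
    From ds^2 <= 0, the tangential coordinate speed is sqrt(h/r)."""
    r = 2 * M + h
    speed = math.sqrt(h / r)      # tangential speed of light at this height
    distance = math.pi * r        # half the circumference
    return distance / speed

# Confine the signal to the stretched horizon, h = 1/M:
for M in [1e3, 1e4, 1e5]:
    print(M, traversal_time(M, 1.0 / M) / M**2)
```

The printed ratio settles at the constant $2\sqrt{2}\pi \approx 8.9$, confirming that at Planck height the traversal time grows as $M^2$, independent of the mass.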
It seems likely that there is some minimum entanglement between these two hemispheres in all low-energy states, but this should be governed by the boundary, and thus be on the order of $M$ bits, much less than the maximal entanglement. Assume that the two hemispheres start out in a random nearly unentangled state. We ask how long it will take the state to evolve to a nearly maximally entangled state. This is our definition of scrambling time. One might object that it is hard to initialize the black hole in a nearly unentangled state, i.e., a state having much less than $M^2$ entanglement. To address this objection, let us note that if such states exist, one can ask the question of how long it will take these states to evolve into nearly maximally entangled states; one does not need a plausible physical mechanism for producing these states. We see no reason why these states should not exist. There are many other possible definitions of scrambling time. One is simply the time scale it takes for out-of-time-order correlations to decay. This is the definition used in [@butterfly; @SBSSH]. Another is the time it takes to reach a maximally entangled state, starting not with a product state of two nearly equal halves, but with a tensor product of a pure state on one qubit with a Haar-random pure state on the remaining qubits. This last definition is close to Sekino and Susskind’s [@SS]. We do not see any reason why the scrambling time should not be of order $M \log M$ for these definitions. Sekino and Susskind chose this definition so that it would be close to what is needed to be able to recover the information in the Gedankenexperiment of Hayden and Preskill [@HP]; however, we don’t know of any proofs that this definition of scrambling is sufficient to let one recover the information in order $M \log M$ time. 
The paper [@LSHOH] defines the scrambling time in a way much closer to the one we use in this paper: the time it takes to evolve from a tensor product of $n$ qubits each in a pure state to a state that is nearly maximally entangled on subsystems of size $\kappa n$ for some constant $\kappa$. The paper [@YK] assumes that the quantum system is in a Haar-random state after the scrambling time, also a stronger criterion than our definition. The Cell Structure ================== The [*photon sphere*]{} of a Schwarzschild black hole is defined as everything inside the smallest possible circular orbit but outside the horizon. For a Schwarzschild black hole, which has radius $2M$, the smallest possible circular orbit has radius $3M$. What we do is first divide the photon sphere of the black hole into cells, with the property that, as seen from an observer far from the black hole, information can be transmitted from one part of a cell to any other part of the same cell in time order $M$. We then calculate that the Hawking radiation within each cell can only carry a constant number of bits. Thus, we can model the causality structure of the black hole as a distributed network of processors, where each processor only contains a constant number of bits, and takes time order $M$ (as measured by an observer far from the black hole) to communicate with its neighbors. We show that this distributed network cannot transmit information from one side of the black hole to the other quickly enough to be a fast scrambler. In order to give bounds on the flow of information in the photon sphere, we divide the photon sphere into cells. The cells (from the point of view of an accelerating observer hovering within a cell) should have roughly constant diameter in all directions. And from the point of view of an outside observer far from the black hole, each round trip from a face of the cell to the opposite face and back should take time order $M$. 
Recall that the line element of the Schwarzschild coordinates is $$ds^2 = -\left(1-\frac{2M}{r}\right)dt^2 + \left(1-\frac{2M}{r}\right)^{-1} dr^2 +r^2 (d\theta^2 + \sin^2 \theta d\phi^2).$$ Let $h$ be the height above the horizon, i.e. $h = r-2M$. For an observer who stays at the same $r, \Omega$ in Schwarzschild coordinates, we have $$d\tau = \sqrt{1-\frac{2M}{r}} dt = \sqrt{\frac{h}{r}} dt.$$ In the photon sphere, $2M \leq r \leq 3M$, so $$\sqrt{\frac{h}{r}} \approx \sqrt{\frac{h}{M}}.$$ Thus, the diameter of these cells in Schwarzschild coordinates in the direction parallel to the horizon should be order $M \sqrt{h/M}$. Because radial distance is lengthened by an additional factor of order $\sqrt{h/M}$, the vertical diameter of these cells in Schwarzschild coordinates is order $M (h/M) = h$. Thus, the cells have diameter order $\sqrt{Mh}$ in the direction parallel to the horizon, and diameter order $h$ in the vertical direction. For a two-dimensional representation of what these cells look like, see Figure \[figure-cells\]. ![The black hole cell structure in Schwarzschild coordinates, depicted in two dimensions. Note that the aspect ratio of the cells appears larger as you approach the horizon. This is an artifact of the Schwarzschild coordinates; while the cells do grow smaller as you approach the horizon, an observer hovering near the horizon would see these cells as having an aspect ratio of near unity. []{data-label="figure-cells"}](BlackHoleCells-Dragons.jpg){width="2.1in"} It follows that the number of cells that cross a great circle at radius $2M+h$ is of order $$\frac{2M+h}{\sqrt{Mh}} \approx \sqrt{M/h},$$ and the number of cells that cross a line from the photon sphere to a point at $2M+h$ is order $$\log({M/h}).$$ Let us now calculate the entropy, i.e., the number of bits contained in the Hawking radiation in each cell. 
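Before computing the entropy per cell, the cell counts just derived can be checked with a short numerical sketch. The factor-of-two spacing between layers and the cutoff at the Planck height $h = 1/M$ are illustrative assumptions; only the scalings matter.

```python
def cell_census(M):
    """Count cells layer by layer.  Layer k sits at height h_k = M / 2**k;
    its cells have horizontal size ~ sqrt(M * h_k), so a sphere of area
    ~ (2M)^2 holds ~ M / h_k of them (up to constant factors)."""
    layers = []
    h = M
    while h >= 1.0 / M:          # stop at the Planck height h = 1/M
        layers.append(M / h)     # number of cells in this layer
        h /= 2.0
    return layers

for M in [2**5, 2**8, 2**10]:
    layers = cell_census(M)
    # number of layers grows like log M; total cells like M^2
    print(M, len(layers), sum(layers) / M**2)
```

The output shows order $\log M$ layers and a total of order $M^2$ cells, dominated by the innermost layer just above the horizon.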
The entropy of black-body radiation in a cell of volume $V$ is $$S = \frac{4 \pi^2}{45} VT^3,$$ and so is proportional to $VT^3$, where $T$ is the temperature. The side length of each of these cells, as seen from a stationary observer, is of order $\sqrt{M h}$, so the volume is of order $M^{3/2} h^{3/2}$. What is the temperature? The temperature seen by a near-horizon observer (which still holds within a constant factor for $h < M$) is $$T \approx \frac{1}{4 \pi \sqrt{2 M h}}$$ Thus, the number of bits in each cell is proportional to $$V T^3 =O(1)$$ and so approximately constant. This radiation must consist of virtual particles, since it does not contribute to the mass of the black hole. However, virtual particles can have observable effects, and thus presumably can carry information. Conventional wisdom says that the continuous nature of space and time breaks down at the Planck scale. The value of $h$ which gives Planck-scale cells is order $1/M$. If we stop the process at $h=1/M$, there are order $M^2$ cells, a constant fraction of them just above the horizon of the black hole. The exact constant factor on the $M^2$ depends on exactly where we stop dividing the cells. Black hole thermodynamics predicts that the entropy of a black hole is $A/4$, where $A$ is the surface area. So if we assume that each cell contains a constant number of bits of Hawking radiation, then for an outside observer, the entropy encoded in the Hawking radiation is sufficient to account for the black hole’s entropy. This explanation for the entropy has been previously proposed [@Jacobson; @Wald]. Is the entropy really encoded in the Hawking radiation this way? While we may have to wait until there is a microscopic theory of quantum gravity to answer this question definitively, it seems consistent with our current knowledge of physics. 
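The cancellation that makes the entropy per cell $O(1)$ can be made explicit: $V \sim (Mh)^{3/2}$ while $T^3 \sim (Mh)^{-3/2}$, so their product is independent of both $M$ and $h$. A small sketch (the constant factors are those given in the text; treating each cell as an exact box of black-body radiation is an idealization):

```python
import math

def bits_per_cell(M, h):
    """Black-body entropy S = (4*pi^2/45) * V * T^3 inside one cell,
    with local volume V ~ (M*h)**1.5 and the locally measured
    near-horizon temperature T = 1/(4*pi*sqrt(2*M*h))."""
    V = (M * h) ** 1.5
    T = 1.0 / (4 * math.pi * math.sqrt(2 * M * h))
    return (4 * math.pi**2 / 45) * V * T**3

# The M and h dependence cancels: every cell holds the same O(1) entropy.
for M, h in [(1e3, 1e-3), (1e6, 1.0), (1e8, 1e-8)]:
    print(M, h, bits_per_cell(M, h))
```

The printed entropy is the same small constant for every $(M, h)$ pair, whether the cell sits deep in the atmosphere of a small hole or far out in a large one.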
Recall Landauer’s precept “information is physical.” In order for the black hole’s quantum state to be scrambled, information must be carried from one part of the black hole to another. And if we accept Landauer’s precept, it must be carried by some physical process. Given our assumption that outside the stretched horizon, the physics agrees more or less with known physics, it appears that outside the stretched horizon, the only possible carrier of information is the Hawking radiation. This assumption does permit some information to be stored by unknown physics within the stretched horizon, but relativity does not allow information within the stretched horizon to be transmitted quickly enough for the scrambling time to be order $M \log M$. Transmitting Information in the Photon Sphere ============================================= We now switch to a more computer-science mode of reasoning. Suppose we have a network of these cells, where each cell contains order 1 bits, and we wish to send order $M^2$ bits from one side of the black hole to the other. We divide time into time steps of size $M$, as measured by an outside observer. Each cell can communicate with its neighbor in one time step. More specifically, suppose we have two hemispheres of the black hole, one near the North pole and one near the South pole, each of which is nearly pure. Each of these hemispheres contains half of the black hole’s surface, so each contains order $M^2$ bits. To obtain a maximally entangled state, we need to send order $M^2$ bits of information from one of these regions to the other. Let us consider what paths we could send this information along. The quickest path (as measured by an outside observer) between two points in the stretched horizon is the shortest null geodesic between them (see Figure \[figure-geodesics\]). 
There is a path that might be easier to think about that is not much longer: go straight up at the speed of light until you reach a level where the two points are covered by adjacent cells, travel at this height until you are directly above the second point, and then go back down. If the two points in the stretched horizon are separated by an angle $\theta$, then this second path will take time order $\log(M \theta)$, and this will thus apply to the null geodesic as well. So one possibility is that we send the information up to the outer region of the photon sphere, send it along the outside of this sphere, and then send it back down to near the horizon once it reaches the other side. The length of these paths is order $\log M$, which is fast enough for the scrambling time to be order $M \log M$. However, there are only a constant number of cells on the outside of the photon sphere. These cells form a bottleneck, since each of these paths must go through one of these cells. Each of these cells on the outer boundary can only contain a constant number of qubits per time step, so if we are going to send $M^2$ qubits along paths involving these cells, we need to take order $M^2$ time steps, resulting in total time order $M^3$. ![Trajectories of photons from a point on the horizon of a black hole. The red trajectory makes a complete orbit at $R=3M$ before it escapes to infinity. []{data-label="figure-geodesics"}](Black-Hole-Trajectories.pdf){width="2.4in"} Another possibility is that we send the information while keeping it close to the horizon of the black hole. There are order $M$ disjoint paths from one of these regions to the other along the horizon, but each of these paths has length order $M$. It would thus take order $M$ time steps to send all the information along these paths, making order $M^2$ time altogether. In fact, we will show that this is the best we can do. Recall that we are assuming that any non-standard physics is confined to the stretched horizon. 
Thus, any information that is outside the stretched horizon must be carried by Hawking radiation. (Presumably, gravitons could also carry information from one region of the black hole to another, but since gravitons are massless particles, their thermodynamics should be similar to the thermodynamics of Hawking radiation.) Suppose we cut the black hole in half by a plane through its center. The intersection of the cell structure with this plane will look something like that in Figure \[figure-cells\]. There are order $M$ cells in this cut. We can see this by observing that the inner layer has order $M$ cells, and the number of cells in each layer forms a geometric series. If we assume that each cell can only send a constant number of bits to its neighbors during each time step, then each cell can process a constant number of bits per time step. Since the time steps each take time order $M$, and we need order $M$ of them, it will take order $M^2$ time to pass order $M^2$ bits from one side of this cut to the other. The scrambling time must be at least this large. Thus, with our assumptions, a lower bound for the scrambling time is order $M^2$. Note that the same argument shows that to get $Q$ bits of quantum information from one hemisphere of the black hole to the other, we need time order $QM$. Hayden and Preskill Revisited ============================= We have seen that to get $Q$ bits of quantum information from one hemisphere of the black hole to the other, we need to take time order $QM$. Recall that in Hayden and Preskill’s paper, Alice tosses her diary into the black hole. Let us assume her diary weighs $0.02$ mg—one Planck mass (rather a small mass for a diary, but we assume that Alice writes small). How many bits does the hole receive? Ignoring the gravitational potential energy contained in the diary, if we add $1$ Planck mass to the hole, then the mass of the black hole increases from $M$ to $M+1$, and the number of bits goes from $M^2$ to $M^2+2M+1$. 
We have thus increased the number of bits in the black hole by $2M$. We would need to wait order $M^2$ time for all these bits to be spread evenly through the black hole, and possibly this is the minimum amount of time it takes for these bits to come out in the Hawking radiation. Suppose we take one photon of light and send it into the hole, and encode a qubit in it by arranging for it to be in some specific polarization. This doesn’t improve things much. The photon has energy $\alpha$ in Planck units. Thus, when we add it to the black hole, the information content of the black hole increases by $2 \alpha M$ bits, and it takes order $\alpha M^2$ time for the black hole to scramble, which is still proportional to $M^2$, even if $\alpha$ is very small (around $10^{-28}$ for visible light). We do not seem to be saved by the fact that Bob knows everything about the photon except the polarization, because we might need to wait for all the extra information we’ve added to be evenly distributed through the black hole, and not just the unknown polarization. It is possible that the one unknown qubit of polarization scrambles quickly despite the fact that we’ve added a lot of known information, but this would require some justification. To add just one bit to the black hole, we do not see any better way of doing it than sending a photon of energy $1/(2M)$, which will have wavelength comparable to the radius of the black hole. The fact that we can recover a photon of wavelength comparable to the radius in time order $M \log M$ seems much less remarkable than recovering the polarization of a visible light photon; it seems as though a photon of wavelength $2M$ may essentially already be spread throughout the black hole as soon as it enters it. Discussion and Speculations =========================== The arguments in this paper appear to indicate that at least one of the commonly held beliefs about black holes in the following list is incorrect: 1. Black hole evolution is unitary. 2. 
The causal structure in the neighborhood of a black hole is that predicted by general relativity. 3. The scrambling time of a black hole is order $M \log M$. 4. Outside of the stretched horizon, any information in a black hole is contained in the Hawking radiation. This includes the information leaving the black hole, and the information that scrambles it. 5. The amount of information that Hawking radiation contains is the amount predicted by thermodynamics. It is not clear whether this argument can be extended to narrow down the above list of assumptions which might possibly be incorrect. One question that could be asked is whether we can learn any more about black holes by considering the cell structure given by this paper. We believe that one thing the cell structure seems to indicate is that we should not think of black hole dynamics as taking place solely on the horizon, as in the membrane picture [@membranes]. In the cell structure, there are order $\log M$ layers in the photon sphere, and the outer layers seem to play a crucial role, even though they do not contain very many quantum bits. Without these layers it would be difficult or impossible for an unequal distribution of charge on the surface of a black hole to equalize quickly without a violation of relativity, as the information about the amount of charge (or mass) added to the black hole could not propagate from one side of the hole to the other in time $M \log M$. With the cell structure, it is possible to equalize the charge (or mass), as we do not need to get much information from one side to the other; the charge and mass are scalar quantities, so to communicate how much charge there is, one needs only to communicate order $\log M$ bits, which is possible in order $M \log M$ time using the cell structure. 
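The contrast drawn above, that order $M \log M$ time suffices to equalize charge while full scrambling needs order $M^2$, comes down to counting bits against latency and bandwidth. A rough sketch in Python (the $O(1)$ bits-per-cell-per-step capacity and the path lengths are the estimates from the text; all constant factors are ignored):

```python
import math

def transfer_time(M, bits, path_cells, parallel_paths):
    """Outside-observer time to move `bits` bits between antipodal regions:
    each time step lasts ~M; a signal needs `path_cells` steps of latency,
    and at most `parallel_paths` bits enter the pipe per step (O(1) bits
    per cell per step)."""
    steps = path_cells + math.ceil(bits / parallel_paths)
    return M * steps

M = 10**6
# Scrambling: ~M^2 bits must cross the great-circle cut.
over_top = transfer_time(M, M**2, path_cells=round(math.log(M)), parallel_paths=1)
horizon = transfer_time(M, M**2, path_cells=M, parallel_paths=M)
# Charge equalization: only ~log M bits need to cross.
equalize = transfer_time(M, round(math.log(M)),
                         path_cells=round(math.log(M)), parallel_paths=1)
print(over_top / M**3)              # ~1: outer-layer bottleneck gives M^3
print(horizon / M**2)               # ~2: along the horizon gives M^2
print(equalize / (M * math.log(M))) # ~2: a few bits cross in M log M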
Similarly, if one believes that all the information is sitting on the horizon, then in order for the out-of-time-order correlations to decay in order $M \log M$ time, either one has to accept non-local causality, or one has to realize that information can be carried on short paths, through the outer layers of the photon sphere (in our case, by virtual photons of Hawking radiation). Further, we believe that even near the horizon, something can be learned about black holes by imagining the dynamics in three dimensions. Suppose we try to extend the cell structure inwards beyond the Planck scale. What happens? The cells at a Planck distance from the horizon are (as seen from an outside observer) at Planck temperature. If we consider possible layers closer to the horizon, a naive application of the formula would say that the temperature should increase. But what happens when you try to add energy to a system at Planck temperature? Paradoxically, the temperature decreases. A system at Planck temperature contains lots of Planck-size black holes, which are constantly forming and evaporating (or nearly forming and then evaporating, if you assume that black holes can never actually form in finite time). If you add energy to these, you obtain larger-than-Planck-size black holes, which have lower temperature. These will start absorbing mass, and increasing in size, until the ambient temperature of the space outside them is the same as the temperature of a black hole. This means that in a static universe of constant volume at thermal equilibrium[^2], there is at most one black hole, and the amount of mass contained in it is a constant fraction of the mass of the universe. This is because the black hole will absorb radiation until the ambient temperature outside of the black hole decreases faster than the temperature of the black hole. This can only happen if the black hole contains much of the mass of the universe. 
(If the volume of the universe is too large compared to the mass it contains, the black hole will evaporate completely.) Thus, below the Planck-scale layer, we expect to find large black holes. And indeed, it seems quite likely that below the layer of Planck-scale black holes just above the horizon, we find the surface of a single black hole—namely, the actual black hole. For an eternal black hole which stays at the same mass, the black hole horizon should be in thermodynamical equilibrium. Thus, if it is constantly absorbing Planck-scale near-black-holes, one expects it to be constantly emitting them as well. (Of course, these Planck-scale black holes may never completely form from an outside observer’s point of view.) One might thus expect space-time at the stretched horizon, as inferred by an outside observer, to be very irregular. Confirmation of this and more details of this phenomenon may require a theory of quantum gravity. We have identified one way in which information might be communicated from one side of the black hole to the other in time less than order $M^2$. However, this is a fairly far-fetched speculation that we believe is ruled out by several considerations. If the Planck-scale black holes in the atmosphere are not just black holes, but also worm-holes, then in a mature black hole, we might expect to find wormholes connecting one side of the black hole to the other. Information falling on one side could then be propagated to the other side quickly. There are numerous problems with this proposal. We believe it is very unlikely these wormholes could last long enough to travel very far from where they are formed before they evaporate. 
Further, there is a large gradient in the time dilation constant near the horizon; unless there is a mechanism for preventing it [@KT], space-like separated wormhole mouths would turn into time-like separated worm-hole mouths, and information falling into them might be unavailable for long periods of time, or might even emerge before it fell in. This would give rise to causality violation, and it is difficult to construct a consistent theory of physics with causality violation. Finally, unless these wormholes last long enough that each of them can transmit much more than a constant number of bits, arguments similar to those in our paper show that these wormholes cannot be moved around the stretched horizon quickly enough to enable fast scrambling. Acknowledgements ================ The author thanks Zi-Wen Liu, Seth Lloyd, Iman Marvian, Leonard Susskind, L[á]{}rus Thorlacius, and Quntao Zhuang for helpful discussions. He is supported by the National Science Foundation under Grant No. CCF-1525130 and through the NSF Science and Technology Center for Science of Information under Grant No. CCF-0939370. [99]{} Ahmed Almheiri, Donald Marolf, Joseph Polchinski, and James Sully, “Black holes: Complementarity or firewalls?,” [*J. High Energy Phys.*]{} [**2013**]{}:62 (2013). Tom Banks and Willy Fischler, “Holographic space-time does not predict firewalls,” arXiv:1208.4757 (2012). Jacob D. Bekenstein, “Black holes and entropy,” [*Phys. Rev. D*]{} [**7**]{}:2333–2346 (1973). Valerie Coffman, Joydip Kundu, and William K. Wootters, “Distributed entanglement,” [*Phys. Rev. A*]{} [**61**]{}:052306 (2000). Stephen W. Hawking, “Particle creation by black holes,” [*Communications in Mathematical Physics*]{} [**43**]{}, 199–220 (1975). Patrick Hayden and John Preskill, “Black holes as mirrors: Quantum information in random subsystems,” [*J. High Energy Phys.*]{} [**0709**]{}:120 (2007). Ted Jacobson, [*Introductory Lectures on Black Hole Thermodynamics,*]{} online. 
Sung-Won Kim and Kip S. Thorne, “Do vacuum fluctuations prevent the formation of closed timelike curves?” [*Phys. Rev. D*]{} [**43**]{}:3929–3947 (1991). Nima Lashkari, Douglas Stanford, Matthew Hastings, Tobias Osborne, and Patrick Hayden, “Towards the fast scrambling conjecture,” [*J. High Energy Phys.*]{} [**2013**]{}:22 (2013). Zi-Wen Liu, Seth Lloyd, Elton Yechao Zhu, and Huangjun Zhu, “Entropic scrambling complexities,” arXiv:1703.08104. Juan Maldacena, “The large-$N$ limit of superconformal field theories and supergravity,” [*International Journal of Theoretical Physics*]{} [**38**]{}:1113–1133 (1999). Yasuhiro Sekino and Leonard Susskind, “Fast scramblers,” [*J. High Energy Phys.*]{} [**0810**]{}:065 (2008). Stephen H. Shenker and Douglas Stanford, “Black holes and the butterfly effect,” [*J. High Energy Phys.*]{} [**2014**]{}:67 (2014). Leonard Susskind, L[á]{}rus Thorlacius, and John Uglum, “The stretched horizon and black hole complementarity,” [*Phys. Rev. D*]{} [**48**]{}, 3743–3761 (1993). Brian Swingle, Gregory Bentsen, Monika Schleier-Smith and Patrick Hayden, “Measuring the scrambling of quantum information,” [*Phys. Rev. A*]{} [**94**]{}:040302 (2016). Kip S. Thorne, R. H. Price, and D. A. Macdonald (eds.), [*Black Holes: The Membrane Paradigm*]{} (1986). Robert M. Wald, “The thermodynamics of black holes,” [*Living Rev. Rel.*]{} [**4**]{}:6 (2001). Beni Yoshida and Alexei Kitaev, “Efficient decoding for the Hayden-Preskill protocol,” arXiv:1710.03363 (2017). [^1]: Natural units are chosen so that $G = c = \hbar = k_B = 1$, where $G$ is the gravitational constant, $c$ is the speed of light, $\hbar$ is Planck’s constant, and $k_B$ is Boltzmann’s constant. All quantities in this paper will be given in natural units. [^2]: It is not clear that such a thing is allowed by the laws of physics, as general relativity may only allow static universes with the aid of a cosmological constant, and these may be unstable.
--- abstract: 'We prove that any non-cocompact irreducible lattice in a higher rank real semi-simple Lie group contains a subgroup of finite index which is generated by [**three**]{} elements.' address: 'School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005, India' author: - 'R. Sharma and T. N. Venkataramana' title: Generators for Arithmetic Groups --- Introduction ============ In this paper we study the question of giving a small number of generators for an arithmetic group. Our main theorem says that if $\Gamma $ is a higher rank arithmetic group, which is non-uniform, then $\Gamma $ has a finite index subgroup which has at most [**three**]{} generators. There are reasons to believe that the bound three is sharp.\ Our proof makes use of the methods and results of [@T], [@R; @4] on certain unipotent generators for non-uniform arithmetic higher rank groups as also the classification of absolutely simple groups over number fields. The question of a small number of generators is also motivated by the congruence subgroup problem (abbreviated to CSP in the sequel). We prove the following theorem, the main theorem of this paper.\ \[mainth\] Every higher rank [**non-uniform**]{} arithmetic group $\Gamma$ has a subgroup $\Gamma '$ of finite index which is generated by at most [**three**]{} elements. The proof exploits the existence of certain unipotent elements in the arithmetic group. The higher rank assumption ensures that if $U$ and $U^-$ are opposing unipotent radicals of maximal parabolic subgroups, and $M$ is their common normaliser, then $M({{\mathbb{Z}}})$ will have a “sufficiently generic” semi-simple element. There are generic elements in $U^{\pm}({{\mathbb{Z}}})$ which together with this generic element in $M({{\mathbb{Z}}})$ will be shown to generate, [**in general**]{}, an arithmetic group. 
This is already the case for the group $SL(2,O_K)$ where $K/{{\mathbb{Q}}}$ is a non-CM extension of degree greater than one (see section 2).\ If $\gamma $ is the above “generic” element, and $u^+\in U^+$ and $u^-\in U^-$ are also “generic”, then let $\Gamma $ be the group generated by the $n$-th powers $\gamma ^n$, $(u^+)^n$ and $(u^-)^n$ for some integer $n$. Clearly, $\Gamma $ is generated by three elements. It is easy to show that any arithmetic subgroup of $G({{\mathbb{Z}}})$ contains a group of the form $\Gamma $ for some integer $n$. The genericity assumption will be shown to imply that for [**most groups**]{} $G$, $\Gamma $ intersects $U^+({{\mathbb{Z}}})$ and $U^-({{\mathbb{Z}}})$ in subgroups of finite index. Then a Theorem of Tits ([@T]) for Chevalley Groups and its generalisation to other groups of ${{\mathbb{Q}}}$-rank $\geq 2$ by Raghunathan [@R; @4] (see also [@V] for the case when ${{\mathbb{Q}}}$-rank ($G$)=$1$), implies that $\Gamma $ is of finite index in $G({{\mathbb{Z}}})$.\ The proof that $\Gamma $ intersects $U^{\pm}({{\mathbb{Z}}})$ in a lattice for most groups is reduced to the existence of a torus in the Zariski closure of $M({{\mathbb{Z}}})$ (the latter group is not equal to $M$) whose eigen-spaces (with a given eigen-value) on the Lie algebra $Lie (U^{\pm})$ are one dimensional. The existence of such a torus for groups of ${{\mathbb{Q}}}$-rank $\geq 2$ is proved by a case-by-case check, using the Tits diagrams (classification) of simple algebraic groups over number fields. 
It turns out that in the case of exceptional groups, the existence of such a torus is ensured by the results of Langlands [@L] and Shahidi [@Sh] who (in the course of their work on the analytic continuation of certain intertwining operators) analyse the action of the Levi subgroup $L$ on the Lie algebra $Lie(U^+)$ of the unipotent radical.\ However, this approach fails in many groups of ${{\mathbb{Q}}}$-rank one or two; in these cases, we will have to examine the individual cases (i.e. their Tits diagram) to produce an explicit system of three generators. Thus, a large part of the proof (and a sizable part of the paper) involves, in low rank groups, a case-by-case consideration of the Tits diagrams. In many of these cases, the explicit system of generators is quite different from the general case (see sections 4 and 5).\ We end this introduction with some notation. Given a ${{\mathbb{Q}}}$-simple semi-simple algebraic group $G$, there is an absolutely almost simple algebraic group ${\mathcal G}$ over a number field $K$ such that $G=R_{K/{{\mathbb{Q}}}}({\mathcal G})$ where $R_{K/{{\mathbb{Q}}}}$ is the Weil restriction of scalars. Moreover, ${{\mathbb{Q}}}$-rank$(G)=K$-rank$({\mathcal G})$ and $G({{\mathbb{Z}}})$ is commensurate to ${\mathcal G}(O_K)$ where $O_K$ is the ring of integers in the number field. For these reasons, we replace henceforth the group $G$ over ${{\mathbb{Q}}}$ with an absolutely simple group (still denoted $G$ by an abuse of notation), defined over a number field $K$.\ Given a group $G$, elements $g,h\in G$ and a subset $S\subset G$, denote by $^g(h)$ the conjugate $ghg^{-1}$, and $^g(S)$ the set of elements $ghg^{-1}$ with $h\in S$.\ If $\Gamma _0$ is a group, $\Gamma , \Delta$ are subgroups, one says that $\Gamma $ [**virtually contains**]{} $\Delta$ and writes $\Gamma \geq \Delta$ if the intersection $\Gamma \cap \Delta$ has finite index in $\Delta$. 
One says that $\Gamma $ is [**commensurate**]{} to $\Delta $ and writes $\Gamma \simeq \Delta$ if $\Gamma $ virtually contains $\Delta$ (i.e. $\Gamma \geq \Delta$) and vice versa (i.e. $\Delta \geq \Gamma $).\ Preliminary results on Rank one groups ====================================== The Group SL(2) --------------- In this subsection we prove Theorem \[mainth\] for the case $G=SL(2)$ over a number field $E$. The assumption of higher rank translates into the condition that $E$ has infinitely many units. That is, $E$ is neither ${{\mathbb{Q}}}$ nor an imaginary quadratic extension of ${{\mathbb{Q}}}$. It turns out that if $E$ is not a CM field, that is, $E$ is not a totally imaginary quadratic extension of a totally real number field, then, the proof is easier. We will therefore prove this part of the theorem first. \[noncm\] Let $K$ be a number field, which is not ${{\mathbb{Q}}}$ and which is not a CM field. Let $G=R_{K/{{\mathbb{Q}}}}(SL(2))$. Then, any arithmetic subgroup of $G({{\mathbb{Q}}})$ has a subgroup of finite index which has three generators. Before we begin the proof of Proposition \[noncm\], we prove a few Lemmata. We will first assume that $E$ is a non-CM number field with infinitely many units. Let $O_E$ denote the ring of integers in the number field and $O_E^*$ denote the multiplicative group of units in the ring $O_E$. \[finite\] Let $\Delta $ be a subgroup of finite index in $O_E^*$ and $F$ the number field generated by $\Delta$. Then $F=E$. If $r_1(K)$ and $r_2(K)$ are the number of inequivalent real and complex embeddings of a number field $K$, then, by the Dirichlet Unit Theorem, the rank of $O_E^*$ is $r_1(E)+r_2(E)-1$. Let $d$ be the degree of $E$ over $F$.\ Let $A$ be the set of real places of $F$. To each $a\in A$, let $x(a)$ be the number of real places of $E$ lying above $a$ and $y(a)$ the number of non-conjugate complex places of $E$ lying above $a$. Then, for each $a\in A$ we have $x(a)+2y(a)=d$, the degree of $E$ over $F$. 
Clearly, $x(a)+y(a)\geq 1$ for each $a$.\ Let $B$ be the number of non-conjugate complex places of $F$. Then all places of $E$ lying above a place $b\in B$ are imaginary. If their number is $y(b)$, then we have $y(b)=d$ for each $b$.\ The rank of the group of units $O_F^*$ of the number field $F$ is, by the Dirichlet Unit Theorem, $Card (A)+Card (B)-1$. That of $O_E^*$ is $$-1+ \sum _{a\in A} (x(a)+y(a))+ \sum_{b\in B} y(b)$$ By assumption, $O_F^*$ and $O_E^*$ have the same rank, since $O_F^*$ contains $\Delta$, a subgroup of finite index in $O_E^*$. We thus have the equation $$\label{card} Card(A)+Card(B)=\sum _{a\in A}(x(a)+y(a))+\sum_{b\in B}y(b)$$ Since $x(a)+y(a)\geq 1$ and $y(b)=d\geq 1$, equation \[card\] shows that if $B$ is non-empty, then $d=1$ and $E=F$.\ If $B$ is empty, then $F$ has no complex places, and so $F$ is totally real. Moreover, since $x(a)+y(a)\geq 1$, equation \[card\] shows that for each $a\in A$, $x(a)+y(a)=1$. Thus, either $x(a)=0$ or $y(a)=0$. If, for some $a$, $y(a)=0$ then the equation $d=x(a)+2y(a)$ shows that $d=1$ and $E=F$.\ The only possibility left is that $x(a)=0$ and $y(a)=1$ for each $a\in A$, and $F$ is totally real. Therefore, for each archimedean (necessarily real) place $a$ of $F$, we have $y(a)=1$ and $d=2y(a)=2$, that is, there is only one place of $E$ lying above the place $a$ of $F$, and it is a complex place, whence $E/F$ is a quadratic extension, which is totally imaginary. Hence $E$ is a CM-extension, which is ruled out by assumption. The field extension $E$ over ${{\mathbb{Q}}}$ has only finitely many proper subfields $E_1, E_2,\cdots, E_m$ (this follows trivially from Galois Theory, for example).\ \[exist\] Suppose that $E$ is a number field which is not a CM field. Then there exists an element $\theta \in O_E^*$ such that for any integer $r\geq 1$, the subring ${{\mathbb{Z}}}[\theta ^r]$ of $O_E$ generated by $\theta ^r$ is a subgroup of finite index in the additive group $O_E$. 
In particular, ${{\mathbb{Z}}}[\theta ^r]\supset NO_E$ for some integer $N$. Consequently, there exists an element $\theta \in O_E^*$ which does not lie in any of the subfields $E_1,\cdots,E_m$ as above, and for every such $\theta $, the subring ${{\mathbb{Z}}}[\theta ^r]$ is a subgroup of finite index in $O_E$. By Lemma \[finite\], the intersection $\Delta _i=O_E^*\cap E_i$ is of [**infinite index**]{} in $O_E^*$. Let us now write the abelian group $O_E^*$ additively. Then, we have the ${{\mathbb{Q}}}$-subspaces $W_i={{\mathbb{Q}}}\otimes \Delta _i$ of the vector space $W={{\mathbb{Q}}}\otimes O_E^*$ (the latter of dimension $r_1(E)+r_2(E)-1$ over ${{\mathbb{Q}}}$). Since $W_i$ are proper subspaces of $W$, it follows that there exists an element of $W$ (hence of the subgroup $O_E^*$) no rational multiple of which lies in $W_i$ for any $i$. Interpreting this statement multiplicatively, there exists an element $\theta $ of $O_E^*$ such that no integral power of $\theta $ lies in the subfields $E_i$ for any $i$. Consequently, for any integer $r\neq 0$, the subfield ${{\mathbb{Q}}}[\theta ^{r{{\mathbb{Z}}}}]$ is all of $E$. In particular, the subring ${{\mathbb{Z}}}[\theta ^r]$ generated by $\theta ^r$ is of finite index in the ring $O_E$. We now begin the proof of Proposition \[noncm\]. Consider the matrices $u_+=\begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix}$, $u_-=\begin{pmatrix} 1 & 0 \\ 1 & 1\end{pmatrix}$ and, by an abuse of notation, $\theta =\begin{pmatrix} \theta & 0 \\ 0 & \theta ^{-1}\end{pmatrix}$, where $\theta \in O_K^*$ is as in Lemma \[exist\]. Then, the group $\Gamma =<u_+^r,u_-^r,\theta ^r>$ generated by $u_{\pm}^r$ and $\theta ^r$ contains, for integers $m_1,m_2,\cdots m_l$, and $n_1,n_2,\cdots n_l$, the element $$^{\theta ^{m_1r}}(u_+^{rn_1}) ^{\theta ^{m_2r}}(u_+^{rn_2}) \cdots ^{\theta ^{m_lr}}(u_+^{rn_l}).$$ This element is simply the matrix $\begin{pmatrix} 1 & r\sum n_i\theta ^{2m_ir}\\ 0 & 1\end{pmatrix}$. 
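The matrix identity underlying this step — conjugating $u_+^{rn}$ by $\begin{pmatrix}\theta^{rm} & 0\\ 0 & \theta^{-rm}\end{pmatrix}$ scales the off-diagonal entry by $\theta^{2mr}$, and products of upper unipotent matrices add their entries — can be checked numerically. The following sketch is ours, not part of the paper; it uses $\theta=1+\sqrt 2$ as a stand-in unit:

```python
import numpy as np

def h(t):
    # diagonal torus element diag(t, t^{-1})
    return np.array([[t, 0.0], [0.0, 1.0 / t]])

def u_plus(s):
    # upper unipotent matrix with off-diagonal entry s
    return np.array([[1.0, s], [0.0, 1.0]])

theta = 1 + np.sqrt(2)   # stand-in for the unit of Lemma [exist]
r = 2

# product of conjugates ^{theta^{m_i r}}(u_+^{r n_i}) for the pairs (m_i, n_i)
pairs = [(1, 3), (2, -1), (3, 5)]
prod = np.eye(2)
for m, n in pairs:
    g = h(theta ** (r * m))
    prod = prod @ (g @ u_plus(r * n) @ np.linalg.inv(g))

# the paper's claim: the product is [[1, r*sum n_i theta^{2 m_i r}], [0, 1]]
expected = u_plus(r * sum(n * theta ** (2 * m * r) for m, n in pairs))
assert np.allclose(prod, expected)
```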
Picking suitable $m_i, n_i$, we get from Lemma \[exist\] an integer $N$ such that $\begin{pmatrix} 1 & x \\ 0 & 1\end{pmatrix}\in \Gamma $ for all $x\in NO_K$. Similarly, all lower triangular matrices of the form $\begin{pmatrix} 1 & 0 \\ x & 1\end{pmatrix}$ are in $\Gamma $ for all $x\in NO_K$. But the two subgroups $U^+(NO_K)=\begin{pmatrix} 1 & NO_K \\ 0 & 1\end{pmatrix}$ and $U^-(NO_K)=\begin{pmatrix} 1 & 0 \\ NO_K & 1\end{pmatrix}\subset \Gamma $ generate a subgroup of finite index in $SL_2(O_K)$ (by [@Va]). Hence $\Gamma $ is of finite index in $SL_2(O_K)$. It is clear that any subgroup of finite index in $SL_2(O_K)$ contains a three-generated group $\Gamma =<u_+^r,\theta ^r, u_-^r>$ for some $r$. This completes the proof of Proposition \[noncm\]. The CM case. ------------ Suppose that $F$ is a totally real number field of degree $k\geq 2$ and suppose that $E/F$ is a totally imaginary quadratic extension of $F$. There exists an element $\alpha \in E$ such that $\alpha ^2=-\beta \in F$ where $\beta $ is a totally positive element of $F$ (that is, $\beta $ is positive in all the archimedean (hence real) embeddings of $F$). Let $\theta $ be an element of infinite order in $O_F^*$ as in Lemma \[exist\], so that for any integer $r$, the sub-ring ${{\mathbb{Z}}}[\theta ^r]$ of $O_F$ is a subgroup of finite index in $O_F$ (in Lemma \[exist\], replace $E$ by the totally real field $F$). We have thus the following analogue of Lemma \[exist\], in the CM case. \[exist’\] Suppose that $E$ is a CM field and is a totally imaginary quadratic extension of a totally real number field $F$. There exists an element $\theta \in O_E^*$ such that for any integer $r\neq 0$, the ring ${{\mathbb{Z}}}[\theta ^r]$ generated by $\theta ^r$ is a subgroup of finite index in $O_F$. The group of units of $E$ contains the group of units of $F$ as a subgroup of finite index. Therefore, we may apply the previous lemma (Lemma \[exist\]), with $E$ replaced by $F$ (the latter is not a CM field). 
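As a concrete illustration of Lemma \[exist\] (our own example, not from the paper): in the real quadratic field $F={{\mathbb{Q}}}(\sqrt 2)$ one may take $\theta =1+\sqrt 2$. Then $\theta ^r=a_r+b_r\sqrt 2$ with $b_r\neq 0$, so ${{\mathbb{Z}}}[\theta ^r]\supset {{\mathbb{Z}}}+b_r\sqrt 2\,{{\mathbb{Z}}}$, a subgroup of index $b_r$ in $O_F={{\mathbb{Z}}}[\sqrt 2]$. A short exact-arithmetic check:

```python
def unit_power(r):
    # compute (1 + sqrt(2))^r = a + b*sqrt(2) with exact integer arithmetic
    a, b = 1, 0
    for _ in range(r):
        # (a + b*sqrt2)(1 + sqrt2) = (a + 2b) + (a + b)*sqrt2
        a, b = a + 2 * b, a + b
    return a, b

# For every r >= 1 the sqrt(2)-coefficient b_r is nonzero, so Z[theta^r]
# contains Z + b_r*sqrt(2)*Z, of finite index b_r in O_F = Z[sqrt(2)].
for r in range(1, 25):
    a, b = unit_power(r)
    assert b != 0
    # sanity check that a + b*sqrt(2) is a unit: its norm a^2 - 2b^2 = +-1
    assert a * a - 2 * b * b in (1, -1)
```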
Consider the elements $h=h(\theta)= \begin{pmatrix}\theta & 0 \\ 0 & \theta ^{-1}\end{pmatrix}$, $u_+= \begin{pmatrix}1 & 1 \\ 0 & 1\end{pmatrix} $, and $u_-= \begin{pmatrix}1 & 0 \\ \alpha & 1\end{pmatrix}$ of $SL(2,O_E)$. Given an arithmetic subgroup $\Gamma _0$ of $SL(2,O_E)$, there exists an integer $r$ such that the group $\Gamma =<h^r, u_+^r, u_-^r>$ generated by the $r$-th powers $h^r, u_+^r$ and $u_-^r$ lies in $\Gamma _0$. \[CM\] For every integer $r$, the group $\Gamma $ in the foregoing paragraph is arithmetic (i.e. is of finite index in $SL(2,O_E)$). In particular, every arithmetic subgroup of $SL(2,O_E)$ is virtually $3$-generated. Write the Bruhat decomposition for the element $$u_-^r=\begin{pmatrix}1 & 0\\ r\alpha & 1\end{pmatrix}= \begin{pmatrix}1 & \frac{1}{r\alpha} \\ 0 & 1\end{pmatrix} \begin{pmatrix}-\frac{1}{r\alpha} & 0 \\ 0 & -r\alpha\end{pmatrix} \begin{pmatrix}0 & 1 \\ -1 & 0\end{pmatrix} \begin{pmatrix}1 & \frac{1}{r\alpha} \\ 0 & 1\end{pmatrix}$$ By the choice of the element $\theta $, the group generated by $h(\theta )^r$ and $u_+^r$ contains the subgroup $U^+(NO_F)=\{u= \begin{pmatrix}1 & Nb \\ 0 & 1\end{pmatrix}: b\in O_F\}$. Clearly, $\Gamma \supset U^+(NO_F)$. Define $U^-(NO_F)$ similarly. Since $\alpha ^2$ lies in the smaller ring $O_F$, a computation shows that the conjugate $^{u_-^r}(U^+(NO_F))$ contains the subgroup $^{v_+}(U^-(N'O_F))$ for some integer $N'$, where $v_+$ is the element $\begin{pmatrix}1 & \frac{1}{r\alpha} \\ 0 & 1\end{pmatrix}$. Thus the group $^{v_+}(U^-(N'O_F))\subset \Gamma $. Since the group $U^+$ is commutative, we have $^{v_+}(U^+(N'O_F))=U^+(N'O_F)\subset \Gamma $. By a Theorem of Vaserstein ([@Va]), the group generated by $U^+(NO_F)$ and $U^-(N'O_F)$ is a subgroup of finite index in $SL(2,O_F)$. 
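The Bruhat factorisation of the lower unipotent matrix displayed in this proof can be verified directly. The sketch below is ours; any nonzero value stands in for $r\alpha$:

```python
import numpy as np

def bruhat_lower(c):
    # claimed factorisation of [[1,0],[c,1]]:
    # upper unipotent * diagonal * Weyl element * the same upper unipotent
    v = np.array([[1, 1 / c], [0, 1]])
    h = np.array([[-1 / c, 0], [0, -c]])
    w = np.array([[0, 1], [-1, 0]])
    return v @ h @ w @ v

c = 3 + 2j   # stand-in for r*alpha; any nonzero complex number works
lhs = np.array([[1, 0], [c, 1]])
assert np.allclose(bruhat_lower(c), lhs)
```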
In particular, it contains some power $h^M=h(\theta )^M$ of $h$.\ Since a power of $h$ already lies in $\Gamma $, we see that for some integer $r'$, the commutator $u_1$ of $h^{r'}$ and $^{v_+}(h^{r'})$ lies in $\Gamma $. This commutator is nothing but the matrix $\begin{pmatrix}1 & (\theta ^{2r'}-1)(\frac{1}{r\alpha}) \\ 0 & 1\end{pmatrix}$. Now, $\frac {1}{\alpha }=\frac{-\alpha}{\beta}$, with $\beta \in F$. Therefore, by Lemma \[exist\], the subgroup generated by $\theta ^{r'}$ and $u_1$ contains, for some $M'$, the subgroup $U^+(M'O_F\alpha)$ consisting of elements of the form $\begin{pmatrix}1 & xM'\alpha \\ 0 & 1\end{pmatrix}$. Hence, $U^+(M'O_F\alpha )\subset \Gamma $. We have already seen at the beginning of the last paragraph that $U^+(MO_F)\subset \Gamma $. Now, up to a subgroup of finite index, $O_E$ is the sum of $O_F$ and $O_F\alpha$ since $E/F$ is a quadratic extension generated by $\alpha$. This shows (after changing $M'$ if necessary by a suitable multiple) that $U^+(M'O_E)\subset \Gamma $. The conjugate of $U^+$ by the [*lower triangular*]{} matrix $\begin{pmatrix}1 & 0\\ \alpha & 1\end{pmatrix}$ is a unipotent group $V$ opposed to $U^+$. By Vaserstein’s Theorem in [@Va] (replacing $U^-$ by our opposite unipotent group $V$) we see that $\Gamma $ contains an arithmetic subgroup of $SL(2,O_E)$ and is hence itself arithmetic. For handling some ${{\mathbb{Q}}}$-rank one groups, we will need a more general version of Proposition \[CM\]. Let $x\in E\setminus F$ be an integral element divisible by $N!$ (the product of the first $N$ integers) for a large rational integer $N$. Denote by $U^-(xO_F)$ the set of matrices of the form $\begin{pmatrix} 1 & 0\\ xa & 1 \end{pmatrix}$ with $a\in O_F$. Denote by $U^+(rO_F)$ the group of matrices of the form $\begin{pmatrix} 1 & ra\\ 0 & 1 \end{pmatrix}$ with $a\in O_F$. \[CM’\] The group generated by $U^+(rO_F)$ and $U^-(xO_F)$ is of finite index in $SL(2,O_E)$. Denote by $\Gamma $ the group in the proposition. 
We first find an element $\begin{pmatrix}a & b\\ c & d \end{pmatrix}$ in $\Gamma $ such that $ac\neq 0$ and $c$ lies in the smaller field $F$. To do this, we use the existence of infinitely many units in $F$. Write $x ^2=tx -n$ with $t(=tr_{E/F}(x))$ and $n(=N_{E/F}(x))$ in $F$. Assume that $t\neq 0$ since $t=0$ has already been covered in Proposition \[CM\]. Given a unit $\theta\in O_E^*$ consider the product element $g\in SL(2,E)$ given by $g= \begin{pmatrix} 1 & 0 \\ -x\theta ^{-1} & 1\end{pmatrix} \begin{pmatrix} 1 & \frac{\theta -1}{t}\\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ x & 1\end{pmatrix}$.\ Now, normal subgroups of higher rank arithmetic groups are again arithmetic ([@M], [@R; @1], [@R; @2]). Since $T(O_F)(=T(O_E))$ normalises the groups $U^+(rO_F)$ and $U^-(xO_F)$, it follows that to prove the arithmeticity of $\Gamma $, it is enough to prove the arithmeticity of the group generated by $\Gamma $ and $T(O_F)$. We may thus assume that $\Gamma $ contains $T(O_E)=T(O_F)$. Here the equality is up to subgroups of finite index.\ If $\theta $ is a unit such that $\theta \equiv 1$ (mod $tO_F$), then from the definition of $g$ and $\Gamma $ it is clear that $g\in \Gamma $. Write $g= \begin{pmatrix} a & b\\ c & d\end{pmatrix}$. A computation shows that $a=1+\frac{\theta -1}{t}x$, $c=\frac{1-\theta ^{-1}}{t}n$. Since $x$ and $1$ are linearly independent over $F$ ($x\notin F$), it follows that $a\neq 0$ and in fact that $a\notin F$. 
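The claimed expressions for the entries $a$ and $c$ of $g$ can be checked numerically. In the sketch below (ours, not from the paper), $x$ is taken to be a complex root of $X^2-tX+n$, with stand-in rational values of $t$, $n$ and of the unit $\theta $:

```python
import cmath

# stand-in data: x is integral in E \ F with x^2 = t*x - n, t and n in F
t, n = 3.0, 5.0
x = (t + cmath.sqrt(t * t - 4 * n)) / 2   # complex root of X^2 - tX + n
theta = 7.0                                # stand-in for the unit theta

def mat(a, b, c, d):
    return [[a, b], [c, d]]

def mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# g = [[1,0],[-x/theta,1]] * [[1,(theta-1)/t],[0,1]] * [[1,0],[x,1]]
g = mul(mul(mat(1, 0, -x / theta, 1), mat(1, (theta - 1) / t, 0, 1)),
        mat(1, 0, x, 1))

# the paper's computation: a = 1 + ((theta-1)/t)x, c = ((1-theta^{-1})/t)n
a_claim = 1 + (theta - 1) / t * x
c_claim = (1 - 1 / theta) / t * n
assert abs(g[0][0] - a_claim) < 1e-9
assert abs(g[1][0] - c_claim) < 1e-9
```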
The expression for $c$ shows that $c\neq 0$.\ The Bruhat decomposition for $g=\begin{pmatrix} a & b\\ c & d\end{pmatrix}$ is given by $$g=\begin{pmatrix} a & b\\ c & d\end{pmatrix}= \begin{pmatrix} 1 & ac^{-1}\\ 0 & 1\end{pmatrix} \begin{pmatrix} c^{-1} & 0\\ 0 & c\end{pmatrix} \begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix} \begin{pmatrix} 1 & dc^{-1}\\ 0 & 1\end{pmatrix}.$$ Thus, $\Gamma \supset {}^{\big (\begin{smallmatrix}a & b\\ c & d \end{smallmatrix}\big )} \begin{pmatrix} 1 & rO_F\\ 0 & 1\end{pmatrix}= ^{\big (\begin{smallmatrix}1 & ac^{-1}\\ 0 & 1\end{smallmatrix}\big )}\begin{pmatrix} 1 & 0 \\c^2rO_F & 1\end{pmatrix}$. Moreover $\Gamma \supset \begin{pmatrix} 1 & rO_F\\ 0 & 1\end{pmatrix}= ^{\big (\begin{smallmatrix}1 & ac^{-1}\\ 0 & 1 \end{smallmatrix}\big )} \begin{pmatrix} 1 & rO_F\\ 0 & 1\end{pmatrix}$. The group $\Delta $ generated by $\begin{pmatrix} 1 & rO_F\\ 0 & 1\end{pmatrix}$ and $\begin{pmatrix} 1 & 0\\ c^2rO_F & 1\end{pmatrix}$ contains, for some integer $r'$, $U^+(r'O_F)$ and $U^-(r'O_F)$ (since $c$ and hence $c^2$ lie in the field $F$). Hence, by Vaserstein’s Theorem ([@Va]), $\Delta $ is an arithmetic subgroup of $SL(2,O_F)$.\ In particular, $\Gamma $ contains the subgroup $^{\big (\begin{smallmatrix} 1 & ac^{-1}\\ 0 & 1\end{smallmatrix}\big )} (\theta ^{r''{{\mathbb{Z}}}})$ for some integer $r''$. By enlarging $r''$ if necessary, assume that $\theta ^{r''{{\mathbb{Z}}}}\subset \Gamma $. Thus $\Gamma $ contains the commutator group $$[\begin{pmatrix} 1 & ac^{-1}\\ 0 & 1\end{pmatrix}, \theta ^{r''{{\mathbb{Z}}}}]= \begin{pmatrix} 1 & ac^{-1}(\sum {{\mathbb{Z}}}(\theta ^{r''k}-1)) \\ 0 & 1\end{pmatrix}$$ where the sum is over all integers $k$. By the properties of the element $\theta$, the sum is a subgroup of finite index in the ring $O_F$, whence we get an integer $r_0$ such that $\Gamma \supset \begin{pmatrix} 1 & ac^{-1}r_0 O_F\\ 0 & 1\end{pmatrix}$. 
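The displayed Bruhat decomposition holds for any $g$ with $\det g=1$ and $c\neq 0$; a quick numerical check (ours):

```python
import numpy as np

def bruhat(g):
    # Bruhat factorisation of g = [[a,b],[c,d]] (det 1, c != 0) as in the text:
    # upper-unipotent * diagonal * Weyl element * upper-unipotent
    a, b, c, d = g[0, 0], g[0, 1], g[1, 0], g[1, 1]
    u1 = np.array([[1, a / c], [0, 1]])
    h = np.array([[1 / c, 0], [0, c]])
    w = np.array([[0, -1], [1, 0]])
    u2 = np.array([[1, d / c], [0, 1]])
    return u1 @ h @ w @ u2

g = np.array([[2.0, 3.0], [1.0, 2.0]])    # det = 1, c = 1
assert np.allclose(bruhat(g), g)
g2 = np.array([[5.0, 7.0], [2.0, 3.0]])   # det = 1, c = 2
assert np.allclose(bruhat(g2), g2)
```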
Since $\Gamma $ already contains $\begin{pmatrix} 1 & rO_F\\ 0 & 1\end{pmatrix}$, $a$ does not lie in $F$, and $O_E$ contains $O_F\oplus r_0ac^{-1}O_F$ for a suitable $r_0$, it follows that for a suitable integer $r_1$, the subgroup $\begin{pmatrix} 1 & r_1O_E\\ 0 & 1\end{pmatrix}$ lies in $\Gamma $. Now $\Gamma $ is obviously Zariski dense in $SL(2)$; moreover it intersects the unipotent radical $U^+$ in an arithmetic group. Hence it intersects some opposite unipotent radical also in an arithmetic group; but two such opposing unipotent arithmetic groups generate an arithmetic group ([@Va]). Therefore, $\Gamma $ is arithmetic. The group SU(2,1) ----------------- In this section, we prove results on the group $SU(2,1)$ (with respect to a quadratic extension $L/K$ of a number field $K$), analogous to those in the section on $SL_2$. These will be needed in the proof of Theorem 1, in those cases where a suitable $SU(2,1)$ embeds in $G$.\ Suppose that $E/{{\mathbb{Q}}}$ is a [**real**]{} quadratic extension, $E={{\mathbb{Q}}}(\sqrt{z})$ with $z >0$. Denote by $x\mapsto \overline x$ the action of the non-trivial element of the Galois group of $E/{{\mathbb{Q}}}$. Let $h=\begin{pmatrix}0 & 0 & 1\\0 & 1 & 0\\ 1 & 0 & 0\end{pmatrix}$. We will view $h$ as a form in three variables on $E^3$ which is hermitian with respect to this non-trivial Galois automorphism. Set $$G=SU(h)=SU(2,1)=\{g\in SL_3(E):\overline {^t g}hg=h\}.$$ Then $G$ is an algebraic group over ${{\mathbb{Q}}}$.\ Define the groups $$U^+=\{\begin{pmatrix}1 & u & -\frac{u\overline u}{2}\\ 0 & 1 & -\overline u \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix}1 & 0 & w\\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}: w+\overline w=0\},$$ $U^-=^t(U^+)$, the subgroup of $SU(2,1)$ which is an opposite of $U^+$ consisting of matrices which are transposes of those in $U^+$, and let $T$ be the diagonals in $SU(2,1)$. 
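Two basic facts used repeatedly below — that these unipotent matrices satisfy the defining relation $\overline {^t g}hg=h$, and that the commutator of two generators of $U^+$ lands in the centre (the $2\alpha $ root group) — can be verified numerically. In the sketch below (ours), ordinary complex conjugation stands in for the Galois conjugation $x\mapsto \overline x$:

```python
import numpy as np

def u_alpha(z):
    # unipotent generator of U^+ attached to the root alpha
    return np.array([[1, z, -z * np.conj(z) / 2],
                     [0, 1, -np.conj(z)],
                     [0, 0, 1]])

def u_2alpha(w):
    # central generator attached to 2*alpha; requires w + conj(w) = 0
    return np.array([[1, 0, w], [0, 1, 0], [0, 0, 1]])

h = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=complex)

def in_su21(g):
    # defining condition conj(g)^T h g = h
    return np.allclose(np.conj(g).T @ h @ g, h)

z1, z2 = 1 + 2j, 3 - 1j
assert in_su21(u_alpha(z1)) and in_su21(u_2alpha(4j))

# the commutator of two alpha-generators lies in the centre U_{2 alpha}
A, B = u_alpha(z1), u_alpha(z2)
comm = A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)
w = comm[0, 2]
assert np.allclose(comm, u_2alpha(w))
assert abs(w + np.conj(w)) < 1e-9   # w satisfies w + conj(w) = 0
```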
Then, up to subgroups of finite index, we have $T({{\mathbb{Z}}})=\{\begin{pmatrix}\theta & 0 & 0\\ 0 & \theta ^{-2} & 0 \\ 0 & 0 & \theta \end{pmatrix}: \theta \in O_E^*\}$. Note that for a unit $\theta \in O_E^*$, we have $\theta \overline \theta =\pm 1$.\ Suppose that $F/{{\mathbb{Q}}}$ is imaginary quadratic, $t\in O_F\setminus {{\mathbb{Z}}}$ and [**define**]{} the group $U^+(t{{\mathbb{Z}}})$ as the one generated by the matrices $\begin{pmatrix}1 & 0 & tx\sqrt{z} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$, and $\begin{pmatrix}1 & tux & -\frac{t^2 x^2 u\overline u}{2}\\ 0 & 1 & -t\overline {u} x\\ 0 & 0 & 1 \end{pmatrix}$ with $x\in {{\mathbb{Z}}}$ and $u\in O_E$. Denote by $U_{2\alpha}$ the root group corresponding to the root $2\alpha$, where $\alpha $ is the simple root for ${\bf G}_m (\subset T)$ occurring in $Lie U^+$. Here, the inclusion of ${\bf G}_m$ in $T$ is given by the map $x\mapsto \begin{pmatrix} x & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & x^{-1}\end{pmatrix}$. Note that the commutator $[U^+(t{{\mathbb{Z}}}), U^+({{\mathbb{Z}}})]$ is $U_{2\alpha }(t{{\mathbb{Z}}})\subset U^+(t{{\mathbb{Z}}})$. Hence $U^+({{\mathbb{Z}}})$ normalises $U^+(t{{\mathbb{Z}}})$. Note moreover that $U^+(t{{\mathbb{Z}}})$ contains the subgroup $U_{2\alpha}(t^2 {{\mathbb{Z}}}+t{{\mathbb{Z}}})$; now, the elements $t$ and $t^2$ are linearly independent over ${{\mathbb{Q}}}$, hence $t{{\mathbb{Z}}}+t^2{{\mathbb{Z}}}$ contains $r{{\mathbb{Z}}}$ for some integer $r>0$.\ If $\Gamma \subset G(O_F)$ is such that for some $r\geq 1$, the group $\Gamma $ contains the group generated by $U^+(rt{{\mathbb{Z}}})$ and $U^-(r{{\mathbb{Z}}})$, then $\Gamma $ is of finite index in $G(O_F)$. By the last remark in the paragraph preceding the proposition, there exists an integer, which we again denote by $r$, such that the group $\Gamma $ contains $U_{2\alpha}(r{{\mathbb{Z}}})$ and $U^-(r{{\mathbb{Z}}})$. 
Thus, by [@V] (note that ${{\mathbb{R}}}$-rank$(G)=2$ since $E/{{\mathbb{Q}}}$ is real quadratic and $G({{\mathbb{R}}})=SL(3,{{\mathbb{R}}})$), $\Gamma $ contains a subgroup of $SU(2,1)({{\mathbb{Z}}})$ of finite index. Therefore, $\Gamma $ contains the group generated by $U^+(rt{{\mathbb{Z}}})$ and $U^+(r{{\mathbb{Z}}})$ for some integer $r$. Clearly the group generated contains $U^+(r'O_F)$ for some integer $r'$ (since $F/{{\mathbb{Q}}}$ is quadratic and $t$ and $1$ are linearly independent over ${{\mathbb{Q}}}$). Therefore, by [@V] again, we get: $\Gamma $ is an arithmetic subgroup of $G(O_F)$. We now prove a slightly stronger version of the foregoing proposition. \[SU(2,1)\] Suppose that $E$ and $F$ are as before, $E={{\mathbb{Q}}}(\sqrt{z})$ and $F={{\mathbb{Q}}}(\sqrt{t})$. Let $\Gamma \subset G(O_F)$ be such that for some integer $r$, $\Gamma $ contains the groups $U^-(r{{\mathbb{Z}}})$ and $U_{2\alpha}(rt{{\mathbb{Z}}})$. Then, $\Gamma $ is of finite index in $G(O_F)$. Consider the map $f:SL(2){\rightarrow}SU(2,1)$ given by $\begin{pmatrix} a & b\\ c & d\end{pmatrix}\mapsto \begin{pmatrix} a & 0 & b\sqrt{z}\\0 & 1 & 0\\\frac{c}{\sqrt{z}} & 0 & d\end{pmatrix}$. The map $f$ is defined over ${{\mathbb{Q}}}$, takes the upper triangular matrices with 1s on the diagonal to the group $U_{2\alpha}$ and takes the Weyl group element $w$ into the $3\times 3$ matrix $w'$ which has non-zero entries on the anti-diagonal and zeros elsewhere. Under conjugation action by the element $f(h)$ with $h=\begin{pmatrix} a & 0\\0 & a^{-1}\end{pmatrix}$, the group $U^+(r{{\mathbb{Z}}})$ is taken into $U^+(ra{{\mathbb{Z}}})$. Under conjugation by $w'$, $U^{-}$ is taken into $U^+$ and vice versa.\ Write the Bruhat decomposition of $u_+=\begin{pmatrix} 1 & rt\\0 & 1\end{pmatrix}$, with respect to the lower triangular group. We get $u_+=v_1^{-}h_1wu_1^{-}$. Here, $h_1=\begin{pmatrix}-rt & 0\\0 & -\frac{1}{rt}\end{pmatrix}$. 
If $r$ is suitably large, then $\Gamma $ contains $u_+$ by assumption. To prove arithmeticity, we may assume (see the proof of Proposition \[CM’\]) that $\Gamma \supset T(r{{\mathbb{Z}}})$. Then, $$\Gamma \supset ^{u_+}(U^-(r{{\mathbb{Z}}}))\supset ^{v_1^-}(U_{\alpha}(rt{{\mathbb{Z}}}))\supset ^{v_1^-}(U_{2\alpha}(r^2t^2{{\mathbb{Z}}})).$$ The last inclusion follows by taking commutators of elements of $U_{\alpha}(rt{{\mathbb{Z}}})$ where $U_{\alpha}(rt{{\mathbb{Z}}})$ is the group generated by the elements $\begin{pmatrix} 1 & rt & -\frac{r^2t^2}{2}\\0 & 1 & -rt\\ 0 & 0 & 1 \end{pmatrix}$. Note that $t^2\in {{\mathbb{Q}}}$ by assumption. Hence $\Gamma \supset ^{v_1^-}(U_{2\alpha }(r'{{\mathbb{Z}}}))$ for some integer $r'$.\ Define $U_{-\alpha}(rt{{\mathbb{Z}}})$ similarly to the above (e.g. as the transpose of $U_{\alpha}$). Since $v_1^-$ centralises all of $U^-$, we obtain $\Gamma \supset U_{-\alpha}(r{{\mathbb{Z}}})\supset ^{v_1^-}(U_{-\alpha}(r{{\mathbb{Z}}}))$.\ The conclusions of the last two paragraphs and [@V] show that there exists a subgroup $\Delta $ of finite index in $SU(2,1)({{\mathbb{Z}}})$ such that $\Gamma \supset ^{v_1^-}(\Delta)$. In particular, for some integer $r'$, the group $^{v_1^-}(\Gamma )$ contains [*both*]{} the groups $U_{\alpha}(r't{{\mathbb{Z}}})$ and $U_{\alpha }(r{{\mathbb{Z}}})$. Consequently, it contains $U^+(r'O_F)$, a subgroup of finite index in the integral points of the unipotent radical of a minimal parabolic subgroup of $SU(2,1)$ over $F$ (note that up to subgroups of finite index, $t{{\mathbb{Z}}}+{{\mathbb{Z}}}=O_F$). Note also that the real rank of $SU(2,1)(F\otimes {{\mathbb{R}}})$ is at least two. Now, $\Gamma $ is clearly Zariski dense in $SU(2,1)$ regarded as a group over $F$. Therefore, by [@V], $\Gamma$ is arithmetic. Criteria for Groups of Rank One over Number Fields -------------------------------------------------- Suppose that $K$ is a number field. 
Let $G$ be an absolutely almost simple algebraic group with $K$-rank ($G$)$\geq 1$. Let $S\simeq {\bf G}_m$ be a maximal $K$-split torus in $G$, $P$ a parabolic subgroup containing $S$, and $U^+$ the unipotent radical of $P$. Let $M\subset P$ be the centraliser of $S$ in $G$. Let $M_0$ be the connected component of identity of the Zariski closure of $M(O_K)$ in $M$. Write ${\mathfrak g}$ for the Lie algebra of $G$. We have the root space decomposition ${\mathfrak g}={\mathfrak g}_0\oplus \bigoplus _{\alpha \neq 0}{\mathfrak g}_{\alpha}$ for the adjoint action of $S$, where $Lie\ U^+=\bigoplus _{\alpha >0}{\mathfrak g}_{\alpha}$. Denote by $log :U^+{\rightarrow}{\mathfrak u}$ the log mapping on the unipotent group $U^+$. It is an isomorphism of $K$-varieties (not of groups in general). Define similarly $U^-$ to be the unipotent $K$-group with Lie algebra ${\mathfrak u}^-=\oplus _{\alpha >0}{\mathfrak g}_{-\alpha}$. This is the “opposite” unipotent group. There exists an element $w\in N(S)/Z(S)$ in the Weyl group of $G$ ($N(S)$ and $Z(S)$ being the normaliser and the centraliser of $S$ in $G$), which conjugates $U^+$ into $U^-$. Further, the map $(u,m,v)\mapsto umwv=g$ maps $U^+\times M\times U^+$ isomorphically onto a Zariski open subset of $G$.\ The following technical proposition will be used repeatedly in the sequel. \[technical\] Suppose that $K$ is any number field. Let $G$ be of $K$-rank $\geq 1$. Let $\Gamma \subset G(O_K)$ be Zariski dense, and assume that ${{\mathbb{R}}}$-rank ($G_\infty$) $\geq 2$. Suppose that there exists an element $m_0\in M(O_K)$ of infinite order such that 1) all its eigenvalues are of infinite order in its action on $Lie U^+$, 2) if $g=umwv\in \Gamma $, then there exists an integer $r\neq 0$ such that $^u(m_0^r)\in \Gamma $. Then, $\Gamma $ is arithmetic. Let $V$ be the Zariski closure of the intersection of $U^+$ with $\Gamma$. View $U^+$ as a ${{\mathbb{Q}}}$-group, by restriction of scalars. 
By assumption, for a Zariski dense set of elements $u\in U^+({{\mathbb{Q}}})$, there exists an integer $r=r(u)$ such that the commutator $[m_0^r,u]$ lies in $\Gamma $. If $\frak v$ denotes the ${{\mathbb{Q}}}$-Lie algebra of $V$, then this means that $\frak v$ contains vectors of the form $(Ad(m_0^r)-1)(log u)$ with $log u$ spanning the ${{\mathbb{Q}}}$-vector space ${\mathfrak u}$. By fixing finitely many $u$ which give a basis of ${\mathfrak u}$ (as a ${{\mathbb{Q}}}$-vector space), we can find a common integer $r$ such that $(Ad (m_0^r)-1)log u\in \frak v$ for all $u$; in other words, $(Ad(m_0^r)-1)({\mathfrak u})\subset \frak v$. The assumption on $m_0$ now implies that $\frak v={\mathfrak u}$. Hence $V=U^+$, which means that $\Gamma \cap U^+\subset U^+(O_K)$ is Zariski dense in $U^+$. By [@R; @5], Theorem (2.1), it follows that $\Gamma \cap U^+(O_K)$ is of finite index in $U^+(O_K)$.\ Similarly, $\Gamma $ intersects $U^-(O_K)$ in an arithmetic group. Hence by [@R; @4] and [@V], $\Gamma $ is arithmetic. [**From now on, in this section, we will assume that $K$-rank of $G$ is ONE**]{}. Consequently, ${\mathfrak u}$ has the root space decomposition ${\mathfrak u}={\mathfrak g}_{\alpha}\oplus {\mathfrak g}_{2\alpha}$. Assume that ${\mathfrak g}_{2\alpha}\neq 0$. Denote by $U_{2\alpha}$ the subgroup of $G$ whose Lie algebra is ${\mathfrak g}_{2\alpha}$. This is an algebraic subgroup defined over $K$.\ It is easy to see that the group $G_0$ whose Lie algebra is generated by ${\mathfrak g}_{\pm 2\alpha}$ is necessarily semi-simple and $K$-simple. Moreover, it is immediate that $S\subset G_0$. Note the Bruhat decomposition of $G$: $G=P\cup UwP$ where $w\in N(S)$ is the Weyl group element such that conjugation by $w$ takes $U^+$ into $U^-$ and $U_{2\alpha}$ into $U_{-2\alpha}$. It is clear that $UwP=UwMU$ is a Zariski open subset of $G$. \[highestroot\] Suppose that $K$ has infinitely many units, and that $K$-rank ($G$) $=1$. 
Suppose that $\Gamma \subset G(O_K)$ is a Zariski dense subgroup such that $\Gamma\supset U_{2\alpha}(rO_K)$ for some integer $r>0$. Suppose that $rank$-$(G_{\infty})=\sum _{v\in S_{\infty}}K_v$-$rank(G)\geq 2$. Then, $\Gamma $ is of finite index in $G(O_K)$. Let $g=uwmv$ be an element in $\Gamma \cap UwP$. We obtain $\Gamma \supset <^g(U_{2\alpha}(rO_K)),~U_{2\alpha}(rO_K)>$. The Bruhat decomposition for $g$ and the fact that $u$ centralises $U_{2\alpha}$ show that $\Gamma \supset ^u<U_{-2\alpha}(r'O_K), U_{2\alpha}(r'O_K)>$ for some integer $r'$. The group $G_0$ is also of higher real rank, since $S\subset G_0$ and $K$ has infinitely many units. Therefore by [@V], the group generated by $U_{\pm 2\alpha }(r'O_K)$ is of finite index in $G_0(O_K)$ and in particular, contains $S(r''O_K)$ for some $r''>0$.\ We have thus seen that $\Gamma \supset ^u(S(r''O_K))$ for some integer $r''$. Since the $K$-rank of $G$ is one, the weights of $S$ acting on ${\mathfrak u}$ are $\alpha $ and $2\alpha$. Since $S(r''O_K)$ is infinite, there are elements in $S(r''O_K)$ none of whose eigenvalues acting on ${\mathfrak u}$ is one. Therefore, Proposition \[technical\] implies that $\Gamma $ is arithmetic. We continue with the notation of this subsection. There exists an integer $N$ such that the units $\theta $ of the number field $K$ which are congruent to $1$ modulo $N$ form a torsion-free abelian group. Let $F$ be the field generated by these units. There exists an element $\theta \in O_K^*$ such that for all integers $r>0$, the field ${{\mathbb{Q}}}[\theta ^r]=F$ (see Lemma \[exist’\]). Moreover, $S(O_F)$ is of finite index in $S(O_K)$. We also have, 1) $F=K$ if $K$ is not CM; 2) otherwise, $F$ is totally real and $K$ is a totally imaginary quadratic extension of $F$.\ Given an element $u_+\in U_{2\alpha}(O_K)\setminus \{1\}$, consider the subgroup $V^+$ generated by the conjugates $^{\theta ^j}(u_+)$ of $u_+$, as $j$ varies over all integers.
By Lemma \[exist\], there exists an integer $r$ such that $V^+\supset u_+^{rO_F}\stackrel{def}{=} Exp(rO_F log (u_+))$. Here $Exp$ is the exponential map from $Lie (U^+)$ onto $U^+$ and $log$ is its inverse map. By the Jacobson-Morozov Theorem, there exists a homomorphism $f:SL(2){\rightarrow}G$ defined over $K$ such that $f\begin{pmatrix} 1 & 1\\ 0 & 1\end{pmatrix}=u_+$. The Bruhat decomposition shows that the image of the group of upper triangular matrices lies in $P$. Since all maximal $K$-split tori in $P$ are conjugate to $S$ by elements of $P(K)$, it follows that there exists a $p\in P(K)$ such that $pf(D)p^{-1}=S$, where $D$ is the group of diagonals in $SL(2)$. Write $p=um$ with $u\in U$ and $m\in M$. Now, $M$ centralises $S$ and $u$ centralises $u_+$ (since $u_+$ lies in $U_{2\alpha}$). Therefore, after replacing $f$ by the map $f':x\mapsto uf(x)u^{-1}$, we see that $f'(D)=S$ and $f'\begin{pmatrix} 1 & 1\\ 0 & 1\end{pmatrix}=u_+$. We denote $f'$ by $f$ again, to avoid too much notation.\ \[2alpha\] Suppose that $K$ has infinitely many units, and that $G$ is an absolutely almost simple group over $K$ of $K$-rank one. Suppose that ${\mathfrak g}_{2\alpha}$ is one dimensional over $K$. Then every arithmetic subgroup of $G(O_K)$ is virtually three-generated. Let $u_+\in U_{2\alpha}(O_K)$ and $\theta \in S(O_K)$ be as above. Suppose that $\gamma \in G(O_K)$ is in general position with respect to $u_+$. Then, for every $r\geq 1$, the group $\Gamma =<u_+^r,\theta ^r,\gamma ^r>$ is Zariski dense. It is enough to prove that $\Gamma$ is arithmetic.\ By replacing $r$ by a bigger integer if necessary, and using the fact that ${{\mathbb{Z}}}[\theta ^r]$ has finite index in $O_F$, we see that $f\begin{pmatrix} 1 & rO_F\\ 0 & 1\end{pmatrix}=u_+^{rO_F}\subset \Gamma $. Write $w$ for the image of $f\begin{pmatrix} 0 & 1\\ -1 &0 \end{pmatrix}=f(w_0)$. Then, $w$ takes $U^+$ into $U^-$ under conjugation. Write $u_-$ for $wu_+w^{-1}$.
Now, $M(K)$ normalises $U_{2\alpha}$ and the latter is one dimensional. Therefore, $^m(u_-^{rO_F})=f\begin{pmatrix} 1 & 0\\ \xi rO_F & 1\end{pmatrix}\stackrel{def}{=}u_-^{\xi rO_F}$, for some element $\xi $ of the larger field $K$. If $\xi \notin F$, then by Proposition \[CM’\], the group generated by $u_+^{rO_F}$ and $u_-^{\xi rO_F}$ contains $f(\Delta)$ for some subgroup $\Delta$ of finite index in $SL(2,O_K)$.\ Pick an element $g\in \Gamma$ of the form $g=uwmv$ with $u,v\in U^+$ and $m\in M$. Then, $$\Gamma \supset <^g(u_+^{rO_F}),u_+^{rO_F}>\supset ^u <^m(u_-^{rO_F}), u_+^{rO_F}>=^u <u_-^{\xi rO_F},u_+^{rO_F}>$$ If $g$ is “generic”, then $m$ is sufficiently generic that $\xi \notin F$ (otherwise, $^m(u_-)$ would always be $F$-rational and hence lie in a smaller algebraic group, which genericity rules out for all $m$). Then, by the conclusion of the last paragraph, $\Gamma \supset ^uf(S(rO_F))$ for some integer $r$. Since the $u$'s run through a Zariski dense subset of $U$, Proposition \[technical\] implies that $\Gamma $ is arithmetic. Some General Results ==================== Notation -------- Suppose $G$ is a semi-simple linear algebraic group which is absolutely almost simple and defined over a number field $K$, with $K$-rank($G$)$\geq 1$ and $rank$- $(G_{\infty})\stackrel{def}{=}\sum _{v\in S_\infty}K_v$-rank $(G) \geq 2$ (the last condition says that $G(O_K)$ is a “higher rank lattice”). Let $P\subset G$ be a proper parabolic $K$-subgroup, $U$ its unipotent radical, $S\subset P$ a maximal $K$-split torus in $G$, and $\Phi ^+(S,P)$ the roots of $S$ occurring in the Lie algebra $ {\mathfrak u}$ of $U$. Let $\Phi ^-$ be the negative of the roots in $\Phi ^+(S,P)$, and ${\mathfrak u}^-=\bigoplus _{\alpha \in \Phi ^+(S,P)}{\mathfrak g}_{-\alpha}$ be the sum of root spaces with $\alpha \in \Phi ^+(S,P)$. Then, ${\mathfrak u}^-$ is the Lie algebra of a unipotent algebraic group $U^-$ defined over $K$, called the “opposite” of $U$.
Write the Levi decomposition $P=MU$ with $S\subset M$.\ In the following, we will, by restricting scalars to ${{\mathbb{Q}}}$, think of all these groups $G$, $M$, $U^{\pm}$ as algebraic groups over ${{\mathbb{Q}}}$. Thus, for example, when we say that $U^+(O_K)$ is Zariski dense in $U^+$ we mean that $U^+(O_K)$ is Zariski dense in the complex group $U^+(K\otimes {{\mathbb{C}}})=(R_{K/{{\mathbb{Q}}}}(U^+))({{\mathbb{C}}})$. With this understanding, we prove the following slight strengthening of the Borel density theorem.\ \[boreldense\] Let $G$ be a connected semi-simple $K$-simple algebraic group, and suppose that $G(O_K)$ is infinite. Then, the arithmetic group $G(O_K)$ is Zariski dense in the complex semi-simple group $G(K\otimes {{\mathbb{C}}})$. By restriction of scalars, we may assume that $K={{\mathbb{Q}}}$. Suppose that $H$ is the connected component of identity of the Zariski closure of $G({{\mathbb{Z}}})$ in $G({{\mathbb{C}}})$. Then, as $G({{\mathbb{Q}}})$ commensurates $G({{\mathbb{Z}}})$, it follows that $G({{\mathbb{Q}}})$ normalises $H$. The density of $G({{\mathbb{Q}}})$ in $G({{\mathbb{R}}})$ (weak approximation) shows that $G({{\mathbb{R}}})$ normalises $H$. Clearly, $G({{\mathbb{R}}})$ is Zariski dense in $G({{\mathbb{C}}})$; hence $G({{\mathbb{C}}})$ normalises $H$. The definition of $H$ shows that $H$ is defined over ${{\mathbb{Q}}}$. Now, the ${{\mathbb{Q}}}$-simplicity of $G$ implies (since $G({{\mathbb{Z}}})$ is infinite and hence $H$ is non-trivial) that $H=G$. The following is repeatedly used in the sequel. \[unipotent\] Let $U$ be a unipotent group over a number field $K$. Then, $U(O_K)$ is Zariski dense in $U(K\otimes {{\mathbb{C}}})$; moreover, if $\Delta \subset U(O_K)$ is a subgroup which is Zariski dense in $U(K\otimes {{\mathbb{C}}})$, then $\Delta $ is of finite index in $U(O_K)$. The proof is essentially given in Theorem (2.1) of [@R; @5] in the case $K={{\mathbb{Q}}}$; but by restriction of scalars, we may assume that $K={{\mathbb{Q}}}$.
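As a simple illustration of the density statement in Lemma \[unipotent\] (a sketch, not needed for the proof): take $U={\bf G}_a$ and $K={{\mathbb{Q}}}(\sqrt{2})$, so that

```latex
U(O_K)={{\mathbb{Z}}}\oplus {{\mathbb{Z}}}\sqrt{2}\ \hookrightarrow\
U(K\otimes {{\mathbb{C}}})\simeq {{\mathbb{C}}}^2,\qquad
a+b\sqrt{2}\ \mapsto\ (a+b\sqrt{2},\ a-b\sqrt{2}),
```

where the two coordinates correspond to the two real embeddings of $K$. The image is a lattice of full rank in ${{\mathbb{R}}}^2\subset {{\mathbb{C}}}^2$, and a polynomial vanishing on a full lattice vanishes identically; hence $U(O_K)$ is Zariski dense in $U(K\otimes {{\mathbb{C}}})$.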
Denote by $M_0$ the connected component of identity of the Zariski closure of $M(O_K)$ in $M$, and let $T_0\subset M_0$ be a maximal torus defined over $K$. The groups $M$, $M_0$ and $T_0$ are all defined over ${{\mathbb{Q}}}$ and act on the ${{\mathbb{Q}}}$-Lie algebra ${\mathfrak u}$ of $R_{K/{{\mathbb{Q}}}}U^+$ by inner conjugation in $G$. Write the eigenspace decomposition ${\mathfrak u}\otimes {{\mathbb{C}}}=\oplus_{\chi \in X^*(T_0)} {\mathfrak u}_{\chi}$ for the action of $T_0$ on the complex Lie algebra ${\mathfrak u}\otimes {{\mathbb{C}}}$. \[multone\] Suppose that each of the spaces ${\mathfrak u}_{\chi}$ is one dimensional. Then every arithmetic subgroup of $G(O_K)$ is virtually three generated. Let $\mathcal U$ be the set of pairs $(m,v)\in M_0\times {\mathfrak u}$ such that the span $\sum _{k\in {{\mathbb{Z}}}}{{\mathbb{C}}}(^{m^k}(v))$ is all of ${\mathfrak u}$. Then, $\mathcal U$ is a Zariski open subset of $M_0\times {\mathfrak u}$. For, the condition says that if $dim {\mathfrak u}=l$, then there exist integers $k_1,k_2,\cdots,k_l$ such that the wedge product $$^{m^{k_1}}(v)\wedge \cdots \wedge ^{m^{k_l}}(v)\neq 0,$$ which is a Zariski open condition.\ Let $\Gamma _0$ be an arithmetic subgroup of $G(K)$. Then, the intersection $\Gamma _0\cap U$ is Zariski dense in $R_{K/{{\mathbb{Q}}}}U$. Now, the map $log: U{\rightarrow}{\mathfrak u}$ is an isomorphism of varieties over ${{\mathbb{Q}}}$. By assumption on $M_0$, the group $\Gamma _0\cap M_0$ is Zariski dense in $M_0$ (the Zariski closure of $\Gamma _0\cap M_0$ is of finite index in $M_0$ since $\Gamma _0$ is of finite index in $M(O_K)$, and $M_0$ is connected). By the foregoing, we thus get elements $m\in \Gamma _0\cap M_0$ and $u\in U\cap \Gamma _0$ such that $(m,log u)\in \mathcal U$. This means that the ${{\mathbb{Z}}}$-span of $^{m^k}(log u)$, as $k$ varies, is Zariski dense in ${\mathfrak u}$.
Therefore, the group $U_1$ generated by the elements $^{m^k}(u)$ with $k\in {{\mathbb{Z}}}$ is a Zariski dense subgroup of $U\cap \Gamma _0$. Hence, by Lemma \[unipotent\], $U_1$ is of finite index in $U\cap \Gamma _0$.\ Similarly, we can find an element $u^-\in U^-\cap \Gamma _0$ such that the group $U_1^-$ generated by the conjugates $^{m^k}(u^-)$ with $k\in {{\mathbb{Z}}}$, is of finite index in $U^-\cap \Gamma _0$. Set $$\Gamma =<m, u,u^->\subset \Gamma _0.$$ Now, $\Gamma $ contains $<U_1,U_1^->$. By [@R; @4] and [@V], the latter group is of finite index in $\Gamma _0$, hence so is $\Gamma $. But $\Gamma $ is three generated by construction. The criterion of Proposition \[multone\] depends on the group $M_0$ (which is the connected component of identity of the Zariski closure of $M(O_K)$) and hence on the $K$-structure of the group. But this dependence is a rather mild one. However, the verification that the conditions of Proposition \[multone\] are satisfied is somewhat complicated, and is done in the next few sections by using the Tits classification of absolutely simple groups over number fields. The criterion works directly when $K$-rank ($G$)$\geq 3$ and, somewhat surprisingly, for groups of exceptional type, thanks to the analysis of the representations of $M$ occurring in the Lie algebra ${\mathfrak u}$ carried out by Langlands and Shahidi (see [@L] and [@Sh]).\ However, there are some classical groups of $K$-rank $\leq 2$ (notably, if $G$ is of classical type A, C or D but is not of Chevalley type over the number field), for which the criterion of Proposition \[multone\] fails. To handle these cases, we prove below some more lemmata of a general nature. Notation -------- Let $F$ be a field of characteristic zero, and $G$ an absolutely simple algebraic group over $F$. Let $x\in G(F)$ be an element of infinite order. Fix a maximal torus $T\subset G$ defined over $F$, and let $\Phi$ be the roots of $T$ occurring in the Lie algebra ${\mathfrak g}$ of $G$.
We have the root space decomposition ${\mathfrak g}={\mathfrak t}\oplus \bigoplus _{\alpha \in \Phi}{\mathfrak g}_{\alpha}$ with ${\mathfrak t}$ the Lie algebra of $T$. Now $T(F)$ is Zariski dense in $T$, hence there exists a Zariski open set $\mathcal V \subset T$ such that for all $v\in T(F)\cap \mathcal V$, the values $\alpha (v) ~(\alpha \in \Phi)$ are all different and distinct from $1$. Fix $y\in T(F)\cap \mathcal V$. \[zariskidense\] There is a Zariski open set $\mathcal U$ of $G$ such that the group generated by $x$ and $gyg^{-1}$ is Zariski dense in $G$ for all $g\in \mathcal U$. Let $H$ be a proper connected Zariski closed subgroup of $G$ containing (or normalised by) the element $y$. Then, the Lie algebra ${\mathfrak h}$ splits into eigenspaces for the action of $y$. Since the values $\alpha (y)$ are all different (and distinct from $1$), it follows that ${\mathfrak h}=({\mathfrak t}\cap {\mathfrak h})\oplus \bigoplus _{\alpha}({\mathfrak g}_{\alpha}\cap {\mathfrak h})$. Moreover, ${\mathfrak g}_{\alpha}={\mathfrak h}\cap {\mathfrak g}_{\alpha}$ if the latter is non-zero. Therefore, there exists a proper connected subgroup $H'$ containing $T$ which also contains $H$ (e.g. the one with Lie algebra ${\mathfrak t}\oplus \bigoplus \{{\mathfrak g}_{\alpha}: {\mathfrak h}\cap {\mathfrak g}_{\alpha} \neq 0\}$).\ The collection of connected subgroups of $G$ containing the maximal torus $T$ is finite since they correspond to certain subsets of the finite set of roots $\Phi$. Let $H_1,\cdots, H_n$ be the set of [**proper**]{} connected subgroups of $G$ containing $T$. By replacing $x$ by a power of it, we may assume that the Zariski closure $Z$ of the group generated by $x$ is connected. Since $G$ is simple, the group $<gZg^{-1}: g\in G>$ is all of $G$. Hence, for each $\mu$, the set $Z_{\mu}=\{g\in G: gZg^{-1}\subset H_{\mu}\}$ is a [**proper**]{} Zariski closed set, whence its complement $U_{\mu}$ is open.
Therefore, $\mathcal U=\cap _{1\leq \mu \leq n} U_{\mu}$ is also Zariski open.\ Let $g\in \mathcal U$ and $H$ be the connected component of the Zariski closure of the group generated by $x$ and $gyg^{-1}$. If $H\neq G$, then by the first paragraph of the proof, there exists a proper connected subgroup $H'$ containing $H$ and the torus $T$. By the foregoing paragraph, $H'$ must be one of the $H_{\mu}$, whence $g\notin U_{\mu}$, and so $g\notin \mathcal U$, a contradiction. Therefore, $H=G$ and the lemma is proved. Notation -------- Suppose that $G$ is an absolutely simple algebraic group over a number field $K$, with $K$-rank ($G$)$\geq 2$. Let $S$ be a maximal split torus, and $\Phi (G,S)$ the root system. Let $\Phi ^+$ be a system of positive roots, ${\mathfrak g}$ the Lie algebra of $G$. Let $U_0$ be the subgroup of $G$ whose Lie algebra is $\oplus _{\alpha >0}{\mathfrak g}_{\alpha}$, and $P_0$ the normaliser of $U_0$ in $G$; then $P_0=Z(S)U_0$ where $Z(S)$ is the centraliser of $S$ in $G$. Moreover, $P_0$ is a minimal parabolic $K$-subgroup of $G$.\ Let $\alpha \in \Phi ^+$ be the highest root and $\beta >0$ a “second-highest” root. Then, $\gamma =\alpha -\beta$ is a simple root. Let $U_\alpha$ and $U_\beta$ be the root groups corresponding to $\alpha $ and $\beta$. \[secondhighest\] Let $\Gamma \subset G(O_K)$ be a Zariski dense subgroup. Suppose that there exists an integer $r>0$ such that $\Gamma \supset U_{\alpha}(rO_K)$ and $\Gamma \supset U_{\beta }(rO_K)$. Then, $\Gamma $ has finite index in $G(O_K)$. This is proved in [@V2]. We sketch the proof, since we will use this repeatedly in the examples. If $w$ denotes the longest Weyl group element, then the double coset $P_0wU_0$ is a Zariski open subset of $G$. Hence its intersection with $\Gamma $ is Zariski dense in $G$. Fix an element $g_0=p_0wu_0\in \Gamma \cap P_0wU_0$ and consider an arbitrary element $g=pwu\in P_0wU_0\cap \Gamma$.\ The subgroup $V=U_{\alpha }U_{\beta}$ is normalised by all of $P_0$.
By assumption, there exists an integer $r$ such that $\Gamma \supset V(rO_K)$. Hence $\Gamma$ contains the group $<^g(V(rO_K)), V(rO_K)>$. By the Bruhat decomposition of $g$, and the fact that $V$ is normalised by $P_0$, we find an integer $r'$ such that $^g(V(rO_K))\supset ^p(V^-(r'O_K))$ where $V^-=U_{-\alpha}U_{w(\beta)}$ is the conjugate of $V$ by $w$. Note that $-w(\beta)$ is again a second highest root. Write $\gamma =\alpha +w(\beta)$. Then $\gamma $ is a simple root. Moreover, it can be proved that the commutator subgroup $[U_{\alpha},U_{w(\beta)}]$ is not trivial and is all of $U_\gamma$. Therefore, we get $^p(U_{\gamma} (rO_K))\subset \Gamma $ for a Zariski dense set of $p$'s ($r$ depends on the element $p$). It can be proved that the group generated by the $^p(U_{\gamma})$ is all of the unipotent radical $U_1$ of the maximal parabolic subgroup corresponding to the simple root $\gamma $. Consequently, for some integer $r$, $U_1(rO_K)\subset \Gamma $, and by [@V2], $\Gamma $ is arithmetic. We will now deduce a corollary to Lemma \[zariskidense\] and Proposition \[secondhighest\]. Under the notation and assumptions of this subsection, suppose that every arithmetic subgroup $\Gamma _0$ of $G(O_K)$ contains a 2-generated subgroup $<a,b>$ which [**contains**]{} a group of the form $$U_\alpha (rO_K)U_\beta (rO_K)$$ for some second highest root $\beta$. Then, every arithmetic subgroup of $G(O_K)$ is virtually three-generated. Given $a,b\in \Gamma _0$ such that $<a,b>\supset (U_\alpha U_\beta)(rO_K)$, Lemma \[zariskidense\] implies the existence of an element $c\in \Gamma _0$ such that the group $\Gamma =<a,b,c>$ generated by $a,b,c$ is Zariski dense in $G$ ($\Gamma _0$ itself is Zariski dense in $G$ by the Borel density theorem). Then, by Proposition \[secondhighest\], $\Gamma $ is of finite index in $\Gamma _0$. Groups of $K$-rank $\geq 2$ =========================== In this section, we verify that all arithmetic groups of $K$-rank at least two are virtually three-generated.
The proof proceeds case by case, using the Tits classification of algebraic groups over a number field $K$. In most cases, we check that the hypotheses of the criterion of Proposition \[multone\] are satisfied. In the sequel, $G$ is an absolutely almost simple group defined over a number field $K$, with $K$-rank ($G$) $\geq 2$. The degree of $K/{{\mathbb{Q}}}$ is denoted $k$.\ The classical groups over ${{\mathbb{C}}}$ come equipped with a natural (irreducible) representation, which we refer to as the [**standard**]{} representation, and denote it $St$.\ Groups of Inner Type A ---------------------- In this subsection, we consider all groups which are inner twists of $SL(n)$ over $K$. By [@T2], the only such groups are $SL(n)$ over number fields or $SL(m)$ over central division algebras over number fields. ### SL(n) over number fields $G$ is $SL(n)$ over the number field $K$. The rank assumption means that $n\geq 3$. Take $P$ to be the parabolic subgroup of $SL(n)$ consisting of matrices of the form $\begin{pmatrix} g & x \\ 0 & \det (g)^{-1} \end{pmatrix}$, where $g\in GL_{n-1}$, $x=\begin{pmatrix} x_1\\ x_2\\ \vdots \\x_{n-1}\end{pmatrix}$ is a column vector of size $n-1$, and $0$ is the $1\times (n-1)$ matrix whose entries are all zero. The Levi part $M$ of $P$ may be taken to be $GL(n-1)=\{\begin{pmatrix} g & 0 \\ 0 & \det (g)^{-1}\end{pmatrix}:g\in GL_{n-1}\}$. Recall that $M_0$ is the connected component of identity of the Zariski closure of $M(O_K)$. Hence $M_0$ contains the subgroup $H=SL(n-1)$. Take $T_0$ to be the diagonals in $SL(n-1)$. The unipotent radical of $P$ is the group of matrices $\begin{pmatrix} 1 & x\\ 0 & 1 \end{pmatrix}$ with $x$ a column vector as before. As a representation of $GL(n-1)({{\mathbb{C}}})$ (and therefore of $H({{\mathbb{C}}})=SL(n-1)({{\mathbb{C}}})$), the Lie algebra ${\mathfrak u}$ is nothing but $St\otimes \det$, the standard representation twisted by the determinant.
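For instance, in the smallest case $n=3$ (a sketch for illustration, with $g\in GL_2$ and $x\in K^2$), conjugation by the Levi realises the twist by the determinant explicitly:

```latex
\begin{pmatrix} g & 0\\ 0 & (\det g)^{-1}\end{pmatrix}
\begin{pmatrix} 1 & x\\ 0 & 1\end{pmatrix}
\begin{pmatrix} g & 0\\ 0 & (\det g)^{-1}\end{pmatrix}^{-1}
=\begin{pmatrix} 1 & (\det g)\,gx\\ 0 & 1\end{pmatrix}.
```

For $g=\mathrm{diag}(t,t^{-1})\in T_0$ the determinant factor is trivial, and the weights of $T_0$ on ${\mathfrak u}\simeq K^2$ are $t$ and $t^{-1}$, which are distinct.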
Restricted to the torus $T_0$, ${\mathfrak u}$ is thus the standard representation, and is hence multiplicity free. By Proposition \[multone\], it follows that every arithmetic subgroup of $G(O_K)$ is virtually three-generated. ### SL(m) over division algebras $G =SL_m(D)$ where $D$ is a central division algebra over the number field $K$, of degree $d\geq 2$. The rank assumption means that $m\geq 3$. Consider the central simple algebra $D\otimes _{{{\mathbb{Q}}}}{{\mathbb{R}}}$, denoted $D\otimes {{\mathbb{R}}}$ for short. Then, $D\otimes {{\mathbb{R}}}$ is a product of copies of $M_d({{\mathbb{C}}})$, $M_{d/2}({\bf H})$, and $M_d({{\mathbb{R}}})$, where ${\bf H}$ is the division algebra of Hamiltonian quaternions. We consider four cases.\ [*Case 1. $D\otimes {{\mathbb{R}}}\neq {\bf H}\times \cdots \times {\bf H}$*]{}. Then, $SL_1(D\otimes {{\mathbb{R}}})$ is a non-compact semi-simple Lie group with either $SL_d({{\mathbb{C}}})$ or $SL_d({{\mathbb{R}}})$ or $SL_{d/2}({\bf H})$ (the last can happen only if $d\geq 4$ is even) as a non-compact factor. Then, the Zariski closure of the arithmetic subgroup $SL_1(O_D)$ of $SL_1(D)$ (for some order $O_D$ of $D$) is the noncompact group $SL_1(D\otimes {{\mathbb{R}}})$.\ Take $P$ to be the parabolic subgroup (with the obvious notation) $P=\begin{pmatrix} GL_1(D) & * \\ 0 & GL_{m-1}(D)\end{pmatrix}$, with unipotent radical $U=\begin{pmatrix} 1 & M_{1\times (m-1)}(D) \\ 0 & 1\end{pmatrix}$ where $M_{1\times (m-1)}(D)$ denotes the space of $1\times (m-1)$ matrices with entries in the division algebra $D$. The group $M_0$ obviously contains (from the observation in the last paragraph) the group $H=\begin{pmatrix}SL_1(D) & 0 \\ 0 & SL_{m-1}(D)\end{pmatrix}$, with $H(K\otimes {{\mathbb{C}}})=[SL_d({{\mathbb{C}}})\times SL_{d(m-1)}({{\mathbb{C}}})]^k$. Let $T_0$ be the product of the diagonals in each copy of $SL_d\times SL_{d(m-1)}$.
As a representation of $H$, the Lie algebra ${\mathfrak u}$ of $U$ is the direct sum $\oplus\, {{\mathbb{C}}}^d\otimes ({{\mathbb{C}}}^{(m-1)d})^*$, where the sum is over each copy of $SL_d\times SL_{(m-1)d}$. Here ${{\mathbb{C}}}^d$ is the standard representation of $SL_d$ and $*$ denotes its dual. It is then clear that as a representation of the product diagonal torus $T_0$, ${\mathfrak u}$ is multiplicity free. Hence the criterion of Proposition \[multone\] applies. Every arithmetic subgroup of $G(O_K)$ is virtually three-generated.\ [*Case 2. $D\otimes {{\mathbb{R}}}={\bf H}\times \cdots \times {\bf H}$ but $m\geq 4$.*]{} This can happen only if $d=2$, and $D\otimes {{\mathbb{R}}}={\bf H}^k$. Take $P$ to be the parabolic subgroup $P=\begin{pmatrix}SL_{m-2}(D) & * \\ 0 & SL_2(D)\end{pmatrix}$ and denote its unipotent radical by $U$, with $U=\begin{pmatrix}1 & M_{(m-2)\times 2}(D) \\ 0 & 1\end{pmatrix}$. Then, as before, $M_0$ contains $H=\begin{pmatrix}SL_{m-2}(D) & 0 \\ 0 & SL_2(D)\end{pmatrix}$. Then, $H(K\otimes {{\mathbb{C}}})=[SL_{2(m-2)}({{\mathbb{C}}})\times SL_4({{\mathbb{C}}})]^k$. As a representation of $H(K\otimes {{\mathbb{C}}})$, ${\mathfrak u}$ is the $k$-fold direct sum of ${{\mathbb{C}}}^{2(m-2)}\otimes ({{\mathbb{C}}}^4)^*$. Let $T_0$ be the product of the diagonals in $H(K\otimes {{\mathbb{C}}})$. Then, it is clear that ${\mathfrak u}$ is multiplicity free as a representation of $T_0$. By Proposition \[multone\], every arithmetic subgroup of $G(O_K)$ is virtually three-generated.\ [*Case 3. $D\otimes {{\mathbb{R}}}={\bf H}\times \cdots \times {\bf H}$, $m=3$ but $k\geq 2$*]{}. In this case it turns out that the criterion of Proposition \[multone\] fails (we will not prove that it fails), so we give an ad hoc argument that every arithmetic subgroup of $SL_3(D)$ is virtually three-generated.\ Since $D\otimes {{\mathbb{R}}}={\bf H}^k$, it follows that $K$ is totally real. Since $k\geq 2$, $K$ has infinitely many units.
By Lemma \[finite\], for every subgroup $\Delta $ of finite index in $O_K^*$, ${{\mathbb{Q}}}[\Delta]=K$. By Lemma \[exist\], there exists an element $\theta \in \Delta $ such that ${{\mathbb{Q}}}[\theta ^r]=K$ for all $r\geq 1$. Consider the $3\times 3$ matrix $m= \begin{pmatrix} \theta ^{k_1} & 0 & 0 \\ 0 & \theta ^{k_2} & 0 \\ 0 & 0 & \theta ^{-k_1-k_2}\end{pmatrix}$, which lies in $SL_3(O_D)$ for some order $O_D$ in $D$.\ Consider the following matrices in $SL_3(D)$ given by $u= \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & x \\ 0 & 0 & 1\end{pmatrix}$, and $u^-= \begin{pmatrix} 1 & 0 & 0 \\ y & 1 & 0 \\ z & t & 1\end{pmatrix}$, where $x,y,z,t$ are elements of the division algebra, no two of which commute. We may assume that they lie in the order $O_D$. We will prove that for every $r>0$, the group $\Gamma =<m^r,u^r,(u^-)^r>$ generated by the $r$-th powers of $m,u,u^-$ is arithmetic. This will prove that every arithmetic subgroup of $G(O_K)$ is virtually three-generated. We use the following notation. If $i,j\leq 3$, $i\neq j$, and $w$ is an element of $O_D$, denote by $x_{ij}^{O_Kw}$ the subgroup $\{1+cwE_{ij}\}$ where $c$ runs through elements of $O_K$; $E_{ij}$ is the matrix whose $ij$-th entry is $1$ and all other entries are zero. We also write $x_{ij}^{O_Kw}\leq \Gamma $ to say that for some integer $r'$, the subgroup $x_{ij}^{r'O_Kw}$ is contained in $\Gamma $.\ For ease of notation, we replace the $r$-th powers of $m,u,u^-$ by the same letters $m,u,u^-$; this should cause no confusion. The group $<{}^{m^l}(u): l\in {{\mathbb{Z}}}>$ virtually contains the subgroups (by the choice of $\theta$; see Lemma \[exist\]) $x_{12}^{O_K}$, $x_{13}^{O_K}$ and $x_{23}^{xO_K}$. Similarly, $<{}^{m^l}(u^-):l\in {{\mathbb{Z}}}>$ virtually contains $x_{21}^{O_Ky}, x_{31}^{O_Kz}$ and $x_{32}^{O_Kt}$. Since $\Gamma $ contains all these groups, by taking commutators, we get $x_{12}^{O_Kt}=[x_{13}^{O_K},x_{32}^{tO_K}]\leq \Gamma $.
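The commutator computations in this argument all reduce to the following identity for elementary matrices, valid even when the entries $a,b\in O_D$ do not commute (only the ordered product $ab$ appears):

```latex
[\,1+aE_{13},\ 1+bE_{32}\,]
=(1+aE_{13})(1+bE_{32})(1-aE_{13})(1-bE_{32})
=1+ab\,E_{12},
```

since $E_{13}E_{32}=E_{12}$ and all other products of these matrix units vanish. With $a\in O_K$ (which is central in $D$) and $b\in tO_K$, the entries $ab$ fill out $O_Kt$, which is exactly the inclusion $x_{12}^{O_Kt}\leq \Gamma$ obtained above.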
Similarly, $x_{13}^{O_Kx}=[x_{12}^{O_K},x_{23}^{O_Kx}]\leq \Gamma,$ and $x_{13}^{O_Ktx}=[x_{12}^{O_Kt},x_{23}^{xO_K}]\leq \Gamma $. By taking suitable commutators, we obtain $$x_{12}^{O_K+O_Kt+O_Kx+O_Ky}\leq \Gamma.$$ Since, up to subgroups of finite index, $O_D=O_K+O_Kt+O_Kx+O_Ky$, we see that $x_{12}(O_D)\leq \Gamma $, and similarly, $x_{ij}(O_D)\leq \Gamma$ for all $ij$ with $i\neq j$. Therefore, $\Gamma \supset U(O_K)$ and $U^-(O_K)$ for two opposing maximal unipotent subgroups of $G$. By [@R; @4], $\Gamma $ is then an arithmetic group.\ [*Case 4. $D\otimes {{\mathbb{R}}}={\bf H}^k$, $m=3$ and $k=1$.*]{} The assumptions mean that $K={{\mathbb{Q}}}$, and $D\otimes {{\mathbb{R}}}={\bf H}$. Consider the elements $m_0= \begin{pmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & 1\end{pmatrix}$, $u_0= \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}=x_{13}$, $u_0^-= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1\end{pmatrix}=x_{32}$. We assume that the matrix $\begin{pmatrix} a & b \\ c & d\end{pmatrix}$ is “generic”. In particular, assume that $c\notin {{\mathbb{Q}}}$ and that $e=ca+dc$ does not commute with $c$. Fix $r\geq 1$ and put $\Gamma =<m_0^r,u_0^r,(u_0^-)^r>$. By arguments similar to the last case, it is enough to prove that $\Gamma $ is an arithmetic subgroup of $SL_3(O_D)$ for some order $O_D$ of the division algebra $D$.\ We have $x_{13}^{{{\mathbb{Z}}}}\leq \Gamma $ and $x_{32}^{{{\mathbb{Z}}}}\leq \Gamma $. By taking commutators, we get $x_{12}^{{{\mathbb{Z}}}}\leq \Gamma $.\ The conjugate $^{m_0}(u_0)=\begin{pmatrix} 1 & 0 & a \\ 0 & 1 & c \\ 0 & 0 & 1\end{pmatrix}$. Hence we get $\begin{pmatrix} 1 & 0 & a \\ 0 & 1 & c \\ 0 & 0 & 1\end{pmatrix}^{{{\mathbb{Z}}}}\leq \Gamma $. By taking commutators with $x_{12}^{{{\mathbb{Z}}}}$, we then get $x_{13}^{{{\mathbb{Z}}}[c]}\leq \Gamma $.
Taking commutators with $x_{32}^{{{\mathbb{Z}}}}\leq \Gamma $, we obtain $x_{12}^{{{\mathbb{Z}}}[c]}\leq \Gamma $ as well.\ Consider $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} a \\ c \end{pmatrix}= \begin{pmatrix} a^2+bc \\ ca+dc \end{pmatrix}= \begin{pmatrix} a' \\ e \end{pmatrix}$. Clearly, $^{m_0^2}(u_0)= \begin{pmatrix} 1 & 0 & a'\\ 0 & 1 & e \\0 & 0 & 1 \end{pmatrix}.$ By the argument of the last paragraph, taking commutators of its conjugate with $x_{12}^{{{\mathbb{Z}}}[c]}$ we obtain $x_{13}^{{{\mathbb{Z}}}[e]{{\mathbb{Z}}}[c]}\leq \Gamma $. Since $e$ and $c$ do not commute and $D$ has dimension $4$ over ${{\mathbb{Q}}}$, it follows that ${{\mathbb{Z}}}[e]{{\mathbb{Z}}}[c]$ is of finite index in an order $O_D$ of $D$. Therefore, $x_{13}^{O_D}\leq \Gamma $. Taking commutators with $x_{32}^{{{\mathbb{Z}}}}\leq \Gamma $, we obtain $x_{12}^{O_D}\leq \Gamma $ (and $x_{13}^{O_D}\leq \Gamma $). Thus, $\Gamma $ intersects the unipotent radical (consisting of the $x_{12}$ and $x_{13}$ root groups) of a parabolic subgroup of $G$. Clearly, $\Gamma $ is Zariski dense. Therefore, by [@V2], $\Gamma $ is arithmetic. Groups of outer type A ---------------------- Suppose that $K$ is a number field of degree $k\geq 1$ over ${{\mathbb{Q}}}$. Let $E/K$ be a quadratic extension and $\sigma \in Gal(E/K)$ be the non-trivial element. Suppose that $D$ is a central division algebra over $E$ of degree $d\geq 1$ (as usual, $d^2=\dim _E(D)$). Assume there is an involution $*$ on $D$ such that its restriction to the centre $E$ coincides with $\sigma$. If $N\geq 1$ is an integer, and $g\in M_N(D)$, with $g=(g_{ij})$ an $N\times N$ matrix with entries in $D$, then define $g^*$ as the matrix with $ij$-th entry given by $g^*_{ij}=(g_{ij})^*$. Thus, $M_N(D)$ gets an involution $g\mapsto (^tg)^*$.\ Fix an integer $m\geq 0$.
Consider the $(m+4)\times (m+4)$-matrix $h=\begin{pmatrix} 0_{2\times 2} & 0_{2\times m} & \begin{pmatrix}0 & 1 \\ 1 & 0 \end{pmatrix}\\ 0_{m\times 2} & h_0 & 0_{m \times 2}\\ \begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix} & 0_{2\times m} & 0_{2\times 2} \end{pmatrix}$, where $0_{p\times q}$ denotes the zero matrix of the relevant size. Here $h_0$ is a non-singular $m\times m$ matrix with entries in $D$ such that $(^t h_0)^*=h_0$. Then $h$ defines a Hermitian form with respect to $*$ on the $m+4$ dimensional vector space over $D$. The algebraic group we consider is of the form $$G=SU_{m+4}(h,D)=\{ g\in SL_{m+4}(D): (^tg)^*hg=h \}.$$ Then $G$ is an absolutely simple algebraic group over $K$. Since $h$ contains [**two**]{} copies of the “hyperbolic” Hermitian form $J=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}$ it follows that $K$-rank ($G$)$\geq 2$. From the classification tables of [@T2], these $G$ are the only outer forms of type $A$ of $K$-rank at least two.\ Arithmetic subgroups $\Gamma _0$ of $G$ are commensurable with $G\cap GL_{m+4}(O_D)$ for some order $O_D$ of the division algebra $D$. Consider the subgroup $$H=\{\begin{pmatrix}g & 0 & 0\\ 0 & 1_m & 0\\ 0 & 0 & J[(^tg)^*]^{-1}J^{-1} \end{pmatrix}: g\in SL_2(D)\}.$$ Since $H(K\otimes {{\mathbb{R}}})=SL_2(D\otimes {{\mathbb{R}}})$ is non-compact, it follows from the Borel density theorem (see Lemma \[boreldense\]) that $\Gamma _0\cap H\simeq SL_2(O_D)$. Moreover, $\Gamma _0\cap H$ is Zariski dense in the group $H(K\otimes {{\mathbb{C}}})=[SL_{2d}({{\mathbb{C}}})\times SL_{2d}({{\mathbb{C}}})]^k$.
The intersection of $G$ with diagonals is at least two dimensional, and is a maximal $K$-split torus $S$, if $h_0$ is suitably chosen (that is, split off all the hyperbolic forms in $h_0$ in the same way as was done for [*two*]{} hyperbolic forms for $h$).\ With respect to $S$, the intersection of unipotent upper triangular matrices with $G$ yields a maximal unipotent subgroup $U_0$ of $G$ and the roots of $S$ occurring in the Lie algebra ${\mathfrak u}_0$ of $U_0$ form a system $\Phi ^+$ of positive roots. If $\alpha $ and $\beta $ are the highest and a second highest root in $\Phi ^+$, then the group $U_\alpha (O_K)U_\beta (O_K)$ is contained in the group $U= \{\begin{pmatrix} 1 & 0 & x \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}: x\in M_2(O_D), (^tx)^*+JxJ=0\}$. Now, as a module over $H(K\otimes {{\mathbb{C}}})=[SL_{2d}({{\mathbb{C}}})\times SL_{2d}({{\mathbb{C}}})]^k$, the Lie algebra $Lie U({{\mathbb{C}}})$ is isomorphic to $[{{\mathbb{C}}}^{2d}\otimes ({{\mathbb{C}}}^{2d})^*]^k$ and has distinct eigenvalues for the diagonals $T_H$ in $H(K\otimes {{\mathbb{C}}})$ (thought of as a product of copies of $SL_{2d}({{\mathbb{C}}})$). Therefore, by Section 3, there exist $m_0\in \Gamma _0\cap H$, $u_0\in U\cap \Gamma _0$ such that the group generated by the conjugates $\{^{m_0^j}(u_0): j\in {{\mathbb{Z}}}\}$ contains the group $U(rO_K)$ for some integer $r$. Now the criterion of Proposition \[secondhighest\] says that there exists a $\gamma _0\in \Gamma _0$ such that the three-generated group $\Gamma =<\gamma _0, m_0, u_0>$ is of finite index in $\Gamma _0$.\ Groups of type B and inner type D --------------------------------- (i.e. type $^1D^1_{n,r}$) In this subsection, we consider groups of the form $G=SO(f)$ with $f$ a non-degenerate quadratic form in $n$ variables over $K$, $n\geq 5$ (and $n\geq 8$ if $n$ is even).
Assume that $f$ is a direct sum of two copies of a hyperbolic form and another non-degenerate form $f_2$: $f=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\oplus \begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\oplus f_2$.\ Put $f_1=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\oplus f_2$. Then $f=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\oplus f_1$. Then, $K$-rank ($G$) $\geq 2$. Consider the subgroup $P= \{\begin{pmatrix} a & x & -\frac{x^tx}{2}\\ 0 & SO(f_1) & -^tx \\ 0 & 0 & a^{-1}\end{pmatrix}: a\in {\bf G}_m, x\in K^{n-2}\}$. Then $P$ is a parabolic subgroup of $G$ with unipotent radical $U= \{\begin{pmatrix} 1 & x & -\frac{x^tx}{2}\\ 0 & 1 & -^tx \\ 0 & 0 & 1\end{pmatrix}: x\in K^{n-2}\}$. Now, the group $SO(f_1)$ is isotropic over $K$ since $f_1$ represents a zero. Moreover, since $n-2\geq 3$, $SO(f_1)$ is a semi-simple algebraic group over $K$. Hence $SO(f_1)(O_K)$ is Zariski dense in $SO(f_1)(K\otimes {{\mathbb{C}}})$. Consequently, $M_0$ contains the subgroup $SO(f_1)$. Moreover, as a representation of $SO(f_1)(K\otimes {{\mathbb{C}}})=SO(n-2)({{\mathbb{C}}})^k$, the Lie algebra ${\mathfrak u}(K\otimes {{\mathbb{C}}})$ of $U$ is the standard representation $St ^k$. Clearly, for a maximal torus in $SO(n-2)({{\mathbb{C}}})$, the standard representation is multiplicity free. Therefore, the criterion of Proposition \[multone\] applies: every arithmetic subgroup of $G(O_K)$ is virtually three-generated. Groups of type C and the rest of the Groups of type D ----------------------------------------------------- ### $G=Sp_{2n}$ over $K$ with $n\geq 3$. Denote by $$\kappa =\begin{pmatrix} 0 & 0 & \cdots & 0 & 1\\ 0 & 0 & \cdots & 1 & 0\\ 0 & \cdots &\cdots &\cdots & 0\\ 1 & 0 & \cdots & 0 & 0\end{pmatrix}$$ the $n\times n$ matrix all of whose entries are zero, except for the anti-diagonal ones, which are all equal to one. Let $J=\begin{pmatrix} 0_n & \kappa \\ -\kappa & 0_n\end{pmatrix}$ be the non-degenerate $2n\times 2n$ skew symmetric matrix. 
Define the symplectic group $G=Sp_{2n}=\{ g\in SL_{2n}: ^tgJg=J\}$. The group $P=\{\begin{pmatrix} g & 0 \\ 0 & \kappa ^tg\kappa ^{-1} \end{pmatrix}\begin{pmatrix} 1 & x \\0 & 1\end{pmatrix}: x+\kappa ^tx \kappa =0, g\in GL_n \}$ is a parabolic subgroup. Denote by $M$ the Levi subgroup of $P$ such that $x=0$. Then, it is easy to show that $M_0\supset H\simeq SL_n=\{\begin{pmatrix} g & 0 \\ 0 & \kappa ^tg \kappa ^{-1}\end{pmatrix}\}. $ As a representation of $H$, the Lie algebra ${\mathfrak u}$ of the unipotent radical $U$ of $P$ is seen to be isomorphic to $S^2({{\mathbb{C}}}^n)$, the second symmetric power of the standard representation of $H=SL_n$. Therefore, with respect to the diagonal torus $T_H$ of $H$, the representation ${\mathfrak u}$ is multiplicity free. Hence, by Proposition \[multone\], every arithmetic subgroup of $Sp_{2n}(O_K)$ is virtually three-generated. ### Other Groups of type C and D In this subsection, we will consider all groups of type $C$ or $D$ which are not covered in the previous subsections. Let $D$ be a [**quaternionic**]{} division algebra over the number field $K$. Let $\sigma$ be an involution (of the first kind) on $D$. In the case of type C (resp. type D), assume that the space $D^{\sigma}$ of $\sigma $-invariants in $D$ is one dimensional (resp. three dimensional) over $K$. Let $m\geq 0$ be an integer. Consider the $(m+4)\times (m+4)$ matrix $h=\begin{pmatrix}0_2 & 0 & \begin{pmatrix}0 & 1 \\ 1 & 0\end{pmatrix}\\ 0 & h_0 & 0 \\ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} & 0 & 0 \end{pmatrix}$, where $h_0$ is a non-singular matrix with entries in $D$, such that $^t\sigma (h_0)=h_0$. We will view $h$ as a non-degenerate form on $D^{m+4}\times D^{m+4}$ with values in $D$, which is hermitian with respect to the involution $\sigma$.
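The multiplicity-one claim for $S^2({{\mathbb{C}}}^n)$ under the diagonal torus, used in the $Sp_{2n}$ case above, can be checked concretely: the eigenvalue on the basis vector $e_ie_j$ ($i\leq j$) is $t_it_j$, and for a generic diagonal element these products are pairwise distinct. A minimal sketch (Python, $n=3$; the particular torus element is an illustrative choice):

```python
from fractions import Fraction as F
from itertools import combinations_with_replacement

# Diagonal torus element of SL_3: entries multiply to 1.
t = [F(2), F(3), F(1, 6)]
assert t[0] * t[1] * t[2] == 1

# Eigenvalues of diag(t) acting on S^2(C^3) (symmetric 3x3 matrices):
# the basis vector e_i e_j (i <= j) is scaled by t_i * t_j.
eigs = [t[i] * t[j] for i, j in combinations_with_replacement(range(3), 2)]

# Multiplicity one: all n(n+1)/2 = 6 eigenvalues are pairwise distinct.
assert len(set(eigs)) == len(eigs) == 6
print(sorted(eigs))
```

Since the eigenvalues are distinct, every weight space of $T_H$ on ${\mathfrak u}\simeq S^2({{\mathbb{C}}}^n)$ is one dimensional, which is exactly the hypothesis of Proposition \[multone\].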
The algebraic group which we consider is the special unitary group of this hermitian form: $G=SU(h)$ - an algebraic group over $K$ (if $D^{\sigma}$ is three dimensional, then $G$ is of type $^1D$ or $^2D$ according as the discriminant of $h$ is $1$ or otherwise). With this choice of $h$, it is immediate that $K$-rank ($G$) $\geq 2$ ($h$ contains two copies of the hyperbolic form: cf. the subsection on groups of outer type A).\ Since we needed $K$-rank ($G$) $\geq 2$, we split off two hyperbolic planes from $h$. The form $h_0$ may have more hyperbolic planes in it; after splitting these off in a manner similar to that for $h$, we obtain a form $h_1$ which is anisotropic over $K$. We will assume that $h_0$ is of this type. Then, the intersection of $G$ with the diagonals is a maximal $K$-split torus $S$ in $G$. The roots of $S$ occurring in the group of unipotent upper triangular matrices in $G$ form a positive system $\Phi ^+$. Choose $\alpha $ to be the highest root and $\beta $ a second highest root in $\Phi ^+$.\ The group $U_\alpha U_\beta $ is contained in the unipotent group $U=\{\begin{pmatrix}1_2 & 0 & x \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}: x \text{ suitable}\}$ which is the unipotent radical of a parabolic subgroup. Set $H=\{\begin{pmatrix} g & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & J^tgJ^{-1}\end{pmatrix} : g\in SL_2(D)\}$. Then, $M_0$ contains $H$. Let $\Gamma _0\subset G(O_K)$ be an arithmetic subgroup. Then, there exist $m_0\in H(O_K)\cap \Gamma _0$ and $u\in U(O_K)\cap \Gamma _0$ such that the group generated by $m_0$ and $u$ (denoted as usual by $<m_0,u>$) intersects $(U_\alpha U_\beta )(O_K)$ in a subgroup of finite index. By Lemma \[zariskidense\], there exists an element $\gamma \in \Gamma _0$ such that $\Gamma =<m_0,u,\gamma >$ is Zariski dense in $G(K\otimes {{\mathbb{C}}})$.
By Proposition \[secondhighest\], $\Gamma $ has finite index in $\Gamma _0$: $\Gamma _0$ is virtually three-generated.\ The Exceptional Groups ---------------------- In this subsection, we prove Theorem 1 for all groups $G$ of exceptional type of $K$-rank $\geq 2$. In each of these cases, we will locate a simple $K$-root in the Tits–Dynkin diagram of $G$, such that the Levi subgroup (actually the group $M_0$ contained in the Levi) of the parabolic group corresponding to the simple root contains a subgroup $H$ with the following property. $H(K\otimes {{\mathbb{C}}})$ has a maximal torus $T_H$ whose action on the Lie algebra ${\mathfrak u}=Lie (U)(K\otimes {{\mathbb{C}}})$ is multiplicity free. By Proposition \[multone\], this implies that every arithmetic subgroup of $G(K)$ is virtually three-generated. The notation is as in [@T2].\ ### The groups $^3D^2_{4,2}$ and $^6D^2_{4,2}$. In the Tits diagram, there is one simple circled root $\alpha$, and three other simple roots which are circled together. The semi-simple part of the Levi is therefore $K$-simple, and hence contains (over ${{\mathbb{C}}}$) the group $SL_2({{\mathbb{C}}})^3$ (three-fold product of $SL(2)$). According to [@L] and [@Sh], the representation ${\mathfrak u}$ is the direct sum of $St\otimes St\otimes St$ and $1\otimes St\otimes 1$ ($St$ is the standard representation and $1$ is the trivial one). This is multiplicity free for the product of the diagonals in the group $SL(2)^3$. ### Groups of type $E_6$ There are three groups of inner type $E_6$ with $K$-rank $\geq 2$. [*Case 1. $G=^1E_{6,2}^{28}$*]{}. The extreme left root in the diagram is circled. Since its $K$-rank is $\geq 1$, the Levi of the corresponding maximal parabolic subgroup is non-compact. Then, $M_0$ contains $SO(10)$ over ${{\mathbb{C}}}$. According to [@L], the representation on ${\mathfrak u}$ is one of the $\frac{1}{2}$-spin representations of $SO(10)$ and has distinct characters for the maximal torus.\ [*Case 2.
$G= ^1E_{6,2}^{16}$*]{}. Over the number field $K$, the diagram is that of $^1E_{6,2}^{16}$. The root in the middle of the diagram is circled. However, over any archimedean completion, the diagram can only be the split form ($^1E_{6,2}^{16}$ can not transform into $^1E_{6,2}^{28}$ over ${{\mathbb{R}}}$ or ${{\mathbb{C}}}$). Consequently, $M(K\otimes {{\mathbb{R}}})$ contains $SL_3\times SL_2\times SL_3$, whence $M_0= SL_3\times SL_2\times SL_3$. According to [@Sh], the representation of $M_0({{\mathbb{C}}})$ on ${\mathfrak u}$ is the direct sum of $St _{SL_3}\otimes St _{SL_2} \otimes \wedge ^2 St_{SL_3}$ (from now on we will drop the subscript $SL_3$ or $SL_2$ for ease of notation), $\wedge ^2 St \otimes Triv \otimes St$ and $Triv \otimes St \otimes Triv$. It is clear that restricted to the product of the diagonals in $SL_3\times SL_2\times SL_3$, the representation ${\mathfrak u}$ has multiplicity one.\ [*Case 3. $G= ^1 E_{6,6}^0$*]{}. The same $M_0$ as in Case 2 works, to prove multiplicity one for the torus.\ Now consider the groups of outer type $E_6$ of $K$-rank $\geq 2$. These are $^2E_{6,2}^{16 ^{'}}$, $^2E_{6,2}^{16 ^{''}}$, and $^2E_{6,4}^2$. In all these, the root at the extreme left is circled. Then, since $K$-rank ($M$) $\geq 1$, it follows that $M_0({{\mathbb{C}}})\supset SL_6$. The representation on ${\mathfrak u}$ is (by [@L], page 49, (x)) the direct sum of $Triv$ and $\wedge ^3(St)$ and the diagonal torus in $SL_6$ has multiplicity one for its action on ${\mathfrak u}$.\ ### Groups of type $E_7$ There are four groups of type $E_7$ over a number field $K$ with $K$-rank $\geq 2$. They are $E_{7,2}^{31}$, $E_{7,3}^{28}$, $E_{7,4}^9$ and $E_{7,7}^0$. In all these, the root on the extreme right is circled, and $M$ has $K$-rank $\geq 1$. Hence $M_0({{\mathbb{C}}})$ contains the semi-simple part of the Levi group $M$. This is $SO(12)$. 
According to [@L], the representation ${\mathfrak u}$ of $SO(12)$ is $triv\oplus \frac{1}{2}$-spin, which has distinct eigenvalues for the torus in $SO(12)$. ### Groups of type $E_8$ The groups with $K$-rank $\geq 2$ are $E_{8,4}^{28}$ and $E_{8,8}^0$. Consider the root on the extreme right in the diagram. The corresponding $M$ has semi-simple (actually simple) part $SO(14)$ which is isotropic over $K$. The representation ${\mathfrak u}$, according to [@L], is $\frac{1}{2}$-spin $\oplus St$ and has multiplicity one for the maximal torus of $SO(14)$. ### The groups $F_4$ There is only one $K$-rank $\geq 2$ group, namely the split one, denoted $F_{4,4}^0$. Take the root on the extreme left. Then $M_0\supset SO(7)$. The representation is $triv\oplus \frac{1}{2}$-spin and is multiplicity free for the action of the maximal torus in $SO(7)$. ### Groups of type $G_2$ The only group is $G_{2,2}^0$, the split form. For the root on the extreme left, the group $M_0$ contains $SL(2)$ and the representation ${\mathfrak u}$ is $Triv \oplus Sym^3$ which has distinct eigenvalues for the action of the maximal torus in $SL(2)$. This completes the proof of Theorem 1 for groups of $K$-rank $\geq 2$. Classical Groups of Rank One ============================ The case of groups $G$ such that $K$-rank ($G$) $= 1$ and ${{\mathbb{R}}}$-rank $(G_\infty)\geq 2$ is much more involved. We will have to consider many more cases, both for classical and exceptional groups. In some cases, we will have to supply ad hoc proofs, because the general criteria established in the previous sections do not apply. Groups of inner type A ---------------------- The assumptions imply that $G=SL_2(D)$ where $D$ is a central division algebra over the number field $K$.\ [*Case 1. $D=K$*]{}. The assumption that ${{\mathbb{R}}}$-rank $(G_\infty)\geq 2$ is equivalent to $r_1+r_2\geq 2$. Therefore, $K$ has infinitely many units. This has been covered in Section (2.1) on $SL(2)$.\ [*Case 2.
$D\neq K$, $D\otimes _{{{\mathbb{Q}}}}{{\mathbb{R}}}\neq {\bf H}\times \cdots \times{\bf H}$*]{}. Here ${\bf H}$ denotes the algebra of Hamiltonian quaternions. Consider the parabolic subgroup $P=\{\begin{pmatrix} g & 0 \\ 0 & h\end{pmatrix} \begin{pmatrix}1 & x \\ 0 & 1\end{pmatrix}: g,h\in GL_1(D), Det (gh)=1, x\in D\}$. Let $U$ be its unipotent radical. The assumption on $D$ means that $SL_1(D\otimes {{\mathbb{R}}})$ is not compact, and $SL_1(D\otimes _{{{\mathbb{Q}}}}{{\mathbb{C}}})$ contains $SL_1(O_D)$ as a Zariski dense subgroup (Proposition \[boreldense\]). Therefore, $M_0$ contains the subgroup $M_1$ with $M_1(K\otimes {{\mathbb{C}}})=[SL_2({{\mathbb{C}}})\times SL_2({{\mathbb{C}}})]^k$. As a representation of $M_1({{\mathbb{C}}})$, the Lie algebra ${\mathfrak u}=(Lie U)(K\otimes {{\mathbb{C}}})$ is $[St\otimes St^*]^k$ and is multiplicity free for the action of the maximal torus ($2k$-fold product of the diagonals in $M_1({{\mathbb{C}}})$). Therefore, every arithmetic subgroup of $G(K)$ is virtually three-generated.\ [*Case 3. $D\neq K$ and $D\otimes _{{{\mathbb{Q}}}}{{\mathbb{R}}}={\bf H}^k$*]{}. Therefore, $K$ is totally real of degree $k$ over ${{\mathbb{Q}}}$. The assumption ${{\mathbb{R}}}-rank (G_\infty)\geq 2$ means that $K\neq {{\mathbb{Q}}}$. Let $P$, $M$ be the parabolic subgroup and its Levi subgroup in Case 2 of the present subsection. Fix $m=\begin{pmatrix} \alpha & 0\\ 0 & \beta \end{pmatrix}\in M(K)$ such that $\delta =\alpha \beta ^{-1}$ does not lie in $K$. Since $D\otimes {{\mathbb{R}}}={\bf H}^k$, it follows that the extension $K(\delta )/K$ is a CM extension. Fix $u_+=\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}$ and $m_0=\begin{pmatrix} \theta & 0\\ 0 & \theta ^{-1} \end{pmatrix}$ where $\theta \in O_K^*$ is chosen as in Lemma \[exist\]. Fix $\gamma \in G(O_K)$ in general position with respect to $u_+$ and $m_0$. 
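The mechanism by which the pair $(m_0,u_+)$ of Case 3 produces a finite-index group of integral translations can be illustrated numerically: conjugating $u_+=\begin{pmatrix}1&1\\0&1\end{pmatrix}$ by $m_0^j=\mathrm{diag}(\theta^j,\theta^{-j})$ gives the translation by $\theta^{2j}$, so $<m_0,u_+>$ contains translations by the ${{\mathbb{Z}}}$-span of $\{\theta^{2j}\}$, and Lemma \[exist\] guarantees this span has finite index in $O_K$. A sketch (Python; $K={{\mathbb{Q}}}(\sqrt 2)$ and $\theta=1+\sqrt 2$ are illustrative choices, not taken from the text):

```python
# Elements of Z[sqrt(2)] are modelled as pairs (a, b) = a + b*sqrt(2).
def mul(x, y):
    return (x[0] * y[0] + 2 * x[1] * y[1], x[0] * y[1] + x[1] * y[0])

theta = (1, 1)                 # the fundamental unit 1 + sqrt(2)
theta2 = mul(theta, theta)     # conjugating u_+ by m_0 translates by theta^2
assert theta2 == (3, 2)

# Translation lengths produced by m_0^j u_+ m_0^{-j} for j = 0, 1:
# namely 1 and theta^2.  Their Z-span is a sublattice of O_K = Z[sqrt(2)];
# its index is |det| of the matrix with rows (1, 0) and (3, 2).
v0, v1 = (1, 0), theta2
index = abs(v0[0] * v1[1] - v0[1] * v1[0])
assert index == 2   # non-zero determinant: finite index, as required
print("Z-span of {1, theta^2} has index", index, "in Z[sqrt(2)]")
```

Taking more conjugates $j=0,1,2,\dots$ can only shrink the index further; the point is merely that it is finite, which is what the choice of $\theta$ in Lemma \[exist\] encodes.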
Then, for every integer $r$, the group $\Gamma =<u_+^r,m_0^r,\gamma ^r>$ is a Zariski dense subgroup of $G$ (see Lemma \[zariskidense\]). We will show that $\Gamma $ is arithmetic, proving that every arithmetic subgroup of $G(K)$ is virtually three-generated (since arithmetic groups contain a group of the form $<\gamma ^r,m_0^r,u_+^r>$ for some integer $r$).\ Since $\Gamma $ contains $m_0^r$ and $u_+^r$, it follows that for some integer $r'$, $\Gamma $ contains the group $V^+=\begin{pmatrix} 1 & r'O_K\\ 0 & 1\end{pmatrix}$. Pick a generic element $g\in \Gamma $, with Bruhat decomposition of the form $g=umwv$, where $m=\begin{pmatrix} \alpha & 0 \\0 & \beta \end{pmatrix}$ may be assumed to be as in the foregoing paragraph. Then, $\Gamma \supset <^g(V^+), V^+>$. Note that $u,v$ centralise the group $V^+$; put $V^-=^w(V^+)$. One sees that $\Gamma \supset ^u<^m(V^-),V^+>=^u< \begin{pmatrix}1 & 0\\ \alpha ^{-1}\beta r'O_K & 1\end{pmatrix}, \begin{pmatrix}1 & r'O_K\\ 0 & 1\end{pmatrix}>$. By the result on SL(2) over CM fields (Proposition \[CM’\]), $\Gamma \supset ^u(\Delta)$ for some subgroup $\Delta $ of finite index in $SL_2(O_E)$, where $E=K(\alpha ^{-1}\beta)$ is a CM extension of $K$. In particular, there exists an integer $r''$ such that $\Gamma \supset ^u(\theta ^{r''{{\mathbb{Z}}}})$. By Proposition \[technical\], it follows that $\Gamma $ is arithmetic.\ Groups of outer type A ---------------------- ### **The Groups SU(h) over fields** In this subsection, $K$ is a number field, $E/K$ a quadratic extension whose non-trivial Galois automorphism is denoted $\sigma$. Let $h:E^{n+1}\times E^{n+1}{\rightarrow}E$ denote a $\sigma$-hermitian form which is isotropic over $K$, and write $h=\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\oplus h_0$ where $h_0$ is [**anisotropic**]{} over $K$. Let $G=SU(h)$ be the special unitary group of this hermitian form. Then, $K$-rank ($G$)=$1$. The positive roots are $\alpha$ and $2\alpha$.
Assume that ${{\mathbb{R}}}$-rank ($G_\infty$) $\geq 2$. Therefore, ${{\mathbb{R}}}$-rank $(SU(h_0))_\infty \geq 1$. The arguments are general when $K$ has infinitely many units or when $n$ is large (i.e. $n\geq 4$). But for small $n$ and small fields, the proofs become more complicated, and we give ad hoc arguments. We thus have 5 cases to consider.\ [*Case 1. $K$ has infinitely many units*]{}. Note that the $2\alpha$ root space is one dimensional. Therefore, by the criterion of Proposition \[2alpha\], every arithmetic subgroup is virtually three-generated.\ [*Case 2. $K$ is ${{\mathbb{Q}}}$ or is an imaginary quadratic extension of ${{\mathbb{Q}}}$ but $n\geq 4$*]{}. Take $P$ (resp. $M$) to be the parabolic subgroup of $G$ (resp. Levi subgroup of $P$), consisting of matrices of the form $\begin{pmatrix}a & * & * \\ 0 & b & *\\ 0 & 0 & \sigma (a)^{-1}\end{pmatrix}$ (resp. $\begin{pmatrix}a & 0 & 0 \\ 0 & b & 0\\ 0 & 0 & \sigma (a)^{-1}\end{pmatrix}$). Then, $(M\supset )M_0\supset M_1=SU(h_0)$ since the latter group is semi-simple (because $n-2\geq 2$; we only use the hypothesis that $n\geq 3$, so these observations apply to the next two cases as well) and is non-compact at infinity, and therefore contains an arithmetic subgroup as a Zariski dense subgroup. Moreover, $SU(h_0)({{\mathbb{C}}})=SL_{n-1}({{\mathbb{C}}})$, and its representation on the Lie algebra ${\mathfrak u}$ of the unipotent radical of $P$ is simply $St\oplus St^*\oplus triv$. Since $M_1({{\mathbb{C}}})=SL_{n-1}({{\mathbb{C}}})$ with $n-1\geq 3$, the standard representation is not equivalent to its contragredient. Thus the diagonal torus $T_1$ of $M_1$ has one dimensional eigenspaces in ${\mathfrak u}$. Hence arithmetic subgroups of $G(K)$ are virtually three-generated.\ [*Case 3. $n=3$, either $K={{\mathbb{Q}}}$ and $E/{{\mathbb{Q}}}$ is real quadratic or $K$ is an imaginary quadratic extension of ${{\mathbb{Q}}}$*]{}.
Then, $SU(h_0)({{\mathbb{C}}})=SL_2({{\mathbb{C}}})$, but the torus $T_1$ of the last case does not have multiplicity one in its action on ${\mathfrak u}$. However, observe that $M_0$ of the last case contains in addition the torus $T_2$ consisting of matrices $\begin{pmatrix}u & 0 & 0 & 0 \\ 0 & u^{-1} & 0 & 0 \\ 0 & 0 & u^{-1} & 0 \\ 0 & 0 & 0 & u\end{pmatrix}$ with $u$ a unit in the real quadratic extension $E$ ($E$ has infinitely many units). Put $T_0=T_1T_2$. Now, $T_0\subset M_0$ is a torus consisting of matrices of the form $\begin{pmatrix}u & 0 & 0 & 0 \\ 0 & u^{-1}v & 0 & 0 \\ 0 & 0 & u^{-1}v^{-1} & 0 \\ 0 & 0 & 0 & u\end{pmatrix}$ with $u,v\in {\bf G}_m$, and has (as may be easily seen) distinct eigenvalues in ${\mathfrak u}=St\oplus St^*\oplus triv$, where ${\mathfrak u}$ is as in the previous case. Thus, arithmetic subgroups of $G=SU(h)$ are virtually three-generated.\ [*Case 4. $K={{\mathbb{Q}}}$, $n=3$ and $E/{{\mathbb{Q}}}$ is imaginary quadratic.*]{} Then, $U(h_0)$ is not contained in $M_0$ (of course, $SU(h_0)\subset M_0$). We give an ad hoc argument in this particular case.\ Write $h_0=\begin{pmatrix}\lambda _1 & 0\\ 0 & \lambda _2\end{pmatrix}$ with $\lambda _1,\lambda _2\in {{\mathbb{Q}}}$ (every Hermitian form in two variables is equivalent to one of this type). Now, $h= \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\oplus h_0$ is viewed as a [**hermitian**]{} (with respect to $\sigma$) form from $E^4\times E^4{\rightarrow}E$. Consider $f=\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\oplus \begin{pmatrix}\lambda _1 & 0\\ 0 & \lambda _2\end{pmatrix}$ as a [**quadratic**]{} form on ${{\mathbb{Q}}}^4$. Now, the ${{\mathbb{Q}}}$-group $SU(h)$ contains as a ${{\mathbb{Q}}}$-subgroup, the group $SO(f)$. Since $SU(h_0)_\infty=SU(1,1)$ is non-compact (as we have seen before, this follows from the fact that the real rank of $G_\infty$ is $\geq 2$), it follows that $SO(f_0)_\infty=SO(1,1)$ is also non-compact.
Here $f_0=\begin{pmatrix}\lambda _1 & 0\\ 0 & \lambda _2\end{pmatrix}$ is viewed as a quadratic form. Consequently, the group $SO(f)({{\mathbb{R}}})\supset SO(1,1)\times SO(1,1)$ and therefore has real rank $\geq 2$.\ [*Claim: $SO(f)$ is a ${{\mathbb{Q}}}$-simple group*]{}. For, if $SO(f)$ is not ${{\mathbb{Q}}}$-simple (since it is isogenous to the product $SL_2({{\mathbb{C}}})\times SL_2({{\mathbb{C}}})$), then it is isogenous to $SL_2\times SL_2$ or $SL_2\times SL_1(D)$ over ${{\mathbb{Q}}}$ (with $D$ a quaternionic division algebra over ${{\mathbb{Q}}}$). Now, the only four dimensional representations of $SL_2({{\mathbb{C}}})\times SL_2({{\mathbb{C}}})$ are $St\otimes St$, or $St\oplus St$, or $Triv\otimes Triv\oplus Triv \otimes S^2(St)$, or $Triv^2\oplus Triv \otimes St$. Thus, if both the factors have to act non-trivially, then the only possible four dimensional representations are $St\oplus St$ and $St\otimes St$. But, $St\oplus St$ does not have a quadratic form invariant under $SL_2\times SL_2$. Thus, the only possible representation (over ${{\mathbb{C}}}$) of $SL_2\times SL_2$ onto $SO(4)$ is $St\otimes St$.\ It follows that the group $SL_2\times SL_1(D)$ cannot have a four dimensional representation defined over ${{\mathbb{Q}}}$ with image $SO(f)$. Thus, $SO(f)$ must be isogenous to $SL_2\times SL_2$. But then, the isogeny $SL_2\times SL_2 {\rightarrow}SL(St\otimes St)=SL(M_2({{\mathbb{Q}}}))$ preserves a quadratic form (namely the determinant) over ${{\mathbb{Q}}}$, which has [**two**]{} ${{\mathbb{Q}}}$-hyperbolic planes in it, and therefore cannot be of the form $f=J\oplus f_0$ as above. This proves the claim.\ Choose an element $\theta \in SO(f_0)({{\mathbb{Z}}})$ of infinite order. Pick non-trivial elements $u_0\in (SO(f)\cap U^+)({{\mathbb{Z}}})$ and $v_0\in U_{2\alpha}({{\mathbb{Z}}})$.
Then, the group $<\theta, u_0v_0>$ generated by $\theta $ and $u_0v_0$ contains $\theta ^{{\mathbb{Z}}}$, and (since $\theta $ acts by different characters on $Lie U_{2\alpha}$ and $Lie (U^+\cap SO(f))$) the unipotent group $V^+=V^+(r{{\mathbb{Z}}})=(SO(f)\cap U^+)(r{{\mathbb{Z}}})U_{2\alpha }(r{{\mathbb{Z}}})$ for some integer $r$. Let $\gamma \in G({{\mathbb{Z}}})$ be an element in general position with respect to $\theta $ and $u_0v_0$ as in Lemma \[zariskidense\]. For an integer $r$, consider the group $\Gamma =<\theta ^r, (u_0v_0)^r, \gamma ^r>$. Then $\Gamma $ is Zariski dense. By arguments similar to those in the previous cases, to prove that every arithmetic group in $G({{\mathbb{Z}}})$ is virtually three-generated, it is enough to show that $\Gamma $ is arithmetic for every $r$. Pick an element $g\in \Gamma $ with Bruhat decomposition $g=umwv$, say. Then, there exists an integer $r'$ such that $u$ and $v$ take $V^+(r'{{\mathbb{Z}}})$ into $V^+(r{{\mathbb{Z}}})$ (since the commutator of $u$ with $V^+$ lands inside $U_{2\alpha}$). Thus, $\Gamma \supset ^u<^m(V^-), V^+>$ where $V^-=^w(V^+)$ as before. Hence, we get $\Gamma \supset ^u<U_{-2\alpha}(r{{\mathbb{Z}}}), V^+(r{{\mathbb{Z}}})>$.\ Consider the group $<U_{-2\alpha}(r{{\mathbb{Z}}}), U_H^+(r{{\mathbb{Z}}})>$. An element in $U_{-2\alpha }(r{{\mathbb{Z}}})$ has the Bruhat decomposition $u_1m_1wv_1$ where $u_1,v_1\in U_{2\alpha }({{\mathbb{Q}}})$ [*commute with*]{} $U_H^+$. Therefore, $<U_{-2\alpha}(r{{\mathbb{Z}}}), U_H^+(r{{\mathbb{Z}}})>$ contains $$<^{u_1m_1wv_1}(U_H^+(r{{\mathbb{Z}}})), U_H^+(r{{\mathbb{Z}}})>= ^{u_1}<^{m_1}(U_H^-(r{{\mathbb{Z}}})), U_H^+(r{{\mathbb{Z}}})>$$ and the latter contains $^{u_1}<U_H^-(r'{{\mathbb{Z}}}), U_H^+(r'{{\mathbb{Z}}})>$ for some integer $r'$. Since the latter group is of finite index in $H({{\mathbb{Z}}})$ by [@V], it follows that $<U_{-2\alpha}(r{{\mathbb{Z}}}), U_H^+(r{{\mathbb{Z}}})>\supset ^{u_1}(\theta ^{r{{\mathbb{Z}}}})=\theta ^{r{{\mathbb{Z}}}}$ for some integer $r$.
Therefore, from the foregoing paragraph, we get $\Gamma \supset ^{uu_1}(\theta ^{r{{\mathbb{Z}}}})=^u(\theta ^{r{{\mathbb{Z}}}})$. Then, $\Gamma $ contains the commutator $[^u(\theta ^r), \theta ^r]$, with $u$ running through generic elements of $U^+$, whence $\Gamma \supset U^+(r{{\mathbb{Z}}})$ for some integer $r$. By [@V], $\Gamma $ is arithmetic.\ [*Case 5. $n=2$, $K$ is either ${{\mathbb{Q}}}$ or an imaginary quadratic extension of ${{\mathbb{Q}}}$*]{}. We can take $h=\begin{pmatrix} 0 & 0 & 1\\0 & 1 & 0\\ 1 & 0 & 0\end{pmatrix}$ as a Hermitian form over a quadratic extension $E/K$ and $G=SU(h)$ over $K$. If $K={{\mathbb{Q}}}$ and $E$ is imaginary quadratic, then the real rank of $SU(h)$ is one (the group of real points is $SU(2,1)$), and in Theorem 1 we have assumed that ${{\mathbb{R}}}$-rank $(G_\infty)\geq 2$. Hence, $E$ is real quadratic and has infinitely many units. If $K$ is imaginary quadratic, then any quadratic extension $E$ of $K$ has infinitely many units. We can therefore assume that $E$ has infinitely many units.\ If $P$ is the parabolic subgroup of $G=SU(h)$ consisting of upper triangular matrices in $G$, then it follows from the conclusion of the last paragraph that $M_0({{\mathbb{C}}})={{\mathbb{C}}}^*$, since $M_0(O_K)$ contains the group of matrices $\begin{pmatrix} u & 0 & 0\\0 & u^{-2} & 0\\ 0 & 0 & u \end{pmatrix}$ where $u$ is a unit in $E$. The action of $M_0({{\mathbb{C}}})={{\mathbb{C}}}^*$ on the Lie algebra ${\mathfrak u}$ of the unipotent radical of $P$ is given by ${\mathfrak u}={{\mathbb{C}}}(3)\oplus {{\mathbb{C}}}(-3)\oplus {{\mathbb{C}}}(0)$ where ${{\mathbb{C}}}(m)$ is the one dimensional module over $M_0({{\mathbb{C}}})$ on which an element $z\in {{\mathbb{C}}}^*$ acts by $z^m$.
Hence ${\mathfrak u}$ is multiplicity free for the $M_0({{\mathbb{C}}})$ action, and we have proved Theorem 1 in this case.\ ### **The Groups SU(h) over division algebras** In this subsection, $K$ is a number field, $E/K$ a quadratic extension, $D$ a central division algebra over $E$ with an involution $*$ of the second kind, degree ($D$)=$d\geq 2$, $k=[K:{{\mathbb{Q}}}]$. $h:D^{m+2}\times D^{m+2}{\rightarrow}D$ is a $*$-hermitian form in $m+2$ variables over $D$. $h$ is of the form $$h=\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\oplus h_m$$ where $h_m$ is an anisotropic hermitian form in $m$ variables. The special unitary group $G=SU(h)$ of the hermitian form $h$ is an absolutely simple algebraic group over $K$, and under our assumptions, $K$-rank($G$) $=1$. [*Case 1. $D\otimes {{\mathbb{R}}}\neq {\bf H}\times\cdots \times {\bf H}$*]{}. Then, the group $SL_1(D\otimes {{\mathbb{R}}})$ is not compact, and is semi-simple. If $U^+=\{\begin{pmatrix}1 & * & *\\ 0 & 1 & *\\ 0 & 0 & 1\end{pmatrix}\}$, $U_{2\alpha}=\{\begin{pmatrix}1 & 0 & x\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}: x+x^*=0\}$ and $P$ is the normaliser of $U^+$ in $G$ (then $P$ is a parabolic subgroup of $G$), there is the obvious Levi subgroup $M$ of $P$. Since $SL_1(D)$ is non-compact at infinity, it follows that $M_0\supset M_1$, where, $M_1=R_{E/K}(SL_1(D))$. Moreover, $M_1({{\mathbb{C}}})=SL_d({{\mathbb{C}}})\times SL_d({{\mathbb{C}}})$, and as a module over $M_1({{\mathbb{C}}})$, the Lie algebra ${\mathfrak u}_{2\alpha}$ of $U_{2\alpha}$ is ${{\mathbb{C}}}^d\otimes ({{\mathbb{C}}}^d)^*$. Thus, the weight spaces of the torus $T$= diagonal$\times $ diagonal of $SL_d\times SL_d$, on ${\mathfrak u}_{2\alpha}$ are all one dimensional. 
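The one-dimensionality of the weight spaces of $T$ on ${\mathfrak u}_{2\alpha}\simeq {{\mathbb{C}}}^d\otimes ({{\mathbb{C}}}^d)^*$ asserted above can be seen on a generic element: the eigenvalue on $e_i\otimes f_j^*$ is $s_i/t_j$, and these $d^2$ ratios are pairwise distinct for a generic choice of $(s,t)$. A minimal check (Python, $d=3$; the specific torus values are illustrative assumptions):

```python
from fractions import Fraction as F

# Generic torus element (s, t) in (diagonal of SL_3) x (diagonal of SL_3);
# each diagonal has determinant 1.
s = [F(2), F(3), F(1, 6)]
t = [F(5), F(7), F(1, 35)]
assert s[0] * s[1] * s[2] == 1 and t[0] * t[1] * t[2] == 1

# On C^d (x) (C^d)^*, the basis vector e_i (x) f_j^* is scaled by s_i / t_j.
eigs = [s[i] / t[j] for i in range(3) for j in range(3)]

# All d^2 = 9 eigenvalues distinct: every weight space is one dimensional.
assert len(set(eigs)) == 9
print("all", len(eigs), "eigenvalues distinct")
```

This is exactly the multiplicity-one situation in which the conjugation argument of the earlier sections produces $U_{2\alpha}(r_0O_K)$ inside $<m_0^r,u_0^r>$.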
Hence, there exist $m_0\in M_1(O_K)$, $u_0\in U_{2\alpha }(O_K)$ such that for every integer $r\geq 1$, there exists an integer $r_0$ with $<m_0^r,u_0^r>\supset U_{2\alpha }(r_0O_K)$.\ By Lemma \[zariskidense\], there exists an element $\gamma \in G(O_K)$ such that for any integer $r$, the group $\Gamma =<m_0^r,u_0^r,\gamma ^r>$ is Zariski dense in $G(K\otimes {{\mathbb{C}}})$. As in the previous sections, it suffices to prove that $\Gamma $ is arithmetic. Pick $g=umwv\in \Gamma $. Then, $\Gamma $ contains for some integers $r',r''$ the groups $^g(U_{2\alpha}(r'O_K))\supset ^u(U_{-2\alpha }(r''O_K))$ as well as $U_{2\alpha}(r'O_K)=^u(U_{2\alpha}(r'O_K))$.\ Consider the group $H=SU(J,D)$, where $J$ is the hyperbolic hermitian (with respect to $*$) form in two variables given by the matrix $\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}$. $H$ is absolutely simple over $K$. Moreover, it contains as a $K$-subgroup the group $M_H=R_{E/K}(GL_1(D))$, the embedding being given by $g\mapsto \begin{pmatrix} g & 0\\ 0 & (g^*)^{-1}\end{pmatrix}$. Now, $M_H(K\otimes {{\mathbb{R}}})=GL_1(D\otimes {{\mathbb{R}}})\supset {{\mathbb{R}}}^*\times SL_1(D\otimes {{\mathbb{R}}})$. Since $SL_1(D\otimes {{\mathbb{R}}})$ is not compact by assumption, it follows that $M_H(K\otimes {{\mathbb{R}}})$ has real rank $\geq 2$. The groups $U_{\pm 2\alpha}$ are maximal opposing unipotent subgroups of $H$, and hence by [@V], the group $<U_{2\alpha}(r'O_K),U_{-2\alpha}(r''O_K)>$ is an arithmetic subgroup of $H(O_K)$. Therefore, we get from the last paragraph that $\Gamma \supset ^u(\Delta')$ for some subgroup $\Delta' \subset H(O_K)$ of finite index, which implies that $\Gamma \supset ^u(\Delta)$ for some subgroup $\Delta $ of finite index in $SL_1(O_D)$ for some order $O_D$ in $D$. Since $SL_1(O_D)$ contains elements which do not have eigenvalue 1 in their action on $Lie U^+$, it follows from Proposition \[technical\] that $\Gamma $ is arithmetic.\ [*Case 2.
$D\otimes {{\mathbb{R}}}={\bf H}\times\cdots\times {\bf H}$ and $m\geq 2$*]{}. Then $E$ is totally real (and so is $K$), and $D$ must be a quaternionic division algebra over $E$. Moreover, $SU(h_m)(K\otimes {{\mathbb{R}}})=\{g\in SL_m(D\otimes {{\mathbb{R}}}): g^*h_mg=h_m\}=\{g=(g_1,g_2)\in SL_m({\bf H})^k \times SL_m({\bf H})^k: (g_2^{\iota},g_1^{\iota})(h_m,h_m)(g_1,g_2)=(h_m,h_m)\}$ where $\iota$ is the standard involution on ${\bf H}$ induced to $SL_m({\bf H})^k$. Thus, $SU(h_m)(K\otimes {{\mathbb{R}}})$ is isomorphic to $SL_m({\bf H})^k$. Since $m\geq 2$, the group $SL_m({\bf H})^k$ is semi-simple and non-compact, and contains a Zariski dense set of integral points, which are $SU(h_m)(O_K)=M_1(O_K)$. Take $P$ to be the standard parabolic subgroup of $G=SU(h)$. Hence $M_0({{\mathbb{C}}})\supset M_1({{\mathbb{C}}})=SL_{2m}({{\mathbb{C}}})^k$. As a module over $M_1({{\mathbb{C}}})$, the Lie algebra $Lie U^+({{\mathbb{C}}})=[{{\mathbb{C}}}^2\otimes ({{\mathbb{C}}}^{2m})^*\oplus {{\mathbb{C}}}^{2m}\otimes ({{\mathbb{C}}}^2)^*\oplus triv ^4]^k$. Choose a generic toral element $m_0\in M_1(O_K)$, and an element $u_0=u_1u_2\in U^+(O_K)$ with $u_1\in Exp({\mathfrak g}_\alpha)$ and $u_2\in Exp({\mathfrak g}_{2\alpha})$. Choose an element $\gamma \in G(O_K)$ of infinite order, in general position with respect to $u_0$ and $m_0$ (Lemma \[zariskidense\]). Then, for every integer $r$, the group $\Gamma =<u_0^r,\gamma ^r, m_0^r>$ is Zariski dense.\ Let $\Delta\subset U^+$ be the group generated by $\{^{m_0^{jr}}(u_0^r): j\in {{\mathbb{Z}}}\}$ and $Log :U^+{\rightarrow}{\mathfrak u}$ the log mapping. Then, $Log (\Delta )$ contains elements of the form $v_1,\cdots,v_N$ with each $v_i$ an eigenvector for $m_0\in \prod SL_{2m}({{\mathbb{C}}})=M_1({{\mathbb{C}}})$.
For the generic toral element $m_0$, the number of distinct [*eigenvalues*]{} on $V_1=({{\mathbb{C}}}^2)^*\otimes {{\mathbb{C}}}^{2m}\oplus \cdots \oplus ({{\mathbb{C}}}^2)^*\otimes {{\mathbb{C}}}^{2m}$ (the direct sum taken $k$ times) is $2mk$. Fix corresponding eigenvectors $v_1^i,\cdots, v_{2m}^i~(1\leq i\leq k)$ in $V_1$. Pick similarly $2mk$ eigenvectors $(v_1^i)^*,\cdots, (v_{2m}^i)^*~(1\leq i\leq k)$ in $V_1^*= ({{\mathbb{C}}}^2)\otimes ({{\mathbb{C}}}^{2m})^*\oplus \cdots \oplus ({{\mathbb{C}}}^2)\otimes ({{\mathbb{C}}}^{2m})^*$ (the direct sum taken $k$ times) for $m_0$. The trivial $M_1({{\mathbb{C}}})$ module ${\mathfrak g}_{2\alpha}$ is the $k$-fold direct sum of $M_2({{\mathbb{C}}})$ with itself. Denote the $i$-th component of this direct sum by $M_2({{\mathbb{C}}})_i$ ($1\leq i\leq k$). By general position arguments (since the toral element $m_0$ is generic) it can be proved that for each $i$, the $2m-1 (\geq 3)$ vectors $ v_1^i(v_2^i)^* -v_2^i(v_1^i)^*, \cdots, v_1^i(v_{2m}^i)^* -v_{2m}^i(v_1^i)^*$ together with the vector $v_2^i(v_3^i)^* -v_3^i(v_2^i)^*$, span all of the $i$-th component $M_2({{\mathbb{C}}})_i$. We choose $u_1$ such that the element $Log (u_1)$ has non-zero projections into each of the eigenspaces of $m_0$ in $V_1\oplus V_1^*$, and its projections $v_\mu ^i,(v_\mu ^i)^*$ are as in the foregoing.\ Therefore, $\Gamma \supset Exp (\Delta )\supset U_{2\alpha}(r'O_K)$ for some integer $r'$. To prove Theorem 1 in this case, by standard arguments, it is enough to prove that $\Gamma$ is arithmetic. Take a generic element $g=umwv\in \Gamma $. Then, $\Gamma $ contains the group $<^g (U_{-2\alpha}(r'O_K)), U_{2\alpha}(r'O_K)u_1^{r'{{\mathbb{Z}}}}>$ (recall that $u_1\in Exp({\mathfrak g}_\alpha)$).
Thus, for some other integer (denoted again by $r'$ to save notation), $\Gamma $ contains $^u<U_{-2\alpha}(r'O_K), u_1^{r'{{\mathbb{Z}}}}U_{2\alpha}(r'O_K)>$.\ Let us view $h_0=\begin{pmatrix} 0 & 0 & 1\\0 & 1 & 0\\ 1 & 0 & 0\end{pmatrix}$ as a Hermitian form for $E/K$. Set $H=SU(h_0)\simeq SU(2,1)$. This is an algebraic group over $K$, and has corresponding upper triangular unipotent group $U_H^+$. The Lie algebra spanned by $Elog(u_1)$ and $Elog (^w(u_1))$ is easily seen to be isomorphic to that of $H$ with $Lie(U_H^+)=Elog (u_1)\oplus [Elog u_1,Elog u_1]$ (the square bracket denotes the commutator). From the conclusion of the last paragraph, we get $\Gamma \supset ^u<U_{-2\alpha}(r'O_K), u_1U_{2\alpha}(r'O_K)>$. By [@V], the latter group contains $^u(SU(2,1)(r'O_K))$. Hence $\Gamma $ contains $^u(SU(2,1)(r'O_K))$ as $g=umwv$ varies, and for some fixed $g'=u'm'wv'$, contains $^{u'}(SU(2,1)(r'O_K))$ as well.\ The toral element $h\in SU(2,1)$ of the form $h=\begin{pmatrix}\theta & 0 & 0\\ 0 & \theta ^{-2} & 0\\ 0 & 0 & \theta \end{pmatrix}$ acts on the root space ${\mathfrak g}_\alpha$ by the eigenvalues $\theta, \cdots, \theta $ and $\theta ^3$ (as may be easily seen). Therefore, $h$ has no fixed vectors in ${\mathfrak g}_\alpha$. Now, by the last paragraph, $\Gamma $ contains the group $^u (h)$ ($u$ generic). Hence, by Proposition \[technical\], $\Gamma $ is arithmetic.\ [*Case 3. $D\otimes {{\mathbb{R}}}={\bf H}^{2k}$ and $m\leq 1$, but $k\geq 2$*]{}. Again, $E$ and $K$ are totally real. $G=SU(h)$ with $G(K\otimes {{\mathbb{R}}})=SL_3({\bf H})^k$ if $m=1$ and $SL_2({\bf H})^k$ if $m=0$. Fix $\theta \in O_K^*$ such that ${{\mathbb{Z}}}[\theta ^r]$ is a subgroup of finite index in $O_K$ for all $r\neq 0$ (Lemma \[exist\]). Let $\alpha $ be a totally positive element such that $E=K(\sqrt{\alpha})$. Denote by $t(\theta )$ (resp. $u_{+}$) the matrix $\begin{pmatrix}\theta & 0 & 0\\0 & 1 & 0\\ 0 & 0 & \theta ^{-1}\end{pmatrix}$ (resp.
$\begin{pmatrix}1 & 0 & \sqrt{\alpha}\\0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}$) if $m=1$ and the matrix $\begin{pmatrix}\theta & 0\\ 0 & \theta ^{-1}\end{pmatrix}$ (resp. $\begin{pmatrix}1 & \sqrt{\alpha}\\ 0 & 1\end{pmatrix}$) if $m=0$. By the choice of $\theta $, the group $<t(\theta )^r, u_{+}^r>$ contains, for every $r$, $u_{+}^{r'O_K}$ for some integer $r'$. Pick an element $\gamma \in G(O_K)$ in general position as in Proposition \[zariskidense\]. Then for every $r\neq 0$, $\Gamma =<t^r,u_{+}^r,\gamma ^r>\subset G(O_K)$ is Zariski dense in $G(K\otimes {{\mathbb{C}}})$. Pick a generic element $g=umwv\in \Gamma $. Then, $$\Gamma \supset <^g(u_{+}^{r'O_K}), u_{+}^{r'O_K}>={}^u<^m(u_-^{r'O_K}), u_{+}^{r'O_K}>.$$ The element $m$ is of the form $\begin{pmatrix}a & 0 & 0\\0 & b & 0\\ 0 & 0 & (a^*) ^{-1}\end{pmatrix}$ with $b\in SU(h_m)$ if $m=1$ and $\begin{pmatrix}a & 0\\ 0 & (a^*)^{-1}\end{pmatrix}$ if $m=0$, for some $a\in D^*$. Hence $^m(u_-^{r'O_K})=\begin{pmatrix} 1 & 0 & 0\\0 & 1 & 0\\ \sqrt{\alpha}(aa^*)^{-1}r'O_K & 0 & 1\end{pmatrix}$, $u_{+}^{r'O_K}=\begin{pmatrix}1 & 0 & r'O_K\\0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}$ if $m=1$ (and $^m(u_-^{r'O_K})=\begin{pmatrix}1 & 0\\ (aa^*)^{-1}r'O_K & 1\end{pmatrix}$, $u_{+}^{r'O_K}=\begin{pmatrix}1 & r'O_K\\ 0 & 1\end{pmatrix}$ if $m=0$). These two groups $^m(u_-)$ and $u_{+}$ generate $SL_2$ over $K(c)$ if $m=1$ and $SL_2$ over $K$ if $m=0$.\ The element $c=aa^*\in D^*$ has its reduced norm and trace in $E$. But, in fact, $Tr (c)$ and $N(c)$ lie in $K$ itself, as may be easily seen. Now, $c$ being in the quaternionic division algebra $D$ over $E$ with $D\otimes {{\mathbb{R}}}={\bf H}^{2k}$, generates a totally imaginary quadratic extension over the totally real $E$. Hence $K(c)/K$ is also a totally imaginary quadratic extension. 
By the SL(2) result Proposition \[CM’\] ($K$ is a totally real number field with infinitely many units), we get that $<^m(u_-^{r'O_K}), u_{+}^{r'O_K}>$ is a subgroup of finite index in $SL_2(O_{K(c)})$ if $m=1$ and in $SL_2(O_K)$ if $m=0$. In particular, the group $<^m(u_-^{r'O_K}), u_{+}^{r'O_K}>$ contains the group $t^{r''{{\mathbb{Z}}}}=t(\theta )^{r''{{\mathbb{Z}}}}$ for some $r''\neq 0$.\ Thus, $\Gamma $ contains the group $^u(t^{r''{{\mathbb{Z}}}})$, where $u$ is generic and $t$ does not have eigenvalue one in its action on the Lie algebra $Lie~U^+$. Therefore, by Proposition \[technical\], $\Gamma $ is arithmetic.\ [*Case 4. $D\otimes {{\mathbb{R}}}={\bf H}^{2k}$, $m=1$ and $k=1$*]{} (i.e. $K={{\mathbb{Q}}}$). In this case, we will explicitly exhibit elements $u_{+}$, $u_-$ and $t$ in $G(O_K)$ such that for every $r\neq 0$, the group $\Gamma =<u_+^r,u_-^r,t^r>$ is arithmetic. This will prove Theorem 1 in this case. Since $D\otimes {{\mathbb{R}}}$ is a product of the Hamiltonian quaternions ${\bf H}$, it follows that $E/{{\mathbb{Q}}}$ is real quadratic. Fix a generic element $a\in D^*$. Then, the element $aa^*$ generates, as in the last case, an imaginary quadratic extension over ${{\mathbb{Q}}}$. Pick an element $t_2\in {{\mathbb{Q}}}(aa^*)\setminus {{\mathbb{Q}}}$ such that $t_2^2\in {{\mathbb{Q}}}$. Now choose $t_1\in D$ such that $t_1^2\in E$ but $t_1$ does not commute with $t_2$. Write $E={{\mathbb{Q}}}(\sqrt{z})$ where $z \in {{\mathbb{Q}}}$ is positive. Pick a unit $\theta \in O_E^*$ of infinite order.\ Write $$u_+=\begin{pmatrix}1 & 1 & -\frac{1}{2}\\0 & 1 & -1\\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix}1 & 0 & \sqrt{\alpha} \\0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}, u_-=\begin{pmatrix}1 & 0 & 0\\t_1 & 1 & 0\\ -\frac{t_1^2}{2} & -t_1 & 1 \end{pmatrix}\begin{pmatrix}1 & 0 & 0\\0 & 1 & 0\\ t_2\sqrt{\alpha} & 0 & 1\end{pmatrix},$$ and $t=\begin{pmatrix}\theta & 0 & 0\\0 & \theta ^{-2} & 0\\ 0 & 0 & \theta\end{pmatrix}$. 
Now, the group $H=SU(2,1)$ for the extension $E/{{\mathbb{Q}}}$ embeds in $G$ with the corresponding group of upper and lower triangular unipotent matrices $U_H^{\pm}$. By Proposition \[SU(2,1)\] applied to this $SU(2,1)$, we see that $\Gamma $ contains $U_H^{\pm}(r'O_{{{\mathbb{Q}}}(t_2)})$ for some integer $r'$.\ In particular, $\Gamma $ contains, for some $r'$, the subgroup $U_H(r'O_{E\otimes F})$ where $E\otimes F$ is a [*field*]{} ($F={{\mathbb{Q}}}(t_2)$ is imaginary quadratic, and $E/{{\mathbb{Q}}}$ is real quadratic). Taking commutators with $u_-$, we obtain that $\Gamma $ contains elements $v_-\in U_{-2\alpha}$ of the form $\begin{pmatrix}1 & 0 & 0\\0 & 1 & 0\\ r'x & 0 & 1\end{pmatrix}$ with $x$ in the subgroups ${{\mathbb{Z}}}$, $t_1{{\mathbb{Z}}}$, $t_2{{\mathbb{Z}}}$ and $t_1t_2{{\mathbb{Z}}}$ of $O_D$. However, the sum of these subgroups is a subgroup of finite index in $O_D$. Therefore, for some other $r'\neq 0$, we get $\Gamma \supset U_{-2\alpha}(r'{{\mathbb{Z}}})$ (i.e. $\Gamma $ intersects $U_{-2\alpha}$ in an arithmetic subgroup). Then, by Proposition \[highestroot\], $\Gamma $ is arithmetic.\ [*Case 5. $D\otimes {{\mathbb{R}}}= {\bf H}^{2k}$, $m=0$, $k=1$*]{} (i.e. $K={{\mathbb{Q}}}$). Then, $G({{\mathbb{R}}})=SU(h)({{\mathbb{R}}})=SL_2({\bf H})$. Therefore, $G({{\mathbb{R}}})$ has real rank one, and this case is excluded in Theorem 1.\ **Groups of type B** -------------------- $G=SO(f)$ with $f$ a non-degenerate quadratic form in $2l+1\geq 5$ variables over a number field $K$. $f$ is the direct sum of a hyperbolic form and an anisotropic form in $2l-1$ variables: $$f=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\oplus f_m$$ with $m=2l-1\geq 3$. Assume that ${{\mathbb{R}}}$-rank ($G_\infty$) $\geq 2$, where $G_\infty=G(K\otimes {{\mathbb{R}}})$. Take $P$ to be the parabolic subgroup $P=\{\begin{pmatrix}a & * & *\\0 & b & *\\ 0 & 0 & a^{-1}\end{pmatrix}\in G: a\in GL_1/K, b\in SO(f_m)\}$. 
The unipotent radical $U^+$ of $P$ consists of matrices of the form $\begin{pmatrix}1 & x & -\frac{\sum x_i^2}{2}\\0 & 1_m & -^tx \\ 0 & 0 & 1\end{pmatrix}$ with $1_m$ the $m\times m$ identity matrix, and $x\in {\bf A}^{2l-1}$, the affine $(2l-1)$-space over $K$. Denote by ${\mathfrak u}$ the Lie algebra of $U^+$. Let $U^-$ be the transpose of $U^+$ (it lies in $G$). Let $M$ be the Levi subgroup of $P$ given by $M=\{\begin{pmatrix}a & 0 & 0\\0 & b & 0\\ 0 & 0 & a^{-1}\end{pmatrix}\in P: a\in GL_1/K, b\in SO(f_m)\}$. Put $H=SO(f_m)$.\ [*Case 1. $H_\infty=H(K\otimes {{\mathbb{R}}})$ is non-compact*]{}. Then, $H_\infty$ is a non-compact semi-simple group, hence $H(O_K)$ is Zariski dense in $H(K\otimes {{\mathbb{C}}})$. Therefore, $M_0 \supset H$. As a module over $H({{\mathbb{C}}})=SO(2l-1,{{\mathbb{C}}})$, ${\mathfrak u}({{\mathbb{C}}})=St={{\mathbb{C}}}^{2l-1}$ is the standard representation, and the maximal torus $T_H$ of $H({{\mathbb{C}}})$ has distinct eigenvalues. Hence by Proposition \[multone\], Theorem 1 is true for $G=SO(f)$ in this case.\ [*Case 2. $H_\infty=H(K\otimes {{\mathbb{R}}})$ is compact*]{}. Then, $K$ is totally real, and $(2\leq )~{{\mathbb{R}}}$-rank ($G_\infty$) $={{\mathbb{R}}}$-rank ($GL_1(K\otimes {{\mathbb{R}}})$) $=[K:{{\mathbb{Q}}}]$, therefore $K\neq {{\mathbb{Q}}}$. Now, $M_0$ is rather small. $M(O_K)$ is commensurate to $GL_1(O_K)=O_K^*$, hence $M_0= \{\begin{pmatrix}a & 0 & 0\\0 & 1 & 0\\ 0 & 0 & a^{-1}\end{pmatrix}\in G: a\in GL_1/K\}$. In this case, we will use the fact that $SO(f)$ contains many subgroups $PSL_2(E)$ for totally [*imaginary*]{} quadratic extensions $E$ of the totally real number field $K$. To see this, we first prove a lemma. Write the anisotropic form $f_m$ as a direct sum $f_m=\phi\oplus \phi '$ with $\phi $ a quadratic form in [**two**]{} variables; write $\phi =1 \oplus \lambda$ with $\lambda \in K$. 
Form the quadratic forms $Q=\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix} \oplus \phi$, and $Q_1= \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix} \oplus 1$. Then, for any archimedean completion $K_v$ of $K$, $SO(Q)(K_v)=SO(3,1)({{\mathbb{R}}})\simeq PSL_2({{\mathbb{C}}})$. Let $Spin(Q)$ denote the simply connected two sheeted cover of $SO(Q)$. There exists a totally imaginary quadratic extension $E/K$ such that $Spin(Q)$ is $K$-isomorphic to the group $R_{E/K}(SL_2)$ where $R_{E/K}$ denotes the Weil restriction of scalars. Clearly, $SO(Q)$ is $K$-simple. Hence $Spin (Q)=R_{E/K}(H_0)$ with $H_0$ an absolutely simple simply connected group over $E$, for some extension $E/K$, say of degree $d$. Since $SO(Q)$ is isotropic over $K$, so is $H_0$ over $E$. Since $dim(SO(Q)/K)=6$, one sees that $dim (H_0)=\frac{6}{d}$. But $dim (H_0)\geq 3$ since it is absolutely simple, hence $d\leq 2$. Since $Q$ is a form in four variables, $Spin (Q)$ is not absolutely simple. Therefore, $d=2$ (i.e. $E/K$ is a quadratic extension), and $H_0$ has dimension $3$ (and is isotropic over $E$). Therefore, $H_0=SL_2$. Since $SO(Q)(K_v)=PSL_2({{\mathbb{C}}})$, it follows that $Spin(Q)(K_v)=SL_2({{\mathbb{C}}})=H_0(E\otimes K_v)$, for every archimedean (hence real) completion of $K$. Hence $E$ is totally imaginary. The inclusions of the quadratic spaces $Q_1$ and $Q$ in $f$ induce inclusions of $SO(Q_1)$ and $SO(Q)$ into $SO(f)$ defined over $K$. They further induce corresponding inclusions (defined over $K$) of the groups of unipotent upper (and lower) triangular matrices $U_{Q_1}^{\pm}$ and $U_Q^{\pm}$ into the group $U^{\pm}$ defined at the beginning of this subsection. Let $v\in U^+_Q(O_K)\setminus U_{Q_1}(O_K)$ ($U^+_{Q_1}$ is one dimensional). Then the $SL_2$ result (Proposition \[CM’\]) shows that $<v^{rO_K}, U^-(rO_K)>$ is a subgroup of finite index in $SO(Q)(O_K)$ (which is commensurate to $SL_2(O_E)$).\ Let $H=SO(Q_1)\subset SO(f)$, $U^+_H=U^+\cap H$ be as in the last paragraph. 
Let $t=\begin{pmatrix}\theta & 0 & 0\\0 & 1_m & 0\\ 0 & 0 & \theta ^{-1}\end{pmatrix}$ be in $M(O_K)$, with $\theta \in O_K^*$ such that ${{\mathbb{Z}}}[\theta ^r]$ is of finite index in $O_K$ (Lemma \[exist\]). Fix $u_+\in U^+_H(O_K)$, $u_+\neq 1$. Let $\gamma $ be in general position with respect to $t $ and $u_+$. Write, for an integer $r\neq 0$, $$\Gamma =<u_+^r, t^r,\gamma ^r>.$$ Then $\Gamma $ is Zariski dense in $G(K\otimes {{\mathbb{C}}})$. Moreover, by the assumptions on $\theta$, $\Gamma $ contains the subgroup $V^+(r')=u_+^{r'O_K}$ for some $r'$. Define $V^-(r')$ as the $w$-conjugate of $V^+(r')$. Pick a generic element $g=um'wv\in \Gamma $. Then, $\Gamma $ contains the group $<^g(V^+(r')),V^+(r')>={}^{um'w}<V^+(r''), V^+(r')>$ for some $r''$. The latter group contains $^u<^{m'}(V^-(r'')), V^+(r'')>$ (replace $r''$ by a larger $r''$ if necessary).\ If $log u_+=X\in Lie U^+\simeq K^m$, then for the generic $m'$, the vectors $X$ and $^{m'}(X)$ span a two dimensional subspace $W$ of the anisotropic quadratic space $(K^m,f_m)$. Write the restriction of $f_m$ to $W$ as $\mu \phi$ for some $\mu \in K$, and $\phi $ as in the Lemma above, with $\phi (X,X)=1$, say. Then, $\Gamma $ contains $^u<^{m'}(\exp (r''O_KX)), \exp (r''O_KX)>$, which by Proposition \[CM’\], contains $^u(\Delta )$ for some subgroup $\Delta $ of finite index in $SO(Q)(O_K)$, where $Q$ is the four dimensional quadratic form as in the Lemma. Now, $\Delta $ contains $t^{r_0{{\mathbb{Z}}}}$ for some $r_0$. Hence $\Gamma $ contains $^u(t^{r_0{{\mathbb{Z}}}})$, with $t\in M_0(O_K)\cap \Gamma $. By Proposition \[technical\], $\Gamma $ is arithmetic. This proves Theorem 1 for $K$-rank one groups of type B.\ Groups of type C ---------------- The groups of type C are $Sp_{2n}$ over $K$ (which does not have $K$-rank 1), and certain special unitary groups over quaternionic division algebras. In the case of $K$-rank one groups, we need only consider the groups of the latter kind. 
Thus, let $D$ be a quaternionic central division algebra over $K$, $\sigma : D{\rightarrow}D$ an involution of the [*first*]{} kind, such that the space of $\sigma $-invariants in $D$ is precisely $K$: $D^{\sigma}=K$. Suppose $h:D^n\times D^n{\rightarrow}D$ is a $\sigma$-hermitian form which is a sum of a hyperbolic form in two variables and an anisotropic form: $$h= \begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix} \oplus h_{n-2}$$ with $h_{n-2}$ an anisotropic hermitian form on $D^{n-2}$. The subgroup $P$ of $G$ consisting of matrices of the form $ \begin{pmatrix}g & 0 & 0\\0 & h & 0\\0 & 0 & (g^{\sigma})^{-1}\end{pmatrix} \begin{pmatrix}1 & z & w\\0 & 1_{n-2} & 0\\0 & -^tz & 1\end{pmatrix}$ is a parabolic subgroup with unipotent radical $U^+$ consisting of matrices $\begin{pmatrix}1 & z & w\\0 & 1_{n-2} & 0\\0 & -^tz & 1\end{pmatrix}$ with $w+w^{\sigma}=0$. The commutator subgroup of $U^+$ is $U_{2\alpha}$, the set of matrices $\begin{pmatrix}1 & 0 & w\\ 0 & 1_{n-2} & 0\\0 & 0 & 1\end{pmatrix}$ with $w+w^{\sigma}=0$, having dimension $3$ over $K$. The Levi subgroup $M$ of $P$ consists of elements of the form $\begin{pmatrix}g & 0 & 0\\0 & h & 0\\0 & 0 & (g^{\sigma})^{-1}\end{pmatrix}$.\ [*Case 1. $D\otimes {{\mathbb{R}}}\neq {\bf H}\times \cdots \times {\bf H}$*]{}. Then, $SL_1(D\otimes {{\mathbb{R}}})$ is a non-compact semi-simple group. Therefore, $M_0$ contains $SL_1(D)$, embedded as the subgroup of $M$ of matrices of the form $\begin{pmatrix}g & 0 & 0\\0 & 1_{n-2} & 0\\0 & 0 & (g^{\sigma})^{-1}\end{pmatrix}$ with $g\in SL_1(D)$. Note that for any embedding of $K$ in ${{\mathbb{C}}}$, we have $SL_1(D\otimes _K{{\mathbb{C}}})=SL_2({{\mathbb{C}}})$. As a representation of $SL_2({{\mathbb{C}}})$, the module $Lie U_{2\alpha}$ is $Sym ^2({{\mathbb{C}}}^2)$ (since the space $w=-w^{\sigma}$ is 3-dimensional), which is multiplicity free for the diagonal torus in $SL_2({{\mathbb{C}}})$. 
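For concreteness, the multiplicity freeness can be checked directly (an illustrative verification, not part of the original argument): under the diagonal torus $\{\mathrm{diag}(z,z^{-1})\}\subset SL_2({{\mathbb{C}}})$, the standard module ${{\mathbb{C}}}^2={{\mathbb{C}}}e_1\oplus {{\mathbb{C}}}e_2$ has weights $z$ and $z^{-1}$, so $$Sym^2({{\mathbb{C}}}^2)={{\mathbb{C}}}e_1^2\oplus {{\mathbb{C}}}e_1e_2\oplus {{\mathbb{C}}}e_2^2$$ carries the three pairwise distinct weights $z^2$, $1$ and $z^{-2}$; each weight space is one dimensional.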
Therefore, there exists an $m_0\in SL_1(O_D)$, and $u_0\in U_{2\alpha}(O_K)$, such that the group generated by the elements $^{m_0^j}(u_0)~(j\in {{\mathbb{Z}}})$ has finite index in $U_{2\alpha}(O_K)$.\ Choose an element $\gamma \in G(O_K)$ in general position with respect to $m_0,u_0$ as in Lemma \[zariskidense\]. Write, for an integer $r\neq 0$, $\Gamma =<m_0^r,u_0^r, \gamma ^r>$. Then, $\Gamma $ is Zariski dense in $G(K\otimes {{\mathbb{C}}})$. By Proposition \[highestroot\], $\Gamma $ is arithmetic. This proves Theorem 1 in this case.\ [*Case 2. $D\otimes {{\mathbb{R}}}={\bf H}^k$, $k=[K:{{\mathbb{Q}}}]\geq 2$*]{}. Then $K$ is totally real, and contains an element $\theta \in O_K^*$ such that the ring generated by $\theta $ has finite index in the ring of integers $O_K$ of $K$. Pick a non-trivial element $u_+\in U_{2\alpha}(O_K)$ and let $u_-$ denote its conjugate by the Weyl group element $w$. Let $t=\begin{pmatrix}\theta & 0 & 0\\0 & 1_{n-2} & 0\\0 & 0 & \theta ^{-1}\end{pmatrix}\in G(O_K)$. For $r\neq 0$, the group $<t^r,u_+^r>\supset u_+^{r'O_K}=V^+$ for some integer $r'\neq 0$. Choose $\gamma \in G(O_K)$ in general position with respect to $t,u_+$. Then, $\Gamma =<u_+^r,t^r, \gamma ^r>$ is Zariski dense (Lemma \[zariskidense\]). Then, $V^+\subset \Gamma $. Pick a generic element $g=umwv\in \Gamma $. Then, $\Gamma $ contains the subgroup $<^g(V^+),V^+>\supset {}^u<^m(u_-^{r''O_K}), u_+^{r'O_K}>$ for some other integer $r''$. If $u_+=\begin{pmatrix}1 & 0 & w\\0 & 1 & 0\\0 & 0 & 1\end{pmatrix} $ then $^m(u_-)=\begin{pmatrix} 1 & 0 & 0\\0 & 1 & 0\\(a^{\sigma})^{-1}wa^{-1} & 0 & 1\end{pmatrix}$ where $m=\begin{pmatrix}a & 0 & 0\\0 & 1 & 0\\0 & 0 & (a^{\sigma})^{-1}\end{pmatrix}$. Since $m$ is generic, the element $\xi =(a^{\sigma })^{-1}wa^{-1}w^{-1}\in D$ generates a quadratic (totally imaginary, by the assumption on $D$ in this case) extension of $K$. 
Therefore, by Proposition \[CM’\], the group $<^m(u_-^{r'O_K}),u_+^{r'O_K}>$ is an arithmetic subgroup of $SL_2(K(\xi))$. In particular, $\Gamma $ contains $^u(t^{r_0{{\mathbb{Z}}}})$ for some integer $r_0$. The action of $t$ on $Lie U^+$ has no fixed vectors. By Proposition \[technical\], $\Gamma $ is arithmetic.\ [*Case 3. $D\otimes {{\mathbb{R}}}={\bf H}$, $k=1$ (i.e. $K={{\mathbb{Q}}}$)*]{}. Let $H=SU(h_{n-2})$. Since ${{\mathbb{R}}}$-rank ($G$) $\geq 2$, it follows that ${{\mathbb{R}}}$-rank ($H$) $\geq 1$. Thus, $n-2\geq 2$. But then, $H({{\mathbb{R}}})=SU(h_{n-2},{\bf H})$ is isotropic if and only if $h_{n-2}$ represents a zero over ${{\mathbb{R}}}$. Since $n-2\geq 2$, and a hermitian form in $\geq 2$ variables over a quaternionic algebra (with respect to an involution of the first kind whose fixed points are of dimension one) represents a zero over ${{\mathbb{Q}}}_p$ for every prime $p$, it follows by the Hasse principle (see Ch. 6, section (6.6), Claim (6.2) of [@PR]) that $h_{n-2}$ represents a zero over ${{\mathbb{Q}}}$ as well, whence ${{\mathbb{Q}}}$-rank ($H$) $\geq 1$ and ${{\mathbb{Q}}}$-rank ($G$) $\geq 2$; this case is not under consideration in this section.\ Classical groups of type D -------------------------- [*Case 1. $G=SO(f)$*]{}. Here, $f=J\oplus f_{2n-2}$ with $J=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}$ being the hyperbolic form on $K^2$, $f_{2n-2}$ an anisotropic quadratic form in $2n-2$ variables over $K$, and $n\geq 4$ (i.e. $n-1\geq 3$). Now, the real rank of $SO(f)(K\otimes {{\mathbb{R}}})$ is $\geq 2$. The argument for groups of type B applies without change.\ We now assume that $G=SU_n(h,D)$. Here, $D$ is a quaternionic central division algebra over $K$ with an involution $\sigma$ of the first kind such that the dimension of the set of fixed points $D^{\sigma}$ is three (in the symplectic case, this dimension was one). 
$h=J\oplus h_{n-2}$, where $ J=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}$ is the hyperbolic form on $D^2$ and $h_{n-2}$ is an anisotropic hermitian form. Let $P$, $U^+$ and $M$ be as in the symplectic case. The only change from that case is that the involution $\sigma$ has three dimensional fixed space, hence $U_{2\alpha}$, which consists of matrices of the form $\begin{pmatrix}1 & 0 & w\\0 & 1_{n-2} & 0\\0 & 0 & 1\end{pmatrix}$ with $w+w^{\sigma}=0$, is one dimensional over $K$. [*Case 2. $K$ has infinitely many units*]{}. Then, by Proposition \[2alpha\], Theorem 1 holds in this case.\ [*Case 3. $K$ is an imaginary quadratic extension of ${{\mathbb{Q}}}$*]{}. Then, $SL_1(D\otimes {{\mathbb{R}}})=SL_2({{\mathbb{C}}})$. Moreover, $SU(h_{n-2})(K\otimes {{\mathbb{R}}})=SO_{2n-4}({{\mathbb{C}}})$. Note that $n\geq 4$. Therefore, $SO_{2n-4}({{\mathbb{C}}})$ is a semi-simple group. Hence, $M_0(K\otimes {{\mathbb{R}}})= M_0({{\mathbb{C}}})\supset SL_2({{\mathbb{C}}})\times SO_{2n-4}({{\mathbb{C}}})$. Then, the product of the diagonal tori in the latter group acts with multiplicity one on the Lie algebra $Lie U^+({{\mathbb{C}}})\simeq {{\mathbb{C}}}^2\otimes {{\mathbb{C}}}^{2n-4}\oplus triv$. By Proposition \[multone\], Theorem 1 holds.\ [*Case 4. $K={{\mathbb{Q}}}$ and $D\otimes {{\mathbb{R}}}\neq {\bf H}$*]{}. Then, $SL_1(D\otimes {{\mathbb{R}}})$ is non-compact and semi-simple. Now, the group $SL_1(D)\times SL_2/K$ is embedded in $SU(J,D)$ where $J=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}$ is the hyperbolic form in two variables. Therefore, the real rank of $SU(J,D)$ is $\geq 2$.\ Write $h_{n-2}=\lambda \oplus h_{n-3}$ for some $\lambda \in D^{\sigma}\setminus \{0\}$. After a scaling, we may assume that $\lambda =1$. Consider the group $G_1=SU_3(J\oplus 1, D)$. Let $P_1$, $U_1$ be the intersections of $P$ and $U$ with $G_1$. They are respectively a parabolic subgroup and its unipotent radical in $G_1$. 
By the last paragraph, it follows that $M_0\simeq SL_1(D)$.\ Now, the Tits diagram of $G_1$ is that of $^2A_3\simeq SU(1,3)$ over ${{\mathbb{Q}}}$, where $SU(1,3)$ actually denotes the $K$-rank one group $SU(B)$ with $B$ a hermitian form in four variables over a quadratic extension $E$ of ${{\mathbb{Q}}}$, such that the maximal isotropic subspaces of $E^4$ for the form $B$ are one dimensional. Thus, $G_1$ is as in Cases 3 or 4 of subsection (5.2.1). In Case 3 of (5.2.1), it is easy to see (and is observed there) that $M_0$ is not semi-simple. Therefore, only Case 4 of (5.2.1) applies. In this case (see (5.2.1), Case 4), there is an embedded $H=SO(1,3)$ in this $G_1\simeq SU(1,3)$ of real rank two. Choose the unipotent element $u_0\in H\cap U_1({{\mathbb{Z}}})\subset U^+({{\mathbb{Z}}})$ and $v_0\in (U_1)_{-2\alpha}({{\mathbb{Z}}})=U_{2\alpha}({{\mathbb{Z}}})$ (the last equality holds since the space ${\mathfrak g}_{2\alpha}$ is one dimensional) and an element $\theta \in M_0\cap H$ as in (5.2.1), Case 4. Set $V^+(r)=H\cap U^+(r{{\mathbb{Z}}})U_{2\alpha}(r{{\mathbb{Z}}})$. Then, by the argument of section (5.2.1), Case 4, $V^+(r)$ is contained in the two-generated group $<\theta ^r, (u_0v_0)^r>$. Let $\gamma \in G({{\mathbb{Z}}})$ be in general position with respect to $u_0v_0$ and $\theta$. By Lemma \[zariskidense\], for each $r$, the group $\Gamma=<(u_0v_0)^r,\theta ^r,\gamma ^r>$ is Zariski dense. To prove Theorem 1, it is sufficient (by the now familiar arguments) to prove that $\Gamma $ is arithmetic. Let $V^-(r)$ denote the $w$-conjugate of $V^+(r)$.\ Pick a generic element $g=umwv\in \Gamma $. Then $\Gamma $ contains the group $<^g(V^+), V^+>\supset {}^u<^m(V^-(r')), V^+(r')>$ for some $r'$. 
Thus, $\Gamma $ contains the subgroup $^u<U_{-2\alpha}(r'{{\mathbb{Z}}}), U_H(r'{{\mathbb{Z}}})>$ where $U_H=U^+\cap H$; it is proved in Case 4 of (5.2.1) that the group $<U_{-2\alpha}(r'{{\mathbb{Z}}}),U_H(r'{{\mathbb{Z}}})>$ contains $\theta ^{r''{{\mathbb{Z}}}}$ for some $r''$. Therefore, $\Gamma $ contains $^u(\theta ^{r''{{\mathbb{Z}}}})$. By Proposition \[technical\], $\Gamma $ is arithmetic. [*Case 5. $K={{\mathbb{Q}}}$ and $D\otimes {{\mathbb{R}}}={\bf H}$*]{}. If, as before, $J=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}$ is the hyperbolic form in two variables over the division algebra $D$, then $SU(J,D)({{\mathbb{R}}})=\{g\in SL_2({\bf H}): g^{\sigma}Jg=J\}$ has ${{\mathbb{R}}}$-rank $1$. Recall that $h=J\oplus h_{n-2}$. Since ${{\mathbb{R}}}$-rank ($SU(h)$) $\geq 2$, we must have ${{\mathbb{R}}}$-rank ($SU(h_{n-2})$) $\geq 1$. Hence $h_{n-2}$ represents a zero over ${{\mathbb{R}}}$, and therefore, $n-2\geq 2$.\ If $n-2\geq 3$, then write $h_{n-2}=h_2\oplus h_{n-4}$. Now, $G_0=SU(J\oplus h_2,D)$ is an absolutely simple ${{\mathbb{Q}}}$-subgroup of $G$. We will show that ${{\mathbb{Q}}}$-rank ($G_0$) $\geq 2$, which will prove the same for $G$, contradicting our assumption that the ${{\mathbb{Q}}}$-rank of $G$ is one.\ The group $G_0$ is of type $D_4$, with ${{\mathbb{Q}}}$-rank one. Thus, in the diagram of $G_0$, there is one circled root. If the anisotropic kernel $M'$ is ${{\mathbb{Q}}}$-simple, then $M'({{\mathbb{C}}})\supset SL_2^3$, and therefore $<U^+_{G_0}({{\mathbb{Z}}}), S_0({{\mathbb{Z}}})>$ (with $S_0$ a suitable torus in $M'$) is two generated, say by $u_+$ and $\theta $. By considering an element $\gamma \in G(O_K)$ in general position, it follows that $<\gamma ^r, \theta ^r, u_+^r>$ is Zariski dense. Write $V^+$ for the group generated by $\theta ^r$ and $u_+^r$, and $V^-$ for its conjugate by $w$. Since $G_0$ contains the $2\alpha$ root group $U_{2\alpha}$, it follows that $V^+$ is normalised by the unipotent arithmetic group $U^+(r{{\mathbb{Z}}})$. 
Consequently, given $g=umwv \in \Gamma \cap U^+MwU^+$, $\Gamma $ contains the group $<^g(V^+),V^+>={}^u<^m(V^-),V^+>$. The latter contains $\Delta={}^u<U_{-2\alpha}(r{{\mathbb{Z}}}),U^+_{G_0}(r{{\mathbb{Z}}})>$. Since $G_0$ is of higher real rank (and of ${{\mathbb{Q}}}$-rank one), any Zariski dense subgroup of $G_0({{\mathbb{Z}}})$ intersecting $U^+_{G_0}({{\mathbb{Z}}})$ in an arithmetic group is of finite index in $G_0({{\mathbb{Z}}})$ by [@V]. Therefore, $\Delta$ is of finite index in $G_0({{\mathbb{Z}}})$ and hence $\Gamma \supset {}^u(S_0(r'{{\mathbb{Z}}}))$ for some integer $r'$. Now, non-trivial elements of $S_0(r'{{\mathbb{Z}}})$ act by eigenvalues $\neq 1$ on the $\alpha$ root space ${\mathfrak g}_\alpha$. An argument similar to the proof of Proposition \[technical\] shows that the Zariski closure $\frak v$ of $\Gamma \cap U^+$ has Lie algebra which contains ${\mathfrak g}_\alpha$. The latter [*generates*]{} ${\mathfrak u}$. Therefore, $\frak v={\mathfrak u}$ and $\Gamma \supset U^+(r''{{\mathbb{Z}}})$ for some $r''$. Thus, by [@V], $\Gamma $ is arithmetic, and Theorem 1 holds.\ If the anisotropic kernel is not ${{\mathbb{Q}}}$-simple, then there is at least one simple root connected to the above circled root, and together they generate a group $G_1$ isomorphic to $SL_3$ over ${{\mathbb{C}}}$. Over ${{\mathbb{R}}}$, $G_1$ cannot be of outer type $SL_3$, since one root is already circled over ${{\mathbb{Q}}}$ (in outer type $A_2$, [*two*]{} roots over ${{\mathbb{R}}}$ are circled together). Therefore, $G_1$ is $SL_3$ over ${{\mathbb{R}}}$. Hence, over ${{\mathbb{Q}}}$, $G_1$ can only be $SU(2,1)$ with respect to a real quadratic extension. 
Then again, the group $<U^+_{G_1}(r{{\mathbb{Z}}}), \theta ^{r{{\mathbb{Z}}}}>$ is virtually two generated (for any $r$), and a general position argument as in the previous paragraph shows that Theorem 1 holds in this case too.\ Exceptional groups of rank one ============================== Groups of type $^3D_4$ and $^6D_4$ ---------------------------------- The only $K$-rank one groups (according to [@T2], p.58) are $^3D_{4,1}^9$ and $^6D_{4,1}^9$. The simple root that is connected to all the others is circled. The anisotropic kernel $M_1$ is, over $\overline K$, $SL_2^3$. Moreover, the Galois group of $\overline {K}/K$ acts transitively on the roots connected to this simple root. Thus, the anisotropic kernel is an [*inner twist*]{} of the quasi-split group $M'=R_{E/K}(SL_2)$ with $E/K$ either cubic ($^3D_{4,1}^9$) or sextic ($^6D_{4,1}^9$). $M'$ is $K$-simple, whence any inner twist is $K$-simple (an inner twist of a product is a product of inner twists).\ Now, $G$, being an inner twist of the quasi-split group ${\mathcal G}$, is given by an element of the Galois cohomology set $H^1(K,{\mathcal G})$. However, this element is in the image of $H^1(K,M')$ (Proposition 4 (ii) of [@T2]). Hence $G$ contains the $K$-subgroup $M_1$ (an inner twist of $M'$), whence $M_1$ is $K$-simple.\ Since ${{\mathbb{R}}}$-rank ($M_1(K\otimes {{\mathbb{R}}})$) $\geq 1$ (it follows, by looking at the Tits diagrams, that $G(K_v)$ has $K_v$-rank $\geq 2$ for each archimedean place $v$ of $K$, because these forms do not occur over the real or complex numbers), it follows that $M_1(K\otimes {{\mathbb{R}}})$ is non-compact semi-simple. Hence the Zariski closure of $M_1(O_K)$ is $M_1$. Now, by [@L], [@Sh], as a module over $M_1({{\mathbb{C}}})=SL_2({{\mathbb{C}}})^3$, $Lie U^+=St\otimes St\otimes St$. Therefore, the torus of $M_1({{\mathbb{C}}})$ given by the product of diagonal tori in $SL_2$ acts with multiplicity one on $Lie U^+$. By Proposition \[multone\], Theorem 1 follows. 
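For concreteness, the multiplicity one claim can be checked directly (an illustrative verification, not part of the original argument): the product $T$ of the diagonal tori, with coordinates $(s,t,u)$, acts on $St\otimes St\otimes St$ by the eight characters $$s^{\pm 1}t^{\pm 1}u^{\pm 1},$$ which are pairwise distinct as characters of $T$; hence every weight space of $Lie U^+$ is one dimensional.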
Groups of type $E_6$ -------------------- [*Case 1. There are no inner type groups of rank one*]{}. [*Case 2. $G=^2E_{6,1}^{35}$*]{}. The anisotropic kernel $M_1$ is $K$-simple (since its Tits diagram is connected). It is also non-compact at infinity, since any non-compact form of $E_6$ over $K_v$ has $K_v$-rank $\geq 2$ for any archimedean completion $K_v$ of $K$. Hence $M_0\supset M_1$. As a module over $M_1({{\mathbb{C}}})=SL_6({{\mathbb{C}}})$, $Lie U^+$ is $\wedge ^3 ({{\mathbb{C}}}^{6})$ ([@L], [@Sh]), and is multiplicity free for the diagonal torus in $SL_6$. This completes the proof.\ [*Case 3. $G=^2E_{6,1}^{29}$*]{}. The anisotropic kernel is non-compact at infinity for the same reason as above. Hence $M_1\subset M_0$, with $M_1=SO(8)$. As an $M_1({{\mathbb{C}}})=SO(8,{{\mathbb{C}}})$ module, the space $Lie U^+$ is (by p. 568, $^2E_6$-3 of [@Sh]) $St\oplus \delta _3\oplus \delta _5$, where $St$, $\delta _3$ and $\delta _5$ are respectively the standard and the two distinct spin modules. With respect to the maximal torus of $SO(8,{{\mathbb{C}}})$, the weights are $\pm x_1,\cdots, \pm x_4, \frac{\epsilon _1x_1+\cdots+ \epsilon _4x_4}{2}$ with $\epsilon _i=\pm 1$, each occurring with multiplicity one. Therefore, Theorem 1 follows from Proposition \[multone\].\ Groups of type $E_7$ or $E_8$ or $G_2$ -------------------------------------- There are no $K$-rank one forms over number fields. Groups of type $F_4$ -------------------- The $K$-rank one form is $F_{4,1}^{21}$. This is the only exceptional group which can have rank one over some archimedean completion of $K$.\ [*Case 1. $K$ is not totally real, or $K={{\mathbb{Q}}}$, or the anisotropic kernel is non-compact at infinity*]{}. Then, the anisotropic kernel $M_1$ is a form of $SO(7)$. Over ${{\mathbb{C}}}$ this is non-compact. In case $K={{\mathbb{Q}}}$, again, this is non-compact over ${{\mathbb{R}}}$ since $G$ is of real rank $\geq 2$. 
If $K\neq {{\mathbb{Q}}}$ is totally real, then by [*assumption*]{}, $M_1$ is non-compact at infinity. Thus, $Lie U^+=St\oplus \delta$ ([@L], (xxii), p.52), where $\delta $ is the ($8$-dimensional) spin module of $SO(7)$, is multiplicity free for the torus of $SO(7)$.\ [*Case 2. $K$ totally real, and the anisotropic kernel is compact at infinity*]{}. Let ${\mathfrak g}_{2\alpha}$ be the $2\alpha$ root space. Then, the subgroup $G_1$ of $G$ with Lie algebra ${\mathfrak g}_1=<{\mathfrak g}_{-2\alpha},{\mathfrak g}_{2\alpha}>$ must be locally isomorphic to $SO(1,8)$. For, ${\mathfrak g}_1$ has real rank one (since ${\mathfrak g}$ has), is semi-simple, and its obvious parabolic subgroup has abelian unipotent radical. Therefore, it can only be $SO(1,k)$. Since $dim({\mathfrak g}_{2\alpha})=7$, it follows that $k-1=7$, i.e. $k=8$. Now, the anisotropic factor $SO(7)$ of $G_1=SO(1,8)$ is an anisotropic factor of $F_{4,1}^{21}$ as well.\ Fix $u_+\in U_{2\alpha}(O_K)$, and $\theta \in {\bf G}_m(O_K)$ suitably chosen (as in Lemma \[exist\]). Fix $\gamma \in G(O_K)$ in general position with respect to $u_+,\theta $ (Proposition \[zariskidense\]). For each $r$, write $\Gamma =<u_+^r,\theta ^r,\gamma ^r>$. Then, 1) $\Gamma $ is Zariski dense in $G(K\otimes {{\mathbb{C}}})$ (Proposition \[zariskidense\]); 2) $V^+=V^+(r')=u_+^{r'O_K}\subset \Gamma $ for some integer $r'$; put $V^-=V^-(r')$ for the $w$-conjugate of $V^+(r')$; 3) if $g=umwv \in \Gamma $ is generic, then $\Gamma \supset <^g(V^+),V^+>\supset {}^u<^m(V^-),V^+>$. By using the result proved for $SO(1,8)$ (it is important to note that $K\neq {{\mathbb{Q}}}$ is totally real, and that $m\in SO(7)\subset SO(1,8)$, to apply this result), we see that $^u (\theta ^{r''{{\mathbb{Z}}}})\subset \Gamma $ for some $r''\neq 0$. Then, by Proposition \[technical\], $\Gamma $ is arithmetic.\ H. Bass, J. Milnor and J.-P. Serre, Solution of the congruence subgroup problem for $SL_n (n\geq 3)$ and $Sp_{2n} (n\geq 2)$, Publ. Math. I.H.E.S. [**33**]{} (1967) 59-137. 
A.Borel and J.Tits, Groupes reductifs, Publ.Math. I.H.E.S., [**27**]{}, (1965) 55-150. R. Langlands, Euler Products, James Whittmore Lecture, Yale mathematical monographs, [**1**]{}, Yale University Press, New Haven, Conn.-London, 1971. F. Shahidi, On the Ramanujan Conjecture and finiteness of poles of certain L-functions, Ann. of Math. (2)[**127**]{} (1988), [**no. 3**]{}, 547-584. J.Mennicke, Finite Factor Groups of the Unimodular Group, Annals of Math.(2), [**81**]{}, (1965) 31-37. H.Oh, On discrete subgroups containing a lattice in a horospherical subgroup, Israle J.Math., [**110**]{}, (1999) 333-340. V. P. Platonov and A. Rapinchuk, Algebraic Groups and Number Theory, Academic Press, Vol 139. M. S. Raghunathan, On the congruence subgroup problem, Publ. Math. Inst. Hautes Etud. Sci. [**46**]{} (1976) 107-161. M. S. Raghunathan, On the congruence subgroup problem II, Invent. Math. [**85**]{} (1986) 73-117. M. S. Raghunathan, The congruence subgroup problem, in Proceedings of the Hyderabad conference on algebraic groups, 465-494, ed. S. Ramanan, National Board for Higher Mathematics, Mumbai, 1991. M.S.Raghunathan, A Note on Generators for Arithmetic Groups, Pacific J. Math. [**152**]{} (1972), [no. 2]{} 365-373. M.S.Raghunathan, Discrete subgroups of Lie Groups, Ergebnisse Math. Grenzgeb. (3), [**68**]{}, Springer-Verlag, New York, 1972. R.Scharlau, J.Tits, “Systemes geńerateurs de groupes de congruence. C.R.Acad. Sci, Paris Seŕ. A-B, [**283**]{} (1976), no.9 Ai, A 693-A 695. J.Tits, Classification of algebraic semi-simple groups, (1966) Algebraic Groups and Discontinuous Groups, Proc. Symp. Pure Math., Boulder, Colorado, 1965), 33-62 , Amer. Math. Soc. Providence, R.I. J.Tits, Free subgroups in linear groups, Journal of Algebra [**20**]{} (1972) 250-270. T.N.Venkataramana, On Systems of generators for higher rank arithmetic groups, Pacific Journal. [**166**]{} (1994), 193-212. T.N.Venkataramana, Zariski dense subgroups of arithmetic groups, J. 
of Algebra [**108**]{} (1987), 325-339. L. Vaserstein, Structure of the classical arithmetic groups of rank greater than one (in Russian), Mat. Sb. (N.S.) [**91**]{} ([**133**]{}) (1973), 445-472.
[**Retro-Prospective Differential Inclusions and their Control by the Differential Connection Tensors of their Evolutions: The trendometer**]{}\ \ Jean-Pierre Aubin[^1],[^2]\ **Abstract** *This study has two different, yet connected, motivations. The first one follows from the observation that the classical definition of derivatives involves prospective (or forward) difference quotients, which are not known whenever time is directed, at least at the macroscopic level. Actually, the available and known derivatives are retrospective (or backward). They coincide whenever the functions are differentiable in the classical sense, but not in the case of non-smooth maps, single-valued or set-valued. The latter ones are used in differential inclusions (and thus, in uncertain control systems) governing evolutions as a function of time and state. We follow the plea of some physicists for taking also into account the retrospective derivatives, in order to study prospective evolutions as a function of time, state and retrospective derivatives, a particular but specific example of historical or “path-dependent” evolutionary systems. This is even more crucial in life sciences, in the absence of experimentation on uncertain evolutionary systems. The second motivation emerged from the study of networks with junctions (cross-roads in traffic networks, synapses in neural networks, banks in financial networks, etc.), an important feature of “complex systems”. At each junction, the velocities of the incoming (retrospective) and outgoing (prospective) evolutions are confronted. One measure of this confrontation (“jerkiness”) is provided by the product of the retrospective and prospective velocities, negative at “inhibitory” junctions, positive at “excitatory” ones, for instance.
This leads to the introduction of the “differential connection tensor” of two evolutions, defined as the tensor product of retrospective and prospective derivatives, which can be used for controlling evolutionary systems governing the evolutions through networks with junctions.* **Mathematics Subject Classification**: 34A60, 90B10, 90B20, 90B99, 93C10, 93C30, 93C99. **Keywords** Transport, networks, junction, impulse, viability, traffic control, jam, celerity, monad Motivations {#motivations .unnumbered} =========== There are two different motivations of this study. Retrospective-Prospective Differential Inclusions {#retrospective-prospective-differential-inclusions .unnumbered} ------------------------------------------------- The first motivation follows the plea of *Efim Galperin* in [@Galperin1; @Galperin2; @Galperin3; @Galperin4 Galperin] for using “retrospective” derivatives[^3] instead of “prospective” derivatives, universally chosen since their introduction by *Newton* and *Leibniz*, at a time when physics became predictive and deterministic: the “prospective derivatives” $\overrightarrow{D}x(t)$, being (more or less weak) limits of *prospective (future) difference quotients* (on positive durations $h>0$) $\displaystyle{\overrightarrow{\nabla }_{h}x(t) := \frac{x(t+h)-x(t)}{h}}$, are “physically non-existent”, because they **are not yet known** at time $t$. Whereas the *retrospective (past) difference quotients* $\displaystyle{\overleftarrow{\nabla }_{h}x(t) := \frac{x(t)-x(t-h)}{h}}$ **may be known** for some positive durations and should be taken into account[^4]. This is an inescapable issue in life sciences, since the evolutionary engines evolve with time, under contingent and/or tychastic uncertainties and, in most cases, cannot be re-created (at least for the time being, since synthetic biology deals with this issue[^5]). Popper’s recommendations are valid for physical sciences, where experimentation is possible and renewable.
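As a minimal numerical illustration of this distinction (the code and function names below are ours, not part of the text), the retrospective and prospective difference quotients of the Lipschitz but non-smooth evolution $x(t)=|t|$ converge to different values at $t=0$:

```python
# Sketch (not from the paper): retrospective vs prospective difference
# quotients of the non-smooth map x(t) = |t| at t = 0.  For differentiable
# functions the two limits coincide; here they do not.

def prospective_quotient(x, t, h):
    """Forward difference quotient (x(t+h) - x(t)) / h, for h > 0."""
    return (x(t + h) - x(t)) / h

def retrospective_quotient(x, t, h):
    """Backward difference quotient (x(t) - x(t-h)) / h, for h > 0."""
    return (x(t) - x(t - h)) / h

x = abs  # x(t) = |t|, Lipschitz but not differentiable at 0

for h in [1e-1, 1e-3, 1e-6]:
    back = retrospective_quotient(x, 0.0, h)
    forw = prospective_quotient(x, 0.0, h)
    print(h, back, forw)  # back -> -1, forw -> +1 for every h
```

At $t=0$ the backward quotients are identically $-1$ and the forward ones identically $+1$, so no classical derivative exists, while both one-sided limits do.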
However, the quest for the *instant* (a temporal window of duration $0$) has not yet been experimentally achieved (the smallest measured duration is of the order of the yoctosecond, $10^{-24}$ s). Furthermore, our brains deal with observations which are not instantaneous, but, in the best case, are perceived after a positive transmittal duration. For overcoming this difficulty, Fermat, Newton, Leibniz and billions of human brains have invented *instants* and passed to the limit when the duration of temporal windows goes to $0$ to reach such an instant. This is actually an approximation[^6] of reality by clever mathematical constructions of objects belonging to an ever evolving “cultural world”. Derivatives are not *perceived*, but were *invented*, simplifying reality by passing to the limit in a mathematical paradise. Therefore, for functions differentiable in the classical sense, the limits of retrospective and prospective difference quotients coincide when we pass to the limit. But this is no longer the case when evolutions are not differentiable in the classical sense, although derivatives may still exist for “weaker” limits, such as limits in the sense of distributions or graphical limits in set-valued analysis (see Section 18.9, p. 769, of [@absp Aubin, Bayen & Saint-Pierre]). Even if we restrict our analysis to Lipschitz functions, *Rademacher*’s Theorem states that Lipschitz maps from one finite dimensional vector space to another one are only *almost everywhere* differentiable. Although small, the set of elements where they are not differentiable is interesting, because Lipschitz maps always have set-valued graphical derivatives. Hence we have to make a detour by recalling what is meant by retrospective and prospective graphical derivatives of maps, both set-valued maps and non-differentiable (single-valued) maps.
Therefore, we devote the first part of this study to a certain class of viable evolutions governed by functional (or history-dependent) differential inclusions $$x'(t) \; \in \; G(t,x(t), \overleftarrow{D}x(t))$$ where $\overleftarrow{D}x(t)$ is the *retrospective derivative* (or derivative from the left since, at this stage, we consider evolutions defined on $\mathbb{R}$). Retrospective-prospective differential inclusions $x'(t) \; \in \; G(t,x(t), \overleftarrow{D}x(t))$ describe *predictions of evolutions based on the state *and* on the known retrospective velocity at each chronological time*. Like delayed differential equations or inclusions, they are particular cases of *functional* (or *historical*, *path-dependent*, etc.) differential equations[^7]. As for second-order differential equations, initial conditions $x(t_{0})$ at time $t_{0}$ must be provided, as well as (retrospective) initial velocities, for selecting evolutions governed by retrospective-prospective differential equations. Differential Connection Tensors in Networks {#differential-connection-tensors-in-networks .unnumbered} ------------------------------------------- The second motivation emerged from the study of propagation through “junctions of a network”, such as cross-roads in road networks, banks in financial networks, synapses in neural networks, etc. (see for instance [@TransReg-2013 Aubin]). ### Neural Networks: the Hebbian Rule {#neural-network-the-hebbian-rule .unnumbered} If we accept that in formal neuron networks, “(evolving) knowledge” is coded as “synaptic weights” at each synapse, their collection defines a “synaptic matrix” which evolves, and thus becomes the “state of the network”.
*Donald Hebb* introduced in 1949 in *The Organization of Behavior*, [@Hebb Hebb], the *Hebbian learning rule* prescribing that the velocity of the synaptic matrix is proportional to the *tensor product*[^8] of the “presynaptic activity” and the “postsynaptic activity” described by the propagation of nervous influx in the neurons. Hence, denoting by $W$ the matrix of synaptic weights, the basic question was to minimize a “matrix function” $W \in \mathcal{L}(X,X)\mapsto E(Wx)$, where $x \in X:= \mathbb{R}^{\ell}$ and a differentiable function $E: X \mapsto \mathbb{R}$ are given. Remembering[^9] that the gradient with respect to $W$ is equal to the tensor product $E'(Wx) \otimes x$, the gradient method leads to a differential equation of the form $$\label{e:} W'(t) \; = \; - \alpha E'(W(t)x) \otimes x$$ which governs the evolution of the synaptic matrix (the synapse $x$ is fixed and does not evolve). ### Differential Connection Tensors {#differential-connection-tensors .unnumbered} Here, however, we take into account the evolution $t \mapsto x(t) \in X $ of the propagation in networks (such as the propagation of nervous influx, traffic, financial products, etc.). If the evolution is Lipschitz, retrospective and prospective derivatives exist at all times, so that we can define the tensor product $\overleftarrow{D}x(t)\otimes \overrightarrow{D}x(t)$ of the retrospective and prospective velocities: we shall call it the *differential connection tensor* of the evolution $x(\cdot)$ at time $t$. It plays the role of a *“trendometer”* measuring the *trend reversals* (or *monotonicity reversals*) at junctions: the differential connection tensor describes a *trend reversal* between the retrospective and prospective trends when its entries are strictly negative, a *monotonicity congruence* when they are strictly positive, and *inactivity* when they vanish.
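A discretised sketch of the scalar trendometer (our construction, assuming data sampled with step $h$; the text works with the continuous-time one-sided derivatives):

```python
# Sketch (hypothetical discretisation, not from the paper): the scalar
# "trendometer" as the product of backward and forward difference quotients
# along a sampled evolution.

def trendometer(xs, h=1.0):
    """Product of retrospective and prospective difference quotients at each
    interior sample: negative = trend reversal, positive = monotonicity
    congruence, zero = inactivity (a one-sided velocity vanishes)."""
    out = []
    for j in range(1, len(xs) - 1):
        back = (xs[j] - xs[j - 1]) / h   # retrospective velocity
        forw = (xs[j + 1] - xs[j]) / h   # prospective velocity
        out.append(back * forw)
    return out

xs = [0.0, 1.0, 2.0, 1.0, 1.0, 0.0]   # rise, peak, plateau, fall
print(trendometer(xs))  # positive (congruence), negative (reversal), zero, zero
```

The absolute value of the negative entry measures the jerkiness of the reversal at the peak.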
In neural networks, for instance, *this is an inhibitory effect or trend reversal in the first case, an excitatory effect or trend congruence in the second case, and inactivity of a synapse in the third case: at least one of the propagations of the nervous influx stops.* The absolute value of this product measures in some sense the *jerkiness of the trend reversal* at a junction of the network. We are thus tempted to control (pilot, regulate, etc.) the evolution of propagation in the network governed by a system $$\label{e:synpropagactivity} x' (t) \; = \; g (x (t), u (t)) \;\mbox{\rm where}\; u (t) \; \in \; U (\overleftarrow{D}x(t)\otimes \overrightarrow{D}x(t))$$ controlled by differential connection tensors at junctions of the network. We recall that the evolutions governed by (Marchaud) controlled systems are Lipschitz under the standard assumptions, *but not necessarily differentiable*. For example, in order to govern the viability of the propagation in terms of the inhibitory, excitatory and stopping behavior at the junctions of the network, some constraints are imposed on the evolution of the differential connection tensors. Examples of retrospective-prospective differential equations are provided by tracking or controlling differential connection tensors of the evolutions, requiring that evolutions governed by differential equations $x'(t)=f(t,x(t))$ satisfy constraints of the form $\overleftarrow{D}x(t) \otimes \overrightarrow{D}x(t) \in C(t,x(t))$. These control systems are examples of retrospective-prospective differential inclusions. These considerations extend to “multiple synapses” when we associate with each subset $S$ of branches $j$ meeting at a junction the tensor products $\otimes_{j \in S}x_{j}'(t) $ of the velocities at the junction[^10]. Organization of the Study {#organization-of-the-study .unnumbered} ------------------------- Section \[s:PRTubes\], p.
, *Retrospective-Prospective Differential Inclusions*, defines retrospective and prospective (graphical) derivatives of tubes and evolutions, and their *differential connection tensor* (Definition \[d:DifConnTensorTube\], p. ). They are the ingredients for introducing retrospective-prospective differential inclusions. The Viability Theorem (Theorem \[t:RPdifInc\], p. ) is adapted for characterizing viable tubes under such differential inclusions, using characterizations linking the retrospective and prospective derivatives of the tube. When these conditions are not satisfied, we restore viability by introducing the *retrospective-prospective viability kernel* of the tube under the retrospective-prospective differential inclusion (Subsection \[s:PRKernel\], p. ). Section \[s:ControlDCT\], p. , *Control by Differential Connection Tensors*, studies the regulation of viable evolutions on tubes by imposing constraints on their differential connection tensors. Section \[s:Illustrations\], p. , *Illustrations*, provides examples of differential connection tensors of vector evolutions in the framework of “technical analysis” of the forty price series of the CAC 40 stock market index[^11]. Section \[s:Remarks\], p. , *Other Examples of Differential Connection Tensors*, defines differential connection tensors of set-valued maps (Subsection \[s:PRDerivatives\], p. , *Prospective and Retrospective Derivatives of Set-Valued Maps*) and gathers some other classes of differential connection tensors than those of the evolutions $t \mapsto x(t)$ or tubes $t \leadsto K(t)$ from $\mathbb{R}$ to $\mathbb{R}^{\ell}$, which provided the first source of motivations for studying differential connection tensors. Other specific examples are the *differential connection tensors of numerical functions* $V : \mathbb{R}^{\ell} \mapsto \mathbb{R}$ (Subsection \[s:PREpider\], p. ), and the *tangential connection tensors* of retrospective and prospective tangents (Subsection \[s:PRCones\], p. ).
These issues are the topics of forthcoming studies. Retrospective-Prospective Differential Inclusions {#s:PRTubes} ================================================= Prospective and Retrospective Derivatives of Tubes and Evolutions ----------------------------------------------------------------- A tube is the nickname of a set-valued map $K: t \in \mathbb{R} \leadsto K(t) \subset X$. Since there are only[^12] two directions $+1$ and $-1$ in $\mathbb{R}$, the prospective (right) and retrospective (left) derivatives of a tube $K$ at a point $(t,x)$ of its graph are defined by $$\label{e:} \left\{ \begin{array}{ll} \displaystyle{v \in \overrightarrow{D}K(t,x) \;\mbox{\rm if and only if }\; \liminf_{h \rightarrow 0+} d\left(v, \frac{K(t+h)-x}{h}\right) \; = \; 0 }\\ \displaystyle{v \in \overleftarrow{D}K(t,x) \;\mbox{\rm if and only if }\; \liminf_{h \rightarrow 0+} d\left(v, \frac{x-K(t-h)}{h}\right) \; = \; 0 }\\ \end{array} \right.$$ (see Definition \[d:PRderiv\], p. , for the general case). **Differential Connection Tensor of a Tube** \[d:DifConnTensorTube\] The *differential connection tensor of a tube* $K(\cdot)$ at $x \in K(t) $ is defined by $$\label{e:ReverIndex} \forall \; \overleftarrow{v} \in \overleftarrow{D}K(t,x), \; \forall \; \overrightarrow{v} \in \overrightarrow{D}K(t,x), \; \; \mathbf{a}_{K}(t,x)[\overleftarrow{v},\overrightarrow{v}] \; := \; \overleftarrow{v}\otimes\overrightarrow{v}$$ In particular, an evolution $x(\cdot)$ is a single-valued tube defined by $K(t):=\{x(t)\}$, so that we can define its graphical prospective derivative $\overrightarrow{D}x(t)$ (from the right) and retrospective derivative $\overleftarrow{D}x(t)$ (from the left) respectively (see illustrations in Section \[s:Illustrations\], p. , *Illustrations*).
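As a worked example (ours, not in the text), take the single-valued tube $K(t):=\{|t|\}$ in $X=\mathbb{R}$ at the point $(0,0)$ of its graph. The one-sided difference quotients are constant in $h>0$:
$$\frac{K(h)-0}{h}=\{1\}, \qquad \frac{0-K(-h)}{h}=\{-1\},$$
so that $\overrightarrow{D}K(0,0)\supset\{1\}$ and $\overleftarrow{D}K(0,0)\supset\{-1\}$. The differential connection tensor then reduces to the scalar
$$\mathbf{a}_{K}(0,0)[-1,1]\;=\;(-1)\otimes 1\;=\;-1\;<\;0,$$
a trend reversal at $t=0$ in the terminology of the previous section.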
Retrospective-Prospective Differential Inclusions {#retrospective-prospective-differential-inclusions-1} ------------------------------------------------- Recall that whenever an evolution $t \mapsto x(t)$ is viable in a tube $K(\cdot)$ on a neighborhood of $t_{0}$, then $\overleftarrow{D}x(t_{0}) \; \in \; \overleftarrow{D}K(t_{0},x(t_{0})) $ and $\overrightarrow{D}x(t_{0}) \; \in \; \overrightarrow{D}K(t_{0},x(t_{0})) $. Since we know only retrospective derivatives, forecasting future evolutions can be governed not only by a prospective differential inclusion $\overrightarrow{D}x(t) \in F(t,x(t))$ depending only on time and state, but also by the particular case of history-dependent evolutions $\overrightarrow{D}x(t) \in G(t,x(t), \overleftarrow{D}x(t))$ depending on time, state and the retrospective derivatives. This could be the case for systems controlling the differential connection tensors of the evolutions, for instance (see Section \[s:ControlDCT\], p. ). **Viability Theorem for Retrospective-Prospective Differential Inclusions** \[t:RPdifInc\] Let us assume that the map $(t,x,v) \in \mathbb{R} \times X \times X \leadsto G(t,x,v) \subset X$ is Marchaud (closed graph, convex values and linear growth) and that the tube $t \leadsto K(t)$ is closed. Then the “tangential condition” $$\label{e:RPdifInc} \forall \; \overleftarrow{v} \in \overleftarrow{D}K(t,x), \; \; G(t,x,\overleftarrow{v}) \cap \overrightarrow{D}K(t,x) \; \ne \; \emptyset$$ is equivalent to the “viability property”: from any initial state $x_{0} \in K(t_{0})$ and initial retrospective velocity $\overleftarrow{v}_{0} \in \overleftarrow{D}K(t_{0},x_{0})$, there exists at least one evolution $x(\cdot)$ governed by the retrospective-prospective differential inclusion $\overrightarrow{D}x(t) \in G(t,x(t),\overleftarrow{D}x(t)) $ satisfying $x(t_{0})=x_{0}$ and $\overleftarrow{D}x(t_{0})= \overleftarrow{v}_{0}$ and viable in the tube $K(\cdot)$.
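The delayed Euler scheme used in the proof below can be sketched numerically in the single-valued case (our code and toy right-hand side; the proof itself works with a set-valued Marchaud map $G$ and viability constraints):

```python
# Sketch (our notation, single-valued case): the delayed Euler scheme
#   v_j = g(j*h, x_j, v_{j-1}),  x_{j+1} = x_j + h * v_j,
# approximating a retrospective-prospective equation where the prospective
# velocity depends on time, state and the retrospective velocity.

def delayed_euler(g, x0, v0, h, steps):
    """Euler iterates of x' = g(t, x, retrospective velocity)."""
    xs, vs = [x0], [v0]
    for j in range(steps):
        v = g(j * h, xs[-1], vs[-1])   # prospective velocity from the state
        xs.append(xs[-1] + h * v)      # and the previous (retrospective) one
        vs.append(v)
    return xs

# Toy right-hand side: x' = -x + 0.5 * (retrospective velocity),
# whose evolutions decay towards 0.
xs = delayed_euler(lambda t, x, v: -x + 0.5 * v, 1.0, 0.0, 0.01, 1000)
print(xs[-1])
```

Note that, as in the proof, the scheme needs both an initial state $x_0$ and an initial retrospective velocity $v_0$.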
**Proof** — The proof is an adaptation of the proof of the Viability Theorem 19.4.2, p. 782, based on Theorems 11.2.7, p. 447, and 19.3.3, p. 777, of *Viability Theory. New Directions*, [@absp Aubin, Bayen & Saint-Pierre]. We just indicate the modifications to be made. We construct approximate solutions by modifying Euler’s method to take into account the viability constraints, then deduce from available estimates that a subsequence of these solutions converges in some sense to a limit, and finally, check that this limit is a viable solution to the retrospective-prospective differential inclusion $\overrightarrow{D}x(t) \in G(t,x(t),\overleftarrow{D}x(t))$. 1. By assumption, there exists $r>0$ such that the neighborhood $\mathcal{K}_{r} := \mbox{\rm Graph}(K) \cap ((t_{0},x_{0})+r([-1,+1] \times B))$ of the initial condition $(t_{0},x_{0})$ is compact. Since $G$ is Marchaud, the set $$\mathcal{C}_{r} \;:= \; \left\{ G(t,x, \overleftarrow{v}) \right\} + B, \; \; \mbox{\rm and} \; T \;:= \; r/\|\mathcal{C}_{r}\|$$ is also compact. We next associate with any $h$ the Euler approximation $$v_{j}^{h} \; := \; \frac{x_{j+1}^{h}-x_{j}^{h}}{h} \; \in \; G(jh,x_{j}^{h}, v_{j-1}^{h}) \;\mbox{\rm where}\; v_{j-1}^{h} \; := \; \frac{x_{j}^{h}-x_{j-1}^{h}}{h}$$ starting from $(t_{0}, x_{0}, \overleftarrow{v}_{0})$. 2. Theorem 11.2.7, p. 447, of [@absp Aubin, Bayen & Saint-Pierre] implies that for all $\varepsilon >0$, $$\label{eq-apprexplschviab2} \left\{ \begin{array}{l} \exists \; \eta (\varepsilon) >0 \; \mbox{such that} \; \forall \; (t,x) \in \mathcal{K}_{r}, \; \forall h \in \left[0,\eta (\varepsilon) \right], \\ x_{j}^{h}+ hG(jh,x_{j}^{h}, v_{j-1}^{h}) \; \in \; K(jh) +\varepsilon B \end{array} \right.$$ Since $$\|x_{j}^{h}-x_{0}\| \;\leq \; \sum_{i=0}^{i=j-1}\|x_{i+1}^{h}-x_{i}^{h}\| \; \leq \; \sum_{i=0}^{i=J^{h}-1}h \left\| v_{i}^{h}\right\| \; \leq \; T\|\mathcal{C}_{r}\| \; = \; r$$ the discrete evolution is viable in $\mathcal{K}_{r}$ on the interval $[0,T]$.
Denoting by $x^{h}$, $\overleftarrow{v}^{h}$ and $\overrightarrow{v}^{h}$ the linear interpolations of the sequences $x^{h}_{j}$, $\overleftarrow{v}^{h}_{j}$ and $\overrightarrow{v}^{h}_{j}$, we infer that there exists a constant $\alpha>0$ such that $$\label{e:} \left\{ \begin{array}{l} \displaystyle{ (t^{h},x^{h}, \overleftarrow{v}^{h},\overrightarrow{v}^{h}) \; \in \; \mbox{\rm Graph}(G) + \varepsilon \alpha B}\\ \displaystyle{(t^{h},x^{h}) \; \in \; \mbox{\rm Graph}(K) + \varepsilon \alpha B}\\ \end{array} \right.$$ and that there exists a constant $\beta >0$ such that the *a priori* estimates $$\label{xA445} \max(\|x^{h}\|_{\infty}, \|\overleftarrow{\nabla} ^{h} x^{h}\|_{\infty}, \|\overrightarrow{\nabla} ^{h} x^{h}\|_{\infty}) \; \leq \; \beta$$ are satisfied. 3. These imply the *a priori* estimates of the Convergence Theorem 19.3.3, p. 777, of [@absp Aubin, Bayen & Saint-Pierre], which states that the limit of a converging subsequence is a solution to the retrospective-prospective differential inclusion, viable in $\mbox{\rm Graph}(K)$. $\;\; \blacksquare$ Retrospective-Prospective Viability Kernels {#s:PRKernel} ------------------------------------------- Naturally, the “tangential assumption” (\[e:RPdifInc\]), p. , is not necessarily satisfied, so that we have to adapt the concept of viability kernel to the retrospective-prospective case.
**Retrospective-Prospective Viability Kernel of a Tube**\[\] The viability kernel of the tube $K(\cdot)$ is the set of initial conditions $(t_{0},x_{0}, \overleftarrow{v}_{0}) \in \mathbb{R} \times K(t_{0}) \times \overleftarrow{D}K(t_{0},x_{0})$ from which starts at least one evolution $t \mapsto x(t) \in K(t)$, viable and governed by the retrospective-prospective differential inclusion in the sense that $$\label{e:} \left\{ \begin{array}{ll} (i) & \overrightarrow{D}x(t) \; \in \; G(t,x(t),\overleftarrow{D}x(t))\\ (ii) & \overleftarrow{D}x(t) \; \in \; \overleftarrow{D}K(t,x(t)) \;\mbox{\rm and}\; \overrightarrow{D}x(t) \; \in \overrightarrow{D}K(t,x(t))\; \\ \end{array} \right.$$ We provide a viability characterization of retrospective-prospective viability kernels of tubes: **Viability Characterization of the Retrospective-Prospective Viability Kernel**\[\] Let us consider the control system $$\label{e:RPcontSysy} \left\{ \begin{array}{ll} (i) & \tau'(t) \; = \; 1\\ (ii) & x'(t) \; \in \; G(\tau(t),x(t), \overleftarrow{v}(t))\\ (iii) & \|\overleftarrow{v}'(t)\| \; \leq \; c \; \|G(\tau(t),x(t),\overleftarrow{v}(t))\|\\ & \mbox{\rm where}\; \overleftarrow{v}(t) \; \in \; \overline{\mbox{\rm co}} (\overleftarrow{D}K(\tau(t),x(t))) \end{array} \right.$$ Then the viability kernel of the graph $\mbox{\rm Graph}(DK(\cdot))$ of the derivative of the tube $K(\cdot)$ coincides with the retrospective-prospective viability kernel of the tube. **Proof** — The viability kernel of the control system (\[e:RPcontSysy\]), p. , is the set of initial triples $(t_{0}, x_{0}, \overleftarrow{v}_{0})$ such that $x_{0} \in K(t_{0})$ and $\overleftarrow{v}_{0} \in \overleftarrow{D}K(t_{0},x_{0})$ from which starts an evolution $t \mapsto (t_{0}+t, x(t),\overleftarrow{v}(t))$ of the control system such that $x(t) \in K(\tau(t))$ and $\overleftarrow{v}(t) \; \in \; \overline{\mbox{\rm co}} (\overleftarrow{D}K(\tau(t),x(t)))$.
Setting $x_{\star}(t):=x(t-t_{0})$ and $\overleftarrow{v}_{\star}(t):=\overleftarrow{v}(t-t_{0})$, we observe that $\overrightarrow{D}x_{\star}(t) \in G(t,x_{\star}(t),\overleftarrow{v}_{\star}(t))$, $\overleftarrow{v}_{\star}(t) \in \overleftarrow{D}K(t,x_{\star}(t))$ and $x_{\star}(t) \in K(t)$. We thus infer that $\overrightarrow{D}x_{\star}(t) \in \overrightarrow{D}K(t,x_{\star}(t))$. Since $x(t)$ is viable in the tube, we also infer that $\overleftarrow{D}x(t)$ actually belongs to $\overleftarrow{D}K(t,x(t))$. Hence $(t_{0}, x_{0}, \overleftarrow{v}_{0})$ belongs to the retrospective-prospective viability kernel of the tube $K(\cdot)$. $\;\; \blacksquare$ Therefore, it remains to provide sufficient conditions under which the control system governing the viability kernel of the graph of $K(\cdot)$ is Marchaud. **Properties of the Retrospective-Prospective Viability Kernel**\[\] Let us assume that the set-valued map $G: (t,x, \overleftarrow{v}) \leadsto G(t,x, \overleftarrow{v})$ is Marchaud. Then the retrospective-prospective viability kernel of the tube $K(\cdot)$ under the retrospective-prospective differential inclusion $\overrightarrow{D}x(t) \in G(t,x(t), \overleftarrow{D}x(t))$ is closed and inherits all the properties of viability kernels. Control by Differential Connection Tensors {#s:ControlDCT} ========================================== We study the tracking at each date $t$ of the differential connection tensor $\overleftarrow{D}x(t)\otimes \overrightarrow{D}x(t)$ of evolutions governed by a differential inclusion $x'(t) \in F(t,x(t)) $. For that purpose, we introduce a connection map $(t,x) \leadsto C(t,x) \;\subset\; \mathcal{L}(X,X)$.
We are looking for evolutions $x(\cdot)$ governed by the differential inclusion satisfying the constraints on the differential connection tensors $$\label{e:} \forall \; t \geq 0, \; \; \overleftarrow{D}x(t)\otimes \overrightarrow{D}x(t) \; \in \; C(t,x(t))$$ This problem is analogous to the search for the slow evolutions governed by control systems (solutions governed by controls of the regulation map with minimal norm): see [@af85heav Aubin & Frankowska] or Theorem 6.6.3, p. 229, of [@avt *Viability Theory*]. We follow the same strategy by introducing the set-valued map $ G$ defined by $$\label{e:TRregRev} G(t,x, \overleftarrow{v}) \; := \; \left\{w \in F(t,x) \; \mbox{ such that} \; \overleftarrow{v} \otimes w \; \in \; C(t,x) \right\}$$ **Control of Differential Connection Tensors** \[\] We assume that $F$ is Marchaud, that the tube $K(\cdot)$ is closed and that $$\label{e:} \left\{ \begin{array}{ll} (i)& \mbox{\rm the graph of $(t,x) \leadsto C(t,x) \subset \mathcal{L}(X,X)$ is closed and its images are convex}\\ (ii) & \forall \; (t,x) \in \mbox{\rm Graph}(K), \; \; \forall \; \overleftarrow{v} \in \overleftarrow{D}K(t,x), \; \; \exists \; w \in F(t,x) \cap \overrightarrow{D}K(t,x) \\ & \mbox{ such that} \; \; \overleftarrow{v} \otimes w \; \in \; C(t,x) \end{array} \right.$$ For any $t_{0}$, for any $x_{0} \in K(t_{0})$, for any $\overleftarrow{v}_{0} \in \overleftarrow{D}K(t_{0},x_{0})$, there exists at least one evolution $x(\cdot)$ governed by the differential inclusion $x'(t) \in F(t,x(t))$ starting at $x_{0}$, viable in the tube $K(\cdot)$, such that $\overleftarrow{v}_{0} \otimes \overrightarrow{D}x(t_{0}) \in C(t_{0},x_{0})$ and satisfying the differential connection tensor constraints $$\label{e:} \forall \; t \geq t_{0}, \; \; \overleftarrow{D}x(t) \otimes \overrightarrow{D}x(t) \; \in \; C(t,x(t))$$ and the retrospective-prospective viability property $$\label{e:} \forall \; t \geq t_{0}, \; \; \overleftarrow{D}x(t) \otimes \overrightarrow{D}x(t) \;
\in \; \overleftarrow{D}K(t,x(t)) \otimes \overrightarrow{D}K(t,x(t))$$ [**Proof**]{} — The set-valued map $G$ satisfies the assumptions of Theorem \[t:RPdifInc\], p. , so that there exists at least one evolution $x(\cdot)$ governed by $\overrightarrow{D}x(t) \in G(t,x(t), \overleftarrow{D}x(t))$ viable in the tube $K(\cdot)$. Therefore, $\overrightarrow{D}x(t) \; \in \; \overrightarrow{D}K(t,x(t))$ for all $t \geq t_{0}$. Consequently, $$\label{e:} \overleftarrow{D}x(t) \otimes \overrightarrow{D}x(t) \in C(t,x(t))$$ and, since the evolution is viable in the tube $K(\cdot)$, $$\overleftarrow{D}x(t) \; \in \; \overleftarrow{D}K(t,x(t)) \;\mbox{\rm and}\; \overrightarrow{D}x(t) \; \in \; \overrightarrow{D}K(t,x(t))$$ The theorem ensues. $\;\; \blacksquare$ For instance, we can choose $$\label{e:} C(t,x,\overleftarrow{v}) \; := \; \left\{\overrightarrow{v} \; \mbox{ such that} \; \sup_{w \in F(t,x)} \sup_{(i,j)} \overleftarrow{v}_{i} (\overrightarrow{v}_{j} -w_{j}) \; \leq \; 0 \right\}$$ In other words, the entries $\overleftarrow{v}_{i}\overrightarrow{v}_{j} $ minimize the entries $\overleftarrow{v}_{i} w_{j}$ of the differential connection tensors when the velocities $w$ range over $F(t,x) $. Proposition 6.5.4, p. 226, of *Set-Valued Analysis*, [@af90sva Aubin & Frankowska], implies that the connection constraint map has a closed graph and convex values whenever the set-valued map $F$ is lower semicontinuous with convex compact images. We could as well require that the entries $\overleftarrow{v}_{i}\overrightarrow{v}_{j} $ of the differential connection tensor maximize the entries $\overleftarrow{v}_{i} w_{j}$ when the velocities $w$ range over $F(t,x) $, or that for some pairs $(i,j)$ the entries $\overleftarrow{v}_{i}\overrightarrow{v}_{j} $ minimize $\overleftarrow{v}_{i}w_{j} $ and for the other pairs maximize $\overleftarrow{v}_{i}w_{j} $.
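A finite sketch of the regulation map $G$ of (\[e:TRregRev\]) (our discretisation: $F(t,x)$ is replaced by a finite list of candidate velocities, and $C(t,x)$ by the constraint that every entry of the connection tensor be non-positive, i.e. only reversals or inactivity are admitted):

```python
# Sketch (hypothetical, finite sample): the regulation map
#   G(t, x, back_v) = { w in F(t,x) : back_v (x) w in C(t,x) },
# with the connection constraint "all entries of back_v (x) w are <= 0".

def outer(u, w):
    """Tensor (outer) product of two vectors as a matrix of entries u_i * w_j."""
    return [[ui * wj for wj in w] for ui in u]

def regulation_map(candidates, back_v):
    """Candidate velocities w whose connection tensor back_v (x) w
    has all entries <= 0."""
    return [w for w in candidates
            if all(e <= 0 for row in outer(back_v, w) for e in row)]

back_v = [1.0, 2.0]                                  # retrospective velocity
candidates = [[-1.0, -1.0], [-1.0, 1.0], [0.0, 0.0]]  # finite stand-in for F(t,x)
print(regulation_map(candidates, back_v))   # keeps the admissible velocities
```

Here $[-1,1]$ is rejected (one entry of the tensor is positive), while $[-1,-1]$ and the inactive velocity $[0,0]$ are kept.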
Illustrations {#s:Illustrations} ============= The question arises whether it is possible to detect the connection dates *when the monotonicity of one series of a family of temporal series is followed by the reverse (opposite) monotonicity of other series*, in order to detect the influence of each series on the dynamic behavior of the other ones. When the two functions are the same, we obtain their reversal dates when the series achieves its extrema. The *differential connection tensor* measures the *jerkiness* between two functions, smooth or not (temporal series), providing the trend reversal dates of the differential connection tensor. This matrix plays for time series a dynamic rôle analogous to the static rôle played by the correlation matrix of a family of random variables, measuring the covariance between pairs of random coefficients. In other words, we add to the dependence of variables on *random* events their dependence on *time*. The differential connection tensor software provides at each date the coefficients of the differential connection tensor. We use the tensor trendometer for detecting the dynamic correlations between the forty price series of the CAC 40. For instance, on August 6, 2010, the prices are displayed in the following figure ![image](CoursPrixActifCAC40){width=".6\linewidth"} At each date, it provides the $40 \times 40$ matrix displaying the qualitative jerkiness for each pair of series when the trend of the first one is followed by the opposite trend of the second one. At each entry, the existence of a trend reversal is indicated by a circle: ![image](QualitativeJerkinessTensorCAC40){width=".6\linewidth"} The quantitative version replaces the circles by the values of the jerkiness: ![image](JerkinessTensorCAC40){width=".6\linewidth"} In order to analyse further the evolutionary behavior of the CAC 40, we present the analysis of the CAC 40 index only, but over the period from January 3, 1990 to September 25, 2013.
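The qualitative cross-jerkiness test described above (a circle at each entry where the retrospective trend of one series is followed by the opposite prospective trend of another) can be sketched on toy data as follows (our code; the actual software works on the forty CAC 40 price series):

```python
# Sketch (toy data, our construction): flag the dates where the retrospective
# trend of series a is followed by the opposite prospective trend of series b,
# i.e. where the cross connection product is strictly negative.

def cross_reversals(a, b):
    """Indices j with (a_j - a_{j-1}) * (b_{j+1} - b_j) < 0."""
    return [j for j in range(1, min(len(a), len(b)) - 1)
            if (a[j] - a[j - 1]) * (b[j + 1] - b[j]) < 0]

a = [0.0, 1.0, 2.0, 3.0, 2.0]   # rising then falling
b = [0.0, 1.0, 0.0, 1.0, 2.0]   # oscillating
print(cross_reversals(a, b))    # dates where a's rise is followed by b's fall
```

Running the same test over all ordered pairs of series fills one date's entry of the qualitative $40 \times 40$ matrix.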
The first figure displays the series of the CAC 40 indexes (closing prices). The vertical bars indicate the reversal dates and their height displays their jerkiness. ![image](CAC40-1990-2013-e){width=".6\linewidth"} The 2000 Internet crisis (around May 4, 2000) and the 2008 “subprime” crisis (around October 10, 2008) are detected and measured by the trendometer: [m[0.474]{}m[0.474]{}]{} ![image](CAC40-Internet-e){width="\linewidth"} & ![image](CAC40-subprimes-e){width="\linewidth"}\ The next figure displays the velocities of the jerkiness between two consecutive trend reversal dates, a ratio involving the variation of the jerkiness and the duration of the congruence period (bull and bear): ![image](VelocityTrendReversal){width=".7\linewidth"} The following one displays the classification of trend speeds and absolute values of the accelerations by decreasing jerkiness: [m[0.474]{}m[0.474]{}]{} ![image](ClassSpeed-Jerk){width="\linewidth"} & ![image](CAC40AccelVersusJerkiness){width="\linewidth"}\ The analysis of this series shows that the jerkiness at minima (bear periods) is often higher than that at maxima (bull periods). For the CAC 40, “bear jerkiness” accounts for 57% against 43% for “bull jerkiness”. The next table provides the first dates by decreasing jerkiness. The most violent are those of the subprime crisis (in bold), then those of the year 2006 and, next, the dates of the Internet crisis (in italics).
| ***Date*** | **Jerkiness** | ***Date*** | **Jerkiness** | ***Date*** | **Jerkiness** |
|------------|---------------|------------|---------------|------------|---------------|
| 10/10/**2008** | 94507,21 | 03/01/*2001* | 15153,31 | 17/02/*2000* | 10025,57 |
| 23/01/**2008** | 57315,90 | 11/09/*2002* | 15111,43 | 28/10/*2002* | 9962,69 |
| 07/05/2010 | 53585,50 | 10/03/*2000* | 15055,45 | 01/09/1998 | 9917,22 |
| 05/12/**2008** | 44927,23 | 10/08/2011 | 15011,24 | 15/02/**2008** | 9905,51 |
| 03/10/**2008** | 43319,41 | 27/08/*2002* | 14958,41 | 19/04/1999 | 9887,67 |
| 19/09/**2008** | 37200,13 | 22/11/*2000* | 14768,91 | 26/10/*2001* | 9556,17 |
| 05/04/*2000* | 34609,80 | 03/04/*2000* | 14280,35 | 29/06/*2000* | 9470,44 |
| 21/01/**2008** | 34130,42 | 03/04/*2001* | 14003,47 | 25/02/*2000* | 9438,07 |
| 16/10/**2008** | 29794,42 | 18/07/*2002* | 13813,67 | 27/03/*2001* | 9436,84 |
| 21/11/**2008** | 28840,69 | 19/12/*2000* | 13743,01 | 15/05/*2000* | 9411,84 |
| 04/12/**2000** | 27861,03 | 12/03/2003 | 13707,93 | 04/10/2011 | 9409,14 |
| 12/11/*2001* | 26039,07 | 12/09/**2008** | 13682,85 | 17/01/*2000* | 9398,39 |
| 22/03/*2001* | 25128,11 | 01/12/**2008** | 13207,66 | 11/08/1998 | 9320,83 |
| 27/04/*2000* | 24577,70 | 29/10/1997 | 13085,95 | 20/11/**2007** | 9291,91 |
| 17/03/**2008** | 24416,22 | 04/03/2009 | 12845,84 | 05/10/1998 | 9277,96 |
| 14/10/**2008** | 24007,60 | 14/03/**2007** | 12801,09 | 29/07/1999 | 9253,97 |
| 05/08/*2002* | 22021,61 | 24/06/*2002* | 12658,98 | 04/12/**2007** | 9200,48 |
| 14/09/*2001* | 21658,15 | 02/08/2012 | 12628,14 | 04/02/*2000* | 9093,25 |
| 10/08/**2007** | 21252,50 | 24/05/*2000* | 12456,94 | 02/10/*2002* | 8959,94 |
| 13/11/*2000* | 20662,32 | 10/05/*2000* | 12411,27 | 13/09/*2000* | 8897,37 |
| 22/01/**2008** | 20184,96 | 28/07/*2000* | 12145,83 | 10/05/2010 | 8877,39 |
| 14/08/*2002* | 20052,16 | 23/02/*2001* | 11960,59 | 30/09/*2002* | 8845,61 |
| 28/10/1997 | 19720,61 | 04/11/**2008** | 11904,50 | 04/11/1998 | 8843,75 |
| 14/06/*2002* | 19114,56 | 08/06/***2006*** | 11773,65 | 09/08/2011 | 8833,20 |
| 06/11/**2008** | 18900,51 | 30/10/*2001* | 11733,86 | 11/06/*2002* | 8832,22 |
| 03/08/*2000* | 18621,37 | 15/10/*2001* | 11630,50 | 07/07/*2000* | 8797,60 |
| 29/10/*2002* | 18550,19 | 24/03/2003 | 11294,44 | 16/01/*2001* | 8778,74 |
| 08/10/1998 | 18307,12 | 15/03/*2000* | 11232,52 | 27/04/1998 | 8721,52 |
| 02/05/*2000* | 18087,38 | 17/09/**2007** | 10948,51 | 19/02/**2008** | 8327,20 |
| 21/09/*2001* | 17771,78 | 13/08/**2007** | 10933,30 | 20/11/*2000* | 8299,90 |
| 11/09/*2001* | 17660,69 | 25/10/*2001* | 10809,42 | 03/07/*2002* | 8289,95 |
| 16/08/**2007** | 17398,86 | 02/10/**2008** | 10720,31 | 28/06/*2000* | 8258,67 |
| 16/05/*2000* | 17228,62 | 23/10/*2002* | 10675,86 | 28/06/2010 | 8137,05 |
| 04/04/*2000* | 16958,95 | 25/08/1998 | 10673,02 | 31/01/*2000* | 8093,58 |
| 18/10/*2000* | 16761,07 | 30/03/2009 | 10672,64 | 21/11/*2000* | 8074,23 |
| 29/09/**2008** | 16502,34 | 24/01/**2008** | 10352,96 | 28/01/2009 | 8049,26 |
| 08/08/**2007** | 16048,09 | 20/03/*2001* | 10294,67 | 26/02/**2007** | 8038,76 |
| 21/03/2003 | 15703,11 | 14/12/*2001* | 10253,40 | 31/01/*2001* | 8033,95 |
| 18/09/**2008** | 15506,17 | 31/07/**2007** | 10134,80 | 26/11/*2002* | 7933,90 |
| 22/05/***2006*** | 15470,19 | 26/04/*2000* | 10093,65 | 08/08/2011 | 7821,87 |
| 05/09/**2008** | 15406,87 | 02/09/1999 | 10080,12 | 18/05/2010 | 7793,80 |

Other Examples of Differential Connection Tensors {#s:Remarks} ================================================= Prospective and Retrospective Derivatives of Set-Valued Maps {#s:PRDerivatives} ------------------------------------------------------------ We summarize the concept of graphical derivatives. **Retrospective and Prospective Graphical Derivatives** \[d:PRderiv\] Consider a set-valued map $F: X \leadsto Y$ from a finite dimensional vector space $X$ to another one $Y$. Let $(x,y) \in \mbox{\rm Graph}(F)$ be an element of its graph. We denote in this study by 1. *retrospective derivative* $\overleftarrow{D}F(x,y): X \leadsto Y$ associating with any direction $u \in X$ the set of elements $v \in Y$ satisfying $$\label{e:} \liminf_{h \mapsto 0+, u_{h} \mapsto u} d \left(v, \frac{y-F(x-hu_{h}) }{h} \right) \; = \; 0$$ 2.
*prospective derivative* $\overrightarrow{D}F(x,y):X \leadsto Y $ associating with any direction $u \in X$ the set of elements $v \in Y$ satisfying $$\label{e:} \liminf_{h \mapsto 0+, u_{h} \mapsto u} d \left(v, \frac{F(x+hu_{h})-y}{h} \right) \; = \; 0$$ The retrospective and prospective difference quotients of $F$ at $(x,y) \in \mbox{\rm Graph}(F)$ are defined by $\displaystyle{ \overleftarrow{\nabla }_{h}F(x,y)(\overleftarrow{u}) := \frac{y-F(x-h\overleftarrow{u})}{h}}$ and $\displaystyle{\overrightarrow{\nabla }_{h}F(x,y)(\overrightarrow{u}) :=\frac{F(x+h\overrightarrow{u})-y}{h}}$. We can reformulate the definition of the (contingent) derivative by saying that it is the *upper Painlevé-Kuratowski limit* of the difference quotients, $$\label{e:} \forall \; \overleftarrow{u}, \; \; \overleftarrow{D}F(x,y)(\overleftarrow{u}) \; = \; \mbox{\rm Limsup}_{h \mapsto 0+, u_{h} \rightarrow \overleftarrow{u}} \overleftarrow{\nabla }_{h}F(x,y)(u_{h})$$ i.e., the retrospective (resp. prospective) derivatives are the cluster points $\overleftarrow{v}$ of $\displaystyle{\overleftarrow{v}_{h} \in \overleftarrow{\nabla }_{h}F(x,y)(u_{h}) }$ (resp. the cluster points $\overrightarrow{v}$ of $\displaystyle{\overrightarrow{v}_{h} \in \overrightarrow{\nabla }_{h}F(x,y)(u_{h})}$). Whenever the set-valued map $F$ is Lipschitz, *the retrospective and prospective difference quotients are bounded*, and thus relatively compact sets, since the vector spaces are finite dimensional. In this case, the prospective and retrospective derivatives are not empty.
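These difference quotients can be checked numerically in the simplest case of a single-valued, differentiable map. The following is a minimal sketch (a hypothetical illustration, not part of any trendometer software); it also verifies that the difference of the two quotients, divided by $h$, reproduces the central second difference quotient used in the remark on second order derivatives below.

```python
import math

def f(x):
    # Smooth single-valued test map; the set-valued case reduces to this
    # when F(x) is a singleton.
    return math.sin(x)

def prospective_quotient(f, x, u, h):
    # (F(x + h*u) - F(x)) / h
    return (f(x + h * u) - f(x)) / h

def retrospective_quotient(f, x, u, h):
    # (F(x) - F(x - h*u)) / h
    return (f(x) - f(x - h * u)) / h

x, u, h = 0.3, 1.0, 1e-4
fwd = prospective_quotient(f, x, u, h)
back = retrospective_quotient(f, x, u, h)

# For a differentiable function both quotients converge to f'(x)*u.
assert abs(fwd - math.cos(x)) < 1e-3
assert abs(back - math.cos(x)) < 1e-3

# Their difference quotient (fwd - back)/h equals the central second
# difference (f(x+h) + f(x-h) - 2*f(x)) / h**2, here exactly by algebra.
lhs = (fwd - back) / h
rhs = (f(x + h) + f(x - h) - 2 * f(x)) / h**2
assert abs(lhs - rhs) < 1e-6
```

For a genuinely set-valued $F$ the quotients are sets, and the cluster points of selections play the role of `fwd` and `back`.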
Taking the tensor product of both the retrospective and prospective derivatives allows us to define the differential connection tensor:

**Differential Connection Tensor** \[d:DifferentialConnectionTensor\] The *differential connection tensor* $\mathbf{a}_{F}(x,y)[(\overleftarrow{u},\overrightarrow{u}), (\overleftarrow{v},\overrightarrow{v})]$ of retrospective and prospective derivatives of $F$ at $(x,y) \in \mbox{\rm Graph}(F)$ is defined by $$\label{e:DifferentialConnectionTensor} \left\{ \begin{array}{l} \forall \; (\overleftarrow{u},\overrightarrow{u}), \; \overleftarrow{v} \in \overleftarrow{D}F(x,y)(\overleftarrow{u}),\; \overrightarrow{v} \in \overrightarrow{D}F(x,y)(\overrightarrow{u}), \\ \displaystyle{ \mathbf{a}_{F}(x,y) [(\overleftarrow{u},\overrightarrow{u}),(\overleftarrow{v},\overrightarrow{v})] \; := \; \overleftarrow{v} \otimes \overrightarrow{v} } \end{array} \right.$$

**Remark** — A normalized version of the differential connection tensor is defined by $$\label{e:} \left\{ \begin{array}{l} \forall \; (\overleftarrow{u},\overrightarrow{u}), \; \overleftarrow{v} \in \overleftarrow{D}F(x,y)(\overleftarrow{u}),\; \overrightarrow{v} \in \overrightarrow{D}F(x,y)(\overrightarrow{u}), \\ \displaystyle{ \mathbf{a}_{F}(x,y) [(\overleftarrow{u},\overrightarrow{u}),(\overleftarrow{v},\overrightarrow{v})] \; := \; \frac{ \overleftarrow{v} \otimes \overrightarrow{v} }{ \|\overleftarrow{v}\| \| \overrightarrow{v}\|}} \end{array} \right.$$ The normalized version is not that useful whenever we are interested in the signs of the entries of the connection matrix.
$\;\; \blacksquare$ **Remark** — One can associate with the prospective difference quotient $\overrightarrow{\nabla }_{h}F(x,y)(\overrightarrow{u})$ and the retrospective difference quotient $\overleftarrow{\nabla }_{h}F(x,y)(\overleftarrow{u})$ their difference quotient $$\label{e:} \nabla ^{2}F(x,y)(\overleftarrow{u},\overrightarrow{u}) \; := \; \frac{\overrightarrow{\nabla }_{h}F(x,y)(\overrightarrow{u})- \overleftarrow{\nabla }_{h}F(x,y)(\overleftarrow{u}) }{h} \; = \; \frac{F(x+h\overrightarrow{u}) + F(x-h\overleftarrow{u})-2y }{h^{2}}$$ The Painlevé-Kuratowski upper limit of $\nabla ^{2}F(x,y)(\overleftarrow{u},\overrightarrow{u})$ defines the retrospective-prospective second order graphical derivative of $F$ at $(x,y) \in \mbox{\rm Graph}(F)$ by: $$\label{e:} D^{2}F(x,y)(\overleftarrow{u},\overrightarrow{u}) \; := \; \mbox{\rm Limsup}_{h \mapsto 0+, \overleftarrow{u}_{h} \rightarrow \overleftarrow{u}, \overrightarrow{u}_{h} \rightarrow \overrightarrow{u} } \nabla ^{2}F(x,y)(\overleftarrow{u}_{h},\overrightarrow{u}_{h})$$ The differential connection tensor replaces the difference between the retrospective and prospective derivatives by their tensor product. We refer to Section 5.6, p. 315, of *Set-valued analysis*, [@af90sva Aubin & Frankowska], for other approaches to higher order graphical derivatives of set-valued maps. $\;\; \blacksquare$ **Remark** — In 1884, *Giuseppe Peano* proved (see [@Peano1887 *Applicazioni geometriche del calcolo infinitesimale*]) that continuous derivatives are the limits $$\label{e:} \forall \; t \in \symbol{93}a,b\symbol{91}, \; \; \lim_{h \rightarrow 0}\frac{x(t+h)-x(t-h)}{2h} \; = \; \frac{1}{2} \left( \lim_{h \rightarrow 0+} \frac{x(t)-x(t-h)}{h}+ \lim_{h \rightarrow 0+}\frac{x(t+h)-x(t)}{h}\right)$$ of both the retrospective and prospective average velocities (difference quotients) at time $t$.
We follow his suggestion by taking the average of the prospective difference quotient $\overrightarrow{\nabla }_{h}F(x,y)(\overrightarrow{u})$ and the retrospective difference quotient $\overleftarrow{\nabla }_{h}F(x,y)(\overleftarrow{u})$ $$\label{e:} \frac{\overrightarrow{\nabla }_{h}F(x,y)(\overrightarrow{u})+ \overleftarrow{\nabla }_{h}F(x,y)(\overleftarrow{u}) }{2}$$ and taking their Painlevé-Kuratowski upper limits $$\label{e:} \frac{1}{2}\left(\mbox{Limsup}_{h \mapsto 0+, \overrightarrow{u}_{h} \rightarrow \overrightarrow{u}} \overrightarrow{\nabla} _{h}F(x,y)(\overrightarrow{u}_{h})+ \mbox{Limsup}_{h \mapsto 0+, \overleftarrow{u}_{h} \rightarrow \overleftarrow{u}} \overleftarrow{\nabla }_{h}F(x,y)(\overleftarrow{u}_{h})\right)$$ in order to define *Peano graphical derivatives* of $F$ at $(x,y) \in \mbox{\rm Graph}(F)$ depending on *pairs $(\overleftarrow{u},\overrightarrow{u})$ of directions*. $\;\; \blacksquare$

Differential Connection Tensors of Numerical Functions {#s:PREpider}
------------------------------------------------------

When $V: x \in X \mapsto V(x) \in \{-\infty \} \cup\mathbb{R}\cup \{+\infty \}$ is an extended numerical function on $X$, it can also be regarded as a set-valued map (again denoted by) $V: X \leadsto \mathbb{R}$ defined by $$\label{e:} V(x) \; := \left\{ \begin{array}{ccc} \{V(x)\} &\mbox{\rm if}& V(x) \in \mathbb{R} \; \;(\mbox{\rm i.e.,} \; x \; \in \; \mbox{\rm Dom}(V)) \\ \emptyset &\mbox{\rm if not}& \end{array} \right.$$ A slight modification of Theorem 6.1.6, p.
230 of *Set-valued analysis*, [@af90sva Aubin & Frankowska], states that $$\label{e:} \left\{ \begin{array}{l} \overrightarrow{D}V(x)(\overrightarrow{u})\; = \; [\overrightarrow{D}_{\uparrow}V(x)(\overrightarrow{u}), \overrightarrow{D}_{\downarrow} V(x)(\overrightarrow{u})] \\ \mbox{} \\ \overleftarrow{D}V(x)(\overleftarrow{u})\; = \; [\overleftarrow{D}_{\uparrow}V(x)(\overleftarrow{u}), \overleftarrow{D}_{\downarrow} V(x)(\overleftarrow{u})] \end{array} \right.$$ where $$\label{e:} \left\{ \begin{array}{l} \displaystyle{\overrightarrow{D}_{\uparrow}V(x)(\overrightarrow{u}) \; := \; \liminf_{h \rightarrow 0+} \frac{V(x+h\overrightarrow{u})-V(x)}{h} \;\mbox{\rm (epiderivative of $V$)}\; }\\ \displaystyle{\overrightarrow{D}_{\downarrow} V(x)(\overrightarrow{u})\; := \; \limsup_{h \rightarrow 0+} \frac{V(x+h\overrightarrow{u})-V(x)}{h} \;\mbox{\rm (hypoderivative of $V$)}\; }\\ \displaystyle{\overleftarrow{D}_{\uparrow}V(x)(\overleftarrow{u}) \; := \; \liminf_{h \rightarrow 0+} \frac{V(x )-V(x-h\overleftarrow{u})}{h} \; = \; - \overrightarrow{D}_{\downarrow} V(x)(-\overleftarrow{u})}\\ \displaystyle{\overleftarrow{D}_{\downarrow} V(x)(\overleftarrow{u}) \; := \; \limsup_{h \rightarrow 0+} \frac{V(x )-V(x-h\overleftarrow{u})}{h} \; = \; - \overrightarrow{D}_{\uparrow} V(x)(-\overleftarrow{u}) }\\ \end{array} \right.$$ Definition \[d:DifferentialConnectionTensor\] implies that $$\label{e:} \left\{ \begin{array}{l} \forall \; (\overleftarrow{u},\overrightarrow{u}), \; \overleftarrow{v} \in \overleftarrow{D}V(x)(\overleftarrow{u}),\; \overrightarrow{v} \in \overrightarrow{D}V(x)(\overrightarrow{u}), \\ \displaystyle{ \mathbf{a}_{V}(x,y) [(\overleftarrow{u},\overrightarrow{u}),(\overleftarrow{v},\overrightarrow{v})] \; := \; \overleftarrow{v} \overrightarrow{v} } \end{array} \right.$$ since tensor products of real numbers boil down to their multiplication.
Therefore, for any pair $(\overleftarrow{u},\overrightarrow{u})$, the subset of differential connection tensors of retrospective and prospective directions is equal to $$\label{e:} \left\{ \begin{array}{l} \overleftarrow{D}V(x)(\overleftarrow{u}) \otimes \overrightarrow{D}V(x)(\overrightarrow{u}) \; := \\ \displaystyle{\left\{ \overleftarrow{v} \overrightarrow{v} \right\} _{(\overleftarrow{v},\overrightarrow{v}) \in [\overleftarrow{D}_{\uparrow}V(x)(\overleftarrow{u}), \overleftarrow{D}_{\downarrow} V(x)(\overleftarrow{u})] \times [\overrightarrow{D}_{\uparrow}V(x)(\overrightarrow{u}), \overrightarrow{D}_{\downarrow} V(x)(\overrightarrow{u})] }} \end{array} \right.$$

**Reversal Direction Pair** \[d:RevesDirPair\] A pair $(\overleftarrow{u},\overrightarrow{u}) $ of directions $\overleftarrow{u} \in X$ and $\overrightarrow{u} \in X $ is a *reversal direction pair* of $V$ at $x \in \mbox{\rm Dom}(V)$ if $$\label{e:} \overleftarrow{D}_{\uparrow}V(x)(\overleftarrow{u})\, \overrightarrow{D}_{\uparrow}V(x)(\overrightarrow{u}) \; = \; \overleftarrow{D}_{\downarrow} V(x)(-\overrightarrow{u} )\, \overrightarrow{D}_{\downarrow} V(x)(-\overleftarrow{u}) \; < \; 0$$ A direction $u \in X$ is a reversal direction of $V$ at $x$ if the diagonal pair $(u,u)$ is a reversal direction pair.\

This means that a positive (resp. negative) retrospective epiderivative of $V$ at $x$ in the direction $\overleftarrow{u}$ is followed by a negative (resp. positive) prospective epiderivative in the direction $\overrightarrow{u}$, or, respectively, that a positive (resp. negative) retrospective hypoderivative in the direction $- \overrightarrow{u}$ is followed by a negative (resp. positive) prospective hypoderivative in the direction $-\overleftarrow{u}$.
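For a sampled numerical function, e.g. a price series observed at discrete dates, the reversal-direction test reduces to a sign condition on consecutive difference quotients. The following is a minimal sketch, assuming a uniform sampling grid and using the test function $t \mapsto 1-\cos(2t)\cos(3t)$ that serves as an illustration later in this section; it is not the trendometer implementation itself.

```python
import math

def f(t):
    return 1 - math.cos(2 * t) * math.cos(3 * t)

# Sample the function on a grid; for a time series the grid points would
# be the observation dates.
ts = [k * 0.01 for k in range(629)]          # roughly [0, 2*pi]
xs = [f(t) for t in ts]

# An interior index is a reversal date when the retrospective slope
# (x[i] - x[i-1]) and the prospective slope (x[i+1] - x[i]) have
# opposite signs, i.e. their product is negative.
reversals = [i for i in range(1, len(xs) - 1)
             if (xs[i] - xs[i - 1]) * (xs[i + 1] - xs[i]) < 0]

# Each detected index is a local extremum of the sampled series.
for i in reversals:
    assert (xs[i] >= xs[i - 1] and xs[i] >= xs[i + 1]) or \
           (xs[i] <= xs[i - 1] and xs[i] <= xs[i + 1])
assert len(reversals) >= 2
```

The jerkiness of a reversal would then be measured from the magnitude of the slope change at each detected index; that refinement is omitted here.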
Recall that if $V$ achieves a local minimum at $x$, the Fermat rule states that $$\label{e:} \forall \; \overrightarrow{u} \in X, \; \; \overrightarrow{D}_{\uparrow}V(x)(\overrightarrow{u}) \; \geq \; 0 \;\mbox{\rm and}\; \forall \; \overleftarrow{u} \in X, \; \; \overleftarrow{D}_{\downarrow} V(x)(\overleftarrow{u}) \; \leq \; 0$$ and if it achieves a local maximum at $x$, that $$\label{e:} \forall \; \overrightarrow{u} \in X, \; \; \overrightarrow{D}_{\downarrow} V(x)(\overrightarrow{u}) \; \leq \; 0 \;\mbox{\rm and}\; \forall \; \overleftarrow{u} \in X, \; \; \overleftarrow{D}_{\uparrow} V(x)(\overleftarrow{u}) \; \geq \; 0$$ These conditions are not sufficient for characterizing local extrema: convexity or various second order conditions provide sufficient conditions (see *Set-valued analysis*, [@af90sva Aubin & Frankowska], *Variational Analysis*, [@rw91nsa Rockafellar & Wets] and an extensive literature on set-valued and variational analysis). Recall that the prospective epidifferential (or prospective subdifferential) $\overrightarrow{\partial} _{\uparrow}V(x)$ of a function $V$ at $x$ is the set of elements $\overrightarrow{p}_{\uparrow} \in X^{\star}$ such that for any $v \in X$, $\left\langle \overrightarrow{p}_{\uparrow},v \right\rangle \; \leq \; \overrightarrow{D}_{\uparrow}V(x)(v)$. In the same way, we define the retrospective epidifferential (or retrospective subdifferential) $\overleftarrow{\partial} _{\uparrow}V(x)$ of a function $V$ at $x$ as the set of elements $\overleftarrow{p}_{\uparrow} \in X^{\star}$ such that for any $v \in X$, $\left\langle \overleftarrow{p}_{\uparrow},v \right\rangle \; \leq \; \overleftarrow{D}_{\uparrow}V(x)(v)$.
It is equal to the prospective hypodifferential (or prospective superdifferential) $\overrightarrow{\partial}_{\downarrow} V(x) $, the set of elements $\overrightarrow{p}_{\downarrow} \in X^{\star}$ such that for any $v \in X$, $\left\langle \overrightarrow{p}_{\downarrow},v \right\rangle \; \geq \; \overrightarrow{D}_{\downarrow} V(x)(v)$. For instance, the trendometer detects the local extrema of numerical functions, such as the function $t \mapsto 1-\cos(2t)\cos(3t)$:

![image](trendometercos){width=".8\linewidth"}

Tangential Connection Tensors {#s:PRCones}
-----------------------------

The tangent spaces to differentiable manifolds being vector spaces, directions arriving at a point (we may call them *retrospective*) and directions starting from this point (*prospective*) belong to the same vector space. This is no longer the case when the subset is any (closed) subset $K \subset X$ of a finite dimensional vector space $X$. However, we may replace vector spaces by cones. We are indebted to the historical studies [@DoleckiGreco Dolecki & Greco] (in which the authors quote *Maurice Fréchet* stating that “the usefulness of this theory of ‘contingents and paratingents’ was first pointed out by M. Beppo Levi, then by M. Severi, but it is to M. Bouligand and his students that we owe the undertaking of its systematic study.”) and [@GrecoMazzucchiPagani Greco, Mazzucchi & Pagani]. Since *Francesco Severi* and *Georges Bouligand*, a whole menagerie of tangent cones, the definitions of which depend upon the limiting process, has been proposed (among many monographs, see [@af90sva *Set-valued analysis*] and *Variational Analysis*, [@rw91nsa Rockafellar & Wets] for instance). At some points, the tangent cones are not vector spaces, and the opposite of some tangent directions may no longer be tangent.
We suggest regarding the (contingent) tangent cone[^13] as the *prospective tangent cone* to $K$ at $x \in K$ defined by the Painlevé-Kuratowski upper limits $$\label{e:} \overrightarrow{T}_{K}(x) \; := \; \mbox{Limsup}_{h \mapsto 0+} \frac{K-x}{h} \; := \; \left\{ \overrightarrow{v} \; \in \; X \; \mbox{ such that} \; \liminf_{h \mapsto 0+} \frac{d_{K}(x+h\overrightarrow{v})}{h} \; = \; 0\right\}$$ with which we associate the *retrospective tangent cone*[^14] $$\label{e:} \overleftarrow{T}_{K}(x) \; := \; \mbox{Limsup}_{h \mapsto 0+} \frac{x-K}{h} \; := \; \left\{ \overleftarrow{v} \; \in \; X \; \mbox{ such that} \; \liminf_{h \mapsto 0+} \frac{d_{K}(x-h\overleftarrow{v})}{h} \; = \; 0\right\}$$ satisfying $\overleftarrow{T}_{K}(x) \; = \; -\overrightarrow{T}_{K}(x)$. It is natural to consider the tensor product $\overleftarrow{v} \otimes \overrightarrow{v}$ of retrospective and prospective tangent directions. The signs of its entries detect the “blunt” and “sharp” elements of the boundary in the same directions (*trend congruence*) or in opposite directions (*trend reversal*).

Aubin J.-P. (1983) Slow and heavy trajectories of controlled problems: smooth viability domains, in *Multifunctions and Integrands*, 105-116, Lecture Notes in Mathematics \#1091, Ed. Salinetti G., Springer-Verlag

Aubin J.-P. (1991) *Viability Theory*, Birkhäuser

Aubin J.-P. (1996) *Neural Networks and Qualitative Physics: a Viability Approach*, Cambridge University Press

Aubin J.-P. (1998) Connectionist Complexity and its Evolution, in *Equations aux dérivées partielles, Articles dédiés à J.-L. Lions*, 50-79, Elsevier

Aubin J.-P. (2003) Regulation of the Evolution of the Architecture of a Network by Connectionist Tensors Operating on Coalitions of Actors, *J. Evolutionary Economics*, 13, 95-124

Aubin J.-P. (2010) Macroscopic Traffic Models: Shifting from Densities to “Celerities”, *Applied Mathematics and Computation*, 217, 963-971, <http://dx.doi.org/10.1016/j.amc.2010.02.032>

Aubin J.-P.
(2013) Chaperoning State Evolutions by Variable Durations, *SIAM Journal on Control and Optimization*, DOI 10.1137/120879853

Aubin J.-P. (submitted) Transports Regulators of Networks with Junctions Detected by Durations Functions

Aubin J.-P., Bayen A. & Saint-Pierre P. (2011) *Viability Theory. New Directions*, Springer

Aubin J.-P. & Burnod Y. (1998) Hebbian Learning in Neural Networks with Gates, *Cahiers du Centre de Recherche Viabilité, Jeux, Contrôle* \#981

Aubin J.-P., Chen Lx & Dordan O. (2014) *Tychastic Measure of Viability Risk. A Viabilist Portfolio Performance and Insurance Approach*

Aubin J.-P. & Frankowska H. (1990) *Set-valued analysis*, Birkhäuser

Aubin J.-P. & Frankowska H. (1985) Heavy viable trajectories of controlled systems, in Proceedings of *Dynamics of Macrosystems*, IIASA, September 1984, Ed. Aubin J.-P., Saari D. & Sigmund K., Springer-Verlag, 148-167

Aubin J.-P. & Haddad G. (2001) Path-dependent impulse and hybrid control systems, in *Hybrid Systems: Computation and Control*, 119-132, Di Benedetto & Sangiovanni-Vincentelli Eds, Proceedings of the HSCC 2001 Conference, LNCS 2034, Springer-Verlag

Aubin J.-P. & Haddad G. (2002) History (Path) Dependent Optimal Control and Portfolio Valuation and Management, *J. Positivity*, 6, 331-358

Buquoy G. (1815) *Exposition d’un nouveau principe général de dynamique, dont le principe des vitesses virtuelles n’est qu’un cas particulier*, V. Courtier

Dolecki S. & Greco G. H. (2007) Towards Historical Roots of Necessary Conditions of Optimality: Regula of Peano, *Control and Cybernetics*, 36, 491-518

Dordan O. (1995) *Analyse qualitative*, Masson

Frankowska H. (1991) Lower semicontinuous solutions to Hamilton-Jacobi-Bellman equations, in *Proceedings of the 30th IEEE Conference on Decision and Control*, Brighton, UK

Frankowska H. (1993) Lower semicontinuous solutions of Hamilton-Jacobi-Bellman equations, *SIAM J. Control Optim.*, 31, 257-272

Galperin E. A.
(2009) Information transmittal, time uncertainty, and special relativity, *Computers and Mathematics with Applications*, 57, 1554-1573, doi: 10.1016/j.camwa.2008.09.048

Galperin E. A. (2011) Left time derivatives in mathematics, mechanics and control of motion, *Computers and Mathematics with Applications*, 62, 4742-4757

Galperin E. A. (submitted) Information Transmittal, Causality, Relativity and Optimality

Galperin E. A. (submitted) Time and Relativity in Dynamical Systems

Greco G. H., Mazzucchi S. & Pagani E. M. (2010) Peano on derivative of measures: strict derivative of distributive set functions, *Rend. Lincei Mat. Appl.*, 21, 305-339, DOI 10.4171/RLM/575

Haddad G. (1981) Monotone trajectories of differential inclusions with memory, *Isr. J. Math.*, 39, 83-100

Haddad G. (1981) Monotone viable trajectories for functional differential inclusions, *J. Diff. Eq.*, 42, 1-24

Haddad G. (1981) Topological properties of the set of solutions for functional differential inclusions, *Nonlinear Anal. Theory, Meth. Appl.*, 5, 1349-1366

Hale J. K. (1993) *Introduction to Functional Differential Equations*, Springer

Hebb D. (1949) *The Organization of Behavior*, Wiley

Mestschersky I.V. (1897) Dynamics of point with variable mass, in *Mestschersky I.V., Works on Mechanics of Bodies with Variable Mass*, Gostechizdat, Moscow, 1952, 37-188

Levi-Civita (1928) Sul moto di un corpo di massa variabile, *Rendiconti dei Lincei*, 329-333

Peano G. (1887) *Applicazioni geometriche del calcolo infinitesimale*, Fratelli Bocca Editori, <http://historical.library.cornell.edu/cgi-bin/cul.math/docviewer?did=00610002&seq=1>

Porcar M., Danchin A., de Lorenzo V., dos Santos V., Krasnogor N., Rasmussen S. & Moya A. (2011) The ten grand challenges of synthetic life, *Syst Synth Biol*, 5, 1-9, doi: 10.1007/s11693-011-9084-5

Rockafellar R.T. & Wets R. (1997) *Variational Analysis*, Springer-Verlag

Tallos P. (1991) Viability problems for nonautonomous differential inclusions, *SIAM J.
on Control and Optimization*, 29, 253-263

Vinogradova G. (2012) Correction of Dynamical Network’s Viability by Decentralization by Price, *Complex Systems*, 21, 37-55

[^1]: VIMADES (Viabilité, Marchés, Automatique, Décisions), 14, rue Domat, 75005, Paris, France\
aubin.jp@gmail.com, <http://vimades.com/aubin/>

[^2]: **Acknowledgments** *This work was partially supported by the Commission of the European Communities under the 7th Framework Programme Marie Curie Initial Training Network (FP7-PEOPLE-2010-ITN), project SADCO, contract number 264735.*

[^3]: For evolutions (functions of one variable), retrospective derivatives are derivatives from the left and prospective derivatives are derivatives from the right. For functions of several variables, there is no longer a left and a right, so retrospective (or backward) and prospective (or forward) are used instead.

[^4]: This has been pointed out by *Jiri Buquoy*, who in 1812 formulated the equation of motion of a body with variable mass, which attracted only the attention of *Poisson* before being almost forgotten. See [@Buquoy Jiri Buquoy], [@Mestschersky Mestschersky] and [@Levi-Civita Levi-Civita] among the precursors in this area.

[^5]: See for instance [@DanchinAll11 Danchin et al.].

[^6]: Actually, an inductive approximation, whereas (deductive) application refers to approximating derivatives of the idealized world by difference quotients, which are closer to the actual perception of our brains and to the capabilities of digital computers.

[^7]: See *Introduction to Functional Differential Equations*, [@Hale Hale], [@hg81; @hg81b; @hg81c Haddad], summarized in Chapter 12 of *Viability Theory*, [@avt Aubin], [@aubhadclio; @ah01hyb Aubin & Haddad], etc.
[^8]: Recall that the tensor product $p \otimes q$ of two vectors $p := (p_{i} )_{i} \in \mathbb{R}^{\ell} $ and $q :=(q_{j} )_{j} \in \mathbb{R}^{\ell} $ is the rank one linear operator $$p \otimes q \in \mathcal{L}(\mathbb{R}^{\ell},\mathbb{R}^{\ell}) : x \mapsto \left\langle p,x \right\rangle q$$ the entries of which (in the canonical basis) are equal to $(p_{i} q_{j}) _{i,j}$.

[^9]: See Proposition 2.4.1, p. 37 and Chapter 2 of [@a92ia Aubin].

[^10]: See [@ab99nng Aubin & Burnod] and the literature on $\Sigma-\Pi$ neural systems, Section 12.2 of [@absp Aubin, Bayen & Saint-Pierre], *Analyse qualitative*, [@do93liv Dordan], as well as [@a92ia; @aconcom98; @a01ctfcr; @AUB-Leit09; @TransReg-2013 Aubin], [@vino Vinogradova] and the literature on the regulation of networks.

[^11]: See Chapter 2 of *Tychastic Viability Measure of Risk Eradication. A Viabilist Portfolio Performance and Insurance Approach*, [@ACD-ALIM Aubin, Chen Luxi & Dordan], for a more detailed study.

[^12]: Actually, there is a third one, $0$, where $\overleftarrow{D}K(t,x)(0)$ and $\overrightarrow{D}K(t,x)(0)$ are the retrospective and prospective tangent cones studied in Section \[s:PRCones\].

[^13]: See [@af90sva *Set-valued analysis*]. The (adjacent) Peano-Severi-Bouligand tangent cone is defined by the Painlevé-Kuratowski lower limits instead of upper limits $$\label{e:} \mbox{Liminf}_{h \mapsto 0+} \frac{K-x}{h} \; := \; \left\{ \overrightarrow{v} \; \in \; X \; \mbox{ such that} \; \lim_{h \mapsto 0+} \frac{d_{K}(x+h\overrightarrow{v})}{h} \; = \; 0\right\}$$ The smaller *adjacent* tangent cone is used whenever more regularity is required. An element $x \in K$ is said to be *regular* in $K$ if both the contingent and adjacent tangent cones coincide, i.e., when $T_{K}(x)$ is the Painlevé-Kuratowski limit of $\displaystyle{\frac{K-x}{h}}$.
[^14]: Backward evolutions and negative tangents have been introduced in [@fh91cdc; @HJB92 Frankowska] for characterizing lower semicontinuous (viscosity) solutions to Hamilton-Jacobi-Bellman equations.
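The rank-one tensor product recalled in footnote [^8] underlies all the connection matrices used above. A quick numerical check of the two equivalent descriptions (entries $p_i q_j$ versus the action $x \mapsto \langle p, x\rangle q$), written with plain Python lists and no external libraries:

```python
def outer(p, q):
    # Rank-one matrix of p ⊗ q, with entry p_i * q_j at position (i, j),
    # as in the footnote defining the tensor product.
    return [[pi * qj for qj in q] for pi in p]

def apply_rank_one(p, q, x):
    # Action of p ⊗ q on x: <p, x> q.
    inner = sum(pi * xi for pi, xi in zip(p, x))
    return [inner * qj for qj in q]

p, q, x = [1.0, -2.0], [3.0, 0.5], [4.0, 1.0]
m = outer(p, q)

# Contracting x against the first index of the entry matrix (p_i q_j)
# reproduces <p, x> q.
mv = [sum(m[i][j] * x[i] for i in range(2)) for j in range(2)]
assert mv == apply_rank_one(p, q, x)
```

The sign pattern of the entries `m[i][j]` is exactly what the connection matrices of the paper record.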
---
abstract: 'Fake news has altered society in negative ways as evidenced in politics and culture. It has adversely affected both online social network systems as well as offline communities and conversations. Using automatic fake news detection algorithms is an efficient way to combat the rampant dissemination of fake news. However, using an effective dataset has been a problem for fake news research and detection model development. In this paper, we present Fakeddit, a novel dataset consisting of about 800,000 samples from multiple categories of fake news. Each sample is labeled according to 2-way, 3-way, and 5-way classification categories. Prior fake news datasets do not provide multimodal text and image data, metadata, comment data, and fine-grained fake news categorization at this scale and breadth. We construct hybrid text+image models and perform extensive experiments for multiple variations of classification.'
author:
- |
    Kai Nakamura\
    Laguna Blanca School\
    Santa Barbara, CA 93110\
    `kai.nakamura42@gmail.com`\
    Sharon Levy, William Yang Wang\
    Department of Computer Science\
    University of California, Santa Barbara\
    Santa Barbara, CA 93106 USA\
    `{sharonlevy,william}@cs.ucsb.edu`\
bibliography:
- 'acl2019.bib'
title: |
    r/Fakeddit:\
    A New Multimodal Benchmark Dataset for\
    Fine-grained Fake News Detection
---

Introduction
============

Within our progressively digitized society, the spread of fake news and misinformation has grown, leading to many problems such as an increasingly politically divisive climate. The dissemination and consequences of fake news are exacerbated partly by the rise of popular social media applications with inadequate fact-checking or third-party filtering, which enable any individual to broadcast fake news easily and at a large scale [@10.1257/jep.31.2.211]. Though steps have been taken to detect and eliminate fake news, it still poses a dire threat to society [@facebook].
As such, research in the area of fake news detection is essential. To build any machine learning model, one must obtain good training data for the specified task. In the realm of fake news detection, there are several existing published datasets. However, they have several limitations: limited size, modality, and/or granularity. Though fake news may immediately be thought of as taking the form of text, it can appear in other mediums such as images. As such, it is important that standard fake news detection systems detect all types of fake news and not just text data. Our dataset will expand fake news research into the multimodal space and allow researchers to develop stronger fake news detection systems. Our contributions to the study of fake news detection are: - We create a large-scale multimodal fake news dataset consisting of around 800,000 samples containing text, image, metadata, and comments data from a highly diverse set of resources. - Each data sample consists of multiple labels, allowing users to utilize the dataset for 2-way, 3-way, and 5-way classification. This enables both high-level and fine-grained fake news classification. - We evaluate our dataset through text, image, and text+image modes with a neural network architecture that integrates both the image and text data. We run experiments for several types of models, providing a comprehensive overview of classification results. Related Work ============ A variety of datasets for fake news detection have been published in recent years. These are listed in Table \[tab:dataset\], along with their specific characteristics. When comparing these datasets, a few trends can be seen. Most of the datasets are small in size, which can be ineffective for current machine learning models that require large quantities of training data. Only four contain over half a million samples, with CREDBANK and FakeNewsCorpus[^1] being the largest with millions of samples [@mitra2015credbank]. 
In addition, many of the datasets separate their data into a small number of classes, such as fake vs. true. However, fake news can be categorized into many different types [@wardle]. Datasets such as NELA-GT-2018, LIAR, and FakeNewsCorpus provide more fine-grained labels [@NELA; @wang-2017-liar]. While some datasets include data from a variety of categories [@zubiaga2016analysing; @thorne-etal-2018-fever], many contain data from specific areas, such as politics and celebrity gossip [@DBLP:journals/corr/TacchiniBVMA17; @pathak-srihari-2019-breaking; @shu2018fakenewsnet; @fa-kes]. These data samples may contain limited styles of writing due to this categorization. Finally, most of the existing fake news datasets collect only text data, which is not the only mode that fake news can appear in. Datasets such as image-verification-corpus, Image Manipulation, BUZZFEEDNEWS[^2], and BUZZFACE can be utilized for fake image detection, but contain small sample sizes [@christlein2012evaluation; @boididou2018detection; @santia2018buzzface]. It can be seen from the table that compared to other existing datasets, Fakeddit contains a large quantity of data, while also annotating for three different types of classification labels (2-way, 3-way, and 5-way) and comparing both text and image data.

Fakeddit
========

Many fake news datasets are crowdsourced or handpicked from a select few sources that are narrow in size, modality, and/or diversity. In order to expand and evolve fake news research, researchers need to have access to a dataset that exceeds these current dataset limitations. Thus, we propose Fakeddit[^3], a novel dataset consisting of a large quantity of text+image samples coming from large diverse sources. We sourced our dataset from Reddit[^4], a social news and discussion website where users can post submissions on various subreddits. Each subreddit has its own theme like ‘nottheonion’[^5], where people post seemingly false stories that are surprisingly true.
Active Reddit users are able to upvote, downvote, and comment on the submission.

\[fig:dataset\]

Submissions were collected with the pushshift.io API[^6]. Each subreddit has moderators that ensure submissions pertain to the subreddit theme and remove posts that violate any rules, indirectly helping us obtain reliable data. To further ensure that our data is credible, we filtered out any submissions that had a score of less than 1. Fakeddit consists of 825,100 total submissions from 21 different subreddits. We gathered the submission title and image, comments made by users who engaged with the submission, as well as other submission metadata including the score, the username of the author, subreddit source, sourced domain, number of comments, and up-vote to down-vote ratio. 63% of the samples contain both text and images, while the rest contain only text. For our experiments, we utilize these multimodal samples. The samples span over many years and are posted on highly active and popular pages by tens of thousands of diverse individual users from across the world. Because of the variety of the chosen subreddits, our data also varies in its content, ranging from political news stories to simple everyday posts by Reddit users. We provide three labels for each sample, allowing us to train for 2-way, 3-way, and 5-way classification. Having this hierarchy of labels will enable researchers to train for fake news detection at a high level or a more fine-grained one. The 2-way classification determines whether a sample is fake or true. The 3-way classification determines whether a sample is completely true, the sample is fake news with true text (text that is true in the real world), or the sample is fake news with false text. Our final 5-way classification was created to categorize different types of fake news rather than just doing a simple binary or trinary classification.
This can help in pinpointing the degree and variation of fake news for applications that require this type of fine-grained detection. The first label is true and the other four are drawn from the seven types of fake news [@wardle]. We provide examples from each class for 5-way classification in Figure \[fig:dataset\]. The 5-way classification labels are explained below:

| Type | Text | Image | 2-way Val. | 2-way Test | 3-way Val. | 3-way Test | 5-way Val. | 5-way Test |
|------|------|-------|-----------|-----------|-----------|-----------|-----------|-----------|
| Text | BERT | – | 0.769 | 0.773 | 0.761 | 0.767 | 0.741 | 0.739 |
| Text | InferSent | – | **0.783** | **0.786** | **0.776** | **0.779** | **0.746** | **0.746** |
| Image | – | VGG16 | 0.695 | 0.698 | 0.678 | 0.677 | 0.638 | 0.636 |
| Image | – | EfficientNet | 0.560 | 0.561 | 0.560 | 0.561 | 0.547 | 0.545 |
| Image | – | ResNet50 | **0.721** | **0.722** | **0.712** | **0.711** | **0.675** | **0.673** |
| Text+Image | InferSent | VGG16 | 0.841 | 0.839 | 0.829 | 0.831 | 0.808 | 0.806 |
| Text+Image | InferSent | EfficientNet | 0.787 | 0.788 | 0.780 | 0.784 | 0.749 | 0.746 |
| Text+Image | InferSent | ResNet50 | 0.857 | 0.854 | 0.850 | 0.849 | 0.819 | 0.818 |
| Text+Image | BERT | VGG16 | 0.846 | 0.846 | 0.837 | 0.837 | 0.810 | 0.809 |
| Text+Image | BERT | EfficientNet | 0.787 | 0.788 | 0.780 | 0.783 | 0.746 | 0.746 |
| Text+Image | BERT | ResNet50 | **0.863** | **0.863** | **0.859** | **0.859** | **0.832** | **0.830** |

\[tab:results\]

| Combination Method | 2-way Val. | 2-way Test | 3-way Val. | 3-way Test | 5-way Val. | 5-way Test |
|--------------------|-----------|-----------|-----------|-----------|-----------|-----------|
| Add | 0.810 | 0.814 | 0.797 | 0.799 | 0.786 | 0.783 |
| Concatenate | 0.812 | 0.814 | 0.807 | 0.809 | 0.787 | 0.783 |
| Maximum | **0.863** | **0.863** | **0.859** | **0.859** | **0.832** | **0.830** |
| Average | 0.816 | 0.816 | 0.811 | 0.813 | 0.801 | 0.795 |

\[tab:combine\]

**True:** True content is accurate in accordance with fact. Eight of the subreddits fall into this category, such as usnews[^7] and mildlyinteresting[^8]. The former consists of posts from various news sites.
The latter encompasses real photos with accurate captions. The other subreddits include photoshopbattles[^9], nottheonion, neutralnews[^10], pic[^11], usanews[^12], and upliftingnews[^13]. **Satire/Parody:** This category consists of content that spins true contemporary stories with a satirical tone or adds information that makes them false. One of the four subreddits that make up this label is theonion[^14], with headlines such as “Man Lowers Carbon Footprint By Bringing Reusable Bags Every Time He Buys Gas”. Other satirical subreddits are fakealbumcovers[^15], satire[^16], and waterfordwhispersnews[^17]. **Misleading Content:** This category consists of information that is intentionally manipulated to fool the audience. Our dataset contains three subreddits in this category: propagandaposters[^18], fakefacts[^19], and savedyouaclick[^20]. **Imposter Content:** This category contains the subredditsimulator[^21] subreddit, whose bot-generated content is produced by models trained on a large number of other subreddits. It also includes subsimulatorgpt2[^22]. **False Connection:** Submission images in this category do not accurately support their text descriptions. We have four subreddits with this label, containing posts of images with captions that do not relate to the true meaning of the image. These include misleadingthumbnails[^23], confusing\_perspective[^24], pareidolia[^25], and fakehistoryporn[^26].

Experiments
===========

Fake News Detection
-------------------

Multiple methods were employed for text and image feature extraction. We used InferSent and BERT to generate text embeddings for the titles of the Reddit submissions [@conneau-EtAl:2017:EMNLP2017; @devlin2018bert]. VGG16, EfficientNet, and ResNet50 were utilized to extract features from the Reddit submission thumbnails [@Simonyan15; @tan2019efficientnet; @He2015]. We used the InferSent model because it performs very well as a universal sentence-embedding generator.
For this model, we loaded a vocabulary of the 1 million most common English words and used fastText rather than ELMo embeddings, because fastText performs relatively well for rare words and words that do not appear in the vocabulary [@joulin2016bag; @Peters:2018]. We obtained encoded sentence features of length 4096 for each submission title using InferSent. The BERT model achieves state-of-the-art results on many classification tasks, including Q&A and named entity recognition. To obtain fixed-length BERT embedding vectors, we used the bert-as-service tool, which maps variable-length text/sentences into a 768-element vector for each Reddit submission title [@xiao2018bertservice]. For our experiments, we utilized the pretrained BERT-Large, Uncased model. We utilized the VGG16, ResNet50, and EfficientNet models for encoding images. VGG16 and ResNet50 are widely used by many researchers, while EfficientNet is a relatively newer model. For EfficientNet, we used the smallest variation: B0. For all three image models, we preloaded weights trained on ImageNet, included the top layer, and used the penultimate layer for feature extraction. For our experiments, we excluded submissions that did not have an image associated with them and solely used submission image and title data. We performed 2-way, 3-way, and 5-way classification for each of the three types of inputs: image only, text only, and multimodal (text and image). Before training, we preprocessed the images and text. We resized the images to 224x224. From the text, we removed all punctuation, numbers, and revealing words such as “PsBattle” that automatically reveal the subreddit source. For the savedyouaclick subreddit, we removed the text following the “\|” character and classified the submission as misleading content.
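As a rough sketch of the preprocessing steps above (the helper names and the revealing-word list are illustrative, not from the paper):

```python
import string

# Illustrative set of source-revealing tokens; the paper mentions "PsBattle".
REVEALING_WORDS = {"psbattle"}

def clean_title(title: str) -> str:
    """Lowercase a submission title, strip punctuation and digits,
    and drop tokens that reveal the subreddit source."""
    title = title.lower()
    title = title.translate(str.maketrans("", "", string.punctuation + string.digits))
    tokens = [t for t in title.split() if t not in REVEALING_WORDS]
    return " ".join(tokens)

def resize_image(img):
    """Constrain an image (e.g. a PIL.Image) to the 224x224 input size
    expected by VGG16/ResNet50/EfficientNet-B0."""
    return img.resize((224, 224))

print(clean_title("PsBattle: cat number 42!"))  # cat number
```

This is only a sketch of the described cleaning; the actual pipeline would also handle the per-subreddit rules (e.g. the savedyouaclick delimiter).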
When combining the features in multimodal classification, we first condensed the features into 256-element vectors through a trainable dense layer and then merged them through four different methods: add, concatenate, maximum, and average. These merged features were then passed through a fully connected softmax predictor.

Results
-------

The results are shown in Tables \[tab:results\] and \[tab:combine\]. We found that the multimodal features performed best, followed by text-only and then image-only features, in all instances. Thus, having both image and text improves fake news detection. For image and multimodal classification, ResNet50 performed best, followed by VGG16 and EfficientNet. In addition, BERT generally achieved better results than InferSent for multimodal classification; however, for text-only classification, InferSent outperformed BERT. The “maximum” method of merging image and text features yielded the highest accuracy, followed by average, concatenate, and add. Overall, the multimodal model that combines BERT text features and ResNet50 image features through the maximum method performed best.

Conclusion
==========

In this paper, we presented a novel dataset for fake news research, Fakeddit. Compared to previous datasets, Fakeddit provides a large quantity of text+image samples with multiple labels for various levels of fine-grained classification. We created detection models that incorporate both modalities of data and conducted experiments showing that there is still room for improvement in fake news detection. Although we do not utilize the submission metadata or the comments made by users on the submissions, we anticipate that these features will be useful for further research. We hope that our dataset can be used to advance efforts to combat the ever-growing spread of misinformation.

Acknowledgments {#acknowledgments .unnumbered}
===============

We would like to acknowledge Facebook for the Online Safety Benchmark Award.
The authors are solely responsible for the contents of the paper, and the opinions expressed in this publication do not reflect those of the funding agencies. [^1]: https://github.com/several27/FakeNewsCorpus [^2]: https://github.com/BuzzFeedNews/2016-10-facebook-fact-check [^3]: <https://github.com/entitize/fakeddit> [^4]: https://www.reddit.com/ [^5]: https://www.reddit.com/r/nottheonion [^6]: https://pushshift.io/ [^7]: https://www.reddit.com/r/usnews [^8]: https://www.reddit.com/r/mildlyinteresting [^9]: https://www.reddit.com/r/photoshopbattles [^10]: https://www.reddit.com/r/neutralnews [^11]: https://www.reddit.com/r/pic [^12]: https://www.reddit.com/r/usanews [^13]: https://www.reddit.com/r/upliftingnews [^14]: https://www.reddit.com/r/theonion [^15]: https://www.reddit.com/r/fakealbumcovers [^16]: https://www.reddit.com/r/satire [^17]: https://www.reddit.com/r/waterfordwhispersnews [^18]: https://www.reddit.com/r/propagandaposters [^19]: https://www.reddit.com/r/fakefacts [^20]: https://www.reddit.com/r/savedyouaclick [^21]: https://www.reddit.com/r/subredditsimulator [^22]: https://www.reddit.com/r/subsimulatorgpt2 [^23]: https://www.reddit.com/r/misleadingthumbnails [^24]: https://www.reddit.com/r/confusing\_perspective [^25]: https://www.reddit.com/r/pareidolia [^26]: https://www.reddit.com/r/fakehistoryporn
---
abstract: 'The use of multimedia content has increased hugely in recent times, becoming one of the most important services for the users of mobile networks. Consequently, network operators struggle to optimize their infrastructure to support the best possible video service provision. As an additional challenge, 5G introduces the concept of network slicing as a new paradigm that presents a completely different view of network configuration and optimization. A main challenge of this scheme is to establish which specific resources would provide the necessary quality of service for the users of the slice. To address this, the present work proposes a complete framework to support the slice negotiation process through the estimation of the provided Video Streaming Key Quality Indicators (KQIs), which are calculated from network low-layer configuration parameters and metrics. The proposed estimator is then evaluated in a real cellular scenario.'
bibliography:
- 'Bibliography.bib'
title: '[Estimation of Video Streaming KQIs for Radio Access Negotiation in Network Slicing Scenarios]{}'
---

Mobile networks, Optimization, Network Slicing, 5G, Video Streaming, QoE, KQIs.

Introduction
============

Fifth generation mobile networks (5G) are expected to allow very flexible network configurations, able to provide connectivity to different services with heterogeneous requirements in an optimal way. Here, three service categories are expected to be the main targets for 5G provision: enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC) and Ultra Reliable Low Latency Communications (URLLC). 5G does not intend to offer these different services over a unique radio interface and set of resources, but rather to allocate and configure different resources in order to fulfill their differentiated requirements.
From this, the “network slicing” concept arises, which consists of a virtual division and sharing of the network elements [@netSlicing]. In this way, the slicing of the network enables network operators to offer resources tailored to specific vertical industries, generally referred to simply as “verticals” [@Vertical]. These verticals (e.g. a car manufacturer, a venue administrator, a shopping mall management, a factory owner, etc.) aim to agree with the operator on a specific quality of service to be provided to their associated end users (e.g. a specific set of vehicles, the attendees of a sports match in a stadium, the customers in a mall, the robots and sensors in a factory, etc.). In this scheme, the classical approach, where cellular networks were monitored based on performance metrics coming from the low layers (e.g. physical, link) of the protocol stack and focusing on radio access indicators (e.g. MAC throughput, radio quality, etc.), becomes insufficient to provide a proper view of how the network is going to support the end-to-end (E2E) requirements of the verticals. In fact, the process of slice negotiation between the verticals and the operators is expected to focus on E2E service-specific metrics (e.g. video resolution). Such application-layer metrics are known as key quality indicators (KQIs) [@morel2017quality]. In this new approach, the management task becomes more complex due to the challenge of obtaining KQIs during network operation. The wide adoption of encryption in higher-layer protocols, as well as the limited and generally unavailable access to the application logs in the user equipment (UE), makes the full acquisition of KQIs during network operation unfeasible. As a result, the adoption of tools able to estimate KQIs based on metrics and configuration parameters coming from the lower layers of the cellular network is deemed necessary, as these are available to the operator through the control and management planes.
Such an approach will allow network management actions aimed at improving the E2E performance of specific services, as well as the proper assignment of resources and the maintenance of the slices negotiated between verticals and operators. In this scope, there have been some approaches for video streaming services. In [@pan2018qoe], a system able to estimate the bitrate of YouTube encrypted video streaming over HTTPS is proposed. In works such as [@jia2016measuring], [@chen2014method] or [@de2014qoe], models to predict the Quality of Experience (QoE) of this service are presented, using KPIs and HAS (HTTP Adaptive Streaming) profiles. However, all those references focus on single QoE scores per service, and not on multiple KQIs, which are the ones expected to be used in network slicing as the basis for slice agreements, as they provide far finer granularity of the service provision. In [@vaser2015qos], the qualitative relationship between KPIs and KQIs for video streaming and voice services is analyzed, without however providing tools for their numerical estimation, focusing instead on QoE expressions in non-slicing scenarios. Therefore, to the authors' knowledge, no previous works have addressed the challenge of translating application-layer requirements into specific radio configurations and resources in slicing scenarios. Beyond this state of the art, the present work proposes a novel framework for the support of the negotiation, establishment and maintenance of network slices based on the estimation of the KQIs of video streaming via regression models. The proposed estimation system is then evaluated in a real cellular network. In this way, Section \[sec:System\] presents the general architecture of the proposed framework, detailing its elements.
In Section \[sec:evaluation\], the tasks of estimating the KQIs from lower-layer metrics and configuration management parameters (CMs) are defined and evaluated for different machine learning (ML) modeling techniques, using a real cellular network testbed. Finally, Section \[sec:conclusion\] presents the conclusions of the work.

Proposed system {#sec:System}
===============

As described in [@D32], slice negotiation processes start when a vertical industry agent demands a set of E2E service-oriented requirements in terms of KQIs, e.g. 90% of the time at 1440p resolution for the users of the slice. The price of the associated slices has to be set by the operator. Both vertical and operator then start an iterative negotiation process where the vertical might reduce the requirements and/or duration of its demand and the operator would accordingly readjust its price. After one or several iterations, performed by their automated agents, a final price and set of E2E KQIs should be agreed on. After that, the operator will establish the slice with the set of radio and core resources necessary to support the agreed KQIs.

![System architecture[]{data-label="fig:arq"}](arquitecturaSistema3.pdf){width="0.9\columnwidth"}

During the negotiation process, as well as for the initial configuration of the slice and its maintenance, the operator must be able to estimate the KQIs that the UEs will experience for certain configurations of the network in their available radio conditions. Where previous approaches assumed a small set of fixed and known configuration options, the objective of the present work is to automate the process and achieve the most efficient network resource allocation. To do so, the KQI estimation framework shown in Figure \[fig:arq\] is proposed for the support of network slice negotiations.
This is composed of four main blocks: the Service Experience Acquisition (SEA), the Modelling System (ModSys), the Operator Slice Negotiation Agent (OSNA) and the Dynamic Slice Allocation (DySA). These blocks have different functionalities as part of two main stages: the training of the system and its operational phase. The training phase is dedicated to gathering data at different layers and elements of the cellular network and generating the ML models able to estimate the KQIs during the operational phase. In this stage, service acquisition UEs (SaUEs), that is, UEs where the operator has access to application-layer metrics (e.g. drive test terminals, or normal users with apps for indicator extraction), are required. The main activity of this stage is performed by the Service Experience Acquisition (SEA) and the KQI Modelling System (ModSys). The operational phase focuses on the support of the slice negotiation and maintenance, using the models created in the training phase.

Service Experience Acquisition (SEA) {#sec:SEA}
------------------------------------

This block is dedicated to acquiring the measurements needed from the SaUEs and the network elements for the subsequent modelling of the KQIs of the service. To this end, the system tests multiple configurations (e.g. different bandwidths) of the network in different radio conditions (e.g. low and high coverage environments), executing multiple instances of the service for each of them. This process can be automated through the control of the SaUEs and the OAM platform of the network. For video services, the SaUEs execute multiple video playbacks, obtaining their KQIs. These KQIs of the video service are defined by the 3GPP [@3gpp.32.862]: initial time, video bitrate and video stalls, i.e. the moments when the image is frozen.
At the same time, the radio conditions of the network are measured by parameters such as RSRP (*Reference Signal Received Power*), RSRQ (*Reference Signal Received Quality*) or RSSI (*Received Signal Strength Indicator*), and the network configuration (OAM data) used during the different playbacks is recorded. In order to properly acquire all these data, the SEA block defines and executes a measurement campaign through different calls to the SaUEs, where the duration of the campaign $T$ can be estimated as: $$\label{eq:durationExp} T = \beta \cdot \gamma \cdot ( n \cdot (\iota + \Delta \iota)+\tau) ,$$ where $\beta$ is the number of base stations where the measurement campaign will be performed and $\gamma$ the number of possible slice configurations to be tested. The number of service executions, i.e. video playbacks, for each configuration is represented by $n$, while $\iota$ corresponds to the video length and $\Delta \iota$ to the time required between executions to relaunch the experiment. Finally, $\tau$ represents the slice reconfiguration time.

Modelling System (ModSys) {#sec:ModSyS}
-------------------------

Continuing the training phase, the data gathered by the SEA block is stored in the training database. From this, the ModSys is in charge of generating the KQI estimation functions from both SaUE and cellular network data (KPIs and CMs). In this way, for each KQI $\varphi$, a regression function $f$ shall be defined such that: $$\varphi'(t) = f({\boldsymbol\Psi(t)}, {\boldsymbol\vartheta(t)}, {\boldsymbol\Gamma(t)})$$ where $\varphi'(t)$ denotes the estimated value of $\varphi(t)$ at instant $t$, calculated by the regression function $f$. This function takes as inputs [$\boldsymbol\Psi(t)$]{}, which represents the set of measured KPIs; [$\boldsymbol\vartheta(t)$]{}, corresponding to the different CMs of the network; and $\boldsymbol\Gamma(t)$, the radio conditions such as RSRP or RSRQ.
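The campaign-duration expression \[eq:durationExp\] above is a simple product; a minimal helper with purely illustrative parameter values makes it concrete:

```python
def campaign_duration(beta, gamma, n, iota, delta_iota, tau):
    """T = beta * gamma * (n * (iota + delta_iota) + tau), all times in seconds.

    beta: number of base stations; gamma: slice configurations to test;
    n: playbacks per configuration; iota: video length; delta_iota: gap
    between playbacks to relaunch the experiment; tau: slice reconfiguration time.
    """
    return beta * gamma * (n * (iota + delta_iota) + tau)

# Illustrative values (not from the paper): 4 base stations, 4 configurations,
# 50 playbacks of a 60 s video, 10 s relaunch gap, 120 s reconfiguration time.
T = campaign_duration(beta=4, gamma=4, n=50, iota=60, delta_iota=10, tau=120)
print(T)  # 57920 seconds, i.e. roughly 16 hours
```

This kind of back-of-the-envelope estimate is what lets the SEA block size a measurement campaign before launching it.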
The construction of these models can be performed with different regression techniques [@LR]. As these techniques can have varying accuracy for different KQIs and conditions, for each KQI the different techniques are used to simultaneously train regression models. These models are then evaluated using k-fold cross-validation with the training data, measuring their performance in terms of the coefficient of determination ([$R^2$]{}) [@LR]: $$R^{2} = \frac{\sum ({\varphi'} - \bar{{\varphi'}})^{2}}{\sum (\varphi - \bar{\varphi})^{2}}, \label{eq:rsq}$$ where $\varphi$ and $\varphi'$ are the measured and estimated values, and $\bar{\varphi}$ and $\bar{{\varphi'}}$ their respective means. This expression can also be seen as a normalization of the standard deviation of the residuals, that is, of the RMSE (Root Mean Square Error). For each KQI, the model showing the best accuracy is then selected to be used during the operational phase. In this phase, the ModSys is called in order to generate KQI estimates based on the currently available low-layer indicators and the possible slice configurations. Additional retraining, or online training and best-estimator selection, can also be implemented whenever relevant new data coming from the SaUEs is available.

Operator Slice Negotiation Agent (OSNA) {#sec:OSNA}
---------------------------------------

At any point during the operational phase, a vertical industry may trigger a network slice negotiation with the network operator. The selected estimators from the ModSys block will then be used by the operator slice negotiation agent (OSNA) to support the estimation of the required configuration/resources (and therefore pricing) for the expected radio conditions.
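The per-KQI scoring-and-selection step can be sketched independently of any particular ML library. Below, trivial callables stand in for the trained regressors (LR, DTR, GPR, etc.), and the standard $1 - SS_{\rm res}/SS_{\rm tot}$ form of $R^2$ is used for scoring; all names and numbers are illustrative:

```python
def r_squared(measured, estimated):
    """Coefficient of determination, R^2 = 1 - SS_res / SS_tot."""
    mean = sum(measured) / len(measured)
    ss_tot = sum((y - mean) ** 2 for y in measured)
    ss_res = sum((y - f) ** 2 for y, f in zip(measured, estimated))
    return 1.0 - ss_res / ss_tot

def select_best_model(models, features, targets):
    """Return the (name, model) pair with the highest R^2 on the given data.

    models: dict mapping technique name -> callable(feature_vector) -> estimate.
    """
    scored = {name: r_squared(targets, [m(x) for x in features])
              for name, m in models.items()}
    best = max(scored, key=scored.get)
    return best, models[best]

# Toy stand-ins for trained regressors mapping a feature vector to a KQI estimate.
models = {"lr": lambda x: 2 * x[0], "dtr": lambda x: 2 * x[0] + 0.1}
name, _ = select_best_model(models, [(1,), (2,), (3,)], [2.0, 4.0, 6.0])
print(name)  # lr (it reproduces the targets exactly, so R^2 = 1)
```

In the real ModSys the candidates would be trained with k-fold cross-validation before this comparison; the sketch only captures the final "pick the best $R^2$" step.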
This capability is key to the proper implementation of the slices negotiated with the verticals, as well as for the operator to finely tune the resources required to provide the proper performance to its clients.

Dynamic Slice Allocation (DySA) {#sec:DySA}
-------------------------------

Once a slice has been negotiated, the experienced KQIs of the UEs in that slice can change dynamically due to the variable radio conditions (e.g. if the UEs associated with the slice move to an area with poor coverage). Where classical approaches consider a static allocation of resources for the slice, the proposed system introduces the concept of Dynamic Slice Allocation (DySA). In order to maintain the quality of service agreed with the vertical, the DySA block is in charge of using the ModSys estimators to adapt the radio resources accordingly. To do so, the DySA monitors the low-layer metrics from the network operator OAM and the control plane (for the UEs' radio conditions). Although this can be done in different ways, in our implementation a RESTful interface is used. In order to establish the proper slice configuration, a set of automatically generated thresholds is compared with the estimated KQIs for each possible resource configuration of the network under the current radio and network conditions. These thresholds are constructed based on the available regression models in the ModSys by: $$\label{eq:TH} \varrho (t) = \varphi'(t) + \alpha$$ In this expression, $\alpha$ represents a security margin, which can be estimated based on the performance of the model during the training phase, and $\varphi'(t)$ denotes the value of the KQI estimated by the ML model selected by the ModSys block. Finally, $\varrho(t)$ represents the resulting threshold for the KQI under a given network configuration at time instant $t$.
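A minimal sketch of this threshold-based selection, assuming a discrete set of candidate configurations and purely illustrative throughput numbers (the function name and values are not from the paper):

```python
def select_configuration(estimates, alpha, target, higher_is_better=True):
    """Pick the configuration whose threshold (estimate + alpha) complies with
    the negotiated target KQI and is closest to it, to avoid over-provisioning.

    estimates: dict mapping configuration name -> estimated KQI value.
    alpha: security margin added to each estimate to form the threshold.
    Returns None when no configuration can support the negotiated KQI.
    """
    thresholds = {cfg: est + alpha for cfg, est in estimates.items()}
    if higher_is_better:
        compliant = {c: t for c, t in thresholds.items() if t >= target}
    else:
        compliant = {c: t for c, t in thresholds.items() if t <= target}
    if not compliant:
        return None
    return min(compliant, key=lambda c: abs(compliant[c] - target))

# Illustrative: estimated mean video throughput (Mbps) per candidate slice bandwidth.
est = {"5MHz": 3.1, "10MHz": 6.4, "20MHz": 12.8}
print(select_configuration(est, alpha=0.5, target=5.0))  # 10MHz
```

Choosing the compliant configuration closest to the target reflects the intent of the DySA: meeting the agreed KQI without tying up more radio resources than necessary.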
In this way, by selecting the configuration whose threshold is closest to and compliant with the target KQI requirement, the DySA is able to obtain the configuration adequate to the negotiated slice conditions.

Evaluation {#sec:evaluation}
==========

In order to evaluate the estimation capabilities of the proposed system, a wide dataset is built and then applied to the assessment and analysis of key regression techniques: Linear Regression (LR) [@LR], Stepwise Linear Regression (SW-LR) [@LR], Decision Tree (DTR) [@DT], Gaussian (SVM-G), Cubic (SVM-C) and Quadratic (SVM-Q) Support Vector Machine [@SVM] and Gaussian Process Regression (GPR) [@GP]. To acquire the training data, the video service is launched on a SaUE (based on a computer connected to an LTE testbed [@smartCampus] through an LTE stick) with the help of Selenium, a software framework that enables automation and application testing in any browser. In the background, a Python script processes all the data from the DASH client in order to acquire the KQIs of the video service.

Estimation performance
----------------------

A dataset of 800 playbacks across 4 different base stations and 4 different configurations is gathered, and 10-fold cross-validation is performed with 70% of the data for training and 30% for testing. Figure \[fig:rsquared\] shows the value of $R^2$ for the different regression mechanisms presented in Subsection \[sec:ModSyS\] for the different video service KQIs: initial time, average throughput and the percentage of the video watched at each quality during the playback [@3gpp.32.862]. For the latter, we distinguish between the percentages of the video played at each resolution: 360p, 720p, 1080p and 1440p, represented in the figure as %Q360p, %Q720p, %Q1080p and %Q1440p, respectively. As can be observed in Figure \[fig:rsquared\], the models generated by GPR and DTR present the best values of [$R^2$]{} for all KQIs.
The SVM models, in their three variants, also achieve good estimation values ([$R^2$]{}[$>$]{}0.8). Moreover, it is observed that the models for the 720p and 1080p resolutions perform worse than the others. This is due to the video client's frequent switches between these two qualities, which hinder the estimation. However, with the model obtained for 360p, values of [$R^2$]{} close to 1 are obtained. In this way, estimation errors can be overcome in order to assure the appropriate performance of the slice configured by the DySA.

![Coefficient of determination ([$R^2$]{})[]{data-label="fig:rsquared"}](r_squared_v2.pdf){width="\columnwidth"}

As an example of the system capabilities, Figure \[fig:estimation\] presents the prediction of the mean video throughput by the DTR model. As can be observed, the estimated values are close to the actual measured KQIs, although there are a few outliers. These are caused by the dynamic behavior of DASH, which sometimes switches between two consecutive qualities. Such outliers are taken into account in the DySA thresholds through the security margins.

![Measured and estimated average throughput[]{data-label="fig:estimation"}](estimation_v3.pdf){width="\columnwidth"}

Additionally, the estimation time of the models, that is, how long it takes for them to calculate the estimated KQI from their inputs, is also measured, as it determines how fast the OSNA and the DySA can obtain new calculations of resources. The training time is not as critical, as training is performed only for the initial generation of the models and sparsely for their retraining.

![Estimation time of different machine learning mechanisms[]{data-label="fig:time"}](time3.pdf){width="0.75\columnwidth"}

The distributions of the estimation times for 1000 executions of the algorithms for the average throughput on a common PC, with an Intel Core i7-8550U, are represented in the boxplots of Figure \[fig:time\].
As can be seen, DTR is not only one of the best techniques in terms of performance, but also the fastest. GPR, the other technique that best estimates the KQIs, with slightly superior accuracy, requires far more time than the other mechanisms due to the higher complexity of the regression function defined by this model [@GP].

Conclusions {#sec:conclusion}
===========

Although network slice negotiation processes are expected to be one of the main characteristics of 5G networks, the translation of the high-layer E2E requirements of users and verticals into specific radio-access slice configurations has remained largely unaddressed. In this area, this work has presented a framework for the application of KQI estimation to support slice negotiation, resource allocation and dynamic maintenance, with a special focus on video streaming services. The defined system has been evaluated in a real indoor cellular network under realistic streaming conditions and protocols. The results have shown that the proposed ML algorithms provide reliable estimation of the quality perceived by users.
--- abstract: '[Ly$\alpha$]{}nebulae, or “[Ly$\alpha$]{}blobs”, are extended (up to $\sim$100kpc), bright ([$L_{\rm Ly\alpha}$]{} $\gtrsim 10^{43}$ [erg s$^{-1}$]{}) clouds of [Ly$\alpha$]{}emitting gas that tend to lie in overdense regions at $z$ $\sim$ 2–5. The origin of the [Ly$\alpha$]{}emission remains unknown, but recent theoretical work suggests that measuring the polarization might discriminate among powering mechanisms. Here we present the first narrowband imaging polarimetry of a radio-loud [Ly$\alpha$]{}nebula, B3 J2330+3927 at $z=3.09$, with an embedded active galactic nucleus (AGN). The AGN lies near the blob’s [Ly$\alpha$]{}emission peak and its radio lobes align roughly with the blob’s major axis. With the SPOL polarimeter on the 6.5m MMT telescope, we map the total ([Ly$\alpha$]{}+ continuum) polarization in a grid of circular apertures of radius 0.6$^{\prime\prime}$ (4.4kpc), detecting a significant ($>2\sigma$) polarization fraction in nine apertures and achieving strong upper limits (as low as 2%) elsewhere. The polarization fraction increases from $< 2$% at $\sim$5kpc from the blob center to 17% at $\sim$15–25kpc. The detections are distributed asymmetrically, roughly along the nebula’s major axis. The polarization angles $\theta$ are mostly perpendicular to this axis. Comparing the [Ly$\alpha$]{}flux to that of the continuum, and conservatively assuming that the continuum is highly polarized (20–100%) and aligned with the total polarization, we place lower limits on the polarization of the [Ly$\alpha$]{}emission, ranging from no significant polarization at $\sim$5 kpc from the blob center to 3–17% at 10–25kpc. Like the total polarization, the [Ly$\alpha$]{}polarization detections occur more often along the blob’s major axis.' author: - | Chang You, Ann Zabludoff, Paul Smith, Yujin Yang, Eunchong Kim,\ Buell Jannuzi, Moire K. M.
Prescott, Yuichi Matsuda, Myung Gyoon Lee title: 'Mapping the Polarization of the Radio-Loud [Ly$\alpha$]{}Nebula B3 J2330+3927' ---

Introduction
============

Giant (up to $\sim$100kpc) gaseous, [Ly$\alpha$]{}-emitting nebulae, also known as [Ly$\alpha$]{}“blobs" [@Steidel2000; @Matsuda2004; @Dey2005; @Prescott2008; @Yang2009], are extremely luminous ([$L_{\rm Ly\alpha}$]{} $\gtrsim 10^{43}$ [erg s$^{-1}$]{}) and were first discovered in overdense regions of the high-redshift ($z$ $\sim$ 2–5) Universe [@Matsuda2005; @Prescott2008]. Their rarity and clustering are consistent with their occupying massive ($\sim$10$^{13}$[M$_{\odot}$]{}) dark matter halos at those epochs that evolve into rich groups or clusters of galaxies today [@Yang2009; @Yang2010]. The [Ly$\alpha$]{}blob gas thus may represent the proto-intracluster medium, and the embedded sources the progenitors of cluster galaxies [@Yang2010; @Prescott2012]. Identifying the mysterious source or sources of the extended [Ly$\alpha$]{}emission is therefore essential to understanding the evolution of large-scale structure and of the most massive galaxies. Observations and theory suggest a range of powering mechanisms, including gravitational cooling radiation [@Haiman2000; @Fardal2001; @Goerdt2010; @Faucher-Giguere2010; @Rosdahl-Blaizot2012], shock-heating from starburst-driven winds [@Taniguchi-Shioya2000; @Mori2004], the resonant scattering of [Ly$\alpha$]{}photons produced by star formation [@Steidel2011], and photo-ionizing radiation from active galactic nuclei (AGN) [@Haiman2000]. Even with careful constraints on the [Ly$\alpha$]{}line profile and distribution, discriminating among these models is difficult, in part due to the complex radiative transfer of the resonantly scattered [Ly$\alpha$]{}line and the uncertain internal geometry of each [Ly$\alpha$]{}blob [e.g., @Yang2011; @Yang2014a; @Yang2014b]. Measuring the polarization of the [Ly$\alpha$]{}line can shed new light on the problem.
Recent radiative transfer simulations predict the polarization of the [Ly$\alpha$]{}line in a number of different scenarios. For example, backscattered [Ly$\alpha$]{}flux from galaxies surrounded by a superwind-driven outflow is expected to produce a [Ly$\alpha$]{}polarization fraction that rises with radius to as much as $\sim$40% where the neutral hydrogen column density $N_{\rm HI}$ drops below $10^{19}$ cm$^{-2}$ [@Dijkstra-Loeb2008]. A similar polarization fraction, integrated over the line profile, may arise from cooling radiation from a collapsing proto-galaxy (@Dijkstra-Loeb2008; see also @Trebitsch2016), but with an inverted wavelength dependence when the line is spectrally resolved. Resonant scattering in the diffuse intergalactic medium typically results in a lower polarization fraction ($\sim$7%), which depends on the flux of the ionizing background. These models, which all currently assume spherical symmetry, continue to grow more sophisticated [e.g., @Trebitsch2016]. Their improving, detailed predictions, when combined with the new availability of polarimeters on the largest telescopes, provide a unique opportunity to isolate the mechanism that powers [Ly$\alpha$]{}blobs by mapping the polarization. Polarization work on [Ly$\alpha$]{}blobs has been limited. To date, only two [Ly$\alpha$]{}blobs have been observed with narrowband imaging polarimetry. One, SSA22–LAB1, shows concentric polarization rings, reaching $\sim$10% at $\sim$30kpc from the blob center and rising to $\sim$20% at $\sim$45kpc [@Hayes2011], suggesting a central powering source for this [Ly$\alpha$]{}blob [see also @Beck2016]. In the other blob, LABd05, @Prescott2011 do not detect polarization within a single, large (radius $\sim$ 33 kpc) aperture, obtaining an upper-limit of 2.6% $\pm$ 2.8%; deeper and spatially resolved observations are required to test this result (E. Kim et al., in preparation).
These past studies assume that the polarization arises solely from [Ly$\alpha$]{}, given that the [Ly$\alpha$]{}line dominates the continuum emission, at least at large radii. Both [Ly$\alpha$]{}nebulae are radio-quiet. Spectro-polarimetry of a radio-loud [Ly$\alpha$]{}nebula, TXS 0211–122 at $z=2.3$, reveals polarization of the [Ly$\alpha$]{}line: 16.4% $\pm$ 4.6% on one side of the nebula [@Humphrey2013]. In this case, the spatial information is limited, inhibiting the interpretation of the results. Looking to the literature on radio galaxies, which can be surrounded by line emission nebulae similar in [Ly$\alpha$]{}luminosity and spatial extent to blobs [see @McCarthy1993 and references therein], does not improve our understanding of how the [Ly$\alpha$]{}polarization is distributed on the sky. Existing polarization measurements of radio galaxies, seeking to explain the alignment effect—the strong correlation between their radio and optical continuum morphologies [@McCarthy1987]—tend to focus on the continuum [@Vernet2001]. Constraints on the [Ly$\alpha$]{}polarization are few, particularly over the tens of kpc scales typical of [Ly$\alpha$]{}blobs. Using spectro-polarimetry, @Cimatti1998 find that the [Ly$\alpha$]{}line is unpolarized in two radio galaxies. [Ly$\alpha$]{}around another radio galaxy, 4C 41.1, is polarized at a low level (1.12% $\pm$ 0.26%), while its continuum emission is unpolarized [@Dey1997]. The similarity in morphology and energy between extended [Ly$\alpha$]{}nebulae with radio-loud and radio-quiet AGN suggests an unexplored connection between their powering mechanisms [@Villar-Martin2003; @Dey1997]. Here we present the first [Ly$\alpha$]{}imaging polarimetry measurement for a blob with an embedded radio galaxy. We use the SPOL imaging spectro-polarimeter on the 6.5m MMT telescope to map B3 J2330+3927, a radio-loud [Ly$\alpha$]{}blob at $z=3.087$.
Its embedded radio galaxy is one of the 1103 radio sources from the Third Bologna Catalog [@Ficarra1985; @Vigotti1989]. The associated [Ly$\alpha$]{}nebula was discovered by @DeBreuck2003 through long-slit spectroscopy and observed in detail by [@Matsuda2009]. SPOL is a clean instrument, designed to reduce any instrument polarization by integrating over 16 different waveplate positions. At the redshift of our source, SPOL’s high stability and sensitivity on the MMT enable measurement of a few percent polarization on scales of $\sim$5kpc, even at the low surface brightnesses characteristic of [Ly$\alpha$]{}blobs. This paper is the first of several to map the polarization of giant [Ly$\alpha$]{}nebulae at high redshift. In this paper, we present the map of our first target and establish our methodologies. Subsequent papers will analyze the full blob sample and compare the results to physical models. This paper is organized as follows. In Section \[sec:obs\], we describe the details of our observations. In Section \[sec:data\], we discuss the data reduction for the polarization measurement and the calibration sources. In Section \[sec:results\], we present our polarization map and discuss the possible sources of error. In Section \[sec:conclusion\], we summarize our conclusions. The Observations {#sec:obs} ================ The Target ---------- B3 J2330+3927 is a high-redshift ($z=3.087$) radio galaxy at R.A.=[23$^{\rm h}$30$^{\rm m}$24.9$^{\rm s}$]{} and decl.=[+39$^{\circ}$27$^{\prime}$12$^{\prime\prime}$]{} that is embedded in a giant Ly$\alpha$ halo that extends over $\sim$130kpc. This nebula is one of the brightest known, with [$L_{\rm Ly\alpha}$]{}= 2.5$\times$ $10^{44}$ [erg s$^{-1}$]{}[@Matsuda2009]. The CO emission and absorption reveal a massive gas and dust reservoir associated with the radio galaxy [@DeBreuck2003; @Ivison2012]. VLBA and VLA data show a one-sided jet driven by a Type II AGN [@Perez-Torres2005].
The galaxy environment of this [Ly$\alpha$]{}blob is over-dense: a combination of broad and narrowband observations [@Matsuda2009] reveals 127 compact Ly$\alpha$ emitter (LAE) candidates and another giant ($\sim$100kpc), but radio-quiet, [Ly$\alpha$]{}blob within the $31^{\prime} \times 24^{\prime}$ (58 $\times$ 44 comoving Mpc$^2$) field. This wealth of ancillary data, the redshift, and a bright point source at R.A.=[23$^{\rm h}$30$^{\rm m}$25.10$^{\rm s}$]{}, decl.=[+39$^{\circ}$27$^{\prime}$05.4$^{\prime\prime}$]{} useful for image registration and alignment, make B3 J2330+3927 an attractive target. The Instrument {#sec:instrument} -------------- On UT September 18–20, 2012, we used the 6.5m MMT telescope on Mount Hopkins, Arizona, to observe B3 J2330+3927 with the SPOL CCD imaging/spectro-polarimeter in its imaging polarimetry mode [@Schmidt1992b]. We used a narrowband filter ([kp583]{}) on loan from Kitt Peak National Observatory that is centered at 4980Å and has a FWHM of 54Å. The detector is a thinned, anti-reflection-coated 1200$\times$800 STA CCD with a pixel scale of 0.19$^{\prime\prime}$ per pixel and a quantum efficiency of $\sim$0.85 in the filter bandpass. We obtained a total of 8.6 hours exposure time on B3 J2330+3927. For the calibration of the instrument, we observed unpolarized and polarized standard stars each night. We also observed CRL 2688 (the “Egg Nebula”) as an extended polarized source to investigate any unforeseen systematic effects across the 19$^{\prime\prime}\times$19$^{\prime\prime}$ field of view. In SPOL, the telescope is fed through a half-wave plate and then to a Wollaston prism. The Wollaston prism is located in the optical path between a transmissive collimator and a plane mirror that substitutes for a diffraction grating when imaging polarimetry is desired. The narrowband filter is placed in the collimated beam between the collimator and the Wollaston prism. The half-wave plate retards one orthogonal component of the light and thus changes the polarization angle of the incoming light.
The Wollaston prism splits the two orthogonal polarizations so the two polarizations are imaged separately, in our case in separate “panels” in one “image”. The difference between these two panels indicates the strength of the polarization. Linear polarization measurements with SPOL are accomplished by stepping a wheel holding a semi-achromatic half-wave plate through two sequences that are aimed to measure the Stokes parameters $Q$ and $U$, respectively. A $Q$-sequence yields two images ($Q^+$ and $Q^-$): the first ($Q^+$) consisting of two beams (panels) of four exposures at four position angles of the waveplate (0$^{\circ}$, 90$^{\circ}$, 180$^{\circ}$, 270$^{\circ}$). The second image is taken at angles offset by 45 degrees from the first (45$^{\circ}$, 135$^{\circ}$, 225$^{\circ}$, 315$^{\circ}$). The $U$-sequence follows the same progression ($U^+$ and $U^-$) as the $Q$-sequence, but the waveplate position angles are offset by 22.5 deg from those of the $Q$-sequence. The redundancy in the data-taking sequences ensures that effects caused by imperfections in the waveplate and the waveplate’s positioning in the optical path are minimized. As a result, the instrumental polarization of SPOL is consistently $<0.1\%$, as verified by our measurements of unpolarized standards during the nights that we observed B3 J2330+3927 (Section \[sec:standard\]). We do not include this negligible polarization in the subsequent analysis of the data. In addition, the dual-beam design of SPOL eliminates the possibility of measuring spurious polarization arising from variable seeing and sky transparency during observing sequences. For B3 J2330+3927, we took exposures of 300 sec per waveplate position angle, so we completed both $Q$ and $U$ sequences in 80 min. In total, we obtained six full polarization sequences. We optimized the MMT optics between measurements, except when the seeing remained ideal and the weather conditions did not change.
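The way a full $Q$-sequence isolates Stokes $Q$, and the way the dual-beam combination cancels common-mode variations, can be illustrated with a short toy simulation. This sketch is purely illustrative: the beam-ordering and sign conventions below are assumptions, not SPOL’s actual conventions.

```python
import numpy as np

def exposure(I, Q, U, psi_deg):
    """Up/down Wollaston beam intensities for one exposure with the
    half-wave plate at angle psi: the plate maps Q -> Q cos4psi + U sin4psi,
    and the prism splits the light into (I -/+ Q')/2.  Conventions here are
    illustrative only."""
    a = np.radians(4.0 * psi_deg)
    Qp = Q * np.cos(a) + U * np.sin(a)
    return (I - Qp) / 2.0, (I + Qp) / 2.0          # up beam, down beam

def sequence(I, Q, U, angles):
    """Co-add the up and down beams over a set of waveplate angles."""
    beams = [exposure(I, Q, U, a) for a in angles]
    return sum(b[0] for b in beams), sum(b[1] for b in beams)

# One Q-sequence for a source with q = 0.08, u = -0.03:
I0, Q0, U0 = 1.0, 0.08, -0.03
Qp_up, Qp_dn = sequence(I0, Q0, U0, [0, 90, 180, 270])     # Q+ image
Qm_up, Qm_dn = sequence(I0, Q0, U0, [45, 135, 225, 315])   # Q- image

# Combining the two beams as in the reduction recovers q = Q0/I0,
# independent of U0:
q = 0.5 * ((Qm_up - Qp_up) / (Qm_up + Qp_up)
           + (Qp_dn - Qm_dn) / (Qp_dn + Qm_dn))
```

The $U$-sequence is the same calculation with the waveplate angles offset by 22.5$^{\circ}$, which swaps the roles of $Q$ and $U$.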
The seeing was $\sim$1.0$^{\prime\prime}$ during most of the observations, rising to 1.5$^{\prime\prime}$ for the two sequences taken at the end of each of the two nights. We used the positions of the [Ly$\alpha$]{}blob center (R.A.=[23$^{\rm h}$30$^{\rm m}$24.9$^{\rm s}$]{}, decl.=[+39$^{\circ}$27$^{\prime}$12$^{\prime\prime}$]{}) and of a bright point source (R.A.=[23$^{\rm h}$30$^{\rm m}$25.10$^{\rm s}$]{}, decl.=[+39$^{\circ}$27$^{\prime}$05.4$^{\prime\prime}$]{}; $\sim$8$^{\prime\prime}$ to the southeast of the blob center) to register and align our images, as the field was dithered slightly between polarization sequences to minimize the effects of any poorly calibrated pixels. We measured the polarization efficiency of the system ($p_{\rm eff}$ $\approx$ 0.973) by inserting a Nicol prism before the aperture plate and waveplate in the light path within the instrument. This efficiency is consistent with other measurements acquired over more than two decades for SPOL at 4980Å when used as a spectro-polarimeter. The Data Reduction {#sec:data} ================== Pre-processing -------------- To prepare the images for polarization measurement, we perform overscan correction, bias subtraction, flat fielding, and cosmic-ray removal. For flat fielding, we obtained dome flats with all the polarization optics (Wollaston prism and half-wave plate) in place and constructed a partial skyflat by median-combining the science images with the central [Ly$\alpha$]{}blob blocked out. The dome flats show a significant gradient across the image that the science exposures and partial skyflat do not. To correct the dome flat, we fit the gradient with a 2-D first-order polynomial and divide it out. We apply the resulting “flattened" dome flat to the partial skyflat and the science images. There are no significant gradients in the resulting images. We use the [L.A.COSMIC]{} package [@vanDokkum2001] to remove cosmic rays from our images. We examine the cosmic ray masks by eye to confirm that real signal from the nebula remains.
Polarization Calculation ------------------------ As described in Section \[sec:instrument\], from a full $Q$–$U$ sequence, we obtain a total of four images ($Q^+$, $Q^-$, $U^+$, $U^-$), each with two panels (“up” and “down” beams). Here we explain the calculation of the polarization parameters from those images. With the notation $$q\equiv\frac{Q}{I} {\rm ~~and~~} u\equiv \frac{U}{I},$$ $q$ and $u$ can be determined using the following formulae: $$\label{eq:q1} q = \frac{Q}{I_Q} = \frac{1}{2}\left[\left(\frac{Q^{-} - Q^{+}}{Q^{-} + Q^{+}}\right)_{\rm up}\! + \left(\frac{Q^{+} - Q^{-}}{Q^{+} + Q^{-}}\right)_{\rm down}\right]$$ $$\label{eq:q2} u = \frac{U}{I_U} = \frac{1}{2}\left[\left(\frac{U^{-} - U^{+}}{U^{-} + U^{+}}\right)_{\rm up}\! + \left(\frac{U^{+} - U^{-}}{U^{+} + U^{-}}\right)_{\rm down}\right],$$ where ${I_Q}$ and ${I_U}$ are the total intensities measured from the $Q$ and $U$ sequences, respectively: $$\begin{aligned} I_Q &~=~& [ {(Q^{-} + Q^{+})}_{\rm up} + {(Q^{-} + Q^{+})}_{\rm down} ]/2 \\ I_U &~=~& [ {(U^{-} + U^{+})}_{\rm up} + {(U^{-} + U^{+})}_{\rm down} ]/2 \\ I &~=~& \frac{1}{2}(I_Q + I_U).\end{aligned}$$ Ideally, $Q^+_{\rm up}$ is the same as $Q^-_{\rm down}$ and $Q^+_{\rm down}$ is the same as $Q^-_{\rm up}$. The same applies for the $U$ images. For each $Q$–$U$ sequence, we create these $I_i$, $Q_i$ and $U_i$ images (or $q_i$ and $u_i$), and combine them to increase the signal-to-noise (S/N). When combining the sequences, we scale the images to compensate for the variations arising from airmass and weather. From these final Stokes images ($I$, $Q$, $U$), we calculate the polarization fraction ($P$) and angle ($\theta$) using the following formulae: $$\begin{aligned} P &=& \sqrt{q^2 + u^2} \\ \theta &=& \frac{1}{2} \arctan{\frac{U}{Q}}.\end{aligned}$$ Because the S/N of our target is low, we calculate $P$ and $\theta$ for large aperture sizes (1.2$^{\prime\prime}$–1.5$^{\prime\prime}$) over the map.
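In array form, Eqs. (\[eq:q1\])–(\[eq:q2\]) reduce to a few lines of arithmetic. The sketch below (Python/NumPy) is illustrative only, not the actual reduction pipeline, and the image names are assumptions:

```python
import numpy as np

def stokes_q_u(Qp, Qm, Up, Um):
    """Normalized Stokes parameters q, u from one Q-U sequence.
    Each argument is a dict holding the 'up' and 'down' panels
    (2-D arrays or scalars) of the Q+/Q-/U+/U- images, per Eqs. (1)-(2)."""
    q = 0.5 * ((Qm['up'] - Qp['up']) / (Qm['up'] + Qp['up'])
               + (Qp['down'] - Qm['down']) / (Qp['down'] + Qm['down']))
    u = 0.5 * ((Um['up'] - Up['up']) / (Um['up'] + Up['up'])
               + (Up['down'] - Um['down']) / (Up['down'] + Um['down']))
    return q, u

def pol_fraction_angle(q, u):
    """Polarization fraction P = sqrt(q^2 + u^2) and angle
    theta = 0.5 arctan(u/q), here returned in degrees."""
    return np.hypot(q, u), 0.5 * np.degrees(np.arctan2(u, q))
```

In practice the $q_i$, $u_i$ from each sequence would be scaled and combined before $P$ and $\theta$ are formed, as described above.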
The error associated with the polarization due to photon noise is derived from propagating errors through the above formulae. Calibrations ------------ ### Standard Stars {#sec:standard} To calibrate and verify the linear polarization measurements with SPOL, we observed both polarized and unpolarized standard stars [@Schmidt1992a] each night. These observations are summarized in Figure \[fig:polcal\]. For the unpolarized stars, G191-B2B and BD+28 4211, the instrumental polarization ($Q/I$ and $U/I$) at the MMT is indeed $<0.1\%$ (top left panel), as previously found for SPOL at other telescopes. We also use these spectro-photometric standard stars to flux-calibrate the narrowband images. We observed two interstellar polarized standards, BD+59 389 and Hiltner 960. Given that our narrowband filter is centered at a different wavelength (4980 Å) than previous measurements of these standards, we calculate the expected $P$ and $\theta$ within our bandpass by interpolating between the previous measurements with an interstellar polarization function [@Serkowski1973]. Our observations of BD+59 389 are consistent with historical measurements, i.e., our three measurements over two nights agree within $\pm$1.6$\sigma$ and $\pm$2.0$\sigma$ of the interpolated $P$ and $\theta$ from the literature, respectively. For Hiltner 960, the observed $\theta$’s are also within the $\pm$1.3$\sigma$ range, but the $P$’s are more discrepant ($\sim$3.1$\sigma$) from the value derived from the literature. Possible reasons for this discrepancy include Hiltner 960’s variability, leading to a poorly-fit interstellar polarization curve [@Schmidt1992a], and its close companion, which cannot be easily accounted for in the polarization measurement. Regardless of its source, this discrepancy ($\Delta P \lesssim$ 0.2%) is negligible compared with the errors in $P$ and $\theta$ arising from photon noise in the measurement of our science target, B3 J2330+3927.
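The interstellar polarization function of @Serkowski1973 has the empirical form $p(\lambda) = p_{\rm max}\exp[-K\ln^2(\lambda_{\rm max}/\lambda)]$, so the interpolation to our bandpass is a one-line evaluation. In the sketch below, the classic constant $K=1.15$ and the stellar fit parameters are assumptions for illustration, not the values actually adopted:

```python
import numpy as np

def serkowski_p(lam, p_max, lam_max, K=1.15):
    """Serkowski law: p(lam) = p_max * exp(-K * ln^2(lam_max / lam)),
    with lam and lam_max in the same units (here Angstroms).  The peak
    polarization p_max occurs at lam_max by construction."""
    return p_max * np.exp(-K * np.log(lam_max / lam) ** 2)

# Interpolate a standard star's polarization (in %) to the kp583 bandpass,
# using hypothetical fit parameters p_max and lam_max:
p_4980 = serkowski_p(4980.0, p_max=6.7, lam_max=5500.0)
```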
### Egg Nebula In addition to the polarization standard stars, we observe CRL 2688 as an extended polarization “standard” (Figure \[fig:eggmap\]). The short ($2\times960$ sec) observations of both the north and south lobes test the polarization characteristics of SPOL over the entire field of view. We use these high S/N data to examine the images for unexpected systematic effects that would be hidden in the case of a nebula as faint as B3 J2330+3927. Our optical polarization map is roughly consistent with the NICMOS 2[$\mu$m]{}polarization map from [@Sahai1998], e.g., the vectors along the axis connecting the two components are generally perpendicular to it (see their Fig. 5). Furthermore, our average $P$ for each lobe lies within $1\%$ of the value expected at 4980Å, as interpolated from the optical polarization measurements of @Shawl-Tarenghi1976. Results and Discussion {#sec:results} ====================== Total Polarization Map {#sec:pol_total} ---------------------- Figure \[fig:polmap\] shows our polarization map of B3 J2330+3927 for the light in the narrowband image, i.e., [Ly$\alpha$]{}plus continuum centered at 4980Å with a FWHM of 54Å. We measure the polarization on a grid of circular apertures with a minimum radius of $R = 3$ pixels (i.e., $0.6^{\prime\prime}$, 4.4kpc), comparable to the seeing. We enlarge three apertures far from the [Ly$\alpha$]{}peak from $R = 3$ pixels to 4 pixels, so that they reach a flux signal-to-noise ratio similar to that of the other apertures. We detect significant ($\geq$2$\sigma$) polarization in nine apertures and achieve strong upper-limits (i.e., as low as 2%) elsewhere, indicating varying polarization across the blob. There is little if any polarization at the blob center and to the southwest of the nebula. The significant detections are generally distributed along the blob’s major axis, which is also the radio lobe direction.
Along that axis, $P$ increases from $< 2\%$ at $\sim$5kpc from the blob center to roughly 17% at $\sim$15–25kpc. The polarization angles tend to be perpendicular to that axis. To test the significance of our polarization measurements, we show the smoothed-$\chi$ images for the $Q$ and $U$ fluxes in Figure \[fig:polmap\_chi\]. Here $\chi_{\rm smooth}$ of an image $I$ is defined by $$\chi_{\rm smooth} = \frac{I_{\rm smooth}}{\sigma_{\rm smooth}} = \frac{I_i \ast h(r)}{\sqrt{\sigma^2_{i} \ast h^2(r)}},$$ where $I_{\rm smooth}$ is the image convolved with a smoothing kernel $h(r)$ and $\sigma^2_{\rm smooth}$ is the variance of the smoothed image, propagated from the unsmoothed image. Given that $\chi_{\rm smooth}$ should follow a normal distribution $N(0,1)$ for random noise, $\chi_{\rm smooth}$ is useful for visualizing low-S/N features. Here, we adopt a tophat kernel with a radius of 3 pixels to match the size of the apertures used for the measurements of $P$ and $\theta$. The $Q$ $\chi_{\rm smooth}$ image shows that the region with $|\chi_{\rm smooth}| > 3$ (outlined with solid contours) is roughly aligned with the major axis, demonstrating the significance of our polarization detections. The errors shown in Fig. \[fig:polmap\] are calculated purely from photon noise statistics. One additional source of uncertainty is the extent to which errors in image alignment, i.e., from shifts and rotations, affect the polarization map when we combine images. Polarization is calculated by taking the difference between different exposures; when images are not aligned correctly, the polarization may be affected. Between and within sequences, the point source in the southeast shifts by $\sim$1 pixel and rotates relative to the blob center by only $\sim$0.5 degree. Thus the alignment uncertainties are dominated by translational errors.
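The smoothed-$\chi$ statistic defined above is simple to compute: for a 0/1 tophat kernel, $h^2 = h$, so a single convolution routine serves both the numerator and the denominator. A pure-NumPy sketch (illustrative, not the authors' code):

```python
import numpy as np

def chi_smooth(img, var, radius=3):
    """Smoothed-chi map: chi = (I * h) / sqrt(sigma^2 * h^2) for a tophat
    kernel h of the given pixel radius.  For random N(0, sigma) noise the
    result is distributed as N(0, 1)."""
    r = radius
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    kern = x**2 + y**2 <= r**2          # 0/1 tophat, so h**2 == h

    def conv(a):                        # zero-padded direct correlation
        pad = np.pad(a, r)
        out = np.zeros(a.shape)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if kern[dy + r, dx + r]:
                    out += pad[r + dy:r + dy + a.shape[0],
                               r + dx:r + dx + a.shape[1]]
        return out

    return conv(img) / np.sqrt(conv(var))
```

Applying this to the $Q$ and $U$ images with their variance maps and contouring at $|\chi_{\rm smooth}| = 3$ reproduces the significance test shown in Figure \[fig:polmap\_chi\].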
To estimate how much translational errors could affect our measurements, we introduce errors of this magnitude into our best-aligned images and repeat the entire reduction procedure. Figure \[fig:shifts\] shows four random realizations of the total polarization maps after introducing random alignment errors with $\pm$1 pixel shifts. Our results do not change significantly. [Ly$\alpha$]{}Line Polarization Map ----------------------------------- The UV continuum aligned with the radio lobes of radio galaxies is sometimes polarized, with the continuum polarization fraction typically $<10\%$, but sometimes as high as $\sim$20–30% [@Jannuzi-Elston1991; @Vernet2001; @Tadhunter2005]. As a result, the relative contributions of continuum and [Ly$\alpha$]{}polarization to our total polarization map for B3 J2330+3927 are not clear. Future spectro-polarimetry, which could isolate the line-only polarization signal [e.g., @Beck2016], is needed. For now, we make a conservative argument to place lower limits on the [Ly$\alpha$]{}contribution, asking whether at least some [Ly$\alpha$]{}polarization is required to explain the map in Fig. \[fig:polmap\]. To separate out the polarization contributed by the continuum and to place a lower limit on the line polarization, we use the following simple formalism, where $I_{Q, cont}$ and $I_{Q, line}$ refer to the total fluxes in the $Q$ images from the continuum and [Ly$\alpha$]{}, respectively. This light is polarized at levels $q_{cont}$ and $q_{line}$ for the continuum and [Ly$\alpha$]{}, respectively.
Because the narrowband filter captures both the continuum and [Ly$\alpha$]{}fluxes at the same time, in one $Q$ sequence, we measure the total $Q$ parameter: $$\left(\frac{Q}{I}\right)_{total} = \frac{ I_{Q, cont} \times q_{cont} + I_{Q, line} \times q_{line} } { I_{Q, cont} + I_{Q, line} }.$$ Likewise, in a $U$-sequence, we have $$\left(\frac{U}{I}\right)_{total} = \frac{ I_{U, cont} \times u_{cont} + I_{U, line} \times u_{line} } { I_{U, cont} + I_{U, line} }.$$ If we assume that the polarization angles of the continuum and [Ly$\alpha$]{}are the same, using the relation $$\frac{q_{cont}}{u_{cont}} = \frac{q_{line}}{u_{line}},$$ we can separate the total polarization into contributions from the continuum and [Ly$\alpha$]{}: $$\label{polcalclast} P = (1-f_{c})\,P_{line} + f_{c}\,P_{cont},$$ where $f_c$ is the fraction of the continuum relative to the total light captured by the narrowband filter: $$f_{c}=\frac{I_{cont}}{I_{cont}+I_{line}}.$$ To estimate the continuum light fraction $f_c$, we use a UV continuum image of B3 J2330+3927 constructed from broadband $B$ and $V$ images [@Matsuda2009], which covers a rest-frame wavelength range of 980–1450 Å. Figure \[fig:lyavsconti\] shows the SPOL (continuum + [Ly$\alpha$]{}) and the Subaru (continuum) images at the same stretch. The flux from the [Ly$\alpha$]{}line dominates that from the UV continuum in our narrowband filter. Using both the SPOL and Subaru images, we calculate $f_c$ for the same apertures where we measured the total polarization in Fig. \[fig:polmap\]. The continuum flux, which is somewhat extended along the radio lobe direction, is only $\sim$10% of the total flux at the blob’s center and drops off at larger radii. We then consider two cases to estimate the lower limit on $P_{line}$ within each aperture. First, we use Eq. (\[polcalclast\]) to determine $P_{line}$ under the highly conservative assumption that the UV continuum is 20% polarized and has a polarization direction aligned with the total polarization.
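Since Eq. (\[polcalclast\]) is linear, the lower limit on the line polarization follows by simple inversion, $P_{line} = (P - f_c P_{cont})/(1 - f_c)$. A minimal sketch (the aperture values below are hypothetical):

```python
def p_line_limit(P_total, f_c, P_cont):
    """Lower limit on the Lya polarization, inverting
    P_total = (1 - f_c) * P_line + f_c * P_cont.
    A negative result flags an unphysically high assumed P_cont."""
    return (P_total - f_c * P_cont) / (1.0 - f_c)

# Hypothetical aperture: 10% total polarization, continuum fraction
# f_c = 0.10, continuum conservatively assumed 20% polarized:
p_line = p_line_limit(0.10, 0.10, 0.20)
```

With $P_{cont} = 1$ (the extreme 100% case), the same inversion drives $P_{line}$ negative in many apertures, which is how the unphysical combinations are identified.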
A $P_{cont}$ of 20% is typical of the highest values measured in radio galaxy lobes [@Jannuzi-Elston1991; @Vernet2001; @Tadhunter2005]. Even in this case (Figure \[fig:lyapolmap\]$a$), $P_{line}$ contributes significantly to $P$ in all nine apertures where significant $P$ is detected. The $P_{line}$ values here range from 3 to 17% at $\sim$10–25kpc, with no significant [Ly$\alpha$]{}polarization detected near the blob center. Like the total polarization, the [Ly$\alpha$]{}polarization detections occur more often along the blob’s major axis. If we assume instead that $P_{cont}$ is 100% (panel $b$), an assumption so extreme that it requires negative (unphysical) $P_{line}$ values for many apertures given the measured $P$, there remain five apertures in the southeast where $P_{line}$ is still detected at $\geq 2\sigma$. Physical Interpretation ----------------------- From the first [Ly$\alpha$]{}imaging polarimetry of a radio-loud [Ly$\alpha$]{}nebula, B3 J2330+3927, we find that the total polarization fraction increases from $<$2% at the blob center to 17% at $\sim$15–25 kpc. Significant polarization is detected preferentially along the blob’s major axis at angles perpendicular to that axis. In this section, we briefly discuss the implications of our measurements. Future papers will focus on detailed comparisons with numerical models. Imaging polarimetry is a useful tool to differentiate between a central powering geometry and an extended power source. In the former case, [Ly$\alpha$]{}photons are produced by a central point-source or sources (i.e., embedded star-forming galaxies or AGN) and transported to large radii. When the central source illuminates the surrounding neutral gas, the [Ly$\alpha$]{}photons do not experience much resonant or core scattering and escape the system via Rayleigh or wing scattering. The resultant [Ly$\alpha$]{}radiation is highly polarized at large radii and the polarization angle is aligned tangentially to the overall geometry of the system.
In contrast, in the latter case of extended emissivity, [Ly$\alpha$]{}photons are produced [*in situ*]{} in the extended gas through hydrogen recombination following ionization by photo-ionizing sources (e.g., AGNs) or superwind-driven shock-heating. Because the [Ly$\alpha$]{}photons have no preferential orientation with respect to the neutral medium and the observers, little or no polarization is expected. In B3 J2330+3927, the observed high [Ly$\alpha$]{}polarization fraction ($\sim$20% at the largest radii) and extended continuum emission suggest that [Ly$\alpha$]{}photons are produced in the center, instead of arising throughout the nebula itself. Likewise, the observed increase in polarization with radius is consistent with theoretical predictions from @Dijkstra-Loeb2008 assuming a simple geometry and central source. In their expanding shell model, the [Ly$\alpha$]{}polarization gradient arises when photons at larger radii scatter by larger angles (closer to 90$^{\circ}$) toward the observer. In an alternative model of an optically thick, spherically symmetric, collapsing gas cloud, the [Ly$\alpha$]{}radiation field becomes more anisotropic at larger radii. In other words, photons tend to propagate more radially outward prior to their last scattering events, requiring a larger scattering angle to reach the observer, and thus are more polarized. While some of the polarization properties of B3 J2330+3927 (fractions, angles, and radial gradient) are qualitatively similar to those of SSA22-LAB1 [@Hayes2011], one difference is that the significant polarization favors the major axis (and radio-jet direction). We speculate that the lack of polarization detected along the minor axis could be due to strong obscuration from an AGN torus perpendicular to the radio-jet. Another possibility is that ionization states and optical depths vary from one axis to the other due to photo-ionization along the jet or its interaction with the IGM.
In this case, [Ly$\alpha$]{}photons can escape the system with fewer scatterings in the major axis direction. It is not known whether this polarization pattern is common for other giant [Ly$\alpha$]{}nebulae around high-$z$ radio galaxies. To investigate these issues further, we need deeper and higher spatial resolution observations of this system and a systematic survey of polarization for a larger sample. Conclusions {#sec:conclusion} =========== We present the first narrow-band, imaging polarimetry of a [Ly$\alpha$]{}nebula, or “blob," with an embedded, radio-loud AGN. The blob, B3 J2330+3927, lies at $z=3.09$, extends over $\sim$150kpc, and has a [Ly$\alpha$]{}luminosity of 2.5 $\times$ $10^{44}$ [erg s$^{-1}$]{} [@DeBreuck2003; @Matsuda2009]. The AGN lies near the [Ly$\alpha$]{}emission peak and its radio lobes align roughly with the major axis of the blob’s extended [Ly$\alpha$]{}emission. Our findings are: 1. We map the total ([Ly$\alpha$]{}plus continuum) polarization in a grid of circular apertures of radius $0.6^{\prime\prime}$ (4.4kpc), detecting significant ($\geq 2\sigma$) polarization in nine apertures and achieving strong upper-limits (as low as 2% in the total polarization fraction $P$) elsewhere. 2. The total polarization fraction increases from $<2$% at $\sim$5kpc from the blob center to 17% at $\sim$15–25 kpc. The detections lie mostly along the blob’s major axis and the polarization angles are generally perpendicular to it. 3. Comparing the total flux to that of the continuum, and assuming conservatively that the continuum is 20–100% polarized and aligned with the total polarization, we place lower limits on the [Ly$\alpha$]{}polarization fraction $P_{line}$. Under these assumptions, $P_{line}$ is 3–17% at $\sim$10–25kpc. No significant [Ly$\alpha$]{}polarization is detected within $\sim$5kpc of the blob center. Like the total polarization, the [Ly$\alpha$]{}polarization detections tend to lie along the blob’s major axis.
Our polarization measurements for B3 J2330+3927 complement past polarization work, which focused on radio-[*quiet*]{} blobs and on radio galaxies within [Ly$\alpha$]{}clouds. For example, the polarization of SSA22–LAB1 is not measurable at its center, but rises to $\sim$10% at $\sim$30kpc and to $\sim$20% at $\sim$45kpc, forming an almost complete polarized ring [@Hayes2011]. While the polarization that we detect in B3 J2330+3927 is also tangentially-oriented and outside the blob center (and AGN), it is generally significant only along the blob’s major axis (and radio lobe direction). Unlike previous studies, we have constrained and mapped the [Ly$\alpha$]{}contribution to the total polarization. The one spectro-polarimetric measurement isolating the [Ly$\alpha$]{}line in a radio-loud [Ly$\alpha$]{}blob also reveals its polarization fraction to be high (16%) and perpendicular to the radio lobe axis in a region 10–40kpc from the nucleus, at least on one side of the nebula [@Humphrey2013]. Such a high $P$ has not been observed in radio galaxies [e.g., @Dey1997; @Cimatti1998], which might imply a physical difference or might arise from $P$ being measured on smaller physical scales. Spatially-resolved measurements of $P$ for a larger sample of radio galaxies are required to discriminate between these scenarios. A direct comparison of our narrow-band, imaging polarimetry in B3 J2330+3927 with our on-going survey of [Ly$\alpha$]{}blobs without known AGN and with radio-quiet AGN will greatly improve our understanding of the mysterious source of their extended [Ly$\alpha$]{}emission. We thank the referee, Matthew Hayes, for his thorough reading of the manuscript and helpful comments. We thank the staff at the MMT Observatory for their efforts in support of this program. We thank Daryl Willmarth and the NOAO for making the narrowband filter available for our observations. C.Y. and A.I.Z.
acknowledge support from the NSF Astronomy and Astrophysics Research Program through grant AST-0908280 and from the NASA Astrophysics Data Analysis Program through grant NNX10AD47G. Y.Y. and E.K.’s research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2016R1C1B2007782). Y.Y. acknowledges support from the BMBF/DLR grant Nr. 50 OR 1306. M.K.M.P. was supported by a Dark Cosmology Centre Fellowship. The Dark Cosmology Centre was funded by The Danish National Research Foundation. M.G.L. and E.K. are supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIP) (No. 2012R1A4A1028713).

Facilities:

Beck, M., Scarlata, C., Hayes, M., Dijkstra, M., & Jones, T. J. 2016, , 818, 138
Cimatti, A., di Serego Alighieri, S., Vernet, J., Cohen, M., & Fosbury, R. A. E. 1998, , 499, L21
De Breuck, C., Neri, R., Morganti, R., et al. 2003, , 401, 911
Dey, A., van Breugel, W., Vacca, W. D., & Antonucci, R. 1997, , 490, 698
Dey, A., Bian, C., Soifer, B. T., et al. 2005, , 629, 654
Dijkstra, M., & Loeb, A. 2008, , 386, 492
Faucher-Gigu[è]{}re, C.-A., Kere[š]{}, D., Dijkstra, M., Hernquist, L., & Zaldarriaga, M. 2010, , 725, 633
Fardal, M. A., Katz, N., Gardner, J. P., et al. 2001, , 562, 605
Ficarra, A., Grueff, G., & Tomassetti, G. 1985, , 59, 255
Goerdt, T., Moore, B., Read, J. I., & Stadel, J. 2010, , 725, 1707
Haiman, Z., Spaans, M., & Quataert, E. 2000, , 537, L5
Hayes, M., Scarlata, C., & Siana, B. 2011, , 476, 304
Humphrey, A., Vernet, J., Villar-Mart[í]{}n, M., et al. 2013, , 768, L3
Ivison, R. J., Smail, I., Amblard, A., et al. 2012, , 425, 1320
Jannuzi, B. T., & Elston, R. 1991, , 23, 1334
Loeb, A., & Rybicki, G. B. 1999, , 524, 527
Matsuda, Y., Yamada, T., Hayashino, T., et al. 2004, , 128, 569
Matsuda, Y., et al. 2005, , 634, L125
Matsuda, Y., Nakamura, Y., Morimoto, N., et al. 2009, , 400, L66
McCarthy, P. J., van Breugel, W., Spinrad, H., & Djorgovski, S. 1987, , 321, L29
McCarthy, P. J. 1993, , 31, 639
Mori, M., Umemura, M., & Ferrara, A. 2004, , 613, L97
P[é]{}rez-Torres, M.-A., & De Breuck, C. 2005, , 363, L41
Prescott, M. K. M., Kashikawa, N., Dey, A., & Matsuda, Y. 2008, , 678, L77
Prescott, M. K. M., Smith, P. S., Schmidt, G. D., & Dey, A. 2011, , 730, L25
Prescott, M. K. M., Dey, A., Brodwin, M., et al. 2012, , 752, 86
Rosdahl, J., & Blaizot, J. 2012, , 423, 344
Rybicki, G. B., & Loeb, A. 1999, , 520, L79
Sahai, R., Hines, D. C., Kastner, J. H., et al. 1998, , 492, L163
Schmidt, G. D., Elston, R., & Lupie, O. L. 1992, , 104, 1563
Schmidt, G. D., Stockman, H. S., & Smith, P. S. 1992, , 398, L57
Serkowski, K. 1973, Interstellar Dust and Related Topics, 52, 145
Shawl, S. J., & Tarenghi, M. 1976, , 204, L25
Steidel, C. C., Adelberger, K. L., Shapley, A. E., et al. 2000, , 532, 170
Steidel, C. C., Bogosavljevi[ć]{}, M., Shapley, A. E., et al. 2011, , 736, 160
Tadhunter, C. 2005, Astronomical Polarimetry: Current Status and Future Directions, 343, 457
Taniguchi, Y., & Shioya, Y. 2000, , 532, L13
Trebitsch, M., Verhamme, A., Blaizot, J., & Rosdahl, J. 2016, arXiv:1604.02066
van Dokkum, P. G. 2001, , 113, 1420
Vernet, J., Fosbury, R. A. E., Villar-Mart[í]{}n, M., et al. 2001, , 366, 7
Vigotti, M., Grueff, G., Perley, R., Clark, B. G., & Bridle, A. H. 1989, , 98, 419
Villar-Mart[í]{}n, M., Vernet, J., di Serego Alighieri, S., et al. 2003, , 346, 273
Yang, Y., Zabludoff, A., Tremonti, C., Eisenstein, D., & Dav[é]{}, R. 2009, , 693, 1579
Yang, Y., Zabludoff, A., Eisenstein, D., & Dav[é]{}, R. 2010, , 719, 1654
Yang, Y., Zabludoff, A., Jahnke, K., et al. 2011, , 735, 87
Yang, Y., Walter, F., Decarli, R., et al. 2014, , 784, 171
Yang, Y., Zabludoff, A., Jahnke, K., & Dav[é]{}, R. 2014, , 793, 114
--- abstract: 'The decay dynamics of the classical electromagnetic field in a leaky optical resonator supporting a single mode coupled to a structured continuum of modes (reservoir) is theoretically investigated, and the issue of the threshold condition for lasing in the presence of an inverted medium is comprehensively addressed. Specific analytical results are given for a single-mode microcavity resonantly coupled to a coupled resonator optical waveguide (CROW), which supports a band of continuous modes acting as decay channels. For weak coupling, the usual exponential Weisskopf-Wigner (Markovian) decay of the field in the bare resonator is found, and the threshold for lasing increases linearly with the coupling strength. As the coupling between the microcavity and the structured reservoir increases, the field decay in the passive cavity shows non-exponential features, and correspondingly the threshold for lasing ceases to increase, reaching a maximum and then starting to decrease as the coupling strength is further increased. A singular behavior for the “laser phase transition”, which is a clear signature of strong non-Markovian dynamics, is found at critical values of the coupling between the microcavity and the reservoir.' address: 'Dipartimento di Fisica and Istituto di Fotonica e Nanotecnologie del CNR, Politecnico di Milano, Piazza L. da Vinci 32, I-20133 Milan, Italy' author: - Stefano Longhi title: 'Non-Markovian Decay and Lasing Condition in an Optical Microcavity Coupled to a Structured Reservoir' --- Introduction. ============= It is well known that the modes of an open optical cavity are always leaky due to energy escape to the outside. Mode leakage can be generally viewed as due to the coupling of the discrete cavity modes with a broad spectrum of modes of the “universe” that acts as a reservoir [@Lang73; @Ching87; @Ching98].
From this perspective the problem of escape of a classical electromagnetic field from an open resonator is analogous to the rather general problem of the decay of a discrete state coupled to a broad continuum, as originally studied by Fano [@Fano64] and encountered in different physical contexts (see, e.g., [@Tannoudji]). The simplest and most widely used way to account for mode coupling with the outside is to eliminate the reservoir degrees of freedom by the introduction of quasi normal modes with complex eigenfrequencies (see, e.g., [@Lang73; @Ching98]), in such a way that energy escape to the outside is simply accounted for by the cavity decay rate $\gamma$ (the imaginary part of the eigenvalue) or, equivalently, by the cavity quality factor $Q$. This irreversible exponential decay of the mode into the continuum corresponds to the well-known Weisskopf-Wigner decay and relies on the so-called Markovian approximation (see, e.g., [@Tannoudji]), which assumes an instantaneous reservoir response (i.e. no memory): coupling with the reservoir is treated as a Markovian process, and the evolution of the field in the cavity depends solely on the present state and not on any previous state of the reservoir. For the whole system (cavity plus outside), in the Markovian approximation the cavity quasi-mode with a complex frequency corresponds to a resonance state with a Lorentzian lineshape. If the field in the cavity now experiences gain due to coupling with an inverted atomic medium, the condition for lasing is simply obtained when the gain due to the lasing atoms cancels the cavity losses, i.e. for $g=\gamma$, where $g$ is the modal gain coefficient per unit time [@Lang73].
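This textbook picture can be checked with a toy numerical sketch. The single-mode amplitude equation $\dot A=(g-\gamma)A$ separates the decaying, stationary and growing regimes purely by the sign of $g-\gamma$; the parameter values below are arbitrary illustrations, not tied to any specific cavity.

```python
import numpy as np

# Toy illustration of the Markovian lasing condition: a single cavity mode
# with amplitude A obeys dA/dt = (g - gamma)A, so A(t) = exp[(g - gamma)t].
# The field decays, stays stationary, or grows with the sign of g - gamma.
gamma, t = 1.0, 2.0
amplitudes = {g: np.exp((g - gamma) * t) for g in (0.5, 1.0, 1.5)}
for g, A in amplitudes.items():
    print(f"g = {g}: A({t}) = {A:.3f}")
```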
More generally, treating the field classically and assuming that the cavity supports a single mode, an initial field amplitude in the cavity will exponentially decay, remain stationary (delta-function lineshape) or exponentially grow (in the early stage of lasing) depending on whether $g< \gamma$, $g=\gamma$ or $g> \gamma$, respectively. In addition, since the cavity decay rate $\gamma$ increases as the coupling of the cavity with the outside increases, the threshold for laser oscillation increases as the coupling strength of the resonator with the modes of the “universe” is increased. It is remarkable that this simple and widely acknowledged dynamical behavior of basic laser theory, found in any elementary laser textbook (see, e.g., [@Svelto]), relies on the Markovian assumption for the cold cavity decay dynamics [@note0]. However, it is known that in many problems dealing with the decay of a discrete state coupled to a “structured” reservoir, such as photoionization in the vicinity of an autoionizing resonance [@Piraux90], spontaneous emission and laser-driven atom dynamics in waveguides and photonic crystals [@Lai88; @Lewenstein88; @John90; @John94; @Kofman94; @Vats98; @Lambropoulos00; @Wang03; @Petrosky05], and electron transport in semiconductor superlattices [@Tanaka06], the Markovian approximation may become invalid, and the precise structure of the reservoir (continuum) should be properly taken into account. Non-Markovian effects may become of major relevance in the presence of thresholds [@Piraux90; @Gaveau95] or singularities [@Lewenstein88; @John94; @Kofman94; @Lambropoulos00; @Tanaka06] in the density of states or, more generally, when the coupling strength from the initial discrete state to the continuum becomes as large as the width of the density-of-states distribution of the continuum [@Tannoudji].
Typical features of non-Markovian dynamics found in the above-mentioned contexts are non-exponential decay, fractional decay and population trapping, atom-photon bound states, damped Rabi oscillations, etc. Though the role of structured reservoirs in basic quantum electrodynamics and quantum optics phenomena beyond the Markovian approximation has received great attention (see, e.g., Ref.[@Lambropoulos00] for a rather recent review), at the classical level [@Ching98] previous works have mainly considered the limit of Markovian dynamics [@Lang73], developing a formalism based on a quasi-normal mode analysis of the open system [@Ching98]. In fact, in a typical laser resonator made e.g. of two mirrors with one partially transmitting mirror coupled to the outside open space, the Weisskopf-Wigner decay law for the bare cavity field is an excellent approximation [@Lang73] and therefore non-Markovian effects are fully negligible. However, the advent of micro- and nano-photonic structures, notably photonic crystals (PCs), has enabled the design and realization of high-$Q$ passive microcavities [@Villeneuve96; @Vahala03; @Armani03; @Asano04; @Asano06] and lasers [@Vahala03; @Painter99; @Loncar02; @Park04; @Altug05] which can be suitably coupled to the outside by means of engineered waveguide structures [@Vahala03; @Fan98; @Xu00; @Asano03; @Waks05; @Chak06]. By modifying, e.g., some unit cells within a PC, one can create defects that support localized high-$Q$ modes or propagating waveguide modes. If localized defect modes are coupled with waveguides, many interesting photon transport effects may occur (see, e.g., [@Fan98; @Xu00; @Fan05]). Coupling between optical waveguides and high-$Q$ resonators in different geometries has been investigated in great detail using numerical methods, coupled-mode equations, and scattering matrix techniques in the framework of a rather general Fano-Anderson-like Hamiltonian [@Fan98; @Xu00; @Asano03; @Waks05; @LanLan05; @Chak06].
Another kind of light coupling and transport that has received increasing attention in recent years is based on coupled resonator optical waveguide (CROW) structures [@Stefanou98; @Yariv99; @Ozbay00; @Olivier01], in which photons hop from one evanescent defect mode of a cavity to the neighboring one due to the overlap between the tightly confined modes at each defect site. The possibility of artificially controlling the coupling of a microcavity with the “universe” may then invalidate the usual Markovian approximation for the (classical) electromagnetic field decay. In such a situation, for the passive cavity one should expect to observe non-Markovian features in the dynamics of the decaying field, such as non-exponential decay, damped Rabi oscillations, and quenched decay for strong couplings. More interestingly, for an active (i.e. with gain) microcavity the usual condition $g=\gamma$ of gain/loss balance for laser oscillation becomes meaningless owing to the impossibility of precisely defining a cavity decay rate $\gamma$. Therefore the determination of the lasing condition for a microcavity coupled to a structured reservoir requires a detailed account of the mode structure of the universe and may show unusual features.\ It is the aim of this work to provide some general insights into the classical-field decay dynamics and lasing condition of an optical microcavity coupled to a structured reservoir, in which the usual Markovian approximation for the cavity decay becomes inadequate. Some general results are provided for a generic Hamiltonian model describing the coupling of a single-mode microcavity with a continuous band of modes, and the effects of non-Markovian dynamics on the lasing condition are discussed. As an illustrative example, the case of a microcavity resonantly coupled to a CROW is considered, for which analytical results may be given in closed form.\ The paper is organized as follows.
In Sec.II a simple model describing the classical field dynamics in an active single-mode microcavity coupled to a band of continuous modes is presented, and the Markovian dynamics attained in the weak coupling regime is briefly reviewed. Section III deals with the exact dynamics, beyond the Markovian approximation, for both the passive (i.e. without gain) and active microcavity. In particular, the general relation expressing threshold for laser oscillation is derived, and its dependence on the coupling strength between the microcavity and the reservoir is discussed. The general results of Sec.III are specialized in Sec.IV for the case of a single-mode microcavity tunneling-coupled to a CROW, and some unusual dynamical effects (such as “uncertainty” of laser threshold, non-exponential onset of lasing instability and transient non-normal amplification) are shown to occur at certain critical couplings. Microcavity coupled to a structured reservoir: description of the model and Markovian dynamics ============================================================================================== The model --------- The starting point of our analysis is provided by a rather general Hamiltonian model [@Fan98; @Xu00] describing the interaction of a localized mode $|a\rangle$ of a resonator system (e.g. a microcavity in a PC) with a set of continuous modes $|\omega_{\mu}\rangle$ of neighboring waveguides with which the resonator is tunneling-coupled. We assume that the microcavity supports a single and high-$Q$ localized mode of frequency $\omega_a$, and indicate by $\gamma_i$ and $g$ the intrinsic losses and gain coefficients of the mode. The intrinsic losses $\gamma_i$ account for both internal (e.g. absorption) losses and damping of the cavity mode due to coupling with a “Markovian” reservoir (i.e. coupling with modes of the universe other than the neighboring waveguides). 
The modal gain parameter $g$ may be provided by an inverted atomic or semiconductor medium hosted in the microcavity. Since we will consider the microcavity operating below or at the onset of threshold for lasing, as in Refs.[@Xu00; @LanLan05] the modal gain parameter $g$ is assumed to be a constant and externally controllable parameter; above threshold an additional rate equation for $g$ would be obviously needed depending on the specific gain medium (see, for instance, [@Liu05]). Dissipation and gain of the microcavity mode are simply included in the model by adding a non-Hermitian term $H_{NH}$ to the Hermitian part of the Hamiltonian. The full Hamiltonian $H$ then reads $H=H_0+ H_{int}+H_{NH}$, where [@Fan98] $$\begin{aligned} H_0 & = & \omega_a |a \rangle \langle a|+\sum_{\mu} \int d \omega_{\mu} \omega_{\mu} | \omega_{\mu} \rangle \langle \omega_{\mu}|, \\ H_{int} & = & \lambda \sum_{\mu} \int d \omega_{\mu} \left[ \kappa_{\mu}(\omega_{\mu}) |\omega_{\mu} \rangle \langle a | + h.c. \right], \\ H_{NH}& = & i(g-\gamma_i)|a \rangle \langle a|,\end{aligned}$$ with $\langle a| a \rangle=1$, $\langle \omega_{\mu}| \omega^{'}_{\mu^{'}} \rangle=\delta_{\mu, \mu^{'}} \delta(\omega_{\mu}-\omega^{'}_{\mu})$, $\langle a | \omega_{\mu} \rangle=0$, and $\hbar=1$. The coefficients $\kappa_{\mu}(\omega_{\mu})$ describe the direct coupling between the localized mode $|a\rangle$ of the microcavity and the propagating modes $|\omega_{\mu}\rangle$ in the continuum, whereas $\lambda$ is a dimensionless parameter that measures the strength of interaction ($\lambda \rightarrow 0$ for a vanishing interaction). 
If we write the state $|\psi\rangle$ as $$|\psi\rangle=c_a(t)|a \rangle+ \sum_{\mu} \int d \omega_{\mu} c_{\mu}(\omega_{\mu},t) | \omega_{\mu}\rangle$$ the following coupled-mode equations for the coefficients $c_a(t)$ and $c_{\mu}(\omega_{\mu},t)$ are readily obtained from the equation $ i \partial |\psi\rangle / \partial t=H |\psi\rangle$: $$\begin{aligned} i \dot c_a(t) & = & (\omega_a+ig-i\gamma_i)c_a(t)+ \lambda \sum_{\mu} \int d \omega_{\mu} \kappa_{\mu}^{*}(\omega_{\mu})c_{\mu}(\omega_{\mu},t) , \label{cme1}\\ i \dot c_{\mu}(\omega_{\mu},t) & = & \omega_{\mu} c_{\mu}(\omega_{\mu},t)+\lambda \kappa_{\mu}(\omega_{\mu})c_a(t), \label{cme2}\end{aligned}$$ where the dot stands for the derivative with respect to time $t$. Note that the power of the microcavity mode is given by $|c_a(t)|^2$, whereas the total power of the field (cavity plus structured reservoir) is given by $P(t)=|c_a(t)|^2+\sum_{\mu}\int d \omega_{\mu} |c_{\mu}(\omega_{\mu},t)|^2$. The threshold condition for lasing is obtained when an initial perturbation in the system does not decay with time. From Eqs.(\[cme1\]) and (\[cme2\]) the following power-balance equation can be derived $$\frac{dP}{dt}=2(g-\gamma_i)|c_a|^2, \label{power}$$ from which we see that $|c_a|^2 \rightarrow 0$ for any $g<\gamma_i$, so that the threshold $g=g_{th}$ for laser oscillation satisfies the condition $ g_{th} \geq \gamma_i$, as expected. Weak coupling limit: Markovian dynamics --------------------------------------- The temporal evolution of the microcavity-mode amplitude $c_a(t)$ and the condition for laser oscillation can be rigorously obtained by solving the coupled-mode equations (\[cme1\]) and (\[cme2\]) by means of a Laplace transform analysis, which will be done in the next section. Here we show that, in the weak coupling regime ($\lambda \rightarrow 0$) and for a broad band of continuous modes, coupling of the cavity mode with the neighboring waveguides leads to the usual Weisskopf-Wigner (exponential) decay.
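The coupled-mode equations (\[cme1\])–(\[cme2\]) are straightforward to check numerically by discretizing the continuum into $N$ closely spaced modes. The sketch below uses illustrative, assumed parameters (a flat coupling $\kappa_{\mu}(\omega)\equiv 1$ over a band of width 4 centred on $\omega_a$) and a Crank–Nicolson integrator; it verifies the power balance $dP/dt=2(g-\gamma_i)|c_a|^2$ implied by the amplitude equations, and the near-exponential Weisskopf-Wigner decay of the weakly coupled cavity mode.

```python
import numpy as np

# Discretized version of Eqs. (cme1)-(cme2): N reservoir modes on a band of
# width 4 centred on the cavity resonance, with flat coupling kappa = 1.
# All parameter values are illustrative assumptions, not taken from the paper.
N = 400
omega = np.linspace(-2.0, 2.0, N)
d_omega = omega[1] - omega[0]
omega_a, lam = 0.0, 0.3
g, gamma_i = 0.0, 0.05

# Generator of i dc/dt = M c; the sqrt(d_omega) factor comes from the
# delta-normalisation of the continuum modes.
M = np.zeros((N + 1, N + 1), dtype=complex)
M[0, 0] = omega_a + 1j * (g - gamma_i)
M[0, 1:] = M[1:, 0] = lam * np.sqrt(d_omega)
M[1:, 1:] = np.diag(omega)

dt, steps = 0.01, 500
eye = np.eye(N + 1)
K = np.linalg.solve(eye + 0.5j * dt * M, eye - 0.5j * dt * M)  # Crank-Nicolson

c = np.zeros(N + 1, dtype=complex)
c[0] = 1.0                           # field initially in the cavity mode
P0, balance = 1.0, 0.0
for _ in range(steps):
    ca2_old = abs(c[0]) ** 2
    c = K @ c
    # trapezoidal accumulation of dP/dt = 2(g - gamma_i)|c_a|^2
    balance += dt * (g - gamma_i) * (ca2_old + abs(c[0]) ** 2)

P = np.sum(np.abs(c) ** 2)
gamma_R = np.pi * lam**2             # Weisskopf-Wigner rate for kappa = 1
print(P - P0, balance)               # the two should agree
print(abs(c[0]) ** 2, np.exp(-2 * (gamma_i + gamma_R) * dt * steps))
```

The last line compares the simulated cavity power with the Markovian prediction $e^{-2(\gamma_i+\gamma_R)t}$, which is accurate here because the assumed coupling is weak compared with the reservoir bandwidth.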
Though this is a rather standard result (see, e.g. [@Tannoudji]) and earlier derived for a standard Fabry-Perot laser resonator in Ref.[@Lang73] using a Fano diagonalization technique, for the sake of completeness it is briefly reviewed here within the model described in Sec.II.A. If the system is initially prepared in state $|a\rangle$, i.e. if at initial time $t=0$ there is no field in the neighboring waveguides and $c_a(0) \neq 0$, an integro-differential equation describing the temporal evolution of cavity mode amplitude $c_a(t)$ at successive times can be derived after elimination of the reservoir degrees of freedom. A formal integration of Eqs.(\[cme2\]) with initial condition $c_{\mu}(\omega_{\mu},0)=0$ yields $$c_{\mu}(\omega_{\mu},t)=-i \lambda \kappa_{\mu}(\omega_{\mu}) \int_{0}^{t} dt' c_a(t') \exp[-i \omega_{\mu}(t-t')]. \label{eliminac}$$ After setting $c_a(t)=A(t) \exp(-i \omega_a t)$, substitution of Eq.(\[eliminac\]) into Eq.(\[cme1\]) yields the following [*exact*]{} integro-differential equation for the mode amplitude $A(t)$ $$\dot A=(g-\gamma_i)A-\int_{0}^t d \tau G(\tau) A(t-\tau), \label{integrodiff}$$ where $G(\tau)$ is the reservoir response (memory) function, given by $$G(\tau)=\lambda^2 \sum_{\mu} \int d \omega_{\mu} |\kappa_{\mu}(\omega_{\mu})|^2 \exp[-i(\omega_{\mu}-\omega_a) \tau]. \label{memory}$$ Equation (\[integrodiff\]) clearly shows that the dynamics is not a Markovian process since the evolution of the mode amplitude at time $t$ depends on previous states of the reservoir. 
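To get a concrete feel for the memory function, Eq. (\[memory\]) can be evaluated for an assumed semicircular spectral density $\sum_{\mu}|\kappa_{\mu}(\omega)|^2=(2/\pi)\sqrt{1-(\omega-\omega_a)^2}$ of unit half-bandwidth centred on the cavity resonance (the shape relevant for the CROW reservoir of Sec. IV; this choice and all numbers below are illustrative). For this density the integral has the closed form $G(\tau)=2\lambda^2 J_1(\tau)/\tau$, so the memory time $\tau_m$ is of order the inverse reservoir bandwidth rather than zero.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

# Memory function G(tau), Eq. (memory), for an assumed semicircular spectral
# density sum_mu |kappa_mu(omega)|^2 = (2/pi) sqrt(1 - (omega-omega_a)^2)
# of unit half-bandwidth centred on the cavity resonance (illustrative).
lam = 0.3

def G(tau):
    # x = omega - omega_a; real and imaginary parts of the Fourier integral
    re, _ = quad(lambda x: (2/np.pi) * np.sqrt(1 - x*x) * np.cos(x*tau), -1, 1)
    im, _ = quad(lambda x: (2/np.pi) * np.sqrt(1 - x*x) * np.sin(x*tau), -1, 1)
    return lam**2 * (re - 1j*im)

for tau in (0.5, 2.0, 8.0):
    # closed form for this density: G(tau) = 2 lam^2 J1(tau)/tau
    print(tau, G(tau).real, 2 * lam**2 * j1(tau) / tau)
```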
Nevertheless, if the characteristic memory time $\tau_m$ is short enough (i.e., the spectral coupling coefficients $\kappa_{\mu}$ broad enough) and the coupling weak enough such that $|\dot A / A| \tau_m \ll 1$, we may replace Eq.(\[integrodiff\]) with the following approximate equation $$\dot A \simeq (g-\gamma_i)A-A(t) \int_{0}^t d \tau G(\tau) \simeq (g-\gamma_i)A-(\gamma_R+i\Delta_R)A, \label{Markovian}$$ where $$(\gamma_R+i\Delta_R) = \int_{0}^{t}d \tau G(\tau)$$ for $t \gg \tau_m$. In this limit, the dynamics is therefore Markovian and the reservoir is simply accounted for by a decay rate $\gamma_R$ and a frequency shift $\Delta_R$. Using the relation $$\lim_{t \rightarrow \infty} \int_{0}^t d \tau \exp(-i \omega \tau)= \pi \delta(\omega)-i \mathcal{P} \left( \frac{1}{\omega} \right),$$ from Eq.(\[memory\]) the following expressions for the decay rate $\gamma_R$ and the frequency shift $\Delta_R$ can be derived $$\begin{aligned} \gamma_R & = & \pi \lambda^2 \sum_{\mu} |\kappa_{\mu}(\omega_a)|^2 , \label{decayrate} \\ \Delta_R & = & \lambda^2 \sum_{\mu} \mathcal{P} \int d \omega_{\mu} \frac{|\kappa_{\mu}(\omega_{\mu})|^2 }{\omega_a-\omega_{\mu}}. \label{frequencyshift}\end{aligned}$$ The dynamics of the cavity mode field in the Markovian approximation is therefore standard: an initial field amplitude in the cavity will exponentially decay, remain stationary (delta-function lineshape) or exponentially grow (in the early stage of lasing) depending on whether $g< \gamma$, $g=\gamma$ or $g> \gamma$, respectively, where $\gamma=\gamma_i+\gamma_R$ is the total cavity decay rate. The threshold for laser oscillation is therefore simply given by $g_{th}=\gamma_i+\gamma_R$, i.e. $$g_{th}=\gamma_i+\pi \lambda^2 \sum_{\mu} |\kappa_{\mu}(\omega_a)|^2. \label{thmarkovian}$$ Field Dynamics beyond the Markovian Limit: general aspects ========================================================== Let us assume that the system is initially prepared in state $|a\rangle$, i.e. 
that at initial time $t=0$ there is no field in the neighboring waveguides \[$c_{\mu}(\omega_{\mu},0)=0$\] whereas $c_a(0)=1$. The exact solution for the field amplitude $c_a(t)$ of the microcavity mode at successive times can be obtained by a Laplace-Fourier transform of Eqs.(\[cme1\]) and (\[cme2\]). Let us indicate by $\hat{c_a}(s)$ and $\hat{c_{\mu}}(\omega_{\mu},s)$ the Laplace transforms of $c_a(t)$ and ${c_{\mu}}(\omega_{\mu},t)$, respectively, i.e. $$\hat{c_a}(s)=\int_{0}^{\infty}dt \; c_a(t) \exp(-st) \label{Laplace}$$ and a similar expression for $\hat{c_{\mu}}(\omega_{\mu},s)$. From the power balance equation (\[power\]), one can easily show that the integral on the right hand side in Eq.(\[Laplace\]) converges for ${\rm Re}(s)> \eta$, where $\eta=0$ for $g-\gamma_i \leq 0$ or $\eta=g-\gamma_i$ for $g-\gamma_i>0$. The field amplitude $c_a(t)$ is then written as the inverse Laplace transform $$c_a(t)= \frac{1}{2 \pi i } \int_{{\rm B}} ds \; \hat{c}_a(s) \exp(st) \label{invLaplace}$$ where the Bromwich path ${\rm B}$ is a vertical line ${\rm Re}(s)={\rm const}> \eta$ in the half-plane of analyticity of the transform, and $\hat{c}_a(s)$ is readily derived after Laplace transform of Eqs.(\[cme1\]) and (\[cme2\]) and reads $$\hat{c}_a(s)= \frac{i}{is-\omega_a-ig'-\Sigma(s)} \label{Laplaceca}$$ In Eq.(\[Laplaceca\]), $g'=g-\gamma_i$ is the effective gain parameter and $\Sigma(s)$ is the self-energy function, which is expressed in terms of the form factor $$\Sigma(s)=\int_{\omega_1}^{\omega_2} d \omega \frac{\mathcal{D}(\omega)}{is-\omega} \label{selfenergy}$$ where $\mathcal{D}(\omega)$ is the reservoir structure function, defined by $$\mathcal{D}(\omega)=\lambda^2 \sum_{\mu} |\kappa_{\mu}(\omega)|^2.$$ In writing Eq.(\[selfenergy\]), we assumed that the spectrum of modes of the waveguides (to which the microcavity is coupled) has lower and upper frequency limits $\omega_1$ and $\omega_2$.
We will also assume that $\mathcal{D}(\omega)$ does not show gaps, i.e. intervals with $\mathcal{D}=0$, inside the range $(\omega_1,\omega_2)$. The assumption of a finite spectral extension for the continuous modes is physically reasonable and is valid for e.g. PC waveguides or CROW. In addition, in order to avoid the existence of bound states (or polariton modes) for the passive microcavity coupled to the structured reservoir, we assume that $\mathcal{D}(\omega)$ vanishes at the boundary of the band, precisely we require that $\mathcal{D}(\omega) \sim (\omega-\omega_{1,2})^{\delta_{1,2}}$ as $\omega \rightarrow \omega_{1,2}$, with $\delta_{1,2}>0$. This condition, which will be clarified in Sec.III.A, is a necessary requirement to ensure that the field amplitude $c_a(t)$ fully decays toward zero for $g'=0$.\ The temporal evolution of $c_a(t)$ is largely influenced by the analytic properties of $\hat{c}_a(s)$; in particular the occurrence of a singularity (pole) at $s=s_{pole}$ with ${\rm Re}(s_{pole}) \geq 0$ may indicate the onset of an instability, i.e. a lasing regime. The self-energy function $\Sigma(s)$ \[Eq.(\[selfenergy\])\], and hence $\hat{c}_a(s)$, are not defined on the segment of the imaginary axis $s=-i \omega$ with $\omega_1< \omega < \omega_2$, $s_{1,2}=-i \omega_{1,2}$ being two branch points. In fact, using the relation $$\lim_{\rho \rightarrow 0^+} \frac{1}{\omega \pm i \rho}=\mathcal{P}\left( \frac{1}{\omega} \right) \mp i \pi \delta(\omega), \label{deltaR}$$ from Eq.(\[selfenergy\]) one has $$\Sigma(s=-i \omega \pm 0^+)= \Delta(\omega) \mp i \pi \mathcal{D}(\omega), \label{disco}$$ ($\omega_1<\omega<\omega_2$), where we have set $$\Delta(\omega)=\mathcal{P} \int_{\omega_1}^{\omega_2} d \omega' \frac{\mathcal{D}(\omega')}{\omega-\omega'}. 
\label{Omshift}$$ To further discuss the analytic properties of $\hat{c}_a(s)$ and hence the temporal dynamics of $c_a(t)$, one should distinguish the cases of passive ($g'=0$) and active ($g'>0$) microcavities. The passive microcavity ----------------------- Let us first consider the case of $g'=0$, i.e. of a passive microcavity with negligible internal losses. In this case the full Hamiltonian is Hermitian ($H_{NH}=0$), and therefore the analytic properties of $\hat{c}_a(s)$ and the spectrum of $H=H_0+H_{int}$ are as follows (see, for instance, [@Tannoudji; @Gaveau95; @Nakazato96; @Regola]): (i) The eigenvalues $\omega$ of $H$ are real-valued and comprise the continuous spectrum $\omega_1< \omega < \omega_2$ of unbounded modes and up to two isolated real-valued eigenvalues, lying outside the continuous spectrum on either side, which correspond to possible bound (or polariton) modes [@Gaveau95]; (ii) The isolated eigenvalues are the poles of $\hat{c}_a(s)$ on the imaginary axis outside the branch cut $- \omega_2<{\rm Im}(s)<- \omega_1$; (iii) $\hat{c}_a(s)$ is analytic in the full complex plane, apart from the branch cut and the two possible poles on the imaginary axis corresponding to bound modes; (iv) In the absence of bound modes $c_a(t)$ fully decays toward zero, whereas a limited (or fractional) decay occurs in the opposite case.\ From Eq.(\[Laplaceca\]), the poles $s=-i \Omega$ of $\hat{c}_a(s)$ outside the branch cut are found as solutions of the equation: $$\Omega-\omega_a=\int_{\omega_1}^{\omega_2} d \omega \frac{\mathcal{D}(\omega)}{\Omega-\omega},$$ i.e. \[see Eq.(\[Omshift\])\]: $$\Omega-\omega_a=\Delta(\Omega) \label{bound}$$ ![ Graphical determination of the roots of Eq.(\[bound\]) below (a), and above (b) the critical coupling. In (b) the full Hamiltonian $H=H_{0}+H_{int}$ has discrete eigenvalues corresponding to bound modes.](Fig1.eps) with the constraint $\Omega>\omega_2$ or $\Omega<\omega_1$ [@note1].
A graphical solution of Eq.(\[bound\]) as intersection of the curves $\Omega-\omega_a$ and $\Delta(\Omega)$ is helpful to decide whether there exist poles of $\hat{c}_a(s)$, i.e. bound modes (see Fig.1). To this aim, note that $\Delta(\Omega)>0$ and $d \Delta / d \Omega<0$ for $\Omega>\omega_2$, $\Delta(\Omega)<0$ and $d \Delta / d \Omega<0$ for $\Omega<\omega_1$, and $\lim_{\Omega \rightarrow \pm \infty} \Delta(\Omega)=0^{\pm}$. Therefore, Eq.(\[bound\]) does not have solutions outside the interval $(\omega_1,\omega_2)$ provided that $\Delta(\omega_2)<\omega_2-\omega_a$ and $\Delta(\omega_1)>\omega_1-\omega_a$ \[Fig.1(a)\]. Such conditions require at least that $\omega_a$ be internal to the band $(\omega_1,\omega_2)$, i.e. that the resonance frequency $\omega_a$ of the microcavity be embedded in the continuum of decay channels, and that $\mathcal{D}(\omega)$ vanishes as a power law at the boundary $\omega=\omega_1$ and $\omega=\omega_2$, i.e. that $\mathcal{D}(\omega) \sim (\omega-\omega_{1,2})^{\delta_{1,2}}$ as $\omega \rightarrow \omega_{1,2}$ for some positive integers $\delta_1$ and $\delta_2$. In fact, if $\mathcal{D}(\omega)$ does not vanish as a power law at these boundaries, one would have $\Delta(\Omega) \rightarrow \pm \infty$ as $\Omega \rightarrow \omega_{2}, \omega_1$. Even though $\mathcal{D}(\omega)$ vanishes at the boundaries, as the coupling strength $\lambda$ is increased either one or both of the conditions $\Delta(\omega_2)>\omega_2-\omega_a$ and $\Delta(\omega_1)<\omega_1-\omega_a$ can be satisfied \[Fig.1(b)\], leading to the appearance of either one or two bound states. The coupling strength at which a bound state starts to appear is referred to as [*critical coupling*]{}. Below the critical coupling \[Fig.1(a)\], for the passive microcavity $\hat{c}_a(s)$ does not have poles and a complete decay of $c_{a}(t)$ is attained. 
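The graphical construction of Fig. 1 is easy to reproduce numerically. The sketch below assumes, for illustration, a semicircular $\mathcal{D}(\omega)=\lambda^2(2/\pi)\sqrt{1-\omega^2}$ on $(\omega_1,\omega_2)=(-1,1)$ with $\omega_a=0$ at the band centre (the CROW-like case of Sec. IV). Outside the band the integrand of $\Delta(\Omega)$ is regular, so no principal value is needed; for this $\mathcal{D}(\omega)$ one can check that the critical coupling is $\lambda_c^2=1/2$ and that above it the bound-mode root of Eq. (\[bound\]) obeys $\Omega^2=4\lambda^4/(4\lambda^2-1)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Bound-mode condition, Eq. (bound): Omega - omega_a = Delta(Omega), for an
# assumed semicircular reservoir D(omega) = lam2*(2/pi)*sqrt(1-omega^2) on
# (omega_1, omega_2) = (-1, 1) with omega_a = 0 (illustrative, CROW-like).
omega_a = 0.0

def Delta(Omega, lam2):
    # Outside the band the integrand is regular: no principal value needed.
    val, _ = quad(lambda w: lam2 * (2/np.pi) * np.sqrt(1 - w*w) / (Omega - w),
                  -1.0, 1.0, limit=200)
    return val

def bound_eq(Omega, lam2):
    return Omega - omega_a - Delta(Omega, lam2)

# Above the critical coupling (lam2 > 1/2 for this reservoir) a root with
# Omega > omega_2 = 1 appears; analytically Omega^2 = 4*lam2^2/(4*lam2 - 1).
lam2 = 1.0
Omega_bound = brentq(lambda O: bound_eq(O, lam2), 1.0001, 10.0)
print(Omega_bound, 2 / np.sqrt(3))   # numerical vs analytic bound-mode frequency

# Below the critical coupling Omega - Delta(Omega) stays positive: no bound mode.
print([bound_eq(O, 0.3) > 0 for O in (1.001, 1.5, 3.0)])
```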
However, owing to non-Markovian effects the decay dynamics may greatly deviate from the usual Weisskopf-Wigner exponential decay. The exact decay law for $c_a(t)$ is obtained by the inverse Laplace transform Eq.(\[invLaplace\]), which can be evaluated by the residue method after suitably closing the Bromwich path ${\rm B}$ with a contour in the ${\rm Re}(s)<0$ half-plane (see, e.g. [@Tannoudji] pp.220-221, and [@Nakazato96; @Regola]). Since the closure crosses the branch cut $-\omega_2<{\rm Im}(s)< - \omega_1$ on the imaginary axis, the contour must necessarily pass into the second Riemannian sheet in the section of the half-plane with $-\omega_2<{\rm Im}(s)< - \omega_1$, whereas it remains in the first Riemannian sheet in the other two sections ${\rm Im}(s)>-\omega_1$ and ${\rm Im}(s)<-\omega_2$ of the ${\rm Re}(s)<0$ half-plane. To properly close the contour, it is thus necessary to go back and turn around the two branch points of the cut at $s=-i \omega_1$ and $s=-i \omega_2$, following the Hankel paths $h_1$ and $h_2$ as shown in Fig.2. Note that, while $\hat{c}_a(s)$ is analytic in the first Riemannian sheet for ${\rm Re}(s)<0$, the analytic continuation $\hat{c}_{a}^{II}(s)$ of $\hat{c}_a(s)$ from the right \[${\rm Re}(s)>0$\] to the left \[${\rm Re}(s)<0$\] half-plane across the cut usually has a simple pole at $s=s_{p}$ with ${\rm Re}(s_{p})<0$ and $-\omega_2<{\rm Im}(s_{p})<-\omega_1$ (see Fig.2). Since $\hat{c}_{a}^{II}(s)=i/[is-\omega_a-\Sigma^{II}(s)]$ with $\Sigma^{II}(s)=\Sigma(s)- 2 \pi i \mathcal{D}(is)$ \[see Eq.(\[disco\])\], the pole $s_{p}$ is found as a solution of the equation $$i s_{p}-\omega_a-\Sigma(s_{p})+2 \pi i \mathcal{D}(i s_{p})=0,$$ ![ Integration contour used to calculate the inverse Laplace transform of $\hat{c}_a(s)$. The bold solid line on the imaginary axis is the branch cut. The integration along the solid (dashed) curves is made on the first (second) Riemannian sheet of $\hat{c}_a(s)$.
$s_p$ is the pole of $\hat{c}_a(s)$ on the second Riemannian sheet in the ${\rm Re}(s)<0$ half-plane.](Fig2.eps) i.e. $$\begin{aligned} -i \gamma_p+\Delta_p-\int_{\omega_1}^{\omega_2} d \omega \frac{\mathcal{D}(\omega)}{\omega_a+\Delta_p-i\gamma_p-\omega}+ \nonumber \\ +2 \pi i \mathcal{D}(\omega_a+\Delta_p-i\gamma_p)=0 \label{poloP}\end{aligned}$$ where we have set $$s_{p} \equiv - \gamma_{p}-i \omega_a-i\Delta_p.$$ After inversion, we then find for $c_a(t)$ the following decay law $$c_a(t)=\mathcal{Z} \exp[-\gamma_p t -i(\omega_a+\Delta_p)t]+\mathcal{C}(t), \label{decaylaw}$$ where $\mathcal{Z}$ is the residue of $\hat{c}_{a}^{II}(s)$ at the pole $s_p$, and $\mathcal{C}(t)$ is the contribution from the contour integration along the Hankel paths $h_1$ and $h_2$ (see Fig.2): $$\begin{aligned} \mathcal{C}(t) & = & \frac{1}{2 \pi i} \int_{s=-\infty-i \omega_1}^{s=0-i \omega_1} ds \left[ \hat{c}_{a}^{II}(s)-\hat{c}_a(s) \right] \exp(st) + \nonumber \\ & - & \frac{1}{2 \pi i} \int_{s=-\infty-i \omega_2}^{s=0-i \omega_2} ds \left[ \hat{c}_{a}^{II}(s) -\hat{c}_a(s) \right] \exp(st).\end{aligned}$$ The cut contribution $\mathcal{C}(t)$ is responsible for the appearance of non-exponential features in the decay dynamics, especially at short and long times; for an extensive and detailed analysis we refer the reader to e.g. Refs.[@Nakazato96; @Regola]; examples of non-exponential decays will be presented in Sec.IV. 
We just mention here that, in the weak coupling limit ($\mathcal{D} \rightarrow 0$), from Eq.(\[poloP\]) one has that $\gamma_p$ and $\Delta_p$ are small, and thus using Eq.(\[deltaR\]) we can cast Eq.(\[poloP\]) in the form $$-i \gamma_p+\Delta_p-\mathcal{P} \int_{\omega_1}^{\omega_2} d \omega \frac{\mathcal{D}(\omega)}{\omega_a-\omega}+ \pi i \mathcal{D}(\omega_a) \simeq 0$$ from which we recover for the decay rate $\gamma_p$ and frequency shift $\Delta_p$ of the resonance the same expressions $\gamma_R$ and $\Delta_R$ as given by Eqs.(\[decayrate\]) and (\[frequencyshift\]) in the framework of the Weisskopf-Wigner analysis. In the strong coupling regime, close to the boundary of appearance of bound modes, the decay strongly deviates from an exponential law at any time scale, with the appearance of typical damped Rabi oscillations (see e.g. Ref. [@Tannoudji], pp. 249-255). Microcavity with gain: lasing condition --------------------------------------- Let us now consider the case of a microcavity with gain, i.e. $g'>0$. In this case, one (or more) poles $s_p$ of $\hat{c}_a(s)$ on the first Riemannian sheet with ${\rm Re}(s) \geq 0$ may appear as the modal gain $g'$ is increased, so that the mode amplitude $c_a(t)$ will grow with time, indicating the onset of an instability. In this case, the Bromwich path ${\rm B}$ should be closed taking into account the existence of one (or more than one) pole in the ${\rm Re}(s) \geq 0$ plane, as shown in Fig.3. For the case of a simple pole $s_p=-\gamma_p-i\omega_a-i\Delta_p$, the expression (\[decaylaw\]) for the temporal evolution of $c_a(t)$ is therefore still valid, where now $\gamma_{p} \leq 0$ and $\Delta_p$ are found as a solution of the equation \[compare with Eq.(\[poloP\])\] $$-i \gamma_p-ig'+\Delta_p-\int_{\omega_1}^{\omega_2} d \omega \frac{\mathcal{D}(\omega)}{\omega_a+\Delta_p-i\gamma_p-\omega}=0. 
\label{pologain}$$ ![ (a) Deformation of the Bromwich path for inverse Laplace transformation with one pole $s_p$ on the ${\rm Re}(s)>0$ half-plane (unstable state). (b) Corresponding integration contour used to calculate the inverse Laplace transform. The integration along the solid (dashed) curves is made on the first (second) Riemannian sheet of $\hat{c}_a(s)$.](Fig3.eps) As a rather general rule, it turns out that, as $g'$ is increased, the pole $s_p$ of $\hat{c}_{a}^{II}(s)$, which at $g'=0$ lies in the ${\rm Re}(s)<0$ plane, crosses the imaginary axis in the cut region. This crossing changes the decay of $c_a(t)$ into a non-decaying or growing behavior, and thus it can be taken as the threshold for laser oscillation. The modal gain at threshold, $g^{'}_{th}$, is thus obtained from Eq.(\[pologain\]) by setting $\gamma_p=0^-$, i.e. $$-ig^{'}_{th}+\Delta_p-\Delta(\omega_a+\Delta_p)+ i \pi \mathcal{D}(\omega_a+\Delta_p)=0, \label{poloth}$$ where we used Eq.(\[Omshift\]) and the relation $$\begin{aligned} \int_{\omega_1}^{\omega_2} d \omega \frac{\mathcal{D}(\omega)}{\omega_a+\Delta_p+i0^+ -\omega}= \\ =\mathcal{P}\int_{\omega_1}^{\omega_2} d \omega \frac{\mathcal{D}(\omega)}{\omega_a+\Delta_p -\omega}-i \pi \mathcal{D}(\omega_a+\Delta_p).\end{aligned}$$ Therefore the threshold for laser oscillation is given by $$g_{th}=\gamma_i+\pi \mathcal{D}(\omega_a+\Delta_p), \label{thgeneral}$$ where $\Delta_p$ (the frequency shift of the oscillating mode from the microcavity resonance frequency $\omega_a$) is implicitly defined by the equation $$\Delta_p=\mathcal{P}\int_{\omega_1}^{\omega_2} d \omega \frac{\mathcal{D}(\omega)}{\omega_a+\Delta_p -\omega}, \label{shiftX}$$ i.e. $\Omega_{osc}-\omega_a=\Delta (\Omega_{osc})$ with $\Omega_{osc}=\omega_a+\Delta_p$.
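As an illustration of how Eqs.(\[thgeneral\]) and (\[shiftX\]) can be solved in practice, the self-consistent shift $\Delta_p$ may be obtained by combining a Cauchy principal-value quadrature with a root search. The sketch below assumes a purely hypothetical parabolic structure function $\mathcal{D}(\omega)=0.05\,\omega(2-\omega)$ on $(\omega_1,\omega_2)=(0,2)$, vanishing at the band edges, with illustrative values $\omega_a=1.2$ and $\gamma_i=0.1$ (none of these numbers are taken from the text):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical reservoir structure function vanishing at the band edges
# (all numbers below are illustrative, not taken from the text).
w1, w2 = 0.0, 2.0
D = lambda w: 0.05 * w * (2.0 - w)
omega_a, gamma_i = 1.2, 0.1

def F(delta):
    # Residual of Eq.(shiftX).  quad(..., weight='cauchy', wvar=c) returns
    # the principal value of int D(w)/(w - c) dw, hence the sign below.
    pv, _ = quad(D, w1, w2, weight='cauchy', wvar=omega_a + delta)
    return delta + pv

delta_p = brentq(F, w1 - omega_a + 0.05, w2 - omega_a - 0.05)
g_th = gamma_i + np.pi * D(omega_a + delta_p)     # Eq.(thgeneral)
assert abs(F(delta_p)) < 1e-9
```

The bracket passed to `brentq` spans the band $(\omega_1-\omega_a,\omega_2-\omega_a)$, where a sign change of the residual is guaranteed by the graphical argument given below for structure functions vanishing at the band edges.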
It should be noted that, under the conditions stated in Sec.III.A ensuring that for the passive microcavity no bound modes exist, Eq.(\[shiftX\]) admits (at least) one solution for $\omega_a+\Delta_p$ inside the range $(\omega_1,\omega_2)$. The simplest proof is graphical \[see Fig.1(a)\], after observing that $\omega_2-\omega_a>\Delta(\omega_2)$ and $\omega_1-\omega_a<\Delta(\omega_1)$.\ The rather simple Eq.(\[thgeneral\]) provides a generalization of Eq.(\[thmarkovian\]) for the laser threshold of the active microcavity beyond the Markovian approximation and reduces to it in the limit $\Delta_p \simeq 0$. The frequency shift $\Delta_p$, however, cannot in general be neglected and may strongly affect the value of $g_{th}$ in the strong coupling regime. In fact, for a small coupling of the microcavity with the structured reservoir ($\lambda \rightarrow 0$), the shift $\Delta_p$ can be neglected and therefore $g_{th}$ increases with $\lambda$ according to Eq.(\[thmarkovian\]). However, as $\lambda$ is further increased up to the critical coupling condition, the shift $\Delta_p$ is no longer negligible, and the oscillation frequency $\Omega_{osc}=\omega_a+\Delta_p$ at lasing threshold is pushed toward the boundaries $\omega_1$ or $\omega_2$, where $\mathcal{D}(\omega)$ and thus $g^{'}_{th}$ vanish. In fact, as $\lambda$ is increased to reach the smaller of the two values $\lambda_{I}$ and $\lambda_{II}$ defined by the relation [@note2]: $$\lambda_{I,II}^2= (\omega_{1,2}-\omega_a) \left[ \mathcal{P} \int_{\omega_1}^{\omega_2}d \omega \frac{\sum_{\mu}|\kappa_{\mu}(\omega)|^2}{\omega_{1,2}-\omega} \right]^{-1},$$ one has $\Omega_{osc} \rightarrow \omega_{1,2}$, and hence $g_{th} \rightarrow \gamma_i$.
Therefore, $g_{th}$ initially increases from $\gamma_i$ as the coupling strength is increased from $\lambda=0$; it must then reach a maximum value and start to decrease, until reaching again the value $\gamma_i$ as $\lambda$ approaches the critical value ($\lambda_{I}$ or $\lambda_{II}$). Whereas the increase of $g_{th}$ with $\lambda$ in the weak coupling regime is simply understood as being due to the acceleration of the decay of the microcavity mode into the neighboring waveguides, the subsequent decrease of $g_{th}$ is related to the appearance of a back-coupling of the field from the continuum (waveguides) into the microcavity mode, until a bound state is formed at the critical coupling strength.\ As a final remark, it should be noted that the precise dynamical features and the kind of instability at lasing threshold may depend on the specific structure function $\mathcal{D}(\omega)$ of the reservoir. In particular, anomalous dynamical features may occur at the critical coupling regime, as will be shown in the next section. An exactly-solvable model: the coupling of a microcavity with a coupled resonator optical waveguide =================================================================================================== To clarify the general results obtained in the previous section, we present an illustrative example of an exactly-solvable model in which a single-mode and high-$Q$ microcavity is tunneling-coupled to a CROW structure [@Stefanou98; @Yariv99; @Ozbay00; @Olivier01], which provides the non-Markovian decay channel of the microcavity. In a CROW structure, photons tunnel from one evanescent defect mode of a cavity to the neighboring one due to the overlap between the tightly confined modes at each defect site, and therefore memory effects are expected to be non-negligible whenever the coupling rate of the microcavity with the CROW becomes comparable with the CROW hopping rate.
The model --------- The schematic model of a microcavity tunneling-coupled to a CROW is shown in Fig.4 for two typical configurations. The CROW consists of a chain of equally-spaced optical waveguides [@Stefanou98; @Yariv99; @Ozbay00; @Olivier01], supporting a single band of propagating modes, and the microcavity is tunneling-coupled to either one \[Fig.4(a)\] or two \[Fig.4(b)\] cavities of the CROW. For the sake of definiteness, we will consider the coupling geometry shown in Fig.4(b), though similar results are obtained for the single-coupling configuration of Fig.4(a).\ ![ Schematic of a microcavity (M) tunneling-coupled to either one (a) or two (b) cavities of a coupled-resonator optical waveguide. Plot (c) shows a schematic of a microcavity coupled with a CROW in the configuration (b) realized on a PC platform made of a square lattice of air holes with a one-dimensional chain of defects patterned along the lattice (Ref.[@Liu05]).](Fig4.eps) The microcavity and the CROW can be realized on the same PC platform (see, e.g., [@Liu05; @Yanik04]): the CROW is simply obtained by a one-dimensional periodic array of defects, placed at distance $d$ and patterned along the lattice to form resonant cavities with high-$Q$ factors. The microcavity is realized by one defect in the array, say the one corresponding to index $n=0$, which can have a resonance frequency $\omega_m$ different from that of the adjacent defects and can be placed at a larger distance $d_0 \geq d$ from the other cavities \[see Fig.4(c)\]. The CROW supports a continuous band of propagating modes whose dispersion relation, in the tight-binding approximation, is given by [@Yariv99] $$\omega(k)=\omega_0-2 \kappa \cos(kd),$$ where $\kappa$ is the hopping amplitude between two consecutive cavities of the CROW, $d$ is the length of the unit cell of the CROW, $k$ is the Bloch wave number, and $\omega_0$ is the central frequency of the band.
The resonance frequency $\omega_m$ of the microcavity is assumed to be internal to the CROW band, i.e. $\omega_0-2 \kappa<\omega_m<\omega_0+2 \kappa$. The microcavity is tunneling-coupled to the two adjacent cavities of the CROW, and we denote by $\kappa_0$ the hopping amplitude. The ratio $\kappa_0 / \kappa$ and the position of $\omega_m$ inside the CROW band can be properly controlled by changing the geometrical parameters of the defects and the ratio $d_0/d$. In particular, in the limiting case where the microcavity has the same geometry and spacing as the other CROW cavities, one has $\kappa_0=\kappa$ and $\omega_m=\omega_0$. An excellent and simple description of light transport in the system is provided by a set of coupled-mode equations for the amplitudes $a_n$ of modes in the cavities (see, e.g., [@Yariv99; @Yanik04]) $$\begin{aligned} i\dot a_n & = & -\kappa(a_{n+1}+a_{n-1}) \; \; (|n| \geq 2) \\ i\dot a_{-1} & = & -\kappa a_{-2}- \kappa_0 c_a \\ i\dot c_{a} & = & -\kappa_0 (a_{-1}+ a_1)+(\omega_a+ig) c_a \\ i \dot a_{1} & = & -\kappa a_{2}-\kappa_0 c_a\end{aligned}$$ where $c_a$ is the amplitude of the microcavity mode, $g$ is its effective modal gain per unit time, and $\omega_a=\omega_m-\omega_0$ is the frequency detuning between the microcavity resonance frequency $\omega_m$ and the central frequency $\omega_0$ of the CROW band. For example, for a CROW built in a GaAs-based PC with a square lattice of air holes in the design of Ref.[@Liu05], a typical value of the cavity coupling coefficient turns out to be $\kappa \simeq 700-800$ GHz and $\omega_0 / \kappa \sim 3 \times 10^3$ at the $\lambda_0 =850$ nm operation wavelength. Note that in writing Eqs.(38), we have neglected the internal losses of the CROW cavities; a reasonable value of the $Q$-factor for a realistic microcavity is $Q=\omega_0/(2 \gamma_{loss}) \sim 10^6$ [@Armani03], which would correspond to a cavity loss rate $\gamma_{loss} \sim 1 $ GHz to be added in Eqs.(38).
This loss rate, however, is about two-to-three orders of magnitude smaller than the cavity coupling coefficient $\kappa$, and therefore on a short time scale non-Markovian dynamical effects should be observed even in the presence of CROW losses. The effects of reservoir (CROW) losses will be briefly discussed at the end of the section.\ To study the temporal evolution of an initial field in the microcavity, Eqs.(38) are solved with the initial condition $a_n(0)=0$ and $c_a(0)=1$. An integral representation for the solution of Eqs.(38) might be directly derived in the time domain by an extension of the technique described in Refs.[@Longhi06a; @Longhi06b], where a system of coupled-mode equations similar to Eqs.(38), but in the conservative (i.e. $g=0$) case, was considered. However, we prefer here to formally place Eqs.(38) into the more general Hamiltonian formalism of Sec.II and then use the Laplace transform analysis developed in the previous section to obtain the temporal evolution for $c_a(t)$. To this end, in the Appendix we prove that $c_a(t)$ may be obtained as a solution of the following equations, which have the canonical form (3) with a simple continuum of modes acting as a decay channel $$\begin{aligned} i \dot c_a(t) & = & (\omega_a+ig)c_a+ \lambda \int_{-2 \kappa}^{ 2 \kappa} d \omega \kappa_{\mu}(\omega) c(\omega,t) \label{cme1a}\\ i \dot c(\omega,t) & = & \omega c(\omega,t)+\lambda \kappa_{\mu}(\omega) c_a(t) \label{cme2a}\end{aligned}$$ with $$\lambda \kappa_{\mu}(\omega)=\kappa_0 \sqrt{\frac{2}{\pi \kappa}} \left[1-\left( \frac{\omega}{2 \kappa} \right)^2 \right]^{1/4}.$$ Note that the reservoir structure function for this model, defined for $\omega_1<\omega<\omega_2$ with $\omega_1=-2 \kappa$ and $\omega_2=2 \kappa$, is simply given by $$\mathcal{D}(\omega)=\frac{2 \kappa_{0}^2}{\pi \kappa} \sqrt{1-\left( \frac{\omega}{2 \kappa} \right)^2}. 
\label{sfCROW}$$ With this reservoir structure function, the self-energy \[Eq.(\[selfenergy\])\] can be calculated exactly and reads $$\Sigma(s)=i \left( \frac{\kappa_0}{\kappa} \right)^2 \left[ s-\sqrt{4 \kappa^2+s^2} \right].$$ The function $\Delta(\omega)$, as defined by Eq.(\[Omshift\]), then reads $$\Delta(\omega)= \left\{ \begin{array}{lr} (\kappa_0 / \kappa)^2 \omega & |\omega| < 2 \kappa \\ (\kappa_0 / \kappa)^2 \left[\omega-\sqrt{\omega^2-4 \kappa^2} \right] & \omega > 2 \kappa \\ (\kappa_0 / \kappa)^2 \left[\omega+\sqrt{\omega^2-4 \kappa^2} \right] & \omega < -2 \kappa \end{array} \right. \label{shiftCROW}$$ Note that the coupling strength between the microcavity and the CROW is determined by the ratio $\kappa_0 / \kappa$, the limit $\kappa_0 / \kappa \rightarrow 0$ corresponding to the weak coupling regime. The passive microcavity: from exponential decay to damped Rabi oscillations --------------------------------------------------------------------------- Let us consider first the case $g=0$. The conditions for the non-existence of bound modes, i.e. for a complete decay of $c_a(t)$, are $\omega_2-\omega_a \geq \Delta(\omega_2)$ and $\omega_1-\omega_a \leq \Delta(\omega_1)$ (see Sec.III.A), which using Eq.(\[shiftCROW\]) read explicitly $$\left( \frac{\kappa_0}{\kappa} \right)^2 -1 \leq \frac{\omega_a}{2 \kappa} \leq 1-\left( \frac{\kappa_0}{\kappa} \right)^2.$$ Note that, as a necessary condition, this relation implies that $|\omega_a| \leq 2 \kappa$ and $(\kappa_0 / \kappa)^2 \leq 1$. Note also that the critical coupling regime is reached at $(\kappa_0 / \kappa)=\sqrt{1 -|\omega_a|/(2 \kappa)}$. 
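The in-band branch of $\Delta(\omega)$ quoted above can be cross-checked against the defining principal-value integral; a sketch with illustrative values $\kappa=1$, $\kappa_0/\kappa=0.6$, using `scipy`'s Cauchy-weighted quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check (illustrative values, kappa = 1) that the principal-value
# integral of the semicircular structure function D(omega) reproduces
# Delta(omega) = (kappa0/kappa)^2 * omega inside the band |omega| < 2*kappa.
kappa, kappa0 = 1.0, 0.6

def D(w):
    return (2 * kappa0**2 / (np.pi * kappa)) * np.sqrt(1 - (w / (2 * kappa))**2)

def Delta(omega):
    # P int_{-2k}^{2k} D(w)/(omega - w) dw; quad's Cauchy weight computes
    # P int D(w)/(w - omega) dw, hence the minus sign.
    pv, _ = quad(D, -2 * kappa, 2 * kappa, weight='cauchy', wvar=omega)
    return -pv

for omega in (-1.3, 0.2, 1.7):
    assert abs(Delta(omega) - (kappa0 / kappa)**2 * omega) < 1e-6
```

The linearity of $\Delta(\omega)$ inside the band is a special feature of the semicircular density of states and underlies the simple closed-form results derived below.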
For a coupling strength $(\kappa_0 / \kappa)$ above this critical value, the decay of $c_a(t)$ is incomplete owing to the existence of bound modes between the microcavity and the CROW; this case will not be considered further here.\ The temporal decay law for the mode amplitude $c_a(t)$ can be expressed using the general relation (\[decaylaw\]), which highlights the existence of the exponential (Weisskopf-Wigner) decaying term plus its correction due to the contribution of the Hankel paths. For the microcavity-CROW system, however, it is more convenient to perform the inverse Laplace transform on the first Riemannian sheet of $\hat{c}_a(s)$ by closing the Bromwich path ${\rm B}$ with a semicircle with radius $R \rightarrow \infty$ in the ${\rm Re}(s)<0$ half-plane after excluding the branch cut from the domain by the contour $\sigma$ as shown in Fig.5. Since in this case there are no singularities of $\hat{c}_a(s)$, we simply obtain $$c_{a}(t)=\frac{1}{2 \pi } \oint_{\sigma} ds \; \frac{\exp(st)}{is-\omega_a-\Sigma(s)}$$ which, using Eq.(\[disco\]), reads explicitly ![ Integration contour used for the inverse Laplace transform in the passive microcavity-CROW system.](Fig5.eps) $$\begin{aligned} c_{a}(t) & = & \frac{i}{2 \pi } \int_{\omega_1}^{\omega_2} d \omega \left[ \frac{\exp(-i \omega t)}{\omega-\omega_a-\Sigma(-i\omega+0^+)}+ \right. \nonumber \\ & - & \left. \frac{\exp(-i \omega t)}{\omega-\omega_a-\Sigma(-i\omega-0^+)} \right] = \nonumber \\ & = & \int_{\omega_1}^{\omega_2} d \omega \frac{\mathcal{D}(\omega) \exp(-i \omega t)}{[\omega-\omega_a-\Delta(\omega)]^2+ \pi^2 \mathcal{D}^2(\omega)}.\end{aligned}$$ For the microcavity-CROW model, one then obtains $$c_{a}(t)=\frac{1}{2 \pi} \frac{\kappa_{0}^2}{\kappa^3} \int_{-2 \kappa}^{2 \kappa} d \omega \frac{\exp(-i \omega t) \sqrt{1-(\omega/2 \kappa)^2}}{\left\{ (\omega/2 \kappa) \left[ 1-(\kappa_0 / \kappa)^2 \right]- (\omega_a / 2 \kappa) \right\}^2+(\kappa_0 / \kappa)^4 [1-\omega^2 / (4 \kappa^2)]}. 
\label{intcaCROW}$$ The integral on the right hand side in Eq.(\[intcaCROW\]) can be written in a more convenient form with the change of variable $\omega=-2 \kappa \cos Q$, yielding $$c_a(t)=\frac{1}{\pi} \int_{0}^{\pi} dQ \frac{(\kappa_0 / \kappa)^2 \sin^2 Q \exp(2i \kappa t \cos Q)}{\left[ (\omega_a/ 2 \kappa) +\cos Q-(\kappa_0 / \kappa)^2 \cos Q \right]^2+(\kappa_0 / \kappa)^4 \sin^2 Q }. \label{intrep}$$ In this form, the integral can be written [@Longhi06a] as a series of Bessel functions of the first kind of argument $2 \kappa t$ (Neumann series). Special cases, for which a simple expression for $c_a(t)$ is available, are those corresponding to $\omega_a=0$ and $\kappa_0=\kappa$, for which $$c_a(t)=J_0(2 \kappa t),$$ and to $\omega_a=0$ and $\kappa_0=\kappa/ \sqrt 2$, for which $$c_a(t)=\frac{J_1(2 \kappa t)}{\kappa t}.$$ Note that the former case corresponds to a critical coupling regime, where $\hat{c}_a(s)$ has two singularities at $s=\pm 2i \kappa+0^+$. The residues of $\hat{c}_a(s)$ at these singularities, however, vanish, and therefore the field $c_a(t)$ fully decays toward zero with an asymptotic power law $\sim 1/t^{1/2}$. In general, an inspection of the singularities of $\hat{c}_a(s)$ reveals that, for $\omega_a \neq 0$, at the critical coupling strength $(\kappa_0/\kappa)=\sqrt{1-|\omega_a|/(2 \kappa)}$ the Laplace transform $\hat{c}_a(s)$ has one singularity at either $s_p=2i \kappa+0^+$ or $s_p=-2i \kappa+0^+$ of type $\hat{c}_a(s) \sim 1/ \sqrt{s-s_p}$.\ The asymptotic decay behavior of $c_a(t)$ at long times can be determined by the application of the method of the stationary phase to Eq.(\[intrep\]). 
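As a numerical cross-check, Eq.(\[intrep\]) can be evaluated by direct quadrature and compared with the two closed-form special cases quoted above; a minimal sketch (assuming $\kappa=1$; the midpoint rule is spectrally accurate here because the integrand extends to a smooth periodic function of $Q$):

```python
import numpy as np
from scipy.special import j0, j1

kappa = 1.0

def c_a(t, omega_a, kappa0, M=2048):
    """Evaluate Eq.(intrep) by midpoint quadrature on a uniform Q grid."""
    Q = (np.arange(M) + 0.5) * np.pi / M
    r2 = (kappa0 / kappa) ** 2
    num = r2 * np.sin(Q) ** 2 * np.exp(2j * kappa * t * np.cos(Q))
    den = (omega_a / (2 * kappa) + (1 - r2) * np.cos(Q)) ** 2 \
        + r2 ** 2 * np.sin(Q) ** 2
    return np.mean(num / den)        # (1/pi) * integral over (0, pi)

for t in (0.7, 3.0):
    # omega_a = 0, kappa0 = kappa:         c_a(t) = J_0(2*kappa*t)
    assert abs(c_a(t, 0.0, kappa) - j0(2 * kappa * t)) < 1e-8
    # omega_a = 0, kappa0 = kappa/sqrt(2): c_a(t) = J_1(2*kappa*t)/(kappa*t)
    assert abs(c_a(t, 0.0, kappa / np.sqrt(2)) - j1(2 * kappa * t) / (kappa * t)) < 1e-8
```

The same routine can be used for generic $\omega_a$ and $\kappa_0$ below the critical coupling, where the denominator in Eq.(\[intrep\]) stays strictly positive on the open interval $(0,\pi)$.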
One then finds that at the critical coupling the field $c_a(t)$ decays toward zero with an asymptotic power law $\sim 1/t^{1/2}$, whereas below the critical coupling the decay is faster, with an asymptotic law $\sim 1/t^{3/2}$.\ Typical examples of non-exponential features in the decay process as the coupling strength is increased are shown in Fig.6 for $\omega_a=0$. The curves in the figures have been obtained by a direct numerical solution of Eqs.(38). Note that, whereas for weak coupling the exponential (Weisskopf-Wigner) decay law is retrieved to a good approximation \[see Fig.6(a)\], as the coupling strength $\kappa_0 / \kappa$ is increased the decay law strongly deviates from an exponential behavior. Note in particular the existence of strong oscillations, which are fully analogous to damped Rabi oscillations found in the atom-photon interaction context [@Tannoudji]. For $\omega_a \neq 0$, the oscillatory behavior of the long-time power-law decay is less pronounced and may even disappear (see Ref.[@Longhi06a]). ![ Decay of the mode amplitude $|c_a(t)|$ in a passive microcavity-CROW system for $\omega_a=0$ and for increasing values of coupling strength: (a) $\kappa_0 / \kappa=0.2$, (b) $\kappa_0 / \kappa=0.707$, and (c) $\kappa_0 / \kappa=1$ (critical coupling).](Fig6.eps) Microcavity with gain --------------------- Let us consider now the case $g \geq 0$. In order to determine the threshold for laser oscillation, we have to distinguish three cases depending on the value of the coupling strength $\kappa_0 / \kappa$.\ \ [*(i) Lasing condition below the critical coupling.*]{} In this case, corresponding to $ \kappa_0/ \kappa < \sqrt{1-|\omega_a|/(2 \kappa)}$, the threshold for laser oscillation is readily obtained from Eqs.(\[thgeneral\]), (\[shiftX\]), (\[sfCROW\]) and (\[shiftCROW\]). 
The frequency $\Omega_{osc}$ of the oscillating mode is given by $\Omega_{osc}=\omega_a /[1-(\kappa_0 / \kappa)^2]$, and the threshold gain for laser oscillation is thus given by $$g_{th}=2 \kappa \left( \frac{\kappa_0}{\kappa} \right)^2 \sqrt{1-\left[ \frac{\omega_a/(2 \kappa)}{1-(\kappa_0 / \kappa)^2 }\right]^2}.$$ The typical behavior of normalized threshold gain $g_{th}/(2 \kappa)$ versus the coupling strength $(\kappa_0 / \kappa)^2$ is shown in Fig.7. ![ Behavior of normalized threshold gain $g_{th}/(2 \kappa)$ versus the coupling strength $(\kappa_0 / \kappa)^2$ for a few values of the ratio $\omega_a / (2 \kappa)$.](Fig7.eps) Note that, according to the general analysis of Sec.III.B, the threshold for laser oscillation first increases with the coupling strength, reaches a maximum, and then decreases toward zero as the critical coupling strength is attained. At $g=g_{th}$, $\hat{c}_a(s)$ has a simple pole at $s=s_p=-i\Omega_{osc}+0^+$, whereas as $g$ is increased above $g_{th}$ the pole $s_p$ moves into the ${\rm Re}(s)>0$ half-plane. Therefore, the onset of lasing is characterized by an amplitude $|c_a(t)|$ which asymptotically decays toward zero for $g<g_{th}$, reaches a steady-state and nonvanishing value at $g=g_{th}$ (the field neither decays nor grows asymptotically), whereas it grows exponentially (in the early lasing stage) for $g>g_{th}$ with a growth rate $\sigma(g)={\rm Re}(s_p)$ (see Fig.8). This instability scenario is the usual one encountered in the semiclassical theory of laser oscillation as a second-order phase transition [@note3]. However, the temporal dynamics at the onset of lasing shows unusual oscillations \[see Fig.8(a)\] which are a signature of non-Markovian dynamics. 
In addition, whereas in the Markovian limit the growth rate $\sigma$ should increase linearly with $g-g_{th}$, in the strong coupling regime the growth rate $\sigma$ shows an unusual nonlinear behavior near threshold, as shown in Fig.8(b).\ ![ (a) Behavior of mode amplitude $|c_a(t')|$ versus normalized time $t'=2 \kappa t$ for $(\kappa_0 / \kappa)^2=0.8$, $\omega_a / (2 \kappa)=0.18$, and for increasing values of normalized gain $g/ (2 \kappa)$. (b) Behavior of normalized growth rate versus normalized gain for $(\kappa_0 / \kappa)^2=0.8$ and $\omega_a / (2 \kappa)=0.18$.](Fig8R.eps) \ [*(ii) Lasing condition at the critical coupling with $\omega_a \neq 0$.*]{} A different dynamics occurs when the coupling strength $\kappa_0/\kappa$ reaches the critical value $ \kappa_0/ \kappa=\sqrt{1-|\omega_a|/(2 \kappa)}$. As discussed in Sec.IV.B, at $g=0$ the Laplace transform $\hat{c}_a(s)$ has a singularity at either $s_p=2 i \kappa$ or $s_p=-2 i \kappa$; however, $s_p$ [*is not*]{} a simple pole and $c_a(t)$ asymptotically decays toward zero. For $\omega_a \neq 0$, i.e. for $(\kappa_0/ \kappa)<1$, as $g$ is increased just above zero $\hat{c}_a(s)$ shows a simple pole with a growth rate $\sigma={\rm Re}(s_p)>0$ which slowly increases with $g$ at the early stage, as shown in Fig.9. In the figure, a typical temporal evolution of $c_a(t)$ is also shown. Note that in this case [*there is not*]{} a value of $g$ for which the field amplitude $c_a(t)$ neither grows nor decays, i.e. the intermediate situation shown in Fig.8(a) is absent in Fig.9(a): for $g=0$ the amplitude decays, whereas for $g=0^+$ it always grows exponentially. The transition describing the passage of the laser from below to above threshold in the linear stage of the instability is therefore quite unusual at the critical coupling.\ ![ Same as Fig.8, but for parameter values $(\kappa_0 / \kappa)^2=0.8$ and $\omega_a / (2 \kappa)=0.2$ (critical coupling). 
Note that in this case there exists no lasing threshold in the traditional sense.](Fig9R.eps) \ [*(iii) Lasing condition at the critical coupling with $\omega_a = 0$.*]{} A somewhat singular behavior occurs at the critical coupling when $\omega_a=0$, and therefore $\kappa_0 / \kappa=1$. This case corresponds to a periodic CROW in which one of the cavities is pumped and acts as the microcavity in our general model. For $\omega_a = 0$ and $\kappa_0 / \kappa=1$, the Laplace transform $\hat{c}_a(s)$ is explicitly given by $$\hat{c}_a(s)=\frac{1}{-g+\sqrt{s^2+4 \kappa^2}}.$$ To perform the inversion, one needs to distinguish four cases.\ (a) $g=0$. For $g=0$, the field $c_a(t)$ decays according to $$c_a(t)=J_0(2 \kappa t)$$ as shown in Sec.IV.B.\ \ (b) $0<g<2 \kappa$. In this case $\hat{c}_a(s)$ has two simple poles on the first Riemannian sheet at $s_{1,2}=\pm i \sqrt{4 \kappa^2-g^2}+0^+$. The inversion can be performed by closing the Bromwich path ${\rm B}$ with the contour shown in Fig.10, where along the dashed curves the integrals are performed on the second Riemannian sheet. One then obtains $$c_a(t)=\frac{2 g}{\sqrt{4 \kappa^2-g^2}} \sin \left( \sqrt{4 \kappa^2-g^2 }t \right) +\mathcal{C}(t)$$ where the first term on the right hand side in the equation arises from the residues at poles $s_{1,2}$, whereas $\mathcal{C}(t)$ is the contribution from the contour integration along the Hankel paths $h_1$ and $h_2$, which asymptotically decays toward zero as $t \rightarrow \infty$. Note that, after an initial transient, the amplitude $|c_a(t)|$ steadily oscillates in time with frequency $\sqrt{4 \kappa^2-g^2}$ and amplitude $2 g / \sqrt{4 \kappa^2-g^2 }$. Note also that the amplitude and period of oscillations diverge as the modal gain $g$ approaches $2 \kappa^-$.\ ![ Integration contour used to calculate the inverse Laplace transform for $\omega_a=0$, $\kappa_0 / \kappa=1$ and for $0<g/(2 \kappa) <1$. 
The integration along the solid (dashed) curves is made on the first (second) Riemannian sheet of $\hat{c}_a(s)$. $s_{1,2}$ are the two poles of $\hat{c}_a(s)$ on the imaginary axis inside the cut.](Fig10.eps) \ (c) $g=2 \kappa$. In this case, $\hat{c}_a(s)$ has a single second-order pole at $s=0^+$, and therefore to perform the inversion it is worth separating the singular and non-singular parts of $ \hat{c}_a(s)$ as $$\hat{c}_a(s)= \frac{4 \kappa}{s^2}+f(s)$$ where $f(s)$ has no singularities on the imaginary axis. After inversion one then obtains $$c_a(t)= 4 \kappa t +\frac{1}{2 \pi } \int_{-\infty}^{\infty} d \omega \; f(-i \omega+0^+) \exp(-i \omega t), \label{lineargrowth}$$ where the second term on the right-hand side in the above equation asymptotically decays toward zero. Therefore, we may conclude that at $g=2 \kappa$ the mode amplitude $c_a(t)$ is dominated by a secularly growing term [*which is not exponential*]{}.\ \ (d) $g>2 \kappa$. In this case, $\hat{c}_a(s)$ has an unstable simple pole at $s_p=(g^2-4 \kappa^2)^{1/2}$, and therefore the solution $c_a(t)$ grows exponentially with time.\ \ The dynamical scenario described above for $\omega_a=0$ and $\kappa_0 / \kappa=1$ is illustrated in Fig.11. Note that in this case there is some uncertainty in the definition of the laser threshold, since there exists [*an entire interval*]{} of modal gain values, from $g=0^+$ to $g=2 \kappa^-$, at which an initial field in the cavity neither grows nor decays.\ ![ Behavior of mode amplitude $c_a(t')$ versus normalized time $t'=2 \kappa t$ for $\omega_a=0$, $\kappa_0 / \kappa=1$ (critical coupling) and for increasing values of normalized gain: (a) $g/(2\kappa)=0$, (b) $g/(2\kappa)=0.2$, (c) $g/(2\kappa)=0.95$, (d) $g/(2\kappa)=1$, and (e) $g/(2\kappa)=1.1$. ](Fig11.eps) \ As a final comment, we briefly discuss the effects of internal losses of the CROW cavities, which have so far been neglected, on the temporal evolution of the mode amplitude $c_a(t)$. 
In the case where all the cavities in the CROW have the same loss rate $\gamma_{loss}$, the temporal evolution of $c_a(t)$ is simply modified by the introduction of an additional exponential damping factor $\exp(-\gamma_{loss}t)$, i.e. $c_a(t) \rightarrow c_a(t) \exp(-\gamma_{loss}t)$. This additional decay term would therefore shift the threshold for laser oscillation to higher values and, most importantly for our analysis, it might hinder the non-Markovian dynamical effects discussed so far. However, for a small value of $\gamma_{loss} / \kappa$ (e.g. $\gamma_{loss} / \kappa \sim 0.01$ for the numerical values given in Ref.[@Liu05]), non-Markovian effects should be clearly observable in the transient field dynamics for times shorter than $\sim 1 /\gamma_{loss}$. As an example, Fig.12 shows the dynamical evolution of the mode amplitude $|c_a(t)|$ for the same parameter values as in Fig.11, except for the inclusion of a CROW loss rate $\gamma_{loss}=0.01 \kappa$. It is worth commenting on the dynamical behavior of Fig.12(d) corresponding to $g=2 \kappa$. In this case, using Eq.(\[lineargrowth\]) and disregarding the decaying term on the right hand side in Eq.(\[lineargrowth\]), one can write $$c_a(t) \sim 4 \kappa t \exp(-\gamma_{loss} t).$$ Note that in the early transient stage the initial mode amplitude stored in the microcavity grows linearly as in Fig.11(d); however, it reaches a maximum and then finally decays owing to the prevalence of the loss-induced exponential factor over the linearly growing term. Therefore, though the microcavity is [*below*]{} threshold for oscillation (an initial field in the cavity asymptotically decays to zero), the field undergoes a [*transient amplification*]{} before decaying. The maximum amplification factor in the transient is about $2 \kappa / \gamma_{loss}$, and can therefore be relatively large in high-$Q$ microcavities. 
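Taking the approximate envelope $c_a(t) \simeq 4 \kappa t \exp(-\gamma_{loss} t)$ at face value, its maximum occurs at $t=1/\gamma_{loss}$ with value $4\kappa/(e\,\gamma_{loss}) \simeq 1.47\,\kappa/\gamma_{loss}$, of the same order as the estimate above; a quick numerical check (illustrative values $\kappa=1$, $\gamma_{loss}=0.01\,\kappa$, as in Fig.12):

```python
import numpy as np
from scipy.optimize import minimize_scalar

kappa, gamma_loss = 1.0, 0.01     # illustrative values, gamma_loss = 0.01*kappa

def envelope(t):
    return 4 * kappa * t * np.exp(-gamma_loss * t)

# Locate the maximum numerically and compare with the analytic result
# t* = 1/gamma_loss, c_max = 4*kappa/(e*gamma_loss).
res = minimize_scalar(lambda t: -envelope(t), bounds=(0.0, 10.0 / gamma_loss),
                      method='bounded')
t_star, c_max = res.x, -res.fun
assert abs(t_star - 1.0 / gamma_loss) < 1e-3
assert abs(c_max - 4 * kappa / (np.e * gamma_loss)) < 1e-6
```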
Such a transient growth despite the asymptotic stability of the zero solution can be traced to the fact that for $g \neq 0$ the system (38) is non-normal [@note4]: though its eigenvalues all have a negative real part, the system can sustain a transient energy growth. The transient amplification shown in Fig.12(d) is therefore analogous to non-normal energy growth encountered in other hydrodynamic [@Trefethen93; @Farrell94; @Farrell96] and optical [@Kartner99; @Longhi00; @Firth05] systems and indicates an enhanced sensitivity of the system to noise. ![ Same as Fig.11, but in the presence of CROW losses ($\gamma_{loss}/ \kappa=0.01$).](Fig12.eps) Conclusions =========== In this work the dynamics of a classical field in a single-mode optical microcavity coupled to a structured continuum of modes (reservoir) has been studied analytically, within a rather general Hamiltonian model \[Eqs.(1)\], beyond the usual Weisskopf-Wigner (Markovian) approximation. Typical non-Markovian effects for the passive microcavity are non-exponential decay and damped Rabi oscillations (Sec.III.A). In the presence of gain, the general condition for laser oscillation, which extends the usual gain/loss rate balance condition of elementary laser theory, has been derived (Sec.III.B), and the behavior of the laser threshold versus the microcavity-reservoir coupling has been determined. The general results have been specialized for an exactly-solvable model, which can be implemented in a photonic crystal with defects: an optical microcavity tunneling-coupled to a coupled-resonator optical waveguide (Sec.IV). Special attention has been devoted to the transition describing the onset of laser oscillation at the critical coupling between the cavity and the waveguide (Sec.IV.C). 
Unusual dynamical effects, which are a clear signature of non-Markovian dynamics, have been illustrated, including: the existence of a finite interval of modal gain where the field oscillates without either decaying or growing, with the gain parameter controlling the amplitude and period of the oscillations; a linear (instead of exponential) growth of the field at the onset of instability for laser oscillation; and the existence of transient (non-normal) amplification of the field below laser threshold when intrinsic losses of the coupled-resonator waveguide are considered. It is envisaged that, though non-Markovian effects are not relevant in standard laser resonators in which the field stored in the cavity is coupled to the broad continuum of modes of the external open space by a partially-transmitting mirror [@Lang73], they should be observable when dealing with high-$Q$ microcavities coupled to waveguides, which act as a structured decay channel for the field stored in the microcavity. Appendix ======== In this Appendix we prove the equivalence between the coupled-mode equations (38) in the tight-binding approximation and the canonical formulation for the decay of a discrete state into a continuum provided by Eqs.(39). To this end, let us first note that, owing to the inversion-symmetry of the initial condition $a_{-n}(0)=a_n(0)=0$ ($n \neq 0$), it can be readily shown that the solution $a_n(t)$ maintains the same symmetry at any time, i.e. $a_{-n}(t)=a_n(t)$ for $t \geq 0$. Let us then introduce the continuous function of the real-valued parameter $Q$ $$\phi(Q,t)=\sum_{n=1}^{\infty} a_n(t) \sin(nQ),$$ where $Q$ is taken inside the interval $[0,\pi]$. Using the relation $$\int_{0}^{\pi} dQ \; \sin(nQ) \sin(mQ)= \frac{\pi}{2} \delta_{m,n} \; \; \; (m,n \geq 1)$$ the amplitudes $a_n$ of modes in the CROW are related to the continuous field $\phi$ by the simple relations $$a_n(t)= \frac{2}{\pi} \int_{0}^{\pi}dQ \; \phi(Q,t) \sin(nQ)$$ ($n \geq 1$). 
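As a sanity check of the sine-series inversion used above, one can build $\phi$ from a few random amplitudes $a_n$ and recover them by quadrature; a minimal sketch (illustrative, for a time-independent snapshot of the field):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
a = rng.normal(size=N)                 # amplitudes a_1, ..., a_N

M = 4096
Q, h = np.linspace(0.0, np.pi, M + 1, retstep=True)
phi = sum(a[n - 1] * np.sin(n * Q) for n in range(1, N + 1))

def trapz(y):
    # composite trapezoidal rule on the uniform Q grid
    return h * (np.sum(y) - 0.5 * (y[0] + y[-1]))

# Inversion formula: a_n = (2/pi) * int_0^pi phi(Q) sin(n*Q) dQ
a_rec = np.array([2.0 / np.pi * trapz(phi * np.sin(n * Q))
                  for n in range(1, N + 1)])
assert np.allclose(a_rec, a, atol=1e-10)
```

The recovery is exact up to rounding because the trapezoidal rule integrates the products $\sin(nQ)\sin(mQ)$ exactly on a sufficiently fine uniform grid.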
The equation of motion for $\phi$ is readily obtained from Eqs.(38) and reads $$i \frac{\partial \phi}{\partial t}=-2 \kappa \cos(Q) \phi-\kappa_0 \sin(Q) c_a$$ whereas the equation for $c_a$, taking into account that $a_{-1}+a_1=2 a_1=(4/ \pi) \int_{0}^\pi d Q \phi(Q,t) \sin(Q)$, can be cast in the form: $$i \dot c_a(t)=(\omega_a+ig)c_a(t)-\frac{4 \kappa_0}{\pi} \int_{0}^{\pi}dQ \; \phi(Q,t) \sin(Q).$$ By introducing the frequency $\omega$ of the continuum $$\omega=-2 \kappa \cos(Q)$$ and after setting $$c(\omega,t)=-\sqrt{\frac{2}{\pi \kappa}} \phi(\omega,t) \frac{1}{\left[1-\omega^2/(2 \kappa)^2 \right]^{1/4}},$$ one finally obtains Eqs.(\[cme1a\]) and (\[cme2a\]) given in the text. [99]{} R. Lang, M.O. Scully, and W.E. Lamb, Phys. Rev. A [**7**]{}, 1788 (1973). S.C. Ching, H.M. Lai, and K. Young, J. Opt. Soc. Am. B [**4**]{}, 1995 (1987). E.S.C. Ching, P.T. Leung, A. Maassen van den Brink, W.M. Suen, S.S. Tong, and K. Young, Rev. Mod. Phys. [**70**]{}, 1545 (1998). U. Fano, Phys. Rev. [**124**]{}, 1866 (1961). C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, [*Atom-Photon Interactions*]{} (Wiley, New York, 1992). O. Svelto, [*Principles of Lasers*]{}, fourth ed. (Springer, Berlin, 1998). It is remarkable as well that the usual gain/loss balance condition for lasing threshold, with an exponential growth at the onset of lasing, is valid even for less conventional laser systems, such as in random lasers \[see, for instance: V. S. Letokhov, Sov. Phys. JETP [**26**]{}, 835 (1968); T. Sh. Misirpashaev and C.W.J. Beenakker, Phys. Rev. A [**57**]{}, 2041 (1998); X. Jiang and C.M. Soukoulis, Phys. Rev. B [**59**]{}, 6159 (1999); A.L. Burin, M.A. Ratner, H. Cao, and S.H. Chang, Phys. Rev. Lett. [**88**]{}, 093904 (2002)\]. B. Piraux, R. Bhatt, and P.L. Knight, Phys. Rev. A [**41**]{}, 6296 (1990). H.M. Lai, P.T. Leung, and K. Young, Phys. Rev. A [**37**]{}, 1597 (1988). M. Lewenstein, J. Zakrzewski, T.W. Mossberg, and J. Mostowski, J. Phys. B: At. Mol. Opt. Phys. 
[**21**]{}, L9 (1988). S. John and J. Wang, Phys. Rev. Lett. [**64**]{}, 2418 (1990). S. John and T. Quang, Phys. Rev. A [**50**]{}, 1764 (1994). A.G. Kofman, G. Kurizki, and B. Sherman, J. Mod. Opt. [**41**]{}, 353 (1994). N. Vats and S. John, Phys. Rev. A [**58**]{}, 4168 (1998). P. Lambropoulos, G.M. Nikolopoulos, T.R. Nielsen, and S. Bay, Rep. Prog. Phys. [**63**]{}, 455 (2000). X.-H. Wang, B.-Y. Gu, R. Wang, and H.-Q. Xu, Phys. Rev. Lett. [**91**]{}, 113904 (2003). T. Petrosky, C.-O. Ting, and S. Garmon, Phys. Rev. Lett. [**94**]{}, 043601 (2005). S. Tanaka, S. Garmon, and T. Petrosky, Phys. Rev. B [**73**]{}, 115340 (2006). B. Gaveau and L.S. Schulman, J. Phys. A: Math. Gen. [**28**]{}, 7359 (1995). P.R. Villeneuve, S. Fan, and J.D. Joannopoulos, Phys. Rev. B [**54**]{}, 7837 (1996). K.J. Vahala, Nature (London) [**424**]{}, 839 (2003). D.K. Armani, T.J. Kippenberg, S.M. Spillane, and K.J. Vahala, Nature (London) [**421**]{}, 925 (2003). T. Asano and S. Noda, Nature (London) [**429**]{}, 6988 (2004). T. Asano, W. Kunishi, B.-S. Song, and S. Noda, Appl. Phys. Lett. [**88**]{}, 151102 (2006). O. Painter, R. K. Lee, A. Yariv, A. Scherer, J. D. O’Brien, P. D. Dapkus, and I. Kim, Science [**284**]{}, 1819 (1999). M. Loncar, T. Yoshie, A. Scherer, P. Gogna, and Y. Qiu, Appl. Phys. Lett. [**81**]{}, 2680 (2002). H.G. Park, S.H. Kim, S.H. Kwon, Y.G. Ju, J.K. Yang, J.H. Baek, S.B. Kim, and Y.H. Lee, Science [**305**]{}, 1444 (2004). H. Altug and J. Vuckovic, Opt. Express [**13**]{}, 8819 (2005). S. Fan, P.R. Villeneuve, J.D. Joannopoulos, and H.A. Haus, Phys. Rev. Lett. [**80**]{}, 960 (1998); S. Fan, P.R. Villeneuve, J.D. Joannopoulos, M.J. Khan, C. Manolatou, and H.A. Haus, Phys. Rev. B [**59**]{}, 15882 (1999). Y. Xu, Y. Li, R.K. Lee, and A. Yariv, Phys. Rev. E [**62**]{}, 7389 (2000). T. Asano, B.S. Song, Y. Tanaka, and S. Noda, Appl. Phys. Lett. [**83**]{}, 407 (2003). E. Waks and J. Vuckovic, Opt. Express [**13**]{}, 5064 (2005). P. Chak, S. Pereira, and J.E. 
Sipe, Phys. Rev. B [**73**]{}, 035105 (2006). M.F. Yanik and S. Fan, Phys. Rev. A [**71**]{}, 013803 (2005). L.-L. Lin, Z.-Y. Li, and B. Lin, Phys. Rev. B [**72**]{}, 165330 (2005). N. Stefanou and A. Modinos, Phys. Rev. B [**57**]{}, 12127 (1998). A. Yariv, Y. Xu, R.K. Lee, and A. Scherer, Opt. Lett. [**24**]{}, 711 (1999). M. Bayindir, B. Temelkuran, and E. Ozbay, Phys. Rev. Lett. [**84**]{}, 2140 (2000). S. Olivier, C. Smith, M. Rattier, H. Benisty, C. Weisbuch, T. Krauss, R. Houdre, and U. Oesterle, Opt. Lett. [**26**]{}, 1019 (2001). Y. Liu, Z. Wang, M. Han, S. Fan, and R. Dutton, Opt. Express [**13**]{}, 4539 (2005). H. Nakazato, M. Namiki, and S. Pascazio, Int. J. Mod. Phys. B [**10**]{}, 247 (1996). P. Facchi and S. Pascazio, [*La Regola d’Oro di Fermi*]{}, in: Quaderni di Fisica Teorica, edited by S. Boffi (Bibliopolis, Napoli, 1999). Note that, by extending the definition of $\Delta(\omega)$ outside the interval $(\omega_1,\omega_2)$, the principal value of the integral in Eq.(\[Omshift\]) can be removed. The value $\lambda_I$ ($\lambda_{II}$) defines the critical value of coupling strength above which a bound mode (discrete eigenvalue of $H_0+H_{int}$) at frequency $\omega<\omega_1$ ($\omega>\omega_2$) appears. M.F. Yanik and S. Fan, Phys. Rev. Lett. [**92**]{}, 083901 (2004). S. Longhi, Phys. Rev. E [**74**]{}, 026602 (2006). S. Longhi, Phys. Rev. Lett. [**97**]{}, 110402 (2006). If gain saturation is accounted for and the dynamics may be derived from a potential (e.g. after adiabatic elimination of polarization and population inversion in the semiclassical laser equations), the onset of laser oscillation is analogous to a second-order phase transition \[see, for instance: V. DeGiorgio and M.O. Scully, Phys. Rev. A [**2**]{}, 1170 (1970); H. Haken, [*Synergetics*]{}, second ed. (Springer-Verlag, Berlin, 1978)\]. 
Denoting by $\mathcal{A}$ the matrix for the linear system (38) of ordinary differential equations, the system is referred to as [*non-normal*]{} whenever $\mathcal{A}$ does not commute with its adjoint $\mathcal{A}^\dag$. One can show that transient energy amplification is possible in an asymptotically-stable non-normal system provided that the largest eigenvalue of $\mathcal{A}+\mathcal{A}^\dag$ is positive (see e.g. [@Farrell96]). Non-Hermiticity is a necessary (but not sufficient) condition for transient energy growth in an asymptotically-stable linear system. L.N. Trefethen, A.E. Trefethen, S.C. Reddy, and T.A. Driscoll, Science [**261**]{}, 578 (1993). B.F. Farrell and P.J. Ioannou, Phys. Rev. Lett. [**72**]{}, 1188 (1994). B. F. Farrell and P.J. Ioannou, J. Atmos. Sci. [**53**]{}, 2025 (1996). F.X. Kärtner, D.M. Zumbühl, and N. Matuschek, Phys. Rev. Lett. [**82**]{}, 4428 (1999). S. Longhi and P. Laporta, Phys. Rev. E [**61**]{}, R989 (2000). W.J. Firth and A.M. Yao, Phys. Rev. Lett. [**95**]{}, 073903 (2005).
--- abstract: 'Mean Field Games (MFG) are games in which each agent assumes that the states of all others are drawn in an i.i.d. manner from a common belief distribution, and optimizes accordingly. The equilibrium concept here is a Mean Field Equilibrium (MFE), and algorithms for learning MFE in dynamic MFGs are unknown in general due to the non-stationary evolution of the belief distribution. Our focus is on an important subclass that possesses a monotonicity property called Strategic Complementarities (MFG-SC). We introduce a natural refinement of the equilibrium concept that we call Trembling-Hand-Perfect MFE (T-MFE), which allows agents to employ a measure of randomization while accounting for the impact of such randomization on their payoffs. We propose a simple algorithm for computing T-MFE under a known model. We introduce both a model-free and a model-based approach to learning T-MFE under unknown transition probabilities, using the trembling-hand idea to enable exploration. We analyze the sample complexity of both algorithms. We also develop a scheme for concurrently sampling the system with a large number of agents that negates the need for a simulator, even though the model is non-stationary. Finally, we empirically evaluate the performance of the proposed algorithms via examples motivated by real-world applications.' author: - 'Kiyeob Lee, Desik Rengarajan, Dileep Kalathil, Srinivas Shakkottai [^1]' bibliography: - 'ref.bib' title: '**Learning Trembling Hand Perfect Mean Field Equilibrium for Dynamic Mean Field Games** ' --- [^1]: Authors are with the Department of Electrical and Computer Engineering, Texas A&M University, Texas, USA. Email of the corresponding author: [dileep.kalathil@tamu.edu]{}
--- abstract: 'In the context of the geometrical analysis of weakly non-Gaussian CMB maps, the 2D differential extrema counts as functions of the excursion set threshold are derived from the full moments expansion of the joint probability distribution of an isotropic random field, its gradient and invariants of the Hessian. Analytic expressions for these counts are given to second order in the non-Gaussian correction, while a Monte Carlo method to compute them to arbitrary order is presented. Matching count statistics to these estimators is illustrated on fiducial non-Gaussian “Planck" data.' author: - 'Dmitri Pogosyan${}^{1}$' - 'Christophe Pichon${}^{2}$' - 'Christophe Gay${}^{2}$' bibliography: - 'dummy.bib' title: Non-Gaussian extrema counts for CMB maps --- Random fields are ubiquitous phenomena in physics, appearing in areas ranging from turbulence to the landscape of string theories. In cosmology, the sky-maps of the polarized Cosmic Microwave Background (CMB) radiation – a focal topic of current research – are a prime example of such 2D random fields. The modern view of the cosmos, developed primarily through statistical analysis of these fields, points to a Universe that is statistically homogeneous and isotropic, with a hierarchy of structures arising from small Gaussian fluctuations of quantum origin. While the Gaussian limit provides the fundamental starting point in the study of random fields [@Adler; @Doroshkevich; @BBKS], non-Gaussian features of the CMB fields are of great interest. Indeed, the CMB inherits a high level of Gaussianity from the initial fluctuations, but small non-Gaussian deviations may provide a unique window into the details of processes in the early Universe. The search for the best methods to analyze non-Gaussian random fields is ongoing. In [@PGP] the general invariant-based formalism for computing topological and geometrical characteristics of non-Gaussian fields was presented. 
The general formula for the Euler characteristic to all orders has been derived; it encompasses the well-known first correction [@Matsubara] and was later confirmed to the next order by [@matsu10]. We now focus on the statistics of the density of extremal points, which follows directly from the formalism of [@PGP]. The goal of this paper is to provide an explicit recipe on how to use this formalism in practice on idealised 2D CMB “Planck"-like data. Extrema counts ============== Extrema counts, especially those of the maxima of the field, have long found application in cosmology [e.g. @BBKS]; however, theoretical developments have been mostly restricted to Gaussian fields. The statistics of extrema counts, as well as of the Euler number, requires the knowledge of the one-point JPDF $P(x,x_i,x_{ij})$ of the field $x$, its first, $x_i$, and second, $x_{ij}$, derivatives [^1]. The extrema density is an intrinsically isotropic statistic given by [@Adler; @Longuet] $$\frac{\partial n_{\rm ext}}{ \partial x} = \int {\rm d}^6 x_{ij} P(x,x_i=0,x_{ij}) |x_{ij}| \,. \label{eq:ext_int}$$ Under the condition of statistical isotropy of the field, the essential form for the JPDF is therefore given in terms of the rotation invariants — $x$ itself, the square of the magnitude of the gradient $q^2\equiv x_1^2+x_2^2$ and the two invariants $J_1\equiv\lambda_{1}+\lambda_{2}$, $J_2\equiv(\lambda_{1}-\lambda_{2})^2$ of the Hessian matrix $x_{ij}$ (where $\lambda_i$ are the eigenvalues of the Hessian). Introducing $\zeta=(x+\gamma J_1)/\sqrt{1-\gamma^2}$ (where the spectral parameter $\gamma=- \langle x J_1 \rangle$ characterizes the shape of the underlying power spectrum) leads to the following JPDF for the Gaussian 2D field $$G_{\rm 2D} = \frac{1}{2 \pi} \exp\left[-\frac{1}{2} \zeta^2 - q^2 - \frac{1}{2} J_1^2 - J_2 \right] \,. \label{eq:2DG}$$ The invariant form for the extrema counts $$\frac{\partial n_{\rm ext}}{\partial x} = \int \!\!\! 
\frac{{{\rm d} J_1} {{\rm d} J_2}}{8\pi^2 \sqrt{1\!-\!\gamma^2}} \exp\left[-\frac{1}{2} \zeta^2 - \frac{1}{2} J_1^2 -J_2\right] \left|J_1^2-J_2 \right| \nonumber$$ then readily recovers the classical results [@Adler; @Longuet; @BBKS] when the limits of integration that define the extrema type are implemented, namely $J_1 \in [-\infty,0]$, $J_2 \in [0, J_1^2] $ for maxima, $J_1 \in [0,\infty]$, $J_2 \in [0, J_1^2] $ for minima and $J_1 \in [-\infty,\infty]$,$J_2 \in [J_1^2, \infty]$ for saddle points. In [@PGP] we have observed that for non-Gaussian JPDF the invariant approach immediately suggests a Gram-Charlier expansion in terms of the orthogonal polynomials defined by the kernel $G_{\rm 2D}$. Since $\zeta$, $q^2$, $J_1$ and $J_2$ are uncorrelated variables in the Gaussian limit, the resulting expansion is $$P_{\rm 2D}(\zeta, q^2, J_1, J_2) = G_{\rm 2D} \left[ 1 + \sum_{n=3}^\infty \sum_{i,j,k,l=0}^{i+2 j+k+2 l=n} \frac{(-1)^{j+l}}{i!\;j!\; k!\; l!} \left\langle \zeta^i {q^2}^j {J_1}^k {J_2}^l \right\rangle_{\rm GC} H_i\left(\zeta\right) L_j\left(q^2\right) H_k\left(J_1\right) L_l\left(J_2\right) \right]\,, \label{eq:2DP_general}$$ where terms are sorted in the order of the field power $n$ and $\sum_{i,j,k,l=0}^{i+2 j+k+2 l=n} $ stands for summation over all combinations of non-negative $i,j,k,l$ such that $i+2j+k+2l$ adds to the order of the expansion term $n$. The Gram-Charlier coefficients, $\left\langle \zeta^i {q^2}^j {J_1}^k {J_2}^l \right\rangle_{\rm GC}\equiv (-1)^{j+l} j! l! \left\langle H_i\left(\zeta\right) L_j\left(q^2\right) H_k\left(J_1\right) L_l\left(J_2\right) \right\rangle_{\rm m}$ that appear in the expansion can be related to the more familiar cumulants of the field and its derivatives (we use $\langle \,\,\,\,\rangle_{\rm m}$ for statistical moments while reserving $\langle \,\,\,\,\rangle$ for statistical cumulants), actually being identical to them for the first three orders $n=3,4,5$. 
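The invariant-form extrema integral above can be cross-checked numerically. Integrating the Gaussian kernel over the saddle domain $J_2>J_1^2$ (with the threshold integral over $x$ done analytically, which contributes $\sqrt{2\pi(1-\gamma^2)}$ and cancels the $\gamma$ dependence) recovers the total saddle density $1/(4\sqrt{3}\pi)$ in units $R_*=1$; a sketch:

```python
import numpy as np
from scipy.integrate import quad

# n_sad = sqrt(2 pi)/(8 pi^2) * int dJ1 e^{-J1^2/2} int_{J1^2}^inf dJ2 e^{-J2} (J2 - J1^2);
# the sqrt(2 pi) factor comes from integrating exp(-zeta^2/2)/sqrt(1-gamma^2) over x.
inner = lambda J1: quad(lambda J2: np.exp(-J2) * (J2 - J1**2), J1**2, np.inf)[0]
outer = quad(lambda J1: np.exp(-0.5 * J1**2) * inner(J1), -np.inf, np.inf)[0]
n_sad = np.sqrt(2 * np.pi) / (8 * np.pi**2) * outer

print(n_sad, 1 / (4 * np.sqrt(3) * np.pi))  # both ~ 0.045944
```

The same integral with the maxima or minima domains reproduces the classical densities quoted below.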
Lookup tables of the relationship between Gram-Charlier cumulants and statistical cumulants can be found at [http://www.iap.fr/users/pichon/Gram/]{}. As an illustration, one non-trivial sixth-order cumulant would be $\left\langle J_1^3 J_2 \zeta \right\rangle {}_{\text{GC}}=\left\langle J_1^3 J_2 \zeta \right\rangle+\left\langle J_1^3\right\rangle \left\langle J_2 \zeta \right\rangle +3 \left\langle J_1 J_2\right\rangle \left\langle J_1^2 \zeta \right\rangle $. It is prudent to stress that the Gram-Charlier series expansion is distinct from perturbative expansions. For instance, while the linear Edgeworth or $f_{\rm NL}$ expansion matches solely the first-order $n=3$ Gram-Charlier coefficients, quadratic terms require knowledge of the Gram-Charlier terms to $n=6$, and the cubic ones to $n=9$. Integrals over $J_1$ and $J_2$ for extremal points can be carried out analytically even for the general expression (\[eq:2DP\_general\]). Different types of critical points can be evaluated separately by restricting the integration domain in the $J_1$-$J_2$ plane to ensure the appropriate signs for the eigenvalues. The effect of the non-Gaussian cubic correction on the total number of extrema of different types is given by $$\begin{aligned} n_{\rm max/min} &=& \frac{1}{8 \sqrt{3} \pi {R_*}^2} \pm \frac{18 \left\langle q^2 J_1 \right\rangle - 5\left\langle J_1^3 \right\rangle + 6 \left\langle J_1 J_2 \right\rangle}{54 \pi \sqrt{2\pi} {R_*}^2} \,, \nonumber \\ n_{\rm sad} &=& \frac{1}{4 \sqrt{3} \pi{R_*}^2} \,,\end{aligned}$$ where we have restored (see note \[sigmas\]) the dimensional scaling with $R_* = \sigma_1/\sigma_2$, the characteristic separation scale between extrema. 
The total number of saddles, as well as of all the extremal points, $n_{\rm max} + n_{\rm min} + n_{\rm sad}$, are preserved in the first order (the latter following for the former, as topological considerations imply $n_{\rm max}-n_{\rm sad}+n_{\rm min}={\rm const}$), but the symmetry between the minima and the maxima is broken. The differential number counts with respect to the excursion threshold $\nu$ are given by $$\begin{aligned} \frac{\partial n_{\rm max/min}}{\partial \nu} &=& \frac{1}{\sqrt{2 \pi}{R_*}^2} \exp\left(-\frac{\nu^2}{2}\right) \left[1 \pm \mathrm{erf}\left(\frac{\gamma \nu}{\sqrt{2(1-\gamma^2)}}\right) \right] K_1(\nu,\gamma) \pm \frac{1}{\sqrt{2 \pi (1-\gamma^2)}{R_*}^2} \exp\left(-\frac{\nu^2}{2(1-\gamma^2)}\right) K_3(\nu,\gamma) \nonumber \label{eq:difnu1} \\ &+& \frac{\sqrt{3}}{\sqrt{2 \pi (3-2\gamma^2)}{R_*}^2} \exp\left(-\frac{3 \nu^2}{6-4 \gamma^2}\right) \left[1 \pm \mathrm{erf}\left(\frac{\gamma \nu}{\sqrt{2(1-\gamma^2)(3-2\gamma^2)}}\right) \right] K_2(\nu,\gamma) , \\ \frac{\partial n_{\rm sad}}{\partial \nu} &=& \frac{2 \sqrt{3}}{\sqrt{2 \pi (3-2\gamma^2)}{R_*}^2} \exp\left(-\frac{3 \nu^2}{6-4 \gamma^2}\right) K_2(\nu,\gamma), \label{eq:difnu2}\end{aligned}$$ where $K_1, K_2, K_3$ are polynomials with coefficients expressed in terms of the cumulants. Here we give explicit expressions for the first non-Gaussian order, while the next order can be found at the above mentioned URL. The term $K_1(\nu,\gamma)$ has a special role determining the Euler number $\chi(\nu)$ via ${\partial \chi}/{\partial \nu} = {\partial}/{\partial \nu} \left(n_{\rm max} + n_{\rm min} - n_{\rm sad}\right) = \sqrt{{2}/{\pi}} \exp(-{\nu^2}/{2}) K_1(\nu,\gamma)$. As such, its full expansion has been given in [@PGP], Eq. (7), and confirmed to the second order in [@matsu10]. 
To the leading non-Gaussian order $$K_1 = \frac{\gamma^2}{8\pi} \left[ H_2(\nu) + \left( \frac{2}{\gamma} \left\langle q^2 J_1 \right\rangle + \frac{1}{\gamma^2} \left\langle x {J_1}^2 \right\rangle - \frac{1}{\gamma^2} \left\langle x J_2 \right\rangle \right) H_1(\nu) - \left( \left\langle x q^2 \right\rangle + \frac{1}{\gamma} \left\langle x^2 J_1 \right\rangle \right) H_3(\nu) + \frac{1}{6} \left\langle x^3 \right\rangle H_5(\nu) \right]\,. \label{eq:2D_K3}$$ Introducing scaled Hermite polynomials ${\cal H}_n^{\pm}(\nu,\sigma)\equiv \sigma^{\pm n} H_n\left(\nu/\sigma\right)$, the polynomial $K_2(\nu,\gamma)$, the only one that determines the distribution of saddle points, can be written as $$\begin{aligned} K_2 =\frac{1}{8\pi \sqrt{3}} \left[\ \vphantom{\frac{1}{6}} 1 \right. - \left( \left\langle x q^2 \right\rangle + \frac{1}{3} \left\langle x {J_1}^2 \right\rangle - \frac{4}{3} \left\langle x J_2 \right\rangle + \frac{2}{3} \gamma \left\langle q^2 J_1 \right\rangle + \frac{2}{9} \gamma \left\langle {J_1}^3 \right\rangle - \frac{2}{3} \gamma \left\langle J_1 J_2 \right\rangle \right) {\cal H}_1^- \left(\nu,\sqrt{1-2/3\gamma^2} \right) \nonumber \\ \left. + \frac{1}{6} \left( \left\langle x^3 \right\rangle + 2 \gamma \left\langle x^2 J_1 \right\rangle + \frac{4}{3} \gamma^2 \left\langle x {J_1}^2 \right\rangle + \frac{2}{3} \gamma^2 \left\langle x J_2 \right\rangle + \frac{8}{27} \gamma^3 \left\langle {J_1}^3 \right\rangle + \frac{4}{9} \gamma^3 \left\langle J_1 J_2 \right\rangle \right) {\cal H}_3^- \left(\nu,\sqrt{1-2/3\gamma^2} \right) \right]. \label{eq:2D_K2}\end{aligned}$$ The remaining term, $K_3(\nu,\gamma)$ is the most complicated one. 
It is expressed as the expansion in ${\cal H}_n^+(\nu,\sqrt{1-\gamma^2})$: $$\begin{aligned} \lefteqn{K_3 =\frac{(1-\gamma^2)}{2 (2 \pi)^{3/2}(3-2\gamma^2)^3} \left[\vphantom{\frac{1}{6}} \gamma (3-2\gamma^2)^3 {\cal H}_1^+ \left(\nu,\sqrt{1-\gamma^2}\right) + \left( \frac{1}{2}\gamma^3 \left(1+\gamma^2-26 \gamma^4 +28 \gamma^6 - 8 \gamma^8 \right) \left\langle x ^3\right\rangle \right.\right.} \nonumber \\ &&\left. -\gamma^4 \left(26-28 \gamma^2+8 \gamma^4\right) \left\langle x^2 J_1 \right\rangle + \gamma \left(1-\gamma^2\right)\left(1+2 \gamma ^2\right) \left(3-2 \gamma ^2\right)^2 \left\langle x q^2 \right \rangle -\gamma \left(24-26 \gamma^2+8 \gamma^4 \right) \left\langle x J_1^2\right \rangle \right. \nonumber \\ &&\left.\vphantom{\frac{1}{6}} +\gamma\left(15-23 \gamma^2 + 8 \gamma^4 \right) \left\langle x J_2\right \rangle + 4 (1-\gamma^2) \left(3 -2 \gamma ^2 \right)^2 \left\langle q^2 J_ 1\right \rangle -\left(10 - 12 \gamma^2 + 4 \gamma^4 \right) \left\langle J_1^3\right \rangle +6 \left(1-\gamma^2\right) \left(2-\gamma^2\right) \left\langle J_1 J_2\right \rangle \right) \nonumber \\ && -\frac{1}{6} \left(\vphantom{\frac{1}{6}} \gamma \left( 27+36 \gamma^2-224 \gamma^4+192 \gamma^6-48 \gamma^8 \right) \left\langle x^3\right \rangle + \left( 108 - 324 \gamma^2+ 216 \gamma^4 - 48\gamma ^6 \right) \left\langle x^2 J_1\right \rangle + 6 \gamma (3 - 2 \gamma^2)^3 \left\langle x q^2 \right \rangle \right. \nonumber \\ &&\left. \left. \vphantom{\frac{1}{6}} -36 \gamma \left\langle x J_1^2\right \rangle -18 \gamma \left\langle x J_2\right \rangle - 8 \gamma^2 \left\langle J_1^3\right \rangle - 12 \gamma^2 \left\langle J_1 J_2\right \rangle \right) {\cal H}_2^+\left(\nu,\sqrt{1-\gamma^2}\right) \right]. \label{eq:2D_K1}\end{aligned}$$ Eqs (\[eq:difnu1\])-(\[eq:difnu2\]) (together with the next order expansion available online) are the main theoretical result of this paper. 
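In the Gaussian limit (all cumulants set to zero) the polynomials reduce to $K_1=\gamma^2 H_2(\nu)/(8\pi)$, $K_2=1/(8\sqrt{3}\pi)$ and $K_3=\gamma\nu(1-\gamma^2)/(2(2\pi)^{3/2})$, and one can verify numerically that Eqs. (\[eq:difnu1\])-(\[eq:difnu2\]) integrate back to the $\gamma$-independent total counts; a sketch in units $R_*=1$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

gamma = 0.6          # spectral parameter; the totals must come out gamma-independent
g2 = 1 - gamma**2

# Gaussian-limit polynomials (probabilists' Hermite H_2(nu) = nu^2 - 1)
K1 = lambda nu: gamma**2 * (nu**2 - 1) / (8 * np.pi)
K2 = lambda nu: 1 / (8 * np.sqrt(3) * np.pi)
K3 = lambda nu: gamma * nu * g2 / (2 * (2 * np.pi)**1.5)

def dn_maxmin(nu, s):  # s = +1 for maxima, -1 for minima
    t1 = (np.exp(-nu**2 / 2) / np.sqrt(2 * np.pi)
          * (1 + s * erf(gamma * nu / np.sqrt(2 * g2))) * K1(nu))
    t2 = s * np.exp(-nu**2 / (2 * g2)) / np.sqrt(2 * np.pi * g2) * K3(nu)
    t3 = (np.sqrt(3) / np.sqrt(2 * np.pi * (3 - 2 * gamma**2))
          * np.exp(-3 * nu**2 / (6 - 4 * gamma**2))
          * (1 + s * erf(gamma * nu / np.sqrt(2 * g2 * (3 - 2 * gamma**2)))) * K2(nu))
    return t1 + t2 + t3

dn_sad = lambda nu: (2 * np.sqrt(3) / np.sqrt(2 * np.pi * (3 - 2 * gamma**2))
                     * np.exp(-3 * nu**2 / (6 - 4 * gamma**2)) * K2(nu))

n_max = quad(lambda v: dn_maxmin(v, +1), -10, 10)[0]
n_min = quad(lambda v: dn_maxmin(v, -1), -10, 10)[0]
n_sad = quad(dn_sad, -10, 10)[0]
print(n_max, n_min, 1 / (8 * np.sqrt(3) * np.pi))  # ~0.022972 each
print(n_sad, 1 / (4 * np.sqrt(3) * np.pi))         # ~0.045944
```

The $s$-dependent terms are odd in $\nu$ and integrate to zero, so maxima and minima counts coincide in the Gaussian limit, as they must.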
Implementation ============== ![image](fig1.pdf){width="45.00000%"} ![image](fig2.pdf){width="45.00000%"} ![image](fig3.pdf){width="45.00000%"} ![image](fig4.pdf){width="45.00000%"} Evaluating these estimators requires computing the cumulants appearing in Eqs. (\[eq:2D\_K3\])-(\[eq:2D\_K1\]). In non-Gaussian models where the field is represented by a functional of a Gaussian field this may be possible directly, while in general, as shown in [@matsu10], such cumulants can be found as weighted marginals of the underlying bispectrum (to third order), trispectrum (to fourth order), [*etc.*]{} On a sphere, the high-order marginals are particularly cumbersome and time-consuming to compute, as they also involve the contractions of $n-j$ Wigner symbols. Here we suggest a different route, based on the assumption that scientists interested in fitting extrema counts to non-Gaussian maps are typically in a position to generate realizations of such maps. In that case, it becomes relatively straightforward to draw samples of such maps and estimate the corresponding cumulants. The [HEALPix]{} [@hivon05] library in fact provides a direct estimate of the derivatives of such maps up to second order, which is all that is required to compute the cumulants of the JPDF. As an illustration, let us generate sets of parameterized non-Gaussian maps using the package [sky-ng-sim]{} [@Rocha] of [HEALPix]{}. In this so-called harmonic model, the PDF of the pixel temperature $T$ is given by $\exp(-T^2/2 \sigma_0^2) \left| \sum_{i=0}^n \alpha_i C_i H_i(T/\sigma_0)\right|^2$, where $C_i$ are normalization constants. In this paper, we use [nside]{}=2048, $\ell_{\max}=4096$, $n=2$, $\sigma_0=1$, $\alpha_{0}=0$ and vary $\alpha_1$ and $\alpha_2$. We also consider the second option of [sky-ng-sim]{}, which produces a non-Gaussian field as an even power $\beta$ of a unit-variance, zero-mean Gaussian field. 
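A pixel PDF of this form can be sanity-checked and sampled directly. The sketch below assumes probabilists' Hermite polynomials and $C_i=(i!\sqrt{2\pi})^{-1/2}$, so that the PDF integrates to $\sum_i\alpha_i^2$; these conventions are an assumption for illustration, not necessarily those of [@Rocha]:

```python
import math
import numpy as np
from scipy.integrate import quad

def He(i, x):  # probabilists' Hermite polynomials He_0, He_1, He_2
    return [np.ones_like(x), x, x**2 - 1][i]

def pdf(T, alpha, s0=1.0):
    x = T / s0
    amp = sum(a * He(i, x) / np.sqrt(math.factorial(i) * np.sqrt(2 * np.pi))
              for i, a in enumerate(alpha))
    return np.exp(-x**2 / 2) * amp**2 / s0

alpha = (0.0, 0.9, np.sqrt(1 - 0.9**2))    # alpha_0 = 0 as in the text; sum alpha_i^2 = 1
norm = quad(lambda T: pdf(T, alpha), -12, 12)[0]
print(norm)                                 # ~1 under the assumed C_i

# Rejection sampling of pixel temperatures from this (non-Gaussian) PDF.
rng = np.random.default_rng(1)
env = 1.05 * pdf(np.linspace(-8, 8, 4001), alpha).max()   # envelope height
T = rng.uniform(-8, 8, 400000)
samp = T[rng.uniform(0, env, T.size) < pdf(T, alpha)]
print(samp.size, np.mean(samp**3))          # clearly skewed: <T^3> != 0
```

Mixing a nonzero $\alpha_1$ and $\alpha_2$ produces an odd part in the PDF, hence a nonzero third moment of the sampled temperatures.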
For each set of maps, we compute the derivatives and arithmetically average the corresponding cumulants, using a code, [map2cum]{}, relying on the [HEALPix]{} routine [alm2map\_der]{}. The invariant variables $J_1$ and $J_2$ on a sphere are defined via the mixed tensor of covariant derivatives $J_1 = {x_{;i}}^{;i}$ and $J_2 = \left| {x_{;i}}^{;j} \right|$. The differential counts are then evaluated for a range of thresholds, $\nu\in[-5,5]$. For each of these maps, the number of extrema is computed by the procedure [map2ext]{}, which implements the following algorithm: for every pixel a segment of a quadratic surface is fit in the tangent plane based on the temperature values at the pixel of origin and its [HEALPix]{} neighbours. The position of the extremum of this quadratic, its height and its Hessian are computed. The extremum is counted into the tally of the type determined by its Hessian if its position falls within the original pixel. Several additional checks are performed to preclude registering extrema in the neighbouring pixels and to minimize missing extrema due to jumps in the fit parameters as the region shifts to the next pixel. Masks are treated by not considering pixels next to the mask boundary. Pixel-pixel noise covariance can be included while doing the local fit. On noise-free maps the procedure performs with better than 1% accuracy when the map is smoothed with a Gaussian filter of FWHM exceeding 6 pixels. Both [map2cum]{} and [map2ext]{} are available upon request. Figure \[fig:mainfig\] illustrates the very good agreement between the theoretical expectation for the differential number counts and the measured ones, for both the harmonic and the power-law models. An alternative numerical procedure, which is likely to be more practical for expansions beyond the fourth order, was also successfully explored for 2D topological invariants. Starting from Eq. 
(\[eq:2DP\_general\]), we re-express both the polynomials in $J_1,J_2,\zeta$, and $q^2$ and $G_{\rm 2D}$ in terms of the six field variables, $(x,x_i,x_{ij})$. We then construct formally the marginal $G_{\nu}(\mathbf{x}=(x_{11},x_{12},x_{22})| x=\nu, x_1=x_2=0)$, where the latter condition corresponds to imposing that we are seeking extrema of the field. It becomes straightforward to draw large sets of 3 random numbers distributed according to $G_{\nu}$. For each triplet, $\mathbf{x}$, and a given numerical set of cumulants, we then compute the argument, ${\cal I}(\mathbf{x})$, of the square bracket in Eq. (\[eq:2DP\_general\]) (up to some given order), together with the two eigenvalues of the Hessian. For maxima (resp. minima, resp. saddle points), we replace ${\cal I}$ by 0 if the two eigenvalues are not negative (resp. positive, resp. of different sign). The sum over all triplets yields a Monte Carlo estimate of $\partial n_{\rm ext}/\partial \nu$. The accuracy of the estimate depends on the extent of rejection while applying the extremal condition. Note in closing that all the presented analysis generalizes straightforwardly to 3D (notably the Monte Carlo method), as shown in [@GPP], to describe the large-scale distribution of matter. Indeed, in this context, the gravitational instability that nonlinearly maps the initial Gaussian inhomogeneities in matter density into the LSS induces strong non-Gaussian features, culminating in the formation of collapsed, self-gravitating objects such as galaxies and clusters of galaxies.\ \ [*Acknowledgments:*]{} we warmly thank E. Hivon for his help. CP and DP thank the department of physics, Oxford, for hospitality during the completion of this work. The Gram-Charlier to cumulants lookup table is available at [http://www.iap.fr/users/pichon/Gram/]{}, together with the second order extrema counts, and the third order genus given by $K_1$. 
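In the Gaussian limit (where the square bracket ${\cal I}=1$) the draw-and-classify recipe described above can be sketched as follows, here marginalized over the threshold for brevity. The covariances are those fixed by the normalization of note \[sigmas\] ($\langle x^2\rangle=\langle J_1^2\rangle=\langle J_2\rangle=1$, $\langle x J_1\rangle=-\gamma$), the gradient condition drops out (the gradient is independent of $x$ and $x_{ij}$ for a Gaussian field), and the weight $|x_{ij}|$ of Eq. (\[eq:ext\_int\]) is applied explicitly; the weighted fractions recover the 1:1:2 ratio of maxima, minima and saddles quoted earlier:

```python
import numpy as np

rng = np.random.default_rng(0)
N, gamma = 10**6, 0.5   # gamma is arbitrary here: total fractions do not depend on it

# Field value, trace J1 (correlated with x), and the two remaining Hessian
# degrees of freedom: d = x11 - x22 with <d^2> = 1/2, and x12 with <x12^2> = 1/8.
x = rng.normal(size=N)
J1 = -gamma * x + np.sqrt(1 - gamma**2) * rng.normal(size=N)
d = rng.normal(scale=np.sqrt(1 / 2), size=N)
x12 = rng.normal(scale=np.sqrt(1 / 8), size=N)

det = (J1**2 - d**2) / 4 - x12**2     # det x_ij = x11*x22 - x12^2
w = np.abs(det)                       # the |det| weight of the extrema integral

frac = lambda mask: (w * mask).sum() / w.sum()
f_max = frac((det > 0) & (J1 < 0))
f_min = frac((det > 0) & (J1 > 0))
f_sad = frac(det < 0)
print(f_max, f_min, f_sad)            # ~0.25, ~0.25, ~0.50
```

Non-Gaussian corrections would enter by multiplying the weight by ${\cal I}(\mathbf{x})$ evaluated to the desired order.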
All codes to compute the cumulants of given fields and the extrema on the [HEALPix]{} pixelisation are available upon request. [^1]: \[sigmas\]We consider the field and its derivatives to be normalized by their corresponding variances $\sigma_0^2=\left\langle x^2 \right\rangle, \sigma_1^2 = \left\langle (\nabla x)^2 \right\rangle, \sigma_2^2 = \left\langle (\Delta x)^2 \right\rangle $. This implies that in our dimensionless units $\langle x^2 \rangle=\langle q^2 \rangle=\langle J_1^2 \rangle=\langle J_2 \rangle=1$.
--- abstract: 'A gravity dual of a superconductor at finite temperature has been recently proposed. We present the vortex configuration of this model and study its properties. In particular, we calculate the free energy as a function of an external magnetic field, the magnetization and the superconducting density. We also find the two critical magnetic fields that define the region in which the vortex configurations are energetically favorable.' author: - Marc Montull - Alex Pomarol - 'Pedro J. Silva' title: The Holographic Superconductor Vortex --- Introduction ============ The Gauge/Gravity duality, which relates strongly interacting gauge theories to theories of gravity in higher dimensions, has opened a new window to study many different strongly interacting systems. The applicability of this approach is vast, ranging from particle physics to plasma and nuclear physics. In Ref. [@Hartnoll:2008vx] a model for a dual description of a superconductor was proposed. The model was shown to have a critical temperature $T_c$ under which the system goes into a superconducting phase. The properties of this phase have been thoroughly studied [@Hartnoll:2009sz], showing a resemblance with those of a Type II superconductor. In spite of this, Abrikosov vortices, known to occur in Type II superconductors, have not yet been obtained. The purpose of this letter is to show that in this type of gravity duals vortex solutions indeed exist and can be energetically favorable in the presence of external magnetic fields. Due to the nonlinear nature of these configurations, we will have to rely on numerical methods. Among other physical properties, we will calculate the free energy and the range of the magnetic field $B_{c\, 1}\leq B \leq B_{c\, 2}$ at which the superconductor is in the intermediate phase (Shubnikov phase) characterized by vortex configurations. Further aspects of these solutions will be presented elsewhere. 
The Model ========= The physical system to study is a conformal strongly coupled superconductor in 3D at finite temperature and charge density. Its gravitational dual theory [@Hartnoll:2008vx] is an asymptotically AdS-Schwarzschild space-time in 4D. The gravitational degrees of freedom are coupled to a U(1) gauge field $A_\mu$ and a complex scalar $\Psi$. The action that summarizes the above model is given by $$S=\int d^4x\, \sqrt{-G}\left\{\frac{1}{16\pi G_N}\left(R+\frac{6}{L^2}\right)-\frac{1}{g^2}\,{\cal L}\right\}\, ,\qquad {\cal L}=\frac{1}{4}F_{\mu\nu}^2+|D_\mu \Psi|^2+m^2|\Psi|^2\, . \label{action}$$ $G_N$ is the 4D gravitational Newton constant, the cosmological constant $\Lambda$ defines the asymptotic AdS radius $L$ via the relation $\Lambda=-{3/L^2}$ and $D_\mu=\partial_\mu-iA_\mu$. We use the convention where the metric $G$ has signature $(-,+,+,+)$, with coordinates $(t,z,r,\phi)$ where $t$ is time, $z$ is the holographic direction such that the AdS-boundary occurs at $z=0$, and $(r,\phi)$ are polar coordinates parameterizing the remaining 2D plane. For the scalar mass $m^2$ we will focus on two possible values: $m^2=-2,0$. Other values are expected to give similar behaviors [@Horowitz:2008bn]. We will work in the so-called probe approximation, where the gravity sector is effectively decoupled from the matter sector and therefore there is no back-reaction on the background metric due to $\mathcal{L}$. This regime is achieved in the limit of large $g$, when compared to the gravitational strength. In this limit we can, without loss of generality, fix $g=1$. In our conventions, the background AdS-Schwarzschild black hole (BH) metric is given by $$ds^2=\frac{L^2}{z^2}\left(-f(z)dt^2+dr^2+r^2d\phi^2\right)+\frac{L^2}{z^2f(z)}dz^2\, ,$$ where $f(z)=1-(z/z_h)^3$. As we are considering the theory at finite temperature, we have to take the Euclidean regime with compact time $it\in [0,1/T]$ where $T=3/(4\pi z_h)$. Therefore, the holographic coordinate runs from the AdS-boundary at $z=0$ to the BH horizon at $z=z_h$. 
Notice that we work with a planar BH with energy per unit area $\varepsilon=L^2/(8\pi G_N z_h^3)$. Then, the AdS/CFT duality tells us that the above are precisely the temperature and energy density of the dual superconductor. The gauge field has the usual AdS-boundary behavior $$A_\nu\simeq a_\nu+J_\nu\, z\, ,$$ where $a_\nu=(\mu,a_i)$ corresponds to the potentials on the dual CFT, while $J_\nu=(-\rho,J_i)$ plays the role of the conjugated currents. We will consider the case in which the charge density $\rho$ is fixed constant. The other potentials $a_i$ are related to turning on either electromagnetic fields or sample velocities in the dual CFT, depending on the interpretation we give to the AdS/CFT duality. The first interpretation is what we will use in this article, while the second one is relevant for superfluids [^1] [@Herzog:2008he]. Similarly, the scalar field has the following AdS-boundary behavior $$|\Psi|\simeq a\, z^{3-\Delta}+b\, z^{\Delta}\, , \label{psiads}$$ where $\Delta=2,3$ (for $m^2=-2,0$) corresponds to the dimension of the dual operator ${\cal O}_\Delta$ responsible for the U(1) breaking, and $b$ determines the vacuum expectation value of this operator. The value of $a$ corresponds to an explicit breaking of the U(1) symmetry and will therefore be set to zero [^2]. Having fixed $m^2$, the only parameters of the model are the scales $T$ and $\sqrt{\rho}$. It has been reported in Refs. [@Hartnoll:2008vx; @Horowitz:2008bn] that for $\rho\not=0$ the system undergoes a phase transition at $$T_c\simeq 0.12\, \sqrt{\rho}\ \ {\rm for}\ \ m^2=-2\, ,\qquad T_c\simeq 0.09\, \sqrt{\rho}\ \ {\rm for}\ \ m^2=0\, ,$$ where the two phases correspond to a charged BH and a charged BH with non-trivial scalar hair. At $T<T_c$, the system is in the hairy phase, corresponding to a superconducting phase. In Refs. [@Hartnoll:2008kx; @Albash:2008eh] the model was also studied in the presence of an external magnetic field $B$, using a dyonic BH with a probe scalar field. The result was a bounded superconducting region, or drop, that squeezes to zero size as we increase $B$. 
The above suggested that we are dealing with a Type II superconductor. If this is the case, Abrikosov vortex configurations should be present in this model. We stress that, as is usual in this approach, we are treating the electromagnetic field of the 3D dual theory as a nondynamical background. This corresponds to taking the 3D electric charge $e\rightarrow 0$, while keeping constant $B$ and $\rho$. The vortex solution =================== We use the Ansatz given by $$\Psi=\psi(r,z)\, e^{in\phi}\ ,\ \ A_0=A_0(r,z)\ , \ \ A_\phi=A_\phi(r,z)\, ,$$ with all other fields set to zero. This Ansatz preserves global U(1) transformations when combined with a rotation in the 2D plane. The fields $A_r,A_z$ can be consistently set to zero since our Ansatz fulfills $\partial_r Arg[\Psi]=\partial_z Arg[\Psi]=0$. The winding number $n\in Z$ determines different topological solutions. With the above Ansatz we obtain from Eq. (\[action\]) the following equations of motion: $$\begin{aligned} && z^2\partial_z \left(\frac{f}{z^2} \, \partial_z \psi \right)+\frac{1}{r} \partial_r\left(r\partial_r \psi \right) \nonumber \\ && \hspace{2,2cm}+\left(\frac{A^2_0}{f}-\frac{(A_\phi-n)^2}{r^2}-{m^2\over z^2}\right) \psi=0\, ,\nonumber \\ && \partial_z \left( f\partial_z A_\phi \right) + r \, \partial_r \left( \frac{1}{r} \partial_r A_\phi \right) - \frac{2\psi^2 }{z^2} (A_\phi-n) = 0\, ,\nonumber\\ && f\partial^2_z A_0 + \frac{1}{r} \partial_r \left( r \partial_r A_0 \right ) - \frac{2\psi^2}{z^2} A_0= 0\, . \label{pdes}\end{aligned}$$ In order to describe a dual superconductor at fixed $\rho$ in the presence of an external magnetic field $B$, the AdS/CFT correspondence tells us that we must impose the AdS-boundary conditions $$\psi|_{z=0}=0\ ,\ \ \partial_z A_0|_{z=0}=-\rho\ , \ \ A_\phi|_{z=0}=\frac{1}{2}r^2B\, , \label{adsbc}$$ for the case $m^2=0$, while for $m^2=-2$ the first condition must be $\partial_z\psi|_{z=0}=0$ (this is equivalent to setting $a=0$ in Eq. (\[psiads\])). 
At the horizon $z=z_h$ we require the field configurations to be regular; in particular we set $A_0|_{z=z_h}=0$ as usual, to have a well-defined Euclidean continuation. Similar reasoning at $r=0$ implies that for $n\not=0$ $$\psi|_{r=0}=0\ ,\ \ \partial_r A_0|_{r=0}=0\ , \ \ A_\phi|_{r=0}=0\, ,$$ while for $n=0$, $\partial_r \psi|_{r=0}=0$. We will be considering a 3D superconductor of radius $R$ that we will take to be much bigger than the vortex radius. This is implemented by setting a nonzero $\rho$ extending from $r=0$ to $r=R$. The 2D system of the three partial differential equations of Eq. (\[pdes\]) is nonlinear, and therefore must be solved numerically. For this purpose we have used the COMSOL 3.4 package [@comsol]. In our numerical studies we have chosen $$R=\frac{50}{\sqrt{\rho}}\ ,\ \ \ \ T= 0.065\, \sqrt{\rho}\, . \label{input}$$ This corresponds to $$T\simeq 0.74\ (0.54)\, T_c\, ,$$ for the case of $m^2=0\ (-2)$. In Fig. 1 we show the order parameter $\langle {\cal O}_\Delta\rangle=\frac{1}{\Delta}z^{1-\Delta}\partial_z\psi|_{z=0}$ of the dual superconductor. We can see that it goes to zero at the origin, where the vortex is placed. For the value of the magnetic field, we have chosen $$B_n=\frac{2n}{R^2}\, , \label{bn}$$ corresponding to the value at which the magnetic flux crossing a surface of constant $z$, $\Phi=\int d\phi \int^R_0 r\, dr\, B$, equals $2\pi n$. This is the quantized flux going through the $n$-vortex of the dual superconductor. ![*Order parameter $\langle {\cal O}_\Delta \rangle$ for the $n=1$ (solid) and $n=2$ (dashed) vortex configuration. The lower (upper) curves correspond to the case $m^2=0\ (-2)$. Presented in units of $\sqrt{\rho}=1$.*](O.eps){width="7.5cm"} Free energy, magnetization and critical magnetic fields ======================================================= We are interested in determining the free energy of the superconductor configurations with $n=0,1,2$, in order to know which one is energetically favorable as we vary $B$. 
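As a quick numerical sanity check of the flux-quantization condition of Eq. (\[bn\]): for a constant field $B_n=2n/R^2$, the flux $\Phi=\int d\phi\int_0^R r\,dr\,B$ through the disk indeed equals $2\pi n$. A minimal sketch (the quadrature scheme and parameter values are illustrative only):

```python
import math

def flux(B, R, steps=100000):
    """Phi = ∫dphi ∫_0^R r dr B for constant B, by midpoint quadrature in r."""
    dr = R / steps
    radial = sum((i + 0.5) * dr * dr for i in range(steps))  # ∫_0^R r dr = R^2/2
    return 2 * math.pi * B * radial

R = 50.0  # superconductor radius in units of 1/sqrt(rho), as in Eq. (input)
for n in (1, 2, 3):
    B_n = 2 * n / R**2          # candidate quantized field, Eq. (bn)
    print(n, flux(B_n, R) / (2 * math.pi))  # -> n, up to rounding
```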
By the AdS/CFT correspondence, the free energy $F$ of the superconductor is given by $$\frac{F[T,B,\rho]}{T}= S_E+\frac{\pi}{T} \left. \int^R_0 dr\, r\, A_0\partial_z A_0 \right|_{z=0}\, , \label{freee}$$ where the right-hand side is evaluated on-shell in the 4D theory with the boundary conditions given in Eq. (\[adsbc\]). The second term of Eq. (\[freee\]) has been added to guarantee a well-defined variational principle when working at fixed $\partial_z A_0$ on the AdS-boundary. Since, as we will see, the phase transition to vortex configurations occurs at small values of $B$, we can treat the magnetic field as a small perturbation and separate the solution as $$\psi\rightarrow \psi+\delta \psi\ ,\ \ A_0\rightarrow A_0+\delta A_0\ ,\ \ A_\phi\rightarrow A_\phi+\delta A_\phi\, ,$$ where the unperturbed solution $(\psi,A_0,A_\phi)$ corresponds to that at zero external magnetic field, [*i.e.*]{}, $A_\phi|_{z=0}=0$, while the perturbation $(\delta \psi, \delta A_0, \delta A_\phi)$ must fulfill $$\delta A_\phi|_{z=0}=\frac{1}{2}r^2B\ ,\ \ \partial_z \delta A_0|_{z=0}=0\ ,\ \ \delta\psi|_{z=0}=0\, ,$$ for $m^2=0$, and $\partial_z\delta \psi|_{z=0}=0$ for $m^2=-2$. By integrating by parts, the free energy of the $n$-vortex configuration can be written, up to $B^2$ terms, as $$F_n(B)\simeq F_n(0)-\alpha_n B+\frac{1}{2}\beta_n B^2\, , \label{freeapprox}$$ where we have defined $$\begin{aligned} F_n(0)&=& 2\pi \int^R_0 dr \int^{z_h}_0 dz \, \frac{r}{z^2} \left( \frac{A_0^2}{f}-\frac{A_\phi (A_\phi-n)}{r^2} \right) \psi^2 \nonumber\\ &-&\pi \left. \int^R_0 dr\, r\, A_0 \, \partial_z A_0 \right |_{z=0}\, ,\nonumber\\ \alpha_n&=& \frac{2\pi}{B} \left. \int^R_0 \frac{dr }{r} \delta A_\phi \partial_z A_{\phi} \right |_{z=0} \, ,\nonumber\\ \beta_n&=&- \frac{2\pi}{B^2} \left. \int^R_0 \frac{dr }{r} \delta A_\phi \partial_z \delta A_{\phi} \right |_{z=0}\, . \label{fab}\end{aligned}$$ Notice that the positive-definite quantities $\alpha_n$ and $\beta_n$ do not depend on $B$, since $ \delta A_\phi \propto \delta A_\phi|_{z=0}\propto B$. Eq. 
(\[freeapprox\]) has a simple interpretation in terms of the magnetization $M$ of the superconductor. Using $M=-\partial F/\partial B$, we can write $$F_n(B)=F_n(0)-\int^B_0 M_n dB\, , \label{newform}$$ where the magnetization of the $n$-vortex configuration $M_n$ in the $z$-component is given by $$M_n=\frac{1}{2}\int d\phi\, dr\, r (\vec r\times \vec J)_z=\pi\int dr\, r J_\phi\, . \label{mn}$$ From the AdS/CFT dictionary, we have that $$\langle J_\phi\rangle=-\left.\frac{\delta F}{\delta A^\phi|_{z=0}}= \partial_z A_\phi+\partial_z \delta A_\phi\right|_{z=0}\, ,$$ which together with Eq. (\[mn\]) leads to our final expression for the magnetization $$M_n=\alpha_n-\beta_n B\, .$$ Inserting this expression into Eq. (\[newform\]), we recover the free energy of Eq. (\[freeapprox\]). For the free energy at $B=0$ we obtain $$F_n(0)\simeq F_0(0)+0.9(1.5) n^2\ln [R\rho^{1/2}]\sqrt{\rho}+c_n\, ,$$ where $c_0=0$, $c_1\simeq 1.2(3.7)\sqrt{\rho}$, $c_2\simeq 0.3(4)\sqrt{\rho}$ and $$F_0(0)\simeq 5 (4) R^2\rho\sqrt{\rho}\, ,$$ for the case $m^2=0(-2)$. This shows that, as expected, the vortex configurations have a larger energy at $B=0$ than the $n=0$ solution. Note that $F_0(0)$ grows with the volume of the superconductor ($\propto R^2$), although the difference $F_{1,2}(0)-F_0(0)$ does not, being only logarithmically sensitive to $R$ for $R\rightarrow \infty$, as expected for 3D vortices in the absence of electromagnetic fields. For the magnetization we find $$\alpha_n\simeq 0.4(0.7)\, n R^2\sqrt{\rho}\ , \ \ \ \beta_n\simeq 0.05(0.09) R^4\sqrt{\rho}\, .$$ From Eq. (\[freeapprox\]) it is clear that there is a critical value of $B$ at which the difference between the free energies $F_1(B)-F_0(B)$ vanishes. This value is usually referred to as $B_{c\, 1}$ and marks the beginning of the mixed phase, where the magnetic field starts to penetrate the superconductor. 
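The consistency between Eqs. (\[freeapprox\]) and (\[newform\]) is easy to check numerically: integrating the linear magnetization $M_n=\alpha_n-\beta_n B$ reproduces the quadratic free-energy expansion $F_n(0)-\alpha_n B+\frac{1}{2}\beta_n B^2$. A sketch with illustrative (not fitted) numbers:

```python
def magnetization(B, alpha, beta):
    # M_n = alpha_n - beta_n * B, the linear response found in the text
    return alpha - beta * B

def free_energy(B, F0, alpha, beta, steps=10000):
    # F_n(B) = F_n(0) - ∫_0^B M_n dB', Eq. (newform), by midpoint quadrature
    dB = B / steps
    integral = sum(magnetization((i + 0.5) * dB, alpha, beta) * dB
                   for i in range(steps))
    return F0 - integral

# illustrative numbers only, not taken from the holographic solution
F0, alpha, beta, B = 10.0, 0.4, 0.05, 2.0
closed_form = F0 - alpha * B + 0.5 * beta * B**2   # Eq. (freeapprox)
print(abs(free_energy(B, F0, alpha, beta) - closed_form))  # -> ~0
```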
For the case of $m^2=0$ we have $${F_1(B)-F_0(B)\over \sqrt{\rho}}\simeq 0.9\ln [R\rho^{1/2}]+1.2-0.8\frac{B}{B_{1}}\, ,$$ which for $R=50/\sqrt{\rho}$ vanishes at $$B_{c\, 1}\simeq 6 B_{1}\, , \label{bc1}$$ where $B_1$ is defined in Eq. (\[bn\]). For $m^2=-2$ we get similar values, $B_{c\, 1}\simeq 7 B_1$. For magnetic fields above $B_{c\, 1}$, the vortex configuration is preferred. Notice that for $R\rightarrow\infty$, we have $B_1\sim 1/R^2\rightarrow 0$ and therefore $B_{c\, 1}\rightarrow 0$, indicating that the non-vortex solution is never favorable at any $B\not=0$. For the configuration with $n=2$, we find that its free energy is smaller than that for $n=0,1$ if $B\gtrsim 10(14) B_1$ for $m^2=0(-2)$. At such high magnetic fields, however, we expect the free energy of a solution with two $n=1$ vortices to be energetically more favorable, as happens in Type II superconductors. Indeed, for two vortices sufficiently separated we expect $$\begin{aligned} F(B)&\simeq& F_0(0)+2\left[(F_1(0)-F_0(0))-\alpha_1B\right]\nonumber\\ &+& E_{\rm int}+\beta_1 B^2\, , \label{2v}\end{aligned}$$ where $E_{\rm int}$ is the interaction energy between the two vortices. Therefore the difference between the free energy of two $n=1$ vortices and one $n=2$ vortex goes as $\Delta F\simeq E_{\rm int}-1.8(3)\ln [R\rho^{1/2}]\sqrt{\rho}$ for $m^2=0(-2)$. As a consequence, a configuration with two $n=1$ vortices will be preferred for $E_{\rm int}<1.8(3)\ln [R\rho^{1/2}]\sqrt{\rho}$, which is expected for a large superconductor. On the other hand, as $B$ increases from $B_{c\, 1}$, a configuration with more and more vortices is expected to be favorable, until we reach a certain critical value $B_{c\, 2}$ at which there is another phase transition; for $B>B_{c\, 2}$ the normal phase is preferred. We estimate this value by the magnetic field at which the superconducting region of the $n=0,1$ configurations shrinks to zero size. We find $B_{c\, 2}\simeq 3(5)\rho$ for $m^2=0(-2)$. In Fig. 
2 we plot the values of the free energy as a function of $B$ for the configurations $n=0,1,2$ from the exact numerical solutions. We can see that the critical magnetic fields at which the lines cross are similar to the approximate ones given above. ![*Free energy for the $m^2=0$ case as a function of the external magnetic field for the $n=0$ (solid), $n=1$ (dashed) and $n=2$ (dotted) vortex configuration. Presented in units of $\sqrt{\rho}=1$.*](EB.eps){width="7.5cm"} Finally, we calculate the “superconducting density" $n_s(r)$ defined as $$n_s(r)=\langle J_\phi J^\phi\rangle=\frac{\delta F}{\delta A^{2}_{\phi}|_{z=0}}=-\left. \frac{\partial_z \delta A_\phi}{\delta A_\phi}\right|_{z=0}\, ,$$ where in the last equality we have used Eq. (\[freeapprox\]). In Fig. 3 we show $n_s(r)$ for the different configurations. We notice that the vortex configuration fulfills $\langle J_\phi\rangle= -n_s(r)(\delta A_\phi|_{z=0}-n)$, as expected from a spontaneously broken U(1) symmetry. For a non-vortex configuration the superconducting density is constant, $n_s(r)\simeq 0.28(0.48) \sqrt{\rho}$ for $m^2=0(-2)$. This determines the penetration length $\lambda={1}/{(e \sqrt{n_s})}$, where $e$ is the electric charge of the dual superconductor. ![*Superconducting density $n_s(r)$ for the $n=1$ (solid) and $n=2$ (dashed) vortex configuration. The lower (upper) curves correspond to the case $m^2=0\ (-2)$. Presented in units of $\sqrt{\rho}=1$.*](ns.eps){width="7.5cm"} [**Note Added:**]{} While finishing this paper, we learned of Ref. [@Albash:2009ix], which has also studied the vortex solution in holographic superconductors. [**Acknowledgments:**]{} We would like to thank Alberto Salvio, Massimo Mannarelli and Alvar Sanchez for discussions. The work of AP was partly supported by the Research Projects CICYT-FEDER-FPA2005-02211, SGR2005-00916, UniverseNet (MRTN-CT-2006-035863), and AP2006-03102. 
The work of PJS was partly supported by the Research Projects CICYT-FEDER-FPA2005-02211 and FIS2006-02842, and by CSIC under the I3P program. [99]{} S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, Phys. Rev. Lett.  [**101**]{} (2008) 031601. For a review see, for example, S. A. Hartnoll, arXiv:0903.3246 \[hep-th\]; C. P. Herzog, arXiv:0904.1975 \[hep-th\]. G. T. Horowitz and M. M. Roberts, Phys. Rev.  D [**78**]{} (2008) 126008. C. P. Herzog, P. K. Kovtun and D. T. Son, arXiv:0809.4870 \[hep-th\]; P. Basu, A. Mukherjee and H. H. Shieh, arXiv:0809.4494 \[hep-th\]. S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, JHEP [**0812**]{} (2008) 015. T. Albash and C. V. Johnson, JHEP [**0809**]{} (2008) 121; M. Ammon, J. Erdmenger, M. Kaminski and P. Kerner, arXiv:0810.2316 \[hep-th\], arXiv:0903.1864 \[hep-th\]. See http://www.comsol.com. T. Albash and C. V. Johnson, arXiv:0906.0519 \[hep-th\]; arXiv:0906.1795 \[hep-th\]. [^1]: In fact, the vortex solution we present in this article can be identified with vortex configurations in a superfluid, once the appropriate reinterpretations are made. [^2]: For the case $m^2=-2$ there is the possibility of having $b=0$ and $a\not=0$, corresponding to a dual CFT operator of dimension one [@Hartnoll:2009sz].
--- abstract: | All standard Artificial Intelligence (AI) planners to date can only handle a single objective, and the only way for them to take into account multiple objectives is by aggregation of the objectives. Furthermore, and in deep contrast with the single-objective case, there exist no benchmark problems on which to test algorithms for multi-objective planning. Divide-and-Evolve ([[DaE$_{\text{YAHSP}}$]{}]{}) is an evolutionary planner that won the (single-objective) deterministic temporal satisficing track in the last International Planning Competition. Even though it intensively uses the classical (and hence single-objective) planner YAHSP ([*Yet Another Heuristic Search Planner*]{}), it is possible to turn [[DaE$_{\text{YAHSP}}$]{}]{} into a multi-objective evolutionary planner. A tunable benchmark suite for multi-objective planning is first proposed, and the performances of several variants of multi-objective [[DaE$_{\text{YAHSP}}$]{}]{} are compared on different instances of this benchmark, hopefully paving the road to further multi-objective competitions in AI planning.[^1] author: - 'M. R. Khouadjia' - 'M. Schoenauer' - 'V. Vidal' - 'J. Dréo' - 'P. Savéant' bibliography: - 'emob.bib' title: | Multi-Objective AI Planning:\ Evaluating [[DaE$_{\text{YAHSP}}$]{}]{} on a Tunable Benchmark --- Introduction ============ An AI Planning problem (see e.g. [@AIplanningBook2004]) is defined by a set of predicates, a set of actions, an initial state and a goal state. A state is a set of non-exclusive instantiated predicates, or (Boolean) atoms. An action is defined by a set of [*pre-conditions*]{} and a set of [*effects*]{}: the action can be executed only if all pre-conditions are true in the current state, and after an action has been executed, its effects modify the state: the system enters a new state. A plan in AI Planning is a sequence of actions that transforms the initial state into the goal state. 
The goal of AI Planning is to find a plan that minimizes some quantity related to the actions: the number of actions, the sum of action costs in case actions have different costs, or the makespan in the case of temporal planning, when actions have a duration and can possibly be executed in parallel. All these problems are PSPACE-complete. A simple planning problem in the domain of logistics is given in Figure \[fig.instance\]: the problem involves cities, passengers, and planes. Passengers can be transported from one city to another, following the links on the figure. One plane can only carry one passenger at a time from one city to another, and the flight duration (the number on the link) is the same whether or not the plane carries a passenger (this defines the [*domain*]{} of the problem). In the simplest non-trivial [*instance*]{} of this domain, there are 3 passengers and 2 planes. In the initial state, all passengers and planes are in [city 0]{}, and in the goal state, all passengers must be in [city 4]{}. The not-so-obvious optimal solution has a total makespan of 8 and is left as a teaser for the reader. AI Planning is a very active field of research, as witnessed by the success of the ICAPS conferences (<http://icaps-conferences.org>) and its International Planning Competition (IPC), where the best planners in the world compete on a set of problems. This competition has led researchers to design a common language to describe planning problems, PDDL (Planning Domain Definition Language). Two main categories of planners can be distinguished: [*exact planners*]{} are guaranteed to find the optimal solution …if given enough time; [*satisficing planners*]{} return the best solution they can find, but with no optimality guarantee. A complete description of the state-of-the-art planners is far beyond the scope of this paper. However, to the best of our knowledge, all existing planners are single-objective (i.e. 
optimize one criterion: the number of actions, the cost, or the makespan, depending on the type of problem), whereas most real-world problems are in fact multi-objective and involve several contradictory objectives that need to be optimized simultaneously. For instance, in logistics, the decision maker must generally find a trade-off between duration and cost (and/or risk). An obvious solution is to aggregate the different objectives into a single objective, generally a fixed linear combination of all objectives. Early work in that area used some twist in PDDL 2.0 [@do2003sapa; @refanidis2003multiobjective; @gerevini2008]. PDDL 3.0, on the other hand, explicitly offered hooks for several objectives, and a new track of the IPC was dedicated to aggregated multiple objectives: the “net-benefit” track took place in 2006 [@chen2006temporal] and 2008 [@edelkamp2009optimal], …but was canceled in 2011 because of the small number of entries. In any case, no truly multi-objective approach to multi-objective planning has been proposed since the very preliminary proof-of-concept in the first Divide-and-Evolve paper [@Schoenauer2006]. One goal of this paper is to build on this preliminary work, and to discuss various issues related to the challenge of solving multi-objective problems with an evolutionary algorithm that is heavily based on a single-objective planner (YAHSP [@Vidal2004]) – and in particular to compare different state-of-the-art multi-objective evolutionary schemes when used within [[DaE$_{\text{YAHSP}}$]{}]{}. However, experimental comparison requires benchmark problems. Whereas the IPC has validated a large set of benchmark domains, with several instances of increasing complexity in each domain, nothing yet exists for multi-objective planning. The other goal of this paper is to propose a tunable set of benchmark instances, based on a simplified model of the IPC logistics domain illustrated in Fig. \[fig.instance\]. 
One advantage of this multi-objective benchmark is that the exact Pareto front is known, at least for its simplest instances. The paper is organized as follows: Section \[sec:dae\] rapidly introduces Divide-and-Evolve, more precisely the representation and variation operators that have been used in the single-objective version of [[DaE$_{\text{YAHSP}}$]{}]{} that won the temporal deterministic satisficing track at the last IPC in 2011. Section \[benchmark\] details the proposed benchmark and gives hints about how to generate instances of different complexities within this framework. Section \[sec:evolutionaryMOA\] rapidly introduces the 4 variants of multi-objective schemes that will be experimentally compared on some of the simplest instances of this benchmark, and the results of different series of experiments are discussed in Section \[sec:experiments\]. Section \[sec:conclusion\] concludes the paper, giving hints about further research directions. Divide-and-Evolve {#sec:dae} ================= Let ${\cal P}_D(I,G)$ denote the planning problem defined on domain $D$ (the predicates, the objects, and the actions), with initial state $I$ and goal state $G$. In the STRIPS representation model [@Fikes1971], a state is a list of Boolean atoms defined using the predicates of the domain, instantiated with the domain objects. In order to solve ${\cal P}_D(I,G)$, the basic idea of Divide-and-Evolve is to find a sequence of states $S_1, \ldots, S_n$, and to use some embedded planner $X$ to solve the series of planning problems ${\cal P}_D(S_{k},S_{k+1})$, for $k \in [0,n]$ (with the convention that $S_0 = I$ and $S_{n+1} = G$). The generation and optimization of the sequence of states $(S_i)_{i \in [1,n]}$ is driven by an evolutionary algorithm. 
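The decomposition principle just described can be illustrated on a toy state space. In the sketch below, everything is hypothetical: states are plain integers, the "embedded planner" is a node-bounded breadth-first search standing in for the actual planner (YAHSP in this paper), and an individual is simply a list of intermediate waypoints whose consecutive sub-problems are solved and whose sub-plans are concatenated:

```python
from collections import deque

def embedded_planner(start, goal, actions, max_nodes=100):
    """Toy stand-in for the embedded planner: bounded BFS returning a plan or None."""
    frontier, seen, expanded = deque([(start, [])]), {start}, 0
    while frontier and expanded < max_nodes:
        state, plan = frontier.popleft()
        expanded += 1
        if state == goal:
            return plan
        for name, step in actions:
            nxt = state + step
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None  # sub-problem unsolved within the node budget

def evaluate(individual, init, goal, actions):
    """Solve P(S_k, S_{k+1}) for each consecutive pair and concatenate the plans."""
    waypoints = [init] + individual + [goal]
    full_plan = []
    for a, b in zip(waypoints, waypoints[1:]):
        sub = embedded_planner(a, b, actions)
        if sub is None:
            return None  # unfeasible individual: fitness would be heavily penalized
        full_plan += sub
    return full_plan

actions = [("+1", 1), ("+5", 5)]
print(evaluate([5, 10], 0, 12, actions))  # -> ['+5', '+5', '+1', '+1']
```

The node bound on the embedded search mirrors the limit that the actual system places on the number of nodes the embedded planner may expand per sub-problem.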
After each of the sub-problems ${\cal P}_D(S_{k},S_{k+1})$ has been solved by the embedded planner, the concatenation of the corresponding plans (possibly compressed to take into account possible parallelism in the case of temporal planning) is a solution of the initial problem. In case one sub-problem cannot be solved by the embedded solver, the individual is said to be [*unfeasible*]{} and its fitness is highly penalized, in order to ensure that feasible individuals always have a better fitness than unfeasible ones, which are selected only when there are not enough feasible individuals. A thorough description of Divide-and-Evolve can be found in [@Bibai2010]. The rest of this section will focus on the evolutionary parts of the algorithm. Representation and Initialization --------------------------------- An individual in DaE is hence a variable-length list of states of the given domain. However, the size of the space of lists of complete states rapidly becomes intractable when the number of objects increases. Moreover, the goals of planning problems need only be defined as partial states, involving a subset of the objects, and the aim is to find a state such that all atoms of the goal state are true. An individual in DaE is thus a variable-length list of partial states, and a partial state is a variable-length list of atoms. Previous work with DaE on different domains of planning problems from the IPC benchmark series has demonstrated the need for a very careful choice of the atoms that are used to build the partial states [@bibai-EvoCOP2010]. The method that is used today to build the partial states is based on a heuristic estimation, for each atom, of the earliest time from which it can become true [@Haslum2000]. 
These earliest start times are then used to restrict the candidate atoms for each partial state: the number of states is uniformly drawn between 1 and the number of estimated start times; for every chosen time, the number of atoms per state is uniformly chosen between 1 and the number of atoms of the corresponding restriction. Atoms are then added one by one: an atom is uniformly drawn in the allowed set of atoms (based on earliest possible start time), and added to the individual if it is not mutually exclusive (in short, [*mutex*]{}) with any other atom that is already there. Note that only an approximation of the complete mutex relation between atoms is known from the description of the problem, and the remaining mutexes will simply be gradually eliminated by selection, because they make the resulting individual unfeasible. To summarize, an individual in DaE is represented by a variable-length time-consistent sequence of partial states, and each partial state is a variable-length list of atoms that are not pairwise mutex. Variation Operators ------------------- Crossover and mutation operators are defined on the DaE representation in a straightforward manner – though constrained by the heuristic chronology and the partial mutex relation between atoms. A simple one-point crossover is used, adapted to variable-length representation: both crossover points are independently chosen, uniformly in both parents. However, only one offspring is kept, the one that respects the approximate chronological constraint on the successive states. The crossover operator is applied with a population-level crossover probability. Four different mutation operators are included: first, a population-level mutation probability is used; once an individual has been designated for mutation, the choice between the four mutation operators is made according to user-defined relative weights. 
The four possible mutations operate either at the individual level, by adding (addState) or removing (delState) a state, or at the state level, by adding (addAtom) or removing (delAtom) some atoms in a uniformly chosen state. All mutation operators maintain the approximate chronology between the intermediate states (i.e., when adding a state, or an atom in a state) and the local consistency within all states (i.e., avoiding pairwise mutexes). Hybridization ------------- DaE uses an external embedded planner to solve the sequence of sub-problems defined by the ordered list of partial states. Any existing planner can in theory be used. However, there is no need for an optimality guarantee when solving the intermediate problems in order for DaE to obtain good quality results [@Bibai2010]. Hence, and because several calls to this embedded planner are necessary for a single fitness evaluation, a sub-optimal but fast planner is used: YAHSP [@Vidal2004] is a lookahead strategy planning system for sub-optimal planning which uses the actions in the relaxed plan to compute reachable states in order to speed up the search process. For any given $k$, if the chosen embedded planner succeeds in solving $ P_{D} (S_k, S_{k+1} )$, the final complete state is computed by executing the solution plan from $S_k$, and becomes the initial state of the next problem. If all the sub-problems are solved by the embedded planner, the individual is called *feasible*, and the concatenation of the plans for all sub-problems is a global solution plan for $P_{D} (S_{0} = I, S_{n+1} = G)$. However, this plan can in general be further optimized by rescheduling some of its actions, in a step called compression. The computation of all objective values is done from the compressed plan of the given individual. 
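The one-point crossover described in the previous subsection can be sketched as follows. Everything here is a simplification: states are pairs of a (hypothetical) earliest-start-time stamp and a set of atoms, and the single offspring is simply discarded when its time stamps are not nondecreasing, which mimics the approximate chronological constraint:

```python
import random

def one_point_crossover(p1, p2, rng):
    """Variable-length one-point crossover; each state is (time, atoms).
    The offspring is kept only if its time stamps stay nondecreasing."""
    i = rng.randrange(len(p1) + 1)
    j = rng.randrange(len(p2) + 1)
    child = p1[:i] + p2[j:]
    times = [t for t, _ in child]
    return child if times == sorted(times) else None

# hypothetical parents: lists of (earliest start time, set of atom names)
parent1 = [(1, {"at-p1-c1"}), (4, {"at-p1-c4"})]
parent2 = [(2, {"at-p2-c1"}), (3, {"at-p2-c1", "at-p3-c1"}), (6, {"at-p3-c4"})]
child = one_point_crossover(parent1, parent2, random.Random(42))
print(child)
```

Mutation operators (addState, delState, addAtom, delAtom) would act on the same representation, with the same chronology and mutex checks applied after each change.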
Finally, because the rationale for DaE is that all sub-problems should hopefully be easier than the initial global problem, and for computational performance reasons, the search capabilities of the embedded planner YAHSP are limited by setting a maximal number of nodes that it is allowed to expand to solve any of the sub-problems (see again [@Bibai2010] for more details). Multi-Objective Divide-and-Evolve {#modae} ================================= In some sense, the multi-objectivization of DaE is straightforward – as it is for most evolutionary algorithms. The “only” parts of the algorithm that require some modification are the selection parts, be it the parental selection, which chooses which individuals from the population are allowed to breed, or the environmental selection (aka replacement), which decides which individuals among parents and offspring will survive to the next generation. Several schemes have been proposed in the EMOA literature (see e.g. Section \[sec:evolutionaryMOA\]), and the end of this section will briefly introduce the ones that have been used in this work. However, a prerequisite is that all objectives are evaluated for all potential solutions, and the challenge here is that the embedded planner YAHSP performs its search based on only one objective. Multi-objectivization Strategies {#sec:strategies} -------------------------------- Like all known planners to date, YAHSP only solves planning problems based on one objective. However, it has been possible since PDDL 3.0 to add some other quantities (aka Soft Constraints or Preferences [@gerevini2006preferences]) that are simply computed throughout the execution of the final plan, without interfering with the search. 
The very first proof-of-concept of multi-objective DaE [@Schoenauer2006], though using an exact planner in lieu of the satisficing planner YAHSP, implemented the simplest idea with respect to the second objective: ignore it (though computing its value for all individuals) at the level of the embedded planner, and let the evolutionary multi-objective algorithm take care of it. However, though YAHSP can only handle one objective at a time, it can handle either one in turn, provided they are both defined in the PDDL domain definition file. Hence a whole bunch of smarter strategies becomes possible, depending on which objective YAHSP is asked to optimize every time it runs on a sub-problem. Beyond the fixed strategies, in which YAHSP always uses the same objective throughout [[DaE$_{\text{YAHSP}}$]{}]{} runs, a simple dynamic randomized strategy has been used in this work: once the planner is called for a given individual, the choice of which strategy to apply is made according to a roulette-wheel selection based on user-defined relative weights; in the end, it will return the values of both objectives. It is hoped that the evolutionary algorithm will find a sequential partitioning of the problem that will nevertheless allow the global minimization of both objectives. Section \[resultsStrategies\] will experimentally compare the fixed strategies and the dynamic randomized strategy where the objective that YAHSP uses is chosen with equal probability among both objectives. Other possible strategies include adaptive strategies, where each individual, or even each intermediate state in every individual, would carry a strategy parameter telling YAHSP which strategy to use – and this strategy parameter would be subject to mutation, too. This is left for further work. 
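The dynamic randomized strategy amounts to a roulette-wheel draw over user-defined weights each time the embedded planner is called. A minimal sketch (function and weight names are illustrative, not the actual implementation):

```python
import random

def choose_objective(weights, rng):
    """Roulette-wheel choice of the objective the embedded planner will optimize."""
    total = sum(weights.values())
    r = rng.uniform(0, total)
    acc = 0.0
    for obj, w in weights.items():
        acc += w
        if r <= acc:
            return obj
    return obj  # numerical safety net

rng = random.Random(1)
weights = {"makespan": 1.0, "cost": 1.0}  # the equal-probability dynamic strategy
draws = [choose_objective(weights, rng) for _ in range(10000)]
print(draws.count("makespan") / len(draws))  # close to 0.5
```

An adaptive variant would store such weights inside each individual and let them mutate along with the rest of the genotype.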
Evolutionary Multi-Objective Schemes {#sec:evolutionaryMOA} ------------------------------------ Several Multi-Objective EAs (MOEAs) have been proposed in recent years, and this work is concerned with comparing some of the most popular ones when used within the multi-objective version of [[DaE$_{\text{YAHSP}}$]{}]{}. More precisely, the following selection/reproduction schemes can be applied to any representation, and will be experimented with here: NSGA-II [@Deb2002], SPEA2 [@Zitzler2002], and IBEA [@Zitzler2004]. They will now be quickly introduced in turn. The [**Non-dominated Sorting Genetic Algorithm**]{} (NSGA-II) has been proposed by Deb et al. [@Deb2002]. At each generation, the solutions contained in the current population are ranked into successive Pareto fronts in the objective space. Individuals mapping to vectors from the first front all belong to the best efficient set; individuals mapping to vectors from the second front all belong to the second-best efficient set; and so on. Two values are then assigned to every solution of the population. The first one corresponds to the rank of the Pareto front the corresponding solution belongs to, and represents the quality of the solution in terms of convergence. The second one, the crowding distance, consists in estimating the density of solutions surrounding a particular point in the objective space, and represents the quality of the solution in terms of diversity. A solution is said to be better than another solution if it has a better rank value or, in case of equality, if it has a larger crowding distance. The [**Strength Pareto Evolutionary Algorithm**]{} (SPEA2) [@Zitzler2001] introduces an improved fitness assignment strategy. It intrinsically handles an internal fixed-size archive that is used during the selection step to create offspring solutions. 
At a given iteration of the algorithm, each population and archive member $x$ is assigned a strength value $S(x)$ representing the number of solutions it dominates. Then, the fitness value $F (x)$ of solution $x$ is calculated by summing the strength values of all individuals that dominate $x$. Additionally, a diversity preservation strategy is used, based on a nearest-neighbor technique. The selection step consists of a binary tournament with replacement, applied on the internal archive only. Last, given that the SPEA2 archive has a fixed-size storage capacity, a pruning mechanism based on fitness and diversity information is used when the non-dominated set is too large. The [**Indicator-Based Evolutionary Algorithm**]{} (IBEA) [@Zitzler2004] introduces a total order between solutions by means of a binary quality indicator. The fitness assignment scheme of this evolutionary algorithm is based on a pairwise comparison of the solutions contained in the current population with respect to a binary quality indicator $I$. Each individual $x$ is assigned a fitness value $F (x)$ measuring the “loss in quality” that would result from removing $x$ from the current population. Different indicators can be used. The two most popular, which will be used in this work, are the additive $\epsilon$-indicator ($I_{\epsilon^+}$) and the hypervolume difference indicator ($I_{H^-}$), as defined in [@Zitzler2004]. Each indicator $I (x, x')$ gives the minimum value by which a solution $x \in X$ can be translated in the objective space in order to weakly dominate another solution $x' \in X$. An archive stores solutions mapping to potentially non-dominated points in order to prevent their loss during the stochastic search process. 
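Two core ingredients of the schemes above — non-dominated sorting with crowding distance (NSGA-II) and the additive $\epsilon$-indicator (IBEA) — can be sketched compactly for bi-objective minimization. The point values below are illustrative only:

```python
def non_dominated_sort(points):
    """Rank points (minimization) into successive Pareto fronts, as in NSGA-II."""
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    remaining, fronts = list(points), []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

def crowding_distance(front):
    """Crowding distance of each point in one front (larger = less crowded)."""
    dist = {p: 0.0 for p in front}
    for m in range(len(front[0])):
        ordered = sorted(front, key=lambda p: p[m])
        dist[ordered[0]] = dist[ordered[-1]] = float("inf")  # boundary points
        span = ordered[-1][m] - ordered[0][m] or 1.0
        for prev, cur, nxt in zip(ordered, ordered[1:], ordered[2:]):
            dist[cur] += (nxt[m] - prev[m]) / span
    return dist

def eps_indicator(a, b):
    """Additive epsilon-indicator I_eps+(a, b): smallest shift of a that makes
    it weakly dominate b (minimization)."""
    return max(ai - bi for ai, bi in zip(a, b))

pts = [(8, 12), (16, 8), (24, 4), (12, 10), (20, 6), (16, 9)]
fronts = non_dominated_sort(pts)
print(fronts[0])  # the five non-dominated points; (16, 9) falls in the next front
```

Note that `eps_indicator((16, 8), (16, 9))` is 0, since `(16, 8)` already weakly dominates `(16, 9)` without any shift.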
![A schematic view of the simple benchmark transportation problem: Durations of available flights are attached to the corresponding edges; costs/risks are attached to landing in the central cities (in grey circles).[]{data-label="fig.instance"}](./miniMulti.eps){width="50.00000%"} A Benchmark Suite for Multi-Objective Temporal Planning {#benchmark} ======================================================= This section details the proposed benchmark test suite for multi-objective temporal planning, based on the simple domain that is schematically described in Figure \[fig.instance\]. The reader will by now have solved the little puzzle set in the Introduction, and found the solution with makespan 8 (flying 2 passengers to [city 1]{}, one plane continuing with its passenger to [city 4]{} while the other flies back empty to [city 0]{}; the plane in [city 4]{} then returns empty to [city 1]{} while the other plane brings the last passenger there, and the goal is reached after both planes bring the two remaining passengers to [city 4]{}). The rationale for this solution is that no plane ever stays idle. In order to turn this problem into a not-too-unrealistic multi-objective logistics problem, some costs or some risks are added to all 3 central cities (1 to 3). This leads to two types of problems: in the [*Cost*]{} variant, the second objective is an additive objective: each plane has to pay the corresponding tax every time it lands in that city; in the [*Risk*]{} variant, the second objective is similar to a risk, and the maximal value encountered during the complete execution of a plan is to be minimized. In both cases, there are 3 obvious points that belong to the Pareto front: the solution with minimal makespan described above, and the similar solutions that use respectively [city 2]{} and [city 3]{} in lieu of [city 1]{}. 
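The makespan-8 solution just described can be checked by replaying it. The sketch below assumes a per-leg flight duration of 2 through [city 1]{} (the actual durations are read from Figure \[fig.instance\]; the value 2 is an assumption consistent with the makespans quoted for the three central cities) and lists the legs in chronological start order:

```python
# Each leg: (plane, origin, destination, passenger or None); all legs through
# city 1, with an ASSUMED uniform duration of 2 per leg.
legs = [
    ("A", 0, 1, "P1"), ("B", 0, 1, "P2"),
    ("A", 1, 4, "P1"), ("B", 1, 0, None),
    ("A", 4, 1, None), ("B", 0, 1, "P3"),
    ("A", 1, 4, "P2"), ("B", 1, 4, "P3"),
]

def makespan(legs, duration=2):
    clock = {"A": 0, "B": 0}
    loc = {"A": 0, "B": 0, "P1": 0, "P2": 0, "P3": 0}
    for plane, frm, to, pax in legs:
        assert loc[plane] == frm                 # plane must be where the leg starts
        if pax is not None:
            assert loc[pax] == frm               # so must the passenger it carries
        clock[plane] += duration
        loc[plane] = to
        if pax is not None:
            loc[pax] = to
    assert all(loc[p] == 4 for p in ("P1", "P2", "P3"))  # goal state reached
    return max(clock.values())

print(makespan(legs))  # -> 8
```

With per-leg durations of 4 or 6 (the legs through [city 2]{} or [city 3]{}), the same schedule yields makespans 16 and 24, the other two obvious Pareto-optimal solutions.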
The values of the makespans are respectively 8, 16 and 24, and the values of the costs are, for each solution, 4 times the value of the single landing tax, and exactly the value of the involved risk. For the risk case, there is no other point on the Pareto Front, as a single landing on a high-risk city sets the risk of the whole plan to a high risk. For the cost model however, there are other points on the Pareto Front, as different cities can be used for the different passengers. For instance, in the case of Figure \[fig.instance\], this leads to a Pareto Front made of 5 points: (8,12), (16,8), and (24,4) (going only through [city 1]{}, [2]{} and [3]{} respectively), plus (12,10) and (20,6). Only the first 3 form the Pareto Front in the risk case.

Tuning the Complexity
---------------------

There are several ways to make this first simple instance more or less complex. A first possibility is to add passengers. In this work, only bunches of 3 passengers have been considered, in order to be able to easily derive some obvious Pareto-optimal solutions, using several times the little trick to avoid leaving any plane idle. For instance, it is easy to derive all the Pareto solutions for 6 and 9 passengers – and in the following, the corresponding instances will be termed 3, 6, and 9 respectively (sub-scripted with the type of second objective – cost or risk). Of course, the number of planes could also be increased, though the number of passengers needs to remain larger than the number of planes to allow for a non-trivial Pareto front. However, departing from the 3 passengers to 2 planes ratio would make the Pareto front no longer easy to identify. Another possibility is to increase the number of central cities: this creates more points on the Pareto front, using either plans in which a single city is used for all passengers, or plans that use several different cities for different passengers (while nevertheless using the same trick to ensure no plane stays idle).
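Identifying such points amounts to a plain dominance filter over candidate (makespan, cost) pairs; a minimal sketch (minimising both objectives), using the cost-model front above as an example:

```python
def dominates(p, q):
    """p dominates q (minimisation) if p is no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For instance, the five listed (makespan, cost) points survive this filter unchanged, while any plan dominated by one of them is discarded.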
In such configurations too, the exact Pareto front remains easy to identify: further work will investigate this line of complexification.

Modifying the shape of the Pareto Front
---------------------------------------

Another way to change the difficulty of the problem without increasing its complexity is to tune the different values of the flight times and the cost/risk at each city. Such changes do not modify the number of points on the Pareto Front, but do change its shape in the objective space. For instance, simply modifying the cost $\alpha$ of [city2]{}, the central city in Figure \[fig.instance\], between 1 and 3 (the costs of respectively [city1]{} and [city3]{}), the Pareto Front, which is linear for $\alpha=2$, becomes strictly convex for $\alpha < 2$ and strictly concave for $\alpha > 2$, as can be seen for two extreme cases ($\alpha = 1.1$ and $\alpha = 2.9$) on Figure \[fig:zeno3ParetoFronts\]. Further work will address the identification of the correct domain parameters in order to reach a given shape of the Pareto front.

Experimental Conditions {#sec:condition}
=======================

#### Implementation:

All proposed multi-objective approaches (see Section \[sec:evolutionaryMOA\]) have been implemented within the  framework [@paradiseo]. All experiments were performed on the 3, 6, and 9 instances. The first objective is the makespan, and the second objective either the (additive) cost or the (maximal) risk, as discussed in Section \[benchmark\]. The values of the different flight durations and cost/risks are those given on Figure \[fig.instance\] unless otherwise stated.

#### Parameter tuning:

All user-defined parameters have been tuned using the framework [@ParamILS-JAIR].  handles any parameterized algorithm whose parameters can be discretized.
Based on Iterated Local Search (ILS),  searches through the space of possible parameter configurations, evaluating configurations by running the algorithm to be optimized on a set of benchmark instances, and searching for the configuration that yields the overall best performance across the benchmark problems. Here, both the parameters of the multi-objective algorithms (including the internal parameters of the variation operators – see [@Bibai:2010:GPT:1830483.1830528]) and  specific parameters (including the relative weights of the possible strategies – see Section \[sec:strategies\]) have been subject to  optimization. For the purpose of this work, parameters were tuned anew for each instance (see [@Bibai:2010:GPT:1830483.1830528] for a discussion about the generality of such parameter tuning, which falls beyond the scope of this paper).

#### Performance Metric:

The quality measure used by  to optimize [[DaE$_{\text{YAHSP}}$]{}]{} is the unary hypervolume $I_{H^-}$ [@Zitzler2004] of the set of non-dominated points output by the algorithm with respect to the complete true Pareto front (only instances where the true Pareto front is fully known have been experimented with). The lower the better (a value of 0 indicates that the exact Pareto front has been reached). However, because the true front is known exactly, and is made of a few scattered points (at most 17 for 9 in this paper), it is also possible to visually monitor when each point of the front is discovered by the algorithm. This allows some deeper comparison between algorithms even when none has found the whole front. Such [*attainment plots*]{} will be used in the following, together with more classical plots of hypervolume vs time. For all experiments, 30 independent runs were performed. Note that all the performance assessment procedures, including the hypervolume calculations, have been carried out using the PISA performance assessment tool suite [@Bleuler2003].
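In the bi-objective case at hand, this unary hypervolume difference can be computed with a simple sweep over the points sorted along the first objective; a minimal sketch (minimisation in both objectives; the reference point and values below are illustrative, not those of the PISA configuration used here):

```python
def hypervolume_2d(points, ref):
    """Area dominated by a 2-D point set, bounded by the reference
    point `ref` (minimisation in both objectives)."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(set(points)):     # ascending in objective 1
        if y < prev_y:                   # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def ihv_minus(approx, true_front, ref):
    """Unary hypervolume difference I_H-: 0 iff the approximation
    set covers the whole true front."""
    return hypervolume_2d(true_front, ref) - hypervolume_2d(approx, ref)
```

A run that finds the entire true front scores 0; missing a front point leaves exactly the area that point alone dominates.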
#### Stopping Criterion:

Because different fitness evaluations involve different numbers of calls to  – and because  runs can have different computational costs too, depending on the difficulty of the sub-problem being solved – the stopping criterion was a fixed amount of CPU time rather than the usual number of fitness evaluations. These absolute limits were set to 300, 600, and 900 seconds respectively for 3, 6, and 9.

Experimental Results {#sec:experiments}
====================

Comparing Multi-Objective Schemes
---------------------------------

The first series of experiments presented here is concerned with the comparison of the different multi-objective schemes briefly introduced in Section \[sec:evolutionaryMOA\]. Figure \[fig:zenoHypervolume\] displays a summary of experiments of all 4 variants for  instances for both the [*Cost*]{} and [*Risk*]{} problems. Some clear conclusions can be drawn from these results, which are confirmed by the statistical analyses presented in Table \[table:tests\] using a Wilcoxon signed rank test at the 95% confidence level. First, looking at the minimal values of the hypervolume reached by the different algorithms shows that, as expected, the difficulty of the problems increases with the number of passengers, and that for a given complexity, the [*Risk*]{} problems are more difficult to solve than the [*Cost*]{} ones. Second, from the plots and the statistical tests, it can be seen that NSGA-II is outperformed by all other variants on all problems, SPEA2 by both indicator-based variants on most instances, and $IBEA_{H^-}$ is a clear winner over $IBEA_{\varepsilon^+}$ except on 6$_{risk}$.
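For reference, the statistic behind these pairwise comparisons is straightforward to compute on paired per-run hypervolume values; a pure-Python sketch of the Wilcoxon signed-rank statistic $W^+$ (sum of ranks of the positive paired differences). This is only the test statistic: a full test additionally derives a p-value from $W^+$, which is omitted here.

```python
def wilcoxon_signed_rank(a, b):
    """Wilcoxon signed-rank statistic W+ for paired samples a, b:
    zero differences are discarded, |differences| are ranked with
    average ranks for ties, and the ranks of positive differences
    are summed."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # extend the tie group of equal |difference|
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)
```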
More precisely, Figure \[fig2:zenoParetofront\] shows the cumulated final populations of all 30 runs in the objective space together with the true Pareto front for the 6-9$_{cost}$ problems: the situation is not as bad as it seemed from Figure \[fig:zenoHypervolume\]-(e) for 9$_{cost}$, as most solutions that are returned by $IBEA_{H^-}$ are close to the Pareto front (this is even more true on the 6$_{cost}$ problem). A dynamic view of the attainment plots is given in Figure \[fig:strategiesYahsp\]-(c): two points of the Pareto front are more difficult to reach than the others, namely (48,16) and (56,12).

  Instances                Algorithms               $NSGAII$   $IBEA_{\varepsilon^+}$   $IBEA_{H^-}$   $SPEA2$
  ------------------------ ------------------------ ---------- ------------------------ -------------- ----------
  *Zeno3*$_{cost}$         $NSGAII$                 –          $\equiv$                 $\equiv$       $\equiv$
                           $IBEA_{\varepsilon^+}$   $\equiv$   –                        $\equiv$       $\equiv$
                           $IBEA_{H^-}$             $\equiv$   $\equiv$                 –              $\equiv$
                           $SPEA2$                  $\equiv$   $\equiv$                 $\equiv$       –
  *Zeno3*$_{risk}$         $NSGAII$                 –          $\equiv$                 $\equiv$       $\equiv$
                           $IBEA_{\varepsilon^+}$   $\equiv$   –                        $\equiv$       $\succ$
                           $IBEA_{H^-}$             $\equiv$   $\equiv$                 –              $\succ$
                           $SPEA2$                  $\equiv$   $\prec$                  $\prec$        –
  *Zeno6*$_{cost}$         $NSGAII$                 –          $\prec$                  $\prec$        $\prec$
                           $IBEA_{\varepsilon^+}$   $\succ$    –                        $\equiv$       $\equiv$
                           $IBEA_{H^-}$             $\succ$    $\equiv$                 –              $\equiv$
                           $SPEA2$                  $\succ$    $\equiv$                 $\equiv$       –
  *Zeno6*$_{risk}$         $NSGAII$                 –          $\prec$                  $\prec$        $\equiv$
                           $IBEA_{\varepsilon^+}$   $\succ$    –                        $\succ$        $\succ$
                           $IBEA_{H^-}$             $\succ$    $\prec$                  –              $\succ$
                           $SPEA2$                  $\equiv$   $\prec$                  $\prec$        –
  *Zeno9*$_{cost}$         $NSGAII$                 –          $\prec$                  $\prec$        $\prec$
                           $IBEA_{\varepsilon^+}$   $\succ$    –                        $\prec$        $\equiv$
                           $IBEA_{H^-}$             $\succ$    $\succ$                  –              $\equiv$
                           $SPEA2$                  $\succ$    $\equiv$                 $\equiv$       –
  *Zeno9*$_{risk}$         $NSGAII$                 –          $\prec$                  $\prec$        $\prec$
                           $IBEA_{\varepsilon^+}$   $\succ$    –                        $\prec$        $\equiv$
                           $IBEA_{H^-}$             $\succ$    $\succ$                  –              $\equiv$
                           $SPEA2$                  $\succ$    $\equiv$                 $\equiv$       –

Influence of  Strategy {#resultsStrategies}
----------------------

The next series of experiments aimed at identifying the influence of the chosen strategy for  (see Section \[sec:strategies\]). Figure \[fig:strategiesYahsp\]-(a) (resp. \[fig:strategiesYahsp\]-(b)) shows the attainment plots for the strategy in which  always optimizes the makespan (resp. the cost) on problem 6$_{cost}$. Both extreme strategies lead to much worse results than the mixed strategy of Figure \[fig:attainment\]-(a), as no run discovers the whole front (last line, that never leaves the x-axis). Furthermore, and as could be expected, the makespan-only strategy discovers very rapidly the extreme points of the Pareto front that have a small makespan (points (20,30), (24,28) and (28,26)) and hardly discovers the other end of the Pareto front (points with makespan greater than 48), while it is exactly the opposite for the cost-only strategy. This confirms that a strategy incorporating both approaches is the best possible choice. Note that a similar conclusion could have been drawn from  results on parameter tuning (see Section \[sec:condition\]): the choice of  strategy was one of the parameters tuned by  …and the tuned values for the weights of both strategies were always more or less equal.

Shape of the Pareto Front
-------------------------

Figure \[fig:allFronts\] displays the attainment plots of IBEA$_{H^-}$ for both extreme Pareto fronts shown on Figure \[fig:zeno3ParetoFronts\] – while the corresponding plot for the linear case $\alpha=2$ is that of Figure \[fig:attainment\]-(a). Whereas the concave front is fully identified in 40% of the runs (right), the complete front for the strictly convex case (left) is never reached: in the latter case, the 4 most extreme points are found by 90% of the runs in less than 200 seconds, while the central points are hardly ever found.
We hypothesize that the handling of the  strategy regarding which objective to optimize (see Section \[sec:strategies\]) has a greater influence in the case of this strictly convex front than when the front is linear ($\alpha=2$) or almost linear, even if strictly concave ($\alpha=2.9$). In any case, no aggregation technique could ever solve the latter case, whereas it is here solved in 40% of the runs by [[DaE$_{\text{YAHSP}}$]{}]{}.

Conclusion and Perspectives {#sec:conclusion}
===========================

The contributions of this paper are twofold. Firstly, , an original benchmark test suite for multi-objective temporal planning, has been detailed, and several levers have been identified that allow generating more or less complex instances, as confirmed experimentally: increasing the number of passengers obviously makes the problem more difficult; modifying the cost of reaching the cities and the duration of the flights is another way to make the problem harder, though deeper work is required to identify the consequences of each modification. Secondly, several multi-objectivizations of , an efficient evolutionary planner in the single-objective case, have been proposed. However, even though the hypervolume-based IBEA$_{H^-}$ clearly emerged as the best choice, the experimental comparison of those variants on the  benchmark raises more questions than it brings answers. The sparseness of the Pareto Front has been identified as a possible source of the rather poor performance of all variants on moderately large instances, particularly for the [*risk*]{} type of instances. Some smoothing of the objectives could be beneficial to tackle this issue (e.g., counting the number of times each risk level is hit rather than simply accounting for the maximal value reached). Another direction of research is to combat the asymmetry of the results, due to the fact that the embedded planner only optimizes one objective.
Further work will investigate a self-adaptive approach to the choice of which objective to give  to optimize. Finally, the validation of the proposed multi-objective [[DaE$_{\text{YAHSP}}$]{}]{} can only be complete after a thorough comparison with the existing aggregation approaches – though it is clear that aggregation approaches will not be able to identify the whole Pareto front in case it has some concave parts, whereas the results reported here show that [[DaE$_{\text{YAHSP}}$]{}]{} can reasonably do it. [^1]: This work was partially funded by DESCARWIN ANR project (ANR-09-COSI-002).
--- abstract: 'We present experimental work for improved atom loading in the optical molasses of a caesium fountain clock, employing a low-velocity intense source of atoms (LVIS) \[Lu *et al.*, Phys. Rev. Lett. **77**, 3331 (1996)\], which we modified by adding a “dark” state pump laser. With this modification the atom source has a mean flux of $4 \times 10^{8}$ atoms/s at a mean atom velocity of $8.6$m/s. Compared to fountain operation using background gas loading, we achieved a significant increase of the loaded and detected atom number by a factor of 40. Operating the fountain clock with a total number of detected atoms $N_{\mathrm{at}}=2.9 \times 10^6$ in the quantum projection noise-limited regime, a frequency instability $\sigma_y\left(1\text{s}\right)=2.7 \times 10^{-14}$ was demonstrated.' author: - 'G. Dobrev' - 'V. Gerginov' - 'S. Weyers' bibliography: - 'LVIS\_Bibliography.bib' title: 'Loading of a fountain clock with an enhanced Low-Velocity Intense Source of atoms' --- \[sec:Intro\]Introduction ========================= The total measurement uncertainty of frequency measurements performed with fountain primary frequency standards is obtained from the quadratic sum of the statistical and the systematic measurement uncertainties [@Wynands2005]. A higher fountain frequency stability results directly in an improved statistical uncertainty in a given measurement time, accompanied by an improved total measurement uncertainty. Moreover, there are systematic uncertainty contributions such as those from the collisional or the distributed cavity phase shifts, which can also be reduced with an improved statistical uncertainty of their evaluation [@Gerginov2010; @Weyers2012b]. Thus an improved fountain stability during a given measurement can also lower the total measurement uncertainty by indirectly reducing the systematic uncertainty. We describe improvements of the frequency stability of the fountain primary frequency standard CSF2 [@Gerginov2010]. 
In CSF2 caesium atoms are cooled and accumulated in an optical molasses (OM). The captured atoms are subsequently launched in the vertical direction to perform frequency measurements of the microwave clock transition between the $6^{2}S_{1/2}$ ground state hyperfine sublevels ${\lvert{F=3}\rangle}$ and ${\lvert{F=4}\rangle}$ (Fig. \[Cs\]) [@Wynands2005]. With the use of an optically-stabilized microwave signal [@Weyers2009] the stability of the fountain is limited by quantum projection noise (QPN) over a wide range of atom numbers. As a result, the fountain frequency stability improves with the square root of the detected atom number. The loaded and detected cold atom number is increased (and the corresponding QPN is reduced) by OM loading from a low-velocity intense source of cold atoms (LVIS, [@Wieman96]). The LVIS system is modified similarly to the work of Teo *et al.* [@teo2002] by adding a “dark” state pump laser. In this work the pump laser is used to reduce the velocity of the slow beam, and gives another factor of two loading enhancement. The LVIS arrangement [@Wieman96] is very similar to the classical $3$D magneto-optical trap (MOT) scheme [@Metcalf] for atom cooling, with an additional leak channel for the cold atoms. In the LVIS scheme one of the six cooling laser beams has significantly reduced field intensity in a small central region of its spatial profile, because it is reflected from a mirror with a small central hole acting as atomic beam aperture. This feature perturbs the MOT trapping potential and creates the leak channel. Atoms in this region become subject to acceleration by the laser beam (in the following called “accelerating beam”) pointing in the direction of the aperture. As a result, they get pushed out of the trap and form a cold continuous atom beam. After a description of our experimental setup and the initial optimization procedures, we will present and explain the findings obtained from the insertion of a “dark” state pump laser.
The cold atom beam from the enhanced LVIS system is characterized regarding beam flux and mean atom velocity, before we demonstrate the improved CSF2 frequency stability by evaluating frequency measurement results.

\[sec:Experiment\]Experiment
============================

\[sec:Setup\]Experimental setup
-------------------------------

The LVIS trapping zone is constructed around a standard DN35CF six–way cube. Optical access is provided by five AR-coated windows with a diameter of $38$mm. The free end of the cube is attached to the OM chamber of the fountain vacuum system through a six-way cross and a flexible metal bellow. The distance between both trap centers is approximately $53$cm and the LVIS cube is positioned $11$mm higher than the OM center in order to compensate the height difference acquired by the atoms along their ballistic flight. Two identical coaxial coils, arranged in anti-Helmholtz configuration, create the quadrupole magnetic field for the trap operation. The line passing through the coil centers defines the trap symmetry axis $z$ (see Fig. \[LVIS\]).

![\[LVIS\]Schematic drawing of the enhanced low velocity intense source system. The magnetic field gradient coils are not shown.](Fig2.eps){width="48.00000%"}

Diode lasers provide the trapping light. It is introduced into the LVIS through five individual single-mode polarization-maintaining fibers with collimators (Schäfter&Kirchhoff Fiber Collimator 60FC-T-4-M100-37) at the fiber ends. The collimators come with an integrated quarter-wave plate and provide circularly polarized, collimated laser light having a Gaussian profile with $21$mm beam diameter ($1/e^2$). Along the $z$ axis, there is only one fiber collimator mounted, while on the opposite side of the MOT center, at a distance of $18$mm, the output coupler of the LVIS system is positioned. It consists of a pierced quarter-wave plate, with a $0.5$mm diameter aperture in the center and with a high-reflection coating deposited on the back side.
The output coupler produces a retro-reflected laser beam which has opposite circular polarization with respect to the incident beam, which is a necessary condition for the MOT operation. The aperture in the output coupler forms the desired extraction column in the LVIS. No additional collimation of the atomic beam is performed.

\[sec:Opt\]Initial optimization
-------------------------------

The number of cold atoms loaded in the OM zone of the fountain depends on the atomic beam flux, the beam divergence and the losses during the atom capture process in the OM. The atomic beam flux is determined mainly by the LVIS trap capture rate [@Wieman96]. Parameters relevant to the capture rate are the value of the atomic vapour background pressure, the magnetic field gradient, the cooling laser power, frequency detuning, intensity profile and distribution among the cooling laser beams, as well as the repump laser parameters. An additional factor is the size of the aperture in the LVIS output coupler. The complexity of the combined LVIS–OM system does not allow us to fully separate the individual impact of all these factors on the loading process. We assess and optimize the performance of the overall system by observing the total number $N_{\mathrm{at}}$ of detected cold atoms at the end of the fountain interrogation cycle. Initially, the formation of an ensemble of cold atoms in the LVIS MOT was accomplished by distributing the power of the cooling laser equally among the three trap axes. The magnetic field orientation of the MOT defines the needed polarization state of each individual beam. In Fig. \[cloud\] one can see the fluorescence emitted by the trapped atoms in the LVIS MOT and the asymmetric shape of the cold atom cloud as a result of the imbalanced radiation pressure force along the $z$ axis. Fluorescence from atoms which form the atom beam and leave the MOT central region appears as a tail on the right hand side of the main cooled ensemble.
Already without any further optimizations we observed a noticeable change in $N_{\mathrm{at}}$, indicating enhanced loading of atoms in the OM zone of the fountain. An advantage of using the above described source of cold atoms is that the small aperture size in the LVIS output coupler provides a differential pumping mechanism between both trap chambers. This allowed us to increase the caesium partial pressure in the LVIS MOT zone and at the same time to preserve the rest of the fountain vacuum system from an unfavourable rise of the local background pressure. Such a pressure rise would in turn increase the OM loss rate, and the rate of collisions of the interrogated cold atoms with the hot background Cs atoms during the free propagation time.

![\[cloud\]A snapshot of the LVIS MOT region made with a CCD camera. The bright central spot is the fluorescence from the trapped caesium atoms, and the tail emerging from it is due to scattered photons from atoms which leave the trap and form the cold atom beam. The conditions at which the picture is taken are $18$mW cooling laser optical power on each beam, 0.79mT/cm axial and 0.39mT/cm radial magnetic field gradients.](Fig3.eps){width="0.40\columnwidth"}

A Fabry-Perot laser diode (JDSU 5430) at $852$nm serves as LVIS cooling laser. Its frequency is stabilized through an injection locking technique to the frequency of the master laser of CSF2. An acousto-optic modulator is used to tune the absolute laser frequency about $2 \Gamma$ to the red of the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=5}\rangle}$ cycling transition ($\Gamma=5.2$MHz is the natural linewidth of the transition). In total, the cooling laser delivers $90$mW optical power to the LVIS trap. The trap center is aligned with the extraction column by suitably distributing the laser power between the transverse laser beams. The intensity of the accelerating beam is a crucial parameter concerning the efficiency of the OM loading process.
In our experimental arrangement the optimum value for the power of the accelerating beam was found to be $10$mW. Also the optimum value of the quadrupole magnetic field gradient is close to the one for the optimum MOT capture rate. Besides the laser parameters, the value of the magnetic field gradient defines the rate of absorption-emission cycles that an accelerated atom experiences and thus the final atom beam velocity. In our experiment, the anti-Helmholtz coils are driven with a current of $2.7$A, creating $0.71$mT/cm and $0.35$mT/cm magnetic field gradients in the axial and radial directions, respectively. An additional distributed Bragg reflector (DBR) laser diode (Photodigm PH852DBR120) is used as a repump laser to bring the atoms which have decayed to the ${\lvert{F=3}\rangle}$ state back to the cooling cycle. The DBR laser diode delivers a total optical power of $6.2$mW into the LVIS MOT and its output frequency is stabilized by saturated absorption spectroscopy. This laser shares the same optical fibers with the cooling laser, depending on the chosen configuration. Here we discuss the LVIS MOT dynamics in more detail. Background gas atoms from the low-velocity tail of the thermal distribution are constantly cooled and pushed towards the confinement region of the trap. The opening in the output coupler defines a cylindrical region around the trap symmetry axis $z$, where an imbalance between the confining forces arises on both sides of the trap center. Therefore, an atom which ended up in the extraction column will experience a net spontaneous force pointing towards the LVIS output coupler. The atoms from the resulting atom beam are continuously confined within the extraction column by the transverse laser beams (cooling and repumping). Those that diverge from the central column are recycled back into the trap. If the beam divergence, which is a measure of the atoms' transverse velocity, is too large, the loading efficiency of the OM is compromised.
Due to technical reasons we could not directly measure the atomic beam divergence in our experimental setup. Instead, we rely on observations and evaluations from previous studies carried out by Lu [@Wieman96] and Park [@Park99]. It was found that a pure geometrical factor can well describe the measured beam size, and that the divergence of the atomic beam scales as $\theta=d/x$, where $d$ is the diameter of the aperture in the LVIS output coupler and $x$ is the distance between the aperture and the LVIS trap center. In our case this would result in an atomic beam diameter of $15$mm at the OM trap center, where the OM cooling laser beams have a diameter of $42$mm ($1/e^2$).

\[sec:Mod\]LVIS modification: Pump laser
----------------------------------------

With the traditional LVIS setup [@Wieman96] and using transverse repumping, we achieved nearly 20 times more caesium atoms detected at the end of the fountain interrogation cycle in comparison with background gas OM loading. In both cases the loading time constant is about 1s. The efficiency of the LVIS–OM system is sensitive to the velocity of the beam atoms, because of the limited velocity capture range of the OM. However, the traditional LVIS system described so far does not provide enough flexibility to control the velocity of the atoms. The atoms in the extraction column will be accelerated (and heated) by the accelerating beam until the Doppler shift brings the atomic transition out of resonance [@Wang:03]. To stop the acceleration at a certain moment, it is expedient to shelve the atoms in a “dark” state [@teo2002], for which the ${\lvert{F=3}\rangle}$ component of the caesium ground state is a convenient choice. The only way for an atom in the extraction column to escape the cycling transition ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=5}\rangle}$ and to end up in the ${\lvert{F=3}\rangle}$ state is by off-resonant excitation of the ${\lvert{F'=4}\rangle}$ state.
Since the frequency splitting between ${\lvert{F'=4}\rangle}$ and ${\lvert{F'=5}\rangle}$ is relatively large ($\sim 251$MHz), this process has a low probability. Therefore, we introduce an additional laser beam, called the “pump” laser, which is intended to drive either the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=3}\rangle}$ or the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=4}\rangle}$ transition. While this laser provides very efficient optical pumping to the ${\lvert{F=3}\rangle}$ state, the presence of opposing repumping light along the $z$ axis would support a continuous transfer of atoms back from the ${\lvert{F=3}\rangle}$ to the ${\lvert{F=4}\rangle}$ state during their flight to the OM zone. To avoid this scenario the repumping light is removed from the atomic beam path and is only present in the vertically aligned LVIS MOT laser beams. Fig. \[LVIS\] illustrates the laser fields present in our LVIS trap and their orientation. We have chosen the sign of the $B_{z}$ component of the magnetic field on both sides of the trap center ($z=0$) as shown in this figure. We define the quantization axis to be coincident with the $z$ axis, and therefore the cooling laser light coming out of the fiber and having a wave vector pointing towards the LVIS output coupler must be $\sigma^{-}$ circularly polarized. The pump laser beam is spatially overlapped with the accelerating beam, as they propagate in the same optical fiber, but it possesses the opposite $\sigma^{+}$ circular polarization. Since beam atoms with positive displacement along $z$ ($B_z<0$) and outside the transverse laser region are shelved in the ${\lvert{F=4, m_{F}=-4}\rangle}$ state by the on-axis $\sigma^{-}$ polarized cooling laser, only $\sigma^{+}$ polarized pump light can promote optical pumping to the ${\lvert{F=3}\rangle}$ component according to the transition selection rules. As a result of the complementary laser pump field, the loaded number of atoms in the OM region significantly increases.
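Two of the quantities quoted in this section can be cross-checked with back-of-the-envelope estimates; the following sketch uses only numbers given in the text (the aperture geometry and the $\sim 2\Gamma$ red detuning of the accelerating beam) and is purely illustrative.

```python
# Geometric estimate of the atomic beam size at the OM: theta = d/x,
# with aperture diameter d, aperture-to-trap distance x, and
# LVIS-to-OM distance L (values from the setup description).
d_aperture = 0.5e-3   # m, aperture in the output coupler
x_trap = 18e-3        # m, aperture to LVIS trap center
L = 0.53              # m, LVIS to OM trap center
beam_diameter = (d_aperture / x_trap) * L   # ~15 mm, well inside the 42 mm OM beams

# Doppler-limited velocity of the accelerated atoms: acceleration stops
# roughly once the Doppler shift v/lambda compensates the 2*Gamma red detuning.
gamma = 5.2e6         # Hz, natural linewidth of the cycling transition
lam = 852e-9          # m, Cs D2 wavelength
v_max = 2 * gamma * lam                     # ~8.9 m/s, close to the measured 8.6 m/s
```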
![\[probe\]The detected atom number $N_{\mathrm{at}}$, resulting from fountain OM loading by means of the LVIS system, as a function of the probe laser frequency detuning from a) the ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'}\rangle}$ and b) the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'}\rangle}$ transition. Results with and without pump laser present are shown by a green (top) and a red (bottom) trace, respectively. The pump laser is operated on the ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'=3}\rangle}$ transition with an optical power of $0.01$mW. At exact resonance frequencies, quantum numbers $F'$ are indicated, while the crossover resonances in between are not denoted.](Fig4.eps "fig:"){width="48.00000%"} ![\[probe\]](Fig5.eps "fig:"){width="48.00000%"}

For verifying the internal atomic state of the atoms after they have left the LVIS trap, a probe laser was applied perpendicular to the path of the atoms, 11cm away from the trap center. The probe laser was intended to transfer momentum to the atoms in a given ${\lvert{F}\rangle}$ component in a direction transverse to that of propagation, or to cause optical pumping. Both effects depend on the probe laser frequency detuning, polarization, intensity, and on the given static magnetic field.
A rectangular aperture was used to produce a thin sheet of light from this laser, with 2mm thickness and 10mm height, perpendicular to the atomic beam direction. In this way we ensured that the resulting probe light with 30$\mu$W optical power would interact with most of the passing atoms. In Fig. \[probe\] the detected atom number $N_{\mathrm{at}}$ is shown when the frequency of the probe laser is scanned in the vicinity of both the ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'=2}\rangle}$ (Fig. \[probe\]a) and the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=5}\rangle}$ (Fig. \[probe\]b) transitions. In each sub-figure two modes of OM loading with the LVIS are shown. The top green and bottom red traces represent the situation with and without the pump laser, respectively. In the following we discuss the results of Fig. \[probe\] only as a qualitative illustration of the most general features of the probe laser influence on the atom loading process, and do not attempt a quantitative explanation. Without the pump laser (red traces in Fig. \[probe\]), there are atoms in both states ${\lvert{F=3}\rangle}$ and ${\lvert{F=4}\rangle}$, and their interaction with the probe laser tuned close to the ${\lvert{F}\rangle}$ $\rightarrow$ ${\lvert{F'}\rangle}$ resonances leads either to a decrease or an increase in the detected atom number $N_{\mathrm{at}}$. The atom number $N_{\mathrm{at}}$ is decreased when either the probe laser pushes away beam atoms (${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'}\rangle}$ (Fig. \[probe\]a) or ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'}\rangle}$ (Fig. \[probe\]b) transitions), or pumps beam atoms from ${\lvert{F=3}\rangle}$ to ${\lvert{F=4}\rangle}$ (${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'=3,4}\rangle}$ transitions, Fig. \[probe\]a), so that they are subsequently detrimentally accelerated and heated by the cooling laser light.
On the other hand, the detected atom number increases when the beam atoms are pumped by the probe laser from ${\lvert{F=4}\rangle}$ to ${\lvert{F=3}\rangle}$ (${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=3,4}\rangle}$ transitions, Fig. \[probe\]b), thus avoiding damaging acceleration and heating and providing more efficient capture in the fountain OM. With the pump laser (green traces in Fig. \[probe\]), the atoms are transferred to the state ${\lvert{F=3}\rangle}$, so that they do not interact with the probe laser tuned around ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=5}\rangle}$ (Fig. \[probe\]b), while probe laser tuning close to the resonances ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'}\rangle}$ (Fig. \[probe\]a) results in either pushing the atoms away by light scattering, or pumping them to ${\lvert{F=4}\rangle}$ with subsequent detrimental acceleration by the cooling laser light. ![\[holes\]Plot of the detected atom number $N_{\mathrm{at}}$ as a function of the pump laser frequency at four different laser powers. For comparison, the dashed magenta trace represents $N_{\mathrm{at}}$ obtained in the standard mode of fountain operation (background gas OM loading).](Fig6.eps "fig:"){width="48.00000%"} ![](Fig7.eps "fig:"){width="48.00000%"} Next, we further investigate the effects of the pump laser properties. Figs. \[holes\]a) and \[holes\]b) illustrate the dependence of $N_{\mathrm{at}}$ on the pump laser frequency for several different values of its optical power. The laser frequency is continuously tuned through the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'}\rangle}$ resonances.
For frequencies far-detuned from resonance, $N_{\mathrm{at}}$ remains stable at the level of detected atoms obtained without the pump laser. Once the frequency of the pump laser matches a transition, the process of optical pumping of atoms to the dark state starts to compete with the MOT loading rate. At low power (Fig. \[holes\]a) this laser efficiently transfers the atoms from the beam to the ${\lvert{F=3}\rangle}$ component without significant heating, so that most of them stay confined within the extraction column and later on travel unaffected towards the OM region. As a result, sharp peaks are observed at frequencies matching the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=3,4}\rangle}$ transitions (orange triangle symbols in Fig. \[holes\]a). These peaks indicate the increased number of atoms that take part in the CSF2 measurement cycle. The difference in the transfer efficiency to the “dark” state noticed around the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=3}\rangle}$ and ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=4}\rangle}$ resonances (Fig. \[holes\]a) can be explained by referring to the Clebsch-Gordan coefficients of the corresponding transitions [@Metcalf]. Because the pump laser beam has a very low optical power compared to the powers of the repumping and cooling laser beams (which are about three orders of magnitude higher), it does not significantly disturb the cooling process in the LVIS MOT. When the power of the pump laser is increased, the cooling process becomes more disturbed, and a drop in $N_{\mathrm{at}}$ is observed. In the curves in Figs. \[holes\]a) (only ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=3}\rangle}$ at $\approx -450$MHz detuning, blue circles) and \[holes\]b) (green squares and red rhomboids), this appears as the formation of dips at frequencies close to the resonances.
The widths of the dips broaden gradually as the power of the pump beam grows, since even off-resonance the disturbance of the LVIS MOT becomes more and more effective. At the same time, broad maxima of $N_{\mathrm{at}}$ develop (Fig. \[holes\]b)), shifted more and more to lower frequencies with respect to the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=3}\rangle}$ and ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=4}\rangle}$ transitions. For sufficiently high pump optical power we even observe a complete loss of both the LVIS MOT fluorescence and the detected atoms in CSF2. With the pump laser beam in a direction transverse to the LVIS symmetry axis, there is no enhancement of $N_{\mathrm{at}}$. The observed dips in the spectrum close to the resonance frequencies were similar to the dips in Fig. \[holes\]b), caused by the same mechanisms of MOT disturbance. This experiment also demonstrates that the optical pumping of the atoms to the ${\lvert{F=3}\rangle}$ state mainly occurs after they leave the LVIS transverse laser beams. On the other hand, with the pump laser beam along the LVIS symmetry axis but with the opposite circular polarization $\sigma^{+}$ (the same as the cooling laser), no increase of $N_{\mathrm{at}}$ is observed either, as no optical pumping can take place according to the transition selection rules. At moderate pump optical powers (0.2mW to 2mW, see Fig. \[holes\]b), the maximum obtainable value of $N_{\mathrm{at}}$ is only $4\%$ lower than the maximum obtainable $N_{\mathrm{at}}$ at low pump optical powers (0.01mW to 0.02mW, see Fig. \[holes\]a). However, for moderate pump optical powers the loading process becomes less sensitive to the pump laser detuning. In our setup the peak value of the detected atom number reaches saturation at pump beam intensities of about $3.6$$\mu$W/cm$^{2}$ (50 $\mu$W power).
For the frequency instability measurements (see Section \[sec:Instability\]) the power of the pump beam was 0.45mW with a red frequency detuning of 10$\Gamma$ from the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=3}\rangle}$ transition. We note that scanning the pump laser through the ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'}\rangle}$ resonances shows no significant effect on $N_{\mathrm{at}}$ compared to the case of LVIS–OM loading without the pump laser: the pump laser effect on the atoms in the LVIS is similar to the effect of a repump laser along the $z$-axis. ![\[repumper\]Plot of the detected atom number $N_{\mathrm{at}}$ vs. the frequency of the pump laser for different LVIS repump laser configurations. Each configuration is characterized by the operating frequency of the repump laser (driving either the ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'=3}\rangle}$ transition or the ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'=4}\rangle}$ transition) and by the particular spatial orientation of the repumping light in the LVIS trap (either the repumping light is available along the $z$ axis (atomic beam axis) or it is applied only transversely to the $z$ axis). For all four cases the LVIS repump laser optical power was kept the same and the power delivered by the pump laser was $0.45$mW.](Fig8.eps){width="48.00000%"} The properties of the LVIS repump laser also affect the loading process of the OM with the atomic beam. Fig. \[repumper\] illustrates the impact of both the operating frequency and the orientation of the LVIS repump laser on $N_{\mathrm{at}}$. Changing the repump laser tuning from the more efficient repumping transition ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'=4}\rangle}$ to the less efficient repumping transition ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'=3}\rangle}$ results in a higher number of cold atoms loaded into the OM of CSF2 (Fig. \[repumper\]).
This finding is probably the result of two competing effects and depends on the particular repump laser intensities and geometry of our LVIS setup: While the utilisation of the ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'=4}\rangle}$ repumping transition is beneficial for the LVIS MOT operation, stray light from the repump laser at this transition interacting with the ${\lvert{F=3}\rangle}$ atoms in the beam is more efficient in pumping the atoms to the ${\lvert{F=4}\rangle}$ state (resulting in the described detrimental acceleration effect) than stray light from the ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'=3}\rangle}$ repumping transition. In our setup, with the given experimental parameters (pump laser power 0.45mW and a red frequency detuning of 10$\Gamma$ from the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=3}\rangle}$ transition), the latter repumping transition is the better of the two choices, resulting in a factor of 40 increase of $N_{\mathrm{at}}$ compared to operation of CSF2 with OM loading from background gas. \[sec:Flux\] Velocity and flux of the slow beam ----------------------------------------------- To characterize the source of cold atoms, we performed fluorescence and absorption measurements in the OM zone of the fountain. The first step in the atom velocity measurement was to allow the atomic beam to reach a steady state after the LVIS lasers were turned on. The probe laser was also turned on and tuned to the ${\lvert{F=3}\rangle}$ $\rightarrow$ ${\lvert{F'=2}\rangle}$ transition. Its intensity was increased to efficiently heat the cold beam atoms prepared in the ${\lvert{F=3}\rangle}$ state by the pump laser, and to effectively prevent them from reaching the OM zone. After the atom beam flux reached a steady state ($0.5$s after the LVIS lasers were turned on), as a second step, the probe laser was turned off and the OM beams were simultaneously turned on for $1.1$s. 
The measured delay between the turning off of the probe beam (which allows the atoms to reach the OM zone) and the observed change in the OM loading rate (measured as a sudden increase in the OM fluorescence when the first cold atoms reach the OM zone) was used to calculate a mean atom velocity of $8.6$m/s. We estimate the atom flux by performing a measurement of the cloud absorption in the OM zone. For this measurement, the OM and LVIS lasers were operated simultaneously for $1$s. At the end of this period all lasers were turned off and the probe beam was turned on, preventing additional arrival of cold atoms in the OM zone. After $10$ms, a weak absorption laser beam, resonant with the ${\lvert{F=4}\rangle}$ $\rightarrow$ ${\lvert{F'=5}\rangle}$ transition, was introduced along the direction of one of the OM beams. It propagated in the OM fiber and had the same profile and direction as the corresponding OM beam. The absorption beam power was $0.07$$\mu$W in a beam diameter of $42$mm ($1/e^2$). Its polarization was linear and orthogonal to that of the corresponding OM beam, and it was detected with a high-gain photodetector after passing the OM zone. The absorption beam was kept on for $100$ms. During this measurement time, the absorption changed according to a loss rate consistent with losses due to gravity. From the measured relative absorption and the loading time of 1s, a mean atom flux of $4\times10^{8}$ atoms/s from the enhanced LVIS system is estimated.
\[sec:Instability\] CSF2 operation with LVIS loading ==================================================== The frequency instability of a fountain frequency standard is expressed by the Allan deviation according to $$\label{sy} \sigma_y\left(\tau\right)=\frac{1}{\pi}\frac{\Delta\nu}{\nu_0}\frac{1}{\text{SNR}}\sqrt{\frac{T_c}{\tau}},$$ where $\Delta\nu$ is the full-width-at-half-maximum of the Ramsey fringe, $\nu_0=9\,192\,631\,770$Hz is the clock transition frequency, $\text{SNR}$ is the signal-to-noise ratio, $T_c$ is the cycle time, and $\tau$ the measurement time. In the case of quantum projection noise-limited operation, $\text{SNR}=\sqrt{N_{\mathrm{at}}}$, with $N_{\mathrm{at}}$ the total detected number of atoms in the $F=3$ and $F=4$ hyperfine components of the Cs ground state. The $\text{SNR}$ was measured directly by operating the fountain CSF2 in a regime where the noise of the local oscillator does not contribute to the instability [@Santarelli1999]. The measured $\text{SNR}$ increases linearly as a function of $\sqrt{N_{\mathrm{at}}}$ (measured in relative units) and reaches values larger than $1700$. The linear dependence between $\text{SNR}$ and $\sqrt{N_{\mathrm{at}}}$ allows us to calibrate $N_{\mathrm{at}}$ in terms of the absolute number of detected atoms. The expected CSF2 instability, calculated from Eq. \[sy\] for $\Delta\nu=0.9$Hz, $T_c=2$s, $\text{SNR}=1680$ and $N_{\mathrm{at}}=2.9 \times 10^6$ is $\sigma_y\left(1\text{s}\right)=2.6 \times 10^{-14}$. To experimentally confirm this value, the fountain was operated in a regime where the dominant contribution to its instability is the quantum projection noise. To reach this regime, an optically-stabilized 9.6GHz microwave signal is generated using a frequency comb as a transfer oscillator, and is used for the frequency synthesis of CSF2 [@Lipphardt2015].
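As a quick consistency check on the numbers quoted above, Eq. \[sy\] can be evaluated directly; the short script below (all parameter values taken from the text) reproduces both the quantum-projection-noise-limited SNR and the expected instability:

```python
import math

# Allan deviation of a fountain clock, Eq. (sy) of the text:
# sigma_y(tau) = (1/pi) * (dnu/nu0) * (1/SNR) * sqrt(Tc/tau)
# Parameter values are those quoted for CSF2 in the text.
dnu = 0.9            # Ramsey fringe FWHM (Hz)
nu0 = 9_192_631_770  # Cs clock transition frequency (Hz)
snr = 1680           # measured signal-to-noise ratio
Tc = 2.0             # cycle time (s)
N_at = 2.9e6         # detected atom number

def sigma_y(tau):
    return (1.0 / math.pi) * (dnu / nu0) / snr * math.sqrt(Tc / tau)

# Quantum-projection-noise limit: SNR = sqrt(N_at)
print(round(math.sqrt(N_at)))   # 1703, consistent with SNR > 1700
print(f"{sigma_y(1.0):.2e}")    # 2.62e-14, the quoted 2.6e-14 instability
```

The $\tau^{-1/2}$ scaling of Eq. \[sy\] also implies that averaging to $\tau = 10^4$s improves the instability by two orders of magnitude, consistent with the statement in the conclusions.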
The frequency comb is referenced to a laser locked to an ultra-stable optical cavity, transferring the stability of the laser to a 9.6GHz dielectric resonator oscillator (DRO). After removing the linear drift of the DRO frequency caused by the drift of the optical cavity, the measured Allan deviation $\sigma_y\left(\tau\right)$ shows a $\tau^{-1/2}$ dependence for measurement times up to 100s (Fig. \[instability\]). ![\[instability\] Allan deviation of the CSF2 frequency measured against the DRO referenced to an optical cavity (symbols). The fountain frequency instability of $\sigma_y\left(\tau\right) = 2.7 \times 10^{-14} \tau^{-1/2}$ is shown with a dashed line.](Fig9.eps){width="48.00000%"} The measured fountain instability $\sigma_y\left(1\text{s}\right)=2.7 \times 10^{-14}$ is in good agreement with the value of $2.6 \times 10^{-14}$ inferred from the $\text{SNR}$ measurements. It is also close to the best measured instability of $\sigma_y\left(1\text{s}\right)=1.6 \times 10^{-14}$ in a primary fountain clock, where the atoms were loaded from a decelerated Cs atomic beam [@Vian2005]. \[sec:Conclusions\] Conclusions =============================== Slow atomic beam loading of the PTB fountain CSF2 is demonstrated. The source of cold atoms is a modified low-velocity intense source setup which includes an additional pump laser. The pump laser reduces the loss of cold atoms during their flight between the LVIS apparatus and the OM loading zone by pumping them into a dark state in which they are not subject to continued acceleration and heating due to light scattering. Additionally, the atoms are loaded in a volume in the proximity of the fountain axis by using a repump laser beam only propagating along the fountain axis, which further increases the number of detected atoms and potentially reduces the contribution of the distributed cavity phase to the uncertainty of CSF2 [@Weyers2012b]. 
The achieved detected atom number is a factor of 40 higher than that obtained in normal operation of CSF2. The LVIS atom flux and velocity are characterized through fluorescence and absorption measurements. With the LVIS in operation and using an optically-stabilized microwave signal, the fountain frequency instability is quantum projection noise-limited and has a value of $\sigma_y\left(1\text{s}\right)=2.7 \times 10^{-14}$. With this instability, the statistical uncertainty of the fountain reaches the level of its present systematic uncertainty in $10^4$s. The authors thank B. Lipphardt, N. Huntemann and M. Okhapkin for valuable discussions, and D. Griebsch and N. Nemitz for designing the initial LVIS setup. This work was supported by the European Metrology Research Programme (EMRP) in projects SIB04 and SIB55. The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union.
--- abstract: 'Lithium-ion batteries exhibit complex nonlinear dynamics, resulting from diffusion and phase transformations coupled to ion intercalation reactions. Using the recently developed Cahn-Hilliard reaction (CHR) theory, we investigate a simple mathematical model of ion intercalation in a spherical solid nanoparticle, which predicts transitions from solid-solution radial diffusion to two-phase shrinking-core dynamics. This general approach extends previous Li-ion battery models, which either neglect phase separation or postulate a spherical shrinking-core phase boundary, by predicting phase separation only under appropriate circumstances. The effect of the applied current is captured by generalized Butler-Volmer kinetics, formulated in terms of diffusional chemical potentials, and the model consistently links the evolving concentration profile to the battery voltage. We examine sources of charge/discharge asymmetry, such as asymmetric charge transfer and surface “wetting” by ions within the solid, which can lead to three distinct phase regions. In order to solve the fourth-order nonlinear CHR initial-boundary-value problem, a control-volume discretization is developed in spherical coordinates. The basic physics are illustrated by simulating many representative cases, including a simple model of the popular cathode material, lithium iron phosphate (neglecting crystal anisotropy and coherency strain). Analytical approximations are also derived for the voltage plateau as a function of the applied current.' author: - Yi Zeng - 'Martin Z.
Bazant' bibliography: - 'elec42.bib' title: 'Phase Separation Dynamics in Isotropic Ion-Intercalation Particles' --- nonlinear dynamics, Cahn-Hilliard reaction model, Butler-Volmer kinetics, intercalation, phase separation, surface wetting, Li-ion battery, nanoparticles, lithium iron phosphate Introduction ============ The discovery of lithium iron phosphate (Li$_x$FePO$_4$, LFP) as a cathode material for lithium-ion batteries has led to unexpected breakthroughs in the mathematical theory of chemical kinetics coupled to phase transformations [@bazant2013]. Since its discovery in 1997 as a “low power material" with attractive safety and economic attributes [@padhi1997], LFP has undergone a remarkable reversal of fortune to become the cathode of choice for high-power applications [@tarascon2001; @kang2009; @tang2010], such as power tools and electric vehicles [@ritchie2006; @zackrisson2010], through advances in surface coatings and reduction to nanoparticle form. A striking feature of LFP is its strong tendency to separate into stable high density and low density phases, indicated by a wide voltage plateau at room temperature [@padhi1997; @tarascon2001] and other direct experimental evidence [@delacourt2005; @yamada2005; @delmas2008; @allen2008; @oyama2012; @chueh2013]. Similar phase-separation behavior arises in many other intercalation hosts, such as graphite, the typical lithium insertion anode material, which exhibits multiple stable phases. This has inspired new approaches to model the phase separation process coupled to electrochemistry, in order to gain a better understanding of the fundamental lithium-ion battery dynamics. The first mathematical model on two-phase intercalation dynamics in LFP was proposed by Srinivasan and Newman [@srinivasan2004], based on the concept of a spherical “shrinking core" of one phase being replaced by an outer shell of the other phase, as first suggested by Padhi et al. [@padhi1997]. 
By assuming isotropic spherical diffusion, the sharp, radial “core-shell" phase boundary can be moved in proportion to the current. This single-particle model was incorporated into traditional porous electrode theory for Li-ion batteries [@doyle1993; @newman_book] with Butler-Volmer kinetics and concentration dependent diffusivity and fitted to experiments. The shrinking-core porous-electrode model was recently extended and refitted by Dargaville and Farrell [@dargaville2010]. In recent years, the shrinking-core hypothesis has been called into question because different phase behavior has been observed experimentally [@laffont2006; @chen2006; @allen2008; @delmas2008; @chueh2013] and predicted theoretically [@bazant2013]. It has become clear that a more realistic particle model must account for two-phase thermodynamics [@han2004; @singh2008; @lai2010; @lai2011b; @zeng2013MRS], crystal anisotropy [@singh2008; @bai2011; @tang2011], coherency strain [@cogswell2012], surface energy [@cogswell2013], and reaction limitation in nanoparticles [@singh2008; @bai2011; @bai2014], and electrochemical interactions between large numbers of such particles in porous electrodes [@ferguson2012; @bai2013; @ferguson2014; @orvananos2014]. In larger, micron-sized particles, the shrinking-core model may still have some relevance due to solid diffusion limitation and defects (such as dislocations and micro cracks) that can reduce coherency strain [@singh2008; @burch2009; @dargaville2013CHR]. Moreover, diffusion becomes more isotropic in larger particles due to the increased frequency of point defects, such as channel-blocking Fe anti-site defects in LFP [@malik2010]. Regardless of the details of the model, fundamental questions remain about the dynamics of phase separation driven by electrochemical reactions, even in the simplest case of an isotropic strain-free spherical particle. When should we expect core-shell phase separation versus pure diffusion in a solid solution? 
What other transient phase morphologies are possible? How are reaction kinetics affected by phase separation? Traditional battery models, which place artificial spherical phase boundaries and assume classical Butler-Volmer kinetics, are not able to answer these questions. In this article, we formulate a simple mathematical model that captures the essential features of [*bulk*]{} phase separation coupled to Faradaic intercalation reactions in a single solid nanoparticle. The model is based on a recently developed mathematical theory of chemical reaction and charge transfer kinetics based on nonequilibrium thermodynamics [@bazant2013], which we review in Section  \[sec:back\]. In the case of an isotropic, strain-free spherical particle, the resulting Cahn-Hilliard reaction (CHR) equations are formulated for Butler-Volmer (BV) kinetics and regular solution thermodynamics in Section  \[sec:eqns\]. The model predicts smooth concentration profiles limited by radial diffusion with smooth voltage profiles versus state of charge in cases of solid-solution thermodynamics (Section  \[sec:ss\]) and radial phase separation with a flat voltage plateau in cases of two stable phases (Section  \[sec:phasesep\]), which are strongly affected by surface wetting (Section  \[sec:wet\]). After summarizing the results, in Section \[sec:num\] we present the control-volume numerical scheme for the CHR model that allows us to accurately solve this stiff fourth-order nonlinear initial-boundary-value problem. Background {#sec:back} ========== A systematic approach to describe chemical kinetics coupled to phase transformations has recently been developed by Bazant  [@bazant2013], based on nonequilibrium thermodynamics. 
The theory leads to a general reaction-diffusion equation of the form, $$\frac{\partial c_i}{\partial t} = \nabla \cdot \left( M_i c_i \nabla \frac{\delta G}{\delta c_i} \right) + R_i\left( \left\{ \frac{\delta G}{\delta c_j}\right\} \right) \label{eq:rd}$$ where $c_i$ is the concentration, $M_i$ the mobility, and $R_i$ the volumetric reaction rate of species $i$, assuming homogeneous kinetics. The diffusive flux (second term) and the reaction rate (third term) are both expressed in terms of diffusional chemical potentials, $$\mu_i = \frac{\delta G}{\delta c_i} \label{eq:mudef}$$ defined as variational derivatives of the total free energy functional $G[\{c_i\}]$. Physically, $\mu_i(x)$ is the free energy required to add a continuum “particle” (delta function) of species $i$ to the system at position $x$. For the conversion of reactants $\{ A_r\}$ to products $\{ B_p \}$, $$\sum_r s_r \mbox{A}_r \to \sum_p s_p \mbox{B}_p, \label{eq:genreact}$$ assuming thermally activated kinetics, the reaction rate has the general variational form, $$R = \frac{k_0}{\gamma_\ddag} \left[ \exp\left( \sum_r \frac{s_r}{k_BT} \frac{\delta G}{\delta c_r}\right) - \exp\left( \sum_p \frac{s_p}{k_BT} \frac{\delta G}{\delta c_p} \right)\right] \label{eq:Rphase}$$ where $\gamma_\ddag$ is the activity coefficient of the transition state and $R_i = \pm s_i R$ ($+$ for products, $-$ for reactants). A mathematical model of the general form (\[eq:rd\]) was perhaps first proposed by Hildebrand [*et al.*]{} to describe nanoscale pattern formation in catalytic surface reactions [@hildebrand1999; @hildebrand2003] and corresponds to specific models for the free energy ($G$) and the transition state ($\gamma_\ddag$). In the case of electrochemical reactions involving ions and electrons, different assumptions that also account for electrostatic energy lead to Bazant’s generalizations of the classical Butler-Volmer and Marcus theories of charge transfer for concentrated solutions and solids [@bazant2013].
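Since each exponential in Eq. (\[eq:Rphase\]) is a product of factors $e^{\mu_i/k_BT}$, defining activities by $a_i = \exp\left(\frac{1}{k_BT}\frac{\delta G}{\delta c_i}\right)$ (with reference states absorbed into the definitions of $a_i$ and $k_0$) collapses the rate into a thermodynamic mass-action form,
$$R = \frac{k_0}{\gamma_\ddag} \left( \prod_r a_r^{s_r} - \prod_p a_p^{s_p} \right),$$
which makes explicit that $R=0$ precisely when $\sum_r s_r \mu_r = \sum_p s_p \mu_p$, i.e. in chemical equilibrium.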
The variational reaction-diffusion equation (\[eq:rd\]) unifies the Cahn-Hilliard and Allen-Cahn equations from phase-field modeling in a general formulation of non-equilibrium chemical thermodynamics for reacting mixtures. These classical equations, widely used in materials science and applied mathematics [@kom], are special cases of Eq. (\[eq:rd\]) that correspond to rate limitation by diffusion, $$\frac{\partial c}{\partial t} = \nabla \cdot \left( M c \nabla \frac{\delta G}{\delta c} \right) \ \ \ \mbox{(Cahn-Hilliard)}$$ or by linear reaction kinetics for a small thermodynamic driving force, $$\frac{\partial c}{\partial t} = - k \frac{\delta G}{\delta c} \ \ \ \mbox{ (Allen-Cahn)}$$ respectively [@singh2008; @bazant2013]. The general equation (\[eq:rd\]) can be applied to many problems in chemical or electrochemical dynamics [@bazant2013]. In the case of ion intercalation in Li-ion battery nanoparticles, it has mainly been studied in two limiting cases. For reaction-limited anisotropic nanoparticles, the general theory can be reduced to the Allen-Cahn reaction (ACR) equation, $$\frac{\partial c}{\partial t} = R\left( \left\{ \frac{\delta G}{\delta c}\right\} \right) \ \ \ \mbox{(ACR)}$$ for the depth-averaged ion concentration $c(x,y)$ along the active surface where intercalation reactions occur, as shown by Bai et al. [@bai2011] and Burch [@burch_thesis], building on the seminal paper of Singh et al. [@singh2008]. The ACR model has been applied successfully to predict experimental data for LFP, using generalized Butler-Volmer kinetics and accounting for coherency strain, by Cogswell and Bazant [@cogswell2012; @cogswell2013; @bazant2013]. An important prediction of the ACR model is the dynamical suppression of phase separation at high rates [@bai2011; @cogswell2012], as it becomes favorable to spread reactions uniformly over the particle surface, rather than to focus them on a thin interface between stable phases. 
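To make the Allen-Cahn limit concrete, the following minimal sketch (illustrative parameters; regular-solution free energy with $\tilde\Omega = 3$, gradient term omitted, explicit Euler time stepping) shows a uniform composition relaxing under $\partial c/\partial t = -k\,\delta G/\delta c$:

```python
import math

# Homogeneous part of the regular-solution chemical potential (k_B*T = 1,
# c scaled by c_m); Omega_t > 2 gives two stable phases.
Omega_t = 3.0

def mu(c):
    return math.log(c / (1.0 - c)) + Omega_t * (1.0 - 2.0 * c)

# Allen-Cahn dynamics without the gradient term: dc/dt = -k * mu(c)
c, k, dt = 0.4, 1.0, 0.01
for _ in range(5000):
    c -= dt * k * mu(c)

# c has relaxed to the low-density stable phase, where mu(c) = 0
print(round(c, 3))   # 0.071
```

Starting from $c = 0.4$, the dynamics descend the free-energy landscape until $\mu(c) = 0$, selecting the low-density stable phase near $c \approx 0.07$; by the $c \to 1-c$ symmetry of the regular solution model, an initial condition above $c = 0.5$ would select the high-density phase instead.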
The ACR model has also been used to predict a similar transition in electrochemical deposition of Li$_2$O$_2$ in Li-air battery cathodes, from discrete particle growth at low currents to uniform films at high currents [@horstmann2013]. For larger particles, the Cahn-Hilliard reaction (CHR) model, $$\frac{\partial c}{\partial t} + \nabla\cdot\mathbf{F} = 0, \ \ \mathbf{F} = - M c \nabla \frac{\delta G}{\delta c}, \ \ -\hat{n}\cdot \mathbf{F} = R\left( \left\{ \frac{\delta G}{\delta c}\right\} \right) \ \ \ \mbox{(CHR)} \label{eq:chr}$$ describes bulk phase separation driven by heterogeneous reactions, which are localized on the surface and described by a flux matching boundary condition [@bazant2013]. This general model was first posed by Singh, Ceder and Bazant [@singh2008] but received less attention until recently. For Butler-Volmer kinetics, Burch and Bazant [@burch2009; @burch_thesis] and Wagemaker et al. [@wagemaker2011] solved the CHR model in one dimension to describe size-dependent miscibility in nanoparticles. Dargaville and Farrell [@dargaville2013CHR; @dargaville_thesis] first solved CHR in two dimensions (surface and bulk) for a rectangular particle using a least-squares based finite-volume method [@dargaville2013numerical] and examined the transition to ACR behavior with increasing crystal anisotropy and surface reaction limitation. They showed that phase separation tends to persist within large particles, similar to the shrinking core picture, if it is not suppressed by coherency strain and/or fast diffusion perpendicular to the most active surface. Cahn-Hilliard Reaction Model {#sec:eqns} ============================ In this work, we solve the CHR model with generalized Butler-Volmer kinetics for a spherical host particle with the intercalated ion concentration varying only in the radial direction. Spherical symmetry is also the most common approximation for solid diffusion in traditional Li-ion battery models [@doyle1993; @zeng2013numerical].
This simple one-dimensional version of the CHR model is valid for large, defective crystals with negligible coherency strain and isotropic diffusion [@singh2008; @burch_thesis; @dargaville2013CHR; @dargaville_thesis]. It may also be directly applicable to low-strain materials such as lithium titanate [@ozhuku1995], a promising long-life anode material [@yang2009]. We simulate phase separation dynamics at constant current, which sometimes, but not always, leads to shrinking-core behavior. Related phase-field models of isotropic spherical particles, including the possibility of simultaneous crystal-amorphous transitions, have also been developed and applied to LFP by Tang et al. [@tang2009; @tang2010], Meethong et al. [@meethong2007; @meethong2007a; @meethong2008], and Kao et al [@kao2010], but without making connections to charge-transfer theories from electrochemistry. Here, we focus on the electrochemical signatures of different modes of intercalation dynamics – voltage transients at constant current – which are uniquely provided by the CHR model with consistent Butler-Volmer reaction kinetics [@bazant2013]. We also consider the nucleation of phase separation by surface wetting [@bai2011], in the absence of coherency strain, which would lead to a size-dependent nucleation barrier [@cogswell2013] and symmetry-breaking striped phase patterns [@vanderven2009; @cogswell2012]. Model formulation ------------------ Consider the CHR model (\[eq:chr\]) for a spherical, isotropic, strain-free, electron-conducting particle of radius $R_p$ with a concentration profile $c(r,t)$ of intercalated ions (number/volume). As first suggested by Han et al. 
for LFP [@han2004], we assume the chemical potential of the Cahn-Hilliard regular solution model [@cahn1958; @cahn1959-1; @cahn1959-2], $$\mu = k_B T \ln \left(\frac{c}{c_m-c}\right) + \Omega \left(\frac{c_m-2c}{c_m}\right) - \frac{\kappa }{c_m^2} \nabla^2c,$$ where $k_B$ is Boltzmann’s constant, $T$ the absolute temperature, $\Omega$ the enthalpy of mixing per site, $\kappa$ the gradient energy penalty coefficient, $V_s$ the volume of each intercalation site, and $c_m=V_s^{-1}$ is the maximum ion density. Although we account for charge transfer at the surface (below), we set the bulk electrostatic energy to zero, based on the assumption that each intercalated ion diffuses as a neutral polaron, coupled to an adjacent mobile electron, e.g. reducing a metal ion such as Fe$^{3+}+e^-\to$ Fe$^{2+}$ in LFP. (For semiconducting electrodes, imbalances in ion and electron densities lead to diffuse charge governed by Poisson’s equation in the CHR model [@bazant2013].) The mobility $M$ in the flux expression (\[eq:chr\]) is related to the tracer diffusivity $D$ by the Einstein relation, $D = M k_BT$. For thermodynamic consistency with the regular solution model, the tracer diffusivity must take into account excluded sites, $$D = D_0 \left( 1 - \frac{c}{c_m}\right) = M k_BT$$ where $D_0$ is the dilute-solution limit, which leads to the “modified Cahn-Hilliard equation” [@nauman2001]. This form also follows consistently from our reaction theory, assuming that the transition state for solid diffusion excludes two sites [@bazant2013]. At the surface of the particle, $r=R_p$, the insertion current density $I(t)$ is related to the voltage $V(t)$ and surface flux density $F(R_p,t)$, where $\mathbf{F} = F \hat{r}$ is the radial flux. By charge conservation, the current is the integral of the surface flux times the charge per ion $ne$, $$\label{eqn:ChargeConservationCondition} I = - n e F(R_p,t),$$ where $e$ is the electron charge.
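The homogeneous part of this chemical potential (gradient term dropped) determines where the solid solution is linearly unstable, namely the spinodal region where $\partial\mu/\partial c < 0$. A minimal sketch in the scaled variables $\tilde c = c/c_m$, $\tilde\Omega = \Omega/k_BT$ (the value $\tilde\Omega = 4$ is an illustrative assumption, not taken from the text):

```python
import math

# Homogeneous regular-solution chemical potential, scaled by k_B*T,
# with c the filling fraction c/c_m (gradient term dropped).
Omega_t = 4.0   # illustrative: Omega > 2*k_B*T opens a miscibility gap

def dmu_dc(c):
    return 1.0 / (c * (1.0 - c)) - 2.0 * Omega_t

# Spinodal compositions: dmu/dc = 0  =>  c*(1 - c) = 1/(2*Omega)
half_width = math.sqrt(0.25 - 1.0 / (2.0 * Omega_t))
c_lo, c_hi = 0.5 - half_width, 0.5 + half_width
print(round(c_lo, 4), round(c_hi, 4))   # 0.1464 0.8536

# Inside the spinodal, dmu/dc < 0: the solid solution is unstable and the
# effective diffusivity, proportional to c*(1-c)*dmu/dc, is negative.
assert dmu_dc(0.5) < 0 and dmu_dc(0.05) > 0 and dmu_dc(0.95) > 0
```

For $\tilde\Omega \le 2$ the square root is imaginary and no spinodal exists: the particle then behaves as a solid solution at all fillings, which is the regime where smooth radial diffusion profiles are expected.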
Electrochemistry enters the model through the current-voltage relation, $I(V,c,\mu)$, which depends on $c$ and $\mu$ at the surface. Here, we adopt thermodynamically consistent, generalized Butler-Volmer kinetics for the charge-transfer rate [@bazant2013], given below in dimensionless form. We also impose the “natural" or “variational" boundary condition for the fourth-order Cahn-Hilliard equation, $$\frac{\partial c}{\partial r}(R_p,t) = \frac{c_m^2}{\kappa} \frac{\partial \gamma_s}{\partial c},$$ where $\gamma_s(c)$ is the surface energy per area, which generally depends on ion concentration. The natural boundary condition expresses continuity of the chemical potential and controls the tendency for a high or low concentration solid phase to preferentially “wet" the surface from the inside [@cahn1977; @cogswell2013]. Together with the symmetry conditions, $F(0,t)=0$ and $\frac{\partial c}{\partial r}(0,t)=0$, we have the required four boundary conditions, plus the current-voltage relation, to close the problem. Dimensionless equations ------------------------ To nondimensionalize the system, we scale the model using several basic reference quantities: the particle radius $R_p$ for length, the diffusion time $\frac{R_p^2}{D_0}$ for time, the maximum ion concentration $c_m$ for concentration, and the thermal energy $k_BT$ for energy. The dimensionless variables are summarized in Table \[table:1\].
  ------------------------------------- ---------------------------------------- --------------------------------------------------- -------------------------------------------- --------------------------------------------------------------------------------------
  $\tilde{c} = \frac{c}{c_m}$           $\tilde{t} = \frac{ D_0 }{R^2_p} t$      $\tilde{r} = \frac{r}{R_p}$                         $\tilde{\nabla} = R_p \nabla$                $\tilde{F} = \frac{R_p}{c_m D_0} F$
  $\tilde{\mu} = \frac{\mu}{k_B T}$     $\tilde{\Omega} = \frac{\Omega}{k_BT}$   $\tilde{\kappa} = \frac{\kappa }{R_p^2 c_m k_BT}$   $\tilde{I} = \frac{ R_p }{c_m ne D_0} I$     $\tilde I_0 = \frac{ R_p }{c_m ne D_0} I_0$
  $\tilde{\eta} = \frac{e}{k_B T} \eta$ $\tilde{V} = \frac{eV}{k_BT}$            $\tilde{V}^\Theta = \frac{eV^\Theta}{k_BT}$         $\tilde \gamma_s = \frac{ \gamma_s}{R_p c_m k_B T}$   $\beta = \frac{1}{\tilde{\kappa} } \frac{\partial \tilde \gamma_s}{\partial \tilde c}$
  ------------------------------------- ---------------------------------------- --------------------------------------------------- -------------------------------------------- --------------------------------------------------------------------------------------

  : Dimensionless variables in the CHR model.[]{data-label="table:1"}

With these definitions, our model takes the dimensionless form, $$\begin{aligned} \label{eqn:MassConservation} \frac{\partial \tilde c}{\partial \tilde t} = - \frac{1}{\tilde{r}^2} \frac{\partial}{\partial \tilde{r}} \left( \tilde{r}^2 \tilde{F} \right) \\ \tilde F = - (1-\tilde c) \tilde c \frac{ \partial \tilde \mu}{\partial \tilde{r}} \\ \tilde \mu = \ln \frac{\tilde c}{1- \tilde c} + \tilde \Omega (1-2\tilde c) - \tilde \kappa \tilde \nabla^2 \tilde c \\ \frac{\partial \tilde{c}}{\partial \tilde r}(0,\tilde{t}) = 0, \ \ \ \frac{\partial \tilde{c}}{\partial \tilde r}(1,\tilde{t}) = \beta \\ \tilde F(0,\tilde{t}) = 0, \ \ \ \tilde F(1,\tilde{t}) = -\tilde I.\end{aligned}$$ In order to relate the current to the battery
voltage, we assume generalized Butler-Volmer kinetics [@bazant2013], $$\begin{aligned} \tilde I &=& \tilde I_0 \left( e^{-\alpha \tilde \eta}-e^{(1-\alpha) \tilde \eta} \right) \\ \tilde{\eta} &=& \tilde \mu+ \tilde V - \tilde{V}^\Theta \\ \tilde I_0 &=& \tilde c^{\alpha}(1-\tilde c)^{1-\alpha} e^{\alpha (\tilde \Omega (1-2\tilde c) - \tilde \kappa \tilde\nabla^2 \tilde c) }= (1-\tilde c) e^{ \alpha \tilde \mu}\end{aligned}$$ where $\tilde I$ is the insertion current density (per area), $\tilde I_0$ the exchange current density, $\alpha$ the charge transfer coefficient, $\tilde\eta$ the surface or activation overpotential, $\tilde V$ the battery voltage, and $\tilde{V}^\Theta$ the reference voltage for a given anode (e.g. Li metal) when the particle is homogeneous at $\tilde{c}=\frac{1}{2}$. The derivation of this rate formula assumes that the transition state for charge transfer excludes one surface site, has no enthalpic excess energy, and has an electrostatic energy $(1-\alpha)$ times that of the electron plus the ion in the electrolyte. It is common to assume $\alpha=\frac{1}{2}$, but we will relax this assumption below. In equilibrium ($\tilde I = 0$), the interfacial voltage is determined by the Nernst equation, $\Delta\tilde{V}_{eq} = -\tilde{\mu}$. Out of equilibrium, the overpotential $\tilde\eta$ is determined by solving for the transient concentration profile. Governing parameters -------------------- Dimensionless groups are widely used in fluid mechanics to characterize dynamical regimes [@barenblatt_book], and recently the same principles have been applied to intercalation dynamics in Li-ion batteries [@singh2008; @ferguson2012]. The CHR model is governed by four dimensionless groups, $\tilde{\Omega}$, $\tilde{\kappa}$, $\beta$ and $\tilde{I}$ (or $\tilde{V}$), with the following physical interpretations. The ratio of the regular solution parameter (enthalpy of mixing) to the thermal energy can be positive or negative, but in the former case (attractive forces) it can be interpreted as $$\tilde \Omega= \frac{\Omega}{k_BT} = \frac{2 T_c}{T},$$ i.e.
twice the ratio of the critical temperature $T_c=\frac{\Omega}{2k_B}$, below which phase separation is favored, to the temperature $T$. Below the critical point, $T<T_c$ (or $\tilde \Omega > 2$), the thickness and interfacial tension of the diffuse phase boundary scale as $\lambda_b=\sqrt{\kappa / (c_m \Omega)}$ and $\gamma_b=\sqrt{\kappa \Omega c_m}$, respectively [@cahn1958], so the dimensionless gradient penalty $$\tilde{\kappa} = \frac{\kappa }{c_m k_BT R_p^2} = \tilde \Omega \left( \frac{\lambda_b}{R_p} \right)^2 \ll 1$$ equals $\tilde\Omega$ times the squared ratio of the interfacial width (between the high- and low-density stable phases) to the particle radius, which is typically small. The parameter $\beta$ is the dimensionless concentration gradient at the particle surface, $\beta = \frac{1}{\tilde{\kappa} } \frac{\partial \tilde \gamma_s}{\partial \tilde c}$, which we set to a constant, assuming that the surface tension $\gamma_s(c)$ is a linear function of composition. Letting $\Delta \gamma_s = \frac{\partial \gamma_s}{\partial \tilde{c}}$ be the difference in surface tension between the high-density $(\tilde{c}\approx 1)$ and low-density $(\tilde{c}\approx 0)$ phases, $$\beta= \frac{R_p}{\lambda_b} \frac{\Delta \gamma_s}{\gamma_b},$$ we can interpret $\beta$ as the ratio of the particle size to the phase boundary thickness, times the surface-to-bulk phase boundary tension ratio, $\frac{\Delta \gamma_s}{\gamma_b}$. In cases of partial “wetting" of the surface by the two solid phases, this ratio is related to the equilibrium contact angle $\theta$ by Young’s Law, $$\cos\theta = \frac{\Delta \gamma_s}{\gamma_b}.$$ Partial wetting may occur in the absence of elastic strain (as we assume below), but complete wetting by the lower-surface-energy phase is typically favored for coherent phase separation because $\gamma_b \ll |\Delta \gamma_s|$ [@cogswell2013]. In any case, for thin phase boundaries, we typically have $\beta \gg 1$.
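As a concrete check of these magnitudes, the LFP parameter values listed later in Table \[table:2\] give $\tilde\Omega \approx 4.5$ and $\tilde\kappa \sim 10^{-3}$ for a 100 nm particle. A quick evaluation (room-temperature $k_BT \approx 0.0257$ eV assumed):

```python
import math

# Values from Table 2 (LFP); k_B*T in eV at room temperature (assumed ~298 K).
kBT = 0.0257      # eV
Omega = 0.115     # eV, regular solution parameter
kappa = 3.13e9    # eV/m, gradient energy penalty
c_m = 1.379e28    # 1/m^3, maximum ion density
R_p = 1e-7        # m, particle radius

Omega_t = Omega / kBT                     # ~4.5 > 2, so phase separating
kappa_t = kappa / (c_m * kBT * R_p**2)    # dimensionless gradient penalty
lam_b = math.sqrt(kappa / (c_m * Omega))  # phase boundary width, in m

# Consistency check: kappa_t = Omega_t * (lam_b / R_p)**2.
print(Omega_t, kappa_t, lam_b)
```

The resulting phase boundary width is of order a nanometer, so $\tilde\kappa \ll 1$ as stated above.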
Finally, the current density is scaled to the diffusion current, $$\tilde{I} = \frac{I }{3ne c_m V / (\tau_D A)}= \frac{R_p}{ne c_m D_0} I,$$ where $V = \frac{4}{3} \pi R_p^3$ is the volume of the sphere, $ne c_m V$ is the maximum charge that can be stored in the sphere, $A= 4\pi R_p^2$ is the surface area and $\tau_D = R_p^2/D_0$ is the diffusion time into the particle. Thus $\tilde{I} = 1$ corresponds to a current density that fully charges the particle from empty in one third of the diffusion time $\tau_D$. The exchange current has the same scaling. Rate limitation by surface reactions or by bulk diffusion corresponds to the limits $\tilde{I}_0 \ll 1$ or $\tilde{I}_0 \gg 1$, respectively, so this parameter behaves like a Damköhler number [@singh2008; @ferguson2012]. Simulation details ------------------- For a given dynamical situation, either the current or the voltage is controlled, and the other quantity is predicted by the model. Here we consider the typical situation of “galvanostatic" discharge/charge cycles at constant current, so the model predicts the voltage $V$, which has the dimensionless form, $\tilde{V} = \frac{eV}{k_BT}$. The electrochemical response is typically plotted as voltage versus state of charge, or mean filling fraction, $$X =\frac{ \int c \, dV }{\frac{4}{3} \pi R_p^3 c_{m}}.$$ The reference scale for all potentials is the thermal voltage, $\frac{k_BT}{e}$, equal to about 26 mV at room temperature.
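The dimensionless current also maps directly onto practical rates: a particle filled at constant current in time $t_{\rm fill}$ carries $\tilde I = \tau_D/(3\,t_{\rm fill})$. A sketch of the conversion (the diffusivity $D_0$ below is an assumed placeholder, since its fitted value is material-specific):

```python
# Convert a rate of "full fill in n hours" into the dimensionless current.
# D0 is an assumed placeholder value here, not a fitted material parameter.
R_p = 1e-7     # m, particle radius
D0 = 1e-16     # m^2/s, dilute-solution diffusivity (assumed)

def I_tilde(n_hours):
    """Dimensionless current for filling in n hours: I_tilde = tau_D/(3*t_fill)."""
    tau_D = R_p**2 / D0        # diffusion time, s
    t_fill = 3600.0 * n_hours  # fill time, s
    return tau_D / (3.0 * t_fill)

# For these values tau_D = 100 s, so even a one-hour fill gives I_tilde << 1,
# i.e. the particle is far from bulk diffusion limitation.
print(I_tilde(10.0), I_tilde(1.0))
```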
  Parameter    Value                    Unit          Parameter   Value                     Unit
  ------------ ------------------------ ------------- ----------- ------------------------- ----------
  $R_p$        $1 \times 10^{-7}$       m             $\Omega$    $0.115$                   eV
  $\kappa$     $3.13 \times 10^{9}$     eV/m          $D_0$                                 m$^2$/s
  $c(r,0)$     $10$                     mol/m$^3$     $c_{m}$     $1.379 \times 10^{28}$    m$^{-3}$
  $n$          $1$                      -             $\alpha$    $0.5$                     -
  $V^\Theta$   $3.42$                   V             $I_0$                                 A/m$^2$

  : Parameter settings for LFP [@cogswell2012; @cogswell2013] used in the numerical simulations, except as otherwise noted.[]{data-label="table:2"}

In the following sections, we perform numerical simulations for the parameter settings in Table \[table:2\], which have been fitted to experimental and [*ab initio*]{} computational results for LFP [@malik2010; @bai2011; @cogswell2012; @bai2014], but we vary $\tilde \Omega$ to obtain different dynamical behaviors, which may represent other Li-ion battery materials, allowing us to focus on the novel coupling of reaction kinetics with phase separation [@bazant2013]. In this exercise, we initially neglect surface wetting (by setting $\beta=0$) and coherency strain, both of which are important for an accurate description of LFP [@cogswell2012; @cogswell2013]. In later sections, we also consider $\beta > 0$ and $\alpha \neq \frac{1}{2}$ for the more interesting cases of phase separation ($\tilde\Omega > 2$). We employ a control volume method (described below) for the spatial discretization of the system and the ode15s solver in MATLAB for the time integration. Consistent with common usage, we report the total current in terms of the “C-rate", C/$n$, which means full charge or discharge (i.e. emptying or filling) of the particle in $n$ hours; for example, “C/10" and “10C" mean full discharge in 10 hours or 6 minutes, respectively. Solid Solution {#sec:ss} ============== Our model predicts simple diffusive dynamics with slowly varying concentration and voltage transients under “solid solution" conditions, where configurational entropy promotes strong mixing.
The regular solution model predicts that bulk solid solution behavior occurs at all temperatures if there are repulsive forces between intercalated ions, $\Omega < 0$, or above the critical temperature, $T > T_c$, for attractive ion-ion forces, $\Omega > 0$. Here, we consider finite-sized particles and examine current-voltage transients in both of these cases of solid-solution thermodynamics. Repulsive forces ---------------- A negative enthalpy of mixing, $\Omega < 0$, reflects mean-field attraction between ions and vacancies, or equivalently, repulsion between intercalated ions, which promotes homogeneous intercalation. Consider galvanostatic (constant current) charge and discharge cycles with $\Omega = -0.0514$ eV, or $\tilde \Omega=-2$. When the current is small, $\tilde{I}\ll 1$, diffusion is fast, and the ions remain uniformly distributed inside the particle during intercalation dynamics, as shown in Fig. \[fig:NegativeOmega\]. At high currents, $\tilde{I}\gg 1$ (not considered here), diffusion becomes rate limiting, and concentration gradients form, as in prior models of spherical nonlinear diffusion [@doyle1993; @srinivasan2004; @zeng2013numerical].
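In this uniform-composition regime, the voltage transient follows by inverting the generalized Butler-Volmer relation for the overpotential at each instant; for $\alpha = \tfrac12$ the inverse has the closed form $\tilde\eta = -2\sinh^{-1}(\tilde I/2\tilde I_0)$, while general $\alpha$ requires a root finder. A sketch of the inversion (the bisection bracket is an ad hoc illustrative choice):

```python
import math

def bv_current(eta, I0, alpha=0.5):
    """Generalized Butler-Volmer rate: I = I0*(exp(-a*eta) - exp((1-a)*eta))."""
    return I0 * (math.exp(-alpha * eta) - math.exp((1.0 - alpha) * eta))

def overpotential(I, I0, alpha=0.5, lo=-40.0, hi=40.0):
    """Solve bv_current(eta) = I for eta (bv_current decreases in eta)."""
    if alpha == 0.5:  # closed-form inverse in the symmetric case
        return -2.0 * math.asinh(I / (2.0 * I0))
    for _ in range(200):  # bisection on the ad hoc bracket [lo, hi]
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if bv_current(mid, I0, alpha) < I else (mid, hi)
    return 0.5 * (lo + hi)

# Discharge (I > 0) requires eta < 0, which lowers the cell voltage.
print(overpotential(0.1, 0.5), overpotential(0.1, 0.5, alpha=0.3))
```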
![ Constant current cycling of a spherical intercalation particle, composed of a solid solution of lithium ions with repulsive forces ($\tilde \Omega = -2$). Left: profiles of dimensionless concentration $\tilde{c}(\tilde{r})$ (local filling fraction) at different mean compositions (average filling fraction, $X$). The vertical dimension in the plots shows the concentration, while the horizontal circle denotes the particle cross section at the equator of the sphere. Right: voltage versus state of charge (filling fraction) at different currents. []{data-label="fig:NegativeOmega"}](RepulsiveConcentration.pdf "fig:") ![](MultiCurrentsNegativeOmegaBoth.pdf "fig:")

Given the Butler-Volmer symmetry factor, $\alpha = 0.5$, and assuming uniform composition, the total voltage drop between the anode and the particle surface is given by $$\tilde{V} = \tilde{V}^{\Theta} -\tilde \mu(\tilde c) - 2 \sinh^{-1} \left(\frac{\tilde I}{2 \tilde I_0(\tilde c)}\right), \label{eq:V1}$$ where $V$ is the battery
voltage, $V^{\Theta}$ is the constant reference voltage for a given anode, and $\tilde I_0(\tilde c)$ is the exchange current density at the given surface concentration. The simulated discharge curves in Fig. \[fig:NegativeOmega\] fit this expression well and exhibit no voltage plateau (a signature of phase separation discussed below). The model exhibits a positive internal resistance, since the battery voltage decreases for $I>0$ (discharging) and increases for $I<0$ (charging). According to Eq. (\[eq:V1\]), the voltage increment, or overpotential, has two sources: concentration changes at the surface that shift the Nernst equilibrium interfacial voltage (second term, concentration overpotential) and Butler-Volmer charge-transfer resistance (third term, activation overpotential). Weak attractive forces or high temperature ------------------------------------------ When the mixing enthalpy per site $\Omega$ is positive, there is an effective repulsion between ions and vacancies, or equivalently, an attraction between ions that promotes phase separation into Li-rich and Li-poor phases. This tendency is counteracted by configurational entropy, which always promotes the mixing of ions and vacancies and leads to homogeneous solid solution behavior at high temperature $T$. Below the critical temperature, $T< T_c=\frac{\Omega}{2 k_B}$, attractive forces overcome configurational entropy, leading to stable bulk phase separation. For $T>T_c$, the numerical results are consistent with solid solution behavior. For example, we use the same parameters as in Table \[table:2\], except that $\Omega=2.57\times10^{-2}$ eV, or $\tilde \Omega =1$, so the absolute temperature is twice the critical value, $T/T_c=2$. As shown in Fig. \[fig:MultiCurrentsHighTemp\], the voltage varies less strongly with filling fraction, in a way that resembles previous empirical fits of the flat voltage plateau (below) signifying phase separation.
There is no phase separation, however, and the concentration profile (not shown) is very similar to the case of repulsive interactions in Fig. \[fig:NegativeOmega\]. ![Cycling of a high temperature solid solution with attractive forces ($\tilde \Omega = 1$), with other parameters from Fig. \[fig:NegativeOmega\].[]{data-label="fig:MultiCurrentsHighTemp"}](MultiCurrentsHighTempBoth.pdf) Capacity --------- When the particle is charged or discharged at a high rate, the total capacity, defined as the $X$ reached when the voltage drops below some threshold on discharge, will be significantly reduced. In a simple spherical diffusion model, by the scaling of Sand’s time, $t_s \sim \frac{1}{I^2}$ [@bard_book; @10.626], and charge conservation, the total capacity $C$ scales as $C = I t_s \sim {I}^{-1}$. In our CHR model, we observe a different scaling of the capacity in the numerical simulations. In a simple power law expression, $C \sim I^{\gamma}$, the exponent $\gamma$ depends on other model parameters, such as the wetting parameter $\beta$, the gradient penalty constant $\kappa$, and the regular solution parameter $\Omega$. A sample of the scaling dependence on current for different values of $\kappa$ is shown in Fig. \[fig:capacity\], where $\gamma \approx 0.5$. ![Capacity $C$ versus current for different values of the gradient penalty constant in a solid solution ($\tilde \Omega=\beta=0$). []{data-label="fig:capacity"}](capacity.pdf) Phase Separation {#sec:phasesep} ================ In some materials, such as LFP, the attractive forces between intercalated ions are strong enough to drive phase separation into Li-rich and Li-poor solid phases at room temperature, for $T< T_c$, or $\tilde \Omega >2$ in the regular solution model. Phase separation occurs because the homogeneous chemical potential is no longer a monotonic function of concentration. This has a profound effect on battery modeling that is predicted from first principles by the CHR model.
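The compositions where homogeneity is lost can be computed explicitly from the regular solution model: the spinodal points solve $d\tilde\mu/d\tilde c = 0$, giving $\tilde c_\pm = \tfrac12\left(1 \pm \sqrt{1 - 2/\tilde\Omega}\right)$, and the stable phase compositions approximately solve $\tilde\mu = 0$. A short illustrative script for $\tilde\Omega = 4.48$:

```python
import math

Omega = 4.48  # dimensionless mixing enthalpy, > 2 (phase separating)

def mu_h(c):
    """Homogeneous dimensionless chemical potential."""
    return math.log(c / (1.0 - c)) + Omega * (1.0 - 2.0 * c)

# Spinodal points: d(mu)/dc = 1/(c*(1-c)) - 2*Omega = 0.
s = math.sqrt(1.0 - 2.0 / Omega)
c_minus, c_plus = 0.5 * (1.0 - s), 0.5 * (1.0 + s)

# Li-rich stable composition: bisection for mu_h(c) = 0 on (c_plus, 1).
lo, hi = c_plus, 1.0 - 1e-12
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mu_h(mid) < 0 else (lo, mid)
c_l = 0.5 * (lo + hi)

print(c_minus, c_plus, c_l)  # roughly 0.13, 0.87, 0.99
```

The Li-poor stable composition follows by symmetry, $1 - \tilde c_l$.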
Strong attractive forces or low temperature -------------------------------------------- In order to simulate a representative model, we again use the parameters in Table \[table:2\], but set $\Omega = 1.15 \times 10^{-1}$ eV, or $\tilde \Omega = 4.48>2$, which is a realistic value of the enthalpy per site for LFP [@cogswell2012]. In contrast to the uniform filling behavior in Fig. \[fig:NegativeOmega\], phase separation occurs suddenly when the composition passes the linearly unstable spinodal points. The concentration profiles develop sharp boundaries between regions of uniform composition corresponding to the two stable phases, as shown in Fig. \[fig:concentrationPhaseSeparation\]. The new phase appears at the surface and propagates inward, as shown in Fig. \[fig:NoWettingConcentration\], once the surface concentration enters the unstable region of the phase diagram. ![Dynamics of phase separation during ion intercalation ($\tilde \Omega=4.48$). Concentration distributions within the spherical particle are shown at different currents. The x-axis represents the nondimensional radial position $\tilde{r}$, and the y-axis represents the overall average filling fraction $X$ of the whole particle, which can also be seen as a time axis. The warmer color in the figure indicates a higher local filling fraction.[]{data-label="fig:concentrationPhaseSeparation"}](ConcentrationPhaseSeparation.pdf) After phase separation occurs, the CHR model for an isotropic spherical particle predicts two-phase dynamics similar to those of the shrinking core model, but without imposing a sharp phase boundary. Instead, the diffuse phase boundary emerges from an initially single-phase solid solution at just the right moment, determined by thermodynamic principles, and there is no need to solve a moving boundary problem for a sharp interface, which simplifies the numerics.
![ Shrinking core dynamics of phase separation in an isotropic spherical particle ($\tilde \Omega = 4.48$, no surface wetting). The vertical dimension in the plots shows the concentration, while the horizontal circle denotes the particle cross section at the equator of the sphere; $X$ is the overall filling fraction of lithium ions.[]{data-label="fig:NoWettingConcentration"}](NoWettingConcentration.pdf) The CHR model also predicts the subtle electrochemical signatures of phase separation dynamics [@bazant2013]. Without any empirical fitting, phase separation naturally leads to a flat voltage plateau, as shown in Fig. \[fig:MultiCurrentsNoWetting\]. The constant-voltage plateau reflects the constant chemical potential of ion intercalation at a moving phase boundary (in the absence of coherency strain, which tilts the plateau [@cogswell2012]). At high currents, the initial charge transfer resistance, or activation overpotential, is larger, as signified by the jump to the plateau voltage (derived below), and over time, solid diffusion limitation, or concentration overpotential, causes the voltage to fall more rapidly during discharging, or rise more rapidly during charging. ![Voltage versus filling fraction at different C-rates for a phase separating particle ($\tilde \Omega=4.48$). []{data-label="fig:MultiCurrentsNoWetting"}](MultiCurrentsBoth.pdf) Voltage Plateau Estimation -------------------------- As we see from Figs. \[fig:concentrationPhaseSeparation\]-\[fig:MultiCurrentsNoWetting\], our model system always undergoes phase separation, which leads to a voltage plateau. In the case without surface wetting, i.e. $\beta = 0$, we can derive an accurate approximation of the voltage plateau, since the concentration within each phase is relatively uniform, especially when the current is not very large.
Therefore, we may ignore the gradient penalty term $\kappa \nabla^2 c$, leaving only the homogeneous chemical potential, $$\tilde \mu \approx \ln\frac{\tilde c}{1 - \tilde c} + \tilde \Omega (1 - 2 \tilde c).$$ The stable compositions of the two phases approximately solve $\tilde \mu = 0$, where the homogeneous free energy attains its minima. During ion insertion, the surface concentration is approximately the larger solution, $\tilde c_l$, of this equation. In the case $I>0$, the plateau voltage is given by $$V \approx V^{\Theta} -\frac{2k_BT}{e}\sinh^{-1}\left(\frac{\hat{I}}{4(1-\tilde c_l)}\right), \label{eq:Va}$$ where $\hat{I}= \frac{\tilde I}{ \tilde {I}_0(\tilde c = \frac{1}{2})}$ is the ratio of the applied current to the exchange current at half filling. At low currents, the agreement between this analytical approximation and the numerically determined voltage plateau is excellent, as shown in Fig. \[fig:PlateauPrediction\]. ![ Comparison of the simulated voltage plateau from Fig. \[fig:MultiCurrentsNoWetting\] (solid curves) and the analytical approximation of Eq. (\[eq:Va\]) (dashed curves) for $I > 0$. []{data-label="fig:PlateauPrediction"}](PlateauPrediction.pdf) The voltage profile can be understood physically as follows. As a result of our assumption of spherical symmetry, the intercalation reaction must proceed through the outer “shell" phase. In the case of lithiation, the shell has a high concentration, and the strong entropic constraints inhibiting further insertion lower the reaction rate, increase the overpotential, and lower the voltage plateau when phase separation occurs. Butler-Volmer Transfer Coefficient ---------------------------------- In the preceding examples, we set the Butler-Volmer transfer coefficient to $\alpha=0.5$, as in prior work with both CHR [@bai2011; @cogswell2012] and diffusive [@doyle1993; @srinivasan2004] models.
This choice can be justified by the Marcus theory of charge transfer when the reorganization energy is much larger than the thermal energy [@bazant2013; @bard_book], but in battery materials this may not always be the case. In our isotropic model, charge-transfer asymmetry ($\alpha \neq 0.5$) mainly manifests itself as a strongly broken symmetry between charge and discharge in the activation overpotential, as shown in the voltage plots of Fig. \[fig:MultiAlpha\]. A smaller value of $\alpha$ leads to a lower voltage plateau while discharging ($I>0$), but does not much affect the voltage plateau during charging ($I<0$). Phase Separation with Surface Wetting {#sec:wet} ===================================== The wetting of a solid surface by two immiscible fluids, such as water and air, is very familiar, but it is not widely appreciated that analogous phenomena also occur when binary solids “wet" a fluid or solid surface, and they play a major role in nanoparticle intercalation [@cogswell2013]. The only major difference is that coherent (defect-free) solid-solid interfaces have much lower tension than solid-fluid interfaces due to stretched, rather than broken, bonds. As a result, a stable contact angle cannot form, and one phase tends to fully wet each surface in equilibrium ($\theta=0,\pi$), regardless of the bulk composition. The competition between different phases to wet a surface can promote the nucleation of a phase transformation via the instability of a surface wetting layer. In particular, the wetting of certain crystal facets of LFP particles by either LiFePO$_4$ or FePO$_4$ ensures the existence of surface layers that can become unstable and propagate into the bulk, as a means of surface-assisted nucleation [@cogswell2013].
Shrinking cores and expanding shells ------------------------------------- In this section, we show that surface wetting characteristics have a significant effect on the concentration profile and voltage during insertion, even in an isotropic spherical particle. Mathematically, we impose the inhomogeneous Neumann boundary condition, $\frac{\partial \tilde{c}}{\partial \tilde r}(1,\tilde{t}) = \beta$, where, as described above, $\beta > 0$ promotes the accumulation of ions at the surface, or wetting by the high density phase. In this case, during ion insertion, the surface concentration will always be higher than in the remaining bulk region, if we start from a uniform low concentration. As a result, the surface reaches the spinodal point earlier than anywhere else inside the particle, which means the Li-rich phase always nucleates at the surface. In an isotropic particle, this leads to the shrinking core phenomenon, as in the cases without surface wetting ($\beta=0$) described above. The case of surface de-wetting ($\beta<0$) is interesting because surface nucleation is suppressed, and more than two phase regions can appear inside the particle. During insertion, the surface concentration is now always lower than in the interior, especially when the current is small. Therefore, an interior point reaches the spinodal concentration earlier than the surface, so the high-density phase effectively nucleates somewhere in the bulk, away from the surface. ![ Phase boundary motion during ion insertion in a spherical particle with surface de-wetting ($\beta = -17.9$, $\tilde\Omega=4.48$) at different currents. The warmer color in the figure indicates a higher local filling fraction.[]{data-label="fig:ConcentrationDewetting"}](ConcentrationDewetting.pdf) As a result, there is an “expanding shell" at the same time as a shrinking core of the low density phase. This unusual behavior is shown in Fig. \[fig:ConcentrationDewetting\] for $\beta= -17.9$ at several currents.
The surface energy is $\gamma_s = -90$ mJ/m$^2$ at maximum filling, if we assume that $\gamma_s$ is a linear function of concentration. The concentration dynamics are shown in more detail in Fig. \[fig:Dewetting\]. The middle Li-rich region expands inward and outward simultaneously: it first consumes the Li-poor phase located at the center, and finally fills the whole particle. ![ Concentration profiles (left) and voltage transients (right) for ion insertion at different currents in a phase separating spherical particle ($\tilde \Omega = 4.48$, surface de-wetting $\beta= -17.9$).[]{data-label="fig:Dewetting"}](DewettingConcentration.pdf "fig:") ![](MultiCurrentsDewettingBoth.pdf "fig:")
Since the surface remains at the lower stable concentration after the initial phase separation, which does not vary with the surface derivative $\beta$, we expect the voltage to have a very weak dependence on the surface de-wetting condition. The voltage versus filling fraction plot in Fig. \[fig:MultiDerivatives\] confirms this intuition. When $I<0$, strong surface de-wetting makes the surface concentration very close to zero, which makes the chemical potential extremely sensitive to small perturbations in concentration; therefore, we only show results with relatively weak surface de-wetting ($\beta \geq -10$). Voltage efficiency ------------------ In the limit of zero current at a given filling, the voltage given by the Nernst equation has a unique value $V(X)$ corresponding to thermodynamic equilibrium. When a current is applied, energy is lost as heat due to various resistances in the cell, and there is a voltage gap $\Delta V$ between charge and discharge at the same filling. The voltage efficiency is $1 - \Delta V / V_0$. To account for transient effects, we define the voltage gap for a given current magnitude $|I|$ as the voltage at half filling ($X=0.5$) during galvanostatic charging starting from nearly full with $I<0$, minus that during discharging starting from nearly empty with $I>0$. In Fig. \[fig:VoltageGap\], we show how different parameters, such as the current, mixing enthalpy, and surface wetting condition, affect the voltage gap.
For our single particle model with surface nucleation, the voltage gap vanishes at zero current, in contrast to experiments [@dreyer2010] and simulations [@dreyer2011; @ferguson2012; @ferguson2014; @orvananos2014] with porous multi-particle electrodes. There is no contradiction, however, because the zero-current voltage gap is an emergent property of a collection of particles with two stable states, resulting from the mosaic instability of discrete transformations (which can also be seen in an array of balloons [@dreyer2011]). ![The gap of the charging and the discharging voltage when the particle is half filled, $X = 0.5$, under several conditions including current, $\tilde \Omega$ and surface wetting. The $\beta$ shown in the legend is the nondimensional concentration derivative at the particle surface, which denotes the surface wetting condition.[]{data-label="fig:VoltageGap"}](VoltageGap.pdf) In the case without surface wetting, the voltage gap is smaller for solid solutions ($\tilde \Omega < 2$) than for phase separating systems ($\tilde \Omega > 2$), since it is more difficult to insert ions into the stable state than into an intermediate concentration. With strong surface de-wetting by the ions ($\beta < 0$) and phase separation ($\tilde \Omega > 2$), however, the gap can be even smaller than in the solid solution case without surface wetting, because the persistence of the low density phase promotes easy intercalation. This is an important observation because it shows the possibility of improving the voltage efficiency by engineering the solid-solid contact angle of the active particles. Numerical Methods and Error Convergence {#sec:num} ======================================= The CHR model is fourth-order in space and highly nonlinear and thus requires care to solve numerically with accuracy and efficiency. Naive finite difference or finite volume methods would be unstable or inaccurate.
In order to obtain the solutions above, we developed a new conservative numerical scheme to solve the CHR model with second-order accurate discretization, described in this section. Numerical Scheme ---------------- Great effort has been devoted to solving the Cahn-Hilliard equation numerically with different boundary conditions, and several numerical schemes have been employed, e.g. finite difference [@choo1998; @de2005; @shin2011], finite element [@Banas2008; @zhang2010; @wodo2011], spectral method [@he2009], boundary integral [@dehghan2009], level set [@greer2006], discontinuous Galerkin [@xia2007] and multi-grid methods [@kim2004; @wise2007]. As our problem involves a flux boundary condition, the finite volume method is a more convenient and suitable choice for discretization [@burch_thesis; @cueto2008; @dargaville2013numerical]. Furthermore, the finite volume method may be superior to other methods thanks to its perfect mass conservation and its capability of capturing the concentration shock during phase separation. The finite volume method handles the integral form of the Cahn-Hilliard equation. Using the divergence theorem, we may update the change of average concentration within a small volume by calculating the net flux over the corresponding volume boundary. In the recent literature, two basic approaches for estimating the concentrations and their derivatives at the boundary have been developed. Burch [@burch_thesis] uses a finite-difference-type technique to extrapolate the desired unknown values from the known average concentration in each control volume. This approximation method is highly efficient in low dimensional cases with a well-structured grid. Cueto-Felgueroso and Peraire [@cueto2008], Dargaville and Farrell [@dargaville2013numerical] develop a different least-squares-based technique, which is more suitable for high dimensional cases with unstructured meshes.
They use the concentrations and their partial derivatives on the control volume boundaries to predict the centroid concentrations nearby, and find the “most probable" boundary values (concentrations and derivatives) by least-squares minimization of the prediction errors in the centroid concentrations. It may take additional computation cost to extrapolate the surface condition, and this will introduce additional error as well. In order to avoid such extrapolation, we propose a numerical scheme that immediately provides information on the particle surface and still keeps the benefits of the finite volume method in conservation and shock tolerance, inspired by our numerical method for solving the 1D nonlinear spherical diffusion problem [@zeng2013numerical]. Similar to the finite volume method, our numerical scheme handles the integral form of the original PDE system. We work with dimensionless variables, but drop the tilde accents for ease of notation. Since the phase boundary may propagate to any location in the sphere, a non-uniform mesh may not be as helpful as in the usual nonlinear diffusion problem, so we use uniform grids. Consider an $N$-point uniform mesh within the sphere, $r_1$, $r_2$, $r_3$, $\cdots$, $r_N$, where $r_1=0$ is the sphere center and $r_N$ is right on the surface. Here we define $\Delta r = r_{j+1}-r_j$, for any $j \in \{ 1, 2, \cdots, N-1\}$, and let $c_1$, $c_2$, $c_3$, $\cdots$, $c_N$ be the concentrations at these grid points. If we integrate the Eqn.
\[eqn:MassConservation\] over a shell centered at a non-boundary grid point $r_i$ with width $\Delta r$, which is equivalent to the volume $V_i$ between $[r_i-\frac{\Delta r}{2},r_i+\frac{\Delta r}{2} ]$, by the divergence theorem we have, $$\int_{V_i}\frac{\partial c}{\partial t} dV = -\int_{V_i} \nabla \cdot F dV=-\int_{\partial V_i} n \cdot F dS.$$ We can further write both sides of the above equation in the following form, $$\label{eqn:IntigratedForm} \int_{r_i-\frac{\Delta r}{2}}^{r_i+\frac{\Delta r}{2}} 4 \pi r^2 \frac{\partial c}{\partial t} dr = 4 \pi ( (r_i-\frac{\Delta r}{2})^2 F_{i-\frac{1}{2}}- (r_i+\frac{\Delta r}{2})^2 F_{i+\frac{1}{2}}),$$ where $F_{i-\frac{1}{2}} = F \Big |_{r_i-\frac{\Delta r}{2}}$ and $F_{i+\frac{1}{2}} = F \Big |_{r_i+\frac{\Delta r}{2}}$. The left hand side of the above Eqn. \[eqn:IntigratedForm\] can be approximated by, $$\int_{r_i-\frac{\Delta r}{2}}^{r_i+\frac{\Delta r}{2}} 4 \pi r^2 \frac{\partial c}{\partial t} dr = \frac{\partial }{\partial t} (\frac{1}{8}V_{i-1} c_{i-1} + \frac{3}{4} V_{i}c_i + \frac{1}{8}V_{i+1} c_{i+1}) + O(\Delta r^3).$$ This can also be written in matrix form, with one row for each small volume, $$\left( \begin{array}{c} \int_{r_1}^{r_1+\frac{\Delta r}{2}} 4 \pi r^2 \frac{\partial c}{\partial t} dr \\ \int_{r_2-\frac{\Delta r}{2}}^{r_2+\frac{\Delta r}{2}} 4 \pi r^2 \frac{\partial c}{\partial t} dr\\ \int_{r_3-\frac{\Delta r}{2}}^{r_3+\frac{\Delta r}{2}} 4 \pi r^2 \frac{\partial c}{\partial t} dr\\ \vdots\\ \int_{r_{N-1}-\frac{\Delta r}{2}}^{r_{N-1}+\frac{\Delta r}{2}} 4 \pi r^2 \frac{\partial c}{\partial t} dr\\ \int_{r_N-\frac{\Delta r}{2}}^{r_N} 4 \pi r^2 \frac{\partial c}{\partial t} dr \end{array} \right) \approx \textbf{M} \frac{\partial }{\partial t} \left( \begin{array}{c} c_1\\ c_2\\ c_3\\ \vdots\\ c_{N-1}\\ c_N
\end{array} \right),$$ where $\textbf{M}$ is the mass matrix, $$\textbf{M}= \left( \begin{array}{cccccccc} \frac{3}{4} V_1 & \frac{1}{8} V_2 & 0 & 0 & \cdots & 0 & 0 & 0\\ \frac{1}{4} V_1 & \frac{3}{4} V_2 & \frac{1}{8} V_3 & 0 & \cdots & 0 & 0 & 0\\ 0 & \frac{1}{8} V_2 & \frac{3}{4} V_3 & \frac{1}{8} V_4 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & 0 & \cdots & \frac{1}{8} V_{N-2} & \frac{3}{4} V_{N-1} & \frac{1}{4} V_N\\ 0 & 0 & 0 & 0 & \cdots & 0 & \frac{1}{8} V_{N-1} & \frac{3}{4} V_N\\ \end{array} \right).$$ In fact, this is the major difference between our method and the classical finite volume method. Instead of the diagonal mass matrix of the finite volume method, we hereby use a tri-diagonal mass matrix in our new numerical scheme. Since each column of this matrix sums to the volume of the corresponding shell, our method must conserve mass with the correct volume. Before we approximate the flux $F$, we give the approximation formula for the chemical potential $\mu_i$ at each grid point $r_i$. For $i = 2$, $3$, $\cdots$, $N-1$, $$\begin{aligned} \begin{split} \mu_i = \ln \frac{c_i}{1 - c_i} + \Omega (1-2c_i) - \kappa \nabla^2 c_i = \ln \frac{c_i}{1 - c_i} + \Omega (1-2c_i) - \kappa(\frac{2}{r_i} \frac{\partial c}{\partial r} + \frac{\partial^2 c}{\partial r^2})\\ =\ln \frac{c_i}{1-c_i} + \Omega (1-2c_i) - \kappa (\frac{c_{i-1} - 2c_i + c_{i+1}}{\Delta r^2} + \frac{2}{r_i} \frac{c_{i+1} - c_{i-1}}{2 \Delta r}) + O(\Delta r^2).
\end{split}\end{aligned}$$ For $i=1$, by the symmetry condition at the center and the isotropy condition, $\nabla^2 c_1 = 3\frac{\partial^2 c_1}{ \partial r^2}$ and $\nabla c_1 = 0$, so, $$\mu_1 = \ln \frac{c_1}{1-c_1} + \Omega (1-2c_1) -3\kappa\frac{\partial^2 c_1}{ \partial r^2} = \ln \frac{c_1}{1-c_1} + \Omega (1-2c_1) -3\kappa \frac{2c_2 -2c_1}{\Delta r^2} + O(\Delta r^2).$$ For $i=N$, since we have the boundary condition $ n \cdot \kappa \nabla c_N = \frac{\partial \gamma_s}{\partial c}$, when $\frac{\partial \gamma_s}{\partial c}$ is only a constant or a function of $c_N$, we can assume a ghost grid point at $r_{N+1}$, whose concentration satisfies $\nabla c_N = \frac{c_{N+1} - c_{N-1}}{2 \Delta r}=\beta$, which is equivalent to $c_{N+1} = 2 \Delta r \beta + c_{N-1}$, so that $$\mu_N = \ln \frac{c_N}{1-c_N} + \Omega (1-2c_N) - \kappa(\frac{2}{r_N}\beta + \frac{2c_{N-1} -2c_N + 2\Delta r \beta}{\Delta r^2}) + O(\Delta r^2).$$ With the chemical potential on each grid point, we can estimate the right hand side of Eqn. \[eqn:IntigratedForm\]. For each midpoint between two adjacent grid points, the flux $F_{i+\frac{1}{2}}$ satisfies, $$F_{i+\frac{1}{2}} = -(1-\frac{c_i + c_{i+1}}{2})\frac{c_i + c_{i+1}}{2}\frac{\mu_{i+1} - \mu_i}{\Delta r} + O(\Delta r^2).$$ For the center of the sphere, again by the symmetry condition we have $$F \Big |_{r=0} = 0.$$ Finally, for the particle surface, the flux is given by the current, which is also our boundary condition, $$F \Big |_{r=1} = -F_s.$$ This completes the discretization of the original partial differential equation system into a time-dependent system of ordinary differential equations. We use the implicit $ode15s$ solver for the time integration to get the numerical solution. Error Convergence Order ----------------------- As we demonstrated in the derivation of this numerical method, the discretization has second-order accuracy. Thus, we may expect the error convergence order in the spatial meshing to also be second order.
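To make the assembly of the scheme concrete, the following minimal Python sketch (ours, not the authors' code; the grid size is chosen for illustration) builds the tridiagonal mass matrix $\textbf{M}$ on a uniform grid and checks the column-sum property that guarantees exact mass conservation:

```python
import math

N = 11                      # number of uniform grid points, r_1 = 0, r_N = 1
dr = 1.0 / (N - 1)
r = [j * dr for j in range(N)]

# Shell volumes: half-shells at the center and the surface, full shells inside
def shell_volume(j):
    lo = max(r[j] - dr / 2, 0.0)
    hi = min(r[j] + dr / 2, 1.0)
    return 4 * math.pi / 3 * (hi**3 - lo**3)

V = [shell_volume(j) for j in range(N)]

# Tridiagonal mass matrix M: row i couples c_{i-1}, c_i, c_{i+1}
M = [[0.0] * N for _ in range(N)]
for i in range(N):
    M[i][i] = 0.75 * V[i]
    if i > 0:
        M[i][i - 1] = 0.125 * V[i - 1]
    if i < N - 1:
        M[i][i + 1] = 0.125 * V[i + 1]
# The rows next to the boundaries carry weight 1/4 instead of 1/8 (half shells)
M[1][0] = 0.25 * V[0]
M[N - 2][N - 1] = 0.25 * V[N - 1]

# Each column sums to the corresponding shell volume -> exact mass conservation
col_sums = [sum(M[i][j] for i in range(N)) for j in range(N)]
```

The assertion that each column of $\textbf{M}$ sums to the corresponding shell volume, and that the shell volumes together fill the unit sphere, is exactly the discrete mass-conservation statement made in the text.
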
This expectation of second-order convergence is confirmed by the numerical convergence test. In the error convergence test, we use a small current density. We also assume no surface wetting in this test. As we are mostly interested in the voltage prediction from this single particle ion-intercalation model, we define the error as the $L^2$ norm of the difference in voltage compared to a reference curve, which uses the solution from a very fine grid ($3001$ uniform grid points in our case) as the reference solution. The plot of error convergence is shown in the left half of Fig. \[fig:ErrorConvergence\], which is consistent with our previous expectation. The absolute error in voltage, shown on the right-hand side of the same figure, signifies that oscillations appear after the phase separation if the grid is not fine enough. As we see from Fig. \[fig:Oscillation\], with $21$ grid points, we may get different oscillation sizes in the solutions, which are sensitive to the parameter $\tilde \Omega$. As the concentration distributions on the right show, a larger parameter $\tilde \Omega$ leads to a smaller interfacial width, so we need a grid fine enough, with grid size smaller than the interfacial width, to capture the propagating shock without creating oscillations. Therefore, in choosing the number of grid points, we need to account for all conditions, such as the radius, $\tilde \Omega$ and $\kappa$, in order to obtain the desired accuracy with good stability, but without paying too much in computation cost. Conclusion {#sec:conc} ========== In summary, we have studied the dynamics of ion intercalation in an isotropic spherical battery intercalation particle using the heterogeneous CHR model with Butler-Volmer reaction kinetics [@bazant2013]. The model predicts either solid solution with radial nonlinear diffusion or core-shell phase separation, depending on the thermodynamic, geometrical, and electrochemical conditions.
The model is able to consistently predict the transient voltage after a current step, regardless of the complexity of the dynamics, far from equilibrium. Surface wetting plays a major role in nucleating phase separation. The simplifying assumptions of radial symmetry and negligible coherency strain may be applicable to some materials, such as lithium titanate anodes or defective lithium iron phosphate cathodes, while the basic principles illustrated here have broad relevance for intercalation materials with complex thermodynamics and multiple stable phases. Acknowledgments {#acknowledgments .unnumbered} =============== This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374. This work was also partially supported by the Samsung-MIT Alliance.
--- abstract: | The notion of walk entropy $S^V(G,\beta)$ for a graph $G$ at the inverse temperature $\beta$ was put forward recently by Estrada et al. (2014) [@6]. It was further proved by Benzi [@1] that a graph is walk-regular if and only if its walk entropy is maximum for all temperatures $\beta \in I$, where $I$ is a set of real numbers containing at least an accumulation point. Benzi [@1] conjectured that walk regularity can be characterized by the walk entropy if and only if there is a $\beta>0$ such that $S^V(G,\beta)$ is maximum. Here we prove that a graph is walk-regular if and only if $S^V(G,\beta=1)=\ln n$. We also prove that if the graph is regular but not walk-regular, then $S^V(G,\beta)<\ln n$ for every $\beta >0$ and $\lim_{\beta \to 0} S^V(G,\beta)=\ln n=\lim_{\beta \to \infty} S^V(G,\beta)$. If the graph is not regular then $S^V(G,\beta) \leq \ln n-\epsilon$ for every $\beta>0$, for some $\epsilon>0$. MSC: 05C50; 15A16; 82C20. Keywords: Walk-regularity; Graph entropies; Graph walks. title: Maximum Walk Entropy Implies Walk Regularity --- [[$^1$ Department of Mathematics and Statistics, University of Strathclyde, Glasgow G1 1XH, U.K., $\qquad ^2$ CIMAT, Guanajuato, 36240 México, $^3$ Instituto de Matemáticas, UNAM, México, 04510, México]{}]{} Introduction ============ The concept of walk entropy was recently proposed as a way of characterizing graphs using statistical mechanics concepts [@6]. For a simple, undirected graph $G=(V,E)$ with nodes $1 \leq i \leq n$ and adjacency matrix $A$ the walk entropy is defined as $$S^V(G,\beta)=-\sum\limits_{i=1}^n p_i(\beta) \ln p_i(\beta),$$ where $p_i(\beta)=(e^{\beta A})_{ii}/Z$ and $\beta=1/k_B T >0$ (where $k_B$ is the Boltzmann constant and $T$ is the absolute temperature). Here $Z={\hbox{\rm tr}}(e^{\beta A})$ represents the partition function of the graph, frequently referred to in the literature as the Estrada index of the graph [@3; @4; @9].
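To illustrate this definition (a sketch for intuition, not part of the paper; the example graphs are our choice), the walk entropy at $\beta=1$ can be computed from a truncated Taylor series for $e^{A}$. For the 4-cycle, which is walk-regular, the entropy equals $\ln 4$, while for the path on 3 vertices it falls strictly below $\ln 3$:

```python
import math

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=40):
    """Taylor-series matrix exponential; adequate for small adjacency matrices."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mult(power, A)
        fact *= k
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / fact
    return result

def walk_entropy(A):
    """Walk entropy at beta = 1: S = -sum_i p_i ln p_i with p_i = (e^A)_{ii}/Z."""
    E = expm(A)
    diag = [E[i][i] for i in range(len(A))]
    Z = sum(diag)                      # partition function tr(e^A)
    return -sum((d / Z) * math.log(d / Z) for d in diag)

C4 = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]   # 4-cycle: walk-regular
P3 = [[0,1,0],[1,0,1],[0,1,0]]                   # path on 3 vertices: not regular
S_c4 = walk_entropy(C4)
S_p3 = walk_entropy(P3)
```
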
The term $(e^{\beta A})_{ii}$ represents the weighted contribution of every subgraph to the centrality of the corresponding node, known as the subgraph centrality $SC(i)$ of the node [@7; @5; @8]. The walk entropy immediately attracted attention in the literature [@1] due to its many interesting mathematical properties as well as its potential for characterizing graphs and networks. In [@6] the authors stated a conjecture which was subsequently proved by Benzi [@1] as the following [**Theorem 1.1.**]{} [@1] [*A graph $G$ is walk-regular if and only if $S^V(G,\beta)=\ln n$ for all $\beta \geq 0$.*]{} Benzi [@1] also reformulated another conjecture stated by Estrada et al. [@6] in the following stronger form [**Conjecture 1.2.**]{} [@1] A graph is walk-regular if and only if there exists a $\beta>0$ such that $S^V(G,\beta)=\ln n$. A third conjecture to be considered here generalizes the graphic examples given by Estrada et al. [@6] and can be stated as [**Conjecture 1.3.**]{} Let $G$ be a non-regular graph, then $S^V(G,\beta)<\ln n$ for every $\beta>0$. In this note we prove these two conjectures, which immediately imply that the walk entropy is a strong characterization of walk-regularity in graphs and also give strong mathematical support to the strength of this graph invariant for studying the structure of graphs and networks. Main results ============ We start here by stating the two main results of this work. [**Theorem 2.1.**]{} *Let $A$ be the adjacency matrix of a connected graph $G$. Then the following conditions are equivalent:* [(a)]{} $G$ is walk-regular; [(b)]{} $A^k$ has a constant diagonal for all natural numbers $k$; [(c)]{} $e^A$ has constant diagonal; [(d)]{} $e^{\beta A}$ has constant diagonal for all $\beta\geq 0$; [(e)]{} The walk entropy $S^V(G,1)=\ln n$. [**Theorem 2.2.**]{} *Let $A$ be the adjacency matrix of a graph $G$. Then one and only one of the following conditions holds:* [(a)]{} $G$ is walk-regular.
Then $S^V(G,\beta)=\ln n$ for every $\beta>0$; [(b)]{} $G$ is a regular but not walk-regular graph. Then $S^V(G,\beta)<\ln n$ for every $\beta>0$. Moreover, $\lim_{\beta \to 0} S^V(G,\beta)=\ln n=\lim_{\beta \to \infty} S^V(G,\beta)$; [(c)]{} There is some $\epsilon>0$ such that $S^V(G,\beta)\leq \ln n-\epsilon$ for every $\beta>0$. To avoid cross-referencing in the proofs of the above theorems, we first present the proof of Theorem 2.2. Auxiliary definitions and results ================================= Before stating the proof of Theorem 2.2 we need to introduce some definitions and auxiliary results, which are given below. We remind the reader that given a set $X=\{x_1,\ldots,x_s\}$ of real numbers, the [*variance*]{} is defined as $$\sigma^2(X)=E(X^2)-(E(X))^2=\frac{1}{s} \sum\limits_{i=1}^s x_i^2-\left(\frac{1}{s} \sum\limits_{i=1}^s x_i \right)^2.$$ [**Definition 3.1**]{}: Given a matrix $M$ with diagonal entries $M_{11},\ldots,M_{nn}$, not all zero, we introduce the [*diagonal variance*]{} as $$\sigma_d^2(M)=\frac{1}{\sum\limits_{i=1}^n |M_{ii}|} \sigma^2(M_{11},\ldots,M_{nn}).$$ Let us now state and prove the following auxiliary result. We notice in passing that the diagonal variance of $e^A$ was studied by Ejov et al. [@4a] in a different context for regular graphs. [**Proposition 3.2**]{}: *Let $A$ be the adjacency matrix of a connected graph $G$. Then one of the following conditions holds:* [(a)]{} $e^A$ has constant diagonal; [(b)]{} $e^A$ does not have a constant diagonal and $G$ is a regular graph. Then $\sigma_d^2(e^{\beta A})>0$ for $\beta>0$ and $\lim_{\beta \to \infty}\sigma_d^2(e^{\beta A})=0$; [(c)]{} There is some $\epsilon >0$ such that $\sigma_d^2(e^{\beta A})>\epsilon$ for every $\beta>0$. [**Proof**]{}: We distinguish the following mutually exclusive cases: \(1) $G$ is walk-regular, which implies that $e^A$ has constant diagonal. \(2) $e^{\beta A}$ does not have constant diagonal entries, for any $\beta>0$.
Then $\sigma_d^2(e^{\beta A})>0$ for $\beta>0$. Observe that for large $\beta$ we have $ (e^{\beta A})_{ii}\sim \phi_1^2 (i) e^{\beta \lambda_1}$ and $Z(\beta A) \sim e^{\beta \lambda_1}$, where $\phi_1$ is the (Perron) eigenvector of $A$ corresponding to the maximal eigenvalue $\lambda_1$. Here the symbol $\sim$ means that the quantities are asymptotically equal. In that situation $$\lim_{\beta \to \infty} \sigma_d^2(e^{\beta A})=\frac{1}{Z(\beta)} \sigma_d^2 \left((e^{\beta A})_{ii}: 1 \leq i \leq n \right)=\sigma_d^2(\phi_1^2 (i): 1 \leq i \leq n).$$ Therefore $\lim_{\beta \to \infty} \sigma_d^2(e^{\beta A})=0$ is equivalent to $\phi_1$ being constant, or $G$ being regular. If $G$ is not regular then the analytic function $\sigma_d^2(e^{\beta A})>0$, for $\beta>0$, and $\lim_{\beta \to \infty} \sigma_d^2(e^{\beta A})>0$. Clearly, there is some $\epsilon>0$ such that $\sigma_d^2(e^{\beta A})\geq \epsilon$, for every $\beta >0$. We continue now with some other auxiliary results needed to prove Theorem 2.2. Let $\lambda_1,\cdots,\lambda_n$ be the eigenvalues of $A$, such that $\sum\limits_{j=1}^n \lambda_j = 0$ (since $G$ is a simple graph without loops). For the vector of diagonal entries $y=(y_1,\cdots,y_n)$ of $e^{\beta A}$ we define a vector $z=\ln y=(\ln y_1,\cdots,\ln y_n)$ of real numbers. We have $$\sum\limits_{i=1}^n z_i e^{z_i}=\sum\limits_{i=1}^n y_i \ln y_i$$ with $\sum\limits_{i=1}^n z_i =\ln \prod_{i=1}^n y_i \geq \ln {\hbox{\rm det}}(e^{\beta A})= \beta \sum\limits_{j=1}^n \lambda_j = 0$, where the inequality is a direct application of Hadamard’s theorem for the positive definite matrix $e^{\beta A}$, see for instance [@11]. The remarkable result of Borwein and Girgensohn [@2] states the following. [**Theorem 3.4**]{}. [*Let $c_n=2$ for $n=2,3,4$ and $c_n=e(1-1/n)$ for $n \geq 5$. Let $z_i$ be defined as before.
Then [@2] yields $$\frac{c_n}{n} \sum\limits_{i=1}^n z_i^2 \leq \sum\limits_{i=1}^n z_i e^{z_i}.$$*]{} Proof of Theorem 2.2 ========================= We know that $S^V(G,\beta) \leq \ln n$ for every $\beta>0$. Observe that for $Z(\beta)={\hbox{\rm tr}}(e^{\beta A})$ the walk vertex entropy is $$S^V(G,\beta) =\ln Z-\frac{1}{Z} \sum\limits_{i=1}^n z_i e^{z_i}|_{\beta}.$$ The Borwein-Girgensohn inequality yields $$S^V(G,\beta) \leq \ln Z-\frac{1}{Z} \frac{c_n}{n}\sum\limits_{i=1}^n z_i^2|_{\beta}.$$ We distinguish two situations at $\beta>0$: $(1)$ $\sum\limits_{i=1}^n z_i^2|_{\beta} =0$, that is $y_i(\beta)=1$ for $i=1,\ldots,n$. Then $Z(\beta)=n$, which is only possible if $A=0$. Therefore $S^V(G,\gamma)=\ln n$ for any $\gamma>0$. $(2)$ $\sum\limits_{i=1}^n z_i^2>0$. Then there is a differentiable function $c_n \leq d_n(\beta)$ such that $$S^V(G,\beta) = \ln Z-\frac{1}{Z} \frac{d_n}{n}\sum\limits_{i=1}^n z_i^2|_{\beta}< \ln n.$$ Since $Z \geq n$ there is a differentiable function $e_n$ satisfying $0<e_n(\beta) \leq d_n(\beta)$ such that $$S^V(G,\beta) = \ln n- \frac{e_n}{n^2}\sum\limits_{i=1}^n z_i^2|_{\beta}.$$ For every $M>0$, using the compactness of the interval $[0,M]$, there exists an $\epsilon(M)>0$ such that $ \frac{e_n}{n^2}\sum\limits_{i=1}^n z_i^2|_{\beta}\geq \epsilon(M)$ for $\beta \in (0,M]$. Choose $\epsilon(M)$ such that $$\inf \{\epsilon(M):0<M \}=\lim\limits_{\beta \to \infty} \frac{e_n}{n^2}\sum\limits_{i=1}^n z_i^2|_{\beta}.$$ Moreover, recall from [@6] that $$S^V(G,\beta \to \infty)=-\sum\limits_{i=1}^n \phi_1^2(i) \ln \phi_1^2(i).$$ This limit is $< \ln n$ except when there is a common value $\phi_1(i)=c_1$, for $i=1,\ldots,n$. The latter property implies that $G$ is a regular graph. We consider these cases separately. $(3)$ Assume that $G$ is not a regular graph. Then $S^V(G,\beta \to \infty)< \ln n$. Therefore there exists an $\epsilon>0$ such that for $M>0$ we have $\epsilon(M) \geq \epsilon$.
That is, $S^V(G,\beta) \leq \ln n-\epsilon$, for $\beta>0$. $(4)$ Assume that $G$ is a regular but not a walk-regular graph. Then, according to the analysis in Proposition 3.2, the maximal value $S^V(G,\beta)=\ln n$ is not attained for any $\beta>0$. Moreover, $$\lim_{\beta \to 0} S^V(G,\beta)=\ln n=\lim_{\beta \to \infty} S^V(G,\beta).$$ Proof of Theorem 2.1 ========================= The following are obvious implications: \(a) implies (b), (a) implies (d), (d) implies (c), (c) implies (e), which leaves open only two implications. For (b) implies (a), let $$p(T)=T^n+p_{n-1}T^{n-1}+\cdots+p_0$$ be the characteristic polynomial of the graph $G$. The Cayley-Hamilton theorem yields $p(A)=0$. If $A^k$ has a constant diagonal for natural numbers $0 \leq k \leq m$ and $n-1 \leq m$, then $$A^{m+1}=-(p_{n-1}A^m+\cdots+p_0 A^{m-n+1})$$ has a constant diagonal. \(e) implies (a): follows from Theorem 2.2. In closing, the maximum of the walk entropy at $\beta=1$, i.e., $S^V(G,1)=\ln n$, is attained only for the walk-regular graphs. This means that $S^V(G,1)$ can be used as an invariant to characterize walk-regularity in graphs. [**Acknowledgement**]{}: We thank the referees for suggestions on the presentation of the paper. M. Benzi, [*A note on walk entropies in graphs,*]{} Linear Algebra Appl. 445 (2014) 395-399. J. Borwein, R. Girgensohn, [*A class of exponential inequalities*]{}, Math. Inequal. Appl., 6(3), 2003, 397-411. J.A. de la Peña, I. Gutman, J. Rada, [*Estimating the Estrada index*]{}, Linear Algebra Appl. 427 (2007) 70-76. H. Deng, S. Radenkovic, I. Gutman, [*The Estrada index. Applications of Graph Spectra*]{}, Math. Inst., Belgrade, (2009) 123-140. V. Ejov, J.A. Filar, S.K. Lucas and P. Zograf, [*Clustering of spectra and fractals of regular graphs*]{}, J. Math. Anal. Appl. 333 (2007) 236-246. E. Estrada, [*The Structure of Complex Networks. Theory and Applications*]{}, Oxford University Press, UK, 2011. E. Estrada, J.A. de la Peña, N.
Hatano, [*Walk entropies in graphs*]{}, Linear Algebra Appl. 443 (2014) 235-244. E. Estrada, J.A. Rodríguez-Velázquez, [*Subgraph centrality in complex networks*]{}, Phys. Rev. E 71 (2005) 671-696. E. Estrada, N. Hatano, M. Benzi, [*The physics of communicability in complex networks*]{}, Phys. Rep. 514 (2012) 89-119. I. Gutman, H. Deng, S. Radenkovic, [*The Estrada index: an updated survey. Selected Topics on Applications of Graph Spectra*]{}, Math. Inst., Beograd, (2011) 155-174. B. Kostant, P. W. Michor, [*The generalized Cayley map from an algebraic group to its Lie algebra*]{}, In The orbit method in geometry and physics, pp. 259-296. Birkhauser Boston, 2003. F. Zhang, [*Matrix Theory: Basic Results and Techniques*]{}, Springer (1999).
--- abstract: 'A field theoretical framework is developed for the Hawkes self-excited point process with arbitrary memory kernels. We derive the corresponding master equation for the general Hawkes process as a field theory of probability density functionals. The Langevin dynamics of the field variables is given by stochastic partial differential equations that are Markovian. This is in contrast to the Hawkes process, which is non-Markovian (in general) by construction as a result of its (long) memory kernel. For the case of a memory kernel decaying as a single exponential, we find the exact time-dependent and steady state solutions for the probability density function (PDF) of the Hawkes intensities, using the Laplace representation of the master equation. For memory kernels represented as arbitrary sums of exponentials (discrete and continuous sums), we derive the exact solutions of the Lagrange-Charpit equations for the hyperbolic master equations in the Laplace representation in the steady state, close to the critical point $n=1$ of the Hawkes process, where $n$ is the branching ratio. The critical condition of the original Hawkes process is found to correspond to a transcritical bifurcation in the Lagrange-Charpit equations associated with the master equations. We predict a power law scaling of the PDF of the intensities in an intermediate asymptotics regime, which crosses over to an asymptotic exponential function beyond a characteristic intensity that diverges as the critical condition is approached ($n \to 1$). The exponent of the PDF is non-universal and a function of the background intensity $\nu_0$ of the Hawkes intensity and of the parameter $\alpha = n \langle \tau \rangle$, where $\langle \tau \rangle$ is the first-order moment of the distribution of time scales of the memory function of the Hawkes process. Our theoretical predictions are confirmed by numerical simulations.
Our field theoretical framework provides a way to tackle complex generalisations of the Hawkes process, such as the nonlinear Hawkes processes previously proposed to describe the multifractal properties of earthquake seismicity and of financial volatility.' author: - 'Kiyoshi Kanazawa$^{1}$ and Didier Sornette$^{2-4}$' title: 'Field master equation theory of the self-excited Hawkes process' --- Introduction ============ The self-excited conditional Poisson process introduced by Hawkes [@Hawkes1; @Hawkes2; @Hawkes3] has progressively been adopted as a useful first-order model of intermittent processes with time (and space) clustering, such as those occurring in seismicity and financial markets. The Hawkes process was first used and extended in statistical seismology and remains probably the most successful parsimonious description of earthquake statistics [@KK1981; @KK1987; @Ogata1988; @Ogata1999; @HelmsSor02; @Shyametal2019]. More recently, the Hawkes model has seen a burst of interest in finance (see e.g. [@HawkesRev18] for a short review) as it was realised that some of the stochastic processes in financial markets can be well represented by this class of models [@FiliSor12], for which the triggering and branching processes capture the herding nature of market participants (be they due to psychological or rational imitation of human traders or as a result of machine learning and adapting). In the field of financial economics, the Hawkes process has been successfully applied to issues as diverse as estimating the volatility at the level of transaction data, estimating market stability [@FiliSor12; @FiliSor15; @WheatleyWeh2019], accounting for systemic risk contagion, devising optimal execution strategies, and capturing the dynamics of the full order book [@Bacry1].
Another domain of intense use of the Hawkes model and its many variations is found in the field of social dynamics on the Internet, including instant messaging and blogging such as on Twitter [@Zhao2015], as well as the dynamics of book sales [@SorDeschatres04], video views [@CraneSor08], success of movies [@EscobarSor08], and so on. The present article, together with the joint-submission Letter [@KKDS_PRL], can be considered as a sequel complementing a series of papers devoted to the analysis of various statistical properties of the Hawkes process [@SaiSor2004; @SaiHSor2005; @SaiSor2006; @SaiSorTimes2007; @SaiSor2014]. These papers have extended the general theory of point processes [@DalayVere03] to obtain general results on the distributions of total number of events, total number of generations, and so on, in the limit of large time windows. Here, we consider the opposite limit of very small time windows, and characterise the distribution of “intensities”, where the intensity $\nu(t)$ of the Hawkes process at time $t$ is defined as the probability per unit time that an event occurs (more precisely, $\nu(t) dt$ is the probability that an event occurs between $t$ and $t+dt$). We propose a novel natural formulation of the Hawkes process in the form of a field theory of probability density functionals taking the form of a field master equation. This formulation is found to be ideally suited to investigate the hitherto ignored properties of the distribution of Hawkes intensities, which we analyse in depth for a series of increasingly sophisticated forms of the memory kernel characterising how past events influence the triggering of future events. This paper is organised as follows. Section 2 presents the Hawkes process in its original definition and then proceeds to develop its equivalent stochastic Markovian partial differential equation.
This is done first for the case where the memory kernel is a single exponential, then made of two exponentials, an arbitrary finite number of exponentials and finally for general memory kernels. It is in Section 2 that the general field master equations are derived. Section 3 presents the analytical treatment and provides the solutions of the master equations, leading to the derivation of the probability density function of the Hawkes intensities for the various above-mentioned forms of the memory kernel. Section 4 summarises and concludes by outlining future possible extensions of the formalism. These sections are complemented by seven appendices, in which the detailed analytical derivations are provided. Model and formulation {#sec:ModelMasterEq} ===================== Definition of the Hawkes conditional Poisson process ---------------------------------------------------- ![ Schematic representation of the Hawkes process (\[def:Hawkes\_general\]) with an exponential kernel. The event occurring at a given time stamp is represented by a jump in the intensity ${\hat{\nu}}(t)$, the probability per unit time that the next event will occur. []{data-label="fig:trj_singleExpon"}](trj_singleExpon.eps){width="75mm"} The Hawkes process is the simplest self-excited point process, which describes with a linear intensity function ${\hat{\nu}}$ how past events influence the triggering of future events. Its structure is particularly well-suited to address the general and important question occurring in many complex systems of disentangling the exogenous from the endogenous sources of observed activity. It has been and continues to be a very useful model in geophysical, social and financial systems. The Hawkes process, as any other point process, deals with events (“points” along the time axis).
The theory of point processes indeed considers events as being characterised by a time of occurrence but vanishing duration (the duration of events is very small compared to the inter-event times). Thus, to a given event $i$ is associated a time $t_i$ of occurrence. In the case where one deals with spatial point processes, the event also has a position ${\vec r}_i$. A “mark” $m_i$ can be included to describe the event’s size, or its “fertility”, i.e. the average number of events it can trigger directly. The stochastic dynamics of the Hawkes process is defined as follows. Let us introduce a state variable ${\hat{\nu}}$, called the intensity. By convention, we denote stochastic variables with a hat symbol, such as $\hat{A}$, to distinguish them from the non-stochastic real numbers $A$, corresponding for instance to a specific realisation of the random variable. The intensity ${\hat{\nu}}$ is a statistical measure of the frequency of events per unit time (i.e., a shock occurs during $[t,t+dt)$ with probability ${\hat{\nu}}dt$). In the Hawkes process, the intensity satisfies the following stochastic sum equation (see Fig. \[fig:trj\_singleExpon\]): $$\begin{aligned} {\hat{\nu}}(t) = \nu_0 + \sum_{i=1}^{{\hat{N}}(t)} h(t-{\hat{t}}_i), \label{def:Hawkes_general} \end{aligned}$$ where $\nu_0$ is the background intensity, $\{{\hat{t}}_i\}_i$ represent the time series of events, and ${\hat{N}}(t)$ is the number of events during the interval $[0,t)$ (called the “counting process”). One often refers to ${\hat{\nu}}(t)$ as a conditional intensity in the sense that, conditional on the realised sequence of ${\hat{N}}(t)=k$ (with $k\geq 0$) events, the probability that the $(k+1)$th event occurs during $[t,t+dt)$, such that ${\hat{t}}_{k+1}\in [t,t+dt)$, is given by ${\hat{\nu}}(t)dt$. The pulse (or memory) kernel $h(t)$ represents the non-Markovian influence of a given event, and is non-negative. 
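To make the definition concrete, the sum (\[def:Hawkes\_general\]) can be simulated directly. The sketch below is illustrative and not taken from the paper: it uses an exponential kernel $h(t)=(n/\tau)e^{-t/\tau}$, approximates the event probability ${\hat{\nu}}dt$ by a Bernoulli draw over small steps $dt$, and truncates the kernel beyond $10\tau$ for speed; all parameter values are arbitrary choices.

```python
import math
import random
from collections import deque

random.seed(1)

# illustrative parameters (not from the paper)
nu0, n, tau = 0.5, 0.5, 1.0      # background intensity, branching ratio, memory time
T, dt = 2000.0, 0.05             # time window and time step

def h(u):
    # exponential memory kernel, normalised so that its integral equals n
    return (n / tau) * math.exp(-u / tau)

events = deque()                  # recent event times (influence truncated at 10*tau)
t, count = 0.0, 0
while t < T:
    while events and t - events[0] > 10.0 * tau:
        events.popleft()          # forget events whose residual influence is ~ exp(-10)
    nu = nu0 + sum(h(t - ti) for ti in events)   # the sum (def:Hawkes_general)
    if random.random() < nu * dt:                # an event occurs in [t, t+dt) with prob. nu*dt
        events.append(t)
        count += 1
    t += dt

print(count)
```

Since the stationary mean intensity of the subcritical Hawkes process is $\nu_0/(1-n)$ (a standard result recalled later in the paper), roughly $\nu_0 T/(1-n) = 2000$ events are expected over this window.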
The integral of $h(t)$ defines the branching ratio $$n := \int_0^\infty h(t) dt~. \label{deff-wrth2tg2}$$ The parameter $n$ is a fundamental quantity: it is the average number of events of first generation (“daughters”) triggered by a given event [@DalayVere03; @HelmsSor02]. This definition results from the fact that the Hawkes model, as a consequence of the linear structure of its intensity (\[def:Hawkes\_general\]), can be mapped exactly onto a branching process, making unambiguous the concept of generations: more precisely, a given realisation of the Hawkes process can be represented by the set of all possible tree combinations, each of them weighted by a certain probability derived from the intensity function [@ZhuangVere02]. The branching ratio is the control parameter separating three different regimes: (i) $n<1$: subcritical; (ii) $n=1$: critical and (iii) $n>1$: super-critical or explosive (with a finite probability). The branching ratio $n$ can be shown to be also the fraction of events that are endogenous, i.e., that have been triggered by previous events [@HelmsSor03]. We now formulate the master equation for the model  and provide its asymptotic solution around the critical point. The single exponential kernel case ---------------------------------- ### Mapping to Markovian dynamics Before developing the general framework for arbitrary memory kernel $h(t)$, we consider the simplest case of an exponential memory kernel: $$h(t) = \frac{n}{\tau}e^{-t/\tau}.\label{def:single_expon_kernel}$$ The decay time $\tau$ quantifies how long an event can typically trigger events in the future. We make the branching ratio $n$ explicit through the normalisation of the exponential. This special case is Markovian as discussed in Refs. [@Oakes1975; @Dassios2011], because the lack of memory of exponential distributions ensures that the number of events after time $t$ defines a Markov process in continuous time [@Knopoff1997]. 
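Because of this Markov property, the exponential case can also be simulated exactly (with no time discretisation) by the standard thinning construction: between events the intensity only decays towards $\nu_0$, so its current value bounds it until the next event. The following is a minimal sketch with illustrative parameters, not an algorithm taken from the paper.

```python
import math
import random

random.seed(2)

# illustrative parameters (not from the paper)
nu0, n, tau = 0.2, 0.5, 1.0
T = 5000.0

t, nu_cur, events = 0.0, nu0, []
while True:
    M = nu_cur                          # upper bound: the intensity only decays until the next event
    w = random.expovariate(M)           # candidate waiting time at the bounding rate
    t += w
    if t > T:
        break
    nu_cur = nu0 + (nu_cur - nu0) * math.exp(-w / tau)   # decayed intensity at the candidate time
    if random.random() * M < nu_cur:    # thinning: accept with probability nu_cur / M
        events.append(t)
        nu_cur += n / tau               # each accepted event adds h(0) = n/tau to the intensity

print(len(events))
```

With these values the stationary mean intensity is $\nu_0/(1-n)=0.4$, so about $2000$ events are expected; the bound is tightened at every candidate time, which keeps the rejection rate low.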
Let us now show that this single exponential case  can be mapped onto a stochastic differential equation (SDE) driven by a state-dependent Markovian Poisson noise. By decomposing the intensity as $${\hat{z}}:= {\hat{\nu}}-\nu_0, \label{hygqfqtgqb}$$ let us consider the Langevin dynamics $$\frac{d{\hat{z}}}{dt} = -\frac{1}{\tau}{\hat{z}}+ \frac{n}{\tau} {\hat{\xi}}^{{\mathrm{P}}}_{{\hat{\nu}}} \label{eq:Markov_exp_pulse}$$ with a state-dependent Poisson noise ${\hat{\xi}}^{{\mathrm{P}}}_{{\hat{\nu}}}$ with intensity given by ${\hat{\nu}}={\hat{z}}+\nu_0$ and initial condition ${\hat{z}}(0)=0$. The introduction of ${\hat{z}}$ is similar to the trick proposed in [@BouchaudTradebook2018] for an efficient estimation of the maximum likelihood of the Hawkes process. By expressing the state-dependent Poisson noise as $${\hat{\xi}}^{{\mathrm{P}}}_{{\hat{\nu}}}(t) = \sum_{i=1}^{{\hat{N}}(t)} \delta(t-{\hat{t}}_i), \label{hwtrgfq}$$ we obtain the formal solution of equation (\[eq:Markov\_exp\_pulse\]) $${\hat{\nu}}(t) = \nu_0+{\hat{z}}(t) = \nu_0 + \frac{n}{\tau}\int_0^t dt'e^{-(t-t')/\tau} {\hat{\xi}}^{{\mathrm{P}}}_{{\hat{\nu}}}(t') = \nu_0 + \sum_{i=1}^{{\hat{N}}(t)} h(t-{\hat{t}}_i).$$ This solution shows that the SDE  is equivalent to the Hawkes process . Equation  together with (\[hwtrgfq\]) is therefore a shorthand notation for $${\hat{z}}(t+dt) - {\hat{z}}(t) = \begin{cases} -\frac{1}{\tau}{\hat{z}}(t)dt & (\mbox{No jump during $[t,t+dt)$; probability} = 1-{\hat{\nu}}(t)dt)\\ \frac{n}{\tau} & (\mbox{Jump in $[t,t+dt)$; probability} = {\hat{\nu}}(t)dt) \end{cases} \label{jehygbqgb}$$ for the probabilistic time evolution during $[t,t+dt)$ (see Fig. \[fig:trj\_singleExpon\_f\]a for a schematic representation). Note that the event probability explicitly depends on ${\hat{\nu}}(t)$, which reflects the endogenous nature of the Hawkes process. ![ a) Schematic representation of a typical trajectory for ${\hat{z}}(t)$ defined by (\[hygqfqtgqb\]). 
A jump of size $n/\tau$ may occur in the time interval $[t,t+dt)$ with probability ${\hat{\nu}}(t)dt$. (b) Let us consider an arbitrary function $f({\hat{z}})$. At the time $t={\hat{t}}_i$ of the jump, there is a corresponding jump in the trajectory of $f({\hat{z}})$, which is characterized by $df({\hat{z}}(t)):= f({\hat{z}}({\hat{t}}_i-0)+n/\tau)-f({\hat{z}}({\hat{t}}_i-0))$. The plots shown here are based on a numerical simulation with the following parameters: $\tau=1$, $n=0.5$, $\nu_0=0.1$ and $f(z)=\exp[5(z-1)]+0.1$. []{data-label="fig:trj_singleExpon_f"}](trj_singleExpon_f.eps){width="140mm"} ### Master equation By introducing equation  together with (\[hwtrgfq\]), we have transformed a non-Markovian point process into a Markovian SDE. This allows us to derive the corresponding master equation for the probability density function (PDF) $P_t(z)$ of the excess intensity ${\hat{z}}$ (\[hygqfqtgqb\]), $$\frac{\partial P_t(z)}{\partial t} = \frac{1}{\tau}\frac{\partial }{\partial z}zP_t(z) + \Big[(\nu_0+z-n/\tau)P_t(z-n/\tau)-(\nu_0+z) P_t(z)\Big], \label{eq:master_exp}$$ with the boundary condition $$P_t(z)\Big|_{z=0} = 0~.$$ $P_t(z)dz$ is thus the probability that ${\hat{z}}(t)$ takes a value in the interval ${\hat{z}}(t)\in [z,z+dz)$ at time $t$. The master equation (\[eq:master\_exp\]) is derived as follows. Let us consider an arbitrary function $f({\hat{z}})$. Using (\[jehygbqgb\]), its time evolution during $[t,t+dt)$ is given by $$f({\hat{z}}(t+dt)) - f({\hat{z}}(t)) = \begin{cases} -\frac{{\hat{z}}(t)}{\tau}\frac{\partial f({\hat{z}}(t))}{\partial {\hat{z}}}dt & (\mbox{Probability} = 1-{\hat{\nu}}(t)dt) \\ f({\hat{z}}(t)+n/\tau) - f({\hat{z}}(t)) & (\mbox{Probability} = {\hat{\nu}}(t)dt) \end{cases},$$ as schematically illustrated in Fig. \[fig:trj\_singleExpon\_f\]b. 
By taking the ensemble average of both sides, we obtain $$\begin{aligned} dt\int dz \frac{\partial P_t(z)}{\partial t}f(z) &= \int dz P_t(z)\left[-\frac{z}{\tau}\frac{\partial f(z)}{\partial z}dt + (z+\nu_0)\{f(z+n/\tau)-f(z)\} dt\right], \notag\\ \Longrightarrow \int dz \frac{\partial P_t(z)}{\partial t}f(z) &= \int dz f(z)\left[\frac{1}{\tau}\frac{\partial }{\partial z}z P_t(z) + \{(\nu_0+z-n/\tau)P_t(z-n/\tau)-(\nu_0+z) P_t(z)\}\right]. \label{eq:derivation_master_exp} \end{aligned}$$ This result (\[eq:derivation\_master\_exp\]) is obtained by (i) using the identity $$\left<f({\hat{z}}(t+dt))-f({\hat{z}}(t))\right> = \int dz [P_{t+dt}(z)-P_t(z)]f(z) = dt\int dz\frac{\partial P_t(z)}{\partial t}f(z),$$ (ii) by performing an integration by parts of the first term in the right-hand side of Eq. , and (iii) by introducing the change of variable $z\to z-n/\tau$ for the second term. Since (\[eq:derivation\_master\_exp\]) is an identity holding for arbitrary $f(z)$, the integrands of the left-hand side and the right-hand side must be equal, which yields the master equation . Note that the above derivation of the master equation is not restricted to the exponential shape of the memory kernel. We are going to use the same derivation in the more complex examples discussed below. Discrete sum of exponential kernels ----------------------------------- ### Mapping to Markovian dynamics The above formulation can be readily generalized to the case of a memory kernel expressed as a discrete sum of exponential functions: $$h(t) = \sum_{k=1}^K \frac{n_k}{\tau_k}e^{-t/\tau_k}. \label{yjtyhnwrgbqbq}$$ In this case, each coefficient $n_k$ quantifies the contribution of the $k$-th exponential with memory length $\tau_k$ to the branching ratio $n = \sum_{k=1}^K n_k$ (see definition (\[deff-wrth2tg2\])). 
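Anticipating the Markovian mapping developed below, a kernel made of $K$ exponentials can be simulated exactly by tracking one decaying component per exponential and letting every event kick all components at once. The following minimal sketch for $K=2$ uses illustrative parameters (not from the paper); the stationary mean intensity $\nu_0/(1-n)$ with $n=n_1+n_2$ serves as a check.

```python
import math
import random

random.seed(3)

# illustrative parameters (not from the paper): K = 2 exponentials
nu0 = 0.5
taus = (0.5, 2.0)
ns = (0.2, 0.3)                   # total branching ratio n = 0.5 (subcritical)
T = 2000.0

z = [0.0, 0.0]                    # one decaying excess-intensity component per exponential
t, count = 0.0, 0
while True:
    M = nu0 + sum(z)              # upper bound: every component only decays until the next event
    w = random.expovariate(M)     # candidate waiting time at the bounding rate
    t += w
    if t > T:
        break
    z = [zk * math.exp(-w / tk) for zk, tk in zip(z, taus)]        # decay over the waiting time
    if random.random() * M < nu0 + sum(z):                         # thinning acceptance
        z = [zk + nk / tk for zk, nk, tk in zip(z, ns, taus)]      # all components jump together
        count += 1

print(count)
```

With these values, roughly $\nu_0 T/(1-n) = 2000$ events are expected.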
We note that this representation (\[yjtyhnwrgbqbq\]) is quite general, as it can approximate well the case of a power-law kernel with cut-off up to a constant [@Hardimanetal2013]. Harris suggested the intuitive notion that it is possible to map this case to a Markovian dynamics if the state of the system at time $t$ is made to include the list of the ages of all events [@Harris1963]. The problem is that this conceptual approach is unworkable in practice due to the exorbitant size of the required information. By introducing an auxiliary age pyramid process, Ref. [@Boumezoued2016] identified some key components to add to the Hawkes process and its intensity to make the dynamics Markovian. Here, in order to map model  onto a Markovian stochastic process, we propose a more straightforward approach, which generalises the previous case of a single exponential memory function. We decompose the intensity into a sum of $K$ excess intensities $\{z_k\}_{k=1}^K$ as follows: $${\hat{\nu}}(t) = \nu_0 + \sum_{k=1}^K {\hat{z}}_k(t)~.$$ Each excess intensity ${\hat{z}}_k$ is the solution of a Langevin equation driven by a state-dependent Markovian Poisson shot noise $$\frac{d{\hat{z}}_k}{dt} = -\frac{{\hat{z}}_k}{\tau_k} + \frac{n_k}{\tau_k}{\hat{\xi}}^{{\mathrm{P}}}_{{\hat{\nu}}}~. \label{eq:SDE_general_superposition_discrete}$$ Note that the same state-dependent Poisson noise ${\hat{\xi}}^{{\mathrm{P}}}_{{\hat{\nu}}}(t)$ defined by expression (\[hwtrgfq\]) acts on the Langevin equation for each excess intensity $\{{\hat{z}}_k\}_{k=1,\dots,K}$. In other words, each shock event simultaneously impacts the trajectories of all excess intensities $\{{\hat{z}}_k\}_{k=1,\dots,K}$ (see the vertical broken line in Fig. \[fig:trj\_twoExpon\]a and \[fig:trj\_twoExpon\]b and the resulting trajectory of ${\hat{\nu}}(t)$ in Fig. \[fig:trj\_twoExpon\]c). ![Case where the memory kernel is the sum of two exponentials. 
Panels (a) and (b) show the schematic trajectories of the two excess intensities ${\hat{z}}_1$ and ${\hat{z}}_2$ and panel (c) that of the resulting total intensity ${\hat{\nu}}:=\nu_0+{\hat{z}}_1+{\hat{z}}_2$. The parameters are $K=2$, $\tau_1=1$, $n_1=0.3$, $\tau_2=3$, $n_2=0.5$ (and thus $n=0.8$) and $\nu_0=0.1$. []{data-label="fig:trj_twoExpon"}](trj_twoExpon_f.eps){width="140mm"} ### Master equation As the set of SDEs for $\hat{\bm{z}}:= ({\hat{z}}_1,{\hat{z}}_2,\dots,{\hat{z}}_K)^{{\mathrm{T}}}$ defines a standard Markovian stochastic process, we obtain the corresponding master equation: $$\frac{\partial P_t(\bm{z})}{\partial t} = \sum_{k=1}^K\frac{\partial }{\partial z_k}\frac{z_k}{\tau_k}P_t(\bm{z}) + \left[ \left\{\nu_0+\sum_{k=1}^K (z_k- n_k/\tau_k)\right\}P_t(\bm{z}-\bm{h}) - \left\{\nu_0+\sum_{k=1}^K z_k \right\}P_t(\bm{z}) \right]. \label{eq:master_n_expon}$$ The jump-size vector is given by $\bm{h} := (n_1/\tau_1, n_2/\tau_2,\dots, n_K/\tau_K)^{{\mathrm{T}}}$. The PDF $P_t(\bm{z})$ obeys the following boundary condition $$P_t(\bm{z})\Big|_{\bm{z} \in \partial \bm{R}^{K}_+} = 0~, \label{eq:boundary_condition_n_expon}$$ on the boundary $\partial \bm{R}^{K}_+:= \{\bm{z} | z_k = 0 \mbox{ for some }k \}$. This equation (\[eq:master\_n\_expon\]) can be derived following the procedure used for the single exponential case that led us to the master equation (see Appendix \[sec:master\_eq\_n\_expon\] for an explicit derivation). ### Laplace representation of the master equation The master equation (\[eq:master\_n\_expon\]) takes a simplified form under the Laplace representation, $${\tilde}{P}_t(\bm{s}) := \mathcal{L}_{K}[P_t(\bm{z});\bm{s}]~, \label{def:Laplace_PDF_n_gen}$$ where the Laplace transformation in the $K$ dimensional space is defined by $$\mathcal{L}_{K}[f(\bm{z}); \bm{s}] := \int_0^\infty d\bm{z} e^{-\bm{s}\cdot \bm{z}}f(\bm{z})$$ with volume element $d\bm{z}:= \prod_{k=1}^K dz_k$. 
The wave vector $\bm{s} := (s_1, \dots, s_K)^{{\mathrm{T}}}$ is the conjugate of the excess intensity vector $\bm{z} := (z_1, \dots, z_K)^{{\mathrm{T}}}$. The Laplace representation of the master equation  is then given by $$\frac{\partial {\tilde}{P}_t(\bm{s})}{\partial t} = -\sum_{k=1}^K\frac{s_k}{\tau_k}\frac{\partial {\tilde}{P}_t(\bm{s})}{\partial s_k} + \left(e^{-\bm{h}\cdot \bm{s}}-1\right) \left(\nu_0-\sum_{k=1}^K\frac{\partial }{\partial s_k}\right){\tilde}{P}_t(\bm{s}). \label{eq:master_n_expone_Laplace}$$ Then, the Laplace representation  of $P_t(\bm{z})$, which is the solution of (\[eq:master\_n\_expone\_Laplace\]), allows us to obtain the Laplace representation ${\tilde}{Q}_{t}(s)$ of the intensity PDF $P_t(\nu)$ according to $${\tilde}{Q}_{t}(s) := \mathcal{L}_{1}[P_t(\nu); s] = \left< e^{-s(\nu_0+\sum_{k=1}^K{\hat{z}}_k)}\right> = e^{-\nu_0s} {\tilde}{P}_t\left(\bm{s}=(s,s,\dots,s)^{{\mathrm{T}}}\right)~.$$ General kernels --------------- ### Mapping to Markovian dynamics The above formulation can be generalized to general forms of the memory kernel. Let us decompose the kernel as a continuous superposition of exponential kernels, $$h(t) = \int_0^\infty \frac{n(\tau)}{\tau}e^{-t/\tau}d\tau~, \label{eq:continuous_decomposition_kernel}$$ where we have introduced a continuous set of time scales $\tau \in \bm{R}_+ := [0,\infty)$. With definition (\[deff-wrth2tg2\]), the branching ratio is now given by $$n:= \int_0^\infty n(\tau)d\tau~. \label{ryhj3yrgq}$$ Thus, the function $n(\tau)$ quantifies the contribution of the exponential with memory length $\tau$ to the branching ratio. We can then interpret $n(\tau)/n$ as a normalised distribution of time scales present in the memory kernel of the Hawkes process. As we show below, an important condition for solvability will be the existence of its first-order moment $$\alpha / n := \langle \tau \rangle:= \int_0^\infty \tau (n(\tau)/n) d\tau < \infty~. 
\label{rhr2bg2}$$ This condition (\[rhr2bg2\]) means that $n(\tau)$ should decay faster than $1/\tau^2$ at large $\tau$’s. Hence, the representation (\[eq:continuous\_decomposition\_kernel\]) implies that the memory kernel has to decay at large times faster than $1/t^2$. This covers situations where the variance of the time scales embedded in the memory kernel diverges. But this excludes the cases $h(t) \sim 1/t^{1+\theta}$ with $0<\theta<1$ that are relevant to the Omori law for earthquakes [@SorDeschatres04; @CraneSor08] and to the response to social shocks [@SaiSor2004]. This case $0<\theta<1$, for which $\alpha$ diverges, needs to be treated separately and is beyond the scope of the present work. We then decompose the intensity of the Hawkes process as a continuous sum of excess intensities ${\hat{z}}_t(\tau)$ $${\hat{\nu}}(t) = \nu_0 + \int_0^\infty d\tau {\hat{z}}_t(\tau)~.$$ Each excess intensity ${\hat{z}}_t(\tau)$ is the solution of a stochastic partial differential equation (SPDE) $$\frac{\partial {\hat{z}}_t(\tau)}{\partial t} = - \frac{{\hat{z}}_t(\tau)}{\tau} + \frac{n(\tau)}{\tau}{\hat{\xi}}^{{\mathrm{P}}}_{{\hat{\nu}}}, \label{eq:SPDE_generalHawkes}$$ where, as for the previous case of a discrete sum of exponentials, the same state-dependent Poisson noise ${\hat{\xi}}^{{\mathrm{P}}}_{{\hat{\nu}}}(t)$ defined by expression (\[hwtrgfq\]) acts on the Langevin equation for each excess intensity ${\hat{z}}_t(\tau)$. The set of SDEs  expresses the fact that the continuous field of excess intensities $\{{\hat{z}}_t(\tau)\}_{\tau\in \bm{R}_+}$ tends to relax to zero, but they are intermittently and simultaneously shocked by the shared shot noise term ${\hat{\xi}}^{{\mathrm{P}}}_{{\hat{\nu}}}$, with a $\tau$-dependent jump size $n(\tau)/\tau$. ### Field master equation The master equation corresponding to the SDE  can be derived by following the same procedure presented for the simple exponential case and for the discrete sum of exponentials. 
There is however a technical difference since the state of the system is now specified by the continuous field variable $\{{\hat{z}}_t(\tau)\}_{\tau \in \bm{R}_+}$. Thus, the probability density function is replaced with the probability density functional $P[\{{\hat{z}}_t(\tau)=z(\tau)\}_{\tau \in \bm{R}_+}] = P_t[\{z(\tau)\}_{\tau \in \bm{R}_+}]$. In other words, the probability that the system is in the state specified by $\{z(\tau)\}_{\tau \in \bm{R}_+}$ at time $t$ is characterized by $P_t[\{z(\tau)\}_{\tau \in \bm{R}_+}]\mathcal{D}z$ with functional integral volume element $\mathcal{D}z$. We use the notational convention that any mapping with square bracket $A[\{f(\tau)\}_{\tau\in \bm{R}_+}]$ indicates that the map $A$ is a functional of $\{f(\tau)\}_{\tau\in \bm{R}_+}$. In addition, we sometimes abbreviate the functional $P_t[\{z(\tau)\}_{\tau \in \bm{R}_+}]$ by $P_t[z]:= P_t[\{z(\tau)\}_{\tau \in \bm{R}_+}]$ for the sake of brevity. The presence of a continuous field variable leads to several technical issues, such as in the correct application of the Laplace transform. The functional Laplace transformation ${\mathcal{L}_{\mathrm{path}}}$ of an arbitrary functional $f[z]$ is defined by a functional integration (i.e., a path integral): $${\mathcal{L}_{\mathrm{path}}}\big[f[z]; s\big] := \int \mathcal{D}z e^{-\int_0^\infty d\tau s(\tau)z(\tau)}f[z] ~. \label{etujeyhgq}$$ This allows us to define the Laplace representation of the probability density functional by $${\tilde}{P}_t[s] := {\mathcal{L}_{\mathrm{path}}}\big[P_t[z]; s\big]\label{def:prob_functional_Laplace}$$ for an arbitrary nonnegative function $\{s(\tau)\}_{\tau\in \bm{R}_+}$. As the natural extension of Eq. 
, the master equation for the probability density functional is given by $$\frac{\partial P_t[z]}{\partial t} = \int_0^\infty d\tau\frac{\delta }{\delta z(\tau)}\frac{z(\tau)}{\tau}P_t[z] + \Bigg[ \left\{\nu_0+\int_0^\infty (z- n/\tau)d\tau \right\}P_t[z-n/\tau] - \left\{\nu_0+\int_0^{\infty} z d\tau \right\}P_t[z] \Bigg]\label{eq:master_gen_functional}$$ with the boundary condition $$P_t[z]\Big|_{z \in \partial \bm{R}^{\infty}_+} = 0$$ where the boundary of the function space is $\partial \bm{R}^{\infty}_+:= \{z | z(\tau) = 0 \mbox{ for some }\tau \in [0, \infty) \}$ (see Appendix \[sec:master\_eq\_gen\] for an explicit derivation). ### Laplace representation of the master equation In the functional Laplace representation , the master equation (\[eq:master\_gen\_functional\]) takes the form of the following simple first-order functional differential equation $$\frac{\partial {\tilde}{P}_t[s]}{\partial t} = \nu_0\left(e^{-\int_{0}^\infty d\tau' s(\tau')n(\tau')/\tau'} -1\right){\tilde}{P}_t[s] -\int_0^\infty d\tau \left(e^{-\int_{0}^\infty d\tau' s(\tau')n(\tau')/\tau'} -1 + \frac{s(\tau)}{\tau}\right)\frac{\delta {\tilde}{P}_t[s]}{\delta s(\tau)}~.\label{eq:master_gen_functional_Laplace}$$ ### General formulation All the above forms of the memory kernel can be unified by remarking that the variable transformation  is equivalent to a Laplace transform, since it can be rewritten as $$\begin{aligned} h(t) = \int_0^\infty \frac{1}{s}n\left(\frac{1}{s}\right)e^{-st}ds = \mathcal{L}_1\left[\frac{1}{s}n\left(\frac{1}{s}\right); t\right] \>\>\>\> \Longleftrightarrow \>\>\>\> n(\tau) = \frac{1}{\tau}\mathcal{L}^{-1}_1\left[h(t); s\right]\bigg|_{s=1/\tau}. \end{aligned}$$ This allows us to reformulate the several examples discussed above in the unified way presented in Table \[table\_examples\_general\]. 
Case $\displaystyle h(t)$ $\displaystyle n(\tau)$ ---------------------------------------------- ----------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------- Single exponential kernel $\displaystyle \frac{n_1}{\tau_1}e^{-t/\tau_1}$ $\displaystyle n_1\delta (\tau-\tau_1)$ Discrete superposition of exponential kernel $\displaystyle \sum_{k=1}^K\frac{n_k}{\tau_k}e^{-t/\tau_k}$ $\displaystyle \sum_{k=1}^Kn_k\delta (\tau-\tau_k)$ Power-law kernel $(\beta > 0)$ $\displaystyle \frac{n}{\tau^*} \frac{\beta}{(1+t/\tau^*)^{\beta+1}}$ $\displaystyle \frac{n}{\tau}\left(\frac{\tau^*}{\tau}\right)^{\beta}\frac{e^{-\tau^*/\tau}}{\Gamma(\beta)}$ : Examples of various memory kernels $h(t)$ and corresponding $n(\tau)$ defined in expression (\[eq:continuous\_decomposition\_kernel\]). []{data-label="table_examples_general"} Solution {#sec:solutions} ======== In section \[sec:ModelMasterEq\], we have derived the master equations and their Laplace representations for the Hawkes processes with arbitrary memory kernels. Remarkably, the Laplace representations are first-order partial (functional) differential equations. Because first-order partial (functional) differential equations can be formally solved by the method of characteristics (see Appendix \[sec:app:method\_of\_characterisics\] for a brief review), various analytical properties of the Hawkes process can be studied in detail. In this section, we present novel properties of the Hawkes process unearthed from the solution of the master equations by the method of characteristics. In particular, we focus on the behavior of the PDF of the steady-state intensity near the critical point $n=1$. 
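Before turning to the results, the correspondence between $h(t)$ and $n(\tau)$ in Table \[table\_examples\_general\] can be checked numerically. The sketch below verifies the power-law row for the illustrative values $\beta=2$, $\tau^*=1$, $n=0.8$ by quadrature in $u=\log\tau$, and also evaluates the first moment $\langle \tau \rangle$; a short calculation (our own check, not stated in the paper) gives $\langle \tau \rangle = \tau^*/(\beta-1)$ for $\beta>1$.

```python
import math

# illustrative values for the power-law row of the table
n, beta, tau_star = 0.8, 2.0, 1.0

def n_of_tau(tau):
    # density of time scales from the third row of the table
    return (n / tau) * (tau_star / tau) ** beta * math.exp(-tau_star / tau) / math.gamma(beta)

def h_exact(t):
    # power-law kernel from the third row of the table
    return (n / tau_star) * beta / (1.0 + t / tau_star) ** (beta + 1.0)

def log_trapezoid(f, u_min=-8.0, u_max=12.0, m=4000):
    # trapezoidal rule in u = log(tau); the Jacobian d tau = tau du is applied here
    du = (u_max - u_min) / m
    total = 0.0
    for k in range(m + 1):
        tau = math.exp(u_min + k * du)
        w = 0.5 if k in (0, m) else 1.0
        total += w * f(tau) * tau
    return total * du

def h_reconstructed(t):
    # h(t) = int_0^infty (n(tau)/tau) exp(-t/tau) d tau, eq. (continuous_decomposition_kernel)
    return log_trapezoid(lambda tau: n_of_tau(tau) / tau * math.exp(-t / tau))

errs = [abs(h_reconstructed(t) - h_exact(t)) / h_exact(t) for t in (0.5, 5.0, 50.0)]

# first moment <tau> = int tau n(tau) d tau / n; closed form tau*/(beta-1) for beta > 1
mean_tau = log_trapezoid(lambda tau: tau * n_of_tau(tau)) / n

print(max(errs), mean_tau)
```

Both checks are deterministic: the reconstructed kernel matches the power law to quadrature accuracy, and the first moment is finite here precisely because $\beta>1$, in line with the solvability condition (\[rhr2bg2\]).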
Under the condition of the existence of the first-order moment (\[rhr2bg2\]), an asymptotic analysis of the master equations shows that the PDF $P_{{\mathrm{ss}}}(\nu):=\lim_{t\to \infty} P_t(\nu)$ exhibits a power-law behavior with a non-universal exponent: $$P_{{\mathrm{ss}}}(\nu) \propto {1 \over \nu^{1-2\nu_0\alpha}}~,~~~~~{\rm with}~\alpha = n \langle \tau \rangle ~(\ref{rhr2bg2}) \label{eq:main_finding_power-law_gen}$$ for large $\nu$, up to an exponential truncation, which is pushed towards $\nu \to \infty$ as $n \to 1$. As the tail exponent is smaller than $1$, the steady-state PDF $P_{{\mathrm{ss}}}(\nu)$ is not normalizable without the exponential cutoff. However, the characteristic scale of the exponential tail diverges as the system approaches the critical point $n=1$, and the power-law tail (\[eq:main\_finding\_power-law\_gen\]) can be observed over many orders of magnitude of the intensity for near-critical systems, as we illustrate below. The parameter $\alpha = n \langle \tau \rangle$ entering in the expression of the tail exponent in expression (\[eq:main\_finding\_power-law\_gen\]) has been defined by expression (\[rhr2bg2\]). Since $\nu_0$ is the background intensity of the Hawkes process as defined in (\[def:Hawkes\_general\]), the exponent of $P_{{\mathrm{ss}}}(\nu)$ depends on $\nu_0\alpha = n \nu_0 \langle \tau \rangle$, which is $n$ times the average number of background events (or immigrants) occurring during a time equal to the average time scale $\langle \tau \rangle$ of the memory kernel. Thus, the larger the memory $\langle \tau \rangle$, the larger the background intensity $\nu_0$ and the larger the branching ratio $n$, the smaller is the exponent $1-2\nu_0\alpha$. 
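For the single-exponential kernel treated in the next subsections, this exponent can be probed directly at the level of the Laplace transform, by comparing the exact steady-state solution (an integral derived below) with its near-critical expansion $-2\nu_0\tau\log\left(1+s/(2\tau{\varepsilon})\right)$, whose inverse Laplace transform carries the power-law tail. The numerical sketch below uses illustrative parameters.

```python
import math

# illustrative near-critical parameters for the single-exponential kernel
n, tau, nu0 = 0.99, 1.0, 0.1
eps = 1.0 - n
s_max = 0.005                     # probe the small-s (large-intensity) regime

def integrand(sp):
    # s'/(exp(-n s'/tau) - 1 + s'/tau), with its s' -> 0 limit tau/(1-n)
    if sp == 0.0:
        return tau / (1.0 - n)
    return sp / (math.expm1(-n * sp / tau) + sp / tau)

def simpson(f, a, b, m=2000):     # composite Simpson rule, m even
    h = (b - a) / m
    acc = f(a) + f(b)
    for k in range(1, m):
        acc += (4 if k % 2 else 2) * f(a + k * h)
    return acc * h / 3.0

log_Q_exact = -(nu0 / tau) * simpson(integrand, 0.0, s_max)                # exact steady-state form
log_Q_asym = -2.0 * nu0 * tau * math.log(1.0 + s_max / (2.0 * tau * eps))  # near-critical expansion
rel = abs(log_Q_exact - log_Q_asym) / abs(log_Q_asym)
print(log_Q_exact, log_Q_asym, rel)
```

The use of `expm1` avoids catastrophic cancellation in the denominator at small $s'$, where the exact and expanded integrands agree to within a fraction of a percent at these parameter values.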
Note that $1-2\nu_0\alpha$ can even turn negative for $\nu_0\alpha > 1/2$, which corresponds to a non-monotonic PDF $P_{{\mathrm{ss}}}(\nu)$, which first grows according to the power law (\[eq:main\_finding\_power-law\_gen\]) before decaying exponentially at very large $\nu$’s. In simple terms, the PDF (\[eq:main\_finding\_power-law\_gen\]) describes the distribution of the number $\nu dt$ of events in the limit of infinitely small time windows $[t, t+dt]$. We should contrast this limit with the other previously studied limits of infinitely large, or finite but very large, time windows. Standard results of branching processes (of which the Hawkes model is a subset) give the total number of events generated by a given triggering event (see Ref. [@SaiHSor2005] for a detailed derivation). In equation (\[def:Hawkes\_general\]), this corresponds to counting all the events over an infinitely large time window that are triggered by a single source event $\nu_0 =\delta(t)$ occurring at the origin of time. Ref. [@SaiSor2006] has studied the distribution of “seismic rates” in the limit of large time windows which, in our current formulation, corresponds to the distribution of $N(t) := \int_t^{t+T} \nu(\tau) d\tau$, in the limit of large $T$’s. The corresponding probability density distributions are totally different from (\[eq:main\_finding\_power-law\_gen\]), which corresponds to the other limit $T \to 0$. We derive our main result (\[eq:main\_finding\_power-law\_gen\]) first for the single exponential form of the memory kernel, then for the discrete sum of exponentials and then for the general case. Single exponential kernel ------------------------- As the first example, we focus on the single exponential kernel . 
While this special case is analytically tractable without the need of the master equation approach [@Dassios2011], we nevertheless derive its exact solution via the master equation, because the methodology will be readily generalized to the more complex cases. ### Steady state solution Let us first study the steady-state solution for the PDF $P_{{\mathrm{ss}}}(\nu)$. By setting $K=1$ in Eq. , we obtain the expression of the Laplace transform of the steady state ${\tilde}{P}_{{\mathrm{ss}}}(s):=\int_{0}^\infty dz\, e^{-sz}P_{{\mathrm{ss}}}(z)$ of the master equation (\[eq:master\_exp\]) in the form of a first-order ordinary differential equation $$\left(e^{-ns/\tau}-1+\frac{s}{\tau}\right)\frac{d{\tilde}{P}_{ss}(s)}{ds} = \nu_0 \left(e^{-ns/\tau}-1\right){\tilde}{P}_{ss}(s). \label{eq:maseter_single_expon_steady}$$ By solving this equation, we obtain the exact steady-state solution below the critical point $n<1$, $$\log {\tilde}{Q}_{{\mathrm{ss}}}(s) = -s\nu_0 + \log {\tilde}{P}_{{\mathrm{ss}}}(s) = -\frac{\nu_0}{\tau}\int_0^s \frac{sds}{e^{-ns/\tau}-1+s/\tau}\label{eq:exact_solution_single_expon_steady}$$ with the normalization condition $$\int_0^\infty P_{{\mathrm{ss}}}(\nu)\, d\nu = {\tilde}{Q}_{{\mathrm{ss}}}(s=0) = 1.$$ #### Near the critical point. Let us evaluate the asymptotic behavior of ${\tilde}{Q}_{{\mathrm{ss}}}(s)$ for small $s$ (corresponding to large $\nu$) by assuming that the system is in the near-critical state, such that $${\varepsilon}:= 1-n \ll 1.$$ By performing an expansion in the small parameter ${\varepsilon}$, we obtain an asymptotic formula for small $s$ (large $\nu$), $$\log {\tilde}{Q}_{{\mathrm{ss}}}(s) \simeq -\frac{\nu_0}{\tau}\int_{0}^s \frac{ds}{{\varepsilon}/\tau + s/(2\tau^2)} = -2\nu_0 \tau\log\left(1+\frac{s}{2\tau {\varepsilon}}\right),$$ which implies a power-law behavior with a non-universal exponent, up to an exponential truncation: $$P_{{\mathrm{ss}}}(\nu) \propto \nu^{-1+2\nu_0 \tau}~ e^{-2 \tau {\varepsilon}\nu} ~. 
\label{eq:power-law_single_expon_steady}$$ for large $\nu$. This is a special case of expression  obtained for $n \to 1$ and $\langle \tau \rangle = \tau$. It is remarkable that the power-law exponent is less than one, and thus the PDF is not normalizable without the exponential truncation. This means that the power-law scaling actually corresponds to an intermediate asymptotics of the PDF, according to the classification of Barenblatt [@Barenblatt]. In addition, while this scaling can be regarded as a heavy “tail” for $2\nu_0 \tau<1$, the exponent can be negative when $2\nu_0 \tau>1$ (i.e., the PDF is a power-law increasing function until the exponential tail takes over and ensures the normalisation of the PDF). The characteristic scale of the exponential truncation is defined by $$\nu_{\mathrm{cut}} := \frac{1}{2\tau {\varepsilon}} = \frac{1}{2\tau (1-n)}~,$$ which diverges as the system approaches the critical condition $n=1$. This means that (i) if the system is in a near-critical state ${\varepsilon}\ll 1$ and (ii) the background intensity is sufficiently small $\nu_0<1/(2\tau)$, one can actually observe the power-law intermediate asymptotics for a wide range $\nu \ll \nu_{\mathrm{cut}} = O({\varepsilon}^{-1})$, up to the exponential truncation. #### Numerical verification. ![ Numerical evaluation of the steady-state PDF of the intensity ${\hat{\nu}}$ for the following parameter sets near the critical point: $n=0.999$ (blue bars) and $n=0.99$ (red bars). The theoretical power law is shown by the green straight line. (a) Background intensity $\nu_0=0.01$, relaxation time $\tau=1$, leading to the power-law exponent $0.98$. (b) $\nu_0=0.2$, $\tau=1$, leading to the power-law exponent $0.6$. (c) $\nu_0=1.0$, $\tau=1$, leading to the negative (i.e. growing) power-law exponent $-1.0$. For all simulations, the sampling time interval and total sampling time are $dt=0.001$ and $T_{\mathrm{tot}}=10000$ with the initial condition ${\hat{z}}(0)=0$. 
The initial 10% of the sample trajectories were discarded from the statistics. []{data-label="fig:Simulation_PDFs_SingleExpon"}](SingleExponPDFs.eps){width="180mm"} We now present numerical confirmations of our theoretical prediction, in particular for the intermediate asymptotics as shown in Fig. \[fig:Simulation\_PDFs\_SingleExpon\]. There is a well-developed literature on the numerical simulation of Hawkes processes [@Harte2010; @DassiosZhao13]. We have used an established simulation package for Python called “tick” (version 0.6.0.0) for 32-thread parallel computing. The total simulation time was $10^4$ seconds and the sampling time interval was $0.001$ second. We note that the initial 10% of the sampled trajectories were discarded for initialization. For small background intensity $\nu_0\ll 1/\tau$, we obtain an approximate universal exponent $-1$. For $2\nu_0 \tau <1$, we observe a decaying power law of exponent $1-2\nu_0\tau$, while we observe a growing power law for $2\nu_0 \tau >1$. The power-law intermediate asymptotics is truncated by the exponential function, as predicted and also discussed in Ref. [@BouchaudTradebook2018] (albeit with the error of missing the $1$ in the exponent and thus failing to describe the intermediate asymptotics), ensuring the normalization of the PDF of the Hawkes intensities. ### Time-dependent solution \[htgb12fv1\] We now present the exact solution of the time-dependent master equation. In the Laplace representation, the dynamics of the PDF of the intensities is given by the following first-order PDE, $$\frac{\partial {\tilde}{P}_t(s)}{\partial t} + \left(e^{-ns/\tau}-1+\frac{s}{\tau}\right)\frac{\partial {\tilde}{P}_t(s)}{\partial s} = \nu_0 \left(e^{-ns/\tau}-1\right){\tilde}{P}_t(s). \label{eq:time_dependent_master_Laplace_one_expon}$$ This equation can be solved by the method of characteristics (see Appendix \[sec:app:method\_of\_characterisics\] for a brief review). 
The corresponding Lagrange-Charpit equations are given by $$\begin{aligned} \frac{ds}{dt} = e^{-ns/\tau}-1+\frac{s}{\tau}, \>\>\> ~~~~~~~~ \frac{d\Phi}{dt} = \nu_0\left(e^{-ns/\tau}-1 \right),~~~{\rm with}~~ \Phi := \log {\tilde}{P}~. \end{aligned}$$ These equations can be solved explicitly, $$\begin{aligned} t = {\mathcal{F}}(s) + C_1, \>\>\>\> ~~~~~~ \Phi = \nu_0s - \frac{\nu_0}{\tau}\int_0^s \frac{s'ds'}{e^{-ns'/\tau}-1+s'/\tau} +C_2 \end{aligned}$$ with $${\mathcal{F}}(s) := \int_{s_0}^s\frac{ds'}{e^{-ns'/\tau}-1+s'/\tau}. \label{yi,m5i74jhnb2wg}$$ $C_1$ and $C_2$ are constants of integration and $s_0$ is a positive constant chosen to satisfy several convenient properties discussed below. #### Summary of the properties of ${\mathcal{F}}$. We present several analytical properties of ${\mathcal{F}}(s)$ (see Appendix \[app:sec:mcF\_characters\_proof\] for their proof): 1. ${\mathcal{F}}(s)$ is a monotonically increasing function, provided $s_0>0$ is chosen appropriately. 2. The inverse function ${\mathcal{F}}^{-1}(s)$ can be defined uniquely. In addition, for the sub-critical case $n<1$, the following properties hold true: 1. $s_0$ can be set to any positive value. 2. $\lim_{s\to +0}{\mathcal{F}}(s) = -\infty$. 3. $\lim_{s\to \infty}{\mathcal{F}}(s) = +\infty$. 4. ${\mathcal{F}}(s)$ can take all real values: ${\mathcal{F}}(s)\in (-\infty,\infty)$ for $s>0$. #### Regularization of ${\mathcal{F}}(s)$. In the following, we assume the sub-critical condition $n<1$. 
It is useful to decompose ${\mathcal{F}}(s)$ into regular and singular parts: $$\mathcal{F}(s) = \int_{s_0}^{s} \frac{ds}{e^{-ns/\tau}-1+s/\tau} \underbrace{- \int_{s_0}^s \frac{ds}{(1-n)s/\tau} + \frac{\tau}{1-n}\log\frac{s}{s_0}}_{\mbox{identically zero}} = \underbrace{\frac{\tau}{1-n}\int_{s_0}^{s} ds\frac{1-e^{-ns/\tau}-ns/\tau}{s(e^{-ns/\tau}-1+s/\tau)}}_{\mbox{regular part}} + \underbrace{\frac{\tau}{1-n}\log \frac{s}{s_0}}_{\mbox{singular part}},$$ where the regular part is well-defined even for $s_0 \to 0$. This expression is useful since the divergent factor in ${\mathcal{F}}(s)$ can be renormalized into the integration constant $C_1$, such that $$C_1 -\frac{\tau}{1-n}\log s_0 \to C_1.$$ We then take the formal limit $s_0\to 0$ and use the following regularized expression for the sub-critical condition $n<1$, $$\mathcal{F}(s) = \mathcal{F}_{\rm R}(s) + \frac{\tau}{1-n}\log s, \>\>\> \mathcal{F}_{\rm R}(s) := \frac{\tau}{1-n}\int_{0}^{s} ds\frac{1-e^{-ns/\tau}-ns/\tau}{s(e^{-ns/\tau}-1+s/\tau)},\label{eq:F_transform_under_critical}$$ where the $s_0$-dependence is removed as the result of the renormalization. The regular part has no singularity at $s\simeq 0$, and reads ${\mathcal{F}}_{\rm R}(s) \simeq -n^2s/\{2(1-n)^2\}$. #### Explicit solution. Building on the above, we now provide the solution of the master equation . According to the method of characteristics (see Appendix. \[sec:app:method\_of\_characterisics\] for a brief review), the general solution is given by $$C_2 = {\mathcal{H}}(C_1)$$ with an arbitrary function ${\mathcal{H}}(\cdot)$. The time-dependent solution $\log {\tilde}{Q}_t(s) = \log {\tilde}{P}_t(s) -\nu_0s$ is thus given by $$\log {\tilde}{Q}_t(s) = -\frac{\nu_0}{\tau}\int_{0}^s \frac{sds}{e^{-ns/\tau}-1+s/\tau} + \mathcal{H}\left(t - \mathcal{F}(s)\right). \label{eq:solution_time_dependent_single_expon}$$ The function ${\mathcal{H}}(\cdot)$ is determined by the initial condition.
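The renormalization entering the solution above can be illustrated numerically: as $s_0\to 0$, the combination ${\mathcal{F}}(s) + \frac{\tau}{1-n}\log s_0$ converges to ${\mathcal{F}}_{\rm R}(s) + \frac{\tau}{1-n}\log s$, since ${\mathcal{F}}_{\rm R}(s_0)\to 0$. A sketch with illustrative values $n=0.5$, $\tau=1$ (the small-$u$ limit used for the regular integrand is obtained by Taylor expansion):

```python
import math

N, TAU = 0.5, 1.0
PRE = TAU / (1.0 - N)                      # prefactor tau / (1 - n)

def D(u):
    """Denominator exp(-n u / tau) - 1 + u / tau (positive for u > 0 when n < 1)."""
    return math.exp(-N * u / TAU) - 1.0 + u / TAU

def simpson(f, a, b, m=4001):
    h = (b - a) / (m - 1)
    s = f(a) + f(b)
    for i in range(1, m - 1):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

def F(s, s0):
    """Direct quadrature of F(s) on a logarithmic grid u = exp(v)."""
    return simpson(lambda v: math.exp(v) / D(math.exp(v)),
                   math.log(s0), math.log(s))

def F_reg(s):
    """Regular part F_R(s); the integrand has a finite limit at u = 0."""
    def g(u):
        if u < 1e-12:
            return -N * N / (2.0 * TAU * (1.0 - N))   # Taylor limit of the integrand
        return (1.0 - math.exp(-N * u / TAU) - N * u / TAU) / (u * D(u))
    return PRE * simpson(g, 0.0, s)

s = 2.0
target = F_reg(s) + PRE * math.log(s)
for s0 in (1e-1, 1e-2, 1e-3):
    print(F(s, s0) + PRE * math.log(s0) - target)     # shrinks as s0 -> 0
```

The printed residual equals $-{\mathcal{F}}_{\rm R}(s_0)$ up to quadrature error, and vanishes linearly in $s_0$, confirming that the $s_0$-dependence is absorbed into the renormalized constant.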
Let us assume that the initial PDF and its Laplace representation are given by $P_{t=0}(\nu)$ and ${\tilde}{P}_{t=0}(s)$, respectively. Then, we obtain $${\mathcal{H}}(-{\mathcal{F}}(s)) = \log {\tilde}{P}_{t=0}(s) + \frac{\nu_0}{\tau}\int_{0}^s \frac{sds}{e^{-ns/\tau}-1+s/\tau}$$ or equivalently, $$\mathcal{H}(x) = \log {\tilde}{P}_{t=0}\left(S(x)\right) + \frac{\nu_0}{\tau}\int_0^{S(x)} \frac{ sds}{e^{-ns/\tau}-1+s/\tau}, \>\>\> S(x) = \mathcal{F}^{-1}(-x).$$ Note that the time-dependent solution  is consistent with the steady solution  $$\lim_{t\to \infty}{\tilde}{Q}_t(s) = {\tilde}{Q}_{{\mathrm{ss}}}(s),$$ since $\lim_{x\to +\infty}{\mathcal{H}}(x)=0$ (see Appendix. \[app:sec:convergence\_to\_steady\] for the proof). We also note that, from the time-dependent solution  , we can derive the dynamics of the intensity ${\hat{\nu}}(t)$ for finite $t$ as $${\langle}{\hat{\nu}}(t){\rangle}= \nu_{\rm ini}e^{-(1-n)t/\tau} + \nu_0\frac{1-e^{-(1-n)t/\tau}}{1-n} \label{eq:avg_nu_single_expon_time_dependent}$$ with the initial condition ${\hat{\nu}}(0)=\nu_{\rm ini}$ (see Appendix. \[app:sec:average\_nu\_for\_finite\_time\_single\_expon\] for the derivation). This expression (\[eq:avg\_nu\_single\_expon\_time\_dependent\]) shows that the mean intensity converges at long times $t \to +\infty$ to ${\langle}{\hat{\nu}}(t){\rangle}\to \nu_0 / (1-n)$, which is a well-known result [@DalayVere03; @HelmsSor02]. Expression (\[eq:avg\_nu\_single\_expon\_time\_dependent\]) also shows that an initial impulse decays exponentially with a renormalised decay time $\tau/(1-n)$, which is also consistent with previous reports [@Escobar2015]. This diverging time scale $\tau/(1-n)$, as $n \to 1$, reflects the occurrence of all the generations of triggered events that renormalise the “bare” memory function into a “dressed” memory kernel with much longer memory. #### Asymptotic relaxation dynamics for large $t$.
The time-dependent asymptotic solution is given for large $t$ by $$\log {\tilde}{P}_t(s) \simeq -\frac{\nu_0}{\tau}\int_{S(t,s)}^s \frac{sds}{e^{-ns/\tau}-1+s/\tau} +\log {\tilde}{P}_{t=0} \left(S(t,s)\right),\>\>\> S(t,s) = s\exp\left[-\frac{1-n}{\tau}\left(t-\mathcal{F}_{\rm R}(s)\right)\right],\label{eq:asymptotic_relaxation_single_expon}$$ assuming that $t\gg {\mathcal{F}}(s)$ for a given $s$. As a corollary of this formula, an asymptotic prediction for the distribution conditional on the initial intensity is given by the following formula $$P(\nu, t| \nu_{\rm ini}, t=0) = \mathcal{L}^{-1}_{1}\left[{\tilde}{P}_t(s); t\right], \>\>\> \log {\tilde}{P}_{t=0}(s) = -\nu_{\rm ini}s$$ for large $t$. Note that the asymptotic convergence of these formulae is not uniform in terms of $s$; indeed, the convergence of the Laplace representation for large $s$ is slower than that for small $s$. ### Another derivation of the power law exponent: linear stability analysis of the Lagrange-Charpit equation {#sec:single_expon_power-law_stability_analysis} We have presented both the steady state and time-dependent solutions of the master equations, based on exact or asymptotic methods. While these formulations are already clear, here we revisit the power-law behavior  of the steady state PDF, and present another derivation based on the linear stability analysis of the Lagrange-Charpit equation, which has the advantage of being generalisable to memory kernels defined as superposition of exponential functions. Indeed, while the derivation based on the exact solution  is clear and powerful, it is not easy to extend this kind of calculation to general cases, such as superposition of exponential kernels. In contrast, the derivation that we now present can be extended to arbitrary forms of the memory kernel of the Hawkes processes, as will be shown later. Moreover, we have found additional distinct derivations of and we refer the interested reader to Appendix.
\[app:sec:various\_derivation\_power\_law\_single\_expon\]. While the steady state master equation  is an ordinary differential equation which can be solved exactly, let us consider its corresponding Lagrange-Charpit equations, $$\frac{ds}{dl} = -e^{-ns/\tau}+1-\frac{s}{\tau}, \>\>\> \frac{d}{dl}\log {\tilde}{P}_{{\mathrm{ss}}} = -\nu_0\left(e^{-ns/\tau}-1\right) \label{ryjty3hqtbq}$$ where we introduce the parameter $l$ of the characteristic curve. These equations can be regarded as describing a “dynamical system" in terms of the auxiliary “time" $l$. This formulation is useful because the well-developed theory of dynamical systems is applicable even to more general cases as shown later. ![ Schematic illustration of the vector field $V(s):=ds/dl=-e^{-ns/\tau}+1-s/\tau$ along the $s$ dimension as a function of “time” $l$. (a) Near critical condition ${\epsilon}:=1-n \ll 1$, two fixed points exist at $s=0$ (attractor) and $s\simeq -2\tau{\epsilon}$ (repeller). (b) At the critical condition $n=1$, the repeller merges with the attractor, which corresponds to a transcritical bifurcation. []{data-label="fig:phaseSpace1D"}](phaseSpace1D.eps){width="120mm"} #### Sub-critical condition $n<1$. Let us first focus on the sub-critical case $n<1$ and consider the expansion of equations (\[ryjty3hqtbq\]) around $s=0$, which leads to $$\frac{ds}{dl} \simeq -\frac{1-n}{\tau}s - \frac{n^2s^2}{2\tau^2} + \dots, \>\>\> \frac{d}{dl}\log {\tilde}{P}_{{\mathrm{ss}}} \simeq \frac{n\nu_0}{\tau}s + \dots.$$ The corresponding flow of this effective dynamical system $s(l)$ along the $s$ axis as a function of “time” $l$ is illustrated in Fig. \[fig:phaseSpace1D\]. Near the critical condition ${\epsilon}= 1-n\ll 1$, this “dynamical system" has two fixed points $V(s)=0$ at $$s = 0, \>\>\> s\simeq -2\tau {\epsilon}.$$ The former is a stable attractor whereas the latter is an unstable repeller (see Fig. \[fig:phaseSpace1D\]a).
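The two fixed points and their stability can be verified directly on $V(s)$; the sketch below uses the illustrative near-critical values $n=0.95$, $\tau=1$ (so ${\epsilon}=0.05$):

```python
import math

N, TAU = 0.95, 1.0          # near-critical: epsilon = 1 - n = 0.05
EPS = 1.0 - N

def V(s):
    """Vector field V(s) = -exp(-n s/tau) + 1 - s/tau of the Lagrange-Charpit flow."""
    return -math.exp(-N * s / TAU) + 1.0 - s / TAU

def bisect(f, a, b, it=80):
    """Simple bisection, assuming a sign change of f on [a, b]."""
    for _ in range(it):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def dV(s, h=1e-6):          # numerical derivative of V
    return (V(s + h) - V(s - h)) / (2 * h)

repeller = bisect(V, -0.2, -0.02)    # nontrivial root, near s = -2 tau epsilon
print(repeller)                      # close to -2 * tau * epsilon = -0.1
print(dV(0.0) < 0, dV(repeller) > 0) # s = 0 attracting, the other root repelling
```

Since $ds/dl = V(s)$, a negative slope of $V$ at a fixed point means attraction and a positive slope means repulsion, which reproduces the picture of Fig. \[fig:phaseSpace1D\]a (the $O({\epsilon}^2)$ correction to the repeller position explains the small offset from $-2\tau{\epsilon}$).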
Remarkably, the critical condition $n=1$ for the Hawkes process corresponds to the condition of a transcritical bifurcation (i.e., the repeller merges with the attractor; see Fig. \[fig:phaseSpace1D\]b) for the “dynamical system" described by the Lagrange-Charpit equations. This picture is useful because it can be straightforwardly generalized to more general memory kernels $h(t)$, as shown later. Let us neglect the sub-leading contribution to obtain the general solution as $$s = e^{-(1-n)(l-l_0)/\tau}, \>\>\> \log {\tilde}{P}_{{\mathrm{ss}}} \simeq \frac{n\nu_0}{\tau}\int dl~ s(l) + C$$ with constants of integration $l_0$ and $C$. In the following, we set the initial “time" (i.e., the initial point on the characteristic curve) as $l_0=0$. We then obtain $$\log {\tilde}{P}_{{\mathrm{ss}}} \simeq -\frac{n\nu_0 s}{1-n} + C$$ with constant of integration $C$. This constant is fixed by the condition of normalization of the PDF, given by $\log {\tilde}{P}_{{\mathrm{ss}}} = 0$ for $s=0$, which imposes $C=0$. We thus obtain $$\log {\tilde}{Q}_{{\mathrm{ss}}}(s) = - \nu_0s + \log {\tilde}{P}_{{\mathrm{ss}}}(s) \simeq -\frac{\nu_0 s}{1-n}, \label{yjnh23rb2}$$ which is consistent with the asymptotic mean intensity in the steady state (see the long time limit of Eq. ). #### At criticality $n=1$. For $n=1$, the lowest-order contribution in the Lagrange-Charpit equation is given by $$\frac{ds}{dl} \simeq -\frac{s^2}{2\tau^2} \>\>\> \Longrightarrow \>\>\> s = \frac{2\tau^2}{l-l_0}$$ with constant of integration $l_0$. In the following, we set $l_0=0$ as the initial point on the characteristic curve. We then obtain $$\begin{aligned} \log {\tilde}{P}_{{\mathrm{ss}}} = \frac{\nu_0}{\tau}\int dl ~s(l) + C \simeq -2\nu_0\tau \log |s| + C \end{aligned}$$ with the constant of integration $C$. The constant is a “divergent" constant since it has to compensate the diverging logarithm to ensure that $\log P_{{\mathrm{ss}}}(s=0)=0$.
This “divergent" constant appears as a result of neglecting the ultra-violet (UV) cutoff for small $s$ (which corresponds to neglecting the exponential tail of the PDF of intensities). By ignoring the divergent constant $C$, we obtain the intermediate asymptotics, $$P_{{\mathrm{ss}}}(\nu) \propto \nu^{-1+2\nu_0\tau},$$ which recovers the leading power law intermediate asymptotics (\[eq:power-law\_single\_expon\_steady\]), which is a special case of the general solution (\[eq:main\_finding\_power-law\_gen\]). Double exponential kernel {#sec:LinearStabilityOfLagrangeCharpitTwoExpon} ------------------------- We now consider the case where the memory function (\[yjtyhnwrgbqbq\]) is made of $K=2$ exponential functions. Since the Laplace representation of the master equation is still a first-order partial differential equation, its solution can be formally obtained by the method of characteristics (see Appendix. \[sec:app:method\_of\_characterisics\] for a short review). Unfortunately, the time-dependent Lagrange-Charpit equations can no longer be solved exactly in explicit form. We therefore focus on the steady state solution of the master equation, with particular attention to the regime close to the critical point. We develop the stability analysis of the Lagrange-Charpit equations following the same approach as in Sec. \[sec:single\_expon\_power-law\_stability\_analysis\]. Let us start from the Lagrange-Charpit equations, which are given by \[eq:LagrangeCharpit\_2expon\] $$\begin{aligned} \frac{ds_1}{dl} &= -e^{-(n_1s_1/\tau_1+n_2s_2/\tau_2)} + 1 - \frac{s_1}{\tau_1}, \\ \frac{ds_2}{dl} &= -e^{-(n_1s_1/\tau_1+n_2s_2/\tau_2)} + 1 - \frac{s_2}{\tau_2}, \\ \frac{d\Phi}{dl} &= -\nu_0 \left(e^{-(n_1s_1/\tau_1+n_2s_2/\tau_2)} - 1\right)~~~~~{\rm with}~~ \Phi := \log {\tilde}{P}_{{\mathrm{ss}}} \end{aligned}$$ and $l$ is the auxiliary “time” parameterising the position on the characteristic curve. Let us develop the stability analysis around $s=0$ (i.e.
for large $\nu$’s) for this pseudo dynamical system. #### Sub-critical case $n<1$. Assuming $n := n_1+n_2 < 1$, let us first consider the linearized dynamics of system (\[eq:LagrangeCharpit\_2expon\]) as $$\frac{d\bm{s}}{dl} \simeq -\bm{H} \bm{s}, \>\>\> \frac{d\Phi}{dl} \simeq \nu_0\bm{K}\bm{s} \label{trhyr2hgbqb}$$ with $$\bm{s} := \begin{pmatrix} s_1\\ s_2 \end{pmatrix}, \>\>\> \bm{H} := \begin{pmatrix} \frac{1-n_1}{\tau_1}& \frac{-n_2}{\tau_2}\\ \frac{-n_1}{\tau_1}& \frac{1-n_2}{\tau_2} \end{pmatrix}, \>\>\> \bm{K} := \begin{pmatrix} \frac{n_1}{\tau_1}, \frac{n_2}{\tau_2} \end{pmatrix}.$$ Regarding this system as a dynamical system with the auxiliary “time" $l$, its qualitative dynamics can be illustrated by its phase space depicted in Fig. \[fig:phaseSpace\_twoExpon\]. In the subcritical case $n<1$, the origin $\bm{s}=(0,0)$ is “attractive" since all the eigenvalues of $\bm{H}$ are positive (Fig. \[fig:phaseSpace\_twoExpon\]a). ![ Qualitative representation of the Lagrange-Charpit equations in phase space. By rewriting $d\bm{s}/dl := \bm{V}(\bm{s}) \simeq -\bm{H}\bm{s}$, the “velocity" vector field $\bm{V}(\bm{s})$ is plotted in the phase space $(s_1, s_2)$. (a) Subcritical case with $(\tau_1,\tau_2,n_1,n_2)=(1,3,0.3,0.1)$, showing that $\bm{s}=\bm{0}$ is a stable attractor. (b) Critical case with $(\tau_1,\tau_2,n_1,n_2)=(1,3,0.3,0.7)$, showing that the $\bm{e}_1$ direction is marginal in terms of the linear stability analysis (i.e., the repeller merges with the attractor, which corresponds to a transcritical bifurcation in dynamical systems). []{data-label="fig:phaseSpace_twoExpon"}](phaseSpace2D.eps){width="100mm"} Let us introduce the eigenvalues $\lambda_1, \lambda_2$ and eigenvectors $\bm{e}_1, \bm{e}_2$ of $\bm{H}$, such that $$\bm{P}:= \begin{pmatrix} \bm{e}_1, \bm{e}_2 \end{pmatrix}, \>\>\> \bm{P}^{-1}\bm{H}\bm{P} = \begin{pmatrix} \lambda_1& 0 \\ 0& \lambda_2 \end{pmatrix}.$$ Because all eigenvalues are real (see Appendix.
\[app:sec:proof\_eigenvalues\_H\_real\] for the proof), we denote $\lambda_1\leq \lambda_2$. The determinant of $\bm{H}$ is given by $$\det \bm{H} = \frac{1-n}{\tau_1\tau_2}.$$ This means that the zero eigenvalue $\lambda_1=0$ appears at the critical point $n=1$. Below the critical point $n<1$, all the eigenvalues are positive ($\lambda_1, \lambda_2>0$). For $n<1$, the dynamics can be rewritten as $$\frac{d}{dl}\bm{P}^{-1}\bm{s} = -\begin{pmatrix} \lambda_1& 0 \\ 0& \lambda_2 \end{pmatrix} \bm{P}^{-1}\bm{s} \>\>\> \Longrightarrow \>\>\> \bm{s}(l) = \bm{P} \begin{pmatrix} e^{-\lambda_1 (l-l_0)} \\ e^{-\lambda_2 (l-l_0)}/C_1 \end{pmatrix}$$ with constants of integration $l_0$ and $C_1$. We can assume $l_0=0$ as the initial point of the characteristic curve without loss of generality. Integrating the second equation in (\[trhyr2hgbqb\]), we obtain $$\Phi = \nu_0\bm{K}\int \bm{s}(l)dl + C_2 = -\nu_0\bm{K}\bm{P} \begin{pmatrix} 1/\lambda_1& 0 \\ 0& 1/\lambda_2 \end{pmatrix} \bm{P}^{-1}\bm{s} + C_2 = -\nu_0\bm{K}\bm{H}^{-1}\bm{s} + C_2.$$ The general solution is given by $$\mathcal{H}(C_1) = C_2$$ with a function $\mathcal{H}$ determined by the initial condition on the characteristic curve. Let us introduce $$\bar{s} := \bm{P}^{-1}\bm{s} = \begin{pmatrix} \bar{s}_1\\ \bar{s}_2 \end{pmatrix} \>\>\> \Longrightarrow \>\>\> C_1 = \left(\bar{s}_1\right)^{\lambda_2/\lambda_1}\left(\bar{s}_2\right)^{-1} .$$ This means that the solution is given by the following form: $$\Phi(\bm{s}) = -\nu_0\bm{K}\bm{H}^{-1}\bm{s} + \mathcal{H}\left(\left(\bar{s}_1\right)^{\lambda_2/\lambda_1}\left(\bar{s}_2\right)^{-1}\right).$$ Because of the normalization of the PDF, the following relation must hold $$\lim_{\bm{s}\to \bm{0}} \Phi(\bm{s}) = 0$$ for any path in the $(s_1, s_2)$ space ending on the origin (limit $\bm{s}\to \bm{0}$).
Let us consider the specific limit such that $\bar{s}_1\to 0$ with $\bar{s}_2=x^{-1}(\bar{s}_1)^{\lambda_2/\lambda_1}$ for an arbitrary positive $x$: $$\lim_{\bar{s}_1\to 0} \Phi(\bm{s}) = \mathcal{H}(x).$$ Since the left-hand side (LHS) is zero for any $x$, the function $\mathcal{H}(\cdot)$ must be identically zero. With $\Phi := \log {\tilde}{P}_{{\mathrm{ss}}}$ as defined in (\[eq:LagrangeCharpit\_2expon\]), this leads to $$\log {\tilde}{P}_{{\mathrm{ss}}} (\bm{s}) = -\nu_0\bm{K}\bm{H}^{-1}\bm{s}.$$ By substituting with the special $\bm{s}=(s_1=s, s_2=s)^{\rm{T}}$, we obtain $$\log {\tilde}{Q}_{{\mathrm{ss}}}(s) = -\nu_0 s + \Phi \left( s(1,1)^{\rm{T}} \right) \simeq -\frac{\nu_0}{1-n}s \label{temjkymku5jmn3}$$ for small $s$, which recovers expression (\[yjnh23rb2\]) derived above. #### Critical case $n=1$. In this case, the eigenvalues and eigenvectors of $\bm{H}$ are given by $$\lambda_1 = 0, \>\>\> \lambda_2 = \frac{n_1\tau_1+n_2\tau_2}{\tau_1\tau_2}, \>\>\> \bm{e}_1 = \begin{pmatrix} \tau_1 \\ \tau_2 \end{pmatrix}, \>\>\> \bm{e}_2 = \begin{pmatrix} -n_2 \\ n_1 \end{pmatrix}.$$ This means that the eigenvector matrix and its inverse are respectively given by $$\bm{P} = \begin{pmatrix} \tau_1 & -n_2 \\ \tau_2 & n_1 \end{pmatrix}, \>\>\> \bm{P}^{-1} = \frac{1}{\alpha} \begin{pmatrix} n_1 & n_2 \\ -\tau_2 & \tau_1 \end{pmatrix}, \>\>\> \alpha := \det \bm{P} = \tau_1n_1 + \tau_2n_2. \label{teumk5im4jnw}$$ This value of $\alpha$ is the special case for two exponentials of the general definition (\[rhr2bg2\]). Accordingly, let us introduce $$\bm{x} = (x,y)^{\rm{T}} = \bm{P}^{-1}\bm{s}, \>\>\> \Longleftrightarrow \>\>\> x = \frac{n_1 s_1 + n_2 s_2}{\alpha}, \>\>\> y = \frac{-\tau_2 s_1 + \tau_1 s_2}{\alpha}.$$ We then obtain $$\frac{dx}{dl} = 0, \>\>\> \frac{dy}{dl} = -\lambda_2 y$$ at the leading linear order in expansions in powers of $x$ and $y$.
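This critical eigenstructure is easy to check numerically for the $2\times 2$ matrix $\bm{H}$; a sketch with the critical illustrative values $(\tau_1,\tau_2,n_1,n_2)=(1,3,0.3,0.7)$ of Fig. \[fig:phaseSpace\_twoExpon\]b:

```python
import math

T1, T2, N1, N2 = 1.0, 3.0, 0.3, 0.7       # critical: n1 + n2 = 1

# H as defined in the linearized Lagrange-Charpit equations
H = [[(1 - N1) / T1, -N2 / T2],
     [-N1 / T1, (1 - N2) / T2]]

tr = H[0][0] + H[1][1]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2   # eigenvalues of the 2x2 matrix

print(det)     # equals (1 - n) / (tau1 * tau2), i.e. 0 at criticality
print(lam1)    # the zero eigenvalue
print(lam2)    # equals (n1 tau1 + n2 tau2) / (tau1 tau2)

# e1 = (tau1, tau2) is the eigenvector of the zero eigenvalue: H e1 = 0
He1 = [H[0][0] * T1 + H[0][1] * T2, H[1][0] * T1 + H[1][1] * T2]
print(He1)
```

The same check at sub-critical parameters (e.g. $n_2=0.1$ as in panel a) yields two strictly positive eigenvalues, consistent with the origin being attractive.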
Since the first linear term is zero in the dynamics of $x$, corresponding to a transcritical bifurcation for the Lagrange-Charpit equations , we need to take into account the second-order term in $x$, namely $$\begin{aligned} e^{-(n_1s_1/\tau_1+n_2s_2/\tau_2)} \simeq 1 - x + \frac{x^2}{2} + n_1n_2 \left(\frac{1}{\tau_1}-\frac{1}{\tau_2}\right)y + O(xy, x^2y, y^2) \end{aligned}$$ where we have dropped terms of the order $y^2$, $xy$ and $x^2y$. We then obtain the dynamical equations at the transcritical bifurcation (see Fig. \[fig:phaseSpace\_twoExpon\]b) to leading-order $$\frac{dy}{dl} \simeq -\lambda_2 y, \>\>\> \frac{dx}{dl} \simeq -\frac{x^2}{2\alpha}$$ whose solutions are given by $$x(l) = \frac{2\alpha}{l-l_0}, \>\>\> y(l) = C_1e^{-\lambda_2 (l-l_0)} \label{eq:asymptotic_speed_twoExpon}$$ with constants of integration $l_0$ and $C_1$. We can assume $l_0=0$ as the initial point on the characteristic curve. Remarkably, only the contribution along the $x$ axis is dominant in the large $l$ limit (i.e., $|x|\gg |y|$ for $l \to \infty$), which corresponds to the asymptotic limit $\bm{s}\to 0$. We then obtain $$\begin{aligned} \Phi \simeq \nu_0 \int dl \left(\frac{n_1s_1(l)}{\tau_1} + \frac{n_2s_2(l)}{\tau_2}\right) \simeq -2\nu_0\alpha \log |x| + \frac{\nu_0n_1n_2}{\lambda_2}\left(\frac{1}{\tau_1}-\frac{1}{\tau_2}\right)y + C_2 \end{aligned}$$ with constant of integration $C_2$. The general solution is given by $$\mathcal{H}(C_1) = C_2$$ with a function $\mathcal{H}$, which is determined by the initial condition. Considering that $$C_1 = y\exp\left[\frac{2\lambda_2 \alpha}{x}\right],$$ the solution is given by the following form: $$\Phi(\bm{s}) = -2\nu_0\alpha \log |x| + \frac{\nu_0n_1n_2}{\lambda_2}\left(\frac{1}{\tau_1}-\frac{1}{\tau_2}\right)y + \mathcal{H}\left(y\exp\left[\frac{2\lambda_2 \alpha}{x}\right]\right).$$ Because we have neglected the UV cutoff for small $s$, there is an artificial divergent term $-2\nu_0\alpha \log |x|$ for small $x$.
Except for this divergent term, $\Phi(\bm{s})$ must be constant for $s\to 0$. The function $\mathcal{H}(\cdot )$ is thus constant because $$\lim_{y\to 0}\left[\Phi(\bm{s}) + 2\nu_0\alpha \log |x|\right] = \mathcal{H}(z) = \mbox{const.}$$ with the choice of $x = 2\lambda_2 \alpha /\log(z/y)$ for any positive constant $z$. Therefore, we obtain the steady solution $$\log {\tilde}{P}_{{\mathrm{ss}}}(\bm{s}) \simeq -2\nu_0\alpha \log |x| + \frac{\nu_0n_1n_2}{\lambda_2}\left(\frac{1}{\tau_1}-\frac{1}{\tau_2}\right)y$$ for small $x$ and $y$, by ignoring the UV cutoff and the constant contribution. This recovers the power law formula of the intermediate asymptotics of the PDF of the Hawkes intensities: $$\log {\tilde}{Q}_{{\mathrm{ss}}}(s) := -\nu_0s + \log {\tilde}{P}_{{\mathrm{ss}}}(s,s) \simeq - 2\nu_0 \alpha \log|s| \>\>\>(s\sim 0) \>\>\> \Longleftrightarrow \>\>\> P(\nu) \sim \nu^{-1+2\nu_0\alpha}\>\>\>(\nu\to +\infty), \label{yji,kiukm4j3w}$$ with $\alpha = \tau_1n_1 + \tau_2n_2$ as defined in (\[teumk5im4jnw\]). #### Numerical verification. ![ Numerical evaluation of the steady state PDF of the Hawkes intensity ${\hat{\nu}}$ for the double exponential case $K=2$ (\[yjtyhnwrgbqbq\]) with $(\tau_1,\tau_2)=(1,3)$, $(n_1,n_2)=(0.5,0.499)$ or $(n_1,n_2)=(0.5,0.49)$, near the critical point: (a) Background intensity $\nu_0=0.01$, leading to the power law exponent $0.96$. (b) $\nu_0=0.1$, leading to the power law exponent $0.6$. (c) $\nu_0=0.75$, leading to the negative (i.e. growing PDF) power law exponent $-2.0$. For all simulations, the sampling time interval and total sampling time are $dt=0.001$ and $T_{\mathrm{tot}}=10000$ from the initial condition ${\hat{z}}(0)=0$. The initial 10% of the sample was discarded from the statistics. []{data-label="fig:Simulation_PDFs_DoubleExpon"}](DoubleExponPDFs.eps){width="180mm"} We have numerically confirmed our theoretical prediction , a special case of for a memory kernel with two exponentials, as shown in Fig. 
\[fig:Simulation\_PDFs\_DoubleExpon\]. The main properties are the same as those shown in Fig. \[fig:Simulation\_PDFs\_SingleExpon\], implying that our prediction is verified for memory kernels with one and two exponentials. Discrete superposition of exponential kernels --------------------------------------------- We now consider the case where the memory kernel is the sum of an arbitrary finite number $K$ of exponentials according to expression (\[yjtyhnwrgbqbq\]). Our treatment follows the method presented for the case $K=2$. The corresponding Lagrange-Charpit equations read: $$\frac{ds_k}{dl} = -e^{-\sum_{j=1}^Kn_js_j/\tau_j} + 1 - \frac{s_k}{\tau_k}, \>\>\> ~~~~~~ \frac{d\Phi}{dl} = -\nu_0 \left(e^{-\sum_{j=1}^Kn_js_j/\tau_j} - 1\right). \label{eq:Lagrange_Charpit_eq_n_expon}$$ The derivation of the PDF of the Hawkes intensities boils down to a stability analysis of these equations around $s=0$ in the neighbourhood of the critical condition $n=1$. #### Sub-critical case $n<1$. We linearize the Lagrange-Charpit equations to obtain $$\frac{d\bm{s}}{dl} \simeq -\bm{H}\bm{s}, \>\>\> \frac{d\Phi}{dl} \simeq \nu_0 \bm{K}\bm{s}$$ with $$\bm{H} := \begin{pmatrix} \frac{1-n_1}{\tau_1},& -\frac{n_2}{\tau_2},& \dots& -\frac{n_K}{\tau_K} \\ -\frac{n_1}{\tau_1},& \frac{1-n_2}{\tau_2},& \dots& -\frac{n_K}{\tau_K} \\ \vdots& \vdots& \ddots& \vdots \\ -\frac{n_1}{\tau_1},& -\frac{n_2}{\tau_2},& \dots& \frac{1-n_K}{\tau_K} \end{pmatrix}, \>\>\> \bm{K} := \left(\frac{n_1}{\tau_1}, \dots, \frac{n_K}{\tau_K}\right). \label{wrnhmnnh3}$$ Considering that all eigenvalues $\{\lambda_k\}_{k=1,\dots,K}$ of $\bm{H}$ are real (see Appendix. \[app:sec:proof\_eigenvalues\_H\_real\] for its proof), we order them according to $\lambda_i<\lambda_j$ for $i<j$. We denote the corresponding eigenvectors as $\{\bm{e}_k\}_{k=1,\dots,K}$. 
The matrix $\bm{H}$ can thus be diagonalised as follows $$\bm{P}:= (\bm{e}_1,\dots, \bm{e}_K), \>\>\> \bm{P}^{-1}\bm{H}\bm{P} = \begin{pmatrix} \lambda_1,& 0,& \dots& 0 \\ 0,& \lambda_2,& \dots& 0 \\ \vdots& \vdots& \ddots& \vdots \\ 0,& 0,& \dots& \lambda_K \end{pmatrix}.$$ The critical case $n=1$ corresponds to the existence of a zero eigenvalue. Therefore, at the critical point, the determinant of $\bm{H}$ is zero (see Appendix. \[app:sec:proof\_determinant\_H\] for the derivation of the explicit form of its determinant): $$\det \bm{H} = \frac{1-\sum_{k=1}^K n_k}{\prod_{k=1}^K\tau_k}= 0 \>\>\>\> \Longleftrightarrow \>\>\>\> n := \sum_{k=1}^K n_k = 1.$$ Following calculations similar to those presented in Sec. \[sec:LinearStabilityOfLagrangeCharpitTwoExpon\], we obtain $$\Phi(\bm{s}) \simeq -\nu_0 \bm{K}\bm{H}^{-1}\bm{s}$$ where the inverse matrix $\bm{H}^{-1}$ is explicitly given in Appendix. \[app:sec:inverse\_matrix\_H\]. We finally obtain $$\log {\tilde}{Q}_{{\mathrm{ss}}}(s) = -\nu_0 s + \Phi(s(1,\dots,1)^T) = \frac{-\nu_0}{1-n}s, \label{uj4m4n32h}$$ again recovering (\[temjkymku5jmn3\]) and (\[yjnh23rb2\]) derived above. #### Critical case $n=1$. At the critical point, the smallest eigenvalue of $\bm{H}$ is zero ($\lambda_1=0$).
By direct substitution, its corresponding eigenvector is $$\bm{e}_1 = (\tau_1, \dots, \tau_K)^T~,$$ as seen from $$\bm{H}\bm{e}_1 = \begin{pmatrix} \frac{1-n_1}{\tau_1},& -\frac{n_2}{\tau_2},& \dots& -\frac{n_K}{\tau_K} \\ -\frac{n_1}{\tau_1},& \frac{1-n_2}{\tau_2},& \dots& -\frac{n_K}{\tau_K} \\ \vdots& \vdots& \ddots& \vdots \\ -\frac{n_1}{\tau_1},& -\frac{n_2}{\tau_2},& \dots& \frac{1-n_K}{\tau_K} \end{pmatrix} \begin{pmatrix} \tau_1 \\ \tau_2 \\ \vdots \\ \tau_K \end{pmatrix} =\begin{pmatrix} 1-n \\ 1-n \\ \vdots \\ 1-n \end{pmatrix} = \bm{0} ~~~~~{\rm for}~n=1.$$ We now introduce a new set of variables (i.e., representation based on the eigenvectors) $$\bm{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_K \end{pmatrix} := \bm{P}^{-1}\bm{s}, \>\>\>\>~~~~~ \bm{P}^{-1} = \begin{pmatrix} \bm{g}_1^T \\ \bm{g}_2^T \\ \vdots \\ \bm{g}_K^T \end{pmatrix}.$$ The linearized Lagrange-Charpit equations are given by $$\frac{dx_1}{dl} \simeq 0, \>\>\> ~~~~\frac{dx_j}{dl} \simeq -\lambda_j x_j~~~~{\rm for}~j\geq 2~.$$ Similarly to Eq. , the leading-order contribution comes from the $x_1$ direction because $|x_1| \gg |x_j|$ for $j\geq 2$ in the asymptotic limit $l\to \infty$. We thus neglect the other contributions by assuming $x_j\simeq 0$ for $j\geq 2$. It is therefore necessary to take the second-order contribution along the $x_1$ direction, $$e^{-\sum_{k=1}^{K}n_ks_k/\tau_k}-1 = -\sum_{k=1}^{K}\frac{n_ks_k}{\tau_k} + \frac{1}{2}\left(\sum_{k=1}^{K}\frac{n_ks_k}{\tau_k}\right)^2 + \dots.$$ We note that $x_1$ is given by $$x_1 = \bm{g}_1\cdot \bm{s} = \frac{1}{\alpha}\sum_{k=1}^K n_ks_k, \>\>\>~~~~~ \bm{g}_1 = \left(\frac{n_1}{\alpha},\dots, \frac{n_K}{\alpha}\right)^T, \label{yjt4nh3g}$$ where $\alpha := \sum_{k=1}^K\tau_kn_k$, which is a special case for a discrete sum of exponentials of the general definition (\[rhr2bg2\]).
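The right and left zero eigenvectors $\bm{e}_1$ and $\bm{g}_1$, together with the normalization $\bm{g}_1\cdot\bm{e}_1=1$ enforced by $\alpha$, can be checked numerically for an illustrative critical case with $K=3$ (the parameter values below are arbitrary, subject only to $\sum_k n_k=1$):

```python
# Verify, for an illustrative K = 3 case at criticality (sum n_k = 1), that
# e1 = (tau_1, ..., tau_K) is a right zero eigenvector of H, that
# g1 = (n_1/alpha, ..., n_K/alpha) is the matching left zero eigenvector,
# and that g1 . e1 = 1 by the definition of alpha.
TAUS = [1.0, 2.0, 5.0]
NS = [0.2, 0.3, 0.5]                     # branching ratios, n = 1 (critical)
K = len(TAUS)
ALPHA = sum(n * t for n, t in zip(NS, TAUS))

# H_ij = (delta_ij - n_j) / tau_j, as in the linearized Lagrange-Charpit equations
H = [[((1.0 if i == j else 0.0) - NS[j]) / TAUS[j] for j in range(K)]
     for i in range(K)]

He1 = [sum(H[i][j] * TAUS[j] for j in range(K)) for i in range(K)]
g1H = [sum(NS[i] / ALPHA * H[i][j] for i in range(K)) for j in range(K)]
g1e1 = sum(NS[k] / ALPHA * TAUS[k] for k in range(K))

print(He1)   # ~ (0, ..., 0): right zero eigenvector
print(g1H)   # ~ (0, ..., 0): left zero eigenvector
print(g1e1)  # ~ 1: normalization fixed by alpha
```

Each entry of $\bm{H}\bm{e}_1$ reduces to $1-n$, and each entry of $\bm{g}_1^T\bm{H}$ to $n_j(1-n)/(\alpha\tau_j)$, so both vanish exactly at criticality.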
Taking the derivative of (\[yjt4nh3g\]) and using equation , we obtain $$\begin{aligned} \frac{dx_1}{dl} = \frac{1}{\alpha}\sum_{k=1}^Kn_k\frac{ds_k}{dl} = 0 - \frac{1}{2\alpha}\left(\sum_{k=1}^{K}\frac{n_ks_k}{\tau_k}\right)^2 + \dots. \end{aligned}$$ This confirms that $\bm{g}_1$ is the correct left eigenvector associated with the zero eigenvalue $\lambda_1=0$. Note that the value of $\alpha$ given by (\[rhr2bg2\]) ensures consistency with the following identity: $$\bm{P}^{-1}\bm{P} = \begin{pmatrix} \bm{g}_1^T \\ \bm{g}_2^T \\ \vdots \\ \bm{g}_K^T \end{pmatrix} \begin{pmatrix} \bm{e}_1, \bm{e}_2, \dots, \bm{e}_K \end{pmatrix} = \begin{pmatrix} n_1/\alpha,& n_2/\alpha,& \dots& n_K/\alpha \\ \bigcirc,& \bigcirc,& \dots& \bigcirc \\ \vdots& \vdots& \ddots& \vdots \\ \bigcirc,& \bigcirc,& \dots& \bigcirc \end{pmatrix} \begin{pmatrix} \tau_1,& \bigcirc,& \dots& \bigcirc \\ \tau_2,& \bigcirc,& \dots& \bigcirc \\ \vdots & \vdots& \ddots& \vdots \\ \tau_K,& \bigcirc,& \dots& \bigcirc \end{pmatrix} = \begin{pmatrix} 1,& 0,& \dots& 0 \\ 0,& 1,& \dots& 0 \\ \vdots& \vdots& \ddots& \vdots \\ 0,& 0,& \dots& 1 \end{pmatrix},$$ where $\bigcirc$ represents some unspecified value. Since the contribution of $x_2, \dots, x_K$ can be ignored for the description of the leading behavior along $x_1$, let us set $x_2=x_3=\dots=x_K=0$, which leads to $$\bm{s} = \bm{P}\bm{x} \simeq (\bm{e}_1,\dots,\bm{e}_K) \begin{pmatrix} x_1\\ 0\\ \vdots\\ 0 \end{pmatrix} =x_1\bm{e}_1 =\begin{pmatrix} x_1\tau_1\\ x_1\tau_2\\ \vdots\\ x_1\tau_K \end{pmatrix}.$$ We thus obtain the second-order contribution along the $x_1$ axis by ignoring nonlinear contributions from $x_2,\dots, x_K$: $$\frac{dx_1}{dl} \simeq -\frac{x_1^2}{2\alpha}.$$ With calculations that follow step by step those in Sec.
\[sec:LinearStabilityOfLagrangeCharpitTwoExpon\], we obtain $$\log {\tilde}{Q}_{{\mathrm{ss}}}(s) \simeq -2\nu_0 \alpha \log |s| \>\>\> \Longleftrightarrow \>\>\> P(\nu) \sim \nu^{-1+2\nu_0 \alpha}, \>\>\> {\rm with}~\alpha := \sum_{k=1}^Kn_k\tau_k.$$ This recovers the power law formula of the intermediate asymptotics of the PDF of the Hawkes intensities given by (\[eq:main\_finding\_power-law\_gen\]). General case ------------ We are now prepared to study the general case where the memory kernel of the Hawkes process is a continuous superposition of exponential functions . Introducing the steady state cumulant functional $$\Phi[s] := \log {\tilde}{P}_{{\mathrm{ss}}}[s],$$ and from the master equation in its functional Laplace representation Eq. , we obtain the following first-order functional differential equation in the steady state, $$\int_0^\infty d\tau \left(e^{-\int_{0}^\infty d\tau' s(\tau')n(\tau')/\tau'} -1 + \frac{s(\tau)}{\tau}\right)\frac{\delta \Phi[s]}{\delta s(\tau)} = \nu_0\left(e^{-\int_{0}^\infty d\tau' s(\tau')n(\tau')/\tau'} -1\right). \label{eq:master_gen_functional_cumulant}$$ The corresponding Lagrange-Charpit equations are the following integro-differential equations, $$\frac{\partial s(l;\tau)}{\partial l} = 1 - e^{-\int_{0}^\infty d\tau' s(\tau')n(\tau')/\tau'} - \frac{s(\tau)}{\tau}, \>\>\>\> \frac{\partial \Phi(l)}{\partial l} = -\nu_0\left(e^{-\int_{0}^\infty d\tau' s(\tau')n(\tau')/\tau'} -1\right)\label{eq:LagrangeCharpitGeneral}$$ where $l$ is the curvilinear parameter indexing the position along the characteristic curve. We now perform the stability analysis of this equation  in the neighbourhood of $s=0$ close to the critical condition $n=1$. #### Sub-critical case $n<1$.
We linearize the Lagrange-Charpit equation  to obtain $$\frac{\partial s(l;\tau)}{\partial l} = -\int_0^\infty d\tau' H(\tau,\tau')s(\tau'), \>\>\>\> \frac{\partial \Phi(l)}{\partial l} = \nu_0\int_0^{\infty} d\tau' K(\tau')s(\tau')$$ with $$H(\tau,\tau'):= \frac{\delta(\tau-\tau')-n(\tau')}{\tau'}, \>\>\>\> K(\tau') := \frac{n(\tau')}{\tau'}. \label{yhjuynbj2q}$$ Let us introduce the eigenvalues $\lambda \geq \lambda_{\min}$ and eigenfunctions $e(\tau;\lambda)$, satisfying $$\int_0^\infty d\tau' H(\tau,\tau')e(\tau';\lambda) = \lambda e(\tau;\lambda).$$ Appendix \[app:sec:realEigenvalues\_continuous\] shows that all the eigenvalues are real. The inverse matrix of $H(\tau,\tau')$, denoted by $H^{-1}(\tau,\tau')$, can be explicitly obtained as shown in Appendix \[app:sec:inverseMatrix\_continuous\]. Since the inverse matrix $H^{-1}(\tau,\tau')$ has a singularity at $n=1$, the critical condition of this Hawkes process is given by $n=1$ as expected. Using calculations that are analogous to those in Sec. \[sec:LinearStabilityOfLagrangeCharpitTwoExpon\], we obtain $$\Phi[s] \simeq -\nu_0 \int_0^\infty d\tau \int_0^\infty d\tau' K(\tau)H^{-1}(\tau,\tau')s(\tau'),$$ from which we deduce that $$\log {\tilde}{Q}_{{\mathrm{ss}}}(s) = -\nu_0 s +\Phi[s\bm{1}(\tau)] = \frac{-\nu_0}{1-n}s,$$ where $\bm{1}(\tau)$ is the constant function defined by $\bm{1}(\tau) = 1$ for all $\tau$. This recovers (\[uj4m4n32h\]), (\[temjkymku5jmn3\]) and (\[yjnh23rb2\]) derived above. #### Critical case $n=1$. At criticality, the smallest eigenvalue vanishes: $\lambda_{\min}=0$.
Indeed, we obtain the zero eigenfunction $$e(\tau;\lambda=0) = \tau,$$ which can be checked by direct substitution: $$\int_0^\infty d\tau H(\tau,\tau')e(\tau';\lambda=0) = \int_0^\infty d\tau \frac{\delta(\tau-\tau')-n(\tau')}{\tau'}\tau' = 1-n = 0~,~~{\rm for}~n=1.$$ We now introduce a set of variables to obtain a new representation based on the eigenfunctions, $$s(\tau) = \sum_{\lambda} e(\tau;\lambda)x(\lambda)\>\>\> \Longleftrightarrow \>\>\> x(\lambda) = \int_0^\infty d\tau e^{-1}(\lambda;\tau)s(\tau)$$ with the inverse matrix $e^{-1}(\lambda;\tau)$ satisfying $$\int_0^\infty d\tau e^{-1}(\lambda;\tau)e(\tau;\lambda') = \delta_{\lambda,\lambda'}.$$ We assume the existence of the inverse matrix, which is equivalent to the assumption that the set of all eigenfunctions is complete. $H(\tau,\tau')$ can be diagonalized: $$\int_0^\infty d\tau \int_0^\infty d\tau' e^{-1}(\lambda;\tau)H(\tau,\tau')e(\tau';\lambda') = \lambda\delta_{\lambda,\lambda'}.$$ We then obtain the linearized Lagrange-Charpit equations, $$\frac{\partial x(\lambda)}{\partial l} \simeq -\lambda x(\lambda).$$ The dominant contribution comes from the vanishing eigenvalue. We therefore focus on $x(0)$ by setting $x(\lambda)=0$ for $\lambda > 0$. We then form the expansion $$e^{-\int_0^\infty d\tau' s(\tau')n(\tau')/\tau'} -1 = -\int_0^\infty d\tau' \frac{n(\tau')s(\tau')}{\tau'} + \frac{1}{2}\left(\int_0^\infty d\tau' \frac{n(\tau')s(\tau')}{\tau'}\right)^2 + \dots.$$ The explicit representation of $x(0)$ is given by $$x(\lambda=0) = \int_0^\infty d\tau e^{-1}(\lambda=0;\tau)s(\tau) = \frac{1}{\alpha}\int_0^\infty d\tau n(\tau)s(\tau), \>\>\> e^{-1}(\lambda=0;\tau) = \frac{n(\tau)}{\alpha}, \label{en4h2g}$$ where $\alpha$ is defined by expression (\[rhr2bg2\]). Expression (\[en4h2g\]) can be checked to be valid by direct substitution since, from Eq. 
, we have $$\frac{\partial x(0)}{\partial l} = \int_0^\infty d\tau e^{-1}(0;\tau)\frac{\partial s(\tau)}{\partial l} = 0 - \frac{1}{2\alpha}\left(\int_0^\infty d\tau\frac{n(\tau)s(\tau)}{\tau}\right)^2 + \dots,$$ showing that the first-order contribution is actually null in this representation. The parameter $\alpha$ (\[rhr2bg2\]) is defined precisely so that the following normalization identity holds: $$\int_0^\infty d\tau e^{-1}(\lambda=0;\tau)e(\tau;\lambda=0) = \frac{1}{\alpha}\int_0^\infty d\tau n(\tau)\tau = \delta_{\lambda=0,\lambda'=0} = 1.$$ Since we ignore the contributions from $x(\lambda)$ with $\lambda>0$, let us set $x(\lambda)=0$ for $\lambda > 0$, which yields $$s(\tau) = \sum_{\lambda} e(\tau;\lambda)x(\lambda) = e(\tau;0)x = x\tau,$$ where we have written $x:= x(0)$. We then obtain the second-order contribution along the $x(0)$ axis, $$\frac{\partial x}{\partial l} \simeq -\frac{x^2}{2\alpha}.$$ From calculations mimicking those in Sec. \[sec:LinearStabilityOfLagrangeCharpitTwoExpon\], we obtain $$\log {\tilde}{Q}_{{\mathrm{ss}}}(s) \simeq -2\nu_0 \alpha \log |s| \>\>\> \Longleftrightarrow \>\>\> P(\nu) \sim \nu^{-1+2\nu_0 \alpha}, \>\>\> \alpha := \int_0^\infty d\tau n(\tau)\tau.$$ This recovers the power law formula of the intermediate asymptotics of the PDF of the Hawkes intensities given by (\[eq:main\_finding\_power-law\_gen\]). Conclusion ========== We have presented an analytical framework of the Hawkes process for an arbitrary memory kernel, based on the master equation governing the behavior of auxiliary field variables. We have derived systematically the corresponding functional master equation for the auxiliary field variables. While the Hawkes point process is non-Markovian by construction, the introduction of auxiliary field variables provides a formulation in terms of linear stochastic partial differential equations that are Markovian.
For the case of a memory kernel decaying as a single exponential, we presented the exact time-dependent and steady-state solutions for the probability density function (PDF) of the Hawkes intensities, using the Laplace representation of the master equation. For memory kernels represented as arbitrary sums of exponentials (discrete and continuous sums), we derived the asymptotic solutions of the Lagrange-Charpit equations for the hyperbolic master equations in the Laplace representation in the steady state, close to the critical point $n=1$ of the Hawkes process, where $n$ is the branching ratio. Our theory predicts a power law scaling of the PDF of the intensities in an intermediate asymptotics regime, which crosses over to an asymptotic exponential function beyond a characteristic intensity that diverges as the critical condition is approached ($n \to 1$). The exponent of the PDF is non-universal, being a function of the background intensity $\nu_0$ of the Hawkes intensity and of the parameter $\alpha = n \langle \tau \rangle$, where $\langle \tau \rangle$ is the first-order moment of the distribution of time scales of the memory function of the Hawkes process. We found that the larger the memory $\langle \tau \rangle$, the larger the background intensity $\nu_0$ and the larger the branching ratio $n$, the smaller the exponent $1-2\nu_0\alpha$ of the PDF of Hawkes intensities. This work provides the basic analytical tools to analyse Hawkes processes from a different angle than hitherto developed and will be useful to study more general and complex models derived from the Hawkes process. For instance, it is straightforward to extend our treatment to the case where each event has a mark quantifying its impact or “fertility”, thus defining the more general Hawkes process with intensity ${\hat{\nu}}(t) = \nu_0 + \sum_{i=1}^{{\hat{N}}(t)} \hat{\rho}_i h(t-{\hat{t}}_i)$ with independent and identically distributed random numbers $\{\hat{\rho}_i\}_i$.
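As a concrete illustration, the process just described can be simulated by the standard thinning method: for an exponential kernel the intensity only decays between events, so the current intensity is a valid thinning bound. The sketch below uses unit marks ($\hat{\rho}_i = 1$) and illustrative parameter values, and checks the stationary mean intensity $\nu_0/(1-n)$; random marks would simply multiply the jump $n/\tau$.

```python
import numpy as np

# Minimal thinning simulation of the Hawkes process with exponential kernel
# h(t) = (n / tau) * exp(-t / tau); all parameter values are illustrative.
rng = np.random.default_rng(1)
nu0, n, tau, T = 1.0, 0.5, 1.0, 2000.0
t, z, events = 0.0, 0.0, 0
while True:
    bound = nu0 + z                       # valid bound: z only decays until the next event
    w = rng.exponential(1.0 / bound)
    z *= np.exp(-w / tau)                 # deterministic decay over the waiting time
    t += w
    if t >= T:
        break
    if rng.random() < (nu0 + z) / bound:  # accept the proposed event
        events += 1
        z += n / tau                      # each event adds a kernel contribution h(0) = n / tau
mean_rate = events / T
print(mean_rate)                          # close to nu0 / (1 - n) = 2
```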
Our framework is also well-suited to nonlinear generalisations of the Hawkes process, for instance with the intensity taking the form ${\hat{\nu}}(t) = g({\hat{\omega}}(t)) > 0$, where the auxiliary variable ${\hat{\omega}}$ is given by ${\hat{\omega}}(t) = {\omega}_0 + \sum_{i=1}^{{\hat{N}}(t)} \hat{\rho}_i h(t-{\hat{t}}_i)$ and where the times $\{{\hat{t}}_i\}_i$ of the events are determined from the intensity ${\hat{\nu}}(t)$. In this nonlinear version, the positivity of ${\hat{\omega}}(t)$ and of the marks $\hat{\rho}_i$ is no longer required. This nonlinear Hawkes process is more complex than the linear Hawkes process, but our framework can be applied to derive its most important analytical properties [@KanazawaSornetteFuture]. We note that such nonlinear Hawkes processes include several models that have been proposed in the past, with applications to explain the multifractal properties of earthquake seismicity [@SornetteMSA] and of financial volatility [@FiliSornette11]. This work was supported by the Japan Society for the Promotion of Science KAKENHI (Grant No. 16K16016) and Intramural Research Promotion Program in the University of Tsukuba. We are grateful for useful remarks on the manuscript provided by M. Schatz, A. Wehrli, S. Wheatley, H. Takayasu, and M. Takayasu. Explicit derivation of the master equations =========================================== Derivation of Eqs.
[(\[eq:master\_n\_expon\], \[eq:master\_n\_expone\_Laplace\])]{} {#sec:master_eq_n_expon} ----------------------------------------------------------------------------------- Given the dynamical equations (\[eq:SDE\_general\_superposition\_discrete\]) for the excess intensities $\{{\hat{z}}_k, k=1, ..., K\}$, which are shorthand notation for the dynamics given by equation (\[jehygbqgb\]), the stochastic time evolution of an arbitrary function $f(\bm{{\hat{z}}})$ reads $$df(\bm{{\hat{z}}}(t)) = f(\bm{{\hat{z}}}(t+dt)) - f(\bm{{\hat{z}}}(t)) = \begin{cases} -\sum_{k=1}^K\frac{{\hat{z}}_k}{\tau_k}\frac{\partial f(\bm{{\hat{z}}})}{\partial {\hat{z}}_k}dt & (\mbox{No jump during $[t,t+dt)$; probability} = 1-{\hat{\nu}}(t)dt) \\ f(\bm{{\hat{z}}}(t)+\bm{h}) - f(\bm{{\hat{z}}}(t)) & (\mbox{Jump in $[t,t+dt)$; probability} = {\hat{\nu}}(t)dt) \end{cases} \label{rn3thn3hnwb}$$ with jump size vector $\bm{h}$ and Hawkes intensity ${\hat{\nu}}$, defined by $$\bm{h} := \left(\frac{n_1}{\tau_1}, \frac{n_2}{\tau_2}, \dots, \frac{n_K}{\tau_K}\right)^{{\mathrm{T}}}, \>\>\> {\hat{\nu}}(t) := \nu_0 + \sum_{k=1}^K {\hat{z}}_k(t).$$ Taking the ensemble average of both sides of (\[rn3thn3hnwb\]), we get $$\begin{aligned} \int d\bm{z} f(\bm{z})\frac{\partial P_t(\bm{z})}{\partial t}dt = \int d\bm{z}\left[-\sum_{k=1}^K\frac{z_k}{\tau_k}\frac{\partial f(\bm{z})}{\partial z_k}dt + \left(\nu_0+\sum_{k=1}^K z_k\right)dt\left\{f(\bm{z}+\bm{h}) - f(\bm{z})\right\}\right]P_t(\bm{z}).
\end{aligned}$$ After partial integration of the right-hand side and making the change of variable $\bm{z}+\bm{h} \to \bm{z}$, we obtain $$\int d\bm{z} f(\bm{z})\frac{\partial P_t(\bm{z})}{\partial t} = \int d\bm{z}\left[\sum_{k=1}^K\frac{\partial }{\partial z_k}\frac{z_k}{\tau_k}P(\bm{z}) + \left\{ \nu_0+\sum_{k=1}^K (z_k-h_k)\right\}P(\bm{z}-\bm{h}) - \left\{\nu_0+\sum_{k=1}^K z_k\right\} P(\bm{z})\right]f(\bm{z}).$$ Since this is an identity for an arbitrary $f(\bm{z})$, we obtain Eq. . We derive the corresponding Laplace representation  as follows: Let us apply the Laplace transform to both sides of Eq. , $$\mathcal{L}_K\left[\frac{\partial P_t(\bm{z})}{\partial t}\right] = \mathcal{L}_K\left[\sum_{k=1}^K\frac{\partial }{\partial z_k}\frac{z_k}{\tau_k}P(\bm{z}) + \left\{ \nu_0+\sum_{k=1}^K (z_k-h_k)\right\}P(\bm{z}-\bm{h}) - \left\{\nu_0+\sum_{k=1}^K z_k\right\} P(\bm{z})\right]. \label{app:trans_master_n_expon_1}$$ The left-hand side is given by $$\mathcal{L}_K\left[\frac{\partial P_t(\bm{z})}{\partial t}\right] = \frac{\partial {\tilde}{P}_t(\bm{s})}{\partial t}.$$ For the right-hand side, let us consider the following two relations: $$\begin{aligned} \mathcal{L}_K\left[\frac{\partial }{\partial z_k}\frac{z_k}{\tau_k}P(\bm{z})\right] &= \int d\bm{z}e^{-\bm{s}\cdot \bm{z}}\frac{\partial }{\partial z_k}\frac{z_k}{\tau_k}P(\bm{z}) \notag\\ &= \int \prod_{i | i\neq k}dz_i \int dz_k e^{-\bm{s}\cdot \bm{z}}\frac{\partial }{\partial z_k}\frac{z_k}{\tau_k}P(\bm{z})\notag \\ &= \int \prod_{i | i\neq k}dz_i \left\{ \left[\frac{z_k}{\tau_k}P(\bm{z})\right]_{z_k=0}^{z_k=\infty} + s_k\int dz_k e^{-\bm{s}\cdot \bm{z}}\frac{z_k}{\tau_k}P(\bm{z})\right\} \notag \\ &= s_k\int \prod_{i | i\neq k}dz_i \int dz_k e^{-\bm{s}\cdot \bm{z}}\frac{z_k}{\tau_k}P(\bm{z}) \notag \\ &= s_k\int \prod_{i | i\neq k}dz_i \left(-\frac{1}{\tau_k}\frac{\partial}{\partial s_k}\right)\int dz_k e^{-\bm{s}\cdot \bm{z}}P(\bm{z}) \notag \\ &= -\frac{s_k}{\tau_k}\frac{\partial }{\partial s_k}
{\tilde}{P}_t(\bm{s}), \end{aligned}$$ where we have used the partial integration on the second line and have used the boundary condition  on the third line, and $$\begin{aligned} \mathcal{L}_K\left[\left\{ \nu_0+\sum_{k=1}^K (z_k-h_k)\right\}P(\bm{z}-\bm{h})\right] &= \int d\bm{z} e^{-\bm{s}\cdot \bm{z}}\left\{ \nu_0+\sum_{k=1}^K (z_k-h_k)\right\}P(\bm{z}-\bm{h}) \notag\\ &= e^{-\bm{s}\cdot \bm{h}}\int d\bm{z} e^{-\bm{s}\cdot (\bm{z}-\bm{h})}\left\{ \nu_0+\sum_{k=1}^K (z_k-h_k)\right\}P(\bm{z}-\bm{h}) \notag \\ &= e^{-\bm{s}\cdot \bm{h}}\int d\bm{z} e^{-\bm{s}\cdot \bm{z}}\left\{ \nu_0+\sum_{k=1}^K z_k\right\}P(\bm{z}) \notag \\ &= e^{-\bm{s}\cdot \bm{h}}\left\{ \nu_0-\sum_{k=1}^K \frac{\partial }{\partial s_k}\right\}\int d\bm{z} e^{-\bm{s}\cdot \bm{z}}P(\bm{z}) \notag \\ &= e^{-\bm{s}\cdot \bm{h}}\left\{ \nu_0-\sum_{k=1}^K \frac{\partial }{\partial s_k}\right\} {\tilde}{P}_t(\bm{s}), \end{aligned}$$ where we have applied the change of variable $\bm{z}-\bm{h} \to \bm{z}$ on the second line. By applying these two relations to the right-hand side of Eq. , we obtain Eq. . Derivation of Eq.  {#sec:master_eq_gen} ------------------ The Hawkes intensity ${\hat{\nu}}$ is defined by $${\hat{\nu}}_t := \nu_0 + \int_0^\infty {\hat{z}}_t(\tau)d\tau~,$$ in terms of the continuous field of excess intensities $\{{\hat{z}}_t(\tau)\}$. For an arbitrary functional $f[{\hat{z}}_t]$, let us consider its stochastic time evolution: $$df[{\hat{z}}_{t}] = f[{\hat{z}}_{t+dt}] - f[{\hat{z}}_{t}] = \begin{cases} \displaystyle -dt\int_0^\infty d\tau \frac{{\hat{z}}_t(\tau)}{\tau}\frac{\delta f[{\hat{z}}_t]}{\delta {\hat{z}}(\tau)} & (\mbox{No jump during $[t,t+dt)$; probability} = 1-{\hat{\nu}}_t dt) \\ f[{\hat{z}}_t+n/\tau] - f[{\hat{z}}_t] & (\mbox{Jump in $[t,t+dt)$; probability} = {\hat{\nu}}_tdt) \end{cases}.
\label{grb2gb2b1v}$$ where we have used the functional Taylor expansion $$f[z+\eta] - f[z] = \sum_{k=1}^{\infty} \frac{1}{k!}\int d\tau_1\dots d\tau_k\frac{\delta^k f[z]}{\delta z(\tau_1)\dots \delta z(\tau_k)}\eta (\tau_1)\dots \eta(\tau_k)$$ up to first order. Taking the ensemble average of both sides of (\[grb2gb2b1v\]) yields $$\begin{aligned} \int \mathcal{D}z f[z]\frac{\partial P_t[z]}{\partial t}dt = \int \mathcal{D}z \left[-\int_0^\infty d\tau \frac{z(\tau)}{\tau}\frac{\delta f[z]}{\delta z(\tau)}dt + \left(\nu_0+\int_0^\infty z(\tau)d\tau \right)dt\left\{f[z+n/\tau] - f[z]\right\}\right]P_t[z]. \end{aligned}$$ By partial integration and with the change of variable $z+n/\tau \to z$, we obtain $$\begin{aligned} \int \mathcal{D}z f[z]\frac{\partial P_t[z]}{\partial t} = \int \mathcal{D}z \left[\int_0^\infty d\tau \frac{\delta }{\delta z}\frac{z}{\tau}P_t[z] + \left\{ \nu_0+\int_0^\infty \left(z-\frac{n}{\tau}\right)d\tau\right\}P_t[z-n/\tau] - \left\{\nu_0+\int_0^\infty zd\tau\right\} P[z]\right]f[z]. \end{aligned}$$ Since this is an identity for arbitrary $f[z]$, we obtain Eq. . Derivations of the power law PDF of Hawkes intensities for the exponential memory kernel {#app:sec:various_derivation_power_law_single_expon} ======================================================================================== Here, we provide two different derivations of the power law PDF  of Hawkes intensities for the exponential memory kernel . Introduction of a UV cutoff. ---------------------------- We now investigate the steady solution of the master equation for the probability density function (PDF) $P_t(z)$ of the excess intensity ${\hat{z}}$, at the critical point $n=1$.
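Since the analysis of this appendix again relies on the Laplace representation, it is worth checking symbolically the basic integration-by-parts identity used in its derivation above, $\mathcal{L}\left[\partial_z (z P/\tau)\right] = -(s/\tau)\,\partial_s {\tilde}{P}(s)$. The test density $P(z)=e^{-z}$ and the value $\tau=2$ in the sketch below are purely illustrative:

```python
import sympy as sp

z, s = sp.symbols('z s', positive=True)
tau = sp.Integer(2)
P = sp.exp(-z)                                    # illustrative test density on z >= 0

# left-hand side: Laplace transform of d/dz [ (z / tau) P(z) ]
lhs = sp.integrate(sp.exp(-s * z) * sp.diff(z / tau * P, z), (z, 0, sp.oo))
# right-hand side: -(s / tau) * d/ds of the Laplace transform of P
Pt = sp.integrate(sp.exp(-s * z) * P, (z, 0, sp.oo))
rhs = -(s / tau) * sp.diff(Pt, s)

print(sp.simplify(lhs - rhs))   # 0
```

The boundary term of the integration by parts vanishes here because $z P(z) e^{-sz} \to 0$ at both ends, which is the boundary condition assumed in the text.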
Let us introduce a UV cutoff $s_{\mathrm{uv}}$ to address the singularity at $s \to 0$ so that we can express $${\tilde}{Q}_{{\mathrm{ss}}}(s) \simeq \exp\left[-\frac{\nu_0}{\tau}\int_{s_{\mathrm{uv}}}^s \frac{sds}{e^{-s/\tau}-1+s/\tau}\right] = \exp\left[-\nu_0(s-s_{\rm uv})-\nu_0\tau\log \left(\frac{e^{-s/\tau}-1+s/\tau}{e^{-s_{\rm uv}/\tau}-1+s_{\rm uv}/\tau}\right)\right]~.$$ Recall that $\log {\tilde}{Q}_{{\mathrm{ss}}}(s) = -s\nu_0 + \log {\tilde}{P}_{{\mathrm{ss}}}(s)$, where ${\tilde}{P}_{{\mathrm{ss}}}(s):=\int_{0}^\infty d\nu e^{-s\nu}P_{{\mathrm{ss}}}(\nu)$ is the Laplace transform of the steady-state solution $P_{{\mathrm{ss}}}(\nu)$ of the master equation (\[eq:master\_exp\]). The introduction of this UV cut-off $s_{\mathrm{uv}}$ amounts to introducing a cut-off in the memory function $h(t)$ at large timescale (i.e., there exists $t_{\mathrm{cut}}$ such that $h(t)$ is negligible for $t>t_{\mathrm{cut}}$). The validity of this approximation is confirmed by considering the time-dependent solution (see Sec. \[htgb12fv1\]). At the critical point $n=1$, it has an asymptotic form for small $s_{\mathrm{uv}} < s \ll \tau$, $$\log {\tilde}{Q}_{{\mathrm{ss}}}(s)\simeq -\frac{\nu_0}{\tau}\int_{s_{\mathrm{uv}}}^s \frac{sds}{e^{-s/\tau}-1+s/\tau} \sim -\frac{\nu_0}{\tau}\int_{s_{\mathrm{uv}}}^s \frac{2\tau^2 ds}{s} = -2\nu_0 \tau\log \frac{s}{s_{\rm uv}},$$ which implies the power-law relation for the tail distribution: $$P_{{\mathrm{ss}}}(\nu) \propto \nu^{-1+2\nu_0 \tau}$$ for $0\ll \nu \ll \nu_{\max} := 1/s_{\rm uv}$. Kramers-Moyal approach ---------------------- We can derive relation  using the Kramers-Moyal expansion of the master equation .
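Before turning to the Kramers-Moyal route, the small-$s$ estimate invoked above, $\int_{s_{\rm uv}}^{s} s'\,ds'/(e^{-s'/\tau}-1+s'/\tau) \simeq 2\tau^2\log(s/s_{\rm uv})$, can be confirmed by direct quadrature. The grid and cutoff values in this sketch are illustrative, and `np.expm1` is used to avoid catastrophic cancellation at small $s'$:

```python
import numpy as np

tau, s_uv, s_max = 1.0, 1e-6, 1e-3                   # illustrative values, s_uv < s << tau
u = np.linspace(np.log(s_uv), np.log(s_max), 4000)   # log-variable u = log(s')
sg = np.exp(u)
# integrand s' / (e^{-s'/tau} - 1 + s'/tau), with expm1 for numerical accuracy
f = sg / (np.expm1(-sg / tau) + sg / tau)
# trapezoidal rule in u, using ds' = s' du
g = f * sg
integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(u))
asymptotic = 2.0 * tau**2 * np.log(s_max / s_uv)
rel_err = abs(integral - asymptotic) / asymptotic
print(rel_err)   # much smaller than 1
```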
Let us consider the expansion $$(\nu_0+z-n/\tau)P_t(z-n/\tau) = \sum_{k=0}^\infty\frac{1}{k!}\left(\frac{-n}{\tau}\right)^k\frac{\partial^k}{\partial z^k}(\nu_0+z) P_t(z).$$ By truncating the series at the second order, we obtain the Fokker-Planck equation at the critical point $n=1$ in the steady state: $$\left[-\frac{\nu_0}{\tau}\frac{\partial }{\partial z}+ \frac{1}{2\tau^2}\frac{\partial^2}{\partial z^2}(\nu_0+z) \right]P_{{\mathrm{ss}}}(z) \simeq 0,$$ for $z\to \infty$. We thus obtain an asymptotic formula $$P_{{\mathrm{ss}}}(z) \sim (\nu_0+z)^{-1+2\nu_0\tau}~,~~~~~{\rm for ~large}~z.$$ This solution is consistent with the truncation of the Kramers-Moyal series at the second order, which amounts to removing negligible higher-order terms. Indeed, for $l\geq 3$, we obtain $$\left|\frac{\partial^2}{\partial z^2}(\nu_0+z)P_{{\mathrm{ss}}}(z)\right| \gg \left|\frac{\partial^l}{\partial z^l}(\nu_0+z)P_{{\mathrm{ss}}}(z)\right| ~,~~~~~{\rm for ~large}~z.$$ Elementary summary of the method of characteristics {#sec:app:method_of_characterisics} =================================================== The method of characteristics is a standard method to solve first-order PDEs. Here we focus on first-order quasilinear PDEs that are relevant to the derivation of the PDF of Hawkes intensities. Let us consider the following PDE: $$a(x,y,z)\frac{\partial z(x,y)}{\partial x} + b(x,y,z)\frac{\partial z(x,y)}{\partial y} = c(x,y,z). \label{eq:app_PDE_method_of_characteristics}$$ According to the method of characteristics, we consider the corresponding Lagrange-Charpit equations: $$\begin{aligned} \frac{dx}{dl} &= -a(x,y,z) \\ \frac{dy}{dl} &= -b(x,y,z) \\ \frac{dz}{dl} &= -c(x,y,z) \end{aligned}$$ with the parameter $l$ encoding the position along the characteristic curves.
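As a simple illustration of this recipe (the PDE and its solution below are a textbook example, not taken from the main text), consider $\partial_x u + y\,\partial_y u = 0$: the characteristics obey $dy/dx = y$, so $C = y e^{-x}$ is conserved along them and the general solution is $u(x,y) = f(y e^{-x})$ for an arbitrary function $f$. A finite-difference check for one choice of $f$:

```python
import numpy as np

# General solution of u_x + y u_y = 0 along characteristics dy/dx = y: u = f(y e^{-x}).
f = np.sin                                  # arbitrary illustrative profile
u = lambda x, y: f(y * np.exp(-x))

# check the PDE residual u_x + y u_y ~ 0 by central finite differences
x0, y0, h = 0.7, 1.3, 1e-5
u_x = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
u_y = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
print(abs(u_x + y0 * u_y))   # ~ 0 (finite-difference error only)
```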
Eliminating the parameter $l$, these equations are equivalent to the invariant form $$\frac{dx}{a(x,y,z)} = \frac{dy}{b(x,y,z)} = \frac{dz}{c(x,y,z)}.$$ Let us write their formal solutions as $C_1 = F_1(x,y,z)$ and $C_2 = F_2(x,y,z)$ with constants of integration $C_1$ and $C_2$. The general solution of the original PDE  is given by $$\phi\left(F_1(x,y,z), F_2(x,y,z) \right) = 0$$ with an arbitrary function $\phi(C_1, C_2)$, which is determined by the initial or boundary condition of the PDE . This method can be readily generalized to systems with many variables. Analytical derivation of some main properties of ${\mathcal{F}}(s)$ (\[yi,m5i74jhnb2wg\]) {#app:sec:mcF_characters_proof} ========================================================================================== Here, we derive properties ($\alpha$1)-($\alpha$6) of ${\mathcal{F}}(s)$ (\[yi,m5i74jhnb2wg\]). First, the following relations hold true: $$s>\frac{\tau}{n}\log n \>\>\> \Longrightarrow \>\>\> \frac{d}{ds}\left(e^{-ns/\tau}-1+s/\tau\right) = -\frac{n}{\tau}e^{-ns/\tau} + \frac{1}{\tau} >0$$ and $$\lim_{s\to \infty} \left(e^{-ns/\tau}-1+s/\tau\right) = \infty.$$ These relations guarantee that there exists $s_0>0$ such that $e^{-ns/\tau}-1+s/\tau > 0$ for $s>s_0$. Therefore, the integrand is positive, $$\frac{1}{e^{-ns/\tau}-1+s/\tau} >0,$$ for $s > s_0$ with an appropriately chosen positive $s_0$. Since the integrand is positive, the statement ($\alpha$1) is proved. As a corollary of ($\alpha$1), the statement ($\alpha$2) is proved. We next study ${\mathcal{F}}(s)$ in the sub-critical regime ($n<1$). For $n<1$, the statement ($\alpha$3) is true because $e^{-ns/\tau}-1+s/\tau >0$ for any $s>0$ and $(\tau/n)\log n <0$. The statement ($\alpha$4) is true because, for $0<s<s_0 \ll \tau/n$, we obtain $${\mathcal{F}}(s) \simeq \frac{\tau}{1-n}\log\frac{s}{s_0} \to -\infty$$ for $s\to +0$.
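The logarithmic small-$s$ behaviour claimed in ($\alpha$4) is easily confirmed numerically. In the sketch below, the lower integration limit $s_0$ fixes the integration constant of ${\mathcal{F}}$, and all parameter values are illustrative:

```python
import numpy as np

tau, n, s0 = 1.0, 0.5, 1e-4                    # illustrative sub-critical parameters
s = np.logspace(-4, -2, 4000)                  # window s0 <= s << tau/n
g = 1.0 / (np.expm1(-n * s / tau) + s / tau)   # integrand of F(s); expm1 avoids cancellation
u = np.log(s)
# cumulative trapezoid of g ds, using ds = s du on the log-spaced grid
F = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] * s[1:] + g[:-1] * s[:-1]) * np.diff(u))))
approx = tau / (1.0 - n) * np.log(s / s0)      # (alpha4): F(s) ~ tau/(1-n) log(s/s0)
print(np.max(np.abs(F - approx)))
```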
The statement ($\alpha$5) is correct because $${\mathcal{F}}(s) \simeq \int_{c_0}^s\frac{ds'}{s'/\tau} + c_1 = \tau\log s + c_1 \to +\infty$$ for $s\to +\infty$ with constants $c_0$ and $c_1$. As a corollary of ($\alpha$1), ($\alpha$4) and ($\alpha$5), the statement ($\alpha$6) is proved. Analytical derivation of the time-dependent solution  ===================================================== Consistency check 1: convergence to the steady solution. {#app:sec:convergence_to_steady} -------------------------------------------------------- Let us check that the time-dependent solution  is consistent with the steady solution  for $n<1$. To prove this, it is sufficient to show that $$\lim_{x\to \infty} \mathcal{H}(x) = 0.$$ Using ($\alpha$1), ($\alpha$4), and ($\alpha$5), we obtain $$\lim_{x\to \infty}S(x) =\lim_{x\to -\infty}\mathcal{F}^{-1}(x)=0~,$$ and thus $$\lim_{x\to \infty} \mathcal{H}(x) = \lim_{S\to 0}\left[\log{\tilde}{P}_{t=0}\left(S\right)+\frac{\nu_0}{\tau}\int_0^{S} \frac{sds}{e^{-ns/\tau}-1+s/\tau}\right]=0.$$ Consistency check 2: relaxation dynamics of the average ${\hat{\nu}}(t)$ at finite times {#app:sec:average_nu_for_finite_time_single_expon} ---------------------------------------------------------------------------------------- Let us now study the dynamics of the average intensity ${\hat{\nu}}(t)$ via the time-dependent formula . Below the critical point, we can use the renormalized expression . Then the integral $\mathcal{F}(s)$ can be asymptotically evaluated for small $0\leq s \ll \tau/n$: $$\mathcal{F}(s) \simeq \frac{\tau}{1-n}\log s \to -\infty \>\>\>(s\to 0).\label{app:eq:transform_transient_finite_t_1}$$ This means that the argument $x(t,s) = t-\mathcal{F}(s)$ shows the divergence $$x(t,s) = t-\mathcal{F}(s) \simeq t - \frac{\tau}{1-n}\log s \to +\infty \label{app:eq:transform_transient_finite_t_2}$$ From Eq. 
, the inverse function shows the asymptotic behavior for large $x$: $$S(x) = \mathcal{F}^{-1}(-x) \simeq \exp\left[-\frac{1-n}{\tau}x \right].\label{app:eq:transform_transient_finite_t_3}$$ By substituting the relation , we obtain $$S(x(t,s)) = \mathcal{F}^{-1}(-x(t,s)) \simeq \exp\left[-\frac{1-n}{\tau}\left(t-\frac{\tau}{1-n}\log s\right) \right] = se^{-(1-n)t/\tau}.$$ We now assume the initial condition ${\hat{\nu}}(0)=\nu_{\rm ini}$, or equivalently $\log {\tilde}{P}_{t=0}(s)=-\nu_{\rm ini}s$. From Eq. , we thus obtain the relaxation dynamics for the tail $s\simeq 0$, $$\log {\tilde}{P}_t(s) \simeq -se^{-(1-n)t/\tau}\nu_{\rm{ini}}-\frac{\nu_0}{\tau}\int_{se^{-(1-n)t/\tau}}^s \frac{ sds}{e^{-ns/\tau}-1+s/\tau} \simeq -\left(\nu_{\rm ini}e^{-(1-n)t/\tau}+\frac{\nu_0(1-e^{-(1-n)t/\tau})}{1-n}\right)s,\label{eq:transient_finite_tail}$$ which means that the average of ${\hat{\nu}}(t)$ is given by $${\langle}{\hat{\nu}}(t) {\rangle}= -\frac{d}{ds}\log {\tilde}{P}_t(s)\bigg|_{s=0} = \nu_{\rm ini}e^{-(1-n)t/\tau} + \frac{\nu_0(1-e^{-(1-n)t/\tau})}{1-n}.$$ Derivation of the asymptotic formula ------------------------------------- Let us derive the asymptotic relaxation formula  for sufficiently large $t$, satisfying $$t \gg \mathcal{F}(s)$$ for a given $s$. Under such a condition, the asymptotic relation for the inverse function $\mathcal{F}^{-1}(-x(t,s))$ is available as Eq.  with $x(t,s) = t - \mathcal{F}(s)$. We then obtain $$\mathcal{F}^{-1}(-x(t,s)) \simeq \exp\left[-\frac{1-n}{\tau}\left(t-\mathcal{F}_{\rm R}(s) - \frac{\tau}{1-n}\log s\right) \right] = s\exp\left[-\frac{1-n}{\tau}\left(t-\mathcal{F}_{\rm R}(s)\right)\right]$$ for sufficiently large $t$. By substituting this into Eq. , we obtain Eq. . Proofs of mathematical properties of $\bm{H}$ (\[wrnhmnnh3\]) ============================================================= Here, we summarize the proofs of the main mathematical properties of $\bm{H}$ (\[wrnhmnnh3\]) for arbitrary values of $K$.
Proof that its eigenvalues are real {#app:sec:proof_eigenvalues_H_real} -------------------------------------- All eigenvalues of $\bm{H}$ are real numbers for the following reasons. $\bm{H}$ can be symmetrized as $\bar{\bm{H}}$, defined by $$\begin{aligned} \bar{\bm{H}} := \bm{A}\bm{H}\bm{A}^{-1} = \begin{pmatrix} \frac{1-n_1}{\tau_1},& -\sqrt{\frac{n_1n_2}{\tau_1\tau_2}},& \dots& -\sqrt{\frac{n_1n_K}{\tau_1\tau_K}} \\ -\sqrt{\frac{n_2n_1}{\tau_2\tau_1}},& \frac{1-n_2}{\tau_2},& \dots& -\sqrt{\frac{n_2n_K}{\tau_2\tau_K}} \\ \vdots& \vdots& \ddots& \vdots \\ -\sqrt{\frac{n_Kn_1}{\tau_K\tau_1}},& -\sqrt{\frac{n_Kn_2}{\tau_K\tau_2}},& \dots& \frac{1-n_K}{\tau_K} \end{pmatrix}, \>\>\>\>\> \bm{A} := \begin{pmatrix} \sqrt{\frac{n_1}{\tau_1}},& 0,& \dots& 0 \\ 0,& \sqrt{\frac{n_2}{\tau_2}},& \dots& 0 \\ \vdots& \vdots& \ddots& \vdots \\ 0,& 0,& \dots& \sqrt{\frac{n_K}{\tau_K}} \end{pmatrix}. \>\>\> \end{aligned}$$ Indeed, by representing all the matrices by their elements $\bar{\bm{H}}:=(\bar{H}_{ij})$, $\bm{H}:=(H_{ij})$, and $\bm{A}:= (A_{ij})$, we obtain $$\bar{H}_{ij} = \sum_{k,l}A_{ik}H_{kl}A^{-1}_{lj} = \sum_{k,l}\sqrt{\frac{n_i}{\tau_i}}\delta_{ik}\left(\frac{\delta_{kl}}{\tau_k}-\frac{n_l}{\tau_l}\right)\sqrt{\frac{\tau_j}{n_j}}\delta_{lj} = \frac{\delta_{ij}-\sqrt{n_in_j}}{\sqrt{\tau_i\tau_j}}.$$ We therefore obtain $$\bm{H}\bm{e}_i = \lambda_i \bm{e}_i \>\>\> \Longleftrightarrow \>\>\> \bar{\bm{H}}\left(\bm{A}\bm{e}_i\right) = \lambda_i \left(\bm{A}\bm{e}_i\right),$$ implying that any eigenvalue of $\bm{H}$ is the same as that of $\bar{\bm{H}}$. Because $\bar{\bm{H}}$ is a symmetric matrix, all the eigenvalues of $\bar{\bm{H}}$ are real. Therefore, all the eigenvalues of $\bm{H}$ are also real. Determinant {#app:sec:proof_determinant_H} ----------- Here, we derive the determinant $\det \bm{H}$ for arbitrary values of $K$.
Let us recall the following identities, showing the invariance of determinants: $$\begin{aligned} \det \bm{H} &= \det \begin{pmatrix} \bm{a}_1 \\ \bm{a}_2 \\ \vdots \\ \bm{a}_j \\ \vdots \\ \bm{a}_K \end{pmatrix} = \det \begin{pmatrix} \bm{a}_1 \\ \bm{a}_2 \\ \vdots \\ \bm{a}_j + c \bm{a}_k \\ \vdots \\ \bm{a}_K \end{pmatrix}. \end{aligned}$$ This implies $$\begin{aligned} \det \bm{H} &= \det \begin{pmatrix} \bm{a}_1 \\ \bm{a}_2 \\ \bm{a}_3 \\ \vdots \\ \bm{a}_K \end{pmatrix} = \det \begin{pmatrix} \bm{a}_1 \\ \bm{a}_2-\bm{a}_1 \\ \bm{a}_3 \\ \vdots \\ \bm{a}_K \end{pmatrix} = \det \begin{pmatrix} \bm{a}_1 \\ \bm{a}_2-\bm{a}_1 \\ \bm{a}_3-\bm{a}_1 \\ \vdots \\ \bm{a}_K \end{pmatrix} = \dots = \det \begin{pmatrix} \bm{a}_1 \\ \bm{a}_2-\bm{a}_1 \\ \bm{a}_3-\bm{a}_1 \\ \vdots \\ \bm{a}_K-\bm{a}_1 \end{pmatrix} := \det \begin{pmatrix} \bm{a}_1' \\ \bm{a}_2' \\ \bm{a}_3' \\ \vdots \\ \bm{a}_K' \end{pmatrix} \end{aligned}$$ and $$\begin{aligned} \det \bm{H} = \det \begin{pmatrix} \bm{a}_1' \\ \bm{a}_2' \\ \vdots \\ \bm{a}_K' \end{pmatrix} = \det \begin{pmatrix} \bm{a}_1'+n_2\bm{a}_2' \\ \bm{a}_2' \\ \vdots \\ \bm{a}_K' \end{pmatrix} = \det \begin{pmatrix} \bm{a}_1'+n_2\bm{a}_2'+n_3\bm{a}_3' \\ \bm{a}_2' \\ \vdots \\ \bm{a}_K' \end{pmatrix} = \dots = \det \begin{pmatrix} \bm{a}_1'+\sum_{k=2}^Kn_k\bm{a}_k' \\ \bm{a}_2' \\ \vdots \\ \bm{a}_K' \end{pmatrix} \end{aligned}$$ with constants $\{n_k\}_k$. 
Using these relations, the determinant of $\bm{H}$ is given by $$\begin{aligned} \det \bm{H} &= \det \begin{pmatrix} -n_1/\tau_1 + 1/\tau_1,& -n_2/\tau_2,& \dots,& -n_K/\tau_K \\ -n_1/\tau_1,& -n_2/\tau_2 + 1/\tau_2,& \dots,& -n_K/\tau_K \\ \vdots& \vdots& \ddots& \vdots \\ -n_1/\tau_1,& -n_2/\tau_2,& \dots,& -n_K/\tau_K + 1/\tau_K \end{pmatrix} \begin{matrix} \leftarrow \bm{a}_1 \\ \leftarrow \bm{a}_2 \\ \vdots \\ \leftarrow \bm{a}_K \\ \end{matrix}\notag\\ &= \det \begin{pmatrix} (1-n_1)/\tau_1,& -n_2/\tau_2,& \dots& -n_K/\tau_K \\ -1/\tau_1,& 1/\tau_2,& \dots& 0\\ \vdots& \vdots& \ddots& \vdots\\ -1/\tau_1,& 0,& \dots& 1/\tau_K \end{pmatrix} \begin{matrix} \leftarrow \bm{a}_1' &=\bm{a}_1 \\ \leftarrow \bm{a}_2' &=\bm{a}_2&-&\bm{a}_1 \\ \vdots \\ \leftarrow \bm{a}_K' &=\bm{a}_K&-&\bm{a}_1 \\ \end{matrix}\notag\\ &= \det \begin{pmatrix} (1-\sum_{k=1}^Kn_k)/\tau_1,& 0,& \dots& 0 \\ -1/\tau_1,& 1/\tau_2,& \dots& 0 \\ \vdots& \vdots& \ddots& \vdots\\ -1/\tau_1,& 0,& \dots& 1/\tau_K \end{pmatrix} \begin{matrix} \leftarrow \bm{a}_1'' &= \bm{a}_1' &+& \sum_{k=2}^Kn_k\bm{a}_k' \\ \leftarrow \bm{a}_2'' &= \bm{a}_2' \\ \vdots \\ \leftarrow \bm{a}_K'' &= \bm{a}_K' \\ \end{matrix}\notag\\ &= \frac{1-\sum_{k=1}^Kn_k}{\tau_1\dots \tau_K}. \end{aligned}$$ Inverse matrix {#app:sec:inverse_matrix_H} -------------- Here we derive the inverse matrix of $\bm{H}$ for arbitrary values of $K$. 
The inverse matrix is derived from the method of row reduction: $$\begin{aligned} &\left( \begin{array}{cccc|cccc} -n_1/\tau_1 + 1/\tau_1,& -n_2/\tau_2,& \dots& -n_K/\tau_K& 1,& 0,&\dots,& 0 \\ -n_1/\tau_1,& -n_2/\tau_2 + 1/\tau_2,& \dots& -n_K/\tau_K& 0,& 1,&\dots,& 0 \\ \vdots& \vdots& \ddots & \vdots& \vdots& \vdots & \ddots& \vdots \\ -n_1/\tau_1,& -n_2/\tau_2,& \dots,& -n_K/\tau_K + 1/\tau_K& 0,& 0,&\dots,& 1 \end{array} \right) \begin{matrix} \leftarrow \bm{b}_1 \\ \leftarrow \bm{b}_2 \\ \vdots \\ \leftarrow \bm{b}_K \\ \end{matrix}\notag\\ \to &\left( \begin{array}{cccc|cccc} (1-n_1)/\tau_1,& -n_2/\tau_2,& \dots& -n_K/\tau_K& 1,& 0,&\dots& 0 \\ -1/\tau_1,& 1/\tau_2,& \dots& 0& -1,& 1,&\dots& 0 \\ \vdots& \vdots& \ddots& \vdots& \vdots& \vdots& \ddots& \vdots\\ -1/\tau_1,& 0,& \dots& 1/\tau_K& -1,& 0,&\dots& 1 \end{array} \right) \begin{matrix} \leftarrow \bm{b}_1'&=\bm{b}_1& \\ \leftarrow \bm{b}_2'&=\bm{b}_2&-&\bm{b}_1 \\ \vdots \\ \leftarrow \bm{b}_K'&=\bm{b}_K&-&\bm{b}_1 \\ \end{matrix}\notag\\ \to &\left( \begin{array}{cccc|cccc} (1-n)/\tau_1,& 0,& \dots& 0& 1-\sum_{k=2}^Kn_k,& n_2,&\dots& n_K \\ -1/\tau_1,& 1/\tau_2,& \dots& 0& -1,& 1,&\dots& 0 \\ \vdots& \vdots& \ddots& \vdots& \vdots& \vdots& \ddots& \vdots\\ -1/\tau_1,& 0,& \dots& 1/\tau_K& -1,& 0,&\dots& 1 \end{array} \right) \begin{matrix} \leftarrow \bm{b}_1'' &= \bm{b}_1' &+& \sum_{k=2}^Kn_k\bm{b}_k' \\ \leftarrow \bm{b}_2'' &= \bm{b}_2' \\ \vdots \\ \leftarrow \bm{b}_K'' &= \bm{b}_K' \\ \end{matrix}\notag\\ \to &\left( \begin{array}{cccc|cccc} 1,& 0,& \dots& 0& \tau_1+\tau_1n_1/(1-n),& \tau_1n_2/(1-n),&\dots& \tau_1n_K/(1-n) \\ -\tau_2/\tau_1,& 1,& \dots& 0& -\tau_2,& \tau_2,&\dots& 0 \\ \vdots& \vdots& \ddots& \vdots& \vdots& \vdots& \ddots& \vdots\\ -\tau_K/\tau_1,& 0,& \dots& 1& -\tau_K,& 0,&\dots& \tau_K \end{array} \right) \begin{matrix} \leftarrow \bm{b}_1''' &= \frac{\tau_1}{(1-n)}\bm{b}_1'' \\ \leftarrow \bm{b}_2''' &= \tau_2\bm{b}_2'' \\ \vdots \\ \leftarrow \bm{b}_K''' &= 
\tau_K\bm{b}_K'' \\ \end{matrix}\notag\\ \to &\left( \begin{array}{cccc|cccc} 1,& 0,& \dots& 0& \tau_1+\tau_1n_1/(1-n),& \tau_1n_2/(1-n),&\dots& \tau_1n_K/(1-n) \\ 0,& 1,& \dots& 0& \tau_2n_1/(1-n),& \tau_2 + \tau_2n_2/(1-n),&\dots& \tau_2n_K/(1-n) \\ \vdots& \vdots& \ddots& \vdots& \vdots& \vdots& \ddots& \vdots\\ 0,& 0,& \dots& 1& \tau_Kn_1/(1-n),& \tau_Kn_2/(1-n),&\dots& \tau_K + \tau_Kn_K/(1-n) \end{array} \right) \begin{matrix} \leftarrow \bm{b}_1'''' &= \bm{b}_1''' \\ \leftarrow \bm{b}_2'''' &= \bm{b}_2''' &+& \frac{\tau_2}{\tau_1}\bm{b}_1''' \\ \vdots \\ \leftarrow \bm{b}_K'''' &= \bm{b}_K''' &+& \frac{\tau_K}{\tau_1}\bm{b}_1'''\\ \end{matrix} \end{aligned}$$ which implies $$\begin{aligned} \bm{H}^{-1} = \begin{pmatrix} \tau_1+\tau_1n_1/(1-n),& \tau_1n_2/(1-n),&\dots& \tau_1n_K/(1-n) \\ \tau_2n_1/(1-n),& \tau_2 + \tau_2n_2/(1-n),&\dots& \tau_2n_K/(1-n) \\ \vdots& \vdots& \ddots& \vdots\\ \tau_Kn_1/(1-n),& \tau_Kn_2/(1-n),&\dots& \tau_K + \tau_Kn_K/(1-n) \end{pmatrix} \end{aligned}$$ or equivalently $$H^{-1}_{ij}=\tau_i \delta_{ij} + \frac{\tau_i n_j}{1-n}$$ in the representation by matrix elements. As a check of the above calculation, we can directly confirm the following relation, defining the inverse matrix: $$\bm{H}\bm{H}^{-1}=\bm{I} \>\>\> \Longleftrightarrow \>\>\> \sum_{j=1}^K H_{ij}H^{-1}_{jk}=\sum_{j=1}^K \left(-\frac{n_j}{\tau_j}+\frac{1}{\tau_j}\delta_{ij}\right)\left(\tau_j \delta_{jk} + \frac{\tau_j n_k}{1-n}\right) = \delta_{ik}.$$ The inverse matrix has a singularity at $n=1$, which corresponds to the critical regime of the Hawkes process.
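The three properties established in this appendix — real spectrum, determinant, and inverse — can be cross-checked numerically for randomly drawn parameters. In the sketch below, the values of $K$, $\{\tau_k\}$ and $\{n_k\}$ are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5
tau = rng.uniform(0.5, 3.0, size=K)    # illustrative timescales
n_k = rng.uniform(0.05, 0.15, size=K)  # partial branching ratios, n = sum_k n_k < 1
n = n_k.sum()

# H_ij = delta_ij / tau_i - n_j / tau_j
H = np.diag(1.0 / tau) - np.tile(n_k / tau, (K, 1))

eig = np.linalg.eigvals(H)             # real spectrum: H is similar to the symmetric Hbar
det_formula = (1.0 - n) / np.prod(tau)
Hinv_formula = np.diag(tau) + np.outer(tau, n_k) / (1.0 - n)

print(np.max(np.abs(eig.imag)))        # 0 up to round-off
```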
Proofs of the mathematical properties of $\bm{H}(\tau,\tau')$ (\[yhjuynbj2q\]) ============================================================================== Proof that the eigenvalues are real {#app:sec:realEigenvalues_continuous} ----------------------------------- Considering the analogy with the eigenvalue problem of the finite-dimensional matrix $\bm{H}$ (\[wrnhmnnh3\]), it is natural to expect, by taking the continuous limit, that all the eigenvalues of $H(\tau,\tau')$ are real. As a complement, we show that all eigenvalues can be proved to be real for $H(\tau,\tau')$ under some specific assumptions. For example, let us assume that $n(\tau)$ has a finite cutoff, such that $$n(\tau) = \begin{cases} {\tilde}{n}(\tau) & (\tau<\tau^*) \\ 0 & (\tau\geq \tau^*) \end{cases}$$ with a positive continuous function ${\tilde}{n}(\tau)> 0$ and a cutoff $\tau^* > 0$. In this case, the eigenvalue problem for $H(\tau,\tau')$ can be rewritten as $$\int_0^\infty d\tau' H(\tau,\tau')e(\tau';\lambda) = \int_0^{\tau^*} d\tau' H(\tau,\tau')e(\tau';\lambda) = \lambda e(\tau;\lambda). \label{app:eq:eigenvalue_integral_equation}$$ While $H(\tau,\tau')$ itself is not a symmetric kernel, $H(\tau,\tau')$ can be symmetrized by introducing $$\bar{H}(\tau,\tau') := \frac{\delta (\tau-\tau')-\sqrt{n(\tau)n(\tau')}}{\sqrt{\tau \tau'}},$$ such that $$\int_0^{\tau^*} d\tau'\bar{H}(\tau,\tau') \bar{e}(\tau';\lambda) = \lambda \bar{e}(\tau;\lambda), \>\>\> \bar{e}(\tau;\lambda) := \sqrt{\frac{n(\tau)}{\tau}}e(\tau;\lambda)$$ or equivalently, $$\int_0^{\tau^*} d\tau'\sqrt{\frac{n(\tau)n(\tau')}{\tau\tau'}} \bar{e}(\tau';\lambda) = (1/\tau-\lambda) \bar{e}(\tau;\lambda). \label{app:eq:eigenvalue_integral_equation2}$$ This implies that all the eigenvalues of $H(\tau,\tau')$ are identical to those of $\bar{H}(\tau,\tau')$. Since Eq. 
is a homogeneous Fredholm integral equation of the second kind with a continuous and symmetric kernel $\sqrt{n(\tau)n(\tau')/(\tau\tau')}$ and with a finite interval $[0,\tau^*]$, all the eigenvalues of $\bar{H}(\tau,\tau')$ are real according to the Hilbert-Schmidt theory [@ArfkenBook]. Therefore, all the eigenvalues of $H(\tau,\tau')$ are also real. Inverse matrix {#app:sec:inverseMatrix_continuous} -------------- The inverse matrix of $H(\tau,\tau')$ is given by $$H^{-1}(\tau,\tau') := \tau\left\{ \delta (\tau-\tau') + \frac{n(\tau')}{1-n}\right\}.$$ Indeed, we verify that $$\int_0^{\infty}d\tau' H(\tau,\tau')H^{-1}(\tau',\tau'') = \int_0^\infty d\tau' \frac{\delta(\tau-\tau')-n(\tau')}{\tau'}\tau'\left\{ \delta (\tau'-\tau'') + \frac{n(\tau'')}{1-n}\right\} = \delta(\tau-\tau'').$$ The inverse matrix has a singularity at $n=1$, corresponding to the critical regime of the Hawkes process. A. Hawkes, Journal of the Royal Statistical Society. Series B (Methodological) [**33**]{} (3), 438 (1971). A. Hawkes, Biometrika [**58**]{} (1), 83 (1971). A. Hawkes and D. Oakes, J. Appl. Prob. [**11**]{} (3), 493 (1974). Y.Y. Kagan and L. Knopoff, J. Geophys. Res. [**86**]{}, 2853 (1981). Y.Y. Kagan and L. Knopoff, Science [**236**]{}, 1563 (1987). Y. Ogata, J. Am. Stat. Assoc. [**83**]{}, 9 (1988). Y. Ogata, Pure Appl. Geophys. [**155**]{}, 471 (1999). A. Helmstetter and D. Sornette, J. Geophys. Res. [**107**]{} (B10), 2237 (2002). S. Nandan, G. Ouillon, D. Sornette, and S. Wiemer, Seismological Research Letters [**90**]{} (4), 1650 (2019). A.G. Hawkes, Quantitative Finance [**18**]{} (2), 193-198 (2018). V. Filimonov and D. Sornette, Phys. Rev. E [**85**]{} (5), 056108 (2012). V. Filimonov and D. Sornette, Quantitative Finance [**15**]{} (8), 1293 (2015). S. Wheatley, A. Wehrli, and D. Sornette, Quantitative Finance [**19**]{} (7), 1165 (2019). E. Bacry, I. Mastromatteo and J.-F. Muzy, Market Microstructure and Liquidity [**1**]{} (1), 1550005 (2015). Q. Zhao, M. A.
Erdogdu, H.Y. He, A. Rajaraman, and J. Leskovec, [*SEISMIC: A Self-Exciting Point Process Model for Predicting Tweet Popularity*]{}. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 1513. ACM (2015). D. Sornette, F. Deschatres, T. Gilbert, and Y. Ageon, Phys. Rev. Lett. [**93**]{} (22), 228701 (2004). R. Crane and D. Sornette, Proc. Nat. Acad. Sci. USA [**105**]{} (41), 15649 (2008). J.V. Escobar and D. Sornette, PLoS ONE [**10**]{} (1), e0116811 (2015). K. Kanazawa and D. Sornette, joint submission paper. A.I. Saichev and D. Sornette, Phys. Rev. E [**70**]{}, 046123 (2004). A. Saichev, A. Helmstetter, and D. Sornette, Pure and Applied Geophysics [**162**]{}, 1113 (2005). A.I. Saichev and D. Sornette, Eur. Phys. J. B [**49**]{}, 377 (2006). A. Saichev and D. Sornette, J. Geophys. Res. [**112**]{}, B04313 (2007). A. Saichev and D. Sornette, Phys. Rev. E [**89**]{}, 012104 (2014). D.J. Daley and D. Vere-Jones, [*An introduction to the theory of point processes*]{}, Volume I, Springer, Heidelberg (2003). J. Zhuang, Y. Ogata and D. Vere-Jones, Journal of the American Statistical Association [**97**]{} (458), 369 (2002). A. Helmstetter and D. Sornette, Geophys. Res. Lett. [**30**]{} (11), 1576 (2003). D. Oakes, J. Appl. Prob. [**12**]{}, 69 (1975). A. Dassios and H. Zhao, Advances in Applied Probability [**43**]{}, 814 (2011). D. Sornette and L. Knopoff, Bull. Seism. Soc. Am. [**87**]{}, 789 (1997). J.-P. Bouchaud, J. Bonart, J. Donier and M. Gould, [*Trades, quotes and prices*]{}, Cambridge University Press (2018). \[section 9.3.4\] S.J. Hardiman, N. Bercot, and J.-P. Bouchaud, Eur. Phys. J. B [**86**]{}, 442 (2013). T.E. Harris, [*The Theory of Branching Processes*]{}, Springer, Berlin (1963). A. Boumezoued, Advances in Applied Probability [**48**]{} (2), 463 (2016). G.I. Barenblatt, [*Scaling, self-similarity, and intermediate asymptotics*]{} (Cambridge University Press, Cambridge, UK, 1996). A. Dassios and H. Zhao, Electron. Commun. Probab.
[**18**]{} (62), 1 (2013). D. Harte, Journal of Statistical Software [**35**]{} (8), 1 (2010). J.V. Escobar and D. Sornette, PLoS ONE [**10**]{}, e0116811 (2015). G.B. Arfken and H.J. Weber, [*Mathematical Methods for Physicists*]{} (Academic, San Diego, 1995). K. Kanazawa and D. Sornette, in preparation. D. Sornette and G. Ouillon, Phys. Rev. Lett. [**94**]{}, 038501 (2005). V.A. Filimonov and D. Sornette, Europhysics Letters [**9**]{} (4), 46003 (2011).
---
author:
- 'Joseph C. A. Prentice'
- 'R. J. Needs'
title: 'Supplementary Information: Using forces to accelerate first-principles anharmonic vibrational calculations'
---

Pseudopotentials
================

All density functional theory calculations were performed using [CASTEP]{} version $8.0$, and its own “on-the-fly” ultrasoft pseudopotentials. The definition strings for the pseudopotentials were:

- H: 1|0.6|1|6|10|10(qc=8)
- Li: 1|1.0|14|16|18|10U:20(qc=7)
- Zr: 3|2.1|7|8|9|40U:50:41:42

Equilibrium unit cell configurations
====================================

The unit cells for the structures used in this work, containing the atoms at their equilibrium positions, are given in the `.cif` files `H2.cif`, `cmca4.cif`, `cmca12.cif`, `c2c24.cif`, `Li.cif` and `Zr.cif`.

Harmonic mode displacement patterns
===================================

The displacement patterns corresponding to the mapping directions used in the mapping of 2-D subspaces of the BO surface of the *Cmca*-4 structure of solid hydrogen are given below. They correspond to the displacement patterns of harmonic modes with frequencies of $69.4$, $74.0$ and $114$ meV, labelled as 4, 5 and 7 respectively. Each row shows the displacement of a H atom in the three Cartesian directions, in the same order as the atoms are listed in the file `cmca4.cif`.

- Direction 4:

         x     y     z
      0.00  0.75 -1.00
      0.00  0.75  1.00
      0.00 -0.75  1.00
      0.00 -0.75 -1.00

- Direction 5:

         x   y   z
         0   1   0
         0  -1   0
         0   1   0
         0  -1   0

- Direction 7:

         x   y   z
         1   0   0
        -1   0   0
        -1   0   0
         1   0   0
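As a quick consistency check (a sketch added here, not part of the original data), the three patterns above, read as 12-component displacement vectors, are mutually orthogonal, as expected for distinct harmonic modes:

```python
import numpy as np

# The three mapping directions above, as 4x3 arrays of per-atom Cartesian
# displacements (same atom order as in cmca4.cif).
d4 = np.array([[0, 0.75, -1], [0, 0.75, 1], [0, -0.75, 1], [0, -0.75, -1]])
d5 = np.array([[0, 1, 0], [0, -1, 0], [0, 1, 0], [0, -1, 0]])
d7 = np.array([[1, 0, 0], [-1, 0, 0], [-1, 0, 0], [1, 0, 0]])

# Distinct harmonic modes are orthogonal in displacement space
for a, b in [(d4, d5), (d4, d7), (d5, d7)]:
    assert np.dot(a.ravel(), b.ravel()) == 0
```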
--- abstract: '[A single Nitrogen Vacancy (NV) center hosted in a diamond nanocrystal is positioned at the extremity of a SiC nanowire. This novel hybrid system couples the degrees of freedom of two radically different systems, i.e. a nanomechanical oscillator and a single quantum object. The dynamics of the nano-resonator is probed through time resolved nanocrystal fluorescence and photon correlation measurements, conveying the influence of a mechanical degree of freedom given to a non-classical photon emitter. Moreover, by immersing the system in a strong magnetic field gradient, we induce a magnetic coupling between the nanomechanical oscillator and the NV electronic spin, providing nanomotion readout through a single electronic spin. Spin-dependent forces inherent to this coupling scheme are essential in a variety of active cooling and entanglement protocols used in atomic physics, and should now be within the reach of nanomechanical hybrid systems.]{}' author: - 'O.  Arcizet' - 'V.  Jacques' - 'A.  Siria' - 'P.  Poncharal' - 'P.  Vincent' - 'S.  Seidelin' title: A single NV defect coupled to a nanomechanical oscillator --- Owing to recent developments in cavity opto- and electro-mechanics [@Aspelmeyer2008; @Kippenberg2008; @Schwab2005], it is now realistic to envision the observation of macroscopic mechanical oscillators cooled by active or traditional cryogenic techniques close to their ground state of motion. This conceptually elegant accomplishment would give access to a vast playground for physicists if the resonator wavefunction could be coherently manipulated such as to create, maintain and probe Fock or other non-classical states. It would provide a remarkable opportunity to extend the pioneering experiments with trapped ions [@Blatt2008] to encompass macroscopic objects. 
However, standard continuous measurement techniques used to actively cool and probe the resonator [@Braginsky1992], when utilized to manipulate its quantum state, tend to blur its non-classical nature. An attractive alternative consists in interfacing the mechanical degrees of freedom with a single quantum object such as a 2-level system whose quantum state can be externally controlled [@Wilson-Rae2004; @Hammerer2009; @Rabl2009; @Hunger2010; @LaHaye2009; @Bennett2010]. Successful realization of this type of coupling between a nanomechanical oscillator in the quantum regime and a phase qubit was recently reported and motivates the development of similar hybrid quantum systems presenting extended coherence times at room temperature and compatible with continuous measurement approaches. ![The hybrid system. (a): A confocal microscope monitors the fluorescence of a single NV defect hosted in a diamond nanocrystal positioned at the extremity of a SiC nanowire. A microwave antenna is used to manipulate the NV electronic spin, while a micro-fabricated magnetic structure approached in the vicinity of the suspended NV center generates a strong magnetic field gradient. (b): Simplified electronic structure of the NV centers at zero magnetic field. (c): Fluorescence map of the system recorded with the confocal microscope while scanning the objective position. The isolated bright spot circled in red corresponds to the fluorescence of a single NV center. Inset: zoom on the nanowire extremity. ](fig1.pdf){width="8.3cm"} Here we report a first step in this direction by coupling a nanomechanical oscillator to a single negatively-charged Nitrogen Vacancy (NV) defect hosted in a diamond nanocrystal attached to its extremity (fig. 1a). In that context, the NV defect appears as an attractive quantum system, both for its optical and electronic spin properties.
Indeed, perfect photostability at room temperature makes the NV defect a robust and practical single-photon source [@Kurtsiefer2000; @Brouri2000]. Moreover, the NV defect ground state is a spin triplet (fig. 1b) which can be initialized and read-out by optical means, and manipulated by resonant microwave excitation with an unprecedented coherence time for a solid-state system under ambient conditions [@Jelezko2004; @Balasubramanian2009]. Such properties are at the heart of diamond-based quantum information processing [@Gurudev2007; @Neumann2008; @Buckley2010; @Togan2010; @Neumann2010] and ultrasensitive magnetometry, where the spin is used as an atomic-sized magnetic sensor [@Maze2008; @Balasubramanian2008; @Lange2011]. These results make the NV defect an appealing candidate for interfacing a nanomechanical oscillator: once immersed in a strong magnetic field gradient, an efficient coupling between the NV defect electronic spin and the nanoresonator position can be achieved. Furthermore, this novel hybrid system has the potential to reach the strong coupling regime, as already envisioned in ref. [@Rabl2009; @Rabl2010].\ In the following, we first show that the nanomechanical oscillator dynamics can be probed using the NV center as a single photon source, illustrating a resonant optomechanical coupling that does not suffer from the usual reduction in strength typically observed while optically interacting with sub-wavelength sized resonators. Furthermore, we provide clear spectroscopic evidence of the mechanical degree of freedom by magnetic coupling of the spin to the nanoresonator position, demonstrating spin mediated readout of the oscillator dynamics.\ The nanomechanical oscillator consists of a SiC nanowire attached to the extremity of a conducting tungsten tip (fig. 1).
SiC nanowires represent compelling nanomechanical oscillators, combining in a low mass system a high mechanical quality factor, a relatively high vibration frequency and a large spreading of the zero-point energy wave function. For a $10\,\rm \mu m$ long and $50\,\rm nm$ diameter nanowire, with an effective mass of $M_{\rm eff}=16\,\rm fg$, the vibration frequency reaches $\Omega_{\rm m}/2\pi\equiv 1/T=1\,\rm MHz$ and its spring constant $k=M_{\rm eff}\Omega_{\rm m}^2=700\,\rm\mu N/m$, corresponding to a room temperature Brownian motion of $3\,\rm nm$ rms and a ground-state wave function spreading of $\Delta x^q=\sqrt{\frac{\hbar}{2 M_{\rm eff}\Omega_{\rm m}}}\approx 0.7\,\rm pm$. The nanomechanical oscillator can be efficiently driven to large oscillation amplitudes (several $\mu m$), as shown in fig. 2a. Resonators with a mechanical quality factor (Q) above 10 000 were measured in vacuum in the TEM imager, and even larger values can be achieved in similar devices [@Perisanu2007].\ A diamond nanocrystal hosting a single NV defect is attached to the oscillator free extremity and fluorescence is detected by a confocal microscope as shown in fig. 1 (see Methods). When the hybrid system is set into motion, its extremity oscillates back and forth across the optical spot, thus modulating the overlap with the optical detection volume (fig. 2b). As the emitter can only be pumped and a photon detected when located within this volume, the fluorescence rate therefore provides a simple detection technique of the resonator dynamics (fig. 2c). A time-resolved fluorescence measurement synchronized with the piezo driving voltage (fig. 2d) probes the oscillator dynamics across the optical spot. From this, the oscillation amplitude and direction can be determined and the piezo driving efficiency calibrated (of the order of 200 nm/V for the 625 kHz mode considered here).
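The quoted figures follow directly from the stated mass and frequency. The sketch below checks them numerically; the photon wavelength ($638\,$nm), the $10^5\,$T/m gradient and the vacuum $Q=10\,000$ are values quoted in later paragraphs of the text:

```python
import math

# Order-of-magnitude checks of the nanowire figures quoted in the text
# (M_eff = 16 fg, Omega_m/2pi = 1 MHz; wavelength, gradient and Q are
# taken from later paragraphs of the same text).
hbar, h = 1.0546e-34, 6.626e-34
kB, T = 1.381e-23, 300.0
muB = 9.274e-24

M = 16e-18                       # effective mass, kg (16 fg)
Omega = 2 * math.pi * 1e6        # mechanical angular frequency, rad/s

k = M * Omega**2                                # spring constant
x_th = math.sqrt(kB * T / k)                    # thermal rms motion
dx_q = math.sqrt(hbar / (2 * M * Omega))        # zero-point spread
dx_rec = (h / 638e-9) / (M * Omega)             # single-photon recoil step
F_spin = 2 * muB * 1e5                          # spin force, gradient 1e5 T/m
F_th = math.sqrt(2 * M * Omega * kB * T / 1e4)  # thermal force noise, Q = 1e4

assert 6e-4 < k < 8e-4             # ~700 uN/m
assert 2e-9 < x_th < 4e-9          # ~3 nm rms Brownian motion
assert 6e-13 < dx_q < 9e-13        # ~0.7 pm
assert 8e-18 < dx_rec < 1.3e-17    # ~10 am << dx_q (Lamb-Dicke regime)
assert 1.5e-18 < F_spin < 2.5e-18  # ~2 aN spin-dependent force
assert 8e-18 < F_th < 1e-17        # ~9 aN/sqrt(Hz)
```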
Note that in this experiment, the NV center serves as a probe of the nanoresonator dynamics, and the recoil displacements $\delta x^{\rm rec}=\frac{h/\lambda}{M_{\rm eff}\Omega_{\rm m}}\approx 10\,\rm am$ due to single photon emission ($\lambda=638\,\rm nm$) are negligible compared to $\Delta x^q$, which is equivalent to the Lamb-Dicke regime [@Blatt2008].\ The mechanical degree of freedom given to the single quantum emitter is further elucidated by recording the histogram of the time delays between two consecutive single-photon detections using a standard Hanbury Brown and Twiss (HBT) interferometer. After normalization to Poissonian statistics [@Beveratos2002], the recorded histogram is equivalent to a measurement of the second-order autocorrelation function $g^{(2)}(\tau)$. For an oscillator at rest, a pronounced anticorrelation effect is observed ($g^{(2)}(0)=0.3$), as expected for a single quantum emitter (fig. 2e). The shape of the autocorrelation function is strongly altered when the emitter is in motion. Although the anticorrelation effect is still observed at zero delay, additional periodic drops appear, reflecting the time intervals spent outside the detection volume, as usually observed for the $g^{(2)}(\tau)$ function recorded under pulsed excitation. The regime presented here corresponds to a slow oscillator, driven at amplitudes for which the time required to cross the optical detection volume - corresponding to the width of the peaks in the autocorrelation trace - remains long compared to the single emitter lifetime (12 ns). The photon emission probability of the single emitter therefore adiabatically follows the spatial variations of the pump intensity as it traverses the optical spot. A different and equally interesting regime arises in the situation where the illumination duration is comparable to the emitter lifetime. This can be readily reached, e.g.
with a 10-MHz oscillator, driven at $1\,\rm \mu m$ oscillation amplitudes.\ Together, the results presented above call upon a wide range of experiments merging the fields of single emitter quantum optics and optomechanics. The second part of this letter demonstrates the coupling between the nanomechanical oscillator position and the NV defect electronic spin. The ground state is a spin triplet $S=1$, whose degeneracy is lifted to $2.8$ GHz by spin-spin interactions in the absence of static magnetic fields (fig. 1b) [@Manson2006]. Radiative transition selection rules associated with the spin state quantum number provide a high degree of spin polarization in the $m_{s}=0$ substate through optical pumping. In addition, the NV defect photoluminescence intensity is significantly higher when the $m_{S}=0$ state is populated [@Manson2006]. Due to this spin dependent fluorescence rate, electron spin resonances (ESR) can be optically detected [@Gruber1997; @Jelezko2004]. More precisely, as shown in fig. 3a, when the suspended single NV defect, initially prepared in the $m_{S}=0$ state through optical pumping, is driven to the $m_{S}=\pm 1$ spin states by applying a resonant microwave field, a dip in the photoluminescence signal is observed. The orientation of the suspended NV defect was determined by measuring the Zeeman shift of the ESR frequencies as a function of the orientation and magnitude of a calibrated static magnetic field (fig. 3b). The latter were subsequently fitted according to the eigenvalues of the ground-state spin Hamiltonian given by $H_{\rm spin}= D S_Z^2+ E(S_X^2-S_Y^2)+ g\mu_B {\bf B}\cdot {\bf S}$, where $D$ and $E$ are the zero-field splitting parameters, $Z$ the NV defect quantization axis, $g$ its g-factor ($\approx 2$), and $\mu_{B}$ the Bohr magneton. The NV axis was found to be aligned (within 5 degrees) with the oscillation trajectory of a $625\,\rm kHz$ mode of the nanoresonator, coinciding with the $z$ axis of fig.
1a.\ To magnetically couple the electronic spin and the nanoresonator position we apply a strong magnetic field gradient to the suspended NV, rendering its electronic spin energy dependent on the oscillator position $z$. To this end, an in-house patterned magnetic structure [@Kustov2010] with an extended homogeneity of the field gradient was micro-positioned in the vicinity of the suspended NV. The magnetic field was aligned with the NV axis in order to maintain a high ESR contrast and the position optimized to find a gradient that is homogeneous along the oscillating NV trajectory. A prominent signature of the coupling is the modification of the ESR profile when the oscillator is set in motion. Since the oscillation frequency of the mode considered here ($625\,\rm kHz$) is smaller than the ESR linewidth (power broadened to a half-width at half maximum (HWHM) of $\Gamma_{\rm s}/2\pi = 7\,\rm MHz$), we can consider that the electron spin adiabatically follows the Zeeman shifted resonances. The evidence of magnetic coupling between the nanomechanical oscillator position and the NV electronic spin is illustrated in fig. 4b, where one can observe a motional ESR broadening followed by a characteristic splitting at stronger oscillation amplitudes ($\delta z$), whose shape reflects the harmonic oscillation turning points. For a NV axis oriented along the oscillation direction and magnetic field ($z\simeq Z$), which holds true in our system, the system is formally described by the coupling Hamiltonian $g\mu_{\rm B}\nabla B\, S_Z \, z $. In this case, we can approximate the magnetic coupling by a scalar description.
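The broadening-and-splitting scenario described above can be sketched numerically by time-averaging a Lorentzian line over the harmonic Zeeman shift. The sketch assumes a purely linear gradient of 6700 T/m (the fitted value reported below) and drops the quadratic field term:

```python
import numpy as np

# Motional ESR lineshape: a Lorentzian of half-width Gamma_s/2pi = 7 MHz,
# Zeeman-shifted through a linear gradient dB/dz = 6700 T/m and averaged
# over one mechanical period (adiabatic regime, Omega_m << Gamma_s).
gmuB_h = 2.8e10                        # g*mu_B/h in Hz/T (g ~ 2)
grad, hwhm = 6700.0, 7e6               # T/m, Hz
f = np.linspace(-1.5e8, 1.5e8, 2001)   # detuning from the ESR centre f0, Hz

def motional_esr(dz, nt=1000):
    t = 2 * np.pi * np.arange(nt) / nt
    shift = gmuB_h * grad * dz * np.cos(t)       # instantaneous Zeeman shift
    L = 1.0 / (1.0 + ((f[None, :] - shift[:, None]) / hwhm) ** 2)
    return L.mean(axis=0)                        # time-averaged lineshape

at_rest = motional_esr(0.0)
driven = motional_esr(500e-9)                    # 500 nm oscillation amplitude

assert np.argmax(at_rest) == len(f) // 2         # single line at rest
peak = abs(f[np.argmax(driven)])                 # split peaks appear near the
assert 7.0e7 < peak < 1.0e8                      # ~94 MHz turning points
assert driven[len(f) // 2] < 0.8 * driven.max()  # dip at the line centre
```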
Accordingly, the data from fig. 4b were fitted with the function $$\Lambda(f,\delta z)= \frac{1}{T}\int_0^{T}{\mathfrak{L}\left(f,f_0-\frac{g\mu_{\rm B}}{h} B(\delta z \, \cos \Omega_{\rm m}t)\right)dt},$$ where $\mathfrak{L}(f,f_{\rm ESR})$ is the unperturbed ESR resonance shape, which for simplicity is assumed to be Lorentzian (half-width $\Gamma_{\rm s}/2\pi$). $B(z)$ is the magnetic field along the NV trajectory, approximated by $B(z)=B_0 + \frac{d B}{dz} z + \frac{d^2 B}{dz^2} z^2 $, whose coefficients are the only free parameters in the fitting procedure. This model is in good agreement with experimental data, even at strong oscillation amplitudes. The effective spin resonance half-width defined as $\frac{1}{2\pi}\sqrt{\Gamma_{\rm s}^2+\left(\frac{g\mu_{\rm B}}{\hbar}\frac{d B}{dz}\delta z\right)^2}$ is then plotted as a function of the calibrated oscillation amplitude $\delta z$ in fig. 4c (in red for the data set presented in fig. 4b, in blue and black for different gradients). The plot allows one to verify the consistency of each data set and to extract the magnetic field gradient, which amounts to 6700 T/m for the data shown in fig. 4b. This value, as well as the mean magnetic field also obtained from the fit ($B_0 = 90\,\rm mT$), are in good agreement with both static measurements of the field spatial profile obtained by locally displacing the magnetic structure, as well as simulations [@Kustov2010]. The mechanical quality factor (damped to $Q=150$ in air) is obtained by sweeping the driving frequency across the resonance (fig. 4d). ![[Magnetic coupling of the NV electronic spin to the nanomotion observed on the $m_{\rm S}=0$ to $m_{\rm S}=-1$ transition. (a): Schematics of the experiment. (b): ESR obtained for an increasing oscillation amplitude at $\Omega_{\rm m}/2\pi= 625\,\rm kHz$.
The red line is a fit (see main text) allowing one to extract the effective ESR half-width which is reported in panel (c) (circles) as a function of the oscillation amplitude ($\delta z$). (c): Various magnetic field gradients (0.1, 6700 and 45000 T/m) were explored, corresponding to different distances above the magnetic structure (1000, 15 and $2\,\rm\mu m$ (blue, red, black)), allowing one to tune the NV-oscillator coupling strength. The deviation from the model observed at large driving amplitudes in the strongest gradient case (black squares) is a consequence of the reduced gradient homogeneity at short distances from the structure. (d): Effective ESR half-width as a function of the driving frequency, for a 4500 T/m gradient and a resonant oscillation amplitude of 50 nm. ]{}](fig4.pdf){width="\linewidth"} Having observed how the nanomotion is imprinted on the electronic spin dynamics, a question that naturally arises is whether the NV electronic spin can affect the nanoresonator dynamics [@Rugar2004]. This reverse interaction would enable cooling of the nanoresonator or preparation of non-classical mechanical states through spin dependent forces [@Blatt2008]. For a magnetic field gradient of $10^5\,\rm T/m$, the change in the spin dependent force exerted on the nanomechanical oscillator from one spin state to another amounts to $g \mu_B \nabla B \approx 2\,\rm aN$. This order of magnitude is comparable to the thermal noise limited force sensitivity of the nanoresonator $\sqrt{2 M_{\rm eff} \Omega_{\rm m} k_B T /{Q}}\approx 9\,\rm aN/\sqrt{Hz}$ expected at room temperature for the parameters previously used and the vacuum $Q=10\,000$. Resolving the Brownian motion thus gives a reasonable metric for the sensitivity required to detect the spin dynamics. The corresponding room temperature thermal noise amounts at resonance (1 MHz) to approx.
$100\,\rm pm/\sqrt{Hz}$, an order of magnitude that can be easily detected with simple optical means despite the sub-wavelength size of the resonator [@Sanii2010; @Favero2009; @Anetsberger2009; @Regal2008]. Furthermore, to probe spin dynamics with the nanoresonator, spin coherence has to be preserved over several mechanical oscillations. The so-called resolved sideband regime ($\Omega_{\rm m}>\Gamma_{\rm s}$) is within reach when working with shorter nanowires and increased spin coherence times, and is also of importance when exploring the avenues for probing and cooling the nanomechanical oscillator down to its quantum ground state through single spin manipulations [@Rabl2009; @Rabl2010a]. These results represent a clear quantitative signature of the nanoresonator motion directly imprinted on the electronic spin dynamics via magnetic coupling. Long lived electronic spins coupled to nanomechanical oscillators represent a promising experimental hybrid system whose two components can independently be monitored and controlled. This, combined with the single photon source character of the suspended NV defect paves the way towards single photon optomechanics.\ [**Acknowledgments** ]{} We acknowledge J. Jarreau, C. Hoarau, D. Lepoitevin, J.F. Motte, P. Brichon, N. Dempsey, O. Fruchart, F. Dumas Bouchiat, D. Givord, E. Gheeraert, O. Mollet, A. Drezet, J.F. Roch, S. Huant and J. Chevrier for technical support, experimental assistance and discussions. This project is funded by the European Commission (Reintegration Grant) and the Agence Nationale de la Recherche (project Q-NOM).\ [**Methods** ]{} [*Nanowire functionalization -*]{} A diamond nanocrystal hosting a single NV defect is attached to the oscillator free extremity during a piezo-controlled immersion into a commercial solution of 50-nm-diameter diamond nano-crystals. 
The adhesion efficiency is significantly increased under focussed laser illumination, due to the increased convection combined with an optical tweezer mechanism. Since the solution meniscus size remains comparable to the nanowire diameter, it is possible to only functionalize the very extremity of the nanowire, while a subsequent focussed ion beam cut allows final adjustments. This method allows for efficient and robust positioning of a single NV center at the extremity of a nanowire and works reliably over a wide variety of resonator sizes and materials including Carbon and Boron Nitride nanotubes. [*Experimental setup -* ]{} The NV center is excited and its fluorescence collected through a 100x long working distance microscope objective (generating an approx. 450nm diameter optical spot) and detected on avalanche photodiodes. The objective is mounted on a fast Physik Instrument XYZ piezo stage in order to localize the suspended NV defect (fig. 1c). A tracking program continuously maintains the detection spot on the single emitter. A fast piezoelectric module positioned on top of the STM tip drives the nanomechanical oscillator and a micro-antenna generates the microwave field used to manipulate the NV electronic spin. 
[36]{} (bibliography entries garbled in conversion; recoverable DOIs: 10.1103/PhysRevLett.103.063005, 10.1103/PhysRevLett.104.143002, 10.1103/PhysRevLett.104.017203, 10.1103/PhysRevLett.85.290, 10.1364/OL.25.001294, 10.1103/PhysRevLett.92.076401, 10.1126/science.1157233, 10.1063/1.2432257, 10.1103/PhysRevB.74.104303, 10.1103/PhysRevB.82.165320)
--- bibliography: - 'bib-FF-FL.bib' --- [**The Oka principle\ for holomorphic Legendrian curves in ${\mathbb{C}}^{2n+1}$**]{} [**Franc Forstnerič and Finnur Lárusson**]{} > [**Abstract**]{} Let $M$ be a connected open Riemann surface. We prove that the space ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$ of all holomorphic Legendrian immersions of $M$ to ${\mathbb{C}}^{2n+1}$, $n\geq 1$, endowed with the standard holomorphic contact structure, is weakly homotopy equivalent to the space ${\mathscr{C}}(M,{\mathbb{S}}^{4n-1})$ of continuous maps from $M$ to the sphere ${\mathbb{S}}^{4n-1}$. If $M$ has finite topological type, then these spaces are homotopy equivalent. We determine the homotopy groups of ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$ in terms of the homotopy groups of ${\mathbb{S}}^{4n-1}$. It follows that ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$ is $(4n-3)$-connected. > > [**Keywords**]{} Riemann surface, Legendrian curve, Oka principle, absolute neighborhood retract > > [**MSC (2010)**]{} 53D10, 32E30, 32H02, 57R17 > > [**Date**]{} 6 November 2016; this version 15 May 2017 Introduction {#sec:intro} ============ It is an interesting and important problem to describe the rough shape of mapping spaces that arise in analysis and geometry. Answering such a question typically amounts to proving a [*homotopy principle*]{} (h-principle) to the effect that analytic solutions can be classified by topological data; in particular, a solution exists in the absence of topological obstructions. For a survey of the h-principle and its applications, see the monographs by Gromov [@Gromov1986book], Eliashberg and Mishachev [@EliashbergMishachev2002], and Spring [@Spring2010]. In complex analysis, a synonym for h-principle is [*Oka principle*]{}. This is a subject with a long and rich history going back to Oka’s paper [@Oka1939] in 1939; we refer to the monograph [@Forstneric2011]. 
In this paper, we describe the rough shape of the space ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$ of holomorphic Legendrian immersions of an open Riemann surface $M$ into the complex Euclidean space ${\mathbb{C}}^{2n+1}$, $n\geq 1$, with the standard holomorphic contact structure (\[eq:alpha\]). Our main result is that ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$ is weakly homotopy equivalent to the space ${\mathscr{C}}(M,{\mathbb{S}}^{4n-1})$ of continuous maps from $M$ to the $(4n-1)$-dimensional sphere, and is homotopy equivalent to it if $M$ has finite topological type; see Corollary \[cor:whe\]. Analogous results for several other mapping spaces were obtained in [@ForstnericLarusson2016]. We begin by introducing the relevant spaces of maps. All spaces under consideration are endowed with the compact-open topology, unless otherwise specified. A holomorphic $1$-form $\alpha$ on a complex manifold $X$ of odd dimension $2n+1\ge 3$ is said to be a [*contact form*]{} if it satisfies the nondegeneracy condition $\alpha \wedge(d\alpha)^n \neq 0$ at every point of $X$. The model is the complex Euclidean space ${\mathbb{C}}^{2n+1}$ with coordinates $$\label{eq:coord} x=(x_1,\ldots,x_n)\in{\mathbb{C}}^n,\quad y=(y_1,\ldots,y_n)\in{\mathbb{C}}^n, \quad z\in{\mathbb{C}},$$ and $\alpha$ the standard contact form $$\label{eq:alpha} \alpha = dz + \sum_{j=1}^n x_j \, dy_j.$$ By Darboux’s theorem, every holomorphic contact form on a $(2n+1)$-dimensional complex manifold is given by (\[eq:alpha\]) in some local holomorphic coordinates at each point (see [@AlarconForstnericLopez2016Legendrian Theorem A.2]; for the smooth case, see e.g. [@Geiges2008CUP Theorem 2.5.1]). A smooth map $F\colon M\to {\mathbb{C}}^{2n+1}$ from a smooth manifold $M$ is said to be [*Legendrian*]{} if $F^*\alpha=0$ on $M$. It is an elementary observation that every smooth Legendrian surface in a $3$-dimensional complex contact manifold is a complex curve; see Proposition \[prop:complex\]. Let $M$ be a connected open Riemann surface.
Denote by ${\mathscr{I}}(M,{\mathbb{C}}^{2n})$ the space of all holomorphic immersions $M\to{\mathbb{C}}^{2n}$, and consider the closed subspace $${\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) = \bigl\{ (x,y)\in {\mathscr{I}}(M,{\mathbb{C}}^{2n}) : xdy= \sum_{j=1}^n x_j \, dy_j \ \ \text{is an exact $1$-form on}\ M\bigr\}.$$ Elements of ${\mathscr{I}}_*(M,{\mathbb{C}}^{2n})$ will be called [*exact holomorphic immersions*]{}. Let $$\label{eq:incl} {\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \stackrel{\phi}{{\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow}}} {\mathscr{I}}(M,{\mathbb{C}}^{2n})$$ be the inclusion. Note that the map $${\mathscr{L}}(M,{\mathbb{C}}^{2n+1}) \longrightarrow {\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \times{\mathbb{C}},$$ given for a fixed choice of a base point $u_0\in M$ by $$\label{eq:homeo} {\mathscr{L}}(M,{\mathbb{C}}^{2n+1}) \ni (x,y,z) \longmapsto (x,y,z(u_0)) \in {\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \times{\mathbb{C}},$$ is a homeomorphism. This follows immediately from the formula $$\label{eq:z-component} z(u)=z(u_0)-\int_{u_0}^u xdy,\quad u\in M,$$ which holds for any Legendrian immersion $(x,y,z)\in {\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$, observing also that the integral $\int_{u_0}^u xdy$ is independent of the choice of a path from $u_0$ to $u$ (and hence defines a Legendrian immersion by the above formula) if and only if $(x,y)\in {\mathscr{I}}_*(M,{\mathbb{C}}^{2n})$. It follows that the projection $\pi\colon {\mathscr{L}}(M,{\mathbb{C}}^{2n+1}) \to {\mathscr{I}}_*(M,{\mathbb{C}}^{2n})$ is a homotopy equivalence. Fix a nowhere vanishing holomorphic $1$-form $\theta$ on $M$; such exists by the Oka-Grauert principle [@Forstneric2011 Theorem 5.3.1]. The specific choice of $\theta$ will be irrelevant. For every immersion $\sigma \in {\mathscr{I}}(M,{\mathbb{C}}^{2n})$, the map $d\sigma/\theta\colon M\to {\mathbb{C}}^{2n}$ is holomorphic and it avoids the origin $0\in{\mathbb{C}}^{2n}$. 
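The correspondence between exact immersions and Legendrian curves can be illustrated symbolically for $n=1$ and $M={\mathbb{C}}$, with the hypothetical sample data $x=u$, $y=u^2$ (any such pair works on ${\mathbb{C}}$, where every holomorphic $1$-form is exact):

```python
import sympy as sp

# For n = 1 and M = C, an exact holomorphic immersion (x, y) determines a
# Legendrian curve via z(u) = z(u0) - int_{u0}^u x dy; we verify that the
# pullback of alpha = dz + x dy vanishes for the sample choice x = u, y = u^2.
u, z0 = sp.symbols('u z0')
x, y = u, u**2
z = z0 - sp.integrate(x * sp.diff(y, u), (u, 0, u))   # z(0) = z0
pullback = sp.diff(z, u) + x * sp.diff(y, u)          # coefficient of du in F*alpha
assert sp.simplify(pullback) == 0
# (x, y) is an immersion: (x', y') never vanishes simultaneously
assert sp.solve([sp.diff(x, u), sp.diff(y, u)], u) == []
```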
The correspondence $\sigma \mapsto d\sigma/\theta$ defines a continuous map $$\varphi : {\mathscr{I}}(M,{\mathbb{C}}^{2n}) \longrightarrow {\mathscr{O}}(M,{\mathbb{C}}^{2n}_*).$$ Here, ${\mathbb{C}}^{2n}_*={\mathbb{C}}^{2n}\setminus\{0\}$. By [@ForstnericLarusson2016 Theorem 1.4], $\varphi$ is a weak homotopy equivalence, and a homotopy equivalence if $M$ has finite topological type. Let $\iota\colon{\mathscr{O}}(M,{\mathbb{C}}^{2n}_*)\hookrightarrow {\mathscr{C}}(M,{\mathbb{C}}^{2n}_*)$ denote the inclusion of the space of holomorphic maps into the space of continuous maps. Since ${\mathbb{C}}^{2n}_*$ is a homogeneous space of the complex Lie group $GL_{2n}({\mathbb{C}})$, $\iota$ is a weak homotopy equivalence by the Oka-Grauert principle [@Forstneric2011 Theorem 5.3.2]; if $M$ has finite topological type, then $\iota$ is a homotopy equivalence [@Larusson2015PAMS]. Finally, the radial projection ${\mathbb{C}}^{2n}_*\to {\mathbb{S}}^{4n-1}$ onto the unit sphere induces a homotopy equivalence $\tau\colon {\mathscr{C}}(M,{\mathbb{C}}^{2n}_*)\to {\mathscr{C}}(M,{\mathbb{S}}^{4n-1})$. In summary, all the maps in the following sequence except $\phi$ are known to be weak homotopy equivalences, and to be homotopy equivalences when $M$ has finite topological type: $$\begin{gathered} \label{eq:fivemaps} {\mathscr{L}}(M,{\mathbb{C}}^{2n+1}) \stackrel{\pi}{\longrightarrow} {\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \stackrel{\phi}{{\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow}}} {\mathscr{I}}(M,{\mathbb{C}}^{2n}) \stackrel{\varphi}{\longrightarrow} \\ \stackrel{\varphi}{\longrightarrow} {\mathscr{O}}(M,{\mathbb{C}}^{2n}_*) \stackrel{\iota}{{\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow}}} {\mathscr{C}}(M,{\mathbb{C}}^{2n}_*) \stackrel{\tau}{\longrightarrow} {\mathscr{C}}(M,{\mathbb{S}}^{4n-1}).\end{gathered}$$ The following is our main result. 
\[th:immersions\] For every connected open Riemann surface $M$, the inclusion $${\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) {\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow}}{\mathscr{I}}(M,{\mathbb{C}}^{2n})$$ of the space of exact holomorphic immersions $M\to{\mathbb{C}}^{2n}$, $n\geq 1$, into the space of all holomorphic immersions is a weak homotopy equivalence, and a homotopy equivalence if the surface $M$ has finite topological type. Since a composition of (weak) homotopy equivalences is again a (weak) homotopy equivalence, Theorem \[th:immersions\] implies the following. \[cor:whe\] All the maps in the sequence \[eq:fivemaps\], and compositions thereof, are weak homotopy equivalences, and homotopy equivalences if $M$ has finite topological type. This holds in particular for the map ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1}) \to {\mathscr{C}}(M,{\mathbb{S}}^{4n-1})$. The first part of Theorem \[th:immersions\] follows immediately from Theorem \[th:parametric\], which establishes the parametric Oka principle with approximation for the inclusion \[eq:incl\]. The same proof gives the parametric Oka principle with approximation for holomorphic Legendrian immersions; see Remark \[rem:php-whe\]. The basic case of the latter result is [@AlarconForstnericLopez2016Legendrian Theorem 1.1]. The parametric case considered here is more demanding, but unavoidable when analysing the homotopy type of these mapping spaces. The second part of Theorem \[th:immersions\] is proved in Sec. \[sec:strong\]. Our proofs bring together tools from complex analysis and geometry, convex integration theory, and the theory of absolute neighborhood retracts. The examples in [@Forstneric2016hyp] show that Theorem \[th:immersions\] and Corollary \[cor:whe\] have no analogue for more general holomorphic contact structures on Euclidean spaces; see Remark \[rem:hyperbolic\].
In those examples, the contact structure is Kobayashi hyperbolic, and hence it does not admit any nonconstant Legendrian maps from ${\mathbb{C}}$ or ${\mathbb{C}}_*$. It was shown in [@AlarconForstnericLopez2016Legendrian] that the space ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$ is very big from the analytic viewpoint. In particular, every holomorphic Legendrian map $K\to {\mathbb{C}}^{2n+1}$ from (a neighborhood of) a compact ${\mathscr{O}}(M)$-convex subset $K\subset M$ can be approximated on $K$ by proper holomorphic Legendrian embeddings of $M$ into ${\mathbb{C}}^{2n+1}$. Furthermore, every bordered Riemann surface carries a [*complete*]{} proper holomorphic Legendrian embedding into the ball of ${\mathbb{C}}^{3}$, and a complete bounded holomorphic Legendrian embedding in ${\mathbb{C}}^3$ such that the image surface is bounded by Jordan curves. (An immersion $F\colon M\to{\mathbb{R}}^{n}$ is said to be complete if the pull-back of the Euclidean metric on ${\mathbb{R}}^n$ by $F$ is a complete metric on $M$.) Analogous results for holomorphic immersions $M\to{\mathbb{C}}^n$ $(n\ge 2)$, null holomorphic curves in ${\mathbb{C}}^n$ $(n\ge 3)$, and conformal minimal immersions in ${\mathbb{R}}^n$ ($n\ge 3$) were proved in [@AlarconDrinovecForstnericLopez2015MC; @AlarconDrinovecForstnericLopez2015PLMS]. On a compact bordered Riemann surface $M$, we define for every integer $r\geq 1$ the corresponding mapping spaces ${\mathscr{L}}^r(M,{\mathbb{C}}^{2n+1})$ and ${\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n}) \subset {\mathscr{I}}^r(M,{\mathbb{C}}^{2n})$ by considering maps of class ${\mathscr{C}}^r(M)$ that are holomorphic in the interior $\mathring M=M\setminus bM$; see Subsec. \[subs:Legendrian\]. These spaces are complex Banach manifolds (see Theorem \[th:Banach\]), and hence absolute neighborhood retracts, and the corresponding maps in the sequence \[eq:fivemaps\] are homotopy equivalences (see Remark \[rem:php-whe\] and Sec. \[sec:strong\]).
We will now explicitly describe the homotopy type of ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$ and determine its homotopy groups in terms of the homotopy groups of the sphere ${\mathbb{S}}^{4n-1}$. A connected open Riemann surface $M$ is homotopy equivalent to a bouquet of circles $\bigvee_{i=1}^\ell {\mathbb{S}}^1$, where $\ell \in\{0,1,\ldots,\infty\}$ is the rank of the free abelian group $H_1(M;{\mathbb{Z}})={\mathbb{Z}}^\ell$. For $\ell=0$, we take the bouquet to be a point. The surface $M$ has finite topological type if and only if $\ell$ is finite; then $M$ is biholomorphic to the complement of a finite set of points and closed disks in a compact Riemann surface (see Stout [@Stout1965TAMS]). The bouquet $\bigvee_{i=1}^\ell {\mathbb{S}}^1$ embeds in $M$ as a deformation retract of $M$. Hence we have a homotopy equivalence $${\mathscr{C}}(M,{\mathbb{S}}^{4n-1}) \to {\mathscr{C}}(\bigvee_{i=1}^\ell {\mathbb{S}}^1,{\mathbb{S}}^{4n-1}).$$ For a space $Y$, let us denote the space ${\mathscr{C}}(\bigvee_{i=1}^\ell {\mathbb{S}}^1, Y)$ by ${\mathcal{L}}_\ell Y$. Then ${\mathcal{L}}_1 Y$ is the free loop space ${\mathcal{L}}Y$ of $Y$. It is well known that if we choose a base point $s\in{\mathbb{S}}^1$, then the evaluation map ${\mathcal{L}}Y\to Y$, $\gamma\mapsto\gamma(s)$, is a fibration whose fibre is the loop space $\Omega Y$ of $Y$ [@Strom1968 Theorem 10]. More generally, taking $s$ to be the common point of the circles in the bouquet $\bigvee_{i=1}^\ell {\mathbb{S}}^1$, $\ell\geq 1$, the evaluation map ${\mathcal{L}}_\ell Y \to Y$ is a fibration whose fibre is $(\Omega Y)^\ell$. Corollary \[cor:whe\] now implies the first part of the following result. \[cor:loopspace\] Let $M$ be a connected open Riemann surface with $H_1(M;{\mathbb{Z}})={\mathbb{Z}}^\ell$, $\ell \in\{0,1,\ldots,\infty\}$. For each $n\geq 1$, the spaces ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$ and ${\mathcal{L}}_\ell{\mathbb{S}}^{4n-1}$ are weakly homotopy equivalent. 
If $M$ has finite topological type, then they are homotopy equivalent. It follows that ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$ is path connected and simply connected, and for each $k\geq 2$, $$\pi_k({\mathscr{L}}(M,{\mathbb{C}}^{2n+1})) = \pi_k({\mathbb{S}}^{4n-1}) \times \pi_{k+1}({\mathbb{S}}^{4n-1})^\ell.$$ In particular, ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$ is $(4n-3)$-connected. Recall that $\pi_i({\mathbb{S}}^m)=0$ for all $i<m$, and $\pi_m({\mathbb{S}}^m)={\mathbb{Z}}$. We must prove the second part of the corollary. It is clear for $\ell=0$, so let us assume that $\ell\geq 1$. Since $Y={\mathbb{S}}^{4n-1}$ is simply connected, ${\mathcal{L}}_\ell Y$ is path connected. Consider the long exact sequence of homotopy groups associated to the fibration ${\mathcal{L}}_\ell Y\to Y$ with fibre $(\Omega Y)^\ell$, $$\cdots \to \pi_{k+1}(Y) \to \pi_k((\Omega Y)^\ell) \to \pi_k({\mathcal{L}}_\ell Y) \to \pi_k(Y) \to \cdots, \quad k\geq 1,$$ and recall that $\pi_i(\Omega Y)=\pi_{i+1}(Y)$ for all $i\geq 0$. We see that $\pi_1({\mathcal{L}}_\ell Y)=0$. The fibration ${\mathcal{L}}_\ell Y\to Y$ has a section defined by taking a point in $Y$ to the map that takes the whole wedge of circles to that point. Let $k\geq 2$. The induced sections of the morphisms $\pi_j({\mathcal{L}}_\ell Y) \to \pi_j(Y)$ for $j=k$ and $j=k+1$ yield a split short exact sequence of abelian groups $$0 \to \pi_k((\Omega Y)^\ell) \to \pi_k({\mathcal{L}}_\ell Y) \to \pi_k(Y) \to 0,$$ demonstrating that $\pi_k({\mathcal{L}}_\ell Y) = \pi_k(Y) \times \pi_{k+1}(Y)^\ell$. Corollary \[cor:loopspace\] shows that holomorphic Legendrian immersions of an open Riemann surface $M$ into ${\mathbb{C}}^{2n+1}$ have no homotopy invariants. Any two such immersions are homotopic through holomorphic Legendrian immersions, and every loop of Legendrian immersions in ${\mathscr{L}}(M,{\mathbb{C}}^3)$ is contractible. 
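By way of illustration (an example of our own, not needed in the sequel): take $M={\mathbb{C}}_*$, so $\ell=1$, and $n=1$, so that ${\mathbb{S}}^{4n-1}={\mathbb{S}}^3$. Using the standard values $\pi_2({\mathbb{S}}^3)=0$, $\pi_3({\mathbb{S}}^3)={\mathbb{Z}}$ and $\pi_4({\mathbb{S}}^3)={\mathbb{Z}}/2$, the formula above gives

```latex
\pi_2\bigl({\mathscr{L}}({\mathbb{C}}_*,{\mathbb{C}}^{3})\bigr)
   = \pi_2({\mathbb{S}}^{3}) \times \pi_3({\mathbb{S}}^{3}) = {\mathbb{Z}},
\qquad
\pi_3\bigl({\mathscr{L}}({\mathbb{C}}_*,{\mathbb{C}}^{3})\bigr)
   = \pi_3({\mathbb{S}}^{3}) \times \pi_4({\mathbb{S}}^{3})
   = {\mathbb{Z}} \times {\mathbb{Z}}/2.
```

The first of these values agrees with the computation in Remark \[rem:hyperbolic\](a) below.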
The first nontrivial invariant of the space ${\mathscr{L}}(M,{\mathbb{C}}^3)$ is its second homotopy group; see Remark \[rem:hyperbolic\]. This is very different from the case of smooth Legendrian knots in a contact $3$-manifold, where the basic topological invariants are the rotation number and the Thurston-Bennequin number; see e.g. [@Bennequin1983AST; @Eliashberg1993IMRN; @FuchsTabachnikov1997T]. \[rem:hyperbolic\] (a) Theorem \[th:immersions\] and Corollary \[cor:whe\] fail for certain other complex contact structures on ${\mathbb{C}}^{2n+1}$. Indeed, for any $n\geq 1$, the first author has constructed a Kobayashi hyperbolic complex contact form $\eta$ on ${\mathbb{C}}^{2n+1}$ [@Forstneric2016hyp]. In particular, every holomorphic $\eta$-Legendrian map $M\to {\mathbb{C}}^{2n+1}$ from $M={\mathbb{C}}$ or $M={\mathbb{C}}_*$ is constant. Thus, the space ${\mathscr{L}}_\eta({\mathbb{C}}_*,{\mathbb{C}}^3)={\mathbb{C}}^3$ is contractible. On the other hand, for the $\alpha$-Legendrian maps (where $\alpha=dz+xdy$), $$\pi_2({\mathscr{L}}_\alpha ({\mathbb{C}}_*,{\mathbb{C}}^3)) = \pi_2({\mathcal{L}}\, {\mathbb{S}}^3) = \pi_3({\mathbb{S}}^3)={\mathbb{Z}}$$ by Corollary \[cor:loopspace\]. As observed in [@Forstneric2016hyp], the hyperbolic contact forms $\eta$ constructed there are isotopic to $\alpha$ through a $1$-parameter family of holomorphic contact forms on ${\mathbb{C}}^{2n+1}$. \(b) It is easily seen that Corollary \[cor:whe\] fails if we include ramified Legendrian maps in the statement. On the other hand, it was shown in [@AlarconForstnericLopez2016Legendrian Lemma 4.4 and Theorem 5.1] that every holomorphic Legendrian map of an open Riemann surface to ${\mathbb{C}}^{2n+1}$ can be approximated uniformly on compacts by holomorphic Legendrian embeddings. In conclusion, we observe that holomorphic Legendrian curves in a $3$-dimensional complex contact manifold are the only smoothly immersed Legendrian surfaces. 
Simple examples show that this fails in complex contact manifolds of dimension at least $5$. \[prop:complex\] Let $(X,\xi)$ be a $3$-dimensional complex contact manifold. If $M$ is a smooth real surface and $F\colon M\to X$ is a smooth Legendrian immersion, then $F(M)$ is an immersed complex curve in $X$. Furthermore, $M$ admits the structure of a Riemann surface such that $F\colon M\to X$ is holomorphic. Fix a point $p_0\in M$. By Darboux’s theorem, there exist local holomorphic coordinates $(x,y,z)$ on a neighborhood of the point $F(p_0)\in X$ in which the contact structure $\xi$ is given by $\alpha=dz+xdy$. Choose smooth local coordinates $(u,v)$ on a neighborhood of $p_0$ in $M$ and write $F(u,v)=(x(u,v), y(u,v),z(u,v))$. Then the map $\sigma(u,v)=(x(u,v), y(u,v))$ is an immersion. Differentiation of the equation $dz+xdy=0$ gives $dx(u,v)\wedge dy(u,v)=0$ which is equivalent to $x_u y_v-x_v y_u=0$. This means that the vectors $\sigma_u = (x_u,y_u)$ and $\sigma_v=(x_v,y_v)$ in ${\mathbb{C}}^2$ are ${\mathbb{C}}$-linearly dependent, and hence they span a complex line. Clearly, this line is the image of the tangent space $T_{(u,v)} M$ by the differential of $\sigma$ at the point $(u,v)$. Finally, since the equation $dz=-xdy$ is ${\mathbb{C}}$-linear, it follows that $dF_{p}(T_{p} M)$ is a complex line in $T_{F(p)} X$ for every point $p\in M$. Let $J\colon TX\to TX$ denote the almost complex structure operator induced by the given complex structure on $X$. Since $dF_{p}(T_{p} M)$ is a $J$-complex line in $T_{F(p)} X$ for every $p\in M$, there exists a unique almost complex structure $J_0\colon TM\to TM$ such that $dF_p(J_0 \eta)=J dF_p(\eta)$ for every $p\in M$ and $\eta \in T_p M$. The surface $(M,J_0)$ is then a Riemann surface and $F\colon M\to X$ is a holomorphic Legendrian immersion. 
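The kind of example alluded to at the beginning of this discussion can be sketched as follows (this is our own illustration, with the standard contact form $\alpha=dz+x_1\,dy_1+x_2\,dy_2$ on ${\mathbb{C}}^5$):

```latex
F\colon {\mathbb{R}}^2 \to {\mathbb{C}}^5, \qquad
F(u,v) = (x_1,x_2,y_1,y_2,z) = (u,\, 0,\, 0,\, v,\, 0),
\qquad
F^*\alpha = dz + x_1\, dy_1 + x_2\, dy_2 = 0 + u\cdot 0 + 0\cdot dv = 0.
```

Thus $F$ is a smooth Legendrian immersion whose image is a totally real $2$-plane, hence not a complex curve; the extra pair of variables $(x_2,y_2)$ provides the room that is absent when $n=1$.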
Preliminaries {#sec:prelim} ============= Riemann surfaces and mapping spaces {#ssec:RS} ----------------------------------- For $n\geq 1$, we denote by $|\cdot|$ the Euclidean norm on ${\mathbb{C}}^n$. Given a topological space $K$ and a map $f\colon K\to{\mathbb{C}}^n$, we define $$\|f\|_{0,K}:=\sup\{|f(u)|\colon u\in K\}.$$ Let $M$ be an open Riemann surface. We denote by ${\mathscr{O}}(M)$ the algebra of all holomorphic functions on $M$. If $K$ is a compact subset of $M$, then ${\mathscr{O}}(K)$ is the algebra of all holomorphic functions on open neighborhoods of $K$ in $M$, where we identify any pair of functions that agree on some neighborhood of $K$. If $K$ is a smoothly bounded compact domain in $M$, then for any integer $r\geq 0$, we denote by ${\mathscr{C}}^r(K)$ the algebra of all $r$ times continuously differentiable complex valued functions on $K$, and by ${\mathscr{A}}^r(K)$ the subalgebra of ${\mathscr{C}}^r(K)$ consisting of all functions that are holomorphic in the interior $\mathring K=K\setminus bK$ of $K$. We denote by $\|f\|_{r,K}$ the ${\mathscr{C}}^r$ norm of a function $f\in{\mathscr{C}}^r(K)$, where the derivatives are measured with respect to a Riemannian metric on $M$; the choice of the metric will not be important. The corresponding notation ${\mathscr{O}}(M)^n$ and ${\mathscr{A}}^r(K)^n$ and norms $\|\cdot\|_{r,K}$ are used for maps $f=(f_1,\ldots,f_n)$ with values in ${\mathbb{C}}^n$, whose component functions $f_j$ belong to the respective function spaces. A [*compact bordered Riemann surface*]{} is a compact Riemann surface $M$ whose nonempty boundary $bM$ consists of finitely many smooth Jordan curves. The interior $\mathring M=M\setminus bM$ of a compact bordered Riemann surface is a [*bordered Riemann surface*]{}.
It is classical [@Stout1965TAMS] that every compact bordered Riemann surface $M$ is conformally equivalent to a smoothly bounded compact domain in an open Riemann surface ${\widetilde}M$, so the function spaces ${\mathscr{A}}^r(M)$ are defined as above. Note that ${\mathscr{A}}^r(M)$ is a complex Banach algebra for every $r\geq 0$. Every bordered Riemann surface $M$ admits smooth closed curves $C_1,\ldots,C_\ell\subset \mathring M$ forming a basis of the homology group $H_1(M;{\mathbb{Z}})={\mathbb{Z}}^\ell$ such that the union $C= \bigcup_{j=1}^\ell C_j$ is [*Runge*]{} in $M$, meaning that the Mergelyan approximation theorem [@Mergelyan1951DAN] holds: every continuous function on $C$ can be uniformly approximated by functions that are holomorphic on $M$. When $M$ is connected, this holds if and only if $M\setminus C$ has no relatively compact connected components. Spaces of Legendrian immersions {#subs:Legendrian} ------------------------------- Let $n\in{\mathbb{N}}=\{1,2,3,\ldots\}$. On the space ${\mathbb{C}}^{2n+1}$ we use the coordinates $(x,y,z)$ introduced in Sec. \[sec:intro\]. To simplify the notation, we often write the standard contact form on ${\mathbb{C}}^{2n+1}$ in the form $$\alpha = dz+xdy, \quad \text{where}\ \ xdy= \sum_{j=1}^n x_j \, dy_j.$$ We identify ${\mathbb{C}}^{2n}_{(x,y)}$ with the subspace $\{z=0\}\subset {\mathbb{C}}^{2n+1}$. Recall (see Sec. \[sec:intro\]) that ${\mathscr{I}}(M,{\mathbb{C}}^{n})$ denotes the space of holomorphic immersions $M\to{\mathbb{C}}^{n}$, and ${\mathscr{I}}_*(M,{\mathbb{C}}^{2n})$ is the closed subspace of ${\mathscr{I}}(M,{\mathbb{C}}^{2n})$ consisting of holomorphic immersions $(x,y)\colon M\to{\mathbb{C}}^{2n}$ for which the holomorphic $1$-form $xdy$ is exact on $M$: the [*exact holomorphic immersions*]{}. The space ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1})$ of holomorphic Legendrian immersions $M\to{\mathbb{C}}^{2n+1}$ is homeomorphic to ${\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \times{\mathbb{C}}$ provided $M$ is connected; see \[eq:homeo\].
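A concrete illustration of the dichotomy (our own example, not used later): on $M={\mathbb{C}}_*$ with $n=1$, the immersion $(1/u,u)$ fails to be exact, while $(u,u)$ is exact and therefore lifts to Legendrian immersions via the formula $z(u)=z(u_0)-\int_{u_0}^u x\,dy$ from Sec. \[sec:intro\]:

```latex
(x,y) = (1/u,\, u): \quad x\,dy = \frac{du}{u}, \qquad
   \int_{|u|=1} \frac{du}{u} = 2\pi i \ne 0,
   \quad\text{so}\quad (x,y)\notin {\mathscr{I}}_*({\mathbb{C}}_*,{\mathbb{C}}^{2});

(x,y) = (u,\, u): \quad x\,dy = u\,du = d(u^{2}/2), \quad\text{so}\quad
   F(u) = \bigl(u,\, u,\, c - u^{2}/2\bigr) \in {\mathscr{L}}({\mathbb{C}}_*,{\mathbb{C}}^{3}),
   \quad c\in{\mathbb{C}}.
```

One checks directly that $dz + x\,dy = -u\,du + u\,du = 0$ along $F$.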
On a compact bordered Riemann surface $M$ with smooth boundary we introduce the analogous mapping spaces for any integer $r\geq 1$: - ${\mathscr{I}}^r(M,{\mathbb{C}}^{n})$ is the space of holomorphic immersions $M\to{\mathbb{C}}^{n}$ of class ${\mathscr{A}}^r(M)$; - ${\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n})$ is the space of holomorphic immersions $(x,y)\colon M\to{\mathbb{C}}^{2n}$ of class ${\mathscr{A}}^r(M)$ for which the holomorphic $1$-form $xdy= \sum_{j=1}^n x_j \, dy_j$ is exact; - ${\mathscr{L}}^r(M,{\mathbb{C}}^{2n+1})$ is the space of immersions $F\colon M\to {\mathbb{C}}^{2n+1}$ of class ${\mathscr{A}}^r(M)$ such that $F^*\alpha=0$, that is, $F$ is Legendrian with respect to the contact form $\alpha$. As in Sec. \[sec:intro\], when $M$ is connected, the map \[eq:homeo\] induces a homeomorphism $${\mathscr{L}}^r(M,{\mathbb{C}}^{2n+1}) \to {\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n}) \times {\mathbb{C}}.$$ The period map, dominating sprays, and a local structure theorem {#ssec:period} ---------------------------------------------------------------- Let $M$ be an open Riemann surface of finite topological type. Let $H_1(M;{\mathbb{Z}})={\mathbb{Z}}^\ell$ with $\ell\geq 0$. Pick closed curves $C_1,\ldots,C_\ell\subset M$ forming a Runge homology basis (see Subsec. \[ssec:RS\]).
Let $${\mathcal{P}}=({\mathcal{P}}_1,\ldots,{\mathcal{P}}_\ell) : {\mathscr{O}}(M)^{2n} \to{\mathbb{C}}^\ell$$ be the [*period map*]{} whose $j$-th component is given by $$\label{eq:periodmap} {\mathcal{P}}_j(x,y)=\int_{C_j} x\, dy,\qquad x,y \in {\mathscr{O}}(M)^n.$$ Note that ${\mathcal{P}}(x,y)=0$ if and only if the $1$-form $xdy = \sum_{i=1}^n x_i \, dy_i$ is exact, and hence $${\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) =\{(x,y)\in {\mathscr{I}}(M,{\mathbb{C}}^{2n}) : {\mathcal{P}}(x,y)=0\}.$$ If $M$ is a compact smoothly bordered Riemann surface, then \[eq:periodmap\] defines a period map $$\label{eq:P} {\mathcal{P}}: {\mathscr{A}}^r(M)^{2n} \to{\mathbb{C}}^\ell,\quad r\in {\mathbb{N}},$$ and $${\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n}) =\{(x,y)\in {\mathscr{I}}^r(M,{\mathbb{C}}^{2n}) : {\mathcal{P}}(x,y)=0\}.$$ The following lemma provides an important tool used in the proof of Theorem \[th:parametric\]. Clearly, the lemma is vacuous if (and only if) $\ell=0$, that is, $M$ is the closed disk $\overline{\mathbb{D}}$. \[lem:perioddominatingsprays\] Let $M$ be a compact bordered Riemann surface, and let ${\mathcal{P}}$ be the period map associated to a Runge homology basis of $M$. Assume that $P$ is a compact Hausdorff space (a parameter space) and $r\in {\mathbb{N}}$.
Given a continuous map $(x,y)\colon P\times M \to{\mathbb{C}}^{2n}$ such that for every $p\in P$, the map $(x(p,\cdotp),y(p,\cdotp))\colon M\to{\mathbb{C}}^{2n}$ is nonconstant, of class ${\mathscr{A}}^r(M)$, and its differential is continuous as a function of $(p,u)\in P\times M$, there exist an integer $N\in{\mathbb{N}}$ and a continuous map $(\tilde x,\tilde y) \colon P\times M \times {\mathbb{C}}^N \to {\mathbb{C}}^{2n}$ such that the map $(\tilde x(p,\cdotp,\cdotp),\tilde y(p,\cdotp,\cdotp)) \colon M\times {\mathbb{C}}^N \to{\mathbb{C}}^{2n}$ is of class ${\mathscr{A}}^r(M\times {\mathbb{C}}^N)$ for every $p\in P$, its differential is continuous on $P\times M \times {\mathbb{C}}^N$, and the partial differential $$\label{eq:derivative-period} \frac{{\partial}}{{\partial}\zeta}\bigg|_{\zeta=0} {\mathcal{P}}(\tilde x(p,\cdotp,\zeta),\tilde y(p,\cdotp,\zeta)) \colon {\mathbb{C}}^N {\longrightarrow}{\mathbb{C}}^\ell$$ is surjective for every $p\in P$. (Here, $\zeta=(\zeta_1,\ldots,\zeta_N)$ are coordinates on ${\mathbb{C}}^N$.) A map $(\tilde x,\tilde y)$ with surjective differential is called a [*period dominating holomorphic spray*]{} of maps $P\times M\to{\mathbb{C}}^{2n}$ with the core $(\tilde x(\cdotp,\cdotp,0),\tilde y(\cdotp,\cdotp,0)) =(x,y)$. Note that continuity of a map $(x,y)\colon P\times M \to{\mathbb{C}}^{2n}$, which is holomorphic on the interior $\mathring M$ for each $p\in P$, implies continuity of its $M$-derivative of any order on $P\times \mathring M$. Since the homology basis for $M$ is supported in $\mathring M$, the lemma holds under this weaker assumption, which already ensures continuity of the period map \[eq:P\]. However, we shall use the lemma in the more general situation when $M$ is an admissible set (see Remark \[rem:perioddominatingsprays\]). Since such sets may include arcs, we need the stronger hypothesis that the differential is continuous in all variables. Without loss of generality, we assume that the Riemann surface $M$ is connected.
When $P=\{p\}$ is a singleton, a spray with these properties was obtained in [@AlarconForstnericLopez2016Legendrian proof of Theorem 3.3]. (We drop $P$ from the notation.) An inspection of that proof shows that there exists a spray of this type, with $N=\ell=\mathrm{rank}\, H_1(M;{\mathbb{Z}})$, such that all but one of its component functions $\tilde x_j,\tilde y_j$ are independent of $\zeta\in {\mathbb{C}}^\ell$. For example, if $y_k$ is nonconstant, there is a map $(\tilde x,\tilde y)$ satisfying \[eq:derivative-period\] such that for all $u\in M$ and $\zeta\in {\mathbb{C}}^\ell$ we have $$\begin{aligned} \tilde y(u,\zeta) &=& y(u), \nonumber \\ \tilde x_j(u,\zeta) &=& x_j(u) \quad \text{for} \quad  j\in\{1,\ldots,n\}\setminus\{k\}, \nonumber \\ \tilde x_k(u,\zeta) &=& x_k(u) + \sum_{j=1}^\ell g_j(u)\zeta_j, \label{eq:X-spray1}\end{aligned}$$ where the functions $g_1,\ldots, g_\ell\in {\mathscr{A}}^r(M)$ are chosen such that $\int_{C_i} g_j\, dy_k$ approximates the Kronecker symbol $\delta_{i,j}$ for $i,j=1,\ldots, \ell$. The approximation can be as close as desired. One first constructs smooth functions $g_{j}$ on the curves $C_i$ in the homology basis such that $\int_{C_i} g_j\, dy_k = \delta_{i,j}$ and then applies Mergelyan’s theorem to obtain functions in ${\mathscr{A}}^r(M)$. Similarly, if $x_k$ is nonconstant but $y_k$ is constant, the goal is accomplished by letting $\tilde y_k(u,\zeta)=y_k + \sum_{j=1}^\ell g_j(u)\zeta_j$ for suitably chosen functions $g_1,\ldots, g_\ell\in {\mathscr{A}}^r(M)$, while the other components of the map are independent of $\zeta\in{\mathbb{C}}^\ell$. To obtain the parametric case, we observe that the nonparametric case for a given parameter value $p_0\in P$ automatically satisfies the domination condition for all points $p$ in an open neighborhood $U\subset P$ of $p_0$.
Since $P$ is compact, finitely many such neighborhoods $U_1,\ldots, U_m$ cover $P$, and it suffices to combine the associated sprays, each with the parameter space ${\mathbb{C}}^\ell$, into a single spray with the parameter space ${\mathbb{C}}^{m\ell}$. \[rem:perioddominatingsprays\] Lemma \[lem:perioddominatingsprays\] also holds, with the same proof, if $M$ is a compact [*admissible set*]{} in an open Riemann surface ${\widetilde}M$; see [@AlarconForstnericLopez2016MZ Definition 5.1]. This means that $M=K\cup \Gamma$, where $K=\bigcup_j K_j$ is a union of finitely many pairwise disjoint, compact, smoothly bounded domains $K_j$ in ${\widetilde}M$ and $\Gamma=\bigcup_i \Gamma_i$ is a union of finitely many pairwise disjoint smooth arcs or closed curves that intersect $K$ only in their endpoints, or not at all, and such that their intersections with the boundary $bK$ are transverse. By Mergelyan’s theorem [@Mergelyan1951DAN], every function $f\in {\mathscr{A}}^r(M)$, $r\geq 0$, can be approximated in the ${\mathscr{C}}^r(M)$-topology by functions holomorphic on a neighborhood of $M$. If in addition $M$ is Runge (${\mathscr{O}}({\widetilde}M)$-convex) in ${\widetilde}M$, which holds if and only if the inclusion map $M{\hookrightarrow}{\widetilde}M$ induces an injective homomorphism $H_1(M;{\mathbb{Z}}){\hookrightarrow}H_1({\widetilde}M;{\mathbb{Z}})$, then the approximation is possible by functions holomorphic on ${\widetilde}M$. An application of Lemma \[lem:perioddominatingsprays\] and the implicit function theorem gives the following structure theorem for the spaces ${\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n})$ and ${\mathscr{L}}^r(M,{\mathbb{C}}^{2n+1})$. \[th:Banach\] Let $M$ be a compact bordered Riemann surface. For every $r\geq 1$, the spaces ${\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n})$ and ${\mathscr{L}}^r(M,{\mathbb{C}}^{2n+1})$ are complex Banach manifolds.
In view of the homeomorphism ${\mathscr{L}}^r(M,{\mathbb{C}}^{2n+1}) \to {\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n}) \times {\mathbb{C}}$ induced by the map \[eq:homeo\], it suffices to show that ${\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n})$ is a closed complex Banach submanifold of ${\mathscr{I}}^r(M,{\mathbb{C}}^{2n})$, the latter being an open subset of the complex Banach space ${\mathscr{A}}^r(M)^{2n}$. Obviously, ${\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n})=\{\sigma\in {\mathscr{I}}^r(M,{\mathbb{C}}^{2n}): {\mathcal{P}}(\sigma)=0\}$ is a closed subset of ${\mathscr{I}}^r(M,{\mathbb{C}}^{2n})$. The period map ${\mathcal{P}}\colon {\mathscr{A}}^r(M)^{2n} \to {\mathbb{C}}^{\ell}$ is holomorphic. Lemma \[lem:perioddominatingsprays\] (with $P$ a singleton) says that ${\mathcal{P}}$ has maximal rank $\ell$ at each point $\sigma \in {\mathscr{A}}^r(M)^{2n}$ that represents a nonconstant map. Hence, the conclusion follows from the implicit function theorem. It is easily seen that the tangent space to the submanifold ${\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n})$ of ${\mathscr{I}}^r(M,{\mathbb{C}}^{2n})$ at the point $\sigma_0=(x_0,y_0)\in {\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n})$ equals $$T_{\sigma_0}{\mathscr{I}}^r_*(M,{\mathbb{C}}^{2n}) = \bigl\{\sigma=(x,y)\in {\mathscr{A}}^r(M)^{2n} : \int_{C_j} xdy_0 + x_0dy=0,\ \ j=1,\ldots,\ell\bigr\},$$ where the curves $C_1,\ldots,C_\ell$ form a basis of $H_1(M;{\mathbb{Z}})$. An application of the convex integration lemma {#sec:CI-lemma} ============================================== In this section, we establish a key technical result, Lemma \[lem:CI\], which will be used in the proof of Theorem \[th:parametric\] in order to extend families of Legendrian immersions across a smooth arc attached to a compact smoothly bounded domain in a Riemann surface. Let $P$ be a compact Hausdorff space; it will serve as the parameter space.
Let ${\mathscr{C}}^{0,1}(P\times [0,1])$ denote the space of all continuous functions $f\colon P\times [0,1]\to{\mathbb{C}}$, considered as a family of paths $f_p=f(p,\cdotp)\colon [0,1]\to {\mathbb{C}}$ depending continuously on $p\in P$, whose derivative $\dot f_p(s)=df_p(s)/ds$ is also continuous in both variables $(p,s)\in P\times [0,1]$. The analogous notation $${\mathscr{C}}^{0,1}(P\times [0,1],{\mathbb{C}}^n) = {\mathscr{C}}^{0,1}(P\times [0,1])^n$$ is used for maps $f=(f_1,\ldots,f_n) \colon P\times [0,1]\to{\mathbb{C}}^n$. We shall need the following lemma. \[lem:1dim\] Let $Q\subset P$ be compact Hausdorff spaces, and let $f\in {\mathscr{C}}^{0,1}(P\times [0,1])$ and $h \in {\mathscr{C}}(P\times [0,1])$ be complex valued functions, with $h$ nowhere vanishing. Write $f_p=f(p,\cdotp)$ and similarly for $h$. Let $ b\colon P\to {\mathbb{C}}$ be a continuous function such that $$b(p) = \int_0^1 f_p(s) h_p(s)\, ds, \quad p\in Q.$$ There is a homotopy $f^t\in {\mathscr{C}}^{0,1}(P\times [0,1])$ $(t\in [0,1])$ satisfying the following conditions: - $f^t_p=f_p$  for all $(p,t) \in (P\times \{0\}) \cup (Q\times [0,1])$; - $f^t_p(s)=f_p(s)$ and $\dot f^t_p(s)=\dot f_p(s)$ for $s=0,1$ and for all $(p,t)\in P\times [0,1]$; - $\int_0^1 f^1_p(s) h_p(s)\, ds = b(p)$ for all $p\in P$. This is a parametric version of Gromov’s one-dimensional [*convex integration lemma*]{} [@Gromov1973IZV Lemma 2.1.7]. The basic version of Gromov’s lemma says that for any open connected set $\Omega$ in a Euclidean space ${\mathbb{R}}^n$ (or in a Banach space), the set of integrals $\int_0^1 f(s)ds$ over all paths $f\colon [0,1]\to \Omega$, with fixed endpoints $f(0)$ and $f(1)$ in $\Omega$, equals the convex hull of $\Omega$. It is a trivial matter to adapt it to arcs of class ${\mathscr{C}}^1$ with the matching conditions for the derivatives at the endpoints of $[0,1]$. For the parametric version we refer to [@Spring2010 Theorem 3.4]. 
The nowhere vanishing function $h$ plays the role of a weight; it would suffice to assume that $h$ is not identically zero and work on the corresponding subinterval. In preparation for the next lemma, we need some additional notation. Given $z=(z_1,\ldots,z_n),\ w=(w_1,\ldots,w_n)\in {\mathbb{C}}^n$, we write $zw= \sum_{j=1}^n z_j w_j$. We denote by $$\label{eq:In} {\mathscr{I}}(P\times [0,1],{\mathbb{C}}^n) \subset {\mathscr{C}}^{0,1}(P\times [0,1],{\mathbb{C}}^n)$$ the set of all $f \in {\mathscr{C}}^{0,1}(P\times [0,1],{\mathbb{C}}^n)$ for which the derivative $\dot f_p(s)=df_p(s)/ds \in{\mathbb{C}}^n$ is nowhere vanishing on $(p,s)\in P\times [0,1]$. We think of $f \in {\mathscr{I}}(P\times [0,1],{\mathbb{C}}^n)$ as a family of immersed arcs $f_p\colon [0,1]\to{\mathbb{C}}^n$ depending continuously on the parameter $p\in P$. The following is the main technical lemma used in the proof of Theorem \[th:parametric\]. \[lem:CI\] Let $Q\subset P$ be compact Hausdorff spaces, let $\xi=(f,g)\in {\mathscr{I}}(P\times [0,1],{\mathbb{C}}^{2n})$ with $f,g\in {\mathscr{C}}^{0,1}(P\times [0,1])^n$, and let $\beta\colon P\to {\mathbb{C}}$ be a continuous function such that $$\label{eq:alpha_p} \beta(p) = \int_0^1 f_p(s) \dot g_p(s) ds, \quad p\in Q.$$ Then there exists a homotopy $\xi^t=(f^t,g^t)\in {\mathscr{I}}(P\times [0,1],{\mathbb{C}}^{2n})$ $(t\in [0,1])$ satisfying the following conditions: - $\xi^t_p=\xi_p$  for $(p,t) \in (P\times \{0\}) \cup (Q\times [0,1])$; - $\xi^t_p(s)=\xi_p(s)$ and $\dot \xi^t_p(s)=\dot \xi_p(s)$ for $s=0,1$ and $(p,t)\in P\times [0,1]$; - $\int_0^1 f^1_p(s) \dot g^1_p(s) ds =\beta(p)$  for $p\in P$. In [@ForstnericLarusson2016 Lemma 3.1] we give more precise analogues of Lemmas \[lem:1dim\] and \[lem:CI\] by controlling the integrals in (iii) and (c) for all $t\in [0,1]$. This can be proved here as well, but is not needed for the application in the present paper.
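Purely as an illustration, the simplest correction mechanism behind Lemma \[lem:1dim\] (target ${\mathbb{C}}$, no parameters, and ignoring the immersion constraint that the full convex integration argument must preserve) can be checked numerically: add to $f$ a bump $\chi$ vanishing to first order at $s=0,1$, scaled so that the weighted integral attains the prescribed value $b$. This is our own sketch, not the construction used in the proofs.

```python
import numpy as np

# Sketch (ours, not the proof of Lemma lem:1dim): correct a path f on [0,1]
# so that the weighted integral \int_0^1 f(s) h(s) ds attains a prescribed
# value b, keeping the values of f at the endpoints s = 0, 1 unchanged.
s = np.linspace(0.0, 1.0, 2001)
ds = s[1] - s[0]

def integral(a):
    # trapezoidal approximation of \int_0^1 a(s) ds on the uniform grid
    return np.sum((a[1:] + a[:-1]) / 2) * ds

f = np.cos(2 * np.pi * s) + 1j * s      # original path in C
h = np.exp(1j * np.pi * s)              # nowhere vanishing weight
b = 0.3 - 0.1j                          # prescribed value of the integral

# bump with chi(0) = chi(1) = chi'(0) = chi'(1) = 0
chi = (s * (1.0 - s)) ** 2
c = (b - integral(f * h)) / integral(chi * h)

def f_t(t):
    """Homotopy f^t = f + t*c*chi: f^0 = f, and f^1 has integral b."""
    return f + t * c * chi

assert np.allclose(integral(f_t(1.0) * h), b)
assert f_t(1.0)[0] == f[0] and f_t(1.0)[-1] == f[-1]  # endpoints fixed
```

The linearity of the integral makes the scaling factor $c$ explicit here; in the lemma itself the deformation must in addition stay within the class ${\mathscr{C}}^{0,1}$ and respect the parameter sets $Q\subset P$.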
Since the derivative $\dot \xi_p(s)=(\dot \xi_{p,1}(s),\ldots,\dot \xi_{p,2n}(s))\in{\mathbb{C}}^{2n}$ is nowhere vanishing on $(p,s)\in P\times [0,1]$ and $P$ is compact, an elementary argument gives finitely many pairs of compact sets $U_j\subset V_j$ in $P$ $(j=1,\ldots,m)$, with $U_j\subset \mathring V_j$ and $\bigcup_{j=1}^m U_j=P$, and pairwise disjoint closed segments $I_1,\ldots, I_m$ contained in $[0,1]$ such that for every $j=1,\ldots,m$, there exists an index $k=k(j)\in \{1,2,\ldots,2n\}$ such that $$\label{eq:nonzero} \dot \xi_{p,k}(s)\ne 0\quad\text{for all}\ \ s\in I_j\ \text{and}\ p\in V_j.$$ The proof of the lemma proceeds by a finite induction on $j=1,\ldots,m$. The desired homotopy is obtained as a composition of $m$ homotopies, each supported on one of the segments $I_1,\ldots,I_m$. We explain the initial step; the subsequent steps are analogous. Thus, let $j=1$ and let $k=k(1)\in \{1,2,\ldots,2n\}$ be such that \[eq:nonzero\] holds for $j=1$. Suppose first that $k\in \{n+1,\ldots,2n\}$. Write $k=n+l$ with $l\in \{1,\ldots,n\}$. Recall that $\xi=(f,g)$ where $f,g\in {\mathscr{C}}^{0,1}(P\times [0,1])^n$. Then \[eq:nonzero\] means that the function $\dot g_{p,l}$ is nowhere vanishing on $I_1$ for all $p\in V_1$. Let us define the function $b \colon P\to {\mathbb{C}}$ by $$\label{eq:betapt} b(p)=\beta(p) - \int_{[0,1]\setminus I_1}f_{p,l}(s) \dot g_{p,l}(s) ds - \int_0^1 \sum_{\stackrel{i=1}{i\ne l}}^n f_{p,i}(s) \dot g_{p,i}(s) ds.$$ In view of \[eq:alpha_p\] we have that $$b(p)=\int_{I_1}f_{p,l}(s) \dot g_{p,l}(s) ds, \quad p\in Q.$$ We now apply Lemma \[lem:1dim\] with $Q\subset P$ replaced by the pair of parameter sets $V_1 \cap Q \subset V_1$, the interval $[0,1]$ replaced by the segment $I_1$, with the functions on $I_1$ given by $$f_p=f_{p,l},\quad h_p=\dot g_{p,l} \quad\text{for $p\in V_1$},$$ and with the function $b$ given by \[eq:betapt\]. (When applying Lemma \[lem:1dim\], we pay attention to the matching condition (ii) at the endpoints of the interval $I_1$).
This gives a homotopy $f^{t}_{p,l} \in {\mathscr{C}}^{0,1}(V_1\times [0,1])$ $(t\in [0,1])$ satisfying the following conditions: - $f^t_{p,l}=f_{p,l}$  for all $(p,t) \in (V_1\times \{0\}) \cup ((Q\cap V_1) \times [0,1])$; - $f^t_{p,l}(s)=f_{p,l}(s)$ for all $s\in [0,1]\setminus I_1$ and $(p,t)\in V_1 \times [0,1]$; - $\int_{I_1} f^1_{p,l}(s) \dot g_{p,l}(s) ds = b(p)$ for all $p\in V_1$. Condition (b’) means that the deformation is supported on the segment $I_1$. Let $\xi^t_p=(f^t_p,g_p)\colon [0,1]\to {\mathbb{C}}^{2n}$ $(t\in [0,1])$ denote the homotopy whose $l$-th component equals $ f^t_{p,l}$ and whose other components agree with the corresponding components of $\xi_p$. Note that $\xi^t_p$ agrees with $\xi_p$ on $[0,1]\setminus I_1$ for all $t\in [0,1]$ and $p\in V_1$, and hence is an immersion (since the component $\dot g_{p,l}$ of its derivative is nowhere vanishing on $I_1$ and $\xi^t_p=\xi_p$ on $[0,1]\setminus I_1$). Clearly, $\xi^t_p$ satisfies conditions (a) and (b) in Lemma \[lem:CI\] for $(p,t) \in (V_1\times \{0\}) \cup ((Q\cap V_1) \times [0,1])$, and it satisfies condition (c) for all $p\in V_1$ in view of the definition of the function $b$. Pick a continuous function $\chi\colon P\to [0,1]$ such that $\chi=1$ on $U_1$ and ${\mathrm{supp}}\,\chi \subset \mathring V_1$. Replacing $f^t_p$ by $f^{\chi(p)t}_p$ and $\xi^t_p$ by $\xi^{\chi(p)t}_p$ yields a homotopy, defined for all $p\in P$, which satisfies conditions (a) and (b), and it satisfies condition (c) for $p\in U_1$. This concludes the first step if $k(1)\in \{n+1,\ldots,2n\}$. If on the other hand $k=k(1)\in \{1,\ldots,n\}$, we apply the same argument with the roles of the components reversed, using the integration by parts formula $$\int_0^1 f_{p,k}(s) \dot g_{p,k}(s) \, ds = f_{p,k}(1)g_{p,k}(1) - f_{p,k}(0)g_{p,k}(0) - \int_0^1 g_{p,k}(s) \dot f_{p,k}(s) \, ds.$$ In this case, the assumption is that $\dot f_{p,k}(s)\ne 0$ for all $s\in I_1$ for $p\in V_1$.
The same argument as above gives a homotopy $g^t_{p,k}$, supported on $I_1$, which achieves condition (c) for all $p\in U_1$. As before, the other components of the map are kept fixed. This concludes the first step of the induction. In the second step with $j=2$, we take as our datum the map $\xi^1\in {\mathscr{I}}(P\times [0,1],{\mathbb{C}}^{2n})$ (the final map at $t=1$ in the homotopy obtained in step 1). By following the proof of step 1 with the pair of parameter sets $Q_2=Q\cup U_1 \subset P$, we find a family of immersions $$\xi^{1,t}_p=(f^{1,t}_p,g^{1,t}_p) \colon [0,1]\to {\mathbb{C}}^{2n}, \quad (p,t) \in P\times [0,1],$$ satisfying the following conditions: - (a) $\xi^{1,t}_p=\xi^{1}_p$ for $(p,t)\in (P\times \{0\}) \cup (Q_2\times [0,1])$; - (b) $\xi^{1,t}_p(s)=\xi^{1}_p(s)$ for all $s\in [0,1]\setminus I_2$ and $(p,t)\in P\times [0,1]$; - (c) $\int_0^1 f^{1,1}_p(s) \dot g^{1,1}_p(s) ds =\beta(p)$  for all $p\in U_1\cup U_2$. Since the deformation $\xi^{1,t}_p$ is supported on $I_2$ which is disjoint from $I_1$, it does not destroy the immersion property of the individual maps $[0,1]\to{\mathbb{C}}^{2n}$ in the family. Also, since the deformation is fixed for $p\in Q_2=Q\cup U_1$, it does not change the values of the integrals in (c) for $p\in Q_2$, and in addition it achieves the correct values for points $p\in U_2$. We now take $\xi^2=\xi^{1,1}\in {\mathscr{I}}(P\times [0,1],{\mathbb{C}}^{2n})$ as the datum in step 3, let $Q_3=Q_2\cup U_2$, and proceed as before. After $m$ steps of this kind, the proof is complete. A parametric Oka principle for Legendrian immersions {#sec:hprinciple} ==================================================== Let $M$ be an open Riemann surface. In this section we prove the parametric Oka principle with approximation for the inclusion ${\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \hookrightarrow {\mathscr{I}}(M,{\mathbb{C}}^{2n})$ in Theorem \[th:immersions\]. Let $P$ be a compact Hausdorff space.
We introduce the following mapping spaces: $$\begin{aligned} {\mathscr{I}}(P\times M,{\mathbb{C}}^{2n}) &=& \{\sigma\in {\mathscr{C}}(P\times M,{\mathbb{C}}^{2n}) : \sigma_p \in {\mathscr{I}}(M,{\mathbb{C}}^{2n})\ \text{for every}\ p\in P\}; \\ {\mathscr{I}}_*(P\times M,{\mathbb{C}}^{2n}) &=& \{\sigma\in {\mathscr{I}}(P\times M,{\mathbb{C}}^{2n}) : \sigma_p\in {\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \ \ \text{for every}\ p\in P\}.\end{aligned}$$ Here, $\sigma_p=\sigma(p,\cdotp)\colon M\to {\mathbb{C}}^{2n}$. Given a compact set $K\subset M$, we write $$\|\sigma\|_{1,P\times K}= \sup_{p\in P,\, x\in K} |\sigma_p(x)| + \sup_{p\in P,\, x\in K} |d\sigma_p(x)|$$ where the norm $|d\sigma_p|$ of the differential is measured with respect to a fixed Hermitian metric on $TM$ (whose precise choice will not be important) and the Euclidean norm on ${\mathbb{C}}^{2n}$. \[th:parametric\] Assume that $M$ is an open Riemann surface, $Q\subset P$ are compact Hausdorff spaces, $D\Subset M$ is a smoothly bounded domain whose closure $\bar D$ is ${\mathscr{O}}(M)$-convex, and $\sigma=(x,y)\in {\mathscr{I}}(P\times M,{\mathbb{C}}^{2n})$ $(n\ge 1)$ satisfies the following two conditions: - (a) $\sigma|_{Q\times M} \in {\mathscr{I}}_*(Q\times M,{\mathbb{C}}^{2n})$; - (b) there is an open set $U\subset M$, with $\bar D\subset U$, such that $\sigma|_{P\times U}\in {\mathscr{I}}_*(P\times U,{\mathbb{C}}^{2n})$. Given $\epsilon>0$, there is a homotopy $\sigma^t \in {\mathscr{I}}(P\times M,{\mathbb{C}}^{2n})$ $(t\in [0,1])$ satisfying the following conditions: - (1) $\sigma^t_p = \sigma_p$  for every $(p,t)\in (P\times \{0\}) \cup (Q\times [0,1])$; - (2) $\sigma^t|_{P\times D} \in {\mathscr{I}}_*(P\times D,{\mathbb{C}}^{2n})$ for every $t \in [0,1]$; - (3) $\|\sigma^t - \sigma\|_{1,P\times \bar D} <\epsilon$ for every $t\in [0,1]$; - (4) $\sigma^1\in {\mathscr{I}}_*(P\times M,{\mathbb{C}}^{2n})$.
If a continuous map $\varphi\colon X\to Y$ satisfies the parametric h-principle (without approximation), then $\varphi$ is a weak homotopy equivalence. Hence, the first part of Theorem \[th:immersions\] is an immediate corollary of Theorem \[th:parametric\]. \[rem:php-whe\] (a) The proof of Theorem \[th:parametric\] gives the analogous result for a compact bordered Riemann surface $M$; in this case, the proof is completed in finitely many steps. (b) The proof of Theorem \[th:parametric\] also gives the parametric Oka principle with approximation for Legendrian immersions. However, a minor difference in the proof is explained in the paragraph following the proof of Theorem \[th:parametric\]. It has to do with the fact that the map ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1}) \to {\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \times {\mathbb{C}}$ (see ) is a homeomorphism only when $M$ is connected. Hence, when extending an exact holomorphic immersion $\sigma=(x,y)$ (the projection of a Legendrian immersion $(x,y,z)$) across a smooth arc $E$ connecting a pair of disjoint domains in $M$, we must ensure that the integral of the $1$-form $xdy$ on $E$ equals the difference of the values of the last component $z$ at the respective endpoints of the arc; in view of , this ensures the correct extension of the $z$-component. Pick a smooth strongly subharmonic Morse exhaustion function $\rho\colon M\to {\mathbb{R}}$ and exhaust $M$ by sublevel sets $$D_j=\{u\in M\colon \rho(u)<c_j\}, \quad j\in {\mathbb{N}},$$ where $c_1<c_2<c_3<\ldots$ is an increasing sequence of regular values of $\rho$ chosen such that $\lim_{j\to\infty} c_j=\infty$. We may assume that each interval $[c_j,c_{j+1}]$ contains at most one critical value of the function $\rho$, and that $D_1$ coincides with the given domain $D$ in Theorem \[th:parametric\]. Let $U_1=U\supset \bar D_1$ be the open neighborhood of $\bar D_1$ as in the theorem.
To begin the induction, set $\epsilon_0=\epsilon$ and $$\sigma^{t,1} = \sigma|_{P\times U_1} \in {\mathscr{I}}_*(P\times U_1,{\mathbb{C}}^{2n}), \quad t\in [0,1].$$ We shall inductively find a sequence of open sets $U_j\supset \bar D_j$ in $M$, homotopies $$\sigma^{t,j} \in {\mathscr{I}}(P\times U_j,{\mathbb{C}}^{2n}), \quad t\in [0,1],\ \ j\in{\mathbb{N}}$$ and numbers $\epsilon_j>0$ satisfying the following conditions for $j=1,2,3,\ldots$: - $(a_j)$ $\sigma^{t,j}_{p} = \sigma_p|_{U_{j}}$ for every $(p,t)\in (P\times \{0\}) \cup (Q\times [0,1])$; - $(b_j)$ $\sigma^{t,j}|_{P\times D_{1}} \in {\mathscr{I}}_*(P\times D_{1},{\mathbb{C}}^{2n})$ for every $t\in [0,1]$; - $(c_j)$ $\|\sigma^{t,j} - \sigma^{t,j-1}\|_{1, P\times \bar D_{j-1}} < \epsilon_j$ for every $t\in [0,1]$; - $(d_j)$ $\sigma^{1,j}|_{P\times D_j}\in {\mathscr{I}}_*(P\times D_{j},{\mathbb{C}}^{2n})$; - $(e_j)$ $\epsilon_j<\epsilon_{j-1}/2$; - $(f_j)$ If $\tilde \sigma^t \colon P\times \bar D_{j-1} \to{\mathbb{C}}^{2n}$ satisfies $\|\tilde \sigma^{t} - \sigma^{t,j-1}\|_{1, P\times \bar D_{j-1}} < 2\epsilon_j$ for every $t\in [0,1]$, then $\tilde \sigma^{t}(p,\cdotp)\colon \bar D_{j-1} \to{\mathbb{C}}^{2n}$ is an immersion for every $p\in P$ and $t\in [0,1]$. Conditions $(a_1)$, $(b_1)$ and $(d_1)$ hold by the definition of $\sigma^{t,1}$, $(e_1)$ is fulfilled by choosing $0<\epsilon_1<\epsilon_0/2$, while $(c_1)$ and $(f_1)$ are vacuous. Assume for a moment that sequences with these properties exist. Conditions $(c_j)$, $(e_j)$ and $(f_j)$ ensure that the sequence $(\sigma^{t,j})_{j\in{\mathbb{N}}}$ converges to a limit $$\sigma^t = \lim_{j\to \infty} \sigma^{t,j} \colon P\times M{\longrightarrow}{\mathbb{C}}^{2n}, \quad t\in [0,1]$$ such that $\sigma^t_p\colon M\to{\mathbb{C}}^{2n}$ is a holomorphic immersion for every $p\in P$ and $t\in[0,1]$ and (3) holds. Condition $(a_j)$ ensures that all homotopies $\sigma^{t,j}$ are fixed on the parameter set $(P\times \{0\}) \cup (Q\times [0,1])$, which gives (1).
Condition $(b_j)$ shows that $\sigma^t_p\colon D \to{\mathbb{C}}^{2n}$ is an exact holomorphic immersion for every $p\in P$ and $t\in [0,1]$, so (2) holds. Condition $(d_j)$ shows that $\sigma^1_p\colon M\to{\mathbb{C}}^{2n}$ is an exact holomorphic immersion for every $p\in P$, which gives (4). This shows that the theorem holds if we can construct such a sequence of homotopies. We now explain the induction. Assume that the quantities satisfying the above conditions have been found up to an index $j\in {\mathbb{N}}$. Then, conditions $(e_{j+1})$ and $(f_{j+1})$ hold provided that the number $\epsilon_{j+1}>0$ is chosen small enough; fix such a number. We shall now explain how to obtain $\sigma^{t,j+1}$ and $U_{j+1}$ satisfying conditions $(a_{j+1})$–$(d_{j+1})$. We distinguish two topologically different cases: (a) the noncritical case, and (b) the critical case. [*(a) The noncritical case: $\rho$ has no critical values in $[c_j,c_{j+1}]$.*]{} In this case, $\bar D_j$ is a deformation retract of $\bar D_{j+1}$. (In the critical case considered below, we use the noncritical case also for certain noncritical pairs of sets $K\subset L$ defined by another strongly subharmonic function.) Pick a Runge homology basis ${\mathcal{B}}=\{\gamma_i\}_{i=1}^l$ for $H_1(D_j;{\mathbb{Z}})$, that is, such that the union of supports $\bigcup_{i=1}^l |\gamma_i|$ is ${\mathscr{O}}(D_j)$-convex. Let ${\mathcal{P}}$ denote the associated period map : $${\mathcal{P}}(\sigma) = \left(\int_{\gamma_i} xdy \right)_{i=1,\ldots,l} \in {\mathbb{C}}^l,\qquad \sigma=(x,y) \in {\mathscr{I}}(D_j,{\mathbb{C}}^{2n}).$$ Note that the pair $({\mathcal{B}},{\mathcal{P}})$ also applies to the domain $D_{j+1}$ since $\bar D_j$ is a deformation retract of $\bar D_{j+1}$. Let $\zeta=(\zeta_1,\ldots,\zeta_N)$ denote the coordinates on ${\mathbb{C}}^N$. 
Shrinking $U_j\supset \bar D_j$ if necessary, Lemma \[lem:perioddominatingsprays\], applied with the parameter space $P'=P\times [0,1]$, gives an integer $N\in{\mathbb{N}}$ and a spray $$\tilde \sigma^t = (\tilde x^t,\tilde y^t) \colon P\times U_j \times {\mathbb{C}}^N \to {\mathbb{C}}^{2n}, \quad t\in [0,1],$$ such that the map $\tilde \sigma^t_p = \tilde \sigma^t(p,\cdotp,\cdotp) \colon U_j \times {\mathbb{C}}^N \to{\mathbb{C}}^{2n}$ satisfies the following conditions: - (i) $\tilde \sigma^t_p$ is holomorphic on $U_j \times {\mathbb{C}}^N$ for every $(p,t)\in P\times [0,1]$; - (ii) $\tilde \sigma^t_p(\cdotp,0)=\sigma^{t,j}_p$ for every $(p,t)\in P\times [0,1]$; - (iii) the partial differential $$\label{eq:tildesigmat} \frac{{\partial}}{{\partial}\zeta}\bigg|_{\zeta=0} {\mathcal{P}}(\tilde \sigma^t_p(\cdotp, \zeta)) \colon {\mathbb{C}}^N {\longrightarrow}{\mathbb{C}}^l$$ is surjective for every $(p,t)\in P\times [0,1]$. Furthermore, in view of Mergelyan’s theorem [@Mergelyan1951DAN], the functions $g_j$ used in the construction of $\tilde \sigma^t$ (see ) can be chosen holomorphic on $M$. Since the spray $\tilde \sigma^t$ is linear in $\zeta\in{\mathbb{C}}^N$ and the core $\tilde\sigma^t_p(\cdotp,0)=\sigma^{t,j}_p=\sigma_p$ is holomorphic on $M$ for all $(p,t)\in (P\times \{0\}) \cup (Q\times [0,1])$, $\tilde \sigma^t_p$ is holomorphic on $M\times {\mathbb{C}}^N$ for all $(p,t)\in (P\times \{0\}) \cup (Q\times [0,1])$. Pick an open relatively compact neighborhood $U_{j+1}\Subset M$ of $\bar D_{j+1}$ which deformation retracts onto $\bar D_{j+1}$.
Since the map $\tilde \sigma^t_p(\cdotp,0)=\sigma^{t,j}_p$ is an immersion on the respective domain for every $(p,t)\in P\times [0,1]$, we can shrink $U_j$ slightly around $\bar D_j$ and choose a ball $B\subset {\mathbb{C}}^N$ around the origin such that - (iv) $\tilde \sigma^t_p(\cdotp,\zeta)\colon U_j\to {\mathbb{C}}^{2n}$ is an immersion for every $(p,t)\in P\times [0,1]$ and $\zeta\in \bar B$, and - (v) $\tilde \sigma^t_p(\cdotp,\zeta)\colon \overline U_{j+1} \to {\mathbb{C}}^{2n}$ is an immersion for all $(p,t) \in (P\times \{0\}) \cup (Q\times [0,1])$ and $\zeta\in \bar B$. [*Claim:*]{} $\tilde \sigma^t$ can be approximated as closely as desired in the ${\mathscr{C}}^1$ norm on $\bar D_j \times \bar B$, and uniformly in the parameters $(p,t)\in P\times [0,1]$, by a homotopy $$\tau^t \colon P\times U_{j+1} \times B \to {\mathbb{C}}^{2n},\quad t\in [0,1],$$ satisfying conditions (i)–(v) above and also the following two conditions: - (vi) $\tau^t(p,\cdotp,\zeta) \colon U_{j+1} \to {\mathbb{C}}^{2n}$ is a holomorphic immersion for every $(p,t)\in P\times [0,1]$ and $\zeta\in B$, and - (vii) $\tau^t(p,\cdotp,\cdotp) = \tilde \sigma^t(p,\cdotp,\cdotp)$ for all $(p,t)\in (P\times \{0\}) \cup (Q\times [0,1])$. Such $\tau^t$ can be found by following the noncritical case in [@ForstnericLarusson2016 proof of Theorem 5.3] when the cone $A$ equals ${\mathbb{C}}^{2n}$. The only difference is that, in the present situation, the maps $\tilde \sigma^t_p$ depend holomorphically on the additional complex parameter $\zeta\in B\subset{\mathbb{C}}^N$. We outline the main steps and refer to the cited source for the details. Fix a nowhere vanishing holomorphic $1$-form $\theta$ on $M$. Let $d$ denote the exterior differential on $M$. Consider the family of holomorphic maps $$\label{eq:phitp} \tilde \phi^t_p(\cdotp,\zeta) = d\tilde \sigma^t_p(\cdotp,\zeta)/\theta \colon U_j\to {\mathbb{C}}^{2n}_*$$ for $(p,t)\in P\times [0,1]$ and $\zeta\in \bar B$.
Their ranges avoid the origin since the maps $\tilde \sigma^t_p(\cdotp,\zeta)$ are immersions by condition (iv). Furthermore, for each $(p,t) \in (P\times \{0\}) \cup (Q\times [0,1])$ and $\zeta\in \bar B$, the map $\tilde \phi^t_p(\cdotp,\zeta)\colon \overline U_{j+1}\to {\mathbb{C}}^{2n}_*$ is holomorphic on $\overline U_{j+1}$ in view of condition (v). Let ${\mathcal{Q}}$ denote the period map defined for any map $\phi\colon D_j\to {\mathbb{C}}^{2n}$ by $${\mathcal{Q}}(\phi)=\left(\int_{C_i} \phi\, \theta\right)_{i=1,\ldots,l} \in ({\mathbb{C}}^{2n})^l.$$ Here, $\{C_i\}_{i=1}^l$ is a Runge homology basis of $H_1(D_j;{\mathbb{Z}})$. We embed the family of maps \[eq:phitp\] as the core of a spray $\phi^t_p(\cdotp,\zeta,w)$ (that is, $\phi^t_p(\cdotp,\zeta,0)=\tilde \phi^t_p(\cdotp,\zeta)$), depending holomorphically on another set of parameters $w\in {\mathbb{C}}^{N'}$ for some integer $N'\in{\mathbb{N}}$, such that the partial differential $$\frac{{\partial}}{{\partial}w}\bigg|_{w=0} {\mathcal{Q}}(\phi^t_p(\cdotp,\zeta,w)) : {\mathbb{C}}^{N'}\to ({\mathbb{C}}^{2n})^l$$ is surjective for every $(p,t)\in P\times [0,1]$ and $\zeta\in B$. Such ${\mathcal{Q}}$-period dominating sprays were constructed in [@AlarconForstneric2014IM Lemma 5.1]; see also [@AlarconForstnericCrelle Lemma 3.6] for the parametric case. Fix a ball $B'\subset {\mathbb{C}}^{N'}$ centered at the origin.
Since ${\mathbb{C}}^{2n}_*$ is an Oka manifold, the parametric Oka principle with approximation [@Forstneric2011 Theorem 5.4.4] shows that we can approximate the family of holomorphic maps $\phi^t_p\colon U_j\times \bar B\times \bar B' \to {\mathbb{C}}^{2n}_*$ in the ${\mathscr{C}}^r$ topology on $\bar D_j \times B\times B'$ by a continuous family of holomorphic maps $$\psi^t_p \colon U_{j+1}\times B\times B' \to {\mathbb{C}}^{2n}_*,\quad (p,t)\in P\times [0,1],$$ such that $\psi^t_p(\cdotp,\zeta,w)=\phi^t_p(\cdotp,\zeta,w)$ for all $(p,t)\in (P\times \{0\}) \cup (Q\times [0,1])$ and $(\zeta,w) \in B\times B'$. Assuming that the approximation is close enough, the implicit function theorem gives a continuous function $w=w(p,t,\zeta)$ on $P\times [0,1]\times \bar B$ with values in ${\mathbb{C}}^{N'}$ and close to $0$, such that $w$ is holomorphic in $\zeta\in B$, vanishes for $(p,t)\in (P\times \{0\}) \cup (Q\times [0,1])$ and $\zeta\in B$, and we have the period vanishing conditions $$\label{eq:Qdom} {\mathcal{Q}}\bigl(\psi^t_p(\cdotp,\zeta,w(p,t,\zeta))\bigr) =0 \quad \text{for all}\ (p,t,\zeta)\in P\times[0,1]\times B.$$ Pick an initial point $u_0\in D_j$. It is straightforward to verify that the family of maps $$\tau^t(p,u,\zeta) = \tilde\sigma^t(p,u_0,\zeta) + \int_{u_0}^u \psi^t_p(\cdotp,\zeta,w(p,t,\zeta))\, \theta, \quad u\in U_{j+1},$$ then satisfies the claim. (Since $\bar D_j$ is a deformation retract of $U_{j+1}$, the integral is independent of the choice of the path in $U_{j+1}$ due to the period vanishing condition \[eq:Qdom\].) If $D_j$ is disconnected, the same argument applies on each connected component. We continue with the proof of the theorem.
Assuming as we may that the approximation of $\tilde \sigma^t$ by $\tau^t$ is close enough, the period domination property of the spray $\tilde \sigma^t$ and the implicit function theorem give a continuous map $$\zeta \colon P\times [0,1] \to B \subset{\mathbb{C}}^N,$$ with values close to $0$ (depending on how close $\tau^t$ is to $\tilde\sigma^t$), such that $$\label{eq:zetavanishes} \text{$\zeta$ vanishes on the set $(p,t)\in (P\times \{0\})\cup (Q\times [0,1])$,}$$ and the family of holomorphic immersions $$\sigma^{t,j+1}_p = \tau^t(p,\cdotp,\zeta(p,t)) \colon U_{j+1} \to {\mathbb{C}}^{2n}$$ satisfies the period conditions $$\label{eq:Pjplus1} {\mathcal{P}}(\sigma^{t,j+1}_p)={\mathcal{P}}(\sigma^{t,j}_p), \quad (p,t)\in P \times [0,1].$$ In view of \[eq:zetavanishes\], $\sigma^{t,j+1}$ satisfies condition $(a_{j+1})$. Writing $\sigma^{t,j+1}_p=(x^{t,j+1}_p,y^{t,j+1}_p)$, it follows from \[eq:Pjplus1\] that for every loop $C\subset D_{1}$ and for all $(p,t)\in P \times [0,1]$, we have $$\int_C x^{t,j+1}_p dy^{t,j+1}_p = \int_C x^{t,j}_p dy^{t,j}_p = 0.$$ This shows that $\sigma^{t,j+1}$ satisfies condition $(b_{j+1})$. The same argument for loops $C\subset D_{j+1}$ and $t=1$ shows that $(d_{j+1})$ holds. (Note that it suffices to verify the period vanishing condition for loops in $\bar D_{j}$, which is a deformation retract of $\bar D_{j+1}$.) Finally, condition $(c_{j+1})$ holds if the approximations are close enough. This completes the inductive step in the noncritical case. [*(b) The critical case: $\rho$ has a (unique, Morse) critical point in $D_{j+1}\setminus \bar D_j$.*]{} In this case, $\bar D_{j+1}$ deformation retracts onto a compact set of the form $S=\bar D_j \cup E$, where $E$ is a smooth embedded arc contained in $D_{j+1}\setminus \bar D_j$, except for its endpoints which lie in $bD_j$. We may assume that $E$ intersects $bD_j$ transversely at both endpoints.
Hence, $S$ is an [*admissible Runge set*]{} in $D_{j+1}$ (see Remark \[rem:perioddominatingsprays\] and [@AlarconForstnericLopez2016MZ Definition 5.1]). There are two topologically different cases to consider. [*Case 1:*]{} the arc $E$ closes inside the domain $D_j$ to a Jordan curve $C$ such that $E=C\setminus D_j$. This happens when the endpoints of $E$ belong to the same connected component of $\bar D_j$. In this case, $H_1(D_{j+1};{\mathbb{Z}})=H_1(D_{j};{\mathbb{Z}})\oplus {\mathbb{Z}}$ where $C$ represents the additional generator. [*Case 2:*]{} the endpoints of the arc $E$ belong to different connected components of $\bar D_j$. In this case, no new element of the homology basis appears. We begin with case 1. Let $C$ be a smooth Jordan curve in $M$ such that $E=C\setminus D_j$. Recall that $\sigma=(x,y) \in {\mathscr{I}}(P\times M,{\mathbb{C}}^{2n})$ is the given map in the theorem, and $\sigma^{t,j}=(x^{t,j},y^{t,j}) \in {\mathscr{I}}(P\times U_j,{\mathbb{C}}^{2n})$ is a homotopy from the $j$-th step. 
After shrinking the neighborhood $U_j$ around $\bar D_j$ if necessary, we can extend $\sigma^{t,j}$ from $P\times U_j$ to a homotopy $$\sigma^{t,j} = (x^{t,j},y^{t,j}) \colon P\times (U_j\cup E) \to {\mathbb{C}}^{2n}, \quad t\in [0,1]$$ such that $\sigma^{t,j}_p|_E \colon E \to {\mathbb{C}}^{2n}$ is a ${\mathscr{C}}^1$ immersion for every $(p,t)\in P\times [0,1]$ and $$\sigma^{t,j}_p|_E = \sigma_p|_E\quad \text{for all}\ \ (p,t)\in (P\times \{0\})\cup (Q\times [0,1]).$$ In particular, condition (a) on $\sigma$ (in the theorem) implies $$\label{eq:OKonQ} \int_C x^{t,j}_p dy^{t,j}_p =0 \quad \text{for all}\ \ (p,t)\in Q\times [0,1].$$ Our goal is to deform the homotopy $\sigma^{t,j}$ (only) on the relative interior of $E$, keeping it fixed for the parameter values $(p,t)\in (P\times \{0\})\cup (Q\times [0,1])$, to a new homotopy (still denoted $\sigma^{t,j}=(x^{t,j},y^{t,j})$) such that at $t=1$ we have $$\label{eq:intCvanishes} \int_C x^{1,j}_p dy^{1,j}_p =0 \quad \text{for all}\ \ p\in P.$$ This can be done by using Lemma \[lem:CI\] as follows. Choose a smooth regular parametrization $\lambda\colon [0,1]\to E$ with $\lambda(0), \lambda(1) \in bD_j$. Consider the family of immersed arcs $\xi^t_p=(f^t_p, g^{t}_p) \colon [0,1]\to{\mathbb{C}}^{2n}$ for $(p,t)\in P\times [0,1]$ defined by $$\label{eq:xieta} \xi^t_p(s)=\sigma^{t,j}_p(\lambda(s)) = \bigl(f^{t}_p(s), g^{t}_p(s)\bigr), \quad s\in [0,1].$$ It follows that $$\int_E x^{t,j}_p dy^{t,j}_p = \int_0^1 f^{t}_p(s) \dot g^{t}_p(s)ds.$$ Define the function $\beta\colon P\to{\mathbb{C}}$ by $$\label{eq:beta} \beta(p) = - \int_{C\setminus E} x^{1,j}_p dy^{1,j}_p, \quad p\in P.$$ We now apply Lemma \[lem:CI\] to the family $(\xi^t_p)_{p,t}$, the pair of parameter spaces $$(p,t)\in P'=P\times [0,1],\quad Q'=(P\times \{0\})\cup (Q\times [0,1]),$$ the function $\beta$ given by \[eq:beta\], taking into account condition \[eq:OKonQ\].
This provides a deformation of $(\xi^t_p)_{(p,t)\in P'}$ through a family of immersions $[0,1]\to {\mathbb{C}}^{2n}$ of class ${\mathscr{C}}^1$ (we suppress the parameter $\tau\in[0,1]$ of this homotopy from the notation) such that the homotopy is fixed for $(p,t)\in Q'$, it is fixed near the endpoints of $[0,1]$ for all $(p,t)\in P'$, and the new family obtained at $\tau=1$ satisfies the condition $$\int_0^1 f^{1}_p(s) \dot g^{1}_p(s)ds =\beta(p), \quad p\in P.$$ By using the parametrization $\lambda\colon [0,1]\to E$ as in \[eq:xieta\], this provides a homotopy of the family of immersions $\sigma^{t,j}_p=(x^{t,j}_p,y^{t,j}_p) \colon U_j\cup E \to {\mathbb{C}}^{2n}$ which is fixed on $U_j$ such that the new family satisfies the condition $$\label{eq:integralFG} \int_E x^{1,j}_p dy^{1,j}_p = \int_0^1 f^{1}_p(s) \dot g^{1}_p(s)ds =\beta(p), \quad p\in P.$$ Now, \[eq:intCvanishes\] follows immediately from \[eq:integralFG\] and \[eq:beta\]. Denote by ${\mathcal{P}}'$ the period map with respect to the homology basis ${\mathcal{B}}$ of $D_j$ and the additional loop $C$. It follows from the above that ${\mathcal{P}}' (\sigma^{1,j}_p) = 0$ for all $p\in P$. The inductive step can now be completed as in the noncritical case; here is an outline. By Lemma \[lem:perioddominatingsprays\] we can embed the family of immersions $\sigma^{t,j}_p\colon U_j\cup E\to {\mathbb{C}}^{2n}$ $((p,t)\in P\times [0,1])$ as the core of a period dominating spray depending on an additional set of variables $\zeta\in{\mathbb{C}}^N$. (The set $U_j$ may shrink around $\bar D_j$.) Since $\bar D_j\cup E$ is an admissible set in $D_{j+1}$ and a deformation retract of $\bar D_{j+1}$, we can apply the Mergelyan theorem for holomorphic immersions to ${\mathbb{C}}^{2n}$ to approximate this spray, as closely as desired in the ${\mathscr{C}}^1$-topology on $\bar D_j\cup E$, by a spray consisting of holomorphic immersions from a neighborhood $U_{j+1}\subset M$ of $\bar D_{j+1}$ into ${\mathbb{C}}^{2n}$.
As in the proof of the noncritical case, replacing the parameter $\zeta$ by a suitably chosen function $\zeta(p,t)$ with values in ${\mathbb{C}}^N$ and close to $0$ gives a homotopy $\sigma^{t,j+1} \in {\mathscr{I}}(P\times U_{j+1},{\mathbb{C}}^{2n})$ satisfying conditions $(a_{j+1})$–$(d_{j+1})$. This completes the induction step in case 1 of the critical case (b). In case 2, the arc $E$ connects two distinct connected components of $\bar D_j$. We follow the construction in case 1 to obtain an extension of the family $\sigma^{t,j}_p\colon U_j\to{\mathbb{C}}^{2n}$ across $E$ to a family of immersions $U_j \cup E\to{\mathbb{C}}^{2n}$; however, there is no need to adjust the value of the integral \[eq:intCvanishes\]. On the other hand, when approximating this family of maps on $\bar D_j \cup E$ by maps on $U_{j+1} \supset \bar D_{j+1}$, we still need to use a dominating spray as in case 1 in order to keep the period vanishing condition on curves in the homology basis ${\mathcal{B}}$ for $D_j$. Returning to Remark \[rem:php-whe\], we note that a nontrivial difference appears in the final paragraph of the above proof when proving the parametric Oka property for the space of Legendrian immersions. Recall that the map ${\mathscr{L}}(M,{\mathbb{C}}^{2n+1}) \to {\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \times {\mathbb{C}}$, given by , is a homeomorphism only if $M$ is connected. When the arc $E$ connects two distinct connected components of the set $\bar D_j$, we must ensure the correct value of the integral in order to match the $z$-component of the Legendrian map (which is already defined on a neighborhood of $\bar D_j$) near the endpoints of $E$. This can be achieved just as in case 1.
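For orientation, the net effect of the period adjustment in case 1 can be recorded in a single computation, valid for all $p\in P$: $$\int_C x^{1,j}_p dy^{1,j}_p = \int_{C\setminus E} x^{1,j}_p dy^{1,j}_p + \int_E x^{1,j}_p dy^{1,j}_p = -\beta(p) + \beta(p) = 0,$$ where the middle equality uses \[eq:beta\] for the first term and \[eq:integralFG\] for the second; this is precisely \[eq:intCvanishes\].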
Strong homotopy equivalence for surfaces of finite topological type {#sec:strong} =================================================================== In this section, we complete the proof of Theorem \[th:immersions\] by showing that if $M$ is a connected open Riemann surface of finite topological type, then the inclusion ${\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \hookrightarrow {\mathscr{I}}(M,{\mathbb{C}}^{2n})$, already known to be a weak homotopy equivalence, is in fact a homotopy equivalence. It is even the inclusion of a strong deformation retract. We closely follow the proof of a similar result in [@ForstnericLarusson2016 Section 6], which in turn is based on [@Larusson2015PAMS]. Our approach to showing that the weak homotopy equivalence ${\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \hookrightarrow {\mathscr{I}}(M,{\mathbb{C}}^{2n})$ is the inclusion of a strong deformation retract is to prove that the metrizable spaces ${\mathscr{I}}_*(M,{\mathbb{C}}^{2n})$ and ${\mathscr{I}}(M,{\mathbb{C}}^{2n})$ are absolute neighborhood retracts (ANR). Namely, an ANR has the homotopy type of a CW complex, and a weak homotopy equivalence between CW complexes is a homotopy equivalence. Hence, if $j:A\hookrightarrow B$ is the inclusion of a closed subspace in a metrizable space $B$, both spaces are ANRs, and $j$ is a weak homotopy equivalence, then $j$ is a homotopy equivalence. Moreover, $j$ is a cofibration (in the sense of Hurewicz), so $j$ is the inclusion of a strong deformation retract. For more information on what is involved, we refer to [@ForstnericLarusson2016 Section 6]. The space ${\mathscr{I}}(M,{\mathbb{C}}^{2n})$ is an open subset of the Fréchet space of all holomorphic maps $M\to{\mathbb{C}}^{2n}$, so it is an ANR. To show that the space ${\mathscr{I}}_*(M,{\mathbb{C}}^{2n})$ is an ANR, we verify that it satisfies the so-called Dugundji-Lefschetz property. 
Once we have prepared two ingredients for the proof, it proceeds exactly as the proof of [@ForstnericLarusson2016 Theorem 6.1]. First, we note the homeomorphism $${\mathscr{I}}(M,{\mathbb{C}}^{2n}) \to {\mathscr{O}}_0(M,{\mathbb{C}}_*^{2n})\times {\mathbb{C}}^{2n}, \qquad \sigma\mapsto(d\sigma/\theta, \sigma(p)),$$ where ${\mathscr{O}}_0(M,{\mathbb{C}}_*^{2n})$ is the space of holomorphic maps $M\to{\mathbb{C}}_*^{2n}$ with vanishing periods, $\theta$ is a nowhere vanishing holomorphic $1$-form on $M$, and $p\in M$ is a chosen base point. We put together the parametric Oka principles with approximation for the inclusion ${\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \hookrightarrow {\mathscr{I}}(M,{\mathbb{C}}^{2n})$ (Theorem \[th:parametric\]), for the inclusion ${\mathscr{O}}_0(M,{\mathbb{C}}_*^{2n})\hookrightarrow {\mathscr{O}}(M,{\mathbb{C}}_*^{2n})$ [@ForstnericLarusson2016 Theorem 5.3], and for the inclusion ${\mathscr{O}}(M,{\mathbb{C}}_*^{2n})\hookrightarrow {\mathscr{C}}(M,{\mathbb{C}}_*^{2n})$, which comes from ${\mathbb{C}}_*^{2n}$ being an Oka manifold. This yields the first ingredient: the parametric Oka principle with approximation for the inclusion ${\mathscr{I}}_*(M,{\mathbb{C}}^{2n}) \hookrightarrow{\mathscr{C}}(M,{\mathbb{C}}_*^{2n})\times{\mathbb{C}}^{2n}$. The second ingredient is the following lemma, which is analogous to [@ForstnericLarusson2016 Lemma 6.4]. The proof that ${\mathscr{I}}_*(M,{\mathbb{C}}^{2n})$ is an ANR is then so similar to the proof of [@ForstnericLarusson2016 Theorem 6.1] that we omit further details. \[lem:contractible\] Let $M$ be an open Riemann surface, let $r\geq 1$ be an integer, and let $\rho:M\to[0,\infty)$ be a smooth exhaustion function. Let $L_0\supset L_1\supset \cdots \supset K$ be compact smoothly bounded domains in $M$ of the form $\rho^{-1}([0,c])$, such that $K$ contains all the critical points of $\rho$.
Let $\sigma_0 \in{\mathscr{I}}_{*}(M,{\mathbb{C}}^{2n})$ and let $W$ be a neighborhood of $\sigma_0|_K$ in ${\mathscr{I}}_{*}^r(K,{\mathbb{C}}^{2n})$. Then there are contractible neighborhoods $C_m$ of $\sigma_0|_{L_m}$ in ${\mathscr{I}}_{*}^r(L_m,{\mathbb{C}}^{2n})$ such that $C_m|_{L_{m+1}}\subset C_{m+1}$ and $C_m|_K\subset W$ for all $m\geq 0$. Since $K$ contains all the critical points of $\rho$, there is a homology basis ${\mathcal{B}}=\{\gamma_i\}_{i=1,\ldots,l}$ of $H_1(M;{\mathbb{Z}})$ whose support $|{\mathcal{B}}| = \bigcup_{i=1}^l |\gamma_i|$ is contained in $K$ and is Runge in $M$. Let ${\mathcal{P}}\colon {\mathscr{O}}(M,{\mathbb{C}}^{2n}) \to {\mathbb{C}}^l$ denote the associated period map: $${\mathcal{P}}(\sigma)= \left(\int_{\gamma_i} x\, dy\right)_{i=1,\ldots,l}, \qquad \sigma=(x,y) \in {\mathscr{O}}(M,{\mathbb{C}}^{2n}).$$ Fix a map $\sigma_0 \in {\mathscr{I}}_{*}(M,{\mathbb{C}}^{2n})$. Let $M_0$ be a compact smoothly bounded domain in $M$ (say a sublevel set of $\rho$) with the same topology as $M$ and containing $L_0$. Note that ${\mathscr{I}}^r(M_0,{\mathbb{C}}^{2n})$ is an open subset of the complex Banach space ${\mathscr{A}}^r(M_0,{\mathbb{C}}^{2n})$. Pick $\epsilon_0>0$ such that the $\epsilon_0$-ball around $\sigma_0$ in ${\mathscr{A}}^r(M_0,{\mathbb{C}}^{2n})$ is contained in ${\mathscr{I}}^r(M_0,{\mathbb{C}}^{2n})$. By Lemma \[lem:perioddominatingsprays\], the differential of the period map ${\mathcal{P}}:{\mathscr{A}}^r(M_0,{\mathbb{C}}^{2n}) \to {\mathbb{C}}^l$ at $\sigma_0$ is surjective.
Let us denote it by $$D= d_{\sigma_0}{\mathcal{P}}: {\mathscr{A}}^r(M_0,{\mathbb{C}}^{2n}){\longrightarrow}{\mathbb{C}}^l.$$ Its kernel $$\label{eq:Lambda0} \Lambda_0= \ker D = \{\sigma \in {\mathscr{A}}^r(M_0,{\mathbb{C}}^{2n}) : D(\sigma)=0 \}$$ is a closed complex subspace of codimension $l$ in ${\mathscr{A}}^r(M_0,{\mathbb{C}}^{2n})$; it is precisely the tangent space to the submanifold ${\mathscr{I}}_{*}^r(M_0,{\mathbb{C}}^{2n})$ at the point $\sigma_0$. Pick $h_1,\ldots,h_{l} \in {\mathscr{A}}^r(M_0,{\mathbb{C}}^{2n})$ such that the vectors $D(h_1),\ldots,D(h_l)\in {\mathbb{C}}^l$ span ${\mathbb{C}}^l$; then $${\mathscr{A}}^r(M_0,{\mathbb{C}}^{2n})=\Lambda_0\oplus {\mathrm{span}}_{\mathbb{C}}\{h_1,\ldots,h_{l}\}.$$ Note that the period map ${\mathcal{P}}(\sigma)$ is defined whenever the domain $L$ of $\sigma$ contains the support $|{\mathcal{B}}|$ of the homology basis. Hence, the map $D=d_{\sigma_0}{\mathcal{P}}$ is well defined on ${\mathscr{C}}^r(L,{\mathbb{C}}^{2n})$ whenever $|{\mathcal{B}}| \subset L \subset M_0$.
Taking $L=|{\mathcal{B}}|$, it follows that the complex Banach space ${\mathscr{C}}^r(|{\mathcal{B}}|,{\mathbb{C}}^{2n})$ decomposes as a direct sum of closed complex Banach subspaces $$\label{eq:split} {\mathscr{C}}^r(|{\mathcal{B}}|,{\mathbb{C}}^{2n}) = \ker D|_{{\mathscr{C}}^r(|{\mathcal{B}}|,{\mathbb{C}}^{2n})} \oplus {\mathrm{span}}_{\mathbb{C}}\{h_1|_{|{\mathcal{B}}|},\ldots,h_{l}|_{|{\mathcal{B}}|}\} = \Lambda \oplus H.$$ By the implicit function theorem for Banach spaces, there are a number $\epsilon_1\in (0,\epsilon_0)$ and smooth bounded complex functions $c_1,\ldots,c_l$ on the set $\Lambda_{\epsilon_1}=\{\sigma\in \Lambda : \|\sigma\|_{r,|{\mathcal{B}}|} < \epsilon_1\}$, vanishing at the origin $0\in \Lambda$, such that for every $\sigma \in \Lambda_{\epsilon_1}$ the map $$\label{eq:tildeg} \tilde \sigma = \sigma_0|_{|{\mathcal{B}}|} + \sigma + \sum_{j=1}^{l} c_j(\sigma) h_j|_{|{\mathcal{B}}|} \in {\mathscr{C}}^r(|{\mathcal{B}}|, {\mathbb{C}}^{2n})$$ satisfies the period vanishing equation ${\mathcal{P}}(\tilde \sigma)=0$. Moreover, \[eq:tildeg\] gives a local representation of the set $\{\tilde \sigma \in {\mathscr{C}}^r(|{\mathcal{B}}|, {\mathbb{C}}^{2n}): {\mathcal{P}}(\tilde \sigma)=0\}$ in a neighborhood of $\sigma_0|_{|{\mathcal{B}}|}$ as a graph over the affine linear subspace $\sigma_0|_{|{\mathcal{B}}|}+\Lambda \subset {\mathscr{C}}^r(|{\mathcal{B}}|, {\mathbb{C}}^{2n})$. If $L$ is any smoothly bounded compact set with $|{\mathcal{B}}|\subset L \subset M_0$ and $\sigma \in {\mathscr{A}}^r(L,{\mathbb{C}}^{2n})$ satisfies $D\sigma=0$ and $\|\sigma\|_{r,L}<\epsilon_1$, then \[eq:tildeg\] yields a map $$\psi_L(\sigma) = \sigma_0|_L + \sigma + \sum_{j=1}^{l} c_j(\sigma|_{|{\mathcal{B}}|}) h_j|_{L} \in {\mathscr{A}}^r(L,{\mathbb{C}}^{2n})$$ such that ${\mathcal{P}}(\psi_L(\sigma))=0$. Note that $\psi_L(0) = \sigma_0|_L$.
Hence, $\psi_L(\sigma) \in {\mathscr{I}}^r_*(L,{\mathbb{C}}^{2n})$ provided that $\|\psi_L(\sigma)-\sigma_0\|_{r,L}< \epsilon_0$; the latter condition is satisfied if $\epsilon_1>0$ is small enough. As before, this gives a local representation of the set $\{\tilde \sigma \in {\mathscr{A}}^r(L, {\mathbb{C}}^{2n}) : {\mathcal{P}}(\tilde \sigma)=0\}$ in a neighborhood of $\sigma_0|_L$ as a graph over the affine linear subspace $\sigma_0|_{L}+\Lambda_0|_L \subset {\mathscr{A}}^r(L, {\mathbb{C}}^{2n})$. Here, $\Lambda_0=\ker D \subset {\mathscr{A}}^r(M_0,{\mathbb{C}}^{2n})$ (see ). Note that for any compacts $L$ and $L'$ with $|{\mathcal{B}}| \subset L\subset L'\subset M_0$, we have $$\label{eq:restriction} \psi_{L}(\sigma|_L) = \psi_{L'}(\sigma)\big|_L$$ for every $\sigma\in {\mathscr{A}}^r(L',{\mathbb{C}}^{2n})$ such that $D(\sigma)=0$ and $\|\sigma\|_{r,|{\mathcal{B}}|} < \epsilon_1$. Since the functions $c_j$ are bounded on a neighborhood of the origin in $\Lambda$ (see ), there is a number $\epsilon\in (0,\epsilon_1)$ such that the set $$C_0 = \bigl\{\psi_{M_0}(\sigma) : \sigma \in \Lambda_0,\ \|\sigma\|_{r, M_0}< \epsilon\bigr\} \subset {\mathscr{I}}^r_*(M_0,{\mathbb{C}}^{2n})$$ is a neighborhood of $\sigma_0|_{M_0}$ in ${\mathscr{I}}^r_*(M_0,{\mathbb{C}}^{2n})$. Furthermore, being a smooth graph over the ball $\{\sigma \in \Lambda_0 : \|\sigma\|_{r, M_0}<\epsilon\}$ in the Banach space $\Lambda_0$, $C_0$ is contractible. Similarly, for every $m\in {\mathbb{N}}$, the set $$C_m = \bigl\{\psi_{L_m}(\sigma) : \sigma \in {\mathscr{A}}^r(L_m,{\mathbb{C}}^{2n}),\ D(\sigma)=0,\ \|\sigma\|_{r, L_m}< \epsilon \bigr\} \subset {\mathscr{I}}^r_*(L_m,{\mathbb{C}}^{2n})$$ is a contractible neighborhood of $\sigma_0|_{L_m}$ in ${\mathscr{I}}^r(L_m,{\mathbb{C}}^{2n})$. 
Taking into account that for any $\sigma\in {\mathscr{A}}^r(L_m,{\mathbb{C}}^{2n})$, we have $\|\sigma\|_{r,L_{m+1}} \le \|\sigma\|_{r,L_{m}}$ by the maximum principle, the formula shows that the restriction map associated to the inclusion $L_m\supset L_{m+1}$ maps $C_m$ into $C_{m+1}$ for every $m\geq 0$. By choosing $\epsilon>0$ small enough, we can also ensure that the restriction map associated to $L_m\supset K$ maps $C_m$ into a given neighborhood $W$ of $\sigma_0|_{K}$ in ${\mathscr{I}}_{*}^r(K,{\mathbb{C}}^{2n})$. Acknowledgements {#acknowledgements .unnumbered} ---------------- F. Forstnerič is supported in part by research program P1-0291 and Grant J1-7256 from ARRS, Republic of Slovenia. F. Lárusson is supported in part by Australian Research Council Grant DP150103442. The work on this paper was done at the Centre for Advanced Study at the Norwegian Academy of Science and Letters in Oslo in the autumn of 2016. The authors would like to warmly thank the Centre for hospitality and financial support. We thank Antonio Alárcon and Francisco J. López for many helpful discussions on this topic, and Jaka Smrekar for his advice on topological issues concerning loop spaces. Franc Forstnerič Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI–1000 Ljubljana, Slovenia Institute of Mathematics, Physics and Mechanics, Jadranska 19, SI–1000 Ljubljana, Slovenia e-mail: [franc.forstneric@fmf.uni-lj.si]{} 0.5cm Finnur Lárusson School of Mathematical Sciences, University of Adelaide, Adelaide SA 5005, Australia e-mail: [finnur.larusson@adelaide.edu.au]{}
--- author: - | [Luiz H. Gomes$^{\dag}$[^1], Fernando D. O. Castro$^{\dag}$, Rodrigo B. Almeida$^{\dag}$, ]{}\ [Luis M. A. Bettencourt$^{\ddag}$, Virgílio A. F. Almeida$^{\dag}$, Jussara M. Almeida$^{\dag}$]{}\ title: Improving Spam Detection Based on Structural Similarity --- Abstract {#abstract .unnumbered} -------- We propose a new detection algorithm that uses structural relationships between senders and recipients of email as the basis for the identification of spam messages. Users and receivers are represented as vectors in their reciprocal spaces. A measure of similarity between vectors is constructed and used to group users into clusters. Knowledge of their classification as past senders/receivers of spam or legitimate mail, coming from an auxiliary detection algorithm, is then used to label these clusters probabilistically. The similarity between the sender and receiver sets of a new message and the center vectors of the clusters is then used to assess the possibility of that message being legitimate or spam. We show that the proposed algorithm is able to correct part of the false positives (legitimate messages classified as spam) using a testbed of one week of SMTP logs. Introduction {#sec:introduction} ============ The relentless rise in spam email traffic, now accounting for about $83\%$ of all incoming messages, up from $24\%$ in January 2003 [@messageLabs], is becoming one of the greatest threats to the use of email as a form of communication. The greatest problem in detecting spam stems from active adversarial efforts to thwart classification. Spam senders use a multitude of techniques, based on knowledge of current detection algorithms, to evade detection. These techniques range from changes in the way text is written - so that it cannot be directly analyzed computationally, but can be understood by humans naturally - to frequent changes in other elements, such as user names, domains, subjects, etc. 
Therefore, good choices for spam identifiers are becoming increasingly difficult to make. In the light of this enormous variability the question then is: what are the identifiers of spam that are most costly to change, from the point of view of the sender? The limitations of attempts to recognize spam by analyzing content are clear [@gerf]. Content-based techniques [@sahami98bayesian; @zhou03approximate; @spamassassin] have to cope with the constant changes in the way spammers generate their solicitations. The structure of the target space for these solicitations tends however to be much more stable, since spam senders still need to reach recipients, even if under forged identifiers, in order to be effective. Specifically, by structure we mean the space of recipients targeted by a spam sender, as well as the space of senders that target a given recipient, i.e. the contacts of a user. The contact lists, or subsets thereof, can then be thought of as a signature of spam senders and recipients. Additionally, by constructing a similarity measure in these spaces we can track how lists evolve over time, by addition or removal of addresses. In this paper, we propose an algorithm for spam detection that uses structural relationships between senders and recipients as the basis for the identification of spam messages. The algorithm works in conjunction with another spam classifier (hereafter called the auxiliary algorithm), which is needed to produce spam or legitimate mail tags for past senders and receivers; these tags are in turn used to infer new ones through structural similarity. The key idea is that the lists spammers and legitimate users send messages to, as well as the lists from which they receive messages, can be used as the identifiers of classes of email traffic [@priority; @ceas]. 
We will show that the final result of the application of our structural algorithm over the determinations of the initial classifier leads to the correction of a number of misclassifications, in particular false positives. This paper is organized as follows: Section \[sec:modeling\] presents the methodology used to handle email data. Our structural algorithm is described in Section \[sec:algorithm\]. We present the characteristics of our example workload in Section \[sec:results\], as well as the classification results obtained with our algorithm over this set. Related work is presented in Section \[sec:related-work\] and conclusions and future work in Section \[sec:concl-future-works\]. Modeling Similarity Among Email Senders and Recipients {#sec:modeling} ====================================================== Our proposed spam detection algorithm exploits the structural similarities that exist in groups of senders and recipients as well as in the relationship established through the emails exchanged between them. This section introduces our modeling of individual email users and a metric to express the similarity among different users. It then extends the modeling to account for clusters of users with high similarity. Our basic assumption is that, in both legitimate email and spam traffic, users have a defined list of peers they often have contact with (i.e., they send/receive an email to/from). In legitimate email traffic, contact lists are a consequence of social relationships on which users’ communications are based. In spam traffic, on the other hand, the lists used by spammers to distribute their solicitations are created for business interests and, generally, do not reflect any form of social interaction. A user’s contact list certainly may change over time. However, we expect it to be much less variable than other characteristics commonly used for spam detection, such as sender user-name, presence of certain keywords in the email content and encoding rules. 
In other words, we expect contact lists to be more effective in identifying spam and, thus, we use them as the basis for developing our algorithm. We start by representing an email user as a vector in a multi-dimensional conceptual space created with all possible contacts. We represent email senders and recipients separately. We then use vectorial operations to express the similarity among multiple senders (recipients), and use this metric for clustering them. Note that the term email user is used throughout this work to denote any identification of an email sender/recipient (e.g., email address, domain name, etc). Let $N_r$ be the number of distinct recipients. We represent sender $s_i$ as an $N_r$-dimensional vector, $\vec{s_i}$, defined in the conceptual space created by the email recipients being considered. The $n$-th dimension (representing recipient $r_n$) of $\vec{s_i}$ is defined as: $$\begin{aligned} \vec{s_i}[n] = \left\{ \begin{array}{ll} 1, & \text{if } s_i \rightarrow r_n \\ 0, & \text{otherwise} \\ \end{array} \right.,\end{aligned}$$ where $s_i \rightarrow r_n$ indicates that sender $s_i$ has sent at least one email to recipient $r_n$. Similarly, we define $\vec{r_i}$ as an $N_s$-dimensional vector representation for the recipient $r_i$, where $N_s$ is the number of distinct senders being considered. The $n$-th dimension of this vector is set to $1$ if recipient $r_i$ has received at least one email from $s_n$. We next define the similarity between two senders $s_i$ and $s_j$ as the cosine of the angle between their vector representation ($\vec{s_i}$ and $\vec{s_j}$). The similarity is computed as follows: $$\begin{aligned} \label{similarity} sim(s_i,s_j) = \frac{\vec{s_i} \circ \vec{s_j}}{|\vec{s_i}||\vec{s_j}|} = \cos(\vec{s_i},\vec{s_j}) ,\end{aligned}$$ where $\vec{s_i} \circ \vec{s_j}$ is the inner product of the vectors and $|\vec{s_i}|$ is the norm of $\vec{s_i}$. 
Note that this metric varies from 0, when senders do not share any recipient in their contact lists, to 1, when senders have identical contact lists and thus have the same representation. The similarity between two recipients is defined similarly. We note that our similarity metric has different interpretations in legitimate and spam traffic. In legitimate email traffic, it represents social interaction with the same group of people, whereas in spam traffic, high similarity represents the use of different identifiers by the same spammer or the sharing of distribution lists by distinct spammers. Finally, we can use our vectorial modeling approach to represent a cluster of users (senders or recipients) with high similarity. A sender cluster $sc_i$, represented by vector $\vec{sc_i}$, is computed as the vectorial sum of its elements, that is: $$\vec{sc_i} = \sum_{s \in sc_i}{\vec{s}}.$$ The similarity between sender $s_i$ and an existing cluster $sc_j$ can then be directly assessed by extending Equation \[similarity\] as follows: $$\begin{aligned} sim(sc_j,s_i) = \left\{ \begin{array}{ll} \cos(\vec{sc_j} - \vec{s_i}, \vec{s_i}) , & \text{if } s_i \in sc_j \\ \cos(\vec{sc_j}, \vec{s_i}) , & \text{otherwise} \\ \end{array} \right.\end{aligned}$$ We note that the vectorial representation of a sender $s_i$, and thus the sender cluster to which it belongs (i.e., shares the greatest similarity with), may change over time as new emails are considered. Therefore, in order to accurately estimate the similarity between a sender $s_i$ and a sender cluster $sc_i$ to which $s_i$ currently belongs, we first remove $s_i$ from $sc_i$, and then take the cosine between the two vectors ($\vec{sc_i} - \vec{s_i}$ and $\vec{s_i}$). This is performed so that the previous classification of a user does not influence its reclassification. Recipient clusters and the similarity between a recipient and a given recipient cluster are defined analogously. 
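The similarity metric and the cluster operations above can be sketched with sparse vectors. This is an illustrative sketch rather than the authors' implementation; vectors are stored as `{contact: weight}` dictionaries, and the function names are ours:

```python
import math

def cosine(u, v):
    """Cosine similarity between sparse vectors given as {contact: weight} dicts."""
    dot = sum(w * v.get(k, 0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_vector(members):
    """Vectorial sum of the member vectors."""
    total = {}
    for m in members:
        for k, w in m.items():
            total[k] = total.get(k, 0) + w
    return total

def sim_to_cluster(cluster, user, member=False):
    """Similarity of a user to a cluster; if the user already belongs to it,
    its own contribution is removed first, so that the previous classification
    of the user does not bias the result."""
    if member:
        cluster = {k: w - user.get(k, 0) for k, w in cluster.items()
                   if w - user.get(k, 0)}
    return cosine(cluster, user)
```

For binary sender vectors the weights are just 0/1, so `cosine` reduces to the number of shared contacts divided by the geometric mean of the two contact-list sizes.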
A New Algorithm for Improving Spam Detection {#sec:algorithm} ============================================= This section introduces our new email classification algorithm, which exploits the similarities between email senders and between email recipients for clustering and uses historical properties of clusters to improve spam detection accuracy. Our algorithm is designed to work together with any existing spam detection and filtering technique that runs at the ISP level. Our goal is to provide a significant reduction of false positives (i.e., legitimate emails wrongly classified as spam), which can be as high as 15% in current filters [@sizecost]. A description of the proposed algorithm is shown in Algorithm \[alg:detection\]. It runs on each arriving email $m$, taking as input the classification of $m$, $mClass$, as either spam or legitimate email, performed by the existing auxiliary spam detection method. Using the vectorial representation of email senders, recipients and clusters as well as the similarity metric defined in Section 2, it then determines a new classification for $m$, which may or may not agree with $mClass$. The idea is that the classification by the auxiliary method is used to build an incremental historical knowledge base that gets more representative through time. Our algorithm benefits from that and outperforms the auxiliary one as shown in Section \[sec:results\]. 
$mClass \gets$ classification of $m$ by the auxiliary detection method;
$sc \gets$ find cluster for $m.sender$;
update spam probability for $sc$ using $mClass$;
$P_s(m) \gets$ spam probability for $sc$;
$P_r(m) \gets 0$;
for each recipient $r$ in $m.recipients$:
  $rc \gets$ find cluster for $r$;
  update spam probability for $rc$ using $mClass$;
  $P_r(m) \gets P_r(m) + $ spam probability for $rc$;
$P_r(m) \gets P_r(m)/size(m.recipients)$;
$SR(m) \gets$ spam rank computed from $P_s(m)$ and $P_r(m)$;
if $SR(m) > \omega$: classify $m$ as spam;
else if $SR(m) < 1-\omega$: classify $m$ as legitimate;
else: classify $m$ as $mClass$;

In order to improve the accuracy of email classification, our algorithm maintains sets of sender and recipient clusters, created based on the structural similarity of different users. A sender (recipient) of an incoming email is added to a sender (recipient) cluster that is most similar to it, as defined in Equation (4), provided that their similarity exceeds a given threshold $\tau$. Thus, $\tau$ defines the minimum similarity a sender (recipient) must have with a cluster to be assigned to it. Varying $\tau$ allows us to create more tightly or loosely knit clusters. If no cluster can be found, a new single-user cluster is created. In this case, the sender (recipient) is used as seed for populating the new cluster. The sets of recipient and sender clusters are updated at each new email arrival based on the email sender and list of recipients. Recall that to determine the cluster a previously observed, and thus clustered, user (sender or recipient) belongs to, we first remove the user from his current cluster and then assess its similarity to each existing cluster. Thus, single-user clusters tend to disappear as more emails are processed, except for users that appear only very sporadically. ![Spam Rank Computation and Email Classification.[]{data-label="fig:spamRank"}](figures/spamRank.eps){width="200pt"} A probability of sending (receiving) a spam is assigned to each sender (recipient) cluster. We refer to this measure as simply the cluster spam probability. 
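The threshold rule for attaching a user to a cluster can be illustrated as follows. For brevity this sketch represents a cluster by the union of its members' contact sets rather than by the paper's vectorial sum; that is enough to show the role of $\tau$, and the helper names are ours:

```python
import math

def cosine(u, v):
    """Cosine similarity between two binary contact sets."""
    return len(u & v) / math.sqrt(len(u) * len(v)) if u and v else 0.0

def assign(contacts, clusters, tau=0.5):
    """Attach the user to the most similar cluster if its similarity reaches
    tau; otherwise seed a new single-user cluster. Returns the cluster index."""
    sims = [cosine(contacts, c) for c in clusters]
    if sims and max(sims) >= tau:
        i = sims.index(max(sims))
        clusters[i] |= contacts  # merge the user's contacts into the cluster
        return i
    clusters.append(set(contacts))  # new single-user cluster seeded by the user
    return len(clusters) - 1
```

Lowering `tau` yields fewer, larger clusters; raising it yields many tight (often single-user) clusters, which mirrors the trade-off discussed for the experiments.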
We calculate the spam probability of a sender (recipient) cluster as the average spam probability of its elements, which, in turn, is estimated based on the frequency of spam sent/received by each of them in the past. Therefore, our algorithm uses the result of the email classification performed by the auxiliary algorithm on each arriving email $m$ ($mClass$ in Algorithm \[alg:detection\]) to continuously update cluster spam probabilities. Let us define the probability of an email $m$ being sent by a spammer, $P_s(m)$, as the spam probability of its sender’s cluster. Similarly, let us define the probability of an email $m$ being addressed to users that receive spam, $P_r(m)$, as the average spam probability of all of its recipients’ clusters (see Algorithm \[alg:detection\]). Our algorithm uses $P_s(m)$ and $P_r(m)$ to compute a number that expresses the chance of email $m$ being spam. We call this number the spam rank of email $m$, denoted by $SR(m)$. The idea is that emails with large values of $P_s(m)$ and $P_r(m)$ should have large spam ranks and thus should be classified as spam. Similarly, emails with small values of $P_s(m)$ and $P_r(m)$ should receive low spam rank and be classified as legitimate email. Figure \[fig:spamRank\] shows a graphical representation of the computation of an email spam rank. We first normalize the probabilities $P_s(m)$ and $P_r(m)$ by a factor of $\sqrt{2}$, so that the diagonal of the square region defined in the bi-dimensional space is equal to 1 (see Figure \[fig:spamRank\]-left). Each email $m$ can be represented as a point in this square. The spam rank of $m$, $SR(m)$, is then defined as the length of the segment starting at the origin (0,0) and ending at the projection of $m$ on the diagonal of the square (see Figure \[fig:spamRank\]-right). Note that the spam rank varies between 0 and 1. 
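As a sanity check on the geometry just described: projecting the normalized point $(P_s(m)/\sqrt{2},\, P_r(m)/\sqrt{2})$ onto the unit diagonal reduces algebraically to the mean of the two probabilities. A minimal Python sketch (function names and the default threshold value are ours; the thresholding and fall-back rule follow the description in the text):

```python
import math

def spam_rank(p_s, p_r):
    """SR(m): length of the projection of the normalized point
    (p_s/sqrt(2), p_r/sqrt(2)) onto the unit diagonal of the square."""
    x, y = p_s / math.sqrt(2), p_r / math.sqrt(2)
    # dot product with the unit diagonal direction (1/sqrt(2), 1/sqrt(2))
    return (x + y) / math.sqrt(2)  # algebraically equals (p_s + p_r) / 2

def classify(p_s, p_r, m_class, omega=0.85):
    """Classify as spam above omega, legitimate below 1 - omega,
    otherwise fall back on the auxiliary classifier's label m_class."""
    sr = spam_rank(p_s, p_r)
    if sr > omega:
        return "spam"
    if sr < 1 - omega:
        return "legitimate"
    return m_class
```

Since $SR(m) = (P_s(m) + P_r(m))/2$, it indeed stays within $[0,1]$ whenever both probabilities do.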
The spam rank $SR(m)$ is then used to classify $m$ as follows: if it is greater than a given threshold $\omega$, the email is classified as spam; if it is smaller than $1 - \omega$, it is classified as legitimate email. Otherwise, we cannot precisely classify the email, and we rely on the initial classification provided by the auxiliary detection algorithm. The parameter $\omega$ can be tuned to determine the precision that we expect from our classification. Graphically, emails are classified according to the marked regions shown in Figure \[fig:spamRank\]-left. The two triangles, with identical size and height $\omega$, represent the regions where our algorithm is able to classify emails as either spam (upper right) or legitimate email (lower left). Experimental Results {#sec:results} ==================== In this section we describe our experimental results. We first present some important details of our workload, followed by the quantitative results of our approach, compared to others. Workload {#sec:workload} -------- Our email workload consists of anonymized and sanitized SMTP logs of incoming emails to a large university in Brazil, with around 22 thousand students. The server handles all emails coming from domains outside the university, sent to students, faculty and staff with email addresses under the university’s domain name.[^2] The central email server runs Exim email software [@exim], the Amavis virus scanner [@amavis] and the Trendmicro Vscan anti-virus tool [@antivirus]. A set of pre-acceptance spam filters (e.g. black lists, DNS reversal) blocks about 50% of the total traffic received by the server. The messages not rejected by the pre-acceptance tests are directed to Spam-Assassin [@spamassassin]. Spam-Assassin is a popular spam filtering software that detects spam messages based on a changing set of user-defined rules. 
These rules assign scores to each email received based on the presence in the subject or in the email body of one or more pre-categorized keywords. Spam-Assassin also uses other rules based on message size and encoding. Highly ranked messages according to these criteria are flagged as spam. We analyze an eight-day log collected between 01/19/2004 and 01/26/2004. Our logs store the header of each email (i.e., sender, recipients, size, date, etc.) that passes the pre-acceptance filters, along with the results of the tests performed by Spam-Assassin and the virus scanners. We also have the full body of the messages that were classified as spam by Spam-Assassin. Table \[bst\] summarizes our workload.

  [**Measure**]{}             [**Non-Spam**]{}   [**Spam**]{}   [**Aggregate**]{}
  --------------------------- ------------------ -------------- -------------------
  \# of emails                191,417            173,584        365,001
  Size of emails              11.3 GB            1.2 GB         12.5 GB
  \# of distinct senders      12,338             19,567         27,734
  \# of distinct recipients   22,762             27,926         38,875

  : Summary of the Workload[]{data-label="bst"}

By visually inspecting the list of sender [*user names*]{} [^3] in the spam component of our workload, we found that a large number of them corresponded to a seemingly random sequence of characters, suggesting that spammers tend to change user names as an evasion technique. Therefore, for the experiments presented below we identified the sender of a message by his/her domain while recipients were identified by their full address, including both domain and user name. Classification Results ---------------------- ![Number of Email User Clusters and Beta CV vs. $\tau$.[]{data-label="fig:betacvxncom"}](plots/tauxncomm.eps){width="200pt"} The results shown in this section were obtained through the simulation of the algorithm proposed here over the set of messages in our logs. 
The implementation of the simulator made use of an inverted-list [@moffat] approach for storing information about senders, recipients and clusters that is effective both in terms of memory and processing time. Our simulations were executed on a commodity workstation (Intel Pentium 4, 2.80 GHz, with 500 MBytes of memory) and the simulator was able to classify 20 messages per second. This is far faster than both the average message arrival rate and the peak rate observed over the workload collection time [@gomes]. The number and quality of the clusters generated through our similarity measure are the direct result of the chosen value for the threshold $\tau$ (see Section \[sec:algorithm\]). In order to determine the best parameter value, the simulation was executed several times for varying $\tau$. Figure \[fig:betacvxncom\] shows how the number of clusters and beta CV [^4] vary with $\tau$. There is one clear point of stabilization of the curve (i.e. a plateau) at $\tau = 0.5$, and that is the value we adopt for the remainder of the paper. Although other stabilization points occur for values of $\tau$ above $0.5$, the lowest of such values seems to be the most appropriate for our experiments. The reason is that this value of $\tau$ generates the smallest stable number of clusters, i.e., clusters with more elements, which allows us to better evaluate the beneficial effects that clustering senders and recipients may have. Moreover, the beta CV shows that the quality of the clustering for all values $\tau>0.4$ is approximately the same. One of the hypotheses behind our algorithm is that we can group spam messages in terms of the probabilities $P_s(m)$ and $P_r(m)$. Figure \[fig:message\_classification\] shows the fraction of spam messages that exist for different values of $P_s(m)$ and $P_r(m)$ grouped based on a discretization of the full space represented in the plot. 
The full space is subdivided into smaller squares of the same size called bins. Clearly, spam/legitimate messages are indeed located in the regions (top and bottom respectively) as we have hypothesized in Section \[sec:algorithm\]. There is however a region in the middle where we cannot determine the classification for the messages based on the computed probabilities. This is why it becomes necessary to vary $\omega$. One should adjust $\omega$ based on the level of confidence he/she has in the auxiliary algorithm. Figure \[fig:message\_classification\] shows that differentiation between senders and recipients for detecting spam can be more effective than the simple choice we use in this paper. Messages addressed to recipients that have high $P_r(m)$ tend to be spam more frequently than messages with the same value of $P_s(m)$. Analogously, messages with low $P_s(m)$ have higher probability of being legitimate messages. Ways of using this information in our algorithm are an ongoing research effort that we intend to pursue in future extensions. Our algorithm makes use of an auxiliary spam detection algorithm - such as SpamAssassin. Therefore, we need to evaluate how frequently we maintain the same classification as such an algorithm. Figure \[fig:omega\] shows the percentage of messages that received the same classification and the total number of classified messages in our simulation as $\omega$ varies. The difference between these curves is the set of messages that were classified differently from the original classification provided. There is a clear tradeoff between the total number of messages that are classifiable and the accordance with the previous classification provided by the original classifier algorithm. 
![Messages Classified in Accordance with the Auxiliary Algorithm and the Total Number of Messages Classified by Varying $\omega$[]{data-label="fig:omega"}](plots/omega_0.5.eps){width="150pt"} In another experiment, we simulated a different algorithm that also makes use of history information provided by an auxiliary spam detector described in [@priority]. This approach tries to classify messages based on the historical properties of their senders. We built a simulator for this algorithm and executed it against our data set. The results show that it was able to classify $85.11\%$ of the messages in accordance with the auxiliary algorithm. It is important to note that, in contrast, our algorithm can be tuned by properly setting the threshold $\omega$. The higher the parameter $\omega$, the more our algorithm's classification agrees with the auxiliary classification. We believe that the differences between the original classification and the classification proposed for high $\omega$ values are generally due to misclassifications by the auxiliary algorithm. In our data set we have access to the full body of the messages that were originally classified as spam. Therefore, we can evaluate a fraction of the total amount of false positives (messages that the auxiliary algorithm classifies as spam and our algorithm classifies as legitimate messages) that were generated by the auxiliary algorithm. This is important since there is a common belief that the cost of false positives is higher than the cost of false negatives [@gerf]. Each of the possible false positives was manually evaluated by three people so as to determine whether such a message was indeed spam. Table \[tab:manual\] summarizes the results for $\omega = 0.85$: 879 messages were manually analyzed ($0.24\%$ of the total). Our algorithm outperforms the original classification since it generates fewer false positives. 
We emphasize that we cannot similarly determine the quality of classification for the messages classified as legitimate by the auxiliary algorithm since we do not have access to the full body of those messages. Due to the cost of manually classifying messages we cannot afford to classify all of the messages classified as spam by the auxiliary algorithm.

  ------------------------- -----------
  Original Classification   $60.33\%$
  Our approach              $39.67\%$
  ------------------------- -----------

  : Possible False Positives Generated by the Approaches Studied.[]{data-label="tab:manual"}

Related Work {#sec:related-work} ============ Previous work has focused on reducing the impact of spam. The approaches to reduce spam can be categorized into pre-acceptance and post-acceptance methods, based on whether they detect and block spam before or after accepting messages. Examples of pre-acceptance methods are black lists [@blacklist2], gray lists [@greylist], server authentication [@spam; @authentication] and accountability [@solvingspam]. Post-acceptance methods are mostly based on information available in the body of the messages and include Bayesian filters [@sahami98bayesian] and collaborative filtering [@zhou03approximate]. Recent papers have focused on spam combat techniques based on characteristics of graph models of email traffic [@emailnetcombat; @spammachines]. These techniques model email traffic as a graph and detect spam or spam attacks in terms of graph properties. In [@emailnetcombat] a graph is created representing the email traffic captured in the mailbox of individual users. The subsequent analysis is based on the fact that such a network possesses several disconnected components. The clustering coefficient of each of these components is then used to characterize messages as spam or legitimate. Their results show that 53% of the messages were precisely classified using the proposed approach. 
In [@spammachines] the authors used the approach of detecting machines that behave as spam senders by analyzing a border flow graph of sender and recipient machines. In [@priority], the authors propose a new scheme for handling spam. It is a post-acceptance mechanism that processes mail suspected of being spam at reduced priority, when compared to the priority assigned to messages classified as legitimate. The proposed mechanism [@priority] works in conjunction with some sort of mail filter that provides past history of mails received by a server. None of the existing spam filtering mechanisms is infallible [@priority; @gerf]. Their main problems are false positives and wrong mail classifications. In addition to those problems, filters must be continuously updated to capture the multitude of mechanisms constantly introduced by spammers to avoid filtering actions. The algorithm presented in this paper aims at improving the effectiveness of spam filtering mechanisms, by reducing false positives and by providing information that helps those mechanisms tune their collections of rules. Conclusions and Future Work {#sec:concl-future-works} =========================== In this paper we proposed a new spam detection algorithm based on the structural similarity between contact lists of email users. The idea is that contact lists, integrated over a suitable amount of time, are much more stable identifiers of email users than id names, domains or message contents, which can all be made to vary quickly and widely. The major drawback of our approach is that our algorithm can only group users based on their structural similarity, but has no way of determining by itself if such vector clusters correspond to spam or legitimate email. Because of this feature it must work in tandem with an original classifier. 
Given this information we have shown that we can successfully group spam and legitimate email users separately and that this structural inference can improve the quality of other spam detection algorithms. Specifically, we have implemented a simulator based on data collected from the main SMTP server for a major university in Brazil that uses SpamAssassin. We have shown that our algorithm can be tuned to produce classifications similar to those of the original classifier algorithm and that, for a certain set of parameters, it was capable of correcting false positives generated by SpamAssassin in our workload. There are several improvements and developments that were not explored here, but promise to reinforce the strength of our approach. We intend to explore these in future work. We observe that structural similarity gives us a basis for time correlation of similar addresses, and as such to follow the time evolution of spam sender techniques, in ways that suitably factor out the enormous variability of their apparent identifiers. Finally, we note that the probabilistic basis of our approach lends itself naturally to the evolution of users’ classifications (say through Bayesian inference), both through collaborative filtering using user feedback and from information derived from other algorithmic classifiers. [10]{} Amavis. http://www.amavis.org, 2004. Size and cost of the problem. In [*56th IETF Meeting*]{} (March 2003). Authentication approaches. In [*56th IETF Meeting*]{} (March 2003). Personal email networks: An effective anti-spam tool. http://www.arxiv.org/abs/cond-mat/0402143, February 2004. Solving spam by establishing a platform for sender accountability. In [*56th IETF Meeting*]{} (March 2003). Spam, spim, and spit. , 4 (2005), 39–43. Spam! In [*Communications of the ACM*]{} (1998). Analyzing network traffic to detect e-mail spamming machines. Tech. Rep. 180, Army High Performance Computing Research Center TECHNICAL REPORT, 2004. 
Exim internet mailer home page. http://www.exim.org, 2004. Comparative graph theoretical characterization of networks of spam and regular email. http://arxiv.org/abs/cond-mat/0503725, March 2005. Characterizing a spam traffic. In [*Proc. of the 4th ACM SIGCOMM conference on Internet measurement*]{} (2004). The next step in the spam control war: Greylisting. http://projects.puremagic.com/greylisting/, April 2004. Message labs home page. http://www.messagelabs.co.uk/, 2005. Maps - mail abuse prevention system home page. http://mail-abuse.org/rbl/getoff.html, 2004. . Prentice Hall Inc., USA, September 2001. A bayesian approach to filtering junk [E]{}-mail. In [*Learning for Text Categorization: Papers from the 1998 Workshop*]{} (Madison, Wisconsin, USA, 1998), AAAI Technical Report WS-98-05. Spamassassin. http://www.spamassassin.org, 2004. Trend micro home page. http://www.trendmicro.com, 2004. Email prioritization: Reducing delays on legitimate mail caused by junk mail. In [*Proc. Usenix Annual Technical Conference*]{} (Boston, MA, June 2004). . John Wiley & Sons, Inc., New York, NY, USA, 1994. Approximate object location and spam filtering on peer-to-peer systems. In [*Proc. of Middleware*]{} (June 2003). [^1]: Luiz H. Gomes is supported by Banco Central do Brasil. [^2]: Only the emails addressed to two out of over 100 university subdomains (i.e., departments, research labs, research groups) do not pass through the central server. [^3]: The part before @ in email addresses. [^4]: Beta CV means intra CV/inter CV and assesses the quality of the clusters generated. The lower the beta CV, the better the quality of the grouping obtained [@livrovirgilio].
--- abstract: | We present the results of our investigation of the composition of the diffuse soft X–ray background emission (SXRB). Combining data of the Leiden/Dwingeloo [HI]{} Survey and the [*ROSAT*]{} All–Sky Survey (RASS), we set up a radiation transport equation in order to model the SXRB. Two different techniques lead to the model parameters: an image-oriented approach which compares observed and modeled maps in the 1/4 and 3/4 keV X–ray energy regimes, and a more analytic approach using scatter diagrams. The analysis shows that [*only three*]{} independent components of the emitting plasma (local, halo and extragalactic) are needed to explain the SXRB. The results for the temperatures and X–ray intensities, which characterize the three components, are given and compared to an alternative model. author: - 'M.Kappes' - 'J.Pradas' - 'J.Kerp' title: 'On the Temperature and Intensity Distribution of the Galactic X-ray Plasma' --- Introduction ============ To understand the origin and evolution of the Milky Way it is necessary to investigate its emission across the entire electromagnetic spectrum. In the soft X–ray energy regime much progress was made with the [*ROSAT*]{} mission and its discovery of coronal gas located within the Milky Way halo ([@jkerp-E1-6:sno91]). Today there is general agreement that we can identify at least three individual components which contribute to the diffuse soft X–ray background (SXRB) emission: the coronal gas partly filling the Local Bubble ([@jkerp-E1-6:sno98]), X–ray plasma localized within the Milky Way halo ([@jkerp-E1-6:pie98]) and finally the superposed emission of individual X–ray sources at extragalactic distances ([@jkerp-E1-6:has01]). With [*ROSAT*]{} all-sky survey data it is possible to shed light on the physical properties of the several components of the SXRB. Questions we would like to answer are: Is the halo plasma hotter or cooler than the local X–ray gas?
Is there evidence for more than one single coronal gas phase in the Milky Way halo ([@jkerp-E1-6:kun00])? Is the plasma emissivity a function of Galactic longitude and/or latitude ([@jkerp-E1-6:pie98])? Data and Model ============== The correlation of the [*ROSAT*]{} All–Sky Survey (RASS) 1/4 keV ([*ROSAT*]{} C–band) and 3/4 keV ([*ROSAT*]{} M–band) data with the Leiden/Dwingeloo [HI]{} Survey of galactic neutral hydrogen ([@jkerp-E1-6:har97]) provides an opportunity to disentangle the different SXRB components. Figure \[jkerp-E1-6:rad\] illustrates our approach to model the X–ray radiation transport through the Galactic interstellar medium. Because of the anti–correlation between X–ray radiation and [HI]{} column density it is possible to set up the following radiation transport equation: $$I = I_{\rm l} + I_{\rm h} \cdot e^{- \sigma(E, N_{\rm HI,h}) \cdot N_{\rm HI,h}}+ I_{\rm e} \cdot e^{- \sigma(E, N_{\rm HI,e}) \cdot N_{\rm HI,e}} \label{jkerp-E1-6:rad}$$ Here $I_{\rm l}$ denotes the Local Bubble component, $I_{\rm h}$ the halo component and $I_{\rm e}$ the extragalactic contribution. The observed X–ray intensity distribution is modulated by photoelectric absorption traced by the [HI]{} gas. In our initial approach we include three X–ray emission components: an unabsorbed foreground Raymond–Smith plasma representing the Local Bubble emission, an absorbed distant Raymond–Smith plasma (the halo component) and an absorbed extragalactic power law (EPL) with index $\alpha = -1.5$ ([@jkerp-E1-6:has01]). Deriving the model parameters ============================= We investigate two different fields: [*Field A*]{} at $20{\degr} < b < 47{\degr}$, $34{\degr} < l < 85{\degr}$ and [*field B*]{} at $12{\degr} < b < 74{\degr}$, $99{\degr} < l < 166{\degr}$.
These fields are at high Galactic latitude, where the smaller [HI]{} column density allows a better study of the halo component than the Galactic plane, where this radiation is much more strongly absorbed. Both fields cover a large range in [HI]{} column densities, which improves the significance of the X–ray/[HI]{} correlation. Figure \[jkerp-E1-6:loco\] shows the 1/4 keV [*ROSAT*]{} band for [*field A*]{}. Scatter diagrams ---------------- First, we evaluate the X–ray intensity of the Local Bubble. To this end, we produce “scatter diagrams”, which are shown in Fig. \[jkerp-E1-6:losca\]. At high column densities the distant and extragalactic X–ray components are so strongly absorbed that the remaining C–band intensity can be attributed entirely to the Local Bubble emission. We derive a Local Bubble intensity of $I_{\rm l} = 350 \cdot 10^{-6} {\rm cts \enspace s^{-1}\,arcmin^{-2}}$ for both fields investigated. Second, for the extragalactic background intensity we use the value $I_{\rm e} = (228 \pm 90) \cdot 10^{-6} {\rm cts \enspace s^{-1}\,arcmin^{-2}}$ given by [-@jkerp-E1-6:bar96] as a first estimate. The power–law index is fixed to $\alpha = -1.5$ ([@jkerp-E1-6:has01]). Third, we evaluate the contribution of the halo component to the SXRB. Different values for the C–band intensity and temperature of the Raymond–Smith plasma are combined with the corresponding theoretical band ratios. It turns out that the C–band scatter diagram is [*not*]{} a sensitive measure of the halo plasma temperature, whereas the other energy bands and derived ratios are. Note especially the upper right diagram in Fig. \[jkerp-E1-6:losca\]: the curves for the three different temperatures are almost identical. Only in combination with the M–band scatter diagram and the C/M–band ratio is it possible to derive the temperatures reliably. Because of this finding, we have to fit [*simultaneously*]{} the R1–, R2–, C– and M–band data.
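The radiation transport equation above is cheap to evaluate per pixel, which is what makes the simultaneous multi-band fit feasible. A minimal sketch follows; the cross sections and column densities below are illustrative placeholders, not the band-averaged values used in the paper:

```python
import numpy as np

def modeled_intensity(I_l, I_h, I_e, sigma_h, sigma_e, N_h, N_e):
    """Radiation transport: local + absorbed halo + absorbed extragalactic.

    I_* are the component intensities, sigma_* effective photoelectric
    cross sections (cm^2), N_* the absorbing HI columns (cm^-2)."""
    return I_l + I_h * np.exp(-sigma_h * N_h) + I_e * np.exp(-sigma_e * N_e)

# Illustrative values only (intensities in 1e-6 cts s^-1 arcmin^-2;
# the cross section 1e-20 cm^2 is a made-up placeholder):
N = np.array([1e19, 1e20, 1e21])          # hypothetical HI columns
I = modeled_intensity(350.0, 1975.0, 170.0,
                      sigma_h=1e-20, sigma_e=1e-20, N_h=N, N_e=N)
print(I)  # decreases toward I_l = 350 as the HI column grows
```

In the high-column-density limit both exponentials vanish and $I \to I_{\rm l}$, which is exactly the scatter-diagram argument used above to pin down the Local Bubble intensity.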
The derived results – on temperature and intensity – are used to calculate model images (see Fig. \[jkerp-E1-6:locm\]), which we test independently as follows. Modeled images -------------- By subtracting the modeled images from the observed ones (pixel by pixel) we calculate absolute deviations between the two, which are divided by the uncertainty maps of the [*ROSAT*]{} observations in both the C– and M–band. The resulting normalized deviation distribution is a measure of the statistical significance of the model, expressed in units of the standard deviation (e.g. if the deviation width is close to unity then the uncertainty of the model is comparable to the uncertainty of the [*ROSAT*]{} data). We produce “deviation images” and superimpose their contour lines on the [*ROSAT*]{} C–band model images, as shown in Fig. \[jkerp-E1-6:locm\]. At first glance it is clear that the deviations are not randomly distributed across the fields, but form coherent structures. The dotted contours correspond to areas where the modeled intensities are brighter than the observed ones, while the modeled intensity encircled by the solid contours is too faint. The regions with dotted contours can be explained by a lack of absorbing material, while the regions marked by solid lines represent excess emission or too much absorbing material. For example, [*field B*]{} contains intergroup [HI]{} gas belonging to the M81 group of galaxies, which yields X–ray excess emission in the modeled intensity (see Fig. \[jkerp-E1-6:hicm\]). In order to minimize the misfitting regions, we subtract gas with velocities $|{\rm v}_{\rm LSR}| \ge 25\,{\rm km\,s^{-1}}$ from the [HI]{} data in the areas marked by solid contours; the model then fits much better, without producing further excess emission. With this [HI]{} data selection, which is done for both fields, it is possible to reduce a huge area of excess emission in Fig. \[jkerp-E1-6:locm\] to only three small spots.
Unfortunately the dotted features cannot be identified. Testing the model ================= To obtain better model parameters we take a closer look at the statistics of the deviations. Figure \[jkerp-E1-6:hist\] illustrates the goodness of the model. The red plot represents the deviation in the C–band while the blue plot corresponds to the M–band deviation. The goal is to derive model parameters which lead to histograms with a mean $\mu = 0$ and a standard deviation $\sigma = 1$ simultaneously in both energy bands. This is done by an iterative process in which we vary the model parameters initially obtained from the scatter diagrams and analyze the mean and standard deviation of the resulting histograms. Varying $I_{\rm e}$, we find that a value of $170 \cdot 10^{-6} {\rm cts \enspace s^{-1}\,arcmin^{-2}}$ fits much better than the original value given by [-@jkerp-E1-6:bar96]. In addition to our model we fit another model with two distant halo components, as proposed by [-@jkerp-E1-6:kun00]. For the statistical significance see Fig. \[jkerp-E1-6:histsno\]. Note that in the two distant halo component model the two energy bands are independent of each other, i.e. they cannot be fitted simultaneously, in contrast to our approach. The best fitting model intensities are shown in Tab. \[jkerp-E1-6:result\] for the observed fields. The best fitting temperature for the Local Bubble is log($T_{\rm l}$)=5.9 and for the halo we derive log($T_{\rm h}$)=6.2. For the two halo component model we adopt the temperatures derived by [-@jkerp-E1-6:kun00]: log($T_{\rm l}$)=6.1, log($T_{\rm h_1}$)=6.0, and log($T_{\rm h_2}$)=6.4.
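The histogram test just described reduces to computing a normalized residual map and checking its first two moments. A minimal sketch, with synthetic arrays standing in for the observed map, model map, and ROSAT uncertainty map:

```python
import numpy as np

def deviation_map(observed, model, uncertainty):
    """Per-pixel normalized deviations (observed - model) / sigma.
    A good model yields a distribution with mean ~ 0 and std ~ 1."""
    return (observed - model) / uncertainty

# Synthetic stand-ins: a flat model plus Gaussian noise at the map's 1-sigma
rng = np.random.default_rng(1)
model = np.full((64, 64), 1000.0)
sigma = np.full((64, 64), 30.0)
observed = model + rng.normal(0.0, 30.0, size=model.shape)

dev = deviation_map(observed, model, sigma)
mu, width = dev.mean(), dev.std()
print(round(mu, 2), round(width, 2))  # close to 0 and 1 for a good model
```

In the paper this check is applied to the C– and M–band simultaneously, and the free parameters (e.g. $I_{\rm e}$) are varied until both histograms reach $\mu=0$, $\sigma=1$.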
                                          $I_{\rm l}$   $I_{\rm h}$   $I_{\rm e}$
  --------------------------------------- ------------- ------------- -------------
  [**Field A**]{}
  1/4 keV                                 350           1975          170
  3/4 keV                                 1             152           40
  [**Field B**]{}
  1/4 keV                                 350           1380          170
  3/4 keV                                 1             110           40
  [**Field A**]{} (two halo components)
  1/4 keV                                 350           1380, 1100    230
  3/4 keV                                 12            15, 282       53

  : Best fitting model intensities in units of $10^{-6}\,{\rm cts \enspace s^{-1}\,arcmin^{-2}}$.[]{data-label="jkerp-E1-6:result"}

The intensities found for [*field A*]{} and [*B*]{} differ by about 40%, which can be attributed to a variation in intensities with Galactic latitude and/or longitude (see Tab. \[jkerp-E1-6:result\]). This behavior is [*not*]{} expected in the two distant component model proposed by [-@jkerp-E1-6:kun00]. A more detailed study of the $l,b$–dependency will be presented in a forthcoming paper. Summary ======= - The Local Bubble appears as a one component X–ray plasma with a temperature of $T = 10^{5.9}$K and an intensity of $350 \cdot 10^{-6} {\rm cts \enspace s^{-1}\,arcmin^{-2}}$ in the 1/4 keV energy regime. Moreover, the contribution of the halo exceeds the local one by a factor of about four. - The remaining SXRB is compatible with the following: a one component halo plasma with a temperature of $T =10^{6.2}$K and varying intensity in both latitude and longitude. Note that it is not necessary to introduce [*more*]{} than one halo component. Furthermore, an extragalactic component with an intensity of $170 \cdot 10^{-6} {\rm cts \enspace s^{-1}\,arcmin^{-2}}$ is found, which is consistent with the value provided by [-@jkerp-E1-6:bar96]. - The variation with Galactic latitude and longitude suggests the existence of a smoothly distributed halo plasma which surrounds the Milky Way. The authors would like to thank the Deutsches Zentrum für Luft– und Raumfahrt for financial support under grant No. 50OR0103. Barber, C. R., Roberts, T. P., Warwick, R. S. 1996, MNRAS 282, 157B Hartmann, D., Burton, W. B.
1997, Atlas of galactic neutral hydrogen, Cambridge University Press Hasinger, G., Altieri, B., Arnaud, M., Barcons, X., Bergeron, J., Brunner, H., Dadina, M., Dennerl, K., Ferrando, P., Finoguenov, A., Griffiths, R. E., Hashimoto, Y., Jansen, F. A., Lumb, D. H., Mason, K. O., Mateos, S., McMahon, R. G., Miyaji, T., Paerels, F., Page, M. J., Ptak, A. F., Sasseen, T. P., Schartel, N., Szokoly, G. P., Trümper, J., Turner, M., Warwick, R. S., Watson, M. G. 2001, A&A 365L, 45H Kuntz, K. D., Snowden, S. L. 2000, ApJ 543, 195K Pietz, J., Kerp, J., Kalberla, P. M. W., Burton, W. B., Hartmann, Dap, Mebold, U. 1998, A&A 332, 55P Snowden, S. L., Egger, R., Finkbeiner, D. P., Freyberg, M. J., Plucinsky, P. P. 1998, ApJ 493, 715S Snowden, S. L., Plucinsky, P. P., McCammon, D., Freyberg, M. J., Schmitt, J. H. M. M., Trümper, J. 1991, BAAS 23, 1400S
--- abstract: 'We consider the (1+1)-dimensional ${\cal N}=(2,2)$ super Yang–Mills theory which is obtained by dimensionally reducing ${\cal N}=1$ super Yang–Mills theory in four dimensions to two dimensions. We do our calculations in the large-$N_c$ approximation using Supersymmetric Discrete Light Cone Quantization. The objective is to calculate quantities that might be investigated by researchers using other numerical methods. We present a precision study of the low-mass spectrum and the stress-energy correlator $\langle T^{++}(r) T^{++}(0) \rangle$. We find that the mass gap of this theory closes as the numerical resolution goes to infinity and that the correlator in the intermediate $r$ region behaves like $r^{-4.75}$.' --- [**Improved results for ${\cal N}=(2,2)$ super Yang–Mills theory using supersymmetric discrete light-cone quantization**]{} [**Motomichi Harada$^a$, John R. Hiller$^b$, Stephen Pinsky$^a$, and Nathan Salwen$^a$**]{} [*${}^a$Department of Physics\ Ohio State University\ Columbus OH 43210*]{} [*${}^b$Department of Physics\ University of Minnesota Duluth\ Duluth MN 55812*]{} Introduction ============ There is a pressing need to solve quantum field theories in the nonperturbative regime. Over the last thirty years a significant amount of progress has been made in this area using lattice gauge theory. Many of the most interesting quantities in QCD and electroweak physics are being calculated to ever increasing accuracy. There remain, however, a number of nonperturbative quantities in supersymmetric quantum field theories that are interesting in a variety of formal physics contexts but have not been calculated. Some of these calculations are now just beginning to be considered. With increased interest in the physics of extra dimensions, it is more important than ever to solve supersymmetric theories in the nonperturbative regime.
The progress in putting supersymmetry on a lattice has been rather slow due to some critical problems: the lack of translational invariance on a lattice, the notorious doubling of fermion states [@Nielsen:1980rz], and the breakdown of the Leibniz rule [@Fujikawa:2002ic]. Recently, however, some interesting new approaches have shed some light on this issue [@Cohen:2003xe; @Cohen:2003qw; @Sugino:2003yb; @Sugino:2004qd]. These approaches make possible the restoration of supersymmetry in a continuum limit without fine-tuning of parameters and even without introducing some “sophisticated” fermions such as domain-wall [@Kaplan:1992bt] or overlap fermions [@Narayanan:1994gw; @Neuberger:1997fp]. However, these techniques seem to be applicable to only some subset of all supersymmetric theories. Given this increasing interest and promising new ideas for the realization of supersymmetry on a lattice, it is worthwhile to provide some specific, detailed numerical results using Supersymmetric Discretized Light Cone Quantization (SDLCQ) [@sakai95; @Lunin:1999ib] for the simplest theory for which the new lattice techniques are applicable. SDLCQ is a well established tool for calculations of physical quantities in supersymmetric gauge theory and has been exploited for many supersymmetric Yang–Mills (SYM) theories. The ${\cal N}=(2,2)$ theory in 1+1 dimensions in the large-$N_c$ limit is discussed in Ref. [@Antonuccio:1998mq]; however, the published results are primitive compared to what can be obtained today because of our greatly improved hardware and software. In this paper we are now able to reach a resolution of $K=12$, while in Ref. [@Antonuccio:1998mq] we could reach only $K=5$. Here we will present new and more detailed results on this theory against which the lattice community can compare the results of their new techniques. 
Briefly, the SDLCQ method rests on the ability to produce an exact representation of the superalgebra but is otherwise very similar to Discrete Light Cone Quantization (DLCQ) [@pb85; @bpp98]. In DLCQ we compactify the $x^-$ direction by putting the system on a circle with a period of $2L$, which discretizes the longitudinal momentum as $p^+=n\pi/L$, where $n$ is an integer. The total longitudinal momentum $P^+$ becomes $K\pi/L$, where $K$ is an integer called the harmonic resolution [@pb85]. The positivity of the light-cone longitudinal momenta then limits the number of possible Fock states for a given $K$, and, thus, the dimension of the Fock space becomes finite, enabling us to do numerical computations. It is assumed that as $K$ approaches infinity, the solutions of this large finite problem approach the solutions of the field theory. The difference between DLCQ and SDLCQ lies in the choice of discretizing either $P^-$ or $Q^-$ to construct the matrix approximation to the eigenvalue problem $M^2|\Psi\rangle=2P^+P^- |\Psi\rangle=2P^+(Q^-)^2/\sqrt 2|\Psi\rangle$, with $P^+=K\pi/L$. For more details and additional discussion of SDLCQ, we refer the reader to Ref. [@Lunin:1999ib]. An interesting new result of the calculation we present here is that finite-dimensional representations of SDLCQ with odd and even values of $K$ result in very distinct solutions of the ${\cal N}=(2,2)$ SYM theory, which only become identical as $K$ approaches infinity. One might initially think that this is a shortcoming of the SDLCQ approach, but it turns out to be an advantage because it provides an internal measure of convergence. We will give some numerical results for the low-energy spectrum. There we will see that as we go to higher and higher resolutions, we find bound states with lower and lower mass. We have seen this behavior in the ${\cal N}=(1,1)$ theory, where the lowest mass state converges linearly to zero as a function of $1/K$.
This closing of the mass gap as $K\to \infty$ was predicted by Witten [@Witten:1995im] for the ${\cal N}=(1,1)$ and ${\cal N}=(2,2)$ theories. We find that in the latter case the convergence is not linear in $\frac{1}{K}$, and, while our results are consistent with the mass gap going to zero, they are not conclusive. We have also been able to solve analytically for the wave functions of some of the pure bosonic massless states, and we will present the exact form of the wave function in some cases. We will show that the states must have certain properties to be massless, which then enables us to count the number of such states for a given resolution $K$. In addition, we will present formulae for counting the minimum total number of massless states. Finally, we will look at the two-point correlation function of the stress-energy tensor $\langle T^{++}(r)T^{++}(0)\rangle$. We see the expected $1/r^4$ behavior in the UV and IR regions, and, interestingly, we find that the correlator behaves as $1/r^{4.75}$ in the intermediate region. We know of no predictions for this behavior; however, for ${\cal N}=(8,8)$ SYM theory there is a prediction that this correlator should behave like $1/r^5$ in the intermediate region. The structure of this paper is the following. In Sec. \[sec:N2SYM\] we focus our attention on the low-energy states. After giving a quick review of ${\cal N}=(2,2)$ SYM theory with SDLCQ, we give some numerical results for the low-energy states, discuss analytically some properties of pure bosonic massless states, and present formulae for counting the minimum total number of massless states. We discuss the numerical results for the two-point correlation function of the stress-energy tensor in Sec. \[sec:cor\]. A summary and some additional discussion are given in Sec. \[sec:discussion\].
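The finiteness of the DLCQ Fock space at fixed resolution, mentioned above, is easy to see concretely: each parton carries at least one unit of longitudinal momentum, so an $n$-parton state at resolution $K$ corresponds to an ordered assignment of positive integers summing to $K$. A minimal counting sketch (momentum assignments only; flavor, statistics, and the cyclicity of the trace modify the true state count):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def compositions(K, n):
    """Number of ways to write K as an ordered sum of n positive integers,
    i.e. the momentum assignments of an n-parton state at resolution K."""
    if n == 1:
        return 1 if K >= 1 else 0
    return sum(compositions(K - q, n - 1) for q in range(1, K - n + 2))

K = 6
total = sum(compositions(K, n) for n in range(1, K + 1))
print(total)  # 2**(K-1) = 32: the momentum basis is finite at fixed K
```

The count grows like $2^{K-1}$, one reason reaching $K=12$ instead of $K=5$ is a substantial computational step.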
Review of ${\cal N}$=(2,2) SYM theory {#sec:N2SYM} ===================================== ${\cal N}$=(2,2) SYM theory and SDLCQ ------------------------------------- Before giving the numerical results, let us quickly review some analytical work on ${\cal N}$=(2,2) SYM theory for the sake of completeness. For more details see Ref. [@Antonuccio:1998mq]. This theory is obtained by dimensionally reducing ${\cal N}$=1 SYM theory from four dimensions to two dimensions. In light cone gauge, where $A_-=0$, we find for the action $$\begin{aligned} S^{LC}_{1+1}&=&\int dx^+ dx^- \tr \Bigg[ \d_+ X_I \d_-X_I +i\theta^T_R \d^+\theta_R+i\theta^T_L\d^-\theta_L \\ &&\quad +\frac 12 (\d_-A_+)^2+gA_+J^++\sqrt 2 g\theta^T_L\ep_2\beta_I [X_I,\theta_R]+\frac {g^2}4 [X_I,X_J]^2 \Bigg], \nonumber\end{aligned}$$ where $x^{\pm}$ are the light-cone coordinates in two dimensions, the trace is taken over the color indices, $X_I$ with $I=1,2$ are the scalar fields and the remnants of the transverse components of the four-dimensional gauge field $A_{\mu}$, two-component spinor fields $\theta_R$ and $\theta_L$ are remnants of the right-moving and left-moving projections of the four-component spinor in the four-dimensional theory, and $g$ is the coupling constant. We also define the current $J^+=i[X_I,\d_-X_I]+2\theta^T_R\theta_R$, and use the Pauli matrices $\beta_1\equiv\sigma_1$, $\beta_2\equiv\sigma_3$, and $\ep_2\equiv -i\sigma_2$. After eliminating all the non-dynamical fields using the equations of motion, we find for $P^{\a}=\int dx^- T^{+\a}$ $$P^+=\int dx^- \tr\left(\d_-X_I\,\d_-X_I+i\theta^T_R\d_-\theta_R\right),$$ and $$P^-=g^2\int dx^- \tr\left(-\frac 12 J^+\frac{1}{\d^2_-}J^+ -\frac 14[X_I,X_J]^2+\frac i2 (\ep_2\beta_I[X_I,\theta_R])^T\frac{1}{\d_-} \ep_2\beta_J[X_J,\theta_R]\right).$$ The supercharges are found by dimensionally reducing the supercurrent in the four-dimensional theory.
They are $$Q^+_{\a}=2^{5/4}\int dx^- \tr\left(\d_-X_I\,\beta_{I\a\eta}u_{\eta}\right),$$ $$Q^-_{\a}=g\int dx^- \tr\left( -2^{3/4}J^+\frac{1}{\d_-}(\ep_2)_{\a\eta}u_{\eta}+2^{-1/4}i[X_I,X_J](\beta_I\beta_J\ep_2)_{\a\eta}u_{\eta}\right),$$ where $\a,\eta=1,2$ and $u_{\a}$ are the components of $\theta_R$. We expand the dynamical fields $X_I$ and $u_{\a}$ in Fourier modes as $$X_{Ipq}(x^-)=\frac{1}{\sqrt{2\pi}}\int_0^{\infty} \frac{dk^+}{\sqrt{2k^+}}\left[A_{Ipq}(k^+)e^{-ik^+x^-} +A^{\dag}_{Iqp}(k^+)e^{ik^+x^-}\right],$$ $$u_{\a pq}(x^-)=\frac{1}{2\sqrt{\pi}}\int_0^{\infty} dk^+\left[B_{\a pq}(k^+)e^{-ik^+x^-} +B^{\dag}_{\a qp}(k^+)e^{ik^+x^-}\right],$$ where $p,q=1,2,\ldots, N_c$ stand for the color indices, and $A,B$ satisfy the usual commutation relations $$\begin{aligned} [A_{Ipq}(k^+),A^{\dag}_{Jrs}(k^{'+})] =\delta_{IJ}\delta_{pr}\delta_{qs}\delta(k^+-k^{'+}), \\ \{B_{\a pq}(k^+),B^{\dag}_{\beta rs}(k^{'+})\} =\delta_{\a\beta}\delta_{pr}\delta_{qs}\delta(k^+-k^{'+}).\end{aligned}$$ We work in a compactified $x^-$ direction of length $2L$ and ignore zero modes. With periodic boundary conditions we restrict to a discrete set of momenta  [@sakai95] $$k^+ = \frac{\pi}{L} k, \quad k = 1,2,3,\ldots,\quad \int dk^+ \rightarrow \frac{\pi}{L}\sum_{k=1}^{\infty}, \quad \delta(k^+ - k^{\prime +}) \rightarrow\frac{L}{\pi} \delta_{kk'}$$ Relabeling the operator modes $\sqrt{\frac{L}{\pi}}a(k) = A(k^+ = \frac{\pi k}{L})$ and $\sqrt{\frac{L}{\pi}}b(k) = B(k^+ = \frac{\pi k}{L})$, so that $$\begin{aligned} [a_{Ipq}(k),a^{\dag}_{Jrs}(k')] =\delta_{IJ}\delta_{pr}\delta_{qs}\delta_{kk'}, \quad \{b_{\a pq}(k),b^{\dag}_{\beta rs}(k')\} =\delta_{\a\beta}\delta_{pr}\delta_{qs}\delta_{kk'},\end{aligned}$$ the expansion becomes $$X_{Ipq}(x^-)=\frac{1}{\sqrt{4\pi}}\sum_{k=1}^{\infty}\frac{1}{\sqrt{k}} \left[a_{Ipq}(k)e^{-i\pi kx^-/L} +a^{\dag}_{Iqp}(k)e^{i\pi kx^-/L}\right], \label{eq:discretized}$$ $$u_{\a pq}(x^-)=\frac{1}{2\sqrt{L}}\sum_{k=1}^{\infty} \left[b_{\a pq}(k)e^{-i\pi kx^-/L} +b^{\dag}_{\a qp}(k)e^{i\pi kx^-/L}\right]. \label{eq:discretized2}$$
In terms of $a$ and $b$, the supercharges are given by $$Q_{\a}^+=2^{1/4}i\sum_{k=1}^{\infty} \sqrt{k}\,\beta_{I\a\eta}\left[a^{\dag}_{Iij}(k)b_{\eta ij}(k)-b^{\dag}_{\eta ij}(k)a_{Iij}(k)\right],$$ and $$\begin{aligned} Q_{\a}^- &=&\frac{i2^{-1/4}g}{\pi}\sqrt{\frac{L}{\pi}}\sum^{\infty}_{k_1,k_2,k_3 = 1} \delta_{(k_1+k_2),k_3} \Biggl\{ (\epsilon_2)_{\a\eta} \\ &&\times \Biggl[ \frac 1{2\sqrt{k_1k_2}}\left(\frac {k_2-k_1}{k_3}\right) [b^{\dag}_{\eta ij}(k_3)a_{Iim}(k_1)a_{Imj}(k_2)-a^{\dag}_{Iim}(k_1) a^{\dag}_{Imj}(k_2)b_{\eta ij}(k_3)] \nonumber \\ &&+ \frac 1{2\sqrt{k_1k_3}}\left(\frac {k_1+k_3}{k_2}\right) [a^{\dag}_{Iim}(k_1)b^{\dag}_{\eta mj}(k_2)a_{I ij}(k_3)-a^{\dag}_{I ij} (k_3)a_{Iim}(k_1)b_{\eta mj}(k_2)] \nonumber \\ &&+ \frac 1{2\sqrt{k_2k_3}}\left(\frac {k_2+k_3}{k_1}\right) [a^{\dag}_{I ij}(k_3)b_{\eta im}(k_1)a_{Imj}(k_2)-b^{\dag}_{\eta im}(k_1) a^{\dag}_{Imj}(k_2)a_{I ij}(k_3)] \nonumber \\ &&-\frac 1{k_1}[b^{\dag}_{\eta ij}(k_3)b_{\eta im}(k_1)b_{\eta mj}(k_2) +b^{\dag}_{\eta im}(k_1)b^{\dag}_{\eta mj}(k_2)b_{\eta ij}(k_3)] \nonumber \\ &&-\frac 1{k_2}[b^{\dag}_{\eta ij}(k_3)b_{\eta im}(k_1)b_{\eta mj}(k_2) +b^{\dag}_{\eta im}(k_1)b^{\dag}_{\eta mj}(k_2)b_{\eta ij}(k_3)] \nonumber \\ &&+\frac 1{k_3}[b^{\dag}_{\eta ij}(k_3)b_{\eta im}(k_1)b_{\eta mj}(k_2) +b^{\dag}_{\eta im}(k_1)b^{\dag}_{\eta mj}(k_2)b_{\eta ij}(k_3)] \Biggr] \nonumber \\ && + 2 (\epsilon_2)_{IJ} \Biggl( \frac 1{4\sqrt{k_1k_2}} [b^{\dag}_{\alpha ij}(k_3)a_{I im}(k_1)a_{J mj}(k_2) +a^{\dag}_{J im}(k_1)a^{\dag}_{I mj}(k_2)b_{\alpha ij}(k_3)] \nonumber \\ &&+\frac 1{4\sqrt{k_2k_3}} [a^{\dag}_{J ij}(k_3)b_{\alpha im}(k_1)a_{I mj}(k_2) +b^{\dag}_{\alpha im}(k_1)a^{\dag}_{J mj}(k_2)a_{I ij}(k_3)] \nonumber \\ &&+\frac 1{4\sqrt{k_3k_1}} [a^{\dag}_{I ij}(k_3)a_{J im}(k_1)b_{\alpha mj}(k_2) +a^{\dag}_{I im}(k_1)b^{\dag}_{\alpha mj}(k_2)a_{J ij}(k_3)] \Biggr)\Biggr\}.\nonumber\end{aligned}$$ Here we have used the relation $([\beta_I,\beta_J]\epsilon_2)_{\alpha\eta} = \delta_{\alpha\eta} (\epsilon_2)_{IJ}$.
They satisfy the superalgebra conditions for anticommutators involving $Q^+_{\a}$, $$\{Q^{+}_{\a},Q^{+}_{\beta}\}=\delta_{\a\beta}2\sqrt 2 P^{+}, \quad \{Q^+_{\a},Q^-_{\beta}\}=0, \label{superalgebra}$$ but do not satisfy the condition $\{Q^{-}_{\a},Q^{-}_{\beta}\}=\delta_{\a\beta}2\sqrt 2 P^{-}$. Instead, in SDLCQ we find $$\{Q^{-}_{\a},Q^{-}_{\beta}\}\neq 0 \;\;{\rm if}\;\; \a\neq\beta, \qquad (Q^-_1)^2=\sqrt 2\, P^-_1 \neq \sqrt 2\, P^-_2=(Q^-_2)^2.$$ Although we have different $P^-_{\a}$ for different $Q^-_{\a}$, we can define a unitary, self-adjoint transformation $C$ such that $$C a_{1 ij} C = a_{2 ij}, \qquad C b_{1 ij} C = -b_{2 ij},$$ and find that $C P^-_1 C = P^-_2$. Thus the eigenvalues of $P^-_{\a}$ are the same. We may choose either one of the two $Q^-_{\a}$’s, at least for our purposes, and in what follows we will use $Q^-_1$ and will suppress the subscript unless it is needed for clarity. The momentum, $P^+$, is given by $$P^+ = \frac{1}{\sqrt{2}} (Q^+_1)^2 = \frac{\pi}{L} \sum_{k} k \bigl( a_{Iij}^{\dag} a_{Iij} + b_{\nu ij}^{\dag} b_{\nu ij} \bigr)$$ We work with a fixed value of momentum $$\begin{aligned} P^+ = \frac{\pi}{L} K, \quad K = 1,2,\ldots\end{aligned}$$ We call $K$ the resolution because larger values of $K$ allow larger values of $L$ while leaving the momentum $P^+$ fixed. The next thing to note is that there are three $Z_2$ symmetries of $Q_1^-$. The first one is $R_1$-symmetry, which acts as follows: $$a_{1ij} \leftrightarrow a_{2ij}, \qquad b_{\a ij}\to -b_{\a ij}.$$ The second is $S$-symmetry: $$a_{I ij} \to -a_{I ji}, \qquad b_{\a ij}\to -b_{\a ji}.$$ The third is what we call $T$-symmetry: $$a_{Iij}\to -a_{Iij}, \qquad b_{\a ij}\;\;{\rm unchanged}.$$ It is easy to see that under these symmetries $Q^-_1$ is invariant.
Using the relations $$R_1 Q^+_{1} R_1 = -Q_{2}^+, \quad TQ^+_{\a}T = - Q^+_{\a},$$ we find $$\begin{aligned} R_1 (Q^+_{1} \pm Q_{2}^+) R_1 = \mp (Q^+_{1} \pm Q_{2}^+), \quad T (Q^+_{1} \pm Q_{2}^+) T = - (Q^+_{1} \pm Q_{2}^+).\end{aligned}$$ Also note that $$\begin{aligned} \{Q^+_{1} \pm Q_{2}^+,Q^+_{1} \pm Q_{2}^+\} = \{Q^+_{1} ,Q^+_{1} \} + \{ Q_{2}^+, Q_{2}^+\} \pm 2 \{Q^+_{1} , Q_{2}^+\} = 4 \sqrt{2} P^+.\end{aligned}$$ We work in a subspace of definite momentum, so $(Q^+_{1} \pm Q_{2}^+)$ must have nonzero eigenvalues. Since $Q^+_{\a}$ and $Q^-_{\a}$ are fermionic operators, we see that a bosonic energy eigenstate $|\Psi_B\rangle_{++}$, which is even under $R$ and $T$-symmetry, can be transformed into $$|\Psi_B\rangle_{\mp-}=Q^-_1(Q^+_1\pm Q^+_2)|\Psi_B \rangle_{++}, \qquad |\Psi_B\rangle_{-+}=(Q^+_1+Q^+_2)(Q^+_1-Q^+_2)|\Psi_B\rangle_{++},$$ which are all degenerate with $|\Psi_B\rangle_{++}$. One should notice here that we cannot use $Q^-_1$ and $Q_2^-$ at the same time, since they do not commute with each other. Thus, including the supersymmetry, we have an 8-fold degeneracy. Utilizing the remaining $S$-symmetry, which does not give us a mass degeneracy, we can divide the mass spectrum into 16 independent sectors. This significantly reduces the size of the computational problem. It will be convenient to refer to bound states of this theory as having $S$, $T$, or $R$ even or odd parity and to refer to a state as having even or odd resolution if $K$ is an even or odd integer. Mass gap -------- Tables \[mass+\] and \[mass-\] show the first few low-mass states. We find anomalously light states in the sectors with opposite $K$ and $S$ parity for $K$ larger than 4. Furthermore, the number of extremely light states increases by one as we increase $K$ by two. We believe that these anomalously light states should be exactly massless states, but for some reason there is an impediment preventing SDLCQ from achieving this result.
Some of the evidence for this comes from a study of the average number of partons $\langle n \rangle$ in the bound states. For example, in the sector with $S$ and $K$ even, for each even integer $r$ less than $K$, there is exactly one bosonic massless state with $\langle n \rangle=r$. For $K$ odd we do not see massless states of this type, but we do find $\langle n \rangle=r$ for the anomalously light bound states in this sector. This is also the first sign of the distinction between representations of the supersymmetry algebra in different symmetry sectors, namely those with anomalously light states (with opposite $S$ and $K$ parity) and those without anomalously light states (with matching $S$ and $K$ parity).

  $K$=3   4       5        6       7        8        9        10      11       12
  ------- ------- -------- ------- -------- -------- -------- ------- -------- --------
  1.308   4.009   0.0067   2.144   0.0040   1.415    0.0026   1.040   0.0018   0.8188
  12.62   12.24   0.6304   2.514   0.0060   1.5999   0.0038   1.138   0.0026   0.8790
  22.06   15.04   1.0813   2.645   0.4366   1.712    0.0048   1.212   0.0026   0.9312
          15.28   1.1099   2.773   0.6016   1.729    0.3515   1.256   0.0039   0.9397
          22.53   1.5732   2.807   0.6308   1.811    0.4372   1.347   0.3062   1.0072

  : The mass squared $M^2$ of the first few lowest massive states in the $S$-even sector in units of $g^2N_c/\pi$ for a series of resolutions $K$.[]{data-label="mass+"}

  $K$=4    5        6         7        8         9        10        11        12
  -------- -------- --------- -------- --------- -------- --------- --------- ---------
  1.2009   3.1876   0.00674   1.8427   0.00440   1.2687   0.00302   0.95786   0.00217
  1.2009   3.1887   0.6402    1.9305   0.00538   1.3266   0.00317   0.99795   0.00218
  12.296   3.3239   0.6747    2.0413   0.45529   1.4087   0.00431   1.0302    0.00219
  12.296   11.489   0.9900    2.1415   0.48010   1.5107   0.36858   1.1036    0.00356
  19.502   11.492   1.0313    2.3603   0.55873   1.5219   0.38647   1.1345    0.32053

  : Same as Table \[mass+\] but for the $S$-odd sector.[]{data-label="mass-"}

In our discussion of the mass gap we will not include the anomalously light states as part of the massive spectrum, for the reason given above.
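One way to examine whether the tabulated lowest masses extrapolate to zero is a least-squares polynomial in $1/K$ with no constant term, i.e. constrained to pass through the origin. A sketch using the even-$K$ values of the lowest state in the $S$-even table above (the degree-2 choice is ours, for illustration):

```python
import numpy as np

def fit_through_origin(K_vals, M2_vals, degree):
    """Fit M^2 = c1/K + c2/K^2 + ... with no constant term, so the
    extrapolated K -> infinity value is zero by construction."""
    x = 1.0 / np.asarray(K_vals, dtype=float)
    A = np.column_stack([x ** d for d in range(1, degree + 1)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(M2_vals, dtype=float),
                                 rcond=None)
    return coeffs

# Lowest M^2 (units of g^2 N_c/pi) at even K from the S-even table:
K_vals = [4, 6, 8, 10, 12]
M2 = [4.009, 2.144, 1.415, 1.040, 0.8188]
c = fit_through_origin(K_vals, M2, degree=2)
pred_K12 = c[0] / 12 + c[1] / 144
print(c, pred_K12)  # fitted coefficients and the reproduced K=12 value
```

Whether such a constrained fit is justified is of course part of the question; comparing constrained and unconstrained fits quantifies how conclusively the gap closes.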
To study the mass gap we will look at the lowest massive state in each sector as a function of $1/K$, as shown in Fig. \[low\]. (Fig. \[low\]: two panels, (a) and (b).) There we also show polynomial fits in all four sectors separately. The fits are constrained to go through the origin. The quadratic fits look very good in Fig. \[low\](a), but Fig. \[low\](b) required a cubic. The two fits with opposite $S$ and $K$ parity look very similar, as do the two fits with the same $S$ and $K$ parity. In each case we could have fit all the points with one curve if we were to include a small oscillatory function in the fit. We should note here that oscillatory behavior has been observed before in different theories [@Gross:1997mx; @Hiller:2003qe]. The explanation given there is that those states which show the oscillatory behavior comprise non-interacting two-body states. This, however, does not seem applicable in our case since the states in Fig. \[low\] are the lowest energy states; thus there are no lower energy states available to form two-body states. The distinct character of the mass gap serves as another piece of evidence that we have two different classes of representations. The data are consistent with the mass gap closing to zero as $K\to \infty$, especially for the case where $S$ and $K$ have the same parity. The odd and even representations approach each other as $K$ increases and we hypothesize that they become identical in the continuum limit of $K\to \infty$. When we present the correlation function in Sec. \[sec:cor\], we will see further evidence for this claim.

Massless states
---------------

### Pure bosonic massless states

Let us investigate the properties of pure bosonic massless states in full detail in the $N_c \to \infty$ limit. This is done by generalizing the discussion of the bound states in SDLCQ for ${\cal N}$=(1,1) SYM theory, as given in Refs. [@Lunin:1999ib; @Antonuccio:1998kz], to ${\cal N}$=(2,2) SYM theory.
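As an aside on the fitting procedure used for Fig. \[low\] above: a polynomial fit in $1/K$ constrained to pass through the origin is a small linear least-squares problem. The sketch below is pure Python with synthetic data (the numbers are illustrative, not the values from Tables \[mass+\] and \[mass-\]); it recovers the coefficients of a known quadratic $M^2 = a/K + b/K^2$:

```python
# Least-squares fit of M^2(K) = a*(1/K) + b*(1/K)^2 with no constant term,
# i.e. a polynomial in 1/K constrained through the origin, solved via the
# 2x2 normal equations.  Data below are synthetic.
def fit_through_origin(Ks, M2s):
    x = [1.0 / K for K in Ks]
    g11 = sum(xi ** 2 for xi in x)          # Gram matrix for basis {x, x^2}
    g12 = sum(xi ** 3 for xi in x)
    g22 = sum(xi ** 4 for xi in x)
    r1 = sum(xi * y for xi, y in zip(x, M2s))
    r2 = sum(xi ** 2 * y for xi, y in zip(x, M2s))
    det = g11 * g22 - g12 * g12
    a = (g22 * r1 - g12 * r2) / det
    b = (g11 * r2 - g12 * r1) / det
    return a, b

Ks = [4, 6, 8, 10, 12]
a_true, b_true = 3.0, -2.0
M2s = [a_true / K + b_true / K ** 2 for K in Ks]
a_fit, b_fit = fit_through_origin(Ks, M2s)
print(a_fit, b_fit)
```

A cubic fit, as needed for Fig. \[low\](b), extends this to a 3x3 normal system in the same way.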
For simplicity, let us consider the states consisting of a fixed number $n$ of partons only. A pure bosonic massless state is given by $$|\Psi,0\ket=N\sum_{q_1,\ldots,q_n}\sum_A \delta_{(q_1+\ldots+q_n),K} \bar f^{(0)}_{[A_1\ldots A_n]}(q_1\ldots q_n) \tr[a_{A_1}^{\dag}(q_1)\ldots a_{A_n}^{\dag}(q_n)] |0\ket,$$ where $N$ is the normalization factor, $q_i=1,2,\ldots$ is the unit of the light-cone momentum $p_i=q_i\pi/L$ carried by the $i$-th parton, $A_i=1,2$ indicates the flavor index for each parton, the sum $\sum_A$ is the summation over all possible permutations of the flavor indices $A_i$’s, $\bar f$ is the wave function, and the trace is taken over the color indices. Note that the above notation does not include the symmetry factor coming from the cyclic property of the trace; one has to put it in by hand if desired, as we will do for an example later in this subsection. In other words, Fock states with a non-trivial symmetry factor are not normalized. Due to the cyclic property of the trace, we have $$\bar f_{[A_1\ldots A_{n}]}(q_1,\ldots,q_n) =\bar f_{[A_2\ldots A_{n}A_1]}(q_2,\ldots,q_n,q_1)=\ldots = \bar f_{[A_nA_1\ldots A_{n-1}]}(q_n,q_1,\ldots,q_{n-1}).$$ Since $P^-=(Q^-)^2/\sqrt 2$, all the massless states should vanish upon the action of $Q^-$. Thus, we must have $Q^-|\Psi,0\ket=0$. This identity, however, can be simplified somewhat for pure bosonic massless states. That is, the terms to consider in $Q^-$ are those which annihilate one boson and create one boson and one fermion, and those which annihilate two bosons and create one fermion. Both the former and latter class of terms in $Q^-$ separately annihilate $|\Psi,0\ket$.
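The last claim deserves one line of justification (a sketch): the two classes of terms change the parton number differently, so their contributions to $Q^-|\Psi,0\ket$ lie in orthogonal Fock sectors and must vanish separately:

```latex
% Q^- acting on an n-parton purely bosonic state splits by parton number:
Q^-|\Psi,0\ket
  = \underbrace{(\,\cdots\,)_{n+1\ \text{partons}}}_{\text{boson}\,\to\,\text{boson}+\text{fermion}}
  + \underbrace{(\,\cdots\,)_{n-1\ \text{partons}}}_{\text{two bosons}\,\to\,\text{one fermion}}
  = 0
  \;\Longrightarrow\; \text{each piece vanishes separately.}
```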
In the large-$N_c$ limit the former class gives, writing $f(q_1,\ldots,q_n)\equiv \sqrt{q_1\ldots q_n}\bar f(q_1,\ldots,q_n)$, $$\begin{aligned} &0&=(\ep_2)_{\a\beta}\Bigl\{ \frac{2q_{n-1}+t}{(q_{n-1}+t)t}f^{(0)}_{[A_1\ldots A_n]} (q_1,\ldots,q_{n-1}+t,q_n) \nonumber \\ &&\quad -\frac{2q_{n}+t}{(q_{n}+t)t}f^{(0)}_{[A_1\ldots A_n]} (q_1,\ldots,q_{n-1},q_n+t)\Bigr\} \nonumber \\ && \quad +\frac {M_{IA_n}^{\a\beta}}{2(q_n+t)} f^{(0)}_{[A_1\ldots A_{n-1},I]}(q_1,\ldots,q_{n-1},q_n+t) \nonumber \\ && \quad -\frac {M_{IA_{n-1}}^{\a\beta}}{2(q_{n-1}+t)} f^{(0)}_{[A_1\ldots A_{n-2},I,A_n]}(q_1,\ldots,q_{n-1}+t,q_n), \label{eq1}\end{aligned}$$ and the latter yields $$0=\sum_{A_{n-1},A_n}\sum_k \left( (\ep_2)_{\a\beta} \frac{t-2k}{tk(t-k)} \delta_{A_{n-1},A_n}+\frac {M_{A_{n-1}A_n}^{\a\beta}}{k(t-k)}\right) f^{(0)}_{[A_1\ldots A_n]}(q_1,\ldots,q_{n-2},k,t-k), \label{eq2}$$ where $M_{IJ}^{\a\beta}\equiv [(\beta_I\beta_J-\beta_J\beta_I)\epsilon_2]_{\a\beta}$, $t$ is the momentum of the created fermion, and the momentum-conserving Kronecker delta $\delta_{(q_1+\ldots+q_n),K}$ is understood implicitly. These are the necessary and sufficient conditions for a pure bosonic state to be massless. One should notice that the above equations reduce to the corresponding equations found in Refs. [@Lunin:1999ib; @Antonuccio:1998kz] with $(\ep_2)_{\a\beta}=1$, $A_i=1$ for all $i$’s, and $M_{IJ}^{\a\beta}=0$, as expected. In principle, we could find the properties of all kinds of pure bosonic massless states using Eqs. (\[eq1\]) and (\[eq2\]). However, we limit ourselves here to the investigation of only two special types. To simplify the notation, we omit the superscript $(0)$ from the wave function $f$ hereafter. The simplest case is where $n=K$, that is to say, all the partons have one unit of momentum $\pi/L$ and, thus, $f=\bar f$. In this case Eq. (\[eq1\]) is trivially satisfied since we cannot have states with $(K+1)$ partons. From Eq. (\[eq2\])
we get $$0=f_{[A_1\ldots A_{n-2},1,2]}-f_{[A_1\ldots A_{n-2},2,1]} ,\label{1eq1}$$ where we have omitted $(q_1,\ldots,q_n)=(1,\ldots,1)$. Eq. (\[1eq1\]) means, with the help of the cyclic property of $f$, that the wave function is unchanged after moving [*any*]{} flavor index to [*any*]{} location in the list of indices. For instance, we find, writing $f_{[A_1\ldots A_{n}]} \equiv [A_1\ldots A_n]$, $$[1212]=[1221]=[2121]=[2211]=[2112]=[1122] .$$ It is clear that the state with the above six wave functions being the same and all others zero satisfies Eq. (\[1eq1\]), or equivalently Eqs. (\[eq1\]) and (\[eq2\]), the necessary and sufficient conditions to be massless. Therefore, writing $\tr[a^{\dag}_{A_1}(1)\ldots a^{\dag}_{A_n}(1)]|0\ket \equiv A_1\ldots A_n$, we find the state $$N[1212](1212+1221+2121+2211+2112+1122)=N[1212](2(1212)+4(1122))$$ is massless, where we used the cyclic property of $f$. In terms of the normalized Fock states $\tr[a^{\dag}_{A_1}(1)\ldots a^{\dag}_{A_n}(1)] |0\ket/(\sqrt s N_c^{n/2}) \equiv \underline{A_1\ldots A_n}=A_1\ldots A_n/(\sqrt s N_c^{n/2})$, where $s$ is the symmetry factor, we find, after normalizing properly, that $$\frac 1{\sqrt 3}(\underline{1212})+\sqrt {\frac 23}(\underline{1122})$$ is massless, since $s$ for 1212 and 1122 equals two and one, respectively. Indeed we have found the very same massless state in our numerical results. As we have seen above, there is a one-to-one correspondence between a massless state and a given set of flavor indices, which has a [*fixed*]{} number of 1’s and 2’s. This means that every time we change the number of 1’s (or 2’s) in the flavor indices, we find a new massless state. Since we can have $K+1$ such different sets of flavor indices, we have $K+1$ massless states of this kind. As verification of our argument, we enumerated all the massless states for $K$ up to six and found all of them with the correct coefficients in our numerical results. The next case to consider is where $n=K-1$.
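Before turning to $n=K-1$: the coefficients $1/\sqrt 3$ and $\sqrt{2/3}$ just found can be checked mechanically. The sketch below (pure Python) groups the six flavor strings into cyclic classes, weights each class representative by its multiplicity times $\sqrt s$ (since the normalized Fock states carry a factor $1/\sqrt s$), and normalizes:

```python
from math import sqrt

def cyclic_orbit(word):
    """All distinct cyclic rotations of a flavor string."""
    return {word[i:] + word[:i] for i in range(len(word))}

def symmetry_factor(word):
    """Number of cyclic shifts that leave the string invariant."""
    return sum(1 for i in range(len(word)) if word[i:] + word[:i] == word)

# The massless n=K=4 state with two 1's: equal wave function on all
# permutations of (1,1,2,2); cyclic rotations label the same trace state.
perms = {"1212", "1221", "2121", "2211", "2112", "1122"}
classes = {}
while perms:
    orbit = cyclic_orbit(next(iter(perms))) & perms
    classes[min(orbit)] = len(orbit)     # representative -> multiplicity
    perms -= orbit

# Coefficient of each *normalized* Fock state: multiplicity * sqrt(s).
coeff = {w: m * sqrt(symmetry_factor(w)) for w, m in classes.items()}
norm = sqrt(sum(c * c for c in coeff.values()))
coeff = {w: c / norm for w, c in coeff.items()}
print(coeff)
```

The same routine extends directly to the enumeration for $K$ up to six mentioned above.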
In this case only one of the partons has two units of momentum, so that $f=\sqrt 2 \bar f$. However, since all the $f$’s have the same factor of $\sqrt 2$, we can absorb $\sqrt 2$ into the normalization factor $N$ and practically can set $f\equiv \bar f$. We have $t=1$ and $q_i=1$ with $i=1,\ldots,n$ in Eq. (\[eq1\]) and find, writing $(q_1,\ldots,q_n)=(1,\ldots,1,1,2)\equiv (1,2)$ and so on, $$0=[A_1\ldots A_{n}](2,1)-[A_1\ldots A_{n}](1,2) ,\label{2eq1}$$ $$0=[A_1\ldots A_{n-2},A_{n-1},A_n](1,2)-[A_1\ldots A_{n-2},A_n,A_{n-1}](2,1), \label{2eq2}$$ $$0=[A_1\ldots A_{n-2},A_{n-1},A_{n-1}](1,2)+[A_1\ldots A_{n-2},A_n,A_n](2,1), \label{2eq3}$$ where $A_{n-1}\ne A_n$ in Eqs. (\[2eq2\]) and (\[2eq3\]). For Eq. (\[eq2\]) we have $t=2$, $k=1,2$ and $q_i=1$ with $i=1,\ldots,n-2$, and we get $$0=[A_1\ldots A_{n-2},A,A](1,2)-[A_1\ldots A_{n-2},A,A](2,1) ,\label{2eq4}$$ $$\begin{aligned} 0&=&[A_1\ldots A_{n-2},1,2](1,2)+[A_1\ldots A_{n-2},1,2](2,1) \nonumber \\ &&\quad -[A_1\ldots A_{n-2},2,1](1,2)-[A_1\ldots A_{n-2},2,1](2,1). \label{2eq5}\end{aligned}$$ At first sight, we have five equations for the massless states to satisfy, but it is easy to see that Eq. (\[2eq4\]) is incorporated into Eq. (\[2eq1\]) and that if Eqs. (\[2eq1\]) and (\[2eq2\]) are true, so is Eq. (\[2eq5\]) automatically. Hence, the three equations Eqs. (\[2eq1\]), (\[2eq2\]), and (\[2eq3\]) are in fact the equations for massless states to satisfy for $n=K-1$. In order to see what the three equations allow us to do, let us first write $$[A_1,\ldots,A_n](1,2)\equiv [A_1,\ldots,A_n'] .$$ That is, let us put a prime on top of an index whose corresponding parton has two units of momentum. Then, Eq. (\[2eq1\]) allows us to move the “prime" to any index. Eq. (\[2eq2\]), along with this fact, then also allows us to move the index with a prime to any location in the index list. For example, we have $$[112']=[11'2]=[1'12]=[12'1]=[1'21]=[121']=[2'11]=[21'1]=[211'].$$ Furthermore, Eq. (\[2eq3\]) allows us to replace $11'$ by $2'2$ (or $22'$ using Eq. (\[2eq1\])) as long as a minus sign is inserted.
Thus, for the above example we get $$[112']=[11'2]=[1'12]=-[22'2],$$ where we have omitted the wave functions related by cyclic permutations. This means that the state $$(112'+11'2+1'12-22'2)/2$$ is massless. Note that the symmetry factor in this case is equal to one for all the Fock states above. Since Eqs. (\[2eq1\]), (\[2eq2\]), and (\[2eq3\]) relate all the sets of flavor indices with an even/odd number of 1’s to one another, we have only [*two*]{} independent sets of flavor indices: one with an even number of 1’s and the other with an odd number. This means that there are [*two*]{} massless states of this type. Again we have confirmed this statement numerically for $K$ up to six. To summarize, we have found in the large-$N_c$ limit the necessary and sufficient conditions, Eqs. (\[eq1\]) and (\[eq2\]), that pure bosonic massless states are to satisfy. As an application we considered two special cases and found that there are $K+1$ massless states of the type $\tr[a^{\dag}_{A_1}(1)\ldots a^{\dag}_{A_K}(1)]$ and two of the type $\tr[a^{\dag}_{A_1}(1)\ldots a^{\dag}_{A_{K-2}}(1)a^{\dag}_{A_{K-1}}(2)]$. Also, we gave a way to enumerate all such massless states for a given $K$.

### Count of massless states

It is possible to predict a minimum number of massless states by comparing the number of states in the different symmetry sectors. Since $(Q^-)^2$ takes a state from one symmetry sector to another and then back, it must have zero eigenvalues if the dimensionality of the intermediate sector is less than that of the original sector. It is possible to create a simple recursive formula for the number of states in each sector [@ToAppear]. For the case when $K$ is prime and odd, the formula is particularly simple. We present the results here but refer to the other publication for justification.
We define $A_{bes^+}(K,n)$ as the number of states in the bosonic sector with an even number of partons and even $S$ symmetry, where $n$ indicates how many types of particles we have in a SYM theory, i.e. $n=4$ for ${\cal N}=(2,2)$ SYM. Then $$\begin{aligned} &&A_{bes^+}(K,n) = A_{fes^+}(K,n) = \frac{A_f(K,n) + A_f(K,-n) + W}{2} \\ \nonumber && A_{bes^-}(K,n) = A_{fes^-}(K,n) = \frac{A_f(K,n) + A_f(K,-n) - W}{2} \\ \nonumber && A_{bos^+}(K,n) = A_{fos^+}(K,n) = A_{bos^-}(K,n) = A_{fos^-}(K,n) = \frac{A_f(K,n) - A_f(K,-n)}{2} \end{aligned}$$ where $$\begin{aligned} && A_f(K,n)_{\text{prime}} = \frac{1}{2K} ((1+n)^K - (1+n)) \\ && W = \left(\frac{n}{2}\right)^2(K-1)\end{aligned}$$ $Q^-$ maps bosonic states to fermionic ones and states with an even number of partons to states with an odd number. $$\begin{aligned} A_{fos^+}(K,n)-A_{bes^+}(K,n) = A_{bos^+}(K,n)-A_{fes^+}(K,n) = -A_f(K,-n) - \frac{W}{2} \\ A_{fos^-}(K,n)-A_{bes^-}(K,n) = A_{bos^-}(K,n)-A_{fes^-}(K,n) = -A_f(K,-n) + \frac{W}{2}\end{aligned}$$ The minimum total number of massless states must therefore be $$\begin{aligned} -4 A_f(K,-n) = -\frac{2}{K} ((1-n)^K - (1-n)) =\frac{2}{K} (3^K - 3)\end{aligned}$$ For $K=5$, this comes to $96$ states, which is far more than the $8$ purely bosonic states with $4$ or $5$ partons that we have found in this section.

Correlation functions {#sec:cor}
=====================

One of the physical quantities we can calculate nonperturbatively is the two-point function of the stress-energy tensor. Previous calculations of this correlator in this and other theories can be found in [@Antonuccio:1999iz; @Hiller:2000nf; @Hiller:2001qb]. Ref. [@Antonuccio:1999iz] gives results for the theory considered here but only for resolutions $K$ up to 6. We can now reach $K=12$. We will show that there is a distinct behavior for even and odd $K$ in the correlation function, just as in the energy spectrum.
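Before turning to the correlator, note that the counting formulas above are easy to check for the quoted case $K=5$, $n=4$; a minimal sketch (pure Python, valid for prime odd $K$):

```python
# Check of the massless-state count for prime odd K, following the
# formulas above: A_f(K,n) = ((1+n)^K - (1+n))/(2K), W = (n/2)^2 (K-1),
# and minimum number of massless states = -4 A_f(K,-n).
def A_f_prime(K, n):
    return ((1 + n) ** K - (1 + n)) / (2 * K)

def min_massless(K, n):
    return -4 * A_f_prime(K, -n)

K, n = 5, 4                      # N=(2,2) SYM: n = 4 particle types
W = (n / 2) ** 2 * (K - 1)
print(A_f_prime(K, n), W, min_massless(K, n))
```

For $K=5$, $n=4$ this gives $A_f=312$, $W=16$, and a minimum of $96$ massless states, in agreement with the text.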
Then we will argue, by taking a closer look at the data, that we have two different classes of representations at finite $K$, which become identical as $K \to \infty$.

Correlation functions in supergravity
-------------------------------------

Let us first recall that there is a duality that relates the results for the two-point function in ${\cal N}$=(8,8) SYM theory to the results in string theory [@Hiller:2000nf]. The correlation function on the string-theory side, which can be calculated with use of the supergravity approximation, was presented in [@Antonuccio:1999iz], and we will only quote the result here. The computation is essentially a generalization of that given in [@Gubser:1998bc; @Witten:1998qj]. The main conclusion on the supergravity side was reported in [@Hashimoto:1999xu]. Up to a numerical coefficient of order one, which we have suppressed, it was found that $$\bra {\cal O}(x){\cal O}(0)\ket=\frac {N_c^{\frac 32}}{g_{YM}x^5}. \label{two}$$ This result passes the following important consistency test. The SYM theory in two dimensions with 16 supercharges has conformal fixed points in both the UV and the IR regions, with central charges of order $N_c^2$ and $N_c$, respectively. Therefore, we expect the two-point function of the stress-energy tensor to scale like $N_c^2/x^4$ and $N_c/x^4$ in the deep UV and IR regions, respectively. According to the analysis of [@Itzhaki:1998dd], we expect to deviate from these conformal behaviors and cross over to a regime where the supergravity calculation can be trusted. The crossover occurs at $x=1/(g_{YM}\sqrt{N_c})$ and $x=\sqrt{N_c}/g_{YM}$. At these points, the $N_c$ scaling of Eq. (\[two\]) and the conformal results match in the sense of the correspondence principle [@Horowitz:1996nw]. We should note here that this property of the correlation functions is expected [*only*]{} for ${\cal N}$=(8,8) SYM theory, not for the theory under consideration in this paper.
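The correspondence-principle matching invoked above is pure power counting in $g_{YM}$ and $N_c$; a sketch in exact rational arithmetic (pure Python) confirms both crossovers:

```python
from fractions import Fraction as F

# A monomial g^a * Nc^b is stored as the exponent pair (a, b).
def evaluate(coeff, x, power):
    """Exponents of coeff / x^power when x = g^p * Nc^q."""
    (a, b), (p, q) = coeff, x
    return (a - p * power, b - q * power)

UV    = (F(0), F(2))       # UV CFT:       Nc^2 / x^4
IR    = (F(0), F(1))       # IR CFT:       Nc   / x^4
SUGRA = (F(-1), F(3, 2))   # supergravity: Nc^{3/2} / (g x^5)

x_uv = (F(-1), F(-1, 2))   # crossover x = 1/(g sqrt(Nc))
x_ir = (F(-1), F(1, 2))    # crossover x = sqrt(Nc)/g

print(evaluate(UV, x_uv, 4), evaluate(SUGRA, x_uv, 5))   # both g^4 Nc^4
print(evaluate(IR, x_ir, 4), evaluate(SUGRA, x_ir, 5))   # both g^4 / Nc
```

Both sides agree at each crossover point, exactly as the correspondence principle requires.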
However, it would be natural to expect some similarity between ${\cal N}$=(8,8) and ${\cal N}$=(2,2) theories. Indeed, we will find numerically that Eq. (\[two\]) is [*almost*]{} true in ${\cal N}$=(2,2) SYM theory.

Correlation functions in SUSY with 4 supercharges
-------------------------------------------------

We wish to compute a general expression of the form $F(x^-,x^+)=\bra {\cal O}(x^-,x^+){\cal O}(0,0)\ket$ where ${\cal O}$ is $T^{++}$. In DLCQ, where we fix the total momentum in the $x^-$ direction, it is more natural to compute the Fourier transform and express it in a spectrally decomposed form [@Antonuccio:1999iz; @Hiller:2000nf] $$\begin{aligned} \nonumber \tilde F(P_-,x^+)=\frac 1{2L}\bra{T^{++}}(P_-,x^+){T^{++}}(-P_-,0)\ket \\ =\sum_i \frac 1{2L}\bra 0|{ T^{++}}(P_-,0)|i\ket\e^{-iP^i_+x^+} \bra i|{ T^{++}}(-P_-,0)|0\ket.\end{aligned}$$ The position-space form of the correlation function is recovered by Fourier transforming with respect to $P_-=P^+ =K\pi/L$. We can continue to Euclidean space by taking $r=\sqrt{2x^+x^-}$ to be real. The result for the correlator of the stress-energy tensor was presented in [@Antonuccio:1999iz], and we only quote the result here: $$F(x^-,x^+)\equiv\bra T^{++}({\bf x})T^{++}(0)\ket = \sum_i \Big|\frac L{\pi}\bra 0|T^{++}(K)|i\ket\Big|^2\left( \frac {x^+}{x^-}\right)^2 \frac{M_i^4}{8\pi^2K^3}K_4(M_i \sqrt{2x^+x^-}), \label{cor}$$ where ${\bf x}$ has light cone coordinates $x^-,x^+$, $M_i$ is a mass eigenvalue and $K_4(x)$ is the modified Bessel function of order 4. In [@Antonuccio:1998mq] we found that the momentum operator $T^{++}({\bf x})$ is given by $$T^{++}({\bf x}) =\tr\Big[(\partial_-X^I)^2+\frac{i}{2}\big(u_\a\partial_-u_\a-(\partial_-u_\a)u_\a\big)\Big], \qquad I,\a=1,2,$$ where $X$ and $u$ are the physical adjoint scalars and fermions, respectively, following the notation of [@Antonuccio:1998mq]. When written in terms of the discretized operators, $a$ and $b$, (Eqs.
(\[eq:discretized\],\[eq:discretized2\])), we find $$\begin{aligned} &&T^{++}(K)|0\ket =\frac {\pi}{2L}\sum_{k=1}^{K-1} \nonumber \\ && \quad \left[-\sqrt{k(K-k)} a^{\dag}_{Iij}(K-k)a^{\dag }_{Iji}(k)+\left(\frac K2-k\right) b^{\dag }_{\a ij} (K-k)b^{\dag }_{\a ji}(k)\right]|0\ket.\end{aligned}$$ The matrix element $(L/\pi)\bra 0|T^{++}(K)|i\ket$ is independent of $L$ and can be substituted directly to give an explicit expression for the two-point function. We see immediately that the correlator behaves like $1/r^4$ at small $r$, for in that limit it asymptotes to $$\left(\frac {x^-}{x^+}\right)^2F(x^-,x^+)=\frac{N_c^2(2n_b+n_f)}{4\pi^2 r^4}\left(1-\frac 1K\right), \label{eq:smallr}$$ where $n_b=n_f=2$ count the physical adjoint scalars and fermions. On the other hand, the contribution to the correlator from strictly massless states is given by $$\left(\frac {x^-}{x^+}\right)^2F(x^-,x^+)=\sum_i\Big|\frac L{\pi} \bra 0|T^{++}(K)|i\ket \Big|^2_{M_i=0}\frac 6{K^3\pi^2r^4}. \label{larger}$$ That is to say, we would expect the correlator to behave like $1/r^4$ at both small and large $r$, assuming massless states have non-zero matrix elements.

Numerical results
-----------------

To compute the correlator using Eq. (\[cor\]), we approximate the sum over eigenstates by a Lanczos [@Lanczos] iteration technique, as described in [@Hiller:2000nf; @Hiller:2001qb]. Only states with positive $R_{\a}$, $T$ and $S$ parity contribute to the correlator. The results are shown in Fig. \[cor\_Dcor\], which includes a log-log plot of the scaled correlation function $$f\equiv\bra T^{++}({\bf x})T^{++}(0)\ket \left(\frac {x^-}{x^+}\right)^2 \frac{4\pi^2 r^4}{N_c^2(2n_b+n_f)}$$ and a plot of $d\log_{10}(f)/d\log_{10}(r)$ versus $\log_{10}(r)$, with $r$ measured in units of $\sqrt{\pi/g^2N_c}$. Let us discuss the behavior of the correlator at small, large, and intermediate $r$, separately in the following. First, at small $r$, the plotted values of $\log_{10}f$ for different $K$ approach $0$ as $K$ increases. This follows from Eq. (\[eq:smallr\]), which gives the small-$r$ plateau $\log_{10}f = \log_{10}(1-\frac{1}{K})$. Second, at large $r$, obviously, the behavior is different for odd $K$, in Fig. \[cor\_Dcor\](c) and (d), and even $K$, in (e) and (f).
However, the difference gets smaller as $K$ gets bigger, as seen in Fig. \[cor\_Dcor\](a). The reason for this is as follows. Looking at the detailed information of the computation of the correlator, we found that for even $K$ there is exactly one massless state that contributes to the correlator, while there is no massless state nor even an anomalously light state that makes any contribution for odd $K$. Instead, it is the lowest massive state that contributes the most for odd $K$. This observation serves as another piece of evidence for the claim that we have two distinct classes of representations for odd and even $K$. In the intermediate-$r$ region, for the ${\cal N}$=(8,8) theory we expected from Eq. (\[two\]) that the behavior is $1/r^5$, and in [@Hiller:2000nf] we found that the correlator may be approaching this behavior. We indicated in [@Hiller:2000nf] that conclusive evidence would be a flat region in the derivative of the scaled correlator at a value of $-1$. Our resolution was not high enough to see this in the ${\cal N}$=(8,8) case. Here we find such a flat region, indicating that the correlator in fact behaves like $1/r^{4.75}$ for ${\cal N}$=(2,2) SYM theory. Also, note that the region of flattening around $-0.75$ extends farther out as $K$ gets bigger, for both odd and even $K$, implying again that the representations appear to agree as $K$ goes to infinity. For any fixed value of $r$ the correlators for odd and even $K$ approach each other as $K$ increases and the flat region extends further. This indicates that it is only in the region of $r$ where the correlators for even and odd $K$ agree that we have sufficient convergence for the results to be meaningful.
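A quick consistency check between Eqs. (\[cor\]) and (\[larger\]): the massless contribution follows because $M^4K_4(Mr)\to 48/r^4$ as $M\to 0$ (and $48/(8\pi^2K^3)=6/(\pi^2K^3)$). The sketch below evaluates $K_4$ in pure Python through the integral representation $K_\nu(z)=\int_0^\infty e^{-z\cosh t}\cosh(\nu t)\,dt$ and confirms the limit numerically:

```python
from math import cosh, exp

def K4(z, t_max=14.0, steps=20000):
    """Modified Bessel function K_4(z) via the integral representation
    K_nu(z) = int_0^inf exp(-z cosh t) cosh(nu t) dt  (trapezoid rule)."""
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * exp(-z * cosh(t)) * cosh(4 * t)
    return h * total

# Massless limit entering Eq. (larger): M^4 K_4(M r) -> 48 / r^4.
r = 1.0
vals = [M ** 4 * K4(M * r) for M in (0.1, 0.05, 0.02)]
print(vals)
```

The computed values approach $48$ from below as $M$ decreases, matching the small-argument expansion $M^4K_4(Mr)\approx 48/r^4 - 4M^2/r^2$.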
(Fig. \[cor\_Dcor\]: six panels, (a)-(f).)

Discussion {#sec:discussion}
==========

To respond to the increasing interest in calculating supersymmetric theories on a lattice [@Cohen:2003xe; @Cohen:2003qw; @Sugino:2003yb; @Sugino:2004qd], we have presented detailed numerical results for the low-energy spectrum and the two-point correlation function of the stress-energy tensor, using SDLCQ for ${\cal N}$=(2,2) SYM theory in $1+1$ dimensions in the large-$N_c$ approximation. Our hope is that these results will serve as benchmarks for others to compare and check their results. In addition, we found an important new aspect of the SDLCQ approximation in this calculation. There seem to be two distinct classes of representations for ${\cal N}$=(2,2) SYM theory, one where $S$ and $K$ have the same parity and one where $S$ and $K$ have opposite parity; these representations become identical as $K\to \infty$. We found evidence for this feature of ${\cal N}$=(2,2) SYM theory in both the mass spectrum and the correlator. We also found that there are some anomalously light states that appear only in the sectors where $S$ and $K$ have opposite parity. We argued that the anomalously light states should be exactly massless, but have acquired a tiny mass because of some impediment to having them exactly massless in the SDLCQ approximation. In the calculation of the correlator, where only states with positive $S$ parity contribute, we found that there is exactly one massless state that contributes to the correlator when $K$ is even and that no massless state or anomalously light state contributes when $K$ is odd. The lightest massive state in the sector with odd $K$ does contribute to the correlator, but because the mass gap appears to close at infinite resolution this state appears to become massless, as expected [@Witten:1995im].
The two-point correlator of the stress-energy tensor was found to show $1/r^4$-behavior in the UV (small $r$) and IR (large $r$, $K$ even) regions as expected. The large-$r$ behavior for $K$ odd, on the other hand, shows an exponential decay. Surprisingly, the correlator behaves like $1/r^{4.75}$ at intermediate values of $r$. In ${\cal N}$=(8,8) SYM theory in $1+1$ dimensions, the correlator is expected to behave like $1/r^5$ in the intermediate region, and it is interesting that ${\cal N}$=(2,2) behaves similarly but with a different exponent. We were able to confirm this power-law behavior with a flat region in the derivative of the scaled correlator. Previously, in our calculation of the ${\cal N}$=(8,8) correlator at lower resolutions, we were not able to find this flat region. We are hopeful that in the near future we may be able to conclusively confirm the $1/r^5$ behavior in the ${\cal N}$=(8,8) theory. Interestingly, we also note that earlier results seem to indicate the same type of odd/even behavior for the ${\cal N}$=(8,8) theory. Analytically, we investigated the properties of pure bosonic massless states and found the necessary and sufficient conditions to determine their wave function. Then we explored some special cases to find that there are $K+1$ massless states of type $$\tr[a_{A_1}^{\dag}(1)a_{A_2}^{\dag}(1)\ldots a_{A_K}^{\dag}(1)]|0\rangle,$$ where $A_i$ is a flavor index and the number in the parentheses tells how many units of momentum each parton carries, and that there are two massless states of the type $$\tr[a_{A_1}^{\dag}(1)a_{A_2}^{\dag}(1)\ldots a_{A_{K-1}}^{\dag}(2)]|0\rangle.$$ We also gave the formulae that count the minimum total number of massless states for a SYM theory which is dimensionally reduced to one space and one time dimension.
What prevents us from reaching even higher $K$ is obviously the fact that, as one can show [@ToAppear], the total number of basis states grows like $\sim (1+n)^K$, where $n$ is the total number of particle types and $n=4$ for ${\cal N}$=(2,2) SYM theory. Our numerical results were obtained using a single PC with 4 GB of memory. The problem that we now face is that we do not have enough memory to store all the states on one PC. However, by making use of a cluster of PCs and finding ways to split and share the information among them, we should be able to reach even higher $K$. This is the direction of our future work, with the ultimate goal being to achieve sufficient numerical precision to detect the correspondence between ${\cal N}$=(8,8) SYM theory and supergravity conjectured by Maldacena [@Maldacena:1997re].

Acknowledgments {#acknowledgments .unnumbered}
===============

This work was supported in part by the U.S. Department of Energy and the Minnesota Supercomputing Institute.

[99]{}

H. B. Nielsen and M. Ninomiya, Nucl. Phys. B [**185**]{}, 20 (1981) \[Erratum-ibid. B [**195**]{}, 541 (1982)\].
K. Fujikawa, Nucl. Phys. B [**636**]{}, 80 (2002) \[arXiv:hep-th/0205095\].
A. G. Cohen, D. B. Kaplan, E. Katz, and M. Unsal, JHEP [**0308**]{}, 024 (2003) \[arXiv:hep-lat/0302017\].
A. G. Cohen, D. B. Kaplan, E. Katz, and M. Unsal, JHEP [**0312**]{}, 031 (2003) \[arXiv:hep-lat/0307012\].
F. Sugino, JHEP [**0401**]{}, 015 (2004) \[arXiv:hep-lat/0311021\].
F. Sugino, arXiv:hep-lat/0401017.
D. B. Kaplan, Phys. Lett. B [**288**]{}, 342 (1992) \[arXiv:hep-lat/9206013\].
R. Narayanan and H. Neuberger, Nucl. Phys. B [**443**]{}, 305 (1995) \[arXiv:hep-th/9411108\].
H. Neuberger, Phys. Lett. B [**417**]{}, 141 (1998) \[arXiv:hep-lat/9707022\].
Y. Matsumura, N. Sakai, and T. Sakai, Phys. Rev. D [**52**]{}, 2446 (1995).
O. Lunin and S. Pinsky, AIP Conf. Proc.  [**494**]{}, 140 (1999) \[arXiv:hep-th/9910222\].
F. Antonuccio, H. C. Pauli, S. Pinsky, and S. Tsujimaru, Phys. Rev.
D [**58**]{}, 125006 (1998) \[arXiv:hep-th/9808120\].
H.-C. Pauli and S.J. Brodsky, Phys. Rev. D [**32**]{} (1985), 1993; [**32**]{} (1985), 2001.
S.J. Brodsky, H.-C. Pauli, and S.S. Pinsky, Phys. Rep. [**301**]{}, 299 (1998) \[arXiv:hep-ph/9705477\].
E. Witten, Nucl. Phys. B [**460**]{}, 335 (1996) \[arXiv:hep-th/9510135\].
D. J. Gross, A. Hashimoto, and I. R. Klebanov, Phys. Rev. D [**57**]{}, 6420 (1998) \[arXiv:hep-th/9710240\].
J. R. Hiller, S. S. Pinsky, and U. Trittmann, Nucl. Phys. B [**661**]{}, 99 (2003) \[arXiv:hep-ph/0302119\].
F. Antonuccio, O. Lunin, and S. S. Pinsky, Phys. Lett. B [**429**]{}, 327 (1998) \[arXiv:hep-th/9803027\].
S. Pinsky and N. Salwen, in preparation.
F. Antonuccio, A. Hashimoto, O. Lunin, and S. Pinsky, JHEP [**9907**]{}, 029 (1999) \[arXiv:hep-th/9906087\].
J. R. Hiller, O. Lunin, S. Pinsky, and U. Trittmann, Phys. Lett. B [**482**]{}, 409 (2000) \[arXiv:hep-th/0003249\].
J. R. Hiller, S. Pinsky, and U. Trittmann, Phys. Rev. D [**63**]{}, 105017 (2001) \[arXiv:hep-th/0101120\].
S. S. Gubser, I. R. Klebanov, and A. M. Polyakov, Phys. Lett. B [**428**]{}, 105 (1998) \[arXiv:hep-th/9802109\].
E. Witten, Adv. Theor. Math. Phys.  [**2**]{}, 253 (1998) \[arXiv:hep-th/9802150\].
A. Hashimoto and N. Itzhaki, Phys. Lett. B [**454**]{}, 235 (1999) \[arXiv:hep-th/9903067\].
N. Itzhaki, J. M. Maldacena, J. Sonnenschein, and S. Yankielowicz, Phys. Rev. D [**58**]{}, 046004 (1998) \[arXiv:hep-th/9802042\].
G. T. Horowitz and J. Polchinski, Phys. Rev. D [**55**]{}, 6189 (1997) \[arXiv:hep-th/9612146\].
C. Lanczos, J. Res. Nat. Bur. Stand. [**45**]{}, 255 (1950); J. Cullum and R. A. Willoughby, [*Lanczos Algorithms for Large Symmetric Eigenvalue Computations*]{}, Vol. I and II, (Birkhauser, Boston, 1985).
J. M. Maldacena, Adv. Theor. Math. Phys.  [**2**]{}, 231 (1998) \[Int. J. Theor. Phys.  [**38**]{}, 1113 (1999)\] \[arXiv:hep-th/9711200\].
--- abstract: 'This paper investigates the linear-quadratic-Gaussian (LQG) mean-field game (MFG) for a class of stochastic delay systems. We consider a large population system in which the dynamics of each player satisfies a forward stochastic differential delay equation (SDDE). The consistency condition or Nash certainty equivalence (NCE) principle is established through an auxiliary mean-field system of anticipated forward-backward stochastic differential equations with delay (AFBSDDE). The well-posedness of this consistency condition system can be established by a continuation method instead of the classical fixed-point analysis. Thus, the consistency condition may be given on an arbitrary time horizon. The decentralized strategies are derived and shown to satisfy the $\epsilon$-Nash equilibrium property. Two special cases of our MFG for delayed systems are further investigated.' author: - 'Na Li [^1] Shujun Wang [^2]' title: 'A Class of Linear-Quadratic-Gaussian (LQG) Mean-Field Game (MFG) of Stochastic Delay Systems' --- Anticipated forward-backward stochastic differential equation with delay (AFBSDDE), Continuation method, $\epsilon$-Nash equilibrium, Mean-field game, Stochastic differential equation with delay (SDDE).

Introduction
============

Recently, within the context of noncooperative game theory, the dynamic optimization of stochastic large-population systems has attracted consistent and intense research attention across a variety of fields including management science, engineering, mathematical finance and economics, social science, etc. The most distinctive feature of a controlled large-population system lies in the existence of a considerable number of individually insignificant agents whose dynamics and (or) cost functionals are coupled via the state-average across the whole population.
To design low-complexity strategies, one efficient methodology is the mean-field game (MFG) theory, which enables us to obtain decentralized controls based on the individual's own state together with some off-line quantity. The interested readers may refer to [@GLL10; @LL07] for the motivation and methodology, and [@AD; @BCQ; @BDL; @CD] for recent progress in mean-field game theory. Besides, some other recent literature includes [@B12; @B11; @hcm07; @HCM12; @hmc06; @LZ08] for linear-quadratic-Gaussian (LQG) mean-field games of large-population systems. It is remarkable that all agents in the above literature are comparably negligible in that they are not able to affect the whole population in a separable manner. By contrast, their impacts are imposed in a unified manner through the population state-average. In this sense, all agents can be viewed as negligible peers, but they can generate some mass effects via a “unified manner" such as the control (input)-average or state (output)-average. These averages represent some type of impact imposed on the other peers. We point out that in the above works, all agents’ states are formulated by (forward) stochastic differential equations (SDEs) with the initial conditions given a priori. As a consequence, the agents’ objectives are minimizations of cost functionals involving their terminal states. In some realistic situations, there exist phenomena in which the state behavior depends not only on the situation at time $t$, but also on a finite lagged state at $t-\theta.$ Moreover, better results can be obtained by using currently available information to anticipate the future evolution. As a novelty, this paper considers the delay framework in which the agents’ dynamics are characterized by (forward) stochastic differential equations with delay (SDDEs). This means that impacts are rarely imposed on each agent immediately.
A new type of BSDEs, called anticipated BSDEs (ABSDEs), was introduced in [@Peng-Yang]; this type of BSDE can be applied to many fields such as optimal control and finance. Based on it, problems which depend not only on the present but also on the history were solved in [@Chen-Wu]. In subsequent works, FBSDEs with delay and related LQ problems were studied in [@Chen-Wu2] and [@Chen-Wu-Yu]. A kind of stochastic maximum principle for optimal control problems of delay systems involving continuous and impulse controls was considered in [@Yu]. The forward-backward linear-quadratic stochastic optimal control problem with delay was investigated in [@Huang-Li-Shi], and the maximum principle for optimal control of fully coupled forward stochastic differential delayed equations was derived in [@Huang-Shi]. Moreover, some other important phenomena with delay were considered in [@Zhang1; @Zhang2]. To formulate the above problem mathematically, an SDDE should be introduced to characterize the dynamics of the agents. It is remarkable that there exists a rich literature concerning the theory and applications of SDDEs. In this paper, the large-population problem with delay is considered. We discuss the related mean-field LQG games and derive the decentralized strategies. A stochastic process which relates to the delay term of the control is introduced here as an approximation of the control-average process. An auxiliary mean-field SDDE and an AFBSDDE system are considered and analyzed. Here, the AFBSDDE is composed of an SDDE and an ABSDE. Further, the AFBSDDE can be divided into two simpler AFBSDDEs. In addition, the limit process is related to the well-posedness of an anticipated forward-backward ordinary differential equation with delay (AFBODDE) and an AFBSDDE. We also derive the $\epsilon$-Nash equilibrium property of the decentralized control strategies with $\epsilon=O(1/\sqrt N)$. The rest of this paper is organized as follows.
Section 2 formulates the large population LQG games of forward systems with delay. In Section 3, we derive the limiting optimal controls of the track systems and the consistency conditions. Section 4 is devoted to the related $\epsilon$-Nash equilibrium property. Section 5 studies two special cases. Problem formulation =================== Let $(\Omega, \mathcal F, P)$ be a complete probability space on which a standard $(d+m\times N)$-dimensional Brownian motion $\{W^0_t,W^i_t,\ 1\le i\leq N\}_{0 \leq t \leq T}$ is defined, where a finite time horizon $[0,T]$ is considered for fixed $T>0$. Set $\mathcal F^{W^0}_t:=\sigma\{W^0_s, 0\leq s\leq t\}$, $\mathcal F^{W^i}_t:=\sigma\{W^i_s, 0\leq s\leq t\}$, $\mathcal F^{i}_t:=\sigma\{W^0_s,W^i_s;0\leq s\leq t\}$. Here, $\{\mathcal F^{W^0}_t\}_{0\leq t\leq T}$ stands for the common information of all players, while $\{\mathcal F^{i}_t\}_{0\leq t\leq T}$ is the individual information of the $i^{th}$ player. Throughout this paper, $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space with its usual norm $|\cdot|$ and inner product $\langle\cdot, \ \cdot\rangle$. For a given vector or matrix $M$, $M^\top$ stands for its transpose. Moreover, we denote the spaces of matrices as follows. - $S^d$ : the space of all $d\times d$ symmetric matrices. - $S^d_+$ : the subspace of all positive semi-definite matrices of $S^d$. - $\hat{S}^d_+$ : the subspace of all positive definite matrices of $S^d$. For any Euclidean space $\mathbb R^n$, we introduce the following notations: - $L^2_{\mathcal F}(0,T;\mathbb R^n) = \{ g:[0,T]\times\Omega \rightarrow \mathbb R^n\ |\ g(\cdot)$ is an $\mathbb R^n$-valued $\mathcal{F}_t$-progressively measurable process such that $\|g\|^2_{L^2_{\mathcal F}}=\mathbb E\int_0^T |g(t)|^2 dt <\infty\}$. - $L^2(0,T;\mathbb R^n) = \{ g:[0,T] \rightarrow \mathbb R^n\ |\ g(\cdot)$ is an $\mathbb R^n$-valued deterministic function such that $\|g\|^2_{L^2}=\int_0^{T}|g(t)|^{2}dt<\infty\}$.
- $L^\infty(0,T;\mathbb R^n) = \{ g:[0,T]\rightarrow \mathbb R^n\ |\ g(\cdot)$ is an $\mathbb R^n$-valued uniformly bounded function}. - $C(0,T;\mathbb R^n) = \{g:[0,T]\rightarrow \mathbb R^n\ |\ g(\cdot)$ is an $\mathbb R^n$-valued continuous function}. In this paper, we consider a large population system with $N$ individual agents, denoted by $\{\mathcal{A}_{i}\}_{1 \leq i \leq N}$. The dynamics of $\mathcal{A}_{i}$ satisfies the following controlled stochastic differential equation with delay (SDDE): $$\label{e1} \left\{ \begin{aligned} dx_t^i&=\Big[A_tx^i_t+\widetilde{A}_tx^i_{t-\delta}+B_tu^i_t+\widetilde{B}_tu^i_{t-\theta}+\frac{1}{N-1} \sum_{j=1,j\neq i}^N\widehat B_tu^j_{t-\theta}\Big]dt+\sigma_tdW^i_t+\sigma^0_tdW^0_t,~t\in[0,T],\\ x^i_0&=a,~~~ x^i_t=\xi^i_t,~~~t\in[-\delta,0),~~~u^i_t=\eta^i_t,~~~t\in[-\theta,0), \end{aligned} \right.$$ where $a$ is the initial state of $\mathcal{A}_i$, $x^i_{t-\delta}$ denotes the individual state delay, and $u^i_{t-\theta}$ denotes the individual input or control delay. In addition, $\frac{1}{N-1} \sum_{j=1,j\neq i}^N\widehat B_tu^j_{t-\theta}$ is introduced to denote the delayed inputs of all other agents, imposed on a given agent $\mathcal{A}_i$. A similar state delay can be found in [@Sung]. Here, for simplicity, we assume all agents are statistically identical (homogeneous) in that they share the same coefficients $(A, \widetilde A, B, \widetilde{B},\widehat B, \sigma, \sigma^0)$ and deterministic initial state $a$. The admissible control strategies satisfy $u^i\in \mathcal{U}_i$, where$$\mathcal{U}_i:=\Big\{u^i\big|u^i_t\in L^{2}_{\mathcal{F}^i_t}(0, T; \mathbb{R}^k)\Big\},\ 1\leq i \leq N.$$ Let $u=(u^1, \cdots, u^{N})$ denote the set of strategies of all $N$ agents, and $u^{-i}=(u^1, \cdots, u^{i-1}$, $u^{i+1}, \cdots, u^{N})$ the set of strategies excluding that of $\mathcal{A}_i,1\leq i\leq N$.
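As a numerical illustration of the delayed dynamics above, the following sketch simulates the $N$-agent system by an Euler–Maruyama scheme in the scalar case. All coefficient values, the pre-histories $\xi^i\equiv a$ and $\eta^i\equiv 0$, and the open-loop controls are illustrative assumptions, not the optimal strategies derived below.

```python
import numpy as np

# Euler-Maruyama sketch of the N-agent delayed dynamics (scalar case).
# A, At (A tilde), B, Bt (B tilde), Bh (B hat), sig, sig0, a and the
# open-loop controls u are illustrative assumptions.
rng = np.random.default_rng(0)
N, T, delta, theta, h = 20, 1.0, 0.1, 0.1, 1e-3
A, At, B, Bt, Bh = -1.0, 0.5, 1.0, 0.3, 0.2
sig, sig0, a = 0.2, 0.1, 1.0
n = int(round(T / h))
d_lag, t_lag = int(round(delta / h)), int(round(theta / h))

def u(i, k):
    """Open-loop control of agent i at time step k; eta^i = 0 on [-theta, 0)."""
    return 0.0 if k < 0 else np.sin(0.1 * i + k * h)

x = np.full((N, n + 1), a, dtype=float)       # xi^i = a on [-delta, 0) as well
W0 = rng.standard_normal(n) * np.sqrt(h)      # common-noise increments dW^0
for k in range(n):
    x_del = x[:, k - d_lag] if k >= d_lag else np.full(N, a)
    uk = np.array([u(i, k) for i in range(N)])
    u_del = np.array([u(i, k - t_lag) for i in range(N)])
    avg = (u_del.sum() - u_del) / (N - 1)     # (1/(N-1)) sum_{j != i} u^j_{t-theta}
    drift = A * x[:, k] + At * x_del + B * uk + Bt * u_del + Bh * avg
    dW = rng.standard_normal(N) * np.sqrt(h)  # idiosyncratic increments dW^i
    x[:, k + 1] = x[:, k] + drift * h + sig * dW + sig0 * W0[k]
```

With a stable drift ($A<0$) the paths stay bounded; the quantity `avg` is exactly the weakly-coupling control average that the mean-field analysis below replaces by a limit process.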
Considering the state and control delay, the cost functional for $\mathcal{A}_i,1\leq i\leq N$ is given by $$\label{e2} \begin{aligned} \mathcal{J}^i(u^i_t, u^{-i}_t)&=\frac{1}{2}\mathbb{E}\int_0^T\big[\langle R_tx^i_t,x^i_t\rangle+\langle \widetilde{R}_tx^i_{t-\delta},x^i_{t-\delta}\rangle+\langle N_tu^i_t,u^i_t\rangle+\langle \widetilde{N}_tu^i_{t-\theta},u^i_{t-\theta}\rangle\big]dt\\ &~~~+\frac{1}{2}\mathbb{E}\langle M x^i_T,x^i_T\rangle, \end{aligned}$$ where $\widetilde R_t=0,~t\in[T,T+\delta],~\widetilde N_t=0,\ t\in[T,T+\theta]$. For the coefficients of and , we set the following assumptions: (H1) : $A_t,\widetilde{A}_t\in L^\infty(0,T;\mathbb{R}^{n\times n}),B_t,\widetilde{B}_t,\widehat{B}_t\in L^\infty(0,T;\mathbb{R}^{n\times k}),\sigma_t\in L^2(0,T;\mathbb{R}^{n\times m}), \sigma^0_t\in L^2(0,T; \\\mathbb{R}^{n\times d}),a\in \mathbb {R}^n$; (H2) : $R_t, \widetilde{R}_{t}\in L^\infty(0,T;S^n)$, $N_t, \widetilde{N}_{t}\in L^\infty(0,T; S^k)$, and $R(\cdot)+\widetilde{R}(\cdot+\delta)\in S_+^n$, for some $\delta>0$; $N(\cdot)+\widetilde{N}(\cdot+\theta)\in \hat S^k_+$ and the inverse $(N(\cdot)+\widetilde{N}(\cdot+\theta))^{-1}$ is also bounded for some $\theta>0$; $M\in S_+^n$. Now, we formulate the large population dynamic optimization problem with delay.\ **Problem (LD).** Find a set of control strategies $\bar{u}=(\bar{u}^1,\cdots,\bar{u}^N)$ which satisfies $$\mathcal{J}^i(\bar{u}^i_t,\bar{u}^{-i}_t)=\inf_{u^i\in \mathcal{U}_i}\mathcal{J}^i(u^i_t,\bar{u}^{-i}_t),\ 1\leq i\leq N,$$where $\bar{u}^{-i}$ represents $(\bar{u}^1,\cdots,\bar{u}^{i-1},\bar{u}^{i+1},\cdots, \bar{u}^N)$, for $1\leq i\leq N$. The limiting optimal control and Nash certainty equivalence (NCE) equation system ================================================================================= To study Problem **(LD)**, an efficient approach is to discuss the associated mean-field games by analyzing the asymptotic behavior when the agent number $N$ tends to infinity.
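As a side illustration of the cost functional above, $\mathbb E\int_0^T$ can be approximated along simulated paths by a sample mean combined with a trapezoidal rule. The sketch below does this in the scalar case; the stand-in state and control paths, the zero pre-histories, and all weight values are assumptions made purely for the illustration.

```python
import numpy as np

# Monte-Carlo / trapezoidal evaluation of the quadratic delayed cost
# J = (1/2) E int_0^T [R x^2 + Rt x_del^2 + Nc u^2 + Nt u_del^2] dt
#     + (1/2) E [M x_T^2]   (scalar case, illustrative weights).
rng = np.random.default_rng(0)
paths, T, h, delta, theta = 500, 1.0, 1e-2, 0.1, 0.1
R, Rt, Nc, Nt, M = 1.0, 0.5, 1.0, 0.5, 2.0
n = int(round(T / h))
t = np.linspace(0.0, T, n + 1)

x = rng.standard_normal((paths, n + 1))        # stand-in state paths
u = np.tile(np.sin(t), (paths, 1))             # stand-in control paths
d, th = int(round(delta / h)), int(round(theta / h))
# delayed paths, with zero pre-history on [-delta, 0) and [-theta, 0)
x_del = np.concatenate([np.zeros((paths, d)), x[:, :-d]], axis=1)
u_del = np.concatenate([np.zeros((paths, th)), u[:, :-th]], axis=1)

integrand = R * x**2 + Rt * x_del**2 + Nc * u**2 + Nt * u_del**2
time_int = 0.5 * h * (integrand[:, :-1] + integrand[:, 1:]).sum(axis=1)
J = 0.5 * time_int.mean() + 0.5 * M * (x[:, -1] ** 2).mean()
```

Note how the delayed terms simply re-index the same paths; this is also how the delayed quantities enter the discretized estimates in Section 4.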
The key ingredient of this approach is to specify a suitable representation of the state-average limit. With the help of such a limit representation, we can formulate an auxiliary or tracking problem parameterized by the state-average limit. Based on it, the decentralized strategies of individual agents can be derived, and we can also determine the state-average limit via a consistency condition. Moreover, the approximate Nash equilibrium property can be verified. Since the agents are homogeneous, the optimal controls of $\mathcal{A}_i,1\leq i\leq N$ are conditionally independent with identical distributions. Suppose $\frac{1}{N-1}\sum_{j=1, j\neq i}^N\widehat B_tu^j_{t-\theta}$ is approximated by $m_0^{\theta}(t)\in\mathcal F_{t-\theta}^{W^0}$ as $N \rightarrow +\infty.$ We introduce the following auxiliary dynamics of the players, $$\label{e3} \left\{ \begin{aligned} d {x}_t^i&=\Big[A_t {x}^i_t+\widetilde{A}_t {x}^i_{t-\delta} +B_tu^i_t+\widetilde{B}_tu^i_{t-\theta}+m_0^{\theta}(t)\Big]dt+\sigma_tdW^i_t+\sigma^0_tdW^0_t,~t\in[0,T],\\ {x}^i_0&=a,~~~ {x}^i_t=\xi^i_t,~~~t\in[-\delta,0),~~~u^i_t=\eta^i_t,~~~t\in[-\theta,0). \end{aligned} \right.$$ The associated limiting cost functional becomes $$\label{e4} \begin{aligned} J^i(u^i_t)&=\frac{1}{2}\mathbb{E}\int_0^T\big[\langle R_t {x}^i_t, {x}^i_t\rangle+\langle \widetilde{R}_t {x}^i_{t-\delta}, {x}^i_{t-\delta}\rangle+\langle N_tu^i_t,u^i_t\rangle+\langle \widetilde{N}_tu^i_{t-\theta},u^i_{t-\theta}\rangle\big]dt\\ &~~~+\frac{1}{2}\mathbb{E}\langle M {x}^i_T, {x}^i_T\rangle.
\end{aligned}$$ Thus, we formulate the limiting LQG game with delay **(LLD)** as follows.\ **Problem (LLD).** Find an admissible control $\bar{u}^i\in \mathcal{U}_i$ for the $i^{th}$ agent $\mathcal{A}_i$ satisfying $$\label{e5} J^i(\bar{u}^i_t)=\inf_{u^i\in \mathcal{U}_i}J^i(u^i_t).$$ Such an admissible control $\bar{u}^i$ is called an optimal control, and $\bar x^i(\cdot)=x^i_{\bar{u}}(\cdot)$ is called the corresponding optimal trajectory. We link Problem (LLD) to the following stochastic Hamiltonian system, which is an anticipated stochastic algebraic differential equation system with delay, $$\label{H sys} \left\{ \begin{aligned} 0&=(N_t+\widetilde N_{t+\theta})\bar{u}^i_t+B_t^\top \bar{y}^i_t+\widetilde B^\top_{t+\theta}\mathbb E^{\mathcal F^i_t}[\bar{y}^i_{t+\theta}],\\ d\bar{x}_t^i&=\Big[A_t\bar{x}^i_t+\widetilde{A}_t\bar{x}^i_{t-\delta}+B_t\bar u^i_t+\widetilde{B}_t\bar u^i_{t-\theta}+m_0^{\theta}(t)\Big]dt+\sigma_tdW^i_t+\sigma^0_tdW^0_t,\\ d\bar{y}_t^i&=-\Big[A^\top_t\bar{y}^i_t+\widetilde{A}^\top_{t+\delta}\mathbb E^{\mathcal F^i_t}[\bar{y}^i_{t+\delta}]+(R_t+\widetilde{R}_{t+\delta})\bar{x}^i_t\Big]dt +\bar{z}^i_tdW^i_t+\bar{z}^0_tdW^0_t,~t\in[0,T],\\ \bar{x}^i_0&=a,~~~ \bar{x}^i_t=\xi^i_t,~~~t\in[-\delta,0),~~~u^i_t=\eta^i_t,~~~t\in[-\theta,0),\\ \bar{y}^i_T&=M\bar{x}^i_T,~~~ \bar{y}^i_t=0,~~~t\in(T, T+(\delta\vee\theta)]. \end{aligned} \right.$$ To obtain the optimal control of Problem **(LLD)**, we have the following theorem. \[l1\] Let ***(H1)***-***(H2)*** hold.
A sufficient and necessary condition for $\bar u^i_t$ to be the optimal control of $\mathcal{A}_i$ for ***(LLD)*** is that $\bar u^i_t$ takes the following form $$\label{e6} \bar{u}^i_t=-(N_t+\widetilde N_{t+\theta})^{-1}\big(B_t^\top \bar{y}^i_t+\widetilde B^\top_{t+\theta}\mathbb E^{\mathcal F^i_t}[\bar{y}^i_{t+\theta}]\big).$$ Moreover, for any given $m_0^{\theta}(t)\in L^2_{\mathcal F_{t-\theta}^{W^0}}(-\theta,T+(\delta\vee\theta);\mathbb R^n)$, the stochastic Hamiltonian system admits a unique solution $(\bar{x}_t^i, \bar u_t^i, \bar{y}_t^i,\bar{z}^i_t,\bar{z}^0_t)\in L^2_{\mathcal F^{i}_t}(-\delta,T;\mathbb R^n)\times \mathcal U_i\times L^2_{\mathcal F^{i}_t}(-\theta,T+(\delta\vee\theta);\mathbb R^n)\times L^2_{\mathcal F^{i}_t}(0, T;\mathbb R^{n\times m})\times L^2_{\mathcal F^{i}_t}(0,T;\mathbb R^{n\times d})$. The sufficient and necessary condition part follows from standard variational calculus and a dual representation, and is a straightforward consequence of the stochastic maximum principle in Yu [@Yu]; we omit the proof. Moreover, under assumption **(H2)**, by the form of , our problem reduces to solving the following fully-coupled AFBSDDE, $$\label{e7} \left\{ \begin{aligned} d\bar{x}_t^i&=\Big[A_t\bar{x}^i_t+\widetilde{A}_t\bar{x}^i_{t-\delta}-B_t(N_t+\widetilde N_{t+\theta})^{-1}\big(B_t^\top \bar{y}^i_t+\widetilde B^\top_{t+\theta}\mathbb E^{\mathcal F^i_t}[\bar{y}^i_{t+\theta}]\big)\\ &~~~-\widetilde{B}_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big(B_{t-\theta}^\top \bar{y}^i_{t-\theta}+\widetilde B_{t}^\top\mathbb E^{\mathcal F^i_{t-\theta}}[\bar{y}^i_{t}]\big)+m_0^{\theta}(t)\Big]dt+\sigma_tdW^i_t+\sigma^0_tdW^0_t,\\ d\bar{y}_t^i&=-\Big[A^\top_t\bar{y}^i_t+\widetilde{A}^\top_{t+\delta}\mathbb E^{\mathcal F^i_t}[\bar{y}^i_{t+\delta}]+(R_t+\widetilde{R}_{t+\delta})\bar{x}^i_t\Big]dt +\bar{z}^i_tdW^i_t+\bar{z}^0_tdW^0_t,~t\in[0,T],\\ \bar{x}^i_0&=a,~~~ \bar{x}^i_t=\xi^i_t,~~~t\in[-\delta,0),\\\bar{y}^i_T&=M\bar{x}^i_T,~~~ \bar{y}^i_t=0,~~~t\in(T, T+(\delta\vee\theta)].
\end{aligned} \right.$$ Applying the classical “continuation method" proposed in [@HP], [@PengWu99], with a proof similar to that in the Appendix of [@Chen-Wu-Yu], the above linear AFBSDDE has a unique solution. So the Hamiltonian system (\[H sys\]) admits a unique solution. For further analysis, we consider the following two AFBSDDEs, which are fully coupled in the states, $$\label{e8} \left\{ \begin{aligned} dx_t^{i,1}&=\Big[A_tx^{i,1}_t+\widetilde{A}_tx^{i,1}_{t-\delta}-B_t(N_t+\widetilde N_{t+\theta})^{-1}\big(B_t^\top y^{i,1}_t+\widetilde B^\top_{t+\theta}\mathbb E^{\mathcal F^{W^i}_t}[y^{i,1}_{t+\theta}]\big)\\ &~~~-\widetilde{B}_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big(B_{t-\theta}^\top y^{i,1}_{t-\theta}+\widetilde B_{t}^\top\mathbb E^{\mathcal F^{W^i}_{t-\theta}}[y^{i,1}_{t}]\big)\Big]dt+\sigma_tdW^i_t,\\ dy_t^{i,1}&=-\Big[A^\top_ty^{i,1}_t+\widetilde{A}^\top_{t+\delta}\mathbb E^{\mathcal F^{W^i}_t}[y^{i,1}_{t+\delta}]+(R_t+\widetilde{R}_{t+\delta})x^{i,1}_t\Big]dt+z^i_tdW^i_t,~t\in[0,T],\\ x^{i,1}_0&=a^{i,1},~~~x^{i,1}_t=\xi^{i,1}_t,~~~t\in[-\delta,0),\\ y^{i,1}_T&=Mx^{i,1}_T,~~~y^{i,1}_t=0,~~~t\in(T, T+(\delta\vee\theta)], \end{aligned} \right.$$ and $$\label{e9} \left\{ \begin{aligned} dx_t^2&=\Big[A_tx^2_t+\widetilde{A}_tx^2_{t-\delta}-B_t(N_t+\widetilde N_{t+\theta})^{-1}\big(B_t^\top y^2_t+\widetilde B^\top_{t+\theta}\mathbb E^{\mathcal F^{W^0}_t}[y^2_{t+\theta}]\big)\\ &~~~-\widetilde{B}_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big(B_{t-\theta}^\top y^2_{t-\theta}+\widetilde B^\top_{t}\mathbb E^{\mathcal F^{W^0}_{t-\theta}}[y^2_{t}]\big)+m_0^{\theta}(t)\Big]dt+\sigma^0_tdW^0_t,\\ dy_t^2&=-\Big[A^\top_ty^2_t+\widetilde{A}^\top_{t+\delta}\mathbb E^{\mathcal F^{W^0}_t}[y^2_{t+\delta}]+(R_t+\widetilde{R}_{t+\delta})x^2_t\Big]dt+z^0_tdW^0_t,~t\in[0,T],\\ x^2_0&=a^2,~~~x^2_t=\xi^2_t,~~~t\in[-\delta,0),\\ y^2_T&=Mx^2_T,~~~y^2_t=0,~~~t\in(T, T+(\delta\vee\theta)], \end{aligned} \right.$$ where $a^i=a^{i,1}+a^2$, $\xi_t^i=\xi_t^{i,1}+\xi_t^2$.
It follows from the Appendix in [@Chen-Wu-Yu] that and admit unique solutions $(x_t^{i,1},y_t^{i,1},z^i_t)\in L^2_{\mathcal F^{W^i}_t}(-\delta,T;\mathbb R^n)\times L^2_{\mathcal F^{W^i}_t}(-\theta,T+(\delta\vee\theta);\mathbb R^n)\times L^2_{\mathcal F^{W^i}_t}(0,T;\mathbb R^{n\times m})$ and $(x^2_t,y^2_t,z^0_t)\in L^2_{\mathcal F^{W^0}_t}(-\delta,T;\mathbb R^n)\times L^2_{\mathcal F^{W^0}_t}(-\theta,T+(\delta\vee\theta);\mathbb R^n)\times L^2_{\mathcal F^{W^0}_t}(0,T;\mathbb R^{n\times d})$. Then we have the following lemma. \[l2\] Let ***(H1)-(H2)*** hold. If $(x_t^{i,1}, y_t^{i,1}, z^i_t)$ is the solution of and $(x_t^2, y_t^2, z_t^0)$ is the solution of , then $(x_t^{i,1}+x_t^2, y_t^{i,1}+y_t^2, z^i_t, z_t^0)$ is the solution of . It is easy to check that $\bar{x}_t^i=x_t^{i,1}+x_t^2$, $\bar{y}_t^i=y_t^{i,1}+y_t^2$, $\bar z_t^i=z_t^i$ and $\bar z^0_t=z^0_t$ solve AFBSDDE , which gives the conclusion. In the following part, we point out the essence of the limiting stochastic process $m_0^{\theta}(t)$.
Firstly, we introduce the following AFBODDE and AFBSDDE, $$\label{e12} \left\{ \begin{aligned} d[\mathbb E x_t^{1}]&=\Big[A_t[\mathbb Ex_t^{1}]+\widetilde{A}_t[\mathbb E x_{t-\delta}^{1}]-B_t(N_t+\widetilde N_{t+\theta})^{-1}\big(B_t^\top [\mathbb E y_t^{1}]+\widetilde B_{t+\theta}^\top[\mathbb E y_{t+\theta}^{1}]\big)\\ &~~~-\widetilde{B}_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big(B_{t-\theta}^\top [\mathbb E y_{t-\theta}^{1}]+\widetilde B_{t}^\top[\mathbb E y^{1}_{t}]\big)\Big]dt,\\ d[\mathbb Ey_t^{1}]&=-\Big[A^\top_t[\mathbb Ey_t^{1}]+\widetilde{A}^\top_{t+\delta}[\mathbb Ey_{t+\delta}^{1}]+(R_t+\widetilde{R}_{t+\delta})[\mathbb Ex_t^{1}]\Big]dt,~t\in[0,T],\\ \mathbb E x^{1}_0&=a^{1},~~~~\mathbb Ex^{1}_t=\mathbb E\xi^{1}_t,~~~t\in[-\delta,0),\\ \mathbb Ey^{1}_T&=M[\mathbb Ex^{1}_T],~~~~\mathbb Ey^{1}_t=0,~~~t\in(T, T+(\delta\vee\theta)], \end{aligned} \right.$$ and $$\label{e13} \left\{ \begin{aligned} dx_t^2&=\Big[A_tx^2_t+\widetilde{A}_tx^2_{t-\delta}-B_t(N_t+\widetilde N_{t+\theta})^{-1}\big(B_t^\top y^2_t+\widetilde B^\top_{t+\theta}\mathbb E^{\mathcal F^{W^0}_t}[y^2_{t+\theta}]\big)\\ &~~~-(\widetilde{B}_t+\widehat B_t)(N_{t-\theta}+\widetilde N_{t})^{-1}\big(B_{t-\theta}^\top y^2_{t-\theta}+\widetilde B_{t}^\top\mathbb E^{\mathcal F^{W^0}_{t-\theta}}[y^2_{t}]\big)\\ &~~~-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big(B_{t-\theta}^\top[\mathbb E y^{1}_{t-\theta}]+\widetilde B_t^\top [\mathbb E y_t^{1}]\big)\Big]dt+\sigma^0_tdW^0_t,\\ dy_t^2&=-\Big[A^\top_ty^2_t+\widetilde{A}^\top_{t+\delta}\mathbb E^{\mathcal F^{W^0}_t}[y^2_{t+\delta}]+(R_t+\widetilde{R}_{t+\delta})x^2_t\Big]dt+z^0_tdW^0_t,~t\in[0,T],\\ x^2_0&=a^2,~~~x^2_t=\xi^2_t,~~~t\in[-\delta,0),\\ y^2_T&=Mx^2_T,~~~ y^2_t=0,~~~t\in(T, T+(\delta\vee\theta)]. 
\end{aligned} \right.$$ $m_0^{\theta}(t)$ belongs to $L^2_{\mathcal F_t^{W^0}}(-\theta, T+(\delta\vee\theta); \mathbb R^n)$ and is of the following form, $$\begin{aligned} m_0^{\theta}(t) &=-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big[B_{t-\theta}^\top[\mathbb E y^{1}_{t-\theta}]+\widetilde B_t^\top[\mathbb E y_t^{1}]\big]\\ &\qquad-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big[B_{t-\theta}^\top y^2_{t-\theta}+\widetilde B_t^\top\mathbb E^{\mathcal F_{t-\theta}^{W^0}}[y_t^2]\big],\\ \end{aligned}$$ where $y_t^{1}$ is the solution of and $y_t^2$ is the solution of . It follows from and that $y^{j,1}_{t}$ is independent of $W^0_t$ and $y^{2}_{t}$ is independent of $W^j_t$, for $1\leq j\leq N$, respectively. Thus, we have $$\label{e10} \mathbb E^{\mathcal F^j_{t-\theta}}[y^{j,1}_{t}]=\mathbb E^{\mathcal F^{W^j}_{t-\theta}}[y^{j,1}_{t}],\quad \mathbb E^{\mathcal F^j_{t-\theta}}[y^{2}_{t}]=\mathbb E^{\mathcal F^{W^0}_{t-\theta}}[y^{2}_{t}].$$ By virtue of Lemma \[l2\], we obtain $$\label{e11} \begin{aligned} m_0^{\theta}(t)&=\lim_{N\rightarrow\infty}\widehat B_t\frac{1}{N-1}\sum_{j=1,j\neq i}^{N}\bar{u}^j_{t-\theta}\\ &=-\widehat B_t\lim_{N\rightarrow\infty}\frac{1}{N-1}\sum_{j=1,j\neq i}^{N}(N_{t-\theta}+\widetilde N_{t})^{-1}(B_{t-\theta}^\top \bar {y}^j_{t-\theta}+\widetilde B_{t}^\top\mathbb E^{\mathcal F^j_{t-\theta}}[\bar{y}^j_{t}])\\ &=-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\lim_{N\rightarrow\infty}\frac{1}{N-1}\sum_{j=1,j\neq i}^{N}(B_{t-\theta}^\top (y^{j,1}_{t-\theta}+y^2_{t-\theta})+\widetilde B_{t}^\top\mathbb E^{\mathcal F^j_{t-\theta}}[(y^{j,1}_{t}+y^2_{t})])\\ &=-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\Big[\lim_{N\rightarrow\infty}\frac{1}{N-1}\sum_{j=1,j\neq i}^{N}(B_{t-\theta}^\top y^{j,1}_{t-\theta}+\widetilde B_{t}^\top\mathbb E^{\mathcal F^{W^j}_{t-\theta}}[y^{j,1}_{t}])\\ &\qquad+\lim_{N\rightarrow\infty}\frac{1}{N-1}\sum_{j=1,j\neq i}^{N}(B_{t-\theta}^\top y_{t-\theta}^2+\widetilde B_{t}^\top\mathbb E^{\mathcal
F^{W^0}_{t-\theta}}[y^2_{t}])\Big]\\ &=-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big[B_{t-\theta}^\top[\mathbb E y^{j,1}_{t-\theta}]+\widetilde B_t^\top[\mathbb E y_t^{j,1}]\big]\\ &\qquad-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big[B_{t-\theta}^\top y^2_{t-\theta}+\widetilde B_t^\top\mathbb E^{\mathcal F_{t-\theta}^{W^0}}[y_t^2]\big]\\ &:=\Sigma_1^\theta(t)+\Sigma_2^\theta(t). \end{aligned}$$ Here, $\Sigma_1^\theta(t)=-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big[B_{t-\theta}^\top[\mathbb E y^{j,1}_{t-\theta}]+\widetilde B_t^\top[\mathbb E y_t^{j,1}]\big]$, which is a deterministic function, and $\Sigma_2^\theta(t)=-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big[B_{t-\theta}^\top y^2_{t-\theta}+\widetilde B_t^\top\mathbb E^{\mathcal F_{t-\theta}^{W^0}}[y_t^2]\big]$ belongs to $L^2_{\mathcal F_t^{W^0}}(-\theta, T+(\delta\vee\theta); \mathbb R^n)$. Since $\mathbb E y^{i,1}_{t}=\mathbb E y^{j,1}_{t}$ for $i\neq j,\ i,j=1,2,\cdots,N$, the quantity $\mathbb E y^{i,1}_{t}$ does not depend on $i$, and we denote $\mathbb E y^{i,1}_{t}=\mathbb E y^{1}_{t}$, where $y^{1}_{t}$ is the solution of . Thus $\Sigma_1^\theta(t)$ can be rewritten as $$\Sigma_1^\theta(t)=-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\big[B_{t-\theta}^\top[\mathbb E y^{1}_{t-\theta}]+\widetilde B_t^\top[\mathbb E y_t^{1}]\big].$$ Hence the result. In what follows, - are called the Nash certainty equivalence (NCE) equation system, which can be used to determine the control-average limit $m_0^{\theta}(t)$. Note that $m_0^{\theta}(t)$ plays an important role due to the dependence of the decentralized strategy $\bar{u}^i_t$ on it. We can see that $\bar u_t^i$ in depends on the solutions $\bar y_t^i$ and $\bar y_{t+\theta}^i$ of , and that $\bar y_t^i$, $\bar y_{t+\theta}^i$ in turn depend on $m_0^{\theta}(t)$.
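To make this consistency structure concrete, the following toy sketch runs a Picard (fixed-point) iteration on a scalar, deterministic, delay-free analogue of the NCE system: given a guess of the average-impact term $m$, solve the forward state equation and the backward adjoint equation, then update $m$ from the resulting control average. The delay-free simplification, the coefficient values, and the explicit Euler discretization are all assumptions made for illustration only.

```python
import numpy as np

# Toy Picard iteration for a consistency condition in a scalar,
# deterministic, delay-free analogue (theta = delta = 0, no noise):
#   x' = A x + B u + m,   y' = -(A y + R x),   y(T) = M x(T),
#   u  = -(B / Nc) y,     consistency: m(t) = -(Bh * B / Nc) y(t).
# All coefficient values are illustrative assumptions.
A, B, Bh, R, Nc, M, T, a = -1.0, 1.0, 0.3, 0.5, 1.0, 0.5, 0.5, 1.0
h = 1e-3
n = int(round(T / h))
m = np.zeros(n + 1)
y = np.zeros(n + 1)
gap = np.inf
for it in range(60):
    # forward sweep for x using the current adjoint y and average term m
    x = np.empty(n + 1)
    x[0] = a
    for k in range(n):
        x[k + 1] = x[k] + h * (A * x[k] - (B * B / Nc) * y[k] + m[k])
    # backward sweep for the adjoint y
    y_new = np.empty(n + 1)
    y_new[n] = M * x[n]
    for k in range(n - 1, -1, -1):
        y_new[k] = y_new[k + 1] + h * (A * y_new[k + 1] + R * x[k + 1])
    m_new = -(Bh * B / Nc) * y_new          # updated consistency term
    gap = np.max(np.abs(m_new - m))
    m, y = m_new, y_new
    if gap < 1e-12:
        break
```

For a short horizon and stable drift the map is a contraction and the iterates converge rapidly, mirroring how the NCE system pins down the limit process in the delayed setting.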
$\epsilon$-Nash equilibrium analysis ==================================== In the above sections, we obtained the optimal controls $\bar{u}^i_t, 1\le i\le N$ of Problem (**LLD**) through the consistency condition system. Now, we turn to verify the $\epsilon$-Nash equilibrium property for Problem (**LD**). To start, we first present the definition of the $\epsilon$-Nash equilibrium. \[d1\] A set of controls $u_t^i\in \mathcal{U}_i,\ 1\leq i\leq N,$ for the $N$ agents is said to be an $\epsilon$-Nash equilibrium with respect to the costs $J^i,\ 1\leq i\leq N,$ if there exists $\epsilon\geq0$ such that for any fixed $1\leq i\leq N$, we have $$\label{e14} J^i(\bar{u}_t^i,\bar{u}_t^{-i})\leq J^i(u_t^i,\bar{u}_t^{-i})+\epsilon,$$ when any alternative control $u^i\in \mathcal{U}_i$ is applied by $\mathcal{A}_i$. If $\epsilon=0,$ then Definition \[d1\] reduces to the usual Nash equilibrium. Now, we state the main result of this paper; its proof will be given later. \[t2\] Under ***(H1)-(H2)***, $(\bar{u}_t^1,\bar{u}_t^2,\cdots,\bar{u}_t^N)$ is an $\epsilon$-Nash equilibrium of *(**LD**)*. Here, for $1\le i\le N,$ $\bar{u}_t^i$ is given by . The proof of Theorem \[t2\] needs several lemmas, which are presented below. Denote by $\check {x}^i_t$ the centralized state trajectory with respect to $\bar u^i_t$, and by $\hat{x}^i_t$ the decentralized one with respect to $\bar u^i_t$. The cost functionals for **(LD)** and **(LLD)** are denoted by $\mathcal J^i(\bar u^i_t,\bar u^{-i}_t)$ and $J^i(\bar u^i_t)$, respectively. \[l3\] $$\label{e16} \sup_{1\leq i\leq N}\sup_{0\leq t\leq T}\mathbb E\Big|\frac{1}{N-1}\sum_{j=1, j\neq i}^N\widehat B_t\bar u^j_{t-\theta}-m_0^{\theta}(t)\Big|^2=O\Big(\frac{1}{N}\Big),$$ where $\bar u_t^j$ is given by .
[*Proof.*]{} By , and Lemma \[l2\], we get $$\label{e17} \begin{aligned} \frac{1}{N-1}\sum_{j=1, j\neq i}^N\widehat B_t\bar u^j_{t-\theta}&=\frac{1}{N-1}\sum_{j=1, j\neq i}^N\widehat B_t\Big\{-(N_{t-\theta}+\widetilde N_{t})^{-1}\big(B_{t-\theta}^\top \bar{y}^j_{t-\theta}+\widetilde B_{t}^\top\mathbb E^{\mathcal F^j_{t-\theta}}[\bar{y}^j_{t}]\big)\Big\}\\ &=-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\Bigg\{B_{t-\theta}^\top\Big(\frac{1}{N-1}\sum_{j=1, j\neq i}^Ny^{j,1}_{t-\theta}+y^2_{t-\theta}\Big)\\ &\qquad\qquad+\widetilde B_{t}^\top\Big(\frac{1}{N-1}\sum_{j=1, j\neq i}^N\mathbb E^{\mathcal F^{W^j}_{t-\theta}}[y^{j,1}_{t}]+\mathbb E^{\mathcal F^{W^0}_{t-\theta}}[y^{2}_{t}]\Big)\Bigg\}. \end{aligned}$$ Combining and , we obtain $$\label{e18} \begin{aligned} &\frac{1}{N-1}\sum_{j=1, j\neq i}^N\widehat B_t\bar u^j_{t-\theta}-m_0^{\theta}(t)\\ =&-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}\Bigg\{B_{t-\theta}^\top\Big(\frac{1}{N-1}\sum_{j=1, j\neq i}^Ny^{j,1}_{t-\theta}-\mathbb E y^{1}_{t-\theta}\Big)\\ &\qquad\qquad\qquad\qquad\qquad+\widetilde B_{t}^\top\Big(\frac{1}{N-1}\sum_{j=1, j\neq i}^N\mathbb E^{\mathcal F^{W^j}_{t-\theta}}[y^{j,1}_{t}]-\mathbb E y_t^{1}\Big)\Bigg\}. \end{aligned}$$ Then it follows from **(H1)** that $$\begin{aligned} &\mathbb E\Big|\frac{1}{N-1}\sum_{j=1, j\neq i}^N\widehat B_t\bar u^j_{t-\theta}-m_0^{\theta}(t)\Big|^2\\ \leq &C_1\Bigg\{\mathbb E\Big|\frac{1}{N-1}\sum_{j=1, j\neq i}^Ny^{j,1}_{t-\theta}-\mathbb E y^{1}_{t-\theta}\Big|^2+\mathbb E\Big|\frac{1}{N-1}\sum_{j=1, j\neq i}^N\mathbb E^{\mathcal F^{W^j}_{t-\theta}}[y^{j,1}_{t}]-\mathbb E y_t^{1}\Big|^2\Bigg\}, \end{aligned}$$ where $C_1$ is a positive constant. Recall that $y^{j,1}_t\in L^2_{\mathcal F^{W^j}_t}(-\theta,T+(\delta\vee\theta);\mathbb R^n)$. 
Thus $y^{j,1}_t$ is independent of $y^{k,1}_t$, for $j\neq k$, and we have $$\mathbb E \big(y^{j,1}_{t-\theta}-\mathbb E y^{1}_{t-\theta}\big)\big(y^{k,1}_{t-\theta}-\mathbb E y^{1}_{t-\theta}\big)=0$$ and $$\mathbb E \big(\mathbb E^{\mathcal F^{W^j}_{t-\theta}}[y^{j,1}_{t}]-\mathbb E y_t^{1}\big)\big(\mathbb E^{\mathcal F^{W^k}_{t-\theta}}[y^{k,1}_{t}]-\mathbb E y_t^{1}\big)=0.$$ Hence the result. $\Box$ \[l4\] $$\label{e19} \sup_{1\leq i\leq N}\sup_{0\leq t\leq T}\mathbb{E}\big|\check{x}^i_t-\hat{x}^i_t\big|^2=O\Big(\frac{1}{N}\Big),$$ $$\label{e20} \sup_{1\leq i\leq N}\sup_{0\leq t\leq T}\mathbb{E}\big|\check{x}^i_{t-\delta}-\hat{x}^i_{t-\delta}\big|^2=O\Big(\frac{1}{N}\Big).$$ [*Proof.*]{} For any $1\leq i\leq N$, by and , we have $$\nonumber \left\{ \begin{aligned} d(\check{x}_t^i-\hat{x}_t^i)&=\Big[A_t(\check{x}_t^i-\hat{x}_t^i)+\widetilde{A}_t(\check{x}^i_{t-\delta}-\hat{x}^i_{t-\delta})+\frac{1}{N-1} \sum_{j=1,j\neq i}^N\widehat B_t\bar{u}^j_{t-\theta}-m_0^{\theta}(t)\Big]dt,~t\in[0,T],\\ \check{x}^i_0-\hat{x}_0^i&=0,~~~ \check{x}^i_t-\hat{x}_t^i=0,~~~t\in[-\delta,0). \end{aligned} \right.$$ Integrating from $0$ to $t$, we get $$\nonumber \begin{aligned} \check{x}_t^i-\hat{x}_t^i&=\int_0^t\Big[A_s(\check{x}_s^i-\hat{x}_s^i)+\widetilde{A}_s(\check{x}^i_{s-\delta}-\hat{x}^i_{s-\delta})+\frac{1}{N-1} \sum_{j=1,j\neq i}^N\widehat B_s\bar{u}^j_{s-\theta}-m_0^\theta(s)\Big]ds. \end{aligned}$$ Note that $$\int_0^t\widetilde{A}_s(\check{x}^i_{s-\delta}-\hat{x}^i_{s-\delta})ds=\int_{-\delta}^{t-\delta}\widetilde{A}_{s+\delta}(\check{x}^i_{s}-\hat{x}^i_{s})ds =\int_{0}^{t-\delta}\widetilde{A}_{s+\delta}(\check{x}^i_{s}-\hat{x}^i_{s})ds.$$ By Lemma \[l3\], **(H1)** and Gronwall’s inequality, is obtained. In addition, $$\sup_{0\leq t\leq T}\mathbb{E}\big|\check{x}^i_{t-\delta}-\hat{x}^i_{t-\delta}\big|^2=\sup_{0\leq \tau\leq T-\delta}\mathbb{E}\big|\check{x}^i_{\tau}-\hat{x}^i_{\tau}\big|^2\leq \sup_{0\leq \tau\leq T}\mathbb{E}\big|\check{x}^i_{\tau}-\hat{x}^i_{\tau}\big|^2.$$ Then we get . $\Box$ \[l5\] $$\label{e21} \sup_{1\leq i\leq N}\sup_{0\leq t\leq T}\mathbb{E}\big||\check{x}^i_t|^2-|\hat{x}^i_t|^2\big|=O\Big(\frac{1}{\sqrt{N}}\Big),$$ $$\label{e22} \sup_{1\leq i\leq N}\sup_{0\leq t\leq T}\mathbb{E}\big||\check{x}^i_{t-\delta}|^2-|\hat{x}^i_{t-\delta}|^2\big|=O\Big(\frac{1}{\sqrt{N}}\Big),$$ $$\label{e23} \big|\mathcal J^i(\bar u^i_t,\bar u^{-i}_t)-J^i(\bar u^i_t)\big|=O\Big(\frac{1}{\sqrt{N}}\Big),~~~1\leq i\leq N.$$ [*Proof.*]{} For any $1\leq i\leq N,$ it is easy to see $\sup\limits_{0\leq t\leq T}\mathbb{E}\big|\hat{x}^i_t\big|^2<+\infty,\sup\limits_{0\leq t\leq T}\mathbb{E}\big|\hat{x}^i_{t-\delta}\big|^2<+\infty$. Applying the Cauchy-Schwarz inequality and , we have $$\nonumber\begin{aligned} &\sup_{0\leq t\leq T}\mathbb{E}\left||\check{x}^i_t|^2-|\hat{x}^i_t|^2\right|\\ \leq&\sup_{0\leq t\leq T}\mathbb{E}\big|\check{x}^i_t-\hat{x}^i_t\big|^2+2\Big(\sup_{0\leq t\leq T}\mathbb{E}|\hat{x}^i_t|^2\Big)^{\frac{1}{2}}\Big(\sup_{0\leq t\leq T}\mathbb{E}\big|\check{x}^i_t-\hat{x}^i_t\big|^2\Big)^{\frac{1}{2}}\\ =&O\Big(\frac{1}{\sqrt{N}}\Big). \end{aligned}$$ Similarly, is obtained. Then, noting **(H2)**, we have $$\nonumber\begin{aligned} &\big|\mathcal J^i(\bar u^i_t,\bar u^{-i}_t)-J^i(\bar u^i_t)\big|\\ \leq& C_2\mathbb{E}\int_0^T \big(\big||\check{x}^i_t|^2-|\hat{x}^i_t|^2\big|+\big||\check{x}^i_{t-\delta}|^2-|\hat{x}^i_{t-\delta}|^2\big|\big)dt+ C_2\mathbb{E}\big||\check{x}^i_T|^2-|\hat{x}^i_T|^2\big|\\ =& O\Big(\frac{1}{\sqrt{N}}\Big), \end{aligned}$$ which implies . Here, $C_2$ is a positive constant. $\Box$ Until now, we have obtained some estimates of the states and costs corresponding to the controls $\bar{u}^i_t$, $1\le i\le N$. Next we focus on the $\epsilon$-Nash equilibrium for (**LD**).
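Before turning to the perturbation argument, we note that the elementary mechanism behind Lemma \[l3\] and the resulting $O(1/\sqrt N)$ cost estimates is the law of large numbers: for iid copies $Y^j$ (standing in here for the $y^{j,1}$ components, with standard normal stand-ins as an assumption for the illustration), the mean-square gap between the empirical average and its expectation decays like $1/N$. A quick Monte Carlo check of this rate:

```python
import numpy as np

# Monte Carlo check of the averaging rate behind Lemma l3:
# E |(1/N) sum_j Y^j - E Y|^2 = Var(Y) / N for iid copies Y^j.
rng = np.random.default_rng(1)
reps = 5000                                     # independent experiments

def mean_sq_gap(N):
    Y = rng.standard_normal((reps, N))          # iid stand-ins, E Y = 0, Var Y = 1
    return np.mean(np.mean(Y, axis=1) ** 2)     # estimates E |avg - E Y|^2

g100, g400 = mean_sq_gap(100), mean_sq_gap(400)
```

Quadrupling $N$ cuts the mean-square gap roughly by four; passing this $O(1/N)$ rate through the Cauchy-Schwarz step of the proofs yields the $O(1/\sqrt N)$ order of the cost estimates.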
For any fixed $i$, $1\le i\le N$, consider an alternative control $u^i_t \in \mathcal{U}_i$ for $\mathcal{A}_i$ and introduce the dynamics $$\label{e24} \left\{ \begin{aligned} dl_t^i&=\Big[A_tl^i_t+\widetilde{A}_tl^i_{t-\delta}+B_tu^i_t+\widetilde{B}_tu^i_{t-\theta}+\frac{1}{N-1} \sum_{\kappa=1,\kappa\neq i}^N\widehat B_t\bar{u}^\kappa_{t-\theta}\Big]dt+\sigma_tdW^i_t+\sigma^0_tdW^0_t,~t\in[0,T],\\ l^i_0&=a,~~~ l^i_t=\xi^i_t,~~~t\in[-\delta,0),~~~u^i_t=\eta^i_t,~~~t\in[-\theta,0), \end{aligned} \right.$$ whereas the other players keep the controls $\bar{u}_t^j,1\leq j\leq N,j\neq i$, i.e., $$\nonumber \left\{ \begin{aligned} dl_t^j&=\Big[A_tl^j_t+\widetilde{A}_tl^j_{t-\delta}+B_t\bar{u}^j_t+\widetilde{B}_t\bar{u}^j_{t-\theta}+\frac{1}{N-1} \widehat B_t\Big(\sum_{\kappa=1,\kappa\neq i,j}^N\bar{u}^\kappa_{t-\theta}+u^i_{t-\theta}\Big)\Big]dt+\sigma_tdW^j_t+\sigma^0_tdW^0_t,\\ l^j_0&=a,~~~ l^j_t=\xi^j_t,~~~t\in[-\delta,0),~~~u^i_t=\eta^i_t,~~~t\in[-\theta,0). \end{aligned} \right.$$ The dynamics of $\mathcal{A}_i$ with respect to $u^i_t$ for (**LLD**) is $$\label{e25} \left\{ \begin{aligned} dp_t^i&=\Big[A_tp^i_t+\widetilde{A}_tp^i_{t-\delta}+B_tu^i_t+\widetilde{B}_tu^i_{t-\theta}+m_0^{\theta}(t)\Big]dt+\sigma_tdW^i_t+\sigma^0_tdW^0_t,~t\in[0,T],\\ p^i_0&=a,~~~ p^i_t=\xi^i_t,~~~t\in[-\delta,0),~~~u^i_t=\eta^i_t,~~~t\in[-\theta,0). \end{aligned} \right.$$ We have the following lemma. \[l6\] $$\label{e26} \sup_{1\leq i\leq N}\sup_{0\leq t\leq T}\mathbb{E}\big|l^i_t-p^i_t\big|^2=O\Big(\frac{1}{N}\Big),$$ $$\label{e27} \sup_{1\leq i\leq N}\sup_{0\leq t\leq T}\mathbb{E}\big|l^i_{t-\delta}-p^i_{t-\delta}\big|^2=O\Big(\frac{1}{N}\Big),$$ $$\label{e28} \sup_{1\leq i\leq N}\sup_{0\leq t\leq T}\mathbb{E}\big||l^i_t|^2-|p^i_t|^2\big|=O\Big(\frac{1}{\sqrt{N}}\Big),$$ $$\label{e29} \sup_{1\leq i\leq N}\sup_{0\leq t\leq T}\mathbb{E}\big||l^i_{t-\delta}|^2-|p^i_{t-\delta}|^2\big|=O\Big(\frac{1}{\sqrt{N}}\Big),$$ $$\label{e30} \big|\mathcal J^i(u^i_t,\bar u^{-i}_t)-J^i(u^i_t)\big|=O\Big(\frac{1}{\sqrt{N}}\Big),~~~1\leq i\leq N.$$ [*Proof.*]{} Using the same analysis as in the proof of Lemma \[l4\], by - and noting Lemma \[l3\], we get and . By virtue of and , and follow by applying the Cauchy-Schwarz inequality. As in Lemma \[l5\], is obtained. $\Box$ **Proof of Theorem \[t2\]:** Now, we consider the $\epsilon$-Nash equilibrium for $\mathcal{A}_i,1\leq i\leq N$.
It follows from and that $$\nonumber\begin{aligned} \mathcal J^i(\bar u^i_t,\bar u^{-i}_t)&=J^i(\bar u^i_t)+O\Big(\frac{1}{\sqrt{N}}\Big)\\ &\leq J^i(u^i_t)+O\Big(\frac{1}{\sqrt{N}}\Big)\\ &=\mathcal J^i(u^i_t,\bar u^{-i}_t)+O\Big(\frac{1}{\sqrt{N}}\Big). \end{aligned}$$ Thus, Theorem \[t2\] follows by taking $\epsilon=O\Big(\frac{1}{\sqrt{N}}\Big)$. Special cases ============= In this section, we study some special cases to show the essence of the MFG problem with delay.\ **Case I:** In this case, we give the “closed-loop" form of the $\epsilon$-Nash equilibrium. For simplicity, let $\widetilde A_t=\widetilde B_t=0$ in system ; then we study the following system, $$\label{system x} \left\{ \begin{aligned} dx_t^i&=\Big[A_tx^i_t+B_tu^i_t+\frac{1}{N-1} \sum_{j=1,j\neq i}^N\widehat B_tu^j_{t-\theta}\Big]dt+\sigma_tdW^i_t+\sigma^0_tdW^0_t,~t\in[0,T],\\ x_0&=a,\\ \end{aligned} \right.$$ and the cost functional is still . Now, we consider the following FBSDE $$\label{e77} \left\{ \begin{aligned} d\bar{x}_t^i&=\Big[A_t\bar{x}^i_t-B_t(N_t+\widetilde N_{t+\theta})^{-1}B_t^\top \bar{y}^i_t+m_0^{\theta}(t)\Big]dt+\sigma_tdW^i_t+\sigma^0_tdW^0_t,\\ d\bar{y}_t^i&=-\Big[A^\top_t\bar{y}^i_t+(R_t+\widetilde{R}_{t+\delta})\bar{x}^i_t\Big]dt +\bar{z}^i_tdW^i_t+\bar{z}^0_tdW^0_t,~t\in[0,T],\\ \bar{x}^i_0&=a,\\ \bar{y}^i_T&=M\bar{x}^i_T.\\ \end{aligned} \right.$$ In system , we can deduce $m_0^{\theta}(t)$ as follows, $$\begin{aligned} m_0^{\theta}(t)&=\lim_{N\rightarrow\infty}\widehat B_t\frac{1}{N-1}\sum_{j=1,j\neq i}^{N}\bar{u}^j_{t-\theta}\\ &=-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}B_{t-\theta}^{\top}\lim_{N\rightarrow\infty}\frac{1}{N-1}\sum_{j=1,j\neq i}^{N}\bar{y}^j_{t-\theta}\\ &=-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}B_{t-\theta}^{\top}\mathbb E^{\mathcal F^{W^0}_{t-\theta}}[\tilde y_{t-\theta}], \end{aligned}$$ where $\tilde y_{t}$ satisfies the following FBSDDE, $$\label{e777} \left\{ \begin{aligned}
d\tilde{x}_t&=\Big[A_t\tilde{x}_t-B_t(N_t+\widetilde N_{t+\theta})^{-1}B_t^\top \tilde{y}_t-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}B_{t-\theta}^{\top}\mathbb E^{\mathcal F^{W^0}_{t-\theta}}[\tilde y_{t-\theta}]\Big]dt+\sigma_tdW_t\\ &\qquad\qquad+\sigma^0_tdW^0_t,\\ d\tilde{y}_t&=-\Big[A^\top_t\tilde{y}_t+(R_t+\widetilde{R}_{t+\delta})\tilde{x}_t\Big]dt +\tilde{z}_tdW_t+\tilde{z}^0_tdW^0_t,~t\in[0,T],\\ \tilde{x}_0&=a,\\ \tilde{y}_T&=M\tilde{x}_T,\\ \end{aligned} \right.$$ where $W_t$ and $W^i_t$ are independent and identically distributed. According to the Appendix in [@Chen-Wu-Yu], has a unique solution $(\tilde x_t, \tilde y_t, \tilde z_t, \tilde z_t^0)$. Then, FBSDDE can be rewritten as follows $$\label{e7777} \left\{ \begin{aligned} d\bar{x}_t^i&=\Big[A_t\bar{x}^i_t-B_t(N_t+\widetilde N_{t+\theta})^{-1}B_t^\top \bar{y}^i_t-\widehat B_t(N_{t-\theta}+\widetilde N_{t})^{-1}B_{t-\theta}^{\top}\mathbb E^{\mathcal F^{W^0}_{t-\theta}}[\tilde y_{t-\theta}]\Big]dt\\ &\qquad\qquad+\sigma_tdW^i_t+\sigma^0_tdW^0_t,\\ d\bar{y}_t^i&=-\Big[A^\top_t\bar{y}^i_t+(R_t+\widetilde{R}_{t+\delta})\bar{x}^i_t\Big]dt +\bar{z}^i_tdW^i_t+\bar{z}^0_tdW^0_t,~t\in[0,T],\\ \bar{x}^i_0&=a,\\ \bar{y}^i_T&=M\bar{x}^i_T.\\ \end{aligned} \right.$$ FBSDE can be decoupled via the following Riccati equation and ordinary differential equation $$\left\{ \begin{aligned} &~\dot{P}_t+P_tA_t+A^\top_tP_t+R_t+\widetilde{R}_{t+\delta}-P_tB_t(N_t+\widetilde{N}_{t+\theta})^{-1}B^\top_tP_t=0,\\ &~P_T=M, \end{aligned} \right.$$ and $$\left\{ \begin{aligned} &~\dot{\phi}_t+[A_t-B_t(N_t+\widetilde{N}_{t+\theta})^{-1}B^\top_tP_t]\phi_t-P_t\hat{B}_t(N_{t-\theta}+\widetilde{N}_{t})^{-1}B^\top_{t-\theta} \mathbb E ^{\mathcal F_{t-\theta}^{W^0}}[\tilde y_{t-\theta}]=0,\\ &~\phi_T=0.
\end{aligned} \right.$$ We obtain that the optimal feedback is $$\bar{u}_t^i=-(N_t+\widetilde{N}_{t+\theta})^{-1}B^\top_t(P_t\bar x_t^i+\phi_t).$$ From Theorem \[t2\], we claim that $(\bar{u}_t^1,\bar{u}_t^2,\cdots,\bar{u}_t^N)$ is an $\epsilon$-Nash equilibrium of Problem , .\ **Case II:** Now, we consider another special case. Let $A_t= B_t=0$ in system ; moreover, we assume $\delta=\theta$. Then we study the following system $$\label{e222} \left\{ \begin{aligned} dx_t^i&=\Big[\widetilde{A}_tx^i_{t-\delta}+\widetilde{B}_tu^i_{t-\delta}+\frac{1}{N-1} \sum_{j=1,j\neq i}^N\widehat B_tu^j_{t-\delta}\Big]dt+\sigma_tdW^i_t+\sigma^0_tdW^0_t,~t\in[0,T],\\ x^i_0&=a,~~~ x^i_t=\xi^i_t,~~~t\in[-\delta,0),~~~u^i_t=\eta^i_t,~~~t\in[-\theta,0), \end{aligned} \right.$$ and the cost functional is $$\begin{aligned} \mathcal{J}^i(u^i_t, u^{-i}_t)&=\frac{1}{2}\mathbb{E}\int_0^T\big[N_t(u^i_t)^2+ \widetilde{N}_t(u^i_{t-\theta})^2\big]dt+M x^i_T. \end{aligned}$$ We consider the following system instead of $$\label{e333} \left\{ \begin{aligned} d{x}_t^i&=\Big[\widetilde{A}_t{x}^i_{t-\delta}+\widetilde{B}_tu^i_{t-\delta}+m_0^{\delta}(t) \Big]dt+\sigma_tdW^i_t+\sigma^0_tdW^0_t,~t\in[0,T],\\ {x}^i_0&=a,~~~ {x}^i_t=\xi^i_t,~~~t\in[-\delta,0),~~~u^i_t=\eta^i_t,~~~t\in[-\theta,0). \end{aligned} \right.$$ and the adjoint equation is $$\label{adjiont} \left\{ \begin{aligned} d{y}_t^i&=-\widetilde{A}_{t+\delta}\mathbb E^{\mathcal F^i_t}[{y}^i_{t+\delta}]dt +{z}^i_tdW^i_t+{z}^0_tdW^0_t,~t\in[0,T],\\ {y}^i_T&=-M,\\ {y}^i_t&=0,~~~t\in(T, T+(\delta\vee\theta)].
\end{aligned} \right.$$ We can solve the adjoint equation explicitly by applying the method in [@Yu], which can also be found in [@Menoukeu-Pamen].\ (i) When $t\in[T-\delta, T]$, the ABSDE becomes $$\bar y_t^i=-M-\int_t^T\bar z_s^idW_s^i-\int_t^T\bar z_s^0dW_s^0,~~~~t\in[T-\delta, T].$$ Its solution is $$\bar y_t^i=-M, ~~~ \bar z_t^i=0,~~~\bar z^0_t=0, ~~~t\in[T-\delta, T].$$ \(ii) If we solve on the interval $[T-k\delta, T-(k-1)\delta](k=1,2,3\cdots)$, and the solution $\{(\bar y_t^i, \bar z_t^i, \bar z_t^0); T-k\delta\leq t\leq T-(k-1)\delta\}$ is Malliavin differentiable, then we can solve the equation on the next interval $[T-(k+1)\delta, T-k\delta]$, $$\bar y_t^i=\mathbb E [\bar y^i_{T-k\delta}]+\int_t^{T-k\delta}\widetilde A_s \mathbb E^{\mathcal F^i_s}[\bar y^i_{s+\delta}]ds,$$ and $$\bar z_t^i=0,~~~\bar z_t^0=0,~~~~t\in[T-(k+1)\delta, T-k\delta].$$ The optimal control is $$\bar u^i_t=-(N_t+\widetilde N_{t+\delta})^{-1}\widetilde B_{t+\delta}\mathbb E^{\mathcal F^i_t}[\bar y_{t+\delta}].$$ So the $\epsilon$-Nash equilibrium is $(\bar{u}_t^1,\bar{u}_t^2,\cdots,\bar{u}_t^N)$. Next, we consider a special case in which the coefficients are all constants: $\widetilde A_t=\widetilde A$, $\widetilde B_t=\widetilde B$, $M=1$, $N_t=N$, $\widetilde N_t=\widetilde N$. Then the solution of the adjoint equation is as follows, $$\begin{aligned} \bar y_{t+\delta}^i&=0,~~\bar z_t^i=0, ~~~\bar z_t^0=0,~~~t\in[T-\delta, T];\\ \bar y_{t+\delta}^i&=-1, ~~\bar z_t^i=0, ~~~\bar z_t^0=0,~~~t\in[T-2\delta, T-\delta];\\ \bar y_{t+\delta}^i&=-1-\widetilde A(T-2\delta-t), ~~\bar z_t^i=0, ~~~\bar z_t^0=0,~~~t\in[T-3\delta, T-2\delta];\\ \bar y_{t+\delta}^i&=-1-\widetilde A\delta-\widetilde A(T-3\delta-t)[1+\frac{1}{2}\widetilde A(T-3\delta-t)], ~~\bar z_t^i=0, ~~~\bar z_t^0=0,~~~t\in[T-4\delta, T-3\delta];\\ \cdots\cdots \end{aligned}$$ Then, the $\epsilon$-Nash equilibrium is $(\bar{u}_t^1,\bar{u}_t^2,\cdots,\bar{u}_t^N)$, where $$\bar u^i_t=-\frac{\widetilde B}{N+\widetilde N}\bar y^i_{t+\delta}.$$ [0]{} D. Andersson and B.
Djehiche, “A maximum principle for SDEs of mean-field type," *Appl. Math. Optim.*, vol. 63, pp. 341-356, 2011. M. Bardi, “Explicit solutions of some linear-quadratic mean field games," *Netw. Heterogeneous Media*, vol. 7, pp. 243-261, 2012. A. Bensoussan, K. Sung, S. Yam, and S. Yung, “Linear-quadratic mean-field games," preprint, 2015. R. Buckdahn, P. Cardaliaguet and M. Quincampoix, “Some recent aspects of differential game theory," *Dynam. Games Appl.*, vol. 1, pp. 74-114, 2010. R. Buckdahn, B. Djehiche, and J. Li, “A general stochastic maximum principle for SDEs of mean-field type," *Appl. Math. Optim.*, vol. 64, pp. 197-216, 2011. R. Carmona and F. Delarue, “Probabilistic analysis of mean-field games," *SIAM J. Control Optim.*, vol. 51, pp. 2705-2734, 2013. L. Chen and Z. Wu, “Maximum principle for the stochastic optimal control problem with delay and application," *Automatica*, vol. 46, pp. 1074-1080, 2010. L. Chen and Z. Wu, “Dynamic programming principle for stochastic recursive optimal control problem with delay systems," *ESAIM: COCV*, vol. 18, pp. 1005-1026, 2012. L. Chen, Z. Wu and Z. Yu, “Delayed stochastic linear-quadratic control problem and related applications," *Journal of Applied Mathematics*, vol. 2012, 2012. O. Guéant, J.-M. Lasry and P.-L. Lions, “Mean field games and applications," *Paris-Princeton Lectures on Mathematical Finance*, Springer, Berlin, 2010. J. Huang, X. Li and J. Shi, “Forward-backward linear quadratic stochastic optimal control problem with delay," *Systems and Control Letters*, vol. 61, pp. 623-630, 2012. J. Huang and J. Shi, “Maximum principle for optimal control of fully coupled forward-backward stochastic differential delayed equations," *ESAIM: COCV*, vol. 18, pp. 1073-1096, 2012. M. Huang, P. Caines, and R. Malhamé, “Large-population cost-coupled LQG problems with non-uniform agents: individual-mass behavior and decentralized $\varepsilon$-Nash equilibria," *IEEE Transactions on Automatic Control*, vol. 52, pp. 1560-1571, 2007.
M. Huang, P. Caines, and R. Malhamé, “Social optima in mean field LQG control: centralized and decentralized strategies," *IEEE Transactions on Automatic Control*, vol. 57, pp. 1736-1751, 2012. M. Huang, R. Malhamé, and P. Caines, “Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle," *Communication in Information and Systems*, vol. 6, pp. 221-251, 2006. Y. Hu and S. Peng, “Solution of forward-backward stochastic differential equations," *Proba. Theory Rel. Fields*, vol. 103, pp. 273-283, 1995. J.-M. Lasry and P.-L. Lions, “Mean field games," *Japan. J. Math.*, vol. 2, pp. 229-260, 2007. T. Li and J. Zhang, “Asymptotically optimal decentralized control for large population stochastic multiagent systems," *IEEE Transactions on Automatic Control*, vol. 53, pp. 1643-1660, 2008. O. Menoukeu-Pamen, “Optimal control for stochastic delay system under model uncertainty: a stochastic differential game approach," *Journal of Optimization Theory and Applications*, pp. 1-34, 2011. S. Peng and Z. Yang, “Anticipated backward stochastic differential equations," *Ann. Probab.*, vol. 37, pp. 877-902, 2009. S. Peng and Z. Wu, “Fully coupled forward-backward stochastic differential equations and applications to optimal control," *SIAM J. Control Optim.*, vol. 37, pp. 825-843, 1999. K. Sung, “Recent results in linear-quadratic mean field games," *2013 CACS International Automatic Control Conference (CACS)*, 2013. Z. Yu, “Linear-quadratic optimal control and nonzero-sum differential game of forward-backward stochastic system," *Asian Journal of Control*, vol. 14, pp. 173-185, 2012. Z. Yu, “The stochastic maximum principle for optimal control problems of delay systems involving continuous and impulse controls," *Automatica*, vol. 48, pp. 2420-2432, 2012. H. Zhang, L. Li, J. Xu, and M.
Fu, “Linear quadratic regulation and stabilization of discrete-time systems with delay and multiplicative noise," *IEEE Transactions on Automatic Control*, vol. 47, no. 4, pp. 640-646, 2002. H. Zhang, X. Lu, W. Zhang, and W. Wang, “Kalman filtering for linear time-delayed continuous-time systems with stochastic multiplicative noises," *International Journal of Control, Automation, and Systems*, vol. 5, no. 4, pp. 355-363, 2007. [^1]: N. Li is with the Department of Mathematics, Qilu Normal University, Jinan. N. Li acknowledges financial support in part from the Projects B-Q34X and G-YL04. [^2]: S. Wang is with the Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong.
Proceedings of the Conference\ “Path Integrals from peV to TeV”\ Firenze, August 1998\ to be published by World Scientific [**The Quantum Dissipative Villain Model**]{} Introduction ============ Quantum dissipative systems can be described in terms of system plus environment hamiltonians[@kn:Weiss-98] where a special quantum variable $\varphi$ interacts with an environment of harmonic oscillators. The reduced dynamics of $\varphi$ obtained by integrating out the reservoir’s degrees of freedom is studied. An effective euclidean action is found which, for state-independent dissipation, reads $$\label{eq:general-action} {\cal S}[\varphi] \;=\; {1 \over 2} \int_0^{\beta} d \tau\, d \tau^{\prime}\, \varphi(\tau) \, {\cal A}(\tau-\tau^{\prime}) \,\varphi(\tau^{\prime}) \;+\; \int_0^{\beta} d \tau \,{\cal V}(\varphi) \quad ,$$ where ${\cal V}(\varphi)$ is the potential. The kernel, whose Fourier transform (FT) is[@kn:revs] ${\cal A}(\omega)\!=\! m \omega^2\! - \!\frac{1}{2}\alpha(\omega)$, with $\omega \!=\! 2 \pi n/\beta$, contains the kinetic “mass” term and the damping term $\alpha(\omega)$ subsuming the spectral properties of the bath coupling [@kn:Weiss-98]. If ${\cal V}(\varphi)$ describes a tunneling problem then the variable $\varphi$ has an underlying discrete character even if it is defined to be continuous. For instance, the low energy properties of the double-well potential ${\cal V}(\varphi) = V (\varphi^2 - a^2)^2$ are approximately accounted for by a two-state system. However, it is impossible to find the exact relation for a generic environment.$\!$ We now introduce a model which displays the discrete character of the continuous variable $\varphi$ by writing $$\label{eq:QDVM-potential} \Lambda^{-1}\! 
\sum_{\tau} {\cal V}(\varphi_{\tau}) \,\, -\ln \Bigl[ \sum_{\{m_\tau \}} \exp \bigl\{ -{1 / (2 \Lambda)} \sum_{\tau} V \left( \varphi_{\tau} - 2 \pi m_{\tau} \right)^2 \bigr\}\Bigr] - {\cal J}_{\tau} \, \varphi_{\tau} \;, \;\;$$ where we added a source ${\cal J}_{\tau}$. Here we are considering a version of eq.(\[eq:general-action\]) on a lattice with $N$ sites and spacing $\beta/N = 1/\Lambda$. We name it the Quantum Dissipative Villain (QDV) model because eq.(\[eq:QDVM-potential\]) is the Villain approximation[@kn:Savit-80] of the potential $\,{\cal V}(\varphi) \,=\, - V \cos \varphi - {\cal J} \varphi \,$ in the Caldeira-Leggett (CL) model[@kn:Caldeira-Leggett]. Dual representations of the QDVM ================================ [**$m$-representation**]{}: We integrate out $\varphi$ in the partition function of the QDV model, which becomes ${\cal Z}^{QDV} [{\cal J}] = {\cal Z}^{V} [{\cal J}] \cdot {\cal Z}^{(m)} [{\cal J}^{(m)}] \,$. Here $ {\cal Z}^{V}$ is the partition function of the damped harmonic oscillator (DHO), and $$\hskip-1mm {\cal Z}^{(m)} [{\cal J}^{(m)}]\; = \; \sum_{\{m_\tau \}} \mbox{\large e}^{ - {1 \over 2} \sum_{\tau,\tau ^{\prime}} m_{\tau} \, \left( {2 \pi V \over \Lambda} \right)^2 \, \left[ {\Lambda \over V} \delta_{\tau,\tau^{\prime}} - \Lambda^2 {\cal G}_{\tau-\tau^{\prime}}^{V} \right] \, m_{\tau ^{\prime}} + \sum_{\tau} {J^{(m)}_{\tau} / \Lambda} \; m_{\tau} }$$ is a 1-D surface roughening model with interacting heights[@kn:falci] $m_{\tau}$. The source is ${\cal J}^{(m)}_{\omega} = 2 \pi V \Lambda \, {\cal G}_{\omega}^{V} \, {\cal J}_{\omega}$,
and ${\cal G}_{\tau}^{V}$ is the Green’s function of the discretized DHO, given by $\; {\cal G}_{\omega}^{V} = \Theta(\Lambda - |\omega |) \; [m \Lambda \omega^2 - \Lambda \, \alpha(\omega)/2 + \Lambda \, V]^{-1} \,$, for large enough $\Lambda$.\ [**$e$-representation**]{}: We introduce $e_{\tau} := m_{\tau+1}-m_{\tau}$ and rewrite ${\cal Z}^{(m)}$ as $$\begin{aligned} \label{eq:Z-e-representation} {\cal Z}^{(e)} [{\cal J}^{(e)}] \,&=&\, \sum_{\{e_\tau \}} \mbox{\large e}^{-\, {1 \over 2} \; \sum_{\tau,\tau ^{\prime}} \, e_{\tau} \; \Delta_{\tau - \tau^{\prime}} \; e_{\tau ^{\prime}} \;+ \; \sum_{\tau} \; {\cal J}_{\tau}^{(e)} e_{\tau} } \;, \\ && \hskip-18mm \Delta_{\omega} \;=\; (2 \pi V \Lambda/ \omega)^2 \left[ {1 / V} - \Lambda {\cal G}_{\omega}^{V} \right] \; ; \hskip6mm {\cal J}_{\omega}^{(e)} \;=\; {- 2 \pi i V \Lambda / \omega} \; {\cal G}_{\omega}^{V} \; {\cal J}_{\omega} \;,\quad\end{aligned}$$ obtaining a gas of interacting charges[@kn:falci] $e_{\tau} \in ]-\infty,\infty[$.\ [**$n$-representation**]{}: Another charge representation can be obtained starting from the QDV model, performing a Poisson transformation (which changes $m \to n$) and then integrating out $\varphi$. We obtain[@kn:falci] ${\cal Z}^{QDV} [{\cal J}] = {\cal Z}^{0} [{\cal J}] \cdot {\cal Z}^{(n)} [{\cal J}^{(n)}] \,$, where ${\cal Z}^{0}$ describes a Brownian particle and ${\cal Z}^{(n)}$ has the same structure as ${\cal Z}^{(e)}$ in eq.(\[eq:Z-e-representation\]), being a gas of charges $n_{\tau} \in ]-\infty,\infty[$ with interaction and source given by $$\label{eq:interaction-n-representation} {\cal D}_{\omega} \;=\; {\Lambda / V} + \Lambda^2 {\cal G}_{\omega}^{0} \; ; \hskip6mm {\cal J}_{\omega}^{(n)} = i \, \Lambda \, {\cal G}_{\omega}^{0} {\cal J}_{\omega}\;.$$ Exact Self-duality ================== The ${\cal Z}^{(e)}$ and ${\cal Z}^{(n)}$ represent [*the same model*]{}, with modified interaction and sources.
This means that the QDV model has an exact [*self-dual*]{} structure[@kn:falci]. A simple reformulation of this self-dual mapping is obtained if we introduce the functions $\, \zeta^0(\omega) = |\omega | / (2 \pi) \, \Lambda {\cal G}_{\omega}^{0} \,$ and $\, \zeta(\omega) = \zeta^0(\omega) + |\omega| / (2 \pi V) \,$. Then we rewrite $\Lambda^{-1} {\cal D}_{\omega} = 2 \pi / |\omega| \, \zeta(\omega) \,$ and $\Lambda^{-1} \Delta_{\omega} = 2 \pi / |\omega| \, [\zeta(\omega)]^{-1}$. The transformations of the interaction and of the source are finally given by $$\label{eq:self-duality} \zeta(\omega) \;\; \longrightarrow \; \; 1/\zeta(\omega) \;, \qquad\mbox{and}\qquad {\cal J}_{\omega}^{(n)} \;=\; - \omega/|\omega| \; \; \zeta(\omega) \; {\cal J}_{\omega}^{(e)} \;.$$ We can also write exact relations between correlation functions of the representations of the QDV model. For instance, the FT of the correlation function $\langle \varphi_{\tau} \varphi_0 \rangle $ of the QDV model is related to $ \langle n \, n \rangle_{\omega} =: |\omega|/(2 \pi \Lambda) \, {\cal C}_{\omega}[\zeta]$ by $$|\omega|/(2 \pi \Lambda) \; \langle \varphi \varphi \rangle_{\omega} \; = \; \zeta^0(\omega) \; \Bigl\{ 1 \;-\; \zeta^0(\omega) \; {\cal C}_{\omega}[\zeta({\omega})] \Bigr\} \;.$$ Using self-duality, the relation between the $e$-$e$ and $n$-$n$ correlation functions becomes an exact equation for ${\cal C}$ $$\zeta(\omega) \; {\cal C}_{\omega} [\zeta(\omega)] \;+\; \zeta^{-1}(\omega) \; {\cal C}_{\omega} [\zeta^{-1}(\omega)] \;=\; 1 \;.$$ \[fig:dualcircuit\] We have not yet specified the environment, i.e. the function $\zeta(\omega)$. Both $\zeta(\omega)$ and $1/\zeta(\omega)$ have to be strictly positive for $\omega \neq 0$ since otherwise the integrations involved in the transformations cannot be performed. 
Moreover, the calculation of dynamic correlation functions involves the analytic continuation $|\omega| \to \mathbf{p} \to i \Omega + 0^+$, so it is desirable that $\zeta( \mathbf{p})$ is analytic in ${\cal R}e \, \mathbf{p} > 0$. Thus we require that $\zeta( \mathbf{p})$ has the properties of the impedance of a linear passive bipole[@kn:Chua-Desoer-Kuh]. The analogy with network theory also involves duality. Namely, eqs.(\[eq:self-duality\]) for ${\cal R}e \, \mathbf{p} > 0$ can be rephrased by associating to each charge representation a circuit with a non-linear quantum component ${\cal X}$, the interaction $\zeta( \mathbf{p})$ corresponding to the impedance seen by ${\cal X}$ and the current bias being ${\cal J}^{(.)}_{\mathbf{p}}$ (see fig.1). Then the quantum self-dual transformations for the charge models, eqs.(\[eq:Z-e-representation\],\[eq:interaction-n-representation\],\[eq:self-duality\]), correspond to transforming the linear elements and the source of the circuit using the known[@kn:Chua-Desoer-Kuh] classical dual and Norton transformations, while keeping the non-linear quantum component ${\cal X}$ unchanged. Further developments ==================== The above results are significant in view of the fundamental character of the QDV model. Indeed, the well known CL model can be obtained exactly from the QDV model in the continuum limit[@kn:falci] if $V \to \Lambda/[2 \ln(2\Lambda/V)]$. In other words, this choice makes the Villain approximation [*exact*]{}. In this case the low frequency limit of eq.(\[eq:self-duality\]) reproduces both the approximate Schmid self-duality and the $\sigma \leftrightarrow - \sigma$ correspondence.[@kn:Schmid] The exact self-dual structure of the CL model was recently found for a special environment.[@kn:Fendley-Saleur] Here we find that it holds true for arbitrary temperatures and general environments.
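A quick consistency check of the exact functional equation for ${\cal C}$ derived above: at a self-dual environment, $\zeta(\omega)=1$, the two terms coincide and the equation pins down the correlator with no free parameters, $$\zeta(\omega)=1: \qquad 2\,{\cal C}_{\omega}[1] \;=\; 1 \;\;\Longrightarrow\;\; {\cal C}_{\omega}[1] \;=\; {\textstyle{1 \over 2}} \;,$$ i.e. at the self-dual point the charge-charge correlation function is fixed exactly, at any temperature.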
When the CL model describes a mesoscopic Josephson junction[@kn:revs] in a circuit, the analogy with network theory (see fig.1) becomes more stringent. The lowest order in the Coulomb-gas representation[@kn:falci] of the $n$-model is the standard theory of the “influence of the environment”[@kn:revs2; @kn:Weiss-98] and calculations can be performed numerically for any external impedance. The same can be done for the lowest order in the $e$-representation, which corresponds to a single-instanton contribution[@kn:Weiss-98; @kn:revs]. Duality network relations for a purely resistive environment, justified by the results of Schmid[@kn:Schmid], have been used for mesoscopic junctions for a long time[@kn:revs]. They are here substantiated and generalized. Acknowledgments {#acknowledgments .unnumbered} =============== G.F. acknowledges R. Fazio and M. Annino for discussions and suggestions, and EU (TMR - FMRX CT 960042) and INFM (PRA-QTMD) for support. [99]{} U. Weiss, [*Quantum Dissipative Systems*]{}, Series in Modern Condensed Matter Physics, vol. 2, second edition, World Scientific, Singapore, 1998. G. Schön and A.D. Zaikin, Phys. Rept. [**198**]{}, 237 (1990). R. Savit, Rev. Mod. Phys. [**52**]{}, 453 (1980). A.O. Caldeira and A.J. Leggett, Phys. Rev. Lett. [**46**]{}, 211 (1981). G. Falci and U. Weiss, Journ. Superc., Oct. 1998 issue. L.O. Chua, C.A. Desoer, E.S. Kuh, [*Linear and Non Linear Circuits*]{}, McGraw-Hill, New York, 1969. A. Schmid, Phys. Rev. Lett. [**51**]{}, 1506 (1983); M. Sassetti, H. Schomerus and U. Weiss, Phys. Rev. B [**53**]{}, R2914 (1996). P. Fendley and H. Saleur, Phys. Rev. Lett. [**81**]{}, 2518 (1998). G.L. Ingold and Yu.V. Nazarov, in [*Single Charge Tunneling*]{}, H. Grabert and M. Devoret, Eds., Plenum, New York, 1991.
New Repair Strategy of Hadamard Minimum Storage Regenerating Code for Distributed Storage System Xiaohu Tang, *Member, IEEE*, Bin Yang, and Jie Li [^1] **Abstract**— The newly presented $(k+2,k)$ Hadamard minimum storage regenerating (MSR) code is the first class of high-rate storage codes with the optimal repair property for all single node failures. In this paper, we propose a new simple repair strategy, which considerably reduces the computation load of node repair in contrast to the original one. **Index Terms**—Distributed storage, MSR, Hadamard, repair strategy, computation load. Introduction ============ In distributed storage systems, data is placed on a number of storage nodes with redundancy. Redundancy is the basis for distributed storage systems to provide reliable access service. Normally, there are two mechanisms of redundancy: replication and erasure coding. Compared with replication, erasure coding is becoming more and more attractive because of its much better storage efficiency. Up to now, some famous storage applications, such as Google Colossus (GFS2) [@gfs2], Microsoft Azure [@azure], HDFS Raid [@hdfs-raid], and OceanStore [@oceanstore], have adopted erasure coding. Due to the unreliability of individual storage nodes, node repair will be launched once node failures take place, so as to retain the same redundancy. With data growing much faster than before, node repair has now become a regular maintenance operation. In general, there are several metrics to evaluate the cost of node repair, such as disk I/O, network bandwidth, and the number of accessed disks. Among these metrics, the repair bandwidth, defined as the amount of data downloaded to repair a failed node, is the most useful.
In [@coding], Dimakis *et al.* established a tradeoff between storage and repair bandwidth, on which the MBR (minimum bandwidth regenerating) code, corresponding to minimum repair bandwidth, and the MSR (minimum storage regenerating) code, corresponding to minimum storage, are the two most important points. In this study, we focus on MSR codes with high rate. So far, several explicit constructions of such MSR codes have been proposed based on the interference alignment technique [@hadamard; @zigzag; @longMDS]. However, it should be noted that in all the aforementioned constructions except the one in [@hadamard], only the systematic nodes possess the optimal repair property. In [@hadamard], the first $(k+2,k)$ MSR code with the optimal repair property for all storage nodes, including both the $k$ systematic nodes and the $2$ parity nodes, was presented. Actually, the optimal repair property follows from the Hadamard design with the help of the lattice representation of the symbol extension technique. Therefore, we call this code the Hadamard MSR code throughout this paper. In this paper, we fully explore the fundamental properties of the Hadamard design. As a result, we present a generic repair strategy for the Hadamard MSR code based only on elementary mathematics instead of lattice arguments. Further, the new generic repair strategy not only includes the original repair strategy in [@hadamard], but also generates a much simpler and more efficient one which can greatly reduce the computation load during the repair of failed nodes. The remainder of this paper is organized as follows. In Section \[section\_of\_model\], the $(k+2,k)$ Hadamard MSR code is briefly reviewed. In Section \[section\_of\_property\], some fundamental properties of the Hadamard design are studied to help the optimal repair. In Section \[section\_repair\_strategy\], the new repair strategy is proposed for the systematic nodes, the first parity node, and the second parity node, respectively.
The comparison of computation load between the original strategy in [@hadamard] and ours is given in Section \[section\_of\_comparison\]. Finally, Section \[section\_of\_conclusion\] concludes this paper. $(k+2,k)$ Hadamard MSR code {#section_of_model} =========================== The $(k+2,k)$ MSR code, consisting of $k$ systematic nodes and $2$ parity nodes, is a typical high-rate storage code in distributed storage systems. Assume that the original data is of size $M=kN$; it can be equally partitioned into $k$ parts $\textbf{f}=[\textbf{f}_1^T,\textbf{f}_2^T,\cdots,\textbf{f}_k^T]^T$ and placed on $k$ systematic nodes, where each $\textbf{f}_i$ is an $N\times 1$ vector. In general, the 2 parity nodes hold parity data, namely two $N\times 1$ vectors $\textbf{f}_{k+1}$ and $\textbf{f}_{k+2}$, of all the systematic nodes. Table 1 illustrates the structure of a $(k+2,k)$ MSR code. \[Hadamard\_Model\]

  Systematic node   Systematic data
  ----------------- -------------------------------------------------------------
  1                 $\textbf{f}_1$
  $\vdots$          $\vdots$
  $k$               $\textbf{f}_k$
  Parity node       Parity data
  1                 $\textbf{f}_{k+1}=\textbf{f}_1+ \cdots+ \textbf{f}_k$
  2                 $\textbf{f}_{k+2}=A_1\textbf{f}_1+\cdots + A_k\textbf{f}_k$

Let $N=2^{k+1}$. The $(k+2,k)$ Hadamard MSR code [@hadamard] is characterized by the coding matrices $A_1,\cdots,A_k$ over the finite field $\mathbb{F}_q ~(q\ge{2k+3})$ as $$\begin{aligned} A_i &=& a_iX_i+b_iX_0+I_N, ~1\le i\le k \nonumber\\ X_j&=&\textrm{diag}(\underbrace{I_{2^j},-I_{2^j},\cdots,I_{2^j},-I_{2^j}}\limits_{2^{k+1-j}}),\label{A_i_definition-2} ~0\le j\le k,\end{aligned}$$ where $I_m$ is the identity matrix of order $m$, and the elements $a_i\ne 0$ and $b_i\ne 0$ of the finite field of odd characteristic and order $q\ge 2k+3$ satisfy $$\begin{aligned} a_i^2-b_i^2&=&-1,\label{Eqn_a-req-1}\\ a_i\pm a_j&\ne& b_i- b_j,\nonumber\\ a_i\pm a_j&\ne& -(b_i- b_j),\nonumber\end{aligned}$$ for all $1\le i\ne j\le k$ [@hadamard]. In fact, the matrices in are built on the Hadamard design [@DS92].
Like other $(k+2,k)$ MSR codes, this $(k+2,k)$ Hadamard MSR code can tolerate $2$ arbitrary node failures [@hadamard]. Notably, this Hadamard MSR code has an advantage over other $(k+2,k)$ MSR codes in that both the systematic nodes and the parity nodes have the optimal repair property. Indeed, to repair a failed node $1\le i\le k+2$, the optimal repair property requires downloading $N/2=2^{k}$ data from each surviving node $1\le l\ne i\le k+2$, obtained by multiplying its original data $\textbf{f}_l$ with an $N/2\times N$ matrix [@hadamard], which will be discussed in detail in Section \[section\_repair\_strategy\]. \[Exm\_1\] For $k=2$, the $(4,2)$ Hadamard MSR code has the following coding matrices over $\mathbb{F}_{7}$ $$\begin{aligned} A_1 &=& \mathrm{diag}(1,1,-1,-1,1,1,-1,-1)+3\cdot\mathrm{diag}(1,-1,1,-1,1,-1,1,-1)+I_{8}\\ A_2 &=& \mathrm{diag}(1,1,1,1,-1,-1,-1,-1)+4\cdot\mathrm{diag}(1,-1,1,-1,1,-1,1,-1)+I_{8}\end{aligned}$$ Its repair matrices will be elaborated in Section \[section\_repair\_strategy\].
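Reading off the coefficients of these matrices ($a_1=1$, $b_1=3$, $a_2=1$, $b_2=4$), the defining conditions on $a_i$ and $b_i$ can be checked directly; a minimal sketch (with the third condition read as $a_i\pm a_j\ne -(b_i-b_j)$, the sign-flipped counterpart of the second):

```python
# Check that the Example 1 coefficients satisfy the Hadamard MSR conditions over F_7.
q = 7
a = {1: 1, 2: 1}   # coefficients of X_i in A_1, A_2
b = {1: 3, 2: 4}   # coefficients of X_0 in A_1, A_2

for i in (1, 2):
    assert a[i] % q != 0 and b[i] % q != 0          # a_i, b_i nonzero
    assert (a[i]**2 - b[i]**2) % q == (-1) % q      # a_i^2 - b_i^2 = -1

for i in (1, 2):
    for j in (1, 2):
        if i == j:
            continue
        for s in (1, -1):
            d = (b[i] - b[j]) % q
            assert (a[i] + s * a[j]) % q != d           # a_i ± a_j ≠ b_i - b_j
            assert (a[i] + s * a[j]) % q != (-d) % q    # a_i ± a_j ≠ -(b_i - b_j)

print("Example 1 coefficients satisfy all conditions over F_7")
```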
\[Exm\_2\] For $k=3$, the $(5,3)$ Hadamard MSR code has the following coding matrices over $\mathbb{F}_{11}$ $$\begin{aligned} A_1 &=& 2\cdot\mathrm{diag}(1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1)+\\ &&7\cdot\mathrm{diag}(1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1)+I_{16}\\ A_2 &=& 2\cdot\mathrm{diag}(1,1,1,1,-1,-1,-1,-1,1,1,1,1,-1,-1,-1,-1)+\\ &&4\cdot\mathrm{diag}(1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1)+I_{16}\\ A_3 &=& 6\cdot\mathrm{diag}(1,1,1,1,1,1,1,1,-1,-1,-1,-1,-1,-1,-1,-1)+\\ &&2\cdot\mathrm{diag}(1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1)+I_{16}\end{aligned}$$ Properties about Hadamard design {#section_of_property} ================================ For $0\le i\le k$, to characterize the diagonal matrix $X_i$ in from Hadamard design, we define $\textbf{x}_i=(x^i_j)_{j=0}^{N-1}$ to be the row vector of length $N$ formed by its elements of the main diagonal, i.e., $$\begin{aligned} \label{Eqn_Xi} \textbf{x}_i=(\underbrace{\textbf{1}_{2^i},-\textbf{1}_{2^i},\cdots,\textbf{1}_{2^i},-\textbf{1}_{2^i}}\limits_{2^{k+1-i}})\end{aligned}$$ where $\textbf{1}_{2^i}$ is the all one row vector of length $2^{i}$. For example, when $k=2$, $$\begin{aligned} \textbf{x}_0 &=& (1,-1,1,-1,1,-1,1,-1)\\ \textbf{x}_1 &=& (1,1,-1,-1,1,1,-1,-1)\\ \textbf{x}_2 &=& (1,1,1,1,-1,-1,-1,-1)\end{aligned}$$ The following properties of $\textbf{x}_i$ are obvious: - **Alternative Property**: $x_j^i=-x_{j+2^i}^i$ for $0\le j< N-2^i$; - **Periodic Property**: $x_j^i=x_{j+2^{i+1}}^i$ for $0\le j< N-2^{i+1}$, i.e., $\textrm{x}_i$ has period $2^{i+1}$; - **Run Property**: $x_j^i=(-1)^{\lfloor{j/{2^i}}\rfloor}$ for $0\le j< N$, i.e., $\textrm{x}_i$ has $2^{k+1-i}$ runs of $1$ or $-1$ of length $2^i$; - **Skew-symmetric Property**: $x^i_j=-x^i_{N-1-j}$ for $0\le j< N$. Based on the above properties, we derive the following useful lemmas, which are crucial to our repair strategy. 
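These four properties, and the near-skew-symmetry lemma proved below, are easy to confirm mechanically. A small sketch, building $\textbf{x}_i$ directly from its run structure:

```python
import numpy as np

def x_vec(i, k):
    """x_i: 2^{k+1-i} alternating runs of +1 / -1, each of length 2^i (total length N = 2^{k+1})."""
    run = np.concatenate([np.ones(2**i, dtype=int), -np.ones(2**i, dtype=int)])
    return np.tile(run, 2 ** (k - i))

k = 3
N = 2 ** (k + 1)
for i in range(k + 1):
    x = x_vec(i, k)
    # Alternative property: x_j = -x_{j+2^i}
    assert all(x[j] == -x[j + 2**i] for j in range(N - 2**i))
    # Periodic property: x_i has period 2^{i+1}
    assert all(x[j] == x[j + 2**(i + 1)] for j in range(N - 2**(i + 1)))
    # Run property: x_j = (-1)^{floor(j / 2^i)}
    assert all(x[j] == (-1) ** (j // 2**i) for j in range(N))
    # Skew-symmetric property: x_j = -x_{N-1-j}
    assert all(x[j] == -x[N - 1 - j] for j in range(N))
    # Near-skew-symmetry lemma: x_{N-1-j-(-1)^j} = x_j if i = 0, else -x_j
    sign = 1 if i == 0 else -1
    assert all(x[N - 1 - j - (-1)**j] == sign * x[j] for j in range(N // 2))

print("all four properties (and the near-skew-symmetry lemma) hold for k =", k)
```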
\[lem\_pro\] For any $0\le i, l\le k$, $j=\mu 2^{l+1}+\nu$, $0\le \mu<2^{k-l}$, and $0\le \nu<{2^l}$, $$\begin{aligned} \label{Eqn_Had} x_j^i=\left\{\begin{array}{rl} -x_{j+2^{l}}^i, & i=l\\ x_{j+2^{l}}^i, &\textrm{otherwise} \end{array} \right.\end{aligned}$$ *Proof*: Firstly, when $i=l$, holds due to the alternative property. Secondly, when $i<l$, is true because of the periodic property. Thirdly, when $i>l$, write $\mu=\mu_0 2^{i-l-1}+\mu_1$ where $0\le\mu_0<2^{k+1-i}$ and $0\le \mu_1\le 2^{i-l-1}-1$, then $$\begin{aligned} \lfloor {j\over {2^i}}\rfloor=\lfloor { j+2^l\over 2^i} \rfloor=\mu_0\end{aligned}$$ since $0\le \mu_1 2^{l+1}+2^l+\nu\le 2^i-2^{l+1}+2^{l}+\nu <2^i$, which results in by the run property. $\Box$ \[lem\_near\_skew\_symmetry\] For any $0\le i\le k$ and $0\le j<N/2$, $$\begin{aligned} x_{N-1-j-(-1)^j}^i =\left\{ \begin{array}{rl} x_j^i, & i=0\\ -x_j^i, & 0<i\le k \end{array} \right.\end{aligned}$$ *Proof*: When $i=0$, the result directly follows from the periodic property that $\textrm{x}_0$ has period $2$ and $2|(N-1-2j-(-1)^j)$. When $0<i\le k$, let $j=\mu 2^i+\nu$ where $0\le\mu<2^{k+1-i}$ and $0\le \nu<2^{i}$. According to the run property, $x_j^i=(-1)^{\mu}$ and $$\begin{aligned} x_{N-1-j-(-1)^j}^i={(-1)}^{\lfloor {N-1-j-(-1)^j\over 2^i} \rfloor}=(-1)^{2^{k+1-i}-\lceil {1+j+(-1)^j\over 2^i}\rceil} =(-1)^{\lceil {1+j+(-1)^j\over 2^i}\rceil}\end{aligned}$$ If $j$ is even, $1+j+(-1)^j=j+2=\mu 2^i+\nu+2$, which implies $\lceil {1+j+(-1)^j\over 2^i}\rceil=\mu+1$ since $0\le \nu\le 2^i-2$ in this case. If $j$ is odd, $1+j+(-1)^j=j=\mu 2^i+\nu$, which still gives $\lceil {1+j+(-1)^j\over 2^i}\rceil=\mu+1$ since $1\le \nu\le 2^i-1$. 
Therefore we always have $$\begin{aligned} x_{N-1-j-(-1)^j}^i={(-1)}^{\mu+1}=-x_j^i\end{aligned}$$ $\Box$ Sylvester Hadamard matrices are one of the earliest infinite families of Hadamard matrices, recursively defined by $$H_1= \left( \begin{array}{cc} 1 & 1\\ 1 & -1 \end{array} \right)$$ and $$\label{Eqn_SyH} H_k= \left( \begin{array}{cc} H_{k-1} & H_{k-1}\\ H_{k-1} & -H_{k-1} \end{array} \right), ~k\ge 2.$$ Normally, when a $2^k\times 2^k$ matrix with each entry being $1$ or $-1$ is multiplied by a column vector of length $2^k$, no multiplications are needed, but $2^k(2^k-1)$ additions are. For the Sylvester Hadamard matrix, however, we can reduce the number of additions by means of the recursive structure. \[lem\_H\_multiply\_f\] Let $H_k$ be the Sylvester Hadamard matrix in and $\mathbf{z}$ be an arbitrary column vector of length $2^k$ where $k$ is a positive integer. Then, 1. To compute $H_k\cdot \mathbf{z}$, $k\cdot 2^k$ additions are needed; 2. To compute $(H_{k-1}~H_{k-1})\mathbf{z}$ or $(H_{k-1}~-H_{k-1})\mathbf{z}$, $k 2^k-2^{k-1}$ additions are needed. *Proof*: Let $\mathcal{N}_k$ denote the number of additions of $H_k\cdot \mathbf{z}$. \(1) We prove the first assertion by induction. Obviously, it is true for $k=1$, i.e., $\mathcal{N}_1=2$. Note that $$\begin{aligned} \label{Eqn_Hadamard_Comp} H_k\textbf{z} &=& \left( \begin{array}{cc} H_{k-1} & H_{k-1} \\ H_{k-1} & -H_{k-1} \end{array} \right) \left( \begin{array}{c} \textbf{z}^1 \\ \textbf{z}^2 \end{array} \right)\nonumber\\ &=& \left( \begin{array}{c} H_{k-1}\textbf{z}^1+H_{k-1}\textbf{z}^2 \\ H_{k-1}\textbf{z}^1-H_{k-1}\textbf{z}^2 \end{array} \right)\end{aligned}$$ where $\textbf{z}^1$ and $\textbf{z}^2$ are two column vectors of length $2^{k-1}$. Then, we have $$\begin{aligned} \mathcal{N}_k=2 \mathcal{N}_{k-1}+2^k=2^{k-1} \mathcal{N}_1+(k-1)2^k=k\cdot 2^k.\end{aligned}$$ \(2) The second assertion follows directly from .
$\Box$ Optimal repair strategy {#section_repair_strategy} ======================= Let $\{\textbf{e}_0,\cdots,\textbf{e}_{2^k-1}\}$ be a basis of $\mathbb{F}_q^{2^k}$. For example, it can simply be chosen as the standard basis $$\label{Eqn_Standard_Basis} \textbf{e}_i=(\underbrace{0,\cdots,0,1,0,\cdots,0}\limits_{2^k})^T$$ with only the $i$th entry being nonzero. In this section, we present our repair strategy for the systematic nodes, the first parity node, and the second parity node respectively, by giving the corresponding repair matrices, and then check the optimality. Optimal repair of systematic nodes {#subsection_repair_systematic_node} ---------------------------------- In order to repair the $i$th systematic node, $1\le i\le k$, one downloads the data $S_i \textbf{f}_l$, $1\le l\ne i\le k+2$, where the $N/2\times N$ repair matrix $S_i$ is $$\begin{aligned} \label{systematic_node_repair_matrix_element} S_i= (\underbrace{\textbf{e}_0,\cdots,\textbf{e}_{2^i-1}}\limits_{2^i},\underbrace{\textbf{e}_0,\cdots,\textbf{e}_{2^i-1}}\limits_{2^i},\cdots, \underbrace{\textbf{e}_{2^k-2^i},\cdots,\textbf{e}_{2^k-1}}\limits_{2^i},\underbrace{\textbf{e}_{2^k-2^i},\cdots,\textbf{e}_{2^k-1}}\limits_{2^i})\end{aligned}$$ Let $\textbf{s}^i_j$ be the $j$th column vector of $S_i$. Obviously, $\textbf{s}_j^i=\textbf{e}_{\mu 2^i+\nu}$ and $$\begin{aligned} \label{systematic_node_repair_matrix_element-1} \textbf{s}_{j+2^i}^i=\textbf{s}_j^i\end{aligned}$$ where $j=\mu 2^{i+1}+\nu$, $0\le\mu<2^{k-i}$ and $0\le \nu<2^{i}$. Then, the data from the two parity nodes are $$\begin{aligned} \label{Eqn_Sys_IF} \left( \begin{array}{c} S_i \\ S_iA_i \end{array} \right)\textbf{f}_i + \sum_{l=1,l\ne i}^k \left( \begin{array}{c} S_i \\ S_iA_l \end{array} \right)\textbf{f}_l\end{aligned}$$ where the second term is the interference resulting from the systematic nodes other than the failed one.
To cancel the interference and recover the data $\textbf{f}_i$, the optimal repair strategy requires [@hadamard] $$\begin{aligned} \label{repair_systematic_node_requirement1} \textrm{rank} \left( \begin{array}{c} S_i \\ S_iA_i \end{array} \right) = N\end{aligned}$$ and $$\begin{aligned} \label{repair_systematic_node_requirement2} \textrm{rank} \left( \begin{array}{c} S_i \\ S_iA_l \end{array} \right) = {N\over 2}\end{aligned}$$ for $1\le i\ne l\le k$. Multiplying $A_l$ by $S_i$, $1\le l\le k$, we get $$\begin{aligned} \label{Eqn_Sysm-1} S_i A_l=((a_lx_0^l+b_l x_0^0+1)\textbf{s}_0^i~\cdots~(a_lx_j^l+b_l x_j^0+1)\textbf{s}_j^i~\cdots~(a_lx_{N-1}^l+b_{l} x_{N-1}^0+1)\textbf{s}_{N-1}^i)\end{aligned}$$ Consider the submatrix of $\left( \begin{array}{c} S_i \\ S_iA_l \end{array} \right)$ formed by columns $j$ and $j+2^i$ where $j=\mu 2^{i+1}+\nu$, $0\le \mu< 2^{k-i}$ and $0\le \nu<2^i$, i.e., $$\begin{aligned} \label{Eqn_Sysm} \Delta_j=\left( \begin{array}{cc} \textbf{s}^i_j & \textbf{s}^i_{j+2^i}\\ (a_lx_j^l+b_l x_j^0+1)\textbf{s}^i_j & (a_lx_{j+2^i}^l+b_{l} x_{j+2^i}^0+1)\textbf{s}^i_{j+2^i} \end{array} \right)\end{aligned}$$ By Lemma \[lem\_pro\], and , we then have $$\begin{aligned} \textrm{rank} (\Delta_j) = \left\{ \begin{array}{ll} 2, & \textrm{if}~i=l\\ 1, & \textrm{otherwise} \end{array} \right.\end{aligned}$$ which results in and . 
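For concreteness, both rank conditions can be verified numerically for the $(4,2)$ code of Example \[Exm\_1\]. A sketch, building $S_i$ from eq. (\[systematic\_node\_repair\_matrix\_element\]) and using a small hand-rolled Gaussian elimination for the rank over $\mathbb{F}_7$:

```python
import numpy as np

q, k = 7, 2
N = 2 ** (k + 1)

def x_vec(i):
    run = np.concatenate([np.ones(2**i, dtype=int), -np.ones(2**i, dtype=int)])
    return np.tile(run, 2 ** (k - i))

I = np.eye(N, dtype=int)
# Example 1 coding matrices over F_7: A_i = a_i X_i + b_i X_0 + I
A = {1: (np.diag(x_vec(1)) + 3 * np.diag(x_vec(0)) + I) % q,
     2: (np.diag(x_vec(2)) + 4 * np.diag(x_vec(0)) + I) % q}

def S(i):
    """Repair matrix S_i: column j = e_{mu*2^i+nu} for j = mu*2^{i+1}+nu, each block repeated once."""
    cols = []
    for mu in range(2 ** (k - i)):
        block = list(range(mu * 2**i, (mu + 1) * 2**i))
        cols += block + block
    M = np.zeros((2**k, N), dtype=int)
    for j, c in enumerate(cols):
        M[c, j] = 1
    return M

def rank_mod(M, p):
    """Rank of an integer matrix over F_p by Gaussian elimination."""
    M = M.copy() % p
    r = 0
    for c in range(M.shape[1]):
        piv = next((row for row in range(r, M.shape[0]) if M[row, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        M[r] = (M[r] * pow(int(M[r, c]), p - 2, p)) % p
        for row in range(M.shape[0]):
            if row != r and M[row, c]:
                M[row] = (M[row] - M[row, c] * M[r]) % p
        r += 1
    return r

for i in (1, 2):
    l = 3 - i  # the other systematic node
    assert rank_mod(np.vstack([S(i), S(i) @ A[i] % q]), q) == N       # full rank: f_i recoverable
    assert rank_mod(np.vstack([S(i), S(i) @ A[l] % q]), q) == N // 2  # interference aligned
print("rank conditions hold for the (4,2) Hadamard MSR code")
```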
When $k=2$, for the $(4,2)$ Hadamard MSR code determined by the coding matrices given in Example \[Exm\_1\], the repair matrices of systematic nodes 1 and 2 are respectively $$\begin{aligned} S_1= \left( \begin{array}{llllllll} 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \end{array} \right), S_2= \left( \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array} \right)\end{aligned}$$ Optimal repair of the first parity node {#first_parity_node_repair_subsection} --------------------------------------- In order to repair the first parity node, we need the following transformation $$\begin{aligned} \textbf{y}_1&=&\textbf{f}_1+\cdots+\textbf{f}_k\\ \textbf{y}_i&=&-\textbf{f}_i,~~2\le i\le k\end{aligned}$$ Let $\textbf{y}=[\textbf{y}_1^T,\cdots,\textbf{y}_k^T]^T$. The storage code can then be described as $$\begin{aligned} \left( \begin{array}{c} \textbf{f}_{k+1}\\ -\textbf{f}_2\\ \vdots\\ -\textbf{f}_{k}\\ \textbf{f}_1\\ \textbf{f}_{k+2} \end{array} \right) =\left( \begin{array}{cccc} I_N & 0_N & \cdots & 0_N \\ 0_N & I_N & \cdots & 0_N \\ \vdots & \vdots & \ddots & \vdots \\ 0_N & 0_N & \cdots & I_N \\ I_N & I_N & \cdots & I_N \\ A_1 & A_1-A_2 & \cdots & A_1-A_k \end{array} \right) \cdot \textbf{y}\end{aligned}$$ where the first systematic node and the first parity node are exchanged. 
Thus, it suffices to repair the new first systematic node by respectively downloading data $S \textbf{f}_i$, $1\le i\le k$, and $\tilde{S}\textbf{f}_{k+2}$, where the repair matrices $S$ and $\tilde{S}$ are $$\begin{aligned} S&=& (\underbrace{\textbf{e}_0,\textbf{e}_1,\cdots,\textbf{e}_{2^k-2},\textbf{e}_{2^k-1}}\limits_{2^k},\underbrace{\textbf{e}_{2^k-1},\textbf{e}_{2^k-2},\cdots,\textbf{e}_1,\textbf{e}_0}\limits_{2^k})\\ \tilde{S}&=& (\underbrace{\textbf{e}_0,\textbf{e}_1,\cdots,\textbf{e}_{2^k-2},\textbf{e}_{2^k-1}}\limits_{2^k},\underbrace{-\textbf{e}_{2^k-1},-\textbf{e}_{2^k-2},\cdots,-\textbf{e}_1,-\textbf{e}_0}\limits_{2^k})\end{aligned}$$ with the $j$th columns $\textbf{s}_j$ and $\tilde{\textbf{s}}_j$ satisfying $$\begin{aligned} \label{Eqn_1st-Pnode} \textbf{s}_j&=&\textbf{s}_{N-1-j}\nonumber\\ \tilde{\textbf{s}}_j&=&-\tilde{\textbf{s}}_{N-1-j}\end{aligned}$$ for $0\le j< N$. Then, the data from the new first parity node and the second parity node can be expressed as $$\begin{aligned} \label{Eqn_download-1} \left( \begin{array}{c} S \\ \tilde{S}A_1 \end{array} \right)\textbf{f}_{k+1} - \sum_{l=2}^k \left( \begin{array}{c} S \\ \tilde{S} (A_1-A_l) \end{array} \right)\textbf{f}_l.\end{aligned}$$ The optimal repair strategy requires [@hadamard] $$\begin{aligned} \label{repair_1parity_node_requirement1} \textrm{rank} \left( \begin{array}{c} S \\ \tilde{S}A_1 \end{array} \right) = N\end{aligned}$$ and $$\begin{aligned} \label{repair_1parity_node_requirement2} \textrm{rank} \left( \begin{array}{c} S \\ \tilde{S} (A_1-A_l) \end{array} \right) = {N\over 2}\end{aligned}$$ for $2\le l\le k$. 
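The mirror symmetry of $S$ and $\tilde{S}$ that drives this repair can be checked programmatically. In this sketch (our own representation, not from the paper) a signed pair $(\mathrm{idx},\mathrm{sgn})$ stands for the column $\mathrm{sgn}\cdot\textbf{e}_{\mathrm{idx}}$.

```python
# Hypothetical sketch: a signed pair (idx, sgn) represents the column sgn * e_idx.
def first_parity_repair_columns(k):
    """Columns of S and S~ for repairing the first parity node: the basis in
    natural order, followed by the basis in reversed order (negated in S~)."""
    half = 2 ** k  # number of basis vectors, = N/2
    S = [(j, 1) for j in range(half)] + [(half - 1 - j, 1) for j in range(half)]
    St = [(j, 1) for j in range(half)] + [(half - 1 - j, -1) for j in range(half)]
    return S, St

for k in (2, 3):
    N = 2 ** (k + 1)
    S, St = first_parity_repair_columns(k)
    for j in range(N):
        # the defining symmetries s_j = s_{N-1-j} and s~_j = -s~_{N-1-j}
        assert S[j] == S[N - 1 - j]
        assert St[j] == (St[N - 1 - j][0], -St[N - 1 - j][1])
```

For $k=2$ these signed columns coincide with the matrices $S$ and $\tilde{S}$ shown in the $(4,2)$ example for the first parity node.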
According to and , we investigate $$\begin{aligned} \left(\begin{array}{c} S\\ \tilde{S} A_1 \end{array}\right)=\left(\begin{array}{ccccc} \textbf{s}_{0}& \cdots & \textbf{s}_{j} & \cdots & \textbf{s}_{N-1}\\ (a_1 x_{0}^1+b_{1} x_{0}^0+1)\tilde{\textbf{s}}_{0} & \cdots & (a_1 x_{j}^1+b_{1} x_{j}^0+1)\tilde{\textbf{s}}_{j}&\cdots & (a_1 x_{N-1}^1+b_{1} x_{N-1}^0+1)\tilde{\textbf{s}}_{N-1}\\ \end{array}\right)\end{aligned}$$ and $$\begin{aligned} &&\left(\begin{array}{c} S\\ \tilde{S} (A_1-A_l) \end{array}\right)\\ &=&\left(\begin{array}{ccc} \textbf{s}_{0}& \cdots & \textbf{s}_{j}\\ (a_1x_{0}^1+(b_{1}-b_{l}) x_{0}^0-a_lx_{0}^l)\textbf{s}_{0} & \cdots & (a_1x_j^1+(b_{1}-b_{l}) x_{j}^0-a_l x_{j}^l)\tilde{\textbf{s}}_{j}\end{array}\right.\\ &&\left.\begin{array}{cccc} &&\cdots & \textbf{s}_{N-1}\\ &&\cdots & (a_1 x^1_{N-1}+(b_{1}-b_{l}) x_{N-1}^0-a_l x_{N-1}^l)\tilde{\textbf{s}}_{N-1} \end{array}\right)\end{aligned}$$ The submatrices formed by columns $j$ and $N-1-j$, $0\le j<N/2$, are respectively $$\begin{aligned} \Delta_j&=&\left(\begin{array}{cc} \textbf{s}_{j} & \textbf{s}_{N-1-j}\\ (a_1 x_{j}^1+b_{1} x_{j}^0+1)\tilde{\textbf{s}}_{j} & (a_1 x_{N-1-j}^1+b_{1} x_{N-1-j}^0+1)\tilde{\textbf{s}}_{N-1-j} \end{array}\right)\\ &=&\left(\begin{array}{cc} \textbf{s}_{j} & \textbf{s}_{j}\\ (a_1 x_{j}^1+b_{1} x_{j}^0+1)\tilde{\textbf{s}}_{j} & (a_1 x_{j}^1+b_{1} x_{j}^0-1)\tilde{\textbf{s}}_{j} \end{array}\right)\end{aligned}$$ and $$\begin{aligned} \label{Eqn_Parity-1} \Gamma_j&=&\left(\begin{array}{cc} \textbf{s}_{j} & \textbf{s}_{N-1-j}\\ (a_1x_j^1+(b_{1}-b_{l}) x_{j}^0-a_l x_{j}^l)\tilde{\textbf{s}}_{j} & (a_1x_{N-1-j}^1+(b_{1}-b_{l}) x_{N-1-j}^0-a_l x_{N-1-j}^l)\tilde{\textbf{s}}_{N-1-j} \end{array}\right)\nonumber\\ &=&\left(\begin{array}{cc} \textbf{s}_{j} & \textbf{s}_{j}\\ (a_1x_j^1+(b_{1}-b_{l}) x_{j}^0-a_l x_{j}^l)\tilde{\textbf{s}}_{j}& (a_1x_j^1+(b_{1}-b_{l}) x_{j}^0-a_l x_{j}^l)\tilde{\textbf{s}}_{j} \end{array}\right)\end{aligned}$$ by the skew-symmetric 
property and . In other words, $$\begin{aligned} \textrm{rank}(\Delta_j)=2,~~~\textrm{rank}(\Gamma_j)=1\end{aligned}$$ which leads to and . When $k=2$, for the $(4,2)$ Hadamard MSR code determined by the coding matrices given in Example \[Exm\_1\], the repair matrices of the first parity node are $$\begin{aligned} S= \left( \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\ \end{array} \right), \tilde{S}= \left( \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1\\ 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0\\ 0 & 0 & 1 & 0 & 0 & -1 & 0 & 0\\ 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0\\ \end{array} \right)\end{aligned}$$ Optimal repair of the second parity node ---------------------------------------- Similar to the repair of the first parity node, the second parity node can be regarded as the first systematic node by the following transformation $$\begin{aligned} \textbf{y}_1&=&A_1\textbf{f}_1+\cdots+A_k\textbf{f}_k \\ \textbf{y}_i&=&-A_i \textbf{f}_i ,~2\le i\le k\end{aligned}$$ Let $\textbf{y}=[\textbf{y}_1^T,\cdots,\textbf{y}_k^T]^T$. With this transformation, the storage code can be described as $$\begin{aligned} \left( \begin{array}{c} \textbf{f}_{k+2}\\ -A_2\textbf{f}_2\\ \vdots\\ -A_kf_{k}\\ A_1\textbf{f}_{1}\\ \textbf{f}_{k+1} \end{array} \right)=\left( \begin{array}{cccc} I_N & 0_N & \cdots & 0_N \\ 0_N & I_N & \cdots & 0_N \\ \vdots & \vdots & \ddots & \vdots \\ 0_N & 0_N & \cdots & I_N \\ I_N & I_N & \cdots & I_N \\ A_1^{-1} & A_1^{-1}-A_2^{-1} & \cdots & A_1^{-1}-A_k^{-1} \end{array} \right) \cdot \textbf{y}\end{aligned}$$ where the three nodes, i.e., the first systematic node, the first parity node and the second parity node, are cyclically shifted. 
Hence, it is sufficient to repair the new first systematic node by downloading data $S A_i\textbf{f}_i$, $1\le i\le k$, and $\tilde{S}\textbf{f}_{k+1}$, where the two repair matrices $S$ and $\tilde{S}$ are $$\begin{aligned} S&=& (\underbrace{\textbf{e}_0,\textbf{e}_1,\cdots,\textbf{e}_{2^k-2},\textbf{e}_{2^k-1}}\limits_{2^k},\underbrace{\textbf{e}_{2^k-2},\textbf{e}_{2^k-1},\cdots,\textbf{e}_0,\textbf{e}_1}\limits_{2^k})\\ \tilde{S}&=& (\underbrace{\textbf{e}_0,\textbf{e}_1,\cdots,\textbf{e}_{2^k-2},\textbf{e}_{2^k-1}}\limits_{2^k},\underbrace{-\textbf{e}_{2^k-2},-\textbf{e}_{2^k-1},\cdots,-\textbf{e}_0,-\textbf{e}_1}\limits_{2^k})\end{aligned}$$ with the $j$th columns $\textbf{s}_j$ and $\tilde{\textbf{s}}_j$ being $$\begin{aligned} \textbf{s}_j&=& \left\{ \begin{array}{ll} \textbf{e}_j,&0\le j<N/2\\ \textbf{e}_{N-1-j-(-1)^j}, &N/2 \le j< N \end{array} \right. \\ \tilde{\textbf{s}}_j&=& \left\{ \begin{array}{ll} \textbf{e}_j,&0\le j<N/2\\ -\textbf{e}_{N-1-j-(-1)^j}, &N/2 \le j< N \end{array} \right.\end{aligned}$$ satisfying $$\begin{aligned} \label{first_parity_node_repair_matrix_property} \begin{array}{ccc} \textbf{s}_j & = & \textbf{s}_{N-1-j-(-1)^j}\\ \tilde{\textbf{s}}_j & = & -\tilde{\textbf{s}}_{N-1-j-(-1)^j} \end{array}\end{aligned}$$ for $0\le j<N/2$. 
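The pairing $j \mapsto N-1-j-(-1)^j$ used here is an involution that matches each column in the first half with one in the second half. The sketch below (our own notation; signed pairs $(\mathrm{idx},\mathrm{sgn})$ stand for $\mathrm{sgn}\cdot\textbf{e}_{\mathrm{idx}}$) builds $S$ and $\tilde{S}$ from the piecewise definition and verifies the stated symmetries.

```python
# Sketch of the column pairing for the second parity node.
def partner(j, N):
    """The partner column N - 1 - j - (-1)^j; the map is an involution that
    sends the first half of the columns onto the second half."""
    return N - 1 - j - (-1) ** j

def second_parity_repair_columns(k):
    """Signed columns (idx, sgn) of S and S~ from the piecewise definition."""
    N = 2 ** (k + 1)
    S, St = [], []
    for j in range(N):
        if j < N // 2:
            S.append((j, 1))
            St.append((j, 1))
        else:
            S.append((partner(j, N), 1))
            St.append((partner(j, N), -1))
    return S, St

k = 2
N = 2 ** (k + 1)
S, St = second_parity_repair_columns(k)
for j in range(N // 2):
    p = partner(j, N)
    assert N // 2 <= p < N            # partner lies in the second half
    assert partner(p, N) == j         # involution
    assert S[j] == S[p]               # s_j = s_{N-1-j-(-1)^j}
    assert St[j][1] == -St[p][1]      # s~_j = -s~_{N-1-j-(-1)^j}
# matches the k = 2 example: e_0 sits in columns 0 and 6, e_1 in 1 and 7, etc.
assert [idx for idx, _ in S] == [0, 1, 2, 3, 2, 3, 0, 1]
```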
Then, the data from the new first parity node and the new second parity node can be expressed as $$\begin{aligned} \label{Eqn_download-1} \left( \begin{array}{c} S \\ \tilde{S}A_1^{-1} \end{array} \right)\textbf{f}_{k+2} - \sum_{l=2}^k \left( \begin{array}{c} S \\ \tilde{S} (A_1^{-1}-A_l^{-1}) \end{array} \right)A_l\textbf{f}_l\end{aligned}$$ The optimal repair strategy requires [@hadamard] $$\begin{aligned} \label{repair_2parity_node_requirement1} \textrm{rank} \left( \begin{array}{c} S \\ \tilde{S}A_1^{-1} \end{array} \right) = N\end{aligned}$$ and $$\begin{aligned} \label{repair_2parity_node_requirement2} \textrm{rank} \left( \begin{array}{c} S \\ \tilde{S} (A_1^{-1}-A_l^{-1}) \end{array} \right) = {N\over 2}\end{aligned}$$ for $2\le l\le k$. By and , we need to discuss $\left( \begin{array}{c} S \\ \tilde{S}A_1^{-1} \end{array} \right)$ and $\left( \begin{array}{c} S \\ \tilde{S}(A_1^{-1}-A_l^{-1}) \end{array} \right) $ where $$\begin{aligned} \label{inverse A_i} A_i^{-1} &=& 2^{-1}(I_N-a_i^{-1}b_iX_0X_i+a_i^{-1}X_i)\\ &=& 2^{-1}I_N+2^{-1}a_i^{-1}X_i(I_N-b_iX_0) ,~1\le i\le k \nonumber\end{aligned}$$ and $$\begin{aligned} \label{inverse A_i_subtract_inverse_A_1} A_1^{-1}-A_l^{-1} &=& 2^{-1}(a_l^{-1}b_lX_0X_l-a_1^{-1}b_1X_0X_1+a_1^{-1}X_1-a_l^{-1}X_l)\\ &=& 2^{-1}a_1^{-1}X_1(I_N-b_1X_0)-2^{-1}a_l^{-1}X_l(I_N-b_lX_0),~2\le l\le k\nonumber\end{aligned}$$ according to [@hadamard]. For simplicity of the characterization of the matrices $A_1^{-1}$ and $A_1^{-1}-A_l^{-1}$, we define $$\begin{aligned} p^1_j&=&2^{-1}+2^{-1}a_1^{-1}x^1_j(1-b_1 x^0_j)\\ q^l_j&=&2^{-1}a_1^{-1}x^1_j(1-b_1x^0_j)-2^{-1}a_l^{-1}x^l_j(1-b_lx^0_j)\end{aligned}$$ where $1< l\le k$ and $0\le j<{N}$. By Lemma \[lem\_near\_skew\_symmetry\], we have $$\begin{aligned} p^1_{N-1-j-(-1)^j} &=&2^{-1}-2^{-1}a_1^{-1}x^1_j(1-b_1x^0_j)\\ &=&-p^1_j+1\end{aligned}$$ and $$\begin{aligned} q^l_{N-1-j-(-1)^j} &=&-q^l_j\end{aligned}$$ for $0\le j<N/2$. 
For $0\le j<N/2$, consider the submatrices formed by columns $j$ and $N-1-j-(-1)^j$ in matrices $\left( \begin{array}{c} S \\ \tilde{S}A_1^{-1} \end{array} \right)$ and $\left( \begin{array}{c} S \\ \tilde{S}(A_1^{-1}-A_l^{-1}) \end{array} \right) $, i.e., $$\begin{aligned} \Delta_j &=& \left( \begin{array}{cc} \textbf{s}_j & \textbf{s}_{N-1-j-(-1)^j}\\ p^1_j\tilde{\textbf{s}}_j & p^1_{N-1-j-(-1)^j}\tilde{\textbf{s}}_{N-1-j-(-1)^j} \end{array} \right)\\ &=& \left( \begin{array}{cc} \textbf{s}_j & \textbf{s}_j\\ p^1_j\tilde{\textbf{s}}_j & p^1_j\tilde{\textbf{s}}_j-\tilde{\textbf{s}}_j \end{array} \right)\end{aligned}$$ and $$\begin{aligned} \Gamma_j &=& \left( \begin{array}{cc} \textbf{s}_j & \textbf{s}_{N-1-j-(-1)^j}\\ q^l_j\tilde{\textbf{s}}_j & q^l_{N-1-j-(-1)^j}\tilde{\textbf{s}}_{N-1-j-(-1)^j} \end{array} \right)\\ &=&\left( \begin{array}{cc} \textbf{s}_j & \textbf{s}_j\\ q^l_j\tilde{\textbf{s}}_j & q^l_j\tilde{\textbf{s}}_j \end{array}\right)\end{aligned}$$ That is, $\textrm{rank}(\Delta_j)=2$ and $\textrm{rank}(\Gamma_j)=1$, which gives and . When $k=2$, for the $(4,2)$ Hadamard MSR code determined by the coding matrices given in Example \[Exm\_1\], the repair matrices of the second parity node are $$\begin{aligned} S= \left( \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \end{array} \right), \tilde{S}= \left( \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & -1 & 0 & 0 \end{array} \right)\end{aligned}$$ Comparison {#section_of_comparison} ========== In fact, in the original repair strategy [@hadamard] the basis $\{\textbf{e}_0,\cdots,\textbf{e}_{2^k-1}\}$ is chosen as the column vectors of the Sylvester Hadamard matrix in . In contrast, for our strategy, $\{\textbf{e}_0,\cdots,\textbf{e}_{2^k-1}\}$ can be any basis of $\mathbb{F}_q^{2^k}$.
In this sense, our new repair strategy generalizes the previous one in [@hadamard]. Most importantly, by choosing the standard basis in , our new repair strategy can considerably reduce the computation, in terms of both additions and multiplications, compared with the original repair strategy in [@hadamard]. Indeed, the decrease comes from the fact that each row of our new repair matrices has $2$ nonzero elements of $1$ or $-1$, whereas each row of the original matrices has $N$ nonzero elements of $1$ or $-1$. The computation of node repair consists of three phases: download, interference cancellation, and recovery. In what follows, we investigate it case by case. **Case 1. Computation load of the repair of systematic nodes** Since each $S_i \cdot\textbf{f}_l$ needs $N/2$ additions, the new strategy needs $(k+1)N/2$ additions in the download phase. When $i\ne l$, note that $S_iA_l$ in has only two nonzero elements in each row, which indicates that there exists an $N/2\times N/2$ matrix $$\begin{aligned} \label{Eqn_simp_comp} B_l=\textrm{diag}(p^l_0,\cdots,p^l_{N/2-1})\end{aligned}$$ where $p^{l}_{\mu 2^{i}+\nu}=a_{l}x^l_{\mu 2^{i+1}+\nu}+b_l x^0_{\mu 2^{i+1}+\nu}+1$, $0\le\mu<2^{k-i}$ and $0\le\nu<2^i$, such that $S_iA_l=B_l S_i$. Hence, the new strategy needs $(k-1)N$ additions and at most $(k-1)N/2$ multiplications to cancel the interference term in . In the recovery phase, $N$ additions and at most $2N$ multiplications are needed for the new strategy since the matrix $\left( \begin{array}{c} S_i \\ S_iA_i \end{array} \right)^{-1}$ still has only two nonzero elements in each row. Therefore, in total $(3k+1)N/2$ additions and at most $(k+3)N/2$ multiplications are needed for the new strategy.
For the original strategy, the download phase requires $(k+1)(2k+1)N/2$ additions by Lemma \[lem\_H\_multiply\_f\], since $S_i$ is equivalent to $(H_{k}~H_{k})$ up to a column permutation; the interference cancellation phase requires at most $(k-1)(N/2+1)N/2$ additions and $(k-1)N^2/4$ multiplications; the recovery phase requires $N(N-1)$ additions and at most $N^2$ multiplications. Thus, in total at most $(k+3)N^2/4+(k^2+2k-1)N$ additions and $(k+3)N^2/4$ multiplications are needed. **Case 2. Computation load of the repair of the first parity node** Similar to Case 1, the new strategy needs $(3k+1)N/2$ additions and at most $(k+3)N/2$ multiplications because (1) $\tilde{S} \cdot\textbf{f}_{k+2}$ needs $N/2$ additions, the same as $S\cdot\textbf{f}_i$, $1\le i\le k$; (2) for $2\le l\le k$ there exists an $N/2\times N/2$ matrix $$\begin{aligned} \label{Eqn_simp_comp-2} B_l=\textrm{diag}(a_1x_0^1+(b_{1}-b_{l}) x_{0}^0-a_l x_{0}^l, \cdots, a_1x_{N/2-1}^1+(b_{1}-b_{l}) x_{N/2-1}^0-a_l x_{N/2-1}^l)\end{aligned}$$ such that $\tilde{S} (A_1-A_l)=B_l S$ by ; (3) the matrix $\left( \begin{array}{c} S \\ \tilde{S}A_1 \end{array} \right)^{-1}$ has only two nonzero elements in each row. For the original strategy, at most $(k+3)N^2/4+(k^2+2k-1)N$ additions and $(k+3)N^2/4$ multiplications are required. **Case 3. Computation load of the repair of the second parity node** During the download phase, the new strategy needs $(k+1)N/2$ additions and at most $kN$ multiplications since (1) $S A_i\cdot\textbf{f}_i$, $1\le i\le k$, needs $N/2$ additions and at most $N$ multiplications; (2) $\tilde{S}\cdot \textbf{f}_{k+1}$ needs $N/2$ additions. In the interference cancellation and recovery phases, the computation can be analyzed in the same fashion as in Case 1. Hence, in total $(3k+1)N/2$ additions and at most $(3k+3)N/2$ multiplications are needed for the new strategy.
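The new-strategy totals derived in the cases above, together with the corresponding upper bounds for the original strategy, can be evaluated mechanically. The sketch below is ours, not the paper's; note that the MUL formulas are only upper bounds, so the exact counts in the numerical examples may be smaller.

```python
# Sketch: closed-form operation counts, with N = 2^(k+1).
def new_strategy_counts(k):
    """(ADD, MUL-bound) per repaired node for the new strategy."""
    N = 2 ** (k + 1)
    return {
        "systematic": ((3 * k + 1) * N // 2, (k + 3) * N // 2),
        "parity1":    ((3 * k + 1) * N // 2, (k + 3) * N // 2),
        "parity2":    ((3 * k + 1) * N // 2, (3 * k + 3) * N // 2),
    }

def original_strategy_bounds(k):
    """(ADD-bound, MUL-bound) per repaired node for the original strategy."""
    N = 2 ** (k + 1)
    sys_add = (k + 3) * N * N // 4 + (k * k + 2 * k - 1) * N
    return {
        "systematic": (sys_add, (k + 3) * N * N // 4),
        "parity1":    (sys_add, (k + 3) * N * N // 4),
        "parity2":    ((3 * k + 3) * N * N // 4 + (2 * k - 2) * N // 2,
                       (3 * k + 3) * N * N // 4),
    }

# k = 2 (N = 8): every repair needs (3k+1)N/2 = 28 additions with the new
# strategy; the original one needs up to 136 (systematic/parity 1) or 152.
assert all(a == 28 for a, _ in new_strategy_counts(2).values())
assert all(a == 80 for a, _ in new_strategy_counts(3).values())  # k = 3, N = 16
assert original_strategy_bounds(2)["parity2"] == (152, 144)
```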
For the original strategy, at most $(3k+3)N^2/4+(2k-2)N/2$ additions and $(3k+3)N^2/4$ multiplications are needed. The above comparison is summarized in Table 2, where ADD and MUL denote the numbers of additions and multiplications, respectively. The exact number of additions and multiplications depends on the concrete values of $a_l, b_l$, $1\le l\le k$, and the finite field ${\mathbb{F}}_q$. For the new strategy, the number of multiplications can be further reduced if we set $a_l\pm b_l=\pm 2$ or $a_1 \pm (b_{1}-b_{l})\pm a_l =\pm 1$ so that some entries of the diagonal matrix $B_l$ given by or equal $1$ or $-1$, which is feasible by equations (81) and (82) in [@hadamard]. As for the original strategy, a similar reduction seems hard to achieve because there are too many nonzero entries in the Sylvester Hadamard matrix. \[Table\_Comparison\]

  ------------ ---------- ------------------------------ --------------------
  Node         Repair     ADD                            MUL
  to repair    strategy
  ------------ ---------- ------------------------------ --------------------
  Systematic   New        $(3k+1)N/2$                    $\le (k+3)N/2$
  node         Original   $\le (k+3)N^2/4+(k^2+2k-1)N$   $\le (k+3)N^2/4$
  Parity       New        $(3k+1)N/2$                    $\le (k+3)N/2$
  node 1       Original   $\le (k+3)N^2/4+(k^2+2k-1)N$   $\le (k+3)N^2/4$
  Parity       New        $(3k+1)N/2$                    $\le (3k+3)N/2$
  node 2       Original   $\le (3k+3)N^2/4+(2k-2)N/2$    $\le (3k+3)N^2/4$
  ------------ ---------- ------------------------------ --------------------

Finally, we give two examples to compare the computation load of our new strategy and the original strategy, for the two concrete values $k=2$ and $k=3$. It can be seen that our new repair strategy needs much less computation. \[Exm\_computation\_k\_2\] When $k=2$, for the $(4,2)$ Hadamard MSR code determined by the coding matrices given in Example \[Exm\_1\], the computation load is given in Table 3.
  ------------ ---------- ------ ------
  Node         Repair     ADD    MUL
  to repair    strategy
  ------------ ---------- ------ ------
  Systematic   New        28     17
  node 1       Original   132    28
  Systematic   New        28     17
  node 2       Original   132    28
  Parity       New        28     15
  node 1       Original   132    24
  Parity       New        28     20
  node 2       Original   152    120
  ------------ ---------- ------ ------

\[Exm\_computation\_k\_3\] When $k=3$, for the $(5,3)$ Hadamard MSR code determined by the coding matrices given in Example \[Exm\_2\], the computation load is given in Table 4.

  ------------ ---------- ------ ------
  Node         Repair     ADD    MUL
  to repair    strategy
  ------------ ---------- ------ ------
  Systematic   New        80     42
  node 1       Original   528    128
  Systematic   New        80     42
  node 2       Original   528    128
  Systematic   New        80     28
  node 3       Original   528    256
  Parity       New        80     44
  node 1       Original   528    272
  Parity       New        80     66
  node 2       Original   736    576
  ------------ ---------- ------ ------

Conclusion {#section_of_conclusion} ========== In this paper, a new repair strategy for the Hadamard MSR code was presented, which can be regarded as a generalization of the original repair strategy. By choosing the standard basis, our strategy can dramatically decrease the computation load compared with the original one. [99]{} A.G. Dimakis, P.B. Godfrey, Y. Wu, M.J. Wainwright, and K. Ramchandran, “Network coding for distributed storage systems,” *IEEE Trans. Inf. Theory*, vol. 56, no. 9, pp. 4539-4551, Sep. 2010. J.H. Dinitz and D.R. Stinson, “A brief introduction to design theory,” in *Contemporary Design Theory: A Collection of Surveys*, J.H. Dinitz and D.R. Stinson, Eds. New York: Wiley, 1992, chap. 1, pp. 1-12. Google-GFS2 Colossus, *http://www.quora.com/Colossus-Google-GFS2*, Google, 2012. HDFS-Raid, *http://wiki.apache.org/hadoop/HDFS-RAID*. C. Huang, H. Simitci, Y. Xu, A. Ogus, B. Calder, P. Gopalan, J. Li, and S.
Yekhanin, “Erasure coding in Windows Azure storage,” in *Proceedings of the USENIX Annual Technical Conference (ATC)*, 2012. J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, W. Weimer, C. Wells, and B. Zhao, “OceanStore: An architecture for global-scale persistent storage,” in *Proceedings of the 9th ACM International Conference on Architectural Support for Programming Languages and Operating Systems*, pp. 190-201, Boston, MA, Nov. 2000. D.S. Papailiopoulos, A.G. Dimakis, and V.R. Cadambe, “Repair optimal erasure codes through Hadamard designs,” *IEEE Trans. Inf. Theory*, vol. 59, no. 5, May 2013. I. Tamo, Z. Wang, and J. Bruck, “Zigzag codes: MDS array codes with optimal rebuilding,” *IEEE Trans. Inf. Theory*, vol. 59, no. 3, Mar. 2013. Z.Y. Wang, I. Tamo, and J. Bruck, “Long MDS codes for optimal repair bandwidth,” in *Proceedings of the IEEE International Symposium on Information Theory (ISIT)*, July 2012. [^1]: The authors are with the Information Security and National Computing Grid Laboratory, Southwest Jiaotong University, Chengdu, 610031, China (e-mail: xhutang@swjtu.edu.cn, metroyb@gmail.com, jieli873@gmail.com).
--- abstract: 'This is the text of my abstract. It is a brief description of my paper, outlining the purposes and goals I am trying to address.' author: - 'Corey Gray[^1]\' - 'Tricia Manning[^2]' title: 'SIAM/ACM Preprint Series Macros for Use With LaTeX[^3]' --- Problem Specification. ====================== In this paper, we consider the solution of the $N \times N$ linear system $$\label{e1.1} A x = b$$ where $A$ is large, sparse, symmetric, and positive definite. We consider the direct solution of (\[e1.1\]) by means of general sparse Gaussian elimination. In such a procedure, we find a permutation matrix $P$, and compute the decomposition $$P A P^{t} = L D L^{t}$$ where $L$ is unit lower triangular and $D$ is diagonal. Design Considerations. ====================== Several good ordering algorithms (nested dissection and minimum degree) are available for computing $P$ [@GEORGELIU], [@ROSE72]. Since our interest here does not focus directly on the ordering, we assume for convenience that $P=I$, or that $A$ has been preordered to reflect an appropriate choice of $P$. Our purpose here is to examine the nonnumerical complexity of the sparse elimination algorithm given in [@BANKSMITH]. As was shown there, a general sparse elimination scheme based on the bordering algorithm requires less storage for pointers and row/column indices than more traditional implementations of general sparse elimination. This is accomplished by exploiting the m-tree, a particular spanning tree for the graph of the filled-in matrix. The method was extended to three dimensions. For the standard multigrid coarsening (in which, for a given grid, the next coarser grid has $1/8$ as many points), anisotropic problems require plane relaxation to obtain a good smoothing factor. Our purpose here is to examine the nonnumerical complexity of the sparse elimination algorithm given in [@BANKSMITH]. 
As was shown there, a general sparse elimination scheme based on the bordering algorithm requires less storage for pointers and row/column indices than more traditional implementations of general sparse elimination. This is accomplished by exploiting the m-tree, a particular spanning tree for the graph of the filled-in matrix. Several good ordering algorithms (nested dissection and minimum degree) are available for computing $P$ [@GEORGELIU], [@ROSE72]. Since our interest here does not focus directly on the ordering, we assume for convenience that $P=I$, or that $A$ has been preordered to reflect an appropriate choice of $P$. In this paper we consider two methods. The first method is basically the method considered with two differences: first, we perform plane relaxation by a two-dimensional multigrid method, and second, we use a slightly different choice of interpolation operator, which improves performance for nearly singular problems. In the second method coarsening is done by successively coarsening in each of the three independent variables and then ignoring the intermediate grids; this artifice simplifies coding considerably. Our purpose here is to examine the nonnumerical complexity of the sparse elimination algorithm given in [@BANKSMITH]. As was shown there, a general sparse elimination scheme based on the bordering algorithm requires less storage for pointers and row/column indices than more traditional implementations of general sparse elimination. This is accomplished by exploiting the m-tree, a particular spanning tree for the graph of the filled-in matrix. [We describe the two methods in §1.2. In § 1.3. we discuss some remaining details.]{} Our purpose here is to examine the nonnumerical complexity of the sparse elimination algorithm given in [@BANKSMITH]. 
As was shown there, a general sparse elimination scheme based on the bordering algorithm requires less storage for pointers and row/column indices than more traditional implementations of general sparse elimination. This is accomplished by exploiting the m-tree, a particular spanning tree for the graph of the filled-in matrix. Several good ordering algorithms (nested dissection and minimum degree) are available for computing $P$ [@GEORGELIU], [@ROSE72]. Since our interest here does not focus directly on the ordering, we assume for convenience that $P=I$, or that $A$ has been preordered to reflect an appropriate choice of $P$. Our purpose here is to examine the nonnumerical complexity of the sparse elimination algorithm given in [@BANKSMITH]. As was shown there, a general sparse elimination scheme based on the bordering algorithm requires less storage for pointers and row/column indices than more traditional implementations of general sparse elimination. We discuss first the choice for $I_{k-1}^k$ which is a generalization. We assume that $G^{k-1}$ is obtained from $G^k$ by standard coarsening; that is, if $G^k$ is a tensor product grid $G_{x}^k \times G_{y}^k \times G_{z}^k$, $G^{k-1}=G_{x}^{k-1} \times G_{y}^{k-1} \times G_{z}^{k-1}$, where $G_{x}^{k-1}$ is obtained by deleting every other grid point of $G_x^k$ and similarly for $G_{y}^k$ and $G_{z}^k$. To our knowledge, the m-tree previously has not been applied in this fashion to the numerical factorization, but it has been used, directly or indirectly, in several optimal order algorithms for computing the fill-in during the symbolic factorization phase \[4\] - \[10\], \[5\], \[6\]. In §1.3., we analyze the complexity of the old and new approaches to the intersection problem for the special case of an $n \times n$ grid ordered by nested dissection. The special structure of this problem allows us to make exact estimates of the complexity. 
To our knowledge, the m-tree previously has not been applied in this fashion to the numerical factorization, but it has been used, directly or indirectly, in several optimal order algorithms for computing the fill-in during the symbolic factorization phase \[4\] - \[10\], \[5\], \[6\]. In §1.2, we review the bordering algorithm, and introduce the sorting and intersection problems that arise in the sparse formulation of the algorithm. In §1.3., we analyze the complexity of the old and new approaches to the intersection problem for the special case of an $n \times n$ grid ordered by nested dissection. The special structure of this problem allows us to make exact estimates of the complexity. To our knowledge, the m-tree previously has not been applied in this fashion to the numerical factorization, but it has been used, directly or indirectly, in several optimal order algorithms for computing the fill-in during the symbolic factorization phase \[4\] - \[10\], \[5\], \[6\]. For the old approach, we show that the complexity of the intersection problem is $O(n^{3})$, the same as the complexity of the numerical computations. For the new approach, the complexity of the second part is reduced to $O(n^{2} (\log n)^{2})$. To our knowledge, the m-tree previously has not been applied in this fashion to the numerical factorization, but it has been used, directly or indirectly, in several optimal order algorithms for computing the fill-in during the symbolic factorization phase \[4\] - \[10\], \[5\], \[6\]. In §1.3., we analyze the complexity of the old and new approaches to the intersection problem for the special case of an $n \times n$ grid ordered by nested dissection. The special structure of this problem allows us to make exact estimates of the complexity. 
To our knowledge, the m-tree previously has not been applied in this fashion to the numerical factorization, but it has been used, directly or indirectly, in several optimal order algorithms for computing the fill-in during the symbolic factorization phase \[4\] - \[10\], \[5\], \[6\]. This is accomplished by exploiting the m-tree, a particular spanning tree for the graph of the filled-in matrix. To our knowledge, the m-tree previously has not been applied in this fashion to the numerical factorization, but it has been used, directly or indirectly, in several optimal order algorithms for computing the fill-in during the symbolic factorization phase [@EISENSTAT] - [@LIU2], [@ROSE76], [@SCHREIBER]. Robustness. ----------- We do not attempt to present an overview here, but rather attempt to focus on those results that are relevant to our particular algorithm. This section assumes prior knowledge of the role of graph theory in sparse Gaussian elimination; surveys of this role are available in [@ROSE72] and [@GEORGELIU]. More general discussions of elimination trees are given in [@LAW] - [@LIU2], [@SCHREIBER]. Thus, at the $k$th stage, the bordering algorithm consists of solving the lower triangular system $$\label{1.2} L_{k-1}v = c$$ and setting $$\begin{aligned} \ell &=& D^{-1}_{k-1}v , \\ \delta &=& \alpha - \ell^{t} v .\end{aligned}$$ Robustness. =========== We do not attempt to present an overview here, but rather attempt to focus on those results that are relevant to our particular algorithm. Versatility. ------------ The special structure of this problem allows us to make exact estimates of the complexity. For the old approach, we show that the complexity of the intersection problem is $O(n^{3})$, the same as the complexity of the numerical computations [@GEORGELIU], [@ROSEWHITTEN]. For the new approach, the complexity of the second part is reduced to $O(n^{2} (\log n)^{2})$. 
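The bordering step quoted above (solve $L_{k-1}v=c$, then set $\ell=D^{-1}_{k-1}v$ and $\delta=\alpha-\ell^{t}v$) can be illustrated by a dense toy implementation. This is only a sketch of ours: the algorithm of [@BANKSMITH] is sparse and exploits the m-tree, both of which this example ignores.

```python
# Dense sketch of the bordering LDL^T factorization: at stage k, the new row
# (c^t, alpha) of A is absorbed via  L_{k-1} v = c,  l = D^{-1} v,
# delta = alpha - l^t v.  Sparsity is deliberately ignored here.
def ldl_bordering(A):
    """Return (L, D) with A = L * diag(D) * L^T, L unit lower triangular."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    D = [0.0] * n
    for k in range(n):
        c = [A[i][k] for i in range(k)]        # border column above the diagonal
        v = [0.0] * k
        for i in range(k):                      # forward-solve L_{k-1} v = c
            v[i] = c[i] - sum(L[i][j] * v[j] for j in range(i))
        l = [v[i] / D[i] for i in range(k)]     # l = D^{-1} v
        for i in range(k):
            L[k][i] = l[i]
        L[k][k] = 1.0
        D[k] = A[k][k] - sum(l[i] * v[i] for i in range(k))  # delta = alpha - l^t v
    return L, D

# small symmetric positive definite test matrix
A = [[4.0, 2.0, 2.0], [2.0, 5.0, 1.0], [2.0, 1.0, 6.0]]
L, D = ldl_bordering(A)
for i in range(3):
    for j in range(3):
        rec = sum(L[i][a] * D[a] * L[j][a] for a in range(3))
        assert abs(rec - A[i][j]) < 1e-12       # A = L D L^T recovered
```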
[99]{} R. E. Bank and R. K. Smith, [*General sparse elimination requires no permanent integer storage*]{}, SIAM J. Sci. Stat. Comput., 8 (1987), pp. 574–584. S. C. Eisenstat, M. C. Gursky, M. Schultz, and A. Sherman, [ *Algorithms and data structures for sparse symmetric gaussian elimination*]{}, SIAM J. Sci. Stat. Comput., 2 (1982), pp. 225–237. A. George and J. Liu, [*Computer Solution of Large Sparse Positive Definite Systems*]{}, Prentice Hall, Englewood Cliffs, NJ, 1981. K. H. Law and S. J. Fenves, [*A node addition model for symbolic factorization*]{}, ACM TOMS, 12 (1986), pp. 37–50. J. W. H. Liu, [*A compact row storage scheme for cholesky factors using elimination trees*]{}, ACM TOMS, 12 (1986), pp. 127–148.
J. W. H. Liu, [*The role of elimination trees in sparse factorization*]{}, Tech. Report CS-87-12, Department of Computer Science, York University, Ontario, Canada, 1987.

D. J. Rose, [*A graph theoretic study of the numeric solution of sparse positive definite systems*]{}, in Graph Theory and Computing, Academic Press, New York, 1972.

D. J. Rose, R. E. Tarjan, and G. S. Lueker, [*Algorithmic aspects of vertex elimination on graphs*]{}, SIAM J. Comput., 5 (1976), pp. 226–283.

D. J. Rose and G. F. Whitten, [*A recursive analysis of dissection strategies*]{}, in Sparse Matrix Computations, Academic Press, New York, 1976.

R. Schreiber, [*A new implementation of sparse Gaussian elimination*]{}, ACM TOMS, 8 (1982), pp. 256–276.

[^1]: Society for Industrial and Applied Mathematics.

[^3]: Supported by GSF grants ABC123, DEF456, and GHI789.
---
author:
- 'Sebastian T. Ohlmann[^1]'
- Markus Kromer
- Michael Fink
- Rüdiger Pakmor
- 'Ivo R. Seitenzahl'
- 'Stuart A. Sim'
- 'Friedrich K. Röpke'
bibliography:
- 'astrofritz.bib'
title: 'The white dwarf’s carbon fraction as a secondary parameter of Type Ia supernovae'
---

Introduction
============

Despite enormous efforts in recent years, both in observation and modeling, the identification of the progenitors of Type Ia supernovae (SNe Ia) remains elusive; no progenitor system has been observed so far [e.g., @roelofs2008a; @li2011b] and model predictions do not allow us to unambiguously distinguish different progenitor scenarios [e.g., @roepke2012a]. It is generally agreed that SNe Ia result from thermonuclear explosions of carbon/oxygen white dwarfs (WDs) in interacting binary systems, where mass transfer from the secondary component triggers the explosion [e.g., @hillebrandt2000a]. Depending on whether the secondary star is degenerate, the systems are classified as single degenerate (SD) systems – with main sequence, red giant, or sdB companion stars, for example – and double degenerate (DD) systems – with He or carbon/oxygen (C/O) WDs as secondary components. For a review of explosion models see @hillebrandt2000a; more recent explosion models and their comparison to observations are presented in @hillebrandt2013a. Interestingly, for the subclass of SN 2002cx-like SNe, pure deflagrations of Chandrasekhar-mass [@chandrasekhar1931a] WDs [@jordan2012b; @kromer2013a; @fink2014a] in the SD channel match observables quite well [@kromer2013a].
For normal SNe Ia, the most promising candidates are delayed detonations of Chandrasekhar-mass WDs in SD systems [e.g., @golombek2005a; @gamezo2005a; @roepke2007b; @bravo2008a; @jordan2008a; @townsley2009a; @bravo2009a; @bravo2009b; @jackson2010a; @seitenzahl2011a; @roepke2012a; @jordan2012a; @seitenzahl2013a; @sim2013a], double detonations of sub-Chandrasekhar-mass WDs in SD and DD systems [e.g., @fink2007a; @fink2010a; @moll2013a], and violent mergers of sub-Chandrasekhar-mass WDs in DD systems [e.g., @pakmor2012b; @pakmor2012a; @pakmor2013a; @moll2013b; @raskin2013a]. A recent comparison to SN 2011fe [@roepke2012a] shows that disentangling different models is hard given the current predictions in the optical regime. Other suggestions for progenitor scenarios include head-on collisions [e.g., @rosswog2009a; @raskin2009a; @kushnir2013a; @garcia2013a] and the core-degenerate scenario [e.g., @soker2014a]. The level of interest in SNe Ia has risen in the past 20 years mainly because they can be used as distance indicators in cosmology (for a review, see [@leibundgut2008a]), leading to the discovery of an accelerated expansion of the Universe [e.g., @riess1998a; @schmidt1998a; @perlmutter1999a]. The foundation for this development was the establishment of a light curve width–luminosity relation (WLR, also called the Phillips relation) by @phillips1993a [see also the earlier work of [@pskovskii1977a]]. This relation enables the estimation of absolute luminosities from the light curve width: broader light curves correspond to brighter SNe. [Thus, to a first approximation, SNe Ia are a one-parameter family driven by the *primary* parameter which affects light curve width and luminosity in such a way that the observed WLR emerges. As observed SNe Ia show some scatter around this WLR, some *secondary* parameters have to be present influencing light curve width and luminosity following a relation different from the mean observed WLR. 
One challenge present in SN Ia models is identifying the primary and secondary physical parameters (or sets of parameters) to better understand the physical origin of this relation. This should, as an ultimate goal, lead to a theoretical examination and justification of using this relation for a wide range of parameters, such as redshift or host stellar population.]{} In the multi-dimensional simulations presented here, we investigate the delayed detonation model in Chandrasekhar-mass WDs [@blinnikov1986a; @khokhlov1991a]. If Chandrasekhar-mass models are to account for normal SNe Ia [(in the sense of [@branch1993a])]{}, the combination of a deflagration and a detonation is needed: neither pure detonation nor pure deflagration is sufficient. [Pure detonations of Chandrasekhar-mass WDs produce almost no intermediate mass elements (IME) because of the high densities ($\gtrsim 10^7$ g cm$^{-3}$ in most of the WD; [@arnett1971a], see also introduction in [@khokhlov1991a]). Pure deflagrations have recently been identified as promising models for 2002cx-like SNe Ia [@jordan2012b; @kromer2013a], but in other parameter ranges they fail to reproduce SNe Ia [@fink2014a; @ma2013a; @roepke2008c].]{} The densities through which the detonation propagates have to be lowered to produce more IME. Hence, a deflagration flame first burns from the core outwards to the surface, thereby expanding the WD before the detonation is initiated. The mechanism of igniting the detonation is unclear, but several possibilities are proposed and explored in 3D simulations: the spontaneous deflagration-to-detonation transition (spontaneous DDT, [@gamezo2005a; @roepke2007b; @bravo2008a; @townsley2009a; @jackson2010a; @seitenzahl2011a; @roepke2012a; @seitenzahl2013a; @sim2013a]), the gravitationally confined detonation (GCD, [@jordan2008a; @jordan2012a]), and the pulsational reverse detonation (PRD, [@bravo2009a; @bravo2009b]). These models differ in the way the detonation is initiated.
[The detonation emerges only in the spontaneous DDT from regions where the flame turbulently mixes fuel with ashes; in the other models, the deflagration initiates a large-scale motion of gas, leading at some point to highly compressed hot spots, where the detonation is initiated. Thus, these models also differ in the hydrodynamical structure at the onset of the detonation.]{} It is often assumed that for these models, the primary parameter of the WLR may be the ignition configuration, i.e., the shape of the initial deflagration flame (e.g., [@seitenzahl2013a]). This parameter indeed leads to a variety of $^{56}$Ni masses and hence luminosities, but fails to reproduce the WLR in the recent study of @sim2013a [although [@kasen2009a] find their models populate a similar region to that of the WLR by changing ignition configuration and DDT criterion]. [Thus, recent multi-dimensional spontaneous DDT models [@sim2013a; @seitenzahl2013a; @roepke2012a] show reasonable agreement to observations, but some shortcomings remain, such as colors that are too red at maximum, velocities that are too high, and a failure to explain the observed WLR in terms of a sequence of models with differing initial deflagration strengths.]{} Therefore, it is vital to examine the consequences of other parameters. [For the single degenerate scenario considered here, the initial carbon fraction is one parameter that is expected to show variations for different progenitor systems depending on the zero-age main sequence (ZAMS) mass and on the metallicity of the progenitor star [@umeda1999b; @dominguez2001a].]{} It affects important parameters of the light curve evolution, such as the kinetic energy of the ejecta. In 1D model studies, this has already been examined by @hoeflich1998a and @umeda1999a; @hoeflich2010a also suggest the C fraction to be a secondary parameter. 
[The main objective of this work is to examine if varying the initial C fraction resolves any of the discrepancies between predictions of the spontaneous DDT model and observations of normal SNe Ia. More specifically, we want to answer the following two questions:]{}

1. [How does the carbon mass fraction affect the width–luminosity relation predicted for our 3D models? In particular, does it drive variations along the observed Phillips relation of SNe Ia?]{}

2. Does a reduction of the carbon fraction result in better agreement of spectral features?

To this end, we examine the impact of different initial compositions on the explosion process in multi-dimensional DDT models including its interplay with other parameters and present detailed nucleosynthesis results and synthetic light curves and spectra. First, we give an overview of initial parameters involved in spontaneous DDT models in Section \[sec:initialparameters\]. In Section \[sec:numericalmethods\], we explain our numerical methods: the initial WD models, the hydrodynamic modeling of the explosion phase, the detailed nucleosynthesis calculations, and the radiative transfer simulations. In Section \[sec:resultshydro\], we examine the hydrodynamic evolution in a parameter study of 2D simulations as well as for a few 3D simulations and present detailed nucleosynthesis results. Results from radiative transfer simulations, synthetic light curves and spectra, for a series of 3D simulations are presented in Section \[sec:resultsrt\] along with a discussion of the width–luminosity relation. We conclude in Section \[sec:conclusions\] with answers to the questions posed above.

Initial Parameters of Delayed Detonation Models for SNe Ia {#sec:initialparameters}
==========================================================

The hydrodynamic evolution of the explosion phase in spontaneous DDT models is governed by several parameters.
In some cases, these are poorly constrained and in others, they are constrained to vary in a certain range. In this work, we systematically explore the *initial carbon fraction* of the pre-explosion WD, which depends on the evolution of the system. In the phase prior to explosion, simmering sets in, changing the composition in the interior. This has been modeled [e.g., @lesaffre2006a], but it is still difficult to include important effects such as the URCA process [@lesaffre2005a; @lesaffre2006a]. Nevertheless, these calculations show that the progenitor WD is composed of an outer layer of accreted material with an equal-by-mass composition of C and O, and an inner convective core with a lower C mass fraction. According to the calculations of @lesaffre2006a, this mass fraction and the size of the convective core are correlated with other parameters, such as the central density or metallicity. [The central C mass fraction depends on the ZAMS mass and the metallicity of the progenitor [@umeda1999b; @dominguez2001a]. The range in the central C fraction is about 0.24 to 0.37 for the models of @umeda1999b and about 0.21 to 0.32 for the models of @dominguez2001a. However, because of limitations in the modeling and in the nuclear data, the central C concentration is rather uncertain and @dominguez2001a suggest that it may vary between 0.1 and 0.5. The chemical stratification also changes during the simmering phase prior to ignition; therefore, we choose our model parameters similar to the models of @lesaffre2006a. We account for the possible spread in C fractions by varying it between 0.2 and 0.5 in our models. ]{} The initial composition affects the explosion mainly through a different nuclear energy release in the burning: owing to the lower binding energy of C compared to O, material with less C possesses a smaller energy difference relative to the burning products, which are mostly in the iron group.
Changing the *initial metallicity* of the WD progenitor mainly influences the nucleosynthesis, since the principal influence on the hydrodynamic evolution is due to the dependence of the equation of state on the electron fraction. In the nucleosynthesis, a lower metallicity and thus higher electron fraction leads to a larger $^{56}$Ni production [@timmes2003a; @travaglio2005a; @seitenzahl2013a] owing to reduced neutronization, but the effect is not large enough for metallicity to be a primary parameter. The initial evolution of the deflagration flame is governed by the *ignition configuration*, which is poorly understood. Hydrodynamical simulations [@hoeflich2002a; @kuhlen2006a; @zingale2009a] favor a dipole structure of the convective flow preceding ignition, which may be fractured by rotation, yielding ignition over a broader region. In our simulations, we place a certain number of ignition kernels near the center of the WD to excite different numerical modes of the deflagration flame. The number of these kernels determines the strength of the deflagration, i.e., the rate of energy production [@seitenzahl2013a; @fink2014a]. More ignition kernels correspond to a stronger deflagration and thus to a stronger expansion of the WD. Consequently, lower densities result at the onset of the detonation and thus less $^{56}$Ni is produced. The range in deflagration strengths studied in @seitenzahl2013a is able to reproduce the observed range in brightnesses of normal SNe Ia, but fails to explain the WLR [@sim2013a]. This study implies that the deflagration strength is probably not the primary parameter. Another important factor in the hydrodynamic evolution is the *DDT criterion*. Although it is still unknown if and how this transition is realized in SNe Ia, some restrictions on the mechanism have been placed.
The DDT is proposed to occur when the flame leaves the flamelet regime and enters the distributed burning regime [@woosley2009a], where hot ashes and cold fuel mix in the presence of large turbulent velocity fluctuations; hot spots result and a detonation is initiated. @woosley2007a estimate the density of the DDT to lie in the range of (0.5–1.5)$\times10^{7}$ g cm$^{-3}$. The higher the density at the DDT is, the more $^{56}$Ni is produced (see the 2D simulations by [@kasen2009a]). In the 3D simulations by @seitenzahl2013a, the DDT criterion was fixed for all simulations in order to examine the consequences of different ignition conditions. It is unclear why the DDT criterion should change in different SNe, if it is the primary parameter. The *central density* of the WD at ignition is on the order of $10^9\,$g$\,$cm$^{-3}$. However, some variation is possible, as the mass of a near-Chandrasekhar-mass WD is almost independent of the central density; the actual value depends on the pre-explosion evolution. Higher initial densities shift the explosion products to more neutron-rich nuclei, mostly stable iron group elements (IGE) instead of $^{56}$Ni. In their parameter study with 2D simulations, @krueger2010a find that a higher central density leads to similar overall production of IGE, but a lower amount of $^{56}$Ni is produced, as more neutron-rich nuclei are formed. @seitenzahl2011a, in contrast, find in their 3D simulations a higher IGE production for higher central densities, whereas the amount of $^{56}$Ni is roughly constant. In any case, this effect is too small to be a primary parameter.
Numerical Methods {#sec:numericalmethods}
=================

To compute synthetic observables from explosion models, we use a modeling pipeline: after creating the initial WD models, the explosion phase is simulated using the hydrodynamic code <span style="font-variant:small-caps;">Leafs</span>; then, detailed nucleosynthesis results are computed in a postprocessing step; finally, synthetic observables are obtained with the radiative transfer Monte Carlo code <span style="font-variant:small-caps;">Artis</span>, using mapped data from the previous steps.

Initial WD Models
-----------------

The initial WD models have been created as cold isothermal WDs by integrating the hydrostatic equilibrium equations for a central density of $2.9\times10^9$ g cm$^{-3}$ and a constant temperature of $5\times10^5$ K. The equation of state used for the integration is the same as in <span style="font-variant:small-caps;">Leafs</span>. The composition of the WD is chosen based on the results of @lesaffre2006a: uniform composition in the convective core and in the outer accretion layer with a smoothly connecting transition region. In the outer accretion layer, $X(^{12}\mathrm{C})=0.5$, whereas in the convective core $X(^{12}\mathrm{C})$ ranges from 0.2 to 0.5, depending on the model. The convective core ends at [about $1 M_\odot$]{} and the accretion layer starts at $1.2 M_\odot$; again these values correspond to a typical scenario from @lesaffre2006a. [The size of the convective core depends on the chosen ignition criterion and for a certain choice of parameters, its mass is about $1 M_\odot$ for a central density of $2.9\times10^9$ g cm$^{-3}$ [compare solid lines in fig. 7 of @lesaffre2006a].]{} [We fix the mass of the convective core for all models and vary only the C fraction in order to have only one parameter changing in our models.
Moreover, we take a rather wide range of the C fraction from 0.2 to 0.5 to assess the maximum possible influence of this parameter on the explosion process and on observables. Other findings indicate that the mass of the convective core may vary in a rather wide range, depending on variables such as the chemical stratification [@piro2008b] and uncertainties in the nuclear reaction rate data [@bravo2011b]. Moreover, in the models of @bravo2011b, a C depleted core develops in the innermost $0.05 M_\odot$ of the WD. This should not influence the evolution of the flame much because the flame evolution in our 3D models is mostly governed by the Rayleigh–Taylor instabilities at later times. Thus, a C depletion in the very core should not significantly change the hydrodynamical evolution and thus also not the resulting observables.]{} To clarify the composition of each model, we introduce a naming scheme. The first part of the model names encodes the initial composition: cXY denotes a homogeneous progenitor model with a carbon mass fraction of XY%. Additional, more realistic progenitor models [following @lesaffre2006a] are labeled rpcXY corresponding to a homogeneous carbon depleted core with a carbon mass fraction of XY%.

Hydrodynamic Simulations {#sec:leafs}
------------------------

We use the supernova code <span style="font-variant:small-caps;">Leafs</span> [@reinecke1999a; @reinecke2002b] for the hydrodynamic simulations of the explosion phase. It employs a finite-volume, grid-based scheme in an Eulerian formulation of the piecewise parabolic method by @colella1984a, in the <span style="font-variant:small-caps;">Prometheus</span> implementation by @fryxell1989a. The Riemann solver is implemented according to @colella1985a, being capable of using a general convex equation of state. The equation of state is based on the Timmes equation of state [@timmes2000a].
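The construction of the initial WD models described above (integration of the hydrostatic equilibrium equations at a central density of $2.9\times10^9$ g cm$^{-3}$) can be sketched in a few lines. This is a minimal stand-in, not the code used here: it replaces the Timmes equation of state with a relativistic degenerate-electron polytrope ($P = K\rho^{4/3}$, $\mu_e = 2$) and uses a simple forward-Euler integration:

```python
import math

G = 6.674e-8      # gravitational constant (cgs)
MSUN = 1.989e33   # solar mass in g
K = 4.94e14       # polytropic constant, relativistic degenerate electrons, mu_e = 2

def integrate_wd(rho_c=2.9e9, dr=1e4):
    """Integrate dm/dr = 4*pi*r^2*rho and dP/dr = -G*m*rho/r^2 outward
    from the centre with a forward-Euler scheme (stand-in EOS)."""
    r = 1.0e3                                # start just off-centre
    rho = rho_c
    P = K * rho ** (4.0 / 3.0)
    m = 4.0 / 3.0 * math.pi * r ** 3 * rho
    while rho > 1.0e3:                       # stop near the stellar surface
        dP = -G * m * rho / r ** 2 * dr
        m += 4.0 * math.pi * r ** 2 * rho * dr
        r += dr
        P += dP
        if P <= 0.0:
            break
        rho = (P / K) ** 0.75                # invert P = K * rho^(4/3)
    return m / MSUN, r / 1.0e5               # mass in Msun, radius in km

mass, radius = integrate_wd()
```

With these assumptions the integration should return a mass close to the Chandrasekhar limit ($\approx 1.4\,M_\odot$) and a radius of order $2\times10^3$ km.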
Thermonuclear flames are modeled with the levelset method [@osher1988a] as described by @reinecke1999a [@reinecke2002b]. This approach approximates the flame front as a discontinuity, which burns the nuclear fuel instantaneously. The large difference in scales of several orders of magnitude between the flame width ($\sim$mm–cm) and the grid cell size ($\sim$km) justifies this approximation. Nuclear burning is treated in an approximate scheme, yielding the final composition directly behind the front. To track the energy release, a simplified composition is used including five pseudo-species, $\alpha$ particles, $^{12}$C, $^{16}$O, “Mg” (representing IME) and “Ni” (representing iron group elements, IGE); these approximate fuel and burning products from the different burning stages. Nuclear statistical equilibrium (NSE) is treated approximately by adjusting the mass fractions of IGE and $\alpha$ particles to follow the energy release depending on density and temperature. The detailed nucleosynthetic yields are computed in a postprocessing step using the method of tracer particles (see Section \[sec:postprocessing\]). The tables giving the composition behind the level set depend on the density and composition of the unburnt fuel and have to be calculated once, prior to the simulations. This is done using an iterative calibration method similar to @fink2010a. The method is extended to different initial compositions to allow for varying compositions (for further details, see Appendix \[sec:calibration\]). For the computational grid, we use the moving hybrid grid technique as described in @roepke2006a. An inner equidistant grid tracks the deflagration flame and expands into the outer, exponentially spaced grid as the deflagration evolves to allow for high resolution in the beginning. The deflagration burning takes place in the flamelet regime of turbulent combustion. 
The effects of turbulence on unresolved scales are accounted for by a subgrid-scale model, which is used to compute turbulent velocity fluctuations below the grid scale. For 2D models, the subgrid-scale model by @niemeyer1995b is used, whereas for 3D models, a more elaborate model is employed as introduced by @schmidt2006b [@schmidt2006c]. The deflagration-to-detonation transition (DDT) is assumed to occur when the turbulent burning changes from the flamelet regime to the distributed burning regime [@woosley2009a]. Here, the internal flame structure is disturbed by turbulent eddies due to an increased flame width at lower densities. This leads to heat transfer from hot ashes to cold fuel [@niemeyer1997b; @woosley2007a], whereupon hot spots may form potentially initiating a detonation via the Zel’dovich gradient mechanism [@zeldovich1970a]. The flame widths necessary for this transition are reached in a density range of (0.5–1.5)$\times$10$^7\,$g$\,$cm$^{-3}$ [@woosley2007a]. Furthermore, high turbulent velocity fluctuations of the order of $10^8\,$cm$\,$s$^{-1}$ must be present at the flame front [@lisewski2000b; @woosley2009a], which was found in 3D deflagration models by @roepke2007d. For our 2D models, the DDT criterion is modeled as in @kasen2009a, but with differing parameters. A detonation is initiated in a cell if the density lies in a certain range and if the Karlovitz number Ka is larger than a given minimum value. Since $\mathrm{Ka} \propto (u')^{3/2}$ [@kasen2009a Supp. Information], where $u'$ denotes the turbulent velocity fluctuations below the grid scale, this criterion requires the turbulent velocity fluctuations to be above a certain threshold. In three dimensions, the DDT criterion is modeled as described in @ciaraldi2013a, but varying the parameters. In this criterion, an effective flame surface is calculated by choosing cells in a certain density and fuel mass fraction range. 
This surface is additionally multiplied by the probability of large velocity fluctuations being present and it is required to exceed a critical value for at least half an eddy turnover time to ensure sufficient mixing between fuel and ashes. For the 2D simulations, a grid size of $512\times 1024$ cells in axial symmetry was chosen, corresponding to a spatial resolution of $1.06\,$km in the inner part at the beginning of the simulation. The 3D simulations are full star simulations and use a grid with $512^3$ cells, which corresponds to a spatial resolution of $2.14\,$km in the inner part at the beginning of the simulation.

  Criterion   $\rho_\mathrm{min}$   $\rho_\mathrm{max}$   Ka$_\mathrm{min}$
  ----------- --------------------- --------------------- -------------------
  ddt1        0.6                   1.2                   250
  ddt2        0.5                   0.8                   1000
  ddt3        0.5                   0.8                   2250
  ddt4        0.6                   1.2                   2250

  : Parameters for DDT criteria for 2D models similar to @kasen2009a. For details on the different parameters see ibid., supplementary information. The densities are given in $10^7\ \mathrm{g}\ \mathrm{cm}^{-3}$. \[tab:ddtcriteria2d\]

The model names for 2D models consist of three parts; the first part gives the initial composition, as explained above. The second part of the model name consists of the DDT criterion; the corresponding parameters are given in Table \[tab:ddtcriteria2d\] and are similar to those used by @kasen2009a. The last part of the model name is determined by the initial conditions; the number gives the number of initial ignition spots for the deflagration flame. The DDT criteria and ignition conditions are the same as in @kasen2009a with slightly different notations. The parameter study comprises runs for five different initial composition profiles (c20, c30, c40, c50, and rpc32), for eight different ignition configurations (dd020, dd050, dd060, dd080, dd090, dd100, dd100C, and dd150), and for two different DDT criteria (ddt1, ddt2).
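The per-cell part of the 2D trigger in Table \[tab:ddtcriteria2d\] can be sketched as follows. This is a simplified illustration, not the <span style="font-variant:small-caps;">Leafs</span> implementation: the function name and the direct use of Ka are ours, and the eddy-turnover-time condition described above is omitted:

```python
def ddt_triggered(rho, karlovitz, criterion):
    """A cell may trigger a detonation if its density falls in the allowed
    window and the Karlovitz number exceeds the threshold."""
    rho_min, rho_max, ka_min = criterion
    return rho_min <= rho / 1e7 <= rho_max and karlovitz >= ka_min

# Parameters from Table [tab:ddtcriteria2d]; densities in 10^7 g cm^-3.
CRITERIA = {
    "ddt1": (0.6, 1.2, 250),
    "ddt2": (0.5, 0.8, 1000),
    "ddt3": (0.5, 0.8, 2250),
    "ddt4": (0.6, 1.2, 2250),
}

# Since Ka is proportional to (u')^{3/2}, raising Ka_min from 1000 (ddt2) to
# 2250 (ddt3) demands subgrid turbulent velocity fluctuations larger by a
# factor of (2250/1000)^{2/3}.
u_ratio = (CRITERIA["ddt3"][2] / CRITERIA["ddt2"][2]) ** (2.0 / 3.0)
```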
The rpc32 model has been run for all four DDT criteria of Table \[tab:ddtcriteria2d\]. For the 3D models, the treatment of initial composition is the same as for the 2D models (see above). The values used for the limits in the DDT criterion are $0.4 < X_\mathrm{fuel} < 0.6$ and $0.6 < \rho / (10^7\ \mathrm{g}\ \mathrm{cm}^{-3}) < 0.9$ [for details, see @ciaraldi2013a][, where $X_\mathrm{fuel}$ is the mass fraction of unburnt material in the cell. The parameter range around 0.5 ensures that a detonation is ignited only in cells where fuel and ashes are mixed.]{} This criterion for the 3D models is termed DDT8 and differs from the one used by @seitenzahl2013a. The ignition conditions for the deflagration flame are the same as described by @seitenzahl2013a.

Nucleosynthetic postprocessing {#sec:postprocessing}
------------------------------

Since coupling a reaction network to the hydrodynamic equations is computationally very expensive, we compute the detailed nucleosynthetic abundances in an additional postprocessing step. This was first done by @thielemann1986a for 1D models, computing a nuclear reaction network for the Lagrangian mass shells. For multi-dimensional simulations, we use the concept of *tracer particles*, first introduced by @nagataki1997a in the context of Type II supernovae. In this method, tracer particles are passively advected with the flow and their thermodynamic trajectories are recorded. As the particles are moving in a Lagrangian manner, the nucleosynthetic abundances can be computed by evolving the nuclear network separately on each particle trajectory. The tracer particle method employed in our work is based on @travaglio2004a and uses the network of @thielemann1996a and @iwamoto1999a including 384 isotopes. More details on the algorithm can be found in @roepke2006b.
The distribution of the tracer particles is chosen according to @seitenzahl2010a, who proposed variable tracer masses in order to improve the resolution in the outer layers with lower densities. The exact number of tracer particles depends on the simulation according to the algorithm by @seitenzahl2010a and is about $41000$ for 2D simulations and about $10^6$ for 3D simulations. The initial composition for the postprocessing is assumed to include the detailed solar metallicity of @asplund2009a. The CNO cycle elements are assumed to be processed to $^{22}$Ne during He burning; thus, their abundances are added by number to the $^{22}$Ne abundance.

Radiative Transfer Simulations
------------------------------

The input data for the radiative transfer simulations is generated in the following way: the detailed nucleosynthesis data from the tracer particles is mapped onto a $200^3$ Cartesian grid using an SPH-like algorithm; the density distribution is mapped on this grid from the hydrodynamic simulation. A further down-sampling to a $50^3$ grid yields the final input data for the radiative transfer calculation [more details in @kromer2010a]. The radiative transfer simulations are then carried out with the multi-dimensional Monte Carlo code <span style="font-variant:small-caps;">Artis</span> [@sim2007b; @kromer2009a]. On a co-expanding grid, following the homologous expansion of the ejecta, $10^8$ photon packages are propagated for 111 logarithmically spaced time steps from 2 d to 120 d after explosion. The computations are sped up in the beginning by using a gray approximation in optically thick cells [as discussed in @kromer2009a] and by assuming local thermodynamic equilibrium for the first 10 time steps, i.e., for the first two to three days post explosion. The atomic lines are taken from the same atomic data set as described in @gall2012a, including approximately $2\times 10^6$ bound–bound transitions.
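The logarithmic time grid mentioned above (111 steps from 2 d to 120 d) is straightforward to reproduce; a minimal sketch (the function name is ours, not part of <span style="font-variant:small-caps;">Artis</span>):

```python
def log_time_steps(t_start=2.0, t_end=120.0, n_steps=111):
    """111 logarithmically spaced time steps between 2 d and 120 d:
    112 boundaries with a constant ratio between consecutive times."""
    ratio = (t_end / t_start) ** (1.0 / n_steps)
    return [t_start * ratio ** i for i in range(n_steps + 1)]

bounds = log_time_steps()
```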
For the model N100 of @seitenzahl2013a and @sim2013a, the radiative transfer simulations have been recomputed with this large atomic data set.

Hydrodynamic Evolution and Nucleosynthesis {#sec:resultshydro}
==========================================

In this section, we present the results from the hydrodynamic simulations of the explosion phase and detailed nucleosynthetic abundances. This is first done for a set of 2D simulations, which can be run in larger numbers (compared to 3D simulations), owing to the lower computational effort. Then, the results for a few 3D simulations are presented.

Parameter Study: 2D Simulations
-------------------------------

A parameter study was performed in two dimensions to explore the impact of different initial compositions. To this end, hydrodynamical simulations of DDT models, followed by nucleosynthetic postprocessing, were performed for a set of five different initial compositions for a range of different ignition conditions and different DDT criteria. This also allows us to examine the effects of ignition conditions and DDT criteria separately and to compare these to the repercussions of the initial composition. Moreover, our parameter study results in models with similar $^{56}$Ni yields and thus similar luminosities; these may then be compared as models for the same supernova.

![Evolution of nuclear energy release vs. time for several 2D simulations. The dot marks the point of the first DDT. The upper panel shows simulations with different ignition conditions, but same initial composition and DDT criterion (rpc32\_ddt2\_ddxxx). The middle panel shows simulations with different DDT criteria, but same initial composition and ignition conditions (rpc32\_ddtx\_dd03). The lower panel shows simulations with different C mass fraction, but same DDT criterion and ignition conditions (xxx\_ddt2\_dd03).
[]{data-label="fig:2devolution"}](figures/2dtimeevolution) ![Final asymptotic kinetic energy over total $^{56}$Ni mass for several 2D models. Different ignition configurations are shown with different markers. The average carbon mass fraction at the beginning of the simulation is color-coded. The DDT criterion is ddt2 for all models. []{data-label="fig:2dekinnimass"}](figures/2dekinnimass) The repercussions of changing ignition conditions, DDT criteria, and initial compositions separately can be seen in Fig. \[fig:2devolution\]. In the upper panel, different *ignition configurations* are compared. The nuclear energy release in the deflagration phase (prior to the first DDT marked with a dot) approximately increases with increasing number of ignition kernels. In this way, we can numerically excite varying deflagration strengths by changing the number of ignition kernels in our models. As the nuclear energy release is an indicator of the expansion of the WD, the WD has expanded more at the onset of the detonation for stronger deflagrations. Hence, for more ignition spots, the ensuing detonation burns less material at high densities; consequently, less $^{56}$Ni is produced and less nuclear energy is released in total [see also @mazzali2007a; @roepke2007b; @seitenzahl2013a; @fink2014a]. This relation, however, is fulfilled only approximately since the hydrodynamic evolution is highly non-linear and also depends on the locations of the ignition spots, not only on their number. Changing the *DDT criterion* for otherwise identical conditions (Fig. \[fig:2devolution\], middle panel) leads to a different delay until the detonation is ignited. Later ignitions cause a lower $^{56}$Ni production and also a lower release of nuclear energy in total because of the longer-lasting pre-expansion. The repercussions of the *initial composition* on the hydrodynamic evolution for identical ignition configurations and DDT criteria can be seen in the lower panel of Fig. 
\[fig:2devolution\]. The homogeneous models show a slightly lower nuclear energy release in the deflagration phase for lower carbon mass fractions, which can be explained by the lower energy release of deflagration fronts at lower carbon mass fractions (see Appendix \[sec:calibration\]). This leads to a slower expansion, and the turbulent velocity fluctuations needed for the DDT develop more slowly; thus, the detonation is initiated later. This corresponds to a larger pre-expansion for lower carbon mass fractions. Hence, less $^{56}$Ni is produced and less nuclear energy is released in total. The more realistic model with a C depleted core (rpc32 model) is very similar to the homogeneous model with 30% C in the deflagration phase because the deflagration flame does not leave the C depleted core. In the detonation phase, however, a larger nuclear energy release can be seen, which is due to the detonation also burning the outer layers with their larger C mass fractions. The nuclear energy release during the explosion phase drives the gravitational unbinding and the expansion of the ejecta; thus, the final, asymptotic kinetic energy of the ejecta is given by the sum of the initial internal energy, the initial gravitational energy (which is negative), and the nuclear binding energy difference. This energy determines the scaling of the ejecta distribution in velocity space. The asymptotic kinetic energies and $^{56}$Ni yields are compared for several models in Fig. \[fig:2dekinnimass\]. First, models with larger $^{56}$Ni production show larger kinetic energies, which can be explained by the larger nuclear energy release. Second, when comparing models with identical ignition conditions but different initial compositions, a larger carbon mass fraction leads to a larger $^{56}$Ni yield and a larger asymptotic kinetic energy.
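The energy budget above can be written as $E_\mathrm{kin,a} = E_\mathrm{int,0} + E_\mathrm{grav,0} + \Delta E_\mathrm{nuc}$. A minimal Python sketch of this bookkeeping (the function name and the numerical values are ours, purely for illustration, not taken from the simulations):

```python
def asymptotic_kinetic_energy(e_int_0, e_grav_0, delta_e_nuc):
    """Asymptotic kinetic energy of the ejecta: sum of the initial
    internal energy, the (negative) initial gravitational energy, and
    the nuclear binding energy difference released by the burning.
    All energies in the same units (e.g., 1e51 erg)."""
    return e_int_0 + e_grav_0 + delta_e_nuc

# Purely illustrative numbers in units of 1e51 erg:
e_kin_a = asymptotic_kinetic_energy(e_int_0=0.5, e_grav_0=-0.5,
                                    delta_e_nuc=1.4)
print(e_kin_a)  # 1.4
```

A more strongly bound (or less completely burnt) model lowers $\Delta E_\mathrm{nuc}$ and hence the asymptotic kinetic energy, which is the trend seen in Fig. \[fig:2dekinnimass\].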
The $^{56}$Ni masses for a model series with different initial compositions span a narrower interval at larger $^{56}$Ni masses because the detonation burns mostly at high densities, where the influence of the composition is small (see Appendix \[sec:calibration\]). One important consequence is simply that different initial compositions lead to a spread in $^{56}$Ni masses for identical ignition conditions. The relation between the total $^{56}$Ni mass and the average initial C mass fraction $\overline{X}_0(^{12}\mathrm{C})$ is nearly linear (Fig. \[fig:2dnimasslinear\]). Linear regressions for all model series with $^{56}$Ni mass in the range for normal SNe Ia ($\sim$0.3–0.8$M_\odot$) show correlation coefficients $> 0.94$. Averaging over these regressions yields an approximate expression for the $^{56}$Ni mass, $$M(^{56}\mathrm{Ni})/M_\odot = 0.17 + 1.01 \overline{X}_0(^{12}\mathrm{C}), \label{eq:nimasslinear}$$ for $0.2< \overline{X}_0(^{12}\mathrm{C}) <0.5$, showing a surprisingly simple mean relation between the initial C fraction and the $^{56}$Ni mass. ![Total $^{56}$Ni mass over mean C mass fraction for several 2D models with different ignition conditions. The full lines show linear regressions for each ignition condition series; the dashed line shows the mean of all regressions. []{data-label="fig:2dnimasslinear"}](figures/2dnimasscfraction) If two different models are compared to the same SN, the models must produce a similar amount of $^{56}$Ni to show a similar luminosity. This can be achieved by varying initial composition, ignition conditions, and DDT criterion at once. As can be seen in Fig. \[fig:2dekinnimass\], the model with the smaller C mass fraction produces similar $^{56}$Ni yields at lower asymptotic kinetic energies than the model with the larger C mass fraction. Hence, the ejecta are distributed in a different way, and the light curves and spectra determined by this distribution will change.
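The mean relation of Equation \[eq:nimasslinear\] is simple enough to evaluate directly. A minimal Python sketch (the function name is ours; the coefficients are the fitted values quoted above, valid only for $0.2 < \overline{X}_0(^{12}\mathrm{C}) < 0.5$):

```python
def ni56_mass(x_c12):
    """Mean 2D-model relation for the total 56Ni mass (solar masses)
    as a function of the average initial 12C mass fraction:
    M(56Ni)/Msun = 0.17 + 1.01 * X0(12C)."""
    if not 0.2 < x_c12 < 0.5:
        raise ValueError("fit only calibrated for 0.2 < X0(12C) < 0.5")
    return 0.17 + 1.01 * x_c12

# A 32% mean carbon mass fraction gives roughly 0.49 Msun of 56Ni:
print(round(ni56_mass(0.32), 2))  # 0.49
```

Note that this is only the mean over the regressions; individual ignition-condition series scatter around it (Fig. \[fig:2dnimasslinear\]).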
![image](figures/vel_c30_ddt1_dd09){width="95.00000%"} ![image](figures/vel_rp1_ddt2_dd07){width="95.00000%"} ![image](figures/vel_c50_ddt2_dd09){width="95.00000%"} The ejecta distribution in velocity space is shown for three models with similar $^{56}$Ni masses but different initial compositions in Figs. \[fig:velc30\], \[fig:velrpc32\], and \[fig:velc50\]. The density structure (left panel) shows similar features for all three models: a higher density in the interior part (${\la}10^4$ km s$^{-1}$), where most deflagration ashes reside alongside detonation ashes, and shocks in the outer parts, where multiple detonation fronts merged. The abundance structure (right panels) shows the same general features for all models: the central deflagration ashes are surrounded by layered detonation ashes. The models, however, also show differences: in the interior part (${\la}10^4$ km s$^{-1}$), the variations are mostly due to the different hydrodynamic evolution of the deflagration flame for the different ignition conditions, but in the outer part, the ejecta are shifted to lower velocities for lower C mass fractions. In particular, $^{56}$Ni and the stable iron isotopes are confined to lower velocities; the peak of the $^{28}$Si distribution also shifts to lower velocities. Moreover, comparing the homogeneous models, the outer layers of the ejecta contain more unburnt material for a smaller C mass fraction (relative to the initial composition), which can be explained by less burning at these smaller densities for lower C mass fractions (see Appendix \[sec:calibration\]). The more realistic model with a C depleted core resembles the 50% model in the outer layers (less unburnt material) and the 30% model in the inner layers (the ejecta are shifted to lower velocities).
3D Simulations
--------------

| Model | $\overline{X}_0(^{12}\mathrm{C})$ | $M(^{56}\mathrm{Ni})$ ($M_\odot$) | $E_\mathrm{kin,a}$ ($10^{51}$ erg) | $t(B_\mathrm{max})$ (d) | $U_{\mathrm{max}}$ (mag) | $B_{\mathrm{max}}$ (mag) | $V_{\mathrm{max}}$ (mag) | $R_{\mathrm{max}}$ (mag) | $I_{\mathrm{max}}$ (mag) | $\Delta m_{15}(B)$ (mag) | $v_\mathrm{Si}(t_{B_\mathrm{max}})$ ($10^3\ \mathrm{km}\ \mathrm{s}^{-1}$) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| rpc20\_DDT8\_N100 | 0.26 | 0.364 | 0.43 | — | — | — | — | — | — | — | — |
| rpc32\_DDT8\_N100 | 0.36 | 0.603 | 1.28 | 17.4 | $-18.9$ | $-19.0$ | $-19.5$ | $-19.5$ | $-19.5$ | 1.41 | 12.5 |
| rpc40\_DDT8\_N100 | 0.42 | 0.701 | 1.43 | 17.5 | $-19.1$ | $-19.2$ | $-19.7$ | $-19.6$ | $-19.5$ | 1.49 | 13.5 |
| c50\_DDT8\_N100 | 0.50 | 0.799 | 1.54 | 17.3 | $-19.4$ | $-19.4$ | $-19.9$ | $-19.7$ | $-19.6$ | 1.55 | 14.4 |
| N100 [@seitenzahl2013a] | 0.50 | 0.604 | 1.44 | 16.6 | $-18.8$ | $-19.0$ | $-19.5$ | $-19.6$ | $-19.6$ | 1.42 | 13.1 |

A few 3D full-star simulations with a more realistic treatment of the 3D turbulent burning process have been conducted to quantify the impact of different compositions. In all simulations, the initial model is composed of a core of differing C mass fraction ($0.2$ to $0.5$) which is surrounded by an outer layer with a C mass fraction of $0.5$. General properties of the models and results from the radiative transfer simulations are presented in Table \[tab:3dproperties\].
The detailed nucleosynthetic abundances for selected models are given in Table \[tab:ryields\] for radioactive isotopes after 100 s (this corresponds to the end of the simulation, when the expansion is nearly homologous) and in Table \[tab:syields\] for stable isotopes after 2 Gyr. For these models, the $^{56}$Ni mass varies between 0.36 and 0.80 solar masses. The relative abundances (normalized to $^{56}$Fe) compared to solar values do not show large changes for different compositions; here, metallicity (i.e., mainly the neutron-rich isotope $^{22}$Ne after He burning) plays a larger role [see @seitenzahl2013a Fig. 7]. When the different parameters (ignition conditions, DDT criteria, and initial compositions) are varied, the hydrodynamic evolution in the 3D models shows similarities to the 2D models. This can be seen for different initial compositions, e.g., in the evolution of the nuclear energy, which is shown in Fig. \[fig:3dtimeevolution\] for selected 3D models. The main difference between the evolution in 2D and 3D models is the slower energy release in 3D, which is due to burning starting from spherical kernels instead of the tori implied by the axisymmetric 2D geometry. Hence, the deflagration transitions later to the turbulent regime driven by the Rayleigh–Taylor instability. In Fig. \[fig:3dtimeevolution\], a dot indicates the time when the first DDT is initiated and thus marks the transition to the detonation phase. Model rpc20\_DDT8\_N100 fails to initiate a detonation because the released nuclear energy does not generate enough turbulent motions to trigger the DDT.[^2] As this model is simply a pure deflagration (also called a “failed detonation” by [@jordan2012b; @fink2014a], see there for recent models), it will not be discussed further. Nevertheless, it is interesting that the DDT criterion chosen here fails for some models, while it successfully initiates a detonation for other models.
For the model series with varying C fraction, the nuclear energy release during the deflagration phase rises with the carbon mass fraction, as expected from the binding energy differences and from the calibration (see Appendix \[sec:calibration\]). This leads to the DDT criterion being fulfilled earlier for larger C mass fractions; thus, the expansion of the WD is smaller and unburnt material is present at higher densities. The detonation consumes the remaining unburnt material; and for higher densities, more $^{56}$Ni is produced. Moreover, for larger C mass fractions, the transition density to burning to NSE is smaller, adding to the effect of producing more NSE material for larger C mass fractions and thereby releasing more nuclear energy. The $^{56}$Ni mass follows a similar, nearly linear relation with the carbon mass fraction, with a slope near 1, similar to the mean relation found for the 2D models (cf. Equation \[eq:nimasslinear\] and values in Tables \[tab:3dproperties\] and \[tab:ryields\]). ![Evolution of nuclear energy in time for several 3D simulations with different initial compositions but otherwise identical explosion parameters (models rpc20\_DDT8\_N100, rpc32\_DDT8\_N100, rpc40\_DDT8\_N100, c50\_DDT8\_N100; inner core of 20%, 32%, 40% and 50% C by mass, respectively). The dot marks the point of the first DDT. The rpc20 model failed to initiate a detonation for this DDT criterion. []{data-label="fig:3dtimeevolution"}](figures/3dtimeevolution) ![Angle averaged ejecta distribution in velocity space for several 3D models. The model in the top panel is N100 from @seitenzahl2013a. Shown are the mass fractions (top panels, stable IGE are all iron group elements with $Z>20$ without $^{56}$Ni) and the density (bottom panel).[]{data-label="fig:3dejecta"}](figures/3dejecta) Apart from examining the influence of each parameter separately, several parameters can be changed at once to create models with similar $^{56}$Ni yields.
As these models will show similar peak luminosities (dominated by the total amount of $^{56}$Ni), they can be compared in their ability to explain the same SN, as opposed to explaining SNe Ia in general. The spherically averaged ejecta distribution in velocity space is compared for two 3D models with similar $^{56}$Ni masses of about $0.6 M_\odot$ in Fig. \[fig:3dejecta\]: the first panel shows the model N100 from [@seitenzahl2013a], which is homogeneous in initial composition; the second panel shows the C depleted model rpc32\_DDT8\_N100. The C depleted model features a lower asymptotic kinetic energy of $1.28\times 10^{51}$ erg compared to the homogeneous model ($1.44\times 10^{51}$ erg). The global structure of the ejecta is similar to the 2D models (Figs. \[fig:velc30\], \[fig:velrpc32\], \[fig:velc50\]): outer layers of C, O and Si and a core consisting mainly of $^{56}$Ni and stable IGE. The stable iron group elements are created mainly in the deflagration ashes during normal freeze-out from NSE. This is also the reason for the stable iron group elements extending to rather high velocities for both models, up to $\sim15\times10^3$ km s$^{-1}$. They are created during the deflagration phase in the rising hot plumes, thus being present at large radii and velocities[^3]. Despite the larger kinetic energy in the homogeneous model, the velocities in the ejecta tend to be only slightly larger than in the carbon depleted model with similar $^{56}$Ni mass (see first and second panel of Fig. \[fig:3dejecta\]). In particular, the outer boundary of the Ni core and the maximum of the Si distribution are shifted by only a few $100$ km s$^{-1}$. Moreover, more unburnt material is present in the C depleted model, mostly because of the shift in the burning tables (see Appendix \[sec:calibration\]). When comparing a model series with varying core C mass fraction (second, third, and fourth panel of Fig.
\[fig:3dejecta\]), these effects can be seen more clearly as the kinetic energy of the ejecta increases with increasing C mass fraction and increasing production of $^{56}$Ni. The density structure (bottom panel of Fig. \[fig:3dejecta\]) is very similar for all models; thus, the differences in the spectra mainly stem from differences in the abundance distributions.

Synthetic Observables {#sec:resultsrt}
=====================

In this section, we present synthetic light curves and spectra from the radiative transfer simulations and compare them to observed SNe. The effects of the initial composition are examined in two ways:

1. by comparing a series of models with differing carbon mass fraction but otherwise identical explosion parameters, thus having different kinetic energies and $^{56}$Ni masses;

2. by comparing models with different carbon mass fraction producing a similar amount of $^{56}$Ni while having different kinetic energies.

Light Curves
------------

Radiative transfer simulations were run for the DDT models compared in Fig. \[fig:3dtimeevolution\] (rpc32\_DDT8\_N100, rpc40\_DDT8\_N100, and c50\_DDT8\_N100). The ignition condition was chosen to be the same as for the N100 model of @seitenzahl2013a since the intermediate deflagration strength of this model leads to the best agreement with observed light curves and spectra in this series [@sim2013a][^4]. Compared to the N100 model, the DDT criterion of the new models was adjusted such that the rpc32 model produces approximately $0.6 M_\odot$ of $^{56}$Ni, the same amount as the N100 model of [@seitenzahl2013a]. Thus, these two models are very similar (apart from the initial composition), allowing us to assess the results of changing the initial C fraction. ![image](figures/lightcurves) The synthetic light curves from these models and the N100 model from @seitenzahl2013a are compared to some normal SNe Ia in Fig. \[fig:lightcurves\].
The peak luminosities of the bolometric and the band-limited light curves are larger for larger $^{56}$Ni masses. The spread in peak luminosities is largest in the *U* band and decreases toward redder bands, similar to what is found by @sim2013a for their model series. The shapes of the light curves match observations quite well around maximum for the *U*, *B*, and *V* bands, although the flux is too low in the *U* band and too high in the *V* band. Thus, the colors are too red compared to observed light curves (similar to the models of [@sim2013a]). As already stated in @sim2013a, this is probably due to line-blocking, mainly by IGE, in the blue part of the spectrum. This is a generic feature of these spontaneous DDT models caused by the deflagration ashes (which contain stable IGE) rising to rather high velocities, near the IME (as described above, see Fig. \[fig:3dejecta\]), hence influencing the synthetic observables in the photospheric phase. In contrast to this, in 1D models (see fig. 2 from [@khokhlov1991b]) the IGE are contained in the core of the ejecta beneath the radioactive $^{56}$Ni owing to the spherical symmetry adopted in these models, neglecting the turbulent deflagration burning. Apart from this, the reddening could also be due to shortcomings in the radiative transfer treatment, as reproducing the colors in radiative transfer simulations of SNe Ia is in general difficult [@dessart2013b]. In the *I* band, the models deviate from observed light curves: they are too bright and do not show two maxima, similar to the models in @sim2013a. Although this may be due to an incomplete treatment in the radiative transfer code affecting the Ca<span style="font-variant:small-caps;">ii</span> infrared triplet, which significantly contributes to this band [@sim2013a], it may also hint at the spontaneous DDT models being in general inferior to other models in this respect.
For example, sub-Chandrasekhar models [@sim2010a] or violent merger models [@pakmor2012a] show better agreement using the same radiative transfer code <span style="font-variant:small-caps;">Artis</span> (see also [@sim2013a]). In the near-infrared bands *J* and *H*, the models agree qualitatively with observations, matching the magnitudes at the first maximum and exhibiting a second maximum. The variations in these near-infrared bands are, especially at maximum, smaller than in the optical bands, which is also seen in observations showing that SNe Ia are better standard candles in the near-infrared [@elias1985a; @meikle2000a; @krisciunas2004a]. Moreover, in these bands, the first maxima are brighter than in the light curves of @sim2013a, thus agreeing better with observed light curves. As already predicted in @sim2013a, this results from using a larger atomic data set, thus producing more fluorescence in the near-infrared. The position of the second maximum, however, is too early compared to observed light curves. The second maximum is caused by the recombination front from doubly to singly ionized material hitting the iron-rich core [@kasen2006b]. Thus, the offset between simulations and observations could indicate that IGE reside at too large velocities in our models. However, it could also be related to deficiencies in the numerical treatment, such as inaccurate atomic data or approximations in calculating the plasma state in <span style="font-variant:small-caps;">Artis</span>. A comparison of the two models with similar $^{56}$Ni masses (N100 from [@seitenzahl2013a] and rpc32\_DDT8\_N100) shows only slight differences. The main effect on the light curves here stems from the different kinetic energies of the ejecta. According to the analytic study of bolometric light curve models by @pinto2000a, models with larger kinetic energy “peak earlier, at higher luminosities, and decline more rapidly” [@pinto2000a see also their fig. 4].
This is indeed also found for the bolometric light curves of the models N100 from @seitenzahl2013a and rpc32\_DDT8\_N100 (see Fig. \[fig:lightcurves\]): the C depleted model peaks later and at a lower luminosity. Moreover, its decline rate is smaller. The effect is not as large as for the models in @pinto2000a because the total kinetic energy of the rpc32\_DDT8\_N100 model differs only by about 11% from the N100 model of [@seitenzahl2013a].

Width–Luminosity Relation
-------------------------

![Light curve width–luminosity relation for a series of models with differing C depletion in the core, but otherwise identical parameters (in black: models rpc32\_DDT8\_N100, rpc40\_DDT8\_N100, and c50\_DDT8\_N100 with increasing *B*$_\mathrm{max}$ in this order) as well as for the N100 model of @seitenzahl2013a [in gold]. The dots and the square denote angle-averaged values; pale crosses denote values for different lines of sight. The green crosses show observed supernovae from the CfA sample [@hicken2009b]. []{data-label="fig:dm15"}](figures/philipps) The decline in the *B* band of the models is more rapid than for most normal SNe Ia (Fig. \[fig:dm15\]). More importantly, the model series as a whole fails to show the same width–luminosity relation (WLR) as normal SNe Ia; instead, the trend of the series is roughly perpendicular to the observed WLR (Fig. \[fig:dm15\]). The fundamental parameters for the light curve evolution that are changed in this model series are the kinetic energy of the ejecta and the $^{56}$Ni mass, which both increase with increasing C fraction. According to the analytic study of light curves by @pinto2000a, both of these parameters anti-correlate individually with the observed WLR. Therefore, it is not surprising to find an anti-correlation for our model series, where the increase in C fraction (as a physical parameter of the explosion model) leads to an increase in kinetic energy and $^{56}$Ni mass, both driving a trend perpendicular to the observed WLR.
This implies that the initial composition is probably not the main parameter driving the WLR, but rather a secondary parameter causing scatter perpendicular to the WLR. This is similar to orientation effects, which also drive scatter around the mean WLR. Within this model, the only way to drive the WLR in the observed direction would be a correlation among the physical model parameters. In this case, the ignition configuration and the DDT criterion would depend on the initial composition (in a yet unknown way), presumably resulting in a suitable WLR. The 1D delayed detonation models of @hoeflich1996a show a WLR, where the changing parameter is the DDT transition density, but as this parametrization does not include turbulence, for example, it cannot be easily generalized to our multi-dimensional models. The 2D models of @kasen2009a lie in a reasonable region of the light curve width–luminosity diagram; this was achieved by varying the ignition conditions for the deflagration as well as the DDT criterion. This model series, however, faces the problem that the correlation between the varied explosion parameters and the underlying physical parameters of the initial model (such as central density, composition, or metallicity) is not known; it therefore does not identify the physical parameter driving the WLR.

Spectra
-------

The synthetic spectra of the model series are shown in Fig. \[fig:spectra\] at *B*-band maximum. They share all main spectral features and differ mostly in the absolute flux values (Fig. \[fig:spectra\]). Moreover, the Si<span style="font-variant:small-caps;">ii</span> feature at $\lambda6355\,\AA{}$ varies in blue shift for different models: with increasing C mass fraction, the absorption feature shifts from $12.5\times10^3$ km s$^{-1}$ (rpc32) to $14.4\times10^3$ km s$^{-1}$ (c50), thus reflecting the change in the velocity distributions (compare Fig. \[fig:3dejecta\]).
The features associated with Ca<span style="font-variant:small-caps;">ii</span>[^5], however, are not shifted in wavelength for different models. ![Comparison of spectra around *B*-band maximum at 17.3 days post explosion for models rpc32\_DDT8\_N100, rpc40\_DDT8\_N100, and c50\_DDT8\_N100. The inset shows the Si <span style="font-variant:small-caps;">ii</span> feature at $\lambda6355\,\AA{}$ in more detail; the units are the same as in the main plot. All fluxes are scaled to a distance of 1 Mpc. []{data-label="fig:spectra"}](figures/spectra-compmax) ![Comparison of spectra for three epochs (8 d before, 5 d before and at *B*-band maximum) of models rpc32\_DDT8\_N100 and N100 of @seitenzahl2013a to SN 2011fe. The flux is scaled to M101, the host galaxy of SN 2011fe. []{data-label="fig:spectraseries"}](figures/spectra-timeseries) A comparison between two models with similar $^{56}$Ni mass but different kinetic energies (rpc32\_DDT8\_N100 and N100 from [@seitenzahl2013a], see Fig. \[fig:spectraseries\]) shows that the C depleted model shifts to lower velocities by only about 600 km s$^{-1}$ at *B*-band maximum because of the lower kinetic energy of the ejecta. Comparing to observations, this shift goes in the right direction but is not large enough to account for the lower velocities in, e.g., SN 2011fe, as shown for several epochs in Fig. \[fig:spectraseries\]. Moreover, the bulk of observed SNe shows considerably lower velocities, mostly between $10\,000$ km s$^{-1}$ and $12\,000$ km s$^{-1}$ at *B*-band maximum [@benetti2005a; @silverman2012b]. The magnitude of this effect can be estimated by assuming that the velocities in the two models scale with the square root of the kinetic energy, $\frac{\overline{v}_1}{\overline{v}_2} = \sqrt{\frac{E_{\mathrm{kin},1}}{E_{\mathrm{kin},2}}}$.
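This square-root scaling is easy to check numerically. A minimal Python sketch (the helper name is ours), using the asymptotic kinetic energies of the two models and the Si II velocity of N100 from Table \[tab:3dproperties\]:

```python
def velocity_ratio(e_kin_1, e_kin_2):
    """Ratio of characteristic ejecta velocities of two models,
    assuming v scales with the square root of the kinetic energy."""
    return (e_kin_1 / e_kin_2) ** 0.5

# Asymptotic kinetic energies in units of 1e51 erg:
ratio = velocity_ratio(1.28, 1.44)   # rpc32_DDT8_N100 relative to N100
v_si_n100 = 13.1e3                   # N100 Si II velocity in km/s
shift = (1.0 - ratio) * v_si_n100    # expected reduction of the blueshift
print(f"{1 - ratio:.0%} lower velocities, ~{shift:.0f} km/s for Si II")
```

The result, a $\sim$6% velocity change corresponding to $\sim$750 km s$^{-1}$ for the Si II velocity, is consistent with the estimate in the text.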
A change in the kinetic energies from N100 to the rpc32 model of about 11% hence results in a change in velocities of about 6%, which yields about 750 km s$^{-1}$ for the Si<span style="font-variant:small-caps;">ii</span> velocity, on the order of the change seen in the models. This seems to be a shortcoming of the spontaneous DDT model, also present in our previous model series [@seitenzahl2013a; @sim2013a], but also in studies by other groups (e.g., the DDC models from [@dessart2013a], see their fig. 17; these models are based on explosion simulations of [@khokhlov1991a]). Thus, if explosions from Chandrasekhar-mass WDs constitute a large fraction of normal SNe Ia, the nucleosynthetic yields (and thus the energy release) could be affected by uncertainties in the nuclear reaction rates. Alternatively, the mechanism that distorts the hydrostatic equilibrium of the WD[^6] may not involve a direct transition from the deflagration to a detonation, as in our spontaneous DDT models, but may instead involve other large-scale motions (e.g., pulsations).

Comparison to Earlier Studies
-----------------------------

@hoeflich1998a presented a series of 1D delayed detonation models (including hydrodynamics, nucleosynthesis, light curves, and spectra) with varying metallicity and included one model with a different C fraction. Their 1D models treat the deflagration as propagating at a certain fraction of the local sound speed; the transition to a detonation is initiated when a certain density is reached. Despite these simplifications, their conclusions are similar to our findings: a lower C fraction leads to lower $^{56}$Ni production, less kinetic energy, and more confined ejecta. Their C reduced model, however, shows a faster decline, in contrast to what is seen in our models and to what is expected for lower kinetic energies according to the analytic study of [@pinto2000a].
They also mention that more realistic structures of the progenitor composition should be taken into account, including an outer accretion layer and a C depleted core; this was accomplished in the present study. Their assumption, however, that a model with a homogeneous, but lower C fraction does not differ from a model with a C depleted core (see their “Final Discussion and Conclusions”) holds only to a first approximation. As the propagation of the burning fronts depends not only on the density but also on the composition (see Appendix \[sec:calibration\]), the energy release and burning products of the detonation in the outer layers depend on the composition there. The 1D simulations by @umeda1999a include only hydrodynamics and nucleosynthesis of delayed detonation models, where the DDT density depends on the initial C fraction. In contrast to this, our DDT criterion includes additional effects such as turbulent velocity fluctuations (see Section \[sec:leafs\]). Nevertheless, this leads to a similar result: the density at which the detonation is initiated decreases for decreasing C fractions. Thus, in their model series, the produced amount of $^{56}$Ni decreases for decreasing C fractions (see their fig. 2). Apart from this, the working hypothesis of @umeda1999a, who assume the C mass fraction to be responsible for the WLR, is challenged by our findings. [ In their study of 1D delayed detonation models, @dominguez2001a compute stellar models for different ZAMS masses and metallicities, which they use as initial models for the explosion simulations. All models use the same central density at explosion and employ the same density as DDT criterion, but do not take the pre-explosion simmering phase into account. They find that for larger ZAMS masses, less $^{56}$Ni is produced because of the lower C abundance, resulting in lower velocities in the ejecta, similar to our findings.
In their models, this leads to a decrease in the maximum brightness, while the decline rate stays constant. Thus, @dominguez2001a conclude that the variation in the ZAMS mass leads to a spread or offset in the WLR, similar to what we find for the initial C mass fraction. ]{} @hoeflich2010a suggest that in addition to the primary light curve parameter, $\Delta m_{15}$ or stretch $s$, two independent parameters are necessary to describe the differences in shapes for different SNe. As physical parameters, they suggest the C/O ratio and the central density to account for different shapes in the early and late phase of the light curve, respectively. In the 1D models of @hoeflich2010a, the transition density of the DDT determines the $^{56}$Ni mass of the explosion; the impact of other parameters (C/O ratio, central density) on the intrinsic brightness is small; nevertheless, these variations should be taken into account in the calibration. Our study agrees that the C fraction is a secondary parameter in the family of SN Ia light curves. In our 3D models, however, the C fraction causes large variations in the $^{56}$Ni mass because of the different turbulent evolution of the deflagration flame and the resulting different triggering of the DDT. This should be taken into account when trying to create a physically motivated multi-parameter set for SN Ia light curves. The first multi-dimensional simulations examining different C fractions were presented by @roepke2004c and @roepke2006b. In their 3D simulations of pure deflagrations, the C fraction does not affect the explosion significantly; only the kinetic energy of the ejecta is altered to some extent. Therefore, they conclude that “the progenitor’s carbon-to-oxygen ratio is unlikely to account for the observed variations in type Ia supernova luminosity” [@roepke2004c].
This statement only holds for pure deflagration models, which are nowadays thought to account for 2002cx-like SNe Ia rather than for normal SNe Ia [@jordan2012a; @kromer2013a; @fink2014a] because of their mixed ejecta structure, in contrast to the layered structure seen in normal SNe Ia. Thus, their statement does not apply to modeling normal SNe Ia, and it does not contradict our results for delayed detonation models.

Conclusions {#sec:conclusions}
===========

In this work, we study the hydrodynamics, nucleosynthesis, synthetic light curves, and synthetic spectra of a series of [multi-dimensional]{} spontaneous DDT models for SNe Ia in order to [examine whether varying the initial C fraction resolves remaining discrepancies with observations. The main points we consider are the WLR resulting from the models and differences in spectral features. ]{} Firstly, the initial C mass fraction is not the primary parameter of SNe Ia (at least for spontaneous DDT models). Although absolute luminosities [(*B*$_\mathrm{max}$ between $-19.0$ and $-19.4$)]{} and decline rates [($\Delta m_{15}(B)$ between $1.41$ and $1.55$)]{} are in the range of normal SNe Ia, our model series fails to reproduce the observed WLR (Fig. \[fig:dm15\]). Therefore, the C mass fraction is probably only a secondary parameter causing scatter perpendicular to the observed WLR. This may only be changed by a concerted correlation of the different physical parameters of the underlying explosion model, such as ignition conditions or DDT criteria. Secondly, carbon depleted models do not show significantly better agreement of important spectral features, such as the Si<span style="font-variant:small-caps;">ii</span> feature at $\lambda6355\,\AA{}$. The decrease in kinetic energy does not decrease the blueshift of the feature sufficiently to be compatible with the bulk of normal SNe Ia. This shortcoming seems to be generally present in spontaneous DDT models (1D, 3D, different groups; see discussion above).
[Finally, our spontaneous DDT models are able to reproduce most of the observed properties of SN Ia light curves and spectra, thus supporting the spontaneous DDT model. So far, however, our 3D spontaneous DDT models do not show the observed width–luminosity relation.]{} While neither the deflagration strength (through the number of ignition kernels, [@seitenzahl2013a; @sim2013a]) nor the initial C fraction (this work) is the primary parameter, it may still be possible that other parameters (e.g., the DDT criterion) or yet unknown correlations of parameters are able to reproduce the light curve width–luminosity relation in 3D models. Nevertheless, other shortcomings remain, such as colors, which are too red [@sim2013a], and the velocities of spectral features, especially the Si<span style="font-variant:small-caps;">ii</span> feature that defines SNe Ia. This may be interpreted in different ways: if Chandrasekhar-mass progenitors are indeed responsible for the bulk of SNe Ia, the spontaneous DDT model has some severe shortcomings; this may hint at the possibility that a different mechanism distorts the hydrostatic equilibrium of the WD and leads to a detonation (e.g., pulsations). Apart from this, the failure of recent multi-dimensional DDT models to identify the primary parameter of the WLR could also indicate that this primary parameter is the mass of the primary WD[^7], as is the case in detonations of sub-Chandrasekhar-mass WDs either in a double degenerate binary (violent merger scenario, e.g., [@pakmor2012a; @pakmor2013a]) or in a single degenerate system (double detonation scenario, e.g., [@fink2010a]). The 3D models have been computed on the supercomputers <span style="font-variant:small-caps;">Jugene</span> and <span style="font-variant:small-caps;">Juqueen</span> at the Jülich Supercomputer Center under the project HMU13.
This work was also supported by the Deutsche Forschungsgemeinschaft via the Transregional Collaborative Research Center TRR 33 “The Dark Universe”, the Emmy Noether Program (RO 3676/1-1), the ARCHES prize of the German Ministry of Education and Research (BMBF), the graduate school “Theoretical Astrophysics and Particle Physics” at the University of Würzburg (GRK 1147) and the Excellence Cluster EXC 153. Parts of this research were conducted by the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020. STO acknowledges support from the Studienstiftung des deutschen Volkes and thanks S. Hachinger for valuable discussions. RP acknowledges support by the European Research Council under ERC-StG grant EXAGAL-308037 and by the Klaus Tschira Foundation. MF, SAS, and FKR acknowledge travel support by the DAAD/Go8 German-Australian exchange program. We thank S. Taubenberger for providing the data of the CfA sample. For data processing and plotting, we used NumPy and SciPy [@oliphant2007a], IPython [@perez2007a], and Matplotlib [@hunter2007a].

Iterative Calibration of the Levelset Tables {#sec:calibration}
============================================

The tables necessary for determining the composition behind the burning fronts are created using an iterative calibration scheme similar to @fink2010a. This calibration scheme is carried out for homogeneous compositions of the progenitor WD ($X(^{12}\mathrm{C})=0.2,0.3,\ldots,0.9$), separately for deflagrations and detonations. It yields the composition behind the burning front as a function of the density of the unburnt material. Each calibration run uses as an initial estimate burning to NSE at the relevant densities (detonations: above $10^5$ g cm$^{-3}$; deflagrations: above $2\times 10^5$ g cm$^{-3}$), such that the energy release is overestimated.
The table with the nucleosynthetic yields of this initial estimate as a function of density is used in a hydrodynamic simulation of a pure detonation or deflagration, followed by a nucleosynthetic postprocessing. A new table is then computed with the detailed nucleosynthetic yields for use in the next hydrodynamic simulation. This procedure is iterated six times for each calibration run. As an example, the final table for $X(^{12}\mathrm{C})=0.5$ for detonations is shown in Fig. \[fig:dettable-c50\]. The transitions to different burning stages (C burning, O burning, Si burning) are clearly visible. The convergence of this scheme is based on the fact that the reaction rates depend strongly on density. The overestimation of the energy release in the first hydrodynamic simulation is thus decreased by the subsequent postprocessing, since the density of unburned material prior to the front crossing is not strongly affected by a larger energy release. The influence of the initial composition on the final tables is best seen by comparing the reaction $Q$ value, which is the energy release of the burning front. The $Q$ value is shown for different initial compositions in the upper panel of Fig. \[fig:tables-qvalue\] for detonations and in the lower panel of Fig. \[fig:tables-qvalue\] for deflagrations. The tables differ mainly in the density interval over which the composition changes; it is wider and extends to lower densities for detonations. Moreover, the shapes differ: in the tables for deflagrations, the transitions to the different burning stages are not separated. For both detonations and deflagrations, the $Q$ value is globally lower for lower carbon mass fractions. Moreover, the transitions to the different burning stages shift to higher densities for lower carbon mass fractions. Overall, this leads to a *weaker development* of both detonations and deflagrations for lower carbon mass fractions.
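As a toy illustration of the iterative scheme described above, the sketch below alternates a placeholder "hydrodynamics plus postprocessing" update on a $Q$-value table, starting from an NSE-like initial estimate that overestimates the energy release. The functions and the density dependence are invented for illustration only and are not the actual solver or network calculation.

```python
import numpy as np

# Toy stand-in for the iterative levelset-table calibration.
# `hydro_and_postprocess` is a hypothetical placeholder, not the
# real hydrodynamic simulation + nucleosynthetic postprocessing.

rho = np.logspace(5, 9, 50)  # density grid of unburnt material [g/cm^3]

def initial_estimate(rho):
    # Initial guess: burn to NSE everywhere above the threshold
    # density, which overestimates the energy release.
    return np.where(rho > 1e5, 0.8, 0.0)  # toy Q value per grid point

def hydro_and_postprocess(q_table):
    # Placeholder for one hydro run plus postprocessing: nudges the
    # table toward a density-dependent "detailed" yield. Converges
    # because the update contracts toward q_true.
    q_true = 0.5 * (1.0 - np.exp(-rho / 1e7))
    return q_table + 0.6 * (q_true - q_table)

q = initial_estimate(rho)
diffs = []
for _ in range(6):           # six iterations, as in the calibration runs
    q_new = hydro_and_postprocess(q)
    diffs.append(np.max(np.abs(q_new - q)))
    q = q_new
```

The point of the sketch is the fixed-point structure: each pass shrinks the correction to the table, mirroring how the overestimated energy release is reduced by successive postprocessing steps.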
![Levelset table for a detonation with an initial composition of $X(^{12}\mathrm{C})=0.5$. We show the mass fractions of C, O, intermediate mass elements (IME) and iron group elements (IGE), which are released behind the front, against the density of the unburnt material. []{data-label="fig:dettable-c50"}](figures/det_c50) ![$Q$ value (energy release behind burning front) of levelset tables for selected initial compositions for detonations (upper panel) and deflagrations (lower panel). []{data-label="fig:tables-qvalue"}](figures/tables_qvalues) [^2]: The DDT criterion chosen here requires high turbulent velocity fluctuations $\ge 10^8$ cm s$^{-1}$ to be present with a certain probability for at least half an eddy-turnover time; this is the same criterion as used by @seitenzahl2013a. More details on the treatment of the criterion are described by @ciaraldi2013a. [^3]: This is a general feature of multi-dimensional DDT models, as opposed to 1D models, where the stable iron group elements are concentrated near the center (see fig. 2 of [@khokhlov1991b]). [^4]: This model was also compared to SN 2011fe alongside a double-degenerate violent merger model in @roepke2012a. [^5]: The Ca<span style="font-variant:small-caps;">ii</span> H & K lines ($\lambda\lambda3934\,\&\,3968$Å) and the Ca<span style="font-variant:small-caps;">ii</span> infrared triplet. [^6]: This is needed for the detonation to propagate through lower-density, pre-expanded material such that also IME are produced. [^7]: This was already suspected by @pinto2000a in their analytic study of light curves and is supported by observations of @stritzinger2006a and @scalzo2014a.
--- author: - 'Mete Ozay' title: 'Fine-grained Optimization of Deep Neural Networks' ---

Introduction {#intro}
============

Despite the practical success of DNNs, understanding their generalization behavior is certainly an open problem [@rethinkGen]. Recent theoretical works [@NIPS2017176; @NIPS2017204; @suzuki18a; @zhou18a; @arora18b; @SizeInd; @neyshabur2018a] addressed this problem by extending the early results proposed for shallow linear neural networks (NNs) [@Anthony] to a more general class of DNNs (e.g. NNs with ReLU) and convolutional neural networks (CNNs) (see Table \[tab:compareGen\] for a comparison). The proposed asymptotic bounds were obtained by modeling the weight matrices of DNNs as random matrices and applying concentration inequalities to them. Thereby, the bounds were computed as functions of several $\ell_p$ norms of the weight matrices, where $1 \leq p \leq \infty$. In this work, we conjecture that if we can impose multiple constraints on the weights of DNNs to upper bound the norms of the weight matrices, and train the DNNs with these weights, then the DNNs can achieve empirical generalization errors closer to the proposed theoretical bounds, and we can improve their accuracy in various tasks. We pose two problems in order to achieve this goal: (1) renormalization of weights to upper bound the norms of their matrices, and (2) training DNNs with renormalized weights with guaranteed convergence to minima. **(1) Bounding norms of weights:** We propose a two-stage renormalization procedure. First, we normalize weights according to the Euclidean, Frobenius and spectral norms, since they are used in the bounds of generalization errors [@NIPS2017176; @NIPS2017204; @suzuki18a; @zhou18a; @arora18b; @SizeInd; @neyshabur2018a]. Second, we reparameterize the normalized weights to set a finite and constant upper bound on the norms of the weight matrices.
For this purpose, we could use a parameter learning approach as utilized in batch normalization (BN) [@BN]. However, such an approach substantially increases the running time of DNNs during training. In addition, it is not efficient to estimate the parameters using a small number of samples in batch training. Therefore, we reparameterize weights according to (a) geometric properties of weight spaces, and (b) statistical properties (the standard deviation) of the features on which the weights are applied. The proposed reparameterization method enables us to set the upper bound of each of these norms of the weight matrices to 1.0. In addition, the proposed renormalization procedure enables us to control the variance of weights during training of DNNs, thereby assuring that DNNs do not have spurious local minima [@xie17a]. Employing the standard deviation in the reparameterization also makes optimization landscapes significantly smoother by bounding the change of the norms of gradients during training. This property has recently been studied to analyze the effect of BN on the optimization landscape in [@Santurkar]. We use this property to develop a new optimization method for weight renormalization in this paper, as explained in the next problem. **(2) Training DNNs with renormalized weights:** We consider two subproblems. (i) First, note that there is no single procedure that normalizes weights jointly according to all the different norms. Therefore, we normalize weights in groups such that the same or different norms can be used to normalize the weight matrices belonging to each group. We can mathematically prove that the procedure proposed to solve problem (1) sets an upper bound for all of the aforementioned norms. However, we do not have a mathematical proof to explain whether weights normalized according to a single norm can provide the generalization bound, nor to determine its type. We examine this question in detail in various experiments in the supp. mat.
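As a minimal sketch of the first normalization stage, the snippet below rescales a weight matrix so that a chosen norm (Euclidean, Frobenius or spectral) equals 1; the function names are ours, not the paper's.

```python
import numpy as np

def normalize(W, norm="fro"):
    """First-stage normalization: rescale W so the chosen norm is 1."""
    if norm == "euclidean":        # Euclidean norm of the flattened weights
        n = np.linalg.norm(W.ravel())
    elif norm == "fro":            # Frobenius norm
        n = np.linalg.norm(W, "fro")
    elif norm == "spectral":       # largest singular value
        n = np.linalg.norm(W, 2)
    else:
        raise ValueError(norm)
    return W / n

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))
W_fro = normalize(W, "fro")
W_spec = normalize(W, "spectral")
```

Note that, since the spectral norm of a matrix never exceeds its Frobenius norm, Frobenius normalization also upper bounds the spectral norm by 1, which is one way a single rescaling can bound several of the norms used in the generalization bounds at once.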
Experimental results show that training DNNs using a set of groups of weights normalized according to all these different norms achieves the best generalization performance in various tasks. Since we cannot mathematically verify this observation, we conjecture that using a diverse set of weights normalized with different constraints improves the generalization error compared to using weights normalized according to a single constraint. We consider the mathematical characterization of this property as an open problem. Spaces of normalized weights can be identified with different Riemannian manifolds [@ooAAAI18][^1]: (i) unit norm weights reside on the sphere $Sp(A_lB_l-1)$, (ii) orthonormal weights belong to the Stiefel manifold $St(A_l,B_l)$, and (iii) weights with orthogonal columns reside on the oblique manifold $Ob(A_lB_l)$, at each $l^{th}$ layer of a DNN. We consider training DNNs in a more general setting employing groups of weights which can be normalized according to different normalization constraints. Group-wise operations are implemented by concatenating the weight matrices $\omega_{g,l}^i$ belonging to each $g^{th}$ group by ${\omega_{g,l} = (\omega_{g,l}^1, \omega_{g,l}^2, \ldots,\omega_{g,l}^{\mathfrak{g}})}$, ${\forall g = 1,2,\ldots,G_l}$. For the corresponding group, a space of concatenated weights is identified by the Cartesian product of the manifolds of the weights $\omega_{g,l}^i, i=1,2,\ldots,\mathfrak{g}$. In addition, if we renormalize weights using the standard deviation of features obtained at each epoch, then the geometry of the weight manifolds also changes. Therefore, we address the second subproblem (ii), which is optimization on dynamically changing product manifolds of renormalized weights. DNNs can be trained with multiple constraints using optimization methods proposed for training shallow algorithms [@Pls; @Lui2012], and on individual manifolds [@ooAAAI18; @RBN].
If we employ these methods on products of weight manifolds (POMs) to train DNNs, then we observe early divergence and vanishing and exploding gradients due to the nonlinear geometry of products of different manifolds. More precisely, the assumption of a bound on the operator norm of the Hessian of geodesics in POMs, which is required for assurance of convergence, fails while performing Stochastic Gradient Descent (SGD) with backpropagation on products of different weight manifolds. Therefore, a non-increasing bound on the probability of failure of the optimization algorithm cannot be computed, and a convergence bound cannot be obtained. In order to solve these problems, we first propose a mathematical framework to make use of the geometric relationship between weight manifolds determined by different constraints (Section \[sec3\]). Then, we suggest an approach for training DNNs using multiple constraints on weights to improve their performance under the proposed framework. To this end, we propose a new algorithm that we call fine-grained stochastic gradient descent (FG-SGD) to train DNNs using POMs. We elucidate geometric properties of POMs to assure convergence of FG-SGD to global minima while training nonlinear DNNs with particular assumptions on their architectures, and to local minima while training a more generic class of nonlinear DNNs. Our contributions are summarized as follows: 1. DNNs trained using weights renormalized by the proposed method (see Proposition 1 in the supp. mat. for the derivation) can achieve tighter bounds on theoretical generalization errors compared to using unnormalized weights. These DNNs do not have spurious local minima [@xie17a] (see the next section for a detailed discussion). The proposed scaling method generalizes the scaling method proposed in [@w_norm] for weight normalization by incorporating geometric properties of weight manifolds. 2. We explicate the geometry of weight manifolds defined by multiple constraints in DNNs.
For this purpose, we explore the relationship between geometric properties of POMs (i.e. sectional curvature), gradients computed at POMs (Theorem 1), and those of component manifolds of weights in DNNs in Section \[sec3\] (please see Lemma 1 in the supp. mat. for more precise results). 3. We propose an algorithm (FG-SGD) for optimization on different collections of POMs (Section \[sec3\]) by generalizing SGD methods employed on weight manifolds [@ooAAAI18; @NIPS2017_7107]. Next, we explore the effect of geometric properties of the POMs on the convergence of FG-SGD using our theoretical results. In the proof of the convergence theorems, we observe that gradients of weights should satisfy a particular normalization requirement, and we employ this requirement for adaptive computation of the step size of FG-SGD (see Section \[G-SGDdetails\]). To the best of our knowledge, this is the first result that establishes the relationship between norms of weights and norms of gradients for training DNNs. We also provide an example for the computation of a step size function for optimization on POMs identified by the sphere (Corollary 2 in the supp. mat.). 4. We propose a strategy to construct sets of identical and non-identical weight spaces according to their employment in groups on input and output channels in DNNs (Section \[sec2\]). In the experimental analyses, we apply this strategy to train state-of-the-art networks (e.g. Resnext [@resnext], Mobilenetv2 [@Sandler] and DeepRoots [@Ioanno]) which use well-known weight grouping strategies, such as depth-wise or channel-wise grouping, for efficient implementation of DNNs. The results show that the proposed strategy also improves the accuracy of these DNNs. 5. We prove that loss functions of DNNs trained using the proposed FG-SGD converge to minima almost surely (see Theorem 2 and Corollary 1 in the supp. mat.).
To the best of our knowledge, our proposed FG-SGD is the first algorithm performing optimization on different collections of products of weight manifolds to train DNNs with convergence properties.

Construction of Sets of POMs in DNNs {#sec2}
====================================

Let ${S=\{s_i= (\mathbf{I}_i,y_i) \}_{i=1}^N}$ be a set of training samples, where $y_i$ is the class label of the $i^{th}$ image $\mathbf{I}_i$. We consider an $L$-layer DNN consisting of a set of tensors $\mathcal{W} = \{\mathcal{W}_l \}_{l=1}^L$, where ${\mathcal{W}_l = \{ \mathbf{W}_{d,l} \in \mathbb{R}^{A_l \times B_l \times C_l} \} _{d=1} ^{D_l}}$, and ${\mathbf{W}_{d,l} = [W_{c,d,l} \in \mathbb{R}^{A_l \times B_l}]_{c=1}^{C_l}}$ is a tensor[^2] of weight matrices $W_{c,d,l}, \forall {l=1,2,\ldots,L}$, for each $c^{th}$ channel ${c=1,2,\ldots,C_l}$ and each $d^{th}$ weight $d=1,2,\ldots,D_l$. In popular DNNs, weights with $A_l=1$ and $B_l=1$ are used at fully connected layers, and those with $A_l >1$ or $B_l>1$ are used at convolutional layers. At each $l^{th}$ layer, a feature representation $f_l(\mathbf{X}_l;\mathcal{W}_l)$ is computed by compositionally employing non-linear functions by $$f_l(\mathbf{X}_l;\mathcal{W}_{l}) = f_l(\cdot;\mathcal{W}_l) \circ f_{l-1}(\cdot;\mathcal{W}_{l-1}) \circ \cdots \circ f_1(\mathbf{X}_1;\mathcal{W}_{1}),$$ where $\mathbf{X}_{l} = [ X_{c,l}]_{c=1}^{C_l}$, and $\mathbf{X}_1 := \mathbf{I}$ is an image at the first layer ($l=1$). The $c^{th}$ channel of the data matrix $X_{c,l}$ is convolved with the kernel ${W}_{c,d,l}$ to obtain the $d^{th}$ feature map $X_{d,l+1} := q(\hat{X}_{d,l})$ by ${\hat{X}_{d,l} = {W}_{c,d,l} \ast X_{c,l}}, \forall c, d, l$, where $q(\cdot)$ is a non-linear function, such as ReLU[^3]. Previous works [@ooAAAI18; @NIPS2017_7107] employ SGD using weights each of which resides on a single manifold[^4] at each layer of a DNN.
We extend this approach considering that each weight can reside on an individual manifold or on collections of products of manifolds, which are defined next. Suppose that ${\mathcal{G}_l = \{ \mathcal{M}_{\iota,l}: \iota \in \mathcal{I}_{\mathcal{G}_l} \}}$ is a set of weight manifolds$^{\ref{footnote1}}$ $\mathcal{M}_{\iota,l}$ of dimension $n_{\iota,l}$, which is identified by a set of indices $\mathcal{I}_{\mathcal{G}_l}, \forall {l=1,2,\ldots,L}$. More concretely, $\mathcal{I}_{\mathcal{G}_l}$ contains indices each of which represents an identity number ($\iota$) of a weight that resides on a manifold $\mathcal{M}_{\iota,l}$ at the $l^{th}$ layer. In addition, a subset ${\mathcal{I}_{l}^g \subseteq \mathcal{I}_{\mathcal{G}_l}}, {g =1,2,\ldots,G_l}$, is used to determine a subset $\mathcal{G}^g_l \subseteq \mathcal{G}_l$ of weight manifolds which will be aggregated to construct a product of weight manifolds (POM). Each ${\mathcal{M}_{\iota,l} \in \mathcal{G}^g_l}$ is called a component manifold of a product of weight manifolds, which is denoted by $\mathbb{M}_{g,l}$. A weight $\omega_{g,l} \in \mathbb{M}_{g,l}$ is obtained by concatenating the weights belonging to $\mathcal{M}_{\iota,l}$, $\forall \iota \in \mathcal{I}^g_{l}$, using ${\omega_{g,l} = (\omega_1, \omega_2, \cdots, \omega_{|\mathcal{I}^g_{l}|})}$, where $|\mathcal{I}^g_{l}|$ is the cardinality of $\mathcal{I}^g_{l}$. The set $\mathcal{G}_l$ is called a *collection of POMs*. [$\blacksquare$]{} We propose three schemes, called POMs for input channels (PI), for output channels (PO), and for input/output channels (PIO), to construct the index sets. Indices of the sets are selected randomly using a hypergeometric distribution without replacement at the initialization of training, and are kept fixed for the rest of training. Implementation details and experimental analyses are given in the supp. mat.
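The grouping step can be sketched as follows. This toy partition of weight indices into fixed random groups only loosely mirrors the PI/PO/PIO schemes; the scheme-specific bookkeeping over input and output channels is omitted, and the function name is ours.

```python
import numpy as np

def make_index_groups(num_weights, num_groups, seed=42):
    # Draw weight indices randomly without replacement and split them
    # into G fixed groups; each group would later be normalized
    # according to its own constraint and aggregated into a POM.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_weights)
    return np.array_split(idx, num_groups)

groups = make_index_groups(num_weights=12, num_groups=3)
```

The groups form a disjoint cover of the index set, which matches the sampling-without-replacement property stated above: every weight index is assigned to exactly one group, and the assignment does not change during training.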
Optimization using Fine-Grained SGD in DNNs {#sec3}
===========================================

Optimization on POMs in DNNs: Challenges and Solutions
------------------------------------------------------

Employing vanilla SGD on POMs with assurance of convergence to local or global minima for training DNNs using back-propagation (BP) with collections of POMs is challenging. More precisely, we observe early divergence of SGD, and exploding and vanishing gradients in the experiments, due to the following theoretical properties of collections of POMs: - Geometric properties of a POM $\mathbb{M}_{g,l}$ can be different from those of its component manifolds $\mathbb{M}_{\iota}$, even if the component manifolds are identical. For example, we observe locally varying curvatures when we construct POMs of unit spheres. Weight manifolds with more complicated geometric properties can be obtained using the proposed PIO strategy, especially by constructing collections of POMs of non-identical manifolds. Therefore, the assumption of the existence of compact weight subsets in POMs may fail due to locally varying metrics within a nonlinear component manifold and among different component manifolds [^5]. - When we optimize weights using SGD in DNNs, we first obtain gradients computed for each weight $\omega_{g,l} \in \mathbb{M}_{g,l}$ at the $l^{th}$ layer from the $(l+1)^{st}$ layer using BP. Then, each weight $\omega_{g,l}$ moves on $\mathbb{M}_{g,l}$ according to the gradient. However, curvatures and metrics of $\mathbb{M}_{g,l}$ can vary locally, and they may be different from those of the component manifolds of $\mathbb{M}_{g,l}$, as explained above. This geometric drawback causes two critical problems. First, weights can be moved incorrectly if we move them using only gradients computed for each individual component of the weights, as is popularly employed for Euclidean linear weight spaces.
Second, due to the incorrect employment of gradients and movement of weights, the probability of failure of the SGD cannot be bounded, and convergence cannot be achieved (see the proofs of Theorem 2, Corollary 1 and Corollary 2 for details). In practice, this causes an unbounded increase or decrease of the values of gradients and weights. In order to address these problems for training DNNs, we first analyze the relationship between geometric properties of POMs and those of their component manifolds in the next remark. \[remark\] (See Lemma 1 given in the supp. mat. for the complete proof of the following propositions.) Our main theoretical results regarding geometric properties of POMs are summarized as follows: 1. A metric defined on a product weight manifold $\mathbb{M}_{g,l}$ can be computed by superposition (i.e. linear combination) of the Riemannian metrics of its component manifolds. 2. The sectional curvature of a product weight manifold $\mathbb{M}_{g,l}$ is lower bounded by 0. [$\blacksquare$]{} We use the first result (1) for the *projection* of Euclidean gradients obtained using BP onto product weight manifolds. More precisely, we can compute *norms of gradients* at weights on a product weight manifold by linear superposition of those computed on its component manifolds in [FG-SGD]{}. Thereby, we can move a weight on a product weight manifold by (i) retraction of the components of the weight on the component manifolds of the product weight manifold, and (ii) concatenation of the projected weight components in FG-SGD. Note also that some sectional curvatures vanish on a product weight manifold $\mathbb{M}_{g,l}$ by the second result (2). For instance, suppose that each component weight manifold $\mathcal{M}_{\iota,l}$ of $\mathbb{M}_{g,l}$ is a unit two-sphere $\mathbb{S}^2$, ${\forall \iota \in \mathcal{I}_{\mathcal{G}_l}}$. Then, $\mathbb{M}_{g,l}$ has unit curvature along two-dimensional subspaces of its tangent spaces, called two-planes.
However, $\mathbb{M}_{g,l}$ has zero curvature along all two-planes spanning exactly two distinct spheres. In addition, weights can always move according to a non-negative bound on the sectional curvature of compact product weight manifolds on their tangent spaces. Therefore, we do not need to worry about the varying positive and negative curvatures observed at different component manifolds. The second result also suggests that learning rates need to be computed adaptively as a function of *norms of gradients* and *bounds on sectional curvatures* at each layer of the DNN and at each epoch of FG-SGD for each weight $\omega$ on each product weight manifold $\mathbb{M}_{g,l}$. We employ these results to analyze convergence of FG-SGD and compute its adaptive step size in the following sections.

| Reference | Generalization bound |
|---|---|
| Neyshabur et al. [@Neyshabur15] | $\mathcal{O}\Big( \frac{2^L \prod\limits_{l=1}^{L} \prod\limits_{g=1}^{G_l} \delta_{g,l,F}}{\sqrt{N}} \Big)$ |
| Bartlett et al. [@NIPS2017204] | $\mathcal{\tilde{O}} \Bigg( \frac{\prod\limits_{l=1}^L \prod\limits_{g=1}^{G_l} \delta_{g,l,2}}{\sqrt{N}} \Big( \sum\limits_{l=1}^L \prod\limits_{g=1}^{G_l} \big(\tfrac{\delta_{g,l,2 \to 1}}{\delta_{g,l,2}}\big)^{\frac{2}{3}} \Big)^{\frac{3}{2}} \Bigg)$ |
| Neyshabur et al. [@neyshabur2018a] | $\mathcal{\tilde{O}} \Bigg( \frac{\prod\limits_{l=1}^{L} \prod\limits_{g=1}^{G_l}\delta_{g,l,2}}{\sqrt{N}} \sqrt{L^2 \varpi \sum\limits_{l=1}^L \prod\limits_{g=1}^{G_l}\frac{\delta^2_{g,l,F}}{\delta^2_{g,l,2}}} \Bigg)$ |

\[tab:compareGen\]

| Norms | $Sp$ | $St$ | $Ob$ |
|---|---|---|---|
| $\|\omega^i_{g,l}\|_{2}$ | $\sigma(\omega_{g,l}^i)$ | $1.0$ | $\sigma(\omega_{g,l}^i)$ |
| $\|\omega^i_{g,l}\|_{F}$ | $1.0$ | $(B_l)^{1/2}$ | $(B_l)^{1/2}$ |
| $\|\omega^i_{g,l}\|_{2 \to 1}$ | $1.0$ | $(B_l)^{1/4}$ | $(B_l)^{1/4}$ |

\[tab:norms\]

Bounding Generalization Errors using Fine-grained Weights
=========================================================

Mathematically, norms of concatenated weights $\omega_{g,l}, \forall g$, are lower bounded by products of norms of the component weights $\omega_{g,l}^i, \forall i$. We compute the norms of weights belonging to each different manifold in Table \[tab:norms\]. Weights are rescaled dynamically at each $t^{th}$ epoch of an optimization method proposed to train DNNs using $\Re_{i,l}^t= \frac{\gamma_{i,l}}{\lambda_{i,l}^t}$, where $\gamma_{i,l} >0$ is a geometric scaling parameter and $\lambda_{i,l}^t$ is the standard deviation of the features input to the $i^{th}$ weight $\omega_{g,l}^i$ in the $g^{th}$ group, $\forall i,g$. The scaling parameter $\Re_{i,l}^t$ enables us to upper bound the norms of weights by $1$ (see Table \[tab:compareGen\]). Computation of the upper bounds is given in Proposition 1 in the supplemental material. The proof strategy is summarized as follows: - Let $\mathfrak{b}_{i,l}$ be the product of the number of input channels and the size of the receptive field of the unit that employs $\omega_{g,l}^i$, and $\hat{\mathfrak{b}}_{i,l}$ be the product of the dimension of the output feature maps and the number of output channels used at the $l^{th}$ layer, respectively.
Then, the geometric scaling $\gamma_{i,l}$ of the weight space of $\omega_{g,l}^i$ is computed by $$\gamma_{i,l} = \sqrt{\frac{1}{\mathfrak{b}_{i,l}+\hat{\mathfrak{b}}_{i,l}} }. \label{eq:gamma}$$ - We can ensure that the standard deviation of features satisfies $\lambda_{i,l}^t \geq 1$ using two approaches. First, by employing the central limit theorem for a weighted summation of random variables of features, we can prove that $\lambda_{i,l}^t$ converges to $1$ asymptotically, as popularly employed in previous works. Second, we can assume that we apply batch normalization (BN) by setting the re-scaling parameter of the BN to $1$. Thereby, we obtain $\frac{1}{\lambda_{i,l}^t} \leq 1$. By definition, $\gamma_{i,l}^2 < B_l, \forall i,l$. In order to show that $\sigma(\omega_{g,l}^i) \leq (\gamma_{i,l})^{-1}, \forall i,l$, we apply the Bai-Yin law [@bai1993; @BAI1988166]. Thereby, we conclude that the norms of concatenated weights belonging to the groups given in Table \[tab:compareGen\] are upper bounded by $1$ if the corresponding component weights given in Table \[tab:norms\] are rescaled by $\Re_{i,l}^t, \forall i,l,t$, during training. Note that scaling by $\Re_{i,l}^t$ computed using Eq. \[eq:gamma\] is different from the scaling method suggested in [@ooAAAI18], in that our proposed method assures a tighter upper bound for the norms of weights. Our method also generalizes the scaling method given in [@xavier] in two ways. First, we use the size of the input receptive fields and output feature spaces, which determine the dimension of the weight manifolds, as well as the number of input and output dimensions, which determine the number of manifolds used in the groups. Second, we perform scaling not just at initialization but also at each $t^{th}$ epoch of the optimization method. Therefore, the diversity of weights is controlled, and we can obtain weights uniformly distributed on the corresponding manifolds, whose geometric properties change dynamically at each epoch.
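A minimal sketch of this rescaling step, under our reading of Eq. \[eq:gamma\]: $\gamma_{i,l}$ is computed from the input/output dimensions, and at each epoch the weight is multiplied by $\Re_{i,l}^t = \gamma_{i,l}/\lambda_{i,l}^t$, with $\lambda_{i,l}^t \geq 1$ enforced as assumed in the text. All function and variable names here are illustrative.

```python
import numpy as np

def geometric_scale(b_in, b_out):
    # gamma_{i,l} = sqrt(1 / (b + b_hat)), cf. Eq. [eq:gamma]; b_in and
    # b_out play the roles of the input receptive-field size times input
    # channels and the output feature dimension times output channels.
    return np.sqrt(1.0 / (b_in + b_out))

def rescale_weight(W, features, b_in, b_out):
    # Re = gamma / lambda, where lambda is the empirical standard
    # deviation of the features entering this weight at the current
    # epoch (clipped at 1, matching the assumption lambda >= 1).
    lam = max(float(np.std(features)), 1.0)
    return (geometric_scale(b_in, b_out) / lam) * W

W = np.ones((3, 3))
features = np.array([-2.0, 2.0])   # empirical std = 2
W_scaled = rescale_weight(W, features, b_in=9, b_out=16)
```

Because the rescaling depends on the feature statistics of the current epoch, it would be applied at every epoch rather than only at initialization, which is the difference from Glorot-style initialization noted above.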
Applying this property with the results given in [@xie17a], we can prove that NNs applying the proposed scaling have no spurious local minima[^6]. In addition, our method generalizes the scaling method proposed in [@w_norm] for weight normalization by incorporating geometric properties of weight manifolds.

**Algorithm \[alg1\] (FG-SGD).**

*Input:* $T$ (number of iterations), $S$ (training set), $\Theta$ (set of hyperparameters), $\mathcal{L}$ (a loss function), ${\mathcal{I}^l_{g} \subseteq \mathcal{I}_{\mathcal{G}_l}}, \forall g, l$.

*Initialization:* Construct a collection of products of weight manifolds $\mathcal{G}_l$, initialize re-scaling parameters $\mathcal{R}_l^t$ and initialize weights ${\omega_{g,l}^t \in \mathbb{M}_{g,l}}$ with ${\mathcal{I}^l_{g} \subseteq \mathcal{I}_{\mathcal{G}_l}}, \forall g,l$.

*For each iteration $t=1,2,\ldots,T$ and each $\mathcal{G}_l$:*

- ${ {{\rm grad}}\mathcal{L}(\omega_{g,l}^{t}) := {\rm \Pi}_{\omega_{g,l}^t} \Big( {{\rm grad}}_E \; \mathcal{L}(\omega_{g,l}^{t}),\Theta,\mathcal{R}_l^t \Big)}$ (projection of the Euclidean gradient).
- $v_t := h({{\rm grad}}\mathcal{L}(\omega_{g,l}^{t}), r(t,\Theta))$ (movement on the tangent space).
- $\omega_{g,l}^{t+1} := \phi_{\omega_{g,l}^t}( v_t,\mathcal{R}_l^t), \forall \omega_{g,l}^t$ (projection back onto the manifold).

*Output:* A set of estimated weights $\{\omega_{g,l}^T \}_{l=1}^{{L}}, {\forall g}$. \[alg1\]

Optimization on POMs using FG-SGD in DNNs {#G-SGDdetails}
-----------------------------------------

An algorithmic description of our proposed fine-grained SGD (FG-SGD) is given in Algorithm \[alg1\]. At the initialization of FG-SGD, we identify the component weight manifolds $\mathcal{M}_{\iota,l}$ of each product weight manifold $\mathbb{M}_{g,l}$ according to the constraints that will be applied on the weights ${\omega_{\iota} \in \mathcal{M}_{\iota,l}}$ for each $g^{th}$ group at each $l^{th}$ layer[^7]. For $t=1$, each manifold $\mathcal{M}_{\iota,l}$ is scaled by $\Re_{\iota,l}^{t=1}$ using $\lambda_{\iota,l}^{t=1}=1, \forall \iota,l$.
For $t>1$, each $\mathcal{M}_{\iota,l}$ is re-scaled by $\Re_{\iota,l}^{t} \in \mathcal{R}_l^t$ by computing the empirical standard deviation $\lambda_{\iota,l}^{t}$ of the features input to each weight of $\mathcal{M}_{\iota,l}$, where $\mathcal{R}_l^t$ is the set of all re-scaling parameters computed at the $t^{th}$ epoch at each $l^{th}$ layer. When we employ FG-SGD on a product weight manifold $\mathbb{M}_{g,l}$, each weight $\omega_{g,l}^t \in \mathbb{M}_{g,l}$ is moved on $\mathbb{M}_{g,l}$ in the descent direction of the gradient of the loss at each $t^{th}$ step of FG-SGD by the following steps: **Line 5 (Projection of gradients on tangent spaces):** The gradient ${{\rm grad}}_E \; \mathcal{L}(\omega_{g,l}^{t})$, obtained using back-propagation from the upper layer, is projected onto the tangent space ${\mathcal{T}_{\omega^t_{g,l}} \mathbb{M}_{g,l} = \bigtimes \limits _{\iota \in \mathcal{I}_{g}^l} \mathcal{T}_{\omega^t_{\iota,l}} \mathbb{M}_{\iota,l}}$ to compute ${{\rm grad}}\mathcal{L}(\omega_{g,l}^{t})$ at the weight $\omega_{g,l}^{t}$ using the results given in Remark \[remark\], where $\mathcal{T}_{\omega^t_{\iota,l}} \mathbb{M}_{\iota,l}$ is the tangent space at ${\omega^t_{\iota,l}}$ on the component manifold $\mathbb{M}_{\iota,l}$ of $\mathbb{M}_{g,l}$.
**Line 6 (Movement of weights on tangent spaces):** The weight $\omega^t_{g,l}$ is moved on $\mathcal{T}_{\omega^t_{g,l}} \mathbb{M}_{g,l}$ using $$ h({{\rm grad}}\mathcal{L}(\omega_{g,l}^{t}), r(t,\Theta)) = -\frac{r(t,\Theta)}{\mathlcal{r}(\omega_{g,l}^t)}{{\rm grad}}\mathcal{L}(\omega_{g,l}^{t}), \label{eq:steps} $$ where $r(t,\Theta)$ is the learning rate that satisfies $$\sum_{t=0} ^{\infty}r(t,\Theta) = +\infty \; {\rm and} \; \sum_{t=0} ^{\infty} r(t,\Theta)^2 < \infty, \label{eq:rate} $$ and $${\mathlcal{r}(\omega_{g,l}^t) = \max\{ 1,\Gamma_1^t\}^{\frac{1}{2}}}, \label{grad_norm}$$ where ${\Gamma_1^t = (R_{g,l}^{t})^2 \Gamma_2^t}$, ${R_{g,l}^{t} \triangleq \| {{\rm grad}}\mathcal{L}(\omega_{g,l}^{t}) \|_2}$ is computed using , ${\Gamma_2^t = \max \{(2\rho_{g,l}^{t} + R_{g,l}^{t})^2, (1+\mathfrak{c}_{g,l}(\rho_{g,l}^{t} + R_{g,l}^{t}))\} }$, $\mathfrak{c}_{g,l}$ is the sectional curvature of $\mathbb{M}_{g,l}$, and ${\rho_{g,l}^{t} \triangleq \rho(\omega_{g,l}^t, \hat{\omega}_{g,l})} $ is the geodesic distance between $\omega_{g,l}^t$ and a local minimum $\hat{\omega}_{g,l}$ on $\mathbb{M}_{g,l}$. The following result is used for computation of the $\ell_2$ norm of gradients.

[1]{}\[Computation of gradients on tangent spaces\] \[thm\_grads\] The $\ell_2$ norm $\| {{\rm grad}}\mathcal{L}(\omega_{g,l}^{t}) \|_2$ of the gradient ${{\rm grad}}\mathcal{L}(\omega_{g,l}^{t})$ residing on $\mathcal{T}_{\omega^t_{g,l}} \mathbb{M}_{g,l}$ at the $t^{th}$ epoch and the $l^{th}$ layer can be computed by $$\| {{\rm grad}}\mathcal{L}(\omega_{g,l}^{t}) \|_2 = \Big (\sum \limits_{\iota \in \mathcal{I}_{g}^l} {{\rm grad}}\mathcal{L}(\omega_{\iota,l}^{t})^2 \Big)^{\frac{1}{2}}, \label{eq:grad_norm}$$ where ${{\rm grad}}\mathcal{L}(\omega_{\iota,l}^{t})$ is the gradient computed for $\omega_{\iota,l}^{t}$ on the tangent space $\mathcal{T}_{\omega^t_{\iota,l}} \mathbb{M}_{\iota,l}$, $\forall {\iota \in \mathcal{I}^{l}_g}$.
[$\blacksquare$]{} **Line 7 (Projection of moved weights onto product of manifolds):** The moved weight located at $v_t$ is projected onto $\mathbb{M}_{g,l}$ re-scaled by $\mathcal{R}_l^t$ using $\phi_{\omega_{g,l}^t}( v_t,\mathcal{R}_l^t)$ to compute $\omega^{t+1}_{g,l}$, where $\phi_{\omega_{g,l}^t}( v_t,\mathcal{R}_l^t)$ is an exponential map, or a retraction, i.e. an approximation of the exponential map [@absil_retr]. The function $\mathlcal{r}(\omega_{g,l}^t)$ used for computing the step size in is employed as a regularizer to control the change of the gradient ${{\rm grad}}\mathcal{L}(\omega_{g,l}^{t})$ at each step of FG-SGD. This property is examined in the experimental analyses in the supp. mat. For computation of $\mathlcal{r}(\omega_{g,l}^t)$, we use with Theorem \[thm\_grads\]. In FG-SGD, weights residing on each POM are moved and projected jointly on the POMs. This allows us to exploit their interactions through the corresponding gradients while respecting the nonlinear geometry of the manifolds, unlike SGD methods studied in the literature. FG-SGD can consider interactions between component manifolds as well as those between POMs in groups of weights. Employment of and at line 7, and of retractions at line 8, is essential to ensure convergence, as explained next.

Convergence Properties of FG-SGD
--------------------------------

Convergence properties of the proposed FG-SGD used to train DNNs are summarized as follows:

**Convergence to local minima:** The loss function of a non-linear DNN, which employs the proposed FG-SGD, converges to a local minimum, and the corresponding gradient converges to zero almost surely (a.s.). The formal theorem and proof are given in Theorem 2 in the supplemental material.

**Convergence to global minima:** Loss functions of particular DNNs such as linear DNNs, one-hidden-layer CNNs, one-hidden-layer Leaky Relu networks, nonlinear DNNs with specific network structures (e.g.
pyramidal networks), trained using FG-SGD, converge to a global minimum a.s. under mild assumptions on the data (e.g. being Gaussian distributed, normalized, and realizable by DNNs). The formal theorem and proof of this result are given in Corollary 1 in the supp. mat. The proof idea is to use the property that local minima of loss functions of these networks are global minima under these assumptions, by employing the results given in the recent works [@Kawaguchi; @BrutzkusG17; @SDu; @OverParam; @Yun; @Hardt; @Ge; @criticalGlobal; @raghu17a; @nguyen17a].

**An example for adaptive computation of step size:** Suppose that $\mathbb{M}_{\iota}$ are identified by ${n_{\iota} \geq 2}$ dimensional unit spheres, or the spheres scaled by the proposed scaling method. If the step size is computed using with $${\mathlcal{r}(\omega_{g,l}^t) = (\max\{ 1, (R_{g,l}^{t})^2(2+R_{g,l}^{t})^2 \} })^{\frac{1}{2}}, \label{eq:corr1}$$ then the loss function converges to local minima for a generic class of nonlinear DNNs, and to global minima for the DNNs characterized in Corollary 1. The formal theorem and proof of this result are given in Corollary 2 in the supp. mat. We consider analyzing global convergence properties of FG-SGD using different manifolds for a larger class of nonlinear DNNs, relaxing these assumptions and conditions, as future work.

[|C[5.5cm]{}|C[3.50cm]{}|C[5.5cm]{}|]{} **Model** & **Imagenet(Resnet-50)** & **Imagenet(SENet-Resnet-50)**\ Euc. & [ 24.73 $\pm$ 0.32]{} & [ 23.31$\pm$ 0.55]{}\ St & 23.77 $\pm$ 0.27 & 23.09 $\pm$ 0.41\ POMs of St & 23.61 $\pm$ 0.22 & 22.97 $\pm$ 0.29\ PIO (Sp+Ob+St) & 23.04 $\pm$ 0.10 & 22.67 $\pm$ 0.15\ PIO (Sp+Ob+St+Euc.) & [ 22.89 $\pm$ 0.08 ]{}& [ 22.53 $\pm$ 0.11]{}\ (Additional results) & **Imagenet(Resnet-101)** & **Imagenet(SENet-Resnet-101)**\ Euc. & [ 23.15 $\pm$ 0.09]{} & [ 22.38 $\pm$ 0.30]{}\ PIO (Sp+Ob+St) & 22.83 $\pm$ 0.06& 21.93 $\pm$ 0.12\ PIO (Sp+Ob+St+Euc.)
& [ 22.75 $\pm$ 0.02 ]{}& [ 21.76 $\pm$ 0.09]{}\ \[tab:summary\]

Experimental Analyses {#sec:exp}
=====================

We examine the proposed FG-SGD method for training DNNs using different architectures with different configurations on benchmark datasets for image classification tasks. We provide representative results in Table \[tab:summary\] in this main text, and the remaining results in the supp. mat. Implementation details and an analysis of the computational complexity of the proposed methods are given in the supplemental material. We give the classification error (%) of DNNs for the baseline Euclidean (Euc.), the sphere (Sp), the oblique (Ob) and the Stiefel (St) manifold in Table \[tab:summary\]. POMs of St denotes results for weights employed on all input and output channels residing on a POM of St. PIO (*manifolds*) denotes results for collections of POMs of *manifolds* using PIO. Table \[tab:summary\] shows results using the state-of-the-art Squeeze-and-Excitation (SE) blocks [@senet] implemented for Resnets with 50 layers (Resnet-50) on Imagenet. We run the experiments 3 times and report the average performance. We first observe that PIO boosts the performance of the baseline Euc. ([ 24.73$\%$]{}) by $1.84\%$ if sets of weights are employed using Euc., Sp, Ob and St ([ 22.89$\%$]{}). We note that the sets computed for Resnet-50 outperform Resnets with 101 layers ([ 23.15]{}$\%$) by $0.26\%$. SE blocks aim to aggregate channel-wise descriptive statistics (i.e. the mean of convolution outputs) of local descriptors of images to feature maps for each channel. In FG-SGD, we use the standard deviation (std) of features extracted from each batch and the size of receptive fields of units while defining and updating weight manifolds (see Section 3.3 in supp. mat.). Unlike SE blocks, FG-SGD computes statistical and geometric properties for different sets of input and output channels, which are then used by FG-SGD to update the weights. This property helps FG-SGD to further boost the performance.
For instance, we observe that collections of manifolds (23.04$\%$ and [ 22.89$\%$]{} error) outperform SENet-Resnet-50 ([ 23.31]{}$\%$ error). Although FG-SGD estimates the standard deviation using moving averages as utilized in batch normalization [@ICML-2015-IoffeS], SE blocks estimate the statistics using small networks. Therefore, we conjecture that they provide complementary descriptive statistics (mean and std). The experimental results support this claim: the sets implemented in SENet-Resnet-50 further boost the performance, providing [ 22.53$\%$]{} error.

Conclusion and Discussion {#sec:conc}
=========================

We introduced and studied the problem of training CNNs using multiple constraints employed on convolution weights, together with its convergence properties. Following our theoretical results, we proposed the FG-SGD algorithm and adaptive step size estimation methods for optimization on collections of POMs that are identified by the constraints. The experimental results show that our proposed methods can improve the convergence properties and classification performance of CNNs. Overall, the results show that employment of collections of POMs using FG-SGD can boost the performance of various different CNNs on benchmark datasets. As a future research direction, we will investigate how far local minima are from global minima in the search spaces explored by FG-SGD on products of weight manifolds with nonlinear DNNs, together with their convergence rates. We believe that our proposed framework will be useful and inspiring for researchers studying geometric properties of parameter spaces of deep networks, and will improve our understanding of deep feature representations.
Supplemental Material {#supplemental-material .unnumbered}
=====================

Bounding Generalization Errors using Fine-grained Weights
=========================================================

[1]{}\[Bounding norms of weight matrices and generalization errors of DNNs\] \[prp1\] Suppose that the DNNs given in Table \[tab:compareGen\] are trained using weights renormalized by the renormalization method proposed in the main text according to the Frobenius, spectral and column/row-wise norms with reparameterization parameters $\Re_{i,l}^t, \forall i,l,t$ with $\lambda_{i,l}^t \geq 1$. Then, the norms of the renormalized weight matrices are upper bounded by a constant, and the generalization errors of the corresponding DNNs are asymptotically bounded as given in the rightmost column of Table \[tab:compareGen\], denoted by **DNNs** (our proposed reparameterization).

Suppose that the matrices of weights $\omega_{g,l}^i \in \mathbb{R}^{A_{l} \times B_{l}}$ belonging to the $g^{th}$ group of size $|\mathfrak{g}|, {g=1,2,\ldots,G_l}$, $\forall l$, have the same size $A_{l} \times B_{l}$ for simplicity, and let $\sigma(\omega_{g,l}^i)$ denote the top singular value of $\omega_{g,l}^i$. Let $\|\omega^i_{g,l}\|_F$, $\|\omega^i_{g,l}\|_2$, and $\|\omega^i_{g,l}\|_{2 \to 1}$ denote respectively the Frobenius, spectral and $\ell_{2 \to 1}$ norms of the weight $\omega_{g,l}^i$. We note that the matrices of weights $\omega_{g,l}^i$ belonging to the $g^{th}$ group are concatenated by ${\omega_{g,l} = (\omega_{g,l}^1, \omega_{g,l}^2, \ldots,\omega_{g,l}^{|\mathfrak{g}|})}$, ${\forall g = 1,2,\ldots,G_l}$, to perform group-wise operations in DNNs. Thereby, we can employ bounds for the norms of each concatenated matrix in the generalization error bounds given in the leftmost column of Table \[tab:compareGen\], denoted by **DNNs** (bounds on norms), and obtain the bounds given in the rightmost column of Table \[tab:compareGen\], denoted by **DNNs** (our proposed reparameterization).
We compute the norms of the matrices of normalized weights $\omega_{g,l}^i$ belonging to each different manifold in Table \[tab:norms\]. These norms are computed using simple matrix calculus considering the definitions of matrices residing on each manifold given in Table \[tab:manifolds\]. From the calculations given in Table \[tab:norms\], we observe that the maximum norm value that a weight $\omega^i_{g,l}$ belonging to the sphere can achieve is $\mathbb{M}_{sp}(\omega^i_{g,l})= \sigma(\omega_{g,l}^i)$, that of a weight belonging to the Stiefel manifold is $\mathbb{M}_{st}(\omega^i_{g,l})=(B_l)^{1/2}$, and that of a weight belonging to the oblique manifold is ${\mathbb{M}_{ob}(\omega^i_{g,l})=\max\{(B_l)^{1/2},\sigma(\omega_{g,l}^i) \} }$. In our proposed renormalization method, we first normalize each weight matrix such that the norm of the matrix $\omega^i_{g,l}$ can take one of the values $\mathbb{M}_{sp}(\omega^i_{g,l})$, $\mathbb{M}_{st}(\omega^i_{g,l})$ and $\mathbb{M}_{ob}(\omega^i_{g,l})$. Therefore, we need to reparameterize the weight matrices such that the norm of each reparameterized weight is less than 1.0. For this purpose, we need to show that the rescaling of these norm values by $\Re_{i,l}^t$ is upper bounded by 1.0. Weights are rescaled dynamically at each $t^{th}$ epoch of an optimization method proposed to train DNNs using $\Re_{i,l}^t= \frac{\gamma_{i,l}}{\lambda_{i,l}^t}$, where $0 < \gamma_{i,l} < 1.0$ is a geometric scaling parameter and $\lambda_{i,l}^t$ is the standard deviation of the features input to the $i^{th}$ weight in the $g^{th}$ group $\omega_{g,l}^i, \forall i,g$. By assumption, $\lambda_{i,l}^t \geq 1.0, \forall i,t,l$. By definition, $ B_l \gamma_{i,l}^2 \leq 1.0, \forall i,l$. In order to show that $ \sigma(\omega_{g,l}^i) \leq (\gamma_{i,l})^{-1}, \forall i,l$, we apply the Bai-Yin law [@bai1993; @BAI1988166].
Thereby, we conclude that the norms of concatenated weights belonging to the groups given in Table \[tab:compareGen\] are upper bounded by $1$, if the corresponding component weights given in Table \[tab:norms\] are rescaled by $\Re_{i,l}^t, \forall i,l,t$, during training of DNNs. Since the norm of each weight matrix $\omega^i_{g,l}$ is bounded by 1.0, their product over all $g=1,2,\ldots,G_l$ and $\forall l$ is also bounded by 1.0.

[|C[1.5cm]{}|C[1.50cm]{}|C[1.5cm]{}|C[1.5cm]{}|]{} **Norms** & **Sphere** & **Stiefel** & **Oblique**\ \ $\|\omega^i_{g,l}\|_{2}$& $ \sigma(\omega_{g,l}^i)$ & $1.0 $ & $\sigma(\omega_{g,l}^i)$\ \ $\|\omega^i_{g,l}\|_{F}$ & $1.0$ & $(B_l)^{1/2}$ & $(B_l)^{1/2}$\ \ $\|\omega^i_{g,l}\|_{2 \to 1}$ & $1.0$ & $(B_l)^{1/4}$ & $(B_l)^{1/4}$\ \[tab:norms\]

[cc ]{} **Manifolds** & **Definitions**\ \ The Sphere & $ \mathcal{S}(A_l,B_l) = \{ {\omega} \in \mathbb{R}^{A_l \times B_l}: \| \omega \|_F = 1 \}$\ \ The Oblique & $ \mathcal{OB}(A_l,B_l) = \{ {\omega} \in \mathbb{R}^{A_l \times B_l}: \| \omega_b \|_F = 1, \forall b=1,2,\ldots,B_l \}$\ \ The Stiefel & $ St(A_l,B_l) = \{ {\omega} \in \mathbb{R}^{A_l \times B_l}: (\omega^{\rm T} {\omega})= I_{B_l} \}$\ \[tab:manifolds\]

[|C[3.25cm]{}|C[5.6cm]{}|]{} &\ Neyshabur et al. [@Neyshabur15] & $\mathcal{O}\Big( \frac{2^L \prod\limits_{l=1}^{L} \prod\limits_{g=1}^{G_l} \delta_{g,l,F}}{\sqrt{N}} \Big)$\ Bartlett et al. [@NIPS2017204] & $\mathcal{\tilde{O}} \Bigg( \frac{\prod\limits _{l=1} ^L \prod\limits _{g=1} ^{G_l} \delta_{g,l,2}}{\sqrt{N}} \Big( \sum \limits_{l=1} ^L \prod \limits _{g=1}^{G_l} (\frac{\delta_{g,l,2 \to 1}}{\delta_{g,l,2}})^{\frac{2}{3}} \Big) ^{\frac{3}{2}} \Bigg)$\ Neyshabur et al.
[@neyshabur2018a] & $\mathcal{\tilde{O}} \Bigg ( \frac{\prod \limits_{l=1}^{L} \prod \limits_{g=1}^{G_l}\delta_{g,l,2}} {\sqrt{N}} \sqrt{L^2 \varpi \sum\limits_{l=1} ^L \prod \limits_{g=1}^{G_l}\frac{\delta^2_{g,l,F}}{\delta^2_{g,l,2} } } \Bigg)$\ \[tab:compareGen\]

Proofs of Theorems given in the Main Text
=========================================

Let $\mathfrak{X}({\mathcal{M}_{\iota,l}})$ denote the set of smooth vector fields on ${\mathcal{M}_{\iota,l}}$. The sectional curvature of ${\mathcal{M}_{\iota,l}}$ associated with a two-dimensional subspace $\mathfrak{T} \subset \mathcal{T}_{{\omega_{\iota}}}{\mathcal{M}_{\iota,l}}$ is defined by $$\mathfrak{c}_{\iota} = \frac{\left\langle {\mathcal{C}_{\iota}}(X_{{\omega_{\iota}}},Y_{{\omega_{\iota}}})Y_{{\omega_{\iota}}},X_{{\omega_{\iota}}} \right\rangle}{\left\langle X_{{\omega_{\iota}}} , X_{{\omega_{\iota}}} \right\rangle \left\langle Y_{{\omega_{\iota}}} , Y_{{\omega_{\iota}}} \right\rangle - \left\langle X_{{\omega_{\iota}}} ,Y_{{\omega_{\iota}}} \right\rangle^2},$$ where ${\mathcal{C}_{\iota}}(X_{{\omega_{\iota}}},Y_{{\omega_{\iota}}})Y_{{\omega_{\iota}}}$ is the Riemannian curvature tensor, $\left\langle \cdot,\cdot \right\rangle$ is an inner product, and ${X_{{\omega_{\iota}}} \in \mathfrak{X}({\mathcal{M}_{\iota,l}})}$ and ${Y_{{\omega_{\iota}}} \in \mathfrak{X}({\mathcal{M}_{\iota,l}})}$ form a basis of $\mathfrak{T}$.[$\blacksquare$]{}

Let $\mathfrak{X}({\mathcal{M}_{\iota,l}})$ denote the set of smooth vector fields on ${\mathcal{M}_{\iota,l}}$ and $\mathfrak{F}({\mathcal{M}_{\iota,l}})$ denote the set of smooth scalar fields on ${\mathcal{M}_{\iota,l}}$. The Riemannian connection $\bar{\nabla}$ on ${\mathcal{M}_{\iota,l}}$ is a mapping [@absil_retr] $$\bar{\nabla}: \mathfrak{X}({\mathcal{M}_{\iota,l}}) \times \mathfrak{X}({\mathcal{M}_{\iota,l}}) \to \mathfrak{X}({\mathcal{M}_{\iota,l}}): (X_{{\omega_{\iota}}}, Y_{{\omega_{\iota}}} ) \mapsto \bar{\nabla}_{X_{{\omega_{\iota}}}}Y_{{\omega_{\iota}}},$$ which
satisfies the following properties:

1. $\bar{\nabla}_{pX_{{\omega_{\iota}}}+qY_{{\omega_{\iota}}}} Z_{{\omega_{\iota}}} = p\bar{\nabla}_{X_{{\omega_{\iota}}}} Z_{{\omega_{\iota}}} + q \bar{\nabla}_{Y_{{\omega_{\iota}}}} Z_{{\omega_{\iota}}}$,

2. $\bar{\nabla}_{X_{{\omega_{\iota}}}}(\alpha Y_{{\omega_{\iota}}} + \beta Z_{{\omega_{\iota}}}) = \alpha \bar{\nabla}_{X_{{\omega_{\iota}}}}Y_{{\omega_{\iota}}} + \beta \bar{\nabla}_{X_{{\omega_{\iota}}}}Z_{{\omega_{\iota}}}$,

3. $ \bar{\nabla}_{X_{{\omega_{\iota}}}}(pY_{{\omega_{\iota}}} ) = (X_{{\omega_{\iota}}}p)Y_{{\omega_{\iota}}} + p\bar{\nabla}_{X_{{\omega_{\iota}}}}Y_{{\omega_{\iota}}}$,

4. $\bar{\nabla}_{X_{{\omega_{\iota}}}} Y_{{\omega_{\iota}}} - \bar{\nabla}_{Y_{{\omega_{\iota}}}} X_{{\omega_{\iota}}} = [X_{{\omega_{\iota}}},Y_{{\omega_{\iota}}}] $, and

5. $Z_{{\omega_{\iota}}} \left\langle X_{{\omega_{\iota}}},Y_{{\omega_{\iota}}} \right\rangle = \left\langle \bar{\nabla}_{Z_{{\omega_{\iota}}}}X_{{\omega_{\iota}}}, Y_{{\omega_{\iota}}}\right\rangle + \left\langle X_{{\omega_{\iota}}}, \bar{\nabla}_{Z_{{\omega_{\iota}}}} Y_{{\omega_{\iota}}} \right\rangle$,

where $X_{{\omega_{\iota}}}, Y_{{\omega_{\iota}}}, Z_{{\omega_{\iota}}} \in \mathfrak{X}({\mathcal{M}_{\iota,l}})$, $p, q \in \mathfrak{F}({\mathcal{M}_{\iota,l}})$, $\alpha, \beta \in \mathbb{R}$, $\left\langle \cdot,\cdot \right\rangle$ is an inner product, and $[X_{{\omega_{\iota}}},Y_{{\omega_{\iota}}}]$ is the Lie bracket of $X_{{\omega_{\iota}}}$ and $Y_{{\omega_{\iota}}}$, defined by ${[X_{{\omega_{\iota}}}, Y_{{\omega_{\iota}}}]p = X_{{\omega_{\iota}}}(Y_{{\omega_{\iota}}}p) - Y_{{\omega_{\iota}}}(X_{{\omega_{\iota}}}p)}$, $\forall p \in \mathfrak{F}({\mathcal{M}_{\iota,l}})$.
[1]{}\[Metric and curvature properties of POMs\] \[lemma11\] Suppose that $u_{\iota} \in \mathcal{T}_{\omega_{\iota}} \mathcal{M}_{\iota}$ and $v_{\iota} \in \mathcal{T}_{\omega_{\iota}} \mathcal{M}_{\iota}$ are tangent vectors belonging to the tangent space $\mathcal{T}_{\omega_{\iota}} \mathcal{M}_{\iota}$ computed at ${{\omega_{\iota}} \in \mathcal{M}_{\iota}}$, $\forall \iota \in \mathcal{I}_{{G}_l}$. Then, tangent vectors $u_{G_l} \in \mathcal{T}_{\omega_{G_l}} \mathbb{M}_{G_l}$ and $v_{G_l} \in \mathcal{T}_{\omega_{G_l}} \mathbb{M}_{G_l}$ are computed at $\omega_{G_l} \in \mathbb{M}_{G_l}$ by concatenation as $u_{G_l} = (u_1, u_2, \cdots, u_{|\mathcal{I}_{{G}_l}|})$ and $v_{G_l} = (v_1, v_2, \cdots, v_{|\mathcal{I}_{{G}_l}|})$. If each weight manifold $\mathcal{M}_{\iota}$ is endowed with a Riemannian metric $\mathfrak{d}_{\iota}$, then a $G_l$-POM is endowed with the metric $\mathfrak{d}_{G_l}$ computed by $$\mathfrak{d}_{G_l} ( u_{G_l} , v_{G_l} ) = \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} \mathfrak{d}_{\iota}(u_{\iota},v_{\iota}). \label{eq:prod_metric}$$ In addition, suppose that $\bar{C}_{\iota}$ is the Riemannian curvature tensor field (endomorphism) [@lee2009manifolds] of $\mathcal{M}_{\iota}$, ${x_{\iota}, y_{\iota} \in \mathcal{T}_{\omega_{\iota}} \mathcal{M}_{\iota}}$, $\forall \iota \in \mathcal{I}_{{G}_l}$ defined by $$\bar{C}_{\iota}(u_{\iota},v_{\iota},x_{\iota},y_{\iota}) = \left\langle {C}_{\iota} (U,V)X,Y \right\rangle_{{\omega_{\iota}}}, \label{eq:R_tensor}$$ where $U,V,X,Y$ are vector fields such that $U_{{\omega_{\iota}}} = u_{\iota} $, $V_{{\omega_{\iota}}} = v_{\iota} $, $X_{{\omega_{\iota}}} = x_{\iota} $, and $Y_{{\omega_{\iota}}} = y_{\iota} $. 
Then, the Riemannian curvature tensor field $\bar{C}_{G_l} $ of $\mathbb{M}_{G_l}$ is computed by $$\bar{C}_{G_l} ( u_{G_l} , v_{G_l}, x_{G_l} , y_{G_l} ) = \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} \bar{C}_{\iota}(u_{\iota},v_{\iota},x_{\iota},y_{\iota}), \label{eq:curv_tensor}$$ where ${x_{G_l} = (x_1, x_2, \cdots, x_{|\mathcal{I}_{{G}_l}|})}$ and $y_{G_l} = (y_1, y_2, \cdots, y_{|\mathcal{I}_{{G}_l}|})$. Moreover, $\mathbb{M}_{G_l}$ never has strictly positive sectional curvature $\mathfrak{c}_{G_l}$ in the metric . In addition, if $\mathbb{M}_{G_l}$ is compact, then $\mathbb{M}_{G_l}$ does not admit a metric with negative sectional curvature $\mathfrak{c}_{G_l}$. [$\blacksquare$]{}

Since each weight manifold ${\mathcal{M}_{\iota,l}}$ is a Riemannian manifold, $\mathfrak{d}_{\iota}$ is a Riemannian metric such that ${\mathfrak{d}_{\iota}(u_{\iota}, v_{\iota}) = \left\langle u_{\iota}, v_{\iota} \right\rangle }$. Thereby, $$\mathfrak{d}_{G_l} ( u_{G_l} , v_{G_l} ) = \left\langle u_{G_l}, v_{G_l} \right\rangle = \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} \left\langle u_{\iota}, v_{\iota} \right\rangle = \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} \mathfrak{d}_{\iota}(u_{\iota},v_{\iota}), \label{eq:prod_metric2}$$ and we obtain . In order to derive , we first compute $$\left\langle \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} u_{\iota}, \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} v_{\iota} \right\rangle = \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} \left\langle u_{\iota}, v_{\iota} \right\rangle . \label{eq:rm1}$$ Then, we use the componentwise identity for the Lie bracket, $$\left[ \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} u_{\iota}, \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} v_{\iota} \right ] = \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} \left[ u_{\iota}, v_{\iota} \right] .
\label{eq:rm2}$$ Next, we employ Koszul’s formula [@lee2009manifolds], $$2 \left\langle \bar{\nabla}_{u_{\iota}} v_{\iota} , x_{\iota} \right\rangle = u_{\iota} \left\langle v_{\iota} , x_{\iota} \right\rangle + v_{\iota} \left\langle x_{\iota} , u_{\iota} \right\rangle \nonumber - x_{\iota} \left\langle u_{\iota} , v_{\iota} \right\rangle + \left\langle x_{\iota} , [u_{\iota} , v_{\iota} ]\right\rangle \nonumber - \left\langle v_{\iota} , [u_{\iota} , x_{\iota} ]\right\rangle - \left\langle u_{\iota} , [v_{\iota} , x_{\iota} ]\right\rangle,$$ such that $$\bar{\nabla}_{ \bar{u}} ( \bar{v} ) = \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} \bar{\nabla}_{u_{\iota}} (v_{\iota} ), \label{eq:rm3}$$ where $\bar{u} = \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} u_{\iota} $ and $\bar{v} = \sum \limits _{\iota \in \mathcal{I}_{{G}_l}} v_{\iota} $. Using and the definition of the curvature with , , , and , we obtain . In order to show that $\mathbb{M}_{G_l}$ never has strictly positive sectional curvature $\mathfrak{c}_{G_l}$ in the metric , it is sufficient to show that some sectional curvatures always vanish. Suppose that $U$ is a vector field on $\mathbb{M}_{G_l}$ along a component weight manifold ${\mathcal{M}_{\iota,l}}$ such that neither a local coordinate $o$ of $\mathcal{M}_{\bar{\iota}}$ nor $\frac{\partial}{\partial o}$ appears in the local coordinate expression of $U$, $\forall \iota \neq \bar{\iota}$, $\bar{\iota} \in \mathcal{I}_{G_l}$. In addition, suppose that $\bar{U}$ is a vector field along $\mathcal{M}_{\bar{\iota}}$. Then, $\bar{\nabla}_{U} \bar{U} = 0$, $\forall \iota, \bar{\iota} \in \mathcal{I}_{G_l}$. By employing , we have $\bar{C}_{\iota}(u_{\iota},v_{\iota},x_{\iota},y_{\iota}) = 0$. Then, we use to obtain $\bar{C}_{G_l} ( u_{G_l} , v_{G_l}, x_{G_l} , y_{G_l} ) = 0$.
Therefore, following the definition of the sectional curvature, for arbitrary vector fields on component manifolds, $\mathbb{M}_{G_l}$ never has strictly positive sectional curvature $\mathfrak{c}_{G_l}$ in the metric . Since $\mathbb{M}_{G_l}$ is a Riemannian manifold, if $\mathbb{M}_{G_l}$ is compact, then $\mathbb{M}_{G_l}$ does not admit a metric with negative sectional curvature $\mathfrak{c}_{G_l}$ by Preissmann’s theorem [@petersen2006riemannian] [^8].

[1]{}\[Computation of gradients on tangent spaces\] \[thm\_grads\] The $\ell_2$ norm $\| {{\rm grad}}\mathcal{L}(\omega_{G^m_l}^{t}) \|_2$ of the gradient ${{\rm grad}}\mathcal{L}(\omega_{G^m_l}^{t})$ residing on $\mathcal{T}_{\omega^t_{G^m_l}} \mathbb{M}_{G^m_l}$ at the $t^{th}$ epoch and the $l^{th}$ layer can be computed by $$\| {{\rm grad}}\mathcal{L}(\omega_{G^m_l}^{t}) \|_2 = \Big (\sum \limits_{\iota \in \mathcal{I}_{G^m_l}} {{\rm grad}}\mathcal{L}(\omega_{l,\iota}^{t})^2 \Big)^{\frac{1}{2}}, \vspace{-0.1cm} \label{eq:grad_norm}$$ where ${{\rm grad}}\mathcal{L}(\omega_{l,\iota}^{t})$ is the gradient computed for the weight $\omega_{l,\iota}^{t}$ on the tangent space $\mathcal{T}_{\omega^t_{\iota,l}} \mathbb{M}_{\iota}$, ${\forall \iota \in \mathcal{I}_{G^m_l}}$. [$\blacksquare$]{}

We use the inner product for the Riemannian metrics $\mathfrak{d}_{G_l} ( {{\rm grad}}\mathcal{L}(\omega_{G^m_l}^{t}) , {{\rm grad}}\mathcal{L}(\omega_{G^m_l}^{t}) )$ and $\mathfrak{d}_{\iota} ( {{\rm grad}}\mathcal{L}(\omega_{l,\iota}^{t}) , {{\rm grad}}\mathcal{L}(\omega_{l,\iota}^{t}) )$ of the manifolds $ \mathbb{M}_{G^m_l}$ and $ \mathbb{M}_{\iota}, \forall \iota$, respectively. By definition of the product manifold, we have $$\begin{aligned} {{\rm grad}}\mathcal{L}(\omega_{G^m_l}^{t}) = \Big ( {{\rm grad}}\mathcal{L}(\omega_{l,1}^{t}), {{\rm grad}}\mathcal{L}(\omega_{l,2}^{t}), \ldots, {{\rm grad}}\mathcal{L}(\omega_{l,|\mathcal{I}_{G_l}|}^{t}) \Big ).
\end{aligned}$$ Thereby, we can apply the bilinearity of the inner product in Lemma 1 and obtain $$\| {{\rm grad}}\mathcal{L}(\omega_{G^m_l}^{t}) \|_2^2 = \Big (\sum \limits_{\iota \in \mathcal{I}_{G^m_l}} {{\rm grad}}\mathcal{L}(\omega_{l,\iota}^{t})^2 \Big), \label{eqtemp}$$ where $\| \cdot \|_2^2$ is the squared $\ell_2$ norm. The result follows by applying the square root to .

[2]{}\[Convergence of the FG-SGD\] \[thm33\] Suppose that there exists a local minimum ${\hat{\omega}_{G_l} \in \mathbb{M}_{G_l}}, \forall {G_l \subseteq \mathcal{G}_l}$, $\forall l$, and $\exists \epsilon>0$ such that $\inf \limits _{\rho_{G_l}^{t} > \epsilon^{\frac{1}{2}}} \left\langle \phi_{\omega_{G_l}^t}(\hat{\omega}_{G_l})^{-1}, \nabla \mathcal{L}(\omega_{G_l}^t) \right\rangle <0$, where $\phi$ is an exponential map or a twice continuously differentiable retraction, and $\langle \cdot,\cdot \rangle$ is the inner product. Then, the loss function and the gradient converge almost surely (a.s.): $\mathcal{L}(\omega^t_{G_l}) \xrightarrow[t \to \infty]{\rm a.s.} \mathcal{L}(\hat{\omega}_{G_l})$ and $\nabla \mathcal{L}(\omega^t_{G_l}) \xrightarrow[t \to \infty]{\rm a.s.} 0$, for each $\mathbb{M}_{G_l}, \forall l$. [$\blacksquare$]{}

In this theorem, we generalize the proof idea of Theorems 4.1 and 4.2 given in [@ooAAAI18], and Theorem 3 given in [@sgdman], to collections of products of embedded weight manifolds (POMs) for training of CNNs. The proof idea is to show that $\rho_{G_l}^{t} \triangleq \rho (\omega_{G_l}^{t},\hat{\omega}_{G_l})$ converges almost surely to $0$ as $t \to \infty$. For this purpose, we first model the change of the gradient on the geodesic $\rho_{G_l}^{t}$ by defining a function $\Psi_t \triangleq \psi((\rho_{G_l}^{t})^2)$ according to the following constraints [@sgdman]:

- $\Psi_t = 0$, for $0 \leq \rho_{G_l}^{t} \leq \sqrt{\epsilon}$.

- $0 < \Psi''_t \leq 2$, for $\sqrt{\epsilon} \leq \rho_{G_l}^{t} \leq \sqrt{\epsilon+1}$.
- $\Psi'_t = 1$, for $\rho_{G_l}^{t} \geq \sqrt{\epsilon+1}$.

Then, we compute gradients and geodesics on collections of POMs using given in Lemma \[lemma11\] by $$\| {{\rm grad}}\mathcal{L}(\omega_{G_l}^{t}) \|_2 = \Big (\sum \limits_{\omega_{l,\iota}^{t} \in \mathbb{M}_{\iota}, \iota \in \mathcal{I}_{G_l}} {{\rm grad}}\mathcal{L}(\omega_{l,\iota}^{t})^2 \Big)^{\frac{1}{2}}$$ and $$\rho_{G_l}^{t} = \Big (\sum \limits_{\omega_{l,\iota}^{t} \in \mathbb{M}_{\iota}, \iota \in \mathcal{I}_{G_l}} \rho (\omega_{l,\iota}^{t},\hat{\omega}_{l,\iota})^2 \Big)^{\frac{1}{2}},$$ where ${\omega^t_{G_l} = (\omega^t_1, \omega^t_2, \cdots, \omega^t_{|\mathcal{I}_{{G}_l}|})}$. We employ a Taylor expansion on $\Psi_t$ [@sgdman; @ooAAAI18], and we obtain $$\Psi_{t+1} - \Psi_t \leq ((\rho_{G_l}^{t+1})^2 - (\rho_{G_l}^{t})^2 ) \Psi'_t + ((\rho_{G_l}^{t+1})^2 - (\rho_{G_l}^{t})^2 ) ^2.$$ In order to compute the difference between $(\rho_{G_l}^{t+1})^2$ and $(\rho_{G_l}^{t})^2$, we employ a Taylor expansion on the geodesics [@sgdman; @ooAAAI18] by $$(\rho_{G_l}^{t+1})^2 - (\rho_{G_l}^{t})^2 \leq \Big (\frac{g(t,\Theta)}{\mathfrak{g}(\omega_{G_l}^t)} \Big)^2 \| {{\rm grad}}\mathcal{L}(\omega_{G_l}^{t}) \|^2 \kappa \nonumber - 2 \left\langle h({{\rm grad}}\mathcal{L}(\omega_{G_l}^{t}), g(t,\Theta)) , \phi_{\omega_{G_l}^t}(\hat{\omega}_{G_l})^{-1} \right\rangle , \label{eq:geod_diff}$$ where ${\hat{\omega}_{G_l} = (\hat{\omega}_1, \hat{\omega}_2, \cdots, \hat{\omega}_{|\mathcal{I}_{{G}_l}|})}$, and $\kappa \leq \Upsilon_1$, where $\Upsilon_1=1+\mathfrak{c}_{G_l}(\rho_{G_l}^{t} + R_{G_l}^{t})$ is an upper bound on the operator norm of half of the Riemannian Hessian of $\rho(\cdot,\hat{\omega}_{G_l})^2$ along the geodesic joining $\omega_{G_l}^{t}$ and $\omega_{G_l}^{t+1}$. In order to explore asymptotic convergence, we define $\Omega_t = \{s_i \}_{i=1}^{t-1}$ to be an increasing sequence of $\sigma$-algebras generated by the samples that are processed before the $t^{th}$ epoch.
Since $s_t$ is independent of $\Omega_t$ and $\omega_{G_l}^t$ is $\Omega_t$-measurable, we have $$\mathbb{E} (h({{\rm grad}}\mathcal{L}(\omega_{G_l}^{t}), g(t,\Theta))^2 \kappa | \Omega_t ) \leq \Big (\frac{g(t,\Theta)}{\mathfrak{g}(\omega_{G_l}^t)} \Big)^2 \mathbb{E} \Big ( (R_{G_l}^{t} )^2 \Upsilon_1 \Big),$$ and $$\mathbb{E}((\rho_{G_l}^{t+1})^2 - (\rho_{G_l}^{t})^2 |\Omega_t ) \leq 2 \frac{g(t,\Theta)}{\mathfrak{g}(\omega_{G_l}^t)} \left\langle \phi_{\omega_{G_l}^t}(\hat{\omega}_{G_l})^{-1}, \nabla \mathcal{L}(\omega_{G_l}^t) \right\rangle + g(t,\Theta)^2.$$ If $\mathfrak{g}(\omega_{G_l}^t) = \max\{ 1,\Gamma_1^t\}^{\frac{1}{2}}$, $\Gamma_1^t = (R_{G_l}^{t})^2 \Gamma_2^t$, ${\Gamma_2^t = \max \{(2\rho_{G_l}^{t} + R_{G_l}^{t})^2, (1+\mathfrak{c}_{G_l}(\rho_{G_l}^{t} + R_{G_l}^{t}))\} }$, then we have $$\mathbb{E} (\Psi_{t+1} - \Psi_t| \Omega_t) \leq \mathbb{E}((\rho_{G_l}^{t+1})^2 - (\rho_{G_l}^{t})^2 |\Omega_t ) \Psi'_t + g(t, \Theta ) ^2$$ and $$\mathbb{E} (\Psi_{t+1} - \Psi_t| \Omega_t) \leq 2 \frac{g(t,\Theta)}{\mathfrak{g}(\omega_{G_l}^t)} \left\langle \phi_{\omega_{G_l}^t}(\hat{\omega}_{G_l})^{-1}, \nabla \mathcal{L}(\omega_{G_l}^t) \right\rangle \Psi'_t + g(t, \Theta ) ^2.$$ Thus, we have $$\mathbb{E} (\Psi_{t+1} - \Psi_t| \Omega_t) \leq 2 g(t, \Theta ) ^2,$$ and $\Psi_t +\sum_{t=0} ^{\infty} g(t,\Theta)^2$ is a positive supermartingale, which converges almost surely. Since $$\sum_{t=0} ^{\infty} \mathbb{E} \big ( [\mathbb{E} (\Psi_{t+1} - \Psi_t| \Omega_t)]^+ \big )\leq \sum_{t=0} ^{\infty} g(t,\Theta)^2 < \infty,$$ we observe that $\Psi_t$ is a quasi-martingale [@sgdman; @ooAAAI18], and thereby we have, almost surely, $$- \sum_{t=0} ^{\infty} \frac{g(t,\Theta)}{\mathfrak{g}(\omega_{G_l}^t)} \left\langle \phi_{\omega_{G_l}^t}(\hat{\omega}_{G_l})^{-1}, \nabla \mathcal{L}(\omega_{G_l}^t) \right\rangle \Psi'_t < \infty.$$ Using properties of quasi-martingales [@fisk], $\Psi_t$ converges almost surely.
In order to show almost sure convergence of $\nabla \mathcal{L}(\omega^t_{G_l}) $ to $0$, we use Theorems 4.1 and 4.2 of [@ooAAAI18]. For this purpose, we need to show that gradients of loss functions are bounded on compact sets of weights. Since $$\inf \limits _{\rho_{G_l}^{t} > \epsilon^{\frac{1}{2}}} \left\langle \phi_{\omega_{G_l}^t}(\hat{\omega}_{G_l})^{-1}, \nabla \mathcal{L}(\omega_{G_l}^t) \right\rangle <0,$$ a weight $\omega_{G_l}^t$ is moved towards $\hat{\omega}_{G_l}$ by the gradient when $\rho_{G_l}^{t} > \epsilon^{\frac{1}{2}}$, where the set $\mathfrak{S}=\{\omega_{G_l}^t : \rho_{G_l}^{t} \leq \epsilon^{\frac{1}{2}} \}$ is a compact set. Since all continuous functions of $\omega_{G_l}^t \in \mathfrak{S}$ are bounded, and the adaptive step size $\mathfrak{g}(\omega_{G_l}^t)$ satisfies $\frac{g(t,\Theta)}{\mathfrak{g}(\omega_{G_l}^t)} \leq g(t,\Theta)$ and $\mathfrak{g}(\omega_{G_l}^t)^2$ dominates $R_{G_l}^t$, we obtain that $\mathbb{E}( R_{G_l}^t)^2 \leq \mathfrak{K}$ for some $\mathfrak{K} > 0$ on a compact set $\mathcal{K}$. Thereby, we can show that the conditions of Theorems 4.1 and 4.2 of [@ooAAAI18] are satisfied. Therefore, we obtain almost sure convergence of $\nabla \mathcal{L}(\omega^t_{G_l}) $ to $0$ by applying Theorems 4.1 and 4.2 in the rest of the proof.

[1]{} \[corr1\] Suppose a DNN has loss functions whose local minima are also global minima. If the DNN is trained using the proposed FG-SGD and weight renormalization methods, then the loss of the DNN converges to a global minimum.

By Theorem 2, the loss function of a DNN which employs the proposed FG-SGD and weight renormalization methods for training converges to a local minimum. If the local minima are global minima for the DNN, then the loss function converges to a global minimum.
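For illustration, the product-manifold gradient norm of Theorem \[thm\_grads\] and the adaptive step-size regularizer used for products of unit spheres (as in Corollary 2 and in the main text) can be sketched as follows. The inputs are hypothetical: in practice $R_{G_l}^t$ would come from back-propagated gradients projected onto the tangent spaces.

```python
import numpy as np

def product_grad_norm(component_grads):
    # Theorem: the gradient norm on a product manifold is the square
    # root of the sum of squared component-gradient norms
    return float(np.sqrt(sum(float(np.sum(g * g)) for g in component_grads)))

def step_regularizer_sphere(R):
    # For products of unit spheres: g(w) = (max{1, R^2 * (2 + R)^2})^(1/2),
    # which leaves small gradients untouched and shrinks large ones
    return max(1.0, R ** 2 * (2.0 + R) ** 2) ** 0.5

R = product_grad_norm([np.array([3.0]), np.array([4.0])])   # sqrt(9 + 16) = 5.0
scale = step_regularizer_sphere(R)                          # sqrt(25 * 49) = 35.0
```

The effective step is then $g(t,\Theta)/\mathfrak{g}(\omega^t)$ times the negative gradient, so the update length stays bounded even when the raw gradient norm $R$ is large.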
[2]{} \[corr34\] Suppose that the $\mathbb{M}_{\iota}$ are identified by ${n_{\iota} \geq 2}$ dimensional unit spheres $\mathbb{S}^{n_{\iota}}$, and $\rho_{G_l}^t \leq \hat{\mathfrak{c}}^{-1}$, where $\hat{\mathfrak{c}}$ is an upper bound on the sectional curvatures of $\mathbb{M}_{G_l}, \forall l$ at $\omega_{G_l}^t \in \mathbb{M}_{G_l}, \forall t$. If the step size is computed using $$h({{\rm grad}}\mathcal{L}(\omega_{G_l}^{t}), g(t,\Theta)) = -\frac{g(t,\Theta)}{\mathfrak{g}(\omega_{G_l}^t)}{{\rm grad}}\mathcal{L}(\omega_{G_l}^{t}), \label{eq:steps} $$ with ${\mathfrak{g}(\omega_{G_l}^t) = (\max\{ 1, (R_{G_l}^{t})^2(2+R_{G_l}^{t})^2 \} })^{\frac{1}{2}}$, then ${\mathcal{L}(\omega^t_{G_l}) \xrightarrow[t \to \infty]{\rm a.s.} \mathcal{L}(\hat{\omega}_{G_l})}$, and ${\nabla \mathcal{L}(\omega^t_{G_l}) \xrightarrow[t \to \infty]{\rm a.s.} 0}$, for each $\mathbb{M}_{G_l}, \forall l$. [$\blacksquare$]{} If $\mathbb{M}_{G_l}$ is a product of ${n_{\iota} \geq 2}$ dimensional unit spheres $\mathbb{S}^{n_{\iota}}$, then $\mathfrak{c}_{G_l} =0$ and $\hat{\mathfrak{c}}=1$ by Lemma \[lemma11\]. Thereby, Theorem \[thm33\] is applied to assure convergence with $\Gamma_1^t = (R_{G_l}^{t})^2(2+R_{G_l}^{t})^2$. Experimental Details ==================== We use three benchmark image classification datasets, namely Cifar-10, Cifar-100 and Imagenet [@Alexnet], for the analysis of convergence properties and performance of CNNs trained using FG-SGD. The Cifar-10 dataset consists of 60000 $32 \times 32$ RGB images (50000 training images and 10000 test images) in 10 classes, with 6000 images per class. The Cifar-100 dataset consists of 100 classes containing 600 images each (500 training images and 100 test images per class). The Imagenet (ILSVRC 2012) dataset consists of 1000 classes of $224 \times 224$ RGB images (1.2 million training images, 100000 test images and 50000 images used for validation). 
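As a small numerical illustration, the adaptive step size of Eq. \[eq:steps\] can be sketched as follows (a minimal NumPy sketch; the gradient vector and the values standing in for $R_{G_l}^t$ and $g(t,\Theta)$ are hypothetical placeholders, not values from the experiments):

```python
import numpy as np

def adaptive_step(grad, R, g_t):
    """Step of Eq. (eq:steps): -(g_t / g_bound) * grad, where
    g_bound = sqrt(max{1, R^2 (2 + R)^2}) as in Corollary 2."""
    g_bound = np.sqrt(max(1.0, R**2 * (2.0 + R)**2))
    return -(g_t / g_bound) * grad

grad = np.array([0.3, -0.4])          # hypothetical Euclidean gradient
step = adaptive_step(grad, R=1.0, g_t=0.1)
# Since g_bound >= 1, the effective rate never exceeds g_t.
assert np.linalg.norm(step) <= 0.1 * np.linalg.norm(grad) + 1e-12
```

Because $\mathfrak{g}(\omega_{G_l}^t) \geq 1$, the effective learning rate is always bounded by $g(t,\Theta)$, which is the property used in the proof above.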
Computational Complexity of Algorithm 1 {#sec:comp} --------------------------------------- Compared to SGD algorithms that use weights belonging to linear weight spaces [@res_net; @nature_deep], the computational complexity of Algorithm 1 is dominated by computation of the maps $\Pi$ and $\phi$ at lines 6 and 9, depending on the structure of the weight manifold used at the $l^{th}$ layer. Concisely, the computational complexity of $\Pi$ is determined by computation of the different norms that identify the manifolds. For instance, for the sphere, we use ${\Pi_{\omega_l^t} \mu_t \triangleq (1- \| \omega_l^t \|_F^2) \mu_t}$. Thereby, for an $A \times A$ weight, the complexity is bounded by $O(A^3)$, where $O(\cdot)$ denotes an asymptotic upper bound [@algo]. Similarly, the computational complexity of $\phi$ depends on the manifold structure. For example, the exponential maps on the sphere and the oblique manifold can be computed using $\sin$ and $\cos$ functions, while that on the Stiefel manifold is a function of the matrix exponential. For computation of the matrix exponential, various numerical approximations with ${O}(\epsilon A^3)$ complexity were proposed for different approximation orders $\epsilon$ [@fisk; @Higham; @kenney; @nineteen]. In contrast, unit-norm matrix normalization is used for computation of retractions on the sphere and the oblique manifold, and the QR decomposition of matrices is computed in ${O}(A^3)$ time [@golub] for retractions on the Stiefel manifold. In addition, the computation time of the maps can be reduced using parallel computation methods. For instance, a rotation method was suggested in [@fast_qr] to compute QR using ${O}(A^2)$ processors in ${O}(A)$ unit time. Therefore, computation of retractions is computationally less complex than that of the exponential maps. 
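For concreteness, the QR-based retraction discussed above can be sketched as follows (a minimal NumPy sketch, not the paper's GPU implementation; the sign convention making the diagonal of $R$ positive is an added assumption to make the factor unique):

```python
import numpy as np

def qr_retraction(omega, v):
    """Retract omega + v back onto the Stiefel manifold by taking
    the Q factor of the QR decomposition (cost O(A^3) for an A x A weight)."""
    q, r = np.linalg.qr(omega + v)
    # Standard convention: flip column signs so diag(R) > 0, which
    # makes the factorization (and hence the retraction) unique.
    q = q * np.sign(np.sign(np.diag(r)) + 0.5)
    return q

rng = np.random.default_rng(0)
omega, _ = np.linalg.qr(rng.standard_normal((4, 3)))   # a point on St(4, 3)
w = qr_retraction(omega, 0.1 * rng.standard_normal((4, 3)))
assert np.allclose(w.T @ w, np.eye(3), atol=1e-10)     # orthonormal columns
```

The assertion checks the defining property of the Stiefel manifold: the retracted weight again has orthonormal columns.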
Since the complexity analysis of these maps is beyond the scope of this work, and they provide the same convergence properties for our proposed algorithm, we used the retractions in the experiments. Implementation details are given in the next section. ### A Discussion on Implementation of Algorithm 1 in Parallel and Distributed Computing Systems In the experiments, algorithms are implemented using GPU and CPU servers consisting of GTX 2070, GTX 1080, GTX-Titan-X, GTX-Titan-Black, Intel i7-5930K, Intel Xeon E5-1650 v3 and E5-2697 v2. Since we used hybrid GPU and CPU servers in the experiments, and a detailed analysis of parallel and distributed computation methods of CNNs is beyond the scope of this work, we report bounds on average running times of SGD algorithms in this section. In the implementation of linear Euclidean SGD methods, we use vectorized computation of weight updates. Therefore, we use large scale matrix computation methods (in some cases, for sparse matrices) to improve running time of the linear Euclidean SGD methods. However, we deal with optimization using batched (small size) dense matrices in the implementation of Algorithm 1 [@964]. Therefore, in order to improve running time of the algorithm, we implemented Algorithm 1 using hybrid CPU-GPU programming paradigms. More precisely, we consider two computation schemes according to matrix/tensor structure of the weights, i.e. geometric structure of weight manifolds. First, we recall that we construct different manifolds of weights ${ \mathcal{W} = \{ \mathbf{W}_{d,l} \in \mathbb{R}^{A_l \times B_l \times C_l} \} _{d=1} ^{D_l}}, \forall l=1,2,\dots,L$, at different layers of an $L$-layer CNN. Then, we implement projections of gradients and retractions at 1. Fully Connected (FC) layers at which we use ${\mathbf{W}_{l}^{fc} \in \mathbb{R}^{C_l \times D_l}}$ with $A_l = B_l =1$, and 2. Convolution (Conv) layers at which we use $\mathbf{W}_{d,l} \in \mathcal{W}$ with $A_l > 1$ and $B_l >1$. 
At the FC layers, we implemented Algorithm 1 on GPUs using Cuda with Cublas and Magma Blas [@dghklty14; @tdb10; @tnld10; @ntd10_vecpar; @ntd10]. In the experimental analyses, the Cublas and Magma Blas implementations of Algorithm 1 (with running time per epoch denoted by $\mathcal{R}^{fc}_M$) attained running times similar to that of linear Euclidean SGD (denoted by $\mathcal{R}^{fc}_E$). For instance, if we train CNNs using the Cifar-100 dataset and one GTX 1080, then we observe $\mathcal{R}^{fc}_M < \mathfrak{a} \mathcal{R}^{fc}_E$, where the overhead factor $\mathfrak{a} > 0$ accounts for the implementation of gradient projections and retractions. The overhead factor $\mathfrak{a}$ also depends on the manifold structure of the weights, such that $\mathfrak{a} < 1.5$ for the sphere, $\mathfrak{a} <2.5$ for the oblique manifold and $\mathfrak{a} < 5$ for the Stiefel manifold. When we implemented a QR decomposition algorithm using the Givens transformation (rotation) [@Brouwer2014; @golub], we obtained a further improvement to $\mathfrak{a} < 4$. In addition, batch size does not significantly affect the running-time overhead as long as the GPU memory is sufficient. The effect of this overhead on the overall training time depends on the structure of the CNNs. For example, we use multiple (6) FC layers in NiNs, whereas we have 2 FC layers in SKs. Therefore, the overhead affects the training time of NiNs more than that of SKs. At the Conv layers, we implemented Algorithm 1 on both GPUs and CPUs. However, the structure of parallelization of projections and maps at the Conv layers is different from that at the FC layers. More precisely, we perform parallel computation either 1) using tensors $\mathbf{W}_{d,l} \in \mathbb{R}^{A_l \times B_l \times C_l}$ for each output $d=1,2,\dots,D_l$, or 2) using matrices ${W}_{c,d,l} \in \mathbb{R}^{A_l \times B_l}$ for each output $d=1,2,\dots,D_l$ and channel $c=1,2,\ldots,C_l$. 
Since there is an I/O bottleneck in the transfer of matrices and tensors between GPUs and CPUs, we used either (1) or (2) according to the output size $D_l$ and channel size $C_l$. For instance, if $C_l > D_l$, then we performed computations on GPUs. Otherwise, we implemented the algorithm on multi-core CPUs. On average, per epoch[^9], the running time of a GPU implementation of Algorithm 1 for case (1), denoted by $\mathcal{R}^{1}_{M,gpu}$, and that of linear Euclidean SGD, denoted by $\mathcal{R}^{1}_{E,gpu}$, are related by $\mathcal{R}^{1}_{E,gpu} < \mathfrak{a} \mathcal{R}^{1}_{M,gpu}$ with $\mathfrak{a} < 3$ for the sphere and the oblique manifold, and $\mathfrak{a} < 6$ for the Stiefel manifold[^10]. The additional computational overhead can be attributed to the additional transmission time and the computation of multi-dimensional transpose operations. Moreover, we observed that the running time of the multi-core CPU implementation of the algorithm, $\mathcal{R}^{1}_{M,cpu}$, is bounded by $\mathcal{R}^{1}_{M,gpu} < \mathfrak{a} \mathcal{R}^{1}_{M,cpu}$ for ${\mathfrak{a} < f(D_l)< 10}$, where $f(\cdot)$ is a function of the number of outputs $D_l$, for all manifolds[^11]. In other words, the difference between running times on CPUs and GPUs is affected by $D_l$ more than by the other parameters $2 \leq A_l \leq 7$, $2 \leq B_l \leq 7$, and $C_l$. This observation can be attributed to the lower overhead between Blas and Cublas implementations of matrix operations for a small number (e.g. $C_l<10^3$) of weight matrices. For the second case, where $C_l>D_l$, we observed that $\mathcal{R}^{1}_{E,gpu} < \mathfrak{a}_1 \mathcal{R}^{1}_{M,cpu} < \mathfrak{a}_2 \mathcal{R}^{1}_{M,gpu}$. 
We observed that ${\mathfrak{a}_1 < \hat{f}(C_l,D_l) < 2}$ and ${\mathfrak{a}_2 < \hat{f}(C_l,D_l) < 5}$, where $\hat{f}(\cdot,\cdot)$ is a function of both $C_l$ and $D_l$, for the sphere, and scales for the other manifolds accordingly, for implementation using one GTX 1080 and E5-2697 v2. Implementation Details of Algorithm 1 ------------------------------------- In this section, we give implementation details of Algorithm 1. ### Identification of Component Kernel Submanifolds of POMs We identify component weight manifolds $\mathcal{M}_{\iota}$ of POMs $\mathbb{M}_{G_l}$ at each $l^{th}$ layer of an $L$-layer CNN, and initialize weights residing in the manifolds considering both the statistical properties of the data and the geometric properties of the weight manifolds. In the experiments, we used the sphere, the oblique manifold and the Stiefel manifold to construct component weight manifolds according to the definitions of the manifolds given in Table \[tab:manifolds\]. [ccc]{} **Manifolds** & **Tangent Spaces** & **Projection of Gradients**\ \ $\mathcal{S}(A_l,B_l)$ & $ T_{\omega} \mathcal{S}(A_l,B_l) = \{ \hat{\omega} \in \mathbb{R}^{A_l \times B_l}: \omega^{\rm T} \hat{\omega} = 0 \}$ & $\Pi_{\omega} \mu = (I-\omega \omega^{\rm T}) \mu$\ \ $\mathcal{OB}(A_l,B_l)$ & $ {T_{\omega} \mathcal{OB}(A_l,B_l) = \{ \hat{\omega} \in \mathbb{R}^{A_l \times B_l}: {\omega}^{\rm T} \hat{\omega} = 0 \}}$ & $ \Pi_{\omega} \mu = \mu - \omega {\rm ddiag} (\omega^{\rm T} \mu)$\ \ $St(A_l,B_l)$ & $T_{\omega} St(A_l,B_l) = \{ \hat{\omega} \in \mathbb{R}^{A_l \times B_l}: { \rm ddiag} (\omega^{\rm T} \hat{\omega})= 0 \}$ & $\Pi_{\omega} \mu = (I - \omega \omega^{\rm T} ) \mu + \omega \varsigma(\omega^{\rm T} \mu)$\ \ \[tab:tangent\_spaces\] [ccc]{} **Manifolds** & **Exponential Maps** & **Retraction**\ \ $\mathcal{S}(A_l,B_l)$ & $ \exp_{\omega}(v) ={\omega}\cos(\|v \|_F) + \frac{v}{\| v \|_F}\sin(\| v \|_F) $ & $\mathfrak{R}_{{\omega}}(v) = \frac{\omega + v}{\| \omega +v \|_F} $\ \ $\mathcal{OB}(A_l,B_l)$ & ${ 
\exp_{\omega} (v) = \omega {\rm ddiag } (\cos(\| v \|_F)) + v {\rm ddiag} ( \frac{\sin(\| v\|_F)}{\| v \|_F} )}$ & $\mathfrak{R}_{{\omega}}(v) = \aleph(\omega+v)$\ \ $St(A_l,B_l)$ & $\exp _{\omega} (v) = [ \omega \; v ] \hat{\exp} \Big ( \begin{bmatrix} \omega ^{\rm T} v & - v^{\rm T} v\\ I & \omega ^{\rm T} v \end{bmatrix} \Big) \begin{bmatrix} I \\ 0 \end{bmatrix} \hat{\exp} (-\omega^{\rm T} v)$ & $\mathfrak{R}_{{\omega}}(v) = \mathcal{Q}_{\mathcal{F}}(\omega+v)$\ \ \[tab:groups\_manifolds\] ### Computation of Gradient Maps, Projections and Retractions used in [Algorithm]{} 1 In this section, we provide the details of the methods used for computation of gradient maps, projections and retractions for different collections of POMs in [Algorithm]{} 1. We denote a vector moved on a tangent space at the $t^{th}$ epoch by $v_t$ (see Line 7 of Algorithm 1). In addition, $\aleph(Z)$ is the unit-norm normalization of each column of a matrix $Z$, and $\mathcal{Q}_{\mathcal{F}}(Z) := Q$ is the $Q$ factor of the QR decomposition $Z=QR$ of $Z$. Definitions of the component manifolds of POMs used in this work are given in Table \[tab:manifolds\]. In Table \[tab:tangent\_spaces\], we provide the tangent spaces and the maps used for orthogonal projection of Euclidean gradients onto the tangent spaces for the manifolds of normalized weights defined in Table \[tab:manifolds\]. Exponential maps and retractions are given in Table \[tab:groups\_manifolds\]. We also note that various other projections, exponential maps and retractions can be computed and used in [Algorithm]{} 1 in addition to those given in the tables. A more detailed discussion of their computation is given in [@oblq; @manopt_book; @absil_retr]. 
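As a sanity check of the formulas in the tables, the sphere case can be sketched in a few lines (a minimal NumPy sketch for a vectorized weight; matrix weights work the same way with the Frobenius norm):

```python
import numpy as np

def project(omega, mu):
    """Projection onto the tangent space of the unit sphere at omega:
    (I - omega omega^T) mu, as in the table of tangent spaces."""
    return mu - omega * (omega @ mu)

def exp_map(omega, v):
    """Exponential map on the sphere: omega cos||v|| + (v/||v||) sin||v||."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return omega
    return omega * np.cos(n) + (v / n) * np.sin(n)

def retract(omega, v):
    """First-order retraction: renormalize omega + v back to the sphere."""
    w = omega + v
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
omega = rng.standard_normal(5); omega /= np.linalg.norm(omega)
v = project(omega, rng.standard_normal(5))
assert abs(omega @ v) < 1e-10                               # v is tangent at omega
assert abs(np.linalg.norm(exp_map(omega, v)) - 1) < 1e-10   # stays on the sphere
assert abs(np.linalg.norm(retract(omega, v)) - 1) < 1e-10   # stays on the sphere
```

Both the exponential map and the cheaper retraction return points on the manifold, which is why either can be used interchangeably in Algorithm 1, as noted above.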
Implementation Details of CNN Architectures used in the Experiments ------------------------------------------------------------------- **Data pre-processing and post-processing:** For the experiments on Cifar-10 and Cifar-100 datasets, we used two standard data augmentation techniques which are horizontal flipping and translation by 4 pixels [@res_net; @SN]. For the experiments on Imagenet dataset, we followed the data augmentation methods suggested in [@res_net]. In addition, we used both the scale and aspect ratio augmentation used in [@go_deeper1]. For color augmentation, we used the photometric distortions [@Howard13] and standard color augmentation [@res_net]. Moreover, we used random sampling of $224 \times 224$ crops or their horizontal flips with the normalized data obtained by subtracting per-pixel mean. In the bottleneck blocks, stride 2 is used for the $A_l=B_l=3$ weights. Moreover, Euclidean gradient decays are employed for all the weights. **Acceleration methods:** In this section, we employed state-of-the-art acceleration methods [@on_mom] modularly in Algorithm 1 for implementation of the CNNs as suggested in the reference works [@res_net; @SN; @ooAAAI18]. In this work, we consider employment of acceleration methods on the ambient Euclidean space and collections of POMs as suggested in [@ooAAAI18]. For this purpose, momentum and Euclidean gradient decay methods are employed on the Euclidean gradient $ {{\rm grad}}_E \; \mathcal{L}(\omega_{g,l}^{t})$ using $\mu_t := q \Big ( {{\rm grad}}_E \; \mathcal{L}(\omega_{g,l}^{t}),\mu_t,\Theta \Big)$. We can employ state-of-the-art acceleration methods [@on_mom] modularly in this step. 
Thus, momentum was employed with the Euclidean gradient decay using $$q \Big ( {{\rm grad}}_E \; \mathcal{L}(\omega_{g,l}^{t}),\mu_t,\Theta \Big) = \theta_{\mu} \mu_t - \theta_E {{\rm grad}}_E \; \mathcal{L}(\omega_{g,l}^{t}), \label{mom_decay} \vspace{-0.1cm}$$ where $\theta_{\mu} \in \Theta$ is the parameter employed on the momentum variable $\mu_t$. We consider $\theta_E \in \Theta$ as the decay parameter for the Euclidean gradient. In the experiments, we used $\theta_{\mu} = \theta_E = 0.9$. **Architectural Details of CNNs:** In the experiments, we used the same hyper-parameters of CNN architectures (e.g. number of channels, layers, weight sizes, stride and padding parameters) and their implementation provided by the authors of the compared works for training of CNNs using our proposed SGD method, for a fair comparison with base-line methods. Differences between the implementations and hyper-parameters are explained below. In other words, we just implemented the SGD algorithm of the provided CNN implementations using our proposed SGD method. More precisely, we used the following implementations for comparison: - RCD and RSD: We used the Residual networks with constant and stochastic depth using the same configuration hyper-parameters (see below for number of weights used in the architectures) and code given in [@SN]. - Residual Networks (Resnets): We re-implemented residual networks with the same configuration and training hyper-parameters (see below for number of weights used in the architectures) given in [@res_net; @ooAAAI18]. - Squeeze-and-Excitation networks implemented for Resnets with 50 layers (SENet-Resnet-50): We re-implemented residual networks with the same configuration and training hyper-parameters (see below for number of weights used in the architectures) given in [@senet]. 
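The momentum step with Euclidean gradient decay in Eq. \[mom\_decay\] above can be sketched as follows (a minimal NumPy sketch; the gradient values are hypothetical, with $\theta_{\mu} = \theta_E = 0.9$ as used in the experiments):

```python
import numpy as np

def momentum_update(grad_E, mu_t, theta_mu=0.9, theta_E=0.9):
    """Momentum with Euclidean gradient decay, Eq. (mom_decay):
    q(grad, mu, Theta) = theta_mu * mu_t - theta_E * grad."""
    return theta_mu * mu_t - theta_E * grad_E

mu = np.zeros(3)
g = np.array([1.0, -2.0, 0.5])     # hypothetical Euclidean gradient
mu = momentum_update(g, mu)        # first step: -theta_E * g
assert np.allclose(mu, -0.9 * g)
```

The resulting $\mu_t$ is then projected onto the tangent space and mapped back to the manifold as in Algorithm 1, so the acceleration remains a modular drop-in step.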
In order to construct collections of weights belonging to four spaces (Euc., Sp, St and Ob) using WSS, we increase the number of weights used in CNNs to 24 and its multiples as follows: $\bullet$ Resnet with 18 Layers (Table 6 in this text): 72 filters at the first and second, 144 filters at the third, 288 filters at the fourth, and 576 filters at the fifth convolution blocks [@res_net]. $\bullet$ Resnet with 44 Layers (Table 7 in this text): 24 filters for 15 layers, 48 filters for 14 layers, 96 filters for 14 layers [@res_net]. $\bullet$ Resnets with constant depth (RCD) and stochastic depth (RSD) with 110 layers (Table 2 in the main text and Table 8 in this text): 24, 48 and 72 filters at the first, second, and the third convolution blocks [@SN]. $\bullet$ Resnet-50 and SENet-Resnet-50 (Table 1 in the main text): Configurations of Resnet-50 and SENet-Resnet-50 are given in Table \[res50\] and Table \[seres50\], respectively. Resnet-50 -- ---------------------------------------------------------------------- Kernel size: $7\times7$, Number of convolution weights: 64, Stride 2 $3 \times 3$ Max Pooling, Stride 2 3 Residual Blocks with the Following Convolution Kernels: 72 convolution weights of size $1 \times 1$ 72 convolution weights of size $3 \times 3$ 264 convolution weights of size $1 \times 1$ 4 Residual Blocks with the Following Convolution Kernels: 144 convolution weights of size $1 \times 1$ 144 convolution weights of size $3 \times 3$ 528 convolution weights of size $1 \times 1$ 6 Residual Blocks with the Following Convolution Kernels: 264 convolution weights of size $1 \times 1$ 264 convolution weights of size $3 \times 3$ 1032 convolution weights of size $1 \times 1$ 3 Residual Blocks with the Following Convolution Kernels: 528 convolution weights of size $1 \times 1$ 528 convolution weights of size $3 \times 3$ 2064 convolution weights of size $1 \times 1$ Global Average Pooling Fully connected layer Softmax \[res50\] Resnet-50 -- 
---------------------------------------------------------------------- Kernel size: $7\times7$, Number of convolution weights: 64, Stride 2 $3 \times 3$ Max Pooling, Stride 2 3 Residual Blocks with the Following Convolution Kernels: 72 convolution weights of size $1 \times 1$ 72 convolution weights of size $3 \times 3$ 264 convolution weights of size $1 \times 1$ Fully connected layer with weights of size $24 \times 264$ 4 Residual Blocks with the Following Convolution Kernels: 144 convolution weights of size $1 \times 1$ 144 convolution weights of size $3 \times 3$ 528 convolution weights of size $1 \times 1$ Fully connected layer with weights of size $48 \times 528$ 6 Residual Blocks with the Following Convolution Kernels: 264 convolution weights of size $1 \times 1$ 264 convolution weights of size $3 \times 3$ 1032 convolution weights of size $1 \times 1$ Fully connected layer with weights of size $72 \times 1032$ 3 Residual Blocks with the Following Convolution Kernels: 528 convolution weights of size $1 \times 1$ 528 convolution weights of size $3 \times 3$ 2064 convolution weights of size $1 \times 1$ Fully connected layer with weights of size $144 \times 2064$ Global Average Pooling Fully connected layer Softmax \[seres50\] **Scaling of weights:** We use $\Re_{l}^t$ for scaling of weights and identification of component weight manifolds of POMs. As we mentioned in the main text, for instance, $\Re_{l}^t$ is computed and used as the radius of the sphere. 
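As a small sketch of this scaling step, a weight can be initialized on a scaled sphere with squared radius $\Re_l^t$ as follows (a minimal NumPy sketch; the weight shape and radius are hypothetical values, not the paper's initialization scheme):

```python
import numpy as np

def init_on_scaled_sphere(shape, radius_sq, rng):
    """Draw a random weight and rescale it so that ||w||_F^2 == radius_sq,
    i.e. w lies on the scaled sphere S_R(A_l, B_l)."""
    w = rng.standard_normal(shape)
    return w * np.sqrt(radius_sq) / np.linalg.norm(w)

rng = np.random.default_rng(0)
w = init_on_scaled_sphere((3, 3), radius_sq=2.0, rng=rng)
assert abs(np.linalg.norm(w)**2 - 2.0) < 1e-10   # constraint ||w||_F^2 = R
```

The same renormalization keeps the weight on the scaled sphere after each update, mirroring the scaled projection and exponential map described in the text.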
More precisely, we initialize weights $\omega \in \mathcal{M}_{\iota}$ that belong to the sphere $\mathcal{M}_{\iota} \equiv \mathcal{S}(A_l,B_l)$ subject to the constraint $\| \omega \|^2_{F} = \Re_{l}^t$ by constructing a scaled sphere $$\mathbb{S}^{A_l B_l-1} \triangleq \mathcal{S}_{\Re_{l}^t}(A_l,B_l) = \{\omega \in \mathbb{R}^{A_l \times B_l} : \| \omega \|^2_{F} = \Re_{l}^t \}.$$ The other manifolds (the oblique and the Stiefel manifolds) are identified, and the weights that belong to them are initialized, analogously, following the aforementioned methods. Then, the projections of gradients, exponential maps and retractions, which are determined according to the manifold structure of the weight spaces (see Table \[tab:tangent\_spaces\] and Table \[tab:groups\_manifolds\]), are updated accordingly by $\Re_{l}^t$. For example, for the scaled sphere $\mathcal{S}_{\Re_{l}^t}(A_l,B_l)$, we compute the projection of gradients by ${(I\Re_{l}^t-\omega \omega^{\rm T})\mu}$, and the exponential map by $$\exp_{\omega}(v) ={\omega}\cos(\|v \|_F \Re_{l}^t) + \Re_{l}^t \frac{v}{\| v \|_F}\sin(\| v \|_F \Re_{l}^t) .$$ Employment of Weight Set Splitting Scheme (WSS) in the Experiments: ------------------------------------------------------------------- Recall that, at each $l^{th}$ layer, we compute a weight ${\omega_{\iota} \triangleq W_{c,d,l}}$, ${c \in \Lambda^l}$, $\Lambda^l=\{1,2,\ldots,C_l\}$, ${d \in O^l}$, ${O^l=\{1,2,\ldots,D_l \}}$. We first choose $\mathfrak{A}$ subsets of indices of input channels ${\Lambda_a \subseteq \Lambda^l}, a=1,2,\ldots,\mathfrak{A}$, and $\mathfrak{B}$ subsets of indices of output channels $O_b \subseteq O^l, b=1,2,\ldots,\mathfrak{B}$, such that $\Lambda^l = \bigcup \limits _{a=1} ^\mathfrak{A} \Lambda_a$ and $O^l = \bigcup \limits _{b=1} ^\mathfrak{B} O_b$. We determine indices of weights belonging to different groups using the following three schemes: 1. 
POMs for input channels (PI): For each $c^{th}$ input channel, we construct $\mathcal{I}_{\mathcal{G}_l} = \bigcup \limits _{c=1} ^{C_l} \mathcal{I}_{{G}_l} ^c $, where ${\mathcal{I}_{{G}_l} ^c = O_b \times \{c\} }$ and the Cartesian product ${O_b \times \{c\}} $ preserves the input channel index, $\forall b,c$ (see Figure \[fig\_block1\]). 2. POMs for output channels (PO): For each $d^{th}$ output channel, we construct $\mathcal{I}_{\mathcal{G}_l} = \bigcup \limits _{d=1} ^{D_l} \mathcal{I}_{{G}_l} ^d $, where ${\mathcal{I}_{{G}_l} ^d = \Lambda_a \times \{d\} }$ and the Cartesian product $\Lambda_a \times \{d\} $ preserves the output channel index, $\forall a,d$ (see Figure \[fig\_block1\]). 3. POMs for input and output channels (PIO): In PIO, we construct $\mathcal{I}_{l}^{a,b} = \mathcal{I}_{l}^{a} \cup \mathcal{I}_{l}^{b}$, where $ \mathcal{I}_{l}^{a} = \{ \Lambda_a \times a \}$, and $ {\mathcal{I}_{l}^{b} = \{ O_b \times b\} }$ such that $\mathcal{I}_{\mathcal{G}_l} = \bigcup \limits _{a=1, b=1} ^{\mathfrak{A},\mathfrak{B}} \mathcal{I}_{l} ^{a,b}$ (see Figure \[fig\_block1\]). ![image](ensemble_PEMs_block_v6.eps) **Illustrative Examples of Employment of PI, PO and PIO** A comparative and illustrative example of PI, PO and PIO is given in Figure \[fig\_block1\]. Suppose that we have a weight tensor of size $3 \times 3 \times 4 \times 6$, where the numbers of input and output channels are $4$ and $6$, respectively. In total, we have ${4 \times 6=24}$ weight matrices of size $3 \times 3$. An example of the construction of a collection of POMs is as follows. 1. PIO: We split the set of 24 weights into 10 subsets. For 6 output channels, we split the set of weights corresponding to 4 input channels into 3 subsets. We choose the sphere (Sp) for [ 2 subsets]{} each containing 3 weights (depicted by [ light blue rectangles]{}), and [ 3 subsets]{} each containing 2 weights (depicted by [ red rectangles]{}). We choose the Stiefel manifold (St) similarly for the remaining subsets. 
Then, our ensemble contains 5 POMs of St and 5 POMs of Sp. 2. PI: For each of the 4 input channels, we split the set of 6 weights associated with the 6 output channels into two subsets of 3 weights. Choosing the sphere (Sp) for the first subset, we construct a POM as a product of 3 Sp. That is, each of the 3 component manifolds ${\mathcal{M}_{\iota}}, {\iota = 1,2,3}$, of the POM is a sphere. Similarly, choosing the Stiefel manifold (St) for the second subset, we construct another POM as a product of 3 St (each of the 3 component manifolds ${\mathcal{M}_{\iota}}, \iota = 1,2,3$, of the second POM is a Stiefel manifold). Thus, at this layer, we construct a collection of 4 POMs of 3 St and 4 POMs of 3 Sp. 3. PO: For each of the 6 output channels, we split the set of 4 weights corresponding to the input channels into two subsets of 2 weights. We choose the Sp for the first subset, and construct a POM as a product of 2 Sp. We choose the St for the second subset, and construct a POM as a product of 2 St. Thereby, we have a collection consisting of 6 POMs of St and 6 POMs of Sp. In the experiments, indices of weights for PI, PO and PIO are randomly selected. An illustration of the selection method is given in Figure \[fig\_random\]. ![image](ensemble_PEMs_mete_v4.pdf){width="5.70in"} **Notation used in the Tables** 1. Sp/Ob/St: Kernels employed on each input and output channel are defined to reside on the sphere, the oblique and the Stiefel manifold, respectively. 2. POMs of Sp/Ob/St: Kernels employed on all input and output channels are defined to reside on a POM of Sp/Ob/St. 3. PI/PO/PIO for POMs of Sp/Ob/St: Ensembles of POMs of Sp/Ob/St are computed using the schemes PI/PO/PIO. 4. Results for Manifold$_1$ + Manifold$_2$: Results are computed for collections of POMs of Manifold$_1$ and Manifold$_2$. 5. Results for Manifold$_1$ + Manifold$_2$ + Manifold$_3$: Results are computed for collections of POMs of Manifold$_1$, Manifold$_2$ and Manifold$_3$. 6. 
Results for Manifold$_1$ + Manifold$_2$ + Manifold$_3$ + Manifold$_4$: Results are computed for collections of POMs of Manifold$_1$, Manifold$_2$, Manifold$_3$ and Manifold$_4$. Additional Results ================== Analyses using Resnets with Different Numbers of Layers ------------------------------------------------------ In this subsection, we give additional results for image classification using the Cifar-10 and Imagenet datasets for different networks, such as Resnets with 18 and 44 layers (Resnet-18 and Resnet-44), and 110-layer Resnets with constant depth (RCD) and stochastic depth (RSD) with data augmentation (DA) and without using data augmentation (w/o DA). [C[4.85cm]{} C[2.70cm]{}]{} **Model** & **Top-1 Error (%)**\ Euc. [@ooAAAI18] & 30.59\ Euc. $\dagger$ & [ 30.31]{}\ Sp/Ob/St [@ooAAAI18] & 29.13/28.97/[[28.14]{}]{}\ Sp/Ob/St $\dagger$ & 28.71/28.83/[[28.02]{}]{}\ POMs of Sp/Ob/St & 28.70/28.77/[[28.00]{}]{}\ PI for POMs of Sp/Ob/St & 28.69/28.75/[[27.91]{}]{}\ PI (Euc.+Sp/Euc.+St/Euc.+Ob) & 30.05/29.81/29.88\ PI (Sp+Ob/Sp+St/Ob+St) & 28.61/28.64/28.49\ PI (Sp+Ob+St/Sp+Ob+St+Euc.) & 27.63/27.45\ PO for POMs of Sp/Ob/St & 28.67/28.81/[[27.86]{}]{}\ PO (Euc.+Sp/Euc.+St/Euc.+Ob) & 29.58/29.51/29.90\ PO (Sp+Ob/Sp+St/Ob+St) & 28.23/28.01/28.17\ PO (Sp+Ob+St/Sp+Ob+St+Euc.) & 27.81/27.51\ PIO for POMs of Sp/Ob/St & 28.64/28.72/[[27.83]{}]{}\ PIO (Euc.+Sp/Euc.+St/Euc.+Ob) & 29.19/28.25/28.53\ PIO (Sp+Ob/Sp+St/Ob+St) & 28.14/27.66/27.90\ PIO (Sp+Ob+St/Sp+Ob+St+Euc.) & 27.11/[ 27.07]{}\ \[tab:imagenet\] We give the classification performance of Resnets with 18 layers (Resnet-18) employed on Imagenet in Table \[tab:imagenet\]. The results show that the performance of CNNs is boosted by employing collections of POMs (denoted by PIO for POMs) using FG-SGD compared to the employment of baseline Euc. 
We observe that POMs of component manifolds of identical geometry (denoted by POMs of Sp/St/Ob), and their collections (denoted by PIO for POMs of Sp/St/Ob), provide better performance compared to the employment of individual component manifolds (denoted by Sp/Ob/St) [@ooAAAI18]. For instance, we obtain $28.64\%$, $28.72\%$ and $27.83\%$ error using PIO for POMs of Sp, Ob and St in Table \[tab:imagenet\], respectively. However, the error obtained using Sp, Ob and St is $28.71\%$, $28.83\%$ and $28.02\%$, respectively. We observe a [ $3.24\%$]{} boost by constructing a collection of four manifolds (Sp+Ob+St+Euc.) using the PIO scheme in Table \[tab:imagenet\] ([ $27.07\%$]{}). In other words, collection methods boost the performance of large-scale CNNs more for large-scale datasets (e.g. Imagenet), consisting of larger numbers of samples and classes, compared to the performance of smaller CNNs employed on smaller datasets (e.g. Cifar-10). This result can be attributed to the enhancement of the sets of features learned using multiple constraints. In addition, we obtain a $0.28\%$ and $2.06\%$ boost in performance by collecting St with Euc. ($6.77\%$ and $28.25\%$ using PIO for Euc.+St, respectively) for the experiments on the Cifar-10 and Imagenet datasets using the PIO scheme in Table \[tab:res10\] and Table \[tab:imagenet\], respectively. Moreover, we observe that constructing collections using Ob performs better for PI compared to PO. For instance, we observe that PI for POMs of Ob provides $6.81\%$ and $28.75\%$ while PO for POMs of Ob provides $6.83\%$ and $28.81\%$ in Table \[tab:res10\] and Table \[tab:imagenet\], respectively. We may associate this result with the observation that weights belonging to Ob are used for feature selection and modeling of texture patterns with high performance [@oblq; @oo16]. 
However, collections of St and Sp perform better for PO ($6.59\%$ and $28.01\%$ in Table \[tab:res10\] and Table \[tab:imagenet\]) compared to PI ($6.67\%$ and $28.64\%$ in Table \[tab:res10\] and Table \[tab:imagenet\]) on weights employed on output channels. It is also observed that PIO performs better than PI and PO in all the experiments. [C[4.95cm]{}C[2.5cm]{}]{} **Model** & **Class. Error(%)**\ Euc. [@res_net] & 7.17\ Euc. [@ooAAAI18] & 7.16\ Euc. $\dagger$ & [ 7.05]{}\ Sp/Ob/St [@ooAAAI18] & 6.99/6.89/[[6.81]{}]{}\ Sp/Ob/St $\dagger$ & 6.84/6.87/[ [6.73]{}]{}\ POMs of Sp/Ob/St & 6.81/6.85/[ [6.70]{}]{}\ PI for POMs of Sp/Ob/St & 6.82/6.81/[ [6.70]{}]{}\ PI (Euc.+Sp/Euc.+St/Euc.+Ob) & 6.89/6.84/6.88\ PI (Sp+Ob/Sp+St/Ob+St) & 6.75/6.67/6.59\ PI (Sp+Ob+St/Sp+Ob+St+Euc.) & 6.31/6.34\ PO for POMs of Sp/Ob/St & 6.77/6.83/[ [6.65]{}]{}\ PO (Euc.+Sp/Euc.+St/Euc.+Ob) & 6.85/6.78/6.90\ PO (Sp+Ob/Sp+St/Ob+St) & 6.62/6.59/6.51\ PO (Sp+Ob+St/Sp+Ob+St+Euc.) & 6.35/6.22\ PIO for POMs of Sp/Ob/St & 6.71/6.73/[ [6.61]{}]{}\ PIO (Euc.+Sp/Euc.+St/Euc.+Ob) & 6.95/6.77/6.82\ PIO (Sp+Ob/Sp+St/Ob+St) & 6.21/6.19/6.25\ PIO (Sp+Ob+St/Sp+Ob+St+Euc.) & 5.95/[ 5.92 ]{}\ \[tab:res10\] [C[4.8cm]{}|C[2.750cm]{}|C[2.69cm]{}|]{} & &\ RCD [@DCCN] & 27.22 & 44.74\ (Euc.) 
$\dagger$ &[ 27.01]{} & [ 44.65 ]{}\ Sp/Ob/St ([@ooAAAI18]) & 26.44/25.99/[[25.41]{}]{} & 42.51/42.30/[[40.11]{}]{}\ Sp/Ob/St $\dagger$ & 26.19/25.87/[[25.39]{}]{} & 42.13/42.00/[[39.94]{}]{}\ POMs of Sp/Ob/St & 25.93/25.74/[[25.18]{}]{} & 42.02/42.88/[[39.90]{}]{}\ PIO (Euc.+Sp/Euc.+St/Euc.+Ob) & 25.57/25.49/25.64 & 41.90/41.37/41.85\ PIO (Sp+Ob/Sp+St/Ob+St) & 24.71/24.96/24.76 & 41.49/40.53/40.34\ PIO (Sp+Ob+St/Sp+Ob+St+Euc.) & 23.96/[ 23.79 ]{} & 39.53/ [ 39.35 ]{}\ RSD [@DCCN] & 24.58 & 37.80\ Euc. $\dagger$ & [ 24.39]{} & [ 37.55 ]{}\ Sp/Ob/St [@ooAAAI18] & 23.77/23.81/[[23.16]{}]{} & 36.90/36.47/[[35.92]{}]{}\ Sp/Ob/St $\dagger$ & 23.69/23.75/[[23.09]{}]{} & 36.71/36.38/[[35.85]{}]{}\ POMs of Sp/Ob/St & 23.51/23.60/[[23.85]{}]{} & 36.40/36.11/[[35.53]{}]{}\ PIO (Euc.+Sp/Euc.+St/Euc.+Ob) & 23.69/23.25/23.32 & 35.76/35.55/35.81\ PIO (Sp+Ob/Sp+St/Ob+St) & 22.84/22.91/22.80 & 35.66/35.01/35.35\ PIO (Sp+Ob+St/Sp+Ob+St+Euc.) & 22.19/[ 22.03]{} & 34.49/[ 34.25]{}\ \[tab:rcd100\] [C[4.8cm]{}|C[2.7cm]{}|C[2.69cm]{}|]{} & &\ RCD [@DCCN] & 6.41 & 13.63\ (Euc.) $\dagger$ & [ 6.30]{} &[ 13.57]{}\ Sp/Ob/St ([@ooAAAI18]) & 6.22/6.07/[[5.93]{}]{} & 13.11/12.94/[[12.88]{}]{}\ Sp/Ob/St $\dagger$ & 6.05/6.03/[[5.91]{}]{} & 12.96/12.85/[[12.79]{}]{}\ POMs of Sp/Ob/St & 6.00/6.01/[[5.86]{}]{} & 12.74/12.77/[[12.74]{}]{}\ PIO for POMs of Sp/Ob/St & 5.95/5.91/[[5.83]{}]{} & 12.71/12.72/[[12.69]{}]{}\ PIO (Euc.+Sp/Euc.+St/Euc.+Ob) & 6.03/5.99/6.01 & 12.77/12.21/12.92\ PIO (Sp+Ob/Sp+St/Ob+St) & 5.97/5.86/5.46 & 11.47/11.65/ 11.51\ PIO (Sp+Ob+St/Sp+Ob+St+Euc.) & 5.25/[ 5.17]{} & 11.29/[ 11.15]{}\ RSD [@DCCN] & 5.23 & 11.66\ Euc. 
$\dagger$ & 5.17 & 11.40\
Sp/Ob/St [@ooAAAI18] & 5.20/5.14/4.79 & 10.91/10.93/10.46\
Sp/Ob/St $\dagger$ & 5.08/5.11/4.73 & 10.52/10.66/10.33\
POMs of Sp/Ob/St & 5.05/5.08/4.69 & 10.41/10.54/10.25\
PIO for POMs of Sp/Ob/St & 4.95/5.03/4.62 & 10.37/10.51/10.19\
PIO (Euc.+Sp/Euc.+St/Euc.+Ob) & 5.00/5.08/5.14 & 10.74/10.25/10.93\
PIO (Sp+Ob/Sp+St/Ob+St) & 4.70/4.58/4.90 & 10.13/10.24/10.06\
PIO (Sp+Ob+St/Sp+Ob+St+Euc.) & 4.29/4.31 & 9.52/9.56\
\[tab:rcd\]

**Model** (Cifar-100 with DA, 110 layer RCD) & **Error**\
Euc. $\dagger$ & 27.01 $\pm$ 0.47\
St & 25.39 $\pm$ 0.40\
POMs of St & 25.18 $\pm$ 0.34\
PIO (Sp+Ob+St) & 23.96 $\pm$ 0.28\
PIO (Sp+Ob+St+Euc.) & 23.79 $\pm$ 0.15\
**Model** (Cifar-100 with DA, SENet-Resnet-101, additional results) & **Error**\
Euc. $\dagger$ & 19.93 $\pm$ 0.51\
PIO (Sp+Ob+St) & 18.96 $\pm$ 0.27\
PIO (Sp+Ob+St+Euc.) & 18.54 $\pm$ 0.16\
\[tab:summary\]

In Table \[tab:rcd100\], we analyze the performance of larger CNNs consisting of 110 layers on Cifar-100 with and without using DA. We ran the experiments 10 times and report the average performance. We observe that collections boost the performance of CNNs more in the absence of DA than when DA is used. For instance, the PIO of all manifolds ($39.35\%$) outperforms the baseline ($44.65\%$) by $5.3\%$ without using DA, while the PIO obtained using DA ($23.79\%$) outperforms the baseline ($27.01\%$) by $3.22\%$ for RCD. Additional results for different CNNs using Imagenet and Cifar-10, and a comparison with vanilla network ensembles, are given in this supplemental material.

Comparison with Vanilla Network Ensembles
-----------------------------------------

Our method fundamentally differs from network ensembles.
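Decision-level voting of the kind used in this comparison can be sketched as follows (a minimal Python illustration; the tie-resolution rule here is our assumption, not necessarily that of the ensemble method in [@res_net]):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-model class predictions by majority voting.

    predictions: one list of predicted labels per model, all of the
    same length (one entry per test sample).
    """
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = [model_preds[i] for model_preds in predictions]
        # Counter preserves insertion order and most_common is a
        # stable sort, so the first-seen label wins ties.
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three hypothetical models voting over four test samples.
preds = [
    [0, 1, 1, 2],
    [0, 1, 2, 2],
    [1, 1, 1, 0],
]
print(majority_vote(preds))  # [0, 1, 1, 2]
```

Each CNN in the ensemble votes with its predicted class per sample, and the most frequent label is taken as the ensemble decision.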
In order to analyze the results for network ensembles of CNNs, we employed an ensemble method [@res_net] based on *voting of decisions* of Resnet 44 on Cifar 10. When CNNs trained on the individual Euc, Sp, Ob, and St manifolds are ensembled using voting, we obtained $7.02\%$ (Euc+Sp+Ob+St) and $6.85\%$ (Sp+Ob+St) errors (see Table 1 for comparison). In our analyses of ensembles (PI, PO and PIO), each POM contains $\frac{N_l}{M}$ weights, where $N_l$ is the number of weights used at the $l^{th}$ layer, and $M$ is the number of POMs. When each CNN in the ensemble was trained using an individual manifold which contains $\frac{1}{4}$ of the weights (using $M=4$ as utilized in our experiments), we obtained $11.02\%$ (Euc), $7.76\%$ (Sp), $7.30\%$ (Ob), $7.18\%$ (St), $9.44\%$ (Euc+Sp+Ob+St) and $7.05\%$ (Sp+Ob+St) errors. Thus, our proposed methods outperform ensembles constructed by voting.

Analyses for Larger DNNs with Large Scale Image Datasets
--------------------------------------------------------

We give the results for Cifar-100 obtained using data augmentation (denoted by “with DA”) in Table \[tab:summary\]. The Cifar-100 dataset consists of $5\times 10^4$ training and $10^4$ test images belonging to 100 classes. In Table \[tab:summary\], we provide results using the state-of-the-art Squeeze-and-Excitation (SE) blocks [@senet] implemented for Resnets with 110 layers (Resnet-110) on Cifar-100. We ran the experiments 3 times and report the average performance. In the second set of experiments, we perform separable convolution operations using the proposed weight splitting scheme. We compare the results using various popular separable convolution schemes, such as depth-wise and channel-wise convolution, implemented using state-of-the-art DNNs such as ResNext with 50 layers (ResNext-50) [@resnext], MobileNet v2 with 21 layers (Mobilenet) [@Sandler] and 50-layer Resnets with hierarchical filtering using 4 roots (DeepRoots) [@Ioanno]. The results obtained using PIO with (Sp+Ob+St+Euc.)
with the separable convolution scheme proposed in the corresponding related work are denoted by PIO-SOSE. The results obtained using PIO with (Sp+Ob+St+Euc.) with our proposed WSS are denoted by PIO-SOSE-WSS.

**Model** & **Classification Error**\
Resnext-50 (Euc. [@resnext]) & 22.2\
Resnext-50 (Euc. $\dagger$) & 22.7\
Resnext-50 (Euc.-WSS) & 22.3\
Resnext-50 (PIO-SOSE) & 21.5\
Resnext-50 (PIO-SOSE-WSS) & 21.3\
Mobilenetv2 (Euc. [@Sandler]) & 28.0\
Mobilenetv2 (Euc. $\dagger$) & 27.9\
Mobilenetv2 (Euc.-WSS) & 27.5\
Mobilenetv2 (PIO-SOSE) & 26.8\
Mobilenetv2 (PIO-SOSE-WSS) & 26.4\
DeepRoots (Euc. [@Ioanno]) & 26.6\
DeepRoots (Euc. $\dagger$) & 27.0\
DeepRoots (Euc.-WSS) & 26.6\
DeepRoots (PIO-SOSE) & 25.9\
DeepRoots (PIO-SOSE-WSS) & 25.5\

: Analysis of classification error (%) of state-of-the-art DNNs which employ separable convolutions on the Imagenet dataset. \[tab:sep\]

[^1]: Please see the supplemental material for more precise mathematical definitions and examples.

[^2]: We use shorthand notation for matrix concatenation such that $[W_{c,d,l} ]_{c=1}^{C_l} \triangleq [W_{1,d,l}, W_{2,d,l}, \cdots,W_{C_l,d,l}]$.

[^3]: We ignore the bias terms in the notation for simplicity.

[^4]: In this work, we consider Riemannian manifolds of normalized weights defined in the previous section. Formal definitions are given in the supp. mat.

[^5]: Formal definitions and additional details are given in the supp. mat. to improve readability of the main text.

[^6]: We omit the formal theorem and the proof of this result in this work to focus on our main goal and novelty for optimization with multiple weight manifolds.

[^7]: In the experimental analyses, we use the oblique and the Stiefel manifolds as well as the sphere and the Euclidean space to identify component manifolds $\mathcal{M}_{\iota,l}$. Details are given in the supplemental material.
[^8]: See Theorem 24 in [@petersen2006riemannian].

[^9]: For the example of training using the Cifar-100 dataset given above.

[^10]: For different implementations of QR decomposition on GPUs, we observed $3<\mathfrak{a} < 6.$

[^11]: We observed similar behaviour for the Intel Xeon E5-1650 v3, and obtained an improvement of running time by approximately $f(D_l)<5$ for the E5-2697 v2, which uses a larger number of CPU cores.
--- abstract: 'This paper discusses the current uncertainties in the luminosity calibration of the RR Lyrae variables. The difference in distance moduli between the SMC and LMC as derived from RR Lyrae stars and classical Cepheids is used to estimate a metallicity effect on the Cepheid PL(VI) relation of $0.29\pm 0.11 \rm{mag\, dex^{-1}}$. There is evidence that suggests RR Lyrae variables and type II Cepheids share a common $K$ - $\log P$ relation. Metallicity and age gradients in the LMC are discussed from data on RR Lyrae variables and AGB stars.' author: - | [Michael Feast]{}\ [*Astronomy, Cosmology and Gravitation Centre, Astronomy Dept., University of Cape Town, 7701, Rondebosch, South Africa.\ and, South African Astronomical Observatory, P.O. Box 9, Observatory, 7935, South Africa*]{} title: 'RR Lyraes and Type II Cepheids in the Magellanic Clouds: Distance Scales and Population Gradients' ---

Introduction
============

The aims of the present paper are the following:\
1. To outline briefly the current position on the luminosity calibration of RR Lyrae variables and the future prospects. 2\. To compare the relative luminosities of the RR Lyrae variables in the LMC and SMC with those of the classical Cepheids and to deduce the implied metallicity effect on the Cepheid scale. 3\. To review infrared period-luminosity relations for type II Cepheids and their relation to RR Lyrae variables. 4\. To discuss evidence for a small, but significant, mean metallicity gradient of the RR Lyrae population in the LMC, implying a classical picture for the formation of the LMC halo and suggesting that the metallicity of RR Lyrae variables is correlated with their age, the more metal-poor stars being older. 5\.
To suggest from published data on AGB stars in the LMC, that the oldest stars of this type are dominant in the outer parts of the LMC whilst the main bulk of AGB stars, which are of intermediate age, are more centrally concentrated. Basic Relations for RR Lyrae variables ====================================== In a given globular cluster RR Lyrae variables have roughly the same $V$ magnitude independent of period though there is considerable scatter and this increases with increasing cluster metallicity (e.g. Sandage 1990). There is a long history of attempts to determine how $M_{V}$ depends on metallicity. It is usually expressed in the form $$M_{V} = \alpha [Fe/H] + \beta$$ but it is not clear whether it is linear over the full range of possible values of \[Fe/H\] (see e.g. Feast 1999, McNamara 1999). Probably the best determination of the slope of this relation is from the work of Gratton et al. (2004). They obtained, $$V_{o} = 0.214 (\pm 0.05)([Fe/H] +1.5) + 19.064$$ from LMC data with the values of \[Fe/H\] being determined by a modification of the Preston (1959) method. Whilst this slope agrees well with that found, for instance, in our Galaxy using pulsation parallaxes, a smaller slope ($0.09 \pm 0.03$) was found in the Sculptor dwarf spheroidal (Clementini et al. 2005). These authors suggested that this was due to the Sculptor variables being on average more evolved than those in the LMC. Evolution may also be part of the reason for the spread in these relations (a total spread of $\sim 0.5$mag in the case of the LMC) though a significant amount is likely to be due to the depth of the LMC and Sculptor. Longmore et al. (1986) found that RR Lyraes in globular clusters followed a $K$ (2.2 microns) versus $\log$ period relation. The scatter in this relation at a given metallicity is small. For instance in the case of the Reticulum cluster in the LMC the standard deviation about such a relation is only 0.03mag (Dall’Ora et al. 2004). 
The relation may be written, $$M_{K} =\gamma \log P + \delta [Fe/H] + \phi,$$ where a term has been included for a possible metallicity dependence. Table 1 contains a number of recent estimates of $\gamma$.

Globular clusters (Sollima et al. 2006) & $-2.38 \pm 0.04$\
Reticulum cluster (Dall’Ora et al. 2004) & $-2.16 \pm 0.09$\
LMC Field (Borissova et al. 2009) & $-2.11 \pm 0.17$\
LMC Field (Szewczyk et al. 2008) & $-2.19 \pm 0.40$\
SMC Field (Szewczyk et al. 2009) & $-3.10 \pm 0.49$\
Theory (Bono et al. 2003) & $-2.10$\
Theory (Catelan et al. 2004) & $-2.35$\

The result from Sollima et al. (2006) is the mean from a number of Galactic globular clusters. The scatter about a mean relation is much larger in the LMC and SMC fields than in individual globular clusters (Szewczyk et al. 2008, 2009). This is probably mainly due to the depth of these galaxies, but the range in \[Fe/H\] may also contribute, as well as the lack of full $K$ light curves. An estimate of the coefficient, $\delta$, of the metal term in eq. 3 was made by Sollima et al. (2006) using globular clusters of different metallicities with distances derived from main-sequence fitting. They found $\delta = +0.08 \pm 0.11$. A similar value, $+0.05\pm 0.17$, was obtained by Borissova et al. (2009) from RR Lyraes with known values of \[Fe/H\] in the LMC. These two values are not significantly different from zero but agree within the errors with the theoretical estimates of Bono et al. (2003) (+0.23) and of Catelan et al. (2004) (+0.18).

The SMC-LMC Modulus difference from RR Lyraes and classical Cepheids
====================================================================

Whilst the absolute luminosity scale for classical Cepheids of close to solar metallicity has been fixed, at least at the shorter periods, by trigonometrical parallaxes (Benedict et al. 2007; van Leeuwen et al. 2007), there is still considerable uncertainty in the effects of metallicity on the scale. Matsunaga et al.
(2011) have compiled data on the relative distances of the SMC and LMC and this is relevant to the metallicity issue. Table 2 is based on their work.

& RR Lyraes & Cepheids\
Uncorr. & $0.327 \pm 0.002$ & $0.48 \pm 0.01$\
Corr. & $0.363 \pm 0.04$ &\
$\Delta$\[Fe/H\] & $-0.22$ & $-0.42 \pm 0.15$\

In the case of the RR Lyrae variables the data are from Szewczyk et al. (2008, 2009), based on the PL(K) relations with the difference in mean metallicities which they adopt. The table lists the modulus difference without metallicity correction and the metallicity-corrected value, using a mean of the various estimates of $\delta$ discussed in the last section. Evidently the metallicity correction has rather little effect on the modulus difference unless the metallicity difference between the Clouds and/or the coefficient, $\delta$, in eq. 3, have been grossly underestimated. From these data we adopt a modulus difference of 0.36 for the Clouds. The results in Table 2 for the (classical) Cepheids are for the PL(VI) relations uncorrected for metallicity effects and are from the Appendix of Matsunaga et al. (2011). The Cepheid metallicity difference quoted is based on the spectroscopic determination of iron abundances in both Clouds by Romaniello et al. (2008). To bring the Cepheid modulus difference to the value given by the RR Lyrae variables requires a metallicity correction equivalent to $0.29\pm 0.11 \rm{mag\,dex^{-1}}$. This happens to be in exact agreement with the value derived by Macri et al. (2006) from observations of Cepheids in NGC4258, $0.29 \pm 0.10 \rm {mag\,dex^{-1}}$. The metallicity correction taken from Macri et al. is derived using metallicities of HII regions measured on the empirical scale of Zaritsky et al. (1994).
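The quoted $0.29\,\rm{mag\,dex^{-1}}$ follows directly from the modulus and metallicity differences listed above; a quick numerical check of the arithmetic:

```python
# Quick check of the metallicity correction implied by Table 2.
# Values from the text: the metallicity-corrected RR Lyrae SMC-LMC
# modulus difference, the uncorrected Cepheid PL(VI) difference, and
# the Cepheid [Fe/H] difference from Romaniello et al. (2008).
delta_mu_rr = 0.36         # mag, adopted RR Lyrae modulus difference
delta_mu_cepheid = 0.48    # mag, uncorrected Cepheid modulus difference
delta_feh_cepheid = -0.42  # dex, SMC minus LMC Cepheid iron abundance

# Correction needed to bring the Cepheid difference onto the RR Lyrae value.
correction = (delta_mu_cepheid - delta_mu_rr) / abs(delta_feh_cepheid)
print(f"{correction:.2f} mag/dex")  # 0.29 mag/dex
```

The 0.12 mag discrepancy divided by the 0.42 dex abundance difference reproduces the quoted correction.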
On the $\rm{T_{e}}$ scale of Kennecutt et al. (2003), which has been extensively used in extragalactic work, the correction would be greater ($0.49\pm 0.15 \rm{mag\,dex^{-1}}$) and the corrected Cepheid SMC-LMC modulus difference would be 0.30mag, agreeing less well with the RR Lyraes. Bono et al. (2010) obtained a Cepheid metallicity effect on PL(VI) of $0.03\pm 0.07 \rm{mag\,dex^{-1}}$ on the basis of galaxy distances derived from tip of the RGB magnitudes. This agrees with their theoretical estimate that the metallicity effect in PL(VI) is small. A caveat in the discussion of the SMC-LMC difference is that it depends on the assumption that the distribution of RR Lyraes and Cepheids in each Cloud is such that their mean distances are the same. Provided the metallicity effect of $0.29\pm 0.11 \rm{mag\,dex^{-1}}$ can be extrapolated linearly to higher metallicities, the correction to the LMC modulus based on Cepheids of near solar metallicity is $-0.09 \pm 0.05$ mag. Adopting an uncorrected Cepheid modulus of $18.52\pm 0.03$ from van Leeuwen et al. (2007), the corrected modulus is $18.43\pm 0.06$. Evidently the metallicity correction remains the most significant uncertainty in the Cepheid distance to the LMC.

RR Lyrae Absolute Magnitudes
============================

This section discusses the values of $\beta$ and $\phi$ in the equations, $$M_{V} = 0.21 [Fe/H] +\beta$$ and $$M_{K} = -2.33 \log P + \phi.$$ The value of the coefficient of \[Fe/H\] is from Gratton et al. (2004) (see above) and the coefficient of $\log P$ is one that has been used by various workers and is consistent with the discussion of section 2. There are three methods which have been used to establish RR Lyraes as primary distance indicators: trigonometrical parallaxes, statistical parallaxes and pulsation parallaxes. The available data are summarized in Table 3. The results from trigonometrical parallaxes rely entirely on the HST parallax of RR Lyrae itself (Benedict et al. 2002, Feast et al.
2008)). Other parallaxes for this type of star are too poor to add significantly to the result. The most elaborate study of RR Lyrae statistical parallaxes is that of Popowski and Gould (1998a, b, c). Their results lead to the value of $\beta$ in the table. The value of $\phi$ from statistical parallaxes is from Dambis (2009). There have been a considerable number of determinations of absolute magnitudes of RR Lyrae variables from pulsation parallaxes. The results depend on the models adopted (see, for instance, the discussion by Cacciari & Clementini 2003). The pulsation parallax results of Fernley et al. (1998) lead to the tabulated value of $\beta$ whilst the value of $\phi$ is derived from the data of Jones et al. (1992). A striking feature of Table 3 is that the values of $\beta$ and $\phi$ derived from statistical and pulsation parallaxes agree closely, whilst the trigonometrical parallax result of RR Lyrae itself is discrepant. This is particularly notable in the case of $\phi$. The position is clearly unsatisfactory. Fortunately the preliminary results of the new HST trigonometrical parallax programme (Barnes, this volume) indicate that the calibration will soon be greatly improved. Any remaining difference between the trigonometric and the statistical result would be of considerable interest as it might indicate that the Galactic model used in the statistical work was unsatisfactory. It is clear, for instance from the work of Martin & Morrison (1998), that the mean velocity of halo RR Lyrae variables relative to the Sun in the direction of Galactic rotation is quite sensitive to the absolute magnitudes adopted. A difference between accurate trigonometrical parallaxes and pulsation parallaxes would indicate a need to update RR Lyrae models.

& $\beta$ & $\phi$\
Trig. Par. & $+0.52 \pm (0.11)$ & $-1.22 \pm (0.11)$\
Stat. Par. & $+0.79 \pm 0.13$ & $-0.82 \pm 0.08$\
Puls. Par.
& $+0.73 \pm 0.14$ & $-0.88 \pm 0.06$\

Type II Cepheids
================

Like the RR Lyrae variables, the type II Cepheids belong to both halo and old disc populations. Matsunaga et al. (2006) showed that the type II Cepheids in globular clusters follow well-defined period-luminosity relations in $J,H$ and $K$. This is illustrated in Fig. 1. The figure also shows the period-luminosity relations for this type of variable in the LMC at W(VI) as obtained by Soszyński et al. (2008) and at $K_{s}$ by Matsunaga et al. (2009) for the same stars using the IRSF point source catalogue of the Magellanic Clouds (Kato et al. 2007). Similar results have been obtained for the SMC (Matsunaga et al. 2011). The main difference between the globular cluster results and those for the LMC and SMC is the presence of some stars in the W Vir period range (periods greater than 4 days and less than 20 days) above the period-luminosity relations in the Clouds. Soszyński et al. (2008) show that such stars have distinctive light curves. Also, at the long period end (the RV Tau period range, periods greater than 20 days) most of the stars in the LMC and SMC lie above the period-luminosity relations. Further work is required to see whether there are stars in this period range in the LMC and SMC which are similar to those in globular clusters. In addition there is some suggestion that the slope of the period-luminosity relation at K (omitting stars in the RV Tau range) may vary from system to system (Matsunaga et al. 2011) (see Table 4). Further work on this is required. It might, for instance, indicate a period-dependent metallicity effect. There might also be problems with selection effects at the short period end. The slopes for the type II Cepheid period-luminosity relation in Table 4 are very similar to those given for the RR Lyraes in Table 1. Furthermore, Matsunaga et al.
(2006) showed that, within the uncertainties of relative distance estimation, the RR Lyraes in the globular cluster NGC6341 fitted an extrapolation of the globular cluster type II Cepheid $K$-band period-luminosity relation to shorter periods. It is therefore possible that there is a common period-luminosity relation covering both types of variable.

Globular Clusters & $-2.41 \pm 0.05$\
LMC Field & $-2.28 \pm 0.05$\
SMC Field & $-2.11 \pm 0.10$\

A Metallicity gradient in the LMC RR Lyrae population
=====================================================

Our understanding of the formation and evolution of dwarf galaxies such as the LMC is rather sparse. The RR Lyraes, as representing the oldest populations, are particularly important in this regard. This field has been revolutionized by the work of the OGLE group. The OGLE-III catalogue (Soszyński et al. 2009) lists 17,693 variables of type RRab. Leaving out those that are likely to be blended or foreground or are otherwise dubious, there is a sample of 16,864 RRab stars available for analysis. Feast et al. (2010) have used these data to study the change of mean period with distance ($\rm R_{GC}$) from the centre of the LMC. Fig. 2 shows the results, with mean period converted to mean metallicity using two possible (Galactic) mean period-metallicity relations. The gradient is small but significant. For instance, the upper line in fig. 2 has the equation, $$[Fe/H] = -0.0104(\pm 0.0021)R_{GC} - 1.4213 (\pm 0.0046).$$ It should be noted that fig. 2 simply shows linear, scaled versions of a mean period versus $R_{GC}$ relation, and this relation remains if one chooses to discount the mean relation between period and metallicity. The period gradient, though slight, indicates that the oldest populations in the LMC have a clear structure, with the most metal-poor component showing the greatest extent. This would, for instance, be consistent with the classical picture of formation by the collapse of a gas cloud.
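The fitted gradient can be evaluated at a given radius as follows (an illustrative sketch using only the central coefficients of the upper line in fig. 2; we assume $R_{GC}$ is expressed in kpc):

```python
def feh_gradient(r_gc_kpc, slope=-0.0104, intercept=-1.4213):
    """Mean [Fe/H] of LMC RRab stars as a linear function of
    distance from the LMC centre, using the coefficients of the
    upper line in fig. 2 (one of the two adopted Galactic
    period-metallicity scalings)."""
    return slope * r_gc_kpc + intercept

# Mean metallicity at the centre and at 5 kpc:
print(round(feh_gradient(0.0), 3))  # -1.421
print(round(feh_gradient(5.0), 3))  # -1.473
```

Over 5 kpc the mean metallicity falls by only about 0.05 dex, which is why the gradient, though statistically significant, is described as small.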
An Age Gradient in the AGB star population of the LMC
=====================================================

It is of interest to ask if there are metallicity or age gradients in the LMC for populations other than the RR Lyraes. In the case of the youngest populations, HII regions give only very marginal evidence for a metallicity gradient (Pagel et al. 1978). AGB stars belong to intermediate and old populations. Cioni & Habing (2003) have selected LMC AGB stars (i.e. they are brighter than an adopted RGB tip) and have divided them into probable C and M type stars using DENIS $IJK$ colours. Dr Cioni kindly made the data (32,801 stars) available, and in Fig. 3 the number ratio of C/M stars is plotted as a function of the distance from the LMC centre (Feast et al. 2010). There is some very slight evidence of a small increase in C/M out to $\sim 4$kpc, beyond which there is a steep drop. In the past the C/M ratio has been thought to increase with decreasing metallicity, and this may indeed be the case when comparing systems of about the same age. However it seems quite unlikely that there is a marked increase in metallicity as one moves outwards beyond 4kpc. It seems much more likely that the outer AGB population is dominated by older stars. It should be noted that this decrease in C/M ratio is not affected by the possible inclusion of galactic foreground stars. The density of the selected AGB stars at $R_{GC} \sim 6$kpc, whilst $\sim 20$ times lower than in the centre, is still $\sim 25$ times greater than at 10kpc, where it is still falling.

Conclusions
===========

At present the absolute calibration of RR Lyrae luminosities is in a very unsatisfactory state since the results from statistical parallaxes differ by several tenths of a magnitude from that implied by the trigonometrical parallax of RR Lyrae itself. This matter should be clarified soon by the current parallax work of Benedict et al.
The difference in distance moduli between the SMC and LMC as derived from RR Lyraes and classical Cepheids suggests that the Cepheid scale is metallicity dependent unless there is a difference in the mean distance of the two types of variables in the SMC and/or the LMC. Type II Cepheids show period-luminosity relations in the infrared and in $VI$. The extension of the type II PL(K) relation to shorter periods may well fit the RR Lyrae variables. There is evidence of a small metallicity gradient in the RR Lyrae population of the LMC consistent with the classical picture of LMC formation by collapse of a gas cloud. An age gradient is present in the AGB star population.

Acknowledgements
================

I am grateful to Dr Noriyuki Matsunaga for providing figure 1 and to him and Prof. Patricia Whitelock for discussions on the topics of this paper.

Benedict, G.F., et al. 2002, AJ, 123, 473
Benedict, G.F. et al. 2007, AJ, 133, 1810
Bono, G., Caputo, F., Castellani, V., Marconi, M., Storm, J. & Degl’Innocenti, S., 2003, MNRAS, 344, 1097
Bono, G., Caputo, F., Marconi, M. & Musella, I., 2010, ApJ, 715, 277
Borissova, J., Rejkuba, M., Minniti, D., Catelan, M. & Ivanov, V.D., 2009, A&A, 502, 505
Cacciari, C. & Clementini, G., 2003, in Stellar candles for the extragalactic distance scale, ed. Alloin, D. & Gieren, W., Springer, Berlin, p. 105
Catelan, M., Pritzl, B.J. & Smith, H.A., 2004, ApJS, 154, 633
Cioni, M-R. L. & Habing, H.J., 2003, A&A, 402, 133
Clementini, G., Ripepi, V., Bragaglia, A., Martinez Fiorenzano, A.F., Held, E.V. & Gratton, R.G., 2005, MNRAS, 363, 734
Dall’Ora, M. et al., 2004, ApJ, 610, 269
Dambis, A.K., 2009, MNRAS, 396, 553
Feast, M.W., 1999, in Martínez Roger, C. et al. (eds) Globular Clusters, Cambridge University Press, 251
Feast, M.W., Laney, C.D., Kinman, T.D., van Leeuwen, F. & Whitelock, P.A., 2008, MNRAS, 386, 2115
Feast, M.W., Abedigamba, O.P., & Whitelock, P.A., 2010, MNRAS, 408, L76
Fernley, J. et al.
1998, A&A, 330, 515
Gratton, R.G., Bragaglia, A., Clementini, G., Carretta, E., Di Fabrizio, L., Maio, M. & Taribello, E., 2004, A&A, 421, 937
Jones, R.V., Carney, B.W., Storm, J. & Latham, D.W., 1992, ApJ, 386, 646
Kato, D. et al. 2007, PASJ, 59, 615
Kennecutt, R.C., Bresolin, F. & Garnett, D.R., 2003, ApJ, 591, 801
Longmore, A.J., Fernley, J.A. & Jameson, R.F., 1986, MNRAS, 220, 279
Macri, L.M., Stanek, K.Z., Bersier, D., Greenhill, L.J. & Reid, M.J., 2006, ApJ, 652, 1133
Martin, J.C. & Morrison, H.L., 1998, AJ, 116, 1724
Matsunaga, N. et al., 2006, MNRAS, 370, 1979
Matsunaga, N., Feast, M.W., & Menzies, J.W., 2009, MNRAS, 397, 933
Matsunaga, N., Feast, M.W. & Soszyński, I., 2011, MNRAS, 413, 223
McNamara, D.H., 1999, PASP, 111, 480
Pagel, B.E.J., Edmunds, M.G., Fosbury, R.A.E. & Webster, B.L., 1978, MNRAS, 184, 569
Popowski, P. & Gould, A., 1998a, ApJ, 506, 259
Popowski, P. & Gould, A., 1998b, ApJ, 506, 271
Popowski, P. & Gould, A., 1998c, ApJ, 508, 844
Preston, G.W., 1959, ApJ, 130, 507
Romaniello, M. et al., 2008, A&A, 488, 731
Sandage, A., 1990, ApJ, 350, 603
Sollima, A., Cacciari, C. & Valenti, E., 2006, MNRAS, 372, 1675
Szewczyk, O., et al., 2008, AJ, 136, 272
Szewczyk, O., Pietrzyński, G., Gieren, W., Ciechanowska, A., Bresolin, F., & Kudritzki, R-P., 2009, AJ, 138, 1661
Soszyński, I. et al. 2008, Act.Ast., 58, 293
Soszyński, I. et al. 2009, Act.Ast., 59, 1
van Leeuwen, F., Feast, M.W., Whitelock, P.A. & Laney, C.D., 2007, MNRAS, 379, 723
Zaritsky, D., Kennicutt, R.C. & Huchra, J.P., 1994, ApJ, 420, 87
--- abstract: 'A shortest-path algorithm finds a path containing the minimal cost between two vertices in a graph. A plethora of shortest-path algorithms has been studied in the literature, spanning multiple disciplines. This paper presents a survey of shortest-path algorithms based on a taxonomy that is introduced in the paper. One dimension of this taxonomy is the various flavors of the shortest-path problem. There is no one general algorithm that is capable of solving all variants of the shortest-path problem due to the space and time complexities associated with each algorithm. Other important dimensions of the taxonomy include whether the shortest-path algorithm operates over a static or a dynamic graph, whether the shortest-path algorithm produces exact or approximate answers, and whether the objective of the shortest-path algorithm is to achieve time-dependence or is to only be goal directed. This survey studies and classifies shortest-path algorithms according to the proposed taxonomy. The survey also presents the challenges and proposed solutions associated with each category in the taxonomy.' title: 'A Survey of Shortest-Path Algorithms' ---

- [*Purdue University, West Lafayette, USA*]{}\
- [*Umm Al-Qura University, Makkah, KSA*]{}\

Introduction
============

The shortest-path problem is one of the well-studied topics in computer science, specifically in graph theory. An optimal shortest path is one with the minimum length from a source to a destination. There has been a surge of research in shortest-path algorithms due to the problem’s numerous and diverse applications. These applications include network routing protocols, route planning, traffic control, path finding in social networks, computer games, and transportation systems, to name a few. There are various graph types that shortest-path algorithms consider. A *general graph* is a mathematical object consisting of vertices and edges.
An *aspatial graph* contains vertices whose positions are not interpreted as locations in space. On the other hand, a *spatial graph* contains vertices that have locations in space, referenced through the edges’ end-points. A *planar graph* is plotted in two dimensions with no edges crossing and with continuous edges that need not be straight. There are also various settings in which a shortest path can be identified. For example, the graph can be *static*, where the vertices and the edges do not change over time. In contrast, a graph can be *dynamic*, where vertices and edges can be introduced, updated, or deleted over time. The graph contains either *directed* or *undirected* edges. The weights over the edges can be either *negative* or *non-negative*. The values can be real or integer numbers, depending on the type of problem being addressed. The majority of shortest-path algorithms fall into two broad categories. The first category is *single-source shortest-path* (SSSP), where the objective is to find the shortest-paths from a single source vertex to all other vertices. The second category is *all-pairs shortest-path* (APSP), where the objective is to find the shortest-paths between all pairs of vertices in a graph. The computation of shortest-paths can generate either *exact* or *approximate* solutions. The choice of which algorithm to use depends on the characteristics of the graph and the required application. For example, the objective of approximate shortest-path algorithms is to produce fast answers even in the presence of a large input graph. A special sub-graph, called a *spanner*, which approximates the distances in the main graph, can also be created so that a shortest path can be computed over that sub-graph. Given the large body of literature on algorithms for computing the shortest-path, the objective of this survey is to present a breakdown of these shortest-path algorithms through an appropriate taxonomy.
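As a concrete reference for the SSSP category, Dijkstra's classical algorithm for graphs with non-negative edge weights can be sketched as follows (an illustrative Python implementation, not drawn from any of the surveyed works):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths (SSSP) on a graph with
    non-negative edge weights, given as an adjacency list:
    graph[u] = [(v, weight), ...]. Returns a dict of distances
    for all vertices reachable from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was found earlier
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 2), ("c", 5)],
     "b": [("c", 1), ("d", 4)],
     "c": [("d", 1)],
     "d": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```

With a binary heap this runs in $O((|V|+|E|)\log |V|)$ time; it is the starting point for many of the speedup techniques classified later in the survey.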
The taxonomy aims to help researchers, practitioners, and application developers understand how each shortest-path algorithm works and to help them decide which type or category of shortest-path algorithms to use given a specific scenario or application domain. Figure \[fig:taxonomy\] illustrates the proposed taxonomy where each branch describes a specific category of shortest-path problem. ![Taxonomy of Shortest-Path Algorithms[]{data-label="fig:taxonomy"}](shortestpath-map.eps)

Taxonomy
========

As in Figure \[fig:taxonomy\], the proposed taxonomy classifies the various shortest-path algorithms into multiple high-level branches. The static branch in Figure \[fig:taxonomy\] lists algorithms that operate over graphs with fixed weights for each edge. The weights can denote distance, travel time, cost, or any other weighting criteria. Given that the weights are fixed, some static algorithms perform precomputations over the graph. These algorithms try to achieve a trade-off between the query time and the precomputation and storage requirements. Classical static algorithms fall under two main categories: (1) *single-source* shortest-path (SSSP) and (2) *all-pairs* shortest-path (APSP). The SSSP algorithms compute the shortest-path from a given vertex to all other vertices. The APSP algorithms compute the shortest-paths between all pairs of vertices in the graph. *Hierarchical* algorithms break the shortest-path problem into sub-problems of linear complexity. This can improve computation performance by orders of magnitude. *Goal-directed* algorithms optimize in terms of distance or time toward the target solution. *Distance oracle* algorithms include a preprocessing step to speed up the shortest-path query time; they can be either exact or approximate. The dynamic branch in Figure \[fig:taxonomy\] lists algorithms that process *update* or *query* operations on a graph over time.
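As a representative of the static exact APSP category introduced above, the classical Floyd-Warshall algorithm can be sketched as follows (an illustrative Python implementation, not drawn from any of the surveyed works):

```python
def floyd_warshall(n, edges):
    """All-pairs shortest paths (APSP) on a directed graph with n
    vertices labeled 0..n-1; edges is a list of (u, v, weight)
    triples. Runs in O(n^3) time and handles negative edge weights,
    but assumes no negative cycles."""
    INF = float("inf")
    # dist[i][j] starts as the direct edge weight (or 0 on the diagonal).
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    # Allow each vertex k in turn as an intermediate hop.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

d = floyd_warshall(3, [(0, 1, 4), (1, 2, -2), (0, 2, 5)])
print(d[0])  # [0, 4, 2]
```

The cubic running time is exactly why the dynamic, hierarchical, and oracle-based techniques in the remaining branches trade preprocessing or approximation for faster queries on large graphs.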
The update operation can insert or delete edges from the graph, or update the edge weights. The query operation computes the distance between a source and a destination vertex. Dynamic algorithms include both *APSP* and *SSSP* algorithms. *Time-dependent* algorithms target graphs that change over time in a predictable fashion. *Stochastic* shortest-path algorithms capture the uncertainty associated with the edges by modeling them as random variables. *Parametric* shortest-path algorithms compute a solution for all values of a specific parameter. *Replacement path* algorithms compute, for every edge on the shortest-path between the source vertex and the destination vertex, a shortest-path that avoids that edge. Replacement-path algorithms achieve good performance by reusing computations across the edges they avoid. *Alternative path* algorithms also compute a shortest-path between vertices that avoids a specified edge. The distinguishing factor between the two categories is that replacement paths are not tied to a specific vertex or edge, whereas alternative shortest-paths avoid the specified edge on the shortest-path. Finally, the weighted-regions problem finds the approximate shortest-path over weighted planar subdivisions.

Related Work
============

Zwick's survey [@Zwick2001] adopts a theoretical standpoint with regard to exact and approximate shortest-path algorithms. It addresses single-source shortest-path (SSSP), all-pairs shortest-path (APSP), spanners (a weighted-graph variation), and distance oracles. The survey illustrates the variations that each category adopts when handling negative and non-negative edge weights as well as directed and undirected graphs. Sen [@Sen2009] surveys approximate shortest-path algorithms with a focus on spanners and distance oracles.
Sen’s survey discusses how spanner and distance-oracle algorithms are constructed, and their practical applicability in a static all-pairs shortest-path setting. Sommer [@Sommer2012] surveys query-processing algorithms that trade off index size against query time. Sommer’s survey also introduces the transportation-network class of algorithms, and includes algorithms for general graphs as well as planar and complex graphs. Many surveys focus on algorithms that target traffic applications, especially route-planning methods. In such related work, a network denotes a graph. Holzer et al. [@Holzer2005] classify variations of Dijkstra’s algorithm according to the adopted speedup approaches. Their survey emphasizes techniques that guarantee correctness. It argues that the effectiveness of a speedup technique depends heavily on the type of data, and that the best speedup technique depends on the layout, the available memory, and the tolerable preprocessing time. In contrast to optimal shortest-path algorithms, Fu et al. [@Fu2006] survey heuristic shortest-path algorithms that aim to identify a shortest-path quickly. The aim of heuristic algorithms is to minimize computation time. The survey identifies the main distinguishing features of heuristic algorithms as well as their computational costs. Goldberg [@Goldberg2007] investigates the performance of point-to-point shortest-path algorithms over road networks from a theoretical standpoint. Goldberg reviews algorithms, e.g., Dijkstra’s and $A^*$, and illustrates heuristic techniques for computing the shortest-path given a subset of the graph. The survey proves worst-case and average-case bounds over a graph. It also discusses reach-based pruning and illustrates how all-pairs shortest-path algorithms can be altered to compute reaches while maintaining the same time bounds as their original counterparts.
Delling and Wagner [@Delling2009] survey route-planning speedup techniques for several shortest-path problems, including dynamic and time-dependent variants. For example, the authors argue that shortcuts used in static networks cannot work in a time-dependent network. In essence, they investigate to which networks existing techniques can be adapted. Bast [@Bast2009] illustrates speedup techniques for fast routing in road networks and transportation networks. Bast’s survey argues that the algorithms for the two kinds of networks are different and that each requires specialized speedup techniques. The survey also presents how each speedup technique performs against Dijkstra’s algorithm. Moreover, the survey poses two open questions, namely, (1) how to achieve speedup despite the lack of a hierarchy in transportation networks, and (2) how to efficiently compute local searches, e.g., within neighborhoods. Demetrescu and Italiano [@Demetrescu2006] survey algorithms for fully dynamic directed graphs with emphasis on dynamic shortest-paths and dynamic transitive closures. The survey focuses on defining the relevant algebraic and combinatorial properties as well as tools for dynamic techniques. It tackles two important questions, namely, whether dynamic shortest-path algorithms can achieve a space complexity of $O(n^2)$, and whether the single-source shortest-path problem can be solved efficiently over general graphs in a fully dynamic setting. Nannicini and Liberti [@Nannicini2008] survey techniques for dynamic graph weights and dynamic graph topology. They list classical and recent techniques for finding trees and shortest-paths in large graphs with dynamic weights. They target two versions of the problem, namely, time-dependence and what they refer to as cost updates of the weights. Dean’s survey [@Dean2004] focuses on time-dependent techniques in a dynamic setting.
It surveys one special case, namely, the First-In-First-Out (FIFO) network, as it exposes structural properties that allow for the development of efficient polynomial-time algorithms. The present survey differs from all its predecessors in the following aspects. First, it presents a taxonomy that can aid in identifying the appropriate algorithm to use in a specific setting. Second, within each branch of the taxonomy, the algorithms are presented in chronological order, which captures the evolution of the underlying ideas and algorithms over time. Moreover, our survey is more comprehensive, covering recent algorithms invented after the publication of the other surveys.

Problem Definition
==================

Given a set of vertices $V$, a source vertex $s$, a destination vertex $d$, where $s,d \in V$, and a set of weighted edges $E$ over the set $V$, find the path between $s$ and $d$ that has the minimum total weight. The input to a shortest-path algorithm is a graph $G$ that consists of a set of vertices $V$ and a set of edges $E$. The graph is defined as $G=(V,E)$. The edges can be *directed* or *undirected*. The edges can be weighted, where the weight of an edge $e \in E$ is denoted $w(e)$, or unweighted, where the implicit weight of every edge is 1. When stating algorithm complexities, we refer to the size of the vertex set $V$ as $n$ and the size of the edge set $E$ as $m$.

Static Shortest-Path Algorithms
===============================

In this section, we review algorithms for both the single-source shortest-path (SSSP) and all-pairs shortest-path (APSP) problems.

Single-Source Shortest-Path (SSSP)
----------------------------------

*Definition:* Given a graph $G=(V,E)$ and a source $s \in V$, compute all distances $\delta(s,v)$, where $v \in V$. The simplest case of SSSP is when the graph is unweighted. Cormen et al.
[@Cormen2001] suggest that breadth-first search can simply be employed, starting a scan from a root vertex and inspecting all the neighboring vertices. For each neighboring vertex, it probes the non-visited vertices until the path with the minimum number of edges from the source to the destination vertex is identified. Dijkstra’s algorithm [@Dijkstra1959] solves the single-source shortest-path (SSSP) problem from a given vertex to all other vertices in a graph. Dijkstra’s algorithm is applied to directed graphs with non-negative weights. The algorithm distinguishes two types of vertices: (1) solved and (2) unsolved vertices. It initially marks the source vertex as solved and checks all the edges (leading to unsolved vertices) connected to the solved vertices. Once the algorithm identifies the unsolved vertex with the smallest tentative distance, it adds that vertex to the set of solved vertices. The algorithm iterates until all vertices are solved. Dijkstra’s algorithm achieves a time complexity of $O(n^{2})$. One advantage of the algorithm is that it does not need to investigate all edges. This is particularly useful when the weights on some of the edges are expensive to evaluate. The disadvantages are that the algorithm deals only with non-negative edge weights and that it applies only to static graphs. Dijkstra’s algorithm greedily settles, at each step, the vertex with the minimum tentative distance, and as such is known as a greedy algorithm. Dijkstra’s algorithm can also be viewed as a successive-approximation procedure based on Bellman’s principle of optimality [@Bellman1957]. This implies that Dijkstra’s algorithm solves the dynamic programming equation through a method called the reaching method [@Denardo2003; @Sniedovich2006; @Sniedovich2010]. The advantage of dynamic programming is that it avoids a brute-force search by decomposing the problem into sub-problems.
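The greedy relaxation scheme described above can be sketched as follows. This is a minimal illustration using a binary heap rather than the $O(n^2)$ scan; the adjacency-list input format is an assumption of this sketch, not taken from the original papers.

```python
# Minimal sketch of Dijkstra's greedy SSSP algorithm with a binary heap.
# 'graph' maps each vertex to a list of (neighbor, weight) pairs; all
# weights must be non-negative, as the algorithm requires.
import heapq

def dijkstra(graph, source):
    """Return a dict of shortest distances from source to every reachable vertex."""
    dist = {source: 0}
    heap = [(0, source)]                       # (tentative distance, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):      # stale entry: u was already settled
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                   # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist
```

Each heap pop settles the unsolved vertex with the smallest tentative distance, mirroring the solved/unsolved description above; with a Fibonacci heap the same scheme yields the $O(n \log n + m)$ bound discussed below.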
Dynamic programming algorithms probe an exponentially large set of solutions but avoid examining all possible solutions explicitly. The greedy and the dynamic programming views of Dijkstra’s algorithm are equivalent in terms of finding an optimal solution; however, the two may produce different paths achieving the optimal value. Fredman and Tarjan [@Fredman1987] improve over Dijkstra’s algorithm by using a Fibonacci heap (F-heap). This implementation achieves $O(n \log n + m)$ running time, because the total time incurred for the heap operations is $O(n \log n + m)$ and the remaining operations cost $O(n+m)$. Fredman and Willard [@Fredman1990a; @Fredman1990; @Fredman1993] introduce an $O(m + n \log n/\log\log n)$ variant of Dijkstra’s algorithm through a structure termed the AF-heap. The AF-heap provides constant amortized cost for most heap operations and $O(\log n/\log\log n)$ amortized cost for deletion. Driscoll and Gabow [@Driscoll1988] propose a structure termed the relaxed heap, a binomial queue that allows the heap order to be violated, and use it to obtain a parallel implementation of Dijkstra’s algorithm. Another line of optimization is through improved priority-queue implementations. The implementations of Boas [@Boas1975] and Boas et al. [@Boas1976] are based on a stratified binary tree. The proposed structure enables online manipulation of a priority queue, with a processing time of $O(\log\log n)$ per operation and a storage complexity of $O(n \log\log n)$. A study by Thorup [@Thorup1996] indicates the presence of an analogy between sorting and the SSSP problem, in the sense that SSSP is no harder than sorting edge weights. Thorup [@Thorup1996] describes a priority queue with a complexity of $O(\log\log n)$ per operation, yielding $O(m \log\log n)$ complexity for the SSSP problem. The study examines the complexity of using a priority queue given memory with arbitrary word size.
Following the same analogy, Han [@Han2001] proposes a deterministic integer-sorting algorithm in linear space that yields a time complexity of $O(m \log\log n \log\log\log n)$ for the SSSP problem. The approach by Han [@Han2001] illustrates that sorting arbitrarily large numbers can be reduced to sorting very small integers. Thorup [@Thorup1999] proposes a deterministic linear-space and linear-time algorithm that builds a hierarchical bucketing structure to avoid the sorting operation. A bucketing structure is a dynamic set into which an element can be inserted or deleted, and from which elements can be picked in an unspecified order, as in a doubly-linked list. The algorithm by Thorup [@Thorup1999] works by traversing a component tree. Hagerup [@Hagerup2000] improves over the algorithm of Thorup through a deterministic linear time and space algorithm, achieving a time complexity of $O(n + m \log w)$, where $w$ is the width of the machine word. Bellman, Ford, and Moore [@Bellman1958; @Ford1956; @Moore1957] develop an SSSP algorithm that, unlike Dijkstra’s algorithm, is capable of handling negative weights. It operates in a manner similar to Dijkstra’s, but instead of selecting only the neighboring edge with the shortest distance, it relaxes all the neighboring edges. It then proceeds in $n-1$ passes in order to guarantee that all changes have been propagated through the graph. While Dijkstra’s algorithm provides a faster solution than Bellman-Ford’s algorithm, it is unable to operate with negative weights or detect negative cycles. Note that if a negative cycle is reachable, then no shortest-path can be computed, because traversing the cycle repeatedly lowers the total weight without bound. Bellman-Ford’s algorithm achieves a run-time complexity of $O(nm)$. Its strong points include the ability to operate on negative weights and to detect negative cycles.
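The relaxation scheme just described, including the extra pass used to detect a negative cycle, can be sketched as follows; the edge-list input format is an assumption of this sketch.

```python
# Minimal sketch of the Bellman-Ford scheme: relax every edge in each of
# n-1 passes, then run one extra pass to detect a reachable negative cycle.
def bellman_ford(n, edges, source):
    """edges: list of (u, v, w) with vertices 0..n-1.
    Returns (dist, has_negative_cycle)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                     # n-1 passes propagate all changes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If any edge can still be relaxed, a reachable negative cycle exists.
    has_neg = any(dist[u] + w < dist[v] for u, v, w in edges if dist[u] < INF)
    return dist, has_neg
```

Unlike Dijkstra's greedy settling, every edge is relaxed in every pass, which is what makes negative weights safe at the cost of the $O(nm)$ bound stated above.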
However, its disadvantages include a slower run-time compared to Dijkstra’s algorithm. Also, the basic Bellman-Ford algorithm does not terminate early when further iterations no longer change any distance. Karp [@Karp1978] addresses the issue of deciding whether a graph contains a negative cycle. He defines a concept termed the minimum cycle mean and shows that finding the minimum cycle mean is equivalent to detecting a negative cycle. Karp’s algorithm achieves a time complexity of $O(nm)$. Yen [@Yen1970] proposes two performance modifications over Bellman, Ford, and Moore [@Bellman1958; @Ford1956; @Moore1957]. The first involves the relaxation of edges: an edge is relaxed only if the value of its source vertex has changed. The second modification divides the edges based on a linear ordering over all vertices, partitioning the set of edges into one or more subsets; comparisons are then performed between the subsets according to the proposed partitioning scheme. A slight improvement over Yen [@Yen1970] has been introduced by Bannister and Eppstein [@Bannister2011], who use a random ordering instead of an arbitrary linear ordering. The result is a smaller number of iterations over the subsets.

All-Pairs Shortest-Path (APSP)
------------------------------

*Definition:* Given a graph $G=(V,E)$, compute the distances $\delta(s,v)$ for all pairs of vertices $s, v \in V$. The most general case of APSP is a graph with non-negative edge weights. In this case, Dijkstra’s algorithm can be run separately from each vertex in the graph, with a total time complexity of $O(mn + n^2 \log n)$ [@Karger]. A vast number of algorithms have been proposed that handle real edge weights for the all-pairs shortest-path problem. The Floyd-Warshall algorithm [@Floyd1962; @Warshall1962] finds all-pairs shortest-paths (APSP) in a weighted graph that may contain positive- and negative-weighted edges.
The algorithm can detect the existence of negative-weight cycles, but it does not resolve these cycles. The complexity of the Floyd-Warshall algorithm is $O(n^3)$, where $n$ is the number of vertices. The detection of a negative-weight cycle is done by probing the diagonal of the distance matrix. The basic Floyd-Warshall algorithm reports only distances, not the shortest-paths themselves, because it does not store the intermediate vertices during the computation. However, with a simple update, one can store this information within the algorithm steps. The space complexity of the algorithm is $O(n^{3})$, although it can be reduced to $O(n^{2})$ by using a single displacement array. The strong point of the algorithm is that it can handle negative-weight edges and can detect negative-weight cycles. The main drawback is its $O(n^3)$ running time: running Dijkstra’s algorithm from every vertex (to convert it from SSSP to APSP) costs $O(mn + n^{2} \log n)$, which is asymptotically lower than $O(n^3)$ when $m = o(n^{2})$, i.e., for sparse graphs. Many studies have proposed better running times than the Floyd-Warshall algorithm for *real-valued edge weights*. A notable enhancement has been proposed by Fredman [@Fredman1976] that relies on a matrix-oriented approach. His approach relies on the theorem of Aho and Hopcroft [@Aho1974] that the complexity of multiplying two $N \times N$ matrices in the (min, +) semiring is the same as that of computing shortest-paths. He shows that $O(N^{5/2})$ comparisons suffice to solve the all-pairs shortest-paths (APSP) problem, and the resulting algorithm achieves a complexity of $O(n^3 (\log\log n / \log n)^{1/3})$. Table \[table:realvalue\] summarizes the enhancements proposed for real-valued edges to date.
  Time Complexity                     Author
  ----------------------------------- -----------------------------
  $n^3$                               [@Floyd1962; @Warshall1962]
  $n^3 (\log\log n / \log n)^{1/3}$   [@Fredman1976]
  $n^3 (\log\log n / \log n)^{1/2}$   [@Takaoka1992]
  $n^3 / (\log n)^{1/2}$              [@Dobosiewicz1990]
  $n^3 (\log\log n / \log n)^{5/7}$   [@Han2004]
  $n^3 \log\log n / \log n$           [@Takaoka2004]
  $n^3 (\log\log n)^{1/2} / \log n$   [@Zwick2004]
  $n^3 / \log n$                      [@Chan2006]
  $n^3 (\log\log n / \log n)^{5/4}$   [@Han2006]
  $n^3 (\log\log n)^3 / (\log n)^2$   [@Chan2007]
  $n^3 (\log\log n) / (\log n)^2$     [@Han2012]

\[table:realvalue\]

The best result, by Han and Takaoka [@Han2012], achieves a reduction factor of $O((\log\log n)^2)$ compared to the result of [@Chan2007]. Their approach focuses on the distance-product computation. First, an $n \times n$ matrix is divided into $m$ sub-matrices, each of dimension $n \times n/m$, where $m$ is determined based on a specific criterion. Then, the algorithm proceeds through a series of matrix manipulations, index building, encoding, and partitioning steps until it reaches the proposed bound. The best *non-negative edge weight* complexity is $O(n^2 \log n)$ [@Moffat1987]. First, the algorithm sorts all adjacency lists in order of increasing weight. Then, it performs an SSSP computation $n$ times, proceeding in iterations. In the first phase, it uses the notion of *potential* over the edges of vertices and selects and labels the edge with the minimum potential. The *potential*, derived from the *potential model*, is defined with respect to a probability distribution on complete directed graphs with arbitrary edge lengths that contain no negative cycles. The algorithm runs in two main phases, each with a specific invariant, and has an $O(n^2 \log n)$ complexity. The best *positive integer edge weight* complexity is $O(n^{\omega}+c)$ [@Roditty2011], where $\omega < 2.575$ is an exponent based on the matrix-multiplication bound of Coppersmith and Winograd [@Coppersmith1990].
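The classical $O(n^3)$ Floyd-Warshall recurrence discussed above, including the diagonal-based negative-cycle check, can be sketched as follows; the matrix input format is an assumption of this sketch.

```python
# Minimal sketch of the Floyd-Warshall APSP recurrence:
# dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]) over all pivots k.
def floyd_warshall(dist):
    """dist: n x n matrix with dist[i][i] == 0 and float('inf') for
    missing edges. Updated in place; returns (dist, has_negative_cycle)."""
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # A negative entry on the diagonal certifies a negative-weight cycle.
    has_neg = any(dist[i][i] < 0 for i in range(n))
    return dist, has_neg
```

As noted above, this form reports only distances; recovering paths requires an extra matrix of intermediate vertices updated alongside each improvement.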
Their proposed algorithm provides a smooth transition between the fastest exact and approximate shortest-path algorithms, with an error that grows linearly. The algorithm focuses on directed graphs with small positive integer weights in order to obtain additive approximations, where the additive error is polynomial in the actual distance between the pair of vertices.

Distance Oracles
----------------

*Definition:* Given a graph $G=(V,E)$, a distance oracle comprises (1) a data structure or index that undergoes preprocessing, and (2) a query algorithm. The term *distance oracle* was coined by Thorup and Zwick [@Thorup2005]. A distance oracle provides a faster alternative to the SSSP and APSP algorithms by preprocessing the graph and creating an auxiliary data structure that answers queries. A distance oracle operates in two phases, namely, a preprocessing phase and a query phase. In the preprocessing phase, information such as data structures or indexes is computed. The query-processing phase then answers queries efficiently using the outcome of the preprocessing phase. Distance oracles may return exact or approximate distances, and provide an efficient trade-off between space (for storing the data structure or index) and query time.

### Exact Distances

Fakcharoenphol and Rao [@Fakcharoenphol2006] propose an algorithm for planar graphs that balances the trade-off between preprocessing and query time. The preprocessing complexity for both space and time is $\tilde{O}(n)$, and the query-time complexity is $\tilde{O}(\sqrt{n})$. Their approach creates a non-planar graph from a subset of vertices, followed by the computation of the shortest-path tree. First, the graph is divided into a set of bipartite graphs. The distance matrices of the bipartite graphs need to comply with a non-crossing condition referred to as the Monge condition. The proposed result of $O(\sqrt{n})$ holds as long as the non-crossing condition is enforced. Klein et al.
[@Klein2010] propose a linear-space algorithm with a fast preprocessing complexity of $O(n\log^{2} n)$ over a directed planar graph. The graph can include both positive and negative edges. Given a planar directed graph $G$ and a source vertex, the algorithm finds a Jordan curve $C$ that passes through $O(\sqrt{n})$ vertices. A *boundary vertex* is one through which $C$ passes. Cutting the graph along the curve and duplicating the boundary vertices creates subgraphs $G_i$. The algorithm proceeds in five stages: (1) recursively compute the distances from an arbitrary boundary vertex $r$ within each subgraph, (2) compute all distances between boundary vertices, (3) use a variant of Bellman-Ford to compute the distances from the boundary vertex $r$ to all other boundary vertices, (4) use Dijkstra’s algorithm to compute the distances from the boundary vertex $r$ to all other vertices, and (5) use Dijkstra’s algorithm to compute the distances from the source vertex; this requires $O(n\log n)$ time. Djidjev [@Djidjev1996] proposes a faster query-time algorithm and proves that, for any $S \in [n,n^2]$, a distance oracle can have a preprocessing space complexity of $O(S)$ and a query-time complexity of $O(n^2/S)$. Djidjev’s objective is an algorithm in which the product of preprocessing space and query time is no greater than that of the SSSP and APSP problems. The proposed algorithm provides a complexity of $O(\sqrt{n})$ for any class of directed graphs for which the separator theorem holds. Cabello [@Cabello2006] improves the preprocessing time, providing a theoretical proof that, for any $S \in [n^{4/3},n^2]$, a distance oracle can have $O(S)$ preprocessing space complexity and $O(n/\sqrt{S})$ query-time complexity. This is slower than the algorithm proposed by Djidjev [@Djidjev1996] by a logarithmic factor but covers a wider range of $S$.
The proposed approach constructs a data structure that can answer distance-based queries between any pair of vertices; the algorithm then queries the data structure with those pairs. Wulff-Nilsen [@Wulff-Nilsen2013] proposes a constant query-time algorithm for unweighted graphs, and proves that, for any $S \in [(\log n/\log\log n)^2,n^{2/5}]$, a distance oracle can have a space complexity of $o(n^2)$. The algorithm relies on the Wiener index of a graph, defined as the sum of distances between all pairs of vertices. The proposed technique shows the existence of subquadratic-time algorithms for computing the Wiener index, which has the same complexity as computing the average distance over vertex pairs. Henzinger et al. [@Henzinger1997] propose an SSSP algorithm requiring $O(n^{4/3} \log(nL))$ time, where $L$ is the absolute value of the most negative edge weight. The proposed algorithm achieves a similar bound for planar graphs and planar bipartite graphs. They also propose parallel and dynamic variants of the algorithm. The key component of their approach is the use of graph decompositions based on planar separators. Mozes and Sommer [@Mozes2012] propose an algorithm to answer distance queries between pairs of vertices in planar graphs with non-negative edge weights. They prove that, for any $S \in [n \log\log n, n^2]$, a distance oracle can have $\tilde{O}(S)$ preprocessing time complexity and $O(S)$ space complexity, with distance queries answered in $\tilde{O}(n/\sqrt{S})$. Alternatively, the graph can be preprocessed in $\tilde{O}(n)$ time to produce a data structure of size $O(n \log\log c)$, with query time $\tilde{O}(c)$, where $C$ is a cycle with $c = O(\sqrt{n})$ vertices.

### Approximate Distances

Approximate distance oracle algorithms attempt to compute shortest-paths by querying only some of the distances.
It is important to note that algorithms that deal with finite metric spaces produce only approximate answers. Some algorithms create *spanners*, where a spanner is a sparse subgraph that approximates the original graph. A spanner can be regarded as a spanning subgraph that maintains the locality aspects of the graph. These locality aspects define a *stretch*, a multiplicative factor that indicates the amount by which distances increase when only the spanner edges are used [@Elkin2004]. Other algorithms approximate distances by triangulation using a concept called a *landmark* or *beacon* [@Sommer2012], selected by random sampling, where each vertex stores its distances to all landmarks. Note that, given the definition of approximate distance oracles, retrieval of the actual shortest-path is not guaranteed. Zwick [@Zwick1998] presents an APSP algorithm for directed graphs that utilizes matrix multiplication, where the approximate distances are computed in $O((n^\omega/\epsilon)\log(W/\epsilon))$ for any $\epsilon > 0$. They define the stretch as $1+\epsilon$, where $W$ represents the largest edge weight in the graph. Aingworth et al. [@Aingworth1996] propose an APSP algorithm for undirected graphs with unweighted edges that does not adopt a matrix-multiplication approach; the trade-off of not using fast matrix multiplication is a small additive error. They propose two algorithms: one that estimates graph paths and distances with an additive error of 2 in time $O(n^{2.5}\sqrt{\log n})$, and a 2/3-approximation algorithm that runs in $O(m(n \log n)^{1/2})$. Dor et al. [@Dor2000] improve on previous surplus results by proposing an APSP algorithm that computes the surplus-2 estimate in $\tilde{O}(n^{3/2} m^{1/2})$.
They also show that, for any $k$, a surplus-$2(k-1)$ estimate takes $\tilde{O}(kn^{2-1/k} m^{1/k})$ time to compute. Their work relies on one main observation concerning sets of vertices that represent the vertices of high degree: a set of vertices $X$ is said to represent a set $Y$ if every vertex in $Y$ has a neighbor in $X$. Cohen and Zwick [@Cohen2001] improve the work of Dor et al. [@Dor2000] for weighted undirected graphs by proposing an algorithm that computes the surplus-2 estimate of all distances in $\tilde{O}(n^{3/2}m^{1/2})$ and the surplus-3 estimate in $\tilde{O}(n^2)$. They show that finding the estimated distances between all pairs in directed graphs is as hard as Boolean matrix multiplication, which makes their approximation algorithm valid only for undirected graphs. Their algorithm relies on two important components: a partitioning of the graph and the use of an SSSP algorithm, e.g., Dijkstra’s. Patrascu and Roditty [@Patrascu2010] further improve the stretch bound at the expense of increased space requirements, achieving $\tilde{O}(n^{2/3})$. Their approach defines the notion of *balls* $B$ around each vertex, which grow geometrically and stop based on a specific criterion. Given vertices $s$ and $t$, the worst case occurs when the balls do not intersect. Agarwal et al. [@Agarwal2011] also propose a 2-estimate approach that can be implemented in a distributed fashion. The approach is mainly intended for compact routing protocols, and aims to characterize the space/query-time trade-off for approximate distance queries in sparse graphs. For both of the approaches above (i.e., [@Patrascu2010] and [@Agarwal2011]), the space versus query-time trade-off depends on the number of edges.
For spanners, Elkin and Peleg [@Elkin2004] propose a general $(1+\epsilon,\beta)$-spanner with space complexity $O(\beta n^{1+1/k})$, where $\beta = \beta(\kappa,\epsilon)$ is constant when $\kappa$ and $\epsilon$ are constants. They claim that the stretch and the spanner size can be minimized simultaneously. Baswana and Sen [@Baswana2007] propose a spanner with a $(2k-1)$ stretch that can be computed in $O(km)$ time with a size of $O(kn^{1+1/k})$, where $k > 1$. They provide a theoretical proof that a spanner with a $(2k-1)$ stretch can be computed in linear time, without distance computation, through a novel clustering technique. The proposed approach takes $O(k)$ rounds, each of which explores an adjacency list in order to determine the edges that need to be removed. The advantage of this approach is its applicability to various computational environments, e.g., the synchronous distributed model, the external-memory model, and the CRCW PRAM model. For planar graphs, Thorup [@Thorup2004] proposes a $(1+\epsilon)$-approximate distance oracle. This approach routes a constant number of shortest-paths through separators, in contrast to Lipton et al. [@Lipton1979]. For each vertex, it stores the shortest-path distances to a set of $O(1/\epsilon)$ landmarks per level, recursively over $O(\log n)$ levels. Kawarabayashi et al. [@Kawarabayashi2011] propose a planar-graph algorithm that provides tunable trade-offs, where a polylogarithmic query time can be achieved while maintaining a space requirement linear in the graph size. The proposed approach achieves a preprocessing time complexity of $O(n \log^2 n)$ and a query time of $O(\epsilon^{-2} \log^2 n)$. It achieves a faster running time than Thorup’s approach, which computes a set $C$ of connections that covers all vertices of the graph, with every vertex having $O(\epsilon^{-1})$ connections [@Thorup2004].
In contrast, only a subset of vertices is covered using the approach of Kawarabayashi et al., whose space complexity is $O(\epsilon^{-1})$ times the number of paths. For complex networks, Chen et al. [@Chen2012] propose a distance oracle over random power-law graphs [@Aiello2000] with a 3-estimate that has a space complexity of $O(n^{4/3})$. Their approach adapts the distance oracle proposed by Thorup and Zwick [@Thorup2005] by selecting the highest-degree vertices as landmarks, and encodes the shortest-paths in the vertex labels.

Goal-Directed Shortest-Paths
----------------------------

A goal-directed shortest-path search algorithm annotates the vertices or edges of the graph with additional information. This information allows the algorithm to determine which parts of the graph to prune from the search space.

### Simple Goal-Directed Search

Hart et al. [@Hart1968] propose a simple goal-directed algorithm, termed $A^{*}$. The algorithm adopts a heuristic approach to finding the shortest-path. Unlike Dijkstra’s algorithm, $A^{*}$ is an informed algorithm that preferentially searches the routes that lead toward the final goal. $A^{*}$ is an optimal best-first-search greedy algorithm; what sets $A^{*}$ apart from other best-first algorithms is that it also takes the distance already traveled into account. $A^{*}$ always finds the shortest-path if an admissible heuristic function is used. The strong point of the algorithm is that it is typically faster than Dijkstra’s, since it explores fewer vertices. On the downside, if $A^{*}$ does not use a good heuristic, it may fail to find the shortest-path. Some variants of the $A^{*}$ algorithm use landmarks and other techniques in order to achieve better performance than $A^{*}$ under various setups.
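The informed search described above can be sketched with a minimal example; the grid setting and the Manhattan-distance heuristic are illustrative assumptions of this sketch, not taken from [@Hart1968].

```python
# Minimal A* sketch on a 4-connected grid with unit step costs, ordered
# by f = g + h, where h is the admissible Manhattan-distance heuristic.
import heapq

def astar(width, height, blocked, start, goal):
    """Fewest steps from start to goal on a width x height grid; -1 if unreachable."""
    def h(p):                                  # admissible heuristic: Manhattan distance
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g = {start: 0}                             # cost already traveled (the 'g' term)
    frontier = [(h(start), start)]             # priority = f = g + h
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            return g[cur]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if not (0 <= nx < width and 0 <= ny < height) or nxt in blocked:
                continue
            ng = g[cur] + 1
            if ng < g.get(nxt, float("inf")):
                g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), nxt))
    return -1
```

With `h` identically zero this degenerates to Dijkstra's algorithm; a larger admissible `h` prunes more of the search, which is the source of the speedup described above.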
Goldberg and Werneck [@Goldberg2005] propose a preprocessing phase in which a number of landmarks are selected and the shortest-path distances between all vertices and these landmarks are computed and stored. They propose a constant-time lower-bounding technique that combines the precomputed distances with the triangle inequality; the resulting lower bounds are then used to guide an $A^{*}$ search. Gutman [@Gutman2004] offers a comparable solution based on the concept of reach. Gutman's technique relies on storing a reach value and the Euclidean coordinates of every vertex, and has the advantage that it can be combined with the $A^{*}$ algorithm. Compared with the work of Goldberg and Werneck [@Goldberg2005], Gutman's technique [@Gutman2004] outperforms theirs given one landmark but performs worse given sixteen landmarks. On the downside, Gutman's approach depends on domain-specific assumptions, requires longer preprocessing, and is inapplicable in a dynamic setting. Potamias et al. [@Potamias2009] propose an approximate *landmark-based* technique for point-to-point distance estimation over large networks. They prove that selecting the optimal set of landmarks is NP-Hard and propose heuristic solutions instead. Specifically, they propose smart landmark-selection techniques that yield high accuracy with up to 250 times less space than selecting landmarks at random. Among their evaluated strategies, the Centrality strategy is more robust than the Degree strategy, and partitioning-based strategies, e.g., Border/P, exhibit better computational cost across datasets. Kleinberg et al. [@Kleinberg2004] propose an algorithm with provable performance guarantees for beacon-based triangulation and embedding.
The beacon-based algorithms are designed for triangulation, using the triangle inequality to infer the unmeasured distances. They show that a multiplicative error of $1+\delta$ on a $1-\epsilon$ fraction of distances can be achieved by triangulation-based reconstruction given a constant number of beacons. The algorithm also achieves constant distortion over a $1-\epsilon$ fraction of distances. Maue et al. [@Maue2009] show that Dijkstra's algorithm can be enhanced by precomputing shortest-path distances. They propose to partition the graph into $k$ non-overlapping clusters and perform two operations: (1) store the start and end points, and (2) store the shortest connection between each pair of clusters. The proposed algorithm achieves a speed-up factor of $\sqrt{k}$ over Dijkstra's algorithm.

### Advanced Goal-Directed Search

Edge labeling is an approach that relies on precomputing, for each edge $e$, the set $M(e)$ of vertices that can be reached on a shortest path starting with $e$. The graph is first partitioned into a set of regions of equal size along with a precomputed set of boundary vertices. To compute the edge flags, an SSSP computation is run on the regions for all boundary vertices. Several works, e.g., Kohler et al. [@Kohler2005], Schulz et al. [@Schulz1999], and Lauther [@Lauther2004], present variations of edge labeling. Möhring et al. [@Mohring2007] propose an algorithm for sparse directed graphs with non-negative edge weights, termed the *arc-flag* approach. The arc-flag approach preprocesses the graph by dividing it into regions and recording, for each arc and region, whether the arc lies on a shortest path into that region; this information speeds up shortest-path queries. Given a suitable partitioning scheme and a bidirectional search, the arc-flag approach is up to 500 times faster than the standard Dijkstra's algorithm on a large graph. Schilling et al.
[@Schilling2006] present a further improvement by searching once per region; their approach achieves a speed-up of more than 1,470 on a subnetwork of 1 million vertices. Goldberg and Werneck [@Goldberg2005] propose the $A^{*}$ search, Landmarks, and Triangle inequality (ALT) algorithm. They show that precomputing the distances to a set of landmarks can bound the shortest-path computational cost. They propose using an average of 20 landmarks that are well distributed over the periphery of the graph. In turn, their approach speeds up route planning. Bauer et al. [@Bauer2010] study how to systematically combine speed-up techniques proposed for Dijkstra's algorithm, e.g., adding goal-directed approaches to hierarchical approaches. They present a generalized technique that demonstrates how the speed-up can be improved further. Their results show that combining Highway-Node Routing with Arc-Flags achieves the best speed-up while maintaining an adequate preprocessing cost. They also present a hierarchical ALT variant for dense graphs. Delling et al. [@Delling2012] present an algorithm termed the round-based public transit router (RAPTOR). RAPTOR is not based on Dijkstra's algorithm, as it probes each route in the network at most once. RAPTOR works in fully dynamic scenarios and can be extended to handle, for example, flexible departure times. Bauer and Delling [@Bauer2009] use hierarchical techniques to extend the edge-flag approach, e.g., using contraction hierarchies during preprocessing, thereby tackling the main preprocessing drawback of edge flags. The proposed work is termed SHARC (SHortcuts + ARC-flags). A key observation behind SHARC is that it suffices to assign suboptimal edge flags to most edges, which focuses the preprocessing on the important edges only. Another observation is that SHARC incorporates hierarchical aspects implicitly. SHARC also extends the edge-flag approach of Möhring et al.
[@Mohring2007] to achieve a fast unidirectional query algorithm. Maue et al. [@Maue2009] propose a goal-directed algorithm that utilizes precomputed cluster distances (PCDs). The proposed approach first partitions the graph into clusters and then precomputes the shortest connections between each pair of clusters $U$ and $V$. PCDs yield distance bounds that can be used to prune the search, achieving a speed-up comparable to ALT while using less space.

Hierarchical Shortest-Path
--------------------------

Hierarchical shortest-path algorithms generate a multi-layered vertex hierarchy in the preprocessing stage. A hierarchical structure is prominent in, e.g., road networks, which exhibit a natural ordering of motorways, important streets, and urban streets [@Schultes2005]. In general, methods using contraction hierarchies provide low space complexity. Hierarchical methods have many variants, such as reach-based methods, highway hierarchies, and highway-node routing. On the other hand, transit-vertex routing and hub labels provide fast query times [@Sommer2012]. The following sections discuss various algorithms that follow a hierarchical approach.

### Highway Hierarchies

Highway hierarchies capture edge-based properties. For example, highway edges appear on many shortest paths even though they may not lie directly between the source and the destination vertices. The algorithm generates a hierarchy of graphs that enables fast query times with correctness guarantees. Sanders and Schultes [@Sanders2005; @Sanders2006] propose a static highway-hierarchies algorithm for undirected graphs, built around appropriate definitions of the local search and of the highway network. They define a local search as one that visits the $H$ (a tuning parameter) closest vertices from the source or target.
An edge is classified as a highway edge if it lies on some shortest path from a source to a destination but is not within the $H$ closest vertices of either endpoint. Nannicini et al. [@Nannicini2010] propose an algorithm that relies on time-dependent edge lengths. They extend the original algorithm of Sanders and Schultes [@Sanders2005] to directed graphs, aiming to find the fastest paths on a large dynamic road network with quasi-real-time updates.

### Contraction Hierarchies

A contraction hierarchy has a level for each vertex, reaching up to $n$ levels. Hierarchical models can improve query performance because the search proceeds only upwards in the hierarchy. This also reduces the space complexity, as each edge is stored only at its lower endpoint. Geisberger et al. [@Geisberger2008] propose contraction hierarchies, where the vertices are first ordered by importance and a hierarchy is then generated by iteratively contracting the least important vertex. Contracting a vertex means removing it and replacing the shortest paths passing through it with shortcut edges. Queries then use a bidirectional shortest-path search over the hierarchy. Batz et al. [@Batz2010] propose a time-dependent version of the algorithm: a fast and exact route-planning algorithm for time-dependent road networks. Its main issue is space complexity, which they tackle by approximating the piecewise-linear travel-time functions, leading to a significant space reduction while preserving correctness. The proposed approach approximates shortcuts and non-shortcuts to obtain time-dependent edge weights; these weights are then used by their bidirectional search algorithm to create a corridor of shortcuts that can be searched. Kieritz et al. [@Kieritz2010] propose a distributed-memory parallelization of time-dependent contraction hierarchies.
The algorithm identifies the vertices that can be contracted in each iteration. Parallelization is achieved by letting each process contract its vertices independently, provided the contracted neighborhoods do not overlap. They attempt to approximate the vertex ordering of the sequential algorithm. Geisberger et al. [@Geisberger2012] devise an algorithm based on contraction hierarchies to calculate shortest paths on continental-scale road networks. The preprocessing step relies on the hierarchical properties of road networks in order to add shortcut edges. They use a modified version of Dijkstra's algorithm that visits only a few hundred vertices, which makes it suitable for mobile devices.

### Multi-Level Graphs

In a multi-level overlay graph, the shortest paths within a given level do not use vertices from the upper levels. This method therefore depends on the correct selection of the vertices that act as landmarks on the higher levels. Schulz et al. [@Schulz2002] propose a multi-level graph-decomposition method that targets space reduction. The method precomputes shortest paths and replaces single edges with edges whose weight equals the shortest-path length, resulting in a subgraph that is smaller than the original graph. The distances between a set of vertices in the subgraph are the same as the shortest-path distances between the same vertices in the original graph. Holzer et al. [@Holzer2009] introduce several vertex-selection criteria for overlay graphs, used to determine a representative subset of the original graph. They investigate the criteria's effectiveness over multi-level overlay graphs and the speed-up achieved for shortest-path computation.

### Transit Vertex Routing

Transit-vertex routing precomputes the shortest paths to and from a set of transit vertices identified in the graph.
The algorithm requires extensive preprocessing but exhibits very fast query times, as a query requires only a limited number of look-ups between transit vertices. Bast et al. [@Bast2007] propose transit-vertex routing. They show that a vertical and a horizontal sweep are sufficient to compute the set of transit vertices, and they also present techniques to make the approach more space-efficient. Arz et al. [@Arz2013] propose a variant based on contraction hierarchies that achieves an order-of-magnitude speed-up with preprocessing time similar to that of computing the contraction hierarchy itself. They propose a graph-theoretical locality filter that does not affect the query time.

### Hub Labeling

Modeling road networks as low-dimensional graphs is one method for computing shortest paths, and one technique for such modeling is *labeling*. Labeling algorithms were first introduced in the distributed-computing field [@Gavoille2004; @Thorup2005]. In the labeling preprocessing stage, each vertex $v$ is assigned a *forward label* and a *reverse label*. The forward label consists of a set of vertices $w$, each with its computed distance $dist(v,w)$ from $v$; the reverse label consists of a set of vertices $u$, each with its computed distance $dist(u,v)$ to $v$. These labels are later used in the query stage to determine the vertex that minimizes the distance from the source to the destination. A label can be viewed as the set of *hubs* to which a vertex $v$ has a direct connection. The labeling algorithm ensures the cover property: any two connected vertices share at least one hub on their shortest path. Hub labeling preprocesses the vertices so that, for each vertex $v$, the distances to a set of landmarks $L(v)$ are stored in the vertex label. The query algorithm is fast as long as the labels of the source and destination vertices are small.
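Assuming the cover property holds, a hub-label distance query reduces to scanning the two labels for their common hubs. The sketch below illustrates this with a hypothetical toy labeling (the vertex names and label contents are illustrative, not taken from any cited work).

```python
import math

def hub_query(forward_label, reverse_label):
    """Distance query over hub labels.

    forward_label: dict hub -> dist(s, hub) for the source s
    reverse_label: dict hub -> dist(hub, t) for the target t
    Returns the minimum of dist(s, h) + dist(h, t) over common hubs h.
    """
    best = math.inf
    for hub, d_s in forward_label.items():
        d_t = reverse_label.get(hub)
        if d_t is not None:                 # hub appears in both labels
            best = min(best, d_s + d_t)
    return best

# Hypothetical labels for a path graph a - b - c - d with unit weights,
# using b as the shared hub of a's forward label and d's reverse label.
L_fwd_a = {"a": 0, "b": 1}                  # distances from a to its hubs
L_rev_d = {"d": 0, "c": 1, "b": 2}          # distances from hubs to d
print(hub_query(L_fwd_a, L_rev_d))          # b is the common hub: 1 + 2 = 3
```

Because the query only intersects two small sorted or hashed sets, its cost is linear in the label sizes, which is why bounding the maximum label size is the central optimization target in the works discussed next.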
Storing the labels consecutively allows the algorithm to exhibit good memory locality. Abraham et al. [@Abraham2011; @Abraham2012] propose a labeling scheme in which, given vertices $s$ and $t$, the labels are the sets of vertices visited by the forward contraction-hierarchy search from $s$ and the reverse contraction-hierarchy search from $t$; the shortest path is found by scanning the intersection of the forward and reverse sets. Babenko et al. [@Babenko2013] propose an approximation algorithm for producing small labels. Their main target is to reduce the size of the maximum hub label, which can lead to unbalanced solutions where vertices have skewed label sizes. They propose an $O(\log n)$-approximation algorithm for the maximum label size that reduces the hub-labeling problem to a set-covering problem. Cohen et al. [@Cohen2003] propose a data structure for storing reachability labels using a 2-hop cover of all the paths in a graph. Each vertex $v \in V$ precomputes the labels $L_{in}(v), L_{out}(v) \subseteq V$ such that, for any pair $s$ and $t$, at least one vertex is in $L_{out}(s) \cap L_{in}(t)$. The distance-labeling query finds the shortest path from a source $s$ to a destination $t$ by computing the minimum of $dist(s, x) + dist(x, t)$ over all $x \in L_{out}(s) \cap L_{in}(t)$. The label sizes are not guaranteed in general; a polynomial-time preprocessing algorithm finds a 2-hop cover whose size is within a logarithmic factor of the smallest possible. Chang et al. [@Chang2012] propose a multi-hop distance labeling with a size smaller than that of the 2-hop labeling approach [@Cohen2003]. In the preprocessing phase, the algorithm stores a parent function $P$ that assigns a parent vertex to each vertex, thereby avoiding the preprocessing of the all-pairs shortest paths.
The proposed approach performs vertex separation on the graph $G$, dividing $G$ into multiple connected subgraphs. The graph is further decomposed into a minimal tree $T(I,F)$, where $I \subset V$ is the set of vertices and $F$ is the set of edges. The approach uses a distance query over this decomposition to compute the minimum distance. The time complexity of query processing is $O(tw \cdot h)$, where $tw$ is the width and $h$ is the height of the decomposition tree $T$.

### Highway Node Routing

The motivation behind highway-node routing is that prominent vertices shared by many shortest paths generate sparse overlay graphs, which results in faster query processing and lower space overhead. Schultes and Sanders [@Schultes2007] propose a dynamic algorithm that is space-efficient and answers queries a thousand times faster than Dijkstra's algorithm. The choice of vertices capitalizes on previous results and on the vertex sets defined by highway-hierarchy algorithms. They shift the computational complications into the preprocessing step, which also simplifies the query-processing algorithm, especially its dynamic variants. Abraham et al. [@Abraham2010] suggest that road networks have a small highway dimension. Their analysis relies on balls of a specific radius $r$: for every $r > 0$, there exists a set $S_{r}$ such that every shortest path of length more than $r$ contains a vertex from $S_{r}$, and $S_{r}$ is sparse if every ball of radius $O(r)$ contains only a small number of vertices of $S_{r}$.

Dynamic Shortest-Path Algorithms
================================

The main requirement of dynamic shortest-path algorithms is to process update and query operations efficiently in an online fashion. In the update operation, edges are inserted into or deleted from the graph.
In the query operation, the distance between two vertices is computed. *Fully dynamic algorithms* can process both insertions and deletions. *Incremental algorithms* can process insert operations but not delete operations, while *decremental algorithms* can process delete operations but not insert operations; incremental and decremental algorithms are therefore *partially dynamic*. The following sections illustrate algorithms that demonstrate these differences.

All-Pairs Shortest-Path (APSP)
------------------------------

All-pairs shortest-path algorithms report the distance between any two vertices in a graph. They answer distance queries between any two vertices while dynamically maintaining changes to the graph such as inserts, deletes, and updates. Demetrescu and Italiano [@Demetrescu2004] propose a fully dynamic algorithm over directed graphs for all-pairs shortest paths with real-valued edge weights, where every edge can assume a predefined number of values $S$. Their algorithm achieves an amortized time complexity of $O(S \cdot n^{2.5} \log^3 n)$ for update operations while achieving optimal worst-case query time. The update operation inserts or deletes a vertex together with all its incident edges, and the algorithm maintains a complete distance matrix between updates. Thorup [@Thorup2004a] improves over Demetrescu and Italiano [@Demetrescu2004] by reducing the fully dynamic graph problem to a smaller set of decremental problems. Thorup adopts the idea of a fully dynamic minimum spanning tree, utilizing the efficiency of a decremental algorithm to solve the fully dynamic all-pairs shortest-path problem. Bernstein [@Bernstein2009] presents a $(2+\epsilon)$-approximation algorithm for APSP over undirected graphs with positive edge weights. Bernstein's algorithm achieves an update time that is almost linear and a query time of $O(\log \log n)$.
The proposed query algorithm is deterministic, while the update procedure is randomized. The algorithm's running time depends on the distance from the source vertex to the destination vertex; since $d(x, y)$ is not known beforehand, the algorithm guesses several different values for $d(x, y)$. Roditty and Zwick [@Roditty2010] propose a fully dynamic APSP algorithm for unweighted directed graphs. The algorithm is randomized and returns correct results with high probability. It proceeds in phases that rely on the ideas of a decremental algorithm [@Henzinger1995]. They demonstrate that the incremental and decremental versions of the SSSP problem are, in terms of complexity, as hard as the static all-pairs shortest-path problem over directed or undirected graphs. Bernstein [@Bernstein2013] proposes a $(1+\epsilon)$-approximate algorithm that improves over existing studies with respect to deletions and edge-weight increases. The algorithm computes decremental all-pairs shortest paths on weighted graphs and achieves a total update time of $o(mn^2)$ using a randomized algorithm. Henzinger et al. [@Henzinger2013] improve over the fastest deterministic algorithm, by Shiloach and Even [@Shiloach981], achieving an update time of $\tilde{O}(n^{5/2})$ with constant query time. They also propose a deterministic algorithm with an update time of $\tilde{O}(mn)$ and a query time of $O(\log \log n)$. They introduce two techniques: a lazy Even-Shiloach tree that maintains a distance-bounded shortest-paths tree, and an Even-Shiloach-tree-based de-randomization technique.

Single-Source Shortest-Path
---------------------------

Single-source shortest-path algorithms report the distances from a given source vertex. The dynamic variants process update and query operations in an online fashion.
The update operation inserts or deletes an edge or modifies an edge's weight. The query operation probes for the distance from the source vertex to a given target vertex. Fakcharoenphol and Rao [@Fakcharoenphol2006] propose an algorithm for planar graphs with real-valued edge weights. It requires $O(n \log^3 n)$ preprocessing time and performs update and query operations in $O(n^{4/5} \log^{13/5} n)$ amortized time. The proposed algorithm uses Monge matrices [@Cechlarova1990] combined with the Bellman-Ford and Dijkstra algorithms to search in sub-linear time. Bernstein and Roditty [@Bernstein2011] propose a dynamic shortest-path algorithm that achieves an update time better than $O(n)$ without sacrificing query time. Specifically, they obtain $O(n^{2+o(1)})$ total update time and constant query time, primarily on moderately sparse graphs. Bernstein and Roditty propose two randomized decremental algorithms that operate over unweighted, undirected graphs for two approximate shortest-path problems. Henzinger et al. [@Henzinger2014] improve the update time of Bernstein and Roditty [@Bernstein2011] to $O(n^{1.8+o(1)} + m^{1+o(1)})$ while maintaining constant query time. The algorithm utilizes a center-cover data structure that, given a parameter $h$ and a constant $\gamma$, maintains $O(h)$ vertices referred to as *centers*. The main property of the center-cover data structure is that every vertex within a specific distance of a center is covered by an Even-Shiloach tree (ES-tree) rooted at that center. The proposed algorithm inherits this property and is fastest when $h$ is moderately small.

Time-Dependent Shortest-Path Algorithms
=======================================

A time-dependent shortest-path algorithm processes graphs whose edges are associated with a function, known as an *edge-delay* function.
The edge-delay function indicates how much time is needed to travel from one vertex to another. The query operation probes for the minimum-travel-time path from the source to the destination vertex over the graph; the returned result represents the best departure time found in a given time interval.

Continuous-Time Algorithms
--------------------------

Kanoulas et al. [@Kanoulas2006] propose an algorithm that finds the set of all fastest paths from source to destination within a specified time interval. The interval is defined by the user and represents the departure or arrival time. The query algorithm partitions the time interval into sub-intervals, each of which is assigned a set of fastest paths. Unlike the $A^{*}$ algorithm, the proposed algorithm probes the graph only once instead of multiple times. Ding et al. [@Ding2008] propose an algorithm that finds the departure time that minimizes the travel time over a road network in which traffic conditions change dynamically. The algorithm is capable of operating on a variety of time-dependent graphs. George et al. [@George2006; @George2007] propose the Time-Aggregated Graph (TAG), a graph whose topology changes with time. In TAG, vertices and edges are modeled as time series; beyond time dependence, TAG also manages edges and vertices that are absent at some instants in time. They propose two algorithms: a shortest-path algorithm over the time-aggregated network (SP-TAG) and a best start-time shortest-path algorithm (BEST). SP-TAG finds the shortest path at the query time using a greedy algorithm, whereas BEST finds the best start time (i.e., the earliest travel time) over the entire period using TAG.
The time complexities of SP-TAG and BEST are $O(e(\log T + \log n))$ and $O(n^{2}eT)$, respectively, where $e$ is the number of edges, $n$ is the number of vertices, and $T$ is the number of time instants. Ding et al. [@Ding2008] propose an algorithm for the shortest-path problem over a large time-dependent graph $G_T$. Each edge has a delay function that denotes the time taken to travel from its source vertex to its destination vertex at a given time, and the user queries for the least travel time (LTT). The proposed algorithm achieves a space complexity of $O((n + m)\alpha(T))$ and a time complexity of $O((n \log n + m)\alpha(T))$.

Discrete-Time Algorithms
------------------------

Nannicini et al. [@Nannicini2008a] propose a bidirectional $A^{*}$ algorithm that restricts the $A^{*}$ search to a set of vertices defined by a time-independent algorithm. The bidirectional $A^{*}$ algorithm operates in two modes: the *forward search* is run on the graph weighted by a specific cost function, while the *backward search* is run on the graph weighted by a lower-bound function. Delling and Wagner [@Delling2009] reanalyze various time-dependent techniques. They conclude that most of the techniques that operate over time-dependent graphs guarantee correctness by augmenting the subroutines of the preprocessing and query phases. Foschini et al. [@Foschini2014] study the computational complexity of the shortest-path problem over time-dependent graphs. They show that, even with linear edge-cost functions, the shortest path to the destination can change $n^{\Theta(\log n)}$ times. They study the complexity of the arrival time by mapping the problem to a parametric shortest-path problem so that it can be analyzed correctly. Demiryurek et al. [@Demiryurek2011] propose a technique to speed up fastest-path computation over time-dependent spatial graphs.
They propose a technique based on the bidirectional time-dependent $A^{*}$ algorithm that operates in two main stages. The first stage is precomputation, where the graph is partitioned into a set of non-overlapping partitions and a lower-bound distance label is calculated for the vertices and border vertices. The second stage is online, where the fastest path is found using a heuristic function based on the computed distance labels. The results indicate that the proposed technique decreases the computation time and reduces the storage complexity significantly.

Stochastic Shortest-Path Algorithms
===================================

A stochastic shortest-path algorithm attempts to capture the uncertainty associated with the edges by modeling their weights as random variables. The objective then becomes to compute the shortest paths with minimum expected costs. The two notable lines of research on this problem are adaptive and non-adaptive algorithms. Adaptive algorithms determine the best next hop based on the state of the graph at a given time instant, whereas non-adaptive algorithms focus on minimizing the expected length of a path chosen in advance.

Adaptive Algorithms
-------------------

Miller-Hooks and Mahmassani [@Miller-Hooks2000] propose an algorithm to determine the a priori least-expected-time paths from all source vertices to a single destination vertex. This computation is done for each departure time during the peak period of the network. They also propose lower bounds on these a priori least-expected-time paths. Nikolova et al. [@Nikolova2006] propose an algorithm that maximizes the probability that the shortest-path length does not exceed a specific threshold. They define a probabilistic model where edge weights are drawn from a known probability distribution; the optimal path is the one with the maximum probability of not exceeding the threshold.
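The threshold objective can be approximated by plain Monte-Carlo simulation. The sketch below is a hypothetical illustration, not the algorithm of Nikolova et al.: it assumes independent Gaussian edge lengths (a modeling assumption) and estimates, per candidate path, the probability that the total length stays under a budget.

```python
import random

def prob_under_threshold(path_edges, threshold, trials=20000, seed=7):
    """Estimate P(sum of edge lengths <= threshold) by Monte Carlo.

    path_edges: list of (mean, std) pairs, one per edge; edge lengths are
    modeled as independent Gaussians (an assumption for this sketch).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        length = sum(rng.gauss(mu, sigma) for mu, sigma in path_edges)
        hits += length <= threshold
    return hits / trials

# Two hypothetical s-t paths: a short but risky one and a longer safe one.
risky = [(5.0, 3.0)]                 # one edge: mean 5, high variance
safe = [(3.0, 0.1), (3.5, 0.1)]      # two edges: mean 6.5, low variance
budget = 7.0
best = max([risky, safe], key=lambda p: prob_under_threshold(p, budget))
# With a budget of 7, the low-variance path wins despite its larger mean.
```

The example makes the key point of the threshold formulation concrete: the path with the smallest expected length is not necessarily the one most likely to arrive on time.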
Non-Adaptive Algorithms
-----------------------

Loui [@Loui1983] proposes using a utility function of the path length, where the utility function is monotone and non-decreasing. When the utility function is linear or exponential, it becomes separable over the edge lengths, which allows paths maximizing the utility function to be found with classical shortest-path algorithms. Nikolova et al. [@Nikolova2006a] propose an algorithm for optimal route planning under uncertainty. They define the objective as a function of both the path length and the departure time from the source. They show that the path and the start time can be optimized jointly because of the penalties incurred for late and early arrivals, and that this joint optimization is reducible to classic shortest-path algorithms.

Parametric Shortest-Path Algorithms
===================================

The objective of parametric shortest-path algorithms is to compute the shortest paths for all vertices as a function of a specific parameter. They probe for the parameter values, known as *breakpoints*, at which the shortest path changes; each edge weight varies as a linear function of the parameter. Mulmuley and Shah [@Mulmuley2001] propose a model for lower-bound computation, a variant of the Parallel Random Access Machine. The proof starts from a lower bound on the parametric complexity of the shortest-path problem: plotting the optimal shortest-path cost as a function of the parameter yields a piecewise-linear, concave curve, whose breakpoints are determined by a fixed set of linear weight functions over a fixed graph. Young et al. [@Young2002] propose a model in which the computed edge values make the problem more tractable than in its predecessors, allowing shortest paths to be obtained in polynomial time.
They use the algorithm proposed by Karp and Orlin [@Karp1981], modified to use Fibonacci heaps in order to improve its performance. Erickson [@Erickson2010] proposes an algorithm for computing maximum flows in planar graphs. The algorithm maintains three structures: an edge spanning tree, a set of predecessor dual vertices, and the slack values of the dual edges. The initial predecessor pointers and slacks are computed in $O(n \log n)$ time using Dijkstra's algorithm.

Replacement Shortest-Path Algorithms
====================================

Consider a graph $G=(V,E)$, where $V$ is the set of vertices and $E$ is the set of edges. For every edge $e \in E$ on the shortest path from a source $s \in V$ to a destination $d \in V$, the replacement-path algorithm calculates the shortest path from $s$ to $d$ that avoids $e$. Emek et al. [@Emek2010] propose an algorithm that computes the replacement paths in near-linear time. On a weighted planar directed graph, the algorithm requires $O(n \log^3 n)$ preprocessing time and answers a replacement-path query in $O(h \log \log n)$ time, where $h$ is the number of hops. Roditty and Zwick [@Roditty2012] propose a Monte-Carlo randomized algorithm that computes the replacement paths in an unweighted directed graph with a running time of $\tilde{O}(m \sqrt{n})$. The Monte-Carlo algorithm improves the running times of the $k$-simple shortest-path and Vickrey pricing problems [@Hershberger2001] by a factor of $\sqrt{n}$. Bernstein [@Bernstein2010] proposes a $(1+\epsilon)$-approximate replacement-path algorithm that computes the paths in $\tilde{O}(m \log(nC/c)/\epsilon)$ time, where $C/c$ is the ratio of the largest to the smallest edge weight in the graph. Bernstein's algorithm achieves a running time of $\tilde{O}(km\sqrt{n})$ when applied to the $k$-simple shortest-paths problem.
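As a baseline for the problem definition above, the following sketch computes replacement distances naively by re-running Dijkstra's algorithm once per shortest-path edge (far slower than the algorithms surveyed, which exist precisely to beat this bound); the toy graph is hypothetical.

```python
import heapq
import math

def dijkstra(adj, s, banned=frozenset()):
    """Return (dist, parent) from s, skipping directed edges in `banned`."""
    dist, parent = {s: 0.0}, {s: None}
    pq = [(0.0, s)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, math.inf):       # stale queue entry
            continue
        for w, weight in adj.get(v, []):
            if (v, w) in banned:
                continue
            nd = d + weight
            if nd < dist.get(w, math.inf):
                dist[w], parent[w] = nd, v
                heapq.heappush(pq, (nd, w))
    return dist, parent

def replacement_paths(adj, s, t):
    """For each edge e on one shortest s-t path, return the shortest
    s-t distance that avoids e, as a dict {e: distance}."""
    dist, parent = dijkstra(adj, s)
    path, v = [], t                          # recover the shortest path
    while v is not None:
        path.append(v)
        v = parent[v]
    path.reverse()
    result = {}
    for u, w in zip(path, path[1:]):         # ban one edge at a time
        d_avoid, _ = dijkstra(adj, s, banned={(u, w)})
        result[(u, w)] = d_avoid.get(t, math.inf)
    return result

# Hypothetical instance: shortest path s-a-t (length 2), detour s-t (length 3).
adj = {"s": [("a", 1.0), ("t", 3.0)], "a": [("t", 1.0)]}
print(replacement_paths(adj, "s", "t"))  # {('s', 'a'): 3.0, ('a', 't'): 3.0}
```

With $h$ edges on the shortest path, this baseline costs $h$ full Dijkstra runs, which is exactly the overhead the near-linear and $\tilde{O}(m\sqrt{n})$ algorithms above avoid.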
Alternative Shortest-Path Algorithms ==================================== The alternative shortest-path problem reports paths that avoid a given vertex or edge, termed the unwanted vertex or the unwanted edge. The key difference between the replacement-path and the alternative shortest-path problems is that the user is not required to specify the unwanted vertex or edge for replacement paths. The goal of the alternative-path problem is to reuse the previously computed results for the unwanted vertex or edge, which in turn achieves better performance. Existing algorithms, e.g., all-pairs dynamic shortest-paths, do not solve the alternative shortest-path problem because of the high complexity of the update operation. Xie et al. [@Xie2012] propose a storage scheme, termed iSPQF. It is an extension of the shortest-path quad-tree [@Sankaranarayanan2005] that further reduces the number of quad-trees at each vertex. The space complexity of the shortest-path quad-tree forest (SPQF) is $O(n^{1.5})$. The SPQF algorithm can find the alternative shortest path for a single source (from source $s$ to destination $d$ avoiding vertex $v$) as well as for all pairs (from a set of sources $X$ to a set of destinations $Y$ avoiding vertex $v$) in $O(n)$ time. Weighted Region Shortest-Path Algorithms ======================================== Mitchell and Papadimitriou [@Mitchell1990] define the Weighted Region Problem (WRP) as a generalization of the two-dimensional shortest-path problem with obstacles. The problem assumes that the plane is subdivided into weighted polygonal regions. The objective is to minimize the cost according to a weighted Euclidean metric. The study by Mitchell and Papadimitriou sheds light on the discriminating properties of the weighted region problem over planar subdivisions and proposes an algorithm that runs in $O(n^{8}L)$ time, where $n$ is the number of vertices and $L$ is the number of bits required to encode the problem instance.
Specifically, $L = O(\log(nNW/\epsilon W))$, where $N$ is the maximum integer representing vertices of the triangulation, and $\epsilon > 0$ is a user-specified error value that can be tolerated. Mata and Mitchell [@Mata1997] propose an algorithm to compute an approximate optimal path for the weighted planar subdivision problem by constructing a sparse graph, termed the *path-net*. The approach uses Snell’s law of refraction [@Warntz1957] to divide the vertices into cones that bound the path through a vertex. The worst-case complexity of building the path-net graph with $O(kn)$ vertices is $O(kn^{3})$, where $k$ is the number of cones. Once the path-net is searched, it yields paths that are within a factor of $(1 + \epsilon)$ of the optimal solution. Conclusion ========== In this paper, we devise a taxonomy for the shortest-path problem. For each branch of the taxonomy, we illustrate the discriminating features and highlight the state-of-the-art research. The taxonomy provides investigators of the shortest-path problem with a guideline on where a required problem definition maps within the current related work. Acknowledgements {#acknowledgements .unnumbered} ================ Walid G. Aref’s research has been supported in part by the National Science Foundation under Grant IIS 1117766. [FHK[[$^{+}$]{}]{}05]{} I. Abraham and D. Delling. . , 2011. I. Abraham, D. Delling, A. Goldberg, and R. Werneck. . , 2012. I. Abraham, A. Fiat, A. V. Goldberg, and R. F. Werneck. Highway dimension, shortest paths, and provably efficient algorithms. , pages 782–793, 2010. R. Agarwal, P. B. Godfrey, and S. Har-Peled. . , pages 1754–1762, 2011. A. V. Aho and J. E. Hopcroft. . Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1st edition, 1974. W. Aiello, F. Chung, and L. Lu. . , 2000. D. Aingworth, C. Chekuri, and R. Motwani. . , pages 547–553, 1996. J. Arz, D. Luxen, and P. Sanders. , 2013. M. Babenko, A. Goldberg, A. Gupta, and V. Nagarajan. . , 2013. M. J. Bannister and D.
Eppstein. . , 2011. H. Bast. . , pages 355–367, 2009. H. Bast, S. Funke, D. Matijevic, P. Sanders, and D. Schultes. , 2007. S. Baswana and S. Sen. . , pages 532–563, 2007. G. Batz, R. Geisberger, S. Neubauer, and P. Sanders. . , pages 166–177, 2010. R. Bauer and D. Delling. . , 2009. R. Bauer, D. Delling, P. Sanders, D. Schieferdecker, D. Schultes, and D. Wagner. . , pages 303–318, 2010. R. Bellman. . Princeton University Press, 1957. R. Bellman. . Quarterly of Applied Mathematics, 1958. A. Bernstein. . , pages 693–702, 2009. A. Bernstein. . , pages 742–755, 2010. A. Bernstein. . , page 725, 2013. A. Bernstein and L. Roditty. . , pages 1355–1365, 2011. P. v. E. Boas. . pages 75–84, 1975. P. v. E. Boas, R. Kaas, and E. Zijlstra. . , pages 99–127, 1976. S. Cabello. . , pages 361–381, 2012. K. Cechlárová and P. Szabó. On the monge property of matrices. , 81(2):123 – 128, 1990. T. M. Chan. All-pairs shortest paths for unweighted undirected graphs in o(mn) time. , pages 514–523, 2006. T. M. Chan. More algorithms for all-pairs shortest paths in weighted graphs. , pages 590–598, 2007. L. Chang, J. X. Yu, L. Qin, H. Cheng, and M. Qiao. . , 21(6):869–888, 2012. W. Chen, C. Sommer, S.-H. Teng, and Y. Wang. . , pages 1–26, 2012. E. Cohen, E. Halperin, H. Kaplan, and U. Zwick. . , 32:1338–1355, 2003. E. Cohen and U. Zwick. . , pages 335–353, 2001. D. Coppersmith and S. Winograd. Matrix multiplication via arithmetic progressions. , pages 251 – 280, 1990. T. H. Cormen, C. Stein, R. L. Rivest, and C. E. Leiserson. . McGraw-Hill Higher Education, 2nd edition, 2001. B. Dean. . , 2004. D. Delling, T. Pajor, and R. Werneck. . , 2012. D. Delling and D. Wagner. . , 2:1–18, 2009. C. Demetrescu and G. Italiano. . , pages 1–29, 2004. C. Demetrescu and G. F. Italiano. . , pages 353–383, 2006. U. Demiryurek, F. Banaei-kashani, and C. Shahabi. . , pages 92–111, 2011. E. V. Denardo. . Dover Publications, 2003. E. W. Dijkstra. . , pages 269–271, 1959. B. Ding, J. X. Yu, and L. Qin. . 
, page 205, 2008. H. Djidjev. . , pages 151–165, 1996. W. Dobosiewicz. A more efficient algorithm for min-plus multiplication. , 1990. D. Dor, S. Halperin, and U. Zwick. . , 2000. J. Driscoll and H. Gabow. . , pages 1343–1354, 1988. M. Elkin and D. Peleg. (1+epsilon,beta)-spanner constructions for general graphs. , pages 608–631, 2004. Y. Emek, D. Peleg, and L. Roditty. . , 6:1–13, 2010. J. Erickson. . , 2010. J. Fakcharoenphol and S. Rao. . , pages 868–889, 2006. R. Floyd. . , pages 344–348, 1962. L. R. Ford. . Report P-923, The Rand Corporation, 1956. L. Foschini, J. Hershberger, and S. Suri. . , 2014. M. Fredman. . , pages 83–89, 1976. M. Fredman and R. Tarjan. . , pages 338–346, 1987. M. Fredman and D. Willard. . , pages 719–725, 1990. M. Fredman and D. Willard. . , pages 424–436, 1993. M. L. Fredman and D. E. Willard. . , pages 1–7, 1990. L. Fu, D. Sun, and L. Rilett. . , pages 3324–3343, 2006. C. Gavoille, D. Peleg, S. P[é]{}rennes, and R. Raz. Distance labeling in graphs. , pages 85–112, 2004. R. Geisberger, P. Sanders, D. Schultes, and D. Delling. . , pages 319–333, 2008. R. Geisberger, P. Sanders, D. Schultes, and C. Vetter. Exact routing in large road networks using contraction hierarchies. , pages 388–404, 2012. B. George, S. Kim, and S. Shekhar. . , pages 460–477, 2007. B. George and S. Shekhar. . , pages 85–99, 2006. A. Goldberg. . , pages 9–12, 2007. A. Goldberg and R. Werneck. , 2005. R. Gutman. , 2004. T. Hagerup. . , pages 61–72, 2000. Y. Han. . , pages 81–94, 2001. Y. Han. Improved algorithm for all pairs shortest paths. , pages 245–250, 2004. Y. Han. An o(n3 (loglogn/logn)5/4) time algorithm for all pairs shortest paths. , pages 411–417, 2006. Y. Han and T. Takaoka. An o(n3 log log n/ log2 n) time algorithm for all pairs shortest paths. , pages 131–141, 2012. P. Hart, N. Nilsson, and B. Raphael. Formal basis for the heuristic determination of minimum cost paths. , pages 100–107, 1968. M. Henzinger, S. Krinninger, and D. Nanongkai. . 
, pages 538–547, 2013. M. Henzinger, S. Krinninger, and D. Nanongkai. , pages 1053–1072, 2014. M. R. Henzinger and V. King. . , pages 664–672, 1995. M. R. Henzinger, P. Klein, S. Rao, and S. Subramanian. . , pages 3–23, 1997. J. Hershberger and S. Suri. Vickrey prices and shortest paths: What is an edge worth? pages 252–, 2001. M. Holzer, F. Schulz, and D. Wagner. . , 13(2):2.5, Feb. 2009. M. Holzer, F. Schulz, D. Wagner, and T. Willhalm. . , 2005. E. Kanoulas, Y. Du, T. Xia, and D. Zhang. . , pages 10–10, 2006. D. Karger, D. Koller, and S. Phillips. . , pages 560–568, 1993. R. Karp. . , pages 309–311, 1978. R. Karp and J. Orlin. . , pages 37–45, 1981. K.-i. Kawarabayashi, P. Klein, and C. Sommer. . , pages 135–146, 2011. T. Kieritz, D. Luxen, P. Sanders, and C. Vetter. . , pages 1–11, 2010. P. N. Klein, S. Mozes, and O. Weimann. . , 2010. J. Kleinberg, a. Slivkins, and T. Wexler. . , pages 444–453, 2004. E. Köhler, R. Möhring, and H. Schilling. . , pages 1–17, 2005. U. Lauther. . , pages 219–230, 2004. R. Lipton, D. Rose, and R. Tarjan. . , (2):346–358, 1979. R. Loui. . , 1983. C. Mata and J. S. B. Mitchell. . , pages 264–273, 1997. J. Maue, P. Sanders, and D. Matijevic. . , 2009. E. Miller-Hooks and H. Mahmassani. . , pages 198–215, 2000. J. Mitchell and C. Papadimitriou. . 1990. A. Moffat and T. Takaoka. An all pairs shortest path algorithm with expected running time o(n2 log n). , page 1023–1031, 1987. R. H. Möhring, H. Schilling, B. Schütz, D. Wagner, and T. Willhalm. . , 2007. E. F. Moore. . Proceedings of the International Symposium of Switching Theory 1957, Part [II]{}, 1957. S. Mozes and C. Sommer. . , 2012. K. Mulmuley and P. Shah. . , 63:253–267, 2001. G. Nannicini, P. Baptiste, G. Barbier, D. Krob, and L. Liberti. . , 2010. G. Nannicini, D. Delling, L. Liberti, and D. Schultes. . , pages 334–346, 2008. G. Nannicini and L. Liberti. . , pages 551–563, 2008. E. Nikolova, M. Brand, and D. Karger. , 2006. E. Nikolova, J. Kelner, M. Brand, and M. 
Mitzenmacher. . , 2006. M. Patrascu and L. Roditty. . , (1):815–823, 2010. M. Potamias, F. Bonchi, C. Castillo, and A. Gionis. . , page 867, 2009. L. Roditty and A. Shapira. . , pages 1–12, 2011. L. Roditty and U. Zwick. . , pages 389–401, 2010. L. Roditty and U. Zwick. Simple shortest paths in unweighted directed graphs. , 8(4):1–11, 2012. P. Sanders and D. Schultes. . , pages 568–579, 2005. P. Sanders and D. Schultes. . , pages 804–816, 2006. J. Sankaranarayanan, H. Alborzi, and H. Samet. Efficient query processing on spatial networks. pages 200–209, 2005. E. K. Schilling, R. H. Möhring, and Heiko. . , 2006. D. Schultes. . , 2005. D. Schultes and P. Sanders. . , pages 66–79, 2007. F. Schulz, D. Wagner, and K. Weihe. . , 1999. F. Schulz, D. Wagner, and C. Zaroliagis. . , pages 43–59, 2002. S. Sen. . , pages 32–43, 2009. Y. Shiloach and S. Even. An on-line edge-deletion problem. , 1981. M. Sniedovich. . , pages 599–620, 2006. M. Sniedovich. . Francis and Taylor, 2010. C. Sommer. . , 2012. T. Takaoka. A new upper bound on the complexity of the all pairs shortest path problem. , pages 195–199, 1992. T. Takaoka. A faster algorithm for the all-pairs shortest path problem and its application. , pages 278–289, 2004. M. Thorup. . , pages 59–67, 1996. M. Thorup. . , pages 1–33, 1999. M. Thorup. . , 51:993–1024, 2004. M. Thorup. . , pages 384–396, 2004. M. Thorup and U. Zwick. . , 52:1–24, 2005. W. Warntz. Transportation, social physics, and the law of refraction. , pages 2–7, 1957. S. Warshall. . , (1), 1962. C. Wulff-Nilsen. . , pages 831–838, 2013. K. Xie, K. Deng, S. Shang, X. Zhou, and K. Zheng. . , pages 1–31, 2012. J. Y. Yen. . Quarterly of Applied Mathematics, 1970. N. Young, R. Tarjant, and J. Orlin. . , 21, 2002. U. Zwick. . , pages 310–319, 1998. U. Zwick. . , 2001. U. Zwick. A slightly improved sub-cubic algorithm for the all pairs shortest paths problem with real edge lengths. pages 921–932, 2004.
--- abstract: 'We study the collective transport of paramagnetic colloids driven above a magnetic bubble lattice by an external rotating magnetic field. We measure a direct ratchet current which rises in integer and fractional steps with the field amplitude. The stepwise increase is caused by excluded volume interactions between the particles, which form composite clusters above the bubbles with mobile and immobile occupation sites. Transient energy minima located at the interstitials between the bubbles cause the colloids to hop from one composite cluster to the next with synchronous and period doubled modes of transport. The colloidal current may be polarized to make selective use of type up or type down interstitials.' author: - 'Pietro Tierno$^{1,2}$ and Thomas M. Fischer$^{3}$' title: | Excluded volume causes integer and fractional plateaus\ in colloidal ratchet currents --- The emergence of quantized steps in the current of a driven system is a fascinating phenomenon in condensed matter, occurring in a broad range of systems such as in charge density waves [@densitywave1; @densitywave2], in driven vortex lattices [@vortex1; @vortex2], in sliding frictional surfaces [@friction] or in electronic tunneling [@tunneling1; @tunneling2]. The impossibility of visualizing condensed matter quasiparticles has triggered the use of alternative model systems to unveil the basic mechanism leading to such transport behaviour. Colloidal particles, with their accessible length scales and dynamics, represent a versatile model system with tuneable interactions [@Model1; @Model2; @Model3]. In particular, hard-sphere interactions creating excluded volume [@excluded1; @excluded2; @excluded3] are relevant for the rheological properties of colloidal dispersions [@rheology1; @rheology2] and they dominate the dynamics near the colloidal glass transition [@glass1; @glass2].
Here we show that the excluded volume between paramagnetic colloids driven above a magnetic bubble lattice causes a series of discrete plateaus in the particle current, separated by steps where some particles abruptly lose or gain mobility. Integer plateaus result from particles moving synchronously with the driving field, while fractional plateaus arise from nonlinear period doubling with particles moving only every second cycle. In contrast to many ratchet mechanisms [@ratchet1; @ratchet2; @ratchet3] which operate under negligible particle-particle interactions, we introduce a colloidal ratchet where quantized transport phenomena arise due to excluded volume between the particles.\ ![(color online)(a) Schematic of the FGF film ($a=8.6 \, \mu m$) covered with a polymer film ($h=1 \mu m$), with one Wigner-Seitz unit cell shaded. Potential particle paths between the bubbles pass the type up (blue) and type down (red) interstitial regions. (b) Polarization microscopy image of the FGF loaded with one particle per unit cell. Crystal directions are indicated in white; type up and type down interstitial wells are marked in blue and in red, and nucleate at the beginning of the arrows and annihilate at the end. The central region has been magnified for clarity. (c) Magnetic potential energy of a paramagnetic particle during different phases of the applied field (inset). Energy maxima are colored in yellow, minima in red.[]{data-label="figure1"}](Figure1.jpg){width="\columnwidth"} We use monodisperse polystyrene paramagnetic colloids dispersed in deionized water and moving on top of a ferrite garnet film (FGF) characterized by a triangular lattice of magnetic bubble domains [@Bubble], Fig.1(a). The FGF film exerts a magnetic attraction on the paramagnetic colloids and confines their motion in two dimensions.
A direct ratchet current is obtained by modulating the heterogeneous stray field of the FGF with an external field elliptically polarized in the $(x,z)$ plane, $\bm{H}_{ext} \equiv (H_x\sin{(\omega t)},0,H_z\cos{(\omega t)})$, more details are given in [@EPAPS]. Here $H_z$ indicates the component perpendicular to the FGF, $H_x$ the parallel, and $\omega$ the angular frequency. Fig.1(c) illustrates how the potential energy landscape of a paramagnetic colloid is altered by the applied field during one field cycle. ![(color online)(a) Time sequence of polarization microscopy images showing a row of bubbles subsequently filled with paramagnetic particles of diameter $d=1\mu m$ **(a)**, and $d=2.8\mu m$ (b). Particle motion occurs from left to right. In (a) the magnetic bubble in the shaded column requires a filling of $\tilde\rho>8$ particles to emit particles towards the next bubble. In (b) the particle transport occurs when the bubbles are filled with $\tilde\rho>1$ particles. (c) Normalized current $\tilde{j}$ for particles of size $d=1 \mu m$, blue symbols, and of $d=2.8 \mu m$, black symbols, as a function of the overloading, $\tilde\rho-N_c$ (d) Partially filled integer plateaus in the normalized current for $d=2.8 \mu m$ particles as a function of the amplitude of $H_z$. (Movie [*ad fig 2.AVI*]{}).[]{data-label="figure2"}](Figure2.jpg){width="\columnwidth"} The external field modulates the potential and increases or decreases the potential wells of the magnetic bubbles when it is parallel or antiparallel to the bubble magnetization. Before the field is oriented completely antiparallel to the bubble, two additional energy wells per unit cell nucleate in the marked blue and red interstitial regions in Fig.1(b). Due to the parallel component of the field, $H_x$, the nucleation sites are displaced from the centre of the three surrounding bubbles and are located in the proximity of the particle “emitter” bubble. 
The preference for one of the three bubbles vanishes for completely antiparallel field orientations $H_x=0$, and reverses when the parallel field orients mirror-symmetrically, $H_{x,annihil}=-H_{x,nucl}$, with respect to the orientation during the nucleation. ![image](Figure3.jpg){width="\textwidth"} The interstitial wells hence nucleate near a bubble which becomes an emitter of particles, and they annihilate near the collector bubble, i.e. the neighboring bubble in the transport direction. The chirality of the external magnetic field modulation ensures unidirectional motion from the emitter bubble toward the collector bubble. A directed ratchet current is induced when a particle moves from an emitter bubble into a collector bubble via the interstitial region.\ The collective effect of the excluded volume on the transport of small ($1 \mu m$) particles is shown in Fig.2(a) (movie [*ad fig 2.AVI*]{}). Individual particles reside in the bubble wells, while bubbles filled with several particles generate a composite cluster with radius $r_{cluster}$. The green shaded column in Fig.2(a) separates the unloaded lattice, on the right, from a series of loaded bubbles forming composite clusters, on the left. The particles are transported from the left bubbles via the interstitial regions to the marked unfilled bubble, which they reach at $t=0.1s$ and where they start to grow a new composite cluster. For small values of $r_{cluster}$, this composite cluster is unable to emit particles into the interstitial wells to its right, and no net current is observed. For $t>1.35 s$, the cluster size exceeds a critical value, $r_{cluster}>r_c \sim 2 \mu m$; the particles located within $r_{c}$ remain immobile, while excess particles are emitted to the right into an interstitial and transported to the next collector bubble.
In each cycle, the excess particles flow via the interstitials, filling more bubbles until reaching a stationary state that corresponds to a particle loading of $r_{cluster}\approx r_c$ for all bubbles.\ We measure the current $\mathbf{j(\rho)}=\rho\mathbf{v}$ of particles with velocity $\mathbf{v}$ as a function of the particle number area density $\rho$. Figure 2(c) shows the dimensionless current, $\tilde j=2\pi Aj/\omega a_1$, versus the dimensionless density $\tilde \rho= \rho A$, where $A=a_1^2 \sin(\pi/3)$ is the area of the unit cell, $a_1$ is the length of the unit vector in the $01$ direction, and $\omega$ is the angular frequency of the external field. We observe no current for an average loading of $\tilde\rho\le 8$ particles per unit cell. Only if $\tilde \rho > N_c=8$ is a net current observed, which increases linearly with the density as $\tilde j =\tilde \rho -N_c$. Eight particles in each unit cell remain immobile while only the excess particles contribute to the current.\ While small particles require the formation of highly populated clusters to produce net flow, we can simplify the complexity of the transport by increasing the particle size. In Fig.2(b) we show the filling of a magnetic lattice using large ($2.8 \mu m$) particles, which reduces the size of the critical cluster to $N_c=0-4$ particles per unit cell. As a consequence, the filling process is much faster than before, and the colloidal front propagates by one composite cluster every second cycle rather than every fourth cycle. The net current produced by the large particles follows the same law $\tilde j =\tilde \rho -N_c$ as the small particles, but with $N_c=1$, i.e. exactly one particle per unit cell remains immobile while excess particles are transported.\ In Fig. 2(d) we explore the dependence of $\tilde{j}$ on the amplitude of the normal component of the applied field $H_z$, for a fixed density $\tilde{\rho}=1.36$. No current is observed for amplitudes $H_z < H^c_1= 780$ $A/m$.
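The counting law $\tilde j =\tilde \rho -N_c$ quoted above can be sketched numerically; this is only a toy illustration of the stated relation, with function names and example values of our own choosing:

```python
import math

def unit_cell_area(a1):
    """Area of the triangular-lattice unit cell, A = a1^2 * sin(pi/3)."""
    return a1**2 * math.sin(math.pi / 3)

def normalized_current(rho_tilde, n_c):
    """Dimensionless current j~ = rho~ - N_c for rho~ > N_c, else zero:
    N_c particles per unit cell stay immobile, only the excess moves."""
    return max(0.0, rho_tilde - n_c)

# Large particles (N_c = 1) at the loading rho~ = 1.36 used in the text
# give the plateau value j~ = 0.36; at or below N_c there is no current.
assert math.isclose(normalized_current(1.36, 1), 0.36)
assert normalized_current(8.0, 8) == 0.0
```

The same function reproduces the small-particle case by setting `n_c = 8`, where only the excess above eight particles per unit cell contributes to the current.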
Beyond the threshold $H^c_1$, only one particle per doublet is mobilized, while individual particles do not move, and the flux reaches the constant plateau $\tilde{j}=0.36$. Above a second threshold field, $H^c_2=1200$ $A/m$, all particles are mobilized, and they flow at a constant speed above the lattice. The fields $H^c_1$ and $H^c_2$ are therefore the mobility edges where different subgroups of particles start to move. The plateaus in Fig.2(d) correspond to $N_c=2,1,0$ fully occupied immobile sites. The rest of the particles are mobile and consist of fully and partially occupied sites contributing $\tilde j =\tilde \rho -N_c$ to the macroscopic current. There are six small interstitial regions around each magnetic bubble which can be separated into two types, Fig.1(b,c). Type up and type down interstitials are marked in blue and in red, respectively, and are characterized by a neighboring bubble in the $-21$ and $2-1$ directions. In Fig.2(a,b), there are equal numbers of particles flowing via the type up and type down interstitials, and the macroscopic current is unpolarized.\ A rich variety of transport modes can be obtained by tuning the particle density and the magnetic field parameters. In Fig.3(a) (movie [*ad fig 3.AVI*]{}) we show modes which arise for magnetic bubbles filled with doublets ($d$ modes) and triplets ($t$ modes). For field amplitudes $H_z<400 \, A/m$, no particles are transported, corresponding to the trivial $d0$ and $t0$ modes. The first modes giving rise to a net particle current are the $d1$ and $t1$ modes. The $d1$ mode occurs for fields in the range $700 \, A/m <H_z<1100 \, A/m$ and is characterized by the break-up of a doublet into a stored particle in the bubble and an emitted particle in the interstitial. The emitted particle is transported via type up, marked in blue, or type down, marked in red, interstitials, and then collected by a magnetic bubble occupied by a single particle.
During the next cycle, particles which were immobile during the first field cycle become mobile, and vice versa. The $t1$ mode starts with a particle triplet oriented with two corners toward the interstitials. Before the interstitial wells are nucleated, the triplets undergo a rotation by $\pi/6$ followed by the emission of a single particle into one of the interstitial regions and the collection by a doublet with the final formation of a triplet having the initial orientation. In contrast, the $t2$ mode starts with a particle triplet oriented with one corner between the interstitials followed by a rotation by $\pi/6$ and an emission of two particles which simultaneously follow both types of interstitials towards the collector bubble. The $d2$ and $t3$ modes are fully mobile transport modes where all particles hop from the emitter to the collector bubble and are found for magnetic fields higher than $H_z>1200 \, A/m$, that during the antiparallel orientation also annihilate the bubble wells. The triplet rotation takes more time than the actual particle hopping. Thus, increasing the driving frequency allows us to freeze the rotational motion, giving rise to two period doubling modes, $t12$ and $t23$. Since the hopping inverts the orientation of the triplets, two cycles are required to restore the original triplet orientation. For both modes, the number of particles hopping during the first and the second cycles are different, and this gives rise to an average fractional current of $3/2$ for the $t12$ mode and $5/2$ for the $t23$ mode.\ In Fig.3(b) we show measurements of $\tilde{j}$ as a function of the field amplitude $H_z$ for a particle density ($\tilde{\rho}=2.75$) for which the dominant colloidal clusters in the bubbles are doublets and triplets. 
An increase of $H_z$ leads to a superposition of doublet and triplet modes resulting in partially filled integer plateaus ($\tilde{\rho}-N$), fully occupied integer plateaus ($N$), and fractional partially filled plateaus ($\tilde{\rho}-3/2$). Fractional plateaus occur due to period-doubled modes where an odd number of particles is transported during the first field cycle and an even number during the second. Further experiments produced tetramers and highly ordered clusters with even more complex transport modes and dynamics.\ Our magnetic ratchet allows us to create polarized colloidal currents, using an additional field oriented along the $-21$ ($y-$direction). We demonstrate this for the $d1$ mode in Fig.4(a,b), and for the $t1$ mode in Fig.4(c,d). When using the rotating field in the $(x,z)$ plane, both modes are characterized by an unpolarized current, with the emission of one particle which randomly hops towards the collector bubble via either the type up or type down interstitials. The application of an alternating field along the $y-$direction, $H_y=H_0 \sin{(\omega_y t)}$, phase-locked to the rotating field with $\omega_y=\omega/2=9.4 \, s^{-1}$, polarizes the particle emission in the $d1$ mode (Fig.4(b), movie [*ad fig 4.AVI*]{}). With the $y$-field, the $d1$ mode carries a macroscopic alternating polarized current, since the particles are periodically displaced in the type up and type down interstitials, Fig.4(a). ![(color online)(a) The polarization $P$ measured as a function of time without (squares) and with (circles) an additional field along the $y-$direction for the $d1$ mode. (b) Four microscopy images corresponding to the first cycle doublet storage ($t=1.25 s$), hopping in the type down interstitial ($t=1.4 s$), second cycle doublet storage ($t=1.6 s$), and hopping in the type up interstitial ($t=1.75 s$). (c) The polarization per unit cell measured as a function of time with the field along the $y-$direction for the $t1$ mode.
(d) Four microscopy images corresponding to the first cycle hopping $t=0.5 s$, triplet storage $t=0.8 s$, and a similar second cycle. Hopping particles are marked in blue or red when moving into an interstitial of type up or down, while particles stored in bubbles are marked in green, Movie [*ad fig 4.AVI*]{}.[]{data-label="figure4"}](Figure4.jpg){width="\columnwidth"} We created fully polarized direct currents by increasing the particle density and accessing odd triplet modes. We used a static field, $H_y=600 \, A/m$, $\omega_y=0$ to displace the triplet such that the corner of the triplet before emission, lay close to the type up interstitial. The resulting macroscopic current of the t1 mode displays an alternating polarization during each half cycle, Fig. 4(c). The alternating and direct polarization currents are collective effects only achieved with even and odd modes, resp. The same principle is applied to the $t12$ mode (not shown here): A $y$-field induces polarized hopping of single particles via type up interstitials during the first field cycle, while unpolarized hopping occurs during the second cycle when two particles are emitted to both the type up and down interstitials. This realizes a macroscopic fractional current since the net current flowing in the type up interstitial transports one particle per cycle while the current in the type down transports only half a particle per cycle.\ In summary, our experiments show that excluded volume between mesoscopic particles gives rise to ratchet transport modes where $n$ particle steps occur during a period consisting of $m$ cycles of the field, contributing with $\tilde{j} = n/m$ to the particle current. For integer particle filling of the bubbles, only a single mode is selected. If the filling is incommensurate with the bubble lattice, a superposition of modes is observed due to the inhomogeneous distribution of composite clusters across the bubbles. 
The total current is the sum of the currents associated with each transport mode and remains at simple integer or fractional plateaus. The mobility or immobility of the partial particle layer determines whether one has a partially filled or fully filled plateau in the current.\ We thank Tom H. Johansen for the FGF film and Matthias Schmidt for scientific discussions. P.T. acknowledges support from the ERC starting grant “DynaMO” (335040) and from the programs RYC-2011-07605, and FIS2011-15948-E. S. E. Brown, G. Mozurkewich, and G. Grüner, Phys. Rev. Lett. [**52**]{}, 2277 (1984). A. A. Middleton, O. Biham, P. B. Littlewood, and P. Sibani, Phys. Rev. Lett. [**68**]{}, 1586 (1992). C. Reichhardt, F. Nori, Phys. Rev. Lett. [**82**]{}, 414 (1999). A. B. Kolton, D. Domínguez, N. Grønbech-Jensen, Phys. Rev. Lett., [**86**]{}, 4112 (2001). A. Vanossi, N. Manini, F. Caruso, G. E. Santoro, E. Tosatti, Phys. Rev. Lett. [**99**]{}, 206101 (2007). T. A. Fulton, and G. J. Dolan, Phys. Rev. Lett. [**59**]{}, 109 (1987). D. E. Grupp, T. Zhang, G. J. Dolan, N. S. Wingreen, Phys. Rev. Lett. [**87**]{}, 186805 (2001). A. Yethiraj, A. van Blaaderen, Nature [**421**]{}, 513 (2003). D. Babic, C. Schmitt, C. Bechinger Chaos [**15**]{}, 026114 (2005). H. Löwen, J. Phys.: Condensed Matter [**21**]{}, 474203 (2009). R. Bhat, S N. Timasheff, Protein Sci. [**1**]{}, 1133 (1992). P. G. Bolhuis, A. A. Louis, J. P. Hansen, Phys. Rev. Lett. [**89**]{}, 128302 (2002). Y. Han [et al.]{} Nature [**456**]{}, 898 (2008). D. T. N. Chen [*et al*]{}, Annu. Rev. Condens. Matt. Phys. [**1**]{}, 301 (2010). J. Mattsson [*et al.*]{} Nature [**462**]{}, 83 (2009). E. R. Weeks [*et al.*]{} Science [**287**]{}, 627 (2000). A. Stradner [*et al.*]{} Nature [**432**]{}, 492 (2004). F. Jülicher, A. Ajdari, and J. Prost, Rev. Mod. Phys. [**69**]{}, 1269 (1997). P. Reimann, Phys. Rep. [**361**]{}, 57 (2002). P. Hänggi, F. Marchesoni, Rev. Mod. Phys. [**81**]{}, 387 (2009). P. Tierno, T. H. Johansen, T. M. 
Fischer, Phys. Rev. Lett. [**99**]{}, 038303 (2007). See EPAPS Document No. XXXXXx for more experimental and theoretical details, and three videoclips illustrating the particle dynamics.
--- abstract: 'We study a possibility of detection of circumstellar absorption lines of NaI D$_{1,2}$ and CaII H,K in spectra of type IIP supernovae at the photospheric epoch. The modelling shows that the circumstellar lines of NaI doublet will not be seen in type IIP supernovae for moderate wind density, e.g., characteristic of SN 1999em, whereas rather pronounced CaII lines with P Cygni profile should be detectable. A similar model is used to describe NaI and CaII circumstellar lines seen in SN 1998S, type IIL with a dense wind. We show that line intensities in this supernova are reproduced, if one assumes an ultraviolet excess, which is caused primarily by the comptonization of supernova radiation in the shock wave.' author: - 'N. N. Chugai' - 'V. P. Utrobin' title: CIRCUMSTELLAR NaI AND CaII LINES IN TYPE IIP SUPERNOVAE AND SN 1998S --- Introduction ============ Type IIP supernovae (SN IIP) presumably originate from stars with initial masses in the range of $9-25~M_{\odot}$ (Heger et al. 2003). Prior to the explosion a pre-SN IIP is usually a red supergiant (RSG) (Grasberg et al. 1971) that presumably loses matter in a form of a slow dense wind. It would be reasonable to assume that the mass loss rate should correspond to RSG with the initial mass characteristic of SN IIP, i.e., $\sim(1-10)\times10^{-6}~M_{\odot}$ yr$^{-1}$ (Chevalier et al. 2006). However, it is not yet clear that this is always the case. There is an opinion that massive RSG ($10-20~M_{\odot}$) during the last $10^4$ yr before the gravitational collapse of iron core could lose matter in the form of superwind with the rate of $\sim10^{-4}~M_{\odot}$ yr$^{-1}$ owing to pulsation instability (Heger et al. 1997). On the other hand, for type IIP SN 1999em with the known mass of pre-SN of $\approx 20~M_{\odot}$ the mass loss rate is $\dot{M}\sim10^{-6}~M_{\odot}$ yr$^{-1}$ (Chugai et al. 
2007), which is lower not only than the pulsational mass loss rate but also than the value of $\sim8\times10^{-6}~M_{\odot}$ yr$^{-1}$ predicted by the phenomenological relation of Nieuwenhuijsen and de Jager (1990) for an RSG with the same main-sequence mass. This disparity emphasises the significant uncertainty in the problem of mass loss by pre-SN IIP. To build a clearer picture one needs a sufficiently large sample of SN IIP with estimated densities of the circumstellar (CS) gas. At present the mass loss rate of pre-SN IIP is estimated from the radio and X-ray emission originating from the shock interaction between the supernova ejecta and the wind (Chevalier 1982; Pooley et al. 2002), with the more reliable estimates probably being those based on X-ray data. For SN 1999em, SN 1999gi, SN 2004dj, and SN 2004et the mass loss rates recovered from X-ray data are confined to the range $(1-2.5)\times10^{-6}~M_{\odot}$ yr$^{-1}$ (Chevalier et al. 2006; Rho et al. 2007), whereas for SN 2006bp a value of $\sim10^{-5}~M_{\odot}$ yr$^{-1}$ is obtained (Immler et al. 2007). Recently another method, based on the high-velocity components of the H$\alpha$ and HeI 10830 Å lines, has been proposed; in the case of SN 1999em it yields an estimate of $\approx10^{-6}~M_{\odot}$ yr$^{-1}$ (Chugai et al. 2007). Here we investigate a more direct diagnostic tool for estimating the wind density, based on the observation of CS absorption lines of NaI D$_{1,2}$ and CaII H,K against the luminous supernova photosphere. Up to now these lines have been confidently detected only in the type IIL SN 1998S (Bowen et al. 2000). A search for these lines in SN IIP has not yet been performed, although the search for CS lines in SN Ia is at present being actively carried out (Patat et al. 2007a; Patat et al. 2007b). In the case of SN 1998S the wind density inferred from the high X-ray and radio luminosity is large and corresponds to a mass loss rate of $\sim2\times10^{-4}~M_{\odot}$ yr$^{-1}$ (Pooley et al. 2002).
For this reason it is not yet clear whether the NaI and CaII lines can be observed in SN IIP, in which the mass loss rate is significantly lower than in SN 1998S. In the present paper we study the formation of NaI and CaII lines in the RSG wind after the SN IIP explosion and the use of these lines for the diagnostics of the wind density. We start with the description of the model (section 2), compute the ionization of NaI and CaII in the wind before and after the explosion, and then present model profiles of the CS lines of NaI 5890 Å and CaII 3934 Å for typical wind densities (section 3). We then apply our model to the explanation of the circumstellar lines of NaI 5890 Å and CaII 3934 Å in SN 1998S and discuss the conditions under which these lines have the observed intensities (section 4). In conclusion we consider the possibility of detecting the CS lines and discuss factors that might lead to deviations of the line intensities from the model results. Model ===== We consider below a spherically symmetric stationary wind with the density $\rho=w/(4\pi r^2)$ and velocity $u$, in which the SN IIP explodes. It is convenient to deal with the dimensionless parameter $\omega$ defined by the relation $w=6.3\times10^{13}\omega$ g cm$^{-1}$; the value $\omega=1$ corresponds to a mass loss rate of $10^{-6}(u/10\,\mbox{km s}^{-1})~M_{\odot}$ yr$^{-1}$. Before the supernova explosion the wind hydrogen is neutral, whereas Na and Ca may be singly ionized by the RSG radiation. The major ionizing factor is the chromospheric radiation of the RSG. A general idea of the intensity of the chromospheric radiation of the pre-SN is provided by the galactic RSG $\alpha$ Ori (Betelgeuse). According to the data obtained with [*IUE*]{} (Rinehart et al. 2000), the fluxes in the 1250-1750 Å and 1900-3200 Å bands are $(4-6)\times10^{-11}$ and $(2-3)\times10^{-9}$ erg cm$^{-2}$ s$^{-1}$, respectively. For the power-law approximation $f_{\lambda}\sim \lambda^q$ these fluxes are reproduced with $q=5$.
The absolute monochromatic luminosity is determined by adopting the standard distance of 131 pc to $\alpha$ Ori. To calculate the ionization of metals in the pre-SN wind, we numerically solve a time-dependent ionization balance taking into account the wind expansion with the velocity $u=15$ km s$^{-1}$, assuming the same ultraviolet luminosity as for Betelgeuse. The metals Mg, Si, and Fe, which dominate the electron number density, are treated as a single element with a relative abundance of $10^{-4}$ with respect to hydrogen and an ionization potential of 7.9 eV. The time-dependent ionization is solved over a time interval of $10^5$ yr. The wind temperature is assumed to be equal to the local radiation temperature $T=T_{\rm s}W^{0.25}$, where $W$ is the dilution factor and $T_{\rm s}=3900$ K is the effective temperature of the RSG, which corresponds to a luminosity of $10^5~L_{\odot}$ and a radius of $700~R_{\odot}$. The calculated ionization fractions of NaI and CaII in the pre-SN wind are then used as initial conditions for the calculations of the time-dependent ionization of these ions after the supernova explosion (cf. Chugai 2008). The high initial supernova luminosity, with a temperature $\geq10^5$ K, results in strong ionization of the hydrogen in the wind, which then does not have enough time to recombine during the considered period of 50 days. We therefore adopt complete ionization of the wind hydrogen. The metal ionization is calculated with a fixed wind electron temperature of $3\times10^4$ K, the logarithmic average of the extreme values $10^4$ K and $10^5$ K (Lundqvist and Fransson 1988). The supernova bolometric luminosity and the velocity at the photosphere are adopted to be equal to those of SN 1999em (Utrobin 2007). To describe the ultraviolet spectrum, we introduce a reduction factor for the black-body radiation; this factor depends on wavelength and time according to the evolution of the ultraviolet spectrum of SN 1987A (Pun et al. 1995).
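The structure of a time-dependent ionization balance of the kind described above can be illustrated with a toy single-stage model. The sketch below is ours, not the paper's code; the rates `gamma` (ionization) and `alpha_ne` (recombination times electron density) are hypothetical placeholders, not the values used in the calculation:

```python
def evolve_fraction(y0, gamma, alpha_ne, t_end, dt=1e-3):
    """Forward-Euler integration of a single-stage ionization balance
    dy/dt = -gamma*y + alpha_ne*(1 - y), where y is the fraction in the
    lower ionization stage, gamma the ionization rate and alpha_ne the
    recombination rate (both per unit time)."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * (-gamma * y + alpha_ne * (1.0 - y))
        t += dt
    return y
```

For long times the fraction relaxes to the steady-state value $y^* = \alpha n_e/(\Gamma + \alpha n_e)$ regardless of the initial condition; the full calculation in the text differs in that the rates themselves evolve with the expanding wind and the supernova light curve.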
To compute the line profiles, we consider the wind outside the shock wave, which coincides with the contact surface at the boundary between the supernova ejecta and the wind in the thin-shell model (Chevalier 1982). The evolution of the radius of this shell is calculated numerically assuming an ejecta mass of $18~M_{\odot}$ and a kinetic energy of $1.3\times10^{51}$ erg, close to the parameters of SN 1999em (Utrobin 2007). The density distribution of the supernova envelope is set as a combination of an internal plateau for $v<v_0$, an external power-law drop $\rho\propto v^{-9}$, and an outer cutoff at $v=v_{\rm b}$. This cutoff is related to the shock breakout and the transition from the adiabatic to the radiative regime (Grasberg et al. 1971). The adopted boundary velocity is $v_{\rm b}=15000$ km s$^{-1}$, in accordance with the radial velocities in the blue wing of the H$\alpha$ absorption in early spectra of normal SN IIP, e.g., SN 1999em (Leonard et al. 2002a) and SN 1999gi (Leonard et al. 2002b). Note that the adopted boundary velocity is qualitatively consistent with the hydrodynamic modelling of SN 1999em, which gives the value $v_{\rm b}=13400$ km s$^{-1}$ (Utrobin 2007). Results ======= According to our model calculations the metals in the pre-SN wind turn out to be strongly ionized within the considered zone $r<10^{18}$ cm. The fractions ($y$) of NaI/Na and CaII/Ca as a function of radius are shown in Fig. 1a for the wind density parameters $\omega=1$ and 10. In the internal wind zone $r<10^{16}$ cm one gets $y(\mbox{Na\,I})\sim 10^{-3}-5\times10^{-2}$ and $y(\mbox{Ca\,II})\sim 0.1-1$; at larger distances the value of $y(\mbox{Ca\,II})$ is lower by an order of magnitude. The supernova explosion results in a significant enhancement of the metal ionization. The distribution of the relative concentrations of NaI/Na and CaII/Ca in the wind on day 50 after the explosion is presented in Fig. 1b for the same density parameter values $\omega=1$ and 10.
The NaI ionization is strong everywhere, while CaII is strongly ionized only in the outer zone, where recombination is suppressed because of the low density. At first glance, setting the wind conditions at a single moment might seem inconsistent, because it does not take light-travel effects into account. In fact, however, the photon absorption is determined by the age of the supernova $t_1$ at which the photons were emitted by the photosphere. Indeed, at the moment $t_1+r/c$, when the photon packet reaches the point $r$ where it can be absorbed, the state of the wind is determined by the radiation emitted in the interval $0<t<t_1$, independent of the value of $r$. Moreover, for an observer at the distance $D$ the moment of detection of this photon packet, $t_0=t_1+r/c+(D-r)/c-D/c=t_1$, coincides with the supernova age $t_1$. To summarize, when only absorption is considered, the light-travel effects do not enter explicitly. This statement holds to an accuracy of $u/c\ll1$, where $u$ is the wind velocity. For photons scattered at the radius $r$ by an angle $\theta$ towards the observer, the detection moment $t_0=t_1+r(1-\cos\,\theta)/c>t_1$ exceeds the supernova age, i.e., the light-travel effects should be taken into account in this case (see below). The wind optical depth $\tau$ in the NaI 5890 Å and CaII 3934 Å lines outside the shock wave on days 15 and 50 is given in Fig. 2 for the same wind density and temperature as above and a turbulent velocity of 2 km s$^{-1}$. Interestingly, the optical depth of CaII 3934 Å comes primarily from the inner region $r<6\times10^{15}$ cm, while that of the NaI 5890 Å line comes from the region around $r\sim10^{16}$ cm. In both lines $\tau$ grows with time. Note that the optical depth of the NaI 5890 Å line for the wind density $\omega=1$, characteristic of SN 1999em, is small even on day 50 ($\tau\sim0.05$), whereas the optical depth in the CaII 3934 Å line is large not only on day 50 but on day 15 as well.
Only for a very dense wind, $\omega\approx10$, is the optical depth in the NaI 5890 Å line large ($\tau>1$) at the late photospheric phase ($t\sim50$ d). The obtained distributions of the number density of CaII and NaI in the wind permit us to compute the profiles of the CS lines via direct integration of the equation of radiative transfer. The source function is determined in the escape probability approximation assuming complete frequency redistribution $$S=\frac{\beta WI_{\rm c}}{\beta+(1-\beta)\epsilon}\,,$$ where $W$ is the dilution factor, $I_{\rm c}$ is the photosphere brightness, $\beta$ is the Sobolev escape probability, and $\epsilon$ is the photon destruction probability. In the resonance NaI line the scattering is conservative ($\epsilon=0$), while in the CaII 3934 Å line we take into account photon destruction due to fluorescence in the infrared triplet lines ($\epsilon=0.068$). Light-travel effects in the profile computations are taken into account approximately, by discarding the region for which the light delay is greater than the supernova age. The occultation by the photosphere and the resonance scattering by NaI and CaII in the supernova atmosphere are taken into account. To this end we assume that the inner scattering zone of the supernova envelope is bounded by 0.8 of the maximal velocity. The wind velocity is set to 15 km s$^{-1}$, the value found for Betelgeuse (Huggins et al. 1994). The turbulent velocity is set to 2 km s$^{-1}$. This value is based on the turbulent velocity in the wind of Betelgeuse, $v_{\rm t}\approx 1$ km s$^{-1}$, and on the estimate of the velocity dispersion due to the radiative acceleration after the supernova explosion $$u=\frac{k_{\rm T}E_{\rm r}}{4\pi r^2c}=0.9E_{\rm r,49}r_{16}^{-2}\;\; \mbox{km s$^{-1}$}\,,$$ where $k_{\rm T}=0.34$ cm$^2$ g$^{-1}$ is the Thomson opacity, $E_{\rm r}$ is the radiated energy, and $r$ is the radius; the numerical indices indicate units of $10^{49}$ erg and $10^{16}$ cm, respectively.
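The coefficient 0.9 in the last relation follows directly from the stated constants; a minimal numeric sketch (ours, using the standard values of $k_{\rm T}$ and $c$ quoted in the text):

```python
import math

K_T = 0.34       # Thomson opacity, cm^2 g^-1
C = 2.998e10     # speed of light, cm s^-1

def u_rad(e_r_49, r_16):
    """Velocity acquired from radiative acceleration,
    u = k_T * E_r / (4 * pi * r^2 * c), returned in km s^-1.
    e_r_49: radiated energy in units of 1e49 erg; r_16: radius in 1e16 cm."""
    return K_T * (e_r_49 * 1e49) / (4.0 * math.pi * (r_16 * 1e16) ** 2 * C) / 1e5
```

Evaluating `u_rad(1.0, 1.0)` recovers the coefficient of 0.9 km s$^{-1}$.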
This relation shows that in the region $r\sim (0.4-1)\times10^{16}$ cm, which contributes most of the optical depth of the CaII line (Fig. 2), one obtains for $E_{\rm r,49}\approx0.5$ at about day 40 a velocity dispersion of $\approx1-2$ km s$^{-1}$, so that the total dispersion in the wind is about 2 km s$^{-1}$. The Doppler width is calculated in the standard way using the turbulent and thermal velocities. The calculated line profiles of NaI 5890 Å and CaII 3934 Å on days 15 and 50 for $\omega=1$ and 10 are plotted in Fig. 3. The profiles are convolved with a Gaussian instrumental profile of FWHM=10 km s$^{-1}$ to mimic a typical spectral resolution. The calculated line profiles have a strong emission component, consistent with their formation in the inner wind zone, in which light-travel effects are not pronounced. Note that the emission component may serve as a signature that the line forms in the wind, not in the interstellar medium. It should be emphasized that the CaII line is strong on days 15 and 50 even for a moderate density ($\omega=1$), whereas the NaI 5890 Å line becomes noticeable only for a rather dense wind, $\omega\approx10$, and at the late stage $t\sim 50$ d. The equivalent width of the CaII 3934 Å absorption grows with $\omega$ approximately as $$W_{\lambda}\approx0.13(1+0.385\lg\,\omega)\, \mbox{\AA}.$$ This relation can be used for a rough estimate of the wind density in SN IIP using the CS CaII 3934 Å absorption around day 50. Type IIL supernova 1998S ======================== It is tempting to apply our model to the interpretation of the CS lines of NaI and CaII detected in spectra of SN 1998S. This supernova belongs to the bright variety of SN IIL; in fact it is a close analogue of SN 1979C (Liu et al. 2000). According to X-ray data the wind around SN 1998S is characterized by a mass loss rate of $(1-2)\times10^{-4}~M_{\odot}$ yr$^{-1}$, assuming a wind velocity of 10 km s$^{-1}$ (Pooley et al. 2002). The corresponding wind density parameter is $\omega\sim200$.
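Under the definitions of section 2 ($w=\dot{M}/u$ with $w=6.3\times10^{13}\omega$ g cm$^{-1}$) and the equivalent-width relation above, the quoted density parameters can be checked numerically. A small sketch (ours; the solar mass and year are standard values):

```python
import math

M_SUN = 1.989e33   # solar mass, g
YEAR = 3.156e7     # year, s

def omega_param(mdot_msun_yr, u_km_s):
    """Wind density parameter omega from the mass loss rate (M_sun/yr)
    and wind velocity (km/s), via w = Mdot/u and w = 6.3e13*omega g/cm."""
    w = mdot_msun_yr * M_SUN / YEAR / (u_km_s * 1e5)   # g cm^-1
    return w / 6.3e13

def ew_caii(omega):
    """Approximate equivalent width (in Angstrom) of the CaII 3934 A
    absorption around day 50, from the relation in the text."""
    return 0.13 * (1.0 + 0.385 * math.log10(omega))
```

`omega_param(1e-6, 10.0)` reproduces the calibration $\omega=1$, while `omega_param(2e-4, 10.0)` gives $\omega\approx200$ for SN 1998S; extrapolating the equivalent-width relation to $\omega=200$ yields $\approx0.25$ Å, consistent with the expectation of $>0.2$ Å quoted in the next section.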
The extrapolation of the results obtained above suggests that strong circumstellar CaII and NaI lines should be present in the spectrum of this supernova, with an equivalent width of CaII 3934 Å of $>0.2$ Å. Indeed, high-resolution spectra of SN 1998S show CS lines of the NaI D$_{1,2}$ doublet with intensity growing between days 20 and 39 after the outburst (Bowen et al. 2000). In the 3934 Å band on day 39 the spectrum shows a similar CS component of CaII 3934 Å. Contrary to expectation, the circumstellar CaII 3934 Å line has a moderate intensity, with an equivalent width of 0.1 Å and a relative depth of 0.5. To reproduce the CS lines in SN 1998S, we use the model applied above for SN IIP with the following modifications. The bolometric light curve and the effective temperature evolution correspond to SN 1998S (Fassia et al. 2000), while the wind density is $\omega=200$. The adopted wind velocity is 40 km s$^{-1}$ (Fassia et al. 2001); the turbulent velocity is assumed to be 5 km s$^{-1}$, higher than for SN IIP, because the radiated energy of SN 1998S is 2-3 times larger than that of a SN IIP on day 40. The envelope mass of SN 1998S can be estimated from the following considerations. The mass of the mixed metal core in the velocity range $v\leq 3650$ km s$^{-1}$ is about $4~M_{\odot}$ (Fassia et al. 2001). The major part of the envelope mass is confined within the velocity of 5000 km s$^{-1}$ (Fransson et al. 2005). Assuming a homogeneous density distribution we find a total mass of $M=10~M_{\odot}$. Since the density should fall towards higher velocities, the mass should be $M<10~M_{\odot}$; we adopt $M=8~M_{\odot}$. The kinetic energy is taken to be the same as in SN IIP, i.e., $E=1.3\times10^{51}$ erg. Note that the uncertainties in mass and energy only weakly affect the final results. Preliminary modelling shows that for a black-body continuum the CS absorption lines turn out too strong.
A natural mechanism for suppressing the line intensities is an ultraviolet excess in the SN 1998S spectrum. There are two reasons for the emergence of this excess: Compton scattering on hot electrons of the forward shock wave (Fransson 1984) and intrinsic emission of the gas in the shock wave. We therefore consider two options for the supernova spectrum: (1) a black-body spectrum and (2) a black-body continuum with an ultraviolet excess $F_{\nu}\propto \nu^{-3}$ in the region $\lambda<2000$ Å. The integrated flux of the ultraviolet excess makes up a fraction $\eta$ of the black-body flux $\sigma T^4$. The first option corresponds to $\eta=0$, the second to $\eta>0$. We find that the optimal value of the ultraviolet excess is $\eta=0.06$. The results of computations of the optical depth in the NaI 5890 Å and CaII 3934 Å lines on days 20 and 39 for $\eta=0$ and $\eta=0.06$ are presented in Fig. 4. The line intensity increases with time, and the CaII line is stronger than the NaI line, just as for SN IIP. In the case $\eta=0$ the CS lines are stronger than for $\eta=0.06$, a natural outcome of the stronger ionization in the latter case. Moreover, for $\eta=0$ the intensities of the CaII and NaI lines differ more strongly than for $\eta=0.06$, since in the latter case the ultraviolet excess ionizes CaII relatively more strongly than NaI, which tends to equalize the NaI and CaII concentrations. The observed CS NaI 5890 Å and CaII 3934 Å lines in SN 1998S have moderate intensities and differ only weakly. By these signatures the case $\eta=0.06$ should be preferred over $\eta=0$. This is illustrated by Fig. 5, which shows the calculated profiles of NaI 5890 Å and CaII 3934 Å in SN 1998S on days 20 and 39 for $\eta=0$ (Fig. 5a,b) and $\eta=0.06$ (Fig. 5c,d). The case with the ultraviolet excess describes the observed CS lines of NaI 5890 Å and CaII 3934 Å in SN 1998S (cf. Bowen et al. 2000, their Fig.
4) much better than the case $\eta=0$, which predicts unacceptably strong lines on day 39. We conclude that the moderate intensity of NaI 5890 Å and CaII 3934 Å and their resemblance are related to the presence of an ultraviolet excess, relative to black-body radiation, in the spectrum of SN 1998S. To what extent is the required ultraviolet excess consistent with the shape of the ultraviolet spectrum on day 30 taken from [*HST*]{} data (Fransson et al. 2005) and with a comptonized black-body spectrum? We computed the comptonized spectrum in the single-scattering approximation (Rephaeli & Yankovitch 1997), adopting the parameters of SN 1998S on day 30, i.e., a radiation temperature of 9440 K and an electron temperature in the forward shock of 57 keV, for a Thomson optical depth of the forward shock $\tau_{\rm T}=0.13$. The computed spectrum, in comparison with the required ultraviolet excess, is shown in Fig. 6. We also show there the observed spectrum corrected for a reddening $E(B-V)=0.26$, slightly higher than the value 0.22 adopted by Fassia et al. (2000) but still within the reported uncertainties. The figure shows reasonable agreement between the required ultraviolet excess and both the observed and the computed comptonized spectrum. Yet it should be noted that on day 39 the computed ultraviolet comptonized flux is a factor of 1.3 weaker than the required ultraviolet excess. We suggest that this deficit is covered by the thermal radiation of the shock. As mentioned already, the characteristic signature of the model CS lines is the presence of an emission component. We note that a comparison of the observed NaI D$_{1,2}$ profiles on days 20 and 39 (Bowen et al. 2000) indeed shows the presence of an emission component on day 39. This is an additional argument in favour of the CS origin of the blue component of the NaI D$_{1,2}$ blend in SN 1998S.
Conclusion ========== The primary goal of this paper was to construct a model of the formation of the NaI and CaII CS lines in the wind around SN IIP, in the hope of using them for wind diagnostics. The modelling shows that the lines of the NaI doublet will not be seen in SN IIP spectra for a moderate wind density, $\omega\sim 1$, but will be detectable at the late photospheric stage, $t\geq50$ d, in the case of a dense wind, $\omega\sim 10$. The CaII lines, however, will be seen even in the case of a rarefied wind, $\omega<1$, and they are therefore especially advantageous for the detection of the wind around SN IIP. We predict that a spectrum with a resolution of $\approx10$ km s$^{-1}$ of a normal SN IIP at the photospheric stage should show the presence of CS lines of CaII with a P Cygni profile. We emphasize that the emission component is a signature that allows CS lines to be confidently distinguished from interstellar ones. Another goal of the paper was the interpretation of the CS lines detected in the spectrum of SN 1998S, which has a very dense wind. The modelling demonstrates that for the wind density $\omega=200$ and a black-body spectrum of the supernova radiation the CS lines, especially CaII, turn out to be too strong compared with the observations. This discrepancy is resolved by assuming the existence of an ultraviolet excess with a relative flux fraction of about 6%. We show that at the early stage, $t<35$ d, this ultraviolet excess can form owing to comptonization of the supernova radiation in the forward shock wave. At later epochs the thermal radiation of the gas in the forward shock may contribute additionally, although this assumption requires confirmation. An observation of CS lines in SN IIP can be used to estimate the wind density. However, the example of SN 1998S shows that the equivalent width of the absorption depends non-monotonically on the wind density.
For $\omega<10$ we expect the equivalent width to grow with the wind density, whereas in the region $\omega\sim10^2$ the equivalent width decreases with the wind density because of the ionization of metals in the wind by ultraviolet radiation produced by comptonization of optical photons on hot electrons of the forward shock. The use of the relation between the equivalent width of CaII and $\omega$ for SN IIP is hampered by uncertainties related to the reduction factor of the ultraviolet radiation and to the parameters of the turbulent velocity and the wind temperature. For these reasons one could hardly measure the wind density to an accuracy better than a factor of two. Wind clumpiness also affects the equivalent width. The effect of clumpiness is two-fold. First, the ionization decreases with growing density; therefore, for a given average column density the optical depth in the clumpy case will be larger. Second, for a given average column density of absorbing ions the equivalent width will be smaller if the average number of clouds on the line of sight is small, i.e., of order unity or less. The expected modification of the line profile in this case is a decrease of the line depth because of incomplete covering of the photosphere by the clouds. The effect of clumpiness will be especially apparent when the profiles of the H and K lines of CaII are compared. Similar relative intensities of these lines would argue for saturation, while a shallow depth would indicate a clumpy structure of the wind with an average number of clouds on the line of sight of order unity or less. We assumed that the wind is spherically symmetric. In the case of an asymmetric wind, e.g., an equatorial wind, the emission component can become notably weaker than the absorption one if the line of sight is close to the equatorial plane, or stronger than the absorption if the line of sight is close to the polar axis.
In the case of RSGs strong deviations from spherical symmetry are unlikely, since SN IIP are single stars or components of wide binaries. For example, Betelgeuse shows only weak deviations from spherical symmetry of its CS dusty envelope (Skinner et al. 1997), which indicates a quasi-spherical wind structure. Yet we cannot rule out that in rare cases the SN IIP wind could be strongly asymmetric (SN 1987A is an example) because of a close binary configuration. The line profile of CaII 3934 Å could be a valuable indicator of the asphericity of the wind outflow. Bowen D. V., Roth K. C., Meyer D. M., Blades C. J. 2000, ApJ, [**536**]{}, 225 Chevalier R. A., Fransson C., Nymark T. 2006, ApJ, [**641**]{}, 1029 Chevalier R. A. 1982, ApJ, [**258**]{}, 790 Chugai N. N., Chevalier R. A., Utrobin V. P. 2007, ApJ, [**662**]{}, 1136 Chugai N. N. 2008, Astron. Lett., in press, (arXiv:0801.4468) Fassia A., Meikle W. P. S., Vacca W. D., et al. 2000, MNRAS, [**318**]{}, 1093 Fassia A., Meikle W. P. S., Chugai N.N., et al. 2001, MNRAS, [**325**]{}, 907 Fransson C., Challis P. M., Chevalier R.A., et al. 2005, ApJ, [**622**]{}, 991 Fransson C. 1984, A&A, [**133**]{}, 264 Grasberg E. K., Imshennik V. S., Nadyozhin D. K. 1971, Ap&SS, [**10**]{}, 28 Huggins P. J., Bachiller R., Cox P., Forveille T. 1994, ApJ, [**424**]{}, L127 Heger A., Fryer C. L., Woosley S. E., Langer N., Hartmann D. H. 2003, ApJ, [**591**]{}, 288 Heger A., Jeannin L., Langer N., Baraffe I. 1997, A&A, [**327**]{}, 224 Immler S., Brown P. J., Milne P., et al. 2007, ApJ, [**664**]{}, 435 Leonard D. C., Filippenko A. V., Gates E. L., et al. 2002a, PASP, [**114**]{}, 35 Leonard D. C., Filippenko A. V., Li W., et al. 2002b, AJ, [**124**]{}, 2490 Liu Q.-Z., Hu J.-Y., Hang H.-R., Qiu Y.-L., Zhu Z.-X., Qiao Q.-Y. 2000, A&AS, [**144**]{}, 219 Lundqvist P., Fransson C. 1988, A&A, [**192**]{}, 221 Nieuwenhuijsen H., de Jager C. 1990, A&A, [**231**]{}, 134 Pun C. S. J., Kirshner R. P., Sonneborn G., et al.
1995, ApJS, [**99**]{}, 223 Patat F., Chandra P., Chevalier R. et al. 2007a, Science, [**315**]{}, 924 Patat F., Benetti S., Mazzali P. A. et al. 2007b, A&A, [**474**]{}, 931 Pooley D., Lewin W. H. G., Fox D. W., et al. 2002, ApJ, [**572**]{}, 932 Rephaeli Y., Yankovitch D. 1997, ApJ, [**481**]{}, L55 Rinehart S. A., Hajian A. R., Houck J. R., Terzian Y. 2000, PASP, [**112**]{}, 977 Rho J., Jarrett T. H., Chugai N. N., Chevalier R. A. 2007, ApJ, [**666**]{}, 1108 Skinner C. J., Dougherty S. M., Meixner M. 1997, MNRAS, [**288**]{}, 295 Utrobin V. P. 2007, A&A, [**461**]{}, 233
--- title: | Supplementary Material\ Attended Temperature Scaling:\ A Practical Approach for Calibrating Deep Neural Networks --- $\mathcal{L_{ATS}}$ is a calibration measure ============================================ #### Lemma : Suppose $T^*=\underset{T}{\arg\min}(\mathcal{L_{ATS}})$ on the validation set $\mathcal{V}$. Then $S_{y=k}(x,T^*)$ approaches $Q(y=k|x)$ for $k=1,\ldots,K$, and consequently $S_{y}(x,T^*)$ approaches $Q(y|x)$, which means $\mathcal{L_{ATS}}$ is a calibration measure. #### Proof : The samples in the subset $M_k$ are assumed to be generated from the distribution $Q(x,y=k)$. By the Gibbs inequality (refer to Eq. (1)), minimizing the negative log-likelihood on the samples of $M_k$ leads the likelihood function to approach $Q(y=k|x)$. $M_k$ contains two groups of samples: the samples originally generated from $Q(x,y=k)$, which have the true label $y_i = k$, and the samples borrowed from other distributions as surrogate samples for $Q(x,y=k)$, whose true labels are $y_i\not=k$. These two groups of samples carry different probability weights. Therefore, to converge to $Q(y=k|x)$, the loss function should differ according to the type of the sample. $\mathcal{L_{ATS}}$ is defined as: $ \begin{aligned} \mathcal{L_{ATS}} &= \sum_{k=1}^{K} \sum_{(x_i,y_i) \in M_k}\\ & -\log \left ( \frac{S_{y=k}(x_i,T)(1-S_{y = y_i}(x_i,T))}{S_{y\not=k}(x_i,T)} \right ) \\ \text{where} & \quad T^* = \operatorname*{arg\,min}_{T}(\mathcal{L_{ATS}}) \quad \quad \text{s.t:}\quad T > 0 \end{aligned} $ which can be analyzed in two cases: - **Case I**: In this case the samples are $(x_i,y_i=k)$, which means they are generated directly from $Q(x,y=k)$.
The likelihood part of $\mathcal{L_{ATS}}$ in this case is equal to:\ $\begin{aligned} \mathcal{L_{ATS}} &= \sum_{k=1}^{K} \sum_{(x_i,y_i) \in \{M_k|y_i=k\}}\\ & -\log \left ( \frac{S_{y=k}(x_i,T)(1-S_{y = k}(x_i,T))}{S_{y\not=k}(x_i,T)} \right ) \end{aligned} $\ which means: $\begin{aligned} \mathcal{L_{ATS}} &= \sum_{k=1}^{K} \sum_{(x_i,y_i=k)} -\log \left (S_{y=k}(x_i,T) \right ), \end{aligned} $\ that is, the NLL loss function. Minimizing the NLL with respect to $T$ on the samples generated from $Q(x,y=k)$ causes $S_{y=k}(x_i,T^*)$ to approach $Q(y=k|x)$ for each $k\in\{1,\ldots,K\}$. - **Case II**: In this case $(x_i,y_i\neq k)$, which means the samples are selected from the distribution $Q(x,y\not=k)$. Using these samples instead of samples generated directly from $Q(x,y=k)$ applies a weight to the distribution. Referring to Eq. (6), this weight is equal to $W = Q(y=k|x)/Q(y\neq k|x)$. Therefore, the negative log-likelihood on these samples will approach $Q(y=k|x)^2/Q(y\not=k|x)$ instead of $Q(y=k|x)$. In this case $\mathcal{L_{ATS}}$ is: $\begin{aligned} \mathcal{L_{ATS}} &= \sum_{k=1}^{K} \sum_{(x_i,y_i) \in \{M_k|y_i\not=k\}}\\ & -\log \left ( \frac{S_{y=k}(x_i,T)(1-S_{y \not= k}(x_i,T))}{S_{y\not=k}(x_i,T)} \right ) \end{aligned} $\ which means: $\begin{aligned} \mathcal{L_{ATS}} &= \sum_{k=1}^{K} \sum_{(x_i,y_i\not=k)} -\log \left ( \frac{S_{y=k}(x_i,T)^2}{S_{y\not=k}(x_i,T)} \right ), \end{aligned} $\ Since $S_{y\not=k}(x_i,T^*) = 1- S_{y=k}(x_i,T^*)$ and $Q(y\not=k|x)= 1- Q(y=k|x)$, minimizing $\mathcal{L_{ATS}}$ with respect to $T$ makes $S_{y=k}(x_i,T^*)^2/(1- S_{y=k}(x_i,T^*))$ approach $Q(y=k|x)^2/(1- Q(y=k|x))$, which means $S_{y=k}(x_i,T^*)$ approaches $Q(y=k|x)$. We have shown that $S_{y=k}(x,T^*)$ approaches $Q(y=k|x)$ on the sample set $M_k$ for $k=1,\ldots,K$. Therefore we can deduce that $S_{y}(x,T^*)$ approaches $Q(y|x)$, which is the final goal of calibration.
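The loss analyzed above can be written down directly. The following NumPy sketch implements $\mathcal{L_{ATS}}$ as defined; the construction of the subsets $M_k$ is not shown, and the array layout (`anchors` holding the subset index $k$ of each sample) is our own assumption:

```python
import numpy as np

def softmax_T(logits, T):
    """Temperature-scaled softmax S(x, T)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ats_loss(logits, labels, anchors, T):
    """L_ATS summed over samples. anchors[i] = k is the class whose subset
    M_k sample i was assigned to; labels[i] is its true label (equal to k
    in Case I, different from k in Case II)."""
    S = softmax_T(logits, T)
    idx = np.arange(len(labels))
    s_k = S[idx, anchors]                  # S_{y=k}(x_i, T)
    s_y = S[idx, labels]                   # S_{y=y_i}(x_i, T)
    return float(-np.log(s_k * (1.0 - s_y) / (1.0 - s_k)).sum())
```

For a sample with $y_i=k$ the summand reduces to $-\log S_{y=k}(x_i,T)$, recovering the NLL of Case I above.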
\[Tabel\_ISIC\_Stat\] Datasets Details ================ We apply the calibration method to different image classification datasets (the results are reported in Sec. 6 of the main text). For each experiment, the validation set is a randomly selected $20\%$ of the test set. For all the model-dataset pairs used in Tables 1 and 3 we trained the models on the specified training set, except for the experiments with ImageNet, for which we used the pre-trained ResNet152 PyTorch model. 1. CIFAR-10 \[20\]: It contains 60000 32$\times$32 color images of 10 different objects, with 6000 images per class. The training and test sets contain 50000 and 10000 images, respectively. 2. CIFAR-100 \[20\]: The same setting as CIFAR-10, except that it has 100 classes of different objects with 600 images per class. 3. SVHN \[33\]: It contains 32$\times$32 color images of digits 0 to 9, with 73257 digits for training and 26032 for testing. 4. MNIST \[25\]: It contains 28$\times$28 gray-scale images of digits 0 to 9, with 60,000 images for training and 10,000 for testing. 5. Caltech-UCSD Birds \[41\]: It contains 11,788 color images of 200 different bird species, divided randomly into 7073 training and 4715 test samples. 6. ImageNet2012 \[10\]: Natural scene images from 1000 classes. It contains 1.3 million and 25000 images for training and test, respectively. 7. ISIC dataset \[8, 40\] (data extracted from the “ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection” grand challenge datasets): It contains 10015 color images of 7 possible skin anomalies. We divide the dataset randomly into 6009 training and 4006 test images. Robustness to Noise and Validation Size ======================================= In this section we provide results for more model-dataset pairs, comparing the behavior of ATS vs. TS in calibrating a model in the presence of labeling noise and with few validation samples.
The results are shown in Figure \[noise\] and Figure \[validation\], respectively. ATS is much more robust to labeling noise and more stable when the number of validation samples is small. Implementation Specification of Skin Lesion Detection System ============================================================ To test the impact of calibration in a real application, we design a medical assistant system. We select the ISIC dataset, which contains color images of 7 different skin lesions: Melanoma, Melanocytic nevus, Basal cell carcinoma (BCC), Bowen, Benign keratosis, Dermatofibroma, and Vascular. The selected model is a ResNet200 with weights pretrained on ImageNet. To fine-tune it, we use 60% of the ISIC images, resizing them to $224 \times 224$ and normalizing with the mean and standard deviation of the ImageNet dataset. Notice that we use stratification to divide the dataset. We run the fine-tuning for 100 epochs with a batch size of 32, using the Adam optimizer with a starting learning rate of 1e-4 and a scheduled decay rate of 0.95 every 10 epochs. To increase the variety of the training samples, we perform data augmentation: with probability 0.5 every image is transformed by a random horizontal or vertical flip or a random rotation of at most 12.5 to the left or to the right. Detailed statistics of the dataset are provided in Table \[Tabel\_ISIC\_Stat\]. More Results of Skin Lesion Detection System ============================================ In this section we provide more results for the skin lesion detection system. The confidence of the system before and after calibration with the TS and ATS methods, for correctly classified and misclassified samples, is reported in Figure \[Skin\_Lesion\_TS\_ATS\] for different skin lesion types.
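The decay schedule described above multiplies the learning rate by 0.95 every 10 epochs; a minimal sketch of the resulting rate as a function of the epoch (the function name is ours, not part of the paper's code):

```python
def scheduled_lr(epoch, base_lr=1e-4, decay=0.95, step=10):
    """Learning rate after `epoch` epochs under a step schedule that
    multiplies the rate by `decay` every `step` epochs."""
    return base_lr * decay ** (epoch // step)
```

This is the same behaviour as PyTorch's `torch.optim.lr_scheduler.StepLR` with `step_size=10` and `gamma=0.95` applied to an Adam optimizer initialized at 1e-4.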
---
abstract: 'In this thesis, we investigate the proof of the Baum-Connes Conjecture with Coefficients for a-$T$-menable groups. We essentially follow the argument employed by N. Higson and G. Kasparov in the paper [@HigKas2]. The crucial point is the following. One of the most important steps of their proof is obtaining the Dirac elements (the inverses of the Bott elements) in Equivariant $KK$-Theory. We prove that the group homomorphism used for the lifting of the Dirac elements is an isomorphism in the case of our interest. Hence, we get a clear and simple understanding of the lifting of the Dirac elements in the Higson-Kasparov Theorem. In the course of our investigation, on the other hand, we point out a problem with the non-commutative functional calculus defined in the paper [@HigKas2] and give a corrected precise definition. In the final part, we mention that the ${C^\ast\text{-algebra}}$ of a (real) Hilbert space becomes a $G$-${C^\ast\text{-algebra}}$ naturally even when a group $G$ acts on the Hilbert space by an affine action whose linear part is of the form of an isometry times a scalar, and we prove the infinite dimensional Bott-Periodicity in this case by using Fell’s absorption technique.'
author:
- |
    Shintaro Nishikawa\
    Keio University
bibliography:
- 'bib1.bib'
date: September 2016
title: |
    On the Lifting of the Dirac Elements\
    in the Higson-Kasparov Theorem
---

Acknowledgement {#acknowledgement .unnumbered}
===============

I would like to thank, first and foremost, my advisor, Takeshi Katsura, who always gives me great insights, all of which help me to overcome the difficulties I encounter. At the time this thesis was written, two and a half years had passed since he introduced me to the beautiful realm of Functional Analysis, Operator Algebras and Noncommutative Geometry, where infinite dimensionality creates such a wonderful interplay between topology, analysis and algebra.
Since then, his way of seeing and doing mathematics has greatly influenced my own. It is his constant support that made it possible for me to gain such a wonderful understanding of these subjects, some of which is explored in this thesis. Also, I would like to thank Nigel Higson, who kindly answered my questions and gave me many helpful comments on my research. I am deeply grateful for his great hospitality during my visit to the Pennsylvania State University in December 2015. Finally, I would like to thank Narutaka Ozawa and Klaus Thomsen, who kindly responded to my questions despite their busy schedules.

Introduction {#introduction .unnumbered}
============

The Baum-Connes Conjecture (Conjecture \[thm:BC\]) is a long-standing conjecture in non-commutative geometry. It has deep relations with other fields of mathematics; the Novikov conjecture in topology and the idempotent conjecture in algebra are famous examples of conjectures which the Baum-Connes Conjecture implies. Since it was formulated in 1982 by Baum and Connes, there has been great progress in the understanding and verification of this conjecture. For a second countable, locally compact topological group $G$, the reduced group ${C^\ast\text{-algebra}}$ ${C^\ast}_{\text{red}}(G)$ of $G$ is defined as the completion of the convolution algebra $L^1(G)$ acting on the Hilbert space $L^2(G)$ of square integrable functions on $G$. The set of unitary equivalence classes of irreducible representations of the ${C^\ast\text{-algebra}}$ ${{C^\ast}_{\text{red}}(G)}$ corresponds bijectively to that of irreducible unitary representations of the group $G$ which are weakly contained in the (left) regular representation of $G$; this set is the reduced unitary dual $\hat{G}_r$. When $G$ is a compact or an abelian group, the natural topology defined on $\hat{G}_r$ is locally compact and Hausdorff. However, for a general group $G$, the topology on $\hat{G}_r$ may not be Hausdorff.
The $K$-theory $K_\ast({{C^\ast}_{\text{red}}(G)})$ of the ${C^\ast}$-algebra ${{C^\ast}_{\text{red}}(G)}$ can be considered as one of the tools for properly describing the geometric nature of the “space” $\hat{G}_r$. On the other hand, Kasparov ([@Kas2]) generalized the index theory of elliptic operators on smooth manifolds to develop the bivariant theory of ${C^\ast\text{-algebras}}$: the equivariant $KK$-theory. This beautiful generalization of index theory defines not only the notion of abstract elliptic operators, which induce group homomorphisms on the $K$-theory groups of ${C^\ast\text{-algebras}}$, but also a well-defined product of two elliptic operators compatible with the composition of the group homomorphisms they induce; this is the Kasparov product. Kasparov and others managed to define the (higher) indices of elliptic operators taking values in the groups $K_\ast({{C^\ast}_{\text{red}}(G)})$. The Baum-Connes Conjecture states that all elements of the $K$-theory groups $K_\ast({{C^\ast}_{\text{red}}(G)})$ should be indices of some elliptic operators and that any two elliptic operators having the same indices should be linked by certain geometric relations (i.e. homotopies). N. Higson and G. Kasparov ([@HigKas2]) showed that the Baum-Connes Conjecture holds for all a-$T$-menable groups (Definition \[dfn:a-t\]), in particular for all amenable groups. In fact, they proved that such groups satisfy the Baum-Connes Conjecture with Coefficients (Conjecture \[thm:BCC\]), which is a much stronger conjecture than the Baum-Connes Conjecture. This is the Higson-Kasparov Theorem (Theorem \[thm:BCCa-T\]).
They proved this result following the Dual-Dirac method (Theorem \[DD\]), the standard method for proving the Baum-Connes Conjecture with Coefficients, which says that the conjecture holds for a group $G$ if one finds an isomorphism between the ${C^\ast\text{-algebra}}$ ${\mathbb{C}}$ and some proper $G$-${C^\ast\text{-algebra}}$ in Equivariant Kasparov’s category $KK^G$. For an a-$T$-menable group $G$, there is a natural candidate for this isomorphism, called the Bott element. However, as is described in the paper [@HigKas2], there is a certain analytic technicality in finding the inverse of the Bott element, the Dirac element. N. Higson and G. Kasparov defined, for separable $G$-${C^\ast\text{-algebras}}$ $A,B$, an abelian group $\{\Sigma A, B\}_G$ and a group homomorphism $\eta$ from the odd Kasparov group $KK^G_1(A, B)$ to $\{\Sigma A, B\}_G$ ($\Sigma A=C_0(0, 1)\otimes A$). They defined a Dirac element $\alpha$ in the group $\{\Sigma A({\mathcal{H}}), S\Sigma\}_G$, where $A({\mathcal{H}})$ is a certain proper $G$-${C^\ast\text{-algebra}}$ and $S=C_0({\mathbb{R}})$. They managed to find the “honest” Dirac element $d$ by showing that $\alpha$ can be lifted to $d$ via $\eta$. Their proof of this lifting ([@HigKas2] Theorem 8.1.) contains a very technical argument concerning an extension of $G$-${C^\ast\text{-algebras}}$ admitting a not necessarily equivariant completely positive cross-section. In this thesis, among other things, we prove the following result:

\[Theo\](See Theorem \[Result\]) Let $A,B$ be separable $G$-${C^\ast\text{-algebras}}$. Suppose that $A$ is a nuclear, proper $G$-${C^\ast\text{-algebra}}$ and that $B$ is isomorphic to $\Sigma B'$ for some separable $G$-${C^\ast\text{-algebra}}$ $B'$. Then, the homomorphism $\eta\colon KK^G_1(A, B)\to \{\Sigma A, B\}_G$ is an isomorphism of abelian groups.

Thanks to this result, we can avoid the technical theorem ([@HigKas2] Theorem 8.1.)
in defining the Dirac element in Equivariant Kasparov’s category $KK^G$. A brief outline of this thesis is as follows. Chapter 1 serves as a very quick introduction to ${C^\ast\text{-algebras}}$ for readers who might not be familiar with these notions. Chapter 2 contains further preliminary material used in later chapters, such as graded ${C^\ast\text{-algebras}}$, Hilbert modules and unbounded multipliers. The functional calculus for unbounded multipliers is explained using the Bott-Dirac operator, which plays an important role in the proof of the Higson-Kasparov Theorem. In Chapter 3, we give a basic introduction to $K$-Theory and $K$-Homology of ${C^\ast\text{-algebras}}$ and go on to introduce Kasparov’s Equivariant KK-Theory in Chapter 4 and Equivariant E-Theory in Chapter 5. We confine ourselves to the facts necessary for our investigation of the proof of the Higson-Kasparov Theorem. In Chapter 6, we quickly review the standard formulation of the Baum-Connes Conjecture and the Baum-Connes Conjecture with Coefficients. In this chapter, we also introduce the Higson-Kasparov Theorem and give a brief review of the proof given by N. Higson and G. Kasparov. In Chapters 7 and 8, we give a proof of the Higson-Kasparov Theorem following the argument employed by N. Higson and G. Kasparov. In Chapter 7, we point out a certain problem with the non-commutative functional calculus defined by N. Higson and G. Kasparov and give a corrected precise definition. In Chapter 8, among other things, we show our main result (Theorem \[Theo\]), which says that the group homomorphism used for the lifting of the Dirac elements is, in fact, an isomorphism in the case of our interest. This gives us a clear and simple understanding of the technical part of the Higson-Kasparov Theorem.
In the final chapter, we mention that the ${C^\ast\text{-algebra}}$ of a Hilbert space becomes a $G$-${C^\ast\text{-algebra}}$ naturally even when a group $G$ acts on the Hilbert space by an affine action whose linear part is not necessarily isometric but of the form of an isometry times a scalar, and we prove the infinite dimensional Bott-Periodicity in this case by using Fell’s absorption technique.

${C^\ast}$-Algebras
===================

In this first chapter, we give a basic introduction to ${C^\ast\text{-algebras}}$. The material given here can be found in many textbooks on this subject, such as [@BrOza], [@DixC], [@HigRoe] and [@analysisnow]. A complex algebra $A$ is a [Banach algebra]{} (resp. [normed algebra]{}) if its underlying vector space is a Banach space (resp. normed vector space) with a norm which is submultiplicative (i.e. $\|xy\|\leq\|x\|\|y\|$ for all $x,y \in A$). An [involution]{} on a normed algebra $A$ is a conjugate-linear antimultiplicative isometry of order two, denoted $x \mapsto x^\ast$ for $x\in A.$ A [Banach [$\ast$]{}-algebra]{} (resp. [normed [$\ast$]{}-algebra]{}) is a Banach algebra (resp. normed algebra) with an involution. A [[${C^\ast\text{-algebra}}$]{}]{} is a Banach $\ast$-algebra $A$ satisfying the [[${C^\ast}$-identity]{}]{}: $$\begin{aligned} \|x^\ast x\| &= \|x\|^2 \quad \text{for all} \quad x\in A \label{eq:identity}\end{aligned}$$ A normed algebra is called [separable]{} if it is separable in the topological sense, i.e. if it has a countable dense subset. A normed algebra $A$ is called [unital]{} if it has a unit (a multiplicative identity), usually denoted $1$ or $1_A$. A [unital]{} subalgebra of $A$ is a subalgebra of $A$ containing the unit $1_A.$ A [[${C^\ast}$-subalgebra]{}]{} is a norm-closed selfadjoint (closed under the involution) subalgebra of a ${C^\ast}$-algebra. It is a ${C^\ast}$-algebra in an obvious way. Let $A$ and $B$ be normed $\ast$-algebras.
A [[$\ast$]{}-homomorphism]{} from $A$ to $B$ is an algebraic homomorphism from $A$ to $B$ which intertwines the involutions. An isomorphism of normed $\ast$-algebras is a surjective isometric $\ast$-homomorphism. For any Banach $\ast$-algebra (resp. ${C^\ast\text{-algebra}}$) $A$, there is a unital Banach ${\ast}$-algebra (resp. ${C^\ast\text{-algebra}}$) $\tilde A$ containing $A$ as a subalgebra of codimension one. Its algebraic structure is unique, but several norms may be defined on $\tilde A$. If $A$ is a ${C^\ast}$-algebra, it will soon be clear that a ${C^\ast\text{-algebra}}$ $\tilde A$ is unique up to isomorphism. For a non-unital algebra $A$, $\tilde A$ is called a [unitization]{} of $A.$ Let $A$ be a unital Banach algebra. For $a \in A$, the [spectrum]{} $\operatorname{sp}_A(a)$ of $a$ in $A$ is the subset of $\mathbb{C}$ defined by $\operatorname{sp}_A(a)=\{ \, \lambda \in \mathbb{C} \mid \text{$\lambda - a$ is not invertible in $A$}\, \}$. The spectrum $\operatorname{sp}_A(a)$ is a nonempty compact subset of $\mathbb{C}$. If $A$ is a unital subalgebra of a unital Banach algebra $B$, $\operatorname{sp}_A(a)$ and $\operatorname{sp}_B(a)$ may not coincide in general. Fortunately, if $B$ is a ${C^\ast}$-algebra and $A$ is a unital ${C^\ast}$-subalgebra of $B$, they can be shown to be the same. Henceforth, we can speak about the spectrum of $a$ in a ${C^\ast}$-algebra $A$ without any confusion; we will denote it by $\operatorname{sp}(a)$ (for $a$ in a non-unital ${C^\ast\text{-algebra}}$ $A$, $\operatorname{sp}(a)$ is defined to be $\operatorname{sp}_{\tilde A}(a)$). Any $\ast$-homomorphism from a Banach $\ast$-algebra to a ${C^\ast}$-algebra is bounded (continuous). In fact, it is always norm-decreasing. Therefore, any bijective $\ast$-homomorphism between ${C^\ast}$-algebras is automatically an isomorphism. This explains the uniqueness of the unitization of a ${C^\ast}$-algebra. Let $A$ be a ${C^\ast}$-algebra.
- $a \in A$ is [normal]{} if $a^\ast a=aa^\ast$;
- $a \in A$ is [selfadjoint]{} if $a=a^\ast$;
- $a \in A$ is [positive]{} if $a=b^\ast b$ for some $b \in A$;
- $p \in A$ is a [projection]{} if $p=p^\ast=p^2$;
- $w \in A$ is a [partial isometry]{} if $w^\ast w$ and $ww^\ast$ are projections.

Assume $A$ is unital.

- $v \in A$ is an [isometry]{} if $v^\ast v=1$;
- $u \in A$ is a [unitary]{} if $u^\ast u=uu^\ast=1.$

The set of positive elements in a ${C^\ast}$-algebra $A$ forms a cone. We define an order on selfadjoint elements of $A$ in the following way: for selfadjoint elements $a,b \in A$, $a\leq b$ if $b-a$ is positive. Let ${\mathcal{H}}$ be a complex Hilbert space, i.e. a complex Banach space whose norm comes from an inner product $\langle \cdot,\cdot \rangle \colon {\mathcal{H}}\times {\mathcal{H}}\to \mathbb{C}$ (we will always take it to be linear in the second variable). A linear operator $T$ on a normed vector space is continuous if and only if it is uniformly bounded on the unit ball; we call such a $T$ a bounded operator. The algebra ${B(\mathcal{H})}$ of bounded operators on ${\mathcal{H}}$ is a ${C^\ast}$-algebra in the following way. We consider the [operator norm]{} $\|T\|=\displaystyle \sup_{\|\xi\|=1}\|T\xi\|.$ There is an involution $T\mapsto T^\ast$, where $T^\ast$ for a bounded operator $T$ is the unique bounded operator on ${\mathcal{H}}$ satisfying ${\langle T\xi,\eta \rangle}={\langle \xi,T^\ast\eta \rangle}$ for all $\xi,\eta \in {\mathcal{H}}.$ Equipped with these, ${B(\mathcal{H})}$ becomes a ${C^\ast}$-algebra. If $\dim({\mathcal{H}})=n<\infty$, ${B(\mathcal{H})}$ can be identified with a matrix algebra ${M_n(\mathbb{C})}$ uniquely up to inner automorphisms. In this paper, ${M_n(\mathbb{C})}$ will in almost all cases be treated as a ${C^\ast}$-algebra endowed with the canonical operator norm. ${C^\ast}$-subalgebras of ${B(\mathcal{H})}$ are sometimes called concrete ${C^\ast}$-algebras.
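In the finite-dimensional case ${B(\mathcal{H})}\cong{M_n(\mathbb{C})}$, the ${C^\ast}$-identity can be checked numerically: the operator norm of a matrix is its largest singular value, and $\|T^\ast T\|=\|T\|^2$ holds for every matrix. The following is only an illustrative sketch, not part of the formal development:

```python
import numpy as np

rng = np.random.default_rng(1)
# A generic operator on C^4, represented as a complex 4x4 matrix.
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

def op_norm(A):
    # Operator norm of a matrix = largest singular value.
    return np.linalg.norm(A, 2)

# C*-identity: ||T* T|| = ||T||^2, and the involution is isometric: ||T*|| = ||T||.
assert np.isclose(op_norm(T.conj().T @ T), op_norm(T) ** 2)
assert np.isclose(op_norm(T.conj().T), op_norm(T))
```

The first identity holds because $T^\ast T$ is positive semidefinite, so its operator norm is its largest eigenvalue, which is the square of the largest singular value of $T$.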
It turns out that any abstract ${C^\ast}$-algebra is isomorphic to a concrete one (see Proposition \[prop:cstarrep\]). In other words, the ${C^\ast}$-identity encodes all the necessary and sufficient information for a Banach $\ast$-algebra to be realized as an “operator algebra” on a Hilbert space. All the definitions above about particular elements of ${C^\ast}$-algebras reflect the corresponding notions defined for operators on Hilbert spaces.

(commutative ${C^\ast}$-algebras) Let $X$ be a locally compact Hausdorff space. The algebra $C_b(X)$ of bounded continuous $\mathbb{C}$-valued functions on $X$ becomes a ${C^\ast}$-algebra in the following way. The norm is the supremum norm $\displaystyle \|f\|=\sup_{x\in X}|f(x)|$, and the involution is pointwise complex conjugation $f\mapsto \overline{f}.$ Inside $C_b(X)$, there is a normed $\ast$-algebra $C_c(X)$ of continuous functions on $X$ with compact support; its completion $C_0(X)$ in $C_b(X)$ is a ${C^\ast}$-subalgebra of $C_b(X)$; it is identified with the algebra of continuous functions on $X$ vanishing at infinity. The ${C^\ast\text{-algebra}}$ $C_0(X)$ is unital if and only if $X$ is compact, and in this case we usually denote it by $C(X).$ The algebra $C_0(X)$ is separable if and only if $X$ is second countable. Any commutative ${C^\ast}$-algebra is canonically isomorphic to $C_0(X)$ for some $X$. (cf. [@HigRoe] THEOREM 1.3.12) Let $A$ be a commutative ${C^\ast}$-algebra. Denote by ${\widehat{A}}$ the space of characters of $A$ (nonzero $\ast$-homomorphisms from $A$ to $\mathbb{C}$) equipped with the weak-$\ast$ topology (the topology of pointwise convergence). Then ${\widehat{A}}$ is a locally compact Hausdorff space; it is compact if and only if $A$ is unital.
The ${C^\ast}$-algebra $A$ is isomorphic to $C_0({\widehat{A}})$; the isomorphism sends $a$ in $A$ to the function $\hat{a}\colon \psi \mapsto \psi(a).$ The character space ${\widehat{A}}$ is called the Gelfand spectrum of $A.$ For a normal element $a$ in a unital ${C^\ast}$-algebra $A$, denote by $C^\ast(a,1)$ the minimal unital ${C^\ast}$-subalgebra of $A$ containing $a.$ This is a unital commutative ${C^\ast}$-algebra isomorphic to $C(\operatorname{sp}(a))$; the canonical (unital) isomorphism takes $a \in C^\ast(a,1)$ to the coordinate function $z \in C(\operatorname{sp}(a)).$ For any continuous function $f \in C(\operatorname{sp}(a)),$ we denote by $f(a)$ the corresponding element in $C^\ast(a,1).$ This correspondence is called the [functional calculus.]{} More generally, for a normal element $a$ in any ${C^\ast}$-algebra, the minimal ${C^\ast}$-subalgebra $C^\ast(a)$ containing $a$ is canonically isomorphic to $C_{0}(\operatorname{sp}(a)\backslash \{0\})$. There is an analogous functional calculus for this possibly non-unital situation. For a normed $\ast$-algebra $A$, a [representation]{} of $A$ on a Hilbert space ${\mathcal{H}}$ is a bounded $\ast$-homomorphism from $A$ to ${B(\mathcal{H})}.$ Two representations $\rho_1$ on ${\mathcal{H}}_1$ and $\rho_2$ on ${\mathcal{H}}_2$ are [unitarily equivalent]{} if there exists a unitary (an isomorphism of Hilbert spaces) $U$ from ${\mathcal{H}}_1$ to ${\mathcal{H}}_2$ which intertwines the two representations: $$\begin{aligned} U\rho_1(a) U^\ast &= \rho_2(a) \quad \text{for} \,\, a \in A\end{aligned}$$ A representation $\rho$ of $A$ on ${\mathcal{H}}$ is called [nondegenerate]{} if $\rho(A){\mathcal{H}}=$ span$\{\,\rho(a)v \mid a\in A, v\in {\mathcal{H}}\,\}$ is dense in ${\mathcal{H}}.$ \[prop:cstarrep\] (cf.
[@HigRoe] THEOREM 1.6.2) The following are equivalent for a normed $\ast$-algebra $A.$

- $A$ is a ${C^\ast}$-algebra;
- $A$ is isomorphic to a ${C^\ast}$-subalgebra of ${B(\mathcal{H})}$ for some Hilbert space ${\mathcal{H}}.$

The proof comes down to constructing, for each selfadjoint element $a$ in $A$, a representation of $A$ which sends $a$ to a nonzero element. This is done by an elaboration of the Hahn-Banach Theorem and the GNS-construction. We omit the details; see [@HigRoe] for example. For any ${C^\ast\text{-algebra}}$ $A$, the matrix algebra $M_n(A)$ over $A$ becomes a ${C^\ast\text{-algebra}}$ in the following way. One first identifies $A$ with a ${C^\ast}$-subalgebra of ${B(\mathcal{H})}$ by faithfully representing $A$ on some Hilbert space ${\mathcal{H}}$. Then, $M_n(A)$ is naturally identified with a ${C^\ast}$-subalgebra of $B({\mathcal{H}}^n)$. The ${C^\ast}$-norm defined on $M_n(A)$ in this way is independent of the representation of $A$. Let $A$ be a Banach $\ast$-algebra. There is a canonical pre-${C^\ast}$-norm on $A$ defined by: $$\begin{aligned} \|a\| &= \displaystyle \sup_{\rho}\|\rho(a)\| \label{eq:envelop norm}\end{aligned}$$ where the supremum is taken over all representations of $A.$ Since any representation of $A$ is norm-decreasing, as we remarked earlier, this norm is well-defined. It satisfies the ${C^\ast}$-identity because it comes from the operator norm on Hilbert spaces. The completion of $A$ with this new norm (after taking a quotient by “zero” elements) is the [enveloping [${C^\ast}$-algebra]{}]{} of $A.$ By its construction, it has the universal property that representations of $A$ correspond bijectively to representations of the enveloping ${C^\ast}$-algebra of $A$. Let us look at some examples of this construction. The last one is the most important; the first two are included just to illustrate what kinds of properties of Banach $\ast$-algebras keep them far from being ${C^\ast}$-algebras.
- Consider the subalgebra $A$ (as a Banach algebra) of $M_2(\mathbb{C})$ consisting of upper triangular matrices. Define an involution on $A$ by the following formula. $$\begin{pmatrix} a & b \\ 0 & c \end{pmatrix} ^\ast= \begin{pmatrix} \bar{c} & \bar{b} \\ 0 & \bar{a} \end{pmatrix} \quad \text{for} \,\, a,b,c \in \mathbb{C}$$ One can check that this makes $A$ a Banach $\ast$-algebra and that its enveloping ${C^\ast}$-algebra is $0.$ The reason all elements vanish is clear: each of the three basic matrix units satisfies $a^\ast a = 0$, which implies $a=0$ in a ${C^\ast}$-algebra.
- Denote the closed unit disk of the complex plane $\mathbb{C}$ by $\mathbb{D}$. Consider the subalgebra $A$ (as a Banach algebra) of $C(\mathbb{D})$ consisting of bounded holomorphic functions on $\mathbb{D}$. Define a new involution on $A$ by $f^\ast(z)=\overline{f(\overline{z})}.$ This makes it a commutative Banach $\ast$-algebra, and its enveloping ${C^\ast}$-algebra is the algebra $C([-1,1])$ of continuous functions on the interval $[-1,1].$ To check this, one can first see that the images of the coordinate function $z$ and the identity (the constant function $1$) generate the enveloping algebra, and then note that since $z$ is selfadjoint in $A$, the spectrum of its image must be contained in $\mathbb{D}\cap\mathbb{R}=[-1,1].$
- (full group ${C^\ast}$-algebras) Let $G$ be a locally compact topological group. We denote by $\mu$ its left invariant Haar measure, which is unique up to scalar multiplication. Let $\Delta$ be the associated modular function. Consider the Banach space $L^1(G,\mu)$ of integrable functions.
We define a product and an involution to make it a Banach $\ast$-algebra: for $f,g \in L^1(G,\mu)$ $$\begin{aligned} (fg)(t)&=\int f(s)g(s^{-1}t)d\mu(s) \\ (f^\ast)(t)&=\Delta(t)^{-1}\overline{f(t^{-1})}\end{aligned}$$ The enveloping ${C^\ast}$-algebra of $L^1(G,\mu)$ is the (full) [group [${C^\ast}$]{}-algebra]{} ${C^\ast}(G)$ of the locally compact topological group $G.$ This ${C^\ast}$-algebra has the important universal property that, associated to any unitary representation of $G$ on a Hilbert space, the canonical representation of $C_c(G)$ extends continuously (hence uniquely) to ${C^\ast}(G);$ here one may identify $C_c(G)$ as a subalgebra of ${C^\ast}(G)$, not just of $L^1(G,\mu).$ Conversely, any nondegenerate representation of ${C^\ast}(G)$ arises in this way and uniquely determines the underlying unitary representation of $G.$ The ${C^\ast\text{-algebra}}$ ${C^\ast}(G)$ is commutative if and only if $G$ is abelian; in this case, it is isomorphic to $C_0(\widehat{G})$, where $\widehat{G}$ is the character space of $G$, which is locally compact in its own right. The group ${C^\ast\text{-algebra}}$ ${C^\ast}(G)$ is separable if $G$ is second countable. Let $G$ be a locally compact topological group. Associated to the left regular representation of $G$ on $L^2(G,\mu)$, we have the canonical representation of ${C^\ast}(G)$. The image of this representation is the [reduced group [${C^\ast\text{-algebra}}$]{}]{} of $G$; it is denoted by ${C^\ast}_{\text{red}}(G)$. Let $A$ be a ${C^\ast\text{-algebra}}$ and $G$ be a locally compact topological group. A $G$-action on $A$ is a group homomorphism from $G$ to the automorphism group $\mathrm{Aut}(A)$ of $A$. An element $a$ in $A$ with a $G$-action is [[$G$]{}-continuous]{} if the map $g \mapsto g\cdot a$ is a continuous map from $G$ to $A$. A ${C^\ast\text{-algebra}}$ $A$ with a $G$-action is called a [[$G$]{}-[${C^\ast\text{-algebra}}$]{}]{} if all elements in $A$ are $G$-continuous.
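For a finite group the integrals in the convolution product above become sums (counting measure, $\Delta\equiv 1$), and the $\ast$-algebra axioms can be checked numerically. The sketch below, an illustration of our own using $G=\mathbb{Z}/8$, also shows that for a finite abelian group the Fourier transform turns convolution into pointwise multiplication, reflecting the isomorphism ${C^\ast}(G)\cong C_0(\widehat{G})$:

```python
import numpy as np

n = 8  # G = Z/8 with counting measure; G is discrete, so Delta = 1

def conv(f, g):
    # (fg)(t) = sum_s f(s) g(s^{-1} t) = sum_s f(s) g(t - s mod n)
    return np.array([sum(f[s] * g[(t - s) % n] for s in range(n)) for t in range(n)])

def star(f):
    # f*(t) = conj(f(t^{-1})) = conj(f(-t mod n))
    return np.array([np.conj(f[(-t) % n]) for t in range(n)])

rng = np.random.default_rng(2)
f = rng.normal(size=n) + 1j * rng.normal(size=n)
g = rng.normal(size=n) + 1j * rng.normal(size=n)
h = rng.normal(size=n) + 1j * rng.normal(size=n)

assert np.allclose(conv(conv(f, g), h), conv(f, conv(g, h)))  # associativity
assert np.allclose(star(conv(f, g)), conv(star(g), star(f)))  # (fg)* = g* f*

# Abelian case: the Fourier transform diagonalizes the convolution algebra,
# so the enveloping C*-norm of f is max_k |f^(k)|.
assert np.allclose(np.fft.fft(conv(f, g)), np.fft.fft(f) * np.fft.fft(g))
```

In this finite abelian model the characters are the exponentials computed by the discrete Fourier transform, so the group ${C^\ast}$-algebra is realized concretely as functions on the dual group.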
A $\ast$-homomorphism between ${C^\ast\text{-algebras}}$ with $G$-actions is called $G$-equivariant, or simply equivariant, if it intertwines the two $G$-actions. Note that an equivariant $\ast$-homomorphism necessarily sends $G$-continuous elements to $G$-continuous elements. A $G$-Hilbert space is a Hilbert space ${\mathcal{H}}$ with a unitary representation of $G$. A representation of a $G$-${C^\ast\text{-algebra}}$ $A$ on a $G$-Hilbert space ${\mathcal{H}}$ is a ${C^\ast}$-algebraic representation $\rho$ of $A$ on ${\mathcal{H}}$ which satisfies the following additional condition: for any $a$ in $A$ and for any $g$ in $G$, $\rho(g\cdot a)=u_g\rho(a)u_g^\ast$. Here, $u_g$ denotes the unitary on ${\mathcal{H}}$ corresponding to $g$ in $G$. Let $G$ be a locally compact group and $A$ be a $G$-${C^\ast\text{-algebra}}$. Consider the Banach space $L^1(G,A)$ of integrable functions from $G$ to $A$. We define a product and an involution to make it a Banach $\ast$-algebra: for $f,g \in L^1(G,A)$ $$\begin{aligned} (fg)(t)&=\int f(s)s(g(s^{-1}t))d\mu(s) \\ (f^\ast)(t)&=\Delta(t)^{-1}t(f(t^{-1}))^\ast\end{aligned}$$ The enveloping ${C^\ast}$-algebra of $L^1(G,A)$ is the [full crossed product]{} ${C^\ast}_{\text{max}}(G,A)$ of $A$ by $G$. It has the universal property that, associated to any representation of the $G$-${C^\ast\text{-algebra}}$ $A$ on a $G$-Hilbert space ${\mathcal{H}}$, the canonical representation of $C_c(G,A)$ extends continuously (hence uniquely) to a representation of the ${C^\ast\text{-algebra}}$ ${C^\ast}_{\text{max}}(G,A)$. Conversely, any nondegenerate representation of ${C^\ast}_{\text{max}}(G,A)$ arises in this way. Let $G$ and $A$ be as above. Represent $A$ faithfully and nondegenerately on a Hilbert space ${\mathcal{H}}$. Then, the Hilbert space $L^2(G,{\mathcal{H}})$ becomes a $G$-Hilbert space by means of the left regular representation. There is a canonical representation of the $G$-${C^\ast\text{-algebra}}$ $A$ on $L^2(G,{\mathcal{H}})$.
The image of the associated representation of ${C^\ast}_{\text{max}}(G,A)$ is the [reduced crossed product]{} ${C^\ast}_{\text{red}}(G,A)$ of $A$ by $G$. If $A$ is a commutative $G$-${C^\ast\text{-algebra}}$ $C_0(X)$ of continuous functions vanishing at infinity on a locally compact space $X$ equipped with a continuous $G$-action, we usually denote the full (resp. reduced) crossed product algebra by ${C^\ast}_{\text{max}}(G,X)$ (resp. ${C^\ast}_{\text{red}}(G,X)$). Let $A$ be a ${C^\ast}$-algebra. A (countable) [approximate unit]{} for $A$ is an increasing sequence $(u_n)_{n\geq 1}$ of positive contractive elements (contractive means having norm at most $1$) in $A$ such that for all $a \in A$, $\|a-u_na\|\rightarrow 0$ as $n \rightarrow \infty.$ A continuous approximate unit for $A$ is a family $(u_t)_{t\geq1}$ of (not necessarily increasing) positive contractive elements in $A$ such that for all $a \in A$, $\|a-u_ta\|\rightarrow 0$ as $t \rightarrow \infty.$ There is a net version of approximate units, and any ${C^\ast}$-algebra has an approximate unit in this sense. A ${C^\ast}$-algebra having a countable approximate unit is called [[$\sigma$]{}-unital.]{} Separable ${C^\ast}$-algebras are $\sigma$-unital. A ${C^\ast\text{-algebra}}$ has a continuous approximate unit if and only if it is $\sigma$-unital. Let $J$ be a closed selfadjoint ideal of a ${C^\ast\text{-algebra}}\,\,A$ (selfadjointness actually follows from the other conditions). Then, the quotient algebra $A/J$ naturally becomes a ${C^\ast\text{-algebra}}$. In this paper, by an ideal of a ${C^\ast\text{-algebra}}$ $A$, we mean a closed selfadjoint ideal of $A$. Associated to an ideal $J$ of $A$, we have a short exact sequence: $$\begin{aligned} \xymatrix{ 0 \ar[r] & J \ar[r] & A \ar[r] & A/J \ar[r] & 0 } \label{def:extension}\end{aligned}$$ We call such a short exact sequence an extension of $A/J$ by $J$.
When all ${C^\ast\text{-algebras}}$ which appear in such a sequence are $G$-${C^\ast\text{-algebras}}$ and all connecting $\ast$-homomorphisms are equivariant, we call it a $G$-extension. An ideal $J$ of $A$ is called essential if the annihilator ideal $$J^\perp=\{\, a\in A\mid \text{$aj=ja=0$ for all $j\in J$} \,\}$$of $J$ in $A$ is $0$. For a ${C^\ast\text{-algebra}}$ $A$, the [multiplier algebra]{} $M(A)$ of $A$ is a ${C^\ast\text{-algebra}}$ containing $A$ as an essential ideal and maximal among such in the following sense. For any ${C^\ast\text{-algebra}}$ $B$ containing $A$ as an ideal, there is a unique $\ast$-homomorphism from $B$ to $M(A)$ which is the identity on $A$ and has kernel $A^\perp$. The multiplier algebra $M(A)$ can be defined, for example, after faithfully and nondegenerately representing $A$ on a Hilbert space ${\mathcal{H}}$, as the idealizer $\{\,T\in B({\mathcal{H}})\mid \text{for any $a\in A$, $Ta, \,aT\in A$ } \,\}$ of $A$ in $B({\mathcal{H}})$. When $A$ is a $G$-${C^\ast\text{-algebra}}$, the $G$-action extends to a natural $G$-action on the multiplier algebra $M(A)$. The quotient algebra $M(A)/A$ is called the outer multiplier algebra of $A$. An operator $T$ on a Hilbert space is [compact]{} if it is a norm-limit of finite rank operators. The set of compact operators on a separable infinite dimensional Hilbert space ${\mathcal{H}}$ forms an ideal of ${B(\mathcal{H})}$. We denote it by ${\mathcal{K}}({\mathcal{H}})$, or simply by ${\mathcal{K}}$ when there is no confusion. The Calkin algebra $Q({\mathcal{H}})$, or simply $Q$, is the quotient of ${B(\mathcal{H})}$ by ${\mathcal{K}}$.
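As a finite-dimensional illustration of compactness (a toy model of our own, truncating the basis at $N=200$): the diagonal operator $T=\operatorname{diag}(1,\tfrac12,\tfrac13,\dots)$ is a norm-limit of its rank-$k$ truncations, with $\|T-T_k\|=\tfrac{1}{k+1}\to 0$.

```python
import numpy as np

# Model the diagonal operator T = diag(1, 1/2, 1/3, ...) on a truncated basis.
N = 200
T = np.diag(1.0 / np.arange(1, N + 1))

def finite_rank_truncation(T, k):
    # Keep only the first k diagonal entries; the result has rank k.
    Tk = np.zeros_like(T)
    Tk[:k, :k] = T[:k, :k]
    return Tk

# ||T - T_k|| = 1/(k+1), so the truncations converge to T in operator norm.
for k in [1, 5, 50]:
    err = np.linalg.norm(T - finite_rank_truncation(T, k), 2)
    assert np.isclose(err, 1.0 / (k + 1))
```

An operator with constant diagonal, by contrast, keeps $\|T-T_k\|$ bounded away from zero for every finite-rank $T_k$, which is why the identity on an infinite dimensional Hilbert space is not compact.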
Suppose we have a system $(A_\lambda)_{\lambda\in\Lambda}$ of ${C^\ast\text{-algebras}}$ indexed by an upward filtering set $\Lambda$ with connecting $\ast$-homomorphisms $\phi_{\lambda_1\lambda_2}\colon A_{\lambda_1}\to A_{\lambda_2}$ for $\lambda_1\leq\lambda_2$ satisfying $\phi_{\lambda_2\lambda_3}\circ\phi_{\lambda_1\lambda_2}=\phi_{\lambda_1\lambda_3}$ for $\lambda_1\leq\lambda_2\leq\lambda_3$. We assume all connecting maps are injective and $\phi_{\lambda\lambda}=\operatorname{id}_{A_\lambda}$. An [inductive limit]{} $\displaystyle \lim_{\lambda\in\Lambda}A_\lambda$ of $(A_\lambda)_{\lambda\in\Lambda}$ is defined as the completion of the algebraic inductive limit of $(A_\lambda)_{\lambda\in\Lambda}$, which can be viewed as the union $\displaystyle\cup_{\lambda\in\Lambda}A_\lambda$, with the obvious pre-${C^\ast}$-norm. A tensor product of ${C^\ast\text{-algebras}}$ is a subtle notion. It is defined as a completion of an algebraic tensor product of ${C^\ast\text{-algebras}}$ by a ${C^\ast}$-norm. Surprisingly, such a completion is not unique in general. The following two ${C^\ast}$-tensor products are standard and very important. The details of these definitions, for example the definition of a tensor product of Hilbert spaces, can be found in [@BrOza]. Let $A$ and $B$ be ${C^\ast\text{-algebras}}$. The [maximal tensor product]{} $A\otimes_{\text{max}}B$ of $A$ and $B$ is the completion of the algebraic tensor product $A\odot B$ in the following ${C^\ast}$-norm: $$\begin{aligned} \|x\| &= \displaystyle \sup_{\rho}\|\rho(x)\| \,\, \text{for $x$ in $A\odot B$}\end{aligned}$$ Here, the supremum is taken over all (algebraic) representations of the $\ast$-algebra $A\odot B$. The maximal tensor product $A\otimes_{\text{max}}B$ has the universal property that any pair of commuting $\ast$-homomorphisms from $A$ and from $B$ to a ${C^\ast\text{-algebra}}$ $C$ “extends” uniquely to a $\ast$-homomorphism from $A\otimes_{\text{max}}B$ to $C$.
The [minimal tensor product]{} $A\otimes_{\text{min}}B$, or simply $A\otimes B$, of $A$ and $B$ is defined in the following way. We first faithfully represent $A$ and $B$ on Hilbert spaces ${\mathcal{H}}_1$ and ${\mathcal{H}}_2$. Then, the algebraic tensor product $A\odot B$ is realized as a $\ast$-subalgebra of $B({\mathcal{H}}_1\otimes{\mathcal{H}}_2)$, where ${\mathcal{H}}_1\otimes{\mathcal{H}}_2$ is the tensor product of the Hilbert spaces. We take $A\otimes_{\text{min}}B$ to be the completion of $A\odot B$ inside $B({\mathcal{H}}_1\otimes{\mathcal{H}}_2)$. It is independent of the choice of representations. Let $A$ be a ${C^\ast\text{-algebra}}$ and $X$ be a locally compact Hausdorff space. The algebra $C_0(X,A)$ (also denoted by $A(X)$) of $A$-valued continuous functions on $X$ which vanish at infinity naturally becomes a ${C^\ast\text{-algebra}}$. There is a canonical isomorphism from the tensor product $A\otimes C_0(X)$ to $A(X)$ which sends an elementary tensor $a\otimes f$ to the function $x\mapsto f(x)a$. When $X$ is an interval, say $(0,1)$, we simply write $A(0,1)$ for $A(X)$. The [nuclearity]{} of ${C^\ast\text{-algebras}}$ is a very fundamental notion in ${C^\ast\text{-algebra}}$ theory. We refer to [@BrOza] for a detailed account of this class of ${C^\ast\text{-algebras}}$. Here, we note the following important facts. For a nuclear ${C^\ast\text{-algebra}}$ $A$, the maximal tensor product $A\otimes_{\text{max}}B$ and the minimal tensor product $A\otimes B$ coincide for any ${C^\ast\text{-algebra}}$ $B$. All commutative ${C^\ast\text{-algebras}}$ are nuclear. A direct sum and an inductive limit of nuclear ${C^\ast\text{-algebras}}$ are nuclear. The minimal (maximal) tensor product of nuclear ${C^\ast\text{-algebras}}$ is nuclear.

Further Preliminaries
=====================

In this chapter, we give further preparation needed for the discussions in the following chapters.
The contents of this chapter include proper ${C^\ast\text{-algebras}}$, graded Hilbert spaces, graded ${C^\ast\text{-algebras}}$, Hilbert modules, continuous fields of Hilbert spaces, continuous fields of ${C^\ast\text{-algebras}}$, unbounded operators on a Hilbert space and unbounded multipliers on a Hilbert module. In this chapter, $G$ always denotes a second countable, locally compact topological group. A second countable, locally compact, Hausdorff topological space equipped with a $G$-action $G\times X\to X$ is called a $G$-space. A $G$-space $X$ is called a proper $G$-space if the map $G\times X \ni (g, x) \to (gx, x) \in X\times X$ is proper (i.e. the inverse image of any compact set is compact). A separable $G$-${C^\ast\text{-algebra}}$ $A$ is a [proper]{} $G$-${C^\ast\text{-algebra}}$ if, for some second countable, locally compact proper $G$-space $X$, there exists an equivariant $\ast$-homomorphism from the $G$-${C^\ast\text{-algebra}}$ $C_0(X)$ to the center $Z(M(A))$ of the multiplier algebra of $A$ such that $C_0(X)A$ is dense in $A$. We denote by $A_c(X)$ the (frequently non-complete) subalgebra $C_c(X)A$ of $A$. Let $X$ be a proper $G$-space. A cut-off function $c$ on $X$ is a bounded, non-negative continuous function on $X$ satisfying the following conditions. First, for any compact subset $K$ of $X$, there exists a compact subset $L$ of $G$ such that $(gc)(x)=c(g^{-1}x)=0$ for any $x$ in $K$ and for any $g$ outside $L$: in other words, the map $g\mapsto (gc)f$ from $G$ to $C_c(X)\subset C_0(X)$ has compact support for any $f$ in $C_c(X)$. Secondly, $\int_G(gc)(x)^2d\mu=1$ for all $x$ in $X$. A cut-off function exists for any proper $G$-space $X$; this may be constructed as follows. Our assumption on $X$ ensures the orbit space $X/G$ is second countable, locally compact, Hausdorff and in particular paracompact. 
We can take a family of compact sets and relatively compact open sets $K_\lambda\subset U_\lambda$ of $X/G$ such that the compact sets $K_\lambda$ cover $X/G$ and the family of relatively compact open sets $U_\lambda$ is locally finite. Now, we can further take a family of compact sets and relatively compact open sets $F_\lambda\subset W_\lambda$ in $X$ such that the image of each $W_\lambda$ in $X/G$ is contained in $U_\lambda$ and the image of each $F_\lambda$ covers $K_\lambda$. Now, for each $\lambda$, take a continuous nonnegative function $\theta_\lambda$ such that $\theta_\lambda(x)=1$ for all $x$ in $F_\lambda$, with support contained in $W_\lambda$. Then, the expression $\theta=\Sigma\theta_\lambda$ defines a continuous, nonnegative function $\theta$ such that first, for any $G$-compact subset $F$ of $X$, there exists a $G$-invariant open subset $W$ of $X$ containing $F$ on which the sum $\Sigma\theta_\lambda$ becomes a finite sum (thus $\theta$ restricted to $W$ has compact support) and secondly, for any $x$ in $X$ there exists $g\in G$ with $\theta(gx)>0$. The desired cut-off function $c$ on $X$ can be defined by $c(x)={\left(\frac{\theta(x)}{\int_G(g\theta)(x)d\mu} \right)}^{1/2}$. We also remark here that the set of cut-off functions on a proper $G$-space $X$ is connected in $C_b(X)$. (cf. [@HigRoe] APPENDIX A) A graded $G$-Hilbert space is a $G$-Hilbert space ${\mathcal{H}}$ with a fixed grading automorphism $\epsilon$ which is involutive (a selfadjoint unitary) and commutes with the action of $G$. A grading automorphism, or simply a grading $\epsilon$ defines a decomposition of ${\mathcal{H}}$ into two orthogonal closed $G$-invariant subspaces ${\mathcal{H}}^{(0)}$ and ${\mathcal{H}}^{(1)}$, where ${\mathcal{H}}^{(0)}$ (resp. ${\mathcal{H}}^{(1)}$) is the $+1$ (resp. $-1$) eigenspace of $\epsilon$. In this way, a graded $G$-Hilbert space is understood as nothing but a pair of $G$-Hilbert spaces ${\mathcal{H}}^{(0)}$ and ${\mathcal{H}}^{(1)}$. 
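The eigenspace decomposition induced by a grading can be made concrete in a small finite-dimensional model (a toy illustration, with a grading on $\mathbb{C}^3$): the $\pm1$ eigenspaces are recovered by the spectral projections $\frac{1}{2}(1\pm\epsilon)$, and operators commuting with $\epsilon$ preserve both subspaces.

```python
import numpy as np

# A toy grading on C^3: epsilon is a selfadjoint unitary; H^(0) and H^(1)
# are its +1 and -1 eigenspaces, cut out by the projections (1 +- eps)/2.
eps = np.diag([1.0, 1.0, -1.0])
p0 = (np.eye(3) + eps) / 2   # projection onto H^(0)
p1 = (np.eye(3) - eps) / 2   # projection onto H^(1)

is_involutive = np.allclose(eps @ eps, np.eye(3))
is_resolution = np.allclose(p0 + p1, np.eye(3))
are_orthogonal = np.allclose(p0 @ p1, np.zeros((3, 3)))

# An operator commuting with epsilon is block diagonal, i.e. it maps each
# eigenspace into itself (here T is an arbitrary block-diagonal example).
T = np.array([[1., 2., 0.], [3., 4., 0.], [0., 0., 5.]])
preserves = np.allclose(eps @ T, T @ eps) and np.allclose(p1 @ T @ p0, 0)
print(is_involutive, is_resolution, are_orthogonal, preserves)
```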
An operator on a graded $G$-Hilbert space is called [even]{} (resp. [odd]{}) if it commutes (resp. anti-commutes) with the grading $\epsilon$. A graded tensor product ${\mathcal{H}}_1\hat\otimes{\mathcal{H}}_2$ of graded $G$-Hilbert spaces ${\mathcal{H}}_1$ and ${\mathcal{H}}_2$ is defined as the Hilbert space ${\mathcal{H}}_1\otimes{\mathcal{H}}_2$ with the grading $\epsilon_1\otimes\epsilon_2$ where $\epsilon_i$ are the gradings on ${\mathcal{H}}_i$ for $i=1,2$. (cf. [@HigRoe] APPENDIX A) A graded $G$-${C^\ast\text{-algebra}}$ is a $G\times\mathbb{Z}/2\mathbb{Z}$-${C^\ast\text{-algebra}}$ $A$. This is nothing but a $G$-${C^\ast\text{-algebra}}$ with a fixed grading automorphism of $A$ of order two which commutes with the $G$-action. A graded $G$-${C^\ast\text{-algebra}}$ $A$ decomposes into two $G$-invariant closed selfadjoint subspaces $A^{(0)}$ and $A^{(1)}$, where $A^{(0)}$ (resp. $A^{(1)}$) is the $+1$ (resp. $-1$) eigenspace of the grading automorphism on $A$. They satisfy $A^{(i)}\cdot A^{(j)} \subset A^{(i+j)}$ for $i,j \in \mathbb{Z}/2\mathbb{Z}$. An element $a$ in $A^{(i)}$ is called homogeneous of degree $i$, and we write $\partial a = i$. We also call an element of $A^{(0)}$ (resp. $A^{(1)}$) even (resp. odd). The graded commutator $[\,,\,]$ is defined by $[a,b]=ab-(-1)^{\partial a\partial b}ba $ for homogeneous elements $a,b \in A$, extended bilinearly. Let ${\mathcal{H}}$ be a graded $G$-Hilbert space with a grading $\epsilon$. The conjugation by $\epsilon$ defines a grading on the ${C^\ast\text{-algebra}}$ ${B(\mathcal{H})}$. The algebra $C_0(\mathbb{R})$ of continuous functions on the real line which vanish at infinity becomes a graded ${C^\ast\text{-algebra}}$ by the grading automorphism which is the identity on even functions and $-1$ on odd functions. We denote this graded ${C^\ast\text{-algebra}}$ by $\mathcal{S}$. Let $A$ and $B$ be graded $G$-${C^\ast\text{-algebras}}$. There is a notion of graded tensor products of $A$ and $B$. 
The crucial feature is that we first define a product and an involution on an algebraic tensor product $A\hat\odot B$ (a tensor product of vector spaces) by $(a\hat\otimes b)(c\hat\otimes d)=(-1)^{\partial b\partial c}(ac)\hat\otimes(bd), (a\hat\otimes b)^\ast=(-1)^{\partial a\partial b}a^\ast\hat\otimes b^\ast$ for homogeneous $a,c \in A, \, b,d \in B$. There are the maximal graded tensor product $A\hat\otimes_{\text{max}}B$ and the minimal graded tensor product $A\hat\otimes_{\text{min}}B$, or simply $A\hat\otimes B$. They coincide when $A$ or $B$ is nuclear. We refer to the book [@Bla] for further details. \[Clif\] (Clifford algebras) (cf. [@HigRoe] APPENDIX A) Let $V$ be a finite dimensional real inner product space. The complexified exterior algebra $\Lambda^\ast(V)\otimes\mathbb{C}$ naturally becomes a graded Hilbert space. The Clifford algebra $\operatorname{Cliff}(V)$ of $V$ is the (graded) ${C^\ast}$-subalgebra of $B(\Lambda^\ast(V)\otimes\mathbb{C})$ generated by the Clifford multiplication operators $c(v)=\text{ext}(v)+\text{int}(v)$ for $v \in V$. Here, $\text{ext}(v)$ is the exterior multiplication by $v$ and $\text{int}(v)$ is its adjoint. One can also define the Clifford algebra $\overline{\operatorname{Cliff}}(V)$ as the (graded) ${C^\ast}$-subalgebra of $B(\Lambda^\ast(V)\otimes\mathbb{C})$ generated by the Clifford multiplication operators $\overline c(v)=\text{ext}(v)-\text{int}(v)$. The graded ${C^\ast\text{-algebras}}$ $\operatorname{Cliff}(V)$ and $\overline{\operatorname{Cliff}}(V)$ are both isomorphic (as graded ${C^\ast\text{-algebras}}$) to the (abstract) Clifford algebra $\mathbb{C}_n$, where $n=\text{dim}(V)$; this is the graded ${C^\ast\text{-algebra}}$ generated by $n$ anticommuting odd selfadjoint unitaries. A graded tensor product $\operatorname{Cliff}(V)\hat\otimes\overline{\operatorname{Cliff}}(V)$ can be naturally identified with $B(\Lambda^\ast(V)\otimes\mathbb{C})$. 
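In dimension one these facts reduce to elementary $2\times2$ matrix identities, which can be checked directly (a sketch; the basis $\{1, w\}$ of $\Lambda^\ast(\mathbb{R})\otimes\mathbb{C}$ is used, so all operators are $2\times2$ matrices):

```python
import numpy as np

# Basis {1, w} of Lambda^*(R) (x) C: ext(w) sends 1 -> w, int(w) sends w -> 1.
ext = np.array([[0., 0.], [1., 0.]])
intr = ext.T                      # int(w) is the adjoint of ext(w)
c = ext + intr                    # Clifford multiplication c(w)
cbar = ext - intr                 # the "opposite" multiplication cbar(w)
I2 = np.eye(2)

sq_c = np.allclose(c @ c, I2)            # c(w)^2 = |w|^2 = 1
sq_cbar = np.allclose(cbar @ cbar, -I2)  # cbar(w)^2 = -1
anticomm = np.allclose(c @ cbar + cbar @ c, 0)

# The four products {1, c, cbar, c cbar} span all of M_2(C), reflecting the
# identification of Cliff(V) (graded tensor) Cliffbar(V) with
# B(Lambda^*(V) (x) C) in dimension one.
span = np.stack([I2, c, cbar, c @ cbar]).reshape(4, 4)
full = np.linalg.matrix_rank(span) == 4
print(sq_c, sq_cbar, anticomm, full)
```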
Also, we have a natural isomorphism $\operatorname{Cliff}(V)\hat\otimes \operatorname{Cliff}(W) \cong \operatorname{Cliff}(V\oplus W)$ for finite dimensional real inner product spaces $V$ and $W$. When a group $G$ acts on $V$ by linear isometries $g\colon v\mapsto g(v)$ for $g$ in $G$ and $v$ in $V$, the ${C^\ast\text{-algebra}}$ $\operatorname{Cliff}(V)$ or $\overline{\operatorname{Cliff}}(V)$ naturally becomes a graded $G$-${C^\ast\text{-algebra}}$ by defining $g(c(v))=c(g(v))$ for $g$ in $G$ and $v$ in $V$. (cf. [@Bla] Chapter 13.) Let $B$ be a $G$-${C^\ast\text{-algebra}}$. A pre-Hilbert $B$-module is a right $B$-module $\mathcal{E}$ with a $B$-valued inner product $\langle \cdot,\cdot \rangle \colon \mathcal{E} \times \mathcal{E} \to B$, linear in the second variable and satisfying ${\langle e_1,e_2b \rangle}={\langle e_1,e_2 \rangle}b$, ${\langle e_1,e_2 \rangle}^\ast={\langle e_2,e_1 \rangle}$ and ${\langle e,e \rangle}\geq0$, with ${\langle e,e \rangle}=0$ only for $e=0$. The norm on ${\mathcal{E}}$ is defined by $\|e\|=\|{\langle e,e \rangle}\|^{\frac12}$ for $e \in {\mathcal{E}}$. If ${\mathcal{E}}$ is complete with this norm, we call it a Hilbert $B$-module. It is called full if the closed linear span of ${\langle {\mathcal{E}},{\mathcal{E}}\rangle}$ is $B$. A Hilbert $G$-$B$-module is a Hilbert $B$-module with a continuous $G$-action which is compatible with the $B$-module structure: ${\langle ge_1,ge_2 \rangle}=g{\langle e_1,e_2 \rangle}$, $g(eb)=g(e)g(b)$ for $g\in G, e,e_1,e_2\in{\mathcal{E}}, b\in B$. For a graded $G$-${C^\ast\text{-algebra}}$ $B$ (i.e. $G\times{\mathbb{Z}}/2{\mathbb{Z}}$-${C^\ast\text{-algebra}}$), a graded Hilbert $G$-$B$-module is nothing but a Hilbert $G\times{\mathbb{Z}}/2{\mathbb{Z}}$-$B$-module. For Hilbert $B$-modules ${\mathcal{E}}_1,{\mathcal{E}}_2$, a $B$-linear map $T\colon{\mathcal{E}}_1\to{\mathcal{E}}_2$ is called adjointable if there exists a $B$-linear map $T^\ast\colon{\mathcal{E}}_2\to{\mathcal{E}}_1$ such that ${\langle Te_1,e_2 \rangle}={\langle e_1,T^\ast e_2 \rangle}$ for $e_1\in {\mathcal{E}}_1, e_2\in {\mathcal{E}}_2$. An adjointable $B$-linear map is automatically continuous. We denote the set of adjointable $B$-linear maps on ${\mathcal{E}}_1$ (resp. 
from ${\mathcal{E}}_1$ to ${\mathcal{E}}_2$) by $B({\mathcal{E}}_1)$ (resp. $B({\mathcal{E}}_1,{\mathcal{E}}_2)$). The set $B({\mathcal{E}}_1)$ becomes a ${C^\ast\text{-algebra}}$ with the operator norm. If ${\mathcal{E}}_1$ is a graded Hilbert $G$-$B$-module, $B({\mathcal{E}}_1)$ naturally becomes a graded ${C^\ast\text{-algebra}}$ with a $G$-action. An adjointable $B$-linear map $T$ from ${\mathcal{E}}_1$ to ${\mathcal{E}}_2$ is called compact if it is in the closed linear span of the $B$-rank-one operators $\theta_{e_2,e_1}\colon e \mapsto e_2{\langle e_1,e \rangle}$ for $e_1\in {\mathcal{E}}_1, e_2\in{\mathcal{E}}_2$. The set of compact operators on ${\mathcal{E}}_1$, denoted by ${\mathcal{K}}({\mathcal{E}}_1)$, is an ideal of $B({\mathcal{E}}_1)$ (we also denote the set of compact operators from ${\mathcal{E}}_1$ to ${\mathcal{E}}_2$ by ${\mathcal{K}}({\mathcal{E}}_1,{\mathcal{E}}_2)$). Moreover, $B({\mathcal{E}}_1)$ can be identified with the multiplier algebra $M({\mathcal{K}}({\mathcal{E}}_1))$. The quotient ${C^\ast\text{-algebra}}$ of $B({\mathcal{E}}_1)$ by the ideal ${\mathcal{K}}({\mathcal{E}}_1)$ is sometimes called the Calkin algebra of ${\mathcal{E}}_1$; we denote it by $Q({\mathcal{E}}_1)$. If ${\mathcal{E}}_1$ is a graded Hilbert $G$-$B$-module, ${\mathcal{K}}({\mathcal{E}}_1)$ is a graded $G$-${C^\ast\text{-algebra}}$. (Graded) exterior and interior tensor products of Hilbert modules are defined in [@Bla] for example. For ungraded (resp. graded) Hilbert $G$-$B_i$-modules ${\mathcal{E}}_i$ for $i=1, 2$, we denote their ungraded (resp. graded) exterior tensor product by ${\mathcal{E}}_1\otimes{\mathcal{E}}_2$ (resp. ${\mathcal{E}}_1\hat\otimes{\mathcal{E}}_2$). It is an ungraded (resp. graded) Hilbert $G$-$B_1\hat\otimes B_2$ module. We may sometimes omit $\otimes$ or $\hat\otimes$ when no confusion can arise. Let $B$ be a graded $G$-${C^\ast\text{-algebra}}$. 
Then, $B$ itself may be viewed as a graded Hilbert $G$-$B$-module with ${\langle b_1,b_2 \rangle}=b_1^\ast b_2$. The ${C^\ast\text{-algebra}}$ ${\mathcal{K}}(B)$ of compact operators on $B$ is naturally isomorphic to $B$ by means of left multiplication. Hence the ${C^\ast\text{-algebra}}$ $B(B)$ of adjointable operators on $B$ is isomorphic to the multiplier algebra $M(B)$. The standard $G$-Hilbert space ${\mathcal{H}}_G$ is defined as $L^2(G)\otimes l^2$ equipped with the left-regular representation of $G$ on $L^2(G)$ and the trivial representation on $l^2$. For a proper $G$-${C^\ast\text{-algebra}}$ $A$, any countably generated Hilbert $G$-$A$-module can be equivariantly embedded into a standard one $A\otimes{\mathcal{H}}_G$. \[prop:stab\](cf. [@bolic] PROPOSITION 5.5.) Let $A$ be a proper ${C^\ast\text{-algebra}}$ with the base space $X$ and ${\mathcal{E}}$ be a countably generated Hilbert $G$-$A$-module. Then, there exists a $G$-equivariant adjointable isometry $V$ from ${\mathcal{E}}$ to $A\otimes{\mathcal{H}}_G$. Take any cut-off function $c$ on $X$. We have an adjointable isometry $V$ from ${\mathcal{E}}$ to $L^2(G, {\mathcal{E}})$ which maps an element $e$ in ${\mathcal{E}}$ to the function $f\colon g\mapsto (gc)e$, where the translate $gc$ acts on ${\mathcal{E}}$ via the $C_0(X)$-structure. The adjoint of $V$ is the operator from $L^2(G, {\mathcal{E}})$ to ${\mathcal{E}}$ sending a function $f$ to $\int_G(gc)f(g)d\mu$; the defining property of the cut-off function gives $V^\ast V=1$. This defines an equivariant embedding of ${\mathcal{E}}$ into $L^2(G, {\mathcal{E}})$. Now, by using any non-equivariant embedding $W$ of ${\mathcal{E}}$ into $A\otimes l^2$ (we refer to the book [@Bla] for the existence of such embeddings), we have an equivariant embedding $\tilde W$ of $L^2(G, {\mathcal{E}})$ into $L^2(G, A\otimes l^2)$ which sends a function $f$ to the function $\tilde W(f)\colon g\mapsto g(W(g^{-1}(f(g))))$. Hence, we have an equivariant embedding of ${\mathcal{E}}$ into $L^2(G, A\otimes l^2)\cong A\otimes{\mathcal{H}}_G$. 
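The key isometry in the proof can be checked in a toy discrete model (a hypothetical finite stand-in: $G = \mathbb{Z}/3$ acting on $X = \mathbb{Z}/3$ by translation, the Haar integral replaced by a sum, and only the fibrewise identity $V^\ast V = 1$ verified):

```python
import numpy as np

# Toy model: G = Z/3 acts on X = Z/3 by translation; the discrete "cut-off"
# c satisfies sum_g c(g^{-1}x)^2 = 1 for every x (sum over the orbit).
n = 3
theta = np.array([2.0, 1.0, 0.5])          # any positive function on X
c = np.sqrt(theta / theta.sum())           # normalized over the (transitive) orbit

# (V e)(g, x) = c(g^{-1} x) e(x): a 9 x 3 matrix acting on e in C(X) = C^3.
V = np.zeros((n * n, n))
for g in range(n):
    for x in range(n):
        V[g * n + x, x] = c[(x - g) % n]

isometry = np.allclose(V.T @ V, np.eye(n))  # V*V = 1 via the cut-off property
print(isometry)
```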
In the above proposition, considering in particular the Hilbert $G$-$A$-module $A$, we have a $G$-equivariant embedding $V$ of $A$ into $A\otimes{\mathcal{H}}_G$. We have an injective $G$-equivariant $\ast$-homomorphism $\operatorname{Ad}_V$ from ${\mathcal{K}}(A)\cong A$ into ${\mathcal{K}}(A\otimes{\mathcal{H}}_G)\cong A\otimes{\mathcal{K}}({\mathcal{H}}_G)$. This is called the stabilization of a proper $G$-${C^\ast\text{-algebra}}$. Here, for any adjointable isometry $V$ from a Hilbert $G$-$B$-module ${\mathcal{E}}_1$ to ${\mathcal{E}}_2$, we define the $\ast$-homomorphism $\operatorname{Ad}_V\colon T\mapsto VTV^\ast$ from $B({\mathcal{E}}_1)$ to $B({\mathcal{E}}_2)$, which is $G$-equivariant if $V$ is; this $\ast$-homomorphism restricts to the $\ast$-homomorphism $\operatorname{Ad}_V$ from ${\mathcal{K}}({\mathcal{E}}_1)$ to ${\mathcal{K}}({\mathcal{E}}_2)$. (Continuous field of Hilbert spaces) (cf. [@DixC] CHAPTER 10.) A continuous field of (complex) $G$-Hilbert spaces over a locally compact, Hausdorff topological space $X$ is a pair $(({\mathcal{H}}_x)_{x\in X}, \Gamma)$ of a family $({\mathcal{H}}_x)_{x\in X}$ of $G$-Hilbert spaces over $X$ and a prescribed set $\Gamma$ of sections of $({\mathcal{H}}_x)_{x\in X}$ (we call them basic sections) satisfying the following conditions: $\Gamma$ is a vector space (over ${\mathbb{C}}$) with respect to its vector space structure coming from those of ${\mathcal{H}}_x$; $\Gamma_x=\{\,v(x)\in{\mathcal{H}}_x \,\mid\, v\in\Gamma\,\}$ is dense in ${\mathcal{H}}_x$ for each $x$ in $X$; for any sections $v,w$ in $\Gamma$, the function $x \mapsto \langle v(x),w(x)\rangle_x$ is continuous on $X$ ($\langle\cdot,\cdot\rangle_x$ denotes the inner product on ${\mathcal{H}}_x$); the set $\Gamma$ is $G$-invariant with respect to the evident pointwise action of $G$ on $({\mathcal{H}}_x)_{x\in X}$; and for any section $v$ in $\Gamma$ and for any sequence $(g_n)$ converging to $1$ in $G$, $(g_n(v(x)))$ converges to $v(x)$ uniformly on 
compact subsets of $X$. An arbitrary section of $({\mathcal{H}}_x)_{x\in X}$ is said to be continuous if it is a uniform limit over compact subsets of $X$ of basic sections. The set ${\mathcal{E}}$ of continuous sections which vanish at infinity becomes a Hilbert $G$-$C_0(X)$-module. The inner product on ${\mathcal{E}}$ is defined by $\langle v,w\rangle\colon x\mapsto \langle v(x),w(x)\rangle_x$ for $v,w$ in ${\mathcal{E}}$. The continuous field of graded $G$-Hilbert spaces can be defined analogously. In this case, the set of continuous sections which vanish at infinity becomes a graded Hilbert $G$-$C_0(X)$-module. We note here that one can further generalize this construction to define a continuous field of graded Hilbert $G$-$B$-modules. (Continuous field of ${C^\ast\text{-algebras}}$) (cf. [@DixC] CHAPTER 10.) A continuous field of $G$-${C^\ast\text{-algebras}}$ over a locally compact, Hausdorff topological space $X$ can be defined similarly to a continuous field of Hilbert spaces. It is a pair $((A_x)_{x\in X}, \Gamma)$ of a family $(A_x)_{x\in X}$ of ${C^\ast\text{-algebras}}$ over $X$ and a prescribed set $\Gamma$ of (basic) sections of $(A_x)_{x\in X}$ satisfying the following conditions: $\Gamma$ is a $\ast$-algebra with respect to its structure of a $\ast$-algebra coming from those of $A_x$; $\Gamma_x=\{\,v(x)\in A_x \,\mid\, v\in\Gamma\,\}$ is dense in $A_x$ for each $x$ in $X$; for any section $v$ in $\Gamma$, the function $x \mapsto ||v(x)||_x$ is continuous on $X$ ($||\cdot||_x$ denotes the norm on $A_x$); the set $\Gamma$ is $G$-invariant; and for any section $v$ in $\Gamma$ and for any sequence $(g_n)$ converging to $1$ in $G$, $(g_n(v(x)))$ converges to $v(x)$ uniformly on compact subsets of $X$. An arbitrary section of $(A_x)_{x\in X}$ is said to be continuous if it is a uniform limit over compact subsets of $X$ of basic sections. The set $A$ of continuous sections which vanish at infinity becomes a ${C^\ast\text{-algebra}}$. 
The continuous field of graded $G$-${C^\ast\text{-algebras}}$ can be defined analogously. In this case, the set of continuous sections which vanish at infinity becomes a graded $G$-${C^\ast\text{-algebra}}$. Given a continuous field $((A_x)_{x\in X}, \Gamma)$ of graded $G$-${C^\ast\text{-algebras}}$ and a nuclear graded $G$-${C^\ast\text{-algebra}}$ $B$, one can perform a (graded) tensor product on each fiber to get a new continuous field $((A_x\hat\otimes B)_{x\in X}, \Gamma')$. The set of basic sections $\Gamma'$ can be defined as the span of elementary tensors $v\hat\otimes b$ with $v\in\Gamma$ and $b\in B$. On the other hand, when $G$ is an abelian group or, more generally, an amenable group (a group $G$ is said to be amenable if its reduced group ${C^\ast\text{-algebra}}$ $C^{\ast}_{\text{red}}(G)$ is nuclear), one can perform a reduced (maximal) crossed product on each fiber to get a continuous field $(({C^\ast}_{\text{red}}(G,A_x))_{x\in X}, \Gamma'')$. The set of basic sections $\Gamma''$ can be defined as the span of elementary tensors $f\otimes v$ with $v\in\Gamma$ and $f\in C_c(G)$ a compactly supported continuous function on $G$. That these indeed define continuous fields of ${C^\ast\text{-algebras}}$ is explained in [@KW] for example. Let $(({\mathcal{H}}_x)_{x\in X}, \Gamma)$ be a continuous field of graded $G$-Hilbert spaces over a locally compact space $X$. It naturally defines a continuous field $(({\mathcal{K}}({\mathcal{H}}_x))_{x\in X}, \Gamma')$ of graded $G$-${C^\ast\text{-algebras}}$ over $X$. The set of basic sections $\Gamma'$ is defined to be the span of the rank-one sections $x\mapsto \theta_{v(x),w(x)}$ for sections $v,w$ in $\Gamma$. (Unbounded operators on a Hilbert space) (cf. [@analysisnow] CHAPTER 5.) Let ${\mathcal{H}}$ be a Hilbert space over ${\mathbb{C}}$ or ${\mathbb{R}}$ and denote the inner product on ${\mathcal{H}}$ by $\langle\cdot,\cdot\rangle$. 
A linear map $T$ from a dense subspace $D(T)$ of ${\mathcal{H}}$ to ${\mathcal{H}}$ is usually called an unbounded operator on ${\mathcal{H}}$. (One can of course consider an unbounded operator taking another Hilbert space for its range.) The adjoint $T^\ast$ of $T$ is defined on $D(T^\ast)=\{\,v\in{\mathcal{H}}\mid w\mapsto \langle v,Tw\rangle \,\,\text{is a bounded linear functional on $D(T)$}\,\}$; for any $v$ in $D(T^\ast)$, $T^\ast v$ is defined to be the unique vector in ${\mathcal{H}}$ satisfying $\langle T^\ast v, w\rangle= \langle v,Tw\rangle$ for all $w$ in $D(T)$. When $D(T^\ast)$ is a dense subspace of ${\mathcal{H}}$, in other words, if $T^\ast$ is densely defined on ${\mathcal{H}}$, then we say $T$ is adjointable, and call $T^\ast$ the adjoint of $T$. Evidently, in this case, $T^\ast$ is an adjointable unbounded operator on ${\mathcal{H}}$ and its adjoint $T^{\ast\ast}$ is an extension of $T$. (By an extension of $T$, we mean an unbounded operator $S$ on ${\mathcal{H}}$ defined on a domain containing $D(T)$ such that $S=T$ on $D(T)$ as linear maps.) For an adjointable operator $T$, $D(T^{\ast\ast})$ coincides with the subspace $$\{\,v\in{\mathcal{H}}\mid \text{there exists a sequence $(v_n)\subset D(T)$ s.t. $v_n\to v$ and $(Tv_n)$ is convergent as $n\to \infty$}\,\}.$$ An adjointable unbounded operator $T$ on ${\mathcal{H}}$ is called symmetric if $T^\ast$ is an extension of $T$, selfadjoint if $T$ is symmetric and $D(T)=D(T^\ast)$, and essentially selfadjoint if $T^{\ast\ast}$ is selfadjoint. For a symmetric operator $T$, being essentially selfadjoint is equivalent to $T^2+1$ having dense range; in the complex case, this is equivalent to $T\pm i$ having dense ranges. In this case, we have a well-defined bounded operator $(T^2+1)^{-1}$ and, in the complex case, $(T\pm i)^{-1}$. We say an essentially selfadjoint operator $T$ has compact resolvent if these operators are compact. 
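For a diagonal operator the compact resolvent condition just defined is easy to see concretely; the following sketch (an illustrative toy computation, not part of the formal development) shows that for $T = \operatorname{diag}(1,2,3,\dots)$ the resolvent entries tend to $0$, so $(T+i)^{-1}$ is a norm limit of finite-rank truncations, i.e. compact.

```python
import numpy as np

# For T = diag(1, 2, 3, ...), the resolvent (T + i)^{-1} is diagonal with
# entries 1/(n + i); their moduli 1/sqrt(n^2 + 1) decrease to 0, so the
# operator norm of the tail past any finite rank is small.
n = np.arange(1, 10001)
moduli = 1.0 / np.sqrt(n.astype(float) ** 2 + 1)

decreasing = bool(np.all(np.diff(moduli) < 0))
tail_norm = moduli[999:].max()    # operator norm of the tail past rank 1000
print(decreasing, tail_norm)
```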
We are mostly interested in selfadjoint operators, and so, when $T$ is essentially selfadjoint, we usually treat $T$ as a selfadjoint operator by implicitly using its extension $T^{\ast\ast}=T^{\ast\ast\ast}=T^\ast$ whenever this causes no confusion. Any diagonalizable operator with diagonal entries in ${\mathbb{R}}$ is essentially selfadjoint; it has compact resolvent if and only if the number of its eigenvalues lying in any compact set of ${\mathbb{R}}$, counted with multiplicities, is finite. For an essentially selfadjoint operator $T$ on a complex Hilbert space ${\mathcal{H}}$, $T\pm i$ are unbounded operators defined on $D(T)$ which have dense images in ${\mathcal{H}}$ and are bounded away from $0$. One has the unique $\ast$-homomorphism from $C_0({\mathbb{R}})$ to $B({\mathcal{H}})$ sending the functions $(x\pm i)^{-1}$ to $(T\pm i)^{-1}$. When $T$ is a diagonalizable operator, this $\ast$-homomorphism becomes the evident one. If in addition $T$ has compact resolvent, then it is also clear that this $\ast$-homomorphism maps into ${\mathcal{K}}({\mathcal{H}})$. We remark here that in the case when the Hilbert space ${\mathcal{H}}$ is graded and the operator $T$ is odd, the above defined $\ast$-homomorphism becomes a graded $\ast$-homomorphism from the graded ${C^\ast\text{-algebra}}$ ${\mathcal{S}}$. \[harm\](Harmonic oscillator) (cf.[@HigKas2] Definition 2.6.) For a positive real number $\alpha$, let $H=\alpha^2\Delta+x^2$ be an unbounded operator on the real (or complex) Hilbert space $L^2({\mathbb{R}})$ which is defined on the subspace $C^\infty_c({\mathbb{R}})$ of test functions. Here, $\Delta$ is the Laplacian $-\frac{d^2}{dx^2}$. This operator is a diagonalizable operator with diagonal entries in ${\mathbb{R}}$ having compact resolvent, and so in particular, is essentially selfadjoint. This can be seen as follows. First, note that $H=KL-\alpha=LK+\alpha$ where $K=\alpha\frac{d}{dx}+x$ and $L=-\alpha\frac{d}{dx}+x$. 
One finds that $HL^n=L^nH+2n\alpha L^n$ and that the function $f_0(x)=e^{-\frac{x^2}{2\alpha}}$ is in the kernel of $K$ and hence an eigenvector for $H$ with eigenvalue $\alpha$. By $HL^n=L^nH+2n\alpha L^n$, we see that the function $f_n(x)=L^nf_0(x)=p_n(x)e^{-\frac{x^2}{2\alpha}}$ is an eigenvector for $H$ with eigenvalue $(2n+1)\alpha$ for any $n\geq0$, where $p_n(x)$ is a certain polynomial of degree $n$. We note that any two eigenvectors for a symmetric operator corresponding to different eigenvalues are orthogonal. After being normalized, these functions become an orthonormal basis of the Hilbert space $L^2({\mathbb{R}})$ and hence $H$ is diagonalizable with eigenvalues $(2n+1)\alpha$ for $n\geq0$, each of multiplicity one. The functional calculus sends $f$ in $C_0({\mathbb{R}})$ to the bounded diagonal operator $\left( \begin{array}{ccccc} f(\alpha)&0&0&0&\cdots\\ 0&f(3\alpha)&0&0&\cdots\\ 0&0&f(5\alpha)&0&\cdots\\ 0&0&0&\ddots&\\ \vdots&\vdots&\vdots&&\ddots \\ \end{array} \right)$ where we used the mentioned orthonormal basis for $L^2({\mathbb{R}})$ to write an operator as an infinite matrix. \[lemselfad\] Let $T$ be a symmetric unbounded operator on a complex Hilbert space ${\mathcal{H}}$ defined on $D(T)$ with $D(T)=D(T^2)$ (i.e. the image $TD(T)$ is contained in $D(T)$). Suppose $T^2$ is essentially selfadjoint. Then $T$ is essentially selfadjoint. In addition, if $T^2$ has compact resolvent, then so does $T$. Denote the inner product by ${\langle \cdot,\cdot \rangle}$. First, we see $T^2+1$ has dense range. In fact, assume for $y\in{\mathcal{H}}$, ${\langle (T^2+1)x, y \rangle}=0$ for any $x\in D(T)$. Then, it follows $y\in D({T^{2}}^{\ast})$. Take $(y_n)\subset D(T)$ with $y_n\to y$ and $T^2y_n \to {T^2}^\ast y$. We have ${\langle ({T^2}^\ast+1)y, y \rangle}=0$ showing $y=0$ (one checks that $(Ty_n)$ is then also Cauchy, so ${\langle {T^2}^\ast y, y \rangle}=\lim_n\|Ty_n\|^2\geq0$, forcing $\|y\|^2\leq0$). Now, it follows $T\pm i$ have dense ranges. Hence we have bounded operators $(T\pm i)^{-1}$. 
Now, we may assume $T\pm i$ are onto (just replace $T$ by $T^{\ast\ast}$). Take any $y=(T+i)x\in D(T^\ast)$ with $x\in D(T)$: so, $z\mapsto {\langle (T+i)x, Tz \rangle}$ is bounded on $D(T)$. Since $T$ is symmetric, it follows that $x$ is in $D({T^2}^\ast)$; so take $(x_n)\subset D(T)$ with $x_n\to x$ and $((T^2+1)x_n)$ convergent. Applying the bounded operator $(T-i)^{-1}$ to the convergent sequence $((T^2+1)x_n)$, we see $((T+i)x_n)=(y_n)$ converges to $(T+i)x=y$. Also, $(Ty_n)$ is convergent. Thus, $y$ is in $D(T^{\ast\ast})$, showing $T$ is essentially selfadjoint. When $T^2$ has compact resolvent, it follows $(T^2+1)^{-1}$ is a compact operator on ${\mathcal{H}}$. Thus, $(T\pm i)^{-1}$ must be compact. (Bott-Dirac operator)\[BottDirac\] (cf. [@HigKas2] Definition 2.6.) For a positive real number $\alpha$, let $B=\alpha\overline{c}(w)\frac{d}{dx}+c(w)x=\begin{pmatrix} 0& -\alpha \frac{d}{dx}+x \\ \alpha \frac{d}{dx}+x & 0 \end{pmatrix}$ be an odd symmetric unbounded operator on the graded complex Hilbert space ${\mathcal{H}}=L^2({\mathbb{R}}, \Lambda^\ast({\mathbb{R}})\otimes{\mathbb{C}})$ which is defined on the subspace $C^\infty_c({\mathbb{R}}, \Lambda^\ast({\mathbb{R}})\otimes{\mathbb{C}})$. Here, we used the Clifford multiplications $c(w)$ and $\overline{c}(w)$ as explained in Example \[Clif\], where $w$ denotes the standard basis vector of the real Hilbert space ${\mathbb{R}}$. The matrix representation respects the even subspace $L^2({\mathbb{R}})$ and the odd subspace $L^2({\mathbb{R}})w$ of ${\mathcal{H}}$. Note, in the notation of Example \[harm\], $B=\begin{pmatrix} 0& L \\ K & 0 \end{pmatrix}$. Hence, $B^2=\begin{pmatrix} LK & 0 \\ 0 & KL \end{pmatrix}=\begin{pmatrix} H-\alpha & 0 \\ 0 & H+\alpha \end{pmatrix}$. It is now easy to see that $B^2$ is a diagonalizable operator having compact resolvent with eigenvalues $2n\alpha$ $(n\geq0)$ on the even subspace and $2(n+1)\alpha$ $(n\geq0)$ on the odd subspace. 
It follows by Lemma \[lemselfad\] that $B$ is an odd essentially selfadjoint operator having compact resolvent, hence diagonalizable. Note, the eigenvalues of $B$ are necessarily $\pm\sqrt{2n\alpha}$ $(n\geq0)$, each of multiplicity one: if we have a nonzero eigenvalue $a$ of $B$, its eigenvector $v$ can be written $v^{(0)}+v^{(1)}$ with homogeneous $v^{(0)},v^{(1)}$; and it follows that the odd operator $B$ must send $v^{(0)}$ to $av^{(1)}$ and $v^{(1)}$ to $av^{(0)}$. We see $v^{(0)}-v^{(1)}$ is an eigenvector with eigenvalue $-a$. One may want to write $B$ as a diagonal operator, but since any eigenvector for $B$ corresponding to a nonzero eigenvalue is necessarily not homogeneous, it is not so enlightening to do so. However, it is easy to see we have the following way of writing $B$ as an infinite matrix which respects the grading of the Hilbert space: $$B=\left( \begin{array}{ccccc} 0 &0&0&0&\cdots\\ 0& \left(\begin{array}{cc} 0&\sqrt{2\alpha}\\ \sqrt{2\alpha}&0\\ \end{array} \right) &0&0&\cdots\\ 0&0&\left(\begin{array}{cc} 0&\sqrt{4\alpha}\\ \sqrt{4\alpha}&0\\ \end{array} \right)&0&\cdots\\ 0&0&0&\left(\begin{array}{cc} 0&\sqrt{6\alpha}\\ \sqrt{6\alpha}&0\\ \end{array} \right)&\\ \vdots&\vdots&\vdots&&\ddots \\ \end{array} \right)$$ where we are using here a basis of the Hilbert space ${\mathcal{H}}$ consisting of (homogeneous) eigenvectors for $B^2$. We remark that whenever we have an odd (symmetric) diagonalizable operator $T$ on a graded Hilbert space such that $T^2$ is diagonalizable by homogeneous eigenvectors, we can represent $T$ in a similar way using eigenvectors for $T^2$, even if $T^2$ has an infinite dimensional eigenspace for some eigenvalue. 
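The spectra computed above can be sanity-checked numerically (a finite-difference sketch with $\alpha = 1$, Dirichlet truncation to $[-10,10]$; purely illustrative, not part of the formal development): the lowest eigenvalues of $H$ should approximate $(2n+1) = 1, 3, 5, 7$, and since $B^2 = \operatorname{diag}(H-1, H+1)$, the moduli of the eigenvalues of $B$ are then $\sqrt{2n}$.

```python
import numpy as np

# 3-point finite-difference model of H = -d^2/dx^2 + x^2 on [-10, 10].
N, box = 800, 10.0
x = np.linspace(-box, box, N)
h = x[1] - x[0]

lap = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2  # Delta
H = lap + np.diag(x**2)

low = np.sort(np.linalg.eigvalsh(H))[:4]       # should be close to 1, 3, 5, 7
b_abs = np.sqrt(np.maximum(low - 1.0, 0.0))    # |eigenvalues of B|: sqrt(2n)
print(low, b_abs)
```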
Now, the functional calculus for $B$ sends an odd function $f$ to: $$f(B)=\left( \begin{array}{ccccc} 0 &0&0&0&\cdots\\ 0& \left(\begin{array}{cc} 0&f(\sqrt{2\alpha})\\ f(\sqrt{2\alpha})&0\\ \end{array} \right) &0&0&\cdots\\ 0&0&\left(\begin{array}{cc} 0&f(\sqrt{4\alpha})\\ f(\sqrt{4\alpha})&0\\ \end{array} \right)&0&\cdots\\ 0&0&0&\left(\begin{array}{cc} 0&f(\sqrt{6\alpha})\\ f(\sqrt{6\alpha})&0\\ \end{array} \right)&\\ \vdots&\vdots&\vdots&&\ddots \\ \end{array} \right),$$ and an even function $f$ to: $$f(B)=\left( \begin{array}{ccccc} f(0) &0&0&0&\cdots\\ 0& \left(\begin{array}{cc} f(\sqrt{2\alpha})&0\\ 0&f(\sqrt{2\alpha})\\ \end{array} \right) &0&0&\cdots\\ 0&0&\left(\begin{array}{cc} f(\sqrt{4\alpha}) & 0\\ 0 & f(\sqrt{4\alpha}) \\ \end{array} \right)&0&\cdots\\ 0&0&0&\left(\begin{array}{cc} f(\sqrt{6\alpha})&0\\ 0& f(\sqrt{6\alpha})\\ \end{array} \right)&\\ \vdots&\vdots&\vdots&&\ddots \\ \end{array} \right).$$ (Mehler’s formula) (cf. [@HKT] APPENDIX B, [@Shrodinger]) For $\alpha>0$, we have the following equation of bounded operators on the Hilbert space $L^2({\mathbb{R}})$: $$e^{-(\alpha^2\Delta+x^2)}=e^{-r(\alpha)\alpha^{-1}x^2}e^{-s(\alpha)\alpha\Delta}e^{-r(\alpha)\alpha^{-1}x^2} \\$$ with, $$r(\alpha)=\frac{1}{2}\frac{\sinh(\alpha)}{\cosh(\alpha)},\,\,\,\,\, s(\alpha)=\frac{1}{2}\sinh(2\alpha) \\$$ Conjugating by the unitary rescaling $(Uf)(x)=\alpha^{1/4}f(\alpha^{1/2}x)$ transforms $\alpha^2\Delta+x^2$ into $\alpha(\Delta+x^2)$, $\alpha^{-1}x^2$ into $x^2$ and $\alpha\Delta$ into $\Delta$, so it suffices to show the following holds for any $t>0$: $$e^{-t(\Delta+x^2)}=e^{-r(t)x^2}e^{-s(t)\Delta}e^{-r(t)x^2} \\$$ First, notice by letting $A=x^2$, $B=\Delta$, $C=[x^2, \Delta]=4x\frac{d}{dx}+2$, we have $[A, B]=C$, $[C, A]=8A$, $[C,B]=-8B$. On the other hand, the same algebraic relations hold when we set $A$ as $\begin{pmatrix} 0 & 2 \\ 0 & 0 \\ \end{pmatrix} $, $B$ as $\begin{pmatrix} 0 & 0 \\ 2 & 0 \\ \end{pmatrix} $ and $C$ as $\begin{pmatrix} 4 & 0 \\ 0 & -4 \\ \end{pmatrix} $. 
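The $2\times2$ disentangling identity can be checked numerically (a sketch; the matrices below satisfy $[A,B]=C$, $[C,A]=8A$, $[C,B]=-8B$, matching the commutator $[x^2,\Delta]=4x\frac{d}{dx}+2$ on $C_c(\mathbb{R})$, and the identity holds with $r(t)=\tanh(t)/2$, $s(t)=\sinh(2t)/2$):

```python
import numpy as np

A = np.array([[0., 2.], [0., 0.]])
B = np.array([[0., 0.], [2., 0.]])
C = A @ B - B @ A

relations = (np.allclose(C, np.diag([4., -4.]))
             and np.allclose(C @ A - A @ C, 8 * A)
             and np.allclose(C @ B - B @ C, -8 * B))

def expm2(M):
    """Matrix exponential of a diagonalizable 2x2 matrix via eigendecomposition."""
    w, P = np.linalg.eig(M)
    return (P @ np.diag(np.exp(w)) @ np.linalg.inv(P)).real

identity_holds = True
for t in (0.3, 0.7, 1.5):
    r, s = np.tanh(t) / 2, np.sinh(2 * t) / 2
    lhs = expm2(-t * (A + B))
    # A and B are nilpotent, so their exponentials terminate after one term.
    rhs = (np.eye(2) - r * A) @ (np.eye(2) - s * B) @ (np.eye(2) - r * A)
    identity_holds = identity_holds and np.allclose(lhs, rhs)
print(relations, identity_holds)
```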
Hence, it suffices to check that the following holds for any $t>0$: $$e^{-t(B+A)}=e^{-r(t)A}e^{-s(t)B}e^{-r(t)A} \\$$ This can be checked easily, so we omit the rest of the calculation. (Unbounded multiplier on a Hilbert module) (cf. [@HKT] APPENDIX A) Let $B$ be a ${C^\ast\text{-algebra}}$, ${\mathcal{E}}$ be a Hilbert $B$-module and denote the inner product by $\langle\cdot,\cdot\rangle$. The notion of unbounded operators easily translates into this situation. However, since we don’t have an analogue of the Riesz representation theorem on a Hilbert space, there is some difference from the Hilbert space case. An essentially selfadjoint unbounded multiplier is a linear map $T$ defined on a dense $B$-submodule $D(T)$ of ${\mathcal{E}}$ which is symmetric (i.e. $\langle Tv, w\rangle=\langle v, Tw\rangle$ for $v,w$ in $D(T)$) and such that $T\pm i$ have dense images in ${\mathcal{E}}$. Similarly to the Hilbert space case, one has the unique $\ast$-homomorphism from $C_0({\mathbb{R}})$ to $B({\mathcal{E}})$ sending the functions $(x\pm i)^{-1}$ to $(T\pm i)^{-1}$, which we call the functional calculus associated to $T$. We say $T$ has compact resolvent if this $\ast$-homomorphism maps into ${\mathcal{K}}({\mathcal{E}})$ (i.e. if $(T\pm i)^{-1}$ are in ${\mathcal{K}}({\mathcal{E}})$). When the Hilbert module ${\mathcal{E}}$ is graded and the operator $T$ is odd, the above defined $\ast$-homomorphism becomes a graded one. Note the functional calculus is necessarily nondegenerate ($(T\pm i)^{-1}$ have dense range). Conversely, given any nondegenerate $\ast$-homomorphism from $C_0({\mathbb{R}})$ to $B({\mathcal{E}})$, a symmetric unbounded multiplier $x$ on ${\mathcal{E}}$ can be defined on the subspace $C_c({\mathbb{R}}){\mathcal{E}}$ in an obvious way. The image of $(x\pm i)^{-1}$ contains $C_c({\mathbb{R}}){\mathcal{E}}$, and thus, is dense. Consider the graded Hilbert ${\mathcal{S}}$-module ${\mathcal{E}}={\mathcal{S}}\hat\otimes{\mathcal{S}}$. 
A symmetric unbounded multiplier $T=x\hat\otimes1+1\hat\otimes x$ on ${\mathcal{E}}$ is defined on the subspace $C_c({\mathbb{R}})\hat\otimes C_c({\mathbb{R}})$ (the tensor product here is the algebraic one). That the multipliers $(T\pm i)^{-1}$ have dense ranges, i.e. that $T$ is essentially selfadjoint, may not be easy to see directly. However, it is easy to check that we have a graded $\ast$-homomorphism from ${\mathcal{S}}$ to ${\mathcal{S}}\hat\otimes{\mathcal{S}}={\mathcal{K}}({\mathcal{E}})$ sending $e^{-x^2}$ to $e^{-x^2}\hat\otimes e^{-x^2}$ and $xe^{-x^2}$ to $xe^{-x^2}\hat\otimes e^{-x^2}+e^{-x^2}\hat\otimes xe^{-x^2}$. (Continuity can be checked by representing ${\mathcal{S}}\hat\otimes{\mathcal{S}}$ on $(L^2({\mathbb{R}})\oplus L^2({\mathbb{R}})^{\operatorname{op}})\hat\otimes(L^2({\mathbb{R}})\oplus L^2({\mathbb{R}})^{\operatorname{op}})$ for example: $L^2({\mathbb{R}})$ and $L^2({\mathbb{R}})^{\operatorname{op}}$ are an even space and an odd space respectively.) This representation is nondegenerate, which easily implies that $(T\pm i)^{-1}$ have dense ranges. The functional calculus associated to $T$ is evidently our already defined graded $\ast$-homomorphism. The observation given in the above example can be pushed further. Let $B$ be a graded ${C^\ast\text{-algebra}}$, and ${\mathcal{E}}_1$, ${\mathcal{E}}_2$ be graded Hilbert $B$-modules. Given any odd essentially selfadjoint unbounded multipliers $T_1$ and $T_2$ on ${\mathcal{E}}_1$ and on ${\mathcal{E}}_2$ respectively, with domains $D(T_1)$ and $D(T_2)$, we can define an odd symmetric unbounded multiplier $T=T_1\hat\otimes1+1\hat\otimes T_2$ on the Hilbert $B$-module ${\mathcal{E}}_1\hat\otimes{\mathcal{E}}_2$, defined on $D(T_1)\hat\otimes D(T_2)$. Again, it may not be easy to see at first glance that this multiplier $T$ is essentially selfadjoint.
However, we have a graded $\ast$-homomorphism from ${\mathcal{S}}$ to $B({\mathcal{E}}_1)\hat\otimes B({\mathcal{E}}_2)\subset B({\mathcal{E}}_1\hat\otimes{\mathcal{E}}_2)$ defined as the composition of the graded $\ast$-homomorphism from ${\mathcal{S}}$ to ${\mathcal{S}}\hat\otimes{\mathcal{S}}$ which appeared in the above example with the graded $\ast$-homomorphism from ${\mathcal{S}}\hat\otimes{\mathcal{S}}$ to $B({\mathcal{E}}_1)\hat\otimes B({\mathcal{E}}_2)$ which is the graded tensor product of the two functional calculi associated to $T_1$ and $T_2$. This $\ast$-homomorphism is nondegenerate; and thus, it follows that $T$ is essentially selfadjoint and that the functional calculus for $T$ is the graded $\ast$-homomorphism defined above. (cf. [@HigKas2]) We observed that the Bott-Dirac operator $B$ (depending on $\alpha>0$) on the graded Hilbert space (Hilbert ${\mathbb{C}}$-module) ${\mathcal{H}}=L^2({\mathbb{R}}, \Lambda^\ast({\mathbb{R}})\otimes{\mathbb{C}})$ defines an odd essentially selfadjoint operator having compact resolvent, and hence a functional calculus ${\mathcal{S}}\to {\mathcal{K}}({\mathcal{H}})$. Combining this with the odd multiplier $x$ on the graded ${\mathcal{S}}$-module ${\mathcal{S}}$, we now want to consider the functional calculus ${\mathcal{S}}\to{\mathcal{S}}\hat\otimes{\mathcal{K}}({\mathcal{H}})$ associated to the essentially selfadjoint odd unbounded multiplier $T=x\hat\otimes1+1\hat\otimes B$ on the graded Hilbert ${\mathcal{S}}$-module ${\mathcal{S}}\hat\otimes{\mathcal{H}}$. To describe this functional calculus, we first decompose ${\mathcal{S}}\hat\otimes{\mathcal{H}}$ as a direct sum $\displaystyle\bigoplus_{n\geq0}{\mathcal{S}}\hat\otimes{\mathcal{H}}_n$ where ${\mathcal{H}}_n$ is the eigenspace of $B^2$ corresponding to its eigenvalue $2n\alpha$ for $n\geq0$. We just need to describe the functional calculus associated to the multiplier on each summand.
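As a supplementary remark (a standard computation, not spelled out in the source): since $x\hat\otimes1$ and $1\hat\otimes B$ are both odd, they anticommute in the graded tensor product, so the cross terms in $T^2$ cancel and

```latex
T^2=(x\hat\otimes1+1\hat\otimes B)^2=x^2\hat\otimes1+1\hat\otimes B^2.
```

In particular, $T^2$ acts on the summand ${\mathcal{S}}\hat\otimes{\mathcal{H}}_n$ as multiplication by $x^2+2n\alpha$, which explains why this decomposition is adapted to $T$.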
On the summand ${\mathcal{S}}\hat\otimes{\mathcal{H}}_0\cong{\mathcal{S}}$, $T$ acts as $x\hat\otimes1$; thus the functional calculus is just $\operatorname{id}_{{\mathcal{S}}}\colon{\mathcal{S}}\to{\mathcal{S}}={\mathcal{K}}({\mathcal{S}}\hat\otimes{\mathcal{H}}_0)$. On the other “two-dimensional” summands ${\mathcal{S}}\hat\otimes{\mathcal{H}}_n$, $T$ acts as $x\hat\otimes1+1\hat\otimes\begin{pmatrix} 0 & \sqrt{2n\alpha}\\ \sqrt{2n\alpha} & 0 \end{pmatrix}$. For $n\geq1$, it may not be simple to describe directly the functional calculus associated to this odd unbounded multiplier on ${\mathcal{S}}\hat\otimes{\mathcal{H}}_n$. However, there is another way of describing this functional calculus: regard this graded $\ast$-homomorphism ${\mathcal{S}}\to{\mathcal{S}}\hat\otimes{\mathcal{K}}({\mathcal{H}}_n)$ as a $\ast$-homomorphism from $C_0({\mathbb{R}})$ to $C_0({\mathbb{R}})\otimes{\mathcal{K}}({\mathcal{H}}_n)\cong M_2(C_0({\mathbb{R}}))$, i.e. neglect the grading information. Here, we use an isomorphism of (ungraded) ${C^\ast\text{-algebras}}$ ${\mathcal{S}}\hat\otimes{\mathcal{K}}({\mathcal{H}}_n)$ and $C_0({\mathbb{R}})\otimes{\mathcal{K}}({\mathcal{H}}_n)$ sending a homogeneous element $f\hat\otimes T$ to $f\otimes \epsilon^{\partial f} T$ where $\epsilon$ is the grading operator on ${\mathcal{H}}_n$. Then, we may view the functional calculus for $T$ on ${\mathcal{S}}\hat\otimes{\mathcal{H}}_n$ as the functional calculus associated to the unbounded multiplier $\begin{pmatrix} x & \sqrt{2n\alpha} \\ \sqrt{2n\alpha} & -x \\ \end{pmatrix}$ on the “ungraded” Hilbert $C_0({\mathbb{R}})$-module $C_0({\mathbb{R}})\oplus C_0({\mathbb{R}})$. Indeed, one may use this observation on the whole space ${\mathcal{S}}\hat\otimes{\mathcal{H}}$.
Namely, using an isomorphism of ungraded ${C^\ast\text{-algebras}}$ ${\mathcal{S}}\hat\otimes{\mathcal{K}}({\mathcal{H}})$ and $C_0({\mathbb{R}})\otimes{\mathcal{K}}({\mathcal{H}})$ (an isomorphism can be given in the same way as above), we can consider the functional calculus for $T$ as the functional calculus associated to the unbounded multiplier: $$T'=\left( \begin{array}{cccc} x &0&0&\cdots\\ 0& \left(\begin{array}{cc} x&\sqrt{2\alpha}\\ \sqrt{2\alpha}&-x\\ \end{array} \right) &0&\cdots\\ 0&0&\left(\begin{array}{cc} x&\sqrt{4\alpha}\\ \sqrt{4\alpha}&-x\\ \end{array} \right)&\cdots\\ \vdots&\vdots&\vdots&\ddots\\ \end{array} \right)$$ on the ungraded Hilbert $C_0({\mathbb{R}})$-module $C_0({\mathbb{R}})\otimes{\mathcal{H}}$. Note that the functional calculus for $T'$ sends an even function $f$ to: $$f(T')=\left( \begin{array}{cccc} f(x) &0&0&\cdots\\ 0& \left(\begin{array}{cc} f(\sqrt{x^2+2\alpha})&0\\ 0&f(\sqrt{x^2+2\alpha})\\ \end{array} \right) &0&\cdots\\ 0&0&\left(\begin{array}{cc} f(\sqrt{x^2+4\alpha}) & 0\\ 0 & f(\sqrt{x^2+4\alpha}) \\ \end{array} \right)&\cdots\\ \vdots&\vdots&\vdots&\ddots\\ \end{array} \right).$$ $K$-Theory and $K$-Homology of ${C^\ast\text{-algebras}}$ ========================================================= We now review some definitions and basic results of ${C^\ast\text{-algebra}}$ $K$-theory and $K$-homology. They are non-commutative generalizations of the $K$-theory and $K$-homology of locally compact Hausdorff spaces. Our basic reference here is [@HigRoe]. Let $A$ be a ${C^\ast\text{-algebra}}$. Two projections $p,q$ in $M_n(A)$ (we call such projections projections [over]{} $A$) are [unitarily equivalent]{} if there exists a unitary $u\in M_n(A)$ which conjugates one to the other, i.e. $upu^\ast=q$. We define the direct sum of two projections over $A$ by the following: $$p\oplus q= \begin{pmatrix} p & 0 \\ 0 & q \end{pmatrix} \quad \text{for $p \in M_m(A)$ and $q \in M_n(A)$}$$ Let $A$ be a unital ${C^\ast\text{-algebra}}$.
The $\tilde K_0$ group of $A$ is the group $\tilde K_0(A)$ generated by unitary equivalence classes of projections over $A$ subject to the relations $[p]+[q]=[p\oplus q]$ for projections $p,q$ over $A$ and $[0]=0$. This is an abelian group, and it is countable if $A$ is separable. The abelian group $\tilde K_0({\mathbb{C}})$ is isomorphic to the integers $\mathbb{Z}$ via the map which sends the unitary equivalence class $[p]$ of a projection $p \in M_n({\mathbb{C}})$ to the rank of $p$. A unital $\ast$-homomorphism from $A$ to $B$ defines a group homomorphism from $\tilde K_0(A)$ to $\tilde K_0(B)$. In this way, we obtain a (covariant) functor $\tilde K_0$ from the category of unital ${C^\ast\text{-algebras}}$ and unital $\ast$-homomorphisms to the category of abelian groups. Let $A$ be a ${C^\ast\text{-algebra}}$. The $K_0$ group of $A$ is the kernel $K_0(A)$ of the homomorphism $\tilde K_0(\tilde A) \to \tilde K_0(\mathbb{C})$ associated to the unique unital $\ast$-homomorphism from $\tilde A$ to ${\mathbb{C}}$ with kernel $A$. For a unital ${C^\ast\text{-algebra}}$ $A$, the group $K_0(A)$ is isomorphic to the group $\tilde K_0(A)$ via the inclusion $\tilde K_0(A) \to \tilde K_0(\tilde A)$ which identifies the class defined by a projection over $A$ with the class defined by the same projection viewed as a projection over $\tilde A$. Hence, we usually regard $K_0(A)$ as $\tilde K_0(A)$ for a unital ${C^\ast\text{-algebra}}$ $A$. Any $\ast$-homomorphism from $A$ to $B$ extends uniquely to a unital $\ast$-homomorphism from $\tilde A$ to $\tilde B$; using this, we obtain a group homomorphism from $K_0(A)$ to $K_0(B)$. In this way, $K_0$ becomes a functor from the category of ${C^\ast\text{-algebras}}$ and $\ast$-homomorphisms to the category of abelian groups. A [homotopy]{} of $\ast$-homomorphisms from $A$ to $B$ is a $\ast$-homomorphism from $A$ to $B[0,1]$.
Two $\ast$-homomorphisms from $A$ to $B$ are [homotopic]{} if there exists a homotopy whose evaluations at $0$ and $1$ give the two $\ast$-homomorphisms. A [stabilization]{} of a ${C^\ast\text{-algebra}}$ $A$ is a $\ast$-homomorphism from $A$ to $A\otimes {\mathcal{K}}$ sending $a \in A$ to $a\otimes p \in A\otimes{\mathcal{K}}$ where $p$ is a rank-one projection in ${\mathcal{K}}$. Let $F$ be a functor from the category of ${C^\ast\text{-algebras}}$ to the category of abelian groups. A functor $F$ is called [homotopy invariant]{} if any two homotopic $\ast$-homomorphisms induce the same group homomorphism; [stable]{} if any stabilization of any ${C^\ast\text{-algebra}}$ induces an isomorphism of groups; and [half-exact]{} if it sends a short exact sequence of ${C^\ast\text{-algebras}}$: $$\xymatrix{ 0 \ar[r] & J \ar[r] & A \ar[r] & A/J \ar[r] & 0 }$$ to a sequence of groups which is exact at $F(A)$: $$\xymatrix{ F(J) \ar[r] & F(A) \ar[r] & F(A/J) }$$ A functor $F$ is called [split-exact]{} if it sends a split exact sequence of ${C^\ast\text{-algebras}}$: $$\xymatrix{ 0 \ar[r] & J \ar[r] & A \ar[r]_-{\longleftarrow} & A/J \ar[r] & 0 }$$ to a split exact sequence of groups: $$\xymatrix{ 0 \ar[r] & F(J) \ar[r] & F(A) \ar[r]_-{\longleftarrow} & F(A/J) \ar[r] & 0 }$$ The above definitions of the two kinds of exactness are written for a covariant functor $F$. In the contravariant case, the arrows between groups go in the reverse direction. (cf. [@HigRoe] Chapter 4) The functor $K_0$ is a homotopy invariant, stable and half-exact functor. Let $F$ be a functor from the category of ${C^\ast\text{-algebras}}$ to the category of abelian groups, which is homotopy invariant, stable and half-exact. Denote by $S$ the ${C^\ast\text{-algebra}}$ $C_0(\mathbb{R})$ of continuous functions on the real line which vanish at infinity.
For each $n \in \mathbb{N}$, we define a functor $F_n$ from the category of ${C^\ast\text{-algebras}}$ to the category of abelian groups by $F_n(A)=F(S^n\otimes A)$ for a ${C^\ast\text{-algebra}}$ $A$. Then, the functors $F_n$ satisfy the same properties as $F$. Cuntz showed that there is always a natural Bott Periodicity isomorphism $F_n(A)\cong F_{n+2}(A)$ (see [@HigRoe]). Hence we can define functors $F_n$ for each $n \in \mathbb{Z}$ by extending the previous definition with the relations $F_n(A)=F_{n+2}(A)$. The sequence of functors $(F_n)_{n \in \mathbb{Z}}$ becomes a homology (cohomology) theory on ${C^\ast\text{-algebras}}$, i.e. for any short exact sequence of ${C^\ast\text{-algebras}}$: $$\xymatrix{ 0 \ar[r] & J \ar[r] & A \ar[r] & A/J \ar[r] & 0 }$$ there is a natural long exact sequence of abelian groups and group homomorphisms: $$\xymatrix{ \, & \ar[r] & F_n(J) \ar[r] & F_n(A) \ar[r] & F_n(A/J) \ar[r] & F_{n+1}(J) \ar[r] & F_{n+1}(A) \ar[r] & \, }$$ The connecting maps $F_n(A/J)\to F_{n+1}(J)$ are called the [boundary maps]{} of the homology (cohomology) theory $(F_n)_{n \in \mathbb{Z}}$. As a corollary of this, one sees that the functors $F_n$, and in particular $F$, are split-exact. In view of the periodicity, the long exact sequence above is nothing but the six-term exact sequence: $$\xymatrix{ F_1(J) \ar[r] & F_1(A) \ar[r] & F_1(A/J) \ar[d] \\ F_{0}(A/J) \ar[u] & \ar[l] F_{0}(A) & \ar[l] F_{0}(J) }$$ The [${K}$-theory]{} of ${C^\ast\text{-algebras}}$ is the homology theory $(K_n)_{n\in\mathbb{Z}}$ on ${C^\ast\text{-algebras}}$ defined by the functor $K_0$ from the category of ${C^\ast\text{-algebras}}$ to the category of abelian groups. M. Atiyah proposed how to (analytically) define the $K$-homology of ${C^\ast\text{-algebras}}$, which is dual to the $K$-theory of ${C^\ast\text{-algebras}}$. We will follow the treatment given in the book [@HigRoe]. Let $A$ be a separable ${C^\ast\text{-algebra}}$. Let $n$ be a nonnegative integer.
An $n$-multigraded Fredholm module over $A$ is a triple $({\mathcal{H}}, \rho, F)$, where ${\mathcal{H}}$ is a separable graded Hilbert space, $\rho$ is a graded representation of $A\hat\otimes{\mathbb{C}}_n$ on ${\mathcal{H}}$ and $F$ is an odd bounded operator on ${\mathcal{H}}$ satisfying the following relations: $$\begin{aligned} \label{eq:Fred} \rho(x)(F^2-1) \sim 0,\,\,\, \rho(x)(F-F^\ast) \sim 0,\, \,\, [\rho(x),F] \sim 0 \,\,\,\,\,\text{for $x \in A\hat\otimes{\mathbb{C}}_n$ }\end{aligned}$$ Here $[\,,\,]$ denotes the graded commutator; and for $T\in {B(\mathcal{H})},$ $T \sim 0$ means $T$ is compact. An $n$-multigraded Fredholm module $({\mathcal{H}}, \rho, F)$ over $A$ is [degenerate]{} if all the relations above are exact (i.e. if they hold with $\sim 0$ replaced by $=0$). An [operator homotopy]{} of $n$-multigraded Fredholm modules over $A$ is a triple $({\mathcal{H}}, \rho, F_t)_{t\in[0,1]}$ where for each $t \in [0,1]$, $({\mathcal{H}}, \rho, F_t)$ is an $n$-multigraded Fredholm module over $A$ and the map $t \mapsto F_t$ is norm-continuous. We say $n$-multigraded Fredholm modules $({\mathcal{H}}, \rho, F_0)$ and $({\mathcal{H}}, \rho, F_1)$ over $A$ are [homotopic]{} if there exists an operator homotopy $({\mathcal{H}}, \rho, F_t)_{t\in[0,1]}$. Let $({\mathcal{H}}_1, \rho_1, F_1)$ and $({\mathcal{H}}_2, \rho_2, F_2)$ be $n$-multigraded Fredholm modules over $A$. The [direct sum]{} $({\mathcal{H}}_1\oplus{\mathcal{H}}_2, \rho_1\oplus\rho_2, F_1\oplus F_2)$ is an $n$-multigraded Fredholm module over $A$; we denote it by $({\mathcal{H}}_1, \rho_1, F_1)\oplus({\mathcal{H}}_2, \rho_2, F_2)$. Two $n$-multigraded Fredholm modules $({\mathcal{H}}_1, \rho_1, F_1)$ and $({\mathcal{H}}_2, \rho_2, F_2)$ are said to be [unitarily equivalent]{} if there exists a unitary $u$ from ${\mathcal{H}}_1$ to ${\mathcal{H}}_2$ of degree 0 (i.e. it intertwines the two gradings) such that $\rho_2(x)=u\rho_1(x)u^\ast$ for $x \in A\hat\otimes\mathbb{C}_n$ and $F_2=uF_1u^\ast$.
Let $A$ be a separable ${C^\ast\text{-algebra}}$. Let $n$ be a nonnegative integer. The (analytic) $K$-homology group $K^{-n}(A)$ of $A$ of degree $-n$ is the group generated by $n$-multigraded Fredholm modules (cycles) over $A$, subject to the relations $[({\mathcal{H}}_1, \rho_1, F_1)]+[({\mathcal{H}}_2, \rho_2, F_2)]=[({\mathcal{H}}_1, \rho_1, F_1)\oplus ({\mathcal{H}}_2, \rho_2, F_2)]$ and $[({\mathcal{H}}_1, \rho_1, F_1)]=[({\mathcal{H}}_2, \rho_2, F_2)]$ if $({\mathcal{H}}_1, \rho_1, F_1)$ and $({\mathcal{H}}_2, \rho_2, F_2)$ are homotopic, for $n$-multigraded Fredholm modules $({\mathcal{H}}_i, \rho_i, F_i)$ over $A$. This is an abelian group. The zero class is represented by any degenerate cycle. Unitarily equivalent cycles define the same class. The additive inverse of $[({\mathcal{H}}, \rho, F)]$ is $[({\mathcal{H}}^{op}, \rho^{op}, -F^{op})]$ where ${\mathcal{H}}^{op}$ is the graded Hilbert space ${\mathcal{H}}$ with the two eigenspaces of the grading interchanged; $F^{op}$ is the operator on ${\mathcal{H}}^{op}$ corresponding to the operator $F$ on ${\mathcal{H}}$; and $\rho^{op}$ is the (graded) representation of $A\hat\otimes {\mathbb{C}}_n$ on ${\mathcal{H}}^{op}$ defined by $\rho^{op}(x)=(-1)^{\partial x}\rho(x)$ for homogeneous $x \in A\hat\otimes {\mathbb{C}}_n$ (here, we identified ${\mathcal{H}}^{op}$ with ${\mathcal{H}}$ by neglecting the gradings). \[prop:KC\] The group $K^0({\mathbb{C}})$ is isomorphic to ${\mathbb{Z}}$. The isomorphism sends $[({\mathcal{H}},1,F)]$ to the graded index $\text{Index}(F)=\text{dim}(\text{Ker($F$)}^{(0)})-\text{dim}(\text{Ker($F$)}^{(1)})$ of $F$ ($1$ denotes the unique unital representation of ${\mathbb{C}}$). The cycle $[({\mathbb{C}},1,0)]$ corresponding to the integer $1$ is denoted by [$1$]{}. (The Formal Periodicity) (cf. [@HigRoe] THEOREM 8.2.13) Let $A$ be a separable ${C^\ast\text{-algebra}}$. Let $n$ be a nonnegative integer.
Then, there is a (formal) periodicity isomorphism $K^{-n}(A)\to K^{-n-2}(A)$ which sends an element $[({\mathcal{H}}, \rho, F)]$ to $[({\mathcal{H}}\oplus{\mathcal{H}}^{op}, \rho\hat\otimes \text{id}, F\oplus F^{op})]$. Here, $\rho\hat\otimes \text{id}$ is a representation of $(A\hat\otimes {\mathbb{C}}_n)\hat\otimes {\mathbb{C}}_2$ on ${\mathcal{H}}\oplus{\mathcal{H}}^{op}={\mathcal{H}}\hat\otimes{\mathbb{C}}_1$ where ${\mathbb{C}}_2$ is identified with $M_2({\mathbb{C}})$ acting on a graded Hilbert space ${\mathbb{C}}_1$. Let $\phi$ be a $\ast$-homomorphism from $A$ to $B$. We obtain a group homomorphism $K^{n}(B)\to K^{n}(A)$ which sends $[({\mathcal{H}}, \rho, F)]$ to $[({\mathcal{H}}, \rho\circ(\phi\otimes \text{id}), F)]$. In this way, $K^{n}$ becomes a (contravariant) functor from the category of separable ${C^\ast\text{-algebras}}$ to the category of abelian groups. Functorial properties (stability, homotopy invariance and Bott periodicity) of $K$-homology are all beautifully proved by means of the Kasparov product. (cf. [@HigRoe] Section 9.2) Let $A_1$ and $A_2$ be separable ${C^\ast\text{-algebras}}$. There is a well-defined product (the Kasparov product) on $K$-homology: $$\xymatrix{ K^{-n_1}(A_1) \otimes K^{-n_2}(A_2) \ar[r] & K^{-n_1-n_2}(A_1\otimes A_2) \quad \text{for $n_1,n_2\geq0$} }$$ The Kasparov product is bilinear, associative and functorial. It is commutative in a suitable sense. The generator ${{1}}\in K^0({\mathbb{C}})$ is the multiplicative identity of the Kasparov product. 
The functoriality means that the right (or left) multiplication by an element $\alpha \in K^{-n}(B)$ defines a natural transformation between the functors $A\mapsto K^{-m}(A)$ and $A\mapsto K^{-m-n}(A\otimes B)$; namely, for any $\ast$-homomorphism $\phi\colon A_2\to A_1$, the following diagram commutes: $$\xymatrix{ K^{-m}(A_1) \ar[d]_-{\times \alpha} \ar[r]^-{\phi^\ast} &K^{-m}(A_2) \ar[d]^-{\times \alpha} \\ K^{-m-n}(A_1\otimes B) \ar[r]_-{(\phi\otimes \text{id})^\ast} &K^{-m-n}(A_2\otimes B) }$$ See [@HigRoe] for details. (homotopy invariance) (cf. [@HigRoe] Section 9.3) The $K$-homology functors $K^n$ are homotopy invariant. It can be shown that the evaluation maps $\text{ev}_0,\text{ev}_1\colon C[0,1]\to {\mathbb{C}}$ induce the same group homomorphism $K^0({\mathbb{C}})\to K^0(C[0,1])$, in particular $\text{ev}_0^\ast(1)=\text{ev}_1^\ast(1)$ (see also [@Kas1]). Using the functoriality of the Kasparov product, we have $(\text{id}_B\otimes \text{ev}_i)^\ast(\alpha)=(\text{id}_B\otimes \text{ev}_i)^\ast(\alpha\times 1)=\alpha \times \text{ev}_i^\ast(1)$ for $\alpha \in K^{-n}(B)$ and $i=0,1$. This shows the desired homotopy invariance. It can be proven that the $K$-homology functors $K^{-n}$ satisfy stability and Bott periodicity by using the Kasparov product and the homotopy invariance above. We only state the results here. (Stability) (cf. [@HigRoe] Section 9.4) The $K$-homology functors $K^{-n}$ are stable. In other words, a stabilization morphism $A \to A\otimes{\mathcal{K}}$ induces isomorphisms of abelian groups $K^{-n}(A\otimes{\mathcal{K}})\to K^{-n}(A)$. The inverses are given by the Kasparov product with $[({\mathcal{H}}, \text{id}, 0)] \in K^0({\mathcal{K}})$ where ${\mathcal{H}}$ is a separable infinite dimensional Hilbert space. (Bott Periodicity) (cf.
[@HigRoe] Section 9.5) The Dirac class $d$ in $K^{-1}(C_0(-1,1))$ is defined by $$d=\left[\left(L^2[-1,1]\oplus L^2[-1,1]^{op}, \rho\hat\otimes \text{id}, \begin{pmatrix} 0 & -i(2P-I) \\ i(2P-I) & 0 \end{pmatrix} \right)\right],$$ where $\rho$ is the standard representation of $C_0(-1,1)$ on $L^2[-1,1]$ sending functions to multiplication operators; $\rho\hat\otimes \text{id}$ is the representation of $C_0(-1,1)\hat\otimes{\mathbb{C}}_1$ on $L^2[-1,1]\oplus L^2[-1,1]^{op}$ which “sends” the generator (an odd selfadjoint unitary) $\epsilon \in {\mathbb{C}}_1$ to $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$; and $P$ is the projection in $L^2[-1,1]$ onto the closed subspace spanned by the functions $e^{in\pi x} (n\geq 0)$. The Kasparov product by the Dirac class induces Bott periodicity isomorphisms $K^{-n}(A)\cong K^{-n-1}(A\otimes S)$. Here, we identified $S=C_0({\mathbb{R}})$ with $C_0(-1,1)$ by using an arbitrary orientation preserving homeomorphism ${\mathbb{R}}\cong(-1,1)$. Equivariant $KK$-Theory ====================== In this chapter, we will introduce Kasparov’s equivariant $KK$-theory. Standard references for Kasparov’s bivariant theory are [@Kas1], [@Kas2] and the book [@Bla]. Throughout this chapter, $G$ denotes a second countable locally compact group. Let $A$ and $B$ be separable graded $G$-${C^\ast\text{-algebras}}$. A Kasparov $A$-$B$-module is a triple $({\mathcal{E}}, \rho, F)$, where ${\mathcal{E}}$ is a countably generated (as a Banach $B$-module) graded Hilbert $G$-$B$-module; $\rho$ is a representation of the graded $G$-${C^\ast\text{-algebra}}$ $A$ on ${\mathcal{E}}$ (i.e.
a $G$-equivariant graded $\ast$-homomorphism from $A$ to $B({\mathcal{E}})$) and $F$ is an odd adjointable $B$-linear map in $B({\mathcal{E}})$ satisfying the following relations: $$\begin{aligned} \label{eq:Kas} \rho(a)(F^2-1) \sim 0,\,\,\, \rho(a)(F-F^\ast) \sim 0,\,\,\, [\rho(a),F] \sim 0,\\ \rho(a)(g(F)-F) \sim 0 \,\,\,\,\,\text{for $a \in A, g\in G$ } \nonumber\end{aligned}$$ Here $[\,,\,]$ denotes the graded commutator; and for $T\in B({\mathcal{E}}),$ $T \sim 0$ means $T$ is compact. In addition, the map $g\mapsto \rho(a)(g(F)-F)$ must be continuous for each $a\in A$. If all the relations above are exact, we call the Kasparov $A$-$B$-module $({\mathcal{E}}, \rho, F)$ degenerate. The direct sum and unitary equivalence of Kasparov $A$-$B$-modules are defined similarly to those of Fredholm modules. We will not distinguish between two unitarily equivalent Kasparov $A$-$B$-modules. Let $({\mathcal{E}}, \rho, F)$ be a Kasparov $A$-$B$-module. For a separable graded $G$-${C^\ast\text{-algebra}}$ $D$, using the exterior tensor product of Hilbert modules, we define a Kasparov $A\hat\otimes D$-$B\hat\otimes D$-module $\sigma_D({\mathcal{E}}, \rho, F)$ to be $({\mathcal{E}}\hat\otimes D, \rho\hat\otimes1, F\hat\otimes1)$. Let $\phi$ be an equivariant graded $\ast$-homomorphism from $D$ to $A$. We define a Kasparov $D$-$B$-module $\phi^\ast({\mathcal{E}}, \rho, F)$ to be $({\mathcal{E}}, \rho\circ\phi, F)$. If $\phi$ is an equivariant graded $\ast$-homomorphism from $B$ to $D$, we define a Kasparov $A$-$D$-module $\phi_\ast({\mathcal{E}}, \rho, F)$ to be $({\mathcal{E}}\hat\otimes_\phi D, \rho\hat\otimes1, F\hat\otimes1)$; here ${\mathcal{E}}\hat\otimes_\phi D$ is the interior tensor product of Hilbert modules. A homotopy of Kasparov $A$-$B$-modules is a Kasparov $A$-$B[0,1]$-module.
If there exists a Kasparov $A$-$B[0,1]$-module $({\mathcal{E}}, \rho, F)$, we say the two Kasparov $A$-$B$-modules ${\text{ev}_0}_\ast({\mathcal{E}}, \rho, F)$ and ${\text{ev}_1}_\ast({\mathcal{E}}, \rho, F)$ are [homotopic]{}. Homotopy is an equivalence relation. Let $A$ and $B$ be separable graded $G$-${C^\ast\text{-algebras}}$. The set $KK^G(A,B)$ is the set of (unitary equivalence classes of) Kasparov $A$-$B$-modules divided by the equivalence relation of homotopy. The set $KK^G(A,B)$ becomes a group with addition defined by direct sums of Kasparov $A$-$B$-modules. The zero class is represented by degenerate modules. The additive inverse of $[({\mathcal{E}}, \rho, F)]$ is $[({\mathcal{E}}^{op}, \rho^{op}, -F^{op})]$ (the latter module is defined analogously to the case of Fredholm modules). When $G=1$, we usually denote the Kasparov group by $KK(A,B)$ instead of $KK^G(A,B)$. We defined a map from the set of Kasparov $A$-$B$-modules to the set of Kasparov $D$-$B$-modules (resp. $A$-$D$-modules) for an equivariant graded $\ast$-homomorphism from $D$ to $A$ (resp. $B$ to $D$). One can check that this defines a group homomorphism from $KK^G(A,B)$ to $KK^G(D,B)$ (resp. to $KK^G(A,D)$). In this way, $KK^G(\,,\,)$ becomes a bi-functor (contravariant in the first variable and covariant in the second) from the category of graded $G$-${C^\ast\text{-algebras}}$ to the category of abelian groups. Similarly, we have a group homomorphism $\sigma_D$ from $KK^G(A,B)$ to $KK^G(A\hat\otimes D,B\hat\otimes D)$ which is natural in both variables. Homotopy invariance is almost incorporated in the definition of the Kasparov groups. (cf. [@Bla] Proposition 17.9.1.) The bi-functor $KK^G(\,,\,)$ is homotopy invariant in both variables. When $G=1$, the Kasparov group $KK(A\hat\otimes{\mathbb{C}}_n,{\mathbb{C}})$ is nothing but the $K$-homology group $K^{-n}(A)$.
The following proposition explains that the Kasparov group generalizes both the $K$-theory and the $K$-homology of ${C^\ast\text{-algebras}}$. (cf. [@Bla] Proposition 17.5.5.) Let $G=1$ and $B$ be a separable ungraded ${C^\ast\text{-algebra}}$. The Kasparov group $KK({\mathbb{C}}, B)$ is isomorphic to the $K_0$ group $K_0(B)$ of $B$. Let $D$ and $B$ be separable graded $G$-${C^\ast\text{-algebras}}$. Let ${\mathcal{E}}_1$ be a countably generated graded Hilbert $G$-$D$-module and $({\mathcal{E}}_2, \rho, F_2)$ be a Kasparov $D$-$B$-module. Define an adjointable map $T_{e_1}\colon {\mathcal{E}}_2 \to {\mathcal{E}}_1\hat\otimes_\rho{\mathcal{E}}_2$ for $e_1$ in ${\mathcal{E}}_1$ by $T_{e_1}\colon e_2 \mapsto e_1\hat\otimes e_2$. An odd adjointable $B$-linear map $F$ in $B({\mathcal{E}}_1\hat\otimes_\rho{\mathcal{E}}_2)$ is an $F_2$-connection if for any $e_1$ in ${\mathcal{E}}_1$, the following diagrams commute, in the graded sense, modulo compact operators. $$\xymatrix{ {\mathcal{E}}_2 \ar[d]_-{F_2} \ar[r]^-{T_{e_1}} & {\mathcal{E}}_1\hat\otimes_\rho{\mathcal{E}}_2 \ar[d]^-{F} & {\mathcal{E}}_2 \ar[d]_-{F_2^\ast} \ar[r]^-{T_{e_1}} & {\mathcal{E}}_1\hat\otimes_\rho{\mathcal{E}}_2 \ar[d]^-{F^\ast} \\ {\mathcal{E}}_2 \ar[r]_-{T_{e_1}} & {\mathcal{E}}_1\hat\otimes_\rho{\mathcal{E}}_2 & {\mathcal{E}}_2 \ar[r]_-{T_{e_1}} & {\mathcal{E}}_1\hat\otimes_\rho{\mathcal{E}}_2 }$$ (cf. [@Kas2] Theorem 2.14.) Let $A_1,A_2$, $D$ and $B_1,B_2$ be separable graded $G$-${C^\ast\text{-algebras}}$. There is a bilinear pairing of the Kasparov groups: $$\begin{aligned} KK^G(A_1,B_1\hat\otimes D)\times KK^G(D\hat\otimes A_2 ,B_2) \to KK^G(A_1\hat \otimes A_2,B_1\hat\otimes B_2)\end{aligned}$$ This pairing is associative, functorial in both variables and commutative when $D={\mathbb{C}}$. We denote the product of $\alpha \in KK^G(A_1,B_1\hat\otimes D)$ and $\beta \in KK^G(D\hat\otimes A_2,B_2)$ by $\alpha\otimes_D\beta$.
For any separable graded $G$-${C^\ast\text{-algebra}}$ $A$, $KK^G(A,A)$ is a ring with unit $1_A$ represented by the cycle $(A, \text{id}_A, 0)$. For further properties of the product, see [@Kas2]. When $B_1=A_2={\mathbb{C}}$, at the level of cycles, a product of a Kasparov $A$-$D$-module $({\mathcal{E}}_1, \rho_1, F_1)$ and a Kasparov $D$-$B$-module $({\mathcal{E}}_2, \rho_2, F_2)$ is defined as a Kasparov $A$-$B$-module $({\mathcal{E}}_1\hat\otimes_{\rho_2}{\mathcal{E}}_2, \rho_1\hat\otimes1, F)$ where the operator $F \in B({\mathcal{E}}_1\hat\otimes_{\rho_2}{\mathcal{E}}_2)$ is an $F_2$-connection and satisfies $(\rho_1\hat\otimes1)(a)[F_1\hat\otimes1, F](\rho_1\hat\otimes1)(a)^\ast\geq0$ for $a$ in $A$. The general case is defined as above after applying $\sigma_{A_2}$ and $\sigma_{B_1}$. When $G$ acts trivially, there is a simple case where one has a good formula for the Kasparov product: (cf. [@Bla] Proposition 18.10.1.) Let $A$, $D$ and $B$ be separable graded ${C^\ast\text{-algebras}}$. Let $({\mathcal{E}}_1, \rho_1, F_1)$ be a Kasparov $A$-$D$-module with $F_1$ selfadjoint and contractive, and $({\mathcal{E}}_2, \rho_2, F_2)$ be a Kasparov $D$-$B$-module. Let $F\in B({\mathcal{E}}_1\hat\otimes_{\rho_2}{\mathcal{E}}_2)$ be an $F_2$-connection. Assume $({\mathcal{E}}_1\hat\otimes_{\rho_2}{\mathcal{E}}_2, \rho_1\hat\otimes1, F_1\hat\otimes1+((1-F_1^2)\hat\otimes1)^{\frac12}F)$ is a Kasparov $A$-$B$-module. Then, this Kasparov $A$-$B$-module defines the same class in $KK(A,B)$ as the Kasparov product of $({\mathcal{E}}_1, \rho_1, F_1)$ and $({\mathcal{E}}_2, \rho_2, F_2)$. Separable graded $G$-${C^\ast\text{-algebras}}$ $A$ and $B$ are [[$KK^G$]{}-equivalent]{} if there exist $\alpha \in KK^G(A,B)$ and $\beta \in KK^G(B,A)$ such that $\alpha \otimes_B \beta=1_A$ and $\beta \otimes_A \alpha=1_B$.
In other words, $A$ and $B$ are $KK^G$-equivalent if they are isomorphic in the additive category of separable graded $G$-${C^\ast\text{-algebras}}$ with morphisms $KK^G(A,B)$. We denote by $KK^G$ its full subcategory consisting of separable (trivially graded) $G$-${C^\ast\text{-algebras}}$. We call this additive category $KK^G$ the equivariant Kasparov category. Let $A$ and $B$ be separable graded $G$-${C^\ast\text{-algebras}}$. An $A$-$B$ imprimitivity bimodule is a full graded Hilbert $G$-$B$-module ${\mathcal{E}}$ with a graded $G$-equivariant isomorphism $\rho_A\colon A\cong {\mathcal{K}}({\mathcal{E}})$. We say $A$ and $B$ are Morita-Rieffel equivalent if there exists an $A$-$B$ imprimitivity bimodule. This is an equivalence relation; if ${\mathcal{E}}$ is an $A$-$B$ imprimitivity bimodule, then ${\mathcal{E}}^\ast={\mathcal{K}}({\mathcal{E}},B)$ with the left multiplication $\rho_B$ by $B$ becomes a $B$-$A$ imprimitivity bimodule. Morita-Rieffel equivalence implies $KK^G$-equivalence; an isomorphism is given by $[({\mathcal{E}}, \rho_A, 0)]$ and $[({\mathcal{E}}^\ast, \rho_B, 0)]$. As a corollary, we see that, for any separable graded $G$-Hilbert space ${\mathcal{H}}$, ${\mathcal{K}}({\mathcal{H}})$ is $KK^G$-equivalent to ${\mathbb{C}}$. This more or less implies the stability of the bifunctor $KK^G(\,,\,)$. Take any projection $p\in {\mathcal{K}}({\mathcal{H}})$ onto a one-dimensional even subspace of ${\mathcal{H}}$ with trivial $G$-action. One sees that the stabilization $\ast$-homomorphism $\rho$ from ${\mathbb{C}}$ to ${\mathcal{K}}({\mathcal{H}})$ given by $p$ defines an element $\rho=[({\mathcal{K}}({\mathcal{H}}), \rho, 0)] \in KK^G({\mathbb{C}}, {\mathcal{K}}({\mathcal{H}}))$ which is a left inverse of the element $[({\mathcal{H}},\text{id}_{{\mathcal{K}}({\mathcal{H}})},0)] \in KK^G({\mathcal{K}}({\mathcal{H}}),{\mathbb{C}})$ implementing the Morita-Rieffel equivalence between ${\mathbb{C}}$ and ${\mathcal{K}}({\mathcal{H}})$; hence $\rho$ is invertible.
A general stabilization $\sigma_A(\rho)$ is invertible by functoriality of the Kasparov product. Before proving Bott periodicity in this quite general context, we state a lemma which generalizes the rotation argument used by M. Atiyah. Let $A$ be a separable graded $G$-${C^\ast\text{-algebra}}$. Assume $A$ has the following property: the flip isomorphism $A\hat\otimes A\to A\hat\otimes A$ is $\pm1$ in the group $KK^G(A\hat\otimes A, A\hat\otimes A)$. Suppose one finds $\alpha \in KK^G({\mathbb{C}}, A)$ and $\beta \in KK^G(A, {\mathbb{C}})$ such that $\alpha\otimes_A\beta=1_{{\mathbb{C}}}$. Then, $A$ is $KK^G$-equivalent to ${\mathbb{C}}$. This follows from the fact that the following diagram commutes: $$\xymatrix{ {\mathbb{C}}\hat\otimes A \ar[d]_{1\hat\otimes\beta} \ar[r]^{\alpha\hat\otimes1} & A\hat\otimes A \ar[d]^{1\hat\otimes \beta} \ar[r]^{\text{flip}} & A\hat\otimes A \ar[d]^{\beta\hat\otimes1} \\ {\mathbb{C}}\hat\otimes {\mathbb{C}}\ar[r]_{\alpha\hat\otimes1} \ar@/_15pt/[rr]_{1\hat\otimes \alpha} & A\hat\otimes {\mathbb{C}}\ar[r]_{\text{flip}} & {\mathbb{C}}\hat\otimes A }$$ The author would like to thank Nigel Higson for showing him this diagram. (Bott periodicity) (cf. [@Bla] Section 19.2.) The (trivially graded) separable ${C^\ast\text{-algebra}}$ $S^2$ is $KK^G$-equivalent to ${\mathbb{C}}$. It suffices to show that the graded ${C^\ast\text{-algebra}}$ $S\hat\otimes{\mathbb{C}}_1$ is $KK^G$-equivalent to ${\mathbb{C}}$; and in view of the previous lemma, this follows once we have shown that the Dirac class $d \in K^{-1}(C_0(-1,1))=KK^G(C_0(-1,1)\hat\otimes{\mathbb{C}}_1,{\mathbb{C}})$ is right invertible. Let $s=[(C_0(-1,1)\hat\otimes{\mathbb{C}}_1, 1, x\hat\otimes\epsilon)] \,\,\, \in KK^G({\mathbb{C}}, C_0(-1,1)\hat\otimes{\mathbb{C}}_1)$.
The product $s\otimes_{C_0(-1,1)\hat\otimes{\mathbb{C}}_1}d \in KK^G({\mathbb{C}}, {\mathbb{C}})=K^0({\mathbb{C}})$ is represented by a Fredholm module $$\left( L^2[-1,1]\oplus L^2[-1,1]^{op}, 1, \begin{pmatrix} 0 & x-i(1-x^2)^\frac12(2P-I) \\ x+i(1-x^2)^\frac12(2P-I) & 0 \end{pmatrix} \right).$$ By Example \[prop:KC\], we must calculate the Fredholm index of the operator $x+i(1-x^2)^\frac12(2P-I)$ on $L^2[-1,1]$. Using the straight line homotopy between $x$ and $\sin\frac{\pi}{2}x$, we see that this is the same as $\text{Index}(\sin\frac{\pi}{2}x+i\cos\frac{\pi}{2}x(2P-I))$, which in turn is the same as $\text{Index}(P+e^{-i\pi x}(I-P))=-1$. This shows $d$ is right invertible. The proof above showed that the ${C^\ast\text{-algebra}}$ $S$ of continuous functions vanishing at infinity on the real line is $KK^G$-equivalent to the first Clifford ${C^\ast\text{-algebra}}$ ${\mathbb{C}}_1$. We define for any graded $G$-${C^\ast\text{-algebras}}$ $A,B$, the even $G$-equivariant Kasparov group $KK^G_0(A,B)=KK^G(A,B)$ and the odd $G$-equivariant Kasparov group $$KK^G_1(A,B)=KK^G(A\hat\otimes{\mathbb{C}}_1,B)=KK^G(A,B\hat\otimes{\mathbb{C}}_1)=KK^G(A\hat\otimes S,B)=KK^G(A,B\hat\otimes S).$$ Thanks to Bott Periodicity, the odd group $KK^G_1(A\hat\otimes S,B)$ is naturally isomorphic to $KK^G_0(A,B)$ for any graded $G$-${C^\ast\text{-algebras}}$ $A,B$. In the following discussions, we will essentially consider the $G$-equivariant $KK$-theory of ungraded $G$-${C^\ast\text{-algebras}}$. In this case, there is a different but useful description of the even and odd $G$-equivariant Kasparov groups. Let $A$ and $B$ be separable (ungraded) $G$-${C^\ast\text{-algebras}}$. 
An even Kasparov $A$-$B$ module is a triple $({\mathcal{E}}, \rho, F)$, where ${\mathcal{E}}$ is a countably generated (ungraded) Hilbert $G$-$B$-module; $\rho$ is a representation of the $G$-${C^\ast\text{-algebra}}$ $A$ on ${\mathcal{E}}$ and $F$ is an adjointable $B$-linear map in $B({\mathcal{E}})$ satisfying the following relations: $$\begin{aligned} \label{eq:evenKas} \rho(a)(FF^\ast-1) \sim 0,\,\,\, \rho(a)(F^\ast F-1) \sim 0,\,\,\, [\rho(a),F] \sim 0,\\ \rho(a)(g(F)-F) \sim 0 \,\,\,\,\,\text{for $a \in A, g\in G$ }\nonumber\end{aligned}$$ In addition, $g\mapsto \rho(a)(g(F)-F)$ must be continuous for $a\in A, g\in G$. Simply put, the map $F$ is essentially $G$-equivariant, essentially unitary, and essentially commutes with the representation of $A$. An odd Kasparov $A$-$B$ module is a triple $({\mathcal{E}}, \rho, P)$, where ${\mathcal{E}}$ is a countably generated (ungraded) Hilbert $G$-$B$-module; $\rho$ is a representation of the $G$-${C^\ast\text{-algebra}}$ $A$ on ${\mathcal{E}}$ and $P$ is an adjointable $B$-linear map in $B({\mathcal{E}})$ satisfying the following relations: $$\begin{aligned} \label{eq:oddKas} \rho(a)(P^\ast-P) \sim 0,\,\,\, \rho(a)(P^2-P) \sim 0,\,\,\, [\rho(a),P] \sim 0,\\ \rho(a)(g(P)-P) \sim 0 \,\,\,\,\,\text{for $a \in A, g\in G$ } \nonumber\end{aligned}$$ In addition, $g\mapsto \rho(a)(g(P)-P)$ must be continuous for $a\in A, g\in G$. Simply put, the map $P$ is essentially $G$-equivariant, essentially a projection, and essentially commutes with the representation of $A$. All the notions defined for Kasparov $A$-$B$-modules, namely addition, unitary equivalence, functoriality, homotopy, etc., are defined for even and odd Kasparov $A$-$B$-modules as well. We will not distinguish between two unitarily equivalent Kasparov $A$-$B$-modules. The set of homotopy equivalence classes of even (odd) Kasparov $A$-$B$-modules is a group with the obvious addition (the direct sum) of even (odd) Kasparov $A$-$B$-modules. 
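As a basic orienting example (a standard fact, not special to this text): for $A=B={\mathbb{C}}$ with trivial $G$ and $\rho$ the scalar representation, the relations above reduce to ordinary Fredholm theory.

```latex
% An even Kasparov C-C module (H, 1, F): the relations say that
% FF* - 1 and F*F - 1 are compact, i.e. F is essentially unitary,
% hence Fredholm. Its homotopy class is detected by the Fredholm index
\operatorname{Index}(F)=\dim\ker F-\dim\ker F^{\ast}\in{\mathbb{Z}},
% recovering the classical isomorphism KK_0(C, C) = K^0(point) = Z.
```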
The following proposition gives the nice description of $G$-equivariant Kasparov groups for ungraded $G$-${C^\ast\text{-algebras}}$ mentioned earlier. For any separable $G$-${C^\ast\text{-algebras}}$ $A,B$, the group of homotopy equivalence classes of even Kasparov $A$-$B$-modules is naturally isomorphic to the even $G$-equivariant Kasparov group $KK^G_0(A,B)=KK^G(A,B)$. The isomorphism takes an even Kasparov $A$-$B$-module $({\mathcal{E}}, \rho, F)$ to the Kasparov $A$-$B$-module $\left({\mathcal{E}}\oplus{\mathcal{E}}^{op}, \rho\otimes1, \begin{pmatrix} 0 & F^\ast \\ F & 0 \end{pmatrix}\right)$. The group of homotopy equivalence classes of odd Kasparov $A$-$B$-modules is naturally isomorphic to the odd $G$-equivariant Kasparov group $KK^G_1(A,B)=KK^G(A\hat\otimes{\mathbb{C}}_1,B)$. The isomorphism takes an odd Kasparov $A$-$B$-module $({\mathcal{E}}, \rho, P)$ to the Kasparov $A\hat\otimes{\mathbb{C}}_1$-$B$-module $\left({\mathcal{E}}\oplus{\mathcal{E}}^{op}, \rho\hat\otimes\text{id$_{{\mathbb{C}}_1}$}, \begin{pmatrix} 0 & -i(2P-1) \\ i(2P-1) & 0 \end{pmatrix}\right)$. Here, we are identifying the graded Hilbert $B$-module ${\mathcal{E}}\oplus{\mathcal{E}}^{op}$ with ${\mathcal{E}}\hat\otimes{\mathbb{C}}_1$. \[exampleBott\] Let $A,B$ be separable $G$-${C^\ast\text{-algebras}}$ and $\Sigma=C_0(0, 1)\cong S$. We simply write, for example, $B\Sigma$ for $B\otimes\Sigma$. Let $x=({\mathcal{E}}, \phi, P)$ be an element of $KK^G_1(A, B)$. We would like to compute the element in $KK^G_0(A , B\Sigma)$ which corresponds to the element $x$ under the Bott Periodicity $KK^G_1(A, B)\cong KK^G_0(A, B\Sigma)$. In such computations, we frequently use the Formal Periodicity (or just Morita Equivalence) such as $KK^G(A, B)\cong KK^G(A\hat\otimes{\mathbb{C}}_2, B)$. Hence, it is safer and easier to say that we compute only up to sign. 
We recall $x$ is represented as $\left({\mathcal{E}}\oplus{\mathcal{E}}^{\operatorname{op}}, \phi\hat\otimes\operatorname{id}_{{\mathbb{C}}_1}, \begin{pmatrix} 0 & -i(2P-1) \\ i(2P-1) & 0 \end{pmatrix}\right)$ as the element in $KK^G(A\hat\otimes{\mathbb{C}}_1, B)$. The Bott Periodicity maps this element to the Kasparov product $x\otimes_{\mathbb{C}}\left(\Sigma\oplus\Sigma^{\operatorname{op}}, \operatorname{id}_{{\mathbb{C}}_1}, \begin{pmatrix} 0 & -i(2x-1) \\ i(2x-1) & 0 \end{pmatrix}\right)$ in $KK^G(A\hat\otimes{\mathbb{C}}_1\hat\otimes{\mathbb{C}}_1, B\Sigma)$ which can be computed as $\left(({\mathcal{E}}\oplus{\mathcal{E}}^{\operatorname{op}})\hat\otimes(\Sigma\oplus\Sigma^{\operatorname{op}}), \phi\hat\otimes\operatorname{id}_{{\mathbb{C}}_1}\hat\otimes\operatorname{id}_{{\mathbb{C}}_1}, T\right)$ where $$T= \begin{pmatrix} 0 & -i(2P-1) \\ i(2P-1) & 0 \end{pmatrix} \hat\otimes\begin{pmatrix} 2(x-x^2)^{\frac12} & 0 \\ 0 & 2(x-x^2)^{\frac12} \end{pmatrix} +1\hat\otimes\begin{pmatrix} 0 & -i(2x-1) \\ i(2x-1) & 0 \end{pmatrix}.$$ After the identification $KK^G(A\hat\otimes{\mathbb{C}}_2, B)\cong KK^G(A, B)$, this element can be represented by $({\mathcal{E}}\otimes\Sigma, \phi\otimes1, 1\otimes(2x-1)+i(2P-1)\otimes2(x-x^2)^{\frac12})$ in $KK^G_0(A, B\Sigma)$. Applying first the straight line homotopy between $x$ and $\sin^2\frac\pi2x$, then multiplying by $-1$, and finally multiplying by the unitary $1\otimes e^{-i\pi x}$ (which is homotopic to 1), we see that the element can be written in the following (probably) simplest form: $({\mathcal{E}}\otimes\Sigma, \phi\otimes1, P\otimes e^{2\pi ix}+(1-P)\otimes1)$ in $KK^G_0(A, B\Sigma)$. 
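As a sanity check on this representative (before the final homotopies), one can verify directly that the operator $F=1\otimes(2x-1)+i(2P-1)\otimes2(x-x^2)^{\frac12}$ is essentially unitary; note the exponent $\frac12$ on $(x-x^2)$, which this computation forces. Writing $p=2P-1$, so $p^\ast=p$ and $p^2\sim1$ on the essential part:

```latex
% FF* = a^2 + b^2 with a = 1 (x) (2x-1) and b = p (x) 2(x-x^2)^{1/2}:
% both are self-adjoint, and they commute because the scalar functions
% (2x-1) and (x-x^2)^{1/2} commute with each other and with p,
% so the cross terms of (a + ib)(a - ib) cancel.
FF^{\ast}
 = 1\otimes(2x-1)^{2} + p^{2}\otimes 4(x-x^{2})
 \;\sim\; 1\otimes\bigl[(2x-1)^{2}+4(x-x^{2})\bigr]
 = 1\otimes 1,
% since (2x-1)^2 + 4(x-x^2) = 4x^2 - 4x + 1 + 4x - 4x^2 = 1.
```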
Similarly, the element in $KK^G_1(A, B\Sigma)$ which corresponds to an element $({\mathcal{E}}, \phi, F)$ in $KK^G_0(A, B)$ under the Bott Periodicity can be computed as $\left({\mathcal{E}}\Sigma\oplus{\mathcal{E}}\Sigma, \phi\otimes1, \begin{pmatrix} 1\otimes x & F^\ast\otimes(x-x^2)^{\frac12}\\ F\otimes(x-x^2)^{\frac12} & 1\otimes(1-x) \end{pmatrix} \right)$. Finally, let us compute the element in $KK^G_0(A, B)$ which corresponds to an element $y$ in $KK^G_1(\Sigma A, B)$ under the Bott Periodicity. We suppose $y=({\mathcal{E}}, \phi, P)$ where $\phi$ is a nondegenerate representation of $\Sigma A$ on ${\mathcal{E}}$; this ensures that we can write $\phi$ as $\phi_\Sigma\otimes\phi_A$ where $\phi_\Sigma$ and $\phi_A$ are commuting, nondegenerate representations of $\Sigma$ and $A$ respectively. We remark that any Kasparov module is homotopic to such a module (called an essential Kasparov module). Again, note that $y$ is represented as $\left({\mathcal{E}}\oplus{\mathcal{E}}^{\operatorname{op}}, \phi\hat\otimes\operatorname{id}_{{\mathbb{C}}_1}, \begin{pmatrix} 0 & -i(2P-1) \\ i(2P-1) & 0 \end{pmatrix}\right)$ in $KK^G(\Sigma A{\mathbb{C}}_1, B)$. The Bott Periodicity maps this element to $\left(\Sigma\hat\otimes{\mathbb{C}}_1, 1, x\hat\otimes\epsilon \right)\otimes_{\Sigma{\mathbb{C}}_1}y$ ($\epsilon$ is the standard generator of ${\mathbb{C}}_1$). We compute this to get $({\mathcal{E}}, \phi_A, (2x-1)+2i(x-x^2)^{\frac12}(2P-1))$ in $KK^G_0(A, B)$ where $x$ is the operator on ${\mathcal{E}}$ obtained by extending the nondegenerate representation $\phi_\Sigma$ of $\Sigma$ to $C_b(0,1)$. A calculation similar to the one above leads to the (probably) simplest form $({\mathcal{E}}, \phi_A, Pe^{2i\pi x}+1-P)$. These computations will be used in Chapter 9. We recall here the Equivariant Kasparov category $KK^G$. 
It is the additive category whose objects are separable $G$-${C^\ast\text{-algebras}}$ and whose morphisms are the elements of the Kasparov groups $KK^G(A, B)$ for separable $G$-${C^\ast\text{-algebras}}$ $A,B$. Also, we frequently denote by $KK^G$ the bifunctor $(A, B)\mapsto KK^G(A, B)$ from the category of separable $G$-${C^\ast\text{-algebras}}$ to the category of abelian groups. We mentioned that this functor is stable and homotopy invariant. It can be shown that the functor $KK^G$ is split-exact in both variables. In [@Mey], R. Meyer elegantly showed the following universal property of the category $KK^G$. (cf. [@Mey] Theorem 6.6.)\[universal\] Let $F$ be any stable, homotopy invariant, split-exact (covariant or contravariant) functor from the category of separable $G$-${C^\ast\text{-algebras}}$ to an additive category. Then, the functor $F$ factors uniquely through the category $KK^G$. One may want to know whether the bifunctor $KK^G(\cdot, \cdot)$ from the category of separable $G$-${C^\ast\text{-algebras}}$ to the category of abelian groups is half-exact in either variable. Unfortunately, this is not true in general. However, we have the following. (cf. [@bolic] PROPOSITION 5.7.) Let $A$ be a proper, nuclear $G$-${C^\ast\text{-algebra}}$; then the functor $KK^G(A, \cdot)$ is half-exact. It is also true that the functor $KK^G(\cdot, A)$ is half-exact for any proper, nuclear $G$-${C^\ast\text{-algebra}}$ $A$. 
Thanks to the above proposition, we have for any proper $G$-${C^\ast\text{-algebra}}$ $A$ and for any $G$-extension: $$\begin{aligned} \xymatrix{ 0 \ar[r] & J \ar[r] & B \ar[r] & B/J \ar[r] & 0 \\ }\end{aligned}$$ the six-term exact sequence in Equivariant $KK$-Theory: $$\begin{aligned} \xymatrix{ KK^G_0(A, J) \ar[r] & KK^G_0(A, B) \ar[r] & KK^G_0(A, B/J) \ar[d] \\ KK^G_1(A, B/J) \ar[u] & KK^G_1(A, B) \ar[l] & KK^G_1(A, J) \ar[l] \\ }\end{aligned}$$ Asymptotic Morphisms and Equivariant $E$-Theory =============================================== This chapter introduces asymptotic morphisms, which are now regarded as another fundamental tool for calculating the $K$-theory of ${C^\ast\text{-algebras}}$. The importance of this notion comes from the fact that associated to any extension of ${C^\ast\text{-algebras}}$, there is a canonical asymptotic morphism called the central invariant, which is unique up to a suitable equivalence relation (i.e. homotopy). We follow the treatment given in [@GHT]. (asymptotic algebra) Let $B$ be a separable $G$-${C^\ast\text{-algebra}}$. The $G$-${C^\ast\text{-algebra}}$ ${\mathfrak{T}}_0(B)=C_0([1, \infty), B)$ of continuous functions from the interval $[1, \infty)$ to $B$ which vanish at infinity sits as a $G$-invariant ideal in the ${C^\ast\text{-algebra}}$ $C_b([1, \infty), B)$ of bounded continuous functions from $[1, \infty)$ to $B$ with a natural pointwise $G$-action. We denote by ${\mathfrak{T}}(B)$ the subalgebra of $G$-continuous elements in $C_b([1, \infty), B)$; this is a $G$-${C^\ast\text{-algebra}}$ containing ${\mathfrak{T}}_0(B)$ as a $G$-equivariant ideal. The asymptotic algebra ${\mathfrak{A}}(B)$ of $B$ is the quotient $G$-${C^\ast\text{-algebra}}$ ${\mathfrak{T}}(B)/{\mathfrak{T}}_0(B)$. (asymptotic morphisms)\[dfn:asym\] For separable $G$-${C^\ast\text{-algebras}}$ $A,B$, an equivariant asymptotic morphism from $A$ to $B$ is an equivariant $\ast$-homomorphism from $A$ to the asymptotic algebra ${\mathfrak{A}}(B)$. 
An equivariant asymptotic morphism $\phi$ from $A$ to $B$ is denoted by $\phi\colon A\to\to B$. A homotopy of equivariant asymptotic morphisms from $A$ to $B$ is an equivariant asymptotic morphism from $A$ to $B[0,1]$. We denote by $[[A, B]]_G$ the set of homotopy equivalence classes of equivariant asymptotic morphisms from $A$ to $B$. Any element of $[[A, B]]_G$ can be represented as a family of continuous maps $\phi_t\colon A\to B$ $(t\geq1)$ satisfying the following conditions (such a family is called an equicontinuous equivariant asymptotic morphism from $A$ to $B$, with a slight abuse of language). - for any $a$ in $A$, the map $t\mapsto\phi_t(a)$ is in ${\mathfrak{T}}(B)$; - $(g,a)\mapsto g(\phi_t(a))$ is a continuous map from $G\times A$ to $B$ uniformly in $t \in [1, \infty)$; - the map $A\to {\mathfrak{T}}(B) \to {\mathfrak{A}}(B)$ given by composition of the map $a\mapsto (\phi_t(a))$ with the quotient map from ${\mathfrak{T}}(B)$ to ${\mathfrak{A}}(B)$ is an equivariant $\ast$-homomorphism. Hence, we frequently write an element of $[[A, B]]_G$ as an equicontinuous equivariant asymptotic morphism $(\phi_t)_{t\geq1}$ without any fear of confusion. Any continuous family $(\phi_t)_{t\geq1}$ of equivariant $\ast$-homomorphisms from $A$ to $B$ defines an equivariant asymptotic morphism from $A$ to $B$ in an obvious way. More generally, a continuous family $(\phi_t)_{t\geq1}$ of $\ast$-homomorphisms from $A$ to $B$ defines an equivariant asymptotic morphism from $A$ to $B$ if the family is asymptotically equivariant (meaning that for any $a$ in $A$ and for any $g$ in $G$, $\phi_t(g(a))-g(\phi_t(a))$ converges to $0$ as $t$ goes to infinity). 
There is a well-defined “composition” operation $[[A, B]]_G\times[[B, C]]_G\to[[A, C]]_G$ given by composition after a reparametrization of asymptotic morphisms: more specifically, for two given equicontinuous equivariant asymptotic morphisms $(\phi_t)_{t\geq1}\colon A\to\to B$ and $(\psi_t)_{t\geq1}\colon B\to\to C$, there is a strictly increasing continuous function $r$ from $[1, \infty)$ onto $[1, \infty)$ such that the composition $(\psi_{s(t)}\circ\phi_{t})_{t\geq1}$ defines an equivariant asymptotic morphism from $A$ to $C$ for any reparametrization $s$ (a strictly increasing continuous function from $[1, \infty)$ to $[1, \infty)$) satisfying $s(t)\geq r(t)$ for all $t$; and this yields a well-defined operation $[[A, B]]_G\times[[B, C]]_G\to[[A, C]]_G$. We write the composition of two asymptotic morphisms $\phi\colon A\to\to B$ and $\psi\colon B\to\to C$ as $\psi \circ \phi$ as long as this causes no confusion. The set $[[A, B{\mathcal{K}}({\mathcal{H}}_G)]]_G$ can be endowed with an abelian semigroup structure in the following way. (Here, $B{\mathcal{K}}({\mathcal{H}}_G)=B\otimes{\mathcal{K}}({\mathcal{H}}_G)$.) The addition operation comes from an (equivariant) embedding of ${\mathcal{K}}({\mathcal{H}}_G)\oplus{\mathcal{K}}({\mathcal{H}}_G)\subset{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathcal{H}}_G)$ into ${\mathcal{K}}({\mathcal{H}}_G)$ induced from an (equivariant) embedding of ${\mathcal{H}}_G\oplus{\mathcal{H}}_G$ into ${\mathcal{H}}_G$. We may use any embedding here, since any pair of such embeddings can be connected through a homotopy of embeddings. With these in mind, associativity and commutativity of this operation are clear. The zero element is represented by the zero morphism. 
The semigroup $[[\Sigma A, B{\mathcal{K}}({\mathcal{H}}_G)]]_G$ becomes a group thanks to the presence of $\Sigma$: the inverse operation is defined by composition with the $\ast$-homomorphism $h^\ast\otimes\operatorname{id}_A\colon\Sigma A\to \Sigma A$ where $h^\ast\colon \Sigma\to \Sigma$ is induced from the order-reversing homeomorphism $h\colon s\mapsto 1-s$ on $(0,1)$. (Equivariant $E$-Theory) Let $A$ and $B$ be separable $G$-${C^\ast\text{-algebras}}$. The Equivariant $E$-theory group $E^G(A, B)$ is defined to be the abelian group $[[\Sigma A{\mathcal{K}}({\mathcal{H}}_G), \Sigma B{\mathcal{K}}({\mathcal{H}}_G)]]_G$. The composition of asymptotic morphisms defines a bilinear map $E^G(A, B)\times E^G(B,C)\to E^G(A, C)$. We define an additive category $E^G$ to be the category which has separable $G$-${C^\ast\text{-algebras}}$ as objects and the $E$-theory group $E^G(A, B)$ as the morphism group from $A$ to $B$. We call the category $E^G$ the Equivariant $E$-Theory category. We have a functor from the category of separable $G$-${C^\ast\text{-algebras}}$ to $E^G$ which is the identity on objects and sends an equivariant $\ast$-homomorphism $\phi$ to the class of the equivariant asymptotic morphism $\operatorname{id}_\Sigma\otimes\phi\otimes\operatorname{id}_{{\mathcal{K}}({\mathcal{H}}_G)}$ in $E^G(A, B)$. For any nuclear $G$-${C^\ast\text{-algebra}}$ $D$, we have a tensor product functor $\sigma_D$ on the category $E^G$ coming from the operation $\sigma_D\colon [[A, B]]_G\to [[A\otimes D, B\otimes D]]_G$ which is the tensor product by $\operatorname{id}_D$ at the level of cycles. The bifunctor $(A,B)\mapsto E^G(A,B)$ from the category of separable $G$-${C^\ast\text{-algebras}}$ to the category of abelian groups is homotopy invariant and stable. Moreover, it is half-exact in both variables with respect to any $G$-extension. We are mostly interested in half-exactness in the second variable. 
We first define a canonical equivariant asymptotic morphism associated to a $G$-extension. For any $G$-extension: $$\begin{aligned} \label{gext} \xymatrix{ 0 \ar[r] & J \ar[r] & B \ar[r]^-{\pi} & B/J \ar[r] & 0 \\ }\end{aligned}$$an approximate unit for the $G$-extension is a continuous approximate unit $(u_t)_{t\geq1}$ of $J$ satisfying the following: - it is asymptotically equivariant: that is, for any $g$ in $G$, $g(u_t)-u_t\to0$ as $t\to\infty$, uniformly over compact subsets of $G$; - it asymptotically commutes with elements of $B$: that is, for any $b\in B$, $[b, u_t]=bu_t-u_tb\to0$ as $t\to\infty$. Such an approximate unit always exists, and any two can be connected through the obvious straight line path. We call an approximate unit having the second property above quasicentral with respect to $B$; and by an approximate unit for the pair of ${C^\ast\text{-algebras}}$ $B_1\subset B_2$, we usually mean an approximate unit of $B_1$ which is quasicentral with respect to $B_2$. For any $G$-extension , a central invariant for the $G$-extension is an equivariant asymptotic morphism from $\Sigma (B/J)$ to $J$ defined by $f\otimes x\mapsto f(u_t)s(x)$ for $t\geq1$, using any set-theoretic section $s$ of the quotient map $\pi\colon B\to B/J$. A central invariant for the $G$-extension defines a unique element in the set $[[\Sigma (B/J), J]]_G$, independent of the choices of an approximate unit and of a section $s$, which we also call the central invariant for the $G$-extension . Central invariants are natural in the following sense. 
Suppose we have the following diagram of $G$-extensions: $$\begin{aligned} \xymatrix{ 0 \ar[r] & J \ar[d]^-{q}\ar[r] & B \ar[d]\ar[r] & B/J \ar[d]^{p}\ar[r] & 0 \\ 0 \ar[r] & J' \ar[r] & B' \ar[r] & B'/J' \ar[r] & 0 \\ }\end{aligned}$$ Then, if we denote the central invariants for the first row and for the second row by $x\in [[\Sigma (B/J), J]]_G$ and $x' \in [[\Sigma (B'/J'), J']]_G$ respectively, we have $x'\circ\Sigma p=q\circ x$ in $[[\Sigma (B/J), J']]_G$. Let $A$ be a separable $G$-${C^\ast\text{-algebra}}$. Consider the following $G$-extension:$$\begin{aligned} \label{gext2} \xymatrix{ 0 \ar[r] & \Sigma A \ar[r] & A(0,1] \ar[r] & A \ar[r] & 0 \\ }\end{aligned}$$ The central invariant associated to the $G$-extension and the class defined by $\operatorname{id}_{\Sigma A}$ coincide in the group $[[\Sigma A, \Sigma A]]_G$. We now state the important property of the bifunctor $E^G$. (cf. [@GHT] THEOREM 6.20.) The bifunctor $(A,B) \mapsto E^G(A, B)$ is half-exact in both variables. In the same way as for the $K$-theory functor, the half-exactness together with the stability and the homotopy invariance of the functor $E^G$ automatically implies Bott Periodicity, that is, an isomorphism between $\Sigma^2$ and ${\mathbb{C}}$ in the category $E^G$. For any $G$-extension , and for any separable $G$-${C^\ast\text{-algebra}}$ $A$, we have six-term exact sequences: $$\begin{aligned} \xymatrix{ E^G(A, J) \ar[r] & E^G(A, B) \ar[r] & E^G(A, B/J) \ar[d] \\ E^G(A, \Sigma (B/J)) \ar[u] & E^G(A, \Sigma B) \ar[l] & E^G(A, \Sigma J) \ar[l] \\ }\end{aligned}$$ and, $$\begin{aligned} \xymatrix{ E^G(J, A) \ar[d] & E^G(B, A) \ar[l] & E^G(B/J, A) \ar[l] \\ E^G(\Sigma (B/J), A) \ar[r] & E^G(\Sigma B, A) \ar[r] & E^G(\Sigma J, A) \ar[u] \\ }\end{aligned}$$ In the above sequences, the boundary maps are given by composition with the central invariant associated to the extension . 
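Returning to the central invariant $f\otimes x\mapsto f(u_t)s(x)$ defined above, it may be worth recording why this formula is asymptotically multiplicative; the following is a sketch of the standard verification, and shows exactly where quasicentrality and the vanishing of $f\in\Sigma$ at the endpoint enter.

```latex
% For f, g in Sigma = C_0(0,1) and x, y in B/J:
f(u_t)s(x)\,g(u_t)s(y)
 \;\sim\; f(u_t)g(u_t)\,s(x)s(y)
 \quad\text{(quasicentrality: } [s(x),g(u_t)]\to0\text{),}
% and since s(x)s(y) - s(xy) lies in J, while h(u_t)j -> h(1)j for j in J
% and h continuous on [0,1], with (fg)(1) = 0:
(fg)(u_t)\,s(x)s(y) \;\sim\; (fg)(u_t)\,s(xy).
% Hence f(u_t)s(x)g(u_t)s(y) - (fg)(u_t)s(xy) -> 0 as t -> infinity,
% so different sections s give the same asymptotic morphism as well.
```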
Just as in the case of the functor $KK^G$, the bifunctor $E^G$ has the following universal property, which can be shown purely categorically using the properties of $E^G$ listed so far. (cf. [@AsympE] Theorem 1.13., [@GHT])\[universalE\] Let $F$ be any stable, homotopy invariant, half-exact (covariant or contravariant) functor from the category of separable $G$-${C^\ast\text{-algebras}}$ to an additive category. Then, the functor $F$ factors uniquely through the category $E^G$. Now, it is time to relate Equivariant $KK$-Theory with Equivariant $E$-Theory. Note that, since $E^G$ is stable, homotopy invariant and split exact, by the universal property of $KK^G$, the canonical functor from the category of separable $G$-${C^\ast\text{-algebras}}$ to the Equivariant $E$-Theory category $E^G$ factors uniquely through Kasparov’s category $KK^G$. We first describe this unique functor from the category $KK^G$ to the category $E^G$. Next, we see the important fact that when $A$ is a separable $G$-${C^\ast\text{-algebra}}$ such that the functor $B\mapsto KK^G(A, B)$ is half-exact, the abelian groups $KK^G(A, B)$ and $E^G(A,B)$ are isomorphic for any separable $G$-${C^\ast\text{-algebra}}$ $B$ via this unique functor from $KK^G$ to $E^G$. Let $A, B$ be any separable $G$-${C^\ast\text{-algebras}}$. We define a homomorphism from $KK^G_1(A, B)$ to $E^G(\Sigma A, B)=[[\Sigma^2 A{\mathcal{K}}({\mathcal{H}}_G), \Sigma B{\mathcal{K}}({\mathcal{H}}_G)]]_G$ in the following way. Take any odd Kasparov $A$-$B$-module $x=({\mathcal{E}}, \phi, P)$. We have a canonical pullback extension associated to $x$:$$\begin{aligned} \label{extpicture2} \xymatrix{ 0 \ar[r] & {\mathcal{K}}({\mathcal{E}}) \ar@{=}[d]\ar[r] & E_{\phi'} \ar[d]\ar[r] & A \ar[d]^-{\phi'}\ar[r] & 0\\ 0 \ar[r] & {\mathcal{K}}({\mathcal{E}}) \ar[r] & M(B) \ar[r] & Q(B) \ar[r] & 0\\ }\end{aligned}$$ Denote by $c$ the central invariant in the group $[[\Sigma A, {\mathcal{K}}({\mathcal{E}})]]_G$ associated to the $G$-extension . 
We tensor this element $c$ with $\operatorname{id}_\Sigma$ and $\operatorname{id}_{{\mathcal{K}}({\mathcal{H}}_G)}$ to obtain the element $\operatorname{id}_{\Sigma}\otimes c \otimes\operatorname{id}_{{\mathcal{K}}({\mathcal{H}}_G)}$ in the group $[[\Sigma^2 A{\mathcal{K}}({\mathcal{H}}_G), \Sigma {\mathcal{K}}({\mathcal{E}}\otimes{\mathcal{H}}_G)]]_G$. Finally, using any $G$-embedding ${\mathcal{E}}\otimes{\mathcal{H}}_G\to B\otimes{\mathcal{H}}_G$, we map this element to $[[\Sigma^2 A{\mathcal{K}}({\mathcal{H}}_G), \Sigma B{\mathcal{K}}({\mathcal{H}}_G)]]_G=E^G(\Sigma A, B)$. This procedure obviously respects homotopy of Kasparov modules. In this way, we obtain the homomorphism from $KK^G_1(A, B)$ to $E^G(\Sigma A, B)$. It is easy to see that, for any fixed $B$, the homomorphisms from $KK^G_1(A, B)$ to $E^G(\Sigma A, B)$ define a natural transformation between the two functors $KK^G_1(\cdot, B)$ and $E^G(\Sigma\cdot, B)$ from the category of separable $G$-${C^\ast\text{-algebras}}$ to the category of abelian groups. Using the Bott Periodicity $KK^G(A, B)\cong KK^G_1(A, \Sigma B)$ and the above homomorphisms $KK^G_1(A, \Sigma B)\to E^G(\Sigma A, \Sigma B)\cong E^G(A, B)$, we have a natural transformation of the functors $KK^G(\cdot, B)$ to $E^G(\cdot, B)$ which sends $\operatorname{id}_B$ to $\operatorname{id}_B$. Each homomorphism $KK^G(A, B)\to E^G(A, B)$ must coincide with the homomorphism from $KK^G(A, B)$ to $E^G(A, B)$ given by the canonical functor from $KK^G$ to $E^G$. To see this, since the homomorphisms $KK^G(A, B) \to E^G(A, B)$ commute with $\sigma_{{\mathcal{K}}({\mathcal{H}}_G)}$, we may assume $A= A'{\mathcal{K}}({\mathcal{H}}_G)$ and $B=B'{\mathcal{K}}({\mathcal{H}}_G)$ for some separable $G$-${C^\ast\text{-algebras}}$ $A',B'$. According to [@Mey], there are $G$-${C^\ast\text{-algebras}}$ $qA$ and $qB$ such that $qA$ (resp. $qB$) is isomorphic to $A$ (resp. $B$) in $KK^G$ via a canonical $\ast$-homomorphism $qA\to A$ (resp. 
$qB\to B$) and that any element in $KK^G(A, B)$ corresponds to an element in $KK^G(qA, B)$ defined by a $\ast$-homomorphism (up to tensoring by ${\mathcal{K}}$) via the $\ast$-homomorphism $qA\to A$. Since our two homomorphisms $KK^G(A, B)\to E^G(A, B)$ coincide on elements defined by $\ast$-homomorphisms and are natural in the first variable, they must coincide on all elements. It follows that the homomorphisms from $KK^G_1(A, B)$ to $E^G(\Sigma A, B)$ defined above are, in fact, natural in both variables. The following important fact was proven by Kasparov and Skandalis in [@bolic]. \[KasSka\] (cf. [@bolic] PROPOSITION A.3.) Let $A$ be a separable $G$-${C^\ast\text{-algebra}}$ such that $KK^G(A, \cdot)$ is half-exact (for example, when $A$ is proper and nuclear). Then, the canonical homomorphism $KK^G(A, B)\to E^G(A, B)$ is an isomorphism for any separable $G$-${C^\ast\text{-algebra}}$ $B$. Showing the surjectivity is easier. Since the homomorphisms $KK^G(A, B) \to E^G(A, B)$ commute with $\sigma_\Sigma$ and $\sigma_{{\mathcal{K}}({\mathcal{H}}_G)}$, we may assume $A=\Sigma A'{\mathcal{K}}({\mathcal{H}}_G)$ and $B=\Sigma B'{\mathcal{K}}({\mathcal{H}}_G)$ for some separable $G$-${C^\ast\text{-algebras}}$ $A',B'$. By our assumption, for any $x$ in $E^G(A, B)$, there is a separable $G$-${C^\ast\text{-algebra}}$ $C$, a $\ast$-homomorphism $\phi$ from $A$ to $C$ and another $\ast$-homomorphism $\phi'$ from $B$ to $C$ which induces an invertible map $\phi'_\ast\colon E^G(A,B)\to E^G(A,C)$ such that $\phi'^{-1}_\ast([\phi])=x$ in $E^G(A, B)$: here, $C$ is a mapping cone of some $G$-extension. Half-exactness of $KK^G(A, \cdot)$ implies that $\phi'$ induces an isomorphism $\phi'_\ast\colon KK^G(A, B) \to KK^G(A, C)$ as well. Since our homomorphisms $KK^G(A, B)\to E^G(A,B)$ are natural and send an element defined by a $\ast$-homomorphism to the element defined by the same $\ast$-homomorphism, we see that they are surjective. We now turn to the injectivity. 
Thanks to the universal property of $E^G$, the assumption that the functor $KK^G(A, \cdot)$ is half-exact implies that there is a natural transformation from $E^G(A, \cdot)$ to $KK^G(A, \cdot)$ sending $\operatorname{id}_A$ to $\operatorname{id}_A$. Composing this with the homomorphisms from $KK^G(A, B)$ to $E^G(A, B)$, we have a natural transformation from $KK^G(A, \cdot)$ to $KK^G(A, \cdot)$ which sends $\operatorname{id}_A$ to $\operatorname{id}_A$. We show that this natural transformation gives an isomorphism (actually the identity) $KK^G(A, B) \to KK^G(A, B)$ for any $B$. The homomorphisms $KK^G(A, B)\to KK^G(A,B)$ are natural in both variables (as long as we consider $A$ such that $KK^G(A, \cdot)$ is half-exact) and send $\operatorname{id}_A$ to $\operatorname{id}_A$. Thus, they are the identity on elements defined by $\ast$-homomorphisms. As above, according to [@Mey], any element in $KK^G(A, B)$ is a composition of $\ast$-homomorphisms and inverses of $\ast$-homomorphisms in the category $KK^G$. Thus, the homomorphisms $KK^G(A, B)\to KK^G(A, B)$ are the identity on all morphisms. We conclude that the canonical homomorphism from $KK^G(A, B)$ to $E^G(A, B)$ is an isomorphism for all $B$. The following is an immediate corollary of this: \[important\] Let $A$ be a separable $G$-${C^\ast\text{-algebra}}$ such that $KK^G(A, \cdot)$ is half-exact (for example, when $A$ is proper and nuclear). Then, the canonical homomorphism from $KK^G_1(A, B)$ to $E^G(\Sigma A, B)$ is an isomorphism for any separable $G$-${C^\ast\text{-algebra}}$ $B$. The Baum-Connes Conjecture and\ the Higson-Kasparov Theorem =============================== Let $G$ be a second countable, locally compact group. 
The Baum-Connes conjecture proposes a formula for calculating the $K$-theory of the reduced group algebra ${C^\ast}_{\text{red}}(G)$, which is a highly analytic object (it is a ${C^\ast}$-completion of the convolution algebra $C_c(G)$ or $L^1(G)$), in terms of the $G$-equivariant $K$-homology (with $G$-compact supports) of a universal proper $G$-space $\underline{E}G$, which is certainly more geometric in nature. Following [@Val], we will first quickly introduce the most current form of the conjecture using equivariant $KK$-theory. After that, we will introduce the Higson-Kasparov Theorem, which is one of the most general results concerning the Baum-Connes Conjecture, and discuss some of the technical issues one must overcome when proving this theorem, which will be explored in detail in later chapters. A Hausdorff, paracompact topological space $X$ with a continuous $G$-action is a proper $G$-space if it is covered by $G$-invariant open subsets $U$ such that there exist a compact subgroup $H$ of $G$ and a $G$-equivariant map from $U$ to $G/H$. A proper $G$-space $X$ is universal if for any proper $G$-space $Y$, there exists a $G$-equivariant continuous map from $Y$ to $X$, unique up to $G$-homotopy. It is known that properness of a locally compact $G$-space coincides with the usual notion of properness of $G$-actions. A universal proper $G$-space exists, and it is unique up to $G$-homotopy; we denote it by $\underline{E}G$. See [@BCH] and also [@CEM] for a detailed exposition of these notions. A proper $G$-space is called $G$-compact if it is covered by the $G$-translates of a compact subset $K$. A $G$-compact proper $G$-space is locally compact, and its quotient by $G$ is compact. Given a universal proper $G$-space $\underline{E}G$, the $G$-invariant $G$-compact proper subsets of $\underline{E}G$ form an inductive system under (proper) inclusion. Hence, we obtain an inductive system of $G$-equivariant $K$-homology groups. 
The $K$-homology group $RK^G_{\ast}(\underline{E}G)$ of a universal proper $G$-space $\underline{E}G$ with $G$-compact supports is defined by: $$\begin{aligned} RK^G_{\ast}(\underline{E}G) = \lim_{\substack{X\subseteq \underline{E}G \\ \text{$X$:$G$-inv.\,$G$-cp.}}}K_\ast^G(X) = \lim_{\substack{X\subseteq \underline{E}G \\ \text{$X$:$G$-inv.\,$G$-cp.}}}KK_\ast^G(C_0(X), {\mathbb{C}})\end{aligned}$$ G. Kasparov defined in [@Kas2] a descent homomorphism for separable $G$-${C^\ast\text{-algebras}}$: $$\xymatrix{ KK_\ast^G(A,B) \ar[r]^-{j_G} & KK_\ast({C^\ast}_{\text{max}}(G,A), {C^\ast}_{\text{max}}(G,B)); }$$ and its reduced version: $$\xymatrix{ KK_\ast^G(A,B) \ar[r]^-{j_{G,\text{red}}} & KK_\ast({C^\ast}_{\text{red}}(G,A), {C^\ast}_{\text{red}}(G,B)). }$$ On the other hand, for any $G$-compact proper $G$-space $X$, there is a distinguished class $[\mathcal{L}_X]$ in the $K$-theory group $K_\ast({C^\ast}_{\text{red}}(G,X))=KK_\ast({\mathbb{C}},{C^\ast}_{\text{red}}(G,X))$ of the reduced crossed product ${C^\ast}_{\text{red}}(G,X)$ (see [@Val]). The assembly map $\mu_{G,\text{red}}^X$ for a $G$-compact proper $G$-space $X$ is defined as the descent homomorphism followed by the Kasparov product with $[\mathcal{L}_X]$: $$\xymatrix{ \mu_{G,\text{red}}^X\colon KK_\ast^G(C_0(X),{\mathbb{C}}) \ar[r]^-{j_{G,\text{red}}} & KK_\ast({C^\ast}_{\text{red}}(G,X),{C^\ast}_{\text{red}}(G)) \ar[r]^-{[\mathcal{L}_X]\times} & KK_\ast({\mathbb{C}},{C^\ast}_{\text{red}}(G)). 
}$$ By fixing a universal proper $G$-space $\underline{E}G$, the Baum-Connes assembly map $\mu_{G,\text{red}}$ is defined as the inductive limit of the assembly maps $\mu_{G,\text{red}}^X$ defined above, over the $G$-invariant, $G$-compact subsets $X$ of $\underline{E}G$: $$\xymatrix{ \mu_{G,\text{red}}\colon RK^G_{\ast}(\underline{E}G) = \lim_{\substack{X\subseteq \underline{E}G \\ \text{$X$:$G$-inv.\,$G$-cp.}}}KK_\ast^G(C_0(X),{\mathbb{C}}) \ar[r]^-{\mu_{G,\text{red}}^X} & KK_\ast({\mathbb{C}},{C^\ast}_{\text{red}}(G)) = K_\ast({C^\ast}_{\text{red}}(G)). }$$ Shintaro Nishikawa \[thm:BC\] (Baum-Connes conjecture) The assembly map $\mu_{G,\text{red}}$ is always an isomorphism. For general groups $G$, Conjecture \[thm:BC\] is still open. Nonetheless, it has been verified for quite large classes of groups. See [@Val] for a list of groups satisfying Conjecture \[thm:BC\]. For any separable $G$-${C^\ast\text{-algebra}}$ $A$, there is a more general assembly map $\mu_{G,\text{red}}^A$ with coefficients in $A$, which is defined in almost the same way as above: $$\xymatrix{ \mu_{G,\text{red}}^A\colon RKK^G_{\ast}(\underline{E}G, A) = \lim_{\substack{X\subseteq \underline{E}G \\ \text{$X$:$G$-inv.\,$G$-cp.}}}KK_\ast^G(C_0(X), A) \ar[r] &KK_\ast({\mathbb{C}},{C^\ast}_{\text{red}}(G,A)) = K_\ast({C^\ast}_{\text{red}}(G,A)) }$$ \[thm:BCC\] (Baum-Connes Conjecture with coefficients) The assembly map $\mu_{G,\text{red}}^A$ is an isomorphism for any separable $G$-${C^\ast\text{-algebra}}$ $A$. Conjecture \[thm:BCC\] has the virtue of being inherited by closed subgroups. Although this conjecture is known to be false in general (see [@HigGuen]), it is believed that it should be valid for a reasonably large class of groups. Concerning Conjecture \[thm:BCC\], N. Higson and G. Kasparov proved in [@HigKas2] a quite general result which states that Conjecture \[thm:BCC\] holds for all (second countable) groups satisfying a certain geometric condition. 
Let $G$ be a locally compact group. An affine isometric action of $G$ on a real Hilbert space ${\mathcal{H}}$ will be denoted by $(\pi, b)$. This means that we have a continuous group homomorphism $\pi\colon G\to O({\mathcal{H}})$ (the infinite orthogonal group $O({\mathcal{H}})$ of ${\mathcal{H}}$ is equipped with the strong operator topology) and a (norm) continuous map $b\colon G\to {\mathcal{H}}$ satisfying the cocycle condition $b(gg')=\pi(g)b(g')+b(g)$ for any $g,g'$ in $G$. The action is called metrically proper if $\displaystyle \lim_{g\to \infty}\|b(g)\|=\infty$. \[dfn:a-t\] A second countable, locally compact group is called [a-${T}$-menable]{} if it admits a metrically proper, affine isometric action on a Hilbert space. A-$T$-menable groups are also called groups with the Haagerup property. The class of a-$T$-menable groups contains all second countable amenable groups; and an a-$T$-menable group $G$ has Kazhdan’s property $(T)$ if and only if $G$ is compact. The prefix “a-$T$” indicates the negation of property $(T)$. \[thm:BCCa-T\] (cf. [@HigKas2] Theorem 9.1.) The Baum-Connes conjecture with coefficients holds for all a-$T$-menable groups. In the rest of this chapter, we roughly summarize the proof of Theorem \[thm:BCCa-T\] given by N. Higson and G. Kasparov in [@HigKas2] and discuss some of the technical issues surrounding the proof. Our references include [@HigKas1], [@HigGuen] and [@Julg]. Recall that for a second countable, locally compact group $G$, $KK^G$ denotes the additive category of separable $G$-${C^\ast\text{-algebras}}$ whose morphism groups are the Kasparov groups $KK^G(A,B)$. A standard approach to Conjecture \[thm:BCC\] is the so-called dual-Dirac method, which may be summarized in the following way. See [@Kas2] [@Tu] and [@MeNe]. (cf. [@MeNe] Theorem 8.3.) Let $G$ be a second countable, locally compact group. 
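For concreteness, the pair $(\pi,b)$ encodes the affine action $g\cdot v=\pi(g)v+b(g)$ for $v\in{\mathcal{H}}$, and the cocycle condition is precisely the requirement that this assignment be a group action: $$\begin{aligned} g\cdot(g'\cdot v)=\pi(g)\bigl(\pi(g')v+b(g')\bigr)+b(g)=\pi(gg')v+\bigl(\pi(g)b(g')+b(g)\bigr),\end{aligned}$$ which agrees with $(gg')\cdot v=\pi(gg')v+b(gg')$ for all $v$ exactly when $b(gg')=\pi(g)b(g')+b(g)$.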
Suppose one finds a separable proper $G$-${C^\ast\text{-algebra}}$ $A$, an element $D$ (a Dirac morphism) in $KK^G(A, {\mathbb{C}})$ and an element $\beta$ (a dual Dirac morphism) in $KK^G({\mathbb{C}}, A)$ such that $\beta\circ D=1_A$. Then, $\gamma=D\circ\beta$ (a gamma element for $G$) is an idempotent in the ring $KK^G({\mathbb{C}},{\mathbb{C}})$, and the Baum-Connes assembly map $\mu_{G,\text{red}}^A$ is split-injective for any separable $G$-${C^\ast\text{-algebra}}$ $A$. Moreover, a gamma element $\gamma$ is unique if it exists. If $\gamma=1 \in KK^G({\mathbb{C}}, {\mathbb{C}})$, then the assembly map $\mu_{G,\text{red}}^A$ is also surjective for any $A$: in other words, the Baum-Connes conjecture with coefficients holds for $G$. There is a way of seeing a Dirac morphism as an analogue of simplicial approximation in topology, and in that sense it is known that a Dirac morphism (defined in a suitable way) always exists; see [@MeNe]. What we actually use is the following. \[DD\] (cf. [@MeNe] Theorem 8.3., [@Tu] Theorem 2.2.) Let $G$ be a second countable, locally compact group. Suppose the identity $1_{\mathbb{C}}\in KK^G({\mathbb{C}}, {\mathbb{C}})$ factors through a separable proper $G$-${C^\ast\text{-algebra}}$ $A$. Then, a gamma element $\gamma$ for $G$ exists and $\gamma=1$. Hence, the Baum-Connes conjecture with coefficients holds for $G$. We fix a second countable, locally compact group $G$ which acts metrically properly and affine isometrically on a fixed separable real Hilbert space ${\mathcal{H}}$. In view of Theorem \[DD\], in order to prove Theorem \[thm:BCCa-T\], we need to find a natural candidate $A$ through which the identity $1_{\mathbb{C}}$ factors. The candidate is the ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$, which is now called the ${C^\ast\text{-algebra}}$ of the Hilbert space ${\mathcal{H}}$. (${C^\ast\text{-algebra}}$ of Hilbert space) (cf. [@HigKas2] Section 4.) 
We define the graded ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ of the Hilbert space ${\mathcal{H}}$ in the following way. For each finite dimensional affine subspace $V$ of ${\mathcal{H}}$, denote by $V_0$ the linear part of $V$, which is naturally regarded as a linear subspace of ${\mathcal{H}}$. The complexified exterior algebra $\Lambda^\ast(V_0)\otimes{\mathbb{C}}$ is naturally regarded as a graded Hilbert space (see Example \[Clif\]). We simply denote the graded ${C^\ast\text{-algebra}}$ $B(\Lambda^\ast(V_0)\otimes{\mathbb{C}})$ by $L(V)$. The graded ${C^\ast\text{-algebra}}$ ${\mathcal{C}}(V)$ is defined by ${\mathcal{C}}(V)=C_0(V\times V_0, L(V))$. The graded ${C^\ast\text{-algebra}}$ $A(V)$ is the graded tensor product ${\mathcal{S}}\hat\otimes{\mathcal{C}}(V)$: recall ${\mathcal{S}}$ is the ${C^\ast\text{-algebra}}$ $C_0({\mathbb{R}})$ graded by reflection at the origin. For finite dimensional affine subspaces $V \subseteq V'=V\oplus W$ ($W$ is defined as $V'_0\ominus V_0$, which is a finite dimensional linear subspace of ${\mathcal{H}}$), we have an isomorphism of graded ${C^\ast\text{-algebras}}$ ${\mathcal{C}}(V^\prime)\cong{\mathcal{C}}(V)\hat\otimes{\mathcal{C}}(W)$. We define an inclusion $A(V)\xhookrightarrow{}A(V^\prime)$ by tensoring the inclusion ${\mathcal{S}}\xhookrightarrow{}A(W)$, which we define just below, with the identity on ${\mathcal{C}}(V)$. For a finite dimensional linear subspace $W$ of ${\mathcal{H}}$, the Bott operator ${\mathcal{B}}_W$ for $W$ is an odd unbounded multiplier on ${\mathcal{C}}(W)$ defined as $(w_1,w_2)\mapsto i\overline{c}(w_1)+c(w_2)$ on the subspace of compactly supported functions ($c(w)$ and $\overline{c}(w)$ are the Clifford multiplication operators defined in Example \[Clif\]). 
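As a concrete illustration (a sketch; the sign conventions for $c$ and $\overline{c}$ are those suggested by the decomposition used later in the proof of Proposition \[prop:cal1\], namely $c(w)=\operatorname{ext}(w)+\operatorname{int}(w)$ and $\overline{c}(w)=\operatorname{ext}(w)-\operatorname{int}(w)$, which we assume agree with Example \[Clif\]), take $W={\mathbb{R}}w$ one-dimensional with $\|w\|=1$. Then $\Lambda^\ast(W)\otimes{\mathbb{C}}$ has basis $\{1, w\}$ and $$\begin{aligned} c(w)=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad \overline{c}(w)=\begin{pmatrix}0&-1\\1&0\end{pmatrix},\end{aligned}$$ so that $c(w)^2=1$, $\overline{c}(w)^2=-1$ and $c(w)\overline{c}(w)+\overline{c}(w)c(w)=0$. At the point $(w_1,w_2)=(xw,yw)$ of $W\times W$ the Bott operator is the odd matrix $$\begin{aligned} i\overline{c}(xw)+c(yw)=\begin{pmatrix}0&y-ix\\y+ix&0\end{pmatrix},\end{aligned}$$ which is invertible away from the origin; it enters the odd multiplier $X\hat\otimes1+1\hat\otimes{\mathcal{B}}_W$ used below to define the inclusion ${\mathcal{S}}\xhookrightarrow{}A(W)$.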
We recall here that associated to an (odd) multiplier $T$ on $A$, there is a unique (graded) functional calculus homomorphism from ${\mathcal{S}}$ to the multiplier algebra $M(A)$ sending the resolvent functions $(x\pm i)^{-1}$ to the resolvents $(T\pm i)^{-1}$. We use the odd multiplier $X\hat\otimes1+1\hat\otimes{\mathcal{B}}_W$ on $A(W)={\mathcal{S}}\hat\otimes{\mathcal{C}}(W)$ to define the inclusion ${\mathcal{S}}\xhookrightarrow{}A(W)$ for any finite dimensional linear subspace $W$ of ${\mathcal{H}}$. The ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ of the Hilbert space ${\mathcal{H}}$ is defined as the inductive limit of the ${C^\ast\text{-algebras}}$ $A(V)$ over all finite dimensional affine subspaces $V$ of ${\mathcal{H}}$, using the inclusions defined above. The ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ naturally becomes a $G$-${C^\ast\text{-algebra}}$. We note here that for any increasing sequence of affine subspaces $V_n$ whose union is dense in ${\mathcal{H}}$, the inductive limit of the $A(V_n)$ is canonically isomorphic to $A({\mathcal{H}})$. The next proposition says that the ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ is a proper $G$-${C^\ast\text{-algebra}}$. (cf. [@HigKas2] Theorem 4.9.) The ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ of the Hilbert space ${\mathcal{H}}$ is a proper $G$-${C^\ast\text{-algebra}}$. The center $Z({\mathcal{H}})$ of $A({\mathcal{H}})$ is the inductive limit of the centers $Z(V)\cong C_0([0,\infty)\times V\times V_0)$ of the $A(V)$. The inclusion $A(V)\xhookrightarrow {}A(V^\prime)$ restricts to an inclusion $Z(V)\xhookrightarrow {}Z(V^\prime)$; and this corresponds to the ‘projections’ $$[0,\infty)\times V^\prime\times V^\prime_0 \ni (t, v^\prime_1, v^\prime_2) \mapsto (\sqrt{\mathstrut t^2+\|w_1\|^2+\|w_2\|^2}, v_1, v_2) \in [0,\infty)\times V\times V_0$$ where $v^\prime_i=v_i+w_i$ in the decompositions $V^\prime=V\oplus W$ and $V^\prime_0=V_0\oplus W$. 
Therefore, the Gelfand spectrum of $Z({\mathcal{H}})$ is identified with the second countable, locally compact Hausdorff space $[0,\infty)\times {\mathcal{H}}\times{\mathcal{H}}$ whose topology is the weak topology defined by the inclusion $[0,\infty)\times {\mathcal{H}}\times{\mathcal{H}}\ni (t,v_1,v_2) \mapsto (\sqrt{\mathstrut t^2+\|v_1\|^2+\|v_2\|^2}, v_1, v_2) \in [0,\infty)\times {\mathcal{H}}\times{\mathcal{H}}$, where the two copies of ${\mathcal{H}}$ on the right-hand side are endowed with the weak topology of the Hilbert space. The $G$-action on $A({\mathcal{H}})$ corresponds to the $G$-action on $[0,\infty)\times {\mathcal{H}}\times{\mathcal{H}}$ which is the identity on the first factor, the affine action of $G$ on ${\mathcal{H}}$ on the second, and the linear part of the affine action on the third. The properness of this $G$-action is easily verified. One can also check that $Z({\mathcal{H}})A({\mathcal{H}})$ is dense in $A({\mathcal{H}})$. This shows $A({\mathcal{H}})$ is a proper $G$-${C^\ast\text{-algebra}}$. The Bott operators ${\mathcal{B}}_W$ for the finite dimensional linear subspaces $W$ of ${\mathcal{H}}$ can be assembled together to define a single odd unbounded multiplier ${\mathcal{B}}$ on $A({\mathcal{H}})$, which we call the Bott operator for ${\mathcal{H}}$. Using the functional calculus for ${\mathcal{B}}$, we obtain an element $F={\mathcal{B}}(1+{\mathcal{B}}^2)^{-\frac{1}2}$ in $M(A({\mathcal{H}}))$. This element is selfadjoint, essentially unitary and essentially equivariant (meaning $F^2-I, \,g(F)-F \in A({\mathcal{H}}) \,\,\text{for $g \in G$}$); thus $\frac{F+1}{2}$ is essentially a projection which is essentially equivariant. It defines an element $b$ in $KK^G_1({\mathbb{C}}, A({\mathcal{H}}))$. In the same notation as above, we call the element $b=(A({\mathcal{H}}), 1, \frac{F+1}{2})$ in $KK^G_1({\mathbb{C}}, A({\mathcal{H}}))$ the Bott element or the dual Dirac element. 
To find the Dirac element which inverts the Bott element $b$, we need to find a certain “Dirac operator” which defines an extension of $A({\mathcal{H}})$ by “compact operators”, because a natural Dirac element should arise as the boundary of such an extension (just as the Toeplitz extension inverts the classical Bott element). The approach given in [@HigKas2] is slightly different from this. They constructed a certain $G$-continuous field $(A_\alpha({\mathcal{H}}))_{\alpha\in[0,\infty)}$ of $G$-${C^\ast\text{-algebras}}$ over the interval $[0,\infty)$ with $A_0({\mathcal{H}})=A({\mathcal{H}})$ and $A_{\alpha}({\mathcal{H}})= {\mathcal{S}}\hat\otimes{\mathcal{K}}(H_\alpha({\mathcal{H}}))$ for $\alpha$ in $(0,\infty)$, where $(H_\alpha({\mathcal{H}}))_{\alpha\in(0,\infty)}$ is a certain continuous field of $G$-Hilbert spaces (the details are given in the following chapter). The $G$-${C^\ast\text{-algebra}}$ $\mathcal{F}$ of continuous sections of the field $(A_\alpha({\mathcal{H}}))_{\alpha\in[0,\infty)}$ which vanish at infinity gives us an extension of $G$-${C^\ast\text{-algebras}}$ (the quotient map being evaluation at $0$): $$\label{ext} \xymatrix{ 0 \ar[r] & {\mathcal{K}}({\mathcal{S}}\hat\otimes{\mathcal{E}}) \ar[r] & \mathcal{F} \ar[r] & A({\mathcal{H}}) \ar[r] & 0 }$$ where ${\mathcal{E}}$ is the space of continuous sections of the field $(H_\alpha({\mathcal{H}}))_{\alpha\in(0,\infty)}$ of $G$-Hilbert spaces which vanish at infinity, and where ${\mathcal{S}}\hat\otimes{\mathcal{E}}$ is regarded as a $G$-$S\Sigma$-Hilbert module (we identify $\Sigma=C_0(0, 1)$ with $C_0(0, \infty)$ as long as no confusion arises). We note that the extension is isomorphic to $$\xymatrix{ 0 \ar[r] & S{\mathcal{K}}\Sigma \ar[r] & \mathcal{F} \ar[r] & A({\mathcal{H}}) \ar[r] & 0 }$$ if we disregard the $G$-actions. 
Unfortunately, we cannot directly associate an element in $KK^G_1(A({\mathcal{H}}),S\Sigma)\\\cong KK^G_1(A({\mathcal{H}}),{\mathbb{C}})$ to the extension, since it is not clear that the extension admits a $G$-equivariant completely positive section. Nonetheless, there is an element in $KK^G_1(A({\mathcal{H}}),S\Sigma)$ which serves as an approximation of an “element” associated to the extension. The precise meaning of this approximation is as follows. For separable $G$-${C^\ast\text{-algebras}}$ $A$ and $B$, one can define an abelian semigroup $\{A, B\}_G$ of homotopy equivalence classes of equivariant asymptotic morphisms from $A$ to ${\mathcal{K}}({\mathcal{E}})$, where ${\mathcal{E}}$ is a countably generated $G$-$B$-Hilbert module. The semigroup $\{\Sigma A, B\}_G$ is an abelian group; and associated to any extension of separable $G$-${C^\ast\text{-algebras}}$: $$\xymatrix{ 0 \ar[r] & {\mathcal{K}}({\mathcal{E}}) \ar[r] & \mathcal{F} \ar[r] & A \ar[r] & 0 }$$ there is a uniquely determined class in the group $\{\Sigma A, B\}_G$. Now, we denote by $\alpha$ the class in $\{\Sigma A({\mathcal{H}}), S\Sigma\}_G$ uniquely associated to the extension above. There are naturally defined group homomorphisms $\eta$ from $KK^G_1(A,B)$ to $\{\Sigma A, B\}_G$ and from $KK^G_0(A,B)$ to $\{\Sigma^2 A, B\}_G$ ([@HigKas2] Definition 7.4.). In the paper [@HigKas2], N. Higson and G. Kasparov constructed a canonical element $d$ in $KK^G_1(A({\mathcal{H}}),S\Sigma)\cong KK^G_1(A({\mathcal{H}}),{\mathbb{C}})$ such that $\eta(d)=\alpha \in \{\Sigma A({\mathcal{H}}), S\Sigma\}_G$. We call the element $d$ the Dirac element. The conclusion is as follows. \[conc\] (cf. [@HigKas2] Theorem 8.5.) The Dirac element $d \in KK^G_1(A({\mathcal{H}}), {\mathbb{C}})$ is a (right) inverse of the dual Dirac element $b \in KK^G_1({\mathbb{C}}, A({\mathcal{H}}))$. In other words, we have $b\otimes_{A({\mathcal{H}})}d=1_{\mathbb{C}}$. 
As is implied in the construction of the Dirac element $d$, the proof of Theorem \[conc\] takes a somewhat indirect approach. It is based on an $E$-theoretic argument. One first calculates the composition of the asymptotic morphisms $\eta(d)$ and $\eta(b)$, and next translates this calculation into one for the Kasparov bivariant theory $KK^G$. This ends our brief summary of the Higson-Kasparov Theorem. In the following chapters, we are going to give the details of the proof of the Higson-Kasparov Theorem. $E$-theoretic Part of the Higson-Kasparov Theorem ================================================= In this chapter, we will give the details of the $E$-theoretic part of the Higson-Kasparov Theorem. We will consider a second countable, locally compact group $G$ which acts affine isometrically on a separable infinite dimensional (real) Hilbert space ${\mathcal{H}}$. The goal of this chapter is to define the canonical $G$-extension of the ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ and to compute the composition (of asymptotic morphisms) of the Bott element and the central invariant associated to this extension. (cf. [@HigKas2] Definition 2.6.) For any positive real number $\alpha>0$, we will later define the canonical graded (complex!) Hilbert space $H_\alpha({\mathcal{H}})$ associated to the (real) Hilbert space ${\mathcal{H}}$ and the canonical unbounded operator on $H_\alpha({\mathcal{H}})$. Fix $\alpha>0$. We first define, for any finite dimensional affine subspace $V$ of ${\mathcal{H}}$, the graded complex Hilbert space $H(V)=L^2(V, \Lambda^\ast(V_0)\otimes{\mathbb{C}})$. Here, we use the usual Lebesgue measure on $V$. 
For any finite dimensional linear subspace $W$ of ${\mathcal{H}}$, the Bott-Dirac operator on $W$ (for a fixed $\alpha$) is an odd symmetric unbounded operator$$\label{BottDiracW} B_{W,\alpha}=\sum^m_{j=1}\alpha \overline{c}(w_j)\frac{\partial}{\partial x_j}+c(w_j)x_j$$ defined on the subspace $s(W)$ of Schwartz functions in $H(W)$, where $m=\dim W$, the $x_j$ are the coordinate functions for some fixed orthonormal basis of $W$ and the $w_j$ form its dual basis. One can check that the Bott-Dirac operator $B_{W,\alpha}$ is defined independently of the choice of orthonormal basis for $W$. When $W=W_1\oplus W_2$, $H(W)=H(W_1)\hat\otimes H(W_2)$ naturally; and we have $B_{W,\alpha}=B_{W_1,\alpha}\hat\otimes1 + 1\hat\otimes B_{W_2,\alpha}$. (cf. [@HigKas2] Definition 2.6.) The Bott-Dirac operator $B_{W,\alpha}$ is an essentially selfadjoint odd unbounded operator having compact resolvent with one-dimensional kernel. If $W$ is one-dimensional, the Bott-Dirac operator $B_{W,\alpha}$ on $W$ is nothing but the one described in Example \[BottDirac\]; and we know it is an essentially selfadjoint odd unbounded operator having compact resolvent with one-dimensional kernel. In the general case, decompose $W$ into one-dimensional subspaces. Then, $B_{W, \alpha}^2$ may be written as $B_{W_1,\alpha}^2\hat\otimes1\hat\otimes\cdots\hat\otimes1 + 1\hat\otimes B_{W_2,\alpha}^2\hat\otimes\cdots\hat\otimes1 + \cdots + 1\hat\otimes1\hat\otimes\cdots\hat\otimes B_{W_m,\alpha}^2$, where $m=\dim(W)$ and the $W_i$ are mutually orthogonal one-dimensional subspaces of $W$ for $i=1,2,\dots,m$. It is now clear that $B_{W,\alpha}^2$ is an essentially selfadjoint operator having compact resolvent with one-dimensional kernel; and so is $B_{W,\alpha}$ by Lemma \[lemselfad\]. 
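In the one-dimensional case the computation behind this can be made completely explicit (a sketch; we assume the Clifford relations $c(w)^2=1$, $\overline{c}(w)^2=-1$, $c(w)\overline{c}(w)+\overline{c}(w)c(w)=0$ from Example \[Clif\]). For $W={\mathbb{R}}w$ with $\|w\|=1$, squaring the operator $B_{W,\alpha}$ above and using $[\frac{\partial}{\partial x},x]=1$ gives $$\begin{aligned} B_{W,\alpha}^2=\Bigl(\alpha\overline{c}(w)\frac{\partial}{\partial x}+c(w)x\Bigr)^2=-\alpha^2\frac{\partial^2}{\partial x^2}+x^2+\alpha\,\overline{c}(w)c(w).\end{aligned}$$ The first two terms form a harmonic oscillator with spectrum $\{\alpha(2n+1): n\geq0\}$, while $\overline{c}(w)c(w)$ acts as $-1$ on the even part and $+1$ on the odd part of $\Lambda^\ast(W)\otimes{\mathbb{C}}$; hence $B_{W,\alpha}^2$ has spectrum $\{0, 2\alpha, 4\alpha, \dots\}$ and its kernel is spanned by the even Gaussian $\exp(-x^2/2\alpha)$, the ground state of the oscillator.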
We note here that the kernel of $B_{W,\alpha}$ (hence of $B_{W,\alpha}^2$) is spanned by a normalized vector $\xi_{W,\alpha}(x)=(\alpha\pi)^{-\frac{m}{4}}\exp(-\frac{||x||^2}{2\alpha}).$ (The Hilbert space $H_\alpha({\mathcal{H}})$ and the Bott-Dirac operator $B_\alpha$ on ${\mathcal{H}}$) (cf. [@HigKas2] Definition 2.8.) We still implicitly fix $\alpha>0$. The graded Hilbert space $H_\alpha({\mathcal{H}})$ is defined as an inductive limit of graded Hilbert spaces $H(V)$ where $V$ runs through all finite dimensional affine subspaces of ${\mathcal{H}}$: given finite dimensional subspaces $V$ and $ V'=V\oplus W$, we define an inclusion $H(V) \xhookrightarrow{} H(V')=H(V)\otimes H(W)$ by $\xi\mapsto\xi\otimes\xi_{W,\alpha}$. When a group $G$ acts on ${\mathcal{H}}$ by affine isometries, it naturally acts on $H_\alpha({\mathcal{H}})$. For finite dimensional linear subspaces $W\subseteq W'$ of ${\mathcal{H}}$, we have the following commutative diagram. $$\begin{aligned} \label{diag:bottdirac} \xymatrix{ s(W) \ar[d]^-{B_{W,\alpha}} \ar[r] & s(W') \ar[d]^-{B_{W',\alpha}} \\ s(W) \ar[r] & s(W')\\ }\end{aligned}$$ The Bott-Dirac operator on ${\mathcal{H}}$ is an odd symmetric unbounded operator $B_\alpha$ defined on a subspace $\displaystyle s_\alpha({\mathcal{H}})=\lim_{\substack{W\subseteq{\mathcal{H}}\\ \text{$W$:f.n.dim.\,linear}}} s(W)$ of $\displaystyle H_\alpha({\mathcal{H}})\cong\lim_{\substack{W\subseteq{\mathcal{H}}\\ \text{$W$:f.n.dim.\,linear}}} H(W)$ which is defined as an inductive limit of the Bott-Dirac operator $B_{W,\alpha}$. We note here that we have a continuous field $(H_\alpha({\mathcal{H}}))_{\alpha\in(0, \infty)}$ of (graded) Hilbert spaces over the interval $(0, \infty)$: basic sections are defined by vectors in $H(V)$ for any finite dimensional affine subspace $V$ of ${\mathcal{H}}$. When $G$ acts on ${\mathcal{H}}$ by affine isometries, this becomes a continuous field of (graded) $G$-Hilbert spaces naturally. (cf. [@HigKas2] Definition 2.8.) 
The Bott-Dirac operator $B_{\alpha}$ is an essentially selfadjoint odd unbounded operator having one-dimensional kernel. When a group $G$ acts on ${\mathcal{H}}$ by linear isometries, it is $G$-equivariant. The first part is similar to the finite dimensional case. Taking any decomposition ${\mathcal{H}}=\displaystyle\bigoplus_{i=1}^\infty W_i$ with finite dimensional subspaces $W_i$ for $i=1,2,\dots$, we may write $B_\alpha$ as an infinite sum $B_{W_1, \alpha}\hat\otimes1\hat\otimes1\hat\otimes\cdots+1\hat\otimes B_{W_2, \alpha}\hat\otimes1\hat\otimes\cdots+\cdots$. Then, we have $B_\alpha^2=B_{W_1, \alpha}^2\hat\otimes1\hat\otimes1\hat\otimes\cdots+1\hat\otimes B_{W_2, \alpha}^2\hat\otimes1\hat\otimes\cdots+\cdots$. It is clear that $B_\alpha^2$ is an essentially selfadjoint (diagonalizable) operator having one-dimensional kernel, and so is $B_\alpha$ by Lemma \[lemselfad\]. That it is $G$-equivariant follows from the facts that, for a finite dimensional subspace $W$, $B_{W, \alpha}$ is well defined by the expression independently of the choice of basis for $W$, and that the diagram commutes for any $W$ and $W'$. Unfortunately, the Bott-Dirac operator $B_\alpha$ does not have compact resolvent. In [@HigKas2], N. Higson and G. Kasparov introduced a non-commutative functional calculus for the operator $B_\alpha$ in order to perturb $B_\alpha$ so as to make it have compact resolvent in a very tractable way.\ For any (not necessarily bounded, but densely defined) operator $h$ on the (real) Hilbert space ${\mathcal{H}}$, we define an (unbounded) operator $h(B_\alpha)$ defined on a subspace $\displaystyle s_\alpha({\mathcal{H}}_{h})=\lim_{\substack{W\subseteq{\mathcal{H}}_h \\ \text{$W$:f.n.dim.\,linear}}} s(W)$ of $s_\alpha({\mathcal{H}})$, where we denote the domain of $h$ by ${\mathcal{H}}_h$. 
For any finite dimensional linear subspace $W$ of ${\mathcal{H}}_h$ and $V\supseteq W+hW$, we denote by $s(W,V)$ the space of Schwartz functions from $W$ to $\Lambda^\ast(V)\otimes{\mathbb{C}}$, naturally regarded as a subspace of $L^2(W,\Lambda^\ast(V)\otimes{\mathbb{C}})\subseteq H_\alpha({\mathcal{H}})$ (note $s(W)=s(W,W)$); and we define an (unbounded) operator $h(B_{W,\alpha})$ from $s(W)$ to $s(W,V)\subseteq s({\mathcal{H}})$ by the following formula: $$\begin{aligned} h(B_{W,\alpha}) &= \sum^m_{j=1}\alpha \overline{c}(h(w_j))\frac{\partial}{\partial x_j}+c(h(w_j))x_j \label{def:cal}\end{aligned}$$ Here, $m=\dim W$ and the $x_j$ and $w_j$ are as before. This is again defined independently of the choice of basis for $W$.\ Now, we consider whether, for any finite dimensional linear subspaces $W\subseteq W'$ and $V\supseteq W'+hW'$, the following diagram commutes. $$\begin{aligned} \xymatrix{ s(W) \ar[d]^-{h(B_{W,\alpha})} \ar[r] & s(W') \ar[d]^-{h(B_{W',\alpha})}\\ s(W,V) \ar[r] & s(W',V)\\ } \label{diagram:cal}\end{aligned}$$ \[prop:cal1\] Let $W''=W'\ominus W$. The diagram commutes if and only if $hW''$ is orthogonal to $W$. In particular, when $h$ is symmetric, the diagram commutes if and only if $hW$ is orthogonal to $W''$. We fix orthonormal bases for $W$ and for $W''$ and denote the corresponding coordinate functions and dual bases by $x_j, w_j \, (j=1,\dots,m)$ and by $x''_k, w''_k \, (k=1,\dots,l)$. We first note that vectors of the form $\xi\otimes (w_{j_1}\wedge\dots\wedge w_{j_s})$ span $s(W)$, where $\xi$ is a (complex valued) Schwartz function on $W$. Therefore, the diagram is commutative if and only if $$\begin{aligned} h(B_{W,\alpha})(\xi\otimes (w_{j_1}\wedge\dots\wedge w_{j_s}))\otimes\xi_{W'',\alpha}=h(B_{W',\alpha})(\xi\otimes\xi_{W'',\alpha}\otimes (w_{j_1}\wedge\dots\wedge w_{j_s})) \label{cal1}\end{aligned}$$ holds for any $\xi$ and $j_1,\dots,j_s$. 
By considering the natural decomposition $h(B_{W',\alpha})=h(B_{W,\alpha})+h(B_{W'',\alpha})$, we see that the equation holds if and only if $$\begin{aligned} h(B_{W'',\alpha})(\xi\otimes\xi_{W'',\alpha}\otimes (w_{j_1}\wedge\dots\wedge w_{j_s}))=0 \label{cal2}\end{aligned}$$ By the further decomposition $$\begin{aligned} h(B_{W'',\alpha})=\sum^l_{k=1}\operatorname{ext}(h(w''_k))(\alpha\frac{\partial}{\partial x''_k}+x''_k)+\sum^l_{k=1}\operatorname{int}(h(w''_k))(-\alpha\frac{\partial}{\partial x''_k}+x''_k)\end{aligned}$$ we see that the equation holds if and only if $$\begin{aligned} \sum^l_{k=1}\operatorname{int}(h(w''_k))(-\alpha\frac{\partial}{\partial x''_k}+x''_k)(\xi\otimes\xi_{W'',\alpha}\otimes (w_{j_1}\wedge\dots\wedge w_{j_s}))=0 \label{cal3}\end{aligned}$$ since $\xi_{W'',\alpha}=(\alpha\pi)^{-\frac{l}{4}}\exp(-\frac{||x''||^2}{2\alpha})$ is in the kernel of each of the differential operators $\alpha\frac{\partial}{\partial x''_k}+x''_k$. In sum, by further calculating the equation above, the diagram commutes if and only if, for any Schwartz function $\xi$ on $W$ and any $j_1,\dots,j_s \in \{1,\dots,m\}$, the following equation holds. 
$$\begin{aligned} \sum^l_{k=1}\xi\otimes2x''_k\xi_{W'',\alpha}\otimes\operatorname{int}(h(w''_k))(w_{j_1}\wedge\dots\wedge w_{j_s})=0 \label{cal4}\end{aligned}$$ Now, note that the $2x''_k\xi_{W'',\alpha}$ are mutually orthogonal vectors in $L^2(W'')$; hence we conclude that the diagram is commutative if and only if $$\begin{aligned} &\operatorname{int}(h(w''_k))(w_{j_1}\wedge\dots\wedge w_{j_s})=0 \,\,\, \text{for any $k$ and $j_1,\dots,j_s$}\\ \iff& \text{$hW''$ is orthogonal to $W$} \end{aligned}$$ For any finite dimensional linear subspaces $W\subseteq W'\subseteq W''$ and $V\supseteq W''+hW''$, let us now consider the following diagram, slightly different from the previous one: $$\begin{aligned} \xymatrix{ s(W) \ar[r] & s(W') \ar[d]^-{h(B_{W',\alpha})} \ar[r] & s(W'') \ar[d]^-{h(B_{W'',\alpha})}\\ & s(W',V) \ar[r] & s(W'',V)\\ } \label{diagram:cal2}\end{aligned}$$ Let us say that the diagram eventually commutes if there exists a finite dimensional subspace $W'\supseteq W$ of ${\mathcal{H}}_h$ such that for any finite dimensional subspace $W''\supseteq W'$ of ${\mathcal{H}}_h$, the diagram commutes. The following is an immediate corollary. Let $W'^\perp={\mathcal{H}}_h\ominus W'$. The diagram eventually commutes if and only if there exists a finite dimensional subspace $W'$ of ${\mathcal{H}}_h$ such that $hW'^\perp$ is orthogonal to $W$. In particular, when $h$ is symmetric, the diagram eventually commutes if and only if $hW\subseteq {\mathcal{H}}_h$. The above corollary says that when trying to define an inductive limit of the $h(B_{W,\alpha})$, one needs more care than merely checking whether the diagram eventually commutes. This point is not mentioned in the paper [@HigKas2]. 
Fortunately, as the next proposition and its corollary say, even though for a finite dimensional subspace $W$ of ${\mathcal{H}}_h$ the diagram may not eventually commute, if $h$ has its adjoint defined on ${\mathcal{H}}_h$ then it always asymptotically commutes in the following sense: we say that the diagram asymptotically commutes if for any vector in $s(W)$ and for any $\epsilon>0$, there exists a finite dimensional subspace $W'\supseteq W$ of ${\mathcal{H}}_h$ such that for any finite dimensional subspace $W''\supseteq W'$ of ${\mathcal{H}}_h$, the difference between the two vectors obtained by the two ways around the diagram is within $\epsilon$ in the norm of $s_\alpha({\mathcal{H}})\subseteq H_\alpha({\mathcal{H}})$. The diagram asymptotically commutes if and only if the adjoint of $h$ is defined on $W$. Let $w_1,\dots,w_m$ still denote a fixed orthonormal basis for $W$, with its corresponding coordinates. We perform a calculation similar to that in Proposition \[prop:cal1\]. We see that the diagram asymptotically commutes if and only if for any $j_1,\dots, j_s$ and for any $\epsilon>0$, there exists a finite dimensional subspace $W'\supseteq W$ of ${\mathcal{H}}_h$, such that for any finite dimensional subspace $W''\supseteq W'$ of ${\mathcal{H}}_h$, $$\begin{aligned} \sum^l_{k=1}||\operatorname{int}(h(w''_k))(w_{j_1}\wedge\dots\wedge w_{j_s})||^2<\epsilon\end{aligned}$$ where $w''_1,\dots,w''_l$ is some (arbitrary) orthonormal basis for $W''\ominus W'$ and the norm is computed in $L^2(\Lambda^\ast(W)\otimes{\mathbb{C}})$. Considering each one-dimensional subspace of $L^2(\Lambda^\ast(W)\otimes{\mathbb{C}})$, we see that the diagram asymptotically commutes if and only if for any $\epsilon>0$ there exists $W'\supseteq W$ such that for any $W''\supseteq W'$, $$\begin{aligned} \sum^l_{k=1}|\langle w_j, hw''_k \rangle|^2<\epsilon\end{aligned}$$ for any $j=1,\dots,m$. It is now clear that this is equivalent to the adjoint of $h$ being defined on $W$. 
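The final equivalence can be spelled out (a sketch): whenever $w_j$ lies in the domain of the adjoint, $\langle w_j, hw''_k\rangle=\langle h^\ast w_j, w''_k\rangle$, so that $$\begin{aligned} \sum^l_{k=1}|\langle w_j, hw''_k \rangle|^2=\sum^l_{k=1}|\langle h^\ast w_j, w''_k \rangle|^2=\|P_{W''\ominus W'}\,h^\ast w_j\|^2\leq\|(1-P_{W'})h^\ast w_j\|^2,\end{aligned}$$ where $P_{W'}$ and $P_{W''\ominus W'}$ denote the orthogonal projections onto the indicated subspaces; this can be made smaller than $\epsilon$ by choosing $W'\subseteq{\mathcal{H}}_h$ containing a sufficiently good finite dimensional approximation of $h^\ast w_j$ (possible since ${\mathcal{H}}_h$ is dense). Conversely, if the sums stay small as $W''$ grows, the functional $v\mapsto\langle w_j, hv\rangle$ is bounded on ${\mathcal{H}}_h$, which is exactly the statement that $w_j$ lies in the domain of $h^\ast$.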
\[cor:cal\] The diagram asymptotically commutes for any finite dimensional subspace $W$ of ${\mathcal{H}}_h$ if and only if $h$ has its adjoint defined on ${\mathcal{H}}_h$. In particular, when $h$ is symmetric, the diagram always asymptotically commutes. \[dfn:cal\](a fixed non-commutative functional calculus) (cf. [@HigKas2] Definition 3.5.) Let $h$ be a densely defined operator on ${\mathcal{H}}$ whose adjoint is defined on the domain ${\mathcal{H}}_h$ of $h$. We define a densely defined operator $h(B_\alpha)$ on $H_\alpha({\mathcal{H}})$, defined on its subspace $\displaystyle s_\alpha({\mathcal{H}}_h)=\lim_{\substack{W\subseteq{\mathcal{H}}_h \\ \text{$W$:f.n.dim.\,linear}}} s(W)$, by the following: $$\begin{aligned} h(B_\alpha)(\xi):= \displaystyle\lim_{\substack{W\subseteq W' \subseteq {\mathcal{H}}_h \\ \text{$W'$: f.n. dim. \,linear}}} h(B_{W',\alpha})(\xi\otimes\xi_{W'\ominus W,\alpha})\end{aligned}$$ for a finite dimensional subspace $W$ of ${\mathcal{H}}_h$ and $\xi$ in $s(W)$. The limit is taken in the Hilbert space $H_\alpha({\mathcal{H}})$. This is well-defined thanks to Corollary \[cor:cal\]. We note that for diagonalizable operators $h$, our fixed non-commutative functional calculus is essentially the same as the one defined in the paper [@HigKas2]. Since the arguments following the definition of the non-commutative functional calculus in [@HigKas2] are fundamentally about diagonalizable operators, they remain valid without any change. Hence, we state here the important properties of the non-commutative functional calculus without proof; the proofs are given in [@HigKas2]. \[property\] The non-commutative functional calculus of Definition \[dfn:cal\] has the following properties. (cf. [@HigKas2] Section 3.) 
- For any $h$ satisfying the assumption of Definition \[dfn:cal\], $h(B_\alpha)$ is a symmetric operator defined on $s_\alpha({\mathcal{H}}_h)$; - The assignment $h\mapsto h(B_\alpha)$ is “${\mathbb{R}}$-linear” (on the domain where the sum makes sense); - if $h$ is diagonalizable and $h=\sum^\infty_{k=1}\lambda_kP_{W_k}$, then $h(B_\alpha)=\sum^\infty_{k=1}\lambda_kB_{W_k,\alpha}$; hence $h(B_\alpha)$ is diagonalizable and, in particular, essentially selfadjoint; if $h$ has compact resolvent, so does $h(B_\alpha)$; - if $h$ is diagonalizable and $h^2\geq1$, then $||h(B_\alpha)\xi||\geq||B_\alpha\xi||$ for any $\xi$ in $s_\alpha({\mathcal{H}}_h)$; hence the selfadjoint domain of $h(B_\alpha)$ is contained in that of $B_\alpha$, and this inequality extends to the selfadjoint domain of $h(B_\alpha)$; - if $h$ is a bounded operator, $||h(B_\alpha)\xi||\leq||h||\,||B_\alpha\xi||$ for any $\xi$ in $s_\alpha({\mathcal{H}})=s_\alpha({\mathcal{H}}_h)$; - if $h_1, h_2$ are positive, diagonalizable operators which differ by a bounded operator (hence have a common domain ${\mathcal{H}}_h$), and if $h_1^2,h_2^2\geq1$, then $||h_1(B_\alpha)\xi-h_2(B_\alpha)\xi||\leq||h_1-h_2||\,||B_\alpha\xi||$ for any $\xi$ in $s_\alpha({\mathcal{H}}_h)$; and this inequality extends to the selfadjoint domain of $h_1(B_\alpha)$ or of $h_2(B_\alpha)$; - For two positive, diagonalizable operators $h_1,h_2$ having compact resolvent which differ by a bounded operator, if we set $B_{\alpha,1,t}=(1+th_1)(B_\alpha), B_{\alpha, 2, t}=(1+th_2)(B_\alpha)$ for $t>0$, we have for any $f$ in $C_0({\mathbb{R}})$, $$\begin{aligned} \displaystyle \lim_{t\to0}\sup_{s>0,\alpha>0}||f(sB_{\alpha,1,t})-f(sB_{\alpha,2,t})||=0 \label{eq:perturb}\end{aligned}$$ - When a group $G$ acts on ${\mathcal{H}}$ by linear isometries and if $h$ is a positive, diagonalizable operator having compact resolvent whose domain is $G$-invariant and if $g(h)-h$ is bounded for any $g$ in $G$, we set as above $B_{\alpha,t}=(1+th)(B_\alpha)$. 
Then we have for any $f$ in $C_0({\mathbb{R}})$ and for any $g$ in $G$, $$\begin{aligned} \displaystyle \lim_{t\to0}\sup_{s>0,\alpha>0}||f(sB_{\alpha,t})-g(f(sB_{\alpha,t}))||=0 \label{eq:perturb2}\end{aligned}$$ We remark here that the equation \eqref{eq:perturb} says that the “asymptotic behaviors” of the perturbations $(1+th_1)(B_\alpha)$ and $(1+th_2)(B_\alpha)$ of $B_\alpha$ (which has compact resolvent) are “close” in some strong sense when $h_1$ and $h_2$ differ by a bounded operator. Also, the equation \eqref{eq:perturb2} says that such perturbations can be made “asymptotically $G$-equivariant” in some strong sense when one finds a good operator $h$ and uses it for the perturbation. As is proven in [@HigKas2], this is always possible. \[adapt\](cf. [@HigKas2] Lemma 5.7.) Let $G$ be a second countable, locally compact group. Suppose $G$ acts on a real separable Hilbert space ${\mathcal{H}}$ by affine isometries. Write the action of $G$ as $(\pi, b)$. Then, there exists a positive, diagonalizable operator $h$ on ${\mathcal{H}}$ having compact resolvent whose domain is $G$-invariant and such that $\pi(g)h-h\pi(g)$ is bounded for any $g$ in $G$. We say such an operator $h$ is adapted to the action of $G$. We follow the argument as in [@HigKas2]. It actually proves the following: let $X$ and $Y$ be $\sigma$-compact subsets of $O({\mathcal{H}})$ and ${\mathcal{H}}$ respectively. Then there exists a positive, diagonalizable operator $h$ on ${\mathcal{H}}$ having compact resolvent whose domain contains $Y$, is $X$-invariant, and is such that $xh-hx$ is bounded for any $x\in X$. It is clear that this implies our stated claim. Now, we prove this. Write $X$ and $Y$ as increasing unions of compact sets $X_n$ and $Y_n$ ($n\geq1$) respectively. Take an increasing sequence of finite rank projections $(P_n)_{n\geq1}$ such that $\|(1-P_n)y\|\leq2^{-n}$ for $y$ in $Y_n$ and $\|(1-P_{n+1})xP_{n}\|\leq2^{-n}$ for $x\in X_n$. Set $P_0=P_{-1}=0$ and $Q_n=P_n-P_{n-1}$ for $n\geq0$.
Define a positive, diagonalizable operator $h$ by $h=\sum_{n\geq1}nQ_n$. It is clear that $h$ has compact resolvent and that the (selfadjoint) domain of $h$ contains $Y$. Take $x\in X$; we claim $xh-hx=\sum_{n\geq1}n(xQ_n-Q_nx)$ is a bounded operator. Write $xh-hx$ as the sum of $\sum_{n\geq1}n((1-P_{n+1})xQ_n-Q_nx(1-P_{n+1}))$, $\sum_{n\geq1}n(P_{n-2}xQ_n-Q_nxP_{n-2})$ and $\sum_{n\geq1}n(Q_{n+1}xQ_n+Q_{n-1}xQ_n-Q_nxQ_{n+1}-Q_nxQ_{n-1})$. It is now clear that each of these three sums is bounded. We now go to the definition of a continuous field of ${C^\ast\text{-algebras}}$ which is the key component of the construction of the $G$-extension of the ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ of the Hilbert space. We will consider a slightly more general situation than that of affine isometric actions of $G$. Let $G$ be a second countable, locally compact group, $Y$ a second countable, locally compact $G$-space, and ${\mathcal{H}}$ a separable real Hilbert space. A continuous field of affine isometric actions of $G$ on ${\mathcal{H}}$ (parametrized) over $Y$ is a pair $(\pi, (b_y)_{y\in Y})$ where $\pi\colon G\to O({\mathcal{H}})$ is a continuous group homomorphism from $G$ to $O({\mathcal{H}})$ and $(b_y)_{y\in Y}$ is a continuous map $(b_y) \colon G\times Y\to {\mathcal{H}}$ satisfying the (twisted) cocycle condition $b_y(gg')=b_y(g)+\pi(g)b_{g^{-1}y}(g')$ for any $g,g'$ in $G$ and $y$ in $Y$. Given such a field $(\pi, (b_y)_{y\in Y})$, the ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})(Y)=C_0(Y, A({\mathcal{H}}))$ naturally becomes a $G$-${C^\ast\text{-algebra}}$: we set, for any $g\in G$ and $f\colon Y\to A({\mathcal{H}})$, $g(f)(y)=(\pi(g), b_y(g))_\ast f(g^{-1}y)$ for $y\in Y$, where $(\pi(g), b_y(g))_\ast$ is the action on $A({\mathcal{H}})$ induced by the affine isometry $(\pi(g), b_y(g))$. Also, we have a continuous field $(C_0(Y, H_\alpha({\mathcal{H}})))_{\alpha\in (0, \infty)}$ of (graded) $G$-$C_0(Y)$-Hilbert modules.
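Returning to the proof of Lemma \[adapt\] for a moment, the boundedness of the three sums can be made quantitative. The following is only a sketch; for the terms involving $x$ on the right of a projection we assume, enlarging $X$ to $X\cup X^{\ast}$ if necessary, that the defining inequalities for $(P_n)$ hold for the adjoints $x^{\ast}$ as well, and we ignore the finitely many $n$ with $x\notin X_{n-2}$.

```latex
% Far-off-diagonal terms are summable: for x in X_{n-2},
\big\| n\,(1-P_{n+1})\,x\,Q_n \big\|
  \leq n\,\big\|(1-P_{n+1})\,x\,P_n\big\| \leq n\,2^{-n},
\qquad
\big\| n\,Q_n\,x\,P_{n-2} \big\|
  \leq n\,\big\|(1-P_{n-1})\,x\,P_{n-2}\big\| \leq n\,2^{-(n-2)}.
% In the near-diagonal sum the coefficients of each off-diagonal block
% telescope to +1 or -1:
\sum_{n\geq1} n\big(Q_{n+1}xQ_n + Q_{n-1}xQ_n - Q_nxQ_{n+1} - Q_nxQ_{n-1}\big)
  = \sum_{m\geq1} \big(Q_{m-1}\,x\,Q_m - Q_m\,x\,Q_{m-1}\big).
```

The first two series therefore converge in norm, while the last operator has mutually orthogonal blocks and hence norm at most $2\|x\|=2$.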
Note, we may also allow the linear part $\pi$ to vary along $Y$, but we will stick to the above simple case. For example, for any affine isometric action $(\pi, b)$ of $G$ on ${\mathcal{H}}$, taking $Y$ to be the (trivial) $G$-space $[0, 1]$, we have a continuous field $(\pi, (b_y)_{y\in[0,1]})$ of affine isometric actions of $G$ on ${\mathcal{H}}$ over $[0, 1]$ with $b_y(g)=yb(g)$, which gives us a homotopy between the affine isometric action $(\pi, b)$ and the linear isometric action $(\pi, 0)$. More generally, for any continuous field $(\pi, (b_y)_{y\in Y})$ of affine isometric actions of $G$ on ${\mathcal{H}}$ over $Y$, we have a homotopy between $(\pi, (b_y)_{y\in Y})$ and $(\pi, (0)_{y\in Y})$. In the following discussion of this chapter, we fix one continuous field $(\pi, (b_y)_{y\in Y})$ of affine isometric actions of a second countable, locally compact group $G$ on a separable real Hilbert space ${\mathcal{H}}$ over a second countable, locally compact $G$-space $Y$. (continuous field $(A_\alpha({\mathcal{H}})(Y))_{\alpha\in[0,\infty)}$) (cf. [@HigKas2] Section 5.) We define a continuous field $(A_\alpha({\mathcal{H}})(Y))_{\alpha\in[0,\infty)}$ of $G$-${C^\ast\text{-algebras}}$ with fibers $A_0({\mathcal{H}})(Y)=A({\mathcal{H}})(Y),\,\, A_{\alpha}({\mathcal{H}})(Y)= {\mathcal{S}}\hat\otimes{\mathcal{K}}(H_\alpha({\mathcal{H}}))(Y)$ for $\alpha$ in $(0,\infty)$.
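As a quick check, the homotopy $(b_y)_{y\in[0,1]}$ with $b_y(g)=yb(g)$ considered above does satisfy the (twisted) cocycle condition: since the $G$-action on $Y=[0,1]$ is trivial we have $g^{-1}y=y$, so the ordinary cocycle identity $b(gg')=b(g)+\pi(g)b(g')$ for $(\pi, b)$, together with the linearity of $\pi(g)$, gives

```latex
b_y(gg') = y\,b(gg') = y\big(b(g) + \pi(g)\,b(g')\big)
         = b_y(g) + \pi(g)\,b_{g^{-1}y}(g').
```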
For any finite dimensional affine subspace $V$ of ${\mathcal{H}}$, a continuous field $(C_\alpha(V))_{\alpha\in[0,\infty)}$ of graded ${C^\ast\text{-algebras}}$ with fibers $C_0(V)=C(V)=C_0(V\times V_0)\hat\otimes L(V), C_{\alpha}(V)={\mathcal{K}}(H(V))={\mathcal{K}}(L^2(V))\hat\otimes L(V)$ for $\alpha$ in $(0,\infty)$ is defined as the (graded) tensor product of the (trivially graded) continuous field $(C^\ast_\alpha(V_0,C_0(V)))_{\alpha\in[0,\infty)}$ by the graded ${C^\ast\text{-algebra}}$ $L(V)$: the continuous field $(C^\ast_\alpha(V_0,C_0(V)))_{\alpha\in[0,\infty)}$ is obtained as a (reduced) crossed product of the “constant” field $(C_0(V))_{\alpha\in[0,\infty)}$ by the additive group $V_0$ whose action on the fiber $C_0(V)$ at $\alpha$ is induced from the translation action of $V_0$ on $V$ defined by $v_0\cdot v=v+\alpha v_0$. A continuous field $(A_\alpha(V)(Y))_{\alpha\in[0,\infty)}$ of ${C^\ast\text{-algebras}}$ with fibers $A_\alpha(V)(Y)={\mathcal{S}}\hat\otimes C_\alpha(V)(Y)$ for $\alpha$ in $[0,\infty)$ is obtained as the graded tensor product of the above continuous field $(C_\alpha(V))_{\alpha\in[0,\infty)}$ by the graded ${C^\ast\text{-algebra}}$ ${\mathcal{S}}$ and by an (ungraded) ${C^\ast\text{-algebra}}$ $C_0(Y)$. With these in mind, continuous sections of $(A_\alpha({\mathcal{H}})(Y))_{\alpha\in[0,\infty)}$ are defined as follows. Fix a positive, selfadjoint operator $h$ on ${\mathcal{H}}$ which has compact resolvent and is adapted to the “actions” $(\pi, b_y)$ for all $y$ in $Y$: this is possible; see the proof of Lemma \[adapt\]. Denote as before the domain of $h$ by ${\mathcal{H}}_h$.
For any finite dimensional affine subspace $V$ of ${\mathcal{H}}_h$, any continuous section $(T_\alpha)_{\alpha\in[0,\infty)}$ of the continuous field $(C_\alpha(V)(Y))_{\alpha\in[0,\infty)}$ and any $f$ in ${\mathcal{S}}$, a basic section of $(A_\alpha({\mathcal{H}})(Y))_{\alpha\in[0,\infty)}=(A_\alpha(V^\perp)\hat\otimes C_\alpha(V)(Y))_{\alpha\in[0,\infty)}$ associated to $(T_\alpha)_{\alpha\in[0,\infty)}$ and $f$ is defined as $f(X\hat\otimes1+1\hat\otimes {\mathcal{B}}_{V^\perp})\hat\otimes T_0$ at $\alpha=0$ and $f(X\hat\otimes1+1\hat\otimes (1+\alpha h_V)(B_{V^\perp,\alpha}))\hat\otimes T_\alpha$ at $\alpha>0$. Here $V^\perp={\mathcal{H}}\ominus V_0$; ${\mathcal{B}}_{V^\perp}$ and $B_{V^\perp,\alpha}$ are the Bott operator on $A(V^{\perp})$ and the Bott-Dirac operator on $V^\perp$ respectively; and $h_V$ is the compression of $h$ to $V^\perp$. A section of $(A_\alpha({\mathcal{H}})(Y))_{\alpha\in[0,\infty)}$ is defined to be continuous if it is a uniform limit of basic sections over compact subsets of $[0,\infty)$. We denote by $\mathcal{F}_h$ the ${C^\ast\text{-algebra}}$ of the continuous sections of $(A_\alpha({\mathcal{H}})(Y))_{\alpha\in[0,\infty)}$ which vanish at infinity. On the one hand, the evaluation of the section algebra $\mathcal{F}_h$ at $\alpha=0$ gives a surjective homomorphism from $\mathcal{F}_h$ onto $A({\mathcal{H}})(Y)$. On the other hand, the ${C^\ast\text{-algebra}}$ of the continuous sections of $(A_\alpha({\mathcal{H}})(Y))_{\alpha\in[0,\infty)}$ which vanish at $0$ and at infinity, i.e. the kernel of the evaluation of $\mathcal{F}_h$ at $\alpha=0$, is naturally isomorphic to the ${C^\ast\text{-algebra}}$ ${\mathcal{S}}\hat\otimes{\mathcal{K}}({\mathcal{E}})$ where ${\mathcal{E}}$ is the graded Hilbert $\Sigma(Y)$-module of the continuous sections of $(H_\alpha({\mathcal{H}})(Y))_{\alpha\in(0,\infty)}$ which vanish at infinity.
Hence, we have the following extension of ${C^\ast\text{-algebras}}$: $$\label{Gext} \xymatrix{ 0 \ar[r] & {\mathcal{S}}\hat\otimes{\mathcal{K}}({\mathcal{E}}) \ar[r] & \mathcal{F}_h \ar[r] & A({\mathcal{H}})(Y) \ar[r] & 0 }$$ As is proven in [@HigKas2], the ${C^\ast\text{-algebra}}$ $\mathcal{F}_h$ naturally becomes a $G$-${C^\ast\text{-algebra}}$; though we are in a slightly more general situation, the proof goes through verbatim. Hence, the extension becomes a $G$-extension of ${C^\ast\text{-algebras}}$. We have a natural isomorphism ${\mathcal{S}}\hat\otimes{\mathcal{K}}({\mathcal{E}})\cong S\otimes{\mathcal{K}}({\mathcal{E}})$; and this is even an isomorphism of $G$-${C^\ast\text{-algebras}}$. Hence, we actually have the following extension ($G$-extension): $$\label{Gext2} \xymatrix{ 0 \ar[r] & S\otimes{\mathcal{K}}({\mathcal{E}}) \ar[r] & \mathcal{F}_h \ar[r] & A({\mathcal{H}})(Y) \ar[r] & 0 }$$ We now come to the definition of two important asymptotic morphisms. As in [@HigKas2] (Definition 6.4.), we define for separable $G$-${C^\ast\text{-algebras}}$ $A,B$ the abelian semigroup $\{A, B\}_G$ as the set of homotopy equivalence classes of asymptotic morphisms from $A$ to ${\mathcal{K}}({\mathcal{E}})$ for a countably generated Hilbert $G$-$B$-module ${\mathcal{E}}$. Here, a homotopy means an asymptotic morphism from $A$ to ${\mathcal{K}}({\mathcal{E}}')$ for a countably generated Hilbert $G$-$B[0,1]$-module ${\mathcal{E}}'$. The addition law for $\{A, B\}_G$ is induced from the direct sum operation for Hilbert $G$-$B$-modules. As in Definition \[dfn:asym\], the semigroup $\{\Sigma A, B\}_G$ is an abelian group thanks to the presence of $\Sigma$. (cf. [@HigKas2] Definition 6.6.) The dual Dirac element $\beta$ is the class in the group $\{S(Y), A({\mathcal{H}})(Y)\}_G$ of the $G$-equivariant asymptotic morphism $(\phi_t)\colon S(Y)\to\to A({\mathcal{H}})(Y)$ defined by $\phi_t(f\otimes f'):=f(t^{-1}{\mathcal{B}})\otimes f'$ for $t$ in $[1,\infty)$, $f$ in $S$ and $f'$ in $C_0(Y)$. (cf.
[@HigKas2] Definition 6.7.) The Dirac element $\alpha$ is the class in the group $\{\Sigma A({\mathcal{H}})(Y),S\Sigma(Y)\}_G$ defined by a central invariant of the extension \eqref{Gext2} (recall that this asymptotic morphism is defined using some asymptotically equivariant continuous approximate unit of $S\otimes{\mathcal{K}}({\mathcal{E}})$, but its class is independent of the choices). We also call the asymptotic morphisms defining the dual Dirac element $\beta$ and the Dirac element $\alpha$ the dual Dirac element and the Dirac element, respectively, and even write them as $\alpha$ or $\beta$. \[theorem:comp\](cf. [@HigKas2] Theorem 6.10.) The composition of the $G$-equivariant asymptotic morphisms $\Sigma\beta\colon\Sigma S(Y)\rightarrow\rightarrow \Sigma A({\mathcal{H}})(Y)$ and $\alpha\colon\Sigma A({\mathcal{H}})(Y)\rightarrow\rightarrow S{\mathcal{K}}({\mathcal{E}})$ represents the same class in the group $\{\Sigma S(Y), S\Sigma(Y)\}_G$ as the flip isomorphism $\Sigma S\to S\Sigma$ tensored with the identity $\operatorname{id}_{C_0(Y)}$. A homotopy of continuous fields of affine isometric actions between $(\pi, (b_y)_{y\in Y})$ and $(\pi, (0)_{y \in Y})$ evidently produces a homotopy between the compositions of the dual Dirac elements and the Dirac elements corresponding to the two continuous fields of affine isometric actions. Hence, we may assume the affine part $(b_y)_{y\in Y}$ is $0$. In this case, $\phi_1\colon S(Y)\to A({\mathcal{H}})(Y)$ is a $G$-equivariant homomorphism which, viewed as an equivariant asymptotic morphism, is homotopic to $(\phi_t)$.
Therefore, by the naturality of central invariants, $\alpha\circ\Sigma\beta$ in $\{\Sigma S(Y), S\Sigma(Y)\}_G$ is represented by the central invariant of the following pullback $G$-extension: $$\begin{aligned} \label{pulled} \xymatrix{ 0 \ar[r] & S{\mathcal{K}}({\mathcal{E}}) \ar@{=}[d] \ar[r] & \mathcal{F}_{h,S} \ar[d] \ar[r] & S(Y) \ar[d]^{\phi_1} \ar[r] & 0 \\ 0 \ar[r] & S{\mathcal{K}}({\mathcal{E}}) \ar[r] & \mathcal{F}_{h} \ar[r] & A({\mathcal{H}})(Y) \ar[r] & 0 } \end{aligned}$$ Here, $\mathcal{F}_{h, S}$ is the $G$-${C^\ast}$-subalgebra of $\mathcal{F}_h$ consisting of continuous sections $(a_\alpha)_{\alpha \in [0, \infty)}$ of the continuous field $(A_\alpha({\mathcal{H}})(Y))_{\alpha \in [0, \infty)}$ vanishing at infinity and taking values in $S(Y)\subset A({\mathcal{H}})(Y)$ at $\alpha=0$. Modulo null sections, that is, modulo the elements in $S{\mathcal{K}}({\mathcal{E}})$, this algebra is generated by basic sections associated to continuous sections $(T_\alpha)_{\alpha \in [0, \infty)}$ of the constant field $(C_0(Y))_{\alpha \in [0, \infty)}$ vanishing at infinity and $f\in S$. The functional calculus $f\mapsto f(X\hat\otimes1+1\hat\otimes (1+\alpha h)(B_{\alpha}))$ decomposes into the identity $f\mapsto f$ and the other part, similarly to the decomposition explained in Chapter 2. Therefore, the central invariant associated to the extension is the sum of the central invariants associated to the following two $G$-extensions of $S(Y)$: $$\begin{aligned} \label{flipext} \xymatrix{ 0 \ar[r] & S(0,\infty)(Y) \ar[r] &S[0, \infty)(Y) \ar[r] & S(Y) \ar[r] & 0 } \end{aligned}$$ and $$\begin{aligned} \label{nullext} \xymatrix{ 0 \ar[r] & PS{\mathcal{K}}({\mathcal{E}})P \ar[r] & P\mathcal{F}_{h,S}P \ar[r] & S(Y) \ar[r] & 0 } \end{aligned}$$ Here, $P=(P_\alpha)$ denotes the (pointwise) orthogonal projection of the Hilbert space $H_\alpha({\mathcal{H}})$ onto the subspace orthogonal to the one dimensional kernel of the Bott-Dirac operator $B_\alpha$ of ${\mathcal{H}}$.
Therefore, it suffices to show that the central invariant associated to the extension \eqref{nullext} is $0$. As in [@HigKas2], we define a $G$-${C^\ast\text{-algebra}}$ $\mathcal{D}$. To produce this, we consider a continuous field $(D_\alpha)_{\alpha\in [0, \infty)}$ with fibers $D_\alpha=P_\alpha A_\alpha({\mathcal{H}})(Y)P_\alpha(0, 1]$ for $\alpha$ in $(0, \infty)$ and $D_0=S(Y)$. Continuous sections are generated by continuous sections of $(D_\alpha)_{\alpha \in (0, \infty)}$ vanishing at $0$ and infinity and by basic sections associated to continuous sections $(T_\alpha)_{\alpha \in [0, \infty)}$ of the constant field $(C_0(Y))_{\alpha \in [0, \infty)}$ and $f$ in $S$, which are in turn defined as the sections equal to $f\otimes T_0$ at $\alpha=0$ and to the function $s\mapsto P_\alpha f(X\hat\otimes1+1\hat\otimes (1+\alpha h)(s^{-1}B_{\alpha}))P_\alpha\otimes T_\alpha$ at $\alpha>0$. Thanks to the last property listed in Proposition \[property\], the ${C^\ast\text{-algebra}}$ $\mathcal{D}$ of continuous sections of $(D_\alpha)_{\alpha\in[0, \infty)}$ naturally becomes a $G$-${C^\ast\text{-algebra}}$. Moreover, we have the following diagram of $G$-extensions: $$\begin{aligned} \xymatrix{ 0 \ar[r] & PS{\mathcal{K}}({\mathcal{E}})P(0, 1] \ar[d] \ar[r] & \mathcal{D} \ar[d] \ar[r] & S(Y) \ar@{=}[d] \ar[r] & 0 \\ 0 \ar[r] & PS{\mathcal{K}}({\mathcal{E}})P \ar[r] & P\mathcal{F}_{h,S}P \ar[r] & S(Y) \ar[r] & 0 } \end{aligned}$$ Here, the vertical arrows are the (fiberwise) evaluation at $s=1$ of $C_0(0, 1]$. By the naturality of central invariants, we see that the central invariant of the $G$-extension \eqref{nullext} is $0$. As in [@HigKas2], we want to compute “the composition of asymptotic morphisms” $\Sigma\beta\colon\Sigma S(Y)\to\to \Sigma A({\mathcal{H}})(Y)$ and $\alpha\colon \Sigma A({\mathcal{H}})(Y)\to\to S{\mathcal{K}}({\mathcal{E}})$ in the other order to conclude that $A({\mathcal{H}})(Y)$ and $S(Y)$ are isomorphic in the equivariant $E$-Theory category $E^G$.
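Incidentally, the vanishing used at the last step can be traced to the fact that the ideal of the top extension is a cone, hence contractible. As a sketch (a standard argument, not taken verbatim from [@HigKas2]), writing its elements as functions $a$ of the variable $s\in(0,1]$ vanishing at $0$, the maps

```latex
c_r \colon P S{\mathcal{K}}({\mathcal{E}}) P\,(0,1] \longrightarrow P S{\mathcal{K}}({\mathcal{E}}) P\,(0,1],
\qquad (c_r a)(s) = a(rs), \quad r \in [0,1],
```

form a ($G$-equivariant) contraction with $c_1=\operatorname{id}$ and $c_0=0$, so the central invariant of the top extension, being an asymptotic morphism into a contractible ideal, represents the zero class.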
We consider another continuous field over $[0, \infty)$ with fibers $A({\mathcal{H}})\hat\otimes A_\alpha({\mathcal{H}})(Y)$ for $\alpha$ in $(0, \infty)$ and $A({\mathcal{H}}\times{\mathcal{H}})(Y)$ at $\alpha=0$. The continuous sections of this field are generated by continuous sections of the field $(A({\mathcal{H}})\hat\otimes A_\alpha({\mathcal{H}})(Y))_{\alpha \in (0, \infty)}$ which vanish at $0$ and infinity and by basic sections associated to $f$ in $S$, $T$ in $C(V)$ and a continuous section $(T_\alpha)_{\alpha \in [0, \infty)}$ of the field $(C_\alpha(V)(Y))_{\alpha \in [0, \infty)}$ which vanishes at infinity, for a finite dimensional subspace $V$ of ${\mathcal{H}}$; these are defined analogously to before. Denote by $\mathcal{F}$ the $G$-${C^\ast\text{-algebra}}$ of continuous sections of this field. Evaluation at $\alpha=0$ produces the following $G$-extension: $$\begin{aligned} \label{hillhill} \xymatrix{ 0 \ar[r] & {\mathcal{K}}({\mathcal{E}}') \ar[r] & \mathcal{F} \ar[r] & A({\mathcal{H}}\times{\mathcal{H}})(Y) \ar[r] & 0 }\end{aligned}$$ Here, ${\mathcal{E}}'$ is the Hilbert $G$-$A({\mathcal{H}})\Sigma(Y)$-module of continuous sections of $(A({\mathcal{H}})\hat\otimes H_\alpha({\mathcal{H}})(Y))_{\alpha\in(0,\infty)}$ which vanish at infinity. We denote a central invariant of this extension by $\zeta$.
If we consider the equivariant asymptotic morphism $(\phi'_t)\colon A({\mathcal{H}})(Y)\to\to A({\mathcal{H}}\times{\mathcal{H}})(Y)$ associated to the embedding of ${\mathcal{H}}$ into the second factor of ${\mathcal{H}}\times{\mathcal{H}}$, a similar argument to the one before shows that the composition $\zeta\circ\Sigma(\phi'_t)\colon \Sigma A({\mathcal{H}})(Y)\to\to {\mathcal{K}}({\mathcal{E}}')$ in the group $\{\Sigma A({\mathcal{H}})(Y), A({\mathcal{H}})\Sigma(Y)\}_G$ is the same as the one defined by the flip tensored with the identity $\operatorname{id}_{C_0(Y)}$: as explained in [@HigKas2], one uses Atiyah’s rotation trick, flipping the Hilbert space ${\mathcal{H}}\times{\mathcal{H}}$. Now, we return to the asymptotic morphism $\alpha\colon \Sigma A({\mathcal{H}})(Y)\to\to S\hat\otimes{\mathcal{K}}({\mathcal{E}})$. We can “compose” this with the equivariant asymptotic morphism $\beta_{p}\colon S\to\to A({\mathcal{H}})$ (the dual Dirac element for $Y=$ point) tensored with $\operatorname{id}_{{\mathcal{K}}({\mathcal{E}})}$ to get an asymptotic morphism $(\beta_{p}\hat\otimes\operatorname{id}_{{\mathcal{K}}({\mathcal{E}})})\circ\alpha\colon \Sigma A({\mathcal{H}})(Y)\to\to{\mathcal{K}}({\mathcal{E}}')$: note that here we are using an isomorphism ${\mathcal{K}}({\mathcal{E}}')\cong A({\mathcal{H}})\hat\otimes{\mathcal{K}}({\mathcal{E}})$ of ${C^\ast\text{-algebras}}$, not of $G$-${C^\ast\text{-algebras}}$. It is easy to see that the two asymptotic morphisms $\zeta\circ\Sigma(\phi'_t)$ and $(\beta_{p}\hat\otimes\operatorname{id}_{{\mathcal{K}}({\mathcal{E}})})\circ\alpha$ are homotopic. Hence, we have the following result. (cf. [@HigKas2] Theorem 6.11.)
The dual Dirac element $\beta\colon S(Y) \to\to A({\mathcal{H}})(Y)$ defines an invertible morphism $$\operatorname{id}_\Sigma\otimes\beta\otimes\operatorname{id}_{{\mathcal{K}}({\mathcal{H}}_G)}\colon \Sigma S(Y){\mathcal{K}}({\mathcal{H}}_G)\to\to\Sigma A({\mathcal{H}})(Y){\mathcal{K}}({\mathcal{H}}_G)$$ in $E^G(S(Y), A({\mathcal{H}})(Y))$. Its inverse is $\alpha\otimes\operatorname{id}_{{\mathcal{K}}({\mathcal{H}}_G)}\colon\Sigma A({\mathcal{H}})(Y){\mathcal{K}}({\mathcal{H}}_G)\to\to S{\mathcal{K}}({\mathcal{E}}){\mathcal{K}}({\mathcal{H}}_G)\cong \Sigma S(Y){\mathcal{K}}({\mathcal{H}}_G)$, defined by the Dirac element $\alpha\colon \Sigma A({\mathcal{H}})(Y)\to\to S{\mathcal{K}}({\mathcal{E}})$. In particular, $S(Y)$ and $A({\mathcal{H}})(Y)$ are isomorphic in the equivariant $E$-Theory category $E^G$.

Technical Part of the Higson-Kasparov Theorem
=============================================

In this chapter, we are going to discuss the technical part of the Higson-Kasparov Theorem. Throughout this chapter, we make the additional assumption that the action of the group $G$ on the Hilbert space ${\mathcal{H}}$ is metrically proper. Hence, the ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ of the Hilbert space is a proper $G$-${C^\ast\text{-algebra}}$. What we will be concerned with is how to lift the Dirac element $\alpha$ in the group $\{\Sigma A({\mathcal{H}}), S\Sigma\}_G$ to the group $KK^G_1(A({\mathcal{H}}), S\Sigma)$. In view of Proposition \[KasSka\], there is one obvious candidate in the group $KK^G_1(A({\mathcal{H}}), S\Sigma)$. In this chapter, ${\mathcal{K}}$ denotes the $G$-${C^\ast\text{-algebra}}$ ${\mathcal{K}}({\mathcal{H}}_G)$ of compact operators on the standard $G$-Hilbert space ${\mathcal{H}}_G=L^2(G)\otimes l^2$. N. Higson and G.
Kasparov defined very natural group homomorphisms $\eta$ from $KK^G_0(A, B)$ to $\{\Sigma^2A, B\}_G$ and from $KK^G_1(A, B)$ to $\{\Sigma A, B\}_G$ for any separable $G$-${C^\ast\text{-algebras}}$ $A$ and $B$ (we use the same notation $\eta$ for these two homomorphisms). Also, they defined left inverses $\rho$ of the homomorphisms $\eta$ when $A={\mathbb{C}}$. We first recall the definition of the homomorphisms $\eta$ and $\rho$. \[eta\] (cf. [@HigKas2] Definition 7.2.) We define the homomorphism $\eta$ from $KK^G_0(A, B)$ to $\{\Sigma^2 A, B\}_G$ as follows. Let $x$ be an element in the group $KK^G_0(A, B)$. Suppose $x$ is represented by a cycle $({\mathcal{E}}, \phi, F)$; recall that ${\mathcal{E}}$ is a countably generated Hilbert $G$-$B$-module, $\phi$ is an equivariant $\ast$-homomorphism from $A$ to $B({\mathcal{E}})$, and $F$ is an operator in $B({\mathcal{E}})$ which is essentially unitary, essentially equivariant and essentially commuting with the elements of $A$. We obtain an equivariant $\ast$-homomorphism $\phi'$ from $\Sigma A$ to $Q({\mathcal{E}})$ defined by $\phi'\colon f\otimes a \mapsto f(F)\phi(a)$ for $f$ in $\Sigma\cong C_0(S^1-\{1\})$ and $a$ in $A$ (we omit the quotient map from $B({\mathcal{E}})$ to $Q({\mathcal{E}})$ from the notation). We define $\eta(x)$ to be the element in the group $\{\Sigma^2A, B\}_G$ represented by a central invariant for the following pullback extension of $\Sigma A$ by ${\mathcal{K}}({\mathcal{E}})$ defined by $\phi'$: $$\begin{aligned} \xymatrix{ 0 \ar[r] & {\mathcal{K}}({\mathcal{E}}) \ar[d]\ar[r] & E_{\phi'} \ar[d]\ar[r] & \Sigma A \ar[d]^-{\phi'}\ar[r] & 0\\ 0 \ar[r] & {\mathcal{K}}({\mathcal{E}}) \ar[r] & B({\mathcal{E}}) \ar[r] & Q({\mathcal{E}}) \ar[r] & 0\\ }\end{aligned}$$ The definition of the homomorphism $\eta$ from $KK^G_1(A, B)$ to $\{\Sigma A, B\}_G$ is similar but simpler.
Let $x$ be an element in the group $KK^G_1(A, B)$ which is represented by a cycle $({\mathcal{E}}, \phi, P)$; recall that this time, $P$ is an operator in $B({\mathcal{E}})$ which is essentially a projection, essentially equivariant and essentially commuting with the elements of $A$. We obtain an equivariant $\ast$-homomorphism $\phi'$ from $A$ to $Q({\mathcal{E}})$ defined by $\phi'\colon a\mapsto \phi(a)P$ for $a$ in $A$. We define $\eta(x)$ to be the element in the group $\{\Sigma A, B\}_G$ represented by a central invariant for the following pullback extension of $A$ by ${\mathcal{K}}({\mathcal{E}})$ defined by $\phi'$: $$\begin{aligned} \xymatrix{ 0 \ar[r] & {\mathcal{K}}({\mathcal{E}}) \ar[d]\ar[r] & E_{\phi'} \ar[d]\ar[r] & A \ar[d]^-{\phi'}\ar[r] & 0\\ 0 \ar[r] & {\mathcal{K}}({\mathcal{E}}) \ar[r] & B({\mathcal{E}}) \ar[r] & Q({\mathcal{E}}) \ar[r] & 0\\ }\end{aligned}$$ The defined homomorphisms $\eta$ behave well with respect to the Bott Periodicity, tensor products with identity morphisms and Stabilization. (cf. [@HigKas2] Lemma 7.3.) For any separable $G$-${C^\ast\text{-algebras}}$ $A,B$, the following diagram commutes up to sign. $$\begin{aligned} \xymatrix{ KK^G_1(\Sigma A, B) \ar[d]^{\eta} \ar@{=}[r] & KK^G_0(A, B) \ar[d]^{\eta} \\ \{\Sigma^2A, B\}_G \ar@{=}[r] & \{\Sigma^2A, B\}_G }\end{aligned}$$ Here, the top horizontal equality means the natural isomorphism given by the Bott Periodicity, which is unique up to sign. Let $x=({\mathcal{E}}, \phi, P)$ be an element of $KK^G_1(\Sigma A, B)$ with $\phi=\phi_\Sigma\otimes\phi_A$ a nondegenerate representation of $\Sigma A$ on ${\mathcal{E}}$. As is explained in Example \[exampleBott\], the Bott periodicity maps this element to $y=({\mathcal{E}}, \phi_A, e^{2i\pi x}P+1-P)$ in $KK^G_0(A, B)$ where, abusing notation, $x$ denotes $\phi_\Sigma(x)$, the image of the coordinate function $x$ on $(0,1)$ (recall that we extend the nondegenerate representation $\phi_\Sigma$ of $\Sigma$ to that of $C_b(0,1)$).
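To see why $\eta(x)$ and $\eta(y)$ agree, note that the operator $e^{2i\pi x}P+1-P$ acts as $e^{2i\pi x}$ on the range of $P$ and as $1$ on its complement, so applying any $f$ with $f(1)=0$ kills the complement. Below is a small numerical sanity check of this in the toy commuting case, with a scalar value of $x$, a $2\times2$ projection matrix, and the test function $f(w)=w-1$ (all names are ad hoc; in the text the identity only holds modulo compacts):

```python
import cmath

def mul(A, B):
    """Product of 2x2 complex matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lin(*terms):
    """Linear combination of 2x2 matrices, given (coefficient, matrix) pairs."""
    return [[sum(c * M[i][j] for c, M in terms) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

I2 = [[1.0, 0.0], [0.0, 1.0]]
P = [[0.5, 0.5], [0.5, 0.5]]           # a rank-one orthogonal projection
assert close(mul(P, P), P)

x = 0.3                                # a scalar stand-in for phi_Sigma(x)
w = cmath.exp(2j * cmath.pi * x)       # e^{2 i pi x}

U = lin((w, P), (1, I2), (-1, P))      # U = e^{2 i pi x} P + 1 - P
lhs = lin((1, U), (-1, I2))            # f(U) for f(w) = w - 1
rhs = lin((w - 1, P))                  # f(e^{2 i pi x}) P
assert close(lhs, rhs)
```

For a general $f\in C_0(S^1-\{1\})$ the same computation goes through spectrally, since $U$ is unitary with spectrum contained in $\{e^{2i\pi x}, 1\}$ and $f(1)=0$.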
That the homomorphisms $\eta$ send the two elements to the same class in $\{\Sigma^2A, B\}_G$ can be seen as follows. The $\ast$-homomorphism from $\Sigma A$ to $Q({\mathcal{E}})$ defining $\eta(x)$ is $f\otimes a\mapsto \phi_\Sigma(f)\phi_A(a)P$. On the other hand, the $\ast$-homomorphism defining $\eta(y)$ is $f\otimes a\mapsto f(e^{2i\pi x}P+1-P)\phi_A(a)$ where $f$ is in $C_0(S^1-\{1\})\cong\Sigma$. For any $f$ in $C_0(S^1-\{1\})$ and $a$ in $A$, we have $f(e^{2i\pi x}P+1-P)\phi_A(a)=f(e^{2i\pi x})P\phi_A(a)$ in the Calkin algebra $Q({\mathcal{E}})$. We can now see that the two $\ast$-homomorphisms are actually the same via the identification $\Sigma\cong C_0(S^1-\{1\})$ given by the homeomorphism $x\mapsto e^{2i\pi x}$ from $(0,1)$ to $S^1-\{1\}$. For any separable $G$-${C^\ast\text{-algebras}}$ $A,B$ and $C$, the following diagram commutes for $\ast=0,1$. $$\begin{aligned} \xymatrix{ KK^G_\ast(A, B) \ar[d]^{\eta} \ar[r]^-{\sigma_C} & KK^G_\ast(A\otimes C, B\otimes C) \ar[d]^{\eta} \\ \{\Sigma^{2-\ast}A, B\}_G \ar[r]^-{\sigma_C} & \{\Sigma^{2-\ast}A\otimes C, B\otimes C\}_G }\end{aligned}$$ We consider the case $\ast=1$. Take any element $x=({\mathcal{E}}, \phi, P)$ in $KK^G_1(A, B)$. Then, the element $\sigma_C(x)$ in $KK^G_1(A\otimes C, B\otimes C)$ is, by definition, represented by $({\mathcal{E}}\otimes C, \phi\otimes \operatorname{id}_C, P\otimes1)$. On the one hand, the element $\sigma_C(\eta(x))$ in $\{\Sigma A\otimes C, B\otimes C\}_G$ is represented by the asymptotic morphism $\phi_t\colon f\otimes a\otimes c\mapsto (f(u_t)\otimes1)(\phi(a)P\otimes c) \in {\mathcal{K}}({\mathcal{E}})\otimes C$ where $(u_t)_{t\geq1}$ is an approximate unit for the pair ${\mathcal{K}}({\mathcal{E}})\subset E_{\phi'}$.
On the other hand, $\eta(\sigma_C(x))$ is represented by the asymptotic morphism $\psi_t\colon f\otimes a\otimes c\mapsto f(v_t)(\phi(a)P\otimes c) \in {\mathcal{K}}({\mathcal{E}})\otimes C$ where $(v_t)_{t\geq1}$ is an approximate unit for the pair ${\mathcal{K}}({\mathcal{E}})\otimes C\subset E_{\phi'\otimes\operatorname{id}_C}$. They are homotopic via the straight line homotopy between $(u_t)$ and $(v_t)$. The case $\ast=0$ can be handled completely analogously. \[lemStab\] For any separable $G$-${C^\ast\text{-algebras}}$ $A,B$, the following diagram commutes for $\ast=0,1$. $$\begin{aligned} \xymatrix{ KK^G_\ast(A, B) \ar[d]^{\eta} \ar@{=}[r] & KK^G_\ast(A, B{\mathcal{K}}) \ar[d]^{\eta} \\ \{\Sigma^{2-\ast}A, B\}_G \ar@{=}[r] & \{\Sigma^{2-\ast}A, B{\mathcal{K}}\}_G }\end{aligned}$$ Here, the top equality is Stabilization in Equivariant $KK$-Theory: that is, the Kasparov product with $({\mathcal{K}}({\mathcal{H}}_G, {\mathbb{C}}), 1, 0)$ in $KK^G({\mathbb{C}}, {\mathcal{K}})$ or with its inverse $({\mathcal{H}}_G, \operatorname{id}_{\mathcal{K}}, 0)$ in $KK^G({\mathcal{K}}, {\mathbb{C}})$. The meaning of the bottom equality is similar: in the rightward direction, for any Hilbert $G$-$B$-module ${\mathcal{E}}$, we identify ${\mathcal{K}}({\mathcal{E}})$ with ${\mathcal{K}}({\mathcal{E}}\otimes{\mathcal{K}}({\mathcal{H}}_G, {\mathbb{C}}))$; and for any Hilbert $G$-$B{\mathcal{K}}$-module ${\mathcal{E}}'$, we identify ${\mathcal{K}}({\mathcal{E}}')$ with ${\mathcal{K}}({\mathcal{E}}'\otimes_{\mathcal{K}}{\mathcal{H}}_G)$. This is more or less obvious. We just need to see that for any Hilbert $G$-$B$-module ${\mathcal{E}}$, we have an isomorphism from ${\mathcal{K}}({\mathcal{E}})$ to ${\mathcal{K}}({\mathcal{E}}\otimes{\mathcal{K}}({\mathcal{H}}_G, {\mathbb{C}}))$ sending $T$ to $T\otimes1$. \[dfnrho\](cf. [@HigKas2] Definition 7.4.) We define the homomorphisms $\rho$ from $\{\Sigma^2, B\}_G$ to $KK^G_1({\mathbb{C}}, B(0, \infty))\cong KK^G_0({\mathbb{C}}, B)$ as follows.
Given any asymptotic morphism $\phi=(\phi_t)_{t\geq1}$ from $\Sigma^2$ to ${\mathcal{K}}({\mathcal{E}})$, we view $\phi$ as a map from $\Sigma^2$ to the ${C^\ast\text{-algebra}}$ ${\mathcal{K}}({\mathcal{E}})(0,\infty)$ by setting $\phi_t=t\phi_1$ for $t<1$. By extending $\phi$ to a unital map from $\tilde\Sigma^2$ to $B({\mathcal{E}})(0,\infty)$ and naturally extending it to a map $\phi'$ from $M_2(\tilde\Sigma^2)$ to $B({\mathcal{E}}\oplus{\mathcal{E}})(0,\infty)$, we now define an operator $P$ in $B({\mathcal{E}}\oplus{\mathcal{E}})(0,\infty)$ to be the image $\phi'(p)$ of the projection $p=\frac{1}{1+|z|^2}\begin{pmatrix} 1 & z \\ \bar{z} & |z|^2 \end{pmatrix}$ in $M_2(\tilde\Sigma^2)\cong M_2(\widetilde{C_0({\mathbb{C}})})$. The operator $P$ is essentially a projection, and is essentially equivariant. We set $\rho(\phi)$ in $KK^G_1({\mathbb{C}}, B(0, \infty))$ to be the class of the odd Kasparov module $(({\mathcal{E}}\oplus{\mathcal{E}})(0, \infty), 1, P)$.\
We next define $\rho$ from $\{\Sigma, B\}_G$ to $KK^G_0({\mathbb{C}}, B(0, \infty))\cong KK^G_1({\mathbb{C}}, B)$. Given any asymptotic morphism $\phi=(\phi_t)_{t\geq1}$ from $\Sigma$ to ${\mathcal{K}}({\mathcal{E}})$, we view $\phi$ as a map from $\Sigma$ to the ${C^\ast\text{-algebra}}$ ${\mathcal{K}}({\mathcal{E}})(0,\infty)$ by setting $\phi_t=t\phi_1$ for $t<1$. By extending $\phi$ to a unital map $\phi'$ from $\tilde\Sigma$ to $B({\mathcal{E}})(0,\infty)$, we define an operator $T$ in $B({\mathcal{E}})(0,\infty)$ to be the image $\phi'(z)$ of the unitary $z$ in $\tilde\Sigma\cong C(S^1)$. The operator $T$ is essentially a unitary, and is essentially equivariant. We set $\rho(\phi)$ in $KK^G_0({\mathbb{C}}, B(0, \infty))$ to be the class of the even Kasparov module $({\mathcal{E}}(0, \infty), 1, T)$. The defined homomorphisms $\rho$ are left inverses of $\eta$. \[rhoinverse\] (cf. [@HigKas2] Lemma 7.5.)
For any separable $G$-${C^\ast\text{-algebra}}$ $B$, the following composition for $\ast=0,1$ $$\begin{aligned} \xymatrix{ KK^G_\ast({\mathbb{C}}, B) \ar[r]^-{\eta} & \{\Sigma^{2-\ast}, B\}_G \ar[r]^-{\rho} & KK^G_{1-\ast}({\mathbb{C}}, B(0, \infty)) }\end{aligned}$$ coincides with the Bott-Periodicity map (up to sign). Hence, $\rho$ is a left inverse of $\eta$. We consider the case $\ast=1$. Take any element $x=({\mathcal{E}}, 1, P)$ in $KK^G_1({\mathbb{C}}, B)$. The asymptotic morphism $\phi_t\colon f\mapsto f(u_t)P$ represents $\eta(x)$ in $\{\Sigma, B\}_G$. Here, $(u_t)$ is an asymptotically equivariant approximate unit asymptotically commuting with $P$. Then, the map $\phi'\colon\tilde\Sigma\to B({\mathcal{E}})(0, \infty)$ as in Definition \[dfnrho\] sends $z$ in $\tilde\Sigma\cong C(S^1)$ to the essentially unitary operator $T_t=e^{i2\pi u_t}P+I-P$ in $B({\mathcal{E}})(0, \infty)$ with $u_t=tu_1$ for $t\leq1$ (we are identifying $\Sigma$ with $C_0(S^1-\{1\})$ using the homeomorphism $x\mapsto e^{i2\pi x}$ from $(0,1)$ to $S^1-\{1\}$). The straight line homotopy between $u_t$ and $\min(t, 1)$ shows that the even Kasparov module $({\mathcal{E}}(0, \infty), 1, T_t)$ defines the class in $KK^G_0({\mathbb{C}}, B(0, \infty))$ which corresponds to $x$ under the Bott Periodicity. The case $\ast=0$ is similar but a bit more complicated. Take any element $y=({\mathcal{E}}, 1, F)$ in $KK^G_0({\mathbb{C}}, B)$. The asymptotic morphism $\phi_t\colon f\otimes f'\mapsto f(u_t)f'(F)$ represents $\eta(y)$ in $\{\Sigma^2, B\}_G=\{\Sigma C_0(S^1-\{1\}), B\}_G$. Here, $(u_t)$ is an asymptotically equivariant approximate unit asymptotically commuting with $F$.
Then, the map $\phi'\colon M_2(\tilde\Sigma^2)\to B({\mathcal{E}}\oplus{\mathcal{E}})(0,\infty)$ as in Definition \[dfnrho\] sends $p=\frac{1}{1+|z|^2}\begin{pmatrix} 1 & z \\ \bar{z} & |z|^2 \end{pmatrix}$ in $M_2(\tilde\Sigma^2)\cong M_2(\widetilde{C_0({\mathbb{C}})})$ to an operator which is essentially a projection and is homotopic to $P_t=\begin{pmatrix} 1-u_t & (u_t-{u_t}^2)^{\frac12}F^\ast\\ (u_t-{u_t}^2)^{\frac12}F & u_t \end{pmatrix}$ in $M_2(B({\mathcal{E}})(0, \infty))$, with $u_t=tu_1$ for $t\leq1$. The straight line homotopy between $u_t$ and $\min(t, 1)$ shows that the odd Kasparov module $({\mathcal{E}}(0, \infty)\oplus{\mathcal{E}}(0, \infty), 1, P_t)$ defines the class in $KK^G_1({\mathbb{C}}, B(0, \infty))$ which corresponds to $y$ under the Bott Periodicity. The following lemmas show that the homomorphisms $\rho$ also behave well with respect to the Bott Periodicity and Stabilization. (cf. [@HigKas2] Lemma 7.6.) For any separable $G$-${C^\ast\text{-algebra}}$ $B$, the following diagram commutes. $$\begin{aligned} \xymatrix{ \{\Sigma, B\}_G \ar[d]^{\rho} \ar[r]^-{\sigma_\Sigma} & \{\Sigma^2, \Sigma B\}_G \ar[d]^\rho \\ KK^G_0({\mathbb{C}}, B(0, \infty)) \ar@{=}[r] & KK^G_1({\mathbb{C}}, \Sigma B(0, \infty)) }\end{aligned}$$ The bottom equality is, of course, the Bott Periodicity. Take any $x$ in $\{\Sigma, B\}_G$ represented by an asymptotic morphism $\phi_t\colon C_0(S^1-\{1\})\to {\mathcal{K}}({\mathcal{E}})$. We may write the essentially unitary operator on ${\mathcal{E}}(0, \infty)$ defining $\rho(x)$ in $KK^G_0({\mathbb{C}}, B(0, \infty))$ as $T_t=\phi_t(z)$. Now, the homomorphism $\sigma_\Sigma$ sends $\phi_t$ to the asymptotic morphism $\operatorname{id}_\Sigma\otimes\phi_t\colon f\otimes f'\mapsto f\otimes \phi_t(f')$.
The homomorphism $\rho$ sends this asymptotic morphism to an essential projection homotopic to $P_t=\begin{pmatrix} 1-x & (x-{x}^2)^{\frac12}\phi_t(\overline{z})\\ (x-{x}^2)^{\frac12}\phi_t(z) & x \end{pmatrix}$ on $(\Sigma\otimes{\mathcal{E}}\oplus\Sigma\otimes{\mathcal{E}})(0,\infty)$ which clearly defines the element in $KK^G_1({\mathbb{C}}, \Sigma B(0,\infty))$ corresponding to $\rho(x)$. For any separable $G$-${C^\ast\text{-algebra}}$ $B$, the following diagram commutes for $\ast=0,1$. $$\begin{aligned} \xymatrix{ \{\Sigma^{2-\ast}, B\}_G \ar[d]^{\rho} \ar@{=}[r] & \{\Sigma^{2-\ast}, B{\mathcal{K}}\}_G \ar[d]^\rho \\ KK^G_{1-\ast}({\mathbb{C}}, B(0, \infty)) \ar@{=}[r] & KK^G_{1-\ast}({\mathbb{C}}, B{\mathcal{K}}(0, \infty)) }\end{aligned}$$ Here, the top and bottom equalities are analogous to those of Lemma \[lemStab\]. This is, again, more or less obvious. It is clear that for any $x$ in $\{\Sigma^{2-\ast}, B\}_G$, if we denote by $x'$ the corresponding element in $\{\Sigma^{2-\ast}, B{\mathcal{K}}\}_G$, then $\rho(x')$ is just the Kasparov product of $\rho(x)$ with $(K({\mathcal{H}}_G, {\mathbb{C}}), 1, 0)$. (cf. [@HigKas2] Lemma 8.4.) Consider the Bott element $b$ in $KK^G_1({\mathbb{C}}, A({\mathcal{H}}))$. We have $-\eta(b)=\beta$ in the group $\{\Sigma, A({\mathcal{H}})\}_G$. Here, we consider the dual Dirac element $\beta$ in $\{S, A({\mathcal{H}})\}_G$ as an element in $\{\Sigma, A({\mathcal{H}})\}_G$ using an (order preserving) homeomorphism between the real line ${\mathbb{R}}$ and the interval $(0, 1)$. We use the homeomorphism $x\mapsto \frac{x(x^2+1)^{-\frac12}+1}{2}$ from ${\mathbb{R}}$ to $(0, 1)$. Recall that the Bott element $b$ is represented by an essential projection $P=\frac{{\mathcal{B}}(1+{\mathcal{B}}^2)^{-\frac12}+1}{2}$ in $A({\mathcal{H}})$. We set $P_t=\frac{t^{-1}{\mathcal{B}}(1+t^{-2}{\mathcal{B}}^2)^{-\frac12}+1}{2}$. 
Then, the dual Dirac element $\beta$ in $\{\Sigma, A({\mathcal{H}})\}_G$ is represented by the asymptotic morphism $f\mapsto f(P_t)$. On the other hand, $-\eta(b)$ is represented by $f\mapsto f(1-u_t)P_1$ where $(u_t)$ is an approximate unit in $A({\mathcal{H}})$ which is asymptotically equivariant and quasi-central with respect to $P$. This asymptotic morphism is homotopic to the asymptotic morphism $f\mapsto f((1-u_t)^{\frac12}P_1(1-u_t)^{\frac12})$. The latter is homotopic to the asymptotic morphism $f\mapsto f((1-u_t)^{\frac12}P_{s(t)}(1-u_t)^{\frac12})$ with a suitably slowly increasing function $s$ from $[1,\infty)$ onto $[1, \infty)$. The straight line homotopy between $1$ and $u_t$ followed by a reparametrization connects this asymptotic morphism to $f\mapsto f(P_t)$. In order to get the Dirac element in Equivariant $KK$-Theory, we must lift the Dirac element $\alpha$ in the group $\{\Sigma A({\mathcal{H}}), S\Sigma\}_G$ to $KK^G_1(A({\mathcal{H}}), S\Sigma)$. The following ensures that this is possible. \[Result\] (cf. [@GHT] Chapter 9.) Let $A,B$ be separable $G$-${C^\ast\text{-algebras}}$. Suppose $A$ is a proper $G$-${C^\ast\text{-algebra}}$. Then, the abelian group $\{\Sigma A, B\}_G$ is naturally isomorphic to the abelian group $[[\Sigma A, B{\mathcal{K}}]]_G$. Suppose further that $A$ is nuclear and that $B$ is isomorphic to $\Sigma B'$ for some separable $G$-${C^\ast\text{-algebra}}$ $B'$. Then, the homomorphism $\eta\colon KK^G_1(A, B)\to \{\Sigma A, B\}_G$ is an isomorphism of abelian groups. We first prove that a natural group homomorphism $\iota\colon[[\Sigma A,B{\mathcal{K}}]]_G \to \{\Sigma A, B\}_G$ (a map obtained by regarding $B{\mathcal{K}}$ as ${\mathcal{K}}(B\otimes {\mathcal{H}}_G)$) is an isomorphism when $A$ is a proper $G$-${C^\ast\text{-algebra}}$. In fact, we prove that the natural map $\iota\colon[[A,B{\mathcal{K}}]]_G \to \{A, B\}_G$ is a bijection of sets (or of semigroups) when $A$ is proper. 
Let $\sigma_{{\mathcal{K}}}$ be the map from $\{A, B\}_G$ to $[[A{\mathcal{K}}, B{\mathcal{K}}]]_G$ which sends the class represented by an asymptotic morphism $\phi\colon A\to\to{\mathcal{K}}({\mathcal{E}})$ to the class represented by an asymptotic morphism $\phi\otimes\operatorname{id}_{\mathcal{K}}\colon A{\mathcal{K}}\to\to{\mathcal{K}}({\mathcal{E}}){\mathcal{K}}\to B{\mathcal{K}}$ (the last map is induced by any adjointable isometry ${\mathcal{E}}\otimes{\mathcal{H}}_G\to B\otimes{\mathcal{H}}_G$ of $G$-$B$-Hilbert modules) and let $\kappa_A$ be the map from $[[A{\mathcal{K}}, B{\mathcal{K}}]]_G$ to $[[A, B{\mathcal{K}}]]_G$ given by the composition with a stabilization homomorphism $\operatorname{Ad}_{V}\colon A\to A{\mathcal{K}}$ induced by some adjointable isometry $V\colon A\to A\otimes{\mathcal{H}}_G$ which exists since $A$ is proper (see Proposition \[prop:stab\]). Note that the map $\sigma_{\mathcal{K}}$ is defined independently of the choice of an isometry ${\mathcal{E}}\otimes{\mathcal{H}}_G\to B\otimes{\mathcal{H}}_G$ since any two such isometries are (equivariantly) homotopic to each other (in the $\ast$-strong topology); similarly, $\kappa_A$ is defined independently of the choice of an adjointable isometry $V\colon A\to A\otimes{\mathcal{H}}_G$. We claim that the map $\kappa_A\circ\sigma_{{\mathcal{K}}}\colon\{A, B\}_G\to[[A{\mathcal{K}}, B{\mathcal{K}}]]_G\to[[A, B{\mathcal{K}}]]_G$ is the inverse of $\iota$. For later use, we prove that the three maps $\iota, \kappa_A, \sigma_{{\mathcal{K}}}$ are all bijective. That the maps $\kappa_A\circ\sigma_{{\mathcal{K}}}\circ\iota$ and $\sigma_{{\mathcal{K}}}\circ\iota\circ\kappa_A$ are the identities on $[[A, B{\mathcal{K}}]]_G$ and on $[[A{\mathcal{K}}, B{\mathcal{K}}]]_G$ respectively is essentially the Stabilization in Equivariant $E$-Theory which is explained in [@GHT]. 
$\iota\circ\kappa_A\circ\sigma_{{\mathcal{K}}}$ is the identity on $\{A, B\}_G$: Take any ($G$-equivariant) asymptotic morphism $\phi\colon A\to\to{\mathcal{K}}({\mathcal{E}})$ where ${\mathcal{E}}$ is a countably generated $G$-$B$-Hilbert module. The homomorphism $\iota\circ\kappa_A\circ\sigma_{{\mathcal{K}}}$ sends the class represented by the asymptotic morphism $\phi$ to the class represented by the asymptotic morphism $\phi\otimes\operatorname{id}_{\mathcal{K}}\circ\operatorname{Ad}_V\colon A\to A{\mathcal{K}}\to\to {\mathcal{K}}({\mathcal{E}}){\mathcal{K}}$. Let $\operatorname{Ad}_{V_s}\colon A\to A{\mathcal{K}}({\mathcal{H}}_G\oplus {\mathbb{C}})$ ($s\in[0, 1]$) be a homotopy between the stabilization $\operatorname{Ad}_{V_0}=\operatorname{Ad}_{V}\colon A\to A{\mathcal{K}}$ and the identity $\operatorname{Ad}_{V_1}=\operatorname{id}_A\colon A\to A$ induced by a homotopy $V_s\colon A\to A\otimes({\mathcal{H}}_G\oplus{\mathbb{C}})$ of (adjointable) isometries between $V_0=V\colon A\to A\otimes{\mathcal{H}}_G$ and $V_1=\operatorname{id}_A\colon A\to A\otimes{\mathbb{C}}$ of Hilbert $G$-$A$-modules, which can be defined by $V_s=(1-s)^{\frac12}V_0\oplus s^{\frac12}V_1$ for example. The homotopy of asymptotic morphisms $\phi\otimes\operatorname{id}_{{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})}\circ\operatorname{Ad}_{V_s}\colon A\to A{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})\to\to{\mathcal{K}}({\mathcal{E}}){\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})$ ($s\in[0, 1]$) connects the two asymptotic morphisms $\phi$ $(s=1)$ and $\phi\otimes\operatorname{id}_{\mathcal{K}}\circ\operatorname{Ad}_V$ $(s=0)$. This shows $\iota\circ\kappa_A\circ\sigma_{{\mathcal{K}}}$ is the identity on $\{A,B\}_G$. $\kappa_A\circ\sigma_{{\mathcal{K}}}\circ\iota$ is the identity on $[[A, B{\mathcal{K}}]]_G$: The proof is almost identical to the one above. Take any asymptotic morphism $\phi\colon A\to\to B{\mathcal{K}}$. 
The map $\kappa_A\circ\sigma_{{\mathcal{K}}}\circ\iota$ sends the class represented by $\phi$ to the one represented by $\phi\otimes\operatorname{id}_{\mathcal{K}}\circ\operatorname{Ad}_V\colon A\to A{\mathcal{K}}\to\to B{\mathcal{K}}{\mathcal{K}}$. We use the homotopy of $\operatorname{Ad}_{V_s}\colon A\to A{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})$ defined above. The homotopy of asymptotic morphisms $\phi\otimes\operatorname{id}_{{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})}\circ\operatorname{Ad}_{V_s}\colon A\to A{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})\to\to B{\mathcal{K}}{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})$ ($s\in[0, 1]$) connects the two asymptotic morphisms $\phi$ $(s=1)$ and $\phi\otimes\operatorname{id}_{\mathcal{K}}\circ\operatorname{Ad}_V$ $(s=0)$. This shows $\kappa_A\circ\sigma_{{\mathcal{K}}}\circ\iota$ is the identity on $[[A, B{\mathcal{K}}]]_G$. $\sigma_{{\mathcal{K}}}\circ\iota\circ\kappa_A$ is the identity on $[[A{\mathcal{K}}, B{\mathcal{K}}]]_G$: Take any asymptotic morphism $\phi\colon A{\mathcal{K}}\to\to B{\mathcal{K}}$. The map $\sigma_{{\mathcal{K}}}\circ\iota\circ\kappa_A$ sends the class represented by $\phi$ to the one represented by $(\phi\circ\operatorname{Ad}_V)\otimes\operatorname{id}_{\mathcal{K}}=\phi\otimes\operatorname{id}_{\mathcal{K}}\circ\operatorname{Ad}_V\otimes\operatorname{id}_{\mathcal{K}}\colon A{\mathcal{K}}\to A{\mathcal{K}}{\mathcal{K}}\to B{\mathcal{K}}{\mathcal{K}}$. We use the homotopy of $\operatorname{Ad}_{V_s}\colon A\to A{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})$ again. 
The homotopy of asymptotic morphisms $\phi\otimes\operatorname{id}_{{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})}\circ\operatorname{Ad}_{V_s}\otimes\operatorname{id}_{\mathcal{K}}\colon A{\mathcal{K}}\to A{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}}){\mathcal{K}}=A{\mathcal{K}}{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})\to\to B{\mathcal{K}}{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})$ ($s\in[0, 1]$) connects the two asymptotic morphisms $\phi\otimes\operatorname{id}_{\mathcal{K}}$ $(s=1)$ and $\phi\otimes\operatorname{id}_{\mathcal{K}}\circ\operatorname{Ad}_V\otimes\operatorname{id}_{\mathcal{K}}$ $(s=0)$; moreover, the asymptotic morphisms $\phi$ and $\phi\otimes\operatorname{id}_{\mathcal{K}}$ are homotopic via the homotopy $\phi\otimes\operatorname{id}_{{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})}\circ\operatorname{id}_A\otimes\operatorname{Ad}_{W_{s'}}\colon A{\mathcal{K}}\to A{\mathcal{K}}{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})\to\to B{\mathcal{K}}{\mathcal{K}}({\mathcal{H}}_G\oplus{\mathbb{C}})$ $(s'\in [0, 1])$ where $W_{s'}\colon {\mathcal{H}}_G\to{\mathcal{H}}_G\otimes({\mathcal{H}}_G\oplus{\mathbb{C}})$ is any homotopy of isometries of $G$-Hilbert spaces between $W_0\colon{\mathcal{H}}_G\cong{\mathcal{H}}_G\otimes{\mathcal{H}}_G\xhookrightarrow{}{\mathcal{H}}_G\otimes({\mathcal{H}}_G\oplus{\mathbb{C}})$ and $W_1\colon{\mathcal{H}}_G\cong{\mathcal{H}}_G\otimes{\mathbb{C}}\xhookrightarrow{}{\mathcal{H}}_G\otimes({\mathcal{H}}_G\oplus{\mathbb{C}})$. This shows $\sigma_{{\mathcal{K}}}\circ\iota\circ\kappa_A$ is the identity on $[[A{\mathcal{K}}, B{\mathcal{K}}]]_G$. Now, suppose further that $A$ is nuclear and that $B$ is isomorphic to $\Sigma B'$. We have the following commutative diagram of abelian groups. 
$$\begin{aligned} \xymatrix{ KK^G_1(A, B) \ar[d]^-{\eta} \ar[r]^-{\sigma_\Sigma} & KK^G_1(\Sigma A, \Sigma B) \ar[d]^-{\eta} \ar[r]^-{\sigma_{\mathcal{K}}} & KK^G_1(\Sigma A{\mathcal{K}}, \Sigma B{\mathcal{K}}) \ar[d]^-{\eta} \\ \{\Sigma A, B\}_G \ar[d]^-{\sigma_{\mathcal{K}}} \ar[r]^-{\sigma_\Sigma} & \{\Sigma^2 A, \Sigma B\}_G \ar[d]^-{\sigma_{\mathcal{K}}} \ar[r]^-{\sigma_{\mathcal{K}}} & \{\Sigma^2A{\mathcal{K}}, \Sigma B{\mathcal{K}}\}_G \ar[d]^-{\sigma_{\mathcal{K}}} \\ [[\Sigma A{\mathcal{K}}, B{\mathcal{K}}]]_G \ar@{=}[d]^-{} \ar[r]^-{\sigma_\Sigma} & [[\Sigma^2 A{\mathcal{K}}, \Sigma B{\mathcal{K}}]]_G \ar@{=}[d]^-{} \ar[r]^-{\sigma_{\mathcal{K}}} & [[\Sigma^2A{\mathcal{K}}, \Sigma B{\mathcal{K}}]]_G \ar@{=}[d]^-{} \\ E^G(A, B') \ar[r]^-{\sigma_\Sigma} & E^G(\Sigma A, B) \ar@{=}[r]^-{} & E^G(\Sigma A, B) } \label{diagram:proof of theorem}\end{aligned}$$ In the diagram above, we know that all the indicated arrows are isomorphisms except those indicated by $\eta$ and $\sigma_\Sigma\colon\{\Sigma A, B\}_G\to\{\Sigma^2A, \Sigma B\}_G$. However, the composition $\sigma_{\mathcal{K}}\circ\eta\circ\sigma_{\mathcal{K}}\circ\sigma_\Sigma\colon KK^G_1(A, B) \to E^G(\Sigma A, B)$ is the natural isomorphism according to Corollary \[important\]. It follows that all $\eta$ in the diagram are isomorphisms. In particular, the homomorphism $\eta\colon KK^G_1(A, B)\to \{\Sigma A, B\}_G$ is an isomorphism. Note that it follows that $\sigma_\Sigma\colon\{\Sigma A, B\}_G\to\{\Sigma^2A, \Sigma B\}_G$ is also an isomorphism. (Compare with [@HigKas2] Definition 8.2.) We define the Dirac element $d$ in $KK^G_1(A({\mathcal{H}}), S\Sigma)$ to be the unique element which corresponds to the Dirac element $\alpha$ in $\{\Sigma A({\mathcal{H}}), S\Sigma\}_G$ via the isomorphism $\eta\colon KK^G_1(A({\mathcal{H}}), S\Sigma)\to \{\Sigma A({\mathcal{H}}), S\Sigma\}_G$. The following theorem is the heart of the Higson-Kasparov Theorem. \[HigKasTech\] (cf. [@HigKas2] Theorem 7.8.) 
Let $A$ be a separable proper $G$-${C^\ast\text{-algebra}}$ and $B$ be a separable $G$-${C^\ast\text{-algebra}}$. Then, the following diagram commutes up to sign. $$\begin{aligned} \xymatrix{ KK^G_1({\mathbb{C}}, A) \ar[d]^-{\eta}_{\eta} \times KK^G_1(A, B) \ar[r] & KK^G_0({\mathbb{C}}, B)\\ [[\Sigma, A{\mathcal{K}}]]_G \ar[d]^-{\sigma_{\mathcal{K}}}_{\sigma_{\Sigma}} \times \{\Sigma A, B\}_G \ar[d] & \{\Sigma^2, B\}_G \ar[u]_-{\rho}\\ [[\Sigma^2, \Sigma A{\mathcal{K}}]]_G \times [[\Sigma A{\mathcal{K}}, B{\mathcal{K}}]]_G \ar[r] & [[\Sigma^2, B{\mathcal{K}}]]_G \ar[u]\\ } \label{diagram:tech}\end{aligned}$$ Here, the homomorphism $\eta\colon KK^G_1({\mathbb{C}}, A)\to \{\Sigma, A\}_G$ is naturally considered as the map from $KK^G_1({\mathbb{C}}, A)$ to $[[\Sigma, A{\mathcal{K}}]]_G\cong\{\Sigma, A\}_G$ and the top (or the bottom) horizontal arrow is the Kasparov product (or the composition of asymptotic morphisms). Since $\eta$ and $\rho$ are compatible with stabilization, it suffices to prove the claim when $A\cong A{\mathcal{K}}$ and $B\cong B{\mathcal{K}}$ (i.e. when $A,B$ are stable). This ensures that we need only consider elements of the form $(A, 1, P)$ in $KK^G_1({\mathbb{C}}, A)$. Also in this case, $\sigma_{\mathcal{K}}$ is the identity on $\{\Sigma A, B\}_G$. Hence, it suffices to show that $\eta(y)\circ\sigma_\Sigma(\eta(x))=\eta(x\otimes_A y)$ in $\{\Sigma^2, B\}_G$ for any $x=(A, 1, P)$ in $KK^G_1({\mathbb{C}}, A)$ and $y=({\mathcal{E}}, \phi, Q)$ in $KK^G_1(A, B)$. Here, $x\otimes_Ay$ denotes the Kasparov product of $x$ and $y$ in $KK^G_0({\mathbb{C}}, B)$. The rest of the proof is identical to the one given in [@HigKas2]. Theorem \[HigKasTech\] enables us to compute the composition of the Bott element $b$ in $KK^G_1({\mathbb{C}}, A({\mathcal{H}}))$ and the Dirac element $d$ in $KK^G_1(A({\mathcal{H}}), S\Sigma)$. (cf. 
[@HigKas2] Theorem 8.5.)\[KKcomp\] The composition $b\otimes_{A({\mathcal{H}})}d$ in $KK^G({\mathbb{C}}, S\Sigma)$ coincides with the identity in $KK^G({\mathbb{C}}, {\mathbb{C}})$ up to sign under the Bott Periodicity $KK^G({\mathbb{C}}, {\mathbb{C}})\cong KK^G({\mathbb{C}}, S\Sigma)$. We only need to check that the composition $\sigma_{\mathcal{K}}(\eta(d))\circ\sigma_\Sigma(\eta(b))$ coincides with $\alpha\circ\Sigma\beta$ up to sign in $\{\Sigma^2, S\Sigma\}_G \cong \{\Sigma S, S\Sigma \}_G$; but at the level of asymptotic morphisms, the first element is $-\Sigma\beta\colon \Sigma S\to\to \Sigma A({\mathcal{H}})$ composed with the composition of the stabilization $\Sigma A({\mathcal{H}})\to \Sigma A({\mathcal{H}}){\mathcal{K}}$ and $\sigma_{\mathcal{K}}(\alpha)$, which is homotopic to $\alpha$. The Dual-Dirac method (Theorem \[DD\]) says that the Baum-Connes conjecture with coefficients holds for $G$ if the identity in $KK^G({\mathbb{C}}, {\mathbb{C}})$ factors through a proper algebra. Thus, we finally finish the proof of the Higson-Kasparov Theorem. (cf. [@HigKas2] Theorem 9.1.) The Baum-Connes conjecture with coefficients holds for all a-$T$-menable groups. Non-Isometric Actions ===================== In this last chapter, we consider an affine action of a second countable, locally compact group $G$ on a separable (infinite-dimensional) real Hilbert space ${\mathcal{H}}$ whose linear part is not necessarily isometric. It has been suggested that it is important to consider such actions since some groups, like $\mathrm{Sp}(n,1)$, which cannot admit a metrically proper affine isometric action on Hilbert space (due to Kazhdan’s property ($T$)), admit a metrically proper affine action whose linear part is not isometric but uniformly bounded. However, in order to carry out some analogue of the argument of the Higson-Kasparov Theorem in this case, it is necessary to go beyond the framework of ${C^\ast\text{-algebras}}$. 
For example, the ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ of the Hilbert space does not become a $G$-${C^\ast\text{-algebra}}$ in an obvious way. We will see, however, that if we consider an affine action of $G$ whose linear part is an isometry times a scalar, then there indeed exists a natural action of $G$ on the ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ which makes it a $G$-${C^\ast\text{-algebra}}$.\ In this chapter, by an affine action of a group $G$ on a Hilbert space ${\mathcal{H}}$, we mean an affine action $(\pi\times r, b)$ of $G$ on ${\mathcal{H}}$ whose linear part $\pi\times r$ is an isometry times a scalar. Namely, $\pi$ and $r$ are continuous group homomorphisms from $G$ to $O({\mathcal{H}})$ and to ${\mathbb{R}}_+$ respectively; and $b$ is a continuous map from $G$ to ${\mathcal{H}}$ satisfying the cocycle condition $b(gg')=\pi(g)r(g)b(g')+b(g)$ for any $g,g'$ in $G$. We denote also by $g$ the affine transformation given by $g$; i.e., the homeomorphism $v\mapsto \pi(g)r(g)v+b(g)$ of ${\mathcal{H}}$. Now, let $(\pi\times r, b)$ be an affine action of a group $G$ on a Hilbert space ${\mathcal{H}}$. Then, we have a natural action of $G$ on the ${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ of the Hilbert space which makes it a $G$-${C^\ast\text{-algebra}}$. The $G$-action is defined as follows. For $g$ in $G$ and for a finite dimensional affine subspace $V$ of ${\mathcal{H}}$, we have a $\ast$-isomorphism $g$ from $A(V)=\mathcal{S}\hat\otimes C_0(V\times V_0, \mathcal{L}(V))$ to $A(gV)$ which decomposes as an action $r(g)_\ast$ on ${\mathcal{S}}$ defined by $r(g)$, isomorphisms $g_{\ast}\colon C_0(V)\to C_0(gV)$ and $(\pi(g)r(g))_\ast\colon C_0(V_0)\to C_0(gV_0)=C_0(\pi(g)r(g)V_0)$ defined by $g$ and by $\pi(g)r(g)$ respectively, and an isomorphism $\pi(g)_\ast\colon\mathcal{L}(V)\to {\mathcal{L}}(gV)={\mathcal{L}}(\pi(g)V)$ defined by $\pi(g)$. The next lemma says that this defines a $G$-action on $A({\mathcal{H}})$. 
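For concreteness, the cocycle condition is precisely the compatibility of these affine transformations with the group law: since $\pi$ and $r$ are homomorphisms, $$g(g'(v))=\pi(g)r(g)\bigl(\pi(g')r(g')v+b(g')\bigr)+b(g)=\pi(gg')r(gg')v+\pi(g)r(g)b(g')+b(g),$$ which equals $(gg')(v)=\pi(gg')r(gg')v+b(gg')$ for all $v$ in ${\mathcal{H}}$ exactly when $b(gg')=\pi(g)r(g)b(g')+b(g)$.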
\[commute\] For any element $g$ in $G$ and for any finite dimensional affine subspaces $V\subseteq V'=V\oplus W$, the following diagram commutes: $$\begin{aligned} \xymatrix{ A(V) \ar[d]^-{g} \ar[r] & A(V') \ar[d]^-{g} \\ A(gV) \ar[r] & A(gV') \\ }\end{aligned}$$ Here, the horizontal maps are the natural inclusions, and the vertical maps are the maps defined above. The proof is identical to the isometric case. It suffices to show that the following diagram commutes: $$\begin{aligned} \xymatrix{ {\mathcal{S}}\ar[d]^-{r(g)_\ast} \ar[r] & A(W) \ar[d]^-{(\pi(g)r(g))_\ast} \\ {\mathcal{S}}\ar[r] & A(\pi(g)r(g)W) \\ }\end{aligned}$$ Rewrite $\pi(g)r(g)W$ as $W'$. We need to check the commutativity of the diagram only for $\exp(-x^2)$ and for $x\exp(-x^2)$ in ${\mathcal{S}}$. Both routes send $\exp(-x^2)$ to $$\exp(-r(g)^{-2}x^2)\hat\otimes\exp(-r(g)^{-2}\|(w_1', w_2')\|^2)$$ in $A(W')={\mathcal{S}}\hat\otimes C_0(W'\times W', {\mathcal{L}}(W'))$, and similarly send $x\exp(-x^2)$ to $$\begin{aligned} r(g)^{-1}x\exp(-r(g)^{-2}x^2)\hat\otimes\exp(-r(g)^{-2}\|(w_1', w_2')\|^2)\\+\exp(-r(g)^{-2}x^2)\hat\otimes r(g)^{-1}{\mathcal{B}}_{W'}\exp(-r(g)^{-2}\|(w_1', w_2')\|^2)\end{aligned}$$ in $A(W')$. Here, ${\mathcal{B}}_{W'}$ is the Bott operator for $W'$. For any affine action $(\pi\times r, b)$ on ${\mathcal{H}}$, we define the $G$-action on $A({\mathcal{H}})$ which is guaranteed by Lemma \[commute\]. This makes $A({\mathcal{H}})$ a $G$-${C^\ast\text{-algebra}}$. Now, denote by $S_G$ the (ungraded) $G$-${C^\ast\text{-algebra}}$ $S$ with the $G$-action coming from the homomorphism $r\colon G\to{\mathbb{R}}_+$. Then, the natural inclusion $S_G \to A({\mathcal{H}})$ is $G$-equivariant when the affine part $b$ of the $G$-action on ${\mathcal{H}}$ is zero. In general, analogously to the case of isometric actions, we have an equivariant asymptotic morphism $(\phi_t)\colon S_G\to\to A({\mathcal{H}})$ given by $\phi_t(f):=f(t^{-1}{\mathcal{B}})$ for $t$ in $[1,\infty)$. 
We will prove the following (though very slight) generalization of the infinite dimensional Bott Periodicity of N. Higson, G. Kasparov and J. Trout (see [@HKT]). An equivariant asymptotic morphism $(\phi_t)\colon S_G\to\to A({\mathcal{H}})$ defines an invertible morphism in $E^G(S_G, A({\mathcal{H}}))$. There might be a direct proof of this, but we will soon see that this result follows from an already established result: the infinite dimensional Bott Periodicity for a continuous field of affine isometric actions on real Hilbert spaces. We use Fell’s absorption technique. Denote by $S_T$ the $G$-${C^\ast\text{-algebra}}$ $S$ equipped with the $G$-action induced by the translation action on ${\mathbb{R}}$ defined by $g\colon y\mapsto y+\log(r(g))$ for $g$ in $G$. Since $S_T^2$ is isomorphic to ${\mathbb{C}}$ in Equivariant $E$-Theory, our claim follows if we show that the equivariant asymptotic morphism $(\phi_t)\otimes\operatorname{id}_{S_T}\colon S_GS_T\to\to A({\mathcal{H}})S_T$ defines an invertible morphism in $E^G(S_GS_T, A({\mathcal{H}})S_T)$. Now, the $G$-${C^\ast\text{-algebra}}$ $S_GS_T$ is isomorphic to $SS_T$: write $S_GS_T$ as $C_0({\mathbb{R}}_T, S_G)$ and $SS_T$ as $C_0({\mathbb{R}}_T, S)$, where ${\mathbb{R}}_T$ is equipped with the translation action defined above. The isomorphism sends a function $f\colon{\mathbb{R}}_T\to S_G$ to the function ${\mathbb{R}}_T\ni y\mapsto (\exp(-y))_\ast(f(y))\in S$. Similarly, write $A({\mathcal{H}})S_T$ as $C_0({\mathbb{R}}_T, A({\mathcal{H}}))$. Use exactly the same formula; namely, send a function $f\colon{\mathbb{R}}_T\to A({\mathcal{H}})$ to the function ${\mathbb{R}}_T\ni y\mapsto (\exp(-y))_\ast(f(y))\in A({\mathcal{H}})$ where $(\exp(-y))_\ast$ now denotes the action on $A({\mathcal{H}})$ defined by $\exp(-y)$ in ${\mathbb{R}}_+$. 
Then, this defines an isomorphism from the $G$-${C^\ast\text{-algebra}}$ $A({\mathcal{H}})S_T$ to the $G$-${C^\ast\text{-algebra}}$ which we denote by $A({\mathcal{H}})({\mathbb{R}}_T)$ by a slight abuse of notation. The $G$-action on the latter algebra $A({\mathcal{H}})({\mathbb{R}}_T)$ is defined as follows. For $g$ in $G$ and for $f\colon{\mathbb{R}}_T\to A({\mathcal{H}})$, $g(f)(y)=(\pi(g), \exp(-y)b(g))_\ast f (y-\log r(g))$ where $(\pi, b)_\ast$ denotes the action on $A({\mathcal{H}})$ induced from an affine isometric action $(\pi, b)$ on ${\mathcal{H}}$. Rewrite $SS_T$ as $S({\mathbb{R}}_T)$. With these identifications, the asymptotic morphism $(\phi_t)\otimes\operatorname{id}_{S_T}\colon S({\mathbb{R}}_T)\to\to A({\mathcal{H}})({\mathbb{R}}_T)$ is nothing but the asymptotic morphism associated to the continuous field of affine isometric actions $(\pi, (b_y)_{y\in {\mathbb{R}}_T})$ over ${\mathbb{R}}_T$ with $b_y(g)=\exp(-y)b(g)$. In Chapter 7, we already proved that this defines an invertible morphism in the category $E^G$; hence, we are done. One might wonder whether one can carry out some analogue of the Higson-Kasparov Theorem in this situation. Namely, one may want to consider an affine action of a group $G$ on a Hilbert space ${\mathcal{H}}$ which is metrically proper and makes $A({\mathcal{H}})$ a proper $G$-${C^\ast\text{-algebra}}$, and see whether there are a Bott element and a Dirac element in the Equivariant Kasparov category. However, it is not enlightening to do so. (When such an action is metrically proper, it is more or less an isometric affine action.) Instead, one should consider actions such that the $G$-${C^\ast\text{-algebra}}$ $A({\mathcal{H}})$ becomes a proper $G$-${C^\ast\text{-algebra}}$ after tensoring with $S_T$ as above. In such a situation, it is highly likely that one can carry out the exact analogue of the Higson-Kasparov Theorem to deduce that $G$ satisfies BCC. 
Whether there is a non-a-$T$-menable group $G$ which admits such a “proper” action is not clear to the author, but such a group would be just an extension of some subgroup of ${\mathbb{R}}_+$ by an a-$T$-menable group. As we remarked at the outset of this chapter, it is definitely an interesting and important problem to find the analogue of the Higson-Kasparov Theorem for more general affine actions of a group on a Hilbert space. The author believes that, in attacking this interesting problem, our Theorem \[Result\] or the idea behind its proof could play an important role.
--- abstract: 'We present exact solutions of the non-linear [bcre]{} model for granular avalanches without diffusion. We assume a generic sandpile profile consisting of two regions of constant but different slope. Our solution is constructed in terms of characteristic curves from which several novel predictions for experiments on avalanches are deduced: Analytical results are given for the shock condition, shock coordinates, universal quantities at the shock, slope relaxation at large times, velocities of the active region and of the sandpile profile.' author: - 'Thorsten Emig, Philippe Claudin and Jean-Philippe Bouchaud' title: Exact Solutions of a Model for Granular Avalanches --- Introduction and Model ====================== The study of avalanches and surface flows in granular materials has attracted much attention recently, both from a theoretical [@deG] and an experimental point of view [@exp]. A simple model, thought to capture some of the essential phenomena, has been proposed in [@bcre.equa; @bcre.hysteresis; @bcre.triangular]. It is based on the assumption that a strict separation between rolling grains and static grains can be made. Coupled dynamical equations for these two species, based on phenomenological arguments, can then be written. Calling $R$ the local density of rolling grains and $h$ the height of static grains, the simplest form of the [bcre]{} equations reads: $$\begin{aligned} \label{eq:bcre1} H_t & = & -\gamma R H_x, \\ R_t & = & R_x+\gamma R H_x, \label{eq:bcre2}\end{aligned}$$ where $H$ is the height of static grains, counted from the repose slope of angle $\theta_r$: $h(t,x)=H(t,x)+x\tan(\theta_r)$ (the heap is sloping upwards from left to right). In the above equations, the units of length and time are chosen such that the (downhill) velocity of grains is $v=1$, while $H$ and $R$ are counted in units of the grain diameter. 
The term $\gamma R H_x$ describes the conversion of static grains into rolling grains if $H_x >0$, or vice versa if $H_x<0$. $\gamma$ is a grain collision frequency, typically of the order of $100$ Hz. Many important phenomena are left out of the above description, and can be included by adding more terms. For example, diffusion terms (such as $D_1 R_{xx}$ or $D_2 H_{xx}$, describing, e.g., non-local dislodgement effects) will generically be present, and qualitatively change the structure of the solutions [@bcre.triangular]. Another aspect not described by the linear form of the conversion term above is the expected saturation of rolling grains with time, rather than the exponential growth predicted by Eq. (\[eq:bcre2\]) for a constant positive slope $H_x$. Non-linear saturation terms, as well as a dependence of the velocity of the rolling grains on $R$, are thus expected in general, and can lead to important differences with the above equations [@pgdg.thick; @pgdg.new]. Recently, these equations have been studied by Mahadevan and Pomeau ([mp]{}) [@mp]. They found a conservation law, which relates the solutions $R(t,x)$ and $H(t,x)$ in a frame moving with the velocity of the grains. From this law, they concluded that the [bcre]{} equations have characteristics that are straight lines, along which both $R(t,x)$ and $H(t,x)$ are constant. Independently of the initial profile $H_0(x)$, they found that a shock forms at time $t_s=-1/(\gamma R_{0,\max}')$, where $R'_{0,\max}$ is the maximum (in absolute value) of the initial gradient of rolling grains. Whereas our exact solution fulfills the same conservation law, our results for the characteristics and the shock time disagree with the results of [mp]{}. As we will discuss below, the reason for this disagreement is their implicit assumption of a very restrictive relation between the initial profiles $R_0(x)$ and $H_0(x)$. 
Characteristic coordinates ========================== The general basis of the method [@cf_book] we use to solve Eqs. (\[eq:bcre1\],\[eq:bcre2\]) consists in replacing the original equations by an equivalent system of four partial differential equations for the functions $t$, $x$, $R$ and $H$, but now considered as functions of new coordinates $\mu$ and $\nu$, which will be defined below.[^1] These new equations will be particularly simple inasmuch as each equation has derivatives with respect to either $\mu$ or $\nu$, though the mapping between the coordinates $(t,x)$ and $(\mu,\nu)$ will in general be complicated. To define the characteristic coordinates $(\mu,\nu)$, we first have to specify the characteristic curves of the system (\[eq:bcre1\],\[eq:bcre2\]). For practical reasons, we introduce new functions $u(t,x)=1-R(t,x)/\alpha$ and $v(t,x)=(\alpha+x-R(t,x)-H(t,x))/\alpha$ instead of $H(t,x)$ and $R(t,x)$. For these new functions, the differential expressions become $$\begin{aligned} \label{eq:dop1} L_1[u,v] & = & -u_t - \gamma\alpha(1-u) u_x + v_t +\gamma\alpha (1-u) v_x - \gamma(1-u)=0,\\ L_2[u,v] & = & u_t+[-1+\gamma\alpha(1-u)]u_x -\gamma\alpha(1-u)v_x +\gamma(1-u)=0. \label{eq:dop2}\end{aligned}$$ Both operators $L_1$ and $L_2$ contain linear combinations of the type $a u_t + b u_x$ of the derivatives of $u$ (and the same holds for $v$). Such a combination means that $u$ is differentiated in the direction given by the ratio $t/x=a/b$. Since the coefficients $a$ and $b$ differ for $u$ and $v$ and also for $L_1$ and $L_2$, the functions $u$ and $v$ are differentiated in each of the operators in different directions in the $(t,x)$ plane. Notice that the directions depend also on $u$ itself, and therefore on the solution under consideration, which is a typical feature of non-linear systems. 
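For later reference, this substitution is inverted by $$R=\alpha(1-u),\qquad H=x+\alpha(u-v),$$ so that $H_t=\alpha(u_t-v_t)$ and $H_x=1+\alpha(u_x-v_x)$; inserting these relations into Eqs. (\[eq:bcre1\],\[eq:bcre2\]) reproduces the equations $L_1[u,v]=0$ and $L_2[u,v]=0$.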
As noted above, our goal is to find equivalent differential equations each of which contains derivatives in only one (local) direction corresponding to one of the new coordinates $\mu$ and $\nu$. Therefore we take a linear combination $L=\lambda_1 L_1 + \lambda_2 L_2$ of the operators in Eqs. (\[eq:dop1\],\[eq:dop2\]) such that the derivatives of $u$ and $v$ in $L$ combine to derivatives in the same direction, which is called a characteristic direction. Moreover we assume that these local directions change smoothly as functions of $t$ and $x$, and are given by the tangential vectors $(t_\sigma(\sigma),x_\sigma(\sigma))$ of a smooth path $(t(\sigma),x(\sigma))$ with $\sigma$ as parameter. Considering the functions $u$ and $v$ along this path, they depend only on $\sigma$ and we have, e.g., $u_\sigma=u_t t_\sigma + u_x x_\sigma$. Using these conditions, we obtain four homogeneous linear equations for the coefficients $\lambda_1$ and $\lambda_2$ with coefficients depending on $t$, $x$, $u$, $v$ and their derivatives with respect to $\sigma$. For non-trivial solutions all possible determinants of the matrix of these coefficients have to vanish, leading to three independent equations or characteristic relations ([cr]{}). The first one can be written as a quadratic equation for the local direction $\zeta=x_\sigma/t_\sigma$ of differentiation, the solutions of which are $\zeta_+=-1$ and $\zeta_-=\gamma\alpha(1-u)$ (note that $\gamma\alpha(1-u)=\gamma R$). Now, for a fixed solution $u$, the equations $dx/dt=\zeta_+$ and $dx/dt=\zeta_-$ are ordinary differential equations, which define two families of paths with the starting position $x_0$ at $t=0$ as parameter. These families of paths are the characteristics $C_+$ and $C_-$ of the system (\[eq:bcre1\],\[eq:bcre2\]). From a physical point of view, they are simply the paths along which $R(t,x)$ (along $C_+$) and $H(t,x)$ (along $C_-$) evolve with time. 
The new curved coordinate frame $(\mu,\nu)$ is now defined such that the two one-parametric families of characteristics are mapped by the coordinate transformation onto an ordinary Cartesian coordinate frame in the $(\mu,\nu)$-plane, i.e., along the characteristics the coordinate functions $\mu(t,x)$ and $\nu(t,x)$, respectively, are constant. Here we have chosen to map the line $t=0$ onto the line given by $\mu=-\nu$. In terms of the new coordinates we find $$\label{eq:ch1} x_\nu+t_\nu=0, \quad x_\mu-\gamma\alpha(1-u)t_\mu=0.$$ Now we make use of another [cr]{}, which evaluated along $C_+$ and $C_-$ by identifying $\sigma$ with $\nu$ and $\mu$, respectively, yields the conditions $$\label{eq:ch2} u_\nu+\gamma\alpha(1-u)v_\nu+\gamma(1-u)t_\nu=0, \quad u_\mu-v_\mu+\gamma(1-u)t_\mu=0.$$ These equations together with Eqs. (\[eq:ch1\]) form the desired set of four equations mentioned before. Every solution of this new system satisfies the original Eqs. (\[eq:bcre1\],\[eq:bcre2\]), since the Jacobian $t_\nu x_\mu - t_\mu x_\nu \sim 1+\gamma R(\mu,\nu)$ of the coordinate map does not vanish, owing to $\gamma R(\mu,\nu)>0$. General solution ================ Before we can construct a solution to the equivalent system (\[eq:ch1\],\[eq:ch2\]), we have to specify initial data along the line $\mu=-\nu$ corresponding to $t=0$. We choose a general profile $H_0(x)$, perturbed at $t=0$ by a uniform ‘rain’ of rolling grains: $R_0(x)=\alpha$. In terms of the new coordinates, the initial conditions become $t_0(\mu)=0$, $x_0(\mu)=-\mu$, $u_0(\mu)=0$, $ v_0(\mu)=-(\mu+H_0(-\mu))/\alpha$. By introducing the function $\Delta(\mu,\nu)=-1-\gamma\alpha(1-u(\mu,\nu))$, one can show that the problem of solving the system given by Eqs. (\[eq:ch1\],\[eq:ch2\]) can be reduced to the task of finding a solution to the equation $\Delta_\nu = \gamma H'_0(\nu)(1+1/\Delta)$, with initial condition $\Delta(\mu,-\mu)=-1-\gamma\alpha$.
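As a sanity check on this reduction, one can integrate the equation for $\Delta$ numerically along a line of constant $\mu$ and compare with the Lambert-$W$ closed form of Eq. (\[eq:delta\]) below. The following sketch uses purely illustrative values of $\gamma$, $\alpha$, $\mu$ and a smooth test profile $H_0$ (none of these are taken from the paper; the function names are ours):

```python
# Numerical check that the Lambert-W closed form solves the reduced ODE
#   d(Delta)/d(nu) = gamma * H0'(nu) * (1 + 1/Delta),
# with initial condition Delta(mu, -mu) = -1 - gamma*alpha.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import lambertw

gamma, alpha = 0.5, 0.4          # illustrative values, not from the paper
H0 = np.sin                      # illustrative smooth initial profile
dH0 = np.cos                     # its derivative

def delta_closed(mu, nu):
    # Closed form of Eq. (eq:delta); the argument is positive, so the
    # principal branch of W is real.
    arg = alpha*gamma*np.exp(alpha*gamma + gamma*(H0(-mu) - H0(nu)))
    return -1.0 - lambertw(arg).real

mu = 0.3
sol = solve_ivp(lambda nu, d: gamma*dH0(nu)*(1.0 + 1.0/d),
                (-mu, 1.0), [-1.0 - gamma*alpha],
                rtol=1e-10, atol=1e-12, dense_output=True)

for nu in (0.0, 0.5, 1.0):
    assert abs(sol.sol(nu)[0] - delta_closed(mu, nu)) < 1e-6
```

Along the line of constant $\mu$ the two curves agree to the integrator tolerance, confirming that the closed form indeed solves the reduced equation with the stated initial condition.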
The solution of this equation can be simply expressed in terms of the so-called Lambert function $W$ [@Lambert]: $$\label{eq:delta} \Delta(\mu,\nu)=-1-W\left\{\alpha\gamma \exp[\alpha\gamma + \gamma(H_0(-\mu)- H_0(\nu)) ]\right\}.$$ With this solution at hand, the solution to the system (\[eq:ch1\],\[eq:ch2\]) is determined by $$\begin{aligned} t(\mu,\nu)=\int_{-\nu}^\mu \frac{ds}{\Delta(s,\nu)} = -\mu-\nu+\int_{-\nu}^\mu \frac{\Delta_\mu(s,\nu)ds}{\gamma H'_0(-s)} &,& \quad x(\mu,\nu)=-\mu-t(\mu,\nu) \nonumber\\ R(\mu,\nu)=-\frac{1+\Delta(\mu,\nu)}{\gamma} &,& \quad H(\mu,\nu)=H_0(\nu), %u(\mu,\nu)=1+\frac{1+\Delta(\mu,\nu)}{\alpha\gamma} &,& \quad %v(\mu,\nu)=u(\mu,\nu)-\frac{t(\mu,\nu)+\mu+H_0(\nu)}{\alpha}. \label{gen_solution}\end{aligned}$$ where we have already expressed the original fields $R(\mu,\nu)$ and $H(\mu,\nu)$ in terms of the functions $u$ and $v$. To get the fields as functions of $t$ and $x$, one has to invert the coordinate map. This can be done by using $\mu(t,x)=-t-x$ and integrating the equation for $t(\mu,\nu)$ to obtain $\nu(t,x)$ as well. As announced before, the height profile $H(t,x)=H_0(\nu(t,x))$ turns out to be constant along the characteristics $C_-$. Generic shape for $H(t,x)$ ========================== In the following we will consider a situation which is generic for sandpile surfaces. Suppose that one starts with a sandpile profile which consists of two regions with constant but different slopes matching with a kink at $x=0$, and again with a constant amount of rolling grains. The slopes may be either larger or smaller than the angle of repose $\theta_r$. If we denote the slope to the right (left) by $\theta_r+\theta_>$ ($\theta_r+\theta_<$), we have $H_0(x)=\theta_> x$ for $x>0$ and $H_0(x)=\theta_< x$ for $x<0$. In the case of a piecewise constant $H_0'(x)$ one can integrate the equation for $t(\mu,\nu)$ easily, as can be seen from Eq. (\[gen\_solution\]). The structure of Eq.
(\[eq:delta\]) suggests distinguishing three regions, given by $\mu>0,\nu<0$ (I), $\mu,\nu<0$ (II) and $\mu<0,\nu>0$ (III). [^2] In regions I and III one can find the explicit expression $\nu(t,x)=x + \frac{\alpha}{\theta} (1 - e^{\gamma\theta t})$ with $\theta=\theta_<$ (I) or $\theta=\theta_>$ (III), i.e., the characteristics $C_-$ are simple exponential curves in these regions. As a consequence, no shocks can appear in these two regions and the corresponding solutions are particularly simple: $$\label{eq:RHinreg13} R_{I(III)}(t,x) = \alpha e^{\gamma\theta_{<(>)} t}, \quad H_{I(III)}(t,x) = H_0(x) +\alpha - R_{I(III)}(t,x).$$ The boundaries of the regions I and III in real space $(t,x)$ are given by the conditions $x<-t$ and $x>x_1(t)=\frac{\alpha}{\theta_>} (e^{\gamma\theta_> t}-1)$, corresponding to the $\mu=0$ and $\nu=0$ characteristics, respectively, see Fig. \[fig1\]. The boundary for region I has an obvious physical meaning: the information that there is a kink at $x=0$ can only propagate to the left with the velocity of the moving grains, which is $1$ in our rescaled units. Moreover, it is important to note that the ‘uphill’ velocity with which the kink moves is only equal to $\gamma \alpha$ at small times, before growing exponentially. As discussed in the introduction, this growth eventually saturates, as does the value $R$, or else the characteristic $C_-$ quickly reaches the edge of the pile. The range of $x$ between the above two regions corresponds to the intermediate region II. Within this range one can obtain only an implicit solution for the coordinate map $\nu(t,x)$. It reads $$\label{eq:nuinreg2} \nu = x + \frac{1}{\gamma} \left [ \frac{\Delta(-x-t,\nu) - \Delta(0,\nu)}{\theta_>} + \frac{\Delta(0,\nu) + 1 + \alpha\gamma}{\theta_<} \right ],$$ where $\Delta(\mu,\nu)=-1-W\left\{\alpha\gamma \exp[\alpha\gamma - \gamma(\theta_> \mu + \theta_< \nu)]\right\}$ as follows from Eq. (\[eq:delta\]).
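The region-I/III formulas can be checked against each other directly: for a linear profile $H_0(x)=\theta x$, the explicit map $\nu(t,x)$ must reproduce Eq. (\[eq:RHinreg13\]) through $H=H_0(\nu)$, and the $\nu=0$ characteristic must coincide with the boundary $x_1(t)$. A minimal sketch with a single illustrative slope $\theta>0$ (standing in for $\theta_<$ or $\theta_>$ in the respective region; parameter values are ours, not the paper's):

```python
# Consistency check of the explicit region-I/III solution:
# nu(t,x) = x + (alpha/theta)(1 - e^{gamma*theta*t}) and H = H0(nu)
# must reproduce H = H0(x) + alpha - R with R = alpha*e^{gamma*theta*t}.
import numpy as np

gamma, alpha, theta = 0.5, 0.4, 0.7   # illustrative values

def nu(t, x):
    return x + (alpha/theta)*(1.0 - np.exp(gamma*theta*t))

def R(t):
    return alpha*np.exp(gamma*theta*t)

for t in (0.0, 0.5, 2.0):
    for x in (-1.0, 0.0, 3.0):
        H_from_nu = theta*nu(t, x)            # H(t,x) = H0(nu(t,x))
        H_explicit = theta*x + alpha - R(t)   # Eq. (eq:RHinreg13)
        assert abs(H_from_nu - H_explicit) < 1e-12

# The nu = 0 characteristic coincides with the region boundary x1(t):
x1 = (alpha/theta)*(np.exp(gamma*theta*1.0) - 1.0)
assert abs(nu(1.0, x1)) < 1e-12
```

Both identities hold exactly for any linear initial profile, as a short calculation confirms.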
The shape of $R(t,x)$ and $H(t,x)$ can be obtained directly from the last two equations of (\[gen\_solution\]). In general, Eq. (\[eq:nuinreg2\]) has to be solved numerically, although several results can be obtained analytically. It turns out that the solutions of Eq. (\[eq:nuinreg2\]) fall into two qualitatively different classes, according to the values of $\beta=\theta_>/\theta_<$ and $\theta_<$: for $\beta > 1-\alpha\gamma$ or $\theta_< <0$, both $R(t,x)$ and $H(t,x)$ remain continuous for all times, while for $\beta < 1-\alpha\gamma$ and $\theta_< >0$, the solutions develop a discontinuity in $R(t,x)$ and $H(t,x)$ beyond a finite shock time $t_s$. This must be contrasted with [mp]{}: since in the present case $R_0(x)=\alpha$, they predict that shocks are absent for all times. Examples ======== The characteristics resulting from numerical solutions of Eq. (\[eq:nuinreg2\]) have been plotted in Fig. \[fig1\]. The left part of this figure has been obtained for $\theta_> > 0$ and $\theta_< < 0$, corresponding to $\beta<0$. In this case, the characteristics are more and more ‘diluted’ as time increases, and therefore never cross – no shock occurs. In the limit of large times, the argument of the Lambert $W$ function becomes very large. Using the first two terms of the asymptotic expansion of $W$ [@Lambert] we get $\nu(t,x)=[-\beta t+\ln( x + \frac{\beta t} {\beta - 1} )/(\gamma\theta_<)]/(\beta-1)$. The corresponding expressions for $R(t,x)$ and $H(t,x)$ can be obtained from Eq. (\[gen\_solution\]). A particularly interesting quantity to look at is the local slope at, say, $x=0$. In this limit the slope is negative and decays with time as $H_x(t,x=0)=1/(\gamma\beta t)$.[^3] This means that the ‘true’ slope $h_x$ actually relaxes to the angle of repose $\theta_r$ for very large times. If $L$ is the size of the experimental system, then $C_-$ reaches the boundary of the system at a time $t^*$ such that $L \approx \frac{\alpha}{\theta_>}e^{\gamma \theta_> t^*}$.
One should therefore measure a final slope $h_x \approx \theta_r + \theta_</\ln ( \theta_> L /\alpha )$ [*smaller*]{} than the repose angle. This result is consistent with the qualitative discussion of Boutreux and de Gennes for a similar situation [@pgdg.sinai]. Another experimentally important quantity is the velocity $v_R$ of the “active” region. Following [@bcre.triangular], this region can be defined by the condition $R(t,x)>R_{\rm min}$, where $R_{\rm min}$ is a small threshold. $v_R$ is then given by the slope of the curves of constant $R(t,x)$, which tends to a constant in the large $t$ limit, as can be seen in Fig. \[fig1\](a). The asymptotic analysis yields $v_R=\beta/(1-\beta)$. Since $\beta<0$, $-1<v_R<0$, and the avalanche proceeds [*downhill*]{}, but slower than the grains themselves. This is an effect of the non-linear term in the [bcre]{} equations, since the linearized theory yields $v_R=-1$ [@bcre.triangular]. The situation where $\theta_> < 0$ and $\theta_< > 0$ is qualitatively different. In this case, the characteristics cross at some finite time: a shock occurs – see Fig. \[fig1\](b). A crossing point of two characteristics indeed means that at this point two different values of $R$ (or $H$) are possible, and these functions then become discontinuous. Strictly speaking, the Eqs. (\[eq:bcre1\],\[eq:bcre2\]) are no longer valid, and the diffusion terms left out of the analysis become important to smooth out this discontinuity. In Fig. \[fig2\], we plot snapshots of the $h$ and $R$ profiles at different times, for both situations (with and without the occurrence of a shock). One can calculate the time $t_s$ and location $x_s$ at which the shock occurs. For that purpose, let us introduce the envelope of the characteristic curves $x(t,\nu)$, where $\nu$ is a label. The envelope can be represented in a parametric way as $\left ( t_e(\nu), x_e(\nu) \right )$.
It has the property that through each of its points there passes a characteristic which touches it tangentially. It then has to fulfill the conditions $x(t_e(\nu),\nu) = x_e(\nu)$, $x_\nu ( t_e(\nu), \nu ) = 0$. After some calculations, one can find the [*explicit*]{} expression for the envelope, $$\begin{aligned} \label{eq:xe} x_e(\nu) & = & \nu - \frac{1}{\gamma\theta_<} \left [ 1 + \alpha\gamma + \Delta(0,\nu) \left ( 1 + \frac{1}{1-\beta} \right ) \right ] \\ \label{eq:te} t_e(\nu) & = & - x_e(\nu) + \frac{\nu}{\beta} - \frac{1}{\gamma\theta_>} \left [ \alpha\gamma + \ln (\alpha\gamma) + 1 + \frac{\Delta(0,\nu)}{1-\beta} - \ln \left ( -1 - \frac{\Delta(0,\nu)}{1-\beta} \right ) \right ].\end{aligned}$$ This envelope has two branches, separated by a kink, see Fig. \[fig1\](b), given by $\nu=\nu_s = [\alpha\gamma + \ln (\alpha\gamma)- 1 + \beta - \ln \left ( 1 - \beta \right )]/(\gamma\theta_<)$. Whereas the upper branch is parameterized by $-\infty < \nu < \nu_s$, the lower one corresponds to $\nu_s < \nu < \nu_c=[\beta+\alpha\gamma+\ln(-\alpha\gamma/\beta)]/(\gamma\theta_<)$. The resulting shock coordinates are $$\label{eq:xsts} x_s = \frac{1}{\gamma\theta_<} \left [\ln(\alpha\gamma)- \ln(1-\beta)+1+\frac{1}{1-\beta}\right], \quad t_s = \frac{1}{\gamma\theta_<}\left[\left(1-\frac{2}{\beta} \right)\ln\left(1-\beta\right)-\ln(\alpha\gamma)\right].$$ The condition that ($t_s,x_s$) has to be located inside region II leads to the boundary between the classes with and without shock mentioned above. At the shock position, the amount of moving grains is [*universal*]{} (independent of the initial value $\alpha$), and given by $R_s=1/(\gamma(1-\beta))$, while $H_s=\theta_< \nu_s$. Since typically $v \sim \gamma d$ with $d$ the grain diameter, we have in our rescaled units $\gamma\sim 1$, showing that due to $R_s \stackrel{<}{\sim} 1$ non-linear saturation terms can be neglected at the shock if $\beta \stackrel{<}{\sim} -1$.
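The shock coordinates in Eq. (\[eq:xsts\]) follow from evaluating the envelope of Eqs. (\[eq:xe\],\[eq:te\]) at the kink $\nu_s$; this can be verified numerically. The sketch below uses illustrative shock-case parameters ($\theta_> <0<\theta_<$, so $\beta<0$), not values tied to any experiment, and the function names are ours:

```python
# Check that the envelope of Eqs. (eq:xe, eq:te), evaluated at the kink
# nu_s, reproduces the closed-form shock coordinates of Eq. (eq:xsts).
import math
from scipy.special import lambertw

gamma, alpha = 1.0, 0.1
th_lt, th_gt = 0.3, -0.3           # theta_< > 0 > theta_> (shock case)
beta = th_gt/th_lt                  # = -1 here
ag = alpha*gamma

def Delta0(nu):
    # Delta(0, nu) from the piecewise form of Eq. (eq:delta)
    return -1.0 - lambertw(ag*math.exp(ag - gamma*th_lt*nu)).real

nu_s = (ag + math.log(ag) - 1 + beta - math.log(1 - beta))/(gamma*th_lt)

x_e = nu_s - (1/(gamma*th_lt))*(1 + ag + Delta0(nu_s)*(1 + 1/(1 - beta)))
t_e = (-x_e + nu_s/beta
       - (1/(gamma*th_gt))*(ag + math.log(ag) + 1 + Delta0(nu_s)/(1 - beta)
                            - math.log(-1 - Delta0(nu_s)/(1 - beta))))

x_s = (1/(gamma*th_lt))*(math.log(ag) - math.log(1 - beta) + 1 + 1/(1 - beta))
t_s = (1/(gamma*th_lt))*((1 - 2/beta)*math.log(1 - beta) - math.log(ag))

assert abs(x_e - x_s) < 1e-9 and abs(t_e - t_s) < 1e-9
```

At the kink one has $W=1-\beta$, hence $\Delta(0,\nu_s)=\beta-2$, from which both identities follow analytically as well.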
The lower branch of the envelope saturates exponentially fast for large $t$, with a characteristic time $1/(\gamma\theta_>)$, at $x_\infty=[1+\ln(-\alpha\gamma/\beta)]/(\gamma\theta_<)$, which is always larger than $x_s$. This means that the shock stops propagating upwards. A large-time expansion in the shock-free range $-t<x<x_\infty$ gives, taking the two leading terms of $W$, $\nu(t,x)= -(\alpha/\theta_<)\exp[\gamma\theta_<t - (\theta_</\alpha) x e^{-\gamma\theta_<t}]$. Thus the slope is non-monotonic within this range: after increasing for small times it relaxes again to the initial value $\theta_<$ as $H_x(t,x)=\theta_< \exp[-(\theta_</\alpha) x e^{-\gamma \theta_< t}]$. Discussion ========== Let us summarize the major results of this paper, which could be explored experimentally. Starting from an initial profile made up of two different slopes, we find that shocks can occur after a finite time, depending on the values of the two slopes and the initial density of rolling grains. When shocks are absent, we find that the evolution of the surface profile is characterized by different velocities: the kink moves upwards with a velocity of the order of $\alpha \gamma$ for early times, while the edge of the “active” region moves downwards at a velocity which only depends on the initial slopes, and is smaller than the velocity of the grains. The final slope is shown to be the angle of repose; however, for finite-size systems, one expects the final slope to be smaller by an amount which varies as $1/\ln L$. When a shock appears, we predict the time and position of this shock, as well as the density of rolling grains there, which takes a universal value. The shock is found to stop progressing upwards. Our results are in disagreement with those of [mp]{}. For the situation considered here, they predict that the initial profile is rigidly shifted along straight characteristics.
Therefore, for example, the final slope would be given by $H_x(t,x=0)=\theta_<$, which is completely different from our prediction of a decaying slope. The reason for this discrepancy comes from their implicit assumption that $R_0(x)+H_0(x)+\ln(R_0(x))/\gamma={\rm const.}$, which does not hold in the cases considered here. The method presented here can be extended to more general situations. For example, each profile $H_0(x)$ can be approximated by a piecewise linear function. Therefore, our analysis can be used to obtain analytical results for more complicated situations such as, e.g., bumps or sinusoidal shapes. Another interesting situation is the case where $R_0(x)$ is localized in space. Applications of this method to the problem of ripple formation are under way. Two important physical phenomena have been neglected: diffusion terms, which are expected to be important in the presence of shocks or in the case of a localized initial $R_0(x)$ (see [@bcre.triangular]), and non-linear effects, which lead to a saturation of the static/rolling grains conversion term. A simple way to account for the latter effect is to replace the characteristics by straight lines of velocity $\gamma R_\infty$ as soon as $R=R_\infty$. The influence of a dependence of the velocity of grains on their density would also be worth investigating [@pgdg.new]. This research was partly supported by the Deutsche Forschungsgemeinschaft (DFG) under grant EM70/1-1. [99]{} P.-G. de Gennes, Physica A **261** (1998) 267. see e.g. J. Rajchenbach, in [*Physics of Dry Granular Media*]{}, Nato-Asi Series, H. Herrmann, J.-P. Hovi and S. Luding Eds., Kluwer (1998). J.-P. Bouchaud, M.E. Cates, J. Ravi Prakash and S.F. Edwards, J. Phys. I France [**4**]{} (1994) 1383. J.-P. Bouchaud, M.E. Cates, J. Ravi Prakash and S.F. Edwards, Phys. Rev. Lett. [**74**]{} (1995) 1982. J.-P. Bouchaud and M.E. Cates, Gran. Matter [**1**]{} (1998) 101. T. Boutreux, E. Raphaël and P.-G. de Gennes, Phys. Rev.
E [**58**]{} (1998) 4692. A. Aradian, E. Raphaël and P.-G. de Gennes, Phys. Rev. E (to be published). L. Mahadevan and Y. Pomeau, Europhys. Lett. [**46**]{} (1999) 595. R. Courant and K.O. Friedrichs, [*Supersonic Flow and Shock Waves*]{} (Interscience Pub., New York) 1956. O. Terzidis, P. Claudin, and J.-P. Bouchaud, Eur. Phys. J. B [**5**]{} (1998) 245. The Lambert function $W(x)$ is defined by $W(x)e^{W(x)}=x$. For very large values of $x$, one has $W(x) \sim \ln x - \ln\ln x$. By contrast, $W(x) \sim x - x^2$ for $x \ll 1$. Its derivative can be simply expressed as $W'(x)=\frac{W(x)}{x[1+W(x)]}$, see e.g. R.M. Corless, G.H. Gonnet, D.E.G. Hare, D.J. Jeffrey and D.E. Knuth, Maple Share Library. T. Hwa, M. Kardar, Phys. Rev. Lett. [**62**]{} (1989) 1813; Phys. Rev. A [**45**]{} (1992) 7002. T. Boutreux and P.-G. de Gennes, C.R. Acad. Sci. Paris, [**325**]{} série II b (1997) 85. [^1]: The theory used here is actually more general and can be used in the presence of non-linear saturation terms or for ripple models [@ripple-paper]. [^2]: The region where $\mu,\nu>0$ turns out to be mapped onto the half-space with $t<0$ and is therefore not of physical interest. [^3]: Note that this $t^{-1}$ relaxation of the slope has also been obtained in [@HK] within a very different model.
It is increasingly believed that a successful theory of high-temperature superconductivity (HTS) must also account for the anomalous properties exhibited by the so-called “normal state”, the resistive state observed at temperatures above the superconducting transition temperature, T$_c$. There are numerous similarities in the normal-state properties of HTS and ladder cuprates, including an energy gap for spin excitations and a crossover from insulator to metal upon hole doping. These suggest that the ladder compounds provide a valuable experimental laboratory for probing the unusual properties of the resistive normal state of the high-temperature superconductors. In this letter we discuss resistivity measurements on five single-crystal samples of Sr$_{2}$Ca$_{12}$Cu$_{24}$O$_{41}$, which were grown by the traveling-solvent-floating-zone method[@c6]. As shown in Fig. \[fig\_struct\], the crystal structure of this compound is composed of layers of CuO$_2$ chains and Cu$_2$O$_3$ two-leg ladders, interleaved with Sr$_{1-x}$Ca$_x$ buffer layers. The undoped parent compound, Sr$_{14}$Cu$_{24}$O$_{41}$, exhibits semiconducting behavior with an activation energy gap of 0.18 eV[@c7]. The formal valency of Cu in Sr$_{14}$Cu$_{24}$O$_{41}$ is +2.25, corresponding to a partially-filled valence band which ordinarily would result in metallic conductivity. However, in Sr$_{14}$Cu$_{24}$O$_{41}$, the positively-charged carriers (the “holes”) are located on the CuO$_2$ chains, which do not conduct because the transfer integral along the 90$^\circ$ oxygen bond is small (Fig. \[fig\_struct\](b)). Upon partial substitution of Ca for the isovalent Sr (or upon the application of pressure), the holes are redistributed from the chains to the ladders, which are more conductive due to the 180$^\circ$ oxygen bonds[@c6]. This “self-doping” leads to a decrease of resistivity and, eventually, a crossover from insulating to metallic behavior as the carrier concentration is increased.
Fig. \[fig\_linear\] presents the c-axis resistivity, $\rho_c$, for two of the samples, denoted “D” and “E”, measured using the standard four-probe method with the current applied along the ladder. In both of these samples, the temperature dependence of $\rho_c$ is metallic ($d\rho_c/dT >0$) at room temperature, although only in sample “D” does $\rho_c$ follow a linear dependence ($\rho_c \sim aT+b$) that extrapolates (dashed line) to a nearly-zero residual resistivity in the $T \rightarrow 0$ limit. A resistivity with a strictly linear temperature dependence is one of the striking unexplained trademarks of the normal state of the high-temperature superconductors[@c8]. The data of Fig. \[fig\_linear\] suggest that it is also a signature of transport in the two-leg ladder cuprate, once enough carriers are introduced onto the ladder. At lower temperatures, both samples experience a crossover from metallic to insulating behavior ($d\rho_c/dT <0$). Although both samples have the same nominal composition, the insulating behavior begins at a higher temperature in sample “E” and the low-temperature resistivity is roughly two orders of magnitude larger in sample “E” than in sample “D”. This difference can be due to different amounts of disorder and/or different levels of the carrier concentration between the two samples. Nevertheless, in the extreme low-temperature limit, $\rho_c$ for both samples follows the temperature dependence expected for variable range hopping (VRH) of strongly localized carriers: $\rho_c = \rho_0\exp(T_0/T)^\beta$. The best fit to the data over the widest temperature range (Fig. \[fig\_vrh\]) corresponds to $\beta=1/2$, the same temperature dependence reported in highly resistive samples of the HTS cuprates[@c9; @c10; @c11].
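The $\beta=1/2$ VRH form is conveniently tested by plotting $\ln\rho_c$ against $T^{-1/2}$, which should be linear with slope $\sqrt{T_0}$. A sketch of this fitting procedure on synthetic data (the parameter values below are hypothetical, not the measured ones):

```python
# Recover the VRH parameters from rho = rho0 * exp((T0/T)**0.5):
# ln(rho) is linear in T**(-1/2) with slope sqrt(T0), intercept ln(rho0).
import numpy as np

rho0, T0 = 2.0e-3, 40.0                 # hypothetical values (Ohm cm, K)
T = np.linspace(0.3, 2.0, 50)           # temperature range of the VRH regime
rho = rho0*np.exp(np.sqrt(T0/T))

slope, intercept = np.polyfit(T**-0.5, np.log(rho), 1)
T0_fit = slope**2
rho0_fit = np.exp(intercept)

assert abs(T0_fit - T0)/T0 < 1e-8
assert abs(rho0_fit - rho0)/rho0 < 1e-8
```

In practice one compares the quality of such linearizations for $\beta=1/4$, $1/3$ and $1/2$ to select the exponent, as done in Fig. \[fig\_vrh\].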
In this VRH regime, electrical current is dominated by variable-length hops of the charge carriers between localized states at the minima of the random disorder potential. The $\beta=1/2$ exponent can result from VRH in the presence of Coulomb repulsion between carriers, which suppresses the density of states at the Fermi energy in a low-carrier-density system[@c12]. The most surprising feature of the data occurs in an intermediate temperature regime between 3 K and 20 K in sample “D” (Fig. \[fig\_loglin\]). This regime is characterized by a temperature dependence best approximated by a logarithmic insulating behavior, $\rho_c \sim \log (1/T)$ [@c13]. In this log-$T$ regime, the magnetoresistance (MR) of sample “D” (measured at $T=4.2$K with the magnetic field applied along the b-axis) was found to be negative, while in the strong localization regime (measured at $T=1.2$K) the MR was found to be positive, as is typical for VRH. This gives additional experimental evidence that there is a third transport regime in sample “D”, distinctly different from the linear-$T$ and strong localization regimes. Of the three transport regimes, the lowest-temperature strong localization regime is the most robust, since VRH is observed at temperatures below 2K in every sample. This is the same temperature range in which long range spin order has been reported in (Sr,Ca)$_{14}$Cu$_{24}$O$_{41}$, in which spins on neighboring Cu-ladder sites are anti-aligned[@c14]. It is consistent with our observations to suppose that the onset of long range anti-ferromagnetic order gives rise to the onset of strong localization of charge carriers in the ladders. The other two transport regimes observed in sample “D” are not nearly so robust, perhaps signaling a greater sensitivity to disorder for the underlying transport mechanisms.
After all, quasi-1D transport would be particularly sensitive to disorder: a mobile charge on a ladder may well have difficulty getting past a defect or break in that ladder. The cleanest linear-$T$ dependence is observed in sample “D”, while samples with higher room-temperature resistivity tend to show a quasi-linear dependence of $\rho_c$ for which the extrapolated zero-temperature resistivity is non-zero. In samples with sufficiently high room-temperature resistivity ($>2$m$\Omega$ cm), insulating behavior ($d\rho_c/dT <0$) is observed even at room temperature, probably due to larger amounts of disorder in these samples. Although evidence of the log-$T$ regime has been found in two samples, we note that the unambiguous observation of the log-$T$ regime occurs in the sample which exhibits the cleanest linear-$T$ regime ($\rho_c \sim aT$) of all samples studied. Like the linear-$T$ transport regime, a log-$T$ transport regime also exists in the normal state of the HTS cuprates. There is growing experimental evidence that the anomalous normal-state properties of the HTS are due to the strong electron interactions near an insulator-to-metal crossover. The crossover is ordinarily obscured in HTS by the appearance of the superconducting phase; however, by suppressing superconductivity with an intense, pulsed magnetic field, the insulator-to-metal crossover in La$_{2-x}$Sr$_x$CuO$_4$ has been found to occur near optimum doping[@c15], that carrier density which yields the maximum T$_c$. Underdoped samples, those with fewer carriers than optimal doping, exhibit a transport regime characterized by a log-$T$ divergence of the normal state resistivity, once superconductivity is lifted by the magnetic field[@c4; @c5].
This divergence is inconsistent with known models for log-$T$ insulating behavior: it apparently does not arise from weak localization due to coherent backscattering[@c4; @c5] or disorder-enhanced electron interactions, nor is it likely due to spin-flip (Kondo) scattering[@c4; @c5]. While there are differences between the CuO$_2$ plane of HTS cuprates and the plane containing 2-leg ladders in Sr$_{2}$Ca$_{12}$Cu$_{24}$O$_{41}$, we note that the log-$T$ insulating behavior occurs in both systems at the same magnitude of normalized resistivity, when the resistivity per layer is near the quantum resistance, h/e$^2 \sim 25.8$k$\Omega$. In a conventional two-dimensional system the quantum resistance corresponds to that resistance at which the mean free path is comparable to the deBroglie wavelength at the Fermi energy, which is where transport typically crosses from metallic (diffusive) to insulating (localized) behavior. It is tempting to suggest that similar physical mechanisms can govern normal-state transport properties in the two-leg ladder compound and the HTS cuprates, even though, prima facie, the former contains quasi-one-dimensional transport along Cu$_2$O$_3$ ladders, while the HTS cuprates contain quasi-two-dimensional transport in the CuO$_2$ plane. Nonetheless, the phenomenology of three different transport regimes is similar in the two systems, including the two regimes (linear-$T$ and log-$T$) for which the underlying physical mechanisms remain unknown. In light of the data presented here, when coupled with published evidence of charged-stripe formation in HTS[@c17], one could speculate that the effective dimensionality of the HTS CuO$_2$ plane is reduced with regard to charge transport and that the charge-transport mechanisms are similar in the two systems. Dagotto, E. and Rice, T. M. Science 271, 618-623 (1996). Dagotto, E., Riera J., and Scalapino, D. Phys. Rev. B45 5744-5747 (1992). Uehara, M., et al. J. Phys. Soc. Jpn. 65, 2764-2767 (1996).
Ando, Y., Boebinger, G. S., Passner, A., Kimura, T., and Kishio, K. Phys. Rev. Lett. 75, 4662-4665 (1995). Ando, Y. et al. J. of Low Temperature Physics 105, 867-875 (1996). Motoyama, N., Osafune, T., Kakeshita, T., Eisaki. H., and Uchida, S., Phys. Rev. B55, R3386- R3389 (1997). McElfresh, M. W. et al. Phys. Rev. B40, 825-828 (1989). For a review, see Iye, Y. in Physical Properties of High Temperature Superconductors III (ed. Ginsberg, D. M.) 285-361 (World Scientific, Singapore, 1991). Ellman, B. et al. Phys. Rev. B39, 9012-9016 (1989). K. Karpinska et al. Phys. Rev. Lett. 77, 3033-3036 (1996). Cheong, S-W. et al. Phys. Rev. B37, 5916-5919 (1988). Efros, A. L. and Shklovskii, B. I., J. Phys. C 8, L49-L51 (1975). We note that the data in Fig. \[fig\_loglin\] has a slight curvature in the semi-log plot suggesting a dependence somewhat weaker than purely logarithmic. This curvature is reproducible and larger than any known measurement errors. Akimitsu, J., private communication. Boebinger, G. S., et al. Phys. Rev. Lett. 77, 5417-5420 (1996). Ando, Y. et al. Phys. Rev. B56, R8530-R8533 (1997). For a summary see Tranquada, J. M., Physica B 241-243, 745-750 (1998).
--- abstract: 'We build a sample of O[vi]{} absorption systems in the redshift range 2.0 $\lesssim z \lesssim$ 2.6 using high spectral resolution data of ten quasars from the [*VLT-UVES*]{} Large Programme. We investigate the existence of a metal-rich O[vi]{} population and define observational criteria for this class of absorbers under the assumption of photoionization. The line widths of nearly half of all O[vi]{} absorbers imply temperatures too low for collisional ionization to be a dominant process. We estimate the oxygen abundance under the assumption of photoionization; a striking result is the bimodal distribution of \[O/H\] with median values close to 0.01 and 0.5 solar for the metal-poor and metal-rich populations, respectively. Using the line widths to fix the temperature or assuming a constant, low gas density does not drastically change the metallicities of the metal-rich population. We present the first estimate of the O[vi]{} column density distribution. Assuming a single power-law distribution, $f$(N) $\propto$ N$^{-\alpha}$, yields $\alpha \sim 1.7$ and a normalization of $f$(N) $ = 2.3\times 10^{-13}$ at log N(O[vi]{}) $\sim$ 13.5, both with a $\sim$30% uncertainty. The value of $\alpha$ is similar to that found for C[iv]{} surveys, whereas the normalization factor is about ten times higher. We use $f$(N) to derive the number density per unit $z$ and cosmic density, $\Omega_{\rm b}$(O[vi]{}), selecting a limited column density range not strongly affected by incompleteness or sample variance. Comparing our results with those obtained at $z\sim0.1$ for a similar range of column densities implies some decline of $dn/dz$ with $z$.
The cosmic O[vi]{} density derived from $f$(N), $\Omega_{\rm b}$(O[vi]{})$\approx (3.5\pm ^{3.2}_{0.9}) \times 10^{-7}$, is 2.3 times higher than the value estimated using the observed O[vi]{} sample (of which the metal-rich population contributes $\sim$35%), easing the problem of missing metals at high $z$ ($\sim$ 1/4 of the produced metals) but not solving it. We find that the majority of the metal-rich absorbers are located within $\sim$ 450 km s$^{-1}$ of strong Ly-$\alpha$ lines and show that, contrary to the metal-poor absorbers, this population cannot be in hydrostatic equilibrium. All of the O[vi]{} absorber properties imply that there are two distinct populations: metal-poor absorbers tracing the intergalactic medium and metal-rich absorbers associated with active sites of star formation and most probably linked to galactic winds.' --- Introduction ============ There are two major open questions that could be solved by the existence of a warm-hot and/or highly ionized phase of the intergalactic medium (IGM): the missing baryons at low redshift, $z \sim 0$-0.5, and the missing metals at $z \sim 2.5$. The baryon budget at low $z$ implies that about 45% of the cosmic baryons are still in the form of ionized gas in the IGM ( [Fukugita98]{}). The missing baryonic matter could reside in a warm-hot IGM (WHIM) as predicted by hierarchical structure formation models (see e.g. [@Cen99 Cen & Ostriker 1999]; [@Dave01]). The cooler phase of the WHIM can be probed by O[vi]{}$\lambda\lambda1031,1037$ absorption but, at the sensitivity of the [*FUSE*]{} and [*HST*]{} surveys, the contribution to the cosmic baryon density, $\Omega_{\rm b}$, of the detected O[vi]{} absorbers is only $\sim$5% ([@savage02]; [@richter04]). The hotter phase of the WHIM, $T>5\times10^5$ K, can be probed by O[vii]{}-O[viii]{} X-ray absorption. 
There are very few suitably bright targets for X-ray spectroscopy with [*Chandra*]{} and [*XMM-Newton*]{}, and the rare confirmed detections, at a significance level higher than 3$\sigma$ (Nicastro  2005a,b), have O[vii]{} column densities about ten times larger than those of O[vi]{}. The contribution of this intergalactic hot phase to $\Omega_{\rm b}$ could be up to ten times higher than that of the cooler WHIM. At $z \sim 2.5$, at least 90% of the baryons are in the Ly-$\alpha$ forest, but only about 10% of the metals produced by star formation activity in Lyman Break Galaxies (LBGs) have been detected up to now ([@pett99]). The mean metal enrichment of the IGM could reach a value Z $\simeq$ 0.04 Z$_{\odot}$ ([@pett99]) and recent simulations of galactic winds give estimates in the range 0.01-0.06 Z$_{\odot}$ ([@bert05]). The observed C[iv]{} cosmic density equals $\Omega_{\rm b}$(C[iv]{}) $\sim 7\times10^{-8}$ ([@song01]; [@scann05]) and, assuming an ionization correction of about a factor two, leads to a cosmic abundance \[C/H\]$\sim-2.9$, thus a shortfall of metals by a factor of at least ten and maybe up to $10^2$. The missing metals could reside in hot gaseous halos around star-forming galaxies ([@pett99]; [@ferr05]) and the cooler part of these hot bubbles might be traced by O[vi]{} absorption. A few surveys of O[vi]{} absorbers at $z \sim 2.0$-2.5 have already been conducted at the [*VLT*]{} and [*Keck*]{} telescopes, some for only a limited number of sightlines ([@berg02]; [@car02]; [@sim02],2004). A non-negligible fraction, $\sim$1/3, of the O[vi]{} absorptions associated with the Ly-$\alpha$ forest have line widths $b<14$ km s$^{-1}$, thus $T<2\times10^5$ K, which favors a radiative ionization process. A hard UV background flux, i.e. small discontinuity at 4 Ryd ([@haa96]), reproduces well the observed ionic ratios for $-3.0<$ \[Z/H\] $<-0.5$. 
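The temperature bound quoted above is just the thermal Doppler width, $b=\sqrt{2k_{\rm B}T/m}$, evaluated for oxygen; a quick check with standard physical constants:

```python
# Purely thermal Doppler parameter for oxygen (mass number A = 16):
# b = sqrt(2 k_B T / m), so T = m b**2 / (2 k_B).
import math

k_B = 1.380649e-23         # J/K
m_O = 16 * 1.66053907e-27  # kg

def T_from_b(b_kms):
    b = b_kms * 1e3        # km/s -> m/s
    return m_O * b**2 / (2.0 * k_B)

T_max = T_from_b(14.0)
assert 1.8e5 < T_max < 2.0e5   # b < 14 km/s implies T < ~2e5 K
```

Since collisional ionization of O[vi]{} requires $T\gtrsim3\times10^5$ K, such narrow lines point to photoionization, as argued in the text.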
The inferred values of $\Omega_{\rm b}$(O[vi]{}) of the above surveys are $\approx 1.1 \times10^{-7}$ (assuming $\Omega_{\rm \Lambda}, \Omega_{\rm m}, \Omega_{\rm b}, H_0 =$ 0.7, 0.3, 0.04, 70 km s$^{-1}$ Mpc$^{-1}$ throughout this paper). Applying a conservative ionization correction, O[vi]{}/O=0.15, yields a mean oxygen abundance of \[O/H\]$\sim-2.7$, and thus, as for C[iv]{} surveys, leaves open the problem of the missing metals. However, a higher metallicity has been derived for O[v]{} absorbers. The EUV O[v]{}$\lambda630$ singlet was searched for and detected in a stacked composite absorption spectrum from [*HST-FOS*]{} data for absorbers at $1.6<z<2.9$ with a large range of H[i]{} column densities ([@telf02]). Except in the strongest H[i]{} systems, the lack of detection of the associated EUV O[iv]{} doublet also suggests a hard ionizing background flux, and the derived oxygen abundance is \[O/H\]$\sim-2.2$ to $-1.3$. The paper is organized as follows: our new [*VLT*]{} O[vi]{} sample is presented in § 2. In § 3, we give results on the oxygen abundance derived under various assumptions on the ionization process. The O[vi]{} column density distribution and the contribution of the O[vi]{} absorbers to the cosmic baryon density are given in § 4. The origin of these absorbers is discussed in § 5. We present our conclusions and prospects in § 6. The O[vi]{} sample ================== The [*VLT-UVES*]{} Large Programme “The Cosmic Evolution of the IGM” provides a homogeneous sample of quasar sightlines, with emphasis given to lower redshift quasars ($z<3$) to take advantage of the high UV sensitivity of [*UVES*]{}. This allows a study of O[vi]{} systems in the range $z=2.0$-2.5 where the crowding of the Ly-$\alpha$ forest is not too severe. Altogether, the sample comprises 21 bright quasars (most with V $<$ 17), of which 19 are at $2<z<4$, observed with both the blue and red dichroic settings.
The spectral resolution is $R =45,000$ (line width $b$ = 6.6 km s$^{-1}$) and the exposure time per setting per quasar (2 settings per quasar) of 6 to 10 hr yields a signal-to-noise S/N $\sim$ 30-40 and 100 at 3300 and 5500 Å respectively. The data were reduced using an upgraded version of the [*ESO-UVES*]{} data-reduction pipeline (Aracil in preparation). We present results derived from the spectral analysis of ten quasars at 2.1 $<z_{em}<$ 2.8. Our O[vi]{} sample comprises 136 absorbers with column densities in the range 12.7 $<$ log N(O[vi]{}) $<$ 14.6. These absorbers span the redshift interval 1.99 $<z<$ 2.57 with a mean value of $\overline{z}=2.28$. Due to partial blending of the associated H[i]{} absorptions and small velocity differences between the O[vi]{} and H[i]{} components, we group the O[vi]{} and H[i]{} absorptions into 51 systems. Absorption systems within 5000 km s$^{-1}$ of the quasar emission redshift are excluded from this sample. The O[vi]{} subsamples ---------------------- There are unusual O[vi]{} absorbers with high abundances, $-1<$ \[O/H\] $\lesssim 0$, in previous O[vi]{} surveys ([@berg02]; [@car02]). They have high ionic ratios, N(O[vi]{})/N(H[i]{}) $>$ 0.5, and low H[i]{} column densities, log N(H[i]{}) $<$ 13.0. The survey by Simcoe  (2004) focussed on systems with log N(H[i]{}) $>$ 13.6, which could account for the underrepresentation of highly metal-rich O[vi]{} absorbers in their O[vi]{} sample. Since these intriguing systems are not present in every sightline, a large quasar sample is mandatory for a statistically significant number of metal-rich O[vi]{} absorbers. It is possible to define the class of metal-rich absorbers using observed column density ratios derived from photoionization models since the small line widths of a non-negligible fraction of the O[vi]{} systems imply “low” gas temperatures (see § 3.1). 
Adopting a hard UV background spectrum together with a 0.1 solar metallicity leads to observational identification criteria for the following classes of absorbers:\ - type 1: N(O[vi]{})/N(H[i]{}) $>$ 0.25: metal-rich absorbers,\ - type 0: N(O[vi]{})/N(H[i]{}) $<$ 0.25: metal-poor absorbers. ![Metal column densities of O[vi]{} and C[iv]{} versus H[i]{} column density. The dashed and dotted lines give the locations of systems with N(O[vi]{})/N(H[i]{}) = 0.25 and N(C[iv]{})/N(H[i]{}) = 0.015, respectively.[]{data-label="fig:OCvsH"}](O_C_H_bw.ps){width="10.2cm"} ![H[i]{} and metal absorptions of type 1 O[vi]{} absorbers: a weak H[i]{} system at $z=2.468$ is shown in the left panel, and a strong H[i]{} system at $z=2.398$ is shown in the right panel. The latter is 140 km s$^{-1}$ away from a type 0 absorber. []{data-label="fig:type1"}](type_1s.eps){width="11cm"} ![H[i]{} and metal absorptions of a type 0 O[vi]{} absorber at $z=2.089$ (left panel) and a type 2 O[vi]{} absorber at $z=2.314$ (right panel).[]{data-label="fig:type0"}](types_0_2.eps){width="11cm"} ![H[i]{} and metal absorptions of C[iv]{}-only type 1 absorbers: two low redshift systems at $z=1.727$ and 1.729 (left panel) and one system at $z=2.415$ with O[vi]{} lines fully blended with saturated Lyman lines (right panel).[]{data-label="fig:typeciv"}](type_CIVs.eps){width="11cm"} There are 39 O[vi]{} type 1 components, 12.9 $<$ log N(O[vi]{}) $<$ 14.5, grouped in 14 O[vi]{}-H[i]{} systems. A similar criterion is derived for the C[iv]{} systems and is used to identify C[iv]{}-only metal-rich absorbers (O[vi]{} doublet either outside the observing range, $z<2.0$, or fully blended with saturated Lyman lines):\ - C[iv]{}-only type 1: N(C[iv]{})/N(H[i]{}) $>$ 0.015: metal-rich absorbers.\ There are 18 C[iv]{}-only type 1 components, 11.8 $<$ log N(C[iv]{}) $<$ 13.8, grouped in 8 C[iv]{}-H[i]{} systems.
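For concreteness, the classification rule can be written as a pair of one-line helpers. This is only an illustrative sketch: the function names are ours, and the thresholds are simply the observed ratios quoted above.

```python
# Classify absorbers from log10 column densities using the observed
# ratio criteria quoted in the text:
#   N(OVI)/N(HI) > 0.25  -> type 1 (metal-rich), else type 0 (metal-poor)
#   N(CIV)/N(HI) > 0.015 -> CIV-only type 1 (when OVI is unavailable)

OVI_HI_THRESHOLD = 0.25
CIV_HI_THRESHOLD = 0.015

def classify_ovi(log_n_ovi, log_n_hi):
    """Return 1 (metal-rich) or 0 (metal-poor)."""
    ratio = 10.0 ** (log_n_ovi - log_n_hi)
    return 1 if ratio > OVI_HI_THRESHOLD else 0

def classify_civ_only(log_n_civ, log_n_hi):
    """Same criterion for CIV-only systems."""
    ratio = 10.0 ** (log_n_civ - log_n_hi)
    return 1 if ratio > CIV_HI_THRESHOLD else 0

# A weak-HI, strong-OVI absorber is metal-rich:
print(classify_ovi(13.5, 13.0))   # -> 1
# The same OVI column against a strong HI line is metal-poor:
print(classify_ovi(13.5, 15.0))   # -> 0
```

Note that the criterion depends only on the ratio, which is why type 0 and type 1 absorbers can span the same N(O[vi]{}) range while differing sharply in N(H[i]{}).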
Finally, a few absorbers with O[vi]{} blended with strong Lyman lines, and thus with uncertain values of N(O[vi]{}), are labelled type 2. These different classes of absorbers are shown in figure \[fig:OCvsH\]. About 70% of the O[vi]{}+C[iv]{}-only type 1 absorbers have weak associated H[i]{} lines, log N(H[i]{}) $<$ 13.6, and the type 0 and type 1 O[vi]{} absorbers span roughly the same N(O[vi]{}) range. This demonstrates the importance of searching for O[vi]{} systems whatever the strength of their associated H[i]{} absorption. Examples of type 1 absorbers are given in figure \[fig:type1\]. The proximity in velocity space of a strong Ly-$\alpha$ system to a type 1 absorber is investigated in § 5.1. Examples of types 0 & 2 O[vi]{} absorbers and C[iv]{}-only type 1 absorbers are given in figures \[fig:type0\] and \[fig:typeciv\], respectively. Abundances ========== O[vi]{} line widths ------------------- The histogram of O[vi]{} line widths is shown in figure \[fig:histb\]. There are 81, 39 and 16 O[vi]{} components for the types 0, 1 and 2 absorbers, respectively. The bulk of the $b$ distributions of the types 0 and 1 overlap, but not their high velocity tails. A Kolmogorov-Smirnov test shows that these distributions indeed differ at the 98% confidence level. It should be stressed that among the broader absorbers, $b>16$ km s$^{-1}$, most components are blends of several O[vi]{} lines within a velocity range of a few tens of km s$^{-1}$. Very few individual components are unambiguously broad. Close to half (43%) of the O[vi]{} absorbers have line widths $b<12$ km s$^{-1}$. This confirms the results previously found with smaller O[vi]{} samples. The implied temperatures, $T <1.4\times 10^5$ K, as also found for the non-saturated associated Lyman lines, are too low for O[vi]{} to be produced by collisional ionization even for abundances close to solar.
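The quoted temperature limits follow from the purely thermal Doppler width, $b = \sqrt{2k_{\rm B}T/m}$ with $m$ the mass of an oxygen atom. A quick numerical check (an illustrative sketch; the helper name is ours, constants in SI units):

```python
K_B = 1.380649e-23          # Boltzmann constant [J/K]
M_O = 15.999 * 1.66054e-27  # mass of an oxygen atom [kg]

def temperature_from_b(b_km_s, mass_kg=M_O):
    """Upper-limit temperature implied by a purely thermal line width b."""
    b = b_km_s * 1.0e3      # km/s -> m/s
    return mass_kg * b ** 2 / (2.0 * K_B)

# b < 14 km/s  ->  T < ~1.9e5 K (the "T < 2e5 K" bound quoted in the text)
print(f"{temperature_from_b(14):.2e}")
# b < 12 km/s  ->  T < ~1.4e5 K
print(f"{temperature_from_b(12):.2e}")
```

Since turbulent broadening can only add to the thermal width, these are strict upper limits on the gas temperature.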
We will thus assume that photoionization is the dominant ionization process, but will also consider simple cases where there is additional collisional heating of the gas, possibly through shocks. ![Distribution of the O[vi]{} line widths with the totals outlined.[]{data-label="fig:histb"}](OVI_hist_bw.ps){width="7cm"} Abundances under the assumption of photoionization -------------------------------------------------- Following previous discussions on constraints of the spectral energy distribution of the ionizing background flux (e.g. [@berg02]; [@car02]; [@telf02]), we select a hard UV metagalactic flux ([@haa96]) to derive the gas ionization level. We used the CLOUDY v94.0 code ([@ferl98]) to estimate ionic column density ratios as a function of the ionization parameter, $U \equiv n_{\gamma}/n_{\rm H}$ (the ratio of ionizing photon to hydrogen densities), and assumed solar relative abundances ([@and89]). For each system, the value of $U$ is fixed by the observed ionic ratio N(O[vi]{})/N(C[iv]{}), which is applicable only if O[vi]{} and C[iv]{} are in the same phase. This should be the case for most absorbers as Si[iv]{} is not detected, except in a few systems with large N(H[i]{}) ($>10^{15}$ cm$^{-2}$). The observed range of N(O[vi]{})/N(C[iv]{}) implies $-1.4 \leq $ log $U \leq -0.4$, thus an ionization ratio 0.09 $\leq $ O[vi]{}/O $\leq $ 0.21. The distributions of the derived oxygen abundances are presented in figure \[fig:histO\_H\] for the 31, 14 and 6 O[vi]{} systems of type 0, 1 and 2, respectively. Contrary to the distribution of $b$(O[vi]{}) shown above, there is very little overlap between the \[O/H\] distributions of the types 0 and 1 populations. This further suggests that they are indeed two distinct populations. The type 2 \[O/H\] distribution spans a small range in between those of the other two populations. ![Distribution of the oxygen abundance in the photoionization case.
The \[O/H\] median values for the types 0, 1 and 2 populations are $-$2.07, $-$0.33 and $-$1.56, respectively.[]{data-label="fig:histO_H"}](OH_hist_bw.ps){width="7cm"} ![Distributions of the oxygen abundance for the type 1 population under various ionization conditions: (1) pure photoionization case (top panel), (2) constant gas density (middle panel), (3) photoionization plus temperature fixed by $b$(O[vi]{}) (bottom panel). The \[O/H\] median values for the cases 1, 2 and 3 are $-$0.33, $-$0.80 and $-$0.35, respectively.[]{data-label="fig:histO_Hall"}](OH_hist_1_bw.ps){width="6cm"} Abundances under the assumption of other ionization processes ------------------------------------------------------------- To confirm the difference in metallicity between the type 0 (population tracing the IGM) and type 1 (population tracing highly metal-enriched sites) absorbers, we investigate whether other ionization conditions could yield much lower abundances for the type 1 population. First, we consider a gas phase of constant density, $\rho$, thus a single value of the ionization parameter. We select an overdensity $\delta\equiv(\rho/\overline{\rho}) \approx 10$ at $z \approx 2.2$ which is within the range of values found in the previous O[vi]{} surveys. This yields log $U=-0.5$, thus an ionization ratio O[vi]{}/O = 0.16. In a large fraction of these cases, O[vi]{} and C[iv]{} do not trace the same phase. Secondly, we reconsider photoionization by a hard UV background flux but now with the gas temperature derived from the $b$ value of the main O[vi]{} component of each system. This is to account for possible additional shock heating. The value of $U$ is still derived from the observed N(O[vi]{})/N(C[iv]{}) ionic ratio. There is no solution for absorbers with $T >2.0\times 10^5$ K, or $b>14$ km s$^{-1}$, implying that at these higher temperatures O[vi]{} and C[iv]{} are not co-spatial.
However, for the type 1 systems, the main O[vi]{} component is always narrower than 14 km s$^{-1}$ except in the case of one O[vi]{} doublet blended with Lyman lines. Together with the case of pure photoionization, the distributions of the oxygen abundances estimated in the above two cases are shown in figure \[fig:histO\_Hall\]. Although the values of \[O/H\] are somewhat lower under the new ionization conditions, they remain far higher than those of the type 0 population. This confirms that the types 0 and 1 O[vi]{} absorbers trace markedly different populations. Contribution of the O[vi]{} absorbers to the cosmic baryon density ================================================================== Column density distribution --------------------------- The column density distribution, $f$(N), of O[vi]{} absorbers per unit redshift path per unit column density can be written: $$f{\rm (N)} = \{ n/(\Delta {\rm N} \sum \Delta X) \}, \label{f}$$ where $n$ is the number of O[vi]{} absorbers in a column density bin $\Delta {\rm N}$ centered on N for a sample of quasars with total redshift path $\sum \Delta X$. For our adopted cosmology, the redshift path is defined as: $${\rm d}X \equiv (1+z)^2 \{\Omega_{\rm \Lambda}+\Omega_{\rm m}(1+z)^3\}^{-0.5} \ {\rm d}z , \label{X}$$ $${\rm or} \ {\rm d}X/{\rm d}z \cong \{(1+z)/0.3\}^{0.5} \ {\rm when} \ z>1.$$ The O[vi]{} column density distribution is shown in figure \[fig:Ndistr\]. It can be seen that the present data become incomplete below a column density of $\sim 1\times 10^{13}$ cm$^{-2}$ and that sample variance may be important at column densities larger than $\sim 2\times 10^{14}$ cm$^{-2}$. In between these limits, a power-law fit ($f$(N) $\propto$ N$^{-\alpha}$) gives $\alpha \simeq 1.7$. To estimate the uncertainty in $\alpha$, we shift the $\Delta$N bins by 0.1 dex and derive new power-law fits. 
This yields $\alpha = 1.71 \pm ^{0.48}_{0.47}$ and a normalization of $f$(N) $ = 2.3\times 10^{-13}$ at log N(O[vi]{}) = 13.5, with a $\sim$30% uncertainty. The value of the power-law index is similar to that obtained from C[iv]{} samples, $\alpha$(C[iv]{}) $\simeq 1.8$, such as those drawn from the [*VLT-UVES*]{} Large Programme at $\overline{z}$(C[iv]{}) = 2.16 ([@scann05]) or from [*Keck-HIRES*]{} data at higher redshift ([@song01]). The power-law fit of the latter (corrected for the different adopted cosmological parameters) is also shown in figure \[fig:Ndistr\]. At N(O[vi]{}) = N(C[iv]{}) = $10^{13.5}$ cm$^{-2}$, the value of $f$(N) for O[vi]{} absorbers is nearly a factor of ten larger than that for C[iv]{} absorbers. ![Column density distribution of O[vi]{} absorbers. The dashed line is the fit to our data in the column density range 13.0 $<$ log N(O[vi]{}) $<$ 14.3 (see text). The dotted line is the fit to the column density distribution of C[iv]{} absorbers given by Songaila (2001).[]{data-label="fig:Ndistr"}](f_NOVI_Song.ps){width="8cm"} Number density -------------- We use the power-law fit to $f$(N) for the O[vi]{} population to estimate the number density per unit $z$ of O[vi]{} absorbers: $$dn/dz = (dX/dz) \int f({\rm N) dN}.$$ We select conservative N(O[vi]{}) limits of $10^{13}$ and $10^{15}$ cm$^{-2}$, a range that is not drastically affected by incompleteness or sample variance. Adopting the fit with $\alpha$ = 1.71, we then get $dn/dz = 74$ at $\overline{z}=2.3$. Taking into account the range of possible values of the power-law index and normalization factor of $f$(N), we obtain $66 < dn/dz < 106$. At low redshift, $\overline{z}=0.1$, surveys with [*FUSE*]{} and [*HST*]{} give $dn/dz \approx 13 $ for a rest-equivalent width limit of $w_{\rm r, min}=50$ mÅ, or log N(O[vi]{})=13.60 in the optically thin case (see [@sem04]). 
For this column density limit, we get $dn/dz \approx 26$ at $\overline{z}=2.3$, whereas we expect a somewhat higher value, $dn/dz \approx 36$, in the case of an unevolving O[vi]{} population. However, comparison between the values of $dn/dz$ at $\overline{z}=0.1$ and 2.3 is not straightforward as O[vi]{} absorbers may trace different populations at low and high redshift. Cosmic density of [O]{}[vi]{} absorbers --------------------------------------- The O[vi]{} cosmic density can be expressed as a mass fraction relative to the critical density, $\rho_{crit}$. It can be estimated either from the individual, observed O[vi]{} column densities or using the power-law fit to $f$(N) of O[vi]{} absorbers to correct for incompleteness. ### Observed [O]{}[vi]{} cosmic density The mean cosmic density of a given ion can be expressed as: $$\Omega_{\rm b, ion} = \{ H_0 m_{\rm ion}/c \rho_{crit} \} \{\sum {\rm N_{\rm ion}}/\sum \Delta X \} = 2.20\times 10^{-22} \{\sum {\rm N_{\rm ion}}/\sum \Delta X \}, \label{Omega-obs}$$ where $H_0$ is the Hubble constant, $m_{\rm ion}$ and $\sum {\rm N_{\rm ion}}$ the atomic mass and the sum of the column densities of the given ion, respectively, and $\sum \Delta X$ the total redshift path. For our O[vi]{} sample, we obtain $\Omega_{\rm b}$(O[vi]{})$ = 1.51 \times 10^{-7}$, a value higher than previous estimates by a factor 1.3 ([@sim04]: sample restricted to O[vi]{} systems with strong, associated H[i]{} absorption \[see § 2.1\]) and 1.8 ([@car02]: two sightlines, none with very strong O[vi]{} absorbers). The contribution of the O[vi]{} type 1 population to $\Omega_{\rm b}$(O[vi]{}) is 35%. 
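The number-density estimates of § 4.2, including the extrapolation down to the low-redshift column density limit, can be reproduced analytically from the power-law fit. The sketch below assumes the fit parameters quoted in § 4.1 ($\alpha = 1.71$, $f$(N) $= 2.3\times10^{-13}$ at log N(O[vi]{}) = 13.5) and the approximate relation $dX/dz \cong \{(1+z)/0.3\}^{0.5}$; the function names are ours:

```python
# Number density of OVI absorbers per unit z from the power-law fit
# f(N) = F0 * (N/N0)^(-ALPHA), integrated analytically.
ALPHA = 1.71          # power-law index of f(N)
F0 = 2.3e-13          # normalization of f(N) at N0
N0 = 10 ** 13.5       # reference column density [cm^-2]

def integral_f(n_min, n_max):
    """Analytic integral of f(N) dN between the two column density limits."""
    p = 1.0 - ALPHA
    return F0 * N0 ** ALPHA * (n_max ** p - n_min ** p) / p

def dn_dz(z, n_min=1e13, n_max=1e15):
    dx_dz = ((1.0 + z) / 0.3) ** 0.5   # approximation valid for z > 1
    return dx_dz * integral_f(n_min, n_max)

print(round(dn_dz(2.3)))                       # -> 74, as quoted
print(round(dn_dz(2.3, n_min=10 ** 13.6)))     # -> 26, the value compared
                                               #    with low-z surveys
```

The strong sensitivity of the result to the lower integration limit (74 versus 26) is a direct consequence of the steep power-law index.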
### [O]{}[vi]{} cosmic density corrected for incompleteness The mean cosmic density of O[vi]{} ions can also be written as: $$\Omega_{\rm b} = 2.20 \times 10^{-22} \int {\rm N} f({\rm N) dN} \label{Omega-f}$$ Using our fit with $\alpha$ = 1.71 and the same N(O[vi]{}) limits as in § 4.2, we get $\Omega_{\rm b}$(O[vi]{}) $\approx 3.5 \times 10^{-7}$, thus an incompleteness correction factor of 2.3 at $\overline{z}=2.3$. The uncertainty in the power-law fit of $f$(N) leads to values in the range $2.6\times10^{-7} < \Omega_{\rm b}$(O[vi]{}) $< 6.7\times10^{-7}$. To estimate the mean cosmic density of oxygen, $\Omega_{\rm b}$(O), we use the O[vi]{} mean ionization level obtained in the pure photoionization case, $\langle$O[vi]{}/O$\rangle$ = 0.15 (see § 3.2). Under the other ionization conditions investigated in § 3.3, this ratio is either similar or smaller. We then get $\Omega_{\rm b}$(O) $\approx (2.3\pm ^{2.1}_{0.6}) \times 10^{-6}$. Using the solar oxygen abundance given by Anders & Grevesse (1989) yields:\ log ($\Omega_{\rm b}$(O)/$\Omega_{\rm b}$(O)$_{\odot}) = -2.22$. This result demands attention for the two following reasons: (1) the above value is close to the median of \[O/H\] found for the O[vi]{} type 0 absorbers (IGM), \[O/H\] = $-$2.07, but well below that of the O[vi]{} type 1 absorbers (metal-enriched sites), \[O/H\] = $-$0.33 (see figures \[fig:histO\_H\] and \[fig:histO\_Hall\]), and (2) it is smaller than the mean metal enrichment of the IGM by star-forming galaxies at $z \sim 2.5$ (Z$^{\rm SF}$) by a factor of about 3.7 and 6.6 when adopting the values of Z$^{\rm SF} \approx$ 1/45 and 1/25 solar as given by Ferrara (2005) and Pettini (1999), respectively. Consequently, there is still a shortfall of observed metals as compared to those produced by LBGs, but about a factor three smaller than previously thought.
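The incompleteness-corrected density can be checked the same way by integrating ${\rm N}\,f({\rm N})$ analytically. The sketch below assumes the same fit parameters as in § 4.1 and the prefactor $2.20\times10^{-22}$ quoted above; the function name is ours:

```python
# Omega_b(OVI) = 2.20e-22 * Integral N f(N) dN, with
# f(N) = F0 * (N/N0)^(-ALPHA), evaluated analytically.
ALPHA = 1.71
F0 = 2.3e-13           # f(N) at N0
N0 = 10 ** 13.5        # cm^-2
PREFACTOR = 2.20e-22   # H0 * m_OVI / (c * rho_crit)

def omega_ovi(n_min=1e13, n_max=1e15):
    p = 2.0 - ALPHA
    integral = F0 * N0 ** ALPHA * (n_max ** p - n_min ** p) / p
    return PREFACTOR * integral

omega = omega_ovi()
print(f"{omega:.1e}")          # -> ~3.5e-7, the corrected Omega_b(OVI)
# Dividing by the mean ionic fraction <OVI/O> = 0.15 gives Omega_b(O):
print(f"{omega / 0.15:.1e}")   # -> ~2.3e-6
```

Comparing this with the directly observed value ($1.51\times10^{-7}$) recovers the incompleteness correction factor of $\approx 2.3$.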
Our sample contains very few cases of unambiguously broad O[vi]{} doublets ($b>16$ km s$^{-1}$) which could trace hotter parts ($T>2.5\times10^5$ K) of metal-rich sites. However, if most of the gas in these sites is at even higher temperatures, oxygen will then mainly be in the form of O[vii]{} and O[viii]{} ions and not detectable with present-day X-ray satellites. Origin of the O[vi]{} absorbers =============================== Nearest strong [H]{}[i]{} absorption system ------------------------------------------- From a pixel analysis of [*VLT-UVES*]{} Large Programme quasar spectra, Aracil (2004) found that weak O[vi]{} absorption associated with weak H[i]{} absorption (0.2 $< \tau$(Ly-$\alpha$) $<$ 1 or 12.9 $<$ log N(H[i]{}) $<$ 13.6 for $b$(H[i]{}) = 30 km s$^{-1}$) is predominantly detected in the vicinity ($\Delta v \leq$ 300 km s$^{-1}$) of strong H[i]{} absorption ($\tau$(Ly-$\alpha$) $>$ 4). These authors suggested that the O[vi]{} absorption arising in regions spatially close to strong Ly-$\alpha$ absorption may be part of outflows from overdense regions. ![Distribution of the velocity difference between type 1 systems and the nearest strong Ly-$\alpha$ system, with the totals outlined.[]{data-label="fig:dv"}](dv_hist_bwnew.ps){width="8cm"} The O[vi]{} type 1 population from our study should exhibit the same property as the weak O[vi]{} absorptions analyzed with the pixel analysis method, since there is an overlap in their N(H[i]{}) range. The distribution of $\Delta v$ between O[vi]{} or C[iv]{}-only type 1 systems and the nearest strong Ly-$\alpha$ system ($\tau$(Ly-$\alpha) > 4$) is presented in figure \[fig:dv\]. For the few cases of an O[vi]{} doublet associated with a saturated Ly-$\alpha$ line, $\Delta v$ is zero. It should be noted that all C[iv]{}-only type 1 systems have unsaturated, associated Ly-$\alpha$ lines.
Among the O[vi]{} and C[iv]{}-only type 1 systems, 64% and 63%, respectively, have a strong Ly-$\alpha$ system at $\Delta v <$ 450 km s$^{-1}$. Results from both the pixel analysis method of weak O[vi]{} systems (log $\tau$(O[vi]{}) $\sim -1.35$) and the study of individual O[vi]{} absorbers suggest a link to gas outflows. Gas density of the O[vi]{} absorbers ------------------------------------ The gas overdensity of the O[vi]{} absorbers is estimated for two cases: photoionization by a hard UV metagalactic flux and hydrostatic equilibrium ([@scha01]). In the photoionization case $U$ is fixed by the O[vi]{}/C[iv]{} ionic ratio, assuming a relative O/C solar abundance. In the range $2.0<z<2.5$, the adopted hydrogen photoionization rate is $\Gamma$(H[i]{}) $ \approx 1.5 \times 10^{-12}$ s$^{-1}$. Using the mean baryon density at each $z$(O[vi]{}), we get: $$\delta (U) \approx 4.0 \ U^{-1} ([1+z]/3)^{-3}.$$ The results are shown in figure \[fig:rhoU\] for the different types of O[vi]{} absorbers. The median values of $\delta(U)$ for the type 0 (metal-poor) and type 1 (metal-rich) populations are equal, $\delta (U) = 22$, and that of the type 2 population is $\sim$40% smaller. A Kolmogorov-Smirnov test shows that the types 0 and 1 populations have the same $\delta(U)$ distribution at the 97% confidence level. For the hydrostatic equilibrium case ([@scha01]: equation (8)), we assume a gas temperature $T = 4\times 10^4$ K and the same photoionization rate as above. This gives: $$\delta (G) = 3.7 \times 10 ^{-9} \ {\rm N(H\,I)}^{2/3}([1+z]/3)^{-3}.$$ The results are presented in figure \[fig:rhoG\]. Contrary to the photoionization case, there is a marked difference between the types 0 and 1 populations. The median values of $\delta(G)$ are 41 and 4.6 for the type 0 and type 1 absorbers, respectively, and that for the type 2 absorbers is 8.3.
Moreover, the values of $\delta(G)$ for $\sim$80% of the type 1 population (log $\delta(G)<1.1$) do not overlap with those obtained for the type 0 population. ![Distribution of the O[vi]{} absorber overdensity in the photoionization case.[]{data-label="fig:rhoU"}](rhoU_hist_bw.ps){width="6cm"} ![Distribution of the O[vi]{} absorber overdensity in the hydrostatic equilibrium case.[]{data-label="fig:rhoG"}](rhoG_hist_bwOK.ps){width="6cm"} We now compare in figure \[fig:rhoUG\] the values of $\delta(U)$ and $\delta(G)$ to check the validity of the assumption of hydrostatic equilibrium. For the type 0 absorbers, a Spearman rank correlation test shows that $\delta(G)$ and $\delta(U)$ are correlated at a $>$99% confidence level. The mean value of their $\delta(G)/\delta(U)$ overdensity ratio is close to 2.0. It cannot be substantially decreased as its dependence on $\Gamma$(H[i]{}), mass fraction in gas and $T$ is small ($\delta(G)/\delta(U) \propto T^{0.17} \Gamma$(H[i]{})$^{-1/3}$ $(\Omega_{\rm b}/\Omega_{\rm m})^{-1/3}$). Therefore, this departure of $\delta(G)/\delta(U)$ from unity may suggest that a fraction of the observed H[i]{} is not in the O[vi]{} phase. Nevertheless, the correlation between $\delta(G)$ and $\delta(U)$ suggests that the O[vi]{} type 0 population is roughly in hydrostatic equilibrium. For the type 1 absorbers, $\delta(G)$ and $\delta(U)$ are totally uncorrelated which implies that hydrostatic equilibrium is not a valid assumption for this population: low H[i]{} column density absorbers do not trace low density regions of the IGM. This further supports the conclusion that the metal-rich and metal-poor absorbers trace different populations.
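The two overdensity estimates compared here can be evaluated directly from the two expressions in § 5.2; an illustrative sketch (helper names ours, example inputs chosen only to show the qualitative behaviour):

```python
def delta_u(u, z):
    """Overdensity from photoionization: delta(U) ~ 4.0 / U / ((1+z)/3)^3."""
    return 4.0 / u * ((1.0 + z) / 3.0) ** (-3)

def delta_g(log_n_hi, z):
    """Overdensity under hydrostatic equilibrium (Schaye 2001 scaling)."""
    return 3.7e-9 * (10.0 ** log_n_hi) ** (2.0 / 3.0) * ((1.0 + z) / 3.0) ** (-3)

# A typical absorber with log U = -0.5 at z = 2.2:
print(round(delta_u(10 ** -0.5, 2.2), 1))               # -> 10.4

# delta(G) depends only on N(HI): a strong HI system (log N = 15)
# versus a weak one (log N = 13) at z = 2.3:
print(round(delta_g(15.0, 2.3), 1), round(delta_g(13.0, 2.3), 1))
```

Because $\delta(G)$ scales with N(H[i]{})$^{2/3}$ while $\delta(U)$ does not, metal-rich absorbers with weak H[i]{} are forced to implausibly low $\delta(G)$, which is why the two estimators disagree for the type 1 population.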
![Comparison of the O[vi]{} absorber overdensities in the photoionization versus hydrostatic equilibrium cases.[]{data-label="fig:rhoUG"}](rho_ug_bwOK.ps){width="9cm"} Conclusions and prospects {#sec:concl} ============================ Our large [*VLT-UVES*]{} sample of 136 O[vi]{} absorbers at $\overline{z}=2.28$ towards ten quasars enables a study of the highly ionized phase of the IGM, in particular its metal enrichment and contribution to the cosmic baryon density. Previous O[vi]{} studies at high $z$ have uncovered a few systems with high \[O/H\] ($>-1.0$) abundances, motivating us to fully investigate this class of metal-rich absorbers. Because those systems already identified have low H[i]{} column densities ([@berg02]; [@car02]), our sample includes all detected O[vi]{} systems whatever the strength of their associated H[i]{} absorption. In contrast, the survey by Simcoe (2004) only includes systems with N(H[i]{}) $> 10^{13.6}$ cm$^{-2}$. We restrict our sample to absorbers with both lines of the O[vi]{} doublet clearly detected or, if partially blended, enough unambiguous structure to allow for deblending from Lyman lines. Since nearly half of the O[vi]{} absorbers have small line widths, $b<12$ km s$^{-1}$ or $T <1.4\times 10^5$ K, photoionization must be the dominant ionization process. We thus introduce an observational identification criterion to separate the classes of metal-poor (type 0) and metal-rich (type 1) absorbers. Selecting a hard UV background flux (see [@berg02]; [@car02]; [@telf02]) and assuming a 0.1 solar metallicity yields a column density ratio N(O[vi]{})/N(H[i]{}) $< 0.25$ and $> 0.25$ for the type 0 and type 1 populations, respectively. The bulk of the $b$ distributions of these two O[vi]{} populations are similar except at the highest velocities. However, we stress that very few individual components are unambiguously broad, a result of blending for complex, multiple systems and limited S/N ($\sim 30$-40) in the O[vi]{} range.
The cosmic oxygen abundance is derived under the assumptions of photoionization, coexistence of O[vi]{} and C[iv]{} in the same phase, and a solar O/C relative abundance. The overall \[O/H\] distribution is clearly bimodal with median values of \[O/H\] equal to $-2.05$ and $-0.33$ for the type 0 and type 1 populations, respectively. This is not a consequence of a strong difference in ionization levels between the two types of O[vi]{} absorbers. All of the type 1 O[vi]{} systems and all but two of the type 0 systems have associated C[iv]{} absorption (detection limit of N(C[iv]{}) $\approx 1 \times 10^{12}$ cm$^{-2}$). Their N(O[vi]{})/N(C[iv]{}) distributions, which cover about two orders of magnitude, are similar with median values both close to 10. A high metallicity (median \[O/H\] $> -1.0$) for the type 1 population is still found under different ionization conditions: photoionization together with either a gas phase of constant density (overdensity $\delta = 10$ at $z = 2.2$) or a temperature fixed by the line width of the main O[vi]{} component of each system (always $<14$ km s$^{-1}$). The N(N[v]{})/N(O[vi]{}) ratio cannot be used to constrain the ionization level of the O[vi]{} phase because the N/O relative abundance departs from the solar value. In most O[vi]{} absorbers, the N[v]{} doublet is weak or undetected and the nitrogen abundance relative to oxygen is usually well below solar ([@berg02]). For a very few O[vi]{} systems, the strength of the N[v]{} absorption is similar to those of C[iv]{} and O[vi]{} and the nitrogen abundance may be enhanced relative to that of oxygen ([@car02]), as also observed in quasar associated systems (e.g. [@ham00]). In our type 1 O[vi]{} sample, associated N[v]{} absorption is either absent or very weak, except in one case already reported by Bergeron  (2002). 
This absorber at $z=2.352$ in Q 0329$-$385 was labelled as “intrinsic” by these authors because its properties are typical of those of associated systems, even though it is at 6200 km s$^{-1}$ from the quasar emission redshift. Our O[vi]{} sample is large enough to derive the first estimate of the O[vi]{} column density distribution, although incompleteness becomes evident at N(O[vi]{}) $\lesssim 1 \times 10^{13}$ cm$^{-2}$ and sample variance may be important at N(O[vi]{}) $\gtrsim 2 \times 10^{14}$ cm$^{-2}$. A power-law fit, $f$(N) $\propto$ N$^{-\alpha}$, yields $\alpha \approx 1.7 \pm 0.5$, a value similar to that found for C[iv]{} samples, $\alpha$(C[iv]{}) $\simeq 1.8$ ([@song01]; [@scann05]). In contrast, the normalization factor, $f$(N) $ = 2.3\times 10^{-13}$ (with an uncertainty of $\sim$ 30%) at N(O[vi]{}) = $10^{13.5}$ cm$^{-2}$, is about ten times larger than that of C[iv]{} absorbers. We aim to better constrain $f$(N), particularly at large O[vi]{} column densities, by analyzing a larger number of sightlines in a future paper. There we will also include blended O[vi]{} components associated with C[iv]{} absorption. We use the fit to $f$(N) in a conservative N(O[vi]{}) range, $10^{13}$-$10^{15}$ cm$^{-2}$, to estimate the number density per unit $z$ of O[vi]{} absorbers as well as their cosmic density. This is a first step for correcting $dn/dz$ and $\Omega_{\rm b}$(O[vi]{}) from incompleteness and sample variance. We find $dn/dz = 74 \pm ^{32} _{8}$. Selecting an integration range as wide as is usually adopted for C[iv]{} ($10^{12}$-$10^{16}$ cm$^{-2}$) would yield a larger value of $dn/dz$ (and of $\Omega_{\rm b}$(O[vi]{})) but the uncertainty on the result would then be far too large. In the case of an unevolving population and a column density lower limit equal to that of low $\overline{z} = 0.1$ O[vi]{} surveys, that is N(O[vi]{})$_{\rm min} = 10^{13.6}$ cm$^{-2}$ or $w_{\rm r, min}=50$ mÅ  (see e.g. 
[@sem04]), the expected value of $dn/dz$ at $\overline{z} = 2.3$ derived from the low $z$ samples is 36 (assuming a 0.1 solar metallicity and an ionic fraction O[vi]{}/O = 0.2), whereas that obtained from our $f$(N) distribution equals 26. The suggested decline of $dn/dz$ with $z$ is not straightforward to interpret as the O[vi]{} absorbers may trace different populations at low and high $z$. The O[vi]{} cosmic density estimated from the individual, observed column densities is $\Omega_{\rm b}$(O[vi]{})$\approx 1.5\times 10^{-7}$, i.e. higher than previous estimates by a factor 1.3 ([@sim04]) and 1.8 ([@car02]). This increase is due to the high contribution (35%) of the type 1 population to $\Omega_{\rm b}$(O[vi]{}). The value derived from the $f$(N) distribution is 2.3 times larger: $\Omega_{\rm b}$(O[vi]{})$\approx (3.5\pm ^{3.2}_{0.9}) \times 10^{-7}$. This illustrates the effects of incompleteness and sample variance in our sample, even within the conservative N(O[vi]{}) range adopted. To get the element cosmic density, we use the mean ionic fraction obtained in the pure photoionization case, $\langle$O[vi]{}/O$\rangle$ = 0.15, which yields $\Omega_{\rm b}$(O) $\approx (2.3\pm ^{2.1}_{0.6}) \times 10^{-6}$. Adopting the solar oxygen abundance given by Anders & Grevesse (1989), we get log $\big(\Omega_{\rm b}$(O)/$\Omega_{\rm b}$(O)$_{\odot}\big) = -2.22$. This value is well below that of the metal-rich population and also smaller than the metal enrichment of the IGM expected from high $z$ star-forming galaxies, $\langle$\[O/H\]$\rangle \sim -1.40$ ([@pett99]) or $-1.65$ ([@ferr05]). Although the problem of missing metals at high $z$ (where previously an order of magnitude disparity was measured) is now less severe as a result of our O[vi]{} survey, there remains a shortfall of observed metals by about a factor of four as compared to those produced by star-forming galaxies. Other properties of the type 1 O[vi]{} absorbers suggest a tight link to galactic halos. 
This population is predominantly detected in the vicinity ($\Delta v <$ 450 km s$^{-1}$) of strong H[i]{} systems ($\tau$(Ly-$\alpha$) $>$ 4). This is also the case for C[iv]{}-only metal-rich absorbers (O[vi]{} doublet either outside the observing range, $z<2.0$, or fully blended with saturated Lyman lines). In the photoionization case, the type 0 and type 1 O[vi]{} absorbers have the same gas overdensity distribution, with a median value $\delta (U) = 22$, but under the assumption of hydrostatic equilibrium the gas overdensity, $\delta (G)$, distributions of these two populations barely overlap. Moreover, the values of $\delta (U)$ and $\delta (G)$ are totally uncorrelated for the metal-rich population, whereas they are well correlated for the metal-poor population. Consequently, the assumption of hydrostatic equilibrium is not valid for the metal-rich O[vi]{} population: these absorbers do not trace low density regions of the IGM but rather gas outflows in the vicinity of active star-formation sites. If most of the gas in the metal-rich sites is at high temperature ($T>5 \times 10^5$ K), as suggested by Pettini (1999), oxygen will mainly be in the form of O[vii]{} and O[viii]{} ions and their signatures in the very soft X-ray range are not detectable with present-day X-ray satellites. For a phase at lower temperatures, $2\times 10^5< T< 5 \times 10^5$ K, the O[vi]{} and H[i]{} species, but not C[iv]{} (ionic fraction C[iv]{}/C $< 10^{-2}$), should be detectable. We have begun to search for these absorbers with broad ($b > 50$ km s$^{-1}$), weak Ly-$\alpha$ lines associated with semi-broad, weak O[vi]{} doublets ($b > 15$ km s$^{-1}$). This is coupled to a statistical analysis of the Ly-$\alpha$ forest in simulated spectra (in progress). We also plan to acquire deep, multi-band images of the quasar fields with several metal-rich O[vi]{} absorbers. 
If this population does indeed trace hot galactic halos, we expect to find a strong correlation with star-forming galaxies. Using these images together with spectroscopic follow-up of the associated galaxies may help clarify the ejection mechanism(s) responsible for the metal-pollution of galactic halos and the surrounding IGM. S. Herbert-Fort is supported by the EU under the Marie Curie Early Stage Training programme EARA-EST.

1989, *Geochim. Cosmochim. Acta* [53]{}, 197

2004, *A&A* [419]{}, 811

2002, *A&A* [396]{}, L11

2005, *MNRAS* [359]{}, 1216

2002, *ApJ* [578]{}, 43

1999, *ApJ* [514]{}, 1

[Davé, R., Cen, R., Ostriker, J.P., Bryan, G.L., Hernquist, L., Katz, N., Weinberg, D.H., Norman, M.L. & O’Shea, B.]{} 2001, *ApJ* [552]{}, 473

1998, *PASP* [110]{}, 761

2005, preprint

1998, *ApJ* [503]{}, 518

1996, *ApJ* [461]{}, 20

2000, *ApJ* [536]{}, 101

2005a, astro-ph/0501126

2005b, *Nature* [433]{}, 495

astro-ph/9902173

2004, *ApJS* [153]{}, 165

2002, *ApJ* [564]{}, 631

2001, *ApJ* [559]{}, 507

2005, astro-ph/0503001

2004, *ApJS* [155]{}, 351

2002, *ApJ* [578]{}, 737

2004, *ApJ* [606]{}, 115

2001, *ApJ* [561]{}, L153

2002, *ApJ* [579]{}, 500
--- author: - 'Pablo Arnalte-Mur' - Antoine Labatie - Nicolas Clerc - 'Vicent J. Martínez' - 'Jean-Luc Starck' - 'Marc Lachièze-Rey' - Enn Saar - Silvestre Paredes bibliography: - 'ArnalteMur\_baolet.bib' date: 'Received XXX; accepted YYY' title: Wavelet analysis of baryon acoustic structures in the galaxy distribution --- [Baryon Acoustic Oscillations (BAO) are a feature imprinted in the density field by acoustic waves travelling in the plasma of the early universe. Their fixed scale can be used as a standard ruler to study the geometry of the universe.]{} [BAO have previously been detected using correlation functions and power spectra of the galaxy distribution. In this work, we present a new method for the detection of the real-space structures associated with this feature. These baryon acoustic structures are spherical shells with a relatively small density contrast, surrounding high-density central regions.]{} [We design a specific wavelet adapted to the search for shells, and exploit the physics of the process by making use of two different mass tracers, introducing a specific statistic to detect the BAO features. We show the effect of the BAO signal in this new statistic when applied to the $\Lambda$ Cold Dark Matter ($\Lambda$CDM) model, using an analytical approximation to the transfer function. We confirm the reliability and stability of our method by using cosmological $N$-body simulations from the MareNostrum Institut de Ciències de l’Espai (MICE).]{} [We apply our method to the detection of BAO in a galaxy sample drawn from the Sloan Digital Sky Survey (SDSS). We use the ‘Main’ catalogue to trace the shells, and the Luminous Red Galaxies (LRG) as tracers of the high-density central regions. Using this new method, we detect, with high significance, that the LRGs in our sample are preferentially located close to the centres of shell-like structures in the density field, with characteristics similar to those expected from BAOs.
We show that, by stacking selected shells, we can recover their characteristic density profile.]{} [We have delineated a new feature of the cosmic web, the BAO shells. As these are real spatial structures, the BAO phenomenon can be studied in detail by examining those shells.]{} Introduction {#sec:intro} ============ Before recombination, the energy of photons is high enough to prevent the formation of neutral hydrogen atoms. This means that baryons and photons are coupled through Compton scattering and electromagnetic interaction between protons and electrons, forming a plasma. In this fluid two phenomena act in opposite directions: gravitational forces tend to compress the plasma around high density regions, while radiation pressure tends to dilute any such over-density. The combination of the two, in the presence of any initial inhomogeneity, gives rise to acoustic waves propagating in the baryon-photon plasma. This phenomenon ends abruptly at the epoch of recombination, when the temperature drops enough to allow hydrogen atoms to form, and therefore radiation decouples from the baryons. Baryon acoustic oscillations (BAO) are therefore due to the propagation of these sound waves in the baryon-photon plasma in the early universe [@pee70a; @hu97a; @eis98a; @bas09a]. Any primordial over-density in the early universe produces a spherical acoustic wave in the baryon-photon plasma, travelling outwards: the radiation pressure drags the baryons that are coupled to the photons, and compensates the gravitational force that pulls all matter towards the centre. Dark matter, however, is totally decoupled from the photons, and therefore its density at the centre continues growing. About $380,000$ years after the Big Bang, the temperature drops enough that photons and baryons decouple, and the scale of the baryon shells freezes. After this time, both the central over-density and the shell grow gravitationally, accreting both dark matter and baryons.
The result at late times is a large over-density at the position of the original perturbation, surrounded by a faint spherical shell at a fixed co-moving scale [@eis07a]. The BAO scale is fixed by the sound horizon at decoupling: it is the distance that the expanding acoustic shells can travel before decoupling. It has been accurately measured by the study of the anisotropies in the Cosmic Microwave Background (CMB) to be [@kom08a] $r_s = 153.3 \pm 2.0 \, \mathrm{Mpc} = 110.4 \pm 1.4 {\, h^{-1}\, \mathrm{Mpc}}$ (where we take $h = 0.72$, [@fre01a])[^1]. Therefore, this scale, once measured, could be used as a standard ruler to measure the Hubble expansion rate with redshift $H(z)$ and the angular diameter distance $D_A(z)$ [@coo01a; @bla03c; @seo05a]. The BAO should appear as a series of damped wiggles in the matter power spectrum, with the locations of the peaks and troughs in $k$-space being a function of $r_s$ and other cosmological parameters [@eis98a]. All the harmonics sum up to the same peak in the galaxy correlation function $\xi(r)$ at the scale $r_s$, and therefore it could seem more appropriate to use this statistic for the detection of the BAO feature in the available galaxy redshift surveys encompassing large volumes of the universe [@san08b]. The first detection (claiming a $\sim 3\sigma$ level) was reported in the analysis of the correlation function [@eis05a] of the Sloan Digital Sky Survey (SDSS) [@yor00a] Luminous Red Galaxies (LRG) sample [@eis01a], and later in the power spectrum [@col05a] of the 2-degree Field Galaxy Redshift Survey (2dFGRS) [@col01a]. This is, however, a controversial topic. @cab10a do not find such a level of detection using a data set twice as large in volume and number of galaxies. They do not claim this result to be in contradiction with the standard $\Lambda$CDM model, but rather a consequence of insufficient data.
One of the arguments in @cab10a is the fact that mixing model selection with parameter determination can lead to some confusion in the interpretation of the results and their significance. Different authors use different criteria to assess the significance of their BAO detection. For example, when [@eis05a] affirm that the baryon signature was detected at 3.4 $\sigma$ (or at 3.0 $\sigma$ when including only data points between 60 and 180 $h^{-1}$ Mpc), they are comparing their measurement of the SDSS-LRG correlation function with the expectations for the best-fit pure CDM model and for different BAO models. The best BAO detection up to now [@per10a] was obtained by studying the combined power spectrum of the LRG and ‘Main’ [@str02a] samples of SDSS, together with the 2dFGRS sample, and is at the $\sim 3.6\sigma$ level. The authors explicitly state that since this number is obtained by comparison with an arbitrary smooth model, the significance cannot be directly compared with the one reported in [@eis05a]. This is a clear example of different authors using different ways to assess the significance of their results, which in practice are not comparable. @hut06a calculated the redshift-space power spectrum of the SDSS-LRG sample drawn from the Data Release 4. He concludes that BAO models are favored by $3.3 \sigma$ over the corresponding models without any oscillatory behavior in the power spectrum. @per07a detected BAOs in the clustering of the combined 2dFGRS and SDSS main galaxy samples, and used their measurements to constrain cosmological models, in particular a given combination of the angular diameter distance $D_A(z)$ and the Hubble parameter $H(z)$. @cab08a [@cab08b] studied the LRG anisotropic redshift-space correlation function $\xi(\sigma,\pi)$, where $\pi$ is the line-of-sight or radial separation and $\sigma$ is the transverse separation. Moreover, @gaz08b have shown how to constrain $H(z)$ using the correlations in the radial direction.
@kaz10a found similar results for the correlation measurements and uncertainties, but disagree in the interpretation of the results regarding the detection of a line-of-sight baryonic acoustic feature. More recent studies [@mar08a; @cab08a; @san09a; @kaz09a] have confirmed this detection in the latest Data Release (DR7, [@aba08a]) of the SDSS-LRG, containing twice as many galaxies as the original sample, although the observed peak is in these cases wider than that observed in the original detection – an issue that needs further explanation. These measurements of the BAO scale at low redshift, combined with other cosmological probes, have been used to put stringent constraints on the values of cosmological parameters [@teg06a; @per07a; @san09a; @per10a; @rei09a; @kaz10a]. While @bas10a argue that low-level detections may not be sufficient to robustly estimate the cosmological parameters, @cab10a show instead that it is still possible –assuming a model– to locate the BAO position with data providing only a low-significance BAO detection. It is important, therefore, to find evidence of BAO in the galaxy distribution based on complementary methods. A step further is to search for the real structures in the galaxy distribution that are responsible for the BAO feature in these second-order statistics. The detection of these structures would be a confirmation of the existence of the baryon acoustic phenomenon. Moreover, if we are able to localize these structures in configuration space, this would allow us to study the properties of BAO in more detail. In this paper, we introduce a new method for the detection of BAO, which is closely tied to the underlying physics of the process, and apply it to a sample drawn from the SDSS catalogue. This method (described in Section \[sec:method\]) is based on directly analyzing the 3D galaxy distribution using a very specific wavelet function (which we call the ‘BAOlet’), especially well suited to the search for BAO features.
The method makes use of two different tracers, one to map the overall density field (including the BAO shells), and the other to locate the position of the largest overdensities, which should correspond to the centres of the shells. As we study the galaxy distribution directly in configuration space, this method also allows us to identify regions of space where the BAO signal is stronger or fainter. We describe the expected signal in the $\Lambda$CDM model in Section \[sec:theory\], using both an analytical prediction and an $N$-body simulation catalogue. We describe the samples used in the case of SDSS in Section \[sec:data\]. In Section \[sec:results\], we show the results obtained in this case. We also perform a test to assess the significance of these results, and explore the implications of this analysis regarding the localization of BAO structures. Finally, we summarize our conclusions and discuss possibilities for future work in Section \[sec:conc\]. The wavelet detection method {#sec:method} ============================ The basis of the new BAO detection method is to focus on the positions of massive dark matter haloes, which correspond to the locations of large initial perturbations, and to look for the presence of structures resembling the acoustic shells around these. Once we locate the positions of the large over-densities, we need to study the density field to identify the structures corresponding to the acoustic shells around these centres. An appropriate method for the identification of structures in continuous fields is wavelet analysis [@mar93a; @starck:book06; @jon09a]. Wavelet transforms are widely used in many areas, especially in image analysis [@mallatb08; @starck:book10]. They are especially suited to the analysis of data at different scales, and to the identification of characteristic patterns or structures.
Wavelets have been used in cosmology for the analysis of the large-scale structure, and of the CMB anisotropies [@mar93a; @rau93a; @wave:vielva04; @starck:sta05_2; @saa09a]. Standard wavelet functions like the Mexican hat are, however, not suitable for the detection of shells. Instead, we need a family of wavelets whose shape matches the type of structures we want to find in our data. Therefore, we use a specially designed wavelet (the ‘BAOlet’), well adapted to the search for BAO features – shell-like structures around our selected centres. We design this new family of wavelet functions as a transformation of the widely used B-spline wavelets [@saa09a]. These $\psi_{R,s}(\mathbf{x})$ functions are spherically symmetric, and their radial profiles are defined as $$\label{eq:baodef} \psi_{R,s}(r) = \frac{\alpha_{R,s}}{4\pi r^2} \left[ 2B_3\left( 2\frac{r - R}{s} \right) - B_3\left(\frac{r-R}{s}\right)\right] \, ,$$ where $R$ and $s$ are the two parameters that define the scale and width of the BAOlet function, $\alpha_{R,s}$ is the normalization constant defined so that $$\label{eq:wavenorm} ||\psi_{R,s}||^2 \equiv \int |\psi_{R,s}(\mathbf{x})|^2\mathrm{d}\mathbf{x} = 1 \, ,$$ and $B_3(x)$ is the box spline of the third degree, defined by $$B_3(x) = \frac{1}{12} \Big(\vert x-2 \vert^3 - 4\vert x-1 \vert^3+6\vert x \vert^3-4 \vert x+1 \vert^3+\vert x+2 \vert^3 \Big) \, .$$ The BAOlet function is shown in Fig. \[fig:profile\]. It can be thought of as a spherical shell of radius $R$ and width $s$, with zero amplitude at its centre, and is therefore adapted to the detection of spherical shells of a given radius. This specific choice is motivated by the fact that the integrated profile is the widely used one-dimensional ‘B-spline’ wavelet function, which has a null mean and compact support $[-2,2]$.
These properties translate directly onto the BAOlet, which also has a null mean –a requirement for any wavelet function– if $R > 2s$, and takes non-zero values only for $R - 2s \leq |\mathbf{x}| \leq R + 2s$. ![The BAOlet function. Here we show a 2D plot (bottom) of the wavelet $\psi_{R,s}(\mathbf{x})$ used in the analysis, as defined by equation (\[eq:baodef\]). The top panel shows a 1D slice along the dashed-dotted axis. The wavelet is plotted here for $R = 105 {\, h^{-1}\, \mathrm{Mpc}}$, $s = 30 {\, h^{-1}\, \mathrm{Mpc}}$. The red dot marks the centre of the wavelet. This function has a null mean (provided that $R > 2s$), and compact support. It takes non-zero values only for $R - 2s \leq |\mathbf{x}| \leq R + 2s$.[]{data-label="fig:profile"}](ArnalteMur_fig_wavelet){width="\columnwidth"} We describe the density field using the density contrast $\delta(\mathbf{x})$, defined as $$\delta(\mathbf{x}) = \frac{\rho(\mathbf{x}) - \rho_0}{\rho_0} \, ,$$ where $\rho(\mathbf{x})$ is the density field, and $\rho_0$ is its mean. Then, given a density contrast map $\delta(\mathbf{x})$, and with the wavelet properly normalized as in equation (\[eq:wavenorm\]), we can construct, for each point in the parameter space $(R,s)$, a BAOlet coefficient map as the convolution of our density field with the corresponding wavelet: $$\label{eq:coeff} W_{R,s}(\mathbf{x}) = \int_{\Re^3} \psi_{R,s}(\mathbf{y}) \delta(\mathbf{y} - \mathbf{x}) \mathrm{d}^3\mathbf{y}\, .$$ The BAOlet acts as a matched filter, which is sensitive to data containing shells of different radii and widths. Its null mean is also of high importance, since it makes the statistics derived from the BAOlet coefficients independent of the background level. Indeed, any constant added to the input data does not change the BAOlet coefficients. In comparison, the estimation of such a baseline level is a very delicate aspect of the BAO detection in the two-point correlation function.
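As a quick numerical sanity check, equation (\[eq:baodef\]) can be implemented directly and the compensation property verified: the $4\pi r^2$ volume element cancels the prefactor of $\psi_{R,s}$, so the 3D integral of the wavelet vanishes whenever $R > 2s$. The following is a minimal Python sketch (the function names are ours, not from any released code):

```python
import numpy as np

def B3(x):
    """Third-degree box spline; the formula vanishes identically for |x| >= 2."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x - 2)**3 - 4*np.abs(x - 1)**3 + 6*np.abs(x)**3
            - 4*np.abs(x + 1)**3 + np.abs(x + 2)**3) / 12.0

def baolet_radial(r, R, s):
    """Radial profile psi_{R,s}(r) of eq. (1), with alpha_{R,s} set to 1."""
    return (2*B3(2*(r - R)/s) - B3((r - R)/s)) / (4*np.pi*r**2)

# Null-mean check: int psi(r) 4 pi r^2 dr over the support is zero when R > 2s,
# because the 4 pi r^2 volume element cancels the 1/(4 pi r^2) prefactor.
R, s = 105.0, 30.0                        # h^-1 Mpc, as in Fig. 1
r = np.linspace(R - 2*s, R + 2*s, 100001)
dr = r[1] - r[0]
mean3d = np.sum(baolet_radial(r, R, s) * 4*np.pi*r**2) * dr
print(abs(mean3d) < 1e-6)                 # compensated wavelet
```

The check works because each $B_3$ term integrates to the same area ($s$, after the change of variables), so the bracket in equation (\[eq:baodef\]) has zero integral.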
Due to the properties of the wavelet, the coefficient maps $W_{R,s}(\mathbf{x})$ should have a null mean when averaged over all points in the volume considered. Equivalently, if we sampled these maps at $N$ random points uniformly distributed in the volume ($\mathbf{x}_r^{(i)}$), the expected value of the average of the coefficients is zero, $$\label{eq:E0} E\left\lbrace \left \langle W_{R,s}(\mathbf{x}_r^{(i)}) \right\rangle_N \right\rbrace = 0 \, .$$ This condition holds even in the presence of shell-like structures in the density field. Of course, for such structures the value of $W_{R,s}(\mathbf{x_c})$ ($\mathbf{x_c}$ is the centre of the shell) is positive, and remains positive at nearby points. For an ideal $\delta(r-R)$ density shell, the region around the centre where the wavelet amplitude is positive has radius $s$; the positive signal in this region is compensated by negative amplitudes around $|\mathbf{x}|=R$. However, if we are able to identify the positions of $N$ massive haloes in the same volume ($\mathbf{x}_c^{(i)}$), we can define a new statistic $B(R,s)$ as the mean value of the coefficients $W_{R,s}(\mathbf{x})$ at these positions: $$\label{eq:Bstat} B(R,s) = \left\langle W_{R,s}(\mathbf{x}_c^{(i)})\right\rangle_N \, .$$ If there are indeed shell-like structures around the selected density maxima $\mathbf{x}_c^{(i)}$, as expected for baryon acoustic structures, we should find positive values of $B(R,s)$, with the maximum of $B$ at the $(R,s)$ values characterizing these shells. We can obtain further information from the wavelet coefficients $W_{R,s}(\mathbf{x})$, as we know how the signal picked up by the BAOlet function depends on position. In particular, fixing a set of parameters of interest ($R_i, s_i$), we could use the coefficients $W_{R_i,s_i}(\mathbf{x}_c)$ to identify which of the selected massive haloes give the largest signal for these characteristics of the shells.
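Equations (\[eq:coeff\]) and (\[eq:Bstat\]) translate into a short numerical recipe: build the wavelet on the same grid as $\delta(\mathbf{x})$, convolve via FFT, and average the coefficient map at the centre voxels. Below is a minimal NumPy sketch (function names and grid conventions are ours, purely illustrative; in particular, a real analysis must also pad the density cube):

```python
import numpy as np

def B3(x):
    """Third-degree box spline of eq. (3)."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x - 2)**3 - 4*np.abs(x - 1)**3 + 6*np.abs(x)**3
            - 4*np.abs(x + 1)**3 + np.abs(x + 2)**3) / 12.0

def B_statistic(delta, centres, R, s, pix):
    """B(R,s) of eq. (7): mean BAOlet coefficient at the centre voxels.

    delta   -- density contrast on an n^3 grid (assumed already padded)
    centres -- (N, 3) integer voxel indices of the selected centres
    pix     -- pixel side in h^-1 Mpc
    """
    n = delta.shape[0]
    g = np.fft.fftfreq(n) * n * pix              # signed coordinates, FFT layout
    x, y, z = np.meshgrid(g, g, g, indexing='ij')
    r = np.sqrt(x**2 + y**2 + z**2)
    r[0, 0, 0] = 0.5 * pix                       # avoid 0/0 at the origin
    psi = (2*B3(2*(r - R)/s) - B3((r - R)/s)) / (4*np.pi*r**2)
    psi /= np.sqrt(np.sum(psi**2) * pix**3)      # discrete version of eq. (2)
    # psi is real and even, so this FFT convolution yields the wavelet
    # coefficient map of eq. (5): the kernel is centred on each point x.
    W = np.fft.ifftn(np.fft.fftn(delta) * np.fft.fftn(psi)).real * pix**3
    return W[centres[:, 0], centres[:, 1], centres[:, 2]].mean()
```

On a toy density cube containing a single spherical shell around a chosen centre, this $B$ comes out positive when $(R,s)$ match the shell and negative when $R$ places the shell in the wavelet's side lobes, which is exactly the behaviour the detection method exploits.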
In the context of BAO, the parameters $R_i$, $s_i$ can be chosen *a priori* using a theoretical model, or *a posteriori* using the parameters for which the function $B(R,s)$ attains its maximum. In this way, we can localize in configuration space the structures responsible for the largest BAO signal in a given sample. For our calculation of $B(R,s)$, we sample the $(R,s)$ parameter space on a grid. For each point $(R,s)$, we calculate the coefficient map $W_{R,s}(\mathbf{x})$ as the convolution of the BAOlet with the density field (equation \[eq:coeff\]). We perform the convolution in Fourier space using a Fast Fourier Transform (FFT) technique. To avoid wrap-around problems with the FFT, we zero-pad a large region around our density cube. To obtain $B(R,s)$, we sample $W_{R,s}(\mathbf{x})$ at the positions of the $N$ selected centres, and calculate the average value (equation \[eq:Bstat\]). Therefore, to apply this method, we need a way to map the overall density field $\delta(\mathbf{x})$, but also to locate the positions of massive matter haloes $\mathbf{x}_c^{(i)}$. We have to use two different populations of mass tracers, so that each plays the appropriate role in the detection algorithm. The idea of using two different tracer sets, one for the small perturbations and another for the high peaks, in a cross-correlation analysis was anticipated by @eis07a. We implement here a similar idea, but using a wavelet tool directly on the density field. As detailed below, we use galaxies from the ‘Main’ and LRG samples of SDSS in this case. However, this choice would depend on the kind of data available in each case. Prediction from $\Lambda$CDM {#sec:theory} ============================ In order to better understand our method, we show here the results we expect according to the $\Lambda$CDM model, and the effect of BAO on our new statistic $B(R,s)$.
For this purpose we use both the analytical approximation to the transfer function of @eis98a, and the results from the MareNostrum Institut de Ciències de l’Espai (MICE) simulation [@fos08a]. In the first place, we use the $\Lambda$CDM transfer function, which allows us to study the effect of the BAO directly. However, in this case, we must make a series of approximations in order to make a prediction for $B(R,s)$. We want to predict the typical value of the wavelet coefficient $W_{R,s}$ at the position of massive matter haloes $\mathbf{x}_c$, as a function of $(R,s)$. From equation (\[eq:coeff\]), we see that this is equivalent to studying the typical density profile around such haloes, $\delta(\mathbf{y}-\mathbf{x}_c)$. The $\Lambda$CDM transfer function allows us to calculate this profile, provided we know the initial perturbation corresponding to the selected haloes. We make here the simple approximation of considering that these initial perturbations are point-like and spherically symmetric, and can thus be described by a Dirac delta function in configuration space. This corresponds to a constant value in Fourier space. As the transfer function $T(k)$ describes the relative evolution of the different Fourier modes, the present-day radial density profile corresponding to such an initial perturbation will be given simply by [@eis07a] $$\label{eq:tf} \rho(r) = C \widetilde{T}(r) \,$$ where $\widetilde{T}(r)$ is the Fourier transform of the transfer function $T(k)$, and $C$ is a normalization constant that depends on the details of the initial perturbation, and on the cosmic growth function $D_1(z)$. From equations (\[eq:coeff\]) and (\[eq:Bstat\]), we see that the effect of $C$ is just to change the normalization of our statistic $B(R,s)$. We used the fitting formulae for the transfer function $T(k)$ from @eis98a, and obtained the expected $W_{R,s}$ at a large overdensity using equations (\[eq:tf\]) and (\[eq:coeff\]).
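For a spherically symmetric wavelet and a radial profile $\delta(r)$ about the centre, equation (\[eq:coeff\]) reduces to the one-dimensional integral $W_{R,s} = \alpha_{R,s} \int \left[ 2B_3\left(2\frac{r-R}{s}\right) - B_3\left(\frac{r-R}{s}\right)\right] \delta(r)\, \mathrm{d}r$, since the $4\pi r^2$ volume element cancels the $1/(4\pi r^2)$ prefactor. The sketch below scans this integral over an $(R,s)$ grid for a toy profile (a smooth declining background plus a Gaussian bump at the sound horizon scale) rather than the actual @eis98a transfer function, so it only illustrates the matched-filter mechanism:

```python
import numpy as np

def B3(x):
    """Third-degree box spline of eq. (3)."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x - 2)**3 - 4*np.abs(x - 1)**3 + 6*np.abs(x)**3
            - 4*np.abs(x + 1)**3 + np.abs(x + 2)**3) / 12.0

def W_centre(delta_r, r, R, s):
    """W_{R,s} at the centre of a spherically symmetric profile delta_r(r)."""
    f = 2*B3(2*(r - R)/s) - B3((r - R)/s)
    dr = r[1] - r[0]
    alpha = 1.0 / np.sqrt(np.sum(f**2 / (4*np.pi*r**2)) * dr)   # eq. (2)
    return alpha * np.sum(f * delta_r) * dr

# Toy radial profile: smooth background plus a BAO-like bump at r_s.
# (Illustrative only -- not the Eisenstein & Hu transfer function.)
r_s = 109.3                                   # h^-1 Mpc, value quoted below
r = np.arange(1.0, 250.0, 0.25)
delta_r = np.exp(-r/40.0) + 0.08*np.exp(-0.5*((r - r_s)/12.0)**2)

R_grid = np.arange(80.0, 141.0, 1.0)
s_grid = np.arange(10.0, 31.0, 2.0)           # R > 2s everywhere on this grid
W = np.array([[W_centre(delta_r, r, R, s) for s in s_grid] for R in R_grid])
iR, js = np.unravel_index(np.argmax(W), W.shape)
print(R_grid[iR], s_grid[js])                 # maximum lands near the bump scale
```

Without the bump the smooth background alone yields only negative coefficients, mirroring the ‘no wiggle’ case discussed next; with the bump, $W$ develops a positive peak near $R \approx r_s$.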
In order to highlight the particular signature of BAO, we also calculated $W_{R,s}$ using the ‘no wiggle’ transfer function formula, in which the BAO have been edited out. We used here the values $\Omega_M =0.25$, $\Omega_{\Lambda} = 0.75$, $\Omega_b = 0.044$, and $h=0.7$ for the cosmological parameters, to allow a direct comparison with the MICE simulation. Following @eis98a, the sound horizon scale in this case is $r_s = 109.3 {\, h^{-1}\, \mathrm{Mpc}}$. The results for both cases are shown in Fig. \[fig:eishu\]. In the plot, we mask the region $R < 2s$, as for these values of the parameters our BAOlet is not compensated (its mean is different from $0$). Comparing both panels of the figure, we clearly see the effect of the presence of BAO on our statistic. In the case without BAO, $W_{R,s}$ is always negative, and it presents a smooth gradient across the $(R,s)$ plane. This gradient is due to the overall shape of the radial profile (equation \[eq:tf\]). However, in the presence of BAO, $W_{R,s}$ shows a prominent peak with positive values. This clearly shows the idea behind the $B(R,s)$ statistic. The BAOlet $\psi_{R,s}$ acts as a matched filter with a shape adapted to detect BAO shells. Therefore the positive values of the coefficients $W_{R,s}$ correspond to the cases in which the radial profile is matched by the BAOlet shape. The values at which $W_{R,s}$ attains its absolute maximum, $R_{\mathrm{max}} = 110 {\, h^{-1}\, \mathrm{Mpc}}$ and $s_{\mathrm{max}} = 22 {\, h^{-1}\, \mathrm{Mpc}}$, thus correspond to the characteristics of the shell that best matches the observed profile about the selected centres. ![Values of the BAOlet coefficients $W_{R,s}$ expected at the positions of large initial point-like perturbations, as a function of the BAOlet parameters $(R,s)$. The bottom panel shows the result using a standard $\Lambda$CDM transfer function, while the top panel shows the result using a transfer function with the BAO wiggles edited out [@eis98a].
The normalization is arbitrary. The contours are drawn at steps of 1000 for $W_{R,s}<0$ (dotted), $W_{R,s}=0$ (solid), and $W_{R,s}>0$ (dashed). The map attains a maximum at $R = 110 {\, h^{-1}\, \mathrm{Mpc}}$, $s = 22 {\, h^{-1}\, \mathrm{Mpc}}$. []{data-label="fig:eishu"}](ArnalteMur_fig_EH98){width="\columnwidth"} In order to test the reliability of the method, and of this $\Lambda$CDM prediction, we calculated the $B(R,s)$ for a halo catalogue drawn from the MICE simulations. We used the publicly available halo catalogue from the ‘MICE3072’ run [@cro10a]. This particular run contains $2048^3$ particles in a box of side $3072 {\, h^{-1}\, \mathrm{Mpc}}$, therefore covering a volume of $29 \, h^{-3} \, \mathrm{Gpc}^3$. The simulation was run with the GADGET-2 code [@spr05b], assuming a $\Lambda$CDM model with the parameters mentioned above. Haloes in the simulation were selected using a friends-of-friends (FoF) algorithm. We used the resulting halo catalogue at $z=0$, which contains a total of $2819031$ haloes containing $143$ or more particles. This corresponds to haloes with masses $\geq 3.35 \times 10^{13} {\, h^{-1}\, M_{\odot}}$. The halo number density is thus $9.72 \times 10^{-5} \, h^3 \mathrm{Mpc}^{-3}$. We used the full halo catalogue as a tracer of the overall density field. We then selected as centres for the calculation of $B(R,s)$ in equation (\[eq:Bstat\]) only the haloes with a mass $\geq 1.76 \times 10^{14} {\, h^{-1}\, M_{\odot}}$. We chose this mass threshold in order to select approximately the $10\%$ most massive haloes in the simulation box. This choice is somewhat arbitrary, but serves for the purpose of testing the BAOlet method and illustrating the expected result. Fig. \[fig:mice\] shows the BAOlet result $B(R,s)$ for these MICE samples, compared to the theoretical results obtained above from the @eis98a transfer functions. We obtain a result very similar to that of Fig. 
\[fig:eishu\], as $B(R,s)$ shows a clear peak, and attains its absolute maximum for $R_{\mathrm{max}} = 108 {\, h^{-1}\, \mathrm{Mpc}}$, $s_{\mathrm{max}} = 28 {\, h^{-1}\, \mathrm{Mpc}}$. This indicates that our BAOlet method can be applied to two sets of mass tracers, although the details of the tracers used here are very different from the ones we use later on the SDSS samples. This also confirms the expected effect of the presence of BAO in the $B(R,s)$ function: the presence of a large peak with positive values of $B$, located approximately at the values of $R$ and $s$ corresponding to the radius and width of the acoustic shells. The fact that we obtain here slightly different values for $R_{\mathrm{max}}$ and $s_{\mathrm{max}}$ than those predicted above may be due to non-linear evolution effects, which slightly reduce the radius and increase the width of the shells. A similar effect is present in the correlation function [see e.g. @cro08a]. ![The BAOlet statistic $B$ calculated for the MICE simulation sample described in the text as a function of the parameters $(R,s)$ (bottom panel). The contours are drawn at steps of $5$ for $B<0$ (dotted), $B=0$ (solid), and $B>0$ (dashed). This function attains its maximum for $R=108 {\, h^{-1}\, \mathrm{Mpc}}$, $s=28 {\, h^{-1}\, \mathrm{Mpc}}$. The top two panels show cuts at the values $s = 28 {\, h^{-1}\, \mathrm{Mpc}}$ (top) and $s = 22 {\, h^{-1}\, \mathrm{Mpc}}$ (middle), marked with grey horizontal lines in the 2D panel. In each case, the solid blue line corresponds to the value obtained from MICE, the dashed red line corresponds to the theoretical expectation from the @eis98a transfer function (bottom panel of Fig. \[fig:eishu\]), and the dotted green line to the theoretical expectation using the ‘no wiggle’ transfer function (top panel of Fig. \[fig:eishu\]). These theoretical predictions have been re-normalised to get the same value at the maximum in $B(R,s)$. 
[]{data-label="fig:mice"}](ArnalteMur_fig_MICE){width="\columnwidth"} We also used this halo catalogue from MICE to make a qualitative estimation of how different observational effects would affect the BAOlet result. In the first place, we studied the effect of redshift-space distortions. To this end, we calculated the redshift-space positions of all haloes taking into account their peculiar velocities, as output by the simulation, and considering an observer located in one of the vertices of the simulation cube. The result for $B(R,s)$ in this case is shown in the top panel of Fig. \[fig:mice-obs\], where it is compared to the real-space result discussed above. As can be seen from the figure, although small differences appear between the real- and redshift-space results, the main features of the $B(R,s)$ prediction remain the same, with the position of the maximum changing only by $\sim 1 {\, h^{-1}\, \mathrm{Mpc}}$. For the second case, we added the effect of a decreasing radial selection function across the sample. We model this selection as an exponential decay function, such that the final number of haloes used to trace the overall density field is $\sim 20\%$ of the total. In our calculations, we then weight each halo by the inverse of the mean density at its redshift, as we do later for the SDSS data. We do not apply any selection function to the centres. The results for $B(R,s)$ obtained in this case (including also the redshift-space effects) are shown in the bottom panel of Fig. \[fig:mice-obs\]. As above, these observational effects do not change significantly the overall behaviour of $B(R,s)$, or the location of the maximum of the peak. Overall, although the MICE catalogue used does not mimic the characteristics of our SDSS samples, we can be confident that neither redshift-space distortions nor a radial selection function (when it is taken into account in the calculation) should bias significantly our results. 
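The inverse-selection weighting used above — each tracer weighted by the inverse of the mean density at its redshift — can be sketched as follows (a minimal Python illustration with our own function names and toy bin volumes; the actual selection function of each sample is of course estimated from the data):

```python
import numpy as np

def selection_weights(z, z_edges, shell_volumes):
    """Weight each tracer by the inverse of the mean comoving number
    density in its redshift bin, nbar = N_bin / V_bin."""
    counts, _ = np.histogram(z, bins=z_edges)
    nbar = counts / np.asarray(shell_volumes, dtype=float)
    ibin = np.clip(np.digitize(z, z_edges) - 1, 0, len(counts) - 1)
    return 1.0 / nbar[ibin]
```

Tracers in sparsely sampled (distant) bins then receive proportionally larger weights, so the weighted counts-in-cells estimate of $\delta(\mathbf{x})$ is not biased by the radial selection; and since the BAOlet has zero mean, any residual error in this baseline affects the coefficients only weakly.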
![ The BAOlet statistic $B$ for the MICE simulation when some observational effects are taken into account. In the top panel, we show the $B(R,s)$ obtained when redshift-space distortions are included in the simulation. In the bottom panel, we show the result when a radial selection function is applied to the halo catalogue. In both cases, the contours are drawn at steps of $5$ in $B$. Solid contours correspond to the results with the observational effects included. The dashed contours correspond to the original real-space result without selection, i.e., they are identical to those in the bottom panel of Fig. \[fig:mice\]. []{data-label="fig:mice-obs"}](ArnalteMur_fig_MICEobs){width="\columnwidth"} SDSS samples used {#sec:data} ================= We used data from two different samples of the latest data release (DR7) of the spectroscopic SDSS. On one side, we used the ‘Main’ galaxy sample [@str02a] as mass tracers for reconstructing the overall density field $\delta(\mathbf{x})$. On the other, we used the LRGs as tracers of the central over-densities, and therefore used them as the selected centres $\mathbf{x}_c^{(i)}$ to compute $B(R,s)$. Luminous Red Galaxies were selected by the SDSS team using several colour and magnitude cuts to obtain a highly biased sample reaching high redshift [@eis01a]. The galaxies selected in this way are known to reside near the centres of massive dark matter haloes [@zhe09a] and are thus adequate tracers for the centres of baryon acoustic structures. We applied an extra cut in the K-corrected, evolved, $g$-band absolute magnitude: $-23.2 < M_g < -21.2$, as in the previous BAO analysis by @eis05a. This results in an approximately volume-limited sample in the redshift range $0.15 < z < 0.30$. ‘Main’ galaxies in the SDSS constitute a much denser sample, and are therefore more suitable to map small density changes such as BAO shells. 
We used the ‘Main’ sample from the Value-Added Galaxy Catalogue [@bla05b], which constitutes a magnitude-limited sample in the $r$ band, with $r < 17.6$. We applied an additional simple cut, $M_r < -20$. For the conversion of angles and redshifts into co-moving distances, we used a fiducial cosmology with the parameters $\Omega_M = 0.25$, $\Omega_{\Lambda} = 0.75$. In all our analysis we use distances in units of ${\, h^{-1}\, \mathrm{Mpc}}$, so that they do not depend on the specific value of $h$. We converted the distribution of the ‘Main’ galaxies into a density field $\delta(\mathbf{x})$ by binning it into a grid with cubic pixels of $3 {\, h^{-1}\, \mathrm{Mpc}}$ side. We corrected for the selection effects by weighting each galaxy by the inverse of the average density at its redshift. As explained below, we performed some tests by slightly changing this weighting scheme. Although this weighting may not be optimal, it should not significantly affect our results, given that the wavelet method does not depend on the local background level. We used the density field constructed in this way for the calculation of the BAOlet coefficients following equation (\[eq:coeff\]). In our calculations, we could only use the region in which these two samples overlap, which corresponds to the redshift limits $0.15 < z < 0.26$. To minimize border effects in the $B(R,s)$ calculation, we defined a buffer region of $r_{\rm buff} = 175 {\, h^{-1}\, \mathrm{Mpc}}$ from any of the borders of the ‘Main’ sample volume. We used as centres only the LRGs in the inner volume. This allows us to use the density field, as traced by the ‘Main’ sample galaxies, from $z > 0.09$. In order to minimize angular selection effects and border effects, we use a compact area of the sky where the angular completeness is nearly uniform.
This area covers $5511 \deg^2$ and is defined, in the SDSS survey coordinates [@sto02a], by the limits $-31.25^{\circ} < \eta < 28.75^{\circ}$, $-54.8^{\circ} < \lambda < 51.8^{\circ}$. This results in finally using the density field in a volume of $2.2 \times 10^8\,h^{-3}\,\mathrm{Mpc}^3$, as traced by $N_{\rm Main} = 198342$ galaxies. The number of LRGs used as centres (avoiding the buffer region) is $N_{\rm LRG} = 1599$. In Fig. \[fig:slices\] we show a slice of the survey containing both the ‘Main’ galaxies and the LRGs. Given the buffer used, the LRGs used as centres are located only in an inner volume of the larger ‘Main’ sample. To illustrate the idea of the method, we show a zoom around a given LRG galaxy. Even for this single centre, a slight over-density of ‘Main’ galaxies is seen at radii of 105–110$\,{\, h^{-1}\, \mathrm{Mpc}}$. ![image](ArnalteMur_fig_3Dslice){width="80.00000%"} As the structures we look for are huge, with radii about $100 {\, h^{-1}\, \mathrm{Mpc}}$, we have to consider the effect of the assumed cosmology (different comoving distances) on our result. In order to estimate the distance differences, we compared the distances in our adopted MICE cosmology ($\Omega_M = 0.25$, $\Omega_{\Lambda}=0.75$) with those in the WMAP 7-year cosmological model [@kom11a], $\Omega_M = 0.271$, $\Omega_{\Lambda}=0.729$. We fixed the redshift difference $\delta z=0.07$, which corresponds approximately to our shell diameter of $200 {\, h^{-1}\, \mathrm{Mpc}}$, and found that this gives distance differences of only 0.3 and 0.8 per cent at the near and far borders of our sample (the MICE distances are larger than the WMAP7 ones in each case). So, for our nearby volume, the effect is small, and does not affect our results given that the statistical uncertainties are much larger (see next Section). However, this effect will be significant for deep samples.
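The comparison between the two sets of comoving distances can be reproduced schematically with a short numerical integration of the flat $\Lambda$CDM distance-redshift relation. The exact percentages depend on where in the sample the redshift interval is placed, so this sketch is only meant to show the order of magnitude of the effect.

```python
import math

HUBBLE_DIST = 2997.92  # c/H0 in h^-1 Mpc, so all distances are in h^-1 Mpc

def comoving_distance(z, omega_m, n=2000):
    """Line-of-sight comoving distance for a flat LCDM cosmology,
    via trapezoidal integration of dz'/E(z')."""
    dz = z / n
    s = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(omega_m * (1.0 + zi) ** 3 + (1.0 - omega_m))
        s += (0.5 if i in (0, n) else 1.0) / e
    return HUBBLE_DIST * s * dz

def shell_depth(z, omega_m, dz=0.07):
    """Comoving depth of a redshift interval dz starting at z
    (dz = 0.07 spans roughly a 200 h^-1 Mpc shell at these redshifts)."""
    return comoving_distance(z + dz, omega_m) - comoving_distance(z, omega_m)

# MICE (Omega_M = 0.25) vs WMAP7 (Omega_M = 0.271), both flat
percent_diff = {}
for z in (0.15, 0.26):  # near and far borders of the SDSS volume used
    d_mice = shell_depth(z, 0.25)
    d_wmap = shell_depth(z, 0.271)
    percent_diff[z] = 100.0 * (d_mice - d_wmap) / d_mice
    print(f"z = {z}: depth = {d_mice:.1f} h^-1 Mpc, "
          f"MICE vs WMAP7 difference = {percent_diff[z]:.2f} per cent")
```

The MICE distances come out larger (smaller $\Omega_M$ gives a smaller $E(z)$), and the fractional difference grows towards the far border, consistent with the trend quoted in the text.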
Results for the SDSS samples {#sec:results} ============================ We performed the calculation of $B(R,s)$ for the SDSS in an analogous way to the case of the MICE simulation, using the samples defined in Section \[sec:data\]. Our results are shown in Fig. \[fig:baolet\]. As above, we mask the region $R < 2s$. As we are not introducing any border correction when calculating the $B(R,s)$ statistic, we also mask the region corresponding to the values $R > r_{\rm buff} - s$. Values obtained at those large values of $R$ could contain some spurious signal, as the calculation of $W_{R,s}$ would rely on the density field in regions outside of the survey boundaries. ![The BAOlet statistic $B$ calculated for SDSS data as a function of the parameters $(R,s)$. The bottom panel shows the results in the full parameter space considered, where we sampled both $R$ and $s$ at intervals of $1{\, h^{-1}\, \mathrm{Mpc}}$. We mask two areas, at the upper right and left corners, where our results are not reliable (see details in the text). The contours are drawn at steps of 5 for $B<0$ (dotted), $B=0$ (solid), and $B>0$ (dashed). The top two panels show cuts at the arbitrarily chosen values $s = 36 {\, h^{-1}\, \mathrm{Mpc}}$ (top) and $s = 20 {\, h^{-1}\, \mathrm{Mpc}}$ (middle), marked with grey horizontal lines in the 2D panel. In these panels, the blue line is $B(R,s)$, while the green line and the red band show the mean ($\overline{B}^{MC}$) and 1-$\sigma^{MC}$ interval for the Monte Carlo realizations of random centres. We obtain a clear significant peak at different values of $s$, with a maximum for $R = 116 {\, h^{-1}\, \mathrm{Mpc}}$, $s = 36 {\, h^{-1}\, \mathrm{Mpc}}$. []{data-label="fig:baolet"}](ArnalteMur_fig_Bdata25){width="\columnwidth"} The resulting $B(R,s)$ map is qualitatively very similar to that expected, either using an analytical $\Lambda$CDM model (Fig. \[fig:eishu\]), or the MICE simulation (Figs. \[fig:mice\] and \[fig:mice-obs\]).
This is an indication that the observed pattern does not originate from spurious features in the SDSS but is closely related to the large-scale structure and, more specifically, the BAO. $B(R,s)$ attains a maximum at $R_{\rm max} = 116 {\, h^{-1}\, \mathrm{Mpc}}$, $s_{\rm max} = 36 {\, h^{-1}\, \mathrm{Mpc}}$. This maximum is clearly related to the characteristics of the BAO structures present in our samples. We studied the robustness of this result by changing the weighting scheme applied for the construction of the density map (see Section \[sec:data\]). We did so by capping at different maximum values the possible weights associated to each galaxy, and repeating the calculation of $B(R,s)$ in each case. The results were qualitatively similar, obtaining a peak in $B(R,s)$ in all cases. However, the position of the peak changed in each case, with maximum changes of the order of $\pm 5 {\, h^{-1}\, \mathrm{Mpc}}$ in $R_{\rm max}$, and $\pm 10 {\, h^{-1}\, \mathrm{Mpc}}$ in $s_{\rm max}$. Therefore, the difference between the position of the peak obtained from the SDSS data and that given by the MICE simulation is not significant. In any case, we cannot use the scale and the width of the observed maximum of $B(R,s)$ as direct estimates of the radius or width of the shells, especially given that our analysis of the possible observational biases (Fig. \[fig:mice-obs\]) was only qualitative. In order to assess the significance of the BAO detection with this method, we focused on the value of $B(R,s)$ obtained at the maximum, $B_{\rm max} = B(R_{\rm max}, s_{\rm max}) = 22.9 \pm 3.7$[^2]. A more thorough analysis would model the $B(R,s)$ statistic in the full parameter space. However, given the large covariances between measurements at different values of $(R,s)$ we do not expect a large difference from the simple case we consider.
We will assess the probability of finding such a maximum in the case in which no baryon acoustic structures are present in our sample. We model this null hypothesis by using randomly distributed centres for the calculation of $B(R,s)$ in equation (\[eq:Bstat\]), instead of LRGs. Even using the $W_{R,s}(\mathbf{x})$ coefficients from the observed density field (traced by SDSS ‘Main’ galaxies), the expected value of $B(R,s)$ in this case is 0 (see equation \[eq:E0\]), and we expect to obtain a significantly higher signal in the data. In this way, we are testing the null hypothesis that either there are no shell-like structures in the density field traced by the ‘Main’ sample, or these shell-like structures are not found preferentially around LRG centres. In either case, that would mean that there are no BAO-like structures present in our sample. To perform the significance test, we generated $10^5$ random realizations of a Poisson process, with the mean number of points $N_{\rm LRG}$, in the same volume as the LRGs considered in the calculation (i.e. taking into account the buffer zone). For a realization $j$, we use the generated points as our centres $\mathbf{x}_c^{(i)}$ to compute the $B(R,s)$ statistic following equation (\[eq:Bstat\]), using the $W_{R,s}(\mathbf{x})$ coefficients obtained from the data. We can then obtain the mean value $\overline{B}^{MC}(R,s)$, and the standard deviation $\sigma^{MC}(R,s)$ of the Monte Carlo realizations of the centres. We show $\overline{B}^{MC}(R,s)$ and a band of $1 \sigma^{MC}(R,s)$ around it in the top panels of Fig. \[fig:baolet\]. We now calculate our signal-to-noise ratio at the maximum as $SNR_{\rm max} \equiv B_{\rm max}/\left[\sigma^{MC}(R_{\rm max},s_{\rm max})\right] = 6.60$, and assess the probability of finding such a large value of $SNR_{\rm max}$ anywhere in the parameter space for the Monte Carlo realizations.
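The logic of this Monte Carlo test can be sketched as follows, with synthetic Gaussian maps standing in for the actual $B(R,s)$ measurements; the map sizes, the number of realizations and the injected peak amplitude are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for B(R, s) maps of shape (n_R, n_s). In the paper these
# come from equation (Bstat); here we draw Gaussian noise, plus an
# injected peak in the "data" map playing the role of the BAO signal.
n_R, n_s, n_mc = 40, 30, 2000
mc_maps = rng.normal(0.0, 1.0, size=(n_mc, n_R, n_s))
data_map = rng.normal(0.0, 1.0, size=(n_R, n_s))
data_map[20, 15] += 8.0  # injected "BAO-like" maximum

# Per-(R, s) scatter of the random-centre realizations: sigma^MC(R, s)
sigma_mc = mc_maps.std(axis=0)

# SNR_max for the data and for each realization, over the full (R, s) plane
snr_data = (data_map / sigma_mc).max()
snr_mc = (mc_maps / sigma_mc).max(axis=(1, 2))

# One-sided p-value: fraction of realizations whose maximum SNR exceeds
# the value found in the data
p_value = (snr_mc >= snr_data).mean()
print(snr_data, p_value)
```

Taking the maximum over the full $(R,s)$ plane in every realization is what accounts for the "look-elsewhere" freedom in the peak position, mirroring the procedure described in the text.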
We used $SNR_{\rm max}$ instead of directly using $B_{\rm max}$ because for some regions of parameter space, especially at low $s$, $\sigma^{MC}(R,s)$ is extremely large. Therefore, if we used $B_{\rm max}$, we would need to arbitrarily restrict the parameter space studied, thus introducing a possible a posteriori bias. When using $SNR_{\rm max}$ we sample the full parameter space considered in the calculations (as shown in Fig. \[fig:baolet\]). We computed the maximum value of $SNR$ for each realization $j$ in the full $(R,s)$ range, $SNR^{MC(j)}_{\rm max}$. The distribution of the values of $SNR^{MC(j)}_{\rm max}$ is shown in Fig. \[fig:histogram\], where it is compared to the value of $SNR_{\rm max}$ obtained in the real data. We found that only one of the realizations gave a value of $SNR^{MC(j)}_{\rm max}$ larger than $SNR_{\rm max}$. Thus, the probability of obtaining a maximum with such a large $SNR$ in the absence of baryon acoustic structures (our null hypothesis) is $p \simeq 10^{-5}$, equivalent to a $\sim 4.4\sigma$ detection in the Gaussian case. However, we should stress here that the significance found in this work cannot be compared directly to other detection levels found in the literature, as stated in the introduction. In particular, we are not comparing our results with an analytical no-BAO model of $B(R,s)$ (such as that shown in the top panel of Fig. \[fig:eishu\]), since to do so would require the detailed modelling of all the selection effects affecting the two samples used. ![Histogram showing the distribution of the maximum $SNR$ values obtained, in the full $(R,s)$ space, for the $10^5$ Monte Carlo realizations of Poisson-distributed centres ($SNR^{MC(j)}_{\rm max}$). This histogram has a mean of $3.72$ and a standard deviation of $0.46$. We show as a dashed vertical line the value obtained from the data (using the LRGs as centres), $SNR_{\rm max} = 6.60$.
Only one of the Monte Carlo realizations gives a maximum value larger than $SNR_{\rm max}$.[]{data-label="fig:histogram"}](ArnalteMur_fig_histo25){width="\columnwidth"} As explained in Section \[sec:method\], we can extract more information about the BAO phenomenon in our samples by making further use of the BAOlet coefficient maps $W_{R,s}(\mathbf{x})$. Here, we use the coefficient values at the positions of the LRGs, for the parameters $R_{\rm max}$, $s_{\rm max}$, which correspond to the characteristics of the BAO shells present in our samples. In this way, the values $W_{\rm max} \equiv W_{R_{\rm max}, s_{\rm max}}$ are a measure of the strength of the signal coming from a BAO shell around a given point, and in particular, a given LRG. Therefore, using $W_{\rm max}$ we can localise in configuration space the regions of the volume covered by our samples where the BAO signal is mostly coming from. We illustrate this idea in Fig. \[fig:3dcoeff\], where we plot a two-dimensional projection of the distribution of the LRGs used as centres in our analysis, showing also the value of $W_{\rm max}$ for each of them, following a colour gradient. The highest values of $W_{\rm max}$ correspond to the red points in the plot. In Table \[tab:gallist\], we list the 10 LRGs used as centres with the largest values of $W_{\rm max}$. The whole catalogue of the $N_{\rm LRG} = 1599$ LRGs used as our centres, and the value of $W_{\rm max}$ obtained for each of them, can be found at the web page [http://www.uv.es/martinez]{}. This catalogue could be used to study the relation of the BAO signal at a given LRG to its properties or the environment. It could also be used to make a selection of LRG centres with high signal, and use them to refine the measurements of the BAO characteristics.
![image](ArnalteMur_fig_Wmaxdist){width="80.00000%"}

  SDSS object name           $\alpha$ (deg)   $\delta$ (deg)   $z$       $\sigma_z$   $W_{\rm max}$
  -------------------------- ---------------- ---------------- --------- ------------ ---------------
  SDSS J141746.20+184733.0   214.44254        18.79250         0.19872   0.00016      517.84
  SDSS J121858.41+380813.6   184.74341        38.13714         0.18974   0.00018      436.68
  SDSS J112430.27+415557.3   171.12613        41.93260         0.19433   0.00019      419.45
  SDSS J112355.53+423816.5   170.98140        42.63793         0.19404   0.00020      414.63
  SDSS J112352.72+424542.4   170.96968        42.76178         0.19469   0.00018      414.63
  SDSS J122935.13+384636.4   187.39640        38.77680         0.18686   0.00016      413.47
  SDSS J112535.99+412608.3   171.39998        41.43564         0.19288   0.00019      401.91
  SDSS J104501.94+362944.3   161.25810        36.49566         0.15938   0.00015      399.06
  SDSS J140443.31+264439.2   211.18047        26.74424         0.15854   0.00016      396.19
  SDSS J142031.28+211700.4   215.13036        21.28346         0.19232   0.00020      395.40

As an illustration of this latter use, we show a simple way to study the overall properties of the BAO structures, their shape and scale. We select those centres which we know present a prominent acoustic feature, i.e., those for which $W_{\rm max} > 0$. This leaves us with $N_r = 809$ centres. In order to improve the signal-to-noise in this illustration for studying the BAO structures, we stacked together the 3D density maps around the $N_r$ selected LRGs. In doing so, we kept the line-of-sight direction aligned for all the centres, as this direction will define the possible anisotropies in the distribution. We show a 3D view and a 2D cut of this stacked density map in Fig. \[fig:stack3d\]. Thanks to this selection, the characteristic elements of the BAO are amplified: on the one hand, a central bump with high density, corresponding to the massive halo traced by the LRG, and on the other, the shell surrounding it at a scale of $\sim 109 {\, h^{-1}\, \mathrm{Mpc}}$, showing a fainter over-density. We also observe the anisotropic nature of these structures.
This is a combination of the fact that we have to work in redshift space, and of the redshift-dependent selection function for ‘Main’ galaxies. ![Stacked 3D and 2D density field. In the top three panels, we show the density field after stacking the $N_r$ centres with $W_{R_{\rm max}, s_{\rm max}} > 0$. We show surfaces encompassing the regions above different thresholds in density after an isotropic Gaussian smoothing with $\sigma = 10 {\, h^{-1}\, \mathrm{Mpc}}$. The density threshold decreases from top to bottom, with values of $\delta = 1.24, 1.18, 1.13$ ($\delta$ is the density relative to the average density of the sample). We show only the bottom half of the density field for clarity. It can be seen that the acoustic shell appears clearly around the central over-density at the detected horizon scale. A 2D slice of this density field is shown in the bottom panel. Here, the dotted line is a circle whose radius corresponds to the one we measure for the BAO shells, $r_{\rm max} = 109.5 {\, h^{-1}\, \mathrm{Mpc}}$, and the arrow marks the direction of the line of sight. []{data-label="fig:stack3d"}](ArnalteMur_fig_3Dstack-a "fig:")\
![](ArnalteMur_fig_3Dstack-b "fig:")\
![](ArnalteMur_fig_3Dstack-c "fig:")\
![](ArnalteMur_fig_2Dstack "fig:") A simpler view can be obtained by calculating the average radial density profile $\rho(r)$ around the $N_r$ centres. The resulting profile, shown in Fig. \[fig:radprof\], has the same features as the 3D view: a high bump at short scales, and a clear peak at about the acoustic scale, with a maximum at $r_{\rm max} = 109.5 \pm 3.9{\, h^{-1}\, \mathrm{Mpc}}$. The error in $r_{\rm max}$ was estimated using bootstrap realizations [@lup93]. This scale gives the radius of the baryon acoustic shells, and it is therefore a good estimator of the acoustic scale in the sample. We also show in Fig. \[fig:radprof\] the radial profiles restricted to different regions of the sphere, to better characterize the anisotropy of the distribution. We define two cones with a width of $45^{\circ}$ with respect to the line of sight in each direction (we call these ‘near’ and ‘far’ regions), and a ‘transverse’ region covering the belt between the cones. We obtain qualitatively similar results for each of these regions. As expected, we see how the ‘near’ and ‘far’ subsamples are more strongly affected by observational effects, such as redshift-space distortions, which are more severe along the line of sight. In contrast, the result for the ‘transverse’ subsample matches, within the errors, that for the full sphere. It is interesting to note that the value of $r_{\rm max}$ is slightly larger for the ‘far’ sample than for the ‘near’ one.
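The bootstrap estimate of the uncertainty in $r_{\rm max}$ can be sketched like this, with a toy stacked profile (a Gaussian peak plus noise) standing in for the measured one; the profile shape and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the stacked radial profile: each "centre" contributes
# a noisy profile with a peak near r = 109.5 h^-1 Mpc.
r = np.linspace(80.0, 140.0, 61)          # radial bins, h^-1 Mpc
true_peak, width, n_centres = 109.5, 8.0, 809
profiles = (np.exp(-0.5 * ((r - true_peak) / width) ** 2)
            + rng.normal(0.0, 0.5, size=(n_centres, r.size)))

def peak_location(mean_profile):
    """Radius of the maximum of an averaged profile."""
    return r[np.argmax(mean_profile)]

r_max = peak_location(profiles.mean(axis=0))

# Bootstrap over centres: resample with replacement, re-measure the peak
boot = []
for _ in range(1000):
    idx = rng.integers(0, n_centres, n_centres)
    boot.append(peak_location(profiles[idx].mean(axis=0)))
r_max_err = np.std(boot)
print(r_max, r_max_err)
```

Resampling the centres (rather than the radial bins) propagates the centre-to-centre scatter into the error on the peak position, which is the quantity quoted as $\pm 3.9 {\, h^{-1}\, \mathrm{Mpc}}$ in the text.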
It is worth emphasizing that this approach would be impossible with any statistical BAO detection method used thus far, since the spatial localization of the shells is completely lost in the correlation function or in the power spectrum, while the local nature of the wavelet approach has allowed us to identify the positions of the most representative structures in our sample. Moreover, we are measuring the acoustic scale at positions selected for their low contamination from other structures, which is not the case when averaging over the full sample. In this way, we maximize the BAO signal, while minimizing the effect of signals coming from different large-scale structures. ![Average radial profiles. We show the radial density profile averaged over the $N_r$ centres with $W_{R_{\rm max}, s_{\rm max}} > 0$. We plot $\delta(r) = \frac{\rho(r)}{\rho_0}$, where $\rho_0$ is the average density of the sample. The continuous line with red error band shows the radial profile for the full sphere. We also show the error bands for the radial profile restricted to regions of the sphere, as defined in the text: the ‘near’ region (green), the ‘far’ region (blue), and the ‘transverse’ region (orange). The arrow marks the location of the maximum, $r_{\rm max} = 109.5 {\, h^{-1}\, \mathrm{Mpc}}$. The error band corresponds to the 1-$\sigma$ dispersion of 1000 bootstrap realisations. The profiles were estimated using a $B_3$ kernel of width $h=4 {\, h^{-1}\, \mathrm{Mpc}}$ in the radial coordinate, but similar results are obtained when using slightly different widths or shapes of the kernel. []{data-label="fig:radprof"}](ArnalteMur_fig_profile){width="\columnwidth"} Discussion and conclusions {#sec:conc} ========================== In summary, we have designed a new method for the detection of baryon acoustic oscillations in the galaxy distribution and for the localization, in configuration space, of the structures responsible for them.
This method is based on the use of a specially designed wavelet applied directly on the density field. Our approach also relies on the use of two different tracers: one for the overall density distribution, and the other for the central overdensities of the baryon acoustic structures. After testing the method with simulations, we applied this method to the detection of baryon acoustic structures in a sample drawn from the SDSS. In this case, we used galaxies from the ‘Main’ catalogue to trace the overall density field, and galaxies from the LRG catalogue to trace the location of massive dark matter haloes. We clearly detect BAO in the sample, providing a confirmation of the detection obtained previously using general two-point statistics (the power spectrum and correlation function). In fact, our approach provides an independent method for the detection. Finally, we showed how this method allows us, through the use of $W_{\rm max}(\mathbf{x})$, to localize in configuration space the actual structures responsible for the BAO signal obtained. This is a consequence of using a wavelet acting directly on the density field. We illustrate the utility of this approach by showing the density distribution stacked around a set of centres known to show the BAO feature given their $W_{\rm max}$ value. Recent works have proposed alternative methods to study the BAO based on wavelets [@xu10a; @tia11a]. In particular, @tia11a use a Mexican hat wavelet function with two parameters, conceptually similar to ours. They use it to search for a peak in the two-point correlation function of the ‘Main’ SDSS sample, obtaining a detection with a $p$-value $p = 0.002$ (equivalent to $3.1\sigma$ in the Gaussian case). As in our case, this shows the utility of using the ‘Main’ sample to reduce the shot noise in the calculation and to obtain significant detections. However, these works apply the wavelet to the measured two-point correlation function, instead of directly to the density field.
In this way, they use the capabilities of the wavelets to accurately characterize the BAO signal (in terms of radius and width), but they are not able to obtain any information about the localization of these structures in space. The use of wavelets directly on the density field isolates valuable information about the baryon acoustic structures that is hidden in the standard two-point statistics. In particular, the coefficients $W_{R,s}(\mathbf{x})$ allow us to localize the regions of the sampled volume giving the largest or smallest signal. We expect that this new method for studying BAO will be of much use for ongoing or planned surveys, such as the WiggleZ Survey [@dri10a], the Baryon Oscillation Spectroscopic Survey [BOSS, @eis11a], or the Physics of the Accelerating Universe (PAU) Survey [@ben08a], which will cover a much larger volume than studied here, and will explore higher redshifts. This work has been supported by the European Research Council grant SparseAstro (ERC 228261), by the Spanish CONSOLIDER projects AYA2006-14056 and CSD2007-00060, including FEDER contributions, by the Generalitat Valenciana project of excellence PROMETEO/2009/064, and by the Estonian grants SF0060067s08 and ETF8005. P.A.M. was supported by the Spanish Ministerio de Educación through a FPU grant, and by an ERC StG Grant (DEGAS-259586). We acknowledge the use of data from the MICE simulations, publicly available at http://www.ice.cat/mice. We also acknowledge the use of public data from SDSS. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions.
The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. [^1]: $h$ is the Hubble constant in units of $100 {\,{\rm km\,s^{-1}}}\, \mathrm{Mpc}^{-1}$ [^2]: This error in $B_{\rm max}$ is obtained from the variance of the coefficients at the $N_{\rm LRG}$ different LRGs. However, our significance test is independent of this error value.
[^1] **A. Skopenkov [^2]** This note is purely expository. In the course of the Kolmogorov-Arnold solution of Hilbert’s 13th problem on superpositions there appeared the notion of [*basic embedding*]{}. A subset $K$ of $\R^2$ is [*basic*]{} if for each continuous function $f\colon K\to\R$ there exist continuous functions $g,h\colon\R\to\R$ such that $f(x,y)=g(x)+h(y)$ for each point $(x,y)\in K$. We present descriptions of basic subsets of the plane (with a proof) and a description of graphs basically embeddable into the plane (solutions of Arnold’s and Sternfeld’s problems). We present some results and open problems on the smooth version of the property of being basic. This note is accessible to undergraduates and could be an interesting easy reading for mature mathematicians. The two sections can be read independently of each other. Let us recall informally the concept of [*superposition*]{}. Suppose that there is a set of functions of several variables, including all variables considered as functions. Represent each of the functions as an element of a circuit with several entries and one exit. Then a [*superposition*]{} of functions of this set is a function that can be represented by a circuit constructed from given elements; the circuit should not contain oriented cycles. For example, a polynomial $a_n x^n+a_{n-1} x^{n-1}+\dots+a_1 x+a_0$ is a superposition of the constant functions and the functions $f(x,y)=x+y$, $g(x,y)=xy$. It is clear that any elementary function can be represented as a superposition of functions of at most two variables. [*Is it possible to represent each function of several arguments as a superposition of functions of at most two arguments?*]{} Since there is a 1–1 correspondence between a segment and a square, any function of three or more variables is a superposition of (in general, discontinuous) functions of two variables. So the above question is only interesting for continuous functions.
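For instance, the polynomial example above can be made completely explicit via Horner’s rule, which evaluates the polynomial using only the two-variable functions $f(x,y)=x+y$ and $g(x,y)=xy$ together with the constants:

```python
# Horner's rule: a_n x^n + ... + a_1 x + a_0
#              = (...((a_n * x + a_{n-1}) * x + ...) * x + a_0,
# so each step is one application of multiplication and one of addition,
# i.e. the whole evaluation is a circuit of two-variable functions
# without oriented cycles.
def horner(coeffs, x):
    """Evaluate a polynomial with coefficients [a_n, ..., a_0] at x."""
    acc = coeffs[0]
    for a in coeffs[1:]:
        acc = acc * x + a
    return acc

print(horner([3, 0, -2, 5], 2))  # 3*8 + 0*4 - 2*2 + 5 = 25
```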
[^3] From now on we assume all functions to be continuous, unless the contrary is explicitly specified. This question was answered affirmatively in 1957 by Kolmogorov and Arnold. They proved that any continuous function of $n$ variables defined on a compact subset of $\R^n$ can be represented as a superposition of continuous functions of one variable and addition. For an exposition accessible to undergraduates see \[Ar58\]. See also \[Vi04\]. Ostrand extended the Kolmogorov-Arnold theorem to arbitrary $n$-dimensional compacta \[St89\]. It is in the Kolmogorov-Arnold-Ostrand papers that the notion of basic subset appeared for the first time. It was explicitly introduced by Sternfeld \[St89\]. A subset $K\subset\R^m$ is [*basic*]{} if for each continuous function $f:K\to\R$ there exist continuous functions $g_1,\dots,g_m:\R\to\R$ such that $f(x_1,\dots,x_m)=g_1(x_1)+\dots+g_m(x_m)$ for each point $(x_1,\dots,x_m)\in K$. \[St89\] [*Any $n$-dimensional compactum is basically embeddable into $\R^{2n+1}$ and, for $n>1$, is not basically embeddable into $\R^{2n}$.*]{} It is interesting to compare this theorem with the Nöbeling-Menger-Pontryagin theorem on embeddability of any $n$-dimensional compact space into $\R^{2n+1}$ and the example of an $n$-dimensional polyhedron non-embeddable into $\R^{2n}$. Obviously, $K$ is basically embeddable into $\R$ if and only if $K$ is topologically embeddable into $\R$. It follows from Theorem 1 that a compactum $K$ is basically embeddable into $\R^m$ for $m>2$ if and only if $\dim K<m/2$. Thus, the only remaining case is $m=2$ (Sternfeld’s problem). A subset $K$ of $\R^2$ is [*basic*]{} if for each continuous function $f\colon K\to\R$ there exist continuous functions $g,h\colon\R\to\R$ such that $f(x,y)=g(x)+h(y)$ for each point $(x,y)\in K$. Let us present the characterization of arcwise-connected compacta basically embeddable into the plane \[Sk95\] (this is a partial solution of Sternfeld’s problem).
We formulate the criterion first for graphs and then for the general case. A conjecture on embeddability of (not necessarily arcwise connected) connected compacta into the plane can be found in \[Sk95\]. Compacta used in the statements are defined after the statements.  \[Sk95\] *A finite graph $K$ is basically embeddable into the plane if and only if any of the following two equivalent conditions holds:* \(a) $K$ does not contain subgraphs homeomorphic to $S, C_1, C_2$ (fig. 1), that is, a circle, a five-point star, and a cross with branched endpoints; \(b) $K$ is contained in one of the graphs $R_n$, $n=1,2,3,\dots$ (fig. 2). [Fig. 1: $S^1$, $T_5$, $C$] [Fig. 2: $R_1$, $R_2$, $R_3$; $F_1$, $F_2$, $F_3$] Let $F_1$ be a triod. The graph $F_{n+1}$ is obtained from $F_n$ by branching its endpoints (fig. 2). The graph $R_n$ is obtained from $F_n$ by adding a hanging edge to each non-hanging vertex. \[Sk95\] *An arcwise-connected compactum $K$ is basically embeddable into the plane if and only if it is locally connected (i.e., is a Peano continuum) and either of the two following (equivalent) conditions holds:* \(1) $K$ does not contain $S^1, C_2, C_4, B$ as subcompacta and contains only finitely many subcontinua $F_n, H_n$ (fig. 1,2,3); \(2) $K$ does not contain any of the continua $S^1, C_1, C_2, C_3, B, F$, $H_+, H_-, h_+, h_-$ (fig. 1,3,4). [Fig. 3: $C_4$, $B$, $H_2$] [Fig. 4: $C_3$, $F$; $H_-$, $H_+$; $h_+$, $h_-$] Let $I=[0;1]$. A sequence of sets is called a [*null-sequence*]{} if their diameters tend to zero.
Define $\bullet$ $H_n$ to be the union of $I$ with a null-sequence of triods having endpoints attached to $I$ at points $3^{-l_1}+\dots+3^{-l_s}$, where $s\le n$ and $0<l_1<\dots<l_s$ are integers; $\bullet$ $C_3$ to be a cross with a null-sequence of arcs attached to one of its branches and converging to its center; $\bullet$ $C_4$ to be a cross with a sequence of points converging to its center; $\bullet$ $B$ to be the union of the arc $I$ and a null-sequence of arcs attached to $(0;1)$ by their endpoints at rational points; $\bullet$ $F$ to be the union of $I$ with a null-sequence of sets $F_n$ each having an endpoint attached to the point $1/n\in I$; $\bullet$ $H_+$ ($H_-$) to be the union of $I$ with a null-sequence of continua $H_n$ connected to the points $1/n\in I$ by arcs that intersect $H_n$ at the points $1\in I\subset H_n$ ($0\in I\subset H_{n-1}$, respectively); $\bullet$ $h_+$ ($h_-$) to be obtained from a null-sequence of continua $H_n$ by pasting together the points $1\in I\subset H_n$ and $0\in I\subset H_{n-1}$ ($0\in I\subset H_n$ and $1\in I\subset H_{n-1}$, respectively). An embedding $K\subset X\times Y$ is [*basic*]{} if for any continuous function $f:K\to\R$ there exist continuous functions $g:X\to\R$, $h:Y\to\R$ such that $f(x,y)=g(x)+h(y)$ for any point $(x,y)\in K$. Denote by $T_n$ an $n$-od, i.e., an $n$-pointed star. A vertex of a graph $K$ is called [*horrible*]{} if its degree is greater than 4 and [*awful*]{} if its degree is equal to 4 and it is not an endpoint of a hanging edge. The [*defect*]{} of a graph $K$ is the sum $\delta(K)=(degA_1-2)+\dots+(degA_k-2)$, where $A_1,\dots ,A_k$ are all the horrible and awful vertices of $K$. 
\[Ku99\] [*A finite graph $K$ admits a basic embedding $K\subset\R\times T_n$ if and only if $K$ is a tree and either $\delta(K)<n$ or $\delta(K)=n$ and $K$ has a horrible vertex with a hanging edge.*]{} The material is presented as a sequence of problems, which is peculiar not only to Zen monasteries but also to elite mathematical education (at least in Russia). Difficult problems are marked by a star, and unsolved problems by two stars. If the statement of a problem is an assertion, then it is required to prove this assertion. \(a) Is it true that for any four numbers $f_{11},f_{12},f_{21},f_{22}$ there exist four numbers $g_1,g_2,h_1,h_2$ such that $f_{ij}=g_i+h_j$ for each $i,j=1,2$? \(b) Andrey Nikolaevich and Vladimir Igorevich play the ’Dare you to decompose!’ game. Some cells of a chessboard are marked. A. N. writes numbers in the marked cells as he wishes. V. I. looks at the written numbers and chooses (as he wishes) 16 numbers $a_1,\dots,a_8,b_1,\dots,b_8$ as ’weights’ of the columns and the rows. If each number in a marked cell turns out to be equal to the sum of the weights of its row and its column, then V. I. wins, and in the opposite case (i.e., when the number in at least one marked cell is not equal to the sum of the weights of its row and its column) A. N. wins. Prove that V. I. can win no matter how A. N. plays if and only if there does not exist a closed route of a rook starting and turning only at marked cells (the route is not required to pass through each marked cell). Let $\R^2$ be the plane with a fixed coordinate system. Let $x(a)$ and $y(a)$ be the coordinates of a point $a\in\R^2$. An ordered set (either finite or infinite) $\{a_1,\dots,a_n,\dots\}\subset\R^2$ is called an [*array*]{} if for each $i$ we have $a_i\neq a_{i+1}$ and $x(a_i)=x(a_{i+1})$ for even $i$ and $y(a_i)=y(a_{i+1})$ for odd $i$. It is not assumed that points of an array are distinct. A finite array $\{a_1,\dots,a_{2l+1}\}$ is called [*closed*]{} if $a_1=a_{2l+1}$.
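The obstruction behind problem 1(a) — a $2\times2$ table $f_{ij}$ decomposes as $g_i+h_j$ exactly when $f_{11}+f_{22}=f_{12}+f_{21}$ — is easy to check numerically. Below is a minimal Python sketch; the function name `decomposable_2x2` is ours, introduced only for illustration.

```python
def decomposable_2x2(f):
    """f is a 2x2 list of numbers; return a pair (g, h) of weight lists
    with f[i][j] == g[i] + h[j] if one exists, else None."""
    # Canonical choice: g_1 = 0, h_j = f_1j, g_2 = f_21 - f_11.
    g = [0, f[1][0] - f[0][0]]
    h = [f[0][0], f[0][1]]
    # The only remaining constraint is the alternating-sum obstruction
    # f11 - f12 + f22 - f21 == 0, which this check detects.
    ok = all(abs(f[i][j] - (g[i] + h[j])) < 1e-12
             for i in range(2) for j in range(2))
    return (g, h) if ok else None

assert decomposable_2x2([[1, 2], [3, 4]]) is not None   # 1 + 4 == 2 + 3
assert decomposable_2x2([[1, 2], [3, 5]]) is None       # 1 + 5 != 2 + 3
```

A fully marked $2\times2$ board is the smallest closed rook route, so this is also the smallest instance of the game in problem 1(b).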
Consider a closed array $\{a_1,\dots,a_n=a_1\}$. A [*decomposition*]{} for such an array is an assignment of numbers at the projections of the points of the array on the $x$-axis and on the $y$-axis. Is it possible to put numbers $f_1,\dots,f_n\in\R$, where $f_1=f_n$, at the points of the array so that for each decomposition there exists an $f_i$ that is not equal to the sum of the two numbers at $x(a_i)$ and $y(a_i)$? A subset $K\subset\R^2$ is called [*discontinuously basic*]{} if for each function $f:K\to\R$ there exist functions $g,h:\R\to\R$ such that $f(x,y)=g(x)+h(y)$ for each point $(x,y)\in K$. \(a) The segment $K=0\times[0;1]\subset\R^2$ is discontinuously basic. \(b) The cross $K=0\times[-1;1]\cup[-1;1]\times0\subset\R^2$ is discontinuously basic. \(c) [*A criterion for a subset of the plane to be discontinuously basic.*]{} A subset of the plane is discontinuously basic if and only if it does not contain any closed arrays. Given a set of marked unit cubes in the cube $8\times8\times8$, how can we see who wins in the 3D analogue of the ‘Dare you to decompose!’ game? In this analogue V. I. tries to choose 24 numbers $a_1,\dots,a_8,b_1,\dots,b_8,c_1,\dots,c_8$ so that the number at the unit cube $(i,j,k)$ would be equal to the sum $a_i+b_j+c_k$ of the three weights. \(a) Define discontinuously basic subsets of 3-space. Discover and prove the 3D analogue of the above criterion. \(b) The same for the higher-dimensional case. \(a) It is not true. If $f_{ij}=g_i+h_j$ for each $i,j=1,2$, then $f_{11}+f_{22}=f_{12}+f_{21}$, but this is false for some numbers $f_{ij}$. \(b) The statement ‘only if’ follows from problem 2. Let us prove the ‘if’ part by induction on the number of the marked cells. If only one cell is marked then we are done. Let $K$ be the set of centres of the marked cells. The set $E(K)$ is defined in the following subsection after Problem 9. The set $K$ does not contain any closed array, therefore $\#E(K)<\#K$. So by the induction hypothesis V.
I. can win for $E(K)$. Each cell from $K-E(K)$ is the only marked cell in its row or column, thus V. I. can choose the remaining weights for $K$. Yes, it is. If every $f_i$ is equal to the sum of two numbers at $x(a_i)$ and $y(a_i)$, then $f_1-f_2+f_3- \dots -f_{n-1}=0$, but this is false for some numbers $f_i$. \(a) Set $h(y)=f(0,y)$ and $g(x)=0$. \(b) Set $g(x)=f(x,0)$ and $h(y)=f(0,y)-f(0,0)$. \(c) The statement ‘only if’ follows from problem 2. Let us prove the ‘if’ part. Consider a function $f:K\to \R$. Our aim is to construct functions $g$ and $h$ so that $f(x,y)=g(x)+h(y)$. Two points $a,b\in K$ are called [*equivalent*]{} if there is an array $\{a=a_1,\dots,a_n=b\}\subset K$. Now take an equivalence class $K_1\subset K$. Define functions $g:x(K_1)\to\R$ and $h:y(K_1)\to\R$ in the following way. Take any point $a_1\in K_1$ and set $g(x(a_1))=f(a_1)$ and $h(y(a_1))=0$. If $\{a_1,a_2,\dots,a_{2l}\}$ is an array, then set $$h(y(a_{2l})):=f(a_{2l})-f(a_{2l-1})+\dots -f(a_1)\quad\text{and} \quad g(x(a_{2l})):=f(a_{2l-1})-f(a_{2l-2})+\dots+f(a_1).$$ If $\{a_1,a_2,\dots,a_{2l+1}\}$ is an array, then set $g(x(a_{2l+1})):=f(a_{2l+1})-f(a_{2l})+\dots+f(a_1)$ ($h(y(a_{2l+1}))$ is already defined). Make this construction for each equivalence class. Then set $g=0$ and $h=0$ at all other points of $\R$. A subset $K\subset\R^2$ is called [*(continuously) basic*]{} if for each continuous function $f:K\to\R$ there exist continuous functions $g,h:\R\to\R$ such that $f(x,y)=g(x)+h(y)$ for each point $(x,y)\in K$. \[Ar58\] In order to approach a solution consider some examples. \(a) A closed array is not basic. \(b) The segment $K=0\times[0;1]\subset\R^2$ is basic. \(c) The cross $K=0\times[-1;1]\cup[-1;1]\times0\subset\R^2$ is basic. \(d) The graph $V$ of the function $y=|x|$, $x\in[-1;1]$ is basic.
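The construction in the solution of 3(c) is effectively a breadth-first propagation of alternating sums through each equivalence class. A Python sketch for finite point sets (the helper name `split_sum` is ours; the input is assumed to contain no closed arrays, as the criterion requires):

```python
from collections import defaultdict, deque

def split_sum(points, f):
    """For a finite K (list of (x, y) pairs) without closed arrays and
    values f[(x, y)], build dictionaries g (on x-values) and h (on
    y-values) with f[(x, y)] == g[x] + h[y]."""
    by_x, by_y = defaultdict(list), defaultdict(list)
    for p in points:
        by_x[p[0]].append(p)
        by_y[p[1]].append(p)
    g, h, seen = {}, {}, set()
    for start in points:
        if start in seen:
            continue
        # base point of a new equivalence class: g(x(a1)) = f(a1), h(y(a1)) = 0
        g[start[0]], h[start[1]] = f[start], 0.0
        seen.add(start)
        queue = deque([start])
        while queue:
            p = queue.popleft()
            # every point sharing a coordinate with p is equivalent to p
            for q in by_x[p[0]] + by_y[p[1]]:
                if q in seen:
                    continue
                if q[0] in g:              # column weight known: h is forced
                    h[q[1]] = f[q] - g[q[0]]
                else:                      # row weight known: g is forced
                    g[q[0]] = f[q] - h[q[1]]
                seen.add(q)
                queue.append(q)
    return g, h

# An open 4-point array: consecutive points share x, then y, then x.
K = [(0, 0), (0, 1), (1, 1), (1, 2)]
f = {(0, 0): 5.0, (0, 1): -2.0, (1, 1): 3.0, (1, 2): 7.0}
g, h = split_sum(K, f)
assert all(abs(f[p] - (g[p[0]] + h[p[1]])) < 1e-12 for p in K)
```

If the input did contain a closed array, the forced value at the last point of the array would in general contradict an earlier assignment — which is exactly the obstruction of problems 2 and 6a.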
A sequence of points $\{a_1,\dots,a_n,\dots\}\subset\R^2$ [*converges to a point $a\in\R^2$*]{} if for each $\eps>0$ there exists an integer $N$ such that for each $i>N$ we have $|a_i,a|<\eps$. \(a) If a subset of the plane is basic, then it is discontinuously basic. \(b) A [*completed array*]{} is the union of a point $a_0\in\R^2$ with an infinite array $\{a_1,\dots,a_n,\dots\}\subset\R^2$ of distinct points which converges to the point $a_0$. Prove that any completed array is not basic. (Note that it is discontinuously basic.) \(c) Let $[a,b]$ be the rectilinear arc which connects points $a$ and $b$. Prove that the cross $K=[(-1,-2),(1,2)]\cup[(-1,1),(1,-1)]$ is not basic. \(d) Let $m_{ij}=2-3\cdot2^{-i}+j\cdot2^{-2i}$. Consider the set of points $(m_{i,2l},m_{i,2l})$ and $(m_{i,2l},m_{i,2l-2})$, where $i$ varies from 1 to $\infty$ and $l=1,2,3,\dots,2^{i-1}$. Prove that this subset of the plane does not contain any infinite arrays but contains arbitrarily long arrays. \(e) The union of the set from the previous problem and the point $(2,2)$ is not basic. A subset $K\subset\R^2$ of the plane is [*closed*]{}, if for each sequence $a_i\in K$ converging to a point $a$ this point belongs to $K$. A subset $K\subset\R^2$ of the plane is closed if and only if for each point $a\not\in K$ there exists $\eps>0$ such that if for a point $b$ of the plane we have $|a,b|\le\eps$, then $b$ does not belong to $K$. \(a) The criterion is false without the assumption that $K$ is closed. \(b) The criterion is false without the assumption that $K$ is bounded. (c)\*\* Find a criterion of being a basic subset for closed (but unbounded) subsets of the plane. Suppose that $K$ is a subset of $\R^2$. For every point $v\in K$ consider the pair of lines passing through $v$ and parallel to the $x$-axis and the $y$-axis. If one of these two lines intersects $K$ only at point $v$, we colour $v$ in white.
Define $E(K)$ as the set of noncoloured points of $K$: $$E(K)=\{v\in K:\ |K\cap(x=x(v))|\ge2\text{ and }|K\cap(y=y(v))|\ge2\}.$$ Let $E^2(K)=E(E(K))$, $E^3(K)=E(E(E(K)))$ etc. A subset $K$ of the plane does not contain arbitrarily long arrays if and only if $E^n(K)=\emptyset$ for some $n$. (a)\* Give an elementary proof that if $K$ is a closed bounded subset of $\R^2$ and $E(K)=\emptyset$, then $K$ is basic \[Mi09\]. Hint. It can be proven that for piecewise-linear maps $f$ there is a decomposition $f(x,y)=g(x)+h(y)$ with $|g|+|h|<5|f|$. (b)\* Prove the ‘if’ part of the criterion without using the functional spaces as below. Hint. Same as above with $|g|+|h|<C_n|f|$, where $C_n$ depends only on that $n$ for which $E^n(K)=\emptyset$. A subset $K\subset\R^3$ is called [*(continuously) basic*]{} if for each continuous function $f:K\to\R$ there exist continuous functions $g,h,l:\R\to\R$ such that $f(x,y,z)=g(x)+h(y)+l(z)$ for each point $(x,y,z)\in K$. \(a) The ‘hedgehog’ $0\times0\times[-1;1]\cup0\times[-1;1]\times0\cup [-1;1]\times0\times0\subset\R^3$ is basic. \(b) The set of 4 points $(0,0,0)$; $(1,1,0)$; $(0,1,1)$; $(1,0,1)$ is basic. (But $E^n(K)\neq\emptyset$ for each $n$, see below.) (c)\* Define $E(K)$ analogously to the above, only instead of lines use planes orthogonal to the axes: $$E(K)=\{v\in K: \ |K\cap(x=x(v))|\ge2,\ |K\cap(y=y(v))|\ge2\text{ and }|K\cap(z=z(v))|\ge2\}.$$ Let $K$ be a closed bounded subset of $\R^3$. Prove that if $E^n(K)=\emptyset$ for some $n$, then $K$ is basic \[St89, Lemma 23.ii\]. \(a) If a closed array $A=\{a_1,\dots,a_{2l+1}\}$ is basic, then $f(a_1)-f(a_2)+\dots+f(a_{2l-1})-f(a_{2l})=0$. But this is false for some functions $f$. Cf. problem 2. (b),(c) Analogously to problems 3a,3b. \(d) Take $h(y)=0$ and $g(x)=f(x,|x|)$. \(a) If the subset is not discontinuously basic, then it contains a closed array. Hence the statement follows by extending $f$ to the subset and using problem 6a. \(b) Define a function $f$ by $f(a_n)=\frac{(-1)^n}n$.
Suppose that $f(x,y)=g(x)+h(y)$ for some $g$ and $h$. Then $$f(a_1)-f(a_2)+f(a_3)-f(a_4)+ \dots-f(a_{2l})=h(y(a_1))-h(y(a_{2l})).$$ Since $\lim_{l\to\infty}h(y(a_{2l}))$ exists and equals $h(y(a_0))$, it follows that $\sum_{i=1}^{2l} (-1)^i f(a_i)$ converges as $l\to\infty$, which is a contradiction. \(c) The cross contains a completed array $$a_{4k+1}=(-2^{-2k},2^{-2k}),\ a_{4k+2}=(2^{-2k-1},2^{-2k}), \ a_{4k+3}=(2^{-2k-1},-2^{-2k-1}),\ a_{4k+4}=(-2^{-2k-2},-2^{-2k-1}).$$ Define a function $f$ on this array using problem 7b and then extend it (e.g. piecewise linearly) to the cross. Then there are no functions $g$ and $h$ such that $f(x,y)=g(x)+h(y)$. \(d) For every $i$ the set $\{(m_{i,2l},m_{i,2l})\}_{l=1}^{2^{i-1}}\cup \{(m_{i,2l},m_{i,2l-2})\}_{l=1}^{2^{i-1}}$ is an array of $2^i$ points. \(e) Define a function $f$ by $$f((m_{i,2l},m_{i,2l})):=2^{-i}\quad\text{and} \quad f((m_{i,2l},m_{i,2l-2})):=-2^{-i}.$$ If $f(x,y)=g(x)+h(y)$ for some $g$ and $h$, then for every $i$, using the array of points $(m_{i,2l},m_{i,2l})$ and $(m_{i,2l},m_{i,2l-2})$, where $l=1,2,3,\dots,2^{i-1}$, we obtain $h(2-\frac 3{2^i})-h(2-\frac 2{2^i})=1$. This contradicts the continuity of $h$. Let us prove the ‘only if’ part. Let $K$ be a closed subset of the plane. Suppose that for some point $a=(x,y)\not\in K$ and for each $\eps=\frac 1n>0$ there exists a point $a_n\in K$ (at least one) such that $|a,a_n| \le \frac 1n$. The sequence of points $a_n\in K$ converges to the point $a$, thus $a\in K$. Contradiction. Now let us prove the ‘if’ part. Suppose that a sequence $a_n\in K$ converges to a point $a=(x,y)$ not belonging to $K$. Then there exists $\eps>0$ such that the distance $|a,a_n|>\eps$ for every point $a_n\in K$. This contradicts the convergence. \(a) Any infinite array $A$ not containing closed arrays and converging to a point $a\not\in A$ is basic. This follows because each function defined on $A$ is continuous. \(b) A counterexample is $\{(k,k)\}_{k=1}^\infty\cup\{(k,k-1)\}_{k=1}^\infty$.
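The deleting operation $E$ and the criterion above ($K$ contains no arbitrarily long arrays if and only if $E^n(K)=\emptyset$ for some $n$) are easy to experiment with on finite sets. A minimal Python sketch with our own function names:

```python
def E(K):
    """One step of the deleting operation: keep the points of K whose
    vertical and horizontal lines each meet K in at least two points."""
    K = set(K)
    return {v for v in K
            if sum(1 for w in K if w[0] == v[0]) >= 2
            and sum(1 for w in K if w[1] == v[1]) >= 2}

def stabilizes_to_empty(K):
    """Return True iff E^n(K) is empty for some n; by the criterion
    above, this holds iff K contains no arbitrarily long arrays."""
    K = set(K)
    for _ in range(len(K) + 1):   # for finite K, E strictly shrinks or fixes
        if not K:
            return True
        K2 = E(K)
        if K2 == K:               # fixed point reached: never empties
            return False
        K = K2
    return not K

# A 4-point closed array (the vertices of a square) survives E forever ...
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
assert not stabilizes_to_empty(square)
# ... while an open 3-point array is wiped out in two steps.
assert stabilizes_to_empty([(0, 0), (0, 1), (1, 1)])
```

For a finite set each application of $E$ either removes a point or reaches a fixed point, so the loop bound $\#K+1$ suffices; this mirrors the counting argument $\#E(K)<\#K$ used in the solution of problem 1(b).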
Let us prove the ‘only if’ part. Suppose that $E^n(K)\neq\emptyset$ for each $n$. For each $n$ take a point $a_0\in E^n(K)$. Then there exist points $a_{-1},a_1\in E^{n-1}(K)$ such that $x(a_{-1})=x(a_0)$ and $y(a_1)=y(a_0)$. Analogously there exist points $a_{-2},a_2\in E^{n-2}(K)$ such that $\{a_{-2},a_{-1},a_0,a_1,a_2\}$ is an array. Analogously we construct an array of $2n+1$ points in $K$, which is a contradiction. Let us prove the ‘if’ part. Suppose that $K$ contains an array of $2n+1$ points $\{a_{-n},\dots,a_0,\dots,a_{n}\}$. Then there is an array of $2n-1$ points $\{a_{-n+1},\dots,a_{n-1}\}$ in $E(K)$. Analogously $a_0\in E^n(K)$. Thus if $E^n(K)=\emptyset$, then $K$ does not contain an array of $2n+1$ points. \(a) For each function $f:K\to\R$ define $g(x):=f(x,0,0)$, $h(y):=f(0,y,0)-f(0,0,0)$ and $l(z):=f(0,0,z)-f(0,0,0)$. \(b) Set $g(0)=f(0,0,0)$, $h(0)=0$, $l(0)=0$, $$2g(1)=f(0,0,0)+f(1,1,0)+f(1,0,1)-f(0,1,1),\quad 2h(1)=-f(0,0,0)+f(1,1,0)-f(1,0,1)+f(0,1,1)$$ $$\text{and}\quad 2l(1)=-f(0,0,0)-f(1,1,0)+f(1,0,1)+f(0,1,1).$$ Let $K$ be a closed bounded subset of the plane. It is known that each continuous function $f:K\to \R$ is bounded. A function $f:K\to\R$ is called [*bounded*]{}, if there exists a number $M$ such that $|f(x)|<M$ for every $x\in K$. For a bounded function $G:K\to\R$ denote $|G|:=\sup_{x\in K}|G(x)|$. Assume to the contrary that $K$ contains arbitrarily long arrays and is basic. Choosing subsequences we may assume that points of each array are distinct. Therefore for each $n$ there is an array $\{a^n_1,\dots,a^n_{2n+5}\}$ of $2n+5$ distinct points in $K$. Then there exists a continuous function $$f_n:K\to\R\quad\mbox{such that}\quad f_n(a^n_i)=(-1)^i\quad\mbox{and}\quad |f_n(x)|\le1\quad\mbox{for each}\quad x\in K.$$ (Indeed, first define such a continuous function $f:\R^2\to \R$. Denote $s=\min_{i<j}|a_i,a_j|$. Take $2n+5$ disks with centers $a_i$ and radii $\frac s3$. Outside of these disks set $f=0$.
Inside the $i$-th disk take $f$ to be $(-1)^i$ in the center $a_i$, $0$ on the boundary and extend it linearly in the distance to $a_i$. Then restrict $f$ to $K\subset \R^2$.) Define integers $s_n$ and functions $F_n:K\to\R$ inductively as follows. Set $s_0=1$ and $F_0=0$. Suppose now that $F_{n-1}$ and $s_{n-1}$ are defined. If $F_{n-1}$ is not representable as $G_{n-1}(x)+H_{n-1}(y)$, then we are done. If it is representable in this way, then take $$s_n>s_{n-1}!(|G_{n-1}|+n) \quad \text{ and } \quad F_n=F_{n-1}+\dfrac{f_{s_n}}{s_{n-1}!}.$$ It remains to prove that if we can construct in this way an infinite number of $s_n$ and $F_n$, then the function $$F=\lim\limits_{n\to\infty}F_n=\sum\limits_{n=1}^\infty\frac{f_{s_n}}{s_{n-1}!}$$ is not representable as $G(x)+H(y)$. Assume to the contrary that $F(x,y)=G(x)+H(y)$ for some $G$ and $H$. It suffices to prove that $|G|>n$ for each $n$. For this it suffices to prove that $s_{n-1}!|G-G_{n-1}|>s_n$: then we would have $$|G|+|G_{n-1}|\ge|G-G_{n-1}|>\frac{s_n}{s_{n-1}!}> |G_{n-1}|+n.$$ [**Lemma.**]{} *Let $m\ge 4$,* $\bullet$ $K=\{a_1,\dots,a_{2m+5}\}$ be an array of $2m+5$ distinct points, $\bullet$ $f(a_1),\dots,f(a_{2m+5})$ be numbers such that $|(-1)^i-f(a_i)|\le 1/m$, $\bullet$ $g(x(a_i)),h(y(a_i))$, $i=1,\dots,2m+5$, be numbers such that $f(a_i)=g(x(a_i))+h(y(a_i))$ for each $i$. Then $\max_i |g(x(a_i))|>m$. We may assume that $a_1a_2\| Ox$. Then $$\left|\bigl(f(a_1)-f(a_2)+f(a_3)-f(a_4)+\dots-f(a_{2m+4})\bigr)+(2m+4)\right|\le\frac{2m+4}m\le3.$$ Therefore $|g(x(a_1))-g(x(a_{2m+4}))|\ge(2m+4)-3>2m$. This implies the required inequality.
We have $$F-F_n=F-F_{n-1}-\frac{f_{s_n}}{s_{n-1}!}= \frac{s_{n-1}!(F-F_{n-1})-f_{s_n}}{s_{n-1}!}.$$ Apply the Lemma to $$m=s_n,\quad a_i=a_i^{s_n},\quad f=s_{n-1}!(F-F_{n-1}), \quad g=s_{n-1}!(G(x)-G_{n-1}(x)), \quad h=s_{n-1}!(H(y)-H_{n-1}(y)).$$ This is possible because $f(x,y)=g(x)+h(y)$ and (since $s_n-1>s_{n-1}$ for $n>2$) $$|f-f_{s_n}|=s_{n-1}!|F-F_n|<\frac1{(s_n-1)\cdot s_n}\sum\limits_{k=0}^\infty \frac1{(s_n+1)\cdot\dots\cdot s_{n+k}}< \frac1{(s_n-1)\cdot s_n}\sum\limits_{k=0}^\infty\frac1{2^k}<\frac1{s_n}.$$ By the Lemma we obtain $s_{n-1}!|G-G_{n-1}|>s_n$. The proof is based on a reformulation of the property of being a basic subset in terms of [*bounded linear operators*]{} in [*Banach functional spaces*]{}. Denote by $C(X)$ the space of continuous functions on $X$ with the norm $|f|=\sup\limits\{|f(x)|\ :\ x\in X\}$. In this proof denote by $pr_x(a)$ and $pr_y(a)$ the projections of a point $a\in K$ on the coordinate axes. For $K\subset I^2:=[0;1]\times[0;1]$ define a map ([*linear superposition operator*]{}) $$\phi\colon C(I)\oplus C(I)\to C(K)\quad\text{by}\quad \phi(g,h)(x,y):=g(x)+h(y).$$ Clearly, the subset $K\subset I^2$ is basic if and only if $\phi$ is surjective, or equivalently, epimorphic. Denote by $C^*(X)$ the space of [*bounded linear functionals*]{} $C(X)\to\R$ with the norm $|\mu|=\sup\{|\mu(f)|\ :\ f\in C(X),\ |f|=1\}$. For a subset $K\subset I^2$ define a map ([*dual linear superposition operator*]{}) $$\phi^*\colon C^*(K)\to C^*(I)\oplus C^*(I)\quad\text{by}\quad \phi^*\mu(g,h):=(\mu(g\circ pr_x),\mu(h\circ pr_y)).$$ Since $|\phi^*\mu|\le2|\mu|$, it follows that $\phi^*$ is bounded. By duality, $\phi$ is epimorphic if and only if $\phi^*$ is monomorphic. [^4] It is clear that $\phi^*$ is monomorphic if and only if [*(\*) there exists $\varepsilon>0$ such that $|\phi^*\mu|>\varepsilon|\mu|$ for each nonzero $\mu\in C^*(K)$.*]{} We leave as an exercise the proof that (\*) implies the absence of arbitrarily long arrays.
(This proves the ‘only if’ part of the criterion, for which we already have an elementary proof.) So it remains to prove that $E^n(K)=\emptyset$ implies the condition (\*). We present the proof for $n\in\{1,2\}$. The proof for arbitrary $n$ is analogous. We use the following non-trivial fact: [*$C^*(X)$ is the space of $\sigma$-additive regular real valued Borel measures on $X$*]{} (in the sequel we call them simply ‘measures’). We have $$\phi^*\mu=(\mu_x,\mu_y),\quad\text{where}\quad\mu_x(U)=\mu(pr_x^{-1}U) \quad\text{and} \quad \mu_y(U)=\mu(pr_y^{-1}U)\quad\text{for each Borel set}\quad U\subset I.$$ If $\mu=\mu^+-\mu^-$ is the decomposition of a measure $\mu$ into its positive and negative parts, then $|\mu|=\bar\mu(X)$, where $\bar\mu=\mu^++\mu^-$ is the absolute value of $\mu$. Let $D_x$ ($D_y$) be the set of points of $K$ which are not shadowed by some other point of $K$ in the $x$- (respectively $y$-) direction. Take any measure $\mu$ on $K$ of norm 1. If $n=1$, then $$E(K)=\emptyset,\quad\text{hence}\quad D_x\cup D_y=K,\quad\text{so}\quad 1=\bar\mu(K)\le\bar\mu(D_x)+\bar\mu(D_y).$$ Therefore without loss of generality, $\bar\mu(D_x)\ge1/2$. Since the projection onto the $x$-axis is injective over $D_x$, it follows that $|\mu_x|\ge1/2$, thus the required assertion holds for $\varepsilon=\frac 12$. If $n=2$, then $$E(E(K))=\emptyset,\quad\text{hence}\quad D_x\cup D_y=K-E(K),\quad\text{so} \quad E(D_x\cup D_y)=\emptyset.$$ In the case when $\bar\mu(E(K))<3/4$ we have $\bar\mu(D_x\cup D_y)>1/4$ and without loss of generality $\bar\mu(D_x)>1/8$. Then as for $n=1$ we have $|\mu_x|>1/8$, thus (\*) holds for $\varepsilon =\frac 18$. In the case when $\bar\mu(E(K))\ge3/4$ we have $\bar\mu(K-E(K))\le1/4$. By the case $n=1$ above without loss of generality $\bar\mu_x(pr_x(E(K)))\ge\bar\mu(E(K))/2$. Hence $|\mu_x|\ge\frac12\cdot\frac34-\frac14=\frac18$, thus (\*) holds for $\varepsilon=\frac 18$. Let $K$ be a subset of the plane $\R^2$.
A function $f:K\to\R$ is called [*differentiable*]{} if for each point $z_0\in K$ there exist a vector $a\in\R^2$ and an infinitesimal function $\alpha:\R^2\to\R$ such that for each point $z\in K$ $$f(z)=f(z_0)+a\cdot(z-z_0)+\alpha(z-z_0)|z,z_0|.$$ Here the dot denotes the scalar product of the vectors $a=:(f_x,f_y)$ and $z-z_0=:(x,y)$, i.e. $a\cdot(z-z_0)=xf_x + yf_y$. A function $\alpha:\R^2\to\R$ is [*infinitesimal*]{}, if for each number $\eps >0$ there exists a number $\delta>0$ such that for each point $(x,y)\in \R^2$ $$\text{if}\quad \sqrt{x^2+y^2}<\delta,\quad\text{then} \quad|\alpha(x,y)|<\eps.$$ Let $V$ be the graph of the function $y=|x|$, where $x\in[-1;1]$. A function $f:V\to\R$ is differentiable if and only if $f(x,|x|)$ is differentiable on the segments $[-1;0]$ and $[0;1]$. A subset $K\subset\R^2$ of the plane is called [*differentiably basic*]{} if for each differentiable function $f:K\to\R$ there exist differentiable functions $g:\R\to\R$ and $h:\R\to\R$ such that $f(x,y)=g(x)+h(y)$ for each point $(x,y)\in K$. \(a) (b) (c) Solve the analogues of problem 6 for differentiably basic sets. \(a) The graph $V$ is differentiably basic. \(b) $W:=(V-(2,0))\cup(V+(2,0))$ is not differentiably basic. \(c) The broken line whose consecutive vertices are $(-2,0)$, $(-1,1)$, $(0,0)$, $(1,1)$ and $(2,0)$ is not differentiably basic. (Note that it is continuously basic.) \(d) The completed array $\{([\frac{n+1}2]^{-1/2},[\frac n2]^{-1/2})\}_{n=2}^{\infty}\cup\{(0,0)\}$ is not differentiably basic. (Note that it is also not continuously basic.) \(e) The completed array $\{(2^{-[\frac{n+1}2]},2^{-[\frac n2]})\}_{n=1}^{\infty}\cup\{(0,0)\}$ is differentiably basic. (Note that it is not continuously basic.) \(f) (I. Shnurnikov) The cross $K=[(-1,-2),(1,2)]\cup[(-1,1),(1,-1)]$ is not differentiably basic. (This assertion and Conjecture 15a imply that the property of being differentiably basic is not hereditary.)
\(g) If a graph is basically embeddable in the plane, then it is differentiably basically embeddable in the plane. (This is non-trivial because the plane contains graphs which are basic but not differentiably basic and vice versa.) \[RZ06\] \(a) (I. Shnurnikov) A completed array $\{a_n\}_{n=1}^\infty\cup\{(0,0)\}$ is differentiably basic if and only if the sequence $\frac{\sum\limits_{n=k}^\infty|a_n|}{|a_k|}$ is bounded. \(b) The subset $\{(t^2,\frac {t^2}{(1+t)^2})\}_{t\in[-\frac 12;\frac 12] }$ of the plane is not differentiably basic. Hint. One can try to prove this analogously to 14f. Cf. \[Vo81, Vo82\]. \(c) A piecewise-linear graph in $\R^2$ is differentiably basic if and only if it does not contain arbitrarily long arrays and for any two singular points $a$ and $b$ we have $x(a)\ne x(b)$ and $y(a)\ne y(b)$. A point $a\in K$ is [*singular*]{} if the intersection of $K$ with each disk centered at $a$ is not a rectilinear arc. It would be interesting to find a criterion of being differentiably basic for closed bounded subsets of the plane. Apparently a simple-to-state criterion (analogous to the Sternfeld criterion) does not exist. Another interesting question: is there a continuous map $[0;1]\to\R^2$ whose image is differentiably basic but not basic? Let $r\ge0$ be an integer and $K\subset\R^2$ a subset. A function $f:K\to\R$ is called [*$r$ times differentiable*]{} if for each point $z_0\in K$ there exist a polynomial $\overline f(z)=\overline f(x,y)$ of degree at most $r$ in 2 variables $x$ and $y$ and an infinitesimal function $\alpha:\R^2\to\R$ such that $f(z)=\overline f(z-z_0)+\alpha(z-z_0)|z,z_0|^r$ for each point $z\in K$. (This definition differs from the one generally accepted.) \(a) Functions differentiable zero times are exactly continuous functions, and functions differentiable one time are exactly differentiable functions. \(b) For each positive integer $r$ define the property of being an $r$ times differentiably basic subset of the plane $\R^2$.
\(c) For each integer $k\ge0$ there is a subset of the plane which is $r$ times differentiably basic for $r=0,1,\dots,k$ but is not $r$ times differentiably basic for each $r>k$. (d)\*\* Find a criterion for graphs in $\R^2$ to be $r$ times differentiably basic. (a), (b), (c) Analogously to problems 6(a), 3(a) and 3(b). \(a) Take a differentiable function $f:V\to\R$. Since $f$ is differentiable at $(0,0)$, it follows that there exist $a,b\in\R$ such that $$f(x,|x|)=f(0,0)+ax+b|x|+\alpha(x),\quad\text{where} \quad \alpha(x)=o(\sqrt{x^2+|x|^2})\quad\text{when}\quad x\to0.$$ Take $h(y):=by$ and $g(x):=f(0,0)+ax+\alpha(x)$. Clearly, $h$ is differentiable and $g$ is differentiable outside 0. Since $\alpha(x)=o(x)$ when $x\to0$, it follows that $g$ is differentiable also at 0. \(b) See 16c for $k=0$. \(c) Suppose the broken line is differentiably basic. The function $f(x,y)=xy$ is differentiable. We have $f(x,y)=g(x)+h(y)$, where both $g$ and $h$ are differentiable. Then $$2-2d=f(1+d,1-d)+f(1-d,1-d)=g(1+d)+g(1-d)+2h(1-d)= 2g(1)+2h(1)-2h'(1)d+o(d).$$ Hence $h'(1)=1$. Analogously $$2d-2=f(-1+d,1-d)+f(-1-d,1-d)=g(-1+d)+g(-1-d)+2h(1-d)= 2g(-1)+2h(1)-2h'(1)d+o(d).$$ Hence $h'(1)=-1$. A contradiction. \(d) Suppose that this completed array is differentiably basic. Set $a_n=([\frac{n+1}2]^{-1/2},[\frac n2]^{-1/2}),$ $f(a_n):=\frac{(-1)^n}n$, $n=2,3,\dots$. If $f(x,y)=g(x)+h(y)$ for some functions $g(x)$ and $h(y)$, then the series $f(a_2)-f(a_3)+f(a_4)-\dots$ converges to $g(1)-g(0)$ (analogously to Problem 7b). This is a contradiction because the series $\frac 12 +\frac 13+ \frac 14 + \dots $ diverges. \(e) Without loss of generality assume that $f(0,0)=0$, then take $g(0)=0$ and $h(0)=0$. Set $$h(2^{-k})=f(2^{-(k+1)},2^{-k})-f(2^{-(k+1)},2^{-(k+1)})+f(2^{-(k+2)},2^{-(k+1)})- \dots,$$ $$g(2^{-k})=f(2^{-k},2^{-k})-f(2^{-(k+1)},2^{-k})+ f(2^{-(k+1)},2^{-(k+1)})-\dots,$$ where the right-hand sides are sums of alternating series.
Now $g(x)$ and $h(y)$ may be extended to differentiable functions $\R\to\R$. \(f) Define $$w(0)=w(4^{-i}+4^{-3i})=w(4^{-i})=0\quad\text{and} \quad w(4^{-i}+4^{-3i-1})=2^{3i}\quad\text{for}\quad i=1,2,3,\dots.$$ Extend piecewise-linearly to obtain a function $w:[0;1]\to \R$. For $x\in[0;1]$ define $W(x)$ as the area under the graph of $w$ on $[0;x]$. (This is well-defined because this area is finite.) Define $f(x,-x)=W(x)$ for $x\in[0;1]$ and $f(x,y)=0$ on the rest of the cross. Clearly, $f$ is differentiable outside $(0,0)$. It is easy to check that $f$ is differentiable at $(0,0)$. Suppose that $f(x,y)=g(x)+h(y)$ for some differentiable functions $g$ and $h$. Without loss of generality we assume that $g(0)=h(0)=0$. The function $g$ is not differentiable at $x=1/4$ because for $0<d<\frac 14$ we have $$g\left(\frac14+d\right)-g\left(\frac14\right)= W\left(\frac14+d\right)-W\left(\frac14\right)+ W\left(\frac1{4^2}+\frac d4\right)-W\left(\frac1{4^2}\right)+\dots >$$ $$>W\left(\frac1{4^{k+1}}+\frac d{4^k}\right)-W\left(\frac1{4^{k+1}}\right)= \frac{2^{3k}\cdot 4^{-3k}}2\ge \frac {(4d)^{3/4}}2.$$ Here $\bullet$ the first equality is proved using two infinite arrays starting at points $(\frac14+d,-\frac14-d)$ and $(\frac14,-\frac14)$ and converging to the point $(0,0)$; $\bullet$ $k\ge0$ is such that $4^{-2k}\ge 4d>4^{-2(k+1)}$; $\bullet$ the first inequality follows because $W$ is a non-decreasing function; $\bullet$ the second inequality follows because $\frac d{4^k}>\frac1{4^{3(k+1)}}$; $\bullet$ the second equality follows by definition of $k$. (In the same way one can prove that $g$ is not differentiable at $x=4^{-i}$ for each $i$.) \(a) Hints. For the ‘only if’ part use the idea of Problem 7b and prove that if $\sum\limits_{n=1}^\infty|a_n|=\infty$, then there is a sequence $b_n\to0$ such that $\sum\limits_{n=1}^\infty|a_n|b_n=\infty$.
For the ‘if’ part we may assume that numbers $x(a_i)$ are distinct, numbers $y(a_i)$ are distinct, $x(a_{2i})=x(a_{2i+1})$, $y(a_{2i})=y(a_{2i-1})$. If $f(0,0)=0$, define $$g(x(a_{2i})):=f(a_1)-f(a_2)+f(a_3)-\dots+f(a_{2i+1}), \quad g(0):=\sum_{i=1}^{\infty}(-1)^if(a_i),$$ $$h(y(a_{2i})):=-f(a_1)+f(a_2)-f(a_3)-\dots+f(a_{2i-2})\quad\text{and} \quad h(0):=\sum_{i=1}^{\infty}(-1)^if(a_i).$$ Prove that $g$ and $h$ are differentiable at 0. Since $f$ is differentiable at points $(-1,1)$ and $(1,1)$, the following relations hold for sufficiently small $d>0$: $$f(-1+d,1-d)-f(-1,1)=f_1d-f_2d+\alpha_{(-1,1)}(d,-d)|(d,-d)|,$$ $$f(-1-d,1-d)-f(-1,1)=-f_1d-f_2d+\alpha_{(-1,1)}(-d,-d)|(-d,-d)|,$$ $$f(1+d,1-d)-f(1,1)=f_3d-f_4d+\alpha_{(1,1)}(d,-d)|(d,-d)|\quad\text{and}$$ $$f(1-d,1-d)-f(1,1)=-f_3d-f_4d+\alpha_{(1,1)}(-d,-d)|(-d,-d)|.$$ Also we have $f(x,y)=g(x)+h(y)$ and both $g(x)$, $h(y)$ are differentiable. Hence $$f(-1+d,1-d)-f(-1,1)=g(-1+d)-g(-1)+h(1-d)-h(1)= g'(-1)d-h'(1)d+\alpha(d)d\quad\text{and}$$ $$f(-1-d,1-d)-f(-1,1)=g(-1-d)-g(-1)+h(1-d)-h(1)= -g'(-1)d-h'(1)d+\alpha(d)d.$$ Therefore $h'(1)=f_2$ (and $g'(-1)=f_1$). Analogously $h'(1)=f_4$. Thus $h'(1)=f_2=f_4$. But for the function $f(x,y)=xy$ we have $f_4=1\neq f_2=-1$. To 14e, 15a: Let us introduce some notation for arrays converging to $(0,0)$. Denote $$a_1 = (x_1,y_1), \quad a_2=(x_2,y_1),\quad a_3=(x_2,y_2), \dots$$ $$a_{2k}=(x_{k+1},y_k),\quad a_{2k+1}=(x_{k+1},y_{k+1}),\dots$$ $$f_n:=f(a_n),\quad g_k:=g(x_k),\quad h_k:=h(y_k),$$ $$f(0,0):=f_\infty, \quad g(0):=g_\infty, \quad h(0):=h_\infty.$$ In our case it suffices to consider differentiability only at the point $(0,0)$, because at other points any functions $g(x)$ and $h(y)$ will be differentiable.
The function $f$ is differentiable at the point $(0,0)$ if there exist $a,b\in\R$ for which $f_{2k-1}=f_\infty+\frac{a}{2^k}+\frac{b}{2^k}+\frac{\alpha_k}{2^k}$ and $f_{2k}=f_\infty+\frac{a}{2^{k+1}}+\frac{b}{2^k}+\frac{\beta_k}{2^k}$, where $\alpha_k$ and $\beta_k$ are infinitesimal. In order to prove that the completed array is differentiably basic we only need to consider the case when $f_{\infty}=a=b=0$. Indeed, denote $s(x,y)=f(x,y)-f_\infty-ax-by$; then from an expansion of the function $s$ the expansion of the function $f$ is easily constructed. Thus it suffices to prove the existence of the expansion $f(x,y)=g(x)+h(y)$ for a function $f$ such that $f_{2k-1}=\frac {\alpha_k}{2^k}$ and $f_{2k}=\frac {\beta_k}{2^k}$, i.e., such that $f_k=\frac{\gamma_k}{(\sqrt2)^k}$. Take $h_1=0$. Summing up the equalities $(*)$ from the proof of the Lemma on arrays with alternating signs, we obtain $g_k=f_1-f_2+f_3-\dots+f_{2k-1}$. Thus $$g_{\infty}=-\sum_{i=1}^{\infty}(-1)^{i}f_i= -\sum_{i=1}^{\infty}(-1)^{i}\frac{\gamma_i}{(\sqrt2)^i},$$ and this sum, obviously, converges. Analogously $$h_k=-f_1+f_2-f_3+\dots+f_{2k-2} \quad\text {and} \quad h_{\infty}=\sum_{i=1}^{\infty}(-1)^{i}f_i.$$ Now it suffices to prove differentiability of the constructed functions $g$ and $h$. We have $$\left|\frac{h_k-h_\infty}{1/2^k}\right|=\left|\sum_{i=2k-1}^{\infty}2^k(-1)^{i}f_i\right| \le\sum_{i=2k-1}^{\infty}\left|\frac{\gamma_i}{(\sqrt2)^{i-(2k-1)}}\right| \le\sqrt2\sum_{i=1}^{\infty}\frac{\varepsilon_k}{(\sqrt2)^{i}}\le\frac{2}{\sqrt2-1}\varepsilon_k,$$ where $\varepsilon_k=\max_{i\ge 2k-1}|\gamma_i|$. Therefore $$|h'(0)|=\lim\limits_{k\to\infty}\left|\frac{h_k-h_\infty}{1/2^k}\right| \le\lim\limits_{k\to\infty}\frac{2}{\sqrt2-1}\varepsilon_k=0.$$ The differentiability of the function $g$ is proved analogously. \(a) It is clear.
\(b) A subset $K\subset\R^2$ is called [*$r$ times differentiably basic*]{} if for each $r$ times differentiable function $f:K\to\R$ there exist $r$ times differentiable functions $g:\R\to\R$ and $h:\R\to\R$ such that $f(x,y)=g(x)+h(y)$ for each point $(x,y)\in K$. \(c) We can take the graph $V_k$ of the function $y=|x|^k$, $x\in[-1;1]$, for $k$ odd, and $W_{k+1}=(V_{k+1}-(2,0))\cup(V_{k+1}+(2,0))$ for $k$ even. Let us prove that $W_{k+1}$ is $r$ times differentiably basic for each $0\le r\le k$. Given an $r$ times differentiable function $f:W_{k+1}\to\R$, take the functions $h(y)=0$ and $g(x)=f(x,|x-2\sign x|^{k+1})$. Clearly, $h$ is $r$ times differentiable and $f(x,y)=g(x)+h(y)$ for each $(x,y)\in W_{k+1}$. Since the function $p(t)=|t|^{k+1}$ is $k$ times differentiable and $r\le k$, it follows that $g$ is $r$ times differentiable. Let us prove that $W_{k+1}$ is not $r$ times differentiably basic for $k$ even and each $k<r$. Define a function $f:W_{k+1}\to\R$ by $f(x,y)=y\sign x$. Clearly, $f$ is $r$ times differentiable. If $W_{k+1}$ is $r$ times differentiably basic, then there are $r$ times differentiable functions $g$ and $h$ such that $f(x,y)=g(x)+h(y)$. For $t\in[-1;1]$ we have $$g(\pm2+t)+h(|t|^{k+1})=f(\pm2+t,|t|^{k+1})=\pm|t|^{k+1}.$$ Since $g$ is $(k+1)$ times differentiable and $k+1$ is odd, it follows that $h'(0)=+1$ and $h'(0)=-1$, which is a contradiction. First we prove that $V_k$ is $r$ times differentiably basic for each $0\le r\le k$. Take an $r$ times differentiable function $f:V_k\to\R$. Since $f$ is $r$ times differentiable at $(0,0)$, it follows that there exist $\{a_{ij}\}_{i,j=0}^r\subset\R$ such that $$a_{00}=f(0,0)\quad\text{and}\quad f(x,|x|^k)=\sum\limits_{i,j=0}^r a_{ij}x^i|x|^{kj}+o([x^2+x^{2k}]^{r/2}) \quad\text{when}\quad x\to0.$$ Since $$o([x^2+x^{2k}]^{r/2})=o_1(x^r),\quad\text{we have}\quad f(x,|x|^k)=a_{00}+a_{01}|x|^k+a_{10}x+\dots+a_{r0}x^r+o_2(x^r).$$ Take $h(y)=a_{01}y$ and $g(x)=f(x,|x|^k)-h(|x|^k)$.
Clearly, $h$ is $r$ times differentiable and $g$ is $r$ times differentiable outside 0. We also have $g(x)=a_{00}+a_{10}x+\dots+a_{r0}x^r+o_2(x^r)$ when $x\to0$. So $g$ is $r$ times differentiable also at 0. Next we prove that $V=V_1$ is not $r$ times differentiably basic for each $1<r$. Define a differentiable function $f:V\to\R$ by $f(x,y)=xy$, where $y=|x|$. If $V$ is $r$ times differentiably basic for some $r\ge2$, then there are $r$ times differentiable functions $$g,h:\R\to\R\quad\text{such that}\quad f(x,|x|)=x|x|=g(x)+h(|x|).$$ Hence $g(x)-g(-x)=2x^2$ for $x\in[0;1]$. But this is impossible because $g$ is 2 times differentiable, hence for $x\to+0$ $$g(x)=g(0)+ax+bx^2+o(x^2)\quad\text{and}\quad g(-x)=g(0)-ax+bx^2+o(x^2).$$ At last we prove that $V_k$ is not $r$ times differentiably basic for $k$ odd and each $k<r$. Define a differentiable function $f:V_k\to\R$ by $f(x,y)=xy$, where $y=|x|^k$. If $V_k$ is $r$ times differentiably basic for some $r>k$, then there are $r$ times differentiable functions $$g,h:\R\to\R\quad\text{such that}\quad f(x,|x|^k)=x|x|^k=g(x)+h(|x|^k).$$ Hence $g(x)-g(-x)=2x^{k+1}$ for each $x\in[0;1]$. But this is impossible for $k$ odd because $g$ is $(k+1)$ times differentiable, hence for $x\to+0$ $$g(x)=g_0+g_1x+\dots+g_{k+1}x^{k+1}+o(x^{k+1})\quad\text{and} \quad g(-x)=g_0-g_1x+\dots+g_{k+1}x^{k+1}+o(x^{k+1}).\quad\qed$$ **References** \[Ar58\] V.I. Arnold, [*Representation of functions of some number of variables as superposition of functions of less number of variables (in Russian)*]{}, Mat. Prosveschenie, 3 (1958), 41–61. http://ilib.mirror1.mccme.ru/djvu/mp2/mp2-3.djvu?djvuopts&page=43 \[Ar58’\] V.I. Arnold, [*Problem 6 (in Russian)*]{}, Mat. Prosveschenie, 3 (1958), 273–274. http://ilib.mirror1.mccme.ru/djvu/mp2/mp2-3.djvu?djvuopts&page=243 \[Ku00\] V. Kurlin, [*Basic embeddings into products of graphs,*]{} Topol. Appl. 102 (2000), 113–137. \[Ku03\] V. A.
Kurlin, [*Basic embeddings of graphs and the Dynnikov method of three-pages embeddings (in Russian),*]{} Uspekhi Mat. Nauk, 58:2 (2003), 163–164. English transl.: Russian Math. Surveys, 58:2 (2003). The full text of the dissertation is available at http://maths.dur.ac.uk/$\sim$dma0vk/PhD.html \[Mi09\] E. Miliczka, [*Constructive decomposition of a function of two variables as a sum of functions of one variable*]{}, Proc. AMS, 137:2 (2009), 607–614. \[MK03\] N. Mramor-Kosta and E. Trenklerova, [*On basic embeddings of compacta into the plane,*]{} Bull. Austral. Math. Soc. 68 (2003), 471–480. \[RZ06\] D. Repovš and M. Željko, [*On basic embeddings into the plane,*]{} Rocky Mountain J. Math., 36:5 (2006), 1665–1677. \[Sk95\] A. Skopenkov, [*A description of continua basically embeddable in $\R^2$,*]{} Topol. Appl. 65 (1995), 29–48. \[St89\] Y. Sternfeld, [*Hilbert’s 13th problem and dimension,*]{} Lect. Notes Math. 1376 (1989), 1–49. \[Vi04\] A.G. Vitushkin, *Hilbert’s 13th problem and related questions,* Russian Math. Surveys, 59:1 (2004), 11–24. \[Vo81\] S.M. Voronin, Funkcionalniy Analiz, 15:1 (1981), 1–17. \[Vo82\] S.M. Voronin, Funkcionalniy Analiz, 16:2 (1982), 21–29. [^1]: This is an English version of the paper in Russian under the same title. The English version has a much shorter first section (which corresponds to two sections in the Russian version), but contains solutions of problems 14a and 16c from the third section. Whenever possible I give references to surveys, not to original papers. I would like to acknowledge V.I. Arnold, Yu.M. Burman, I.N. Shnurnikov, A.R. Safin, S.M. Voronin and M. Vyaliy for useful discussions, and M. Vyaliy for preparation of figures. [^2]: skopenko@mccme.ru, http://dfgm.math.msu.su/people/skopenkov/papersc.ps [^3]: Denote by $$|x,y|=|(x_1,\dots,x_n),\ (y_1,\dots,y_n)|= \sqrt{(x_1-y_1)^2+\dots+(x_n-y_n)^2}$$ the ordinary distance between points $x=(x_1,\dots,x_n)$ and $y=(y_1,\dots,y_n)$ of $\R^n$. Let $K$ be a subset of $\R^n$.
A function $f:K\to\R$ is called [*continuous*]{} if for each point $x_0\in K$ and each number $\eps>0$ there exists a number $\delta>0$ such that for each point $x\in K$, if $|x,x_0|<\delta$, then $|f(x)-f(x_0)|<\eps$. E.g. the function $f(x_1,x_2)=\sqrt{x_1^2+x_2^2}$ is continuous on the plane, whereas the function $f(x_1,x_2)$ equal to the integer part of $x_1+x_2$ is not. [^4]: We remark that $\phi^*$ can be injective but not monomorphic. In other words, it is not only a linear relation on $\im\phi$ that can force it to be strictly less than $C(K)$. If an embedding $K\subset \R^2$ is basic, then we can prove that $\phi^*$ is monomorphic without use of $\phi$ as follows. Define a linear operator $$\Psi\colon C^*(I)\oplus C^*(I)\to C^*(K)\quad\text{by}\quad \Psi(\mu_x,\mu_y)(f)=\mu_x(g)+\mu_y(h),$$ where $g,h\in C(I)$ are such that $g(0)=0$ and $f(x,y)=g(x)+h(y)$ for $(x,y)\in K$. Clearly, $\Psi\phi^*=\id$ and $\Psi$ is bounded, hence $\phi^*$ is monomorphic.
--- author: - 'Yakiv V. Pavlenko' - 'T. R. Geballe' date: 'Received ; accepted ' title: ' Models of infrared spectra of Sakurai’s Object (V4334 Sgr) in 1997 [^1]' --- Introduction ============ V4334 Sgr (Sakurai’s Object), the “novalike object in Sagittarius” discovered by Y. Sakurai on February 20, 1996 (Nakano et al. 1996), is a very rare example of extremely fast evolution of a star during a very late final helium-burning event (Duerbeck & Benetti 1996). During the first few months after discovery, Sakurai’s Object increased in visual brightness to V $\sim$ 12$^m$. In 1997 it increased further to V $\sim$ 11$^m$. In March 1997 the first evidence of dust formation was seen (Kimeswenger et al. 1997, Kamath & Ashok 1999, Kerber et al. 2000). In early 1998 the optical brightness of Sakurai’s Object decreased (dimming first reported by Liller et al. 1998), but then recovered. However, during the second half of 1998 an avalanche-like growth of the dusty envelope occurred, causing a rapid decrease in optical brightness and the complete [**visual**]{} disappearance of the star in 1999. At present essentially only thermal emission by dust can be observed (Geballe et al. 2002). Our view of the born-again star has been completely obscured by the dust it has produced. Abundance analyses by Asplund et al. (1997, 1999) and Kipper & Klochkova (1997) have found peculiarities similar to those of R CrB-like stars. Asplund et al. (1999) estimate that the logarithmic abundances of hydrogen, helium and carbon in the atmosphere of Sakurai’s Object in October 1996 were -2.42, -0.02 and -1.62, respectively[^2], with hydrogen only the third most abundant element by number. All of the above studies are based on optical spectra obtained in 1996. At that time the spectrum of Sakurai’s Object resembled that of an F-supergiant; molecular bands were absent or very weak.
Cooling of the photosphere of Sakurai’s Object resulted in its optical spectrum during 1997 and 1998 resembling those of C-giants with very strong bands of CN and C$_2$ (Pavlenko, Yakovina & Duerbeck 2000). Modeling of some of these optical spectra has allowed estimates of the changes in [T$_{\rm eff}$ ]{}and [E$_{\rm B-V}$ ]{}to be made during this period of rapid evolution of the optical spectrum (Pavlenko et al. 2000; Pavlenko & Duerbeck 2001). Modeling of near infrared (1–2.5 $\mu$m) spectra of Sakurai’s Object is of interest for several reasons. In addition to providing comparisons with results obtained from the optical spectrum and tests of the reliability of molecular and atomic data, it allows accurate determination of the effective temperature and sensitive tests for emission by hot dust. Use of the 1–2.5 $\mu$m region for modeling is especially important after 1996, when the bulk of the photospheric flux shifted from the optical into this waveband. In this paper we present model 1–2.5 $\mu$m spectra and compare them with spectra of Sakurai’s Object obtained during 1997, on UT April 21 and July 13, at the United Kingdom Infrared Telescope (UKIRT). The observed spectra together with observational details were presented by Eyres et al. (1998), and the July spectrum is also shown in Geballe et al. (2002). The resolutions of these spectra as presented here are 1.4 nm (0.0014 $\mu$m) at 1.02–1.35 $\mu$m and 2.8 nm (0.0028 $\mu$m) at 1.42–2.52 $\mu$m. Narrow spectral features in the 1.82–1.95 $\mu$m portions of these spectra are due to incomplete removal of strong telluric lines. Modeling Procedure ================== Grids of plane-parallel LTE model atmospheres with no energy divergence were computed with the SAM12 program (Pavlenko 2002). This program is a modification of ATLAS12 (Kurucz 1999).
Opacities due to C I bound-free absorption, of importance in the atmospheres of hydrogen-deficient, carbon-rich stars over a wide (0.1–8 $\mu$m) wavelength region (see Pavlenko 1999, 2002; Asplund et al. 2000), were computed using the OPACITY PROJECT cross-section database (Seaton et al. 1992). The opacity of C$^-$ was also taken into account (Myerscough & McDowell 1966; see Pavlenko 1999 for more details). An opacity sampling approach (Sneden et al. 1976) was used to account for atomic and molecular line absorption. The source of the atomic line information was the VALD database (Kupka et al. 1999). Lists of diatomic molecular lines of $^{12}$CN, $^{13}$CN, $^{12}$C$_2$, $^{13}$C$_2$, $^{12}$C$^{13}$C, $^{12}$CO, $^{13}$CO, SiH, and MgH were taken from Kurucz (1993). We adopted Voigt profiles for every absorption line; damping constants were computed following Unsold (1955). Microturbulent velocities of 3–6  km s$^{-1}$ were adopted. Two grids of model atmospheres with different abundances were computed. One grid used the chemical composition of Sakurai’s Object determined by Asplund et al. (1999) for October 1996. The other is the same except that the abundance of carbon is increased by 0.6 dex. This “carbon-rich” case is of interest because the carbon abundance of Asplund et al. (1997) is larger by 0.6 dex than that obtained by them later from the analysis of high resolution spectra (see Asplund et al. 1997, 1999, 2000). This “carbon problem” appears to arise more from the analysis of C I lines than from C II or C$_2$ lines (Asplund, private communication). For both grids the isotopic ratio [$^{12}$C/$^{13}$C]{}= 5 was adopted (Asplund et al. 1997). To determine molecular densities, a system of equations of chemical equilibrium was solved for a mixture of $\sim$70 atoms, ions and molecules, including the most abundant diatomic molecules containing carbon.
We used the approach developed by Kurucz in ATLAS12 (Kurucz 1999), in which the ratios of the densities of atoms $n_x,\dots,n_z$ and molecules $n_{x\ldots z}$ obey the equation: $$\begin{aligned} n_x\cdots n_z/n_{x\ldots z}= \exp(-D_{0}/T_{ev}+b- \nonumber\\ c\,(T+d\,(T-e\,(T+f\,T)))+ \nonumber\\ 3/2\,(m-k-1)\,\ln T), \label{eq3}\end{aligned}$$ where $D_0$ and $T_{ev}$ are the dissociation potential and the temperature in eV (for neutral diatomic molecules $m=1$, $k=0$). The adopted molecular constants for the three most important molecules are given in Table 1.

| Molecule | $D_{0}$ | $b$ | $c\times10^{2}$ | $d\times10^{6}$ | $e\times10^{10}$ | $f\times10^{15}$ |
|----------|---------|-------|-----------------|-----------------|------------------|------------------|
| C$_2$    | 6.116   | 48.75 | .2192           | .4149           | .4121            | 1.550            |
| CN       | 7.700   | 47.45 | .1332           | .1989           | .1778            | .6323            |
| CO       | 11.105  | 49.45 | .1651           | .3103           | .3100            | 1.168            |

Synthetic spectra[^3] were computed using the WITA6 program (Pavlenko 1999) for the same grid of opacities, abundances, isotopic ratios and microturbulent velocities that were adopted for the model atmospheres. WITA6 computes spectra and spectral energy distributions (SEDs) taking into account “line by line” absorption by atomic and molecular transitions. For Sakurai’s Object, spectra computed with a wavelength step of 0.05 nm were convolved with Gaussian profiles with full widths at half maximum of 1.4 and 2.8 nm over the appropriate wavelength intervals. Computed and observed spectra were normalized at 1.7 [$\mu$m ]{}for comparison; see Pavlenko, Yakovina & Duerbeck (2000) for more details. Results ======= Principal spectral features --------------------------- In Fig. \[\_01ident\_\] the principal features formed by the molecular species C$_{2}$, CO, and CN are displayed in separate spectra. Atomic features are also shown, as these are also present in Sakurai’s Object (see Eyres et al. 1998; Geballe et al. 2002). As in the optical spectrum (Pavlenko et al. 2000), absorption by only a few molecular species accounts for the main features in the IR spectrum.
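The instrumental smoothing step described in the Modeling Procedure above (a 0.05 nm model grid convolved with Gaussian profiles of 1.4 or 2.8 nm FWHM) can be sketched generically as follows. This is our own illustration, not the WITA6 implementation; only the grid step and FWHM values are taken from the text, and the test spectrum (a single unresolved line) is invented for the example.

```python
import math

def gaussian_kernel(fwhm_nm, step_nm):
    """Normalized Gaussian kernel sampled on a uniform wavelength grid."""
    sigma = fwhm_nm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    half = int(math.ceil(5.0 * sigma / step_nm))  # truncate at +/- 5 sigma
    g = [math.exp(-0.5 * (i * step_nm / sigma) ** 2)
         for i in range(-half, half + 1)]
    s = sum(g)
    return [x / s for x in g]

def convolve(flux, kernel):
    """Discrete 'same'-size convolution; flux assumed zero outside the grid."""
    half = len(kernel) // 2
    out = []
    for i in range(len(flux)):
        acc = 0.0
        for j, w in enumerate(kernel):
            k = i + j - half
            if 0 <= k < len(flux):
                acc += w * flux[k]
        out.append(acc)
    return out

# Example: smooth a single unresolved line (a delta function) with the
# 1.4 nm instrumental profile on the 0.05 nm model grid.
step, fwhm = 0.05, 1.4
kernel = gaussian_kernel(fwhm, step)
flux = [0.0] * 201
flux[100] = 1.0
smoothed = convolve(flux, kernel)
```

Because the kernel is normalized, the integrated flux is preserved, and the smoothed line acquires the instrumental FWHM; in practice one would apply this with the 1.4 nm kernel at 1.02–1.35 $\mu$m and the 2.8 nm kernel at 1.42–2.52 $\mu$m.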
Only the most abundant isotopic species of each molecule is shown. Of the less abundant isotopic species, only bands of $^{13}$CO have been detected in the infrared (Eyres et al. 1998). ![\[\_01ident\_\] Model spectra of the species that produce the strongest absorption features in the 1.0–2.5 $\mu$m spectrum of Sakurai’s Object during 1997, computed for the [T$_{\rm eff}$ ]{}/log g = 5500/0.0 model atmosphere with the Asplund et al. (1999) abundances for October 1996. The model spectrum due to atomic species alone, labelled VALD (see text), is also shown. Spectra are artificially shifted on the y-axis.](MS2440f1.eps){width="88mm" height="70mm"} Dependences on [T$_{\rm eff}$ ]{}, log g, [$V_{t}$ ]{}and log N(H) ------------------------------------------------------------------ The model spectra of Sakurai’s Object display a strong dependence on [T$_{\rm eff}$ ]{}(Fig. \[\_02teff\_\]).[^4] In general, the dependence of the IR SED on [T$_{\rm eff}$ ]{}is determined mainly by the variations of the molecular densities with temperature. The band strengths of CN, CO and C$_{2}$ all increase as [T$_{\rm eff}$ ]{}decreases. Changes in the continuum fluxes are much smaller. Similar effects are seen in model optical spectra (Pavlenko & Yakovina 2000). However, there the molecular bands are numerous, whereas in the infrared only the few strongest vibration-rotation bands of CN, C$_2$, and CO are prominent. ![\[\_02teff\_\] Dependence of the model IR spectrum on [T$_{\rm eff}$ ]{}. The model spectra use the Asplund et al. (1999) abundances for October 1996. The observed spectrum of Sakurai’s Object on July 13, 1997 is shifted on the y-axis.](MS2440f2.eps){width="88mm" height="70mm"} As can be seen in Fig. \[\_03logg\_\], the dependence of the spectrum on log g is generally rather weak. However, there are differences in the responses of different spectral regions.
The strong molecular bands show rather weak dependence on log g, whereas the fluxes at 1.25-1.35, 1.60-1.75, and 1.9-2.2 microns show more noticeable changes. ![\[\_03logg\_\] Dependence of the model spectrum on log g.](MS2440f3.eps){width="88mm" height="70mm"} Previous abundance analyses of the spectra of Sakurai’s Object and related R CrB stars indicate microturbulent velocities [$V_{t}$ ]{}in the range 5-8 km/s (cf. Asplund et al. 2000). The value of [$V_{t}$ ]{}affects the spectral distribution, as is shown in Fig. \[\_vt\_\]. The effect of [$V_{t}$ ]{}on the IR spectra of Sakurai’s Object is larger at the heads of molecular bands than elsewhere, because the heads are formed by closely packed molecular lines whose overall absorption is sensitive to [$V_{t}$ ]{}. ![\[\_vt\_\] Dependence of the model spectrum on [$V_{t}$ ]{}.](MS2440f4.eps){width="88mm" height="70mm"} The main sources of line opacity in the model atmospheres approximating Sakurai’s Object in 1997 are molecular (Pavlenko et al. 2000). Thus it is not surprising that the optical spectra which match Sakurai’s Object respond weakly to changes in the hydrogen abundance. This is in contrast to the behavior of models corresponding to the star a year earlier (Asplund et al. 1997). Similarly, the model IR spectra of Sakurai’s Object for [T$_{\rm eff}$ ]{}= 5000–6000 K depend weakly on log N(H) (Fig. \[\_04logH\_\]). The magnitude of the change in the spectrum when log N(H) is changed from -2.42 (the Asplund et al. 1999 value [**for October 1996**]{}) to -0.97 (i.e., a change of 1.5 dex) is comparable (in a qualitative sense) to lowering log g from 1 to 0 (Fig. \[\_03logg\_\]). ![\[\_04logH\_\] Dependence of the model spectrum of Sakurai’s Object on log N(H).](MS2440f5.eps){width="88mm" height="70mm"} Changes between 1997 April 21 and July 13 ----------------------------------------- Fits to the spectra of Sakurai’s Object on April 21 and July 13 are shown in Figs. \[\_05sak\_\] and \[\_06sak\_\].
The long wavelength portion of the H band is of special interest for the “carbon problem,” because the strongest absorption bands of the C$_2$ molecule, the Ballik-Ramsay bands, occur just longward of 1.768 $\mu$m. In the comparatively hot atmosphere of Sakurai’s Object log N(C) $>$ log N(O) (Asplund et al. 1997, 1999), and the abundance of C$_2$ depends mainly on the elemental abundance of carbon. Therefore, these bands may provide the most accurate determination of log N(C). The fits imply that the carbon abundance is in the range log N(C) = -1.3 $\pm$0.2. The most likely value is 0.3 dex higher than that found by Asplund et al. (1999). The accuracy of the determination of log N(C) is limited mainly by the quality of the molecular line list. The effective temperatures that best fit the 1.0-2.0 $\mu$m spectra in 1997 April and July are 5500 $\pm$ 200 K and 5250 $\pm$ 200 K, respectively, indicating that the cooling evidenced by the dramatic spectral changes seen between 1996 and 1997 (e.g., Geballe et al. 2002) continued in 1997. Our estimated uncertainties in the above temperatures are rather large, despite the comparatively good fits to the observed spectra, because of questions concerning abundances, non-sphericity effects, and dynamical phenomena, and because of contamination of the spectra by dust emission (see below). Hot dust -------- Emission by dust is evident in the 1997 spectra from the mismatch between the synthetic and observed spectra longward of 2.0 $\mu$m in Figs. \[\_05sak\_\] and \[\_06sak\_\]. The difference between the observed and synthetic spectra is greater in the July spectrum, attesting to an increase in the amount of dust. The thermal emission from the dusty envelope overlaps the region of the first overtone bands of $^{12}$CO and $^{13}$CO at $\lambda~>~$2.3 $\mu$m. Usually these bands are used for determination of carbon abundances and isotopic ratios (cf. Lazaro et al. 1991).
The reduced equivalent widths of the CO bands in July 1997 cannot reasonably be attributed to a large decrease in the oxygen abundance, because (1) such a change is unlikely to have occurred in three months and (2) the continuum shortward of the CO bands also shows an excess. We note that in fitting spectra, the more common situation is that the computed spectra show excess flux, due to a deficit of known or hypothesized opacities. To fit the observed spectra, opacities in the model would need to be [*decreased*]{} at $\lambda >$ 2 $\mu$m, an unrealistic possibility. ![\[\_05sak\_\] Top: fits to the observed spectrum of Sakurai’s Object on 1997 April 21. Bottom: details of the fits at 1.6–2.0 $\mu$m; much of the structure at 1.82–1.95 $\mu$m in the observed spectrum is due to incomplete removal of telluric absorption features. Synthetic spectra were computed for a microturbulent velocity of 6 km/s.](MS2440f6.eps "fig:"){width="88mm" height="70mm"} ![\[\_05sak\_\] Top: fits to the observed spectrum of Sakurai’s Object on 1997 April 21. Bottom: details of the fits at 1.6–2.0 $\mu$m; much of the structure at 1.82–1.95 $\mu$m in the observed spectrum is due to incomplete removal of telluric absorption features. Synthetic spectra were computed for a microturbulent velocity of 6 km/s.](MS2440f6a.eps "fig:"){width="88mm" height="70mm"} ![\[\_06sak\_\] Top: fits to the observed spectrum of Sakurai’s Object on 1997 July 13. Bottom: details of the fit at 1.6–2.0 $\mu$m. Synthetic spectra were computed for a microturbulent velocity of 6 km/s.](MS2440f7.eps "fig:"){width="88mm" height="70mm"} ![\[\_06sak\_\] Top: fits to the observed spectrum of Sakurai’s Object on 1997 July 13. Bottom: details of the fit at 1.6–2.0 $\mu$m.
Synthetic spectra were computed for a microturbulent velocity of 6 km/s.](MS2440f7a.eps "fig:"){width="88mm" height="70mm"} Discussion ========== Our analysis of the 1997 infrared spectra of Sakurai’s Object strongly implies that dust was already present at that time. We note that Duerbeck (2002) did not find evidence for dust in the optical spectra from 1997. On the other hand, the fits by Pavlenko & Duerbeck (2001) to the observed SEDs at optical wavelengths indicate that [E$_{\rm B-V}$ ]{}had increased by 0.6 (from 0.7 to 1.3) from April 1997 to August 1998. However, the August 1998 data were best fit using [T$_{\rm eff}$ ]{} = 5250 $\pm$ 200 K, the same value as for July 1997 in this paper. Between June 1997 and August 1998 there were some variations in the photospheric radiation, probably caused by mass-loss events, evolution of the dusty envelope, and dynamical processes in the photosphere–envelope system (see the light curve of Sakurai’s Object in Duerbeck 2002). Nevertheless, [T$_{\rm eff}$ ]{}apparently remained nearly constant during this period. In 1997, the year of maximum optical brightness of Sakurai’s Object, the luminosity was still dominated by optical radiation. At that time “quasi-periodic fluctuations of increasing cycle length and amplitude were superimposed on the general brightness evolution” (Duerbeck 2002). In general, the effective temperature during such fluctuations need not follow changes of luminosity. In fact, it can be anti-correlated, since an increased radius can more than compensate for a lower [T$_{\rm eff}$ ]{}. On the other hand, a change of radius can change the thermodynamical properties in the radiating region (i.e. the photosphere). That may explain the similarity of the [T$_{\rm eff}$ ]{}obtained in this paper for July 1997 and that found for August 1998 by Pavlenko & Duerbeck (2001). The decreased optical brightness of Sakurai’s Object in 1998 was mainly caused by the development of the dust envelope (Kimeswenger 1999).
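The possible anti-correlation between luminosity and effective temperature noted above follows directly from the blackbody scaling $L = 4\pi R^2 \sigma T_{\rm eff}^4$: a modest growth in radius can outweigh a drop in [T$_{\rm eff}$ ]{}. The following one-line check is our own illustration with arbitrary example ratios, not values fitted to Sakurai's Object.

```python
def luminosity_ratio(radius_ratio, teff_ratio):
    """L2/L1 for two photospheric states, using L ~ 4*pi*R^2*sigma*Teff^4.

    The 4*pi*sigma prefactor cancels in the ratio, so only the
    dimensionless radius and temperature ratios enter.
    """
    return radius_ratio ** 2 * teff_ratio ** 4

# Example: a 30% larger radius more than compensates a 10% lower Teff,
# so the luminosity rises even though the photosphere got cooler.
boost = luminosity_ratio(1.3, 0.9)
```

Here `boost` exceeds unity (1.69 from the radius term against 0.66 from the temperature term), which is the sense in which luminosity fluctuations need not track [T$_{\rm eff}$ ]{}.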
One question arises: were the optical and 2 $\mu$m SEDs being affected by the same dust in 1997-1998? The answer is probably yes. As mentioned earlier, the effective temperature remained constant during this time and thus cannot be the cause of the large change in [E$_{\rm B-V}$ ]{}. This suggests that the cause of the increase in [E$_{\rm B-V}$ ]{}was newly formed dust. The new dust would be expected to have been close to the star and thus quite hot. Indeed the full 1–5 $\mu$m spectrum from 1998 (e.g., Geballe et al. 2002) shows that the excess peaked close to 3 $\mu$m, indicating a mean dust temperature close to 1,000 K at that time. The dust must have been hotter (and closer) in 1997; this is supported by the data from 1997 (Eyres et al. 1998; Geballe et al. 2002), which show that the continuum flux density decreased monotonically with wavelength in the observed wavelength range, 1–4 $\mu$m, at that time. Comparison of the 1997 and 1998 SEDs also shows that much less dust was present in 1997. Thus we confirm that the first appearance of dust occurred in 1997 and that the amount of dust increased through summer 1998, prior to its becoming totally dominant in the latter part of 1998 and since then. We thank the staff of the Joint Astronomy Centre for assistance in obtaining the spectra and the VALD database team for its helpful assistance. Partial financial support for YVP was provided by a Small Research Grant from the American Astronomical Society. TRG’s research is supported by the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., on behalf of the international Gemini partnership of Argentina, Australia, Brazil, Canada, Chile, the United Kingdom, and the United States of America. We thank the referee, M. Asplund, for several helpful suggestions. Asplund, M., Gustafsson, B., Lambert, D.L., Rao, N.K. 1997, A&A 321, L17 Asplund, M., Gustafsson, B., Lambert, D.L., Kameswara Rao, N. 2000, A&A 353, 287
Asplund, M., Lambert, D.L., Kipper, T., Pollacco, D., Shetrone, M.D. 1999, A&A 343, 507 Duerbeck, H.W., Benetti, S. 1996, ApJ 468, L111 Duerbeck, H.W. 2002, Astrophys. Space Sci., in press Eyres, S.P.S., Evans, A., Geballe, T.R., Salama, A., & Smalley, B. 1998, MNRAS 298, L37 Geballe, T.R., Evans, A., Smalley, B., Eyres, S.P.S. 2002, Astrophys. Space Sci., in press Kamath, U.S., Ashok, N.M. 1999, MNRAS 302, 512 Lazaro, C., Lynas-Gray, A.E., Clegg, R.E.S., Mountain, C.M. and Zadrozny, A. 1991, MNRAS 249, 62 Kerber, F., Palsa, R., Köppen, J., Blöcker, T., Rosa, M.R. 2000, ESO Messenger, No. 101, 27 Kimeswenger, S., Gratl, H., Kerber, F., Fouqué, P., Kohle, S., Steele, S. 1997, IAU Circ. 6608 Kipper, T., Klochkova, V. 1997, A&A 324, L65 Kupka, F., Piskunov, N., Ryabchikova, T.A., Stempels, H.C., Weiss, W.W. 1999, Astron. Astrophys. Suppl. 138, 119 Kurucz, R.L. 1993, CD-ROMs, Cambridge, Harvard Univ. Kurucz, R.L. 1999, http://www.cfa5.harvard.edu Liller, W., Janson, M., Duerbeck, H.W., van Genderen, A.M. 1998, IAU Circ. 6825 Myerscough, V.P. & McDowell, M.R.C. 1966, MNRAS 132, 457 Nakano, S., Sakurai, Y., et al. 1996, IAU Circ. 6322 Pavlenko, Ya.V., Yakovina, L.A. 1994, Astron. Reports 38, 768 Pavlenko, Ya.V. 1997, ApSS 253, 43 Pavlenko, Ya.V. 1999, Astron. Reports 43, 94 Pavlenko, Ya.V. & Duerbeck, H.W. 2001, A&A 367, 933 Pavlenko, Ya.V., Yakovina, L.A. 2000, Astron. Reports 44, 209 Pavlenko, Ya.V. 2000, Astron. Reports 44, 219 Pavlenko, Ya.V., Yakovina, L.A., Duerbeck, H.W. 2000, A&A 354, 229 Pavlenko, Ya.V. 2002, Astron. Reports, submitted Pollacco, D. 1999, MNRAS 304, 127 Seaton, M.J. 1992, Rev. Mex. Astron. Astrophys. 23, 180 Sneden, C., Johnson, H., Krupp, B. 1976, ApJ 204, 281 Unsold, A. 1955, Physik der Sternatmospheren, 2nd ed., Springer, Berlin. [^1]: Based on observations obtained at the United Kingdom Infrared Telescope (UKIRT), which is operated by the Joint Astronomy Centre on behalf of the U. K. Particle Physics and Astronomy Research Council.
[^2]: In this work we will use an abundance scale $\sum N_i$ = 1. [^3]: In fact these are spectral energy distributions (SEDs). [^4]: For $\lambda$ $>$ 2.7 $\mu$m model spectra shown here and in subsequent plots were computed without molecular absorption; i.e., they are continuum fluxes, which provide information about the dependence of the continuum on the input parameters.
--- abstract: 'We consider the question whether there is an infinitary analogue of the Church-Turing-thesis. To this end, we argue that there is an intuitive notion of transfinite computability and build a canonical model of this notion, called Idealized Agent Machines ($IAM$s), which will turn out to be equivalent in strength to the Ordinal Turing Machines defined by P. Koepke.' author: - Merlin Carl title: 'Towards a Church-Turing-Thesis for Infinitary Computations' --- Introduction ============ Since [@ITTM], various generalizations of classical notions of computability to the transfinite have been given and studied. The Infinite Time Turing Machines ($ITTM$s) of Hamkins and Lewis generalized classical Turing machines to transfinite working time. Ordinal Turing Machines ($OTM$s) (see [@OTM]) and Ordinal Register Machines ($ORM$s) further generalized this by allowing working space of ordinal size. Recently, a transfinite version of the $\lambda$-calculus was introduced and studied [@Sey]. It was soon noted (see e.g. [@Fi]) that the corresponding notion of computability enjoys a certain stability under changes of the machine model: For example, the sets of ordinals computable by $OTM$s and $ORM$s both coincide with the constructible sets of ordinals. A similar phenomenon is known from the models of classical computability: Turing machines, register machines, recursive functions, the $\lambda$-calculus etc. all lead to the same class of computable functions. In the classical case, this is taken as evidence for what is known as the Church-Turing-Thesis ($CTT$), i.e. the claim that these functions are exactly those computable in the ‘intuitive sense’ by a human being following a rule without providing original input. This thesis plays an important role in mathematics: It underlies, for example, the - to our knowledge undisputed[^1] - view that Matiyasevich’s theorem [@Ma] settles Hilbert’s $10$th problem or that Turing’s work [@Tu1] settles the Entscheidungsproblem.
The study of recursive functions gets a lot of its attraction from this well-grounded belief that they coincide with this intuitive notion of computability. It therefore seems natural to ask whether something similar can be said about transfinite models of computation, i.e. whether these models are mere ‘ordinalizations’ of the classical models or whether they actually ‘model’ something, whether there is an intuitive concept of transfinite computability that is captured by these models: Hence, we ask for an infinitary Church-Turing-thesis ($ICTT$).\ There seems to be some evidence that a satisfying $ICTT$ should be obtainable. Besides the stability of the corresponding notion of computability mentioned above, it has also become common to describe and communicate the activity of such machines in rather informal terms: Rather than writing an actual program for e.g. deciding number-theoretical statements with an $ITTM$, it generally suffices to explain that the machine will e.g. ‘search through the naturals for a witness’. It usually soon becomes clear to someone with a basic familiarity with these models that such a method can indeed be implemented and will lead to the right results. Indeed, we will usually find such a ‘process description’ much easier to grasp than an actual implementation. This indicates that we indeed possess an intuitive understanding of what these machines can do which is based on an understanding of infinite processes rather than on the formal definition of the machine. We aim at connecting infinitary models of computation with a natural notion. Here, ‘natural’ means that the notion can be obtained and described independently from the models and that it is in some sense present in normal (mathematical) thinking.
Such a notion should furthermore serve as a background thesis explaining the equivalence of the different models, should (in analogy with the classical Church-Turing-thesis) justify the use of informal ‘process descriptions’ to prove the existence of formally specified programs and, ideally, allow mathematically fruitful applications, similar to the role the classical $CTT$ plays in e.g. Hilbert’s $10$th problem.\ In this work, we offer evidence for the claim that notions of transfinite computation are indeed naturally present in mathematical (and possibly in everyday) thinking and that these notions are captured by the transfinite machine models we mentioned.[^2] This will allow us to formulate an $ICTT$. This article is structured as follows: We begin by describing an approach of mathematical philosophy initiated by P. Kitcher [@Ki], where mathematical objects are modelled as mental constructions of idealized agents. We also indicate that such idealizations are indeed present in understanding mathematics. After that, we work towards a formal notion of a computing transfinite agent, obtaining the notion of an Idealized Agent Machine ($IAM$). Then, we show that the computational power of an $IAM$ coincides with that of $OTM$s and $ORM$s (which we will summarize under the term ‘standard models’ from now on). Finally, we state (a candidate for) an $ICTT$ and discuss whether it meets the above requirements. Idealized Constructions and Idealized Agents in Mathematics =========================================================== In this section, we briefly describe the view on the philosophy of mathematics described in [@Ki]. We use his account as a demonstration that the concept of transfinite agents can be motivated and has arisen completely independently of our considerations. Furthermore, we want to indicate how these views can be fruitful for infinitary computations (and vice versa) and bring them into interaction.
Finally, his work serves us as a first introduction to the notion of idealized agents. We will then demonstrate that this notion seems indeed to be present in mathematical language and understanding. Kitcher’s idealized-agents-view of mathematics ---------------------------------------------- In a nutshell, Kitcher attempts to justify an empiricist account of mathematics by describing mathematics as an idealization of operations with real-world objects like grouping them together, adding an object to a pile of objects etc. These actions in themselves already are a kind of primitive mathematics, limited by our practical constraints. What is usually called mathematics is obtained by forming a theory of idealized operations in a similar way that, say, a theory of idealized gases is formed: We abstract away from certain ‘complicating factors’ like e.g. our factual incapability of indefinitely adding objects to a collection. Mathematics is then the study of idealized operations, or, equivalently, of the operations of idealized agents. Upon reading this, one might wonder how this account is supposed to make sense of the large parts of mathematics which, like axiomatic set theory, deal with actual infinite objects. Kitcher’s reply to this is simply that this is a mere question of the degree of idealization: > [@Ki], p. $146$: I see no bar to the supposition that the sequence of stages at which sets are formed is highly superdenumerable, that each of the stages corresponds to an instant in the life of the constructive subject, and that the subject’s activity is carried out in a medium *analogous* to time, but far richer than time. (Call it ‘supertime’.) ... The view of the ideal subject as an idealization of ourselves does not lapse when we release the subject from the constraints of our time.
Comparing Kitcher’s account of axiomatic set theory with his treatment of arithmetic or intuitionistic mathematics, mathematical areas can roughly be characterized by the degree of idealization, i.e. by considering how remote the underlying operations are from our actual capabilities. The agent working in ‘supertime’ mentioned in the quote above seems to mark a benchmark degree of idealization. As this is the degree of idealization corresponding to set theory in Kitcher’s account, we will refer to it as the ‘idealized agent of set theory’ from now on.[^3] Not unexpectedly, several issues with this approach can be and have been raised: E.g. about the ontological status of these idealized agents (discussed in [@Ho]), whether this degree of idealization still admits an explanation of the applicability of mathematics, whether and how certain large cardinals can be accommodated in this account etc. Nevertheless, the imagination of an idealized agent or an idealized mental activity seems to be in the background of large parts of mathematical understanding in one way or the other. In fact, there are numerous common figures of speech in mathematical textbooks and even more in spoken conversation that point to such (implicit) notions: For example, in many proofs of the Bolzano-Weierstraß theorem, ‘we’ are supposed to ‘pick’ a number from a subinterval containing infinitely many elements of a given sequence. One might find this problematic: In a naive sense, of course, we cannot do this, as in general, we will not know which interval that is.[^4] However, this problem doesn’t seem to come up in understanding this proof. In fact, agent-based formulations generally seem to increase understanding and make constructions more imaginable rather than leading into conflicts with our factual limitations. A similar observation holds for e.g. proofs of the well-ordering principle from the axiom of choice, and in general for many uses of transfinite recursion or transfinite induction.
Another example would be the various places in mathematical logic where constructions are explained by interpreting them as transfinite ‘games’ between two ‘players’. Degrees of idealization and the Church-Turing-Thesis ---------------------------------------------------- In the Church-Turing-Thesis, recursiveness is stated to capture the intuitive meaning of ‘computable’. However, if the intuitive meaning of ‘computable’ is taken as ‘possible for a human being working without understanding’, then literally, this is of course false: What we can actually do is very limited. In general, a recursive function is far away from being computable by ‘a man provided with paper, pencil, and rubber, and subject to strict discipline’ ([@Tu]). But this fact is quite irrelevant for e.g. Hilbert’s $10$th problem, which asks for a ‘finite’ procedure, not a practical one. In the $CTT$, we are hence in fact facing a notion of an idealized computing subject. Usually, this idealization goes from certain factual bounds to ‘arbitrarily large, but finite’. But there seems to be a distinguished intuitive notion of computability going beyond this: For example, there is little to no trouble with the idea of testing all even numbers for being a sum of at most two primes. In fact, this thought experiment seems to be at least part of the reason the Goldbach conjecture is generally assumed to have a definite truth value. On the other hand, no such intuition supports the idea of e.g. searching through $V$ looking for a bijection between $\mathbb{R}$ and $\aleph_{1}$, not even if one assumes $CH$ to have a definite truth value.[^5] The idea of a transfinite systematic procedure for obtaining certain objects or answering certain questions hence allows for a clear distinction: Not every formulation that at the surface looks like a ‘process description’ is eligible as an indication of a computation of an idealized agent. Our goal is to find an exact characterization of those procedures that are.
A model for idealized Agents ============================ Even if one accepts that, beyond finiteness, clear degrees of idealization of our activity can be concretely captured, the standard models are not as canonical a model of it as e.g. Turing machines are in the finite case. In one direction, it does indeed seem plausible that the actions of an $OTM$ are available to a transfinite idealized agent and that hence everything computable by an $OTM$ should be computable by such an agent: The aspects of an $OTM$-computation going beyond classical computability consist in elementary limit operations like forming the limit inferior of a sequence of $0$s and $1$s. But the other direction is not as clear: For example, the limit rule of $OTM$s seems to be rather arbitrary. The intuition here is that other reasonable choices of limit rules will not change the class of computable objects, but it is exactly the intuition leading there that we want to capture here. We see no direct path from idealized agents to the standard models known so far. Our approach is hence to develop a formal notion of a transfinitely computing agent modelled after our intuition and then see how it relates to the standard models. It turns out that it does indeed describe the same notion of computability, which we consider a good piece of evidence for our thesis. The notion we are about to develop will be called Idealized Agent Machines ($IAM$s). $IAM$s are meant to give a very liberal account of the computational activity of idealized agents. In fact, one might get the impression that what we model as a single step of an $IAM$ is really a series of lengthy sub-computations and that we are hence far too generous in attributing abilities to our idealized agent. However, we will demonstrate that even this liberal notion is equivalent to the standard models.
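To make the limit operation just mentioned concrete, the following small Python sketch (ours, not part of any of the cited machine models; the function name `liminf01` is an assumption) illustrates how the $OTM$ limit rule assigns to a $0/1$-valued cell the limit inferior of its earlier values. Transfinite stage sequences cannot, of course, be run on a computer, so the sketch reads a finite list of stage values as the cofinal behaviour of the cell.

```python
# Illustrative sketch (ours): the OTM limit rule sets a 0/1-valued cell
# at a limit stage to the limit inferior of its earlier values.  We can
# only mimic this on a finite history, read as the cofinal behaviour.

def liminf01(history):
    """Largest value v such that some tail of the history only contains
    values >= v: for 0/1 values, 1 iff the history ends in a run of 1s."""
    return max(min(history[i:]) for i in range(len(history)))
```

For instance, a cell whose history keeps returning to $0$ receives the value $0$ at the limit, while a cell that is eventually constantly $1$ receives $1$.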
Therefore, we don’t need to claim that $IAM$s are a very accurate model for the intuition of transfinite computations: we only need it to be strong enough to include that intuition. We can then argue that if such an intuition is really present - as we tried to show above - then it is grasped by the standard models, as, in the end, we will arrive at the following implications: $OTM$-computable\ $\underset{(1)}{\implies}$ computable by the idealized agent of set theory\ $\underset{(2)}{\implies} IAM$-computable\ $\underset{(3)}{\implies} OTM$-computable Here, implication $(3)$, being a claim about two notions expressible in the language of set theory, is provable (in $ZFC$) and implication $(1)$ is very natural (see above). It is step $(2)$ that depends on the plausibility of the analysis and modelling we are about to give. An ideal computing agent works as follows: At each time, he has a complete memory of his earlier computational activity. Also, he has a working memory where he may store information. We assume that the working memory consists of separate ‘places’, each containing one symbol from a finite alphabet.[^6] The agent is working in accordance with instructions that determine his activity. Certainly, any kind of operation that can be considered an idealization of an activity we are actually capable of must be describable by finite means. We hence stipulate that the instructions are given by some finite expressions. Based on the instructions, it must be possible at each time to determine what to do (e.g. which new symbols to write) on the basis of the computational activity so far. We propose to model this in the following way: There should be a first-order formula $\phi(x,y,z)$ such that, if the computational activity so far is given by $c$ and $p$ is a place in the memory, $\phi(c,p,s)$ holds iff $s$ is the symbol that should be written in place $p$ after $c$. Here, it must be possible to evaluate $\phi$ by mere inspection of $c$.
Even if ‘inspection’ may be taken in an idealized sense here as well, this should certainly mean that the appearing quantifiers should in some sense be ‘bounded’ by $c$. We will make this precise below.[^7]\ This description does not depend on any assumptions on the structure of time. It is hence sufficiently general to yield a notion of transfinite computability once an appropriate notion of transfinite time is introduced. Supertime and Superspace ------------------------ In the passage quoted in the first paragraph, Kitcher suggests that set theory can be considered as the outcome of the mental activity of an idealized agent working in ‘a medium analogous to time, but far richer than time’. Here, we want to argue that the only sensible choices for such a medium are the ordinals. In his argumentation, it is also implicitly assumed that the agent not only has a non-standard working time, but also the ability to ‘store’ the outcome of his work, e.g., infinite memory or at least infinite writing space. We will argue that it is natural and harmless to assume that the writing space of an idealized agent is indexed by ordinals. Certainly, we intend a notion of time as a medium of a deterministic computation to be a linear ordering. But we can say more. The computational activity has to start at some point. Every other state may depend on this earlier state and hence has to take place at a moment after the starting point. Hence, the ‘medium of computation’ has to have a unique minimal element. Whenever the agent has carried out a certain amount of computational activity, he has to know what to do next, i.e. there must be a unique next state for him to assume. This next state has to take place at some point of time. Hence, the medium in which he computes has to contain a unique next element after those through which the activity passed so far.
Put differently: For every initial segment of time, there has to be a unique time point preceded by all moments in the initial segment and only by those. This leads to the following notion of ‘supertime’: A ‘supertime’ is a linearly ordered set[^8] $(X,\leq)$ with a unique minimal element $\mu$ and such that, for every proper initial segment $I$ of $X$, there is a $\leq$-minimal $x_{I}\in X$ such that $\forall{t\in I}t<x_{I}$. It is now easy to see that this means that all candidates for supertime are (isomorphic to) ordinals: Let $(X,\leq)$ be a linearly ordered set such that, for every $I\subsetneq X$ which is downwards closed (i.e. $x<y\in I$ implies $x\in I$), there is a minimal $x_{I}\in X$ such that $\forall{t\in I}t<x_{I}$. Then $(X,\leq)$ is isomorphic to an ordinal. Note that $\emptyset$ is downwards closed in $(X,\leq)$ and let $\mu:=x_{\emptyset}$. Obviously, $\mu$ is the unique minimal element of $X$.\ Let $A\subseteq X$ be non-empty. Consider the set $Y:=\{x\in X|x<A\}$. It is easy to see that $Y$ is a proper initial segment of $X$. We claim that $x_{Y}$ is a minimal element of $A$.\ To see that $x_{Y}\in A$, assume otherwise. As every element smaller than $x_{Y}$ is in $Y$ and hence smaller than every element of $A$, it follows that $x_{Y}<A$. But this implies $x_{Y}\in Y$, so $x_{Y}<x_{Y}$, a contradiction. So $x_{Y}\in A$ and every $z<x_{Y}$ satisfies $z\notin A$. Thus $x_{Y}$ is indeed a minimal element of $A$. As $\leq$ is linear, $x_{Y}$ is unique with this property.\ This implies that $(X,\leq)$ is a well-ordered set. Hence, it is isomorphic to an ordinal. However, not all ordinals are suitable as such a medium: For example, if our medium allows two procedures to be carried out, it should also allow to carry out one after the other. Also, it should be possible to have a procedure as a ‘subroutine’ of another to be repeatedly called by the other.
Finally, the class of ordinals itself provides an attractive unification of appropriate computation times; hence we allow computations carried out without fixing a particular ordinal in advance.\ Appropriate candidates for supertime hence turn out to be ordinals which are closed under ordinal addition and multiplication and $On$ itself. In the following, we will - for the sake of simplicity - focus on the broadest case where the underlying time is $On$. Note that this notion of supertime matches well with the way transfinite constructions are commonly communicated and imagined: It is completely normal to relate stages of such a construction by expressions coming from the relation of time points and state that e.g. ‘earlier on, we made sure that’. In fact, it is hard to talk about transfinite constructions avoiding such expressions. We imagine our agent to be equipped with a sufficient supply of space for writing symbols. We assume that this space is partitioned into slots and that each slot is uniquely recognizable. There is a canonical well-ordering on the set of used slots: Namely, each slot is at some point of time used for the first time. Via this property, this slot is henceforth identifiable. We may hence assume for our convenience that the slots are indexed with ordinals from the very beginning: That is, the working memory is at any time a function from some ordinal $\alpha$ into the set $S$ of symbols.[^9] Finally, even if we allow - as we will - several symbols to be re-written in one step, an adequate model of computing time and space should also impose some bounds on the space that can be actually used after computing for $\tau$ many steps. We model this intuition by the extra condition that, at time $\tau$, only slots with index in $\tau$ may contain written symbols.[^10] Idealized Agent Machines ------------------------ We will now describe a formal model for the concept developed above.
The instructions will be given by a first-order statement in an appropriate language, which can be evaluated on the basis of an initial segment of a computation. We let $L_{c}$ be the first-order language with equality, a binary function symbol $C(x,y)$ and a binary relation symbol $\leq$. The intended meaning of $C(x,y)=z$ is that, at time $x$, $z$ is the symbol in the $y$th place, while $\leq$ is the ordering relation of ordinals.\ If $A$ is a finite set (the alphabet) and $\tau$ an ordinal, then a $\tau$-state for $A$ is a function $f:\alpha\rightarrow A$, where $\alpha\leq\tau$. We denote the class of $\tau$-states for $A$ by $S_{A}^{\tau}$.\ A function $F$ with $dom(F)=:\tau\in On$ and $F(\iota)\in S_{A}^{\iota}$ for all $\iota<\tau$ is called an $A$-$\tau$-precomputation. For $F$ an $A$-$\tau$-precomputation, an $L_{c}$-formula $\phi$, $\vec{s}\in A^{<\omega}$, $\vec{\alpha}\in(\tau+1)^{<\omega}$, we define $[\phi(\vec{\alpha},\vec{s})]_{\tau}^{F}$, the truth value of $\phi(\vec{\alpha},\vec{s})$ in $F$, recursively (omitting the parameters where possible): $[C(\alpha,\beta)=x]_{\tau}^{F}=1$ if $\alpha<\beta$ or $F(\alpha)(\beta)=x$, otherwise $[C(\alpha,\beta)=x]_{\tau}^{F}=0$; $[x\leq y]_{\tau}^{F}=1$ iff $x,y\in On$ and $x\leq y$, otherwise $[x\leq y]_{\tau}^{F}=0$; $[x=y]_{\tau}^{F}=1$ iff $x=y$, otherwise $[x=y]_{\tau}^{F}=0$; $[\neg\phi]_{\tau}^{F}=1-[\phi]_{\tau}^{F}$; $[\phi\wedge\psi]_{\tau}^{F}=[\phi]_{\tau}^{F}[\psi]_{\tau}^{F}$; and $[\exists{x}\phi(x)]_{\tau}^{F}=1$ iff there is $\iota\in\tau$ such that $[\phi(\iota)]_{\tau}^{F}=1$, otherwise $[\exists{x}\phi(x)]_{\tau}^{F}=0$.\ An $L_c$-formula $\phi(x,y,z)$ is an $IAM$-program iff, for all $\tau\in On$, $\alpha\leq\tau$ and all $A$-$\tau$-precomputations $F$, there is exactly one $s\in A$ such that $[\phi(\tau,\alpha,s)]_{\tau}^{F}=1$. 
If $\phi$ is an $IAM$-program, $A$ a finite set, $\tau\in On$ and $F$ an $A$-$\tau$-precomputation, then we define $\mathbb{S}_{\phi,\tau,F}:\tau\rightarrow A$, the state of the $IAM$-computation with $\phi$ at time $\tau$ after $F$, by letting $\mathbb{S}_{\phi,\tau,F}(\alpha)$ be the unique $s\in A$ such that $[\phi(\tau,\alpha,s)]_{\tau}^{F}=1$, for $\alpha<\tau$.\ Furthermore, we define $\mathbb{I}_{\phi}^{\tau}$, the $\tau$-th initial segment of the $IAM$-computation with $\phi$ at time $\tau$, recursively by letting $\mathbb{I}_{\phi}^{0}:=\emptyset$, $\mathbb{I}_{\phi}^{\tau+1}:=\{(\tau,\mathbb{S}_{\phi,\tau,\mathbb{I}_{\phi}^{\tau}})\}\cup\mathbb{I}_{\phi}^{\tau}$ and $\mathbb{I}_{\phi}^{\lambda}:=\bigcup_{\iota<\lambda}\mathbb{I}_{\phi}^{\iota}$ for $\lambda$ a limit ordinal.\ So far, our machines have no notion of halting. We therefore assume that all our $IAM$s have a special symbol $\mathbb{H}$ in their alphabet. The $IAM$-computation by $\phi$ is said to have stopped at time $\tau$ iff $\mathbb{I}_{\phi}^{\tau+1}(\tau)(0)=\mathbb{H}$, i.e. if the first symbol in the memory at time $\tau$ is $\mathbb{H}$.\ An $IAM$-computation by $\phi$ will hence start with an empty tape and then repeatedly apply the $\mathbb{S}$-operator to obtain the next state, taking unions at limits.\ It is easy to see from the boundedness of the formula evaluated in each step that this notion of computability is absolute in the sense that $IAM$-computations are absolute between transitive models of $ZFC$. We can also account for computations with a non-empty input and computations with parameters in these terms by adjusting the initial memory content.
$X\subseteq On$ is $IAM$-computable iff there exists an $IAM$-program $\phi$ such that, for every $\alpha\in On$, there is $\tau\in On$ such that, if $\chi_{\alpha}$ is the characteristic function of $\alpha$ in $On$ and $F=(0,\chi_{\alpha})$, we have $\mathbb{S}_{\phi,\tau,F}(0)=\mathbb{H}$ and $\mathbb{S}_{\phi,\tau,F}(1)=1$ iff $\alpha\in X$.\ Similarly, $f:On\rightarrow On$ is $IAM$-computable iff there is an $IAM$-program $\phi$ such that, for every $\alpha\in On$, there is $\tau\in On$ such that $\mathbb{S}_{\phi,\tau,F}(0)=\mathbb{H}$, $\mathbb{S}_{\phi,\tau,F}(f(\alpha)+1)=1$ and $\mathbb{S}_{\phi,\tau,F}(\iota)=0$ for $\iota\notin\{0,f(\alpha)+1\}$, where again $F=(0,\chi_{\alpha})$ and $\chi_{\alpha}$ is the characteristic function of $\alpha$ in $On$.\ We say that a set $X\subseteq On$ or a function $f:On\rightarrow On$ is $IAM$-computable from finitely many ordinal parameters iff there exists a finite set $p\subset On$, an $IAM$-program $\phi$ using the alphabet $A$ and an $a\in A$ such that $\phi$ computes $X$ (or $f$, respectively) when the following change is made for all $\tau<\alpha\in On$ in the definition of the $\alpha$-th state $\mathbb{S}_{\phi,\alpha,\mathbb{I}_{\phi}^{\alpha}}$: If $\beta\in p$, then $\mathbb{S}_{\phi,\alpha,\mathbb{I}_{\phi}^{\alpha}}(\beta)$ is set to $a$. Idealized Agent Machines, ordinal computability and the $ICTT$ ============================================================== Having developed our formal model for infinitary computations, it is now rather straightforward to show that, in terms of computability, it is equivalent to the standard models. As the elaborate versions are quite long and cumbersome, we merely sketch the arguments here.\ (a) There is an $L_{c}$-formula $\phi_{lim}$ such that, for any precomputation $F$ with $dom(F)=\tau$, we have $[\phi_{lim}]_{\tau}^{F}=1$ iff $\tau$ is a limit ordinal.
Furthermore, the statement $\alpha=\beta+1$ is expressible by an $L_{c}$-formula $succ(\alpha,\beta)$.\ (b) Let $A\subset\omega$ be finite. There is an $L_{c}$-formula $\phi_{liminf}(x,y)$ such that, for any $\tau\in On$, $a\in On$, $b\in A$ and any $A$-$\tau$-precomputation $F$, $[\phi_{liminf}(a,b)]_{\tau}^{F}$ holds iff $b=\liminf{((F(\iota))(a))_{\iota<\tau}}$.\ (c) Let $P$ be an $OTM$-program, and let $\sigma=(i,\alpha,t)$ be a triple coding a state in the computation with $P$, where $i$ codes the current state of the program, $\alpha$ the head position and $t:\tau\rightarrow\{0,1\}$ the tape content. There are $L_{c}$-formulas $\phi^{P}_{state}(i,\alpha,t,j)$, $\phi^{P}_{head}(i,\alpha,t,\beta)$ and $\phi^{P}_{tape}(i,\alpha,t,s)$ such that, for any pre-computation $F$ with $dom(F)=\gamma+1$, $[\phi^{P}_{state}(i,\alpha,t,j)]_{\gamma+1}^{F}=1$, $[\phi^{P}_{head}(i,\alpha,t,\beta)]_{\gamma+1}^{F}=1$ and $[\phi^{P}_{tape}(i,\alpha,t,s)]_{\gamma+1}^{F}=1$ hold iff applying $P$ in the state $\sigma$ leads into the new state $(j,\beta,t^{\prime})$, where $t^{\prime}:\tau+1\rightarrow\{0,1\}$ is given by $t^{\prime}(\alpha)=s$ and $t^{\prime}(\zeta)=t(\zeta)$ for $\zeta\neq\alpha$. \(a) Take $\phi_{lim}$ to be $\forall{x}\exists{y}(x\leq y\wedge\neg(x=y))$. First assume that $\tau$ is a limit ordinal. Then $[\phi_{lim}]_{\tau}^{F}=1-[\exists{x}\forall{y}(\neg(x\leq y)\vee x=y)]_{\tau}^{F}$. Now $[\exists{x}\forall{y}(\neg(x\leq y)\vee x=y)]_{\tau}^{F}=1$ iff there exists $x\in\tau$ with $[\forall{y}(\neg(x\leq y)\vee x=y)]_{\tau}^{F}=1$, which is equivalent to $[\neg\exists{y}(x\leq y\wedge x\neq y)]_{\tau}^{F}=1\leftrightarrow[\exists{y}(x\leq y\wedge x\neq y)]_{\tau}^{F}=0$, which means that there is no $y<\tau$ such that $[x\leq y\wedge x\neq y]_{\tau}^{F}=1$, i.e. such that $x\leq y\wedge x\neq y$ holds. But such an $x$ obviously cannot exist if $\tau$ is a limit ordinal.
The other direction works in the same way, again by simply unfolding the definition of the truth predicate. The second statement is similarly immediate. \(b) As $A=\{a_1,...,a_n\}$ is finite, we can define $\leq$ on $A$ by taking $a\leq b$ to be $\bigvee_{a_{i}\leq b}a_{i}=a$. Now take $\phi_{liminf}(a,b)$ to be $\exists{x}\forall{z}(x\leq z\implies b\leq C(z,a))\wedge\forall{x}\exists{z}(x\leq z\wedge C(z,a)=b)$. \(c) The required formulas are immediate from $P$ and the fact that limit ordinals are $L_{c}$-definable. To give an example, if $P$ requires to change from state $i$ to state $j_{1}$ when the symbol under the reading head (at position $\alpha$) is currently $\iota_{1}$ and to state $j_{2}$ when the symbol is $\iota_{2}$, we can express this through the $L_{c}$-formula\ $\phi_{i}(\alpha,j)\equiv \exists{\gamma}(\neg\exists{\beta}succ(\gamma,\beta)\wedge((C(\gamma,\alpha)=\iota_{1}\wedge j=j_{1})\vee(C(\gamma,\alpha)=\iota_{2}\wedge j=j_{2})))$. Let $f:On\rightarrow On$ be $OTM$-computable. Then $f$ is $IAM$-computable. Let $P$ be an $OTM$-program for computing $f$. Suppose wlog that $P$ uses $s\geq 3$ many states and put $A:=\{0,1,...,s\}$. We will represent states of the $OTM$-computation as sequences $(a_i|i\in\alpha)$ where $a_0\in\{1,2,...,s\}$ codes the inner state of the machine and the $a_{\iota}$ code the tape content. Let $b_{i}=a_{i+1}$ for $i\in\omega$ and $b_{\iota}=a_{\iota}$ otherwise. To express the head position, we put $b_{\iota}=2$ if the $\iota$th cell of the Turing tape contains a $0$ and the head is currently at position $\iota$, $b_{\iota}=3$ if the $\iota$th tape content is $1$ and the head is currently at position $\iota$; otherwise, the $b_{\iota}$ will just agree with the tape content.\ Using the last lemma, one can now construct an $L_{c}$-formula $\phi$ such that $\mathbb{I}_{\phi}^{\alpha}$ represents the state and tape content of $P$ at time $\alpha$ in the way we described. Let $x\subset On$ be a set of ordinals.
Then $x$ is $IAM$-computable from a finite set of ordinals iff it is $OTM$-computable from a finite set of ordinals. By [@Koe], $x\subseteq On$ is $OTM$-computable from finitely many ordinal parameters iff $x\in L$. But it is not hard to see by adapting the theorem above that $OTM$-computations in finitely many parameters can be simulated by an $IAM$, so that every $OTM$-computable $x$ is also $IAM$-computable. On the other hand, as $IAM$-computations are definable in $L$, every $x$ $IAM$-computable from finitely many ordinal parameters must be an element of $L$. Hence the classes of $IAM$-computable sets of ordinals and $OTM$-computable sets of ordinals both coincide with the constructible sets of ordinals and hence with each other. $f:On\rightarrow On$ is $IAM$-computable iff it is computable by an ordinal Turing machine ($OTM$) without parameters. (Sketch) We saw above that an $OTM$ can be simulated on an $IAM$.\ For the other direction, we indicate how to simulate an $IAM$ by an $OTM$. Let a finite $A$ and an $IAM$-program $\phi$ be given.[^11] To see how to emulate one computation step, assume we have saved the sequence $\textbf{s}:=(s_{\iota}|\iota<\tau)$ of $IAM$-states up to $IAM$-computing time $\tau$ so far on an extra tape $T_1$, separated by an extra symbol. The techniques from [@OTM] for evaluating the bounded truth predicate can then be adapted to compute $s_{\tau}$ on a second tape, using a third tape as a scratch tape. For this, we compute, for each $\alpha\leq\tau$, $[\phi(\tau,\alpha,s)]_{\tau}^{\textbf{s}}$ for each $s\in A$ until we find the unique $\bar{s}$ with $[\phi(\tau,\alpha,\bar{s})]_{\tau}^{\textbf{s}}=1$, so that $s_{\tau}(\alpha)=\bar{s}$. Finally, we copy $s_{\tau}$ to the end of $T_1$ to obtain a representation of $(s_{\iota}|\iota<\tau+1)$. This shows, up to our analysis in section $3$ and the restriction to working time and space $On$, that the intuitive concept of transfinite computability coincides with $OTM$-computability.
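The folding of the head position into the tape symbols, used above in the simulation of an $OTM$ by an $IAM$, can be sketched concretely. The following Python fragment is our own illustration (the names `encode`/`decode` are assumptions, and the machine state, which the proof stores in cell $0$ of a shifted sequence, is kept separate here): the cell under the head carries $2$ if its bit is $0$ and $3$ if its bit is $1$, while all other cells keep their bit.

```python
# Sketch (ours) of the coding idea from the simulation proof: the head
# position is folded into the tape symbols by writing bit + 2 at the
# head's cell; every other cell keeps its 0/1 tape content.

def encode(head, tape):
    seq = list(tape)
    seq[head] = tape[head] + 2   # 0 -> 2, 1 -> 3 at the head position
    return seq

def decode(seq):
    head = next(i for i, s in enumerate(seq) if s >= 2)
    tape = [s - 2 if s >= 2 else s for s in seq]
    return head, tape
```

Since exactly one cell carries a symbol $\geq 2$, the configuration is recovered uniquely from the symbol sequence.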
Hence, we can finally close this section by stating our candidate for an $ICTT$: **Infinitary Church-Turing-Thesis**: A function $f:On\rightarrow On$ is computable by the idealized agent of set theory following a deterministic rule iff it is computable by an $OTM$. Conclusion and further Work =========================== We have argued that there is an intuitive notion of transfinite computability and that rendering it precisely leads us to a notion of transfinite computability equivalent to $ORM$- and $OTM$-computability. Consequently, the constructible hierarchy was obtained as the realm of this idealized activity. This suggests that these models indeed capture some general intuitive concept and hence that results about these models can be interpreted as results about this notion. Accordingly, one should expect interesting applications to general mathematics: For example, one might consider measuring the complexity of an object or a function by the computational resources necessary to compute it. This would give a precise meaning to the question whether certain objects granted to exist by indirect proofs can be ‘concretely constructed’, even if this construction is allowed to be transfinite. In particular, it suggests connections of transfinite computability to reverse mathematics as exhibited in [@KoeWe]. However, our argument has the drawback of being model-dependent: We develop a certain notion of computability from the informal idea of an idealized agent, hopefully along plausible lines. It would be preferable to have a formal notion of transfinite computation not referring to a particular model; this could be obtained by an appropriate axiomatization of transfinite computations similar to approaches that have been made in the classical case. (See e.g. [@DeGu]. See also [@KoeSy].) Another question is whether a similar approach will work for other models like e.g. $ITTM$s.
This is likely to be more difficult, as our coarse approach of approximating the activity of an idealized agent is not available here: As is shown in [@FrWe], there are natural alternative choices for the limit rules that lead to larger classes of computable functions. \[DeGu\] N. Dershowitz, Y. Gurevich. A Natural Axiomatization of Computability and Proof of Church’s Thesis. Bulletin of Symbolic Logic 14(3): 299-350 (2008) \[Fi\] T. Fischbach. The Church-Turing-Thesis for Ordinal Computable Functions. Diploma Thesis. Bonn $2010$. \[FrWe\] S.-D. Friedman, P. Welch. Hypermachines. J. Symbolic Logic Volume 76, Issue 2 (2011), 620-636 \[Ho\] S. Hoffman. Kitcher, Ideal Agents and Fictionalism. Philosophia Mathematica (3) Vol. 12, pp. 3-17 (2004) \[Hog\] M. L. Hogarth. Does General Relativity Allow an Observer to View an Eternity in a Finite Time? Foundations of Physics Letters, Vol. 5, No 2, $1992$ \[ITTM\] J.D. Hamkins and A. Lewis. Infinite Time Turing Machines. J. Symbolic Logic, 65(2), 567-604 (2000) \[Jech\] T. Jech. Set Theory. 3rd Millennium edition, revised and expanded. Springer (2002) \[Ki\] P. Kitcher. The Nature of Mathematical Knowledge. Oxford University Press (1983) \[Koe\] P. Koepke. Ordinal computability. In Mathematical Theory and Computational Practice. K. Ambos-Spies et al, eds., Lecture Notes in Computer Science 5635 (2009), 280-289. \[KoeSy\] P. Koepke, R. Siders. Minimality considerations for ordinal computers modeling constructibility. Theoretical Computer Science 394 (2008), 197-207 \[KoeWe\] P. Koepke, P. Welch. A Generalized Dynamical System, Infinite Time Register Machines, and $\Pi_{1}^{1}-CA_{0}$. In CiE 2011. B. Löwe et al. (eds.), LNCS 6735, 152-159 (2011) \[Ma\] Y. Matiyasevich. Hilbert’s $10$th problem. MIT Press, Cambridge, Massachusetts ($1993$) \[NeGe\] P. Nemeti, G. Szekely. Existence of Faster than Light Signals Implies Hypercomputation already in Special Relativity. arXiv:1204.1773v1 \[ORM\] P. Koepke, R. Siders.
Register computations on ordinals. Archive for Mathematical Logic 47, 529-548 (2008) \[OTM\] P. Koepke. Turing computations on ordinals. Bulletin of Symbolic Logic 11, 377-397 (2005) \[Sey\] B. Seyfferth. Three models of ordinal computability. PhD thesis. Bonn (2012) \[Tu\] A. Turing. Intelligent Machinery. National Physical Laboratory Report. In B. Meltzer, D. Michie (eds.), Machine Intelligence 5. Edinburgh: Edinburgh University Press (1969) \[Tu1\] A. Turing. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42, 230-265 (1937) \[Wa\] H. Wang. From Mathematics to Philosophy. Routledge and Kegan Paul Ltd (1974) [^1]: It has been remarked that there are challenges to the claim that no physical device could decide such questions, see e.g. [@Hog] and [@NeGe]. However, here we are interested in the capabilities of idealized computing agents. Whether what such devices do can be considered to be a computation in the intuitive sense rather than the observation of an incomputable process is a question we won’t consider here. [^2]: To be precise, we will argue for this claim in the case of $OTM$s and $ORM$s. Whether similar approaches are available for other models as well is briefly addressed at the end of this paper. [^3]: Similar ideas are mentioned in other accounts on the philosophy of mathematics. For example, in [@Wa], p. 182, we find the following: ‘The overviewing of an infinite range of objects presupposes an infinite intuition which is an idealization. Strictly speaking, we can only run through finite ranges (and perhaps ones of rather limited size only).’ [^4]: This is the reason why Bolzano-Weierstrass is intuitionistically invalid. [^5]: Searching through $L$ or its stages, on the other hand, seems again quite reasonable, as $L$ is canonically well-ordered. 
[^6]: The finiteness of the alphabet could in fact be dropped without changing the class of computable functions we ultimately obtain. However, we consider this a reasonable assumption for the notion we are about to model and hence decided against making the effort to demonstrate this. [^7]: The choice of first-order logic might be objectionable; we feel that e.g. second-order logic would be inappropriate, for it would require the agent to have access to an external notion of set which is not determined from his computational activity. However, we are certainly interested in plausible alternatives and whether they would turn out to lead to an equivalent notion of computability. [^8]: The outcome might be different if one were to allow ‘class time’. We don’t pursue this further here. [^9]: This point could be strengthened by modelling space in a more general way and then proving the resulting notion to be equivalent to the one obtained here. However, this requires a cumbersome analysis and the gain in plausibility seems to be too limited to justify it. [^10]: This condition may seem to be too strict compared to the overall very liberal model we set up. However, this choice is technically the least cumbersome; furthermore, we conjecture from our experience so far that every bound that is reasonably explicit in $\tau$ will ultimately lead to the same class of computable functions. [^11]: Note that a variant of an $OTM$ working with finitely many symbols $\sigma_1,...,\sigma_n$ can be simulated by an $OTM$ using only $0$ and $1$ by representing $\sigma_i$ as $\underbrace{0...0}_{n-i}\underbrace{1...1}_{i}$.
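The fixed-width encoding in the last footnote is easy to make concrete. The following sketch (our own illustration; the function names are not from the paper) encodes the $i$-th of $n$ symbols as $n-i$ zeros followed by $i$ ones, and decodes by counting ones:

```python
def encode(i, n):
    """Encode symbol sigma_i (1 <= i <= n) as a word of length n:
    (n - i) zeros followed by i ones, as in the footnote."""
    assert 1 <= i <= n
    return "0" * (n - i) + "1" * i

def decode(word):
    """Recover the symbol index: the number of ones in the word."""
    return word.count("1")
```

Since every code word has the same length $n$, a two-symbol machine can read a block of $n$ cells and recover the simulated symbol unambiguously, which is why the class of computable functions is unchanged.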
--- abstract: 'We report the first experimental observation of stationary zonal flow in the transport barrier region of the H mode plasma. Strong peaks in $E_r$ shear mark the width of this region. Strong $m=n=0$ low-frequency ($f<$ 0.6 kHz) zonal flow is observed in regions of increased $E_r$, suggesting substantial contribution of zonal flow to the spatial modulation of $E_r$ radial profiles. Radial localization of the zonal flow is correlated with a region of zero magnetic shear and low-order (7/5) rational surfaces.' author: - 'H. Xia' - 'M. G. Shats' - 'H. Punzmann' title: Strong ExB shear flows in the transport barrier region in H mode plasma --- Transport barriers (TBs) are radially localized regions in toroidal plasma where radial transport of particles or energy is drastically reduced. In the high confinement mode (H mode) [@Wagner1982], the presence of a TB is manifested as a steep density (or temperature) gradient near the plasma boundary. The top of this region is sometimes referred to as a pedestal. Characteristics of the H mode edge TBs are important: the spatial structure of a TB is closely related to the global stability, confinement and performance of the plasma (for a review see, for example, [@Fujisawa_PPCF_2003]). Understanding and predicting characteristics of the TBs has become a focus of the international fusion community (see, e.g., [@Hatae2001; @Fujita2002]). The ultimate goal of these studies is the optimization of the radial profiles of the plasma parameters in the future fusion reactor [@ITER]. The formation of a TB has been ascribed to the generation of a sheared radial electric field (or ${\bf E}\times{\bf B}$ flow, where **E** is the electric field and **B** is the magnetic field) which leads to the reduction in turbulence and transport [@Terry2000; @Gohil_PPCF_2002; @Hahm_PPCF_2002]. However, the physics of the TB formation is not yet well understood. 
Experimental studies of TBs are restricted due to difficulties in measuring radial parameter profiles with sufficient spatial and temporal resolution. In this Letter we report detailed experimental studies of the TB structure in H mode of the H-1 heliac. It is shown for the first time that distinct features in the electron density profile, marking the pedestal and the foot of TB, spatially coincide with radially localized strongly sheared flows. These radial regions are also identified as regions where strong stationary $m = n = 0$ zonal flows are localized in H mode. The radial localization of zonal flows also coincides with the position of a low-order rational surface and a minimum in the magnetic shear. These results confirm, to some extent, a hypothesis, based on results of gyrokinetic simulations, that strong zonal flows developing near rational surfaces can provide a trigger for the TB formation [@Waltz2006]. We present results obtained in the H-1 toroidal heliac [@Ham90] (major radius of $R$ = 1 m and mean minor plasma radius of about $\left\langle a \right\rangle\approx 0.2\ $ m) under the following plasma conditions (see, for example, [@Sha02a] and references therein): $n_e=1\times10^{18}$ m$^{-3}$, $T_{e} \sim 10$ eV, $T_{i} \sim 40$ eV in argon at filling pressure of $(1 - 4)\times $10$^{-5}$ Torr and at low magnetic fields, $B = (0.05 - 0.12)$ T. Such plasma is produced by $\sim$ 80 kW of radio-frequency power at 7 MHz. Several combinations of Langmuir probes (single, triple probes) are used to characterize plasma parameters, such as the electron density, electron temperature, and electrostatic potential, as described in [@Sha02b]. Probes are also used to characterize poloidal and toroidal wave numbers of turbulent fluctuations [@Shats_PPCF_06]. The high confinement mode observed in H-1 [@Sha99] is similar to H mode in tokamaks. Typical electron density and plasma potential profiles are illustrated in Fig. \[fig1\] for low (L) and H modes. 
L and H modes are achieved above and below a critical magnetic field, respectively [@Sha99]. When the magnetic field is close to the critical value, spontaneous L-H transitions are observed. In these discharges, a triple probe is used to measure the electron density and potential on a shot-to-shot basis. Excellent reproducibility of the measurements allows reliable determination of profiles without perturbing the plasma with the probe arrays. Despite large differences in electron temperature, density, and magnetic field, plasmas in H-1 and in the TB regions of large tokamaks are dimensionally similar. This dimensional similarity has been discussed in [@Punzmann2004], where it has been shown that the width of the TB measured in ion gyroradii is very similar to that in, for example, the DIII-D tokamak. However in absolute units, the TB width in H-1 is substantially larger (30-40 mm) than that in larger experiments with stronger magnetic fields and lighter ions. This, in combination with low electron temperature in H-1, opens an opportunity to study the structure of the TB using probes with sufficiently high spatial resolution. ![\[fig1\] Radial profiles of (a) electron density, (b) plasma potential in L mode, and (c) electron density, (d) plasma potential in H mode, respectively. Dashed guide lines and shading are used to mark two radial regions in the transport barrier (I and II).](fig1.eps) The development of the TB in H mode plasma in H-1 is illustrated in Fig. \[fig1\]. Radial profiles of the electron density and the plasma potential in L mode are rather featureless as seen in Fig. \[fig1\](a,b). In H mode, the central density doubles while the plasma potential becomes more negative in the central region and more positive at the edge (Fig. \[fig1\](c,d)). The increase in the density coincides with the formation of the characteristic kink in the density profile at about $\rho = r/a \approx 0.6$, referred to as the pedestal. 
The $n_e$ profile outside the pedestal can be approximated by a straight line (Fig. \[fig1\](c)). The profile of the plasma potential $\phi$ also shows two characteristic kinks: one at the top of the TB, and the other at $\rho = 0.8$, which we will refer to as the foot of the TB. The third kink in the plasma potential is seen near the last closed flux surface ($\rho = 1.0$) and is due to the reversal of the radial electric field from negative (inside) to positive (outside). We use dashed guide lines and shading throughout the paper to mark two radial regions of interest: (I) - a region between the top and foot of the TB, and (II) - a region between the foot and the last closed flux surface. Profiles of the radial electric field $E_r$ and its shear $E_r^{\prime}$, derived from the radial profile of the plasma potential are shown in Fig. \[fig2\]. Since $E_r$ is computed by differentiating radial profile of the plasma potential, the (negative) maxima of the radial electric field can not be determined exactly. Three $E_r$ regions are seen: slightly positive $E_r$ inside the top of the transport barrier, substantial negative $E_r \approx$ -1 kV/m in region I, and even more negative $E_r \approx$ -4 kV/m in region II. Correspondingly, the $E_r^{\prime}$ has distinct peak at the top, at the foot of the transport barrier and at the last closed flux surface. ![\[fig2\] Radial profiles of (a) radial electric field and (d) shear in the radial electric field, computed using plasma potential profile of Fig. \[fig1\](d).](fig2.eps) Fluctuations in the electron density and potential are strongly reduced in a broad range of frequencies from L to H mode as discussed in [@Shats_PRE_05]. However, the low frequency ($f<$ 0.6 kHz) spectral feature increases in some radial regions, which will be discussed later. The power spectra of the fluctuations in the plasma potential, $P(\phi)$, at various radial positions in H mode are shown in Fig. \[fig3\](a). 
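The differentiation step used above to obtain $E_r$ and $E_r^{\prime}$ from the measured potential profile can be sketched numerically. The following is our own minimal illustration (function names are ours; $a$ is the mean minor radius quoted earlier, and the profile is assumed sampled on a uniform grid):

```python
import numpy as np

def er_and_shear(rho, phi, a=0.2):
    """Estimate E_r = -d(phi)/dr and its shear E_r' = dE_r/dr from a
    sampled plasma-potential profile phi(rho), with rho = r/a and the
    mean minor radius a in metres."""
    r = rho * a                    # radial coordinate in metres
    e_r = -np.gradient(phi, r)     # radial electric field, V/m
    shear = np.gradient(e_r, r)    # E_r shear, V/m^2
    return e_r, shear
```

Because $E_r$ is obtained by differentiating a measured profile, sharp (negative) maxima are smoothed by the finite-difference stencil, consistent with the remark in the text that they cannot be determined exactly.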
The low-frequency feature ($0.1 - 0.6$ kHz) is dominant in H mode. Poloidal wave number $k_{\theta}$ of this low frequency component is measured using two poloidally separated probes. The measured poloidal wave number of $k_{\theta}= (2 - 5)~ $ m$^{-1}$ at $f=(0.1 - 0.6)$ kHz is indicative of the mode number $m=0$. The toroidal mode number is estimated using toroidally separated probes, as described in [@Shats_PPCF_06], and shows $n=0$. Hence the strong low frequency fluctuations in the plasma potential are identified as stationary zonal flows. It should be noted that it is usually difficult to align toroidally separated probes to exactly the same poloidal position. As a result, a phase shift between toroidally separated probes will occur due to the uncertainty in the poloidal separation between the probes, $\Delta y$: $$\label{mode_number_measurement} \Delta \varphi(f) = k_{\|}(f)\Delta L_{\|} + k_{\theta}(f)\Delta y,$$ where $\Delta L_{\|}$ and $\Delta y$ are toroidal and poloidal separation between the probes respectively, and $k_{\theta}(f)$ is known from the phase difference between the poloidally separated probes. In the case of a zonal flow, $m = 0$, the second term on the right-hand side becomes zero (since $k_{\theta} = 0$), such that the poloidal uncertainty $\Delta y$ becomes unimportant and the toroidal wave number can be reliably estimated by measuring $\Delta \varphi$. Spectra similar to those in Fig. \[fig3\](a) have also been observed in the Compact Helical System (CHS) using a heavy-ion-beam probe [@Fujisawa_PRL_04]. In that experiment, low frequency potential structures were also identified as stationary zonal flows. ![\[fig3\] (a) Power spectra of the plasma potential in different radial regions; (b) radial profile of the spectral power density of stationary zonal flows (0.1 $\sim$ 0.6 kHz). Hatched boxes indicate radial positions of the $E_r$ maxima in regions I and II. 
](fig3.eps) The spectral power density of the zonal flow (shaded spectral region of $f=(0.1 - 0.6)$ kHz in Fig. \[fig3\](a)) varies along the radius. The radial profile of the spectral power density of the zonal flow in H mode is presented in Fig. \[fig3\](b). The stationary zonal flow is a band-like structure localized in the radial region of $0.6 < \rho < 1.0$. Two hatched boxes drawn in Fig. \[fig3\](b) indicate the uncertainty in the radial positions of the (negative) $E_r$ maxima in regions I and II (Fig. \[fig2\](a)). It can be seen that the zonal flow maximum spatially coincides with the maximum in (negative) $E_r$. This suggests that stationary zonal flow directly contributes to mean $E_r$ and may be responsible for the “corrugation” of the $E_r$ profile seen in Fig. \[fig2\](a). The list of spatially coinciding phenomena in this plasma is complemented by the observation that the TB region appears in the vicinity of zero magnetic shear in this magnetic configuration. The computed radial profile of the rotational transform ${\raisebox{-1pt}{$\mathchar'40$}\mkern-5.43mu\iota}= 1/q$ (where $q$ is the safety factor) is shown in Fig. \[fig4\]. In addition to zero shear at $\rho \approx$ 0.75, ${\raisebox{-1pt}{$\mathchar'40$}\mkern-5.43mu\iota}$ = 1.4 = $n/m$ = 7/5 rational surfaces are present in both zones I (at $\rho \approx$ 0.65) and II (at $\rho \approx$ 0.85). The accuracy of the ${\raisebox{-1pt}{$\mathchar'40$}\mkern-5.43mu\iota}$ computation has been verified using experimental electron beam mapping [@Shats_RSI_95]. The existence of the 7/5 rational surfaces is also confirmed by the observation of the $m=5$ chain of magnetic islands in the region of $\rho\approx (0.83-0.87)$. The plasma current in the H-1 heliac is negligibly small ($\sim 10$ A) and does not affect the vacuum magnetic structure. 
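The band-integrated spectral power plotted in Fig. \[fig3\](b) can be estimated from a potential time series with a standard windowed periodogram. A minimal sketch follows (our own function names and parameters, not the authors' analysis code):

```python
import numpy as np

def band_power(signal, fs, f_lo=100.0, f_hi=600.0):
    """Integrate the one-sided power spectrum of a potential time series
    over the stationary-zonal-flow band (0.1-0.6 kHz in the text).
    fs and the band edges are in Hz."""
    n = len(signal)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    win = np.hanning(n)                    # Hann window limits leakage
    spec = np.abs(np.fft.rfft(signal * win)) ** 2
    psd = 2.0 * spec / (fs * (win ** 2).sum())   # one-sided PSD estimate
    mask = (f >= f_lo) & (f <= f_hi)
    return psd[mask].sum() * (f[1] - f[0])       # rectangle-rule integral
```

Repeating this at each radial position of the probe yields a profile of zonal-flow power versus $\rho$ of the kind shown in Fig. \[fig3\](b).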
![\[fig4\] Radial profile of rotational transform in the magnetic configuration discussed in this paper.](fig4.eps) A possible role of low-order rational surfaces in the formation of H mode has been recognized since the first observation of H mode in stellarators [@Wagner_PPCF_94; @Ascasibar_PPCF_2002]. Spatial correlation of the rational surfaces with the stationary zonal flows, seen in Fig. \[fig3\](b) and Fig. \[fig4\], may be indicative of the generation of stationary zonal flows due to the influence of the rational surfaces, as suggested in [@Hidalgo_PPCF_2001]. The formation of the TB and strong stationary zonal flow are also observed during *spontaneous* transitions in H-1, described in [@Punzmann2004]. Fig. \[fig5\](a) shows the temporal evolution of the mean plasma density during a spontaneous L-H transition. In this discharge the mean electron density jumps from about $0.6 \times 10^{18}$ m$^{-3}$ to almost $1.2 \times 10^{18}$ m$^{-3}$ in about one millisecond. As in the stationary H mode discharges described above, strong zonal flow in the TB region is observed in the H mode stage of discharges with spontaneous transitions. The spatial correlation of the TB regions and stationary zonal flow in the spontaneous transitions is also observed. ![\[fig5\](a) Time evolution of the mean plasma density during the spontaneous L-H transition, (b) power spectra of the plasma floating potential across the L (dashed line) to H (solid line) transition at $\rho=0.65$.](fig5.eps) In Fig. \[fig5\](b), the change of the fluctuation power spectra across the L-H transition at the radial position of $\rho=0.65$ is illustrated. It can be seen that across the transition, fluctuations in the broad spectral range from 0.6 to 100 kHz are reduced, while the spectral power of low-frequency zonal flow, $f= (0.1-0.6)$ kHz, is increased. Many of the ingredients of the TB physics presented in this paper have been discussed with regard to H modes in tokamaks and stellarators. 
Examples include the role of low-order rational surfaces [@Garcia_PoP_2001; @Hahm_PPCF_2002], the role of zonal flows in the L-H transition [@Fujisawa2006], and the modification of the $E_r$ profiles by non-neoclassical (turbulence-driven) effects [@Diamond1991; @Terry2000; @Wagner2006]. Here we present for the first time experimental evidence that zonal flow, which develops near the $n/m$ = 7/5 rational surfaces, is spatially correlated with distinct regions of the radial electric field inside the TB region in the H mode plasma. The resulting strong peaks in $E_r$ shear coincide with the kinks in the density profile in H mode, suggesting that the strong peaks in $E_r^{\prime}$ define the position and the width of the TB. A similar physical picture has recently emerged as a result of analysis of gyrokinetic simulations of DIII-D tokamak discharges [@Waltz2006]. Corrugations in the radial profiles of electron density, temperature and radial electric field in DIII-D are observed near low-order rational surfaces. The development of strong zonal flows and a strong $E_r$ shear layer in these plasma regions suggests the development of zonal flow as a trigger for the TB formation. Since zonal flows are usually thought of as turbulence-driven flows, possible mechanisms of the zonal flow enhancement and sustainment in H mode, when the level of turbulence is substantially reduced, need to be explained. It has been suggested in [@Shats_PRE_05] that the redistribution of spectral energy from a broad range of intermediate scales into a stationary zonal flow is the mechanism of the zonal flow enhancement during L-H transitions. The result shown in Fig. \[fig5\](b), to a large extent, confirms this hypothesis in a radial region close to the transport barrier. Also in [@Shats_PRE_05] we presented experimental evidence that the nonlocal spectral transfer of energy from the unstable drift wave is responsible for the sustainment of zonal flow in H mode, as has been proposed in [@Balk_JETP_90]. 
Further theoretical work in this direction is needed. The authors would like to thank Santhosh Kumar for providing computed data on rotational transform. F. Wagner *et al.*, Phys. Rev. Lett. **49**, 1408 (1982). A. Fujisawa, Plasma Phys. Controlled Fusion **45**, R1 (2003). T. Hatae *et al.*, Nucl. Fusion **41**, 285 (2001). T. Fujita, Plasma Phys. Controlled Fusion **44**, A19 (2002). ITER physics expert groups on confinement and transport and confinement modelling and database, *et al.*, Nucl. Fusion **39**, 2175 (1999). P. Terry, Rev. Mod. Phys. **72**, 109 (2000). P. Gohil, Plasma Phys. Controlled Fusion **44**, A37 (2002). T.S. Hahm, Plasma Phys. Controlled Fusion **44**, A87 (2002). R.E. Waltz *et al.*, Phys. Plasmas **13**, 052301 (2006). S.M. Hamberger *et al*., Fusion Technol. **17**, 123 (1990). M.G. Shats and W.M. Solomon, Phys. Rev. Lett. **88**, 045001 (2002). M.G. Shats and W.M. Solomon, New Journal of Physics **4**, 30 (2002). M.G. Shats *et al.*, Plasma Phys. Control. Fusion **48**, S17 (2006). M.G. Shats *et al.*, Phys. Rev. Lett. **77**, 4190 (1996). H. Punzmann and M.G. Shats, Phys. Rev. Lett. **93**, 125003 (2004). M.G. Shats *et al.*, Phys. Rev. E **71**, 046409 (2005). A. Fujisawa *et al.*, Phys. Rev. Lett. **93**, 165002 (2004). F. Wagner *et al.*, Plasma Phys. Control. Fusion **36**, A61 (1994). M.G. Shats *et al.*, Rev. Sci. Instrum. **66**, 1163 (1995). E. Ascasibar *et al.*, Plasma Phys. Control. Fusion **44**, B307 (2002). C. Hidalgo *et al.*, Plasma Phys. Control. Fusion **43**, A313 (2001). L. Garcia *et al.*, Phys. Plasmas **8**, 4111 (2001). A. Fujisawa *et al.*, Plasma Phys. Control. Fusion **48**, A365 (2006). P.H. Diamond and Y.B. Kim, Phys. Fluids B **3**, 1626 (1991). F. Wagner *et al.*, Plasma Phys. Control. Fusion **48**, A217 (2006). A.M. Balk *et al.*, Sov. Phys. JETP **71**, 249 (1990).
--- abstract: 'We present the hydrodynamic theory of active XY spins coupled with flow fields, for systems both having and lacking number conservation in two dimensions (2D). For the latter, with strong activity or nonequilibrium drive, the system can synchronize, or be phase-ordered with various types of order, e.g., quasi-long-range order (QLRO) or new kinds of order weaker or stronger than QLRO for sufficiently strong active flow-phase couplings. For the number conserving case, the system can show QLRO or order weaker than QLRO, again for sufficiently strong active flow-phase couplings. For other choices of the model parameters, the system necessarily disorders in a manner similar to immobile but active XY spins, or 2D Kardar-Parisi-Zhang surfaces.' author: - Astik Haldar - Abhik Basu bibliography: - 'citeXY.bib' title: 'Flow can order: Phases of live XY spins in two dimensions' --- Systems out of equilibrium are often marked by their striking ability to display ordered states that are impossible in their equilibrium counterparts. For instance, a two-dimensional (2D) collection of self-propelled, orientable particles, known as a flock, can display long range orientational order in the presence of finite noises (the nonequilibrium equivalent of temperature) and in the absence of any long-range interactions or symmetry-breaking external fields [@toner-tu]. On the other hand, nonequilibrium effects can also completely disorder a system that otherwise shows order in the equilibrium limit [@toner-cgle]. In this Letter, we formulate the hydrodynamic theory of a collection of mobile nearly phase-ordered oscillators, or “active XY model”, with velocities redirected by force densities arising from inhomogeneities in the phase in 2D, with or without number conservation. We show that this system is distinctly different from both its non-moving but active and equilibrium analogs. 
Our theory forms the basis of further studies of ordered states in a wide range of 2D systems with broken continuous symmetries, e.g., active superfluids [@active-super1; @active-super2; @active-super3] (for which the entropy density is no longer a conserved hydrodynamic variable due to the activity) and oscillating chemical reactions [@bz]. We restrict ourselves to a system with momentum conservation, but with or without number conservation. In the former case, the system is assumed to be incompressible, i.e., with a constant number density. In the latter case, we allow “birth” and “death” (hereafter the “Malthusian” case [@John-malth]), in the spirit of having “live” XY spins. In this case, the system is compressible, but the density fluctuations relax [*fast*]{} with a wavevector-independent damping to a constant mean value determined by the birth and death rates. This means the local phase and the hydrodynamic velocity field are the only two hydrodynamic or slow variables in both the Malthusian and number conserving cases. We are interested in the variance $\Delta$ of the local phase fluctuations $\phi$ in the limit when the dynamics is dominated by the active processes: $\Delta \equiv\langle \phi^2({\bf x},t)\rangle$ in 2D. Our principal results are as follows. For a range of the model parameters corresponding to sufficiently strong flow-phase coupling of generic active origin, (i) in the Malthusian case, we find a novel kind of ordered state that can be stable hydrodynamically and robust against noise: the variance $\Delta$ of the local phase fluctuations grows with the system size $L$ as $(\log L)^\mu$, where $\mu >0$ is a [*nonuniversal exponent*]{}, reflecting a slow growth with $L$. 
The exponent $\mu$ can be smaller or larger than unity; $\mu=1$ corresponds to quasi-long-range order (QLRO), also found in the 2D equilibrium XY model [@chaikin], while $0<\mu<1$ ($\mu>1$) implies slower (faster) than logarithmic growth with $L$ that we name [*stronger than QLRO*]{} (SQLRO) ([*weaker than QLRO*]{} (WQLRO)). (ii) In the number-conserving incompressible case, the dominant active effects come from the chirality of the XY spins that couples the phase with the vorticity of the flow. For a range of the active parameters again $\Delta\sim (\log L)^\mu,\mu \geq 1$, indicating WQLRO. Remarkably, for other ranges of the active model parameters in both the Malthusian and the number conserving cases, the spins disorder as $L$ grows beyond a microscopic size, in a manner reminiscent of the roughening of 2D growing Kardar-Parisi-Zhang surfaces [@kpz; @stanley]. Thus appropriate flow-phase coupling can induce either order of various types in a collection of active XY spins, or disorder them. We find that the nonuniversal exponent $\mu$ in the Malthusian case can in general depend on two dimensionless parameters $\alpha_1,\alpha_2$, both nonuniversal themselves. These are the ratios of two effective flow-phase coupling constants to a third, which represents the strength of the lowest order phase-phase coupling that exists even for immobile active XY spins. Of these two, $\alpha_1$ is proportional to the ratio of the birth and death rates, can be of either sign, and vanishes in the number conserving case; $\alpha_2\geq 0$ models how phase fluctuations are influenced by the fluid vorticity, a number conserving process that can be present even in the non-Malthusian case. In this Letter, for simplicity, we study the Malthusian case with $\alpha_2=0$ and obtain $\mu$ as a function of $\alpha_1$, which suffices for our purposes here. The full form of $\mu$ as a function of both $\alpha_1$ and $\alpha_2$ is given in the associated long paper (ALP). 
In the Malthusian case, flow-induced ordering takes place outside the window $-0.1607 < \alpha_1 < 1.3829$. WQLRO is found for $1.3829<\alpha_1 <2$ and $-1<\alpha_1 <-0.1607$, QLRO for $\alpha_1=-1,2$, and SQLRO for either $\alpha_1 >2$ or $\alpha_1 <-1$. The nonlinear dependence of $\mu$ on $\alpha_1$ is shown in Fig. \[muplot\]. The system disorders for $-0.1607 < \alpha_1 < 1.3829$, in a manner that closely resembles the rough phase of a KPZ surface. ![Plot of $\mu$ as a function of $\alpha_1$ in the Malthusian case[]{data-label="muplot"}](mu-alpha1.eps){width="6cm"} For the number conserving incompressible case, WQLRO is obtained for $\alpha_2>1$, when $\mu=1+1/(\alpha_2-1)$ is necessarily larger than unity, as it should be for WQLRO; see Fig. \[muplot1\] for the nonlinear dependence of $\mu$ on $\alpha_2$. For $0<\alpha_2<1$, a disordered phase akin to the rough phase of a KPZ surface ensues. ![Plot of $\mu$ as a function of $\alpha_2$ in the number conserving case.[]{data-label="muplot1"}](mu-alpha2){width="5cm"} We now outline the derivation of these results. Since we are considering an active or nonequilibrium system, we must begin by setting up the dynamical equations for the slow variables $\phi$ and $\bf v$. 
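Before turning to the derivation, the phase windows quoted above can be collected in a small lookup. This is our own illustrative sketch (function names are ours); the boundary values $-0.1607$ and $1.3829$ are taken directly from the text:

```python
def malthusian_phase(alpha1):
    """Classify the phase of the Malthusian model from alpha_1,
    using the windows quoted in the text."""
    if -0.1607 < alpha1 < 1.3829:
        return "disorder (KPZ-rough-like)"
    if alpha1 in (-1, 2):
        return "QLRO"          # mu = 1
    if -1 < alpha1 < -0.1607 or 1.3829 < alpha1 < 2:
        return "WQLRO"         # mu > 1
    return "SQLRO"             # 0 < mu < 1: alpha1 > 2 or alpha1 < -1

def mu_number_conserving(alpha2):
    """mu = 1 + 1/(alpha_2 - 1) for the number conserving case, alpha_2 > 1."""
    assert alpha2 > 1
    return 1.0 + 1.0 / (alpha2 - 1.0)
```

For instance, $\alpha_2=2$ gives $\mu=2$, i.e., WQLRO with $\Delta\sim(\log L)^2$, while any $\alpha_1$ inside the quoted window yields the disordered, KPZ-rough-like phase.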
Symmetry considerations (invariance under translation and rotation in space, and an arbitrary but constant shift in the phase) require that the dynamical equation for $\phi$, to lowest order in a gradient expansion, takes the form $$\begin{aligned} \frac{\partial \phi}{\partial t} +\frac{\lambda}{2} ({\boldsymbol\nabla} \phi)^2 +\lambda_1 {\bf v}\cdot{\boldsymbol\nabla} \phi &=& \nu \nabla^2 \phi + \lambda_2({\boldsymbol\nabla} \times {\bf v})_z \nonumber \\&+& \lambda_3 {\boldsymbol\nabla}\cdot {\bf v} + \xi ,\label{phieq}\end{aligned}$$ Here, $\nu >0$ is the damping coefficient, $\lambda_1,\lambda_2,\lambda_3$ couple flow ($\bf v$) with $\phi$ and vanish for immobile oscillators (${\bf v}=0$), $\lambda$ is another coupling constant for a symmetry-permitted nonlinearity that survives for immobile oscillators; $\xi$ is a Gaussian-distributed spatio-temporally white noise (since $\phi$ is non-conserved) with zero mean and a variance $$\langle \xi({\bf x},t) \xi(0,0)\rangle = 2D_0\delta^2({\bf x})\delta(t).\label{phi-noise}$$ Terms with coefficients $\lambda,\lambda_2,\lambda_3$ are forbidden in equilibrium by the rotation invariance (invariance under a constant phase shift) of the underlying free energy. They are, however, permitted here simply because rotation invariance at the level of the equations of motion, which is all that can be demanded in an active system, does not rule them out; see the associated long paper (ALP) for a detailed derivation. The couplings $\lambda,\,\lambda_2,\,\lambda_3$ can be of any sign. Lastly, $\lambda_2,\lambda_3$ couple local change in $\phi$ with the local vorticity and compressibility of the fluid; for an achiral system $\lambda_2=0$ identically. Assuming low Reynolds number flows, we ignore inertia of the fluid. 
Similar symmetry considerations lead to the generalized Stokes equation: $$\eta_1 \nabla^2 v_i +\eta_2 \nabla_i({\boldsymbol\nabla}\cdot {\bf v})= \nabla_i \varPi + w \nabla_i \phi \nabla^2\phi, \label{stokes}$$ consistent with the momentum conservation; we have ignored any external force in (\[stokes\]). Here $\eta_1$ and $\eta_2$ are the 2D fluid viscosities, $\varPi$ is the pressure; $w$ is a coupling constant that controls how the local velocities are redirected by the local inhomogeneities in the phase; in the equilibrium limit, $w=\lambda_1$ is required. For an active system as here, $w$ and $\lambda_1$ are in principle free parameters. Equations (\[phieq\]) and (\[stokes\]) clearly are invariant under the Galilean transformation ${\bf v}\rightarrow {\bf v} + {\bf v}_0$. Finally, all terms in (\[phieq\]) and (\[stokes\]), except the term with coefficient $\lambda_2$, generalize straightforwardly to any arbitrary dimension; the term with coefficient $\lambda_2$ can exist only in 2D and models chiral effects. For achiral systems, $\lambda_2$ vanishes. Note also that with $\bf v=0$, as appropriate for immobile oscillators, (\[phieq\]) reduces to the well-known KPZ equation [@kpz]. Hence, (\[phieq\]) and (\[stokes\]) can also be interpreted as the coupled equations for a growing KPZ surface coupled to a flow field that in turn is affected by the local height fluctuations. However, there is a crucial difference: for a growing surface, $\phi$ is a single-valued height field and unbounded. Clearly, states with different heights are always physically distinguishable. In contrast, for phase oscillators $\phi$ is periodic and hence can support stable topological defects, whose unbinding in equilibrium is described by the well-known Kosterlitz-Thouless theory. Since we are considering nearly phase-ordered states, the number of such defects is implicitly assumed to vanish. 
In the Malthusian case, the dynamical equation for density fluctuations $\delta\rho$, a nonhydrodynamic variable, with a source term to reflect the tendency of birth and death to restore the local population density to its steady state value, and an additional, non-number-conserving Gaussian-distributed noise $f_\rho$ reflecting statistical fluctuations reads, after dropping irrelevant terms [@John-malth] and linearizing about a mean density $\rho_0$, $$\delta\rho =\gamma{\boldsymbol\nabla}\cdot {\bf v} + f_\rho,\label{deneq}$$ where $\gamma$ is a constant time scale; see ALP for details. Further, applying an equation of state $\varPi(\delta\rho)=\psi\delta\rho$ locally, where $\psi$ is a susceptibility, (\[stokes\]) reduces to $$\eta_1 \nabla^2 v_i +\eta_2 \nabla_i ({\boldsymbol\nabla}\cdot {\bf v})= w \nabla_i \phi \nabla^2\phi + f_i,\label{stokes1}$$ where $\psi$ and $f_\rho$ have been absorbed in $\eta_2$ and $f$, respectively. In the incompressible number conserving case, $\lambda_3=0$ in (\[phieq\]) and ${\boldsymbol\nabla}\cdot {\bf v}=0$ in (\[stokes\]). We intend to calculate the universal scaling exponents that characterize the time-dependent correlation function of $\phi$, defined as $$\langle \phi ({\bf x},t)\phi(0,0)\rangle \sim |x|^{2\chi}\theta(|x|^z/t),$$ where $\chi$ and $z$ are, respectively, the roughness and dynamic exponent; $\theta$ is a dimensionless scaling function of its argument. In the linear passive theory ($\lambda=\lambda_1=\lambda_3=w=0$), $\chi$ and $z$ are known exactly: $\chi=0$ and $z=2$ in 2D. This corresponds to quasi-long range order with $\Delta\sim \log L$. Simple power counting shows that the nonlinear terms are marginal in 2D. Thus the nonlinear effects can affect the scaling. To study this systematically, a perturbative dynamic renormalization group (RG) treatment is needed. Couplings $\lambda,\,\lambda_1,\,\lambda_2,\,\lambda_3$ are all dimensionless; $\lambda_1$ can be set to 1. 
All the active coefficients $\lambda,\,\lambda_2,\,\lambda_3$ must scale with the strength of the underlying nonequilibrium processes, or the energy released by them, that ultimately drive the system away from thermal equilibrium. [In order to extract the role of the active effects on the phases in a systematic manner, we assume the strong-activity limit $\lambda_1 \ll \lambda,\,\lambda_2,\,\lambda_3 $; further, we ignore any stochastic force $f_i$ in (\[stokes1\]) for large $\eta_1,\, \eta_2$, consistent with the Stokesian limit for $\bf v$.]{} To proceed further, we note that the Stokesian velocity $\bf v$, which appears linearly in (\[stokes\]), can be eliminated [*exactly*]{} to obtain an effective equation for $\phi$. The resulting equation, whose detailed form is given in ALP, now contains three nonlinear terms: one with coefficient $\lambda$ that is already present in (\[phieq\]), and two other equally relevant nonlinearities with coefficients $\tilde\lambda_2,\tilde\lambda_3$, which have their origin in the vorticity and compression terms with coefficients $\lambda_2$ and $\lambda_3$, respectively, in (\[phieq\]). As usual, the RG is done by tracing over the short wavelength Fourier modes of the fields, followed by a rescaling of lengths, times and the fields.
This leads to the following differential recursion relations: $$\begin{aligned} &&\frac{d\nu}{dl} = \nu\left[z-2+(\frac{\alpha_1^2}{2}-\frac{5\alpha_1}{8}+\frac{\alpha_2}{8})g\right],\label{nu}\\ &&\frac{dD}{dl} = D\left[z-2-2\chi+(\frac{1}{4}+\frac{3\alpha_1^2}{8}-\frac{\alpha_1}{2}+\frac{\alpha_2}{8})g\right],\label{D}\\ &&\frac{d\lambda}{dl} = \lambda\left[z+\chi-2\right] ,\label{lambda}\\ &&\frac{d \tilde\lambda_2}{dl} = \tilde\lambda_2 \left[z+\chi-2\right],\label{lambda_2}\\ &&\frac{d \tilde\lambda_3}{dl}= \tilde\lambda_3\left[z+\chi-2\right],\label{lambda_3}\end{aligned}$$ where we have defined an effective coupling constant $g=\lambda^2D/(2\pi\nu^3)$ and two dimensionless ratios $\alpha_1=\tilde\lambda_3/\lambda$ and $\alpha_2=\tilde\lambda_2^2/\lambda^2$. Here $\exp(l)$ is the length rescaling factor. Notice that by construction $g$ is non-negative, whereas $\alpha_1,\,\alpha_2$ can be of either sign. At this stage, for reasons of simplicity, we extract the universal scaling for (i) $\alpha_2=0$ (i.e., $\lambda_2=0$) in the Malthusian case and (ii) $\alpha_1=0$ (i.e., $\lambda_3=0$) in the number conserving case separately. \(i) [*Malthusian case*]{} ($\alpha_1\neq 0$): We set $\alpha_2=0$ in (\[nu\]-\[D\]) for simplicity. The flow equations for $g,\,\alpha_1$ then read $$\begin{aligned} &&\frac{dg}{dl} = \frac{g^2}{4}[1-\frac{9}{2}\alpha_1^2 + \frac{11\alpha_1}{2}]=-\frac{g^2}{8}{\cal A}_1 (\alpha_1),\label{flowmal}\\ &&\frac{d\alpha_1}{dl}=0.\label{flowalpha1} \end{aligned}$$ where ${\cal A}_1(\alpha_1)=9\alpha_1^2 - 11\alpha_1-2$. Thus $\alpha_1$ is [*marginal*]{}. For immobile oscillators, $\alpha_1=0$ identically, giving $$\frac{dg}{dl} = \frac{g^2}{4}>0,$$ implying a runaway flow, with $g$ diverging in a finite renormalization group time $l$. This corresponds to short-range order, i.e., disorder, similar to the rough phase of a KPZ surface [@kpz; @natter; @stanley].
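The qualitative behaviour encoded in (\[flowmal\]) is easy to check directly. The sketch below (an illustration, not part of the original analysis; the initial values $g(0)$ are arbitrary) Euler-integrates the one-loop flow, confirming the runaway at $\alpha_1=0$, the decay of $g$ when ${\cal A}_1>0$, and the roots of ${\cal A}_1(\alpha_1)=0$ that bound the stable window.

```python
import math

def A1(a1):
    # A_1(alpha_1) = 9 alpha_1^2 - 11 alpha_1 - 2, from the flow equation.
    return 9 * a1**2 - 11 * a1 - 2

def flow_g(g0, a1, l_max, dl=1e-4, g_cap=1e6):
    """Euler-integrate dg/dl = -(A_1/8) g^2; return (g, l reached)."""
    g, l = g0, 0.0
    while l < l_max and g < g_cap:
        g += -(A1(a1) / 8.0) * g * g * dl
        l += dl
    return g, l

# Stable case: alpha_1 = 2 gives A_1 = 12 > 0, so g flows to zero.
g_stable, _ = flow_g(g0=0.5, a1=2.0, l_max=50.0)
exact = 0.5 / (1 + 12 * 50.0 * 0.5 / 8)   # closed-form g(l) = g0/(1 + A1 l g0/8)

# Unstable case: alpha_1 = 0 (immobile oscillators) gives A_1 = -2 < 0;
# g blows up at l* = 8/(|A_1| g(0)) = 4 here, i.e. disorder at a finite scale.
_, l_blowup = flow_g(g0=1.0, a1=0.0, l_max=50.0)

# Boundaries of the stable window: roots (11 +/- sqrt(193))/18 of A_1 = 0.
r_minus = (11 - math.sqrt(193)) / 18
r_plus = (11 + math.sqrt(193)) / 18
print(round(r_minus, 4), round(r_plus, 4))  # -0.1607 1.3829
```

With the step size used, the integrated stable flow matches the closed-form solution to better than a percent, and the unstable flow exits the integration near the analytic blow-up "time" $l^*=4$.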
Translating this result for phase-coupled oscillators, we find that nearly phase-ordered states of a collection of immobile oscillators are always unstable: they always [*desynchronize*]{}. When the oscillators are mobile, $\alpha_1\neq 0$, the physics can change dramatically. In fact, if ${\cal A}_1(\alpha_1)>0$, then $g$ flows to zero in the long wavelength limit. The range of $\alpha_1$ for which this is possible is given by $${\cal A}_1(\alpha_1)=9\alpha_1^2 - 11\alpha_1-2 >0.\label{a1}$$ This holds so long as $\alpha_1 >(11+\sqrt{193})/18\approx 1.3829$, or $\alpha_1 < (11-\sqrt{193})/18\approx -0.1607$. Within the window $-0.1607<\alpha_1<1.3829$, ${\cal A}_1 <0$ and instability ensues. Various possibilities of ordered states emerge in the stable regime. In general, in the ordered state (\[flowmal\]) gives $$g(l)=\frac{g(l=0)}{1+{\cal A}_1 l g(l=0)/8}\approx \frac{8}{{\cal A}_1 l},$$ for $l\gg 1$. Together with $z=2$ and $\chi=0$ and the identification $l=-\ln (a_0q)$, this gives $$\nu(q)=C_\nu \left(\ln\frac{1}{q}\right)^{\Delta_1},\,D(q)=C_D\left(\ln\frac{1}{q}\right)^{\Delta_2},$$ where $a_0$ is the lattice spacing, $\Delta_1=\frac{4\alpha_1^2-5\alpha_1}{{\cal A}_1(\alpha_1)},\Delta_2=\frac{3\alpha_1^2-4\alpha_1+2}{{\cal A}_1(\alpha_1)}$, and $C_\nu$ and $C_D$ are two dimensionful constants. This gives $$\langle \phi^2 ({\bf x},t)\rangle = \int \frac{d^2 q}{(2\pi)^2}\frac{D(q)}{\nu(q)q^2}\sim (\ln L)^{1+\Delta_2-\Delta_1},$$ [ giving $\mu=1+\Delta_2 -\Delta_1$. Therefore, QLRO ensues when $\Delta_1=\Delta_2$, i.e., $\alpha_1=2,\,-1$. Within the windows $1.3829<\alpha_1<2$ and $-1<\alpha_1 <-0.1607$, $\Delta_2-\Delta_1>0$, giving $\mu>1$. This means $\langle \phi^2 ({\bf x},t)\rangle$ diverges with $L$ faster than under QLRO, demonstrating an order weaker than QLRO, which we call [*weaker than QLRO*]{} or [*WQLRO*]{}.
On the other hand, for $\alpha_1>2$ or $\alpha_1<-1$, $\Delta_2-\Delta_1<0$ and $\mu<1$, indicating that $\langle \phi^2 ({\bf x},t)\rangle$ grows with $\ln L$ more slowly than under QLRO. This gives the name [*stronger than QLRO*]{} or [*SQLRO*]{}. The minimum of $\mu$ is 0.88805, obtained for $\alpha_1=15.347$, for which the divergence of $\langle \phi^2 ({\bf x},t)\rangle$ is slowest]{}. For ${\cal A}_1<0$ \[see Eq. (\[a1\]) above\], the system disorders: we find $$\frac{dg}{dl}=|{\cal A}_1| g^2/8 \implies g(l)=\frac{g(l=0)}{1-|{\cal A}_1|\, l\, g(l=0)/8}.$$ Thus $g$ diverges in a finite renormalization group time, i.e., for a microscopic system size, implying disorder. This is reminiscent of the roughening of 2D KPZ surfaces [@stanley]. The RG flow diagram in the $g-\alpha_1$ plane is given in Fig. \[phase1\]. ![Schematic flow diagram in the $g-\alpha_1$ plane. Stable and unstable regions are marked by the arrows indicating the direction of RG flow (see text).[]{data-label="phase1"}](flow1.eps){width="5cm"} \(ii) [*Number conserving case ($\alpha_1=0,\,\alpha_2 \neq 0$):-*]{} This is relevant over time scales smaller than $\gamma$, as defined in (\[deneq\]). The differential recursion relations for $D$ and $\nu$ may be obtained from (\[nu\]-\[D\]) by setting $\alpha_1=0$. The corresponding flow equations for $g, \alpha_2$ now read $$\begin{aligned} &&\frac{dg}{dl}=\frac{g^2}{4}(1-\alpha_2)=-g^2{\cal A}_2 (\alpha_2),\label{flowcons}\\ &&\frac{d\alpha_2}{dl}=0.\label{flowalpha2}\end{aligned}$$ Thus $\alpha_2$ is marginal; ${\cal A}_2=(\alpha_2-1)/4$. For $\alpha_2>1$, $g(l)$ flows to zero for large $l$, corresponding to ordered states, whose scaling properties we now calculate. We find $$g(l)=\frac{g(l=0)}{{\cal A}_2l g(l=0) + 1}\approx \frac{1}{{\cal A}_2l},$$ for large $l$, where ${\cal A}_2>0$ for $\alpha_2>1$. On the other hand, when $\alpha_2<1$, ${\cal A}_2<0$ and hence there is a runaway flow of $g$.
This implies instability of the nearly phase-ordered state, and is reminiscent of the roughening of 2D KPZ surfaces. The RG flow lines are illustrated in Fig. \[rgflow2\]. Clearly $\alpha_2=1$ is a separatrix that separates the stable and unstable regions in the $g-\alpha_2$ plane; in fact, $\alpha_2=1$ is a [*fixed line*]{} on which $dg/dl=0$ identically. ![Schematic flow diagram in the $g-\alpha_2$ plane. Stable and unstable regions are marked by the arrows indicating the direction of RG flow (see text).[]{data-label="rgflow2"}](flow2.eps){width="5cm"} As can be seen from that figure, flow lines with $\alpha_2=\tilde\lambda_2^2(l=0)/\lambda^2(l=0)>1$ flow towards the line $g=0$, whereas flow lines with $\alpha_2<1$ flow away from the line $g(l)=0$. Consider $\alpha_2>1$, i.e., above the separatrix in Fig. \[rgflow2\], so that in the long wavelength limit $l\rightarrow \infty$, $g (l)\sim \frac{1}{{\cal A}_2 l}$. Together with $z=2,\,\chi=0$ and the identification $l\sim \ln(1/q)$, where $q$ is a Fourier wavevector, we obtain the scale-dependent renormalized damping $\nu(q)$ and noise variance $D(q)$: $$\nu(q)=\nu_0[\ln(1/q)]^{\frac{\alpha_2}{8{\cal A}_2}},\;D(q)=D_0[\ln(1/q)]^{\frac{2+\alpha_2}{8{\cal A}_2}}.\label{nu-D-q}$$ With (\[nu-D-q\]) we now calculate $\Delta=\langle\phi^2({\bf x},t)\rangle$: $$\Delta \sim (\log L)^{1+ 1/(\alpha_2-1)}.\label{wqlro}$$ For $\alpha_2>1$, $\Delta$ grows [*faster*]{} than just $\log L$ (expected for QLRO). Thus it is a weaker order than QLRO; we name it [*weak QLRO*]{} (WQLRO). Nonetheless, it is a [*far stronger*]{} order than just short range order (SRO) or disorder, where any notion of order is lost within a microscopic distance. As $\alpha_2\rightarrow\infty$, $\Delta\sim \log L$, recovering QLRO. We now turn to the case $\alpha_2<1$.
Then ${\cal A}_2(\alpha_2)<0$; hence $$\frac{dg}{dl}=|{\cal A}_2|g^2\implies g(l)=\frac{g(l=0)}{1-|{\cal A}_2|\, l\, g(l=0)}.$$ Thus, $g(l)$ [*diverges*]{} at a finite renormalization group time $l_{crit}=1/[|{\cal A}_2| g(l=0)]$, i.e., at a microscopic length scale. Therefore, the system can remain ordered only if it is sufficiently small. This is again similar to the roughening of 2D KPZ surfaces. The above results for a free standing system show that $\langle\phi^2 ({\bf x},t)\rangle$ steadily, albeit slowly, rises with $L$. Thus for a sufficiently large system size $L$, the nonlinear term $\lambda_1 {\bf v}\cdot {\boldsymbol\nabla} \phi$ should become relevant. The scale beyond which this happens should, however, be exponentially large in $\lambda_2$ or $\lambda_3$; see ALP. On the other hand, the assumption of a 2D free standing system is admittedly an idealization; it breaks down when “new physics” not contained in the hydrodynamic model (\[phieq\]) and (\[stokes\]) intervenes beyond some length scale $L_\eta$. We now consider what “new physics” might intervene. In reality, a 2D system like the one considered here rests on a three-dimensional (3D) bulk fluid or a bulk solid. As shown in ALP, interactions with the surrounding bulk medium (fluid or solid) change the $\eta_1 q^2 {\bf v}({\bf q},t)$ viscous term in (\[stokes\]) to $\eta'|q|{\bf v}({\bf q},t)$ for a surrounding bulk fluid, or $(1/\xi){\bf v}({\bf q},t)$ for a solid substrate, as the dominant damping term in (\[stokes\]) for $L>L_\eta$. Here, $\eta'$ is a 3D viscosity and $\xi$ is a friction coefficient. The scale $L_\eta$ can be estimated by equating $\eta_1 q^2 {\bf v}({\bf q},t)$ with $\eta'|q|{\bf v}({\bf q},t)$, or with $(1/\xi){\bf v}({\bf q},t)$. This gives $L_\eta = \eta_1/\eta'$ for a 3D bulk fluid, and $L_\eta = \sqrt{\xi\eta_1}$ for a solid substrate in contact. As we have seen above, the term with coefficient $\lambda$ pushes the system to disorder and desynchronization, for both the Malthusian and number conserving cases.
In contrast, the compression and vorticity terms, with coefficients $\lambda_3$ and $\lambda_2$, relevant respectively for the Malthusian and the number conserving cases, can counteract the $\lambda$-term and help stabilize order. However, for wavevectors $q<2\pi/L_\eta$, or equivalently, for system sizes $L>L_\eta$, the nonlinear active flow-phase coupling gets weakened and becomes [*subleading*]{} (in a RG sense) to the nonlinear term with coefficient $\lambda$. This makes it unable to compete with the destabilizing effect of the $\lambda$-term. Two possibilities now exist. If $L_\eta > \xi_{NL}$, the nonlinear length, which is the smallest length scale big enough to allow enough renormalization group “time” $l$ for $\nu(l)$ and $D(l)$ to acquire the logarithmic corrections in (\[nu-D-q\]), then for $L> L_\eta >\xi_{NL}$, for which the stabilizing active flow-phase coupling ceases to be relevant, $\nu(q)$ and $D(q)$ are already renormalized as given in (\[nu-D-q\]). This means that for $L>L_\eta$, or for wavevectors $q<2\pi/L_\eta$, the hydrodynamics of $\phi$ is described by the KPZ equation, [*but*]{} with a damping $\nu(q)$ and a noise variance $D(q)$ that diverge as in (\[nu-D-q\]). The new theory has a pseudo-Galilean invariance that enforces non-renormalization of $\lambda$. The new effective coupling constant is $\tilde g(l)=\lambda^2 D(l)/\nu^3(l)$. Due to the singular nature of $\nu(q)$ and $D(q)$, there are no fluctuation corrections to them in a perturbative expansion. Clearly, $\tilde g(l)$ is [*not*]{} marginal in 2D for either the Malthusian or the number conserving case: under naïve rescaling, thus, $$\begin{aligned} \frac{d\tilde g}{dl}&\sim& -{\cal M}\frac{\tilde g}{l},\label{newflow} \end{aligned}$$ where ${\cal M}= {\cal A}_1$, or ${\cal A}_2$, respectively, for the Malthusian and number conserving cases. Thus, $\tilde g$ decays (grows) with $l$ for ${\cal M}>(<)0$.
Thus, for sufficiently small $\tilde g(l)$, (\[newflow\]) tells us that $\tilde g(l)$ will asymptotically vanish, maintaining WQLRO. Noting that $\tilde g(l=0)= g(l=\log L_\eta)$, this may be achieved by tuning $L_\eta$. The latter tuning can be made possible by adding active materials at the interface between the 2D system and the 3D bulk, which can allow “slip” at the interface, potentially weakening the interaction. If $L_\eta <\xi_{NL}$, the system does not have enough renormalization group time $l$ for $\nu$ and $D$ to renormalize. In that case, for a system with $L>L_\eta$, for wavevectors $q<2\pi/L_\eta$ the hydrodynamics of $\phi$ is described by the standard KPZ equation [@stanley], which in 2D has only a rough phase. Thus, the system disorders as $L$ exceeds a microscopic size. Finally, the length $\xi_{NL}=\exp(l^*)$ may be estimated by setting ${\cal A}_{1,2}l^*g(l=0)\sim 1$, giving $\xi_{NL}\sim \exp(1/[{\cal A}_{1,2}g(l=0)])$. [*Summary:-*]{} We have developed the hydrodynamic theory for live XY spins. This theory predicts that when the active effects dominate, the system can show stronger-than-QLRO or weaker-than-QLRO scaling for certain choices of the active coefficients. For other choices, the system completely disorders as its size exceeds a finite value, in a manner reminiscent of the roughening of 2D KPZ surfaces. We consider both the Malthusian and the number conserving cases; these lead to the flow diagrams in Figs. \[phase1\] and \[rgflow2\], respectively. These results should help us understand recent experiments on driven diffusive superfluids or active bacterial superfluids. [*Acknowledgements:-*]{} A.H. and A.B thank the Alexander von Humboldt Stiftung (Germany) for partial financial support under the Research Group Linkage Programme scheme (2016).
--- abstract: | A common assumption of political economy is that profit rates across firms or sectors tend to uniformity, and often models are formulated in which this tendency is assumed to have been realised. But in reality this tendency is never realised and the distribution of firm profits is not degenerate but skewed to the right. The mode is less than the mean and super-profits are present. To understand the distribution of firm profits a general probabilistic argument is sketched that yields a candidate functional form. The overall properties of the derived distribution are qualitatively consistent with empirical measures, although there is more work to be done.\ \ Key words: firms, profit, economic, distribution, probabilistic author: - 'Ian Wright[^1] [^2]' bibliography: - 'ian.bib' title: A conjecture on the distribution of firm profit --- Introduction ============ Farjoun and Machover [@farjoun], dissatisfied with the concept of mechanical equilibrium applied to political economy and the concomitant assumption of a realised uniform profit rate, outlined a probabilistic approach to political economy, which replaced mechanical equilibrium with statistical equilibrium and a uniform profit rate with a distribution of profit rates. They reasoned that the proportion of industrial capital, out of the total capital invested in the economy, which finds itself in any given profit bracket would be approximated by a gamma distribution, by analogy with the distribution of kinetic energy in a gas at equilibrium. The gamma distribution is a right-skewed distribution. They examined UK industry data from 1972 and concluded that it was consistent with a gamma distribution. Wells [@wells01] examined the distributions of profit rates, defined in a variety of ways, for over 100,000 UK firms and found right-skewness to be prevalent, but did not investigate their functional form.
Wright [@wright04a] measured the distribution of firm profits in an agent-based model of a competitive economy, and found that the distribution was right-skewed, although not well characterised by a gamma distribution, even when capital-weighted. Analysis of the model suggested that the profit distribution may be explained by general probabilistic laws. The remainder of the paper outlines some theoretical assumptions and derives a candidate functional form for the distribution of firm profits. A probabilistic argument ======================== Under normal circumstances a firm expects that a worker adds a value to the product that is bounded from below by the wage. A firm’s markup on costs reflects this value expectation, which may or may not be validated in the market. Wages are normally paid in installments at intervals of between one week and one month, but the markup on costs is validated in the market at a frequency that depends on the rate at which a firm’s goods and services are purchased by buyers. The frequency of payments to a firm differs widely and depends on the complexity of the product and the details of payment schedules (for example, compare a firm that sells sweets to a firm that sells battleships). The frequency mismatch between wage payments and revenue payments can be mitigated in many different ways, not least by the arrangement of capital loans. But whatever the frequency of sale or the complexity of the product, a revenue payment to a firm partially reflects the value added by the firm’s workers during a period of time. Assume that the revenue from the sale of a firm’s product consists of a sum of market samples where each sample represents the value-added by a particular employee working for a small period of time, say an hour.
Obviously, there are multiple and particular reasons why an individual worker adds more or less value to the firm’s total product, most of which are difficult to measure, as partially reflected in the large variety of contested and negotiable compensation schemes. Although each worker normally adds value there is a great deal of local contingency. A worker may be a slacker or a workaholic, an easily replaceable administrator, or a unique, currently fashionable film star. Therefore, the precise value contribution of an individual worker to the product is highly complex and largely unknown, particularly when it is considered that the productive co-operation of many workers cannot be easily reduced to separate and orthogonal contributions, as is the case in highly creative industries with production processes that have yet to mature into separable, repeatable and well-defined tasks. This local contingency and indeterminacy is modelled by assuming that the value-added per worker-hour is a [*random variable*]{}. Consider that a worker $i$ adds a monetary value, $X_{i}$, to a firm’s product for every hour worked, where each $X_{i}$ is an independent and identically distributed (iid) random variable, with mean $\mu_{X}$ and variance $\sigma_{X}^{2}$. The added value is assumed to be globally iid to reflect the common determinants of the value-creating power of an hour of work, but also random to model local contingencies. Negative $X_{i}$ represents negative value-added, corresponding to cases in which the worker’s labour reduces the value of inputs, for example the production of unwanted goods, or a slower than average work pace, and so forth. Assume that the distribution of $X_{i}$ is such that the Central Limit Theorem (CLT) may be applied. Consider a single firm that sets in motion a total of $n$ worker-hours during a single year.
The firm’s total value-added, $S_{n}$, may therefore be approximated by a normal distribution $S_{n} = \sum_{i=1}^{n} X_{i} \approx N( n \mu_{X}, n \sigma_{X}^{2} )$. The CLT approximation will improve with the size of the firm, but even for small firms the number of iid draws is large given the stated assumptions. In reality the productivity of workers within firms is correlated. For example, employees of firms that employ state-of-the-art machinery, or are exceptionally well-organised, will all tend to add more value than employees of firms that employ out-of-date machinery or are badly organised. Although competitive processes tend to homogenise the value-added per worker, new innovations never cease, so that at any moment in time the employees of a particular firm will be more or less productive than the average. A more accurate representation of value-added is obtained if each $X_{i}$ is considered to be drawn from a distribution indexed by the firm that employs worker $i$, at the expense of a considerable increase in model complexity. However, the correlation of value-added within a large firm, which employs diverse skills and machinery to produce a variety of products, will be weak. Although a huge multinational is normally considered a single entity for the purpose of reporting profits, in reality it sets in motion a large sample of different kinds of labours utilising different kinds of machinery and tools. Hence, for large firms the assumption that $X_{i}$ is sampled from a single, economy-wide distribution is a reasonable approximation, for small firms less so. An advantage of modelling value-added per worker as a random variable is that it is possible that total value-added by a firm, $S_{n}$, is much higher or lower than the norm, but this event has low probability. The assumption of a single distribution that determines the value-added per worker is able to approximate the diverse productivities of individual firms.
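The CLT step invoked here is easy to illustrate with a toy Monte Carlo check (a sketch, not from the paper; the shifted-exponential hourly distribution and all numerical values below are arbitrary illustrative choices, picked only to be non-normal and to allow negative value-added):

```python
import random
import statistics

random.seed(1)

mu_X, sigma_X = 5.0, 12.0   # illustrative hourly mean/sd of value-added

def hourly_value_added():
    # A deliberately non-normal draw: shifted exponential, can go negative.
    return random.expovariate(1 / 12.0) - 7.0   # mean 5, sd 12

n = 500        # worker-hours set in motion by one firm
trials = 2000  # number of simulated firms

totals = [sum(hourly_value_added() for _ in range(n)) for _ in range(trials)]

# S_n should be approximately N(n mu_X, n sigma_X^2).
m = statistics.fmean(totals)
s = statistics.stdev(totals)
print(m / (n * mu_X), s / (sigma_X * n**0.5))  # both ratios close to 1
```

Despite the skewed hourly distribution, the simulated totals have mean and standard deviation close to $n\mu_X$ and $\sigma_X\sqrt{n}$, as the normal approximation predicts.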
Each worker costs a certain amount to employ during the year. This cost includes the wage, the cost of inputs used by the worker, the cost of wear and tear on any fixed capital, the cost of rent, local taxes and so forth, all of which may be differently reported due to local accountancy practices. Again, there is a great deal of contingency. Hence costs per worker-hour are also modelled as a random variable. Assume that a worker $i$ costs a monetary value, $Y_{i}$, to productively employ per hour worked, where each $Y_{i}$ is an iid random variable with mean $\mu_{Y}$ and variance $\sigma_{Y}^{2}$. This cost includes both the wage and capital costs per worker, and therefore effaces the distinction between variable and constant capital. Costs per worker-hour are also correlated at the firm level: the employees of different firms productively combine a greater or lesser amount of capital. A more accurate representation of costs would therefore consider the distribution of constant capital across firms conditional on local circumstances, such as firm size, but this extension is not pursued here. The assumption that cost per worker-hour is statistically uniform across firms is an approximation, which, as for the case of value-added, improves with firm size, under the assumption of a tendency toward homogenisation due to competitive pressures. Assume that the distribution of $Y_{i}$ is such that the CLT may be applied. Hence a firm that sets in motion $n$ worker-hours during a year has total costs that may be approximated by a normal distribution, $K_{n} = \sum_{i=1}^{n} Y_{i} \approx N( n \mu_{Y}, n \sigma_{Y}^{2} )$. This approximation also improves with the size of the firm. Different firms employ different numbers of workers and hence the number of hours worked for each firm during a year will vary.
Define the profit, $P_{n}$, of a firm that sets in motion $n$ hours of labour in a single year as the ratio of value-added to costs, $P_{n}=S_{n}/K_{n}$, and assume that $S_{n}$ and $K_{n}$ are independent. $P_{n}$ is the ratio of two normal variates. Its probability density function (pdf) may be derived by the transformation method (or alternatively see [@marsaglia65]) to give: $$\begin{aligned} \nonumber f_{P_{n}} (p \mid n) &=& \frac{\sqrt{n} \exp[ -\frac{1}{2} n (\mu_{X}^{2} / \sigma_{X}^{2} + \mu_{Y}^{2} / \sigma_{Y}^{2})] } {4 \pi (\sigma_{X}^{2} + p^{2} \sigma_{Y}^{2})^{3/2}} \\ & & \left( \frac{2}{\sqrt{n}} \sqrt{\lambda_{1}} + \sqrt{2 \pi} \exp[\frac{n}{2} \frac{\lambda_{2}^{2}}{\lambda_{1}}] \lambda_{2} \left( 1 + \Phi[\sqrt{\frac{n}{2}} \frac{\lambda_{2}}{\sqrt{\lambda_{1}}}] \right) \right) \label{eq:conditional}\end{aligned}$$ where $$\begin{aligned} \nonumber \lambda_{1} & = & \sigma_{X}^{2} \sigma_{Y}^{2} ( \sigma_{X}^{2} + p^{2} \sigma_{Y}^{2} ) \\ \nonumber \lambda_{2} & = & \mu_{Y} \sigma_{X}^{2} + p \mu_{X} \sigma_{Y}^{2} \\ \nonumber \Phi(x) & = & \frac{2}{\sqrt{\pi}} \int_{0}^{x} \exp(-t^{2})\, dt\end{aligned}$$ Equation (\[eq:conditional\]) is the pdf of the rate-of-profit of a firm conditional on $n$, the number of hours worked for the firm per year. Axtell [@axtell] analysed US Census Bureau data for US firms trading between 1988 and 1997 and found that the firm size distribution, where size is measured by the number of employees, followed a special case of a power-law known as Zipf’s law, and this relationship persisted from year to year despite the continual birth and demise of firms and other major economic changes. During this period the number of reported firms increased from 4.9 million to 5.5 million. Gaffeo et al. [@gaffeo03] found that the size distribution of firms in the G7 group over the period 1987-2000 also followed a power-law, but only in limited cases was the power-law actually Zipf. Fujiwara et al.
[@fujiwara03] found that the Zipf law characterised the size distribution of about 260,000 large firms from 45 European countries during the years 1992–2001. A Zipf law implies that a majority of small firms coexist with a decreasing number of disproportionately large firms. Firm sizes theoretically range from 1 (a degenerate case of a self-employed worker) to the whole available workforce, representing a highly unlikely monopolisation of the whole economy by a single firm. The empirical evidence implies that at any point in time the firm size distribution follows a power-law, and that this distribution is constant, despite the continual churning of firms in the economy (birth, death, shrinkage and growth). Firms hire and fire employees, and therefore the number of hours worked for a firm during a year depends on its particular historical growth pattern. To simplify, assume that the average number of employees per firm per year also follows a power-law. This approximation is reasonable if the growth trajectories of firms do not fluctuate too widely during the accounting period. Assume also that every employee works the same number of hours in a year, which is a reasonable simplification. The number of hours worked for a firm per year is therefore a constant multiple of its number of employees. Firms with more employees proportionately set in motion more hours of labour. A constant multiple of a power-law variate is also a power-law variate. Hence, the firm size distribution has the same power-law form whether firm size is measured by employees or by the total number of hours worked by employees. The unconditional rate-of-profit distribution can therefore be obtained by considering that the number of hours worked for a firm during a year is a random variable $N$ distributed according to a Pareto (power-law) distribution: $$\begin{aligned} \nonumber f_{N}(n) = \frac{\alpha \beta^{\alpha}}{n^{\alpha + 1}}\end{aligned}$$ where $\alpha$ is the shape and $\beta$ the location parameter.
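The claim that a constant multiple of a power-law variate is again a power-law variate with the same exponent follows directly from $P(cN>x)=P(N>x/c)=(c\beta/x)^{\alpha}$, and can be checked numerically. The sketch below is illustrative only (the parameter values are arbitrary, and the Hill estimator is a standard tail-exponent estimator, not something used in this paper):

```python
import math
import random

random.seed(2)
alpha, beta = 1.06, 1.0   # illustrative Zipf-like shape and location

# Inverse-CDF sampling: N = beta / U^(1/alpha) is Pareto(alpha, beta).
draws = [beta / random.random() ** (1 / alpha) for _ in range(200_000)]

def hill(xs, x_min):
    """Hill estimator of the Pareto tail exponent for samples above x_min."""
    tail = [x for x in xs if x >= x_min]
    return len(tail) / sum(math.log(x / x_min) for x in tail)

c = 40.0   # e.g. a fixed conversion from employees to worker-hours per year
h_raw = hill(draws, beta)
h_scaled = hill([c * x for x in draws], c * beta)
print(round(h_raw, 3), round(h_scaled, 3))  # same exponent before/after scaling
```

Rescaling every draw by the constant $c$ leaves the estimated exponent unchanged, since $\log(cx/c\beta)=\log(x/\beta)$.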
Assume that firm sizes range from $m_{1}$ hours, which represents a degenerate case of a self-employed worker who trades during the year, to $m_{2}$ hours, which represents a highly unlikely monopolisation of all social labour by a single huge firm ($m_{2} \gg m_{1}$). The truncated Pareto distribution $$\begin{aligned} \nonumber g_{N}(n) = f_{N}( n \mid m_{1} < N \leq m_{2} ) = \frac{f_{N}(n)}{F_{N}(m_{2}) - F_{N}(m_{1})} = \frac{n^{-(1 + \alpha)} \alpha m_{1}^{\alpha} m_{2}^{\alpha}} {m_{2}^{\alpha} - m_{1}^{\alpha}}\end{aligned}$$ where $F_{N}$ is the distribution function corresponding to $f_{N}$ (i.e., $f_{N}(n) = F_{N}'(n)$), is formed to ensure that all the probability mass lies between $m_{1}$ and $m_{2}$. Assume that $m_{2}$ is large so that the discrete firm size distribution can be approximated by the continuous distribution $g_{N}$. By the theorem of total probability the unconditional profit distribution $f_{P}(p)$ is given by: $$f_{P}(p) = \int_{m_{1}}^{m_{2}} f_{P}(p \mid n) g_{N}(n) dn \label{eq:unconditional}$$ Expression (\[eq:unconditional\]) defines the $g_{N}(n)$ parameter-mix of $f_{P}(p \mid N = n)$. The rate-of-profit variate is therefore composed of a parameter-mix of a ratio of independent normal variates, each conditional on a firm size $n$, measured in hours per year, distributed according to a power-law.
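Although the closed form of (\[eq:unconditional\]) is unwieldy, the parameter-mix is easy to sample from: draw a firm size $n$ from the truncated Pareto by inverse-CDF, then draw $S_{n}$ and $K_{n}$ from their conditional normals and form $p=S_{n}/K_{n}$. The sketch below uses arbitrary illustrative parameter values (not estimates from any data set) and simply checks that the resulting mixture is right-skewed, as the argument predicts:

```python
import random
import statistics

random.seed(3)

mu_X, s_X = 30.0, 60.0    # hourly value-added: mean, sd (illustrative)
mu_Y, s_Y = 25.0, 25.0    # hourly cost: mean, sd (illustrative)
alpha, m1, m2 = 1.1, 100.0, 1e7

def truncated_pareto():
    # Inverse CDF of the Pareto distribution truncated to [m1, m2].
    u = random.random()
    return (m1**-alpha - u * (m1**-alpha - m2**-alpha)) ** (-1 / alpha)

def profit_rate():
    n = truncated_pareto()
    S = random.gauss(n * mu_X, s_X * n**0.5)   # total value-added S_n
    K = random.gauss(n * mu_Y, s_Y * n**0.5)   # total costs K_n
    return S / K

sample = [profit_rate() for _ in range(100_000)]
mean = statistics.fmean(sample)
median = statistics.median(sample)
sd = statistics.pstdev(sample)
skew = statistics.fmean([((p - mean) / sd) ** 3 for p in sample])
print(round(mean, 3), round(median, 3), round(skew, 3))
```

With these values the simulated mixture has its mean above its median and a positive sample skewness: the mode sits below the mean and a long right tail of super-profits appears, in qualitative agreement with the text.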
Writing (\[eq:unconditional\]) in full yields the pdf of firm profit: $$\begin{aligned} \nonumber f_{P} (p) &=& \int_{m_{1}}^{m_{2}} \frac{\exp[ -\frac{1}{2} n (\mu_{X}^{2} / \sigma_{X}^{2} + \mu_{Y}^{2} / \sigma_{Y}^{2})] } {4 \pi (\sigma_{X}^{2} + p^{2} \sigma_{Y}^{2})^{3/2}} \\ \nonumber& & \left( \frac{2}{\sqrt{n}} \sqrt{\lambda_{1}} + \sqrt{2 \pi} \exp[\frac{n}{2} \frac{\lambda_{2}^{2}}{\lambda_{1}}] \lambda_{2} \left( 1 + \Phi[\sqrt{\frac{n}{2}} \frac{\lambda_{2}}{\sqrt{\lambda_{1}}}] \right) \right) \\ & & \frac{n^{-(\frac{1}{2} + \alpha)} \alpha m_{1}^{\alpha} m_{2}^{\alpha}} {m_{2}^{\alpha} - m_{1}^{\alpha}} \; dn \label{eq:unconditionalFull}\end{aligned}$$ This distribution has 7 parameters: (i) $\mu_{X}$, the mean value-added per worker-hour, (ii) $\sigma_{X}^{2}$, the variance of value-added per worker-hour, (iii) $\mu_{Y}$, the mean cost per worker-hour, (iv) $\sigma_{Y}^{2}$, the variance of cost per worker-hour, (v) $\alpha$, the Pareto exponent of the firm size power-law distribution, where size is measured in worker-hours per year, (vi) $m_{1}$, the number of hours worked by a single worker in a year, and (vii) $m_{2}$, the total number of hours worked in the whole economy during a year. Both percentage profit, $R = 100 P$, and the growth rate of capital invested, $G = 1 + P$, are simple linear transforms of this distribution. The parameters can be estimated from economic data and the resulting distribution compared to empirical rate-of-profit measures, under various simplifying assumptions about how profit is defined (e.g. see Wells [@wells01]). A good fit would imply that the assumptions made in the theoretical derivation are empirically sound. Alternatively, best-fit parameters may be directly estimated from empirical data, for example by the method of maximum likelihood estimation, to determine how well the theoretical distribution can fit a set of empirical distributions. 
A good fit compared to other candidate functional forms would imply that a parameter-mix of a ratio of normal variates with parameters conditional on a power-law captures some essential structure of the determinants of firm profit, but it would not validate the theoretical derivation. Equation (\[eq:unconditionalFull\]) is difficult to analyse, so numerical solutions are employed. Figure 1 graphs some representative numerical samples of the distribution. The samples range from sharply peaked symmetrical curves, in which most of the probability mass is concentrated about the mode, to less peaked distributions that are skewed to the right. Wells’ [@wells01] variety of profit measures yield distributions that share these characteristics, and therefore there is qualitative agreement between the theory and the empirical data. But clearly a full quantitative analysis is required. Figure 2 graphs a sample of $f_{P}(p)$ in log-log scale. The approximate straight line in the tail is the signature of a power-law decay of the probability of super-profits. Super-profit outliers are found in the empirical data, although it has not been investigated whether they decay as an approximate power-law. Further analysis of the pdf $f_{P}(p)$ is required. But the qualitative form of the distribution is sufficiently encouraging to consider it a candidate for fitting to empirical profit measures and for comparison with other candidate functional forms. To go beyond models that assume a realised uniform profit rate it is necessary to investigate empirical data on firm profit and propose theoretical explanations of its distribution. This paper is a tentative step in that direction. Conclusion ========== A general probabilistic argument suggests that the empirical rate-of-profit distribution will be consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power law.
[^1]: iKuni Inc., 3400 Hillview Avenue, Building 5, Palo Alto, CA 94304, USA. Email: wrighti@acm.org, URL: ianusa.home.mindspring.com. Fax: +1 650 320 9827. Phone: +1 650 739 5355. [^2]: I am grateful to Julian Wells for explaining his work on firm profits, and an anonymous reviewer for helpful criticisms.
--- abstract: 'In spatial statistics, a common method for prediction over a Gaussian random field (GRF) is kriging. Unfortunately, kriging requires solving linear systems which, for data of length $n$, amounts to a computational cost of $O(n^3)$. Thus, a new approach to estimation and prediction that uses a combination of concepts from fixed-rank kriging, restricted maximum likelihood estimation, and sparse matrix methodology will be presented in an effort to gain efficiency. The gains in run time versus the loss in precision will be contrasted and the connection between the smoothness of the field, the dimension reduction and the choice of basis functions in the model will be explored.' author: - Karl Pazdernik bibliography: - 'References.bib' nocite: '\nocite{}' title: Efficient Kriging for Large Spatial Fields --- Keywords: kriging, fixed rank kriging, Gaussian random field, sparse matrix, spatial prediction, maximum likelihood estimation, bandwidth, best linear unbiased predictor Introduction ============ Kriging is one of the most common methods of spatial prediction in areas such as mining [@richmond2003], hydrogeology [@chilesdelfiner1999], natural sciences [@goovaerts1997], environmental sciences [@bayraktarturalioglu2005], remote sensing [@steinvandermeergorte2002], and black box modeling in computer experiments [@sackswelchmitchellwynn1989]. Kriging is essentially an optimal interpolation method that utilizes spatial dependence to obtain predictions within the spatial domain where no response has been observed. Developed by the French mathematician Georges Matheron (1962), kriging was named after D. G. Krige, a South African mining engineer, for his work on his Master’s thesis [@cressie1990]. The concept is to consider a spatial domain separated into two distinct sets: observed locations ($\mathbf{s}$) and unobserved locations ($\mathbf{s_0}$).
The best linear unbiased predictor (BLUP), or kriging estimate, for the random variable ($Z$) at the unobserved locations can then be found based on the first and second moments of $Z$. An additional simplification occurs when $Z$ is a Gaussian process, at which point the predictor is given by the conditional expectation $\widehat{Z(\mathbf{s_0})} = E(Z(\mathbf{s_0}) \mid Z(\mathbf{s}))$ (cite??). To perform spatial prediction, or kriging, we must first define the distribution of the spatial field and apply two simplifications for parametric models of dependence. Due to its versatility, the Gaussian random field model is a common choice in describing spatial data, particularly geospatial data, and will be used here. Also, as noted, an advantage to assuming a Gaussian process is that estimation and prediction are greatly simplified. The two simplifications for parametric models of spatial dependence that we will assume are intrinsic stationarity and isotropy. Intrinsic stationarity is defined as a spatial field having a constant mean and the variance of the difference between any two points being equal when the direction and distance are the same. Isotropy occurs when displacement enters the covariance through distance alone, meaning direction has no effect. These two simplifications combined result in a spatial field that has second order stationarity, meaning that the field has a constant mean and that the covariance between any two points is dependent on displacement alone. Unfortunately, these two assumptions are often unjustifiable when working with real data, as will be illustrated in the application section of this paper. However, despite the lack of second order stationarity in the original data, tactics such as detrending and median polishing [@hoaglinmostellertukey1983] can be used to achieve a second order stationary process, and so this will be assumed as well.
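As a minimal illustration of such preprocessing, the following sketch removes a first-order spatial trend by least squares from synthetic data (the data, trend coefficients, and variable names are illustrative only; median polishing, the robust alternative cited above, is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.random((200, 2))                       # synthetic observation locations
y = 2.0 + 1.5 * s[:, 0] - 0.8 * s[:, 1] + 0.1 * rng.standard_normal(200)

# Fit and remove a first-order spatial trend: the residual field has
# constant (zero) mean, as assumed by the stationary model.
X = np.column_stack([np.ones(len(s)), s])      # intercept + coordinates
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
```

The detrended residuals, rather than the raw responses, then serve as the constant-mean input to the stationary model.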
Formal Setup of Problem ----------------------- Given a Gaussian random field with second order stationarity, the model can now be defined. Let $y(\mathbf{s_1}), ..., y(\mathbf{s_n})$ be observations from a Gaussian random field and, without loss of generality, assume they have mean zero. Under this definition, let $\mathbf{s} = \{\mathbf{s_1}, ..., \mathbf{s_n}\}$ be the locations of the observations in the spatial domain, $D$, and $f(.)$ be some real-valued spatial process. The model is then the following: $$\begin{aligned} \label{model1} y(\mathbf{s}) &=& f(\mathbf{s}) + \epsilon(\mathbf{s}) \\ f(\mathbf{s}) &\sim& N(\mathbf{0},\Sigma) \nonumber \\ \epsilon(\mathbf{s}) &\sim& N(\mathbf{0},\sigma^2I) \nonumber \\ f(\mathbf{s}), \epsilon(\mathbf{s}) && \mbox{ independent } \nonumber\end{aligned}$$ Given (\[model1\]), the parameters of interest are $\Sigma$ and $\sigma$. Estimation of these parameters is commonly accomplished by likelihood-based methods; in this paper we focus on maximum likelihood estimation. Unfortunately, due to the complexity of spatial covariance structures ($\Sigma$), likelihood-based estimation of the parameter values cannot be accomplished analytically and must be done using an iterative process. To perform this estimation we will need to evaluate the likelihood (or log-likelihood), which, given the model in (\[model1\]), is defined in equation (\[ll1\]). $$\label{ll1} l(\theta) = -\frac{1}{2}\mathbf{y}^T (\Sigma+\sigma^2I)^{-1}\mathbf{y} - \frac{1}{2}\log(|\Sigma+\sigma^2I|) - \frac{n}{2}\log(2\pi) \\$$ The parameter values obtained by maximizing (\[ll1\]) are used in the kriging equation. Recall that for all unobserved locations, $\mathbf{s_0}$, the kriging estimate is then $\widehat{f(\mathbf{s_0})} = E(f(\mathbf{s_0}) \mid f(\mathbf{s}))$.
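The log-likelihood (\[ll1\]) can be evaluated numerically with a single Cholesky factorization, which supplies both the quadratic form and the log-determinant; the sketch below uses an illustrative exponential covariance and synthetic data (all names are ours, not part of the paper's specification):

```python
import numpy as np

def gaussian_loglik(y, Sigma, sigma2):
    """Evaluate l(theta) from (ll1) for y ~ N(0, Sigma + sigma2*I)."""
    n = len(y)
    C = Sigma + sigma2 * np.eye(n)
    L = np.linalg.cholesky(C)                    # the O(n^3) step
    z = np.linalg.solve(L, y)                    # L z = y, so z'z = y' C^{-1} y
    logdet = 2.0 * np.sum(np.log(np.diag(L)))    # log|C| from the Cholesky factor
    return -0.5 * z @ z - 0.5 * logdet - 0.5 * n * np.log(2.0 * np.pi)

# Illustrative exponential covariance on random 2-D locations
rng = np.random.default_rng(0)
s = rng.random((100, 2))
d = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=-1)
Sigma = np.exp(-d / 0.2)
y = np.linalg.cholesky(Sigma + 0.1 * np.eye(100)) @ rng.standard_normal(100)
ll = gaussian_loglik(y, Sigma, 0.1)
```

An iterative optimizer would call this function at every candidate parameter value, which is why the cubic cost of the factorization dominates estimation.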
With the Gaussian model defined as above, this conditional expectation simplifies to the following: $$\begin{aligned} \label{regkrige} \widehat{f(\mathbf{s_0})} &=& E(f(\mathbf{s_0}) \mid f(\mathbf{s})) \nonumber \\ &=& Cov(f(\mathbf{s_0}),f(\mathbf{s})) (\Sigma+\sigma^2I)^{-1} y(\mathbf{s})\end{aligned}$$ Neither (\[ll1\]) nor (\[regkrige\]) is a problem for moderate $n$; for large $n$, however, estimation and prediction become computationally infeasible because both require solving an $n \times n$ linear system ($(\Sigma+\sigma^2I)^{-1}$), which is $O(n^3)$. Calculating the determinant in (\[ll1\]) is also an issue as it requires a decomposition which is also $O(n^3)$. Specifically in terms of estimation, since an analytic solution is not possible, an iterative method will result in a multiplied computational burden. U.S. Weather Example -------------------- The National Climate Data Center (NCDC) collects data from 5,030 weather stations across the U.S. on monthly maximum temperature, minimum temperature, precipitation, etc., as seen in Figure \[USstations\]. Under the normal kriging method, prediction using the entire U.S. observational record would require a Cholesky decomposition on a matrix of dimension 5,030, which can be computationally prohibitive, both in terms of CPU time and memory. Additionally, obtaining kriging predictions generally requires the use of an iterative process to estimate the parameter values of the model, meaning that this expensive calculation will need to be repeated. The normal kriging approach clearly becomes impractical for large data sets such as this. Advancements in technology will continue to provide machinery capable of handling larger data sets; however, as technology improves, data sets continue to grow, and at a faster rate. Thus, there will always be a need for an efficient method of spatial prediction.
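To make the bottleneck concrete, here is a direct implementation of the predictor (\[regkrige\]) on synthetic data; the single $n \times n$ solve is the $O(n^3)$ step that becomes prohibitive at the scale of the NCDC record (the exponential covariance and all names are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_cov(A, B, theta=0.2):
    """Illustrative exponential covariance between two sets of 2-D locations."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-d / theta)

n = 150
s = rng.random((n, 2))                          # observed locations
s0 = rng.random((5, 2))                         # unobserved (prediction) locations
Sigma = exp_cov(s, s)
sigma2 = 0.1
y = np.linalg.cholesky(Sigma + sigma2 * np.eye(n)) @ rng.standard_normal(n)

# Kriging predictor (regkrige): Cov(f(s0), f(s)) (Sigma + sigma2 I)^{-1} y
f_hat = exp_cov(s0, s) @ np.linalg.solve(Sigma + sigma2 * np.eye(n), y)  # O(n^3)
```

In the noiseless limit $\sigma^2 \to 0$, this predictor interpolates the observations exactly.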
Background and Related Work --------------------------- There are currently several strategies for reducing the computational cost of estimation and prediction for large spatial fields [@banerjeecarlingelfand2004]. A first approach is to subsample from the original set of sampled locations. Although ignoring some of the available information is certainly not ideal, it may be the only means of gaining knowledge of the process, and for particularly dense data the inferential loss may be minimal. Unfortunately, there are concerns with the locations of the removed observations, as they may not affect the large-scale and small-scale structures of the spatial field equally, resulting in biased estimates. Also, increasing the sample size while decreasing the sub-sample size will ultimately lead to stability problems caused by an increasingly singular covariance matrix. A second strategy is to work in the spectral domain. The idea is to transform to the space of frequencies, develop a periodogram, and then use a Whittle likelihood to evaluate the transformed data. The Whittle likelihood is useful in that it does not require a matrix inversion; however, a discretization is used to implement a fast Fourier transform. Other issues are that the development of a periodogram is somewhat arbitrary and that the Whittle likelihood may be poor in the tails. A third approach is to use a Gaussian Markov random field. Gaussian Markov random fields assume that data exist on a lattice and that the spatial dependencies between locations can be reduced to a “neighborhood” structure where only neighbors are related. Using this method, the inverse of the covariance matrix is directly available, which reduces the computational burden to sampling a large number of full conditional distributions.
Concerns are that the association between points is not modeled directly, which prevents certain correlation structures, and that the relationship between the covariance matrix and its inverse is quite complex and highly non-linear. Another option is dimension reduction, in which a finite approximation is obtained using a linear combination of functions coupled with a smaller set of locations. A major advantage to this approach is that the size of the smaller set can be fixed at a particular value, so computation is efficient regardless of the size of the original data set. There are concerns regarding the particular locations of the smaller set of locations and the number of locations necessary. The focus of this paper is to utilize one particular form of dimension reduction called “fixed rank kriging” [@cressiejohannesson2008], which uses an approximation of the original dataset through basis functions. Fixed rank kriging begins with the same model and assumptions as in (\[model1\]). The computational concern with likelihood-based estimation and kriging, as mentioned earlier, is the repeated solving of $n \times n$ linear systems of equations for the covariance of $y(\mathbf{s})$. To eliminate this computational burden, this method uses a reduced set of independent locations ($m < n$) coupled with a linear combination of basis functions to approximate the spatial process on the original set of locations. The basis functions are used to interpolate from the smaller spatial process to the original spatial process. Let $a(.)$ define the spatial process on these new locations, $\mathbf{u} = \{\mathbf{u}_1,...,\mathbf{u}_m\}$, also known as knots. Let $S_k(.)$ be the linear combination of basis functions corresponding to the $k^{th}$ knot.
The spatial process is then approximated by $f(\mathbf{s}) \approx \sum_{k=1}^{m}S_k(\mathbf{s})a(\mathbf{u}_k)$ and the model becomes the following: $$\begin{aligned} \label{model2} y(\mathbf{s}) &=& \sum_{k=1}^{m}S_k(\mathbf{s})a(\mathbf{u}_k) + \epsilon(\mathbf{s}) \\ a(\mathbf{u}) &\sim& N(\mathbf{0},K) \nonumber \\ \epsilon(\mathbf{s}) &\sim& N(\mathbf{0},\sigma^2I) \nonumber \\ a(\mathbf{u}), \epsilon(\mathbf{s}) && \mbox{independent} \nonumber \\ y(\mathbf{s}) &\sim& N(\mathbf{0},SKS^T + \sigma^2I) \nonumber\end{aligned}$$ Given this model, the log-likelihood required for estimation is the following. $$\label{ll2} l(\theta) = -\frac{1}{2}\mathbf{y}^T (SKS^T+\sigma^2I)^{-1}\mathbf{y} - \frac{1}{2}\log(|SKS^T+\sigma^2I|) - \frac{n}{2}\log(2\pi) \\$$ Also, by defining a new set of basis functions between the unobserved locations and the knots as $A_k(\mathbf{s_0})$ for $1 \le k \le m$, the spatial predictions can then be expressed in the following way. $$\begin{aligned} \label{effkrige} \widehat{f(\mathbf{s_0})} &=& Cov(\sum_{k=1}^{m}A_k(\mathbf{s_0})a(\mathbf{u}_k),\sum_{k=1}^{m}S_k(\mathbf{s})a(\mathbf{u}_k)) (Cov(y(\mathbf{s}),y(\mathbf{s})))^{-1} y(\mathbf{s}) \nonumber \\ &=& AKS^T (SKS^T + \sigma^2I)^{-1} \mathbf{y}\end{aligned}$$ Cressie and Johannesson [@cressiejohannesson2008] then used the Sherman-Morrison-Woodbury formula [@hendersonsearle1981] to utilize the dimension reduction of fixed rank kriging and obtained the equality given in (\[siginv\]). Thus, the necessary matrix inversion is reduced from $n \times n$ to $m \times m$. The spatial covariance parameters were then estimated using binned method-of-moments (BMoM).
$$\label{siginv} (SKS^T + \sigma^2I)^{-1} = (\sigma^2I)^{-1} - (\sigma^2I)^{-1}S[K^{-1} + S^T(\sigma^2I)^{-1}S]^{-1}S^T(\sigma^2I)^{-1}$$ Unfortunately, any likelihood-based estimation (\[ll2\]) still requires calculating the determinant of the covariance of the original data, which, regardless of its representation, remains $n \times n$. Katzfuss and Cressie [@katzfusscressie2009] suggested using an E-M algorithm which avoids the need to directly evaluate the likelihood and, subsequently, calculate the determinant. This form of estimation was found to be superior to BMoM estimation as it lowered the empirical mean square error (EMSE) of the parameter estimates. In particular, the EMSE for $\sigma^2$ was reduced by over 30% when compared to BMoM. The E-M approach considers $\mathbf{y}$ as the known data and chooses $\mathbf{a}$ and $\mathbf{\epsilon}$ to be the missing data. The Expectation step then requires finding the conditional expectation of the log-likelihood given the observed data. Let $\theta = (K,\sigma^2)$ and let $\Sigma^{(t)} = SK^{(t)}S^T + \sigma^{2(t)}I$. Assuming that $\mathbf{a}$ and $\mathbf{\epsilon}$ are independent normal variables ($\mathbf{a} \sim N(0,K)$ and $\mathbf{\epsilon} \sim N(0,\sigma^2I)$), then the conditional expectation is given by (\[E\]).
$$\begin{aligned} \label{E} Q(\mathbf{\theta}|\mathbf{\theta^{(t)}}) &=& -\frac{1}{2} \{\log(|K|) + {\mbox{tr}}[K^{-1} E_{\mathbf{\theta^{(t)}}}(\mathbf{a}\mathbf{a^T}|\mathbf{y})] + n\log(\sigma^2) + \frac{1}{\sigma^2}{\mbox{tr}}[E_{\mathbf{\theta^{(t)}}}(\mathbf{\epsilon}\mathbf{\epsilon^T}|\mathbf{y})] \} \nonumber \\ &=& -\frac{1}{2} \{\log(|K|) + {\mbox{tr}}[K^{-1} (K^{(t)} - K^{(t)}S^T{\Sigma^{(t)}}^{-1}SK^{(t)} \nonumber \\ && + (K^{(t)}S^T{\Sigma^{(t)}}^{-1}\mathbf{y})(K^{(t)}S^T{\Sigma^{(t)}}^{-1}\mathbf{y})^T)] + n\log(\sigma^2) \nonumber \\ && + \frac{1}{\sigma^2}{\mbox{tr}}[\sigma^{2(t)} - \sigma^{2(t)}{\Sigma^{(t)}}^{-1}\sigma^{2(t)} + (\sigma^{2(t)}{\Sigma^{(t)}}^{-1}\mathbf{y})(\sigma^{2(t)}{\Sigma^{(t)}}^{-1}\mathbf{y})^T] \}\end{aligned}$$ The Maximization step then takes partial derivatives of $Q(\mathbf{\theta}|\mathbf{\theta^{(t)}})$ to arrive at the iterative updating scheme given by (\[M\]). $$\begin{aligned} \label{M} K^{(t+1)} &=& K^{(t)} - K^{(t)}S^T{\Sigma^{(t)}}^{-1}SK^{(t)} + (K^{(t)}S^T{\Sigma^{(t)}}^{-1}\mathbf{y})(K^{(t)}S^T{\Sigma^{(t)}}^{-1}\mathbf{y})^T \nonumber \\ \sigma^{2(t+1)} &=& \sigma^{2(t)} + \frac{(\sigma^{2(t)})^2}{n}{\mbox{tr}}[{\Sigma^{(t)}}^{-1}(\mathbf{y}\mathbf{y^T}{\Sigma^{(t)}}^{-1} - I)]\end{aligned}$$ We propose alterations to the predetermined specifications of this method that result in fewer iterations necessary to reach convergence and can reduce mean square prediction error. These modifications were discovered by comparing the E-M approach to an alternative method of obtaining maximum likelihood estimates. In this paper, we will combine the approximations used in fixed rank kriging [@cressiejohannesson2008] with an alternative form of the Sherman-Morrison-Woodbury formula [@higham2002] and a version of restricted maximum likelihood estimation (REML) to obtain efficient and reliable predictions for large spatial fields that are comparable to the E-M approach.
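The updating scheme (\[M\]) is straightforward to express with dense linear algebra; the sketch below implements one iteration (toy dimensions and data, dense inverses for clarity only). Since (\[M\]) is an exact E-M scheme, the log-likelihood is non-decreasing across iterations, which makes a useful implementation check:

```python
import numpy as np

def em_update(K, sigma2, S, y):
    """One iteration of the updating scheme (M)."""
    n = len(y)
    Sig = S @ K @ S.T + sigma2 * np.eye(n)         # Sigma^(t)
    Sig_inv = np.linalg.inv(Sig)                   # dense for clarity only
    v = K @ S.T @ (Sig_inv @ y)                    # K S^T Sigma^{-1} y
    K_new = K - K @ S.T @ Sig_inv @ S @ K + np.outer(v, v)
    sigma2_new = sigma2 + (sigma2**2 / n) * np.trace(
        Sig_inv @ (np.outer(y, y) @ Sig_inv - np.eye(n)))
    return K_new, sigma2_new

rng = np.random.default_rng(2)
n, m = 40, 6
S = rng.random((n, m))
y = rng.standard_normal(n)
K, sigma2 = np.eye(m), 1.0
for _ in range(10):
    K, sigma2 = em_update(K, sigma2, S, y)
```

Each iteration as written still inverts an $n \times n$ matrix, which is exactly the cost the reductions in the next sections are designed to avoid.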
Specifically, the Woodbury matrix identity provides an algebraic equality that can be used to reduce the linear system solved during prediction, and REML, achieved through a QR-decomposition of the matrix of basis functions, $S$, can be used to reduce the linear system solved during estimation. Section 2 illustrates how this decomposition of the basis matrix can be used to reduce the linear system solved in maximum likelihood estimation, outlines how the Woodbury matrix identity can be used to reduce the linear system solved during prediction, and defines the specific details of implementation used in this application. Section 3 presents details of a simulation study used to investigate this method and make comparisons to the E-M approach discussed by Katzfuss and Cressie [@katzfusscressie2009], as well as interesting relationships between the accuracy in prediction and the choices of covariance matrix for the knots ($K$) and the basis functions ($S$). Section 4 outlines how the efficiency of the E-M approach can be greatly improved upon by fine-tuning the form of the $K$ and $S$ matrices. Section 5 demonstrates a specific application of this improved method to the temperature data set introduced in section 1.2. Section 6 contains discussion and conclusions and is followed by the Appendix. Methodology =========== In this section, alternative solutions to the computational restrictions imposed by maximum likelihood estimation and kriging are presented. The data are concentrated by a linear projection of the basis functions to reduce the computational cost of estimation, and a form of the Sherman-Morrison-Woodbury formula is used as an equivalent to the conditional expectation used in kriging. Specifications necessary to perform the fixed rank kriging method are also introduced.
Maximum Likelihood Estimation ----------------------------- As illustrated in equation (\[ll2\]), direct evaluation of the log-likelihood for a Gaussian random field model when the covariance of the spatial process is approximated using fixed rank kriging ($\Sigma \approx SKS^T+\sigma^2I$) still requires computing the determinant of an $n \times n$ matrix. Now consider the QR-decomposition of $S$. Using this equality, the original dataset can be transformed into a dataset of length $m$ (which will be referred to as $\mathbf{y}^\ast$) located at the knots. $$\begin{aligned} \label{ystar} S &=& QR \\ Q^T \mathbf{y} &=& Q^T S \mathbf{a} + Q^T \mathbf{\epsilon} \nonumber \\ \mathbf{y}^\ast &=& R \mathbf{a} + \mathbf{\epsilon}^\ast \nonumber \\ \mathbf{y}^\ast &\sim& MVN(\mathbf{0}, RKR^T + \sigma^2I) \nonumber\end{aligned}$$ Performing maximum likelihood estimation on a transformed dataset is a version of REML, and this transformation produces a smaller corresponding covariance matrix, $RKR^T + \sigma^2I$, which is $m \times m$. The form of the new log-likelihood to be maximized is provided in equation (\[llnew\]). $$\label{llnew} l(\theta) = -\frac{1}{2}(\mathbf{y}^\ast)^T (RKR^T+\sigma^2I)^{-1}\mathbf{y}^\ast - \frac{1}{2}\log(|RKR^T+\sigma^2I|) - \frac{n}{2}\log(2\pi) \\$$ To calculate the determinant of the covariance of $\mathbf{y}^\ast$, we used a Cholesky decomposition due to its versatility in both evaluating determinants and solving linear systems. In particular, let $L$ refer to the lower triangular matrix of a Cholesky decomposition ($RKR^T+\sigma^2I = LL^T$). Then the determinant can be calculated by (\[detnew\]) and the inverse can be obtained by (\[invnew\]).
$$\begin{aligned} \label{detnew} |RKR^T+\sigma^2I| &=& |L||L^T| \nonumber \\ &=& (\prod_{k=1}^{m}{L_{kk}})^2\end{aligned}$$ $$\begin{aligned} \label{invnew} (\mathbf{y}^\ast)^T (RKR^T+\sigma^2I)^{-1}\mathbf{y}^\ast &=& (\mathbf{y}^\ast)^T (LL^T)^{-1}\mathbf{y}^\ast \nonumber \\ &=& [L^{-1}\mathbf{y}^\ast]^T[L^{-1}\mathbf{y}^\ast]\end{aligned}$$ This eliminates the need to solve an additional $n \times n$ linear system to obtain the inverse of the covariance matrix. Instead, a linear system involving a coefficient matrix that is lower triangular is solved. Thus, by calculating the Cholesky decomposition of the reduced covariance matrix, efficiency has been increased from $O(n^3)$ to $O(m^3)$. In the next section, we explore how an alternative form of the Sherman-Morrison-Woodbury formula can be used to reduce the cost of obtaining kriging predictions once the parameter values of the model have been estimated. Prediction ---------- Recall from expression (\[effkrige\]) that even after reasonable estimates for the parameters in our model have been computed, a linear system needs to be solved to obtain a predicted field through kriging. The Sherman-Morrison-Woodbury formula provides an algebraic equality to the normal kriging equations which can reduce the linear system to be solved, as illustrated by Cressie and Johannesson in equation (\[siginv\]). An alternative form of this theorem (given in the Appendix) [@higham2002] yields further computational gains by reducing the amount of matrix multiplication necessary. For any two $n \times n$ matrices, the computational cost of matrix multiplication is $O(n^3)$. Hence, although the equation for the predicted values need only be computed once (as opposed to the repeated evaluations of the log-likelihood during estimation), significant computational gains may still be achieved by the equality given in (\[ridgereg\]).
$$\begin{aligned} \label{ridgereg} \frac{1}{\sigma}K^{\frac{1}{2}}S^T (\frac{1}{\sigma^2}SKS^T + I)^{-1} &=& (\frac{1}{\sigma^2}K^{\frac{1}{2}}S^TSK^{\frac{1}{2}}+I)^{-1} K^{\frac{1}{2}}S^T\frac{1}{\sigma} \nonumber \\ &=& \frac{1}{\sigma}K^{-\frac{1}{2}}(\frac{1}{\sigma^2}S^TS+K^{-1})^{-1}S^T \nonumber \\ \Rightarrow \frac{1}{\sigma^2}AKS^T (\frac{1}{\sigma^2}SKS^T + I)^{-1}\mathbf{y} &=& \frac{1}{\sigma^2}A(\frac{1}{\sigma^2}S^TS+K^{-1})^{-1}S^T\mathbf{y}\end{aligned}$$ This particular form of the Sherman-Morrison-Woodbury formula also maintains the gain in computational efficiency of inverting two $m \times m$ matrices ($K^{-1}$ and $(\frac{1}{\sigma^2}S^TS+K^{-1})^{-1}$) instead of one $n \times n$ matrix ($(\frac{1}{\sigma^2}SKS^T + I)^{-1}$). With this approach, contributions have been made to reduce the CPU time required to perform spatial prediction. However, the form of the basis functions and the covariance structure also have a significant effect on computational efficiency. We next look at a specific implementation of the reduced rank model that addresses these issues. Specific Implementation ----------------------- In fixed rank kriging, the purpose of the basis functions is to provide interpolating functions to describe the area in between the knots. This compensates for the loss of information on the spatial dependence when the spatial process is reduced from $n$ to $m$ locations. The specific choice of basis functions is subjective since it is not estimated from the data; however, it is common to use some type of multi-resolution smoothing function [@cressiejohannesson2008]. Following Cressie and Johannesson, a generic basis function, used in this paper, is the multi-resolution local bisquare function. From (\[defineB\]), the $l^{th}$ resolution basis functions used in estimation and prediction are defined as $S_{k(l)}(\mathbf{s}) = B_{k(l)}(\mathbf{s})$ and $A_{k(l)}(\mathbf{s_0}) = B_{k(l)}(\mathbf{s_0})$.
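The equality (\[ridgereg\]) is easy to verify numerically: the two predictors below agree to machine precision, while the reduced form solves an $m \times m$ rather than an $n \times n$ system (toy matrices; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, n0 = 400, 15, 50
S = rng.random((n, m))                   # basis matrix at observed locations
A = rng.random((n0, m))                  # basis matrix at prediction locations
B = np.eye(m) + 0.1 * rng.random((m, m))
K = B @ B.T                              # an SPD toy knot covariance
sigma2 = 0.5
y = rng.standard_normal(n)

# Naive predictor: (1/sigma^2) A K S^T (S K S^T / sigma^2 + I)^{-1} y  -- n x n solve
big = S @ K @ S.T / sigma2 + np.eye(n)
naive = (A @ K @ S.T / sigma2) @ np.linalg.solve(big, y)

# Reduced predictor from (ridgereg): (1/sigma^2) A (S^T S / sigma^2 + K^{-1})^{-1} S^T y
small = S.T @ S / sigma2 + np.linalg.inv(K)
reduced = A @ np.linalg.solve(small, S.T @ y) / sigma2
```

Checking such algebraic identities numerically on small random matrices is a cheap safeguard before relying on them at scale.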
Assuming a second order stationary field, all distances are Euclidean and $r_l$ is a standardizing constant known as the “bandwidth” of the basis function, where $b$ is some constant. $$\begin{aligned} \label{defineB} B_{k(l)}(\mathbf{x}) &=& \Psi \left( \frac{\|\mathbf{x}-\mathbf{u_{k(l)}}\|}{r_l} \right) \quad \forall \mathbf{x} \in D \\ \Psi(d) &=& \left\{ \begin{array}{l l} \left(1 - d^2 \right)^2 & \quad 0 \le d \le 1 \\ 0 & \quad d > 1 \\ \end{array} \right. \label{bisquare}\end{aligned}$$ where $$\label{bandwidth} r_l = b*min\{\|\mathbf{u_{i(l)}}-\mathbf{u_{j(l)}}\| : j \neq i, 1 \le i,j \le m\}$$ The local bisquare function (\[bisquare\]) sets any value equal to zero where the distance between the location and the knot is greater than the bandwidth. This means that only observations close to knots will influence the estimate at that specific knot. An additional consequence of this choice of basis function is that the matrices $S$ and $A$ are sparse as long as $r_l$ for each resolution is not too large. This not only provides memory savings but also allows for fast matrix manipulations, such as an efficient approach to computing a QR-decomposition. Before efficient kriging can be implemented, the covariance of the spatial process and the spatial covariance for basis coefficients must also be defined. For the purposes of this particular study, we will assume that the original spatial covariance, $\Sigma$, can be modeled with a Matérn family covariance matrix, which is a common choice for spatial data (cite??). The Matérn covariance is then defined by the following equation, where $K_{\nu}$ is the modified Bessel function of the second kind of order $\nu$, $\nu$ is the smoothness parameter, $\rho$ is the sill parameter, and $\theta$ is the range parameter, for $\{\nu, \rho, \theta \in \Re^+\}$.
$$\begin{aligned} Cov(f(\mathbf{s_i}),f(\mathbf{s_j})) = \frac{\rho}{2^{\nu - 1} \Gamma(\nu)} \left(\frac{\|\mathbf{s_i} - \mathbf{s_j}\|}{\theta}\right)^{\nu} K_{\nu}\left(\frac{\|\mathbf{s_i} - \mathbf{s_j}\|}{\theta}\right) \end{aligned}$$ The form of $K$ when $\Sigma$ is Matérn is unknown; however, the theoretical covariance structure can be obtained by combining the QR-decomposition of $S$ with an eigenvalue decomposition of $\Sigma$. An eigenvalue decomposition of $\Sigma$ is as follows. $$\begin{aligned} \Sigma = UDU^T = \left( \begin{array}{cc} U_1 & U_2 \end{array} \right) \left( \begin{array}{cc} D_1 & 0 \\ 0 & D_2 \end{array} \right) \left( \begin{array}{c} U_1^T \\ U_2^T \end{array} \right)\end{aligned}$$ Assuming equality where the fixed rank method uses an approximation yields $SKS^T \approx \Sigma = UDU^T$, and since $K$ is a symmetric matrix, this implies $SK^{\frac{1}{2}} = U_1 D_1^{\frac{1}{2}}$. Utilizing the QR-decomposition of $S$ provides the equality $K^{\frac{1}{2}} = R^{-1} Q^T U_1 D_1^{\frac{1}{2}}$, which results in the theoretical form of $K$ as the following. $$\label{theoryK} K = R^{-1} Q^T U_1 D_1 U_1^T Q (R^{-1})^T$$ In the next section, we will investigate the behavior of the spatial dependence at the knots to arrive at a reasonable covariance structure. Also, since approximations have provided the efficient computation, we must investigate the effect of using an approximate model; we will explore the resulting loss in accuracy through simulation and outline methods for minimizing it. Experimental Evaluations ======================== In this section, a simulation study is used to define a reasonable covariance structure for the reduced spatial process and also contrast the behavior of the E-M approach with the alternative method of fixed-rank kriging in terms of efficiency and accuracy.
In addition, an interesting relationship between the resolution in the basis functions and the accuracy in the kriging predictions is discovered and explored. Simulation Details ------------------ To investigate the behavior of likelihood-based fixed rank kriging, a simulation study was designed based on a Matérn covariance for the original spatial process and the local bisquare basis function. Although the parameter space for the parameters in the Matérn covariance is the positive reals, for the purposes of this simulation, parameter values were restricted to reasonable ranges commonly seen in geostatistical data. Spatial fields where the ratio of measurement error to spatial dependence is high will not provide significant insight into the effects of using this alternative kriging method. To maintain a reasonable signal-to-noise ratio, the scale parameter should be roughly four times the measurement error. To investigate the effect of measurement error, simulations were run with $\sigma^2$ equal to 0, 0.1, 0.25, and 0.4, while holding $\rho = 1$ constant. The smoothness parameter, $\nu$, is most commonly between 0.5 and 2, so the values 0.5, 1, 1.5, and 2 were used. For certain combinations of $\nu$ and $\theta$, the Matérn covariance matrix becomes numerically problematic. In particular, the range parameter, $\theta$, depends directly on the range of the data as well as the smoothness parameter. To avoid computational concerns, as well as supply a means for direct comparison, the value of $\theta$ was chosen in relation to $\nu$ so that the correlation would be 0.2 between observations at a distance of 1/3 of the spatial domain. Specifically, $\theta$ was set equal to 0.205, 0.137, 0.110, and 0.095, respectively. Figure \[nutheta\] illustrates the relationship between $\nu$ and $\theta$. The domain for the simulation consisted of a $50 \times 50$ grid with locations ranging from 0 to 1.
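The calibration of $\theta$ described above can be reproduced by root-finding on the Matérn correlation; the sketch below recovers values close to the four quoted $\theta$'s (the bracketing interval and tolerances are ours):

```python
import numpy as np
from scipy.special import kv, gamma
from scipy.optimize import brentq

def matern_corr(d, nu, theta):
    """Matern correlation (rho = 1) at distance d."""
    x = d / theta
    return x**nu * kv(nu, x) / (2**(nu - 1) * gamma(nu))

def calibrate_theta(nu, d=1/3, target=0.2):
    """Find theta so that the correlation at distance d equals `target`."""
    return brentq(lambda t: matern_corr(d, nu, t) - target, 1e-3, 1.0)

thetas = [calibrate_theta(nu) for nu in (0.5, 1.0, 1.5, 2.0)]
```

Calibrating $\theta$ this way, rather than fixing it outright, keeps the effective range comparable across the different smoothness values.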
Three hundred randomly selected locations out of the possible 2500 represented the locations of the observations. Basis functions were selected based on a variety of options. The number of knots to use ($m$), the location of the knots, the number of resolutions ($l$), and the bandwidth constant ($b$) are all variables that require specification prior to implementation of the fixed-rank kriging method. The values for $m$, $l$, and $b$ were varied to test the effect on time and accuracy. A regular grid was chosen for the knot location after preliminary results suggested that different space-filling designs for the locations of the knots had an insignificant effect on prediction when the observations were not clustered. The number of resolutions was varied between $l = 1,2,3$. The number of knots depended on the number required to fill the specified domain using a triangular grid. Five, nine, and thirteen knots were placed equidistant along the x-axis, with the remaining domain space filled by equally spaced knots falling along a triangular grid. Multiple resolutions were added by including additional knots on a coarser triangular grid. To avoid issues that arise from creating singular matrices, the starting point for each resolution was slightly altered to create some distance between all knots. The number of knots used for each division of the domain and each resolution are provided in Table \[noknots\] and Figure \[knots\] illustrates the pattern.

  Knots along x-axis   Resolution 1   Resolution 2   Resolution 3
  -------------------- -------------- -------------- --------------
  5                    23             31             34
  9                    77             100            108
  13                   175            221            235

  : Number of knots for varying inputs.[]{data-label="noknots"}

A bandwidth of 1.5 multiplied by the shortest distance between knots was suggested in an application to total column ozone data [@cressiejohannesson2008] and was used as a baseline for this simulation.
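The basis construction just described can be sketched directly from (\[defineB\]) and (\[bandwidth\]); the regular $5 \times 5$ knot grid below is illustrative only (the paper uses triangular grids), and the zero fraction of $S$ shows the sparsity induced by the compact support:

```python
import numpy as np

def bisquare_basis(x, knots, r):
    """S matrix from (defineB): Psi(d) = (1 - d^2)^2 for 0 <= d <= 1, else 0."""
    d = np.linalg.norm(x[:, None, :] - knots[None, :, :], axis=-1) / r
    return np.where(d <= 1.0, (1.0 - d**2)**2, 0.0)

rng = np.random.default_rng(4)
x = rng.random((300, 2))                            # observation locations
g = np.linspace(0.0, 1.0, 5)
gx, gy = np.meshgrid(g, g)
knots = np.column_stack([gx.ravel(), gy.ravel()])   # 5 x 5 regular knot grid
r = 1.5 * 0.25                                      # b = 1.5 times shortest knot spacing
S = bisquare_basis(x, knots, r)                     # 300 x 25 basis matrix
sparsity = np.mean(S == 0.0)                        # fraction of structural zeros
```

With multiple resolutions, one such block would be computed per resolution $l$ with its own $r_l$ and the blocks concatenated column-wise.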
Unfortunately, the method of estimation and prediction outlined in section 2 is sensitive to poor choices of bandwidth, so investigation into the optimal choice of bandwidth was also necessary. This method was thus tested for bandwidths varying from 0.5 to 2.5 in increments of 0.1 multiplied by the shortest distance between knots of the $l^{th}$ resolution. We next investigate the covariance of the reduced spatial process that results from the specifics of the simulation study and suggest reasonable structures for this covariance matrix. Covariance of the Reduced Spatial Process ----------------------------------------- Using the setup from the previous subsection and the method outlined in (\[theoryK\]), the theoretical form of $K$ was simulated for every combination of parameter values, number of knots, resolution, and bandwidth constants. As an example, the covariance structure for $K$ is provided in Figure \[K1\] for all resolutions at selected bandwidth constants when $\nu = 1$, $\theta = 0.137$, $\sigma^2=0.25$, and the x-axis divisor is 9. The stationarity commonly visible in a Matérn covariance is clearly weakened by the use of basis functions to describe the large scale spatial structure. Some of the Matérn structure is maintained when using single resolution basis functions, particularly for lower bandwidth constants. However, the majority of the plots depicted a positive variance at zero distance and random scatter centered at zero for all greater distances. This suggests that the structure of a Matérn covariance is overly complex in describing the correlation between knots at varying distances and could be simplified to an independent, diagonal matrix. There is occasional evidence of a more complex covariance structure where the dependence is negative for extremely small distances larger than zero, as seen in Figure \[K1\] for the multi-resolution basis functions.
However, regardless of the complexity of the covariance structure, the end goal is efficient prediction. Given these simulated patterns, a simple covariance structure, such as the identity covariance matrix, will provide substantial computational gains. In the next section, we present the results from a simulation study conducted with multiple fields simulated for each combination of parameter values. We then compare the E-M approach to likelihood-based fixed rank kriging for various resolutions, bandwidth constants, and numbers of knots.

Simulation Results
------------------

One hundred fields were simulated for each set of parameter values, with an example of one such simulation provided in Figure \[exfield\]. Measurement error was added to each response at the 300 randomly selected locations, producing the values treated as the data. For each simulated field, maximum likelihood estimates and the corresponding predicted values were obtained using three different methods: by optimizing (\[llnew\]) using an identity covariance for $K$ and utilizing (\[ridgereg\]), by the E-M approach (\[M\]) using the full form of $K$, and by the E-M approach using an identity covariance for $K$. The identity covariance for $K$ requires estimating two parameters as opposed to $\frac{m(m+1)+2}{2}$. Thus, the simplicity of the identity covariance for $K$ resulted in an expected gain in computational efficiency. To illustrate this gain, the seconds required to iterate to convergence for each combination of inputs were recorded. This value combines the cost of inverting a matrix with a penalty for covariance structures that require additional iterations to meet convergence. Example plots of the distribution of seconds for varying numbers of knots, resolutions, bandwidth constants, and estimation methods when $\nu=1$ and $\sigma^2=.25$ are provided in Figures \[time11\]-\[time33\].
The red boxplots represent the E-M approach with an identity $K$ and the blue boxplots represent the alternative method. The distributions of seconds for the E-M approach using the full form of $K$ were too high to be visible on the same plot. Fewer knots resulted in less computation time, and bandwidth constants between 1 and 1.5 provided the optimal level of sparsity in $S$. An interesting difference between the two methods is the sensitivity of the E-M approach to poor choices of bandwidth constant; the alternative approach remains efficient regardless of the specifics of the chosen basis functions. The alternative approach consistently ran faster than the E-M approach; however, for fewer knots, if the proper bandwidth and resolution are chosen, the E-M approach required fewer iterations to converge. Thus, given knowledge of the proper bandwidth and resolution, if the E-M approach were implemented in software with higher-performance matrix algebra, its computational efficiency could be greatly improved or, at worst, be equivalent. Median iterations required to meet convergence for $\nu=1$ and $\sigma^2=.25$ are provided in Tables \[iter5\]-\[iter13\] for 5, 9, and 13 knots along the x-axis.
        E-M                          Alternative
 ----- --------- ------ ------ ------------- ------ ------
  $b$   res = 1    2      3      res = 1       2      3
  0.5    14.0     13.0   14.0     23.0        16.5   18.0
  0.6    11.0     12.0   14.0     27.0        16.0   17.0
  0.7     9.0     11.0   14.0     27.0        17.0   17.5
  0.8     8.0     11.0   14.0     26.5        17.0   17.5
  0.9     8.0     12.0   14.0     26.0        17.0   17.0
  1.0     8.0     12.0   15.0     26.0        17.0   17.0
  1.1     8.0     12.5   15.0     27.0        17.0   17.0
  1.2     8.5     13.0   17.0     25.0        17.0   17.0
  1.3     9.0     14.0   18.0     21.0        17.0   17.0
  1.4    11.0     14.0   19.0     17.0        17.0   16.5
  1.5    13.0     15.0   20.0     16.0        16.5   16.0
  1.6    16.0     15.0   21.0     14.5        17.0   16.0
  1.7    20.0     16.0   21.5     13.0        17.0   16.0
  1.8    24.0     17.0   22.0     13.0        17.0   16.0
  1.9    27.0     17.5   23.0     13.0        17.0   16.0
  2.0    29.5     18.5   24.0     13.0        17.0   16.0
  2.1    30.5     19.0   24.0     13.0        17.0   15.5
  2.2    32.5     20.0   24.0     13.0        17.0   15.0
  2.3    35.5     20.5   25.0     13.0        17.0   15.0
  2.4    37.0     22.0   27.0     13.0        16.0   14.0
  2.5    31.0     23.0   27.0     14.0        17.0   14.0
 ----- --------- ------ ------ ------------- ------ ------

  : Median iterations required to meet convergence for 5 knots along the x-axis.[]{data-label="iter5"}

        E-M                          Alternative
 ----- --------- ------ ------ ------------- ------ ------
  $b$   res = 1    2      3      res = 1       2      3
  0.5    50.5     38.0   32.0     20.0        13.0    9.0
  0.6    28.0     31.0   28.0     16.5         9.0    9.0
  0.7    18.0     28.0   27.0     14.0         8.0   10.0
  0.8    14.0     27.0   26.0     13.0         8.0   10.0
  0.9    13.0     27.0   26.0     13.0         8.0   11.0
  1.0    12.0     28.0   27.0     13.0         8.0   10.0
  1.1    12.0     30.0   28.0     15.0         8.0   10.0
  1.2    13.0     31.5   29.0     13.0         8.0   10.0
  1.3    14.0     34.0   30.0     12.0         8.0   10.0
  1.4    16.0     36.5   32.0     11.0         8.0   10.0
  1.5    19.0     40.0   33.0     10.0         8.0    9.0
  1.6    22.0     43.0   34.0      8.0         9.0    9.0
  1.7    25.0     46.0   35.0      8.0         9.0    9.0
  1.8    28.0     48.5   37.0      8.0         9.0    9.0
  1.9    32.0     53.0   39.0      8.0         9.0    9.0
  2.0    36.0     57.0   40.0      8.5        10.0    9.0
  2.1    40.0     62.0   41.0      9.0        10.0    9.0
  2.2    43.0     67.0   42.0      9.0        10.0    9.0
  2.3    48.0     71.0   43.0      9.5        10.0   10.0
  2.4    53.0     75.5   44.0     10.0        11.0    9.0
  2.5    58.0     82.0   45.0     10.0        11.0    9.0
 ----- --------- ------ ------ ------------- ------ ------

  : Median iterations required to meet convergence for 9 knots along the x-axis.[]{data-label="iter9"}

        E-M                          Alternative
 ----- --------- ------ ------ ------------- ------ ------
  $b$   res = 1    2      3      res = 1       2      3
  0.5   165.0     68.0   49.0     22.0        17.0   15.0
  0.6    70.5     41.0   38.0     25.5        15.0   13.0
  0.7    36.0     33.0   34.0     28.5        13.0   11.0
  0.8    24.0     30.0   32.5     16.0        12.0    7.0
  0.9    21.0     29.0   32.0     13.0         7.0    7.0
  1.0    20.0     29.0   32.0      9.5         7.0    7.0
  1.1    20.0     30.0   32.5      8.0         7.0    7.0
  1.2    20.0     31.0   34.0      8.0         7.0    7.0
  1.3    21.0     32.0   35.0      8.0         7.0    8.0
  1.4    23.0     33.5   36.0      7.0         7.0    8.0
  1.5    25.0     35.0   37.0      7.0         7.0    8.0
  1.6    27.0     36.0   38.0      7.0         8.0    8.0
  1.7    30.0     38.5   39.5      7.0         8.0    8.0
  1.8    33.0     40.5   41.0      7.0         8.0    8.0
  1.9    36.0     42.5   42.0      8.0         8.0    8.0
  2.0    39.0     44.0   43.0      8.0         8.0    8.0
  2.1    41.0     47.0   45.0      8.0         8.0    8.0
  2.2    45.0     49.0   47.0      8.0         8.0    8.0
  2.3    49.0     50.0   49.0      8.0         8.0    8.0
  2.4    53.0     52.0   51.0      9.0         8.0    8.0
  2.5    58.0     54.0   52.0      9.0         9.0    8.0
 ----- --------- ------ ------ ------------- ------ ------

  : Median iterations required to meet convergence for 13 knots along the x-axis.[]{data-label="iter13"}

Although fixed rank kriging improves computational efficiency, this approximation will ultimately also increase prediction error. The statistical value of this approximate method can be quantified by the mean square prediction error (MSPE) computed against the true simulated field. As an example, the medians and distributions of MSPE when $\nu=1$ and $\sigma^2=.25$ are provided in Tables \[mspe5knots\]-\[mspe13knots\] and Figures \[mspe11\]-\[mspe33\]. The boxplots in blue correspond to the alternative method, the boxplots in red correspond to the E-M approach with an identity covariance for $K$, and the white boxplot corresponds to the E-M approach with a full covariance for $K$ at fixed $b=1.5$. Based on the distributions and medians of MSPE, the identity covariance structure is at least as accurate in terms of prediction as the full covariance structure. Also, given a proper resolution and bandwidth, the alternative method and the E-M approach are comparable. Due to its simplicity and the speed of estimation, subsequent sections will focus only on spatial prediction using the identity covariance model.
        E-M                          Alternative
 ----- --------- ------- ------- ------------- ------- -------
  $b$   res = 1     2       3       res = 1       2       3
  0.5    0.601    0.466   0.500     0.684       0.473   0.517
  0.6    0.483    0.396   0.471     0.573       0.404   0.483
  0.7    0.372    0.362   0.455     0.413       0.369   0.471
  0.8    0.296    0.345   0.442     0.312       0.348   0.459
  0.9    0.262    0.336   0.435     0.265       0.339   0.446
  1.0    0.238    0.334   0.434     0.250       0.337   0.450
  1.1    0.230    0.335   0.426     0.238       0.342   0.442
  1.2    0.222    0.339   0.432     0.239       0.352   0.446
  1.3    0.221    0.346   0.431     0.246       0.358   0.459
  1.4    0.222    0.356   0.433     0.250       0.375   0.472
  1.5    0.223    0.368   0.439     0.266       0.382   0.480
  1.6    0.228    0.381   0.445     0.286       0.396   0.485
  1.7    0.238    0.390   0.450     0.299       0.404   0.486
  1.8    0.248    0.400   0.457     0.310       0.415   0.485
  1.9    0.255    0.411   0.470     0.329       0.430   0.495
  2.0    0.254    0.423   0.480     0.346       0.442   0.508
  2.1    0.255    0.436   0.494     0.359       0.455   0.513
  2.2    0.262    0.450   0.506     0.372       0.467   0.523
  2.3    0.265    0.457   0.516     0.406       0.476   0.532
  2.4    0.266    0.472   0.527     0.429       0.491   0.544
  2.5    0.263    0.482   0.535     0.456       0.505   0.549
 ----- --------- ------- ------- ------------- ------- -------

  : Median MSPE for 5 knots along the x-axis.[]{data-label="mspe5knots"}

        E-M                          Alternative
 ----- --------- ------- ------- ------------- ------- -------
  $b$   res = 1     2       3       res = 1       2       3
  0.5    0.675    0.565   0.487     0.855       0.696   0.517
  0.6    0.491    0.487   0.430     0.770       0.515   0.454
  0.7    0.338    0.434   0.393     0.423       0.457   0.414
  0.8    0.243    0.405   0.374     0.261       0.405   0.382
  0.9    0.200    0.381   0.360     0.221       0.381   0.375
  1.0    0.181    0.360   0.351     0.209       0.360   0.360
  1.1    0.167    0.344   0.348     0.197       0.343   0.348
  1.2    0.156    0.332   0.345     0.179       0.331   0.341
  1.3    0.148    0.324   0.345     0.162       0.321   0.334
  1.4    0.143    0.315   0.347     0.148       0.313   0.334
  1.5    0.142    0.308   0.353     0.143       0.310   0.331
  1.6    0.141    0.303   0.358     0.146       0.305   0.330
  1.7    0.144    0.299   0.358     0.149       0.306   0.332
  1.8    0.146    0.303   0.355     0.154       0.303   0.331
  1.9    0.150    0.303   0.357     0.161       0.304   0.336
  2.0    0.154    0.301   0.357     0.164       0.305   0.338
  2.1    0.157    0.303   0.361     0.171       0.305   0.340
  2.2    0.161    0.305   0.366     0.176       0.309   0.340
  2.3    0.163    0.304   0.374     0.180       0.316   0.336
  2.4    0.168    0.306   0.371     0.185       0.317   0.336
  2.5    0.171    0.308   0.373     0.193       0.321   0.335
 ----- --------- ------- ------- ------------- ------- -------

  : Median MSPE for 9 knots along the x-axis.[]{data-label="mspe9knots"}

        E-M                          Alternative
 ----- --------- ------- ------- ------------- ------- -------
  $b$   res = 1     2       3       res = 1       2       3
  0.5    0.784    0.549   0.470     0.886       0.793   0.746
  0.6    0.582    0.416   0.386     0.851       0.723   0.616
  0.7    0.381    0.339   0.335     0.818       0.573   0.439
  0.8    0.276    0.291   0.303     0.701       0.406   0.323
  0.9    0.231    0.257   0.278     0.436       0.289   0.289
  1.0    0.208    0.231   0.263     0.277       0.245   0.270
  1.1    0.187    0.216   0.252     0.214       0.233   0.261
  1.2    0.167    0.207   0.242     0.188       0.218   0.248
  1.3    0.157    0.199   0.237     0.169       0.208   0.241
  1.4    0.148    0.193   0.234     0.154       0.199   0.236
  1.5    0.141    0.190   0.229     0.146       0.193   0.231
  1.6    0.138    0.188   0.226     0.140       0.191   0.227
  1.7    0.134    0.188   0.223     0.133       0.189   0.225
  1.8    0.133    0.188   0.224     0.133       0.188   0.225
  1.9    0.131    0.188   0.223     0.131       0.188   0.223
  2.0    0.131    0.190   0.225     0.131       0.188   0.223
  2.1    0.131    0.190   0.225     0.131       0.189   0.226
  2.2    0.132    0.191   0.227     0.133       0.190   0.227
  2.3    0.133    0.192   0.228     0.134       0.191   0.228
  2.4    0.134    0.194   0.231     0.135       0.191   0.230
  2.5    0.136    0.196   0.233     0.137       0.194   0.234
 ----- --------- ------- ------- ------------- ------- -------

  : Median MSPE for 13 knots along the x-axis.[]{data-label="mspe13knots"}

From both the medians and distributions, it is apparent that both the bandwidth constant and the number of resolutions used are crucial in minimizing the error associated with prediction. Not only does the median MSPE drop considerably given the best bandwidth and resolution, but the variability in the distribution of MSPE decreases as well. Another promising result is that the sensitivity of MSPE to these choices is much reduced by using the identity covariance. Poor choices of resolution can greatly increase MSPE when using a full covariance matrix for $K$, particularly when using more knots. However, MSPE remains smallest when using the identity covariance provided the bandwidth constant is within a reasonable range. The random fields were simulated as having a single resolution; thus, the fact that a single resolution produced the lowest MSPE in each case is not surprising.
The increase in MSPE when an improper number of resolutions is chosen is of concern, and suggests the need for an a priori estimate of the number of resolutions. When the number of resolutions is selected properly and the number of knots relative to the sample size is small ($\frac{m}{n}<.25$), any bandwidth constant near the value originally suggested by Cressie and Johannesson ($b=1.5$) seems reasonable; we therefore use $b=1.5$ for the remainder of this paper. In the next section, we outline how to simulate multiple resolutions and examine the effect of varying the original form of $K$.

The Optimal Number of Resolutions
=================================

In this section, an algorithm for simulating multi-resolution data is outlined and the effect of the form of the covariance matrix for the knots is examined in terms of efficiency and accuracy. Additionally, a means of identifying the optimal number of resolutions is suggested.

Simulating Multiple Resolutions
-------------------------------

We want to minimize the error in prediction, quantified by MSPE, by selecting a reasonable number of resolutions. A multi-resolution covariance matrix is necessary to simulate multi-resolution data. To achieve this, we assume that our data can be described directly by the covariance structure implied by fixed rank kriging, given by (\[simy\]), where $S_{(l)}$ represents a matrix of basis functions containing at most $l$ resolutions. $$\label{simy} \mathbf{y} \sim MVN(\mathbf{0}, S_{(l)}KS_{(l)}^T + \sigma^2I)$$ As a simple example, the simulation from Katzfuss and Cressie [@katzfusscressie2009] was mirrored, with added complexity to incorporate multiple resolutions. The spatial domain is one-dimensional, $D=\{1,2,\cdots,256\}$, and $\sigma^2$ was fixed at 0.25. Six knots were used at locations $\mathbf{u}=\{64,192,32,96,160,224\}$ at 2 resolutions.
Since additional knots can greatly improve prediction, particularly when few knots are being used, all six knots were used with one as well as with two resolutions. The minimum distance between knots from \[bandwidth\] was altered to adjust for this inconsistency. When only a single resolution was used, the bandwidth was 96 ($r = 1.5 \times 64 = 96$, where 64 is the minimum distance between the knots $u_3,\cdots,u_6$). When two resolutions were used, $\{u_1,u_2\}$ were considered knots at the first resolution and $\{u_3,\cdots,u_6\}$ were knots at the second, finer resolution. Thus, for two resolutions, the bandwidths were computed in the usual way ($r_1=192$ and $r_2=96$).

Simulation Results
------------------

A concern when simulating the data with this algorithm is that the behavior of $K$ is somewhat unknown. Evidence from subsection 3.2 suggested that $K$ could be an identity matrix, or could possibly follow a more variable Wishart distribution when the original spatial process is Matérn. The effect on the entire spatial process when a form of $K$ is assumed is less well understood. To ensure that the form of $K$ does not have a severe effect on prediction, a small simulation was conducted using three options for $K$: a semi-random positive-definite matrix, a random Wishart matrix with 6 d.f., and a Matérn covariance matrix with $\nu=1$, $\rho=5$, and $\theta=64$. $K$ plotted against distance is provided in Figure \[K\], the resulting $SKS^T$ for one resolution is plotted against distance in Figure \[SKS1\], $SKS^T$ for two resolutions is plotted against distance in Figure \[SKS2\], and the exact forms of the $K$ matrices are given in the Appendix. All plots illustrate a loss of stationarity, and the form of $SKS^T$ is clearly affected by $K$. Prediction, however, was the main concern, so the effect of the form of $K$ on predicted values was explored. Figures \[KpredS\]-\[KpredM\] illustrate the simulated values compared to the predicted values for a single simulation.
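A minimal sketch of drawing one field from (\[simy\]) under the one-dimensional setup above. The bisquare basis form and the positive-definite stand-in for $K$ are illustrative assumptions, not the exact matrices used in the study.

```python
import numpy as np

def bisquare_1d(s, knots, r):
    """One-dimensional local bisquare basis functions with bandwidth r."""
    d = np.abs(s[:, None] - knots[None, :])
    return np.where(d <= r, (1.0 - (d / r) ** 2) ** 2, 0.0)

rng = np.random.default_rng(1)
s = np.arange(1, 257, dtype=float)           # domain D = {1, ..., 256}
u1 = np.array([64.0, 192.0])                 # resolution-1 knots, bandwidth r1 = 192
u2 = np.array([32.0, 96.0, 160.0, 224.0])    # resolution-2 knots, bandwidth r2 = 96
S = np.hstack([bisquare_1d(s, u1, 192.0), bisquare_1d(s, u2, 96.0)])  # (256, 6)

sigma2 = 0.25
A = rng.normal(size=(6, 6))
K = A @ A.T + 6.0 * np.eye(6)   # an arbitrary positive-definite stand-in for K

# y ~ MVN(0, S K S^T + sigma^2 I)
cov = S @ K @ S.T + sigma2 * np.eye(256)
y = rng.multivariate_normal(np.zeros(256), cov)
```

Swapping `K` for a Wishart draw or a Matérn matrix, as in the study, changes only the line defining `K`; the simulated covariance always retains the $SKS^T + \sigma^2 I$ structure.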
Predicted values were obtained with $K$ estimated in its totality and with the identity form ($K=\delta I$), and with $S$ containing one and two resolutions. Predicted values are nearly identical whether all elements of $K$ are estimated or the identity form of $K$ is used, and this was consistent across an assumed semi-random, Wishart, or Matérn covariance for $K$. Once again, resolution selection was the key to efficiently minimizing MSPE. All combinations of resolutions used to simulate $y$, forms of $K$ used to simulate $y$, resolutions used to estimate $K$, and forms of $K$ used in the estimation and prediction process were randomly simulated 100 times. Boxplots of MSPE are provided in Figure \[Kmspe\]. The blue and green boxplots represent the correct resolution used for estimation and prediction with an identity and full $K$, respectively, and the red and violet boxplots represent the incorrect resolution with an identity and full $K$, respectively. The distributions of MSPE for the identity and full $K$ matrix are nearly identical regardless of the form of $K$ used to simulate the data. An incorrect choice of resolution, however, resulted in a uniformly substantial increase in MSPE, as illustrated for a single random field in Figures \[KpredS\]-\[KpredM\]. Any additional knowledge gained through full estimation of $K$ is negated by its computational inefficiency. Estimation and prediction using multi-resolution basis functions consistently require more time, but the larger issue is the rapid growth in computation time required to estimate $\frac{m(m+1)+2}{2}$ parameters as opposed to 2, i.e., the full $K$ versus the identity $K$. This increase in computational time is a direct result of an increase in the number of iterations necessary to meet convergence, as illustrated in Figures \[Ktime\] and \[Kcount\].
This simulation provided evidence that prediction is insensitive to the exact form of $K$ used to simulate multi-resolution data and that the advantages of using the identity form of $K$ greatly outweigh the loss of information. Selecting the correct resolution remains the greatest concern in prediction. In the next subsection, we examine a method to determine the optimal number of resolutions needed to minimize MSPE.

Resolution Selection
--------------------

Knowledge of the correct number of resolutions has been shown to be a crucial aspect of improving prediction when using fixed rank kriging. Unfortunately, the correct number of resolutions is unknown when the method is applied to real data. From the simulation in subsection 3.4, a pattern emerges with respect to the parameter estimates for $\sigma^2$: it is almost impossible to differentiate between the boxplots of MSPE in Figure \[Kmspe\] and the boxplots of $\sigma^2$ in Figure \[Ksig\]. Distributions are shifted down for the correct resolution; in this case, the distributions are also slightly lower when using a full $K$. This is intuitive because a full $K$ provides more information to describe the process, thus decreasing the "measurement error" ($\sigma^2$). Provided $y$ is a Gaussian random variable, as outlined in \[model2\], $\sigma^2$ represents the amount of variation in the data not described by the underlying spatial process. Consequently, the closer $SKS^T$ is to the true covariance of the spatial process of $y$, the smaller $\sigma^2$ will be. Table \[agreeRes\] summarizes the frequency with which the correct resolution resulted in the smaller estimate of both MSPE and $\sigma^2$ out of a possible 100 simulations. Comparing the number of simulations in which MSPE and $\sigma^2$ agree on the correct resolution provides further evidence of how correlated MSPE and $\sigma^2$ truly are. This information is provided in Table \[agreeEst\].
                               MSPE                            $\sigma^2$
 --------------------------- ------------- --------- -------- ------------- --------- --------
  Simulation and Prediction   Semi-Random   Wishart   Matérn   Semi-Random   Wishart   Matérn
  1 resolution                    87           87        90        87           87        91
  2 resolutions                   81           80        91        81           85        92

  : Frequency table for agreement between simulation and prediction for minimizing MSPE and $\sigma^2$.[]{data-label="agreeRes"}

  1 Resolution                          2 Resolutions
 ------------- --------- -------- ------------- --------- --------
  Semi-Random   Wishart   Matérn   Semi-Random   Wishart   Matérn
      100         100       97         100         95        99

  : Frequency table for agreement between minimizing MSPE and $\sigma^2$.[]{data-label="agreeEst"}

The high accordance between MSPE and $\sigma^2$ is somewhat alarming. However, this simulation was performed with $\mathbf{s}=\mathbf{s_0}$, i.e., prediction was performed only at the observation locations themselves. Given the kriging equation, it is clear that, provided $SKS^T$ is nonsingular, MSPE will converge to 0 as $\sigma^2$ tends toward 0 regardless of $S$ or $K$, as illustrated in \[mspeto0\]. The fact that situations exist where the smallest $\sigma^2$ does not guarantee the minimum MSPE implies that the relationship is not that obvious. $$\label{mspeto0} \lim_{\sigma^2 \rightarrow 0} SKS^T(SKS^T + \sigma^2I)^{-1}y = y$$ To verify this belief, cross-validation was administered on these data, with 16 of the 256 observations held out and predicted using a model fit to the remaining 240. This was done systematically so that all 256 observations were predicted by means of cross-validation; the results are provided in Table \[agreeCV\].

  1 Resolution                          2 Resolutions
 ------------- --------- -------- ------------- --------- --------
  Semi-Random   Wishart   Matérn   Semi-Random   Wishart   Matérn
      92          97        96         96          93        92

  : Frequency table for agreement between minimizing MSPE obtained through cross-validation and $\sigma^2$.[]{data-label="agreeCV"}

The relationship holds; a small $\sigma^2$ is a good indicator that the optimal resolution to adequately describe the data was used.
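The limit in (\[mspeto0\]) is easy to verify numerically. In this sketch $S$ is a square full-rank matrix, so that $SKS^T$ is nonsingular and the limit holds exactly; the dimensions and matrices are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
S = rng.normal(size=(n, n)) + 2.0 * np.eye(n)  # full rank, so S K S^T is nonsingular
K = np.eye(n)
y = rng.normal(size=n)
C = S @ K @ S.T

def in_sample_pred(sigma2):
    # kriging prediction evaluated at the observation locations themselves:
    # S K S^T (S K S^T + sigma^2 I)^{-1} y
    return C @ np.linalg.solve(C + sigma2 * np.eye(n), y)

# as sigma^2 -> 0 the in-sample prediction reproduces y almost exactly,
# regardless of S or K, which is why in-sample MSPE alone cannot
# discriminate between candidate resolutions
err_small = float(np.max(np.abs(in_sample_pred(1e-10) - y)))
err_large = float(np.max(np.abs(in_sample_pred(1.0) - y)))
```

Since `err_small` is tiny while `err_large` is not, the shrinkage toward zero disappears as $\sigma^2 \rightarrow 0$; this motivates the switch to cross-validated prediction above.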
In terms of CPU time, it is more efficient to perform fixed rank kriging once for each candidate number of resolutions using the identity matrix for $K$ than it is to perform a single application of fixed rank kriging using the full $K$. Thus, the best way to guarantee improved prediction while reducing the computational burden is to perform fixed rank kriging for each number of resolutions considered while restricting $K=\delta I$, and to select the predictions obtained from the model with the smallest estimate of $\sigma^2$. Comparing the E-M approach for obtaining MLEs to the alternative fixed rank kriging method suggested that, although the alternative method was faster when directly implemented in `R`, the potential for time improvements in the E-M approach should result in a more efficient algorithm. Also, the possible reduction in MSPE and the ability of the algorithm to remain in the constricted parameter space add to the benefits of the E-M approach. We next apply the methods presented above to the NCDC data of monthly temperatures recorded across the continental United States of America.

Application
===========

A basic need in climate science is to consider mean temperature and precipitation fields on a regular grid [@johnsnychkakitteldaly2003]. One important application is to compare these fields from observational data to those simulated by climate models. As an illustration, the likelihood-based fixed rank kriging method was applied to a mean temperature data set collected by the Cooperative Observer Program (COOP) and archived by the National Climate Data Center (NCDC). The COOP, formally established in 1890, is the nation’s largest and oldest weather and climate observing network [@coop2000]. It consists of over 11,700 volunteer citizens and institutions observing and reporting weather information on a 24-hour basis. For our example, we use mean temperatures in April 1990 observed over the entire United States of America.
The daily minimum and maximum temperatures were observed at 5030 locations across the U.S., and the mean monthly minimum and maximum temperatures were calculated. To obtain an overall monthly average, the mean monthly minimum and maximum temperatures were averaged together; we will use this as the “monthly mean” temperature. Intrinsic stationarity is not a reasonable assumption for these data. For example, temperature is affected by elevation, so the spatial field does not have a constant mean. A simple approach to adjust for the non-stationarity due to elevation is the additive model given in equation (\[elmodel\]). $$\begin{aligned} \label{elmodel} y(\mathbf{s},\mathbf{h}) &=& g(\mathbf{h}) + f(\mathbf{s}) + \epsilon(\mathbf{s}) \\ \mathbf{h} &=& \mbox{elevation at location } \mathbf{s} \nonumber \\ \epsilon(\mathbf{s}) &\sim& MVN(\mathbf{0},\sigma^2I) \nonumber\end{aligned}$$ To create a zero-mean, intrinsically stationary spatial process, the additive effect of elevation was removed by backfitting a cubic smoothing spline regression model to the original data. The residuals from this spline regression fit were then assumed to be zero-mean, intrinsically stationary observations and could be used with likelihood-based fixed rank kriging to obtain predicted values for the entire grid. $$\begin{aligned} \label{USmodel} (y(\mathbf{s},\mathbf{h}) - g(\mathbf{h})) &=& f(\mathbf{s}) + \epsilon(\mathbf{s}) \\ E(y(\mathbf{s},\mathbf{h}) - g(\mathbf{h})) &=& \mathbf{0} \nonumber \\ \epsilon(\mathbf{s}) &\sim& MVN(\mathbf{0},\sigma^2I) \nonumber\end{aligned}$$ For simplicity, the knots were selected on a regular grid. When run on a standard laptop with 3GB of RAM and a 32-bit processor, the entire process took roughly 24 minutes.
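The detrending step in (\[elmodel\])–(\[USmodel\]) can be sketched as follows: fit a cubic smoothing spline estimate of $g(\mathbf{h})$ and pass the residuals to fixed rank kriging. The toy lapse-rate trend and all numbers below are synthetic stand-ins for the COOP data; the smoothing factor `s` is an illustrative choice, not the one used in the paper.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)
n = 500
elev = np.sort(rng.uniform(0.0, 3000.0, n))            # station elevations h (synthetic)
temp = 25.0 - 0.006 * elev + rng.normal(0.0, 1.0, n)   # toy elevation trend plus noise

# cubic (k=3) smoothing spline estimate of g(h); s controls the smoothness
g_hat = UnivariateSpline(elev, temp, k=3, s=float(n))

# residuals treated as zero-mean, intrinsically stationary observations,
# which are then handed to likelihood-based fixed rank kriging
resid = temp - g_hat(elev)
```

The final map is recovered by adding the spline surface $g(\mathbf{h})$ back onto the kriged residual field.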
The following figures depict the observations and knots, the predicted values from the spline regression model, the predicted values using likelihood-based fixed rank kriging on the residuals, and the combination of the spline regression and likelihood-based fixed rank kriging estimates. From the separate predicted values, it is clear that elevation is useful for describing mountain ranges, such as the Rocky Mountains and the Appalachians, but provides little information over the plains and cannot differentiate between the plains of Ohio and the desert of Arizona. The likelihood-based fixed rank kriging estimates capture the additional warmth of the desert states west of Texas, the mild climate of the Pacific coastline, the cold areas of the Midwest and the Northeast, and the negative relationship between latitude and temperature. Overall, the full prediction model appears to capture the essence of the data fairly well. To see the temperature change over the four seasons, plots of the predicted mean temperatures for April 1990, July 1990, October 1990, and January 1991 are provided in the Appendix.

Discussion
==========

This paper presents a method for performing spatial prediction efficiently and effectively when data sets are large. This is accomplished by first approximating the original spatial process using fixed rank kriging, which separates the spatial covariance into a linear combination of basis functions and a reduced-rank spatial process. With the approximate spatial process, a QR-decomposition is used to reduce the dimensionality of the covariance matrix that must be solved for maximum likelihood estimation. Finally, ridge regression is used in place of kriging to obtain the predicted values over a spatial grid in order to increase efficiency.
Assuming a Matérn covariance structure, experimental results have shown that with this likelihood-based fixed rank kriging approach, the spatial covariance can be simplified to an independent model, with the remaining dependence captured in the basis functions. However, the choice of bandwidth in these basis functions is crucial to minimizing the error of the spatial predictions. Estimating the bandwidth by methods such as maximum likelihood could be costly enough to eradicate all the computational gains achieved by likelihood-based fixed rank kriging. Fortunately, it has been shown that a surprisingly simple rule can provide a sufficient estimate of the optimal bandwidth given only the smoothness parameter and the amount of reduction used in the data. When the empirical bandwidth rule is used, the reduction in computational costs stated by Cressie and Johannesson [@cressiejohannesson2008] is preserved, so this method can still be applied to very large data sets, such as the climatological data set used in the previous example. Since the theory behind this computationally efficient method for spatial prediction can be applied to any type of likelihood estimation, the method could also be extended to a Bayesian analysis, though this may require additional exploration. Although efficient kriging should be applicable to any Gaussian random field with intrinsic stationarity, the empirical bandwidth rule pertains to local bisquare basis functions and applies only to situations where a Matérn covariance structure can be assumed for the original spatial process. To produce simulations comparable to geophysical data sets, the measurement error was also assumed to be constant and relatively low.
In addition, whether intrinsic stationarity can be assumed or can be reasonably achieved through detrending of the data, the approximations used in fixed rank kriging will result in a lack of exact stationarity in the field. There are several topics for future research. The first is optimal knot placement. Simulations were attempted using both a regular grid and a stratified sampling technique, without an obvious difference in performance, but these results apply only to randomly located observations, and a more rigorous analysis would be required to reach any definite conclusions. The model could be further generalized to incorporate covariates such as elevation instead of detrending. Obtaining a stationary field is another difficulty that could be explored further; alternative basis functions could possibly be used to adjust for directional dependence. Also, considering that the accuracy of these methods was quantified by means of mean square prediction error, another area for future work is investigating the effect of this criterion on kriging prediction errors.

Acknowledgments
===============

This research was supported, in part, by National Science Foundation (NSF) grant DMS-0707069. The National Center for Atmospheric Research is managed by the University Corporation for Atmospheric Research under the sponsorship of the National Science Foundation.

Appendix
========

Seasonal Temperatures
---------------------

Woodbury matrix identity
------------------------

Let $C$ be any matrix.
Then: $$\begin{aligned} C^T(CC^T + I)^{-1} &=& (C^TC + I)^{-1} (C^TC + I) C^T (CC^T + I)^{-1} \\ &=& (C^TC + I)^{-1} (C^TCC^T + C^T) (CC^T + I)^{-1} \\ &=& (C^TC + I)^{-1} C^T (CC^T + I) (CC^T + I)^{-1} \\ &=& (C^TC + I)^{-1} C^T\end{aligned}$$ K matrices ---------- ### Semi-Random K $$\begin{aligned} K = \left( \begin{array}{cccccc} 4.0 & 0.5 & -0.5 & 0.1 & 0.2 & 0.3 \\ 0.5 & 6.0 & -0.2 & 0.7 & 1.0 & -0.4 \\ -0.5 & -0.2 & 3.0 & 1.0 & 1.0 & -0.9 \\ 0.1 & 0.7 & 1.0 & 4.0 & 0.7 & 0.9 \\ 0.2 & 1.0 & 1.0 & 0.7 & 7.0 & -1.0 \\ 0.3 & -0.4 & -0.9 & 0.9 & -1.0 & 5.0 \end{array} \right)\end{aligned}$$ ### Wishart K $$\begin{aligned} K = \left( \begin{array}{cccccc} 2.176 & 1.269 & -0.685 & -0.409 & 0.212 & -1.630 \\ 1.269 & 7.803 & -1.707 & -1.927 & 1.341 & 0.830 \\ -0.685 & -1.707 & 2.571 & -1.591 & 0.612 & -0.802 \\ -0.409 & -1.927 & -1.591 & 3.445 & -2.132 & 1.245 \\ 0.212 & 1.341 & 0.612 & -2.132 & 2.049 & -1.237 \\ -1.630 & 0.830 & -0.802 & 1.245 & -1.237 & 2.968 \end{array} \right)\end{aligned}$$ ### Matérn K $$\begin{aligned} K = \left( \begin{array}{cccccc} 5.000 & 1.399 & 4.141 & 4.141 & 2.080 & 0.924 \\ 1.399 & 5.000 & 0.924 & 2.080 & 4.141 & 4.141 \\ 4.141 & 0.924 & 5.000 & 3.010 & 1.399 & 0.602 \\ 4.141 & 2.080 & 3.010 & 5.000 & 3.010 & 1.399 \\ 2.080 & 4.141 & 1.399 & 3.010 & 5.000 & 3.010 \\ 0.924 & 4.141 & 0.602 & 1.399 & 3.010 & 5.000 \end{array} \right)\end{aligned}$$
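The matrix identity derived in the Woodbury subsection above is easy to check numerically; the dimensions below are arbitrary. Its computational point is that the larger inverse can be traded for the smaller one.

```python
import numpy as np

rng = np.random.default_rng(3)
C = rng.normal(size=(4, 7))   # any rectangular matrix works

# C^T (C C^T + I)^{-1}, computed with the small (4x4) inverse ...
lhs = C.T @ np.linalg.inv(C @ C.T + np.eye(4))
# ... equals (C^T C + I)^{-1} C^T, computed with the large (7x7) inverse
rhs = np.linalg.inv(C.T @ C + np.eye(7)) @ C.T

max_diff = float(np.max(np.abs(lhs - rhs)))
```

In the paper's setting this is what allows the $n \times n$ inverse in the kriging equations to be replaced by an $m \times m$ inverse, with $m \ll n$.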
\ \ $^{\mathrm{1}}$ JPSM and US Census Bureau Research Associate, $^{\mathrm{2}}$ US Census Bureau, $^{\mathrm{3}}$ JPSM, University of Maryland $^{\mathrm{1,3}}$JPSM, 1218 LeFrak Hall, 7251 Preinkert Dr., College Park, MD 20742, USA, $^{\mathrm{2}}$4600 Silver Hill Rd, Washington, MD 20233, United States\

Disclaimer: Any views expressed are those of the authors and not necessarily those of the U.S. Census Bureau.

Abstract {#abstract .unnumbered}
========

For the last several decades, the US Census Bureau has been using the AK composite estimation method for generating employment level and rate estimates. In this paper, we devise an evaluation study to compare the AK estimator with different competitors using Current Population Survey (CPS) data and a sample design that mimics the CPS design. To this end, we first expand the list of potential competitors to the AK estimator by developing two new classes of estimators. The first class includes the AK estimator as a member, while the second class includes a subclass of univariate estimators considered earlier in the literature. The optimum estimator under a given optimality criterion is obtained for each class. The optimum estimators, however, cannot be used directly because they depend on unknown variances and covariances of the month-in-sample estimates, which are essentially direct survey-weighted estimates for groups of sampled households staying in the sample the same number of months within a given group and different numbers of months across groups. The AK estimator is obtained from the first class when the variances and covariances are estimated under rather strong stationarity assumptions on the variances and covariances. The AK estimator, and the other estimators obtained from the optimum estimator in either class when the unknown variances and covariances are substituted by their natural estimators, did not produce good results in our evaluation study.
In the real data analysis, the AK estimates are consistently below the survey-weighted estimates, indicating potential bias in the estimator. Any attempt to improve on the estimated optimal estimator in either class would require a thorough investigation of the highly non-trivial problem of estimation of variances and covariances for a complex setting like the CPS. Better estimators of the variances and covariances are needed. We do not pursue this problem in this paper. A different approach is to use a variant of the regression composite estimator used by Statistics Canada. This estimator does not require estimation of variances and covariances of the month-in-sample estimators and is less sensitive to the rotation group bias. Our study demonstrates that there is great potential for improving the estimation of levels and month-to-month changes in the unemployment rate by using the regression composite estimator. Keywords {#keywords .unnumbered} -------- Calibration; estimated controls; longitudinal survey; labor force statistics. Introduction ============ In repeated surveys, different composite estimators that borrow strength over time have been proposed; see [@jones1980best], [@yansaneh1998optimal], [@bell2001comparison], [@singh2001regression], [@fuller2001regression] and others. Such composite estimators typically improve on the standard direct survey-weighted estimators in terms of mean squared error (MSE) and are commonly used by different government agencies for producing official labor force statistics. For example, to produce national employment and unemployment levels and rates, the U.S. Census Bureau uses the AK composite estimation technique developed using the ideas given in [@gurney1965multivariate]. Motivated by a Statistics Canada application, [@singh1995composite] introduced an ingenious idea for generating a composite estimator that can be computed using Statistics Canada’s existing software for computing generalized regression estimates. 
The key idea in [@singh1995composite] is to create a proxy (auxiliary) variable that uses information at the individual level as well as estimates at the population level from both previous and current periods. Using this proxy variable, [@singh1995composite] obtained a composite estimator, referred to as the Modified Regression 1 (MR1) estimator in the literature. However, [@Singh1997] noted that MR1 does not perform well in estimating changes in labor force statistics, which motivated them to propose a different composite estimator, called MR2, using a new proxy variable. [@singh2001regression] generalized the idea of the MR1 and MR2 estimators by suggesting a general set of proxy variables. [@fuller2001regression] noted that the regression composite estimator proposed by [@Singh1997] is subject to an undesirable drift problem, i.e., it may produce estimates that drift away from the real value suggested by the underlying model as time progresses, and proposed an alternative regression composite method to rectify the drift problem. Their method differs from the method of [@singh2001regression] in two respects. First, the idea of rectifying the drift problem by a weighted combination of the two proxy variables used for MR1 and MR2 is new. Secondly, their final regression composite estimator involves estimation of the weight assigned to the MR1 or MR2 control variable in the weighted combination; this idea was not discussed in [@singh2001regression]. In short, the Fuller-Rao regression composite estimator with estimated weight cannot be viewed as a special case of [@singh2001regression] and vice versa. [@gambino2001regression] conducted an empirical study to evaluate the Fuller-Rao regression composite estimator, offered a missing value treatment and listed several advantages (e.g., weighting procedure, consistency, efficiency gain, etc.) of the Fuller-Rao regression composite estimator over the AK estimator. 
Statistics Canada now uses the Fuller-Rao method for its official labor force statistics production. [@salonen2007regression] conducted an empirical study to compare the currently used Finnish labor force estimator with the Fuller-Rao regression composite and other estimators. [@bell2001comparison] applied the generalized regression technique to improve on the Best Linear Unbiased Estimator (BLUE) based on a fixed window of time points and compared his estimator with the AK composite estimator of [@gurney1965multivariate] and the modified regression estimator of Singh et al. (1997), using data from the Australian Labour Force Survey. [@beaumont2005refinement] proposed a regression composite estimator with missing covariates defined using variables of interest from the previous month. The main goal of this paper is to compare the design-based properties of the AK estimator with different rival estimators using the CPS data. To this end, we first expand the list of potential estimators by considering two new classes of composite estimators. The first class includes the AK estimator as a member. The second class generalizes the class of estimators considered earlier by Yansaneh and Fuller (1998) to incorporate multiple categories of employment status (e.g., employed, unemployed, and not in the labor force). We obtain the best linear unbiased estimator (BLUE) for each class of estimators. We call them the best AK estimator and the multivariate BLUE, respectively. As special cases of the multivariate BLUE, one can generate the univariate BLUE and the best AK estimators. If the covariance matrix between two vectors of observations corresponding to any two different variables is a null matrix, then the multivariate BLUE is identical to the univariate BLUE when the design matrix is the same for the variables. However, in general they are not identical when we do not have a block-diagonal covariance structure, as is the case in our problem. 
The optimal estimator for a given class of estimators, derived under a given model and an optimality condition, cannot be used as it involves unknown model parameters (e.g., variances and covariances). The AK estimator used by the Census Bureau is obtained from the optimal estimator when variances and covariances are substituted by estimators justified under rather strong stationarity assumptions. We devise an evaluation study in order to assess the exact design-based properties of different composite estimators using the CPS data and CPS sample design. We demonstrate that the optimal estimator for a given model with estimated variances and covariances can perform poorly even when the modeling assumptions are valid. We included the multivariate BLUE with estimated variances and covariances for completeness of this research. While the multivariate BLUE performs the best under the model that generates it, as expected, it performed the worst (worse than the univariate BLUE with estimated variances and covariances) once we substituted estimated variances and covariances in the multivariate BLUE formula. Overall, we found that the Fuller-Rao estimator performed the best among all composite estimators considered in our study. In Section 2, we discuss the population and sample design. In Section 3, we review different classes of estimators and the optimal estimator within each class. In Section 4, we describe our evaluation study to assess the design-based properties of different estimators. In Section 5, we report the CPS data analysis. Some discussions and future research topics are given in Section 6. We defer the proofs of relevant results and the description of the CPS design to the Appendix. To facilitate reading of the paper, in Appendix \[appendix:notation\] we list all the notations used in the paper. 
Notations ========= Population ---------- Our theoretical framework uses three indices to identify three dimensions: m for month, k for individual and e for an employment status category. In this paper, we will consider three categories of employment status: employed, unemployed and not in the labor force. The theory and methods developed in this paper, however, extend to more than 3 categories of employment status in a straightforward way. Consider a sequence of finite populations of individuals $\left(U_{m}\right)_{{m}\in \left\{1\ldots M\right\}}$, where $U_{m}$ refers to the finite population for month ${m}$. Let $N$ denote the cardinality of $U=\bigcup_{m=1}^M U_m$. Let ${\mathbf{y}}_{{m},{k},{e}}=1$ if the ${k}$th individual belongs to $U_{m}$ and has ${e}$th employment status and ${\mathbf{y}}_{{m},{k},{e}}=0$ otherwise, ${m}\in\{1,\cdots,{M}\},\;{k}\in\{1,\cdots,N\}, \; {e}\in\{1,2,3\}.$ Because of our three dimensional data structure, we find it convenient to introduce arrays in developing our methodology and theory. Let ${\mathbf{y}}=[{\mathbf{y}}_{{m},{k},{e}}]_{{m}\in\{1,\ldots,{M}\},{k}\in \{1,\ldots,N\},e\in \{1,2,3\}}$ denote a three dimensional $({M},N,3)$-sized array. We also define ${\mathbf{x}}$ as a 3-dimensional array of auxiliary variables indexed by month, individual, and auxiliary variable, and an array ${\mathbf{z}}$, indexed the same way, that contains endogenous variables in the sense that ${\mathbf{z}}$ is a function of ${\mathbf{x}}$ and ${\mathbf{y}}$. Any element of an array with (${m},{k}$)-index satisfying $k\notin U_{m}$ is equal to 0 by convention. 
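As an illustration, the array ${\mathbf{y}}$ is a one-hot encoding of employment status; a toy numpy sketch of this indicator array and its monthly totals by status (the population size and status codes are hypothetical):

```python
import numpy as np

# Toy construction of the (M, N, 3)-sized indicator array y:
# y[m, k, e] = 1 iff individual k is in U_m with employment status e+1.
M, N = 2, 5
rng = np.random.default_rng(1)
# status[m, k] in {0, 1, 2} encodes the three employment categories
status = rng.integers(0, 3, size=(M, N))

y = np.zeros((M, N, 3))
y[np.arange(M)[:, None], np.arange(N)[None, :], status] = 1.0

# Population totals t_y: an (M, 3)-sized array of counts by month and status
t_y = y.sum(axis=1)
assert np.allclose(t_y.sum(axis=1), N)  # each individual counted once per month
```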
Notational conventions on arrays -------------------------------- Given subsets $A$, $B$, $C$ of $\{1,\ldots,M\}$, $\{1,\ldots,N\}$, $\{1,2,3\}$, respectively (including the full set), we use the following notation for sub-arrays: ${\mathbf{y}}_{A,B,C}=[{\mathbf{y}}_{a,b,c}]_{a\in A,b\in B, c\in C}$, and may replace $A$, $B$, or $C$ by “.” when $A=\{1,\ldots,M\}$, $B=\{1,\ldots,N\}$ or $C=\{1,2,3\}$, respectively: for example, ${\mathbf{y}}={\mathbf{y}}_{.,.,.}$. Let ${\mathrm{t}}_{\mathbf{y}}=\left[\sum_{{k}\in U}{\mathbf{y}}_{{m},{k},{e}}\right]_{{m}\in\{1,\ldots,{M}\},{e}\in\{1,2,3\}}$ be the two-dimensional $({M},3)$-sized array of population totals indexed by month ${m}$ and employment status ${e}$. We now show how to form a vector or matrix from an array. For a $p$-dimensional $(a_1,\ldots,a_p)$-sized array $A$, define $\vec{A}$ as the vector $\left(\vec{A}_1,\ldots,\vec{A}_{\prod_{l=1}^pa_l}\right)$, where $\forall (i_1,\ldots,i_p)\in \prod_{l=1}^p\{1,\ldots,a_l\}$, $\vec{A}_{1+\sum_{l=1}^p \left[(i_l-1)\prod_{l'<l}a_{l'}\right]}=A_{i_1,\ldots,i_p}$, with the convention that a product over the empty set equals $1$. By convention, when an array $B$ is defined as an $((a_1,\ldots,a_p),(b_1,\ldots,b_q))$-sized array (with two vectors of indices), $\vec{B}$ is the matrix $\left[\vec{B}_{i,j}\right]_{i\in\{1,\ldots,\prod_{l=1}^pa_l\},j\in\{1,\ldots,\prod_{l=1}^q b_l\}}$ such that $\forall (i_1,\ldots,i_p)\in \prod_{l=1}^p\{1,\ldots,a_l\}$, $\forall (j_1,\ldots,j_q)\in \prod_{l=1}^q\{1,\ldots,b_l\}$, $\vec{B}_{1+\sum_{l=1}^p \left[(i_l-1)\prod_{l'<l}a_{l'}\right],1+\sum_{l=1}^q \left[(j_l-1)\prod_{l'<l}b_{l'}\right]}=B_{(i_1,\ldots,i_p),(j_1,\ldots,j_q)}$. 
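With this convention the first index varies fastest (column-major order), so the index map can be checked against numpy's Fortran-order flattening; a small sketch:

```python
import numpy as np

def vec_index(idx, dims):
    """0-based position of entry idx in the vectorised array:
    the first index varies fastest (column-major / Fortran order)."""
    pos, stride = 0, 1
    for i, a in zip(idx, dims):
        pos += i * stride
        stride *= a
    return pos

dims = (4, 3, 2)
A = np.arange(np.prod(dims)).reshape(dims)
vecA = A.flatten(order="F")  # same ordering as the index map above

for idx in np.ndindex(*dims):
    assert vecA[vec_index(idx, dims)] == A[idx]
```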
[Given $A$ an $((a_1,\ldots, a_n),(b_1,\ldots, b_l))$-sized array and $B$ a $((b_1,\ldots, b_l),(c_1,\ldots, c_p))$-sized array, $C=A\times B$ is the $((a_1,\ldots, a_n),(c_1,\ldots, c_p))$-sized array defined by $C_{(i_1,\ldots,i_n),(k_1,\ldots,k_p)}=\sum_{j_1,\ldots, j_l}A_{(i_1,\ldots,i_n),(j_1,\ldots, j_l)}B_{(j_1,\ldots,j_l),(k_1,\ldots ,k_p)}$.]{} The sample design ----------------- The CPS monthly sample comprises about 72,000 housing units and is collected for 729 areas (Primary Sampling Units) consisting of more than 1,000 counties covering every state and the District of Columbia. The CPS, conducted by the Census Bureau, uses a 4-8-4 rotating panel design. For any given month, the CPS sample can be grouped into eight subsamples corresponding to the eight rotation groups. All the units belonging to a particular rotating panel enter and leave the sample at the same time. A given rotating panel (or group) stays in the sample for four consecutive months, leaves the sample for the eight succeeding months, and then returns for another four consecutive months. It is then dropped from the sample completely and is replaced by a group of nearby households. Of the two new rotation groups that are sampled each month, one is completely new (its first appearance in the panel) and the other is a returning group, which has been out of the sample for eight months. Thus, in the CPS design, six of the eight rotation groups are common between two consecutive months (i.e., 75% overlap), and four are common between the same month of two consecutive years (i.e., 50% overlap); see [@Hansen1955]. For month ${m}$, let ${S}_{m}$ denote the sample of respondents. Let ${S}_{{m},{g}}$ denote the set of sampled respondents in the ${g}$th sample rotation group for month ${m}$ and ${S}_{m}=\bigcup_{{g}=1}^8{S}_{{m},{g}}$. 
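The overlap percentages follow directly from the 4-8-4 pattern; a small sketch (panels labeled by their hypothetical entry month) that verifies them:

```python
# A panel entering in month t is interviewed in months t..t+3 and t+12..t+15
# (4 months in, 8 months out, 4 months in).
def sample(month):
    """Set of entry months of the panels in sample for a given month."""
    return {month - d for d in (0, 1, 2, 3, 12, 13, 14, 15)}

m = 100  # any month far enough from the start of the survey
common_next_month = sample(m) & sample(m + 1)
common_next_year = sample(m) & sample(m + 12)

assert len(sample(m)) == 8          # eight rotation groups each month
assert len(common_next_month) == 6  # 6 of 8 groups in common: 75% overlap
assert len(common_next_year) == 4   # 4 of 8 groups in common: 50% overlap
```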
For a given month ${m}$, the rotation groups $S_{{m},{g}}$, ${g}=1,\ldots,8$, are indexed so that ${g}$ indicates the number of times that rotation group ${S}_{{m},{g}}$ has been a part of the sample in month ${m}$ and before. In the US Census Bureau terminology, ${g}$ is referred to as the month-in-sample (${\mathrm{mis}}$) index and $S_{{m},{g}}$ as the month-in-sample ${g}$ rotation group (more details on the design are given in Section \[sec:4.3\]). We adopt a design-based approach in this study, in which the variables ${\mathbf{x}}$ and ${\mathbf{y}}$ are considered fixed parameters of the underlying fixed population model for design-based inference [@CasselSarndalWretman1977 p. 2]. Estimation ========== Direct and month-in-sample estimators ------------------------------------- Let ${\mathbf{w}}_{{m},{k}}$ denote the second stage weight of individual ${k}$ in month ${m}$ (by convention, ${\mathbf{w}}_{{m},{k}}=0$ if ${k}\notin {S}_{m}$), which is obtained from the basic weight (that is, the reciprocal of the inclusion probability) after standard non-response and post-stratification adjustments (for more details, we refer to [@CPS2006]). The array of direct survey-weighted estimates of ${\mathrm{t}}_{\mathbf{y}}$ is given by $\hat{{\mathrm{t}}}^{{\mathrm{direct}}}_{\mathbf{y}}=\left[\sum_{{k}\in {S}_{{m}}}{\mathbf{w}}_{{m},{k}}{\mathbf{y}}_{{m},{k},{e}}\right]_{{m}\in\{1,\ldots,{M}\},{e}\in\{1,2,3\}}$. Define the $({M},8,3)$-sized array of month-in-sample estimates: $\hat{{\mathrm{t}}}^{{\mathrm{mis}}}_{\mathbf{y}}=\left[8\times\sum_{{k}\in {S}_{{m},{g}}}{\mathbf{w}}_{{m},{k}}{\mathbf{y}}_{{m},{k},{e}}\right]_{{m}\in\{1,\ldots,{M}\},{g}\in\{1,\ldots,8\},{e}\in\{1,2,3\}}.$ For a month-in-sample number ${g}$, $\left(\hat{{\mathrm{t}}}^{{\mathrm{mis}}}_{\mathbf{y}}\right)_{.,{g},.}$ is called the month-in-sample ${g}$ estimator of ${\mathrm{t}}_{\mathbf{y}}$. 
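By construction, the direct estimator is the average of the eight month-in-sample estimators; a toy numpy sketch of both (hypothetical weights and statuses):

```python
import numpy as np

rng = np.random.default_rng(2)
M, G, E = 3, 8, 3          # months, rotation groups, employment statuses
n = 40                     # individuals per rotation group (toy size)

# w[m, g, k]: weights; y[m, g, k, e]: one-hot employment status indicators
w = rng.uniform(0.5, 2.0, size=(M, G, n))
status = rng.integers(0, E, size=(M, G, n))
y = (status[..., None] == np.arange(E)).astype(float)

# Direct survey-weighted estimate: sum over all sampled individuals
t_direct = np.einsum("mgk,mgke->me", w, y)

# Month-in-sample estimates: 8 x (weighted sum within each rotation group)
t_mis = 8 * np.einsum("mgk,mgke->mge", w, y)

# The direct estimator equals the average of the 8 month-in-sample estimators
assert np.allclose(t_direct, t_mis.mean(axis=1))
```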
An extended Bailar model for the rotation group bias ---------------------------------------------------- Because of differential non-response mechanisms and measurement error distributions across rotation groups, the direct and month-in-sample estimators are subject to a bias, commonly referred to as the rotation group bias or rotation bias. [@Bailar1975] proposed a class of semi-parametric models on the expected values of the month-in-sample estimators. Under a model in this class, (i) the bias of each month-in-sample estimator of the total of unemployed depends on the month-in-sample index ${g}$ only, (ii) the bias is invariant with time, and (iii) the vector of month-in-sample biases satisfies a known linear constraint (without this identifying linear constraint, the month-in-sample rotation group biases could only be estimated up to an additive constant). Note that these very strong assumptions were made in order to reveal the existence of what in the US Census Bureau terminology is known as the rotation group bias. It would be highly questionable to use this model for rotation group bias correction, because (i) the choice of the linear constraint would be totally arbitrary in the absence of a re-interview experiment and (ii) the stationarity assumptions are unreasonable. We propose the following model in order to extend the Bailar model to account for the rotation group biases of the multiple categories: $$\mathrm{E}\left[\left(\hat{{\mathrm{t}}}_{\mathbf{y}}^{{\mathrm{mis}}}\right)_{{m},{g},{e}}\right]=\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},{e}}+b_{{g},{e}},\label{model:bailar}$$ where $b$ is a two-dimensional $(8,3)$-sized array of biases such that $\forall {e}, C_{e}b_{.,{e}}=0$, $C_1,C_2,C_3$ being known linear forms satisfying $C_{e}{\left(1,\ldots,1\right)^{\!\mathrm{T}}}\neq 0$. 
Estimation of unemployment rate and variance approximation {#varur} ---------------------------------------------------------- We define the function ${\mathrm{R}}:(0,+\infty)^3\to [0,1], x\mapsto x_2/(x_1+x_2)$. By convention, when applied to an array with employment status as an index, $x_1$, $x_2$ denote the subarrays for employment status 1 and 2, respectively, and $/$ denotes the term by term division. The unemployment rate vector is defined as ${\mathrm{r}}={\mathrm{R}}\left({\mathrm{t}}_{\mathbf{y}}\right)=\left({\mathrm{t}}_{\mathbf{y}}\right)_{.,2}/\left(\left({\mathrm{t}}_{\mathbf{y}}\right)_{.,1}+\left({\mathrm{t}}_{\mathbf{y}}\right)_{.,2}\right)$. Given an estimator $\hat{{\mathrm{t}}}_{\mathbf{y}}^\star$ of ${\mathrm{t}}_{\mathbf{y}}$, we derive the following estimator of ${\mathrm{r}}$ from $\hat{{\mathrm{t}}}_{\mathbf{y}}^\star$: $\hat{{\mathrm{r}}}^\star={\mathrm{R}}(\hat{{\mathrm{t}}}_{\mathbf{y}}^\star)$. Using the linearization technique, we can approximate the variance $\mathrm{Var}\left[\hat{{\mathrm{r}}}^\star_{{m}}\right]$ of the unemployment rate estimator for month $m$ by $J_1\mathrm{Var}\left[\left(\hat{{\mathrm{t}}}^{\star}_{\mathbf{y}}\right)_{{m},.}\right]{\left.J_1\right.^{\!\mathrm{T}}},$ where $J_1$ is the Jacobian matrix: $J_1=\left(\frac{\mathrm{d}~{\mathrm{R}}(t)}{\mathrm{d}~t}\right)\left(({\mathrm{t}}_{\mathbf{y}})_{{m},.}\right)=\begin{bmatrix}-\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},2}\left(\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},1}+\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},2}\right)^{-2},\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},1}\left(\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},1}+\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},2}\right)^{-2},0\end{bmatrix}$, and the variance of the estimator of change of the unemployment rate between two consecutive months by $J_2\mathrm{Var}\left[\left(\left(\hat{{\mathrm{t}}}^{\star}_{\mathbf{y}}\right)_{{m},.},\left(\hat{{\mathrm{t}}}^{\star}_{\mathbf{y}}\right)_{{m}-1,.}\right)\right]{\left.J_2\right.^{\!\mathrm{T}}},$ where $$\begin{aligned} J_2&=&\left(\frac{\mathrm{d}~\left({\mathrm{R}}(t)-{\mathrm{R}}(t')\right)}{\mathrm{d}~(t,t')}\right)\left(\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},.},\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m}-1,.}\right)\\ &=&\begin{bmatrix}-\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},2}\left(\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},1}+\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},2}\right)^{-2},\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},1}\left(\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},1}+\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},2}\right)^{-2},0,\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m}-1,2}\left(\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m}-1,1}+\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m}-1,2}\right)^{-2},-\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m}-1,1}\left(\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m}-1,1}+\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m}-1,2}\right)^{-2},0\end{bmatrix}.\end{aligned}$$ The class of linear combinations of month-in-sample estimators {#sec:3.5} -------------------------------------------------------------- Here, as in [@yansaneh1998optimal], we consider the best estimator of counts by employment status in the class of linear combinations of month-in-sample estimators. Generalizing [@yansaneh1998optimal], the unbiasedness assumption of all month-in-sample estimators is: $$\mathrm{E}\left[{\vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}}}\right]=\vec{{X}} {\vec{\mathrm{t}}_{\mathbf{y}}},\label{M1}$$ where $X$ is the $(({M}, 8, 3), ({M}, 3))$-sized array with rows indexed by the triplet $({m},{g},{e})$ and columns indexed by the couple $({m},{e})$ such that $X_{({m}, {g},{e}),({m}',{e}')}=1$ if ${m}'={m}$ and ${e}={e}'$, $0$ otherwise. Let $L$ be a $(p, ({M}, 3))$-sized array with $p\in\mathbb{N}\setminus \{0\}$ and columns indexed by $({m},{e})$. By the class of linear estimators of $L{\mathrm{t}}_{\mathbf{y}}$, we designate the class of estimators that are linear combinations of the month-in-sample estimates, i.e. of the form $W \vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}}$, where $W$ is a fixed (does not depend on the observations) $(p, ({M}\times 8\times 3))$-sized matrix. ### Best linear estimator {#best-linear-estimator .unnumbered} Let $\Sigma_{\mathbf{y}}=\mathrm{Var}_{\mathbf{y}}\left[{\vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}}}\right]$. 
In the design-based approach, $\Sigma_{\mathbf{y}}$ is a function of the parameter ${\mathbf{y}}$. The variance of a linear transformation $W \vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}}$ of $\hat{{\mathrm{t}}}^{{\mathrm{mis}}}_{\mathbf{y}}$ is: $\mathrm{Var}\left[W \vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}}\right]=W \Sigma_{\mathbf{y}}{\left.W\right.^{\!\mathrm{T}}}.$ When the month-in-sample estimates are unbiased, $\Sigma_{\mathbf{y}}$ is known, only $\vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}}$ is observed, and [$\vec{X}^+\vec{X}=I$]{}, the Gauss-Markov theorem states that the BLUE of ${\mathrm{t}}_{\mathbf{y}}$ uniformly in ${\mathrm{t}}_y$ is the $({M},3)$-sized matrix $\hat{{\mathrm{t}}}^{\text{BLUE}}_{\mathbf{y}}$ defined by [ $$\label{bestW} \vec{X}^+ (\vec{X}\vec{X}^+) \left(I-\Sigma_{\mathbf{y}}((I-\vec{X}\vec{X}^+)^+ \Sigma_{\mathbf{y}}(I-\vec{X}\vec{X}^+))^+\right)\vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}},$$ ]{} where the $^+$ operator designates the Moore–Penrose pseudo-inversion and $I$ is the identity matrix. Here the minimisation is with respect to the Loewner order on symmetric matrices: $M_1\leq M_2 \Leftrightarrow M_2-M_1$ is positive semi-definite. It can be shown that $\vec{X}^+={\left.\vec{X}\right.^{\!\mathrm{T}}}/8$ in our case and that $\vec{X}^+\vec{X}=I$. For more details about the Gauss-Markov result under the singular linear model, one may refer to [[@Searle1994 p. 140, Eq. 3b]]{}. This is a generalization of the result of [@yansaneh1998optimal], as it takes into account the multiple dimensions of ${\mathbf{y}}$ and the possible non-invertibility of $\Sigma_{\mathbf{y}}$. Note that $\Sigma_{\mathbf{y}}$ can be non-invertible, especially when the sample is calibrated on a given fixed population size, considered non-random, because of an affine relationship between month-in-sample estimates (e.g., $\sum_{{g}=1}^8\sum_{{e}=1}^3 \left(\hat{{\mathrm{t}}}^{{\mathrm{mis}}}_{\mathbf{y}}\right)_{{m},{g},{e}} $ is not random). 
It is important to recall that (i) for any linear transformation $L$ applicable to $\vec{{\mathrm{t}}}_{\mathbf{y}}$, the best linear unbiased estimator of $L {\vec{{\mathrm{t}}}_{\mathbf{y}}}$ uniformly in ${\mathrm{t}}_{\mathbf{y}}$ is $L \vec{\hat{{\mathrm{t}}}}_{\mathbf{y}}^{\text{BLUE}}$, which ensures that the BLUE of month-to-month change can simply be obtained from the BLUE of level, so there is no need to search for a compromise between estimation of level and change; (ii) for any linear transformation $L$ applicable to $\vec{{\mathrm{t}}}_{\mathbf{y}}$, and any linear transformation $J$ applicable to $L\vec{{\mathrm{t}}}_{\mathbf{y}}$, $L\vec{\hat{{\mathrm{t}}}}_{\mathbf{y}}^{\text{BLUE}}\in \operatorname*{argmin}\left\{\left.J W \Sigma_{\mathbf{y}}{\left(J W \right)^{\!\mathrm{T}}}\right|W, W\vec{X}=L\right\}$, which ensures that plug-in estimators for the unemployment rate and month-to-month unemployment rate change derived from the BLUE are also optimal in the sense that they minimize the linearized approximation of the variance of such plug-in estimators, which can be written in the form $J W \Sigma_{\mathbf{y}}{\left(J W \right)^{\!\mathrm{T}}}$. ### Remark: BLUE under Bailar rotation bias model {#remark-blue-under-bailar-rotation-bias-model .unnumbered} Here we give the expression of the BLUE under the general Bailar rotation bias model. Bailar’s rotation bias model can be written in matrix notation: $$\mathrm{E}\left[\vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}}\right]=\vec{X}\vec{\mathrm{t}}_{\mathbf{y}}+ \vec{X}' \vec{{b}},\label{bailarmatrix}$$ where ${X}'$ is a fixed known array (see also [@yansaneh1998optimal equation 8]). 
For example, under the bias model \[model:bailar\] with $C_1=C_2=C_3=(1,\ldots,1)$, ${X}'$ is the $(({M}, 8, 3), (7, 3))$-sized array where for ${m}\in\{1,\ldots,{M}\}$, ${g}\in \{1,\ldots,8\}$, ${g}' \in \{1,\ldots,7\},{e}\in\{1,2,3\} $, ${e}'\in\{1,2,3\}$, ${X}'_{({m},{g},{e}),({g}',{e}')}= 1$ if ${g}={g}'<8$ and ${e}={e}'$, $-1$ if ${g}=8$ and ${e}={e}',$ $0$ otherwise. We can reparametrize the model in the form $\mathrm{E}[\hat{{\mathrm{t}}}^{{\mathrm{mis}}}_{\mathbf{y}}]=X^\star \mu$, where $X^\star=[\vec{X}\mid \vec{X}']$, and the parameter $\mu={\left.[\vec{\mathrm{t}}_{\mathbf{y}}\mid \vec{{b}}]\right.^{\!\mathrm{T}}}$. The best linear unbiased estimator of $\vec{{\mathrm{t}}}_{\mathbf{y}}$ under this rotation bias model is [ $$L X^{\star+}(X^\star X^{\star+})\left(I-\Sigma_{\mathbf{y}}((I-X^\star X^{\star+})^+\Sigma_{\mathbf{y}}(I-X^\star X^{\star+}))^+\right)\vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}},$$]{} with $L$ satisfying $LX^\star=\vec{X}$. This is a generalization of [@yansaneh1998optimal], as it allows a non-invertible $\Sigma_{\mathbf{y}}$, is not limited to a unidimensional variable, and covers the general Bailar model. AK composite estimation ----------------------- ### Definition {#definition .unnumbered} We define a general class of AK composite estimators. 
Let $A=\mathrm{diag}(a_1,a_2,a_3), K=\mathrm{diag}({k}_1,{k}_2,{k}_3)$ be real diagonal $(3, 3)$ matrices. The AK estimator with coefficients $A$ and $K$ is defined as follows: first define $\left(\hat{{\mathrm{t}}}^{{\mathrm{AK}}}_{\mathbf{y}}\right)_{1,.}=\left(\hat{{\mathrm{t}}}^{{\mathrm{direct}}}_{\mathbf{y}}\right)_{1,.},$ then recursively define for ${m}\in 2,\ldots,{M}$, $$\begin{gathered} \left(\hat{{\mathrm{t}}}^{{\mathrm{AK}}}_{\mathbf{y}}\right)_{{m},.}=K\left(\hat{{\mathrm{t}}}^{{\mathrm{direct}}}_{\mathbf{y}}\right)_{{m},.}\\ + (I-K)\times\left(\left( \hat{{\mathrm{t}}}^{{\mathrm{AK}}}_{\mathbf{y}}\right)_{{m}-1,.}+\sum_{{k}\in {S}_{m}\cap {S}_{{m}-1}}\left({\mathbf{w}}_{{m},{k},.}{\mathbf{y}}_{{m},{k},.}-{\mathbf{w}}_{{m}-1,{k},.}{\mathbf{y}}_{{m}-1,{k},.}\right)\right)\\ +A\times\left(\sum_{{k}\in {S}_{m}\setminus {S}_{{m}-1}}{\mathbf{w}}_{{m},{k},.}{\mathbf{y}}_{{m},{k},.}-\frac13\sum_{{k}\in {S}_{m}\cap {S}_{{m}-1}}{\mathbf{w}}_{{m},{k},.}{\mathbf{y}}_{{m},{k},.} \right),\end{gathered}$$ where $\setminus$ denotes the set difference operator and $I$ is the identity matrix of dimension 3. The sum of the first two terms of the AK estimator is indeed a weighted average of the current month direct estimator and the previous month AK estimator suitably updated for the change. The last term of the AK estimator is correlated with the previous terms, and has expectation 0 with respect to the sample design. [@gurney1965multivariate] explained the benefits of adding the third term in reducing the mean squared error. The Census Bureau uses specific values of $A$ and $K$, which were empirically determined in order to arrive at a compromise solution that worked reasonably well for both employment level and rate estimation (see [@Lent1999]). The corresponding unemployment rate estimator is obtained as: $\hat{{\mathrm{r}}}_{m}^{{\mathrm{AK}}}={\mathrm{R}}\left(\left(\hat{{\mathrm{t}}}^{{\mathrm{AK}}}_{\mathbf{y}}\right)_{{m},.}\right)$. 
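A toy sketch of this recursion (hypothetical samples, weights and statuses); with $K=I$ and $A=0$ the recursion collapses to the direct estimator, which gives a simple check:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, E = 4, 30, 3

# Toy samples: S[m] maps sampled individual k -> (weight, one-hot status)
S = []
for m in range(M):
    kept = rng.choice(N, size=20, replace=False)
    S.append({int(k): (rng.uniform(0.5, 2.0),
                       np.eye(E)[rng.integers(0, E)]) for k in kept})

def wy(m, k):
    """Weighted status indicator w_{m,k} y_{m,k,.} of individual k."""
    w, y = S[m][k]
    return w * y

def direct(m):
    return sum((wy(m, k) for k in S[m]), np.zeros(E))

def ak_estimate(A, K):
    """AK recursion with (3, 3) diagonal coefficient matrices A and K."""
    t = [direct(0)]
    for m in range(1, M):
        common = S[m].keys() & S[m - 1].keys()
        new = S[m].keys() - S[m - 1].keys()
        change = sum((wy(m, k) - wy(m - 1, k) for k in common), np.zeros(E))
        extra = (sum((wy(m, k) for k in new), np.zeros(E))
                 - sum((wy(m, k) for k in common), np.zeros(E)) / 3)
        t.append(K @ direct(m) + (np.eye(E) - K) @ (t[-1] + change) + A @ extra)
    return t

# With K = I and A = 0 the recursion collapses to the direct estimator
t_ak = ak_estimate(A=np.zeros((E, E)), K=np.eye(E))
assert all(np.allclose(t_ak[m], direct(m)) for m in range(M))
```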
Note that $\hat{{\mathrm{r}}}_{m}^{{\mathrm{AK}}}$ depends only on $a_1$, $a_2$, $k_1$, $k_2$ and not on $a_3$ and $k_3$. Note that the class of AK estimators is a subclass of the class of linear estimators, as the AK estimator can be written as a linear combination of the month-in-sample estimators: $\left(\hat{{\mathrm{t}}}^{{\mathrm{AK}}}_{\mathbf{y}}\right)_{{m},.}=\sum_{{m}'=1}^{m}\sum_{{g}=1}^8 c_{{m},{m}',{g}}\left(\hat{{\mathrm{t}}}^{{\mathrm{mis}}}_{\mathbf{y}}\right)_{{m}',{g},.},$ where the $(3,3)$ matrices $c_{{m},{m}',{g}}$ are defined recursively: $\forall {g}\in \{1,\ldots,8\}, c_{1,1,{g}}=(1/8)\times\mathrm{I}_3,$ where $\mathrm{I}_3$ is the $(3,3)$ identity matrix and $$\label{cond2} \forall {m}\in \{2,\ldots,{M}\}, \left\{\begin{array}{lll} \forall {g}\in \{1,5\}& c_{{m},{m},{g}} &=( I -K)/8+A\\ \forall {g}\in \{2,3,4,6,7,8\}& c_{{m},{m},{g}} &=( I -K)/8+K/6-A/3\\ \forall {g}\in \{1,2,3,5,6,7\}& c_{{m},{m}-1,{g}} &=c_{{m}-1,{m}-1,{g}}\times K-K/6\\ \forall {g}\in \{4,8\}& c_{{m},{m}-1,{g}} &=c_{{m}-1,{m}-1,{g}}\times K\\ \forall 1\leq{m}'<{m}-1& c_{{m},{m}',{g}} &=c_{{m}-1,{m}',{g}}\times K\\ \end{array}\right.$$ $\forall {m}'>{m},{g}\in \{1,\ldots,8\}, c_{{m},{m}',{g}}=0.$ Let $W^{{\mathrm{AK}}}$ be the $(({M}, 3),({M}, 8, 3))$ array such that for ${m},{m}'\in \{1,\ldots,M\}$, ${g}\in\{1,\ldots,8\}$, ${e},{e}'\in\{1,2,3\}$, $W^{{\mathrm{AK}}}_{({m},{e}),({m}',{g},{e}')}=\left(c_{{m},{m}',{g}}\right)_{{e},{e}}$ if ${e}={e}'$, $0$ otherwise. Then $\vec{\hat{{\mathrm{t}}}}^{{\mathrm{AK}}}_{\mathbf{y}}=\vec{W}^{{\mathrm{AK}}} \vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}}$. ### Notes on AK estimator {#notes-on-ak-estimator .unnumbered} In the presence of rotation bias, the bias of the AK estimator is in general nonzero and equal to $\vec{W}^{{\mathrm{AK}}}\vec{X}'\vec{b}$. Depending on the rotation bias model, there may not exist an unbiased version of the AK estimator. 
Furthermore, contrary to the BLUE, the best $A$, $K$ coefficients for estimation of one particular month and status may not be optimal for another month and status, and the best $A$, $K$ coefficients for estimation of level may not be optimal for estimation of change. For example, one may find $A,K,{m},{e}, A',K',{m}',{e}'$ such that $\mathrm{Var}\left[\left(\hat{{\mathrm{t}}}^{A,K}_{\mathbf{y}}\right)_{{m},{e}}\right]<\mathrm{Var}\left[\left(\hat{{\mathrm{t}}}^{A',K'}_{\mathbf{y}}\right)_{{m},{e}}\right]$ and $\mathrm{Var}\left[\left(\hat{{\mathrm{t}}}^{A,K}_{\mathbf{y}}\right)_{{m}',{e}'}\right]>\mathrm{Var}\left[\left(\hat{{\mathrm{t}}}^{A',K'}_{\mathbf{y}}\right)_{{m}',{e}'}\right]$. When $\Sigma_{\mathbf{y}}$ is known, let $\hat{{\mathrm{t}}}_{{\mathbf{y}}}^{BAK,level}$, $\hat{{\mathrm{t}}}_{{\mathbf{y}}}^{BAK,change}$, $\hat{{\mathrm{t}}}_{{\mathbf{y}}}^{BAK,compromise}$ be the AK estimators obtained for the $A$, $K$ that minimize the average approximated variance of level estimates $\sum_{{m}=1}^{M}J_1\mathrm{Var}_{\mathbf{y}}\left[\left(\hat{{\mathrm{t}}}^{A,K}_{\mathbf{y}}\right)_{{m},.}\right]{\left.J_1\right.^{\!\mathrm{T}}}$, of change estimates $\sum_{{m}=1}^{M}J_2\mathrm{Var}_{\mathbf{y}}\left[\left(\hat{{\mathrm{t}}}^{A,K}_{\mathbf{y}}\right)_{\{{m}-1,{m}\},.}\right]{\left.J_2\right.^{\!\mathrm{T}}}$ and the compromise averaged variance $\sum_{{m}=1}^{M}\left(J_1\mathrm{Var}_{\mathbf{y}}\left[\left(\hat{{\mathrm{t}}}^{A,K}_{\mathbf{y}}\right)_{{m},.}\right]{\left.J_1\right.^{\!\mathrm{T}}}+J_2\mathrm{Var}_{\mathbf{y}}\left[\left(\hat{{\mathrm{t}}}^{A,K}_{\mathbf{y}}\right)_{\{{m}-1,{m}\},.}\right]{\left.J_2\right.^{\!\mathrm{T}}}\right)$, respectively. Note that the three objective functions are polynomial functions of $A$ and $K$ whose coefficients are functions of $\Sigma_{\mathbf{y}}$, so by using a standard numerical method (Nelder–Mead) we can obtain the optimal coefficients. 
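Since the objectives are polynomial in the coefficients, any standard optimizer, or a plain grid search, applies; a toy sketch of grid selection with a hypothetical quadratic objective standing in for the variance criterion:

```python
import itertools

# Hypothetical stand-in for the compromise variance criterion: a polynomial
# in scalar coefficients (a, k), convex with its minimum near (0.3, 0.5).
def objective(a, k):
    return (a - 0.33) ** 2 + (k - 0.52) ** 2 + 0.1 * a * k

# Grid-style selection: evaluate the objective on candidate coefficients
# 0.1, ..., 0.9 and keep the best pair.
grid = [g / 10 for g in range(1, 10)]
best = min(itertools.product(grid, grid), key=lambda ak: objective(*ak))
assert best == (0.3, 0.5)
```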
Empirical best linear estimator and empirical best AK estimator ---------------------------------------------------------------- Let $\hat{\Sigma}$ be an estimator of $\Sigma_{\mathbf{y}}$, and let $\hat{{\mathrm{t}}}^{EBLUE}_{\mathbf{y}}$ be the estimator of ${\mathrm{t}}_{\mathbf{y}}$ obtained from the BLUE formula when $\Sigma_{\mathbf{y}}$ is replaced by $\hat{\Sigma}$. In the same manner, we can define the empirical best AK estimators for change, level and compromise. For the CPS, optimal $A$ and $K$ coefficients were determined by minimizing a compromise objective function accounting for the variances of the month-to-month change and level estimates. The variances were estimated under an assumption of stationary covariances of the month-in-sample estimates (see [@lent1996effect]), and the method used by the Census Bureau consists of choosing the best coefficients $a_1$, $a_2$, $k_1$, $k_2$ on a grid with values $(0.1,\ldots,0.9)$ for each coefficient. Regression Composite Estimation {#RC} ------------------------------- In this section, we elaborate on the general definition of the class of regression composite estimators proposed by [@fuller2001regression], parametrized by a real number $\alpha\in[0,1]$. This class includes the regression composite estimators MR1 (for $\alpha=0$) and MR2 (for $\alpha=1$) as defined by [@singh1995composite] and [@singh2001regression]. 
For $\alpha\in[0,1]$, the regression composite estimator of ${\mathrm{t}}_{{\mathbf{y}}}$ is a calibration estimator $\left(\hat{{\mathrm{t}}}^{{\mathrm{r.c.}},\alpha}_{\mathbf{y}}\right)_{{m},.}$ defined as follows: provide calibration totals $\left({\mathrm{t}}^{adj}_{{\mathbf{x}}}\right)_{{m},.}$ for the auxiliary variables (they can be taken equal to the true totals when these are known, and estimated otherwise), then define $ \left(\hat{{\mathrm{t}}}^{{\mathrm{r.c.}},\alpha}_{\mathbf{z}}\right)_{1,.}=\left(\hat{{\mathrm{t}}}^{{\mathrm{direct}}}_{\mathbf{z}}\right)_{1,.},$ and ${\mathbf{w}}_{1,{k}}^{{\mathrm{r.c.}},\alpha}={\mathbf{w}}_{1,{k}}$ if $k\in {S}_1$, $0$ otherwise. For ${m}\in \{2,\ldots, {M}\}$, recursively define $$\begin{aligned} {\mathbf{z}^{{\text{r.c.}}(\alpha)}}_{{m},{k},.}&= \begin{cases} \alpha\left(\tau_{m}^{-1} \left({\mathbf{z}}_{{m}-1,{k},.}-{\mathbf{z}}_{{m},{k},.}\right) +{\mathbf{z}}_{{m},{k},.}\right) +(1-\alpha)~{\mathbf{z}}_{{m}-1,{k},.} & \text{if }k\in {S}_{{m}}\cap {S}_{{m}-1}, \\ \alpha~ {\mathbf{z}}_{{m},{k},.} +(1-\alpha)~\left(\sum_{k'\in {S}_{{m}-1}}{\mathbf{w}}_{{m}-1,{k}'}^{{\mathrm{r.c.}},\alpha}\right)^{-1} \left(\hat{{\mathrm{t}}}_{\mathbf{z}}^{{\mathrm{r.c.}},\alpha}\right)_{{m}-1,.} & \text{if }k\in {S}_{{m}}\setminus {S}_{{m}-1}, \end{cases}\label{RCstep1}\end{aligned}$$ where $\tau_{m}=\left(\sum_{k\in {S}_{m}\cap {S}_{{m}-1}}{\mathbf{w}}_{{m},{k}}\right)^{-1}\sum_{k\in {S}_{m}}{\mathbf{w}}_{{m},{k}}$. The regression composite estimator of $\left({\mathrm{t}}_{{\mathbf{y}}}\right)_{{m},.}$ is then given by $\left(\hat{{\mathrm{t}}}^{{\mathrm{r.c.}},\alpha}_{\mathbf{y}}\right)_{{m},.}= \sum_{k\in {S}_{m}}{\mathbf{w}}^{{\mathrm{r.c.}},\alpha}_{{m},{k}}{\mathbf{y}}_{{m},{k}},$ where $$\left({\mathbf{w}}^{{\mathrm{r.c.}},\alpha}_{{m},.}\right)\!=\!\operatorname*{argmin}\left\{\sum_{k\in U}\frac{ \left({\mathbf{w}}^\star_{k}-{\mathbf{w}}_{{m},{k}}\right)^2}{(k\notin S_m)+{\mathbf{w}}_{{m},{k}}}\left| {\mathbf{w}}^\star\in\mathbb{R}^{U},\!\!\!
\begin{array}{l}\sum_{k\in {S}_{m}} {\mathbf{w}}^\star_{k}{\mathbf{z}^{{\text{r.c.}}(\alpha)}}_{{m},{k},.}\!=\!\left(\hat{{\mathrm{t}}}^{{\mathrm{r.c.}},\alpha}_{\mathbf{z}}\right)_{{m}-1,.}\\ \sum_{k\in {S}_{m}} {\mathbf{w}}^\star_{k}{\mathbf{x}}_{{m},{k},.}=\left({\mathrm{t}}^{adj}_{{\mathbf{x}}}\right)_{{m},.} \end{array} \right.\!\!\!\! \right\}\!\!,\label{RCstep2}$$ and $\left(\hat{{\mathrm{t}}}^{{\mathrm{r.c.}},\alpha}_{\mathbf{z}}\right)_{{m},.}= \sum_{k\in {S}_{m}}{\mathbf{w}}^{{\mathrm{r.c.}},\alpha}_{{m},{k}}{\mathbf{z}}_{{m},{k},.},$ where $(k\notin S_m)=1$ if $k\notin S_m$ and $0$ otherwise. Our definition of the regression composite estimator is more general than that of [@fuller2001regression], as it accommodates a multivariate ${\mathbf{y}}$. Modified Regression 3 (MR3), of [@gambino2001regression], does not belong to the class of regression composite estimators. The MR3 estimator imposes too many constraints in the calibration procedure, which leads to a high variability of the calibration weights; consequently, the MR3 estimator has a larger MSE than the regression composite estimators.

### Choice of $z$ and choice of $\alpha$ {#choice-of-z-and-choice-of-alpha .unnumbered}

[@fuller2001regression] studied the properties of the estimator $\left(\hat{{\mathrm{t}}}^{{\mathrm{r.c.}},\alpha}_{\mathbf{y}}\right)_{{m},1}$ for the choice ${\mathbf{z}}={\mathbf{y}}_{.,1}$. As the employment rate is a function of both ${\mathbf{y}}_{{m},1}$ and ${\mathbf{y}}_{{m},2}$, we studied the properties of the regression composite estimator for the choice ${\mathbf{z}}={\mathbf{y}}$. [@fuller2001regression] proposed a method that allows an approximation of the optimal $\alpha$ coefficient for month-to-month change and level estimation, under a specific individual-level superpopulation model for continuous variables.
They proposed this superpopulation model to explain the drift problem of MR2 (the regression composite estimator for $\alpha=1$) and to obtain the best coefficient $\alpha$. Since we deal with a discrete multidimensional variable, the continuous superpopulation model assumed by [@fuller2001regression] is not appropriate in our situation. It would be interesting to propose an approach to estimate the best $\alpha$ in our situation. For our preliminary study, we examined a range of fixed $\alpha$ values in our simulations and in the CPS data analysis.

Simulation Experiment
=====================

Description of Simulation Study
-------------------------------

We conducted a simulation study to enhance our understanding of the finite sample properties of different composite estimators. We generated three finite populations, each of size 100,000. In order to make the simulation experiment meaningful, we generated employment statuses for each finite population in a manner that attempts to capture the actual U.S. national employment rate dynamics during the study period 2005-2012. Moreover, in order to understand the maximum gain from composite estimation, we induced high correlation in the employment statuses between two consecutive months, subject to a constraint on the global employment rate evolution: we set the probability of a month-to-month change in employment status for an individual to zero in case of no change in the corresponding direct national employment rates. Samples were selected according to a rotating design with systematic selection that mimics the CPS design. Since the number of possible samples is only 1000, we are able to compute the exact design-based bias, variance and mean squared error of the different estimators and, subsequently, the optimal linear and optimal AK estimators. We computed the employment rate, total employed, and total unemployed series over the 85-month period using the direct, AK and Fuller-Rao regression composite methods.
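For reference, the one-month update of the regression composite control variable (equation \[RCstep1\]) can be sketched as follows. The scalar $z$, the dict-based bookkeeping and the toy values are ours, and `fill_value` stands for the ratio used to fill in units entering the sample.

```python
def composite_z(alpha, z_prev, z_cur, tau, fill_value):
    """One step of the recursion: units observed in both months blend a
    tau-adjusted difference with last month's value; entrants get a
    fill-in value. z_prev and z_cur are dicts keyed by unit id."""
    z_rc = {}
    for k, zk in z_cur.items():
        if k in z_prev:                      # k in S_m and S_{m-1}
            z_rc[k] = (alpha * ((z_prev[k] - zk) / tau + zk)
                       + (1 - alpha) * z_prev[k])
        else:                                # k entered the sample at month m
            z_rc[k] = alpha * zk + (1 - alpha) * fill_value
    return z_rc

# Invented toy data: unit 1 stays in the sample, unit 3 is an entrant.
z_prev, z_cur = {1: 1.0, 2: 0.0}, {1: 0.0, 3: 1.0}
mr1 = composite_z(0.0, z_prev, z_cur, tau=2.0, fill_value=0.5)  # alpha = 0 (MR1)
mr2 = composite_z(1.0, z_prev, z_cur, tau=2.0, fill_value=0.5)  # alpha = 1 (MR2)
print(mr1, mr2)
```

For $\alpha=0$ a continuing unit simply keeps its previous value, while for $\alpha=1$ it receives the $\tau$-adjusted update, which is the source of the drift behaviour discussed above.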
We then compared the optimal estimator in the class of regression composite estimators to those in the classes of AK and best linear estimators. [Note that the simulation study can be reproduced using the R package we created for this purpose (see [@github:pubBonneryChengLahiri2016]).]{}

Populations generation
----------------------

We created three populations of $N=100,000$ individuals each, indexed by $1,\ldots,N$. For each individual $k$ of each population, we created a time series $({\mathbf{y}}_{{m},{k},.})_{{m}\in\{1,\ldots,{M}\}}$, where ${\mathbf{y}}_{{m},{k},.} \in \{(1,0,0),(0,1,0),(0,0,1)\}$ (for unemployed, not in the labor force, and employed, respectively), and with $M=85$. Each individual belongs to one household, and each household consists of $5$ individuals. The total number of households is ${H}=20,000$, and the set of all households is $\left\{{h}_i=\left\{(5\times(i-1)+1),\ldots, (5\times i) \right\}\mid i=1,\ldots,{H}\right\}$. The time series are created under certain constraints at the population level. For each population, the unemployment rates are the same as the direct estimates obtained from the CPS data. In population 1, the number of people who change status between two consecutive months is minimal. In populations 2 and 3, the proportions of persons who change from one status to another between two consecutive months are equal to those proportions as estimated from the CPS data. In population 2, people with a small index have a higher probability of changing status, whereas in population 3 the probability of changing status between two months is the same for all individuals with the same status.

Repeated design {#sec:4.3}
---------------

We mimic the CPS design, which is described in Appendix \[ap:cps\]. For month ${m}$, a sample ${S}_{{m}}$ is the union of 8 rotation groups. [The design and the creation of]{} rotation groups are explained below. Rotation groups are made of ${n}=20$ households, i.e. 100 individuals.
So for each month ${m}$, there are $\#({S}_{{m}})=800$ individuals in the sample, and the inclusion probability of any unit is $1/125$. [The selection of the longitudinal samples $S_1,\ldots,S_{{M}}$ is made in 3 steps]{}:

1. Draw an integer ${\eta}$ between 1 and 1,000 from a uniform distribution.

2. For $\ell\in\{1,\ldots,{M}+15\}$, create the cluster of households ${\mathrm{Clu}}_{\ell}=\bigcup_{j =1}^{{n}}{h}_{i_{\ell,j}}$, where $i_{\ell,j}=\mathrm{rem}\left(({\eta}-1+\ell-1)+\frac{{H}}{{n}}\times(j-1),{H}\right)+1$, and $\mathrm{rem}(a,b)$ denotes the remainder of the Euclidean division of $a$ by $b$.

3. Let $\delta_1=0$, $\delta_2=1$, $\delta_3=2$, $\delta_4=3$, $\delta_5=12$, $\delta_6=13$, $\delta_7=14$, $\delta_8=15$. For ${m}\in \left\{1,\ldots,{M}\right\}$ and ${g}\in \{1,\ldots,8\}$, create the samples ${S}_{{m},{g}}={\mathrm{Clu}}_{{m}+\delta_{g}}$ and ${S}_{{m}}=\bigcup_{{g}=1}^8S_{{m},{g}}$.

As only 1000 different samples are possible, in our simulation we are able to draw them all and to compute exact design-based moments.
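The three steps above can be checked numerically. The sketch below (function and variable names are ours) reproduces the cluster construction and the overlap pattern implied by the $\delta_{g}$'s: consecutive months share 6 of the 8 rotation groups, and months one year apart share 4.

```python
H, n = 20_000, 20                       # households; households per rotation group
DELTAS = [0, 1, 2, 3, 12, 13, 14, 15]   # delta_1, ..., delta_8

def cluster(eta, ell):
    """Household ids (1-based) of Clu_ell, following the rem formula."""
    return {((eta - 1) + (ell - 1) + (H // n) * (j - 1)) % H + 1
            for j in range(1, n + 1)}

def month_sample(eta, m):
    """Households of S_m = union of the 8 rotation groups Clu_{m+delta_g}."""
    return set().union(*(cluster(eta, m + d) for d in DELTAS))

s1, s2, s13 = (month_sample(506, m) for m in (1, 2, 13))
print(len(s1))          # 160 households, i.e. 800 individuals
print(len(s1 & s2))     # 120: consecutive months share 6 of 8 groups
print(len(s1 & s13))    # 80: months a year apart share 4 of 8 groups
```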
Table \[tab:cpsrotchar\] displays the rotation chart for our simulation, which is identical to the CPS rotation chart [@CPS2006 Figure 3-1] (only the first 20 months and the first 20 clusters are shown).

|        | ${\mathrm{Clu}}_{1}$ | ${\mathrm{Clu}}_{2}$ | ${\mathrm{Clu}}_{3}$ | ${\mathrm{Clu}}_{4}$ | ${\mathrm{Clu}}_{5}$ | ${\mathrm{Clu}}_{6}$ | ${\mathrm{Clu}}_{7}$ | ${\mathrm{Clu}}_{8}$ | ${\mathrm{Clu}}_{9}$ | ${\mathrm{Clu}}_{10}$ | ${\mathrm{Clu}}_{11}$ | ${\mathrm{Clu}}_{12}$ | ${\mathrm{Clu}}_{13}$ | ${\mathrm{Clu}}_{14}$ | ${\mathrm{Clu}}_{15}$ | ${\mathrm{Clu}}_{16}$ | ${\mathrm{Clu}}_{17}$ | ${\mathrm{Clu}}_{18}$ | ${\mathrm{Clu}}_{19}$ | ${\mathrm{Clu}}_{20}$ |
|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Jan 05 | $S_{1,1}$ | $S_{1,2}$ | $S_{1,3}$ | $S_{1,4}$ | | | | | | | | | $S_{1,5}$ | $S_{1,6}$ | $S_{1,7}$ | $S_{1,8}$ | | | | |
| Feb 05 | | $S_{2,1}$ | $S_{2,2}$ | $S_{2,3}$ | $S_{2,4}$ | | | | | | | | | $S_{2,5}$ | $S_{2,6}$ | $S_{2,7}$ | $S_{2,8}$ | | | |
| Mar 05 | | | $S_{3,1}$ | $S_{3,2}$ | $S_{3,3}$ | $S_{3,4}$ | | | | | | | | | $S_{3,5}$ | $S_{3,6}$ | $S_{3,7}$ | $S_{3,8}$ | | |
| Apr 05 | | | | $S_{4,1}$ | $S_{4,2}$ | $S_{4,3}$ | $S_{4,4}$ | | | | | | | | | $S_{4,5}$ | $S_{4,6}$ | $S_{4,7}$ | $S_{4,8}$ | |
| May 05 | | | | | $S_{5,1}$ | $S_{5,2}$ | $S_{5,3}$ | $S_{5,4}$ | | | | | | | | | $S_{5,5}$ | $S_{5,6}$ | $S_{5,7}$ | $S_{5,8}$ |
| Jun 05 | | | | | | $S_{6,1}$ | $S_{6,2}$ | $S_{6,3}$ | $S_{6,4}$ | | | | | | | | | $S_{6,5}$ | $S_{6,6}$ | $S_{6,7}$ |
| Jul 05 | | | | | | | $S_{7,1}$ | $S_{7,2}$ | $S_{7,3}$ | $S_{7,4}$ | | | | | | | | | $S_{7,5}$ | $S_{7,6}$ |
| Aug 05 | | | | | | | | $S_{8,1}$ | $S_{8,2}$ | $S_{8,3}$ | $S_{8,4}$ | | | | | | | | | $S_{8,5}$ |
| Sep 05 | | | | | | | | | $S_{9,1}$ | $S_{9,2}$ | $S_{9,3}$ | $S_{9,4}$ | | | | | | | | |
| Oct 05 | | | | | | | | | | $S_{10,1}$ | $S_{10,2}$ | $S_{10,3}$ | $S_{10,4}$ | | | | | | | |
| Nov 05 | | | | | | | | | | | $S_{11,1}$ | $S_{11,2}$ | $S_{11,3}$ | $S_{11,4}$ | | | | | | |
| Dec 05 | | | | | | | | | | | | $S_{12,1}$ | $S_{12,2}$ | $S_{12,3}$ | $S_{12,4}$ | | | | | |
| Jan 06 | | | | | | | | | | | | | $S_{13,1}$ | $S_{13,2}$ | $S_{13,3}$ | $S_{13,4}$ | | | | |
| Feb 06 | | | | | | | | | | | | | | $S_{14,1}$ | $S_{14,2}$ | $S_{14,3}$ | $S_{14,4}$ | | | |
| Mar 06 | | | | | | | | | | | | | | | $S_{15,1}$ | $S_{15,2}$ | $S_{15,3}$ | $S_{15,4}$ | | |
| Apr 06 | | | | | | | | | | | | | | | | $S_{16,1}$ | $S_{16,2}$ | $S_{16,3}$ | $S_{16,4}$ | |
| May 06 | | | | | | | | | | | | | | | | | $S_{17,1}$ | $S_{17,2}$ | $S_{17,3}$ | $S_{17,4}$ |
| Jun 06 | | | | | | | | | | | | | | | | | | $S_{18,1}$ | $S_{18,2}$ | $S_{18,3}$ |
| Jul 06 | | | | | | | | | | | | | | | | | | | $S_{19,1}$ | $S_{19,2}$ |
| Aug 06 | | | | | | | | | | | | | | | | | | | | $S_{20,1}$ |

: Rotation chart for the simulated design[]{data-label="tab:cpsrotchar"}

[For example, for ${\eta}=506$, ${m}=12$, ${g}=3$, we have $S_{{m},{g}}={\mathrm{Clu}}_{12+\delta_3}={\mathrm{Clu}}_{14}$, and ${\mathrm{Clu}}_{14}=\{h_{\mathrm{rem}((506-1+14-1)+\frac{20000}{20}\times(j-1),20000)+1}\mid j=1,\ldots,20\}= \{h_{519},h_{1519},h_{2519},h_{3519},\ldots,h_{19519}\}$.]{}

Rotation bias
-------------

In each sample, we introduced a measurement error by changing the employment status of $20\%$ of the employed individuals in month-in-sample group 1 from employed to unemployed, which leads to an overestimation of the unemployment rate.

Variance on month-in-sample estimators computation
--------------------------------------------------

As we draw all the possible samples, we are able to compute the exact variance of any estimator. Moreover, we are able to compute the true $\Sigma_{\mathbf{y}}$, which yields both the optimal best linear and the optimal AK estimators.
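Because every sample can be enumerated, exact design-based moments are literal averages over the sample space. A toy illustration, with an invented 10-unit population and a systematic design admitting 5 equally likely samples:

```python
# Invented 10-unit population; systematic sampling of n = 2 units gives
# N/n = 5 equally likely samples, so design moments are exact averages.
y = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
N, n = len(y), 2
samples = [(start, start + N // n) for start in range(N // n)]

estimates = [N / n * (y[i] + y[j]) for i, j in samples]   # expansion estimator
t_true = sum(y)
e_t = sum(estimates) / len(estimates)                     # exact expectation
var_t = sum((t - e_t) ** 2 for t in estimates) / len(estimates)  # exact variance
print(e_t == t_true, var_t)
```

Since the design expectation equals the true total, the estimator is exactly design-unbiased, and `var_t` is its exact (not simulated) design variance.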
Estimation of $\Sigma_{\mathbf{y}}$
-----------------------------------

Define the $3\times 3$ matrix $$\sigma^2_{{m},{m}'}= \frac{\sum_{i=1}^ {H}\left(\sum_{{k}\in h_i}{\mathbf{y}}_{{m},{k},.}-\frac{\sum_{i'=1}^ {H}\sum_{{k}'\in h_{i'}}{\mathbf{y}}_{{m},{k}',.}}{{H}}\right) {\left(\sum_{{k}\in h_i}{\mathbf{y}}_{{m}',{k},.}\right)^{\!\mathrm{T}}}}{{H}-1}.$$ We estimate $\sigma^2_{{m},{m}'}$ by $$\hat{\sigma}^2_{{m},{m}'}=\frac{ \sum\limits_{i\in\{1,\ldots,{H}\} \mid h_i\subset {S}_{m}\cap{S}_{{m}'}} \left(\sum\limits_{{k}\in h_i}{\mathbf{y}}_{{m},{k},.}-\frac{\sum\limits_{i'\in\{1,\ldots,{H}\} \mid h_{i'}\subset {S}_{m}\cap{S}_{{m}'}}\sum_{{k}'\in h_{i'}}{\mathbf{y}}_{{m},{k}',.}}{\# \{i'\in\{1,\ldots,{H}\} \mid h_{i'}\subset {S}_{m}\cap{S}_{{m}'}\}}\right){\left(\sum\limits_{{k}\in h_i}{\mathbf{y}}_{{m}',{k},.}\right)^{\!\mathrm{T}}}}{\#\left\{i \in \{1,\ldots,{H}\}\mid h_i\subset {S}_{m}\cap{S}_{{m}'}\right\}-1}$$ if ${S}_{m}\cap{S}_{{m}'}\neq\emptyset$, and $0$ otherwise. Let ${m}, {m}'\in \left\{1,\ldots,{M}\right\}$ and ${g},{g}'\in\{1,\ldots,8\}$. If ${m}'+\delta_{{g}'}={m}+\delta_{{g}}$, then $S_{{m},{g}}=S_{{m}',{g}'}$; we approximate the distribution of $S_{{m}',{g}'}$ by a cluster sampling distribution where the first stage is simple random sampling,
and we estimate $\mathrm{Cov}\left[\hat{{\mathrm{t}}}^{{\mathrm{mis}},{g}}_{{\mathbf{y}}_{{m},{e}}},\hat{{\mathrm{t}}}^{{\mathrm{mis}},{g}'}_{{\mathbf{y}}_{{m}',{e}'}}\right]$ by $\widehat{\mathrm{Cov}}\left[\hat{{\mathrm{t}}}^{{\mathrm{mis}},{g}}_{{\mathbf{y}}_{{m},{e}}},\hat{{\mathrm{t}}}^{{\mathrm{mis}},{g}'}_{{\mathbf{y}}_{{m}',{e}'}}\right]=({H})^2\left(1-\frac{{n}}{{H}}\right)\frac{\hat{\sigma}^2_{{m},{m}'}}{{n}/8}.$ If ${m}'+\delta_{{g}'}\neq {m}+\delta_{{g}}$, then ${S}_{{m},{g}}\cap{S}_{{m}',{g}'}=\emptyset$; we approximate the distribution of $({S}_{{m},{g}},{S}_{{m}',{g}'})$ by the distribution of two independent simple random samples of clusters conditional on the non-overlap of the two samples, and we estimate $\mathrm{Cov}\left[\hat{{\mathrm{t}}}^{{\mathrm{mis}}}_{{\mathbf{y}}_{{m},{g},.}},\hat{{\mathrm{t}}}^{{\mathrm{mis}}}_{{\mathbf{y}}_{{m}',{g}',.}}\right]$ by $\widehat{\mathrm{Cov}}\left[\hat{{\mathrm{t}}}^{{\mathrm{mis}}}_{{\mathbf{y}}_{{m},{g},.}},\hat{{\mathrm{t}}}^{{\mathrm{mis}}}_{{\mathbf{y}}_{{m}',{g}',.}}\right]=-{H}\hat{\sigma}^2_{{m},{m}'}$.

Choice of optimal estimator in each class
-----------------------------------------

In our simulations, the best linear unbiased estimator turned out to be exact, in the sense that for the three different choices of ${\mathbf{y}}$ (population 1, population 2, population 3), the $(1000,2040)$-matrix $Y$ whose rows are the $1000$ possible values of $\vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}}$ is of rank $1000$, so for each $({m},{e})$ we can find a $2040$-sized vector $x_{{m},{e}}$ such that $Yx_{{m},{e}}=\left({\mathrm{t}}_{\mathbf{y}}\right)_{{m},{e}}\mathds{1}$, where $\mathds{1}$ is the $1000$-sized vector of ones.
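This rank argument can be replayed on a miniature invented example: when the matrix $Y$ of possible month-in-sample estimates has full row rank, a vector $x$ with $Yx=t\mathds{1}$ exists, so the corresponding linear combination returns the true total $t$ whichever sample was drawn.

```python
# Invented miniature: 2 possible samples (rows), 3 month-in-sample
# estimates (columns); Y has full row rank 2, so 2 equations in 3
# unknowns are solvable.
Y = [[2.0, 1.0, 3.0],
     [1.0, 4.0, 0.0]]
t = 10.0

# Fix x3 = 0 and solve the remaining 2x2 system by Cramer's rule.
a, b, c, d = Y[0][0], Y[0][1], Y[1][0], Y[1][1]
det = a * d - b * c
x = [(t * d - b * t) / det, (a * t - t * c) / det, 0.0]

# The estimator sum_j x_j * that_j equals t whichever sample occurred.
values = [sum(Yij * xj for Yij, xj in zip(row, x)) for row in Y]
print(values)  # both entries equal t = 10.0 (up to rounding)
```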
We then define $W_o$ as the $(({M}\times 3),({M}\times 8\times 3))$-sized matrix whose rows are the vectors ${\left.x_{{m},{e}}\right.^{\!\mathrm{T}}}$, so that $W_o{\left.Y\right.^{\!\mathrm{T}}}=\vec{{\mathrm{t}}}_{\mathbf{y}}{\left.\mathds{1}\right.^{\!\mathrm{T}}}$, which means that surely $W_o\vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}}=\vec{{\mathrm{t}}}_{\mathbf{y}}$; the BLUE is then necessarily equal to $W_o\vec{\hat{{\mathrm{t}}}}^{{\mathrm{mis}}}_{\mathbf{y}}$, a result that we were able to reproduce in our simulations. This situation is particular to our simulation setup, which allows only a small number of possible samples; with a design for which the number of possible samples is larger than the number of month-in-sample estimates, the best linear estimator would not be exact. We computed the objective functions for $\alpha\in\{0,0.05,\ldots,1\}$ only. Table \[bestak\] shows the optimal values of $a_1$, $k_1$, $a_2$, and $k_2$ for the three different populations and for the best empirical estimators for level, change and compromise. The Census Bureau uses the coefficients $a_1=0.3$, $k_1=0.4$, $a_2=0.4$ and $k_2=0.7$ for the CPS. We notice that for each population, the best sets of coefficients for change, level and compromise are very close, which means that the optimal choice for level is also almost optimal for change for those three populations.
|                              | Population 1       | Population 2       | Population 3       |
|------------------------------|--------------------|--------------------|--------------------|
| **$(a_1,k_1)$ (unemployed)** |                    |                    |                    |
| Level                        | $(0.0471,0.85)$    | $(0.0395,0.398)$   | $(-0.0704,-0.619)$ |
| Compromise                   | $(0.029,0.895)$    | $(0.00175,0.0551)$ | $(0.0038,0.0253)$  |
| Change                       | $(0.0243,0.89)$    | $(0.0358,0.362)$   | $(-0.0239,-0.445)$ |
| **$(a_2,k_2)$ (employed)**   |                    |                    |                    |
| Level                        | $(0.0714,0.752)$   | $(0.0453,0.73)$    | $(-0.0354,0.825)$  |
| Compromise                   | $(-0.0075,-0.232)$ | $(0.002,0.0598)$   | $(0.0464,0.0482)$  |
| Change                      | $(-0.0187,-0.256)$ | $(0.0658,0.723)$   | $(-0.0529,0.836)$  |

: Optimal $(a_1,k_1)$ and $(a_2,k_2)$ values for the three populations[]{data-label="bestak"}

Table \[bestrc\] shows the best coefficient $\alpha$ for the regression composite estimators.

|            | Population 1   | Population 2   | Population 3 |
|------------|----------------|----------------|--------------|
| Level      | $0.55$ ($0.6$) | $0.45$ ($0.6$) | $0$          |
| Change     | $1$            | $0.75$         | $0.8$        |
| Compromise | $0.55$ ($0.6$) | $0.45$ ($0.6$) | $0$          |

: Optimal regression composite estimator’s $\alpha$ parameter value for three different populations[]{data-label="bestrc"}

[Numbers in parentheses indicate the parameter values in the presence of rotation bias, when different.]{}

Analysis without measurement error
----------------------------------

Figure \[fig:1\] displays the relative mean squared error for the different estimators of unemployment level and change, i.e.
the time series $\left(\frac{\mathrm{MSE}\left[\hat{{\mathrm{r}}}^\star_{m}\right]}{\mathrm{MSE}\left[\hat{{\mathrm{r}}}^{\mathrm{direct}}_{m}\right]}\right)_{{m}\in\{1,\ldots,{M}\}}$ and $\left(\frac{\mathrm{MSE}\left[\hat{{\mathrm{r}}}^\star_{m}-\hat{{\mathrm{r}}}^\star_{{m}-1}\right]}{\mathrm{MSE}\left[\hat{{\mathrm{r}}}^{\mathrm{direct}}_{m}-\hat{{\mathrm{r}}}^{\mathrm{direct}}_{{m}-1}\right]}\right)_{{m}\in\{2,\ldots,{M}\}}$ for $\star\in \{{\mathrm{direct}},{\mathrm{AK}},{\mathrm{r.c.}}\}$. In this figure, the best representative of each class is chosen, in the sense that the coefficients of Tables \[bestak\] and \[bestrc\] are used. Note that in the absence of measurement error, the performances of all best “estimators” are comparable.

*(Figure \[fig:1\]: relative MSE, with respect to the direct estimator, of the direct, best AK, and regression composite (best $\alpha$) estimators of unemployment level (left column) and change (right column), for Populations 1, 2 and 3.)*
– (433.62,117.12) – (433.62,218.39) – (240.57,218.39) – (240.57,117.12); ( 0.00, 0.00) rectangle (433.62,361.35); (247.72,117.12) – (402.78,117.12); (247.72,117.12) – (247.72,218.39); (273.56,117.12) – (273.56,218.39); (299.41,117.12) – (299.41,218.39); (325.25,117.12) – (325.25,218.39); (351.09,117.12) – (351.09,218.39); (376.94,117.12) – (376.94,218.39); (402.78,117.12) – (402.78,218.39); (240.57,124.95) – (240.57,214.64); (240.57,124.95) – (433.62,124.95); (240.57,147.37) – (433.62,147.37); (240.57,169.80) – (433.62,169.80); (240.57,192.22) – (433.62,192.22); (240.57,214.64) – (433.62,214.64); at (232.65,122.68) [0.80]{}; at (232.65,145.10) [0.85]{}; at (232.65,167.52) [0.90]{}; at (232.65,189.95) [0.95]{}; at (232.65,212.37) [1.00]{}; (240.57,117.12) rectangle (433.62,218.39); (247.72,186.64) – (249.87,180.41) – (252.03,179.09) – (254.18,166.43) – (256.33,178.16) – (258.49,170.04) – (260.64,179.73) – (262.80,192.30) – (264.95,201.75) – (267.10,186.55) – (269.26,174.20) – (271.41,189.97) – (273.56,188.91) – (275.72,178.31) – (277.87,175.27) – (280.02,183.28) – (282.18,184.05) – (284.33,195.02) – (286.49,194.40) – (288.64,187.22) – (290.79,186.66) – (292.95,178.46) – (295.10,176.28) – (297.25,193.88) – (299.41,195.07) – (301.56,187.40) – (303.71,176.30) – (305.87,179.28) – (308.02,202.29) – (310.17,192.12) – (312.33,182.59) – (314.48,179.18) – (316.64,182.17) – (318.79,175.57) – (320.94,174.33) – (323.10,184.18) – (325.25,182.43) – (327.40,183.04) – (329.56,190.81) – (331.71,184.48) – (333.86,186.12) – (336.02,182.29) – (338.17,179.65) – (340.33,188.56) – (342.48,167.02) – (344.63,174.23) – (346.79,172.74) – (348.94,161.77) – (351.09,149.18) – (353.25,158.45) – (355.40,158.86) – (357.55,154.05) – (359.71,162.41) – (361.86,165.39) – (364.02,147.55) – (366.17,139.15) – (368.32,151.06) – (370.48,144.69) – (372.63,146.10) – (374.78,163.72) – (376.94,143.48) – (379.09,148.13) – (381.24,160.99) – (383.40,144.38) – (385.55,151.43) – (387.70,171.28) – (389.86,162.78) – 
(392.01,158.42) – (394.17,166.38) – (396.32,152.71) – (398.47,147.37) – (400.63,165.72) – (402.78,162.67) – (404.93,147.22) – (407.09,168.46) – (409.24,160.73) – (411.39,174.88) – (413.55,171.04) – (415.70,155.80) – (417.86,162.01) – (420.01,152.78) – (422.16,147.74) – (424.32,149.89) – (426.47,167.61); (247.72,214.64) – (249.87,214.64) – (252.03,214.64) – (254.18,214.64) – (256.33,214.64) – (258.49,214.64) – (260.64,214.64) – (262.80,214.64) – (264.95,214.64) – (267.10,214.64) – (269.26,214.64) – (271.41,214.64) – (273.56,214.64) – (275.72,214.64) – (277.87,214.64) – (280.02,214.64) – (282.18,214.64) – (284.33,214.64) – (286.49,214.64) – (288.64,214.64) – (290.79,214.64) – (292.95,214.64) – (295.10,214.64) – (297.25,214.64) – (299.41,214.64) – (301.56,214.64) – (303.71,214.64) – (305.87,214.64) – (308.02,214.64) – (310.17,214.64) – (312.33,214.64) – (314.48,214.64) – (316.64,214.64) – (318.79,214.64) – (320.94,214.64) – (323.10,214.64) – (325.25,214.64) – (327.40,214.64) – (329.56,214.64) – (331.71,214.64) – (333.86,214.64) – (336.02,214.64) – (338.17,214.64) – (340.33,214.64) – (342.48,214.64) – (344.63,214.64) – (346.79,214.64) – (348.94,214.64) – (351.09,214.64) – (353.25,214.64) – (355.40,214.64) – (357.55,214.64) – (359.71,214.64) – (361.86,214.64) – (364.02,214.64) – (366.17,214.64) – (368.32,214.64) – (370.48,214.64) – (372.63,214.64) – (374.78,214.64) – (376.94,214.64) – (379.09,214.64) – (381.24,214.64) – (383.40,214.64) – (385.55,214.64) – (387.70,214.64) – (389.86,214.64) – (392.01,214.64) – (394.17,214.64) – (396.32,214.64) – (398.47,214.64) – (400.63,214.64) – (402.78,214.64) – (404.93,214.64) – (407.09,214.64) – (409.24,214.64) – (411.39,214.64) – (413.55,214.64) – (415.70,214.64) – (417.86,214.64) – (420.01,214.64) – (422.16,214.64) – (424.32,214.64) – (426.47,214.64); (247.72,176.78) – (249.87,172.88) – (252.03,172.02) – (254.18,159.77) – (256.33,175.87) – (258.49,164.02) – (260.64,172.17) – (262.80,186.92) – (264.95,199.69) – (267.10,179.77) – 
(269.26,169.98) – (271.41,190.20) – (273.56,181.72) – (275.72,174.10) – (277.87,176.42) – (280.02,178.25) – (282.18,188.51) – (284.33,190.16) – (286.49,190.09) – (288.64,184.65) – (290.79,185.22) – (292.95,167.60) – (295.10,167.36) – (297.25,195.36) – (299.41,189.43) – (301.56,178.40) – (303.71,163.77) – (305.87,174.13) – (308.02,197.11) – (310.17,192.58) – (312.33,174.05) – (314.48,178.38) – (316.64,179.55) – (318.79,169.17) – (320.94,170.20) – (323.10,181.47) – (325.25,174.30) – (327.40,175.18) – (329.56,184.84) – (331.71,178.22) – (333.86,179.10) – (336.02,174.13) – (338.17,177.26) – (340.33,188.18) – (342.48,158.18) – (344.63,165.58) – (346.79,162.52) – (348.94,158.74) – (351.09,137.91) – (353.25,143.30) – (355.40,143.28) – (357.55,138.37) – (359.71,149.20) – (361.86,155.24) – (364.02,130.98) – (366.17,124.66) – (368.32,135.81) – (370.48,124.48) – (372.63,122.68) – (374.78,151.24) – (376.94,120.87) – (379.09,136.77) – (381.24,150.50) – (383.40,123.61) – (385.55,138.74) – (387.70,164.46) – (389.86,148.68) – (392.01,147.82) – (394.17,156.25) – (396.32,140.69) – (398.47,130.99) – (400.63,154.40) – (402.78,139.49) – (404.93,130.58) – (407.09,158.83) – (409.24,146.85) – (411.39,164.95) – (413.55,159.92) – (415.70,139.19) – (417.86,150.60) – (420.01,131.09) – (422.16,127.93) – (424.32,134.23) – (426.47,156.06); ( 0.00, 0.00) rectangle (433.62,361.35); ( 23.76, 15.84) – (216.81, 15.84) – (216.81,117.12) – ( 23.76,117.12) – ( 23.76, 15.84); ( 0.00, 0.00) rectangle (433.62,361.35); ( 30.91, 15.84) – (209.66, 15.84); ( 30.91, 15.84) – ( 30.91,117.12); ( 56.45, 15.84) – ( 56.45,117.12); ( 81.98, 15.84) – ( 81.98,117.12); (107.52, 15.84) – (107.52,117.12); (133.05, 15.84) – (133.05,117.12); (158.59, 15.84) – (158.59,117.12); (184.12, 15.84) – (184.12,117.12); (209.66, 15.84) – (209.66,117.12); at ( 30.91, 1.58) [2005]{}; at ( 56.45, 1.58) [2006]{}; at ( 81.98, 1.58) [2007]{}; at (107.52, 1.58) [2008]{}; at (133.05, 1.58) [2009]{}; at (158.59, 1.58) [2010]{}; at (184.12, 
1.58) [2011]{}; at (209.66, 1.58) [2012]{}; ( 23.76, 27.21) – ( 23.76,115.32); ( 23.76, 27.21) – (216.81, 27.21); ( 23.76, 49.24) – (216.81, 49.24); ( 23.76, 71.26) – (216.81, 71.26); ( 23.76, 93.29) – (216.81, 93.29); ( 23.76,115.32) – (216.81,115.32); at ( 15.84, 24.93) [0.98]{}; at ( 15.84, 46.96) [0.99]{}; at ( 15.84, 68.99) [1.00]{}; at ( 15.84, 91.02) [1.01]{}; ( 23.76, 15.84) rectangle (216.81,117.12); ( 30.91, 71.26) – ( 33.04,113.37) – ( 35.17, 97.90) – ( 37.29, 84.69) – ( 39.42, 77.27) – ( 41.55, 61.84) – ( 43.68, 79.23) – ( 45.81, 65.41) – ( 47.93, 75.15) – ( 50.06, 59.56) – ( 52.19, 72.53) – ( 54.32, 54.65) – ( 56.45, 68.43) – ( 58.57, 69.23) – ( 60.70, 67.92) – ( 62.83, 56.78) – ( 64.96, 59.72) – ( 67.09, 55.22) – ( 69.21, 66.58) – ( 71.34, 65.08) – ( 73.47, 54.86) – ( 75.60, 62.75) – ( 77.73, 46.30) – ( 79.85, 56.95) – ( 81.98, 47.59) – ( 84.11, 66.69) – ( 86.24, 66.31) – ( 88.37, 66.32) – ( 90.49, 61.40) – ( 92.62, 60.48) – ( 94.75, 64.03) – ( 96.88, 55.99) – ( 99.01, 72.68) – (101.13, 71.51) – (103.26, 64.85) – (105.39, 63.77) – (107.52, 58.15) – (109.65, 69.18) – (111.77, 60.49) – (113.90, 61.92) – (116.03, 55.37) – (118.16, 59.40) – (120.28, 57.48) – (122.41, 75.02) – (124.54, 57.95) – (126.67, 64.35) – (128.80, 57.09) – (130.92, 70.60) – (133.05, 72.41) – (135.18, 85.18) – (137.31, 80.36) – (139.44, 75.70) – (141.56, 70.86) – (143.69, 67.50) – (145.82, 70.91) – (147.95, 70.78) – (150.08, 80.02) – (152.20, 53.13) – (154.33, 79.83) – (156.46, 38.78) – (158.59, 66.14) – (160.72, 35.28) – (162.84, 73.97) – (164.97, 19.59) – (167.10, 62.02) – (169.23, 44.92) – (171.36, 51.76) – (173.48, 45.21) – (175.61, 50.41) – (177.74, 40.04) – (179.87, 57.76) – (182.00, 32.05) – (184.12, 48.21) – (186.25, 58.01) – (188.38, 67.81) – (190.51, 66.16) – (192.64, 60.63) – (194.76, 58.19) – (196.89, 59.18) – (199.02, 55.05) – (201.15, 47.45) – (203.28, 74.63) – (205.40, 53.23) – (207.53, 55.72) – (209.66, 59.03); ( 30.91, 71.26) – ( 33.04, 71.26) – ( 35.17, 71.26) – ( 
37.29, 71.26) – ( 39.42, 71.26) – ( 41.55, 71.26) – ( 43.68, 71.26) – ( 45.81, 71.26) – ( 47.93, 71.26) – ( 50.06, 71.26) – ( 52.19, 71.26) – ( 54.32, 71.26) – ( 56.45, 71.26) – ( 58.57, 71.26) – ( 60.70, 71.26) – ( 62.83, 71.26) – ( 64.96, 71.26) – ( 67.09, 71.26) – ( 69.21, 71.26) – ( 71.34, 71.26) – ( 73.47, 71.26) – ( 75.60, 71.26) – ( 77.73, 71.26) – ( 79.85, 71.26) – ( 81.98, 71.26) – ( 84.11, 71.26) – ( 86.24, 71.26) – ( 88.37, 71.26) – ( 90.49, 71.26) – ( 92.62, 71.26) – ( 94.75, 71.26) – ( 96.88, 71.26) – ( 99.01, 71.26) – (101.13, 71.26) – (103.26, 71.26) – (105.39, 71.26) – (107.52, 71.26) – (109.65, 71.26) – (111.77, 71.26) – (113.90, 71.26) – (116.03, 71.26) – (118.16, 71.26) – (120.28, 71.26) – (122.41, 71.26) – (124.54, 71.26) – (126.67, 71.26) – (128.80, 71.26) – (130.92, 71.26) – (133.05, 71.26) – (135.18, 71.26) – (137.31, 71.26) – (139.44, 71.26) – (141.56, 71.26) – (143.69, 71.26) – (145.82, 71.26) – (147.95, 71.26) – (150.08, 71.26) – (152.20, 71.26) – (154.33, 71.26) – (156.46, 71.26) – (158.59, 71.26) – (160.72, 71.26) – (162.84, 71.26) – (164.97, 71.26) – (167.10, 71.26) – (169.23, 71.26) – (171.36, 71.26) – (173.48, 71.26) – (175.61, 71.26) – (177.74, 71.26) – (179.87, 71.26) – (182.00, 71.26) – (184.12, 71.26) – (186.25, 71.26) – (188.38, 71.26) – (190.51, 71.26) – (192.64, 71.26) – (194.76, 71.26) – (196.89, 71.26) – (199.02, 71.26) – (201.15, 71.26) – (203.28, 71.26) – (205.40, 71.26) – (207.53, 71.26) – (209.66, 71.26); ( 30.91, 71.26) – ( 33.04, 61.27) – ( 35.17, 68.36) – ( 37.29, 78.91) – ( 39.42, 67.54) – ( 41.55, 84.86) – ( 43.68, 67.30) – ( 45.81, 70.97) – ( 47.93, 72.66) – ( 50.06, 76.26) – ( 52.19, 75.77) – ( 54.32, 73.22) – ( 56.45, 75.56) – ( 58.57, 75.65) – ( 60.70, 76.79) – ( 62.83, 65.95) – ( 64.96, 75.92) – ( 67.09, 64.24) – ( 69.21, 74.81) – ( 71.34, 72.60) – ( 73.47, 94.64) – ( 75.60, 67.22) – ( 77.73, 73.80) – ( 79.85, 68.77) – ( 81.98, 67.18) – ( 84.11, 69.09) – ( 86.24, 65.79) – ( 88.37, 73.96) – ( 90.49, 70.50) – ( 
92.62, 66.66) – ( 94.75, 74.11) – ( 96.88, 75.57) – ( 99.01, 71.62) – (101.13, 79.19) – (103.26, 69.79) – (105.39, 74.02) – (107.52, 73.78) – (109.65, 72.86) – (111.77, 67.86) – (113.90, 75.13) – (116.03, 71.93) – (118.16, 75.63) – (120.28, 75.40) – (122.41, 76.93) – (124.54, 81.52) – (126.67, 80.13) – (128.80, 79.18) – (130.92, 81.48) – (133.05, 80.56) – (135.18, 76.52) – (137.31, 80.35) – (139.44, 86.62) – (141.56, 76.54) – (143.69, 79.90) – (145.82, 74.56) – (147.95, 82.60) – (150.08, 70.32) – (152.20, 82.74) – (154.33, 76.06) – (156.46, 80.40) – (158.59, 71.61) – (160.72, 84.52) – (162.84, 78.49) – (164.97, 78.45) – (167.10, 78.47) – (169.23, 76.85) – (171.36, 70.07) – (173.48, 72.33) – (175.61, 58.71) – (177.74, 83.40) – (179.87, 70.74) – (182.00, 81.71) – (184.12, 79.07) – (186.25, 80.04) – (188.38, 77.98) – (190.51, 77.86) – (192.64, 81.21) – (194.76, 72.22) – (196.89, 79.77) – (199.02, 72.62) – (201.15, 60.82) – (203.28, 82.51) – (205.40, 70.51) – (207.53, 81.69) – (209.66, 77.24); ( 0.00, 0.00) rectangle (433.62,361.35); (240.57, 15.84) – (433.62, 15.84) – (433.62,117.12) – (240.57,117.12) – (240.57, 15.84); ( 0.00, 0.00) rectangle (433.62,361.35); (247.72, 15.84) – (402.78, 15.84); (247.72, 15.84) – (247.72,117.12); (273.56, 15.84) – (273.56,117.12); (299.41, 15.84) – (299.41,117.12); (325.25, 15.84) – (325.25,117.12); (351.09, 15.84) – (351.09,117.12); (376.94, 15.84) – (376.94,117.12); (402.78, 15.84) – (402.78,117.12); at (247.72, 1.58) [2005]{}; at (273.56, 1.58) [2006]{}; at (299.41, 1.58) [2007]{}; at (325.25, 1.58) [2008]{}; at (351.09, 1.58) [2009]{}; at (376.94, 1.58) [2010]{}; at (402.78, 1.58) [2011]{}; (240.57, 24.20) – (240.57,116.91); (240.57, 24.20) – (433.62, 24.20); (240.57, 39.65) – (433.62, 39.65); (240.57, 55.10) – (433.62, 55.10); (240.57, 70.56) – (433.62, 70.56); (240.57, 86.01) – (433.62, 86.01); (240.57,101.46) – (433.62,101.46); (240.57,116.91) – (433.62,116.91); at (232.65, 21.93) [0.96]{}; at (232.65, 37.38) [0.98]{}; at 
(232.65, 52.83) [1.00]{}; at (232.65, 68.28) [1.02]{}; at (232.65, 83.73) [1.04]{}; at (232.65, 99.19) [1.06]{}; at (232.65,114.64) [1.08]{}; (240.57, 15.84) rectangle (433.62,117.12); (247.72, 69.92) – (249.87,113.37) – (252.03, 69.35) – (254.18, 60.57) – (256.33, 53.53) – (258.49, 52.64) – (260.64, 54.58) – (262.80, 51.32) – (264.95, 47.73) – (267.10, 47.58) – (269.26, 43.73) – (271.41, 45.01) – (273.56, 48.34) – (275.72, 49.42) – (277.87, 44.23) – (280.02, 37.34) – (282.18, 40.73) – (284.33, 38.87) – (286.49, 47.33) – (288.64, 44.72) – (290.79, 38.31) – (292.95, 31.21) – (295.10, 29.96) – (297.25, 33.63) – (299.41, 35.96) – (301.56, 47.27) – (303.71, 45.71) – (305.87, 46.04) – (308.02, 43.94) – (310.17, 44.25) – (312.33, 40.92) – (314.48, 43.65) – (316.64, 55.13) – (318.79, 50.64) – (320.94, 48.42) – (323.10, 43.91) – (325.25, 44.88) – (327.40, 44.94) – (329.56, 42.74) – (331.71, 35.22) – (333.86, 34.13) – (336.02, 34.20) – (338.17, 51.84) – (340.33, 49.91) – (342.48, 40.00) – (344.63, 33.70) – (346.79, 42.32) – (348.94, 49.54) – (351.09, 63.92) – (353.25, 67.76) – (355.40, 60.50) – (357.55, 58.23) – (359.71, 56.27) – (361.86, 56.62) – (364.02, 57.78) – (366.17, 56.51) – (368.32, 52.57) – (370.48, 51.42) – (372.63, 44.06) – (374.78, 39.66) – (376.94, 40.32) – (379.09, 47.54) – (381.24, 46.68) – (383.40, 40.22) – (385.55, 45.93) – (387.70, 44.56) – (389.86, 43.70) – (392.01, 41.39) – (394.17, 42.50) – (396.32, 40.38) – (398.47, 30.45) – (400.63, 19.59) – (402.78, 35.98) – (404.93, 51.18) – (407.09, 54.09) – (409.24, 52.29) – (411.39, 50.08) – (413.55, 47.38) – (415.70, 50.48) – (417.86, 46.75) – (420.01, 49.89) – (422.16, 49.21) – (424.32, 42.81) – (426.47, 45.17); (247.72, 55.10) – (249.87, 55.10) – (252.03, 55.10) – (254.18, 55.10) – (256.33, 55.10) – (258.49, 55.10) – (260.64, 55.10) – (262.80, 55.10) – (264.95, 55.10) – (267.10, 55.10) – (269.26, 55.10) – (271.41, 55.10) – (273.56, 55.10) – (275.72, 55.10) – (277.87, 55.10) – (280.02, 55.10) – (282.18, 55.10) 
– (284.33, 55.10) – (286.49, 55.10) – (288.64, 55.10) – (290.79, 55.10) – (292.95, 55.10) – (295.10, 55.10) – (297.25, 55.10) – (299.41, 55.10) – (301.56, 55.10) – (303.71, 55.10) – (305.87, 55.10) – (308.02, 55.10) – (310.17, 55.10) – (312.33, 55.10) – (314.48, 55.10) – (316.64, 55.10) – (318.79, 55.10) – (320.94, 55.10) – (323.10, 55.10) – (325.25, 55.10) – (327.40, 55.10) – (329.56, 55.10) – (331.71, 55.10) – (333.86, 55.10) – (336.02, 55.10) – (338.17, 55.10) – (340.33, 55.10) – (342.48, 55.10) – (344.63, 55.10) – (346.79, 55.10) – (348.94, 55.10) – (351.09, 55.10) – (353.25, 55.10) – (355.40, 55.10) – (357.55, 55.10) – (359.71, 55.10) – (361.86, 55.10) – (364.02, 55.10) – (366.17, 55.10) – (368.32, 55.10) – (370.48, 55.10) – (372.63, 55.10) – (374.78, 55.10) – (376.94, 55.10) – (379.09, 55.10) – (381.24, 55.10) – (383.40, 55.10) – (385.55, 55.10) – (387.70, 55.10) – (389.86, 55.10) – (392.01, 55.10) – (394.17, 55.10) – (396.32, 55.10) – (398.47, 55.10) – (400.63, 55.10) – (402.78, 55.10) – (404.93, 55.10) – (407.09, 55.10) – (409.24, 55.10) – (411.39, 55.10) – (413.55, 55.10) – (415.70, 55.10) – (417.86, 55.10) – (420.01, 55.10) – (422.16, 55.10) – (424.32, 55.10) – (426.47, 55.10); (247.72, 50.08) – (249.87, 51.23) – (252.03, 52.08) – (254.18, 51.56) – (256.33, 55.28) – (258.49, 49.99) – (260.64, 47.49) – (262.80, 49.82) – (264.95, 50.64) – (267.10, 54.57) – (269.26, 55.41) – (271.41, 64.24) – (273.56, 64.16) – (275.72, 62.85) – (277.87, 71.34) – (280.02, 68.53) – (282.18, 70.70) – (284.33, 61.52) – (286.49, 64.39) – (288.64, 76.27) – (290.79, 67.31) – (292.95, 67.50) – (295.10, 57.52) – (297.25, 55.70) – (299.41, 66.58) – (301.56, 69.50) – (303.71, 72.13) – (305.87, 77.12) – (308.02, 68.39) – (310.17, 65.08) – (312.33, 67.08) – (314.48, 67.43) – (316.64, 67.52) – (318.79, 59.72) – (320.94, 57.41) – (323.10, 57.81) – (325.25, 54.66) – (327.40, 54.01) – (329.56, 58.82) – (331.71, 61.23) – (333.86, 56.53) – (336.02, 56.64) – (338.17, 53.83) – (340.33, 61.23) – 
When estimating the best A and K coefficients, the results differ: Table \[tab:3\] (resp. \[tab:4\]) reports, for different populations, the quantiles of the relative mean squared errors of the best AK estimator, the AK estimator with coefficients arbitrarily set to the CPS AK coefficients (Arb. AK column), the empirically estimated best AK estimator (Emp. AK column), the best regression composite estimator (Best r.c. column) and the regression composite estimator with $\alpha$ arbitrarily set to $0.75$ (Arb. r.c. column), for the estimation of the level (resp. change). For all populations, the arbitrary regression composite estimator behaves much better than the estimated best AK and arbitrary AK estimators, which perform worse than the direct estimator. The estimation of the best linear estimator gives even worse results than the estimated best AK and is not reported. This underlines the weakness of the AK and Yansaneh-Fuller-type estimators: without a good estimator of the variance matrix, they perform very poorly. On the other hand, the regression composite estimator with arbitrary $\alpha$ performs better without requiring any estimation of the variance.
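Throughout the tables, performance is reported as the relative mean squared error, i.e. the MSE of a candidate estimator divided by the MSE of the direct estimator, summarised by its quantiles and mean across populations. A minimal sketch of this summary (the function and array names are our own illustration, not from the paper):

```python
import numpy as np

def relative_mse(estimates, direct_estimates, truth):
    """Relative MSE of a composite estimator with respect to the direct one.

    estimates, direct_estimates : arrays of shape (n_replicates, n_months)
    truth                       : array of shape (n_months,)
    """
    mse = np.mean((estimates - truth) ** 2)
    mse_direct = np.mean((direct_estimates - truth) ** 2)
    return mse / mse_direct

def summarise(rel_mses):
    """Quantiles and mean of relative MSEs across populations, as in the tables."""
    qs = np.quantile(rel_mses, [0.0, 0.25, 0.5, 0.75, 1.0])
    return {"0%": qs[0], "25%": qs[1], "50%": qs[2],
            "75%": qs[3], "100%": qs[4], "Mean": float(np.mean(rel_mses))}
```

A value below 1 means the estimator beats the direct estimator on average; a value above 1 means it does worse.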
------ --------- --------- --------- ----------- ----------- --------- --------- --------- ----------- ----------- --------- --------- --------- ----------- -----------
        Best AK   Arb. AK   Emp. AK   Best r.c.   Arb. r.c.   Best AK   Arb. AK   Emp. AK   Best r.c.   Arb. r.c.   Best AK   Arb. AK   Emp. AK   Best r.c.   Arb. r.c.
0%      0.318     1         1         0.322       0.477       0.87      1         1         0.863       0.885       0.983     1         1         0.994       0.994
25%     0.377     1.52      2.59      0.38        0.546       0.906     1.35      2.56      0.913       0.94        0.996     1.08      1.03      1           1.01
50%     0.409     1.6       2.64      0.42        0.591       0.929     1.41      2.7       0.945       0.974       0.997     1.14      1.04      1           1.02
75%     0.454     1.95      2.74      0.472       0.663       0.951     1.49      2.79      0.969       0.989       1         1.26      1.07      1           1.02
100%    1         2.09      2.86      1           1           1         1.68      3.08      1           1.02        1.01      1.65      1.14      1.01        1.15
Mean    0.431     1.72      2.64      0.443       0.613       0.926     1.42      2.66      0.94        0.966       0.997     1.19      1.05      1           1.02
------ --------- --------- --------- ----------- ----------- --------- --------- --------- ----------- ----------- --------- --------- --------- ----------- -----------

------ --------- --------- --------- ----------- ----------- --------- --------- --------- ----------- ----------- --------- --------- --------- ----------- -----------
        Best AK   Arb. AK   Emp. AK   Best r.c.   Arb. r.c.   Best AK   Arb. AK   Emp. AK   Best r.c.   Arb. r.c.   Best AK   Arb. AK   Emp. AK   Best r.c.   Arb. r.c.
0%      0.0959    2.77      5.43      0.0279      0.0936      0.845     0.872     2.72      0.774       0.791       0.973     0.998     1.01      0.984       0.994
25%     0.123     3.31      6.35      0.0455      0.112       0.887     0.953     3.07      0.835       0.847       0.99      1.02      1.03      0.992       1
50%     0.142     3.68      6.64      0.0552      0.127       0.914     0.998     3.33      0.885       0.89        0.993     1.02      1.04      0.997       1
75%     0.215     5.21      6.93      0.146       0.201       0.932     1.03      3.62      0.916       0.919       0.996     1.03      1.06      1           1
100%    0.395     6.12      7.59      0.355       0.383       0.971     1.13      3.92      0.965       0.967       1.04      1.06      1.14      1.11        1.01
Mean    0.174     4.21      6.68      0.102       0.163       0.909     0.993     3.33      0.876       0.883       0.993     1.03      1.04      1           1
------ --------- --------- --------- ----------- ----------- --------- --------- --------- ----------- ----------- --------- --------- --------- ----------- -----------

Analysis with measurement error
-------------------------------

Under the measurement error model, a solution to the rotation group bias when adapting the AK estimator consists in estimating the rotation bias parameter vector ${b}$ and applying the AK coefficients to bias-corrected month-in-sample estimates, to obtain $$\left(\hat{{\mathrm{t}}}^{{\mathrm{AK}}*}_{\mathbf{y}}\right)_{{m},.}=\sum_{{m}'=1}^{m}\sum_{{g}} c_{{m},{m}',{g}} \left( \left(\hat{{\mathrm{t}}}^{{\mathrm{mis}},{g}}_{\mathbf{y}}\right)_{{m}',{g},.}-\hat{{b}}_{g}\right).$$ The question of how to adapt the regression composite estimator to take measurement error into account is more complicated. Besides, the model used for the rotation bias is itself questionable: the linear constraint on ${b}$ ($\sum_{{g}} {b}_{{g},.} =0$ or ${b}_{1,.}=0$) is imposed to address an identifiability problem, but its validity cannot be assessed. This is why we think it is not a good way to deal with the rotation bias.
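The bias-corrected AK combination above can be sketched as follows. The coefficient layout `c[m, m_prime, g]`, the month-in-sample array `t_mis`, and 0-based indexing are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ak_corrected(t_mis, c, b_hat, m):
    """Bias-corrected AK-type combination for month m (0-based sketch).

    t_mis[m_prime, g] : month-in-sample estimate for month m_prime, group g
    c[m, m_prime, g]  : AK combination coefficients (hypothetical layout)
    b_hat[g]          : estimated rotation group bias for group g
    """
    n_groups = t_mis.shape[1]
    total = 0.0
    for m_prime in range(m + 1):        # sum over months m' = 0, ..., m
        for g in range(n_groups):       # sum over rotation groups g
            total += c[m, m_prime, g] * (t_mis[m_prime, g] - b_hat[g])
    return total
```

Each month-in-sample estimate is first corrected by the estimated group bias $\hat{b}_g$, and only then combined with the AK coefficients.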
We have not investigated how to adapt the regression composite estimator to address the problem of rotation bias (we think that rotation bias has to be studied at the individual level through resampling methods). Instead, we studied its behaviour in the presence of rotation bias. To this end, we systematically (for every month and every sample) changed the status of up to 2 unemployed persons in month-in-sample group 1 from unemployed to employed. Table \[tab:6\] displays, for different populations, the quantiles and means of the relative mean squared errors for level and change estimation, for the best AK estimator and the best regression composite estimator. We applied the best AK and best regression composite estimators obtained in the case without measurement error to the case with measurement error. We notice that the AK estimator is very sensitive to rotation bias, whereas the regression composite estimator is not. A reason may be that introducing a variable that is not correlated with the study variables into the calibration procedure does not change the estimation of the study variable much. Rotation bias weakens the correlation between ${\mathbf{z}}$ and ${\mathbf{y}}$, yet the performance of the regression composite estimator remains comparable to that of the direct estimator.

The CPS Data Analysis
=====================

Implementation of regression composite estimator for the CPS
------------------------------------------------------------

### Choice of $\alpha$ {#sec:5.2.2}

Under a simple unit-level time series model with auto-regression coefficient $\rho$, Fuller and Rao (2001) proposed a formal expression for an approximately optimal $\alpha$ as a function of $\rho$, and studied the so-called drift problem for the MR2 choice $\alpha=1$. They also proposed approximate expressions for the variances of their estimators for the level and change.
For various reasons, it seems difficult to obtain the optimal, or even an approximately optimal, $\alpha$ needed for the Fuller-Rao type regression composite estimation technique to produce the U.S. employment and unemployment rates using the CPS data. First of all, the simple time series model used by Fuller and Rao (2001) is not suitable for modelling a nominal variable (employment status) with several categories. Secondly, the complexity of the CPS design poses a challenging modelling problem. Before attempting to obtain the optimal, or even an approximately optimal, choice of $\alpha$ for the Fuller-Rao type regression composite method, it is instructive to evaluate the regression composite estimators for different known choices of $\alpha$. This is the focus of this section.

### Choice of ${\mathbf{x}}$ and ${\mathbf{z}}$ {#sec:5.1.2}

In our study, we considered two candidates for ${\mathbf{z}}$: (i) ${\mathbf{z}}={\mathbf{y}}$, and (ii) a more detailed employment status variable with 8 categories. As the use of the latter variable reduces the degrees of freedom in the calibration procedure and leads to estimates with a higher mean squared error, we report only on the first choice. For an application of the Fuller-Rao method, one might think of including in the ${\mathbf{x}}$ variables all the variables that have already been used for the weight adjustments. However, this would introduce many constraints on the coefficients and is thus likely to cause high variability in the ratio of ${\mathbf{w}}_{{m},{k}}$ to ${\mathbf{w}}_{{m},{k}}^{{\mathrm{r.c.}}}$. The other extreme option is not to use any of these auxiliary variables, but then the final weights would not be adjusted for the known totals of the auxiliary variables ${\mathbf{x}}$. As a compromise, we selected only two variables: gender and race.
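Adjusting the weights to reproduce known totals of ${\mathbf{x}}$ (here, gender and race indicators) is a standard linear calibration. A minimal GREG-style sketch under a chi-square distance (our own illustration, not the CPS production code):

```python
import numpy as np

def calibrate_weights(d, X, totals):
    """Chi-square-distance (GREG) calibration of design weights.

    d      : (n,) initial design weights
    X      : (n, p) auxiliary variables (e.g. gender and race indicators)
    totals : (p,) known population totals of the columns of X
    Returns weights w satisfying X.T @ w = totals.
    """
    # Minimise sum_i (w_i - d_i)^2 / d_i subject to X'w = totals;
    # the solution has the form w_i = d_i * (1 + x_i' lambda).
    T = X.T @ (d[:, None] * X)                  # p x p matrix X' diag(d) X
    lam = np.linalg.solve(T, totals - X.T @ d)  # Lagrange multipliers
    return d * (1.0 + X @ lam)
```

The fewer the columns of `X`, the fewer constraints on the weights, which is why restricting ${\mathbf{x}}$ to two variables limits the spread between original and calibrated weights.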
Results
-------

Figure \[fig:2\](a) displays the difference $\widehat{{{\mathrm{r}}}}^{\star}_{m}-\widehat{{{\mathrm{r}}}}^{{\mathrm{direct}}}_{m}$ between different composite estimates and the corresponding direct estimates, plotted against the month ${m}$. For the regression composite estimator, we considered three choices: (i) $\alpha=0.75$ (suggested by Fuller and Rao), (ii) $\alpha=0$ (corresponding to MR1), and (iii) $\alpha=1$ (corresponding to MR2). We display similar graphs for the month-to-month change estimates in Figure \[fig:2\](b). It is interesting to note that the AK composite estimates of unemployment rates are always lower than the corresponding direct estimates in Figure \[fig:2\](a). To our knowledge, this behavior of the AK composite estimates has not been noticed earlier. In contrast, the regression composite estimates MR1 are always higher than the corresponding direct estimates; such deviations, however, decrease as $\alpha$ gets closer to 1 in Figure \[fig:2\](a). Application of the Fuller-Rao method at the household level causes an increase in the distance between the original and calibrated weights, and one may expect an increase in the variances of the estimates. Figure \[fig:2\](b) does not indicate systematic deviations of the composite estimates of the month-to-month changes from the corresponding direct estimates. Deviations of the regression composite estimates from the corresponding direct estimates seem to decrease as $\alpha$ approaches 1.
[Figure \[fig:2\]: differences between composite and direct estimates, plotted against month; panels (a) Level and (b) Change; series: AK, R.C. $\alpha=0.75$, MR1, MR2.]
at ( 61.82, 1.58) [2005]{}; at (112.89, 1.58) [2006]{}; at (163.96, 1.58) [2007]{}; at (215.03, 1.58) [2008]{}; at (266.11, 1.58) [2009]{}; at (317.18, 1.58) [2010]{}; at (368.25, 1.58) [2011]{}; at (419.32, 1.58) [2012]{}; ( 47.52, 42.23) – ( 47.52,111.70); ( 47.52, 42.23) – (433.62, 42.23); ( 47.52, 76.97) – (433.62, 76.97); ( 47.52,111.70) – (433.62,111.70); at ( 39.60, 39.96) [-0.005]{}; at ( 39.60, 74.69) [0.000]{}; at ( 39.60,109.43) [0.005]{}; ( 47.52, 15.84) rectangle (433.62,135.23); ( 61.82, 76.97) – ( 66.08, 87.43) – ( 70.33, 95.10) – ( 74.59, 77.73) – ( 78.84, 71.65) – ( 83.10,102.12) – ( 87.36, 53.90) – ( 91.61, 70.18) – ( 95.87,104.85) – (100.12, 73.02) – (104.38, 95.31) – (108.64, 45.79) – (112.89, 91.36) – (117.15, 61.63) – (121.40, 69.59) – (125.66,100.04) – (129.92, 75.91) – (134.17, 71.73) – (138.43, 79.86) – (142.68, 80.77) – (146.94, 79.74) – (151.19, 65.66) – (155.45,100.76) – (159.71, 46.87) – (163.96, 98.49) – (168.22, 81.13) – (172.47, 50.42) – (176.73, 74.88) – (180.99, 91.07) – (185.24, 57.31) – (189.50, 85.00) – (193.75, 91.62) – (198.01, 79.92) – (202.27, 73.87) – (206.52, 74.64) – (210.78, 76.30) – (215.03, 77.72) – (219.29, 85.73) – (223.55, 50.62) – (227.80, 99.92) – (232.06, 82.56) – (236.31, 61.73) – (240.57, 80.00) – (244.83, 81.00) – (249.08, 94.42) – (253.34, 54.74) – (257.59, 99.07) – (261.85, 71.42) – (266.11, 68.05) – (270.36, 69.99) – (274.62, 98.94) – (278.87, 53.38) – (283.13,113.31) – (287.39, 20.26) – (291.64, 83.93) – (295.90,112.58) – (300.15, 70.63) – (304.41, 72.39) – (308.67, 60.71) – (312.92, 77.94) – (317.18, 78.66) – (321.43,101.35) – (325.69, 59.55) – (329.94, 71.83) – (334.20, 76.98) – (338.46, 77.98) – (342.71, 93.26) – (346.97, 37.69) – (351.22,121.85) – (355.48, 57.78) – (359.74, 90.25) – (363.99, 54.35) – (368.25, 83.96) – (372.50, 64.71) – (376.76, 82.74) – (381.02, 84.00) – (385.27, 63.67) – (389.53,130.81) – (393.78, 32.68) – (398.04, 84.54) – (402.30, 61.39) – (406.55, 88.65) – (410.81, 83.42) – 
(415.06, 67.56) – (419.32, 74.15); ( 61.82, 76.97) – ( 66.08, 78.22) – ( 70.33, 74.99) – ( 74.59, 79.27) – ( 78.84, 77.21) – ( 83.10, 72.82) – ( 87.36, 78.10) – ( 91.61, 82.04) – ( 95.87, 75.72) – (100.12, 75.63) – (104.38, 71.38) – (108.64, 80.92) – (112.89, 76.95) – (117.15, 76.73) – (121.40, 85.61) – (125.66, 73.16) – (129.92, 77.09) – (134.17, 77.34) – (138.43, 75.74) – (142.68, 76.65) – (146.94, 75.70) – (151.19, 78.34) – (155.45, 70.59) – (159.71, 79.03) – (163.96, 75.84) – (168.22, 81.50) – (172.47, 81.47) – (176.73, 76.45) – (180.99, 75.02) – (185.24, 76.51) – (189.50, 76.45) – (193.75, 77.78) – (198.01, 73.81) – (202.27, 77.18) – (206.52, 75.29) – (210.78, 76.30) – (215.03, 74.25) – (219.29, 78.77) – (223.55, 81.45) – (227.80, 79.19) – (232.06, 72.61) – (236.31, 78.27) – (240.57, 77.54) – (244.83, 77.33) – (249.08, 74.69) – (253.34, 79.15) – (257.59, 74.56) – (261.85, 69.19) – (266.11, 69.33) – (270.36, 80.61) – (274.62, 81.48) – (278.87, 81.28) – (283.13, 71.25) – (287.39, 81.91) – (291.64, 86.56) – (295.90, 75.55) – (300.15, 71.21) – (304.41, 74.58) – (308.67, 78.09) – (312.92, 73.24) – (317.18, 76.34) – (321.43, 81.14) – (325.69, 73.76) – (329.94, 86.12) – (334.20, 75.79) – (338.46, 78.34) – (342.71, 76.48) – (346.97, 79.69) – (351.22, 73.38) – (355.48, 81.17) – (359.74, 72.54) – (363.99, 75.43) – (368.25, 80.40) – (372.50, 78.23) – (376.76, 84.61) – (381.02, 77.63) – (385.27, 74.31) – (389.53, 69.21) – (393.78, 73.73) – (398.04, 82.27) – (402.30, 77.42) – (406.55, 82.49) – (410.81, 71.21) – (415.06, 75.59) – (419.32, 77.09); ( 61.82, 76.97) – ( 66.08, 78.00) – ( 70.33, 73.88) – ( 74.59, 78.40) – ( 78.84, 77.41) – ( 83.10, 71.93) – ( 87.36, 78.13) – ( 91.61, 82.09) – ( 95.87, 74.75) – (100.12, 74.86) – (104.38, 70.77) – (108.64, 81.89) – (112.89, 77.91) – (117.15, 76.96) – (121.40, 85.70) – (125.66, 71.22) – (129.92, 77.12) – (134.17, 78.63) – (138.43, 75.48) – (142.68, 74.94) – (146.94, 75.38) – (151.19, 78.99) – (155.45, 70.15) – (159.71, 80.35) – 
(163.96, 77.20) – (168.22, 79.59) – (172.47, 82.01) – (176.73, 76.80) – (180.99, 74.77) – (185.24, 78.80) – (189.50, 76.02) – (193.75, 75.62) – (198.01, 73.02) – (202.27, 77.52) – (206.52, 75.75) – (210.78, 77.25) – (215.03, 75.25) – (219.29, 77.23) – (223.55, 82.78) – (227.80, 76.83) – (232.06, 73.81) – (236.31, 79.67) – (240.57, 77.37) – (244.83, 76.34) – (249.08, 72.79) – (253.34, 80.68) – (257.59, 74.44) – (261.85, 70.44) – (266.11, 72.47) – (270.36, 80.40) – (274.62, 79.38) – (278.87, 79.41) – (283.13, 71.15) – (287.39, 85.41) – (291.64, 86.44) – (295.90, 72.92) – (300.15, 69.91) – (304.41, 74.71) – (308.67, 78.71) – (312.92, 74.48) – (317.18, 78.95) – (321.43, 78.69) – (325.69, 72.37) – (329.94, 84.38) – (334.20, 76.30) – (338.46, 80.02) – (342.71, 75.82) – (346.97, 80.65) – (351.22, 70.64) – (355.48, 82.00) – (359.74, 73.15) – (363.99, 75.74) – (368.25, 83.01) – (372.50, 77.32) – (376.76, 84.13) – (381.02, 76.39) – (385.27, 75.47) – (389.53, 68.41) – (393.78, 73.54) – (398.04, 81.57) – (402.30, 77.37) – (406.55, 82.55) – (410.81, 70.08) – (415.06, 76.76) – (419.32, 80.11); ( 61.82, 76.97) – ( 66.08, 79.64) – ( 70.33, 76.87) – ( 74.59, 77.05) – ( 78.84, 75.58) – ( 83.10, 70.15) – ( 87.36, 79.23) – ( 91.61, 80.76) – ( 95.87, 79.11) – (100.12, 75.93) – (104.38, 72.53) – (108.64, 75.78) – (112.89, 80.07) – (117.15, 76.09) – (121.40, 80.43) – (125.66, 75.08) – (129.92, 77.16) – (134.17, 74.39) – (138.43, 75.45) – (142.68, 77.97) – (146.94, 77.80) – (151.19, 77.68) – (155.45, 74.79) – (159.71, 76.81) – (163.96, 80.26) – (168.22, 76.93) – (172.47, 81.03) – (176.73, 74.89) – (180.99, 75.05) – (185.24, 75.43) – (189.50, 78.09) – (193.75, 78.39) – (198.01, 75.26) – (202.27, 76.64) – (206.52, 74.15) – (210.78, 79.59) – (215.03, 78.75) – (219.29, 74.51) – (223.55, 79.54) – (227.80, 78.02) – (232.06, 73.94) – (236.31, 76.91) – (240.57, 77.80) – (244.83, 77.38) – (249.08, 75.61) – (253.34, 78.54) – (257.59, 80.25) – (261.85, 71.44) – (266.11, 76.75) – (270.36, 77.56) – 
(274.62, 78.65) – (278.87, 76.87) – (283.13, 73.69) – (287.39, 74.51) – (291.64, 83.19) – (295.90, 78.71) – (300.15, 78.67) – (304.41, 73.39) – (308.67, 75.24) – (312.92, 77.31) – (317.18, 82.09) – (321.43, 78.28) – (325.69, 71.08) – (329.94, 78.11) – (334.20, 77.08) – (338.46, 77.08) – (342.71, 76.42) – (346.97, 76.48) – (351.22, 78.34) – (355.48, 78.92) – (359.74, 76.88) – (363.99, 74.70) – (368.25, 80.64) – (372.50, 74.97) – (376.76, 81.04) – (381.02, 74.68) – (385.27, 75.41) – (389.53, 74.57) – (393.78, 73.66) – (398.04, 80.31) – (402.30, 74.99) – (406.55, 81.41) – (410.81, 74.72) – (415.06, 78.66) – (419.32, 76.26); Discussion ========== Our study reveals that there is ample scope for improving the AK estimator used by the Census Bureau. We would like to emphasize the following undesirable features of the AK estimation method: \(i) the method used to compute optimal coefficient is crude — the best coefficients are just picked from 10 different values. Our R package, based on the built in R Nelder-Mead algorithm, can provide the optimal coefficients within 8 digits of precision in a reasonable time. \(ii) The stationarity assumption on the variances and covariances of the month-in-sample estimators over a period of 10 years does not seem realistic, and to our knowledge, has not been tested before. Besides, even though the stationary model was reasonable, the complexity of the CPS design makes it difficult to evaluate the quality of the estimators used for that model. The difficulty to propose a stochastic model in advance for the best linear estimators in the CPS was already pointed out earlier by [@jones1980best Sec. 4]. Our evaluation study shows that the AK estimators are very sensitive to the choices of A and K, and that the errors in the estimation of the variances and covariances may lead to poor performance of the AK estimators. 
We add that errors in estimating the variances and covariances of the month-in-sample estimators also affect the performance of empirical best linear unbiased estimators. \(iii) Using the Bailar model for the bias in our study, we showed that the AK estimator is very sensitive to rotation group bias. There is currently no satisfactory way to correct the AK estimator for the rotation bias. The Bailar model relies on an arbitrary constraint on the month-in-sample biases and a strong stationarity assumption on the month-in-sample bias, and should not be used unless some re-interview study can justify the Bailar model. \(iv) The computation of composite weights in the CPS to calibrate the weights on the AK estimators will affect all other weighted estimators. Although [@lent1996effect] showed that there was not a big effect on the estimates, considering the concerns about AK estimators listed above, we do not think that the use of those composite weights is a good option. \(v) The CPS data analysis shows that the AK estimates are consistently smaller than the corresponding direct survey-weighted estimates for the period 2005-2012. This is also a source of concern. The composite regression estimator does not rely on an estimate of the variance and covariance matrix. In our simulation study, it appears to be less sensitive to rotation group bias, and it bounces around the survey-weighted estimates when applied to the real CPS data. Our study encourages the use of the regression composite method in US labor force estimation.
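As a toy illustration of point (i), the coarse grid search over 10 coefficient values can be replaced by a continuous Nelder-Mead minimisation. The sketch below uses Python/SciPy as a stand-in for the R routine mentioned in the text, and a purely hypothetical quadratic surface stands in for the variance of the AK estimator as a function of $(A,K)$:

```python
# Toy illustration: continuous Nelder-Mead search for compositing coefficients,
# instead of picking the best pair from a coarse grid of 10 values.
# The "variance surface" below is a hypothetical stand-in, NOT the CPS one.
from scipy.optimize import minimize

def toy_variance(ak):
    """Hypothetical smooth variance surface with minimum at A=0.3, K=0.4."""
    a, k = ak
    return (a - 0.3) ** 2 + 2.0 * (k - 0.4) ** 2 + 1.0

# Coarse grid search (the approach criticised in point (i)):
grid = [i / 10 for i in range(10)]
best_grid = min(((a, k) for a in grid for k in grid), key=toy_variance)

# Continuous Nelder-Mead search:
res = minimize(toy_variance, x0=[0.5, 0.5], method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-9})

print("grid optimum:", best_grid)              # limited to multiples of 0.1
print("Nelder-Mead optimum:", res.x.round(6))  # close to (0.3, 0.4)
```

On a surface whose optimum does not happen to sit on the grid, the continuous search recovers it to many digits, which is the point made about the R package.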
To facilitate and encourage further research on this important topic, we make the following three R packages, developed under this project, freely available: (i) the package dataCPS downloads CPS public data files and transforms them into R data sets ([@github:dataCPS]); (ii) the package CompositeRegressionEstimation allows computation of the AK, best AK, composite regression, linear and best linear estimators ([@github:CompositeRegressionEstimation]); (iii) the package pubBonneryChengLahiri2016 allows reproduction of all the computations and simulations of this paper [@github:pubBonneryChengLahiri2016]. Bailar, B. A. (1975). . , 70(349):23–30. Beaumont, J.-F. and Bocci, C. (2005). . , (June):1–6. Bell, P. (2001). . , 27(1):53–63. Bonn[é]{}ry, D. B. (2016a). . https://github.com/DanielBonnery/CompositeRegressionEstimation. Bonn[é]{}ry, D. B. (2016b). . https://github.com/DanielBonnery/dataCPS. Bonn[é]{}ry, D. B. (2016c). . https://github.com/DanielBonnery/pubBonneryChengLahiri2016. Cassel, C., Sarndal, C., and Wretman, J. (1977). . (2006). . Technical Report 66, U.S. Census Bureau. Fuller, W. A. and Rao, J. N. K. (2001). . , 27(1):45–51. Gambino, J., Kennedy, B., and Singh, M. P. (2001). . , 27(1):65–74. Gurney, M. and Daly, J. F. (1965). . In [*Proceedings of the Social Statistics Section, American Statistical Association*]{}, volume 242, page 257. Hansen, M. H., Hurwitz, W. N., Nisselson, H., and Steinberg, J. (1955). . , 50(271):701–719. Jones, R. G. (1980). . , 42(2):221–226. Lent, J., Miller, S. M., Cantwell, P. J., and Duff, M. (1996). . In [*Proceedings of the Section on Survey Research Methods, American Statistical Association*]{}, volume 91, pages 130–139. Lent, J., Miller, S. M., Cantwell, P. J., and Duff, M. (1999). . , 15(3):431–448. Salonen, R. (2007). . , 8(3):503–517. Searle, S. (1994). . , 210:139–151. Singh, A. C., Kennedy, B., and Wu, S. (2001). . , 27(1):33–44. Singh, A. C., Kennedy, B., Wu, S., and Brisebois, F. (1997). . , pages 300–305.
Singh, A. C. and Merkouris, P. (1995). . , pages 420–425. Yansaneh, I. S. and Fuller, W. A. (1998). . , 24:31–40. Acknowledgements {#acknowledgements .unnumbered} ================ The research of the first and third authors has been supported by the U.S. Census Bureau Prime Contract No: YA1323-09-CQ-0054 (Subcontract No: 41-1016588). [The programs used for the simulations have been made available on the github repository [@github:pubBonneryChengLahiri2016]]{}. Description of the CPS design {#ap:cps} ============================= This section uses CPS notations for rotation groups. Let $U$ be the intersection of a given basic primary sampling unit component (BPC) and one of the frames used in CPS (see [@CPS2006]). The BPC is a set of clusters of about four housing units; the clusters are the ultimate sampling units (USU). Let $N$ be the number of clusters in $U$. The clusters in $U$ are sorted according to geographical and demographic characteristics and then indexed by $k=1\ldots N$. In the sequel, we will designate a cluster by its index. Let $SI_w$ be the adjusted within-PSU sampling interval, as defined in [@CPS2006 p. 3-11]. Let $n=\left\lfloor(21\times 8\times SI_w)^{-1} N\right\rfloor$, where $\lfloor.\rfloor$ is the floor function. The number $n$ is the sample size for a sample rotation group. The drawing of the USU within the PSU consists of generating a random number $X$ from the uniform law on $[0,1]$. For $i=1\ldots n$, $j=1\ldots 8$, $\ell=85\ldots (85+15)$, let $k_{i,j,\ell}$ denote the cluster $k_{i,j,\ell}=\lfloor (X+8\times (i-1)+j)\times SI_w+(\ell-85)\rfloor$. Then, with the notations of [@CPS2006] for $\ell=85\ldots 100 $, $j=1\ldots 8$, the rotation group $j$ of sample $A_\ell$ is $$A_{\ell,j}=\left\{k_{i,j,\ell}\mid i=1\ldots n\right\} .$$ For a given month the sample consists of 8 rotation groups. There are 120 months in a period of 10 years.
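The systematic draw just described can be sketched as follows (a simplified illustration; the values of $N$ and $SI_w$ are made up, not actual CPS figures, while the cluster-index formula is the one given in the text):

```python
# Sketch of the within-PSU systematic draw described above.
# N and SI_w are made-up illustrative values, not actual CPS figures.
import math
import random

N = 10_000     # number of clusters in U (illustrative)
SI_w = 3.0     # adjusted within-PSU sampling interval (illustrative)
n = math.floor(N / (21 * 8 * SI_w))   # rotation-group sample size

random.seed(0)
X = random.random()  # uniform random start on [0, 1]

def cluster(i, j, ell):
    """Index k_{i,j,ell} = floor((X + 8*(i-1) + j) * SI_w + (ell - 85))."""
    return math.floor((X + 8 * (i - 1) + j) * SI_w + (ell - 85))

# Rotation group j of sample A_ell: {k_{i,j,ell} : i = 1..n}
def rotation_group(j, ell):
    return {cluster(i, j, ell) for i in range(1, n + 1)}

A_85_1 = rotation_group(1, 85)
print(len(A_85_1), "clusters in rotation group 1 of sample A85")
```

Because consecutive values of $i$ advance the index by $8\,SI_w$, each rotation group contains exactly $n$ distinct clusters.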
For $m=1\ldots 120$, $j'\in \left\{1,\ldots,8\right\}$, $\ell_{m,j'}$ and $j_{m,j'}$ are given by: $j_{m,j'}=m+j'-1-8\times \left\lfloor(m+j'-2)/8\right\rfloor$. If $j'\in\left\{1,\ldots,4\right\}$, $\ell_{m,j'}=85+\left\lfloor{(m+j'-2)/8}\right\rfloor$. If $j'\in\left\{5,\ldots,8\right\}$, $\ell_{m,j'}=86+\left\lfloor(m+j'-2)/8\right\rfloor$. The sample of the $m$th month, counting from November 2009, is $$s_m=\bigcup_{j'=1}^8 A_{\ell_{m,j'},j_{m,j'}}.$$ For example, June 2013 corresponds to $m=44$, counting from November 2009. Then $$\begin{aligned} \ell_{m,1}&=85+\left\lfloor43/8\right\rfloor=90& j_{m,1}&=44-8\times\left\lfloor43/8\right\rfloor=4\\ \ell_{m,2}&=85+\left\lfloor44/8\right\rfloor=90& j_{m,2}&=45-8\times\left\lfloor44/8\right\rfloor=5 \\ \ell_{m,3}&=85+\left\lfloor45/8\right\rfloor=90& j_{m,3}&=46-8\times\left\lfloor45/8\right\rfloor=6 \\ \ell_{m,4}&=85+\left\lfloor46/8\right\rfloor=90& j_{m,4}&=47-8\times\left\lfloor46/8\right\rfloor=7 \\ \ell_{m,5}&=86+\left\lfloor47/8\right\rfloor=91& j_{m,5}&=48-8\times\left\lfloor47/8\right\rfloor=8 \\ \ell_{m,6}&=86+\left\lfloor48/8\right\rfloor=92& j_{m,6}&=49-8\times\left\lfloor48/8\right\rfloor=1 \\ \ell_{m,7}&=86+\left\lfloor49/8\right\rfloor=92& j_{m,7}&=50-8\times\left\lfloor49/8\right\rfloor=2 \\ \ell_{m,8}&=86+\left\lfloor50/8\right\rfloor=92& j_{m,8}&=51-8\times\left\lfloor50/8\right\rfloor=3 \end{aligned}$$ We can check from the CPS rotation chart [@CPS2006 Fig. 3-1] that the sample of June 2013 consists of the 4th, 5th, 6th and 7th rotation groups of A90, of the 8th rotation group of A91, and of the 1st, 2nd and 3rd rotation groups of A92: $$S_{\text{June 2013}}=A_{90,4}\cup A_{90,5}\cup A_{90,6}\cup A_{90,7}\cup A_{91,8}\cup A_{92,1}\cup A_{92,2} \cup A_{92,3}.$$ \[appendix:notation\]
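The index arithmetic above can be checked mechanically. A minimal Python sketch of the formulas for $\ell_{m,j'}$ and $j_{m,j'}$, reproducing the June 2013 ($m=44$) example:

```python
# Rotation-group indices for the sample of month m (counted from November 2009),
# implementing the formulas for ell_{m,j'} and j_{m,j'} given above.
def month_sample(m):
    """Return the list of (ell, j) pairs whose rotation groups form s_m."""
    pairs = []
    for jp in range(1, 9):               # j' = 1..8
        j = m + jp - 1 - 8 * ((m + jp - 2) // 8)
        base = 85 if jp <= 4 else 86
        ell = base + (m + jp - 2) // 8
        pairs.append((ell, j))
    return pairs

# June 2013 corresponds to m = 44; the worked example gives
# A_{90,4}, A_{90,5}, A_{90,6}, A_{90,7}, A_{91,8}, A_{92,1}, A_{92,2}, A_{92,3}.
print(month_sample(44))
```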
--- abstract: 'Clustering statistics are compared in the Automatic Plate Machine (APM) and the Edinburgh/Durham Southern Galaxy Catalogue (EDSGC) angular galaxy surveys. Both surveys were independently constructed from scans of the same adjacent UK IIIa–J Schmidt photographic plates with the APM and COSMOS microdensitometers, respectively. The comparison of these catalogs is a rare practical opportunity to study systematic errors, which cannot be achieved via simulations or theoretical methods. On intermediate scales, $0.1^\circ < \theta < 0.5^\circ$, we find good agreement for the cumulants or reduced moments of counts in cells up to sixth order. On larger scales there is a small disagreement due to edge effects in the EDSGC, which covers a smaller area. On smaller scales, we find a significant disagreement that can only be attributed to differences in the construction of the surveys, most likely the dissimilar deblending of crowded fields. The overall agreement of the APM and EDSGC is encouraging, and shows that the results for intermediate scales should be fairly robust. On the other hand, the systematic deviations found at small scales are significant in a regime where comparison with theory and simulations is possible. This is an important fact to bear in mind when planning the construction of future digitized galaxy catalogs.' author: - | István Szapudi$^{1}$ E. Gaztañaga $^{2}$\ 1. University of Durham, Department of Physics, South Road, Durham DH1 3LE, United Kingdom\ 2. Institut d’Estudis Espacials de Catalunya, Research Unit (CSIC), Edf. Nexus-104 - c/ Gran Capitan 2-4, 08034 Barcelona title: Comparison of the Large Scale Clustering in the APM and the EDSGC Galaxy Surveys --- large scale structure of the universe — methods: numerical Introduction ============ Clustering measurements from galaxy catalogues have become an important tool to test models of structure formation.
Large sophisticated data sets are currently under analysis or construction. To interpret high precision measurements of clustering, a detailed understanding of the uncertainties is required. Errors can arise from the finite size and geometry of the catalog, such as discreteness, edge, and finite volume effects (“cosmic errors”); from insufficient sampling by the measurement technique itself (“measurement errors”); and, finally, from data reduction, object detection, magnitude uncertainties, etc. (“systematic errors”). Studying the first two classes is by no means simple, but theoretical methods (e.g., Szapudi & Colombi 1996, hereafter SC96) and $N$-body simulations yield reasonable estimates. Systematic errors are even more difficult to investigate, and a unique opportunity is provided when the same raw data are reduced independently by two research teams. The goal of this [*Letter*]{} is to seize on such an opportunity: the APM and the EDSGC galaxy surveys were constructed independently from the same underlying photographic plates. In particular, we investigate the degree of reproducibility of the higher order clustering measurements, i.e. to what extent different choices during the construction of a galaxy catalog can lead to different estimates of clustering. The most widespread tools to study clustering in a galaxy catalog are the two-point correlation function, $\xi_2$, and the amplitudes of the higher order correlation functions. The latter are usually expressed in the form of hierarchical ratios: $S_J= \xi_J/\xi_2^{J-1}$, where $\xi_J$ is the $J$-order correlation function or reduced cumulant. The predictions for $S_J$’s in both perturbation theory and $N$-body simulations [@peebles80; @bern92; @jbc93; @bern94; @gb95; @bge95; @cbh96; @bg96; @sqsl97] can be used to test the gravitational instability picture, the form of the initial conditions and the biasing parameters [@fg94; @gf94].
The $S_J$’s are more difficult to measure and interpret than the two-point function; however, at low orders, they are less affected by intrinsic observational uncertainties, such as time evolution or projection effects. In §2 we summarize the properties of the two catalogues; the method of analysis and the actual comparison follow in §3 and §4. §5 discusses the implications of the results. The APM and Edinburgh/Durham Southern Galaxy Catalogues {#sec:catalogues} ======================================================= The APM Galaxy Survey covers 4300 square degrees on the sky and contains over 2 million galaxies to a limiting apparent magnitude of $b_{J} \le 20.5$ [@mad90a; @mad90b; @mad90c; @mad96]. It was constructed from APM (a microdensitometer) scans of 188 adjacent UK IIIa–J Schmidt photographic plates. In an extensive analysis of the systematic errors involved in plate matching, Maddox [*et al*]{} (1996) have placed an upper limit of $\delta w(\theta) \sim 1 \times 10^{-3}$ on the likely contribution of the systematic errors to the angular correlations. The shape of the angular correlation function measured from the survey at scales of $\theta > 1^{\circ}$ indicates that the universe contains more structure on large scales than is predicted by the standard Cold Dark Matter scenario (Maddox [*et al*]{} 1990c). The higher order correlations in the APM were measured by [@gaz94; @sdes95; @ss97a]. The EDSGC is a catalogue of 1.5 million galaxies covering $\simeq1000$ square degrees centered on the South Galactic Pole. The database was constructed from COSMOS scans (a microdensitometer) of 60 adjacent UK IIIa–J Schmidt photographic plates (a subset of the APM plates) and also reaches a limiting magnitude of $b_{J,EDSGC}=20.5$. The entire catalogue has $<10\%$ stellar contamination and is ${\lower.5ex\hbox{{$\; \buildrel > \over \sim \;$}}}95\%$ complete for galaxies brighter than $b_j=19.5$ [@heydon89].
The two–point galaxy angular correlation function measured from the EDSGC has been presented by Collins, Nichol, & Lumsden (1992) and Nichol & Collins (1994). The higher order correlations in the EDSGC were measured by Szapudi, Meiksin, & Nichol 1996, hereafter SMN96. We emphasize that the raw data for both catalogs comprise the same UK IIIa-J Schmidt plates (a smaller subset in the case of the EDSGC), while the hardware used to digitize the plates and the software used to detect and classify objects and to measure their apparent magnitudes were different. In particular, different methods of calibration and plate matching, and different deblending algorithms, were employed. As a consequence, there is a small offset in the magnitude scales of the two catalogues [@nichol92], even though a simple one-to-one mapping can be established. Magnitude cuts for the comparison of the statistics were determined by practical considerations. For the APM we follow G94 and use $ m_{\rm APM}=17-20$, which is half a magnitude brighter than the completeness limit. For the EDSGC catalogue, which is complete to about $ m_{\rm EDS}=20.3$ magnitude, we follow SMN96 to use a magnitude cut of $16.98 \le m_{\rm EDS} \le 19.8$, which is again half a magnitude brighter than the completeness limit. Based on matching the surface densities listed in SDES, these magnitude ranges approximately correspond to each other. This facilitates the direct cross-comparison of the results. The Method of Analysis ====================== The calculation of the higher order correlation functions followed closely the method outlined in [@smn96]. It consists of estimating the probability distribution of counts in cells, calculating the factorial moments, and extracting the normalized, averaged amplitudes of the $J$-point correlation functions. For the most crucial first step the infinite oversampling algorithm of [@s97] was used. Only a few of the most important definitions are presented below.
The average of the $J$-point angular correlation functions on a scale $\ell$ is defined by $${\bar{\omega}}_J (\ell)=A(\ell)^{-J}\int d^2r_1\ldots d^2r_J ~\omega_J(r_1,\ldots,r_J),$$ where $\omega_J$ is the $J$-point correlation function in the two dimensional survey, and $A(\ell)$ is the area of a square cell of size $\ell$. The hierarchical ratios, $s_J$, are defined in the usual way, $$s_J = \frac{{\bar{\omega}}_J}{{\bar{\omega}}_2^{J-1}}.$$ The raw counts in cells measurements are reduced to a set consisting of $n,{\bar{\omega}}_2, s_J$, which forms a suitable basis for subsequent comparison of the statistics; $n$ denotes the average count in a cell. Counts in cells were measured in square cells with sizes in the range $0.015125^\circ-2^\circ$ (corresponding to $0.1-14{\rm {h^{-1}Mpc }}$ with $D \simeq 400{\rm {h^{-1}Mpc }}$, the approximate depth of the catalogues). Practical considerations determined this scale range: the upper scale was chosen to minimize the edge effects from cut-out holes, while the smallest scale approaches that of galaxy halos for the typical depth of the catalogs. For details see [@smn96]. Note that physical coordinates were used in both surveys to eliminate the effects of distortion. Comparison ========== The amplitudes of the measured $J$-point correlation functions for $2 \le J \le 6$ are displayed in a series of figures. To facilitate comparison with perturbation theory, angular scales in all graphs were converted to an equivalent circular cell size, $\theta$, i.e. $\pi \theta^2 = \ell^2$. Note that square cells were used for the measurements, up to a small deformation due to projection. This has a negligible effect through slightly differing form factors, which cancels out anyway when comparing the results from the two catalogs with [*each other*]{}. The cell size in the APM pixel maps is defined by dividing the full APM area over the number of cells.
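The reduction from counts in cells to $n$, ${\bar{\omega}}_2$ and $s_J$ outlined above can be sketched to third order as follows. The toy counts are Poisson samples of a made-up two-level intensity field (not survey data), chosen so that the underlying field has ${\bar{\omega}}_2=0.75$ and $s_3=4/3$ exactly:

```python
# Minimal counts-in-cells reduction: factorial moments -> wbar_2 and s_3.
# The synthetic "cells" below are Poisson samples of a made-up two-level
# intensity field, standing in for galaxy counts in square cells.
import math
import random

random.seed(42)

def poisson(lam):
    """Stdlib-only Poisson sampler (Knuth's method; fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

# Made-up clustered intensity: 3/4 of the cells have mean 2, 1/4 have mean 10,
# so the underlying field has wbar_2 = 0.75 and s_3 = 4/3 exactly.
cells = [poisson(2.0) for _ in range(150_000)] + \
        [poisson(10.0) for _ in range(50_000)]

M = len(cells)
F1 = sum(cells) / M                                   # nbar
F2 = sum(N * (N - 1) for N in cells) / M              # 2nd factorial moment
F3 = sum(N * (N - 1) * (N - 2) for N in cells) / M    # 3rd factorial moment

# Poisson sampling makes the factorial moments estimate the intensity moments:
# F2/F1^2 = 1 + wbar_2,  F3/F1^3 = 1 + 3*wbar_2 + wbar_3.
w2 = F2 / F1**2 - 1
w3 = F3 / F1**3 - 3 * w2 - 1
s3 = w3 / w2**2

print(f"nbar={F1:.3f}  wbar_2={w2:.3f}  s_3={s3:.3f}")
```

The factorial moments automatically subtract the Poisson shot noise, which is why they, rather than ordinary moments, are the natural intermediate quantities here.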
The corresponding scale is about $5\%$ smaller than previously used in G94 and SDES, where the cell size was defined as the mean equal area projection size. The mean density of the EDSGC counts is about $10\%$ smaller than that of the APM (see also SMN96). This is partially due to star mergers, which account for $5\%$ of the APM images in the $b_J=17-20$ slice (Maddox [[*et al. *]{}]{}1990). The remaining $5\%$ can be attributed to a small difference in the depths due to a slight offset in the magnitude slices. Figure \[sigma\] shows the variance of counts-in-cells as a function of the cell radius in degrees. The full squares linked by the solid line correspond to the measurement in the EDSGC catalogue. The small differences in the mean depth mentioned above should produce an upward shift of about $10\%$ in the EDSGC correlation amplitude, which is confirmed by the Figure. The open squares display the measurements by G94 for the full APM catalogue, while the short-dashed line is the recalculation of the same with infinite sampling. The long-dashed line is the measurement of a subregion of the APM which overlaps with the EDSGC ($EDSGC\cap APM$). The latter agrees well within the errors with the full APM measurements and is slightly lower than the corresponding ${\bar{\omega}}_2$ in the EDSGC catalogue, roughly as expected from the mentioned differences. There is an overall agreement between all estimates, at least on large scales. On smaller scales the APM appears to produce slightly lower values; this is probably related to the larger discrepancy of the hierarchical amplitudes, which will be discussed next. Figures \[s3\]-\[sj\] compare the skewness, $s_3$, and the higher order $s_J$’s, $J=4,5,6$. The following discussion is equally applicable to all orders; the separate graph for $J=3$ shows more details.
Contrary to what happened for ${\bar{\omega}}_2$, a small difference in the depth should not change the hierarchical ratios, as the depth cancels out in the normalization (see [@gp77]). The Figures follow this expectation. For scales of about $0.2^\circ$ to $2^\circ$ the agreement is good between the full EDSGC and the same region of the APM ($EDSGC\cap APM$ region). The increase of the $s_J$’s at the largest scales ($\theta > 0.5^\circ$) in the $EDSGC\cap APM$ region is due to edge and finite volume effects: a similar trend appears in the same region for both catalogues. On these large scales, the full APM measurements are more accurate since the larger area decreases cosmic errors. Note that for the measurement represented with the short dashes the edges of the catalog were cut out generously to eliminate any possible inhomogeneity. In addition, the masks were fully excluded, while the original measurement followed a somewhat different procedure (see G94 for details). This could account for the slight difference at the largest scales. The $s_J$’s measured in the $EDSGC\cap APM$ region of the APM are compatible with the errors of the full APM measurements at most scales. At scales larger than $0.5^\circ$, edge effects start to dominate the errors of the smaller sample. For $0.1^\circ \le \theta \le 0.5^\circ$ the $EDSGC\cap APM$ region appears to produce slightly lower hierarchical ratios than the full APM. These values in some cases are outside of the formal errorbars. The reason for this is that dividing the sample into subsamples is an approximate estimate of the errors, and it can lead to underestimation as the subsamples are not fully independent. Moreover, for a non-Gaussian error distribution, values outside the formal errorbars are not unlikely [@sc96]. At the smaller scales there is a significant statistical difference between the APM and the EDSGC. This is not due to finite volume effects, since it persists when only the same region of the sky is used.
The identical geometry with the same magnitude cut excludes edge and discreteness effects as well, and thus all cosmic errors. The difference is not due to the method of estimation either, since the original low sampling measurement by G94 gives similar results to the recalculation with infinite oversampling, which fully eliminates measurement errors [@sc96; @s97]. The only remaining possibility is that the results should be attributed to systematics. Discussion ========== According to SMN97, insufficient sampling can cause severe underestimation of the higher order $S_J$’s. This could be a possible cause for the disagreement between the EDSGC and the APM on scales smaller than 0.2 degrees, since the original APM measurements by G94 were performed on a density pixel map with a resolution given by the smallest scale shown in the figures. However, the infinite sampling $S_J$’s are in good agreement with the original analysis by G94. Although, as expected, the infinite oversampling results at small scales seem slightly higher than the corresponding low sampling ones, the Figures show that this effect is not significant, and it can be discounted as the main reason for the disagreement between the APM and the EDSGC. The discrepancies on small scales are therefore due to intrinsic differences in the catalogues. Since both catalogues use the [*same*]{} raw photographic plates, the difference discovered with the [*same*]{} statistical methods must lie with the different choices of hardware and software used during the scans and/or the data reduction. The dissimilarity in the deblending algorithms is a particularly good candidate to account for the detected statistical difference (G. Efstathiou, private communication). However, this point needs further investigation. Previous results and their interpretations on large scales seem unaffected by the detected discrepancies.
In particular, both the APM and the EDSGC higher order correlations are in general agreement with perturbation theory (G94, GF94, BGE95, SMN97). In summary, the results qualitatively support scenarios with gravitational instability arising from Gaussian initial conditions, with little or no biasing. Note that the EDSGC barely probes quasi-linear scales ($R> 8 {\rm {h^{-1}Mpc }}$ or $\theta > 1^\circ$), thus extended perturbation theory and results from $N$-body simulations have to be invoked as a theoretical basis for comparison at smaller scales. There is a hint that, at least qualitatively, the EDSGC results at the smallest scales follow $N$-body simulations more closely, while the drop experienced by the APM reduced moments at the same scales is unexpected, and could be an artificial effect. The new generation of CCD-based redshift and angular surveys, such as the SDSS and 2dF, should be able to clarify this situation and put tighter constraints on biasing models.

[**Acknowledgments**]{} I.S. was supported by DOE and NASA through grant NAG-5-2788 at Fermilab and by the PPARC rolling grant for Extragalactic Astronomy and Cosmology at Durham. E.G. acknowledges support from CSIC, DGICYT (Spain), project PB93-0035, and CIRIT, grants GR94-8001 and 1996BEAI300192. We would like to thank both the APM and EDSGC teams for generously allowing us to use their respective catalogues.

Baugh C.M., Gaztañaga E., Efstathiou G., 1995, MNRAS, 274, 1049 (BGE95)
Baugh C.M., Gaztañaga E., MNRAS, 280, L37
Bernardeau, F. 1992, , 292, 1
Bernardeau, F. 1994, , 433, 1
Collins, C. A., Nichol, R. C., & Lumsden, S. L. 1992, , 254, 295
Colombi, S., Bouchet, F.R., & Hernquist, L. 1996, , 458, 14
Frieman, J.A., Gaztañaga, E., 1994, 425, 392
Gaztañaga, E. 1994, , 268, 913 (G94)
Gaztañaga, E., Frieman, J.A., 1994, 437, L13 (GF94)
Gaztañaga, E. & Baugh, C.M., 1995, 273, L1
Groth, E.J., & Peebles, P.J.E. 1977, , 217, 385
Heydon-Dumbleton, N. H., Collins, C. A., & MacGillivray, H. T. 1989, , 238, 379
Juszkiewicz, R., Bouchet, F. R., & Colombi, S. 1993, , 412, L9
Maddox, S. J., Efstathiou, G., Sutherland, W. J., & Loveday, L. 1990a, , 242, 43P
Maddox, S. J., Sutherland, W. J., Efstathiou, G., & Loveday, L. 1990b, , 243, 692
Maddox, S. J., Sutherland, W. J., Efstathiou, G., & Loveday, L. 1990c, , 246, 433
Maddox, S. J., Efstathiou, G., Sutherland, W. J., , 283, 1227
Nichol, R. C. 1992, PhD thesis, University of Edinburgh
Nichol, R. C., & Collins, C. A. 1994, MNRAS, 265, 867
Peebles, P.J.E. 1980, The Large Scale Structure of the Universe (Princeton: Princeton University Press)
Szapudi, I., Dalton, G., Efstathiou, G.P., & Szalay, A. 1995, , 444, 520 (SDES)
Szapudi, I., Meiksin, A., & Nichol, R.C. 1996, , 473, 15 (SMN96)
Szapudi, I., & Colombi, S. 1996, , 470, 131 (SC96)
Szapudi, I., 1997, , accepted
Szapudi, I., Quinn, T., Stadel, J., & Lake, G., 1997, in preparation
Szapudi, I., Szalay, A.S. 1997a, , 481, L1
--- abstract: 'Given a Hamiltonian with a continuous symmetry one can generally factorize that symmetry and consider the dynamics on invariant Hilbert spaces. In Statistical Mechanics this procedure is known as the vertex-IRF map and, in certain cases, such as rotationally invariant Hamiltonians, it can be implemented via group theoretical techniques. Using this map we translate the DMRG method, which applies to 1d vertex Hamiltonians, into a formulation adequate to study IRF Hamiltonians. The advantage of the IRF formulation of the DMRG method (we name it IRF-DMRG) is that the dimensions of the Hilbert spaces involved in numerical computations are smaller than in the vertex-DMRG, since the degeneracy due to the symmetry has been eliminated. The IRF-DMRG admits a natural and geometric formulation in terms of the paths or string algebras used in Exactly Integrable Systems and Conformal Field Theory. We illustrate the IRF-DMRG method with the study of the SOS model, which corresponds to the spin 1/2 Heisenberg chain, and the RSOS models with Coxeter diagram of type A, which correspond to the quantum group invariant XXZ chain.' author: - | [**Germán Sierra$^{1}$ and Tomotoshi Nishino$^{2}$**]{}\ $^{1}$ Instituto de Matemáticas y Física Fundamental,\ C.S.I.C., Madrid, Spain\ $^{2}$ Department of Physics, Faculty of Science,\ Kobe University, Rokkoudai, Kobe, Japan date: October 1996 title: | [**The Density Matrix Renormalization Group** ]{}\ [**Method applied to** ]{}\ [**Interaction Round a Face Hamiltonians**]{} --- PACS numbers: 05.50.+q, 11.10.Gh, 75.10.Jm Introduction {#introduction .unnumbered} ============ The Density Matrix Renormalization Group (DMRG) is a powerful numerical real space RG method introduced by White in 1992 to study quantum lattice Hamiltonians of interest in Condensed Matter and Statistical Physics [@W1].
This method has its roots in Wilson’s solution of the Kondo problem [@Wi], but it is not confined to impurity problems. The DMRG method overcomes the problems of the old Block RG method of the SLAC [@SLAC] and Paris groups [@Paris], which in many cases gives qualitatively correct results but lacks numerical accuracy (for a review of the Block RG method see [@BRG; @hT]). The DMRG is well suited for 1d problems such as spin chains [@WH; @SA], but it has also been applied successfully to ladder systems [@WNS] and large 2d blocks [@W2]. Other related developments are: the generalization of the DMRG to classical systems and its relation with Baxter’s Corner Transfer Matrix [@NO], the variational formulation of the DMRG ground state wave function [@OR], momentum space DMRG [@X], the application of the DMRG to transfer matrices [@Ni], DMRG studies of quantum systems at finite temperature [@X2], and the analytic formulation of the DMRG [@MS1]. The correlation between blocks inherent to the DMRG method has also been implemented in the old Block RG method in references [@qRG; @Role; @CBRG], etc. The purpose of this paper is to generalize the DMRG to a class of models commonly known in Statistical Mechanics as Interaction Round a Face (IRF) or, more simply, Face models [@B]. In these models the lattice variables are labelled by the points (heights) of a graph ${\cal G}$, in such a way that heights located at nearby sites on the lattice must also be nearest neighbours on the graph. The SOS models and the RSOS models, the latter being a restricted class of the former, are the most interesting examples of IRF models, due to their connection with Integrable Models [@B; @ABF], Affine Lie algebras [@Affine], Towers of Multi-Matrix Algebras [@GHJ; @GS1] and Conformal Field Theories (CFT) [@P1]. Another important class of models is given by the vertex models, where the nearby lattice variables are independent, although the Boltzmann weights may satisfy certain conservation laws [@LW].
The well known Heisenberg, t-J and Hubbard models are Hamiltonian or transfer matrix versions of vertex models, which can be studied using the DMRG. It is to the latter class of theories that the standard DMRG applies. We shall propose in this paper to translate the “vertex language”, which is used to formulate the DMRG, into “IRF language”. This translation process is suggested by the fact that some models, like Baxter’s 8-vertex model, can be mapped into IRF models [@B2; @Ji2; @P2]. Moreover, if the vertex Hamiltonian has a symmetry described by some group (or quantum group), then the vertex-IRF map consists of the factorization of that symmetry. In symbolic terms we may write, $${\rm IRF} = \frac{{\rm Vertex}}{ {\rm Symmetry}} \label{0.1}$$ In the case of the Heisenberg model, the factorization of the rotational symmetry has led us to formulate the DMRG in IRF variables. The vertex-IRF map is given in this example by the tensor product decomposition of irreps of $SU(2)$. The heights coincide with the irreps of this group. From (\[0.1\]) it is clear that an advantage of working with IRF variables is that the symmetry present in the vertex Hamiltonian is factorized, and consequently the dimension of the Hilbert spaces involved is much lower. For numerical purposes this property is also important, since it implies a reduction of the computational complexity of the problem. On the other hand, the IRF formulation of the DMRG is the most natural one to discuss its relation with the corner transfer matrix formalism and CFT. The organization of the paper is as follows. In section I we review the basic concepts and tools of the IRF models. In section II we introduce the real space renormalization group method applied to IRF Hamiltonians. In section III we define the density matrix for IRF states and use it to propose the IRF-DMRG algorithm.
In section IV we apply the IRF-DMRG to the SOS model, which corresponds to the spin 1/2 Heisenberg model, and to the RSOS models whose graph is a Coxeter diagram of type A, which are related to the minimal models of Belavin, Polyakov and Zamolodchikov [@BPZ]. In section V we describe the vertex-IRF map for Hamiltonians with spin rotation symmetry, and derive the IRF-DMRG from the vertex-DMRG. Finally we state our conclusions and prospects. I) IRF Models : Basics {#i-irf-models-basics .unnumbered} ====================== The IRF models were introduced by Baxter [@B] as Statistical Mechanical models where the variables are defined on the vertices of the square lattice while the interactions are defined on the faces. This kind of model should be distinguished from the vertex ones, where the variables are located on the edges and the interaction is defined on the vertices where four edges meet [@LW]. In certain cases there will be a deep relationship between these two types of models, given by a vertex-IRF map. The most interesting class of IRF models are the so called graph-IRF models (for a review see [@qb]). The heights of these models are labelled by the vertices of a graph ${\cal G}$. The allowed configurations are restricted by the constraint that lattice variables that are nearest neighbours in the lattice are also nearest neighbours in the graph ${\cal G}$. A characterization of ${\cal G}$ is given by its incidence matrix $\Lambda_{a,b}$, which is 1 (resp. 0) if the heights $a$ and $b$ are connected (resp. disconnected) by a link of ${\cal G}$. We assume that there is at most one link connecting any two points. The graphs which we shall study in this paper are bipartite, which means that they can be partitioned into two subgraphs, say even and odd, so that any point of one subgraph is connected only to points of the other subgraph. A pair of variables $(a,b)$ is said to be admissible if $\Lambda_{a,b} = 1$.
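The admissibility constraint can be made concrete: the number of admissible height sequences of a given length between fixed boundary heights is just an entry of a power of the incidence matrix $\Lambda$. A minimal illustrative sketch (the helper names are ours):

```python
def incidence_A(r):
    """Incidence matrix of the A_r Coxeter diagram, heights indexed 0..r-1."""
    return [[1 if abs(i - j) == 1 else 0 for j in range(r)] for i in range(r)]

def matmul(X, Y):
    """Plain integer matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def path_counts(Lam, N):
    """(Lam^N)_{a,b} = number of admissible sequences a_0,...,a_N
    with a_0 = a and a_N = b, since each factor of Lam enforces one
    admissible step."""
    r = len(Lam)
    P = [[int(i == j) for j in range(r)] for i in range(r)]
    for _ in range(N):
        P = matmul(P, Lam)
    return P

# e.g. the A_3 graph: two admissible length-4 sequences return to height 0
P4 = path_counts(incidence_A(3), 4)
```

Since $\Lambda$ is symmetric, so is each power, reflecting the fact that admissible sequences can be read in either direction.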
In this terminology, all the heights connected by a link of the square lattice must be admissible. An important example of IRF models are the RSOS models, for which ${\cal G}$ is an ADE Coxeter diagram; these will be studied in detail in section IV. The $A_r$ graph consists of $r$ points labelled by $a= 1, 2, \dots, r$. In the Hamiltonian or transfer matrix formulation of IRF models, a state of the Hilbert space is described by the ket, $$|{\bf a}> = |a_0, a_1, \dots , a_{N}> \label{1.1}$$ where $(a_i, a_{i+1})$ is an admissible pair. There is a geometrical interpretation of the IRF states as paths on a Bratelli diagram, which is constructed by folding the graph as in figure 1, and repeating the pattern along the “x-axis” [@GHJ; @O]. A path $\xi$ on the Bratelli diagram is a succession of points $\{\xi(i)\}_{i=0}^N $ such that the couple $(\xi (i), \xi (i+1) )$ coincides with a link of the diagram (see figure 2). Another important concept is that of a plaquette on the Bratelli diagram. A plaquette is the four-tuple, $$(a,b,c,d) \equiv \left( \begin{array}{ccc} & d & \\ a & & c \\ & b & \end{array} \right), \;\; (a,b) , (b,c), (a,d), (d,c) : {\rm admissible} \label{1.2}$$ and can be identified with the elementary squares or plaquettes of the Bratelli diagram. The IRF models can be defined on periodic or open chains. In this paper we shall concentrate on the latter case. To define the dynamics of the IRF model we shall introduce the plaquette operator $X_i$, which gives the infinitesimal evolution of an IRF state in the neighbourhood of the $i^{\rm th}$ site of the chain, $$X_i | \dots, a_{i-1} , a_{i} , a_{i+1}, \dots> = \sum_{a'_i} R \left( \begin{array}{lll} & a'_i & \\ a_{i-1} & & a_{i+1} \\ & a_i & \end{array} \right) | \dots, a_{i-1}, a'_i, a_{i+1}, \dots> \label{1.3}$$ $R( a_{i-1}, a_i, a_{i+1}, a'_i)$ denotes the local Hamiltonian associated to the plaquette $(a_{i-1}, a_i, a_{i+1}, a'_i)$.
If $R( a_{i-1}, a_i, a_{i+1}, a'_i)$ is replaced by a Boltzmann weight then the operators $X_i$ are those introduced by Baxter in the study of integrable IRF models. In our case we are working with infinitesimal versions of these Boltzmann weights and, on the other hand, we do not need to impose any kind of integrability condition, although it could be interesting to analyse a possible interplay between the RG and integrability. The Hamiltonian acting on an open chain is defined as, $$H = \sum_{i=1}^{N-1} X_i \label{1.4}$$ The time evolution produced by (\[1.4\]) preserves the boundary heights $a=a_0$ and $b=a_N$. We shall call ${\cal H}_{a,b}^N$ the Hilbert space spanned by IRF states with these boundary conditions, $${\cal H}_{a,b}^N = \{ |a_0, a_1, \dots, a_N > || a_0 =a , a_N=b \} \label{1.5}$$ Below we shall consider a generalization of this type of Hilbert spaces characterized by fixed boundary conditions at the ends. To finish this section we shall review other applications of IRF ideas in the context of Particle Physics, which will help us to introduce new concepts in the next section. The connection between integrable statistical models and factorizable S-matrix theories is well known [@Zamos]. For example, the Boltzmann weights of the 6 vertex model can be conveniently identified with the scattering S-matrix of the solitons of the sine-Gordon theory. This kind of interpretation is also possible for IRF Boltzmann weights, which may describe the S-matrix of solitons (or kinks) which connect different vacua. A soliton, say $S_{a,b}(\theta)$, with rapidity $\theta$ is a field configuration which connects the vacuum $a$ at $x=- \infty$ with the vacuum $b$ at $ x= + \infty$. Two solitons, say $S_{a,b}(\theta_1)$ and $S_{b,c}(\theta_2)$, can meet at the common vacuum $b$, and after a certain time the middle vacuum $b$ can turn into a new vacuum, say $d$.
The corresponding S matrix for the process $S_{a,b}(\theta_1) S_{b,c}(\theta_2) \rightarrow S_{a,d}(\theta_2) S_{d,c}(\theta_1)$ is described by the Boltzmann weight associated to the plaquette (\[1.2\]) and rapidity $\theta_1 -\theta_2$. In this terminology the IRF state (\[1.1\]) can be interpreted as a collective state formed by N solitons connecting the vacua $a_0$ and $a_N$ through a series of interpolating vacua $a_1, \dots, a_{N-1}$. There is yet another interpretation of the IRF states. If we view the graph ${\cal G}$ as the target space of a discretized string, then (\[1.1\]) becomes the state of an open string $S_{a,b}$ with fixed boundary conditions at the ends. As in the case of solitons, the strings may join and split in various ways according to graph rules. In the rest of the paper we shall consider as equivalent the interpretations of the IRF states as paths on a Bratelli diagram, kinks of a field theory and discrete strings (see figure 3), $${\rm Path} = {\rm Kink} = {\rm String} \label{1.6}$$ II) RG of IRF-Hamiltonians: Generalities {#ii-rg-of-irf-hamiltonians-generalities .unnumbered} ======================================== The basic problem we want to address is the construction of the ground state and excited states of the IRF Hamiltonian (\[1.4\]) for large values of N. The RG method gives an approximate answer to this difficult problem. Wilson’s strategy for the Kondo problem is to start out from small chains and grow them by adding one site at a time, while keeping a fixed, and usually large, number of states, say m, as “representatives” of the whole chain. This method is known as the onion scheme, to be distinguished from the Wilson-Kadanoff blocking scheme, which consists of partitioning the chain, or more generally the lattice, into blocks which are afterwards renormalized, yielding smaller lattices. The Wilsonian growth process applied to an IRF state is depicted in figure 4.
A string (kink) with states $\ast$ and $a$ at its ends “absorbs” a particle (vacuum) in a state $b$, becoming a new string (kink) with BC’s $(\ast,b)$. Of course the pair $(a,b)$ should be admissible for the absorption process to be possible. The state $\ast$ at the l.h.s. of the string will be kept fixed in the construction of “longer” strings. The strings will grow from their r.h.s. String Hilbert Spaces {#string-hilbert-spaces .unnumbered} ---------------------- Let us call ${\cal H}^S_{\ast, a}$, or more simply ${\cal H}^S_{a}$, the Hilbert space of the strings $S$ which have BC’s $\ast,a$ at their ends. An example of ${\cal H}^S_{a}$ is given by the Hilbert space (\[1.5\]), where N measures the length of the string $S$. The RG method will lead to the construction of Hilbert spaces ${\cal H}^S_{a}$ which are embedded into ${\cal H}^N_{a}$ for some N. The Hilbert space of the string $S$ with an added point $\bullet$ on its r.h.s. will be denoted by ${\cal H}^{S \bullet} _{a}$ and it is given by, $${\cal H}^{S \bullet}_{a} = \oplus_{b | \Lambda_{a,b} =1} {\cal H}^S_{b} \label{2.1}$$ which implies, $${\rm dim}{\cal H}^{S \bullet}_{a} = \sum_b \Lambda_{a,b} \; {\rm dim} {\cal H}^S_{b} \label{2.2}$$ Proceeding as above one can construct longer strings, such as $S \bullet \bullet$. All that is needed is the “fusion” matrix $\Lambda$. For later convenience we give below the bases of the most common string spaces, $$\begin{array}{rl} {\cal H}^{S }_{a} = & \{ |\xi_{a} > \} \\ {\cal H}^{S \bullet }_{b} = & \{ |\xi_{a} \otimes b> | \Lambda_{a,b} =1 \} \\ {\cal H}^{S \bullet \bullet }_{c} = & \{ |\xi_{a} \otimes b \otimes c> | \Lambda_{a,b}= \Lambda_{b,c} = 1 \} \\ \vdots & \end{array} \label{2.3}$$ A generic Hilbert space of the form ${\cal H}^{S \bullet \stackrel{ n }{ \cdots} \bullet}$ will be denoted by ${\cal H}^{S, n \bullet}_a$.
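Eq. (\[2.2\]) is a simple linear recursion for the sector dimensions as the string grows, and it is easy to iterate numerically. The sketch below is illustrative only: we truncate the semi-infinite $A_\infty$ graph of section IV to finitely many heights (indexed by $2j$), a cutoff of our own choosing, and grow a string pinned at height 0.

```python
def grow_dims(Lam, m):
    """One growth step, eq (2.2): dim H^{S.}_a = sum_b Lam[a][b] * dim H^S_b."""
    r = len(m)
    return [sum(Lam[a][b] * m[b] for b in range(r)) for a in range(r)]

# A_infty truncated to heights 0..10 (integer index = 2j), purely for illustration
K = 11
Lam = [[1 if abs(i - j) == 1 else 0 for j in range(K)] for i in range(K)]

dims = [1] + [0] * (K - 1)   # zero-length string pinned at the height * = 0
for _ in range(10):          # absorb ten points
    dims = grow_dims(Lam, dims)
```

After 10 steps only even heights are populated, illustrating the even/odd classification of strings on a bipartite graph; the resulting dimensions (42 at height 0, 90 at height 2, a single state at the top height) are the ballot numbers recovered from the closed formula in section IV.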
The complete Hilbert space of a string $S$ plus n points $\bullet$ consists of the direct sum, $${\cal H}^{S, n \bullet} =\oplus_a {\cal H}^{S, n \bullet}_a \label{2.4}$$ The total dimension of ${\cal H}^{S, n \bullet}$ can be computed from eq.(\[2.2\]), $$m_{S, n \bullet} \equiv {\rm dim} {\cal H}^{S, n \bullet} = \sum_{a_0, \dots, a_n} \Lambda_{a_0,a_1} \dots \Lambda_{a_{n-1}, a_n} \; m_{a_n} \label{2.5}$$ where $m_a = {\rm dim}{\cal H}^{S}_{a} $. It is important to realize that the sum over heights in (\[2.5\]) may not contain all the heights of the graph. For example, for a bipartite graph only even or odd heights will appear at the right end of the string (in this sense the strings associated to a bipartite lattice can be classified as even or odd). Thus if the string $S$ is even (odd) the string plus one point $S \bullet$ will be odd (even), and $S \bullet \bullet $ will again be even (odd), etc. String Operators {#string-operators .unnumbered} ----------------- We shall call string operators those operators ${\cal O}^{S, n\bullet}$ which, acting on the Hilbert space ${\cal H}^{S, n \bullet}$, do not change the height located at the right-hand end of the combined system $S \bullet \stackrel{n}{\cdots}\bullet $.
Their action on the basis (\[2.3\]) is given as follows, $$\begin{array}{rl} {\cal O}^{S } | \xi_a> = & \sum_{\xi'_a} \; |\xi'_a> < \xi'_a | {\cal O}^S | \xi_a> \\ & \\ {\cal O}^{S \bullet } | \xi_a \otimes b > = & \sum_c \sum_{\xi'_c} \; |\xi'_c \otimes b > < \xi'_c \otimes b | {\cal O}^{S \bullet} | \xi_a \otimes b> \\ & \\ {\cal O}^{S \bullet \bullet } | \xi_a \otimes b \otimes c > = & \sum_{d,e} \sum_{\xi'_e} \; |\xi'_e \otimes d \otimes c > < \xi'_e \otimes d \otimes c | {\cal O}^{S \bullet \bullet} | \xi_a \otimes b \otimes c> \\ \vdots & \end{array} \label{2.6}$$ The matrix elements of the operators ${\cal O}^{S, n \bullet}$ appearing in (\[2.6\]) will be denoted by, $$\begin{array}{rl} < \xi'_a | {\cal O}^S | \xi_a> = & {\cal O}_{\xi_a}^{\xi'_a}\left( \ast a \right) \\ < \xi'_c \otimes b | {\cal O}^{S \bullet} | \xi_a \otimes b>= & {\cal O}_{\xi_a}^{\xi'_c} \left( \begin{array}{ccc} & c & \\ \ast & & b \\ & a & \end{array} \right) \\ < \xi'_e \otimes d \otimes c | {\cal O}^{S \bullet \bullet} | \xi_a \otimes b \otimes c> = & {\cal O}_{\xi_a}^{\xi'_e}\left( \begin{array}{cccc} &e & d & \\ \ast & & & c \\ & a & b & \end{array} \right) \\ \vdots & \end{array} \label{2.7}$$ and can be depicted as $2(n+1)-$gons, with a special vertex $\ast$ from which emanate two thick lines representing string states labelled by $\xi$ (see figure 5). The remaining $2n$ thin lines connect admissible pairs of heights. The most important examples of string operators are given by the Hamiltonians ${H}^{S,n \bullet }$. However, not all the string Hamiltonians are independent. Actually, given $ {H}^{S \bullet}$ and the “Boltzmann weight” R (\[1.3\]) one can build up the remaining Hamiltonians ${H}^{S, n \bullet}$ for $n \geq 2$. The first member of this family, namely ${H}^S $, has to be given independently, but quite paradoxically it plays little role in the construction.
As an example we give below the matrix representation of ${H}^{S \bullet \bullet}$, $$\begin{aligned} & { H}_{\xi_a}^{\xi'_e}\left( \begin{array}{cccc} &e & d & \\ \ast & & & c \\ & a & b & \end{array} \right) & \label{2.8} \\ & = { H}_{\xi_a}^{\xi'_e} \left( \begin{array}{ccc} & e & \\ \ast & & b \\ & a & \end{array} \right) \delta_{b,d} \; \Lambda_{b,c} + \delta_{a,e} \; \delta_{\xi_a, \xi'_e} \; R \left( \begin{array}{ccc} & d & \\ a & & c \\ & b & \end{array} \right) & \nonumber\end{aligned}$$ This eq. is depicted in figure 6, where we also show the construction of ${H}^{S, 3 \bullet}$. The RG-operation {#the-rg-operation .unnumbered} ----------------- The key point of the RG method is the construction of the RG-operator $T$ that truncates the Hilbert space ${\cal H}^{S \bullet}$ into ${\cal H}^{S'}$, where $S'$ represents a string with one more site than the string $S$, i.e. $$T : \; {\cal H}^{S \bullet} \longrightarrow {\cal H}^{S'} \label{2.9}$$ The matrix representations of $T$ and its hermitian conjugate $T^\dagger$ are given by, $$\begin{aligned} & T | \xi_a \otimes b > = \sum_{\xi'_b} \; T_{\xi_a}^{\xi'_b} \left( \begin{array}{ccc} \ast & & b \\ & a & \end{array} \right) | \xi'_b> & \label{2.10} \\ & T^\dagger | \xi'_b > = \sum_a \sum_{\xi_a} \; \bar{T}^{\xi_a}_{\xi'_b} \left( \begin{array}{ccc} & a & \\ \ast & & b \end{array} \right) | \xi_a \otimes b> & \nonumber\end{aligned}$$ where $$\left[ T_{\xi_a}^{\xi'_b} \left( \begin{array}{ccc} \ast & & b \\ & a & \end{array} \right) \right]^* = \bar{T}^{\xi_a}_{\xi'_b} \left( \begin{array}{ccc} & a & \\ \ast & & b \end{array} \right) \label{2.11}$$ According to (\[2.9\]), T is an $m_{S'} \times m_{S \bullet}$ matrix. Except for the first few RG-operations we shall always keep the same number of states describing the renormalized system, i.e. $ m= m_S= m_{S'}$.
Both $T$ and $T^\dagger$ can be depicted as triangles with the special vertex $\ast$, which is the origin of two thick edges which symbolize the old and new (renormalized) strings (see figure 7). The truncation operator must satisfy the equation, $$T T^\dagger = {\bf 1} \label{2.12}$$ which guarantees that $T^\dagger T$ is a projection operator which maps ${\cal H}^{S \bullet}$ onto a subspace which is isomorphic to ${\cal H}^{S'}$. Eq.(\[2.12\]) reads in components (see figure 8), $$\sum_a \sum_{\xi_a} T_{\xi_a}^{\xi''_b} \left( \begin{array}{ccc} \ast & & b \\ & a & \end{array} \right) \; \bar{T}^{\xi_a}_{\xi'_b} \left( \begin{array}{ccc} & a & \\ \ast & & b \end{array} \right) = \delta_{\xi'_b , \xi''_b} \label{2.13}$$ Given the operators $T $ and $T^\dagger$ we can renormalize any operator ${\cal O}^{S, n \bullet}$ down to an operator ${\cal O}^{S', (n-1) \bullet}$ by means of the equation, $${\cal O}^{S', (n-1) \bullet} = T \; {\cal O}^{S, n \bullet} \; T^\dagger, \;\;\; (n \geq 1) \label{2.14}$$ which in mathematical terms expresses the commutativity of the following diagram, $$\begin{array}{rcl} {\cal H}^{S',(n-1) \bullet} & \stackrel{{\cal O}^{S', (n-1) \bullet}}{\longrightarrow} & {\cal H}^{S', (n-1) \bullet} \\ T^\dagger \; \downarrow & & \uparrow \; T \\ {\cal H}^{S,n \bullet} & \stackrel{{\cal O}^{S, n \bullet}}{\longrightarrow} & {\cal H}^{S, n \bullet} \end{array} \label{2.15}$$ In eqs.(\[2.14\]) and (\[2.15\]) the operators $T$ and $T^\dagger$ act trivially on the points beyond the closest one to the string $S$.
As an example we give below the renormalization of $ {\cal O}^{S, \bullet} $ and ${\cal O}^{S, 2 \bullet}$ (see figures 9 and 10), $$\begin{aligned} & {\cal O}_\xi^{\xi'}(\ast,b) = \sum_{a,c,\eta,\eta'} T_{\eta'}^{\xi'} \left( \begin{array}{ccc} \ast & & c \\ & b & \end{array} \right) {\cal O}_{\eta}^{\eta'}\left( \begin{array}{ccc} & c & \\ \ast & & b \\ & a & \end{array} \right) \; \bar{T}^{\eta}_{\xi} \left( \begin{array}{ccc} & a & \\ \ast & & b \end{array} \right) & \label{2.16} \\ & {\cal O'}_{\xi}^{\xi'} \left( \begin{array}{ccc} & c & \\ \ast & & b \\ & a & \end{array} \right) = \sum_{d,e} \sum_{\eta,\eta'} T_{\eta'}^{\xi'} \left( \begin{array}{ccc} \ast & & c \\ & e & \end{array} \right) {\cal O}_{\eta}^{\eta'}\left( \begin{array}{cccc} &e & c & \\ \ast & & & b \\ & d & a & \end{array} \right) \; \bar{T}^{\eta}_{\xi} \left( \begin{array}{ccc} & d & \\ \ast & & a \end{array} \right) & \label{2.17} \end{aligned}$$ In summary, we have presented in this section a formalism to deal with the renormalization of generic IRF Hamiltonians. In the next section we shall explain the DMRG algorithm to construct the truncation operator $T$, which will then allow us to carry out explicit computations. III) The IRF-DMRG algorithm {#iii-the-irf-dmrg-algorithm .unnumbered} =========================== The “standard” RG method to construct the operator $T$, applied to IRF models, consists of the following two steps: i) diagonalization of the Hamiltonian ${ H}^{S \bullet}$ and ii) projection onto its lowest energy states. This algorithm treats the system $S \bullet$ as isolated from the rest of the points, which are added in later RG steps. In other words, the height associated to the point $\bullet$ in $S \bullet$ is fixed to a given value. Imposing fixed boundary conditions at the ends of the blocks in the RG method always leads to bad results. Instead one should consider a combination of B.C.’s as in [@WN], or impose open B.C.’s as in [@Role].
The DMRG is a way to take care of the influence or correlations of those points that have not yet been added to the block. There are various DMRG algorithms: the infinite system method, the finite system method, etc. We shall give in this paper the IRF version of the infinite system algorithm, which is based on the superblock formed by a string S, a point $\bullet$ and another string $S^R$, which is the mirror image or reflection of the string $S$ (see figure 11). The dynamics of the “super-string” $S \bullet S^R$ involves all allowed heights at the middle point $\bullet$, and in that way one is not committed to a particular B.C. on $\bullet$. There is an appealing electrostatic analogy to understand the role of $S^R$. Let us recall the mirror image method, which is used to impose Neumann (open) B.C.’s on the electrostatic potential. In this sense the mirror string $S^R$ seems to play a similar role, i.e. that of imposing open B.C.’s on $\bullet$. A basis of the Hilbert space of the super-string $S \bullet S^R$ is given by, $${\cal H}^{S \bullet S^R} = \{ | \xi_a \otimes b \otimes \eta_c > \; || \; \Lambda_{a,b} = \Lambda_{b,c} = 1 \} \label{3.1}$$ The Hamiltonian which generates the dynamics of the states belonging to ${\cal H}^{S \bullet S^R}$ can be obtained using the methods of the last section, and it reads (recall figure 6), $$\begin{aligned} & { H}_{\xi, \eta}^{\xi', \eta'}\left( \begin{array}{ccccc} &a' & b' & c' & \\ \ast & & & & \ast \\ & a & b & c& \end{array} \right) = { H}_{\xi}^{\xi'} \left( \begin{array}{ccc} & a' & \\ \ast & & b \\ & a & \end{array} \right) \delta_{b,b'} \delta_{c,c'} \; \Lambda_{b,c} \delta_{\eta, \eta'} & \label{3.2} \\ & + \delta_{a,a'} \delta_{\xi, \xi'} \; R \left( \begin{array}{ccc} & b' & \\ a & & c \\ & b & \end{array} \right) \delta_{c,c'} \delta_{\eta, \eta'} +\delta_{a,a'} \delta_{b,b'} \Lambda_{a,b} \delta_{\xi, \xi'} { H}_{\eta}^{\eta'} \left( \begin{array}{ccc} & c' & \\ \ast & & b \\ & c & \end{array} \right) &
\nonumber\end{aligned}$$ Now we diagonalize this Hamiltonian and select its ground state, which is called the target state and can be written as, $$| \psi_0> = \sum_{a,b,c} \sum_{\xi, \eta} \psi_{\xi, \eta} (a,b,c) \; | \xi_a \otimes b \otimes \eta_c> \label{3.3}$$ The mirror string $S^R$ plays an auxiliary role in the construction and we should get rid of it. The DMRG proposal is to construct the reduced density matrix $\rho^{S \bullet}$ of the subsystem $S \bullet$ by tracing over the states in $S^R$, $$\rho^{S \bullet} = {\rm Tr}_{ {\cal H}^{S^R}} \;\; | \psi_0>< \psi_0| \label{3.4}$$ In the above trace we set the height of the middle point to be the same for both the ket and the bra, so that the matrix representation of $\rho^{S \bullet}$ is given by, $${\rho}_{\xi}^{\xi'} \left( \begin{array}{ccc} & a' & \\ \ast & & b \\ & a & \end{array} \right) = \sum_c \sum_{\eta} \psi_{\xi, \eta} (a,b,c) \; \psi_{\xi', \eta}^* (a',b,c) \label{3.5}$$ A normalized ground state (\[3.3\]) yields a properly normalized density matrix, $${\rm Tr}_{ {\cal H}^{S \bullet} }\; \rho^{S \bullet} = \sum_{a,b} \sum_{\xi} \; {\rho}_{\xi}^{\xi} \left( \begin{array}{ccc} & a & \\ \ast & & b \\ & a & \end{array} \right) = 1 \label{3.6}$$ The next step is to diagonalize the matrix (\[3.5\]) in the Hilbert space ${\cal H}^{S \bullet}_b$, for every value of $b$, keeping the first $m$ eigenstates with the highest eigenvalues. These states are the ones most likely to contribute to the ground state of the super-string. Finally, the matrix $T$ of (\[2.9\]) is built from these $m$ eigenvectors. Eq.(\[3.5\]) is very similar to Baxter’s definition of the corner transfer matrix (CTM) for IRF models [@B], in the sense that one traces over the degrees of freedom of half of the system while keeping the height located at the edge of the “cut” fixed. This relation between the DMRG and the CTM has already been pointed out in [@NO], and we expect it to hold also for the IRF-DMRG.
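The bookkeeping of the three steps just described — target state, reduced density matrix, truncation — can be sketched for a toy superblock on the $A_3$ graph. The kept string states and the random "target state" below are hypothetical placeholders (not the RSOS data of section IV); the point is only that $\rho^{S\bullet}$ is block diagonal in the middle height $b$, has unit trace for a normalized $\psi_0$ (eq (\[3.6\])), and that the kept eigenvectors yield a $T$ obeying $TT^\dagger={\bf 1}$ (eq (\[2.12\])).

```python
import numpy as np

rng = np.random.default_rng(0)
Lam = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # incidence matrix of A_3

# hypothetical kept string states: (state index, right height a) for S and S^R
left  = [(0, 0), (1, 1), (2, 2)]
right = [(0, 0), (1, 1), (2, 2)]

# superblock basis |xi_a (x) b (x) eta_c>, with Lam[a,b] = Lam[b,c] = 1  (eq 3.1)
basis = [(i, b, j)
         for i, (_, a) in enumerate(left) for b in range(3) if Lam[a][b]
         for j, (_, c) in enumerate(right) if Lam[b][c]]
psi = rng.normal(size=len(basis))
psi /= np.linalg.norm(psi)           # normalized "target state"  (eq 3.3)
amp = dict(zip(basis, psi))

# reduced density matrix of S. : trace over S^R, keeping the middle height b
# the same for ket and bra (eq 3.5); rho is block diagonal in b
sb = [(i, b) for i, (_, a) in enumerate(left) for b in range(3) if Lam[a][b]]
rho = np.zeros((len(sb), len(sb)))
for I, (i, b) in enumerate(sb):
    for J, (i2, b2) in enumerate(sb):
        if b == b2:
            rho[I, J] = sum(amp[(i, b, j)] * amp[(i2, b, j)]
                            for j, (_, c) in enumerate(right) if Lam[b][c])

# keep the m eigenvectors with the largest eigenvalues -> truncation T (eq 2.9)
m = 3
w, v = np.linalg.eigh(rho)           # eigenvalues in ascending order
T = v[:, -m:].T
```

In a production code one would of course diagonalize each $b$ block of $\rho$ separately and use sparse storage, but the toy version already exhibits the structure the text describes.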
This ends our presentation of the IRF-DMRG.

IV) The IRF-DMRG at work {#iv-the-irf-dmrg-at-work .unnumbered}
=========================

We shall apply below the formalism developed in the last two sections to study IRF models that can be obtained from vertex Hamiltonians by means of vertex-IRF maps. This map will be explained in detail in section V for the case of the SOS models.

SOS model (S=1/2) {#sos-model-s12 .unnumbered}
------------------

The spin chain Hamiltonian of the Heisenberg model with spin 1/2 reads, $$H = \frac{1}{2} \sum_i \left( \vec{\sigma}_i \vec{\sigma}_{i+1} +1 \right) \label{4a.1}$$ where $\vec{ \sigma}_i$ are Pauli matrices acting at the $i^{th}$ site of the chain. The choice of (\[4a.1\]) is motivated by the fact that $ \frac{1}{2} \left( \vec{\sigma}_i \vec{\sigma}_{i+1} +1 \right)$ is the permutation operator acting at the sites $i$ and $i+1$. The model defined by (\[4a.1\]) is equivalent to an IRF model whose graph, denoted by $A_{\infty}$, consists of the semi-infinite chain of fig. 12. The heights $j=0,1/2,1, \dots$, which label the points of the graph $A_\infty$, are in one-to-one correspondence with the irreps of the group $SU(2)$. According to fig. 12 the incidence matrix of $A_\infty$ satisfies $$\Lambda_{j,j'} = 1 \; \Longleftrightarrow \; |j-j'| = 1/2 \label{4a.2}$$ The IRF-Hilbert space associated to a chain with $N$ sites is given by the direct sum (recall (\[2.4\])), $${\cal H}^N = \; \oplus_j \; {\cal H}^N_j \label{4a.3}$$ where ${\cal H}^N_{j}$ is the IRF-Hilbert space of all the states with total spin $j$, $${\cal H}^N_j = \{ |j_0, j_1, \dots, j_N >\; || \; j_0=0, j_N = j, \;|j_i - j_{i +1}| =1/2 \; \;\; {\rm for} \; i=0, \dots, N-1 \} \label{4a.4}$$ According to (\[4a.4\]) the height $\star$ should be identified with the identity irrep (i.e. $\star = 0$). Since the graph $A_{\infty}$ contains an infinite number of heights, this model is an unrestricted IRF model, called the solid-on-solid (SOS) model.
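Since the IRF states (\[4a.4\]) are just walks on the graph $A_\infty$, their number can be checked by direct enumeration. A minimal sketch (illustrative Python, not from the paper; heights are doubled so that the code works with integers):

```python
# Illustrative sketch: count the IRF paths of eq. (4a.4) on the graph A_infty
# by direct enumeration, i.e. walks j_0 = 0 -> j_N = j with steps +-1/2 that
# never leave the graph.  Heights are doubled (two_j = 2j) to stay in integers.
from functools import lru_cache

@lru_cache(maxsize=None)
def n_paths(steps, two_j):
    """number of admissible walks of length `steps` from 0 to height two_j/2"""
    if two_j < 0:                    # the walk left the graph A_infty
        return 0
    if steps == 0:
        return 1 if two_j == 0 else 0
    # the last step came from two_j - 1 or two_j + 1 (in doubled units)
    return n_paths(steps - 1, two_j - 1) + n_paths(steps - 1, two_j + 1)

def dim_irf(N, j):
    return n_paths(N, int(2 * j))

print(dim_irf(4, 0), dim_irf(10, 0), dim_irf(20, 0))  # 2 42 16796, cf. table 1
```

The counts reproduce the Catalan-type numbers quoted in table 1 for the IRF sector $j=0$.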
This implies in particular that the Bratteli diagram consists of a pyramid of infinite height as one moves from the origin of the diagram (i.e. the $\star$ point = vacuum representation) to the right hand side (fig.13). The dimension of the Hilbert space (\[4a.4\]) is given by, $${\rm dim} \; {\cal H}^N_j = \left( \begin{array}{c} N \\ \frac{N}{2} - j \end{array} \right) - \left( \begin{array}{c} N \\ \frac{N}{2} - j -1 \end{array} \right) \label{4a.5}$$ This formula can be compared with the number of vertex states of the standard formulation of the Heisenberg model, with a fixed value of the third component of the spin $s^z$ (see section V), $${\rm dim} \; {\cal V}^N_{ s^z} = \left( \begin{array}{c} N \\ \frac{N}{2} - s^z \end{array} \right) \label{4a.6}$$ For $N$ even (odd) the ground state will belong to the Hilbert space with $j = 0 \; (1/2)$. As can be seen from eqs.(\[4a.5\]) and (\[4a.6\]) it is more efficient, for numerical purposes, to look for the ground state of the Heisenberg model in the IRF subspaces than in the vertex ones (see table 1).

   $N$         dim${\cal H}^N_{j=0}$ (IRF)    dim${\cal V}^N_{s^z=0}$ (vertex)
  ----------- ------------------------------ ----------------------------------
   4           2                              6
   10          42                             252
   20          16 796                         184 756
   24          208 012                        2 704 156
   $N \gg 1$   $\sim 1.5956\, {2^N}/N^{3/2}$  $\sim 0.7978\, {2^N}/N^{1/2}$

  Table 1

From (\[4a.5\]) and (\[4a.6\]) we get the relation, $$\frac{ {\rm dim} {\cal V}^N_{s^z=0}}{ {\rm dim} {\cal H}^N_{j=0}} = \frac{N}{2} + 1 \label{4a.7}$$ which is a numerical version of eq.(\[0.1\]) and shows that the difference between the vertex and IRF formulations persists in the thermodynamic limit. The constraints (\[4a.2\]) imply that there are only 6 different “Boltzmann” weights, whose values are given in table 2.
  --------------------------------------------------------------------------------------------------
  $ \begin{array}{ccc} & d & \\ a & & c \\ & b & \end{array} $                    $ R \left( \begin{array}{ccc} & d & \\ a & & c \\ & b & \end{array} \right) $
  ------------------------------------------------------------------------------ ------------------
  $ \begin{array}{ccc} & j & \\ j \pm 1/2 & & j \mp 1/2 \\ & j & \end{array} $    $1$

  $ \begin{array}{ccc} & j \pm 1/2 & \\ j & & j \\ & j \pm 1/2 & \end{array} $    $\frac{\mp 1}{2j+1}$

  $ \begin{array}{ccc} & j \pm 1/2 & \\ j & & j \\ & j \mp 1/2 & \end{array} $    $\frac{\sqrt{2j(2j+2)}}{2j+1}$
  --------------------------------------------------------------------------------------------------

  Table 2: The SOS (S=1/2) Hamiltonian.

Numerical Results {#numerical-results .unnumbered}
------------------

In table 3 we present the data for the ground state energy of the Heisenberg Hamiltonian (\[4a.1\]) for chains of length $N=6$ up to 24. In this IRF-DMRG computation we keep $m=12$ states, which is a rather modest number. For the vertex-DMRG we keep a maximum of 12 states; there the number of retained states cannot be fixed exactly at $m$, because the degeneracy induced by the $SU(2)$ symmetry forces complete multiplets to be kept or discarded together. This is one of the advantages of the IRF-DMRG as compared with the vertex-DMRG. It is clear that at an equal number of retained states the IRF method should give better results than the vertex method. This expectation is confirmed in table 3.
  N    Exact              IRF-DMRG              Vertex DMRG
  ---- ------------------ --------------------- ---------------------
  6    -2.4871542677758   -2.4871542677758 \*   -2.4871542677758 \*
  8    -3.2498651973757   -3.2498651973757 \*   -3.2498651973757 \*
  10   -4.0160704145657   -4.0160704145657 \*   -4.0160641768009
  12   -4.7841812656810   -4.7841812656810 \*   -4.7835746471807
  14   -5.5534493237243   -5.5534493236562      -5.5527041895949
  16   -6.3234742911502   -6.3234742887647      -6.3246392674964
  18   -7.0940221370730   -7.0940221268443      -7.0953356164320
  20   -7.8649466687979   -7.8649466378157      -7.8663701424885
  22   -8.6361517519671   -8.6361516790424      -8.6375643759289
  24   -9.4075715208191   -9.4075713713902      -9.4089821742785

  Table 3: Ground state energy of the Hamiltonian (\[4a.1\]). The data followed by “\*” are exact.

If we increase the number $m$ of states retained, the results converge exponentially fast both in the vertex-DMRG and IRF-DMRG methods (fig.14).

RSOS models {#rsos-models .unnumbered}
------------

An interesting generalization of the spin 1/2 Heisenberg chain is provided by the XXZ Hamiltonian with boundary terms [@ABBBQ], $$H^{XXZ} = \frac{1}{2} \left[ \sum_{i=1}^{N-1} \left( { \sigma}^X_i {\sigma}^X_{i+1} +{ \sigma}^Y_i {\sigma}^Y_{i+1} + \frac{ q + q^{-1}}{2} {\sigma}^Z_i {\sigma}^Z_{i+1} \right) + \frac{q - q^{-1}}{2} ( { \sigma}^Z_1 - {\sigma}^Z_{N} ) \right] \label{4b.1}$$ where $q = e^{ {\rm i} \gamma} $ is a phase. This Hamiltonian has very interesting properties:

- The eigenenergies of the $N=2M$ site XXZ chain (\[4b.1\]) coincide with those of an $M$-site self-dual Q-state Potts model with $\sqrt{Q} = q + q^{-1}= 2 \, {\rm cos}\, \gamma$ [@H; @ABB].

- Invariance under the action of the quantum group $SU(2)_q$ [@PS].

- Using q-group theory one can map the vertex Hamiltonian (\[4b.1\]) into a RSOS Hamiltonian whose graph is given by the Coxeter diagram $A_r$ (see figure 15) [@P2].
- $H^{XXZ}$ is critical [@ABBBQ] and for $\gamma= \frac{\pi}{r+1}$ it belongs to the universality class of the minimal CFT’s [@BPZ] with a value of the central charge given by, $$c = 1 - \frac{6}{r ( r+1)} \label{4b.2}$$

We shall study below the RSOS version of the vertex Hamiltonian (\[4b.1\]). A way to arrive at this version consists of writing (\[4b.1\]) as follows [@TL], $$\begin{aligned} & H^{XXZ}= \sum_{i=1}^{N-1} \left( \frac{ q + q^{-1}}{4} - e_i \right) & \label{4b.3} \end{aligned}$$ where $e_i$ are the Temperley-Lieb-Jones (TLJ) operators, which act at the $i^{th}$ and $(i+1)^{th}$ positions of the chain and satisfy the TLJ algebra [@qb], $$\begin{aligned} & e^2_i = ( q + q^{-1}) e_i & \nonumber \\ & e_i \; e_{i \pm 1} \; e_i = e_i & \label{4b.4} \\ & e_i \; e_j = e_j \; e_i ,\;\;\;\; |i-j| \geq 2 & \nonumber\end{aligned}$$ In the vertex basis the TLJ operator $e_i$ can be written as follows, $$e_{i} = {\bf 1}_1 \otimes \cdots \otimes {\bf 1}_{i-1} \otimes \left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & q^{-1} & -1 & 0 \\ 0 & -1 & q & 0 \\ 0 & 0 & 0 & 0 \end{array} \right) \otimes {\bf 1}_{i+2} \cdots \otimes {\bf 1}_N \label{4b.5}$$ The existence of a vertex-IRF map, and the fact that the operators $e_i$ commute with the action of $SU(2)_q$, imply that they can be given a representation on the RSOS-Hilbert spaces of the face model defined by the graph $A_r$, $$e_i | \dots, a_{i-1} , a_{i} , a_{i+1}, \dots> = \sum_{a'_i} e \left( \begin{array}{lll} & a'_i & \\ a_{i-1} & & a_{i+1} \\ & a_i & \end{array} \right) | \dots, a_{i-1}, a'_i, a_{i+1}, \dots> \label{4b.6}$$ $$e\left( \begin{array}{ccc} & d & \\ a & & c \\ & b & \end{array} \right) = \delta_{a,c} \; \frac{ \sqrt{ t_b t_d}}{ t_a} \label{4b.7}$$ where $t_a$ are the components of the Perron-Frobenius vector of the incidence matrix of the graph $A_r$, which are given by $$t_a = {\rm sin}\left( \frac{ \pi a}{ r+1} \right), \;\; a= 1, 2, \dots, r \label{4b.8}$$ Notice that $t_a$ satisfies, $$t_{a+1} +
t_{a-1} = 2 {\rm cos}\left( \frac{{\pi}}{r+1}\right) \; t_a \label{4b.9}$$ Recalling that the incidence matrix satisfies in this case $$\Lambda_{a,b} = 1 \Longleftrightarrow |a-b| = 1 \label{4b.10}$$ one gets that there are only 6 types of “Boltzmann weights”, whose expression, given in table 4, can be computed using eqs (\[4b.3\]) and (\[4b.7\]),

  ------------------------------------------------------------------------------------------------
  $ \begin{array}{ccc} & d & \\ a & & c \\ & b & \end{array} $                  $ R \left( \begin{array}{ccc} & d & \\ a & & c \\ & b & \end{array} \right) $
  ---------------------------------------------------------------------------- ------------------
  $ \begin{array}{ccc} & a & \\ a \pm 1 & & a \mp 1 \\ & a & \end{array} $     $\frac{{\rm cos}\,{\gamma}}{2}$

  $ \begin{array}{ccc} & a \pm 1 & \\ a & & a \\ & a \pm 1 & \end{array} $     $\frac{{\rm cos}\,{\gamma}}{2} - \frac{t_{a \pm 1}}{t_a}$

  $ \begin{array}{ccc} & a \pm 1 & \\ a & & a \\ & a \mp 1 & \end{array} $     $- \frac{ \sqrt{ t_{a+1} t_{a-1} } }{ t_a }$
  ------------------------------------------------------------------------------------------------

  Table 4: The RSOS Hamiltonian for the face model $A_r$.

In the limit where $r \rightarrow \infty$ the RSOS model $A_r$ becomes equivalent to the SOS model with graph $A_{\infty}$ studied previously. One can check that the R-matrix given in table 4 is, up to a constant and a change of basis, the same as the R-matrix given in table 2, with the identification $a= 2j+1$.

Numerical Results {#numerical-results-1 .unnumbered}
-----------------

In table 5 we give the ground state energy per 2 sites $E_0(M)/M$ ($M = N/2$) of the XXZ model (\[4b.1\]), which coincides with the ground state energy per site of the corresponding Potts model. This table should be compared with table 1b in reference [@ABBBQ], which was obtained using the Bethe ansatz.
The authors of [@ABBBQ] give their results up to 6 decimals (ours are given to 9), and the agreement in the energies holds to the $6^{\rm th}$ digit. The number of states retained in our computation is $m=160$. Using the IRF-DMRG data we have computed the finite size corrections to the ground state energy, which are governed by the formula [@Cardy; @Aff], $$E_0(M)/M = e_{\infty} + \frac{f_\infty}{M} - \frac{\pi \varsigma c}{24 M^2} + o(M^{-2}) \label{4b.11}$$ where $e_\infty$ and $f_\infty$ are, respectively, the bulk and surface energy per site. $\varsigma$ can be identified with the spin wave velocity and it is given for the Potts model by, $$\varsigma = \frac{ \pi {\rm sin} \gamma}{ 2 \gamma} \label{4b.12}$$ We have used Sach’s formula to get the values of the central charge $c$ [@van]. The outcome of this computation is that the IRF-DMRG method reproduces rather accurately the results obtained using the Bethe ansatz. This supports the hypothesis that the DMRG is in fact an exact numerical RG method.
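To illustrate how eq. (\[4b.11\]) is put to work, the sketch below fits $e_\infty$, $f_\infty$ and $c$ to three of the $r=3$ entries of table 5 by an exact $3\times 3$ solve, with the velocity of eq. (\[4b.12\]). This plain solve is cruder than the extrapolation used in the text, but it lands close to the quoted values (illustrative Python, not the computation of the paper):

```python
# Illustrative sketch: extract e_inf, f_inf and c from three of the r=3
# entries of table 5 by solving eq. (4b.11) exactly as a 3x3 linear system,
# with the spin wave velocity of eq. (4b.12).
import math

gamma = math.pi / 4                              # r = 3 (Ising class)
zeta = math.pi * math.sin(gamma) / (2 * gamma)   # eq. (4b.12); here zeta = sqrt(2)
data = {128: -1.602659177, 256: -1.605039733, 512: -1.606231063}  # table 5, r=3

# E0(M)/M = e + f/M - A/M^2  with  A = pi*zeta*c/24  -- linear in (e, f, A)
Ms = sorted(data)
rows = [[1.0, 1.0 / M, -1.0 / M ** 2] for M in Ms]
rhs = [data[M] for M in Ms]

# tiny Gauss-Jordan elimination with pivoting, to keep the sketch dependency-free
for i in range(3):
    p = max(range(i, 3), key=lambda r: abs(rows[r][i]))
    rows[i], rows[p] = rows[p], rows[i]
    rhs[i], rhs[p] = rhs[p], rhs[i]
    for r in range(3):
        if r != i:
            fac = rows[r][i] / rows[i][i]
            rows[r] = [x - fac * y for x, y in zip(rows[r], rows[i])]
            rhs[r] -= fac * rhs[i]
e_inf, f_inf, A = (rhs[i] / rows[i][i] for i in range(3))
c = 24 * A / (math.pi * zeta)
print(round(e_inf, 6), round(f_inf, 6), round(c, 3))  # close to table 5 and c = 1/2
```

Even this naive fit recovers $e_\infty$ and $f_\infty$ to six digits and $c$ to within a percent of the Ising value $1/2$.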
  N/2          r=3 (Q=2)        r=5 (Q=3)        r=7 (Q=3.414)    $r=\infty$ (Q=4)
  ------------ ---------------- ---------------- ---------------- ----------------
  2            -1.320 899 500   -1.478 675 250   -1.537 532 848   -1.616 025 403
  4            -1.459 958 153   -1.580 754 400   -1.626 304 313   -1.687 466 299
  8            -1.532 472 880   -1.636 075 843   -1.675 237 896   -1.727 934 286
  16           -1.569 617 420   -1.665 066 291   -1.701 130 752   -1.749 664 452
  32           -1.588 433 915   -1.679 941 590   -1.714 488 941   -1.760 960 537
  64           -1.597 906 426   -1.687 482 175   -1.721 280 643   -1.766 733 433
  128          -1.602 659 177   -1.691 279 439   -1.724 706 257   -1.769 649 934
  256          -1.605 039 733   -1.693 185 000   -1.726 426 767   -1.771 116 470
  512          -1.606 231 063   -1.694 139 540   -1.727 288 989   -1.771 851 868
  $e_\infty$   -1.607 423 097   -1.695 095 264   -1.728 152 544   -1.772 588 719
  (AB$^3$Q)    -1.607 423       -1.695 095       -1.728 152       -1.772 588
  $f_\infty$   0.610 501 838    0.489 636 433    0.442 486 891    0.377 747 124
  (AB$^3$Q)    0.610 502        0.489 637        0.442 487        0.377 649
  c            0.499 942        0.798 817        0.893 150        1.038 18
  (AB$^3$Q)    0.500 00(1)      0.799 9(2)       0.89(3)          0.99(2)
  (exact)      0.5              0.8              0.892 857        1

  Table 5: Ground state energy per 2 sites of the RSOS chain. We also give the results of ref [@ABBBQ] (AB$^3$Q) for $e_{\infty}$, $f_{\infty}$ and $c$.

V) Vertex-IRF Map {#v-vertex-irf-map .unnumbered}
=================

We shall illustrate this map in the case of a spin-$s$ chain whose dynamics is dictated by a rotationally invariant Hamiltonian. The Hilbert space of the spin-$s$ chain with $N$ sites will be denoted by ${\cal V}^{N,s}$ and consists of the tensor product of $N$ copies of the vector space $V_s = {\bf C}^{2s+1}$, on which the local spin operators ${\bf S}_i$ ($i = 1, \dots, N$) act. The vertex-IRF map is based on the tensor product decomposition of the space ${\cal V}^{N,s}$ into its irreducible components.
Vertex Hilbert Spaces $\rightarrow$ IRF Hilbert Spaces {#vertex-hilbert-spaces-rightarrow-irf-hilbert-spaces .unnumbered}
------------------------------------------------------

Using the Clebsch-Gordan decomposition of tensor products of irreps of $SU(2)$ one can write, $${\cal V}^{N,s} = \sum_{0 \leq j \leq s N} {\cal H}^{N,s}_j \otimes V_j \label{5.1}$$ where ${\cal H}^{N,s}_j$ is the generalization of the IRF Hilbert space (\[4a.4\]) to the spin $s$ case. The heights $a_i \in \frac{1}{2} {\bf Z_+} $ that label the IRF states are subject to the following constraints, $$\begin{array}{ccc} & a_0 = 0 & \\ & a_1 = s & \\ |s- a_i|& \leq a_{i+1} \leq & s + a_{i} ,\;\; i = 1, \dots, N-1 \\ & a_N = j & \end{array} \label{5.2}$$ The dimension of ${\cal H}^{N,s}_j$ is given by the number of times the spin-$j$ irrep appears in the CG-decomposition of the tensor product $s \otimes \stackrel{N}{\cdots} \otimes s$, $${\rm dim} {\cal H}^{N,s}_j = {\rm multiplicity}\; {\rm of } \; V_j \;\;{\rm in} \; {\cal V}^{N,s} \label{5.3}$$ ${\rm dim} {\cal H}^{N,s}_j $ can be computed using the following formula, $${\rm dim} {\cal H}^{N,s}_j = {\rm dim} {\cal V}^{N,s}_{j}- {\rm dim} {\cal V}^{N,s}_{j+1} \label{5.4}$$ where $ {\cal V}^{N,s}_{s^z}$ denotes the subspace of $ {\cal V}^{N,s} $ with a fixed value $s^z$ of the third component of the spin. Eq.(\[5.4\]) says that the highest weights with total spin $j$ are given by the states with spin $s^z =j$ minus the ones that can be obtained from $ {\cal V}^{N,s}_{s^z+1}$ by the lowering operator $S^-$. The counting of states with a fixed value of $s^z$ is easily done using the “Bethe method” of starting with the ferromagnetic state with all the spins up and lowering the spins. Below we give the formulae for $s=1/2$ and 1.
$$\begin{aligned} & {\rm dim} {\cal V}^{N,s=1/2}_{s^z} = \left( \begin{array}{c} N \\ \frac{N}{2} - s^z \end{array} \right) & \label{5.5} \\ & {\rm dim} {\cal V}^{N,s=1}_{s^z}= \sum_{k=0}^{ [(N - s^z)/2 ]} \left( \begin{array}{c} N \\ s^z + k \end{array} \right) \; \left( \begin{array}{c} N- s^z- k \\ k \end{array} \right) & \nonumber\end{aligned}$$ where the symbol $[x]$ appearing in the upper limit of the sum denotes the integer part of $x$. The relation between the vertex basis of the spaces ${\cal V}^{N,s}, V_j $ and the IRF basis of ${\cal H}^{N,s}_j$ can be obtained using the Clebsch-Gordan coefficients as follows, $$\begin{aligned} &\xi({\bf a}) \otimes e^j_m & \nonumber \\ & = \sum_{m_1, \dots, m_N} \left[ \begin{array}{lll} 0 & s & a_1 \\ 0 & m_1 & n_1 \\ \end{array} \right] \left[ \begin{array}{lll} a_1 & s & a_2 \\ n_1 & m_2 & n_2 \\ \end{array} \right] \left[ \begin{array}{lll} a_2 & s & a_3 \\ n_2 & m_3 & n_3 \\ \end{array} \right] \cdots & \label{5.6} \\ & \cdots \; \left[ \begin{array}{lll} a_{N-2} & s & a_{N-1} \\ n_{N-2} & m_{N-1} & n_{N-1} \\ \end{array} \right] \left[ \begin{array}{lll} a_{N-1} & s & a_N \\ n_{N-1} & m_N & n_N \\ \end{array} \right] \;\; {\rm e}^s_{m_1} \otimes \cdots \otimes {\rm e}^s_{m_N} & \nonumber \end{aligned}$$ where $n_i = m_1 + \cdots + m_i $ , $m= n_N = \sum_{i=1}^N m_i $, and ${\bf a}$ denotes the IRF labels which satisfy the conditions (\[5.2\]). A graphical representation of eq.(\[5.6\]) is given in Fig.16. The 0 at the upper left of the diagram can be identified with the $\star$ symbol introduced in section II. The vertex-IRF map, as defined by eq.(\[5.6\]), is nothing but a change of basis from vertex variables to IRF ones which achieves the factorization of the $SU(2)$ symmetry.
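The multiplicity formulae (\[5.4\])–(\[5.5\]) are easy to put to work. The following sketch (illustrative Python, not from the paper) computes the IRF dimensions of a spin-1 chain and checks the bookkeeping implied by (\[5.1\]), $\sum_j (2j+1)\, {\rm dim}\, {\cal H}^{N,1}_j = 3^N$:

```python
# Illustrative sketch: the multiplicity formulae (5.4)-(5.5) for a spin-1
# chain, together with the SU(2) bookkeeping check implied by eq. (5.1):
# sum_j (2j+1) dim H^{N,1}_j = 3^N.
from math import comb

def dim_vertex(N, sz):
    """second line of eq. (5.5), s = 1: states with sz + k up spins, k down"""
    sz = abs(sz)                      # the s^z spectrum is symmetric
    return sum(comb(N, sz + k) * comb(N - sz - k, k)
               for k in range((N - sz) // 2 + 1))

def dim_irf(N, j):
    """eq. (5.4): multiplicity of the spin-j irrep in the N-fold product"""
    return dim_vertex(N, j) - dim_vertex(N, j + 1)

N = 4
dims = {j: dim_irf(N, j) for j in range(N + 1)}        # j = 0, 1, ..., sN
print(dims)                                            # {0: 3, 1: 6, 2: 6, 3: 3, 4: 1}
print(sum((2 * j + 1) * d for j, d in dims.items()))   # 81 = 3**4
```

For $N=4$ the three singlets ($j=0$) are the familiar three ways of pairing four spin-1 sites into total spin zero.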
Vertex-Hamiltonians $\rightarrow$ IRF-Hamiltonians {#vertex-hamiltonians-rightarrow-irf-hamiltonians .unnumbered}
---------------------------------------------------

The most general form of a rotationally invariant Hamiltonian $H$ acting in ${\cal V}^{N,s}$, i.e. $$[ H , {\bf S} ] = 0 , \;\; {\bf S} = \sum_{i=1}^N {\bf S}_i \label{5.7}$$ which is translationally invariant and contains only nearest-neighbour couplings, is given by, $$H = \sum_{i=1}^{N-1} \sum_{r=0}^{2s} \alpha_r \left( {\bf S}_i \cdot {\bf S}_{i+1} \right)^r \label{5.8}$$ where the $\alpha_r$ are a set of $2s+1$ coupling constants ($\alpha_0$ can be put equal to zero since it multiplies the identity operator). Since (\[5.8\]) commutes with $SU(2)$, its action only affects the IRF spaces ${\cal H}^N_j$. Using group theoretical methods we get, $$\begin{aligned} & H \; \xi( {\bf a}) = \sum_{i=1}^{N-1} \; R \left( \begin{array}{ll} a_{i-1} & a'_i \\ a_i & a_{i+1} \end{array} \right) \xi(\cdots, a_{i-1}, a'_i, a_{i+1}, \cdots) & \label{5.9}\end{aligned}$$ where the IRF “weights” $R$ can be computed in terms of the coupling constants $\alpha_r$ and the 6j-symbols as follows (the details of this computation will be given elsewhere), $$R\left( \begin{array}{ll} a_{i-1} & a'_i \\ a_i & a_{i+1} \end{array} \right) = \sum_{0 \leq j \leq 2s} A^{ss}_j \left\{ \begin{array}{lll} s & s & j \\ a_{i-1} & a_{i+1} & a_i \end{array} \right\} \;\left\{ \begin{array}{lll} s & s & j \\ a_{i-1} & a_{i+1} & a'_{i} \end{array} \right\} \label{5.10}$$ $$\begin{aligned} &A^{ss}_j = \sum_{r=0}^{2s} \alpha_r x_j^r & \label{5.11} \\ & x_j = \frac{1}{2} j(j+1) - s (s+1) & \nonumber\end{aligned}$$ As an example we may choose $H$ to be the sum of all the permutation operators between nearest neighbours, in which case $A_j^{ss}$ turns out to be a sign factor, $$H = \sum_i P_{i,i+1} \Longrightarrow A^{ss}_j = (-1)^{2s-j} \label{5.12}$$ The IRF Hamiltonian corresponding to (\[5.12\]) is given by, $$R\left( \begin{array}{ll} a_{i-1} &
a'_i \\ a_i & a_{i+1} \end{array} \right) = (-1)^{a_{i-1}+ a_{i+1} - a_i - a_i'} \left\{ \begin{array}{lll} s & a_{i-1} & a_i \\ s & a_{i+1} & a'_i \end{array} \right\} \label{5.13}$$ The s=1/2 Heisenberg Hamiltonian (\[4a.1\]) is precisely of the form (\[5.12\]), so that table 2 can be derived from (\[5.13\]). In a subsequent publication we shall use eq.(\[5.10\]) to study higher spin Heisenberg chains in the IRF formalism.

Vertex-DMRG $\rightarrow$ IRF-DMRG {#vertex-dmrg-rightarrow-irf-dmrg .unnumbered}
----------------------------------

The DMRG algorithm to renormalize a block $B$ plus one point $\odot$ into a new block $B'$ is based on the superblock $B \odot \odot B^R$, where $B^R$ is the reflection of the block $B$ (we use $\odot$ to distinguish vertex-points from IRF-points, which were denoted above by $\bullet$). The main steps of the vertex-DMRG are:

- Diagonalization of the superblock Hamiltonian to find the ground state $|\psi_0>$.

- Construction of the reduced density matrix by tracing over the states in $\odot B^R$, $$\rho^{B \odot} = {\rm Tr}_{\odot B^R} \;\; |\psi_0> < \psi_0| \label{5.14}$$

- Diagonalization of $\rho^{B \odot}$ to find the eigenvalues $w_\alpha$ and eigenvectors $|u^\alpha>$. Discard all but the largest $m$ eigenvalues and associated eigenvectors.

- Construct the operator $T$ using the truncated eigenvectors $|u^\alpha>$.

- Renormalize all the operators using the analog of eq. (\[2.14\]) in the vertex case.

- Repeat the process for the new block $B'$.

This set of rules defines the vertex-DMRG algorithm, which applies directly to systems where the lattice variables associated with the points $\odot$ are not subject to constraints, except perhaps for conservation laws like total spin, charge, etc. Most of the Hamiltonians in Condensed Matter or Stat. Mech. are of this form.
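The truncation steps in the list above can be condensed into a few lines. The following sketch (illustrative, dependency-free Python with a small Jacobi eigensolver; the dimensions and the wavefunction are toy assumptions, not the authors' implementation) goes from a superblock ground state to the truncation map $T$:

```python
# Illustrative sketch of the truncation step: from the superblock ground
# state, written as a matrix psi[system][environment], form
# rho^{B.} = Tr_env |psi><psi| (eq. (5.14)), keep the m dominant eigenvectors
# and assemble the truncation map T.
import math
import random

random.seed(1)
d_sys, d_env, m = 4, 4, 2
psi = [[random.gauss(0.0, 1.0) for _ in range(d_env)] for _ in range(d_sys)]
norm = math.sqrt(sum(x * x for row in psi for x in row))
psi = [[x / norm for x in row] for row in psi]

# reduced density matrix: rho_ij = sum_k psi_ik psi_jk
rho = [[sum(psi[i][k] * psi[j][k] for k in range(d_env))
        for j in range(d_sys)] for i in range(d_sys)]

def jacobi(mat, sweeps=50):
    """eigen-decomposition of a small symmetric matrix by cyclic Jacobi rotations"""
    n = len(mat)
    a = [row[:] for row in mat]
    v = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        for p in range(n):
            for q in range(p + 1, n):
                if abs(a[p][q]) < 1e-14:
                    continue
                th = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
                c, s = math.cos(th), math.sin(th)
                for k in range(n):   # rotate rows p, q
                    a[p][k], a[q][k] = c * a[p][k] - s * a[q][k], s * a[p][k] + c * a[q][k]
                for k in range(n):   # rotate columns p, q
                    a[k][p], a[k][q] = c * a[k][p] - s * a[k][q], s * a[k][p] + c * a[k][q]
                for k in range(n):   # accumulate the eigenvectors
                    v[k][p], v[k][q] = c * v[k][p] - s * v[k][q], s * v[k][p] + c * v[k][q]
    return [a[i][i] for i in range(n)], v       # eigenvalues w_alpha, eigenvectors

w, v = jacobi(rho)
order = sorted(range(d_sys), key=lambda i: -w[i])[:m]         # keep m largest w_alpha
T = [[v[i][alpha] for alpha in order] for i in range(d_sys)]  # d_sys x m map

truncation_error = 1.0 - sum(w[alpha] for alpha in order)
print(round(sum(w), 6))   # Tr rho = 1 is preserved by the rotations
```

The discarded weight `truncation_error` is the usual figure of merit of the truncation; the operators of the block are then renormalized as ${\cal O}' = T^\dagger {\cal O} T$.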
If the vertex Hamiltonian happens to have a continuous symmetry, then the factorization of that symmetry leads naturally to an IRF model, whose renormalization can be studied using the IRF-DMRG method. The relation between the vertex-DMRG and the IRF-DMRG algorithms, presented in section III, is illustrated diagrammatically in fig.17: the height $a$ (resp. $b$) labels the different irreps (e.g. the total spins of the spin chains studied above) that appear in the tensor product of all the irreps contained in the block $B$ (resp. $B^R$). The intermediate height $j$ is obtained tensoring $a$ (resp. $b$) with the irrep carried by the vertex $\odot$, which in the case of the spin chains is a spin-$s$ irrep. Finally, we must tensor $j \otimes j$ and pick up the identity irrep. Let us now find the analytic relation between the vertex and IRF density matrices. We shall call ${\cal V}^B$ (resp. $ {\cal V}^{B^R}$) the Hilbert space associated to the block $B$ (resp. $B^R$). The tensor product decomposition (\[5.1\]) becomes in this case $$\begin{aligned} & {\cal V}^B = \sum_a {\cal H}^B_a \otimes V_a & \label{5.15} \\ &{\cal V}^{B^R}= \sum_a V_a \otimes {\cal H}^{B^R}_a & \nonumber\end{aligned}$$ Using this decomposition the Hilbert space of the superblock $B \odot \odot B^R$ becomes, $${\cal V}^{B \odot \odot B^R} = \sum_{a,b} \; {\cal H}^B_a \otimes V_a \otimes V_s \otimes V_s \otimes V_b \otimes {\cal H}^{B^R}_b \label{5.16}$$ The ground state of any rotationally invariant Hamiltonian acting in this superblock can be written in the basis of (\[5.16\]) as follows (see fig.18), $$\begin{aligned} & |\psi_0> = \sum \; | \xi_a \otimes m_a \otimes m_1 \otimes m_2 \otimes m_b \otimes \eta_b> {\psi}_{\xi_a, \eta_b}(a,j,b) & \label{5.17} \\ & \left[ \begin{array}{lll} j & j & 0 \\ m_a + m_1 & m_b + m_2 & 0 \\ \end{array} \right] \; \left[ \begin{array}{lll} a & s & j \\ m_a & m_1 & m_a + m_1 \\ \end{array} \right] \left[ \begin{array}{lll} s & b & j \\ m_2 & m_b & m_2
+ m_b \\ \end{array} \right] & \nonumber \end{aligned}$$ where $ {\psi}_{\xi_a, \eta_b}(a,j,b) $ is the IRF wave function of the ground state $|\psi_0>$. The density matrix $\rho^{B \odot}$ can be obtained from (\[5.14\]). Using the properties of the CG coefficients we get, $$\begin{aligned} & < \xi_a \otimes m_a \otimes m_1 | \; \rho^{B \odot} \; | \xi'_{a'} \otimes m'_{a'} \otimes m'_1 > & \label{5.18} \\ & = \frac{ \delta_{m_a+ m_1, m'_{a'} + m'_1 } }{2j +1} \; {\rho}_{\xi_a}^{\xi'_{a'}} \left( \begin{array}{ccc} & a' & \\ \ast & & j \\ & a & \end{array} \right) \; \left[ \begin{array}{lll} a & s & j \\ m_a & m_1 & m_a + m_1 \\ \end{array} \right] \left[ \begin{array}{lll} a' & s & j \\ m'_{a'} & m'_1 & m'_{a'} + m'_1 \\ \end{array} \right] & \nonumber\end{aligned}$$ where $${\rho}_{\xi_a}^{\xi'_{a'}} \left( \begin{array}{ccc} & a' & \\ \ast & & j \\ & a & \end{array} \right) = \sum_{b, \eta_b} \psi_{\xi_a, \eta_b}(a,j,b) \; \psi^*_{\xi'_{a'}, \eta_b}(a',j,b) \label{5.19}$$ coincides with the definition of the IRF-DM given in (\[3.5\]). The relation (\[5.18\]) between the vertex and IRF density matrices can finally be written as, $$\rho^{B \odot} = \sum_j \frac{1}{2j +1} \; \rho^{S \bullet} \label{5.20}$$ The factor $1/(2j+1)$ takes care of the degeneracy of the irrep $V_j$ in the CG decomposition $a \otimes s \rightarrow j$, and guarantees the correct normalization conditions of both density matrices.

VII) Conclusions and Perspectives {#vii-conclusions-and-perspectives .unnumbered}
=================================

We have generalized in this paper the DMRG method to 1d Hamiltonians of IRF type and shown, in the examples of the spin 1/2 SOS and RSOS models, that it gives very accurate results. Our method is equivalent, by means of a vertex-IRF map, to the standard DMRG method formulated by White for Hamiltonians of the vertex type. This map consists of the factorization of the symmetry group of the vertex theory.
This factorization has numerical and conceptual advantages. From a numerical point of view, one needs to keep a smaller number of states in the IRF-DMRG in order to achieve the same accuracy as in the vertex-DMRG. The degeneracy of the eigenvalues of the vertex formulation, due to the symmetry, is absent in the IRF case, which makes the numerical analysis more compact and stable. Conceptually the IRF-DMRG is also very appealing since it employs tools and techniques well known in Statistical Mechanics, Integrable Systems, Multi-Matrix Algebras and Conformal Field Theory. Thus the IRF states can be seen as paths of a Bratteli diagram, kinks of a theory of solitons, discretized strings and conformal blocks in CFT. The formalism we have developed allows us to apply the DMRG method to IRF states in a very natural way. Let us mention some of the lines of research which we believe deserve further study,

- [**Higher Spin and Ferromagnetic Spin Chains:**]{} In section V we have presented the necessary tools to study higher spin chains. A particularly interesting case is the spin 1 chain, which has a rich phase diagram. We shall show in a subsequent publication that the string order parameter of den Nijs and Rommelse [@dNR], which is used to characterize the Haldane phase, adopts a particularly simple form when written in IRF variables. In fact the IRF states constitute a complete and orthonormal basis of valence bond states. In particular the AKLT state [@AKLT], which is a pure Haldane state, is simply a straight path in the Bratteli diagram of the spin 1 Heisenberg chain. The IRF-DMRG is also very promising for the study of ferromagnetic systems, which seem to display a rich phase structure in the presence of magnetic fields [@OYA]. The vertex-DMRG method applied to ferromagnetic systems encounters the difficulty that the ground state has a huge degeneracy.
As we have shown in this paper, the IRF-DMRG eliminates this degeneracy, avoiding the complications arising from it.

- [**t-J and Hubbard models:**]{} The vertex-IRF map can be straightforwardly applied to these models, yielding an IRF formulation where the spins form valence bonds. The IRF heights are now given by the pair (spin $j$, charge $q$). In the case of the Hubbard model the symmetry group is given by $SO(4)$ and it contains, in addition to the rotational group, the group of pseudo-spin rotations. The factorization of this larger group should reduce considerably the dimension of the Hilbert spaces.

- [**Ladders:**]{} These systems, which have received considerable attention in the last 2 years, consist of a finite number of coupled chains, with very interesting properties (for a review see [@DR]). For spin ladders with a small number of chains it is rather simple to obtain their IRF version, simply by taking the tensor product of the irreps located on the rungs and performing afterwards their tensor product along the chains. This procedure imitates the strong coupling analysis applied to these kinds of systems [@DR]. The IRF models so obtained have more than one link connecting different heights, and so one has to generalize slightly the construction of this paper. A similar multiplicity phenomenon occurs in the theory of solitons [@GS2]. The IRF formulation of ladders could be useful to clarify the relationship between their phases and those appearing in higher spin chains.

- [**Higher Dimensions**]{} The DMRG philosophy is not confined to 1d, but the standard DMRG algorithms proposed so far are one dimensional, despite some 2d applications to finite clusters [@W2]. The IRF-DMRG strengthens this point of view, since in particular the vertex-IRF map is a one-dimensional operation. We should perhaps say that the vertex-IRF map is really dimension-independent, because the tensor product operation does not impose any particular geometry or dimension.
In connection with this problem it may be useful to realize that the vertex-IRF map is a duality transformation similar to the Kramers-Wannier duality or the Jordan-Wigner transformation. This interpretation may serve as a guide to construct higher dimensional vertex-IRF maps. There are still many more topics to be considered in connection with the DMRG. The DMRG method has arisen as a numerical tool especially well adapted to 1d systems, but in our opinion its importance goes beyond its numerical success. There are still some fundamental questions whose solution we would like to know. It is perhaps no exaggeration to say that new and radical developments connected with the DMRG are likely to happen in the near future. We would like to thank Steven White and Miguel A. Martin-Delgado for conversations. E-mail: nishino@phys560.phys.kobe-u.ac.jp and sierra@sisifo.imaff.csic.es [99]{} S.R. White, Phys. Rev. Lett. 69, 2863 (1992); Phys. Rev. B 48, 10345 (1993). K.G. Wilson, Rev. Mod. Phys. 47, 773 (1975). S.D. Drell, M. Weinstein and S. Yankielowicz, Phys. Rev. D 16, 1769 (1977). R. Jullien, P. Pfeuty, J.N. Fields and S. Doniach, Phys. Rev. B 18, 3568 (1978). P. Pfeuty, R. Jullien and K.A. Penson, in: Real Space Renormalization, eds. T.W. Burkhardt and J.M.J. van Leeuwen, Series Topics in Current Physics 30, Springer-Verlag, 1982. J. Gonzalez, M.A. Martin-Delgado, M.A.H. Vozmediano, Quantum Electron Liquids and High T$_c$ Superconductivity, Lecture Notes in Physics m38, Springer-Verlag 1995. For the RG consult chapter 11. S.R. White and D.A. Huse, Phys. Rev. B 48, 3844 (1993). E.S. Sorensen and I. Affleck, Phys. Rev. Lett. 71, 1633 (1993); Phys. Rev. B 49, 15771 (1994). S.R. White, R.M. Noack and D.J. Scalapino, Phys. Rev. Lett. 73, 886 (1994). S.R. White, cond-mat/9604129. T. Nishino and K. Okunishi, J. Phys. Soc. Jpn. 65, 891 (1996). S. Östlund and S. Rommer, Phys. Rev. Lett. 75, 3537 (1995); cond-mat/9606213. T. Xiang, Phys. Rev. B 53, 10445 (1996). T. Nishino, J. Phys. Soc.
Jpn. 64, 3598 (1995). R.J. Bursill, T. Xiang and G.A. Gehring, cond-mat/9609001. M.A. Martin-Delgado and G. Sierra, Int. J. Mod. Phys. A 11, 3145 (1996). M.A. Martin-Delgado and G. Sierra, Phys. Rev. Lett. 76, 1146 (1996). M.A. Martin-Delgado and G. Sierra, Phys. Lett. B 364, 41 (1995). M.A. Martin-Delgado, J. Rodriguez-Laguna and G. Sierra, Nucl. Phys. B 473 \[FS\], 685 (1996). R.J. Baxter, “Exactly Solved Models in Statistical Mechanics”, Academic Press, London, 1982. G.E. Andrews, R.J. Baxter and P.J. Forrester, J. Stat. Phys. 35, 193 (1984). E. Date, M. Jimbo, T. Miwa and M. Okado, Phys. Rev. B 35, 2105 (1987). F.M. Goodman, P. de la Harpe and V.F.R. Jones, “Coxeter-Dynkin Diagrams and Towers of Algebras”, MSRI Publications/Springer-Verlag, New York (1989). C. Gomez and G. Sierra, Intern. J. Mod. Phys. A 6, 2045 (1991). V. Pasquier, Nucl. Phys. B 285 \[FS19\], 162 (1987); J. Phys. A 20, L217, L221 (1987). E.H. Lieb and F.Y. Wu, in Phase Transitions and Critical Phenomena, vol. I, ed. C. Domb and M.S. Green, Academic Press (1972). R.J. Baxter, Ann. Phys. 76, 25 (1973). M. Jimbo, T. Miwa and M. Okado, Comm. Math. Phys. 116, 507 (1988). V. Pasquier, Comm. Math. Phys. 118, 355 (1988). A.A. Belavin, A.M. Polyakov and A.B. Zamolodchikov, Nucl. Phys. B 241, 333 (1984). C. Gomez, M. Ruiz-Altaba and G. Sierra, “Quantum Groups in Two Dimensional Physics”, Cambridge University Press, 1996. A. Ocneanu, “Quantized groups, string algebras and Galois theory of algebras”, London Math. Soc. Lecture Notes 136, 119 (1989). A.B. Zamolodchikov and Al.B. Zamolodchikov, Ann. Phys. 120, 253 (1979). S.R. White and R.M. Noack, Phys. Rev. Lett. 68, 3487 (1992). F.C. Alcaraz, M.N. Barber, M.T. Batchelor, R.J. Baxter and G.R.W. Quispel, J. Phys. A: Math. Gen. 20, 6397 (1987). C.J. Hamer, J. Phys. A: Math. Gen. 19, 3335 (1986). F.C. Alcaraz, M.N. Barber and M.T. Batchelor, Phys. Rev. Lett. 58, 771 (1987). V. Pasquier and H. Saleur, Nucl. Phys. B 330, 523 (1990). H.N.V.
Temperley and E. Lieb, Proc. Roy. Soc. London A 322, 251 (1971). H.W.J. Blote, J.L. Cardy and M.P. Nightingale, Phys. Rev. Lett. 56, 742 (1986). I. Affleck, Phys. Rev. Lett. 56, 746 (1986). J.M. Van der Broeck and L.W. Schwartz, SIAM J. Math. Anal. 10, 658 (1979). M. den Nijs and K. Rommelse, Phys. Rev. B 40, 4709 (1989) I. Affleck, T. Kennedy, E. Lieb and H. Tasaki, Commun. Math. Phys. 115, 477 (1988). M. Osaka, M. Yamanaka and I. Affleck, cond-mat/9610168. E. Dagotto and T.M. Rice, Science 271, 618 (1996). C. Gomez and G. Sierra, Nucl. Phys. B 419 \[FS\], 589 (1994). Figure Captions {#figure-captions .unnumbered} =============== Fig.1.- Bratelli diagram associated to the Coxeter graph $A_7$. Fig.2.- In dark it is shown a path on the Bratelli diagram of Fig.1. Fig.3.- The string $S_{\ast,a}$ as a representative of the class of all paths on a Bratelli diagram starting at $\ast$ and ending at $a$. Fig.4.- A string $S_{\ast,a}$ absorbs a point $\bullet$ which carries an allowed state $b$, becoming a new string $S'_{\ast,b}$. Fig.5.- Diagrammatic representation of the string operators (\[2.7\]). Fig.6.- Top: diagrammatic representation of the equation (\[2.8\]). Bottom: Diagrammatic reconstruction of the Hamiltonian $H^{S,3 \bullet}$. Fig.7.- Diagrams of the $T$ and $T^\dagger$ operators (\[2.10\]) ( T for truncation and for triangle). Fig.8.- The normalization condition (\[2.13\]) interpreted as a kind of annihilation process of triangles. Fig.9.- Eq.(\[2.16\]) in pictorical form. Fig.10.- The renormalization of ${\cal O}^{S, 2\bullet}$ ( see eq.(\[2.17\])). From figs.8.9 and 10 we see that the RG procedure is a kind of sewing or gluing construction involving triangles, plaquettes and higher n-gons. Fig.11.- The “super-string” configuration that leads to the infinite system IRF-DMRG algorithm. Fig.12.- Coxeter graph $A_\infty$. Fig.13.- Bratelli diagram built up using the Coxeter graph $A_\infty$. 
Fig.14.- Plot of the deviation of the IRF-DMRG ground state energy of a $s=1/2$ chain with 512 sites, as a function of the number of states retained, $m$. Fig.15.- Coxeter graph $A_r$. Fig.16.- Graphical representation of the vertex-IRF map. Notice that the IRF points $\bullet$ and the vertex points $\odot$ belong to lattices which are dual to one another. Indeed the vertex-IRF map is a kind of duality transformation. Fig.17.- The vertex-IRF map that relates the vertex-DMRG and the IRF-DMRG algorithms. Fig.18.- Here we show the CG decompositions involved in Fig.17.
--- abstract: 'In this work the following lepton flavor violating $\tau$ and $\mu$ decays are studied: $\tau^- \to \mu^- \mu^- \mu^+$, $\tau^- \to e^- e^- e^+$, $\mu^- \to e^- e^- e^+$, $\tau^- \to \mu^- \gamma$, $\tau^- \to e^- \gamma$ and $\mu^- \to e^- \gamma$. We work in a supersymmetric scenario consisting of the minimal supersymmetric standard model particle content, extended by the addition of three heavy right handed Majorana neutrinos and their supersymmetric partners, where the generation of neutrino masses proceeds via the seesaw mechanism. Within this context, a significant lepton flavor mixing is generated in the slepton sector by the neutrino Yukawa couplings, which is transmitted from high to low energies via the renormalization group equations. This slepton mixing then generates, via loops of supersymmetric particles, significant contributions to the rates of $l_j \to 3 l_i$ and the correlated $l_j \to l_i \gamma$ decays. We analyze here in full detail these rates in terms of the relevant input parameters, which are the usual minimal supergravity parameters and the seesaw parameters. For the $l_j \to 3 l_i$ decays, a full one-loop analytical computation of all the contributing supersymmetric loops is presented. This completes and corrects previous computations in the literature. In the numerical analysis, compatibility is required with the most recent experimental upper bounds on all these $\tau$ and $\mu$ decays, with the neutrino data, and with the present lower bounds on the supersymmetric particle masses. Two typical scenarios with degenerate and hierarchical heavy neutrinos are considered. We will show here that the minimal supergravity and seesaw parameters do get important restrictions from these $\tau$ and $\mu$ decays in the hierarchical neutrino case.' author: - Ernesto Arganda - 'María J.
Herrero' title: | Testing Supersymmetry with Lepton Flavor Violating\ $\tau$ and $\mu$ decays --- \[sec:Intro\] Introduction ========================== The present strong evidence for lepton flavor changing neutrino oscillations in neutrino data [@neutrinodata] implies the existence of non-zero masses for the light neutrinos, and provides the first experimental clue for physics beyond the Standard Model (SM). These oscillations also give important information on the neutrino mixing angles of the Maki-Nakagawa-Sakata matrix ($U_{MNS}$) [@MNS]. The experimentally suggested smallness of the three neutrino masses can be explained in a very simple and elegant way by the seesaw mechanism of neutrino mass generation [@seesaw]. This mechanism is usually implemented by the introduction of three heavy right-handed (RH) Majorana neutrinos whose masses, $m_{M_i}$, can be much higher than the SM particle masses. The smallness of the light neutrino masses, $m_{\nu_i}$, appears naturally due to the large suppression induced by the ratio of the two very distant mass scales that enter the $3 \times 3$ seesaw mass matrices, the Majorana matrix $m_M$ and the Dirac matrix $m_D$. For instance, in the one generation case, where the seesaw model predicts $m_\nu \sim m_D^2/m_M$, light neutrino masses in the 0.1 - 1 eV range can be generated with $m_D$ of the order of the electroweak scale, $v=174$ GeV, and large $m_M$ of the order of $10^{14}$ GeV. This huge separation between $m_M$ and the electroweak scale has, however, a serious drawback, since it leads to the well known hierarchy problem of the SM, in which a tree level Higgs boson mass of the order of $v$ is driven by the radiative corrections involving the Majorana neutrinos to unnaturally high values related to the new scale $m_M$. The most elegant solution to this hierarchy problem is provided by the introduction of the symmetry relating fermions and bosons, called supersymmetry (SUSY).
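As a quick numerical check of the one-generation seesaw estimate $m_\nu \sim m_D^2/m_M$ quoted above, the following sketch uses the scales given in the text ($m_D$ of order $v=174$ GeV, $m_M \sim 10^{14}$ GeV):

```python
# Order-of-magnitude check of the one-generation seesaw prediction
# m_nu ~ m_D^2 / m_M, using the scales quoted in the text.
GeV_to_eV = 1e9

m_D = 174.0   # Dirac mass of order the electroweak scale v, in GeV
m_M = 1e14    # heavy Majorana scale, in GeV

m_nu = m_D**2 / m_M            # light neutrino mass, in GeV
print(m_nu * GeV_to_eV)        # ~0.3 eV, inside the 0.1 - 1 eV window
```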
When the seesaw mechanism for neutrino mass generation is implemented in a SUSY context, the SUSY scalar partners of the neutrinos, i.e. the sneutrinos, also contribute to the radiative corrections of the Higgs boson masses and cancel the dangerous contributions from the Majorana neutrinos, solving in this way the hierarchy problem of the simplest non-SUSY version of the seesaw mechanism. The best evidence for supersymmetry would obviously be the discovery of the SUSY particles at the present or next generation of colliders. However, there are alternative ways to test supersymmetry which are indirect and complementary to the direct SUSY particle searches. These refer to the potential measurement of the SUSY particle contributions, via radiative corrections, to rare processes which are being explored at present and whose rates are predicted to be highly suppressed in the SM. Among these processes, the Lepton Flavor Violating (LFV) $\tau$ and $\mu$ decays are probably the most interesting ones, for various reasons. On the one hand, they get vanishing rates in the SM with massless neutrinos and highly suppressed rates in the SM with massive neutrinos. The smallness of these rates in the non-SUSY version of the seesaw mechanism for neutrino mass generation is due to their suppression by inverse powers of the heavy scale $m_M$. On the other hand, although these decays have not been seen so far in the present experiments, there are very restrictive upper bounds on their possible rates which imply important restrictions on the new physics beyond the SM. These restrictions apply even more severely to the case of softly broken SUSY theories with massive neutrinos and the seesaw mechanism, since these give rise to higher rates [@borzumati], suppressed by inverse powers of the SUSY breaking scale, $m_{SUSY}\leq 1$ TeV, instead of inverse powers of $m_M$.
We will focus here in particular on the LFV $\tau$ and $\mu$ decays of type $l_j\to l_i \gamma$ and $l_j\to 3l_i$, for which the present experimental upper bounds are the most restrictive ones [@Aubert:2003pc; @Bellgardt:1987du; @Aubert:2005ye; @Aubert:2005wa; @mue], specifically, $$\begin{aligned} BR(\tau^- \to \mu^- \mu^- \mu^+) &<& 1.9 \times 10^{-7} \nonumber \\ BR(\tau^- \to e^- e^- e^+) &<& 2.0 \times 10^{-7} \nonumber \\ BR(\mu^- \to e^- e^- e^+) &<& 1.0 \times 10^{-12} \nonumber \\ BR(\tau^- \to \mu^- \gamma) &<& 6.8 \times 10^{-8} \nonumber \\ BR(\tau^- \to e^- \gamma) &<& 1.1 \times 10^{-7} \nonumber \\ BR(\mu^- \to e^- \gamma) &<& 1.2 \times 10^{-11} \nonumber \label{cotas}\end{aligned}$$ Our aim in this paper is to analyze the branching ratios that can be generated for all these processes in the context of the SUSY-seesaw scenario with the minimal SUSY content, i.e. the Minimal Supersymmetric Standard Model (MSSM), extended by the addition of three RH neutrinos and their corresponding SUSY partners. These LFV processes are induced by loops of SUSY particles which transmit the lepton flavor mixing from the slepton mass matrices to the observable charged lepton sector. The intergenerational mixing in the slepton sector, $(M_{\tilde l})_{ij},\, i\neq j$, is induced in turn by the radiative corrections involving the neutrino Yukawa couplings $Y_{\nu}$ or, similarly, by the running of the soft SUSY parameters in the slepton sector, via the Renormalization Group Equations (RGEs), from the high energy scale, $M_X>m_M$, where the heavy Majorana neutrinos are still active, down to the electroweak scale. We will assume here a Minimal Supergravity scenario (mSUGRA) with universal soft parameters at $M_X$ and the breaking of the electroweak symmetry generated radiatively. This scenario is also referred to in the literature as the constrained MSSM (CMSSM).
The above LFV processes have previously been studied in the SUSY-seesaw context by several authors [@Hisano:1995cp; @todos; @Casas:2001sr; @Babu:2002et; @Ellis:2002fe], under some specific assumptions for both seesaw parameters, $m_D$ (or $Y_{\nu}$, since they are related by $m_D=Y_{\nu}v\sin\beta$, with $\tan\beta=v_2/v_1$ being the ratio of the two MSSM Higgs vacuum expectation values) and $m_M$, and for the mSUGRA parameters, $M_0$, $M_{1/2}$, $A_0$, $\mbox{sign}(\mu)$ and $\tan\beta$. Our present study of these decay channels updates, completes and corrects the previous analyses in several respects. First, we include, for the first time to our knowledge, the full set of SUSY one-loop contributions to the $l_j \to 3 l_i$ decays, namely the photon, Z boson and Higgs boson penguin diagrams, and the box diagrams. The most complete computation so far of these $l_j \to 3 l_i$ decays was done in [@Hisano:1995cp], where the contributions from the photon and Z boson penguin diagrams and from the box diagrams were included, but the authors focused on the particular choice of degenerate heavy Majorana neutrinos and presented numerical results just for $\mu \to 3e$ decays. We extend this previous study by including in addition the Higgs penguin diagrams mediated by the three neutral MSSM bosons, $H_0$, $h_0$ and $A_0$, and correct their results for the Z penguin contributions. We also extend their study in that we present results for the three decays, $\mu \to 3e$, $\tau \to 3\mu$ and $\tau \to 3e$, and consider both possible scenarios, degenerate and hierarchical heavy neutrinos. The contributions from the Higgs penguin diagrams in the SUSY-seesaw model were first analyzed in [@Babu:2002et]. These authors worked in the large $\tan \beta$ limit and used the mass insertion approximation to account for the induced effect of the intergenerational slepton mixing in the contributing SUSY loops.
There it was concluded that these Higgs-mediated contributions can be very relevant in the large $\tan\beta$ region, because the radiatively induced LFV Higgs-$\tau$-$\mu$ couplings grow as $\tan^2 \beta$ (and, in consequence, $BR(\tau \to 3\mu)$ as $\tan^6 \beta$), and also because the SUSY one-loop contributions do not decouple in these couplings. This large $\tan\beta$ enhancement and SUSY non-decoupling behaviour were also found in the LFV Higgs boson decays, $H_0,h_0,A_0 \to l_i\bar l_j$ [@Brignole1; @Arganda:2004bz]. A more exhaustive study of $\tau \to 3\mu$ and other Higgs-mediated LFV $\tau$ decays, including an estimate of the Higgs contributions, was done in [@Brignole2]. However, these previous numerical estimates of the Higgs contributions to LFV $\tau$ and $\mu$ decays were done in the context of a generic MSSM (see also [@Paradisi:2005tk]), where the Higgs boson mass, or equivalently $m_{A^0}$, is an input parameter and can take small values of the order of 100 GeV, which produces larger rates. A more recent study of the LFV Higgs decays has been done in [@Parry:2005fp] in the SUSY-GUT $SU(5)$ context. We instead work here in the mSUGRA context, where all the MSSM particle masses are quantities derived from the mSUGRA parameters. We will see here that this, together with the requirement of compatibility with the present experimental lower bounds on all the SUSY particle masses [@pdg2004], does indeed constrain the contribution from the Higgs penguins. In the present work we also include the predictions for the $l_j\to l_i \gamma$ channels which, in the context we work in, are interestingly correlated with the $l_j\to 3 l_i$ rates. This correlation has been studied previously in the generic MSSM context in [@Brignole2] and in a similar mSUGRA context in [@Ellis:2002fe], but in the latter the dominant photon penguin approximation was used.
We will update this comparative analysis of the $l_j\to l_i \gamma$ and $l_j\to 3 l_i$ rates in the mSUGRA context, including the full contributions and considering the very recent upper bounds for $\tau\to \mu \gamma$ [@Aubert:2005ye] and $\tau\to e \gamma$ [@Aubert:2005wa]. In addition, we also require the input seesaw parameters to be compatible with the present neutrino data. For this comparison with the neutrino data we use the parametrization first introduced in [@Casas:2001sr] for the study of the $\mu \to e \gamma$ decay. Our final goal will be to use the SUSY contributions to all the above LFV $\tau$ and $\mu$ decays as an efficient way to test the mSUGRA and seesaw parameters. With this goal in mind, we will analyze here the size of the branching ratios in terms of the mSUGRA and seesaw parameters and will explore in detail the restrictions imposed by the present experimental bounds. We will find that for some plausible choices of the seesaw parameters, compatible with neutrino data, there are indeed large excluded regions in the mSUGRA parameter space. The present work is organized as follows. In section \[MSSMnuR\] we review the basic aspects of the MSSM extended with three RH neutrinos, their SUSY partners and the seesaw mechanism for neutrino mass generation. The lepton flavor mixing in the slepton sector and in the mSUGRA context is explained in section \[LFV\]. There we also include the exact diagonalization of the sfermion mass matrices, both in the slepton and in the sneutrino sectors. The analytical results for the LFV $l_j \to 3 l_i$ decays are presented in section \[analytical\]. The numerical results for all the LFV $\tau$ and $\mu$ decays are presented in section \[numerical\]. Finally, section \[conclu\] is devoted to the conclusions.
\[MSSMnuR\] The MSSM extended with RH neutrinos and sneutrinos ============================================================== In this section we briefly review the additional basic ingredients that are needed to extend the MSSM in order to include three right handed neutrinos, their corresponding SUSY partners, i.e. the sneutrinos, and the generation of neutrino masses by the seesaw mechanism. We follow closely the notation of refs. [@Casas:2001sr; @Arganda:2004bz] to describe the SUSY-seesaw scenario and the connection with neutrino data. For the other sectors of the MSSM we assume the standard conventions as defined, for instance, in  [@Haber:1985rc; @Gunion:1986yn]. We start with the Yukawa sector of the MSSM-seesaw, which contains the three left handed (LH) SM neutrinos $\nu_{L, i}^o$ and three extra right handed (RH) massive neutrinos $\nu_{R, i}^o$, whose Yukawa interactions provide, after spontaneous electroweak symmetry breaking and together with the right handed neutrino masses, the following mass Lagrangian containing the Dirac and Majorana mass terms, $$-L^\nu_{mass} = \frac{1}{2} (\overline{\nu^0_L}, (\overline{\nu^0_R})^C) M^\nu \left(\begin{array}{c} (\nu^{0}_L)^C\\ \nu^0_R \end{array} \right)\ + h.c.,$$ where, $$M^\nu\ =\ \left( \begin{array}{cc} 0 & m_D\\ m_D^T & m_M \end{array} \right). \label{mass6x6}$$ Here $m_D$ is the $3 \times 3$ Dirac mass matrix that is related to the $3 \times 3 $ Yukawa coupling matrix $Y_{\nu}$ and the MSSM Higgs vacuum expectation value, $<H_2>=v_2=v\sin\beta$ with $v=174$ GeV, by $m_D=Y_{\nu} <H_2>$. The other MSSM Higgs doublet gives masses to the charged leptons via $m_l=Y_l <H_1>$, where $Y_l$ are the Yukawa couplings of the charged leptons and $<H_1>=v_1=v\cos\beta$.
The remaining $3 \times 3 $ mass matrix involved in the seesaw mechanism, $m_M$, is real, non-singular and symmetric, and provides the masses for the three RH neutrinos. The mass matrix $M^{\nu}$ is a $6 \times 6$ complex symmetric matrix that can be diagonalized by a $6 \times 6$ unitary matrix $U^{\nu}$ in the following way: $$U^{\nu T}M^\nu U^\nu =\hat{M}^\nu = diag (m_{\nu_1},m_{\nu_2},m_{\nu_3},m_{N_1},m_{N_2},m_{N_3}). \label{matrizU}$$ This gives three light Majorana neutrino mass eigenstates $\nu_i$, with masses $m_{\nu_i}$ (i=1,2,3), and three heavy ones $N_i$, with masses $m_{N_i}$ (i=1,2,3), which are related to the weak eigenstates via, $$\left(\begin{array}{c} \nu^0_L \\ (\nu^{0}_R)^C \end{array} \right)\ =\ U^{\nu\ast}\ \left(\begin{array}{c} \nu_L \\ N_L \end{array} \right)\quad \mbox{and}\quad \left(\begin{array}{c} (\nu^{0}_L)^C \\ \nu^0_R \end{array} \right)\ =\ U^\nu\ \left(\begin{array}{c} \nu_R \\ N_R \end{array} \right).$$ The seesaw mechanism for neutrino mass generation assumes a large separation between the two mass scales involved in the $m_D$ and $m_M$ matrices. More specifically, we shall assume here that all matrix elements of $m_D$ are much smaller than those of $m_M$, $m_D \ll m_M$, and the predictions of the seesaw model are then given in power series of a matrix defined as, $$\begin{aligned} \xi &\equiv &m_D m_M^{-1}.\end{aligned}$$ In particular, the previous diagonalization of the mass matrix $M^{\nu}$ can be solved in power series of $\xi$. For simplicity, we choose to work, here and in the rest of this paper, in a flavor basis where the RH Majorana mass matrix, $m_M$, and the charged lepton mass matrix, $m_l$, are flavor diagonal. This means that all flavor mixing of the LH sector is included in the mixing matrix $U_{MNS}$.
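The expansion in $\xi$ can be illustrated numerically in the one-generation case, where $M^\nu$ reduces to a $2\times 2$ matrix; the mass values in the sketch below are purely illustrative:

```python
import numpy as np

# Exact eigenvalues of the one-generation seesaw matrix
# M = [[0, m_D], [m_D, m_M]] versus the leading seesaw expressions
# |m_light| ~ m_D^2/m_M and m_heavy ~ m_M. Values are illustrative (GeV).
m_D, m_M = 100.0, 1e6

M = np.array([[0.0, m_D], [m_D, m_M]])
light, heavy = np.linalg.eigvalsh(M)   # eigenvalues in ascending order

# The light eigenvalue comes out negative; the sign is absorbed into a
# phase when diagonalizing with the unitary matrix U^nu.
print(abs(light), m_D**2 / m_M)   # both ~0.01 GeV
print(heavy, m_M)                 # both ~1e6 GeV
```

The agreement is exact up to relative corrections of order $\xi^2 = (m_D/m_M)^2$.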
By working to the lowest orders of these power series expansions one finds, in the flavor basis, the following neutrino $3 \times 3$ matrices, $$\begin{aligned} m_{\nu}&=&-m_D \xi^T + \mathcal{O}(m_D \xi^3) \simeq -m_D m_M^{-1}m_D^T\\ \nonumber m_N &=& m_M + \mathcal{O}(m_D \xi) \simeq m_M.\end{aligned}$$ Here, $m_N$ is already diagonal, but $m_{\nu}$ is not yet diagonal. The rotation from this flavor basis to the mass eigenstate basis is finally given by the MNS unitary matrix, $U_{MNS}$. Thus, $$\begin{aligned} m_{\nu}^{diag}&=&U_{MNS}^T m_{\nu} U_{MNS}= diag(m_{\nu_1},m_{\nu_2},m_{\nu_3}),\\ \nonumber m_N^{diag} &=& m_N = diag(m_{N_1},m_{N_2},m_{N_3}), \end{aligned}$$ and the diagonalization of $M^{\nu}$ in eqs. (\[mass6x6\]) and (\[matrizU\]) can be performed by the following unitary $6 \times 6$ matrix: $$U^\nu\ =\ \left( \begin{array}{cc} (1-\frac{1}{2} \xi^* \xi^T) U_{MNS} & \xi^* (1-\frac{1}{2} \xi^T \xi^*)\\ -\xi^T (1- \frac{1}{2} \xi^* \xi^T)U_{MNS} & (1-\frac{1}{2} \xi^T \xi^*) \end{array} \right) + \mathcal{O}(\xi ^4). \label{eq8}$$ As for the $U_{MNS}$ matrix, we use the standard parametrization given by, $$U_{MNS}\ =\ \left( \begin{array}{ccc} c_{12} c_{13} & s_{12} c_{13}& s_{13} e^{-i \delta}\\ -s_{12} c_{23}-c_{12}s_{23}s_{13}e^{i \delta} & c_{12} c_{23}-s_{12}s_{23}s_{13}e^{i \delta} & s_{23}c_{13} \\ s_{12} s_{23}-c_{12}c_{23}s_{13}e^{i \delta} & -c_{12} s_{23}-s_{12}c_{23}s_{13}e^{i \delta} & c_{23}c_{13}\end{array} \right) diag(1,e^{i \alpha},e^{i \beta}). \label{Umns}$$ where $c_{ij} \equiv \cos \theta_{ij}$ and $s_{ij} \equiv \sin \theta_{ij}$. Regarding the sneutrino sector, and because of SUSY, the introduction of three RH neutrinos, $\nu_R$, leads to the addition of the three corresponding SUSY partners, ${\tilde \nu_R}$. Thus, there are two complex scalar fields $\tilde \nu_L$ and $\tilde \nu_R$ per generation, as in the charged slepton case where there are $\tilde l_L$ and $\tilde l_R$. 
The difference is that in the sneutrino sector the seesaw matrix $\xi$ is involved, as in the neutrino sector, and gives rise to a natural suppression of the RH sneutrino components in the relevant mass eigenstates. This fact makes the diagonalization procedure simpler in the sneutrino sector than in the charged slepton one. In order to understand this feature of the MSSM-seesaw model properly, we illustrate in the following the simplest case of one generation, where this suppression already manifests itself. For this, we follow closely [@Grossman:1997is]. The generalization of this decoupling behaviour of the ${\tilde \nu_R}$ components to the three generations case is straightforward and we omit it here for brevity. One starts by adding the new terms in the MSSM Lagrangian that involve the $\nu_R$ and/or ${\tilde \nu_R}$. In particular, the usual MSSM soft SUSY breaking potential must be modified to include new mass and coupling terms for the right handed sneutrinos, which for the one generation case are the following, $$\begin{aligned} V_{soft}^{\tilde \nu}= m_{\tilde M}^2 \tilde \nu_R^* \tilde \nu_R - \left( \frac{g}{\sqrt{2}m_W}\epsilon_{ij} \frac{m_D A_{\nu}}{\sin \beta} H_2^i \tilde l_L^j \tilde \nu_R^* + h.c. \right) + \left( m_M B_M \tilde \nu_R^* \tilde \nu_R + h.c. \right) \nonumber \\\end{aligned}$$ where $m_{\tilde M}$, $A_{\nu}$ and $B_M$ are the new soft breaking parameters. These are in addition to the usual soft parameters of the slepton sector, $m_{\tilde L}$, $m_{\tilde E}$ and $A_{l}$.
The sneutrino mass terms of the MSSM-seesaw model can then be written in the one generation case as, $$-\mathcal{L}_{mass}^{\nu}=\left(\begin{array}{c} Re (\tilde{\nu}_L) \, Re (\tilde{\nu}_R) \, Im(\tilde{\nu}_L) \, Im (\tilde{\nu}_R)\end{array} \right) \left( \begin{array}{cc} M_+^2 & 0\\ 0 & M_{-}^2 \end{array} \right) \left(\begin{array}{c} Re(\tilde{\nu}_L) \\ Re(\tilde{\nu}_R) \\ Im(\tilde{\nu}_L)\\ Im(\tilde{\nu}_R) \end{array} \right)$$ with, $$M_{\pm}^2= \left( \begin{array}{cc} m_{\tilde{L}}^2 + m_D^2 + \frac{1}{2} m_Z^2 \cos 2 \beta & m_D (A_{\nu}- \mu \cot \beta \pm m_M)\\ m_D (A_{\nu}- \mu \cot \beta \pm m_M) & m_{\tilde{M}}^2+m_D^2+m_M^2 \pm 2 B_M m_M \end{array} \right)$$ Notice that, in the sneutrino sector, there are several mass scales involved, the soft SUSY-breaking parameters, $m_{\tilde L}$, $m_{\tilde M}$, $B_M$ and $A_{\nu}$, the Dirac mass $m_D$, the $\mu$-mass parameter, the Z boson mass $m_Z$ and the Majorana neutrino mass $m_M$. Our basic assumption in all this work is that $m_M$ is much heavier than the other mass scales involved (except $M_X$), $m_M>>m_D, m_Z, \mu, m_{\tilde{L}}, m_{\tilde M}, A_{\nu}, B_M$. The size of $B_M$ has been discussed in the literature [@Grossman:1997is] and seems more controversial. For simplicity, we shall assume here that this is also smaller than $m_M$. In this large $m_M$ limit, the diagonalization of the previous sneutrino squared mass matrix is simpler and leads to four mass eigenstates, two of which are light, $\xi_1^l$, $\xi_2^l$ and two heavy, $\xi_1^h$, $\xi_2^h$. In the leading orders of the series expansion in powers of $\xi$ the mass eigenstates and their corresponding mass eigenvalues are given by (We correct in the definitions of $M_{\pm}^2$ and $\xi_2^l$ some typos with wrong signs of ref. 
[@Arganda:2004bz]), $$\begin{aligned} \xi_1^l &=& \sqrt{2} \left( Re(\tilde{\nu}_L) - \xi Re(\tilde{\nu}_R)\right) \,\,; \xi_2^l = \sqrt{2} \left( Im(\tilde{\nu}_L) + \xi Im(\tilde{\nu}_R)\right) \nonumber \\ \xi_1^h &=& \sqrt{2} \left( Re(\tilde{\nu}_R) + \xi Re(\tilde{\nu}_L)\right) \,\,; \xi_2^h = \sqrt{2} \left( Im(\tilde{\nu}_R) - \xi Im(\tilde{\nu}_L)\right) \nonumber \\ m_{\xi_{1,2}^l}^2 &=& m_{\tilde{L}}^2 + \frac{1}{2} m_Z^2 \cos 2 \beta \mp 2 m_D (A_{\nu} -\mu \cot \beta-B_M)\xi \nonumber \\ m_{\xi_{1,2}^h}^2 &=& m_M^{2} \pm 2 B_M m_M + m_{\tilde M}^2 + 2 m_D^2\end{aligned}$$ Here we can see that the heavy states $\xi_{1,2}^h$ couple very weakly to the rest of the particles of the MSSM via their $\tilde{\nu}_L$ component, which is highly suppressed by the small factor $\xi$; therefore, it is a good approximation to ignore them and keep just the light states $\xi_{1,2}^l$, which are made mainly of $\tilde{\nu}_L$ and its complex conjugate $\tilde{\nu}_L^*$. One then says that the heavy sneutrinos decouple from the low energy physics. The generalization of the previous argument to the three generations case leads to the conclusion that, in the seesaw limit, $\xi \ll 1$, the physical sneutrino eigenstates, $\tilde{\nu}_{\beta}$ ($\beta = 1,2,3$), are made mainly of the $\tilde \nu_{L,\,l}$ states with $l=e,\,\mu,\,\tau$ respectively, and their corresponding complex conjugates. The passage from the weak eigenstates to the mass eigenstates is thus reduced to the diagonalization of a $3 \times 3$ sneutrino mass matrix. This is to be compared with the more involved case of the charged sleptons, where the corresponding procedure requires the diagonalization of a $6 \times 6$ slepton mass matrix. This will be presented in the next section, where the most general case with lepton flavor mixing is considered. To end this section, we briefly comment on the parameterization that we use to make contact with the neutrino data.
It was first introduced in [@Casas:2001sr] to study the $\mu \to e \gamma$ decay and used later by many other authors. The advantage of this parameterization is that instead of using as input parameters the seesaw mass matrices $m_D$ and $m_M$ it uses the three physical light neutrino masses, $m_{\nu_i}$, the three physical heavy neutrino masses, $m_{N_i}$, the $U_{MNS}$ matrix, and a general complex $3 \times 3$ orthogonal matrix $R$. With our signs and matrix conventions, the relation between the seesaw mass matrices and these other more physical quantities is given by, $$m_D^T =i \,m_N^{diag \, 1/2}\, R \,m_{\nu}^{diag \, 1/2}\, U_{MNS}^+ \label{Rcasas}$$ where $R^T R=1$ and, as we have said, $m_{N_i} \simeq m_{M_i}$. Thus, instead of proposing directly possible textures for $m_D$, or $Y_{\nu}$, one proposes possible values for $m_{N_1} \, ,m_{N_2} \, ,m_{N_3} $ and $R$, and sets $m_{\nu_1} \, ,m_{\nu_2} \, ,m_{\nu_3} $ and $U_{MNS}$ to their suggested values from the experimental data. Notice that any hypothesis for $R$ different from the unit matrix will lead to an additional lepton flavor mixing, besides the one introduced by the $U_{MNS}$. Notice also that the previous relation holds at the energy scale $m_M$, and to use it properly one must apply the Renormalization Group Equations to run the input experimental data $m_{\nu}^{diag}$ and $U_{MNS}$ from the low energies $m_W$ up to $m_M$. Therefore, we will also include these running effects in the numerical results for all the branching ratios presented in this work. Regarding the matrix $R$, we will consider the following parameterization: $$R =\ \left( \begin{array}{ccc} c_{2} c_{3} & -c_{1} s_{3}-s_1 s_2 c_3& s_{1} s_3- c_1 s_2 c_3\\ c_{2} s_{3} & c_{1} c_{3}-s_{1}s_{2}s_{3} & -s_{1}c_{3}-c_1 s_2 s_3 \\ s_{2} & s_{1} c_{2} & c_{1}c_{2}\end{array} \right).$$ where $c_i\equiv \cos \theta_i$, $s_i\equiv \sin\theta_i$ and $\theta_1$, $\theta_2$ and $\theta_3$ are arbitrary complex angles. 
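As a numerical sketch of this parameterization (all mass values below are illustrative, not the paper's benchmark points), one can check that a Dirac matrix built from eq. (\[Rcasas\]) automatically reproduces the seesaw relation $m_{\nu}^{diag}=U_{MNS}^T \left(-m_D m_M^{-1}m_D^T\right) U_{MNS}$, and that $R^T R=1$ holds even for complex angles:

```python
import numpy as np

# Build m_D from m_D^T = i m_N^{1/2} R m_nu^{1/2} U_MNS^+ and verify the
# seesaw relation of the text. Light masses are scenario-B-like values in
# eV and heavy masses are illustrative numbers; the algebraic check is
# independent of the units chosen.
mnu = np.diag([0.0, 0.008, 0.05])    # light masses (eV)
mN = np.diag([1e10, 1e11, 1e12])     # heavy masses (illustrative)

# U_MNS with theta_12 = 30 deg, theta_23 = 45 deg, theta_13 = 0, no phases
s12, c12 = np.sin(np.radians(30)), np.cos(np.radians(30))
s23 = c23 = np.sin(np.radians(45))
U = np.array([[c12,       s12,      0.0],
              [-s12*c23,  c12*c23,  s23],
              [s12*s23,  -c12*s23,  c23]], dtype=complex)

# R with theta_2 = theta_3 = 0 and a complex theta_1 (hypothetical value);
# cos^2 + sin^2 = 1 also holds for complex angles, so R^T R = 1 survives.
th1 = 0.3 + 0.2j
R = np.array([[1, 0, 0],
              [0, np.cos(th1), -np.sin(th1)],
              [0, np.sin(th1),  np.cos(th1)]])

mDT = 1j * np.sqrt(mN) @ R @ np.sqrt(mnu) @ U.conj().T
mD = mDT.T
mnu_seesaw = -mD @ np.linalg.inv(mN) @ mDT

print(np.allclose(R.T @ R, np.eye(3)))                     # True
print(np.allclose(U.T @ mnu_seesaw @ U, mnu, atol=1e-12))  # True
```

The factors of $i$ cancel against the overall minus sign of the seesaw formula, which is why the reconstructed $m_\nu$ comes out with the correct sign for any choice of $R$.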
This parameterization was proposed in ref. [@Casas:2001sr] for the study of $\mu \rightarrow e \gamma$ decays. It has also been considered in refs. [@Chankowski:2004jc; @Bi:2003ea], with specific values of the $\theta_i$ angles, to study the implications for baryogenesis in the case of hierarchical neutrinos, and in [@Arganda:2004bz] to study the LFV Higgs boson decays into $l_i \bar{l}_j$. Finally, for the numerical estimates in this work, we will consider the following two plausible scenarios at low energies, both compatible with data: - Scenario A: with quasi-degenerate light and degenerate heavy neutrinos, $$\begin{aligned} m_{\nu_1}&=&0.2 \, eV \, , m_{\nu_2}=m_{\nu_1}+\frac{\Delta m_{sol}^2}{2 m_{\nu_1}} \, , m_{\nu_3}=m_{\nu_1}+\frac{\Delta m_{atm}^2}{2 m_{\nu_1}}, \\ \nonumber m_{N_1} &=& m_{N_2}= m_{N_3}= m_N\end{aligned}$$ - Scenario B: with hierarchical light and hierarchical heavy neutrinos, $$\begin{aligned} m_{\nu_1} &\simeq& 0 \, eV \, , m_{\nu_2}= \sqrt{\Delta m_{sol}^2} \, , m_{\nu_3}=\sqrt{\Delta m_{atm}^2}, \\ \nonumber m_{N_1} &\leq & m_{N_2} < m_{N_3}\end{aligned}$$ In the two above scenarios, we will fix the input low energy data to the following values, $\sqrt{\Delta m_{sol}^2}=0.008$ eV, $\sqrt{\Delta m_{atm}^2}=0.05$ eV, $\theta_{12}=\theta_{sol}=30^\circ$, $\theta_{23}=\theta_{atm}=45^\circ$, $\theta_{13}=0^\circ$ and $\delta = \alpha= \beta =0$ (see, for instance, ref. [@review]). Some results will also be presented for the alternative choice of small but non-vanishing $\theta_{13}$. \[LFV\] Generation of flavor mixing in the slepton sector ========================================================= Once the three $\nu_R$ and the three $\tilde{\nu}_R$ are added to the MSSM particle content, lepton flavor mixing is generated in the slepton sector.
This can be seen as the result of a misalignment between the rotations leading to the mass eigenstate basis of sleptons and the one of leptons, which is generically present in the SUSY-seesaw models. This misalignment is radiatively generated from the Yukawa couplings of the Majorana neutrinos and can be sizable in both the charged slepton and sneutrino sectors. Usually, it is implemented via the Renormalization Group Equations (RGEs), which we take within the context of mSUGRA extended with three right-handed neutrinos and their SUSY partners. In consequence, we assume here universal soft-SUSY-breaking parameters at the large energy scale $M_X \gg m_M$, which must now include the corresponding parameters of the neutrino and sneutrino sectors, namely, $$\begin{aligned} (m_{\tilde{L}})_{ij}^2 &=& M_0^2 \delta_{ij}, \, (m_{\tilde{E}})_{ij}^2 = M_0^2 \delta_{ij}, \, (m_{\tilde{M}})_{ij}^2 = M_0^2 \delta_{ij} \nonumber \\ (A_{l})_{ij}&=& A_0 (Y_l)_{ij}, \, (A_{\nu})_{ij}= A_0 (Y_{\nu})_{ij},\,i,j=1,2,3 \label{univ_cond}\end{aligned}$$ Here, $M_0$ and $A_0$ are the usual universal soft SUSY breaking parameters in mSUGRA, $(Y_{l})_{ij}=Y_{l_i} \delta_{ij}$ with $Y_{l_i}= m_{l_i}/v_1$, and $(Y_{\nu})_{ij}=(m_D)_{ij}/v_2$. Notice that we have used the $3 \times 3$ matrix form with $i,j=1,2,3$ or equivalently $i,j=e,\mu,\tau$. The effects of the running from $M_X$ down to $m_M$ on the soft mass matrices of the slepton sector are then found by solving the RGEs, which now include the corresponding terms and equations for the neutrino Yukawas and the soft breaking parameters of the sneutrino sector, as these are active particles in this energy range. Below the energy scale $m_M$, the right handed neutrinos decouple, and the effects of running from $m_M$ down to the electroweak scale on the various parameters are obtained by solving the RGEs, now without the terms and equations containing the Yukawas and soft breaking neutrino parameters.
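The size of the off-diagonal entries generated by this running can be anticipated with the leading-log approximation that is standard in the literature (not derived in the text); the simple factorized logarithm below assumes degenerate heavy neutrinos, and all numerical inputs are illustrative rather than the paper's benchmarks:

```python
import numpy as np

# Leading-log estimate of the RGE-induced off-diagonal soft slepton masses,
# (dm_L^2)_ij ~ -(3 M0^2 + A0^2)/(8 pi^2) (Ynu^dag Ynu)_ij log(MX/mM).
# The exact results in the text instead come from solving the full RGEs.
M0, A0 = 400.0, 0.0    # illustrative mSUGRA inputs, in GeV
MX, mM = 2e16, 1e14    # GeV; single (degenerate) heavy-neutrino scale

Ynu = np.array([[0.0, 0.0, 0.0],
                [0.0, 0.1, 0.1],
                [0.0, 0.1, 0.1]])   # hypothetical texture with 2-3 mixing

dm2 = -(3 * M0**2 + A0**2) / (8 * np.pi**2) * np.log(MX / mM) * (Ynu.T @ Ynu)
print(dm2[1, 2])   # (mu,tau) entry in GeV^2; this drives the tau -> mu rates
```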
The values of the various SUSY parameters obtained at the electroweak scale are the relevant ones for building the slepton and sneutrino mass matrices presented below. To solve the RGEs numerically we use the Fortran code SPheno [@Porod:2003um], which we have adapted to include the full flavor structure of the $3 \times 3$ soft SUSY breaking mass and trilinear coupling matrices and of the Yukawa coupling matrices. This program solves the full RGEs (i.e. including the aforementioned extra equations and neutrino terms) at the two-loop level, computes the MSSM spectrum at low energies, and takes as inputs the universal mSUGRA parameters $M_0$, $A_0$ and $M_{1/2}$, the value of $\tan\beta$ at the electroweak scale, and the sign of the $\mu$ mass parameter. The value of $M_X$ is derived from the unification condition for the $SU(2)$ and $U(1)$ couplings, $g_1 = g_2$. For all the numerical analyses performed in this work, we obtain values very close to $M_X = 2 \times 10^{16}$ GeV. The value of $|\mu|$ is derived from the requirement of proper radiative electroweak symmetry breaking. We next present the slepton mass matrices, relevant at low energies, that include the lepton mixing generated from the neutrino Yukawa couplings by the RGEs.
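As an aside, the quoted unification scale $M_X \approx 2 \times 10^{16}$ GeV can be reproduced with a simple one-loop estimate using the MSSM coefficients $b_1 = 33/5$, $b_2 = 1$ (GUT normalization); the electroweak-scale coupling values below are standard textbook inputs, not numbers taken from the text:

```python
import math

MZ = 91.19                             # GeV
inv_alpha1, inv_alpha2 = 59.0, 29.6    # 1/alpha_i(MZ), GUT-normalized (typical values)
b1, b2 = 33.0 / 5.0, 1.0               # one-loop MSSM beta coefficients

# 1/alpha_i(mu) = 1/alpha_i(MZ) - (b_i / 2 pi) log(mu / MZ); impose g1(MX) = g2(MX)
log_MX_over_MZ = 2.0 * math.pi * (inv_alpha1 - inv_alpha2) / (b1 - b2)
MX = MZ * math.exp(log_MX_over_MZ)     # ~ 2e16 GeV
```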
For the charged slepton case and referred to the $(\tilde{e}_L, \tilde{e}_R, \tilde{\mu}_L, \tilde{\mu}_R, \tilde{\tau}_L, \tilde{\tau}_R)$ basis, the squared mass matrix can be written as follows, $$M_{\tilde{l}}^2\ =\ \left( \begin{array}{cccccc} M_{LL}^{ee \, 2} & M_{LR}^{ee \, 2} & M_{LL}^{e \mu \, 2} & M_{LR}^{e \mu \, 2} & M_{LL}^{e \tau \, 2} & M_{LR}^{e \tau \, 2} \\ M_{RL}^{ee \, 2} & M_{RR}^{ee \, 2} & M_{RL}^{e \mu \, 2} & M_{RR}^{e \mu \, 2} & M_{RL}^{e \tau \, 2} & M_{RR}^{e \tau \, 2} \\ M_{LL}^{\mu e \, 2} & M_{LR}^{\mu e \, 2}& M_{LL}^{\mu \mu \, 2} & M_{LR}^{\mu \mu \, 2} & M_{LL}^{\mu \tau \, 2} & M_{LR}^{\mu \tau \, 2} \\ M_{RL}^{\mu e \, 2} & M_{RR}^{\mu e \, 2} & M_{RL}^{\mu \mu \, 2} & M_{RR}^{\mu \mu \, 2} & M_{RL}^{\mu \tau \, 2} & M_{RR}^{\mu \tau \, 2} \\ M_{LL}^{\tau e \, 2} & M_{LR}^{\tau e \, 2} & M_{LL}^{\tau \mu \, 2} & M_{LR}^{\tau \mu \, 2} & M_{LL}^{\tau \tau \, 2} & M_{LR}^{\tau \tau \, 2}\\ M_{RL}^{\tau e \, 2} & M_{RR}^{\tau e \, 2} & M_{RL}^{\tau \mu \, 2} & M_{RR}^{\tau \mu \, 2} & M_{RL}^{\tau \tau \, 2} & M_{RR}^{\tau \tau \, 2} \end{array} \right) \label{sleptonmatrix}$$ where, $$\begin{aligned} M_{LL}^{ij \, 2} &=& m_{\tilde{L}, ij}^2 + v_1^2 \left( Y_l^{\dagger} Y_l \right)_{ij} + m_Z^2 \cos 2 \beta \left(-\frac{1}{2}+ \sin^2 \theta_{W} \right) \delta_{ij} \nonumber \\ M_{RR}^{ij \, 2} &=& m_{\tilde{E}, ij}^2 + v_1^2 \left( Y_l^{\dagger} Y_l \right)_{ij} - m_Z^2 \cos 2 \beta \sin^2 \theta_{W} \delta_{ij} \nonumber \\ M_{LR}^{ij \, 2} &=& v_1 \left(A_l^{ij}\right)^{\ast} -\mu Y_l^{ij} v_2 \nonumber \\ M_{RL}^{ij \, 2} &=& \left(M_{LR}^{ij \, 2}\right)^{\ast} \nonumber \\\end{aligned}$$ The soft SUSY breaking mass matrices and trilinear coupling matrices above, $m_{\tilde{L}, ij}$, $m_{\tilde{E}, ij}$ and $A_l^{ij}$, with $i,j= e\,,\,\mu \,,\,\tau$, refer to their corresponding values at the electroweak scale which we get with the SPheno program. 
After numerical diagonalization of the $M_{\tilde{l}}^2$ matrix one gets the physical slepton masses and the six mass eigenstates $(\tilde{l}_1,\ldots,\tilde{l}_6) \equiv \tilde{l}$, which are related to the previous weak eigenstates $(\tilde{e}_L,\ldots,\tilde{\tau}_R) \equiv \tilde{l}'$ by $\tilde{l}' = R^{(l)}\tilde{l}$, where $R^{(l)}$ is a $6 \times 6$ rotation matrix such that, $$\begin{aligned} M_{\tilde l_{diag}}^2 &=& R^{(l)} M_{\tilde l}^2 R^{(l)\,\dag} = diag(m_{\tilde l_1}^2,..,m_{\tilde l_6}^2).\end{aligned}$$ For the sneutrino sector, the $3 \times 3$ squared mass matrix, referred to the $\tilde \nu'= (\tilde \nu_{e,\,L}, \,\tilde \nu_{\mu,\,L}, \,\tilde \nu_{\tau,\,L})$ basis, can be written as follows, $$M_{\tilde{\nu}}^2\ =\ \left( \begin{array}{ccc} m_{\tilde{L}, ee}^2 + \frac{1}{2} m_Z^2 \cos 2 \beta & m_{\tilde{L}, e \mu}^2 & m_{\tilde{L}, e \tau}^2 \\ m_{\tilde{L}, \mu e}^2 & m_{\tilde{L}, \mu \mu}^2 + \frac{1}{2} m_Z^2 \cos 2 \beta & m_{\tilde{L}, \mu \tau}^2 \\ m_{\tilde{L}, \tau e}^2 & m_{\tilde{L}, \tau \mu}^2 & m_{\tilde{L}, \tau \tau}^2 + \frac{1}{2} m_Z^2 \cos 2 \beta \end{array} \right)$$ where $m_{\tilde{L}, ij}^2$ are the same as in the previous charged slepton squared mass matrix.
After diagonalization of the $M_{\tilde{\nu}}^2$ matrix one gets the relevant physical sneutrino masses and eigenstates $\tilde{\nu}_{\beta}$ ($\beta = 1,2,3$), which are related to the previous states $\tilde{\nu}_{\alpha}'$ by the corresponding $3 \times 3$ rotation matrix, $\tilde{\nu}'=R^{(\nu)} \tilde{\nu}$, which is such that, $$\begin{aligned} M_{\tilde \nu_{diag}}^2 &=& R^{(\nu)} M_{\tilde \nu}^2 R^{(\nu)\,\dag} = diag(m_{\tilde \nu_1}^2,m_{\tilde \nu_2}^2,m_{\tilde \nu_3}^2).\end{aligned}$$ Finally, in order to illustrate later the size of the misalignment effects in the slepton sector we define the following dimensionless parameters, $$\begin{aligned} \delta_{LL}^{ij} = \frac{M_{LL}^{ij 2}}{\tilde{m}^2} \label{deltaLL}\\ \delta_{LR}^{ij} = \frac{M_{LR}^{ij 2}}{\tilde{m}^2} \label{deltaLR}\\ \delta_{RR}^{ij} = \frac{M_{RR}^{ij 2}}{\tilde{m}^2} \label{deltaRR}\end{aligned}$$ where $$\tilde{m}^2 = \left( m_{\tilde{l}_1}^2 m_{\tilde{l}_2}^2 m_{\tilde{l}_3}^2 m_{\tilde{l}_4}^2 m_{\tilde{l}_5}^2 m_{\tilde{l}_6}^2 \right)^{1/6}$$ is an average slepton squared mass. These parameters have also been considered by other authors in a more model-independent approach, with the purpose of deriving bounds from experimental data. Some of these bounds can be found in [@9604387; @Chankowski:2005jh; @Paradisi:2005fk]. For all the numerical results presented in this paper, we will set the values of the following input parameters and physical quantities: - [mSUGRA parameters]{}: $M_0$, $M_{1/2}$, $A_0$, $\mbox{sign}(\mu)$ and $\tan{\beta}$. - [seesaw parameters]{}: $m_{N_1}$, $m_{N_2}$, $m_{N_3}$ and $R$ (or equivalently $\theta_1$, $\theta_2$, $\theta_3$).
- [physical quantities]{}: $m_{\nu_1}$, $m_{\nu_2}$, $m_{\nu_3}$, $U_{MNS}$ \[analytical\] Analytical results for the $l_j^- \to l_i^- l_i^- l_i^+$ decays =============================================================================== In this section we present the analytical results for the LFV $\tau$ and $\mu$ decays into three leptons with the same flavor, within the mSUGRA-seesaw context that we have presented in the previous sections. We perform a complete one-loop computation of the $\tau$ and $\mu$ decay widths for all three possible channels, $\tau^- \to \mu^- \mu^- \mu^+$, $\tau^- \to e^- e^- e^+$ and $\mu^- \to e^- e^- e^+$, and include all the contributing SUSY loops. We present each contribution separately: $\gamma$-penguin, $Z$-penguin, Higgs-penguin and boxes. The contributions from the Higgs-penguin diagrams are, to our knowledge, computed exactly here for the first time. We have also revised the analytical results in [@Hisano:1995cp] and correct their result for the $Z$-penguin contributions. Notice that we make the computation in the physical mass eigenstate basis. That is, we consider the one-loop contributions from charged sleptons $\tilde l_X$ ($X=1,\ldots,6$), sneutrinos $\tilde \nu_X$ ($X=1,2,3$), charginos ${\tilde{\chi}_A^-}$ ($A=1,2$), and neutralinos ${\tilde{\chi}_A^0}$ ($A=1,\ldots,4$). Throughout this section we closely follow the notation and presentation of [@Hisano:1995cp]. The interactions in the physical mass eigenstate basis that are needed for this computation are collected in the form of Feynman rules in Appendix \[apendice1\]. First, we define the amplitudes for the $l_j^-(p) \to l_i^-(p_1) l_i^-(p_2) l_i^+(p_3)$ decays as the sum of the various contributions, $$T(l_j^- \to l_i^- l_i^- l_i^+) = T_{\gamma-{\rm penguin}} + T_{Z-{\rm penguin}} + T_{\rm H-penguin} + T_{\rm boxes}.$$ In the following we present the results for these contributions in terms of some convenient form factors.
The $\gamma$-penguin contributions ---------------------------------- The diagrams where a photon is exchanged are called $\gamma$-penguin diagrams and are shown in fig. \[GammaPenguin\]. The result for the $\gamma$-penguin amplitude contributing to the $l_j^- \to l_i^- l_i^- l_i^+$ decays is usually written as, $$\begin{aligned} T_{\gamma-{\rm penguin}} &=& \bar{u}_i(p_1)\left[q^2 \gamma_{\mu} (A_1^L P_L + A_1^R P_R) + i m_{l_j} \sigma_{\mu \nu} q^{\nu} \left( A_2^L P_L + A_2^R P_R \right) \right] u_j(p) \nonumber \\ &\times& \frac{e^2}{q^2} \bar{u}_i(p_2) \gamma^{\mu} v_i(p_3) - (p_1 \leftrightarrow p_2)\end{aligned}$$ where $q$ is the photon momentum and $e$ is the electric charge. The photon-penguin amplitude receives two contributions in the MSSM-seesaw, from the neutralino and chargino sectors respectively, as can be seen in the structure of the form factors, $$A_a^{L,R} = A_a^{(n)L,R} + A_a^{(c)L,R}, \quad a = 1, 2$$ ![$\gamma$-penguin diagrams contributing to the $l_j^- \to l_i^- l_i^- l_i^+$ decay[]{data-label="GammaPenguin"}](gamma-penguin.epsi){width="10cm"} The neutralino contributions are given by $$\begin{aligned} A_1^{(n)L} &=& \frac{1}{576 \pi^2} N_{iAX}^R N_{jAX}^{R \ast} \frac{1}{m_{\tilde{l}_X}^2} \frac{2 - 9 x_{AX} + 18 x_{AX}^2 - 11 x_{AX}^3 + 6 x_{AX}^3 \log{x_{AX}}}{\left( 1 - x_{AX} \right)^4} \nonumber \\ \\ A_2^{(n)L} &=& \frac{1}{32 \pi^2} \frac{1}{m_{\tilde{l}_X}^2} \left[ N_{iAX}^L N_{jAX}^{L \ast} \frac{1 - 6 x_{AX} + 3 x_{AX}^2 + 2 x_{AX}^3 - 6 x_{AX}^2 \log{x_{AX}}}{6 \left( 1 - x_{AX} \right)^4} \right. \nonumber \\ &+& N_{iAX}^R N_{jAX}^{R \ast} \frac{m_{l_i}}{m_{l_j}} \frac{1 - 6 x_{AX} + 3 x_{AX}^2 + 2 x_{AX}^3 - 6 x_{AX}^2 \log{x_{AX}}}{6 \left( 1 - x_{AX} \right)^4} \nonumber \\ &+& \left. N_{iAX}^L N_{jAX}^{R \ast} \frac{m_{\tilde{\chi}_A^0}}{m_{l_j}} \frac{1 - x_{AX}^2 + 2 x_{AX} \log{x_{AX}}}{\left( 1 - x_{AX} \right)^3} \right] \label{A2Lneut}\\ A_a^{(n)R} &=& \left.
A_a^{(n)L} \right|_{L \leftrightarrow R}\label{ARneut}\end{aligned}$$ where $x_{AX} = m_{\tilde{\chi}_A^0}^2/m_{\tilde{l}_X}^2$. On the other hand, the chargino contributions are $$\begin{aligned} A_1^{(c)L} &=& -\frac{1}{576 \pi^2} C_{iAX}^R C_{jAX}^{R \ast} \frac{1}{m_{\tilde{\nu}_X}^2} \frac{16 - 45 x_{AX} + 36 x_{AX}^2 - 7 x_{AX}^3 + 6 (2 - 3 x_{AX}) \log{x_{AX}}}{\left( 1 - x_{AX} \right)^4} \nonumber \\ & & \\ A_2^{(c)L} &=& -\frac{1}{32 \pi^2} \frac{1}{m_{\tilde{\nu}_X}^2} \left[ C_{iAX}^L C_{jAX}^{L \ast} \frac{2 + 3 x_{AX} - 6 x_{AX}^2 + x_{AX}^3 + 6 x_{AX} \log{x_{AX}}}{6 \left( 1 - x_{AX} \right)^4} \right. \nonumber \\ &+& C_{iAX}^R C_{jAX}^{R \ast} \frac{m_{l_i}}{m_{l_j}} \frac{2 + 3 x_{AX} - 6 x_{AX}^2 + x_{AX}^3 + 6 x_{AX} \log{x_{AX}}}{6 \left( 1 - x_{AX} \right)^4} \nonumber \\ &+& \left. C_{iAX}^L C_{jAX}^{R \ast} \frac{m_{\tilde{\chi}_A^-}}{m_{l_j}} \frac{-3 + 4 x_{AX} - x_{AX}^2 - 2 \log{x_{AX}}}{\left( 1 - x_{AX} \right)^3} \right] \label{A2Lchar} \\ A_a^{(c)R} &=& \left. A_a^{(c)L} \right|_{L \leftrightarrow R}\label{ARchar}\end{aligned}$$ where $x_{AX} = m_{\tilde{\chi}_A^-}^2/m_{\tilde{\nu}_X}^2$. Notice that in both the neutralino and chargino contributions a summation over the indices $A$ and $X$ is understood. Notice also that we have not neglected any of the fermion masses. If we neglect these masses in the previous formulas we recover the result of [@Hisano:1995cp]. The expressions for the $N$ and $C$ couplings are given in Appendix \[apendice1\]. The $Z$-penguin contributions ----------------------------- The diagrams where a $Z$ boson is exchanged are called the $Z$-penguin diagrams and are shown in fig. \[ZPenguin\].
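The kinematic functions of $x_{AX}$ entering the photon-penguin form factors above are regular in the degenerate limit $x_{AX} \to 1$; a Taylor expansion gives the finite limiting values $3/2$, $1/12$ and $1/3$ for the three neutralino functions, which the following sketch checks numerically:

```python
import math

# Loop functions of x = m_{chi^0}^2 / m_{slepton}^2 from A_1^{(n)L} and A_2^{(n)L}
def f1(x):
    return (2 - 9*x + 18*x**2 - 11*x**3 + 6*x**3*math.log(x)) / (1 - x)**4

def f2(x):  # includes the explicit factor 1/6 of the text
    return (1 - 6*x + 3*x**2 + 2*x**3 - 6*x**2*math.log(x)) / (6*(1 - x)**4)

def f3(x):  # chirality-flip term proportional to m_{chi^0} / m_{l_j}
    return (1 - x**2 + 2*x*math.log(x)) / (1 - x)**3

# Near x = 1 the 0/0 forms approach the finite limits 3/2, 1/12 and 1/3
vals = (f1(1.01), f2(1.01), f3(1.01))
```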
The amplitude in this case is $$\begin{aligned} T_{Z-\mbox{\rm penguin}} &=& \frac{1}{m_Z^2} \bar{u}_i(p_1) \left[ \gamma_{\mu} \left( F_L P_L + F_R P_R \right) \right] u_j(p) \nonumber \\ &\times& \bar{u}_i(p_2) \left[ \gamma^{\mu} \left( Z_L^{(l)} P_L + Z_R^{(l)} P_R \right) \right] v_i(p_3) - (p_1 \leftrightarrow p_2)\end{aligned}$$ where, as before, $F_{L(R)} = F_{L(R)}^{(n)} + F_{L(R)}^{(c)}$. The expressions for these form factors are the following: $$\begin{aligned} F_L^{(n)} &=& -\frac{1}{16 \pi^2} \left\{ N_{iBX}^R N_{jAX}^{R \ast} \left[ 2 E_{BA}^{R(n)} C_{24}(m_{\tilde{l}_X}^2, m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) - E_{BA}^{L(n)} m_{\tilde{\chi}_A^0} m_{\tilde{\chi}_B^0} C_0(m_{\tilde{l}_X}^2, m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) \right] \right. \nonumber \\ &+& \left. N_{iAX}^R N_{jAY}^{R \ast} \left[ 2 Q_{XY}^{\tilde{l}} C_{24}(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2, m_{\tilde{l}_Y}^2) \right] + N_{iAX}^R N_{jAX}^{R \ast} \left[ Z_L^{(l)} B_1(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2) \right] \right\} \\ F_R^{(n)} &=& \left. F_L^{(n)} \right|_{L \leftrightarrow R} \\ F_L^{(c)} &=& -\frac{1}{16 \pi^2} \left\{ C_{iBX}^R C_{jAX}^{R \ast} \left[ 2 E_{BA}^{R(c)} C_{24}(m_{\tilde{\nu}_X}^2, m_{\tilde{\chi}_A^-}^2, m_{\tilde{\chi}_B^-}^2) - E_{BA}^{L(c)} m_{\tilde{\chi}_A^-} m_{\tilde{\chi}_B^-} C_0(m_{\tilde{\nu}_X}^2, m_{\tilde{\chi}_A^-}^2, m_{\tilde{\chi}_B^-}^2) \right] \right. \nonumber \\ &+& \left. C_{iAX}^R C_{jAY}^{R \ast} \left[ 2 Q_{XY}^{\tilde{\nu}} C_{24}(m_{\tilde{\chi}_A^-}^2, m_{\tilde{\nu}_X}^2, m_{\tilde{\nu}_Y}^2) \right] + C_{iAX}^R C_{jAX}^{R \ast} \left[ Z_L^{(l)} B_1(m_{\tilde{\chi}_A^-}^2, m_{\tilde{\nu}_X}^2) \right] \right\} \\ F_R^{(c)} &=& \left. 
F_L^{(c)} \right|_{L \leftrightarrow R}\end{aligned}$$ ![$Z$-penguin diagrams contributing to the $l_j^- \to l_i^- l_i^- l_i^+$ decay[]{data-label="ZPenguin"}](Z-penguin.epsi){width="10cm"} Notice that all the loop functions are evaluated at zero external momenta, which is a very good approximation in these decays. That is, $$\begin{aligned} B(m_1^2, m_2^2) &=& B(0, m_1^2, m_2^2) \\ C(m_1^2, m_2^2, m_3^2) &=& C(0, 0, m_1^2, m_2^2, m_3^2)\end{aligned}$$ The expressions for the couplings are collected in Appendix \[apendice1\] and the loop functions [@Hollik] are given in Appendix \[apendice2\]. Notice that our result for the $Z$-penguin contributions differs significantly from the result in [@Hisano:1995cp]. In fact, these authors did not consider all the diagrams in these $Z$-penguin contributions, an omission which we believe is not justified. The box contributions --------------------- ![Box-type diagrams contributing to the $l_j^- \to l_i^- l_i^- l_i^+$ decay[]{data-label="Boxes"}](boxes.epsi){width="6cm"} The box-type diagrams are shown in fig. \[Boxes\]. We have computed these diagrams and found a result in agreement with [@Hisano:1995cp].
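As an illustration of the zero-momentum limit used above: in one common convention, $C_0 = -\int_0^\infty t\,dt/\big((t+m_1^2)(t+m_2^2)(t+m_3^2)\big)$, the scalar three-point function at vanishing external momenta has a simple closed form (sign and normalization conventions differ between references, so this need not match Appendix \[apendice2\] exactly); the sketch cross-checks the closed form against direct numerical integration:

```python
import math

def C0(a, b, c):
    """Zero-momentum scalar three-point function (one common convention),
    C0 = -Integral_0^inf t dt / ((t+a)(t+b)(t+c)); a, b, c are distinct squared masses."""
    return (a*b*math.log(a/b) + b*c*math.log(b/c) + c*a*math.log(c/a)) \
           / ((a - b)*(b - c)*(c - a))

def C0_numeric(a, b, c, n=200000):
    # midpoint rule after mapping t = u/(1-u), dt = du/(1-u)^2
    s = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        t = u / (1.0 - u)
        s += t / ((t + a)*(t + b)*(t + c)) / (1.0 - u)**2
    return -s / n
```

The closed form is fully symmetric in its three arguments, as the integral representation requires.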
The amplitude for these box contributions can be written as, $$\begin{aligned} T_{\rm boxes} &=& e^2 B_1^L \left[ \bar{u}_i(p_1) \left( \gamma^{\mu} P_L \right) u_j(p) \right] \left[ \bar{u}_i(p_2) \left( \gamma_{\mu} P_L \right) v_i(p_3) \right] \nonumber \\ &+& e^2 B_1^R \left[ \bar{u}_i(p_1) \left( \gamma^{\mu} P_R \right) u_j(p) \right] \left[ \bar{u}_i(p_2) \left( \gamma_{\mu} P_R \right) v_i(p_3) \right] \nonumber \\ &+& e^2 B_2^L \left\{ \left[ \bar{u}_i(p_1) \left( \gamma^{\mu} P_L \right) u_j(p) \right] \left[ \bar{u}_i(p_2) \left( \gamma_{\mu} P_R \right) v_i(p_3) \right] - (p_1 \leftrightarrow p_2) \right\} \nonumber \\ &+& e^2 B_2^R \left\{ \left[ \bar{u}_i(p_1) \left( \gamma^{\mu} P_R \right) u_j(p) \right] \left[ \bar{u}_i(p_2) \left( \gamma_{\mu} P_L \right) v_i(p_3) \right] - (p_1 \leftrightarrow p_2) \right\} \nonumber \\ &+& e^2 B_3^L \left\{ \left[ \bar{u}_i(p_1) P_L u_j(p) \right] \left[ \bar{u}_i(p_2) P_L v_i(p_3) \right] - (p_1 \leftrightarrow p_2) \right\} \nonumber \\ &+& e^2 B_3^R \left\{ \left[ \bar{u}_i(p_1) P_R u_j(p) \right] \left[ \bar{u}_i(p_2) P_R v_i(p_3) \right] - (p_1 \leftrightarrow p_2) \right\} \nonumber \\ &+& e^2 B_4^L \left\{ \left[ \bar{u}_i(p_1) \left( \sigma_{\mu \nu} P_L \right) u_j(p) \right] \left[ \bar{u}_i(p_2) \left( \sigma^{\mu \nu} P_L \right) v_i(p_3) \right] - (p_1 \leftrightarrow p_2) \right\} \nonumber \\ &+& e^2 B_4^R \left\{ \left[ \bar{u}_i(p_1) \left( \sigma_{\mu \nu} P_R u_j(p) \right) \right] \left[ \bar{u}_i(p_2) \left( \sigma^{\mu \nu} P_R \right) v_i(p_3) \right] - (p_1 \leftrightarrow p_2) \right\} \nonumber \\\end{aligned}$$ where $$B_a^{L,R} = B_a^{(n)L,R} + B_a^{(c)L,R} \quad a = 1, ..., 4$$ The different neutralino contributions are, $$\begin{aligned} e^2 B_1^{(n)L} &=& \frac{1}{16 \pi^2} \left[ \frac{\tilde{D}_0}{2} N_{iAY}^R N_{jAX}^{R \ast} N_{iBX}^R N_{iBY}^{R \ast} + D_0 m_{\tilde{\chi}_A^0} m_{\tilde{\chi}_B^0} N_{iBY}^R N_{iBX}^R N_{jAX}^{R \ast} N_{iAY}^{R \ast} \right] \nonumber \\ \\ 
e^2 B_2^{(n)L} &=& \frac{1}{16 \pi^2} \left[ \frac{\tilde{D}_0}{4} N_{iAY}^R N_{jAX}^{R \ast} N_{iBX}^L N_{iBY}^{L \ast} - \frac{D_0}{2} m_{\tilde{\chi}_A^0} m_{\tilde{\chi}_B^0} N_{iAY}^L N_{jAX}^{R \ast} N_{iBX}^R N_{iBY}^{L \ast} \right. \nonumber \\ &-& \left. \frac{\tilde{D}_0}{4} N_{iBY}^L N_{iBX}^R N_{jAX}^{R \ast} N_{iAY}^{L \ast} + \frac{\tilde{D}_0}{4} N_{iBY}^R N_{iBX}^L N_{jAX}^{R \ast} N_{iAY}^{L \ast} \right] \\ e^2 B_3^{(n)L} &=& \frac{1}{16 \pi^2} \left[ D_0 m_{\tilde{\chi}_A^0} m_{\tilde{\chi}_B^0} N_{iAY}^L N_{jAX}^{R \ast} N_{iBX}^L N_{iBY}^{R \ast} + \frac{D_0}{2} m_{\tilde{\chi}_A^0} m_{\tilde{\chi}_B^0} N_{iBY}^L N_{iBX}^L N_{jAX}^{R \ast} N_{iAY}^{R \ast} \right] \nonumber \\ \\ e^2 B_4^{(n)L} &=& \frac{1}{16 \pi^2} \left[ \frac{D_0}{8} m_{\tilde{\chi}_A^0} m_{\tilde{\chi}_B^0} N_{jAX}^{R \ast} N_{iAY}^{R \ast} N_{iBY}^L N_{iBX}^L \right] \\ B_a^{(n)R} &=& \left. B_a^{(n)L} \right|_{L \leftrightarrow R} \quad a = 1, ..., 4\end{aligned}$$ where $$\begin{aligned} D_0 &=& D_0(m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2, m_{\tilde{l}_X}^2, m_{\tilde{l}_Y}^2) \\ \tilde{D}_0 &=& \tilde{D}_0(m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2, m_{\tilde{l}_X}^2, m_{\tilde{l}_Y}^2)\end{aligned}$$ The chargino contributions read, $$\begin{aligned} e^2 B_1^{(c)L} &=& \frac{1}{16 \pi^2} \left[ \frac{\tilde{D}_0}{2} C_{iAY}^R C_{jAX}^{R \ast} C_{iBX}^R C_{iBY}^{R \ast} \right] \\ e^2 B_2^{(c)L} &=& \frac{1}{16 \pi^2} \left[ \frac{\tilde{D}_0}{4} C_{iAY}^R C_{jAX}^{R \ast} C_{iBX}^L C_{iBY}^{L \ast} - \frac{D_0}{2} m_{\tilde{\chi}_A^-} m_{\tilde{\chi}_B^-} C_{iAY}^L C_{jAX}^{R \ast} C_{iBX}^R C_{iBY}^{L \ast} \right] \nonumber \\ \\ e^2 B_3^{(c)L} &=& \frac{1}{16 \pi^2} \left[ D_0 m_{\tilde{\chi}_A^-} m_{\tilde{\chi}_B^-} C_{iAY}^L C_{jAX}^{R \ast} C_{iBX}^L C_{iBY}^{R \ast} \right] \\ e^2 B_4^{(c)L} &=& 0 \\ B_a^{(c)R} &=& \left. 
B_a^{(c)L} \right|_{L \leftrightarrow R} \quad a = 1, ..., 4\end{aligned}$$ where $$\begin{aligned} D_0 &=& D_0(m_{\tilde{\chi}_A^-}^2, m_{\tilde{\chi}_B^-}^2, m_{\tilde{\nu}_X}^2, m_{\tilde{\nu}_Y}^2) \\ \tilde{D}_0 &=& \tilde{D}_0(m_{\tilde{\chi}_A^-}^2, m_{\tilde{\chi}_B^-}^2, m_{\tilde{\nu}_X}^2, m_{\tilde{\nu}_Y}^2)\end{aligned}$$ The Higgs-penguin contributions ------------------------------- The diagrams where a Higgs boson is exchanged are called the Higgs-penguin diagrams. These are shown in fig. \[HPenguin\] and are computed here for the first time. They are usually not considered in the literature; in particular, in the most complete study so far, that of [@Hisano:1995cp], these Higgs-penguin diagrams were not included. However, they are expected to be relevant at large $\tan{\beta}$ [@Babu:2002et]. We will therefore include them here. Specifically, we include the contributions from the three neutral MSSM Higgs bosons, $h_0$, $H_0$ and $A_0$, and consider all SUSY loops. ![Higgs-penguin diagrams contributing to the $l_j^- \to l_i^- l_i^- l_i^+$ decay.
Here $H_p (p = 1, 2, 3) = h^0, H^0, A^0$.[]{data-label="HPenguin"}](H-penguin.epsi){width="10cm"} In this case, the amplitude can be written as, $$\begin{aligned} T_{\rm Higgs} &=& e^2 B_{2, \rm Higgs}^L \left\{ \left[ \bar{u}_i(p_1) \left( \gamma^{\mu} P_L \right) u_j(p) \right] \left[ \bar{u}_i(p_2) \left( \gamma_{\mu} P_R \right) v_i(p_3) \right] - (p_1 \leftrightarrow p_2) \right\} \nonumber \\ &+& e^2 B_{2, \rm Higgs}^R \left\{ \left[ \bar{u}_i(p_1) \left( \gamma^{\mu} P_R \right) u_j(p) \right] \left[ \bar{u}_i(p_2) \left( \gamma_{\mu} P_L \right) v_i(p_3) \right] - (p_1 \leftrightarrow p_2) \right\} \nonumber \\ &+& e^2 B_{3, \rm Higgs}^L \left\{ \left[ \bar{u}_i(p_1) P_L u_j(p) \right] \left[ \bar{u}_i(p_2) P_L v_i(p_3) \right] - (p_1 \leftrightarrow p_2) \right\} \nonumber \\ &+& e^2 B_{3, \rm Higgs}^R \left\{ \left[ \bar{u}_i(p_1) P_R u_j(p) \right] \left[ \bar{u}_i(p_2) P_R v_i(p_3) \right] - (p_1 \leftrightarrow p_2) \right\}\end{aligned}$$ where $$B_{a, \rm Higgs}^{L,R} = B_{a, \rm Higgs}^{(n)L,R} + B_{a, \rm Higgs}^{(c)L,R} \quad a = 2, 3$$ The first term represents the neutralino contribution, which we find to be $$\begin{aligned} e^2 B_{2, \rm Higgs}^{(n)L} &=& \sum_{p=1}^3 \left(-\frac{1}{2}\right) \frac{1}{m_{H_p}^2} H_{L, n}^{(p)} S_{R, i}^{(p)} \\ e^2 B_{3, \rm Higgs}^{(n)L} &=& \sum_{p=1}^3 \frac{1}{m_{H_p}^2} H_{L, n}^{(p)} S_{L, i}^{(p)} \\ B_{a, \rm Higgs}^{(n)R} &=& \left. B_{a, \rm Higgs}^{(n)L} \right|_{L \leftrightarrow R} \quad a = 2, 3\end{aligned}$$ where $H_p (p = 1, 2, 3) = h^0, H^0, A^0$ and $$\begin{aligned} H_{L, n}^{(p)} &=& -\frac{1}{16 \pi^2} \left\{ \left[ B_0(m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) + m_{\tilde{l}_X}^2 C_0(m_{\tilde{l}_X}^2, m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) + m_{l_j}^2 C_{12}(m_{\tilde{l}_X}^2, m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) \right. \right. \nonumber \\ &+& \left. 
m_{l_i}^2 (C_{11} - C_{12})(m_{\tilde{l}_X}^2, m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) \right] N_{iAX}^L D_{R, AB}^{(p)} N_{jBX}^{R \ast} \nonumber \\ &+& m_{l_i} m_{l_j} (C_{11} + C_0)(m_{\tilde{l}_X}^2, m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) N_{iAX}^R D_{L, AB}^{(p)} N_{jBX}^{L \ast} \nonumber \\ &+& m_{l_i} m_{\tilde{\chi}_B^0} (C_{11} - C_{12} + C_0)(m_{\tilde{l}_X}^2, m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) N_{iAX}^R D_{L, AB}^{(p)} N_{jBX}^{R \ast} \nonumber \\ &+& m_{l_j} m_{\tilde{\chi}_B^0} C_{12}(m_{\tilde{l}_X}^2, m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) N_{iAX}^L D_{R, AB}^{(p)} N_{jBX}^{L \ast} \nonumber \\ &+& m_{l_i} m_{\tilde{\chi}_A^0} (C_{11} - C_{12})(m_{\tilde{l}_X}^2, m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) N_{iAX}^R D_{R, AB}^{(p)} N_{jBX}^{R \ast} \nonumber \\ &+& m_{l_j} m_{\tilde{\chi}_A^0} (C_{12} + C_0)(m_{\tilde{l}_X}^2, m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) N_{iAX}^L D_{L, AB}^{(p)} N_{jBX}^{L \ast} \nonumber \\ &+& m_{\tilde{\chi}_A^0} m_{\tilde{\chi}_B^0} C_0(m_{\tilde{l}_X}^2, m_{\tilde{\chi}_A^0}^2, m_{\tilde{\chi}_B^0}^2) N_{iAX}^L D_{L, AB}^{(p)} N_{jBX}^{R \ast} \nonumber \\ &+& G_{XY}^{(p) \tilde{l}} \left[ - m_{l_i} (C_{11} - C_{12})(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2, m_{\tilde{l}_Y}^2) N_{iAX}^R N_{jAY}^{R \ast} \right. \nonumber \\ &-& \left. m_{l_j} C_{12}(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2, m_{\tilde{l}_Y}^2) N_{iAX}^L N_{jAY}^{L \ast} + m_{\tilde{\chi}_A^0} C_0(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2, m_{\tilde{l}_Y}^2) N_{iAX}^L N_{jAY}^{R \ast} \right] \nonumber \\ &+& \frac{S_{L, j}^{(p)}}{m_{l_i}^2 - m_{l_j}^2} \left[ - m_{l_i}^2 B_1(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2) N_{iAX}^L N_{jAX}^{L \ast} + m_{l_i} m_{\tilde{\chi}_A^0} B_0(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2) N_{iAX}^R N_{jAX}^{L \ast} \right. \nonumber \\ &-& \left. 
m_{l_i} m_{l_j} B_1(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2) N_{iAX}^R N_{jAX}^{R \ast} + m_{l_j} m_{\tilde{\chi}_A^0} B_0(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2) N_{iAX}^L N_{jAX}^{R \ast} \right] \nonumber \\ &+& \frac{S_{L, i}^{(p)}}{m_{l_j}^2 - m_{l_i}^2} \left[ - m_{l_j}^2 B_1(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2) N_{iAX}^R N_{jAX}^{R \ast} + m_{l_j} m_{\tilde{\chi}_A^0} B_0(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2) N_{iAX}^R N_{jAX}^{L \ast} \right. \nonumber \\ &-& \left. \left. m_{l_i} m_{l_j} B_1(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2) N_{iAX}^L N_{jAX}^{L \ast} + m_{l_i} m_{\tilde{\chi}_A^0} B_0(m_{\tilde{\chi}_A^0}^2, m_{\tilde{l}_X}^2) N_{iAX}^L N_{jAX}^{R \ast} \right] \right\} \\ H_{R, n}^{(p)} &=& \left. H_{L, n}^{(p)} \right|_{L \leftrightarrow R} \quad p = 1, 2, 3\end{aligned}$$ The values of the couplings are given again in Appendix \[apendice1\] and the loop functions in Appendix \[apendice2\]. Correspondingly, the result for the chargino contribution is given by, $$\begin{aligned} e^2 B_{2, \rm Higgs}^{(c)L} &=& \sum_{p=1}^3 \left(-\frac{1}{2}\right) \frac{1}{m_{H_p}^2} H_{L, c}^{(p)} S_{R, i}^{(p)} \\ e^2 B_{3, \rm Higgs}^{(c)L} &=& \sum_{p=1}^3 \frac{1}{m_{H_p}^2} H_{L, c}^{(p)} S_{L, i}^{(p)} \\ B_{a, \rm Higgs}^{(c)R} &=& \left. B_{a, \rm Higgs}^{(c)L} \right|_{L \leftrightarrow R} \quad a = 2, 3\end{aligned}$$ where $H_{L (R), c}^{(p)}$ can be obtained from the previous $H_{L (R), n}^{(p)}$ by replacing everywhere $$\begin{aligned} \tilde{l} &\to& \tilde{\nu} \nonumber \\ \tilde{\chi}^0 &\to& \tilde{\chi}^- \nonumber \\ N^{L(R)} &\to& C^{L(R)} \nonumber \\ D_{L(R)} &\to& W_{L(R)} \nonumber\end{aligned}$$ Again the values of the couplings and the loop functions are given in Appendices \[apendice1\] and \[apendice2\] respectively. 
$l_j^- \to l_i^- l_i^- l_i^+$ decay width ----------------------------------------- The decay width for $l_j^- \to l_i^- l_i^- l_i^+$ can be written in terms of the form factors given in the previous sections as [@Hisano:1995cp]: $$\begin{aligned} \Gamma(l_j^- \to l_i^- l_i^- l_i^+) &=& \frac{e^4}{512 \pi^3} m_{l_j}^5 \left[ \left| A_1^L \right|^2 + \left| A_1^R \right|^2 - 2 \left( A_1^L A_2^{R \ast} + A_2^L A_1^{R \ast} + h.c. \right) \right. \nonumber \\ &+& \left( \left| A_2^L \right|^2 + \left| A_2^R \right|^2 \right) \left( \frac{16}{3} \log{\frac{m_{l_j}}{m_{l_i}}} - \frac{22}{3} \right) \nonumber \\ &+& \frac{1}{6} \left( \left| B_1^L \right|^2 + \left| B_1^R \right|^2 \right) + \frac{1}{3} \left( \left| \hat{B}_2^L \right|^2 + \left| \hat{B}_2^R \right|^2 \right) \nonumber \\ &+& \frac{1}{24} \left( \left| \hat{B}_3^L \right|^2 + \left| \hat{B}_3^R \right|^2 \right) + 6 \left( \left| B_4^L \right|^2 + \left| B_4^R \right|^2 \right) \nonumber \\ &-& \frac{1}{2} \left( \hat{B}_3^L B_4^{L \ast} + \hat{B}_3^R B_4^{R \ast} + h.c. \right) \nonumber \\ &+& \frac{1}{3} \left( A_1^L B_1^{L \ast} + A_1^R B_1^{R \ast} + A_1^L \hat{B}_2^{L \ast} + A_1^R \hat{B}_2^{R \ast} + h.c. \right) \nonumber \\ &-& \frac{2}{3} \left( A_2^R B_1^{L \ast} + A_2^L B_1^{R \ast} + A_2^L \hat{B}_2^{R \ast} + A_2^R \hat{B}_2^{L \ast} + h.c. \right) \nonumber \\ &+& \frac{1}{3} \left\{ 2 \left( \left| F_{LL} \right|^2 + \left| F_{RR} \right|^2 \right) + \left| F_{LR} \right|^2 + \left| F_{RL} \right|^2 \right. \nonumber \\ &+& \left( B_1^L F_{LL}^{\ast} + B_1^R F_{RR}^{\ast} + \hat{B}_2^L F_{LR}^{\ast} + \hat{B}_2^R F_{RL}^{\ast} + h.c. \right) \nonumber \\ &+& 2 \left( A_1^L F_{LL}^{\ast} + A_1^R F_{RR}^{\ast} + h.c. \right) + \left( A_1^L F_{LR}^{\ast} + A_1^R F_{RL}^{\ast} + h.c. \right) \nonumber \\ &-& 4 \left. \left. \left( A_2^R F_{LL}^{\ast} + A_2^L F_{RR}^{\ast} + h.c. \right) - 2 \left( A_2^L F_{RL}^{\ast} + A_2^R F_{LR}^{\ast} + h.c. 
\right) \right\} \right] \nonumber \\ \label{decay}\end{aligned}$$ where $$\begin{aligned} F_{LL} &=& \frac{F_L Z_L^{(l)}}{g^2 \sin^2 \theta_W m_Z^2} \\ F_{RR} &=& \left. F_{LL} \right|_{L \leftrightarrow R} \\ F_{LR} &=& \frac{F_L Z_R^{(l)}}{g^2 \sin^2 \theta_W m_Z^2} \\ F_{RL} &=& \left. F_{LR} \right|_{L \leftrightarrow R}\end{aligned}$$ Notice that we have put the Higgs contributions together with the box ones, in order to follow closely the presentation of [@Hisano:1995cp]: $$\begin{aligned} \hat{B}_2^{L,R} &=& B_2^{L,R} + B_{2, \rm Higgs}^{L,R} \\ \hat{B}_3^{L,R} &=& B_3^{L,R} + B_{3, \rm Higgs}^{L,R} \end{aligned}$$ Notice that we have corrected the result in ref. [@Hisano:1995cp] for the term that goes with $\left( \left|A_2^L\right|^2 + \left|A_2^R\right|^2 \right)$. \[numerical\] Numerical results for the LFV branching ratios ============================================================ We present in this section the numerical results for all the branching ratios of LFV $\tau$ and $\mu$ decays in the context of the mSUGRA-seesaw scenario that has been introduced in the previous sections. We focus on the following LFV decays, $\tau^- \to \mu^- \mu^- \mu^+$, $\tau^- \to e^- e^- e^+$ and $\mu^- \to e^- e^- e^+$, and the radiative decays $\tau^-\to \mu^- \gamma$, $\tau^- \to e^- \gamma$ and $\mu^- \to e^- \gamma$. The reason to consider these radiative decays together with the decays into three leptons is that there are interesting correlations among them that provide additional information for testing SUSY. Specifically, we show in this section the correlations between the rates of $\tau^- \to \mu^- \mu^- \mu^+$ and $\tau^- \to \mu^- \gamma$; between $\tau^- \to e^- e^- e^+$ and $\tau^- \to e^- \gamma$; and between $\mu^- \to e^- e^- e^+$ and $\mu^- \to e^- \gamma$.
For the numerical estimates of the radiative decays we use the formula of [@Hisano:1995cp], which is given in terms of the $A_2^{L,R}$ as, $$\begin{aligned} \Gamma(l_j^- \to l_i^- \gamma) &=& \frac{e^2}{16 \pi} m^5_{l_j}(|A_2^L|^2+|A_2^R|^2) \end{aligned}$$ but we use our expressions for the form factors in eqs. (\[A2Lneut\]), (\[ARneut\]), (\[A2Lchar\]) and (\[ARchar\]), which include the lepton mass contributions. We explore here in full detail the size of the SUSY contributions to all these LFV $l_j \to 3l_i$ and $l_j \to l_i \gamma$ decays as a function of all the mSUGRA parameters, $M_0$, $M_{1/2}$, $A_0$, $\tan\beta$ and sign($\mu$), and the seesaw parameters $m_{N_i}$, $i=1,2,3$, and $R$ or, equivalently, $\theta_1$, $\theta_2$ and $\theta_3$. Throughout this numerical analysis we require compatibility with the neutrino data and with the present upper experimental bounds for all these branching ratios [@Aubert:2003pc; @Bellgardt:1987du; @Aubert:2005wa; @Aubert:2005ye; @mue], as given explicitly in the introduction. We also demand that the complete set of SUSY particle masses, which we derive with the SPheno program, lie above the present experimental lower bounds [@pdg2004]. The numerical values of the total $\tau$ and $\mu$ widths (lifetimes) are taken from [@pdg2004]. We first show the results for scenario A, with quasi-degenerate light and degenerate heavy neutrinos, and then for the most interesting scenario B, with hierarchical light and hierarchical heavy neutrinos. Degenerate case --------------- We show in figs. \[fig:1a\] through \[fig:1ef\] the numerical results of the branching ratios for the LFV $\tau$ and $\mu$ decays in scenario A with degenerate heavy neutrinos of mass $m_N$.
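For orientation, the radiative-width formula quoted above is straightforward to evaluate once the form factors are known; the sketch below uses an arbitrary placeholder value for $A_2^L$ (the actual form factors come from the expressions of the previous section) and converts the width into a branching ratio with the measured $\tau$ lifetime:

```python
import math

alpha_em = 1.0 / 137.036
e2 = 4.0 * math.pi * alpha_em       # e^2 in natural units
m_tau = 1.77686                     # GeV
tau_lifetime = 290.3e-15            # s, measured tau lifetime (PDG)
hbar = 6.582e-25                    # GeV s

A2L, A2R = 1e-9, 0.0                # GeV^-2; placeholder form-factor values

Gamma = e2 / (16.0 * math.pi) * m_tau**5 * (abs(A2L)**2 + abs(A2R)**2)  # GeV
BR = Gamma * tau_lifetime / hbar    # tau -> mu gamma branching ratio for this input
```

Since the width scales as $|A_2|^2$, doubling the form factor multiplies the branching ratio by four.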
We show our predictions for the three channels, $\tau^- \to \mu^- \mu^- \mu^+$, $\tau^- \to e^- e^- e^+$ and $\mu^- \to e^- e^- e^+$, and similarly, for the comparison with the leptonic radiative decays, $l_j\to l_i \gamma$, we also show in the plots the correlated decays, $\tau^- \to \mu^- \gamma$, $\tau^- \to e^- \gamma$ and $\mu^- \to e^- \gamma$, respectively. \ The results of the branching ratios for the $\tau^-\to \mu^- \mu^- \mu^+$ and $\tau^- \to \mu^- \gamma$ decays as a function of $\tan{\beta}$ are illustrated in fig. \[fig:1a\]. In these plots we set $m_N = 10^{14}$ GeV and assume the matrix $R$ to be real. Notice that in the degenerate case with real $R$ these LFV ratios do not depend on the particular choice for $R$. This can be easily understood because the dependence on $R$ drops out of the relevant factor, $(Y_\nu^*Y_\nu^T)_{ij}$, appearing in the dominant $\delta^{ij}_{LL}$ slepton mixing, due to the property $R^TR=1$. From this figure we also see that the predicted rates for both channels are well below their respective experimental upper bounds for all $\tan \beta$ values, even though the total rates grow fast with $\tan \beta$. We also see clearly the mentioned correlation between the $\tau^-\to \mu^- \mu^- \mu^+$ and $\tau^- \to \mu^- \gamma$ rates. In fact, this correlation is an immediate consequence of the dominance of the $\gamma$-penguin contributions, which clearly governs the size of the $\tau^-\to \mu^- \mu^- \mu^+$ rates. This dominance is illustrated in fig. \[fig:1a\](a), where the various contributions are shown separately. In fact, the contributions from the $\gamma$-penguin diagrams are almost indistinguishable from the total rates for all $\tan \beta$ values. For low $\tan \beta$ values the next dominant contribution comes from the $Z$-penguin diagrams, but it is still more than one order of magnitude smaller than the $\gamma$-penguin contribution. The contributions from the box diagrams are even smaller.
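The claimed $R$-independence for degenerate heavy neutrinos and real $R$ can be verified directly in a Casas-Ibarra-type parameterization, $Y_\nu = \frac{1}{v_2}\sqrt{m_N}\,R\,\sqrt{m_\nu}\,U^\dagger$ (one common convention; the overall factor of $i$ drops out of $Y_\nu^\dagger Y_\nu$ and everything below is real). The sketch uses the scenario-A light masses and an illustrative value of $v_2$:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def diag(v):
    return [[v[i] if i == j else 0.0 for j in range(3)] for i in range(3)]

# Real PMNS matrix with theta12 = 30 deg, theta23 = 45 deg, theta13 = 0, as in the text
th12, th23 = math.radians(30.0), math.radians(45.0)
c12, s12 = math.cos(th12), math.sin(th12)
c23, s23 = math.cos(th23), math.sin(th23)
U = [[c12, s12, 0.0],
     [-s12 * c23, c12 * c23, s23],
     [s12 * s23, -c12 * s23, c23]]

mnu = [0.2e-9, 0.20016e-9, 0.20625e-9]  # scenario-A light masses in GeV
mN = 1e14                               # degenerate heavy mass in GeV
v2 = 174.0                              # GeV; illustrative value of the relevant vev

def R12(theta):  # real orthogonal R: rotation in the 1-2 plane
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def ydagy(R):
    # Y^dagger Y = (1/v2^2) U sqrt(m) R^T M R sqrt(m) U^dagger (all real here);
    # for M = mN * identity and orthogonal R, the R dependence cancels.
    sm = diag([math.sqrt(m) for m in mnu])
    M = diag([mN] * 3)
    inner = matmul(matmul(sm, transpose(R)), matmul(matmul(M, R), sm))
    out = matmul(matmul(U, inner), transpose(U))
    return [[out[i][j] / v2**2 for j in range(3)] for i in range(3)]

A = ydagy(R12(0.3))
B = ydagy(R12(1.1))   # same result for a different real R, as the text argues
```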
We also learn that the $Z$ and box contributions do not depend significantly on $\tan{\beta}$, while the photon contribution goes approximately as $(\tan{\beta})^2$ at large $\tan{\beta}$. In this large $\tan{\beta}$ region it is interesting to note that the total Higgs contribution becomes larger than the $Z$ contribution and the boxes, due to the fact that it grows approximately as $(\tan{\beta})^6$. In this total Higgs contribution the dominant penguins are those with $H_0$ and $A_0$ exchanged, which are several orders of magnitude larger than the $h_0$-penguin contribution. However, in spite of this huge enhancement of the total Higgs contribution occurring at large $\tan{\beta}$, its relative size as compared to the photon-penguin contribution is still negligible. For instance, for the values set in this figure of $M_0 = 400$ GeV, $M_{1/2} = 300$ GeV, $A_0 = 0$, $\mbox{sign} \mu > 0$ and $m_{N} = 10^{14}$ GeV, the Higgs contribution is still four orders of magnitude smaller than the photon-penguin contribution at $\tan{\beta}=50$. This set of values gives rise to the following MSSM spectrum (we just specify here the relevant sectors),

  ----------------------------- ---------------------------------- --------------------------------
  $m_{\tilde{l}_1} = 247$ GeV   $m_{\tilde{\chi}_1^0} = 121$ GeV   $m_{h^0} = 114$ GeV
  $m_{\tilde{l}_2} = 397$ GeV   $m_{\tilde{\chi}_2^0} = 232$ GeV   $m_{H^0} = 457$ GeV
  $m_{\tilde{l}_3} = 413$ GeV   $m_{\tilde{\chi}_3^0} = 484$ GeV   $m_{A^0} = 457$ GeV
  $m_{\tilde{l}_4} = 416$ GeV   $m_{\tilde{\chi}_4^0} = 493$ GeV   $m_{\tilde{\nu}_1} = 351$ GeV
  $m_{\tilde{l}_5} = 417$ GeV   $m_{\tilde{\chi}_1^-} = 232$ GeV   $m_{\tilde{\nu}_2} = 409$ GeV
  $m_{\tilde{l}_6} = 419$ GeV   $m_{\tilde{\chi}_2^-} = 495$ GeV   $m_{\tilde{\nu}_3} = 410$ GeV.
  ----------------------------- ---------------------------------- --------------------------------

We have checked that other choices of parameters, especially lower $M_0$ and $M_{1/2}$, lead to larger contributions from the Higgs penguins, since one gets lighter SUSY spectra and, more importantly, lighter $H_0$ and $A_0$ bosons. However, the present experimental lower bounds on the MSSM particle masses do not allow us to decrease these $M_0$ and $M_{1/2}$ values much, so that in this mSUGRA context the relevant $m_{H_0}$ and $m_{A_0}$ masses can never become low enough for their corresponding Higgs-penguin contributions to be competitive with the $\gamma$-penguin ones. From this figure we conclude then that the leading $\gamma$-penguin approximation works extremely well for all $\tan{\beta}$ values. In this approximation one gets, $$\begin{aligned} \frac{BR(l_j \to 3 l_i)}{BR(l_j \to l_i \gamma)} &=& \frac{\alpha}{3\pi}\left(\log\frac{m_{l_j}^2}{m_{l_i}^2}-\frac{11}{4}\right) \end{aligned}$$ which leads to the approximate values of $\frac{1}{440}$, $\frac{1}{94}$ and $\frac{1}{162}$ for $(l_jl_i)= (\tau \mu), (\tau e)$ and $(\mu e)$, respectively. As will be seen later, it also works extremely well in the other channels. These nearly constant values of the ratios of branching ratios will show up throughout this work. Obviously, if these ratios could be measured they could provide interesting information. In fig. \[fig:1a\](c) we have included our predictions for $|\delta_{LL}^{23}|, |\delta_{LR}^{23}|$ and $|\delta_{RR}^{23}|$, as defined in eqs.(\[deltaLL\]), (\[deltaLR\]) and (\[deltaRR\]) respectively, as a function of $\tan{\beta}$. These are the flavor changing parameters that are the relevant ones for the $\tau$ decays having a $\mu$ in the final state.
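The three quoted values can be reproduced directly from the formula above; the sketch below uses $\alpha \approx 1/137$ and PDG-like lepton masses as illustrative inputs.

```python
import math

ALPHA_EM = 1.0 / 137.036
# Lepton masses in GeV (PDG-like values, used here only as illustrative inputs)
M_E, M_MU, M_TAU = 0.000511, 0.10566, 1.77686

def ratio_3l_over_gamma(m_j, m_i):
    """BR(l_j -> 3 l_i) / BR(l_j -> l_i gamma) in the gamma-penguin approximation."""
    return ALPHA_EM / (3.0 * math.pi) * (math.log(m_j**2 / m_i**2) - 11.0 / 4.0)

for label, (m_j, m_i) in [("tau mu", (M_TAU, M_MU)),
                          ("tau e ", (M_TAU, M_E)),
                          ("mu  e ", (M_MU, M_E))]:
    r = ratio_3l_over_gamma(m_j, m_i)
    print(f"({label}): ratio = 1/{1.0 / r:.0f}")
```

This yields roughly $1/446$, $1/95$ and $1/163$, in good agreement with the values quoted above; the residual percent-level differences simply reflect the choice of $\alpha$ and of the lepton masses.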
It is also interesting to compare them with the predictions in the leading logarithmic approximation, where the generated mixing in the off-diagonal terms $(i\neq j, i,j=1,2,3)$, through the running from $M_X$ down to $m_M$, is given by $$\begin{aligned} (\Delta m_{\tilde{L}}^2)_{ij}&=&-\frac{1}{8 \pi^2} (3 M_0^2+ A_0^2) (Y_{\nu}^* L Y_{\nu}^T)_{ij} \nonumber \\ (\Delta A_l)_{ij}&=&- \frac{3}{16 \pi^2} A_0 Y_{l_i} (Y_{\nu}^* L Y_{\nu}^T)_{ij}\nonumber \\ (\Delta m_{\tilde{E}}^2)_{ij}&=&0\,\,;\, L_{kl} \equiv \log \left( \frac{M_X}{m_{M_k}}\right) \delta_{kl}. \label{misalignment_sleptons}\end{aligned}$$ and, in consequence, it predicts the hierarchy, $ |\delta_{LL}^{23}|>|\delta_{LR}^{23}|>|\delta_{RR}^{23}| $. As expected from the leading-log approximation, we see in fig. \[fig:1a\](c) that $|\delta_{LL}^{23}|$ is much larger than $|\delta_{LR}^{23}|$ and $|\delta_{RR}^{23}|$. However, we get $|\delta_{RR}^{23}|$ larger than $|\delta_{LR}^{23}|$, and it can indeed be two orders of magnitude larger than $|\delta_{LR}^{23}|$ at large $\tan{\beta}$. It is clear that, at least for our choice here of $A_0=0$, the leading-log approximation does not fully work. We also learn from this figure that the size of the mixing is always small in the degenerate case, with the largest, $|\delta_{LL}^{23}|$, being about $3 \times 10^{-3}$. We next comment on the relevance of the choice for the $m_N$ values. \ In fig. \[fig:1b\] we have illustrated the $\tau^- \to \mu^- \mu^- \mu^+$ and $\tau \to \mu \gamma$ branching ratios as a function of $m_N$ for degenerate heavy neutrinos and $\tan{\beta} = 50$. The explored range in $m_N$ is from $10^8 $ GeV up to $10^{14} $ GeV, which is favorable for baryogenesis. Both rates have the same behaviour with $m_N$, which corresponds approximately to $BR(\tau \to \mu^- \mu^- \mu^+)$, $BR(\tau \to \mu \gamma) \propto |m_N\log(m_N)|^2 $.
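A quick numerical illustration of this approximate scaling shows how strongly the rates vary over the scanned range; taking $\log(m_N)$ as the natural logarithm of $m_N$ in GeV is an assumption about the convention, but the orders-of-magnitude conclusion is insensitive to it.

```python
import math

def br_scaling(m_n_gev):
    """Approximate scaling BR ~ |m_N log(m_N)|^2 (overall normalization dropped)."""
    return (m_n_gev * math.log(m_n_gev)) ** 2

# Ratio of rates between the endpoints of the scanned range, 1e8 and 1e14 GeV:
enhancement = br_scaling(1e14) / br_scaling(1e8)
print(f"BR(m_N = 1e14) / BR(m_N = 1e8) ~ {enhancement:.1e}")
```

The rates thus vary by roughly twelve orders of magnitude between the two endpoints of the scan; the same scaling gives about four orders of magnitude between $10^{14}$ GeV and $10^{12}$ GeV, consistent with the gap found later for the heaviest neutrino mass in the hierarchical case.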
As before, these two predicted branching ratios are well below their experimental upper bounds, even at the largest $m_N$ value of $10^{14}$ GeV. In the last plot of fig. \[fig:1b\] we include the dependence of $|\delta_{LL}^{23}|, |\delta_{LR}^{23}|, |\delta_{RR}^{23}|$ on $m_N$, which clearly shows a correlated behaviour with the previous plots. Again, $|\delta_{LL}^{23}|$ is the dominant one, reaching values up to about $3 \times 10^{-3}$, and $\delta_{RR}^{23}$ is larger than $\delta_{LR}^{23}$. \ \ For completeness, we also include the results of the other four LFV $\tau$ and $\mu$ decays in fig. \[fig:1cd\], where the predictions are shown as a function of $\tan{\beta}$. These behaviours are very similar to those of the $BR(\tau^- \to \mu^- \mu^- \mu^+)$ and $BR(\tau \to \mu \gamma)$ decays, respectively. The main difference is in the lower plots, where now $|\delta^{12(13)}_{LR}|$ is larger than $|\delta^{12(13)}_{RR}|$. The maximum values reached are very small in this case, $|\delta^{12(13)}_{LL}| \sim 5 \times 10^{-5}$. We see again that the leading $\gamma$-penguin approximation works extremely well for these channels, and the previously mentioned values of the ratios of branching ratios provide a very good approximation. We also find that the rates for all these four decays are well below their corresponding experimental bounds in the degenerate case, for all the explored values of $\tan \beta$ and $m_N$. \ To conclude the study of the degenerate case, we have also explored the dependence of the largest ratios, $BR(\tau^- \to \mu^- \mu^- \mu^+)$ and $BR(\tau^- \to \mu^- \gamma)$, on the mSUGRA parameters $M_0$ and $M_{1/2}$. These results are shown in fig. \[fig:1ef\]. We see clearly a similar behaviour in the two channels, and their rates decrease as expected when increasing the soft SUSY breaking mass parameters.
This implies that for large enough values of $M_0$ or $M_{1/2}$ the branching ratios are considerably suppressed, due to the decoupling of the heavy SUSY particles in the dominant loops, which are common to both observables. Thus, looking at these plots we can obviously conclude that the lighter the SUSY spectrum is, the larger the branching ratios we get. However, as already said, the more interesting region of low $M_0$ and/or $M_{1/2}$ values, close to $100$ GeV, is not allowed by the present experimental lower bounds on the MSSM particle masses. In summary, in the case of degenerate heavy neutrinos, we get LFV $\tau$ and $\mu$ decay rates which are still below their present experimental upper bounds, for all the explored values of the seesaw and mSUGRA parameters, which have been required to provide a full MSSM spectrum with masses compatible with the present experimental bounds. Hierarchical case ----------------- We next present the results for hierarchical neutrinos, scenario B, which are much more promising. In this case the choice for $R$ is very relevant. The results for the general complex $R$ case and for the particular mass hierarchy $(m_{N_1},m_{N_2},m_{N_3})=(10^8, 2 \times 10^8, 10^{14})$ GeV are shown in figs. \[fig:2a\] through \[fig:2f\]. This particular choice for the heavy neutrino masses seems to generate a proper rate for baryogenesis via leptogenesis in the hierarchical case [@Chankowski:2004jc]. We will later explore other choices as well. From these figures we first confirm that the LFV $\tau$ and $\mu$ decay rates are much larger in the hierarchical case than in the degenerate one. This is true even for the case of real $R$, which corresponds in our plots to the predictions at $\arg(\theta_1)=\arg(\theta_2)= \arg(\theta_3)=0$. Furthermore, we get severe restrictions on the maximum allowed decay rates coming from the experimental upper bounds.
\ The predictions for $BR(\tau^- \to \mu^- \mu^- \mu^+)$ and $BR(\tau \to \mu \gamma)$ as a function of $\vert \theta_2 \vert$ are depicted in fig. \[fig:2a\]. Here $\theta_1$ and $\theta_3$ are set to zero, and $\arg(\theta_2) = \pi/4$. From now on the arguments of $\theta_1$, $\theta_2$ and $\theta_3$ are written in radians. The other parameters are set to $\tan \beta = 50$, $M_0=400 $ GeV, $M_{1/2}=300 $ GeV, $A_0 = 0$ and $\mbox{sign}(\mu) > 0$. In fig. \[fig:2a\](a) we show separately the various contributions to $BR(\tau^- \to \mu^- \mu^- \mu^+)$. The dominant one is again the photon-penguin contribution (which is indistinguishable from the total in this figure) and the others are several orders of magnitude smaller. We also see that the relative sizes of the subdominant contributions have changed with respect to the previously studied degenerate case. Now the Higgs contribution is larger than the box one, which in turn is larger than the $Z$ one. This is so because the largest value, $\tan{\beta} = 50$, has been set. All the rates for $\tau^- \to \mu^- \mu^- \mu^+$ in this plot are within the range allowed by the experimental bound, which is placed just at the upper line of the rectangle. In contrast, one can see in fig. \[fig:2a\](b) that, for the chosen mSUGRA and seesaw parameters, the predicted $BR(\tau \to \mu \gamma)$ is clearly above the experimental bound. The dependence of $|\delta^{23}_{LL, LR, RR}|$ on $\vert \theta_2 \vert$ is shown in fig. \[fig:2a\](c). We see that $|\delta^{23}_{LL}|$ can reach very large values, up to 0.4, for $\vert \theta_2 \vert = 3$ and $\arg(\theta_2) = \pi/4$. We have checked that this particular choice of $\theta_2 = 3 e^{i \pi/4}$ gives rise to large neutrino Yukawa matrix elements $|Y_{\nu}^{33}|$ and $|Y_{\nu}^{23}|$ of the order of 1, which are responsible for this large mixing in the slepton sector. It is also interesting to compare the MSSM spectrum for this hierarchical case with the previous degenerate case.
For the input values of fig. \[fig:2a\], but with $\theta_2$ set to the extreme value $\theta_2 = 2.8e^{i\frac{\pi}{4}}$, we get the following masses,

  ----------------------------- ---------------------------------- -------------------------------
  $m_{\tilde{l}_1} = 230$ GeV   $m_{\tilde{\chi}_1^0} = 122$ GeV   $m_{h^0} = 114$ GeV
  $m_{\tilde{l}_2} = 356$ GeV   $m_{\tilde{\chi}_2^0} = 232$ GeV   $m_{H^0} = 455$ GeV
  $m_{\tilde{l}_3} = 413$ GeV   $m_{\tilde{\chi}_3^0} = 481$ GeV   $m_{A^0} = 455$ GeV
  $m_{\tilde{l}_4} = 417$ GeV   $m_{\tilde{\chi}_4^0} = 490$ GeV   $m_{\tilde{\nu}_1} = 296$ GeV
  $m_{\tilde{l}_5} = 436$ GeV   $m_{\tilde{\chi}_1^-} = 232$ GeV   $m_{\tilde{\nu}_2} = 422$ GeV
  $m_{\tilde{l}_6} = 448$ GeV   $m_{\tilde{\chi}_2^-} = 492$ GeV   $m_{\tilde{\nu}_3} = 441$ GeV
  ----------------------------- ---------------------------------- -------------------------------

It is obvious that the complex $R$ affects significantly the predictions of the MSSM masses, especially in the slepton sector. In general, the slepton mixing generated by the complex $\theta_i$ lowers the lightest charged slepton and sneutrino masses and increases the heaviest charged slepton and sneutrino masses. \ \ \ \ In fig. \[fig:2b\] we show the predictions of $BR(l_j^- \to l_i^- l_i^- l_i^+)$ and $BR(l_j \to l_i \gamma)$ as functions of $\vert \theta_2 \vert$, for all the channels and for the different values of $\arg(\theta_2)=0, \pi/10, \pi/8, \pi/6, \pi/4$. In all these plots we set again $\tan{\beta} = 50$, $M_0=400 $ GeV, $M_{1/2}=300 $ GeV, $A_0 = 0$, $\mbox{sign}(\mu) > 0$ and $(m_{N_1},m_{N_2},m_{N_3})=(10^8, 2 \times 10^8, 10^{14})$ GeV. The upper lines correspond to $\arg(\theta_2) = \pi/4$ and the lower ones to $\arg(\theta_2) = 0$. These lower lines are therefore the corresponding predictions for real $R$. It is clear that all the branching ratios have a soft behaviour with $\vert \theta_2 \vert$, except for the case of real $\theta_2$ where a narrow dip appears in each plot. In this fig.
\[fig:2b\] we see that all the rates obtained are below their experimental upper bounds, except for the processes $\tau \to \mu \gamma$ and $\mu \to e \gamma$, where the predicted rates for complex $\theta_2$ with large $\vert \theta_2 \vert$ are clearly above the allowed region. The most restrictive channel in this case is $\tau \to \mu \gamma$, where compatibility with data occurs just for real $\theta_2$, and for complex $\theta_2$ but with $\vert \theta_2 \vert$ values near the region of the narrow dip. We also see that the rates for $BR(\mu \to 3 e)$ come into conflict with experiment at the upper corner of large $\vert \theta_2 \vert$ and large $\arg(\theta_2) = \pi/4$. \ \ \ \ Even more interesting are the predictions for $BR(l_j^- \to l_i^- l_i^- l_i^+)$ and $BR(l_j \to l_i \gamma)$ as functions of $\vert \theta_1 \vert$, due to the large values of the relevant entries of the $Y_{\nu}$ coupling matrix, which are illustrated in fig. \[fig:Ynu\]. Specifically, $|Y_{\nu}^{13}|$ can be as large as $\sim 0.2$ for $|\theta_1| \sim 2.5$ and $\arg{(\theta_1)} = \pi/4$, and $|Y_{\nu}^{23}|$ and $|Y_{\nu}^{33}|$ are in the range $0.1 - 1$ for all studied complex $\theta_1$ values. The results for $BR(l_j^- \to l_i^- l_i^- l_i^+)$ and $BR(l_j \to l_i \gamma)$ as functions of $\vert \theta_1 \vert$, for different values of $\arg{(\theta_1)}$, are illustrated in fig. \[fig:2c\]. Here $\theta_2$ and $\theta_3$ are set to zero. The same set of mSUGRA parameters and heavy neutrino masses as in fig. \[fig:2b\] is taken for comparison. We see clearly that the restrictions are more severe in this case than in the previous one. In fact, all the rates cross the horizontal lines of the experimental bounds except for $BR(\tau^- \to \mu^- \mu^- \mu^+)$ and $BR(\tau^- \to e^- e^- e^+)$. The most restrictive channel is now the $\mu \to e \gamma$ decay.
More specifically, we see that all the points in the plot of $BR(\mu \to e \gamma)$, except for the particular values $\theta_1= 0$ and real $\theta_1$ at the dip, are excluded by the experimental upper bound. Also the predictions for $BR(\mu \to 3e)$ are mostly excluded, except again for the region close to zero and the dip. Notice that the qualitative behaviour of all these branching ratios with $|\theta_1|$ in fig. \[fig:2c\] and the locations of the dips can be explained from the Yukawa coupling matrix behaviour in fig. \[fig:Ynu\]. The scenario most seriously in conflict with experiment is shown in fig. \[fig:2d\], where the predictions for $BR(l_j^- \to l_i^- l_i^- l_i^+)$ and $BR(l_j \to l_i \gamma)$ are again plotted as a function of $\vert \theta_1 \vert$ and for the same choices of $\arg(\theta_1)$ as in the previous case, but now the mSUGRA mass parameters are set to the lower values, $M_0=250 $ GeV and $M_{1/2}=150 $ GeV. These lead to a lighter MSSM spectrum and, in consequence, to higher rates.
For comparison with the previous cases, we include below the predicted masses of the relevant MSSM particles, for the particular value $\theta_1=2.8e^{i \frac{\pi}{4}}$,

  ----------------------------- ---------------------------------- -------------------------------
  $m_{\tilde{l}_1} = 94$ GeV    $m_{\tilde{\chi}_1^0} = 58$ GeV    $m_{h^0} = 108$ GeV
  $m_{\tilde{l}_2} = 218$ GeV   $m_{\tilde{\chi}_2^0} = 107$ GeV   $m_{H^0} = 269$ GeV
  $m_{\tilde{l}_3} = 259$ GeV   $m_{\tilde{\chi}_3^0} = 284$ GeV   $m_{A^0} = 269$ GeV
  $m_{\tilde{l}_4} = 259$ GeV   $m_{\tilde{\chi}_4^0} = 296$ GeV   $m_{\tilde{\nu}_1} = 143$ GeV
  $m_{\tilde{l}_5} = 273$ GeV   $m_{\tilde{\chi}_1^-} = 107$ GeV   $m_{\tilde{\nu}_2} = 247$ GeV
  $m_{\tilde{l}_6} = 273$ GeV   $m_{\tilde{\chi}_2^-} = 300$ GeV   $m_{\tilde{\nu}_3} = 261$ GeV
  ----------------------------- ---------------------------------- -------------------------------

Notice that the lightest slepton, neutralino, chargino and Higgs boson have masses close to their experimental lower bounds. We conclude from this fig. \[fig:2d\] that the predictions for $BR(\mu \to e \gamma)$ and $BR(\mu \to 3 e)$ are totally excluded by present data, and the predictions for $BR(\tau \to \mu \gamma)$ are practically excluded, with the exception of the two narrow dips. The predictions for $BR(\tau \to e \gamma)$ get severe restrictions for complex $\theta_1$ with large $\vert \theta_1 \vert$ and/or large $\arg(\theta_1)$, and the rates for $BR(\tau \to 3 \mu)$ start being sensitive to the present experimental bounds for large complex $\theta_1$ values in the upper corner of the plot. We have also explored the dependence on the complex $\theta_3$ angle, and it turns out that the predictions for all rates are nearly constant with this angle.
For instance, for $\tan \beta = 50$, $M_0=400 $ GeV, $M_{1/2}=300 $ GeV, $A_0 = 0$ and $\mbox{sign}(\mu) > 0$, we get $BR(\tau \to 3 \mu) =2.6 \times 10^{-10}$, $BR(\tau \to 3 e) =8.8 \times 10^{-15}$, $BR(\mu \to 3 e) =1.8 \times 10^{-14}$, $BR(\tau \to \mu \gamma)= 9.1 \times 10^{-8}$, $BR(\tau \to e \gamma)= 7.8 \times 10^{-13}$ and $BR(\mu \to e \gamma)= 2.6 \times 10^{-12}$. In this case only the prediction for $BR(\tau \to \mu \gamma)$ is in conflict with experiment. \ \ The dependence of $BR(\tau^- \to \mu^- \mu^- \mu^+)$ and $BR(\tau \to \mu \gamma)$ on the mSUGRA parameters $M_0$ and $M_{1/2}$ is illustrated in fig. \[fig:2f\]. We see a similar behaviour as in the degenerate case, where a suppression of the branching ratios occurs for large values of $M_0$ and/or $M_{1/2}$. Whereas the ratios for $BR(\tau \to 3 \mu)$ enter into the region allowed by the experimental bound for large enough $M_0$ and/or $M_{1/2}$, the ratios for $BR(\tau \to \mu \gamma)$ are well above their bound for all $M_0$ and $M_{1/2}$ values explored. The main point is again the particular choice of $\theta_2$, with large $|\theta_2|$ and large $\arg(\theta_2)$, which generates large rates. With the purpose of exploring other choices of the mSUGRA parameters, we have also generated results for the specific value $A_0=-100$ GeV and found predictions very close to the $A_0 = 0$ case, the lines in the plots being nearly indistinguishable from this case. We have also run the alternative case of $\mbox{sign}(\mu)< 0$, and found again predictions very close to the $\mbox{sign}(\mu)> 0$ case, with the lines in the plots being indistinguishable from this case. Finally, we have also tried other input values for the heavy neutrino masses. The results for $BR(\tau \to 3 \mu)$ are shown in fig. \[fig:2g\].
Here we compare the predictions for the following three sets of values, $(m_{N_1},m_{N_2},m_{N_3})=(10^8, 2 \times 10^8, 10^{14})$ GeV, $(10^{10}, 2 \times 10^{10}, 10^{14})$ GeV and $(10^8, 2 \times 10^8, 10^{12})$ GeV. We conclude that the relevant mass is the heaviest one, $m_{N_3}$, and the scaling with this mass is approximately the same as the scaling with the common mass $m_N$ in the degenerate case. Because of this, the rates for the first two sets are nearly indistinguishable, and the rates for the third set are about four orders of magnitude below. Last but not least, we consider the very interesting case where the angle $\theta_{13}$ of $U_{MNS}$ is non-vanishing. It is known that the present neutrino data still allow for small values of this angle, $\theta_{13}<10^\circ$. The dependence of $BR(\mu^- \to e^- e^- e^+)$ and $BR(\mu \to e \gamma)$ on this $\theta_{13}$ is shown in fig. \[fig:2e\], where we explore values in the $0 < \theta_{13} < 10^\circ$ range. We choose these two channels because they are the most sensitive ones to this angle. For this study we assume the most conservative choice of $R = 1$, and set the other parameters to the following values: $\tan \beta = 50$, $A_0 = 0$, $\mbox{sign}(\mu) > 0$, and $(m_{N_1},m_{N_2},m_{N_3})=(10^8, 2 \times 10^8, 10^{14})$ GeV. The upper lines are for $M_0=250 $ GeV, $M_{1/2}=150 $ GeV and the lower ones for $M_0=400 $ GeV, $M_{1/2}=300 $ GeV. We conclude that, for this choice of parameters, values of $\theta_{13}$ larger than 1 degree are totally excluded by the data on LFV $\mu$ decays. It is a quite striking result. In summary, we obtain in the hierarchical case much larger rates than in the degenerate one, and one must pay attention to these values, because the rates in several channels do get in conflict with the experimental bounds.
More specifically, the choice of a complex $R$ matrix with large moduli and/or large arguments of $\theta_1$ and/or $\theta_2$ and a light SUSY spectrum is very constrained by data. We also confirm that the experimental upper bounds on the processes $l_j \to l_i \gamma$ are more restrictive than the $l_j^- \to l_i^- l_i^- l_i^+$ ones, but all together they will allow us to extract large excluded regions of the mSUGRA and seesaw parameter space. A more precise conclusion on the excluded regions of this parameter space deserves a dedicated study. \[conclu\] Conclusions ====================== We have shown in this paper that the LFV $\tau$ and $\mu$ decays do provide a very efficient tool to look for indirect SUSY signals. Whereas the predicted rates for these processes are negligible within the SM, the SUSY scenario considered here provides in contrast significant rates, which are within the present experimental reach for some of the studied channels. This scenario consists of the well known mSUGRA extended with three right handed neutrinos and their SUSY partners, with the needed neutrino masses being generated via the seesaw mechanism. The reason for these significant rates is the important lepton flavor mixing that is generated in the slepton sector by the large neutrino Yukawa couplings, which is transmitted via the RGE running from the large energies down to the electroweak scale. With the motivation in mind of testing SUSY, we have studied exhaustively in this work the particular decays $\tau \to 3\mu$, $\tau \to 3e$ and $\mu \to 3e$, and the correlated radiative decays $\tau \to \mu \gamma$, $\tau \to e \gamma$ and $\mu \to e \gamma$. All of these channels have quite challenging experimental bounds, which are expected to improve in the future. We have explored the dependence of the branching ratios for these LFV processes on the various parameters involved, namely, the mSUGRA and seesaw parameters.
We have computed and analyzed in full detail all the contributions from the SUSY loops to the $l_j^- \to l_i^- l_i^- l_i^+$ decays. Our analytical results for these decays correct and complete previous results in the literature. In particular, we have presented the results for the separate contributions from the $\gamma$-penguin, the $Z$-penguin, the Higgs-penguin and the box diagrams, and shown explicitly the $\gamma$-penguin dominance. In the numerical estimates we have presented results for both the $l_j^- \to l_i^- l_i^- l_i^+$ and the correlated radiative decays $l_j \to l_i \gamma$. For the degenerate heavy neutrino case, we have obtained rates for all the studied LFV $\tau$ and $\mu$ decays that are below the present experimental upper bounds. The largest rates we get, within the explored range of the seesaw and mSUGRA parameter space, are for the $\tau$ decays. Specifically, $BR(\tau \to \mu \gamma) \sim 10^{-8}$ and $BR(\tau^- \to \mu^- \mu^- \mu^+) \sim 3 \times 10^{-11}$, corresponding to the extreme values of $\tan{\beta} = 50$ and $m_{N} = 10^{14}$ GeV and to the lowest values of $M_0$ and $M_{1/2}$ explored. The case of hierarchical heavy neutrinos turns out to be much more interesting. First of all, we get much larger branching ratios than in the previous case and, secondly, they are in many cases above the present experimental bounds. We have analyzed in detail the behaviour of the branching ratios with the mSUGRA and seesaw parameters also in the hierarchical case. The largest ratios found are again for the $\tau \to \mu \gamma$ and $\tau^- \to \mu^- \mu^- \mu^+$ decays. All the LFV $\tau$ and $\mu$ decay rates are mainly sensitive to $\tan{\beta}$, the heaviest neutrino mass $m_{N_3}$, which we have set to $m_{N_3} = 10^{14}$ GeV, and the complex angles $\theta_1$ and $\theta_2$ in the $R$ matrix, which have been taken in the ranges $3 < \tan{\beta} < 50$, $0 < |\theta_i|< 3$ and $0 < \arg(\theta_i) < \pi/4$.
For the values of these parameters at the upper limit of the studied intervals, we have found that some of the predicted branching ratios are clearly above the corresponding experimental upper bounds. The most restrictive channels are $\mu \to e \gamma$, $\mu \to 3e$ and $\tau \to \mu \gamma$. Therefore, we get in this region important restrictions on the possible values of the mSUGRA and seesaw parameters. In particular, for $\theta_2 = 2.8 e^{i \pi/4}$, we get that the whole studied range of $100 \, \mbox{GeV} < M_0, M_{1/2} < 800 \, \mbox{GeV}$ with $\tan{\beta} = 50$ is totally excluded by $\tau \to \mu \gamma$. Values of $M_0$ and $M_{1/2}$ in the low region below $250$ GeV are also excluded by $\tau^- \to \mu^- \mu^- \mu^+$ data. The case of $\theta_1$ is even more restrictive, because the predictions for $\mu \to e \gamma$, $\mu \to 3e$ and $\tau \to \mu \gamma$ totally exclude a light SUSY scenario for practically all $\theta_1$ values. Perhaps the most striking result is that even for the most conservative choice of $R = 1$, that is, $\theta_1 = \theta_2 = \theta_3 = 0$, there are also important restrictions at low $M_0$, $M_{1/2}$ and large $\tan{\beta}$ values. In particular, for $\tan{\beta} = 50$, values lower than or equal to $M_0 = 250$ GeV and $M_{1/2} = 150$ GeV are totally excluded by $\tau \to \mu \gamma$, $\mu \to e \gamma$ and $\mu^- \to e^- e^- e^+$ data. For this conservative choice of $R=1$ we have also found the surprising result that both $\mu \to e \gamma$ and $\mu^- \to e^- e^- e^+$ place important restrictions on the allowed values of the $U_{MNS}$ angle $\theta_{13}$. For values lower than or equal to $M_0 = 250$ GeV and $M_{1/2} = 150$ GeV, and for $\tan\beta=50$ and $m_{N_3 }=10^{14}$ GeV, we get that values of $\theta_{13}$ larger than 1 degree are not allowed by these LFV data. In conclusion, it is clear from these results that the LFV $\tau$ and $\mu$ decays studied here do restrict significantly the mSUGRA and seesaw parameter space.
A more refined analysis of the restrictions on this multidimensional parameter space deserves a further study. We thank the SLAC theory group members for their kind hospitality, the valuable environment for discussions there and the facilities provided at SLAC, where most of this work was done. We also thank K. Tobe for some clarifications regarding some results in [@Hisano:1995cp]. M.J. Herrero acknowledges financial support from the Spanish Ministry of Science and Education (MEC) through the grant under the name “Estancias de Profesores de Universidad en Centros Extranjeros”, ref: PR2005-0069. E. Arganda acknowledges the Spanish MEC for financial support through his FPU grant AP2003-3776. This work was also supported by the Spanish MEC under project FPA2003-04597. \[apendices\]

Feynman rules {#apendice1}
==========================

In this appendix we collect the Feynman rules for the interactions that are relevant in this work. They are expressed in the physical eigenstate basis, for all the MSSM sectors involved: sleptons ${\tilde l_X}$ $(X=1,..,6)$, sneutrinos ${\tilde{\nu_X}}$ $(X=1,2,3)$, neutralinos ${\tilde \chi^0_A}$ $(A=1,..,4)$, charginos ${\tilde \chi^-_A}$ $(A=1,2)$ and the neutral Higgs bosons $H_p \,(p=1,2,3)\,=h^0, H^0, A^0$.
Photon interactions ------------------- The Feynman rules for the photon interactions that are used in this work are given by, ![image](photon.epsi){width="7cm"} \[neutralinos\_feyn\] Neutralino interactions ----------------------- The Feynman rules for neutralinos that take part in the one-loop diagrams computed here are the following: ![image](neutralinos.epsi){width="9.5cm"} where $$\begin{aligned} N_{iAX}^L &=& -g \sqrt{2} \left\{ \frac{m_{l_i}}{2 M_W \cos{\beta}} N_{A3}^{\ast} R_{(1, 3, 5) X}^{(l)} + \tan{\theta_W} N_{A1}^{\ast} R_{(2, 4, 6) X}^{(l)} \right\} \\ N_{iAX}^R &=& -g \sqrt{2} \left\{ -\frac{1}{2} \left( \tan{\theta_W} N_{A1} + N_{A2} \right) R_{(1, 3, 5) X}^{(l)} + \frac{m_{l_i}}{2 M_W \cos{\beta}} N_{A3} R_{(2, 4, 6) X}^{(l)} \right\} \nonumber \\\end{aligned}$$ $C$ is the charge conjugation matrix and $P_{L, R} = \frac{1 \mp \gamma_5}{2}$. Here $R^{(l)}$ and $N$ are the rotation matrices in the charged slepton and neutralino sectors, respectively. The definition of $N$ can be found in  [@Haber:1985rc; @Gunion:1986yn]. Chargino interactions --------------------- The Feynman rules for the chargino interactions are given by ![image](charginos.epsi){width="8.5cm"} \[charginos\_feyn\] where $$\begin{aligned} C_{iAX}^L &=& g \frac{m_{l_i}}{\sqrt{2} M_W \cos{\beta}} U_{A2}^{\ast} R_{(1, 2, 3) X}^{(\nu)} \\ C_{iAX}^R &=& -g V_{A1} R_{(1, 2, 3) X}^{(\nu)}\end{aligned}$$ and $R^{(\nu)}$, $U$ and $V$ are the rotation matrices in the sneutrino and chargino sectors, respectively. The definitions of $U$ and $V$ can be found in  [@Haber:1985rc; @Gunion:1986yn].
$Z$ boson interactions ---------------------- The Feynman rules for $Z$ boson interactions are given by, ![image](Zneutralinos.epsi){width="9cm"} where $$\begin{aligned} E_{AB}^{L(n)} &=& \frac{g}{\cos{\theta_W}} O_{AB}^{\prime \prime L} = \frac{g}{c_W} \left( -\frac{1}{2} N_{A3} N_{B3}^{\ast} + \frac{1}{2} N_{A4} N_{B4}^{\ast} \right) \\ E_{AB}^{R(n)} &=& \frac{g}{\cos{\theta_W}} O_{AB}^{\prime \prime R} = -\frac{g}{c_W} \left( -\frac{1}{2} N_{A3}^{\ast} N_{B3} + \frac{1}{2} N_{A4}^{\ast} N_{B4} \right)\end{aligned}$$ ![image](Zcharginos.epsi){width="9cm"} where $$\begin{aligned} E_{AB}^{L(c)} &=& -\frac{g}{\cos{\theta_W}} O_{AB}^{\prime R} = -\frac{g}{c_W} \left[ -\left( \frac{1}{2} - s_W^2 \right) U_{A2}^{\ast} U_{B2} - c_W^2 U_{A1}^{\ast} U_{B1} \right] \\ E_{AB}^{R(c)} &=& -\frac{g}{\cos{\theta_W}} O_{AB}^{\prime L} = -\frac{g}{c_W} \left[ -\left( \frac{1}{2} - s_W^2 \right) V_{A2} V_{B2}^{\ast} - c_W^2 V_{A1} V_{B1}^{\ast} \right]\end{aligned}$$ ![image](Zslpetons.epsi){width="7cm"} where $$Q_{XY}^{(\tilde{l})} = -\frac{g}{c_W} \sum_{k=1}^3 \left[ \left( -\frac{1}{2} + s_W^2 \right) R_{2k-1, X}^{(l) \ast} R_{2k-1, Y}^{(l)} + s_W^2 R_{2k, X}^{(l) \ast} R_{2k, Y}^{(l)} \right]$$ ![image](Zsneutrinos.epsi){width="7cm"} where $$Q_{XY}^{(\tilde{\nu})} = -\frac{g}{2 c_W} \delta_{XY}$$ ![image](Zleptons.epsi){width="8.5cm"} where $$\begin{aligned} Z_L^{(l)} &=& -\frac{g}{c_W} \left[ -\frac{1}{2} + s_W^2 \right] \\ Z_R^{(l)} &=& -\frac{g}{c_W} s_W^2\end{aligned}$$ We have used here and throughout the shorthand notation $s_W = \sin{\theta_W}$ and $c_W = \cos{\theta_W}$.
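As a small consistency check of these couplings as transcribed, the explicit expressions imply $E_{AB}^{R(n)} = -\big(E_{AB}^{L(n)}\big)^{\ast}$, that $E^{L(n)}$ is a hermitian matrix, and that $Q^{(\tilde{l})}_{XY}$ is hermitian, for arbitrary matrices $N$ and $R^{(l)}$. The sketch below verifies this numerically; the values of $g$ and $s_W^2$ and the random matrices are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
G, SW2 = 0.65, 0.231                      # illustrative values of g and sin^2(theta_W)
CW = np.sqrt(1.0 - SW2)

# Random complex stand-ins for the neutralino and slepton rotation matrices
# N (4x4) and R^(l) (6x6); the identities below hold for any such matrices.
N = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Rl = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))

# E^{L(n)}_{AB} and E^{R(n)}_{AB} from the expressions above (0-based columns):
EL = (G / CW) * (-0.5 * np.outer(N[:, 2], N[:, 2].conj())
                 + 0.5 * np.outer(N[:, 3], N[:, 3].conj()))
ER = -(G / CW) * (-0.5 * np.outer(N[:, 2].conj(), N[:, 2])
                  + 0.5 * np.outer(N[:, 3].conj(), N[:, 3]))

# Q^{(slepton)}_{XY} from the expression above:
Q = np.zeros((6, 6), dtype=complex)
for k in range(3):
    Q += ((-0.5 + SW2) * np.outer(Rl[2 * k, :].conj(), Rl[2 * k, :])
          + SW2 * np.outer(Rl[2 * k + 1, :].conj(), Rl[2 * k + 1, :]))
Q *= -(G / CW)

assert np.allclose(ER, -EL.conj())        # E^{R(n)}_{AB} = -(E^{L(n)}_{AB})^*
assert np.allclose(EL, EL.conj().T)       # E^{L(n)} is hermitian
assert np.allclose(Q, Q.conj().T)         # Q^{(slepton)} is hermitian
print("coupling-matrix consistency checks passed")
```

The hermiticity of the diagonal $Z$ couplings is what guarantees a real $Z l \bar l$-type vertex structure; such checks are a cheap guard against sign or conjugation slips when implementing the rules numerically.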
Higgs boson interactions ------------------------ The Feynman rules for the three neutral Higgs bosons read as, ![image](Hneutralinos.epsi){width="9cm"} where $$\begin{aligned} D_{L, AB}^{(p)} &=& -\frac{g}{\sin{\beta}} \left[ Q_{BA}^{'' \ast} \sigma_5^{(p)} - R_{BA}^{'' \ast} \sigma_2^{(p)} + \frac{m_{\chi_A^0}}{2 M_W} \sigma_2^{(p)} \delta_{BA} \right] \\ D_{R, AB}^{(p)} &=& -\frac{g}{\sin{\beta}} \left[ Q_{BA}^{''} \sigma_5^{(p) \ast} - R_{BA}^{''} \sigma_2^{(p) \ast} + \frac{m_{\chi_A^0}}{2 M_W} \sigma_2^{(p) \ast} \delta_{BA} \right]\end{aligned}$$ and $$\begin{aligned} Q_{AB}^{''} &=& \frac{1}{2} \left[ N_{A3} \left( N_{B2} - \tan{\theta_W} N_{B1} \right) + N_{B3} \left( N_{A2} - \tan{\theta_W} N_{A1} \right) \right] \\ R_{AB}^{''} &=& \frac{1}{2 M_W} \left[ M_2^* N_{A2} N_{B2} + M_1^* N_{A1} N_{B1} - \mu^* \left( N_{A3} N_{B4} + N_{A4} N_{B3} \right) \right]\end{aligned}$$ ![image](Hcharginos.epsi){width="9cm"} where $$\begin{aligned} W_{L, AB}^{(p)} &=& -\frac{g}{\sin{\beta}} \left[ Q_{BA}^{\ast} \sigma_5^{(p)} - R_{BA}^{\ast} \sigma_2^{(p)} + \frac{m_{\chi_A^-}}{2 M_W} \sigma_2^{(p)} \delta_{BA} \right] \\ W_{R, AB}^{(p)} &=& -\frac{g}{\sin{\beta}} \left[ Q_{AB} \sigma_5^{(p) \ast} - R_{AB} \sigma_2^{(p) \ast} + \frac{m_{\chi_A^-}}{2 M_W} \sigma_2^{(p) \ast} \delta_{AB} \right]\end{aligned}$$ and $$\begin{aligned} Q_{AB} &=& \frac{1}{\sqrt{2}} U_{A2} V_{B1} \\ R_{AB} &=& \frac{1}{2 M_W} \left[ M_2^* U_{A1} V_{B1} + \mu^* U_{A2} V_{B2} \right]\end{aligned}$$ ![image](Hsleptons.epsi){width="5.5cm"} where $$\begin{aligned} G_{XY}^{p (\tilde{l})} &=& -g \left[ g_{LL, e}^{(p)} R_{1X}^{*(l)} R_{1Y}^{(l)} + g_{RR, e}^{(p)} R_{2X}^{*(l)} R_{2Y}^{(l)} + g_{LR, e}^{(p)} R_{1X}^{*(l)} R_{2Y}^{(l)} + g_{RL, e}^{(p)} R_{2X}^{*(l)} R_{1Y}^{(l)}\right.
\nonumber \\ &+& g_{LL, \mu}^{(p)} R_{3X}^{*(l)} R_{3Y}^{(l)} + g_{RR, \mu}^{(p)} R_{4X}^{*(l)} R_{4Y}^{(l)} + g_{LR, \mu}^{(p)} R_{3X}^{*(l)} R_{4Y}^{(l)} + g_{RL, \mu}^{(p)} R_{4X}^{*(l)} R_{3Y}^{(l)} \nonumber \\ &+& \left. g_{LL, \tau}^{(p)} R_{5X}^{*(l)} R_{5Y}^{(l)} + g_{RR, \tau}^{(p)} R_{6X}^{*(l)} R_{6Y}^{(l)} + g_{LR, \tau}^{(p)} R_{5X}^{*(l)} R_{6Y}^{(l)} + g_{RL, \tau}^{(p)} R_{6X}^{*(l)} R_{5Y}^{(l)} \right]\end{aligned}$$ with $$\begin{aligned} g_{LL, l}^{(p)} &=& \frac{M_Z}{\cos{\theta_W}} \sigma_3^{(p)} \left( \frac{1}{2}- \sin^2{\theta_W} \right) + \frac{m_{l}^2}{M_W \cos{\beta}} \sigma_4^{(p)} \\ g_{RR, l}^{(p)} &=& \frac{M_Z}{\cos{\theta_W}} \sigma_3^{(p)} \left( \sin^2{\theta_W} \right) + \frac{m_{l}^2}{M_W \cos{\beta}} \sigma_4^{(p)} \\ g_{LR, l}^{(p)} &=& \left(-\sigma_1^{(p)}A_l-\sigma_2^{(p)*}\mu\right) \frac{m_{l}}{2 M_W \cos{\beta}} \\ g_{RL, l}^{(p)} &=& g_{LR, l}^{(p)*}\end{aligned}$$ with $A_l = (A_l)^{ii}/(Y_l)^{ii}, i = 1, 2, 3$ for $l = e, \mu, \tau$, respectively. 
![image](Hsneutrinos.epsi){width="5.5cm"} \[charginos\_feyn\] where $$G_{XY}^{p (\tilde{\nu})} = -g \left[ g_{LL, \nu}^{(p)} R_{1X}^{*(\nu)} R_{1Y}^{(\nu)} + g_{LL, \nu}^{(p)} R_{2X}^{*(\nu)} R_{2Y}^{(\nu)} + g_{LL, \nu}^{(p)} R_{3X}^{*(\nu)} R_{3Y}^{(\nu)} \right]$$ with $$g_{LL, \nu}^{(p)} = -\frac{M_Z}{2\cos{\theta_W}} \sigma_3^{(p)}$$ ![image](Hleptons.epsi){width="8cm"} \[charginos\_feyn\] where $$\begin{aligned} S_{L, i}^{(p)} &=& g \frac{m_{l_i}}{2 M_W \cos{\beta}} \sigma_1^{(p) \ast} \\ S_{R, i}^{(p)} &=& S_{L, i}^{(p) \ast} \end{aligned}$$ In all the above equations, $$\begin{aligned} \sigma_1^{(p)} &=& \left( \begin{array}{c} \sin{\alpha} \\ -\cos{\alpha} \\ i \sin{\beta} \end{array} \right) \\ \sigma_2^{(p)} &=& \left( \begin{array}{c} \cos{\alpha} \\ \sin{\alpha} \\ -i \cos{\beta} \end{array} \right) \\ \sigma_3^{(p)} &=& \left( \begin{array}{c} \sin{(\alpha + \beta)} \\ -\cos{(\alpha + \beta)} \\ 0 \end{array} \right) \\ \sigma_4^{(p)} &=& \left( \begin{array}{c} -\sin{\alpha} \\ \cos{\alpha} \\ 0 \end{array} \right) \\ \sigma_5^{(p)} &=& \left( \begin{array}{c} -\cos{(\beta - \alpha)} \\ \sin{(\beta - \alpha)} \\ i \cos{2\beta} \end{array} \right)\end{aligned}$$ and $H_p (p = 1, 2, 3) = h^0, H^0, A^0$. We have also used here the standard notation for the MSSM soft-gaugino-mass parameters $M_{1,2}$ and the $\mu$ parameter. Loop functions {#apendice2} =============== In this appendix we present the analytical expressions of the loop functions for the calculation of the $l_j^- \to l_i^- l_i^- l_j^+$ decays. In these expressions we neglect the external fermion momenta and masses, an approximation that works extremely well for the present computation.
That is, $$\begin{aligned} B(k^2, m_1^2, m_2^2) &\simeq& B(0, m_1^2, m_2^2) = B(m_1^2, m_2^2) \\ C(k_1^2, k_2^2, m_1^2, m_2^2, m_3^2) &\simeq& C(0, 0, m_1^2, m_2^2, m_3^2) = C(m_1^2, m_2^2, m_3^2) \\ D(k_1^2, k_2^2, k_3^2, m_1^2, m_2^2, m_3^2, m_4^2) &\simeq& D(0, 0, 0, m_1^2, m_2^2, m_3^2, m_4^2) \nonumber \\ &=& D(m_1^2, m_2^2, m_3^2, m_4^2)\end{aligned}$$ Two-point functions ------------------- The analytical expressions for the $B_0$ and $B_1$ functions are the following: $$\begin{aligned} B_0(m_1^2, m_2^2) &=& -\log{m_2^2} + \frac{m_2^2 - m_1^2 + m_1^2 \log{\left(\frac{m_1^2}{m_2^2}\right)}}{m_2^2 - m_1^2} \\ B_1(m_1^2, m_2^2) &=& -\frac{1}{2} + \frac{1}{2} \log{m_2^2} - \frac{m_1^2 - m_2^2 + 2 m_1^2 \log{\left(\frac{m_2^2}{m_1^2}\right)}}{4 \left( m_1^2 - m_2^2 \right)^2}\end{aligned}$$ Three-point functions --------------------- The expressions for the three-point functions used in this work are given by, $$\begin{aligned} C_0(m_1^2, m_2^2, m_3^2) &=& -\frac{1}{m_2^2 - m_3^2} \left( \frac{m_1^2 \log{m_1^2} - m_2^2 \log{m_2^2}}{m_1^2 - m_2^2} - \frac{m_1^2 \log{m_1^2} - m_3^2 \log{m_3^2}}{m_1^2 - m_3^2} \right) \\ \tilde{C}_0(m_1^2, m_2^2, m_3^2) &=& 1 - \frac{1}{m_2^2 - m_3^2} \left( \frac{m_1^4 \log{m_1^2} - m_2^4 \log{m_2^2}}{m_1^2 - m_2^2} - \frac{m_1^4 \log{m_1^2} - m_3^4 \log{m_3^2}}{m_1^2 - m_3^2} \right) \\ C_{11}(m_1^2, m_2^2, m_3^2) &=& \frac{m_1^2}{2 (m_1^2 - m_2^2)^2 (m_1^2 - m_3^2)^2 (m_2^2 - m_3^2)} \nonumber \\ &\times& \left[ -(m_1^2 - m_2^2) (m_1^2 - m_3^2) (m_2^2 - m_3^2) + m_1^2 m_2^2 (2 m_1^2 - m_2^2) \log{\frac{m_1^2}{m_2^2}} \right. \nonumber \\ &+& \left.
m_1^2 m_3^2 (-2 m_1^2 + m_3^2) \log{\frac{m_1^2}{m_3^2}} + m_2^2 m_3^2 (-2 m_1^2 + m_2^2) (-2 m_1^2 + m_3^2) \log{\frac{m_2^2}{m_3^2}} \right] \nonumber \\ \\ C_{12}(m_1^2, m_2^2, m_3^2) &=& \frac{1}{2 (m_1^2 - m_2^2) (m_1^2 - m_3^2)^2 (m_2^2 - m_3^2)^2} \nonumber \\ &\times& \left[ (m_1^2 - m_2^2) (m_1^2 - m_3^2) (m_2^2 - m_3^2) m_3^2 + m_2^4 m_3^2 (2 m_1^2 - m_3^2) \log{\frac{m_2^2}{m_3^2}} \right. \nonumber \\ &+& m_1^4 \left. \left( m_2^4 \log{\frac{m_1^2}{m_2^2}} + m_3^2 (-2 m_2^2 + m_3^2) \log{\frac{m_1^2}{m_3^2}} \right) \right] \\ C_{24}(m_1^2, m_2^2, m_3^2) &=& \frac{1}{4} \tilde{C}_0(0, 0, m_1^2, m_2^2, m_3^2)\end{aligned}$$ Four-point functions -------------------- Finally, the four-point functions have the following expressions, $$\begin{aligned} D_0(m_1^2, m_2^2, m_3^2, m_4^2) &=& -\frac{m_1^2 \log{m_1^2}}{(m_1^2 - m_2^2) (m_1^2 - m_3^2) (m_1^2 - m_4^2)} + \frac{m_2^2 \log{m_2^2}}{(m_1^2 - m_2^2) (m_2^2 - m_3^2) (m_2^2 - m_4^2)} \nonumber \\ &-& \frac{m_3^2 \log{m_3^2}}{(m_1^2 - m_3^2) (m_2^2 - m_3^2) (m_3^2 - m_4^2)} + \frac{m_4^2 \log{m_4^2}}{(m_1^2 - m_4^2) (m_2^2 - m_4^2) (m_3^2 - m_4^2)} \nonumber \\ \\ \tilde{D}_0(m_1^2, m_2^2, m_3^2, m_4^2) &=& -\frac{m_1^4 \log{m_1^2}}{(m_1^2 - m_2^2) (m_1^2 - m_3^2) (m_1^2 - m_4^2)} + \frac{m_2^4 \log{m_2^2}}{(m_1^2 - m_2^2) (m_2^2 - m_3^2) (m_2^2 - m_4^2)} \nonumber \\ &-& \frac{m_3^4 \log{m_3^2}}{(m_1^2 - m_3^2) (m_2^2 - m_3^2) (m_3^2 - m_4^2)} + \frac{m_4^4 \log{m_4^2}}{(m_1^2 - m_4^2) (m_2^2 - m_4^2) (m_3^2 - m_4^2)} \nonumber \\\end{aligned}$$
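Since the loop functions above are given in closed form, they are straightforward to evaluate numerically. The sketch below (plain Python; all arguments are squared masses and the sample values are arbitrary) implements $B_0$, $C_0$ and $D_0$ exactly as printed, and checks the expected total symmetry of $C_0$ and $D_0$ under permutations of their (non-degenerate) mass arguments.

```python
from math import log
from itertools import permutations

def B0(a, b):
    # B_0(m1^2, m2^2) as printed above; symmetric in (a, b).
    return -log(b) + (b - a + a * log(a / b)) / (b - a)

def C0(a, b, c):
    # C_0(m1^2, m2^2, m3^2) as printed above.
    return -1.0 / (b - c) * (
        (a * log(a) - b * log(b)) / (a - b)
        - (a * log(a) - c * log(c)) / (a - c))

def D0(a, b, c, d):
    # D_0(m1^2, ..., m4^2) as printed above.
    return (-a * log(a) / ((a - b) * (a - c) * (a - d))
            + b * log(b) / ((a - b) * (b - c) * (b - d))
            - c * log(c) / ((a - c) * (b - c) * (c - d))
            + d * log(d) / ((a - d) * (b - d) * (c - d)))

# The functions are totally symmetric in their mass arguments; check
# this numerically for a sample (non-degenerate) point.
assert abs(B0(1.0, 2.0) - B0(2.0, 1.0)) < 1e-12
ref3 = C0(1.0, 2.5, 4.0)
assert all(abs(C0(*p) - ref3) < 1e-9 for p in permutations((1.0, 2.5, 4.0)))
ref4 = D0(1.0, 2.5, 4.0, 7.3)
assert all(abs(D0(*p) - ref4) < 1e-9
           for p in permutations((1.0, 2.5, 4.0, 7.3)))
```

Note that these expressions become numerically delicate for (near-)degenerate masses, where the appropriate limits should be taken analytically instead.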
--- author: - | [Pengfei Zuo[^\*^]{}, Yu Hua[^\*^]{}, Yuan Xie[^^]{}]{}\ [^\*^]{}Huazhong University of Science and Technology\ [^^]{}University of California, Santa Barbara\ bibliography: - 'references.bib' title: '**A Secure and Persistent Memory System for Non-volatile Memory**' --- Abstract {#abstract .unnumbered} -------- In non-volatile memory, ensuring both the security and the correctness of persistent data is fundamental. However, security and persistence issues are usually studied independently in existing work. To achieve both data security and persistence, simply combining existing persistence schemes with memory encryption is inefficient, owing to crash inconsistency and significant performance degradation. To bridge the gap between security and persistence, this paper proposes **SecPM**, a **Sec**ure and **P**ersistent **M**emory system, which consists of a counter cache write-through (CWT) scheme and a locality-aware counter write reduction (CWR) scheme. Specifically, SecPM leverages the CWT scheme to guarantee crash consistency by ensuring that both the data and its counter are durable before the data flush completes, and leverages the CWR scheme to improve system performance by exploiting the spatial locality of counter storage, log and data writes. We have implemented SecPM in gem5 with NVMain and evaluated it using five widely-used workloads. Extensive experimental results demonstrate that SecPM reduces write requests by up to half and speeds up transaction execution by $1.3\times\sim 2.0\times$ using the CWR scheme,
--- abstract: 'The growth of an elastic film adhered to a confining substrate might lead to the formation of delamination blisters. Many results have been derived when the substrate is flat. The equilibrium shapes, beyond small deformations, are determined by the interplay between the sheet elastic energy and the adhesive potential due to capillarity. Here, we study a non-trivial generalization of this problem and consider the adhesion of a growing elastic loop to a confining *circular* substrate. The fundamental equations, i.e., the Euler Elastica equation, the boundary conditions and the transversality condition, are derived from a variational procedure. In contrast to the planar case, the curvature of the delimiting wall appears in the transversality condition, thus acting as a further source of adhesion. We provide the analytic solution to the problem under study in terms of elliptic integrals and perform the numerical and the asymptotic analysis of the characteristic lengths of the blister. Finally, and in contrast to previous studies, we also discuss the mechanics and the internal stresses in the case of vanishing adhesion. Specifically, we give a theoretical explanation of the observed divergence of the mean pressure exerted by the strip on the container in the limit of small excess-length.' author: - 'R. De Pascalis' - 'G. Napoli[^1]' - 'S. S. Turzi' date: '27/11/2013' title: '**Growth-induced blisters in a circular tube**' --- Introduction ============ The classical theory of bending, due to Bernoulli and Euler more than three centuries ago, is still considered a key simplified model for understanding the mechanics of many [*hard*]{} and [*soft*]{} systems. Within this theory, the mechanical properties and the shape of rods and sheets can be determined by solving an ordinary differential equation: the fundamental equation of Euler’s [*Elastica*]{}.
Several problems may be tackled by this method, albeit with some variations, as for instance the occurrence of delamination blisters [@Williams:1997; @Wagner:2013], the adhesion of lipid tubules [@Rosso:1998], growth mechanisms in climbing plants [@Goriely:2006], the mechanics of the insertion of a guidewire into the artery of a patient [@Chen:2007], the equilibria of the uplifted heavy elastic strip [@Domokos:2003] and the pattern formation of flexible structures induced by tight packing [@Boue:2006]. In this paper, we analyze the growth of a closed planar Euler-Bernoulli strip confined by a rigid *circular* domain. We employ a very simple growth mechanism and posit that the total length of the strip may be changed arbitrarily by a suitable external action. Thus, mathematically, we consider the total length of the strip as an adjustable parameter, whose governing equation has no need to be specified. Furthermore, we assume the strip to be inextensible and always at equilibrium. A simple rudimentary experimental setup can help to describe the physical phenomenon we wish to analyze. Let us imagine a flexible cylinder made out of a piece of paper, simply by gluing together the edges of a rectangular sheet. Next, we insert this flexible cylinder into a rigid circular tube of smaller radius. The shape of the confined sheet, unavoidably, exhibits [*blisters*]{}, [*i.e.*]{}, regions of the sheet which are not in contact with the substrate but form inward protuberances. Even in the presence of an ideal frictionless substrate, part of the strip adheres to the confining wall as the circular geometry acts as an adhesion mechanism. An increase of adhesion may be further promoted by capillarity. Generally, adhesion by capillarity may occur in an elastic structure when its restoring ability is unable fully to overcome the interfacial attraction induced by liquid surface tensions. 
Various capillary adhesion phenomena can be observed at small scale in both nature and industrial processes. For a more exhaustive overview of these topics, we refer the reader to the recent review article [@Liu:2012] and references therein. The problem of a growing Elastica confined by a frictionless rigid circumference has been studied numerically in [@Boue:2006] in order to explain the packing of a flexible sheet inside a cylindrical cavity. This analysis has been subsequently extended to the growth of a Helfrich membrane confined within a spherical domain [@Kahraman:2012]. In addition to the elastic problem, both these papers consider the complicated conditions arising from the contact with the container and from the self-contact. Other studies exploit the theory of elliptic integrals to characterize the adhesion of lipid tubules on curved substrates [@Rosso:1998], the stability of clamped elastic arches [@Patricio:1998] or the Euler buckling of constrained straight beams [@Domokos:1997]. More recently [@Wagner:2013], the Elastica theory has been used to study the deformation of thin elastic sheets that adhere to a stiff flat substrate by means of a surface potential. The authors use a combination of numerical and asymptotic techniques to predict the equilibrium shapes of this [*sticky Elastica*]{}. Our paper generalizes the theoretical results on the formation of delamination blisters on a flat surface, as reported by Wagner and Vella [@Wagner:2013], to the case of a circular substrate. In §2, we obtain the equilibrium equations as extremal points of the energy functional subject to suitable constraints. The variational procedure is here slightly complicated by the fact that the end-points of the energy functional are not fixed, but are part of the unknowns. This entails that, besides the equilibrium equation and its boundary conditions, a further boundary condition is needed to determine the location of the detachment points.
In contrast to the case of a flat substrate, the morphology of the strip is affected not only by the [*elastocapillarity length*]{}, but also by the container radius. In §3, the symmetrical equilibrium configurations of the strip are investigated using both an integral formulation and an asymptotic approach. Relevant measurable quantities (the blister height, the length of the adherent part and the internal forces) are provided as functions of the total length and the adherence strength. Furthermore, we provide an asymptotic expansion of the solution in terms of a dimensionless parameter measuring the excess length of the beam with respect to the container length. Finally, in §4, we analyze the case of adherence due to the sole curvature when the delimiting wall is a frictionless unilateral contact. In this case, the tensional state of the entire strip and the forces exerted on the wall can be easily computed. As we shall see, these forces show a singular behaviour and thus lead to non-trivial conclusions. Variational problem =================== In this section we derive the equilibrium equation and the boundary conditions that have to be fulfilled by the free part of the Euler-Bernoulli beam. Let us describe the geometry of the beam with a planar curve $\gamma$. In the plane of the curve, we introduce a Cartesian frame of reference $(O;\ev_x,\ev_y)$, where $O$ is the origin and $\ev_x$, $\ev_y$ are the unit vectors along, respectively, the $x$ and the $y$ axes. Let $s$ be the arc-length along the curve and $\theta(s)$ the inflection angle. More precisely, $\theta(s)$ measures the anti-clockwise angle between $\ev_x$ and the tangent to the curve $\tv(s)$. Therefore, the Frenet curvature is $\theta_s(s)$, where the subscript denotes differentiation with respect to its argument. Each point $p$ on $\gamma$ can be parametrized by the Cartesian coordinates $x(s)$ and $y(s)$, so that its position vector is $(p - O) = x(s) \ev_x + y(s) \ev_y$. 
On the other hand, since $\tv = \d (p-O)/\d s$, it follows that $$x_s = \cos \theta, \qquad y_s = \sin \theta. \label{para}$$ Furthermore, we posit the following classical bending energy $$W_b[\theta_s] =\frac{\kappa}{2}\int_{-\frac{\ell}{2}}^{\frac{\ell}{2}} (\theta_s - c_0)^2 \d s,$$ where the constant $c_0$ accounts for a possible spontaneous curvature of the beam, $\kappa$ is the bending rigidity and $\ell$ is the total length. ![Schematic representation of the elastic strip, confined by a cylindrical wall of radius $r$. We assume that its shape has a mirror symmetry with respect to the $y$-axis, allowing the study only for the branch $s\ge0$. The free part of the curve is parametrized by values of the arc length in the range $s \in [0, \bs)$, where $\bs$ is the detachment point. At $s=s_0$ the curvature vanishes. The adopted conventions imply that the curvature is positive in $s\in[0,s_0)$ and negative in $s \in (s_0, \ell/2]$. In particular, $\theta_s \equiv -1/r$ throughout the adherent part.[]{data-label="fig1"}](figure1.pdf){width="60.00000%"} It is worth noticing that the parametrization of $\gamma$ by means of the inflection angle $\theta$ automatically ensures the arc-length preservation, thus no Lagrange multiplier associated to the inextensibility constraint is needed. With reference to the schematic representation in Figure \[fig1\], we assume that the beam forms a unique blister. Thus, the beam at equilibrium consists of two parts: a free –non-adherent– curve, described by $s \in (-\bs, \bs)$, and an adherent one, with $s$ in the range $s \in [-\ell/2,-\bs] \cup [\bs, \ell/2]$. Hereinafter, we assume $\theta(s)$ odd, so that the problem can be studied in the interval $[0, \ell/2]$ only. In our simplified treatment there are only two detachment points, which correspond to the arc-lengths $-\bs$ and $\bs$, respectively and, of course, are constrained to lie along a circumference of radius $r$. 
A glance at Figure \[fig1\] shows that their distance is $-2 r \sin \bt$, where $\bt$ is the value of the inflection angle at $\bs$ (note that $\bt \le 0$, by our conventions). Elementary geometric arguments also yield the following identity $$\bt := \theta(\bs) = -\frac{\bs}{r} + \frac{\ell - 2 \pi r}{2r}. \label{sb:tb}$$ On the other hand, the distance between the detachment points can be obtained by integrating the first of the parametrization equations over $[-\bs,\bs]$. As a consequence, the equilibrium solution should obey the global constraint $$\int_{0}^{\bs} \cos \theta \d s = - r \sin\bt. \label{vinco}$$ Thus, in the free region, the effective potential to minimize becomes $$W_f[\theta, \theta_s;\bs] =\int_{0}^{\bs} \kappa(\theta_s - c_0)^2 \d s - 2T_x \left(r \sin\bt + \int_{0}^{\bs} \cos \theta \,\d s \right) ,$$ where $T_x$ is a Lagrange multiplier and $\bs$ is to be determined in the minimization process. In the adherent region, the beam is in contact with the circular container and the bending energy is constant since $\theta_s = -1/r$. However, in order to account for elasto-capillarity effects, we further consider an adhesive potential describing the strip-substrate adhesion. We assume this in its simplest form, taking it proportional to the length of the sticking region through a positive constant $w$, which is called the [*adherence strength*]{}. The energy associated with the adherent part is therefore $$W_a[\bs] = \int_{\bs}^{\frac{\ell}{2}} \kappa\left(\frac{1}{r} + c_0\right)^2 \d s -2 \int_{\bs} ^{\frac{\ell}{2}} w \,\d s.$$ Since the shape of the adherent part is fixed, this energy is a function of $\bs$ only. Euler-Lagrange equations and boundary conditions ------------------------------------------------ The equilibrium configurations are stationary points of the total free energy $W = W_f + W_a$.
Adopting the notation of [@Fomin:1963], we consider two neighboring curves $\theta(s)$ and $\theta_h(s)$ such that $$\theta_h(s) = \theta(s) + h(s).$$ The variational procedure must explicitly include the fact that the end points $s=0$ and $s= \ell/2$ are fixed, while the detachment point $s=\bs$ is not. Consequently, standard arguments [@Fomin:1963] show that the possible variations have to satisfy the following equations at the end-points, to first order, $$\theta_h(0) = \theta(0), \qquad \theta_h(\ell/2) = \theta(\ell/2) ,$$ $$h(\bs) = \de \bt - \theta_s(\bs) \de \bs ,$$ where $\de \bt := \theta_h(\bs+\de \bs)-\theta(\bs)$. Thus, by setting $$g_f(\theta_s) = \kappa(\theta_s - c_0)^2, \quad g_c(\theta) = - 2T_x \cos \theta, \quad g_a = \kappa\left(\frac{1}{r} + c_0\right)^2 - 2 w , \label{gg}$$ the first variation of $W$ is $$\begin{aligned} \de W = \int_0 ^{\bs}\left[\frac{\pt g_c}{\pt \theta} - \frac{\d }{\d s}\frac{\pt g_f}{\pt \theta_s} \right] h(s) \d s + \left. \left(\frac{\pt g_f}{\pt \theta_s} - 2 T_x r \cos \theta \right) \right|_{s=\bs} \hspace{-2mm} \de \bt \nonumber \\ + \left.\left(g_f - \frac{\pt g_f}{\pt \theta_s} \theta_s + g_c - g_a\right)\right|_{s=\bs} \hspace{-2mm} \de \bs. \label{var1}\end{aligned}$$ Since $\bs$ lies on a circumference of radius $r$ it follows that the variations $\de \bt$ and $\de \bs$ are not independent: $$\de \bt = -\frac{\de \bs}{r} \, . \label{vart}$$ Substituting this relation into the first variation, after some manipulations, yields $$\de W = \int_0 ^{\bs}\left[\frac{\pt g_c}{\pt \theta} - \frac{\d }{\d s}\frac{\pt g_f}{\pt \theta_s} \right] h(s) \d s + \left[-\left(\frac{1}{r} + \theta_s\right)\frac{\pt g_f}{\pt \theta_s} + g_f - g_a\right]_{s=\bs} \hspace{-2mm}\de \bs \, . \label{var2}$$ The equilibrium condition, $\de W = 0$, for any arbitrary choice of $h(s)$ and $\de \bs$, leads to the requirement that each term enclosed in square brackets in the expression above must vanish.
Therefore, once the explicit expressions of $g_a$, $g_c$ and $g_f$ given above are taken into account, the following Euler-Lagrange equation is derived $$\kappa \theta_{ss} - T_x \sin \theta = 0, \qquad s \in (0,\bs), \label{pendolo}$$ with the boundary conditions $$\theta(0) =0, \qquad \theta(\bs) = \bt. \label{compa}$$ As expected, the angle $\theta(s)$ has to satisfy the non-linear pendulum equation. Nevertheless, contrary to the classic pendulum dynamics, the equation has to be solved with boundary conditions rather than initial conditions. Moreover, both the Lagrange multiplier $-T_x$ (which plays the role of gravity in the pendulum analogy) and the boundary point $\bs$ are unknowns. However, the vanishing of the coefficient of $\de \bs$ in the first variation gives a further condition at the detachment point: $$\kappa \left(\bt_s + \frac{1}{r}\right)^2 - 2 w = 0, \label{aderenza}$$ which we refer to as the *transversality condition*. This equation, along with the equilibrium equation, its boundary conditions and the global constraint above, allows us to solve the problem and thus determine the unknowns $T_x$ and $\bs$. This condition is a special case of a more general *adhesive condition* obtained in [@Rosso:1998]. It reflects the fact that there are two different sources of adhesion: (i) the adherence by curvature, which is proportional to the bending stiffness and is a decreasing function of the radius $r$; and (ii) the adhesive potential whose strength is provided by $w$. In the limit case where $w=0$, this condition guarantees the continuity of the curvature $\theta_s$ at $s=\bs$. On the other hand, whenever the substrate is flat ($r \rightarrow \infty$), we correctly recover the adherence condition used in [@Wagner:2013]. By defining the [*elasto-capillarity length*]{} $\ell_{ec} = \sqrt{\kappa/w}$, the transversality condition reduces to $$\bt_s = -\frac{1}{r} - \frac{\sqrt 2}{ \ell_{ec}} \label{ad1}$$ where the minus sign in front of $\sqrt 2/\ell_{ec}$ is due to the fact that the curvature radius at the detachment point cannot exceed that of the delimiting wall.
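The boundary-value problem above is governed by the non-linear pendulum equation, and any numerical treatment should preserve its energy-like first integral: with $\tau = -T_x/\kappa$, the quantity $\tfrac{1}{2}\theta_s^2 - \tau\cos\theta$ is constant along solutions of $\theta_{ss} = -\tau\sin\theta$. The stdlib-only sketch below verifies this with a classical fourth-order Runge-Kutta step, starting from a turning point where $\theta_s = 0$; the values of $\tau$ and of the turning-point angle $\theta_0$ are arbitrary samples, not solutions of the full blister problem.

```python
from math import sin, cos

tau, theta0 = 2.0, 1.2   # sample multiplier and turning-point angle
h, nsteps = 1e-3, 2000

def rhs(theta, v):
    # Pendulum equation theta_ss = -tau sin(theta), written as a system.
    return v, -tau * sin(theta)

theta, v = theta0, 0.0   # start at the turning point: theta_s = 0
worst = 0.0
for _ in range(nsteps):
    k1 = rhs(theta, v)
    k2 = rhs(theta + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
    k3 = rhs(theta + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
    k4 = rhs(theta + h * k3[0], v + h * k3[1])
    theta += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    # With theta_s(s0) = 0 the invariant reads
    # (1/2) theta_s^2 = tau (cos(theta) - cos(theta0)).
    worst = max(worst, abs(0.5 * v * v - tau * (cos(theta) - cos(theta0))))

assert worst < 1e-8  # conserved to RK4 accuracy
```

In the actual boundary-value problem $\tau$ and $\bs$ would be adjusted (e.g. by shooting) so that the boundary and transversality conditions are met; the conservation check above is a useful diagnostic for any such scheme.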
Finally, we remark that the spontaneous curvature $c_0$ plays no role in the equilibrium equations. Indeed, the energetic terms involving $c_0$ are null Lagrangians and, hence, they could possibly affect only the boundary conditions. However, since $\gamma$ is a closed curve, $c_0$ cannot have any effect on the equilibrium shape. Equilibrium shapes ================== We now examine a special class of equilibrium solutions, schematically shown in Figure \[fig1\]. The expected equilibrium solution $\theta(s)$ is an increasing function for $s \in (0,\s0)$, while it decreases for $s \in (\s0, \bs)$. Let $\theta_0 = \theta(\s0) \in [0,\pi]$ be the maximum value of $\theta(s)$ in $(0,\bs)$. Standard arguments in the calculus of variations show that a first integral of the equilibrium equation is $$\frac{1}{2}\theta^2_s = \tau(\cos \theta - \cos \theta_0), \label{primo}$$ where we have set $\tau = -T_x/\kappa$. To simplify the notation, we introduce $$\eta := \frac{1}{r} + \frac{\sqrt{2}}{ \ell_{ec}},$$ and rewrite the transversality condition as $\theta_s(\bs) = -\eta$. Therefore, evaluating the first integral at $s=\bs$ yields $$\frac{1}{\tau} = \frac{2}{\eta^2} (\cos \bt- \cos \t0 ), \label{eq:tau}$$ with $|\bt| \neq \t0$. Replacing this expression for $\tau$ back into the first integral, we finally deduce that $$\theta_s = \pm \eta \sqrt{\frac{\cos \theta - \cos \theta_0}{\cos \bt - \cos \theta_0}}, \label{tetas}$$ where the sign $+$ (respectively, $-$) is to be used in the interval $s\in(0,\s0)$ (respectively, $s\in(\s0,\bs)$). By symmetry $\theta(0)=0$, and the last equation evaluated at $s=0$ shows that $\cos\bt -\cos\t0 >0$. This gives a restriction on the possible values of $\bt$: $|\bt|<\t0$. Furthermore, this is an ordinary differential equation which can be solved by separation of variables in $(0,\bs)$.
To this end, we change the variable of integration from $s$ to $\theta$ $$\d s = \pm \frac{1}{\eta} \sqrt{\frac{\cos \bt - \cos \theta_0} {\cos \theta - \cos \theta_0}} \, \d \theta$$ and divide the integral into the two sub-regions where the function $\theta(s)$ is monotonic $$\int_0^\t0 \frac{\d \theta}{\sqrt{\cos \theta - \cos \theta_0}} - \int_\t0^{\bt} \frac{\d \theta}{\sqrt{\cos \theta - \cos \theta_0}} = \frac{\eta \bs}{ \sqrt{\cos \bt - \cos \theta_0}}, \label{sepa}$$ where, on the right hand side, the boundary conditions have been used. Finally, this equation can be recast in the following form $$4\F(q_0)-2\F(\bar q) ={\eta \bs} {\sqrt \frac{1-\cos\theta_0}{\cos \bt - \cos \theta_0}}, \label{prima}$$ where $\F$ denotes the incomplete elliptic integral of the first kind [@Abramowitz:1970] and, for ease of notation, we set $$q_0 := \left\{\frac{\t0}{2}, \csc^2 \frac{\t0}{2} \right\}, \qquad \bar q := \left\{\frac{\bt}{2}, \csc^2 \frac{\t0}{2} \right\}.$$ Similarly, we reduce the closure constraint as follows $$\int_{0} ^{\t0} \frac{\cos \theta}{\sqrt{\cos \theta - \cos \theta_0}} \d \theta - \int_{\t0} ^{\bt} \frac{\cos \theta}{\sqrt{\cos \theta - \cos \theta_0}} \d \theta = - \frac{ \eta r \sin \bar \theta}{ \sqrt{\cos \bt - \cos \theta_0}}, \label{pp}$$ and rewrite its left-hand side in terms of elliptic integrals. With the aid of the previous result, we finally obtain $${2 \left(1-\cos\theta_0\right) \left[2 \E (q_0)- \E (\bar q)\right]} = - \eta(\bs \cos \t0 + r\sin \bar \theta)\sqrt\frac{{1-\cos\theta_0}}{\cos \bt - \cos \theta_0}, \label{seconda}$$ where $\E$ represents the incomplete elliptic integral of the second kind [@Abramowitz:1970]. By using the geometric identity relating $\bs$ and $\bt$, we can eliminate $\bs$ from the two transcendental equations above in favor of $\bt$. Thus, the solutions of these nonlinear transcendental equations (whenever they exist) give the values of $\bt$ and $\t0$ as functions of the length $\ell$, the elasto-capillarity length $\ell_{ec}$ and the radius $r$. Hence, the solution is completely determined.
![The shaded area represents the region of the parameters $\eps$ and $\varrho$ where the solution is valid. This region is delimited by two solid curves. The first (black line) identifies the configuration at which the blister vertex ($s=0$) is in contact with the diametrically opposite point $s=\ell/2$. We say that the elastic strip “touches the container at the bottom”. For values of $(\varrho,\eps)$ along the second curve (blue line), the elastic strip is in contact with itself in an intermediate point (self-contact). Below the dashed line, $\ell^{(adh)}$ is a decreasing function of $\eps$, while above it is an increasing function of $\eps$. []{data-label="fig:limits"}](fig_Limits.pdf){width="50.00000%"} It is now of special interest to study the expression of the length of the adherent portion of the strip, defined as $\ell^{(adh)} := \ell - 2 \bs$, and that of the blister height, defined as $\delta:=r-y(0)$. These are, in fact, quantities easily accessible experimentally. The former is simply given by $$\ell^{(adh)} = 2r(\pi + \bt) \, . \label{eq:laderenza}$$ In order to derive $\delta$ as a function of $\bt$ and ${\theta_0}$, we note that $\delta = r-(y(0)-y(\bs))-y(\bs)$, so that we can write $$\begin{split} \delta & = r(1-\cos\bt) + \int_{0}^{\bs}\sin\theta(s) \d s = r(1-\cos\bt) -\frac{1}{\tau} \int_{0}^{\bs}\theta_{ss} \d s \\ & = r(1-\cos\bt) -\frac{1}{\tau} \left[\theta_s(\bs) - \theta_s(0) \right] \, . \nonumber \end{split}$$ We then use Eqs. and to simplify further and obtain $$\delta = r(1-\cos\bt) + \frac{2}{\eta}\sqrt{\cos\bt-\cos{\theta_0}}\Big(\sqrt{\cos\bt-\cos{\theta_0}}+\sqrt{1-\cos{\theta_0}} \Big) \, . \label{eq:blister_height}$$ The solution in terms of elliptic integrals is relatively simple to implement computationally. However, the type of solution we seek remains valid as long as there is no self-intersection and the strip does not touch the lower part of the circular container. 
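Once $\bt$ and $\t0$ are known, both observables follow by direct evaluation of the two formulas above. A minimal sketch (plain Python/NumPy; the input values below are arbitrary placeholders, not solutions of the equilibrium equations):

```python
import numpy as np

def adherence_length(theta_bar, r):
    """Eq. (eq:laderenza): length of the adhering portion, 2 r (pi + theta_bar)."""
    return 2.0 * r * (np.pi + theta_bar)

def blister_height(theta_bar, theta0, r, eta):
    """Eq. (eq:blister_height): height of the detached blister."""
    c = np.sqrt(np.cos(theta_bar) - np.cos(theta0))
    return (r * (1.0 - np.cos(theta_bar))
            + (2.0 / eta) * c * (c + np.sqrt(1.0 - np.cos(theta0))))

# Placeholder values (theta_bar < 0 and |theta_bar| < theta0, as required)
l_adh = adherence_length(-0.3, 1.0)
delta = blister_height(-0.3, 0.4, 1.0, 2.0)
```

Both quantities are manifestly positive for admissible inputs, since $\cos\bt>\cos\t0$ whenever $|\bt|<\t0$.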
For later convenience, it is useful to introduce the following dimensionless quantities $$\eps = \frac{\ell - 2\pi r}{2\pi r} \, , \qquad \varrho=\eta r = 1 + \frac{\sqrt{2}\,r}{\ell_{ec}}\, . \label{eq:epsilon_rho}$$ The former measures the excess length with respect to the confining circumference, while the latter determines the relative importance of the adhesion induced by curvature with respect to the adhesion by elasto-capillarity. The region of the $(\eps,\varrho)$-plane in which our solutions are admissible is sketched in Figure \[fig:limits\]. We gather that, for $\varrho < \varrho^{*}$, with $\varrho^{*}\approx 2.916$, the contact with the wall occurs before the self-contact, and vice versa for $\varrho > \varrho^{*}$. We also find that, while the blister height is an increasing function of the total length, the adherence length may exhibit a non-monotonic behavior. Thus, with reference to Figure \[fig:limits\], $\ell^{(adh)}$ decreases with $\eps$ in the region below the dashed line, while it reverses its behaviour in the region above. Obviously, this change of slope occurs only if the adherence strength is suitably large. This is clearly displayed in Figure \[fig:shapes\], where the equilibrium shapes of the elastic strip are plotted for different excess-lengths and for two values of the adherence strength, corresponding to ${\varrho}=1$ (no capillarity) and ${\varrho}=5$. [0.35]{} ![Equilibrium shapes of the elastic strip calculated with (a) zero capillarity (${\varrho}= 1$) and (b) ${\varrho}=5$, obtained using the following values for the excess length: $\eps=0.01$ (thin solid lines), $\eps=0.05$ (dashed lines), $\eps=0.1$ (dotted lines) and $\eps=0.2$ (thick solid lines). In the absence of elasto-capillarity, Figure (a) shows a decrease of the adherence length for increasing excess-length.
By contrast, Figure (b) clearly shows that the adherence length is a non-monotonic function of $\eps$ for moderate capillarity effects (${\varrho}\gtrsim 1.9$).[]{data-label="fig:shapes"}](fig_Shapes_1.pdf "fig:"){width="\textwidth"} [0.35]{} ![Equilibrium shapes of the elastic strip calculated with (a) zero capillarity (${\varrho}= 1$) and (b) ${\varrho}=5$, obtained using the following values for the excess length: $\eps=0.01$ (thin solid lines), $\eps=0.05$ (dashed lines), $\eps=0.1$ (dotted lines) and $\eps=0.2$ (thick solid lines). In the absence of elasto-capillarity, Figure (a) shows a decrease of the adherence length for increasing excess-length. By contrast, Figure (b) clearly shows that the adherence length is a non-monotonic function of $\eps$ for moderate capillarity effects (${\varrho}\gtrsim 1.9$).[]{data-label="fig:shapes"}](fig_Shapes_2.pdf "fig:"){width="\textwidth"} Asymptotic analysis {#sec:asymptotic} ------------------- The main aim of this Section is to provide an approximation to some physically relevant quantities when the length of the strip slightly exceeds that of the confining circumference. To this end, we first derive the approximations of $\bt$ and ${\theta_0}$ by performing an asymptotic expansion of Eqs. and in the limit $\eps\ll 1$, where $\eps$ is defined as in . Subsequently, we apply these results to find the approximations of $\ell^{(adh)}$ (complementary to the blister width), $\delta$ and the internal stresses. When $\eps = 0$, there is only the trivial solution: ${\theta_0}=\bt=0$. Since we do not expect any singular behaviour in the solution of the problem at hand, we look for asymptotic expansions where ${\theta_0}(\eps) = o(1)$ and $\bt(\eps)=o(1)$, as $\eps\!\downarrow\! 0$. However, we do not at present make any specific assumption on the ratio $v(\eps)=\bt(\eps)/{\theta_0}(\eps)$.
Next, we substitute the leading approximations for the elliptic integrals, as given in equations , and consider only the leading approximation to the following function $$\sqrt{\frac{\cos\bt-\cos{\theta_0}}{1-\cos{\theta_0}}} \sim {\sqrt{1-v^2}} \, .$$ After a simple manipulation, equations and can then be recast in the following form, $$\begin{aligned} {\theta_0}\big[ \sqrt{1-v^2} (\pi & - \arcsin v ) + \varrho v \big] = \pi \eps \varrho \, , \label{eq:eq1_leading} \\ \frac{{\theta_0}^3}{12}\big[3 \sqrt{1-v^2} (\pi & - \arcsin v ) + v^3(3-2 \varrho) + v (6\varrho-3) \big] \notag \\ & = -\pi \eps \varrho \left(1 - \frac{{\theta_0}^2}{2} + \frac{{\theta_0}^4}{24} \right) \label{eq:eq2_leading}\end{aligned}$$ which is particularly suited to a dominant balance argument [@Bender:1999]. From Eq., we recognize that the only possible asymptotic balance is , with $a_1$ to be determined. Eq. then implies that the leading order term of $v(\eps)=v_0+o(\eps)$ must satisfy the following equation $$\varrho v_0 + \sqrt{1-v_0^2} (\pi - \arcsin v_0 )=0 . \label{eq:v0}$$ These results show how to extend the asymptotic analysis to higher orders. In particular, we know that: $(i)$ the expansion is regular with an asymptotic sequence given by $(\eps^{1/3},\eps^{2/3}, \ldots , \eps^{k/3})$; $(ii)$ $\bt$ and ${\theta_0}$ have the same asymptotic behavior, i.e., $v_0=O(1)$. Thus, we simplify the elliptic integrals in Eqs.
and as described in detail in \[app:elliptic\] –specifically, we use Eqs., – and then look for solutions of the following form: $$\begin{gathered} {\theta_0}= a_1 \eps^{1/3} + a_2 \eps^{2/3} + a_3 \eps + o(\eps) \, ,\\ \bt = v_0 \big(a_1 \eps^{1/3} + b_2 \eps^{2/3} + b_3 \eps + o(\eps) \big) \, .\end{gathered}$$ \[asy\_adhesion\] The substitution of these expressions into Eqs., yields the following equations for the coefficients $$\begin{aligned} a_1 & = \left[\frac{12 \pi \varrho v_0^{-1} }{ 3-3 \varrho +v_0^2 (2 \varrho - 3)}\right]^{1/3} \\ a_2 & = 0 \\ a_3 & = \big[15 \pi \varrho \left(1-v_0^2\right)^2 \left(20 v_0^2 - 1 \right) + 2 \pi \varrho^2 \left(15-320 v_0^2+497 v_0^4-192 v_0^6\right) \notag\\ & + \pi \varrho^3 \left(-15+310 v_0^2-384 v_0^4+120 v_0^6\right)\big] f(\varrho)^{-1} \\ b_2 & = 0 \\ b_3 & = \big[15 \pi \varrho \left(1-v_0^2\right)^2 \left(21-2 v_0^2\right) - 2 \pi \varrho^2 (1-v_0^2) \left(315-225 v_0^2+8 v_0^4\right) \notag \\ & +\pi \varrho^3 \left(315-420 v_0^2+136 v_0^4\right)\big] f(\varrho)^{-1} \, , \\ f(\varrho) & = 40 v_0 (\varrho + v_0^2 - 1) [3-3 \varrho + v_0^2 (2 \varrho - 3)]^2 \, ,\end{aligned}$$ where $v_0$ is again given by Eq.. Figure \[fig:relErr\] reports the relative errors of our approximations in the range of interest. The asymptotic expansion of $\bt$ immediately yields the behaviour of $\ell^{(adh)}$ as a function of $\eps$ (see Eq.). However, it is slightly more complicated to obtain a good approximation of the blister height, $\delta$. In fact, the Taylor expansion of Eq., once $\bt$ and ${\theta_0}$ are expressed in terms of $\eps$, converges only very slowly to the numerical solution and thus does not provide a good approximation when $\eps$ varies in the range of Figure \[fig:relErr\]. However, the two-term approximation is still accurate, to within 10% relative error (see Figure \[fig:relErrDelta\]), when .
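These coefficients are straightforward to evaluate numerically. As a consistency check, the sketch below (SciPy assumed) solves the equation for $v_0$ by bracketing and verifies that, at $\varrho=1$, the general expression for $a_1$ collapses to the closed curvature-regime form $-(12\pi)^{1/3}/v_0$ quoted in the next subsection; it also probes the elasto-capillarity limit $v_0\approx-\pi/\varrho$ discussed further below:

```python
import numpy as np
from scipy.optimize import brentq

def v0_of(rho):
    # Root of Eq. (eq:v0): rho*v0 + sqrt(1 - v0^2) * (pi - arcsin v0) = 0
    return brentq(lambda v: rho * v + np.sqrt(1.0 - v**2) * (np.pi - np.arcsin(v)),
                  -0.999, -1e-6)

def a1_of(rho, v0):
    # General expression for the leading coefficient a_1
    return (12.0 * np.pi * rho
            / (v0 * (3.0 - 3.0 * rho + v0**2 * (2.0 * rho - 3.0))))**(1.0 / 3.0)

v0 = v0_of(1.0)                               # curvature regime: v0 ~ -0.9761
a1 = a1_of(1.0, v0)
a1_closed = -(12.0 * np.pi)**(1.0 / 3.0) / v0  # closed form at rho = 1

v0_large = v0_of(50.0)                        # should approach -pi/rho for rho >> 1
```

At $\varrho=1$ the denominator of the general $a_1$ reduces to $-v_0^3$, which is why the two expressions agree identically rather than merely approximately.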
Although small, this value of the excess-length already accounts for large deformations suitable for direct measurement, since the blister height is of the order of magnitude of the container radius (see Figure \[fig:shapes\]). Thus, the two-term approximation can be used to compare the theoretical predictions with experiments. [0.47]{} ![[Relative-error contour-lines of the two-term approximations to the blister height (a) and the adherence length (b). The grey shaded region identifies the limit of validity of our approximation as we neglect the contact with the container at the bottom and the self-contact (see Figure \[fig:limits\]).]{}[]{data-label="fig:relErr"}](fig_NumAsy_contour_Ladh.pdf "fig:"){width="\textwidth"} [0.47]{} ![[Relative-error contour-lines of the two-term approximations to the blister height (a) and the adherence length (b). The grey shaded region identifies the limit of validity of our approximation as we neglect the contact with the container at the bottom and the self-contact (see Figure \[fig:limits\]).]{}[]{data-label="fig:relErr"}](fig_NumAsy_contour_delta.pdf "fig:"){width="\textwidth"} We now turn to the discussion of two important limiting cases of adherence: (i) the pure-curvature regime $(\varrho=1)$ and (ii) the elasto-capillarity regime $(\varrho \gg 1)$. ### Curvature regime ($\varrho =1$) {#sec:subasymptotic_I} In this regime the only source of adherence is due to the curvature of the confining wall. When $\varrho=1$, equation is solved by $v_0={v_0^{_{(1)}}}$, with ${v_0^{_{(1)}}}\approx -0.9761$. The expansion coefficients reduce to $$\begin{aligned} a_{1}^{_{(1)}} & = -\frac{(12 \pi)^\frac{1}{3}}{{v_0^{_{(1)}}}}, \label{eq:expa0} \\ a_{3}^{_{(1)}} & = \frac{\pi }{2 {v_0^{_{(1)}}}}\left(\frac{9}{5} - \frac{1}{4 ({v_0^{_{(1)}}})^2} \right), \\ b_{3}^{_{(1)}} & = \frac{\pi}{4 {v_0^{_{(1)}}}}\left(-\frac{7}{5} + \frac{9}{2 ({v_0^{_{(1)}}})^2} \right) \, .\end{aligned}$$ ![Plot of $\bt$ and $\t0$ versus $\eps$ for $\varrho=1$.
The thick solid line represents the numerical solution. The one-term approximation (dashed line) agrees with the numerical solution only for very small values of $\eps$. The two-term asymptotic expression (solid thin line) gives a much better agreement on a wider range of $\eps$.[]{data-label="fig:theta"}](fig_NumAsy_theta.pdf){width="50.00000%"} [0.47]{} ![Plot of (a) the adherence length, $\ell^{(adh)}/r$, and (b) the blister height, $\delta/r$, as functions of $\eps$, when $\varrho=1$. The solid thick lines are the numerical solutions, the dashed lines represent the one-term approximation while the solid thin lines show the two-term approximation. This latter approximation seems to describe the adherence length reasonably well, but it does not fully capture the behaviour of the blister height.[]{data-label="fig:dl"}](fig_NumAsy_Ladh.pdf "fig:"){width="\textwidth"} [0.47]{} ![Plot of (a) the adherence length, $\ell^{(adh)}/r$, and (b) the blister height, $\delta/r$, as functions of $\eps$, when $\varrho=1$. The solid thick lines are the numerical solutions, the dashed lines represent the one-term approximation while the solid thin lines show the two-term approximation. This latter approximation seems to describe the adherence length reasonably well, but it does not fully capture the behaviour of the blister height.[]{data-label="fig:dl"}](fig_NumAsy_delta.pdf "fig:"){width="\textwidth"} Figure \[fig:theta\] sketches the angles $\bt$ and $\t0$ as functions of $\eps$. The comparison with the numerical approximation clearly shows that the two-term approximation is needed in order to better capture the behaviour of the solution in the whole range of interest. This approximation is also sufficient to describe the adherence length, as shown in Figure \[fig:NumAsy\_Ladh\]. 
However, as already discussed in §\[sec:asymptotic\], Figure \[fig:NumAsy\_delta\] clearly shows that the blister height is accurately represented by the two-term approximation only in a limited range of $\eps$. ### Elasto-capillarity regime ($\varrho \gg 1$) {#sec:subasymptotic_II} Whenever the adhesive potential is dominant, we have $\ell_{ec} \ll r$ and, hence, $\varrho \rightarrow \infty$. In this case, equation yields $${v_0^{_{(\infty)}}}(\varrho) \approx -\frac{\pi}{\varrho},$$ and consequently $$a_{1}^{_{(\infty)}}(\varrho) \approx 2^\frac{2}{3} \varrho^\frac{1}{3}, \qquad a_{3}^{_{(\infty)}}(\varrho) \approx \frac{1}{24} \varrho, \qquad b_{3}^{_{(\infty)}}(\varrho) \approx -\frac{7}{8}\varrho.$$ It is worth comparing the length of the free part of the strip and the blister height with the analogous quantities in the planar case as given by formulas (9) and (10) of [@Wagner:2013]. To this end, we observe that the [*compression $\Delta l$*]{} can be expressed in terms of $\eps$ as $\Delta l := 2 \pi r \eps$. By using $\varrho \approx \sqrt 2 r/\ell_{ec}$, the length of the non-adhering portion (measured in units of the elasto-capillarity length) is $$-\frac{2 r}{\ell_{ec}} \sin \bt_{\infty} \approx -2 \frac{r}{\ell_{ec}} \bt_\infty = 2 \pi^{2/3} \left(\frac{\Delta l }{\ell_{ec}}\right)^\frac{1}{3} - \frac{7}{8}\frac{\Delta l }{\ell_{ec}} .$$ Similarly, we obtain the expression for the blister height $$\frac{\delta_\infty}{\ell_{ec}} \approx 2 \sqrt 2 \left(\frac{\Delta l}{ \pi \ell_{ec}}\right)^\frac{2}{3} - \frac{1}{2\sqrt2}\left(\frac{\Delta l}{ \pi \ell_{ec}}\right)^\frac{4}{3} .$$ Not surprisingly, these results coincide with those reported in [@Wagner:2013]. Adherence by curvature with unilateral contact ============================================== Let us now suppose that the container can be modelled as a unilateral and frictionless contact ($w=0$). This means that the wall can exert only contact forces directed along the inward normal direction.
We discuss this problem from the mechanical point of view, within the theory of the Euler-Bernoulli beam. Accordingly, at equilibrium the internal force $\Tv(s)$ and the internal torque $\Mv(s)$ obey the following equilibrium equations $$\frac{\d \Tv(s)}{\d s} + \fv(s) = {\bf 0}, \qquad \frac{\d \Mv(s)}{\d s} + \tv(s) \times \Tv(s) + \mv(s) = {\bf 0}, \label{uni}$$ where $\fv$ and $\mv$ are the external forces and torques per unit length, respectively. These equations must hold in any section $s\in [s_1,s_2]$ of the curve. Since we assume the effects of gravity to be negligible, the only source of external distributed forces is the contact force exerted by the container, while $\mv={\bf 0}$. In the presence of a concentrated force $\Fv$ and torque ${ \mbox{ \hspace{-1mm}\boldmath $\Gamma$ \hspace{-1mm}}}$ at $s = s_*$ the following local balances hold $$\lim_{s \to s_*^+} \Tv(s) - \lim_{s \to s_*^-} \Tv(s) = \Fv, \qquad \lim_{s \to s_*^+} \Mv(s) - \lim_{s \to s_*^-} \Mv(s) = { \mbox{ \hspace{-1mm}\boldmath $\Gamma$ \hspace{-1mm}}}. \label{salto}$$ This system of equations is completed by the Euler constitutive equation, which, within the hypothesis of plane deformations, states that the internal torque is proportional to the difference between the curvature and the intrinsic curvature $c_0$: $$\Mv(s) = \kappa[\theta_s(s) - c_0]\ev_z. \label{eulero}$$\ The free part of the beam is not subject to any external distributed load. Therefore, $_1$ shows that $\Tv$ must be constant throughout $s\in[0,\bs)$. The equilibrium equation of the free part is thus provided by the balance of torque $_2$ which reads $$\kappa \theta_{ss} - T_x \sin \theta + T_y \cos \theta = 0, \label{eq:equilibrium_Euler}$$ where $T_x$ and $T_y$ are the Cartesian components of the internal force. We recall that, by our convention, the tangent unit vector to the beam is $\tv(s) = \cos \theta(s) \ev_x + \sin \theta(s)\ev_y$.
At first sight, equation differs from the equilibrium equation as it contains a term in $\cos\theta$. However, we have assumed that $\theta(s)$ is odd (and therefore also $\theta_{ss}$ is odd). It is then easy to show that $T_y$ must vanish and the torque equation reduces to . As a further consequence of this symmetry, we observe explicitly that from $T_y=0$ it also follows that the constant internal stress, $\Tv$, is purely horizontal: $\Tv=T_x \, \ev_x$. ![Internal and external forces acting on the half-beam in the case of unilateral and frictionless contact. The forces $T_x$ and $T_t$, as given in equations and respectively, are negative, so that the beam is under compression for any admissible configuration. The concentrated force ${ \mbox{ \hspace{-1mm}\boldmath $\psi$ \hspace{-1mm}}}$ is necessary to balance the vertical components of the internal forces at $s=\bs$.](figure2.pdf){width="60.00000%"} \ When the beam is in contact with the external container, the balance of forces requires the introduction of the contact forces, whose density per unit length will be denoted by ${ \mbox{ \hspace{-1mm}\boldmath $\phi$ \hspace{-1mm}}}(s)$. Since we model the container as an ideal unilateral frictionless constraint, ${ \mbox{ \hspace{-1mm}\boldmath $\phi$ \hspace{-1mm}}}$ is directed along the inward normal to the surface; hence, we assume ${ \mbox{ \hspace{-1mm}\boldmath $\phi$ \hspace{-1mm}}}(s) = -\phi(s) \nv(s)$, with $\phi(s) \ge0$, where $\nv(s) = -\sin \theta(s) \ev_x + \cos \theta(s) \ev_y$. Furthermore, the curvature of the beam in contact with the container is constant, and Eq. then implies that the internal moment $\Mv$ is also constant. Thus, from $_2$, we obtain that the normal component of the internal force, $T_n$, must vanish for $s\in(\bs, \ell/2]$. On the other hand, equation $_1$, projected along $\nv$ and $\tv$, gives $$\phi = -\frac{T_t}{r} , \qquad T_t = \text{constant} \label{fn}$$ where $T_t$ is the axial component of $\Tv$.
This also implies that necessarily $T_t\le0$.\ More subtle is the discussion of the balances at the detachment point. To put the problem in the right perspective, we isolate a small portion of the beam around the detachment point. Since the internal force in the adherent part possesses a non-zero vertical component, while $T_y=0$ in the free part, the balance of forces requires the introduction of a concentrated reactive force ${ \mbox{ \hspace{-1mm}\boldmath $\psi$ \hspace{-1mm}}}= -\psi \nv(\bs)$. This can only be given by the container, and therefore it is necessarily directed along the inward normal $(\psi \ge 0)$. More precisely, the balance in the $y$-direction is $$T_t \sin \bt = \psi \cos \bt. \label{tt}$$ Thus, $\psi$ is non-negative (and the contact is truly unilateral) only for $\bt \in [-\pi/2,0]$. On the other hand, the continuity of the axial internal force yields $T_t \cos \bt = T_x - \psi \sin \bt $ whence $$T_t = T_x \cos \bt. \label{tx}$$ The restriction $\bt \in [-\pi/2,0]$ implies that $T_x$ is non-positive, in agreement with the fact that the entire beam is under compression. Finally, we remark that (when $w=0$) the transversality condition implies the continuity of the curvature at $\bs$ and, thus, the continuity of the internal torque.\ It is now of special interest to study the asymptotic behaviour of the reactive forces $\phi$ and $\psi$. Equations , and yield $\phi = - T_x \cos \bt / r$ and $\psi = T_x \sin \bt$. Recalling that $T_x = -\kappa \tau$, we gather that both $\phi$ and $\psi$ diverge as $\eps$ goes to zero, because $\tau$ diverges.
In fact, by using together with the asymptotic expansions and , to leading order we find $$\tau \sim \frac{1}{r^2 (1-({v_0^{_{(1)}}})^2)\t0^2} \sim \frac{({v_0^{_{(1)}}})^2}{r^2 (1-({v_0^{_{(1)}}})^2)(12 \pi \eps)^{2/3}} .$$ Albeit unexpected, this result is in agreement with the experimental results reported in [@Boue:2006], where it is shown that the mean pressure exerted by the strip on the container becomes very large when $\eps$ tends to zero. Concluding remarks ================== We have studied the morphology of an elastic closed inextensible strip of length $\ell$, confined by a cylinder of radius $r$, where $\ell > 2 \pi r$. The excess length forces the beam to detach from the cylinder, leading to two distinct parts: an adhering portion and a free part (or ‘blister’). These regions are governed by different equations and must agree at the detachment point, whose position is part of the problem. Two different mechanisms concur to promote the adhesion. The first is purely geometric and is the curvature of the container. The second has a physical origin and it is given by the elasto-capillarity interaction of the strip with the container. At human length scales the former usually dominates. However, at small scales the elasto-capillarity often plays a significant role in many phenomena [@Liu:2012]. We have presented numerical results for the equilibrium shape when the strip length $\ell$ and the adhesion strength are given, allowing for the possibility of large deformations. At fixed $\ell$, the solution depends upon a dimensionless parameter $\varrho \in [1,\infty)$ that measures the relevance of the adhesion due to the curvature with respect to that due to the adhesive potential. The geometrical aspects dominate whenever $\varrho$ approaches one. On the contrary, for very large $\varrho$, while the elasto-capillarity length remains finite, we match the results that apply to the formation of delamination blisters on a rigid flat substrate [@Wagner:2013]. 
In addition to the numerical results, we have provided the asymptotic expansions for two quantities related to the blister shape: the length of the adhering region $\ell^{(adh)}$ and the blister height $\delta$. The small parameter used in these expansions is the normalized excess length $\eps := (\ell - 2 \pi r)/(2\pi r)$. The two-term approximation is able to capture the behaviour of $\ell^{(adh)}$ up to the points of self-contact or contact of the blister with the delimiting wall. By contrast, the same approximation predicts the blister height accurately only in a smaller range of $\eps$. In any case, the asymptotic analysis yields simple laws that an experimentalist can possibly use to determine some constitutive parameters by inverse analysis. Finally, we have considered the case where the delimiting wall is modelled as an ideal frictionless unilateral contact and hence determined the external actions that the surface exerts on the strip. These consist of a distributed force (per unit length) and, unexpectedly, also of a concentrated force acting at the detachment point. The latter makes the derivative of the curvature discontinuous at the detachment point and is also responsible for the discontinuity of the internal shear force. We also find that when the detachment angle, $\bt$, reaches $\pi/2$, the contact force exerted by the container changes sign, thus violating the unilateral constraint. This effect places an upper limit on the admissible values of $\eps$. More precisely, we find that this effect appears for $\eps \approx 0.228$, a value below that attained for the blister contact with the container. Furthermore, our asymptotic analysis has shown that the internal force and, consequently, the external actions diverge when $\eps$ tends to zero. This agrees with the experimental results reported in [@Boue:2006]. In our opinion, the origin of this singularity could be a consequence of the assumed inextensibility.
In a more realistic model, one should relax the inextensibility constraint in favor of a penalization energy term related to compression/dilatation. Thus, the strip may initially undergo a slight compression and then, beyond a compression threshold, form a blister. Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank A. Goriely, L. De Lorenzis and C. Morosi for their fruitful comments and helpful discussions. RDP is grateful to the Engineering and Physical Sciences Research Council for funding this work via grant EP/H050779/1. GN and SST acknowledge support from the Italian Ministry of University and Research through the Grant No. 200959L72B 004 ‘Mathematics and Mechanics of Biological Assemblies and Soft Tissues.’ J. Williams, Energy release rates for the peeling of flexible membranes and the analysis of blister tests, International Journal of Fracture 87 (3) (1997) 265–288. T. J. W. Wagner, D. Vella, The ‘sticky elastica’: delamination blisters beyond small deformations, Soft Matter 9 (4) (2013) 1025–1030. R. Rosso, E. G. Virga, Adhesion by curvature of lipid tubules, Continuum Mech. Thermodyn. 10 (6) (1998) 359–367. A. Goriely, S. Neukirch, Mechanics of climbing and attachment in twining plants, Phys. Rev. Lett. 97 (18) (2006) 184302. J.-S. Chen, C.-W. Li, Planar elastica inside a curved tube with clearance, International Journal of Solids and Structures 44 (18) (2007) 6173–6186. G. Domokos, W. Fraser, I. Szeberényi, Symmetry-breaking bifurcations of the uplifted elastic strip, Physica D: Nonlinear Phenomena 185 (2) (2003) 67–77. L. Boué, M. Adda-Bedia, A. Boudaoud, D. Cassani, Y. Couder, A. Eddi, M. Trejo, Spiral patterns in the packing of flexible structures, Phys. Rev. Lett. 97 (16) (2006) 166104. J.-L. Liu, X.-Q. Feng, On elastocapillarity: A review, Acta Mechanica Sinica 28 (4) (2012) 928–940. O. Kahraman, N. Stoop, M. M.
Müller, Morphogenesis of membrane invaginations in spherical confinement, EPL 97 (6) (2012) 68008. P. Patricio, M. Adda-Bedia, M. Ben Amar, An elastica problem: instabilities of an elastic arch, Physica D: Nonlinear Phenomena 124 (1) (1998) 285–295. G. Domokos, P. Holmes, B. Royce, Constrained Euler buckling, J Nonlinear Sci 7 (3) (1997) 281–314. I. M. Gelfand, S. V. Fomin, Calculus of Variations, Prentice Hall, 1963. M. Abramowitz, I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Vol. 55, 1970. C. M. Bender, S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers, Springer, 1999. Asymptotic approximations for elliptic integrals {#app:elliptic} ================================================ A very simple approximation of $F(x|m)$ for small $x$ can be obtained as follows $$\begin{aligned} \F(x|m) & = \int_{0}^{x}\frac{dt}{\sqrt{1-m \sin^2 t}} \approx \int_{0}^{x}\frac{dt}{\sqrt{1-m t^2}} = \frac{1}{\sqrt{m}} \arcsin(\sqrt{m}x) \, , \\ \E(x|m) & = \int_{0}^{x}\sqrt{1-m \sin^2 t} \, dt \approx \frac{1}{2}\Big[x\sqrt{1-m x^2} +\frac{1}{\sqrt{m}}\arcsin(\sqrt{m}x)\Big] \, .\end{aligned}$$ Therefore, the leading approximations of the elliptic integrals contained in Eqs., are found to be $$\begin{aligned} 2\F({q_{0}}) - \F({\bar{q}}) & \approx \frac{{\theta_0}}{2} \left(\pi - \arcsin({\bt/{\theta_0}}) \right)\, , \label{eq:Fleading} \\ 2\E({q_{0}}) - \E({\bar{q}}) & \approx \frac{{\theta_0}}{4} \Big(\pi - \arcsin({\bt/{\theta_0}}) - {\bt/{\theta_0}}\sqrt{1-({\bt/{\theta_0}})^2}\Big) \, . \label{eq:Eleading}\end{aligned}$$ Although quite crude, these approximations are surprisingly good, as some numerical experiments readily show. However, to be on the safe side, we look for more refined approximations. To this end, we adapt the strategy outlined in the electronic supplementary information of Ref. [@Wagner:2013].
We make the substitution $u = \sqrt{m} \, \sin t$, $du = \sqrt{m}\, \cos t \, dt$, so that $$dt = \frac{1}{\sqrt{m}} \frac{du}{\sqrt{1-\frac{u^2}{m}}} \, .$$ We then substitute $m = \csc^2 {\theta_0}/2$, and expand for ${\theta_0}\ll 1$, $$dt = \sin\frac{{\theta_0}}{2} \Big(1+\frac{1}{2}u^2 \sin^2\frac{{\theta_0}}{2} + O({\theta_0}^4) \Big) du \, .$$ The incomplete elliptic integrals are then approximated by $$\begin{aligned} \F({\bar{q}}) & \approx \int_{0}^{\frac{\sin \bt/2}{\sin {\theta_0}/2}} \sin\frac{{\theta_0}}{2} \Big(1+\frac{1}{2}u^2 \sin^2\frac{{\theta_0}}{2} \Big) \frac{du}{\sqrt{1-u^2}} \, , \label{eq:F_O3} \\ \E({\bar{q}}) & \approx \int_{0}^{\frac{\sin \bt/2}{\sin {\theta_0}/2}} \sin\frac{{\theta_0}}{2} \Big(1+\frac{1}{2}u^2 \sin^2\frac{{\theta_0}}{2} \Big) \sqrt{1-u^2} \, du \, . \label{eq:E_O3}\end{aligned}$$ These integrals can be computed exactly. However, we are only interested in their approximation for ${\theta_0}\ll 1$ and $\bt \ll 1$. After some calculations, which we do not report for brevity, we obtain $$\begin{aligned} 2\F({q_{0}}) - \F({\bar{q}}) & \approx \Big(\frac{{\theta_0}}{2} + \frac{{\theta_0}^3}{96}\Big)\Big(\pi - \arcsin ({\bt/{\theta_0}}) \Big) + \frac{{\theta_0}^2 \, \bt}{96} \sqrt{1-({\bt/{\theta_0}})^2} \, , \label{eq:F_O4}\\ 2\E({q_{0}}) - \E({\bar{q}}) & \approx \Big(\frac{{\theta_0}}{4} - \frac{{\theta_0}^3}{384}\Big)\Big(\pi - \arcsin ({\bt/{\theta_0}}) - {\bt/{\theta_0}}\sqrt{1-({\bt/{\theta_0}})^2}\Big) \notag \\ & - \frac{{\theta_0}^2 \, \bt}{192} \big(1-({\bt/{\theta_0}})^2 \big)^{3/2}\, . \label{eq:E_O4}\end{aligned}$$ [^1]: `gaetano.napoli@unisalento.it`
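The accuracy claims above are easy to probe numerically. A minimal sketch (SciPy assumed) compares the crude small-$x$ approximations from the beginning of this appendix against direct quadrature:

```python
import numpy as np
from scipy.integrate import quad

def F_num(x, m):
    # F(x | m) by direct quadrature of its defining integral
    return quad(lambda t: 1.0 / np.sqrt(1.0 - m * np.sin(t)**2), 0.0, x)[0]

def E_num(x, m):
    # E(x | m) by direct quadrature of its defining integral
    return quad(lambda t: np.sqrt(1.0 - m * np.sin(t)**2), 0.0, x)[0]

x, m = 0.3, 0.9                       # small amplitude; requires m * x^2 < 1
sm = np.sqrt(m)
F_approx = np.arcsin(sm * x) / sm
E_approx = 0.5 * (x * np.sqrt(1.0 - m * x**2) + np.arcsin(sm * x) / sm)

F_err = abs(F_approx / F_num(x, m) - 1.0)
E_err = abs(E_approx / E_num(x, m) - 1.0)
```

Even at the moderate amplitude chosen here, both relative errors are already well below one per cent, consistent with the remark that the crude approximations are surprisingly good.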
--- abstract: 'One of the main achievements in modern cosmology is the so-called ‘unified model’, which successfully describes most classes of active galactic nuclei (AGN) within a single physical scheme. However, there is a particular class of radio-luminous AGN that presently cannot be explained within this framework – the ‘low-excitation’ radio AGN (LERAGN). Recently, a scenario has been put forward which predicts that LERAGN, and their regular ‘high-excitation’ radio AGN (HERAGN) counterparts, represent different (red sequence vs. green valley) phases of galaxy evolution. These different evolutionary states are also expected to be reflected in their host galaxy properties, in particular their cold gas content. To test this, here we present CO(1$\rightarrow$0) observations toward a sample of 11 of these systems, conducted with CARMA. Combining our observations with literature data, we derive molecular gas masses (or upper limits) for a complete, representative sample of 21 $z<0.1$ radio AGN. Our results show that HERAGN on average have a factor of $\sim7$ higher gas masses than LERAGN. We also infer younger stellar ages, lower stellar, halo, and central supermassive black hole masses, as well as higher black hole accretion efficiencies in HERAGN relative to LERAGN. These findings support the idea that high- and low-excitation radio AGN form two physically distinct populations of galaxies that reflect different stages of massive galaxy build-up.' author: - 'V. Smolčić and D. A. Riechers' title: | The Molecular Gas Content of $z<0.1$ Radio Galaxies:\ Linking the AGN Accretion Mode to Host Galaxy Properties --- Introduction {#sec:intro} ============ Over the past two decades a standard model of AGN has emerged. In this ‘unified’ model, efficient disk accretion of cold matter onto the central supermassive black hole (BH) provides the radiation field that photoionizes emission-line regions.
However, there is a certain fraction of AGN identified by radio observations that poses a challenge to the unified model, the so-called low-excitation radio AGN (hereafter: LERAGN). The main difference between [*high-excitation radio AGN (HERAGN)*]{} and these [*LERAGN*]{} is that the latter do not exhibit strong emission lines in their optical spectra (Jackson & Rawlings 1997; Evans et al. 2006). Recently, Hardcastle et al. (2006) have suggested that high- and low-excitation radio AGN may represent a principal separator between populations fundamentally different in their black hole accretion mechanisms (see also Evans et al. 2006; Allen et al. 2006; Kewley et al. 2006). They developed a model in which central supermassive black holes of HERAGN accrete in a standard (radiatively efficient) way from the cold phase of the intragalactic medium (IGM), while those of LERAGN are powered in a radiatively inefficient manner by Bondi accretion of the hot IGM.  (2009) showed that low- and high-excitation radio AGN exhibit not only systematic differences in their black hole masses and accretion rate properties, but also in their host galaxy properties, such as stellar masses and stellar populations. This is consistent with these two classes of radio AGN dividing in a stellar mass vs. color plane in such a way that LERAGN occupy the red sequence and HERAGN inhabit the so-called “green valley”, a sparsely populated region between the blue cloud and the red sequence (2009). The stellar mass vs. color plane can be interpreted as a time-sequence for galaxy evolution. Galaxies are thought to evolve from an initial star-formation-dominated state with blue optical colors into the most massive “red-and-dead” galaxies through a transition phase reflected in the green valley (Bell et al. 2004a, 2004b; Borch et al. 2006; Faber et al. 2007; Brown et al. 2007).
In recent years it has been suggested that radio outflows from AGN likely play a crucial role in this massive galaxy build-up [@croton06; @bower06; @sijacki06; @sijacki07; @fanidakis10]. In this context the radio-AGN feedback (often called the “radio” or “maintenance” mode), which is thought to limit stellar mass growth in already massive galaxies, is expected to occur only in LERAGN (Smolčić 2009). Furthermore, it has been shown that the cosmic evolution of the space density of various types of radio AGN is significantly different (e.g. Peacock et al. 1985; Willott et al. 2001; Smolčić et al. 2009). Based on a study of the evolution of the radio AGN luminosity function out to $z=1.3$, Smolčić et al. (2009) have shown that the comoving space density of low-luminosity radio AGN (predominantly LERAGN) only modestly declines since $z=1.3$, while that of powerful AGN (predominantly HERAGN) dramatically diminishes over the same cosmic time interval. This suggests that LERAGN and HERAGN not only represent physically distinct galaxy populations, but also populations in different stages of massive galaxy build-up. If this is the case, the molecular gas masses and fractions in low- and high-excitation radio AGN are expected to directly reflect this trend. We here investigate this idea by observing CO($J$=1$\to$0) emission of a carefully selected, representative sample of nearby ($z<0.1$) HE- and LERAGN with CARMA. We adopt a $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.3$, $\Omega_\Lambda=0.7$. Data {#sec:data} ==== Sample ------ We here utilize a sample of 21 Type 2 AGN at $z<0.1$ that have been observed in X-rays (with Chandra or XMM-Newton) by @evans06. 18 out of the 21 AGN have been drawn from the 3CRR survey, adding 3 more sources (3C 403, 3C 405 and Cen A) for completeness (see @evans06 for details). The sample properties are summarized in  (see also Tab. 1 in Evans et al. 2006). We separate our AGN into LERAGN (i.e. 
Seyferts) using standard diagnostic tools based on optical emission line flux ratios where possible (see  and ; @kauffmann03a [@kewley01; @kewley06]; @smo09; @buttiglione09). For this we make use of the emission line fluxes extracted from high resolution spectroscopy of 3CR sources presented in @buttiglione09 [@buttiglione10; @buttiglione11 see also Tab. 1 in @smo09]. In cases where the relevant emission line fluxes are not available, we make use of the galaxy type information available in the NASA Extragalactic Database[^1] to separate the sources into LE- and HE-RAGN. The sample contains 9 HERAGN and 12 LERAGN.

----------  ---------  ----------  ---------------------  --------------------------------------  ------------  ----------------------  ----------------------  -----------------------------
name        redshift   type        $L_\mathrm{178-MHz}$   $L_\mathrm{0.5-10keV}/L_\mathrm{EDD}$   stellar age   $M_*$                   $M_\mathrm{BH}$         $M_{H_2}$
                                   \[W/Hz/srad\]                                                   \[Gyr\]       \[$\mathrm{M_\odot}$\]  \[$\mathrm{M_\odot}$\]  \[$\mathrm{M_\odot}$\]
3C 31       0.017      Seyfert     $9.08\times10^{23}$    $<2.0\times10^{-4}$                     3             $2.4\times10^{11}$      $7.8\times10^7$         $(5.1\pm0.4)\times10^8$
3C 33       0.060      Seyfert     $3.95\times10^{25}$    $1.6\times10^{-3}$                      5             $1.3\times10^{11}$      $4.8\times10^8$         $(3.75\pm1.5)\times10^8$
3C 98       0.030      Seyfert     $8.75\times10^{24}$    $3.7\times10^{-4}$                      2             $7.9\times10^{10}$      $1.7\times10^8$         $<7.8\times10^7$
3C 321      0.097      Seyfert     $2.6\times10^{25}$     –                                       13            $7.0\times10^{11}$      –                       $(3.3\pm0.6)\times10^9$
3C 403      0.059      Seyfert     $3.5\times10^{25}$     $3.3\times10^{-3}$                      5             $2.4\times10^{11}$      $2.6\times10^8$         $(6.6\pm1.6)\times10^8$
3C 449      0.017      Seyfert     $6.51\times10^{23}$    $<7.0\times10^{-3}$                     3             $2.4\times10^{10}$      $5.1\times10^7$         $(1.1\pm0.2)\times10^8$
3C 452      0.081      Seyfert     $7.54\times10^{25}$    $3.3\times10^{-3}$                      13            $4.5\times10^{11}$      $3.5\times10^8$         $8.1\times10^{8,c}$
Cen A       0.0008     Sey 2$^a$   $5.4\times10^{23}$     $3.0\times10^{-5}$                      –             –                       $2.0\times10^8$         $1.4\times10^8$
3C 405      0.0565     Sey 2$^a$   $4.90\times10^{27}$    $8.5\times10^{-4}$                      –             –                       $2.5\times10^9$         $<3.3\times10^8$
----------  ---------  ----------  ---------------------  --------------------------------------  ------------  ----------------------  ----------------------  -----------------------------
3C 66B      0.022      LINER       $2.21\times10^{24}$    $<4.4\times10^{-5}$                     3             $3.0\times10^{10}$      $6.9\times10^8$         $<7.8\times10^7$
3C 84       0.018      LINER       $3.74\times10^{24}$    $<9.2\times10^{-6}$                     –             –                       $1.9\times10^9$         $(2.14\pm0.02)\times10^9$
3C 264      0.022      LINER       $2.20\times10^{24}$    $<1.8\times10^{-5}$                     13            $4.4\times10^{11}$      $7.1\times10^8$         $(9.3\pm1.8)\times10^7$
3C 272.1    0.004      LINER       $3.1\times10^{22}$     $<8.5\times10^{-7}$                     13            $2.9\times10^{11}$      $1.5\times10^9$         $(9.3\pm3.2)\times10^{5,c}$
3C 274      0.004      LINER       $3.4\times10^{24}$     $<4.3\times10^{-7}$                     2             $1.5\times10^{11}$      $2.4\times10^9$         $(1.65\pm0.15)\times10^7$
3C 296      0.025      LINER       $1.43\times10^{24}$    $<1.2\times10^{-5}$                     13            $1.1\times10^{12}$      $1.3\times10^9$         $<5.7\times10^7$
3C 338      0.032      LINER       $8.63\times10^{24}$    $<2.0\times10^{-5}$                     13            $1.3\times10^{12}$      $1.7\times10^9$         $3\times10^7$
3C 388      0.091      LINER       $4.29\times10^{25}$    –                                       9             –                       –                       $<1.2\times10^{9,b}$
3C 465      0.030      LINER       $6.41\times10^{24}$    $<2.2\times10^{-4}$                     13            $1.0\times10^{12}$      $2.1\times10^9$         $<1.95\times10^7$
NGC 1265    0.027      LERG$^a$    $3.39\times10^{24}$    $<6.8\times10^{-6}$                     13            $1.5\times10^{10}$      $1.0\times10^9$         $<5.7\times10^7$
NGC 6109    0.0296     LERG$^a$    $1.86\times10^{24}$    –                                       –             –                       –                       $(1.3\pm0.3)\times10^8$
NGC 6251    0.0244     LERG$^a$    $1.20\times10^{24}$    $<2.0\times10^{-4}$                     –             –                       $6.0\times10^8$         $<7.5\times10^7$
----------  ---------  ----------  ---------------------  --------------------------------------  ------------  ----------------------  ----------------------  -----------------------------

$^a$ Based on NED; LERG abbreviates “low-excitation radio galaxy”\
$^b$ Adopted from Saripalli et al. (2007), and scaled to the cosmology used here.\
$^c$ Not considered in our statistical analysis (see  for details).\
The first, second and third columns denote the source, its redshift and AGN type. The latter was inferred either via optical diagnostic diagrams (see ), or adopted from NED. The fourth column shows the radio continuum luminosity, adopted from Evans et al. (2006). 
The fifth column, also adopted from @evans06, represents the accretion efficiency (in Eddington units) derived from X-ray observations of the cores of the AGN (the upper limits are obtained assuming $N_H=10^{24}$ atoms cm$^{-2}$; see Evans et al. 2006 for details). The sixth column shows the stellar age of the source based on fitting stellar population synthesis models to the optical spectra of the sources (encompassing the $H\alpha$ portion of the spectrum; see Tab. 5 in Buttiglione et al. 2009). The seventh column shows the stellar mass derived using the 2MASS K-band luminosity and stellar age (where available) following Drory et al. (2004; see text for details). The second to last column shows the black hole mass, adopted from Evans et al. (2006), and the last column reports the molecular gas mass obtained from CO(1$\rightarrow$0) observations (see ) using a conversion factor of $\alpha=M_{H_2}/L'_{CO}=1.5~\mathrm{M_\odot}$ (K km s$^{-1}$ pc$^2$)$^{-1}$, and assuming $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.3$, $\Omega_\Lambda=0.7$. The horizontal lines separate Seyferts (i.e. HERAGN; top) and LINERs (i.e. LERAGN; bottom). ![image](f1.eps) CO(1$\rightarrow$0) observations and data reduction --------------------------------------------------- At the time of the observations, 8 out of the 21 Type 2 AGN in our sample had already been detected in CO(1$\rightarrow$0). Thus, we observed the CO(1$\rightarrow$0) transition line toward the remaining 13 AGN using the CARMA (Combined Array for Research in Millimeter-wave Astronomy) Interferometer. Observations were performed during Summer/2009 and Spring/2010 for about 4 to 15 hours per source ( ). All targets were observed under good to excellent weather conditions at 3 mm with 15 antennas (corresponding to 105 baselines) in the two most compact configurations, E and D (2009 and 2010, respectively). Data on two objects (3C 388 and 3C 405) had to be discarded due to technical problems, and are excluded in the following. 
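The tabulated gas masses and upper limits can be reproduced from the observed line quantities. Below is a minimal Python sketch (function names are ours), assuming the standard CO line-luminosity expression of Solomon & Vanden Bout (2005, cited in this paper), the cosmology adopted here, and the common $3\sigma$ limit prescription $I_{\rm lim}=3\,\sigma_{\rm ch}\sqrt{\Delta v_{\rm FWHM}\,\delta v_{\rm ch}}$ with the 31.25 MHz channels described below; the authors' exact procedure may differ in detail.

```python
import math

# Illustrative reproduction of the CO luminosity / gas-mass conversion.
C_KMS = 299792.458                      # speed of light [km/s]
H0, OM, OL = 70.0, 0.3, 0.7             # cosmology adopted in the paper

def lum_dist_mpc(z, steps=10000):
    """Luminosity distance [Mpc] in flat LambdaCDM (midpoint-rule integral)."""
    dz = z / steps
    dc = sum(dz / math.sqrt(OM * (1.0 + (i + 0.5) * dz) ** 3 + OL)
             for i in range(steps))
    return (1.0 + z) * (C_KMS / H0) * dc

def lco_prime(i_co, nu_obs_ghz, z):
    """CO line luminosity [K km/s pc^2] (Solomon & Vanden Bout 2005):
    L'_CO = 3.25e7 * I_CO * D_L^2 / (nu_obs^2 * (1+z)^3),
    with I_CO in Jy km/s, nu_obs in GHz, D_L in Mpc."""
    return 3.25e7 * i_co * lum_dist_mpc(z) ** 2 / (nu_obs_ghz ** 2 * (1.0 + z) ** 3)

def i_co_3sigma(rms_jy, nu_obs_ghz, dv_fwhm=300.0, chan_mhz=31.25):
    """Common 3-sigma line-intensity limit: 3 * sigma_ch * sqrt(dv_FWHM * dv_chan)."""
    dv_chan = C_KMS * (chan_mhz * 1e-3) / nu_obs_ghz   # channel width [km/s]
    return 3.0 * rms_jy * math.sqrt(dv_fwhm * dv_chan)

ALPHA = 1.5                             # M_H2 / L'_CO [Msun per K km/s pc^2]

# 3C 33 (detected): I_CO = 1.5 Jy km/s at 108.746 GHz, z = 0.060
l33 = lco_prime(1.5, 108.746, 0.060)    # ~2.5e8 K km/s pc^2
m33 = ALPHA * l33                       # ~3.8e8 Msun
# 3C 66B (undetected): rms = 5.1 mJy per 31.25 MHz channel at 112.790 GHz
lim66 = i_co_3sigma(5.1e-3, 112.790)    # ~2.4 Jy km/s
```

For 3C 33 this reproduces the tabulated $L'_\mathrm{CO}\approx2.5\times10^8$ K km s$^{-1}$ pc$^2$, and for 3C 66B the quoted limit of $<2.4$ Jy km s$^{-1}$.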
The receivers were tuned to the redshifted CO($J$=1$\to$0) line frequencies ($\nu_{\rm rest}$=115.2712 GHz; see  for exact observing frequencies), centering them in the upper sideband. Three bands with 15 channels of 31.25 MHz width each were utilized. The bands were overlapped by 2 channels to improve calibration of the correlated dataset, leading to an effective bandwidth of 1281.25 MHz (∼3500 km s$^{-1}$) per sideband. Phase calibration was performed by observing bright nearby radio quasars every 15 minutes. Bandpass calibration was performed once per track on bright quasars. Fluxes were bootstrapped relative to planets, or to monitored radio quasars if no planet was available. The total calibration is estimated to be accurate to 15%. Data reduction was performed using the MIRIAD package. The CO(1$\rightarrow$0) spectra are shown in .

---------------------  ------------  --------------  --------------  -----------  ------------  ------------------  -------------
Source                 RA (J2000)    DEC (J2000)     configuration   obs. freq.   on-source     beam                rms/channel
                                                                     \[GHz\]      time \[hr\]   \[arcsec\]          \[mJy\]
3C 296                 14 16 52.94   +10 48 26.50    D & E           112.460      14.1          $6.4"\times5.5"$    2.1
3C 321                 15 31 43.45   +24 04 19.10    E               105.079      4.0           $9.8"\times7.2"$    3.8
3C 33                  01 08 52.86   +13 20 13.80    D & E           108.746      14.8          $8.7"\times6.3"$    1.8
3C 403                 19 52 15.80   +02 30 24.47    E               108.849      5.7           $9.0"\times6.9"$    4.7
3C 452                 22 45 48.77   +39 41 15.70    D & E           106.634      15.6          $6.8"\times5.1"$    1.8
3C 465                 23 38 29.52   +27 01 55.90    E               111.914      8.9           $9.3"\times6.5"$    3.0
3C 66B                 02 23 11.41   +42 59 31.38    E               112.790      4.4           $8.7"\times6.3"$    5.1
3C 98                  03 58 54.43   +10 26 03.00    E               111.914      4.8           $9.3"\times6.7"$    3.7
3C 83.1B (NGC 1265)    03 18 15.86   +41 51 27.80    D & E           112.241      13.5          $4.5"\times3.7"$    2.1
NGC 6109               16 17 40.54   +35 00 15.10    D & E           111.957      10.6          $5.0"\times4.0"$    2.7
NGC 6251               16 32 31.97   +82 32 16.40    E               112.526      6.53          $9.0"\times8.2"$    3.5
---------------------  ------------  --------------  --------------  -----------  ------------  ------------------  -------------

-------------------------  ---------------  -----------------  ----------  --------------------------  ---------------------------------  ---------------------------
Source                     z                $S_\mathrm{cont}$  $z_{CO}$    $\Delta v_\mathrm{FWHM}$    $I_\mathrm{CO(1\rightarrow 0)}$    $L'_\mathrm{CO}$
                                            \[mJy\]                        \[km s$^{-1}$\]             \[Jy km s$^{-1}$\]                 \[K km s$^{-1}$ pc$^2$\]
3C 33                      0.060$^*$        $31.8\pm0.4$       0.060       $400\pm200$                 $1.5\pm0.6$                        $(2.5\pm1.0)\times10^8$
3C 66B                     0.022$^*$        $113.6\pm0.8$      –           –                           $<2.4$                             $<5.2\times10^7$
3C 83.1B (NGC 1265)        0.027$^*$        $40.5\pm0.4$       –           –                           $<1.2$                             $<3.8\times10^7$
3C 98                      0.030$^*$        $8.0\pm0.4$        –           –                           $<1.3$                             $<5.2\times10^7$
3C 296                     0.025$^*$        $144.4\pm0.4$      –           –                           $<1.4$                             $<3.8\times10^7$
3C 321                     0.097$^*$        $9.1\pm0.6$        0.097       $320\pm70$                  $5.0\pm0.9$                        $(2.2\pm0.4)\times10^9$
3C 403                     0.059$^*$        $4.6\pm0.5$        0.059       $350\pm100$                 $2.8\pm0.7$                        $(4.4\pm1.1)\times10^8$
3C 452$^+$                 0.081$^*$        $44.7\pm0.6$       –           –                           –                                  $<5.4\times10^8$
3C 465                     0.030$^*$        $20.2\pm0.1$       –           –                           $<0.3$                             $<1.3\times10^7$
NGC 6109                   0.0296$^{**}$    $17.7\pm0.3$       0.0301      $230\pm50$                  $2.2\pm0.4$                        $(8.8\pm1.7)\times10^7$
NGC 6251                   0.0244$^{**}$    $624.9\pm0.6$      –           –                           $<1.9$                             $<5.0\times10^7$
Cen A$^e$                  0.0008$^{**}$    –                  –           –                           –                                  $9.4\times10^7$
3C 272.1 (M84)$^{b,c}$     0.004$^*$        –                  0.0028      200                         $1.8\pm0.6$                        $(6.2\pm2.1)\times10^5$
3C 274 (M87)$^b$           0.004$^*$        –                  0.0035      200                         $20\pm2$                           $(1.1\pm0.1)\times10^7$
3C 31$^a$                  0.017$^*$        –                  0.0169      450                         $27\pm2$                           $(3.4\pm0.2)\times10^8$
3C 449$^b$                 0.017$^*$        –                  0.0169      500                         $6\pm1$                            $(7.6\pm1.3)\times10^7$
3C 84 (NGC 1275)$^a$       0.018$^*$        –                  0.0176      200                         $104\pm1$                          $(1.43\pm0.01)\times10^9$
3C 264$^b$                 0.022$^*$        –                  0.02        200                         $3.5\pm0.7$                        $(6.2\pm1.2)\times10^7$
3C 338$^d$                 0.032$^*$        –                  0.030       –                           0.495                              $2.0\times10^7$
3C 405$^a$                 0.0565$^{**}$    –                  –           –                           $<1.5$                             $<2.2\times10^8$
3C 388                     0.091            –                  –           –                           –                                  –
-------------------------  ---------------  -----------------  ----------  --------------------------  ---------------------------------  ---------------------------

The columns show the source, its redshift, the observed continuum flux density ($S_\mathrm{cont}$), the redshift based on the CO(1$\rightarrow$0) emission line ($z_\mathrm{CO}$), the line width at half-maximum ($\Delta v_\mathrm{FWHM}$), the CO line intensity ($I_\mathrm{CO(1\rightarrow 0)}$), and luminosity ($L'_\mathrm{CO}$; see eq. 4 in Evans et al. 2005). For sources in which the CO line was not detected we report $3\sigma$ upper limits, computed assuming $\Delta v_\mathrm{FWHM}=300$ km s$^{-1}$.\
$^*$ adopted from Buttiglione et al. (2009)\
$^{**}$ adopted from Evans et al. (2006)\
$^+$ Due to the strong contribution of complex, steeply sloped mm continuum emission from extended radio jets to the mm emission of this source, the continuum was fitted over only 33 channels, where the jet contribution is estimated to be small after deconvolution. Due to this uncertainty, however, we do not consider this source in our statistical analysis.\
$^a$ $z_\mathrm{CO}$, $\Delta v_\mathrm{FWHM}$ and $I_\mathrm{CO(1\rightarrow 0)}$ are adopted from Evans et al. 2005. 
$L'_\mathrm{CO}$ was computed using the cosmology adopted here.\ $^b$ $z_\mathrm{CO}$, $\Delta v_\mathrm{FWHM}$ and $I_\mathrm{CO(1\rightarrow 0)}$ are taken from Flaquer et al. (2010). Given that their observations were conducted with the IRAM 30 m telescope, we take $1~\mathrm{K} = 4.95$ Jy, and compute $L'_\mathrm{CO}$ using the cosmology adopted here.\ $^c$ tentative detection\ $^d$ $I_\mathrm{CO(1\rightarrow 0)}$ adopted from Leon et al. (2001).\ $^e$ adopted from Eckart et al. (1990), and scaled to the cosmology used here.\ Results {#sec:results} ======= CO Data ------- CO(1$\rightarrow$0) has been detected in 4 (3C 33, 3C 321, 3C 403, and NGC 6109) out of the 11 galaxies in our CARMA-CO sample (see ). To parameterize the emission lines detected in these four galaxies we fit Gaussian profiles to the line and underlying continuum emission (see  for line/continuum properties). Two of our four CO-detected sources (3C 321 and 3C 403) have recently been detected in the CO(1$\rightarrow$0) transition by Flaquer et al. (2010) using the IRAM 30 m telescope. The line parameters reported by Flaquer et al. are in good agreement with ours. $3\sigma$ upper limits for CO(1$\rightarrow$0) non-detections are determined by assuming a line width of 300 km s$^{-1}$, corresponding to the average width of the detected lines. We further complement  with data from the literature for the 8 sources with already existing CO(1$\rightarrow$0) detections, and the 2 sources (3C 388 and 3C 405) that had to be excluded from our sample. Ancillary Data -------------- We summarize the physical properties of the 21 sources in our low- and high-excitation radio AGN sample in . We adopt the 178 MHz luminosities, accretion efficiencies, and black hole masses from Evans et al. (2006). The stellar ages of our sources, taken from Buttiglione et al. (2009), were derived by fitting stellar population synthesis models to the sources’ optical spectra (the $H\alpha$ portion). 
Combining the stellar ages with 2MASS K-band luminosities (where available) we computed the stellar masses of our sources following Drory et al. (2004). Drory et al. have parameterized the mass-to-light ratio in the K band as a function of stellar age (see their Fig. 1) using simple stellar population models (Maraston 1998), and a Salpeter initial mass function. The total systematic uncertainty of such a derived mass-to-light ratio is estimated to be $\sim25{-}30$%. Lastly, from the CO(1$\rightarrow$0) luminosities inferred for our sources (see ) we estimated the molecular ($H_2$) mass using a conversion factor of $\alpha=M_\mathrm{H_2}/L'_\mathrm{CO}=1.5~\mathrm{M_\odot}$ (K km s$^{-1}$ pc$^2$)$^{-1}$ [@evans05]. We find systematic differences in the average black hole and host galaxy properties of the low- and high-excitation sources (i.e. LINERs and Seyferts, respectively) in our sample ( ). This is illustrated in , where we also indicate the average properties of our low- and high-excitation radio AGN, computed using the ASURV statistical package and assuming log-normal distributions in luminosity and mass. The average properties are specifically given in .[^2]

--------  ----------------------------  --------------  ----------------------------  ---------------------------  -----------------------------
AGN type  $L_\mathrm{178-MHz}$          stellar age     $M_*$                         $M_\mathrm{BH}$              $M_{H_2}$
          \[W/Hz/srad\]                 \[Gyr\]         \[$\mathrm{M_\odot}$\]        \[$\mathrm{M_\odot}$\]       \[$\mathrm{M_\odot}$\]
HERAGN    $(7.2\pm4.9)\times10^{24}$    $6.3\pm1.6$     $(1.7\pm0.7)\times10^{11}$    $(2.5\pm1.0)\times10^{8}$    $(2.9\pm1.2)\times10^8$
LERAGN    $(2.5\pm1.1)\times10^{24}$    $10.2\pm1.4$    $(2.4\pm1.4)\times10^{11}$    $(1.3\pm0.2)\times10^{9}$    $(4.3\pm1.9)\times10^{7,*}$
--------  ----------------------------  --------------  ----------------------------  ---------------------------  -----------------------------

$^*$ The given limit was computed excluding the tentative CO detection in 3C 272.1 (see ). 
Including the gas mass for this source yields an average of $(1.8\pm1.5)\times10^7~\mathrm{M_\odot}$. Compared to LERAGN, HERAGN on average have a factor of $\sim3$ higher radio continuum luminosities, significantly higher accretion efficiencies, but central black hole masses that are about an order of magnitude lower. Furthermore, their host galaxies have about a factor of 1.5 younger stellar populations and lower stellar masses, but about a factor of $\sim7$ higher molecular gas masses. As discussed in the next section, this is consistent with the idea that high- and low-excitation radio AGN form two physically distinct populations of galaxies that reflect different phases of massive galaxy formation. Discussion and Summary {#sec:discussion} ====================== Our main result is that HERAGN have systematically higher molecular gas masses (a factor of $\sim7$; see ), compared to LERAGN. Flaquer et al. (2010) have found a similar trend by dividing their sample ($\sim50$ radio AGN observed with the IRAM 30 m telescope, partially overlapping with our sample) into FR class I and II objects. They find that the molecular gas mass in FR IIs is a factor of $\sim4$ higher than that in FR Is. The FR class can be taken to roughly correspond to the low- and high-excitation classification.[^3] Flaquer et al. (2010) have, however, concluded that the systematic differences they find are likely a result of a Malmquist bias, i.e. simply due to a systematically higher redshift of their FR-II sources. Although our HERAGN lie on average at a slightly higher redshift, compared to our LERAGN (0.046 vs. 0.030, respectively), in the following we argue that the systematic differences we find in molecular gas mass are not due to a Malmquist bias. Morić et al. (2010) have shown that the redshift distributions of carefully selected samples of radio-selected LINERs and Seyferts are approximately the same (see their Fig. 6). This eliminates Malmquist bias from their results. 
They find that the detection fraction in the FIR is significantly lower for LINERs than for Seyferts (6.5% vs. 22%, respectively) in their sample. Assuming that the star formation law parameterized by $L'_{\rm CO}$ (as a proxy for total gas mass) and $L_{\rm FIR}$ (as a proxy for star formation rate; e.g., Kennicutt 1998; Solomon & Vanden Bout 2005; Bigiel et al. 2008), on average, correctly represents the star formation properties of these samples (as confirmed by the CO/FIR luminosities of the IRAS-detected sources analyzed here; see ), the lower average FIR luminosity in low-excitation sources (i.e. LINERs) implies lower gas masses than in high-excitation (i.e. Seyfert) galaxies. A similar result is obtained based on average (optically derived) star formation rates[^4], which suggest that those in LINERs are about a factor of 3 lower than in Seyferts in a redshift-matched sample. These findings suggest that the systematic differences in molecular gas mass in high- and low-excitation radio AGN are physical, and not due to Malmquist bias. ![ CO vs. FIR luminosity for our local AGN sources detected with IRAS. The lines represent the $L'_\mathrm{CO} - L_\mathrm{FIR}$ correlation derived by @riechers06. []{data-label="fig:cofir"}](f4.eps){width="\columnwidth"} The systematically higher molecular gas masses that we find in HERAGN, relative to LERAGN in our $z<0.1$ radio AGN sample, are in excellent agreement with the systematic differences in various properties of high- and low-excitation radio AGN, on both pc and kpc scales (see  and ). We find that, on average, HERAGN have lower stellar masses and younger stellar ages compared to LERAGN (see ; see also Smolčić 2009). This is consistent with HERAGN and LERAGN being green valley and red sequence sources, respectively. Furthermore, we show that HERAGN have on average higher radio luminosities than LERAGN, consistent with the results presented in Kauffmann et al. (2008). Kauffmann et al. 
have shown that the fraction of radio AGN with strong emission lines in their spectra significantly rises beyond $\sim10^{25}$ . In general, the comparison of the black hole and host galaxy properties inferred for our 21 $z<0.1$ AGN with much larger samples of radio AGN (Kauffmann et al. 2008; Smolčić 2009) suggests that our AGN sample is representative of high- and low-excitation radio AGN in the nearby universe. From the average stellar masses that we infer for our high- and low-excitation sources we extrapolate that they occupy $\sim3\times10^{13}~\mathrm{M_\odot}$ and $\sim5\times10^{14}~\mathrm{M_\odot}$ halos, respectively [e.g. @behroozi10; @moster10]. Compared to the systematic molecular gas mass difference, this yields an even more dramatic discrepancy of more than 2 orders of magnitude in the average molecular gas fractions in HE- ($\sim10^{-5}$) and LE-RAGN ($\sim9\times10^{-8}$). The discrepancy remains significant (about an order of magnitude) if the average gas-to-stellar mass fraction (which can be interpreted as star formation efficiency) is considered. On small scales, the average black hole accretion efficiencies in HE- and LE-RAGN suggest different supermassive black-hole accretion mechanisms (standard disk accretion of cold gas in HERAGN vs. Bondi accretion of hot gas in LERAGN; see Evans et al. 2006; Hardcastle et al. 2006). Furthermore, the higher black hole masses in LERAGN suggest a later evolution stage of their host galaxies, compared to that of HERAGN. This is further strengthened by the higher stellar masses in LERAGN, as well as older stellar ages, and less massive gas reservoirs. In the blue-to-red galaxy formation picture, in which blue, gas-rich galaxies are thought to transform into red-and-dead, gas-poor galaxies, the stellar populations in the host galaxies of HERAGN are expected to be younger and have lower masses, while their molecular gas reservoirs – fueling further stellar mass growth – are expected to be larger than those in LERAGN. 
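As a quick arithmetic cross-check, the factors quoted in this section follow directly from the tabulated sample averages; a sketch (the halo masses are the values assumed above, and variable names are ours):

```python
# Cross-check of the average HERAGN/LERAGN ratios quoted in the text,
# using the sample means from the summary table (all masses in Msun).
her = {"m_h2": 2.9e8, "m_star": 1.7e11, "m_halo": 3e13}
ler = {"m_h2": 4.3e7, "m_star": 2.4e11, "m_halo": 5e14}

gas_ratio = her["m_h2"] / ler["m_h2"]        # ~7: the headline gas-mass factor
frac_her = her["m_h2"] / her["m_halo"]       # ~1e-5: HERAGN gas-to-halo fraction
frac_ler = ler["m_h2"] / ler["m_halo"]       # ~9e-8: LERAGN gas-to-halo fraction
# Gas-to-stellar mass fractions differ by about an order of magnitude:
star_frac_ratio = (her["m_h2"] / her["m_star"]) / (ler["m_h2"] / ler["m_star"])
```

The gas-to-halo fractions indeed differ by more than two orders of magnitude, while the gas-to-stellar fractions differ by roughly a factor of ten.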
This is in very good agreement with the results presented here. Thus, in summary, our results strengthen the idea that low- and high-excitation radio AGN form two physically distinct galaxy populations that reflect different stages of massive galaxy formation. The authors thank F. Bertoldi and K. Knudsen for insightful discussions. The research leading to these results has received funding from the European Union’s Seventh Framework programme under grant agreement 229517. DR acknowledges support from NASA through an award issued by JPL/Caltech, and from NASA through Hubble Fellowship grant HST-HF-51235.01 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. Support for CARMA construction was derived from the Gordon and Betty Moore Foundation, the Kenneth T. and Eileen L. Norris Foundation, the James S. McDonnell Foundation, the Associates of the California Institute of Technology, the University of Chicago, the states of California, Illinois, and Maryland, and the National Science Foundation. Ongoing CARMA development and operations are supported by the National Science Foundation under a cooperative agreement, and by the CARMA partner universities. Allen, S. W., Dunn, R. J. H., Fabian, A. C., Taylor, G. B., & Reynolds, C. S. 2006, , 372, 21 Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, , 93, 5 Behroozi, P. S., Conroy, C., & Wechsler, R. H. 2010, , 717, 379 Bell, E. F., et al. 2004a, , 608, 752 Bell, E. F., et al. 2004b, , 600, L11 Borch, A., et al. 2006, , 453, 869 Bower, R. G., Benson, A. J., Malbon, R., Helly, J. C., Frenk, C. S., Baugh, C. M., Cole, S., Lacey, C. G. 2006, , 370, 645 Brown, M. J. I., Dey, A., Jannuzi, B. T., Brand, K., Benson, A. J., Brodwin, M., Croton, D. J., Eisenhardt, P. R. 2007, , 654, 858 Buttiglione, S., Capetti, A., Celotti, A., Axon, D. J., Chiaberge, M., Duccio Macchetto, F., & Sparks, W. B. 
2009, arXiv:0901.1764 Buttiglione, S., Capetti, A., Celotti, A., Axon, D. J., Chiaberge, M., Macchetto, F. D., & Sparks, W. B. 2010, , 509, A6 Buttiglione, S., et al. 2011, , 525, A28 Croton, D. J., et al. 2006, , 365, 11 Downes, D., & Solomon, P. M. 1998, , 507, 615 Drory, N., Bender, R., Feulner, G., Hopp, U., Maraston, C., Snigula, J., & Hill, G. J. 2004, , 608, 742 Eckart, A., Cameron, M., Rothermel, H., Wild, W., Zinnecker, H., Rydbeck, G., Olberg, M., & Wiklind, T. 1990, , 363, 451 Evans, A. S., Mazzarella, J. M., Surace, J. A., Frayer, D. T., Iwasawa, K., & Sanders, D. B. 2005, , 159, 197 Evans, D. A., Worrall, D. M., Hardcastle, M. J., Kraft, R. P., & Birkinshaw, M. 2006, , 642, 96 Faber, S. M., et al. 2007, , 665, 265 Fanidakis, N., Baugh, C. M., Benson, A. J., Bower, R. G., Cole, S., Done, C., & Frenk, C. S. 2010, , 1547 Ocaña Flaquer, B., Leon, S., Combes, F., & Lim, J. 2010, , 518, A9 Hardcastle, M. J., Evans, D. A., & Croston, J. H. 2006, , 370, 1893 Jackson, N., & Rawlings, S. 1997, , 286, 241 Kauffmann, G., et al. 2003, , 341, 33 Kauffmann, G., Heckman, T. M., & Best, P. N. 2008, , 384, 953 Kewley, L. J., Dopita, M. A., Sutherland, R. S., Heisler, C. A., & Trevena, J. 2001, , 556, 121 Kewley, L. J., Groves, B., Kauffmann, G., & Heckman, T. 2006, , 372, 961 Leon, S., Lim, J., Combes, F., & van-Trung, D. 2001, QSO Hosts and Their Environments, 185 Morić, I., Smolčić, V., Kimball, A., Riechers, D. A., Ivezić, Ž., & Scoville, N. 2010, , 724, 779 Moster, B. P., Somerville, R. S., Maulbetsch, C., van den Bosch, F. C., Macciò, A. V., Naab, T., & Oser, L. 2010, , 710, 903 Peacock, J. A. 1985, , 217, 601 Riechers, D. A., et al. 2006, , 650, 604 Saripalli, L., & Mack, K.-H. 2007, , 376, 1385 Sijacki, D., & Springel, V. 2006, , 366, 397 Sijacki, D., Springel, V., di Matteo, T., & Hernquist, L. 2007, , 380, 877 Smolčić, V., et al. 2008, , 177, 14 Smolčić, V. 2009, , 699, L43 Smolčić, V., et al. 
2009, , 696, 24 Solomon, P. M., & Vanden Bout, P. A. 2005, , 43, 677 Willott, C. J., Rawlings, S., Blundell, K. M., Lacy, M., & Eales, S. A. 2001, , 322, 536 [^1]: http://nedwww.ipac.caltech.edu [^2]: It should be kept in mind that in ASURV there is an implicit assumption that the censored data follow a similar distribution to that of the measured population. If this is not the case, “average” values calculated by ASURV will be generally biased upwards (as our upper limits typically lie towards the bottom-end of the distribution). Note however that, if this were the case, it would not change, but only strengthen the results presented here. [^3]: Almost all FR I (low-power) radio galaxies are LERAGN, while optical hosts of FR IIs, which are typically more powerful than FR Is (Fanaroff & Riley 1974; Owen 1993; Ledlow & Owen 1996), usually have strong emission lines. Note however that the correspondence between the FR class and the presence of emission lines is not one-to-one. [^4]: Morić et al. (2010) derived star formation rates for each galaxy in their sample via stellar population synthesis model fitting to the SDSS photometry of the host galaxy (see also Smolčić et al. 2008).
--- author: - | Gaetano Frascella$^{1,2}$, Sascha Agne$^{1,2}$, Farid Ya. Khalili$^{3,4}$,\ and Maria V. Chekhova$^{1,2,5}$ bibliography: - 'Nonlinearbib.bib' date: | $^1$Max Planck Institute for the Science of Light, Staudtstr. 2, 91058 Erlangen, Germany.\ $^2$University of Erlangen-Nuremberg, Staudtstr. 7/B2, 91058 Erlangen, Germany.\ $^3$Russian Quantum Center, Bolshoy Bulvar 30/bld. 1, 121205 Skolkovo, Moscow, Russia.\ $^4$NUST “MISiS”, Leninskiy Prospekt 4, 119049 Moscow, Russia.\ $^5$Department of Physics, M.V. Lomonosov Moscow State University, Leninskie Gory, 119991 Moscow, Russia.\ title: 'Supplemental Material to ‘Overcoming detection inefficiency in squeezing-assisted interferometers’' --- Pump pulses post-selection ========================== The pump beam used in the experiment is not shot-noise limited and the excess intensity fluctuations are measured to be $2\%$ RMS. To reduce the excess fluctuations of the pump, the measurement of the phase sensitivity with the photodetector PD1 is post-selected conditioned on the signal measured on PD2. As shown in Fig. \[fig:setup-1\], we tap off part of the pump beam at the beam splitter BS1 to generate spatially and spectrally-multimode parametric down-conversion (PDC) in the nonlinear crystal BBO3. We reject the pump with the dichroic mirror DM2 and the long-pass filter LPF with the transmission edge at $645$ nm. The relative intensity fluctuations of the pump $\sigma_{I_{{\rm p}}}/I_{{\rm p}}$ are amplified proportionally to the parametric gain $G$ [^1] $$\frac{\sigma_{I_{{\rm PDC}}}}{I_{{\rm PDC}}}=G\cdot\coth G\cdot\frac{\sigma_{I_{{\rm p}}}}{I_{{\rm p}}},\label{eq:post-selection}$$ where $I_{{\rm PDC}}$ is the intensity of the PDC radiation. Eq.  can be derived using the proportionalities $I_{{\rm PDC}}\propto\sinh^{2}G$ and $G\propto\sqrt{I_{{\rm p}}}$. 
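The amplification factor $G\coth G$ follows from these two proportionalities by straightforward error propagation; a short numerical check (illustrative only, our own function names):

```python
import math

# Numerical check of the error-propagation factor in the equation above:
# with I_PDC proportional to sinh^2(G) and G proportional to sqrt(I_p),
# the logarithmic derivative d ln I_PDC / d ln I_p equals G * coth(G).
def log_deriv(G, eps=1e-6):
    i0 = math.sinh(G) ** 2
    i1 = math.sinh(G * math.sqrt(1.0 + eps)) ** 2   # I_p -> (1 + eps) * I_p
    return (math.log(i1) - math.log(i0)) / math.log(1.0 + eps)

# At the experimental gain G ~ 7, coth(G) ~ 1, so relative pump intensity
# fluctuations are amplified roughly 7-fold, motivating the post-selection.
ratios = {G: log_deriv(G) for G in (0.5, 2.0, 7.0)}
```

For each gain the finite-difference result agrees with $G\coth G$ to high accuracy.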
For more efficient post-selection, we choose a high gain $G\sim7$ in a 3-mm crystal by tightly focusing the pump with a lens (not shown in the schematic for simplicity). In order to achieve better sub-shot-noise phase sensitivity, the acceptance window for events at PD2 corresponds to a standard deviation of $2\%$ relative to the mean. Control of the relative phase ============================= ![\[fig:setup-1\]Full experimental setup. Complementary to Fig. 2 of the main text. The second output of the beam splitter BS2 is used to lock the relative phase of the coherent ($800$ nm) and pump ($400$ nm) beams. The dispersion in a pair of movable wedges W gives the desired value of the phase. Second-harmonic generation in the nonlinear crystal BBO4 frequency-doubles the coherent beam at $800$ nm and the residue is rejected with a band-pass filter BF2. The interference of the pump and the frequency-doubled $800$-nm beam is filtered spatially by the combination of lenses L2-3 with a pinhole and detected on the photodetector PD3, through the Glan polarizer GP2 oriented at $45$ degrees. The feedback system stabilizes the phase of the $800$ nm beam by moving the piezoelectric actuator PA. Other notation is: HWP, half-wave plate; DM, dichroic mirror; LPF, long-pass filter.](2020-4-setupsu2-complete.png){width=".5\columnwidth"} In Fig. \[fig:setup-1\] we show explicitly the path at the second output of the beam splitter BS2 used to control the relative phase between the coherent and the SV input. We control the path length of the $800$-nm beam to lock the phase relative to the $400$-nm pump. To obtain the same wavelength, we generate vertically-polarized second harmonic at $400$ nm in the nonlinear crystal BBO4 from the $800$-nm coherent beam. The movement of one of the N-BK7 wedges W can change, due to dispersion, the relative phase between the two beams at different wavelengths without changing the alignment. 
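For illustration (this estimate is ours, not from the paper), the wedge travel needed for a full fringe can be estimated from N-BK7 dispersion: since the 800-nm arm is frequency-doubled afterwards, an extra glass thickness $d$ changes the relative phase by $(2\pi/\lambda_{400})(n_{400}-n_{800})\,d$. A sketch using the standard Schott Sellmeier coefficients for N-BK7 (assumed, not quoted in the paper):

```python
import math

# Standard N-BK7 Sellmeier coefficients (Schott datasheet values; assumption).
B = (1.03961212, 0.231792344, 1.01046945)
C = (0.00600069867, 0.0200179144, 103.560653)

def n_bk7(lam_um):
    """Refractive index of N-BK7 at wavelength lam_um (micrometres)."""
    l2 = lam_um ** 2
    return math.sqrt(1.0 + sum(b * l2 / (l2 - c) for b, c in zip(B, C)))

n400, n800 = n_bk7(0.4), n_bk7(0.8)
# Extra glass thickness (in um) producing a 2*pi relative phase shift:
d_2pi = 0.4 / (n400 - n800)
# n400 ~ 1.531, n800 ~ 1.511  ->  d_2pi ~ 20 um: one full fringe per
# ~20 um of additional glass, a range easily resolved by moving a wedge.
```

This is only an order-of-magnitude estimate of the required wedge resolution; the actual calibration in the experiment is set by the wedge angle and translation stage used.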
The band-pass filter BF2 (central wavelength $400$ nm, bandwidth $40$ nm) rejects the $800$ nm beam. To obtain good visibility, we use a beam expander (lenses L2-3) and the pinhole P2 for spatial mode filtering and the Glan polarizer GP2 with transmission axis at $45$ degrees to project both beams on the same polarization axis. Through a feedback system, the signal on the photodetector PD3 controls the piezoelectric actuator PA, which moves a mirror in the path of the 800-nm coherent beam in order to lock the phase. Operation of the interferometer without the squeezed input ========================================================== ![\[fig:noSV\]Phase sensitivity measurement with no squeezed vacuum state, i.e. first amplifier gain $G_{1}=0$. Second amplifier gain $G_{2}=2.9$ and $N_{\alpha}=1400\pm250$ photons in the coherent beam. Internal and external transmissions are $\mu=97\%$ and $\eta=50\%$. The shot-noise limit (red dashed line) is not overcome. The green line shows the theoretical prediction. The inset shows the photon number dependence on the phase.](noSV-presentation.png){width="0.5\columnwidth"} In this Section, we check that the phase sensitivity does not overcome the shot-noise limit (SNL) if the squeezed vacuum input is removed. In this way, the interferometer is no longer squeezing-assisted and resembles the usual interferometer with coherent input and an optical parametric amplifier (OPA) at the output. Fig. \[fig:noSV\] presents the phase sensitivity of the interferometer when the first amplifier is removed, i.e. $G_{1}=0$. In this case, the SNL for the phase sensitivity is given by $$\Delta\phi_{{\rm SNL}}=\frac{1}{\sqrt{N_{\alpha}}},$$ where $N_{\alpha}$ is the number of photons at the half-wave plate, i.e. inside the interferometer, before the OPA. The amplifier does not affect the SNL, because phase-sensitive amplification does not change the signal-to-noise ratio of the radiation at the input [@Caves:82]. 
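This invariance can be checked directly from the output photon statistics derived in the last section of this Supplement: for a coherent input ($g^{(2)}=1$) the relative photon-number fluctuation after the amplifier tends to $1/\sqrt{N_{\alpha}}$ for large $N_{\alpha}$, independent of $G$. A short numerical sketch:

```python
import math

# For a coherent input (g2 = 1), the degenerate-OPA output statistics are
#   <N_out>    = sinh^2(G) + N_a * exp(2G)
#   Var(N_out) = 2 sinh^2(G) cosh^2(G) + N_a * exp(4G)
# so sqrt(Var)/<N_out> -> 1/sqrt(N_a) for large N_a, for any gain G.
def rel_fluct(n_alpha, G):
    mean = math.sinh(G) ** 2 + n_alpha * math.exp(2.0 * G)
    var = 2.0 * (math.sinh(G) * math.cosh(G)) ** 2 + n_alpha * math.exp(4.0 * G)
    return math.sqrt(var) / mean

n_a = 1400.0                      # photon number used in this measurement
fluct = {G: rel_fluct(n_a, G) for G in (0.0, 2.9, 6.0)}
# All three values are ~ 1/sqrt(1400), independent of G.
```

The check uses the mean photon number of this measurement ($N_{\alpha}=1400$) and the gain values $G=0$, the fitted $G_2=2.9$, and an arbitrary larger gain.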
The phase sensitivity does not overcome the SNL, shown with a red dashed line. In addition, the SNL is not even reached, mainly because of the detector dark noise and the internal losses [^2]. For this part of the experiment, the photon number is $N_{\alpha}=1400\pm250$ and the gain of the amplifier from the fit is $G_{2}=2.9\pm0.3$. The values of external and internal transmission are the same as in the main text. The experimental points show good agreement with the theoretical model. Excess noise for a coherent beam after an optical parametric amplifier ====================================================================== We study the effect of the excess fluctuations of a non-perfectly coherent beam seeding an OPA. To take into account the excess fluctuations at the input of the OPA, the following equations hold for the input photon number operator $\hat{N}_{{\rm in}}$: $$\begin{array}{c} \left<\hat{N}_{{\rm in}}\right>=N_{\alpha},\\ \Delta\hat{N}_{{\rm in}}^{2}=N_{\alpha}+\left(g^{\left(2\right)}-1\right)N_{\alpha}^{2}, \end{array}\label{eq:excess}$$ where $N_{\alpha}$ is the average photon number and $g^{\left(2\right)}$ is the normalized second-order correlation function, which deviates from unity for a non-perfectly coherent beam. To describe our state, we consider a mixture of coherent states $\left|\beta\right>$ described by the density matrix $$\hat{\rho}=\int d\beta\,P(\beta)\left|\beta\right>\left<\beta\right|,\label{eq:densmatrix}$$ where $\beta=be^{-i\psi}$ is a complex number and $P(\beta)$ is the P-function of the state. We consider the phase fixed at $\psi=0$, with only amplitude fluctuations present: $P(\beta)=p\left(b\right)\delta\left(\psi\right)$, where $p$ is a generic probability distribution and $\delta$ is the Dirac delta. The average value of a generic operator $\hat{O}$ can be evaluated with the formula $\left<\hat{O}\right>={\rm Tr}\left(\hat{O}\hat{\rho}\right)$. Evaluating the averages in Eqs. (\[eq:excess\]) with Eq. (\[eq:densmatrix\])
, we obtain the following constraints on the function $p$: $$\begin{array}{c} \int db\,p(b)\,b^{3}=N_{\alpha},\\ \int db\,p(b)\,b^{5}=g^{\left(2\right)}N_{\alpha}^{2}. \end{array}\label{eq:pfunction}$$ We consider a degenerate OPA described by the Bogoliubov transformation for the output and input annihilation operators $\hat{a}_{{\rm out}}$ and $\hat{a}_{{\rm in}}$, $$\hat{a}_{{\rm out}}=\cosh G\,\hat{a}_{{\rm in}}+\sinh G\,\hat{a}_{{\rm in}}^{\dagger},\label{eq:bogoliubov}$$ where $G$ is the parametric gain. For the input state described by Eqs. (\[eq:densmatrix\], \[eq:pfunction\]), we calculate the expectation value of the number of photons at the output of the OPA, $\hat{N}_{{\rm out}}$, and its variance [^3], $$\begin{array}{c} \left<\hat{N}_{{\rm out}}\right>=\sinh^{2}G+N_{\alpha}e^{2G},\\ \Delta\hat{N}_{{\rm out}}^{2}=2\sinh^{2}G\cosh^{2}G+N_{\alpha}e^{4G}+\\ +\left(g^{\left(2\right)}-1\right)N_{\alpha}^{2}e^{4G}. \end{array}\label{eq:outputN}$$ Eqs. (\[eq:outputN\]) show that the relative excess intensity fluctuations $g^{(2)}-1$ of the input beam are amplified by the OPA, proportionally to the squared mean photon number $N_{\alpha}^{2}$ and to the factor $e^{4G}$. Moreover, the value of $g^{\left(2\right)}$ at the output will not necessarily be the same as at the input; in general, it increases. For the limiting case of no photons at the input ($N_{\alpha}=0$), $g^{(2)}=3+1/\sinh^{2}G$, in accordance with the result for an unseeded OPA. For the case of a high number of photons at the input $\left(N_{\alpha}\rightarrow\infty\right)$, the value of $g^{\left(2\right)}$ at the output approaches the one at the input from above. Derivation of the phase sensitivity\[subsec:Theoretical-model-for\] =================================================================== ![\[fig:notation\]Polarization interferometer with squeezed vacuum light at input 1, coherent light at input 2, and an optical parametric amplifier at the output. 
The figure shows the notation used in the calculations.](2020-4-notation.png){width=".5\columnwidth"} We use the theory developed in Ref. [@Manceau:17] to calculate the theoretical phase sensitivity for the polarization interferometer with an output amplifier. In particular, we take into account the direct detection and the excess noise of the coherent beam. For the notation we refer to Fig. \[fig:notation\]. For the derivation, instead of the annihilation and creation operators $\hat{o}$ and $\hat{o}^{\dagger}$, we follow the two-component quadrature vector $$\boldsymbol{\hat{o}}=\frac{1}{\sqrt{2}}\left(\begin{array}{c} \hat{o}+\hat{o}^{\dagger}\\ \left(\hat{o}-\hat{o}^{\dagger}\right)/i \end{array}\right)$$ through each element of the setup. In this formalism, a phase shift, described as $\hat{w}=\hat{o}e^{-i\gamma}$, becomes $\hat{\mathbf{w}}=\mathbb{O}\left(\gamma\right)\hat{\mathbf{o}},$ where $\mathbb{O}\left(\gamma\right)=\mathbb{I}\cos\gamma-\mathbb{Y}\sin\gamma$, with $$\mathbb{I}=\left(\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right),\,\mathbb{Y}=\left(\begin{array}{cc} 0 & -1\\ 1 & 0 \end{array}\right).$$ The squeezed vacuum input with squeeze factor $G_{1}$ and the coherent beam with classical amplitude $\alpha$ and phase $\pi/2$ have quadrature vectors $$\begin{array}{c} \hat{\mathbf{a}}_{1}=\mathbb{S}\left(G_{1}\right)\hat{\mathbf{z}}_{1},\\ \hat{\mathbf{a}}_{2}=\sqrt{2}\alpha\left(\begin{array}{c} 0\\ -1 \end{array}\right)+\hat{\mathbf{z}}_{2}, \end{array}\label{eq:vecinput}$$ where the single-mode Bogoliubov transformation in Eq. (\[eq:bogoliubov\]) with gain $G$ is described by $$\mathbb{S}\left(G\right)=\left(\begin{array}{cc} e^{G} & 0\\ 0 & e^{-G} \end{array}\right),$$ and $\hat{\mathbf{z}}_{1,2}$ are vacuum quadrature vectors. 
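The $2\times2$ matrices of this formalism are easy to check numerically. The following sketch (our addition, not part of the supplement; numpy assumed) builds $\mathbb{O}\left(\gamma\right)$ and $\mathbb{S}\left(G\right)$ and verifies two basic properties used implicitly below:

```python
import numpy as np

Y = np.array([[0., -1.], [1., 0.]])

def O(gamma):
    # phase-shift matrix O(gamma) = I cos(gamma) - Y sin(gamma)
    return np.eye(2) * np.cos(gamma) - Y * np.sin(gamma)

def S(G):
    # single-mode squeezing matrix acting on quadrature vectors
    return np.diag([np.exp(G), np.exp(-G)])

# O(gamma) is a rotation: orthogonal with unit determinant
g = 0.7
assert np.allclose(O(g) @ O(g).T, np.eye(2))
assert np.isclose(np.linalg.det(O(g)), 1.0)

# S(G) has unit determinant, so the vacuum uncertainty product is preserved;
# the squeezed-vacuum quadrature covariance is diag(e^{2G}, e^{-2G})
G1 = 1.0
cov_sv = S(G1) @ S(G1).T
assert np.allclose(np.diag(cov_sv), [np.exp(2 * G1), np.exp(-2 * G1)])
print("checks passed")
```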
The transformation of the quadrature vectors through the elements can be written as $$\begin{array}{c} \hat{\mathbf{b}}_{1,2}=\frac{1}{\sqrt{2}}\left(\hat{\mathbf{a}}_{2,1}\pm\mathbb{Y}\,\hat{\mathbf{a}}_{1,2}\right),\\ \hat{\mathbf{c}}_{1,2}=\mathbb{O}\left(\frac{\pi}{2}\pm2\delta\right)\hat{\mathbf{b}}_{2,1},\\ \hat{\mathbf{d}}_{1,2}=\sqrt{\mu}\,\hat{\mathbf{c}}_{1,2}+\sqrt{1-\mu}\,\hat{\mathbf{m}}_{1,2},\\ \hat{\mathbf{e}}_{2}=\frac{-\mathbb{Y}}{\sqrt{2}}\left(\hat{\mathbf{d}}_{2}-\hat{\mathbf{d}}_{1}\right),\\ \hat{\mathbf{f}}=\mathbb{S}\left(G_{2}\right)\hat{\mathbf{e}}_{2},\\ \hat{\mathbf{g}}=\sqrt{\eta}\,\hat{\mathbf{f}}+\sqrt{1-\eta}\,\hat{\mathbf{n}}, \end{array}\label{eq:bs-phase}$$ where $\hat{\mathbf{m}}_{1,2}$ and $\hat{\mathbf{n}}$ are vacuum quadrature vectors, $\mu$ and $\eta$ are respectively the internal and external transmissivity, $\delta$ is the angle of the HWP’s optic axis, and $G_{2}$ is the squeeze factor of the output amplifier. The first line of Eq. (\[eq:bs-phase\]) describes the basis transformation from horizontal and vertical polarization to right- and left-circular polarization, while the fourth line describes the inverse transformation. The second line describes the operation of the HWP in the right- and left-circular polarization basis, i.e. a total phase shift of $\phi=4\delta$. Indeed, the Jones matrix of a half-wave plate in the right- and left-circular polarization basis is [@Shurcliff:62] $$i\begin{pmatrix} 0 & e^{2i\delta}\\e^{-2i\delta}&0\end{pmatrix}.$$ The third and last lines describe respectively the internal and external losses, while the fifth line describes the amplification at the output. The last line of Eq. (\[eq:bs-phase\]) 
can be rewritten as $$\begin{array}{c} \hat{\mathbf{g}}=-\sqrt{\eta\mu}\,\mathbb{S}\left(G_{2}\right)\mathbb{Y}\left(\hat{\mathbf{a}}_{1}\sin\frac{\phi}{2}+\hat{\mathbf{a}}_{2}\cos\frac{\phi}{2}\right)+\\ +\sqrt{\eta\left(1-\mu\right)}\,\mathbb{S}\left(G_{2}\right)\mathbf{\hat{m}}+\sqrt{1-\eta}\,\mathbf{\hat{n}}, \end{array}\label{eq:out-in}$$ where $\mathbf{\hat{m}}=\mathbb{Y}\left(\frac{\mathbf{\hat{m}}_{1}-\mathbf{\hat{m}}_{2}}{\sqrt{2}}\right)$. The output quadrature vector in Eq. (\[eq:out-in\]), after substituting Eqs. (\[eq:vecinput\]), can be written as the sum $\hat{\mathbf{g}}=\mathbf{\hat{A}}+\mathbf{\hat{g}_{fl}},$ where the first term corresponds to the coherent beam, $$\begin{array}{c} \mathbf{\hat{A}}=\sqrt{2\eta\mu}\,\alpha\cos\frac{\phi}{2}\mathbb{S}\left(G_{2}\right)\mathbb{Y}\left(\begin{array}{c} 0\\ -1 \end{array}\right),\end{array}$$ and the second is the fluctuating part $$\begin{array}{c} \mathbf{\hat{g}_{fl}}=\sqrt{\eta\mu}\,\sin\frac{\phi}{2}\mathbb{S}\left(G_{2}\right)\mathbb{Y}\mathbb{S}\left(G_{1}\right)\hat{\mathbf{z}}_{1}+\\ +\sqrt{\eta\mu}\,\cos\frac{\phi}{2}\mathbb{S}\left(G_{2}\right)\mathbb{Y}\hat{\mathbf{z}}_{2}+\\ +\sqrt{\eta\left(1-\mu\right)}\,\mathbb{S}\left(G_{2}\right)\mathbf{\hat{m}}+\sqrt{1-\eta}\,\mathbf{\hat{n}}. 
\end{array}$$ In our case, $N_{\alpha}=\alpha^{2}\gg\sinh^{2}G_{1}$, and the mean and variance of the photon number operator $\hat{N}=\hat{g}^{\dagger}\hat{g}$ are given by $$\label{eq:theory} \left<\hat{N}\right>\approx\frac{1}{2}\mathbf{\hat{A}}^{\top}\mathbf{\hat{A}}=\eta\mu N_{\alpha}e^{2G_{2}}\cos^{2}\frac{\phi}{2},$$ $$\begin{array}{c} \Delta\hat{N}^{2}\approx\mathbf{\hat{A}}^{\top}\left<\mathbf{\hat{g}_{fl}}\mathbf{\hat{g}_{fl}}^{\top}\right>\mathbf{\hat{A}}=\\ =\eta\mu N_{\alpha}e^{2G_{2}}\cos^{2}\frac{\phi}{2}\left[\eta\mu\cos^{2}\frac{\phi}{2}\sigma_{{\rm SNL}}^{2}+\right.\\ \left.+\eta\mu\sin^{2}\frac{\phi}{2}\sigma_{{\rm SV}}^{2}+\eta\left(1-\mu\right)\sigma_{{\rm SNL}}^{2}+\left(1-\eta\right)\right], \end{array}$$ with $$\begin{array}{c} \sigma_{{\rm SV}}^{2}=\left(\begin{array}{cc} 1 & 0\end{array}\right)\mathbb{S}\left(G_{2}\right)\left(-\mathbb{Y}\right)\mathbb{S}^{2}\left(G_{1}\right)\mathbb{Y}\mathbb{S}\left(G_{2}\right)\left(\begin{array}{c} 1\\ 0 \end{array}\right)=\\=e^{2G_{2}-2G_{1}},\\ \sigma_{{\rm SNL}}^{2}=\left(\begin{array}{cc} 1 & 0\end{array}\right)\mathbb{Y}\mathbb{S}^{2}\left(G_{2}\right)\left(-\mathbb{Y}\right)\left(\begin{array}{c} 1\\ 0 \end{array}\right)=e^{2G_{2}}. \end{array}$$ Adding the detector dark noise and the excess fluctuations of the coherent beam to the photon number variance calculated above, we obtain $$\begin{array}{c} \Delta N^{2}=\Delta N_{{\rm det}}^{2}+\eta\mu N_{\alpha}e^{2G_{2}}\cos^{2}\frac{\phi}{2}\times\\ \left[\eta\mu\cos^{2}\frac{\phi}{2}\sigma_{{\rm SNL}}^{2}\left(1+\left(g^{\left(2\right)}-1\right)N_{\alpha}\right)+\right.\\ \left.+\eta\mu\sin^{2}\frac{\phi}{2}\sigma_{{\rm SV}}^{2}+\eta\left(1-\mu\right)\sigma_{{\rm SNL}}^{2}+\left(1-\eta\right)\right]. \end{array}\label{eq:theory-2}$$ The phase sensitivity can be calculated from Eq. 1 of the main text using Eqs. (\[eq:theory\], \[eq:theory-2\]). [^1]: $\coth G\sim1$ when $G>2$. 
[^2]: The excess noise of the coherent beam would not degrade the best phase sensitivity if there were no detector dark noise; since dark noise is present, it does. [^3]: Knowledge of the probability distribution $p\left(b\right)$ is not necessary in this case.
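As a numerical cross-check of the OPA output statistics derived in the excess-noise section (our addition, not part of the supplement; numpy assumed), one can evaluate $\left<\hat{N}_{{\rm out}}\right>$ and $\Delta\hat{N}_{{\rm out}}^{2}$ and recover both limiting behaviours of $g^{\left(2\right)}$ quoted there:

```python
import numpy as np

def opa_output(N_alpha, g2_in, G):
    """Mean and variance of the output photon number of a seeded
    degenerate OPA, following the expressions in the text."""
    sh2 = np.sinh(G) ** 2
    mean = sh2 + N_alpha * np.exp(2 * G)
    var = (2 * sh2 * np.cosh(G) ** 2 + N_alpha * np.exp(4 * G)
           + (g2_in - 1) * N_alpha ** 2 * np.exp(4 * G))
    return mean, var

def g2_from_stats(mean, var):
    # g2 = <N(N-1)>/<N>^2 = 1 + (var - mean)/mean^2
    return 1 + (var - mean) / mean ** 2

G = 2.0
# unseeded limit (N_alpha = 0): g2 -> 3 + 1/sinh^2 G
m, v = opa_output(0.0, 1.0, G)
assert np.isclose(g2_from_stats(m, v), 3 + 1 / np.sinh(G) ** 2)

# bright-seed limit: the output g2 approaches the input g2 from above
g2_out = [g2_from_stats(*opa_output(N, 1.05, G)) for N in (1e2, 1e4, 1e6)]
print(g2_out)   # decreasing sequence tending to 1.05
```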
--- abstract: | To date, testing interactions in high dimensions has been a challenging task. Existing methods often have issues with sensitivity to modeling assumptions and with heavily asymptotic nominal p-values. To help alleviate these issues, we propose a permutation-based method for testing marginal interactions with a binary response. Our method searches for pairwise correlations which differ between classes. In this manuscript, we compare our method on real and simulated data to the standard approach of running many pairwise logistic models. On simulated data our method finds more significant interactions at a lower false discovery rate (especially in the presence of main effects). On real genomic data, although there is no gold standard, our method finds apparent signal and tells a believable story, while logistic regression does not. We also give asymptotic consistency results under not too restrictive assumptions.\ [*Keywords*: correlation, high dimensional, logistic regression, false discovery rate]{} author: - 'Noah Simon [^1]' - 'Rob Tibshirani[^2]' bibliography: - '/home/nsimon/texlib/simon.bib' title: A Permutation Approach to Testing Interactions in Many Dimensions --- Introduction {#sec:intro} ============ In many areas of modern science, massive amounts of data are generated. In the biomedical sciences, examples arise in genomics, proteomics, and flow cytometry. New high-throughput experiments allow researchers to look at the dynamics of very rich systems. With these vast increases in data accumulation, scientists have found classical statistical techniques in need of improvement, and classical notions of error control (type 1 error) overwhelmed. Consider the following two-class situation: our data consist of $n$ observations, each observation with a known class label of 1 or 2, and $p$ covariates measured per observation. 
Let $y$ denote the $n$-vector of class labels (with $n_1$ observations in class $1$ and $n_2$ in class $2$), and $X$ the $n\times p$ matrix of covariates. We often assume each row of $X$ is independently normally distributed with some class-specific mean $\mu_{y(i)}\in\mathbb{R}^p$ and covariance $\Sigma_{y(i)}$ (as, for instance, in quadratic discriminant analysis). Here, we are interested in differences between classes. A common example is gene expression data on healthy and diseased patients: the covariates are the genes ($p\sim 20,000$), the observations are patients ($n\sim 100$) belonging to either the healthy or diseased class. Here, one might look at differences between classes to develop a genetic prognostic test of the disease, or to better understand its underlying biology. Recent high dimensional procedures have focused on detecting differences between $\mu_1$ and $\mu_2$ by considering them one covariate at a time. In this paper we consider the more difficult problem of testing marginal interactions. In a fashion similar to the approaches used in large scale testing of main effects (see e.g. @DSB2003, @TTC01 and @efron2010ebayes), we do this on a pair-by-pair basis. The standard approach for this problem has been to run many bivariate logistic regressions and then conduct a post-hoc analysis on the nominal p-values. @buzkova2011 has a nice summary of the subtle issues that arise in testing for just a single interaction in a regression framework. In particular, a permutation approach cannot simply be applied, because it tests the null hypothesis of no interaction and no main effects at the same time. In the high-dimensional setting with FDR estimates, these issues are compounded. The logistic regression based methods are all derived from what we call a [*forward model*]{}, that is, a model for the conditional distribution of $Y|X$. In contrast, a [*backward model*]{} (discussed below) is a model for the conditional distribution of $X|Y$. 
We propose a method, based on a backward model, to approach this same problem. By using this backward framework we avoid many of the pitfalls of standard approaches: we have a less model-based method, we attack a potentially more scientifically interesting quantity, and we can use a permutation null for FDR estimates. Our approach unfortunately applies only to a binary response — the backward model is more difficult to work with for continuous $y$. In this paper we develop our method, and show its efficacy as compared to straightforward logistic regression on real and simulated data. We explain how to deal with nuisance variables, and give insight into our permutation-based estimates of FDR. We also give some asymptotic consistency results. Existing Methods {#sec:exist} ================ We begin by examining the standard approach and its issues in more depth. In general one might like to specify a generative logistic model for the data (a forward model) of the form $$\label{eq:inter} \operatorname{logit}\left[\operatorname{P}(y_i = 1 | X_{i,\cdot})\right] = \beta_0 + \sum_{j=1}^p \beta_j X_{i,j} + \sum_{k< j}\gamma_{j,k} X_{i,j} X_{i,k}$$ where $X_{i,\cdot}$ is the $i$-th row of $X$, and test whether the $\gamma_{j,k}$ are nonzero in this model. Here $i$ indexes the observations and $j,k$ index the predictors. However, because it is a joint rather than a marginal model, this does not easily allow us to test individual pairs of covariates separately from the others. Furthermore, in the scenario with $n < p(p+1)/2$, the MLE for this model is not well defined (one can always get perfect separation) and non-MLE estimates are very difficult to use for testing. 
Alternatively, for each pair $(X_{i,j}, X_{i,k})$ one might assume a generative logistic model of the form $$\label{eq:bivLog} \operatorname{logit}\left[\operatorname{P}(y_i = 1 | X_{i,j}, X_{i,k})\right] = \beta_0 + \beta_j X_{i,j} + \beta_k X_{i,k} + \gamma_{j,k}X_{i,j} X_{i,k}$$ and estimate or test $\gamma_{j,k}$ using the MLE $\hat\gamma_{j,k}$. A standard approach to this problem in the past has been to fit the pairwise logistic models (\[eq:bivLog\]) independently for every pair $(j,k)$, and then use standard tools (i.e. asymptotic normality of the MLE) to calculate approximate $p$-values. Once the $p(p-1)/2$ $p$-values are calculated, the approach of @BH95 or some other standard procedure can be used to estimate/control FDR. This approach has a number of problems. First of all, while the approach is very model-based, one cannot even ensure that all of the bivariate logistic models are consistent with one another (i.e. that there is a multivariate model with the given marginals). In particular, model misspecification will often cause over-dispersion, resulting in anti-conservative FDR estimates. Also, if the true model contained quadratic terms (which the model (\[eq:bivLog\]) does not include), then for correlated pairs of features this approach will compensate by trying to add false interactions. Even if we did believe the model, the $p$-values are only approximate, and this approximation grows worse as we move into the tails. One might hope to avoid some of these issues by using permutation $p$-values; however, as shown in @buzkova2011, permutation methods are incongruous with this approach — they test the joint null hypothesis of no main effect or interaction, which is not the hypothesis of interest. This difficulty is also discussed in @pesarin2001. In an attempt to resolve this, @kooperberg2008 regress out the main effects before permuting the residuals. This is a nice adjustment, but is still heavily model-based. 
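For concreteness, the @BH95 step mentioned above can be sketched in a few lines (our illustration, not from the manuscript; numpy assumed, with made-up p-values):

```python
import numpy as np

def bh_reject(pvals, q=0.1):
    """Benjamini-Hochberg: reject all hypotheses ranked at or below the
    largest k with p_(k) <= q*k/m.  Returns a boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()      # largest rank satisfying the bound
        reject[order[:k + 1]] = True
    return reject

pv = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
print(bh_reject(pv, q=0.05).sum())   # 2 hypotheses rejected at q = 0.05
```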
To deal with these issues, we take a step back and use a different generative model. Our generative model has an equivalent logistic model and this correspondence allows us to sidestep many of the issues with the standard logistic approach. Forward vs Backward Model {#sec:forVsback} ------------------------- We propose to begin with a “backward” generative model — as mentioned in Section \[sec:intro\], we assume that observations are Gaussian in each class $\left(x_i|y_i\right) \sim N(\mu_{y(i)}, \Sigma_{y(i)})$ with a class specific mean and covariance matrix. We argue that the most natural test of interaction is a test of equality of correlations between groups. Toward this end, let us apply Bayes theorem to our backwards generative model, to obtain $$\begin{aligned} \operatorname{P}(y = 1 | x) &= \frac{\pi_1 \operatorname{exp}\left(l_1\right)}{\pi_2 \operatorname{exp}\left(l_2\right) + \pi_1 \operatorname{exp}\left(l_1\right)}\\ &= \frac{ \operatorname{exp}\left[\operatorname{log}(\pi_1/\pi_2) + l_1 - l_2\right]}{1 + \operatorname{exp}\left[\operatorname{log}(\pi_1/\pi_2) + l_1 - l_2\right]}\end{aligned}$$ where $$l_m = -p\operatorname{log}\left(2\pi\right)/2 - \operatorname{logdet}\left(\Sigma_m\right)/2 - (x-\mu_m)^{\top}\Sigma_m^{-1}(x-\mu_m)/2$$ and $\pi_m$ is the overall prevalence of class $m$. 
We can simplify this to $$\begin{aligned} \operatorname{logit} \left(P\right) &= \operatorname{logdet}\left(\Sigma_2\right)/2 - \operatorname{logdet}\left(\Sigma_1\right)/2 + \operatorname{log}(\pi_1/\pi_2) + \mu_2^{\top}\Sigma_2^{-1}\mu_2/2\\ &- \mu_1^{\top}\Sigma_1^{-1}\mu_1/2 + \left(\Sigma_1^{-1}\mu_1 - \Sigma_2^{-1}\mu_2\right)^{\top} x + x^{\top}\left(\Sigma_2^{-1} - \Sigma_1^{-1}\right) x/2.\end{aligned}$$ This is just a logistic model with interactions and quadratic terms, and writing it in the form of (\[eq:inter\]) (with additional quadratic terms) we have $$\begin{aligned} \beta_0 &= \operatorname{logdet}\left(\Sigma_2\right)/2 - \operatorname{logdet}\left(\Sigma_1\right)/2 + \operatorname{log}(\pi_1/\pi_2)\\ &+ \mu_2^{\top}\Sigma_2^{-1}\mu_2/2 - \mu_1^{\top}\Sigma_1^{-1}\mu_1/2\\ \beta_{j} &= \left(\Sigma_1^{-1}\mu_1 - \Sigma_2^{-1}\mu_2\right)_j\\ \gamma_{j,k} &= \left(\Sigma_2^{-1} - \Sigma_1^{-1}\right)_{j,k}.\end{aligned}$$ From here we can see that traditional logistic regression interactions in the full model correspond to nonzero off-diagonal elements of $\Sigma_2^{-1} - \Sigma_1^{-1}$. Testing for non-zero elements here is not particularly satisfying, for a number of reasons. Because the coordinate estimates are so intertwined, there is no simple way to marginally test for non-zero elements of $\Sigma_2^{-1} - \Sigma_1^{-1}$ — in particular there is no straightforward permutation test. Also, for $n<p$ the MLEs for the precision matrices are not well defined. As in the logistic model (\[eq:bivLog\]), we may condition on only a pair of covariates $j$ and $k$ in our backward model. 
Using Bayes theorem as above, our equivalent bivariate forward model is $$\begin{aligned} \operatorname{logit}\left[\operatorname{P}(y = 1 |\, \tilde{x} = \left(x_j, x_k\right)^{\top})\right] &= \operatorname{logdet}\left(\tilde{\Sigma}_2\right)/2 - \operatorname{logdet}\left(\tilde{\Sigma}_1\right)/2 + \operatorname{log}(\pi_1/\pi_2) + \tilde{\mu}_2^{\top}\tilde{\Sigma}_2^{-1}\tilde{\mu}_2/2 - \tilde{\mu}_1^{\top}\tilde{\Sigma}_1^{-1}\tilde{\mu}_1/2\\ & + \left(\tilde{\Sigma}_1^{-1}\tilde{\mu}_1 - \tilde{\Sigma}_2^{-1}\tilde{\mu}_2\right)^{\top} \tilde{x} + \tilde{x}^{\top}\left(\tilde{\Sigma}_2^{-1} - \tilde{\Sigma}_1^{-1}\right) \tilde{x}/2\end{aligned}$$ where $\tilde{\mu}_m$ and $\tilde{\Sigma}_m$ are the mean vector and covariance matrix in class $m$ for only $X_j$ and $X_k$. Hence the backward model has an equivalent logistic model similar to (\[eq:bivLog\]), but with quadratic terms included as well. One should note that the main effect and interaction coefficients in this marginal model *do not* match those from the full model (i.e. the marginal interactions and conditional interactions are different). Our usual marginal logistic interaction between covariates $j$ and $k$ corresponds to a nonzero off-diagonal entry in $\tilde{\Sigma}_2^{-1} - \tilde{\Sigma}_1^{-1}$. Simple algebra gives $$\tilde{\Sigma}^{-1}_{m(1,2)} = -\left(\frac{R_{m(j,k)}}{\sigma_{m(j)}\sigma_{m(k)}\left(1-R_{m(j,k)}^2\right)}\right)$$ where $R_{m(j,k)}$ is the correlation between features $j$ and $k$ in class $m$, and $\sigma_{m(j)}$ is the standard deviation of variable $j$ in class $m$. Thus, if we were to test for “logistic interactions” in our pairwise backward model, we would be testing $$\frac{R_{1(j,k)}}{\sigma_{1(j)}\sigma_{1(k)}\left(1-R_{1(j,k)}^2\right)} = \frac{R_{2(j,k)}}{\sigma_{2(j)}\sigma_{2(k)}\left(1-R_{2(j,k)}^2\right)}.$$ Now, if $\sigma_{1(j)} = \sigma_{2(j)}$ and $\sigma_{1(k)} = \sigma_{2(k)}$, then this is equivalent to testing whether $R_{1(j,k)} = R_{2(j,k)}$. If not, then a number of unsatisfying things may happen. 
For example, if the variance of a single variable changes between classes then, even if its correlation with other variables remains the same, it still has an “interaction” with all variables with which it is correlated. This change of variance is a characteristic of a single variable, and it seems scientifically misleading to call this an “interaction” between a pair of features. Toward this end, we consider a restricted set of null hypotheses — rather than testing for an interaction between each pair of features $(j,k)$, we test the null $R_{1(j,k)} = R_{2(j,k)}$. Not all logistic interactions will have $R_{1(j,k)} \neq R_{2(j,k)}$, but we believe this is the property which makes an interaction physically/scientifically interesting. To summarize, there are a number of issues in the forward model which are alleviated through the use of the backward model: - The marginal forward models are not necessarily consistent (one cannot always find a “full forward model” with the given marginals). - Omitted quadratic terms may be mistaken for interactions between correlated covariates. - The interesting interactions are only those for which $R_{1(j,k)} \neq R_{2(j,k)}$. - $P$-values are approximate and based on parametric assumptions. Proposal {#sec:method} ======== We begin with the generative model described in Section \[sec:forVsback\] — we assume observations are Gaussian in each class, $\left(x_i|y_i\right) \sim N(\mu_{y(i)}, \Sigma_{y(i)})$, with a class-specific mean and covariance matrix. As argued above, we test for interactions by testing $$\mathbf{H}_{j,k}:\,R_{1(j,k)} = R_{2(j,k)}$$ for each $j<k$, where again, $R_{m(j,k)}$ denotes the $(j,k)$-th entry of the correlation matrix for class $m$. If we were only testing one pair of covariates $(j,k)$, a straightforward approach would be to compare the sample correlation coefficients $\hat{R}_{1(j,k)}$ and $\hat{R}_{2(j,k)}$. 
In general, because the variance of $\hat{R}_{m(j,k)}$ depends on $R_{m(j,k)}$, it is better to make inference on a Fisher-transformed version of $\hat{R}_{m(j,k)}$: $$U_{m(j,k)} = \operatorname{arctanh}\left(\hat{R}_{m(j,k)}\right) \dot{\sim} N\left(\operatorname{arctanh}\left(R_{m(j,k)}\right),\frac{1}{n_m-3}\right).$$ This is a variance stabilizing transformation. Now, to compare the two correlations we consider the statistic $$\label{eq:stat} T_{(j,k)} = U_{1(j,k)} - U_{2(j,k)} \dot{\sim} N\left(\operatorname{arctanh}\left(R_{1(j,k)}\right) - \operatorname{arctanh}\left(R_{2(j,k)}\right),\frac{1}{n_1-3} + \frac{1}{n_2-3}\right).$$ Under the null hypothesis $R_{1(j,k)} = R_{2(j,k)}$, this statistic is distributed $N\left(0,\frac{1}{n_1-3} + \frac{1}{n_2-3}\right)$. To test if the correlations are equal we need only compare our statistic $T_{(j,k)}$ to its null distribution and find a $p$-value. While this approach works well for single tests, because we are in the high dimensional setting we use a different approach, one which does not rely on the statistic’s asymptotic normal distribution. We are interested in testing differences between two large correlation matrices in higher dimensional spaces. We again calculate the differences of our transformed sample correlations — we now calculate $p(p-1)/2$ statistics, one for each pair $(j,k)$ with $j<k$. However, to assess significance we no longer just compare each statistic to the theoretical null distribution and find a $p$-value. Instead we directly estimate false discovery rates (FDR): we choose some threshold $t$ for our statistics, and reject (call significant) all $(j,k)$ with $|T_{(j,k)}| > t$. Clearly, not all marginal interactions called significant in this way will be truly non-null, and it is important to estimate the FDR of the procedure for this cutoff, that is $$\operatorname{FDR} = E\left[\frac{\textrm{\# false rejections}}{\textrm{\# total rejections}}\right],$$ where ‘\#’ is short-hand for “number of”. 
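The statistic itself is short to compute. A minimal sketch (our addition, not from the manuscript; numpy assumed) evaluates $T_{(j,k)}$ for all pairs from a data matrix and class labels, here on synthetic data where only one pair changes correlation between classes:

```python
import numpy as np

def fisher_z_diffs(X, y):
    """T_{(j,k)} = arctanh(R1_{jk}) - arctanh(R2_{jk}) for all pairs j < k."""
    R1 = np.corrcoef(X[y == 1], rowvar=False)
    R2 = np.corrcoef(X[y == 2], rowvar=False)
    ju, ku = np.triu_indices(X.shape[1], k=1)
    T = np.arctanh(R1[ju, ku]) - np.arctanh(R2[ju, ku])
    return list(zip(ju, ku)), T

rng = np.random.default_rng(0)
y = np.repeat([1, 2], 100)
X = rng.standard_normal((200, 5))
X[y == 1, 1] += 0.9 * X[y == 1, 0]   # correlate features 0 and 1 in class 1 only
pairs, T = fisher_z_diffs(X, y)
j, k = pairs[int(np.argmax(np.abs(T)))]
print(int(j), int(k))                # the artificially correlated pair stands out
```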
It is standard to approximate this quantity by $$\label{eq:FDR} \frac{\hat{E}[\textrm{\# false rejections}]}{\textrm{\# total rejections}}.$$ The denominator is just the number of $|T_{(j,k)}| > t$ (which we know). If we knew which hypotheses were null, and their distributions, then we could find the numerator by $$\label{eq:numer} E[\textrm{\# false rejections}] = \sum_{(j,k) \textrm{ null}} \operatorname{P}(|T_{(j,k)}| > t).$$ Clearly we do not know which hypotheses are null. To estimate (\[eq:numer\]) we propose the following permutation approach. We first center and scale our variables within class: for each observation we subtract off the class mean for each feature and divide by that feature’s within-class standard deviation — let $\tilde{X}$ denote this standardized matrix. This standardization does not change our original statistics $T_{(j,k)}$ (the correlations calculated from $X$ and $\tilde{X}$ are identical), but is important for our null distribution. Now, let $\pi$ be some random permutation of $\{1,\ldots,n\}$. Thus, $\pi(y)$ is a random permutation of the class memberships of the standardized variables (we keep the standardization from before the permutation). With these new class labels we calculate a new set of $p(p-1)/2$ statistics, $\{T^{*a}_{(j,k)}\}_{j<k}$ (for the $a$-th permutation). We can permute our data $A$ times, and gather a large collection ($Ap(p-1)/2$) of these null statistics. To estimate $E[\textrm{\# false rejections}]$, we take the average number of these statistics that lie above our cutoff: $$\hat{E}[\textrm{\# false rejections}] = \frac{1}{A} \sum_{a=1}^A \# \{|T^{*a}_{(j,k)}| > t\}.$$ Often, one is interested in the FDR of the $l$ most significant interactions. In this case the cutoff $t$ is chosen to be the absolute value of the $l$-th most significant statistic, denoted $T(l)$. We refer to this procedure as Testing Marginal Interactions through correlation (TMIcor) and summarize it below. [**TMIcor: Algorithm for Testing Marginal Interactions**]{}\[alg:1\] 1. 
Mean center and scale $X$ within each group. 2. Calculate the feature correlation matrices $\hat R_1$ and $\hat R_2$ within each class. 3. Fisher transform the entries (for $j<k$): $U_{m(j,k)} = \operatorname{arctanh}\left(\hat{R}_{m(j,k)}\right)$\ and take their coordinate-wise differences: $T_{(j,k)} = U_{1(j,k)} - U_{2(j,k)}$ 4. for $a=1,\ldots,A$ execute the following 1. Randomly permute the class labels of the standardized variables. 2. Using the new class labels, reapply steps 2–3 to calculate new statistics $\{T^{*a}_{(j,k)}\}_{j<k}$ 5. Estimate the FDR for the $l$ most significant interactions by $$\widehat{{\rm FDR}} = \frac{\left(\frac{1}{A}\right) \sum_{a=1}^A \#\{|T^{*a}_{(j,k)}| > T(l)\}}{l}$$ Using this approach, one gets a ranking of pairs of features and an FDR estimate for every position in the ranking. Furthermore, rather than testing for interactions between all pairs of variables, one may instead test for interactions between variables in one set (such as genes) and variables in another (such as environmental variables). To do this, one need only restrict the statistics considered in steps $3$, $4b$ and $5$. Standardizing in step $(1)$ before permuting may seem strange, but in this case it is necessary. If we do not standardize first, we are testing the joint null that the means, variances and correlations are all the same between classes. This is precisely what we moved to the backward model to avoid — by standardizing we avoid permuting the “main effects”. We discuss this permutation-based estimate of FDR in more depth in appendix A. Comparisons {#sec:comparisons} =========== In this section we apply TMIcor and the standard logistic approach to real and simulated data. On simulated data we see that in some scenarios (in particular with main effects) the usual approach has serious power issues as compared to TMIcor. 
Similarly, on our real dataset we see that the usual approach does a poor job of finding interesting interactions, while TMIcor does well. Simulated Data -------------- We attempt to simulate a simplified version of biological data. In general, groups of proteins or genes act in concert based on biological processes. We model this with a block diagonal correlation matrix — each block of proteins/genes is equi-correlated. This can be interpreted as a latent factor model — all the proteins in a single block are highly correlated with the same latent variable (perhaps some unmeasured cytokine), and conditional on this latent variable, the proteins are all uncorrelated. In our simulations we use $10$ blocks, each with $10$ proteins ($100$ total proteins). We simulate the proteins for our healthy controls as jointly Gaussian with $0$ mean and covariance matrix $$\Sigma_1 = \begin{pmatrix} R_1 & 0 & \cdots & 0\\ 0 & R_2 & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots\\ 0 & \cdots & 0 & R_{10} \end{pmatrix}$$ where each $R_i$ is a $10\times 10$ matrix with $1$s along the diagonal, and a fixed $\rho_i>0$ for all off-diagonal entries. Now, for our diseased patients we again use mean $0$ proteins, but change our covariance matrix to $$\Sigma_2 = \begin{pmatrix} \tilde{R}_1 & 0 & \cdots & 0\\ 0 & R_2 & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots\\ 0 & \cdots & 0 & R_{10} \end{pmatrix}$$ where $\tilde{R}_1$ has $1$s on the diagonal and $\tilde{\rho}_1$ for all off-diagonal entries (with $0\leq \tilde{\rho}_1 \neq \rho_1$). This correlation structure would be indicative of a mutation, in the diseased group, of the cytokine underlying the first block, changing the association between that signaling protein and the rest of its block. Within each class (diseased and healthy) we simulated $250$ patients and applied TMIcor and the usual logistic approach. We averaged the true and estimated false discovery rates of these methods over $10$ trials. 
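The simulated design is easy to reproduce. A sketch (our addition, not from the manuscript; numpy assumed, using the $\rho_1 = 0.3$, $\tilde{\rho}_1 = 0.6$ setting of the second simulation plot) generates the two classes:

```python
import numpy as np

def block_equicorr(rhos, block_size=10):
    """Block-diagonal correlation matrix: each block is equi-correlated
    with its own rho; distinct blocks are uncorrelated."""
    p = block_size * len(rhos)
    Sigma = np.zeros((p, p))
    for b, rho in enumerate(rhos):
        s = slice(b * block_size, (b + 1) * block_size)
        Sigma[s, s] = rho
    np.fill_diagonal(Sigma, 1.0)
    return Sigma

rng = np.random.default_rng(1)
rhos = [0.3] * 10
Sigma1 = block_equicorr(rhos)              # healthy controls
Sigma2 = block_equicorr([0.6] + rhos[1:])  # diseased: first block altered
X1 = rng.multivariate_normal(np.zeros(100), Sigma1, size=250)
X2 = rng.multivariate_normal(np.zeros(100), Sigma2, size=250)
print(X1.shape, X2.shape)   # (250, 100) (250, 100)
```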
As we can see from Figure \[fig:1\], TMIcor outperforms the logistic approach. This difference is particularly pronounced in the second plot of Figure \[fig:1\]. In this plot, because the correlations are large but different in both groups ($\rho_1 = 0.3$, $\tilde{\rho}_1 = 0.6$), there are some moderate quadratic effects in the true model — this induces a bias in the logistic approach and its FDR suffers. In contrast, these quadratic effects are not problematic in the backward framework. We also consider a second set of simulations. This set used $\rho_i = 0.3$ for all $i$ and $\tilde{\rho}_1 = 0$. However, instead of mean $0$ in both classes, we set the mean for all proteins in block 1 for diseased patients to be some $\tilde{\mu}_1$ ($> 0$). The results are plotted in Figure \[fig:2\]. This mean shift had no effect on TMIcor (the procedure is mean-shift invariant), but as the mean difference grows, it becomes increasingly difficult for the logistic regression to find any interactions. This issue is especially important as, biologically, one might expect genes with main effects to be more likely to have true marginal interactions (and these interactions may also be more scientifically interesting). While these simulations are not exhaustive, they give an indication of a number of scenarios in which TMIcor significantly outperforms logistic regression. More exhaustive simulations were run and the results mirrored those in this section.

Real Data
---------

We also applied both TMIcor and logistic regression to the colitis gene expression data of @burczynski2006. In this dataset, there are $127$ total patients, $85$ with colitis ($59$ Crohn’s patients + $26$ ulcerative colitis patients) and $42$ healthy controls. We restricted our analysis to the $101$ patients without ulcerative colitis. Each patient had expression data for $22283$ genes run on an Affymetrix U133A microarray.
Because chromosomes $5$ and $10$ have been implicated in Crohn’s disease, we enriched our dataset by using only the genes on these chromosomes, along with the $NOD2$ and $ATG16L1$ genes (chromosomes as specified by the $C1$ geneset from @subramanian2005). In total $663$ genes were used. Some of these genes were measured by multiple probesets — the final expression values used for those genes were the average of all probesets. From these $663$ genes we have $219,453$ interactions to consider. Figure \[fig:fdr\] shows the estimated FDR curves for the two methods. TMIcor finds many more significant interactions — at an FDR cutoff of $0.1$, TMIcor finds $2570$ significant interactions, while the logistic approach finds $15$. The $15$ significant interactions from the logistic approach may not even be entirely believable — the smallest p-value of the $15$ is roughly $1/219453$, which is what we would expect it to be if all null hypotheses were true. Because the smallest p-value is large, we see that the FDR for logistic regression begins surprisingly high. The FDR subsequently drops because there are a number of p-values near the smallest; however, the significance of these hypotheses is still suspect.

![Crohn’s data; FDR estimates for TMIcor and logistic approaches for the $5000$ most significant marginal interactions[]{data-label="fig:fdr"}](FDRlarge.pdf){width="3in"}

Unfortunately interpreting $2570$ marginal interactions is difficult (even if all are true). Toward this end we consider the graphical representation of our analysis in Figure \[fig:graphBig\]. Each gene is a node in our graph, and edges between genes signify marginal interactions. In this plot we considered only those $1250$ of the $2570$ significant marginal interactions indicative of a decrease in correlation from healthy control to Crohn’s (i.e. $T_{(j,k)} > 0$). There is one large connected component, a few connected pairs and a large number of isolated genes.
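Grouping the significant pairs into connected components, as in Figure \[fig:graphBig\], is a standard graph computation. A minimal union-find sketch over a hypothetical edge list of significant gene pairs:

```python
def connected_components(edges):
    """Group nodes of an interaction graph into connected components
    using union-find; edges is a list of (gene_j, gene_k) pairs."""
    parent = {}
    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for j, k in edges:
        parent[find(j)] = find(k)          # union the two endpoints' roots
    comps = {}
    for node in list(parent):
        comps.setdefault(find(node), []).append(node)
    # largest component first, as in the plots
    return sorted(comps.values(), key=len, reverse=True)
```

Feeding in the $1250$ significant decreasing-correlation pairs would recover the one large component, the connected pairs, and the isolated genes described above.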
The connected component appears to be split into $2$ clusters. To get a better handle on this, we considered a more stringent cutoff for significant interactions — at an FDR cutoff of $0.03$, we are left with $832$ significant interactions of which only $402$ have $T_{(j,k)} > 0$. We plot this graph in Figure \[fig:graphSmall\]: we see that our large connected component has divided into $2$. From here we further zoomed in on each component (now displaying only the $50$ most significant interactions per component), and can actually see which genes are most important (in Figure \[fig:graphComp\]).\

![Graph of $1250$ marginal interactions (with decreasing correlation) significant at FDR cutoff of $0.1$. Genes with no significant interactions not shown[]{data-label="fig:graphBig"}](graphBig.pdf){width="3in"}

![Graph of $402$ marginal interactions (with decreasing correlation) significant at FDR cutoff of $0.03$. Genes with no significant interactions not shown[]{data-label="fig:graphSmall"}](graphSmall.pdf){width="3in"}

It appears, from this analysis, that there are two genetic pathways which are modified in Crohn’s disease. Many of the genes in each cluster are already known to be implicated in Crohn’s, but to our knowledge these interactions have not been considered.

Dealing with Nuisance Variables {#sec:nuis}
===============================

Often, aside from the variables of interest, one may believe that other nuisance variables play a role in complex interactions. For example, it seems reasonable that many genes are conditionally independent given age, but are each highly correlated with age. Ignoring age, these genes would appear to be highly correlated, but this correlation is uninteresting to us. TMIcor can be adapted to deal with these nuisance variables provided there are few of them compared to the number of observations, they are continuous, and they are observed. We resolve this issue by using partial correlations.
Assume $x_j$ and $x_k$ are our variables of interest, and $z$ is a vector of potential confounders. Rather than comparing $\operatorname{cor}\left(x_j, x_k\right)$ in groups $1$ and $2$, we compare the partial correlations, $\operatorname{cor}\left(\left[x_j|z\right], \left[x_k|z\right]\right)$. This is done by first regressing our potential confounders, $Z$, out of all the other features, then running the remainder of the analysis as usual. To adapt the original algorithm in Section \[sec:method\] to deal with nuisance variables we need only replace step $(1)$ by:

1. Replace our feature matrices $X_1$ and $X_2$ by $$\tilde{X}_m = \left[I - Z_m\left(Z_m^{\top}Z_m\right)^{-1}Z_m^{\top}\right]X_m$$ Now, mean center and scale $\tilde{X}$ within each group.

We give more details motivating this approach and discussing potential computational advantages in appendix B.

Asymptotics {#sec:asymptotics}
===========

In this section we give two asymptotic results. We show that if $n\rightarrow \infty$, and $\frac{\log p_n}{n} \rightarrow 0$, then under certain regularity conditions our procedure for testing marginal interactions (in the absence of nuisance variables) is asymptotically consistent — with probability approaching $1$ it calls significant all true marginal interactions and makes no false rejections. Furthermore, using the permutation null, it also consistently estimates that the true FDR is converging to $0$. Because we only need $\frac{\log p_n}{n} \rightarrow 0$, $p_n$ may increase very rapidly in $n$. We first give a result showing that for sub-Gaussian variables our null statistics converge to $0$ and our alternative statistics are asymptotically bounded away from $0$. The proof of this theorem is based on several technical lemmas which we relegate to appendix C. \[thm:con\] Let $\tilde{x}_{1(j)}$ and $\tilde{x}_{2(j)}$, $j=1,\ldots$ be random variables.
Assume there is some $C>0$ such that for all $t\geq 0$ $$\operatorname{P}\left(\left|x_{m(j)} - \operatorname{E}[x_{m(j)}]\right| > t\right) \leq \operatorname{exp}\left(1-t^2/C^2\right)$$ for each $m=1,2$. Let $\mu_{m(j)}$ denote the mean of $\tilde{x}_{m(j)}$ and $\sigma_{m(j)}^2$ its variance. For each $i = 1, 2, \ldots$, let $x_{m(i,\cdot)}$ be independent realizations with the same distribution as $\tilde{x}_{m(\cdot)}$. Let $p_n$ be a sequence of integers such that $\frac{\log p_n}{n} \rightarrow 0$. Let $R_{m}$ be the correlation “matrix” (an infinite but countably indexed matrix) of the covariates from group $m$. Let $I$ denote the set of ordered pairs $(j,k)$ for which $R_{1(j,k)} \neq R_{2(j,k)}$, and $C_n$ denote the set of ordered pairs $(j,k)$ with $j,k\leq p_n$. Assume for every $m$ and $j$, $\sigma_{m(j)}^2 \geq \sigma_{\min}^2$ (for some $\sigma_{\min}^2 > 0$). Furthermore, assume that for all $(j,k)$ in $I$, $\left|R_{1(j,k)} - R_{2(j,k)}\right| > \Delta_{\min}$ for some $\Delta_{\min}>0$ and that for $m=1,2$, $\operatorname{sup}_{j<k}\left|R_{m(j,k)}\right| < \rho_{\max}$ for some fixed $\rho_{\max} < 1$.\ Now, given any $\epsilon_p >0$, and $0 < t < \Delta_{\min}$, if we choose $n$ sufficiently large, then with probability at least $1 - \epsilon_p$ $$\left|T_{(j,k)}\right| \leq t$$ for all $(j,k)$ in $C_n - I$, and $$\left|T_{(j,k)}\right| \geq t$$ for all $(j,k)$ in $C_n\cap I$. The notation here is a little bit tricky, but the result is very straightforward: under some simple conditions, we find all marginal interactions and make no false identifications. While there were a number of assumptions in the above theorem, most of these are fairly trivial and will almost always hold in practice: the variance must be bounded away from $0$ and the correlations bounded away from $\pm 1$.
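As a toy illustration of this dichotomy (our own, not from the paper): with a single truly differing correlation, the corresponding Fisher-transformed difference stays bounded away from $0$ while a null statistic shrinks with $n$.

```python
import math
import random

def fisher_stat(pairs1, pairs2):
    """T = arctanh(r1) - arctanh(r2) from two lists of (x, y) samples."""
    def corr(pairs):
        n = len(pairs)
        xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]
        mx, my = sum(xs) / n, sum(ys) / n
        sx = math.sqrt(sum((v - mx) ** 2 for v in xs) / n)
        sy = math.sqrt(sum((v - my) ** 2 for v in ys) / n)
        return sum((xs[i] - mx) * (ys[i] - my) for i in range(n)) / (n * sx * sy)
    return math.atanh(corr(pairs1)) - math.atanh(corr(pairs2))

def draw_pairs(n, rho, rng):
    """n draws of a bivariate normal pair with correlation rho."""
    out = []
    for _ in range(n):
        u = rng.gauss(0, 1)
        out.append((u, rho * u + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)))
    return out
```

With $R_{1(j,k)} = R_{2(j,k)}$ the statistic is near $0$; with $R_{1(j,k)} = 0$, $R_{2(j,k)} = 0.5$ it sits near $\operatorname{arctanh}(0) - \operatorname{arctanh}(0.5) \approx -0.55$.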
The assumption that the correlation differences are bounded below by a fixed $\Delta_{\min}$ for true marginal interactions is a bit more cumbersome, but may easily be relaxed to $\Delta_{\min} \rightarrow 0$ at a slow enough rate that $\Delta_{\min} /\left[\log p /n\right]^{1/2} \rightarrow \infty$. The astute reader might note that our assumption bounding the variance away from $0$ seems strange — the distribution of the sample correlation is independent of the variance. This is necessary only because we assumed the covariates have a sub-Gaussian tail with a shared constant $C$. One could have relaxed the bounded variance assumption to the assumption that $\left\{x_j/\sigma_j\right\}_{j=1,\ldots}$ have a sub-Gaussian tail with a shared constant $C$.

Permutation Consistency
-----------------------

Now that we have shown our procedure has FDR converging to $0$, we would like to show that it asymptotically estimates FDR consistently as well. In particular we show that as $n\rightarrow \infty$, if $\frac{\log p}{n}\rightarrow 0$, then with probability approaching $1$, for a random permutation, our permuted statistics converge to $0$ uniformly in probability ($\max_{j,k}\left|T_{(j,k)}^*\right| \leq t$ for any fixed $t>0$ with probability converging to $1$). Thus our estimated FDR converges to $0$ under the same conditions as our true FDR. We begin with some notation. Let us consider an arbitrary permutation of class labels, $\Pi$. Let $\hat{\pi}$ denote the proportion of observations from class $1$ that remain in class $1$ after permuting. We discuss a somewhat simplified procedure in our proof, as otherwise the algebra becomes significantly more painful (without any added value in clarity), but it is straightforward to carry the proof through to the full procedure. In our original procedure, after permuting class labels we recenter and rescale our variables within each class.
Because we already centered and scaled variables before permuting, this step will have very little effect on our procedure (though it does have the nice effect of never giving $|\rho^{*}| > 1$). In this proof we consider a procedure identical in every way except without recentering and rescaling within each permutation. Before we give the theorem, we would like to define a few new terms for clarity. For a given permutation $\Pi$, let $\Pi_i(m)\in\left\{1,2\right\}$ be the permuted class of the $i$-th observation originally in class $m$. Furthermore, let $\Pi\left(m,l\right)$ be the set of observations in class $m$ that are permuted to class $l$, and let $\Pi\left(\cdot,l\right)$ be the set of observations in both classes permuted to class $l$, i.e. $$\begin{aligned} \Pi\left(m,l\right) &= \left\{i:\,\Pi_i(m) = l\right\}\\ \Pi\left(\cdot,l\right) &= \left\{(i,m):\,\Pi_i(m) = l\right\}\end{aligned}$$ Now, we give a result which shows that for any fixed $t>0$, if our variables are sub-Gaussian with some other minor conditions, then for $n\rightarrow \infty$ and $\log p/n\rightarrow 0$, with probability approaching $1$ none of our permuted statistics will be larger than $t$; in other words, just as our true FDR converges to $0$, so will our estimated FDR. As before, the proof of this theorem is based on several technical lemmas which we again leave to appendix C. \[thm:perm\] Let $\tilde{x}_{1(j)}$ and $\tilde{x}_{2(j)}$, $j=1,\ldots$ be random variables with $$\operatorname{P}\left(|x_{m(j)} - \operatorname{E}\left[x_{m(j)}\right]|\geq t\right) \leq \operatorname{exp}\left(1-t^2/C^2\right)$$ for all $t>0$, and each $m=1,2$, with some fixed $C>0$. Let $\mu_{m(j)}$ denote the mean of $\tilde{x}_{m(j)}$ and $\sigma_{m(j)}^2$ its variance. For each $i = 1, 2, \ldots$, let $x_{m(i,\cdot)}$ be independent realizations with the same distribution as $\tilde{x}_{m(\cdot)}$. Let $p_n$ be a sequence of integers such that $\frac{\log p_n}{n} \rightarrow 0$.
Let $R_{m}$ be the correlation “matrix” (an infinite but countably indexed matrix) of the covariates from class $m$. Assume for every $m,\,j$, $\sigma_{m(j)}^2 \geq \sigma_{\min}^2$ (for some $\sigma_{\min}^2 > 0$). Furthermore, assume that for $m=1,2$, $\operatorname{sup}_{j<k}\left|R_{m(j,k)}\right| < \rho_{\max}$ for some fixed $\rho_{\max} < 1$. Now, given any $\epsilon_p >0$ and $t > 0$, if we choose $n$ sufficiently large and let $\Pi$ be a random permutation, then with probability at least $1 - \epsilon_p$ $$\left|T_{(j,k)}^*\right| \leq t$$ for all $(j,k)$ with $j,k \leq p_n$ where $$T_{(j,k)}^* = \operatorname{arctanh}\left(\hat{R}_{\textrm{perm}:1(j,k)}\right) - \operatorname{arctanh}\left(\hat{R}_{\textrm{perm}:2(j,k)}\right)$$ and $$\hat{R}_{\textrm{perm}:m(j,k)} = \frac{1}{n}\sum_{(i,l)\in\Pi(\cdot,m)}\left(\frac{x_{l(i,j)} - \hat{\mu}_{l(j)}}{\hat{\sigma}_{l(j)}}\right)\left(\frac{x_{l(i,k)} - \hat{\mu}_{l(k)}}{\hat{\sigma}_{l(k)}}\right)$$ The notation is again somewhat ugly, but the result is very straightforward: under some simple conditions, our permuted statistics are very small. In particular from the proof one can see that $\operatorname{sup}\left|T_{(j,k)}^*\right| = O_p\left(\sqrt{\log p_n/n}\right)$. Note there is an implicit indexing by $n$ in $\hat{R}_{\textrm{perm}:m(j,k)}$ (it seemed unnecessary to add more indices). As in Theorem \[thm:con\], some of our conditions may be relaxed. Instead of bounding $\sigma_j^2$ below, we need only bound $C\sigma_j$ below. Also, rather than choose a fixed cutoff $t>0$, we may use any sequence $\left\{t_n\right\}$ with $t_n/\left(\log p_n/n\right)^{1/2} \rightarrow \infty$. Also, as noted before, the result we have just shown ignores the restandardizing within each permutation; however, it is straightforward (though algebraically arduous, and not insightful) to extend this result to that case as well.
As a last note, in Theorem \[thm:perm\], we gave our consistency result for only a single permutation. This result can easily be extended to any fixed number of permutations using a union bound. This was left out of the original statement/proof as the notation is already clunky and the extension is straightforward. Through Theorems \[thm:con\] and \[thm:perm\] we have shown that, under fairly relaxed conditions, our procedure is asymptotically consistent at discovering marginal interactions and that the permutation null reflects this.

Discussion
==========

In this paper we have discussed marginal interactions for logistic regression in the framework of forward and backward models. We have developed a permutation-based method, TMIcor, which leverages the backward model. We have shown its efficacy on real and simulated data and given asymptotic results showing its consistency and convergence rate. We also plan to release a publicly available [R]{} implementation.

Appendix A {#sec:perm}
==========

In this section we give more details on our permutation-based estimate of FDR, and discuss a potential alternative. Recall that we are using the permutations to approximate $$\label{eq:numer} \sum_{(j,k) \textrm{ null}} \operatorname{P}(|T_{(j,k)}| > t).$$ For the moment, assume that all covariates in both classes have mean $0$ and variance $1$, and that we did not do any sample standardization.
Then, under the null hypothesis that $R_{1(j,k)} = R_{2(j,k)}$, $T_{(j,k)}$ calculated under the original class assignments and $T^*_{(j,k)}$ calculated under any permuted class assignments have the same distribution, so $$\sum_{(j,k) \textrm{ null}} \operatorname{P}(|T_{(j,k)}| > t) = \sum_{(j,k) \textrm{ null}} \operatorname{P}(|T^*_{(j,k)}| > t)$$ which is reasonably (and unbiasedly) approximated by $$\sum_{(j,k) \textrm{ null}} \frac{1}{A}\sum_{a=1}^AI(|T^{*a}_{(j,k)}| > t).$$ Because we do not know which genes are null, our actual estimate of \[eq:numer\] is $$\begin{aligned} \label{eq:bias} \sum_{(j,k)} \frac{1}{A}\sum_{a=1}^AI(|T^{*a}_{(j,k)}| > t) &=\sum_{(j,k) \textrm{ null}} \frac{1}{A}\sum_{a=1}^A I(|T^{*a}_{(j,k)}| > t)\\ &+ \sum_{(j,k) \textrm{ alternative}} \frac{1}{A}\sum_{a=1}^A I(|T^{*a}_{(j,k)}| > t)\end{aligned}$$ This gives a slight conservative bias (especially small if most marginal interactions are null). One should also note that unlike the null statistics, for the alternative $(j,k)$, $T^*_{(j,k)}$ are not distributed $N\left(0,\frac{2}{n-3}\right)$; they are still mean $0$, but the variance is increased. However, this conservative bias is very slight — in general there are few alternative hypotheses, and the variance increase is not large. Because in practice we do not have mean $0$, variance $1$ for all covariates in both classes, we must standardize before running our procedure. Otherwise, instead of testing for a changing correlation, we are actually testing for a different mean, variance, or correlation between classes. The effect of standardizing with the sample mean and variance rather than the true values is asymptotically washed out, and while the variance of our tests is increased for small samples, this increase is only minimal. An alternative to permutations, as discussed in @efron2010ebayes, is to directly estimate the numerator using the approximate theoretical distribution of the null statistics.
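A numeric sketch of this theoretical alternative (our own illustration, with hypothetical function names): treating each null statistic as approximately $N\left(0,\frac{1}{n_1-3} + \frac{1}{n_2-3}\right)$, the expected number of null pairs exceeding a cutoff $t$ is bounded by treating every pair as null.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def theoretical_null_count(t, n1, n2, p):
    """Conservative estimate of the number of null pairs with |T| > t,
    using T ~ N(0, 1/(n1-3) + 1/(n2-3)) under the null."""
    sd = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    tail = 2.0 * normal_cdf(-t / sd)   # two-sided tail probability
    return p * (p - 1) / 2 * tail      # bound: treat every pair as null
```

Dividing this count by the number of observed $|T_{(j,k)}| > t$ gives the plug-in analogue of the permutation FDR estimate.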
Each null statistic is asymptotically $N\left(0,\frac{1}{n_1-3} + \frac{1}{n_2-3}\right)$, so for $(j,k)$ null $$\operatorname{P}(|T_{(j,k)}| > t) \approx 2\Phi\left(-t\sqrt{\frac{(n_1 -3) (n_2-3)}{n_1 + n_2 - 6}}\right).$$ Now we can conservatively approximate the quantity in Eq \[eq:numer\] by $$\begin{aligned} \sum_{(j,k) \textrm{ null}}P\left(|T_{(j,k)}| > t\right) &\leq p(p-1)/2 \cdot P\left(|T_{\textrm{null}}| > t\right)\\ &= p(p-1)\cdot\Phi\left(-t\sqrt{\frac{(n_1 -3) (n_2-3)}{n_1 + n_2 - 6}}\right)\end{aligned}$$ While this approach is reasonable and simple, it is less robust than using permutations, and in practice, even for truly Gaussian data, it is only slightly more efficient.

Appendix B
==========

Before proceeding, we remind the reader that $x$ are our variables of interest and $z$ are potential confounding variables. Furthermore we are interested in comparing $\operatorname{cor}\left(\left[x_j|z\right], \left[x_k|z\right]\right)$ between groups. From basic properties of the Gaussian distribution we know that $$x|z\sim N\left[\mu_x + \Sigma_{(x,z)}\Sigma_{z}^{-1}\left(z - \mu_z\right), \Sigma_{(x|z)}\right]$$ where $\Sigma_{(x|z)}$ is the variance/covariance matrix of $x$ given $z$, $\Sigma_{(x,z)}$ is the covariance matrix between $x$ and $z$, $\Sigma_z$ is the variance matrix of $z$, and $\mu_x$ and $\mu_z$ are the means of $x$ and $z$. Now, if $\mu_x,\,\mu_z,\,\Sigma_{(x,z)},$ and $\Sigma_{z}$ were known, then the MLE for $\Sigma_{(x|z)}$ would be $$\hat{\Sigma}_{(x|z)} = \frac{1}{n} \left[X - 1\mu_x^{\top} - \left(Z - 1\mu_z^{\top}\right)\Sigma_{z}^{-1}\Sigma_{(z,x)}\right]^{\top}\left[X - 1\mu_x^{\top} - \left(Z - 1\mu_z^{\top}\right)\Sigma_{z}^{-1}\Sigma_{(z,x)} \right].$$ Unfortunately, these nuisance parameters are unknown. However we can also estimate them by maximum likelihood.
This gives us the estimate $$\begin{aligned} \hat{\Sigma}_{(X|Z)} &= \frac{1}{n}\left[\tilde{X} - \tilde{Z}\left(\tilde{Z}^{\top}\tilde{Z}\right)^{-1}\tilde{Z}^{\top}\tilde{X}\right]^{\top}\left[\tilde{X} - \tilde{Z}\left(\tilde{Z}^{\top}\tilde{Z}\right)^{-1}\tilde{Z}^{\top}\tilde{X}\right]\\ &=\frac{1}{n}\left[\operatorname{P}_{\tilde{Z}\perp}\left(\tilde{X}\right)\right]^{\top}\left[\operatorname{P}_{\tilde{Z}\perp}\left(\tilde{X}\right)\right]\end{aligned}$$ where $\tilde{Z}$ is the standardized version of $Z$, and $\tilde{X}$ is the standardized version of $X$, and $\operatorname{P}_{\tilde{Z}\perp}$ is the projection onto the orthogonal complement of the column space of $\tilde{Z}$. So, our estimate of partial correlation is just an estimate of correlation with $Z$ regressed out of both covariates. We use this to construct our permutation null. In the original algorithm, we mean centered and scaled before permuting; here we do the equivalent — we project our variables of interest onto the orthogonal complement of our nuisance variables, and then center/scale them. Now we are ready to permute. We permute these “residuals”, and calculate permuted correlations as before. Before proceeding, we note that for sufficiently large $n$ ($n \gg p$) one might use a similar approach to consider partial correlations rather than marginal correlations in our original algorithm (conditioning out all covariates except any particular $2$). However, in general $n \ll p$ and thus $\operatorname{P}_{\perp} \equiv 0$, rendering this approach ineffective — this approach only works for nuisance variables because we assume that there are very few relative to the number of observations. As stated in the text, to adapt the original algorithm to deal with nuisance variables we need only replace step $(1)$ by:

1.
Replace our feature matrices $X_1$ and $X_2$ by $$\tilde{X}_m = \left[I - Z_m\left(Z_m^{\top}Z_m\right)^{-1}Z_m^{\top}\right]X_m$$ Now, mean center and scale $\tilde{X}$ within each group.

One may note that we only calculate $\tilde{X}$ once per class, at the beginning of our procedure, not in each permutation. We do this for the same reason that we standardize our variables before permuting — because we are not testing the hypothesis that the relationship between $X$ and $Z$ is the same in both groups. If we recalculate after each permutation then we are implicitly assuming that this relationship is the same in both groups under the null. Even with nuisance variables this approach is very computationally fast. Projecting our original variables onto $Z\perp$ can be done in $O\left(npp_{\textrm{nuis}}\right)$ operations where $p_{\textrm{nuis}}$ is the number of nuisance variables. Thus the total runtime of this algorithm is $O\left(npp_{\textrm{nuis}} + Anp(p-1)/2\right)$ where $A$ is the number of permutations — this is dominated by the second term, which is independent of the number of nuisance parameters. In contrast, if we were to use the standard approach (fitting pairwise logistic regressions with nuisance variables), its runtime would be $O\left[\left(iter\right)(3+p_{\textrm{nuis}})^2np(p-1)/2\right]$ where $iter$ is the number of iterations of the algorithm for finding the MLE. In general $A \sim 100$ and $iter \sim 5$. Now, since $(3+p_{\textrm{nuis}})^2$ grows very quickly in $p_{\textrm{nuis}}$, for even a small number of nuisance parameters the logistic approach becomes much slower.

Appendix C
==========

This appendix contains the technical details from the theorems in section $7$ of the main manuscript. We begin with a number of technical lemmas. First, as one might imagine, if we can consistently estimate our correlation matrices, applying a Fisher transformation should not change much. We formalize this with the next lemma.
\[lemma3\] Let $R_1$, $R_2$ be correlation matrices, and $\hat{R}_1$, $\hat{R}_2$ be estimates of $R_1$ and $R_2$. Let $I$ be the set of ordered pairs $(j,k)$ where $R_{1(j,k)} \neq R_{2(j,k)}$. Assume for all $(j,k)$ in $I$, $\left|R_{1(j,k)} - R_{2(j,k)}\right| > \Delta_{\min}$ for some $\Delta_{\min} > 0$ and that for $m=1,2$ we have $\operatorname{sup}_{j<k}\left|R_{m(j,k)}\right| < \rho_{\max}$ for some fixed $\rho_{\max} < 1$.\ Further assume that for $m=1,2$, $\left\|R_m - \hat{R}_m\right\|_{\infty} \leq \delta$ (for some $\delta < 1-\rho_{\max}$). Then for all $(j,k)$ in $I^{c}$ with $j \neq k$ we have $$\label{eq:close} \left|\operatorname{arctanh}\left(\hat{R}_{1(j,k)}\right) - \operatorname{arctanh}\left(\hat{R}_{2(j,k)}\right)\right| \leq \frac{2\delta}{1-\left(\rho_{\max} + \delta\right)^2}$$ and for all $(j,k)$ in $I$ with $j \neq k$ we have $$\label{eq:far} \left|\operatorname{arctanh}\left(\hat{R}_{1(j,k)}\right) - \operatorname{arctanh}\left(\hat{R}_{2(j,k)}\right)\right| \geq \Delta_{\min} - 2\delta$$ One immediate consequence of this lemma is that as $\delta \rightarrow 0$, for $(j,k)$ in $I^{c}$ our statistics $T_{(j,k)}$ converge to $0$ (at rate $O(\delta)$), and for $(j,k)$ in $I$, $T_{(j,k)}$ remain bounded away from $0$ (up to a term of order $O(\delta)$). We begin by showing that for all $(j,k)$ in $I^{c}$ with $j \neq k$ we have $$\left|\operatorname{arctanh}\left(\hat{R}_{1(j,k)}\right) - \operatorname{arctanh}\left(\hat{R}_{2(j,k)}\right)\right| \leq \frac{2\delta}{1-\left(\rho_{\max} + \delta\right)^2}$$ The mean value theorem gives us that $$\left|\operatorname{arctanh}\left(\hat{R}_{1(j,k)}\right) - \operatorname{arctanh}\left(\hat{R}_{2(j,k)}\right)\right| \leq \operatorname{sup}_{r}\left|\frac{1}{1-r^2}\right|\left|\hat{R}_{1(j,k)} - \hat{R}_{2(j,k)}\right|$$ where the supremum is taken over $r$ in $\left[\hat{R}_{1(j,k)},\, \hat{R}_{2(j,k)}\right]$.
Note that for $m=1,2$, we have $|\hat{R}_{m(j,k)}| < \rho_{\max} + \delta$, and $\left|\hat{R}_{1(j,k)} - \hat{R}_{2(j,k)}\right| \leq 2\delta$, for $(j,k)$ not in $I$. Thus, $$\operatorname{sup}_{r}\left|\frac{1}{1-r^2}\right|\left|\hat{R}_{1(j,k)} - \hat{R}_{2(j,k)}\right| \leq \frac{2\delta}{1-\left(\rho_{\max} + \delta\right)^2}.$$ Now for $(j,k)$ in $I$, we again use the mean value theorem: $$\left|\operatorname{arctanh}\left(\hat{R}_{1(j,k)}\right) - \operatorname{arctanh}\left(\hat{R}_{2(j,k)}\right)\right| \geq \operatorname{inf}_{r}\left|\frac{1}{1-r^2}\right|\left|\hat{R}_{1(j,k)} - \hat{R}_{2(j,k)}\right|$$ and our result follows because $\left|\hat{R}_{1(j,k)} - \hat{R}_{2(j,k)}\right| \geq \Delta_{\min} - 2\delta$. Now we consider convergence of these sample correlation matrices. We show that their convergence depends only on the convergence of the sample means ($\hat{\mu}_j$), variances ($\hat{\sigma}_j^2$), and pairwise inner products. We formalize this in the following lemma. \[lemma1\] Let $\tilde{x}_j$, $j=1,\ldots$ be random variables. Let $\mu_j$ denote the mean of $\tilde{x}_j$ and $\sigma_j^2$ its variance. Let $R_{j,k}$ be the correlation between $\tilde{x}_j$ and $\tilde{x}_k$. For each $i$, let $x_{i,\cdot}$ be independent realizations with the same distribution as $\tilde{x}_{\cdot}$ (e.g. $x_{i,j}$ has the marginal distribution of $\tilde{x}_j$).
For any given $\epsilon > 0$, there exists $\delta > 0$ such that if $$\label{eq:bnd} \operatorname{sup} \left\{\left|\hat{\sigma}_j - \sigma_j\right|,\, \left|\hat{\mu}_j - \mu_j\right|,\,\left|\frac{(1/n)\sum_{i\leq n}x_{i,j}x_{i,k}}{\sigma_j\sigma_k} - \frac{\mu_j\mu_k}{\sigma_j\sigma_k} - R_{j,k}\right|\right\}_{j,k} \leq \delta$$ then $$\label{eq:bnd0} \operatorname{sup}_{j<k \leq p} \left|\hat{R}_{j,k} - R_{j,k}\right| \leq \epsilon$$ Furthermore, one can choose $\delta = O(\epsilon)$. We begin by noting that the distribution of $\hat{R}_{j,k}$ is independent of $\mu_j$, $\mu_k$, $\sigma_j$ and $\sigma_k$. For ease of notation we assume $\mu_j = \mu_k = 0$ and $\sigma_j = \sigma_k = 1$.\ To see that \[eq:bnd\] is sufficient for \[eq:bnd0\] we write $\hat{R}_{j,k} - R_{j,k}$ as $$\begin{aligned} \left|\hat{R}_{j,k} - R_{j,k}\right| &= \left|\frac{\left(1/n\right)\sum_{i=1}^n x_{i,j}x_{i,k}}{\hat{\sigma}_j\hat{\sigma}_k} - \frac{\hat{\mu}_j\hat{\mu}_k}{\hat{\sigma}_j\hat{\sigma}_k} - R_{j,k}\right|\\ &\leq \left|\frac{1}{n}\sum_{i=1}^n x_{i,j}x_{i,k}\right|\left|\left(\frac{1}{\hat{\sigma}_j\hat{\sigma}_k} - 1\right)\right|\\ &+ \left|\frac{1}{n}\sum_{i=1}^n x_{i,j}x_{i,k} - R_{j,k}\right| + \left|\frac{\hat{\mu}_j\hat{\mu}_k}{\hat{\sigma}_j\hat{\sigma}_k}\right|\end{aligned}$$ We first note that $\left|\frac{1}{n}\sum_{i=1}^n x_{i,j}x_{i,k} - R_{j,k}\right| < \delta$. Thus we need only consider $\left|\frac{\hat{\mu}_j\hat{\mu}_k}{\hat{\sigma}_j\hat{\sigma}_k}\right|$ and $\left|\left(\frac{1}{\hat{\sigma}_j\hat{\sigma}_k} - 1\right)\right|$. Expanding these terms using the fact that $1/(1-\delta) = 1 + O(\delta)$, it is straightforward to see that the whole expression converges to $0$ at rate $O(\delta)$. This completes our proof. Now that we have reduced convergence to that of the sample mean, variance, and inner products, we show particular circumstances under which our estimation is consistent, and give rates of convergence.
\[lemma2\] Let $\tilde{x}_j$, $j=1,\ldots$ be random variables. Assume there is some $C>0$ such that for all $t\geq 0$ $$\operatorname{P}\left(\left|x_j - \operatorname{E}[x_j]\right| > t\right) \leq \operatorname{exp}\left(1-t^2/C^2\right)$$ (These are known as sub-Gaussian random variables). Let $\mu_j$ denote the mean of $\tilde{x}_j$ and $\sigma_j^2$ its variance. Let $R_{j,k}$ be the correlation between $\tilde{x}_j$ and $\tilde{x}_k$. For each $i$, let $x_{i,\cdot}$ be independent realizations with the same distribution as $\tilde{x}$. Let $\delta,\, \epsilon_p > 0$ be given. Then for $n$ sufficiently large and $\frac{\log p}{n}$ sufficiently small we have that $$\label{eq:lem2} \operatorname{sup} \left\{\left|\hat{\sigma}_j - \sigma_j\right|,\, \left|\hat{\mu}_j - \mu_j\right|,\,\left|\frac{(1/n)\sum_{i\leq n}x_{i,j}x_{i,k}}{\sigma_j\sigma_k} - \frac{\mu_j\mu_k}{\sigma_j\sigma_k} - R_{j,k}\right|\right\}_{j,k\leq p} \leq \delta$$ with probability greater than $1-\epsilon_p$. In particular one can choose $\delta = O\left(\left(\log p / n\right)^{1/2}\right)$. The class of sub-Gaussian random variables is rather broad, containing Gaussian random variables and all bounded random variables. Applying this lemma, we are able to show consistency for the wide class of variables with sufficiently light tails. In the proof of this lemma we get a convergence rate of $\delta = O\left(\left(\log p / n\right)^{1/2}\right)$. This rate agrees with the literature for other similar problems in covariance estimation (@bickel2008 among others). We will begin by bounding $\left|\hat{\mu}_j - \mu_j\right|$. If we consider Lemma $5.10$ of @vershynin2010 we see that $$\operatorname{P}\left(\left|\hat{\mu}_j - \mu_j\right| > t\right) \leq e\cdot \operatorname{exp}\left[-\left(\tilde{C}t^2\right)n\right]$$ where $\tilde{C}$ is some function of $C$ (one can prove this Hoeffding-type inequality by an exponential Markov argument).
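Before taking the union bound over $j$, the $\left(\log p / n\right)^{1/2}$ scale of $\operatorname{sup}_{j}\left|\hat{\mu}_j - \mu_j\right|$ that results is easy to see numerically. A small Monte Carlo sketch (our own illustration, independent standard normal features with true means $0$):

```python
import math
import random

def max_mean_deviation(n, p, seed):
    """sup_j |mu_hat_j - mu_j| over p independent N(0,1) features,
    each estimated from n observations (true means are all 0)."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(p):
        m = sum(rng.gauss(0, 1) for _ in range(n)) / n
        worst = max(worst, abs(m))
    return worst
```

For moderate $n$ and $p$ the observed supremum sits within a small constant multiple of $\sqrt{\log p / n}$, in line with the bound.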
Applying the union bound to this we see that $$\operatorname{P}\left(\operatorname{sup}_{j\leq p}\left|\hat{\mu}_j - \mu_j\right| > t\right) \leq 3 p \operatorname{exp}\left[-\left(\tilde{C}t^2\right)n\right]$$ If we set $t = \left(\sqrt{1/\tilde{C}}\right)\sqrt{\frac{q + \log p}{n}}$ then we have $$\operatorname{P}\left(\operatorname{sup}_{j\leq p}\left|\hat{\mu}_j - \mu_j\right| > t\right) \leq e^{1-q},$$ bounding $\left|\hat{\mu}_j - \mu_j\right|$.\ Next we bound $\left|\hat{\sigma}_j - \sigma_j\right|$. We first note that $$\left|\hat{\sigma}_j - \sigma_j\right| = \frac{\left|\hat{\sigma}_j^2 - \sigma_j^2\right|}{\hat{\sigma}_j + \sigma_j} \leq \frac{\left|\hat{\sigma}_j^2 - \sigma_j^2\right|}{\sigma_j}$$ because $\hat{\sigma}_j, \sigma_j >0$, so we need only consider convergence of $\hat{\sigma}_j^2 - \sigma_j^2$. Next note that $$\frac{1}{n}\sum_i \left(x_{i,j} - \bar{x}_j\right)^2 - \frac{1}{n}\sum_i \left(x_{i,j} - \mu_j\right)^2 = -\left(\bar{x}_j - \mu_j\right)^2$$ So now if we can bound $\left|\frac{1}{n}\sum_i \left(x_{i,j} - \mu_j\right)^2 - \sigma_j^2\right|$ and $\left(\bar{x}_j - \mu_j\right)^2$, then we can bound $|\hat{\sigma}_j^2 - \sigma_j^2|$.\ To bound $\left|\frac{1}{n}\sum_i \left(x_{i,j} - \mu_j\right)^2 - \sigma_j^2\right|$, we first note that if $x_{i,j}$ is sub-Gaussian then $(x_{i,j} - \mu_j)^2$ is sub-exponential; i.e. $$\operatorname{P}\left(\left(x_{i,j} - \mu_j\right)^2 - \sigma_j^2 > t\right) \leq \operatorname{exp}\left(-C_1 t\right)$$ for some fixed $C_1$. Now we apply Corollary $5.17$ of @vershynin2010, and get that for any $t$ sufficiently small (independent of $n$) $$\operatorname{P}\left(\left|\frac{1}{n}\sum_i\left(x_{i,j} - \mu_j\right)^2 - \sigma_j^2\right| > t\right) \leq 2\operatorname{exp}\left(-\tilde{C}_1 t^2 n\right)$$ for some fixed $\tilde{C}_1$.
Bounding $\left(\bar{x}_j - \mu_j\right)^2$ is also quite straightforward (we just use the bound for $\left|\bar{x}_j - \mu_j\right|$) $$P\left(\left(\bar{x}_j - \mu_j\right)^2 \geq t\right) \leq e \operatorname{exp}\left[-\left(\tilde{C}t\right)n\right]$$ We note that for $t<1$, $t^2 < t$. Let $\bar{C} = \min\{\tilde{C}_1,\tilde{C}\}$. Now, combining these inequalities with the triangle inequality we have $$\begin{aligned} P\left(\left|\hat{\sigma}_j^2 - \sigma_j^2\right| \geq t\right) &\leq e\operatorname{exp}\left[-\left(\tilde{C}t\right)n\right] + 2\operatorname{exp}\left(-\tilde{C}_1 t^2 n\right)\\ & \leq 5\operatorname{exp}\left[-\bar{C}t^2n\right]\end{aligned}$$ for $t$ sufficiently small. Now finally, $$P\left(\left|\hat{\sigma}_j - \sigma_j\right| \geq t\right) \leq P\left(\left|\hat{\sigma}_j^2 - \sigma_j^2\right| \geq t\sigma_{\min}\right) \leq 5\operatorname{exp}\left[-\bar{C}\sigma_{\min}^2t^2n\right].$$ Using the union bound again, we get $$P\left(\operatorname{sup}_{j}\left|\hat{\sigma}_j^2 - \sigma_j^2\right| \geq t\right) \leq 5p\operatorname{exp}\left[-\bar{C}t^2n\right].$$ so $$P\left(\operatorname{sup}_{j}\left|\hat{\sigma}_j - \sigma_j\right| \geq t\right) \leq 5p\operatorname{exp}\left[-\bar{C}\sigma_{\min}^2t^2n\right].$$ Finally, we need to bound $\left|\frac{(1/n)\sum_{i\leq n}x_{i,j}x_{i,k}}{\sigma_j\sigma_k} - \frac{\mu_j\mu_k}{\sigma_j\sigma_k} - \rho_{j,k}\right|$. This is slightly trickier but still not terrible.
We first note the identity $$(1/n)\sum_{i\leq n}x_{i,j}x_{i,k} - \mu_j\mu_k = (1/n)\sum_{i\leq n}\left(x_{i,j} - \mu_j\right)\left(x_{i,k}-\mu_k\right) + \mu_k\left(\bar{x}_j - \mu_j\right) + \mu_j\left(\bar{x}_k - \mu_k\right)$$ The last two terms are controlled by our earlier bound on $\left|\bar{x}_j - \mu_j\right|$, so it suffices to bound the centered sum. We also see that $$\begin{aligned} 2\sum_{i\leq n}\left(x_{i,j} - \mu_j\right)\left(x_{i,k}-\mu_k\right) &= \sum_{i\leq n}\left[\left (x_{i,j} - \mu_j\right) + \left(x_{i,k}-\mu_k\right)\right]^2\\ & - \sum_{i\leq n}\left (x_{i,j} - \mu_j\right)^2 - \sum_{i\leq n}\left (x_{i,k} - \mu_k\right)^2\end{aligned}$$ Now to bound the above quantity we consider the moment generating function of $x_{i,j} - \mu_j + x_{i,k}-\mu_k$. This is not necessarily a sum of independent random variables; still, by the Cauchy-Schwarz inequality we have $$\begin{aligned} &\operatorname{E}\left[\operatorname{exp}\left[t\left(x_{i,j} - \mu_j + x_{i,k}-\mu_k\right)\right]\right]\\ &\leq \operatorname{max}\left\{\operatorname{E}\left[\operatorname{exp}\left[2t\left(x_{i,j} - \mu_j\right)\right]\right],\operatorname{E}\left[\operatorname{exp}\left[2t\left(x_{i,k} - \mu_k\right)\right]\right]\right\}\end{aligned}$$ It is a well known fact that sub-Gaussian random variables can be characterized by their moment generating function (shown in @vershynin2010), and this is still the moment generating function of a sub-Gaussian random variable. Thus, $\left(x_{i,j} - \mu_j + x_{i,k}-\mu_k\right)^2$ is sub-exponential, and again by Corollary $5.17$ of @vershynin2010 we have that $$\begin{aligned} &\operatorname{P}\left(\left|\frac{1}{n}\sum_{i}\left(x_{i,j} - \mu_j + x_{i,k}-\mu_k\right)^2 - \sigma_j^2 - \sigma_k^2 - 2\sigma_j\sigma_k\rho_{j,k}\right| > t\right)\\ &\leq 2\operatorname{exp}\left[-C_2t^2n\right]\end{aligned}$$ for $t>0$ sufficiently small and some fixed $C_2 > 0$.
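The polarization identity behind the display above, $2\sum_i a_i b_i = \sum_i (a_i+b_i)^2 - \sum_i a_i^2 - \sum_i b_i^2$, can likewise be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal(300)   # plays the role of x_{i,j} - mu_j
b = rng.standard_normal(300)   # plays the role of x_{i,k} - mu_k
lhs = 2 * np.sum(a * b)
rhs = np.sum((a + b) ** 2) - np.sum(a ** 2) - np.sum(b ** 2)
```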
Now, stringing all of these together with the triangle inequality we have that $$\begin{aligned} &\operatorname{P}\left(\left|\frac{2}{n}\sum_{i\leq n}\left(x_{i,j} - \mu_j\right)\left(x_{i,k}-\mu_k\right) - 2\rho_{j,k}\sigma_j\sigma_k\right| > 3t\right)\\ &\leq \operatorname{P}\left(\left|\frac{1}{n}\sum_{i\leq n}\left(x_{i,j} - \mu_j + x_{i,k}-\mu_k\right)^2 - \sigma_j^2 - \sigma_k^2 - 2\sigma_j\sigma_k\rho_{j,k}\right| > t\right)\\ &+ \operatorname{P}\left(\left|\frac{1}{n}\sum_{i\leq n}\left (x_{i,j} - \mu_j\right)^2 - \sigma_j^2\right| > t\right) + \operatorname{P}\left(\left|\frac{1}{n}\sum_{i\leq n}\left (x_{i,k} - \mu_k\right)^2 - \sigma_k^2\right| > t\right)\\ &\leq 2\operatorname{exp}\left[-C_2t^2n\right] + 10\operatorname{exp}\left[-\bar{C}t^2n\right]\\ &\leq 12\operatorname{exp}\left[-\bar{C}_1t^2n\right]\end{aligned}$$ for all $t>0$ sufficiently small with some fixed $\bar{C}_1>0$. Taking this a step further, and applying the union bound, we see that $$P\left(\operatorname{sup}_{j,k}\left|\frac{(1/n)\sum_{i\leq n}x_{i,j}x_{i,k}}{\sigma_j\sigma_k} - \frac{\mu_j\mu_k}{\sigma_j\sigma_k} - \rho_{j,k}\right| > t\right) \leq 12p^2 \operatorname{exp}\left[-\bar{C}_2t^2n\right]$$ for some fixed $\bar{C}_2$.\ Now that we have bounded each term, we see that the bound in eq. \[eq:lem2\] fails with probability at most $$\begin{aligned} &12p^2 \operatorname{exp}\left[-\bar{C}_2\delta^2n\right] + 10p\operatorname{exp}\left[-\bar{C}\sigma_{\min}^2\delta^2n\right] + 6p\operatorname{exp}\left[-\tilde{C}\delta^2n\right]\\ &\leq 28p^2\operatorname{exp}\left[-\mathbf{C}\delta^2 n\right]\end{aligned}$$ for $\delta$ sufficiently small where $\mathbf{C} = \min\left\{\bar{C}\sigma_{\min}^2,\bar{C}_2,\tilde{C}\right\}$. Thus, if $\delta = \left(\frac{q + 2\log p}{\mathbf{C}n}\right)^{1/2}$ then eq. \[eq:lem2\] holds with probability at least $1-28e^{-q}$. If $n$ is sufficiently large, and $\frac{\log p}{n}$ sufficiently small, then for any $q$, $\delta$ can be made arbitrarily small.
Now, we combine these lemmas to show that under certain conditions, for a given cutoff $t$, as $n\rightarrow\infty$ if $\log p/n\rightarrow 0$ then, with probability approaching $1$, all true marginal interactions have $|T_{i,j}| > t$, and all null statistics will have $|T_{i,j}| < t$ (i.e., we asymptotically find all true interactions and make no false rejections). Before we begin, it deserves mention that we use slightly different notation than in the discussion of our algorithm in Section $3$. Rather than having $X_{i,\cdot}$ denote the $i$-th observation overall, and letting $y(i)$ denote its group (where $i$ ranged from $1$ to the total number of observations in both groups), we split up our observations by group, letting $x_{m(i,\cdot)}$ denote the $i$-th observation from group $m$ (now $i$ ranges from $1$ to the total number of observations in group $m$). This change simplifies notation in the statement of the theorem and its proof. We also assume equal group sizes ($n_1 = n_2 = n$); this again simplifies notation but can be relaxed to $n_1/(n_1 + n_2) \rightarrow \alpha \in (0,1)$. This result is a straightforward corollary of our three lemmas. First, choose an arbitrary $\epsilon_p > 0$, and $0 < t < \Delta_{\min}$.
If we consider Lemma \[lemma1\], we see that the conclusion of our theorem holds if we can find a bound on the sup-norm distance between each correlation matrix and its MLE (a bound we will call $\delta_1$) which satisfies $$\frac{2\delta_1}{1-\left(\rho_{\max} + \delta_1\right)^2} \leq t \leq \Delta_{\min} - 2\delta_1.$$ Because $\rho_{\max} < 1$ and $t < \Delta_{\min}$, $\delta_1 > 0$ sufficiently small will satisfy this.\ Now applying Lemma \[lemma2\]: if we choose $\delta_2$ sufficiently small (but still of $O(\delta_1)$), then if $$\label{thm:bound1} \operatorname{sup} \left\{\left|\hat{\sigma}_j - \sigma_j\right|,\, \left|\hat{\mu}_j - \mu_j\right|,\,\left|\frac{(1/n)\sum_{i\leq n}x_{i,j}x_{i,k}}{\sigma_j\sigma_k} - \frac{\mu_j\mu_k}{\sigma_j\sigma_k} - \rho_{j,k}\right|\right\}_{j,k} \leq \delta_2$$ we have that the sup norm distance between each correlation matrix and its MLE is bounded by $\delta_1$: for $m=1,2$ $$\left\| \hat{R}_m - R_m \right\|_{\infty}\leq \delta_1$$ Finally, by Lemma \[lemma3\], we see that if $n$ is sufficiently large and $\log p / n$ is sufficiently small then eq. \[thm:bound1\] holds with probability at least $1-\epsilon_p$. This finishes our proof. Proofs of Permutation Results ----------------------------- To begin, we prove a Lemma which does most of the leg-work for our eventual theorem. It says that for a reasonably balanced permutation, for $n$ sufficiently large and $\log p/n$ sufficiently small, both of our permuted sample correlation matrices will be very close to the average of the $2$ population correlation matrices. \[lemma:perm\] Let $\tilde{x}_{1(j)}$ and $\tilde{x}_{2(j)}$, $j=1,\ldots$ be random variables with $$\operatorname{P}\left(|\tilde{x}_{m(j)} - \operatorname{E}\left[\tilde{x}_{m(j)}\right]|\geq t\right) \leq \operatorname{exp}\left(1 - t^2/C^2\right)$$ for all $t>0$, and each $m=1,2$, with some fixed $C>0$. Let $\mu_{m(j)}$ denote the mean of $\tilde{x}_{m(j)}$ and $\sigma_{m(j)}^2$ its variance.
For each $i < \infty$, let $x_{m(i,\cdot)}$ be independent realizations with the same distribution as $\tilde{x}_{m(\cdot)}$. Let $p_n$ be a sequence of integers such that $\frac{\log p_n}{n} \rightarrow 0$. Let $R_{m}$ be the correlation “matrix” (an infinite but countably indexed matrix) of the covariates from class $m$. Define $R_{\operatorname{perm}}$ to be the average of the two, $$R_{\textrm{perm}} = \frac{1}{2} R_1 + \frac{1}{2}R_2$$ Let $\hat{\mu}_{m(j)}$ and $\hat{\sigma}_{m(j)}^2$ be the pre-permuted estimates of the mean and variance (in each class): $$\hat{\mu}_{m(j)} = \frac{1}{n}\sum_{i \leq n} x_{m(i,j)}$$ and $$\hat{\sigma}_{m(j)}^2 = \frac{1}{n}\sum_{i \leq n} \left(x_{m(i,j)} - \hat{\mu}_{m(j)}\right)^2.$$ Further, define $$\hat{R}_{\textrm{perm}:m(j,k)} = \frac{1}{n}\sum_{(i,l)\in\Pi(\cdot,m)}\left(\frac{x_{l(i,j)} - \hat{\mu}_{m(j)}}{\hat{\sigma}_{m(j)}}\right)\left(\frac{x_{l(i,k)} - \hat{\mu}_{m(k)}}{\hat{\sigma}_{m(k)}}\right)$$ our permuted correlation between covariates $j$ and $k$ in class $m$. Assume for every $j$ and $m$, $\sigma_{m(j)}^2 \geq \sigma_{\min}^2 > 0$. Now for any $\epsilon >0$, $\delta>0$, one can find $n$ sufficiently large such that for any permutation $\Pi$ with $$\left|\hat{\pi} - \frac{1}{2}\right|\leq \frac{\delta}{12}$$ (where $\hat{\pi}$ is the proportion of class $1$ that remains fixed under $\Pi$), we have $$\label{perm:bnd} \left\|R_{\textrm{perm}} - \hat{R}_{\textrm{perm}:m}\right\|_{\infty} \leq \delta$$ for both $m=1,2$ with probability at least $1-\epsilon$. We first consider only $m=1$.
If we can show that $$\left\|R_{\textrm{perm}} - \hat{R}_{\textrm{perm}:m}\right\|_{\infty} \leq \delta$$ with high probability for $m=1$, then by symmetry we have it for $m=2$, and by a simple union bound we have it for both simultaneously.\ Now, we begin by decomposing our sample permuted correlation matrix $$\begin{aligned} \hat{R}_{\textrm{perm}:1(j,k)} &= \frac{1}{n}\sum_{(i,l)\in\Pi(\cdot,1)}\left(\frac{x_{l(i,j)} - \hat{\mu}_{1(j)}}{\hat{\sigma}_{1(j)}}\right)\left(\frac{x_{l(i,k)} - \hat{\mu}_{1(k)}}{\hat{\sigma}_{1(k)}}\right)\\ &= \hat{\pi}\hat{R}_{\textrm{perm}:1(j,k)}^{(1)} + \left(1 - \hat{\pi}\right)\hat{R}_{\textrm{perm}:1(j,k)}^{(2)}\end{aligned}$$ where $\hat{R}_{\textrm{perm}:1}^{(l)}$ is a matrix defined by $$\label{eq:contrib} \hat{R}_{\textrm{perm}:1(j,k)}^{(l)} = \frac{1}{\tilde{n}_l}\sum_{i\in\Pi(l,1)}\left(\frac{x_{l(i,j)} - \hat{\mu}_{1(j)}}{\hat{\sigma}_{1(j)}}\right)\left(\frac{x_{l(i,k)} - \hat{\mu}_{1(k)}}{\hat{\sigma}_{1(k)}}\right)$$ where $\tilde{n}_l$ is the number of elements from group $l$ permuted to group $1$ (i.e., the cardinality of $\Pi(l,1)$, or more explicitly $\tilde{n}_1 = \hat{\pi}n$ and $\tilde{n}_2 = (1-\hat{\pi})n$). The quantity in eq. \[eq:contrib\] is just the contribution from observations originally in class $l$ to the permuted correlation matrix for class $1$.
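The decomposition of the permuted correlation matrix into the two origin-specific contributions is an exact weighted average; a small sketch with made-up group sizes (Gaussian data, standardized by the pre-permutation class-1 estimates as above) verifies it:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, m = 100, 4, 60                        # m observations stay in class 1 under the permutation
x1 = rng.standard_normal((n, p))
x2 = rng.standard_normal((n, p))
mu, sd = x1.mean(axis=0), x1.std(axis=0)    # pre-permutation class-1 estimates
z1, z2 = (x1 - mu) / sd, (x2 - mu) / sd
mixed = np.vstack([z1[:m], z2[: n - m]])    # the permuted "class 1" sample
R_perm1 = mixed.T @ mixed / n
R1_contrib = z1[:m].T @ z1[:m] / m          # contribution from observations originally in class 1
R2_contrib = z2[: n - m].T @ z2[: n - m] / (n - m)
pi_hat = m / n
```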
Thus by the triangle inequality $$\begin{aligned} \label{eq:permTriangle} \left\|R_{\textrm{perm}} - \hat{R}_{\textrm{perm}:1}\right\|_{\infty} &\leq \left\|\frac{1}{2} R_1 - \hat{\pi}\hat{R}_{\textrm{perm}:1}^{(1)}\right\|_{\infty} + \left\|\frac{1}{2} R_2 - \left(1 - \hat{\pi}\right)\hat{R}_{\textrm{perm}:1}^{(2)}\right\|_{\infty}\\ &\leq \frac{1}{2} \left\|R_1 - \hat{R}_{\textrm{perm}:1}^{(1)}\right\|_{\infty} + \frac{1}{2}\left\|R_2 - \hat{R}_{\textrm{perm}:1}^{(2)}\right\|_{\infty}\notag\\ &+ \left|\hat{\pi} - \frac{1}{2}\right|\left(\left\|\hat{R}_{\textrm{perm}:1}^{(1)}\right\|_{\infty} + \left\|\hat{R}_{\textrm{perm}:1}^{(2)}\right\|_{\infty}\right)\notag\end{aligned}$$ If we consider $\hat{R}_{\textrm{perm}:1}^{(1)}$, we see that it is essentially a sample correlation matrix (using only the $\hat{\pi}n$ observations that were fixed in class $1$ by $\Pi$ for the inner product). We can make a similar observation for $\hat{R}_{\textrm{perm}:1}^{(2)}$. Now, for $n$ sufficiently large, because $|\frac{1}{2} - \hat{\pi}|$ is small, we can make $\hat{\pi}n$ and $\left(1-\hat{\pi}\right)n$ as large as we would like. Thus, by a combination of Lemma \[lemma3\] and Lemma \[lemma2\], we have that $$\left\|R_l - \hat{R}_{\textrm{perm}:1}^{(l)}\right\|_{\infty} < \delta/3$$ with probability greater than $1-\epsilon/3$. Furthermore, using the same Lemmas we get $$\left\|\hat{R}_{\textrm{perm}:1}^{(1)}\right\|_{\infty} + \left\|\hat{R}_{\textrm{perm}:1}^{(2)}\right\|_{\infty} \leq 4$$ with probability at least $1-\epsilon/3$ (this bound can easily be made tighter, and if we were to standardize within permutation this bound would be trivial). Plugging this in with the assumed bound on $\left|\hat{\pi} - \frac{1}{2}\right|$ completes the proof.
Now, we use this Lemma (along with some of our previous Lemmas) to show that for any fixed $t>0$, if our variables are sub-Gaussian with some other minor conditions, then for $n\rightarrow \infty$ and $\log p/n\rightarrow 0$, with probability approaching $1$, none of our permuted statistics will be larger than $t$; in other words our estimated FDR will converge to $0$. First we choose an arbitrary $\epsilon_p >0$ and $t>0$. If we consider Lemma $7.1$, we see that if we find some $\delta >0$ satisfying $$\label{eq:need} \left\|R_{\textrm{perm}} - \hat{R}_{\textrm{perm}:m}\right\|_{\infty} \leq \delta$$ for $m=1,2$ with probability at least $1-\epsilon_p$ and $$\label{eq:bd} \frac{2\delta}{1-(\rho_{\max} + \delta)^2} \leq t$$ then we have satisfied our claim. Because $\rho_{\max} < 1$, there exists some $\delta >0$ satisfying eq. \[eq:bd\]. Now, we first note that, for $n$ sufficiently large, standard concentration inequalities give us that $$\left|\hat{\pi} - \frac{1}{2}\right| \leq \delta/12$$ with probability greater than $1 - \epsilon_p$. If we apply Lemma $7.5$ with this bound on $\hat{\pi}$ and combine the probabilities with the union bound, we get that for $n$ sufficiently large, eq. \[eq:need\] is violated with probability at most $2\epsilon_p$. This completes our proof. Acknowledgments =============== We would like to thank Jonathan Taylor and Trevor Hastie for their helpful comments and insight. [^1]: Department of Statistics, Stanford University, `nsimon@stanford.edu` [^2]: Department of Statistics, Stanford University, Department of Health Research and Policy, Stanford University
--- abstract: 'Although it is widely accepted that galaxy interactions stimulate secular evolutionary effects (e.g. enhanced star formation), the amplitude of this effect, and the processes for accomplishing it, are not well quantified. The goal of the project AMIGA (Analysis of the Interstellar Medium of Isolated Galaxies) is to provide a sizable reference sample (n=1050) of the most isolated galaxies as a basis for the study of the influence of the environment on galaxy properties. Here, we present the far-infrared (FIR) properties of 1030 galaxies of the sample for which IRAS data are available. We improved the detection rate and accuracy of the IRAS data with respect to the Point Source and Faint Source Catalogs by redoing the data reduction with the IPAC utility ADDSCAN/SCANPI. Comparing the FIR to the blue luminosities, we find a slightly non-linear relation. Furthermore, we find that interacting galaxies tend to have an enhanced FIR emission.' author: - 'U. Lisenfeld, L. Verdes-Montenegro, S. Leon, and J. Sulentic' --- The AMIGA project ================= A key question in astrophysics is the relative role of nurture versus nature in galaxy evolution. In order to make progress, studies need to be based on a well-defined sample of isolated galaxies, which has been lacking so far. We are compiling and analysing data for the first complete unbiased control sample of the most isolated galaxies in the northern sky (Leon & Verdes-Montenegro 2003, Verdes-Montenegro et al. 2005). To compare and quantify the properties of different phases of the interstellar medium, as well as the level of star formation, we are building a multiwavelength database (far-infrared, near-infrared, optical, H$\alpha$, radio continuum, HI and CO) for this sample. The data will be publicly available from www.iaa.es/AMIGA.html.
Our sample is based on the Catalogue of Isolated Galaxies (CIG, 1050 galaxies, Karatchenseva 1973) assembled with the requirement that no similarly sized galaxies with diameter d (where d is between 1/4 and 4 times diameter D of the CIG galaxy) lie within 20d. We chose the CIG as a basis because this sample presents various advantages: (i) It is selected using a powerful criterion, so that the CIG contains a large fraction of the most isolated nearby galaxies in the northern hemisphere. Since the selection criterion does not take into account redshift, it actually excludes some galaxies which have only apparent companions that lie in reality at a very different redshift. This is however not a problem for our purpose because (ii) the sample is large enough to be statistically significant. It furthermore covers a large enough volume to be almost (80%) optically complete up to a Zwicky magnitude of 15 mag (Verdes-Montenegro et al. 2005). (iii) Finally, the fact that the galaxies in the CIG are nearby (the bulk of the galaxies have recession velocities below 10000 km/s) enables us to determine the morphologies in a reliable way (Sulentic et al. 2006). Since furthermore all morphological types are found in the CIG, we are able to study galaxy properties as a function of galaxy type. As a first step, we are performing a number of refinements to the CIG: a) We are carrying out a computational revision and quantification of the degree of isolation by applying SExtractor and LMORFO to the POSSI plates (Verley et al. in prep.), b) we are revising the morphologies with the help of POSSII and our optical images (Sulentic et al. 2006), and c) we have checked the positions and accumulated new redshifts available in the literature (Leon & Verdes-Montenegro 2003, Verdes-Montenegro et al. 2005). Reprocessing of IRAS data ========================== We obtained the IRAS fluxes at 12, 25, 60 and 100 $\mu$m using the ADDSCAN/SCANPI utility at IPAC.
We followed the recommendation for the calculation of the total fluxes and visually inspected all spectra in order to check for (i) the presence of cirrus emission, (ii) confusion with neighboring galaxies and (iii) the significance of the detection (e.g. confusion with noise spikes). A more detailed description of the data processing will be presented in Lisenfeld et al. (in preparation). This reprocessing yielded: - An increase in the number of data points in comparison to the IRAS Point Source Catalog (PSC) and Faint Source Catalog (FSC): Whereas there are only 524 galaxies of the 1050 CIG galaxies in the PSC/FSC, the ADDSCAN/SCANPI reduction provided data for 1031 objects. - An improvement of the signal-to-noise ratio by a factor of 2-5. In particular, (55, 70, 9, 81) galaxies at (12, 25, 60, 100)$\mu$m were only upper limits in the PSC/FSC but changed to detections after our reprocessing. - An improved accuracy of the fluxes, because ADDSCAN/SCANPI is able to measure the total flux of extended objects, as long as their size is not above a few arcmin. We found that the ratio of the flux derived with ADDSCAN/SCANPI to the flux from the PSC/FSC tends to increase with source diameter, especially at short wavelengths, suggesting that the fluxes in the PSC/FSC indeed underestimate the correct fluxes for large objects. Furthermore, our visual inspection of the spectra allowed us to reject dubious cases. In fact, we classified (29, 21, 5, 3) galaxies at (12, 25, 60, 100)$\mu$m as non-detections that were listed as detections in the PSC/FSC. Relation between L$_{\rm FIR}$ and L$_{\rm B}$ ============================================== As a first result, we show in Fig. 1 a comparison of the FIR luminosity, L$_{\rm FIR}$, (calculated from the 60 and 100 $\mu$m fluxes) to the blue luminosity, L$_{\rm B}$, derived from the corrected Zwicky magnitudes (see Verdes-Montenegro et al. 2005).
A more detailed analysis of the data, including the full presentation of the characteristics of the FIR luminosities and colours, will be presented in Lisenfeld et al. (in preparation). We limit the sample to 736 galaxies with optical magnitudes between 11 and 15 mag, representing an 80% complete subsample of the CIG (Verdes-Montenegro et al. 2005). Furthermore, based on the morphological revision of the sample we exclude 23 galaxies which are judged to be interacting (Sulentic et al. 2006). ![ The relation between the FIR and blue luminosity for an optically complete subsample of the CIG, excluding 23 interacting CIG galaxies [**(left)**]{} and for different samples of interacting galaxies [**(right)**]{}. In both panels, the line shows the regression found for the CIG (eq. 1). ](lisenfeld_u_fig1a.ps "fig:"){width="6cm"} ![ The relation between the FIR and blue luminosity for an optically complete subsample of the CIG, excluding 23 interacting CIG galaxies [**(left)**]{} and for different samples of interacting galaxies [**(right)**]{}. In both panels, the line shows the regression found for the CIG (eq. 1). ](lisenfeld_u_fig1b.ps "fig:"){width="6cm"} We fit the correlation, taking into account the upper limits by applying survival methods from the package ASURV (Feigelson & Nelson 1985, Isobe, Feigelson & Nelson 1986), and obtain for the relation (adopting L$_{\rm B}$ as the independent variable): $$\log(L_{\rm FIR}) = (1.13 \pm 0.03) \log(L_{\rm B}) - (2.1 \pm 0.3)$$ The slope obtained for the sample of the 23 clearly interacting CIGs was considerably higher, 1.46 $\pm$ 0.14.
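As an illustration only (synthetic detections, ordinary least squares instead of the ASURV survival methods actually used, so upper limits are ignored), a log-log slope of this kind can be recovered as follows:

```python
import numpy as np

rng = np.random.default_rng(4)
log_LB = rng.uniform(9.0, 11.0, size=300)                        # synthetic blue luminosities
log_LFIR = 1.13 * log_LB - 2.1 + rng.normal(0.0, 0.3, size=300)  # scatter about eq. (1)
slope, intercept = np.polyfit(log_LB, log_LFIR, 1)               # L_B as the independent variable
```

With this amount of scatter and sample size, the fitted slope lands close to the input value of 1.13.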
The difference is due to an increase in L$_{\rm FIR}$: Whereas the average L$_{\rm B}$ of both samples are basically the same ($\nobreak{<L_{\rm B}>} = 10.22 \pm 0.02$ for the 713 CIG galaxies and $\nobreak{< L_{\rm B}>} = 10.23 \pm 0.11$ for the 23 interacting CIG galaxies) the FIR luminosity is increased for the interacting galaxies ($\nobreak{< L_{\rm FIR}>} = 9.18 \pm 0.08$ for the 713 CIG galaxies and $\nobreak{< L_{\rm FIR}>} = 9.75 \pm 0.17$ for the 23 interacting CIG galaxies). In Fig. 1 (right) we show different samples of interacting galaxies compared to the slope of the CIG (eq. 1). The interacting galaxies clearly lie above this slope, indicating an enhancement of the FIR emission compared to L$_{\rm B}$. UL, LVM and SL are partially supported by DGI (Spain) AYA 2002-03338, AYA2004-08251-CO2-02 (UL), the Junta de Andalucía and the Universidad de Granada. Feigelson, E.D., & Nelson, P.I., 1985, ApJ, 293, 192 Karatchenseva, V.E., 1973, Comm. Spec. Ap. Obs, USSR 8, 1 Isobe, T., Feigelson, E.D., & Nelson, P.I., 1986, ApJ, 306, 490 Leon, S., & Verdes-Montenegro, L., 2003, A&A, 411, 391 Perea, J., del Olmo, A., & Verdes-Montenegro, L., 1997, ApJ, 490, 166 Sulentic, J., Verdes-Montenegro, L., Bergond, G., et al., 2006, A&A, submitted Verdes-Montenegro, L., Sulentic, J., Lisenfeld, U., et al., 2005, A&A, 436, 443
--- abstract: 'In order to investigate the effect of inhomogeneities on the volume expansion of the universe, we study a modified Swiss-Cheese universe model. Since this model is an exact solution of the Einstein equations, it gives an insight into the non-linear dynamics of an inhomogeneous universe. We find that inhomogeneities make the volume expansion slower than that of the background Einstein-de Sitter universe when they can be regarded as small fluctuations in the background universe. This result is consistent with previous studies based on second order perturbation analysis. On the other hand, if the inhomogeneities cannot be treated as small perturbations, the volume expansion of the universe depends on the type of fluctuations. Although the volume expansion rate approaches the background value asymptotically, the final volume can be arbitrarily smaller than the background one; it can also be larger than the background value, but in that case there is an upper bound on it.' author: - Hiroshi Kozaki - 'Ken-ichi Nakao' title: 'Volume Expansion of Swiss-Cheese Universe' --- INTRODUCTION ============ The standard Big Bang scenario is based on an assumption of the homogeneous and isotropic distribution of matter and radiation. This assumption then leads to the Robertson-Walker spacetime geometry and the Friedmann-Lemaître (FL) universe model through the Einstein equations. This model has succeeded in explaining various important observational facts: Hubble’s expansion law, the content of light elements and the isotropic cosmic microwave background radiation (CMBR)[@ref:Weinberg]. The CMBR conversely gives a strong observational basis for the assumption of homogeneity and isotropy of our universe through its highly isotropic distribution together with the Copernican principle; we know that our universe was highly isotropic and homogeneous at least on the last scattering surface where the CMBR comes from[@ref:Smoot].
Hence, in the early stage of our universe, linear perturbation analysis in the FL universe model is a powerful tool to investigate the dynamical evolution of our universe[@ref:KS84]. In order to perform the perturbation analysis, we need an appropriate background universe model, i.e., information on the Hubble parameter, the density parameter and further the equation of state of the matter, radiation and so on, in the real universe. To fix the background universe, we use the observational data in the neighborhood of our galaxy. In particular, the Hubble parameter is determined from data on the distance-redshift relation within $100h^{-1}$Mpc, except for the type Ia supernovae. However, the universe in a region within $100h^{-1}$Mpc is highly inhomogeneous and hence there are non-trivial prescriptions to identify the present inhomogeneous universe with the homogeneous and isotropic FL universe model. If those procedures are not appropriate, we might miss finding the correct background universe. It is often stated that the spatially averaged observational data in the vicinity of our galaxy are recognized as those of the background FL universe model. The Hubble parameter determined by the observed distance-redshift relation in our universe is regarded as the expansion rate of the volume of the region co-moving with matter. There are several researches on the effects of inhomogeneities on the volume expansion of the universe[@ref:futamase89; @ref:futamase97; @ref:Tomita; @ref:Russ; @ref:Mukhanov; @ref:Abramo; @ref:Nambu; @ref:Nambu-2; @ref:Nambu-3; @ref:Geshnizjani]. In particular, Nambu applied the renormalization group method to the second order cosmological perturbation theory and claimed that the expansion of the dust filled universe is decelerated by the inhomogeneities[@ref:Nambu]. In the real universe, this back reaction effect might be very small[@ref:Russ].
However, in order to get deeper insight into the dynamics of the inhomogeneous universe, we will consider the situation in which the back reaction of the inhomogeneities seems to be effective. For this purpose, we consider the Swiss-Cheese universe model and investigate the volume expansion in it. The original Swiss-Cheese universe model is constructed by choosing non-overlapping spherical regions in the background homogeneous and isotropic dust filled universe and then replacing these regions by the Schwarzschild space-time whose mass parameter is identical with the “gravitational” mass of the dust fluid in the removed region. On the other hand, in this article, we consider a modified version; we first remove spherical regions from the homogeneous and isotropic dust filled universe and then fill these regions with spherically symmetric but inhomogeneous dust balls. A spherically symmetric inhomogeneous dust ball is described by the Lemaître-Tolman-Bondi (LTB) solution which is an exact solution of the Einstein equations, and hence by this procedure, we obtain an exact solution of the Einstein equations which represents an inhomogeneous universe. Using this solution, we can study non-linear effects of inhomogeneities on the volume expansion of the universe without use of perturbation analysis. In the LTB solution, shell crossing singularities are generic. Since the LTB solution is no longer valid after the occurrence of the shell crossing, we need to change the treatment if it occurs. As a crude approximation to describe the dynamics after the shell crossing, we adopt a model in which the shell crossing region is replaced by a spherical dust shell. This article is organized as follows. In section \[sec:SC\], we explain how to construct the modified Swiss-Cheese universe models which are studied in this article. We investigate the volume expansion rate in the case of small perturbations in section \[sec:perturb\] and in the highly inhomogeneous case in section \[sec:non-linear\].
In section \[sec:non-linear\], an alternative model is also constructed which describes the universe after the shell crossing, and we investigate the dynamics of this model. Finally, section \[sec:summary\] is devoted to summary and discussion. We use the units in which $c=G=1$ throughout the paper. MODIFIED SWISS CHEESE UNIVERSE MODEL {#sec:SC} ==================================== In this section, we give a prescription to construct the modified Swiss-Cheese (MSC) universe model. First we consider an Einstein-de Sitter (EdS) universe and then remove spherical regions from it; these removed regions should not overlap with each other. Next, these regions are filled with inhomogeneous dust balls with the same radii and the same gravitational mass as those of the removed homogeneous dust balls. In this MSC universe model, each inhomogeneous region is described by the Lemaître-Tolman-Bondi (LTB) solution, which is an exact solution of the Einstein equations. The LTB solution describes a spherically symmetric dust filled spacetime. Adopting a synchronous and co-moving coordinate system, the line element is written as $$\begin{aligned} ds^{2}=&-dt^{2}+\gamma_{ij}dx^{i}dx^{j} \notag \\ =&-dt^{2}+\frac{Y'{}^{2}(t,\chi)}{1-\chi^{2}k(\chi)}d\chi^{2} +Y^{2}(t,\chi)(d\theta^{2}+\sin^{2}\theta d\varphi^{2}), \label{eq:line-element}\end{aligned}$$ where the prime ${}'$ denotes the differentiation with respect to the radial coordinate $\chi$. In this coordinate system, the components of the 4-velocity $u^{a}$ of a dust fluid element are $$u^{a}=(1,~0,~0,~0).$$ The stress-energy tensor $T_{ab}$ is then given by $$T_{ab}=\rho(t,\chi) \delta^{0}_{a} \delta^{0}_{b},$$ where $\rho(t,\chi)$ is the rest mass density of the dust.
The Einstein equations lead to the equations for the areal radius $Y(t,\chi)$ and the rest mass density $\rho(t,\chi)$ of the dust; $$\begin{aligned} \dot{Y}^{2} =& -\chi^{2}k(\chi) + \frac{2M(\chi)}{Y},\label{eq:einstein} \\ \rho =& \frac{M'(\chi)}{4\pi Y'Y^{2}},\label{eq:density}\end{aligned}$$ where $k(\chi)$ and $M(\chi)$ are arbitrary functions and the dot $\dot{}$ denotes the differentiation with respect to $t$. We set $M(\chi)$ as $$M(\chi)=\frac{4\pi\rho_{0}}{3}\chi^{3}, \label{eq:mass-form}$$ where $\rho_{0}$ is a non-negative arbitrary constant. The above choice of $M(\chi)$ does not lose any generality. Eqs. (\[eq:line-element\])-(\[eq:density\]) are invariant under the rescaling of the radial coordinate $\chi$, $$\chi \rightarrow \tilde{\chi}=\tilde{\chi}(\chi).$$ Considering this property, we can choose the above form of $M(\chi)$ as long as $\rho Y'>0$. The solutions of eq. (\[eq:einstein\]) are given as follows: In the region where $k(\chi)>0$, $$\begin{aligned} Y=&\frac{4\pi\rho_{0}}{3k}(1-\cos\eta)\chi, \label{eq:k>0}\\ t-t_{0}(\chi)=&\frac{4\pi\rho_{0}}{3k^{3/2}}(\eta-\sin\eta); \label{eq:t-solution}\end{aligned}$$ in the region where $k(\chi)=0$, $$Y=\Bigl[ 6\pi\rho_{0} \bigl\{ t-t_{0}(\chi) \bigr\}^{2} \Bigr]^{1/3}\chi; \label{eq:k=0}$$ in the region where $k(\chi)<0$, $$\begin{aligned} Y=&\frac{4\pi\rho_{0}}{3|k|}(\cosh\eta-1)\chi, \label{eq:k<0}\\ t-t_{0}(\chi)=&\frac{4\pi\rho_{0}}{3|k|^{3/2}}(\sinh\eta-\eta),\end{aligned}$$ where $t_{0}(\chi)$ is an arbitrary function. Note that $t_{0}(\chi)$ is the time when a shell focusing singularity appears, where ‘shell focusing singularity’ means $Y=0$ for $\chi>0$ and $Y'=0$ at $\chi=0$. In this article, we consider a region of $t>t_{0}$, and hence the time $t=t_{0}$ corresponds to the Big Bang singularity. Here, we focus on the case of $t_{0}=0$, i.e., simultaneous Big Bang. For simplicity, we consider the simplest version of MSC universe models shown in fig.
\[fig:MSC\]; there is only one inhomogeneous spherical region at the center of each identical cubic region $\Omega$. We focus on only one cubic co-moving region $\Omega$ (fig. \[fig:MSC-2\]). ![The MSC universe model. Each shaded region represents the inhomogeneity and is described by the LTB solution. []{data-label="fig:MSC"}](MSC.eps) ![One cubic region $\Omega$ of the MSC universe model. $\ell$ and $\chi_{\rm sc}$ are the co-moving scales of this cubic region and the inhomogeneous region respectively.[]{data-label="fig:MSC-2"}](MSC-2.eps) In this article, we consider the following model. Assuming $0<\chi_{1}<\chi_{2}<\chi_{3}<\chi_{\rm sc}$, $$k(\chi)= \begin{cases} % k_{0} & \text{for~~~} 0\leq\chi<\chi_{1}\\ % \dfrac{k_{0}}{2\chi^{2}} \left\{ \dfrac{(\chi^{2}-\chi_{2}^{2})^{2}}{\chi_{1}^{2}-\chi_{2}^{2}} +\chi_{1}^{2}+\chi_{2}^{2} \right\} & \text{for~~~} \chi_{1}\leq\chi<\chi_{2} \rule{0pt}{20pt} \\ % \dfrac{k_{0}}{2\chi^{2}} \left(\chi_{1}^{2}+\chi_{2}^{2}\right) & \text{for~~~} \chi_{2}\leq\chi<\chi_{3} \rule{0pt}{26pt}\\ % \dfrac{k_{0}}{2\chi^{2}}\left(\chi_{1}^{2}+\chi_{2}^{2}\right) \left\{ \left(\dfrac{\chi^{2}-\chi_{3}^{2}} {\chi_{\rm sc}^{2}-\chi_{3}^{2}} \right)^{2}-1 \right\}^{2} & \text{for~~~} \chi_{3}\leq\chi<\chi_{\rm sc} \rule{0pt}{26pt} \end{cases}, \label{eq:region-2-3}$$ where $k_{0}$ is a constant. In order to guarantee $1-\chi^{2}k>0$, the following inequality should hold: $$\kappa:=\dfrac{k_{0}}{2}(\chi_{1}^{2}+\chi_{2}^{2})<1. \label{eq:kappa-def}$$ We consider two cases: the case of $k_{0}>0$ and the case of $k_{0}<0$. Then we investigate the volume expansion rate of a cubic co-moving region $\Omega$. The volume $V$ is defined by $$V(t):=\int_{\Omega}\sqrt{\gamma}d^{3}x, \label{eq:volume-def}$$ where $\gamma$ is the determinant of the spatial metric $\gamma_{ij}$. The volume expansion rate is defined as ${\dot V}/V$.
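The piecewise profile of eq. (\[eq:region-2-3\]) can be sanity-checked numerically: $k(\chi)$ should be continuous at the junction radii $\chi_{1},\chi_{2},\chi_{3},\chi_{\rm sc}$, and for $\kappa<1$ the metric factor $1-\chi^{2}k$ should stay positive. The following sketch checks both; the particular values of $\chi_{1},\dots,\chi_{\rm sc}$ and $k_{0}$ are hypothetical, since the paper leaves them free.

```python
import math

# Hypothetical junction radii; the paper only requires
# 0 < chi1 < chi2 < chi3 < chi_sc and kappa < 1.
chi1, chi2, chi3, chi_sc = 0.25, 0.5, 0.75, 1.0

def k_profile(chi, k0):
    """Piecewise curvature function k(chi) of eq. (region-2-3)."""
    if chi < chi1:
        return k0
    if chi < chi2:
        return k0 / (2 * chi**2) * ((chi**2 - chi2**2)**2 / (chi1**2 - chi2**2)
                                    + chi1**2 + chi2**2)
    if chi < chi3:
        return k0 / (2 * chi**2) * (chi1**2 + chi2**2)
    if chi < chi_sc:
        return (k0 / (2 * chi**2) * (chi1**2 + chi2**2)
                * (((chi**2 - chi3**2) / (chi_sc**2 - chi3**2))**2 - 1)**2)
    return 0.0  # EdS exterior, k = 0

def is_continuous(k0, eps=1e-8, tol=1e-5):
    """k(chi) should match across every junction radius."""
    return all(abs(k_profile(c - eps, k0) - k_profile(c + eps, k0)) < tol
               for c in (chi1, chi2, chi3, chi_sc))

def metric_regular(k0):
    """1 - chi^2 k > 0 everywhere, guaranteed by kappa < 1 (eq. kappa-def)."""
    kappa = k0 / 2 * (chi1**2 + chi2**2)
    grid = [i / 1000 * chi_sc for i in range(1, 1001)]
    return kappa < 1 and all(1 - c**2 * k_profile(c, k0) > 0 for c in grid)
```

With these radii and $k_{0}=2$ one has $\kappa=0.3125<1$, and both checks pass; for $k_{0}<0$ the positivity of $1-\chi^{2}k$ is automatic.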
The Case of Small Fluctuations {#sec:perturb} ============================== In a region where $$0< \frac{9t|k|^{3/2}}{2\pi\rho_{0}}\ll 1$$ is satisfied, the areal radius $Y(t,\chi)$ is written in the form of a power series as $$Y(t,\chi)=a(t)\chi \left( 1-\frac{1}{20}\epsilon-\frac{3}{2800}\epsilon^{2} \right)+O(\epsilon^{3}) , \label{eq:perturbation}$$ where $$\epsilon(t,\chi):=\left( \frac{9t}{2\pi\rho_{0}} \right)^{2/3}k =\left( \frac{6t}{M(\chi)} \right)^{2/3}\chi^{2}k,$$ and $$a(t):=(6\pi\rho_{0}t^{2})^{1/3}. \label{eq:s-factor}$$ Further, we consider the case of $|k|\chi^{2}\ll1$, which has been studied by Nambu using second-order perturbation analysis. The components of the metric tensor are written as $$\begin{aligned} g_{\chi\chi} &= a^{2} \Bigg[ 1 +\left(\dfrac{M}{6t}\right)^{2/3}\epsilon -\frac{1}{10}\frac{d}{d\chi}(\chi\epsilon) +O(\epsilon^{2}) \Bigg], \label{eq:metric-perturbation-1}\\ g_{\theta\theta} &= a^{2}\chi^{2} \Bigg[ 1-\frac{1}{10}\epsilon+O(\epsilon^{2}) \Bigg], \label{eq:metric-perturbation-2}\\ g_{\varphi\varphi} &= a^{2}\chi^{2}\sin^{2}\theta \Bigg[ 1 -\frac{1}{10}\epsilon+O(\epsilon^{2}) \Bigg]. \label{eq:metric-perturbation-3}\end{aligned}$$ From the above equations, it is easily seen that in the limit $\epsilon\rightarrow 0$ with $t$ fixed, the metric tensor becomes that of the EdS universe. Since the outside region is an EdS universe, $\epsilon$ should vanish at the boundary $\chi=\chi_{\rm sc}$ by the continuity of the metric tensor. To compare our result with the study by Nambu[@ref:Nambu], we impose the conditions that the spatial averages of the Cartesian components of the metric tensor and of the density agree with those of the *background* EdS universe up to first order in $\epsilon$.
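The expansion in eq. (\[eq:perturbation\]) can be checked against the exact parametric solution, eqs. (\[eq:k&gt;0\]) and (\[eq:t-solution\]). The short script below (with $\rho_{0}$, $k$ and $\chi$ set to hypothetical illustrative values, and $t_{0}=0$) first verifies that the parametric solution satisfies eq. (\[eq:einstein\]) and then compares the truncated series with the exact $Y$ for small $\epsilon$; the mismatch should be of order $\epsilon^{3}$.

```python
import math

rho0 = 1.0                          # illustrative value
A = 4 * math.pi * rho0 / 3          # so that M(chi) = A chi^3, eq. (mass-form)

def Y_exact(eta, chi, k):
    """Parametric LTB solution for k > 0 with t0 = 0, eq. (k>0)."""
    return A / k * (1 - math.cos(eta)) * chi

def t_of_eta(eta, k):
    """Development-angle relation, eq. (t-solution)."""
    return A / k**1.5 * (eta - math.sin(eta))

def Y_series(t, chi, k):
    """Second-order expansion of eq. (perturbation)."""
    a = (6 * math.pi * rho0 * t**2) ** (1 / 3)          # eq. (s-factor)
    eps = (9 * t / (2 * math.pi * rho0)) ** (2 / 3) * k
    return a * chi * (1 - eps / 20 - 3 * eps**2 / 2800)

# Check of eq. (einstein): Ydot = (dY/deta)/(dt/deta) must satisfy
# Ydot^2 = -chi^2 k + 2 M(chi) / Y.
k, chi, eta = 0.04, 1.0, 0.3
Ydot = (A / k) * math.sin(eta) * chi / ((A / k**1.5) * (1 - math.cos(eta)))
residual = Ydot**2 - (-chi**2 * k + 2 * A * chi**3 / Y_exact(eta, chi, k))
```

At $\eta=0.1$ one has $\epsilon\approx\eta^{2}=0.01$, and the relative error of the truncated series is indeed far below $\epsilon^{2}=10^{-4}$.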
The spatial average of a quantity $F$ is defined as follows: $$\langle F \rangle := \left(\int_{\Omega}d^{3}x\right)^{-1} \int_{\Omega} F d^{3}x.$$ We consider a Cartesian coordinate system $(x,y,z)$ which is related to the spherical polar coordinate system $(\chi,\theta,\varphi)$ in the ordinary manner as $$\begin{aligned} x&=\chi\sin\theta\cos\varphi, \notag \\ y&=\chi\sin\theta\sin\varphi, \notag \\ z&=\chi\cos\theta. \nonumber\end{aligned}$$ Then the spatial averages of the Cartesian components of the spatial metric $\gamma_{ij}$ are obtained as $$\begin{aligned} \langle \gamma_{ij} \rangle &=a^{2}\left[ 1+\dfrac{4\pi}{3\ell^{3}} \int_{0}^{\chi_{\rm sc}} \left\{ \left( \dfrac{M}{6t} \right)^{2/3}\epsilon\chi^{2} -\dfrac{1}{10}\dfrac{d}{d\chi}(\chi^{3}\epsilon) \right\} d\chi \right]\delta_{ij} +O(\epsilon^{2}) \notag \\ &=(6\pi\rho_{0}t^{2})^{2/3} \left( 1+\dfrac{4\pi}{3\ell^{3}}\int_{0}^{\chi_{\rm sc}}k\chi^{4}d\chi \right)\delta_{ij}+O(\epsilon^{2}),\end{aligned}$$ where we have used $\epsilon=0$ at $\chi=\chi_{\rm sc}$. Therefore we should define the scale factor $a_{\textsc{b}}(t)$ of the background EdS universe as $$a_{\textsc{b}}(t) :=a(t)\left( 1+\dfrac{4\pi}{3\ell^{3}}\int_{0}^{\chi_{\rm sc}}k\chi^{4}d\chi \right)^{1/2}. \label{eq:linear-s-factor}$$ The above equation means that although the outside homogeneous region has the same geometry as the EdS universe, it does not agree with [*the background*]{} EdS universe as long as $$\int_{0}^{\chi_{\mbox{\scriptsize{sc}}}}k\chi^{4}d\chi \neq 0.
\label{eq:condition-k}$$ The rest mass density $\rho$ of the dust is written in the form of a power series with respect to $\epsilon$ as $$\rho=\dfrac{1}{6\pi t^{2}}\left\{1+\dfrac{1}{20\chi^{2}} \dfrac{d}{d\chi}(\chi^{3}\epsilon)\right\}+O(\epsilon^{2}).$$ The spatial average of $\rho$ is obtained as $$\langle\rho\rangle =\dfrac{1}{6\pi t^{2}}+O(\epsilon^{2}).$$ Hence the background energy density $\rho_{\textsc{b}}$ is defined by $$\rho_{\textsc{b}}(t):=\dfrac{1}{6\pi t^{2}}. \label{eq:linear-density}$$ Here note that the background Hubble equation $$\left(\dfrac{{\dot a}_{\textsc{b}}}{a_{\textsc{b}}}\right)^{2} =\dfrac{8\pi}{3}\rho_{\textsc{b}}, \label{eq:Hubble-eq}$$ holds. By eq. (\[eq:perturbation\]), the 3-dimensional volume element $\sqrt{\gamma}$ is written as $$\begin{gathered} \sqrt{\gamma} = a^{3}\chi^{2}\sin\theta \biggl[ 1 -\frac{1}{20\chi^{2}}\frac{d}{d\chi}(\chi^{3}\epsilon) +\frac{1}{2}\chi^{2}k(\chi) +\frac{1}{700\chi^{2}}\frac{d}{d\chi}(\chi^{3}\epsilon^{2}) \\ -\frac{1}{40}k(\chi)\frac{d}{d\chi}(\chi^{3}\epsilon) +\frac{3}{8}\chi^{4}k^{2}(\chi) \biggr]+O(\epsilon^{3}).\end{gathered}$$ Using the above equation, the volume $V$ defined by eq. (\[eq:volume-def\]) is obtained as $$V(t)= a_{\textsc{b}}^{3}(t) \left( \ell^{3}+V_{1}+V_{2}t^{2/3} \right)+O(\epsilon^{3}), \label{eq:linear-volume}$$ where $$\begin{aligned} V_{1}&:= \dfrac{3\pi}{2} \int_{0}^{\chi_{\rm sc}} \chi^{6}k^{2} d\chi -\dfrac{2\pi^{2}}{3\ell^{3}} \left( \int_{0}^{\chi_{\rm sc}} k\chi^{4} d\chi \right)^{2}, \\ V_{2}&:=-\dfrac{\pi}{20} \left( \frac{9}{2\pi\rho_{0}} \right)^{2/3} \int_{0}^{\chi_{\rm sc}} k^{2}\chi^{4} d\chi <0.\end{aligned}$$ In eq. (\[eq:linear-volume\]), $a_{\textsc{b}}^{3}\ell^{3}$ is the 3-dimensional volume measured by the background EdS geometry, and the extra terms come from inhomogeneities. These terms do not include first-order perturbations but come from second-order perturbations.
The volume expansion rate is given by $$\frac{\dot{V}}{V} = 3\frac{\dot{a}_{\textsc{b}}}{a_{\textsc{b}}} +\frac{2V_{2}}{3\ell^{3}}t^{-1/3}+O(\epsilon^{3}).$$ The first term on the R.H.S. of the above equation corresponds to the background part. On the other hand, the second term implies that the back-reaction of inhomogeneities decelerates the volume expansion. It is worthwhile to note that this result does not depend on the detailed functional form of $k(\chi)$. The Case of Non-Linear Fluctuations {#sec:non-linear} =================================== In this section, we study the cases where $|k|\chi^{2}$ is not necessarily much smaller than unity. As in the case treated in the previous section, in order to specify an effect due to inhomogeneity, we need a background homogeneous cubic region to be compared with the inhomogeneous one. However, the background homogeneous universe introduced in the previous section is not appropriate for the non-linear case; for example, a sufficiently negative $k$ makes the quantity inside the square root in eq. (\[eq:linear-s-factor\]) negative, so that $a_{\textsc{b}}$ is ill-defined. In order to introduce an appropriate background, we consider the rest mass $M_{\rm R}$ defined by $$\begin{aligned} M_{\rm R} &:= \int_{\Omega} \rho u^{0}\sqrt{-g} d^{3}x \notag \\ &= 4\pi \int_{0}^{\chi_{\rm sc}} \dfrac{\rho Y'Y^{2}}{\sqrt{1-\chi^{2}k}} d\chi +\rho_{0} \left( \ell^{3}-\dfrac{4\pi}{3}\chi_{\rm sc}^{3} \right) \notag \\ &= \rho_{0}\ell^{3} \left\{ 1+\dfrac{4\pi}{\ell^{3}} \int_{0}^{\chi_{\rm sc}} \left(\dfrac{1}{\sqrt{1-\chi^{2}k}}-1\right)\chi^{2} d\chi \right\}, \label{eq:M-def}\end{aligned}$$ where $g$ is the determinant of the metric tensor of spacetime. $M_{\rm R}$ is a conserved quantity by virtue of the continuity equation for the rest mass density, $\partial_{a}(\rho u^{a}\sqrt{-g})=0$, where $\partial_{a}$ is a partial derivative. We introduce the background as a cubic region with the same rest mass as the corresponding inhomogeneous cubic region.
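The last equality in eq. (\[eq:M-def\]) follows from $\rho Y'Y^{2}=M'/(4\pi)=\rho_{0}\chi^{2}$ (eqs. (\[eq:density\]) and (\[eq:mass-form\])), which also makes the time-independence of $M_{\rm R}$ manifest. As a numerical illustration, the two forms can be compared with a hypothetical smooth profile $k(\chi)$ (any profile with $1-\chi^{2}k>0$ works); the specific $k(\chi)$, $\ell$ and $\chi_{\rm sc}$ below are illustrative choices, not values from the paper.

```python
import math

ell, chi_sc, rho0 = 2.0, 1.0, 1.0       # illustrative values
k0 = -1.0

def k(chi):
    # hypothetical smooth curvature profile vanishing at chi_sc
    return k0 * (chi_sc**2 - chi**2) / chi_sc**2

def simpson(f, a, b, n=2000):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3

# Second line of eq. (M-def): integral over the LTB ball, with
# rho Y' Y^2 = rho0 chi^2, plus the homogeneous remainder of the cube.
M1 = (4 * math.pi * simpson(lambda c: rho0 * c**2 / math.sqrt(1 - c**2 * k(c)),
                            0.0, chi_sc)
      + rho0 * (ell**3 - 4 * math.pi / 3 * chi_sc**3))

# Last line of eq. (M-def).
M2 = rho0 * ell**3 * (1 + 4 * math.pi / ell**3
                      * simpson(lambda c: (1 / math.sqrt(1 - c**2 * k(c)) - 1) * c**2,
                                0.0, chi_sc))
```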
Note that in general, the rest mass of the dust $M_{\rm R}$ in a cubic co-moving region disagrees with that of the original EdS universe. Hence a cubic region of the original EdS universe is not the appropriate background. The rest mass density $\rho_{\textsc{b}}$ and scale factor $a_{\textsc{b}}$ of the background are introduced in the following manner, $$\rho_{\textsc{b}}a_{\textsc{b}}^{3}\ell^{3}:=M_{\rm R}. \label{eq:relation}$$ Even if we specify $M_{\rm R}$, $\rho_{\textsc{b}}$ and $a_{\textsc{b}}$ are not fixed completely; we need one more condition. Here we impose the condition that the volume $V(t)$ approaches the volume $a_{\textsc{b}}^{3}(t)\ell^{3}$ of the background cubic region as $t\rightarrow0$. In the limit $t\rightarrow0$, the volume $V$ behaves as $$\begin{aligned} V&= a^{3}\ell^{3} +4\pi \int_{0}^{\chi_{\rm sc}} \left( \dfrac{Y'Y^{2}}{\sqrt{1-\chi^{2}k}}-a^{3}\chi^{2} \right) d\chi \notag \\ &\longrightarrow a^{3}\ell^{3} \left\{ 1 +\dfrac{4\pi}{\ell^{3}} \int_{0}^{\chi_{\rm sc}} \left( \dfrac{1}{\sqrt{1-\chi^{2}k}}-1 \right)\chi^{2} d\chi \right\}.\end{aligned}$$ Hence the background scale factor $a_{\textsc{b}}$ is given by $$a_{\textsc{b}}(t):=a\left\{1+\dfrac{4\pi}{\ell^{3}} \int_{0}^{\chi_{\rm sc}}\left(\dfrac{1}{\sqrt{1-\chi^{2}k}}-1\right) \chi^{2}d\chi\right\}^{1/3}. \label{eq:a-def}$$ Here note that in the case of $\chi^{2}|k|\ll1$, the above definition of $a_{\textsc{b}}$ agrees with eq. (\[eq:linear-s-factor\]) up to first order in $\chi^{2}k$. From eqs. (\[eq:M-def\]), (\[eq:relation\]) and (\[eq:a-def\]), we find that the background rest mass density $\rho_{\textsc{b}}$ coincides with eq. (\[eq:linear-density\]). We can easily see that the background Hubble equation (\[eq:Hubble-eq\]) also holds. From eq. (\[eq:density\]), we can easily see that if $Y'$ vanishes, the rest mass density $\rho$ becomes infinite and hence a singularity forms there. This is called a shell crossing singularity.
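The agreement between the non-linear definition (\[eq:a-def\]) and the linearized one (\[eq:linear-s-factor\]) noted above can also be verified numerically: for a weak, hypothetical curvature profile (the specific $k(\chi)$, $\ell$, $\chi_{\rm sc}$ and $k_{0}$ below are illustrative, not from the paper), their difference is of second order in $\chi^{2}k$, while each deviates from $a$ at first order.

```python
import math

ell, chi_sc = 2.0, 1.0                  # illustrative values

def simpson(f, a, b, n=2000):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3

def ab_nonlinear(k):
    """a_B / a from eq. (a-def)."""
    integral = simpson(lambda c: (1 / math.sqrt(1 - c**2 * k(c)) - 1) * c**2,
                       0.0, chi_sc)
    return (1 + 4 * math.pi / ell**3 * integral) ** (1 / 3)

def ab_linear(k):
    """a_B / a from eq. (linear-s-factor)."""
    integral = simpson(lambda c: k(c) * c**4, 0.0, chi_sc)
    return (1 + 4 * math.pi / (3 * ell**3) * integral) ** (1 / 2)

k0 = 1e-3                               # weak hypothetical profile
k_weak = lambda c: k0 * (chi_sc**2 - c**2) / chi_sc**2
```

The difference between the two definitions is a factor $\sim\chi^{2}k$ smaller than the first-order deviation itself, confirming the statement in the text.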
Hellaby and Lake showed that a necessary and sufficient condition for the appearance of a shell crossing singularity is [@ref:Hellaby-Lake] $$\begin{aligned} k'&> 0~~~~~{\rm for~the~region}~~k>0, \\ \left( \chi^{2}k \right)'&> 0~~~~~{\rm for~the~region}~~k\leq0. \label{eq:condition}\end{aligned}$$ In the case of $k_{0}>0$, the first condition does not hold and hence a shell crossing singularity does not appear. On the other hand, in the case of $k_{0}<0$, shell crossing singularities $Y'=0$ necessarily appear since $(\chi^{2}k)'>0$ in the region $\chi_{3}\leq\chi<\chi_{\rm sc}$. The Case of $k_{0}>0$ --------------------- To estimate the volume $V$, we rewrite it in the form $$V=a^{3}\left(\ell^{3}-\dfrac{4\pi}{3}\chi_{\rm sc}^{3}\right) +4\pi\int_{0}^{Y_{\rm sc}}\dfrac{Y^{2}dY}{\sqrt{1-\chi^{2}k}}, \label{eq:volume-2}$$ where $Y_{\rm sc}:=a(t)\chi_{\rm sc}$. From eq. (\[eq:k&gt;0\]), we can see that $Y$ vanishes at $\eta=2\pi$ by gravitational collapse, and hence by substituting $\eta=2\pi$ into eq. (\[eq:t-solution\]), the singularity formation time $t=t_{\rm sg}(\chi)$ is obtained as $$t=t_{\rm sg}(\chi)=\dfrac{8\pi^{2}\rho_{0}}{3k^{3/2}}.$$ Denoting the inverse function of $t_{\rm sg}(\chi)$ by $\chi_{\rm sg}(t)$, the region $0\leq \chi \leq\chi_{\rm sg}(t)<\chi_{\rm sc}$ has already collapsed at times $t$ larger than $8\pi^{2}\rho_{0}/(3k_{0}^{3/2})$. $\chi_{\rm sg}(t)$ approaches $\chi_{\rm sc}$ asymptotically as $t\rightarrow \infty$. Here note that in the integrand of eq. (\[eq:volume-2\]), $\chi=\chi_{\rm sg}(t)$ at $Y=0$ and $\chi=\chi_{\rm sc}$ at $Y=Y_{\rm sc}$. Since $\chi^{2}k(\chi)$ is a decreasing function of $\chi$ in the region $\chi_{3}<\chi<\chi_{\rm sc}$ and vanishes just at $\chi=\chi_{\rm sc}$, we find that $0 < \chi^{2}k(\chi)\leq \chi_{\rm sg}^{2}k(\chi_{\rm sg})$ holds in the integrand at sufficiently large $t$.
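The Hellaby–Lake criterion can be checked directly on the profile of eq. (\[eq:region-2-3\]) with finite differences (the junction radii and $k_{0}$ are again hypothetical illustrative values): for $k_{0}>0$ one finds $k'\leq 0$ everywhere, so no shell crossing occurs, while for $k_{0}<0$ one finds $(\chi^{2}k)'>0$ throughout $\chi_{3}<\chi<\chi_{\rm sc}$. The collapse time $t_{\rm sg}$ obtained from $\eta=2\pi$ is also verified.

```python
import math

chi1, chi2, chi3, chi_sc = 0.25, 0.5, 0.75, 1.0   # hypothetical radii
rho0 = 1.0

def k_profile(chi, k0):
    """Piecewise curvature function of eq. (region-2-3)."""
    if chi < chi1:
        return k0
    if chi < chi2:
        return k0 / (2 * chi**2) * ((chi**2 - chi2**2)**2 / (chi1**2 - chi2**2)
                                    + chi1**2 + chi2**2)
    if chi < chi3:
        return k0 / (2 * chi**2) * (chi1**2 + chi2**2)
    if chi < chi_sc:
        return (k0 / (2 * chi**2) * (chi1**2 + chi2**2)
                * (((chi**2 - chi3**2) / (chi_sc**2 - chi3**2))**2 - 1)**2)
    return 0.0

def deriv(f, x, h=1e-6):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

def no_shell_crossing_positive(k0):
    """Hellaby-Lake: k' > 0 never holds for this profile when k0 > 0."""
    grid = [0.01 + i * (chi_sc - 0.02) / 500 for i in range(501)]
    return all(deriv(lambda c: k_profile(c, k0), c) < 1e-6 for c in grid)

def shell_crossing_negative(k0):
    """(chi^2 k)' > 0 in chi3 < chi < chi_sc when k0 < 0."""
    grid = [chi3 + (i + 1) * (chi_sc - chi3) / 102 for i in range(100)]
    return all(deriv(lambda c: c**2 * k_profile(c, k0), c) > 0 for c in grid)

def t_of_eta(eta, k):
    """Eq. (t-solution) with t0 = 0; eta = 2*pi gives the collapse time."""
    return 4 * math.pi * rho0 / (3 * k**1.5) * (eta - math.sin(eta))
```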
This means that $\chi^{2}k(\chi)$ also approaches zero asymptotically, since $\chi_{\rm sg}^{2}k(\chi_{\rm sg})\rightarrow \chi_{\rm sc}^{2}k(\chi_{\rm sc})=0$ for $t\rightarrow\infty$, and thus $$V\longrightarrow a^{3}\left(\ell^{3}-\dfrac{4\pi}{3}\chi_{\rm sc}^{3}\right) +4\pi\int_{0}^{Y_{\rm sc}}Y^{2}dY=a^{3}\ell^{3}. \label{eq:volume-3}$$ This equation means that the volume expansion rate approaches the background value asymptotically, i.e., $$\dfrac{\dot{V}}{V}\longrightarrow 3\dfrac{\dot{a}}{a}=3\dfrac{\dot{a}_{\textsc{b}}}{a_{\textsc{b}}} \text{~~~for~} t\rightarrow\infty.$$ However, the volume itself may approach a value much different from the background one $a_{\textsc{b}}^{3}\ell^{3}$. $\sqrt{1-\chi^{2}k}$ can be made arbitrarily small in the region $\chi_{2}\leq\chi<\chi_{3}$; in the limit of $\kappa\rightarrow1$, $\sqrt{1-\chi^{2}k}\rightarrow 0$ in this region (see eqs. (\[eq:region-2-3\]) and (\[eq:kappa-def\])). Thus, if we set $\kappa$ to be very close to unity, we obtain $$a_{\textsc{b}} \sim a\left(\dfrac{4\pi}{\ell^{3}} \int_{0}^{\chi_{\rm sc}} \dfrac{\chi^{2}d\chi}{\sqrt{1-\chi^{2}k}}\right)^{1/3} \gg a.%\label{}$$ In this case, the volume $V$ asymptotically approaches a value much smaller than the background one. The volume expansion is also much different from that of the background in the intermediate stage (see fig. \[fig:positive\]). ![ \[fig:positive\] Volume expansion with positive $k(\chi)$. The dotted line is the temporal evolution of the co-moving volume measured by the background EdS geometry and the dashed line is that measured by the outer EdS geometry. The time is set to unity when the scale of the spherical inhomogeneous region $a(t)\chi_{\rm sc}$ agrees with the horizon scale $a(t)/\dot{a}(t)$. The scale factor at $t=1$ is also set to unity and the co-moving scale $\ell$ is set to $4\chi_{\rm sc}$.
Hence the co-moving volume measured by the outer EdS geometry at $t=1$ is $\ell^3(=6^3=216)$.](volume.eps) The Case of $k_{0}<0$ --------------------- As mentioned above, shell crossing singularities $Y'=0$ necessarily appear in this model. The volume expansion before the shell crossing is shown in fig. \[fig:ratio.before\]. For $k(\chi)<0$, the background scale factor $a_{\textsc{b}}$ cannot differ very much from the original one. We plot the ratios of the volume $V(t)$ to $a^{3}\ell^{3}$ and to $a_{\textsc{b}}^{3}\ell^{3}$. We find that the inhomogeneities decelerate the volume expansion before the shell crossing singularity appears. ![\[fig:ratio.before\] Temporal evolution of the ratios before the shell crossing singularity appears. The time is set to unity when the scale of the spherical inhomogeneous region $a(t)\chi_{\rm sc}$ agrees with the horizon scale $a(t)/\dot{a}(t)$. The dashed line corresponds to the shell crossing time. $V_{\textsc{msc}}$ is the volume of the co-moving cubic region $\Omega$ in the modified Swiss Cheese universe. $V_{\textsc{b}}$ is the volume of $\Omega$ measured by the background EdS geometry, i.e., $V_{\textsc{b}}=a_{\textsc{b}}^{3}\ell^{3}$. $V_{\textsc{out}}$ is the volume of $\Omega$ measured by the outer EdS geometry, i.e., $V_{\textsc{out}}=a^{3}\ell^{3}$.](ratio-1.eps) Here, we investigate the volume expansion rate after the appearance of this shell crossing singularity. The structure formed by the shell crossing depends on what the dust matter approximates. If the dust matter is an extremely cold fluid, a shock wave will form after the shell crossing. If the dust matter consists of collisionless particles, a spherical wall will form. When the width of the shock wave or of the wall is much smaller than its radius, we can treat the shock front or the wall as a timelike singular hypersurface, where ‘timelike’ means that the unit normal vector $n^{a}$ to the hypersurface is spacelike, i.e., $n^{a}n_{a}=1$.
A timelike singular hypersurface is characterized by its surface-stress-energy tensor defined by $$S_{ab}:=\lim_{\varepsilon\rightarrow0}\int_{-\varepsilon}^{\varepsilon} T_{cd}h_{a}^{c}h_{b}^{d}dx$$ where $x$ is a Gaussian coordinate ($x=0$ on the hypersurface) in the direction of the normal vector $n^{a}$, and $h_{a}^{c}:=\delta_{a}^{c}-n_{a}n^{c}$ is a projection operator. A timelike singular hypersurface with a surface-stress-energy tensor of the form $$S_{ab}=\sigma v_{a}v_{b},$$ is called a world sheet generated by a trajectory of a dust shell, where $\sigma$ is the surface-energy density and $v_{a}$ is the 4-velocity of an infinitesimal surface element of the dust shell. Hereafter we focus on this case. In order to get an insight into the dynamics after shell crossing, we study the volume expansion of a cubic region with a spherically symmetric dust shell. In the case of $k_{0}<0$, $k(\chi)$ is negative in the LTB region; $(\chi^{2}k)'\leq0$ in the inner region $0\leq\chi\leq\chi_{3}$, while $(\chi^{2}k)'>0$ in the outer region $\chi_{3}<\chi<\chi_{\rm sc}$. In accordance with eq. (\[eq:condition\]), shell crossing necessarily occurs in the outer region, and we assume that this region collapses into a dust shell. Hence we focus on a situation in which $(\chi^{2}k)'<0$ holds inside the dust shell. The model is constructed by enclosing the interior LTB region by a spherically symmetric timelike singular hypersurface (see fig. \[fig:configuration\]). ![\[fig:configuration\]\ The schematic configuration of the shell. ](DS.eps) The analysis using the dust shell model works only before the dust shell reaches the boundary of the cubic co-moving region $\Omega$. When the dust shell reaches the boundary, it collides with other dust shells centered in the surrounding cubic regions. In this article, we do not consider the dynamics after collisions of the dust shells and assume that there is enough time before the dust shells collide.
The interior region of the dust shell is described by the LTB solution with a line element, $$ds_{-}^{2} = -dt_{-}^2 +\dfrac{{Y'}^{2}(t_{-},\chi_{-})}{1-\chi_{-}^{2}k(\chi_{-})}d\chi_{-}^{2} +Y^{2}(t_{-},\chi_{-}) \left( d\theta^{2}+\sin^{2}\theta d\varphi^{2} \right),$$ where the prime ${}'$ means the derivative with respect to $\chi_{-}$. The equation for the areal radius $Y$ is given by the same equation as eq. (\[eq:einstein\]), i.e., $$\dot{Y}^{2}=-\chi_{-}^{2}k(\chi_{-})+\dfrac{2M(\chi_{-})}{Y}, \label{eq:density-LTB}$$ where the dot $\dot{}$ means the derivative with respect to $t_{-}$. Here we focus on the late-time behavior of the volume expansion. In this case, since $Y$ is monotonically increasing with respect to $t_{-}$, the “gravitational potential” $2M(\chi_{-})/Y$ becomes much smaller than $-\chi_{-}^{2}k(\chi_{-})$, and hence we ignore this potential term in eq. (\[eq:density-LTB\]). This approximation corresponds to assuming that the interior region of the dust shell is described by Minkowski geometry. Then the solution for $Y$ is easily obtained as $$Y(t_{-},\chi_{-})=\left\{ -\chi_{-}^{2}k(\chi_{-}) \right\}^{1/2}t_{-}. \label{eq:Y-solution}$$ The line element of the outer region is written as $$ds_{+}^{2} = -dt_{+}^{2} +a^{2}(t_{+}) \left\{ d\chi_{+}^{2} +\chi_{+}^{2} \left( d\theta^{2}+\sin^{2}\theta d\varphi^{2} \right) \right\}, \label{eq:FRW}$$ and the Einstein equations reduce to the Hubble equation $$(\dot{a}\chi_{+})^{2}=\frac{2M_{+}(\chi_{+})}{a\chi_{+}}, \label{eq:density-FRW}$$ where $M_{+}(\chi_{+})$ is related to the rest mass density $\rho_{+}$ of the outer region as $$M_{+}(\chi_{+}) = \frac{4\pi}{3}\rho_{+}a^{3}\chi^{3}_{+}. \label{eq:mass}$$ Note that by virtue of the spherical symmetry, the angular coordinates $\theta$ and $\varphi$ are common to the interior and exterior regions of the dust shell.
A physically and geometrically clear prescription to treat a timelike singular hypersurface has been presented by Israel[@ref:Israel]. In his prescription, junction conditions on the metric tensor across the singular hypersurface lead to equations which determine the singular hypersurface itself, i.e., the equation of motion of a dust shell in our case. Using his prescription, the dynamics of a vacuum void surrounded by a spherical dust shell in an expanding universe have been analyzed by Maeda and Sato[@ref:Maeda]. We can use their results since the situation considered here is completely the same as theirs. By virtue of the spherical symmetry of the system considered here, the trajectory of the dust shell is given by $$t_{-}=t_{\rm s-}(t_{+}),~~~ \chi_{\pm}=\chi_{\rm s\pm}(t_{+}),~~~\theta={\rm constant}~~~{\rm and}~~~ \varphi={\rm constant},$$ where the time coordinate $t_{+}$ in the exterior region has been adopted as the independent temporal variable. The areal radius $R$ of the dust shell is then given by $$R(t_{+}):=a\left(t_{+}\right) \chi_{\rm s+}=Y_{-}\left(t_{\rm s-},\chi_{\rm s-}\right).$$ Maeda and Sato derived a differential equation for the areal radius $R$ of the dust shell as[@ref:Maeda] $$\dfrac{d^{2}R}{dt_{+}^{2}} = \frac{1}{2R} \left\{-\left(1+VV_{H}+2V^{2}+V_{H}^{2}\right) +(1-4V^{2})(1+2VV_{H}+V_{H}^{2})^{1/2}\right\},\label{eq:R-EOM}$$ where $$\begin{aligned} H&:= \dfrac{\dot{a}(t_{+})}{a(t_{+})}, \\ V_{H}&:= HR, \\ V&:=\dfrac{dR}{dt_{+}}-V_{H}.\end{aligned}$$ Using a solution of eq.
(\[eq:R-EOM\]), the coordinate radius $\chi_{\rm s+}$ of the dust shell in the outer region is given by $$\chi_{\rm s+}=\dfrac{R}{a(t_{+})}.$$ The equations for $t_{\rm s-}$ and $\chi_{\rm s-}$ on the dust shell are given by $$\begin{aligned} \dfrac{dt_{\rm s-}}{dt_{+}} = &\left\{1-\chi_{\rm s-}^{2}k(\chi_{\rm s-})\right\}^{1/2} \left\{ 1-a^{2}(t_{+})\left(\dfrac{d\chi_{\rm s+}}{dt_{+}}\right)^{2} +\left(\dfrac{dR}{dt_{+}}\right)^{2} \right\}^{1/2} \notag \\ &-\left\{-\chi_{\rm s-}^{2}k(\chi_{\rm s-})\right\}^{1/2} \dfrac{dR}{dt_{+}}, \label{eq:t-EOM}\\ \dfrac{d\chi_{\rm s-}}{dt_{+}} = &\dfrac{1}{Y_{-}'(t_{\rm s-},\chi_{\rm s-})} \left( \dfrac{dR}{dt_{+}} -{\dot Y}_{-}(t_{\rm s-},\chi_{\rm s-})\dfrac{dt_{\rm s-}}{dt_{+}} \right),\end{aligned}$$ where the dot $\dot{ }$ and the prime $'$ are derivatives with respect to $t_{-}$ and $\chi_{-}$, respectively. The volume $V_{\rm in}$ inside the dust shell is written as $$\begin{aligned} V_{\rm in} &= 4\pi \int_{0}^{\chi_{\rm s-}} \dfrac{Y'Y^{2}}{\sqrt{1-\chi^{2}k(\chi)}} d\chi =4\pi \int^{R}_{0} \dfrac{Y^{2}}{\sqrt{1+Y^{2}/t_{\rm s-}^{2}}} dY \notag \\ &= 2\pi t_{\rm s-}^{3} \left[ \dfrac{R}{t_{\rm s-}}\sqrt{1+\left(\dfrac{R}{t_{\rm s-}}\right)^{2}} -\ln \left\{ \dfrac{R}{t_{\rm s-}} +\sqrt{ 1+\left( \dfrac{R}{t_{\rm s-}} \right)^{2} } \right\} \right], \label{eq:V-in}\end{aligned}$$ where we have used eq. (\[eq:Y-solution\]) in the second equality to estimate $\chi^{2}k(\chi)$ in the integrand. We consider the volume ${\tilde V}_{\rm in}$ normalized by the volume of the removed homogeneous dust ball in the original EdS universe, $${\tilde V}_{\rm in}:=\dfrac{V_{\rm in}}{4\pi R^{3}/3}= \dfrac{3}{2}\left(\dfrac{t_{\rm s-}}{R}\right)^{3}\left[\dfrac{R}{t_{\rm s-}} \sqrt{1+\left(\dfrac{R}{t_{\rm s-}}\right)^{2}} -\ln\left\{\dfrac{R}{t_{\rm s-}}+\sqrt{1+\left(\dfrac{R}{t_{\rm s-}}\right)^{2}} \right\}\right].
\label{eq:V-tilde}$$ ${\tilde V}_{\rm in}$ is a monotonically decreasing function of $R/t_{\rm s-}$; it approaches unity in the limit $R/t_{\rm s-}\rightarrow0$, while it vanishes in the limit $R/t_{\rm s-}\rightarrow\infty$ (see fig. \[fig:V\_in\]). In order to see the temporal behavior of ${\tilde V}_{\rm in}$, we need to solve eqs. (\[eq:R-EOM\]) and (\[eq:t-EOM\]). ![\[fig:V\_in\]Behavior of $\tilde{V}_{\textrm{in}}$ as a function of $R/t_{\textrm{s}-}$. ](V_in.eps) For sufficiently large $t_{+}$, the areal radius $R$ of the dust shell becomes much smaller than the cosmological horizon scale $H^{-1}$. In this case, the motion of the dust shell is well described by the Newtonian approximation. Maeda and Sato showed that for sufficiently large $t_{+}$, $R$ behaves as[@ref:Maeda] $$R(t_{+})\propto t_{+}{}^{(15+\sqrt{17})/24}\sim t_{+}{}^{0.797} \sim t_{\rm s-}{}^{0.797}.$$ Hence we find that $R/t_{\rm s-}\propto t_{\rm s-}{}^{-0.203}\rightarrow0$ for $t_{\rm s-}\rightarrow \infty$. Using this result and eq. (\[eq:V-tilde\]), we find ${\tilde V}_{\rm in}\longrightarrow 1$ for $t_{\rm s-}\rightarrow\infty$, and hence for $t_{+}\rightarrow\infty$, $$V\longrightarrow a^{3}(t_{+})\ell^{3}.$$ This equation means that the volume expansion rate approaches the background value asymptotically, i.e., $$\dfrac{\dot{V}}{V}\longrightarrow 3\dfrac{\dot{a}}{a}=3\dfrac{\dot{a}_{\textsc{b}}}{a_{\textsc{b}}} \text{~~for~}t\rightarrow \infty.$$ The effect of inhomogeneities on the volume expansion rate vanishes after the dust shell becomes much smaller than the horizon scale $H^{-1}$ (see fig. \[fig:ratio.after\]). However, as in the case of $k_{0}>0$, the volume itself is different from the background value $a_{\textsc{b}}^{3}\ell^{3}$. By eq. (\[eq:a-def\]), we find that the asymptotic value of $V$ is larger than the background value $a_{\textsc{b}}^{3}\ell^{3}$. ![ \[fig:ratio.after\] Temporal evolution of the ratios after the shell crossing singularity appears.
The time is set to unity when the scale of the spherical inhomogeneous region $a(t)\chi_{\rm sc}$ agrees with the horizon scale $a(t)/\dot{a}(t)$. $V_{\textsc{msc}}$ is the volume of the co-moving cubic region $\Omega$ in the modified Swiss Cheese universe. $V_{\textsc{b}}$ is the volume of $\Omega$ measured by the background EdS geometry, i.e., $V_{\textsc{b}}=a_{\textsc{b}}^{3}\ell^{3}$. $V_{\textsc{out}}$ is the volume of $\Omega$ measured by the outer EdS geometry, i.e., $V_{\textsc{out}}=a^{3}\ell^{3}$.](ratio-2.eps) However, it should be noted that there is an upper limit on the asymptotic value of $V$. Since $k$ can be arbitrarily negative in this model, $\sqrt{1-\chi^{2}k}$ can be made arbitrarily large except at $\chi=0$ and $\chi=\chi_{\rm sc}$. Hence we obtain $$a_{\textsc{b}}(t_{+}) >a(t_{+})\left(1-\dfrac{4\pi}{3\ell^{3}}\chi_{\rm sc}^{3}\right)^{1/3} >a(t_{+})\left(1-\dfrac{\pi}{6}\right)^{1/3},$$ where the last inequality is obtained by setting $\chi_{\rm sc}=\ell/2$. Using this inequality, we obtain $$a_{\textsc{b}}^{3}(t_{+})\ell^{3} <a^{3}(t_{+})\ell^{3} < \left(1-\dfrac{\pi}{6}\right)^{-1} a_{\textsc{b}}^{3}(t_{+})\ell^{3} \sim 2.10\times a_{\textsc{b}}^{3}(t_{+})\ell^{3}.$$ In contrast with the case of $k_{0}>0$, the volume itself cannot be very different from the background value. SUMMARY AND DISCUSSION {#sec:summary} ====================== We have investigated the effect of inhomogeneities on the volume expansion in the modified Swiss Cheese universe model. We considered two cases: one in which the inhomogeneities collapse into black holes ($k_{0}>0$), and one in which the inhomogeneities expand faster than the background volume expansion ($k_{0}<0$). When the inhomogeneities can be treated as perturbations of the Einstein-de Sitter universe, the volume expansion is decelerated by the second-order contribution of the perturbations in both models. This result agrees with Nambu’s second-order perturbation analysis.
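Two of the asymptotic statements used in the preceding section can be verified with a few lines of code. First, reducing eq. (\[eq:R-EOM\]) to its Newtonian limit (our own expansion to second order in $V$ and $V_{H}$, with $H=2/(3t_{+})$ for the EdS exterior) gives $\ddot{R}=-3V^{2}/R-H^{2}R/4$; the self-similar ansatz $R\propto t_{+}^{\alpha}$ then yields the quadratic $4\alpha^{2}-5\alpha+13/9=0$, whose growing root is exactly the Maeda–Sato exponent $(15+\sqrt{17})/24$. Second, the limiting behavior of ${\tilde V}_{\rm in}$ in eq. (\[eq:V-tilde\]) can be checked directly.

```python
import math

# Self-similar exponent from the Newtonian limit of eq. (R-EOM):
# inserting R = C t^alpha into R'' = -3 V^2 / R - H^2 R / 4 with
# V = R' - H R and H = 2/(3t) gives
#   alpha (alpha - 1) = -3 (alpha - 2/3)^2 - 1/9,
# i.e. 4 alpha^2 - 5 alpha + 13/9 = 0.
alpha = (5 + math.sqrt(25 - 16 * 13 / 9)) / 8
maeda_sato = (15 + math.sqrt(17)) / 24      # exponent quoted in the text

def V_in_tilde(x):
    """Normalized interior volume of eq. (V-tilde), with x = R / t_s-."""
    return 1.5 / x**3 * (x * math.sqrt(1 + x**2) - math.asinh(x))
```

Here $\mathrm{asinh}(x)=\ln(x+\sqrt{1+x^{2}})$ reproduces the logarithm in eq. (\[eq:V-tilde\]); numerically ${\tilde V}_{\rm in}\rightarrow1$ for $x\rightarrow0$ and decays like $3/(2x)$ for large $x$, as stated.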
Although the choice of the background homogeneous universe is straightforward in the case of $|k|\chi^{2}\ll 1$, it is not in the non-linear case. We introduced the background homogeneous universe so as to satisfy the following conditions: the cubic region in the background homogeneous universe has the same rest mass as the corresponding region in the modified Swiss Cheese universe, and the same evolution of the volume in the limit of $t\rightarrow 0$. In the case of non-linear fluctuations with $k_{0}>0$, we find that the volume expansion rate approaches that of the background universe asymptotically. Since the modified Swiss Cheese model is an exact solution of the Einstein equations, we can obtain the precise behavior of the volume expansion. We chose the background so that the temporal evolution of the volume agrees with the background one for $t\rightarrow 0$. Then we found that for $t\rightarrow\infty$ the temporal evolution agrees with that of the outer EdS universe. We can see these asymptotic behaviors analytically, but we have to rely on numerical methods to obtain the behavior at the intermediate stages. From the result of the numerical calculation (fig. \[fig:positive\]), we found that the volume expansion is decelerated by the inhomogeneities. This behavior coincides with a result obtained by Nambu. We note that his result is based on perturbation theory but our result is not. In our highly non-linear example, the volume expansion rate becomes negative at the intermediate stage. This result may be characteristic of the effect of non-linear fluctuations, which cannot be treated by the perturbation method. In the case of $k_{0}<0$, the shell crossing singularity appears in the inhomogeneous region. In this article, we assumed that a spherical dust shell forms after the shell crossing singularity appears. We find that the inhomogeneities decelerate the volume expansion before it appears (fig. \[fig:ratio.before\]).
This is consistent with the previous results by Nambu, although here the inhomogeneities cannot be treated as perturbations of a homogeneous universe. After a spherical dust shell forms, the volume expansion rate approaches that of the background universe asymptotically. Our dust shell universe model (fig. \[fig:configuration\]) is a crude approximation. Therefore the behavior of the volume expansion obtained (fig. \[fig:ratio.after\]), especially at the early stage after the shell crossing, is affected by this crudeness. But the asymptotic behavior ($t\rightarrow \infty$) might be free from this approximation. We fixed the form of $k(\chi)$. The temporal evolution of the volume depends on the form of $k(\chi)$, but the asymptotic behavior may be independent of it; $$\begin{aligned} V&\longrightarrow a^{3}\ell^{3} \text{~~for~~} t\rightarrow \infty.\end{aligned}$$ We are grateful to colleagues in the Department of Physics, Osaka City University for helpful discussions. [99]{} S. Weinberg, [*Gravitation and Cosmology*]{} (Wiley, New York, 1973). G. F. Smoot [*et al*]{}., Astrophys. J. Lett. [**396**]{}, L1 (1992). H. Kodama and M. Sasaki, Prog. Theor. Phys. Suppl. [**78**]{}, 1 (1984). B. P. Schmidt [*et al.*]{}, Astrophys. J. [**507**]{}, 46 (1998). S. Perlmutter [*et al.*]{}, Astrophys. J. [**517**]{}, 565 (1999). T. Futamase, Mon. Not. R. astr. Soc. [**237**]{}, 187 (1989). T. Futamase, Phys. Rev. [**D53**]{}, 681 (1997). K. Tomita, Prog. Theor. Phys. [**37**]{}, 831 (1967). H. Russ, M. H. Soffel, M. Kasai and G. Börner, Phys. Rev. [**D56**]{}, 2044 (1997). V. F. Mukhanov, L. R. W. Abramo and R. H. Brandenberger, Phys. Rev. Lett. [**78**]{}, 1624 (1997). L. R. W. Abramo and R. H. Brandenberger, Phys. Rev. [**D56**]{}, 3248 (1997). Y. Nambu, Phys. Rev. [**D62**]{}, 104010 (2000). Y. Nambu, Phys. Rev. [**D63**]{}, 044013 (2001). Y. Nambu, Phys. Rev. [**D65**]{}, 104013 (2002). G. Geshnizjani and R. H. Brandenberger, arXiv:gr-qc/0204074. C.
Hellaby and K. Lake, Astrophys. J. [**290**]{}, 381 (1985). W. Israel, Nuovo Cimento [**44B**]{} (1966), 1;\ —–, Nuovo Cimento [**48B**]{} (1967), 463;\ —–, Phys. Rev. [**153**]{} (1967), 1388. K. Maeda and H. Sato, Prog. Theor. Phys. [**70**]{}, 772 (1983).\ K. Maeda and H. Sato, Prog. Theor. Phys. [**70**]{}, 1276 (1983).
--- abstract: | In this paper, we initiate the study of “Generalized Divide and Color Models”. A very special and interesting case of this is the “Divide and Color Model” (which motivates the name we use) introduced and studied by Olle Häggström. In this generalized model, one starts with a finite or countable set $V$, a random partition of $V$ and a parameter $p\in [0,1]$. The corresponding Generalized Divide and Color Model is the $\{0,1\}$-valued process indexed by $V$ obtained by independently, for each partition element in the random partition chosen, assigning all of its elements the value 1 with probability $p$ and the value 0 with probability $1-p$. Some of the questions which we study here are the following. In what situations can different random partitions give rise to the same color process? What can one say concerning exchangeable random partitions? What is the set of product measures that a color process stochastically dominates? For random partitions which are translation invariant, what ergodic properties do the resulting color processes have? The motivation for studying these processes is twofold: on the one hand, we believe that this is a very natural and interesting class of processes that deserves investigation and, on the other hand, a number of quite varied well-studied processes actually fall into this class, such as (1) the Ising model, (2) the fuzzy Potts model, (3) the stationary distributions for the Voter Model, (4) random walk in random scenery and of course (5) the original Divide and Color Model. author: - 'Jeffrey E.
Steif[^1]' - 'Johan Tykesson[^2]' bibliography: - 'tykesson.bib' title: Generalized Divide and Color models --- [*The first author dedicates this paper to the memory of Jonathan Kaminsky\ (1978-2016).*]{} Introduction ============ Overview {#ss.overview} -------- In this paper, we initiate the study of a large class of processes which we call “Generalized Divide and Color Models”. The name is motivated by a model, introduced and studied by Olle Häggström [@OH01], called the “Divide and Color Model”, which is a special case of the class we look at here; this special case will be described later in this section. We believe that this general class of models warrants investigation, partly because it seems to be a very natural class and partly because a number of very different processes studied in probability theory fall into this class, as described in Subsection \[ss.Examples\]. We now describe this class somewhat informally; formal definitions will be given in Subsection \[s.Defn.Not\]. We start with a finite or countable set $V$. In the first step, a random partition of $V$ (with an arbitrary distribution) is chosen and in the second step, independently, for each partition element in the random partition chosen in the first step, with probability $p$, all the elements of the partition element are assigned the value 1 and with probability $1-p$, all the elements of the partition element are assigned the value 0. This yields in the end a $\{0,1\}$-valued process indexed by $V$, which we call a “Generalized Divide and Color Model” and it is this process which will be our focus. Note that this process depends on, in addition of course to the set $V$, the distribution of the random partition and the parameter $p$. A trivial example is when the random partition always consists of singletons, in which case we simply obtain an i.i.d. process with parameter $p$. 
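For a finite $V$, the two-step construction just described can be made concrete in a few lines of code. The sketch below is our own illustration (the function name and input format are not part of the formal development): it computes the exact distribution of the color process by enumerating cluster colorings with rational arithmetic, and the two sample inputs recover the remarks above, namely that an all-singletons partition yields an i.i.d. process while a single cluster gives all of $V$ one shared Bernoulli($p$) value.

```python
from fractions import Fraction
from itertools import product

def color_process(nu, p):
    """Exact distribution of Phi_p(nu) for a finite V.

    nu is a list of (partition, weight) pairs; a partition is a list of
    clusters (lists of vertices) and the weights sum to 1."""
    dist = {}
    for partition, w in nu:
        vertices = sorted(v for cluster in partition for v in cluster)
        for bits in product([0, 1], repeat=len(partition)):
            pr = w  # probability of drawing this partition...
            for b in bits:
                pr *= p if b == 1 else 1 - p  # ...and this cluster coloring
            value = {v: b for cluster, b in zip(partition, bits) for v in cluster}
            key = tuple(value[v] for v in vertices)
            dist[key] = dist.get(key, 0) + pr
    return dist

p = Fraction(1, 3)
# all-singletons partition: the color process is i.i.d. with parameter p
iid = color_process([([[1], [2], [3]], Fraction(1))], p)
# one big cluster: all vertices share a single Bernoulli(p) value
one = color_process([([[1, 2, 3]], Fraction(1))], p)
```

With $p=1/3$, the singleton RER returns the product measure on $\{0,1\}^3$, while the one-cluster RER returns the two-point distribution supported on the constant configurations.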
Definitions and notation {#s.Defn.Not} ------------------------ Let $V$ be a finite or countable set and let $\rm{Part}_V$ be the set of all partitions of $V$. Elements of $V$ will be referred to as vertices. Elements of a partition will be referred to either as equivalence classes or clusters. If $\pi\in \rm{Part}_V$ and $v\in V$, we let $\pi(v)$ denote the partition element of $\pi$ containing $v$. For any measurable space $(S,\sigma(S))$, let ${\mathcal P}(S)$ denote the set of probability measures on $(S,\sigma(S))$. If $\pi\in \rm{Part}_V$ and $K\subseteq V$, let $\pi_{K}$ denote the partition of $K$ induced from $\pi$ in the obvious way. On $\rm{Part}_V$ we consider the $\sigma$-algebra $\sigma(\rm{Part}_V)$ generated by $\{\pi_K\}_{K\subset V,\,|K|<\infty}$. We denote the set of all probability measures on $(\rm{Part}_V,\sigma(\rm{Part}_V))$ by $\rm {RER}_V$ where $\rm {RER}$ stands for “random equivalence relation”. When $V$ has a natural set of translations (such as $\Zf^d$), we let $\rm{RER}^{\rm{stat}}_V$ (“stat” for stationary) denote the elements of $\rm {RER}_V$ which are invariant under these translations. When $V$ is a graph (such as $\Zf^d$ with nearest neighbor edges), we let $\rm{RER}^{\rm{conn}}_V$ denote the subset of $\rm {RER}_V$ which are supported on partitions for which each cluster is connected in the induced graph. Finally, we let $\rm{RER}^{\rm{exch}}_V$ (“exch” for exchangeable) denote the elements of $\rm {RER}_V$ which are invariant under all permutations of $V$ which fix all but finitely many elements. For each finite or countable set $V$ and for each $p \in (0,1)$, we now introduce a mapping $\Phi_p$ from $\rm {RER}_V$ to probability measures on $\{0,1\}^{V}$. The image of some $\nu\in\rm {RER}_V$ will be called the “color process” or “Generalized Divide and Color Model” associated to $\nu$ with parameter $p$ and is defined as follows. Let $\pi\in \rm{Part}_V$ be picked at random according to $\nu$. 
For each partition element $\phi$ of $\pi$, we assign *all* vertices in $\phi$ the value $1$ with probability $p$ and the value $0$ with probability $1-p$, independently for different partition elements. This yields for us a $\{0,1\}^V$-valued random object, $\xnup$, whose distribution is denoted by $\Phi_p(\nu)$. (Clearly $\Phi_p(\nu)$ is affine in $\nu$.) We will also refer to $\xnup$ as the *color process* associated to $\nu$ with parameter $p$. This clearly corresponds, in a more formal way, to the generalized divide and color model introduced in Subsection \[ss.overview\]. Finally, we let $\cp_{V,p}$ (CP for “color process”) be the image of $\rm {RER}_V$ under $\Phi_p$ and we also let $\cp^{*}_{V,p}$ be the image under $\Phi_p$ of the relevant subset $\rm{RER}^{*}_V$ of $\rm{RER}_V$ (${*}$ is [[stat]{}]{}, [[conn]{}]{} or [[exch]{}]{}.) We usually do not consider the cases $p=0$ or $1$ for they are of course trivial. We let $|\cdot |_1$ denote the $L^1$ norm on $\zd$. We end this section with the following elementary observation. For any $\nu\in \rm {RER}_V$, $p\in [0,1]$ and $u,v\in V$, we have, letting $E$ denote the event that $u$ and $v$ are in the same cluster, $$\label{e.nonneg.cor} {\mathbf P}(\xnup(u)=\xnup(v)=1)=p{\mathbf P}(E)+ p^2{\mathbf P}(E^c)\ge p^2 ={\mathbf P}(\xnup(u)=1) {\mathbf P}(\xnup(v)=1)$$ and hence $\xnup$ has nonnegative pairwise correlations. Note trivially that $\xnup$ is pairwise independent if and only if it is i.i.d. Examples of color processes {#ss.Examples} --------------------------- It turns out that a number of random processes which have been studied in probability theory have representations as color processes. In this subsection, we give five such key examples. There is a slight difference between the first two examples and the last three examples. 
In the first two examples, the known model corresponds to a color process with respect to a particular RER at a specific value of the parameter $p$ but not for other values of $p$, while in the last three examples, the known model corresponds to all the color processes with respect to a particular RER as $p$ varies over all values. ### The Ising Model For simplicity, we stick to finite graphs here. While the results here are [*essentially*]{} true for infinite graphs as well, some issues arise in that case which will not concern us here. Let $G=(V,E)$ be a finite graph. \[df.Ising\] The Ising model on $G=(V,E)$ with coupling constant $J\in {\mathbb R}$ and external field $h\in {\mathbb R}$ is the probability measure $\mu_{G,J,h}$ on $\{-1,1\}^V$ given by $$\mu_{G,J,h}(\{\eta(v)\}_{v\in V}):= e^{J\sum_{\{v,w\}\in E} \eta(v)\eta(w)+ h\sum_v \eta(v)}/Z$$ where $Z=Z(G,J,h)$ is a normalization constant. It turns out that $\mu_{G,J,0}$ is a color process when $J\ge 0$; this corresponds to the famous FK (Fortuin-Kasteleyn) or so-called random cluster representation. To explain this, we first need to introduce the following model. \[df.RC\] The FK or random cluster model on $G=(V,E)$ with parameters $\alpha\in [0,1]$ and $q\in (0,\infty)$ is the probability measure $\nu^{\rm{RCM}}_{G,\alpha,q}$ on $\{0,1\}^E$ given by $$\nu^{\rm{RCM}}_{G,\alpha,q}(\{\eta(e)\}_{e\in E}):=\alpha^{N_1}(1-\alpha)^{N_2}q^C/Z$$ where $N_1$ is the number of edges in state 1, $N_2$ is the number of edges in state 0, $C$ is the resulting number of connected clusters and $Z=Z(G,\alpha,q)$ is a normalization constant. Note that if $q=1$, this is simply an i.i.d. process with parameter $\alpha$. We think of $\nu^{\rm{RCM}}_{G,\alpha,q}$ as an RER on $V$ by looking at the clusters of the percolation realization; i.e., $v$ and $w$ are in the same partition element if there is a path from $v$ to $w$ using edges in state 1.
The following theorem from [@FK] tells us that the Ising Model with $J\ge 0$ and $h=0$ is indeed a color process. We must, however, identify $-1$ with $0$. See also [@ES]. \[t.FKIsing\] ([@ES],[@FK]) For any graph $G$ and any $J\ge 0$, $$\mu_{G,J,0}=\Phi_{1/2}(\nu^{\rm{RCM}}_{G,1-e^{-2J},2}).$$ See [@OHRCrep] for a nice survey concerning various random cluster representations. We remark that while for all $p$, $\Phi_{p}(\nu^{\rm{RCM}}_{G,\alpha,2})$ is of course a color process, we do not know if this corresponds to anything natural when $p\neq \frac{1}{2}$. We mention that, if $G$ is the complete graph, then an alternative way to see that the Ising model with $J\ge 0$ and zero external field is a color process is to combine Theorem \[t.mainp12\] later in this paper with the fact that the Ising model on the complete graph can be extended to an infinitely exchangeable process. This latter fact was proved in [@Pap] where the technique is credited to Kac [@Kac]; see also Theorem 1.1 in [@LST]. We end by mentioning that for the Ising model on the complete graph on 3 vertices, there are other RERs, besides the random cluster model, that generate it and that in some sense, the random cluster model is not the most natural generating RER; see remark (iii) after Question \[q.ising\]. ### The Fuzzy Potts Model Again for simplicity, we stick to finite graphs here and so let $G=(V,E)$ be a finite graph. \[df.Potts\] For $q\in \{2,3,\ldots\}$, the $q$-state Potts model on $G=(V,E)$ with coupling constant $J$ (and no external field) is the probability measure $\mu^{\rm{Potts}}_{G,J,q}$ on $\{1,\ldots,q\}^V$ given by $$\mu^{\rm{Potts}}_{G,J,q}(\{\eta(v)\}_{v\in V}):= e^{J\sum_{\{v,w\}\in E}I_{\{\eta(v)=\eta(w)\}}}/Z$$ where $Z=Z(G,J,q)$ is a normalization constant.
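As an aside, Theorem \[t.FKIsing\] can be confirmed by brute-force enumeration on very small graphs. The sketch below is our own illustration, using only Definitions \[df.Ising\] and \[df.RC\]: it computes both measures exactly on the triangle graph and compares them (the choice $J=0.7$ is arbitrary).

```python
import math
from itertools import product

def ising(V, edges, J):
    """Exact Ising measure mu_{G,J,0}: weight exp(J * sum_e eta(u)eta(v))."""
    dist = {}
    for spins in product([-1, 1], repeat=len(V)):
        s = dict(zip(V, spins))
        dist[spins] = math.exp(J * sum(s[u] * s[v] for u, v in edges))
    Z = sum(dist.values())
    return {k: v / Z for k, v in dist.items()}

def clusters(V, open_edges):
    """Connected components of (V, open_edges), via depth-first search."""
    adj = {v: [] for v in V}
    for u, v in open_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, comps = set(), []
    for v in V:
        if v not in seen:
            stack, comp = [v], []
            seen.add(v)
            while stack:
                x = stack.pop()
                comp.append(x)
                for y in adj[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            comps.append(comp)
    return comps

def fk_color(V, edges, alpha, q, p):
    """Color process Phi_p of the random cluster RER, with cluster colors
    in {-1,+1} so that it is directly comparable to the Ising measure."""
    configs = []
    for states in product([0, 1], repeat=len(edges)):
        open_edges = [e for e, s in zip(edges, states) if s]
        cl = clusters(V, open_edges)
        n1 = sum(states)
        configs.append((alpha**n1 * (1 - alpha)**(len(edges) - n1) * q**len(cl), cl))
    Z = sum(w for w, _ in configs)
    dist = {}
    for w, cl in configs:
        for bits in product([-1, 1], repeat=len(cl)):
            pr = w / Z
            for b in bits:
                pr *= p if b == 1 else 1 - p
            value = {v: b for c, b in zip(cl, bits) for v in c}
            key = tuple(value[v] for v in V)
            dist[key] = dist.get(key, 0) + pr
    return dist

V = (0, 1, 2)                     # the triangle graph
edges = [(0, 1), (1, 2), (0, 2)]
J = 0.7
mu = ising(V, edges, J)
cp = fk_color(V, edges, 1 - math.exp(-2 * J), 2, 0.5)
agree = max(abs(mu[k] - cp[k]) for k in mu)
```

Up to floating-point error, the two distributions on $\{-1,1\}^3$ coincide, as the theorem asserts.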
\[df.PottsFuzzy\] For $G,q$ and $J$ as in Definition \[df.Potts\] and parameter $\ell \in \{1,\ldots,q-1\}$, the fuzzy $q$-state Potts model on $G$ with parameters $J$ and $\ell$, denoted by $\mu^{\rm{Potts,Fuzzy}}_{G,J,q,\ell}$, is obtained by taking a realization from $\mu^{\rm{Potts}}_{G,J,q}$ and changing each $i\in \{1,\ldots,\ell\}$ to a 1 and each $i\in \{\ell+1,\ldots,q\}$ to a 0. It turns out that $\mu^{\rm{Potts,Fuzzy}}_{G,J,q,\ell}$ is also a color process for $J\ge 0$. \[t.PottsFuzzy\] ([@ES],[@FK]) For any graph $G$, and any $J\ge 0,q$ and $\ell$ as above, $$\mu^{\rm{Potts,Fuzzy}}_{G,J,q,\ell}=\Phi_{\frac{\ell}{q}}(\nu^{\rm{RCM}}_{G,1-e^{-J},q}).$$ This follows easily from an extension of Theorem \[t.FKIsing\] which says that one can obtain a realization of $\mu^{\rm{Potts}}_{G,J,q}$ by taking a realization of $\nu^{\rm{RCM}}_{G,1-e^{-J},q}$ and “coloring” each cluster independently and uniformly from $\{1,\ldots,q\}$. (Note that the parameter is $1-e^{-J}$ here, matching the Potts weight $e^{J\sum I}$, whereas the Ising weight $e^{J\sum \eta\eta'}$ in Theorem \[t.FKIsing\] corresponds to $1-e^{-2J}$.) We again remark that while for all $p$, $\Phi_{p}(\nu^{\rm{RCM}}_{G,\alpha,q})$ is of course also a color process, we do not know if this corresponds to anything natural when $p$ is not of the form $\frac{\ell}{q}$. ### The (Classical) Divide and Color Model Unlike the previous examples discussed in this subsection, this model is [*defined*]{} as a color process. In this model, which was introduced and studied in [@OH01], one first performs ordinary percolation with some parameter $\alpha$ on a finite or infinite graph $G$ and then considers the RER corresponding to the clusters which result. The divide and color model is then defined to be the color processes coming from this RER as $p$ varies. Of course, using the terminology of the previous two examples, this is simply $\Phi_{p}(\nu^{\rm{RCM}}_{G,\alpha,1})$. Some papers dealing with this model are the following: [@B], [@BBT1] and [@BCM].
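Sampling from the classical divide and color model is immediate from the definition. The following sketch is our own illustration, with an arbitrarily chosen $10\times 10$ grid as the underlying graph; it performs the two steps directly: i.i.d. bond percolation with parameter $\alpha$, followed by an independent Bernoulli($p$) color for each percolation cluster.

```python
import random

def divide_and_color(V, edges, alpha, p, rng=random):
    """One sample of the classical divide and color model on a finite graph:
    keep each edge independently with probability alpha, then color each
    resulting percolation cluster 1 with probability p and 0 otherwise."""
    open_edges = [e for e in edges if rng.random() < alpha]
    adj = {v: [] for v in V}
    for u, v in open_edges:
        adj[u].append(v)
        adj[v].append(u)
    color, seen = {}, set()
    for v in V:
        if v not in seen:
            bit = 1 if rng.random() < p else 0   # one coin flip per cluster
            stack = [v]
            seen.add(v)
            while stack:
                x = stack.pop()
                color[x] = bit
                for y in adj[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
    return color

# example: a 10 x 10 piece of the Z^2 lattice with nearest neighbor edges
side = 10
V = [(i, j) for i in range(side) for j in range(side)]
edges = ([((i, j), (i + 1, j)) for i in range(side - 1) for j in range(side)]
         + [((i, j), (i, j + 1)) for i in range(side) for j in range(side - 1)])
sample = divide_and_color(V, edges, alpha=0.4, p=0.5)
```

In the degenerate case $\alpha=1$ the whole grid is one cluster and the sample is constant, while $\alpha=0$ reduces to the i.i.d. process with parameter $p$.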
### Stationary distributions for the Voter Model The Voter Model on ${\mathbb Z}^d$ is a continuous time Markov process with state space $\{0,1\}^{{\mathbb Z}^d}$; an element of $\{0,1\}^{{\mathbb Z}^d}$ specifies for each location (voter) in ${\mathbb Z}^d$ whether it is in state 0 or 1 representing two possible opinions. Heuristically, the Markov process evolves as follows: each location in ${\mathbb Z}^d$ at rate 1 chooses a neighbor at random and then changes its state to that of its neighbor. (If the chosen neighbor has the same state, then nothing happens.) A detailed description of this process and the results described below can be found in [@DurrIPS], [@LiggIPS] and [@LiggIPSeasy]. Clearly, the two states consisting of all 0’s or of all 1’s are fixed states and hence the two point masses at these configurations as well as their convex combinations are stationary distributions. It turns out that in dimensions 1 or 2, these are the only stationary distributions while in $d\ge 3$, there is a continuum of extremal stationary distributions indexed by $[0,1]$, denoted by $\{\mu_p\}_{p\in [0,1]}$. For each $p$, $\mu_p$ is a translation invariant ergodic measure and is obtained by starting the Markov process i.i.d. with density $p$ and taking the limiting distribution as time goes to infinity. This dichotomy between $d\le 2$ and $d\ge 3$ is exactly due to the recurrence/transience dichotomy in these cases. While it is by no means obvious, it turns out, based on the analysis of the voter model carried out in the above references, that for each $d\ge 3$, there is an RER $\nu_d$ on ${\mathbb Z}^d$ such that for each $p\in [0,1]$, $\mu_p=\Phi_p(\nu_d)$. This is also true for $d\le 2$ but then $\mu_p$ is taken to be the (nonergodic) measure corresponding to a $(p,1-p)$ convex combination of the point mass at all 1’s and the point mass at all 0’s and $\nu_d$ is concentrated on the partition which has only one partition element, all of $\zd$. 
For all $d\ge 1$, the RER $\nu_d$ corresponds to “coalescing random walks” and is described as follows. Start independent continuous time rate 1 simple random walkers at each location of ${\mathbb Z}^d$, any two of which coalesce upon meeting. Run the random walkers until time $\infty$ and then declare two locations $x,y\in {\mathbb Z}^d$ to be in the same partition if the two random walkers starting at $x$ and $y$ ever coalesce. Note that for $d\le 2$ we have, due to recurrence, that this yields one partition element, ${\mathbb Z}^d$, which is consistent with our description of $\nu_d$ above. For $d\ge 3$, all the equivalence classes will be infinite with 0 density. Transience of random walk implies clusters must have 0 density. The formula for return probabilities easily yields the fact that the expected size of the cluster of the origin is infinite. Finally, the fact that the cluster size is in fact infinite a.s. can be found in [@Griff]. ### Random Walk in Random Scenery Let $(X_i)_{i\ge 1}$ be an i.i.d. sequence of random variables taking values in ${\mathbb Z}^d$. Let $(S_n)_{n\ge 1}$ be the associated random walk defined by $S_0=0$ and $S_n=\sum_{i=1}^n X_i$ for $n\ge 1$. Next, let $\{C^p_z\}_{z\in {\mathbb Z}^d}$ be an i.i.d. process taking the value $1$ with probability $p$ and taking the value $0$ with probability $1-p$. Finally, letting, for $k\ge 0$, $Y^p_k:=C^p_{S_k}$, we call $(Y^p_k)_{k\ge 0}$ “Random Walk in Random Scenery” since the process gives the “scenery” at the location of the random walker. It turns out that $(Y^p_k)_{k\ge 0}$ is also in fact a color process which can be seen as follows. We define an RER $\nu$ on ${\mathbb N}$ by declaring $i,j\ge 0$ to be in the same partition if $S_i=S_j$. It is then straightforward to see that $(Y^p_k)_{k\ge 0}$ has distribution $\Phi_p(\nu)$. 
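The representation just described is easy to realize for a finite stretch of the walk. The sketch below is our own illustration: it builds the partition of $\{0,\ldots,n\}$ given by $i\sim j$ iff $S_i=S_j$, together with the observed scenery sequence, so that $(Y^p_k)$ is visibly constant on each cluster.

```python
import random

def rwrs(steps, p, rng=random):
    """Random walk in random scenery on Z together with its representing RER:
    indices i and j lie in the same cluster iff S_i = S_j."""
    S = [0]
    for x in steps:
        S.append(S[-1] + x)
    clusters = {}                       # visited site -> indices of visits
    for i, site in enumerate(S):
        clusters.setdefault(site, []).append(i)
    # i.i.d. Bernoulli(p) scenery on the visited sites, read along the walk
    C = {site: (1 if rng.random() < p else 0) for site in clusters}
    Y = [C[site] for site in S]
    return list(clusters.values()), Y

# S = 0, 1, 0, 1, 2, 1: times {0,2} sit at site 0, {1,3,5} at site 1, {4} at 2
partition, Y = rwrs([1, -1, 1, 1, -1], p=0.5)
```

Whatever scenery is drawn, the observed sequence $Y$ is constant on each cluster of the partition, which is exactly the color process property.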
Although it is not so natural when thinking of random walk in random scenery, it is sometimes useful to have the index set be ${\mathbb Z}$ instead of ${\mathbb N}$, which can be done as follows. One starts with an i.i.d. process $(X_i)_{i\in {\mathbb Z}}$ and then defines $S_n$ as above for $n\ge 0$ and for $n \le -1$ to be $-\sum_{i=n+1}^0 X_i$. Finally, one defines $Y^p_k$ to be $C^p_{S_k}$ for any $k\in {\mathbb Z}$. The strange definition of $S_n$ for negative $n$ in fact ensures that $(Y^p_k)_{k\in {\mathbb Z}}$ is a stationary process. Moreover, the process $(X_k,Y^p_k)_{k\in {\mathbb Z}}$ is also a stationary process and is called a generalized $\ttinv$-process. (The name $\ttinv$ comes from the case of simple random walk in $1$ dimension where $T$ denotes the left shift by 1 of $\{C^p_z\}_{z\in {\mathbb Z}}$: the idea then is that from the walker’s perspective, the latter sequence is shifted to the left or right depending on the step of the walker.) One can generalize further by allowing $(X_i)_{i\in {\mathbb Z}}$ to be an arbitrary stationary process rather than requiring it to be i.i.d., in which case the random walk in random scenery would still be a color process. If $(X_i)_{i\in {\mathbb Z}}$ yields a recurrent random walk, then a.s. all the equivalence classes are infinite and have 0 density (provided $X_1$ is not identically 0), while if $(X_i)_{i\in {\mathbb Z}}$ yields a transient random walk, then all the equivalence classes are finite a.s. Summary of paper ---------------- In this subsection, we summarize the different sections of the paper. Section \[s.finitecase\] deals exclusively with the case that $V$ is the finite set $[n]:=\{1,2,\ldots,n\}$. A first natural question is whether, for fixed $p$, the map $\Phi_p\,:\,\rer_{[n]}\to \cp_{[n],p}$ is injective or not. One can also ask this same question when $\rer_{[n]}$ is replaced with $\rer_{[n]}^{\exch}$.
Moreover, one can also address the question of whether there can be two distinct (exchangeable) RERs such that their corresponding color processes agree for *all* values of $p$. For each of these questions, we identify a phase transition in $n$. These are given in Theorem \[t.bigfinitetheorem\] which is the main result in the finite case. We also obtain more refined results in this section as well as develop some general results. In Section \[s.exchangeable\], we stick to color processes arising from exchangeable RERs on $\N$. There we first remind the reader of Kingman’s characterization of such RERs; see Theorem \[t.kingman\]. Some of the obtained results are as follows. For $p=1/2$, it is shown that the set of color processes is exactly the collection of exchangeable processes which exhibit $0\ks 1$-symmetry; see Theorem \[t.mainp12\]. While Proposition \[p.Russ\] tells us that, for each $p\in (0,1)$, $\Phi_p$ is injective when restricted to the extremal elements of $\rer_{\N}^{\exch}$ (the so-called paint-boxes), it is shown that, for $p=1/2$, $\Phi_p$ is highly non-injective on $\rer_{\N}^{\exch}$ and the subset where “$\Phi_p$ is injective” is characterized; see Theorem \[p.unprop\]. It turns out, however, that the behavior for $p\neq 1/2$ seems quite different and $\Phi_p$ is “much more injective”. In Section \[s.conn\], we look at a very specific type of color process; namely those where $V={\mathbb Z}$ and the classes are connected and hence are simply intervals. In Section \[s.dom\], we study the question of stochastic domination of product measures for the set of color processes. More specifically, given an RER and $p\in (0,1)$, we consider the maximum density product measure which the corresponding color process dominates. Of particular interest is the limit, as $p\to 1$, of this maximum density, which often is not 1; this is related to the large deviation picture of the number of clusters intersecting a large box.
In addition to obtaining various general results, the case of $\rer_{\N}^{\exch}$ as well as our various models from Subsection \[ss.Examples\] are analyzed in detail. In Section \[s.transfer\], we move into our “ergodic theory” section. Here we consider stationary color processes indexed by $\zd$ and study their ergodic behavior. Some of the obtained results are as follows. Theorem \[t.ergod1\] tells us that if there is positive probability of a positive density cluster, then ergodicity is ruled out. On the other hand, Theorem \[t.finiteclust\] tells us that if all clusters are finite a.s., then the color process inherits all of the ergodic properties of the generating RER. These two results tell us that the interesting cases are when the RER has infinite clusters but all with 0 density a.s. Various results in this case are obtained as well as other questions looked at. Finally, in Section \[s.ques\], we present a number of questions and further directions which we feel might be interesting to pursue. The finite case {#s.finitecase} =============== In this section, we restrict ourselves to the case when $V$ is finite. In the first and main subsection, we state and prove Theorem \[t.bigfinitetheorem\] concerning uniqueness of the representing RER and present further refined results. The second subsection deals with some other general results in the finite case. Uniqueness of the representing RER in the finite case ----------------------------------------------------- It is natural to ask, for various color processes, whether the representing RER is unique. We give in this subsection fairly detailed answers to this in the finite case. Recall $p\in (0,1)$. We begin by giving an alternative description of $\rer^{\exch}_{[n]}$ which is as follows. A partition of the *integer* $n$ is given by an integer $s\ge 1$ and positive integers $k_1\le k_2\le \ldots \le k_s$ such that $\sum_i k_i=n$. 
We denote by $[k_s\ks k_{s-1}\ks \ldots\ks k_1]$ the set of all partitions of (the set) $[n]$ that can be written as $\{C_1,\ldots,C_s\}$ where $|C_i|=k_i$. It is easy to see that $\rer_{[n]}^{\exch}$ are those $\nu\in\rer_{[n]}$ such that if $\pi$ and $\pi'$ belong to the same $[k_s\ks k_{s-1}\ks \ldots \ks k_1]$, then $\nu(\pi)=\nu(\pi')$. In this way, $\rer_{[n]}^{\exch}$ can be identified with probability measures on partitions of the integer $n$. The following is the main result in the finite case. \[t.bigfinitetheorem\] [**(A).**]{} The map $$\Phi_{1/2}\,:\,\rer_{[n]}\to \cp_{[n],1/2}$$ is injective if $n=2$ and non-injective if $n\ge 3$. [**(B).**]{} The map $$\Phi_{1/2}\,:\,\rer_{[n]}^{\exch}\to\cp_{[n],1/2}^{\exch}$$ is injective if $n=2$ and non-injective if $n\ge 3$. [**(C).**]{} If $p\neq 1/2$, then the map $$\Phi_p\,:\,\rer_{[n]}\to \cp_{[n],p}$$ is injective for $n=2,3$ and non-injective for $n\ge 4$. [**(D).**]{} If $p\neq 1/2$, then the map $$\Phi_p\,:\,\rer_{[n]}^{\exch}\to \cp_{[n],p}^{\exch}$$ is injective if $n=2,3$ and non-injective if $n\ge 4$. [**(E).**]{} There are $\nu_1\neq \nu_2 \in \rer_{[n]}$ such that $\Phi_p(\nu_1)=\Phi_p(\nu_2)$ for all $p\in [0,1]$ if and only if $n\ge 4$. [**(F).**]{} There are $\nu_1\neq \nu_2 \in \rer_{[n]}^{\exch}$ such that $\Phi_p(\nu_1)=\Phi_p(\nu_2)$ for all $p\in [0,1]$ if and only if $n\ge 6$. Before starting with any of the parts, we first show that in each of these parts, we have monotonicity in $n$; for [**(A)**]{}-[**(D)**]{}, this means that the relevant map being non-injective for $n$ implies it is non-injective for $n+1$ and for [**(E)**]{} and [**(F)**]{}, this means that if we have such a pair of measures as described for $n$, then we have such a pair for $n+1$. To do this, we first note that there are simple injections from $\rer_{[n]}$ into $\rer_{[n+1]}$ and from $\rer_{[n]}^{\exch}$ into $\rer_{[n+1]}^{\exch}$. 
For the first one, given $\nu\in\rer_{[n]}$, we can let $T(\nu)\in\rer_{[n+1]}$ be such that $n+1$ is its own cluster and the partition on $[n]$ is distributed according to $\nu$. For the second one, given $\nu\in\rer_{[n]}^{\exch}$, we construct $S(\nu)\in\rer_{[n+1]}^{\exch}$ as follows. For every partition $s,k_1, \ldots, k_s$ of $n$, let $S({\nu})([k_s\ks k_{s-1}\ks\ldots\ks k_1 \ks 1]) :=\nu([k_s\ks k_{s-1}\ks\ldots\ks k_1])$. (Note that, unlike for $T$, the projection of $S(\nu)$ to $[n]$ is not $\nu$.) Finally, it is easy to check that if $\mu$ and $\nu$ give the same color process in [**(A)**]{}-[**(D)**]{} or satisfy the properties in [**(E)**]{} or [**(F)**]{}, then this will also hold for the extended measures $T(\mu)$ and $T(\nu)$ or $S(\mu)$ and $S(\nu)$, as the case may be. [**(A).**]{} In view of the above monotonicity, we only need to look at $n=2$ and 3. First consider the case $n=2$. We represent $\nu\in \rer_{[2]}$ as the probability vector $(q_1,q_2)$ where $q_1:=\nu(\{\{1\},\{2\}\})$ and $q_2:=1-q_1=\nu(\{\{1,2\}\})$. Observe that $\Phi_p(\nu)((0,1))=q_1p(1-p)$. The injectivity now follows immediately, not just for $p=1/2$ but for all $p\in (0,1)$, since $\nu$ is determined by $q_1$. Next, consider the case $n=3$. We write $\nu\in \rer_{[3]}$ as $(q_1,\ldots,q_5)^t$ where $q_1:=\nu(\{\{1\},\{2\},\{3\}\})$, $q_2:=\nu(\{\{1,2\},\{3\}\})$, $q_3:=\nu(\{\{1\},\{2,3\}\})$, $q_4:=\nu(\{\{1,3\},\{2\}\})$ and $q_5:=\nu(\{\{1,2,3\}\})$. In addition, we write $\Phi_{1/2}(\nu)$ as $(p_{111},p_{110},p_{101},p_{011},p_{100},p_{010},p_{001},p_{000})^t,$ where $p_{ijk}=\Phi_{1/2}(\nu)((i,j,k))$. Let $\nu_1=(2/3,0,0,0,1/3)$ and $\nu_2=(0,1/3,1/3,1/3,0)$. Note that in fact $\nu_1,\nu_2\in \rer_{[3]}^{{\rm exch}}$. Straightforward calculations which are left to the reader give that $$\Phi_{1/2}(\nu_1)=\Phi_{1/2}(\nu_2)= (1/4,1/12,1/12,1/12,1/12,1/12,1/12,1/4),$$ and the non-injectivity follows. [**(B).**]{} Again, we only need to look at $n=2$ and 3.
These are however contained in [**(A)**]{} since (i) it is easier to be injective on a subset (in fact, in this case, $\rer_{[2]}=\rer_{[2]}^{\exch}$) and (ii) the examples there showing non-injectivity for $n=3$ are in fact exchangeable. [**(C).**]{} This time, by monotonicity, we only need to look at $n=3$ and 4. For $n=3$, $$\Phi_p(\nu)=L_p \nu,$$ where $L_p$ is the matrix given by $$\label{e.3matr} L_p=\left( \begin{array}{ccccc} p^3 & p^2 & p^2 & p^2 & p \\ p^2(1-p) & p(1-p) & 0 & 0 & 0 \\ p^2(1-p) & 0 & 0 & p(1-p) & 0 \\ p^2(1-p) & 0 & p(1-p) & 0 & 0 \\ p(1-p)^2 & 0 & p(1-p) & 0 & 0 \\ p(1-p)^2 & 0 & 0 & p(1-p) & 0 \\ p(1-p)^2 & p(1-p) & 0 & 0 & 0 \\ (1-p)^3 & (1-p)^2 & (1-p)^2 & (1-p)^2 & (1-p) \end{array} \right),$$ where we use the same notation and ordering as in ([**A**]{}). Suppose that $p\neq 1/2$. Let $\nu=(q_1,\ldots,q_5)^t$ and $\nu'=(q_1',\ldots,q_5')^t$. We must show that if $\Phi_p(\nu)=\Phi_p(\nu')$, then $\nu=\nu'$. So suppose that $\Phi_p(\nu)=\Phi_p(\nu')$. Denote the entries of $\Phi_p(\nu')$ by $p_{111}',p_{110}',\ldots$. Calculating the entries in $\Phi_p(\nu)$ and $\Phi_p(\nu')$ (using ) gives $p_{011}=p^2(1-p)q_1+p(1-p)q_3$ and $p_{100}=p(1-p)^2 q_1+p(1-p) q_3$, and the same formulas for $p_{011}'$ and $p_{100}'$ with $q_1$ and $q_3$ replaced with $q_1'$ and $q_3'$. Observe that $$p_{011}-p_{100}= (2p-1)p(1-p)q_1,$$ and $$p_{011}'-p_{100}'=(2p-1)p(1-p)q_1'.$$ Since $\Phi_p(\nu)=\Phi_p(\nu')$ and $p\neq 1/2$, we get that $q_1=q_1'$. From the facts that $p_{100}=p_{100}'$ and $q_1=q_1'$ it follows that $q_3=q_3'$. By symmetry, it then follows that $q_2=q_2'$ and $q_4=q_4'$. Hence, $\nu=\nu'$. For the $n=4$ case, we first let $g(p):=p(1-p)$ and then define $\nu_1$ and $\nu_2=\nu_2(p)\in \rer_{[4]}^{\exch}$ as follows. 
Let $\nu_1([4])=\nu_1([3\ks 1])=\nu_1([2\ks 2])=\nu_1([2\ks 1\ks 1])=\nu_1([1\ks 1\ks 1\ks 1])=1/5,$ and let $\nu_2([4])=1/5+g(p)/10$, $\nu_2([3\ks 1])=1/5-2 g(p)/5$, $\nu_2([2\ks 2])=1/10+3 g(p)/10$, $\nu_2([2\ks 1\ks 1])=2/5$ and $\nu_2([1\ks 1\ks 1\ks 1])=1/10.$ Straightforward calculations which are left to the reader show that for all $p$, $\Phi_p(\nu_1)=\Phi_p(\nu_2(p))$, from which the non-injectivity follows. We mention that the (nonexchangeable) construction in part ([**E**]{}) below also could have been used here in this case; however, we would still need the above for (D). [**(D).**]{} Again, by monotonicity, we only need to look at $n=3$ and 4. These are however contained in [**(C)**]{} since (i) it is easier to be injective on a subset and (ii) the examples there showing non-injectivity for $n=4$ are in fact exchangeable. [**(E).**]{} Again, by monotonicity, we only need to look at $n=3$ and 4. The case $n=3$ follows from Part ([**C**]{}). Now consider the case $n=4$ and define $\nu_1$ by letting $$\nu_1( \{ \{ 1,3\}, \{2 \}, \{4\} \} )=\nu_1( \{ \{1 \} ,\{3\} , \{ 2,4\}\} )=1/3$$ and $$\nu_1(\{\{ 1,2\}, \{3,4 \}\})=\nu_1(\{\{ 1,4\}, \{2,3 \}\})=1/6.$$ Then define $\nu_2$ by letting $$\begin{aligned} \lefteqn{\nu_2( \{\{1,2 \} , \{ 3\} ,\{4\} \})=\nu_2(\{ \{ 1\} ,\{2,3\}, \{4\} \} )}\\ & & =\nu_2(\{ \{ 1\}, \{2 \},\{3,4\} \})=\nu_2(\{ \{ 1,4\}, \{2 \},\{3\} \})=1/6,\end{aligned}$$ and $$\nu_2(\{ \{ 1,3\}, \{2,4\} \})=1/3.$$ Observe that $\nu_1$ and $\nu_2$ are each invariant under rotations and reflections. Straightforward calculations show that for $i=1,2$, $\Phi_p(\nu_i)((1,1,1,1))=2p^3/3+p^2/3$, $\Phi_p(\nu_i)((0,1,1,1))=(1-p)p^2/3$, $\Phi_p(\nu_i)((1,1,0,0))=p(1-p)/6$ and $\Phi_p(\nu_i)((1,0,1,0))=p(1-p)/3$. Since $\nu_1$ and $\nu_2$ are each invariant under rotations and since the roles of $1$ and $0$ get switched when $p$ is replaced by $1-p$, we conclude that $\Phi_p(\nu_1)=\Phi_p(\nu_2)$ for all $p$. 
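The equality $\Phi_p(\nu_1)=\Phi_p(\nu_2)$ for the pair just constructed in part ([**E**]{}) can also be confirmed by exact enumeration. The sketch below is our own illustration: it computes both color processes with rational arithmetic and compares them at several values of $p$ (for fixed $n$, all the probabilities involved are polynomials in $p$ of degree at most $n$, so agreement at enough points gives agreement for all $p$).

```python
from fractions import Fraction
from itertools import product

def phi(nu, p, V):
    """Exact color process distribution Phi_p(nu); nu is a list of
    (partition, weight) pairs with partitions given as lists of clusters."""
    dist = {x: Fraction(0) for x in product([0, 1], repeat=len(V))}
    for partition, w in nu:
        for bits in product([0, 1], repeat=len(partition)):
            pr = w
            for b in bits:
                pr *= p if b else 1 - p
            value = {v: b for cl, b in zip(partition, bits) for v in cl}
            dist[tuple(value[v] for v in V)] += pr
    return dist

third, sixth = Fraction(1, 3), Fraction(1, 6)
# the two RERs on [4] constructed in the proof of part (E)
nu1 = [([[1, 3], [2], [4]], third), ([[1], [3], [2, 4]], third),
       ([[1, 2], [3, 4]], sixth), ([[1, 4], [2, 3]], sixth)]
nu2 = [([[1, 2], [3], [4]], sixth), ([[1], [2, 3], [4]], sixth),
       ([[1], [2], [3, 4]], sixth), ([[1, 4], [2], [3]], sixth),
       ([[1, 3], [2, 4]], third)]
V = [1, 2, 3, 4]
ok = all(phi(nu1, Fraction(a, 7), V) == phi(nu2, Fraction(a, 7), V)
         for a in range(1, 7))
```

The same routine confirms, for instance, that $\Phi_p(\nu_i)((1,1,1,1))=2p^3/3+p^2/3$ at $p=1/2$ equals $1/6$.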
[**(F).**]{} By monotonicity, we only need to look at $n=5$ and 6. For the case $n=5$, we will make important use of Lemma \[l.js\] below, which we believe can be of independent interest. We state and prove it after the completion of the present proof. Assume now, by way of contradiction, that there exist $\nu_1\neq \nu_2$ in $\rer_{[5]}^{\exch}$ such that $\Phi_p(\nu_1)=\Phi_p(\nu_2)$ for all $p$. We now want to “singularize” $\nu_1$ and $\nu_2$. Let $m$ be the largest subprobability measure dominated by both $\nu_1$ and $\nu_2$. Since $\Phi_p$ is affine, it is easy to see that we also have that $\Phi_p(\frac{\nu_1-m}{|\nu_1-m |_1})=\Phi_p(\frac{\nu_2-m}{|\nu_2-m |_1})$ for all $p$. The latter two measures are singular with respect to each other. The conclusion is that we may now assume that we have $\nu_1\neq \nu_2$ in $\rer_{[5]}^{\exch}$ which are singular and such that $\Phi_p(\nu_1)=\Phi_p(\nu_2)$ for all $p$. We now make use of Lemma \[l.js\] several times. The application of part (i) is always made with $S=[5]$. By Lemma \[l.js\] (i) and the assumed singularity between $\nu_1$ and $\nu_2$, we can conclude that $\nu_1$ and $\nu_2$ both vanish on $[5]$, $[2\ks 1\ks 1\ks 1]$ and $[1\ks 1\ks 1\ks 1\ks 1]$. Also, Lemma \[l.js\] (ii) tells us that $\nu_1$ and $\nu_2$ give the same measure to $[4\ks 1]$ and hence they both vanish there by singularity. At this point, we know that both $\nu_1$ and $\nu_2$ are concentrated on $[3\ks 2]$, $[3\ks 1\ks 1]$ and $[2\ks 2\ks 1]$. Again using Lemma \[l.js\] (i) and singularity shows that $\nu_1$ and $\nu_2$ vanish on $[3\ks 2]$. Next, Lemma \[l.js\] (ii) and singularity then show that $\nu_1$ and $\nu_2$ vanish on $[3\ks 1\ks 1]$. Hence, $\nu_1$ and $\nu_2$ are both concentrated on $[2\ks 2 \ks 1]$, which is a contradiction since they are singular probability measures. For the case $n=6$ we define two probability measures $\nu_1$ and $\nu_2$ on partitions of the integer $6$ as follows.
First let $$\nu_1([4\ks 2])=1/3\mbox{ and }\nu_1([3\ks 2\ks 1])=2/3.$$ Then let $$\nu_2([4\ks 1\ks 1])=\nu_2([3\ks 3])=\nu_2([2\ks 2\ks 2])=1/3.$$ Let $A_k$ be the event that there are exactly $k$ ones in the color process. Exchangeability implies that if $\Phi_p(\nu_1)(A_k)=\Phi_p(\nu_2)(A_k)$ for $k=0,1,\ldots,6$ and all $p$, then $\Phi_p(\nu_1)=\Phi_p(\nu_2)$ for all $p$. Simple calculations left to the reader show that for $i=1,2$, $$\Phi_p(\nu_i)(A_6)=\frac{p^2}{3}+\frac{2p^3}{3},\mbox{ }\Phi_p(\nu_i)(A_5)=\frac{2p^2(1-p)}{3}\mbox{ and }\Phi_p(\nu_i)(A_4)=\frac{p}{3}+\frac{p^2}{3}-\frac{2p^3}{3}.$$ Since we have, for $i=1,2$, $\Phi_{p}(\nu_i)(A_0)=\Phi_{1-p}(\nu_i)(A_6)$, $\Phi_{p}(\nu_i)(A_1)=\Phi_{1-p}(\nu_i)(A_5)$, $\Phi_{p}(\nu_i)(A_2)=\Phi_{1-p}(\nu_i)(A_4)$ and $\Phi_{p}(\nu_i)(A_3)=1-\sum_{k\neq 3} \Phi_{p}(\nu_i)(A_k)$, we can finally conclude that $\Phi_p(\nu_1)= \Phi_p(\nu_2)$ for all $p$. Next, we give the lemma which was used repeatedly in the proof of [**(F)**]{} in Theorem \[t.bigfinitetheorem\] above. \[l.js\] Let $\nu_1,\nu_2\in \rer_{[n]}$. Then each one of the following conditions implies that $\Phi_p(\nu_1)\neq \Phi_p(\nu_2)$ for some $p$.\ (i). For some $S\subseteq [n]$, the distribution of the number of equivalence classes of $\pi_S$ is different under $\nu_1$ and $\nu_2$.\ (ii). For some $T\ge 1$, the mean of the number of equivalence classes whose size is equal to $T$ is different under $\nu_1$ and $\nu_2$.\ (iii). For some $C\subseteq [n]$, the probability that $C$ is an equivalence class is different under $\nu_1$ and $\nu_2$.\ (i). For the given $S$, let $F$ be the event that the color process is identically $1$ on $S$, and let $N$ be the number of equivalence classes of $\pi_S$. Then for all $p$ and $i=1,2$, $$\Phi_p(\nu_i)(F)=E_{\nu_i}(p^{N}).$$ By assumption, these two polynomials in $p$ differ in some coefficient, and hence $\Phi_p(\nu_1)$ and $\Phi_p(\nu_2)$ assign $F$ different probabilities for some $p$. (ii).
For the given $T$ let $X$ be the number of equivalence classes of size equal to $T$, and suppose that $E_{\nu_1}(X)\neq E_{\nu_2}(X)$. Let $K$ be the event that the color process contains exactly $T$ $1$’s. Then $\Phi_p(\nu_1)(K)=p E_{\nu_1}(X) + O(p^2)$ as $p\to 0$ and similarly for $\nu_2$. We conclude that $\Phi_p(\nu_1)$ and $\Phi_p(\nu_2)$ assign the event $K$ different probabilities for small $p$. (iii). For the given $C$, let $D$ be the event that $C$ is an equivalence class and let $H$ be the event that the color process is identically 1 exactly on $C$. Then $\Phi_p(\nu_1)(H)=\nu_1(D) p +O(p^2)$ as $p\to 0$ and similarly for $\nu_2$. We conclude that $\Phi_p(\nu_1)$ and $\Phi_p(\nu_2)$ assign $H$ different probabilities for small $p$. (i). Concerning Theorem \[t.bigfinitetheorem\](E,F), it might at first be surprising that one can find distinct and exchangeable $\mu$ and $\nu$ such that $\Phi_p(\mu)= \Phi_p(\nu)$ for all $p$ since there are infinitely many $p$. However, since all the functions of $p$ that arise are polynomials in $p$ of degree at most $n$, we are essentially in a finite-dimensional situation. Another way to see this is that if $\Phi_p(\mu)= \Phi_p(\nu)$ for $n+1$ values of $p$, then this holds for all $p$.\ (ii). We describe how we came up with the example for the $n=6$ case. The negations of conditions (i) and (ii) of Lemma \[l.js\] for $S=[6]$ give a set of linear equations that must hold in order for two RERs to have the same color process. With the help of Mathematica, the nullspace of the coefficient matrix of the linear system was calculated. By looking at the positive and negative parts of one of the vectors of the nullspace, the two measures $\nu_1$ and $\nu_2$ were then constructed. The next result, Proposition \[p.kerprop\], describes our injectivity results in more linear-algebraic terms and goes into more detail concerning what happens in the non-injective case.
In particular, in the case of non-injectivity, it is natural to try to identify “where $\Phi_p$ is non-injective”. The next definition captures this notion. Let $V$ be a finite or countable set. Let ${\mathcal R}\subseteq \rer_V$ and $p\in (0,1)$. We say that $\nu \in {\mathcal R}$ is $({\mathcal R},p)$-unique if $\Phi_p(\nu')\neq \Phi_p(\nu)$ for all $\nu'\in {\mathcal R}\setminus \{\nu\}$. \[p.kerprop\] Let $n\ge 2$, $p\in (0,1)$ and consider the map $$\Phi_p\,:\,\rer_{[n]}\to \cp_{[n]}.$$ Noting that $\Phi_p$, being affine, extends to the vector space of signed measures on ${\rm Part}_{[n]}$ and denoting this extension by $\Phi_p^*$, the following four statements hold: 1. $\Phi_p$ is non-injective if and only if ${\rm Ker}(\Phi_p^*)\neq \{{\bf 0}\}$. 2. $\nu\in \rer_{[n]}$ is not $(\rer_{[n]},p)$-unique if and only if there is $v\in {\rm Ker}(\Phi_p^*)\setminus \{{\bf 0}\}$ such that $v_i\ge 0$ for all $i\in ({\rm supp} \,\nu)^c$. 3. If ${\rm Dim}({\rm Ker}(\Phi_p^*))=1$, then there is a unique pair $\nu_1,\nu_2\in \rer_{[n]}$, singular with respect to each other, such that $\Phi_p(\nu_1)=\Phi_p(\nu_2)$. 4. If ${\rm Dim}({\rm Ker}(\Phi_p^*))\ge 2$, then there are infinitely many distinct pairs $\nu_1,\nu_2\in \rer_{[n]}$, singular with respect to each other, such that $\Phi_p(\nu_1)=\Phi_p(\nu_2)$. Moreover, if ${\mathcal R}$ is a closed and convex subset of $\rer_{[n]}$ and $\Phi_{p,\langle {\mathcal R}\rangle}^*$ is the restriction of $\Phi_{p}^{*}$ to $\langle{\mathcal R}\rangle$, the subspace spanned by $\Rc$, then (i) and (ii) still hold with $\rer_{[n]}$, $\Phi_p$ and $\Phi_p^*$ replaced by $\Rc$, $\Phi_p|_{\Rc}$ and $\Phi_{p,\langle \Rc \rangle}^{*}$.
Also, if in addition $\Rc$ is such that $\nu_1,\nu_2 \in \Rc$ and $\nu_1\neq \nu_2$ imply that $$\label{e.condition} \frac{\nu_1- (\nu_1 \wedge \nu_2)}{|\nu_1- (\nu_1 \wedge \nu_2)|_1}\in \Rc,$$ then (iii) and (iv) hold with $\rer_{[n]}$ and $\Phi_p^*$ replaced by $\Rc$ and $\Phi_{p,\langle \Rc \rangle}^*$. (i). First, ${\rm Ker}(\Phi_p^*)=\{{\bf 0}\}$ trivially implies injectivity. Now suppose that ${\rm Ker}(\Phi_p^*)\neq \{{\bf 0}\}$. Let $\pi_1,\pi_2,\ldots$ be an enumeration of ${\rm Part}_{[n]}$ and let $\nu\in \rer_{[n]}$ be such that $\nu(\pi_i)\in (0,1)$ for all $i$. Pick $u\in {\rm Ker}(\Phi_p^*)\setminus \{{\bf 0}\}$. Since $\nu(\pi_i)\in (0,1)$ for all $i$ and ${\rm Part}_{[n]}$ is finite, we can pick $\epsilon>0$ such that $\nu(\pi_i)+\epsilon u_i>0$ for all $i$. Let $\nu'=\nu+\epsilon u$. It is easy to show that $\sum_i u_i=0$ for any $u\in {\rm Ker}(\Phi_p^*)$ and so we have $\nu'\in \rer_{[n]}$. Moreover, $\Phi_p(\nu')=\Phi_p(\nu)$, finishing the proof. (ii). Suppose that $\nu\in \rer_{[n]}$ is such that there is $v\in {\rm Ker}(\Phi_p^*)\setminus \{{\bf 0}\}$ with $v_i\ge 0$ for all $i\in ({\rm supp}\, \nu)^c$. In a similar fashion to the proof of part (i), we get that if $\epsilon>0$ is sufficiently small, then $\nu':=\nu+\epsilon v$ belongs to $\rer_{[n]}$ and moreover, $\Phi_p(\nu)=\Phi_p(\nu')$. Hence $\nu$ is not $(\rer_{[n]},p)$-unique. For the other direction, suppose that $\nu$ is not $(\rer_{[n]},p)$-unique. Then we can pick $\nu'\in\rer_{[n]}$ such that $\nu'\neq \nu$ and $\Phi_p(\nu)=\Phi_p(\nu')$ in which case ${\bf 0}\neq v:=\nu'-\nu\in {\rm Ker}(\Phi_p^*)$. Moreover, since $\nu'=v+\nu$ it follows that $v_i\ge 0$ for all $i\in ({\rm supp}\,\nu)^c$ since otherwise $\nu'$ would have a negative entry. (iii). Suppose that ${\rm Dim}({\rm Ker}(\Phi_p^*))=1$. Pick $w\in {\rm Ker}(\Phi_p^*)\setminus \{{\bf 0}\}$. Write $w=w_{+}-w_{-}$ where $(w_{+})_i=w_i$ if $w_i\ge 0$ and $(w_{+})_i=0$ if $w_i<0$.
Then, letting $\nu_1:=2 w_{+}/|w|_1$ and $\nu_2:=2 w_{-}/|w|_1$, we have $\nu_1,\nu_2\in \rer_{[n]}$, $\nu_1\neq\nu_2$ and since $w\in {\rm Ker}(\Phi_p^*)$ we have $\Phi_p(\nu_1)=\Phi_p(\nu_2)$. It is also clear that $\nu_1$ and $\nu_2$ are singular with respect to each other. It remains to prove uniqueness. For this, assume that $\nu_1',\nu_2'\in \rer_{[n]}$ satisfy $\Phi_p(\nu_1')=\Phi_p(\nu_2')$ and that $\nu_1'$ and $\nu_2'$ are singular with respect to each other. Since $\Phi_p$ is affine, $\nu_1'-\nu_2'\in {\rm Ker}(\Phi_p^*)$, and since ${\rm Dim}({\rm Ker}(\Phi_p^*))=1$ it follows that $\nu_1'-\nu_2'= c (\nu_1-\nu_2)$ for some $c\neq 0$. If $c>0$, then by singularity, $\nu_1'=c\nu_1$ and $c=1$. Hence, $\nu_1=\nu_1'$ and $\nu_2=\nu_2'$. Similarly, $c<0$ implies $\nu_1=\nu_2'$ and $\nu_2=\nu_1'$. Hence, the uniqueness is established. (iv). Now instead assume that ${\rm Dim}({\rm Ker}(\Phi_p^*))\ge 2$. Let $v$ and $w$ be two linearly independent elements in ${\rm Ker}(\Phi_p^*)$. It follows that either $2 v_+/|v|_1$ differs from $2 w_+/|w|_1$ or $2 v_-/|v|_1$ differs from $2 w_-/|w|_1$ (or both). Without loss of generality, we assume the former. For $a\ge 0$, let $u(a):=2 (a v +w)/|av+w|_1$ and let $\nu_1(a):=u(a)_{+}$ and let $\nu_2(a):=u(a)_{-}$, defined as in part (iii). Then for every $a$, $\nu_1(a),\nu_2(a)\in \rer_{[n]}$, $\Phi_p(\nu_1(a))=\Phi_p(\nu_2(a))$ and $\nu_1(a)$ and $\nu_2(a)$ are singular with respect to each other. Observe that $\nu_1(a)$ is continuous in $a$, $\nu_1(0)= 2 w_+/|w|_1$ and $\nu_1(a)\to 2 v_+/|v|_1$ as $a\to \infty$. The latter are distinct and hence $(\nu_1(a))_{a\ge 0}$ contains an uncountable collection of distinct elements from $\rer_{[n]}$. Finally, we observe that the stated extensions to certain $\Rc\subseteq \rer_{[n]}$ require only easy modifications of the given proofs. (i).
Taking $\Rc\subset \rer_{[3]}$ to be $$\Rc=\{\nu_1,\nu_2,\nu_3\}:=\{(1,0,0,0,0),(0,0,0,0,1),(0,1/3,1/3,1/3,0)\},$$ we have that ${\rm Ker}(\Phi_{1/2,\langle \Rc\rangle}^*)$ is nontrivial (indeed, by Example \[ex.n3\] below we have that $2\nu_1+\nu_2-3\nu_3\in {\rm Ker}(\Phi_{1/2,\langle \Rc \rangle }^*)$) but $\Phi_p$ is injective on $\Rc$; hence we need some convexity assumption on $\Rc$.\ (ii). If $\Rc$ is either the set of probability measures supported on some fixed subset of ${\rm Part}_{[n]}$ or $\Rc$ is the set of probability measures invariant under some group action (such as $\rer_{[n]}^{\exch}$), then all of the last conditions in Proposition \[p.kerprop\] hold and hence so do (i)-(iv).\ (iii). An example of a closed and convex set $\Rc\subset \rer_{[3]}$ where (iii) fails when $p=1/2$ is $$\{(q_1,\ldots,q_5)\in \rer_{[3]}\,:\,q_5\le \min(q_1,q_2,q_3,q_4)\}.$$ ($q_1,\ldots,q_5$ are defined as they were in the proof of Theorem \[t.bigfinitetheorem\](A).) To see this, first observe that $\nu_1:=(\frac{3}{7},\frac{1}{7},\frac{1}{7},\frac{1}{7},\frac{1}{7})$ and $\nu_2:=(\frac{1}{7},\frac{2}{7},\frac{2}{7},\frac{2}{7},0)$ are in $\Rc$ and $\Phi_{1/2}(\nu_1)=\Phi_{1/2}(\nu_2)$. Hence ${\rm Ker}(\Phi_{1/2,\langle \Rc \rangle}^*)$ has dimension at least 1 while this dimension is at most 1 since Example \[ex.n3\] (given below) shows that ${\rm Ker}(\Phi_{1/2}^*)$ has dimension 1. Now part (iii) of Proposition \[p.kerprop\] applied to $\rer_{[3]}$ gives that there is only one pair of singular measures in $\rer_{[3]}$ with the same $\Phi_{1/2}$ value, namely $(\frac{2}{3},0,0,0,\frac{1}{3})$ and $(0,\frac{1}{3},\frac{1}{3},\frac{1}{3},0)$. Since the first is not in $\Rc$, we do not have such a singular pair there, showing (iii) fails.
As must be the case, (\[e.condition\]) fails and one can immediately check that it fails for $\nu_1=(\frac{3}{7},\frac{1}{7},\frac{1}{7},\frac{1}{7},\frac{1}{7})$ and $\nu_2=(\frac{1}{7},\frac{2}{7},\frac{2}{7},\frac{2}{7},0)$, whose difference is in ${\rm Ker}(\Phi_{1/2}^*)$. However, it is easy to see that (iii) can never fail the “other way”, namely that if the dimension of the relevant kernel is 1, then there is at most one desired pair of singular measures; to see this, one notes that the proof given goes through verbatim for any $\Rc\subset \rer_{[3]}$. \[ex.n3\] As we saw in Theorem \[t.bigfinitetheorem\], $\Phi_{1/2}\,:\,\rer_{[3]}\to \cp_{[3],1/2}$ is not injective. Using Proposition \[p.kerprop\](ii), we can determine exactly which $\nu\in \rer_{[3]}$ are $(\rer_{[3]},1/2)$-unique. Recall that we write $\Phi_{1/2}(\nu)=L_{1/2}\nu$. The first four rows of $L_{1/2}$ will be the same as the last four (unlike in the $p\neq 1/2$ case). The first four rows of $L_{1/2}$ are given by $$(L_{1/2})_{1\le i \le 4, 1\le j\le 5}=\left( \begin{array}{ccccc} 1/8 & 1/4 & 1/4 & 1/4 & 1/2 \\ 1/8 & 1/4 & 0& 0& 0 \\ 1/8 & 0 & 0 & 1/4 & 0 \\ 1/8 & 0 & 1/4 & 0 & 0 \end{array} \right).$$ Elementary algebraic calculations show that the kernel of $L_{1/2}$ is spanned by $$\label{e.kernel3} \left( \begin{array}{c} 2 \\ -1 \\-1 \\ -1 \\ 1 \end{array} \right).$$ Using Proposition \[p.kerprop\](ii) and (\[e.kernel3\]) we can conclude that for $\nu\in \rer_{[3]}$: 1. If $|{\rm supp}\, \nu| =1$ then $\nu$ is $(\rer_{[3]},1/2)$-unique. 2. If $|{\rm supp}\, \nu| =2$ then $\nu$ is not $(\rer_{[3]},1/2)$-unique if and only if ${\rm supp}\,\nu=\{1,5\}.$ 3. If $|{\rm supp}\, \nu| =3$ then $\nu$ is not $(\rer_{[3]},1/2)$-unique if and only if\ ${\rm supp}\,\nu=\{2,3,4\}, \{1,2,5\}, \{1,3,5\}$ or $\{1,4,5\}.$ 4. If $|{\rm supp}\, \nu| =4$ then $\nu$ is not $(\rer_{[3]},1/2)$-unique. 5. If $|{\rm supp}\, \nu| =5$ then $\nu$ is not $(\rer_{[3]},1/2)$-unique.
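The matrix and kernel computations in Example \[ex.n3\] are easy to confirm numerically. Below is a sketch, assuming the coordinate order suggested by the matrix displayed above (the all-singletons partition first and the one-block partition $\{\{1,2,3\}\}$ last; the order of the three two-block partitions does not affect the kernel):

```python
from fractions import Fraction
from itertools import product
import numpy as np

# The five partitions of [3], ordered to match the columns of L_{1/2}.
partitions = [
    [{1}, {2}, {3}],
    [{1, 2}, {3}],
    [{1}, {2, 3}],
    [{1, 3}, {2}],
    [{1, 2, 3}],
]

# Full 8x5 matrix: entry (xi, pi) = P(color process = xi | partition pi), p = 1/2.
rows = []
for xi in product([0, 1], repeat=3):
    row = []
    for pi in partitions:
        if all(len({xi[i - 1] for i in b}) == 1 for b in pi):
            row.append(Fraction(1, 2) ** len(pi))
        else:
            row.append(Fraction(0))
    rows.append(row)

L = np.array(rows, dtype=float)
v = np.array([2, -1, -1, -1, 1], dtype=float)

assert np.allclose(L @ v, 0)            # v lies in the kernel
assert np.linalg.matrix_rank(L) == 4    # so the kernel is exactly span{v}
```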
Using $(iii)-(iv)$ of Proposition \[p.kerprop\] applied to $\rer_{[n]}$ and $\rer_{[n]}^{\exch}$, we can obtain the following corollary. This corollary only deals with cases where we have already established non-injectivity. (i). If $p=1/2$ then there is a unique singular pair $\nu_1,\nu_2\in \rer_{[n]}$ such that $\Phi_p(\nu_1)=\Phi_p(\nu_2)$ if $n=3$ and infinitely many such pairs if $n\ge 4$.\ (ii). If $p=1/2$ then there is a unique singular pair $\nu_1,\nu_2\in \rer_{[n]}^{\exch}$ such that $\Phi_p(\nu_1)=\Phi_p(\nu_2)$ if $n=3$ and infinitely many such pairs if $n\ge 4$.\ (iii). If $p\neq 1/2$ then there are infinitely many distinct singular pairs $\nu_1,\nu_2\in \rer_{[n]}$ such that $\Phi_p(\nu_1)=\Phi_p(\nu_2)$ if $n\ge 4$.\ (iv). If $p\neq 1/2$ then there is a unique singular pair $\nu_1,\nu_2\in \rer_{[n]}^{\exch}$ such that $\Phi_p(\nu_1)=\Phi_p(\nu_2)$ if $n=4$ and infinitely many such pairs if $n\ge 5$. First we show the following monotonicity property: if $n$ is such that $\rer_{[n]}$ contains infinitely many pairs of singular measures $\nu_1,\nu_2\in \rer_{[n]}$ with $\Phi_p(\nu_1)=\Phi_p(\nu_2)$, then the same holds for $n+1$. To see this, assume that $\nu_1,\nu_2\in \rer_{[n]}$ are singular with $\Phi_p(\nu_1)=\Phi_p(\nu_2)$. Let $T\,:\,\rer_{[n]}\to \rer_{[n+1]}$ be the injection from the proof of Theorem \[t.bigfinitetheorem\]. Then it is straightforward to verify that $T(\nu_1)$ and $T(\nu_2)$ are singular and give the same color process. The same proof using the injection $S$ (instead of $T$) from the proof of Theorem \[t.bigfinitetheorem\] shows that the same monotonicity property holds for $\rer_{[n]}^{\exch}$. In the general case ($\rer_{[n]}$), the dimension of the domain of our operator will be the number of partitions of the set $[n]$ and the dimension of the image space will be $2^n$.
In the exchangeable case ($\rer_{[n]}^{\exch}$), the dimension of the domain of our operator will be the number of partitions of the integer $n$ and the dimension of the image space will be $n+1$. (i). By Example \[ex.n3\], we have that ${\rm Dim}({\rm Ker}(\Phi_{1/2}^*))=1$ if $n=3$. For $n=4$, we have a mapping from a 15-dimensional space to a 16-dimensional space. However, since $p=1/2$, the probability on the latter has a $0\ks 1$-symmetry and so the range is at most 8-dimensional. From this, we conclude that ${\rm Dim}({\rm Ker}(\Phi_{1/2}^*))\ge 7$ and hence $(i)$ follows from Proposition \[p.kerprop\](iii,iv) and the above monotonicity. We mention that Mathematica shows that indeed ${\rm Dim}({\rm Ker}(\Phi_{1/2}^*))=7$. (ii). One can also check directly that ${\rm Dim}({\rm Ker}(\Phi_{1/2,\langle \rer_{[3]}^\exch \rangle}^*))=1$ (which also essentially follows from (i)). For $n=4$, $\Phi_{1/2,\langle \rer_{[4]}^\exch \rangle}^*$ maps from a 5-dimensional space to a 5-dimensional space and one easily checks that the range is 3-dimensional and therefore ${\rm Dim}({\rm Ker}(\Phi_{1/2,\langle \rer_{[4]}^\exch \rangle}^*))=2$. Hence $(ii)$ follows from Proposition \[p.kerprop\](iii,iv) and the above monotonicity. (iii). For $n=4$, $\Phi_{p}^*$ maps from a 15-dimensional space to a 16-dimensional space. Mathematica claims to give a basis (depending on $p$) for the kernel which is 3-dimensional. One can then check by hand that this proposed basis is linearly independent and belongs to the kernel. Hence, $(iii)$ follows from Proposition \[p.kerprop\](iii,iv) and the above monotonicity. (Note that Mathematica is not needed for the formal proof.) (iv).
Finally, with $p\neq 1/2$, if $n=4$, one can check by hand that $\Phi_{p,\langle \rer_{[4]}^\exch \rangle}^*$, which maps from a 5-dimensional space to a 5-dimensional space, has a range which is 4-dimensional and hence $${\rm Dim}({\rm Ker}(\Phi_{p,\langle \rer_{[4]}^\exch \rangle}^*))=1.$$ If $n=5$, $\Phi_{p,\langle \rer_{[5]}^\exch \rangle}^*$ maps a 7-dimensional space into a 6-dimensional space. Mathematica claims to give a basis (depending on $p$) for the kernel which is 2-dimensional. One can then check by hand that this proposed basis is linearly independent and belongs to the kernel. Hence $(iv)$ follows from Proposition \[p.kerprop\](iii,iv) and the above monotonicity. (Note that Mathematica is not needed for the formal proof.) Other general results in the finite case ------------------------------------------ \[t.mainfinitetheorem2\] If $\mu\in {\mathcal P}(\{0,1\}^{[2]})$, then $\mu\in \cp_{[2]}$ if and only if $\mu$ satisfies non-negative pairwise correlations and $\mu((1,0))=\mu((0,1))$. The “only if” direction is immediate. For the other direction, let $$\nu(\{\{1\},\{2\}\})=\frac{\mu((0,1))}{(\mu((1,1))+\mu((0,1)))(\mu((0,1))+\mu((0,0)))},$$ let $\nu(\{\{1,2\}\})=1-\nu(\{\{1\},\{2\}\})$ and let $p=\mu((1,1))+\mu((0,1))$. Then the assumption of non-negative pairwise correlations implies that $\nu(\{\{1\},\{2\}\}) \le 1$ and a straightforward calculation shows that $\Phi_p(\nu)=\mu$, as desired. A measure $\mu$ on $\{0,1\}^{n}$ is said to be exchangeable if it is invariant under all permutations of $[n]$. If we move to $n=3$, then it turns out that non-negative pairwise correlations and exchangeability (the latter no longer being necessary for being a color process when $n=3$) do not suffice for being a color process, as shown by the following example. We consider the distribution $\frac{1}{9}m_1+\frac{8}{9}m_2$ where $m_1,m_2$ are product measures with respective densities $.9$ and $.45$. This is exchangeable and has non-negative pairwise correlations.
Since the marginals are $1/2$ but the process does not exhibit $0\ks 1$-symmetry (see the next definition), it cannot be a color process. \[df.01symm\] A measure $\mu$ on $\{0,1\}^n$ is said to be $0\ks 1$-symmetric if for any $\xi\in \{0,1\}^n$, we have $\mu(\xi)=\mu(\hat{\xi})$ where we define $\hat{\xi}$ by letting $\hat{\xi}(i)=1-\xi(i)$ for all $i\in [n]$. The following result characterizes color processes for $n=3$ in the special case $p=1/2$. \[l.symm\] Let $\mu$ be a probability measure on $\{0,1\}^3$. Then $\mu\in\cp_{[3],1/2}$ if and only if $\mu$ has non-negative pairwise correlations and is $0\ks 1$-symmetric. The “only if” direction is immediate. For the other direction, let $p_1=\mu((1,1,1))=\mu((0,0,0))$, $p_2=\mu((1,1,0))=\mu((0,0,1))$, $p_3=\mu((1,0,1))=\mu((0,1,0))$ and $p_4=\mu((0,1,1))=\mu((1,0,0))$ where clearly $\sum_i p_i =1/2$. Let $q_1=\nu(\{\{1,2,3\}\})$, $q_2=\nu(\{\{1,2\},\{3\}\})$, $q_3=\nu(\{\{1,3\},\{2\}\})$, $q_4=\nu(\{\{1\},\{2,3\}\})$ and $q_5=\nu(\{\{1\},\{2\},\{3\}\})$. Without loss of generality, we may assume that $p_2\le \min\{p_3,p_4\}$. We then take $q_1:= 2(p_1+p_2-p_3-p_4), q_2:=0, q_3:= 4p_3-4p_2, q_4:= 4p_4-4p_2$ and $q_5:= 8p_2$. One can immediately check that $\sum_i q_i =1$ with no assumptions. The key point is to show that $q_i\in [0,1]$ for each $i$. After this, it is easy to check that this $\nu$ works and this is left to the reader. To establish $q_i\in [0,1]$ for each $i$, we will of course use the non-negative pairwise correlations assumption. The latter assumption easily yields $p_1+p_2\ge 1/4$, $p_1+p_3\ge 1/4$ and $p_1+p_4\ge 1/4$. Recall also $\sum_i p_i =1/2$ and $p_2\le \min\{p_3,p_4\}$. These are all that will be used. If $p_2=1/8 + \epsilon$ for some $\epsilon>0$, then $\sum_i p_i =1/2$ and $p_2\le \min\{p_3,p_4\}$ imply that $p_1\le 1/8 -3 \epsilon$, contradicting $p_1+p_2\ge 1/4$. Hence $p_2\le 1/8$ and so $q_5\in [0,1]$. Next, $q_1\ge 0$ since $p_1+p_2\ge 1/4$ and $\sum_i p_i =1/2$. The latter also gives that $q_1\le 1$.
Next, $p_2\le \min\{p_3,p_4\}$ yields $q_3\ge 0$. If $p_3=1/4 + \epsilon$ for some $\epsilon>0$, then $\sum_i p_i =1/2$ yields that $p_1+p_2< 1/4$, contradicting one of our inequalities. Therefore $p_3\le 1/4$, implying $q_3\le 1$. Lastly, $q_4$ is handled exactly as $q_3$. Unfortunately, we don’t have any nice characterization of $\cp_{[3],p}$ for $p\neq 1/2$ since we don’t have a good replacement for the $0\ks 1$-symmetry in this case. The next result shows that Proposition \[l.symm\] has no extension to larger $n$, even if exchangeability is assumed. \[l.nonnegcorr\] For each $n\ge 4$, there is a measure $\mu$ on $\{0,1\}^{[n]}$ which is exchangeable, $0\ks 1$-symmetric and has non-negative pairwise correlations but for which $\mu\notin\cp_{[n],1/2}$. Consider the measure $\mu$ on $\{0,1\}^{[n]}$ which is uniform on all points belonging to levels 1 or $n-1$, where level $i$ refers to those elements which have $i$ 1’s. Exchangeability and $0\ks 1$-symmetry are obvious. Next, we have $$E_{\mu}[X(1)X(2)]=\frac{1}{2}\times \frac{(n-2)}{n} =\frac{1}{2}-\frac{1}{n}$$ so that $$\mbox{Cov}_{\mu}(X(1),X(2))=\frac{1}{2}-\frac{1}{n} -\frac{1}{4}=\frac{1}{4}-\frac{1}{n},$$ which is non-negative if and only if $n\ge 4$. Finally, $\mu$ assigns measure $0$ to the configuration $(1,\ldots,1)$, while any color process assigns this configuration probability $E_{\nu}[2^{-N}]>0$, where $N$ is the number of equivalence classes; hence $\mu\notin \cp_{[n],1/2}$. We recall the following two definitions. A probability measure on $\{0,1\}^{[n]}$ is called [*positively associated*]{} if any two increasing events are positively correlated. \[d.FKGL\] A probability measure on $\{0,1\}^{[n]}$ is said to satisfy the [*FKG lattice condition*]{} if, whenever all but two of the variables are conditioned on, the remaining two variables are (conditionally) positively correlated.
The famous FKG Theorem (see [@FKG]) says that if a measure on $\{0,1\}^{[n]}$ has full support and satisfies the FKG lattice condition, then, whenever some of the variables are conditioned on, the (conditional) distribution of the remaining variables is positively associated (and so, in particular, the measure itself is positively associated). One can show that the example right before Definition \[df.01symm\] satisfies the FKG lattice condition. This shows that exchangeability and the FKG lattice condition do not necessarily lead to being a color process. Interestingly, although color processes of course always have non-negative pairwise correlations, they are not necessarily positively associated, as shown by the following simple example. \[e.posass4\] Define $\nu\in \rer_{[4]}$ to be $\{\{1,2\},\{3\},\{4\}\}$ with probability $1/2$ and $\{\{1\},\{2\},\{3,4\}\}$ with probability $1/2$. Let $A$ be the event that $X^{\nu,1/2}(1)=X^{\nu,1/2}(2)=1$ and $B$ the event that $X^{\nu,1/2}(3)=X^{\nu,1/2}(4)=1$. Then ${\bf P}(A)={\bf P}(B)=3/8$ but ${\bf P}(A\cap B)=1/8<9/64={\bf P}(A){\bf P}(B)$. While we have not bothered to check, we suspect that all color processes for $n=3$ are in fact positively associated; this is certainly true for $n=2$. There are results concerning positive association for color processes associated to the RER corresponding (via its percolation clusters) to the FK model given in Definition \[df.RC\]. Positive association was proved, in chronological order, (1) for $q\ge 1$ and $p\in [1/q,1-1/q]$ in [@OHfuzzy], (2) for $q=1$ and $p\in [0,1]$ in [@OH01] and (3) for $q\ge 1$ and $p\in [0,1]$ in [@KW]. Interestingly, in this last mentioned paper, the authors conjecture that this is true for all $q>0$ and bring up the question of positive association in the general setup of divide and color models that we study in this paper.
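The probabilities in Example \[e.posass4\] can be checked by exact enumeration over the two partitions and the colorings of their blocks; a short sketch:

```python
from fractions import Fraction
from itertools import product

# The two partitions of [4] from Example e.posass4, each with probability 1/2.
pi1 = [{1, 2}, {3}, {4}]
pi2 = [{1}, {2}, {3, 4}]
half = Fraction(1, 2)

def prob(event):
    """P(event) under the color process X^{nu,1/2}, computed exactly by
    summing over the two partitions and all colorings of their blocks."""
    total = Fraction(0)
    for pi in (pi1, pi2):
        for colors in product([0, 1], repeat=len(pi)):
            xi = {i: c for b, c in zip(pi, colors) for i in b}
            if event(xi):
                total += half * half ** len(pi)
    return total

A = lambda xi: xi[1] == 1 and xi[2] == 1
B = lambda xi: xi[3] == 1 and xi[4] == 1

assert prob(A) == prob(B) == Fraction(3, 8)
assert prob(lambda xi: A(xi) and B(xi)) == Fraction(1, 8)
assert Fraction(1, 8) < prob(A) * prob(B)   # A and B are negatively correlated
```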
Color processes associated to infinite exchangeable random partitions {#s.exchangeable} ===================================================================== In this section, we restrict ourselves to color processes arising from so-called infinite exchangeable random partitions. In Subsection \[s.definetti\], we recall the notions of simplices, infinite exchangeable processes and infinite exchangeable random partitions as well as the central de Finetti’s and Kingman’s Theorems concerning such objects. In Subsection \[s.ecp\], we develop some general results which apply for all values of $p$. The map $\Phi_p$ seems to have very different properties depending on whether $p=1/2$ or $p\neq 1/2$, being “much more injective” in the latter case. (Recall, analogously, that Theorem \[t.bigfinitetheorem\](A) and (C) (or (B) and (D)) in Section \[s.finitecase\] tell us that for $n=3$, we have injectivity in the $p\neq 1/2$ case and non-injectivity in the $p= 1/2$ case.) In Subsection \[s.symmcase\], we restrict to the $p= 1/2$ case, characterizing the set of color processes as those which exhibit $0\ks 1$-symmetry (Theorem \[t.mainp12\]) and characterizing “where $\Phi_{1/2}$ is injective”, i.e., which $\nu\in \rer_{\N}^{\exch}$ are $(\rer_{\N}^{\exch},1/2)$-unique (Theorem \[p.unprop\]). In Subsection \[s.NONsymmcase\], we restrict to the $p\neq 1/2$ case, obtaining some results which might suggest that $\Phi_{p}$ is injective in this case. In Subsection \[s.gaussian\], we look at threshold Gaussian and stable processes. Background: Simplices and de Finetti’s and Kingman’s Theorems {#s.definetti} ------------------------------------------------------------- We first recall Choquet’s Theorem (see [@glasner], p. 367).
\[t.choquet\] If $Q$ is a metrizable compact convex subset of a locally convex topological vector space, then for each $x\in Q$, there is a probability measure $\mu$ on the extremal elements $\emph{ext}(Q)$ of $Q$ for which $x$ is the barycenter (average) of $\mu$ in the sense that for all continuous affine functions $f$ on $Q$, $$f(x)=\int_{\emph{ext}(Q)} f d\mu.$$ If $Q$ is a metrizable compact convex subset of a locally convex topological vector space, then $Q$ is a [*simplex*]{} if for all $x\in Q$, the representing $\mu$ in Choquet’s Theorem is unique. The following example is illustrative and will appear soon. Let $C_3$ be the set of probability measures on $[0,1]$ in the weak$^*$ topology, $C_2$ be the subset consisting of probability measures with mean $1/2$ and $C_1$ the further subset consisting of probability measures which are symmetric about $1/2$. Clearly $C_1\subseteq C_2\subseteq C_3$ and each $C_i$ is a metrizable compact convex set in this topology for which Choquet’s Theorem is applicable. Interestingly, while $C_1$ and $C_3$ are simplices, $C_2$ is not, as can be checked. The extremal elements of $C_3$ are the point masses while the extremal elements of $C_1$ are measures of the form $\frac{\delta_{1/2+a}+\delta_{1/2-a}}{2}$. Next, let ${\rm Perm}_{\N}$ denote the space of permutations on $\N$ which fix all but finitely many elements. A stochastic process $(X(i))_{i\in \N}$ is said to be exchangeable if for any $\sigma \in {\rm Perm}_{\N}$, $(X(\sigma(i)))_{i\in \N}$ and $(X(i))_{i\in \N}$ are equal in distribution. The following is de Finetti’s Theorem (see [@DURRETT], p. 228). \[t.definetti\] Given a real-valued exchangeable process $X$, there is a unique random distribution $\Xi$ on ${\mathbb R}$ such that $X$ is obtained by first choosing $\Xi$ and then letting $X$ be i.i.d. with distribution $\Xi$. It follows that the set of exchangeable processes is a simplex whose extremal elements are product measures.
In this paper, we mainly consider processes which are $\{0,1\}$-valued. Let ${\rm EP}_{\N}$ denote the space of exchangeable processes on $\N$ taking values in $\{0,1\}^\N$. For $p\in [0,1]$, let ${\rm EP}_{\N,p}$ denote the space of elements in ${\rm EP}_{\N}$ whose marginal distribution has mean $p$. Mostly, we will refer to the elements of ${\rm EP}_{\N,p}$ as probability measures, but sometimes as processes. If $\nu\in {\rm EP}_{\N}$, then de Finetti’s Theorem says that there exists a unique probability measure $\rho_\nu$ on $[0,1]$ such that $$\label{e.nuxi} \nu=\int_{s=0}^1 \Pi_s \, d\rho_{\nu}(s),$$ where $\Pi_s$ denotes product measure on $\{0,1\}^{\N}$ with density $s$. In this case, $\Xi$ is concentrated on $\{0,1\}$ and hence is parameterized by $[0,1]$. We therefore have a bijection between ${\rm EP}_{\N}$ and probability measures on $[0,1]$. In what follows, we will denote by $\xi_{\nu}$ a random variable with law $\rho_{\nu}$. Similarly, given a random variable $\xi$ on $[0,1]$, we will denote by $\nu_{\xi}$ the exchangeable process obtained from (\[e.nuxi\]) where $\rho_{\nu}$ is taken to be the law of $\xi$; i.e., $\xi$ has distribution $\rho_{{\nu}_{\xi}}$. Given a real-valued exchangeable process $X$ and $h\in {\mathbb R}$, we let $Y^h=(Y^h(i))_{i=0}^{\infty}$ be the “$h$-threshold process obtained from $X$” defined by $Y^h(i)=1\{X(i)\ge h\}$. Clearly $Y^h\in {\rm EP}_{\N}$ and it is of interest to determine if $Y^h$ is a color process. In Section \[s.gaussian\], we will see that this is the case for the $0$-threshold Gaussian and stable processes. Next, we find the probability measure $\rho_{Y^h}$ corresponding to $Y^h$. Recall the definition of $\Xi$ used in the representation of $X$ above.
Observe that for any $k\ge 1$, any sequence of integers $0\le n_1<\ldots <n_k$ and any choices of $i_{n_1},\ldots,i_{n_k}\in \{0,1\}$ we have $$\label{e.projectexch} P(Y^h(n_1)=i_{n_1},\ldots,Y^h(n_k)=i_{n_k})=E\left[\Xi([h,\infty))^{\sum_{j=1}^k i_{n_j}}(1-\Xi([h,\infty)))^{k-\sum_{j=1}^k i_{n_j}}\right].$$ From (\[e.projectexch\]) it follows that $\rho_{Y^h}$ is the law of $\Xi([h,\infty))$, or equivalently, $\xi_{Y^h}=\Xi([h,\infty))$. For $\sigma\in {\rm Perm}_{\N}$ and $\pi \in {\rm Part}_{\N}$ define $\sigma \pi \in {\rm Part}_{\N}$ by letting $\sigma\pi(x)=\sigma \pi(y)$ if and only if $\pi(\sigma^{-1}(x))=\pi(\sigma^{-1}(y))$. The “$-1$” is present to ensure that we have a “group action”. For $\nu\in \rer_{\N}$ and $\sigma\in {\rm Perm}_{\N}$, let $\sigma\circ \nu \in \rer_{\N}$ be defined as $\sigma\circ \nu (\cdot)=\nu(\sigma^{-1}(\cdot))$. We say that $\nu\in \rer_{\N}$ is *exchangeable* if for any $\sigma \in {\rm Perm}_{\N}$ we have $\sigma \circ \nu=\nu$. The space of exchangeable RERs on $\N$ will be denoted by $\rer_{\N}^{\exch}$. Of course, $\N$ can be replaced by any countable set here: since we are considering all permutations, there is no “geometric structure”, and we use $\N$ only for simplicity. The following is the first step in introducing our collection of exchangeable RERs. We say that ${\bf p}=(p_1,p_2,\ldots)$ is a *paint-box* if $p_i\ge 0$ for all $i$, $p_i\ge p_{i+1}$ for all $i$, and $\sum_i p_i\le 1$. Given a paint-box ${\bf p}=(p_1,p_2,\ldots)$, we obtain an element of $\rer_{\N}^{\exch}$ as follows. Define the random equivalence classes $(S_i)_{i\ge 1}$ by putting each element of ${\mathbb N}$ independently in $S_i$ with probability $p_i$ and, with probability $1-\sum_i p_i$, putting it in its own equivalence class. We denote this [RER]{} by $\nu_{{\bf p}}$. It follows easily that $\nu_{{\bf p}}\in \rer_{\N}^{{\rm \exch}}$.
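Restricted to finitely many sites, the paint-box construction is straightforward to simulate. The sketch below (with an arbitrarily chosen paint-box ${\bf p}=(0.5,0.3)$ and $p=0.4$, purely for illustration) compares a Monte Carlo estimate of $P(X(1)=X(2)=1)$ for the associated color process against the exact value $p\sum_i p_i^2+p^2(1-\sum_i p_i^2)$, which follows since sites $1$ and $2$ fall in a common equivalence class with probability $\sum_i p_i^2$:

```python
import random

random.seed(0)

def sample_color_pair(pb, p):
    """Restrict the paint-box RER nu_pb to the sites {1,2} and color it:
    each site joins block i with probability pb[i], otherwise it gets its
    own equivalence class; classes are then colored 1 with probability p."""
    def block(x):
        u, acc = random.random(), 0.0
        for i, pi in enumerate(pb):
            acc += pi
            if u < acc:
                return i
        return ('own', x)                 # its own equivalence class
    b1, b2 = block(1), block(2)
    c1 = 1 if random.random() < p else 0
    c2 = c1 if b1 == b2 else (1 if random.random() < p else 0)
    return c1, c2

pb, p, n = (0.5, 0.3), 0.4, 200_000
emp = sum(sample_color_pair(pb, p) == (1, 1) for _ in range(n)) / n
same = sum(q * q for q in pb)             # P(sites 1 and 2 share a class)
exact = same * p + (1 - same) * p * p
assert abs(emp - exact) < 0.01
```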
We use slightly different terminology for paint-boxes than what is used in [@JB06], where it is the RER $\nu_{\bf p}$, rather than the vector ${\bf p}$, which is called a paint-box. The subset of $\rer^{\exch}_{\N}$ which consists of RERs obtained from paint-boxes will be denoted by $\rer^{\exch,\pure}_{\N}$. We can obtain more elements in $\rer_{\N}^{\exch}$ by taking convex combinations, and in fact generalized convex combinations, of the elements in $\rer_{\N}^{\exch,\pure}$. It is immediate that all of these are in $\rer_{\N}^{\exch}$. Kingman’s famous theorem (Theorem \[t.kingman\] below, see also [@JB06]) says that these account for all of the elements of $\rer_{\N}^{\exch}$. Moreover, the uniqueness in this theorem tells us that $\rer_{\N}^{\exch}$ is a simplex whose extremal elements are $\rer_{\N}^{\exch,\pure}$. \[t.kingman\] [**(Kingman)** ]{}Suppose that $\nu\in {\rm RER}_{\N}^{\exch}$. Then there is a unique probability measure $\rho=\rho_{\nu}$ on ${\rm RER}^{\exch,\pure}_{\N}$ such that $$\nu=\int_{\nu_{{\bf p}}\in {\rm RER}^{\exch,\pure}_{\N}}\nu_{\bf p}\,d\rho(\nu_{\bf p}).$$ Infinite exchangeable color processes {#s.ecp} ------------------------------------- Our first result says that ${\rm CP}_{\N,p}^{\exch}$ (which, recall, was defined to be the image of $\rer_{\N}^{\exch}$ under $\Phi_p$) is simply ${\rm EP}_{\N,p}\cap {\rm CP}_{\N,p}$. \[p.imageprop\] For any $p\in[0,1]$, $${\rm CP}_{\N,p}^{\exch}={\rm EP}_{\N,p}\cap {\rm CP}_{\N,p}.$$ The containment $\subseteq$ is clear. Assume that $\mu\in {\rm EP}_{\N,p}\cap {\rm CP}_{\N,p}$. Then there is some $\nu\in \rer_{\N}$ such that $\Phi_p(\nu)=\mu$. We will be done if we find some $\nu'\in \rer_{\N}^{\exch}$ such that $\Phi_p(\nu')=\mu$. We will construct such a $\nu'$ from $\nu$.
Let ${\rm Perm}_{[n]}$ denote the set of permutations on $[n]$ and let $$\nu_n=\frac{1}{|{\rm Perm}_{[n]}|}\sum_{\sigma\in {\rm Perm}_{[n]}}\sigma\circ \nu,$$ where it is understood that a $\sigma\in {\rm Perm}_{[n]}$ is viewed as an element of ${\rm Perm}_{\N}$ which fixes all $k$ larger than $n$. Since $\mu\in {\rm EP}_{\N,p}$ and $\Phi_p$ commutes with permutations it follows that $\Phi_{p}(\sigma\circ \nu)=\sigma\circ\Phi_p(\nu)=\sigma\circ \mu =\mu$ for any $\sigma\in {\rm Perm}_{[n]}$. In particular, $\Phi_p(\nu_n)=\mu$ for all $n$. Clearly $\nu_n$ is invariant under permutations of $[n]$ (meaning that $\sigma \circ \nu_n=\nu_{n}$ for any $\sigma\in {\rm Perm}_{[n]}$), so that in particular the restriction of $\nu_n$ to $[n]$ belongs to $\rer_{[n]}^{\exch}$. By compactness, we can choose some subsequence $n_k$ so that $\nu_{n_k}$ converges to some $\nu_{\infty}$ as $k\to \infty$. It is clear that $\nu_{\infty}\in \rer_{\N}^{\exch}$ and $\Phi_p(\nu_{\infty})=\mu$ follows from the easily shown fact that $\Phi_p(\cdot)$ is continuous. We now show that the mixing random variable $\xi$ for the color process corresponding to a paintbox is a so-called Bernoulli convolution. \[l.xipaint\] Fix $p\in [0,1]$ and a paintbox ${\bf p}=(p_1,p_2,\ldots)$. For the associated color process, let $\xi_{{\bf p},p}$ be the representing random variable in $[0,1]$ in de Finetti’s Theorem. Then, in distribution, $$\label{e.xiequiv} \xibfp=(1-\sum_{i\ge 1} p_i) p+\frac{1}{2} \sum_{i\ge 1} p_i +\frac{1}{2} \sum_{i\ge 1} p_i Z_i,$$ where the $Z_i$ are i.i.d. random variables with $P(Z_i=1)=p$ and $P(Z_i=-1)=1-p$. If $p=1/2$,  simplifies to $$\label{e.xiequi2} \xihalf=\frac{1}{2}+\frac{1}{2}\sum_{i\ge 1} p_i Z_i.$$ Let $p\in [0,1]$ and consider the paintbox ${\bf p}=(p_1,p_2,\ldots)$. Define a random subset $S$ of ${\N}$ by independently putting each $n\in \N$ in $S$ with probability $p$ and in $S^c$ with probability $1-p$. 
Letting $$\label{e.xidef} \xi_{{\bf p},p}:=\sum_{i\ge 1} I\{i\in S\} p_i + (1-\sum_{i\ge 1} p_i) p,$$ and $F_{\xi_{{\bf p},p}}$ be the law of $\xi_{{\bf p},p}$, it is straightforward to see that $$\Phi_p(\nu_{\bf p})=\int_{s=0}^1 \Pi_s d F_{\xi_{{\bf p},p}}(s)\mbox{ }(=\nu_{\xi_{{\bf p},p}}).$$ Finally, one verifies that  can be rewritten as . As an application of Lemma \[l.xipaint\] we get the identities $$\label{e.simplebox} \Phi_p(\nu_{(p_1,0,\ldots)})=p \Pi_{p_1+(1-p_1)p} +(1-p) \Pi_{(1-p_1) p},$$ and $$\begin{aligned} \label{e.pure2box} \lefteqn{\Phi_p(\nu_{(p_1,p_2,0,\ldots)})=p^2 \Pi_{p_1+p_2+(1-p_1-p_2)p} +(1-p) p\Pi_{p_1+(1-p_1-p_2) p}}\\ & & +(1-p) p\Pi_{p_2+(1-p_1-p_2) p}+(1-p)^2\Pi_{(1-p_1-p_2) p},\nonumber\end{aligned}$$ which in the case $p=1/2$ simplify to $$\label{e.simplebox05} \Phi_{1/2}(\nu_{(p_1,0,\ldots)})=\frac{1}{2}( \Pi_{1/2+p_1/2} +\Pi_{1/2-p_1/2}),$$ and $$\begin{aligned} \label{e.pure2box05} \lefteqn{\Phi_{1/2}(\nu_{(p_1,p_2,0,\ldots)})}\nonumber\\& & =\frac{1}{4} (\Pi_{1/2+(p_1+p_2)/2}+\Pi_{1/2+(p_1-p_2)/2}+\Pi_{1/2-(p_1-p_2)/2}+\Pi_{1/2-(p_1+p_2)/2}).\end{aligned}$$ From  and  we obtain $$\label{e.purenonuniq05} \Phi_{1/2}(\nu_{(p_1,p_2,0,\ldots)})=\frac{1}{2}\Phi_{1/2}(\nu_{(q_1,0,\ldots)})+\frac{1}{2} \Phi_{1/2}(\nu_{(q_2,0,\ldots)}),$$ where $q_1=p_1+p_2$ and $q_2=p_1-p_2$. Note that this implies that $\Phi_{1/2}\,:\,\rer_{\N}^{\exch}\to \cp_{\N,{1/2}}^{\exch}$ is not injective. On the other hand, we have the following proposition, where the key part of the proof was provided to us by Russell Lyons. \[p.Russ\] The map $$\Phi_p\,:\,\rer_{\N}^{\exch,\pure}\to \cp_{\N,p}^{\exch}$$ is injective for every $p\in (0,1)$. Fix $p\in (0,1)$ and consider two different paintboxes ${\bf p}$ and ${\bf p}'$. 
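The non-injectivity identity $\Phi_{1/2}(\nu_{(p_1,p_2,0,\ldots)})=\frac{1}{2}\Phi_{1/2}(\nu_{(q_1,0,\ldots)})+\frac{1}{2}\Phi_{1/2}(\nu_{(q_2,0,\ldots)})$ above can be verified exactly at the level of the representing variables $\xi_{{\bf p},1/2}=\frac{1}{2}+\frac{1}{2}\sum_i p_i Z_i$ from Lemma \[l.xipaint\]. A short exact check in Python (not from the paper; helper names are ours):

```python
from itertools import product
from collections import Counter
from fractions import Fraction

def xi_pmf(p_box):
    """Exact law of xi = 1/2 + (1/2) * sum_i p_i * Z_i with fair Z_i = +/-1."""
    half = Fraction(1, 2)
    pmf = Counter()
    weight = Fraction(1, 2 ** len(p_box))
    for signs in product([1, -1], repeat=len(p_box)):
        value = half + half * sum(z * q for z, q in zip(signs, p_box))
        pmf[value] += weight
    return pmf

p1, p2 = Fraction(1, 2), Fraction(1, 4)
lhs = xi_pmf([p1, p2])                      # paintbox (p1, p2, 0, ...)
rhs = Counter()
for q in (p1 + p2, p1 - p2):                # equal mixture of simple paintboxes
    for value, w in xi_pmf([q]).items():
        rhs[value] += w / 2
assert lhs == rhs                           # same representing distribution
```

By the uniqueness in de Finetti's Theorem, equality of these representing distributions is exactly equality of the corresponding color processes.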
In view of Lemma \[l.xipaint\] and the uniqueness in de Finetti’s Theorem, we need to show that $$(1-\sum_{i\ge 1} p_i) p+\frac{1}{2} \sum_{i\ge 1} p_i +\frac{1}{2} \sum_{i\ge 1} p_i Z_i,$$ and $$(1-\sum_{i\ge 1} p_i') p+\frac{1}{2} \sum_{i\ge 1} p_i' +\frac{1}{2} \sum_{i\ge 1} p_i' Z_i,$$ have different distributions where, as before, the $Z_i$ are i.i.d. random variables with $P(Z_i=1)=p$ and $P(Z_i=-1)=1-p$. The lengths of the smallest intervals containing the supports of these distributions are $\sum_{i\ge 1} p_i$ and $\sum_{i\ge 1} p_i'$, and hence if these differ, then the distributions are different. Assume now that $\sum_{i\ge 1} p_i=\sum_{i\ge 1} p_i'$. In this case, if the distributions were the same, we would also have that the distributions of $\sum_{i\ge 1} p_i Z_i$ and $\sum_{i\ge 1} p_i' Z_i$ were the same. We will now be done if we prove that the Fourier transform $$f(z):={\mathbf E}[{\rm e}^{z \sum_{i=1}^{\infty} p_i Z_i}],\mbox{ }z\in {\mathbb C}$$ determines the paintbox ${\bf p}$. We do this in the case $p_i>0$ for all $i\ge 1$. The argument is easily modified to the case $p_i=0$ for all $i$ sufficiently large. By independence, $$f(z)=\prod_{j=1}^{\infty}{\mathbf E}[{\rm e}^{z \,p_j Z_j}]=\prod_{j=1}^{\infty}(p\,{\rm e}^{z\, p_j}+(1-p)\,{\rm e}^{-z\,p_j}).$$ For $j\ge 1$, let $\Delta_j=\{z\in {\mathbb C}\,:\,{\mathbf E}[{\rm e}^{z \,p_j Z_j}]=0\}$. Then $$\label{e.deltadef} \Delta_j=\left\{\frac{1}{p_j} \left( \frac{\log{\left(\frac{1-p}{p}\right)}}{2} +i (\pi k+\pi/2)\right)\,:\,k\in {\mathbb Z}\right\}.$$ Let $g_1(z)=f(z)$ and for $n\ge 2$ let $$g_n(z)=\prod_{j=n}^{\infty} {\mathbf E}[{\rm e}^{z \,p_j Z_j}].$$ Since $\sum_{j\ge 1} p_j\le 1$, we have $g_n(z)=0$ only if $z\in \Delta_j$ for some $j\ge n$. For $n\ge 1$, let $$t_n=\inf\{|{\rm Im}(z)|\,:\,g_n(z)=0\}.$$ Hence, according to  and since $(p_j)_{j\ge 1}$ is non-increasing, $t_n=\pi/( 2 p_n)$. Hence, we can recover the sequence $(p_n)_{n\ge 1}$ from the sequence $(t_n)_{n\ge 1}$ and the result follows. The case $p=1/2$.
{#s.symmcase} ----------------- In this subsection, we obtain some results concerning $\Phi_{1/2}$ on $\rer_{\N}^{\exch}$. First observe that if $\mu \in \cp_{\N,1/2}^{\exch}$, then $\mu$ is $0\ks 1$-symmetric and hence so is the representing random variable $\xi_\mu$; i.e. $\xi_\mu=1-\xi_\mu$ in law. Interestingly, as we will see below in Theorem \[t.mainp12\], this necessary condition of symmetry is actually a sufficient condition for being a color process when $p=1/2$. In Theorem \[p.unprop\] we determine exactly which exchangeable RERs are $(\rer_{\N}^{\exch},1/2)$-unique. In the proofs below, we will make use of the following lemma which follows easily from de Finetti’s theorem. \[l.exchextremal\] Let ${\rm EP}_{\N,1/2}^{\symm}$ be the set of exchangeable processes which are $0\ks 1$-symmetric (or equivalently their representing distribution in $[0,1]$ is symmetric about $1/2$) and for $\alpha\in [0,1/2]$, let $\mu_{\alpha}:=(\Pi_{1/2+\alpha}+\Pi_{1/2-\alpha})/2$. Then ${\rm EP}_{\N,1/2}^{\symm}$ is a simplex and $$\label{e.extremalset} \emph{ext}({\rm EP}_{\N,1/2}^{\symm})=(\mu_\alpha)_{\alpha\in [0,1/2]}.$$ The following subset of ${\rm RER}_{\N}^{\exch,\pure}$ will play an important role in our discussions below. The subset of $\rer^{\exch,\pure}_{\N}$ which consists of RERs obtained from paint-boxes with $p_2=0$ will be denoted by $\rer^{\exch,\simple}_{\N}$. Note that, using , we have a natural identification between $\rer^{\exch,\simple}_{\N}$, $\{\mu_{\alpha}\}_{\alpha \in [0,1/2]}$ from Lemma \[l.exchextremal\] and $[0,1/2]$ via $$(p,0,\ldots)\leftrightarrow \mu_{p/2} \leftrightarrow p/2$$ with the first bijection also being given by $\Phi_{1/2}$. \[t.mainp12\] The map $\Phi_{1/2}\,:\,\rer_{\N}^{\exch}\to {\rm EP}_{\N,1/2}^{\symm}$ is onto.
Moreover, for every $\mu\in {\rm EP}_{\N,1/2}^{\symm}$ there is a unique probability measure $\rho_{\mu}$ on $\rer_{\N}^{\exch,\simple}$ such that $$\label{e.mainexch2} \mu=\Phi_{1/2}\left(\int_{\nu\in \rer_{\N}^{\exch,\simple}}\nu\,d\rho_{\mu}(\nu)\right).$$ (Hence ${\rm EP}_{\N,1/2}^{\symm}=\cp_{\N,1/2}^{\exch}$ is a simplex whose set of extremal elements is $\{\mu_{\alpha}\}_{\alpha \in [0,1/2]}$.) On the other hand, the map $\Phi_{1/2}\,:\,{\rm RER}_{\N}^{\exch,\pure}\to {\rm EP}_{\N,1/2}^{\symm}$ is not onto. We start with . As already observed right before Theorem \[t.mainp12\], if ${\bf p}_{\alpha}= (2\alpha,0,\ldots)$ with $\alpha\in [0,1/2]$, then $$\label{e.degen} \Phi_{1/2}(\nu_{{\bf p}_\alpha})=\mu_\alpha.$$ Hence $\mu_\alpha \in \cp_{\N,1/2}^{\exch}$. Now pick an arbitrary $\mu\in {\rm EP}_{\N,1/2}^{\symm}$. By Lemma \[l.exchextremal\] there is a unique law $F_\mu$ on $[0,1/2]$ such that $$\label{e.muident} \mu=\int_{0}^{1/2} \mu_{\alpha} d F_{\mu}(\alpha).$$ It follows from the affine property of $\Phi_{1/2}$ that $$\label{e.convpunch} \Phi_{1/2}\left(\int_{0}^{1/2} \nu_{{\bf p}_\alpha} d F_{\mu}(\alpha)\right)=\int_{0}^{1/2} \Phi_{1/2}( \nu_{{\bf p}_\alpha}) d F_{\mu}(\alpha)\stackrel{~\eqref{e.degen}}{=}\int_{0}^{1/2} \mu_\alpha d F_{\mu}(\alpha)\stackrel{~\eqref{e.muident}}{=}\mu,$$ and  follows. The uniqueness of $\rho_\mu$ follows from the comment before Theorem \[t.mainp12\]. Next, we need to prove that there exist elements of ${\rm EP}_{\N,1/2}^{\symm}$ which cannot be obtained as the image of some element of $\rer_{\N}^{\exch,\pure}$ under $\Phi_{1/2}$. Consider a paintbox ${\bf p}=(p_1,p_2,\ldots)$. Recall $\xi_{{\bf p},1/2}$ from . Then $$\label{e.phintrepr} \Phi_{1/2}(\nu_{\bf p})=\int_{s=0}^1 \Pi_s d F_{\xi_{{\bf p},1/2}}(s),$$ where $F_{\xi_{{\bf p},1/2}}$ is the law of $\xi_{{\bf p},1/2}$.
From  and , we see that it suffices to find a random variable $W$ in $[0,1]$ which is symmetric around $1/2$ and which cannot be written as $$\label{e.zrepr} W=\frac{1}{2}+\frac{1}{2}\sum_i p'_i Z_i,$$ for any paintbox ${\bf p}'=(p_1',\ldots)$ where the $\{Z_i\}$’s are as in the proof of Proposition \[p.Russ\]. Take $W$ to be a random variable with $P(W=1)=P(W=0)=3/8$ and $P(W=1/2)=1/4.$ Now, if $W$ has the above representation, then we must have $p_i'\neq 0$ for $i=1,2$ and $p_i'=0$ for all $i\ge 3$, since $W$ has three possible values. However, we then obtain $$\label{e.contreq} P(W=\frac{1}{2}+\frac{p_1'+p_2'}{2}) = P(W=\frac{1}{2}+\frac{p_1'-p_2'}{2}) = P(W=\frac{1}{2}+\frac{p_2'-p_1'}{2}) =P(W=\frac{1}{2}-\frac{p_1'+p_2'}{2})=1/4.$$ Since we assumed that $P(W=1/2)>0$, we must have $p_2'-p_1'=0.$ However, then according to  we get $P(W=1/2)=1/2$, which is a contradiction. Hence $W$ does not have the representation  and the result follows. \[l.nuq1\] For any $\nu\in \rer_{\N}^{\exch}$ there is a unique probability measure $\rho=\rho_{\nu}$ on $[0,1]$ such that $$\label{e.combsimple1} \Phi_{1/2}(\nu)=\Phi_{1/2}\left(\int_{0}^1 \nu_{(p,0,\ldots)}\,d\rho(p)\right).$$ We have that $\Phi_{1/2}(\nu)\in {\rm EP}_{\N,1/2}^{\symm}$. Now  follows immediately from  and the comment preceding Theorem \[t.mainp12\]. We have seen in the previous subsection that $\Phi_{1/2}$ is not injective. The following characterizes exactly the subset of $\rer_{\N}^{\exch}$ on which $\Phi_{1/2}$ is injective. \[p.unprop\] If $\nu\in \rer_{\N}^{\exch}$, then $\nu$ is $(\rer_{\N}^{\exch},1/2)$-unique if and only if $\nu\in \rer_{\N}^{\exch,\simple}$. If $\nu=\nu_{(p,0,\ldots)}$, then the support of $\xi_\nu$ is $\{\frac{1}{2}+\frac{p}{2},\frac{1}{2}-\frac{p}{2}\}$. The $\xi$ corresponding to every other $\nu'\in \rer_{\N}^{\exch,\pure}$ has part of its support outside of the above set.
Hence any $\nu'\in \rer_{\N}^{\exch}$ other than $\nu$ has its corresponding $\xi$ having part of its support outside of this set. It follows that $\nu$ is $(\rer_{\N}^{\exch},1/2)$-unique. For the other direction, fix $\nu\in \rer_{\N}^{\exch}\setminus \rer_{\N}^{\exch,\simple}$. By Corollary \[l.nuq1\] and the fact that $\nu$ is not simple, it suffices to consider the case when we can write $$\label{e.notpointmass} \nu=\int_{p=0}^1 \nu_{(p,0,\ldots)}d\psi(p),$$ for some probability measure $\psi$ on $[0,1]$ where $\psi\neq \delta_t$ for any $t\in [0,1]$. Then we can find constants $a_1,a_2,b_1,b_2$ such that $0\le a_1<a_2<b_1<b_2\le 1$, $\psi([a_1,a_2])>0$ and $\psi([b_1,b_2])>0$. Let $I=[a_1,a_2]$ and $J=[b_1,b_2]$ and $K=[0,1]\setminus (I\cup J)$. For any $T\subset [0,1]$ such that $\psi(T)>0$ let $\tilde{\psi}_T:=\psi_T/\psi(T)$ where $\psi_T$ stands for the restriction of $\psi$ to $T$. Without loss of generality, assume that $\psi(J)\ge \psi(I)$. Observe that $$\begin{aligned} \label{e.psieq} \lefteqn{\psi=\psi(I)\tilde{\psi}_I+\psi(J)\tilde{\psi}_J+\psi(K)\tilde{\psi}_K}\\ & & =\psi(K)\tilde{\psi}_K +(\psi(J)-\psi(I))\tilde{\psi}_J+2\psi(I)(\tilde{\psi}_I/2+\tilde{\psi}_J/2).\end{aligned}$$ Hence, $$\begin{aligned} \lefteqn{\nu=\psi(K)\int_{p\in K}\nu_{(p,0,\ldots)}d\tilde{\psi}_K(p)+(\psi(J)-\psi(I))\int_{p\in J}\nu_{(p,0,\ldots)}d\tilde{\psi}_J(p)}\\ & & +2\psi(I)\left(\frac{1}{2}\int_{p\in I}\nu_{(p,0,\ldots)}d\tilde{\psi}_I(p)+\frac{1}{2}\int_{p\in J}\nu_{(p,0,\ldots)}d\tilde{\psi}_J(p)\right).\end{aligned}$$ We now focus on the last term in the sum above. Let $$\rho=\frac{1}{2}\int_{p\in I}\nu_{(p,0,\ldots)}d\tilde{\psi}_I(p)+\frac{1}{2}\int_{p\in J}\nu_{(p,0,\ldots)}d\tilde{\psi}_J(p),$$ and observe that $\rho\in \rer_{\N}^{\exch}$ since $\tilde{\psi}_I$ is a probability measure on $I$ and $\tilde{\psi}_J$ is a probability measure on $J$. 
Since $\Phi_{1/2}$ is affine and $\psi(I)>0$, we will be done if we can find $\rho'\in \rer_{\N}^{\exch}$ such that $\rho'\neq \rho$ but $\Phi_{1/2}(\rho)=\Phi_{1/2}(\rho')$. We let $$\rho'=\int_{p_1\in J}\int_{p_2\in I} \nu_{\left((p_1+p_2)/2,(p_1-p_2)/2,0,\ldots\right)} d\tilde{\psi}_I(p_2)d\tilde{\psi}_J(p_1),$$ where we recall that $p_1>p_2$ for $p_1\in J$ and $p_2\in I$. Clearly, $\rho'\in \rer_{\N}^{\exch}$. Moreover, $\rho'\neq \rho$ since $\rho'$ assigns measure $1$ to those $\nu_{(q_1,q_2,\ldots)}\in \rer_{\N}^{\exch,\pure}$ which have $q_2\neq 0$. Since $\Phi_{1/2}$ is affine, we get $$\begin{aligned} \label{e.affinfubini} \lefteqn{\Phi_{1/2}(\rho')=\int_{p_1\in J}\int_{p_2\in I}\Phi_{1/2}( \nu_{\left((p_1+p_2)/2,(p_1-p_2)/2,0,\ldots\right)} )d\tilde{\psi}_I(p_2)d\tilde{\psi}_J(p_1)}\nonumber \\ & & \stackrel{~\eqref{e.purenonuniq05}}{=} \frac{1}{2}\int_{p_1\in J}\int_{p_2\in I}\Phi_{1/2}( \nu_{\left(p_1,0,\ldots\right)} )+\Phi_{1/2}( \nu_{\left(p_2,0,\ldots\right)} ) d\tilde{\psi}_I(p_2)d\tilde{\psi}_J(p_1)\nonumber\\ & & =\frac{1}{2}\int_{p_1\in J}\Phi_{1/2}( \nu_{\left(p_1,0,\ldots\right)} )d\tilde{\psi}_J(p_1)+\frac{1}{2}\int_{p_2\in I}\Phi_{1/2}( \nu_{\left(p_2,0,\ldots\right)} )d\tilde{\psi}_I(p_2) \\ & & =\Phi_{1/2}(\rho). \nonumber\end{aligned}$$ The case $p\neq 1/2$ {#s.NONsymmcase} -------------------- If $p=1/2$, we have seen in the previous subsection that the map $\Phi_p \,:\,\rer_{\N}^{\exch}\to {\rm CP}_{\N,p}^{\exch}$ is “highly non-injective”. In this subsection, we present evidence that, for $p\neq 0,1/2,1$, $\Phi_p$ might be injective, although we do not manage to prove such a result. We first introduce some notation. 
Let $S_0=\{\nu_{(0,\ldots)}\}$ and for $k\ge 1$, define $$S_k:=\{\nu_{\bf p}\in \rer_{\N}^{\exch,\pure}\,:\,{\bf p}=(p_1,\ldots,p_k,0,\ldots)\,\mbox{ with }\,p_k>0\},$$ and $$S_{\infty}:=\{\nu_{\bf p}\in \rer_{\N}^{\exch,\pure}\,:\,{\bf p}=(p_1,\ldots)\,\mbox{ with }\,p_i>0\,\,\,\forall i\}.$$ Then the $S_k$’s are disjoint and $\rer_{\N}^{\exch,\pure}=\cup_{0\le k\le\infty} S_k$. The following result from [@MR2538010] (see Theorem 1.3 there) tells us what needs to be verified in order to conclude that $\Phi_p$ is injective. \[t.czech\] If $\phi$ is a continuous affine map from a compact convex set $X$ to a simplex $Y$ such that $\phi(\emph{ext}(X))\subseteq \emph{ext}(Y)$ and $\phi$ is injective on $\emph{ext}(X)$, then $\phi$ is injective. It is not so difficult to show (and left to the reader) that if $x\in \rm{ext}(X)$ is $\phi$-unique (meaning $\phi(x)\neq \phi(y)$ for all $y\neq x$), then $\phi(x)\in \rm{ext}(Y)$. Hence, in our context, to show injectivity using Theorem \[t.czech\], one needs, in addition to Proposition \[p.Russ\], to show that, for $p\neq 1/2$, (1) $\cp_{\N,p}^{\exch}$ is a simplex and (2) for all $0\le k\le \infty$, all elements of $S_k$ are $(\rer_{\N}^{\exch},p)$-unique. We are not able to show (1) (but note we have seen this is true for $p= 1/2$) and in the rest of the subsection, we show (2) for $S_0$, $S_1$, $S_2$ and a subset of $S_3$. Observe first that $\Phi_p(\nu_{(0,\ldots)})=\Pi_p$, so it is easy to see that $\nu_{(0,\ldots)}$ is $(\rer_{\N}^{\exch},p)$-unique for every $p\in (0,1)$. The following three propositions cover the cases $k=1,2$ and part of $k=3$. \[p.s1unique\] Suppose that $\nu\in S_1$. Then $\nu$ is $(\rer_{\N}^{\exch},p)$-unique for every $p\in (0,1)\setminus \{1/2\}$. (This is also true for $p=1/2$ by Theorem \[p.unprop\].) [**Proof.**]{} By symmetry we assume that $p\in (1/2,1)$. Fix $\nu=\nu_{(s,0,\ldots)}\in S_1$. Suppose that $\tilde{\nu}\in \rer_{\N}^{\exch}$ is such that $\Phi_p(\nu)=\Phi_p(\tilde{\nu})$.
Recall that by Kingman’s theorem, there is a unique probability measure $\rho=\rho_{\tilde{\nu}}$ on $\rer_{\N}^{\exch,\pure}$ such that $$\label{e.unprop10} \tilde{\nu}=\int_{\nu_{\bf p}\in \rer_{\N}^{\exch,\pure}} \nu_{\bf p} d\rho(\nu_{\bf p}).$$ Hence, we will be done if we show that $\Phi_p(\nu)=\Phi_p(\tilde{\nu})$ implies that $\rho=\delta_{\nu}$. For our fixed $\nu\in S_1$, we have, using , that $$\label{e.xi3support2} \xi_{(s,0,\ldots),p}=\left\{\begin{array}{lll} y_1:=p+ s (1-p)&{\rm w.p.}&p\\ y_2:=p-sp &{\rm w.p.}&1-p \end{array}\right.$$ Observe that if $\paintbox\in S_k$, then $|{\rm supp}(\xipp)|\ge k+1$. Hence $\rho(\cup_{k\ge 2}S_k)=0$. Using , we see that if $\paintbox \in S_1$ then in order to have ${\rm supp}(\xi_{{\bf p},p})\subseteq {\rm supp}(\xi_{(s,0,\ldots)})$ we must have $\paintbox=\nu$. Hence $\rho(S_1\setminus \{\nu\})=0$. Finally, since $y_2<p<y_1$ for every $p\in (0,1)$, it follows that $\rho(S_0)=0$. Hence $\rho=\delta_{\nu}$ as claimed. \[p.uniqueprop1\] Suppose that $\nu\in S_2$. Then $\nu$ is $(\rer_{\N}^{\exch},p)$-unique for every $p\in (0,1)\setminus \{1/2\}$. (This is false for $p=1/2$ by Theorem \[p.unprop\].) [**Proof.**]{} The strategy of this proof will be the same as that of the proof of Proposition \[p.s1unique\], but more involved since there will be more cases to deal with. By symmetry we assume that $p\in (1/2,1)$. We fix $\nu=\nu_{(p_1,p_2,0,\ldots)}\in S_2$ and suppose that $\tilde{\nu}\in \rer_{\N}^{\exch}$ is such that $\Phi_p(\nu)=\Phi_p(\tilde{\nu})$. Let $\rho=\rho_{\tilde{\nu}}$ be the unique probability measure on $\rer_{\N}^{\exch,\pure}$ such that $$\label{e.unprop1} \tilde{\nu}=\int_{\nu_{\bf p}\in \rer_{\N}^{\exch,\pure}} \nu_{\bf p} d\rho(\nu_{\bf p}).$$ We will show that $\Phi_p(\nu)=\Phi_p(\tilde{\nu})$ implies that $\rho=\delta_{\nu}$. Again, we recall the random variable $\xipp$ from Lemma \[l.xipaint\], and we will proceed by looking at the support of this random variable. 
For our fixed $\nu\in S_2$, we have, using , that $$\label{e.xisupport} \xi_{(p_1,p_2,0,\ldots),p}=\left\{\begin{array}{lllll} z_1:=p+(p_1+p_2)(1-p) &\mbox{ w.p. } &p^2\\ z_2:= p+p_1(1-p)-p_2 p &\mbox{ w.p. } &p(1-p)\\ z_3:= p+p_2(1-p)-p_1 p &\mbox{ w.p. } &p(1-p)\\z_4:= p-(p_1+p_2)p &\mbox{ w.p. }&(1-p)^2 \end{array} \right.$$ In , we have ordered the elements of ${\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})$ in decreasing order, with the largest element on the first line. Now, we will look at the elements in $S_0,S_1,\ldots$ in order to find those $\paintbox$ for which $\xipp$ has its support contained in ${\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})$. The measure $\rho$ must be supported on such $\xipp$’s. [**Case 1:**]{} First we look at the single element of $S_0$, namely $\nu_{(0,\ldots)}$. We have $$\label{e.xi1support} \xi_{(0,\ldots),p}=p\,\, \mbox{ w.p. }1.$$ Since $p>1/2$ we have $z_1>p$ and $z_3,z_4<p$. Hence we see that if ${\rm supp}(\xi_{(0,\ldots),p})\subseteq {\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})$, then $p=z_2$ so that $p_1/p=p_2/(1-p)$. From this we conclude that $$\label{e.keep3} \mbox{If }\rho(\nu_{(0,\ldots)})>0,\mbox{ then }p_1/p=p_2/(1-p) \mbox{ and } p=z_2.$$ [**Case 2:** ]{} Assume that $\nu_{(s,0,\ldots)}\in S_1$ and recall that $$\label{e.xi3support} \xi_{(s,0,\ldots),p}=\left\{\begin{array}{lll} y_1:=p+ s (1-p)&\,\, {\rm w.p.}&p\\ y_2:=p-sp &\,\, {\rm w.p.}&1-p \end{array}\right.$$ Assume now that ${\rm supp}(\xi_{(s,0,\ldots),p})\subseteq {\rm supp}(\xi_{( p_1,p_2,0,\ldots),p})$. Since $z_3,z_4<p$ and $y_1>p$, we have $y_1=z_1$ or $y_1=z_2$. The former case implies that $s=p_1+p_2$. If we instead assume that $y_1=z_2$ then we must also have $y_2=z_3$ or $y_2=z_4$. If $y_2=z_4$, then $s=p_1+p_2$. On the other hand, $y_1=z_2$ and $y_2=z_3$ imply after a short calculation that $p=1/2$, which is a contradiction. 
Hence we can conclude $$\label{e.xi6support} \rho(S_1\setminus \{\nu_{(p_1+p_2,0,\ldots)}\})=0.$$ Also observe that from the above it follows that $$\label{e.keep1} \mbox{$s=p_1+p_2$ implies that $y_1=z_1$ and $y_2=z_4$.}$$ [**Case 3:**]{} Assume that $\nu_{(s_1,s_2,0,\ldots)}\in S_2$. We consider four subcases. [*Case 3(i):*]{} Suppose that $s_1\neq s_2$ and $p_1\neq p_2$. Then ${\rm supp}(\xi_{(s_1,s_2,0,\ldots),p})\subseteq {\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})$ implies that, since both supports have four elements, $${\rm supp}(\xi_{(s_1,s_2,0,\ldots),p})={\rm supp}(\xi_{(p_1,p_2,0,\ldots),p}).$$ From this it is easy to conclude (using ) that $s_1=p_1$ and $s_2=p_2$ so that $\nu_{(s_1,s_2,0,\ldots)}= \nu_{(p_1,p_2,0,\ldots)}$. [*Case 3(ii)*]{}: Suppose that $s_1=s_2$ and $p_1=p_2$. Then, arguing similarly as in case 3(i), ${\rm supp}(\xi_{(s_1,s_2,0,\ldots),p})\subseteq {\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})$ implies that $\nu_{(s_1,s_2,0,\ldots)}= \nu_{(p_1,p_2,0,\ldots)}$. [*Case 3(iii)*]{}: Suppose that $s_1\neq s_2$ and $p_1=p_2$. Then $|{\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})|=3$ while $|{\rm supp}(\xi_{(s_1,s_2,0,\ldots),p})|=4$, and so ${\rm supp}(\xi_{(s_1,s_2,0,\ldots),p})$ cannot be a subset of ${\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})$. [*Case 3(iv)*]{}: Suppose that $s_1=s_2$ and $p_1\neq p_2$. Then using  we see that $$\label{e.xi2support} \xi_{(s_1,s_2,0,\ldots),p}=\left\{\begin{array}{lll} q_1:=p+2 s_1(1-p)&{\rm w.p.}&p^2\\ q_2:=p+s_1(1-2p)&{\rm w.p.}&2p(1-p)\\q_3:= p-2 s_1 p &{\rm w.p.}&(1-p)^2 \end{array}\right.$$ Assume that ${\rm supp}(\xi_{(s_1,s_2,0,\ldots),p})\subseteq {\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})$. Since $q_1=z_1$ or $q_3=z_4$, we have $2s_1=p_1+p_2$. Using  and  we see that $(q_1-q_2,q_2-q_3)=(s_1,s_1)$ and $(z_1-z_2,z_2-z_3,z_3-z_4)=(p_2,p_1-p_2,p_2)$. Therefore, since $\{q_1,q_2\}=\{z_1,z_2\}$ or $\{q_2,q_3\}=\{z_3,z_4\}$, we have $s_1=p_2$, contradicting $2s_1=p_1+p_2$ since $p_1\neq p_2$. Hence, this case can not occur either. 
Putting cases $3(i)-3(iv)$ together we can now conclude that $$\label{e.xi7support} \rho(S_2\setminus \{\nu_{(p_1,p_2,0,\ldots)}\})=0.$$ [**Case 4:** ]{}Assume now that $\nu_{(t_1,t_2,t_3,0\ldots)}\in S_3$. Unless $t_1=t_2=t_3$ it is straightforward to see that ${\rm supp}(\xi_{(t_1,t_2,t_3,0,\ldots),p})$ has at least $5$ elements. Hence we can conclude $$\rho(S_3\setminus\{\nu_{(t,t,t,0,\ldots)}\,:\,t\in (0,1/3]\})=0.$$ So assume now that $t_1=t_2=t_3=t$ for some $t\in (0,1/3]$. We get that, again using  that $$\label{e.xi4support} \xi_{(t,t,t,0,\ldots),p}=\left\{\begin{array}{lll} x_1:=p+t(3-3p)&{\rm w.p.}&p^3\\ x_2:=p+t(2-3p) &{\rm w.p.}&3p^2(1-p)\\ x_3:=p+t(1-3p)&{\rm w.p.}&3p(1-p)^2\\ x_4:=p-3tp&{\rm w.p.}&(1-p)^3 \end{array}\right.$$ Clearly, if $p_1=p_2$, then ${\rm supp}(\xi_{(t,t,t,0,\ldots),p})$ is not a subset of ${\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})$, so assume that $p_1\neq p_2$. Then in order to have ${\rm supp}(\xi_{(t,t,t,0,\ldots),p})\subseteq {\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})$ we must have ${\rm supp}(\xi_{(t,t,t,0,\ldots),p})= {\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})$. This implies that $x_1=z_1$ so that $t=(p_1+p_2)/3$. So we can conclude that $$\label{e.xi5support} \rho(S_3\setminus \{\nu_{((p_1+p_2)/3,(p_1+p_2)/3,(p_1+p_2)/3,0,\ldots)}\})=0.$$ So we must have $$\label{e.keep2} \mbox{$t=(p_1+p_2)/3$ and $(x_1,x_2,x_3,x_4)=(z_1,z_2,z_3,z_4)$.}$$ [**Case 5:**]{} Finally we show that we do not need to consider $S_k$ for $k\ge 4$. Observe that if $\paintbox\in S_k$, then it is straightforward to check that $|{\rm supp}(\xipp)|\ge k+1$. 
Since $|{\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})|\le 4$, we can conclude that $$\rho(S_k)=0\mbox{ for every }4\le k\le \infty.$$ From ,  ,   and   above, we see that to show that $\rho=\delta_{\nu}$ and thereby finish the proof it suffices to show that we cannot find $\alpha,\beta\in [0,1]$ with $\alpha+\beta\le 1$ such that $$\label{e.alfabetaeq} \Phi_p(\nu_{(p_1,p_2,0,\ldots)})=\alpha \Phi_p(\nu_{(0,\ldots)})+\beta\Phi_p(\nu_{(p_1+p_2,0,\ldots)})+(1-\alpha-\beta)\Phi_p(\nu_{(\frac{p_1+p_2}{3},\frac{p_1+p_2}{3},\frac{p_1+p_2}{3},0,\ldots)}).$$ Comparing  with ,  and  we see that in order for  to hold, it is necessary that (keeping ,  and  in mind) $$\label{e.finishingpunch} \begin{array}{rl}p^2=&\beta p+(1-\alpha-\beta)p^3 \\ p(1-p) =&\alpha {\bf 1}\{\frac{p_1}{p}=\frac{p_2}{(1-p)}\} +(1-\alpha-\beta)3 p^2(1-p)\\ p(1-p)=& (1-\alpha-\beta)3 p(1-p)^2 \\ (1-p)^2=&\beta(1-p)+(1-\alpha-\beta)(1-p)^3 \end{array}$$ Since $p\in (0,1)$, the third equation gives that $1-\alpha-\beta\neq 0$. Therefore, since $p\in(1/2,1)$, the right hand side of the second equation of  is strictly larger than the right hand side of the third equation. Hence, the linear system in  does not have any solution for $\alpha,\beta\in [0,1]$ with $\alpha+\beta\le 1$ when $p\in (1/2,1)$. Let $t\in (0,1/3]$. Then $\nu_{(t,t,t,0,\ldots)}\in S_3$ is $(\rer_{\N}^{\exch},p)$-unique for every $p\in (0,1)\setminus\{1/2\}$. (This is false for $p=1/2$ by Theorem \[p.unprop\].) [**Proof.**]{} The strategy of this proof is the same as in the proof of Proposition \[p.uniqueprop1\], so we will be somewhat briefer. By symmetry we can assume that $p\in (1/2,1)$. Fix $t\in (0,1/3]$ and let $\nu:=\nu_{(t,t,t,0,\ldots)}$. Assume that $\tilde{\nu}\in \rer_{\N}^{\exch}$ is such that $\Phi_p(\nu)=\Phi_p(\tilde{\nu})$. 
Let $\rho=\rho_{\tilde{\nu}}$ be the unique probability measure on $\rer_{\N}^{\exch,\pure}$ such that $$\tilde{\nu}=\int_{\nu_{\bf p}\in \rer_{\N}^{\exch,\pure}}\nu_{\bf p}d\rho(\nu_{\bf p}).$$ As above, we will show that $\rho=\delta_{\nu}$. [**Case 1:** ]{}First we consider $S_0=\{\nu_{(0,\ldots)}\}$. Recall that ${\rm supp}(\xi_{(0,\ldots),p})=\{p\}$. Using , we see that only if $p=2/3$ can we have that $p\in {\rm supp}(\xi_{(t,t,t,0,\ldots),p})$. Hence, $$\label{e.anothereq1} \mbox{ If }\rho(\nu_{(0,\ldots)})>0,\mbox{ then }p=2/3.$$ [**Case 2:** ]{}Now suppose that $\nu_{(s,0,\ldots)}\in S_1$. Recall that ${\rm supp}(\xi_{(s,0,\ldots),p})=\{y_1,y_2\}$ from  and ${\rm supp}(\xi_{(t,t,t,0,\ldots),p})=\{x_1,x_2,x_3,x_4\}$ from . We have that $y_1>p$, $y_2<p$, $x_1>p$ and (since $p>1/2$), $x_3<p$. Hence if ${\rm supp}(\xi_{(s,0,\ldots),p})\subseteq {\rm supp}(\xi_{(t,t,t,0,\ldots),p})$ it must be the case that $y_1=x_1$ or $y_1=x_2$. First, if $y_1=x_1$, we get that $s=3t$. If $y_1=x_2$ and $y_2=x_3$ then $s=t(2-3p)/(1-p)$ and $s=-t(1-3p)/p$, and these two equations give that $p=1/2$, which is a contradiction. Finally, if $y_1=x_2$ and $y_2=x_4$ then $s=t(2-3p)/(1-p)$ and $s=3t$, and it is easy to see that these two equations can not hold at the same time for any $p$. Hence, we can conclude that $$\label{e.anothereq2} \rho(S_1\setminus \{\nu_{(3t,0,\ldots)}\})=0.$$ Also observe that $$\label{e.anothereq3} \mbox{If $s=3t$ then $y_1=x_1$ and $y_2=x_4$.}$$ [**Case 3:** ]{}Now assume that $\nu_{(p_1,p_2,0,\ldots)}\in S_2$. Recall from  that ${\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})=\{z_1,z_2,z_3,z_4\}$, where the four elements are distinct when $p_1\neq p_2$, and $z_1>z_2=z_3>z_4$ if $p_1=p_2$. Also observe that we have $(x_1-x_2,x_2-x_3,x_3-x_4)=(t,t,t)$ and, as before, $(z_1-z_2,z_2-z_3,z_3-z_4)=(p_2,p_1-p_2,p_2)$. [*Case 3(i)*]{}: Assume that $p_1\neq p_2$. 
From the above, we see that in order to have ${\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})\subseteq{\rm supp}(\xi_{(t,t,t,0,\ldots),p})$ we must have $t=p_2=p_1-p_2$, which implies $p_1=2p_2=2 t$. [*Case 3(ii)*]{}: Assume that $p_1=p_2$. From the above, it follows that in order to have ${\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})\subseteq {\rm supp}(\xi_{(t,t,t,0,\ldots),p})$ we must have $t=p_2$. However, in this case $|{\rm supp}(\xi_{(p_1,p_2,0,\ldots),p})|=3$ so we must also have $z_1=x_1$ or $z_4=x_4$. Each of these two cases imply that $t=(p_1+p_2)/3$, which contradicts $t=p_2$ since $p_1=p_2$. From [*Case 3(i)*]{} and [*Case 3(ii)*]{} we conclude that $$\label{e.anothereq4} \rho(S_2\setminus \{\nu_{(2t,t,0,\ldots)}\})=0.$$ Also observe that $$\label{e.anothereq5} \mbox{If $p_1=2t$ and $p_2=t$ then $(x_1,x_2,x_3,x_4)=(z_1,z_2,z_3,z_4)$.}$$ [**Case (4):**]{} Now assume that $\nu_{(t_1,t_2,t_3,0,\ldots)}\in S_3$. If $(t_1,t_2,t_3)\neq (t',t',t')$ for some $t'$, then $|{\rm supp}(\xi_{(t_1,t_2,t_3,0,\ldots),p})|>|{\rm supp}(\xi_{(t,t,t,0,\ldots),p})|$. Next, if $t'\neq t$ and $t'\in (0,1/3]$, then by looking at  we see that ${\rm supp}(\xi_{(t',t',t',0,\ldots),p})$ cannot be a subset of ${\rm supp}(\xi_{(t,t,t,0,\ldots),p})$. It follows that $$\label{e.anothereq6} \rho(S_3\setminus\{\nu_{(t,t,t,0,\ldots)}\})=0.$$ We now finish in the same way as in the proof of Proposition \[p.uniqueprop1\]. 
From ,  ,   and  above, we see that to show that $\rho=\delta_{\nu}$ and thereby finish the proof it suffices to show that we cannot find $\alpha,\beta\in [0,1]$ with $\alpha+\beta\le 1$ such that $$\label{e.alfabetaeq2} \Phi_p(\nu_{(t,t,t,0,\ldots)})=\alpha \Phi_p(\nu_{(0,\ldots)})+\beta\Phi_p(\nu_{(3t,0,\ldots)})+(1-\alpha-\beta)\Phi_p(\nu_{(2t,t,0,\ldots)})$$ Comparing  with ,  and  we see that in order for  to hold, it is necessary that (keeping ,   and  in mind) $$\label{e.finishingpunch2} \begin{array}{rl}p^3=&\beta p+(1-\alpha-\beta)p^2 \\ 3p^2(1-p) =&\alpha {\bf 1}\{p=2/3\} +(1-\alpha-\beta) p(1-p)\\ 3p(1-p)^2=& (1-\alpha-\beta)p(1-p) \\ (1-p)^3=&\beta(1-p)+(1-\alpha-\beta)(1-p)^2\end{array}$$ If $p\neq 2/3$, then the second and third equations in  imply that $p=1/2$, finishing the proof in this case. If $p=2/3$, then the third equation implies that $\alpha=\beta=0$, in which case the first equation does not hold, completing this case. Gaussian and symmetric stable exchangeable processes {#s.gaussian} ---------------------------------------------------- In this section, we first consider the exchangeable Gaussian threshold process, and then the more general case of exchangeable symmetric stable threshold processes. Suppose that $X$ is an exchangeable Gaussian process with $N(0,1)$-marginals and pairwise correlations $r\in [0,1]$. Let $\Xi$ be the random distribution used in the representation of $X$ from Subsection \[s.definetti\]. Observe that in the case $r=0$ we have $\Xi$ is $N(0,1)$ a.s. and in the case $r=1$ we have $\Xi=\delta_x$ where $x$ has distribution $N(0,1)$. For general $r\in [0,1]$, $\Xi$ is $N(r^{1/2} W, (1-r))$ where $W$ is $N(0,1)$. We can equivalently obtain $X$ as follows: Let $W,U_1,U_2,\ldots$ be i.i.d. $N(0,1)$ and let $X_i:=r^{1/2} W+(1-r)^{1/2} U_i$. Now let $Y^h$ be the $h$-threshold process obtained from $X$ as described in Subsection \[s.definetti\], where $r$ is suppressed in the notation. 
A straightforward calculation left to the reader shows that (recall ) $$\label{e.xirepr} \xi_{Y^h}=\Xi([h,\infty))=\int_{\frac{h-r^{1/2}W}{(1-r)^{1/2}}}^{\infty} \frac{e^{-t^2/2}}{\sqrt{2\pi}}\,dt.$$ In particular, if $h=0$ and $r=1/2$, then we see that $$\xi_{Y^0}=\Xi([0,\infty))=1-\Phi(-W),$$ where $\Phi$ is the cumulative distribution function of the $N(0,1)$-distribution. Now $\Phi(-W)$ is uniformly distributed on $[0,1]$, and hence so is $\Xi([0,\infty))$. By symmetry and Theorem \[t.mainp12\], we can conclude that for $h=0$ and any $r$, $Y^0$ is a color process. Observe that if ${\bf p}=(p_1,\ldots)$ where $p_i=1/2^i$ for $i\ge 1$, then the random variable $\xi_{{\bf p},1/2}$ in  is uniformly distributed on $[0,1]$. It follows that when $r=1/2$, $Y^0$ is the color process associated to the paintbox $(1/2,1/4,1/8,\ldots)$. Now we move on to the symmetric stable case. Recall that a stable distribution is characterized by four parameters: the location parameter $\mu\in \R$, the skewness parameter $\beta\in [-1,1]$, the scale parameter $c\in (0,\infty)$ and the stability parameter $\alpha\in (0,2]$. Here we consider only the special case when $\mu=0$, $c=1$ and $\beta=0$. In this case, the characteristic function of the stable distribution with stability parameter $\alpha$ is given by $e^{-|t|^{\alpha}}$, $t\in \R$. We denote this distribution by ${\mathcal S}(\alpha)$. If $\alpha=2$, then we (essentially) get the $N(0,1)$ distribution, the case of which we already covered above. We obtain an exchangeable process where the marginals are ${\mathcal S}(\alpha)$ as follows. First recall that if $|a|^{\alpha}+|b|^{\alpha}=1$ and $V_1,V_2\in {\mathcal S}(\alpha)$ are independent, then $aV_1+ b V_2\in {\mathcal S}(\alpha)$. Let $W,U_1,U_2,\ldots\in {\mathcal S}(\alpha)$ be i.i.d. and fix $a\in (0,1)$. Let $b=(1-a^{\alpha})^{1/\alpha}$ and let $X=(X(i))_{i\in \N}$ where $X(i)=a W+b U_i$. Then $X$ is clearly exchangeable with marginals given by ${\mathcal S}(\alpha)$.
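The closure property, $aV_1+bV_2\in{\mathcal S}(\alpha)$ whenever $|a|^{\alpha}+|b|^{\alpha}=1$, can be checked against the characteristic function $e^{-|t|^{\alpha}}$. A Monte Carlo sketch (not from the paper) for the Cauchy case $\alpha=1$ with $a=b=1/2$, where sampling is elementary:

```python
import math
import random

random.seed(0)
n = 200_000
# standard Cauchy = S(1): characteristic function exp(-|t|)
cauchy = lambda: math.tan(math.pi * (random.random() - 0.5))
V = [(cauchy(), cauchy()) for _ in range(n)]

for t in (0.5, 1.0, 2.0):
    # E[cos(tX)] estimates the (real, even) characteristic function
    cf_single = sum(math.cos(t * v1) for v1, _ in V) / n
    cf_mix = sum(math.cos(t * (v1 + v2) / 2) for v1, v2 in V) / n
    assert abs(cf_single - math.exp(-t)) < 0.02   # V_1 is S(1)
    assert abs(cf_mix - math.exp(-t)) < 0.02      # and so is (V_1 + V_2)/2
```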
Let $Y^h$ be the $h$-threshold process obtained from $X$. This depends on $\alpha$ and $a$ but this is suppressed in the notation. In the same way as in the Gaussian case, one gets that $$\xi_{Y^h}=1-F\left(\frac{h-aW}{b}\right)$$ where $F$ is the distribution function of $W$. We see that in the special case of $h=0$ and $a=b=(1/2)^{1/\alpha}$ we have that $\xi_{Y^0}$ is uniform on $[0,1]$. By symmetry and Theorem \[t.mainp12\], we can conclude that for $h=0$ and any $\alpha$ and $a$, $Y^0$ is a color process. As in the Gaussian case, we have that when $a=(1/2)^{1/\alpha}$, $Y^0$ is the color process associated to the paintbox $(1/2,1/4,1/8,\ldots)$. In particular, the $0$-threshold Gaussian process for $r=1/2$ is the same process as the $0$-threshold stable process when $a=(1/2)^{1/\alpha}$. Connected random equivalence relations on ${\mathbb Z}$ {#s.conn} ======================================================= In this section, we focus on the class of connected RERs on ${\mathbb Z}$, thought of as a graph with nearest neighbor edges. Therefore, in this case, all of the clusters are of the form $\phi=\{m,m+1,\ldots,n\}$ with $-\infty\le m\le n\le \infty$. For $m\in \Z$, the edge between $m$ and $m+1$ will be denoted by $e_{m,m+1}$. The next definition gives a way of creating an element of $\rer_{\Z}^{\conn}$ by using a process on the edges of $\Z$. \[d.rergen\] Let $\{Y(e_{n,n+1})\}_{n\in {\mathbb Z}}$ be any process on the edges of $\Z$ with state space $\{-1,1\}$. Define $\pi_Y$ to be the random equivalence relation on ${\mathbb Z}$ obtained as follows: $m<n\in \Z$ are said to be in the same equivalence class of $\pi_Y$ if and only if $Y(e_{m,m+1})=\ldots=Y(e_{n-1,n})=1$. Observe that $Y$ and $\pi_Y$ can be recovered from each other. It follows that $\pi_Y$ will inherit any property which $Y$ has. We will often say that $\pi_Y$ is induced by $Y$. Let $\{Y(e_{n,n+1})\}_{n\in {\mathbb Z}}$ be any process with state space $\{-1,1\}$.
We denote by $X^{Y,p}$ the color process obtained from the RER induced by $Y$ with parameter $p$. In the next proposition we describe exactly which Markov chains with state space $\{0,1\}$ are color processes. In some sense, most of this proposition is well known. \[t.markovcolor\] Let $Z=(Z(n))_{n\in {\mathbb Z}}$ be a Markov chain with state space $\{0,1\}$ and transition probabilities $p_{0,0},p_{0,1},p_{1,0}$ and $p_{1,1}$. The following statements are equivalent: 1. \[i.it1\] For all $m,n\in {\mathbb Z}$, $Cov(Z(m),Z(n))\ge 0$ 2. \[i.it2\] $p_{0,1}\le p_{1,1}$ 3. \[i.it3\] $(Z(n))_{n\in {\mathbb Z}}$ is a color process 4. \[i.it4\] $(Z(n))_{n\in {\mathbb Z}}$ satisfies the FKG lattice condition 5. \[i.it5\] $(Z(n))_{n\in {\mathbb Z}}$ satisfies positive associations [**Proof.** ]{} $~\ref{i.it1}\Longrightarrow ~\ref{i.it2}:$ This is completely straightforward.\ $~\ref{i.it2}\Longrightarrow ~\ref{i.it3}:$ Assume that $p_{0,1}\le p_{1,1}$. Let $\{Y(e_{n,n+1})\}_{n\in {\mathbb Z}}$ be an i.i.d. process with $$P(Y(e_{n,n+1})=1)=p_{1,1}-p_{0,1}=1-P(Y(e_{n,n+1})=-1).$$ We now claim that the color process $X^{Y,p}$ where $p=p_{0,1}/(p_{0,1}+p_{1,0})$ has the same law as $Z$. First we show that $X^{Y,p}$ has the Markov property. Let $s:=P(Y(e_{n,n+1})=1)$. Fix $n\ge 1$ and $i_0,\ldots,i_n\in\{0,1\}$. We have $$P(X^{Y,p}(0)=i_0| X^{Y,p}(1)=i_1,\ldots,X^{Y,p}(n)=i_n)=\frac{P(X^{Y,p}(0)=i_0,\ldots,X^{Y,p}(n)=i_n)}{P(X^{Y,p}(1)=i_1,\ldots,X^{Y,p}(n)=i_n)}.$$ We now observe that the events $\{X^{Y,p}(0)=i_0\}$ and $\{X^{Y,p}(1)=i_1,\ldots,X^{Y,p}(n)=i_n\}$ are conditionally independent given $\{Y(e_{0,1})=-1\}$. This follows from the fact that $\{Y(e_{0,1})=-1\}$ implies that $0$ and $1$ are in different clusters of $\pi_Y$.
Hence $$\begin{aligned} \lefteqn{P(X^{Y,p}(0)=i_0,\ldots,X^{Y,p}(n)=i_n|Y(e_{0,1})=-1)}\\ & & =P(X^{Y,p}(0)=i_0 | Y(e_{0,1})=-1) P(X^{Y,p}(1)=i_1,\ldots,X^{Y,p}(n)=i_n|Y(e_{0,1})=-1)\\ & & = P(X^{Y,p}(0)=i_0) P(X^{Y,p}(1)=i_1,\ldots,X^{Y,p}(n)=i_n),\end{aligned}$$ where the last equality uses the fact that $Y$ is an i.i.d. process. Observe that $\{Y(e_{0,1})=1\}$ implies $X^{Y,p}(0)=X^{Y,p}(1)$. Again using that $Y$ is i.i.d., we get $$\begin{aligned} \lefteqn{P(X^{Y,p}(0)=i_0,\ldots,X^{Y,p}(n)=i_n|Y(e_{0,1})=1)}\\ & & ={\bf 1}\{i_0=i_1\}P(X^{Y,p}(1)=i_1,\ldots,X^{Y,p}(n)=i_n).\end{aligned}$$ Hence, $$\begin{aligned} \lefteqn{P(X^{Y,p}(0)=i_0,\ldots,X^{Y,p}(n)=i_n)}\\ & & =(s {\bf 1}\{i_0=i_1\}+(1-s) P(X^{Y,p}(0)=i_0))P(X^{Y,p}(1)=i_1,\ldots,X^{Y,p}(n)=i_n),\end{aligned}$$ which implies $$\begin{aligned} \lefteqn{P(X^{Y,p}(0)=i_0| X^{Y,p}(1)=i_1,\ldots,X^{Y,p}(n)=i_n)}\\ & & =s {\bf 1}\{i_0=i_1\}+(1-s) P(X^{Y,p}(0)=i_0),\end{aligned}$$ which does not depend on $i_2,\ldots,i_n$. Hence the Markov property of $X^{Y,p}$ follows. It remains to show that the transition probabilities coincide with those of $Z$.
We have that $$\begin{aligned} \lefteqn{P( X^{Y,p}(n)=1| X^{Y,p}(n-1)=1)}\\ & & = P(Y(e_{n-1,n})=1)+\frac{p_{0,1}}{p_{0,1}+p_{1,0}}P(Y(e_{n-1,n})=-1)\\ & & =p_{1,1}-p_{0,1}+\frac{p_{0,1}}{p_{0,1}+p_{1,0}} (1-p_{1,1}+p_{0,1})\\ & & = p_{1,1}-p_{0,1}+\frac{p_{0,1}}{p_{0,1}+p_{1,0}} (p_{1,0}+p_{0,1})\\ & & = p_{1,1},\end{aligned}$$ and $$\begin{aligned} \lefteqn{P( X^{Y,p}(n)=0| X^{Y,p}(n-1)=0)}\\ & & = P(Y(e_{n-1,n})=1)+\frac{p_{1,0}}{p_{0,1}+p_{1,0}}P(Y(e_{n-1,n})=-1)\\ & & =p_{1,1}-p_{0,1}+\frac{p_{1,0}}{p_{0,1}+p_{1,0}} (1-p_{1,1}+p_{0,1})\\ & & = p_{1,1}-p_{0,1}+\frac{p_{1,0}}{p_{0,1}+p_{1,0}} (p_{1,0}+p_{0,1})\\ & & = p_{1,1}-p_{0,1}+p_{1,0}\\ & & =1-p_{0,1}\\ & & =p_{0,0}.\end{aligned}$$ From the above, it follows that $Z\stackrel{{\mathcal D}}{=}X^{Y,p},$ and so $Z$ is a color process.\ $~\ref{i.it3}\Longrightarrow ~\ref{i.it1}:$ This follows from the fact that any color process has non-negative pairwise correlations.\ $~\ref{i.it4}\Longrightarrow ~\ref{i.it5}:$ This implication was already mentioned in the paragraph following Definition \[d.FKGL\].\ $~\ref{i.it2}\Longrightarrow ~\ref{i.it4}:$ This is a standard but tedious calculation which we omit.\ $~\ref{i.it5}\Longrightarrow ~\ref{i.it1}:$ This implication is trivial. The Ising model on $\Z$ will play an important role in this section from now on. However, we will define the Ising model on the edges of $\Z$ since we will use it to generate an RER as in Definition \[d.rergen\]. Since we now also want to allow a varying external field, we restate the definition. For $m<n$ let $E_{m,n}=\{e_{m,m+1},\ldots,e_{n-1,n}\}$. Let $J\ge 0$ and $h=(h_e)_{e\in E_{m,n}}$ be a sequence of real numbers. Let $\mu_{J,h}^{m,n}$ denote the Ising model with nearest neighbor interaction $J$ and edge-varying external field $h$ on $E_{m,n}$, i.e.
for any $x\in \{-1,1\}^{E_{m,n}}$, $$\mu_{J,h}^{m,n}(x)=\frac{\exp(J\sum_{i=m}^{n-2} x(e_{i,i+1})x(e_{i+1,i+2})+\sum_{i=m}^{n-1} h(e_{i,i+1}) x(e_{i,i+1}))}{Z_{m,n}}.$$ Here $Z_{m,n}=Z_{m,n}(J,h)$ is a normalizing constant making $\mu_{J,h}^{m,n}$ into a probability measure. The Ising model on the edges of all of $\Z$ is defined as the distributional limit $$\mu_{J,h}^{\Z}:=\lim_{n\to \infty,\, m\to -\infty} \mu_{J,h}^{m,n},$$ which is well known to exist. We will denote by $Y_{J,h}^{m,n}$ ($Y_{J,h}^{\Z}$) a random object with law $\mu_{J,h}^{m,n}$ ($\mu_{J,h}^{\Z}$). In the proof of Proposition \[t.markovcolor\] we saw that discrete time two-state Markov chains with non-negative pairwise correlations can be viewed as color processes, where the underlying RER is generated by an i.i.d. process. Theorem \[t.notnmarkov\] below shows that if, instead of an i.i.d. process, we use a (nontrivial) Ising model to generate an RER, then the resulting process is not $n$-step Markov for any $n\ge 1$. First, we give some more preliminary results. The first proposition might be of independent interest. \[p.isingmod\] Let $J\ge 0$ and let the (possibly edge dependent) external field $h$ be arbitrary. Then for any $0\le k \le l\le n$ and any $p$, $${\mathcal D}(Y_{J,h}^{0,n}\, | \,X^{Y,p}(k)=1,\ldots, X^{Y,p}(l)=1)=\mu_{J,\tilde{h}_{k,l}}^{0,n}$$ where $\tilde{h}_{k,l}(e_{i,i+1})=h(e_{i,i+1})-(\log{p})/2$ for $i=k,\ldots,l-1$ and $\tilde{h}_{k,l}(e_{i,i+1})=h(e_{i,i+1})$ otherwise and where we write $Y$ for $Y_{J,h}^{0,n}$. Fix $0\le k\le l\le n$ and $y=(y(e_{0,1}),\ldots,y(e_{n-1,n}))\in \{-1,1\}^{\{e_{0,1},\ldots,e_{n-1,n}\}}$.
Then $$\begin{aligned} \label{e.bayes} \lefteqn{{\mathbf P}(Y(e_{0,1})=y(e_{0,1}),\ldots,Y(e_{n-1,n})=y(e_{n-1,n})\, | \,X^{Y,p}(k)=1,\ldots, X^{Y,p}(l)=1)}\nonumber\\ & & = \frac{ {\mathbf P}(X^{Y,p}(k)=1,\ldots, X^{Y,p}(l)=1\,|\,Y(e_{0,1})=y(e_{0,1}),\ldots,Y(e_{n-1,n})=y(e_{n-1,n}))}{ {\mathbf P}( X^{Y,p}(k)=1,\ldots, X^{Y,p}(l)=1 ) }\nonumber \\ & &\mbox{ } \times {\mathbf P}(Y(e_{0,1})=y(e_{0,1}),\ldots,Y(e_{n-1,n})=y(e_{n-1,n})) .\end{aligned}$$ Let $M(k,l)=M(k,l,Y)$ be the number of equivalence classes in $\pi_{Y}$ intersecting $\{k,\ldots,l\}$. For $s=-1,1$ let $$N_{s}(k,l)=N_{s}(k,l,Y)=|\{i\in \{k,\ldots,l-1\}\,:\,Y(e_{i,i+1})=s\}|.$$ We observe the identities $$\label{e.classes1} M(k,l)=1+N_{-1}(k,l),$$ and $$\begin{aligned} \label{e.classes2} (l-k)-2 N_{-1}(k,l) =N_{1}(k,l)- N_{-1}(k,l)=\sum_{i=k}^{l-1} Y(e_{i,i+1}).\end{aligned}$$ In what follows, the constant implicit in the proportionality sign $\propto$ is allowed to depend only on $J,h,k,l,n$ and $p$. We now get that $$\begin{aligned} \label{e.bayes2} \lefteqn{{\mathbf P}(X^{Y,p}(k)=1,\ldots, X^{Y,p}(l)=1\,|\,Y(e_{0,1})=y(e_{0,1}),\ldots,Y(e_{n-1,n})=y(e_{n-1,n}))}\nonumber\\ & & = p^{M(k,l)}\stackrel{~\eqref{e.classes1}}\propto p^{N_{-1}(k,l)}=\left(\frac{1}{p^{1/2}}\right)^{-2 N_{-1}(k,l)}\propto \left(\frac{1}{p^{1/2}}\right)^{(l-k)-2 N_{-1}(k,l)}\nonumber \\ & & \stackrel{~\eqref{e.classes2}}{=} \exp\left\{-\frac{\log p}{2} \sum_{i=k}^{l-1} y(e_{i,i+1})\right\}.\end{aligned}$$ In addition we have $$\begin{aligned} \label{e.isingnormal} \lefteqn{{\mathbf P}(Y(e_{0,1})=y(e_{0,1}),\ldots,Y(e_{n-1,n})=y(e_{n-1,n}))}\nonumber \\ & & \propto \exp\left\{ J\sum_{i=0}^{n-2}y(e_{i,i+1}) y(e_{i+1,i+2})+\sum_{i=0}^{n-1} h(e_{i,i+1}) y(e_{i,i+1})\right\}\end{aligned}$$ Combining ,  and , we get $$\begin{aligned} \label{e.final1} \lefteqn{{\mathbf P}(Y(e_{0,1})=y(e_{0,1}),\ldots,Y(e_{n-1,n})=y(e_{n-1,n})\, | \,X^{Y,p}(k)=1,\ldots, X^{Y,p}(l)=1)}\\ & & \propto \exp\left\{ J \sum_{i=0}^{n-2}y(e_{i,i+1})
y(e_{i+1,i+2})+\sum_{i=0}^{k-1} h(e_{i,i+1}) y(e_{i,i+1}) \right. \\ & & \left. +\sum_{i=k}^{l-1} \left(h(e_{i,i+1})-\frac{\log p}{2} \right) y(e_{i,i+1}) + \sum_{i=l}^{n-1} h(e_{i,i+1}) y(e_{i,i+1}) \right\},\end{aligned}$$ finishing the proof of the proposition. \[l.percus\] Let $J>0$ and let the (possibly edge dependent) external field $h$ be arbitrary. Then for any $n\in {\mathbb Z}$ and $k \le l \in \Z$, $$\begin{aligned} \label{e.strictineq} \lefteqn{{\mathbf E}(Y_{J,h}^{\Z}(e_{n,n+1})\,| \, X^{Y_{J,h}^{\Z},p}(k)=1,\ldots, X^{Y_{J,h}^{\Z},p}(l)=1)}\nonumber \\ & & >{\mathbf E}(Y_{J,h}^{\Z}(e_{n,n+1}) \,| \, X^{Y_{J,h}^{\Z},p}(k)=1,\ldots, X^{Y_{J,h}^{\Z},p}(l-1)=1).\end{aligned}$$ If $k=l$, then there is no conditioning on the right hand side of the above. In the proof, we will work on the interval $[-N,N]$ and keep $J$ fixed, so we write $Y^{N}_{h}=Y^{-N,N}_{J,h}$, and in addition we write $Y_{h}=Y^{\Z}_{J,h}$. Without loss of generality, we can choose $n=0$ and so we will be done if we show that for any fixed $k\le l$, $$\begin{aligned} \label{e.enoughts} \lefteqn{\lim_{N\to \infty } {\mathbf E}(Y^{N}_{h}(e_{0,1})\,|\,X^{Y^{N}_{h},p}(k)=1,\ldots, X^{Y^{N}_{h},p}(l)=1)}\nonumber \\ & & > \lim_{N\to \infty } {\mathbf E}(Y^{N}_{h}(e_{0,1})\,|\,X^{Y^{N}_{h},p}(k)=1,\ldots, X^{Y^{N}_{h},p}(l-1)=1),\end{aligned}$$ since the LHS and RHS in  coincide with the LHS and RHS of  respectively. If $N>\max(|k|,|l|)$, then we know from Proposition \[p.isingmod\] that $${\mathcal D}(Y^{N}_{h}\, |\, X^{Y^{N}_{h},p}(k)=1,\ldots, X^{Y^{N}_{h},p}(l-1)=1) = {\mathcal D}(Y^{N}_{\tilde{h}_{k,l-1}}),$$ and $${\mathcal D}(Y^{N}_{h}\, |\, X^{Y^{N}_{h},p}(k)=1,\ldots, X^{Y^{N}_{h},p}(l)=1) = {\mathcal D}(Y^{N}_{\tilde{h}_{k,l}}),$$ where $\tilde{h}_{k,l}$ is given in the statement of Proposition \[p.isingmod\]. 
It is well known and easy to prove (see [@ELLIS] p.148) that for all $i$ and $j$, $$\frac{\partial {\mathbf E}[Y^{N}_{h}(e_{j,j+1})]}{\partial h(e_{i,i+1})}={\bf Cov}(Y^{N}_{h}(e_{i,i+1}),Y^{N}_{h}(e_{j,j+1})).$$ This implies that $$\begin{aligned} \label{e.corrint} \lefteqn{{\mathbf E}[Y^{N}_{\tilde{h}_{k,l}}(e_{0,1})]-{\mathbf E}[Y^{N}_{\tilde{h}_{k,l-1}}(e_{0,1})] }\nonumber \\& & = \int_{h(e_{l-1,l})}^{h(e_{l-1,l})-(\log p )/2}{\bf Cov}(Y^{N}_{s}(e_{0,1}),Y^{N}_{s}(e_{l-1,l}))d s(e_{l-1,l}),\end{aligned}$$ where $s(e_{i,i+1})=\tilde{h}_{k,l-1}(e_{i,i+1})$ for $i\neq l-1$. As $N\to \infty$, the right hand side of  converges (by the bounded convergence theorem) to $$\int_{h(e_{l-1,l})}^{h(e_{l-1,l})-(\log p )/2}{\bf Cov}(Y_{s}(e_{0,1}),Y_{s}(e_{l-1,l}))d s(e_{l-1,l}).$$ Since $J>0$, this last expression is strictly positive. (Percus’ equality ([@P75], see also [@ELLIS] p.142) gives the weaker fact that the expression is nonnegative.) Now  follows. In what follows, we write, as in the proof of Lemma \[l.percus\], $Y_{h}=Y^{\Z}_{J,h}$. \[t.notnmarkov\] Let $J> 0$ and let the external field $h$ be constant but arbitrary. Then the color process $X^{Y_h,p}$ is not $n$-step Markov for any $n\ge 1$ unless $p\in \{0,1\}$. Observe that $$\begin{aligned} {\mathbf P}(&X^{Y_h,p}(0)=1| X^{Y_h,p}(1)=1,\ldots, X^{Y_h,p}(n)=1)\\ &= {\mathbf P}(Y_h(e_{0,1})=1\,|\, X^{Y_h,p}(1)=1,\ldots, X^{Y_h,p}(n)=1)\\ & +p\, {\mathbf P}(Y_h(e_{0,1})=-1\,|\, X^{Y_h,p}(1)=1,\ldots, X^{Y_h,p}(n)=1)\\& =p+(1-p){\mathbf P}(Y_h(e_{0,1})=1\,|\, X^{Y_h,p}(1)=1,\ldots, X^{Y_h,p}(n)=1).\end{aligned}$$ Lemma \[l.percus\] says that ${\mathbf E}(Y_h(e_{0,1})\,|\, X^{Y_h,p}(1)=1,\ldots, X^{Y_h,p}(n)=1)$ is strictly increasing in $n$; since ${\mathbf P}(Y_h(e_{0,1})=1\,|\,\cdot)=(1+{\mathbf E}(Y_h(e_{0,1})\,|\,\cdot))/2$, the last expression above is strictly increasing in $n$ as well, and so the theorem is proved. Stochastic domination of product measures {#s.dom} ========================================= Given $\nu$ and $p$, it is natural to ask which product measures the color process $\Phi_p(\nu)$ stochastically dominates. In this section, we present results in this direction.
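As a toy warm-up for this question (our own illustration, not from the paper; the function names are hypothetical), take $\nu=\delta_\pi$ where $\pi$ partitions $\Z$ into the pairs $\{2k,2k+1\}$. Distinct pairs behave independently under $\Phi_p(\delta_\pi)$, so the question reduces to a single pair, where stochastic domination can be checked exhaustively: by Strassen's theorem, $\Pi_\alpha\preceq\mu$ on a finite set iff $\mu(U)\ge\Pi_\alpha(U)$ for every up-set $U$. The binding constraint turns out to be the event of seeing at least one $1$, giving the cutoff $\alpha=1-(1-p)^{1/2}$.

```python
from itertools import product

def pair_color_measure(p):
    """Phi_p of the deterministic pair partition, restricted to one pair:
    the pair forms a single cluster, so both coordinates share one color."""
    return {(1, 1): p, (0, 0): 1.0 - p, (1, 0): 0.0, (0, 1): 0.0}

def product_measure(a):
    """Product measure with density a on {0,1}^2."""
    return {x: a ** sum(x) * (1 - a) ** (2 - sum(x))
            for x in product((0, 1), repeat=2)}

def up_sets():
    """All up-sets of {0,1}^2 (subsets closed under raising coordinates)."""
    configs = list(product((0, 1), repeat=2))
    leq = lambda x, y: all(u <= v for u, v in zip(x, y))
    for mask in range(1 << len(configs)):
        u_set = [configs[i] for i in range(len(configs)) if mask >> i & 1]
        if all(y in u_set for x in u_set for y in configs if leq(x, y)):
            yield u_set

def dominates(p, a):
    """Check Pi_a <= Phi_p(delta_pi) on one pair via Strassen's criterion."""
    mu, pi_a = pair_color_measure(p), product_measure(a)
    return all(sum(mu[x] for x in u) >= sum(pi_a[x] for x in u) - 1e-12
               for u in up_sets())
```

For $p=0.64$ the cutoff is $1-\sqrt{0.36}=0.4$: `dominates(0.64, 0.39)` holds while `dominates(0.64, 0.41)` fails. This matches the bound $d(\nu,p)\ge 1-(1-p)^{1/M}$ with $M=2$ from the next proposition, and shows it is attained for this partition.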
We write $\mu_1\preceq \mu_2$ if $\mu_2$ stochastically dominates $\mu_1$ which we recall means that the two measures can be coupled so that the joint distribution is concentrated on pairs of configurations where the realization for $\mu_1$ is below the realization for $\mu_2$. To begin with, the following definition is natural. Let $V$ be a finite or countable set and let $\nu\in\rer_V$. For $p\in (0,1)$, let $d(\nu,p):=\max\{\alpha:\Pi_\alpha\preceq \Phi_p(\nu)\}$. We also let $d(\nu):=\lim_{p\to 1}d(\nu,p)$. ($\Pi_s$ denotes as before product measure on $\{0,1\}^{V}$ with density $s$.) Some general results for stochastic domination ---------------------------------------------- At first, one might think that $d(\nu)$ should often be 1. However, this is usually not the case; see e.g. Proposition \[p.expdomlemma1\](ii) below. Our first proposition tells us that $d(\nu)=1$ does hold if the cluster sizes are bounded. \[p.BoundedCluster\] Suppose that $\nu\in \rer_{V}$ where $V$ is an arbitrary set and that $$\label{e.ClusterBounded} \nu(\{\pi\,:\,|\phi|\le M\mbox{ for all }\phi\in \pi\})=1.$$ Then for all $p\in (0,1)$, $$d(\nu,p)\ge 1-(1-p)^{\frac{1}{M}}$$ and hence $d(\nu)=1$. Suppose first that $\pi\in {\rm Part}_{V}$ is such that $\pi$ contains only equivalence classes of size at most $M$. Letting $\alpha:=1-(1-p)^{\frac{1}{M}}$, it is straightforward to show that $\Pi_{\alpha}\preceq \Phi_p(\delta_{\pi})$ where $\delta_\pi$ stands for the point measure at $\pi$. Now write $$\Phi_p(\nu)=\int_{\pi\in {\rm Part}_{V}} \Phi_p(\delta_{\pi})d\nu(\pi).$$ The claim now follows, since $\Pi_{\alpha}\preceq \Phi_p(\delta_{\pi})$ for $\nu$-almost every $\pi$. The next proposition, due to Olle Häggström, shows that having uniformly bounded cluster sizes is not a necessary condition for $d(\nu)=1$. \[p.Olle\] There exists an RER $\nu$ with $d(\nu)=1$ for which the supremum of the cluster sizes is infinite a.s. 
The main step is to first construct an RER $\nu$ with $d(\nu)=1$ for which  fails for each $M$. To do this, let $V_2,V_3,\ldots$ be disjoint finite sets with $|V_k|=k$ for each $k$ and let $V=\cup_{k\ge 2} V_k$. Given a sequence $(\epsilon_k)$, we consider the RER $\nu$ on $V$ obtained as follows. Independently for different $k$, we let $V_k$ be a cluster with probability $\epsilon_k$ and we let all the elements of $V_k$ be singletons with probability $1-\epsilon_k$. Clearly if $\epsilon_k>0$ for each $k$, then  fails for each $M$. We now claim that if $\epsilon_k=\frac{1}{2^{k^2}}$, then $d(\nu)=1$. We need to show that for each $\alpha <1$, there is $p<1$ so that $\Pi_\alpha\preceq \Phi_p(\nu)$. Since the behavior on different $V_k$’s is independent under $\nu$, we only need to check the stochastic domination for each $V_k$. We first check that we can obtain the desired inequality for the (decreasing) event of having all 0’s. This inequality is then $$(1-\alpha)^k \ge \epsilon_k (1-p)+(1-\epsilon_k) (1-p)^k$$ and it is easy to check that with $\epsilon_k=\frac{1}{2^{k^2}}$ as above, given any $\alpha <1$, there is $p<1$ so that this inequality holds for all $k$. Theorem 1.3 in [@LS06] states that a finite exchangeable process which satisfies the FKG lattice condition dominates a given product measure once one has the appropriate inequality for the event of having all 0’s. It is not hard to see that the color process above on $V_k$ is exchangeable and satisfies the FKG lattice condition, therefore yielding the desired stochastic domination. Finally, once we have an RER $\nu$ with $d(\nu)=1$ for which  fails for each $M$, we can obtain what is claimed in the proposition simply by considering an infinite number of independent such systems. The next proposition relates stochastic domination with the behavior of the number of clusters intersecting a large box.
\[p.expdomlemma1\] Let $d\ge 1$, $\nu\in \rer_{\zd}^{\stat}$ and $C_n=C^\nu_n$ be the number of clusters intersecting $[-n,n]^d$. (i). If $p,\alpha \in (0,1)$ are such that $\Pi_\alpha\preceq \Phi_p(\nu)$, then for all $n\ge 0$ and all $k\ge 1$, $$\label{e.dominationLDnewversion} \nu(C_n\le k)\le \frac{1}{(1-p)^k} (1-\alpha)^{(2n+1)^d}.$$ (ii). If $$\label{e.suff.cond.dzeroagain} \liminf_{n\to\infty}\frac{-\log \nu(C_n\le \delta (2n+1)^d)}{(2n+1)^d}\le \epsilon,$$ then $d(\nu,p)\le 1-\frac{(1-p)^{\delta}}{e^{\epsilon}}$. In particular if this $\liminf$ is 0, then $d(\nu,p)\le 1-(1-p)^{\delta}$. (iii). If there exists $k_n=o(n^d)$ such that $$\label{e.suff.cond.dzero} \liminf_{n\to\infty}\frac{-\log \nu(C_n\le k_n)}{(2n+1)^d}\le \epsilon,$$ then $d(\nu)\le 1-e^{-\epsilon}$. In particular if this $\liminf$ is 0, then $d(\nu)=0$. (iv). If $\nu(C_n=1)\ge \gamma^{(2n+1)^d}$ for infinitely many values of $n$, then $d(\nu)\le 1-\gamma$. (i). Fix $p,\alpha \in (0,1)$ with $\Pi_\alpha\preceq \Phi_p(\nu)$ and let $n\ge 0$ and $k\ge 1$. Then $$\begin{aligned} \label{e.domincube1} \nonumber (1-\alpha)^{(2n+1)^d} =\Pi_{\alpha}(X|_{[-n,n]^d}\equiv 0) \ge \Phi_p(\nu)(X|_{[-n,n]^d}\equiv 0) \\ =E[(1-p)^{C_n}]\ge \nu(C_n\le k) (1-p)^k.\end{aligned}$$ (ii). This follows from (i) in a straightforward manner. (iii). This follows from (ii) in a straightforward manner. (iv). This follows from (iii) in a straightforward manner. We next have the following proposition for RERs concentrated on connected classes. \[p.statconncorr\] (i). Let $\nu\in \rer_{\Z}^{\stat,\conn}$. If $p,\alpha \in (0,1)$ are such that $\Pi_\alpha\preceq \Phi_p(\nu)$, then for all $n\ge 1$ $$\label{e.dominationLDagain} \nu(|\pi(0)|\ge n)\le (n+2) \frac{1}{1-p} (1-\alpha)^{2\lfloor n/2 \rfloor +1}.$$ It follows that if $\nu(|\pi(0)|\ge n)\ge C\gamma^n$ for infinitely many $n$ for some $C>0$, then $d(\nu)\le 1-\gamma$. (ii).
There exist $\nu\in \rer_{\Z}^{\stat}$ and $p,\alpha \in (0,1)$ such that $\Pi_\alpha\preceq \Phi_p(\nu)$ but where the LHS of  does not go to 0 with $n$. (iii). There exist $d\ge 2$, $\nu\in\rer_{\zd}^{\stat,\conn}$ and $p,\alpha \in (0,1)$ such that $\Pi_\alpha\preceq \Phi_p(\nu)$ but where the LHS of  does not go to 0 with $n$. (iv). Let $\nu\in \rer_{\zd}^{\stat,\conn}$. If $p,\alpha \in (0,1)$ are such that $\Pi_\alpha\preceq \Phi_p(\nu)$, then for all $n\ge 1$ $$\label{e.dominationLDagainagain} \nu(|\pi(0)| \ge n ) \le \frac{(7^d(1-\alpha))^n}{1-p}.$$ (This only has content if $\alpha\in (1-7^{-d},1)$.) It follows that if $\nu(|\pi(0)|\ge n)\ge C\gamma^n$ for infinitely many $n$ for some $C>0$, then $d(\nu)\le 1-\frac{\gamma}{7^d}$. (i). Observe that since $\nu$ produces only connected equivalence classes a.s., the following inclusion holds a.s. $$\{|\pi(0)|\ge n\}\subseteq \bigcup_{i=-\lceil n/2 \rceil}^{\lceil n/2 \rceil} \{\pi(i)\supseteq [i-\lfloor n/2 \rfloor,i+\lfloor n/2 \rfloor]\}.$$ Hence $$\begin{aligned} \lefteqn{\nu(|\pi(0)|\ge n)\le \sum_{i=-\lceil n/2 \rceil}^{\lceil n/2 \rceil} \nu(\pi(i)\supseteq [i-\lfloor n/2 \rfloor,i+\lfloor n/2 \rfloor])}\nonumber \\ & & \le (n+2) \frac{1}{1-p} (1-\alpha)^{2\lfloor n/2 \rfloor +1},\end{aligned}$$ using Proposition \[p.expdomlemma1\](i) with $k=1$ in the last inequality, finishing the proof. The last statement follows easily. (ii). We use Proposition \[p.dominationexch\] which comes later in this section. Assume we have a paintbox with $p_1 >0$ and $\sum_i p_i <1$. Since $\sum_i p_i <1$, Proposition \[p.dominationexch\] says that $\Pi_{\alpha} \preceq\Phi_p(\nu)$ for some $\alpha, p\in (0,1)$. However, since $p_1 >0$, $\nu(|\pi(0)|=\infty)>0$ and so the LHS of  does not go to 0 with $n$. (iii). Let $\nu\in\rer_{\zd}^{\stat,\conn}$ be the random cluster model with $J>J_c$.
Then, using the fact that the random cluster model has a unique infinite cluster, the color process $\Phi_{1/2}(\nu)$ is necessarily given by $(\mu_{J}^{\zd,+}+\mu_{J}^{\zd,-})/2$ where these two measures are respectively the plus and minus states for the Ising model with coupling constant $J$. It is well known that there is some $\epsilon=\epsilon(J,d)>0$ such that $\Pi_{\epsilon}\preceq \mu_{J}^{\zd,-} (\preceq \mu_{J}^{\zd,+})$ and hence $\Pi_{\epsilon}\preceq \Phi_{1/2}(\nu)$. However $\nu(|\pi(0)|=\infty)>0$ and hence the LHS of  does not go to 0 with $n$. (iv). Let $S_n$ be the set of connected subsets of $\zd$ of size $n$ containing the origin. It is known that $|S_n|\le 7^{dn}$, see p.$81$ of [@grimmett]. We then have $$\label{e.domination11} \nu( |\pi(0)| \ge n ) \le \sum_{\phi\in S_n} \nu(\phi \subseteq \pi(0)).$$ Since by assumption $\Pi_{\alpha} \preceq\Phi_p(\nu)$, we get, using domination in the second inequality, that for any $\phi\in S_n$ $$(1-p)\nu(\phi \subseteq \pi(0))\le\Phi_p(\nu)(X|_{\phi}\equiv 0)\le (1-\alpha)^n,$$ so that $$\label{e.domination12} \nu(\phi \subseteq \pi(0))\le \frac{(1-\alpha)^n}{1-p}.$$ From  and  it follows that $$\nu( |\pi(0)| \ge n )\le |S_n| \frac{(1-\alpha)^n}{1-p}\le \frac{(7^d(1-\alpha))^n}{1-p},$$ as claimed. The last statement follows easily. The essential reason that (i) does not hold when $d\ge 2$ is that the number of connected sets of size $n$ containing the origin is exponential in $n$ rather than linear in $n$ as in $d=1$. The next proposition says that no matter how fast $\nu(|\pi(0)|\ge n)$ decays to 0 for $d=1$, there is no guarantee that $\Phi_p(\nu)$ will dominate any product measure, even for $\nu\in \rer_{{\mathbb Z}}^{\stat,\conn}$. This shows in particular that the converse of Proposition \[p.statconncorr\](i) is false. \[p.nodomination\] Let $(b_n)_{n\ge 1}$ be a decreasing sequence of real numbers such that $b_n\to 0$ as $n\to \infty$ and $b_n>0$ for all $n$. 
Then there exists $\nu\in \rer_{{\mathbb Z}}^{\stat,\conn}$ such that $\nu(|\pi(0)|\ge n)\le b_n$ for all $n\ge 2$ but $d(\nu)=0$. For $n\ge 1$, let $K_n$ be uniform on $\{0,\ldots,n-1\}$. For $k\in {\mathbb Z}$ and $n\ge 1$, let $I_{k,n}=\{kn,\ldots,kn+n-1\}$. For $n\ge 1$ let $\pi_n$ be the [RER]{} with equivalence classes given by $(I_{k,n}+K_n)_{k\in {\mathbb Z}}$ and let $\nu_n$ be the law of $\pi_n$. Let $(p_n)_{n\ge 1}$ satisfy $p_n\in(0,1)$ for all $n$ and $\sum_{n\ge 1}p_n=1$ and then put $\nu=\sum_{n\ge 1} p_n \nu_n$. We now show that the sequence $(p_n)$ can be chosen so that $\nu$ satisfies the properties required. First, we see that the decay of the probabilities $\nu(|\pi(0)|\ge n)$ can be given the desired behavior by an appropriate choice of the sequence $(p_n)_{n\ge 1}$. For example one can let $p_1:=1-b_2$ and then $p_n:=b_n-b_{n+1}$ for $n\ge 2$. This gives $\nu(|\pi(0)|\ge n)= b_n$ for all $n\ge 2$. To show that $d(\nu)=0$, we proceed as follows. If $d(\nu)> 0$, then there would exist $\epsilon,p\in (0,1)$ such that $\Pi_{\epsilon} \preceq\Phi_p(\nu)$. Next consider the ergodic decomposition of any stationary coupling of $\Pi_{\epsilon}$ and $\Phi_p(\nu)$ which couples the former below the latter. Since $\Pi_{\epsilon}$ is ergodic, it follows that $\Pi_{\epsilon} \preceq\Phi_p(\sum_{n\ge 1} p_n \nu_n)$ can only occur if $\Pi_{\epsilon} \preceq\Phi_p(\nu_n)$ for each $n$. However, $\Pi_{\epsilon} \preceq\Phi_p(\nu_n)$ implies that $$\frac{1-p}{n}\le \Phi_p(\nu_n)(X|_{\{1,\ldots,n\}}\equiv 0)\le (1-\epsilon)^n$$ which is clearly false for large $n$. Stochastic domination for the infinitely exchangeable case ---------------------------------------------------------- We now turn to the infinitely exchangeable case and give a formula (see Proposition \[p.dominationexch\] below) for $d(\nu,p)$. Suppose first that $\mu\in {\rm EP}_{\N}$. Recall (see ) that $$\mu=\int_{s=0}^1 \Pi_s \, d\rho_{\mu}(s),$$ for some unique measure $\rho_{\mu}$ on $[0,1]$.
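To make the representation $\mu=\int\Pi_s\,d\rho_{\mu}(s)$ concrete, the following brute-force sketch (ours, not from the paper; the function names are hypothetical) examines the two-point mixture $\rho_\mu=\frac12(\delta_{0.2}+\delta_{0.8})$ restricted to a window of $n=3$ coordinates, testing stochastic domination by enumerating all up-sets of $\{0,1\}^3$ (Strassen's criterion). Note that on a finite window, densities strictly above $\inf{\rm supp}\,\rho_{\mu}=0.2$ can still be dominated.

```python
from itertools import product

def product_measure(s, n):
    """Product measure with density s on {0,1}^n, as a dict config -> prob."""
    return {x: s ** sum(x) * (1 - s) ** (n - sum(x))
            for x in product((0, 1), repeat=n)}

def mixture(measures, weights):
    """Convex combination of measures given as dicts on a common space."""
    out = {x: 0.0 for x in measures[0]}
    for mu, w in zip(measures, weights):
        for x, q in mu.items():
            out[x] += w * q
    return out

def dominates(mu_hi, mu_lo, n):
    """mu_lo <= mu_hi iff mu_hi(U) >= mu_lo(U) for every up-set U (Strassen)."""
    configs = list(product((0, 1), repeat=n))
    leq = lambda x, y: all(a <= b for a, b in zip(x, y))
    for mask in range(1 << len(configs)):
        u_set = [configs[i] for i in range(len(configs)) if mask >> i & 1]
        if any(leq(x, y) and y not in u_set for x in u_set for y in configs):
            continue  # not an up-set
        if sum(mu_hi[x] for x in u_set) < sum(mu_lo[x] for x in u_set) - 1e-12:
            return False
    return True

n = 3
mu = mixture([product_measure(0.2, n), product_measure(0.8, n)], [0.5, 0.5])
```

Running the check, $\Pi_{0.2}$ and $\Pi_{0.3}$ are dominated on this window while $\Pi_{0.37}$ is not: the all-zeros event already rules the latter out, since $0.63^3<0.26$.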
The proof of the next lemma is straightforward and certainly known, so we omit it. \[l.dominationlemma\] Suppose that $\mu \in {\rm EP}_{\N}$. Then $$\sup\{s\,:\,\Pi_s\preceq \mu\}=\inf {\rm supp} \,\rho_{\mu}.$$ Recall (see Theorem \[t.kingman\]) that for any $\nu\in \rer_{\N}^{\exch}$, there is a unique measure $\rho_{\nu}$ on $\rer_{\N}^{\exch,\pure}$ such that $$\label{e.rerepr} \nu=\int_{\nu_{\bf p}\in \rer_{\N}^{\exch,\pure}} \nu_{\bf p} \, d\rho_{\nu}(\nu_{\bf p}).$$ As an application of Lemma \[l.dominationlemma\] to exchangeable color processes, we have the following. \[p.dominationexch\] If $\nu_{\bf p}\in \rer_{\N}^{\exch,\pure}$ with ${\bf p}=(p_1,p_2,\ldots)$, then for all $p\in (0,1)$ $$\label{e.dominationexch1} d(\nu_{\bf p},p)=p\left(1-\sum_{i\ge 1} p_i\right).$$ Hence $$d(\nu_{\bf p})=1-\sum_{i\ge 1} p_i.$$ More generally, if $\nu\in \rer_{\N}^{\exch}$ then for all $p\in (0,1)$ $$\label{e.dominationexch2} d(\nu,p)=\inf\left\{p\left(1-\sum_{i\ge 1} p_i\right)\,:\,\nu_{\bf p}\in {\rm supp}\,\rho_{\nu}\right\}.$$ Hence $$d(\nu)=1-\sup\left\{\sum_{i\ge 1} p_i\,:\,\nu_{\bf p}\in {\rm supp}\,\rho_{\nu}\right\}.$$ Statement  follows from Lemma \[l.dominationlemma\] by inspection of . The general statement  follows from  and the upper semicontinuity of the map\ $\nu_{\bf p}\mapsto p(1-\sum_{i=1}^{\infty} p_i)$ (which in fact is not continuous) by observing that $$\Phi_p(\nu)\stackrel{~\eqref{e.rerepr}}{=}\int_{\nu_{\bf p}\in \rer_{\N}^{\exch,\pure}} \Phi_p(\nu_{\bf p}) \, d\rho_{\nu}(\nu_{\bf p}).$$ Next we present a result for the infinite exchangeable case projected to a finite set which follows from a result in [@LS06]. For $\nu\in \rer_{\N}^{\exch}$ we let $\nu_{[n]}\in\rer_{[n]}^{\exch}$ stand for the RER on $[n]$ induced by $\nu$. Similarly for $\mu\in {\rm EP}_{\N}$ let $\mu_{[n]}$ be the measure induced by $\mu$ on $\{0,1\}^{[n]}$. 
Corollary $1.1$ in [@LS06] says that for all $\mu\in {\rm EP}_{\N}$ and all $n\ge 1$ $$\sup\{s\,:\,\Pi_s\preceq \mu_{[n]} \}=1-\left(\int_{s=0}^{1}(1-s)^n d\rho_{\mu}(s)\right)^{1/n}.$$ This immediately implies the following proposition which we therefore give without proof. Recall the definition of $\xi_{{\bf p},p}$ from . Let $n\ge 1$ and suppose that $\nu\in \rer_{\N}^{\exch}$. Then $$\sup\{s\,:\,\Pi_s\preceq \Phi_p(\nu_{[n]})\}=1-\left(\int_{\nu_{\bf p}\in \rer_{\N}^{\exch,\pure} }\int_{s=0}^1 (1-s)^n\,d F_{\xi_{{\bf p},p}}(s)d\rho_{\nu}(\nu_{\bf p}) \right)^{1/n}.$$ Stochastic domination for our various models -------------------------------------------- In this subsection, we examine what the earlier results in this section tell us about stochastic domination for some of our standard models. ### Random walk in random scenery \[p.recurrent.mean0\] (i). Consider a recurrent random walk on $\zd$ and let $\nu$ be the associated RER on $\Z$. Then $d(\nu)=0$.\ (ii). Consider a random walk on $\zd$ whose steps have mean 0 and let $\nu$ be the associated RER on $\Z$. Then $d(\nu)=0$. While (ii) is much stronger in some sense than (i), it does not actually imply it since there are recurrent random walks with infinite mean. [**Proof.**]{} (i). It is well known and easy to show that for any recurrent random walk, $E(R_n)=o(n)$ where $R_n$ is the range of the random walk up to time $n$, i.e., the cardinality of the set $\{S_0,S_1,\ldots,S_{n-1}\}$. It is clear that $R_n$ is exactly the number of clusters intersecting $[0,n-1]$ in the associated RER. Using a trivial modification of Proposition \[p.expdomlemma1\](iii) (where $[-n,n]$ is simply replaced by $[0,n-1]$), we let $k_n:=2E(R_n)$. Then $k_n=o(n)$ and $\nu(R_n\ge k_n)\le \frac{1}{2}$ by Markov’s inequality and hence $\nu(R_n\le k_n)\ge \frac{1}{2}$. It follows that  holds in this case with $\epsilon=0$ and hence $d(\nu)=0$ by Proposition \[p.expdomlemma1\](iii). (ii). 
We will use Lemma 2.2 in [@JS] which is the following. \[lem:kesten\] Consider a random walk on $\zd$ whose steps have mean 0. Then for every $\eps >0$, it is the case that $$P\left({R_n \over n} \le \eps\right) \ge \left({1\over 2}\right)^{\eps n}$$ holds for large $n$. The key ingredient in the proof of the above lemma is Lemma 5.1 in [@DONVAR] which gives a much stronger result when the distribution of the steps has compact support or even satisfies much weaker assumptions. It is easy to see that Lemma \[lem:kesten\] implies that we can choose $(\eps_n)$ going to 0 such that for all $n\ge 1$ $$P\left({R_n \over n} \le \eps_n\right) \ge \left({1\over 2}\right)^{\eps_n n}.$$ Now let $k_n:=n\eps_n$ which is clearly $o(n)$. The above inequality yields that  holds in this case with $\epsilon=0$ as well and hence $d(\nu)=0$ by Proposition \[p.expdomlemma1\](iii). Understanding what happens with $d(\nu)$ for 1-dimensional random walk with drift seems to be an interesting question; see Question \[q.bhs\]. ### Stationary distributions for the voter model in $d\ge 3$ Recall that in this case, the RER $\nu_d$ is described by taking independent coalescing random walkers starting at each point of $\zd$ and running to time $\infty$ and letting two points be in the same class if the random walkers started at those two points ever coalesce. \[p.voting.sd\] For all $d\ge 1$, $d(\nu_d)=0$. [**Proof.**]{} For $d=1,2$, $\nu_d$ has a.s. 1 cluster and therefore the result is trivial. For $d\ge 3$, it is stated (in different terminology) on p. 60 in [@LEBSCHON] that $E(C_n)= O(n^{d-2})$. Letting $k_n:=n^{d-1}(=o(n^d))$ and using Markov’s inequality, we obtain $\nu(C_n\le k_n)\to 1$ as $n\to\infty$. It follows that  holds in this case with $\epsilon=0$ and hence $d(\nu)=0$ by Proposition \[p.expdomlemma1\](iii).
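Returning for a moment to the random walk case: the fact $E(R_n)=o(n)$ used in the proof of Proposition \[p.recurrent.mean0\](i) is easy to observe by simulation for simple random walk on $\Z$, where in fact $E(R_n)\sim\sqrt{8n/\pi}$. The sketch below (ours, not from the paper; the function names are hypothetical) estimates $E(R_n)/n$.

```python
import random

def srw_range(n, rng):
    """Range R_n = #{S_0, ..., S_{n-1}} of simple random walk on Z."""
    pos, visited = 0, {0}
    for _ in range(n - 1):
        pos += rng.choice((-1, 1))
        visited.add(pos)
    return len(visited)

def mean_range_ratio(n, trials=200, seed=0):
    """Monte Carlo estimate of E(R_n)/n, which tends to 0 for recurrent walks."""
    rng = random.Random(seed)
    return sum(srw_range(n, rng) for _ in range(trials)) / (trials * n)
```

For example, the estimated ratio at $n=10{,}000$ should be roughly an order of magnitude smaller than at $n=100$, in line with the $n^{-1/2}$ decay.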
### 1-dimensional Random Cluster Model Consider the RER, denoted by $\nu_s$, in $\rer_{{\mathbb Z}}^{\stat,\conn}$ where one performs i.i.d. percolation with parameter $s$ on ${\mathbb Z}$ and considers the connected components. (This is exactly the RER that arises in Definition \[d.rergen\] where the $Y$ process is i.i.d. with marginal probability $s$.) \[p.1Drcm\] $d(\nu_s,p)= p-ps$ and hence, by letting $p\to 1$, $d(\nu_s)= 1-s$. [**Proof.**]{} By the proof of Proposition \[t.markovcolor\], as we vary $s$ and $p$, the collection of color processes that we obtain are exactly the set of two-state Markov chains with nonnegative correlations and the correspondence is given by $s=p_{1,1}-p_{0,1}$ and $p=p_{0,1}/(p_{0,1}+p_{1,0})$. Now, by Proposition 5.1 in [@LS06], the maximal density product measure that our Markov chain dominates has density $p_{0,1}$. Next, we want to express this in terms of $s$ and $p$. Inverting the above set of equations yields $p_{0,1}=p-ps$ and $p_{1,1}=s+p-ps$. It follows that $d(\nu_s,p)= p-ps$, as desired. We point out that, in the terminology of Proposition \[p.expdomlemma1\], we clearly have that $\nu_s(C_n=1)= s^{2n}$ and hence we can conclude from Proposition \[p.expdomlemma1\](iv) that $d(\nu_s)\le 1-s$. Hence Proposition \[p.expdomlemma1\](iv) is sharp in this case. Finally, we recall that the above set of color processes (as $s$ and $p$ vary) corresponds to the set of 1-dimensional nearest neighbor Ising models as we vary $J\ge 0$ and $h\in {\mathbb R}$. Using the exact correspondence given in [@Georgii], p. 50-51 between Ising models on ${\mathbb Z}$ and the above processes, one can determine the largest product measure which the Ising model with parameters $J\ge 0$ and $h\in {\mathbb R}$ dominates. ### Random Cluster models in $\zd$ We refer to [@GrimRC] for all background concerning the random cluster model.
Given $d\ge 2$, $\alpha\ge 0$ and $q\ge 1$, we let $\nu^{\rm{RCM}}_{d,\alpha,q}$ be the random cluster model on $\zd$ with parameters $\alpha$ and $q$ which is a probability measure on $\{0,1\}^E$, where $E$ is the set of edges in $\zd$, obtained by taking a limit of the random cluster models on finite boxes as defined in Subsection \[ss.Examples\]. We then think of $\nu^{\rm{RCM}}_{d,\alpha,q}$ as an RER on $\zd$ by considering the induced connected components. (For the experts, using one of the possible definitions of a random cluster model, there might be more than one such measure on $\zd$; nonetheless, our definition of $\nu^{\rm{RCM}}_{d,\alpha,q}$ above is well-defined as this limit exists.) Recall that $q=1$ corresponds to the classical divide and color model. \[p.RCM\] (i). For all $d\ge 2$, $q\ge 1$ and $\alpha> 0$, $d(\nu^{\rm{RCM}}_{d,\alpha,q})< 1$.\ (ii). ([@BBT1]) For all $d\ge 2$, $\alpha> 0$ and $p>0$, $d(\nu^{\rm{RCM}}_{d,\alpha,1},p)>0$. [**Proof.**]{} (i). It is easy to show that for all $d\ge 2$, $q\ge 1$ and $\alpha> 0$, there exist $C,\gamma >0$ so that for all $n$, $\nu^{\rm{RCM}}_{d,\alpha,q}(|\pi(0)|\ge n)\ge C\gamma^n$. It follows from Proposition \[p.statconncorr\](iv) that $d(\nu^{\rm{RCM}}_{d,\alpha,q})< 1$.\ (ii). This is stated in Theorem 3.1 in [@BBT1]. Ergodic results in the translation invariant case {#s.transfer} ================================================= In this section, the main theme is to investigate the ergodic theoretic properties of our color processes in the translation invariant case. These will turn out to depend both on the ergodic behavior of the RERs generating the color process as well as on the structure of the clusters which arise. We therefore assume in this section that $V={\mathbb Z}^d$ and we only consider RERs in $\rm{RER}^{\rm{stat}}_V$. We will refer to [@EW] and [@W] for the standard definitions in ergodic theory and will not, in view of space, recall these definitions here. 
The ergodic concepts which we will consider are (1) ergodicity, (2) weak-mixing, (3) mixing, (4) $k$-mixing, (5) $K$-automorphism and (6) Bernoullicity. Importantly, in [@EW], these definitions are also stated for ${\mathbb Z}^d$. In addition, we will assume familiarity with the notion of the entropy of a dynamical system or a stationary process. We recall that one stationary process is a [*factor*]{} of another stationary process if the former can be expressed as a translation invariant function of the latter. All the standard ergodic properties (in particular all those considered in this paper) are easily shown (or known) to be preserved under factor maps. In addition, it is known that i.i.d. processes satisfy all of the ergodic properties that we study and that, moreover, if we have a stationary process $\mu$ satisfying one of our ergodic properties, then the joint stationary process where (1) the first marginal is $\mu$, (2) the second marginal is an i.i.d. process and (3) the two processes are independent also satisfies this given ergodic property. In what follows, $(\pi,X^{\nu,p})$ is our [*joint*]{} RER and color process where $\pi$ is the random partition with distribution $\nu$ and $X^{\nu,p}$ is the corresponding color process with parameter $p$; the latter of course has distribution $\Phi_p(\nu)$. The distribution of the joint law will be denoted by $\bfp=\jointlaw$. With $d$ specified, we let $B_n:=[-n,n]^d\cap{\mathbb Z}^d$ (so that $|B_n|=(2n+1)^d$). For a subset $A\subseteq{\mathbb Z}^d$ and $x\in {\mathbb Z}^d$, define the translation of $A$ by $x$ by $T^x A:=\{y\,:\,y-x\in A\}$ and for subsets $B\subseteq\{0,1\}^{{\mathbb Z}^d}$ and $x\in {\mathbb Z}^d$, $T^x B$ will also have the obvious meaning. Positive density clusters imply nonergodicity of the color process ------------------------------------------------------------------ Essentially following Burton-Keane [@BK89], we first make the following definition. 
\[d.clustdens\] We say that a subset $S$ of ${\mathbb Z}^d$ has density $\alpha$ if $$\lim_{i\to\infty} \frac{|S\cap B_i|}{|B_i|}=\alpha.$$ We say that $S$ has *upper density* $\alpha$ if $$\varlimsup_{i\to \infty} \frac{|S\cap B_i|}{|B_i|}=\alpha.$$ The proof of Theorem 1 in [@BK89] easily yields the following result. \[t.bkrer\] Suppose that $\nu\in \rer_{\zd}^{\stat}$. Then $$\nu(\mbox{every }\phi\in \pi\mbox{ has a density})=1.$$ The main result of this subsection is the following. \[t.ergod1\] Fix $d\ge 1$, $p\in (0,1)$ and suppose that $\nu\in \rer_{\zd}^{\stat}$. If $$\nu(\exists \phi\in \pi\,:\,\phi\mbox{ has positive density})>0,$$ then $\munup$ is not ergodic. In particular, if under $\nu$ there is a positive finite number of infinite clusters with positive probability, then $\munup$ is not ergodic. To prove this, we begin with the following lemma. \[l.ergthm1\] Suppose $\nu\in \rer_{\zd}^{\stat}$ and that $$\nu(\exists \phi\in \pi\,:\,\phi\mbox{ has positive density})>0.$$ Then there exists a set $S\subseteq {\mathbb Z}^d$ of positive upper density and a number $\delta=\delta_S>0$ such that $$\nu(\pi(0)=\pi(x))\ge \delta\mbox{ for all }x\in S.$$ We proceed by contradiction. Assume that there does not exist a set $S$ with positive upper density and a $\delta>0$ such that $\nu(\pi(0)=\pi(x))\ge \delta\mbox{ for all }x\in S.$ Let $\epsilon>0$ be arbitrary and let $$S_{\epsilon}=\{x\,:\,\nu(\pi(0)=\pi(x))\ge\epsilon\}.$$ Our assumptions imply that $S_\epsilon$ has upper density $0$. 
We now get that $$\begin{aligned} \lefteqn{\varlimsup_{n\to \infty}\frac{E_{\nu}[|\pi(0)\cap B_n|]}{(2n+1)^d} = \varlimsup_{n\to\infty}\sum_{x\in B_n} \frac{\nu(\pi(0)=\pi(x))}{(2n+1)^d} }\\ & & =\varlimsup_{n\to\infty} \sum_{x\in B_n\cap S_{\epsilon}} \frac{\nu(\pi(0)=\pi(x))}{(2n+1)^d} + \varlimsup_{n\to\infty} \sum_{x\in B_n\cap S_{\epsilon}^c} \frac{\nu(\pi(0)=\pi(x))}{(2n+1)^d}\\ & & \le \varlimsup_{n\to\infty} \sum_{x\in B_n\cap S_{\epsilon}} \frac{1}{(2n+1)^d} + \varlimsup_{n\to\infty} \sum_{x\in B_n\cap S_{\epsilon}^c} \frac{\epsilon}{(2n+1)^d}\\ & & \le 0 +\epsilon=\epsilon,\end{aligned}$$ using that $S_{\epsilon}$ has upper density $0$ in the last inequality. Since $\epsilon>0$ was arbitrary, it follows that $$\label{e.bk1} \lim_{n\to \infty}\frac{E_{\nu}[|\pi(0)\cap B_n|]}{(2n+1)^d} =0.$$ On the other hand, by Theorem \[t.bkrer\], $$\label{e.bk2} \lim_{n\to\infty} \frac{|\pi(0)\cap B_n|}{(2n+1)^d}=L\mbox{ a.s.},$$ for some random variable $L$. The assumption $\nu(\exists \phi\in \pi\,:\,\phi\mbox{ has positive density})>0$ implies that $P_\nu(L>0)>0$, so that $E_{\nu}[L]>0$. Hence, using  and the bounded convergence theorem, $$\label{e.bk3} \lim_{n\to\infty}\frac{E_{\nu}[|\pi(0)\cap B_n|]}{(2n+1)^d}=E_{\nu}[L]>0.$$ However  contradicts , finishing the proof. 
[**Proof of Theorem \[t.ergod1\].**]{} If $\munup$ is ergodic, then $$\label{e.bk4} \lim_{n\to\infty} \sum_{x\in B_n}\frac{\munup(X(0)=X(x)=1)}{(2n+1)^d}=p^2.$$ From Lemma \[l.ergthm1\], it follows that there is a deterministic set $S\subseteq \zd$ of positive upper density and a $\delta=\delta_S>0$ such that $$\label{e.bk5} \nu(\pi(0)=\pi(x))\ge \delta \mbox{ for }x\in S.$$ Hence, $$\label{e.bk6} \munup(X(0)=X(x)=1)\ge \delta p + (1-\delta) p^2\mbox{ for }x\in S.$$ However, we also know that $$\label{e.bk7} \munup(X(0)=X(x)=1)\ge p^2\mbox{ for all }x.$$ Equations ,   and the fact that $S$ has positive upper density imply that $$\label{e.bk8} \varlimsup_{n} \sum_{x\in B_n} \frac{\munup(X(0)=X(x)=1)}{(2n+1)^d}>p^2,$$ which implies that $\munup$ is not ergodic due to . The final statement follows easily from the ergodic theorem. We will see later (Theorem \[t.ergod3\]), that the converse to Theorem \[t.ergod1\] holds when $\nu$ is ergodic. When does the color process inherit ergodic properties from the RER? -------------------------------------------------------------------- The first theorem in this subsection tells us that, when all clusters are finite, then any ergodic property of $\nu$ is automatically passed on to $\jointlaw$ and hence to $\munup$. This is really just an extension (with the same proof) of Theorem 3.1 in [@JS] where this was proved for the particular property of being Bernoulli. Nonetheless, since the proof is short, we include it for completeness. We mention that Bernoulliness for the $\ttinv$-process (and consequently for random walk in random scenery) in the transient case (which is a special case of having finite clusters) was proved earlier by a different method in [@dHS97]. \[t.finiteclust\] Fix $d\ge 1$, $p\in (0,1)$ and $\nu\in \rer_{\zd}^{\stat}$. 
Assume that $\nu$ satisfies $$\nu(\forall \phi\in \pi\,:\,\phi\mbox{ is finite})=1.$$ Then, letting $\mu_p$ denote product measure with density $p$ on ${\zd}$, we have that $\jointlaw$ is a factor of $\nu\times \mu_p$. In particular, if ${\mathfrak p}$ denotes any one of the ergodic properties being studied here, then $\nu$ has property ${\mathfrak p}$ if and only if $\jointlaw$ has property ${\mathfrak p}$. In particular, if $\nu$ has property ${\mathfrak p}$, then $\munup$ has property ${\mathfrak p}$. Concerning the middle statement, first, since $\pi$ is a factor of $(\pi,\xpip)$ and all of these properties are preserved under factors, the “if” direction follows. Secondly, for the “only if” direction, we observe that if $\nu$ has property ${\mathfrak p}$, then so does $\nu\times \mu_p$ and hence $\jointlaw$ in turn has this property being, as claimed, a factor of the latter. Since $\munup$ is a factor of $\jointlaw$, the final statement is immediate. For the first and main statement, let $Y=(Y(z))_{z\in {\mathbb Z}^d}$ be an i.i.d.  field with $P(Y(z)=1)=p=1-P(Y(z)=0)$ for $z\in {\mathbb Z}^d$, and let $Z=(\pi,Y)$. We will now obtain $(\pi,\xpip)$ as a factor of $Z$. For the first marginal, we just copy the first marginal of $Z$. For the second marginal, we proceed as follows. Choose an arbitrary lexicographic ordering of ${\mathbb Z}^d$. For $x\in {\mathbb Z}^d$, let $y_x$ be that element $y$ of $\pi(x)$ which minimizes $y-x$ with respect to the above ordering. Finally, we let $\xpip(x)=Z(y_x)$. It is easy to see that this yields the desired factor map. Theorems \[t.ergod1\] and \[t.finiteclust\] suggest to us that the interesting case is when $\pi$ contains no equivalence class of positive density but contains some infinite equivalence class, necessarily of 0 density. Theorems \[t.ergod3\], \[t.ergod4\], \[t.numixmumix\] and \[t.numixmumix-kmixing\] below cover this case for some ergodic properties. 
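The factor map in the proof of Theorem \[t.finiteclust\] can be made concrete in a finite toy setting. The sketch below is our illustration (the function name is ours): it colors each point by the i.i.d. bit sitting at the minimal element of its cluster, which is what minimizing $y-x$ in a fixed translation invariant ordering amounts to, so every cluster receives a single color.

```python
def color_from_partition(partition, bits):
    """Finite-volume sketch of the factor map in the proof of Theorem
    [t.finiteclust]: `partition` is a list of disjoint clusters covering
    {0,...,n-1}, and `bits` plays the role of the i.i.d. field Y.  Each
    point x is colored by the bit at the distinguished element y_x of its
    cluster; since the ordering is translation invariant, y_x is simply
    the minimal element of pi(x)."""
    rep = {}                         # x -> distinguished element y_x
    for cluster in partition:
        y = min(cluster)             # same y for every x in the cluster
        for x in cluster:
            rep[x] = y
    return [bits[rep[x]] for x in sorted(rep)]
```

By construction the output is constant on clusters, and distinct clusters use the bits at distinct representatives, so when `bits` is i.i.d. with density $p$ the colors of the clusters are i.i.d. coins, as required for the color process $X^{\nu,p}$.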
\[t.ergod3\] Fix $d\ge 1$ and $p\in (0,1)$ and assume that $\nu\in \rer_{\zd}^{\stat}$ satisfies $$\nu(\exists \phi\in \pi\,:\,\phi\mbox{ has positive density})=0.$$ Then $\nu$ is ergodic if and only if $\jointlaw$ is ergodic. In particular, if $\nu$ is ergodic, then $\munup$ is ergodic. First assume that $(\pi,\xpip)$ is ergodic. Since $\pi$ is a factor of $(\pi,\xpip)$ and ergodicity is preserved under factors, the if part of the theorem follows. Similarly, we obtain the last statement of the theorem from the first statement since $\xpip$ is a factor of $(\pi,\xpip)$. We move on to the only if part of the theorem. Assume that $\nu$ is ergodic. Suppose that $K_1$ and $K_2$ are finite subsets of ${\zd}$. For $i=1,2$, suppose that $E_i$ is an event depending only on the color process $\xpip$ restricted to $K_i$, that is $E_i\in \sigma(\xpip(z)\,:\,z\in K_i)$. For $i=1,2$, fix $\phi_i\in\Delta_{K_i}$ and let $F_i= \{\pi_{K_i}=\phi_i\}$. By standard approximation by cylinder sets, it suffices to show that $$\label{e.nts} \lim_{n\to \infty} \frac{1}{(2n+1)^d}\sum_{x\in B_n} \Pr(E_1, F_1, T^x E_2, T^x F_2) =\Pr(E_1,F_1)\Pr(E_2,F_2).$$ For $x\in {\mathbb Z}^d$, let $C_x=C_x(K_1,K_2)$ be the event that there is some $z_1\in K_1$ and some $z_2\in K_2$ such that $\pi(z_1)=\pi(T^x(z_2))$. We will be done if we show that $$\label{e.nts2} \lim_{n\to \infty} \frac{1}{(2n+1)^d}\sum_{x\in B_n} \Pr(C_x^c,E_1, F_1, T^x E_2, T^x F_2) =\Pr(E_1,F_1)\Pr(E_2,F_2)$$ and $$\label{e.nts3} \lim_{n\to \infty} \frac{1}{(2n+1)^d}\sum_{x\in B_n} \Pr(C_x,E_1, F_1, T^x E_2, T^x F_2) =0.$$ We start with . 
Clearly, it suffices to show $$\begin{aligned} \label{e.densclust} \lim_{n\to \infty} \frac{1}{(2n+1)^d}\sum_{x\in B_n} \Pr( C_x)=0.\end{aligned}$$ We get that $$\begin{aligned} \label{e.innersum} \lefteqn{\sum_{x\in B_n} \frac{\Pr( C_x)}{(2n+1)^d}}\nonumber\\ & & \le \sum_{x\in B_n} \sum_{z_1\in K_1,z_2\in K_2} \frac{\Pr(\pi(z_1)=\pi(T^x z_2))}{(2n+1)^d}\nonumber\\ & & = \sum_{z_1\in K_1,z_2\in K_2} \sum_{x\in B_n} \frac{\Pr(\pi(z_1)=\pi(T^x z_2))}{(2n+1)^d}.\end{aligned}$$ For fixed $z_1$ and $z_2$, the inner sum in  converges to $0$ as $n\to\infty$ since every cluster of $\pi$ has density $0$ a.s. Hence,  follows, and so  is established. We now move on to prove . We write $$\begin{aligned} \label{e.condind} \lefteqn{\Pr(C_x^c,E_1, F_1, T^x E_2, T^x F_2) } \\ & & =\Pr(C_x^c,F_1,T^x F_2) \Pr(E_1,T^x E_2|C_x^c,F_1,T^x F_2)\nonumber\\ & & =\Pr(C_x^c,F_1,T^x F_2)\Pr(E_1|F_1)\Pr(T^x E_2| T^x F_2),\nonumber\\ & & =\Pr(C_x^c,F_1,T^x F_2)\Pr(E_1|F_1)\Pr( E_2| F_2)\nonumber\end{aligned}$$ where in the second equality we used the fact that $E_1$ and $T^x E_2$ are conditionally independent given the event $\{C_x^c,F_1,T^x F_2\}$, and translation invariance was used in the last equality. Next, we argue that for $\phi_1,\phi_2$ fixed, $$\label{e.nts4} \lim_{n\to \infty} \frac{1}{(2n+1)^d} \sum_{x\in B_n } \Pr(C_x^c,F_1,T^x F_2)= \Pr(F_1)\Pr(F_2).$$ To see this, we observe that $$\label{e.ineq1} \Pr(C_x^c,F_1,T^x F_2)\le \Pr(F_1,T^x F_2)$$ and $$\label{e.ineq2} \Pr(C_x^c,F_1,T^x F_2)\ge \Pr(F_1,T^x F_2)-\Pr(C_x).$$ By ergodicity of $\nu$, $$\label{e.nts5} \lim_{n\to \infty} \frac{1}{(2n+1)^d} \sum_{x\in B_n }\Pr(F_1,T^x F_2)= \Pr(F_1)\Pr(F_2).$$ Next, we already proved in  that $$\label{e.nts6} \lim_{n\to \infty} \frac{1}{(2n+1)^d} \sum_{x\in B_n }\Pr(C_x)=0.$$ Hence, Equation  follows from , ,  and . We are now ready to obtain  from  and . 
We get $$\begin{aligned} \lefteqn{\lim_{n\to \infty} \frac{1}{(2n+1)^d}\sum_{x\in B_n} \Pr(C_x^c,E_1, F_1, T^x E_2, T^x F_2) }\\ & & \stackrel{~\eqref{e.condind}}{=}\lim_{n\to \infty} \frac{1}{(2n+1)^d}\sum_{x\in B_n} \Pr(C_x^c,F_1,T^x F_2)\Pr(E_1|F_1)\Pr( E_2| F_2) \\ & & \stackrel{~\eqref{e.nts4}}{=} \Pr(F_1)\Pr(F_2)\Pr(E_1|F_1)\Pr( E_2| F_2) =\Pr(E_1,F_1)\Pr(E_2,F_2).\end{aligned}$$ Hence,  is established. Since $K_1$ and $K_2$ are arbitrary finite sets, ergodicity of $\munup$ follows. \[t.ergod4\] Fix $d\ge 1$ and $p\in (0,1)$ and assume that $\nu\in \rer_{\zd}^{\stat}$ satisfies $$\nu(\exists \phi\in \pi\,:\,\phi\mbox{ has positive density})=0.$$ Then $\nu$ is weakly mixing if and only if $\jointlaw$ is weakly mixing. In particular, if $\nu$ is weakly mixing, then $\munup$ is weakly mixing. First assume that $(\pi,\xpip)$ is weakly mixing. Since $\pi$ is a factor of $(\pi,\xpip)$ and weak mixing is preserved under factors, the if part of the theorem follows. Similarly, we obtain the last statement of the theorem from the first statement since $\xpip$ is a factor of $(\pi,\xpip)$. We move on to the only if part of the theorem. Assume that $\nu$ is weak mixing. Suppose that $K_1$ and $K_2$ are finite subsets of $\zd$. For $i=1,2$, suppose that $E_i$ is an event depending only on the color process $\xpip$ restricted to $K_i$, that is $E_i\in \sigma(\xpip(z)\,:\,z\in K_i)$. For $i=1,2$, fix $\phi_i\in\Delta_{K_i}$ and let $F_i= \{\pi_{K_i}=\phi_i\}$. By approximation by cylinder sets, it suffices to show that $$\lim_{n\to\infty} \frac{\sum_{x\in B_n} | \Pr(E_1, F_1, T^x E_2, T^x F_2) -\Pr(E_1,F_1)\Pr(E_2,F_2) |}{(2n+1)^d}=0.$$ Define $C_x$ in the same way as in the proof of Theorem \[t.ergod3\]. 
By the triangle inequality, we have for each $n$ $$\begin{aligned} \label{e.twoterms} \lefteqn{\frac{\sum_{x\in B_n} | \Pr(E_1, F_1, T^x E_2, T^x F_2) -\Pr(E_1,F_1)\Pr(E_2,F_2) |}{(2n+1)^d} } \\ & &\le \frac{\sum_{x\in B_n}\Pr(C_x)}{(2n+1)^d}+ \frac{\sum_{x\in B_n} | \Pr(C_x^c,E_1, F_1, T^x E_2, T^x F_2) -\Pr(E_1,F_1)\Pr(E_2,F_2)|}{(2n+1)^d}.\nonumber\end{aligned}$$ The first term in the last line of  converges to $0$ as $n\to \infty$ due to Equation  above, so we can focus on the second term. We get that $$\begin{aligned} \label{e.prodrewrite} \lefteqn{\frac{\sum_{x\in B_n} | \Pr(C_x^c,E_1, F_1, T^x E_2, T^x F_2) -\Pr(E_1,F_1)\Pr(E_2,F_2)|}{(2n+1)^d}}\\ & & =\frac{\sum_{x\in B_n} | \Pr(C_x^c,F_1, T^x F_2)\Pr(E_1|F_1 )\Pr(E_2|F_2 ) -\Pr(E_1|F_1)\Pr(E_2|F_2)\Pr(F_1)\Pr(F_2)|}{(2n+1)^d} \nonumber\\ & & \le \frac{\sum_{x\in B_n} | \Pr(C_x^c,F_1, T^x F_2) -\Pr(F_1)\Pr(F_2)|}{(2n+1)^d}\nonumber\\ & &\le\frac{\sum_{x\in B_n}\Pr(C_x)}{(2n+1)^d}+ \frac{\sum_{x\in B_n} | \Pr(F_1, T^x F_2) -\Pr(F_1)\Pr(F_2)|}{(2n+1)^d}\to 0,\nonumber\end{aligned}$$ as $n\to \infty$ due to the weak mixing of $\nu$ and the comment above Equation . Since $K_1$ and $K_2$ are arbitrary finite sets, this finishes the proof. \[t.numixmumix\] Fix $d\ge 1$ and $p\in (0,1)$ and assume that $\nu\in \rer_{\zd}^{\stat}$ satisfies $$\lim_{x\to\infty}\nu(\pi(x)=\pi({\bf 0}))= 0.$$ Then $\nu$ is mixing if and only if $\jointlaw$ is mixing. In particular, if $\nu$ is mixing, then $\munup$ is mixing. It is elementary to check that the condition that $\nu(\pi(x)=\pi({\bf 0}))\to 0$ as $x\to\infty$ is necessary for mixing, since if this fails, pairwise correlations in the color process do not converge to $0$ and hence mixing does not hold. Clearly the condition $\nu(\pi(x)=\pi(y))\to 0$ as $|x-y|\to\infty$ implies the condition $\nu(\exists \phi\in \pi\,:\,\phi\mbox{ has positive density})=0$. To see that the converse does not hold, consider the following deterministic example in ${\mathbb Z}^2$. 
Let $\pi$ be the partition into horizontal lines. Then clearly each cluster has density $0$, but $\nu(\pi(x)=\pi(y))=1$ if $x_2=y_2$. Obviously, a similar example exists for any $d\ge 2$. First assume that $(\pi,\xpip)$ is mixing. Since $\pi$ is a factor of $(\pi,\xpip)$ and mixing is preserved under factors, the if part of the theorem follows. Similarly, we obtain the last statement of the theorem from the first statement since $\xpip$ is a factor of $(\pi,\xpip)$. We move on to the only if part of the theorem. Assume that $\nu$ is mixing. Suppose that $K_1$ and $K_2$ are finite subsets of $V$. For $i=1,2$, suppose that $E_i$ is an event depending only on the color process $\xpip$ restricted to $K_i$, that is $E_i\in \sigma(\xpip(z)\,:\,z\in K_i)$. For $i=1,2$, fix $\phi_i\in\Delta_{K_i}$ and let $F_i= \{\pi_{K_i}=\phi_i\}$. By approximation by cylinder sets, it suffices to show that $$\lim_{|x|\to\infty} \Pr(E_1, F_1, T^x E_2, T^x F_2) =\Pr(E_1,F_1)\Pr(E_2,F_2).$$ For $x\in {\mathbb Z}^d$, let $C_x$ be the event as defined in the proof of Theorem \[t.ergod3\]. Then $$\begin{aligned} \label{e.firstsum} \lefteqn{\Pr(E_1, F_1, T^x E_2, T^x F_2)} \\ & & = \Pr(C_x,E_1, F_1, T^x E_2, T^x F_2)+\Pr(C_x^c,E_1, F_1, T^x E_2, T^x F_2).\nonumber\end{aligned}$$ Since $K_1$ and $K_2$ are finite, the property that $\Pr(\pi(x)=\pi(y))\to 0$ as $|x-y|\to\infty$ implies that $\Pr(C_x)\to 0$ as $|x|\to \infty$. Hence, the first term in the right hand side of  converges to $0$ as $|x|\to\infty$, and we can focus on the second term. 
Observe that as in the proof of Theorem \[t.ergod3\], $$\begin{aligned} \label{e.spliteq1} \Pr(C_x^c,E_1, F_1, T^x E_2, T^x F_2) =\Pr(C_x^c,F_1,T^x F_2)\Pr(E_1|F_1)\Pr( E_2| F_2).\end{aligned}$$ Using the mixing property of $\nu$ and the fact that $\Pr(C_x^c)\to 1$ as $|x|\to \infty$, we get $$\begin{aligned} \label{e.spliteq3} \lefteqn{\lim_{|x|\to \infty} \Pr(C_x^c,F_1,T^x F_2)\Pr(E_1|F_1)\Pr( E_2| F_2)}\\ & & =\Pr(F_1)\Pr( F_2)\Pr(E_1|F_1)\Pr( E_2| F_2)=\Pr(E_1,F_1)\Pr(E_2,F_2).\nonumber\end{aligned}$$ Since $K_1$ and $K_2$ are arbitrary finite sets, this establishes the mixing property of $\munup$ and the proof is finished. The following theorem also holds. Its proof is a straightforward modification of the proof of Theorem \[t.numixmumix\] and hence is left to the reader. In addition, also here the condition $\nu(\pi(x)=\pi(y))\to 0$ as $|x-y|\to \infty$ clearly cannot be weakened. \[t.numixmumix-kmixing\] Fix $d\ge 1$ and $p\in (0,1)$ and assume that $\nu\in \rer_{\zd}^{\stat}$ satisfies $$\lim_{x\to\infty}\nu(\pi(x)=\pi({\bf 0}))= 0.$$ Then $\nu$ is $k$-mixing if and only if $\jointlaw$ is $k$-mixing. In particular, if $\nu$ is $k$-mixing, then $\munup$ is $k$-mixing. Theorem \[t.finiteclust\] tells us that when all the clusters are finite, all ergodic properties of $\nu$ are passed to $\jointlaw$ and Theorems \[t.ergod3\], \[t.ergod4\], \[t.numixmumix\] and \[t.numixmumix-kmixing\] tell us that four specific ergodic properties are passed from $\nu$ to $\jointlaw$ under the weaker assumption (and even under weaker assumptions for two of these) that $$\lim_{x\to\infty}\nu(\pi(x)=\pi({\bf 0}))= 0.$$ However, it turns out interestingly that the important property of being Bernoulli is not necessarily passed from $\nu$ to $\jointlaw$ under this latter assumption. We call the following a theorem although it is actually just an observation based on Kalikow’s famous work (see [@Kal]) on the $\ttinv$-process. 
\[t.kalikow.example\] There exists $\nu\in \rer_{{\mathbb Z}}^{\stat}$ which is Bernoulli and satisfies $$\lim_{x\to\infty}\nu(\pi(x)=\pi({\bf 0}))= 0$$ but for which ${\mathbf P}_{\nu,1/2}$ is not Bernoulli and even for which $\Phi_{1/2}(\nu)$ is not Bernoulli. Let $(X(i))_{i\in {\mathbb Z}}$ be an i.i.d. sequence such that $P(X(i)=1)=P(X(i)=-1)=1/2$. Let $\nu \in \rer_{{\mathbb Z}}^{\stat}$ be the distribution of the RER in which $j < k$ are put in the same cluster if $\sum_{i=j+1}^k X(i)=0$. (This is of course just our RER for random walk in random scenery from Subsection \[ss.Examples\].) Being a factor of an i.i.d. process, $\nu$ is Bernoulli and one easily has $\lim_{x\to\infty}\nu(\pi(x)=\pi({\bf 0}))= 0$. The fact however that ${\mathbf P}_{\nu,1/2}$ is not Bernoulli is Kalikow’s famous theorem ([@Kal]). The stronger fact that even $\Phi_{1/2}(\nu)$ is not Bernoulli was proved by Hoffman ([@Hoff]). One should however stress that the latter proof relies on Kalikow’s theorem. Can the color process enjoy more ergodic properties than the RER? ----------------------------------------------------------------- While $\jointlaw$ cannot of course exhibit stronger ergodic behavior than $\nu$ itself (since the latter is a factor of the former), $\munup$ could possibly exhibit stronger ergodic behavior than $\nu$. Our first example shows that $\Phi_{1/2}$ is not injective on $\rer_{{\mathbb Z}}^{\stat}$ and as a consequence gives us a nonergodic RER whose color process is ergodic. \[p.differinf\] There exist $\nu_3,\nu_4\in \rer_{{\mathbb Z}}^{\stat}$ with $\nu_3\neq \nu_4$, $\Phi_{1/2}(\nu_3)=\Phi_{1/2}(\nu_4)$ and such that the latter process is ergodic. It follows that there is a nonergodic RER whose color process is ergodic. Construct $\nu_3$ as follows: on each subset of the type $\{i,i+1,i+2\}$ for $i$ divisible by $3$, independently use the RER $\nu_1$ from the proof of Theorem \[t.bigfinitetheorem\](A) on these 3 points. 
Next, shift the configuration uniformly at random by $0$, $1$ or $2$ steps to the right to construct a stationary RER. Next construct $\nu_4$ in the same way, using $\nu_2$ from the proof of Theorem \[t.bigfinitetheorem\](A). Since $\nu_1$ and $\nu_2$ yield the same color processes in the setting with three elements, it follows easily that $\Phi_{1/2}(\nu_3)=\Phi_{1/2}(\nu_4)$. Ergodicity (but not mixing) of the latter is easily established. The final claim is established by considering any nontrivial convex combination of $\nu_3$ and $\nu_4$. \[r.nonerg\] (1) Using Theorem \[t.bigfinitetheorem\](E), one can even, in the same way, find $\nu_3,\nu_4\in \rer_{{\mathbb Z}}^{\stat}$ with $\nu_3\neq \nu_4$ such that $\Phi_{p}(\nu_3)=\Phi_{p}(\nu_4)$ for all $p$.\ (2) The above also shows that ergodicity of the color process may depend on $p$. If we take $\nu_3$ and $\nu_4$ from the above proof and take any $p\neq 0,1/2,1$, then $\Phi_{p}(\nu_3)\neq \Phi_{p}(\nu_4)$ (since now $\nu_1$ and $\nu_2$ yield different color processes for such $p$ by Theorem \[t.bigfinitetheorem\](C)) and hence the image of any nontrivial convex combination of $\nu_3$ and $\nu_4$ is nonergodic, being a nontrivial convex combination of the respective color processes. One can strengthen Proposition \[p.differinf\], obtaining examples where the color process is Bernoulli. \[p.nonergtomix\] There exists a non-ergodic $\nu\in \rer_{{\mathbb Z}}^{\stat}$ such that $\Phi_{1/2}(\nu)$ is Bernoulli. We will only sketch the proof. Define $\nu_3\in\rer_{{\mathbb Z}}^{\stat}$ as follows. Let $(Z(i))_{i\in {\mathbb Z}}$ be an i.i.d. sequence such that $P(Z(i)=1)=P(Z(i)=0)=1/2$. Call all vertices $i$ with $Z(i)=0$ white, and all vertices $i$ with $Z(i)=1$ blue. Replace each blue vertex with three green vertices. Let each white vertex be its own equivalence class. The green vertices come in blocks of length divisible by $3$. 
Partition the green blocks independently using $\nu_1$ as in Proposition \[p.differinf\] yielding what we also call here $\nu_3$. Define $\nu_4$ in the same way as $\nu_3$ but using $\nu_2$ from Proposition \[p.differinf\] instead of $\nu_1$. Again $\Phi_{1/2}(\nu_3)= \Phi_{1/2}(\nu_4)$ but now it is easily seen that the latter process is Bernoulli. Now take a nontrivial convex combination of $\nu_3$ and $\nu_4$ as above. Again, $\Phi_{p}(\nu_3)\neq \Phi_{p}(\nu_4)$ for any $p\neq 0,1/2,1$ and so we see that a color process can change from being Bernoulli to being nonergodic as $p$ varies. We should confess at this point that, although we felt it important to point out the above results to the reader, using nonergodic RERs in this context is a little bit of a cheat. We next give a result which places some restriction on the ergodic behavior of the color process in terms of a restriction on the RER. \[p.0entropy,notK\] If $\nu\in \rer_{\zd}^{\stat}$ has 0 entropy and is not the RER which assigns probability 1 to the “all singletons” partition, then for any $p\in (0,1)$, $\Phi_{p}(\nu)$ is not a $K$-automorphism. Case 1. $\nu$ is deterministic; i.e., there exists $\pi\in \rm{Part}_{{\zd}}$ such that $\nu({\pi})=1$. In this case, since $\pi$, by assumption, is not the “all singletons” partition, there must exist $x\neq y$ so that $x$ and $y$ are in the same cluster with positive probability and hence with probability 1. By translation invariance, there are points arbitrarily far away which are in the same cluster with probability 1. This clearly rules out even mixing. Case 2. $\nu$ is nondeterministic. Considering the joint process $(\pi,X^{\nu,p})$, it is easy to see that if $\nu$ is nondeterministic, then the two processes $\pi$ and $X^{\nu,p}$ cannot be independent. However, it has been proved by H. 
Furstenberg (see Theorem 18.16 in [@glasner]) that if one has a 0 entropy system and a $K$-automorphism, then the only stationary joint process (so-called [*joining*]{}) for them is when they are independently coupled. (When two processes have this latter property, they are called [*disjoint*]{}.) Therefore, since $\pi$ is assumed to have 0 entropy, $X^{\nu,p}$ cannot be a $K$-automorphism. Constructing color processes with various ergodic behavior ---------------------------------------------------------- The first observation we wish to make in this subsection is that we can find $\nu\in \rer_{{\mathbb Z}}^{\stat,\conn}$ which falls anywhere in the ergodic hierarchy (e.g., weak-mixing but not mixing). This is an immediate consequence of the following lemma and of course the fact that we can find stationary 0-1 valued processes anywhere in the ergodic hierarchy. \[l.zconstruct\] Given a stationary 0-1 valued process $\{X_n\}$ on ${\mathbb Z}$, there is $\nu\in \rer_{{\mathbb Z}}^{\stat,\conn}$ which is isomorphic to $\{X_n\}$; i.e., there is a translation invariant invertible measure preserving transformation between them. This is nothing other than what we considered in Section \[s.conn\]. If $X_n=1$, then we place $n$ and $n+1$ in the same class and then we saturate this so that it is an equivalence relation. (So, essentially, the clusters will correspond to intervals of 1’s in $\{X_n\}$.) This map is clearly invertible, proving the lemma. We first mention that constructing a color process which is ergodic but not weak-mixing is a triviality. Let $\{X_n\}$ be the stationary 0-1 valued process on ${\mathbb Z}$ which goes back and forth between 0 and 1 and consider the associated $\nu\in \rer_{{\mathbb Z}}^{\stat,\conn}$ given in the proof of Lemma \[l.zconstruct\]. It is immediate that for all $p\in (0,1)$, the associated color process is ergodic but not weak-mixing. We next have the following proposition. 
\[p.Chacon\] There exists $\nu\in \rer_{{\mathbb Z}}^{\stat,\conn}$ so that for all $p\in (0,1)$, the associated color process $\munup$ is weak-mixing but not mixing. We start with a stationary 0-1 valued process $\{X_n\}$ on ${\mathbb Z}$ which is weak-mixing but for which $\limsup_n \Pr(X_0=X_n=1)> \Pr(X_0=1)^2$ (and hence is not mixing). An example of such a process is the so-called Chacon example; see for example page 216 in [@Petersen]. Next consider the associated $\nu\in \rer_{{\mathbb Z}}^{\stat,\conn}$ given in the proof of Lemma \[l.zconstruct\]. Clearly, $\lim_{x\to\infty}\nu(\pi(x)=\pi({\bf 0}))= 0$ and hence Theorem \[t.ergod4\] implies that $\munup$ (and in fact $\jointlaw$) is weak-mixing. To show that $\munup$ is not mixing, consider the two events $A:=\{\xpip_0=\xpip_1\}$ and $B_n:=\{\xpip_n=\xpip_{n+1}\}$. An elementary computation left to the reader gives that $$\limsup_n\Pr(A\cap B_n) >\Pr(A)^2$$ which implies that $\munup$ is not mixing. Questions and further directions {#s.ques} ================================ In this final section, we list a number of questions and a number of directions which might be interesting to pursue. The questions certainly might be of somewhat varying difficulty but all seem natural to us. \[q.gaussian\] Let $X=(X_1,\ldots,X_n)$ be an $n$-dimensional Gaussian random variable, where each $X_i$ is $N(0,1)$ and the pairwise correlations are given by $\sigma_{i,j}$. Assume $\sigma_{i,j}\ge 0$ for all $i,j$. Let $h\in (-\infty,\infty)$ and $Y^h=(Y_1^h,\ldots,Y_n^h)$ be, as earlier, given by $Y_i^h=1$ if $X_i\ge h$ and $Y_i^h=0$ if $X_i<h$. When is $Y^h$ a color process? Note that if $n=3$ and $h=0$, then $Y^h$ is a color process by Lemma \[l.symm\]. The next three questions are special cases of the above question. Concerning the exchangeable Gaussian process described in Subsection \[s.gaussian\], which nonzero thresholds yield color processes? 
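As a numerical companion to the exchangeable case of Question \[q.gaussian\], the following sketch is our own illustration: it uses the standard representation $X_i=\sqrt{\sigma}\,W+\sqrt{1-\sigma}\,Z_i$ of an exchangeable Gaussian vector with pairwise correlation $\sigma$ (an assumption on our part, not notation from the paper, as are the function name and parameters), thresholds it at $h$, and estimates the marginal and pair probabilities, i.e. the correlation structure that any candidate RER would have to reproduce.

```python
import math
import random

def threshold_exchangeable_gaussian(n, sigma, h, n_samples, seed=0):
    """Sample Y^h for the exchangeable Gaussian vector
    X_i = sqrt(sigma)*W + sqrt(1-sigma)*Z_i (pairwise correlation sigma,
    N(0,1) marginals) and estimate P(Y_i = 1) and P(Y_1 = Y_2 = 1).
    Whether the law of Y^h is a color process is exactly the open
    question above; the estimates only exhibit the positive correlations
    any generating RER would have to match."""
    rng = random.Random(seed)
    ones = pair = 0
    for _ in range(n_samples):
        w = rng.gauss(0.0, 1.0)                       # shared component W
        x = [math.sqrt(sigma) * w
             + math.sqrt(1.0 - sigma) * rng.gauss(0.0, 1.0)
             for _ in range(n)]
        y = [1 if xi >= h else 0 for xi in x]         # thresholded field Y^h
        ones += sum(y)
        pair += y[0] * y[1]
    return ones / (n * n_samples), pair / n_samples
```

For $h=0$ the marginal is exactly $1/2$ and, by the Gaussian orthant formula, $P(Y_1=Y_2=1)=1/4+\arcsin(\sigma)/(2\pi)$, which the simulation reproduces up to sampling error.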
Given $\rho \in [0,1]$, consider the Markov chain on $\Rf$ where, if in state $x$, the next state has distribution $\rho^{1/2} x + (1-\rho)^{1/2} Z$ where $Z$ is standard normal. Clearly the stationary distribution is a standard normal and we consider the corresponding stationary Markov chain $(Z_i)_{i\in {\mathbb Z}}$. Fix $h$ and define the process $Y=(Y_i)_{i\in {\mathbb Z}}$ where $Y_i=1$ if $Z_i\ge h$ and $Y_i=0$ if $Z_i<h$. For which $\rho$ and $h$ is $Y$ a color process? Consider a centered Gaussian free field $(Z(x))_{x\in {\mathbb Z}^d}$ with $d\ge 3$. Fix $h$ and consider the process $Y^h=(Y^h(x))_{x\in {\mathbb Z}^d}$ where $Y^h(x)=1$ if $Z(x)\ge h$ and $Y^h(x)=0$ if $Z(x)<h$. When is $Y^h$ a color process? \[q.ising\] On which graphs and for which values of the parameters $J\ge 0$ and $h>0$ is the Ising model a color process? (i). Unlike in the case $h=0$, the marginal distributions of the Ising model with $J\ge 0$ and $h>0$ need not be the same, in which case it of course cannot be a color process; this happens for example for a path of length 2. One might therefore restrict to transitive graphs for this question.\ (ii). In [@Alexander], an asymmetric random cluster model is studied and it is shown how one can obtain the Ising model with $J\ge 0$ and $h>0$ using this model. However, this procedure does not correspond to a color process in our sense as it does in the case $h=0$.\ (iii). Theorem \[t.bigfinitetheorem\](B) and (D) in Section \[s.finitecase\] yield that there is more than one RER generating the Ising model on $K_3$ (the complete graph on 3 vertices) when $J>0$ and $h=0$ while there is at most one RER generating the Ising model on $K_3$ when $J>0$ and $h>0$. Mathematica gives a (necessarily unique) solution for the latter RER for positive $h$ which, interestingly, does not converge, as $h\to 0$, to the RER corresponding to the random cluster model but rather converges to a different RER. 
One might conclude from this that the random cluster RER is not the natural RER which yields the Ising model on $K_3$ with $J>0$ and $h=0$ since it cannot be perturbed to obtain the $h>0$ case. For $p\neq 1/2$, determine those $\nu\in \rer_{\N}^{\exch}$ which are $(\rer_{\N}^{\exch},p)$-unique. Is it all of $\rer_{\N}^{\exch}$ (which is equivalent to $\Phi_p$ being injective)? What are all the possible limiting distributions (after normalization) of $$\sum_{i\in B_n} \xpip(i)$$ which one can obtain by varying $\nu$ and $p$? It was shown in [@KS79] that one can obtain a large number of limiting distributions for the special case of random walk in random scenery. Also, it is known (see [@NW]) that if $\{X_n\}_{n\ge 0}$ is a stationary and positively associated process with $\sum_n {\rm Cov}(X_0,X_n)<\infty$, then one obtains a central limit theorem. This, together with , could be used to show that certain classes of color processes obey a central limit theorem. In addition, a central limit theorem and various other results concerning the original divide and color model are obtained in [@Garet]. If an RER $\nu_1$ is finer than another RER $\nu_2$, in the sense that $\nu_1$ and $\nu_2$ can be coupled so that the clusters of $\nu_2$ are unions of clusters of $\nu_1$, does it follow that $d(\nu_1,p)\ge d(\nu_2,p)$ for each $p$? We note that for $d\ge 1$ and $J_1 <J_2$, the RER for the random cluster model with parameters $q=2$ and $J_1$ is finer than the RER for $q=2$ and $J_2$ and in this case, Proposition 1.6 in [@LS06] states the asked for inequality above for the special case $p=1/2$, in which case the color process is just the Ising model. There is a minor additional point here. In the color process, even the infinite clusters are colored using $p=1/2$ while in Proposition 1.6 in [@LS06], one was looking at the plus states for the Ising model which is obtained by coloring the unique (if there is any) infinite cluster 1. 
However, by Proposition 1.2 in [@LS06], the set of product measures that one dominates is the same whether the infinite cluster is colored 1 (corresponding to the plus state) or colored $-1$ (corresponding to the minus state) and therefore also for the above color process which lies in between. If an RER $\nu$ is such that $d(\nu)>0$, does it follow that $d(\nu,p)>0$ for all $p >0$? Let $\nu^{\rm{RCM}}_{d,\alpha,2}$ be the random cluster model on $\zd$ with $q=2$ and parameter $\alpha$. One would perhaps expect that (1) $d(\nu^{\rm{RCM}}_{d,\alpha,2},p)$ is jointly continuous in $\alpha$ and $p$ and decreasing in $\alpha$ for fixed $p$, (2) $d(\nu^{\rm{RCM}}_{d,\alpha,2})$ is continuous in $\alpha$, (3) $\lim_{\alpha\to 0}d(\nu^{\rm{RCM}}_{d,\alpha,2})=1$ and (4) $\lim_{\alpha\to \infty}d(\nu^{\rm{RCM}}_{d,\alpha,2})=0$. Verify as much of this picture as possible. Does anything interesting happen near the critical value $\alpha_c(d)$? \[q.bhs\] Consider a 1-dimensional random walk which moves to the right with probability $\frac{1}{2}+\sigma$ and to the left with probability $\frac{1}{2}-\sigma$ where $\sigma>0$. Let $\nu_\sigma$ be the associated RER on $\Z$ (whose color process is then random walk in random scenery). What results can one obtain concerning $d(\nu_\sigma,p)$ and $d(\nu_\sigma)$? Is there some phase transition in the parameter $\sigma$? In [@BHS], a phase transition in $\sigma$ is shown for random walk in random scenery, concerning Gibbsianness of the process. Is it possible that this could be related to a phase transition concerning the stochastic domination behavior? Provide natural examples of RERs for which all clusters are infinite and $d(\nu)>0$. \[q.voter.bern\] Are the stationary distributions for the voter model (which we have seen are color processes) in $d\ge 3$ dimensions Bernoulli shifts?
If we look at the RER corresponding to coalescing random walks in $d\ge 4$ dimensions and we restrict the clusters down to a $d-3$ dimensional sublattice, then all the clusters become finite. It follows from Theorem \[t.finiteclust\], together with the fact that the RER itself in any dimension is a Bernoulli shift, that the restriction of the stationary distributions for the voter model to this $d-3$ dimensional sublattice is a Bernoulli shift. The latter fact is most easily seen by noting that the entire evolution of the process of coalescing random walks (which yields the RER) can be generated by uniform $[0,1]$ random variables at each of the points of $\zd$; hence, being a factor of an i.i.d. process, it must be a Bernoulli shift. \[q.musd\] If one cannot provide an affirmative answer to Question \[q.voter.bern\], can one give an example of an RER which has infinite clusters but whose corresponding color process is Bernoulli? [**Acknowledgements.** ]{} We thank Olle Häggström for providing us with Proposition \[p.Olle\] and Russell Lyons for the key part of the proof of Proposition \[p.Russ\]. [^1]: Department of Mathematics, Chalmers University of Technology and Gothenburg University, Sweden. E-mail: steif@chalmers.se. Research supported by the Swedish research council and the Knut and Alice Wallenberg foundation. [^2]: Department of Mathematics, Chalmers University of Technology and Gothenburg University, Sweden. E-mail: johant@chalmers.se. Research supported by the Knut and Alice Wallenberg foundation.
--- abstract: 'This is an addendum to the paper “Deformation of $L_\infty$-Algebras” [@Schuh1]. We explain in which way the deformation theory of $L_\infty$-algebras extends the deformation theory of singularities. We show that the construction of semi-universal deformations of $L_\infty$-algebras gives explicit formal semi-universal deformations of isolated singularities.' author: - 'Frank Schuhmacher[^1]' title: 'Deformation of Singularities via $L_\infty$-Algebras' --- Introduction {#introduction .unnumbered} ============ In this paper, we apply the following general idea for the construction of moduli spaces to isolated singularities: Take the differential graded Lie algebra $L$ describing a deformation problem (for isolated singularities, this is the tangent complex) and find a minimal representative $M$ of $L$ in the class of formal $L_\infty$-algebras (see [@Schuh1]). In geometric terms, $M$ is a formal DG-manifold, containing the moduli space as analytic substructure. This general concept is also sketched in [@Merk1].\ We define a functor $F$ from the category of complex analytic space germs to the localization of the category of $L_\infty$-algebras by $L_\infty$-equivalence. For a singularity $X$, we take the semi-universal $L_\infty$-deformation $(V,Q^V)$ of $F(X)$ constructed in [@Schuh1]. For isolated singularities, the components $V^i$ are of finite dimension. The restriction of the vectorfield $Q^V$ defines a formal map (Kuranishi-map) $V^0{\longrightarrow}V^1$ whose zero locus gives the formal moduli space. Definitions and reminders ========================= In the whole paper, we work over a ground field $k$ of characteristic zero. Denote the category of formal (resp. convergent) complex analytic space germs by ${\mathfrak{Anf}}$ (resp. ${\mathfrak{An}}$). Denote the category of isomorphism classes of formal DG manifolds by ${\text{\texttt{DG-Manf}}}$. 
We use the following superscripts to denote full subcategories of ${\text{\texttt{DG-Manf}}}$:\ L (“local”): the subcategory of all $(M,Q^M)$ in ${\text{\texttt{DG-Manf}}}$ such that $Q^M_0=0$;\ M (“minimal”): the subcategory of all $(M,Q^M)$ in ${\text{\texttt{DG-Manf}}}^L$ such that $Q^M_1=0$;\ G (“g-finite”): the subcategory of all $(M,Q^M)$ in ${\text{\texttt{DG-Manf}}}^L$ such that $H(M,Q^M_1)$ is g-finite.\ We call a morphism $f=(f_n)_{n\geq 1}$ in ${\text{\texttt{DG-Manf}}}^L$ **weak equivalence**, if the morphism $f_1$ of DG vectorspaces is a quasi-isomorphism, i.e. if the corresponding morphism of $L_\infty$-algebras is an $L_\infty$-equivalence. Recall that by Theorem 4.4 and Lemma 4.5 of [@Kont], weak equivalences define an equivalence relation in ${\text{\texttt{DG-Manf}}}^L$ and that in each equivalence class, there is a uniquely defined **minimal model**, i.e. an object belonging to ${\text{\texttt{DG-Manf}}}^M$. We can localize the category ${\text{\texttt{DG-Manf}}}^L$ by weak equivalences ($\approx$). The quotient ${\text{\texttt{DG-Manf}}}^L/\approx$ is equivalent to the category ${\text{\texttt{DG-Manf}}}^M$ and the localization functor assigns to each object of ${\text{\texttt{DG-Manf}}}^L$ its minimal model. This follows directly by Corollary 2.5.7 of [@Merk1]. The functors $F$ and $V$ {#functors} ======================== In this section we explain how to represent (formal) singularities by formal DG manifolds.\ Let ${\mathcal{C}}$ be the category of formal analytic algebras, $A\in{\operatorname{Ob}}({\mathcal{C}})$ and $R=(R,s)$ a **resolvent** of $A$ over $k$, i.e. a g-finite free DG-algebra in ${\operatorname{gr}}({\mathcal{C}})$ such that $H^0(R,s){\cong}A$ and $H^j(R,s)=0$, for $j<0$. For $l\geq 0$, let $I_l$ be an index set containing one index for each free algebra generator of $R$ of degree $-l$. Consider the disjoint union $I$ of all $I_l$ as graded set such that $g(i)=l$, for $i\in I_l$. 
Fix an ordering on $I$, subject to the condition $i<j$ if $g(i)<g(j)$. Thus, as graded algebra, $R=k[[X^0]][X^-]$, where $X^0=\{x_i|\;i\in I,\,g(i)=0\}$ and $X^-=\{x_i|\;i\in I, g(i)\geq 1\}$ are sets of free algebra generators with $g(x_i)=-g(i)$. Set $M:=\coprod_{i\in I}ke_i$ to be the free, graded $k$-vectorspace with base $\{e_i:i\in I\}$, where $g(e_i)=g(i)$. Consider $S(M)=\coprod_{n\geq 0}M^{\odot n}$ in the usual way as graded coalgebra (see Section 1.1 of [@Schuh1]). Set $$S(M)^\ast:={\operatorname{Hom}}_{k-{\mathfrak{Mod}}}(S(M),k)=\prod_{j\geq 0} {\operatorname{Hom}}_{k-Mod}(M^{\odot j},k).$$ We identify products $x_{i_1}\cdot\ldots \cdot x_{i_l}$ in $R$ with the maps $M^{\odot l}{\longrightarrow}k$, defined by $e_{i_1}{\cdot\ldots\cdot}e_{i_l}\mapsto 1$ and $e_{j_1}{\cdot\ldots\cdot}e_{j_l}\mapsto 0$ for $\{j_1,\ldots ,j_l\}\neq\{i_1,\ldots ,i_l\}$. In particular, we identify each constant $\lambda\in k$ with the map $k{\longrightarrow}k$, sending $1$ to $\lambda$. We have $$R^j=\prod_{n\geq 0}{\operatorname{Hom}}^j(M^{\odot n},k)$$ and $R=\coprod_{j\leq 0}R^j$. The differential $s$ of $R$ extends naturally to ${\bar{R}}:=\prod_{j\leq 0}R^j$. As complexes, $R$ and ${\bar{R}}$ are identical, but not as graded modules. We identify ${\bar{R}}=S(M)^\ast$. Set $${\operatorname{Der}}(R):=\coprod_{i\in{\mathbb{Z}}}{\operatorname{Der}}^i(R,R) \quad\text{ and }\quad {\operatorname{Coder}}(S(M)):=\coprod_{i\in{\mathbb{Z}}}{\operatorname{Coder}}^i(S(M),S(M)).$$ Denote by ${\operatorname{Diff}}(R)$ (resp. ${\operatorname{Codiff}}(S(M))$) the submodule of differentials (resp. codifferentials). The following proposition explains why, for a formal DG manifold $W$, the complex ${\operatorname{Coder}}(S(W),S(W))$ is called the tangent complex of $W$. \[tan\] Take $R$ and $M$ as above.
The natural map $$\begin{aligned} {\operatorname{Coder}}(S(M))&{\longrightarrow}{\operatorname{Der}}(R),\\ Q&\mapsto s^Q\end{aligned}$$ where $s^Q(g)=g\circ Q$, is bijective and the restriction gives rise to an isomorphism $${\operatorname{Codiff}}(S(M)){\longrightarrow}{\operatorname{Diff}}(R).$$ The injectivity is clear. Surjectivity: A derivation $s$ of degree $j$ on $R$ induces a differential (also denoted by $s$) on ${\bar{R}}=S(M)^\ast$. We have to find a coderivation $Q$ of degree $j$ on $S(M)$ such that, for $u\in S(M)^\ast$, we have $s(u)=u\circ Q$.\ For each $i\in I$, set $f_i:=s(x_i)$. Then, $f_i$ is a product $((f_i)_n)_{n\geq 1}$ with $(f_i)_n\in{\operatorname{Hom}}^{-g(i)+1}(M^{\odot n},k)$. We define the coderivation $Q$ by $$Q_n(m_1,\ldots ,m_n):=\sum_{i\in I}(f_i)_n(m_1,\ldots ,m_n)\cdot e_i,$$ for homogeneous $m_1,\ldots ,m_n\in M$. In fact, the non-vanishing terms in the sum satisfy the condition $g(m_1)+\ldots +g(m_n)=g(i)$, hence the sum is finite. To show that for $u\in S(M)^\ast$, we have $s(u)=u\circ Q$, it is enough to show that for all $i\in I$, $s(x_i)=x_i\circ Q$. But by definition, for $m_1,\ldots ,m_n\in M$, we have $$(x_i\circ Q)_n(m_1,\ldots ,m_n)=(f_i)_n(m_1,\ldots ,m_n)=(s(x_i)) (m_1,\ldots,m_n).$$ The second statement is a direct consequence of the first. As a consequence, the differential $s$ on $R$ induces a codifferential $Q^M$ on $S(M)$. We consider the pair $(M,Q^M)$ as a formal DG manifold in ${\text{\texttt{DG-Manf}}}^LG$. It has the following property: The restriction of $Q^M$ to $M^0$ defines a formal map $M^0{\longrightarrow}M^1$. Its zero locus is isomorphic to $X$.\ Summarizing the above construction, for each formal space germ $X$ with associated formal analytic algebra $A$, we can construct a formal DG manifold $(M,Q^M)$, containing $X$ as “subspace”. Of course, $(M,Q^M)$ depends on the choice of the resolvent $(R,s)$. But we will show that $(M,Q^M)$ is well defined up to weak equivalence, i.e.
that the assignment $X\mapsto (M,Q^M)$ defines a functor $$F:{\mathfrak{Anf}}{\longrightarrow}{\text{\texttt{DG-Manf}}}^LG/\approx.$$ \[mdgut1\] If $W=(W,d)$ is a DG $k$-vectorspace and if the dual complex\ ${\operatorname{Hom}}(W,k)$ is acyclic, then $W$ is acyclic. Consequently, if $f:V{\longrightarrow}W$ is a morphism of DG $k$-vectorspaces such that the dual complex $f^\ast:W^\ast{\longrightarrow}V^\ast$ is a quasi-isomorphism, then $f$ is a quasi-isomorphism. Assume that $W$ is not acyclic, i.e. there is an $n$ and an element $a\in W^n$ such that $d^n(a)=0$ and $a\not\in{\operatorname{Im}}d^{n-1}$. Let $B'$ be a base of ${\operatorname{im}}d^{n-1}$. We extend $B'\cup\{a\}$ to a base $B$ of $W^n$. Let $p:W^n{\longrightarrow}k$ be the projection on the coordinate $a$ of $B$. Then, $d^\ast(p)=p\circ d^{n-1}=0$ and $p(a)=1$, hence $p\not\in{\operatorname{Im}}d^\ast$. Contradiction! \[mdgut2\] Let $f:M{\longrightarrow}M'$ be a morphism of formal DG-manifolds such that the corresponding map $S(M){\longrightarrow}S(M')$ is a quasi-isomorphism of complexes. Then, $f$ is a weak equivalence. By the Decomposition Theorem for $L_\infty$-algebras (see Lemma 4.5 of [@Kont]), we may assume that $M$ is minimal and that $f$ is strict. In this case, the homomorphism $f:S(M){\longrightarrow}S(M')$ of DG coalgebras is a direct sum of maps of complexes $f_1:M{\longrightarrow}M'$ and $$\sum_{j\geq 2}f_1^{\odot j}:\coprod_{j\geq 2}M^{\odot j}{\longrightarrow}\coprod_{j\geq 2}{M'}^{\odot j}.$$ Since the sum is a quasi-isomorphism, both factors are quasi-isomorphisms. Let $F:(M,Q^M){\longrightarrow}(M',Q^{M'})$ be a morphism of formal DG manifolds in ${\text{\texttt{DG-Manf}}}^G$. If the dual map $S(M')^\ast{\longrightarrow}S(M)^\ast$ is a quasi-isomorphism of free DG algebras, then $F$ is a weak equivalence. This follows from Lemmas \[mdgut1\] and \[mdgut2\]. Thus, we have proved the functoriality of $F$.
Next, we define a functor $$V:{\text{\texttt{DG-Manf}}}^{GM}{\longrightarrow}{\mathfrak{Anf}}$$ as already mentioned above: For a minimal DG manifold $(M,Q^M)$ in ${\text{\texttt{DG-Manf}}}^{MG}$, set $V(M,Q^M)$ to be the zero locus of the formal map $M^0{\longrightarrow}M^1$, induced by $Q^M$. It can easily be seen that the composition $V\circ F$ is the identity on ${\mathfrak{Anf}}$. As a consequence, we get the following theorem: The functor $F$ embeds ${\mathfrak{Anf}}$ as a full subcategory into ${\text{\texttt{DG-Manf}}}^{GM}$. Deformations and embedded deformations {#embdef} ====================================== In this section we recall some classical results, showing that each deformation of a singularity is equivalent to an embedded deformation.\ A morphism $G:{\mathcal{C}}{\longrightarrow}{\mathcal{D}}$ of fibered groupoids over the category ${\mathfrak{An}}$ of complex space germs is called **smooth** if the following condition holds: If $\beta:b{\longrightarrow}b'$ is a morphism in ${\mathcal{D}}$ such that $G(\beta):S{\longrightarrow}S'$ is a closed embedding, and if $a$ is an object in ${\mathcal{C}}$ such that $G(a)=b$, then there is a morphism $\alpha:a{\longrightarrow}a'$ in ${\mathcal{C}}$ such that $G(\alpha)=\beta$.\ Consider a complex space germ $X$ with corresponding analytic algebra ${\mathcal{O}}_X$. Suppose that $X$ is embedded in the smooth space germ $P$ with corresponding analytic algebra $R^0$. Let $R=(R,s)$ be a g-finite, free algebra resolution of ${\mathcal{O}}_X$ such that $R^0={\mathcal{O}}_P$.\ For any space germ $(S,{\mathcal{O}}_S)$, set $R_S:=R\hat{{\otimes}}_{{\mathbb{C}}}{\mathcal{O}}_S$ and $${\mathcal{C}}(S):=\{\delta\in{\operatorname{Der}}^1(R_S,R_S)|\;\delta(0)=0 \text{ and }(s+\delta)^2=0\}$$ Furthermore, let ${\mathcal{D}}(S)$ be the equivalence class of deformations of $X$ with base $S$, i.e.
the equivalence class of all flat morphisms ${\mathcal{X}}{\longrightarrow}S$ such that there is a cartesian diagram $$\label{deform} \xymatrix{ X\ar[r]\ar[d] & {\mathcal{X}}\ar[d]\\ \ast\ar[r] & S }$$ Then, ${\mathcal{C}}$ and ${\mathcal{D}}$ are fibered groupoids over ${\mathfrak{An}}$ and we define a morphism $G:{\mathcal{C}}{\longrightarrow}{\mathcal{D}}$ as follows: For $\delta\in{\mathcal{C}}(S)$, let ${\mathcal{X}}$ be the space germ with ${\mathcal{O}}_{{\mathcal{X}}}=H^0(R_S,\delta+s)$ and ${\mathcal{X}}{\longrightarrow}S$ the composition of the closed embedding ${\mathcal{X}}{\longrightarrow}S\times P$ and the canonical projection $S\times P{\longrightarrow}S$. Obviously, there is a cartesian diagram (\[deform\]). I.e., $G(\delta):={\mathcal{X}}{\longrightarrow}S$ is a deformation of $X$. We recall the proof of the well-known fact that $G$ is smooth.\ Let $(A,{\mathfrak{m}})$ be a local analytic algebra, $B$ a graded, g-finite free $A$-algebra and $C$ a flat DG-algebra over $A$. For $A$-modules $M$, we set $M':=M\hat{{\otimes}}_AA/{\mathfrak{m}}$. The following statement is a special case of Proposition 8.20 in Chapter I of [@BiKo]: \[bick\] Let $v'\in{\operatorname{Der}}_{B'_0}^1(B',B')$ be a differential and $\phi':B'{\longrightarrow}C'$ a surjective quasi-isomorphism of DG-algebras over $A'$. Then, there is a differential $v\in{\operatorname{Der}}^1_{B_0}(B,B)$, lifting $v'$, and a surjective quasi-isomorphism $\phi:B{\longrightarrow}C$ of DG-algebras, lifting $\phi'$. For all $S$ in ${\mathfrak{An}}$, $G(S):{\mathcal{C}}(S){\longrightarrow}{\mathcal{D}}(S)$ is surjective. For ${\mathcal{X}}{\longrightarrow}S$ in ${\mathcal{D}}(S)$, we have to find an ${\mathcal{O}}_S$-derivation $\delta:R_S{\longrightarrow}R_S$ of degree 1 with $\delta(0)=0$ such that $\delta+s$ is a differential and a surjective quasi-isomorphism $(R_S,s+\delta){\longrightarrow}{\mathcal{O}}_{{\mathcal{X}}}$.
Since $R_S\hat{{\otimes}}_{{\mathcal{O}}_S}{\mathbb{C}}=R$ and ${\mathcal{O}}_{{\mathcal{X}}}\hat{{\otimes}}_{{\mathcal{O}}_S}{\mathbb{C}}={\mathcal{O}}_X$, the existence follows by Proposition \[bick\]. \[smooth\] $G$ is smooth. We have to show that for each $\delta\in{\mathcal{C}}(S)$ and each morphism $$\xymatrix{ {\mathcal{X}}:=V(S\times P,\delta+s)\ar[r]\ar[d] & {\mathcal{X}}'\ar[d]\\ S\ar[r] & S' }$$ of deformations of $X$, there exists $\delta'\in{\mathcal{C}}(S')$ such that $G(\delta')={\mathcal{X}}'$ and a cartesian diagram $$\xymatrix{ (R_{S'},\delta'+s)\ar[r] & (R_S,\delta+s)\\ {\mathcal{O}}_{S'}\ar[u]\ar[r] & {\mathcal{O}}_S\ar[u] }$$ Setting $A:={\mathcal{O}}_{S'}$, this follows by Proposition \[bick\]. In the literature (see [@BiKo], for instance), the deformation functor is defined such that a space germ $S$ maps to the quotient of ${\mathcal{C}}(S)$ by the Lie group, associated to the Lie algebra ${\operatorname{Der}}^0(R_S,R_S)$. In fact, $G$ factors through this quotient and the first factor is even “minimal smooth”. For the construction here, we don’t need to consider this group action to get semi-universal deformations. One can say that the group action is replaced by the passage to a minimal model. A formal semi-universal deformation =================================== In this section, we apply the new method for the construction of formal semi-universal deformations to isolated singularities $X$. Let $(M,Q^M):=F(X)$ be the formal DG-manifold in ${\text{\texttt{DG-Manf}}}^MG$, assigned to the space germ $X$. As in Section \[functors\], denote the resolvent of $A={\mathcal{O}}_X$, having $S(M)^\ast$ as completion, by $(R,s)$.\ By Theorem 5.13 of [@Schuh1], there is a semi-universal deformation $(V,Q^V,Q)$ of $(M,Q^M)$. Recall that as graded modules $V=H[1]$, where $H$ denotes the cohomology of ${\operatorname{Coder}}(S(M),S(M))$, i.e. the tangent cohomology of $X$.
It is well-known that $H$ is g-finite.\ We apply the functor $V$ to the morphism $(V\times M,Q^V+Q^M+Q){\longrightarrow}(V,Q^V)$ and get a morphism ${\mathcal{Y}}{\longrightarrow}Y$ in ${\mathfrak{Anf}}$. The morphism ${\mathcal{Y}}{\longrightarrow}Y$ is a formal semi-universal deformation of the space germ $X$. Let $$\xymatrix{ {\mathcal{X}}\ar[d] & X\ar[l]\ar[d]\\ S & \ast\ar[l] }$$ be any formal deformation of $X$. By Corollary \[smooth\], there is a morphism of the deformation ${\mathcal{X}}{\longrightarrow}S$ to an embedded deformation ${\tilde{\mathcal{X}}}{\longrightarrow}S$, where ${\tilde{\mathcal{X}}}$ is such that ${\mathcal{O}}_{{\tilde{\mathcal{X}}}}=H^0(R_S,s+\delta)$, for a certain $\delta\in{\mathcal{C}}(S)$ (see Section \[embdef\]). I.e. there is a cartesian diagram $$\xymatrix{ {\tilde{\mathcal{X}}}\ar[d] & {\mathcal{X}}\ar[l]\ar[d]\\ S & S\ar[l] }$$ Set $(B,Q^B):=F(S)$. By Proposition \[tan\], to $\delta$, there corresponds a coderivation $Q_\delta$ in ${\operatorname{Coder}}^{+1}(S(B\times M),S(B\times M))$, defining a deformation $(B,Q^B,Q_\delta)$ of $(M,Q^M)$. Since $(V,Q^V)$ is semi-universal, there is a morphism $$\xymatrix{ (B\times M,Q^B+Q^M+Q_\delta)\ar[r]\ar[d] & (V\times M,Q^V+Q^M+Q)\ar[d]\\ (B,Q^B)\ar[r] & (V,Q^V) }$$ of deformations. Application of the functor $V$ gives a cartesian diagram $$\xymatrix{ {\tilde{\mathcal{X}}}\ar[r]\ar[d] & {\mathcal{Y}}\ar[d]\\ S\ar[r] & Y }$$ which obviously respects the distinguished fiber $X{\longrightarrow}\ast$. This shows that ${\mathcal{Y}}{\longrightarrow}Y$ is versal. Since $Y$ is a formal analytic subgerm of $V^0=H^1$, we have ${\operatorname{dim}}(TY)\leq {\operatorname{dim}}H^1$. Thus, necessarily ${\mathcal{Y}}{\longrightarrow}Y$ is semi-universal (see Chapter 2.6 of [@Pala]). [lalalalala]{} Jürgen Bingener; Siegmund Kosarew: *Modulräume in der analytischen Geometrie,* Vieweg (1987) Siegmund Kosarew: *Local moduli spaces and Kuranishi maps,* Manuscr. Math. 
110, No.2, 237-249 (2003) Maxim Kontsevich: *Deformation quantization of Poisson manifolds I,* preprint q-alg/9709040 Tom Lada; Martin Markl: *Strongly homotopy Lie algebras,* Commun. Algebra 23, No.6, 2147-2161 (1995) Manetti: *Deformation of singularities via differential graded Lie algebras,* notes (2001) S.A. Merkulov: *Frobenius $\infty$ invariants of homotopy Gerstenhaber algebras I,* Duke Math. J. 105 (2000), 411-461 S.A. Merkulov: *Operads, deformation theory and F-manifolds,* preprint AG/0210478 Palamodov: *Deformations of complex spaces,* in Gindikin; Khenkin: Several complex variables IV, Encyclopaedia of mathematical sciences, vol. 10, Springer (1990), pp. 105-194 Frank Schuhmacher: *Deformation of $L_\infty$-algebras,* preprint QA/0405485 Institut Fourier\ UMR 5582\ BP 74\ 38402 Saint Martin d’Hères\ France\ frank.schuhmacher@ujf-grenoble.fr [^1]: Supported by a doctoral scholarship of the German Academic Exchange Service (DAAD) within the joint Hochschulsonderprogramm III of the federal and state governments
--- abstract: 'In this paper we complement the program concerning the application of symmetrization methods to nonlocal PDEs by providing new estimates, in the sense of mass concentration comparison, for solutions to linear fractional elliptic and parabolic PDEs with Neumann boundary conditions. These results are achieved by employing suitable symmetrization arguments to the Stinga-Torrea local extension problems, corresponding to the fractional boundary value problems considered. Sharp estimates are obtained first for elliptic equations and a certain number of consequences in terms of regularity estimates is then established. Finally, a parabolic symmetrization result is covered as an application of the elliptic concentration estimates in the implicit time discretization scheme.' author: - 'Bruno Volzone[^1]' title: ' **Symmetrization for fractional Neumann problems**\' --- Introduction {#sec.intro} ============ Following the study initiated in the work [@dBVol] and continued in [@VazVol1], [@VazVol2], [@VazVolSire], the spirit of this note is to provide a further insight on applications of classical symmetrization techniques to PDEs involving fractional Laplacian operators. In particular, we will focus on these methods with the aim of deriving optimal estimates, in the sense of mass concentration comparisons and their consequences, for solutions of nonlocal elliptic PDEs with *Neumann* boundary conditions, of the type $$\label{eq.0} \left\{ \begin{array} [c]{lll}\left( -\Delta\right)^{\sigma}u+cu=f\left( x\right) & & in\text{ }\Omega,\\[10pt] \dfrac{\partial u}{{\partial\nu}}=0 & & on\text{ }\partial\Omega, \end{array} \right. $$ for all the exponents $\sigma\in(0,1)$. Problem is posed in a smooth domain $\Omega$ of ${{\mathbb R}}^{N}$ ($N\geq2$), $\nu$ is the outward unit normal vector to $\partial \Omega$, $c$ is a nonnegative constant and the source term $f=f(x)$ is assumed to belong (for instance) to $L^{p}(\Omega)$ for suitable $p>1$. 
When $c=0$, we will require the natural compatibility condition $$\int_{\Omega}f\,dx=0.\label{compatib}$$ Using the results we shall achieve in the elliptic framework, we will also establish a comparison result for solutions to *Cauchy-Neumann* linear parabolic problems of the form $$\label{linearparabintr} \left\{ \begin{array} [c]{lll}u_{t}+(-\Delta)^{\sigma}u =f & & in\text{ }\Omega\times(0,T),\\[15pt] \dfrac{\partial u}{\partial\nu}=0 & & on\text{ }\partial \Omega\times[0,T],\\[15pt] u(x,0)=u_{0}(x) & & in\text{ }\Omega, \end{array} \right. $$ where $T>0$ and the data $f=f(x,t)$, $u_{0}=u_{0}(x)$ belong to suitable functional spaces.\ The first application of symmetrization techniques to linear Neumann elliptic problems goes back to the classical paper by Maderna-Salsa [@MadSalsa]. Let us briefly describe the main result achieved in their paper. Consider a second order linear elliptic Neumann problem of the form $$\label{SalsaNeum} \left\{ \begin{array} [c]{lll}Lu=f & & in\text{ }\Omega,\\[10pt] \dfrac{\partial u}{\partial\nu}=0 & & on\text{ }\partial\Omega, \end{array} \right. $$ where $$Lu=-\sum_{i,j} \partial_i(a_{ij}\partial_j u)\,,\label{ellipticop}$$ posed in a smooth bounded domain $\Omega\subseteq {{\mathbb R}^N}$; the coefficients $\{a_{ij}\}$ are assumed to be bounded, measurable and satisfy the usual normalized ellipticity condition $$\sum_{i,j}a_{ij}(x)\xi_{i}\xi_{j}\geq|\xi|^{2}\quad \text{for a.e. }x\in\Omega,\,\forall\xi\in{{\mathbb R}}^{N};\label{elliptcond}$$ finally, we impose the compatibility condition (\[compatib\]).
The absence of homogeneous Dirichlet boundary conditions prevents us from using some features of the classical analysis introduced by Talenti [@Talenti1], which leads to a pointwise comparison between the symmetrized version of the actual solution $u(x)$ of the problem and the radially symmetric solution $v(|x|)$ of some radially symmetric model problem posed in a ball with the same volume as $\Omega$: indeed, a rough explanation of this issue is that the Neumann boundary conditions imply that the level sets $\left\{x\in\Omega:|u(x)|>t\right\}$ of a solution $u$ are not compactly contained in $\Omega$ (as in the zero Dirichlet data case), so a part of the boundary of such sets may be contained in $\partial\Omega$. This forces the use of the *relative* isoperimetric inequality, which says that for any measurable subset $E$ of $\Omega$ one has $$\left[\min\left\{|E|,|\Omega\setminus E|\right\}\right]^{1-\frac{1}{N}}\leq Q P_{\Omega}(E)\label{relisop}$$ where $\left\vert E\right\vert $ is the $N$-dimensional Lebesgue measure of $E$, $P_{\Omega}(E)$ is the perimeter (in the De Giorgi sense) of $E$ in $\Omega$, and the best value of the constant $Q$ in (\[relisop\]) depends on $\Omega$. Then, the choice of the classical truncation functions introduced in [@Talenti1] as test functions in the weak formulation of (\[SalsaNeum\]) leads to the issue of choosing the minimum value in (\[relisop\]), where $E$ is the level set $\left\{x\in\Omega:|u(x)|>t\right\}$ for some $t\geq0$.
A key role in solving this problem is played by the so-called *median* $\mathsf{m}(u)$ defined by $$\mathsf{m}(u)=\inf\left\{k\in{{\mathbb R}}:|\left\{x\in\Omega:u(x)>k\right\}|\leq |\Omega|/2\right\}:\label{median}$$ indeed, the function $u_{1}=u-\mathsf{m}(u)$ is still a solution to the homogeneous Neumann problem (\[SalsaNeum\]), and its remarkable property lies in the fact that $$|\text{sprt }u_{1}^{+}|\leq|\Omega|/2,\quad |\text{sprt }u_{1}^{-}|\leq|\Omega|/2,$$ where $u_{1}^{+}$, $u_{1}^{-}$ are the positive and the negative parts of $u_{1}$ respectively. Then testing the first equation in (\[SalsaNeum\]) with the test functions of [@Talenti1] constructed on the level sets of $u_{1}^{\pm}$ gives that the minimum in (\[relisop\]) is achieved by $|E|$, where $E=\left\{x\in\Omega: u_{1}^{\pm}(x)>t\right\}$: then the classical method shown in [@Talenti1] leads naturally to the following two pointwise estimates in the ball $B$ centered at the origin such that $|B|=|\Omega|/2$: $$(u_{1}^{+})^{\#}(x)\leq v_{1}(x)\label{firstpointest}$$ $$(u_{1}^{-})^{\#}(x)\leq v_{2}(x),\label{secondpointest}$$ where $(u_{1}^{\pm})^{\#}$ is the *Schwarz* decreasing rearrangement of $(u_{1}^{\pm})$ (see Section 2 for its precise definition and related properties) and $v_{i}$, $i=1,2$, is the solution of the *Dirichlet* problem $$\label{DirichNeum} \left\{ \begin{array} [c]{lll}-\gamma\Delta v_{i}=f_{i}^{\#}\left( x\right) & & in\text{ }B,\\[10pt] v_{i}=0 & & on\text{ }\partial B, \end{array} \right. $$ where $f_{1}$, $f_{2}$ are the positive and negative part of $f$ respectively and $\gamma=1/(N\omega_{N}^{1/N}Q)^{2}$, with $\omega_{N}$ the measure of the unit ball in ${{\mathbb R}}^{N}$, so that $N\omega_{N}^{1/N}$ is the best constant in the classical isoperimetric inequality.
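The median $\mathsf{m}(u)$ and the support property above are easy to illustrate in a discrete setting. The Python sketch below (our own discrete analogue, assuming $u$ is sampled on cells of equal measure; it is not part of the original argument) computes $\mathsf{m}(u)$ as the smallest sample value $k$ with $\#\{u>k\}\leq n/2$, and verifies that both $u_{1}^{+}$ and $u_{1}^{-}$ are then supported on at most half of the cells:

```python
import numpy as np

def median_level(u):
    """Discrete analogue of m(u) = inf{k : |{u > k}| <= |Omega|/2}."""
    s = np.sort(u)
    n = len(u)
    # the smallest sample value k with #{u > k} <= n/2
    return s[int(np.ceil(n / 2)) - 1]

rng = np.random.default_rng(0)
u = rng.normal(size=101)               # samples of u on equal-measure cells
u1 = u - median_level(u)
assert np.sum(u1 > 0) <= len(u) / 2    # |sprt u1^+| <= |Omega|/2
assert np.sum(u1 < 0) <= len(u) / 2    # |sprt u1^-| <= |Omega|/2
```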
The meaningful result represented by estimates (\[firstpointest\])-(\[secondpointest\]) shows on the one hand that there is no “worst” problem (unlike the Dirichlet case), among the class of Neumann problems defined by fixing the distribution function of $f$ and the measure of the ground domain, to which the problem could be compared in the sense of a pointwise comparison; on the other hand, the same inequalities (\[firstpointest\])-(\[secondpointest\]), along with the estimates with sharp constants obtained for the Dirichlet problem in [@Talenti1], easily allow one to derive some optimal regularity estimates for the original solution $u$ in terms of the data $f$.\ Since the paper [@MadSalsa], many other works dealing with symmetrization in Neumann problems have enriched the existing literature with interesting developments and sharper results: among them, it is worth mentioning the contributions given in [@AlTrMat], [@Ferone], [@BettaNeum], [@FeroneMercaldoStein] for the linear case, and in the paper [@AlCianMAz] for the nonlinear elliptic framework. Furthermore, we refer to [@Bramanti] for an interesting treatment of the linear parabolic case. Proceeding as in the papers [@dBVol], [@VazVol1], [@VazVol2], [@VazVolSire], due to the nonlocal nature of problem (\[eq.0\]), it will be essential to link such a problem to a suitable, local *extension problem*, whose solution $w(x,y)$, a *harmonic extension* of $u$, is defined on the infinite cylinder $\mathcal{C}_{\Omega}=\Omega\times(0,\infty)$, to which classical symmetrization techniques (with respect to the $x\in\Omega$ variable) can be applied: the issues arising in this approach will be mainly due to the Neumann boundary conditions and the presence of the “extra” variable $y\geq0$, which is fixed in the symmetrization arguments, an important detail implying that reaching a pointwise comparison is hopeless.
Then an integral (or mass concentration) comparison is expected, and since $u$ is the trace of $w$ over $\Omega\times\left\{0\right\}$, once we obtain a comparison result for the extension $w$ of $u$, an estimate for $u$ is immediately derived.\ We notice that problem can be rewritten as $$(-\Delta_{N})^{\sigma}u+cu=f$$ where the operator $(-\Delta_{N})^{\sigma}$ is the so-called *spectral Neumann fractional Laplacian*, whose definition and domain encode in particular the homogeneous Neumann boundary conditions: then the existence of an extension problem, associated to $(-\Delta_{N})^{\sigma}$, with zero Neumann boundary conditions on the lateral surface $\partial_{L}\mathcal{C}_{\Omega}$ of the cylinder $\mathcal{C}_{\Omega}$, follows from [@Stinga-Torrea Theorem 1.1], generalizing the by now classical result by Caffarelli and Silvestre [@Caffarelli-Silvestre]. We also refer to [@Gale-Miana-Stinga] for a result of this nature in an even more general setting. [Organization of the paper and main results.]{} Section \[Sec2\] contains the preliminaries about symmetrization and mass concentration that we will largely use throughout the paper. In Section \[Sec3\] we give all the necessary functional background related to problem , which is naturally connected to the very definition of the operator $(-\Delta_{N})^{\sigma}$. Section \[Sec4\] is entirely devoted to the introduction and the proof of the main result, which consists in comparing the solution $u$ to with $c=0$ with the solution of the following Dirichlet radial problem $$\label{symmetriz} \left\{ \begin{array} [c]{lll}\left( -\gamma\Delta\right)^{\sigma}v=f_{1}^{\#}(x)+f_{2}^{\#}\left( x\right) & & in\text{ }B,\\ [10pt] v=0 & & on\text{ }\partial B \end{array} \right. $$ where the operator $\left( -\gamma\Delta\right)^{\sigma}$ is the so-called *spectral Dirichlet fractional Laplacian* $(-\gamma\Delta_{D})^{\sigma}$, and $f_{1}$, $f_{2}$ are the positive and negative parts of $f$ respectively.
Moreover, making use of some results of [@dBVol], we then derive a number of important regularity estimates for $u$ in terms of the datum $f$. Section \[zeroordersec\] provides the generalization of the comparison result shown in Section \[Sec4\] to problems appearing in the form with a positive constant $c$. This last result is applied in Section \[parabolic\] in the iterations of the parabolic implicit time discretization scheme, which allows us to establish an interesting concentration comparison for solutions of linear parabolic problems with Neumann boundary conditions of the form . On symmetrization and related properties {#Sec2} ======================================== In this Section we briefly recall the basic notions of Schwarz symmetrization and some related fundamental properties. Readers interested in more details of the theory are referred to the classical monographs [@Hardy], [@Bennett], [@Kesavan], [@Bandle] or to the paper [@Talentirearrinv].\ A measurable real function $f$ defined on ${{\mathbb R}}^{N}$ is called *radially symmetric* (or *radial*) if there is a function $\widetilde{f}:[0,\infty)\rightarrow {{\mathbb R}}$ such that $f(x)=\widetilde{f}(|x|)$ for all $x\in {{\mathbb R}}^{N}$. We will often write $f(x)=f(r)$, $r=|x|\ge0$, for such functions by abuse of notation. We say that $f$ is *rearranged* if it is radial, nonnegative and $\widetilde{f}$ is a right-continuous, non-increasing function of $r>0$. A similar definition applies to real functions defined on a ball $B_{R}(0)=\left\{x\in{{\mathbb R}}^{N}:|x|<R\right\}$. Now, let $\Omega$ be an open set of $\mathbb{R} ^{N}$ and $f$ be a real measurable function on $\Omega$.
We then define the *distribution function* $\mu_{f}$ of $f$ as$$\mu_{f}\left( k\right) =\left\vert \left\{ x\in\Omega:\left\vert f\left( x\right) \right\vert >k\right\} \right\vert \text{ , }k\geq0,$$ and the *one dimensional decreasing rearrangement* of $f$ as$$f^{\ast}\left( s\right) =\sup\left\{ k\geq0:\mu_{f}\left( k\right) >s\right\} \text{ , }s\in\left( 0,\left\vert \Omega\right\vert \right).$$ We may also extend $f^{\ast}$ by zero on $[|\Omega|,\infty)$ if $\Omega$ is bounded. From this definition it follows that $\mu_{f^{\ast}}=\mu_{f}$ (*i.e.* $f$ and $f^{\ast}$ are equi-distributed) and that $f^{\ast}$ is exactly the *generalized right inverse function* of $\mu_{f}$. Furthermore, if $\Omega^{\#}$ is the ball of $\mathbb{R} ^{N}$ centered at the origin having the same Lebesgue measure as $\Omega,$ we define the function $$f^{\#}\left( x\right) =f^{\ast}(\omega_{N}\left\vert x\right\vert ^{N})\text{ \ , }x\in\Omega^{\#},$$ which will be called the *spherical decreasing rearrangement*, or *Schwarz decreasing rearrangement*, of $f$. We easily infer that $f$ is rearranged if and only if $f=f^{\#}$. The main properties that will be useful in what follows are the conservation of the $L^{p}$ norms: for all $p\in[1,\infty]$ $$\|f\|_{L^{p}(\Omega)}=\|f^{\ast}\|_{L^{p}(0,|\Omega|)}=\|f^{\#}\|_{L^{p}(\Omega^{\#})}\,,$$ as well as the classical Hardy-Littlewood inequality $$\int_{\Omega}\left\vert f\left( x\right) g\left( x\right) \right\vert dx\leq\int_{0}^{\left\vert \Omega\right\vert }f^{\ast }\left( s\right) g^{\ast}\left( s\right) ds=\int_{\Omega^{\#}}f^{\#}(x)\,g^{\#}(x)\,dx\,, \label{HardyLit}$$ where $f,g$ are measurable functions on $\Omega$.
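Both properties are easy to check numerically once $\Omega$ is discretized into cells of equal measure, in which case $f^{\ast}$ is simply $|f|$ sorted in non-increasing order. The following Python sketch (the discretization and the random sample data are our own illustrative choices) verifies the conservation of the $L^{p}$ norms and the Hardy-Littlewood inequality:

```python
import random

random.seed(0)
N = 1000
f = [random.gauss(0.0, 1.0) for _ in range(N)]   # f, g sampled on N equal cells
g = [random.gauss(0.0, 1.0) for _ in range(N)]

# discrete decreasing rearrangement: |f| sorted in non-increasing order
f_star = sorted((abs(v) for v in f), reverse=True)
g_star = sorted((abs(v) for v in g), reverse=True)

# equidistribution preserves every L^p norm (checked here for p = 1, 2, 5)
for p in (1, 2, 5):
    assert abs(sum(abs(v)**p for v in f) - sum(v**p for v in f_star)) < 1e-8

# Hardy-Littlewood: int |f g| <= int f* g*  (rearrangement inequality)
lhs = sum(abs(a * b) for a, b in zip(f, g))
rhs = sum(a * b for a, b in zip(f_star, g_star))
assert lhs <= rhs + 1e-12
```

The last assertion is the discrete form of , with equality when $f$ and $g$ are similarly ordered.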
$\bullet$ We will often deal with two-variable functions of the type $$\label{f}f:\left( x,y\right) \in\mathcal{C}_{\Omega}\rightarrow f\left( x,y\right) \in{\mathbb{R}}$$ defined on the cylinder $\mathcal{C}_{\Omega}:=\Omega\times\left( 0,+\infty\right) $, and measurable with respect to $x.$ In that case, it will be convenient to define the so-called [*Steiner symmetrization*]{} of $\mathcal{C}_{\Omega}$ with respect to the variable $x$, namely the set   In addition, we will denote by $\mu _{f}\left( k,y\right) $ and $f^{\ast}\left( s,y\right) $ the distribution function and the decreasing rearrangement of (\[f\]), with respect to $x$ for $y$ fixed, and we will also define the function$$f^{\#}\left( x,y\right) =f^{\ast}(\omega_{N}|x|^{N},y)$$ which is called the *Steiner symmetrization* of $f$, with respect to the line $x=0.$ Clearly, $f^{\#}$ is a spherically symmetric and decreasing function with respect to $x$, for any fixed $y$. $\bullet$ We recall now two important differentiation formulas that will prominently come into play in our arguments. They are basically used when dealing with the derivation of sharp estimates for the rearrangement $u^{\ast}$ of a solution $u$ to a certain parabolic problem, since in that context it is essential to differentiate with respect to the extra variable $y$ under the integral symbol, for functions defined in the form $$\int_{\left\{x:u(x,y)>u^{*}(s,y)\right\}}\frac{\partial u}{\partial y}(x,y)\,dx\,.$$ Here we recall two formulas, of first and second order, available in literature. The following proposition can be found in [@Mossino], and is a generalization of a well-known result by C. Bandle (see [@Band2]). \[BANDLE\] Suppose that $f\in H^{1}(0,T;L^{2}(\Omega))$ is nonnegative, for some $T>0$. Then $$f^{*}\in H^{1}(0,T;L^{2}(0,|\Omega|))$$ and if $|\left\{x:f(x,t)=f^{*}(s,t)\right\}|=0$ for a.e. 
$(s,t)\in(0,|\Omega|)\times(0,T)$, the following differentiation formula holds: $$\int_{\left\{x:f(x,y)>f^{*}(s,y)\right\}}\frac{\partial f}{\partial y}(x,y)\,dx=\int_{0}^{s}\frac{\partial f^{*}}{\partial y}(\tau,y)\,d\tau.\label{Rakotoson}$$ Moreover, the following second order differentiation formula (which was also proved in [@AlLionTrom] in a more regular framework) is due to Mercaldo and Ferone (see [@FeroneMercaldoStein]): Let us choose a nonnegative function $f\in W^{2,\infty}\left( \mathcal{C}_{\Omega}\right) $. Then for almost every $y\in(0,+\infty)$ the following differentiation formula holds: $$\begin{aligned} \int_{\left\{x:f\left( x,y\right) >f^{\ast}\left( s,y\right)\right\} }\frac{\partial^{2}f}{\partial y^{2}}\left( x,y\right) dx & =\frac{\partial^{2}}{\partial y^{2}}\int_{0}^{s}f^{\ast}\left( \tau,y\right) d\tau-\int_{\left\{x:f\left( x,y\right) =f^{\ast}\left( s,y\right)\right\} }\frac{\left( \frac{\partial f}{\partial y}\left( x,y\right) \right) ^{2}}{\left\vert \nabla _{x}f\right\vert }\,d\mathcal{H}^{N-1}\left( x\right) \nonumber\\ & \!\!\!+\left( \int_{\left\{x:f\left( x,y\right) =f^{\ast}\left( s,y\right)\right\}} \!\frac{\frac{\partial f}{\partial y}\left( x,y\right) }{\left\vert \nabla_{x}f\right\vert }\,d\mathcal{H}^{N-1}\left( x\right) \!\right) ^{2}\!\left( \!\int_{\left\{x:f\left( x,y\right) =f^{\ast}\left( s,y\right)\right\} }\!\frac{1}{\left\vert \nabla_{x}f\right\vert }\,d\mathcal{H}^{N-1}\left( x\right) \!\right) ^{-1}\! .\label{Ferone-Mercaldo}\end{aligned}$$ Mass concentration ------------------ Since we will provide estimates of the solutions of our fractional elliptic and parabolic problems in terms of their integrals, the following definition, taken from [@Vsym82], is of basic importance. Let $f,g\in L^{1}_{loc}({{\mathbb R}}^{N})$ be two nonnegative radially symmetric functions on ${{\mathbb R}}^{N}$. 
We say that $f$ is less concentrated than $g$, and we write $f\prec g$, if for all $R>0$ $$\int_{B_{R}(0)}f(x)dx\leq \int_{B_{R}(0)}g(x)dx.$$ The partial order relationship $\prec$ is called *comparison of mass concentrations*. Of course, this definition can be suitably adapted if $f,g$ are radially symmetric and locally integrable functions on a ball $B_{R}$. Moreover, we have that $f\prec g$ if and only if $$\int_{0}^{s}f^{\ast}(\tau)d\tau\leq \int_{0}^{s}g^{\ast}(\tau)d\tau,$$ for all $s\geq0$. The comparison of mass concentrations enjoys a nice equivalent formulation if $f$ and $g$ are rearranged. Indeed, the following result holds (for the proof we refer to [@Chong], [@VANS05]): \[lemma1\] Let $f,g\in L^{1}(\Omega)$ be two rearranged functions on a ball $\Omega=B_{R}(0)$. Then $f\prec g$ if and only if for every convex nondecreasing function $\Phi:[0,\infty)\rightarrow [0,\infty)$ with $\Phi(0)=0$ we have $$\int_{\Omega}\Phi(f(x))\,dx\leq \int_{\Omega}\Phi(g(x))\,dx.$$ This result still holds if $R=\infty$ and $f,g\in L^{1}_{loc}({{\mathbb R}}^{N})$ with $g\rightarrow0$ as $|x|\rightarrow\infty$, in the sense that $\mu_{g}(k)<\infty$ for all $k>0$. From this Lemma it easily follows that if $f\prec g$ and $f,g$ are rearranged, then $$\|f\|_{L^{p}(\Omega)}\leq \|g\|_{L^{p}(\Omega)}\quad \forall p\in[1,\infty].$$ Functional background {#Sec3} ===================== In this Section we provide a self-contained description of the functional background which is necessary for the well-posedness of problems of the type . Most of the material we present here is excerpted from the papers [@StingaVolz], [@PellacciMontef], to which we refer the interested reader for extra details. Furthermore, we point out that another version of a nonlocal elliptic Neumann problem is available in the literature, see *e.g.* [@ValdinoRos].
Let us consider the homogeneous Neumann eigenvalue problem for the Laplacian on a smooth bounded domain $\Omega$ of ${{\mathbb R}}^{N}$: $$\label{eigen} \left\{ \begin{array} [c]{lll}-\Delta\varphi=\lambda\varphi & & in\text{ }\Omega,\\[10pt] \dfrac{\partial \varphi}{{\partial\nu}}=0 & & on\text{ }\partial\Omega. \end{array} \right. $$ It is well known (see for example [@Evans; @Gilbarg-Trudinger]) that there exists a sequence of nonnegative eigenvalues $\{\lambda_{k}\}_{k\in{{\mathbb N}}_{0}}$ corresponding to eigenfunctions $\{\varphi_{k}\}_{k\in{{\mathbb N}}_{0}}$ in $H^{1}(\Omega)$, the latter being weak solutions to . We have that $\lambda_{0}=0$, $\varphi_{0}=1/\sqrt{|\Omega|}$, $\int_{\Omega}\varphi_{k}\, dx=0$ and $\varphi_k$ belongs to $C^\infty(\overline{\Omega})$ for all $k\geq1$. In order to introduce the spectral Neumann fractional Laplacian $(-\Delta_N)^{\sigma}$, we define its domain as $$\mathcal{H}^{\sigma}(\Omega)\equiv{\operatorname{Dom}}((-\Delta_N)^{\sigma}):= \Big\{u\in L^2(\Omega):\sum_{k=1}^\infty\lambda_k^{\sigma}\,|\langle u,\varphi_k \rangle_{L^2(\Omega)}|^2<\infty\Big\},$$ which is a Hilbert space equipped with the scalar product $$\langle u,v\rangle_{\mathcal{H}^{\sigma}(\Omega)}:=\langle u,v\rangle_{L^{2}(\Omega)}+\sum_{k=1}^{\infty}\lambda_{k}^{\sigma}\, \langle u,\varphi_k\rangle_{L^2(\Omega)}\langle v,\varphi_k\rangle_{L^2(\Omega)},$$ defining the following norm in $\mathcal{H}^{\sigma}(\Omega)$: $$\|u\|^2_{\mathcal{H}^{\sigma}(\Omega)}=\|u\|_{L^2(\Omega)}^{2}+\sum_{k=1}^{\infty}\lambda_{k}^{\sigma}\,|\langle u,\varphi_k\rangle_{L^2(\Omega)}|^{2}.$$ Since $\lambda_k\nearrow\infty$, it is obvious that $C^\infty(\overline{\Omega})\subset H^1(\Omega)\subset\mathcal{H}^{\sigma}(\Omega)$. 
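The eigenpairs $(\lambda_{k},\varphi_{k})$ completely determine the fractional operator through the spectral sums appearing in the definition of $\mathcal{H}^{\sigma}(\Omega)$. As a purely illustrative sanity check (the one-dimensional interval $\Omega=(0,\pi)$, with normalized eigenfunctions $\varphi_{k}(x)=\sqrt{2/\pi}\cos(kx)$ and eigenvalues $\lambda_{k}=k^{2}$, is our own toy choice, and the truncation level is arbitrary), the following Python sketch applies a truncated spectral sum $\sum_{k}\lambda_{k}^{\sigma}\langle u,\varphi_{k}\rangle_{L^{2}(\Omega)}\varphi_{k}$ to an eigenfunction:

```python
import math

# toy spectral Neumann fractional Laplacian on Omega = (0, pi):
# eigenfunctions phi_k(x) = sqrt(2/pi) cos(kx), eigenvalues lambda_k = k^2
N, K, sigma = 400, 20, 0.5
h = math.pi / N
xs = [(j + 0.5) * h for j in range(N)]        # midpoint quadrature grid

def frac_lap_neumann(u, sigma):
    out = [0.0] * N
    for k in range(1, K + 1):
        phi = [math.sqrt(2 / math.pi) * math.cos(k * x) for x in xs]
        c = sum(ui * p for ui, p in zip(u, phi)) * h   # <u, phi_k>_{L^2}
        lam = float(k * k) ** sigma
        out = [o + lam * c * p for o, p in zip(out, phi)]
    return out

u = [math.cos(3 * x) for x in xs]             # mean-zero eigenfunction phi_3 (unnormalized)
v = frac_lap_neumann(u, sigma)
# on phi_3 the operator acts as multiplication by lambda_3^sigma = 9^(1/2) = 3
assert max(abs(a - 3.0 * b) for a, b in zip(v, u)) < 1e-8
```

The midpoint rule is exact for the trigonometric orthogonality relations involved, so the single surviving mode reproduces $\lambda_{3}^{\sigma}u$ to machine precision.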
For $u\in\mathcal{H}^{\sigma}(\Omega)$, we define $(-\Delta_N)^{\sigma}u$ as [an]{} element in the dual space $\mathcal{H}^{\sigma}(\Omega)'$ given by $$(-\Delta_N)^{\sigma}u=\sum_{k=1}^\infty\lambda_k^{\sigma}\,\langle u,\varphi_k\rangle_{L^2(\Omega)}\varphi_k, \quad\hbox{in}~\mathcal{H}^{\sigma}(\Omega)',$$ that is, for any function $v\in\mathcal{H}^{\sigma}(\Omega)$, $$\langle(-\Delta_N)^{\sigma}u,v\rangle=\sum_{k=1}^\infty\lambda_k^{\sigma}\,\langle u,\varphi_k\rangle_{L^2(\Omega)} \langle v,\varphi_k\rangle_{L^2(\Omega)}.$$ Notice that the set of constant functions is the nontrivial kernel [of $(-\Delta_N)^{\sigma}$ in $\mathcal{H}^{\sigma}(\Omega)$]{}. The last identity can be rewritten as $$\langle(-\Delta_N)^{\sigma}u,v\rangle=\int_\Omega\left[(-\Delta_N)^{\sigma/2}u\right]\left[(-\Delta_N)^{\sigma/2}v\right]\,dx,\quad\hbox{for every}~v\in\mathcal{H}^{\sigma}(\Omega),$$ where $(-\Delta_{{N}})^{\sigma/2}$ is defined by taking the power $\sigma/2$ of the eigenvalues $\lambda_k$.\ Actually it is possible to identify (see [@StingaVolz Theorem 2.4] and the generalization in [@Caffarelli-Stinga]) the domain $\mathcal{H}^{\sigma}(\Omega)$ of $(-\Delta_N)^{\sigma}$ with the fractional Sobolev space $H^{\sigma}(\Omega)$. Now let us consider problem with $c=0$, that is the problem $$\label{eq.0.1} \left\{ \begin{array} [c]{lll}\left( -\Delta\right)^{\sigma}u=f\left( x\right) & & in\text{ }\Omega,\\[10pt] \dfrac{\partial u}{{\partial\nu}}=0 & & on\text{ }\partial\Omega. \end{array} \right. 
$$ which in our notation can be written in the form $$(-\Delta_{N})^{\sigma}u=f.$$ For a function $u\in L^{1}(\Omega)$ we set $$u_{\Omega}:=\frac{1}{|\Omega|}\int_{\Omega}u\,dx.$$ We define the functional spaces $$\mathfrak{H}^{\sigma}(\Omega):=\left\{u\in H^{\sigma}(\Omega):u_{\Omega}=0\right\}$$ and $$\mathscr{H}^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma}):=\left\{w\in H^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma}):(w(\cdot,y))_{\Omega}=0,\,\forall y\geq0\right\},$$ where $H^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})$ is the weighted Sobolev space with respect to the weight $y^{1-2\sigma}$. By [@PellacciMontef Lemma 2.2] the space $\mathscr{H}^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})$ can be equipped with the norm $$\|w\|_{\mathscr{H}^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})}=\left(\iint_{\mathcal{C}_{\Omega}}y^{1-2\sigma}|\nabla_{x,y}w|^{2}dx\,dy\right)^{1/2}.$$ It is possible to provide the following useful characterization of $\mathfrak{H}^{\sigma}(\Omega)$ (see [@PellacciMontef Proposition 2]): $$\begin{aligned} \mathfrak{H}^{\sigma}(\Omega)&=\left\{u={\operatorname{tr}}w:w\in \mathscr{H}^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma}) \right\}\\ &=\Big\{u\in L^2(\Omega):u_{\Omega}=0\ \text{and}\ \sum_{k=1}^\infty\lambda_k^{\sigma}|\langle u,\varphi_k \rangle_{L^2(\Omega)}|^2<\infty\Big\};\end{aligned}$$ moreover, $\mathfrak{H}^{\sigma}(\Omega)$ is a Hilbert space equipped with the Gagliardo seminorm $$\|u\|_{\mathfrak{H}^{\sigma}(\Omega)}:=[u]_{H^{\sigma}(\Omega)}:=\left(\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2\sigma}}\,dx\,dy\right)^{1/2}.$$ Now we wish to particularize the general extension result proved in [@Stinga-Torrea] to the operator $(-\Delta_N)^{\sigma}$ restricted to $\mathfrak{H}^{\sigma}(\Omega)$. \[extensth\] Let $u\in \mathfrak{H}^{\sigma}(\Omega)$.
Define $$w(x,y):=\sum_{k=1}^{\infty}\rho(\lambda_{k}^{1/2}y)\, \langle u,\varphi_k\rangle_{L^2(\Omega)}\varphi_{k}(x),\label{trueextens}$$ where the function $\rho(t)$ solves the problem $$\label{ODE} \left\{ \begin{array} [c]{lll}\rho^{\prime\prime}(t)+\frac{1-2\sigma}{t}\rho^{\prime}(t)=\rho(t) & & t>0,\\[5pt] -\lim_{t\rightarrow 0^{+}}t^{1-2\sigma}\rho^{\prime}(t)=\kappa_{\sigma},\\[5pt] \rho(0)=1, \end{array} \right. $$ where $$\kappa_{\sigma}:=\frac{2^{1-2\sigma}\,\Gamma(1-\sigma)}{\Gamma(\sigma)}.$$ Then $w\in \mathscr{H}^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})$ and it is the unique weak solution to the extension problem $$\label{extens1} \begin{cases} \dfrac{\partial^{2}w}{\partial y^{2}}+\dfrac{1-2\sigma}{y}\dfrac{\partial w}{\partial y}+\Delta_{x}w=0,&\hbox{in}~\mathcal{C}_{\Omega},\\ \\ \dfrac{\partial w}{\partial \nu_{x}}=0,&\hbox{on}~\partial_L\mathcal{C}_{\Omega},\\ \\ w(x,0)=u(x),&\hbox{on}~\Omega, \end{cases}$$ where $\nu$ is the outward normal to the lateral boundary $\partial_L\mathcal{C}_{\Omega}$ of $\mathcal{C}_{\Omega}$. More precisely, $$\iint_{\mathcal{C}_{\Omega}}y^{1-2\sigma}\big(\nabla_{x}w\cdot\nabla_{x}\psi+ w_y\psi_y\big) \,dx\,dy=0,$$ for all test functions $\psi\in H^1(\mathcal{C}_{\Omega},y^{1-2\sigma})$ with zero trace over $\Omega$, i.e. ${\operatorname{tr}}_\Omega\psi=0$, and $\lim_{y\to0^+}w(x,y)=u(x)$ in $L^2(\Omega)$. Furthermore, the function $w$ is the unique minimizer of the energy functional $$\mathcal{F}(w)=\frac{1}{2}\iint_{\mathcal{C}_{\Omega}}y^{1-2\sigma}\big(|\nabla_{x}w|^{2}+|w_{y}|^{2}\big)\,dx\,dy,\label{energufunct}$$ over the set $\mathcal{U}=\left\{w\in H^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma}):\,{\operatorname{tr}}_{\Omega}w=u\right\}$. We can also write $$\label{eq.5} w(x,y)=\frac{y^{2\sigma}}{4^{\sigma}\Gamma(\sigma)}\int_{0}^{\infty}e^{- y^2/(4t)}e^{t\Delta_N}u(x)\,\frac{dt}{t^{1+\sigma}}$$ where $e^{t\Delta_N}u(x)$ is the heat diffusion semigroup generated by the Neumann Laplacian acting on $u$. 
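It may be useful to record the simplest instance of this construction (an illustrative remark of ours): for $\sigma=1/2$ the ODE above reduces to $\rho^{\prime\prime}=\rho$, and since $$\kappa_{1/2}=\frac{2^{0}\,\Gamma(1/2)}{\Gamma(1/2)}=1,$$ the two conditions $\rho(0)=1$ and $-\lim_{t\rightarrow0^{+}}\rho^{\prime}(t)=1$ select the decaying solution $$\rho(t)=e^{-t},$$ so that the series defining $w$ becomes the classical harmonic extension $$w(x,y)=\sum_{k=1}^{\infty}e^{-\lambda_{k}^{1/2}y}\,\langle u,\varphi_k\rangle_{L^2(\Omega)}\varphi_{k}(x).$$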
An equivalent formula for $w$ is $$w(\cdot,y)=\frac{1}{\Gamma(\sigma)}\int_0^\infty e^{- y^2/(4t)}e^{t\Delta_N}((-\Delta_N)^{\sigma}u)\,\frac{dt}{t^{1-\sigma}}.$$ Moreover, $$\label{DirchNeuma} -\frac{1}{\kappa_{\sigma}}\lim_{y\to 0^+}y^{1-2\sigma}w_y=(-\Delta_N)^{\sigma}u,\quad\hbox{in}~\mathcal{H}^{\sigma}(\Omega)'.$$ For the proof of Theorem \[extensth\] see [@StingaVolz Theorem 2.1] and [@Stinga-Torrea Theorem 1.1]. We also notice that the solution $\rho$ to problem is explicit and given in terms of modified Bessel functions of the third kind (see [@Stinga-Torrea Section 3.1]).\ For any $u\in \mathfrak{H}^{\sigma}(\Omega)$, we will call the solution $w$ to problem the *Neumann harmonic extension of $u$*, and we write $w=E(u)$. With this definition at hand, we assume that $f\in \mathfrak{H}^{-\sigma}(\Omega):=\left\{g\in H^{\sigma}(\Omega)^{\prime}:\langle g,1\rangle=0\right\}$: if $f\in L^{2}(\Omega)$, this condition forces $f$ to satisfy the compatibility condition ; furthermore, it can be proved (see [@PellacciMontef Proposition 3]) that $\mathfrak{H}^{-\sigma}(\Omega)$ is actually isomorphic to the dual space $(\mathfrak{H}^{\sigma}(\Omega))^{\prime}$ of $\mathfrak{H}^{\sigma}(\Omega)$. Let us now consider problem , to which we associate the following extension problem $$\left\{ \begin{array} [c]{lll}\dfrac{\partial^{2}w}{\partial y^{2}}+\dfrac{1-2\sigma}{y}\dfrac{\partial w}{\partial y}+\Delta_{x}w=0 & & in\text{ }\mathcal{C}_{\Omega},\\[15pt] \ \dfrac{\partial w}{\partial \nu_{x}}=0 & & on\text{ }\partial_{L}\mathcal{C}_{\Omega},\\[15pt] \displaystyle{-\frac{1}{\kappa_{\sigma}}\lim_{y\rightarrow0^{+}}y^{1-2\sigma}\dfrac{\partial w}{\partial y}(x,y)}=f\left( x\right) & & in\text{ }\Omega. \end{array} \right. 
\label{extens}$$ We now give the following definition of weak solution of problem : \[weakdef1\] We say that a function $w\in \mathscr{H}^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})$ is a weak solution to if $$\iint_{\mathcal{C}_{\Omega}}y^{1-2\sigma}\big(\nabla_{x} w\cdot\nabla_{x} \psi+ w_{y}\psi_{y}\big)\,dx\,dy=\kappa_{\sigma}\langle f,{\operatorname{tr}}_\Omega \psi\rangle,\label{weakform1}$$ for every $\psi\in \mathscr{H}^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})$. By the classical Lax-Milgram Theorem, the existence and uniqueness of a weak solution to in the sense of Definition \[weakdef1\] is immediate, and the solution $w$ is explicit (see [@StingaVolz]). As a direct consequence, we have the following: Let $f\in\mathfrak{H}^{-\sigma}(\Omega)$ and assume that $w \in \mathscr{H}^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})$ is the weak solution to . Then $w$ takes the form and $u:={\operatorname{tr}}w\in \mathfrak{H}^{\sigma}(\Omega)$ is the unique (weak) solution in $\mathfrak{H}^{\sigma}(\Omega)$ to the linear problem . Moreover, the space $\mathscr{H}^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})$ for test functions in Definition \[weakdef1\] can actually be replaced by the whole space $H^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})$, as the following result shows (see [@PellacciMontef]): \[enlargementtestspace\] Let $f\in \mathfrak{H}^{-\sigma}(\Omega)$ and $w$ be the weak solution to problem . Then: [(i)]{} there exist positive constants $C,\,k$ such that for all $y>0$ $$\int_{\Omega}|\nabla w(x,y)|^{2}dx\leq C\,e^{-ky};$$ [(ii)]{} equation holds for any function $\psi\in H^{1}_{loc}(\mathcal{C}_{\Omega},y^{1-2\sigma})$ such that there is a positive constant $C$, uniform with respect
to $y>0$, for which $$\|\psi(\cdot,y)\|_{L^{2}(\Omega)}\leq C.$$ *Proposition \[enlargementtestspace\] allows us to choose $\psi(x,y)=\eta(y)\theta(x)$, with $\eta\in C_{0}^{\infty}(0,\infty)$ and $\theta\in H^{1}(\Omega)$, as a test function in Definition : therefore, integrating by parts and using Fubini’s Theorem we can conclude that (see also [@Caffarelli-Stinga]) $$\int_{\Omega}\nabla_{x} w(x,y)\cdot\nabla \theta(x)\,dx=\int_{\Omega}\left(w_{yy}+\frac{1-2\sigma}{y}w_{y}\right)\theta(x)dx\label{weakformlevelbylevel}$$ for all $\theta\in H^{1}(\Omega)$ and a.e. $y>0$. Moreover, since $w$ is defined by means of formula (or equivalently by ), we also notice that $w$ is smooth in $\mathcal{C}_{\Omega}$ (see [@Stinga-Torrea]).* It is clear that if $u\in H^{\sigma}(\Omega)$ solves , then $u-u_{\Omega}$ is the unique solution to the same problem in the smaller space $\mathfrak{H}^{\sigma}(\Omega)$: hence, if $\overline{u}$ is the unique weak solution to in the space $\mathfrak{H}^{\sigma}(\Omega)$, all the solutions in $H^{\sigma}(\Omega)$ to are of the form $u=\overline{u}+c$ for $c\in{{\mathbb R}}$. As far as problem for $c>0$ is concerned, the space in which to look for solutions of the extension problem is a direct generalization of the full description given in [@StingaVolz] (see [@Caffarelli-Stinga]). Indeed, if $u_{\Omega}\neq 0$ then the function $$\label{extnonzeraver} w(x,y)=\sum_{k=0}^{\infty}\rho(\lambda_{k}^{1/2}y)\, \langle u,\varphi_k\rangle_{L^2(\Omega)}\varphi_{k}(x)$$ is in general not in $L^2(\mathcal{C}_{\Omega},y^{1-2\sigma})$, but only its gradient is (see for instance the computations in [@ColoradoBrandle Lemma 4.3]). Therefore, in order to give a suitable definition of the Neumann harmonic extension of $u$, we first solve the extension problem with initial data $\tilde{u}=u-u_\Omega$, in order to find a function $\tilde{w}=E(\tilde{u})$.
Then we define $$w=E(u):=\tilde{w}+u_\Omega,$$ which clearly coincides with .\ Using the fact that the fractional Neumann Laplacian does not see constants, we have $$(-\Delta_N)^{\sigma}u=(-\Delta_N)^{\sigma}\tilde{u}= -\frac{1}{\kappa_{\sigma}}\lim_{y\rightarrow0^{+}}y^{1-2\sigma}\dfrac{\partial \tilde{w}}{\partial y}(x,y)= -\frac{1}{\kappa_{\sigma}}\lim_{y\rightarrow0^{+}}y^{1-2\sigma}\dfrac{\partial w}{\partial y}(x,y),\quad\hbox{in}~\mathcal{H}^{\sigma}(\Omega)',$$ thus we recover the local interpretation of the fractional Neumann Laplacian.\ Since is *formally* a solution to , we have to define the right functional space to which this extension belongs. Following [@StingaVolz], we introduce the space $\mathsf{H}^{\sigma,c}(\mathcal{C}_{\Omega})$ as the completion of $H^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})$ under the scalar product $$\label{eq:dos estrellas} (w,\psi)_{\sigma,c}=\iint_{\mathcal{C}_{\Omega}} y^{1-2\sigma}\nabla_{x,y} w\cdot\nabla_{x,y} \psi\,dx\,dy+c\kappa_{\sigma}\int_{\Omega}({\operatorname{tr}}_{\Omega}w)({\operatorname{tr}}_{\Omega}\psi)\,dx.$$ We denote by $\|\cdot\|_{\sigma,c}$ the associated norm: $$\|w\|_{\sigma,c}^2=\iint_{\mathcal{C}_{\Omega}}y^{1-2\sigma}|\nabla_{x,y} w|^2\,dx\,dy+c\kappa_{\sigma}\int_{\Omega}({\operatorname{tr}}_{\Omega}w)^2\,dx.$$ Notice that, for each $c>0$, $$H^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})\subset\mathsf{H}^{\sigma,c}(\mathcal{C}_{\Omega}),$$ as Hilbert spaces, where the inclusion is strict, since constant functions belong to $\mathsf{H}^{\sigma,c}(\mathcal{C}_{\Omega})$ but not to $H^{1}(\mathcal{C}_{\Omega},y^{1-2\sigma})$.\ From [@StingaVolz Theorem 2.4, Lemma 2.5] it follows that a unique trace embedding from $\mathsf{H}^{\sigma,c}(\mathcal{C}_{\Omega})$ to $H^{\sigma}(\Omega)$ can be defined.
Then we can give the following definition of weak solution for linear problems of the following form: $$\left\{ \begin{array} [c]{lll}\dfrac{\partial^{2}w}{\partial y^{2}}+\dfrac{1-2\sigma}{y}\dfrac{\partial w}{\partial y}+\Delta_{x}w=0 & & in\text{ }\mathcal{C}_{\Omega},\\[15pt] \ \dfrac{\partial w}{\partial \nu_{x}}=0 & & on\text{ }\partial_{L}\mathcal{C}_{\Omega},\\[15pt] \displaystyle{-\frac{1}{\kappa_{\sigma}}\lim_{y\rightarrow0^{+}}y^{1-2\sigma}\dfrac{\partial w}{\partial y}(x,y)}+c\,w(x,0)=f\left( x\right) & & in\text{ }\Omega: \end{array} \right. \label{extenszero}$$ Let $f\in H^{\sigma}(\Omega)^{\prime}$. We say that a function $w\in \mathsf{H}^{\sigma,c}(\mathcal{C}_{\Omega})$ is a weak solution to if $$(w,\psi)_{\sigma,c}=\kappa_{\sigma}\langle f,{\operatorname{tr}}_\Omega \psi\rangle,\label{eq.33}$$ for every $\psi\in \mathsf{H}^{\sigma,c}(\mathcal{C}_{\Omega})$. By the Lax-Milgram Theorem again, we easily infer that a unique, explicit weak solution to exists (see [@StingaVolz Lemma 3.3]), and its trace $u={\operatorname{tr}}w$ is the unique solution in $H^{\sigma}(\Omega)$ to the linear problem . For further regularity properties concerning solutions to problems of the type we refer the reader to [@StingaVolz] for the case $\sigma=1/2$ and to [@Caffarelli-Stinga] for a general exponent $\sigma\in(0,1)$. *In the radial Dirichlet problems appearing in our comparison theorems, namely the problems $$\label{symmetriz} \left\{ \begin{array} [c]{lll}\left( -\gamma\Delta\right)^{\sigma}v+cv=g(x) & & in\text{ }B,\\ [10pt] v=0 & & on\text{ }\partial B \end{array} \right. 
$$ for some radial function $g$ and a positive constant $\gamma$, the operator $\left( -\gamma\Delta\right)^{\sigma}$ is understood as the spectral *Dirichlet* fractional Laplacian $(-\gamma\Delta_{D})^{\sigma}$: for all the most useful properties of such an operator we refer the interested reader to [@Cabre-Tan], [@ColoradoBrandle].\ Finally, from now on we will always omit the subscripts in the powers of the Laplacian, since it will always be clear from the context which boundary conditions are chosen, so that the spectral definition of the operator changes accordingly.* The main result {#Sec4} =============== The aim of this Section is to derive sharp estimates via symmetrization for solutions to fractional Neumann problems of the type . According to what was explained in the introduction, we will compare problem with the following fractional radially symmetric problem $$\label{symmetrizz} \left\{ \begin{array} [c]{lll}\left( -\gamma\Delta\right)^{\sigma}v=f_{1}^{\#}(x)+f_{2}^{\#}\left( x\right) & & in\text{ }B,\\ [10pt] v=0 & & on\text{ }\partial B, \end{array} \right. $$ where $f_{1}$, $f_{2}$ are the positive and negative parts of $f$ respectively, $B$ is the ball centered at the origin with Lebesgue measure $|\Omega|/2$ and $\gamma=1/(N\omega_{N}^{1/N}Q)^{2}$, $N\omega_{N}^{1/N}$ and $Q$ being the best constants in the isoperimetric and relative isoperimetric (see ) inequalities respectively. Then we associate to the solution $v$ to its *Dirichlet* harmonic extension $\xi$, solving (see [@Stinga-Torrea Theorem 1.1]) $$\left\{ \begin{array} [c]{lll}\dfrac{\partial^{2}\xi}{\partial y^{2}}+\dfrac{1-2\sigma}{y}\dfrac{\partial\xi}{\partial y}+\gamma\Delta_{x}\xi=0 & & in\text{ }\mathcal{C}_{B},\\[15pt] \xi=0 & & on\text{ }\partial_{L}\mathcal{C}_{B},\\[15pt] \displaystyle{-\frac{1}{\kappa_{\sigma}}\lim_{y\rightarrow0^{+}}y^{1-2\sigma}\dfrac{\partial\xi}{\partial y}(x,y)}=f_{1}^{\#}+f_{2}^{\#}& & in\text{ }B. \end{array} \right. 
\label{extensionsymm}$$ According to [@Cabre-Tan], [@ColoradoBrandle] (see also the nice Appendix in [@SirBonfVaz]), we have that $v\in H(B)$, where $$H(B)=\left\{ \begin{array} [c]{lll}H^{\sigma}(B)&& if\,\sigma\in(0,1/2)\\ \\ H_{00}^{1/2}(B) && if\,\sigma=1/2\\ \\ H_{0}^{\sigma}(B) && if\,\sigma\in(1/2,1); \end{array} \right.$$ moreover, the solution $\xi$ belongs to the energy space $X_{0}^{\sigma}(\mathcal{C}_{B})$ defined as the completion of $C_{0}^{\infty}(B\times[0,\infty))$ with respect to the norm $$\Vert \psi\Vert_{X_{0}^{\sigma}(\mathcal{C}_{B})}:=\left( \iint_{\mathcal{C}_{B}}y^{1-2\sigma}|\nabla \psi(x,y)|^{2}\,dxdy\right) ^{1/2}.$$ Our main goal is to compare any solution $u$ to with the solution $v$ to . The most direct (and natural) way to proceed is to compare the Neumann extension $w$ of $u$, that is the solution to , with the solution $\xi$ to the extension problem . Before stating our main result, for all $y>0$ we define the function $$\lambda(y)=\mathsf{m}(w(\cdot,y))\label{functiont(y)}$$ where $\mathsf{m}(w(\cdot,y))$ is the median of the function $w(\cdot,y)$ (see ). Moreover, we set $$w_{1}(x,y)=[w(x,y)-\lambda(y)]^{+},\quad w_{2}(x,y)=[w(x,y)-\lambda(y)]^{-}.$$ It is clear that, for all fixed $y\geq0$, $$|\left\{x\in\Omega:w(x,y)>\lambda(y)\right\}|\leq\frac{|\Omega|}{2}$$ and $$|\left\{x\in\Omega:w(x,y)\geq \lambda(y)\right\}|\geq\frac{|\Omega|}{2},$$ thus $$|\text{sprt}\,w_{i}(\cdot,y)|\leq |\Omega|/2,\label{support}$$ for $i=1,2$.\ Then we can prove the following result. \[comparisontheorem\] Let $f\in \mathfrak{H}^{-\sigma}(\Omega)$ be a source term and let $u\in H^{\sigma}(\Omega)$ be any solution to . Assume that $w$ is the Neumann harmonic extension of $u$, namely the solution to the extension problem associated to . Let $v$ be the solution to and $\xi$ its Dirichlet harmonic extension, solving .
Then, for all $y\geq0$, we have $$w_{1}^{\#}(\cdot,y)+w_{2}^{\#}(\cdot,y)\prec \xi(\cdot,y),$$ that is, $$\int_{0}^{s}(w^{\ast}_{1}(\tau,y)+w^{\ast}_{2}(\tau,y))d\tau\leq\int_{0}^{s}\xi^{\ast}(\tau,y)d\tau\label{concentrestim}$$ for all $s\in[0,|\Omega|/2]$. We will borrow some ideas from [@Bramanti]. To start with, we first notice that one can always reduce to considering smooth source data $f$, since in the less regular case we can obtain the estimate through an approximation argument. According to [@dBVol], using the change of variables $$z=\left(\frac{y}{2\sigma}\right) ^{2\sigma},$$ problems and become respectively $$\left\{ \begin{array} [c]{lll}-z^{\nu}\dfrac{\partial^{2}w}{\partial z^{2}}-\Delta_{x}w=0 & & in\text{ }\mathcal{C}_{\Omega}\\ & & \\ \dfrac{\partial w}{\partial \nu_{x}}=0 & & on\text{ }\partial_{L}\mathcal{C}_{\Omega}\\ & & \\ -\dfrac{\partial w}{\partial z}\left( x,0\right) =\beta_{\sigma}f\left( x\right) & & in\text{ }\Omega, \end{array} \right. \label{eq.0.1bis}$$ and $$\left\{ \begin{array} [c]{lll}-z^{\nu}\dfrac{\partial^{2}\xi}{\partial z^{2}}-\gamma\Delta_{x}\xi=0 & & in\text{ }\mathcal{C}_{B}\\ & & \\ \xi=0 & & on\text{ }\partial_{L}\mathcal{C}_{B}\\ & & \\ -\dfrac{\partial \xi}{\partial z}\left( x,0\right) =\beta_{\sigma}\left(f_{1}^{\#}\left( x\right)+f_{2}^{\#}\left( x\right)\right) & & in\text{ }B, \end{array} \right. \label{extensionsymmbis}$$ where $$\nu:=\left( 2\sigma-1\right) /\sigma$$ and $$\beta_{\sigma}:=(2\sigma)^{2\sigma-1}\kappa_{\sigma}.$$ Then the problem reduces to proving the concentration comparison between the solutions $w(x,z)$ and $\xi(x,z)$ to - respectively. Notice that by the weak formulation we have $$\int_{\Omega}\nabla_{x}w(x,z)\cdot\nabla\theta(x)\,dx=z^{\nu}\int_{\Omega}\theta(x)\,w_{zz}(x,z)\,dx\label{weakformy}$$ for all $\theta=\theta(x)\in H^{1}(\Omega)$ and a.e. $z>0$.
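For the reader's convenience, we record the routine computation behind the change of variables $z=(y/2\sigma)^{2\sigma}$ (a verification of ours, following [@dBVol]). Setting $a=\frac{2\sigma-1}{2\sigma}$, so that $\frac{dz}{dy}=\left(\frac{y}{2\sigma}\right)^{2\sigma-1}=z^{a}$, we find $$\frac{\partial w}{\partial y}=z^{a}\frac{\partial w}{\partial z},\qquad \frac{\partial^{2}w}{\partial y^{2}}=z^{2a}\frac{\partial^{2}w}{\partial z^{2}}+a\,z^{2a-1}\frac{\partial w}{\partial z},\qquad \frac{1-2\sigma}{y}\frac{\partial w}{\partial y}=-a\,z^{2a-1}\frac{\partial w}{\partial z}:$$ the first order terms cancel and, since $2a=\nu$, the extension equation turns into $-z^{\nu}w_{zz}-\Delta_{x}w=0$. Moreover $y^{1-2\sigma}\frac{\partial w}{\partial y}=(2\sigma)^{1-2\sigma}\frac{\partial w}{\partial z}$, so the Neumann datum becomes $-\frac{\partial w}{\partial z}(x,0)=(2\sigma)^{2\sigma-1}\kappa_{\sigma}f(x)=\beta_{\sigma}f(x)$.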
Then, let us fix $z>0,\,h>0$, $t\geq0$ and plug in the test function $$\varphi_{h,1}^{z}\left( x\right) =\left\{ \begin{array} [c]{lll}1 & & if\text{ \ }w_{1}(x,z) \geq t+h\\ & & \\ \dfrac{ w_{1}(x,z) -t}{h}\, & & if\text{ \ }t< w_{1}(x,z) <t+h\\ & & \\ 0 & & if\text{ \ } w_{1}(x,z) \leq t.\text{ }\end{array} \right.$$ Therefore, passing to the limit as $h\rightarrow0$ we get $$-z^{\nu}\int_{\left\{x:\,w_{1}(x,z)>t\right\}}\frac{\partial^{2}w}{\partial z^{2}}dx-\frac{d}{dt}\int_{\left\{x:\,w_{1}(x,z)>t\right\}}|\nabla_{x}w_{1}(x,z)|^{2}dx=0.\label{wekform}$$ Using the relative isoperimetric inequality , the coarea formula and the bounds we have $$\begin{aligned} -\frac{d}{dt}\int_{\left\{x:\,w_{1}(x,z)>t\right\}}|\nabla_{x}w_{1}(x,z)|dx&=P(\left\{x:\,w_{1}(x,z)>t\right\};\Omega)\nonumber\\&\geq Q^{-1}[\min\left\{\mu_{w_{1}}(t,z),|\Omega|-\mu_{w_{1}}(t,z)\right\}]^{1-1/N}\nonumber\\&=Q^{-1}[\mu_{w_{1}}(t,z)]^{1-1/N}\label{isop}.\end{aligned}$$ Then inserting into and using the Cauchy-Schwarz inequality, $$Q^{-2}[\mu_{w_{1}}(t,z)]^{2-2/N}\leq\ z^{\nu}\left(\int_{w_{1}(\cdot,z)>t}\frac{\partial^{2}w}{\partial z^{2}}dx\right)\left(-\frac{\partial\mu_{w_{1}}}{\partial t}\right)$$ hence a change of variables leads to $$-z^{\nu}\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}\frac{\partial^{2}w}{\partial z^{2}}dx-Q^{-2}s^{2-2/N}\frac{\partial^{2}w_{1}^{\ast}}{\partial s^{2}}\leq0.$$ Now, observe that on the set $\left\{x:w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}$ we have $$\frac{\partial^{2}w}{\partial z^{2}}=\frac{\partial^{2}w_{1}}{\partial z^{2}}+\lambda^{\prime\prime}(z)$$ thus $$-z^{\nu}\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}\left(\frac{\partial^{2}w_{1}}{\partial z^{2}}+\lambda^{\prime\prime}(z)\right)dx-Q^{-2}s^{2-2/N}\frac{\partial^{2}w_{1}^{\ast}}{\partial s^{2}}\leq0,$$ so the second order derivation formula by Ferone-Mercaldo shows that $$-z^{\nu}\int_{0}^{s}\left(\frac{\partial^{2}w_{1}^{\ast}}{\partial
z^{2}}+\lambda^{\prime\prime}(z)\right)d\tau-Q^{-2}s^{2-2/N}\frac{\partial^{2}w_{1}^{\ast}}{\partial s^{2}}\leq0,\label{firstestim}$$ for a.e. $s\in (0,|\Omega|/2)$ and $z>0$.\ Now we use in the test function $$\varphi_{h,2}^{z}\left( x\right) =\left\{ \begin{array} [c]{lll}1 & & if\text{ \ }w_{2}(x,z) \geq t+h\\ & & \\ \dfrac{ w_{2}(x,z) -t}{h}\, & & if\text{ \ }t< w_{2}(x,z) <t+h\\ & & \\ 0 & & if\text{ \ } w_{2}(x,z) \leq t,\text{ }\end{array} \right.$$ in order to obtain $$-z^{\nu}\int_{\left\{x:\,w_{2}(x,z)>t\right\}}\frac{\partial^{2}w}{\partial z^{2}}dx+\frac{d}{dt}\int_{\left\{x:\,w_{2}(x,z)>t\right\}}|\nabla_{x}w_{2}(\cdot,z)|^{2}dx=0\label{wekform2}.$$ The coarea formula and relative isoperimetric inequality applied to $w_{2}(\cdot,z)$ give $$\begin{aligned} -\frac{d}{dt}\int_{\left\{x:\,w_{2}(x,z)>t\right\}}|\nabla_{x}w_{2}(x,z)|dx\geq Q^{-1}[\mu_{w_{2}}(t,z)]^{1-1/N}\end{aligned}$$ then from $${\left(}-z^{\nu}\int_{\left\{x:\,w_{2}(x,z)>t\right\}}\frac{\partial^{2}w}{\partial z^{2}}dx{\right)}\left(-\frac{\partial \mu_{w_{2}}}{\partial t}\right)\geq Q^{-2}[\mu_{w_{2}}(t,z)]^{2-2/N}$$ which yields $$-Q^{-2}s^{2-2/N}\frac{\partial^{2}w_{2}^{\ast}}{\partial s^{2}}\leq -z^{\nu}\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}\frac{\partial^{2}w}{\partial z^{2}}dx.\label{intermediate}$$ Now, observe that on the set $\left\{x:w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}$ we have $$\frac{\partial^{2}w}{\partial z^{2}}=-\frac{\partial^{2}w_{2}}{\partial z^{2}}+\lambda^{\prime\prime}(z)$$ hence by $$-Q^{-2}s^{2-2/N}\frac{\partial^{2}w_{2}^{\ast}}{\partial s^{2}}\leq z^{\nu}\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}\left(\frac{\partial^{2}w_{2}}{\partial z^{2}}-\lambda^{\prime\prime}(z)\right)dx$$ and by $$-Q^{-2}s^{2-2/N}\frac{\partial^{2}w_{2}^{\ast}}{\partial s^{2}}\leq z^{\nu}\int_{0}^{s}\left(\frac{\partial^{2}w_{2}^{\ast}}{\partial z^{2}}-\lambda^{\prime\prime}(z)\right)d\tau.\label{secondestim}$$ Then, adding to we have 
$$-z^{\nu}\int_{0}^{s}\frac{\partial^{2}}{\partial z^{2}}(w_{1}^{\ast}+w_{2}^{\ast})d\tau-Q^{-2}s^{2-2/N}\frac{\partial^{2}}{\partial s^{2}}(w_{1}^{\ast}+w_{2}^{\ast})\leq 0\label{thirdestimate}$$ for a.e. $s\in (0,|\Omega|/2)$ and $z>0$.\ Next, we set $$U(s,z)=\int_{0}^{s}(w_{1}^{\ast}(\tau,z)+w_{2}^{\ast}(\tau,z))d\tau.$$ Now we observe that by the main result in [@Stinga-Torrea], $$-\lim_{z\rightarrow0^{+}}\frac{\partial w}{\partial z}(\cdot,z)=\beta_{\sigma}f\,\quad \,in\,\,L^{2}(\Omega)$$ then $$-\frac{\partial w}{\partial z}(\cdot,z)=\beta_{\sigma}f+\mathcal{R}(\cdot,z)\,\quad \,in\,\,L^{2}(\Omega)\,, z\rightarrow0^{+}$$ where the remainder term $\mathcal{R}=\mathcal{R}(x,z)$ is such that $$\lim_{z\rightarrow0^+}\mathcal{R}(\cdot,z)=0\quad \,in\,\,L^{2}(\Omega).$$ Therefore using the first order derivation formula and the Hardy-Littlewood inequality we have, for small $z>0$, $$\begin{aligned} -\frac{\partial U}{\partial z}(s,z) &=-\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}\frac{\partial w_{1}}{\partial z}(x,z)dx-\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}\frac{\partial w_{2}}{\partial z}(x,z)dx \nonumber\\ & =-\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}{\left(}\frac{\partial w}{\partial z}(x,z)-\lambda^{\prime}(z){\right)}dx-\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}{\left(}-\frac{\partial w}{\partial z}(x,z)+\lambda^{\prime}(z){\right)}dx\nonumber\\ &=-\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}\frac{\partial w}{\partial z}(x,z)\,dx+\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}\frac{\partial w}{\partial z}(x,z)dx\nonumber\\ &=\beta_{\sigma}\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}f(x)\,dx-\beta_{\sigma}\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}f(x)dx\nonumber\\ &+\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)\,dx-\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)\,dx\nonumber\\ &\leq
\beta_{\sigma}\left(\int_{0}^{s}f_{1}^{\ast}(\tau)d\tau+\int_{0}^{s}f_{2}^{\ast}(\tau)d\tau\right) +\int_{\left\{x:w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)\,dx-\int_{\left\{x: w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)\,dx \label{mossinorak}\end{aligned}$$ then passing to the limit as $z\rightarrow 0^{+}$ yields $$\begin{aligned} -\frac{\partial U}{\partial z}(s,0) \leq\beta_{\sigma}\left(\int_{0}^{s}f_{1}^{\ast}(\tau)d\tau+\int_{0}^{s}f_{2}^{\ast}(\tau)d\tau\right).\label{mossinoraky}\end{aligned}$$ Hence by inequalities , we find that $U$ satisfies $$\label{principsystem} \left\{ \begin{array} [c]{lll}-z^{\nu}\dfrac{\partial^{2}U}{\partial z^{2}}-Q^{-2}s^{2-2/N}\dfrac{\partial^{2}U}{\partial s^{2}}\leq0, \quad for\text{ }a.e.\,(s,z)\in (0,|\Omega|/2)\times(0,\infty)\\[15pt] U(0,z)=\dfrac{\partial U}{\partial s}(|\Omega|/2,z)=0 \quad \forall z>0\\[15pt] \dfrac{\partial U}{\partial{z}}(s,0)\geq -\beta_{\sigma}{\displaystyle\int_{0}^{s}(f_{1}^{\ast}(\tau)+f_{2}^{\ast}(\tau))d\tau},\quad\text{for a.e. } s\in (0,|\Omega|/2). \end{array} \right. $$ Now, since the solution $\xi$ to is radially decreasing with respect to $x$, all the inequalities used above become equalities, hence the function $$V(s,z)=\int_{0}^{s}\xi^{\ast}(\tau,z)d\tau$$ solves the problem $$\left\{ \begin{array} [c]{lll}-z^{\nu}\dfrac{\partial^{2}V}{\partial z^{2}}-Q^{-2}s^{2-2/N}\dfrac{\partial^{2}V}{\partial s^{2}}=0, \quad for\text{ }a.e.\,(s,z)\in (0,|\Omega|/2)\times(0,\infty)\\[15pt] V(0,z)=\dfrac{\partial V}{\partial s}(|\Omega|/2,z)=0 \quad \forall z>0\\[15pt] \dfrac{\partial V}{\partial{z}}(s,0)= -\beta_{\sigma}{\displaystyle\int_{0}^{s}(f_{1}^{\ast}(\tau)+f_{2}^{\ast}(\tau))d\tau},\quad\text{for a.e. } s\in (0,|\Omega|/2), \end{array} \right.
$$ thus the function $$\chi=U-V$$ verifies $$\left\{ \begin{array} [c]{lll}-z^{\nu}\dfrac{\partial^{2}\chi}{\partial z^{2}}-Q^{-2}s^{2-2/N}\dfrac{\partial^{2}\chi}{\partial s^{2}}\leq0, \quad for\text{ }a.e.\,(s,z)\in (0,|\Omega|/2)\times(0,\infty)\\[15pt] \chi(0,z)=\dfrac{\partial \chi}{\partial s}(|\Omega|/2,z)=0 \quad \forall z>0\\[15pt] \dfrac{\partial \chi}{\partial{z}}(s,0)\geq0,\quad\text{for a.e. } s\in (0,|\Omega|/2). \end{array} \right. $$ Then a classical maximum principle argument allows us to conclude that $$\chi\leq0\quad\forall(s,z)\in[0,|\Omega|/2]\times[0,\infty)$$ that is $$\int_{0}^{s}(w_{1}^{\ast}(\tau,z)+w_{2}^{\ast}(\tau,z))d\tau\leq \int_{0}^{s}\xi^{\ast}(\tau,z)d\tau\quad\forall(s,z)\in[0,|\Omega|/2]\times[0,\infty).\label{mainestimate}$$ *If the function $\lambda(z)$ in is constant, then Theorem \[comparisontheorem\] can actually be strengthened. Indeed, in such a case the second derivative of $\lambda(z)$ disappears in estimates , , so we have that the concentration function $$U_{i}(s,z)=\int_{0}^{s}w^{\ast}_{i}(\tau,z)d\tau\quad\forall i=1,2$$ satisfies the system $$\left\{ \begin{array} [c]{lll}-z^{\nu}\dfrac{\partial^{2}U_{i}}{\partial z^{2}}-Q^{-2}s^{2-2/N}\dfrac{\partial^{2}U_{i}}{\partial s^{2}}\leq0, \quad for\text{ }a.e.\,(s,z)\in (0,|\Omega|/2)\times(0,\infty)\\[15pt] U_{i}(0,z)=\dfrac{\partial U_i}{\partial s}(|\Omega|/2,z)=0 \quad \forall z>0\\[15pt] \dfrac{\partial U_{i}}{\partial{z}}(s,0)\geq -\beta_{\sigma}{\displaystyle\int_{0}^{s}f_{i}^{\ast}(\tau)d\tau},\quad\text{for a.e. } s\in (0,|\Omega|/2). \end{array} \right. $$ Then if for $i=1,2$ we call $v_{i}$ the solution to the problem $$\label{symmetriz} \left\{ \begin{array} [c]{lll}\left( -\gamma\Delta\right)^{\sigma}v_{i}=f_{i}^{\#}(x) & & in\text{ }B,\\ [10pt] v_{i}=0 & & on\text{ }\partial B \end{array} \right.
$$ and $\xi_{i}$ the Dirichlet extension of $v_{i}$, we have that $$V_{i}=\int_{0}^{s}\xi_{i}^{\ast}(\tau,z)d\tau$$ solves $$\left\{ \begin{array} [c]{lll}-z^{\nu}\dfrac{\partial^{2}V_{i}}{\partial z^{2}}-Q^{-2}s^{2-2/N}\dfrac{\partial^{2}V_{i}}{\partial s^{2}}=0, \quad for\text{ }a.e.\,(s,z)\in (0,|\Omega|/2)\times(0,\infty)\\[15pt] V_{i}(0,z)=\dfrac{\partial V_i}{\partial s}(|\Omega|/2,z)=0 \quad \forall z>0\\[15pt] \dfrac{\partial V_{i}}{\partial{z}}(s,0)= -\beta_{\sigma}{\displaystyle\int_{0}^{s}f_{i}^{\ast}(\tau)d\tau},\quad\text{for a.e. } s\in (0,|\Omega|/2), \end{array} \right. $$ therefore the function $$\chi_{i}=U_{i}-V_{i}$$ is a solution to $$\left\{ \begin{array} [c]{lll}-z^{\nu}\dfrac{\partial^{2}\chi_{i}}{\partial z^{2}}-Q^{-2}s^{2-2/N}\dfrac{\partial^{2}\chi_{i}}{\partial s^{2}}\leq0, \quad for\text{ }a.e.\,(s,z)\in (0,|\Omega|/2)\times(0,\infty)\\[15pt] \chi_{i}(0,z)=\dfrac{\partial \chi_{i}}{\partial s}(|\Omega|/2,z)=0 \quad \forall z>0\\[15pt] \dfrac{\partial \chi_{i}}{\partial{z}}(s,0)\geq0,\quad\text{for a.e. } s\in (0,|\Omega|/2). \end{array} \right. $$ By the maximum principle again we have $$\chi_{i}\leq0 \quad\forall(s,z)\in[0,|\Omega|/2]\times[0,\infty)$$ that is $$\int_{0}^{s}w_{i}^{\ast}(\tau,z)d\tau\leq \int_{0}^{s}\xi_{i}^{\ast}(\tau,z)d\tau \quad\forall(s,z)\in[0,|\Omega|/2]\times[0,\infty).$$ This can be interpreted as the mass concentration comparison version of the Maderna-Salsa result [@MadSalsa], for the nonlocal operator $(-\Delta)^{\sigma}$.* Finally, a natural extension of Theorem \[comparisontheorem\] is the following. \[Corollary\] Assume that $g$ is a radially decreasing function on the ball $B$, such that $$f_{1}^{\#}+f_{2}^{\#}\prec g\,\,in\,\,B,$$ and let $v$ be the solution to problem with $f_{1}^{\#}+f_{2}^{\#}$ replaced by $g$. If $\xi$ is the harmonic Dirichlet extension of $v$ (namely the solution to with $f_{1}^{\#}+f_{2}^{\#}$ replaced by $g$), then the concentration inequality still holds.
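The relation $\prec$ admits a transparent discrete analogue, which may help fix ideas. The following sketch is purely illustrative: the sample $u$ and the radially decreasing majorant $v$ are made up, and sorting in decreasing order plays the role of the decreasing rearrangement $u^{\ast}$.

```python
import numpy as np

def decreasing_rearrangement(f):
    """Discrete analogue of f^*: values sorted in decreasing order."""
    return np.sort(f)[::-1]

def concentration(f):
    """Discrete analogue of s -> int_0^s f^*(tau) dtau (unit cell measure)."""
    return np.cumsum(decreasing_rearrangement(f))

def median_split(u):
    """Return u1 = (u - m(u))^+ and u2 = (u - m(u))^- for the median m(u)."""
    m = np.median(u)
    return np.maximum(u - m, 0.0), np.maximum(m - u, 0.0)

def precedes(a, b):
    """Check the mass concentration comparison a 'prec' b."""
    return bool(np.all(concentration(a) <= concentration(b) + 1e-12))

# Illustrative data: v dominates the rearrangement of u1 + u2 by construction.
rng = np.random.default_rng(0)
u = rng.normal(size=100)
u1, u2 = median_split(u)
v = decreasing_rearrangement(u1 + u2) + 0.1   # radially decreasing majorant

assert precedes(u1 + u2, v)
# Hardy-Littlewood-Polya: concentration comparison controls L^p norms.
for p in (1, 2, 4):
    assert np.linalg.norm(u1 + u2, p) <= np.linalg.norm(v, p)
```

Each step mirrors the objects used above: `median_split` produces the discrete $u_{1},u_{2}$, `concentration` is the discrete $s\mapsto\int_{0}^{s}u^{\ast}\,d\tau$, and the final loop illustrates the classical fact that concentration comparison implies $L^{p}$ comparison.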
Consequences ------------ The following remarkable properties can be easily deduced from Theorem \[comparisontheorem\].\ [*1. Oscillation estimate.*]{} From the mass concentration comparison we have $$\int_{0}^{s}(u_{1}^{\ast}(\tau)+u_{2}^{\ast}(\tau))d\tau\leq \int_{0}^{s}v^{\ast}(\tau)d\tau\quad\forall s\in[0,|\Omega|/2],\label{mainestimate2}$$ where $u_{1}=(u-\mathsf{m}(u))^{+}$ and $u_{2}=(u-\mathsf{m}(u))^{-}$. Then inequality can be rewritten as $$u_{1}^{\#}+u_{2}^{\#}\prec v,$$ which implies, in particular, the meaningful oscillation estimate $$\|v\|_{L^{\infty}(B)}\geq\|u_{1}^{\#}+u_{2}^{\#}\|_{L^{\infty}(B)}=\sup_{\Omega}(u-\mathsf{m}(u))-\inf_{\Omega}(u-\mathsf{m}(u))=\sup_{\Omega} u-\inf_{\Omega} u.$$ [*2. $L^{p}$-estimates.*]{} From we also have $$\int_{0}^{s}(u-\mathsf{m}(u))^{\ast}(\tau)d\tau\leq\int_{0}^{s}(u_{1}^{\ast}(\tau)+u_{2}^{\ast}(\tau))d\tau\leq\int_{0}^{s}v^{\ast}(\tau)d\tau$$ hence in particular $$\|u-\mathsf{m}(u)\|_{L^{p}(\Omega)}\leq\|v\|_{L^{p}(B)},$$ for all $p\in [1,\infty)$. Then, making use of the fractional Dirichlet regularity estimates derived in [@dBVol], we can obtain the whole sharp $L^{p,r}$-scale of regularity estimates for Neumann problems of the type , generalizing some of the assertions in [@StingaVolz Theorem 3.5] (see also [@Caffarelli-Stinga] for an important treatment of $C^{\alpha}$ regularity estimates up to the boundary). Therefore we can state the following result (for basic properties of Lorentz and Orlicz spaces see *e.g.* [@Bennett]): \[Thm:regularity\] Let $u\in H^{\sigma}(\Omega)$, $\sigma\in(0,1)$ be a solution to and $$f\in L^{p,r}(\Omega)$$ with $$p\geq\frac{2N}{N+2\sigma}, \quad r\geq1,$$ where $L^{p,r}(\Omega)$ is the Lorentz space on $\Omega$ of exponents $p,\,r$. Suppose that $f$ verifies the compatibility condition . Then, for some positive constant $C$, the following assertions hold: 1.
if $p<N/2\sigma$ then $u\in L^{q,r}(\Omega)$ with $$q=\frac{Np}{N-2\sigma p}$$ and $$\|u-\mathsf{m}(u)\|_{L^{q,r}(\Omega)}\leq C\|f\|_{L^{p,r}(\Omega)};$$ 2. if $p=N/2\sigma$ and $r=1$, then $u\in L^{\infty}(\Omega)$ and $$\|u-\mathsf{m}(u)\|_{L^{\infty}(\Omega)}\leq C\|f\|_{L^{N/2\sigma,1}(\Omega)};$$ 3. if $p=N/2\sigma$ and $r\in(1,\infty]$, then $u\in L_{\Phi_{r}}(\Omega)$ and $$\|u-\mathsf{m}(u)\|_{L_{\Phi_{r}}(\Omega)}\leq C \|f\|_{L^{N/2\sigma,r}(\Omega)},$$ where $L_{\Phi_{r}}(\Omega)$ is the Orlicz space generated by the $N$-function $$\Phi_{r}(t)=\exp(|t|^{r^{\prime}})-1$$ and $r^{\prime}$ denotes the conjugate exponent of $r$. Extensions to operators with a constant zero-order coefficient {#zeroordersec} ============================================================ If $c>0$ is a constant, we wish to generalize Theorem \[comparisontheorem\] to fractional linear Neumann problems of the type . Of course, in this setting we will not require the compatibility condition . As explained in Section \[Sec3\], the unique weak solution $u\in H^{\sigma}(\Omega)$ is the trace over $\Omega$ of the unique weak solution $w\in \mathsf{H}^{\sigma,c}(\mathcal{C}_{\Omega})$ to the extension problem . In this case, we compare problem with the radial Dirichlet problem $$\label{symmetriz2} \left\{ \begin{array} [c]{lll}\left( -\gamma\Delta\right)^{\sigma}v+cv=f_{1}^{\#}(x)+f_{2}^{\#}\left( x\right) & & in\text{ }B,\\ [10pt] v=0 & & on\text{ }\partial B, \end{array} \right.
$$ where $$f_{1}=(f-\mathsf{m}(f))^{+},\,f_{2}=(f-\mathsf{m}(f))^{-}.$$ The extension problem associated to is given by $$\left\{ \begin{array} [c]{lll}\dfrac{\partial^{2}\xi}{\partial y^{2}}+\dfrac{1-2\sigma}{y}\dfrac{\partial\xi}{\partial y}+\gamma\Delta_{x}\xi=0 & & in\text{ }\mathcal{C}_{B},\\[15pt] \xi=0 & & on\text{ }\partial_{L}\mathcal{C}_{B},\\[15pt] \displaystyle{-\frac{1}{\kappa_{\sigma}}\lim_{y\rightarrow0^{+}}y^{1-2\sigma}\dfrac{\partial\xi}{\partial y}(x,y)}+c\,\xi(x,0) =f_{1}^{\#}(x)+f_{2}^{\#}(x) & & in\text{ }B. \end{array} \right.\label{extensionsymme2}$$ In this respect, we will prove the following result: \[comparisonc\] Let $c>0$, assume that $f\in H^{\sigma}(\Omega)^{\prime}$ and let $u,\,w$ be the solutions to and the extension problem respectively. Let $v,\,\xi$ be the solutions to the symmetrized problem and the extension problem respectively. Then inequality still holds. It is clear that the first estimate in , together with the first two boundary conditions, still holds. As for the Neumann condition satisfied by $U$, we first observe that since (see [@Stinga-Torrea] again) $$\lim_{z\rightarrow0^{+}}\left[-\frac{\partial w}{\partial z}(\cdot,z)+\beta_{\sigma}cw(\cdot,z)\right]=\beta_{\sigma}f\,\quad \,in\,\,L^{2}(\Omega)$$ we have $$-\frac{\partial w}{\partial z}(\cdot,z)+\beta_{\sigma}cw(\cdot,z)=\beta_{\sigma}f+\mathcal{R}(\cdot,z)\,\quad \,in\,\,L^{2}(\Omega)\,, z\rightarrow0^{+}$$ for a certain remainder term $\mathcal{R}=\mathcal{R}(x,z)$ tending to zero in $L^{2}(\Omega)$ as $z\rightarrow0^{+}$.
Then using the same notation as in the proof of Theorem \[comparisontheorem\] and arguing as in inequality we have, for small $z>0$, $$\begin{aligned} -\frac{\partial U}{\partial z}(s,z)&=-\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}\frac{\partial w}{\partial z}(x,z)\,dx+\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}\frac{\partial w}{\partial z}(x,z)dx \nonumber\\ &=-\beta_{\sigma}c\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}w(x,z)dx+\beta_{\sigma}c\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}w(x,z)dx \nonumber\\&+\beta_{\sigma}\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}f(x)dx -\beta_{\sigma}\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}f(x)dx\nonumber\\ &+\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)dx-\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)dx\nonumber\\ &=-\beta_{\sigma}c\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}w_{1}(x,z)dx-\beta_{\sigma}c\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}w_{2}(x,z)dx\nonumber \\&+\beta_{\sigma}\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}(f(x)-\mathsf{m}(f))dx -\beta_{\sigma}\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}(f(x)-\mathsf{m}(f))dx\nonumber\\ &+\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)dx-\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)dx\nonumber\\ &=-\beta_{\sigma}c\,U(s,z)+\beta_{\sigma}\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}(f(x)-\mathsf{m}(f))dx -\beta_{\sigma}\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}(f(x)-\mathsf{m}(f))dx\label{zeroorder}\\ &+\int_{\left\{x:\,w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)dx-\int_{\left\{x:\,w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)dx\nonumber\end{aligned}$$ therefore using the Hardy-Littlewood inequality and passing to the limit as $z\rightarrow0^{+}$ we find $$-\frac{\partial U}{\partial
z}(s,0)\leq-\beta_{\sigma}c\,U(s,0)+\beta_{\sigma}\left(\int_{0}^{s}f_{1}^{\ast}(\tau)d\tau+\int_{0}^{s}f_{2}^{\ast}(\tau)d\tau\right).\label{eq.3}$$ Taking into account the symmetry of the solution $\xi$ to \[extensionsymm2\] with respect to $x$, the first inequality in and inequality become equalities, up to replacing $U$ by the concentration $V$ of $\xi$, that is $$V(s,z)=\int_{0}^{s}\xi^{\ast}(\tau,z)d\tau.$$ Then we finally obtain $$\label{princsystem} \left\{ \begin{array} [c]{lll}-z^{\nu}\dfrac{\partial^{2}\chi}{\partial z^{2}}-Q^{-2}s^{2-2/N}\dfrac{\partial^{2}\chi}{\partial s^{2}}\leq0, \quad for\text{ }a.e.\,(s,z)\in (0,|\Omega|/2)\times(0,\infty)\\[15pt] \chi(0,z)=\dfrac{\partial \chi}{\partial s}(|\Omega|/2,z)=0 \quad \forall z>0\\[15pt] \dfrac{\partial \chi}{\partial{z}}(s,0)\geq \beta_{\sigma}c\chi(s,0),\quad\text{for a.e. } s\in (0,|\Omega|/2), \end{array} \right. $$ where we have set as usual $$\chi=U-V.$$ By Hopf’s boundary maximum principle we easily obtain that $\chi\leq0$, which is the desired estimate. It is worth noting that an easy analogue of Corollary holds, which can be stated as follows: \[corollimplicitime\] Assume that $c>0$, $u$ solves the problem $$\left\{ \begin{array} [c]{lll}\left( -\Delta\right)^{\sigma}u+cu=f\left( x\right)+h(x) & & in\text{ }\Omega,\\[10pt] \dfrac{\partial u}{{\partial\nu}}=0 & & on\text{ }\partial\Omega, \end{array} \right. $$ and let $w$ be its Neumann harmonic extension. Let $g_{1},\,g_{2}$ be two radially decreasing functions on the ball $B$, such that $$f_{1}^{\#}+f_{2}^{\#}\prec g_{1}\,\,in\,\,B,$$ where $$f_{1}=(f-\mathsf{m}(f))^{+},\,f_{2}=(f-\mathsf{m}(f))^{-}$$ and $$(h^{+})^{\#}+(h^{-})^{\#}\prec g_{2}.$$ Let $v$ be the solution to problem with $f_{1}^{\#}+f_{2}^{\#}$ replaced by $g_{1}+g_{2}$. If $\xi$ is the harmonic Dirichlet extension of $v$ (namely the solution to with $f_{1}^{\#}+f_{2}^{\#}$ replaced by $g_{1}+g_{2}$) then the concentration inequality still holds.
Just notice that from we have $$\begin{aligned} -\frac{\partial U}{\partial z}(s,z)&=-\beta_{\sigma}c\,U(s,z)+\beta_{\sigma}\int_{\left\{x:w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}(f(x)-\mathsf{m}(f))dx -\beta_{\sigma}\int_{\left\{x:w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}(f(x)-\mathsf{m}(f))dx\nonumber\\ &+\beta_{\sigma}\int_{\left\{x:w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}h(x)dx-\beta_{\sigma}\int_{\left\{x:w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}h(x)dx\nonumber\\ &+\int_{\left\{x:w_{1}(x,z)>w_{1}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)dx-\int_{\left\{x:w_{2}(x,z)>w_{2}^{\ast}(s,z)\right\}}\mathcal{R}(x,z)dx\nonumber\end{aligned}$$ thus the Hardy-Littlewood inequality yields, after passing to the limit as $z\rightarrow0^{+}$, $$\begin{aligned} -\frac{\partial U}{\partial z}(s,0)&\leq-\beta_{\sigma}c\,U(s,0)+\beta_{\sigma}\left(\int_{0}^{s}f_{1}^{\ast}(\tau)d\tau+\int_{0}^{s}f_{2}^{\ast}(\tau)d\tau\right)\\ &+\beta_{\sigma}\left(\int_{0}^{s}(h^{+})^{\ast}(\tau)d\tau+\int_{0}^{s}(h^{-})^{\ast}(\tau)d\tau\right),\end{aligned}$$ thus we can argue as in the proof of Theorem \[comparisonc\]. Symmetrization for linear fractional parabolic equations with Neumann boundary conditions {#parabolic} ========================================================================================= Our goal now is to use the elliptic results shown in the previous sections in order to prove a symmetrization result for the following linear, fractional parabolic Cauchy-Neumann problem $$\left\{ \begin{array} [c]{lll}u_{t}+(-\Delta)^{\sigma}u =f & & in\text{ }\Omega\times(0,T)\\[15pt] \dfrac{\partial u}{\partial\nu}=0 & & on\text{ }\partial \Omega\times[0,T],\\[15pt] u(x,0)=u_{0}(x) & & in\text{ }\Omega. \end{array} \right. \label{parabolicpro}$$ In this framework, we will always assume that $u_{0}\in L^{2}(\Omega)$, $f\in L^{2}(\Omega\times(0,T))$.\ It is easy to recast the issue of solving problem in an abstract setting.
Indeed, the introduction of the linear operator $\mathcal{A}_{N}:D(\mathcal{A}_{N})\subset H\rightarrow H$, where $H=L^{2}(\Omega)$, defined by $$\mathcal{A}_{N}u=(-\Delta_{N})^{\sigma}u$$ with the fixed domain $$D(\mathcal{A}_{N})=H^{1}(\Omega)$$ allows us to reformulate the parabolic problem as the abstract Cauchy problem $$\label{abstract} \left\{ \begin{array} [c]{lll}u^{\prime}(t)+\mathcal{A}_{N}u(t)=f(t) & & on\,[0,T]\\ u(0)=u_{0}, \end{array} \right.$$ where as usual we have set $f(t)(x)=f(x,t)$. The concept of solution to problem (or equivalently to ), which is well suited to the elliptic symmetrization arguments proved in the previous sections, is that of *mild* solution, namely a solution which is obtained as the uniform limit of a time piecewise constant sequence of discrete approximated solutions, defined by means of an implicit time discretization scheme. In order to briefly introduce this definition, first divide the time interval $[0,T]$ into $n$ subintervals $(t_{k-1},t_{k}]$, where $t_{k}=kh$ and $h=T/n$. Next we consider a time discretization $\{f_k^{(h)}\}$ of $f$, such that the piecewise constant interpolation of this sequence provides a function $f^{(h)}(x,t)$ for which $\|f-f^{(h)}\|_1\to 0$ as $h\to 0$. We then construct the function $u_{h}$, piecewise constant on each interval $(t_{k-1},t_{k}]$, by $$\label{approxsolut} u_{h}(x,t)= \left\{ \begin{array} [c]{lll}u_{h,1}(x) & & if\,\,t\in[0,t_{1}]\\[6pt] u_{h,2}(x) & & if\,\,t\in(t_{1},t_{2}] \\ [6pt] \cdots \\ [6pt] u_{h,n}(x) & & if\,\,t\in(t_{n-1},t_{n}] \end{array} \right. $$ where $u_{h,k}$ solves the elliptic equation $$h\mathcal{A}_{N}(u_{h,k})+u_{h,k}=u_{h,k-1}+hf_k^{(h)}\label{discreteequat}$$ with the initial value $u_{h,0}=u_{0}$.
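In finite dimensions this scheme is just the backward Euler method, each step being an application of the resolvent $(I+h\mathcal{A}_{N})^{-1}$. The following sketch is a toy illustration only: a symmetric positive definite matrix `A` stands in for $\mathcal{A}_{N}$ (it is not claimed to discretize the fractional Laplacian), and the data are made up.

```python
import numpy as np

def implicit_euler(A, u0, f, T, n):
    """Backward Euler for u' + A u = f(t):
    solve (I + h A) u_k = u_{k-1} + h f(t_k) on each subinterval."""
    h = T / n
    I = np.eye(len(u0))
    u = u0.copy()
    trajectory = [u0.copy()]
    for k in range(1, n + 1):
        u = np.linalg.solve(I + h * A, u + h * f(k * h))
        trajectory.append(u.copy())
    return trajectory

# Toy data: A symmetric positive definite (hence monotone), no forcing term.
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
u0 = np.array([1.0, 0.0])
traj = implicit_euler(A, u0, f=lambda t: np.zeros(2), T=1.0, n=100)

# With f = 0 and A monotone the discrete flow is non-expansive:
norms = [np.linalg.norm(u) for u in traj]
assert all(norms[k + 1] <= norms[k] + 1e-12 for k in range(len(norms) - 1))
```

The solvability of each linear step, i.e. of $(I+h\mathcal{A}_{N})u_{h,k}=u_{h,k-1}+hf_{k}^{(h)}$, is exactly what maximal monotonicity guarantees at the operator level.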
We then wish to show that $u_{h}(t)$ converges as $h\rightarrow0$ to a certain function $u(t)$ uniformly in $[0,T]$, where $u(t)$ is continuous at $t=0$ and $u(t)\rightarrow u_{0}$ as $t\rightarrow0$, namely we would like to prove that $u(t)$ is a *mild solution* to . The following lemma gives a positive answer to this question. \[existence\] There exists a unique mild solution $u$ to problem . Since we work in the Hilbert space $H=L^{2}(\Omega)$ for the linear operator\ $\mathcal{A}_{N}:D(\mathcal{A}_{N})\subset H\rightarrow H$, it is sufficient to show that $\mathcal{A}_{N}$ is maximal monotone. It is clear that $\mathcal{A}_{N}$ is monotone: indeed, for any $u\in H^{1}(\Omega)$ we have $$(\mathcal{A}_{N}u,u)_{L^2(\Omega)}=\|(-\Delta_{N})^{\sigma/2}u\|^{2}_{L^{2}(\Omega)}\geq0.$$ It is straightforward to check that $\mathcal{A}_{N}$ is self-adjoint. Moreover, $\mathcal{A}_{N}$ is maximal monotone, *i.e.* $R(I+\mathcal{A}_{N})=H$, since for any $f\in L^{2}(\Omega)$ there is a unique $u\in H^{1}(\Omega)$ solving (see [@StingaVolz]) the equation $$u+(-\Delta_{N})^{\sigma}u=f.$$ Then [@VazPorous Theorem 10.17] ensures the existence of a unique mild solution $u\in C([0,T];L^{2}(\Omega))$ to the abstract Cauchy problem , obtained exactly as the uniform limit of the approximated solutions . *If there is no forcing term $f$ in , then the classical Hille-Yosida Theorem implies that for any $u_{0}\in L^{2}(\Omega)$ the mild solution $u$ to is actually classical, and $u\in C^{1}([0,T];L^{2}(\Omega))\cap C^{0}([0,T];H^{1}(\Omega))$ (see for instance [@BREZIS]).* *Following the papers [@afractpor; @genfrac], one could make use of the Stinga-Torrea extension method in order to give a proper meaning to the notion of *weak* energy solution to problem , which can be shown to coincide, when $f\equiv0$, with the unique mild solution obtained in Lemma \[existence\].
Nevertheless, in this context we decided to work only with mild solutions, which are enough for our purpose: the question of the equivalence of the two notions of solution, along with several questions posed in a more general nonlinear setting, will be discussed in the forthcoming paper [@BVprox].* With these preliminaries at hand, we are in a position to establish the following parabolic comparison result, related to problem . Assume that $u_{0}\in L^{2}(\Omega)$, $f\in L^{2}(\Omega\times (0,T))$ ($T>0$) and let $u$ be the mild solution to problem ; set $$u_{1}(\cdot,t)=[u(\cdot,t)-\mathsf{m}(u(\cdot,t))]^{+},\,u_{2}(\cdot,t)=[u(\cdot,t)-\mathsf{m}(u(\cdot,t))]^{-}.$$ Moreover, let $v$ be the mild solution to the following Cauchy-Dirichlet problem, which is radially symmetric with respect to $x$: $$\left\{ \begin{array} [c]{lll}v_{t}+(-\gamma\Delta)^{\sigma}v =(f^{+})^{\#}+(f^{-})^{\#} & & in\text{ }B\times(0,T)\\[15pt] v=0 & & on\text{ }\partial B\times[0,T],\\[15pt] v(x,0)=u_{0,1}^{\#}(x)+u_{0,2}^{\#}(x) & & in\text{ }B, \end{array} \right. \label{symmetrparabolic}$$ where $B$ is the ball centered at the origin with measure $|\Omega|/2$, $(f^{\pm})^{\#}(|x|,t)$ means symmetrization of $f^{\pm}(x,t)$ with respect to $x$ for a.e. time $t>0$ and $$u_{0,1}=(u_{0}-\mathsf{m}(u_{0}))^{+},\,u_{0,2}=(u_{0}-\mathsf{m}(u_{0}))^{-}.$$ Then, for all $t>0$ we have $$u_{1}^\#(|x|,t)+u_{2}^\#(|x|,t)\prec v(|x|,t).\label{comparisonparab}$$ Let us consider the sequence of discrete approximated solutions . Then applying the implicit time discretization scheme to the symmetrized problem produces the sequence of discrete solutions $v_{h}$ defined as $$v_{h}(x,t)= \left\{ \begin{array} [c]{lll}v_{h,1}(x) & & if\,t\in[0,t_{1}]\\ [6pt] v_{h,2}(x) & & if\,t\in(t_{1},t_{2}] \\ [6pt] \cdots \\ [6pt] v_{h,n}(x) & & if\,t\in(t_{n-1},t_{n}], \end{array} \right.
$$ where $v_{h,k}(x)$ solves the equation $$h\gamma^{1/2}\mathcal{A}_{D}(v_{h,k})+v_{h,k}=v_{h,k-1}+hf_{k,1}^{(h)}+hf_{k,2}^{(h)}\label{discreteequsym}$$ with the initial value $v_{h,0}=u_{0,1}^{\#}+u_{0,2}^{\#}$. The operator $\mathcal{A}_{D}:H_{0}^{1}(B)\subset L^{2}(B)\rightarrow L^{2}(B)$ is defined through $\mathcal{A}_{D}=(-\Delta_{D})^{\sigma}$ and we have set $f_{k,1}^{(h)}=((f_{k}^{(h)})^{+})^{\#},\,f_{k,2}^{(h)}=((f_{k}^{(h)})^{-})^{\#}$. Then $v_{h}\rightarrow v$ uniformly, where $v$ is the unique mild solution to .\ In order to compare $u$ with $v$, our aim is now to compare the solution $u_{h,k}$ to with the solution $v_{h,k}$ to . By induction we shall prove that $$([u_{h,k}-\mathsf{m}(u_{h,k})]^{+})^{\#}+([u_{h,k}-\mathsf{m}(u_{h,k})]^{-})^{\#}\prec v_{h,k},\label{discretecomparison}$$ for each $k=1,\ldots,n$. Indeed by Corollary \[corollimplicitime\] we have $$([u_{h,1}-\mathsf{m}(u_{h,1})]^{+})^{\#}+([u_{h,1}-\mathsf{m}(u_{h,1})]^{-})^{\#}\prec v_{h,1}.$$ If we suppose by induction that holds for $k-1$, we can use Corollary \[corollimplicitime\] again and get , which in turn implies $$([u_{h}-\mathsf{m}(u_{h})]^{+})^{\#}+([u_{h}-\mathsf{m}(u_{h})]^{-})^{\#}\prec v_{h}.\label{discretecomparisoninterp}$$ Then, passing to the limit in as $h\rightarrow0^{+}$, the result follows. Comments, extensions and open problems ====================================== - It is worth noticing that all the results contained in this paper can be extended to a more general context, for example when the fractional Laplacian operator $\mathcal{L}=(-\Delta)^{\sigma}$ is replaced by the $\sigma$-th power of a linear, second-order elliptic operator in divergence form, namely defined by $$\mathcal{L}=L^{\sigma},$$ where $\sigma\in(0,1)$, $L$ is defined by and its coefficients $a_{ij}$ satisfy the ellipticity condition .
Indeed, [@Stinga-Torrea Theorem 1.1] allows us to identify $\mathcal{L}$ with the Dirichlet-to-Neumann map defined by the extension problem $$\begin{cases} \dfrac{\partial^{2}w}{\partial y^{2}}+\dfrac{1-2\sigma}{y}\dfrac{\partial w}{\partial y}+L_{x}w=0,&\hbox{in}~\mathcal{C}_{\Omega},\\ \\ \dfrac{\partial w}{\partial \nu_{x}}=0,&\hbox{on}~\partial_L\mathcal{C}_{\Omega},\\ \\ w(x,0)=u(x),&\hbox{on}~\Omega; \end{cases}$$ moreover, the whole functional setting is carefully detailed in [@Caffarelli-Stinga]. Concerning the symmetrization techniques we just notice that, due to , the equal sign in is replaced by $\leq$. This allows us to interpret Theorem as a full nonlocal version of the classical result of Maderna-Salsa [@MadSalsa]. - It would be interesting to consider problems of the form , where $c$ is *not* constant. Indeed, it is well known that in the local case this simple variation leads to further nontrivial issues, which can be solved by subtle, nonstandard modifications of the main results (see *e.g.* [@AlTrMat]). Moreover, it would make sense to adapt our arguments when extra first order terms are added to the left-hand side of the same equation in . This will be the object of future studies.\ [**Acknowledgments**]{} B.V. is partially supported by the INDAM-GNAMPA project 2015 “*Proprietà qualitative di soluzioni di equazioni ellittiche e paraboliche*” (ITALY). [10]{} , [*Well-posed elliptic [N]{}eumann problems involving irregular data and domains*]{}, Ann. Inst. H. Poincaré Anal. Non Linéaire, 27 (2010), pp. 1017–1054. , [*Elliptic equations and [S]{}teiner symmetrization*]{}, Comm. Pure Appl. Math., 49 (1996), pp. 217–236. , [*Elliptic boundary value problems: comparison results via symmetrization*]{}, Ricerche Mat., 51 (2002), pp. 341–355 (2003). , [*On symmetrizations in parabolic equations*]{}, J. Analyse Math., 30 (1976), pp. 98–112. ———, [*Isoperimetric inequalities and applications*]{}, vol.
7 of Monographs and Studies in Mathematics, Pitman (Advanced Publishing Program), Boston, Mass.-London, 1980. , [*Interpolation of operators*]{}, vol. 129 of Pure and Applied Mathematics, Academic Press Inc., Boston, MA, 1988. , [*Neumann problem: comparison results*]{}, Rend. Accad. Sci. Fis. Mat. Napoli (4), 57 (1990), pp. 41–58 (1991). , [*Existence, uniqueness and asymptotic behaviour for fractional porous medium equations on bounded domains*]{}, Discrete Contin. Dyn. Syst., 35 (2015), pp. 5725–5767. , [*Symmetrization in parabolic [N]{}eumann problems*]{}, Appl. Anal., 40 (1991), pp. 21–39. , [*A concave-convex elliptic problem involving the fractional [L]{}aplacian*]{}, Proc. Roy. Soc. Edinburgh Sect. A, 143 (2013), pp. 39–71. , [*Functional analysis, [S]{}obolev spaces and partial differential equations*]{}, Universitext, Springer, New York, 2011. , [*Positive solutions of nonlinear problems involving the square root of the [L]{}aplacian*]{}, Adv. Math., 224 (2010), pp. 2052–2093. , [*An extension problem related to the fractional [L]{}aplacian*]{}, Comm. Partial Differential Equations, 32 (2007), pp. 1245–1260. , [*Fractional elliptic equations, [C]{}accioppoli estimates and regularity*]{}, Ann. Inst. H. Poincaré Anal. Non Linéaire, 33 (2016), pp. 767–807. , [*Some extensions of a theorem of [H]{}ardy, [L]{}ittlewood and [P]{}ólya and their applications*]{}, Canad. J. Math., 26 (1974), pp. 1321–1340. , [*A fractional porous medium equation*]{}, Adv. Math., 226 (2011), pp. 1378–1409. ———, [*A general fractional porous medium equation*]{}, Comm. Pure Appl. Math., 65 (2012), pp. 1242–1284. , [*Comparison and regularity results for the fractional [L]{}aplacian via symmetrization methods*]{}, J. Differential Equations, 253 (2012), pp. 2593–2615. , [*Nonlocal problems with [N]{}eumann boundary conditions*]{}, preprint arXiv. , [*Partial Differential Equations*]{}, vol.
19 of Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, second ed., 2010. , [*Symmetrization in a [N]{}eumann problem*]{}, Matematiche (Catania), 41 (1986), pp. 67–78 (1989). , [*Neumann problems and [S]{}teiner symmetrization*]{}, Comm. Partial Differential Equations, 30 (2005), pp. 1537–1553. , [*Extension problem and fractional operators: semigroups and wave equations*]{}, J. Evol. Equ., 13 (2013), pp. 343–368. , [*Elliptic Partial Differential Equations of Second Order*]{}, Classics in Mathematics, Springer-Verlag, Berlin, 2001. Reprint of the 1998 edition. , [*Inequalities*]{}, Cambridge, at the University Press, 1952. 2d ed. , [*Symmetrization & applications*]{}, vol. 3 of Series in Analysis, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2006. , [*Symmetrization in [N]{}eumann problems*]{}, Applicable Anal., 9 (1979), pp. 247–256. , [*Fractional diffusion with [N]{}eumann boundary conditions: the logistic equation*]{}, Discrete Contin. Dyn. Syst. Ser. B, 18 (2013), pp. 2175–2202. , [*Isoperimetric inequalities in parabolic equations*]{}, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 13 (1986), pp. 51–73. , [*Symmetrization for fractional elliptic and parabolic equations and an isoperimetric application*]{}, in press. , [*Extension problem and [H]{}arnack’s inequality for some fractional operators*]{}, Comm. Partial Differential Equations, 35 (2010), pp. 2092–2122. , [*Fractional semilinear [N]{}eumann problems arising from a fractional [K]{}eller-[S]{}egel model*]{}, Calc. Var. Partial Differential Equations, in press (2015). , [*Elliptic equations and rearrangements*]{}, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 3 (1976), pp. 697–718. , [*Inequalities in rearrangement invariant function spaces*]{}, in Nonlinear analysis, function spaces and applications, [V]{}ol. 5 ([P]{}rague, 1994), Prometheus, Prague, 1994, pp. 177–230.
, [*Symétrisation pour [$u_{t}=\Delta \varphi (u)$]{} et applications*]{}, C. R. Acad. Sci. Paris Sér. I Math., 295 (1982), pp. 71–74. , [*Symmetrization and mass comparison for degenerate nonlinear parabolic and related elliptic equations*]{}, Adv. Nonlinear Stud., 5 (2005), pp. 87–131. , [*The porous medium equation*]{}, Oxford Mathematical Monographs, The Clarendon Press, Oxford University Press, Oxford, 2007. Mathematical theory. , [*Symmetrization for linear and nonlinear fractional parabolic equations of porous medium type*]{}, J. Math. Pures Appl. (9), 101 (2014), pp. 553–582. , [*Optimal estimates for fractional fast diffusion equations*]{}, J. Math. Pures Appl. (9), 103 (2015), pp. 535–556. , [*On [N]{}eumann problems for nonlinear fractional parabolic equations of porous medium type*]{}, in preparation. 2000 *Mathematics Subject Classification.* 35B45, 35R11, 35K20. *Keywords and phrases.* Symmetrization, fractional Laplacian, Neumann problems, nonlocal elliptic and parabolic equations. [^1]: Dipartimento per le Tecnologie, Facoltà di Ingegneria, Università degli Studi di Napoli “Parthenope”, 80143 Italia.   E-mail: [bruno.volzone@uniparthenope.it]{}
--- abstract: 'Searching for two-dimensional (2D) organic Dirac materials, which have more adaptable practical applications in comparison with inorganic ones, is of great significance and has been ongoing. However, only two kinds of these materials, both with low Fermi velocity, have been discovered so far. Herein, we report the design of an organic monolayer with C$_4$N$_3$H stoichiometry which possesses a fascinating structure and good stability in its free-standing state. More importantly, we demonstrate that this monolayer is a semimetal with anisotropic Dirac cones and a very high Fermi velocity. This Fermi velocity is roughly one order of magnitude larger than that in the 2D organic Dirac materials reported so far, and is comparable to that in graphene. The Dirac states in this monolayer arise from the extended $\pi$-electron conjugation system formed by the overlapping 2*p*$_z$ orbitals of carbon and nitrogen atoms. Our finding opens a door to the search for more 2D organic Dirac materials with high Fermi velocity.' author: - Hongzhe Pan - Hongyu Zhang - Yuanyuan Sun - Jianfu Li - Youwei Du - Nujiang Tang title: 'C$_4$N$_3$H monolayer: A novel two-dimensional organic Dirac material with high Fermi velocity' --- \[sec:level1\]Introduction ========================== Following the great development of graphene [@1] and topological insulators [@2], an emerging field of “Dirac physics” is being established for investigating the quantum relativistic properties of a class of special materials with Dirac cones, namely Dirac materials [@3]. Such materials exhibit linear electronic band dispersion at the Fermi level, that is, a Dirac band, and thus have charge carriers that behave like massless Dirac fermions. The unique Dirac bands endow these materials with many novel phenomena in electronic transport, such as ballistic charge transport and high carrier mobility [@4], Klein tunneling [@5], various quantum Hall effects [@1; @6; @7], *etc*.
These specific transport properties offer Dirac materials a wide range of promising applications in high-speed low-dissipation devices. In addition, the continuous reduction in the size of devices strongly calls for the development of low-dimensional materials. Thus, two-dimensional (2D) Dirac materials are much more desirable for applications in nanoscale integrated circuits. Theoretically, based on the generalized von Neumann-Wigner theorem [@8], at least three conditions are required to achieve Dirac bands in 2D materials: (i) specific symmetries, (ii) proper parameters and (iii) an appropriate Fermi level and band overlap [@9]. Hexagonal symmetry was widely believed to be the most favorable for the existence of Dirac bands, owing to the fact that many 2D Dirac materials are observed in hexagonal lattices. However, it is not a necessary precondition for the presence of 2D Dirac materials. For example, 6,6,12-graphyne [@10], square-MoS$_2$ [@11] and *Pmmn*-boron [@12] have rectangular lattices yet also possess Dirac cones. Besides symmetries, proper structural parameters such as bond lengths and angles, which are related to the crystal lattice, are also required. Furthermore, the Fermi level should lie precisely at the Dirac points, while no other bands should overlap at the Fermi level. This condition is required to ensure the experimental observation and practical applications of the novel properties particular to the Dirac cones. Unfortunately, 2D Dirac materials are found to be very rare due to these rigorous conditions [@9]. For instance, among hundreds of 2D materials, only graphene [@1; @4], phagraphene [@13], graphynes [@10], silicene and germanene [@14], borophenes [@12], the FeB$_2$ monolayer [@15], *etc*., have been confirmed to be 2D Dirac materials. Moreover, most of the existing 2D Dirac materials are inorganic compounds.
In general, organic materials have the additional advantages of mechanical flexibility and tunable properties in comparison with inorganic ones. Therefore, it is of great practical significance to search for more 2D organic Dirac materials along with the development of “Dirac physics”. Indeed, an extensive search for 2D organic Dirac materials has been ongoing. However, to the best of our knowledge, only two kinds of 2D organic systems have been discovered to be Dirac materials so far. The first successfully realized example is the 2D layered organic conductor $\alpha$-(BEDT-TTF)$_2$I$_3$ (BEDT-TTF = bis(ethylenedithio)-tetrathiafulvalene), which turns into a Dirac material with a pair of tilted Dirac cones when a high hydrostatic pressure above 1.2 GPa is applied [@16; @17]. The other is a theoretical design of some 2D conjugated polymers. These polymers can be theoretically constructed by replacing the insulating connectors (1,3,5-triazine, *etc*.) of conventional semiconducting 2D covalent organic frameworks (COFs) with conductive ones (trivalent carbon atoms, *etc*.) [@18]. Unfortunately, both kinds of 2D organic materials have very low Fermi velocities, roughly one order of magnitude lower than those in 2D inorganic Dirac materials [@4; @10; @12; @13; @14; @15]. It is known that a high Fermi velocity is favorable for the transport properties of a Dirac material [@19]. Thus, designing stable 2D organic Dirac materials with high Fermi velocities is urgent and of great practical significance. Inspired by the experimental synthesis of several 2D carbon nitride sheets based on condensation reactions, such as graphitic carbon nitride (g-C$_3$N$_4$) [@20; @21], the C$_2$N holey 2D crystal (C$_2$N-*h*2D) [@22] and C$_3$N sheets [@23], we design a novel 2D organic material, named the C$_4$N$_3$H monolayer after its stoichiometry.
This organic monolayer has an intriguing structure with evenly distributed heart-shaped angstrom-scale pores and rather good dynamic, thermal and mechanical stabilities in its freestanding state. More interestingly, we demonstrate that this organic monolayer is a 2D Dirac material with anisotropic Dirac cones and a very high Fermi velocity of $1.1\times10^{6}$ m s$^{-1}$. Remarkably, this Fermi velocity is roughly one order of magnitude larger than that in the 2D organic Dirac materials reported so far [@17; @18], and is even comparable to that in 2D inorganic Dirac materials [@4; @10; @12; @13; @14; @15]. The Dirac points in this monolayer are located at off-symmetry points between the $\Gamma$ and K points, and arise predominantly from the overlapping 2*p*$_z$ orbitals of carbon (C) and nitrogen (N) atoms. In addition, we also comment on the experimental feasibility of producing the predicted C$_4$N$_3$H monolayer and propose two hypothetical synthesis routes. \[sec:level1\]Computational Details =================================== Structural optimization, energy, density of states (DOS), band structure, electron localization function (ELF) [@24] and deformation charge density calculations based on density functional theory were performed in the framework of the generalized gradient approximation with the PBE functional [@25] using the Vienna ab initio simulation package (VASP) [@26; @27]. The projector-augmented wave approach [@28] was used to describe the electron-ion interaction. A 500 eV energy cutoff for the plane-wave basis set was used, and the first Brillouin zone was sampled with a $45\times45\times1$ Monkhorst-Pack *k*-point grid in the structural relaxations and self-consistent calculations. In all the calculations, a vacuum distance of 15 Å was applied along the perpendicular direction to ensure negligible interaction between adjacent layers.
All atoms were allowed to relax without any constraint in the geometric optimizations until the total energy change was less than $1.0\times10^{-5}$ eV and the force on each atom was smaller than 0.001 eV Å$^{-1}$. According to the crystal symmetry, the band structure of the C$_4$N$_3$H monolayer was calculated along the special lines connecting the following high-symmetry points: $\Gamma$ (0, 0, 0), K (0.4, 0.4, 0), M (0.5, 0, 0), R (0.6, -0.4, 0), S (0.5, -0.5, 0), $\Gamma$ (0, 0, 0), and M (0.5, 0, 0) in the *k*-space. Moreover, to confirm the existence of the Dirac states, the band structure of the C$_4$N$_3$H monolayer was also recomputed with the more accurate Heyd-Scuseria-Ernzerhof (HSE06) functional [@29] and, separately, with the spin-orbit coupling (SOC) effect included. The finite displacement method, as implemented in the phonopy code [@30], was employed to calculate the phonon dispersion curves of the C$_4$N$_3$H monolayer. During the phonon spectrum calculation, a $4\times4\times1$ supercell was employed and the force constant matrix was determined with VASP. First-principles molecular dynamics (FPMD) simulations, also performed in VASP, were employed to evaluate the thermal stability of this monolayer. The initial configuration with a $4\times4\times1$ supercell was annealed at different temperatures of 300, 500 and 1000 K. The temperature was controlled by the Nosé-Hoover thermostat [@31]. At each temperature, the FPMD simulations in the NVT ensemble lasted for 30 ps with a time step of 2.0 fs. A kinetic energy cutoff of 450 eV and the PBE functional were employed in all FPMD simulations. \[sec:level1\]Results and Discussion ==================================== \[sec:level2\]Geometric structure of the C$_4$N$_3$H monolayer -------------------------------------------------------------- Figures \[fig1\](a) and \[fig1\](b) respectively present the top and side views of the geometric structure of the optimized C$_4$N$_3$H monolayer. As shown in Fig.
\[fig1\](a), this monolayer has a rhombic primitive cell (represented by the green dashed lines) with fully relaxed lattice constants *a*$_1$ = *a*$_2$ = 4.77 Å and $\gamma = 104.5^{\circ}$ (the angle between the ***a*$_1$** and ***a*$_2$** lattice vectors). Its primitive cell consists of four C atoms, three N atoms and one H atom. The correspondingly optimized lattice constants of the transformed rectangular conventional cell \[denoted by the blue dashed lines in Fig. \[fig1\](a)\] with the same symmetry are *a* = 7.54 Å and *b* = 5.84 Å, respectively. This structure has $C_{2v}^{14}$ symmetry and belongs to the space group *Amm2*. Angstrom-scale pores are evenly distributed in the structure of the C$_4$N$_3$H monolayer. Intriguingly, if we link the atoms around the angstrom-scale pore in its conventional cell one by one, the connecting line forms a perfect heart shape \[denoted by the red solid line in Fig. \[fig1\](a)\]. By anatomizing this intriguing structure, one can find that this monolayer is actually constructed by using N atoms to link the frameworks of pyrrole molecules \[represented by the pink dashed lines in Fig. \[fig1\](a)\]. As shown in Fig. \[fig1\](a), we label the N atoms in the framework of the pyrrole molecules as N1, and those sited at the linking positions as N2. There are also two kinds of C atoms: i) the C atoms whose nearest neighbors are two C atoms and one N atom (labeled C1), and ii) the remaining C atoms, which have one C atom and two N atoms as nearest neighbors (labeled C2). The C1–C1 and C1–C2 bond lengths (1.43 and 1.46 Å, respectively) are only slightly larger than the bond length in graphene (1.42 Å, a typical length for *sp*$^2$ C–C bonds), and are noticeably smaller than 1.54 Å (the standard C–C bond length for *sp*$^3$ hybridization) [@32].
The lengths of the C1–N2, C2–N1, and C2–N2 bonds in this monolayer are about 1.33, 1.38 and 1.31 Å, respectively, very similar to those in the already-synthesized g-C$_3$N$_4$ (*ca*. 1.33 and 1.41 Å) [@21] and C$_2$N-*h*2D (1.33 Å) [@22], which have stable structures with *sp*$^2$-hybridized bonds. More detailed information about the structural properties is summarized in Fig. 1 of the Supplemental Material [@33]. In addition, it is noteworthy that the C$_4$N$_3$H monolayer has an exactly planar structure, as shown in Fig. \[fig1\](b). This exactly planar structure and the features of the bond lengths and bond angles imply that the chemical bonds in this monolayer ought to be covalent bonds with *sp*$^2$ hybridization. ![\[fig1\] (a) Top and (b) side views of the optimized geometric structure of the C$_4$N$_3$H monolayer. The primitive cell and conventional cell are respectively denoted by the green and blue dashed lines; ***a*$_1$** and ***a*$_2$** represent the lattice vectors of the primitive cell and $\gamma$ is the angle between them; *a* and *b* are the lattice constants of the conventional cell. The red solid line is the connection line among certain atoms around the angstrom-scale pore in the conventional cell. C1 and C2 denote different C atoms, and N1 and N2 represent different N atoms. (c) Isosurface of the ELF of the C$_4$N$_3$H monolayer plotted with the value of 0.5.](fig_1){width="8.5cm"} ![\[fig2\] (a) ELF plotted with the value of 0.8. (b) Deformation charge density of the C$_4$N$_3$H monolayer. Yellow and cyan refer to electron accumulation and depletion regions, respectively. The isovalue of the deformation charge density is 0.3 e Å$^{-3}$.](fig_2){width="6.8cm"} To confirm this conjecture and further elucidate the bonding nature of the C$_4$N$_3$H monolayer, we then calculated the ELF to analyze its electron distribution. As is known, the ELF can be described in the form of isosurfaces in real space with isovalues ranging from 0 to 1.
A value close to 1 indicates strongly localized covalent or lone-pair electrons, a value close to 0 implies a region of low electron density, and an isovalue of 0.5 corresponds to a homogeneous electron gas. As shown in Fig. \[fig1\](c), the electron gas is well distributed and delocalized over the whole region of this monolayer network, which can electronically stabilize the 2D framework. To highlight the in-plane bonding states, we also plotted the isosurface of the ELF for this monolayer with an isovalue of 0.8 in Fig. \[fig2\](a). It is found that the ELF localization centers are clearly located at the middle of the C–C, C–N and N–H bonds, indicating that the bonds have strong covalent electron states with $\sigma$-like *sp*$^2$ hybridization. The $\sigma$ bonds between C, N and H atoms are also evidenced by the deformation electronic density of the C$_4$N$_3$H monolayer \[Fig. \[fig2\](b)\], defined as the total electronic density of this monolayer minus that of the isolated atoms. Clearly, electrons are well localized over the C–C, C–N and N–H bonds, confirming the conclusion obtained from the ELF and the conjecture we proposed above. In addition, the remaining valence electrons of the carbon and nitrogen atoms form the delocalized $\pi$ network, as in graphene. According to the Bader charge analysis [@34; @35], the C1, C2, N1, N2 and H atoms in the C$_4$N$_3$H monolayer respectively possess $+0.55$, $+1.08$, $-1.28$, $-1.25$ and $+0.55$ $|e|$ charge. This level of charge transfer between C and N atoms is similar to that in the stable C$_2$N-*h*2D monolayer, which has been successfully synthesized [@22; @36], suggesting that the C$_4$N$_3$H monolayer is likely to have similar stability.
\[sec:level2\]Stability of the C$_4$N$_3$H monolayer ---------------------------------------------------- To evaluate the feasibility of experimental synthesis and the stability of the C$_4$N$_3$H monolayer, we first calculated its cohesive energy ($E_{\text{coh}}$) defined by $E_{\text{coh}}=(n_{\text{C}}E_{\text{C}}+n_{\text{N}}E_{\text{N}}+n_{\text{H}}E_{\text{H}}-E_{\text{C$_4$N$_3$H}})/(n_\text{C}+n_\text{N}+n_\text{H})$, where $E_\text{C}$, $E_\text{N}$, $E_\text{H}$ and $E_\text{C$_4$N$_3$H}$ are the calculated total energies of isolated C, N and H atoms, and C$_4$N$_3$H monolayer, respectively; $n_\text{C}$, $n_\text{N}$ and $n_\text{H}$ are the number of C, N and H atoms in the supercell of this monolayer, respectively. According to our computations, the cohesive energy is 7.88 eV per atom. This value is evidently larger than that of the already-synthesized borophene (5.87 eV per atom) [@37], silicene (4.01 eV per atom) [@38] and phosphorene (4.67 eV per atom) [@39], implying the high stability and synthetic feasibility of the C$_4$N$_3$H monolayer from the viewpoint of energy level. ![\[fig3\] (a) Phonon dispersion curves and (b) total phonon DOS of the C$_4$N$_3$H monolayer. Inset is the enlarged drawing of red rectangle part in the phonon dispersion curves.](fig_3){width="6cm"} The stability of this monolayer can be further confirmed by its phonon dispersion curves and phonon DOS. As shown in Fig. \[fig3\](a), there is no sign of imaginary phonon mode in the phonon spectrum along the highly symmetric points in the entire Brillouin zone. In detail, there are eight atoms in the primitive cell of the C$_4$N$_3$H monolayer, thus its phonon spectrum has twenty-four phonon bands, including three acoustic branches and twenty-one optical branches. 
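As a small sanity check on the bookkeeping in this section, the cohesive-energy formula and the phonon-branch count can be written out directly. The isolated-atom and cell total energies below are placeholder values for illustration only (the paper's actual DFT energies are not reproduced here), so the sketch demonstrates the formula, not the 7.88 eV per atom result:

```python
def cohesive_energy(e_atoms, counts, e_total):
    """E_coh = (sum_i n_i * E_i - E_monolayer) / sum_i n_i, in eV per atom."""
    n = sum(counts.values())
    return (sum(counts[s] * e_atoms[s] for s in counts) - e_total) / n

counts = {"C": 4, "N": 3, "H": 1}           # one C4N3H formula unit
e_atom = {"C": -1.3, "N": -3.1, "H": -1.1}  # placeholder isolated-atom energies (eV)
e_monolayer = -75.0                         # placeholder total energy of the cell (eV)
print(f"E_coh = {cohesive_energy(e_atom, counts, e_monolayer):.2f} eV/atom")

# Phonon bookkeeping: 8 atoms per primitive cell give 3 * 8 = 24 branches,
# of which 3 are acoustic and the remaining 21 are optical.
n_atoms = sum(counts.values())
print(3 * n_atoms, "branches:", 3, "acoustic +", 3 * n_atoms - 3, "optical")
```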
The three acoustic branches are respectively transverse acoustic (TA) and longitudinal acoustic (LA) branches corresponding to vibration within the plane, and the other one (ZA) corresponding to vibration out of plane \[inset of Fig. \[fig3\](a)\]. It can be seen that in contrast to the linear dispersion for the TA and LA branches, the frequency of the ZA branch shows a quadratic dispersion near the $\Gamma$ point. It is worth mentioning that this type of quadratic dispersion of ZA branch is a characteristic feature of the phonon dispersion curves in monolayered or layered crystals, e.g., graphene [@40], graphite [@41] and other layered compounds [@15; @42; @43]. As shown in Fig. \[fig3\](b), the total phonon DOS also reveals that no phonon with imaginary frequency is found in this monolayer, which agrees very well with its phonon spectrum. These results demonstrate the good dynamic stability of the C$_4$N$_3$H monolayer. ![\[fig4\] (a) Top and (b) side views of snapshots for the equilibrium structures of the C$_4$N$_3$H monolayer at the end of 30 ps FPMD simulations under the temperature of 300 K; (c) and (d) are similar respectively to Figs. 4(a) and 4(b), but for another FPMD simulation at the temperature of 500 K. The green dashed lines denote the $4\times4\times1$ supercell used in the FPMD simulations. The red arrows and circles display the migration of hydrogen atoms and the break of C–N bonds, respectively. (e) and (f) are the fluctuations of total energies with respect to FPMD simulation times at 300 K and 500 K, respectively.](fig_4){width="8.5cm"} Moreover, to further evaluate the thermal stability of the C$_4$N$_3$H monolayer, we performed FPMD simulations using a $4\times4\times1$ supercell containing 64 C atoms, 48 N atoms and 16 H atoms. The initial configuration was annealed at different temperatures of 300, 500 and 1000 K with a time step of 2 fs. As shown in Figs. 
\[fig4\](a) and \[fig4\](b), the snapshots of the geometric structure at the end of the 30 ps simulations clearly reveal that the C$_4$N$_3$H monolayer can maintain its structural integrity, apart from some thermal fluctuations, at a temperature of 300 K. The total energy of the simulated system reaches equilibrium quickly at 300 K \[see Fig. \[fig4\](e)\], verifying the above result from the viewpoint of energy. This result can be understood from the fact that the binding energies of the C–C, C–N and C–H bonds are larger than the thermal energy corresponding to room temperature, consistent with other 2D carbon nitride systems [@21; @22; @23; @44]. Moreover, this slightly distorted structure can restore its initial planar configuration after complete atomic relaxation. The structure of this monolayer after annealing at 500 K is shown in Figs. \[fig4\](c) and \[fig4\](d). Obviously, significant atomic rearrangement took place and the basal plane became substantially disordered after thermal annealing \[see the partial structures indicated by the red arrows and circles in Figs. \[fig4\](c) and \[fig4\](d)\]. Further analysis of the FPMD simulations reveals the process of structural collapse at this temperature, as shown in Fig. \[fig4\](f) and Fig. 2 of the Supplemental Material [@33]. Moreover, this monolayer immediately and completely collapsed at the higher temperature of 1000 K. Hence, combined with the above result that the C$_4$N$_3$H monolayer maintains its structural integrity at room temperature, we conclude that this monolayer has a melting point between 300 and 500 K, revealing its decent thermal stability. Considering the importance of the mechanical stability of a material for its applications, we also studied the mechanical properties of this organic monolayer by examining its elastic constants.
It is known that there are four nonzero elastic constants for a 2D material, namely $C_{11}$, $C_{22}$, $C_{12}$ ($C_{21}$) and $C_{66}$. These elastic constants need to satisfy the criteria ($C_{11}C_{22}-C_{12}^2>0,C_{66}>0$) for a mechanically stable 2D sheet [@45; @46]. The in-plane Young's moduli along the ***a*** ($Y_a$) and ***b*** ($Y_b$) directions can be expressed as $Y_a=(C_{11}C_{22}-C_{12}C_{21})/C_{22}$ and $Y_b=(C_{11}C_{22}-C_{12}C_{21})/C_{11}$. The computed elastic constants of the C$_4$N$_3$H monolayer are $C_{11}$ = 209.8 N m$^{-1}$, $C_{22}$ = 138.7 N m$^{-1}$, $C_{12}$ = $C_{21}$ = 71.5 N m$^{-1}$ and $C_{66}$ = 86.0 N m$^{-1}$. Clearly, these elastic constants satisfy the above criteria, indicating the good mechanical stability of this monolayer. Accordingly, the calculated $Y_a$ and $Y_b$ are 173.1 and 114.3 N m$^{-1}$, respectively, indicating that the C$_4$N$_3$H monolayer is mechanically anisotropic. The Young's moduli of this monolayer are higher than those of already-synthesized phosphorene ($Y_a$ = 25.5 N m$^{-1}$ and $Y_b$ = 91.6 N m$^{-1}$) [@39; @46], suggesting its good mechanical properties. Considering that the C$_4$N$_3$H monolayer has a cohesive energy comparable to other already-synthesized 2D materials, together with its dynamic, thermal and mechanical stabilities, we believe that its experimental synthesis is viable. It is well known that the condensation reaction is a valid method for producing exotic materials in polymer chemistry. Most importantly, recent developments in this field have already led to the successful synthesis of a series of polymer materials, for instance, g-C$_3$N$_4$ [@20; @21], C$_2$N-*h*2D [@22], C$_3$N sheets [@23], 2D COFs [@47; @48], metal-organic frameworks [@49], *etc*. These materials show the tremendous capability of modern chemistry to create novel 2D networks from custom-designed monomers.
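The mechanical-stability criteria and Young's moduli quoted earlier in this section follow directly from the reported elastic constants; the short check below (a sketch, with all values in N m$^{-1}$ taken from the text) reproduces them:

```python
# 2D elastic constants of the C4N3H monolayer reported in the text (N/m).
c11, c22, c12, c66 = 209.8, 138.7, 71.5, 86.0
c21 = c12

# Mechanical-stability criteria for a 2D sheet: C11*C22 - C12^2 > 0 and C66 > 0.
is_stable = (c11 * c22 - c12 ** 2 > 0) and (c66 > 0)

# In-plane Young's moduli along the a and b directions.
y_a = (c11 * c22 - c12 * c21) / c22
y_b = (c11 * c22 - c12 * c21) / c11

print(is_stable, f"Y_a = {y_a:.1f} N/m", f"Y_b = {y_b:.1f} N/m")
```

With the tabulated constants this yields $Y_a \approx 172.9$ N m$^{-1}$ and $Y_b \approx 114.3$ N m$^{-1}$, matching the reported 173.1 and 114.3 N m$^{-1}$ up to rounding of the elastic constants.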
Inspired by this, we propose two hypothetical routes to synthesize the C$_4$N$_3$H monolayer based on pyrrole molecules. First, pyrrole molecules would be nitrided (or oxidized) to certain intermediate products; the 2D C$_4$N$_3$H material could then be synthesized *via* the condensation reaction of these intermediates (see Fig. 3 of the Supplemental Material [@33]). It should be noted that the real synthetic process is bound to be much more complicated and difficult. Nevertheless, the experimental realization of this novel material is worth anticipating because of the unique electronic properties revealed in the next section. \[sec:level2\]Electronic structures of the C$_4$N$_3$H monolayer ---------------------------------------------------------------- Since C$_4$N$_3$H is a novel material with an intriguing structure, its electronic structure deserves detailed exploration. Thus, we calculated the band structure, total DOS (TDOS) and projected DOS (PDOS) of the C$_4$N$_3$H monolayer. As shown in Fig. \[fig5\](a), this monolayer is a semimetal with the valence and conduction bands meeting at a single point at the Fermi level. The linear bands and the degenerate state at this point denote the appearance of Dirac states in the C$_4$N$_3$H monolayer. Unlike in the well-known graphene, where the Dirac points are located at the high-symmetry points (K and K$^{'}$ points), the Dirac point of the C$_4$N$_3$H monolayer is located on the path from the $\Gamma$ to the K point. As a result, there are two symmetry-related Dirac points in the entire first Brillouin zone \[only one representative is shown in Figs. \[fig5\](a) and \[fig5\](c), as the two Dirac points are related by symmetry\]. Consistently, the TDOS is zero at the Fermi level \[Fig. \[fig5\](b)\], supporting the presence of the Dirac point.
Figure 5(c) shows the first Brillouin zone of the C$_4$N$_3$H monolayer with the orthorhombic (*Amm2*) structure, and further demonstrates that neither a honeycomb structure nor hexagonal symmetry is a prerequisite for the existence of Dirac cones. ![\[fig5\] (a) Band structure and (b) TDOS and PDOS of the C$_4$N$_3$H monolayer. The Fermi level is assigned at 0 eV. Inset in Fig. \[fig5\](a) is the enlarged drawing of the bands in the vicinity of the Dirac point, which was calculated by using a high-precision screening parameter of 0.0001 Å$^{-1}$. (c) First Brillouin zone with the special *k* points: $\Gamma$ (0, 0, 0), K (0.4, 0.4, 0), M (0.5, 0, 0), R (0.6, -0.4, 0), and S (0.5, -0.5, 0). ***b$_1$*** and ***b$_2$*** are reciprocal lattice vectors of the *k*-space, while ***k$_x$*** and ***k$_y$*** are basis vectors of the rectangular coordinate system. The pink lines depict the high-symmetry lines connecting the special *k* points, while the black dot represents the position of the Dirac point. (d) Distorted Dirac cone formed by the valence and conduction bands in the vicinity of the Dirac point \[the region displayed by the green square in Fig. \[fig5\](c)\].](fig_5){width="8.5cm"} To obtain deeper insight into the Dirac states of this organic monolayer, we further calculated the Dirac cone formed by the valence and conduction bands in the vicinity of the Dirac point. As shown in Fig. \[fig5\](d), its Dirac cone is distinctly distorted, similar to that of some inorganic Dirac materials such as phagraphene [@13], 6,6,12-graphyne [@10] and *Pmmn*-boron [@12]. The linear dispersion of energy with momentum in both the ***k$_x$*** and ***k$_y$*** directions around the Dirac point suggests the zero effective mass of the carriers (electrons and holes) near the Fermi level.
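For a linear Dirac band the carrier velocity is set by the band slope via $v_F=(1/\hbar)\,\partial E/\partial k$, so converting a slope given in eV Å to m s$^{-1}$ is a one-line unit conversion. A sketch using CODATA constants, applied to the band slopes of 7.3, 6.2 and $-2.8$ eV Å reported for this monolayer:

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J s
EV = 1.602176634e-19     # J per eV
ANGSTROM = 1.0e-10       # m

def fermi_velocity(slope_ev_angstrom):
    """Convert a band slope dE/dk given in eV*Angstrom to a Fermi velocity in m/s."""
    return abs(slope_ev_angstrom) * EV * ANGSTROM / HBAR

for slope in (7.3, 6.2, -2.8):
    print(f"{slope:5.1f} eV A -> v_F = {fermi_velocity(slope):.1e} m/s")
```

The three slopes give roughly $1.1\times10^{6}$, $9.4\times10^{5}$ and $4.3\times10^{5}$ m s$^{-1}$, in line with the Fermi velocities quoted in the text.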
To examine the carrier mobility around the distorted Dirac cone, we then calculated the Fermi velocity ($v_F$) of the C$_4$N$_3$H monolayer using the formula $v_F=(1/\hbar)\cdot(\partial E/\partial k)$, where $\partial E/\partial k$ is the slope of the valence or conduction band near the Dirac point and $\hbar$ is the reduced Planck constant. The slope of the bands in the ***k$_x$*** direction is $\pm$7.3 eV Å, equivalent to a Fermi velocity $v_{Fx}=1.1\times10^6$ m s$^{-1}$, while in the ***k$_y$*** direction, the slopes of the bands are 6.2 eV Å ($v_{Fy}=9.4\times10^5$ m s$^{-1}$) and -2.8 eV Å ($v_{Fy}=4.3\times10^5$ m s$^{-1}$) (see Fig. 4 of the Supplemental Material [@33]). The largest Fermi velocity is comparable to that in graphene and other 2D inorganic Dirac materials [@10; @12; @13; @14; @15], and is roughly one order of magnitude larger than that in the 2D organic Dirac materials reported so far [@17; @18]. It has been noted that Fermi velocities in 2D organic polymers follow an approximately inverse exponential relation to their pore size, largely independent of the interconnecting oligomer [@18]. That is to say, a small pore size favors a high Fermi velocity in a 2D organic Dirac polymer. Thus, the high Fermi velocity of our organic monolayer may be attributed to its angstrom-scale pores. In addition, the anisotropy of the distorted Dirac cone, with different slopes at the Dirac point in the ***k$_x$*** and ***k$_y$*** directions, implies direction-dependent electronic properties of the C$_4$N$_3$H monolayer, indicating more flexible applications in contrast with graphene. Additionally, the hybrid HSE06 functional [@29] and the SOC effect were respectively used to recalculate the band structure of the C$_4$N$_3$H monolayer to further confirm its Dirac states. As shown in Fig.
5(a) of the Supplemental Material [@33], the dispersion of the valence and conduction bands at the Fermi level given by the HSE06 functional is very similar to that computed with the PBE functional [@25], and no band gap can be identified, indicating that the intrinsic Dirac states predicted above for the C$_4$N$_3$H monolayer survive under the hybrid HSE06 functional. On the other hand, we also demonstrate that the SOC effect on the electronic properties of this monolayer is negligible and the Dirac cone is still well preserved, as can be seen in Fig. 5(b) of the Supplemental Material [@33]. This is unsurprising because C, N and H are all light elements, so the SOC effect on the nontrivial gap should be very small. ![\[fig6\]Top and side views of the isosurfaces of partial charge densities for (a) the highest valence band and (b) the lowest conduction band. The isovalue is 0.015 e Å$^{-3}$.](fig_6){width="8.5cm"} Then, we explored the physical origin of the Dirac states in the C$_4$N$_3$H monolayer. From the PDOS of this monolayer \[Fig. \[fig5\](b)\], one can see that both the occupied and unoccupied peaks near the Fermi level mostly originate from the 2*p*$_z$ orbitals of the C and N atoms. The calculated orbital-resolved band structures of this monolayer support the same conclusion, namely that the bands near the Fermi level are mainly contributed by the 2*p*$_z$ orbitals of the atoms (see Fig. 6 of the Supplemental Material [@33]). Furthermore, we also calculated the band-decomposed charge density at the Dirac point to visualize this result. Clearly, the electron density distribution presents an obvious characteristic of 2*p*$_z$ orbitals \[see the side views of the isosurfaces plotted in Figs. \[fig6\](a) and \[fig6\](b)\].
Moreover, both the top of the valence band and the bottom of the conduction band around the Dirac point are mainly contributed by the 2*p*$_z$ orbitals of C and N2 atoms, while the 2*p*$_z$ orbital of N1 atoms also contributes to the lowest conduction band. These 2*p*$_z$ orbitals overlap, resulting in the formation of an extended $\pi$-electron conjugation system in the C$_4$N$_3$H monolayer, similar to that in graphene. Accordingly, we conclude that this $\pi$-electron conjugation system is responsible for the emergence of Dirac states in this organic monolayer. \[sec:level1\]Conclusion ======================== To summarize, we designed a novel 2D organic material with evenly distributed heart-shaped angstrom-scale pores, namely the C$_4$N$_3$H monolayer. In its unique structure, every C atom is bonded to three other atoms (C or N) *via* *sp*$^2$ hybridization, giving rise to three $\sigma$-like orbitals in its basal plane and one $\pi$ orbital along the perpendicular *z* axis. These $\sigma$ and $\pi$ bonds are responsible for the energy stability of the 2D C$_4$N$_3$H framework. Furthermore, we have confirmed its dynamical, thermal and mechanical stabilities by its phonon dispersion curves, FPMD simulations and mechanical properties, respectively. The band structure and TDOS clearly reveal that this organic monolayer is a semimetal with anisotropic Dirac cones and a very high Fermi velocity, roughly one order of magnitude larger than that in any 2D organic Dirac material reported so far [@17; @18]. Based on the PDOS and the band-decomposed charge density of this monolayer, we demonstrate that the anisotropic Dirac cones originate from the extended $\pi$-electron conjugation system, which is mainly contributed by the 2*p*$_z$ orbitals of C and N atoms. These results indicate that we have successfully designed a 2D organic Dirac material with a Fermi velocity comparable to that in 2D inorganic ones.
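The Fermi velocities quoted above follow from a one-line unit conversion of the band slopes via $v_F=(1/\hbar)(\partial E/\partial k)$. As a sanity check, a minimal sketch (the slope values are taken from the text; the function name is ours):

```python
# Sanity check: convert the band slopes quoted in the text (in eV*Angstrom)
# into Fermi velocities via v_F = (1/hbar) * dE/dk.
# Constants are CODATA values.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
EV = 1.602176634e-19     # 1 eV in J
ANGSTROM = 1e-10         # 1 Angstrom in m

def fermi_velocity(slope_eV_A):
    """Fermi velocity in m/s for a band slope given in eV*Angstrom."""
    return slope_eV_A * EV * ANGSTROM / HBAR

# Slopes along k_x and k_y from the text: 7.3, 6.2 and 2.8 eV*Angstrom.
for slope in (7.3, 6.2, 2.8):
    print(f"{slope:4.1f} eV*A -> v_F = {fermi_velocity(slope):.2e} m/s")
```

Running this reproduces the quoted values $1.1\times10^6$, $9.4\times10^5$ and $4.3\times10^5$ m s$^{-1}$ to two significant figures.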
We hope that our findings will promote the experimental realization of this novel material and greatly push forward the study of 2D organic Dirac materials. We thank Feng Liu of the University of Utah for helpful discussions. This work was financially supported by the State Key Program for Basic Research (Grant Nos. 2014CB921102 and 2017YFA0206304) and NSFC (Grant Nos. 51572122 and 11304096), China. We are grateful to the High Performance Computing Center of Nanjing University, on whose blade cluster system the numerical calculations in this paper were performed. [49]{} A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. **81**, 109 (2009). J. E. Moore, Nature **464**, 194 (2010). T. O. Wehling, A. M. Black-Schaffer, and A. V. Balatsky, Adv. Phys. **63**, 1 (2014). K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature **438**, 197 (2005). M. I. Katsnelson, K. S. Novoselov, and A. K. Geim, Nat. Phys. **2**, 620 (2006). C. R. Dean, L. Wang, P. Maher, C. Forsythe, F. Ghahari, Y. Gao, J. Katoch, M. Ishigami, P. Moon, M. Koshino, T. Taniguchi, K. Watanabe, K. L. Shepard, J. Hone, and P. Kim, Nature **497**, 598 (2013). R. Yu, W. Zhang, H.-J. Zhang, S.-C. Zhang, X. Dai, and Z. Fang, Science **329**, 61 (2010). K. Asano and C. Hotta, Phys. Rev. B **83**, 245125 (2011). J. Wang, S. Deng, Z. Liu, and Z. Liu, Natl. Sci. Rev. **2**, 22 (2015). D. Malko, C. Neiss, F. Viñes, and A. Görling, Phys. Rev. Lett. **108**, 086804 (2012). W. Li, M. Guo, G. Zhang, and Y.-W. Zhang, Phys. Rev. B **89**, 205402 (2014). X.-F. Zhou, X. Dong, A. R. Oganov, Q. Zhu, Y. Tian, and H.-T. Wang, Phys. Rev. Lett. **112**, 085502 (2014). Z. Wang, X.-F. Zhou, X. Zhang, Q. Zhu, H. Dong, M. Zhao, and A. R. Oganov, Nano Lett. **15**, 6182 (2015). S. Cahangirov, M. Topsakal, E. Aktürk, H. Şahin, and S. Ciraci, Phys. Rev. Lett. **102**, 236804 (2009). H. Zhang, Y. Li, J. Hou, A. Du, and Z. Chen, Nano Lett. 
**16**, 6124 (2016). M. O. Goerbig, J.-N. Fuchs, G. Montambaux, and F. Piéchon, Phys. Rev. B **78**, 045415 (2008). M. Hirata, K. Ishikawa, K. Miyagawa, M. Tamura, C. Berthier, D. Basko, A. Kobayashi, G. Matsuno, and K. Kanoda, Nat. Commun. **7**, 12666 (2016). J.-J. Adjizian, P. Briddon, B. Humbert, J.-L. Duvail, P. Wagner, C. Adda, and C. Ewels, Nat. Commun. **5**, 5842 (2014). K. Medjanik, O. Fedchenko, S. Chernov, D. Kutnyakhov, M. Ellguth, A. Oelsner, B. Schönhense, T. R. F. Peixoto, P. Lutz, C.-H. Min, F. Reinert, S. Däster, Y. Acremann, J. Viefhaus, W. Wurth, H. J. Elmers, and G. Schönhense, Nat. Mater. **16**, 615 (2017). X. Wang, K. Maeda, A. Thomas, K. Takanabe, G. Xin, J. M. Carlsson, K. Domen, and M. Antonietti, Nat. Mater. **8**, 76 (2009). G. Algara-Siller, N. Severin, S. Y. Chong, T. Björkman, R. G. Palgrave, A. Laybourn, M. Antonietti, Y. Z. Khimyak, A. V. Krasheninnikov, J. P. Rabe, U. Kaiser, A. I. Cooper, A. Thomas, and M. J. Bojdys, Angew. Chem. Int. Ed. **53**, 7450 (2014). J. Mahmood, E. K. Lee, M. Jung, D. Shin, I.-Y. Jeon, S.-M. Jung, H.-J. Choi, J.-M. Seo, S.-Y. Bae, S.-D. Sohn, N. Park, J. H. Oh, H.-J. Shin, and J.-B. Baek, Nat. Commun. **6**, 6486 (2015). S. Yang, W. Li, C. Ye, G. Wang, H. Tian, C. Zhu, P. He, G. Ding, X. Xie, Y. Liu, Y. Lifshitz, S.-T. Lee, Z. Kang, and M. Jiang, Adv. Mater. **29**, 1605625 (2017). A. Savin, R. Nesper, S. Wengert, and T. F. Fässler, Angew. Chem. Int. Ed. **36**, 1808 (1997). J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. **77**, 3865 (1996). D. Vanderbilt, Phys. Rev. B **41**, 7892 (1990). G. Kresse and J. Furthmüller, Phys. Rev. B **54**, 11169 (1996). P. E. Blöchl, Phys. Rev. B **50**, 17953 (1994). J. Heyd, G. E. Scuseria, and M. Ernzerhof, J. Chem. Phys. **124**, 219906 (2006). A. Togo, F. Oba, and I. Tanaka, Phys. Rev. B **78**, 134106 (2008). S. Nosé, Mol. Phys. **52**, 255 (1984). D. W. Boukhvalov, M. I. Katsnelson, and A. I. Lichtenstein, Phys. Rev. B **77**, 035427 (2008). 
See Supplemental Material at <http://link.aps.org/supplemental/XXX> for details on the geometric structure, stability and electronic structures of the C$_4$N$_3$H monolayer. G. Henkelman, A. Arnaldsson, and H. Jónsson, Comput. Mater. Sci. **36**, 354 (2006). E. Sanville, S. D. Kenny, R. Smith, and G. Henkelman, J. Comput. Chem. **28**, 899 (2007). J. Kang, S. Horzum, and F. M. Peeters, Phys. Rev. B **92**, 195419 (2015). A. J. Mannix, X.-F. Zhou, B. Kiraly, J. D. Wood, D. Alducin, B. D. Myers, X. Liu, B. L. Fisher, U. Santiago, J. R. Guest, M. J. Yacaman, A. Ponce, A. R. Oganov, M. C. Hersam, and N. P. Guisinger, Science **350**, 1513 (2015). A. Fleurence, R. Friedlein, T. Ozaki, H. Kawai, Y. Wang, and Y. Yamada-Takamura, Phys. Rev. Lett. **108**, 245501 (2012). L. Li, Y. Yu, G. J. Ye, Q. Ge, X. Ou, H. Wu, D. Feng, X. H. Chen, and Y. Zhang, Nat. Nanotechnol. **9**, 372 (2014). J.-A. Yan, W. Y. Ruan, and M. Y. Chou, Phys. Rev. B **77**, 125401 (2008). N. Mounet and N. Marzari, Phys. Rev. B **71**, 205214 (2005). H. Pan, Y. Sun, Y. Zheng, N. Tang, and Y. Du, New J. Phys. **18**, 093021 (2016). H. Zabel, J. Phys.: Condens. Matter **13**, 7679 (2001). A. Du, S. Sanvito, and S. C. Smith, Phys. Rev. Lett. **108**, 197207 (2012). S. Zhang, J. Zhou, Q. Wang, X. Chen, Y. Kawazoe, and P. Jena, Proc. Natl. Acad. Sci. USA **112**, 2372 (2015). Y. Wang, F. Li, Y. Li, and Z. Chen, Nat. Commun. **7**, 11488 (2016). C. S. Diercks and O. M. Yaghi, Science **355**, 923 (2017). J. W. Colson, A. R. Woll, A. Mukherjee, M. P. Levendorf, E. L. Spitler, V. B. Shields, M. G. Spencer, J. Park, and W. R. Dichtel, Science **332**, 228 (2011). T. Kambe, R. Sakamoto, K. Hoshiko, K. Takada, M. Miyachi, J.-H. Ryu, S. Sasaki, J. Kim, K. Nakazato, M. Takata, and H. Nishihara, J. Am. Chem. Soc. **135**, 2462 (2013).
--- abstract: 'We discuss the role of subdivisions of tropical moduli spaces in logarithmic Gromov–Witten theory, and use them to study the virtual class of curves in a product of pairs. Our main result is that the cycle-valued logarithmic Gromov–Witten theory of $X\times Y$ decomposes into a product of pieces coming from $X$ and $Y$, but this decomposition must be considered in a blowup of the moduli space of curves. This blowup is specified by tropical moduli data. As an application, we show that the cycle of curves in a toric variety with fixed contact orders is a product of virtual strict transforms of double ramification cycles. The formalism we outline offers a unified viewpoint on a number of recent results in logarithmic Gromov–Witten theory, including works of Herr, Holmes–Pixton–Schmitt, and Nabijou and the author.' address: | Department of Pure Mathematics [*&*]{} Mathematical Statistics\ University of Cambridge, UK author: - Dhruv Ranganathan bibliography: - 'Products.bib' title: A note on cycles of curves in a product of pairs --- Introduction ============ Logarithmic Gromov–Witten theory concerns cycles on moduli spaces of stable maps from pointed curves $(C,p_1,\ldots, p_n)$ to a pair $(Z,D_Z)$, where $Z$ is a smooth variety and $D_Z$ is a simple normal crossings divisor. The logarithmic structure prescribes the tangency orders of each point $p_i$ along each divisor $D_j$. This tangency order is equal to the scheme theoretic contact order for non-degenerate maps, but also remains locally constant in families. The resulting moduli space $\mathsf K_\Gamma(Z)$ is equipped with a virtual fundamental class and invariants are integrals of evaluation cycles from $Z$ and tautological classes against the virtual class [@AC11; @Che10; @GS13]. A basic property of the virtual structure of the spaces $\mathsf K_\Gamma(Z)$ that has appeared in recent work in the subject is the behaviour under products [@Herr; @HPS19; @LQ18; @NR19]. 
We revisit the question in this note from the viewpoint of tropical moduli theory. Let $(X,D_X)$ and $(Y,D_Y)$ be smooth projective varieties equipped with simple normal crossings divisors, and let $(Z,D_Z)$ be the product. Modified products ----------------- Fix the genus, curve class, marked points, and their contact orders for maps to $Z$. This determines discrete data for maps to $X$ and $Y$ as well. We will package the discrete data in the symbol $\Gamma$, and use it flexibly to name the discrete data on moduli spaces of maps to $X,Y$, and $Z$. The moduli space of maps from *smooth curves* to $Z$ with given contact orders is naturally identified with the fiber product of the mapping spaces for $X$ and $Y$ over $\cM_{g,n}$. This product decomposition breaks down over ${\overline{\cM}\vphantom{\cM}}_{g,n}$. For simplicity, assume that the genus and markings are in the stable range. Given spaces $\mathsf K^\dagger$ and $\mathsf K$ equipped with virtual classes, if there is a morphism $\mathsf K^\dagger\to \mathsf K$ such that pushforward identifies the virtual classes, we say that $\mathsf K^\dagger$ is a *virtual birational model* of $\mathsf K$. Our main result explains that the product formula continues to hold provided the moduli spaces are replaced by virtual birational models. 
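In symbols, the identification over smooth curves referred to above reads as follows (the superscript $\circ$, marking the open locus of maps from smooth curves, is our notation, not the paper's):

```latex
% Over the locus of smooth curves, with contact orders fixed as above,
% the moduli space of maps to the product splits as a fiber product:
\[
  \mathsf K_\Gamma(Z)^{\circ}
    \;\cong\;
  \mathsf K_\Gamma(X)^{\circ}
    \times_{\cM_{g,n}}
  \mathsf K_\Gamma(Y)^{\circ},
\]
% since a map to Z = X x Y from a fixed curve is exactly a pair of maps,
% one to X and one to Y, from the same curve.
```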
[A]{}\[thm: product-formula\] There exists an explicit logarithmic modification of the moduli space of stable curves $$\begin{tikzcd} {\overline{\cM}\vphantom{\cM}}_{\Gamma}\arrow{r}{\pi} &{\overline{\cM}\vphantom{\cM}}_{g,n} \end{tikzcd}$$ and virtual birational models $\mathsf K_\Gamma(W)^\dagger\to\mathsf K_\Gamma(W)$ for $W = X,Y,Z$, fitting into a diagram $$\begin{tikzcd} \mathsf K_\Gamma(Z)^\dagger\arrow{r}{\vartheta} & \mathsf P_\Gamma(X\times Y) \arrow{d}\arrow{r}\arrow[dr, phantom, "\square"] & \mathsf K_\Gamma(X)^\dagger\times \mathsf K_\Gamma(Y)^\dagger \arrow{d} \\ &{\overline{\cM}\vphantom{\cM}}_{\Gamma}\arrow{r}[swap]{\Delta} &{\overline{\cM}\vphantom{\cM}}_{\Gamma}\times {\overline{\cM}\vphantom{\cM}}_{\Gamma}. & \end{tikzcd}$$ There is an equality of virtual classes $$\vartheta_\star [\mathsf K_\Gamma(Z)^\dagger]^{\mathrm{vir}} = \Delta^![\mathsf K_\Gamma(X)^\dagger\times \mathsf K_\Gamma(Y)^\dagger]^{\mathrm{vir}} \ \ \mathrm{in} \ \ A_\star(\mathsf P_\Gamma(X\times Y);{{\mathbb Q}}).$$ Geometry of the modification ---------------------------- The necessity of this modification can be easily captured in examples, and we comment further on it below. The modification ${\overline{\cM}\vphantom{\cM}}_\Gamma\to {\overline{\cM}\vphantom{\cM}}_{g,n}$ is easily described. The morphisms $\pi_W: \mathsf K_\Gamma(W)\to {\overline{\cM}\vphantom{\cM}}_{g,n}$ are typically ill-behaved at the level of logarithmic structures. This manifests concretely as follows. If $S$ is a codimension $k$ stratum of ${\overline{\cM}\vphantom{\cM}}_{g,n}$, its preimage, which is a logarithmic stratum, may not have virtual codimension $k$ in $\mathsf K_\Gamma(W)$. This is equivalent to the statement that the morphism $\pi_W$ is not *integral and saturated* as a logarithmic morphism, or that it is not flat with reduced fibers at the level of Artin fans [@AW]. 
Combinatorially, in the morphism of tropical moduli stacks associated to $\pi_W$, there are cones which do not map surjectively onto their target cone. The tropical moduli stack $\cM_{g,n}^\trop$ can be subdivided until the image of $\pi_W$ is a union of cones for each $W = X,Y,Z$, i.e. *along the image of the cycle of tropical maps of type $\Gamma$*. Toroidal geometry furnishes a birational modification ${\overline{\cM}\vphantom{\cM}}_\Gamma$ of ${\overline{\cM}\vphantom{\cM}}_{g,n}$, and there is a compatible refinement for mapping spaces. In the next section we describe how such modifications can be interpreted as moduli functors on logarithmic schemes, carrying natural perfect obstruction theories, whose virtual classes have the expected properties. Toric contact cycles -------------------- We briefly recall that given a vector of integers $A\in \mathbb Z^n$ with vanishing sum, the *double ramification cycle* $\mathsf{DR}_g(A)$ is a cycle on the moduli space of curves whose restriction to $\cM_{g,n}$ is the locus of curves that admit a map to $\mathbb P^1$ with ramification orders over $0$ and $\infty$ given by the positive and negative entries in $A$ along the corresponding markings. Let $\mathsf K_{g,A}({\mathbb{P}}^1)_{\mathbb G_m}$ be the moduli space of genus $g$ logarithmic stable maps to $\mathbb P^1$ relative to $0$ and $\infty$ with contact orders $A$, up to $\mathbb G_m$ scaling on the target. The double ramification cycle is the pushforward of the virtual class of this mapping space under $$\mathsf K_{g,A}({\mathbb{P}}^1)_{\mathbb G_m}\to {\overline{\cM}\vphantom{\cM}}_{g,n}.$$ The class is known to lie in the tautological ring and a formula for it has been calculated [@FP; @JPPZ]. We consider the following generalization. Fix contact order data $A_1$ and $A_2$ in $\mathbb Z^n$. 
Let $\mathsf K_\Gamma(\mathbb P^1\times {\mathbb{P}}^1)_{T}$ be the moduli space of genus $g$ logarithmic stable maps to ${\mathbb{P}}^1\times{\mathbb{P}}^1$ with contact orders along the toric boundary given by $(A_1,A_2)$. We consider two maps equivalent if they differ by scaling under the $2$-dimensional dense torus action. Define the *toric contact cycle* $\mathsf{TC}_g(A_1,A_2)$ to be the pushforward of the virtual fundamental class under the morphism $$\mathsf K_\Gamma(\mathbb P^1\times {\mathbb{P}}^1)_{T}\to {\overline{\cM}\vphantom{\cM}}_{g,n}.$$ [B]{}\[thm: HPS-theorem\] There exists an explicit logarithmic modification of the moduli space of stable curves $$\begin{tikzcd} {\overline{\cM}\vphantom{\cM}}_{\Gamma}\arrow{r}{\pi} &{\overline{\cM}\vphantom{\cM}}_{g,n} \end{tikzcd}$$ and explicit lifts of the cycles $\mathsf{DR}(A_i)$ and $\mathsf{TC}(A_1,A_2)$ in the Chow group $A_\star({\overline{\cM}\vphantom{\cM}}_\Gamma;{{\mathbb Q}})$, such that, denoting the lifts by hats, there is an equality $$\widehat{\mathsf{TC}}(A_1,A_2) = \widehat{\mathsf{DR}}(A_1)\cdot \widehat{\mathsf{DR}}(A_2) \ \ \mathrm{in} \ A_\star({\overline{\cM}\vphantom{\cM}}_{\Gamma};{{\mathbb Q}}).$$ The analogous statements for multifold products also hold. Note that a marked point may have nonzero contact with multiple divisors, so the statement for multifold products of ${\mathbb{P}}^1$ implies the analogous result for arbitrary toric varieties, by virtual birational invariance [@AW]. Recent results -------------- The double ramification cycle can also be defined using a resolution of the Abel–Jacobi section from the moduli space of curves to the universal Picard variety, and in that context, the statement of Theorem \[thm: HPS-theorem\] has been proved recently in an elegant paper of Holmes–Pixton–Schmitt [@HPS19]. In particular, Section 8 of loc. cit. provides a concrete example exhibiting the failure of multiplicativity of the toric contact cycles. 
Our second result is a proof of their result from the Gromov–Witten viewpoint. The state of the art on resolutions of the Abel–Jacobi section may be found in [@AP19; @Hol17; @MW17]. Even more recently, Leo Herr has proved the product formula in logarithmic Gromov–Witten theory as an application of an elegant and general framework of logarithmic normal cones and logarithmic perfect obstruction theories. These yield a notion of *virtually logarithmically smooth morphisms* [@Herr]. If the logarithmic obstruction theories are unwound into a piece coming from an ordinary perfect obstruction theory and another coming from a logarithmic blowup, one is led to a formula of the form presented here. In contrast, the results here are presented using the traditional framework of *virtually smooth* morphisms. We hope that the presentation here explains the need for, and naturality of, a logarithmic Chow theory in the work of Herr, as well as earlier work of Barrott [@Bar18; @Herr]. The idea of using tropical geometry to correct the product formula appeared also in our earlier work with Nabijou, proving the local/logarithmic conjecture [@NR19]. The basic strategy in that paper is to establish a version of the product formula for different logarithmic structures over the same base manifold, and thereby relate the logarithmic Gromov–Witten theory to the older relative theory for smooth pairs. Finally, this paper serves as a companion to [@R19], which establishes the degeneration formula for logarithmic Gromov–Witten invariants using expanded degenerations. Products of smooth pairs provide basic examples of normal crossings pairs, and the central strategy required to prove the product formula appears in loc. cit. In particular, we rely on correcting “naive” intersections of classes by strict-transform calculations attached to subdivisions to obtain both results. 
Earlier work on products ------------------------ When the logarithmic structures on $X$ and $Y$ are both trivial, the tropical part of the geometry disappears and we recover Behrend’s product formula for ordinary Gromov–Witten theory, and indeed its proof [@Beh97]. The product formula also holds for orbifolds [@AJT]. If $X$ alone has trivial logarithmic structure, the product formula holds without birational modifications, and was proved by Lee and Qu [@LQ18]. In genus $0$, for toric targets there is a product decomposition for the moduli space of logarithmic maps, using the logarithmic torus [@RW19]. Acknowledgements {#acknowledgements .unnumbered} ---------------- My view on this subject has been shaped by conversations with Davesh Maulik in the context of earlier work on the degeneration formula, and I’m grateful to him for his generosity with time. I thank Dan Abramovich for the comment, “it would all be fine if the stabilization morphism was flat, but it probably fails in general”, which finally clarified the geometry. I have also benefited from conversations with Tom Graber, Andreas Gross, Navid Nabijou, Rahul Pandharipande, and Jonathan Wise. I thank Jonathan Wise for keeping me informed of the parallel work of his student Leo Herr. Subdivisions of moduli problems {#sec: subdivisions} =============================== Given a moduli space $\cM$ representing a fibered category over schemes, a blowup $\widetilde{\cM}$ does not come equipped with a natural modular interpretation. However, if $\cM$ is a logarithmic scheme representing a fibered category over *logarithmic schemes*, a logarithmic blowup can be described as a *subcategory* of the fibered category $\cM$. This fact was observed by Kato [@Kato-LogMod], and utilized in logarithmic moduli problems in [@MW17; @R19; @RSW17A; @RSW17B]. We begin by reviewing these ideas. 
Subdivisions: cone complexes and logarithmic schemes ---------------------------------------------------- Let $\Sigma$ be a rational polyhedral cone complex. This defines a functor from the category of rational polyhedral cones to sets: $$\begin{aligned} F_\Sigma: \mathbf{Cones}&\to& \mathbf{Sets}\\ \tau&\mapsto& \mathbf{Hom}(\tau, \Sigma).\end{aligned}$$ A key observation is that if $\widetilde\Sigma\to \Sigma$ is a subdivision of cone complexes, then $F_{\widetilde \Sigma}$ is a *subfunctor* of $F_\Sigma$. The link to logarithmic geometry arises as follows. Assume that $\Sigma$ is a fan embedded in a vector space, and let $X = X(\Sigma)$ be the toric variety associated to $\Sigma$. The scheme $X$ inherits a natural logarithmic structure from the toric boundary divisor, and therefore defines a functor on logarithmic schemes $$\begin{aligned} F_X: \mathbf{LogSch}&\to& \mathbf{Sets}\\ S&\mapsto& \mathbf{Hom}(S, X),\end{aligned}$$ where the morphisms are taken in the category of logarithmic schemes. Each logarithmic scheme $S$ comes equipped with a tropicalization $S^{\trop}$, which we may assume is a single cone. There is a natural tropicalization map carrying a logarithmic scheme to its underlying polyhedral functor [@CCUW; @Kato94]. Kato observed that there is an identification $$F_{\widetilde X} = F_X\times_{F_{\Sigma}} F_{\widetilde\Sigma}.$$ In other words, just as a subdivision of a cone complex is a subfunctor of the original cone complex, a toric modification of a toric variety, interpreted as a functor on logarithmic schemes, is a subfunctor of the original functor. This formalism does not require $X$ to be toric, only that it has a logarithmic structure. A logarithmic structure on $X$ gives rise to a functor on the category of cone complexes whose ${{\mathbb R}}_{\geq 0}$-points give its tropicalization. 
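The simplest instance of Kato's identification, included here purely as an illustration (this example does not appear in the text): take $\Sigma$ to be the first quadrant, so $X(\Sigma)=\mathbb A^2$, and subdivide along the diagonal ray.

```latex
% Illustration: the blowup of the affine plane at the origin.
\[
  \Sigma = \operatorname{Cone}\!\big((1,0),(0,1)\big),
  \qquad
  \widetilde\Sigma
    = \operatorname{Cone}\!\big((1,0),(1,1)\big)
      \cup \operatorname{Cone}\!\big((1,1),(0,1)\big).
\]
% The new ray spanned by (1,1) corresponds to the exceptional divisor:
% X(\widetilde\Sigma) -> X(\Sigma) = A^2 is the blowup at the origin, and
\[
  F_{X(\widetilde\Sigma)}
    = F_{X(\Sigma)} \times_{F_{\Sigma}} F_{\widetilde\Sigma}
\]
% exhibits the blowup as a subfunctor of F_{X(\Sigma)} on logarithmic schemes.
```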
Given a logarithmic scheme $X$, a *logarithmic modification* is obtained by pulling back the subfunctor defined by a refinement of its tropicalization. It is a basic observation that these subfunctors are in turn obtained from schemes with logarithmic structure; that is, they are representable by schemes with logarithmic structure [@Kato-LogMod]. The interpretation of logarithmic modifications as subfunctors, or monomorphisms of logarithmic schemes, is lost once we pass to the representing ordinary scheme. This manifests combinatorially: passing from the logarithmic scheme to its underlying scheme is analogous to replacing a fan with its partially ordered set of faces. Any subdivision of polyhedral complexes is injective, but the map of fans induces a map on partially ordered sets that is no longer an injection. Logarithmic and tropical moduli of curves ----------------------------------------- In our examples, the logarithmic scheme above will be replaced by a moduli space of curves or of logarithmic stable maps to a logarithmic scheme $X$. The fan side is controlled by tropical moduli theory, which we briefly recall. We follow standard conventions for tropical curves [@CCUW]. \[def: trop-curve\] An **$n$-marked tropical curve** ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is a finite graph $G$ with vertex set $V$ and edge set $E$, enhanced with 1. **markings** $m: \{1,\ldots,n\}\to V$, 2. a **vertex genus function** $g:V\to {{\mathbb N}}$, 3. an **edge length function** $\ell: E\to {{\mathbb R}}_{+}$. The **genus** of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is equal to $$g({\scalebox{0.8}[1.3]{$\sqsubset$}}) = h_1(G)+\sum_{v\in V} g(v).$$ The edge length function gives ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ the structure of a metric space, where marked points are realized by attaching copies of ${{\mathbb R}}_{\geq 0}$ to appropriate vertices, as “legs”. There is a notion of a family of tropical curves. Let $\sigma$ be a cone with dual cone $S_\sigma$. 
A **family of tropical curves over $\sigma$** is a tropical curve in the sense above, but whose length function takes values in $S_\sigma$. A point of $\sigma$ is a monoid homomorphism $\varphi: S_\sigma \to {{\mathbb R}}_{\geq 0}$, and applying it to the edge length $\ell(e)\in S_\sigma$, we obtain a positive real length for each edge and thus a tropical curve. Given a family of logarithmic curves $\mathscr C/S$, there is a tropicalization ${\scalebox{0.8}[1.3]{$\sqsubset$}}/S^\trop$, which is a family of tropical curves with the natural genus and marking data. When $S$ is a single point with logarithmic structure, the tropicalization is obtained by decorating the edges of the dual graph of $\mathscr C$ with the deformation parameters of the nodes. The procedure is treated in detail in [@CCUW]. We may view ${\overline{\cM}\vphantom{\cM}}_{g,n}$ as a category fibered in groupoids over logarithmic schemes, and to emphasize this, we temporarily introduce the notation $\cM_{g,n}^{\mathrm{log}}$. There is a tropical moduli stack $\cM_{g,n}^\trop$, which is a category fibered in groupoids over the category of rational polyhedral cone complexes [@CCUW]. There is a surjective morphism $$\cM_{g,n}^{\mathrm{log}}\to \cM_{g,n}^\trop$$ given by taking a family of logarithmic curves to its tropicalization. Minimality and tropical deformations ------------------------------------ The moduli space of stable curves ${\overline{\cM}\vphantom{\cM}}_{g,n}$ can be recovered as a subcategory of $\cM_{g,n}^{\mathrm{log}}$. Let $S = \operatorname{Spec}(P\to {{\mathbb C}})$ and let $\mathscr C/S$ be a logarithmic curve. The dual graph of $\mathscr C$ determines a cone of the tropical moduli space $\cM_{g,n}^\trop$, which we denote $\sigma(\mathscr C)$. The logarithmic curve over a logarithmic point is *minimal* if the dual monoid $\mathrm{Hom}(P,{{\mathbb R}}_{\geq 0})$ is isomorphic to $\sigma(\mathscr C)$. 
That is, in the corresponding family of tropical curves, there are no unexpected relationships between the edge lengths. A family is said to be minimal if it is minimal at every point. F. Kato characterizes the moduli space of stable curves as follows. The moduli space of stable curves ${\overline{\cM}\vphantom{\cM}}_{g,n}$ can be identified with the open substack of $\cM_{g,n}^{\mathrm{log}}$ parameterizing minimal logarithmic curves. In other words, there are no relations between the deformation parameters of the nodes of $\mathscr C$. Mapping spaces -------------- We also require the parallel statements for the moduli spaces of stable maps to logarithmic schemes. Let $\Sigma$ be a fan and let ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ be a tropical curve. A **tropical map** ${\scalebox{0.8}[1.3]{$\sqsubset$}}\to \Sigma$ is a continuous piecewise linear map such that every face of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ maps to a face of $\Sigma$. Families of tropical maps are defined in the analogous fashion, and one obtains a moduli space $\mathsf T_\Gamma(\Sigma)$ of tropical stable maps. Observe that continuity of the map ${\scalebox{0.8}[1.3]{$\sqsubset$}}\to \Sigma$ imposes nontrivial restrictions on the edge lengths of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. For instance, if a collection of edges $e_1,\ldots, e_k$ forms a cycle in ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, continuity forces the total displacement around the cycle to vanish, and therefore imposes a linear relation among the edge lengths of the $e_i$. Given a logarithmic scheme $X$ with tropicalization $\Sigma$, there is a fibered category $\mathsf K^{\mathrm{log}}_\Gamma(X)$ over logarithmic schemes, whose fiber over a logarithmic scheme $S$ is the groupoid of families of logarithmic curves over $S$ together with a map to $X$. There is again a morphism $$\mathsf K^{\mathrm{log}}_\Gamma(X)\to \mathsf T_\Gamma(\Sigma).$$ Abramovich–Chen and Gross–Siebert have developed a theory of minimal (resp. 
basic) logarithmic maps [@AC11; @Che10; @GS13]. There is a subcategory $\mathsf K_\Gamma(X)$ of minimal logarithmic maps within the category of all logarithmic maps $\mathsf K^{\mathrm{log}}_\Gamma(X)$. A family is said to be minimal if, at each geometric point, the associated tropical family is a cone of this tropical moduli space. The dual viewpoint to the statement above is that [a logarithmic family of maps is minimal precisely when the associated tropical family of maps is maximal.]{} It is immediate from this that if a family of logarithmic stable maps to $X$ is minimal, the underlying family of curves is typically not minimal when $X$ has nontrivial logarithmic structure. Equivalently, in the induced map $\mathsf T_\Gamma(\Sigma)\to \fM_{g,n}^\trop$, there are cones of the source that do not surject onto their target cones. Variation of minimality and subcategories ----------------------------------------- Subdivisions of tropical moduli spaces produce subcategories in the following manner. Given a refinement of cone stacks $$\cM^\trop_\Gamma\to\cM_{g,n}^\trop,$$ base change along the tropicalization map $\cM_{g,n}^{\mathrm{log}}\to \cM_{g,n}^\trop$ gives rise to a subcategory $$\cM^{\mathrm{log}}_\Gamma\to \cM_{g,n}^{\mathrm{log}}.$$ Inside $\cM^{\mathrm{log}}_\Gamma$, we may pick out a further subcategory of $\Gamma$-minimal objects. Specifically, an object in this category is still a family of logarithmic curves $\mathscr C\to S$ over a logarithmic base. The tropicalization is then a family ${\scalebox{0.8}[1.3]{$\sqsubset$}}\to S^\trop$, and we obtain a moduli map $$S^\trop\to \cM^\trop_\Gamma.$$ A family of curves is $\Gamma$-minimal if, at every point of $S$, the tropicalization of the base is identified with a cone of the moduli space $\cM^\trop_\Gamma$. This subcategory of $\Gamma$-minimal objects is representable by a stack, and in fact by a logarithmic modification of ${\overline{\cM}\vphantom{\cM}}_{g,n}$. 
Once again, there are parallel statements for tropical maps, which we employ in the next section. Curves in a product =================== To begin, we examine the product formula in an entirely combinatorial setting. Fix fans $\Sigma_X$ and $\Sigma_Y$ associated to the target varieties $X$ and $Y$, and let $\Sigma_Z$ be their product. As in the introduction, fix discrete data $\Gamma$, which, in a mild abuse of notation, will be used to describe the compatible discrete data on $X$, $Y$, and the product. Combinatorial products {#sec: comb-prod} ---------------------- We require a basic finiteness statement. Fix discrete data $\Gamma$. The cone complex $\mathsf T_\Gamma(\Sigma_Z)$ associated to the space $\mathsf K_\Gamma(Z)$ of logarithmic stable maps to $Z$ is a finite-type cone complex. The targets have simple normal crossings logarithmic structures, so under these hypotheses, the statement is equivalent to the combinatorial finiteness proved by Gross and Siebert [@GS13 Section 3.1]. The forgetful morphism $\mathsf K_\Gamma(Z)\to {\overline{\cM}\vphantom{\cM}}_{g,n}$ induces a morphism of cone complexes $$\mathsf T_\Gamma(\Sigma_Z)\to \cM_{g,n}^\trop.$$ The image of the forgetful morphism $$\mathsf T_\Gamma(\Sigma_Z)\to \cM_{g,n}^\trop$$ is supported on a finite-type conical subset of the cone complex $\cM_{g,n}^\trop$. This follows immediately from the combinatorial finiteness in the previous lemma. Although the image of the morphism is a conical subset, it is not a *subcomplex* of the tropical moduli space of curves. There exists a smooth (unimodular) subdivision of conical subcomplexes $$\cM_\Gamma^{\trop}\to \cM_{g,n}^\trop$$ such that the images of the morphisms $$\mathsf T_\Gamma(\Sigma_Z)\to \cM_{g,n}^\trop, \ \ \ \mathsf T_\Gamma(\Sigma_X)\to \cM_{g,n}^\trop, \ \ \textrm{ and} \ \ \ \mathsf T_\Gamma(\Sigma_Y)\to \cM_{g,n}^\trop$$ are each unions of cones in the subdivision $\cM_\Gamma^{\trop}$. 
The lemma is a consequence of [@AK00 Lemma 4.3], which asserts that for a morphism of polyhedral complexes, there is always a projective subdivision of the target that ensures that the image of each cone of the source is a union of cones. Toric resolution of singularities guarantees the existence of a smooth subdivision. [*From this point forward, we fix a choice of subdivision $\cM_\Gamma^{\trop}$ of the tropical moduli space of curves realizing the lemma above.* ]{} As constructed, the morphism $$\mathsf{T}_\Gamma(\Sigma_X)\to \cM_\Gamma^\trop$$ may not be a morphism of cone complexes, as the image of a cone may be a union of cones, and therefore not contained in a single cone. However, we choose subdivisions $\mathsf{T}_\Gamma(X)^\dagger$, $\mathsf{T}_\Gamma(Y)^\dagger$, and $\mathsf{T}_\Gamma(Z)^\dagger$ that make these morphisms maps of cone complexes. Indeed, these may be obtained as fiber products in the category of cone complexes. By construction, the cone complexes satisfy the following condition: Each of the morphisms $$\mathsf T_\Gamma(\Sigma_Z)^\dagger\to \cM_{\Gamma}^\trop, \ \ \ \mathsf T_\Gamma(\Sigma_X)^\dagger\to \cM_{\Gamma}^\trop, \ \ \textrm{ and} \ \ \ \mathsf T_\Gamma(\Sigma_Y)^\dagger\to \cM_{\Gamma}^\trop$$ have the property that they map cones surjectively onto cones. Virtual strict transforms ------------------------- The formalism in Section \[sec: subdivisions\], together with the subdivision $\cM_\Gamma^{\trop}$ produced in the previous section, gives rise to birational models of the moduli spaces of curves and maps. These will be the spaces claimed to exist in the introductory statements. In order to lift our combinatorial statements to virtual classes, we use a formalism developed by Abramovich and Wise [@AW]. If $W$ is a toric variety with dense torus $T$, the *Artin fan* of $W$ is the toric Artin stack $\mathscr A_W = [W/T]$.
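For orientation, the simplest instance of this definition can be made completely explicit. The identification below is the standard toric example of an Artin fan; it is included only as an illustration and is not taken from the text above.

```latex
% Artin fan of the affine line with its toric logarithmic structure.
% Here W = A^1 and the dense torus T = G_m acts by scaling:
\mathscr A_{\mathbb A^1} \;=\; [\mathbb A^1/\mathbb G_m].
% This quotient stack has exactly two points: the open point
% [\mathbb G_m/\mathbb G_m] \cong \mathrm{pt}, where the logarithmic
% structure is trivial, and the closed point [\{0\}/\mathbb G_m]
% \cong B\mathbb G_m, which remembers the boundary divisor
% \{0\} \subset \mathbb A^1. The strict morphism is the quotient map
\mathbb A^1 \longrightarrow [\mathbb A^1/\mathbb G_m],
% so the Artin fan retains only the combinatorics of W --- here the
% single ray of the fan of A^1 --- and forgets everything else.
```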
Both $W$ and $\mathscr A_W$ are equipped with divisorial logarithmic structure, and the morphism $$W\to \mathscr A_W$$ is strict. The Artin fan is logarithmically étale over a point. For an arbitrary logarithmic scheme $W$, there is a replacement of the global quotient construction above, producing an Artin fan, which continues to be an Artin stack, logarithmically étale over a point; $W$ has a strict morphism $$W\to \mathscr A_W.$$ The stack $\mathscr A_W$ is essentially combinatorial, constructed from the local toric models that give charts for the logarithmic structure for $W$. Details may be found in [@ACMUW]. Manolache has defined a functorial virtual pullback in Chow homology for morphisms equipped with relative perfect obstruction theories [@Mano12]. Let $W$ be logarithmically smooth. The moduli stack of logarithmic maps $\mathsf K_\Gamma(\mathscr A_W)$ from curves to $\mathscr A_W$ is a logarithmically smooth stack that is locally of finite type. The natural morphism $$\mathsf K_\Gamma(W)\to \mathsf K_\Gamma(\mathscr A_W)$$ has a natural relative perfect obstruction theory. The virtual pullback of the fundamental class is equal to the virtual fundamental class of $\mathsf K_\Gamma(W)$. The virtual class defined above has natural lifts to any logarithmic modification. Any subdivision $$\mathsf T_\Gamma(\Sigma_W)^\dagger\to \mathsf T_\Gamma(\Sigma_W)$$ induces logarithmic modifications $$\mathsf K_\Gamma(\mathscr A_W)^\dagger\to \mathsf K_\Gamma(\mathscr A_W) \ \ \textrm{and} \ \ \ \mathsf K_\Gamma(W)^\dagger\to \mathsf K_\Gamma(W)$$ by pulling back [@AW; @Kato94]. The moduli space $\mathsf K_\Gamma(\mathscr A_W)^\dagger$ is certainly logarithmically smooth, since it is a logarithmically étale modification of a logarithmically smooth stack. We summarize the observations concerning the virtual structure of this stack from [@R19 Section 3.5].
The morphism $\pi: \mathsf K_\Gamma(W)^\dagger\to \mathsf K_\Gamma(\mathscr A_W)^\dagger$ has a relative perfect obstruction theory. The virtual pullback of the fundamental class defines a virtual fundamental class $$[\mathsf K_\Gamma(W)^\dagger]^{\mathrm{vir}}:=\pi^{!}[\mathsf K_\Gamma(\mathscr A_W)^\dagger]$$ for $\mathsf K_\Gamma(W)^\dagger$. Moreover, pushforward along the modification $$\mathsf K_\Gamma(W)^\dagger\to \mathsf K_\Gamma(W)$$ identifies virtual classes. The virtual class $[\mathsf K_\Gamma(W)^\dagger]^{\mathrm{vir}}$ functions as a *virtual strict transform* of the virtual class on $\mathsf K_\Gamma(W)$. Proof of Theorem \[thm: product-formula\] ----------------------------------------- Let ${\overline{\cM}\vphantom{\cM}}_\Gamma$ be the logarithmic modification of ${\overline{\cM}\vphantom{\cM}}_{g,n}$ defined by pulling back the tropical subdivision $\cM_\Gamma^{\trop}\to \cM_{g,n}^\trop$. Consider the following commutative diagram. $$\begin{tikzcd} \mathsf K_\Gamma(Z)^\dagger\arrow{d}{\varphi}\arrow{r}{\vartheta} & \mathsf P(X\times Y) \arrow{r} \arrow{d} & \mathsf K_\Gamma(X)^\dagger\times \mathsf K_\Gamma(Y)^\dagger\arrow{d}{\phi} \\ \mathsf K_\Gamma(\mathscr A_Z)^\dagger \arrow{r}[swap]{\nu} & \mathsf P(\mathscr A_X\times \mathscr A_Y) \arrow{r}[swap]{g} \arrow{d} & \mathsf K_\Gamma(\mathscr A_X)^\dagger\times \mathsf K_\Gamma(\mathscr A_Y)^\dagger\arrow{d}{\psi} \\ & {\overline{\cM}\vphantom{\cM}}_\Gamma \arrow[swap]{r}{\Delta} & {\overline{\cM}\vphantom{\cM}}_\Gamma\times{\overline{\cM}\vphantom{\cM}}_\Gamma. \end{tikzcd}$$ We are now led to the main result, which asserts that after these logarithmic modifications, the virtual pullbacks in the above diagram are compatible. There is equality of Chow homology classes on the space $\mathsf P(X\times Y)$: $$\vartheta_\star [\mathsf K_\Gamma(Z)^\dagger]^{\mathrm{vir}} = \Delta^! 
[ \mathsf K_\Gamma(X)^\dagger\times \mathsf K_\Gamma(Y)^\dagger]^{\mathrm{vir}}.$$ We first inspect the vertical map $\psi$, which, on each factor, forgets the data of the map and stabilizes if necessary. We claim that this map is flat with reduced fibers. In order to see this, we first note that the morphism is a toroidal morphism of toroidal embeddings. This is established in [@AW Proposition 3.2]. We can therefore use the polyhedral criteria for equidimensionality [@AK00 Lemma 4.1] and reducedness [@AK00 Lemma 5.2]. These are satisfied by construction. The flatness of $\psi$ leads to the main conclusions necessary for the proof. The flatness and reducedness imply that the two squares on the right are Cartesian in the category of fine and saturated logarithmic stacks, as well as in ordinary stacks. The morphisms $\varphi$ and $\phi$ each give relative obstruction theories, and these are compatible; the proof of compatibility is identical to [@Beh97 Proposition 6]. As noted above, the morphism $\psi$ is a flat morphism by construction, and therefore $g$ is the flat base change of $\Delta$. In turn, $\Delta$ is the diagonal morphism on a smooth Deligne–Mumford stack, and is therefore a local complete intersection morphism. Since the property of being a local complete intersection is stable under flat base change, we see that $g$ is also a local complete intersection morphism. The obstruction theories given by $\Delta$ and $g$ are therefore compatible [@stacks-project [Tag 069I](https://stacks.math.columbia.edu/tag/069I)]. Finally, we consider the morphism $\nu$. Since the stack $\mathsf P(\mathscr A_X\times \mathscr A_Y)$ is a fiber product in ordinary schemes, it parameterizes data $$(C,\widetilde C_1,\widetilde C_2, \widetilde C_1\to\mathscr A_X,\widetilde C_2\to \mathscr A_Y).$$ That is, it parameterizes families of curves, together with two destabilizations of the curve, respectively equipped with logarithmic morphisms to $\mathscr A_X$ and to $\mathscr A_Y$.
We identify the tropicalization of this moduli problem analogously, as parameterizing data of piecewise linear families $$({\scalebox{0.8}[1.3]{$\sqsubset$}},\widetilde {\scalebox{0.8}[1.3]{$\sqsubset$}}_1,\widetilde {\scalebox{0.8}[1.3]{$\sqsubset$}}_2, \widetilde {\scalebox{0.8}[1.3]{$\sqsubset$}}_1\to\Sigma_X,\widetilde {\scalebox{0.8}[1.3]{$\sqsubset$}}_2\to \Sigma_Y).$$ Let $\sigma$ be the base of such a family. Note that fiberwise over $\sigma$, $\widetilde {\scalebox{0.8}[1.3]{$\sqsubset$}}_1$ and $\widetilde {\scalebox{0.8}[1.3]{$\sqsubset$}}_2$ are each simply subdivisions of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. In particular, after a further subdivision of $\sigma$, there exists a common refinement $\widetilde {\scalebox{0.8}[1.3]{$\sqsubset$}}$ of these two curves, admitting maps to $\Sigma_X$ and $\Sigma_Y$, and therefore to $\Sigma_Z$. Observe that the morphism $$\mathsf K_\Gamma(\mathscr A_Z)\to \mathsf P(\mathscr A_X\times \mathscr A_Y)$$ is pulled back from the morphism of their tropicalizations. The argument above shows that the analogous morphism of tropicalizations is a subdivision. The pullback is therefore a logarithmic modification, and we conclude that $\nu$ has pure degree $1$. The product formula now follows from a standard diagram chase, using the established compatibilities. The fact that $\nu$ is pure degree $1$ shows that $$\vartheta_\star[\mathsf K_\Gamma(Z)^\dagger]^{\mathrm{vir}} = \phi^![\mathsf P(\mathscr A_X\times \mathscr A_Y)]$$ by an application of Costello’s pushforward theorem [@Cos06 Theorem 5.0.1]. We also have the equality $$\phi^![\mathsf P(\mathscr A_X\times \mathscr A_Y)] = g^![\mathsf K_\Gamma(X)^\dagger\times\mathsf K_\Gamma(Y)^\dagger],$$ by functoriality of virtual pullbacks [@Mano12 Theorem 4.1]. The result follows. Toric contact cycles -------------------- To deduce Theorem \[thm: HPS-theorem\], we apply the argument with a minor change. 
Specifically, for the rubber geometry $\mathsf K_{\Gamma}({\mathbb{P}}^1)_{\mathbb G_m}$, we need a replacement for $\mathsf K_\Gamma(\mathscr A_X)$. The latter can be identified with a fibered category over logarithmic schemes whose fibers are $(C,{\scalebox{0.8}[1.3]{$\sqsubset$}}\to {{\mathbb R}})$, where $C$ is a logarithmic curve, ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is its tropicalization, and the map ${\scalebox{0.8}[1.3]{$\sqsubset$}}\to {{\mathbb R}}$ is piecewise linear, balanced, with asymptotic slopes given by $A$. We replace $\mathsf K_\Gamma(\mathscr A_X)$ with the category $\mathsf K_\Gamma(\mathscr A_X)_{\mathbb G_{\trop}}$ where (1) the data ${\scalebox{0.8}[1.3]{$\sqsubset$}}\to {{\mathbb R}}$ is only required to be determined up to additive translation on the target, and (2) $C$ is required to be stable. The corresponding category of minimal objects is represented by an open subset of a blowup of ${\overline{\cM}\vphantom{\cM}}_{g,n}$. The moduli space is constructed in detail in [@MW17 Section 5]. With this replacement, a diagram chase identical to the one above yields the claimed result. The toric contact cycle for curves in any toric variety, not necessarily one of product type, can be decomposed into products of strict transforms of double ramification cycles. Indeed, for $X$ toric of dimension $r$, equipped with its toric boundary logarithmic structure, the moduli space $\mathsf K_\Gamma(X)$ of logarithmic maps is virtually birational to $\mathsf K_\Gamma(({\mathbb{P}}^1)^r)$ by the invariance statements of [@AW]. The result above may then be applied to each factor of the product. A sample subdivision -------------------- The subdivisions constructed in Section \[sec: comb-prod\] can be described explicitly for the double ramification and toric contact cycles. We include a figure below, which describes a piece of the subdivision for genus $2$ curves mapping to rubber ${\mathbb{P}}^1$ with degree $3$, totally ramified over $0$ and $\infty$.
[Figure \[fig: degree-3-cover\]: two vertices joined by three bounded edges, each vertex carrying an unbounded leg, mapping to the real line drawn below.] The cone associated to the cover in Figure \[fig: degree-3-cover\] is a ray, since by continuity of the tropical map, the three bounded edges must have the same edge length. However, this ray maps into a $3$-dimensional cone in $\cM_{2,2}^\trop$. This $3$-dimensional cone is dual to a codimension $3$ stratum, consisting of curves with two genus $0$ components and three mutual nodes. Perform a stellar subdivision of this cone. After this subdivision, the ray of the tropical rubber mapping space now maps onto a subcomplex of this cone. Similar subdivisions can be made in other cones of the moduli space. After subdividing $\cM_{2,2}^\trop$, we obtain a birational model ${\overline{\cM}\vphantom{\cM}}_\Gamma$ of the space of stable curves. The refinement may be pulled back to the tropical rubber space. The resulting virtual birational model of the rubber mapping space has “virtually proper intersections” with strata. That is, the preimage of a stratum under the map $$\mathsf K_{\Gamma}({\mathbb{P}}^1)_{\mathbb G_m}\to {\overline{\cM}\vphantom{\cM}}_\Gamma,$$ if nonempty, has the expected virtual dimension. The pushforward of the virtual class is a *transverse double ramification cycle*, which we view as a strict transform of the usual double ramification cycle under the blowup. Our main theorem asserts that products of pullbacks (under point-forgetting) of such transverse cycles are toric contact cycles. We note that the combinatorics that arises in identifying the locus in the tropical moduli space of curves that admit a piecewise linear map to ${{\mathbb R}}$ with the given slopes is essentially a manifestation of the *admissible weightings* that appear in Pixton’s formula for the double ramification cycle [@JPPZ].
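The stellar subdivision invoked for the $3$-dimensional cone above can be sketched explicitly. The coordinates below are our own illustration of the standard construction, under the assumption that the cone is smooth with generators $e_1,e_2,e_3$ corresponding to the lengths of the three edges.

```latex
% Stellar subdivision of sigma = Cone(e_1, e_2, e_3) at its
% barycentric ray rho:
\sigma \;\rightsquigarrow\;
\mathrm{Cone}(\rho, e_1, e_2) \,\cup\,
\mathrm{Cone}(\rho, e_2, e_3) \,\cup\,
\mathrm{Cone}(\rho, e_1, e_3),
\qquad
\rho \;=\; \mathbb R_{\geq 0}\,(e_1 + e_2 + e_3).
% The ray of the tropical rubber mapping space sits diagonally in
% sigma (all three edge lengths equal), i.e. it maps exactly onto
% rho, which is a cone of the subdivision --- so after subdividing,
% its image is a subcomplex, as required.
```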
--- abstract: 'It has been believed for a long time that the tensionless limit of superstring theory can be described by a higher spin gauge theory. Recently, a concrete realization of this idea was proposed via 3d Aharony-Bergman-Jafferis (ABJ) theory with the help of holographic duality. In this note we review our work on finding a similar relation involving 2d coset type models. We start by proposing and examining holographic dualities between 3d higher spin gauge theories with matrix valued fields and the large $N$ limit of 2d coset type models. After that we discuss possible relations to superstring theory with emphasis on the role of the matrix form of the higher spin fields and the extended supersymmetry.' address: - 'Department of Mathematical and Statistical Sciences, 632 CAB, University of Alberta, Edmonton, Alberta T6G 2G1, Canada' - 'Department of Physics, Rikkyo University, 3-34-1 Nishi-Ikebukuro, Toshima, Tokyo 171-8501, Japan' - 'University of Luxembourg, Mathematics Research Unit, FSTC, Campus Kirchberg, 6, rue Coudenhove-Kalergi, L-1359 Luxembourg-Kirchberg, Luxembourg' author: - Thomas Creutzig - Yasuaki Hikida - 'Peter B. Rønne' title: 'Higher spin AdS$_3$ holography and superstring theory' --- [^1] [^2] [^3] Introduction ============ The gauge theory of higher spin fields can be introduced as a natural extension of electromagnetism, described by a spin-1 field, and of gravity, described by a spin-2 field. Superstring theory includes a large spectrum of massive higher spin states, and it is believed that the tensionless limit of superstring theory can be described by a higher spin gauge theory. Moreover, several examples of AdS/CFT dualities involving higher spin gauge theories are known, and while these dualities are much simpler than the full superstring dualities, they do share important non-trivial features. The most famous example of a non-trivial higher spin gauge theory is given by the Vasiliev theory [@Vasiliev:2003ev].
It was proposed that 4d Vasiliev theory is dual to the 3d O$(N)$ vector model [@Klebanov:2002ja] (see also [@Sezgin:2002rt]), and this proposal was confirmed by examining the correlation functions [@Giombi:2009wh; @Giombi:2010vg]. The role of higher spin symmetry in the correspondence was clarified in [@Maldacena:2011jn; @Maldacena:2012sf]. Further, it is possible to extend the Vasiliev theory to include matrix valued fields, i.e. to associate Chan-Paton (CP) factors. Recently, it was argued in [@Chang:2012kt] that the 4d extended Vasiliev theory with CP factors is dual to the 3d Aharony-Bergman-Jafferis (ABJ) theory. Since the ABJ theory is known to be dual to the superstring theory on AdS$_4 \times \mathbb{C}$P$^3$ [@Aharony:2008ug; @Aharony:2008gk], the duality in [@Chang:2012kt] suggests a non-trivial relation (called the ABJ triality) between higher spin theory, ABJ theory and superstring theory. In other words, through the AdS/CFT correspondence, it becomes possible to examine superstring theory in terms of gauge theory with a large amount of higher spin symmetry. The lower dimensional version of the proposal by [@Klebanov:2002ja] was introduced in [@Gaberdiel:2010pz] (see [@Gaberdiel:2012uj] for a review), and the claim is that the 3d Vasiliev theory in [@Prokushkin:1998bq] is dual to a 2d large $N$ minimal model. This proposal is motivated by the enhancement of the asymptotic symmetry in 3d higher spin gauge theory found in [@Henneaux:2010xg; @Campoleoni:2010zq; @Campoleoni:2011hg]. There are several generalizations of this proposal. A truncated version was proposed in [@Ahn:2011pv; @Gaberdiel:2011nt]. Moreover, supersymmetric versions were introduced in [@Creutzig:2011fe; @Creutzig:2012ar; @Beccaria:2013wqa] and several supporting checks can be found, e.g., in [@Candu:2012jq; @Hanaki:2012yf; @Henneaux:2012ny; @Creutzig:2012xb; @Moradi:2012xd]. It is also possible to extend the 3d Vasiliev theory to include matrix valued fields [@Prokushkin:1998bq].
Therefore, it is natural to expect that the AdS/CFT correspondence with the 3d extended Vasiliev theory with CP factors leads to a non-trivial correspondence between higher spin gauge theory and superstring theory as in [@Chang:2012kt]. We expect to obtain a deeper understanding of such trialities by studying the lower dimensional version since, in general, lower dimensional theories are more tractable than higher dimensional ones. In this note we would like to review our works on this subject in [@Creutzig:2013tja; @Creutzig:2014ula]. Similar works may be found in [@Gaberdiel:2013vva; @Gaberdiel:2014yla; @Beccaria:2014jra; @Gaberdiel:2014cha; @Candu:2014yva]. The rest of this note is organized as follows. In the next section we review the ABJ triality in [@Chang:2012kt] and explain why it is important to introduce CP factors to the higher spin fields. In section \[CP\] we propose a duality between 3d higher spin gauge theory with CP factors and a 2d coset type model, and give several supporting arguments for the proposal. In section \[N=3\] we slightly generalize the duality to accommodate extended supersymmetry, and discuss the relation to superstring theory by making use of the $\mathcal{N}=3$ superconformal symmetry of the 2d coset type model. In section \[conclusion\] we conclude this note. A review of ABJ triality {#ABJ} ======================== In order to explain the ideas behind the ABJ triality in [@Chang:2012kt], let us start from the original proposal by Klebanov and Polyakov [@Klebanov:2002ja], where the minimal Vasiliev theory on AdS$_4$ is dual to the 3d O$(N)$ vector model.
The Vasiliev theory includes totally symmetric tensor fields $ \varphi_{\mu_1 \ldots \mu_s}$, which transform under the gauge transformation as $$\begin{aligned} \varphi_{\mu_1 \ldots \mu_s} \sim \varphi_{\mu_1 \ldots \mu_s} + \partial_{( \mu_1} \xi_{\mu_2 \ldots \mu_s )} \, .\end{aligned}$$ Here $\xi_{\mu_1 \ldots \mu_{s-1}}$ are gauge parameters and the parentheses denote symmetrization of the indices. In the minimally truncated case, the theory includes a gauge field for each even spin $s=2,4,6,\ldots$. The dual theory is proposed to be the 3d O$(N)$ vector model, which consists of $N$ free bosons $h_i$ $(i=1,2,\ldots ,N)$ in the vector representation of the O$(N)$ global symmetry. We need to take the large $N$ limit to relate to the classical theory of higher spin fields. We also assign an O$(N)$ singlet condition to the operators, which may then be given by bilinears of $h_i$ such as $h_i h^i$. The operators dual to the higher spin gauge fields are then constructed as $$\begin{aligned} J_{\mu_1 \ldots \mu_s} = h_i \partial_{( \mu_1} \cdots \partial_{\mu_s )} h^i + \cdots\end{aligned}$$ by the action of derivatives. In [@Chang:2012kt] they extended this duality in order to relate it to superstring theory. There are several crucial points such as the introduction of supersymmetry, deformation from the free theory, and so on. Among them, here we would like to focus on the CP factor for the higher spin gauge fields. In the Vasiliev theory, it is not so difficult to extend the theory to include matrix valued fields. Field equations in the Vasiliev theory are given in terms of a non-commutative $*$-product. Now we require the fields to take $M \times M$ matrix values such as $ [\varphi_{\mu_1 \ldots \mu_s}]^\alpha_{~\beta} $ with $\alpha ,\beta = 1,2, \ldots, M$. After this replacement, we need to change the $*$-multiplication by including also the multiplication in the matrix algebra.
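Schematically, the change just described can be written as follows. The factorized notation is ours and is only meant to illustrate the structure; in the actual theory the dependence on the auxiliary variables and the matrix indices need not factorize.

```latex
% Fields valued in functions (of the auxiliary variables) tensored
% with M x M matrices, with the combined product
(f \otimes A) \ast (g \otimes B) \;=\; (f \ast g) \otimes (A B),
\qquad A, B \in \mathrm{Mat}_{M}(\mathbb C).
% The first factor carries the original non-commutative *-product,
% the second ordinary matrix multiplication. The combined product is
% again associative, so the field equations keep the same form.
```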
Even after the changes, we can use the same field equations since the $*$-product already has a non-abelian nature. A version of the 4d extended Vasiliev theory is proposed in [@Chang:2012kt] to be dual to the ABJ theory, which is a 3d $\text{U}(N) \times \text{U}(M)$ Chern-Simons matter theory. The theory includes bi-fundamental matter, i.e. fields $A_i^\alpha, B_\beta^j$ in the bi-fundamental representation of $\text{U}(N) \times \text{U}(M)$. We need to take a large $N$ limit to get the relation to the classical higher spin theory. As in the case without a CP factor, let us assign a U$(N)$ invariant condition. Then we can construct higher spin currents in terms of bi-linears of bi-fundamentals as $$\begin{aligned} [J_{\mu_1 \ldots \mu_s}]^\alpha_{~ \beta} = A_i^\alpha \partial_{( \mu_1} \cdots \partial_{\mu_s )} B^i_\beta + \cdots \, .\end{aligned}$$ In this way we can construct $M \times M$ matrix valued currents dual to $M \times M$ matrix valued higher spin fields. In order to have operators dual to string states, we also have to assign a U$(M)$ invariant condition. The U$(M)$ singlets are given by single trace operators of the form $\text{tr} \, ( ABAB \cdots AB ) $. Since the bi-linear $AB$ is supposed to correspond to a single-particle state in higher spin theory, this implies that a generic string state corresponds to a multi-particle state of higher spin fields in the singlet of the U$(M)$ symmetry for the CP factor. This is a lesson we obtained from the ABJ triality in [@Chang:2012kt], and we shall utilize it in order to construct a lower dimensional analogue. Higher spin AdS$_3$ holography with CP factor {#CP} ============================================= We now try to find an AdS$_3$ version of the ABJ triality. This should relate higher spin theory on AdS$_3$, a 2d CFT and superstring theory on AdS$_3 \times $M$_7$. Here M$_7$ represents a 7d manifold.
From the analysis of the ABJ triality, we should extend the 3d Vasiliev theory such that the theory includes $M \times M$ matrix valued fields, which was actually constructed in [@Prokushkin:1998bq]. First we propose a candidate for the 2d CFT dual to the 3d Vasiliev theory with CP factors, and check the proposal. Then we will discuss the relation to superstring theory. We consider the 3d Vasiliev theory with $M \times M$ matrix valued fields and also with $\mathcal{N}=2$ supersymmetry. The theory includes massive matter fields along with higher spin gauge fields. The gauge algebra is a supersymmetric higher spin algebra denoted by shs$_M [\lambda]$, and the masses of the matter fields are also parametrized by the same parameter $\lambda$. The proposal is that the dual theory is given by the following coset [@Creutzig:2013tja] $$\begin{aligned} \frac{\text{su}(N+M)_k \oplus \text{so}(2NM)_1}{\text{su}(N)_{k+M} \oplus \text{u}(1)_\kappa} \label{coset}\end{aligned}$$ with $\kappa = NM (N+M) (N+M+k)$. In order to relate to the classical higher spin theory, we take a large $N$ limit where $N,k\to \infty$ while we keep $M$ finite as well as the ’t Hooft parameter $$\begin{aligned} \lambda = \frac{N}{N + M + k} \, .\end{aligned}$$ This ’t Hooft parameter is identified with $\lambda$ appearing in the dual higher spin theory. Moreover, $M$ is set to be the same as the size of the CP factor. For $M=2$ our duality reduces to the one obtained independently in [@Gaberdiel:2013vva]. There are several results that support our conjecture. First of all, we can see that the proposal is a natural extension of the previously known duality without a CP factor. Indeed, the coset with $M=1$ reduces to the coset used in the duality of [@Creutzig:2011fe], which is an $\mathcal{N}=2$ supersymmetric extension of the original proposal in [@Gaberdiel:2010pz].
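As a quick consistency check of the last statement, one can specialize the 't Hooft parameter to $M=1$; the identification with the parameter of the duality without CP factor is our reading of the reduction and should be checked against [@Creutzig:2011fe].

```latex
% Setting M = 1 in lambda = N/(N + M + k) gives
\lambda\,\Big|_{M=1} \;=\; \frac{N}{N+k+1}\,,
% which is the 't Hooft parameter appropriate to the M = 1
% (Kazama-Suzuki type) duality, consistent with the statement that
% the coset reduces to that case at M = 1.
```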
Moreover, in the limit of large level $k \to \infty$ (or $\lambda \to 0$ in terms of the ’t Hooft parameter) the coset can be shown to reduce to a free system with bi-fundamentals. The group manifold SU$(N+M)$ may be described by an $(N + M) \times (N + M)$ matrix as $$\begin{aligned} \begin{pmatrix} A & B\\ C & D \end{pmatrix} \, .\end{aligned}$$ Ignoring the U$(1)$ factor, $A$ corresponds to the gauge factor SU$(N)$ in the denominator of the coset, while $D$ represents the SU$(M)$ symmetry, which can be shown to decouple in the limit. The other blocks $B,C$ transform in bi-fundamental representations under the $\text{SU}(N) \times \text{SU}(M)$ transformation. The limit of large level $k$ corresponds to the small curvature limit of the coset manifold, and the bi-fundamentals become free boson fields. Therefore, we can apply the arguments for the ABJ triality in section \[ABJ\]. From the bilinears of the bi-fundamentals, we can construct higher spin currents of $M \times M$ matrix form, and we can see that they are dual to the higher spin gauge fields with U$(M)$ CP factor at the parameter value $\lambda = 0$. Even with generic $M$ and $\lambda$, we have evidence for our conjecture. In [@Creutzig:2013tja] we have shown that the one-loop partition function of the higher spin theory can be reproduced by the ’t Hooft limit of the coset, see also [@Candu:2013fta]. For the higher spin gauge theory, the one-loop partition function can be written in terms of a one-loop determinant, and the explicit expression may be found in [@Creutzig:2013tja] and references therein. For the dual coset a state is labeled by $(\Lambda_{N+M};\Lambda_{N})$ in the ’t Hooft limit, where $\Lambda_L$ represents the highest weight for SU$(L)$. In order to determine the spectrum of the theory, we need to specify how to take pairs of holomorphic and anti-holomorphic parts.
Here we take the charge conjugation modular invariant $$\begin{aligned} \mathcal{H} = \bigoplus_{\Lambda_{N+M},\Lambda_N} \mathcal{H}_{(\Lambda_{N+M};\Lambda_N)} \otimes \bar{ \mathcal{H} }_{(\Lambda_{N+M};\Lambda_N)^*}\end{aligned}$$ where the charge conjugated states are paired. Using the methods developed in [@Gaberdiel:2011zw; @Candu:2012jq], we can show the match of the partition functions in the ’t Hooft limit once we assume the decoupling of so-called “light states.” We can also show that the symmetry algebra matches for the first few spins explicitly. We may now say that the duality between the 3d Vasiliev theory with U$(M)$ CP factor and the coset is fairly well established. Thus the next question would be how the duality relates to superstring theory. Let us first review the arguments by Gaberdiel and Gopakumar [@Gaberdiel:2013vva; @Gaberdiel:2014cha]. They focused on the case with $M=2$. In this case the coset model coincides with the Wolf space model, which is known to possess a large $\mathcal{N}=4$ superconformal symmetry [@Spindel:1988sr; @VanProeyen:1989me; @Sevrin:1989ce].[^4] With the large supersymmetry, we can identify the target space of the superstring theory involved as AdS$_3 \times$S$^3 \times$S$^3 \times$S$^1$. However, now the higher spin theory includes only $2 \times 2$ matrix valued fields, and it is not obvious how to see the relation to string states since the situation is quite different from that in [@Chang:2012kt]. Recently, they examined their conjecture more closely when the radius of one of the two $S^3$’s becomes very large [@Gaberdiel:2014cha]. In this case the dual CFT has a small $\mathcal{N}=4$ superconformal symmetry, and there is a considerable amount of literature on the duality between the 2d CFT and superstring theory. However, if we want to apply the picture obtained in [@Chang:2012kt], it is better to keep $M$ generic. Therefore, we utilize the coset with generic $M$.
As we saw in section \[ABJ\], we need to assign a U$(M)$ invariant condition to the CP factor of higher spin fields. Thus, instead of the coset, we consider the following coset [@Creutzig:2013tja; @Candu:2013fta] $$\begin{aligned} \frac{\text{su}(N+M)_k \oplus \text{so}(2NM)_1}{\text{su}(N)_{k+M} \oplus \text{su}(M)_{k+N} \oplus \text{u}(1)_\kappa} \, , \label{KScoset}\end{aligned}$$ which is a Kazama-Suzuki model with $\mathcal{N}=2$ superconformal symmetry [@Kazama:1988qp; @Kazama:1988uz]. The target space of the superstring theory dual to the coset is of the form AdS$_3 \times$M$_7$, but the $\mathcal{N}=2$ superconformal symmetry is not enough to determine M$_7$. Therefore, we cannot identify which superstring theory is involved in our triality. However, we noticed that the $\mathcal{N}=2$ supersymmetry of the coset is enhanced to $\mathcal{N}=3$ at the specific value of the level $k=N+M$ [@Creutzig:2014ula]. With the extended supersymmetry the candidates for M$_7$ are quite restricted, and it is expected that we can construct an AdS$_3$ version of the ABJ triality by making use of the critical level coset model. Relations to superstring theory {#N=3} =============================== Let us first look for a 3d Vasiliev theory with $\mathcal{N}= p > 2$ supersymmetry. It is known to be difficult to extend the 3d Vasiliev theory to have extended supersymmetry with generic parameter $\lambda$. However, for $\lambda=1/2$ the supersymmetry can be enhanced to a generic $\mathcal{N}= p > 2$ [@Prokushkin:1998bq; @Henneaux:2012ny]. At this value of the parameter, the matter fields become massless and conformally coupled to gravity, and we can consistently truncate the field content by half.
We consider the case with $\mathcal{N}=2n+1$ $(n =0,1,2,\ldots)$,[^5] whose supersymmetry algebra so$(2n+1|2)$ is generated by $$\begin{aligned} T_{\alpha \beta} = \{ y_\alpha , y_\beta \} \, , \quad Q^I_\alpha = y_\alpha \otimes \phi^I \, , \quad M^{IJ} = [\phi^I , \phi^J] \, .\end{aligned}$$ Here we have introduced two types of parameters $y_\alpha$ $(\alpha =1,2)$ and $\phi^I$ $(I=1,2,\ldots , 2n+1)$ with the properties $$\begin{aligned} [y_\alpha , y_\beta] = 2 i \epsilon_{\alpha \beta} \, , \quad \{ \phi^I , \phi^J \} = 2 \delta^{IJ} \, .\end{aligned}$$ The twistor variables $y_\alpha$ organize fields with higher spin in a neat way. Moreover, $\phi^I$ generate the Clifford algebra, which can be realized by $2^n \times 2^n$ matrices (for $n=1$, for instance, by the Pauli matrices $\phi^I = \sigma^I$). The fields now depend on $\phi^I$, or in other words they are associated with a U$(2^n)$ CP factor. The proposal here is that the higher spin theory with extended supersymmetry is dual to the coset with $M =2^{n-1}$ and a specific value of the level $k=N-M$ (after assuming some decoupling of free fermions) [@Creutzig:2014ula]. We assume the following non-standard Hilbert space as $$\begin{aligned} \mathcal{H} = \bigoplus_{\Lambda_{N+M}} \mathcal{H}_{\Lambda_{N+M}} \otimes \bar {\mathcal{H} }_{\Lambda_{N+M}^*} \, , \quad \mathcal{H}_{\Lambda_{N+M}} = \bigoplus_{\Lambda_N \in \Omega } (\Lambda_{N+M}; \Lambda_N) \, . \label{non-diag}\end{aligned}$$ At large $N$ the label $\Lambda_N$ can be represented by two Young diagrams $(\Lambda_N^l, \Lambda_N^r)$, and $\Omega$ means that the sum is taken over $\Lambda_N^l = (\Lambda_N^r)^t$. Here $t$ represents the transpose of the Young diagram. For $n=0$, the duality reduces to the one proposed in [@Beccaria:2013wqa]. Developing the techniques used in [@Beccaria:2013wqa], we can show that the partition function in the large $N$ limit (with the assumption of the decoupling of light states again) reproduces the one from the dual classical higher spin theory.
This means that the spectrum matches between the proposed dual theories. In order to understand the meaning of the Hilbert space in , we move to another expression by making use of the level-rank duality of [@Kazama:1988qp; @Naculich:1997ic]. We considered the coset with $k=N-M$, but a level-rank dual expression is given by the coset with $k=N+M$. We should remark that the number of decoupled fermions is changed; see [@Creutzig:2014ula] for a more detailed explanation. In the level-rank dual expression, an su$(N+M)$ factor with level $k=N+M$ appears in the numerator of the coset . A crucial point here is that the $\text{su}(N+M)_{N+M}$ factor has a realization in terms of free fermions in the adjoint representation of su$(N+M)$. With this realization, the Hilbert space is generated by the free fermions modulo the factors in the denominator of , and repeated fusions of the adjoint fermions yield states in representations of the form $\Lambda \in \Omega$, as discussed in [@Beccaria:2013wqa]. Going back to the original form before applying the level-rank duality, the Hilbert space becomes the one in . The symmetry generators of the critical level coset can be constructed from the free fermions. For example, spin-1 currents are given by fermion bilinears, while spin-3/2 currents are written in terms of products of three fermions. Explicit forms of these currents can be found in [@Creutzig:2014ula]. In this way we have found another type of duality between a 3d higher spin theory and a 2d coset-type model. Let us now try to identify the superstring theory related to these dual models (assuming it exists). From the lesson of section \[ABJ\], we have to deal with singlets with respect to the CP factor of the higher spin fields. Here we set $M$ to be a generic positive integer. Then, it is natural to think of the Kazama-Suzuki model with the critical level $k=N+M$.
The higher spin theory dual to the Kazama-Suzuki model includes fields of $2M \times 2M$ matrix form, but with a U$(M)$ invariance condition imposed; see also [@Candu:2013fta]. One of the main results obtained in [@Creutzig:2014ula] is that the critical level Kazama-Suzuki model has an $\mathcal{N}=3$ superconformal symmetry. From the dual conformal symmetry, the target space of the superstring theory is fixed to be of the form AdS$_3 \times$M$_7$, as mentioned above. For the cases with $\mathcal{N}=3$ superconformal symmetry, the known explicit examples are M$_7 =$(S$^3 \times$S$^3 \times$S$^1)/\mathbb{Z}_2$ in [@Yamaguchi:1999gb] and M$_7 =$SO$(3)/$U$(1)$ or SO$(5)$/SO$(3)$ in [@Argurio:2000tg]. The BPS spectrum and marginal deformations are studied in [@Argurio:2000xm] for the latter two models, and their results are consistent with those for our coset [@Creutzig:2014ula]. Therefore, we may conjecture that superstring theory on AdS$_3 \times$M$_7$ with M$_7 =$SO$(3)/$U$(1)$ or SO$(5)$/SO$(3)$ and our coset are dual to each other. In order to examine whether this conjecture is true, we need to investigate our proposed triality in further detail.

Conclusion
==========

In this note we have reviewed our work on a lower dimensional analogue of the ABJ triality of [@Chang:2012kt]. Extending the duality by Klebanov and Polyakov [@Klebanov:2002ja], the authors of [@Chang:2012kt] proposed a triality between 4d extended Vasiliev theory, superstring theory and the ABJ theory. We have explained why the extension of Vasiliev theory with a CP factor is important for seeing relations to superstring theory. Inspired by that work, we have extended the duality by Gaberdiel and Gopakumar [@Gaberdiel:2010pz] such that the 3d extended Vasiliev theory with U$(M)$ CP factor of [@Prokushkin:1998bq] is involved. Our conjecture in [@Creutzig:2013tja] is that the dual theory is given by the coset model in .
We gave several supporting arguments for the duality, for instance by showing the matching of one-loop partition functions. In order to see relations to superstring theory, we extended the duality to theories with more supersymmetry. In [@Creutzig:2014ula] we proposed a duality between the 3d Vasiliev theory with extended supersymmetry and the coset at a critical level. Based on this duality, and with the help of the $\mathcal{N}=3$ superconformal symmetry of the critical level model, we proposed that the Kazama-Suzuki model with the critical level $k=N+M$ is dual to a superstring theory. We have worked on a lower dimensional version since it is expected to allow us to study the triality in more detail than the original ABJ triality. Indeed, the 2d coset-type models in and can be solved exactly, in principle. Moreover, the gauge sector of the 3d Vasiliev theory is topological, and dynamical degrees of freedom exist only in the matter sector. However, at least in our case, the supersymmetry is not large enough to fix the dual superstring theory uniquely. We are currently working to make our conjecture more concrete, and we would like to report on our findings in the near future. Recently, we have started to understand the nature of marginal deformations of the 2d coset models. Higher spin gauge theory should correspond to the tensionless limit of superstring theory, so we need to deform the coset model to compare with superstring theory at a typical point of the moduli space. The higher spin symmetry is generically broken by the marginal deformation of the critical level Kazama-Suzuki model, and the mass of the higher spin fields generated through this breaking has been computed [@HR15; @CH15].

Ofer Aharony, Oren Bergman, and Daniel Louis Jafferis, *Fractional M2-branes*, JHEP **0811** (2008), 043.
Ofer Aharony, Oren Bergman, Daniel Louis Jafferis, and Juan Maldacena, *$\mathcal{N}=6$ superconformal Chern-Simons-matter theories, M2-branes and their gravity duals*, JHEP **0810** (2008), 091.

Riccardo Argurio, Amit Giveon, and Assaf Shomer, *Superstring theory on AdS$_3 \times G / H$ and boundary $\mathcal{N}=3$ superconformal symmetry*, JHEP **0004** (2000), 010.

———, *The spectrum of $\mathcal{N} = 3$ string theory on AdS$_3 \times G / H$*, JHEP **0012** (2000), 025.

Changhyun Ahn, *The large $N$ ’t Hooft limit of coset minimal models*, JHEP **1110** (2011), 125.

Matteo Beccaria, Constantin Candu, and Matthias R. Gaberdiel, *The large $\mathcal{N} = 4$ superconformal $W_{\infty}$ algebra*, JHEP **1406** (2014), 117.

Matteo Beccaria, Constantin Candu, Matthias R. Gaberdiel, and Michael Groher, *$\mathcal{N}=1$ extension of minimal model holography*, JHEP **1307** (2013), 174.

Andrea Campoleoni, Stefan Fredenhagen, and Stefan Pfenninger, *Asymptotic W-symmetries in three-dimensional higher-spin gauge theories*, JHEP **1109** (2011), 113.

Andrea Campoleoni, Stefan Fredenhagen, Stefan Pfenninger, and Stefan Theisen, *Asymptotic symmetries of three-dimensional gravity coupled to higher-spin fields*, JHEP **1011** (2010), 007.

Constantin Candu and Matthias R. Gaberdiel, *Supersymmetric holography on AdS$_3$*, JHEP **1309** (2013), 071.

Thomas Creutzig and Yasuaki Hikida, *Higgs phenomenon for higher spin fields on AdS$_3$*, arXiv:1506.04465.

Thomas Creutzig, Yasuaki Hikida, and Peter B. Rønne, *Higher spin AdS$_3$ supergravity and its dual CFT*, JHEP **1202** (2012), 109.

———, *Extended higher spin holography and Grassmannian models*, JHEP **1311** (2013), 038.

———, *$\mathcal{N}=1$ supersymmetric higher spin holography on AdS$_3$*, JHEP **1302** (2013), 019.

———, *Three point functions in higher spin AdS$_3$ supergravity*, JHEP **1301** (2013), 171.
———, *Higher spin AdS$_{3}$ holography with extended supersymmetry*, JHEP **1410** (2014), 163.

Chi-Ming Chang, Shiraz Minwalla, Tarun Sharma, and Xi Yin, *ABJ triality: From higher spin fields to strings*, J.Phys. **A46** (2013), 214009.

Constantin Candu, Cheng Peng, and Carl Vollenweider, *Extended supersymmetry in AdS$_{3}$ higher spin theories*, JHEP **1412** (2014), 113.

Constantin Candu and Carl Vollenweider, *On the coset duals of extended higher spin theories*, JHEP **1404** (2014), 145.

Matthias R. Gaberdiel and Rajesh Gopakumar, *An AdS$_3$ dual for minimal model CFTs*, Phys.Rev. **D83** (2011), 066007.

———, *Large $\mathcal{N}=4$ holography*, JHEP **1309** (2013), 036.

———, *Minimal model holography*, J.Phys. **A46** (2013), 214002.

———, *Higher spins & strings*, JHEP **1411** (2014), 044.

Matthias R. Gaberdiel, Rajesh Gopakumar, Thomas Hartman, and Suvrat Raju, *Partition functions of holographic minimal models*, JHEP **1108** (2011), 077.

Matthias R. Gaberdiel and Cheng Peng, *The symmetry of large $\mathcal N= 4$ holography*, JHEP **1405** (2014), 152.

Matthias R. Gaberdiel and Carl Vollenweider, *Minimal model holography for SO$(2N)$*, JHEP **1108** (2011), 104.

Simone Giombi and Xi Yin, *Higher spin gauge theory and holography: The three-point functions*, JHEP **1009** (2010), 115.

———, *Higher spins in AdS and twistorial holography*, JHEP **1104** (2011), 086.

Marc Henneaux, Gustavo Lucena Gómez, Jaesung Park, and Soo-Jong Rey, *Super-$W_\infty$ asymptotic symmetry of higher-spin AdS$_3$ supergravity*, JHEP **1206** (2012), 037.

Kentaro Hanaki and Cheng Peng, *Symmetries of holographic super-minimal models*, JHEP **1308** (2013), 030.

Marc Henneaux and Soo-Jong Rey, *Nonlinear $W_\infty$ as asymptotic symmetry of three-dimensional higher spin anti-de Sitter gravity*, JHEP **1012** (2010), 007.

Yasuaki Hikida and Peter B. Rønne, *Marginal deformations and the Higgs phenomenon in higher spin AdS$_3$ holography*, arXiv:1503.03870 \[hep-th\].

I.R. Klebanov and A.M. Polyakov, *AdS dual of the critical O$(N)$ vector model*, Phys.Lett. **B550** (2002), 213–219.

Yoichi Kazama and Hisao Suzuki, *Characterization of $\mathcal{N}=2$ superconformal models generated by coset space method*, Phys.Lett. **B216** (1989), 112.

———, *New $\mathcal{N}=2$ superconformal field theories and superstring compactification*, Nucl.Phys. **B321** (1989), 232.

Juan Maldacena and Alexander Zhiboedov, *Constraining conformal field theories with a higher spin symmetry*, J.Phys. **A46** (2013), 214011.

———, *Constraining conformal field theories with a slightly broken higher spin symmetry*, Class.Quant.Grav. **30** (2013), 104003.

Heidar Moradi and Konstantinos Zoubos, *Three-point functions in $\mathcal{N}=2$ higher-spin holography*, JHEP **1304** (2013), 018.

Stephen G. Naculich and Howard J. Schnitzer, *Superconformal coset equivalence from level rank duality*, Nucl.Phys. **B505** (1997), 727–748.

S.F. Prokushkin and Mikhail A. Vasiliev, *Higher spin gauge interactions for massive matter fields in 3-D AdS space-time*, Nucl.Phys. **B545** (1999), 385.

E. Sezgin and P. Sundell, *Massless higher spins and holography*, Nucl.Phys. **B644** (2002), 303–370.

P. Spindel, A. Sevrin, W. Troost, and Antoine Van Proeyen, *Extended supersymmetric sigma models on group manifolds. 1. The complex structures*, Nucl.Phys. **B308** (1988), 662.

Alexander Sevrin and Georgios Theodoridis, *$\mathcal{N}=4$ superconformal coset theories*, Nucl.Phys. **B332** (1990), 380.

M.A. Vasiliev, *Nonlinear equations for symmetric massless higher spin fields in (A)dS$_{(d)}$*, Phys.Lett. **B567** (2003), 139–151.

Antoine Van Proeyen, *Realizations of $\mathcal{N}=4$ superconformal algebras on Wolf spaces*, Class.Quant.Grav. **6** (1989), 1501.

S. Yamaguchi, Y. Ishimoto, and K. Sugiyama, *AdS$_3/$CFT$_2$ correspondence and space-time $\mathcal{N}=3$ superconformal algebra*, JHEP **9902** (1999), 026.

[^1]: The work of TC is supported by NSERC grant number RES0019997.

[^2]: The work of YH is supported by JSPS KAKENHI Grant Number 24740170.

[^3]: The work of PBR is funded by AFR grant 3971664 from Fonds National de la Recherche, Luxembourg, and partial support by the Internal Research Project GEOMQ11 (Martin Schlichenmaier), University of Luxembourg, is also acknowledged.

[^4]: It was already suggested in [@Henneaux:2012ny] to use the Wolf space model for the construction of a higher spin holography with extended supersymmetry.

[^5]: See [@Candu:2014yva] for the cases with $\mathcal{N}=2n$.
---
abstract: 'Dirac fermions in graphene can be subjected to non-abelian gauge fields by implementing certain modulations of the carbon site potentials. Artificial graphene, engineered with a lattice of CO molecules on top of the surface of Cu, offers an ideal arena to study their effects. In this work, we show by symmetry arguments how the underlying CO lattice must be deformed to obtain these gauge fields, and estimate their strength. We also discuss the fundamental differences between abelian and non-abelian gauge fields from the Dirac electrons'' point of view, and show how a constant (non-abelian) magnetic field gives rise to either a Landau level spectrum or a quadratic band touching, depending on the gauge field that realizes it (a feature of non-abelian gauge fields known as the Wu-Yang ambiguity). We finally present the characteristic signatures of these effects in the site-resolved density of states that can be directly measured in the current molecular graphene experiment, and discuss prospects to realize the interaction-induced broken-symmetry states of a quadratic touching in this system.'
author:
- Fernando de Juan
bibliography:
- 'nonabelian.bib'
title: 'Non-abelian gauge fields and quadratic band touchings in molecular graphene'
---

Introduction
============

Condensed matter systems that host Dirac fermions as their electronic excitations have drawn a lot of attention in recent years as they have become more and more experimentally accessible and controllable, with graphene[@CGP09] and topological insulators[@HK10] being the most prominent examples of such materials. A remarkable feature of Dirac fermions realized in graphene’s honeycomb lattice in particular is that one can further manipulate them externally by inducing controlled strains in the sample, which couple to them as an effective gauge potential[@VKG10].
This idea of strain engineering[@PC09] has led to many interesting predictions[@GKG10; @JCV11], and is most spectacularly illustrated by the Landau level spectrum recently observed[@LBM10] in scanning tunneling microscopy (STM). This system proved to be very versatile, and in search of even better tunability several proposals were conceived to make artificial versions of it[@PL09; @GSP09; @SGK11]. In a recent experimental breakthrough, a realization of this type of system, termed molecular graphene[@GMK12], was built, which allows for almost complete control of the electronic degrees of freedom within it. In this system, a triangular lattice of CO molecules is assembled on the surface of bulk Cu, confining the surface electrons to move in an effective hexagonal potential. In this way, effective Dirac fermions emerge at the $K$ points of the superlattice potential, which can then be probed directly with an STM. This system thus offers wide tunability to modify the electronic structure of the surface states by distorting the CO lattice in any desired way, or by adding new atoms to the existing structure. Indeed, several remarkable phenomena have already been demonstrated[@GMK12] beyond the strain-induced Landau levels, such as the opening of a gap by means of a Kekulé distortion or the creation of an n-p-n junction. Other interesting proposals such as the observation of fractional charge in a vortex [@HCM07; @B12] or the synthesis of a quantum spin Hall phase[@GGH12] should also be experimentally accessible. As noted in ref. , artificial graphene should also be ideal to explore the more recent prediction that a full $SU(2)$ non-abelian gauge field is in fact realizable in this system, and the strain-induced one is just one component of it.
Non-abelian gauge fields (of singular nature) were known to emerge in graphene due to disclinations in the lattice[@GGV93], but they can also be generated in a smooth fashion by modulating the on-site potential of the carbon atoms in a certain way. As we will discuss, the effects of non-abelian gauge fields can be very different from those of their abelian counterparts, and it is the purpose of this work to discuss how to adapt the molecular graphene experiment to probe these differences. In particular, we will show that a quadratic band touching can be generated with these fields, allowing a controlled simulation of this band structure, which is prone to many-body instabilities[@WAF10; @MEM11; @VJB12]. In general, effective external gauge fields acting on a fermion system may have non-abelian structure when the fermions have internal degrees of freedom; the gauge field is then a matrix $\vec A_{ab}$ acting on this degree of freedom, whose components need not commute. A typical condensed matter example is spin and the spin-orbit interaction, which can be modeled as an SU(2) gauge field [@FS93; @T08], but there are many more examples [@WZ84; @OBS05; @RJO05; @DGJ11]. A more recent one is bilayer graphene [@SGG12], where the two components of the SU(2) doublet correspond to the wave functions in the two layers, and the interlayer interaction plays the role of the gauge field. In the case of monolayer graphene, the SU(2) doublet is built from the valley degree of freedom [@GGR12].
The non-abelian field strength is defined in terms of the covariant derivative $D_i = \partial_i -iA_i$ as $F_{ij} = i[D_i,D_j] = \partial_i A_j - \partial_j A_i - i[A_i,A_j]$, which in two dimensions gives rise to a non-abelian magnetic field of the form $$B^{\alpha} = \vec \partial \times \vec A^{\alpha} + \epsilon^{\alpha \beta \gamma} \vec A^{\beta} \times \vec A^{\gamma},\label{magnetic}$$ where $\vec A_{ab} = \vec A^{\alpha} \Lambda^{\alpha}_{ab}$ with $\Lambda^{\alpha}_{ab}$ the generators of SU(2), repeated indices are summed, and $\alpha=x,y,z$ (the indices $ab$ will be implicit from now on). The last term in this expression arises because of the non-commutativity of the field components and makes non-abelian gauge fields fundamentally different from their abelian counterparts. In particular, it is responsible for a tricky feature of these gauge fields known as the Wu-Yang ambiguity [@WY75]: the fact that one may have physically distinct gauge fields (i.e. not gauge equivalent) with the same magnetic field. Indeed, consider these two simple examples[@FSW97]. The first (type I) is $\vec A^{(3)} = B/2 (-y,x)$, $\vec A^{(1)}=\vec A^{(2)}=0$, which we recognize as the analog of the symmetric gauge for a constant (abelian) magnetic field $B$, in this case in the $z$ direction. The second (type II) is $\vec A^{(1)} = \sqrt{B/2}(1,0)$, $\vec A^{(2)} = \sqrt{B/2}(0,1)$ and $\vec A^{(3)} = 0$; it also gives the constant field $B$, due to the second term in eq. (\[magnetic\]), and it is not gauge equivalent to type I. The magnetic field alone is therefore not enough to distinguish these two cases[^1], but we will see that the spectrum obtained in each case is very different, and this is the physics that, as we will show, can be probed directly in the molecular graphene experiment.

| Rep. | Symm. adapted | $\sigma_i \otimes \tau_j$ | Symm. adapted | $\sigma_i \otimes \tau_j$ |
|------|---------------|---------------------------|---------------|---------------------------|
| $A_1$ | $\mathcal{I}$ | $\mathcal{I}$ | $\Lambda_x\Sigma_z$ | $\sigma_x \tau_x$ |
| $B_1$ | $\Lambda_z$ | $\tau_z$ | $\Lambda_y\Sigma_z$ | $\sigma_x \tau_y$ |
| $A_2$ | $\Sigma_z$ | $\sigma_z\tau_z$ | $\Lambda_x$ | $-\sigma_y\tau_y$ |
| $B_2$ | $\Sigma_z\Lambda_z$ | $\sigma_z$ | $\Lambda_y$ | $\sigma_y\tau_x$ |
| $E_{1x}$ | $\Sigma_x$ | $\sigma_x\tau_z$ | $\Lambda_x\Sigma_y$ | $-\tau_y$ |
| $E_{1y}$ | $\Sigma_y$ | $\sigma_y$ | $-\Lambda_x\Sigma_x$ | $\sigma_z\tau_x$ |
| $E_{2x}$ | $-\Lambda_z \Sigma_y$ | $-\sigma_y\tau_z$ | $\Lambda_y\Sigma_x$ | $-\sigma_z\tau_y$ |
| $E_{2y}$ | $\Lambda_z\Sigma_x$ | $\sigma_x$ | $\Lambda_y\Sigma_y$ | $\tau_x$ |

: Classification of basis matrices in the low energy theory around the $K$,$K'$ points in graphene according to the representations of the symmetry group $C_{6v}$, and their explicit realization in the basis $(\psi_{AK},\psi_{BK},\psi_{AK'},\psi_{BK'})$ (see ref. for details).[]{data-label="tab"}

Symmetry analysis and microscopic calculation
=============================================

To realize an SU(2) gauge field in graphene, we need to apply certain on-site potential patterns[@GGR12] externally to the carbon atoms of the honeycomb lattice. It is not a priori clear, however, how this may be achieved in a molecular graphene experiment, where the “effective honeycomb lattice” is engineered with the potential landscape induced by a triangular array of CO molecules. In terms of an effective tight binding model, it is natural to think that small distortions of this triangular lattice will produce potential changes at the effective carbon sites, but which distortions will give rise to the correct potentials? And more importantly, since these distortions may induce changes in the effective hopping as well[@GMK12], is it possible to modulate *only* the on-site potential? To answer these questions, a symmetry approach to the problem appears better suited.
The way that external perturbations couple to the low energy degrees of freedom around a high-symmetry point of the Brillouin zone can be determined just by symmetry arguments. This approach has been fruitfully employed in graphene to discuss the coupling of phonons, strains, or electromagnetic fields [@M07; @B08; @WZ10; @L12], and we now show how it can be used to see the emergence of non-abelian gauge fields from small CO displacements in molecular graphene. In the half-filled honeycomb lattice, electrons close to the Fermi surface live near the $K$ and $K'$ points, and are described by an effective spinor $(\psi_{AK},\psi_{BK},\psi_{AK'},\psi_{BK'})$, where $A/B$ denotes the sublattice degree of freedom. The effective Hamiltonian is conventionally written in the basis of the Pauli matrices $\sigma_i \otimes \tau_j$, where $\sigma_i$ acts on the sublattice and $\tau_i$ on the valley degrees of freedom, and $i=x,y,z$ (the identity in both sets is understood to be included as part of the basis). To exploit the fact that the Hamiltonian must be a scalar under the symmetry group $C_{6v}$ of the honeycomb lattice, one can relabel these basis matrices in terms of a new symmetry-adapted set $\Sigma_i$ and $\Lambda_i$ with the Pauli matrix algebra and well-defined transformation properties under this group (technically, the group is $C_{6v}''$ because the unit cell has been tripled to consider $K$ and $K'$ at the same time; we will refer to the labels under $C_{6v}$ for simplicity, see ref. for details). The relation of these matrices to the original ones and the representations according to which they transform are reproduced in table \[tab\]. In this basis, the low energy Hamiltonian is simply written as ($v_F=1$) $$H = \vec \Sigma \cdot \vec k,$$ and in this form it is simple to see that the matrices $\Lambda_i$ commute with the Hamiltonian and generate an SU(2) symmetry, which corresponds to rotations in the valley degree of freedom.
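The algebra encoded in table \[tab\] can be checked directly from the explicit $4\times 4$ realizations listed there. A minimal numerical sketch (the tensor-product ordering, with $\tau$ acting on the outer valley index, is a convention chosen here; the commutation relations do not depend on it):

```python
import numpy as np

# Pauli matrices; a product sigma_i tau_j is realized here as
# kron(tau_j, sigma_i): tau acts on the valley (outer) index and
# sigma on the sublattice (inner) index of (A K, B K, A K', B K').
s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def st(i, j):                      # sigma_i (x) tau_j
    return np.kron(s[j], s[i])

# Symmetry-adapted sets, read off from table [tab]
Sigma = [st(1, 3), st(2, 0), st(3, 3)]   # Sigma_x = sigma_x tau_z, ...
Lam = [-st(2, 2), st(2, 1), st(0, 3)]    # Lambda_x = -sigma_y tau_y, ...

comm = lambda a, b: a @ b - b @ a

# Both sets satisfy the Pauli (su(2)) algebra [X_a, X_b] = 2i eps_abc X_c ...
for X in (Sigma, Lam):
    for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        assert np.allclose(comm(X[a], X[b]), 2j * X[c])

# ... and the two sets commute with each other, so the Lambda_i commute
# with H = Sigma . k for any momentum: the SU(2) valley symmetry of the text
for S in Sigma:
    for L in Lam:
        assert np.allclose(comm(S, L), 0)

kx, ky = 0.3, -0.7                 # arbitrary test momentum (v_F = 1)
H = kx * Sigma[0] + ky * Sigma[1]
assert all(np.allclose(comm(H, L), 0) for L in Lam)
# Dirac spectrum +/- |k|, each eigenvalue doubly degenerate in the valley index
assert np.allclose(np.sort(np.linalg.eigvalsh(H)),
                   [-np.hypot(kx, ky)] * 2 + [np.hypot(kx, ky)] * 2)
print("table [tab] algebra verified")
```

The double degeneracy of the Dirac spectrum found at the end is exactly the valley degeneracy rotated by the $\Lambda_i$.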
A gauge field is by definition a field that couples minimally in the form $k_i \rightarrow k_i + A_i$, and in analogy with the usual electromagnetic field that couples as $H_{U(1)} = \vec \Sigma \cdot \vec A$, one may introduce an SU(2) gauge field that couples as $$\begin{aligned} H_{SU(2)} &= \vec \Sigma \cdot \left( \Lambda_x \vec A^{(x)} + \Lambda_y \vec A^{(y)} + \Lambda_z \vec A^{(z)}\right),\end{aligned}$$ which is a coupling allowed by symmetry if the gauge fields $\vec A^{\alpha}$ have their origin in a microscopic perturbation with the same symmetry as the matrix that accompanies them.

![(Color online) The four possible CO displacements with symmetries $E_1$ and $E_2$. The CO molecules are represented in red, and the effective honeycomb lattice is shown in black. The unit cell is shaded in gray, but more hexagons are shown to make the symmetry of the modes apparent. The on-site potentials that match the symmetry labels are also shown at the effective carbon sites. Note that the prefactors only refer to the potentials, not to the displacements. One may think of these displacements as the $K$-point phonons of the triangular CO lattice.[]{data-label="phonon"}](fig1.jpg){width="8.7cm"}

The power of the symmetry analysis is thus that one can now say what type of perturbation corresponds to each term simply by inspection of table \[tab\]. Perturbations in the first column have the periodicity of the unit cell, while those in the second column have the periodicity of a tripled unit cell (because of intervalley mixing). Moreover, within nearest neighbour tight binding (TB), perturbations diagonal in sublattice ($\propto \sigma_0$ or $\sigma_z$) correspond to potential modulations, while off-diagonal ones correspond to hopping modulations. With this criterion, the gauge field $\vec A^{(z)}$ is readily identified as the usual strain-induced gauge field.
The gauge field components $\vec A^{(x)}$ and $\vec A^{(y)}$ correspond, respectively, to the valley-mixing $E_1$ and $E_2$ potential perturbations defined in ref. (which are labeled $G'$ under $C_{6v}''$). Their corresponding potentials are depicted in fig. \[phonon\] at the effective carbon sites.

![(Color online) With the same conventions as fig. 1, combinations of CO displacements that produce a quadratic band touching. Again, note that the prefactors only refer to the potentials, not to the displacements.[]{data-label="cuad"}](fig2.jpg){width="8.7cm"}

In real graphene this type of potential perturbation is the one induced by phonons like the LO/LA phonon at the $K$ point [@Falko], or the ZO/ZA phonon at the $K$ point in the presence of a perpendicular electric field, and it is known that it can also be produced by a particular substrate [@Pankratov]. For molecular graphene, this analysis immediately allows us to find the CO displacements that will induce these potential modulations. These should be displacements with a tripled unit cell and the appropriate symmetry labels, and in fact they may be simply interpreted as the $E_1$ and $E_2$ phonons of the triangular CO lattice at the $K$ point. These displacements are, for three consecutive CO molecules, $$\begin{aligned} \nonumber \vec r_{CO,E_{1x}} &= \left\{ (1,0) \; , \; (-\tfrac{1}{2},0) \; , \; (-\tfrac{1}{2},0) \right\}, \\ \nonumber \vec r_{CO,E_{1y}} &= \left\{ (0,1) \; , \; (0,-\tfrac{1}{2})\; , \; (0,-\tfrac{1}{2})\right\}, \end{aligned}$$ $$\begin{aligned} \nonumber \vec r_{CO,E_{2x}} &= \left\{ (0,0) \; , \; (0,-\tfrac{\sqrt{3}}{2}) \; , \;(0,\tfrac{\sqrt{3}}{2}) \right\} ,\\ \nonumber \vec r_{CO,E_{2y}} &= \left\{ (0,0) \; , \;(\tfrac{\sqrt{3}}{2},0)\; , \; (-\tfrac{\sqrt{3}}{2},0)\right\} ,\end{aligned}$$ and are also shown in fig. \[phonon\].
Indeed, within a TB model one can parametrize the change in on-site potential with displacement as $$V_i = V' \sum_j \Delta \vec r_{j,CO} \cdot \vec \delta_{ij}, \label{pot}$$ for a carbon site $i$ with CO neighbours $j$ at equilibrium relative positions $\vec \delta_{ij}$, and verify that the potentials shown in fig. \[phonon\] are given by eq. (\[pot\]). The constant $V' \equiv \partial V /\partial a$ parametrizes the change in on-site potential with distance, and may be estimated by realizing that this physical mechanism is responsible for the scalar potential $\phi$ in the continuum Dirac equation. A comparison with the p-n junction experiment yields $V' \approx 22$ $\text{meV}/\text{\AA}$ (see appendix), which is very similar to $\partial t /\partial a = \beta t/a \approx 20$ $\text{meV}/\text{\AA}$. This is also consistent with the fact that in real graphene the analog of $V'$ for carbon displacements[@Falko] is of the same order as $\partial t /\partial a$. Finally, the symmetry analysis also reveals that, close to the Dirac point, the desired CO displacements do not introduce any change in the effective theory other than the $\vec A^{(x)},\vec A^{(y)}$ gauge fields. In particular, while these displacements may induce nearest neighbour hopping changes, these cannot appear in the low energy theory because there are no intervalley matrices in the $E_1$ or $E_2$ representations that are sublattice off-diagonal. These hopping changes thus have no effect on the low energy properties, and we will not consider them in what follows. Changes in the next nearest neighbour hopping $t'$ due to these displacements are small and need not be considered. To obtain the gauge field from a microscopic calculation, one may substitute eq. (\[pot\]) in the effective tight binding model $$H = -t\sum_{\left<i,j\right>} c^{\dagger}_i c_j -t'\sum_{\left<\left<i,j\right>\right>} c^{\dagger}_i c_j + \sum_i V_i c^{\dagger}_i c_i. \label{TBH}$$ The potential modulation (depicted in fig.
\[phonon\]) that gives rise to the non-abelian gauge fields is [@GGR12] $$\begin{aligned} V(\vec x) = \frac{3}{2} V' \left[ u_{E_{2y}} \cos \vec K \vec x + u_{E_{1x}} \sin \vec K \vec x \right. \\ +\left. \frac{2}{\sqrt{3}} \sin \vec G \vec x \left(u_{E_{1y}} \cos \vec K \vec x+u_{E_{2x}} \sin \vec K \vec x \right) \right], \nonumber\end{aligned}$$ with $\vec K=(4\pi/3\sqrt{3},0)$ a vector joining the two Dirac points and $\vec G=(0,-4\pi/3)$ a reciprocal lattice vector. To project this perturbation onto the Dirac points, one performs the sum $$H = \sum_i V_i c^{\dagger}_i c_i = \sum_{\vec x} V(\vec x) c^{A,\dagger}_{\vec x} c^A_{\vec x} + V(\vec x + \vec \delta_1) c^{B,\dagger}_{\vec x} c^B_{\vec x},$$ with $\vec x = n \vec a_1 + m\vec a_2$ the lattice positions, $\vec \delta_1=a(0,1)$ a nearest neighbour vector, and $$\begin{aligned} c^A_x = e^{i \vec K \vec x} c^A_K + e^{-i \vec K \vec x}c^A_{K'}, \\ c^B_x = e^{i \vec K \vec x} c^B_K + e^{-i \vec K \vec x}c^B_{K'}.\end{aligned}$$ This sum gives exactly the matrices dictated by symmetry, $$H = \frac{3}{4} V' (-\tau_2 u_{E_{1x}} + \tau_1 \sigma_3 u_{E_{1y}}-\tau_2 \sigma_3 u_{E_{2x}} + \tau_1 u_{E_{2y}}),$$ so that the final formula relating the effective gauge fields to CO displacements is $$\begin{aligned} \vec A^{(1)} &= \frac{3}{4} V' (-u_{E_{1y}},u_{E_{1x}}), \\ \vec A^{(2)} &= \frac{3}{4} V' (u_{E_{2x}},u_{E_{2y}}). \label{dispcorresp}\end{aligned}$$

Physical effects
================

![(Color online) Total density of states for any type II gauge field of strength $u=0.5 \text{\AA}$ (red line) and $u=1\text{\AA}$ (black line), for $t=90$ meV and $t'=0$. The unperturbed LDOS is shown as a dashed blue line for comparison. Inset: Band structure of the system for $u=1 \text{\AA}$.
Note the similarity with bilayer graphene.[]{data-label="total"}](fig3.jpg){width="7cm"}

As described in the introduction, we now consider two gauge field configurations that are not related by a gauge transformation but whose magnetic field is the same, and ask how they would show up in a local density of states (LDOS) measurement. Consider the type I gauge field, with a magnetic field pointing in a general direction $b^{\alpha}$ in $SU(2)$ space, $A_i^{\alpha} = b^{\alpha} B/2(y,-x)$. When $b^{\alpha}=(0,0,1)$ we have the usual strain-induced gauge field. The case $b^{\alpha}=(0,1,0)$ was discussed in ref. . In general, by a constant $SU(2)$ rotation of the Hamiltonian, it is not difficult to see that for any $b^{\alpha}$ the spectrum is still given by Landau levels $E_n = \sqrt{2Bn}$. The only difference appears in the wavefunctions, because the sublattice polarization turns out to be given by the projection of $b^{\alpha}$ onto the $z$ axis. For strain-induced fields it is maximal, but for potential-induced ones the density of states is in fact constant across the unit cell. One can estimate the magnetic field induced in the molecular graphene experiment with these gauge fields as follows. Take $u_{E_{1x}} = (u_{max}/L)\, x$, with $L$ the radius of the (approximately circular) sample and $u_{max}$ the maximum displacement (at $x=L$). The magnetic field is (recovering all units) $$B^{(x)} = \frac{\hbar/e}{\hbar v_F} \frac{3V'}{4} \frac{u_{max}}{L}.$$ With $\hbar/e = 6.5 \cdot 10^4\ \text{T}\text{\AA}^2$ and taking $u_{max} = 0.1 a$, $\sqrt{3}a/L \approx 1/10$ and a Fermi velocity[@GMK12] $\hbar v_F \approx 1.5\ \text{eV}\text{\AA}$, we obtain $B \approx 3.75$ T, which is not very large compared to the strain-induced fields typically achieved. ![(color online) LDOS as a function of energy for two different gauge fields. Top left: $E_{1x}-E_{2x}$, with strength $u=1$ $\text{\AA}$, $t=90$ meV and $t'=0$. Top right: $E_{1y}-E_{2y}$, same parameters.
Bottom plots are the same but with $t'=0.18t$ and a Lorentzian broadening of $\Sigma =0.2 t$. The insets show the corresponding on-site potential and the color code for the different sites within the unit cell. Note that the missing lines in the plots overlap with the line shown that has the same on-site potential.[]{data-label="site"}](fig4a.jpg "fig:"){width="4.25cm"} ![](fig4b.jpg "fig:"){width="4.25cm"} ![](fig4c.jpg "fig:"){width="4.25cm"} ![](fig4d.jpg "fig:"){width="4.25cm"} The type II gauge field has better prospects of being experimentally accessible. Keeping $\vec A^{(3)}=0$, there are in fact four possible choices of constant gauge fields that give a constant magnetic field, given by $$\begin{aligned} \vec A^{(1)} = \sqrt{B/2}(1,0) & &\vec A^{(2)} = \sqrt{B/2} (0,\pm1), \end{aligned}$$ which, by eq. (\[dispcorresp\]), are produced with the displacements $E_{1y}\pm E_{2y}$, and $$\begin{aligned} \vec A^{(1)} = \sqrt{B/2}(0,1) & &\vec A^{(2)} = \sqrt{B/2}(\pm1,0),\end{aligned}$$ which are produced with the displacements $E_{1x}\pm E_{2x}$. These displacements and their on-site potentials are depicted in fig. \[cuad\]. The magnetic field is given by $B = (9/8)(V' u/v_F)^2$, with $u= u_{E_{1i}} = \pm u_{E_{2i}}$ the modulus of the displacements in fig. \[cuad\]. Interestingly, the estimate for $B$ in this case for $u= 0.1a$ is $B \approx 16$ T. The Dirac Hamiltonian in the presence of these gauge fields is formally analogous to that of bilayer graphene (for a single valley), with the role of the layer played by the valley here[^2], and an effective interlayer coupling $\gamma = \sqrt{2B}$. The spectrum of these Hamiltonians is well known to be a quadratic band touching, with two extra parabolic bands at higher energies. Considering first the case $t'=0$, the density of states (DOS) of this system is finite at the touching point $E_D=0$, and has a jump at $\pm v_F\sqrt{2B}= (3/2)V' u$, as depicted in fig. \[total\]. For a displacement $u = 0.1a = 1\text{\AA}$, the kink in the LDOS should appear at $\pm \text{30 meV}$, which should be easily observable. The precise location of this jump should serve as an independent estimate of the parameter $V'$. The main effect of a finite $t'$ is to shift $E_D$ to a higher value, as we will see below.
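Both estimates, and the bilayer analogy, can be checked with a short script. This is only a sketch: it uses the Appendix estimate $V' \approx 22$ meV/Å (so the kink lands at about 33 meV rather than exactly 30 meV), and the $4\times 4$ block below is the standard single-valley bilayer-graphene form with coupling $\gamma$, used here as a stand-in for the gauge-field Hamiltonian rather than the paper's full tight-binding calculation.

```python
import numpy as np

# --- numerical cross-check of the type II estimates in the text ---
hbar_over_e = 6.5e4            # T * AA^2
hbar_vF = 1.5                  # eV * AA
Vp = 0.022                     # eV / AA  (V' ~ 22 meV/AA, Appendix estimate)
u = 1.0                        # AA       (u = 0.1 a)

B = hbar_over_e * (9 / 8) * (Vp * u / hbar_vF) ** 2   # ~16 T
kink = 1.5 * Vp * u * 1e3                             # (3/2) V' u, in meV
print(f"B ~ {B:.0f} T, LDOS kink at ~ +/- {kink:.0f} meV")

# --- bilayer-graphene-like 4x4 block, "interlayer" coupling gamma = sqrt(2B) ---
# (dimensionless units with hbar v_F = 1; gamma chosen arbitrarily for illustration)
v, gamma = 1.0, 0.5

def H(kx, ky):
    km, kp = kx - 1j * ky, kx + 1j * ky
    return np.array([[0,      v * km, 0,      0],
                     [v * kp, 0,      gamma,  0],
                     [0,      gamma,  0,      v * km],
                     [0,      0,      v * kp, 0]], dtype=complex)

E0 = np.linalg.eigvalsh(H(0.0, 0.0))
print(np.round(E0, 6))         # two zero modes, two bands split off at +/- gamma

# the touching is quadratic: E ~ v^2 k^2 / gamma for v k << gamma
k = 1e-2
Elow = np.sort(np.abs(np.linalg.eigvalsh(H(k, 0.0))))[0]
print(Elow / (v ** 2 * k ** 2 / gamma))   # close to 1
```

At $k=0$ the script finds two degenerate zero modes and two bands at $\pm\gamma$, and the low band grows as $v^2k^2/\gamma$, i.e. the quadratic band touching described above.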
Moreover, this type of gauge field shows a more complicated local density of states across the enlarged unit cell. In fig. \[site\] we show the LDOS for the cases $E_{1x}-E_{2x}$ and $E_{1y}-E_{2y}$. The other two combinations are obtained by mirror symmetry. For $t'=0$, we observe different local gaps for different sites, and a finite LDOS at $E=0$. For a more faithful comparison with the experiment, we have also plotted the LDOS for $t'=0.18t$ with a Lorentzian broadening of $\Sigma=0.2t$. The main effects we observe are a shift in $E_D$ and some electron-hole asymmetry, but the main features that characterize the non-abelian gauge field remain. The identification of these features in an STM measurement would represent a demonstration of the presence of the type II constant non-abelian gauge field. Discussion ========== In this work we have shown, by means of a symmetry analysis, how non-abelian gauge fields may be implemented in molecular graphene, and what their experimental signatures should be in the LDOS. For type I gauge fields of constant magnetic field, we have shown that, because of the different microscopic origin of the gauge fields $A^{(3)}$ (hopping change) and $A^{(1,2)}$ (potential change), the magnetic field obtained in the second case is comparatively smaller. While this may make the Landau level spectrum more difficult to observe, the presence of this type of field could still be readily detected, for example, in a quantum interference experiment in the weak field limit [@JCV11]. We have also shown that type II constant non-abelian gauge fields generate a quadratic band touching analogous to that of bilayer graphene. Because of the enhanced DOS at the Fermi level, the electron-electron interaction is known to drive this system to a broken-symmetry state whose precise characteristics are still controversial [@WAF10; @MEM11; @VJB12].
In the current molecular graphene experiment, the Coulomb interaction is screened by the metallic bulk, leaving residual Hubbard interactions estimated to be $U \sim 0.5t \sim 50$ meV (see Supplementary Material of ref. ). While an ideal quadratic touching is unstable to infinitesimal short-range interactions, the current broadening due to bulk tunneling ($\sim 0.2t$) is perhaps too large and may hinder the observation of the interaction-induced transition. Both bulk tunneling and screening could be reduced by performing future experiments in bulk insulators with metallic surfaces (such as the recently discovered topological insulators[@HK10]), which may eventually allow one to study the fate of the many-body state with a tunable analog of the interlayer hopping. Incidentally, it is also interesting to note that this instability can be interpreted as a non-abelian magnetic catalysis, where an infinitesimal field drives chiral symmetry breaking [@GHS98]. Furthermore, the controlled simulation of these non-abelian gauge fields, when made position dependent, may be used to study the generation of zero-energy flat bands, such as those observed in the twisted bilayer system [@SGG12], or the physics of topological defects in the gauge field [@GGR12]. In summary, the molecular graphene experiment has great potential to observe many interesting phenomena related to non-abelian gauge fields with unprecedented tunability, phenomena which, as we have shown, should be realizable in the current experimental samples. Acknowledgements ================ I would like to thank D. Rastawiki, V. Juricic, H. Ochoa, A. G. Grushin and H. Manoharan for useful discussions. Funding from the “Programa Nacional de Movilidad de Recursos Humanos” (Spanish MECD) is acknowledged. Appendix ======== Estimate of V’ -------------- The parameter $V'$ describes the change of on-site potential due to the displacements of neighbouring $CO$ molecules.
As such, it features both in the non-abelian gauge fields (which come from “optical” displacements) and in the strain-induced scalar potential $\phi$ (which comes from “acoustical” displacements). The scalar potential $\phi$ also has a contribution from the NNN hopping change $\partial t' /\partial a$, but it is much smaller and will be neglected. To see this, consider an isotropic expansion of the $CO$ lattice. For every carbon site $i$, the induced potential is given by eq. (\[pot\]). Because the displacement is smooth, we may write $$\begin{aligned} V_{\vec x} =& V' \sum_m \delta_m^i r^i_{x+\delta_m,CO} \approx V' \sum_m \delta_m^i \frac{\delta_m^j \partial^j r^i_{x,CO}}{a} \nonumber \\ =& \frac{3a}{2} V' (u_{xx}+u_{yy}),\end{aligned}$$ and plugging directly into the TB Hamiltonian eq. (\[TBH\]), we obtain $\phi = 3V'a/2(u_{xx} + u_{yy})$. Now consider the p-n-p junction experiment in ref. . The middle region is strained from $d=17.8 \; \text{\AA}$ to $d=20.4 \; \text{\AA}$, so $u_{xx} = u_{yy} = 0.14$. The change in scalar potential $\Delta \phi$ is 95 meV, so we obtain ($d= \sqrt{3} a$) $$V' = \frac{95 \; \text{meV}}{0.14 \sqrt{3} \; 17.8 \; \text{\AA}} = 22 \; \text{meV}/\text{\AA}.$$ A different estimate can be obtained from the nearly free electron model considered in ref. (supp. mat.), where the scalar potential is $$H = \frac{8 \pi^2}{9 d^2 m} (u_{xx}+u_{yy}) = \frac{3a}{2} V' (u_{xx}+u_{yy}),$$ which gives $V' = 24 \; \text{meV}/\text{\AA}$. Symmetry of hopping perturbations --------------------------------- There are 9 independent hoppings in the tripled unit cell, which can be decomposed into combinations that have well-defined transformation properties under the symmetries of the lattice. The 9 combinations and their symmetry labels are shown in fig. \[9hop\]. In the first row, one may identify the constant hopping ($A_1$), the $E_2$ pattern that gives rise to the usual gauge field, and the Kekulé distortions (any of the three domains can be obtained from these).
The four combinations in the second row are $E_1$ and $E_2$ (and form the representation $G'$ when the enlarged group $C_{6v}''$ is considered). These hopping patterns are produced by the same $CO$ displacements that give the non-abelian gauge fields through charge modulation. In the main part of the text we claimed that these hopping distortions cannot couple to the low-energy theory around the $K$ point. The reason is simply that there is no valley off-diagonal $E_1$ or $E_2$ matrix in the low-energy theory whose microscopic origin is a hopping change. This can be seen directly by inspection of table \[tab\], where the valley-mixing $E_1$ and $E_2$ matrices are all diagonal in sublattice. The hopping modulations will only appear in the low-energy theory if terms of higher order in momentum are considered. If one is interested in the whole band structure and not just low energies, these distortions in the hopping should be included by changing the NN hopping in the usual manner. ![(Color online) The 9 independent hopping patterns and their symmetry labels. Blue means positive, red negative, and black lines represent no change in the hopping. Hopping modulations of the corresponding symmetry may also be induced by the $CO$ displacements. However, these particular patterns have no effect in the low-energy theory: they do not affect the quadratic touching or the LDOS predictions around $E=E_D$.[]{data-label="9hop"}](fig5.jpg){width="9cm"} [^1]: The resolution to this apparent paradox is simply that for non-abelian fields there are further independent gauge-invariant quantities, such as $D_iF_{jk}$, that distinguish among them. [^2]: Or to the Dirac Hamiltonian in the presence of Rashba spin-orbit coupling. Not surprisingly, these Hamiltonians are described in terms of SU(2) gauge fields too.